Software measurement: how to measure the productivity of software projects

Software must be functional, modern, and easy to use. Development projects are an investment with a strong economic component, defined by the relationship between expenditure and income. Productivity is therefore inseparable from this concept. But can you measure productivity in an activity with a creative component, such as software development?

In this article on software measurement, we deal with the productivity of software development. In times of scarce resources, limited budgets, and global competition, this economically driven question gains new significance. First of all, it must be clarified what is meant by productivity here. How can the industrial concept of productivity be transferred to the world of software development? And can it be transferred at all?

Two extreme positions face each other: the employer or client would like to determine productivity exactly, based on defined parameters, so that work performance becomes as comparable and predictable as possible across projects and people. The other extreme is the claim that highly creative activities are under no circumstances amenable to measurement and comparison. As is so often the case, the truth probably lies somewhere in the middle.

Types of software productivity

There are five core measures of software measurement: quantity, quality, time, effort, and productivity. In addition to processes related to software development, the productivity of maintenance, migration, and integration processes is also of interest. Productivity can be defined as follows: productivity = quantity / effort.
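As a minimal sketch, the formula can be applied per phase. All phase names, quantities (here taken as function points), and effort figures below are illustrative assumptions, not real project data:

```python
# Sketch of the formula: productivity = quantity / effort.
# Quantities (function points) and efforts (person-days) are
# illustrative assumptions, not measurements from a real project.

phases = {
    "analysis":    {"quantity": 120, "effort": 40},
    "design":      {"quantity": 120, "effort": 30},
    "programming": {"quantity": 120, "effort": 60},
    "test":        {"quantity": 120, "effort": 50},
}

def productivity(quantity: float, effort: float) -> float:
    """Productivity = quantity / effort (units per person-day)."""
    return quantity / effort

for name, p in phases.items():
    rate = productivity(p["quantity"], p["effort"])
    print(f"{name:12s} {rate:.2f} units/person-day")
```

Because quantity and effort are measured per phase, the same formula yields a separate productivity figure for analysis, design, programming, and test.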

The definitions outlined above have shown that productivity is measured, and thus also defined, differently. Since the development process itself is divided into different phases (analysis, design, programming, test, ...), productivity can be measured and determined separately for each of these phases.

The following basic rules exist:

1. The greater the effort required in a phase to achieve its result (sub-goal), the greater the impact of high productivity in that phase. Productivity gains in effort-intensive phases therefore have a particularly strong effect on the overall project.

2. Productivity always refers to a certain level of product quality. Early development phases already play a decisive role in the quality, and thus in the success, of a project. It must be clear that an increase in productivity at the expense of quality will not be accepted. A careful and precise analysis still has its place, even in times of agile project implementation. Saving time here and collecting incomplete requirements will lead to problems later in the project. A short-term increase in productivity is usually more than offset by the need for subsequent improvements, additions, and corrections. In this case, one also speaks of taking on technical debt, which must later be repaid with interest (error corrections).

One can subdivide the productivity types concerning the development process:

• Analysis productivity: This refers to the collection, analysis, and documentation of information. Access to information is a recurring problem; depending on the project, it is either easier or more difficult to obtain. Another problem is the validity of the identified requirements, so a clear definition of the task from the start is important. The process of requirements engineering is extensive, complex, and not always clearly structured, and its implementation is costly; it is often necessary to develop prototypes. Measured values are the number of requirements and use cases. The associated documentation can be expressed in function or use case points. Analysis productivity therefore varies greatly depending on project and situation and cannot be generalized.

• Design productivity: Software design is a creative activity. The work of a software designer is difficult to measure and cannot be estimated reliably in advance. Data, object, and function points serve as size units for the design; by comparing these quantities with the working days spent, design productivity can be measured retrospectively. It helps when “standard interfaces”, for example for database business applications, are designed, and comparability with projects that have already been implemented can also be useful. If, for example, a new UI is designed for a dialog screen, previous experience can be drawn on. However, if the technology has changed (for example, a new framework or programming language), comparability is very limited.

• Programming productivity: When developing new applications, programming productivity refers to the code produced by the developer. This can easily be measured in lines of code or in completed work scope. Interpretation is much more difficult because of differing programming languages, formatting guidelines, and approaches to the problem. Measured variables are again object, function, and data points. When extending or changing existing code, programming productivity relates to the amount of code to be changed or added. Its evaluation is also not easy, because every change must be checked by the programmer for possible (undesired) side effects. If the programmer is also expected to create tests (mostly unit tests) for the code, this must be considered in the measurement.

• Test productivity: In principle, test productivity is considered measurable. Testing typically takes up 30 to 50 percent of the total project effort. Test productivity is fairly constant and depends heavily on the technical conditions (such as the degree of test automation).

• Overall productivity: In migration, renovation, evolution, and integration projects, overall productivity can be measured relatively easily and transferred to similar projects. In contrast, the overall productivity of development projects with a large proportion of analysis and design activities is difficult to determine; comparability is limited here.
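To make the programming-productivity measure above concrete, here is a small sketch that counts non-blank, non-comment lines of code and relates them to working days. The sample source and the effort figure are illustrative assumptions; a real measurement would read files from the repository and, as noted above, still needs careful interpretation across languages and coding styles:

```python
# Sketch: relate produced lines of code (LOC) to effort in working days.
# The sample source text and the effort figure are assumptions for
# illustration; adapt the comment convention to the language measured.

def count_loc(source: str) -> int:
    """Count non-blank lines, ignoring pure comment lines ('#' style)."""
    loc = 0
    for line in source.splitlines():
        stripped = line.strip()
        if stripped and not stripped.startswith("#"):
            loc += 1
    return loc

sample = """\
# helper module (comment line, not counted)
def add(a, b):
    # adds two numbers (comment line, not counted)
    return a + b

def sub(a, b):
    return a - b
"""

loc = count_loc(sample)
effort_days = 2  # assumed effort
print(f"{loc} LOC in {effort_days} days -> {loc / effort_days:.1f} LOC/day")
```

A raw LOC/day figure like this is only a starting point; as the text stresses, it must be read together with quality, language, and formatting conventions.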

As you can see, productivity has many aspects that you should consider when trying to measure it.

How to increase productivity?

To conclude from the previous descriptions that productivity cannot be measured because of the many influencing factors and the specifics of software development would be wrong. Instead of burying your head in the sand, you can derive the following practical recommendations, whose implementation can increase productivity:

• Increase employee performance: Possible measures are to increase qualifications through further training, improve the working environment and working conditions, establish a cooperative and open management style, and promote a good corporate culture.

• Increase employee satisfaction through appealing and challenging work tasks and by transferring overall responsibility.

• Improve the efficiency of work steps, optimize computer equipment (for example, providing a good monitor of appropriate size and resolution), and improve the possibilities for office communication.

• Use current and established process models for software development by encouraging early prototyping and introducing agile development approaches such as Scrum.

• Promote reuse through component libraries and a fundamental alignment of the entire development process towards component-oriented development.

• Select personnel carefully: The aim is to determine the motivation and potential performance of an employee before hiring.


The subject of software measurement, which we have touched on here, is complex. One thing should have become clear in any case: software development can be measured and thus classified in its determining dimensions. The complexity, the quality, the effort, and ultimately the productivity always remain associated with a certain degree of uncertainty. There will be no exact definition and measurability in the future either, but usable approximate values for practical work can already be determined today. It is worth looking at your development projects from these points of view and allowing comparisons in the sense of a benchmark.

Measuring effectiveness requires knowing the value we are providing to the client, which is tremendously relative and subjective. We can ask the client what value they would assign to each user story, in the same way that we estimate the stories, but then we would only be measuring business value, and technical refactoring tasks or spikes, for example, would not receive an adequate value. We could be tempted to measure the ROI of each story, that is, the money it has generated versus what it has cost, but again we would be leaving out technical tasks, research, or even tasks that bring intangible benefits such as a better image or the possibility of getting customer feedback.

Something that we can measure, and that provides objective information on effectiveness, is the number of times users actually use a piece of functionality. We can do this by analyzing the application or database logs to find out whether the functionality is being used as we expected. Even better, we can incorporate feedback forms to collect users' opinions first-hand, so that they can tell us whether the functionality has been useful and suggest improvements that we can add to the backlog.
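The log-based approach can be sketched in a few lines. The log format below (`timestamp feature=<name> user=<id>`) and the feature names are assumptions for illustration; real applications log in their own formats, so the parsing would need to be adapted:

```python
# Sketch: count how often each feature appears in application logs.
# The log-line format "timestamp feature=<name> user=<id>" is an
# assumed example; adapt the regular expression to your real logs.

import re
from collections import Counter

LOG_LINES = [
    "2024-05-01T10:00:01 feature=export_pdf user=42",
    "2024-05-01T10:03:12 feature=search user=7",
    "2024-05-01T10:05:44 feature=export_pdf user=13",
    "2024-05-01T11:20:09 feature=search user=42",
    "2024-05-01T11:21:33 feature=search user=42",
]

FEATURE_RE = re.compile(r"feature=(\w+)")

def feature_usage(lines):
    """Return a Counter mapping feature name -> number of uses."""
    counts = Counter()
    for line in lines:
        match = FEATURE_RE.search(line)
        if match:
            counts[match.group(1)] += 1
    return counts

usage = feature_usage(LOG_LINES)
for feature, n in usage.most_common():
    print(f"{feature}: used {n} times")
```

Ranking features by actual usage in this way gives an objective signal of which delivered functionality provides value, complementing the subjective value estimates discussed above.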