
Software creation: how can you evaluate your design and be more productive?

By Nicolas Payette

Published: 15 November 2024

The evaluation and costing of software, in particular estimating the size, effort, cost and time involved, is often the source of lively debate among software estimators. Project managers are generally responsible for this activity.

Software development involves a number of different activities that call on specialised knowledge, particularly in the following areas: requirements gathering, analysis and management; software design; coding and independent verification and validation (IV&V); and implementation, deployment, installation and commissioning. Each of these activities is carried out by qualified people, using various tools of varying degrees of complexity.

What is productivity? Definition

Productivity is defined as the rate of production for given inputs, and is expressed as "so many production units per day" or "so many production units per hour". Equivalently, productivity is the ratio of outputs to inputs.

In the context of this article, productivity refers to the rate of production of an output unit using a set of inputs, for a given period of time.
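This definition can be illustrated with a minimal sketch in Python; the function name and all figures below are hypothetical, chosen only to show the ratio at work:

```python
# Minimal sketch of the productivity definition above (all figures hypothetical).
# Productivity = output units produced / input effort, over a given period.

def productivity(output_units: float, person_hours: float) -> float:
    """Rate of production: output units per person-hour of input."""
    if person_hours <= 0:
        raise ValueError("effort must be positive")
    return output_units / person_hours

# Example: 40 function points delivered in 400 person-hours.
rate = productivity(40, 400)
print(rate)  # 0.1 function points per person-hour
```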

Problems in assessing software size

In today's IT industry, there are several units for measuring the size of a piece of software: function points, use case points (UCP), object points, feature points, Internet points, test points, function point analysis (FPA), lines of code (LOC), and so on. There is no established standard among these units, nor an agreed method for converting a size expressed in one of them into another.

Strangely, in these measurements, the size of the software is adjusted (increased or decreased) depending on factors such as complexity. Yet size is an immutable characteristic. For example, a kilo of cheese does not become heavier or lighter if the person weighing it is more or less experienced, or if the scale is mechanical rather than electronic. To take another example: a kilometre is still a kilometre whether it is a young person or an elderly person walking that distance, and whether the distance is measured on a motorway or in a busy street.

However, the speed at which the results are obtained changes. If we take the examples above, the older person will certainly cover the kilometre more slowly than the younger person. What's more, a kilometre is covered more quickly on the motorway than in town.

What's more, there is no agreement on how to count LOC. Should we count logical or physical statements? And how should inline documentation (comments) be treated? Should it be counted or not?

These are some of the main problems associated with assessing the size of a piece of software.

Concerns about productivity

The software industry is obsessed with the idea of stating a single, empirical productivity rate covering all activities.

"Defining productivity" here means assigning a figure for the effort required, expressed in person-hours, to develop one unit of software size, so that software size (in function points) can be converted into development effort (in person-hours). Attempts have been made to set productivity at, say, 10 person-hours per function point, even though the actual figure can vary from 2 to 135 person-hours per function point depending on the size of the product, the team and other factors. Sometimes intervals are quoted instead, for example fifteen to thirty hours per UCP. At other times, empirical formulas are built from a set of factors, as in the case of the Constructive Cost Model (COCOMO).
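As an illustration of such an empirical formula, here is a minimal sketch of Basic COCOMO in Python. The coefficients are the published Basic COCOMO constants; the sample size of 32 KLOC is purely illustrative:

```python
# Basic COCOMO (Boehm, 1981): an empirical effort formula of the kind
# discussed above. Effort in person-months = a * (KLOC ** b), where the
# coefficients a and b depend on the project class.

COCOMO_BASIC = {
    "organic":       (2.4, 1.05),
    "semi-detached": (3.0, 1.12),
    "embedded":      (3.6, 1.20),
}

def cocomo_effort(kloc: float, mode: str = "organic") -> float:
    """Estimated effort in person-months for a project of `kloc` thousand LOC."""
    a, b = COCOMO_BASIC[mode]
    return a * (kloc ** b)

# Illustrative figure: a 32 KLOC organic project.
print(round(cocomo_effort(32, "organic"), 1))  # about 91 person-months
```

Note that the formula converts a size measure (KLOC) directly into effort via fixed coefficients, which is exactly the kind of single blended rate this article questions.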

The problem with these productivity measures is that they combine all the activities, such as requirements analysis, design, review and testing, into a single figure. Yet the skills required for these tasks are different, as are the tools used and the inputs and outputs. Grouping them together under the heading of "software development" and giving a single measure of productivity can only provide a very rough estimate, never an accurate one.

Designing software better and faster: how can you become more productive?

Software development involves the following activities:

  • project preparation activities, including feasibility studies, financial budgeting and project approval (financial and technical sign-off, and "project launch")
  • project initiation activities, such as identifying the project manager, forming the project team and setting up the development environment; project planning; establishing protocols such as service level agreements and progress reporting procedures; project training
  • software engineering activities, including user requirements analysis; software requirements analysis; software design; coding and unit testing; the various types of integration, functional, negative, system and acceptance testing; preparation of documentation
  • deployment activities, including hardware and system software installation; database creation; application software installation; pilot testing; user training; the parallel run and actual cut-over
  • project closure activities, including documentation of good and bad practice; end-of-project analysis; archiving of records; release of resources; release of the project manager from their obligations; commencement of software maintenance.

When the industry quotes "ground rules" (accepted, common-sense figures) for productivity, it is difficult to determine which of these activities are included in the rate. Few would venture to measure productivity rigorously, even though doing so is a basic discipline of any industry.

Let's take a look at the nature of these activities:

  1. Requirements analysis: understanding and documenting what the user needs, wants and expects so that software designers fully understand and can design a system in strict accordance with the stated requirements. Dependency on external factors is high.
  2. Software design: consider the different options available for hardware, system software and development platform; arrive at the optimum choice for each; design an architecture that meets stated requirements and customer expectations. The architecture must be compatible with current technologies and the design documented in such a way that the programmers understand and deliver a product that complies with the user's original specifications. There are several alternatives, and since software design is a major, strategic activity, mistakes can have serious consequences.
  3. Coding: developing software code that conforms to the design and contains as few errors as possible (it's so easy to inadvertently leave bugs in).
  4. Code review: studying code written by another programmer, deciphering its functionality and trying to predict errors that the customer might encounter when using the software.
  5. Testing: trying to discover any faults left in the software. In practice, however, it is impossible to find all the defects, and testing the software exhaustively is impracticable.

Because the nature of these activities is so different, it is obvious that the productivity for each of them is not uniform (and therefore cannot be described by the same figure). The pace of work differs for each of these activities.

These rates do not depend on the quantity of code produced, but on other factors, such as:

  1. requirements, which depend on the effectiveness and clarity of their source (users or documentation)
  2. design, which depends on the complexity of the process, the alternatives available and the constraints under which the functionality must be designed
  3. code review, which depends on the coding style
  4. testing, which depends on how the code is written (the more errors there are, the more time is needed for testing and retesting)
  5. the coding itself, which depends on the quality of the design

As a result, we need to establish different productivity figures for each of these activities.
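To make the idea concrete, here is a hypothetical sketch in Python that applies a separate productivity rate to each activity instead of one blended figure. Every rate below is invented purely for illustration:

```python
# Hypothetical sketch: total effort estimated by applying a separate
# productivity rate (person-hours per function point) to each activity,
# rather than a single blended figure. All rates are invented.

RATES = {
    "requirements analysis": 1.0,   # ph per function point (hypothetical)
    "design":                2.0,
    "coding":                3.0,
    "code review":           0.5,
    "testing":               2.5,
}

def total_effort(function_points: float) -> float:
    """Sum the per-activity efforts for a product of the given size."""
    return sum(function_points * rate for rate in RATES.values())

print(total_effort(100))  # 900.0 person-hours at the rates above
```

The per-activity breakdown is what makes the estimate auditable: each rate can be challenged, measured and refined separately.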

Let's try to draw a parallel with manufacturing industry, for example with punching. The activities to be carried out are: 1) setting up the machine; 2) setting up the tools; 3) loading the job; 4) punching the hole; 5) deburring the hole; 6) cleaning; 7) delivering the sheet for the next operation.

If several holes are punched, the time "per hole" decreases, because the configuration activities are one-off activities.
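The arithmetic behind this amortisation effect can be sketched as follows (all times are hypothetical):

```python
# One-off setup time is amortised across the batch, so the time "per hole"
# falls as the number of holes grows. Times below are hypothetical.

def time_per_hole(setup_minutes: float, minutes_per_hole: float, holes: int) -> float:
    """Average time per hole, including the share of one-off setup."""
    return (setup_minutes + minutes_per_hole * holes) / holes

print(time_per_hole(30, 2, 1))   # 32.0 minutes for a single hole
print(time_per_hole(30, 2, 10))  # 5.0 minutes per hole in a batch of 10
```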

Therefore, if we consider the coding of a unit, for example, the activities to be carried out could be: 1) receive the instructions; 2) study the design document; 3) code the unit; 4) test and debug the unit for the specific functionality; 5) test and debug the unit for general use; 6) remove unnecessary code from the unit; 7) regression test the unit; 8) hand the unit over to the next stage.

Similarly, we can propose micro-activities for each software development phase.

Productivity figures: empirical or based on a methodical study?

Each of the above activities proceeds at a different rate. Standard times need to be established for each of them. Once that is done, work study techniques, such as synthesis or analytical estimation, can be used to estimate the total duration of the job.

Whether productivity is studied with time study techniques or derived from empirical data, software development is neither totally mechanical nor totally creative. It is unrealistic to set standard times for activities with a strong creative component, and work study methods only partially account for this aspect of software development. A good deal of research is being done on "executive productivity", and methods for "timing" software development work may become available in the future. For the moment, empirical data seems to be the solution of choice.

Where do we get empirical data? The first option is time studies using industrial engineering techniques. The other way, which is easier and more reliable, is to rely on the historical data provided by timesheets.

The majority of time-tracking software used in the industry is payroll- and billing-oriented. It does not collect data at a low enough level to establish productivity trends. Most of these tools log two or three levels of data in addition to date and time: the project is always recorded at the first level, and the second and third levels may hold a module and a component, a component and an activity, or a similar combination. To be useful for productivity analysis, timesheets must record five levels of data in addition to the employee's date and time: the project, the module, the component, the development phase and the task performed. Data would then be available to establish productivity measures empirically and realistically.
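A five-level timesheet record of the kind described above might be modelled as follows; the field names and sample data are hypothetical, and the roll-up shows how such records would feed micro-productivity analysis:

```python
# Sketch of a five-level timesheet record (project, module, component,
# phase, task), plus a simple roll-up by (phase, task) of the kind that
# micro-productivity analysis would need. All names and data hypothetical.

from collections import defaultdict
from dataclasses import dataclass

@dataclass
class TimesheetEntry:
    project: str
    module: str
    component: str
    phase: str    # development phase, e.g. "coding"
    task: str     # micro-activity, e.g. "unit testing"
    hours: float

entries = [
    TimesheetEntry("billing", "invoices", "pdf-export", "coding", "code unit", 6.0),
    TimesheetEntry("billing", "invoices", "pdf-export", "coding", "unit testing", 2.0),
    TimesheetEntry("billing", "invoices", "pdf-export", "testing", "regression", 1.5),
]

# Aggregate hours by (phase, task): the raw material for per-activity rates.
by_phase_task = defaultdict(float)
for e in entries:
    by_phase_task[(e.phase, e.task)] += e.hours

print(dict(by_phase_task))
```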

Currently, software development activities focus on macro-productivity. This trend needs to change, and we need to move from macro to micro productivity. To do this, we need to change our timesheet software and the depth of the data we collect.

Studying productivity at the micro level has the following advantages:

  • Better predictability of software development
  • Better quality estimates to improve pricing during the project development and finalisation phases
  • Establishment of more precise objectives when allocating tasks, which increases the confidence of software publishers
  • More accurate cost estimates

Conclusion

It is important to understand the difference between the terms productivity and capacity. Productivity is the rate of accomplishment of a micro-activity; capacity is the rate of accomplishment of a whole installation (a factory, an organisation, and so on), and several activities are rolled up into a capacity figure. For software estimation purposes, the focus must shift from macro-productivity (capacity) to micro-level productivity. The collection of empirical data is the preferred way to obtain productivity measures for the various software development activities, because time and task study techniques cannot give satisfactory results when the work has a strong creative component, as software development does. To collect such empirical data, time-recording software needs to be improved. We recommend this approach for establishing productivity figures at all micro-levels.

About the authors

Murali Chemuturi is an expert in industrial engineering at the Indian Institution of Industrial Engineering. He has spent over thirty years in professional organisations, including ECIL, TCS, Metamor and Satyam. He worked first in manufacturing and then in IT. He currently heads Chemuturi Consultants and has a particular interest in software for the software development industry. He has conducted several in-company training programmes for software project management and software evaluation.

Sarada Kaligotla holds a Masters in Computer Applications, is a Project Management Professional (PMP) certified by the Project Management Institute (PMI), and is a Certified Software Quality Analyst (CSQA) certified by the Quality Assurance Institute (QAI). She currently works for Blue Cross Blue Shield in Massachusetts. Sarada has six years of experience in the software industry, including project and development management.

Article translated from French