Scrap ratio

B/SLOC_T, percentage of product scrapped

Rework ratio

E/Effort_Development, percentage of rework effort

Modularity

B/N, average breakage per SCO

Adaptability

E/N, average effort per SCO

Maturity

UT/(SCO_0 + SCO_1), mean time between defects

Maintainability

(rework ratio)/(scrap ratio), maintenance productivity

• Scrap ratio. This metric provides a value for comparison with historical projects, future increments, or future projects. It defines the percentage of the product that had to be reworked during its life cycle.

• Rework ratio. This value identifies the percentage of effort spent in rework compared to the total effort. It probably provides the best indicator of rework (or maintenance) productivity.

• Modularity. This value identifies the average amount of SLOC broken per SCO, which reflects the inherent ability of the integrated product to localize the impact of change. To the maximum extent possible, CCBs should ensure that SCOs are written for single source changes and applied consistently across the project.

• Adaptability. This value provides insight into the ease with which the product can be changed. While a low number of changes is generally a good indicator of a quality process, the magnitude of effort per change is usually more important.

• Maturity. This value provides an indicator of the current mean time between failures (MTBF) for the product. While the ultimate goal for maturity is always infinity (namely, zero defects), every project must settle for less. Once a product has been released to its user community, the MTBF is generally fixed and stable. Throughout the development life cycle, however, maintenance actions are expected to improve the maturity over the life of a single release, and the trends across multiple releases should show improvement toward the project's end goals for maturity.

• Maintainability. This value identifies the relationship of maintenance cost to development cost. It provides a fair normalization for comparisons among different projects. Because the maintainability numerator is in terms of effort and its denominator is in terms of SLOC, it is a ratio of productivities (effort per SLOC). A simple mathematical rearrangement will show that maintainability (or the quality of maintenance, Qm) is equivalent to the following:

Qm = Productivity_Maintenance/Productivity_Development

For example, if the (scrap ratio) = (rework ratio), the productivity of modification is equivalent to the productivity of development and Qm = 1. Intuitively, a value of 1 represents a "poor" level of maintainability because it should be easier to change existing software than to develop an alternative from scratch. The fact that conventional projects tended to spend $2 on maintenance for every $1 of development [Boehm, 1987] can serve as a benchmark of what would constitute a "good" level of maintainability. Consider a software line of business with an average product life span of 16 years and an average yearly breakage rate of 12%. If Qm = 1, there would be about a 1:2 ratio between development expenditures and maintenance expenditures, or a maintainability that is roughly the norm for the software industry. A maintainability value much less than 1 would, in most cases, indicate a highly maintainable product, at least with respect to development cost and conventional experience.
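These metric definitions can be computed directly from raw project measures. The sketch below is illustrative only: the function name and the sample inputs are hypothetical, and only the Boehm benchmark arithmetic (16 years at 12% yearly breakage) comes from the text.

```python
def quality_metrics(B, E, N, UT, SCO0, SCO1, SLOC_T, effort_dev):
    """End-product quality metrics as defined above.
    B = breakage (SLOC reworked), E = rework effort, N = number of SCOs,
    UT = usage time, SCO0/SCO1 = counts of type 0 and type 1 defect SCOs."""
    scrap_ratio = B / SLOC_T            # fraction of the product scrapped
    rework_ratio = E / effort_dev       # rework effort relative to development effort
    return {
        "scrap_ratio": scrap_ratio,
        "rework_ratio": rework_ratio,
        "modularity": B / N,                            # average breakage per SCO
        "adaptability": E / N,                          # average rework effort per SCO
        "maturity": UT / (SCO0 + SCO1),                 # mean time between defects
        "maintainability": rework_ratio / scrap_ratio,  # Qm
    }

# Boehm benchmark arithmetic from the text: a 16-year life span at 12%
# average yearly breakage scraps about 192% of the product over its life.
scrap_over_life = 16 * 0.12                  # 1.92
# With Qm = 1, the rework ratio equals the scrap ratio, so maintenance
# effort is about 1.92x development effort -- roughly the historical
# $2 of maintenance per $1 of development.
maintenance_per_dev_dollar = 1 * scrap_over_life
print(round(maintenance_per_dev_dollar, 2))  # 1.92
```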

These descriptions identify idealized trends for these metrics. Real project situations will never be ideal. It is important, however, for stakeholders to understand the extent to which the metrics vary from the ideal. The application of these metrics across project increments should be useful for the project as a whole and for comparisons with other projects.

C.2.3 In-Progress Indicators

The in-progress indicators are defined in Table C-3. Relative expectations are described next and illustrated in Figures C-1 and C-2.

• Rework stability. This metric quantifies the difference between total rework and closed rework. Its importance is to indicate whether the resolution rate is keeping up with the breakage rate. Figure C-1 shows an example of a healthy project in which the resolution rate does not diverge (except for short periods of time) from the breakage rate. The breakage rate should also be tracked relative to the SLOC_C delivery rate, because the level of effort devoted to testing and maintenance varies over the life cycle. This is the purpose of the next metric.
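Rework stability is simply the breakage series minus the fix series. The sketch below uses made-up monthly cumulative totals to show the divergence check described above:

```python
# Hypothetical cumulative breakage (B) and fixes (F) per month, in SLOC.
breakage = [0, 400, 1100, 1900, 2600, 3200]
fixes    = [0, 250,  900, 1750, 2500, 3150]

# Rework stability: B - F plotted over time.
open_rework = [b - f for b, f in zip(breakage, fixes)]
print(open_rework)   # [0, 150, 200, 150, 100, 50]

# A simple divergence check: open rework at the end of the period should
# not exceed its earlier peak if resolution is keeping up with breakage.
diverging = open_rework[-1] > max(open_rework[:-1])
print(diverging)     # False -- a healthy project
```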

Table C-3. Definitions of in-progress indicators

Rework stability

B-F, breakage minus fixes plotted over time

Rework backlog

(B-F)/SLOC_C, currently open rework

Modularity trend

Modularity plotted over time

Adaptability trend

Adaptability plotted over time

Maturity trend

Maturity plotted over time




[Figure C-1 consists of two plots across the project milestones (architecture baseline, development release, alpha release, beta release): Rework Stability, showing cumulative breakage, repaired breakage, and current breakage as a percentage of SLOC_T; and Rework Backlog, showing cumulative breakage and open backlog as a percentage of SLOC_C.]

Figure C-1. Expected trends for in-progress indicators

• Rework backlog. The rework backlog is the percentage of the existing product baseline, SLOC_C, that is currently in need of repair. In general, the backlog should rise to some manageable level following establishment of the first baseline, as testing uncovers necessary changes. Rework backlog should remain relatively stable throughout the test program until it drops to zero. Large changes or sustained growth in backlog from month to month should be scrutinized. Sustained increases may indicate instability and divergence from plan.
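The backlog is open rework normalized by the size of the current baseline. An illustrative sketch, using hypothetical monthly snapshots:

```python
# Hypothetical monthly snapshots: open rework (B - F, in SLOC) and the
# size of the current configuration baseline, SLOC_C.
open_rework = [150, 200, 150, 120, 50]
sloc_c      = [20000, 35000, 60000, 80000, 100000]

# Rework backlog: (B - F)/SLOC_C, expressed as a percentage.
backlog_pct = [100.0 * r / s for r, s in zip(open_rework, sloc_c)]
print([round(p, 2) for p in backlog_pct])  # [0.75, 0.57, 0.25, 0.15, 0.05]

# Flag sustained month-to-month growth for scrutiny.
sustained_growth = all(b > a for a, b in zip(backlog_pct, backlog_pct[1:]))
print(sustained_growth)   # False -- the backlog is stable or shrinking
```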

• Modularity trend. Changes in this value show how the extent of change is evolving over a project's life cycle. The general trend provides insight into quality (how well the architecture accommodates change localization) and management (schedule convergence and downstream change risks). Most trivial changes get caught and implemented in stand-alone test activities. This value addresses the nontrivial changes that creep into configuration baselines. While it is difficult to quantify what constitutes a good trend, the following rule of thumb is typical on successful projects: The average SCO should affect the equivalent of a single program unit (the lowest level of separately compilable code elements). For example, the average breakage per SCO for software written in C++ (in which the average program unit is about 50 lines of code) should be about 50 at project completion. In a mature iterative development process, the earlier changes (design changes that affect multiple components and people) are expected to require more rework than later changes (implementation changes, which tend to be confined to a single component or person). Modularity trends that are increasing with time clearly indicate that the product architecture is degrading.
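The modularity trend can be watched per increment. This sketch uses hypothetical SCO batches and the ~50 SLOC-per-SCO rule of thumb for C++ mentioned above:

```python
# Hypothetical SCO batches per increment: (total breakage in SLOC, SCO count).
increments = [(9000, 60), (6000, 75), (4000, 80), (2500, 50)]

# Modularity per increment: average breakage per SCO.
modularity = [b / n for b, n in increments]
print(modularity)   # [150.0, 80.0, 50.0, 50.0]

# Expected shape: early (design) changes break more SLOC per SCO than later
# (implementation) changes; a monotonically increasing trend would signal
# that the product architecture is degrading.
degrading = all(m2 > m1 for m1, m2 in zip(modularity, modularity[1:]))
print(degrading)    # False -- the trend converges toward ~50 SLOC per SCO
```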

• Adaptability trend. This value provides a mechanism for assessing trends in the complexity of change, as opposed to the extent of change. When changes are easy to implement, a project is more likely to increase the number of changes, thereby increasing quality. With the conventional process, it was more expensive to incorporate changes later in the life cycle. In a modern iterative process, the objective is to establish a robust process and architecture so that making changes is easier, and the results are more predictable, later in the life cycle. Rework trends may only stabilize rather than continue to get simpler over time. Nevertheless, this is a critical difference from the conventional process. A good trend is difficult to quantify in absolute terms. In practice, successful projects tend to experience an average cost of change of less than one staff-week.

• Maturity trend. It is easy to explain the expectation for this value for a single release. However, most modern software projects consist of several iterations and increments with overlapping activities and release schedules. Assessing the maturity of a whole system is far more complex than assessing the maturity of a given release. The expectation for an individual release would be a relatively immature product (frequently experienced defects) that shows rapidly improving maturity as fixes are incorporated in maintenance updates. The expectation for the simple project example illustrated in Figure C-2 is that with each subsequent release, the whole baseline encounters fewer defects and more usage time. Consequently, the reliability growth should be getting better and better. Exponential growth, shown in the figure, may be unrealistic for most systems. Linear growth may be more realistic. "Your mileage may vary," but a healthy process and architecture should not experience a sustained decrease in maturity, and short-term decreases should have an obvious cause.
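The per-release expectation above can be sketched directly from the maturity definition, UT/(SCO_0 + SCO_1). The usage hours and defect counts below are hypothetical:

```python
# Hypothetical per-release measures: (usage hours, type 0 SCOs, type 1 SCOs).
releases = [(500, 10, 40), (2000, 8, 42), (8000, 5, 35)]

# Maturity: UT/(SCO_0 + SCO_1), an MTBF estimate for each release.
mtbf = [ut / (s0 + s1) for ut, s0, s1 in releases]
print(mtbf)        # [10.0, 40.0, 200.0]

# Reliability growth: each release should encounter fewer defects per unit
# of usage time, so MTBF should improve across releases.
improving = all(b > a for a, b in zip(mtbf, mtbf[1:]))
print(improving)   # True
```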
