Conventional Software Management Performance

Barry Boehm's one-page "Industrial Software Metrics Top 10 List" [Boehm, 1987] is a good, objective characterization of the state of software development. (There is very little evidence of significant changes in the past decade.) Although many of the metrics are gross generalizations, they accurately describe some of the fundamental economic relationships that resulted from the conventional software process practiced over the past 30 years.

In the following paragraphs, quotations from Boehm's top 10 list are presented in italics, followed by my comments.

1. Finding and fixing a software problem after delivery costs 100 times more than finding and fixing the problem in early design phases.

▲ This metric dominates the rationale for virtually every dimension of process improvement discussed in this or any other book. It is not unique to software development. When one of the big automobile companies implements a recall for a post-delivery defect, the cost of repair can be many orders of magnitude greater than the cost of fixing the defect during the engineering or production stage.

2. You can compress a software development schedule up to 25% of nominal, but no more.

▲ One reason for this is that an N% reduction in schedule would require an M% increase in personnel resources (assuming that other parameters remain fixed). Any increase in the number of people requires more management overhead. In general, the limit of flexibility in this overhead, along with scheduling concurrent activities, conserving sequential activities, and other resource constraints, is about 25%. Optimally, a 100-staff-month effort may be achievable in 10 months by 10 people. Could the job be done in 1 month with 100 people? In 2 months with 50 people? How about 5 months with 20 people? Clearly, these alternatives are unrealistic. The 25% compression metric says that the limit in this case is 7.5 months (and would require additional staff-months, perhaps as many as 20). Any further schedule compression is doomed to fail. On the other hand, an optimal schedule could be extended almost arbitrarily and, depending on the people, could be performed in a much longer time with many fewer staff resources. For example, if you have the luxury of a 25-month schedule, you may need only 75 staff-months and three people.
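
The compression arithmetic can be sketched in a few lines of code. Everything in this sketch is an illustrative assumption rather than a calibrated model: the 0.75 floor restates the 25% limit, and the effort penalty is chosen to reproduce the "perhaps as many as 20" extra staff-months cited above.

    # Sketch of the 25% schedule-compression rule of thumb.
    # Assumed nominal plan: 100 staff-months over 10 months.
    NOMINAL_EFFORT = 100.0    # staff-months
    NOMINAL_SCHEDULE = 10.0   # months
    COMPRESSION_FLOOR = 0.75  # schedules below 75% of nominal are infeasible
    EFFORT_PENALTY = 0.6      # assumed effort growth per unit of compression

    def compressed_plan(target_months):
        """Staff-months needed for a target schedule, or None if infeasible."""
        if target_months < COMPRESSION_FLOOR * NOMINAL_SCHEDULE:
            return None  # beyond the 25% limit: "doomed to fail"
        compression = NOMINAL_SCHEDULE / target_months - 1.0
        return NOMINAL_EFFORT * (1.0 + EFFORT_PENALTY * compression)

    for months in (10, 7.5, 5, 2, 1):
        plan = compressed_plan(months)
        label = "infeasible" if plan is None else f"{plan:.0f} staff-months"
        print(f"{months:>4} months -> {label}")

Run as written, the sketch reports 100 staff-months at 10 months, 120 staff-months at the 7.5-month limit, and rejects the 5-, 2-, and 1-month schedules outright.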

3. For every $1 you spend on development, you will spend $2 on maintenance.

▲ Boehm calls this the "iron law of software development." Whether you build a long-lived product that undergoes commercial version upgrades twice a year or build a one-of-a-kind custom software system, twice as much money will probably be spent over the maintenance life cycle as was spent in the development life cycle. It is hard to tell at first whether this relationship is good or bad. In the commercial product domain, the primary driver of this relationship is the product's success in the marketplace. Successful software products (such as Oracle, Microsoft applications, Rational Rose, and the UNIX operating system) are very long lived and can result in much higher ratios of maintenance cost to development cost. Managers of one-of-a-kind software projects, on the other hand, rarely plan to expend this much on software maintenance. In either case, anyone working in the software industry over the past 10 to 20 years knows that most of the software in operation is considered to be difficult to maintain.

4. Software development and maintenance costs are primarily a function of the number of source lines of code.

▲ This metric is primarily the result of the predominance of custom software development, lack of commercial componentry, and lack of reuse inherent in the era of the conventional process.

5. Variations among people account for the biggest differences in software productivity.

▲ This is a key piece of conventional wisdom: Hire good people. This metric is a subject of both overhype and underhype. When you don't know objectively why you succeeded or failed, the obvious scapegoat is the quality of the people. This judgment is subjective and difficult to challenge.

6. The overall ratio of software to hardware costs is still growing. In 1955 it was 15:85; in 1985, 85:15.

▲ The fact that software represents 85% of the cost of most systems is not so much a statement about software productivity (which is, arguably, not as good as we want) as it is about the level of functionality being allocated to software in system solutions. The need for software, its breadth of applications, and its complexity continue to grow almost without limits.

7. Only about 15% of software development effort is devoted to programming.

▲ This is an important indicator of the need for balance. Many activities besides coding are necessary for software project success. Requirements management, design, testing, planning, project control, change management, and toolsmithing are equally important considerations that consume roughly 85% of the resources.

8. Software systems and products typically cost 3 times as much per SLOC as individual software programs. Software-system products (i.e., system of systems) cost 9 times as much.

▲ This exponential relationship is the essence of what is called diseconomy of scale. Unlike other commodities, the more software you build, the more expensive it is per source line.
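
One way to picture the diseconomy is a COCOMO-style effort model, effort = a * size^b with b > 1. The coefficients below are assumptions chosen so that cost per SLOC triples with each tenfold increase in size, echoing the 3x and 9x figures in metric 8; they are not calibrated values.

    # Diseconomy-of-scale sketch: effort = A * size**B with B > 1.
    # A and B are illustrative assumptions, not calibrated coefficients.
    A = 3.0    # staff-months at 1 KSLOC
    B = 1.477  # scale exponent; chosen so cost/KSLOC triples per 10x size

    def effort(ksloc):
        """Estimated staff-months for `ksloc` thousand source lines."""
        return A * ksloc ** B

    for ksloc in (10, 100, 1000):
        per_ksloc = effort(ksloc) / ksloc
        print(f"{ksloc:>5} KSLOC: {effort(ksloc):>7.0f} staff-months "
              f"({per_ksloc:.1f} per KSLOC)")

With these assumed coefficients, a program ten times larger costs roughly thirty times as much in total, so each source line in the larger system costs about three times as much as a line in the smaller one.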

9. Walkthroughs catch 60% of the errors.

▲ This may be true. However, given metric 1, walkthroughs are not catching the errors that matter and certainly are not catching them early enough in the life cycle. All defects are not created equal. In general, walkthroughs and other forms of human inspection are good at catching surface problems and style issues. If you are using ad hoc design notations, human review may be your primary quality assurance mechanism, but it is not good at uncovering second-, third-, and Nth-order issues such as resource contention, performance bottlenecks, control conflicts, and so on. Furthermore, few humans are good at reviewing even first-order semantic issues in a code segment. How many programmers get their code to compile the first time?

10. 80% of the contribution comes from 20% of the contributors.

▲ This is a motherhood statement that is true across almost any engineering discipline (or any professional discipline, for that matter). I have expanded this metric into a more specific interpretation for software. The following fundamental postulates underlie the rationale for a modern software management process framework (a short illustrative sketch follows the list):

80% of the engineering is consumed by 20% of the requirements.

80% of the software cost is consumed by 20% of the components.

80% of the errors are caused by 20% of the components.

80% of software scrap and rework is caused by 20% of the errors.

80% of the resources are consumed by 20% of the components.

80% of the engineering is accomplished by 20% of the tools.

80% of the progress is made by 20% of the people.
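
These postulates are all instances of the Pareto principle, which is straightforward to test against project data. The sketch below uses invented per-component defect counts (the component names and numbers are hypothetical) to check the errors postulate:

    # Pareto check: do 20% of components cause ~80% of the errors?
    # Component names and defect counts are invented for illustration.
    defects = {
        "parser": 150, "scheduler": 90, "ui": 20, "logging": 10,
        "config": 8, "auth": 6, "report": 5, "export": 4,
        "search": 4, "backup": 3,
    }

    counts = sorted(defects.values(), reverse=True)
    top_20_percent = counts[: max(1, len(counts) // 5)]
    share = sum(top_20_percent) / sum(counts)
    print(f"Top 20% of components account for {share:.0%} of defects")
    # -> Top 20% of components account for 80% of defects

The same computation applies to any of the postulates above by substituting cost, effort, or progress data for defect counts.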

These relationships provide some good benchmarks for evaluating process improvements and technology improvements. They represent rough rules of thumb that objectively characterize the performance of the conventional software management process and conventional technologies. In later chapters, I return to many of these measures to rationalize a new approach, defend an old approach, and quantify process or technology improvements.
