One critical problem in software cost estimation is a lack of well-documented case studies of projects that used an iterative development approach. Although cost model vendors claim that their tools are suitable for estimating iterative development projects, few are based on empirical project databases with modern iterative development success stories. Furthermore, because the software industry has inconsistently defined metrics or atomic units of measure, the data from actual projects are highly suspect in terms of consistency and comparability. It is hard enough to collect a homogeneous set of project data within one organization; it is extremely difficult to homogenize data across different organizations with different processes, languages, domains, and so on. For example, the fundamental unit of size (a source line of code or a function point) can be, and is, counted differently across the industry. It is surprising that modern language standards (such as Ada 95 and Java) don't make a simple definition of a source line reportable by the compiler. The exact definition of a function point or a SLOC is not very important, just as the exact length of a foot or a meter is arbitrary. It is simply important that everyone uses the same definition.
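The point about definitions can be made concrete. The sketch below is a hypothetical, minimal SLOC counter that assumes one common convention (count non-blank lines whose first text is not a comment); other organizations count physical lines, logical statements, or comment lines as well, which is exactly why the chosen convention must be stated and applied uniformly.

```python
def count_sloc(source: str, comment_prefix: str = "//") -> int:
    """Count source lines of code under one (arbitrary) convention:
    non-blank lines that do not begin with a comment marker."""
    count = 0
    for line in source.splitlines():
        stripped = line.strip()
        if stripped and not stripped.startswith(comment_prefix):
            count += 1
    return count

sample = """// a header comment
int main() {
    return 0;   // lines with trailing comments still count
}
"""
print(count_sloc(sample))  # 3 under this convention
```

Under a different but equally defensible convention (count every physical line, comments included), the same fragment would measure 4, which is why cross-organization comparisons require an agreed definition.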
There have been many long-standing debates among developers and vendors of software cost estimation models and tools. Three topics of these debates are of particular interest here:
1. Which cost estimation model to use
2. Whether to measure software size in source lines of code or function points
3. What constitutes a good estimate
About 50 vendors of software cost estimation tools, data, and services compete within the software industry. There are several popular cost estimation models (such as COCOMO, CHECKPOINT, ESTIMACS, KnowledgePlan, Price-S, ProQMS, SEER, SLIM, SOFTCOST, and SPQR/20), as well as numerous organization-specific models. Because my firsthand experience with these models has been centered on COCOMO and its successors, Ada COCOMO and COCOMO II, it is the basis of many of my software economics arguments and perspectives. COCOMO is also one of the most open and well-documented cost estimation models. The evolution of COCOMO into its current version, COCOMO II, is summarized in Appendix B. While portions of the appendix are not directly applicable to today's techniques and technologies, it provides an interesting historical perspective on the evolution of the issues and priorities of software economics over the past 20 years.
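To give a feel for how such models work, the sketch below computes the published basic-COCOMO equations, Effort = a·(KSLOC)^b person-months and Schedule = c·(Effort)^d months, with the organic-mode coefficients (a = 2.4, b = 1.05, c = 2.5, d = 0.38). A real estimate would also apply cost-driver effort multipliers, which are omitted here.

```python
def basic_cocomo(ksloc: float, a: float = 2.4, b: float = 1.05,
                 c: float = 2.5, d: float = 0.38):
    """Basic COCOMO with organic-mode coefficients by default.
    Returns (effort in person-months, schedule in months)."""
    effort = a * ksloc ** b          # nonlinear in size: b > 1 models diseconomy of scale
    schedule = c * effort ** d       # schedule compresses much more slowly than effort grows
    return effort, schedule

effort, months = basic_cocomo(32)    # a hypothetical 32-KSLOC project
print(f"{effort:.0f} person-months over about {months:.0f} months")
```

Note how the exponent b > 1 encodes a diseconomy of scale: doubling the size more than doubles the estimated effort, one of the core assumptions these models share.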
The measurement of software size has been the subject of much rhetoric. There are basically two objective points of view: source lines of code and function points. Both perspectives have proven to be more valuable than a third, which is the subjective or ad hoc point of view practiced by many immature organizations that use no systematic measurement of size.
Many software experts have argued that SLOC is a lousy measure of size. However, when a code segment is described as a 1,000-source-line program, most people feel comfortable with its general "mass." If the description were 20 function points, 6 classes, 5 use cases, 4 object points, 6 files, 2 subsystems, 1 component, or 6,000 bytes, most people, including software experts, would ask further questions to gain an understanding of the subject code. (Many of them would ask how many SLOC.) So SLOC is one measure that still has some value.
I was a SLOC zealot a decade ago because SLOC worked well in applications that were predominantly custom-built and because SLOC measurement was easy to automate and instrument. Today, language advances and the use of components, automatic source code generation, and object orientation have made SLOC a much more ambiguous measure. As an acute example, the case study in Appendix D describes the carefully crafted approaches for counting SLOC to accommodate reuse, custom development, and code generation tools on a large software project.
The use of function points has a large following, including Capers Jones, who cites the hazards associated with using SLOC metrics for object-oriented programs [Jones, 1994]. The International Function Point Users Group, formed in 1984, is the dominant software measurement association in the industry. The primary advantage of using function points is that this method is independent of technology and is therefore a much better primitive unit for comparisons among projects and organizations. The main disadvantage is that the primitive definitions are abstract and measurements are not easily derived directly from the evolving artifacts.
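As an illustration of how a function point count is built up, the sketch below computes an unadjusted count from the five IFPUG function types using the standard average-complexity weights (inputs 4, outputs 5, inquiries 4, internal logical files 10, external interface files 7). A full IFPUG count would classify each item as low, average, or high complexity and apply a value adjustment factor, both omitted here; the item counts are hypothetical.

```python
# Average-complexity IFPUG weights for the five function types.
WEIGHTS = {
    "external_inputs": 4,
    "external_outputs": 5,
    "external_inquiries": 4,
    "internal_logical_files": 10,
    "external_interface_files": 7,
}

def unadjusted_function_points(counts: dict) -> int:
    """Sum each function-type count times its average weight."""
    return sum(WEIGHTS[kind] * n for kind, n in counts.items())

# A hypothetical small application:
counts = {
    "external_inputs": 10,
    "external_outputs": 6,
    "external_inquiries": 4,
    "internal_logical_files": 3,
    "external_interface_files": 2,
}
print(unadjusted_function_points(counts))  # 10*4 + 6*5 + 4*4 + 3*10 + 2*7 = 130
```

Notice that nothing in the computation refers to a programming language, which is the technology independence cited above; the price is that the counts themselves must come from a human analysis of the specification rather than from the code.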
Although both measures of size have their drawbacks, I think an organization can make either one work. The use of some measure is better than none at all. Anyone doing cross-project or cross-organization comparisons should be using function points as the measure of size. Function points are also probably a more accurate estimator in the early phases of a project life cycle. In later phases, however, SLOC becomes a more useful and precise basis for various metrics. Chapter 16 presents my hypothesis of a next-generation cost model that could minimize or even eliminate the need to measure SLOC.
The general accuracy of conventional cost models (such as COCOMO) has been described as "within 20% of actuals, 70% of the time." This level of unpredictability in the conventional software development process should be truly frightening to every investor, especially in light of the fact that few projects miss their estimate by doing better than expected. This is an interesting phenomenon to be considered when scheduling labor-intensive efforts. Unless specific incentives are provided for beating the overall schedule, projects rarely do better than planned. Why? Teams and individuals perform subplanning to meet their objectives. If the time objective is lenient, they either expend energy elsewhere (in further training, helping others, or goofing off), or they continue to add quality beyond what is necessary. They almost never propose to accelerate the schedule. If they did, their suggestion would most likely meet with resistance from other stakeholders who are expecting to synchronize. So plans need to be as ambitious as can possibly be achieved.

Figure 2-3. The predominant cost estimation process. (The software project manager, working with the software architecture, development, and assessment managers, starts from "This project must cost $X to win this business," then works out "Here's how to justify that cost," weighing risks, options, and trade-offs.)
Most real-world use of cost models is bottom-up (substantiating a target cost) rather than top-down (estimating the "should" cost). Figure 2-3 illustrates the predominant practice: The software project manager defines the target cost of the software, then manipulates the parameters and sizing until the target cost can be justified. The rationale for the target cost may be to win a proposal, to solicit customer funding, to attain internal corporate funding, or to achieve some other goal.
The process described in Figure 2-3 is not all bad. In fact, it is absolutely necessary to analyze the cost risks and understand the sensitivities and trade-offs objectively. It forces the software project manager to examine the risks associated with achieving the target costs and to discuss this information with other stakeholders. The result is usually various perturbations in the plans, designs, process, or scope being proposed. This process provides a good vehicle for a basis of estimate and an overall cost analysis.
A practical lesson learned from the field is that independent cost estimates (those done by people who are independent of the development team) are usually inaccurate. The only way to produce a credible estimate is for a competent team—the software project manager and the software architecture, development, and test managers—to iterate through several estimates and sensitivity analyses. This team must then take ownership of that cost estimate for the project to succeed.
What constitutes a good software cost estimate? This tough question is discussed in detail in Chapter 10. In summary, a good estimate has the following attributes:
• It is conceived and supported by the project manager, architecture team, development team, and test team accountable for performing the work.
• It is accepted by all stakeholders as ambitious but realizable.
• It is based on a well-defined software cost model with a credible basis.
• It is based on a database of relevant project experience that includes similar processes, similar technologies, similar environments, similar quality requirements, and similar people.
• It is defined in enough detail so that its key risk areas are understood and the probability of success is objectively assessed.
Extrapolating from a good estimate, an ideal estimate would be derived from a mature cost model with an experience base that reflects multiple similar projects done by the same team with the same mature processes and tools. Although this situation rarely exists when a project team embarks on a new project, good estimates can be achieved in a straightforward manner in later life-cycle phases of a mature project using a mature process.