The CCPDS-R metrics approach was first developed solely to manage the project and meet the needs of the contract. While it achieved these goals, it also resulted in a great case study. CCPDS-R was nowhere near perfect; numerous mistakes were made all along the way. This was true of the metrics program, too: It measured some of the wrong things, measured some things in the wrong way, struggled with early interpretations, and used some manual methods where automation was needed. Nevertheless, these metrics activities led to more teamwork, better processes, better understanding of risks, and, ultimately, better products produced with more efficiency. Early in the project, there was resistance from management, from practitioners, and even from contract monitors. After the first year, following several improvements in interpretation, automation, presentation, and definition, there was nearly universal support. All parties used the objective data from the metrics program to substantiate their plans, their risks, their design directions, and their results.
All the Common Subsystem metrics presented here were extracted directly from the monthly project management reviews. None of these data were created after the fact. Although the CCPDS-R metrics program was a contractual requirement, the government did not define the actual metrics to be used. This was left up to the contractor so that the project team would take ownership of the metrics program selected.
TRW formulated a metrics program with four objectives:
1. Provide data for assessing current project trends and identifying the need for management attention
2. Provide data for planning future builds and subsystems
3. Provide data for assessing the relative complexity of meeting the software end-item quality requirements
4. Provide data for identifying where process improvements are needed and substantiating the need
The following sections contain explicit examples of the metrics recommended in Chapter 13. There are several instances of progress metrics as well as the quality indicators of scrap, rework, and maturity. The basis for automation, which required some interesting technical approaches embedded directly in the evolving design and code artifacts, is also described.
Measuring development progress accurately with several concurrent builds in various states was a complex undertaking for the Common Subsystem management team. Significant effort went into devising a consistent approach that would provide accurate insight into subsystem-level status and build status. The goal was a balanced assessment that included the following:
• The Ada/ADL metrics. These data provided good insight into the direct indicators of technical progress. By themselves, these metrics were fairly accurate at depicting the true progress in design and implementation. They were generally weak at depicting the completed contract deliverables and financial status.
• Earned value metrics. These data provided good insight into the financial status and contract deliverables. They were generally weak indicators of true technical progress.
As with most software metrics, both of these perspectives initially were somewhat inaccurate assessments of absolute progress. They were, however, excellent assessments of relative progress when tracked periodically (in this case, monthly). As experience was gained with these metrics, the absolute assessments became well-tuned predictors of success or risk. The overall assessment was crammed into one chart, as illustrated in Figure D-9. The figure depicts the top-level progress summary for each build and for the Common Subsystem as a whole. The length of shading within each build relative to the dashed line (corresponding to the current month) identifies whether progress was ahead of or behind schedule. For example, Figure D-9 displays month 17 status: Build 2 SAT testing is one month behind schedule, build 3 design work is one month ahead of schedule, the Common Subsystem design effort is on schedule, and Common Subsystem SAT testing is one month behind schedule. The shading was a judgment by the software chief engineer, who combined the monthly progress metrics and the monthly financial metrics into a consolidated (and somewhat subjective) assessment.
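The chief engineer's consolidated assessment can be approximated in code. The sketch below is a hypothetical illustration, not a CCPDS-R artifact: the class names, field names, and the 50/50 weighting are all assumptions. It blends a technical-progress percentage (from the Ada/ADL metrics) with an earned value percentage, then converts the blended figure into a months-ahead/behind number against the plan, mirroring the shading-versus-dashed-line comparison in Figure D-9.

```python
# Hypothetical sketch of a consolidated build-status roll-up.
# Weights and field names are illustrative assumptions, not CCPDS-R definitions.

from dataclasses import dataclass


@dataclass
class BuildStatus:
    name: str
    technical_pct: float     # progress per the Ada/ADL metrics (0-100)
    earned_value_pct: float  # progress per financial/earned value data (0-100)
    planned_pct: float       # percent complete the plan calls for this month
    months_per_pct: float    # schedule months represented by 1% of the build

    def blended_pct(self, w_technical: float = 0.5) -> float:
        """Model the chief engineer's judgment as a weighted average
        of the two (individually weak) perspectives."""
        return (w_technical * self.technical_pct
                + (1 - w_technical) * self.earned_value_pct)

    def months_variance(self) -> float:
        """Positive = ahead of schedule, negative = behind."""
        return (self.blended_pct() - self.planned_pct) * self.months_per_pct


# Illustrative data only: a build that the roll-up shows one month behind.
build2 = BuildStatus("Build 2 SAT", technical_pct=70, earned_value_pct=74,
                     planned_pct=80, months_per_pct=0.125)
print(round(build2.months_variance(), 1))  # prints -1.0 (one month behind)
```

The weighted average is deliberately simplistic; the text is explicit that the real assessment was a somewhat subjective judgment, so any such formula is only a first approximation that a reviewer would then adjust.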
Monthly collection of metrics provided detailed management insight into build progress, code growth, and other indicators. The metrics were collected by build and by CSCI to provide multiple perspectives. Individual CSCI managers collected and assessed their metrics before the metrics were incorporated into a project-level summary. This process was objective, efficient, and meaningful. Although the lowest-level estimates of TBD_statements were certainly subjective, they were being determined by the most knowledgeable people: the actual designers. They were being maintained in the evolving source code format because this was the format in which the designers worked.
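Because the TBD_statements lived directly in the evolving source code, tallying them could be automated rather than reported by hand. The sketch below is a hypothetical illustration of that idea; the directory layout, file extension, and marker spelling are assumptions, not details of the CCPDS-R tooling.

```python
# Hypothetical sketch: tallying TBD_statements embedded in evolving
# Ada/ADL source files so remaining work can be rolled up by file or CSCI.
# The ".ada" extension and marker spelling are illustrative assumptions.

import re
from collections import Counter
from pathlib import Path

TBD_PATTERN = re.compile(r"\bTBD_statement\b")


def count_tbds(root: Path) -> Counter:
    """Return a per-file count of remaining TBD_statements under root."""
    counts = Counter()
    for src in root.rglob("*.ada"):
        text = src.read_text(errors="ignore")
        counts[src.name] = len(TBD_PATTERN.findall(text))
    return counts


# Usage: total remaining TBDs across a (hypothetical) source tree
# totals = count_tbds(Path("common_subsystem/src"))
# print(sum(totals.values()))
```

Keeping the markers in the design artifacts themselves is what made a monthly, repeatable count possible: the designers updated the source as part of normal work, and the metric fell out of a scan rather than a separate status report.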