Demonstration-Based Assessment

Conventional design reviews prescribe standard review topics, resulting in tremendously broad reviews, only a small portion of which is truly important or understood by a diverse audience. For example, reviewing all requirements in equal detail is inefficient and unproductive. All requirements are not created equal; some are critical to the design evolution of the architecture, while others are critical only to a few components. The CCPDS-R software review process improved the efficiency of design evolution, review, and stakeholder concurrence in two ways: by allocating the technical breadth and depth of review to smaller scale design walkthroughs, and by focusing the major milestone reviews on the important design trade-offs. Moreover, focusing the design reviews on an executable demonstration provided a more understandable and concrete review vehicle for a diverse set of stakeholders.

Many conventional projects built demonstrations or benchmarks of stand-alone design issues (for example, a user-system interface mockup or a critical algorithm). However, the "design baseline" was usually represented on paper in design review presentations and design documents. Although it was easy for stakeholders to accept these artifacts as valid, they were ambiguous and not amenable to straightforward change management. Given the typical design review attitude that the design was "innocent until proven guilty," these representational formats made it easy to put up a credible facade and assert that the design was not guilty. In contrast, the CCPDS-R software design review process was demonstration-based, requiring tangible evidence that the architecture and design progress were leading to an acceptable quality product. The design review demonstrations provided such evidence by demonstrating an executable version of the current architecture under the critical scenarios of usage.

Numerous qualities of the evolving architecture baseline should be made visible at any given design review. At a minimum, these demonstrations provide acute insight into the integrity of the architecture and its subordinate components, the run-time performance risks, and the understanding of the system's operational concept and key use cases.

On the CCPDS-R project, lessons learned from informal design walkthroughs (and their informal demonstrations) were tracked via action items. Major milestone design reviews provided both a briefing and a demonstration. The briefing summarized the overall design and the important results of the design walkthroughs, and presented an overview of the demonstration goals, scenarios, and expectations. The demonstration at the design review was a culmination of the real design review process conducted by the software development team. The sequence of demonstration activities included the development of a plan, definition of a set of evaluation criteria, integration of components into an executable capability, and generation of test drivers, scenarios, and throw-away components. Although the demonstration plans were not elaborate (typically 15 to 35 pages), they captured the purpose of the demonstration, the actual evaluation criteria for assessing the results, the scenarios of execution, and the overall hardware and software configuration that would be demonstrated.
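The following is a minimal sketch, in Python, of how the contents of such a demonstration plan might be captured in structured form. It is purely illustrative: the class names, fields, and example values are assumptions, not the actual CCPDS-R plan format described above.

    from dataclasses import dataclass, field

    @dataclass
    class EvaluationCriterion:
        """One agreed-on criterion for judging the demonstration results."""
        identifier: str
        description: str
        threshold: str  # e.g., "no message backlog growth over a 10-minute run"

    @dataclass
    class DemonstrationPlan:
        """Captures the essentials of a demonstration plan: its purpose,
        evaluation criteria, execution scenarios, and the hardware/software
        configuration to be demonstrated."""
        purpose: str
        criteria: list = field(default_factory=list)
        scenarios: list = field(default_factory=list)
        hardware_config: dict = field(default_factory=dict)
        software_config: dict = field(default_factory=dict)

    # Example usage: a fragment of a hypothetical plan.
    plan = DemonstrationPlan(
        purpose="Assess architecture integrity under the worst-case usage scenarios",
        criteria=[EvaluationCriterion(
            "EC-1",
            "Sustained throughput under the peak load scenario",
            "no message backlog growth over a 10-minute run")],
        scenarios=["peak load", "fault detection and recovery"],
        hardware_config={"nodes": "primary and hot backup"},
    )
    print(f"{plan.purpose}: {len(plan.criteria)} criteria, {len(plan.scenarios)} scenarios")

Capturing the plan in a structured, reviewable form like this makes the evaluation criteria and scenarios explicit negotiation items rather than implicit assumptions.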

There is an interesting difference in the evolving insight into run-time performance when using a demonstration-based approach for design review. While the conventional approach almost always started with an optimistic assessment and then got worse, a modern demonstration-based approach frequently starts with a pessimistic assessment and then gets better.

The following key lessons were learned in the CCPDS-R demonstration activities:

• Early construction of test scenarios has a high ROI. The early investment in building some of the critical test scenarios served two invaluable purposes. First, it forced a certain important subset of the requirements to be "implemented" in very tangible form. These test scenarios caused several interactions and negotiations with the users that increased the understanding of requirements early in the life cycle. Second, these implementation activities got the test team involved early in building an environment for demonstration and testing that was highly mature by the time the project reached full-scale testing.

• Demonstration planning and execution expose the important risks. Negotiating the content of each demonstration and the associated evaluation criteria served to focus the architecture team, management team, and external stakeholders on the critical priorities of the early requirements and architecture activities. Rather than deal with the full elaboration and traceability of all 2,000 requirements, the team focused on understanding the 20 or so design drivers.

• Demonstration infrastructure, instrumentation, and scaffolding have a high ROI. At the outset of the project, there was a concern that these demonstrations would require a significant investment in throw-away components that were needed only for the purpose of the demonstration. In most cases, very little of this work ended up being thrown away. Most efforts resulted in components that were reused in later stand-alone tests, build integration tests, or engineering string tests. As one benchmark of the level of throw-away components, the IPDR demonstration amounted to 72,000 SLOC. Of this, only about 2,000 SLOC (smart stubs and dummy messages) were thrown away. (A sketch of such a smart stub follows this list.)

• Demonstration activities expose the crucial design trade-offs. The integration of the demonstration provided timely feedback on the important design attributes and the level of design maturity. The demonstration efforts typically involved 10 to 12 designers integrating components into the architecture. They ran into numerous obstacles, built numerous workarounds, and performed several component redesigns and a few architecture redesigns. Most of this work occurred over the period of a month, much of it late at night. What was really going on in these all-night integration-debug-rebuild-redesign efforts was very detailed, very effective design review. I coordinated these activities, gaining a first-hand understanding of what the architectural strengths and weaknesses were, which components were mature, which components were fragile, and what the priorities must be in post-demonstration improvements.

• Early performance issues drive early architecture improvements. The first two demonstrations contained extensive functionality and demonstrated run-time performance that was significantly less than required. The demonstration evaluation criteria were close to the end-item performance requirements. In retrospect, this was counterproductive, because it set an early expectation among the contract monitors that the demonstration evaluation criteria and the end-item requirements would be tightly coupled. Although the customer and TRW management were initially quite anxious about this situation, the straightforward resolutions and the substantial progress made in subsequent demonstrations allayed their concerns.
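To make the "smart stub" idea from the infrastructure bullet above more concrete, here is a small, hypothetical Python sketch. The component name, interface, and numbers are invented for illustration (the actual CCPDS-R components were not written in Python): the stub honors the interface the real component will eventually provide and returns canned results after a simulated processing delay, so the surrounding architecture can be integrated and timed before the real implementation exists.

    import random
    import time

    class TrackProcessorStub:
        """A 'smart stub' standing in for an unbuilt processing component.

        It accepts the same messages the real component will accept and returns
        plausible dummy results, so end-to-end demonstration scenarios can be
        integrated and measured early."""

        def __init__(self, simulated_latency_s: float = 0.005):
            # Roughly the processing time budgeted for the real component,
            # so end-to-end timing measurements remain meaningful.
            self.simulated_latency_s = simulated_latency_s

        def process(self, message: dict) -> dict:
            time.sleep(self.simulated_latency_s)
            return {
                "message_id": message.get("id"),
                "status": "processed",
                "confidence": round(random.uniform(0.8, 1.0), 2),  # dummy value
            }

    # A demonstration driver can use the stub now and the real component later,
    # without changing the calling code.
    component = TrackProcessorStub()
    print(component.process({"id": 42, "payload": "dummy warning message"}))

Because the stub sits behind the same interface as the eventual component, most of this scaffolding can later be reused in stand-alone and integration tests rather than thrown away.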

Using demonstrations as the predominant intermediate product of an organic development effort is well understood; Section 9.1 describes demonstrations with little discussion of multiple-stakeholder coordination. In the context of multiple stakeholders in a contractual situation, however, implementing a demonstration-based assessment can be subtly difficult. The next few sections provide detailed perspectives that illuminate some of the CCPDS-R experience.

The IPDR Demonstration

The interim PDR major milestone demonstration of the Common Subsystem had three critical objectives:

1. Tangible assessment of the software architecture design integrity through construction of a prototype SAS (a simplified sketch of exercising such an executable skeleton follows this list)

2. Tangible assessment of the critical requirements understanding through construction of the worst-case missile warning scenario

3. Exposure of the architectural risks associated with the peak missile warning scenario (the worst-case data processing performance corresponding to a mass missile raid from the Soviet Union) and the fault detection and recovery scenario (the worst-case control processing associated with a failure in the primary processing thread and a real-time switchover to a hot backup processing thread)
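As a concrete, though highly simplified, illustration of objectives 1 and 3, the sketch below shows the general shape of exercising an executable architecture skeleton under a peak-load scenario: a stubbed processing task is fed a burst of dummy messages, and the driver reports whether the skeleton drains the load within a deadline. Everything here is an assumption for illustration only; the real CCPDS-R demonstrations ran the actual prototype SAS and its scenarios, not Python threads.

    import queue
    import threading
    import time

    inbound = queue.Queue()   # message path between skeleton tasks
    processed = []            # ids of messages handled by the stub task

    def stub_processing_task(stop: threading.Event) -> None:
        """Stands in for the real warning-processing component in the skeleton."""
        while not stop.is_set():
            try:
                msg = inbound.get(timeout=0.1)
            except queue.Empty:
                continue
            time.sleep(0.001)               # budgeted per-message processing time
            processed.append(msg["id"])

    def run_peak_scenario(message_count: int = 500) -> None:
        """Pump a worst-case burst of messages through the skeleton and report
        whether the load is drained within a deadline."""
        stop = threading.Event()
        worker = threading.Thread(target=stub_processing_task, args=(stop,))
        worker.start()
        start = time.time()
        for i in range(message_count):
            inbound.put({"id": i, "payload": "dummy track message"})
        deadline = start + 10.0
        while len(processed) < message_count and time.time() < deadline:
            time.sleep(0.05)
        stop.set()
        worker.join()
        elapsed = time.time() - start
        print(f"processed {len(processed)}/{message_count} messages in {elapsed:.2f} s")

    if __name__ == "__main__":
        run_peak_scenario()

The value of this style of assessment is that the numbers come from executing the architecture under its critical scenarios, not from paper estimates, which is exactly what made the IPDR demonstration results tangible evidence rather than assertion.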

The CCPDS-R software culture is evident in these objectives. The demonstrations were not set up to be cakewalks that would impress the customer with perfect results and minimal open issues. (Neither were the walkthroughs, project management reviews, nor major milestones.) These demonstrations were always honest engineering activities with ambitious goals, open discussion of trade-offs, and a show-me approach to substantiating assertions about progress and quality. The results of a demonstration were apt to change requirements, plans, and designs equally; all three of these dimensions evolved during the life cycle.

Demonstration activities generally spanned a six-month period, with the first three months focused on planning. Only a few people across the stakeholder teams participated in specifying the formal evaluation criteria. Figure D-8 summarizes the schedule for the IPDR demonstration; it includes details of the intense integration period in the two months before the demonstration.

The first three months of planning, which encompassed a draft plan, government review and comment, and final plan production, could have been achieved in one week with a collocated team of all interested stakeholders. The review sequence that occurred was a requirement of the contract. Because this was the first time that either TRW or the customer had used a demonstration-based approach, both parties were unsure of the best process and settled on an overly conservative one. This demonstration was the first attempt at constructing a full-scale SAS. Consequently, this was the first (and worst-case) major integration effort for the Common Subsystem. The subsequent demonstrations tended to have shorter, but equally intense, integration activities lasting four or five weeks.

[Figure D-8: IPDR demonstration schedule, plotted in weeks after activity initiation. Top-level activities: preliminary demonstration plan, government feedback on plan, final demonstration plan, demonstration preparation, and re-demonstration and report. Demonstration preparation is broken down into NAS and demo testbed installation, scenario/message definition, SAS construction, demo driver and stub construction, demonstration integration, dry run and demonstration tuning, and the demonstration milestone.]
