1.1 The Waterfall Model

1.1.1 In Theory

In 1970, my father, Winston Royce, presented a paper titled "Managing the Development of Large Scale Software Systems" at IEEE WESCON [Royce, Winston, 1970]. This paper, based on lessons he had learned managing large software projects, remains the most quoted source of the waterfall model. It provides an insightful and concise summary of conventional software management philosophy circa 1970, and most of its 30-year-old advice has stood the test of time in the face of immense technology turnover.

The paper made three primary points. (Quotations and paraphrased statements are presented in italics.)

1. There are two essential steps common to the development of computer programs: analysis and coding.

2. In order to manage and control all of the intellectual freedom associated with software development, one must introduce several other "overhead" steps, including system requirements definition, software requirements definition, program design, and testing. These steps supplement the analysis and coding steps. Figure 1-1 illustrates the resulting project profile and the basic steps in developing a large-scale program.

[Figure 1-1. The waterfall model. Part 1, the two basic steps to building a program: analysis and coding, both of which involve creative work that directly contributes to the usefulness of the end product. Part 2, the large-scale system approach. Part 3, five necessary improvements for this approach to work: (1) complete program design before analysis and coding begin; (2) maintain current and complete documentation; (3) do the job twice, if possible; (4) plan, control, and monitor testing; (5) involve the customer.]

3. The basic framework described in the waterfall model is risky and invites failure. The testing phase that occurs at the end of the development cycle is the first event for which timing, storage, input/output transfers, etc., are experienced as distinguished from analyzed. The resulting design changes are likely to be so disruptive that the software requirements upon which the design is based are violated. Either the requirements must be modified or a substantial design change is warranted.

Item 1, which is seemingly trivial, will be expanded later into one of my overall management themes: the separation of the engineering stage from the production stage.

Seven of the article's nine pages are devoted to describing five improvements to the basic waterfall process that would eliminate most of the development risks alluded to in item 3. These five improvements are presented next. (Quotations and paraphrased statements are presented in italics, followed by my comments, in the context of today's technology and terminology.)

1. Program design comes first. The first step toward a fix is to insert a preliminary program design phase between the software requirements generation phase and the analysis phase. By this technique, the program designer assures that the software will not fail because of storage, timing, and data flux. As analysis proceeds in the succeeding phase, the program designer must impose on the analyst the storage, timing, and operational constraints in such a way that he senses the consequences. If the total resources to be applied are insufficient or if the embryonic operational design is wrong, it will be recognized at this early stage and the iteration with requirements and preliminary design can be redone before final design, coding, and test commence. How is this program design procedure implemented? The following steps are required:

• Begin the design process with program designers, not analysts or programmers.

• Design, define, and allocate the data processing modes even at the risk of being wrong. Allocate processing functions, design the database, allocate execution time, define interfaces and processing modes with the operating system, describe input and output processing, and define preliminary operating procedures.

• Write an overview document that is understandable, informative, and current so that every worker on the project can gain an elemental understanding of the system.

▲ The essence of the process framework I present in later chapters is architecture-first development. Although a few terms may be changed (for example, architecture is used instead of program design), the essence of a modern process is congruent with the explanation given here. As described later, the architecture comes first and is designed and developed in parallel with planning and requirements definition as part of the engineering stage of a project.
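To ground the idea in today's terms, here is a minimal sketch in Python of what committing to interfaces before implementation can look like. The MessageStore contract and its methods are hypothetical, not from Royce's paper; the point is that the architecture team fixes the contract first and implementations come later.

    from abc import ABC, abstractmethod

    class MessageStore(ABC):
        """An interface committed to during program design, before any implementation."""

        @abstractmethod
        def put(self, key: str, payload: bytes) -> None:
            """Persist a payload under a key."""

        @abstractmethod
        def get(self, key: str) -> bytes:
            """Retrieve a previously stored payload."""

Concrete implementations (in-memory for prototyping, database-backed for production) can then be developed and swapped without disturbing the clients that were written against the interface.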

2. Document the design. The amount of documentation required on most software programs is quite a lot, certainly much more than most programmers, analysts, or program designers are willing to do if left to their own devices. Why do we need so much documentation? (1) Each designer must communicate with interfacing designers, managers, and possibly customers. (2) During early phases, the documentation is the design. (3) The real monetary value of documentation is to support later modifications by a separate test team, a separate maintenance team, and operations personnel who are not software literate.

▲ If we ignore the technological inadequacies of the time frame in which the paper was written, the essence of this "document the design" message is still valid. Understandable representations of the artifacts, accessible by all stakeholders and teams, are essential. However, major advances in notations, languages, browsers, tools, and methods have rendered the need for many of the documents obsolete. In later chapters, I argue at length that to focus on documentation is wrong and counterproductive. This is because today's technologies support rigorous and self-documenting notations for requirements, designs, and implementations.
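As one illustration of a self-documenting notation, consider the small sketch below. The Order type and its fields are hypothetical, but a typed declaration like this carries much of what a separate design document once had to record: names, types, and the rationale for a representation choice.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Order:
        """A customer order. The typed, named fields serve as the design record;
        a reader needs no separate design document to learn the data layout."""
        order_id: int
        customer_id: int
        total_cents: int  # money as integer cents, avoiding float rounding error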

3. Do it twice. If a computer program is being developed for the first time, arrange matters so that the version finally delivered to the customer for operational deployment is actually the second version insofar as critical design/operations are concerned. Note that this is simply the entire process done in miniature, to a time scale that is relatively small with respect to the overall effort. In the first version, the team must have a special broad competence where they can quickly sense trouble spots in the design, model them, model alternatives, forget the straightforward aspects of the design that aren't worth studying at this early point, and, finally, arrive at an error-free program.

▲ This is a concise and simplistic description of architecture-first development, in which an architecture team is responsible for the initial engineering. Generalizing this practice, as I do later, results in a "do it N times" approach that is a principle of modern-day iterative development.

Without this first pass, the project manager is at the mercy of human judgment. With this first-pass "simulation," he can at least perform experimental tests of some key hypotheses and scope down what remains for human judgment, which in the area of computer program design (as in the estimation of takeoff gross weight, costs to complete, or the daily double) is invariably and seriously optimistic.

▲ This is a great description of the spirit of iterative development and its inherent advantages for risk management.
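The spirit of that first-pass "simulation" can be captured in a few lines. In the sketch below, the names and the 1-millisecond timing budget are assumed for illustration; the point is that an experiment replaces optimistic human judgment on one key hypothesis before the full build begins.

    import timeit

    # Hypothetical first-pass experiment: does a dict-based index meet a
    # 1-millisecond-per-lookup budget? Build a representative index and measure.
    index = {i: i * i for i in range(100_000)}

    per_call = timeit.timeit(lambda: index.get(54_321, -1), number=10_000) / 10_000
    print(f"mean lookup: {per_call * 1e6:.2f} microseconds (budget: 1000)")
    assert per_call < 1e-3, "prototype misses the timing budget; redesign first"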

4. Plan, control, and monitor testing. Without question, the biggest user of project resources—manpower, computer time, and/or management judgment—is the test phase. This is the phase of greatest risk in terms of cost and schedule. It occurs at the latest point in the schedule, when backup alternatives are least available, if at all. The previous three recommendations were all aimed at uncovering and solving problems before entering the test phase. However, even after doing these things, there is still a test phase and there are still important things to be done, including: (1) employ a team of test specialists who were not responsible for the original design; (2) employ visual inspections to spot the obvious errors like dropped minus signs, missing factors of two, jumps to wrong addresses (do not use the computer to detect this kind of thing, it is too expensive); (3) test every logic path; (4) employ the final checkout on the target computer.

▲ Here we have some good advice and some obsolete advice. Items 1 and 4, still good advice, are discussed at length in later chapters. Item 2 is still a popular quality assurance fad (use software inspections), but its purpose as presented here is mostly obsolete. Although it may have been a good, cost-effective practice using 1970 technology, it is not today. Computers, compilers, analyzers, and other tools are far more efficient mechanisms for catching obvious errors. As for item 3, testing every logic path was difficult enough in 1970, without the added complexity of distribution, reusable components, and several other complicating factors. It is certainly not feasible with most of today's systems. This is especially true with distributed computing, in which, with time as an additional dimension, there are an infinite number of logic paths. In a modern process, testing is a life-cycle activity that, when executed properly, requires fewer total resources and uncovers issues far earlier in the life cycle, when backup alternatives can still be used.
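To make the contrast with visual inspection concrete, here is a minimal sketch; the discounted_price function and its test are hypothetical. A dropped minus sign of exactly the kind Royce describes is caught mechanically and cheaply, with no inspector in the loop.

    import unittest

    def discounted_price(price: float, rate: float) -> float:
        """Apply a fractional discount; a dropped minus sign here flips the result."""
        return price * (1 - rate)

    class DiscountTest(unittest.TestCase):
        def test_discount_reduces_price(self):
            # An inspector might overlook a sign flip; this assertion cannot.
            self.assertAlmostEqual(discounted_price(100.0, 0.2), 80.0)

    if __name__ == "__main__":
        unittest.main()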

5. Involve the customer. For some reason, what a software design is going to do is subject to wide interpretation, even after previous agreement. It is important to involve the customer in a formal way so that he has committed himself at earlier points before final delivery. There are three points following requirements definition where the insight, judgment, and commitment of the customer can bolster the development effort. These include a "preliminary software review" following the preliminary program design step, a sequence of "critical software design reviews" during program design, and a "final software acceptance review" following testing.

▲ This insight has been pursued for many years and, where practiced, has produced positive results. Involving the customer with early demonstrations and planned alpha/beta releases is a proven, valuable technique.

I have always been overwhelmed by the insight presented in this paper. While most of the industry has spent considerable energy bashing the waterfall model approach, I find only minor flaws in the theory even when it is applied in the context of today's technology. The criticism should have been targeted at the practice of the approach, which incorporated various unsound and unworkable elements. I suspect that most critics never really understood this theory; they just understood the default practice.

Throughout this book, I refer to the past and current practice of the waterfall model approach, discussed next, as the "conventional" software management approach or process. I argue that it is no longer a good framework for modern software engineering practices and technologies, and I use it as the reality benchmark to rationalize an improved process that eliminates some of its fundamental flaws.

1.1.2 In Practice

Despite the advice of many software experts and the theory behind the waterfall model, some software projects still practice the conventional software management approach. However, because its use is declining and was far more prevalent in the past, I refer to it in the past tense throughout.

It is useful to summarize the characteristics of the conventional process as it has typically been applied, which is not necessarily as it was intended. Projects destined for trouble frequently exhibit the following symptoms:

• Protracted integration and late design breakage

• Late risk resolution

• Requirements-driven functional decomposition

• Adversarial stakeholder relationships

• Focus on documents and review meetings

Protracted Integration and Late Design Breakage

For a typical development project that used a waterfall model management process, Figure 1-2 illustrates development progress versus time. Progress is defined as percent coded, that is, demonstrable in its target form. (The software was compilable and executable; it was not necessarily complete, compliant, or up to specifications.) The following sequence was common:

• Early success via paper designs and thorough (often too thorough) briefings

• Commitment to code late in the life cycle

• Integration nightmares due to unforeseen implementation issues and interface ambiguities

• Heavy budget and schedule pressure to get the system working

• Late shoe-horning of nonoptimal fixes, with no time for redesign

• A very fragile, unmaintainable product delivered late

[Figure 1-2. Progress profile of a conventional software project. Development progress (percent coded) is plotted against the project schedule for the sequential activities: requirements, design, coding, integration, and testing. Each activity has a characteristic format and product: requirements analysis works in ad hoc text and produces documents; program design works in flowcharts and produces documents; coding and unit testing work in source code and produce coded units; protracted integration and testing work against configuration baselines and produce fragile baselines.]

Table 1-1. Expenditures by activity for a conventional software project

    ACTIVITY                    COST
    Management                    5%
    Requirements                  5%
    Design                       10%
    Code and unit testing        30%
    Integration and test         40%
    Deployment                    5%
    Environment                   5%
    Total                       100%

Given the immature languages and technologies used in the conventional approach, there was substantial emphasis on perfecting the "software design" before committing it to the target programming language, where it was difficult to understand or change. This practice resulted in the use of multiple formats (requirements in English, preliminary design in flowcharts, detailed design in program design languages, and implementations in the target language, such as FORTRAN, COBOL, or C) and error-prone, labor-intensive translations between formats.

Conventional techniques that imposed a waterfall model on the design process inevitably resulted in late integration and performance showstoppers. In the conventional model, the entire system was designed on paper, then implemented all at once, then integrated. Only at the end of this process was it possible to perform system testing to verify that the fundamental architecture (interfaces and structure) was sound. One of the recurring themes of projects following the conventional process was that testing activities consumed 40% or more of life-cycle resources. Table 1-1 provides a typical profile of cost expenditures across the spectrum of software activities.

Late Risk Resolution

A serious issue associated with the waterfall life cycle was the lack of early risk resolution. This was not so much a result of the waterfall life cycle as it was of the focus on early paper artifacts, in which the real design, implementation, and integration risks were still relatively intangible. Figure 1-3 illustrates a typical risk profile for conventional waterfall model projects. It includes four distinct periods of risk exposure, where risk is defined as the probability of missing a cost, schedule, feature, or quality goal. Early in the life cycle, as the requirements were being specified, the actual risk exposure was highly unpredictable.

[Figure 1-3. Risk profile of a conventional software project, plotting risk exposure (low to high) across the four periods of the life cycle.]
