The Mythical Man-Month

The classic book, The Mythical Man-Month by Frederick P. Brooks, was first published in 1975. Brooks worked at IBM as the manager of a large project that developed the OS/360 operating system. Although OS/360 was eventually a successful product for IBM, the project was late, took more memory than planned, and cost several times more than originally estimated. In fact, the product did not perform well until after several releases. Based upon his experience, Brooks wrote a number of essays that were embodied in his book. As a result of his timeless advice (and probably due to the fact that some things have not changed, although the term person-month may be more appropriate today), a twentieth anniversary edition was issued. The following are some of Brooks' insights:

■ "First, our techniques of estimation are poorly developed. More seriously, they reflect an unvoiced assumption which is quite untrue—i.e., that all will go well." (14)

■ "Secood, our estimating techniques fallaciously confuse effort with progress, hiding the assumption that men and months are interchangeable." (14)

"Third, because we are uncertain of our estimates, software managers often lack the courteous stubbornness of Antoine's chef (14)-. Good cooking takes time. If you are made to wait, it is to serve you better, and to please you." (From the menu of Antoine's, a restaurant in New Orleans) "Fourth, schedule progress is poorly monitored. Techniques proven and routine in other engineering disciplines are considered radical innovations in software engineering." (14) "Fifth, when schedule slippage is recognized, the natural tendency (and traditional) response it to add more manpower. Like dousing a fire with gasoline, this makes matters worse, much worse. More fire requires more gasoline, and thus begins a regenerative cycle which ends in disaster." (14)

This became known as Brooks' Law: "Adding manpower to a late software project makes it later." (25)

Maintenance projects and the installation of packaged software can experience similar difficulties.

The challenge lies in trying to estimate something that is logical, rather than physical, and that is not well defined until the later stages of the project life cycle. Scope definition can only provide a high-level view of what is and what is not within the scope boundary of the project. Specific requirements, in terms of features and functionality, are generally not defined until later, during the design phase. In addition, the complexity and technical challenges of implementing those features are either unknown or optimistically glossed over in the early stages of the project. As a result, estimating an IT project can be like trying to hit a moving target: hitting it accurately requires continuous adjustment.

As illustrated in Figure 6.4, the first step to accurately estimating an IT application is determining its size (Jones 1998). In other words, how big is the application? Without getting into too much detail at this point, it should be intuitive that it takes more effort (i.e., in terms of schedule, resources, and budget) to build a larger system than a smaller system. However, the size of the application is only one piece of the estimation puzzle. A good portion of time and effort will be spent on features and functionality that are more complex. As a result, the greater the complexity, the more time and effort that will be spent. Constraints and various influences will also affect the time and effort needed to develop a particular application. These constraints could be attributes of the application (Jones 1998) or include the processes, people, technology, environment, and required quality of the product as well (Royce 1998). Once the resources and time estimates are known, the specific activities or tasks can be sequenced in order to create the project's schedule and budget.
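To make the flow in Figure 6.4 concrete, the following back-of-the-envelope sketch in Python strings the pieces together: size, adjusted for complexity and constraints, yields effort, and effort drives the schedule. Every value in it is a hypothetical assumption for illustration, not a figure from Garmus and Herron, Jones, or Royce.

# Illustrative sketch of the Figure 6.4 flow. All values are assumed.
size_fp = 500             # estimated application size in function points
productivity = 10         # function points per person-month (assumed)
complexity_factor = 1.2   # inflates effort for more complex features (assumed)
constraint_factor = 1.1   # process, people, technology, environment, quality (assumed)

effort_pm = (size_fp / productivity) * complexity_factor * constraint_factor
team_size = 5             # available developers (assumed)
# Naive division: it assumes people and months are interchangeable,
# which is exactly the assumption Brooks warns against.
schedule_months = effort_pm / team_size

print(f"Effort: {effort_pm:.0f} person-months")    # Effort: 66 person-months
print(f"Schedule: {schedule_months:.1f} months")   # Schedule: 13.2 months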

Lines of Code (LOC)

Counting the number of lines of code in computer programs is the most traditional and widely used software metric for sizing the application product. It is also the most controversial.

Although counting lines of code seems intuitively obvious—a 1,000 LOC Java program will be ten times larger than a 100 LOC Java program—counting LOC is not all that straightforward. First, what counts as LOC? Do we include comments? Perhaps we should not, because a programmer could artificially boost his or her productivity by writing one hundred comment lines for every line of code that actually did something. On the other hand, comments are important because they tell us what the code should be doing, which makes the program easier to debug and helps others understand what each section of code is doing.

What about declaring variables? Do they count as LOC? In addition, experienced programmers tend to write less code than novice programmers. After all, an experienced programmer can write more efficient code, code that does the same thing in fewer lines than a novice programmer would use. The same can be said for different programming languages. Writing a program in Assembler requires a great deal more code than writing a similar program in Visual Basic. In fact, one could argue that counting LOC could encourage programmers to write inefficient code, especially when LOC are used as a productivity metric. Finally, it is much easier to count the lines of code after a program is written than it is to estimate how many lines of code will be required to write the program.

Figure 6.4 Software Engineering Estimation Model (SOURCE: Adapted from Garmus and Herron 1996; Jones 1998; Royce 1998)
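To see how much the counting convention matters, consider a small Python sketch (not from the chapter) that reports several plausible LOC measures for the same program:

# A minimal LOC counter showing how counting conventions change the result.
# Lines beginning with '#' are treated as comments (Python convention).
def count_loc(source: str) -> dict:
    physical = comments = blanks = 0
    for line in source.splitlines():
        stripped = line.strip()
        physical += 1
        if not stripped:
            blanks += 1
        elif stripped.startswith("#"):
            comments += 1
    return {
        "physical": physical,
        "comments": comments,
        "blanks": blanks,
        "code": physical - comments - blanks,  # non-comment, non-blank lines
    }

program = """# compute a total
total = 0
for x in range(10):
    total += x  # accumulate

print(total)
"""
print(count_loc(program))  # the same program yields four different "sizes"

Depending on whether comments and blank lines are included, the same six physical lines report as anywhere from four to six LOC, which is precisely why two organizations can measure the same program and get different numbers.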

Function Points1

The inherent problems of LOC as a metric for estimation and productivity created the need for a better software metric. In 1979, Allan Albrecht of IBM proposed the idea of function points at a conference hosted by IBM in Monterey, California (Albrecht 1979). Function points are a synthetic metric, similar to ones used every day, such as hours, kilos, tons, nautical miles, degrees Celsius, and so on. However, function points focus on the functionality and complexity of an application system or a particular module. For example, just as a 20 degree Celsius day is warmer than a 10 degree Celsius day, a 1,000 function point application is larger and more complex than a 500 function point application.

The good thing about function points is that they are independent of the technology. More specifically, functionality and the technology are kept separate, so we can compare different applications that may or may not use different programming languages or technology platforms. That is, we can compare one application written in COBOL with another application developed in Java. Moreover, function point analysis is reliable—i.e., two people who are skilled and experienced in function point analysis will obtain function point counts that are the same, that is, within an acceptable margin of error.

1 A more thorough discussion of function point analysis is provided in Appendix A.

Counting function points is fairly straightforward; however, the rules can be complex for the novice. It is recommended that anyone serious about learning function point analysis become certified. Although several function point organizations exist, the two main ones are the International Function Point Users Group (IFPUG) and the United Kingdom Function Point Users Group (UFPUG). Both of these nonprofit organizations oversee the rules, guidelines, standards, and certifications for function point analysis. In addition, there are resources at the end of the chapter if you are interested in learning more about function points.

The key to counting function points is having a good understanding of the user's requirements. Early on in the project, a function point analysis can be conducted based on the project's scope; a more detailed analysis of the user's requirements can then be made during the analysis and design phases. In fact, function point analysis can and should be conducted at various stages of the project life cycle. For example, a function point analysis based on the project's scope definition can be used for estimation and for developing the project's plan. During the analysis and design phases, function points can be used to manage and report progress and to monitor scope creep. In addition, a function point analysis conducted during or after the project's implementation can be useful for determining whether all of the functionality was delivered. Capturing this information in a repository or database allows it to be combined with other metrics useful for benchmarking, estimating future projects, and understanding the impact of new methods, tools, technologies, and best practices that were introduced.

Function point analysis is based on an evaluation of five data and transactional types that define the application boundary as illustrated in Figure 6.5.

• Internal Logical File (ILF)—An ILF is a logical file that stores data within the application boundary. For example, each entity in an Entity-Relationship Diagram (ERD) would be considered an ILF. The complexity of an ILF can be classified as low, average, or high based on the number of data elements and subgroups of data elements maintained by the ILF. An example of a subgroup would be new customers for an entity called customer. Examples of data elements would be customer number, name, address, phone number, and so forth. In short, ILFs with fewer data elements and subgroups will be less complex than ILFs with more data elements and subgroups.

• External Interface File (EIF)—An EIF is similar to an ILF; however, an EIF is a file maintained by another application system. The complexity of an EIF is determined using the same criteria used for an ILF.

• External Input (EI)—An EI refers to processes or transactional data that originate outside the application and cross the application boundary from outside to inside. The data generally are added, deleted, or updated in one or more files internal to the application (i.e., internal logical files). A common example of an EI would be a screen that allows the user to input information using a keyboard and a mouse. Data can, however, pass through the application boundary from other applications. For example, a sales system may need a customer's current balance from an accounts receivable system. Based on its complexity, in terms of the number of internal files referenced, number of data elements (i.e., fields) included, and any other human factors, each EI is classified as low, average, or high.

• External Output (EO)—Similarly, an EO is a process or transaction that allows data to exit the application boundary. Examples of EOs include reports, confirmation messages, derived or calculated totals, and graphs or charts. This data could go to screens, printers, or other applications. After the EOs are counted, they are rated based on their complexity in the same way as the external inputs (EIs).

• External Inquiry (EQ)—An EQ is a process or transaction that includes a combination of inputs and outputs for retrieving data from either the internal files or from files external to the application. EQs do not update or change any data stored in a file; they only read this information. Queries with different processing logic or a different input or output format are counted as separate EQs. Once the EQs are identified, they are classified based on their complexity as low, average, or high, according to the number of files referenced and the number of data elements included in the query.

Once all of the ILFs, EIFs, EIs, EOs, and EQs are counted and their relative complexities rated, an Unadjusted Function Point (UAF) count is determined. For example, let's say that after reviewing an application system, the following was determined:

Using Table 6.1, the UAF value is calculated.
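The example counts and Table 6.1 are not reproduced in this copy, but the arithmetic is simple: multiply the number of components at each complexity level by its weight and sum the results. The Python sketch below uses the standard IFPUG complexity weights together with hypothetical component counts:

# Unadjusted Function Point (UAF) calculation using the standard IFPUG
# complexity weights. The component counts below are hypothetical.
WEIGHTS = {
    "ILF": {"low": 7, "average": 10, "high": 15},
    "EIF": {"low": 5, "average": 7, "high": 10},
    "EI":  {"low": 3, "average": 4, "high": 6},
    "EO":  {"low": 4, "average": 5, "high": 7},
    "EQ":  {"low": 3, "average": 4, "high": 6},
}

# Hypothetical counts: number of components found at each complexity level.
counts = {
    "ILF": {"low": 3, "average": 2, "high": 1},
    "EIF": {"low": 2, "average": 1, "high": 0},
    "EI":  {"low": 5, "average": 4, "high": 2},
    "EO":  {"low": 4, "average": 2, "high": 1},
    "EQ":  {"low": 3, "average": 2, "high": 0},
}

uaf = sum(
    WEIGHTS[component][complexity] * n
    for component, by_complexity in counts.items()
    for complexity, n in by_complexity.items()
)
print(f"UAF = {uaf}")   # UAF = 166 for these assumed counts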

The next step in function point analysis is to compute a Value Adjustment Factor (VAF). The VAF is based on the Degrees of Influence (DI), often called the Processing Complexity Adjustment (PCA), and is derived from the fourteen General Systems Characteristics (GSC) shown in Table 6.2. To determine the total DI, each GSC is rated based on the following scale from 0 to 5:

• 0 = no influence

• 1 = incidental influence

• 2 = moderate influence

• 3 = average influence

• 4 = significant influence

• 5 = strong influence throughout

Continuing with our example, let's say that after reviewing the application, the degrees of influence shown in Table 6.2 were determined to produce 210 total adjusted function points (TAFP). So what do we do with the total adjusted function point number? Once calculated, the count can be transformed into development estimates. The first approach focuses on productivity—i.e., a person, such as a programmer, can produce a certain number of function points in a given amount of time, such as a day, a week, or a month. Once again, creating a repository of function point information and other metrics allows an organization to compare various projects and support more realistic estimates.
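The adjustment step itself uses the standard IFPUG formula: VAF = 0.65 + 0.01 × TDI, and TAFP = UAF × VAF. The UAF, TDI, and delivery rate in the sketch below are hypothetical values, chosen only so the result matches the chapter's 210 TAFP:

# Standard IFPUG value adjustment. TDI is the sum of the fourteen GSC
# ratings (each 0-5), so TDI ranges 0-70 and VAF ranges 0.65-1.35.
uaf = 168     # unadjusted function points (assumed)
tdi = 60      # total degrees of influence across the 14 GSCs (assumed)

vaf = 0.65 + 0.01 * tdi     # value adjustment factor
tafp = round(uaf * vaf)     # total adjusted function points
print(f"VAF = {vaf:.2f}, TAFP = {tafp}")   # VAF = 1.25, TAFP = 210

# Productivity approach: divide by an assumed delivery rate.
delivery_rate = 10          # function points per person-month (assumed)
print(f"Effort: about {tafp / delivery_rate:.0f} person-months")   # about 21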

The second approach focuses on converting the function point count into an equivalent number of lines of code. Continuing with our example, we can determine how many lines of code will be required for several different programming languages. Table 6.3 provides an example that approximates the number of lines of code per function point for some of the more popular programming languages. As you can see, the number of lines of code depends on the programming language. An application or module that has 210 total adjusted function points would require, for example, about 134,400 lines of code if programmed in machine language, but only 6,090 lines of code using Visual Basic 5. Again, these figures suggest not only the size of the application, but also its complexity.
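As a sketch of this conversion (the two factors below are the ones implied by the chapter's example figures; Table 6.3 is the authoritative list):

# Convert function points to approximate LOC using LOC-per-FP factors.
# The factors below are implied by the chapter's example; see Table 6.3.
LOC_PER_FP = {"machine language": 640, "Visual Basic 5": 29}

tafp = 210
for language, factor in LOC_PER_FP.items():
    print(f"{language}: about {tafp * factor:,} LOC")
# machine language: about 134,400 LOC
# Visual Basic 5: about 6,090 LOC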

In addition, T. Capers Jones has conducted extensive research and has developed a technique called backfiring, which allows direct conversion from an application's source code to an equivalent function point count. Individual programming styles can create variation in the number of LOC, so the accuracy of backfiring is not very high. It can, however, provide an easy way to create a function point inventory of an organization's project portfolio if LOC counts are readily available.
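Backfiring simply runs the conversion in reverse. A minimal sketch, with both the measured code size and the conversion factor assumed for illustration:

# Backfiring: estimate function points from existing source code.
# Accuracy is limited because individual coding styles vary.
existing_loc = 53_500   # measured size of a legacy COBOL module (assumed)
loc_per_fp = 107        # a commonly cited conversion factor for COBOL (assumed)
print(f"About {existing_loc / loc_per_fp:.0f} function points")   # about 500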

Table 6.2 GSC and Total Adjusted Function Point

General System Characteristic          Degree of Influence

1.  Data communications
2.  Distributed data processing
3.  Performance
4.  Heavily used configuration
5.  Transaction rate
6.  On-line data entry
7.  End user efficiency
8.  On-line update
9.  Complex processing
10. Reusability
11. Installation ease
12. Operational ease
13. Multiple sites
14. Facilitate change

Total degrees of influence (TDI)
