Function Points

Function points are considered an alternative to the COCOMO model for estimating the effort required to develop software. Indeed, the approach's leading proponent, Capers Jones, argues strongly that function point analysis (FPA) is much superior to COCOMO and its derivative methods [10.19].

A formulation of FPA claims that unadjusted function points (UFPs) are related to the numbers of inputs, outputs, master files, interfaces, and inquiries [10.20]. These components are defined in more detail in the literature [10.21]. In particular, the UFP count is a linear combination of these components, with weighting factors. The UFP thus derived is then multiplied by an adjustment factor that changes the UFP by at most ±35% to develop the final function point count (FP). The adjustment factor is itself a function of some fourteen aspects of software complexity. One model for the estimation of effort, measured in work-hours, is then derived from the final function point count:

Thus, we have a method whereby we can estimate effort in work-hours from the function points, which, in turn, are derivable from the user-related estimates of inputs, outputs, files, interfaces and inquiries.
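To make the arithmetic concrete, the following Python sketch shows how an unadjusted count and the adjustment factor might be combined. The component weights and the 0.65 + 0.01 × (sum of fourteen ratings) adjustment form are assumptions drawn from commonly published FPA tables; they are consistent with the ±35% range described above but may differ in detail from the formulation in [10.20].

    # Illustrative sketch only; the weights and adjustment form are assumed,
    # not taken from [10.20].
    WEIGHTS = {                      # assumed average-complexity weights
        "inputs": 4,
        "outputs": 5,
        "inquiries": 4,
        "master_files": 10,
        "interfaces": 7,
    }

    def unadjusted_fp(counts):
        # Linear combination of the five components with weighting factors.
        return sum(WEIGHTS[name] * n for name, n in counts.items())

    def adjusted_fp(ufp, ratings):
        # Fourteen complexity ratings, each scored 0-5, yield an adjustment
        # factor between 0.65 and 1.35 (i.e., at most +/-35%).
        assert len(ratings) == 14
        return ufp * (0.65 + 0.01 * sum(ratings))

    counts = {"inputs": 20, "outputs": 15, "inquiries": 10,
              "master_files": 6, "interfaces": 4}
    ufp = unadjusted_fp(counts)        # 283 unadjusted function points
    fp = adjusted_fp(ufp, [3] * 14)    # 283 * 1.07 = 302.8 function points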

The preceding FPA has some definite attractions, as pointed out in the literature [10.20]. First, it does not explicitly depend on estimates of source instructions or lines of code, which, as pointed out earlier, can be difficult to make accurately. It also does not depend on the language used or other variables such as the use of CASE tools, and so forth. Also, the FPA may be usable at the stage of development when only requirements or design specifications have been formulated. Finally, the FPA tends to be oriented to the user's view of the system, as reflected by the five input components.

FPA has been gaining support and is considered to be a more than viable alternative to COCOMO-type estimation procedures. It is also embodied in existing software tools (e.g., SPQR/20 and ESTIMACS™) that add credibility and ease of use. Here again, only time will tell which of these two methods, if either, becomes the preferred procedure in terms of developer usage and customer acceptance.

10.5.5 Reliability

The issue of reliability in the software arena is still in its formative stages and is likely to be in a state of flux as industry gains more experience with the various ways in which software reliability has been viewed. The procedure described here is based on the theoretical and empirical work done at Bell Labs [10.22]. Their recommended approach is embedded in what is known as the Basic Execution Time Model (BETM).

BETM can be explained by referring to Figure 10.1. The graph plots the software failure intensity (I) against the total number of failures experienced (N) and, in this model, it is described as a straight line. The implication is that the software in question starts out at a given failure intensity Io and has a total number of defects No (the Y and X intercepts, respectively). BETM is a nonhomogeneous Poisson process, such that the failure intensity (I) varies and decreases linearly to zero as failures occur and are "repaired." Also, time is measured in computer program execution time, that is, the time actually spent by a processor in executing program instructions.

Figure 10.1. Basic execution time model of reliability.

The key issues associated with the use of the BETM are:

a. The initial failure intensity must be estimated.

b. The total number of failures, or defects, in the software must also be estimated.

For both issues, the software developer may not have much experience on which to base these initial estimates. As experience is gained and empirical data are derived from real programs, these estimates are likely to improve.

The equation describing the BETM, as represented in Figure 10.1, is simply

I = mN + Io    (10.8)

where

I = failure intensity
N = number of failures or defects
m = slope of the failure intensity/failure line
Io = the initial estimate of failure intensity

Since the line runs from Io on the Y axis to No on the X axis, its slope is m = -Io/No.

To illustrate how this relationship leads to a probability estimate, assume that Io = 60 failures per CPU-hour and that the total number of estimated failures (No) is 300. These estimates fully describe the BETM line in Figure 10.1. Now let us further assume that we have experienced a total of 200 failures to date. The calculated slope of the line is -60/300, or -0.2, and Equation (10.8) becomes

I = -0.2N + 60

We then calculate the current failure intensity as

I = (-0.2)(200) + 60 = -40 + 60 = 20 failures per CPU-hour

This so-called "instantaneous" failure intensity describes the current situation and can be used as the "failure rate" in a Poisson process, but only at the current time and situation. We can then ask the question: What is the probability that there will be k failures found in the next T hours of execution time? We can use the Poisson probability distribution to calculate this answer as

P(k failures in time T) = [(IT)^k / k!] e^(-IT)

If, for example, we take k = 0 and the number of hours T = 0.1 CPU-hour, the Poisson reduces to the exponential and becomes

P(0) = e^(-IT) = e^(-(20)(0.1)) = e^(-2) = 0.135

Thus, in this illustration, the probability that no defects will be found, that is, no failures experienced, in the next 0.1 CPU-hour is 0.135.
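The arithmetic of this example can be checked with a few lines of Python; the values Io = 60, No = 300, 200 failures to date, and T = 0.1 CPU-hour are taken directly from the illustration above.

    import math

    I0 = 60.0    # initial failure intensity (failures per CPU-hour)
    N0 = 300.0   # total estimated number of failures (defects)
    N = 200.0    # failures experienced to date

    m = -I0 / N0       # slope of the failure intensity line: -0.2
    I = m * N + I0     # current failure intensity: 20 failures per CPU-hour

    # Poisson probability of exactly k failures in the next T CPU-hours,
    # using the current failure intensity as the rate.
    def poisson_prob(k, T, rate):
        return (rate * T) ** k / math.factorial(k) * math.exp(-rate * T)

    p_zero = poisson_prob(0, 0.1, I)   # exp(-2), approximately 0.135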

The BETM is, of course, not the only formulation of software reliability. Indeed, Bell Labs researchers argue that the logarithmic Poisson execution time model has high predictive validity and is only somewhat more complicated than is the BETM. Several other competitive models are presented and explored in their book [10.22], many of which have also been incorporated into the software package known as SMERFS, as discussed in the next section.

10.5.6 SMERFS

A particularly useful initiative on the part of the Navy led to the development of SMERFS (Statistical Modeling and Estimation of Reliability Functions for Software) [10.23]. SMERFS provides both the background mathematics and computer program disks that support the calculation of software reliability. As an example, one module in SMERFS deals with the execution of five software reliability models that yield reliability estimates for execution time data. These models are:

1. The Musa Basic Execution Time Model

2. The Musa Log Poisson Execution Time Model

3. The Littlewood and Verrall Bayesian Model

4. The Geometric Model

5. The NHPP Model for Time-Between-Errors

The first item on this list is the basis for the software reliability formulation in the previous section.

If the user is inclined to adopt an error-count approach to reliability estimation and prediction, the SMERFS formulation is able to support such an approach through the following five models:

1. The Generalized Poisson Model

2. The Nonhomogeneous Poisson Model

3. The Brooks and Motley Model

4. The Schneidewind Model

5. The S-Shaped Reliability Growth Model

Thus, developments such as SMERFS provide a good foundation for the rational consideration of how to deal with the difficult issue of software reliability, which itself is likely to be under debate for some time to come [10.24].
