Advantages of Feature Point Analysis
Advantages of feature point analysis are essentially the same as those for function point analysis, with the additional advantage of being an excellent approach to use in the size estimation of algorithmically intensive systems.
Disadvantages of Feature Point Analysis
The primary disadvantage of feature point analysis is the subjective classification of algorithmic complexity.
Counting "object points" to determine software size is an approach developed for object-oriented technology. Conducted at a more macro level than function points, it assigns one object point to each unique class or object, such as a screen, output report, and so on. The rest of the process is similar to that of function and feature points, but the conversion factors are different.
Estimating improves with each passing phase because each phase yields more knowledge about project needs. A great deal of knowledge is revealed in the analysis and design phase, in which models are produced (Chapter 22) that allow for increasingly accurate size estimates. Until that phase of the project is reached, the useful but very high-level analysis models produced in the planning phase may be used as another method, simple but quick, for estimating size.
The concept of blitz modeling is based on Tom DeMarco's bang metric. Counting the component pieces of the system (design elements) and multiplying the count by a productivity factor (on average, how many lines of procedural code each takes to implement, based on historical precedent) yields a rough estimate. For example, if high-level data flow diagrams or object models are produced as part of concept exploration or planning, their components may be counted for sizing. Imagine that there are 20 object classes, and that observation of existing systems shows classes are implemented as an average of five procedural programs per class. Also imagine that observation of existing systems shows the average procedural program (C language) is 75 LOC. Then the size can quickly be calculated:
Number of processes (object classes) x Number of programs per class x Average Program Size = Estimated Size
20 object classes x 5 programs per class x 75 LOC per program = 7,500 LOC estimated

This is known as a "blitz" of early feasibility documents. Components of any model (process bubbles, data flows, data repositories, entities, relationships, objects, attributes, services, etc.) may be multiplied by a factor that has been developed as a result of previous projects. Other examples are as follows: If it is known that each process bubble on a Level 0 DFD roughly corresponds to four actual SQL language programs, and it is also known that the average size for programs in the SQL library is 350 LOC, then a simple multiplication will suffice for the initial size count. Say that there are seven major process bubbles:
Number of processes (DFD bubbles) x Number of programs per bubble x Average Program Size = Estimated Size
7 bubbles x 4 programs per bubble x 350 LOC per program = 9,800 LOC estimated
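Both blitz calculations above reduce to the same three-factor multiplication. A minimal sketch (the helper name is an illustrative choice, not from the text):

```python
def blitz_estimate(component_count, programs_per_component, avg_loc_per_program):
    """Rough blitz size estimate: model components x programs per
    component x average LOC per program, per the historical database."""
    return component_count * programs_per_component * avg_loc_per_program

# Object-model blitz: 20 classes, 5 C programs per class, 75 LOC average
print(blitz_estimate(20, 5, 75))   # 7,500 LOC

# DFD blitz: 7 Level 0 bubbles, 4 SQL programs per bubble, 350 LOC average
print(blitz_estimate(7, 4, 350))   # 9,800 LOC
```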
If a high-level global object model is produced during the feasibility scoping phase, and it is known from historical evidence that each service corresponds to two C++ programs and that company standards encourage confining each service packet to 100 LOC or less, then multiplying as follows will provide a good start in the estimation of the size of the system in LOC:
The key phrase here is "known from historical evidence." A historical database is essential to improving estimating accuracy. The database should contain a record of the actual delivered size for each software component. The amount of effort expended to create the component of that size must also be tracked. As the number of data points grows, so does the accuracy of the average number of LOC per program and the average amount of effort required to build a component. When actual component sizes and their corresponding amounts of development effort are known, then average "productivity" is also known (size ÷ effort).
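Such a historical database can start as nothing more than a list of (size, effort) records per delivered component. A minimal sketch, with illustrative figures rather than real project data:

```python
# Each record is (delivered_loc, effort_hours) for one completed component.
history = [(320, 40), (410, 55), (275, 30)]  # illustrative data

avg_loc = sum(loc for loc, _ in history) / len(history)
total_loc = sum(loc for loc, _ in history)
total_effort = sum(hours for _, hours in history)
productivity = total_loc / total_effort  # size ÷ effort, in LOC per hour

print(f"average component size: {avg_loc:.0f} LOC")
print(f"average productivity: {productivity:.2f} LOC/hour")
```

As more completed components are appended to `history`, both averages stabilize, which is exactly how the accuracy improvement described above accrues.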
DeMarco suggests, with the bang metric, that function-strong systems (e.g., real time) be computed separately from data-strong systems. Function-strong systems rely on a count of indivisible functional primitives as defined by a data flow diagram. Data-strong systems rely on a count of objects in the system-global data model. Each will have a weighting factor (WF) applied.
An example with function-strong systems is this: WF (average number of modules needed to complete this function) is three, the number of processes plus control specifications (functions) is eight, and the average size per function is 78 LOC. Then:
WF x (Number of process and control specifications) x average LOC for this type of module = LOC
3 modules needed for function x 8 functions x 78 LOC = 1,872 LOC
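Both branches of the bang metric reduce to the same weighted multiplication; only the thing being counted differs. In this sketch the function-strong figures come from the example above, while the data-strong figures are purely hypothetical:

```python
def bang_loc(primitive_count, weighting_factor, avg_loc_per_module):
    """DeMarco-style estimate: WF x count of primitives x average LOC."""
    return weighting_factor * primitive_count * avg_loc_per_module

# Function-strong: 8 functional primitives (processes plus control
# specifications), WF of 3 modules per function, 78 LOC per module
print(bang_loc(8, 3, 78))   # 1,872 LOC

# Data-strong (hypothetical figures): 12 objects in the global data
# model, WF of 2 modules per object, 120 LOC per module
print(bang_loc(12, 2, 120))
```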
How does this differ from the feature point analysis presented during the feasibility scoping phase? Not by much. A project manager may choose to perform feature point analysis during the feasibility scoping phase, when only high-level models such as context-level DFDs exist, and then refine that estimation during the planning phase, when there is more project knowledge and more documentation, such as a Level 0 DFD, along with a Level 1 DFD for a few of the major subsystems. Any of these models may be used during any phase. If they are applied consistently, the expectation is that sizing and estimating accuracy will increase.
Advantages of Model Blitz
Some of the advantages of using the Model Blitz method include:
• It is easy to use with structured methods (data flow diagrams, entity relationship diagrams, etc.) and with object-oriented classes, services, and so on.
• Accuracy increases with use of historical data.
• Continuous improvement activities are used for estimation techniques—the estimated size can be easily compared with the actual size during post-project analysis. (How accurate was the estimate? Why was it off by a certain percent? What can be learned for the next project's size estimation?)
Disadvantages of Model Blitz
Disadvantages of using Model Blitz include:
• It requires use of a design methodology.
• Estimation cannot begin until design is complete.
• It requires historical data.
• It does not evaluate environmental factors.
Another popular and simple technique for estimating size and for estimating effort is the Wideband Delphi group consensus approach. The Delphi technique originated at the Rand Corporation decades ago; the name was derived from the Oracle of Delphi in Greek mythology. It was used successfully at Rand to predict the future of major world technologies.
This is a disciplined method of using the experience of several people to reach an estimate that incorporates all of their knowledge.
In software engineering circles, the original Delphi approach has been modified. The "pure" approach is to collect expert opinion in isolation, feed back anonymous summary results, and iterate until consensus is reached (without group discussion).
Guidelines for Conducting Wideband Delphi Group Consensus
Because the Delphi approach can take a very long time, the concept of Wideband Delphi was introduced to speed up the process. This improved approach uses group discussion.
Steps in Conducting Wideband Delphi
There are six major steps in conducting Wideband Delphi:
1. Present experts with a problem and a response form.
2. Conduct a group discussion.
3. Collect expert opinion anonymously.
4. Feed back a summary of results to each expert.
5. Conduct another group discussion.
6. Iterate as necessary until consensus is reached.
Group discussions are the primary difference between pure Delphi and Wideband Delphi. The summary of results in Step 4 is presented in Figure 10-6.
Figure 10-6. Delphi Software Size Estimation Results Summary Form
[Figure 10-6 shows a range-of-size scale from the previous estimation round, running from 5,000 to 12,000 LOC, with each participant's own estimate (e.g., "Your Estimate: 5,500 LOC") and the mean estimate (6,000 LOC) marked. The form prompts: "Please enter your estimate for the next round, or state the rationale behind remaining with your previous estimate."]
Here's another way to look at the Wideband Delphi process:
• Get a few experts (typically three to five). Include experience in all of the "risk" areas—application domain, programming language, algorithms, target hardware, operating systems, and so on.
• Meet with them to discuss issues and describe the software to them. Bring specifications, other source documents, WBS, and so on. Let them add their own information and questions. Have everyone take notes.
• Ask each expert to develop an estimate, including a minimum, expected, and maximum rating. Allow the experts to remain independent and anonymous.
• Record anonymous estimates on a graph.
• Meet and have each expert discuss his estimate, assumptions, and rationale.
• Seek consensus on assumptions. This may result in action items to gather factual data.
• If possible, reach a consensus estimate.
• If no consensus can be reached, break until you can gather additional data; then repeat.
• Stop repeating when you reach consensus or two consecutive cycles do not change much and there is no significant additional data available (agree to disagree). At the end, there is a consensus estimate on an expected value. There should also be a minimum and a maximum so that the degree of confidence in the estimate can be understood.
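The round-summary and stopping logic above can be sketched as simple arithmetic. The function names, expert estimates, and 5 percent tolerance below are illustrative assumptions, not values from the text:

```python
def delphi_round_summary(estimates):
    """Summarize one anonymous round: (minimum, mean, maximum) in LOC."""
    return min(estimates), sum(estimates) / len(estimates), max(estimates)

def has_converged(prev_mean, curr_mean, tolerance=0.05):
    """Stop iterating when consecutive round means change by less
    than the tolerance (here, an assumed 5 percent)."""
    return abs(curr_mean - prev_mean) / prev_mean < tolerance

round1 = [5500, 8000, 12000, 6500]   # illustrative expert estimates
round2 = [6000, 7000, 8500, 6500]    # after group discussion

_, mean1, _ = delphi_round_summary(round1)
low, mean2, high = delphi_round_summary(round2)
print(f"round 2: min {low}, mean {mean2:.0f}, max {high} LOC")
print("consensus" if has_converged(mean1, mean2) else "iterate again")
```

Reporting the minimum and maximum alongside the mean is what lets the degree of confidence in the final estimate be understood.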
Advantages of Wideband Delphi

The advantages of Wideband Delphi include the following:

• Implementation is easy and inexpensive.
• It takes advantage of the expertise of several people.
• All participants become better educated about the software.
• It does not require historical data, although it is useful if available.
• It can be used for high-level and detailed estimation.
• Results are more accurate and less "dangerous" than LOC estimating.
• It aids in providing a global view of the project to team members.
Disadvantages of Wideband Delphi
The disadvantages of Wideband Delphi include the following:
• It is difficult to repeat with a different group of experts.
• You can reach consensus on an incorrect estimate. Because you all "buy in," you may not be skeptical enough when actual data shows it is wrong.
• You can develop a false sense of confidence.
• You may fail to reach a consensus.
• Experts may all be biased in the same subjective direction.