CCPM seeks to use mean, or approximately 50% probable, individual task-time estimates. The CCPM project manager recognizes that actual individual task performance times include common-cause variation, and he or she does not criticize task performers for individual task-duration performance.

Most project managers attempt to account for individual task common-cause variation by adding contingency time to each estimate. They usually do not specify the existence or amount of this contingency time. People estimating task times for a project usually do so believing that the project manager wants "low-risk" task times, perhaps an 80% to 95% probability of completion within the estimated duration. Figure 4.5 illustrates that such an estimate is two or more times the 50% probable estimate. In most project environments, people feel good if they complete a task by the due date and bad if they overrun it. This reinforces their tendency to estimate high-probability completion times. Walter A. Shewhart, mentor to W. Edwards Deming, said [3],

It should be noted that the statistician does not attempt to make any verifiable prediction about one single estimate; instead, he states his prediction in terms of what is going to happen in a whole sequence of estimates made under conditions specified in the operational meaning of the estimate that he chose.

This view clarifies why attempts to deal with uncertainty for individual task estimates are fruitless.
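The gap between the 50% probable estimate and the "low-risk" estimate can be sketched numerically. The following is a minimal illustration, assuming a right-skewed lognormal duration distribution (the shape of Figure 4.5); the median of 10 days and the sigma value are assumptions chosen for illustration, not data from the figure:

```python
# Hypothetical sketch: task duration modeled as lognormal (right-skewed,
# as in Figure 4.5). median_days and sigma are assumed values.
from statistics import NormalDist
import math

median_days = 10.0   # the 50%-probable estimate CCPM asks for
sigma = 0.55         # assumed spread of ln(duration)
mu = math.log(median_days)

z = NormalDist()     # standard normal, used for lognormal quantiles

def quantile(p):
    """p-probable completion time for the assumed lognormal task."""
    return math.exp(mu + sigma * z.inv_cdf(p))

p50 = quantile(0.50)   # 10 days: the mean-ish CCPM estimate
p90 = quantile(0.90)   # the "low-risk" estimate people tend to give
print(p50, p90, p90 / p50)   # ratio comes out near 2
```

With this assumed spread, the 90%-probable estimate is roughly twice the 50%-probable one, consistent with the "two or more times" reading of Figure 4.5.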

I read and hear much about "improving the accuracy of estimates." I used to think that this was a good thing to do, that if we would apply a more disciplined process, we could do a better job of estimating the time or cost of a project.

Actually, I still believe that to be true, but understanding variation changed my view of what it means. Most people, when they address "improving the accuracy of estimates," have in mind improving the accuracy of each point estimate that sums to the total cost or duration. Shewhart clarifies that you cannot do this. Indeed, I have come to realize that the probability of all point estimates is exactly the same: zero. You only have a finite probability when you state an interval that a single result might fall within. Thus, accuracy, as most people mean it, does not exist.
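The zero-probability point is a property of any continuous distribution: a single point has no width, so only intervals carry probability. A small sketch, using an assumed normal duration distribution purely for illustration:

```python
# Sketch of "point estimates have probability zero," assuming a
# continuous (here normal) duration distribution with assumed parameters.
from statistics import NormalDist

duration = NormalDist(mu=10, sigma=2)  # assumed: 10-day task, 2-day sigma

# Probability the task takes *exactly* 10.0 days: zero, because a
# single point on a continuous distribution has no width.
p_point = duration.cdf(10.0) - duration.cdf(10.0)

# Probability it falls within an interval, e.g. 9 to 11 days:
p_interval = duration.cdf(11.0) - duration.cdf(9.0)
print(p_point, p_interval)   # 0.0 and roughly 0.38
```

Only the interval statement ("between 9 and 11 days") has a finite probability; the point statement ("exactly 10 days") never does.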

Consider defining the accuracy of a gun. If you shoot one shot at a target, you have no idea of how accurate the gun may be. Common-cause variation may have put that one shot right on target, or several inches off, or more. The only way to determine the accuracy of the gun is to shoot a number of shots, measure the spread of the result, and compare the center of the spread to the center of the target. Of course, you aren't really measuring the accuracy of the gun in that case, either. You are measuring the accuracy of the gun, the shooter, the cartridge, and the environment. Changing any of them will change the apparent accuracy of the gun. For example, letting my son shoot instead of me will make for a much more accurate gun. Shooting at 25 yards versus 100 yards will make for a much more accurate result. And so on.
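The gun example can be mimicked with a toy Monte Carlo simulation. All numbers below are assumptions: the "process" (gun plus shooter plus cartridge plus environment) is modeled as a systematic offset plus common-cause scatter, and the point is that one shot reveals neither, while a group estimates both:

```python
# Toy simulation of the gun example (all parameters assumed).
import random
import statistics

random.seed(1)

def shoot(n, bias=0.5, spread=2.0):
    """n shots; bias = systematic offset from target center (inches),
    spread = common-cause standard deviation of the whole process."""
    return [random.gauss(bias, spread) for _ in range(n)]

one_shot = shoot(1)        # by itself, says almost nothing
group = shoot(1000)        # a group reveals the process
center = statistics.mean(group)    # estimates the bias (near 0.5)
scatter = statistics.stdev(group)  # estimates the spread (near 2.0)
```

Changing the shooter or the range changes `bias` and `spread`, which is the sense in which you measure the whole process, not the gun alone; clamping the gun or shooting indoors corresponds to shrinking `spread`.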

Understanding what accuracy really means (the variation in the result relative to the specified mean result) clarifies that there are two ways to improve the accuracy of single-point estimates. You can better define the estimate assumptions, thus narrowing the necessary band (standard deviation), or you can improve the process. For example, specifying the gun accuracy at a specific range with a specific cartridge and a specific shooter will yield a smaller range in the variation (reduce the uncertainty in the stated accuracy). Also, doing things such as clamping the gun or shooting indoors where there is no wind will actually change the process and reduce the variation. The counterpart to the first approach would be an improved estimating process (e.g., using an estimation database and repeatable process versus ad hoc estimates). The counterpart to the second approach would be improving the actual work process (e.g., using written procedures or new tools).

Some experienced project managers state that people tend to give optimistic estimates. They base this contention on remembering the instances in which projects had difficulty meeting the delivery date. Generalizing this observation does not hold up under examination for several reasons.

First, extensive psychological research demonstrates that people tend to seek pleasure and avoid pain. In most project environments, people get pleasure and avoid pain by completing tasks on the due date. Hardly anyone wants to be known as the person who can be counted on to deliver late. It is not reasonable to expect people to solicit pain by systematically giving "optimistic" estimates.

Second, people remember selectively. They easily remember worst-case outcomes (pain) but not necessarily all of the times things went to their advantage. Don't most people feel that they always pick the slowest line in a bank or supermarket? Do you really believe that this is true? People also tend to forget predecessors leading to the outcome. This mental feat has two interesting effects:

1. The task performer remembers the instances where things went badly and inflates his or her estimates to cover them.

2. The project manager selectively remembers the instances where task-duration estimates were exceeded and therefore wants to add contingency of his or her own.

Third, if underestimating task durations were the predominant behavior, nearly all project tasks would be late. Assuming that most of the potential positive variation in task times is returned to the project (evidence suggests otherwise), the merging of task paths ensures a very low probability of success if individual estimates are less than 50% probable. (Real project behavior is, of course, confounded by control actions taken during project performance. These control actions may help or hinder overall completion time performance.)
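The path-merging effect is simple arithmetic. If a milestone requires all of n parallel, independent paths to finish on time, the milestone's on-time probability is the product of the individual path probabilities. The path counts and probabilities below are assumed for illustration:

```python
# Sketch of path merging: a milestone needing all n independent
# parallel paths to finish on time, each individually p-probable.
# The numbers used are illustrative assumptions.
def on_time_probability(p, n_paths):
    return p ** n_paths

# Four merging paths built from 50%-probable individual estimates:
four_paths = on_time_probability(0.5, 4)    # 0.0625: about 6%

# Even "safe" 90% estimates erode as more paths merge:
ten_paths = on_time_probability(0.9, 10)    # about 0.35
```

This is why individual estimates below 50% probable would make nearly every merge point, and hence nearly every project, late.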

While many projects do fail to meet schedule, our observations indicate that a substantial portion (e.g., about one third of information-technology projects) do achieve the scheduled project end date. Almost all projects to create bid proposals complete on time. Nearly all major meetings come off as planned with few problems. The Olympics has not yet been delayed due to late project completion. (Greece caused anxious moments in 2004, but was ready nonetheless.)

Milestone performance in a very large project I examined demonstrated that the task-performance data conformed very closely to Goldratt's prediction that about 80% of the task milestones are achieved exactly on the scheduled date, with only one or two sooner and the rest later, including a few significantly later. This project consisted of about thirty large subprojects, some of which contained yet smaller subprojects.

In my experience, project plans from a variety of organizations (numbering in the hundreds) either fail to specify the probability and confidence expected for task-duration estimates, or fail to provide a quantitative basis for the estimates, or both. The PMBOK™ Guide admonishes project managers to provide these estimates but provides little guidance on what to do with them. Construction projects are somewhat of an exception, having access to extensive quantitative data. For example, the National Construction Estimator [4] uses an extensive database. The Construction Estimator lists many potential contributors to common-cause uncertainty in the estimates and states that many of these uncertainty items range within several tens of percentage points of the cost estimate. Therefore, in many cases, they have a comparable potential impact on schedule.