Figure 10.9 Developing Standards and Metrics

response, conformance, aesthetics, and maintainability. Although these dimensions focus on the application system, other dimensions can be identified for each of the project deliverables (e.g., business case, project charter and baseline project plan, project reporting, user documentation, etc.).
Metrics are vital for gauging quality by establishing tolerance limits and identifying defects. A defect is an undesirable behavior associated with the product or process (Ginac 1998). It is a failure to comply with a requirement (Lewis 2000). In software development, defects are often referred to as bugs.1
Once the quality dimensions are identified, the next step is to define a set of metrics that allow the project manager and team to monitor each of the project standards. There are two parts to a metric—the metric itself and an acceptable value or range of values for that metric (Ginac 1998). Metrics should focus on three categories (Kan 1995):
• Process—The control of defects introduced by the processes required to develop or create the project deliverables. Process metrics can be used to improve software development or maintenance processes. Process metrics should focus on the effectiveness of identifying and removing defects or bugs.
• Product—The intrinsic quality of the deliverables and the satisfaction of the customer with these deliverables. These metrics should attempt to describe the characteristics of the project's deliverables and final product. Examples of product metrics may focus on customer satisfaction, performance, reliability, and design features.
• Project—The control of the project management processes to ensure that the project meets its overall goal as well as its scope, schedule, and budget.
Metrics can be used to determine whether the software product and project deliverables meet requirements for "fitness for use" and "conformance to requirements" as defined by the internal or external customers. Many technical people, however, often feel that standards are restricting and only serve to stifle creativity. Although too many standards that are rigidly followed can lend support to that argument, well-defined standards and procedures are necessary for ensuring quality. A quality approach can also decrease development costs because the sooner a defect or bug is found and corrected, the less costly it will be down the road (Lewis 2000). Table 10.3 provides a summary of some common process, product, and project metrics.
1 The term bug was introduced to the computer field by Dr. Grace Murray Hopper (1906–1992), an extraordinary woman who retired as a Rear Admiral in the U.S. Navy. In 1946, while working on the Mark II and Mark III computers, she found that one of the computers crashed as a result of a moth that had become trapped in one of the computer's relays. The moth was carefully removed and taped to the logbook, where an inscription was made that the computer was debugged. For some reason the term stuck, and errors, or glitches, in a program or computer system are called bugs.
Table 10.3 Examples of Process, Product, and Project Metrics

Process
• Defect arrival rate: The number of defects found over a specific period of time
• Defects by phase: The number of defects found during each phase of the project
• Defect backlog: The number of defects waiting to be fixed
• Fix response time: The average time it takes to fix a defect
• Defective fixes: The number of fixes that created new defects

Product
• Mean time to failure: Average or mean time elapsed until a product fails
• Defect density: The number of defects per lines of code (LOC) or function points
• Customer found defects: The number of defects found by the customer
• Customer satisfaction: An index to measure customer satisfaction, e.g., a scale from 1 (very unsatisfied) to 5 (very satisfied)

Project
• Scope change requests: The number of scope changes requested by the client or sponsor
• Scope change approvals: The number of scope changes that were approved
• Overdue tasks: The number of tasks that were started but not finished by the expected date or time
• Tasks that should have started: The number of tasks that should have started but have been delayed
• Over budgeted tasks: The number (and dollar amount) of tasks that have cost more to complete than expected
• Earned value: Budgeted Cost of Work Performed (BCWP)
• Over allocated resources: The number of resources assigned to more than one task
• Turnover: The number of project team members who quit or were terminated
• Training hours: The number of training hours per project team member
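As a small illustration of the point that every metric has two parts, the sketch below computes one of the product metrics from Table 10.3 (defect density) and checks it against an acceptable value. The numbers and the tolerance limit are hypothetical, chosen only for illustration.

```python
# A minimal sketch of the two parts of a metric: the measurement itself and
# an acceptable value or range for it. Thresholds here are hypothetical.

def defect_density(defects_found: int, lines_of_code: int) -> float:
    """Product metric: defects per thousand lines of code (KLOC)."""
    return defects_found / (lines_of_code / 1000)

def within_tolerance(value: float, upper_limit: float) -> bool:
    """A metric is only meaningful alongside an acceptable value or range."""
    return value <= upper_limit

# Example: 18 defects found in a 12,000-line module.
density = defect_density(18, 12_000)         # 1.5 defects per KLOC
acceptable = within_tolerance(density, 2.0)  # assumed tolerance limit of 2.0

print(f"defect density = {density:.2f} per KLOC, acceptable = {acceptable}")
```

The same pattern (measure, then compare against a predefined acceptable range) applies to process and project metrics as well.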
Verification and validation (V&V) are becoming increasingly important concepts in software engineering (Jarvis and Crandall 1997). V&V activities continually prompt us to ask whether we will deliver an IT solution that meets or exceeds our project sponsor's expectations.
The concept of verification emerged about twenty years ago in the aerospace industry, where it is important that software perform all of its intended functions correctly and reliably because any error in a software program could result in an expensive or disastrous mission failure (Lewis 2000). Verification focuses on the process-related activities of the project to ensure that the product or deliverable meets its specified requirements before final testing of the system begins.
Verification requires that the standards and metrics be defined clearly. Moreover, verification activities focus on asking the question of whether we followed the right procedures and processes. In general, verification includes three types of reviews (Ginac 1998):
• Technical Reviews—A technical review ensures that the IT solution will conform to the specified requirements. This review may include conformance to graphical user interface (GUI) standards, programming and documentation standards, naming conventions, and so forth. Two common approaches to technical reviews include structured walkthroughs and inspections. A walkthrough is a review process in which the programmer or designer leads a group of programmers or designers through a program or technical design.
The participants may ask questions, make comments, or point out errors or violations of standards (Ginac 1998). Similarly, inspections are peer reviews in which the key feature is the use of a checklist to help identify errors. The checklists are updated after data is collected and may suggest that certain types of errors are occurring more or less frequently than in the past (Lewis 2000). Although walkthroughs and inspections have generally focused on the development of programs, they can be used as a verification of all project deliverables throughout the project life cycle.
• Business Reviews—A business review is designed to ensure that the IT solution provides the required functionality specified in the project scope and requirements definition. However, business reviews can include all project deliverables to ensure that each deliverable (1) is complete, (2) provides the necessary information required for the next phase or process, (3) meets predefined standards, and (4) conforms to the project methodology.
• Management Reviews—A management review basically compares the project's actual progress against the baseline project plan. In general, the project manager is responsible for presenting the project's progress to provide a clear idea of the project's current status. Issues may need to be resolved, resources adjusted, or decisions made to either stay or alter the project's course. In addition, management may review the project to determine if it meets the scope, schedule, budget, and quality objectives.
Validation, on the other hand, is a product-oriented activity that attempts to determine whether the system or project deliverable meets the customer or client's expectations and ensures that the system performs as specified. Unlike verification, validation activities occur toward the end of the project or after the information system has been developed. Therefore, testing makes up the majority of validation activities. Table 10.4 provides a summary of the various types of tests that can be conducted for a software engineering project. Volumes and courses can be devoted to software testing, so only an overview (or refresher) can be provided in this text. However, understanding what needs to be tested and how is an important consideration for developing a quality strategy and plan for the IT project.

Table 10.4 Testing Approaches
• Unit testing: Done at the module, program, or object level; focuses on whether specific functions work properly. Unit testing can be accomplished via:
  • Black box testing—Tests the program code against specified requirements (i.e., functionality)
  • White box testing—Examines paths of logic inside the program (i.e., structure)
  • Gray box testing—Studies the requirements and communicates with the developer to understand the internal structure of the program (i.e., functionality and structure)
• Integration testing: Tests whether a set of logically related units (e.g., functions, modules, programs, objects, etc.) work together properly after unit testing is complete
• Systems testing: The system is tested as a whole in an operating environment to verify functionality and fitness for use. May include tests to verify usability, performance, stress, compatibility, and documentation
• Acceptance testing: Tests the system to determine whether it meets the customer's scope and detailed requirement specifications after systems testing is complete. The end user or client is responsible for assuring that all specified functionality is included and will provide value to the organization as defined by the project's goal or MOV

Testing provides a basis for ensuring that the system functions as intended and has all the capabilities and features that were defined in the project's scope and requirements. In addition, testing provides a formal, structured, and traceable process that gives the project sponsor confidence in the quality of the system (Lewis 2000). Lewis (2000) also provides several suggestions for making software testing more effective:
• Testing should be conducted by someone who does not have a personal stake in the project. In other words, programmers should not test their own programs because it is difficult for people to be objective about their own work.
• Testing should be continuous and conducted throughout all the development phases.
• To determine whether a test met its objectives, a test plan should outline what is to be tested, how it will be tested, when it will be tested, who will do the testing, and the expected results.
• A test plan should act as a service level agreement among the various project stakeholders and should encourage "quality before design and coding."
• A key to testing is having the right attitude. Testers should not be out to "break the code" or embarrass a project team member. A tester should evaluate a software product with the intent of helping the developers meet the customer's requirements and make the product even better.
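The black box approach from Table 10.4 can be sketched in a few lines: the unit is exercised only against its specified requirement, without examining the paths of logic inside it. The discount rule below is a made-up requirement used purely for illustration, not an example from the text.

```python
# A hedged sketch of black box unit testing: test cases come only from the
# specification (inputs and expected outputs), not from the code's internals.
# The discount rule is a hypothetical requirement for illustration.

def order_discount(total: float) -> float:
    """Assumed requirement: orders of $100 or more receive a 10% discount."""
    return round(total * 0.9, 2) if total >= 100 else total

# Black box test cases: specified input, expected output, nothing else.
def test_discount_applied_at_threshold():
    assert order_discount(100.0) == 90.0

def test_no_discount_below_threshold():
    assert order_discount(99.99) == 99.99

test_discount_applied_at_threshold()
test_no_discount_below_threshold()
print("all black box tests passed")
```

A white box test of the same function would instead examine each branch of the conditional; a gray box test would combine both views.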
Suppose you were developing a database application system for a client. After several weeks, you would undoubtedly make a number of changes to the tables, attributes, user interface, and reports as part of a natural evolution of the project. This evolution is both normal and expected as you learn more about the technology and the requirements. In addition, the user/client may suggest changes or enhancements if the organizational environment changes.
If you are working alone, you may store all the products of the software development (i.e., reports, plans, design models, program and database files) on your computer. Change control may be nothing more than keeping your documents and files organized. If, however, you need to share these files and documents with even one other person, controlling changes becomes more problematic. You could each keep working copies on your own stand-alone workstations. Unfortunately, if you need to share or work on the same documents or files, this can lead to several different versions of the same document or file distributed among several different computers. On the other hand, you may store all the work in a shared directory on a server. This solution would certainly allow everyone to share and use the same documents or files, but problems could occur if two or more people work on the same document or file at the same time. If one person saves a file after another person has saved it, the earlier changes would be overwritten and lost. There could be a great deal of confusion and wasted time.
Change is inevitable throughout the life of the project. On any given project, each deliverable will progress through a series of stages from an initial conception through a final release. As the deliverable develops, changes will be made informally until it gets to a state of completeness, whereupon revision control is needed. At some point informal changes should be no longer permitted. After final acceptance, the deliverable should be frozen until it is released. An informal change control allows changes that can be traced and captured sequentially to be made to an evolving project deliverable. It provides for rapid development while allowing for backup and some measures of control. On the other hand, formal change control is a procedure in which changes to an accepted work are formally proposed and assessed and decisions to accept or reject proposed changes are documented to provide an element of stability beyond the informal change controls.
Configuration management is an important aspect of PQM that helps control and manage document and software product change (Jarvis and Crandall 1997). It provides the project team with an environment for efficiently accessing different versions of past documents or files. Its basic purpose is to establish and maintain the integrity of the various project work products and deliverables throughout the project life cycle. In short, configuration management attempts to answer the following basic questions (Ginac 1998):
• What changes were made?
• When were the changes made?
• Why were the changes made?
Configuration management tools allow different project team members to work on a specific section of a document or file. The document or file can be checked out and checked back into a repository or library in order to maintain control. Software and the supporting project deliverables often go through an evolution of successive temporary states called versions (Lewis 2000). Configuration management, therefore, includes a set of processes and tools that allows the project team to manage its various documents and files as various configurations of IT solutions and project deliverables are derived. It may include specifying and enforcing various policies that restrict access to specific individuals or preventing two people from changing the same document or file at the same time (Ginac 1998).
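A minimal sketch, assuming a simple in-memory library, of the check-out/check-in control just described: a checked-out item is locked so that two people cannot change the same document at the same time, and every check-in retains the previous version. Real configuration management tools are far more capable, but the idea is the same.

```python
# Toy repository illustrating check-out/check-in with locking.
# Names and contents are hypothetical.

class Repository:
    def __init__(self):
        self._files = {}   # name -> list of versions (latest last)
        self._locks = {}   # name -> user currently holding the check-out lock

    def add(self, name, content):
        self._files[name] = [content]

    def check_out(self, name, user):
        """Lock the item for one user and hand back the latest version."""
        if name in self._locks:
            raise RuntimeError(f"{name} already checked out by {self._locks[name]}")
        self._locks[name] = user
        return self._files[name][-1]

    def check_in(self, name, user, new_content):
        """Store a new version (old versions are retained) and release the lock."""
        if self._locks.get(name) != user:
            raise RuntimeError(f"{user} does not hold the lock on {name}")
        self._files[name].append(new_content)
        del self._locks[name]

repo = Repository()
repo.add("charter.txt", "v1 draft")
text = repo.check_out("charter.txt", "alice")
repo.check_in("charter.txt", "alice", text + " + scope section")
```

While alice holds the lock, a second check-out of the same file fails; after her check-in, both versions remain available in the repository.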
According to Lewis (2000), software configuration management includes four elements—component identification, version control, configuration building, and change control.
Component Identification This first element focuses on the processes or activities for defining or describing the various software configuration items or work products that make up a specific project deliverable. Guidelines are established and followed for identifying and naming the various baselines, software components, and configurations. As these elements go through changes, a numbering and/or naming scheme is used to uniquely identify each of the various versions or revisions as they evolve and change over time. The various components are often stored in a library or repository, where a list of all the components can be cataloged.
Version Control As the project deliverables and work products evolve and change over time, many different versions are created. Errors may be corrected and enhancements are made until the work product becomes stable. Each evolutionary change results in a new version. It is essential that these components be organized so that different versions can be distinguished from one another. With the exception of the first version, each subsequent version will have a predecessor and the ability to trace each version becomes the component's history. Allowing the project team to go back to any single version provides an important backup and allows for specific ideas to be saved and made available for reuse later on.
Configuration Building Configuration building entails identifying the correct component versions and then being able to execute the build procedures. A build includes all the software components, such as data files, programs, and so forth that are needed to implement one or more software functions (Pressman 2001). A software product must be built in order for it to run. For example, if you have a single program, building the application may require compiling and linking the program file in order to create an executable program. However, a larger application system may require hundreds or even thousands of files to be compiled, linked, and combined to create an executable system. This process can become time-consuming and complicated (McConnell 1996). Therefore, configuration building ensures that the derived software components are correctly associated and put together with each other in order to create an accurate build.
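The build step can be illustrated with a toy example: a configuration names the exact version of each component it needs, and the build first verifies those versions exist before associating the components together. Component names and contents here are hypothetical.

```python
# Toy configuration building: select the correct version of each component
# and assemble them into a build. All names and versions are illustrative.

components = {
    "billing.py": {1: "def bill(): ...", 2: "def bill(v2): ..."},
    "report.sql": {1: "SELECT ..."},
}

def build(configuration):
    """configuration maps component name -> required version number."""
    missing = [(name, ver) for name, ver in configuration.items()
               if ver not in components.get(name, {})]
    if missing:
        raise ValueError(f"cannot build, missing versions: {missing}")
    # Associate the correct component versions to form the build.
    return {name: components[name][ver] for name, ver in configuration.items()}

release_1 = build({"billing.py": 2, "report.sql": 1})
```

An incorrect configuration (e.g., a version that was never checked in) fails before anything is assembled, which is exactly the guarantee configuration building is meant to provide.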
Change Control Once a software component becomes stable and accepted, a decision process must be in place to control any proposed changes. Moreover, a simple change will often involve several other components, so it is important that the impact of any change requests be assessed. The change control activities ensure that any modification to a software component is proposed, evaluated, approved or rejected, scheduled, and tracked. It provides the basis for reporting and auditing processes. If a change is made, the component should be checked back into the library or repository where it becomes a new component version and the previous version is retained.
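A hedged sketch of the change control activities just described: every proposed change is tracked, an approved change produces a new component version, and the previous version is retained. The boolean `approved` flag below stands in for the evaluation and approval step that a change control board would perform.

```python
# Toy formal change control: propose -> evaluate -> approve/reject -> track.
# Component names are hypothetical.

versions = {"login_module": ["v1"]}
change_log = []

def propose_change(component, description, approved):
    """'approved' stands in for the change control board's evaluation."""
    record = {"component": component, "description": description,
              "status": "approved" if approved else "rejected"}
    change_log.append(record)                    # every request is tracked
    if approved:
        new_version = f"v{len(versions[component]) + 1}"
        versions[component].append(new_version)  # previous versions retained
    return record["status"]

propose_change("login_module", "add password strength check", approved=True)
propose_change("login_module", "remove audit logging", approved=False)
```

Because rejected requests are logged alongside approved ones, the change log supports the reporting and auditing processes mentioned above.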
Quality control focuses on monitoring the activities and results of the project to ensure that the project complies with the quality standards. Once the project's standards are in place, it is important to monitor them to ensure that the project quality objective is achieved. Moreover, control is essential for identifying problems in order to take corrective action and also to make improvements once a process is under control.
Similar to the quality assurance activities, quality control should be ongoing throughout the life cycle of the project and only end when the customer or project sponsor accepts the final IT solution (Kloppenborg and Petrick 2002). Moreover, quality control includes monitoring and controlling activities concerning the product, processes, and project. Using the system concept as illustrated in Figure 10.10, quality control activities must focus on the inputs and outputs of each process. If inputs to a process are of poor quality, then the output of a particular process will be of poor quality as well because, in general, the process may not be capable of changing the inherent quality of the input. Moreover, even if the input to a process is of high quality, the process itself may create an output of lower quality. Finally, the input and process may not produce a quality output or product if the requirements are not properly defined.
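As a small illustration of monitoring a process metric for control purposes, the sketch below flags the periods in which a weekly defect arrival rate exceeds an assumed upper control limit, signaling the need for corrective action. Both the counts and the limit are hypothetical.

```python
# Toy quality control monitoring: compare a process metric against an
# assumed upper control limit and flag out-of-control periods.

weekly_defect_arrivals = [4, 6, 5, 12, 7]  # hypothetical defects found per week
upper_control_limit = 10                   # assumed tolerance for this project

out_of_control = [week for week, count in enumerate(weekly_defect_arrivals, start=1)
                  if count > upper_control_limit]

print(f"weeks needing corrective action: {out_of_control}")
```

In week 4 the arrival rate breaches the limit, which would trigger investigation and corrective action before the process drifts further.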
To support the quality control activities, several tools and techniques were introduced in this chapter. Figure 10.11 provides a summary of those tools. As Besterfield et al. (1999) point out, these tools can be used to monitor the process, product, and project metrics.
Learn, Mature, and Improve
A central theme of this text has been the application of knowledge management as a tool for team learning and identifying best practices. Monitoring and controlling activities and tools can help point out problem areas, but the project team must solve these problems. Therefore, it is important that the lessons

Figure 10.10 Quality Control Activities