Managing Quality

Quality control is a large part of what the PM does. It is up to the PM to validate that tasks are being performed in such a way that the deliverables will be of good quality, relative, of course, to the time and budget parameters the PM and the team have at their disposal. Recall that quality pivots off of the other two constraints, so limitations on time and budget may seriously restrict a PM's ability to provide a high-quality deliverable.

Managing quality well means that someone validates code modules before they're applied to the system as a whole. Someone makes sure that liberal commenting has been applied in the code. Someone validates that a module runs as it's supposed to: by running one last time through the logic flow diagrams (following along in the code), by stepping through the code in the debugger (to make sure it goes where you think it's going to go), and, if need be, by user acceptance testing (UAT). This someone can be the PM, but it's primarily the PM's job to make sure it gets done, by scheduling someone to do it.

You can validate the same things with servers, infrastructure, router and protocol configurations, and everything else that pertains to your project. The basic question you ask is this: Has the team member done what the task said to do, and does the end result match what we're looking for?

Real World Scenario: Why Does Microsoft Windows XP Run Faster?

Have you considered why Microsoft's Windows XP boots faster, runs faster, and runs apps faster than previous operating systems? Is it because Microsoft has reduced the amount of code in Windows XP? Hardly. Well then, does it have to do with new advances in hardware? Do faster RAM and processors make the code run faster? Only marginally. If you have an up-to-date computer system that was running Windows Me and is now running Windows XP Home or Windows XP Professional, you will certainly notice the increase in operational speed, regardless of the hardware.

So what is it?

It's in the code: the tightening of loops, the closing up of If/Else statements, and the elimination of other incongruities that make code run slower. Microsoft may have bought a company's software, but at some point they turn around, go back in, and refine the code so that it's better, faster, and leaner. That refinement is one reason Microsoft has been a leader for decades.

Do you visit those code modules one last time? Do you check those loops, making sure they're closed and that the logic is fast and tight? Do you use Case statements instead of deeply nested Ifs? Do you cut down code that's unneeded, unmerited, and unwanted? That's what Microsoft does to beef up its quality.
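As a sketch of what that tightening can look like in practice (the function and the rates below are invented for illustration, not drawn from any real product's code), compare a deeply nested routine with a tightened equivalent that uses a dispatch table in the role of a Case statement:

```python
# Hypothetical example: two versions of the same shipping-rate lookup.

def shipping_rate_loose(region):
    # Deeply nested Ifs: harder to read, and easy to leave a branch dangling.
    if region == "US":
        rate = 5.00
    else:
        if region == "EU":
            rate = 7.50
        else:
            if region == "APAC":
                rate = 9.00
            else:
                rate = 12.00
    return rate

# Tightened: a dispatch table plays the role of a Case statement.
_RATES = {"US": 5.00, "EU": 7.50, "APAC": 9.00}

def shipping_rate_tight(region):
    # Unknown regions fall through to the default rate.
    return _RATES.get(region, 12.00)
```

Both functions return the same answers; the tightened version is shorter, has a single exit point, and leaves no branch to fall through by accident.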

One difficulty that you'll encounter with larger projects is that team members may be spread out over multiple locations. There may be a very good reason for this: servers have to be installed at each location, database replication has to take place, and users have to have client components installed at each site. But you somehow must still manage the quality component. There are several ways to do this, two of which are centralization and standardization.

Centralization Code that is written and compiled at a central location and then deployed to users throughout the company is far more likely to be of high quality than code in which some modules are written in one place and then coupled with others written elsewhere.

Standardization One of the best things you can do is to standardize wherever possible. For example, the process of installing a server's network operating system (NOS) and associated software is called "burning." Have your server administrators prepare a standard burn document that details precisely how each server should be burned, so that no one server has things installed that aren't on the others. You should also, as much as possible, standardize on the kind of hardware you use in the system, at all locations.

You can also standardize on code modules. All modules should have basic entry and exit points that initialize common variables and clean those variables out of memory as the program exits. Variables should have a common naming nomenclature, so that programmers who need to modify a piece of code they haven't worked on before will have little difficulty recognizing the declaration of variables.

Database structures should all have standard naming nomenclatures, column names should follow a naming standard, and databases should be normalized using common techniques.
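One way to enforce such a naming standard is with a small validation script run against the schema. The lowercase snake_case pattern and the table-name-prefix rule below are hypothetical examples of a standard, not requirements from the text:

```python
import re

# Assumed standard: lowercase snake_case, with the table name prefixed on
# each column (e.g. customer_id, customer_last_name).
COLUMN_PATTERN = re.compile(r"^[a-z][a-z0-9]*(_[a-z0-9]+)*$")

def check_columns(table, columns):
    """Return the columns that violate the standard for this table."""
    bad = []
    for col in columns:
        if not COLUMN_PATTERN.match(col) or not col.startswith(table + "_"):
            bad.append(col)
    return bad
```

Running a check like this as part of a build or review step catches drift from the standard before it spreads across sites.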

When possible, you should strive for software development teams to be located in the same area. That's not always possible, and e-mail makes things much simpler, but it's difficult to get everyone together to talk about standardization policies and project techniques when one person's in Boston and another's in Shanghai.

Servers can be burned at a central location and shipped to their destination, then hung in the rack at their permanent home. They can be given IP addresses when they get there. By pre-burning servers at the central project site, you avoid the risk of trouble caused by a remote admin who didn't correctly burn a server.

You should also follow good communication standards, regardless of the location of personnel. When a code module is complete, for example, it would be prudent to have a preformatted checklist the developer follows, acknowledging that certain expected items have been provided. You should also test liberally, regardless of the physical location of the developer. Testers typically live at the central core of the project, so you can simply walk down the hall to talk to them.
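Such a completion checklist can even be encoded so that the open items are reported automatically. The checklist entries below are assumptions, drawn loosely from the practices described above:

```python
# Hypothetical sign-off checklist for a completed code module.
CHECKLIST = [
    "Code is liberally commented",
    "Module matches the logic flow diagrams",
    "Module was stepped through in the debugger",
    "Variables follow the naming standard",
    "Entry and exit points initialize and release resources",
]

def completion_report(checked):
    """Given the set of items already verified, list what remains open."""
    return [item for item in CHECKLIST if item not in checked]
```

A developer at any site runs the same report, so the PM sees identical sign-off criteria whether the module was written down the hall or overseas.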
