When your system goes into live use, it will place certain demands on the target machines and their operators. The system will have its own requirements for disk, memory and processor capacity. It will probably have to coexist or interact with other systems (with their own hardware requirements). There may be regular processes, such as backups, end-of-day processing or handling externally produced data, which the operators will have to perform and verify.
You have to define these requirements. For a PC system, they will form part of the user documentation. For a larger system, you may have to liaise with the operators and managers of other systems to make sure everything works together. You may need to create a production profile for each system to standardise the documentation.
Define the disk, memory and processor requirements of your system. It's a good idea to think in terms of the minimum requirements (e.g. for a new installation), the typical usage of memory and processor power, the maximum volumes of data which the system can handle, and the impact on system performance of the maximum planned transaction load. You should also try to define how the requirements will grow as the system handles more users and data. If the volumes of data will become very large over time, you also need to define how to archive and remove older data.
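As a sketch of such a production profile, the figures and field names below are entirely hypothetical; the point is that minimum and typical requirements, the maximum transaction load, and the projected data growth can all be recorded in one place, and the growth figure then tells you when archiving will be needed:

```python
# Hypothetical production profile: resource requirements and data growth.
# All figures are illustrative, not taken from any real system.

GROWTH_PER_MONTH_MB = 500        # estimated new data per month
ARCHIVE_THRESHOLD_MB = 50_000    # archive once live data exceeds this

profile = {
    "minimum": {"disk_mb": 2_000, "memory_mb": 512, "cpu_cores": 1},
    "typical": {"disk_mb": 10_000, "memory_mb": 2_048, "cpu_cores": 2},
    "max_transactions_per_hour": 5_000,
}

def months_until_archive(current_mb: int) -> int:
    """Months before the live data volume reaches the archiving threshold."""
    remaining = ARCHIVE_THRESHOLD_MB - current_mb
    if remaining <= 0:
        return 0
    # Round up to whole months.
    return -(-remaining // GROWTH_PER_MONTH_MB)

print(months_until_archive(10_000))  # 80 months at 500 MB/month
```

Keeping the profile in a machine-readable form like this also makes it easy to standardise across several systems, as suggested above.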
If several systems interact in the live environment, you have to help define the production cycle. This is the sequence of regular events and operational tasks. For example, system A must do its end-of-day processing before system B can run a summary report on the data, and both must complete before the systems can be taken out of use and a backup made. These events must be started at midnight, and finished before the systems are required again at 7 a.m.
You need to know how long each task will take, and what to do if it goes wrong. If you're trying to achieve very high availability (e.g. to support 24-hour-a-day use) then planning these tasks can be quite complex. Use case diagrams can show how the systems interact, activity diagrams can show the sequence of processing, and a Gantt chart can be used to schedule the different jobs. (See pages 80 and 112-118.)
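The production cycle above (end-of-day processing, summary report, take out of use, backup, all between midnight and 7 a.m.) can be checked with a simple dependency schedule. The task names, durations and the assumption that tasks run one after another are all illustrative:

```python
from graphlib import TopologicalSorter

# Hypothetical nightly tasks: name -> (duration in minutes, prerequisites).
tasks = {
    "end_of_day_A":     (90,  []),
    "summary_report_B": (45,  ["end_of_day_A"]),
    "take_out_of_use":  (5,   ["end_of_day_A", "summary_report_B"]),
    "backup":           (120, ["take_out_of_use"]),
}

# A valid running order that respects every prerequisite.
order = list(TopologicalSorter(
    {name: deps for name, (_, deps) in tasks.items()}
).static_order())

# Assuming the tasks run sequentially, does the cycle fit the window?
total = sum(tasks[name][0] for name in order)
window = 7 * 60  # midnight to 7 a.m.

print(order)
print(f"total {total} min, window {window} min, slack {window - total} min")
```

The slack figure is what tells you how much room there is for a task to overrun, or for a failed step to be retried, before the systems are needed again.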
You need to document how the system interacts with components of other systems. For example, you may use functions, libraries or databases from other applications. This information will be important when delivering your system, or upgrading the others. It may also help the operators to partition applications between disks and processors. In any event, you may want to recompile your system in the live environment (and then retest) to make sure it works with the versions already there.
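One way to keep this information usable is a small manifest of the shared components your system was built against, compared with what is actually present in the live environment. The component names and version numbers here are invented for illustration:

```python
# Hypothetical manifest: shared components our system depends on,
# versus the versions found in the live environment.
required = {
    "report_lib": "2.1",
    "customer_db_schema": "5.0",
}
live = {
    "report_lib": "2.1",
    "customer_db_schema": "4.9",   # older than the version we built against
}

# Any mismatch flags a component to recompile and retest against.
mismatches = {
    name: (want, live.get(name))
    for name, want in required.items()
    if live.get(name) != want
}
print(mismatches)
```

Running a check like this before delivery, or before another team upgrades a shared component, makes the recompile-and-retest decision explicit rather than a surprise.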
The backup process must take account of business and legal requirements (which may define how long data has to be stored), and whether the users may ever need to restore the system to a previous point in time. As well as defining the backup process, you must define how to recover from loss of the system, particularly when one of the production processes (on which others may depend) is interrupted. A good method is to create a failure analysis table, which lists the main steps in each process, and what to do if each step fails to complete.
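A failure analysis table can be as simple as a list of steps paired with recovery actions. The process steps and actions below are made up for illustration; the useful property is that every step has a documented response, and anything undocumented is escalated:

```python
# Sketch of a failure analysis table: each step of a production process
# paired with the action to take if that step fails to complete.
failure_analysis = [
    ("extract end-of-day data",   "re-run extract; check disk space"),
    ("transfer file to system B", "retry transfer; alert operator after 3 tries"),
    ("load into system B",        "restore B from last backup, then re-load"),
    ("confirm record counts",     "reconcile manually; hold backup until confirmed"),
]

def recovery_for(step: str) -> str:
    """Look up the documented action when a step fails to complete."""
    for name, action in failure_analysis:
        if name == step:
            return action
    return "escalate: no documented recovery for this step"

print(recovery_for("load into system B"))
```

The same table doubles as operator documentation: printed out, it is exactly the "what to do if it goes wrong" procedure described above.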
You must define each task for the operators, and write a procedure for it. As well as the production tasks, this includes things like granting user access and creating accounts, if these are not controlled by the users themselves.