Architecture Design Synthesis

This step involves the synthesis of architectural alternatives. These alternatives flow from trying to define a set of choices that satisfy the requirements in each functional area. The conceptual problem attendant to this step can be described as follows: if there are a total of eight functional areas, and three design choices for each functional area, then in principle there are 3^8 = 6,561 (!) possible architectures. Examining all of these, of course, makes no sense, so we try to narrow the field of possibilities through the addition of constraints.
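The combinatorial growth, and the effect of adding a constraint, can be sketched in a few lines of Python. The choice labels and the example constraint are illustrative assumptions, not taken from the source:

```python
from itertools import product

# Eight functional areas, each with three design choices
# (the choice labels here are illustrative).
AREAS = 8
choices = ["low-cost", "baseline", "high-performance"]

# Unconstrained: every combination of choices across all areas.
all_architectures = list(product(choices, repeat=AREAS))
print(len(all_architectures))  # 3**8 = 6561

# A hypothetical constraint narrows the field: suppose at most two
# functional areas may use the high-performance choice.
feasible = [a for a in all_architectures
            if a.count("high-performance") <= 2]
print(len(feasible))  # 3072 -- less than half the original space
```

Even one such constraint cuts the space roughly in half; a handful of constraints quickly reduces it to a tractable set.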

One method for achieving this rather rapidly is depicted in Table 9.1. This table shows only three alternative architectures, mapped against the functional areas and the subordinate requirements in these areas. These three alternatives are:

1. A low-cost, minimum-effectiveness alternative

2. A "baseline" alternative

3. A high-performance (high-cost) alternative

The entries in Table 9.1 describe how each alternative is designed to satisfy the key functional requirements. This notion was first presented in Figure 7.5. The idea is to bracket the reasonable alternatives by the low-cost and high-performance architectures, setting the stage for evolving a cost-effective "baseline" alternative. All such alternatives satisfy the full set of requirements as defined by the customer. In this synthesis and top-level design process, there is a minimum of formal analysis and evaluation, which is the next step in the process. A greater number of alternatives may also be considered and evaluated by formal Taguchi and response surface methods, which the reader may wish to explore but which are outside the scope of this presentation.

TABLE 9.1 Alternative Architectures Versus Functions and Requirements

                                 Alternative Architectures
Functions and            Low-Cost      Baseline      High-Performance
Requirements             System        System        System
---------------------------------------------------------------------
Function 1
  Req. 1.1
  Req. 1.2
  Req. 1.3
Function 2
  Req. 2.1
  Req. 2.2
Function 3
  Req. 3.1
  Req. 3.2
  Req. 3.3
  Req. 3.4

Entries: How is each system designed to satisfy the requirements?

9.5.4 Alternatives Analysis/Evaluation

Given the alternative architectures as defined in the previous step, these alternatives are then analyzed and evaluated with the objective of deriving a preferred system architecture. We refer again for this process to Figure 7.4. This figure shows a series of inputs to the analysis and evaluation of alternatives, namely:

• Architecture design and synthesis

— Technical performance measurement (TPM)

— Life-cycle costing

— Risk analysis

— Concurrent engineering

— Systems engineering management

• The ancillary elements of:

— Integrated logistics support (ILS)

— Reliability, maintainability, availability (RMA)

— Quality assurance and management (QA&M)

— Specialty engineering

— Preplanned product improvement (P3I)

• Evaluation criteria

• An evaluation framework

The figure also shows the output as the preferred system architecture.

The basic evaluation framework brings together the final form of the alternatives evaluation process, as shown in Table 9.2. This table shows the three alternatives as columns and the evaluation criteria as rows. With such a matrix, we evaluate each alternative with respect to how well it satisfies the evaluation criteria. The criteria are weighted to capture the likely possibility that they are not all equally important, and the weighting factors are chosen to add to unity, which normalizes the numbers and simplifies the evaluation. Weighting factors are developed by group decision-making processes [9.9] involving the systems engineering team. Customer inputs are highly desirable but usually difficult to obtain in quantitative terms.
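As a sketch of the normalization just described, raw importance points from a team session can be scaled so that the weights add to unity. The criterion names and point values below are hypothetical:

```python
# Hypothetical raw importance points assigned by the systems
# engineering team during a group decision-making session.
raw_points = {"cost": 20, "reliability": 30, "throughput": 25, "growth": 25}

# Normalize so the weighting factors sum to one.
total = sum(raw_points.values())
weights = {name: points / total for name, points in raw_points.items()}

print(weights)
# {'cost': 0.2, 'reliability': 0.3, 'throughput': 0.25, 'growth': 0.25}
```

Normalizing in this way means each weighted score can be read directly against the rating scale's maximum, since the weights contribute no scale factor of their own.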

The individual "cells" in the matrix contain two basic numbers. The first is the rating of the given alternative with respect to the stated evaluation criterion. The second is the product of the rating and the weight given to that criterion. This latter product is then summed down each column to yield the final score for each alternative. These scores are the basis for comparing the alternatives against each other.

TABLE 9.2 System Alternatives Evaluation Framework

                      Criteria        System A         System B         System C
Evaluation Criteria   Weights (W)   Rating   W×R    Rating   W×R    Rating   W×R
--------------------------------------------------------------------------------
Criterion 1           0.08            7      0.56     8      0.64     9      0.72
Criterion 2           0.10            6      0.60     6      0.60     8      0.80
Criterion 3           0.13            6      0.78     7      0.91     7      0.91
Criterion 4           0.09            5      0.45     7      0.63     8      0.72
Criterion 5           0.12            8      0.96     9      1.08    10      1.20
Criterion 6           0.07            6      0.42     7      0.49     7      0.49
Criterion 7           0.11            7      0.77     6      0.66     5      0.55
Criterion 8           0.10            9      0.90     6      0.60     4      0.40
--------------------------------------------------------------------------------
Total                 1.00                   5.44            5.61            5.79
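The column sums of Table 9.2 can be reproduced with a short script. The weights and ratings below are taken directly from the table; only the dictionary structure is an implementation choice:

```python
# Criterion weights (W) and ratings (R) from Table 9.2.
weights = [0.08, 0.10, 0.13, 0.09, 0.12, 0.07, 0.11, 0.10]
ratings = {
    "System A": [7, 6, 6, 5, 8, 6, 7, 9],
    "System B": [8, 6, 7, 7, 9, 7, 6, 6],
    "System C": [9, 8, 7, 8, 10, 7, 5, 4],
}

# Each cell's W x R product is summed down the column to give the
# final score for that alternative.
scores = {
    name: round(sum(w * r for w, r in zip(weights, rs)), 2)
    for name, rs in ratings.items()
}
print(scores)  # {'System A': 5.44, 'System B': 5.61, 'System C': 5.79}

best = max(scores, key=scores.get)
print(best)  # System C
```

The script confirms the ordering stated in the text: System C is preferred, followed by System B and then System A.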

There are many options available for the rating system that one might select. Three such options are:

1. A numeric rating scale from 1 to 10

2. A Likert scale with the numeric values of 0, 2, 4, 6, 8, and 10

3. A college scoring system of A, B, C, D, and F, equivalent to numeric scores of 4, 3, 2, 1, and 0

Table 9.2 shows the first of these options and the resultant scores for the three alternatives. Each of the given scores can then be compared with the understanding that the maximum score for any given alternative is 10. The illustrative numbers in the table show preferences for Systems C, B, and A, in that order.
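If a team prefers the college-grade option, its 0–4 scale can be rescaled onto the 10-point range so that totals remain comparable across rating systems. This conversion is an illustrative assumption, not a prescription from the text:

```python
# Option 3's letter grades mapped to their stated numeric scores.
grade_points = {"A": 4, "B": 3, "C": 2, "D": 1, "F": 0}

def grade_to_ten(letter: str) -> float:
    """Rescale a 0-4 grade-point value onto the 0-10 rating range."""
    return grade_points[letter] * 10 / 4

print(grade_to_ten("A"))  # 10.0
print(grade_to_ten("B"))  # 7.5
```

With all scales mapped to a common 0–10 range, the maximum attainable weighted score remains 10 regardless of which rating option the team adopts.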

With this evaluation framework, one may then ask the question: Where do the evaluation criteria come from? These criteria can be derived from several sources simultaneously:

• The requirements document for the system

• Possible evaluation criteria defined by the customer

• The measures of merit and effectiveness (MOMs and MOEs) that may have been developed

• The technical performance measures (TPMs) for the system

In general, the TPMs (which are measurable) and other possibly subjective factors, as well as cost measures, may make up the full set of evaluation criteria. Examples of evaluation criteria for a communications system and a transportation system are listed in Exhibit 9.3. The broad factors cited by Rechtin (see Section 9.2) [9.1] may be said to apply to all systems, independent of the domain in which they operate.

Exhibit 9.3: Illustrative Evaluation Criteria

A Communications System

• Availability

• Connectivity

• Expandability

• Grade of service

• Life-cycle costs

• Number of channels (by type of channel)

• Quality of service

• Reliability

• Response time

• Speed of service

• Survivability

• Throughput

A Transportation System

• Capacity-to-demand ratio

• Comfort and convenience

• Economic impacts

• Environmental effects

• Frequency of service

• Growth capability

• Life-cycle costs

• Quality of service

• Reliability of service

In a study of the future development of the U.S. airport network, the Transportation Research Board of the National Research Council addressed directly the matter of criteria for its evaluation and developed the following set of fourteen criteria:

1. Capacity

2. Safety

3. Cost

4. Competition

5. Flexibility

6. Time

7. Frequency

8. Reliability

9. Comfort and convenience

10. Congestion and pollution

11. Other environmental concerns

12. Compatibility

13. Funding

14. Management

So whether we are dealing with a very large system, such as the national airport network, or a smaller system, such as a radar, the issue of developing and using a coherent list of evaluation criteria remains approximately the same.
