Chapter 23
System Analysis Synthesis
Throughout Part I we sequenced through topical series of chapters that provide an analytical perspective into HOW to THINK about, organize, and characterize systems. These discussions provide the foundation for Part II System Design and Development Practices, which enable us to translate an abstract System Performance Specification (SPS) into a physical system that can be verified and validated as meeting the User’s needs. So, HOW did Part I System Analysis Concepts provide this foundation?
23.1 SYNTHESIZING PART I ON SYSTEM ANALYSIS CONCEPTS
Part I concepts were embodied in several key themes that systems analysts and SEs need to understand when developing a new system, product, or service.
1. WHAT are the boundary conditions and constraints imposed by the User on a system, product, or service in terms of missions within a prescribed OPERATING ENVIRONMENT?
2. Given the set of boundary conditions and constraints, HOW does the User envision deploying, operating, and supporting the system, product, or service to perform its missions within specific time limitations, if applicable?
3. Given the deployment, operation, support, and time constraints planned for the system, product, or service, WHAT is the set of outcome-based behaviors and responses required of the system to accomplish its missions?
4. Given the set of outcome-based behaviors and responses required of the system to accomplish its mission, HOW is the deliverable system, product, or service to be physically implemented to perform those missions and demonstrate compliance?
To better understand HOW Part I’s topical series and chapters supported these themes, let’s briefly explore each one.
Theme 1: The User’s Mission
Boundary conditions and constraints for most systems are established by the organization that owns or acquires the system, product, or service to accomplish missions with one or more outcome-based performance objectives. The following chapters provide a topical foundation for understanding organizational boundary conditions and constraints.
Chapter 13: Organizational Roles, Missions, and System Applications
Chapter 14: Understanding the System’s Problem, Opportunity, and Solution Spaces
Chapter 15: System Interactions with Its OPERATING ENVIRONMENT
Chapter 16: System Mission Analysis
Theme 2: Deployment, Operations, and Support of the System
Once the organization’s vision, boundary conditions, and constraints are understood, we address HOW the User envisions deploying, operating, and supporting the system to perform its missions. The following chapters provide a topical foundation for understanding HOW systems, products, or services are deployed, operated, and supported.
Chapter 17: System Use Cases and Scenarios
Chapter 18: System Operations Model
Chapter 19: System Phases, Modes, and States of Operation
Theme 3: System Behavior in Its OPERATING ENVIRONMENT
Given the deployment, operation, support, and time constraints planned for the system, product, or service, we need to identify the set of outcome-based behaviors and responses required of the system to accomplish its missions. The following chapters provide a topical foundation for understanding HOW systems, products, or services are expected to behave and interact with their OPERATING ENVIRONMENT.
Chapter 20: Modeling System and Support Operations
Chapter 21: System Operational Capability Derivation and Allocation
Chapter 22: The Anatomy of a System Capability
Theme 4: Physical Implementation of the System
Based on an understanding of outcome-based behaviors and responses required of the system to accomplish its mission, the question is: HOW do we physically implement a system, product, or service to perform those missions? The following chapters provide a topical foundation for understanding HOW systems, products, or services are physically implemented.
Chapter 8: The Architecture of Systems
Chapter 9: System Levels of Abstraction and Semantics
Chapter 10: System of Interest (SOI) Architecture
Chapter 11: Operating Environment Architecture
Chapter 12: System Interfaces
By inspection, these themes range from abstract concepts to physical implementation; this is not coincidental. The progression is intended to show HOW SEs evolve a system design solution from abstract vision to physical realization.
After examining this list, you may ask: WHY did we discuss system architectures early in Part I when they appear last here? For instructional purposes, system architectures represent the physical world most people can relate to. As such, architectures provide the frame of reference for the semantics that are key to understanding Chapters 13 through 22.
23.2 INTRODUCING THE FOUR DOMAINS OF SOLUTION DEVELOPMENT
If we simplify and reduce these thematic groupings, we find that they represent four classes or domains of solutions that characterize HOW a system, product, or service is designed and developed, the subject of Part II. Table 23.1 illustrates the mapping between Part I’s system analysis concepts themes and the four domain solutions.
There are several key points to be made about the mapping. First, observe that Objectives 1 and 2 employ the User as the “operative” term; Objectives 3 and 4 do not. Does this mean the User is “out of the loop”? Absolutely not! Table 23.1 communicates that the User, Acquirer, and System Developer have rationalized and expressed WHAT is required. Given that direction, the system development contract imposes boundary conditions and constraints on developing the system. This communicates to the System Developer: “Go THINK about this problem and TELL us about your proposed solution in terms of its operations, behaviors, and cost-effective implementation.” Since Table 23.1 represents how a system evolves, User involvement occurs explicitly and implicitly throughout all of the themes. Remember, if the User had the capabilities and resources available, such as expertise, tools, and facilities to satisfy Objectives 3 and 4, they would have already independently developed the system.
Second, if 1) a SYSTEM has four domains of solutions, and 2) the SYSTEM, by definition, is composed of integrated sets of components working synergistically to achieve an objective greater than the individual component objectives, then deductive reasoning leads to the statement that each of the components ALSO has four domains of solutions, all LINKED, both vertically and horizontally.
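To make this linkage concrete, here is a minimal sketch in Python (our illustration, not the author's notation; the class, field, and function names are invented) of a system hierarchy in which every entity at every level carries the same four domain solutions, linked horizontally by the domain progression and vertically by parent/child composition:

```python
from dataclasses import dataclass, field
from typing import Dict, List, Optional

DOMAINS = ["Requirements", "Operations", "Behavioral", "Physical"]

@dataclass
class Entity:
    """A SYSTEM or any lower-level component (PRODUCT, SUBSYSTEM, ...)."""
    name: str
    # Horizontal linkage: every entity owns all four domain solutions.
    solutions: Dict[str, Optional[str]] = field(
        default_factory=lambda: {d: None for d in DOMAINS})
    # Vertical linkage: lower-level entities repeat the same pattern.
    children: List["Entity"] = field(default_factory=list)

def show(entity: Entity, depth: int = 0) -> None:
    """Walk the hierarchy to show the four-domain pattern at every level."""
    print("  " * depth + f"{entity.name}: {', '.join(DOMAINS)}")
    for child in entity.children:
        show(child, depth + 1)

system = Entity("SYSTEM", children=[
    Entity("PRODUCT A", children=[Entity("SUBSYSTEM A1")]),
    Entity("PRODUCT B"),
])
show(system)
```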
The four themes provide a framework for “bridging the gap” between a User’s abstract vision and the physical realization of the system, product, or service. Thus, each theme builds on decisions established by its predecessor and expands the level of detail of the evolving system design solution, as illustrated at the left side of Figure 23.1. This allows us to make several observations:
Table 23.1 Linking Part I System Analysis Concepts themes to Part II System Design and Development Practices semantics

23.1 Objective 1—WHAT are the boundary conditions and constraints imposed by the User on a system, product, or service in terms of missions within a prescribed OPERATING ENVIRONMENT? → Requirements Domain Solution

23.2 Objective 2—Given the set of boundary conditions and constraints, HOW does the User envision deploying, operating, and supporting the system, product, or service to perform its missions within specific time limitations, if applicable? → Operations Domain Solution

23.3 Objective 3—Given the deployment, operation, support, and time constraints planned for the system, product, or service, WHAT is the set of outcome-based behaviors and responses required of the system to accomplish its missions? → Behavioral Domain Solution

23.4 Objective 4—Given the set of outcome-based behaviors and responses required of the system to accomplish its mission, HOW is the deliverable system, product, or service to be physically implemented to perform those missions and demonstrate compliance? → Physical Domain Solution
Figure 23.1 Development and Evolution of a SYSTEM’s/Entity’s Solution Domains
▪ The mission (i.e., the opportunity/problem space) forms the basis for the User to establish the Requirements Domain Solution (i.e., the solution space).
▪ The Requirements Domain Solution forms the basis for developing and maturing the Operations Domain Solution.
▪ The evolving Operations Domain Solution forms the basis for developing and maturing the Behavioral Domain Solution.
▪ The evolving Behavioral Domain Solution forms the basis for developing and maturing the Physical Domain Solution based on physical components and technologies available.
From a workflow perspective, the design and development of the system solution evolves and matures from the abstract to the physical over time. However, the workflow progression consists of numerous feedback loops to preceding solutions as System Analysts and SEs mature the solutions and reconcile critical operational and technical issues (COIs/CTIs). As a result, we symbolize the system solution domains as shown at the right side of Figure 23.1.
23.3 SYSTEM DOMAIN SOLUTION SEQUENCING
Figure 23.2 provides a way to better understand how the system domain solutions evolve over time. As shown, the Requirements Domain Solution is initiated first, either in the form of a contract System Performance Specification (SPS) or a System Developer’s item development specification. Here is how the sequencing occurs:
▪ When the Requirements Domain Solution is understood and reaches a level of maturity sufficient to develop concepts of operation, initiate the Operations Domain Solution.
▪ When the Operations Domain Solution reaches a level of maturity sufficient to define relationships and interactions among system capabilities, initiate the Behavioral Domain Solution.
▪ When the Behavioral Domain Solution reaches a level of maturity sufficient to allocate the behavioral capabilities to physical components, initiate the Physical Domain Solution.
▪ Once initiated, the Requirements, Operations, Behavioral, and Physical Domain Solutions evolve concurrently, mature, and stabilize.

Figure 23.2 System Solution Domain Time-Based Implementation
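A minimal sketch of this gated sequencing follows (the maturity scale, step size, and initiation threshold are invented for illustration; only the ordering comes from the chapter). Each domain solution is initiated once its predecessor is sufficiently mature; once initiated, all domains continue to mature concurrently:

```python
# Illustrative only: the ordering comes from the chapter; the maturity
# scale (0.0-1.0), step size, and threshold are invented numbers.
SEQUENCE = ["Requirements", "Operations", "Behavioral", "Physical"]
INITIATION_THRESHOLD = 0.4  # predecessor maturity needed to start successor

maturity = {d: 0.0 for d in SEQUENCE}
initiated = {"Requirements"}  # initiated first (SPS or item spec)

for step in range(8):  # coarse time increments
    for i, domain in enumerate(SEQUENCE):
        if domain not in initiated:
            continue
        maturity[domain] = min(1.0, maturity[domain] + 0.2)
        # Initiate the successor once this domain is mature enough.
        if i + 1 < len(SEQUENCE) and maturity[domain] >= INITIATION_THRESHOLD:
            initiated.add(SEQUENCE[i + 1])
    print(f"step {step}:", {d: round(m, 1) for d, m in maturity.items()})
```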
23.4 SUMMARY
In this chapter we synthesized our discussions in Part I on system analysis concepts and established the foundation for Part II on system design and development. The introduction of the Requirements, Operations, Behavioral, and Physical Solution Domains, coupled with chapter references in each domain, encapsulates the key system analysis concepts that enable us to THINK about, communicate, analyze, and organize systems, products, and services for design and development. With this foundation in place, we are now ready to proceed to Part II System Design and Development Practices.
Part II
Systems Design and Development Practices
EXECUTIVE SUMMARY
Part II, System Design and Development Practices, builds on the foundation established in Part I System Analysis Concepts and consists of 34 chapters organized into six series of practices. The six series consist of:
▪ System Development Strategy Practices
▪ System Specification Practices
▪ System Design and Development Practices
▪ Decision Support Practices
▪ System Verification and Validation Practices
▪ System Deployment, Operations, and Support Practices
As an introductory overview, let’s explore a brief synopsis of each of these practices.
System Development Strategy Practices
Successful system development requires establishing an insightful strategy and supporting workflow that employs proven practices to enable a program to efficiently progress from contract award to system delivery and acceptance. The System Development Strategy Practices, which consist of Chapters 24–27, provide insights for establishing a program strategy.
Our discussions describe how a program employs verification and validation concepts introduced in Part I to create a workflow that translates multi-level specifications into a physical design solution that leads to delivery of systems, products, or services. We explore various development methods such as the waterfall approach, incremental development, evolutionary development, and spiral development. We also dispel a myth that V&V are only performed after a system has been integrated and tested; V&V are performed continuously from contract award through system delivery and acceptance.
Given an understanding of System Development Strategy Practices, we introduce the cornerstone for system design and development via the System Specification Practices.
System Specification Practices
System design and development begins with the derivation and development of system specifications and requirements that bound the User’s solution space subject to technology, cost, schedule,
support, and risk constraints. The System Specification Series, which consists of Chapters 28–33, explores what a specification is; types of specifications; how specifications are analyzed and developed; and how specification requirements are analyzed, derived, developed, and reviewed.
The System Specification Practices provide the cornerstone for our next topical discussion, System Design and Development Practices.
System Design and Development Practices
The design and development of a system requires that the developers establish an in-depth understanding of WHAT the User is attempting to accomplish and select a solution from a set of viable candidates based on decision factors such as technical, technology, support, cost, schedule, and risk. The System Design and Development Practices series consists of Chapters 34–46 and covers a diverse range of system design and development practices. Our discussions include: understanding the operational utility, suitability, effectiveness, and availability requirements; formulation of domain solutions; selection of a system architecture; configuration identification; system interface design; standards and conventions; and design and development documentation.
The System Design and Development Practices require timely data to support informed decision making, ensuring that the RIGHT system solution is selected. This brings us to our next topic, Decision Support Practices, which provide the data.
Decision Support Practices
The design and development of integrated sets of system elements requires analytical support to provide data and ensure that the system design balances technical, technology, support, cost, schedule, and risk considerations. The Decision Support Practices series, which consists of Chapters 47–52, provides mechanisms ranging from analyses to prototypes and demonstrations to provide timely data and recommendations.
Our discussions address analyses; statistical variation influences on system design; system performance budgets and margins; system reliability, availability, and maintainability; system modeling and simulation; and trade study analysis of alternatives.
System design and development requires on-going integrity assessments to ensure that the system is being designed correctly and will satisfy the user’s operational need(s). This brings us to our next topic, Verification and Validation Practices, which enable us to assess the integrity of the evolving system design solution.
Verification and Validation Practices
System design and development requires answering two key questions: 1) Is the system being designed and developed RIGHT, in accordance with the contract requirements? and 2) Does the system satisfy the User’s operational needs? The Verification and Validation Practices series, which consists of Chapters 53 through 55, enables the system Users, Acquirer, and Developers to answer these questions from contract award through system delivery and acceptance.
Our discussions explore what verification and validation are; describe the importance of technical reviews to verify and validate the evolving and maturing system design solution; and address how system integration, test, and evaluation plays a key role in performing V&V. We introduce verification methods such as inspection/examination, analysis, test, and demonstration that are available for verifying compliance to specification requirements.
Once a system is verified, validated, and delivered for final acceptance, the user is ready to employ the system to perform organizational missions. This brings us to our next topic, System Deployment, Operations, and Support Practices.
System Deployment, Operations, and Support Practices
People often believe that SE and analysis end with system delivery and acceptance by the User; in fact, SE continues throughout the operational life of the system, product, or service. The System Deployment, Operations, and Support Practices series, which consists of Chapters 56 and 57, provides key insights into system and mission applications and performance that require system analyst and SE assessments, not only for corrective actions to the current system but also for requirements for future systems and capabilities.
Our discussions explore how a system is deployed, including site selection, development, and activation; describe key considerations required for system integration at a site into a higher level system; address how system deficiencies are investigated and form the basis for acquisition requirements for new systems, products, or services; and investigate key engineering considerations that must be translated into specification requirements for new systems.
Chapter 24
System Development Workflow Strategy
24.1 INTRODUCTION
The award of a system development contract to a System Developer or Services Provider signifies the beginning of the System Development Phase. This phase covers all activities required to meet the provisions of the contract, produce the end item deliverable(s), and deploy or distribute the deliverables to the designated contract delivery site.
On contract award, the Offeror transforms itself from a proposal organization to a System Developer or Service Provider organization. This requires the organization to demonstrate that it can competently deliver the proposed system on time and within budget in accordance with the provisions of the contract. This transformation is best captured in a business jest: “The good news is we won the contract! The bad news is WHAT have we done to ourselves?”
Our discussion of this phase focuses on how a proposed system is developed and delivered to the User. We explore how the System Developer or Service Provider evolves the visionary and abstract set of User requirements through the various stages of system development to ultimately produce a physical system. The “system” may be a country, a space shuttle, a mass mailing service, a trucking company, a hospital, or a symposium. The important point to keep in mind is that the duration of the System Development Phase may last from a few weeks or months to several years.
Author’s Note 24.1 The System Development Phase described here, in conjunction with the System Procurement Phase, may be repeated several times before a final system is fielded. For example, in some domains, the selection of a System Developer may require several sequences of System Development Phase contracts to evolve the system requirements and down select from a field of qualified contractors to one or two contractors. Such is the case with spiral development. For a System Service Provider contract, the System Development Phase may be a preparatory time to develop or adapt reusable system operations, processes, and procedures to support the contract’s mission support services for the System Operations Phase. For example, a healthcare insurance provider may win a contract to deliver “outsourced” support services for a corporation’s insurance program. The delivered services may be a “tailored” version similar to programs the contractor has administered for other organizations.
Once you have mastered the concepts discussed in this section, you should have a firm under- standing of how the SE process should be implemented and how to manage its implementation.
What You Should Learn from This Chapter
1. What are the workflow steps in system development?
2. What is the verification and validation (V&V) strategy for system development?
3. How does the V&V strategy relate to the system development workflow?
4. Why is the V&V strategy important?
5. What is the Developmental Configuration?
6. When does the Developmental Configuration start and end?
7. What is a first article system?
8. What is developmental test and evaluation (DT&E)?
9. How is DT&E performed?
10. When is DT&E performed during the System Development Phase?
11. Who is responsible for performing DT&E?
12. What is operational test & evaluation (OT&E)?
13. When is OT&E performed during the System Development Phase?
14. What is the objective of OT&E?
15. Who is responsible for performing OT&E?
16. What is the System Developer’s role in OT&E?
Definitions of Key Terms
▪ Developmental Test and Evaluation (DT&E) “Test and evaluation performed to:
1. Identify potential operational and technological limitations of the alternative concepts and design options being pursued.
2. Support the identification of cost-performance trade-offs.
3. Support the identification and description of design risks.
4. Substantiate that contract technical performance and manufacturing process requirements have been achieved.
5. Support the decision to certify the system ready for operational test and evaluation.” (Source: MIL-HDBK-1908, Section 3.0, Definitions, p. 12)
▪ Developmental Configuration “The contractor’s design and associated technical documentation that defines the evolving configuration of a configuration item during development. It is under the developing contractor’s configuration control and describes the design definition and implementation. The developmental configuration for a configuration item consists of the contractor’s released hardware and software designs and associated technical documentation until establishment of the formal product baseline.” (Source: MIL-STD-973 (Canceled), Configuration Management, para. 3.30)
▪ First Article “[I]ncludes preproduction models, initial production samples, test samples, first lots, pilot models, and pilot lots; and approval involves testing and evaluating the first article for conformance with specified contract requirements before or in the initial stage of production under a contract.” (Source: DSMC—Test & Evaluation Management Guide, Appendix B, Glossary of Test Terminology)
▪ Functional Configuration Audit (FCA) “An audit conducted to verify that the development of a configuration item has been completed satisfactorily, that the item has achieved the performance and functional characteristics specified in the functional or allocated configuration identification, and that its operational and support documents are complete and satisfactory.” (Source: IEEE 610.12-1990 Standard Glossary of Software Engineering Terminology)
▪ Independent Test Agency (ITA) An independent organization employed by the Acquirer to represent the User’s interests and evaluate how well the verified system satisfies the User’s validated operational needs under field operating conditions in areas such as operational utility, suitability, and effectiveness.
▪ Operational Test and Evaluation (OT&E) Field test and evaluation activities performed by the User or an Independent Test Agency (ITA) under actual OPERATING ENVIRONMENT conditions to assess the operational utility, suitability, and effectiveness of a system based on validated User operational needs. The activities may include considerations such as training effectiveness, logistics supportability, reliability and maintainability demonstrations, and efficiency.
▪ Physical Configuration Audit (PCA) “An audit conducted to verify that a configuration item, as built, conforms to the technical documentation that defines it.” (Source: IEEE 610.12-1990 Standard Glossary of Software Engineering Terminology)
▪ Quality Record (QR) A document such as a memo, e-mail, report, analysis, meeting minutes, or action items that serves as objective evidence to commemorate a task-based action or event performed.
Phase Objective(s)
The primary objective of the System Development Phase is to translate the contract and System Performance Specification (SPS) requirements into a physical, deliverable system that has been:
1. Verified against those requirements.
2. Validated by the User, if required.
3. Formally accepted by the User or the Acquirer, as the User’s contract and technical representative.
24.2 SYSTEM DEVELOPMENT VERIFICATION AND VALIDATION STRATEGY
During our discussion of system entity concepts in Figure 6.2, we explored the basic concept of system verification and validation. Verification and validation (V&V) provides the basis for a conceptual strategy to ensure the integrity of an evolving SE design solution. Let’s expand on Figure 6.2 to establish the technical and programmatic foundation for our discussion in this chapter. Figure 24.1 serves as a navigation aid for our discussion of a closed-loop V&V system.
System Definition and System Procurement Phases V&V
When the User identifies an Operational Need (1), the User may employ the services of an Acquirer to serve as a contract and technical representative for the procurement action. The operational needs, which may already be documented in a Mission Needs Statement (MNS), are translated by the Acquirer into an Operational Requirements Document (ORD) (2) and validated in collaboration with the User. The ORD becomes the basis for the Acquirer to develop a System Requirements Document (SRD) (3) or Statement of Objectives (SOO). The SRD/SOO specifies the technical requirements for the formal solicitation—namely the Request for Proposal (RFP)—in an OPEN competition to qualified Offerors.
Figure 24.1 System V & V—Programmatic Perspective
Offerors analyze the SRD/SOO, derive and develop a System Performance Specification (SPS) (4) from the SRD/SOO (3), and submit the SPS as part of their proposal. When the Acquirer makes a final source selection decision, a System Development Agreement (6) is formally established at the time of contract award (5).
System Development Phase V&V
The SPS (4) provides the technical basis for developing the deliverable system or product via System Engineering and Development (6) activities. Depending on the maturity of the requirements, the System Developer may employ spiral development and other strategies to develop the system design solution. In support of the System Engineering and Development (6) activity, Decision Support (8) performs analyses and trade studies, among other such activities, with inputs and preliminary assessments provided from User Feedback (9), such as validation of requirements implementation.
As the SE design evolves, System Verification (12) methods are continually applied to assess the requirements allocation, flow down, and designs at all levels of abstraction—at the PRODUCT, SUBSYSTEM, ASSEMBLY, SUBASSEMBLY, and PART levels. Design verification activities include Developmental Test and Evaluation (DT&E) (10) and Major Technical Reviews and Traceability Audits (11). The purpose of these verification activities is to assess and monitor the progress, maturity, integrity, and risk of the SE design solution. Baselines are established at critical staging or control points—using technical reviews—to capture formal snapshots of the evolving Developmental Configuration and facilitate decision making.
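Conceptually, baselining works like tagging snapshots of a changing design. The sketch below is a simplified illustration (the class and method names are our own, not from the text); it freezes a copy of the evolving Developmental Configuration at each technical review control point:

```python
import copy

class DevelopmentalConfiguration:
    """Evolving design under the System Developer's configuration control."""
    def __init__(self):
        self.items = {}        # design artifacts keyed by item name
        self.baselines = {}    # frozen snapshots keyed by review

    def update(self, item: str, revision: str) -> None:
        self.items[item] = revision

    def establish_baseline(self, review: str) -> None:
        # Freeze a snapshot at a control point (e.g., SDR, PDR, CDR).
        self.baselines[review] = copy.deepcopy(self.items)

config = DevelopmentalConfiguration()
config.update("SUBSYSTEM spec", "rev A")
config.establish_baseline("PDR")
config.update("SUBSYSTEM spec", "rev B")   # design keeps evolving
config.establish_baseline("CDR")
print(config.baselines["PDR"], config.baselines["CDR"])
```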
Author’s Note 24.2 Although it isn’t explicitly shown in Figure 24.1, validation activities continually occur within the System Developer’s program organization. Owners VALIDATE lower level design solution implementations in terms of documented use case-based requirements.
Design requirements established at the Critical Design Review (CDR) (13) provide the basis for procuring and developing components. As each component is completed, the item is verified for compliance to its current design requirements baseline.
Successive levels of components progress through levels of System Integration, Test, and Evaluation (SITE) and are verified against their respective item development specifications (IDSs). Test Readiness Reviews (TRRs) (14) are conducted at various levels of integration to verify that all aspects of a configuration and test environment are ready to commence testing with low risk.
When the SYSTEM level of integration is ready to be verified against the SPS (4), a System Verification Test (SVT) (15) is conducted. The SVT must answer the question “Did we build the system or product RIGHT?”—in accordance with the SPS (4) requirements.
Following the SVT, a Functional Configuration Audit (FCA) (16) is conducted to authenticate the SVT results, via quality records (QRs), as compliant with the SPS (4) requirements. The FCA may be followed by a Physical Configuration Audit (PCA) (17) to authenticate, by physical measurement, compliance of items with their respective design requirements. On completion of the FCA (16) and PCA (17), a System Verification Review (SVR) (18) is conducted to certify the results of the FCA and PCA.
Depending on the terms and conditions (T&Cs) of the System Development Agreement (6), completion of the SVR (18) serves as a prerequisite for final system or product delivery and acceptance (19) by the Acquirer for the User. For some agreements an Operational Test and Evaluation (OT&E) (20) may be required. In preparation for the OT&E (20), the User or an Independent Test Agency (ITA) representing the User’s interests may be employed to conduct scenario-based field exercises using the system or product under actual or similar OPERATING ENVIRONMENT conditions.
During OT&E (20), Acquirer System Validation (21) activities are conducted to answer the question “Did we acquire the RIGHT system or product?”—as documented in the ORD or the SOO, whichever is applicable. Depending on the scope of the contract (5), corrective actions may be required during OT&E (20) for any design flaws, latent defects, deficiencies, and the like. Following OT&E (20), the ITA prepares an assessment and recommendations.
Author’s Note 24.3 Although the System Development contract (6) may be complete, the User performs system verification and validation activities continuously throughout the System Development and System Operations and Support (O&S) phases of the system product life cycle. V&V activities expand to encompass organizational and system missions. As competitive and adversarial threats in the OPERATING ENVIRONMENT evolve and maintenance costs increase, “gaps” emerge in achieving organizational and system missions with existing capabilities. The degree of urgency to close these gaps subsequently leads to the next system or product development or upgrade of existing capabilities.
Now that we have established the V&V strategy of system development, the question is: HOW do we implement it? This brings us to our next topic, implementing the System Development Phase.
24.3 IMPLEMENTING THE SYSTEM DEVELOPMENT PHASE
The workflow during the System Development Phase consists of five sequential processes, as illustrated in Figure 24.2:

1. System Design Process
2. Component Procurement and Development Process
3. System Integration, Test, and Evaluation (SITE) Process
4. Authenticate System Baselines Process
5. Operational Test and Evaluation (OT&E) Process

Figure 24.2 The System Development Process Workflow

While the general workflow appears to be sequential, highly iterative feedback loops connect each process to its predecessors. As Figure 24.2 also shows, the Technical Management Process and the Decision Support Process span and support all five processes.
The System Development Phase begins at contract award and continues through deliverable system acceptance by the Acquirer and User. During this phase the System Developer or Service Provider implements the approach that convinced the Acquirer the organization could perform on the contract. Remember those brochureware phrases: well-organized; seamless organization; highly efficient; highly trained, high-performing teams; no problem; and so on.
The System Development Phase includes those technical activities required to translate the contract performance specifications into a physical system solution. We refer to the initial system(s) as the first article of the Developmental Configuration. Throughout the phase, the highly iterative system design solution evolves through a progression of maturity stages. Each stage of maturity typically consists of a series of design reviews with entry and exit criteria supported by analyses, prototypes, and technology demonstrations. The reviews culminate in design baselines that capture snapshots of the evolving Developmental Configuration. When the system design solution is formally approved at a Critical Design Review (CDR), the Developmental Configuration provides the basis for component procurement and/or development.
Procured and developed components are inspected, integrated, and verified against their respective design requirements, drawings, and performance specifications at various levels of integration. The intent of verification is to answer the question: “Did we develop the system RIGHT?”—according to the specification requirements. The integration culminates with a System Verification Test (SVT) against the System Performance Specification (SPS). Since the System Development Phase focuses on development of the system, product, or service, testing throughout SVT is referred to as Developmental Test and Evaluation (DT&E).
When the first article system(s) of the Developmental Configuration has been verified as meeting its SPS requirements, one of two options may occur, depending on contract requirements. The system may be deployed to either of the following:
1. Another location for validation testing by the User or an Independent Test Agency (ITA) representing the User’s interests.
2. The User’s designated field site for installation, checkout, integration into the User’s Level 0 system, and final acceptance.
Validation testing, which is referred to as Operational Test and Evaluation (OT&E), enables the User to determine whether they specified and procured the RIGHT SYSTEM to meet their validated operational needs. Any deficiencies are documented as discrepancy reports (DRs) and resolved in accordance with the terms and conditions (Ts&Cs) of the contract.
After an initial period of system operational use in the field to correct latent defects such as design flaws, errors, and deficiencies, and to collect field data that validate system operations, a decision is made to begin the System Production Phase, if applicable. If the User does not intend to place the system or product in production, the Acquirer formally accepts system delivery, thereby initiating the System Operations and Support (O&S) Phase.
Technical Management Process
The technical orchestration of the System Development Phase resides with the Technical Management Process. The objective of this process is to plan, staff, direct, and control product-based team activities focused on delivering their assigned items within technical, technology, cost, schedule, and risk constraints.
Decision Support Process
The Decision Support Process supports all aspects of the System Development Phase process decision-making activities. This includes conducting analyses, trade studies, modeling, simulation, testing, and technology demonstrations to collect data to validate models and provide prioritized recommendations to support informed decision making within each of the workflow processes.
Exit to System Production Phase or System Deployment Phase
When the System Development Phase is completed, the workflow progresses to the System Production Phase or System Operations and Support (O&S) Phase, whichever is applicable.
Guidepost 24.1 Based on a description of the System Development Phase workflow processes, let’s investigate HOW the V&V strategy is integrated with the workflow progression.
24.4 APPLYING V&V TO THE SYSTEM DEVELOPMENT WORKFLOW
So far we have introduced the System Development Phase processes and described the sequential workflow. Each of these processes enables the System Developer to accomplish specific objectives such as:
1. Select and mature a design from a set of viable candidate solutions based on an analysis of alternatives.
2. Procure, fabricate, code, and test PART level components.
3. Perform multi-level system integration, test, and evaluation.
4. Verify that items at each level of integration satisfy specification requirements.
5. Validate, if applicable, the integrated SYSTEM as meeting User operational needs.

Figure 24.3 Development Process Context: System Verification and Validation Concept
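The multi-level verification objective in item 4 can be pictured as a bottom-up rollup, sketched below (level names follow the chapter; the pass/fail logic is an assumed simplification): an item is verified only if it meets its own specification and every lower-level item it integrates has already passed.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Item:
    name: str
    level: str                      # PART, SUBASSEMBLY, ..., SYSTEM
    meets_spec: bool = True         # result of this item's own verification
    children: List["Item"] = field(default_factory=list)

def verify(item: Item) -> bool:
    """Bottom-up verification: an item passes only if it meets its own
    specification AND every lower-level item it integrates has passed."""
    return item.meets_spec and all(verify(c) for c in item.children)

system = Item("Radio", "SYSTEM", children=[
    Item("Receiver", "SUBSYSTEM", children=[Item("Mixer", "PART")]),
    Item("Antenna", "SUBSYSTEM", meets_spec=False),   # fails its spec
])
print(verify(system))   # False: a SUBSYSTEM failure blocks SYSTEM verification
```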
Until the system is delivered and accepted by the Acquirer, the Developmental Configuration, which captures the various design baselines, is always in a state of evolution. It may require redesign or rework to correct latent defects, deficiencies, errors, and the like. So how do we minimize the impact of these corrections? This brings us to the need for an integrated verification and validation (V&V) strategy. To facilitate our discussion, Figure 24.3 provides a framework.
System Performance Specification (SPS) V&V Strategy
During the System Procurement Phase, Operational Needs (1) identified by the User and Acquirer are documented (2) in the System Performance Specification (SPS) (3). This is a critical step. The reason is that by this point the Acquirer, in collaboration with the User, has partitioned the organi- zational problem space into one or more solution spaces.
Each solution space is bounded by requirements specified in its SPS. If there are any errors in tactical or engineering judgment, they manifest themselves in the requirements documented in the SPS. Therefore the challenge question for the Acquirer, User, and ultimately the System Developer is: Have we specified the RIGHT system—solution space—to satisfy one or more operational needs—problem space? How do we answer this question?
SPS requirements should be subjected to Requirements Validation (4) against the Operational Need (1) to validate that the right solution space description has been accurately and precisely bounded by the SPS (3).
Author’s Note 24.4 A word of caution: any discussion with the User and Procurement Team regarding System Performance Specification (SPS) requirements validation requires tactful professionalism and sensitivity. In effect, you are validating that the Acquirer performed their job correctly. On the one hand, they may be grateful that you identified potential deficiencies in their assessment; conversely, you may cause offense! Approach any discussions in a tactful, well-conceived, professional manner.
SE Design V&V Strategy
When the SPS requirements have been validated, the SPS (3) serves as originating or source requirements inputs to system design. During the system design, Interim Design Verification (6) is performed on the evolving system design solution by tracing allocated requirements back to the SPS and prototyping design areas for RISK mitigation and critical operational or technical issue (COI/CTI) resolution. Design Validation (7) activities should also be performed to confirm that the User and Acquirer, as the User’s technical representative, agree that the evolving System Design Solution satisfies their needs.
Author’s Note 24.5 The Interim Design Verification (6) and Interim Design Validation (7), or design “verification and validation,” are not considered complete until the system has been verified, validated, and legally accepted by the User via the Acquirer in accordance with the terms of the contract. Hence the term “interim” is applied.
Design verification and validation occurs throughout the SE Design Process. Validation is accomplished via: 1) technical reviews (e.g., SDR, SSR, PDR, and CDR) and 2) technical demonstrations. Communications media such as conceptual views, sketches, drawings, presentations, technical demonstrations, and/or prototypes are used to obtain Acquirer and User validation acceptance and approval, as appropriate. On completion of a system level CDR, workflow progresses to Component Procurement and Development (8).
Component Procurement and Development V&V Strategy
During Component Procurement and Development (8), design requirements from the System Design (5) serve as the basis for procuring, fabricating, coding, and assembling system components. Each component undergoes component verification (10) against its Design Requirements (5). As components are verified, workflow progresses to System Integration, Test, and Evaluation (SITE) (11).
System Integration, Test, and Evaluation (SITE) V&V Strategy
When components complete verification, they enter System Integration, Test, and Evaluation (11). Activities performed during this process are often referred to as developmental test and evaluation (DT&E) (12). DT&E occurs throughout the entire System Development Phase, from System Design (5) through SITE (11). The purpose of DT&E in this context is to verify that the system and its embedded subsystems, HWCIs/CSCIs, assemblies, and parts are compatible and interoperable with one another and with the system’s external interfaces.
To accomplish the SITE Process, Verification Methods and Requirements (13) defined in the SPS (3) and multi-level item development specifications (IDSs) are used to develop Verification Procedures (14). Verification methods—consisting of inspection, analysis, test, demonstration, and similarity—are defined by the SPS for each requirement and used as the basis for verifying compliance. One or more detailed test procedures (14) that prescribe the test environment configuration—such as the OPERATING ENVIRONMENT (initial and dynamic), data inputs, and expected test results—support each verification method.
During SITE (11), the System Developer formally tests the SYSTEM with representatives of the Acquirer and User as witnesses. Multi-level system verification activities at appropriate integration points (IPs) review (15) test data and results against the verification procedures (14) and expected results specified in the appropriate development specifications. When the system completes SITE (11), a formal System Verification Test (SVT) corroborates the system’s capabilities and performance against the SPS.
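The bookkeeping behind this can be sketched as a coverage check (the requirement IDs, method assignments, and procedure names below are hypothetical): every SPS requirement must carry one of the five verification methods, and every requirement must be exercised by at least one verification procedure.

```python
# Hypothetical SPS verification matrix: requirement -> assigned method.
VALID_METHODS = {"inspection", "analysis", "test", "demonstration", "similarity"}
sps_matrix = {
    "SPS-001": "test",
    "SPS-002": "analysis",
    "SPS-003": "demonstration",
}
# Procedures executed during SITE, with the requirements each one verified.
procedures = {
    "VP-101": ["SPS-001", "SPS-003"],
    "VP-102": ["SPS-002"],
}

verified = {req for reqs in procedures.values() for req in reqs}
for req, method in sps_matrix.items():
    assert method in VALID_METHODS, f"{req}: unknown method {method}"
uncovered = set(sps_matrix) - verified
print("Uncovered requirements:", uncovered or "none")
```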
Authenticate System Baselines V&V Strategy—First Pass
When the SVT is completed, workflow progresses to Authenticate System Baselines (18). The process contains two authentication processes that may be performed at different times, depending on contract requirements. The first authentication consists of a Functional Configuration Audit (FCA) (17). Using the SPS and other development specification requirements as a basis, the FCA reviews the results of the As-Designed, Built, and Verified system that has just completed the SVT to verify that it fully complies with the SPS functional and performance requirements. The FCA may be conducted at various levels of IPs during the SITE Process.
On successful completion of the FCA, the system may be deployed to a User’s test range or site to undergo Operational Test and Evaluation (OT&E) (19). On completion of OT&E, the system may reenter the Authenticate System Baselines process for the second pass.
Validate System Process V&V Strategy
Up to this point, the system is verified by SVT and audited by FCA to meet the SPS requirements. The next step is to validate that the As-Designed, Built, and Verified system satisfies the User’s Operational Needs (1) as part of the Operational Test and Evaluation (OT&E) (20) activities.
System validation activities (20) demonstrate how well the fielded system performs missions in its intended operational environment as originally envisioned by the User. Any system latent defects and deficiencies discovered during system validation are recorded as problem reports and submitted to the appropriate decision authority for disposition and corrective action, if required.
Author’s Note 24.6 During system validation, a determination is made as to whether an identified deficiency is within the scope of the original contract’s SPS. This is a critical issue. For example, did the User or Acquirer overlook a specific capability as an operational need and fail to document it in the SPS? This point reinforces the need to perform a credible Requirements Validation (4) activity prior to or immediately after Contract Award to AVOID surprises during system acceptance. If the deficiency is not within the scope of the contract, the Acquirer may be confronted with modifying the contract and funding additional design and implementation efforts to incorporate changes that correct the deficiency.
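A minimal sketch of that disposition logic follows (the report fields and status strings are invented; the decision rule paraphrases this note):

```python
from dataclasses import dataclass

@dataclass
class DiscrepancyReport:
    dr_id: str
    description: str
    in_sps_scope: bool  # was the capability specified in the original SPS?
    status: str = "OPEN"

def disposition(dr: DiscrepancyReport) -> str:
    """In-scope deficiencies go to corrective action; out-of-scope ones
    may force the Acquirer to modify (and fund) the contract."""
    dr.status = "CORRECTIVE ACTION" if dr.in_sps_scope else "CONTRACT MOD REQUIRED"
    return dr.status

print(disposition(DiscrepancyReport("DR-001", "Latency exceeds spec", True)))
print(disposition(DiscrepancyReport("DR-002", "Capability not in SPS", False)))
```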
Authenticate System Baselines V&V Strategy—Second Pass
During the second pass through Authenticate System Baselines (18), the As-Designed, Built, Verified, and Validated (20) system configuration is subjected to a Physical Configuration Audit (PCA) (17). The PCA audits the As-Designed, Built, and Verified physical system to determine if it fully complies with its design requirements such as drawings and parts lists. On successful completion of the PCA (17), a System Verification Review (SVR) is conducted to:
1. Certify the results of the FCA and PCA.
2. Resolve any outstanding FCA/PCA issues related to those results.
3. Assess readiness to ship.
On successful completion of the OT&E Process (19), a Verified and Validated System (22) should be ready for delivery to the User via formal acceptance by the Acquirer.
Guidepost 24.2 Integrating the V&V strategy into the System Development Phase workflow describes the mechanics of ensuring the evolving Developmental Configuration is progressing to plan. However, performing to a plan does not guarantee that the system can be completed successfully on schedule and within budget. The development, especially for large, complex systems, must resolve critical operational and technical issues (COIs/CTIs), each with one or more risks. So how do we mitigate these risks? This brings us to our next topical discussion, the roles of Developmental Test and Evaluation (DT&E) and Operational Test and Evaluation (OT&E).
24.5 RISK MITIGATION WITH DT&E AND OT&E
Satisfactory completion of system development requires that a robust strategy be established up front. We noted earlier that although the workflow appears to be sequential, the processes are connected by highly iterative feedback loops, as illustrated in Figure 24.4. To accomplish this, two types of testing occur during the System Development Phase: 1) Developmental Test and Evaluation (DT&E) and 2) Operational Test and Evaluation (OT&E).
Developmental Test and Evaluation (DT&E)
Developmental testing (DT) serves as a risk mitigation approach to ensure that the evolving system design solution, including its components, complies with the System Performance Specification (SPS) requirements. DT focuses on two themes:
1. Are we building the system or product RIGHT—that is, using best practices in compliance with the SPS?
2. Do we have a design solution that represents the best, acceptable risk solution for a given set of technical, cost, technology, and schedule constraints?
Figure 24.4 System V & V—Technical Perspective
The DSMC T&E Management Guide states that the objectives of DT&E are to:
1. Identify potential operational and technological capabilities and limitations of the alternative concepts and design options being pursued;
2. Support the identification of cost-performance tradeoffs by providing analyses of the capabilities and limitations of alternatives;
3. Support the identification and description of design technical risks;
4. Assess progress toward meeting critical operational issues (COIs), mitigation of acquisition technical risk, achievement of manufacturing process requirements and system maturity;
5. Assess validity of assumptions and conclusions from the analysis of alternatives (AOA);
6. Provide data and analysis in support of the decision to certify the system ready for operational test and evaluation (OT&E);
7. In the case of automated information systems, support an information systems security certification prior to processing classified or sensitive data and ensure a standards conformance certification.
(Source: Adapted from DSMC Test & Evaluation Management Guide, App. B, p. B-6)
DT&E is performed throughout the System Design Process, the Component Procurement and Development Process, and the SITE Process. Each process task verifies that the evolving and maturing system or product design solution—the Developmental Configuration—fully complies with the SPS requirements. This is accomplished via reviews, proof of principle and proof of concept demonstrations, technology demonstrations, engineering models, simulations, brassboards, and prototypes.
On completion of verification, the physical system or product enters OT&E, whereby it is validated against the User’s documented operational need.
Operational Test and Evaluation (OT&E)
Operational test and evaluation (OT&E) activities are typically conducted on large, complex systems such as aircraft and military acquirer activity systems. The theme of OT&E is: Did we acquire the RIGHT system or product to satisfy our operational need(s)? OT&E consists of subjecting the test articles to actual field environmental conditions with operators from the User’s organization. An Independent Test Agency (ITA) designated by the Acquirer or User typically conducts this testing. To ensure independence and avoid conflicts of interest, the contract precludes the System Developer from direct participation in OT&E; the System Developer may, however, provide maintenance support, if required.
Since OT&E results depend on how well the system’s Users perform with the new system or product, the ITA or System Developer trains the User’s personnel to safely operate the system. This may occur prior to system deployment following the SVT or on arrival at the OT&E site.
During the OT&E, the ITA trains the User’s personnel in conducting various operational use cases and scenarios under actual field OPERATING ENVIRONMENT conditions. The use cases and scenarios are structured to evaluate system operational utility, suitability, availability, and effectiveness. ITA personnel instrument the SYSTEM to record and observe the human–system interactions and responses. Results of the interactions are scored, summarized, and presented as recommendations.
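As a simplified illustration of such scoring (the scenarios and numbers are invented; the four measures come from the text), per-scenario results can be rolled up into a summary per measure:

```python
# Hypothetical scoring rollup: scenario results averaged per measure.
MEASURES = ["utility", "suitability", "availability", "effectiveness"]
scenario_scores = {
    "day mission":   dict(zip(MEASURES, [0.90, 0.80, 0.95, 0.85])),
    "night mission": dict(zip(MEASURES, [0.70, 0.90, 0.90, 0.75])),
}

summary = {
    m: sum(scores[m] for scores in scenario_scores.values()) / len(scenario_scores)
    for m in MEASURES
}
print({m: round(v, 2) for m, v in summary.items()})
```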
On successful completion of the DT&E and OT&E and the follow-on Authenticate System Baselines Process, the verified and validated system or product is delivered to the Acquirer or User for final acceptance.
24.6 GUIDING PRINCIPLES
In summary, the preceding discussions provide the basis with which to establish the guiding prin- ciples that govern system development workflow strategy practices.
Principle 24.1 A system development strategy must have three elements:
1. A strategy-based roadmap to get from Contract Award to system delivery and acceptance, supported by incremental verification and validation.
2. A plan of action for implementing the strategy.
3. Documented objective evidence that you performed to the plan via work product quality records.
Principle 24.2 System verification and validation applies to every stage of product development workflow beginning at Contract Award and continuing until system delivery and acceptance.
Principle 24.3 Developmental test and evaluation (DT&E) is performed by the System Devel- oper to mitigate Developmental Configuration risks; Users employ the operational test and evalu- ation (OT&E) to determine if they acquired the right system.
24.7 SUMMARY
During our discussion of the system development workflow strategy, we introduced the System Development Phase processes:
1. System Design
2. Component Procurement and Development
3. System Integration, Test, and Evaluation (SITE)
4. Authenticate System Baselines
5. Operational Test and Evaluation (OT&E)
Based on the System Development Phase processes, we described an overall workflow strategy for verification and validation. This strategy provides the high-level framework for transforming a User’s validated operational need into a deliverable system, product, or service.
We introduced the concepts of developmental test and evaluation (DT&E) and operational test and evaluation (OT&E). Our discussion covered how DT&E and OT&E serve as key verification and validation activities and their relationship to the system development workflow strategy.
GENERAL EXERCISES
1. Answer each of the What You Should Learn from This Chapter questions identified in the Introduction.
2. Using a system listed in Table 2.1, develop a description of the activities for each System Development Phase process to be employed and integrated into an overall V & V strategy.
ORGANIZATIONAL CENTRIC EXERCISES
1. Research your organization’s command media for guidance and direction in implementing the System Development Phase from an SE perspective.
(a) What requirements are levied on SE contributions?
(b) What overall process is required and how do SEs contribute?
(c) What SE work products and quality records are required?
(d) What verification and validation activities are required?
2. Contact a small, medium, and a large contract program within your organization. Interview the Technical Director or Project Engineer to identify the following information:
(a) Request the individual to graphically depict their development strategy.
(b) What factors drove them to choose the implementation strategy?
(c) What were some of the lessons learned from developing and implementing the strategy that would influence their approach next time?
(d) How was the V & V strategy implemented?
REFERENCES
Defense Systems Management College (DSMC). 1998. DSMC Test and Evaluation Management Guide, 3rd ed. Ft. Belvoir, VA: Defense Acquisition Press.
IEEE Std 610.12-1990. 1990. IEEE Standard Glossary of Software Engineering Terminology. New York, NY: Institute of Electrical and Electronics Engineers (IEEE).
MIL-STD-973. 1992. Military Standard: Configuration Management. Washington, DC: Department of Defense (DoD).
MIL-HDBK-1908B. 1999. DoD Definitions of Human Factors Terms. Washington, DC: Department of Defense (DOD).
ADDITIONAL READING
Defense Systems Management College (DSMC). 2001. Glossary: Defense Acquisition Acronyms and Terms, 10th ed. Ft. Belvoir, VA: Defense Acquisition University Press.
MIL-HDBK-470A. 1997. Designing and Developing Maintainable Products and Systems. Washington, DC: Department of Defense (DoD).
MIL-STD-1521B (canceled). 1985. Military Standard: Technical Reviews and Audits for Systems, Equipments, and Computer Software. Washington, DC: Department of Defense (DoD).
ASD-100 Architecture and System Engineering. 2003. National Air Space System—Systems Engineering Manual. Washington, DC: Federal Aviation Administration (FAA).
Defense Systems Management College (DSMC). 2001. Systems Engineering Fundamentals. Ft. Belvoir, VA: Defense Acquisition University Press.
Chapter 25
System Design, Integration, and Verification Strategy
25.1 INTRODUCTION
Our discussion of the system development workflow strategy established a sequence of highly interdependent processes that depict the workflow for translating the System Performance Specification (SPS) into a system design solution. The strategy provides a frame of reference to:
1. Verify compliance with the SPS requirements.
2. Validate that the deliverable system satisfies the User’s validated operational needs.
The workflow strategy identified two key processes that form the basis for designing and developing a system: the System Design Process and the System Integration, Test, and Evaluation (SITE) Process.
This chapter focuses on the strategies for implementing the System Design Process and the SITE Process. Each strategy expands its process into lower level processes. Finally, we integrate the two strategies into an overall strategy referred to as the V-Model of system development.
What You Should Learn from This Chapter
1. What is the basic strategy for implementing the System Design Process of the System Development Phase?
2. What is the basic strategy for implementing the SITE Process of the System Development Phase?
3. What is the V-Model of system design and development?
4. What is an integration point?
5. How do the system design process strategy and the SITE Process integrate?
Definitions of Key Terms
• Corrective Action The set of tasks required to correct or rescope specification content, errors, or omissions; correct design flaws or errors; rework components due to poor workmanship or defective materials or parts; or correct errors or omissions in test procedures.
• Discrepancy Report (DR) A report that identifies a condition in which a document or test results indicate a noncompliance with a capability and performance requirement specified in a performance or item development specification.
• Integration Point Any one of a number of confluence points during the System Integration, Test, and Evaluation (SITE) Process where two or more entities are integrated.
• V-Model A graphical model that illustrates the time-based, multi-level strategy for (1) decomposing specification requirements, (2) procuring and developing physical components, and (3) integrating, testing, evaluating, and verifying each set of integrated components.
25.2 SYSTEM DESIGN PROCESS STRATEGY
The System Design Process of the System Development Phase employs a highly iterative, top-down/bottom-up/lateral process. Since the process requires analysis and decomposition/expansion of abstract, high-level SPS requirements into lower levels of detail to manage complexity, each lower level design activity occurs in a later time increment. Figure 25.1 illustrates this sequencing. Each horizontal bar in the figure:
1. Represents design activities that may range from very small to very large levels of effort (LOE) over time.
2. Includes preliminary activities that ramp up with time.
This graphic is representative of most development programs. Some programs may be unprecedented and require more maturity at higher levels before lower level design activities are initiated, as evidenced by the white horizontal bars. Other systems may be precedented and reuse some or most of an existing design solution. Thus preliminary design activities at all levels may be initiated with a small effort at Contract Award and ramp up over time. Given this backdrop, let's describe the strategy for creating the multi-level design solution.
Figure 25.1 Time-Based Sequencing of SE Design Activities
Guidepost 25.1 At this point we have established the theoretical approach for performing system design. This brings us to the next topic, implementing the System Design Process.
25.3 IMPLEMENTATION OF THE SYSTEM DESIGN PROCESS
The initial implementation of the System Design Process occurs during the System Procurement Phase of the system/product life cycle. During the System Procurement Phase, the System Developer proposes solution responses to the Acquirer's formal Request for Proposal (RFP) solicitation. When the System Development Phase begins at Contract Award (CA), the System Design Process is repeated to develop refinements in the PROPOSED system design solution. These refinements are implemented to accommodate requirements changes that may have occurred as part of the final contract negotiation and maturation of the proposed design solution.
The actual implementation involves several design approaches, among them Waterfall, Incremental Development, and Spiral Development, which are introduced in a later chapter. The approach selected depends on criteria such as:
1. Level of understanding of the problem and solution spaces.
2. The maturity of the SPS requirements.
3. Level of risk.
4. Critical operational or technical issues (COIs/CTIs).
Some people may categorize Figure 25.1 as the Waterfall Model for system development. While it may appear to resemble a waterfall, it is not a true waterfall design. The Waterfall Model presumes each level of design to be completed just before the next level is initiated (as illustrated in Figure 27.1). The fallacy with the Waterfall approach is:
• You must perform lower level analysis and preliminary design to be able to understand the requirements decisions at a higher level and their lower level ramifications.
• As a typical entry criterion for the Critical Design Review (CDR), the total system design is expected to have a reasonable level of maturity sufficient to commit component procurement and development resources with acceptable risk. Remember, at CDR most system design solutions are UNPROVEN—that is, as the fully integrated system. Therefore, the system level design solution is NOT considered officially complete UNTIL the system has been formally accepted in accordance with the terms and conditions (T&Cs) of the contract.
As Figure 25.1 illustrates, each level of design occurs concurrently at various levels of effort throughout the System Design Process. Since every design level incrementally evolves to maturity over time, the level of activity of each design bar diminishes toward CDR. By CDR, the quantity of requirements and performance allocation changes should have diminished and stabilized as objective evidence of design maturity.
Concurrent, Multi-level Design Activities
Each activity (bar) in Figure 25.1 consists of a shaded Preliminary Design segment followed by a Design Activity segment.
Preliminary Design segments are shaded from left to right, with darker shading representing increasing level of effort (LOE). These preliminary design activities may involve analyses, modeling, and simulation to investigate lower level design issues in support of higher level decision making by one or more individuals on a part-time or full-time basis. Consider the following example:
EXAMPLE 25.1
An Integrated Product Team (IPT) may be tasked, or a subcontractor contracted, to perform preliminary analysis and design with the understanding that a task or contract option may shift work activities at any level from feasibility studies to actual design activities.
The distinction between Preliminary Design (shaded) and Design Activity segments is graphically separated for discussion purposes. Within System Developer organizations, there is no break between segments—just an expansion in LOE from one or two people to five or ten.
Design Maturation Reviews
Throughout the System Design Process, formal and informal program reviews are conducted to assess the maturity, completeness, consistency, and risk of the evolving system design solution.
Referral A detailed description of these reviews, as used in the verification and validation of the evolving design, appears in Chapter 54, Technical Review Practices.
Guidepost 25.2 Given an overview of the system design strategy, let's shift our focus to the system integration, test, and evaluation (SITE) strategy.
25.4 IMPLEMENTING THE SYSTEM DESIGN PROCESS STRATEGY
To illustrate HOW the system design activities are implemented, consider the example illustrated in Figure 25.2. In the figure the SPS (1) is analyzed, a SYSTEM level Engineering Design (3) is selected, and requirements are allocated (2) to PRODUCTS A and B. SPS requirements allocated to PRODUCT B are flowed down (5) and captured via the PRODUCT B Development Specification (7). PRODUCT B Development Specification requirements are then traced (6) back to the source or originating requirements of the SPS.
To fully understand the implications of requirements allocated to PRODUCT B, a design team initiates PRODUCT B’s Engineering Design (9). PRODUCT B’s Engineering Design (9) is selected from a set of viable candidates. PRODUCT B’s Development Specification requirements are then allocated (8) to SUBSYSTEMS B1 and B2.
At each level, formal and informal technical review(s) are conducted to verify (4), (10), (16), and (22) that the evolving designs comply with the requirements allocated from their respective specifications. The process repeats until all levels of abstraction have been expanded into detailed designs for review and approval at the SYSTEM level CDR.
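To make the allocation and traceability bookkeeping concrete, the following minimal Python sketch models a requirement record with a flowdown trace link. The class and field names (Requirement, trace_source) and the requirement identifiers are illustrative inventions for this example, not artifacts of any particular requirements management tool.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Requirement:
    """A single specification requirement at some level of abstraction."""
    req_id: str
    text: str
    spec: str                           # owning specification, e.g., "SPS"
    trace_source: Optional[str] = None  # upward trace to the source requirement

# An SPS requirement allocated to PRODUCT B (steps 2 and 5 above).
sps_req = Requirement("SPS-042", "The system shall ...", spec="SPS")

# Flowdown: the allocation is captured in PRODUCT B's Development
# Specification (step 7), recording traceability back to the SPS (step 6).
prod_b_req = Requirement("B-017", "PRODUCT B shall ...",
                         spec="PRODUCT B Development Specification",
                         trace_source=sps_req.req_id)

assert prod_b_req.trace_source == "SPS-042"  # trace back to the source
```

Keeping the trace link on each record allows vertical traceability to be checked mechanically at each technical review.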
Guidepost 25.3 At this point we have investigated the strategy that depicts how the SPS is translated into a detailed design for approval at the CDR. Next we explore the SITE strategy that demonstrates that the various levels of integrated, physical components satisfy their specification requirements.
25.5 SYSTEM INTEGRATION, TEST, AND EVALUATION (SITE) STRATEGY
The system design strategy is based on a hierarchical decomposition framework that partitions a complex system solution space into lower level solution spaces until the PART level is reached.
Figure 25.2 System Engineering Design Strategy
For system integration we reverse this strategy by integrating the PART level solutions into higher levels of complexity. Thus we establish the fundamental strategy for SITE.
The SITE Process is implemented by integrating physical components that have been verified at a given level of abstraction into successively higher levels. We refer to each integration node as an Integration Point (IP). Figure 25.3 provides an illustrative example of the time-based graphical sequence of our discussion.
Multi-level Integration and Verification Activities
Suppose that we have a system that consists of multiple levels of abstraction. Physical hardware PARTS and/or computer software units (CSUs) that have been verified are integrated into higher level hardware SUBASSEMBLIES and/or computer software components (CSCs). Each SUBASSEMBLY/CSC is then formally verified for compliance with its respective specification or design requirements. The process continues until SYSTEM level integration and verification is completed via a formal System Verification Test (SVT).
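Viewed as a data structure, this bottom-up sequence is a post-order traversal of the system's composition tree: children are integrated and verified before their parent. The sketch below is a minimal illustration; the Entity class and the example PART/CSU names are hypothetical.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Entity:
    name: str                                    # e.g., "PART A1", "SYSTEM"
    children: List["Entity"] = field(default_factory=list)

def site(entity: Entity) -> None:
    """Integrate and verify bottom-up: children are processed first
    (post-order), so every component is verified before integration."""
    for child in entity.children:
        site(child)
    if entity.children:
        names = [c.name for c in entity.children]
        print(f"Integration Point: {names} -> {entity.name}")
    print(f"Verify {entity.name} against its specification")

system = Entity("SYSTEM", [
    Entity("SUBASSEMBLY A", [Entity("PART A1"), Entity("PART A2")]),
    Entity("CSC B", [Entity("CSU B1"), Entity("CSU B2")]),
])
site(system)  # the final SYSTEM level verification is the SVT
```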
Applying Verification Methods to SITE
Each integration step employs the INSPECTION, ANALYSIS, DEMONSTRATION, or TEST verification methods prescribed by the respective development specification. Verification methods are implemented via acceptance test procedures (ATPs) formally approved by the program and the Acquirer, if applicable, and maintained under program CM control. Representatives from the System Developer, Acquirer, and User organizations, as appropriate, participate in the formal event and witness verification of each requirement.
Figure 25.3 System Integration, Test, and Evaluation (SITE) Sequencing
Author’s Note 25.1 The level of formality required for witnessing verification events varies by contract and organization. For some systems, certified System Developer testers at lower levels may be permitted to verify some components without a quality assurance (QA) witness. At higher levels the Acquirer may elect to participate and invite the User. Consult your contract and orga- nization’s policies and protocols for specific guidance. For some systems, critical technologies may require involvement of all parties in formal verification events at lower levels.
Correcting Design Flaws, Errors, Defects, and Deficiencies
If discrepancies between actual results and expected results occur, corrective actions are initiated. Consider the following example:
EXAMPLE 25.2
Corrective actions include:
1. Modification or updating of the specification requirements.
2. Redesign of components.
3. Design changes or error corrections.
4. Replacement of defective parts or materials, etc.
5. Workmanship corrections.
6. Retraining of certified testers, etc.
When the verification results have been approved and all critical Discrepancy Reports (DRs) are closed, the item is then integrated at the next higher level. In our example, the next level consists of hardware configuration items (HWCIs) and computer software configuration items (CSCIs).
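A minimal sketch of this gating rule follows; the DiscrepancyReport fields and the ready_for_next_level helper are hypothetical illustrations, not part of any formal CM or QA system.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class DiscrepancyReport:
    dr_id: str
    critical: bool
    closed: bool = False

def ready_for_next_level(results_approved: bool,
                         drs: List[DiscrepancyReport]) -> bool:
    """Gate: integrate upward only when verification results are approved
    and every critical DR is closed (non-critical DRs may remain open)."""
    return results_approved and all(dr.closed for dr in drs if dr.critical)

drs = [DiscrepancyReport("DR-101", critical=True, closed=True),
       DiscrepancyReport("DR-102", critical=False)]
print(ready_for_next_level(True, drs))  # True: the open DR is non-critical
```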
25.6 IMPLEMENTING THE SITE PROCESS STRATEGY
Our previous discussions presented the top-down, multi-level, specification-driven SE Design Process. Here we present a bottom-up, "mirror image" discussion of HOW the multi-level SITE Process and system verification are accomplished.
Our discussion of the system integration, test, and evaluation (SITE) strategy uses Figure 25.4 as a reference. Let's begin by highlighting key areas of the figure.
▪ The left side of the chart depicts the multi-level specifications used by the System Engineering (SE) Design Process to create the system design solution.
▪ The right side of the chart depicts a hierarchical structure that represents how system components, beginning as verified hardware PARTS or computer software units (CSUs), are integrated to form the SYSTEM at the top of the structure.
▪ The horizontal gray arrows between these two columns represent the linking of verification methods, acceptance test procedures (ATPs), and verification results at each level.
Author’s Note 25.2 As previously discussed in the SE Design Process, the contents of this chart serve as an illustrative example for discussion purposes. Attempting to illustrate all the combina-
System Verification
Verification Methods & ATPs
Product B Verification
33 SYSTEM Level
Integration
Verification Methods & ATPs
Subsystem B2 Verification
30
PRODUCT B Integration
Verification Methods & ATPs
27
CSCI B22 Verification
24
SUBSYSTEM C Integration
CSCI
B22
Verification Methods & ATPs
Figure 25.4 System Integration, & Test Strategy
tions and permutations for every conceivable system in a single chart can be confusing and imprac- tical. You, as a practicing Systems Engineer, need to employ your own mental skills to apply this concept to your own business domain, systems, and applications.
Guidepost 25.4 The preceding discussions focused on the individual system design and SITE strategies. Next we integrate these strategies into a model.
25.7 INTEGRATING SYSTEM DESIGN AND DEVELOPMENT STRATEGIES
Although the system design and SITE strategies represent the overall system development workflow, the progression has numerous feedback loops to perform corrective actions for design flaws, errors, and deficiencies. As such, the two strategies need to be integrated to form an overall strategy that enables us to address the feedback loops. If we integrate Figures 25.1 and 25.3, Figure 25.5 emerges and forms what is referred to as the V-Model of system development.

Figure 25.5 The "V" Model of System Development
The V-Model is a pseudo time-based model. In general, workflow progresses from left to right over time. However, the highly iterative characteristic of the System Design Process strategy and the corrective action aspects of SITE verification may require returning to a preceding step. Recall from above that corrective actions might involve reworking lower level specifications, designs, and components. So, as the corrective actions are implemented over time, workflow does progress from left to right to delivery and acceptance of the system.
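Under the assumption of a simple five-level hierarchy, this pseudo time-based flow with corrective-action loops can be sketched as follows. The verify stand-in and the retry limit are hypothetical placeholders for the formal verification events and program decisions described above.

```python
import random

LEVELS = ["SYSTEM", "PRODUCT", "SUBSYSTEM", "HWCI/CSCI", "PART/CSU"]

def verify(level: str) -> bool:
    """Stand-in for a formal verification event; discrepancies occur at random."""
    return random.random() > 0.3

def v_model(max_passes: int = 3) -> None:
    # Left leg of the "V": top-down decomposition of requirements and design.
    for level in LEVELS:
        print(f"Design: decompose and allocate requirements to {level} level")
    # Right leg of the "V": bottom-up integration with feedback loops.
    for level in reversed(LEVELS):
        for _ in range(max_passes):
            print(f"SITE: integrate and verify at {level} level")
            if verify(level):
                break
            # Feedback loop: return to a preceding step for corrective action.
            print(f"Corrective action: rework {level} specification/design")

random.seed(0)
v_model()
```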
Author’s Note 25.3 This point illustrates WHERE and HOW system development programs become “bottlenecked,” consuming resources without making earned value work progress because of re-work. It also reinforces the importance of investing in up-front SE as a means of minimizing and mitigating re-work risks! Despite all of the rhetoric by local heroes that SE is a non–value-
User’s System
Figure 25.5 “V” Model of System Development
25.9 Summary 273
added activity, SITE exemplifies WHY homegrown ad hoc engineering efforts falter and program cost and schedule performance reflects it.
Final Thought
Although we have not covered it in this chapter, some programs begin work from a very abstract Statement of Objectives (SOO) rather than an SPS. Where this is the case, spiral development is employed to reiterate the V-Model for incremental builds intended to mature knowledge about the SYSTEM requirements. We will discuss this topic in Chapter 27 on system development models.
25.8 GUIDING PRINCIPLES
In summary, the preceding discussions provide the basis with which to establish the guiding principles that govern system design, integration, and verification strategy practices.
Principle 25.1 System design is a highly iterative, collaborative, and multi-level process with each level dependent on maturation of higher level specification and design decisions.
Principle 25.2 A system design solution is not contractually complete until it is verified as compliant with its Acquirer-approved System Performance Specification (SPS). Technically, it is not complete until all latent defects are removed; most systems exist between these two extremes.
Principle 25.3 The number of latent defects in the fielded system is a function of the thoroughness of the effort—time and resources—spent on system integration, test, and evaluation (SITE).
25.9 SUMMARY
During our discussion of the system design, integration, and verification strategy, we described the SE design strategy that analyzes, allocates, and flows down System Performance Specification (SPS) requirements through multiple levels of abstraction to various item development specifications and item architectural designs. Next we described a strategy for integrating each of the procured or developed items into successively higher levels of integration. At each level, each item's capabilities and levels of performance are verified against their respective specifications.
1. The SE process strategy provides a multi-level model for allocating and flowing down SPS requirements to lower levels of abstraction.
2. Unlike the Waterfall Model, the SE Process strategy accommodates simultaneous, multi-level design activities including Preliminary Design activities.
3. Whereas a design at any level may be formally baselined to promote stability for lower level decision making, a design at any level is still subject to formal change management modification through the end of formal acceptance for delivery.
4. The SE design strategy includes multiple control points to verify and validate decisions prior to commitment to the next level of design activities.
5. The SITE Process implements a strategy that enables us to integrate and verify lower level components into successively higher levels until the system is fully assembled and verified.
6. The SITE Process strategy includes breakout points to implement corrective actions that often lead back to the SE Design Process.
7. Corrective actions may require revision of lower level specifications, redesign, rework of components, or retraining of test operators to correct for design flaws and errors, deficiencies, discrepancies, etc.
8. The integration of the System Design Process and SITE Process strategies produces what we refer to as the V-Model of system development.
9. The V-Model, which represents a common model used on many programs, may be performed numerous times, especially in situations such as spiral development to evolve requirements to maturity.
GENERAL EXERCISES
1. Answer each of the What You Should Learn from This Chapter questions identified in the Introduction.
ORGANIZATIONAL CENTRIC EXERCISES
1. Research your organization’s command media for direction and guidance in developing SE design and system integration, test, and evaluation (SITE) strategies. Report your findings.
2. Contact a small, medium, and a large program within your organization. Interview the Lead SE or Technical Director to understand what strategies the program used for SE design and system integration, test, and evaluation (SITE) and sketch a graphic of the strategy. For each type of program:
(a) How did the strategy prove to be the right decision?
(b) How would they tailor the strategy next time to improve its performance?
Chapter 26
The SE Process Model
26.1 INTRODUCTION
If you were to survey organizations to learn about what methods they employ to develop systems, products, or services, the responses would range from ad hoc methods to logic-based methodologies. Humans, by nature, generally deplore structured methods and will naively go to great lengths to avoid them without understanding: 1) WHY they exist and 2) HOW they benefit from them. While ad hoc methods may prove successful on simple, small systems and products, the scalability of these methods to large, complex programs employing dozens or hundreds of people results in chaos and disorder. So, the question is: Does a simple methodology exist that is scalable and can be applied to programs of all sizes?
At the beginning of Part II, we introduced the basic workflow that System Developers employ to transform the abstract System Performance Specification (SPS) into a deliverable, physical system. Although we described that workflow in terms of its six processes, the workflow does not capture HOW TO create the system, product, or service; it only describes how the system evolves, like a production line, from conceptualization to delivery over time.
Our introduction of the system solution domains at the conclusion of Part I presented the Requirements, Operations, Behavioral, and Physical Domain Solutions, their sequential development, and their interrelationships. The system solution domains enable us to describe the key elements of a system, product, or service solution. So the challenge question is: HOW do we create a logical method that enables us to:
1. Develop a system, product, or service?
2. Apply it across all system development phase workflow processes?
This chapter introduces the SE Process Model, its underlying methodology, and its application to developing an SE design for a system or one of its components. Our discussion begins with a graphical depiction and accompanying description of the SE Process Model and its methodology. Since the model is characterized as highly iterative and recursive, we illustrate how the model's internal elements iterate and show how the model applies to multiple levels of abstraction within the system design process. We provide an example of HOW the model applies to entities at various levels of abstraction.
Finally, we illustrate HOW the application of the model produces an integrated framework that represents the multi-level system design workflow progression via the Requirements, Operations, Behavioral, and Physical Domain Solutions. The last point illustrates a system as, by definition, a composition of integrated elements synergistically working to achieve a purpose greater than their individual purpose-focused capabilities.
What You Should Learn from This Chapter
1. What is the SE Process Model?
2. What are the key elements of the SE Process Model?
3. How do the elements of the SE Process Model interrelate?
4. What are the steps of the underlying SE Process Model methodology?
5. What is meant by the Process Model’s highly iterative characteristic?
6. What is meant by the Process Model’s recursive characteristic?
Definitions of Key Terms
• Behavioral Domain Solution A technical design that:
1. Represents the proposed logical/functional solution to a specification.
2. Describes the entity relationships between an entity’s logical/functional capabilities including external and internal interface definitions.
3. Provides traceability between specification requirements and logical/functional capabilities.
4. Is traceable to the multi-level Physical Domain Solution via physical configuration items (PCIs).
• Iterative Characteristic A characterization of the interactions between each of the SE Process Model’s elements as an entity’s design solution evolves to maturity.
• Operations Domain Solution A unique view of a system that expresses HOW the System Developer, in collaboration with the User and Acquirer, envisions deploying, operating, supporting, and disposing of the system to satisfy solution space mission objectives and, if applicable, safely return the system to a home base or port.
• Physical Domain Solution A technical design that: 1) represents the proposed physical solution to a specification, 2) describes the entity relationships between hierarchical, physical configuration items (PCIs)—namely the physical components of a system—including external and internal interface definitions, 3) provides traceability between specification requirements and PCIs, and 4) is traceable to the multi-level Behavioral Domain Solution via functional configuration items (FCIs).
• Recursive Characteristic An attribute of the SE Process Model that enables it to be applied to any system or entity within a system regardless of level of abstraction.
• Requirements Domain Solution A unique view of a system that expresses: 1) the hierarchical framework of specifications—namely a specification tree—and requirements, 2) requirements-to-capability linkages, and 3) vertical and horizontal traceability linkages to User originating or source requirements and between specifications and their respective requirements.
• SE Process Model A construct derived from a highly iterative, problem-solving/solution-development methodology that can be applied recursively to multiple levels of system design.
• Solution Domain A requirements, operations, behavioral, or physical viewpoint of the development of a multi-level entity or configuration item (CI) required to translate and elaborate a set of User requirements into a deliverable system, product, or service.
26.2 SE PROCESS MODEL OBJECTIVE
The objective of the SE Process Model is to enable SEs to transform and evolve a User's abstract operational need(s) into a physical system design solution that represents the optimal balance of technical, technology, cost, schedule, and support solutions and risks.
Brief Background on SE Processes
Since World War II several types of SE processes have evolved. Organizations such as the US Department of Defense (DoD), the Institute of Electrical and Electronics Engineers (IEEE), the Electronic Industries Alliance (EIA), and the International Council on Systems Engineering (INCOSE) have documented a series of SE process methodologies. The more recent publications include US Army FM 770-78, MIL-STD-499, and the commercial standards IEEE 1220-1998, EIA-632, and ISO/IEC 15288. Each of these SE process methodologies highlights the aspects its developers considered fundamental to engineering practice.
Although the SE processes noted above advanced the state of the practice in SE, in the author’s opinion, no single SE process captures the actual steps performed in engineering a system, product, or service. As is the case with a recipe, SEs and organizations often formulate their own variations of how they view the SE process based on what works for them. This chapter introduces an SE Process Model validated through the experiences of the author and others. Note the two terms: process and model.
In this chapter’s Introduction, we considered a general workflow progression that translates a User’s abstract operational needs into a physical system, product, or service solution. We can state this progression to be a process. However, the workflow process steps required to engineer systems must involve highly iterative feedback loops to preceding steps for reconciliation actions. The SE Process is more than simply a sequential end-to-end process. The SE Process is an embedded element of a problem-solving/solution-development model that transforms a set of inputs and oper- ating constraints into a deliverable system, product, or service. Therefore, we apply the label SE Process Model.
Entry Criteria
Entry criteria for the SE process are established by the system/product life cycle phase that implements the SE Process. In the case of the System Development Phase, the SE Process Model is applied with the initiation of each entity's or configuration item's (CI) SE design. This includes the SYSTEM, PRODUCT, SUBSYSTEM, ASSEMBLY, SUBASSEMBLY, and PART levels.
26.3 SE PROCESS MODEL METHODOLOGY
We concluded Part I with an introduction to the system solution domains, consisting of the Requirements, Operations, Behavioral, and Physical Domain Solutions. Although the domain solutions provide a useful means to characterize a system or one of its entities, individually they do not help us create the total system solution. So, how do we do this?
We can solve this challenge by creating a system development model that enables us to translate the User's vision into a preferred solution. However, the domain solutions are missing two key elements:
1. Understanding the opportunity/problem space and its relationship to the solution space.
2. Optimizing the domain solutions to achieve mission objectives.
If we integrate these missing elements with the sequencing of the system solution domains, we can create a methodology that enables us to apply it to any entity, regardless of level of abstraction. The steps of the methodology are:
Step 1: Understand the entity’s opportunity/problem and solution spaces.
Step 2: Develop the entity’s Requirements Domain Solution.
Step 3: Develop the entity's Operations Domain Solution.
Step 4: Develop the entity's Behavioral Domain Solution.
Step 5: Develop the entity's Physical Domain Solution.
Step 6: Evaluate and optimize the entity’s total design solution.
When we depict these steps, their initial sequencing, and their interrelationships graphically, Figure 26.1 emerges. We will refer to this as the SE Process Model.

Figure 26.1 The System Engineering Process Model
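As a minimal sketch of the methodology's flow, the loop below walks one entity through the six steps and repeats until an exit criterion is met; the design_is_mature stand-in is a hypothetical placeholder for the exit criteria discussed in Section 26.5.

```python
STEPS = [
    "Understand the entity's opportunity/problem and solution spaces",
    "Develop the entity's Requirements Domain Solution",
    "Develop the entity's Operations Domain Solution",
    "Develop the entity's Behavioral Domain Solution",
    "Develop the entity's Physical Domain Solution",
    "Evaluate and optimize the entity's total design solution",
]

def design_is_mature(entity: str, iteration: int) -> bool:
    """Stand-in exit criterion: here the design matures after two passes."""
    return iteration >= 2

def se_process(entity: str, max_iterations: int = 4) -> None:
    """One application of the SE Process Model to a single entity.
    Step 6's evaluation may send the workflow back to an earlier step,
    which is what makes the model highly iterative."""
    for iteration in range(1, max_iterations + 1):
        for step in STEPS:
            print(f"[{entity}] iteration {iteration}: {step}")
        if design_is_mature(entity, iteration):
            return

se_process("SYSTEM")
```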
Before we proceed with describing the SE Process Model, let’s preface our discussion with several key points:
1. The description uses the term, entity, to denote a logical/functional capability or physical item such as PRODUCT, SUBSYSTEM, ASSEMBLY, SUBASSEMBLY, and PART. You could apply the term, component. However, there may be some unprecedented systems in which physical components may not emerge until later in a design process. Therefore, we use the term entity.
2. The act of partitioning a problem space into lower level solution spaces is traditionally referred to in SE as decomposition. Since decomposition connotes various meanings, some people prefer to use the term expansion.
3. Since the model applies to any level of abstraction, role-based terms such as Acquirer, User, and System Developer are contextual. For example, the Acquirer (role) of a system contracts with a System Developer (role). Within the contractor's (System Developer role) program organization, a PRODUCT level team (Acquirer role) allocates requirements to a Development Team (System Developer role) to develop and deliver a SUBSYSTEM. The SUBSYSTEM development team, acting as an Acquirer (role), may procure some of the SUBSYSTEM's components from various vendors (System Developer roles).
4. For simplicity, the description addresses a one-time procurement of a system, product, or service. Some complex system development efforts may consist of a series of spiral development deliverables or contracts. In these cases, the requirements may be initially immature, thereby necessitating stages of development to mature the requirements to a level necessary and sufficient for final system development. Thus the SE Process Model is reapplied to all levels of abstraction for each iteration of the spiral. These approaches serve to reduce system development risk.
Step 1: Understand the Entity’s Opportunity/Problem and Solution Spaces
The first step of the SE Process Model is to simply understand the entity’s opportunity/problem and solution spaces. System analysts and SEs need to understand and validate: HOW the User intends to use the system as well as WHAT expectations are levied on the entity to achieve higher-level mission objectives. This requires understanding:
1. The entity’s contextual role in the next higher level solution space—namely the User’s level 0 system or the deliverable SYSTEM, PRODUCT, SUBSYSTEM, SUBASSEMBLY, and other levels.
2. HOW the User plans to deploy, operate, support, and dispose of the system—namely use cases and scenarios.
3. The system’s interfaces with external systems in its OPERATING ENVIRONMENT such as HUMAN-MADE, NATURAL, and INDUCED ENVIRONMENTS.
4. The system’s mission event timeline (MET) or allocations.
5. The expected outcomes from system interactions with its OPERATING ENVIRONMENT.
6. Products, by-products, and services comprising the system outputs required to accomplish those outcomes.
Step 2: Develop the Entity’s Requirements Domain Solution
As the understanding of the entity's opportunity/problem and solution spaces evolves and matures, the next step is to Develop the Requirements Domain Solution. The requirements, which specify and bound the entity's solution space, document the Acquirer's (role) required system capabilities via a Statement of Objectives (SOO) or System Performance Specification (SPS). Within the System Developer's program, lower level PRODUCT, SUBSYSTEM, ASSEMBLY, or SUBASSEMBLY item development specifications (IDSs), as applicable, capture the entity's requirements.
As illustrated by the SE Process Model in Figure 26.1, the entity’s Requirements Domain Solution:
1. Serves as the frame of reference for deriving the Operations, Behavioral, and Physical Domain Solutions.
2. Iterates with understanding the opportunity/problem and solution spaces.
3. Iterates with the Operations, Behavioral, and Physical Domain Solutions.
4. Integrates with a higher level User, SYSTEM, PRODUCT, ASSEMBLY, etc., level Requirements Domain Solution (Figure 26.6).
5. May be expanded or decomposed into at least two or more lower level entity Requirements Domain Solutions and documented in development specifications.
6. Provides the decision criteria used by the Evaluate and Optimize the System Design Solution step to assess design compliance (e.g., verification, consistency, and traceability).
The Requirements Domain Solution consists of a hierarchical set of requirements derived from a User's source or originating requirements, typically in a contract System Performance Specification (SPS). Any entity requirement that is too abstract and broad, or that complicates implementation and verification, is simplified by decomposing it into two or more lower level SIBLING or DERIVED requirements.
Derived requirements more explicitly define WHAT is required and HOW WELL. Thus they become more manageable and reduce the possibility of multiple interpretations. As a result, each derived requirement simplifies and clarifies WHAT must be accomplished to satisfy a portion of the higher level parent requirement.
Since requirements express User (role) expectations for acceptance of each entity, at a minimum each requirement is assigned a verification method such as inspection, analysis, demonstration, or test. Additional verification criteria, such as the level at which a specific requirement will be verified and the verification conditions, may also be added. The set of verification methods, levels, and criteria that serves as the basis for delivery acceptance is documented as a key section of the entity's specification.
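The decomposition and verification-assignment rules above can be illustrated with a small Python sketch. The record layout, the requirement texts, and the numeric thresholds are hypothetical examples, not prescriptions.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import List

class Method(Enum):
    INSPECTION = "I"
    ANALYSIS = "A"
    DEMONSTRATION = "D"
    TEST = "T"

@dataclass
class Requirement:
    req_id: str
    text: str
    method: Method            # every requirement is assigned a method
    verify_level: str = ""    # optional: level at which it is verified
    derived: List["Requirement"] = field(default_factory=list)

# An abstract parent requirement, decomposed into DERIVED requirements
# that state WHAT is required and HOW WELL, each independently verifiable.
parent = Requirement("B-017", "PRODUCT B shall detect targets.", Method.ANALYSIS)
parent.derived = [
    Requirement("B-017.1", "Detect targets at ranges up to 10 km.",
                Method.TEST, verify_level="SUBSYSTEM"),
    Requirement("B-017.2", "Report each detection within 500 ms.",
                Method.DEMONSTRATION, verify_level="PRODUCT"),
]
```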
Step 3: Develop the Entity’s Operations Domain Solution
As each entity’s Requirements Domain Solution evolves to maturity, System Developers formulate and mature the Operations Domain Solution. Operational concepts are synthesized and documented in the entity’s Concept of Operations (ConOps) document or Theory of Operations. The ConOps:
1. Identifies the Level 0 User’s operational architecture that includes the MISSION SYSTEM(s) and SUPPORT SYSTEM(s).
2. Identifies friendly, benign, and hostile systems and threats in the OPERATING ENVIRONMENT that interact with the system.
3. Identifies system operations and tasks required to accomplish the mission.
4. Synchronizes the tasks with the mission event timeline (MET) or allocations.
5. Identifies products, by-products, or services required to achieve mission outcomes.
Referring to Figure 26.1, an entity's Operations Domain Solution:
1. Implements requirements allocated and derived from the entity’s Requirements Domain Solution.
2. Documents work products that serve as inputs for deriving the entity’s Behavioral Domain Solution.
3. Integrates with a higher level User, SYSTEM, PRODUCT, ASSEMBLY, and SUBASSEMBLY Operations Domain Solution (Figure 26.6).
4. May be expanded or decomposed into at least two or more lower level entity Operations Domain Solutions (Figure 26.6).
5. Iterates with the Requirements and Behavioral Domain Solutions for completeness, consistency, and traceability.
6. Is assessed for consistency, compliance, and performance by the Evaluate and Optimize the System Design Solution step.
Step 4: Develop the Entity’s Behavioral Domain Solution
As each entity’s Operations Domain Solution evolves to maturity, System Developers formulate and mature the Behavioral Domain Solution. The Behavioral Domain Solution describes WHAT is to be accomplished in terms of logical interactions and sequences of tasks or processing required to produce the desired outcomes. This includes:
1. Identification of the required capabilities—namely functions and performance.
2. Constructing system interaction and sequence diagrams that depict HOW the SYSTEM is envisioned to react and respond to various external stimuli and cues from its OPERATING ENVIRONMENT.
3. Synchronizing those interactions, products, by-products, and services with the MET.
4. Analyzing, modeling, and simulating capability sequences and performance to ensure compliance with requirements and balance performance allocations.
Referring to Figure 26.1, an entity’s Behavioral Domain Solution:
1. Is allocated requirements from and must be traceable to the entity’s Requirements Domain Solution.
2. Documents work products that serve as the inputs for developing the Physical Domain Solution.
3. Integrates with a higher level User, SYSTEM, PRODUCT, ASSEMBLY, and SUBASSEMBLY Behavioral Domain Solution (Figure 26.6).
4. May be expanded or decomposed into at least two or more lower level entity Behavioral Domain Solutions (Figure 26.6).
5. Iterates with the Operations and Physical Domain Solutions.
6. Is assessed for consistency, compliance, and performance by the Evaluate and Optimize the System Design Solution step.
Step 5: Develop the Physical Domain Solution
As the entity’s Behavioral Domain Solution evolves to maturity, System Developers formulate and mature the Physical Domain Solution. This solution describes HOW the Behavioral Domain Solu- tion is implemented via multi-level physical components. This requires:
1. Formulating several viable candidate architectures for the entity.
2. Conducting analyses and trade studies to evaluate and score the merits of each architecture relative to a set of pre-defined decision criteria.
3. Selecting a preferred architecture from a viable set of alternatives.
4. Establishing performance budgets and safety margins.
5. Finalizing selection of components to satisfy the entities identified within the architecture.
6. Translating the physical architecture into a detailed design—such as assembly drawings, schematics, wiring diagrams, and software design.
7. Assessing compatibility and interoperability with external systems in the entity's OPERATING ENVIRONMENT.
Referring to Figure 26.1, each entity’s Physical Domain Solution:
1. Is allocated requirements from and must be traceable to the entity’s Requirements Domain Solution.
2. Integrates with a higher level User, SYSTEM, PRODUCT, ASSEMBLY, and SUBASSEMBLY Physical Domain Solution (Figure 26.6).
3. May be expanded or decomposed into at least two or more lower level entity Physical Domain Solutions (Figure 26.6).
4. Is assessed for consistency, compliance, and performance by the Evaluate and Optimize the System Design Solution step.
Step 6: Evaluate and Optimize the Entity’s Total Design Solution
As the Physical Domain Solution evolves and matures, the next step is to Evaluate and Optimize the System Design Solution. The purpose of this step is to verify and validate that the entity’s Requirements, Operations, Behavioral, and Physical Design Solutions:
1. Are consistent with each other.
2. Fully comply with and are traceable to the entity's Requirements Domain Solution.
These objectives are accomplished via the following:
1. Technical reviews.
2. Technical audits.
3. Prototype development.
4. Modeling and simulation.
5. Proof of concept or technology demonstrations.
You will encounter people who contend that you cannot optimize a system for a diverse set of operating scenarios and conditions; it can only be optimal. Let's differentiate the two viewpoints.
For a prescribed set of operating conditions and priorities, you can theoretically optimize a system. The challenge is that these conditions are often independent and statistically random occurrences in the OPERATING ENVIRONMENT. As a result, a system's, product's, or service's performance may not be optimized for all sets of random variable conditions. So people characterize the SYSTEM's performance as optimal in dealing with these random variables.
There is an unwritten rule that says that most human attempts fall short of their goals. Assuming you have realistic goals, WHAT you accomplish depends on WHAT you strive to achieve. So in Step 6, Evaluate and Optimize the System Design Solution, we use the term optimize to communicate WHAT we strive to achieve. Given human performance history in goal achievement, striving simply to be optimal will probably produce a result that is less than optimal, an even less desirable outcome.
26.4 DECISION SUPPORT TO THE SE PROCESS MODEL
Decision support practices such as analyses, trade studies, prototypes, demonstrations, models, and simulations are employed to provide recommendations for technical decisions that bound the system's solution space—such as Requirements Domain Solution compliance.
26.5 EXIT CRITERIA
Since the SE Process Model is highly iterative and subject to the development time constraints of the entity, exit criteria are determined by:
1. Level of maturity required of the entity being designed and its required work products—specifications, designs, verification procedures, etc.
2. Level of criticality and practicality in correcting discrepancies between specification requirements and verification data subject to cost and schedule constraints.
26.6 WORK PRODUCTS AND QUALITY RECORDS
The SE Process Model supports the development of numerous system/product life cycle phase work products and quality records. When applied to a phase-specific process, the SE Process produces four categories of work products for each entity: 1) the Requirements Domain Solution, 2) the Operations Domain Solution, 3) the Behavioral Domain Solution, and 4) the Physical Domain Solution. General examples of work products and quality records include specifications, specification trees, architectures, analyses, trade studies, drawings, technical reports, verification records, and meeting minutes. Refer to Chapters 37–40 for the specific work products and quality records of each solution description.
Author’s Note 26.1 People often confuse the purpose of any SE Process. They believe that the SE Process is established to create documentation; this is erroneous! The purpose of the SE Process Model is to establish a methodology to solve problems and produce a preferred solution that sat- isfies contract requirements subject to technical, cost, schedule, technology, and risk constraints. Work product and quality record documentation are simply enablers and artifacts of the process, a means to an end, not the primary focus.
Author’s Note 26.2 Remember, if you focus on producing documentation, you get documenta- tion and a design solution that may or may not meet requirements. If you focus on producing a design solution via the SE Process Model, you should arrive at a design solution that satisfies requirements supported by documentation artifacts that prove the integrity and validity of the solution.
26.7 SE PROCESS MODEL CHARACTERISTICS
The SE Process Model is characterized as being highly iterative and recursive. To better understand these characteristics, let's explore each.
Highly Iterative Characteristic
The SE Process Model, when applied to a specific entity within a system level of abstraction, is characterized as highly iterative, as illustrated in Figure 26.2. Although each of the multi-level steps of the methodology has a sequential workflow progression, each step has feedback loops that allow a return to preceding steps to reassess decisions when issues occur later along the workflow. As a result, the feedback loops establish the highly iterative characteristic illustrated in Figure 26.3.
Figure 26.2 Multi-Level System Engineering Design

Figure 26.3 Entity/Item SE Design Flow
Recursive Characteristic
Referring to Figure 26.2, observe that the SE Process Model is applied to every level of abstraction. We call this its recursive characteristic. Thus the same model applies at the SYSTEM level as it does at the SUBASSEMBLY level. Let's explore this point further.
We can simplify Figure 26.3 as shown in Figure 26.4. Note that each system development program begins with the System Performance Specification (SPS) and evolves through all four solution domains. This graphic, with its alternating gray quadrants and R (requirements), O (operations), B (behavioral), and P (physical) symbols—in short, ROBP—will be used as an icon to symbolize the four solution domains in our later discussions on system design practices.
Applying the SE Process Model to System Development
Now let’s suppose that we have the system shown in Figure 26.5. The SYSTEM consists of PROD- UCTS 1 and 2. PRODUCT 1, a large, complex design, consists of SUBSYSTEMs 1 and 2. PRODUCT 2, a simpler design, consists of hardware configuration item (HWCI) 21 and computer software configuration item (CSCI) 21. We apply the SE Process Model to each entity as illustrated by the boxes. Application of the SE Process Model to each entity continues until the SE design is mature and ready to commit to implementation. Figure 26.6 illustrates the state of the system design solution at completion.
Here we see a multi-level framework that depicts the horizontal workflow progression over time. Vertically, the Requirements, Operations, Behavioral, and Physical Domain Solutions are decomposed into various levels of abstraction. Collectively, the framework graphically illustrates a system, which by definition is the integration of multiple levels of capabilities toward a higher level purpose that is greater than their individual capabilities.
Figure 26.4 System Solution Development via Multi-Domain Iterations

Figure 26.5 Multi-Domain SE Process Iterations
Figure 26.6 System Design Framework
Figure 26.7 The System Development Spiral
26.8 EVOLVING AND REVIEWING THE SYSTEM SOLUTIONS
Based on the highly iterative and recursive characteristics, System Developers evolve the system design solution over time from the SPS into a series of workflow progressions through each level of abstraction until the system design solution is initially complete. Figure 26.7 illustrates how the total system design solution evolves through the domain solutions at each level of abstraction and culminates with the Critical Design Review (CDR).
Symbolically, the inner loops of the spiral represent increasing levels of detail until the CDR is conducted. Each loop of the spiral culminates in a technical review that serves as a critical staging or control point for commitment to the next level of detail. Each loop includes a breakout point to permit reconciling changes with previous levels and to continue to evolve and mature the higher level solutions until CDR.
Author’s Note 26.3 The context of Figure 26.7 is for the period between Contract Award and CDR when the total design solution is approved and released for component procurement and devel- opment. However, the system design solution IS NOT finalized until the first article system or product has been integrated, tested, verified, validated (optional), and accepted by the Acquirer or User.
26.9 GUIDING PRINCIPLES
In summary, the preceding discussions provide the basis with which to establish the guiding principles that govern the implementation of the SE Process Model.
Principle 26.1 Problem solving and solution development lead to an optimal design solution; simply creating a point design solution does not always indicate problem solving.
Principle 26.2 An entity’s design solution is composed of four domain solutions: Requirements, Operations, Behavioral, and Physical.
Principle 26.3 As a workflow, system design is the highly iterative, multi-level transformation of an entity's requirements into operations, behavior, and physical implementation.
26.10 SUMMARY
This chapter introduced the SE Process Model and its structure. In our discussion we described how the model integrates the Requirements, Operations, Behavioral, and Physical Domain Solutions into a highly iterative framework that can be applied to the design of entities at all levels of abstraction. We noted that the SE Process Model's multi-level application attribute is referred to as its recursive characteristic.
The first step of the model is to Understand the Opportunity/Problem and Solution Spaces to appreciate the context of the requirements allocated to each entity. We described how the highly iterative model incorporates the workflow from the Requirements Domain to the Operations Domain to the Behavioral Domain to the Physical Domain. Since design solutions must be traceable to their requirements allocations as documented in the entity's specification, we illustrated how the Requirements Domain Solution links to the Operations, Behavioral, and Physical Domain Solutions. Since every design solution must be evaluated and optimized, we illustrated how the Evaluate and Optimize the Entity's Design Solution activity supports the Operations, Behavioral, and Physical Domain Solutions.
GENERAL EXERCISES
1. Answer each of the What You Should Learn from This Chapter questions identified in the Introduction.
2. Refer to the list of systems identified in Chapter 2. Based on a selection from the preceding chapter's General Exercises or a new system selection, apply your knowledge derived from this chapter's topical discussions. Describe how the SE Process Model is applied to identify the following:
(a) The system’s opportunity/problem space and solution space(s).
(b) The Requirements Domain Solution.
(c) The Operations Domain Solution.
(d) The Behavioral Domain Solution.
(e) The Physical Domain Solution.
ORGANIZATIONAL CENTRIC EXERCISES
1. Research your local command media for SE process requirements.
(a) Does your organization have a standard SE Process?
(b) How are SEs within the organization trained to apply the SE Process?
(c) Compare and contrast the organization’s SE process with the one described here.
(d) How are multidisciplined SEs trained to apply the process?
2. Research the following SE processes created over several decades. Develop a paper that describes each SE process, compares and contrasts the differences, notes evolutionary changes over time, and contrasts them with your own experiences.
(a) US Army Field Manual FM-770-78
(b) MIL-STD-499
(c) IEEE 1220-1998
(d) International Council on Systems Engineering (INCOSE)
(e) ANSI/EIA 632
3. Contact technical programs within your organization and interview personnel concerning what SE process or methodology they employed to develop their systems or products. Report your findings and observations.
ADDITIONAL READING
ANSI/EIA 632-1999. 1999. Processes for Engineering a System. Arlington, VA: Electronic Industries Alliance (EIA).
IEEE 1220-1998. 1998. IEEE Standard for the Application and Management of the Systems Engineering Process. New York, NY: Institute of Electrical and Electronics Engineers (IEEE).
International Council on Systems Engineering (INCOSE). 2000. INCOSE Systems Engineering Handbook, Version 2.0. Seattle, WA: INCOSE.
ISO/IEC 15288. Systems Engineering—System Life Cycle Processes. Geneva, Switzerland: International Organization for Standardization (ISO).
FM-770-78. 1979. Field Manual: System Engineering. Washington, DC: US Army.
MIL-STD-499B (cancelled draft). 1994. Systems Engineering. Washington, DC: Department of Defense (DoD).
Defense Systems Management College (DSMC). 2001. Systems Engineering Fundamentals. Ft. Belvoir, VA: Defense Acquisition University Press.
Sheard, Sarah A. 1997. The Frameworks Quagmire, A Brief Look. Herndon, VA: Software Productivity Consortium.