Chapter 9. Step 3: Verification Testing

Verification testing is the most effective way to remove defects from software. If most of the defects are removed prior to validation testing (i.e., unit, integration, system, and acceptance testing), validation testing can focus on testing to determine whether the software meets the true operational needs of the user and can be effectively integrated into the computer operations activity.

Because many testers' experience is limited to unit, integration, system, and acceptance testing, they are often unfamiliar with verification techniques. The verification techniques are not complex, however, and once understood, can be easily incorporated into the test process.

Typically, verification testing—testing in a static mode—is a manual process. Verification testing provides two important benefits: defects can be identified close to the point where they originate, and the cost to correct defects is significantly less than when detected in dynamic testing.

Verification testing normally occurs during the requirements, design, and program phases of software development, but it can also occur with outsourced software. There are many different techniques for verification testing, most of which focus on the documentation associated with building software. This chapter discusses the many different ways to perform verification testing during the requirements, design, and programming phases of software development.

Overview

Most but not all verification techniques are manual. However, even in manual techniques, automated tools can prove helpful. For example, when conducting a software review, reviewers might want to use templates to record responses to questions.
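
To illustrate, a review template can be as simple as a list of predetermined questions with space for each reviewer's response and any defect noted. The following sketch shows one minimal way to record such responses; the field names and sample questions are illustrative assumptions, not a prescribed format.

```python
# A minimal sketch of a review-response template (field names and sample
# questions are illustrative assumptions, not a prescribed format).
from dataclasses import dataclass, field
from typing import List

@dataclass
class ReviewItem:
    question: str        # predetermined review question
    response: str = ""   # reviewer's recorded answer
    defect: str = ""     # defect description; empty if none was found

@dataclass
class ReviewRecord:
    work_product: str
    reviewers: List[str]
    items: List[ReviewItem] = field(default_factory=list)

    def defects(self) -> List[str]:
        """Return the defects noted during the review."""
        return [item.defect for item in self.items if item.defect]

record = ReviewRecord(
    work_product="Order-entry requirements specification",
    reviewers=["systems analyst", "user representative"],
    items=[
        ReviewItem("Are the requirements stated in measurable terms?",
                   response="No",
                   defect="Service level for order processing is not quantified"),
        ReviewItem("Have interfaces to other systems been defined?",
                   response="Yes"),
    ],
)
print(record.defects())
```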

Because most testing focuses on validation/dynamic testing, verification technique names are not consistent. Consider, for example, a review, which is an independent investigation of some developmental aspect. Some call these reviews System Development Reviews, others call them End-of-Phase Reviews, still others refer to them as Peer Reviews, and some use Requirements Review. Because some of the verification techniques are similar, a review may also be referred to as a walkthrough or an inspection.

For the purposes of this chapter, specific names are assigned to the review techniques, as follows:

  • Reviews. A review is a formal process in which peers and/or stakeholders challenge the correctness of the work being reviewed. For example, in a requirements review, the correctness and completeness of requirements is challenged. It is a formal process usually based on the experience of the organization or outside experts, and uses a predetermined set of questions to accomplish the objectives of the review.

  • Walkthroughs. A walkthrough is an informal process by which peers and other stakeholders interact with project personnel to help ensure the best possible project is implemented. Frequently, the walkthrough is requested by the project team, to resolve issues that they are not sure they have resolved in the most effective and efficient manner. For example, they may be uncertain that they have the best design for a specific requirement and want an independent process to “brainstorm” better methods.

  • Inspections. Inspections are a very formal process in which peers and project personnel assume very specific roles. The objective of an inspection is to ensure that the entrance criteria for a specific workbench were correctly implemented into the exit criteria. The inspection process literally traces the entrance criteria to the exit criteria to ensure that nothing is missing, nothing is wrong, and nothing has been added that was not in the entrance criteria.

  • Desk debugging. This can be a formal or informal process used by a worker to check the accuracy and completeness of his/her work. It is most beneficial when the process is formalized so that the worker has a predefined series of steps to perform. The objective is basically the same as an inspection, tracing the entrance criteria to the exit criteria; unlike the inspection, however, it is performed by the worker who completed the task.

  • Requirements tracing. Requirements tracing, sometimes called quality function deployment (QFD), ensures that requirements are not lost during implementation. Once defined, the requirements are uniquely identified. They are then traced from work step to work step to ensure that all the requirements have been processed correctly through the completion of that process. (A minimal tracing sketch appears after this list.)

  • Testable requirements. A testable requirement has a built-in validation technique. Incorporation of testable requirements is sometimes referred to as developing a “base case,” meaning that the method of testing all the requirements has been defined. If you use this method, the requirements phase of software development or contracting cannot be considered complete until the testable component of each requirement has been defined. Some organizations use testers to help define and/or agree to a test that will validate the requirements.

  • Test factor analysis. This verification technique is unique to the test process incorporated in this book. It is based on the test factors described in an earlier chapter. Under this analysis, a series of questions helps determine whether those factors have been appropriately integrated into the software development process. Note that these test factors are attributes of requirements such as ease of use.

  • Success factors. Success factors are the factors that normally the customer/user will define as the basis for evaluating whether the software system meets their needs. Success factors correlate closely to project objectives but are in measurable terms so that it can be determined whether the success factor has been met. Acceptance criteria are frequently used as the success factors.

  • Risk matrix. The objective of a risk matrix is to evaluate the effectiveness of controls to reduce those risks. (Controls are the means organizations use to minimize or eliminate risk.) The risk matrix requires the identification of risk, and then the matching of controls to those risks so an assessment can be made as to whether the risk has been minimized to an acceptable level.

  • Static analysis. Most static analysis is performed through software. For example, most source code compilers have a static analyzer that provides information as to whether the source code has been correctly prepared. Other static analyzers examine code for such things as “non-entrant modules,” that is, sections of code that cannot be entered during execution.
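
Where requirements tracing is used, the trace itself can be kept as a simple cross-reference from each uniquely identified requirement to the work products of each later step. The sketch below shows one minimal way to flag requirements that have been lost; the requirement identifiers, step names, and work-product names are illustrative assumptions.

```python
# A minimal requirements-tracing sketch (identifiers, step names, and
# work-product names are illustrative assumptions): each uniquely identified
# requirement is traced from work step to work step, and any requirement with
# no entry at a step is reported as potentially lost.
requirements = ["REQ-001", "REQ-002", "REQ-003"]

trace = {
    "design": {"REQ-001": ["DES-4"], "REQ-002": ["DES-7", "DES-9"]},
    "code":   {"REQ-001": ["mod_invoice"], "REQ-002": ["mod_orders"]},
    "test":   {"REQ-001": ["TC-11"]},
}

def untraced(requirements, trace):
    """Return, for each work step, the requirement IDs with no trace entry."""
    return {step: [r for r in requirements if not mapping.get(r)]
            for step, mapping in trace.items()}

print(untraced(requirements, trace))
# {'design': ['REQ-003'], 'code': ['REQ-003'], 'test': ['REQ-002', 'REQ-003']}
```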

These techniques are incorporated into the verification process during the requirements, design, or programming phase. However, just because a specific technique is included in one phase of development does not mean it cannot be used in other phases. Also, some of the techniques can be used in conjunction with one another. For example, a review can be coupled with requirements tracing.

Objective

Research has shown that the longer it takes to find and correct a defect, the more costly the correction process becomes. The objectives of verification testing during the requirements, design, and programming phases are twofold. The first is to identify defects as close as possible to the point where they originated. This speeds up development and at the same time reduces its cost. The second objective is to identify improvement opportunities. Experienced testers can advise the development group of better ways to implement user requirements, to improve the software design, and/or to make the code more effective and efficient.

Concerns

Testers should have the following concerns when selecting and executing verification testing:

  • Assurance that the best verification techniques will be used. The verification technique can be determined during the development of the test plan or as detailed verification planning occurs prior to or during an early part of the developmental phase. Based on the objectives to be accomplished, testers will select one or more of the verification techniques to be used for a specific developmental phase.

  • Assurance that the verification technique will be integrated into the developmental process. Development should be a single process, not two parallel processes of developing and testing during implementation. Although the two processes may be performed by different groups, they should be integrated carefully enough that development looks like a single process. This is important so that both developers and testers know when, and by whom, a specific task is to be accomplished. Without this integration, developers may fail to notify testers that a particular phase has begun or ended, or may not budget time for verification, leaving testers unable to perform the technique. If verification has been integrated into the developmental process, it will be performed.

  • Assurance that the right staff and appropriate resources will be available when the technique is scheduled for execution. Scheduling the staff and funding the execution of the verification technique should occur in parallel with the previous action of integrating the technique into the process. It is merely the administrative component of integration, which includes determining who will execute the technique, when the technique will be executed, and the amount of resources allocated to the execution of the technique.

  • Assurance that those responsible for the verification technique are adequately trained. If testers who perform the verification technique have not been previously trained, their training should occur prior to executing the verification technique.

  • Assurance that the technique will be executed properly. The technique should be executed in accordance with the defined process and schedule.

Workbench

Figure 9-1 illustrates the workbench for performing verification testing. The input to the workbench is the documentation prepared by the development team for the phase being tested. Near the end of the requirements, design, and programming phases, the appropriate verification technique will be performed. The quality control procedures are designed to ensure the verification techniques were performed correctly. At the end of each development phase test, testers should list the defects they’ve uncovered, plus any recommendations for improving the effectiveness and efficiency of the software.

Figure 9-1. The workbench for verification testing.
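
As a minimal illustration of this flow, the sketch below treats the workbench as a small function: phase documentation in, a verification technique applied, a quality-control check that the technique was performed correctly, and a defect list plus improvement recommendations out. The function and field names, and the toy review used as an example, are assumptions for illustration only.

```python
# A minimal sketch of the verification workbench (names are illustrative):
# input documents -> verification technique -> quality-control check ->
# list of defects plus improvement recommendations.
from typing import Callable, List, Tuple

DoProcedure = Callable[[List[str]], Tuple[List[str], List[str]]]

def run_workbench(phase_documents: List[str],
                  do_procedure: DoProcedure,
                  check_procedure: Callable[[], bool]) -> Tuple[List[str], List[str]]:
    defects, recommendations = do_procedure(phase_documents)
    if not check_procedure():
        # Quality control failed: the verification technique itself was not
        # performed correctly and must be reperformed before reporting results.
        raise RuntimeError("Verification technique must be reperformed")
    return defects, recommendations

# Toy "review": flag any requirement not stated in measurable terms (crudely
# approximated here as a statement lacking any number).
def toy_review(docs: List[str]) -> Tuple[List[str], List[str]]:
    defects = [d for d in docs if not any(ch.isdigit() for ch in d)]
    recommendations = ["Restate flagged requirements in measurable terms"] if defects else []
    return defects, recommendations

print(run_workbench(
    ["Process a user order within 4 hours", "Improve customer service"],
    toy_review,
    check_procedure=lambda: True,
))
```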

Input

This section describes the inputs required to complete the verification testing during each phase of development: requirements, design, and programming.

The Requirements Phase

The requirements phase is undertaken to solve a business problem. The problem and its solution drive the system’s development process. Therefore, it is essential that the business problem be well defined. For example, the business problem might be to improve accounts receivable collections, reduce the amount of on-hand inventory through better inventory management, or improve customer service.

The analogy of building a home illustrates the phases in a system’s development life cycle. The homeowner’s needs might include increased living space, and the results of the requirements phase offer a solution for that need. The requirements phase in building a home would specify the number of rooms, the location of the lot on which the house will be built, the approximate cost to construct the house, the type of architecture, and so on. At the completion of the requirements phase, the potential homeowner’s needs would be specified. The deliverables produced from the homeowner’s requirements phase would be a functional description of the home and a plot map of the lot on which the home is to be constructed. These are the inputs that go to the architect to design the home.

The requirements phase should be initiated by management request and should conclude with a proposal to management on the recommended solution for the business need. The requirements team should study the business problem, the previous methods of handling the problem, and the consequences of that method, together with any other input pertinent to the problem. Based on this study, the team develops a series of solutions. The requirements team should then select a preferred solution from among these alternatives and propose that solution to management.

The most common deliverables from the requirements phase needed by the testers for this step include the following:

  • Proposal to management describing the problem, the alternatives, and the proposed solution

  • Cost/benefit study describing the economics of the proposed solution

  • Detailed description of the recommended solution, highlighting the recommended method for satisfying those needs. (Note: This becomes the input to the systems design phase.)

  • List of system assumptions, such as the life of the project, the value of the system, the average skill of the user, and so on

The Design Phase

The design phase verification process has two inputs: the test team's understanding of how design—both internal and external—occurs, and the deliverables produced during the design phase that will be subject to a static test.

The design process could result in an almost infinite number of solutions. The system design is selected based on an evaluation of multiple criteria, including available time, desired efficiency, the skill of the project team, the hardware and software available, and the requirements of the system itself. The design will also be affected by the methodology and tools available to assist the project team.

In home building, the design phase equivalent is the development of blueprints and the bill of materials for supplies needed. It is much easier to make changes in the early phases of design than in later phases.

From a project perspective, the most successful testing is that conducted early in the design phase. The sooner the project team becomes aware of potential defects, the cheaper it is to correct those defects. If the project waited until the end of the design phase to begin testing, it would fall into the same trap as many projects that wait until the end of programming to conduct their first tests: When defects are found, the corrective process can be so time-consuming and painful that it may appear cheaper to live with the defects than to correct them.

Testing normally occurs using the deliverables produced during the design phase. The more common design phase deliverables include the following:

  • Input specifications

  • Processing specifications

  • File specifications

  • Output specifications

  • Control specifications

  • System flowcharts

  • Hardware and software requirements

  • Manual operating procedure specifications

  • Data retention policies

The Programming Phase

The more common programming phase deliverables that are input for testing are as follows:

  • Program specifications

  • Program documentation

  • Computer program listings

  • Executable programs

  • Program flowcharts

  • Operator instructions

In addition, testers need to understand the process used to build the program under test.

Do Procedures

Testers should perform the following steps during requirements phase testing:

  1. Prepare a risk matrix.

  2. Perform a test factor analysis.

  3. Conduct a requirements walkthrough.

  4. Perform requirements testing.

  5. Ensure requirements are testable.

Testers should perform the following steps during design phase testing:

  1. Score success factors.

  2. Analyze test factors.

  3. Conduct design review.

  4. Inspect design deliverables.

Testers should perform the following steps during programming phase testing:

  1. Desk debug the program.

  2. Perform programming phase test factor analysis.

  3. Conduct a peer review.

Task 1: Test During the Requirements Phase

System development testing should begin during the requirements phase, when most of the critical system decisions are made. The requirements are the basis for the systems design, which is then used for programming to produce the final implemented application. If the requirements contain errors, the entire application will be erroneous.

Testing the system requirements increases the probability that the requirements will be correct. Testing at this point is designed to ensure the requirements are properly recorded, have been correctly interpreted by the software project team, are reasonable when measured against good practices, and are recorded in accordance with the IT department’s guidelines, standards, and procedures.

The requirements phase should be a user-dominated phase. In other words, the user should specify the needs and the information services personnel should record the needs and provide counseling about the alternative solutions, just as the builder and architect would counsel the homeowner on building options. This means that the user, being the dominant party, should take responsibility for requirements phase testing.

Having responsibility for testing does not necessarily mean performing the test. Performing the test is different from being responsible for it: responsibility means accepting or rejecting the product based on the test results.

If there are multiple users, responsibility may be assigned to a committee, which may be the same committee that develops the requirements. One of the primary objectives of testing during requirements is to ensure that the requirements have been properly stated and recorded. Normally, only the user can look at recorded requirements and make that determination. Therefore, it is important for the user to accept testing responsibility during the requirements phase and to be an active participant in the test process.

People undertaking the test process must understand the requirements phase objectives and then evaluate those objectives through testing. Should the requirements phase be found inadequate as a result of testing, the phase should be continued until requirements are complete. Without testing, inadequacies in the requirements phase may not be detected.

Customarily, a management review occurs after the requirements phase is complete. Frequently, this is done by senior management, who are not as concerned with the details as with the economics and the general business solution. Unfortunately, inadequate details can significantly affect the cost and timing of implementing the proposed solution.

The recommended test process outlined in this book is based on the 15 requirements phase test factors and the test concerns for each factor (see the section “Requirements Phase Test Factors”). The test team determines which of those factors apply to the application being tested, and then conducts those tests necessary to determine whether the test factor has been adequately addressed during the requirements phase. This chapter defines the test factors and recommends tests to enable you to address the requirements phase testing concerns.

Requirements Phase Test Factors

The following list provides a brief description of the 15 requirements phase test factors (concerns):

  • Requirements comply with methodology (methodology test factor). The process used by the information services function to define and document requirements should be adhered to during the requirements phase. The more formal these procedures, the easier the test process. The requirements process is one of fact gathering, analysis, decision making, and recording the requirements in a predefined manner for use in design.

  • Functional specifications defined (correctness test factor). User satisfaction can only be ensured when system objectives are achieved. The achievement of these objectives can only be measured when the objectives are measurable. Qualitative objectives—such as improving service to users—are not measurable objectives, whereas processing a user order in four hours is measurable.

  • Usability specifications determined (ease-of-use test factor). The amount of effort required to use the system and the skill level necessary should be defined during requirements. Experience shows that difficult-to-use applications or features are not often used, whereas easy-to-use functional systems are highly used. Unless included in the specifications, the ease-of-use specifications will be created by default by the systems analyst or programmer.

  • Maintenance specifications determined (maintainable test factor). The degree of expected maintenance should be defined, as well as the areas where change is most probable. Specifications should then determine the methods of maintenance—such as user-introduced change of parameters—and the time span in which certain types of maintenance changes need to be installed; for example, a price change must be operational within 24 hours after notification to information services.

  • Portability needs determined (portable test factor). The ability to operate the system on different types of hardware, to move it at a later time to another type of hardware, or to move from version to version of software should be stated as part of the requirements. The need to have the application developed as a portable one can significantly affect the implementation of the requirements.

  • System interface defined (coupling test factor). The information expected as input from other computer systems, and the output to be delivered to other computer systems, should be defined. This definition not only includes the types of information passed, but the timing of the interface and the expected processing to occur as a result of that interface. Other interface factors that may need to be addressed include privacy, security, and retention of information.

  • Performance criteria established (performance test factor). The expected efficiency, economy, and effectiveness of the application system should be established. These system goals are an integral part of the design process and, unless established, default to the systems analyst/programmer. When this happens, user dissatisfaction is almost guaranteed to occur with the operational system. An end product of the requirements phase should be a calculation of the cost/benefit to be derived from the application. The financial data should be developed based on procedures designed to provide consistent cost and benefit information for all applications.

  • Operational needs defined (ease-of-operations test factor). The operational considerations must be defined during the requirements phase. This becomes especially important in user-driven application systems. The processes that must be followed at terminals to operate the system—in other words, the procedures needed to get the terminal into a state ready to process transactions—should be as simple as possible. Central site operating procedures also need to be considered.

  • Tolerances established (reliability test factor). The expected reliability of the system controls should be defined. For example, the requirements phase should determine the control requirements for the accuracy of invoicing, the percentage of orders that must be processed within 24 hours, and other such concerns. An invoicing tolerance might state that invoices are to be processed with a tolerance of plus or minus 1 percent from the stated current product prices. If you don’t establish these tolerances, there is no basis for designing and measuring the reliability of processing over an extended period of time. If you don’t define an expected level of defects, zero defects are normally expected, and controls to achieve zero defects are normally not economical. It is usually more economical, and to the advantage of the user, to allow some defects to occur in processing, but to control and measure their number. (A small worked check of such a tolerance appears after this list.)

  • Authorization rules defined (authorization test factor). Authorization requirements specify the authorization methods to ensure that transactions are, in fact, processed in accordance with the intent of management.

  • File integrity requirements defined (file integrity test factor). The methods of ensuring the integrity of computer files need to be specified. This normally includes the control totals that are to be maintained both within the file and independently of the automated application. The controls must ensure that the detail records are in balance with the control totals for each file.

  • Reconstruction requirements defined (audit trail test factor). Reconstruction involves both substantiating the accuracy of processing and recovering after an identified problem. Both of these needs involve the retention of information to back up processing. The need to substantiate processing arises both from the organization and from regulatory agencies, such as tax authorities requiring that sufficient evidential matter be retained to support tax returns.

    Application management needs to state if and when the system recovery process should be executed. If recovery is deemed necessary, management needs to state the time span in which the recovery process must be executed. This time span may change based upon the time of the day and the day of the week. These recovery requirements affect the type and availability of data retained.

  • Impact of failure defined (continuity-of-processing test factor). The necessity to ensure continuity of processing is dependent upon the impact of failure. If system failure causes only minimal problems, ensuring continuous processing may be unnecessary. On the other hand, where continuity of operations is essential, it may be necessary to obtain duplicate data centers so that one can continue processing should the other experience a failure.

  • Desired service level defined (service level test factor). Service level refers to the time within which a given unit of work must be completed, and the required level will vary based on the requirements. Each level of desired service needs to be stated; for example, there is a service level to process a specific transaction, a service level to correct a programming error, a service level to install a change, and a service level to respond to a request.

  • Access defined (security test factor). Security requirements should be developed showing the relationship between system resources and people. Requirements should state all of the available system resources subject to control, and then indicate who can have access to those resources and for what purposes. For example, access may be authorized to read, but not change, data.
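
The reliability/tolerance factor lends itself to a small worked check. The sketch below tests whether an invoiced price falls within the plus-or-minus 1 percent tolerance used in the invoicing example above; the function name and sample prices are illustrative assumptions.

```python
# A minimal sketch of an invoicing tolerance check. The 1 percent figure comes
# from the example above; the function name and sample prices are illustrative.
TOLERANCE = 0.01  # plus or minus 1 percent of the stated current product price

def within_tolerance(invoiced_price: float, current_price: float,
                     tolerance: float = TOLERANCE) -> bool:
    return abs(invoiced_price - current_price) <= tolerance * current_price

print(within_tolerance(100.50, 100.00))  # True:  0.5% deviation is inside the tolerance
print(within_tolerance(102.00, 100.00))  # False: 2% deviation exceeds the tolerance
```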

At the conclusion of the testing, the test team can judge the adequacy of each of the criteria, and thus of all the test concerns included in the test process for the requirements phase. The test team should make one of the following four judgments about each criterion:

  1. Very adequate. The project team has done more than normally would be expected for the criterion.

  2. Adequate. The project team has done sufficient work to ensure the reasonableness of control over the criterion.

  3. Inadequate. The project team has not done sufficient work, and should do more work in this criterion area.

  4. Not applicable (N/A). Because of the type of application or the organization’s system design philosophy, this criterion is not applicable to the application being reviewed.

Each test process contains a test that can be performed for each evaluation criterion. The objective of the test is to assist the team in evaluating each criterion. The test should be conducted prior to assessing the adequacy of the project being tested. It should be noted that because of time limitations, review experience, and tests previously performed, the test team may choose not to assess each criterion.

The 15 test processes in Table 9-1 are recommended as a basis for testing the requirements phase. One test process is constructed to evaluate each of the requirements phase concerns. Table 9-2 is a quality control checklist for this task.
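
Before working through Table 9-1, it may help to see how the four-level judgment could be recorded for each test factor and then summarized. The sketch below is illustrative only; the factor names are taken from the list above, and the sample ratings are invented for the example.

```python
# A minimal sketch of recording the test team's judgment for each requirements
# phase test factor and flagging those rated Inadequate. The sample ratings
# are invented for illustration.
from enum import Enum

class Judgment(Enum):
    VERY_ADEQUATE = "Very adequate"
    ADEQUATE = "Adequate"
    INADEQUATE = "Inadequate"
    NOT_APPLICABLE = "N/A"

assessments = {
    "Requirements comply with methodology": Judgment.ADEQUATE,
    "Functional specifications defined": Judgment.INADEQUATE,
    "Portability needs determined": Judgment.NOT_APPLICABLE,
}

needs_more_work = [factor for factor, judgment in assessments.items()
                   if judgment is Judgment.INADEQUATE]
print(needs_more_work)  # ['Functional specifications defined']
```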

Table 9-1. Requirements Phase Test Process

(For each test criterion below, the test team records an assessment of Very Adequate, Adequate, Inadequate, or N/A; the recommended test, test technique, and test tool for each criterion are listed with it.)

TEST FACTOR: Requirements Comply with Methodology

  1. Have the applicable organization’s policies and procedures been identified?
     Recommended test: Confirm with those individuals responsible for developing the policies and procedures that all the applicable policies have been identified. (Test technique: Compliance; test tool: Confirmation/examination)
  2. Do the requirements comply with these policies and procedures?
     Recommended test: Review requirements to ensure compliance. (Test technique: Compliance; test tool: Fact finding)
  3. Have the requirements been documented in accordance with the requirements methodology?
     Recommended test: Examine requirements to ensure all needed documentation is complete. (Test technique: Compliance; test tool: Checklist)
  4. Is the cost/benefit analysis prepared in accordance with the appropriate procedures?
     Recommended test: Examine the cost/benefit analysis to ensure it was prepared in accordance with procedures. (Test technique: Compliance; test tool: Checklist)
  5. Has the requirements phase met the intent of the requirements methodology?
     Recommended test: Review the deliverables from requirements and assess whether they meet the intent of the methodology. (Test technique: Compliance; test tool: Checklist)
  6. Is the requirements phase staffed according to procedures?
     Recommended test: Verify that the project is appropriately staffed. (Test technique: Compliance; test tool: Peer review)
  7. Will all of the applicable policies, procedures, and requirements be in effect at the time the system goes into operation?
     Recommended test: Confirm with the appropriate parties the effective dates of existing policies, procedures, and regulations. (Test technique: Compliance; test tool: Fact finding)
  8. Will there be new standards, policies, and procedures in effect at the time the system goes operational?
     Recommended test: Confirm with the appropriate parties the effective dates of new standards, policies, and procedures. (Test technique: Compliance; test tool: Fact finding)

TEST FACTOR: Functional Specifications Defined

  1. Can the data required by the application be collected with the desired degree of reliability?
     Recommended test: Confirm with the people who would generate the data that it can be generated with the desired degree of reliability. (Test technique: Requirements; test tool: Fact finding)
  2. Can the data be collected within the time period specified?
     Recommended test: Confirm with the people generating the data that it can be collected within the required time frame. (Test technique: Requirements; test tool: Fact finding)
  3. Have the user requirements been defined in writing?
     Recommended test: Confirm with the user that the written requirements are complete. (Test technique: Requirements; test tool: Checklist)
  4. Are the requirements stated in measurable terms?
     Recommended test: Examine the reasonableness of the criteria for measuring successful completion of the requirements. (Test technique: Requirements; test tool: Walkthroughs)
  5. Has the project solution addressed the user requirements?
     Recommended test: Examine the system specifications to confirm they satisfy the user’s stated objectives. (Test technique: Requirements; test tool: Walkthroughs)
  6. Could test data be developed to test the achievement of the objectives?
     Recommended test: Verify that the requirements are stated in enough detail to generate test data to verify compliance. (Test technique: Requirements; test tool: Test data)
  7. Have procedures been specified to evaluate the implemented system to ensure the requirements are achieved?
     Recommended test: Examine the specifications that indicate a post-installation review will occur. (Test technique: Requirements; test tool: Confirmation/examination)
  8. Do the measurable objectives apply to both the manual and automated segments of the application system?
     Recommended test: Examine the objectives to verify that they cover both the manual and automated segments of the application. (Test technique: Requirements; test tool: Confirmation/examination)

TEST FACTOR: Usability Specifications Defined

  1. Have the user functions been identified?
     Recommended test: Confirm with the user that all user functions are defined in the requirements. (Test technique: Manual support; test tool: Confirmation/examination)
  2. Have the skill levels of the users been identified?
     Recommended test: Examine requirements documentation describing user skill level. (Test technique: Manual support; test tool: Confirmation/examination)
  3. Have the expected levels of supervision been identified?
     Recommended test: Examine requirements documentation describing the expected level of supervision. (Test technique: Manual support; test tool: Confirmation/examination)
  4. Has the time span for user functions been defined?
     Recommended test: Confirm with the user that the stated time span for processing is reasonable. (Test technique: Manual support; test tool: Confirmation/examination)
  5. Will the counsel of an industrial psychologist be used in designing user functions?
     Recommended test: Confirm that the industrial psychologist’s services will be used. (Test technique: Manual support; test tool: Confirmation/examination)
  6. Have clerical personnel been interviewed during the requirements phase to identify their concerns?
     Recommended test: Confirm with clerical personnel that their input has been obtained. (Test technique: Manual support; test tool: Confirmation/examination)
  7. Have tradeoffs between computer and people processing been identified?
     Recommended test: Examine the reasonableness of the identified tradeoffs. (Test technique: Manual support; test tool: Design reviews)
  8. Have the defined user responsibilities been presented to the user personnel for comment?
     Recommended test: Confirm that users have examined their responsibilities. (Test technique: Manual support; test tool: Confirmation/examination)

TEST FACTOR: Maintenance Specifications Defined

  1. Has the expected life of the project been defined?
     Recommended test: Confirm with the user that the stated project life is reasonable. (Test technique: Compliance; test tool: Confirmation/examination)
  2. Has the expected frequency of change been defined?
     Recommended test: Confirm with the user that the expected frequency of change is reasonable. (Test technique: Compliance; test tool: Confirmation/examination)
  3. Has the importance of keeping the system functionally up to date been defined?
     Recommended test: Confirm with the user that the stated importance of functional updates is correct. (Test technique: Compliance; test tool: Confirmation/examination)
  4. Has the importance of keeping the system technologically up to date been defined?
     Recommended test: Confirm with IT management that the stated importance of technological updates is correct. (Test technique: Compliance; test tool: Confirmation/examination)
  5. Has it been decided who will perform maintenance on the project?
     Recommended test: Confirm with IT management who will perform maintenance. (Test technique: Compliance; test tool: Confirmation/examination)
  6. Are the areas of greatest expected change identified?
     Recommended test: Examine documentation for areas of expected change. (Test technique: Compliance; test tool: Peer review)
  7. Has the method of introducing change during development been identified?
     Recommended test: Examine project change procedures. (Test technique: Compliance; test tool: Checklist)
  8. Have provisions been included to properly document the application for maintenance purposes?
     Recommended test: Examine the completeness of project maintenance documentation. (Test technique: Compliance; test tool: Peer review)

TEST FACTOR: Portability Needs Determined

  1. Are significant hardware changes expected during the life of the project?
     Recommended test: Confirm expected hardware changes with computer operations. (Test technique: Operations; test tool: Confirmation/examination)
  2. Are significant software changes expected during the life of the project?
     Recommended test: Confirm expected software changes with computer operations. (Test technique: Operations; test tool: Confirmation/examination)
  3. Will the application system be run in multiple locations?
     Recommended test: Confirm with the user the locations where the application will be operated. (Test technique: Compliance; test tool: Confirmation/examination)
  4. If an online application, will different types of terminals be used?
     Recommended test: Examine terminal hardware requirements. (Test technique: Compliance; test tool: Confirmation/examination)
  5. Is the proposed solution dependent on specific hardware?
     Recommended test: Review requirements to identify hardware restrictions. (Test technique: Compliance; test tool: Inspections)
  6. Is the proposed solution dependent on specific software?
     Recommended test: Review requirements to identify software restrictions. (Test technique: Compliance; test tool: Inspections)
  7. Will the application be run in other countries?
     Recommended test: Confirm with the user the countries in which the application will be run. (Test technique: Compliance; test tool: Confirmation/examination)
  8. Have the portability requirements been documented?
     Recommended test: Examine the requirements documentation for portability requirements. (Test technique: Compliance; test tool: Inspections)

TEST FACTOR: Systems Interface Defined

  1. Have data to be received from other applications been identified?
     Recommended test: Confirm with the project team that interfaced applications have been identified. (Test technique: Intersystems; test tool: Confirmation/examination)
  2. Have data going to other applications been identified?
     Recommended test: Confirm with the project team that interfaced applications have been identified. (Test technique: Intersystems; test tool: Confirmation/examination)
  3. Has the reliability of interfaced data been defined?
     Recommended test: Confirm with the other applications the reasonableness of the reliability requirements. (Test technique: Control; test tool: Fact finding)
  4. Has the timing of transmitting data been defined?
     Recommended test: Confirm with the other applications the reasonableness of the timing requirements. (Test technique: Control; test tool: Fact finding)
  5. Has the timing of data being received been defined?
     Recommended test: Confirm with the other applications the reasonableness of the timing requirements. (Test technique: Control; test tool: Fact finding)
  6. Has the method of interfacing been defined?
     Recommended test: Examine documentation to ensure the completeness of interface methods. (Test technique: Intersystems; test tool: Walkthroughs)
  7. Have the interface requirements been documented?
     Recommended test: Verify the completeness of the interface requirements documentation. (Test technique: Intersystems; test tool: Walkthroughs)
  8. Have future needs of interfaced systems been taken into consideration?
     Recommended test: Confirm with the interfaced projects the need to consider future requirements. (Test technique: Intersystems; test tool: Fact finding)

TEST FACTOR: Performance Criteria Established

  1. Will hardware and software be obtained through competitive bidding?
     Recommended test: Examine the reasonableness of the competitive bidding procedures. (Test technique: Compliance; test tool: Acceptance test criteria)
  2. Have cost-effectiveness criteria been defined?
     Recommended test: Examine the cost-effectiveness criteria. (Test technique: Compliance; test tool: Confirmation/examination)
  3. Has the cost-effectiveness for this application system been calculated in accordance with the procedures?
     Recommended test: Examine the calculation and confirm that it was prepared in accordance with the procedures. (Test technique: Compliance; test tool: Checklist)
  4. Are the cost-effectiveness procedures applicable to this application?
     Recommended test: Confirm with the user that the procedures are applicable to this application. (Test technique: Compliance; test tool: Confirmation/examination)
  5. Could application characteristics cause the actual cost to vary significantly from the projections?
     Recommended test: Confirm with the user that there are no unusual characteristics that could cause the cost to vary significantly. (Test technique: Compliance; test tool: Confirmation/examination)
  6. Are there application characteristics that could cause the benefits to vary significantly from the projected benefits?
     Recommended test: Confirm with the user that there are no characteristics that would cause the actual benefits to vary significantly from the projected benefits. (Test technique: Compliance; test tool: Confirmation/examination)
  7. Is the expected life of the project reasonable?
     Recommended test: Confirm with the user the reasonable life of the project. (Test technique: Compliance; test tool: Confirmation/examination)
  8. Does a design phase schedule exist that identifies tasks, people, budgets, and costs?
     Recommended test: Examine the completeness of the design phase work program. (Test technique: Compliance; test tool: Design review)

TEST FACTOR: Operational Needs Defined

  1. Has the volume of transactions been identified?
     Recommended test: Confirm with the user that the volume of transactions is correct. (Test technique: Compliance; test tool: Confirmation/examination)
  2. Has the timing of processing been determined?
     Recommended test: Confirm with the user that the timing is reasonable. (Test technique: Compliance; test tool: Confirmation/examination)
  3. Has the frequency of processing been determined?
     Recommended test: Confirm with the user that the frequency is reasonable. (Test technique: Compliance; test tool: Confirmation/examination)
  4. Has the number of documents that need to be stored online been determined?
     Recommended test: Confirm with the user that the storage requirements are correct. (Test technique: Compliance; test tool: Confirmation/examination)
  5. Will communication capabilities be required for processing?
     Recommended test: Confirm with the user that the communication needs are correct. (Test technique: Compliance; test tool: Confirmation/examination)
  6. Will special processing capabilities such as optical scanners be required?
     Recommended test: Review documentation to identify special processing needs. (Test technique: Operations; test tool: Peer review)
  7. Will computer operations be expected to perform special tasks, such as data entry?
     Recommended test: Review documentation to identify special operating requirements. (Test technique: Operations; test tool: Peer review)
  8. Has computer operations been advised of the project requirements?
     Recommended test: Confirm with computer operations that they have been advised of the project requirements. (Test technique: Operations; test tool: Confirmation/examination)

TEST FACTOR: Tolerances Established

  1. Have the significant financial fields been identified?
     Recommended test: Confirm with the accounting department that the indicated financial fields are the key financial fields for the application system. (Test technique: Control; test tool: Confirmation/examination)
  2. Has responsibility for the accuracy and completeness of each financial field been assigned?
     Recommended test: Examine system documentation indicating the individual responsible for each key financial field. (Test technique: Control; test tool: Inspections)
  3. Have the accuracy and completeness risks been identified?
     Recommended test: Assess the completeness of the identified risks. (Test technique: Requirements; test tool: Walkthroughs)
  4. Has the individual responsible for each field stated the required precision for financial accuracy?
     Recommended test: Review the system documentation to determine that the stated accuracy precision is recorded. (Test technique: Control; test tool: Confirmation/examination)
  5. Has the accounting cutoff method been determined?
     Recommended test: Confirm with the user that the projected cutoff procedure is realistic. (Test technique: Control; test tool: Confirmation/examination)
  6. Have procedures been established to ensure that all of the transactions will be entered on a timely basis?
     Recommended test: Examine the reasonableness of the procedures to ensure the timely recording of transactions. (Test technique: Control; test tool: Walkthroughs)
  7. Has a procedure been specified to monitor the accuracy of financial information?
     Recommended test: Review the reasonableness of the procedures to monitor financial accuracy. (Test technique: Control; test tool: Walkthroughs)
  8. Are rules established on handling inaccurate and incomplete data?
     Recommended test: Review the reasonableness of the procedures to handle inaccurate and incomplete data. (Test technique: Error handling; test tool: Inspections)

TEST FACTOR: Authorization Rules Defined

  1. Have all of the key transactions been identified?
     Recommended test: Confirm with the user that all of the key transactions are identified. (Test technique: Security; test tool: Confirmation/examination)
  2. Have the rules for authorizing each of the key transactions been determined?
     Recommended test: Verify that the authorization rules comply with organizational policies and procedures. (Test technique: Control; test tools: Confirmation/examination and Peer review)
  3. Are the authorization rules consistent with the value of the resources controlled by the transaction?
     Recommended test: Review the reasonableness of the authorization rules in relationship to the resources controlled. (Test technique: Requirements; test tools: Walkthroughs and Peer review)
  4. Have the individuals who can authorize each transaction been identified?
     Recommended test: Verify that the individuals have been granted that specific authorization by management. (Test technique: Control; test tools: Confirmation/examination and Peer review)
  5. Have specifications been determined requiring the name of the individual authorizing the transaction to be carried with the transaction?
     Recommended test: Review the documentation to verify that the specifications require the system to maintain records on who authorized each transaction. (Test technique: Requirements; test tool: Inspections)
  6. Have the transactions that will be automatically generated by the system been identified?
     Recommended test: Confirm with the user that all of the transactions that will be computer generated have been identified. (Test technique: Security; test tool: Confirmation/examination)
  7. Have the rules for authorizing computer-generated transactions been identified?
     Recommended test: Verify that these authorization rules are consistent with the organization’s policies and procedures. (Test technique: Control; test tool: Confirmation/examination)
  8. Have procedures to monitor the reasonableness of computer-generated transactions been specified?
     Recommended test: Review the reasonableness of the procedures that will monitor computer-generated transactions. (Test technique: Requirements; test tool: Walkthroughs)

TEST FACTOR: File Integrity Requirements Defined

  1. Have key computer files been identified?
     Recommended test: Confirm with the user that the identified files are the key files. (Test technique: Requirements; test tool: Confirmation/examination)
  2. Has the composition of the data on each of the key files been identified?
     Recommended test: Confirm with the user that the major data fields have been identified. (Test technique: Requirements; test tool: Confirmation/examination)
  3. Have the key control fields been identified?
     Recommended test: Confirm with the user that the identified key fields are the key control fields. (Test technique: Requirements; test tool: Confirmation/examination)
  4. Has the method of internal file integrity for each of the key fields been determined?
     Recommended test: Verify the reasonableness of the method to ensure the integrity of the key fields within the automated system. (Test technique: Control; test tool: Walkthroughs)
  5. In a multiuser system, has one user been assigned data integrity responsibility?
     Recommended test: Determine the reasonableness of assigning responsibility to the named individual. (Test technique: Control; test tool: Fact finding)
  6. Has a decision been made as to whether the integrity of the field warrants an external, independently maintained control total?
     Recommended test: Confirm with the organization’s comptroller the importance of the key fields for which independent external control totals are not maintained. (Test technique: Control; test tool: Confirmation/examination)
  7. Has the method of maintaining independent control totals on the key fields been determined?
     Recommended test: Examine the reasonableness of the method for maintaining independent control totals on key fields. (Test technique: Control; test tool: Fact finding)
  8. Have tolerances been established on the degree of reliability expected from file integrity controls?
     Recommended test: Confirm the reasonableness of the integrity tolerances with the organization’s comptroller. (Test technique: Control; test tool: Confirmation/examination)

TEST FACTOR: Reconstruction Requirements Defined

  1. Does the organization’s record retention policy include automated applications?
     Recommended test: Review the applicability of the record retention policy to automated applications. (Test technique: Control; test tool: Walkthroughs)
  2. Have the criteria for reconstructing transaction processing been determined?
     Recommended test: Review the reasonableness of the reconstruction criteria with the application user. (Test technique: Requirements; test tool: Fact finding)
  3. Have the criteria for reconstructing computer files been determined?
     Recommended test: Verify the reasonableness of the reconstruction procedures with the manager of computer operations. (Test technique: Requirements; test tool: Fact finding)
  4. Is the requirements documentation adequate and in compliance with standards?
     Recommended test: Verify the completeness and adequacy of the requirements documentation. (Test technique: Requirements; test tool: Inspections)
  5. Have the criteria for reconstructing processing from a point of known integrity been determined?
     Recommended test: Confirm the reasonableness of the processing reconstruction requirements with the manager of computer operations. (Test technique: Requirements; test tool: Confirmation/examination)
  6. Has the project stated a requirement to trace transactions to application control totals?
     Recommended test: Verify that the system specifications include this requirement. (Test technique: Control; test tool: Confirmation/examination)
  7. Has the project stated a requirement specifying that control totals must be supportable by identifying all the transactions comprising that control total?
     Recommended test: Verify that the system specifications include this requirement. (Test technique: Control; test tool: Confirmation/examination)
  8. Has the retention period for all of the reconstruction information been specified?
     Recommended test: Confirm that the retention periods are in accordance with the organization’s record retention policy. (Test technique: Requirements; test tool: Inspections)

TEST FACTOR: Impact of Failure Defined

  1. Has the dollar loss of an application system failure been defined?
     Recommended test: Examine the reasonableness of the dollar loss. (Test technique: Recovery; test tool: Fact finding)
  2. Has the dollar loss calculation for a failure been extended to show the loss at different time intervals, such as one hour, eight hours, one day, and one week?
     Recommended test: Examine the reasonableness of the loss amounts at various time intervals. (Test technique: Recovery; test tool: Fact finding)
  3. Is the proposed system technology reliable and proven in practice?
     Recommended test: Confirm with independent sources the reliability and track record of the recommended hardware and software. (Test technique: Recovery; test tool: Confirmation/examination)
  4. Has a decision been made as to whether it is necessary to recover this application in the event of a system failure?
     Recommended test: Confirm the correctness of the decision with the system user. (Test technique: Recovery; test tool: Confirmation/examination)
  5. Are alternate processing procedures needed in the event that the system becomes inoperable?
     Recommended test: Confirm with the user the need for alternate processing procedures. (Test technique: Recovery; test tool: Confirmation/examination)
  6. If alternate processing procedures are needed, have they been specified?
     Recommended test: Confirm with the user the reasonableness of those alternate processing procedures. (Test technique: Recovery; test tool: Confirmation/examination)
  7. Has a procedure been identified for notifying users in the event of a system failure?
     Recommended test: Confirm with the user the reasonableness of the notification procedure. (Test technique: Recovery; test tool: Confirmation/examination)
  8. Has the desired percent of up-time for the system been specified?
     Recommended test: Confirm with the user the reasonableness of the up-time.

TEST FACTOR: Desired Service Level Defined

  1. Has the response time for each transaction been identified?
     Recommended test: Confirm with the user that the response times are reasonable. (Test technique: Operations; test tool: Confirmation/examination)
  2. Has a schedule been established indicating which part of the system is run on which day?
     Recommended test: Confirm with computer operations that there is sufficient capacity to meet these service levels. (Test technique: Operations; test tool: Confirmation/examination)
  3. Do all vendor contracts indicate maintenance support for key hardware and software?
     Recommended test: Review contractual specifications to ensure they include maintenance. (Test technique: Operations; test tool: Confirmation/examination)
  4. Have processing tolerances been established for each part of the system?
     Recommended test: Confirm with the user that these service level tolerances are correct. (Test technique: Operations; test tool: Confirmation/examination)
  5. Can computer operations process the requirements within the expected tolerances?
     Recommended test: Confirm with the manager of computer operations the reasonableness of the tolerances. (Test technique: Operations; test tool: Confirmation/examination)
  6. Has the priority of each part of system processing been decided to determine which segment runs first in the event computer time is limited?
     Recommended test: Confirm with the user the reasonableness of the priorities. (Test technique: Operations; test tool: Confirmation/examination)
  7. Has the priority of each application been established in relationship to other applications to determine the priority of processing after a failure and in the event of limited computer time?
     Recommended test: Confirm with a member of executive management the reasonableness of the application system priority. (Test technique: Operations; test tool: Confirmation/examination)
  8. Has the volume of processing requirements been projected for a reasonable period of time in the future?
     Recommended test: Confirm with the manager of operations that there will be sufficient capacity to meet these increased volumes. (Test technique: Operations; test tool: Confirmation/examination)

TEST FACTOR: Access Defined

  1. Have the application resources been identified?
     Recommended test: Confirm with the user that the identified resources are complete. (Test technique: Security; test tools: Risk matrix and Confirmation/examination)
  2. Have the users of those resources been identified?
     Recommended test: Confirm with the individual responsible for those resources that the users are authorized. (Test technique: Security; test tools: Risk matrix and Confirmation/examination)
  3. Have the individuals responsible for those resources been identified?
     Recommended test: Confirm with user management that these are the individuals responsible for those resources. (Test technique: Security; test tools: Risk matrix and Confirmation/examination)
  4. Has a profile been established matching resources with the users authorized to access those resources?
     Recommended test: Examine the completeness of the user profile. (Test technique: Security; test tools: Risk matrix and Peer review)
  5. Have procedures been identified to enforce the user profile?
     Recommended test: Confirm with the manager of computer operations that the procedures are workable. (Test technique: Security; test tool: Confirmation/examination)
  6. Has the importance of each resource been identified?
     Recommended test: Confirm with the individual responsible that the security classifications are correct. (Test technique: Security; test tool: Confirmation/examination)
  7. Has a procedure been established for monitoring access violations?
     Recommended test: Evaluate the reasonableness of the monitoring procedures. (Test technique: Control; test tool: Fact finding)
  8. Has a process been established to punish access violators?
     Recommended test: Confirm with management that they intend to enforce violation procedures. (Test technique: Control; test tool: Confirmation/examination)

Table 9-2. Quality Control Checklist

(For each item, record Yes, No, or N/A, together with any comments.)

  1. Are the defined requirements testable?
  2. Does the user agree that the defined requirements are correct?
  3. Do the developers understand the requirements?
  4. Do the stated requirements meet the stated business objectives for the project?
  5. Have the project risks been identified?
  6. Was a reasonable process followed in defining the requirements?
  7. Are project control requirements adequate to minimize project risks?
  8. Was a project requirements walkthrough conducted?

Preparing a Risk Matrix

A risk matrix is a tool designed to assess the adequacy of controls in computer systems. The term controls is used in its broadest context, meaning all the mechanisms, methods, and procedures used in the application to ensure that it functions in accordance with the intent of management. It is estimated that in automated systems, controls account for at least one-half of the total developmental effort. Therefore, effort expended to ensure the adequacy of controls is essential to the success and credibility of the application system.

One of the major benefits of the risk matrix is the identification of risks and of what the system must do about each of them. The risk matrix is primarily a design tool; however, because it is infrequently used during the design process, it can also serve as a test tool.
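
As a minimal illustration of the structure involved, the risk matrix can be thought of as a list of identified risks, each paired with the controls intended to reduce it and the team's judgment of whether the residual risk is acceptable. The sketch below is illustrative only; the risk and control names are assumptions (the two risks echo items from the access-control list in Figure 9-2).

```python
# A minimal sketch of a risk matrix (risk and control names are illustrative):
# each identified risk is matched with the controls that reduce it, so the
# team can judge whether the remaining risk is acceptable and spot risks that
# have no matching controls at all.
from dataclasses import dataclass, field
from typing import List

@dataclass
class RiskEntry:
    risk: str
    controls: List[str] = field(default_factory=list)
    acceptable: bool = False  # the team's judgment after reviewing the controls

risk_matrix = [
    RiskEntry("Remote terminals used by unauthorized persons",
              controls=["Password sign-on", "Automatic terminal logoff"],
              acceptable=True),
    RiskEntry("Terminated employee retains system access"),  # no controls matched yet
]

uncontrolled = [entry.risk for entry in risk_matrix if not entry.controls]
print(uncontrolled)  # ['Terminated employee retains system access']
```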

The risk matrix can be used in both the requirements phase and the design phase. The following discussion explains how to use the risk matrix. Ideally, the risk matrix starts in the requirements phase and is expanded and completed in the design phase. The execution of the risk matrix requires five actions. The actions should be performed in the following sequence.

Establishing the Risk Team

The key to a successful risk matrix is the establishment of the correct risk team, whose responsibility will be to complete the matrix. The objective of completing the matrix is to determine the adequacy of the control requirements and design to reduce the risks to an acceptable level.

The risk team may be part of the requirements team or part of the test team, or it may be a team specifically selected for the purpose of completing the risk matrix. The team should consist of three to six members and at a minimum possess the following skills:

  • Knowledge of the user application

  • Understanding of risk concepts

  • Ability to identify controls

  • Familiarity with both application and information services risks

  • Understanding of information services concepts and systems design

  • Understanding of computer operations procedures

The candidates for the risk team should, at a minimum, include someone from the user area and any or all of the following:

  • Internal auditor

  • Risk consultant

  • Data processor

  • Security officer

  • Computer operations manager

Identifying Risks

The objective of the risk team is first to identify the application-oriented, not environmental, risks associated with the application system. Risks that relate to all applications equally (for example, environmental risks) need not be identified unless they have some special relevance to the application. The risk team can use one of the following two methods for risk identification:

  1. Risk analysis scenario. In this method, the risk team “brainstorms” the potential application risks using their experience, judgment, and knowledge of the application area. It is important to have the synergistic effect of a group so that group members can challenge one another to develop a complete list of risks that are realistic for the application.

  2. Risk checklist. The risk team is provided with a list of the more common risks that occur in automated applications. From this list, the team selects those risks applicable to the application. In this method, the team needs fewer skills because the risk list provides the stimuli for the process, and the objective of the team is to determine which of the risks on the list are applicable to the application. Figure 9-2 provides a list of risks for the purpose of identification.

    Figure 9-2. List of generalized application risks.

    CATEGORY: Uncontrolled System Access

    1.

    Data or programs may be stolen from the computer room or other storage areas.

    2.

    Information services facilities may be destroyed or damaged by either intruders or employees.

    3.

    Individuals may not be adequately identified before they are allowed to enter the information services area.

    4.

    Remote terminals may not be adequately protected from use by unauthorized persons.

    5.

    An unauthorized user may gain access to the system and an authorized user’s password.

    6.

    Passwords may be inadvertently revealed to unauthorized individuals. A user may write his or her password in some convenient place, or the password may be obtained from card decks, discarded printouts, or by observing the user as he or she types it.

    7.

    A user may leave a logged-in terminal unattended, allowing an unauthorized person to use it.

    8.

    A terminated employee may retain access to an information services system because his or her name and password are not immediately deleted from authorization tables and control lists.

    9.

    An unauthorized individual may gain access to the system for his or her own purposes (e.g., theft of computer services or data or programs, modification of data, alteration of programs, sabotage, denial of services).

    10.

    Repeated attempts by the same user or terminal to gain unauthorized access to the system or to a file may go undetected.

    CATEGORY: Ineffective Security Practices for the Application

    1.

    Poorly defined criteria for authorized access may result in employees not knowing what information they, or others, are permitted to access.

    2.

    The person responsible for security may fail to restrict user access to only those processes and data which are needed to accomplish assigned tasks.

    3.

    Large disbursements, unusual price changes, and unanticipated inventory usage may not be reviewed for correctness.

    4.

    Repeated payments to the same party may go unnoticed because there is no review.

    5.

    Sensitive data may be carelessly handled by the application staff, by the mail service, or by other personnel within the organization.

    6.

    Post-processing reports analyzing system operations may not be reviewed to detect security violations.

    7.

    Inadvertent modification or destruction of files may occur when trainees are allowed to work on live data.

    8.

    Appropriate action may not be pursued when a security variance is reported to the system security officer or to the perpetrating individual’s supervisor; in fact, procedures covering such occurrences may not exist.

    CATEGORY: Procedural Errors at the Information Services Facility

    Procedures and Controls

    1.

    Files may be destroyed during database reorganization or during release of disk space.

    2.

    Operators may ignore operational procedures (for example, by allowing programmers to operate computer equipment).

    3.

    Job control language parameters may be erroneous.

    4.

    An installation manager may circumvent operational controls to obtain information.

    5.

    Careless or incorrect restarting after shutdown may cause the state of a transaction update to be unknown.

    6.

    An operator may enter erroneous information at the CPU console (e.g., control switch in wrong position, terminal user allowed full system access, operator cancels wrong job from queue).

    7.

    Hardware maintenance may be performed while production data is online and the equipment undergoing maintenance is not isolated.

    8.

    An operator may perform unauthorized acts for personal gain (e.g., make extra copies of competitive bidding reports, print copies of unemployment checks, delete a record from a journal file).

    9.

    Operations staff may sabotage the computer (e.g., drop pieces of metal into a terminal).

    10.

    The wrong version of a program may be executed.

    11.

    A program may be executed twice using the same transactions.

    12.

    An operator may bypass required safety controls.

    13.

    Supervision of operations personnel may not be adequate during nonworking hour shifts.

    14.

    Due to incorrectly learned procedures, an operator may alter or erase the master files.

    15.

    A console operator may override a label check without recording the action in the security log.

    CATEGORY: Procedural Errors at the Information Services Facility

    Storage Media Handling

    1.

    Critical tape files may be mounted without being write-protected.

    2.

    Inadvertently or intentionally mislabeled storage media are erased. In a case where they contain backup files, the erasure may not be noticed until the backup is needed.

    3.

    Internal labels on storage media may not be checked for correctness.

    4.

    Files with missing or mislabeled expiration dates may be erased.

    5.

    Incorrect processing of data or erroneous updating of files may occur when card decks have been dropped, partial input decks are used, write rings are mistakenly placed in tapes, paper tape is incorrectly mounted, or wrong tape is mounted.

    6.

    Scratch tapes used for jobs processing sensitive data may not be adequately erased after use.

    7.

    Temporary files written during a job step for use in subsequent steps may be erroneously released or modified through inadequate protection of the files or because of an abnormal termination.

    8.

    Storage media containing sensitive information may not get adequate protection because operations staff is not advised of the nature of the information content.

    9.

    Tape management procedures may not adequately account for the current status of all tapes.

    10.

    Magnetic storage media that have contained very sensitive information may not be degaussed before being released.

    11.

    Output may be sent to the wrong individual or terminal.

    12.

    Improperly operating output or post-processing units may result in loss of output.

    13.

    Surplus output material may not be disposed of properly.

    14.

    Tapes and programs that label output for distribution may be erroneous or not protected from tampering.

    CATEGORY: Program Errors

    1.

    Records may be deleted from sensitive files without a guarantee that the deleted records can be reconstructed.

    2.

    Programmers may insert special provisions in programs that manipulate data concerning themselves (e.g., payroll programmer may alter his or her own payroll records).

    3.

    Data may not be stored separately from code with the result that program modifications are more difficult and must be made more frequently.

    4.

    Program changes may not be tested adequately before being used in a production run.

    5.

    Changes to a program may result in new errors because of unanticipated interactions between program modules.

    6.

    Program acceptance tests may fail to detect errors that only occur for unusual combinations of input (e.g., a program that is supposed to reject all except a specified range of values actually accepts an additional value).

    7.

    Programs, the contents of which should be safeguarded, may not be identified and protected.

    8.

    Code, test data with its associated output, and documentation for certified programs may not be filed and retained for reference.

    9.

    Documentation for vital programs may not be safeguarded.

    10.

    Programmers may fail to keep a change log, to maintain backup copies, or to formalize recordkeeping activities.

    11.

    An employee may steal programs he or she is maintaining and use them for personal gain.

    12.

    Poor program design may result in a critical data value being initialized twice. An error may occur when the program is modified to change the data value—but only changes it in one place.

    13.

    Production data may be disclosed or destroyed when it is used during testing.

    14.

    Errors may result when the programmer misunderstands requests for changes to the program.

    15.

    Errors may be introduced by a programmer who makes changes directly to machine code.

    16.

    Programs may contain routines not compatible with their intended purpose, which can disable or bypass security protection mechanisms. For example, a programmer who anticipates being fired inserts code into a program that will cause vital system files to be deleted as soon as his/her name no longer appears in the payroll file.

    17.

    Inadequate documentation or labeling may result in the wrong version of program being modified.

    CATEGORY: Operating System Flaws

    1.

    User jobs may be permitted to read or write outside assigned storage area.

    2.

    Inconsistencies may be introduced into data because of simultaneous processing of the same file by two jobs.

    3.

    An operating system design or implementation error may allow a user to disable audit controls or to access all system information.

    4.

    An operating system may not protect a copy of information as thoroughly as it protects the original.

    5.

    Unauthorized modification to the operating system may allow a data entry clerk to enter programs and thus subvert the system.

    6.

    An operating system crash may expose valuable information such as password lists or authorization tables.

    7.

    Maintenance personnel may bypass security controls.

    8.

    An operating system may fail to record that multiple copies of output have been made from spooled storage devices.

    9.

    An operating system may fail to maintain an unbroken audit trail.

    10.

    When restarting after a system crash, the operating system may fail to ascertain that all terminal locations that were previously occupied are still occupied by the same individuals.

    11.

    A user may be able to get into monitor or supervisory mode.

    12.

    The operating system may fail to erase all scratch space assigned to a job after the normal or abnormal termination of the job.

    13.

    Files may be allowed to be read or written without having been opened.

    CATEGORY: Communication System Failure

    Accidental Failures

    1.

    Undetected communications errors may result in incorrect or modified data.

    2.

    Information may be accidentally misdirected to the wrong terminal.

    3.

    Communication nodes may leave unprotected fragments of messages in memory during unanticipated interruptions in processing.

    4.

    Communication protocols may fail to positively identify the transmitter or receiver of a message.

    Intentional Acts

    1.

    Communication lines may be monitored by unauthorized individuals.

    2.

    Data or programs may be stolen via telephone circuits from a remote job entry terminal.

    3.

    Programs in the network switching computers may be modified to compromise security.

    4.

    Data may be deliberately changed by individuals tapping the line.

    5.

    An unauthorized user may “take over” a computer communication port as an authorized user disconnects from it. Many systems cannot detect the change. This is particularly true of many of the currently available communication protocols.

    6.

    If encryption is used, keys may be stolen.

    7.

    A terminal user may be “spoofed” into providing sensitive data.

    8.

    False messages may be inserted into the system.

    9.

    True messages may be deleted from the system.

    10.

    Messages may be recorded and replayed into the system.

 

Establishing Control Objectives (Requirements Phase Only)

During the requirements phase, the control objectives for each risk should be established. These objectives define the acceptable level of loss for each identified risk; stated another way, the acceptable level of loss is the measurable objective for control. When the objective is stated in measurable terms, the controls designed to achieve it have a requirement to use for control-decision purposes.

The adequacy of control cannot be tested until the acceptable level of loss from each risk has been defined. Therefore, although the definition of the control objectives is a user and project responsibility, it may take the formation of a risk team to get them defined. After the control objectives have been defined, the requirements can be tested to determine whether those objectives are achievable.

Table 9-1 shows an example risk matrix at the end of the requirements phase for a typical billing and distribution system. This matrix lists four risks for the billing and distribution system and lists control objectives for each of those risks. For example, one of the risks is that the product will be shipped but not billed. In this instance, the control objective is to ensure that all shipments are billed. In other words, the acceptable level of loss for this risk is zero, and the project team must install a system that ensures that for each shipment leaving the distribution area an invoice will be prepared. However, note that the next risk is that the product will be billed at the wrong price or quantity and that the controls have a greater than zero level of loss established, as do the other two risks.

Table 9-1. Requirements Phase Risk Matrix Example

RISK

CONTROL OBJECTIVE

Shipped but not billed

Ensure all shipments are billed.

Billed for wrong quantity or price

Bill at current price on 99 percent of line items and/or have pricing errors of less than plus or minus 10 percent.

Billed to wrong customer

Reduce incorrect billings to less than 0.1 percent of invoices.

Shipped wrong product or quantity

Ship correct product and quantity on 99 percent of line items.
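
The structure of this requirements-phase matrix is simple enough to capture in a small record per risk. The following Python sketch is illustrative only: the risk names and objectives are taken from Table 9-1, while the field names and the check for measurable objectives are assumptions added for the example.

  # Minimal sketch of a requirements-phase risk matrix (assumed structure).
  # Risks and control objectives follow Table 9-1; the "measurable" flag is
  # an illustrative convention, not part of the chapter's formal method.
  from dataclasses import dataclass

  @dataclass
  class RiskEntry:
      risk: str               # the application-oriented risk
      control_objective: str  # acceptable level of loss, stated measurably
      measurable: bool        # can the objective be tested against a number?

  requirements_risk_matrix = [
      RiskEntry("Shipped but not billed",
                "Ensure all shipments are billed (acceptable loss is zero)", True),
      RiskEntry("Billed for wrong quantity or price",
                "Bill at current price on 99% of line items, pricing error within +/-10%", True),
      RiskEntry("Billed to wrong customer",
                "Reduce incorrect billings to less than 0.1% of invoices", True),
      RiskEntry("Shipped wrong product or quantity",
                "Ship correct product and quantity on 99% of line items", True),
  ]

  # Before control adequacy can be tested, every risk must carry a
  # measurable control objective.
  untestable = [entry.risk for entry in requirements_risk_matrix if not entry.measurable]
  print("Risks lacking measurable control objectives:", untestable or "none")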

Identifying Controls in Each System Segment

The following are the common system segments:

  • Origination. The creation of the source document plus the authorization associated with that transaction origination.

  • Data entry. The transfer of information to machine-readable media.

  • Communication. The movement of data from one point in the system to another. Movement may be manual or electronic.

  • Processing. Application of the system logic to the data.

  • Storage. The retention of data, for both temporary and extended periods of time.

  • Output. The translation of data from computer media to media understandable and usable by people.

  • Use. Satisfaction of the business need through the results of system processing.

The risk team determines which controls are applicable to which risk and records them in the correct segment of the system. At the conclusion of the development of the risk matrix, the risk team assesses whether the controls are adequate to reduce the risk to the acceptable level identified in the control objective. This will test the adequacy of the controls at the conclusion of the design process. An example of a risk matrix for billing and distribution systems at the end of the design phase is illustrated in Table 9-2.

Table 9-2. Design Phase Risk Matrix Example

SYSTEM SEGMENT RISK

ORIGINATION

DATA ENTRY

COMMUNICATION

PROCESSING

STORAGE

OUTPUT

USE

Shipped but not billed

1

  

2

  

6

Billed for wrong quantity or price

 

6

 

7

8

9

10

11

 

Billed to wrong customer

   

12

3

14

15

16

Shipped wrong product or quantity

17

18

 

19

20

 

21

22

The same four risks that were identified during the requirements phase (refer to Table 9-1) are listed on this matrix also, as are the controls associated with each risk. In this example, the shipped-but-not-billed risk shows that three controls (1, 2, and 3) will help reduce that risk. (Note that for an actual matrix these controls must be described.) The matrix shows in which segment of the application system those controls reside. After the controls have been identified and recorded, the risk team must determine whether those three controls and the segments in which they exist are adequate to reduce the shipped-but-not-billed risk to the point where all shipments will be billed.
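
Purely as an illustration, the design-phase matrix can be recorded as a mapping from each risk to the control numbers identified in each system segment. In the sketch below, the control numbers and their segment placements are placeholders (an actual matrix would also describe each control); only the segment names and the adequacy question come from the chapter.

  # Assumed representation of a design-phase risk matrix; control numbers
  # and segment placements are placeholders, not the chapter's actual controls.
  SEGMENTS = ["origination", "data entry", "communication", "processing",
              "storage", "output", "use"]

  design_risk_matrix = {
      "Shipped but not billed": {"origination": [1], "communication": [2], "output": [3]},
      # ... the remaining risks would be recorded the same way ...
  }

  def controls_for(risk: str) -> list[int]:
      """Collect, in segment order, every control that addresses a risk."""
      by_segment = design_risk_matrix.get(risk, {})
      return [control for segment in SEGMENTS for control in by_segment.get(segment, [])]

  # The adequacy judgment itself remains a manual decision by the risk team:
  # are these controls enough to reduce the risk to the acceptable level?
  print(controls_for("Shipped but not billed"))   # [1, 2, 3]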

Determining the Adequacy of Controls

The test concludes when the risk team assesses whether controls are adequate to reduce each identified risk to an acceptable level.

Performing a Test Factor Analysis

Work Paper 9-1 provides a process to assess the concerns associated with the requirements phase of the system’s development life cycle. A test program is included for each concern. There are 15 concerns, covering each phase of the development process. For each concern, there is a test program comprising eight criteria. The test program lists those criteria that, if proved to be adequately addressed through testing, should assure the test team that the concern is minimal.

The test team must perform sufficient testing to evaluate the adequacy with which the project team has handled each of the test criteria. For example, in the requirements phase, one test criterion is “Have the significant financial fields been identified?” To determine that the project team has adequately addressed this criterion, the test team conducts such tests as necessary to assure themselves that the significant financial fields have been identified. The testing may require fact finding in the accounting department to verify that the fields indicated as financial fields are complete.
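
One convenient way to record the outcome of this analysis, shown purely as an illustration, is a small record per test criterion whose fields mirror the work paper columns (test criterion, assessment, recommended test, test technique, and test tool). The particular assessment, technique, and tool values below are assumptions for the example.

  # Illustrative record of one work paper entry; field values are assumed.
  from dataclasses import dataclass

  ASSESSMENTS = ("Very Adequate", "Adequate", "Inadequate", "N/A")

  @dataclass
  class CriterionResult:
      criterion: str
      assessment: str        # one of ASSESSMENTS
      recommended_test: str
      technique: str
      tool: str

  result = CriterionResult(
      criterion="Have the significant financial fields been identified?",
      assessment="Adequate",
      recommended_test="Fact finding in accounting to verify the financial fields are complete.",
      technique="Requirements",
      tool="Fact finding",
  )
  assert result.assessment in ASSESSMENTS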

Conducting a Requirements Walkthrough

The requirements phase involves creativity, experience, and judgment, as well as a methodology to follow. During this phase, the methodology helps, but it is really creativity and problem solving that is needed. Of the review processes, the walkthrough is the least structured and the most amenable to creativity. Therefore, the walkthrough becomes a review process that complements the objectives of the requirements phase. The objective of the walkthrough is to create a situation in which a team of skilled individuals can help the project team in the development of the project solutions. The walkthrough attempts to use the experience and judgment of the review team as an adjunct or aid in the developmental process. The walkthrough in the requirements phase is oriented toward assistance in problem solving as opposed to compliance to methodology.

The walkthrough involves five actions to be completed in the sequence listed below. The amount of time allocated to each step will depend on the size of the application being reviewed and the degree of assistance desired from the walkthrough team.

Establishing Ground Rules

The walkthrough concept requires that the project participants make a presentation explaining the functioning of the system as developed at the time of the presentation. The presentation, or reading of the requirements, is the vehicle for initiating discussion between the project team and the walkthrough team. The prime objective is to elicit questions, comments, and recommendations.

The walkthrough is most productive when ground rules are established before the actual walkthrough. The ground rules should be understood by both the project team and the walkthrough team and should normally include the following:

  • Size and makeup of the walkthrough team (Three to six skilled participants is a good size. Three members are needed to get sufficient perspective and discussion, but more than six members makes the process too large and unwieldy.)

  • Responsibility of the walkthrough team, which is usually limited to recommendations, comments, and questions.

  • Obligation of the project team to answer all questions and respond to recommendations.

  • Approximate length, time, and location of the walkthrough.

  • Confidentiality of information discussed at the walkthrough.

  • Non-negotiable aspects of the system.

  • Who will receive the results of the walkthrough and how are those results to be used? (For example, if a report is to be prepared, who will receive it, what is the purpose of the report, and what is the most likely action based on that report?)

Selecting the Team

The ground rules establish the size and makeup of the team. The ground rules are normally generic in nature, and must be converted into action. For example, if the ground rules say that the team should consist of two members of user management and two project leaders, the most appropriate individuals must then be selected.

The walkthrough team should be selected based on the objectives to be accomplished. Any of the involved parties (i.e., users, information services, and senior management) may want to recommend walkthrough team participants. These tend to be selected based on project concerns. For example, if operations is a major concern, operations people should be selected for the walkthrough team.

The most common participants on a walkthrough team include the following:

  • Information services project manager/systems analyst.

  • Senior management with responsibility over the computerized area.

  • Operations management.

  • User management.

  • Consultants possessing needed expertise. (The consultants may be from inside or outside the corporation. For example, the consultants may be internal auditors, database administrators, or independent computer consultants.)

A good review team has at least one member of user management, one senior member of information services, and one member of senior management. Additional participants can be added as necessary.

The team participants should be notified as soon as possible that they have been selected for the walkthrough and advised of the responsibility and time commitments and the date for the walkthrough. Generally, if people do not want to participate in the walkthrough, they should be relieved of that responsibility and another person selected. If a team participant has a scheduling conflict and cannot complete the review in time, it may be more advisable to change the time of the review than to lose the participant.

Presenting Project Requirements

The project personnel should present the project requirements to the walkthrough team. A good walkthrough includes a presentation of the following:

  • Statement of the goals and objectives of the project.

  • Background information, including appropriate statistics on the current and proposed application area. (Note that these statistics should be business statistics and not computer system statistics.)

  • List of any exceptions made by the project team.

  • Discussion of the alternatives considered and the alternative selected.

  • A walkthrough of the requirements using representative transactions as a baseline. (Rather than describing the system, it is better to select the more common transaction types and explain how those transactions will be processed based on the defined requirements.)

Responding to Questions/Recommendations

The project presentation should be interrupted with questions, comments, and recommendations as they occur to the walkthrough team. The objective of the walkthrough is to evoke discussion and not to instruct the walkthrough team on the application requirements. The project team should be prepared to deviate from any presentation plan to handle questions and recommendations as they occur.

It is generally good to appoint one person as recorder for the walkthrough. This is normally a member of the project team. The recorder’s duty is to capture questions for which appropriate answers are not supplied during the walkthrough, and to indicate recommendations for which acceptance and implementation are possible.

Issuing the Final Report (Optional)

The ground rules determine whether a report will be issued, and if so, to whom. However, if it is determined that a walkthrough report should be issued, responsibility should be given to a single person to write the report. State in advance to whom the report is to be issued. The entire walkthrough team should agree on the contents of the report; if they do not, the report should state minority opinions. The information captured by the recorder may prove valuable in developing the report. To be most valuable to the project team, the report should be issued within five days of the walkthrough.

Performing Requirements Tracing

Requirements tracing is a simple but difficult-to-execute concept. The objective is to uniquely identify each requirement to be implemented, and then determine at each checkpoint whether that requirement has been accurately and completely processed.

Requirements tracing requires the following three actions:

  1. Uniquely identify each requirement. The identification process can be as simple as 1 through x, or requirements can be named or any other method chosen that can uniquely identify the requirement. The end product of this step is a detailed listing of all the requirements (see the requirements tracing matrix in Figure 9-3).

  2. Identify the development checkpoints at which requirements will be traced. In most developmental processes, requirements will be traced at predefined checkpoints. For small projects, the checkpoints may be at the end of a phase, whereas in larger projects, sub-phases might require checkpoints. The checkpoints will be incorporated into the requirements tracing matrix. Note that in this matrix five checkpoints have been listed (a, b, c, d, e) as well as four requirements (1, 2, 3, 4).

  3. Check that the requirements have been accurately and completely implemented at the end of a checkpoint. Use the requirements tracing matrix to investigate whether the identified requirements have been accurately and correctly implemented at the end of a specific checkpoint. In Figure 9-3, for example, at developmental checkpoint a, a decision would be made as to whether requirements 1, 2, 3, and 4 have been accurately and correctly implemented. If they have not been, the development team must make the necessary corrections. (A minimal sketch of such a tracing check follows Figure 9-3.)

    Figure 9-3. Requirement tracing matrix.
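
A minimal sketch of such a tracing check follows. The requirement numbers and checkpoint letters mirror Figure 9-3; the status values and helper functions are assumptions added for illustration.

  # Assumed representation of a requirements tracing matrix.
  # trace[req][checkpoint] is True if the requirement was found accurately and
  # completely implemented at that checkpoint, False if not, None if unchecked.
  requirements = [1, 2, 3, 4]
  checkpoints = ["a", "b", "c", "d", "e"]
  trace = {req: {cp: None for cp in checkpoints} for req in requirements}

  def record(req: int, checkpoint: str, implemented: bool) -> None:
      trace[req][checkpoint] = implemented

  def failures_at(checkpoint: str) -> list[int]:
      """Requirements the development team must correct before moving on."""
      return [req for req in requirements if trace[req][checkpoint] is False]

  record(1, "a", True)
  record(2, "a", False)      # requirement 2 missed at checkpoint a
  print(failures_at("a"))    # [2]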

Ensuring Requirements Are Testable

Many believe this is one of the most valuable verification techniques. If requirements are testable, there is a high probability that they will, in fact, meet the user needs as well as simplify implementation. Ideally, users of the requirement would develop the means for validating whether the requirement has been correctly implemented. For example, if there was a requirement that customers could not exceed their credit limit on purchases, the users might define three tests that test below the credit limit, at the credit limit, and above the credit limit.
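
The credit-limit example can be made concrete with three boundary conditions. The sketch below is illustrative only; the check_purchase rule, the limit value, and the test amounts are assumptions invented for the example rather than part of any particular application.

  # Hypothetical rule under test: reject purchases that push the balance
  # past the credit limit. Everything here is an assumed example.
  def check_purchase(balance: float, amount: float, credit_limit: float) -> bool:
      return balance + amount <= credit_limit

  CREDIT_LIMIT = 1000.00

  test_conditions = [
      ("below the credit limit", 999.99, True),
      ("at the credit limit", 1000.00, True),
      ("above the credit limit", 1000.01, False),
  ]

  for name, amount, expected in test_conditions:
      actual = check_purchase(balance=0.0, amount=amount, credit_limit=CREDIT_LIMIT)
      status = "pass" if actual == expected else "FAIL"
      print(f"{name}: expected {expected}, got {actual} -> {status}")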

Ensuring that requirements are testable requires only that some stakeholder develop the means for testing each requirement. As previously discussed, ideally this is the user. However, some users do not have the background necessary to develop the test conditions without the assistance of someone experienced in creating test data. Note that developing testable requirements is very similar to the concept of “use cases.” A use case tests how the outputs from the software will be used by the operating personnel.

Use cases are helpful in three ways:

  • Testing that requirements are accurately and completely implemented

  • Assisting developers in implementing requirements because the implementers will know how the outputs will be used

  • Developing cases for the acceptance testing of the software by the users

Task 2: Test During the Design Phase

During the design phase, the user and the system designer must work together closely. Neither party should be dominant during this period, the phase during which the user-defined requirements are converted into a process that can be accomplished by a computer. It is important that both the user and system designer work as partners to develop not only an efficient application system, but also one that is acceptable to the user.

Testing during the design phase should be jointly shared by the user and the information services project team. If the team consists of both users and IT personnel, the project team can accept test responsibility.

The system design is an IT responsibility. It is therefore logical to assume that IT should accept responsibility for the adequacy of that design, and thus have test responsibility. Unfortunately, this logic shifts responsibility from the user to information services. The danger is that the system may become information services’ system, as opposed to the user’s system. When the user is involved in establishing test criteria, the ultimate responsibility for the application is more clearly established.

The design phase provides the opportunity to test the structure (both internal and external) of the software application. The greater the assurance of the project team that the structure is sound and efficient, the higher the probability that the project will succeed.

Current test tools permit the structure to be tested in both a static and a dynamic mode. Through modeling and simulation, the structure can be represented on the computer so that its performance characteristics can be analyzed. However, the testing concepts must be developed hand in hand with the design process to gain maximum test advantages. Static testing of the adequacy of the design has proved to be effective.

The design phase can be viewed as a funnel that takes the broad system requirements at the wide end of the funnel and narrows them down through a design process to very detailed specifications. This is a creative phase of the life cycle. Along with this creativity is a concern that some important design aspects will be overlooked.

Understanding design phase concerns produces more effective testing. Testing can then be directed at specific concerns instead of attempting broad-based testing.

Scoring Success Factors

Scoring is a predictive tool that utilizes previous systems experience. Existing systems are analyzed to determine the attributes of those systems and their correlation to the success or failure of that particular application. When attributes that correlate to success or failure can be identified, they can be used to predict the behavior of systems under development.

Attributes of an effective scoring tool are as follows:

  • Sampling. The criteria represent a sample of all the criteria involved in the implementation of an automated application system; the sampling criteria are not meant to be complete.

  • High positive correlation. The criteria picked will have shown a high positive correlation in the past with either success or failure of an automated application. These criteria should not be judgmental or intuitive, but rather, those criteria for which it can be demonstrated that the absence or presence of that attribute has shown a high correlation to the outcome of the project.

  • Ease of use. To be effective, the process of scoring must be simple. People will use an easy predictive concept, but will be hesitant to invest significant amounts of time and effort.

  • Develop risk score. The score for each attribute should be determined in a measurable format so that a total risk score can be developed for each application. This will indicate the degree of risk, the area of risk, and a comparison of risk among application systems.

The scoring test tool is prepared for use in evaluating all applications. The tool should be general in nature so that it will apply to diverse applications, because the degree of risk must be compared against a departmental norm.

The scoring tool can be used in one of the following two ways under the direction of the test team:

  1. Project leader assessment. The application project leader can be given the scoring mechanism and asked to rate the degree of risk for each of the attributes for his or her project. The project leader need not know the importance of any of the attributes in a risk score, but only needs to measure the degree of project risk based on his or her in-depth knowledge of the project.

  2. Test team assessment. A member of the test team can be assigned the responsibility to develop the risk score. If the test team has worked on the project from the beginning, that person may be knowledgeable enough to complete the scoring instrument. However, if the test team member lacks knowledge, investigation may be needed to gather sufficient evidence to score the project.

At the conclusion of the scoring process, the result can be used in any of the following ways:

  • Estimate extent of testing. The higher the risk, the more testing that management may desire. Knowing that an application is high risk alerts management to the need to take those steps necessary to reduce that risk to an acceptable level.

  • Identify areas of test. Depending on the sophistication of the scoring instrument, specific areas may be identified for testing. For example, if computer logic is shown to be high risk, testing should thoroughly evaluate the correctness of that processing.

  • Identify composition of test team. The types of risks associated with the application system help determine the composition of the test team. For example, if the risks deal more with technology than with logic, the test team should include individuals thoroughly knowledgeable in that technology.

A scoring instrument for application systems is presented in Work Paper 9-3 at the end of the chapter. This scoring instrument develops a computer application system profile on many different system characteristics/attributes. The user is then asked to determine whether the system being reviewed is high, medium, or low risk for the identified characteristic. For example, the first characteristic deals with the importance of the function being computerized. If that function is important to several organizational units, it is a high-risk application. If the requirements are only of limited significance to cooperating units, the risk drops to medium; if there are no significant conflicting needs and the application is primarily for one organizational unit, the risk is low. The person doing the assessment circles the appropriate indicator. At the conclusion, a score can be developed indicating the number of high-risk, medium-risk, and low-risk indicators.

Table 9-3. Computer Applications Risk Scoring Form[1]

SIGNIFICANT CHARACTERISTICS

INDICATIVE OF HIGH RISK

INDICATIVE OF MEDIUM RISK

INDICATIVE OF LOW RISK

COMMENTS

System Scope and Complexity

Organizational breadth

a)

Important functions

Must meet important conflicting needs of several organizational units.

Meets limited conflicting requirements of cooperative organizational units.

No significant conflicting needs, serves primarily one organizational unit.

 

b)

Unrelated organizational units deeply involved

Dependent upon data flowing from many organizational units not under unified direction.

Dependent upon data from a few organizational units with a common interest, if not unified control.

Virtually all input data comes from a small group of sections under unified control.

 

Information services breadth

a)

Number of transaction types

More than 25

6 to 25

Fewer than 6

 

b)

Number of related record segments

More than 6

4 to 6

Fewer than 4

 

c)

Output reports

More than 20

10 to 20

Fewer than 10

 

Margin of error

a)

Necessity for everything to work perfectly, for “split-second timing,” for great cooperation (perhaps including external parties), etc.

Very demanding

Realistically demanding

Comfortable margin

 

Technical complexity

a)

Number of programs including sort/merge

More than 35

20 to 35

Fewer than 20

 

b)

Programming approach (number of module/functions interacting within an update/file maintenance program)

More than 20

10 to 20

Fewer than 10

 

c)

Size of largest program

More than 60K

25K to 60K

Less than 25K

 

d)

Adaptability of program to change

Low, due to monolithic program design.

Can support problems with adequate talent and effort.

Relatively high; program straightforward, modular, roomy, relatively unpatched, well documented, etc.

 

e)

Relationship to equipment in use

Pushes equipment capacity near limits.

Within capacities.

Substantial unused capacity.

 

f)

Reliance on online data entry, automatic document reading, or other advanced techniques

Heavy, including direct entry of transactions and other changes into the master files.

Remote-batch processing under remote operations control.

None or limited to file inquiry.

 

Pioneering aspects

Extent to which the system applies new, difficult, and unproven techniques on a broad scale or in a new situation, thus placing great demands on the non-IS departments, systems and programming groups, IS operations personnel, customers, or vendors, etc.

More than a few relatively untried equipment or system software components or system techniques or objectives, at least one of which is crucial.

Few untried systems components and their functions are moderately important; few, if any, pioneering system objectives and techniques.

No untried system components; no pioneering system objectives or techniques.

 

System stability

a)

Age of system (since inception or last big change)

Less than 1 year

1 to 2 years

Over 2 years

 

b)

Frequency of significant change

More than 4 per year

2 to 4 per year

Fewer than 2 per year

 

c)

Extent of total change in last year

Affecting more than 25% of programs.

Affecting 10 to 25% of programs.

Affecting less than 10% of programs.

 

d)

User approval of specifications

Cursory, essentially uninformed.

Reasonably informed as to general but not detailed specifications; approval apt to be informal.

Formal, written approval, based on informed judgment and written, reasonably precise specifications.

 

Satisfaction of user requirements

a)

Completeness

Incomplete, significant number of items not processed in proper period.

Occasional problems but normally no great difficulties.

No significant data omitted or processed in wrong period.

 

b)

Accuracy

Considerable error problem, with items in suspense or improperly handled.

Occasional problems but normally not great difficulties.

Errors not numerous or of consequence.

 

c)

Promptness in terms of needs

Reports and documents delayed so as to be almost useless; forced to rely on informal records.

Reports and documents not always available when desired; present timetable inconvenient but tolerable.

Reports and documents produced soon enough to meet operational needs.

 

d)

Accessibility of details (to answer inquiries, review for reasonableness, make corrections, etc.)

Great difficulty in obtaining details of transactions or balances except with much delay.

Complete details available monthly; in interim, details available with some difficulty and delay.

Details readily available.

 

e)

Reference to source documents (audit trail)

Great difficulty in locating documents promptly.

Audit trail excellent; some problems with filing and storage.

Audit trail excellent; filing and storage good.

 

f)

Conformity with established system specifications

Actual procedures and operations differ in important respects.

Limited tests indicate that actual procedures and operations differ in only minor respects and operations produce desired results.

Limited tests indicate actual procedures and operations produce desired results.

 

Source data origin and approval

a)

People, procedures, knowledge, discipline, division of duties, etc. in departments that originate and/or approve data

Situation leaves much to be desired.

Situation satisfactory, but could stand some improvement.

Situation satisfactory.

 

b)

Data control procedures outside the information services organization

None or relatively ineffective; e.g., use of noncritical fields, loose liaison with IT department, little concern with rejected items.

Control procedures based on noncritical fields; reasonably effective liaison with IT department.

Control procedures include critical fields; good tie-in with IT department; especially good on rejected items.

 

c)

Error rate

Over 7% of transactions rejected after leaving source data department.

4–7% of transactions rejected after leaving source data department.

Less than 4% of transactions rejected after leaving source data department.

 

d)

Error backlog

Many 30-day-old items.

Mostly 10–15-day-old items.

Items primarily less than 7 days old.

 

Input data control (within IT department)

a)

Relationship with external controls

Loose liaison with external control units; little concern with rejected items; batch totals not part of input procedures; only use controls like item counts; no control totals of any kind.

Reasonably effective liaison with external data control units; good control over new items, but less satisfactory control over rejected items; batch totals received, but generated by computer.

Good tie-in with external control units for both valid and rejected items; batch totals received as part of input process.

 

b)

Selection of critical control fields

Control based on noncritical fields.

Control based on a mixture of critical and noncritical fields, with effective supplementary checks.

Control established on critical fields.

 

c)

Controls over key transcription

Control based on batch totals.

Control based on transmittal sheets; batch totals and key verification of critical fields not batch controlled.

Control based on transmittal sheets; batch totals maintained on data logs; key verification of all critical fields; written “sign-off” procedures.

 

Data validation

a)

Edit tests

Alphanumeric tests.

Range and alphanumeric tests.

Range, alphanumeric, and check-digit tests.

 

b)

Sophistication

Simple, based on edit of one field at a time.

Simple editing, plus some editing based on the interrelationship of two fields.

Simple editing, plus extensive edit tests based on the interrelationship of two or more fields.

 

c)

Application to critical data

A considerable amount of critical data is not edited.

A few critical fields are edited only indirectly.

Editing performed on critical fields.

 

d)

Error balancing, retrieval, and correction procedures

Error rejected by system and eliminated from controls; treated as new items when reintroduced.

Number and value of rejected items carried in suspense account without electronically maintained details.

Error carried in suspense account in total and in detail until removed by correction.

 

Computer processing control procedure

a)

Controls within machine room

Informal operating instructions.

Written operating procedures.

Operations are based on a schedule and use up-to-date instructions.

 

b)

Manual and electronic safeguards against incorrect processing of files

Tape library controls by serial number; no programmed checks.

Tape library controls by serial number; programmed checks applied to file identification.

Programmed label check applied to serial number, expiration date, and file identification.

 

c)

Recording of run-to-run debit, credit, and balance totals for both transaction processing and master field records

Run-to-run totals not used.

Run-to-run totals printed and compared manually.

Run-to-run totals printed and compared by program.

 

d)

Documentation status

Poor or no standards; uneven adherence; not part of system and program development.

Adequate practices not uniformly adhered to; documentation done “after the fact.”

Excellent standards closely adhered to and carried out as part of system and program development.

 

e)

System test practices

Some transaction paths not tested.

Each transaction path tested individually.

Each transaction path tested in combination with all other transactions.

 

Output control

a)

Quantitative controls

Virtually nonexistent.

Hard to tie back meaningfully to input controls.

Tied back to input controls.

 

b)

Qualitative controls

Documents and reports accepted virtually without review.

Documents and reports receive limited review.

Documents and reports tested in detail, in addition to receiving a “common sense” review of reasonable data limits.

 

c)

Distribution controls

No routine report distribution procedures.

Routine procedures for distribution limited to list of users and frequency of report delivery.

Written procedures requiring that control log indicate receipt by user, time of accounting for each copy, etc.

 

Online processing controls

a)

Data transmission controls, including error detection, error recovery, and data security

The front-end control program does not validate operator identification codes or message sequence number, and does not send acknowledgment to origin.

The front-end control program checks terminal and operator identification codes and message sequence number, sends acknowledgment to origin, and provides a transaction log.

The front-end control program validates terminal/operator identification codes plus transaction authorization codes and message sequence number and count, corrects errors, sends acknowledgment to origin, and provides log of transactions plus copies of updated master file records.

 

b)

Data validation controls, including error detection and correction

Neither the front-end control nor the application processing program checks for authorization approval codes; no check digits are used with identification keys; little use of extensive data relationship tests; erroneous transactions are rejected without analysis or suspense entry.

The application program checks approval codes for key transaction types only, but check digits are not used with identification keys; extensive data relationship tests are used; erroneous transactions are sent back to the terminal with a note, but no suspense entry is made.

The application program validates approval codes for all transactions, and check digits are used with identification keys; data relationship tests are used extensively; erroneous transactions are noted in error suspense file when sent back to terminal with note.

 

c)

Information services controls, including error detection, transaction processing, master file processing, and file recovery provisions

Application program produces a total number of transactions processed; no master file processing controls; file recovery provisions limited to periodic copy of master file.

Application program produces a summary record of all debit and credit transactions processed; no master file processing controls; file recovery provisions limited to transaction log and periodic copy of master file.

Stored validation range values are used to validate transaction fields; application program summarizes all transactions processed by type, with credit and debit values for each terminal, and uses a master file control trailer record that is balanced by program routine; end-of-processing file recovery provisions include transaction log of active master file records.

 

[1] Risk scoring method developed by the General Accounting Office.

A risk score is computed by totaling the number of characteristics rated high, medium, and low, respectively, and then multiplying each of these totals by its risk factor (high = 3, medium = 2, low = 1). The three resulting numbers are then added together to arrive at a total risk score, which you can use to compare application systems against a norm. Another way to use the information is to divide the total score by the total number of risk characteristics to obtain a score between one and three. The closer the score is to three, the higher the risk; conversely, the lower the score, the lower the risk.
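
This computation is simple enough to mechanize. The sketch below implements the weighting rule just described; the counts in the example are made-up values, not figures from the scoring form.

  # Weighting rule from the text: high = 3, medium = 2, low = 1.
  def risk_score(high: int, medium: int, low: int) -> tuple[int, float]:
      total = high * 3 + medium * 2 + low * 1
      rated = high + medium + low
      normalized = total / rated if rated else 0.0   # falls between 1.0 and 3.0
      return total, normalized

  total, normalized = risk_score(high=8, medium=15, low=12)   # example counts
  print(total)                  # 8*3 + 15*2 + 12*1 = 66
  print(round(normalized, 2))   # 66 / 35 = 1.89, closer to 1 than 3: lower risk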

Analyzing Test Factors

Work Paper 9-4 contains a test process for each of the design phase test factors. The person conducting the test can select the concerns of interest and use the appropriate test programs, keeping in mind the following general objectives for the design phase:

  • Develop a solution to the business problem.

  • Determine the role of the computer in solving the business problem.

  • Develop specifications for the manual and automated segments of the system.

  • Comply with policies, procedures, standards, and regulations.

  • Define controls that will reduce application risks to an acceptable level.

  • Complete the project within budgetary, staffing, and scheduling constraints.

Table 9-4. Design Phase Test Process

TEST FACTOR: Data Integrity Controls Designed

TEST CRITERIA

ASSESSMENT

RECOMMENDED TEST

TEST TECHNIQUE

TEST TOOL

Very Adequate

Adequate

Inadequate

N/A

1.

Are controls established over accuracy and completeness during the transaction origination process?

    

Review the adequacy of the transaction origination accuracy and completeness control.

Control

Risk matrix & Checklist

2.

Are input transactions controlled, such as through a sequential input number, to ensure that all transactions are entered?

    

Review the adequacy of the input controls to ensure that all input is entered.

Control

Risk matrix & Checklist

3.

Are communication controls established to ensure the accurate and complete transmission of data?

    

Review the adequacy of transmission accuracy and completeness controls.

Control

Risk matrix & Checklist

4.

For key entry transactions, such as cash receipts, are batch control totals prepared?

    

Verify the adequacy of the batch control total procedures.

Requirements

Control flow analysis

5.

For key entry input transactions, such as purchase orders, are batch numbers prepared to ensure that batches of input are not lost?

    

Verify the adequacy of the batch numbering procedures.

Requirements

Control flow analysis

6.

Are check digits or equivalent controls used on key control fields, such as product number, to ensure the accurate entry of product number?

    

Verify that key fields use procedures that ensure the accurate entry of that information.

Requirements

Error guessing & Design based functional testing

7.

Is each field subject to extensive data validation checks?

    

Examine the type and scope of data validation checks for each key field to determine that they are adequate.

Error handling

Acceptance test criteria, Error guessing, Checklist, & Data dictionary

8.

Are input numbers, batch numbers, and batch totals verified by the data validation programs to ensure the accurate and complete input of transactions?

    

Verify that the controls established at the time of manual input preparation are verified by the computer program.

Control

Inspections

TEST FACTOR: Authorization Rules Designed

TEST CRITERIA

ASSESSMENT

RECOMMENDED TEST

TEST TECHNIQUE

TEST TOOL

Very Adequate

Adequate

Inadequate

N/A

1.

Has the method for authorizing each transaction been documented?

    

Review the documentation to ensure authorization rules are complete.

Security

Checklist & Inspections

2.

For those documents whose authorization is dependent upon the source of origination as opposed to a signature, can that source of origination be verified by the application system?

    

Determine that transactions whose entry itself indicates authorization can originate only from the properly authorized source.

Security

Checklist, Error guessing, & Inspections

3.

In a multiuser system, has responsibility for authorization been assigned to a single individual?

    

Determine the adequacy of the assigned authorization responsibilities in a multiuser system.

Control

Inspections & Fact finding

4.

Is the authorization method consistent with the value of the resources being authorized?

    

Review the reasonableness of the authorization method in relationship to the resources being controlled.

Requirements

Cause-effect graphing, Walkthroughs, & Scoring

5.

If passwords are used for authorization, are procedures adequate to protect passwords?

    

Review the adequacy of the password protection procedures.

Control

Error guessing

6.

If passwords are used, will they be changed at reasonable frequencies?

    

Determine the reasonableness of the frequency for changing passwords.

Control

Error guessing

7.

Are the authorization rules verified by the automated segment of the application?

    

Examine the documentation for verifying authorization rules.

Security

Checklist, Risk matrix, & Inspections

8.

Are procedures established to report authorization violations to management?

    

Examine the reasonableness of the procedure to report authorization violations to management.

Control

Error guessing & Inspections

TEST FACTOR: File Integrity Controls Designed

TEST CRITERIA

ASSESSMENT

RECOMMENDED TEST

TEST TECHNIQUE

TEST TOOL

Very Adequate

Adequate

Inadequate

N/A

1.

Have the fields been identified that will be used to verify file integrity?

    

Confirm with users that there are sufficient file integrity checks based upon the importance of data.

Control

Error guessing & Confirmation/examination

2.

Are procedures established to verify the integrity of key files?

    

Examine the documentation indicating the file integrity verification procedures to determine they are adequate.

Requirements

Inspections

3.

Are procedures established to verify the integrity of files on a regular basis?

    

Confirm with the user that the file integrity verification frequency is adequate to protect the integrity of the file.

Requirements

Confirmation/examination

4.

Are procedures established to report file integrity variances to management?

    

Examine the specifications and procedures for reporting file integrity variances to management.

Control

Inspections

5.

For key files, such as cash receipts, have procedures been established to maintain independent control totals?

    

Verify for key files that independent control total procedures are adequate.

Control

Checkpoint & Inspections

6.

Have procedures been established to reconcile independent control totals to the totals produced by the automated segment?

    

Verify the adequacy of the reconciliation procedures.

Control

Cause-effect graphing, Checklist, & Desk checking

7.

Will the independent control totals be reconciled regularly to the automated control totals?

    

Confirm with the user that the frequency of independent reconciliation is adequate.

Requirements

Confirmation/examination

8.

Are simple accounting proofs performed regularly to ensure that the updating procedures are properly performed?

    

Review the adequacy of the methods to ensure that updating is performed correctly.

Error handling

Boundary value analysis & Desk checking

TEST FACTOR: Audit Trail Designed

TEST CRITERIA

ASSESSMENT

RECOMMENDED TEST

TEST TECHNIQUE

TEST TOOL

Very Adequate

Adequate

Inadequate

N/A

1.

Have the detailed specifications been documented for each audit trail objective?

    

Review the completeness of the documentation in relationship to the audit trail objectives.

Requirements

Walkthroughs

2.

Have the data fields and records for each audit trail been defined?

    

Review the reasonableness of the included data fields to satisfy the audit trail objective.

Requirements

Walkthroughs

3.

Has the length of time to save each audit trail been defined?

    

Verify that the length of time is consistent with the organization’s record retention policy.

Control

Confirmation/examination & Fact finding

4.

Have the instructions been defined for utilizing the audit trail?

    

Review the completeness of the specifications to instruct people in using the audit trail.

Requirements

Checklist & Data flow analysis

5.

Does the audit trail include both the manual and automated segments of the system?

    

Review the audit trail specifications to verify that both the manual and automated segments are included.

Requirements

Flowchart & Tracing

6.

Is the audit trail stored in a sequence and format that make retrieval and use easy?

    

Confirm with audit trail users that the form and sequence are consistent with the use they would make of the audit trail.

Requirements

Confirmation/examination & Fact finding

7.

Will sufficient generations of the audit trail be stored away from the primary site so that if the primary site is destroyed processing can be reconstructed?

    

Examine the adequacy of the off-site facility.

Requirements

Inspections

8.

Have procedures been established to delete audit trails in the prescribed manner at the completion of their usefulness?

    

Assess the adequacy of the audit trail destruction procedures.

Requirements

Checklist & Error guessing

TEST FACTOR: Contingency Plan Designed

TEST CRITERIA

ASSESSMENT

RECOMMENDED TEST

TEST TECHNIQUE

TEST TOOL

Very Adequate

Adequate

Inadequate

N/A

1.

Has responsibility for the preparation of a contingency plan been assigned?

    

Verify that the assigned individual has sufficient skill and time to prepare a contingency plan.

Operations

Fact finding

2.

Does the contingency plan define all of the causes of failure?

    

Confirm with the computer operations manager that the list of potential failures is complete.

Operations

Error guessing & Confirmation/examination

3.

Does the contingency plan define responsibilities during the contingency period?

    

Review the completeness of the assigned responsibilities.

Operations

Checklist

4.

Does the contingency plan identify contingency resources?

    

Confirm with the computer operations manager that the assigned resources will be available in the event of a failure.

Operations

Confirmation/examination

5.

Does the contingency plan predetermine the operating priorities after a problem?

    

Confirm with a member of executive management that the recovery priorities are reasonable.

Recovery

Confirmation/examination

6.

Are all the parties who would be involved in a failure included in the development of the contingency plan?

    

Review the list of contingency plan participants for completeness.

Recovery

Checklist

7.

Are procedures established to test the contingency plan?

    

Review the adequacy of the contingency plan test procedures.

Recovery

Checklist & Disaster test

8.

Will the contingency plan be developed by the time the application goes operational?

    

Review the schedule for developing the contingency plan to ensure it will be complete when the system goes operational.

Recovery

Inspections

TEST FACTOR: Method to Achieve Service Level Designed

TEST CRITERIA

ASSESSMENT

RECOMMENDED TEST

TEST TECHNIQUE

TEST TOOL

Very Adequate

Adequate

Inadequate

N/A

1.

Can the system design achieve the desired service level?

    

Either confirm the reasonableness with computer operations personnel or run a simulation of the system to verify service levels.

Execution

Confirmation/examination & Modeling

2.

Do peak period volumes impact upon the desired service level?

    

Develop a simulation to test service levels based upon maximum processed volumes.

Execution

Modeling

3.

Can user personnel manually handle their part of the processing during peak-volume periods?

    

Develop a model to demonstrate the amount of time required to perform the manual part of processing.

Execution

Modeling

4.

Will expected errors impact upon service levels?

    

Determine the expected number of errors and include that in the system simulation.

Execution

Checklist, Error guessing, Inspections, & Modeling

5.

Has the cost of failing to achieve service levels been determined?

    

Confirm with users that the cost of failure to meet service levels has been calculated.

Execution

Confirmation/examination

6.

Are desired and projected service levels recalculated as the system is changed?

    

Examine the requests for system changes and determine their impact on the service level.

Execution

Inspections & Modeling

7.

Are procedures established to monitor the desired service level?

    

Review the adequacy of the monitoring procedure.

Execution

Checklist & Inspections

8.

Will sufficient computer resources be installed to meet the service levels as the volumes increase?

    

Confirm with the computer operations manager that computer resources will be increased in proportion to increased volumes of data.

Operations

Confirmation/examination & Fact finding

TEST FACTOR: Access Procedures Designed

TEST CRITERIA

ASSESSMENT

RECOMMENDED TEST

TEST TECHNIQUE

TEST TOOL

Very Adequate

Adequate

Inadequate

N/A

1.

Have advanced security techniques such as cryptography been considered?

    

Confirm with the individual responsible for data security that advanced security measures have been considered and implemented where necessary.

Security

Confirmation/examination

2.

Have operating software features been evaluated for security purposes and implemented where necessary?

    

Confirm with system programmers that a systematic process was used to evaluate systems software features needed for security.

Security

Risk matrix & Confirmation/examination

3.

Have procedures been designed to protect the issuance and maintenance of passwords?

    

Confirm with the data security officer the adequacy of password protection procedures.

Security

Risk matrix & Confirmation/examination

4.

Are procedures defined to monitor security violations?

    

Review the adequacy of the procedures to monitor security violations.

Control

Checklist & Fact finding

5.

Does senior management intend to prosecute security violators?

    

Confirm with senior management their intent to monitor security and prosecute violators.

Control

Confirmation/examination

6.

Have the security needs of each application resource been defined?

    

Review the completeness and adequacy of the security for each application resource.

Control

Risk matrix & Scoring

7.

Has one individual been assigned the responsibility for security of the application?

    

Confirm that the individual appointed has sufficient skill and time to monitor security.

Security

Checklist & Confirmation/examination

8.

Is the system designed to protect sensitive data?

    

Confirm with the user the completeness of the design to protect sensitive data.

Security

Cause-effect graphing, Correctness proof, Inspections, & Scoring

TEST FACTOR: Design Complies with Methodology

TEST CRITERIA

ASSESSMENT

RECOMMENDED TEST

TEST TECHNIQUE

TEST TOOL

Very Adequate

Adequate

Inadequate

N/A

1.

Have the appropriate methodology specifications been determined?

    

Confirm with the responsible party that the specifications are correct.

Compliance

Correctness proof, Error guessing, & Confirmation/examination

2.

Has the required level of compliance to the methodology been achieved?

    

Verify that the project complies with the methodology.

Compliance

Design reviews

3.

Will the standards, policies, etc. be monitored during implementation?

    

Confirm with the involved parties that they will monitor compliance to the methodology.

Compliance

Confirmation/examination & Fact finding

4.

Has the cost of compliance been determined so that it can be measured against the benefit, sanction, etc.?

    

Review with the involved parties the cost/benefit of compliance.

Compliance

Fact finding

5.

Are procedures established to substantiate compliance to the methodology?

    

Review the adequacy of the specified method of substantiating compliance.

Compliance

Fact finding

6.

Will the methodology be in use when the system becomes operational?

    

Confirm with IT management the applicability of using all or part of the methodology based on the application’s expected implementation date.

Compliance

Confirmation/examination

7.

Have deviations from the methodology been documented and approved?

    

Verify variances from the methodology are approved.

Compliance

Design reviews & Confirmation/examination

8.

Is design documentation adequate and in compliance with standards?

    

Verify the completeness and adequacy of design documentation.

Compliance

Design reviews

TEST FACTOR: Design Conforms to Requirements

TEST CRITERIA

ASSESSMENT

RECOMMENDED TEST

TEST TECHNIQUE

TEST TOOL

Very Adequate

Adequate

Inadequate

N/A

1.

Has the systems design group made changes to the application system without gaining user approval?

    

Examine all of the program change requests to verify they have been approved by the user.

Requirements

Confirmation/examination

2.

Is there a formal change request procedure that must be followed to make all system changes?

    

Examine the adequacy of, and compliance with, the program change procedure.

Control

Checklist & Inspections

3.

Are the objectives of the system reevaluated and changed where necessary based on each approved change request?

    

Determine the effect of the approved system changes on the objectives, and determine if the objectives have been changed accordingly.

Requirements

Inspections & Walkthroughs

4.

Does the user continually reevaluate the application system objectives in regard to changing business conditions?

    

Confirm with the user that the objectives are updated based on changing business conditions.

Requirements

Acceptance test criteria, Confirmation/examination, & Fact finding

5.

Are user personnel heavily involved in the design of the application system?

    

Confirm with the information services project personnel that the user is heavily involved in the system design.

Requirements

Confirmation/examination & Fact finding

6.

If user management changes, does the new management reconfirm the system objectives?

    

Confirm with the new user management that the application system objectives have been reconfirmed.

Requirements

Acceptance test criteria & Confirmation/examination

7.

If the objectives are changed, is the means of measuring those objectives changed accordingly?

    

Verify that the criteria to measure the objectives are reasonable.

Requirements

Acceptance test criteria, Cause-effect graphing, Design-based functional testing, Executable specs, & Symbolic execution

8.

Do the design specifications achieve the intent of the requirements?

    

Verify that the design specifications achieve the intent of the application requirements.

Requirements

Correctness proof, Data flow analysis, Design-based functional testing, Desk checking, Executable specs, & Symbolic execution

TEST FACTOR: Design Facilitates Use

 

TEST CRITERIA

ASSESSMENT

RECOMMENDED TEST

TEST TECHNIQUE

TEST TOOL

Very Adequate

Adequate

Inadequate

N/A

1.

Have the people tasks been defined?

    

Examine the manual processing documentation.

Manual support

Inspections

2.

Are the tasks realistic based on the skill level of the people?

    

Review the application system processing.

Manual support

Peer review

3.

Is the timing of the tasks realistic?

    

Calculate the adequacy of manual turnaround time.

Requirements

Modeling

4.

Will the information needed to do the people tasks be available?

    

Confirm with users the expected availability of needed information.

Requirements

Confirmation/examination

5.

Is the workload reasonable based on the expected staffing?

    

Estimate the time required to complete assigned tasks.

Requirements

Modeling

6.

Have the people involved been presented with their tasks for comment?

    

Confirm with the people involved that their tasks were presented to them for comment.

Manual support

Confirmation/examination

7.

Could some of the people tasks be better performed on the computer?

    

Review the application system processing.

Requirements

Cause-effect graphing & Error guessing

8.

Will adequate instruction manuals be prepared for these tasks?

    

Review the design specifications for preparation of instruction manuals.

Manual support

Checklist

TEST FACTOR: Design Is Maintainable

 

TEST CRITERIA

ASSESSMENT

RECOMMENDED TEST

TEST TECHNIQUE

TEST TOOL

Very Adequate

Adequate

Inadequate

N/A

1.

Is the system design logically constructed?

    

Review the application design structure.

Compliance

Peer review

2.

Are data attributes fully defined?

    

Examine the data documentation for completeness.

Compliance

Inspections

3.

Is computer logic presented in an easy-to-follow manner?

    

Review the application system logic.

Compliance

Peer review

4.

Are changes to the system incorporated into the design documentation?

    

Trace changes to the system specifications.

Compliance

Inspections

5.

Have areas of expected high frequency of change been designed to facilitate maintenance?

    

Review the maintainability of logic in areas of expected high change.

Compliance

Fact finding

6.

Are business functions designed using a standalone concept?

    

Review the application design structure.

Compliance

Inspections

7.

Is design documentation complete and usable?

    

Examine the design documentation for usability.

Compliance

Inspections

8.

Are maintenance specialists involved in the design process?

    

Confirm with maintenance specialists that they are involved in the design process.

Compliance

Confirmation/examination

TEST FACTOR: Design Is Portable

 

TEST CRITERIA

ASSESSMENT

RECOMMENDED TEST

TEST TECHNIQUE

TEST TOOL

Very Adequate

Adequate

Inadequate

N/A

1.

Does the design avoid specialized hardware features?

    

Review hardware specifications for special features.

Operations

Inspections

2.

Does the design avoid specialized software features?

    

Review software specifications for special features.

Operations

Inspections

3.

Will the system be coded in a common computer language?

    

Examine coding rules for the project.

Operations

Fact finding

4.

Will the system be restricted to common features of the language?

    

Examine coding rules for the project.

Operations

Fact finding

5.

Does the system avoid the use of specialized software packages?

    

Review software specifications for specialized software.

Operations

Inspections

6.

Are data values restricted to normal data structures?

    

Review data documentation for type of data structure used.

Operations

Inspections

7.

Does documentation avoid specialized jargon?

    

Review documentation for use of specialized jargon.

Operations

Inspections

8.

Have the portability implementation considerations been documented?

    

Review the adequacy of the portability documentation.

Operations

Inspections

TEST FACTOR: Interface Design Complete

 

TEST CRITERIA

ASSESSMENT

RECOMMENDED TEST

TEST TECHNIQUE

TEST TOOL

Very Adequate

Adequate

Inadequate

N/A

1.

Have the transactions to be received from other applications been defined?

    

Examine interfaced input data documentation.

Intersystems

Checklist

2.

Have the transactions going to other applications been defined?

    

Examine interfaced output data documentation.

Intersystems

Checklist

3.

Has the timing of interfaced transactions been defined?

    

Review system specifications for definition of timing.

Intersystems

Flowchart

4.

Is the timing of interfaced transactions realistic?

    

Confirm with interfaced application personnel that timing is reasonable.

Operations

Confirmation/examination

5.

Have the media for transferring data to interfaced applications been defined?

    

Review system specifications for documentation of media.

Operations

Inspections

6.

Are common data definitions used on interfaced data?

    

Compare common data definitions of interfaced applications.

Control

Fact finding

7.

Are common value attributes used on interfaced data?

    

Compare common value attributes of interfaced applications.

Control

Fact finding

8.

Has interface documentation been exchanged with interfaced applications?

    

Confirm with interfaced projects that documentation has been exchanged.

Intersystems

Confirmation/examination

TEST FACTOR: Design Achieves Criteria

 

TEST CRITERIA

ASSESSMENT

RECOMMENDED TEST

TEST TECHNIQUE

TEST TOOL

Very Adequate

Adequate

Inadequate

N/A

1.

Have the system development costs and acceptance criteria been recalculated based on the system design?

    

Confirm with the user that the new system costs and acceptance criteria are reasonable.

Execution

Acceptance test criteria & Confirmation/examination

2.

Have the criteria for developing the manual processing segments been confirmed?

    

Confirm with the user that the manual effort has been defined and the cost confirmed.

Execution

Acceptance test criteria & Confirmation/examination

3.

Has the cost of operating the computer programs been confirmed based on the systems design?

    

Confirm with computer operations that the operational costs are reasonable.

Execution

Acceptance test criteria & Confirmation/examination

4.

Have the costs to operate the manual segments of the system been confirmed?

    

Confirm with the user that the cost to operate the manual segments of the application is reasonable.

Execution

Acceptance test criteria & Confirmation/examination

5.

Have the benefits of the system been confirmed based upon the systems design?

    

Confirm with the user the reasonableness of the benefits.

Execution

Acceptance test criteria & Confirmation/examination

6.

Has the useful life of the system been confirmed based upon the systems design?

    

Confirm with the user the reasonableness of the expected life of the application.

Execution

Acceptance test criteria & Confirmation/examination

7.

Has the cost-effectiveness of the new system been recalculated if changes in the factors have occurred?

    

Confirm with the organization’s accountants that the cost is correct.

Execution

Confirmation/examination

8.

Does the cost-effectiveness after design warrant the continuance of the system?

    

Confirm with senior management that the system design is still cost-effective.

Execution

Confirmation/examination

TEST FACTOR: Needs Communicated to Operations

 

TEST CRITERIA

ASSESSMENT

RECOMMENDED TEST

TEST TECHNIQUE

TEST TOOL

Very Adequate

Adequate

Inadequate

N/A

1.

Have special hardware needs been defined?

    

Review specifications for special hardware needs.

Operations

Inspections

2.

Have special software needs been defined?

    

Review specifications for special software needs.

Operations

Inspections

3.

Have operations timing specifications been defined?

    

Review specifications for operations timing specifications.

Operations

Inspections

4.

Have system volumes been projected over an extended time period?

    

Confirm with users the reasonableness of projections.

Compliance

Confirmation/examination

5.

Have operations capacity requirements been specified?

    

Review specifications to determine whether the capacity requirements are reasonable.

Operations

Checklist

6.

Have computer test requirements been specified?

    

Examine test specifications for reasonableness.

Operations

Fact finding

7.

Have supplies/forms been specified?

    

Review specifications to verify that all supplies/forms have been identified.

Operations

Fact finding

8.

Has computer operations been notified of the anticipated workload and other requirements?

    

Confirm with computer operations their awareness of operation requirements.

Operations

Confirmation/examination

The concerns to be analyzed during the design phase are as follows:

  • Data integrity controls designed. Data integrity commences with risk identification, followed by management decisions on the acceptability of that risk, stated in terms of the amount of loss acceptable. The data integrity controls are then designed to these risk-tolerance specifications.

  • Authorization rules designed. Authorization in automated systems may be manual and/or automated. The procedures for manual authorization should be specified during the design phase. Automated authorization methods must be specified in more detail than manual procedures because they cannot rely on people to react to unexpected situations.

  • File integrity controls designed. File integrity is ensured by file identification methods, automated file controls, and independently maintained file integrity controls. The specifications for this three-part integrity process must be determined during the design phase.

  • Audit trail designed. The audit trail provides the capability to trace transactions from their origination to control totals, and to identify all the transactions substantiating a control total. In addition, the audit trail is used to substantiate individual transaction processing, and to recover the integrity of computer operations after it has been lost. Frequently, governmental agencies specify the types of information that need to be retained for audit trail purposes—this information must be defined during the design phase. The audit trail should be designed to achieve those purposes.

  • Contingency plan designed. The contingency plan outlines the actions to be performed in the event of problems. This plan includes the manual methods to be followed while the automated applications are not in operation, the backup and recovery procedures, as well as physical site considerations. Contingency plan specifications should be outlined during the design phase.

  • Method to achieve service level designed. The requirements phase defined the service levels to be achieved during the operation of the application. This concern deals primarily with the performance of the system and its ability to satisfy user needs on a timely basis.

  • Access procedures designed. Security in an automated system is achieved by predefining who can have access and for what purpose and then enforcing those access rules. A security profile indicates who can have access to what resources.

  • Design complies with methodology. The system design process should be performed and documented in accordance with IT methodology. Standardized design procedures ensure ease of understanding by all parties trained in that methodology, and at the same time help ensure the completeness of the design process. The purpose of the methodology is to develop better systems at a lower cost.

  • Design conforms to requirements. The system design is a translation of the user requirements into detailed system specifications. During any translation, misunderstandings or misinterpretations can occur. Steps need to be taken to ensure that the completed design achieves the objectives and intent of the defined requirements.

  • Design facilitates use. The final product must be used by people. The easier the system is to use, the more likely that the features will be utilized and the transactions processed correctly. The design must take into consideration the skill levels and job motivation of the people using the application system.

  • Design is maintainable. The cost of maintaining a computer application normally far exceeds the cost to develop it. Identifying those system aspects that are most likely to be changed and building those parts of the system for ease of maintenance is an important aspect of the design process. The system design needed for maintainability may change significantly depending on the expected frequency of change.

  • Design is portable. If the requirements indicate that the application system should be transferable from one piece of hardware to another or from one version of software to another, the design should incorporate those portability features. When future hardware and software are uncertain, the design should be generalized and should not attempt to take advantage of features or facilities of existing hardware and software.

  • Interface design is complete. The interface to other applications needs to be identified and the specifications for that interface designed. Interface specifications should also consider secondary uses of application information. Understanding these secondary uses may result in additional capabilities being included in the design.

  • Design achieves criteria. The cost/benefit study performed during the requirements phase may not supply a high-precision evaluation. During the design phase, the performance estimates can be stated more accurately so that a better prediction can be made as to whether the performance criteria can be achieved. A guideline used by one corporation is that the accuracy of estimating the achievement of the performance criteria at the end of the design phase should be within plus or minus 10 percent.

  • Needs communicated to operations. Operations needs to identify future processing requirements in order to prepare to handle those requirements when the system becomes operational. The larger the processing requirements, the greater the need to involve operations in evaluating design alternatives.
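
To make the file integrity and audit trail concerns above more concrete, the following is a minimal sketch, assuming a simple cash-receipts batch. The record layout, batch identifiers, function names, and figures are illustrative assumptions only and are not part of the design-phase work program.

from decimal import Decimal

# Hypothetical sketch: an audit trail whose records substantiate a batch
# control total, plus reconciliation of independently maintained control
# totals against the totals recomputed by the automated segment.
audit_trail = [
    {"id": "T001", "origin": "branch-01", "batch": "B17", "amount": Decimal("500.00")},
    {"id": "T002", "origin": "branch-02", "batch": "B17", "amount": Decimal("700.00")},
    {"id": "T003", "origin": "branch-01", "batch": "B17", "amount": Decimal("250.00")},
]

def automated_totals(trail, batch):
    """Control totals recomputed by the automated segment for one batch."""
    records = [t for t in trail if t["batch"] == batch]
    return {"record_count": len(records),
            "amount_total": sum(t["amount"] for t in records)}

def substantiating_transactions(trail, batch):
    """Trace a batch control total back to the transactions behind it."""
    return [t["id"] for t in trail if t["batch"] == batch]

# Independently maintained control totals (e.g., kept by the user department).
independent_totals = {"record_count": 3, "amount_total": Decimal("1450.00")}

computed = automated_totals(audit_trail, "B17")
variances = [name for name, value in independent_totals.items() if value != computed[name]]
print("Batch B17 substantiated by:", substantiating_transactions(audit_trail, "B17"))
print("Variances to report to management:", variances or "none")

Any variance reported by such a reconciliation is exactly the kind of file integrity variance that the work program expects to be reported to management.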

A detailed work program is provided for each of the 15 design phase test concerns. These work programs follow and outline the criteria to be assessed for each concern, together with the recommended test, test technique, and test tool to be used in evaluating each criterion. Note that the person conducting the test should use judgment regarding the extent of testing relative to the importance of the criteria to the application.

Conducting a Design Review

The design review is structured using the same basic information that formed the basis for scoring. However, in the case of the design review, the criteria are more specific. The objective is to pre-identify those attributes of design that correlate to system problems. The design review then investigates those attributes to determine whether they have been appropriately addressed by the project team.

The design review is conducted by a team knowledgeable in the design process. They are responsible for reviewing the application system for completeness and reasonableness. It is not necessary that the team be knowledgeable about the specific application, but they must be knowledgeable about the design methodology.

In conducting a design review, the team follows a predetermined review process. The design review is normally formal and highly structured in nature, in that the review team has predetermined investigations to make and has known start and stop points. The design review normally follows the design methodology. Team members attempt to determine that all the tasks have been properly performed. At the conclusion of the design review, the team normally issues a formal report indicating their findings and recommendations about the project.

The design review team may consist of the following members:

  • Project personnel. The project personnel can conduct their own design review. Typically, the individual on the project who is assigned review responsibility is not the same person who actually designed the system; however, the reviewer may have had partial design responsibility. This requires team members to accept different roles and responsibilities during the review process than they held during the design process. Because of the possible ties to the actual design of the system, the design review checklist normally serves as a valuable self-assessment tool for the reviewer(s).

  • Independent review team. The members of this review team are not members of the project being reviewed. They can be from other projects or quality-assurance groups, or they can be professional testers. This mode of operation provides a greater degree of independence in conducting the review in that there is no conflict of interest between the design and review roles. On the other hand, it is frequently difficult for peers to be critical of each other, especially in situations where a reviewer might eventually work for the person being reviewed.

These general guidelines should be followed when conducting a review:

  1. Select the review team. The members of the review team should be selected in advance of the review process.

  2. Train the review team members. The individuals who will be conducting the review should be trained in how to conduct the review. At a minimum, this means reviewing the checklist and explaining the objective and intent of each question. It is also advisable to train the people in the interpersonal relationships involved in conducting a review so that the review can be held in a non-threatening environment.

  3. Notify the project team. The project team should be notified several days in advance of when the review will occur and of the project team's responsibilities during the review. Obviously, if the project team conducts the review, this task is less important, but it is still necessary to formally schedule the review so that all members will be present.

  4. Allot adequate time. The review should be conducted in a formal, businesslike manner, as efficiently as possible, but should not be rushed. Sufficient time should be allocated to probe and investigate areas of concern. Even when the people who designed the system conduct the review, the interpersonal relationships and synergistic effect of a review can produce many positive effects if sufficient time is allocated to enable appropriate interaction.

  5. Document the review facts. All the factual findings of the review should be recorded. Normally, this can be done on the review checklist unless the comments are lengthy or supporting evidence is required. In any case, facts should be referenced to the specific checklist questions that uncovered them.

  6. Review the facts with the project team. The correctness of the facts should be substantiated with all the individuals involved, and the review should not proceed to recommendations until this is done. For important findings, it is better to do this at the end of the review than intermittently during the review process.

  7. Develop review recommendations. Based on the facts, the review team should offer their recommendations to correct any problem situation. These recommendations are an important part of the review process.

  8. Review recommendations with the project team. The project team should be the first to receive the recommendations and have an opportunity to accept, modify, or reject the recommendations.

  9. Prepare a formal report. A report documenting the findings, the recommendations, and the action taken or to be taken on the recommendations should be prepared. This report may or may not be sent to higher levels of management, depending on the review ground rules established by the organization. However, it is important to have a formal record of the review process, what it found, and the actions taken on recommendations.
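
Steps 5 and 9 call for review facts to be recorded with references to the specific checklist questions that uncovered them and then summarized in a formal report. The following is a minimal sketch of such a findings record, assuming a simple in-house recording tool; the field names, question numbers, and findings are illustrative assumptions only.

from dataclasses import dataclass

# Hypothetical sketch: recording design review findings keyed to checklist
# questions and summarizing them for the formal review report.
@dataclass
class ReviewFinding:
    checklist_question: int      # question number on the review checklist
    fact: str                    # factual finding substantiated with the project team
    recommendation: str          # review team's recommendation
    disposition: str = "open"    # accepted, modified, or rejected by the project team

findings = [
    ReviewFinding(20, "Page numbers are not shown on the output documents",
                  "Add page numbers to the output document headings"),
    ReviewFinding(41, "Controls for error correction and reentry have not been designed",
                  "Design error correction and reentry controls before program design begins"),
]

def formal_report(items):
    """Produce the formal record of findings, recommendations, and dispositions."""
    lines = ["DESIGN REVIEW REPORT"]
    for f in items:
        lines.append(f"Q{f.checklist_question}: {f.fact} -> {f.recommendation} [{f.disposition}]")
    return "\n".join(lines)

print(formal_report(findings))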

One or more reviews may occur during the design phase. The number of reviews will depend on the importance of the project and the time span of the design phase. A program for a two-point design phase review is shown in Work Papers 9-5 and 9-6. This provides for the first review at the end of the business system design (Work Paper 9-5), the part of the design in which it is determined how the business problem will be solved. The second review point occurs after the computer system design (Work Paper 9-6) is complete. Note that the questions in the two review checklists are taken from an actual organization's review process, and therefore may not be applicable to all organizations. Normally, the review process needs to be customized based on the design methodology, information services policies and procedures, and the criteria found to be causing problems in the organization.

Work Paper 9-5. Business System Design Review Checklist[2]

 

YES

NO

N/A

COMMENTS

Systems Overview

    

1.

Is there a brief description of interfaces with other systems?

    

2.

Is there an outline of the major functional requirements of the system?

    

3.

Are the major functions defined into discrete steps with no boundary overlapping?

    

4.

Have manual and automatic steps been defined?

    

5.

Has the definition of what data is required to perform each step been indicated along with a description of how the data is obtained?

    

System Description

    

6.

Has a system structure chart been developed, showing the logical breakdown into subsystems and interfaces with other systems?

    

7.

Have the major inputs and outputs been defined as well as the functional processing required to produce the output?

    

8.

Is there a narrative description of the major functions of the system?

    

9.

Have subsystem functional flow diagrams been developed showing the inputs, processing, and outputs relevant to the subsystem?

    

10.

Has subsystem narrative description been developed?

    

11.

Do the functional outlines follow the logical structure of the system?

    

12.

Are they hierarchical in nature—that is, by function and by steps within function?

    

Design Input and Output Data—Data Structure

    

13.

Has the data been grouped into logical categories (i.e., customer, product, accounting, marketing, sales, etc.)?

    

14.

Has the data been categorized as follows:

    
 

a) Static

b) Historical data likely to be changed

c) Transaction-related

    

15.

Have standard data names (if possible) been used?

    

16.

Has the hierarchical relationship among data elements been defined and described?

    

Design Output Documents

    

17.

Are there headings?

    

18.

Do the headings include report titles, department, date, page number, etc.?

    

19.

Are the output documents adaptable to current filing equipment?

    

20.

Are processing dates, system identification, titles, and page numbers shown?

    

21.

Has consideration been given to output devices?

    

22.

Is each data column identified?

    

23.

Where subtotals are produced (e.g., product within customer) are they labeled by control break?

    

Design Input Elements

    

24.

Are the data elements clearly indicated?

    

25.

Has the source of the data been defined (department and individual)?

    

26.

Have input requirements been documented?

    

27.

Is the purpose of the input document clear?

    

28.

Is the sequence indicated?

    

Design Computer Processing

    

29.

Has each function been described using functional terminology (e.g., if salary exceeds maximum, print message)?

    

30.

Has validity checking been defined with reference to the data element dictionary?

    

31.

In cases where the same data may be coming from several sources, have the sources been identified as to priorities for selection by the system?

    

32.

Has processing been classified according to type of function (e.g., transaction, calculation, editing, etc.)?

    

Design Noncomputer Processing

    

33.

Has the preparation of input been described?

    

34.

Has the distribution of output been described?

    

35.

Has an error correction procedure been described?

    

Organizational Controls

    

36.

Have organizational controls been established?

    

37.

Have controls been established across department lines?

    

38.

Have the control fields been designed?

    

39.

Are there control validation procedures prior to proceeding to the next step?

    

Overall System Controls

    

40.

Have controls been designed to reconcile data received by the computer center?

    

41.

Have controls for error correction and reentry been designed?

    

42.

Have controls been designed that can be reconciled to those of another system?

    

Input Controls

    

43.

Have some or all of the following criteria been used for establishing input controls?

    
 

a) Sequence numbering

b) Prepunched cards

c) Turnaround documents

d) Batch numbering

e) Input type

f) Predetermined totals

g) Self-checking numbers

h) Field length checks

i) Limit checks

j) Reasonability checks

k) Existence/nonexistence checks

    

44.

Do controls and totals exist for:

    
 

a) Each value column

b) Cross-foot totals

c) Counts of input transactions, errors, accepted transactions

d) Input transactions, old master, new master

    

45.

Are the results of all updates listed for each transaction showing the before and after condition?

    

46.

As the result of an update, are the number of adds, deletes, and changes processed shown?

    

47.

If relationship tests have been used, are they grouped and defined?

    

48.

Have control total records been utilized to verify that all records have been processed between runs?

    

Output Controls

    

49.

Have output controls been established for all control fields?

    

50.

Is there a separate output control on errors rejected by the system?

    

System Test Plan

    

51.

Have acceptance criteria been identified?

    

52.

Has a tentative user acceptance strategy been developed?

    

53.

Have test data requirements been defined?

    

54.

Have data element dictionary forms been completed?

    

55.

Have organizational changes been defined?

    

56.

Are new organizational charts or new positions required?

    

57.

If required, have areas for special user procedures been identified?

    

58.

Has a timetable for operating the system been developed?

    

59.

Were separate timetables developed for different cycles (weekly, monthly)?

    

60.

Has the documentation been gathered and organized?

    

61.

Has a financial analysis been performed?

    

Plan User Procedures—Conversion Design

    

62.

Have the scope, objectives, and constraints been developed?

    

63.

Has a plan for user procedures and conversion phases been completed?

    

64.

Has the plan been broken down into approximate work units (days) to serve as a basis for a schedule for the other phases?

    

65.

Have the resources and responsibilities been arranged?

    

66.

Have schedules been prepared for the next phases?

    

67.

Have appropriate budgets for the next phases been prepared?

    

68.

Has a project authorization been properly prepared for remaining phases?

    

[2] Based on a case study included in Effective Methods of EDP Quality Assurance.

Work Paper 9-6. Computer Systems Design Review Checklist[3]

 

YES

NO

N/A

COMMENTS

Develop Outline Design

    

1.

Has a detailed review of the business system design resulted in requiring additional information or changes?

    

2.

Have these revisions been reviewed by the user?

    

3.

Have existing sources of data been identified?

    

4.

Has a data management alternative been considered because of the nature of the system?

    

5.

Have the data elements been grouped by category?

    

6.

Have the record layout forms been used for listing the data elements?

    

7.

Has the file description form been used to show the characteristics of each file?

    

8.

Have the access methods been determined?

    

9.

Has use been made of blocking factors to reduce accesses for a sequential file?

    

10.

If a database has been used, has the relationship between segments (views of the database) been included?

    

11.

If new data elements have been required, have they been included as part of the data dictionary?

    

12.

Has the description of processing been translated into system flowcharts showing programs and their relationships, as well as reports?

    

13.

Has the processing been isolated by frequency as well as function?

    

14.

Does each file requiring updating have an associated, unique transaction file?

    

15.

Does each main file have a separate validation and update function?

    

16.

Have the following been addressed in order to reduce excessive passing of files:

    
 

a) Sort verbs (statements)

b) Input procedure

c) Output procedure

d) Random updating

    

17.

Has a matrix been prepared showing which programs create, access, and update each file?

    

18.

Has a separate section been set up for each program in the system showing:

    
 

a) Cover page showing the program name, systems and/or subsystem name, run number, and a brief description of the program

b) Input/output diagram

c) Processing description

    

19.

Does the processing description contain a brief outline of the processing that the program is going to perform?

    

20.

Has the content and format of each output been defined?

    

21.

Has the content and format of each input been defined?

    

22.

Have data items been verified against the rules specified in the data dictionary?

    

23.

Have transactions that update master files been assigned record types?

    

Hardware/Software Configuration

    

24.

Does the hardware configuration show the following:

    
 

a) CPU

b) Minimum core storage

c) Number and type of peripherals

d) Special hardware

e) Numbers of tapes and/or disk packs

f) Terminals, minicomputers, microfilm, microfiche, optical scanning, etc.

    

25.

Has the following software been defined:

    
 

a) Operating system

b) Telecommunications

c) Database management

    

26.

If telecommunications equipment is involved, has a communications analyst been consulted regarding type, number, speed, etc.?

    

File Conversion

    

27.

Have the file conversion requirements been specified?

    

28.

Have program specifications for the file conversion programs been completed?

    

29.

Can the main program(s) be utilized to perform the file conversion?

    

30.

Has a schedule been established?

    

Design System Tests

    

31.

Has the user’s role for testing been defined?

    

32.

Have responsibilities and schedules for preparing test data been agreed to by the user?

    

33.

Has the input medium been agreed to?

    

34.

Is special hardware/software required, and if so, will programmers and/or users require additional training?

    

35.

Have turnaround requirements been defined?

    

36.

Have testing priorities been established?

    

37.

If an online system, has an investigation of required space as opposed to available space been made?

    

38.

Has an analysis of the impact upon interfacing systems been made and have arrangements been made for acquiring required information and data?

    

39.

Have testing control procedures been established?

    

40.

Has the possibility of utilizing existing code been investigated?

    

41.

Has a system test plan been prepared?

    

42.

Has the user prepared the system test data as defined by the conditions to be tested in the system test plan?

    

43.

Has computer operations been consulted regarding keypunching and/or verification?

    

Revise and Complete Design

    

44.

Have all required forms from previous phases as well as previous task activities in this phase been completed?

    

45.

Has the processing description for program specifications been categorized by function?

    

46.

For validation routines, have the editing rules been specified for:

    
 

a) Field format and content (data element description)

b) Interfield relationships

c) Intrafield relationships

d) Interrecord relationships

e) Sequence

f) Duplicates

g) Control reconciliation

    

47.

Have the rejection criteria been indicated for each type of error situation, as follows:

    
 

a) Warning message but transaction is accepted

b) Use of the default value

c) Outright rejection of record within a transaction set

d) Rejection of an entire transaction

e) Rejection of a batch of transactions

f) Program abort

    

48.

Have the following validation techniques been included in the specifications:

    
 

a) Validation of entire transaction before any processing

b) Validation to continue regardless of the number of errors on the transaction unless a run abort occurs

c) Provide information regarding an error so the user can identify the source and determine the cause

    

49.

Has a procedure been developed for correction of rejected input either by deletion, reversal, or reentry?

    

50.

Do the specifications for each report (output) define:

    
 

a) The origin of each item, including the rules for the selection of optional items

b) The rules governing calculations

c) The rules for printing and/or print suppression

    

51.

Have the following been defined for each intermediate (work) file:

    
 

a) Origins or alternative origins for each element

b) Calculations

c) Rules governing record types, sequence, optional records, as well as inter- and intrarecord relationships

    

52.

Have the following audit controls been built in where applicable:

    
 

a) Record counts (in and out)

b) Editing of all source input

c) Hash totals on selected fields

d) Sequence checking of input files

e) Data checking

f) Listing of errors and review

g) Control records

    

Determine Tentative Operational Requirements

    

53.

Has the impact of the system upon existing computer resources been evaluated?

    

54.

Have the computer processing requirements been discussed with computer operations?

    

55.

Have backup procedures been developed?

    

Online Systems

    

56.

Have testing plans been discussed with computer operations to ensure that required resources (core, disk space) for “sessions” will be available?

    

57.

Have terminal types been discussed with appropriate technical support personnel?

    

58.

Have IMS considerations (if applicable) been coordinated with computer operations, technical support, and DBA representatives?

    

59.

Has a user training program been developed?

    

60.

Have run schedules been prepared to provide computer operations with the basic information necessary to schedule computer usage?

    

61.

Have run flowcharts including narrative (where required) been prepared?

    

62.

Have “first cut” estimates of region sizes, run times, etc. been provided on the flowcharts or in some other documentation?

    

63.

Have restart procedures been described for each step of the job?

    

64.

Have restart procedures been appended to the security and backup section of the documentation?

    

Plan Program Design

    

65.

Has all relevant documentation for each program been gathered?

    

66.

Has the sequence in which programs are to be developed been defined in accordance with the system test plan?

    

67.

Has the number of user and project personnel (including outside vendors) required been ascertained?

    

68.

Has computer time required for program testing (compiles, test runs) been estimated?

    

69.

Have data preparation requirements been discussed with computer operations regarding data entry?

    

70.

Has a development cost worksheet been prepared for the next phase or phases?

    

71.

Have personnel been assigned and project work schedules been prepared?

    

72.

Has the project schedule and budget been reviewed and updated?

    

Prepare Project Authorization

    

73.

Has a project authorization form been completed?

    

[3] Ibid.

Inspecting Design Deliverables

Inspection is a process by which completed but untested design products are evaluated to determine whether the specified changes were installed correctly. To accomplish this, inspectors examine the unchanged product, the change specifications, and the changed product to determine the outcome. They look for three types of defects: errors, meaning the change has not been made correctly; missing, meaning something that should have been changed was not changed; and extra, meaning something not intended was changed or added.

The inspection team reviews the product after each inspector has reviewed it individually. The team then reaches a consensus on the errors, missing, and extra defects. The author (the person implementing the project change) is given those defect descriptions so that the product can be changed prior to dynamic testing. After the changes are made, they are re-inspected to verify correctness; then dynamic testing can commence. The purpose of inspections is twofold: to conduct an examination by peers, which normally improves the quality of work because the synergy of a team is applied to the solution; and to remove defects.
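
To illustrate the three defect types, the following is a minimal sketch, assuming the change specification and the two product versions can each be represented as a simple name-to-value mapping. The function and data names are illustrative assumptions, not part of a prescribed inspection procedure.

# Hypothetical sketch: classifying inspection findings as errors, missing, or
# extra by comparing the change specification with the unchanged and changed
# products (each modeled as a {item_name: value} dictionary).
def classify_defects(unchanged, spec, changed):
    defects = {"errors": [], "missing": [], "extra": []}
    for item, intended in spec.items():
        actual = changed.get(item)
        if actual == unchanged.get(item):
            defects["missing"].append(item)   # specified change was never made
        elif actual != intended:
            defects["errors"].append(item)    # change made, but not correctly
    for item, actual in changed.items():
        if item not in spec and actual != unchanged.get(item):
            defects["extra"].append(item)     # unspecified change or addition
    return defects

unchanged = {"discount_rate": "5%", "report_title": "Sales", "region_code": "NE"}
spec      = {"discount_rate": "7%", "report_title": "Sales Summary"}
changed   = {"discount_rate": "7.5%", "report_title": "Sales", "region_code": "NW"}

print(classify_defects(unchanged, spec, changed))
# {'errors': ['discount_rate'], 'missing': ['report_title'], 'extra': ['region_code']}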

The following items can enhance the benefits of formal inspections:

  • Training. Use inspections to train new staff members in the department’s standards and procedures.

  • Product quality. Do not inspect obviously poor products; that is, the inspectors should not do the developers’ work. Developers should not submit a product for inspection if they are not satisfied with the quality of the product.

Work Paper 9-7 is a quality control checklist for this task.

Work Paper 9-7. Quality Control Checklist

 

YES

NO

N/A

COMMENTS

1.

Is the test team knowledgeable in the design process?

    

2.

Are the testers experienced in using design tools?

    

3.

Have the testers received all of the design phase deliverables needed to perform this test?

    

4.

Do the users agree that the design is realistic?

    

5.

Does the project team believe that the design is realistic?

    

6.

Have the testers identified the success factors, both positive and negative, that can affect the success of the design?

    

7.

Have the testers used those factors in scoring the probability of success?

    

8.

Do the testers understand the 15 design-related test factors?

    

9.

Have the testers analyzed those design test factors to evaluate their potential impact on the success of the design?

    

10.

Do the testers understand the design review process?

    

11.

Has a review team been established that represents all parties with a vested interest in the success of the design?

    

12.

Does management support using the design review process?

    

13.

Is the design review process conducted at an appropriate time?

    

14.

Were the items identified in the design review process reasonable?

    

15.

Does the project team agree that the identified items need to be addressed?

    

16.

Does management support performing inspections on project rework?

    

17.

Has appropriate time been allotted in the project scheduling for performing inspections?

    

18.

Have the individuals responsible for project rework been educated in the importance of participating in the inspection process?

    

19.

Does management view inspections as an integral part of the process rather than as an audit to identify participants’ performance?

    

20.

Has the inspection process been planned?

    

21.

Have the inspectors been identified and assigned their specific roles?

    

22.

Have the inspectors been trained to perform their role?

    

23.

Have the inspectors been given the necessary materials to perform the review?

    

24.

Have the inspectors been given adequate time to complete both the preparation and the review meeting portions of the inspection process?

    

25.

Did the individual inspectors adequately prepare for the inspection?

    

26.

Did the individual inspectors prepare a defect list?

    

27.

Was the inspection scheduled at a time convenient for all inspectors?

    

28.

Did all inspectors come to the inspection meeting?

    

29.

Did all inspectors agree on the final list of defects?

    

30.

Have the inspectors agreed upon one of the three acceptable inspection dispositions (i.e., certification, reexamination, or reinspection)?

    

31.

Were the defects identified during the review meeting recorded and given to the author?

    

32.

Has the author agreed to make the necessary corrections?

    

33.

Has a reasonable process been developed to determine that those defects have been corrected satisfactorily?

    

34.

Has a final moderator certification been issued for the product/deliverable inspected?

    

Task 3: Test During the Programming Phase

Building an information system (i.e., programming) is purely an IT-related function, with little need for user involvement, except where questions arise about design specifications and/or requirements.

Wherever possible, changes requested by users should be discouraged through more complete design reviews, or postponed until the system is placed into operation. If changes cannot be postponed, they should be implemented through the regular development process and (preferably) tested before changing the original program specifications.

The complexity of performing the programming phase depends on the thoroughness of the design phase and the tool used to generate code. Well-defined and measurable design specifications greatly simplify the programming task. On the other hand, the failure to make decisions during the early phases necessitates those decisions being made during the programming phase. Unfortunately, if not made earlier, these decisions may be made by the wrong individual—the programmer.

Testing during the programming phase may be static or dynamic. During most of this phase, programs are being specified, designed, and coded; the resultant code may not yet be executable and therefore may require different test tools. The efficiency gained from early testing is just as applicable to the programming phase as it is to other phases. For example, problems detected during program design can be corrected more economically than if they are detected while testing the executable program.

Note

The importance of testing programs will vary based on the means of code generation. The more automated code generation becomes, the less emphasis needs to be placed on programming phase testing. Because many organizations use a variety of methods for code generation, this verification task is designed to incorporate all the programming phase components needed. The user of this test process must adjust this task according to the method used to generate code.

The programming phase consists of three segments: The program specifications are written from the design specifications; a programmer converts the program specifications into machine-executable instructions; and then the programmer verifies that these instructions meet the program specifications.

The programming equivalent in home construction is the building of the house by masons, carpenters, plumbers, and electricians. These are the craftsmen who take the design specifications and materials and convert them into the desired product. However, just as aids are available to the programmer, aids are also available to the construction worker. For example, preconstructed roof trusses and other parts of the house can be purchased. The more pieces that can be produced automatically, the greater the probability of a successfully built home.

The programming phase in the construction of a system produces a large volume of deliverables. During this phase, the number of items to be tested increases significantly. Therefore, it becomes important to understand the deliverables, their risk, and which segments of the deliverables need to be tested.

The IT project leader should be responsible for testing during the programming phase. The primary objective of this testing is to ensure that the design specifications have been correctly implemented. Program testing is not concerned with achieving the user's needs, but rather with verifying that the developed structure satisfies the design specifications and works. Much of the testing will be conducted by the programmer. Testing at this point is highly technical, and it normally requires someone with programming experience. These tests should be completed prior to interconnecting the entire application and testing the application system.

This verification task describes a test process to use during programming. Desk debugging and peer reviews are recommended test methods for the programming phase; this relatively low-cost approach has proven effective in detecting problems and can be used at any point during the programming activity. The task includes a complete test program addressing all of the programming phase concerns, as follows:

  1. Program testing will consist exclusively of dynamic testing as opposed to including static testing. Static testing using techniques such as desk debugging and peer reviews is much more effective in uncovering defects than is dynamic testing. The concern is that the proper testing technique will not be used for the needed test objective.

  2. Program testing will be too costly. Programmers have a tendency to identify some defects, assume there are no more, correct those defects, and retest. This find-correct-retest cycle has proven to be a time-consuming and costly approach. Using static methods to remove defects and dynamic testing to verify functionality is a much more efficient method of program testing.

  3. Programs will be released for string, system, and acceptance testing before they are fully debugged. The shortest and most economical testing is to remove all the defects at one level of testing before moving to the next level. For example, it is much more economical to continue program testing to remove program defects than to identify those defects in string testing.

Desk Debugging the Program

Desk debugging enables the programmer to evaluate the completeness and correctness of the program prior to conducting more expensive testing. In addition, desk debugging can occur at any point in the programming process, including both program design and coding. Desk debugging can be as extensive or as minimal as desired. The amount of desk debugging performed will depend on the following:

  • Wait time until the next program deliverable is received

  • Implementation schedule

  • Testing resources

  • Efficiency of test tools

  • Departmental policy

Desk debugging can be syntactical, structural, or functional.

Syntactical Desk Debugging

Program specifications and program statements must be developed in accordance with departmental methodology and compiler requirements. The programmer can check the appropriate syntax of the documentation and statements to ensure they are written in compliance with the rules. Syntactical checking asks questions such as these:

  • Is the job identification correct?

  • Are program statements appropriately identified?

  • Are program statements constructed using the appropriate structure?

  • Are data elements properly identified?

  • Do the program statements use the proper data structure; for example, do mathematical instructions work on mathematical fields?

  • Are the data structures adequate to accommodate the data values that will be used in those structures?
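
Where programs are written in a compilable or parseable language, part of the syntactical check can be automated before any dynamic test is run. The sketch below is a minimal illustration for Python source only; the file path and the mixed-type arithmetic rule (used as a stand-in for a departmental standard) are assumptions, not prescribed checks.

import ast
import sys

def syntactical_check(path: str) -> list[str]:
    """Return a list of syntactical findings for one Python source file."""
    with open(path, encoding="utf-8") as handle:
        source = handle.read()
    try:
        tree = ast.parse(source, filename=path)
    except SyntaxError as exc:
        # Compiler-level syntax rules are violated; nothing further can be checked.
        return [f"{path}:{exc.lineno}: syntax error: {exc.msg}"]

    findings = []
    # Example departmental rule: mathematical instructions should operate on
    # mathematical fields, so flag arithmetic that mixes string and non-string
    # constants (an assumed stand-in for a real data-structure check).
    for node in ast.walk(tree):
        if isinstance(node, ast.BinOp) and isinstance(node.op, (ast.Add, ast.Sub)):
            left, right = node.left, node.right
            if (isinstance(left, ast.Constant) and isinstance(right, ast.Constant)
                    and isinstance(left.value, str) != isinstance(right.value, str)):
                findings.append(f"{path}:{node.lineno}: arithmetic mixes string and non-string operands")
    return findings

if __name__ == "__main__":
    for finding in syntactical_check(sys.argv[1]):
        print(finding)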

Structural Desk Debugging

Structural problems account for a significant number of defects in most application systems. These defects also mask functional defects so that their detection becomes more difficult. The types of questions to be asked during structural desk debugging include these:

  • Are all instructions entered?

  • Are all data definitions used in the instructions defined?

  • Are all defined data elements used?

  • Do all branches go to the correct routine entrance point?

  • Are all internal tables and other limits structured so that when the limit is exceeded processing can continue?
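
One of the structural questions above, whether all defined data elements are used, can be checked mechanically when the source is parseable. The following minimal sketch walks a Python module and reports names that are assigned but never read; the example module is an assumption, and a production check would also cover branch targets and table limits.

import ast

def unused_data_elements(source: str) -> set[str]:
    """Return names assigned somewhere in the module but never read."""
    tree = ast.parse(source)
    assigned, read = set(), set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Name):
            if isinstance(node.ctx, ast.Store):
                assigned.add(node.id)
            elif isinstance(node.ctx, ast.Load):
                read.add(node.id)
    return assigned - read

example_module = """
total = 0
unused_limit = 100          # defined but never referenced below
for amount in [10, 20, 30]:
    total = total + amount
print(total)
"""
print(unused_data_elements(example_module))   # {'unused_limit'}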

Functional Desk Debugging

The functions are the requirements that the program is to perform. The questions to be asked about the function when desk debugging include the following:

  • Will the program perform the specified function in the manner indicated?

  • Are any of the functions mutually exclusive?

  • Will the system detect inaccurate or unreasonable data?

  • Will functional data be accumulated properly from run to run?
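
Functional desk debugging is a manual review against the program specification, but the questions above can be captured as concrete checks while the code is on the desk. The sketch below is illustrative only; the payroll routine, its field names, and its limits are assumptions, not taken from any particular specification.

def gross_pay(hours: float, rate: float) -> float:
    """Specification (assumed): 0 <= hours <= 80, rate > 0; hours above 40 paid at 1.5x."""
    if not (0 <= hours <= 80) or rate <= 0:
        raise ValueError("inaccurate or unreasonable input")
    regular = min(hours, 40) * rate
    overtime = max(hours - 40, 0) * rate * 1.5
    return regular + overtime

# "Will the program perform the specified function in the manner indicated?"
assert gross_pay(40, 10.0) == 400.0
assert gross_pay(50, 10.0) == 400.0 + 10 * 10.0 * 1.5

# "Will the system detect inaccurate or unreasonable data?"
for bad_hours, bad_rate in [(-1, 10.0), (200, 10.0), (40, 0.0)]:
    try:
        gross_pay(bad_hours, bad_rate)
        raise AssertionError("unreasonable input was accepted")
    except ValueError:
        pass   # rejected, as the specification requires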

Performing Programming Phase Test Factor Analysis

The depth of testing in the programming phase depends on the adequacy of the system at the end of the design phase. The more confidence the test team has in the adequacy of the application at the end of the design phase, the less concern they will have during the programming phase. During requirements and design testing, the concerns over the test factors may change based on test results. In the programming phase, the test team should identify the concerns of most interest, and then develop the test process to address those concerns. In identifying these concerns, the test team must take into account changes that have occurred in the system specifications since the last test was conducted. The objectives that the test team members should continually consider when testing during the programming phase include the following:

  • Are the systems maintainable?

  • Have the system specifications been implemented properly?

  • Do the programs comply with information services standards and procedures as well as good practice?

  • Is there a sufficient test plan to evaluate the executable programs?

  • Are the programs adequately documented?

The test concerns to be considered during this subtask are as follows:

  • Data integrity controls implemented. Specific controls need to be implemented in a manner that will achieve the desired processing precision. Improperly implemented controls may not achieve the established control tolerances; and because the purpose of controls (risk reduction) is widely misunderstood, simplistic solutions may be implemented where complex controls are needed to achieve the control objectives (see the sketch following this list).

  • Authorization rules implemented. Authorization rules need to be implemented in a manner that makes it difficult to circumvent them. For example, when authorization limits are set, people should not be able to circumvent these limits by entering numerous items under the prescribed limit. Therefore, authorization rules must not only consider the enforcement of the rules, but also take into account the more common methods to circumvent those rules.

  • File integrity controls implemented. File integrity controls should be implemented in a manner that minimizes the probability of loss of file integrity, and they should both prevent the loss of integrity and detect that loss, should it occur, on a timely basis.

  • Audit trail implemented. The audit trail needs to be implemented in a manner that facilitates retrieval of audit trail information. If the audit trail contains needed information, but it is too costly or time-consuming to use, its value diminishes significantly. The implementation considerations include the amount of information retained, sequencing for ease of retrieval of that information, cross-referencing of information for retrieval purposes, as well as the length of time that the audit trail information needs to be retained.

  • Contingency plan written. The contingency plan is a set of detailed procedures in step-by-step format outlining those tasks to be executed in the event of problems. The plan should describe the preparatory tasks so that the necessary data and other resources are available when the contingency plan needs to be activated. The design contingency approach is of little value until it is documented and in the hands of the people who need to use it.

  • System to achieve service level designed. The desired service level can only become a reality when the procedures and methods are established. One procedure that should be set up is the monitoring of the level of service to ensure that it meets specifications. The inclusion of monitoring routines provides assurance over an extended period of time that service levels will be achieved, or if not, that fact will be detected early so corrective action can be taken.

  • Security procedures implemented. Security is the combination of employee awareness and training, plus the necessary security tools and techniques. The procedures ensuring that these two parts are available and working together must be developed during the programming phase.

  • Program complies with methodology. Procedures should be implemented that ensure compliance with developmental standards, policies, procedures, and methods. If noncompliance is detected, appropriate measures must be taken to either obtain a variance from the methodology or modify the system or design so that compliance is achieved. Although methodology does not necessarily satisfy user objectives, it is necessary to satisfy information services design objectives.

  • Program conforms to design.

    • Correctness. Changing conditions cause many information services project personnel to ignore project objectives during the program phase. The argument is that there are sufficient changes so that monitoring compliance to system objectives is meaningless. The test team should discourage this thinking and continually monitor the implementation of objectives. If objectives have not been met, either they should be changed or the system changed to bring it into compliance with the functional specifications of the application.

    • Ease of use. The implementation of system specs may negate some of the ease-of-use design aspects unless those aspects are specifically defined. Programming is a translation of design specifications and it may fail to achieve the ease-of-use intent. Programming must achieve this ease-of-use design spec as it does other functional specifications.

    • Portability. The portability of programs depends on the language selected and how that language is used. The specifications should indicate the do’s and don’ts of programming for portability, and the coding should conform to those design specifications. If portability is a major concern and the program specifications fail to define portability coding adequately, the programmer should make every effort to write in as straightforward a method as possible.

    • Coupling. The design specifications should indicate parameters passing to and from other application systems. It is normally good practice for the programmer to verify that the system’s specifications are up-to-date prior to coding intersystem functions. This ensures not only that the programs conform to the design, but that the specifications of interconnected applications have not changed since the design was documented.

    • Performance. The creation of the program provides the first operational opportunity for users to assess whether the system can achieve the desired performance level. At this point, the instructions to perform the requirements have been defined and can be evaluated. An early assessment of potential performance provides an opportunity to make performance adjustments if necessary.

  • Program is maintainable. The method of program design and coding may have a greater significance for maintainability than the design specifications themselves. The rules of maintainable code should be partially defined by departmental standards, and partially defined by system specifications. In addition, the programmer should use judgment and experience in developing highly maintainable code.

  • Operating procedures developed. Procedures should be developed during programming to operate the application system. During the next phase, the executable programs will be operated, and the necessary instructions should be developed prior to that phase of the SDLC. The operating procedures should be consistent with the application system operational requirements.
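
Two of the concerns above lend themselves to small mechanical checks: the run-to-run controls mentioned under data integrity, and the limit-splitting circumvention mentioned under authorization rules. The following minimal sketch illustrates both; the record layout, the clerk field, and the 5,000 limit are assumptions, not taken from any particular application.

from collections import defaultdict

transactions = [
    {"id": 1, "clerk": "A", "amount": 4900.00},
    {"id": 2, "clerk": "A", "amount": 4800.00},
    {"id": 3, "clerk": "B", "amount": 120.00},
]

# Data integrity: carry a record count and amount total from run to run and
# reconcile them at the start of the next processing step.
control_totals = {"count": len(transactions),
                  "amount": round(sum(t["amount"] for t in transactions), 2)}

def next_run(records, expected):
    if len(records) != expected["count"] or \
            round(sum(r["amount"] for r in records), 2) != expected["amount"]:
        raise RuntimeError("run-to-run control totals do not reconcile")
    return records

# Authorization: flag clerks whose individual entries stay under the limit but
# whose accumulated total exceeds it (the common circumvention method noted above).
LIMIT = 5000.00
by_clerk = defaultdict(float)
for t in next_run(transactions, control_totals):
    by_clerk[t["clerk"]] += t["amount"]
suspect = [clerk for clerk, total in by_clerk.items() if total > LIMIT]
print(suspect)   # ['A']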

A detailed test process is illustrated in Work Paper 9-8 for each of the 15 identified programming phase test concerns. The test process includes test criteria, recommended test processes, techniques, and tools. The team conducting the test is urged to use judgment in determining the extent of tests and the applicability of the recommended techniques and tools to the application being tested. Work Paper 9-9 is a quality control checklist for this task.

Work Paper 9-8. Programming Phase Test Process

For each test criterion below, record an assessment of Very Adequate, Adequate, Inadequate, or N/A. The recommended test, test technique, and test tools for each criterion are listed with it.

TEST FACTOR: Data Integrity Controls Implemented

  1. Have procedures been written indicating how to record transactions for entry into the automated system?
     Recommended test: Examine the usefulness of data error messages. Test technique: Manual support. Test tools: Correctness proof, Exhaustive testing, Flowchart.

  2. Have data validation checks been implemented to ensure that input complies with system specifications?
     Recommended test: Review the completeness of the data validation checks. Test technique: Requirements. Test tools: Compiler-based analysis, Data dictionary, Inspections.

  3. Have anticipation controls been installed, where appropriate, to ensure that valid, but unreasonable, data is noted for manual investigation?
     Recommended test: Examine the extensiveness of anticipation controls to identify potential problems. Test technique: Error handling. Test tools: Correctness proof, Error guessing, Inspections.

  4. Are errors properly identified and explained so that follow-up action can be readily conducted?
     Recommended test: Examine the completeness of the data entry procedures. Test technique: Error handling. Test tool: Exhaustive testing.

  5. Have procedures been established to take corrective action on data errors?
     Recommended test: Examine the reasonableness of the procedures to take corrective action on identified errors. Test technique: Error handling. Test tool: Cause-effect graphing.

  6. Are procedures established to ensure that errors are corrected on a timely basis?
     Recommended test: Verify that the procedures will ensure that errors are corrected on a timely basis. Test technique: Error handling. Test tools: Correctness proof, Flowchart.

  7. Are run-to-run controls installed to ensure the completeness and accuracy of transactions as they move from point to point in the system?
     Recommended test: Examine the reasonableness of the procedures that ensure accuracy and completeness of transactions as they flow through the system. Test technique: Requirements. Test tools: Control flow analysis, Data flow analysis.

  8. Have procedures been implemented to ensure that complete and accurate input is recorded?
     Recommended test: Verify the adequacy of the procedures to ensure that controls established during data origination are verified during processing. Test technique: Control. Test tools: Correctness proof, Exhaustive testing.

TEST FACTOR: Authorization Rules Implemented

  1. Have the authorization methods been divided between manual and automated?
     Recommended test: Evaluate the reasonableness of the authorization method selected. Test technique: Security. Test tool: Fact finding.

  2. Have procedures been prepared to specify the manual authorization process for each transaction?
     Recommended test: Review the adequacy of the manual authorization procedures. Test technique: Security. Test tool: Inspections.

  3. Have the methods been implemented for authorizing transactions in the automated segment of the system?
     Recommended test: Examine the program specifications to determine that the authorization method has been properly implemented. Test technique: Requirements. Test tool: Inspections.

  4. Have procedures been established to indicate violations of manual authorization procedures?
     Recommended test: Examine the reasonableness of the violation procedures for manual authorization. Test technique: Control. Test tools: Checklist, Fact finding.

  5. Have procedures been established to identify and act upon violations of automated authorization procedures?
     Recommended test: Examine the adequacy of the automated authorization violation procedures. Test technique: Requirements. Test tool: Walkthroughs.

  6. Do the implemented authorization methods conform to the authorization rules defined in the requirements phase?
     Recommended test: Verify compliance of implemented authorization methods to the defined authorization rules. Test technique: Requirements. Test tool: Inspections.

  7. Have procedures been implemented to verify the source of transactions where the source becomes the basis for authorizing the transaction?
     Recommended test: Verify that the system authenticates the source of the transaction where that source itself authorizes the transaction. Test technique: Security. Test tool: Inspections.

  8. Does the system maintain a record of who authorized each transaction?
     Recommended test: Verify that procedures are implemented to identify the authorizer of each transaction. Test technique: Requirements. Test tool: Inspections.

TEST FACTOR: File Integrity Controls Implemented

  1. Has someone been appointed accountable for the integrity of each file?
     Recommended test: Verify that the assigned individual has the necessary skills and time available. Test technique: Control. Test tool: Fact finding.

  2. Have the file integrity controls been implemented in accordance with the file integrity requirements?
     Recommended test: Compare the implemented controls to the integrity requirements established during the requirements phase. Test technique: Requirements. Test tool: Inspections.

  3. Have procedures been established to notify the appropriate individual of file integrity problems?
     Recommended test: Examine the adequacy of the procedures to report file integrity problems. Test technique: Error handling. Test tool: Walkthroughs.

  4. Are procedures established to verify the integrity of files on a regular basis?
     Recommended test: Review the reasonableness of the file integrity verification frequency. Test technique: Requirements. Test tool: Walkthroughs.

  5. Are there subsets of the file that should have integrity controls?
     Recommended test: Confirm with the user that all file subsets are appropriately safeguarded through integrity controls. Test technique: Control. Test tools: Error guessing, Confirmation/examination.

  6. Are procedures written for the regular reconciliation between automated file controls and manually maintained control totals?
     Recommended test: Verify the reasonableness and timeliness of procedures to reconcile automated controls to manually maintained controls. Test technique: Control. Test tool: Walkthroughs.

  7. Are interfile integrity controls maintained where applicable?
     Recommended test: Confirm with the user that all applicable file relationships are reconciled as a means of verifying file integrity. Test technique: Control. Test tool: Confirmation/examination.

  8. Are sensitive transactions subject to special authorization controls?
     Recommended test: Verify with legal counsel that sensitive transaction authorization controls are adequate. Test technique: Control. Test tool: Confirmation/examination.

TEST FACTOR: Implement Audit Trail

  1. Has the audit trail relationship from source record to control total been documented?
     Recommended test: Examine the completeness of the audit trail from source document to control total. Test technique: Requirements. Test tool: Walkthroughs.

  2. Has the audit trail from the control total to the supporting source transaction been documented?
     Recommended test: Examine the completeness of the audit trail from the control total to the source document. Test technique: Requirements. Test tool: Walkthroughs.

  3. Have all the defined fields been included in the audit trail?
     Recommended test: Verify that the audit trail records include all of the defined audit trail fields. Test technique: Requirements. Test tool: Walkthroughs.

  4. Does the implemented audit trail satisfy the defined reconstruction requirements?
     Recommended test: Verify that the implemented audit trail complies with the reconstruction requirements defined in the requirements phase. Test technique: Requirements. Test tool: Inspections.

  5. Have procedures been defined to test the audit trail?
     Recommended test: Verify that an audit trail test plan has been devised. Test technique: Requirements. Test tool: Fact finding.

  6. Are procedures defined to store part of the audit trail off-site?
     Recommended test: Examine the reasonableness of the procedures that require application audit trail records to be stored off-site. Test technique: Recovery. Test tools: Cause-effect graphing, Peer review.

  7. Does the implemented audit trail permit reconstruction of transaction processing?
     Recommended test: Review the completeness of the transaction reconstruction process. Test technique: Requirements. Test tools: Exhaustive testing, Inspections.

  8. Does the audit trail contain the information needed to restore processing after a failure?
     Recommended test: Confirm with the computer operations manager that the audit trail information is complete. Test technique: Requirements. Test tool: Confirmation/examination.

TEST FACTOR: Write Contingency Plan

  1. Does the contingency plan identify the people involved in recovering processing after a failure?
     Recommended test: Confirm with the operations manager that all the appropriate people are identified in the contingency plan. Test technique: Recovery. Test tool: Confirmation/examination.

  2. Has the contingency plan been approved by the operations manager?
     Recommended test: Examine the evidence indicating the operations manager approves of the plan. Test technique: Recovery. Test tool: Confirmation/examination.

  3. Does the plan identify all the resources needed for recovery?
     Recommended test: Confirm with the operations manager that all the needed resources are identified. Test technique: Recovery. Test tool: Confirmation/examination.

  4. Does the contingency plan include the priority for restarting operations after a failure?
     Recommended test: Review the reasonableness of the priority with senior management. Test technique: Recovery. Test tools: Error guessing, Fact finding.

  5. Does the recovery plan specify an alternate processing site?
     Recommended test: Confirm that an alternate site is available for backup processing. Test technique: Recovery. Test tool: Confirmation/examination.

  6. Does the contingency plan provide for security during a recovery period?
     Recommended test: Review the reasonableness of the security plan with the security officer. Test technique: Recovery. Test tool: Inspections.

  7. Has a plan been developed to test the contingency plan?
     Recommended test: Examine the completeness of the test plan. Test technique: Operations. Test tool: Inspections.

  8. Has the role of outside parties, such as the hardware vendor, been included in the test plan and confirmed with those outside parties?
     Recommended test: Confirm with outside parties that they can supply the support indicated in the contingency plan. Test technique: Operations. Test tool: Confirmation/examination.

TEST FACTOR: Design System to Achieve Service Level

  1. Do the implemented programs perform in accordance with the desired service level?
     Recommended test: Verify the performance criteria of the programs during testing. Test technique: Stress. Test tool: Instrumentation.

  2. Does the system performance achieve the desired level of service?
     Recommended test: Verify the performance of the system during testing. Test technique: Stress. Test tool: Instrumentation.

  3. Have the training programs been prepared for the people who will use the application system?
     Recommended test: Examine the completeness of the training programs. Test technique: Execution. Test tools: Checklist, Inspections.

  4. Is the support software available and does it meet service-level requirements?
     Recommended test: Confirm with computer operations personnel that the support software is available and does meet performance criteria. Test technique: Operations. Test tool: Confirmation/examination.

  5. Is the support hardware available and does it provide sufficient capacity?
     Recommended test: Confirm with computer operations personnel that the support hardware is available and does meet the capacity requirements. Test technique: Operations. Test tool: Confirmation/examination.

  6. Is sufficient hardware and software on order to meet anticipated future volumes?
     Recommended test: Confirm with computer operations that sufficient hardware and software is on order to meet anticipated future volumes. Test technique: Operations. Test tool: Confirmation/examination.

  7. Has a test plan been defined to verify that service-level performance criteria can be met?
     Recommended test: Examine the completeness of the test plan. Test technique: Execution. Test tools: Checklist, Inspections.

  8. Can the required input be delivered to processing in time to meet production schedules?
     Recommended test: Confirm with the individuals preparing input that they can prepare input in time to meet production schedules. Test technique: Execution. Test tool: Confirmation/examination.

TEST FACTOR: Implement Security Procedures

  1. Is the required security hardware available?
     Recommended test: Confirm with the security officer that the needed security hardware is available. Test technique: Security. Test tool: Confirmation/examination.

  2. Is the required security software available?
     Recommended test: Confirm with the security officer that the needed security software is available. Test technique: Security. Test tool: Confirmation/examination.

  3. Has a procedure been established to disseminate and maintain passwords?
     Recommended test: Examine the completeness and adequacy of the password dissemination and maintenance plan. Test technique: Security. Test tool: Exhaustive testing.

  4. Have the involved personnel been trained in security procedures?
     Recommended test: Examine the adequacy and completeness of the security training procedures. Test technique: Security. Test tool: Exhaustive testing.

  5. Has a procedure been established to monitor violations?
     Recommended test: Examine the completeness and adequacy of the violation-monitoring procedure. Test technique: Control. Test tool: Exhaustive testing.

  6. Has management been instructed on the procedure for punishing security violators?
     Recommended test: Confirm with management that they have been adequately instructed on how to implement security prosecution procedures. Test technique: Control. Test tool: Confirmation/examination.

  7. Have procedures been established to protect the programs, program listings, data documentation, and other systems documentation defining how the system works?
     Recommended test: Verify with the security officer the adequacy of the procedures to protect the system documentation and programs. Test technique: Security. Test tools: Risk matrix, Confirmation/examination.

  8. Has one individual been appointed accountable for security of the application when it becomes operational?
     Recommended test: Verify that the accountable individual has the necessary skills and the time available. Test technique: Security. Test tool: Fact finding.

TEST FACTOR: Programs Comply with Methodology

  1. Have the organization’s policies and procedures been incorporated into the application programs?
     Recommended test: Examine the programs to ensure that they comply with the necessary organization policies and procedures. Test technique: Compliance. Test tool: Inspections.

  2. Have the organization’s information services policies and procedures been incorporated into the application programs?
     Recommended test: Examine the programs to ensure that they comply with the necessary information services policies and procedures. Test technique: Compliance. Test tool: Inspections.

  3. Have the organization’s accounting policies and procedures been incorporated into the application programs?
     Recommended test: Examine the programs to ensure that they comply with the necessary accounting policies and procedures. Test technique: Compliance. Test tool: Inspections.

  4. Have the governmental regulations been incorporated into the application programs?
     Recommended test: Examine the programs to ensure that they comply with the necessary government regulations. Test technique: Compliance. Test tool: Inspections.

  5. Have the industry standards been incorporated into the application programs?
     Recommended test: Examine the programs to ensure that they comply with the necessary industry standards. Test technique: Compliance. Test tool: Inspections.

  6. Have the organization’s user department policies and procedures been incorporated into the application programs?
     Recommended test: Examine the programs to ensure that they comply with the user department’s policies and procedures. Test technique: Compliance. Test tool: Inspections.

  7. Are the policies, procedures, and regulations used as a basis for system specifications up-to-date?
     Recommended test: Confirm with the appropriate party that the regulations used for specifications are current. Test technique: Compliance. Test tool: Confirmation/examination.

  8. Are there anticipated changes to the policies, standards, or regulations between this phase and the time the system will become operational?
     Recommended test: Confirm with the involved parties the probability of changes to the policies, standards, or regulations prior to the system becoming operational. Test technique: Compliance. Test tool: Confirmation/examination.

TEST FACTOR: Programs Conform to Design (Correctness)

  1. Have changes in user management affected their support of system objectives?
     Recommended test: Confirm with user management that the stated objectives are still desired. Test technique: Requirements. Test tool: Confirmation/examination.

  2. Does the program implementation comply with stated objectives?
     Recommended test: Compare program results to stated objectives. Test technique: Requirements. Test tool: Design reviews.

  3. Will the implemented systems produce correct results?
     Recommended test: Verify that the implemented systems will produce correct results. Test technique: Requirements. Test tool: Correctness proof.

  4. Have the desired reports been produced?
     Recommended test: Confirm that the reports produced by the application program comply with user-defined specifications. Test technique: Requirements. Test tool: Design reviews.

  5. Does the system input achieve the desired data consistency and reliability objectives?
     Recommended test: Confirm with the user that the input to the system achieves the desired consistency and reliability objectives. Test technique: Requirements. Test tool: Design reviews.

  6. Are the manuals explaining how to use the computer outputs adequate?
     Recommended test: Confirm with the user the adequacy of the output use manuals. Test technique: Requirements. Test tools: Checklist, Confirmation/examination.

  7. Are the input manuals and procedures adequate to ensure the preparation of valid input?
     Recommended test: Confirm with the input preparers that the manuals appear adequate to produce valid input. Test technique: Requirements. Test tools: Checklist, Confirmation/examination.

  8. Has the user involvement in the developmental process continued through the programming phase?
     Recommended test: Confirm with the project personnel that the user participation has been adequate. Test technique: Requirements. Test tools: Checklist, Confirmation/examination.

TEST FACTOR: Programs Conform to Design (Ease of Use)

  1. Do the application documents conform to design specifications?
     Recommended test: Verify that the implemented ease-of-use segment of the application conforms to design. Test technique: Compliance. Test tool: Design reviews.

  2. Have easy-to-use instructions been prepared for interfacing with the automated application?
     Recommended test: Examine the usability of the people interface instructions. Test technique: Manual support. Test tool: Checklist.

  3. Have provisions been made to provide assistance to input clerks?
     Recommended test: Verify that provisions are implemented to assist input clerks in the proper entry of data. Test technique: Manual support. Test tools: Checklist, Walkthroughs.

  4. Are the training sessions planned to train personnel on how to interact with the computer system?
     Recommended test: Examine the course content to verify the appropriateness of the material. Test technique: Manual support. Test tool: Walkthroughs.

  5. Are the output documents implemented for ease of use?
     Recommended test: Verify the ease of use of the output documents. Test technique: Requirements. Test tools: Checklist, Walkthroughs.

  6. Is the information in output documents prioritized?
     Recommended test: Verify that the information in output documents is prioritized. Test technique: Requirements. Test tool: Inspections.

  7. Are the input documents implemented for ease of use?
     Recommended test: Verify the ease of use of the input documents. Test technique: Requirements. Test tools: Checklist, Walkthroughs.

  8. Do clerical personnel accept the application system as usable?
     Recommended test: Confirm with clerical personnel their acceptance of the usability of the application. Test technique: Manual support. Test tool: Confirmation/examination.

TEST FACTOR: Programs Are Maintainable

  1. Do the programs conform to the maintenance specifications?
     Recommended test: Verify that the programs conform to the maintenance specifications. Test technique: Compliance. Test tool: Inspections.

  2. Is the program documentation complete and usable?
     Recommended test: Review the documentation for completeness and usability. Test technique: Compliance. Test tools: Compiler-based analysis, Inspections.

  3. Do the programs contain a reasonable number of explanatory statements?
     Recommended test: Review the programs to determine they contain a reasonable number of explanatory statements. Test technique: Compliance. Test tool: Inspections.

  4. Is each processing segment of the program clearly identified?
     Recommended test: Verify that each processing segment of the program is adequately identified. Test technique: Compliance. Test tool: Inspections.

  5. Do the programs avoid complex program logic wherever possible?
     Recommended test: Review programs for complex programming logic. Test technique: Compliance. Test tools: Checklist, Inspections.

  6. Are the expected high-frequency change areas coded to facilitate maintenance?
     Recommended test: Determine ease of maintenance of high-change areas. Test technique: Compliance. Test tool: Peer review.

  7. Have the programs been reviewed from an ease-of-maintenance perspective?
     Recommended test: Review programs to determine their maintainability. Test technique: Compliance. Test tool: Peer review.

  8. Are changes introduced during programming incorporated into the design documentation?
     Recommended test: Review changes and verify that they have been incorporated into the design documentation. Test technique: Compliance. Test tools: Design reviews, Confirmation/examination.

TEST FACTOR: Programs Conform to Design (Portable)

  1. Does the system avoid the use of any vendor-specific hardware features?
     Recommended test: Review the application for vendor-specific hardware restrictions. Test technique: Operations. Test tool: Inspections.

  2. Does the system avoid the use of any vendor-specific software features?
     Recommended test: Review the application for vendor-specific software restrictions. Test technique: Operations. Test tool: Inspections.

  3. Are the programs written using the common program language statements?
     Recommended test: Review programs for use of uncommon programming statements. Test technique: Compliance. Test tool: Inspections.

  4. Are all portability restrictions documented?
     Recommended test: Determine the completeness of the portability documentation. Test technique: Compliance. Test tool: Inspections.

  5. Are all operating characteristics documented?
     Recommended test: Determine the completeness of operating characteristics documentation. Test technique: Compliance. Test tool: Inspections.

  6. Does program documentation avoid technical jargon?
     Recommended test: Review documentation for use of technical jargon. Test technique: Compliance. Test tool: Inspections.

  7. Are the data values used in the program machine independent?
     Recommended test: Review data values to determine they are machine independent. Test technique: Compliance. Test tools: Checklist, Confirmation/examination, Fact finding.

  8. Are the data files machine independent?
     Recommended test: Review data files to determine they are machine independent. Test technique: Compliance. Test tools: Checklist, Confirmation/examination, Fact finding.

TEST FACTOR: Programs Conform to Design (Coupling)

  1. Are common record layouts used for interfaced programs?
     Recommended test: Verify that common record layouts are used by interfaced applications. Test technique: Intersystems. Test tool: Inspections.

  2. Are the values in the data fields common to interfaced programs?
     Recommended test: Verify that common data values are used by interfaced applications. Test technique: Intersystems. Test tool: Inspections.

  3. Do the interfaced systems use the same file structure?
     Recommended test: Verify that common file structures are used by interfaced applications. Test technique: Intersystems. Test tool: Inspections.

  4. Have the interfaced segments been implemented as designed?
     Recommended test: Verify that the interface segments of the application are implemented as designed. Test technique: Intersystems. Test tools: Correctness proof, Desk checking, Inspections.

  5. Have changes to the interfaced system been coordinated with any affected application?
     Recommended test: Confirm that changes affecting interfaced applications are coordinated with those applications. Test technique: Intersystems. Test tools: Exhaustive testing, Confirmation/examination.

  6. Is the program/interface properly documented?
     Recommended test: Verify that the interface document is complete. Test technique: Intersystems. Test tools: Error guessing, Inspections.

  7. Is the data transfer media common to interfaced applications?
     Recommended test: Verify that common media is used for interfaced application files. Test technique: Operations. Test tools: Confirmation/examination, Fact finding.

  8. Can the required timing for the transfer of data be achieved?
     Recommended test: Verify that the data transfer timing between interfaced applications is reasonable. Test technique: Intersystems. Test tools: Error guessing, Fact finding.

TEST FACTOR: Develop Operating Procedures

  1. Has the size of the largest program been identified?
     Recommended test: Review programs to determine their maximum size. Test technique: Operations. Test tool: Inspections.

  2. Have changes made during programming affected operations?
     Recommended test: Review changes to ascertain if they affect operations. Test technique: Operations. Test tool: Inspections.

  3. Have any deviations from designed operations been communicated to computer operations?
     Recommended test: Review the application for operation design variations and confirm that operations has been notified of these changes. Test technique: Operations. Test tool: Error guessing.

  4. Has operations documentation been prepared?
     Recommended test: Review the completeness of operations documentation. Test technique: Compliance. Test tool: Design reviews.

  5. Have special forms and other needed media been ordered?
     Recommended test: Determine if needed media has been ordered. Test technique: Operations. Test tools: Confirmation/examination, Fact finding.

  6. Have data media retention procedures been prepared?
     Recommended test: Review the adequacy of data retention procedures. Test technique: Compliance. Test tool: Inspections.

  7. Has needed computer time for tests been scheduled?
     Recommended test: Examine the computer schedule to ascertain if needed test time has been scheduled. Test technique: Operations. Test tool: Fact finding.

  8. Have off-site storage needs been defined?
     Recommended test: Determine the reasonableness of off-site storage requirements. Test technique: Operations. Test tool: Fact finding.

TEST FACTOR: Programs Achieve Criteria (Performance)

  1. Has the cost to design and test the system approximated the cost estimate?
     Recommended test: Examine the projected budget to verify that actual costs approximate budget costs. Test technique: Execution. Test tool: Fact finding.

  2. Does the operational cost as represented by information services approximate the projected operational costs?
     Recommended test: Use the data from the job accounting system to substantiate that the actual test operational costs approximate the projected operational costs. Test technique: Execution. Test tool: Fact finding.

  3. Are the costs monitored during the developmental process?
     Recommended test: Confirm with the information services manager that project costs are monitored. Test technique: Compliance. Test tool: Confirmation/examination.

  4. Will changes made during the programming phase affect anticipated system costs?
     Recommended test: Confirm with the project manager that changes during the program phase will not affect operational costs. Test technique: Execution. Test tools: Confirmation/examination, Fact finding.

  5. Are the projected benefits still reasonable?
     Recommended test: Confirm with user management that projected benefits are still reasonable. Test technique: Execution. Test tools: Confirmation/examination, Fact finding.

  6. Is the projected life of the project still reasonable?
     Recommended test: Confirm with user management that the expected life of the project is still reasonable. Test technique: Execution. Test tool: Confirmation/examination.

  7. Is the project on schedule?
     Recommended test: Compare the current status versus projected status in the schedule. Test technique: Execution. Test tool: Fact finding.

  8. Are there any expected changes in the test or conversion phases that would impact the projected return on investment?
     Recommended test: Confirm with the project leader whether there would be any changes during the test or conversion phase that could affect the projected return on investment. Test technique: Execution. Test tools: Error guessing, Confirmation/examination.

Work Paper 9-9. Quality Control Checklist

Respond Yes, No, or N/A to each item, and add comments as needed.

  1. Is verifying and validating programs considered to be a responsibility of the programmer?

  2. Does the programmer understand the difference between static and dynamic testing?

  3. Will the program be subject to static testing as the primary means to remove defects?

  4. Does the programmer understand the process that will generate the program code?

  5. Does the programmer understand and use desk debugging?

  6. Does the programmer understand the 15 programming concerns, and will they be incorporated into testing?

  7. Is the program tested using either the peer review technique or code inspections?

  8. Will the program be subject to full testing prior to moving to a higher-level testing (e.g., string testing)?

  9. Are all of the uncovered defects recorded in detail?

  10. Are all of the uncovered defects corrected prior to moving to the next level of testing?

Conducting a Peer Review

The peer review provides a vehicle for knowledgeable people (peers) to contribute to the construction of the computer program by informally but effectively reviewing the functioning of the program in a non-threatening environment. The peer review provides a static analysis that evaluates both the structure and the functioning of the program. The peer review can detect syntactical errors, but more through personal observation than as a direct result of the walkthrough.

Peer reviews can also be formal. Whether the formal or informal version is used, management should approve the peer review concept. Formal peer reviews are an integral task in the programming process, whereas informal peer reviews are called for at the discretion of the lead programmer.

The peer review team should consist of between three and six members. It is important to have at least three members on the peer review team to obtain sufficiently varied opinion and to keep discussion going. Individuals who should be considered for the peer review team include the following:

  • Computer programmers (at least two)

  • Job control specialists

  • Computer operator

  • Control clerk

  • Programming supervisor

Program peer reviews are performed by executing the following tasks.

Establishing Peer Review Ground Rules

This need not be done for every peer review, but it is important to have good ground rules. Among the ground rules that need to be decided are the following:

  • Areas included and excluded from the peer review; for example, whether efficiency of programs will be included

  • Whether reports will be issued

  • Method for selecting peer review team leader

  • Location where the peer review will be conducted

  • Method for selecting a peer review

Selecting the Peer Review Team

The members of the peer review team should be selected far enough in advance that they can arrange their schedules, allocate sufficient time, and acquire any needed training for the peer review exercise.

Training Team Members

If an individual on the team has not participated in the program peer review previously, that individual should be trained in the process. Training includes an understanding of the peer review ground rules, preferably some training in interpersonal relationships such as how to interview and work with people in a peer review process, and training in the intent of the standards and program methodologies.

Selecting a Review Method

The team leader should select the review method. The review itself consists of two parts. The first part is a general explanation of the objectives and functioning of the program. The second part is the review of the program(s) using the selected method. Four methods can be used to conduct the peer review:

  1. Flowchart. The program is explained from a flowchart of the program logic. This is most effective when the flowchart is produced directly from the source code.

  2. Source code. The review examines each line of source code in order to understand the program.

  3. Sample transactions. The lead programmer explains the programs by explaining the processing that occurs on a representative sample of transactions.

  4. Program specifications. The program specifications are reviewed as a means of understanding the program.

Conducting the Peer Review

The project lead programmer normally oversees the peer review. The peer review commences with the lead programmer briefly reviewing the ground rules, explaining the program’s objectives, and then leading the team through the program processing. The review team is free to question and comment on any aspect of the project programmer’s explanations and to make recommendations and suggestions about the program. Generally, the peer review is conducted in a democratic manner. The role of the team leader is to keep the team’s questions and comments in order, to protect team members’ rights to ask questions and make recommendations, and to stop discussion on a specific point when, in the team leader’s opinion, there is no benefit in continuing it.

Drawing Conclusions

At the end of the formal peer review, the lead programmer indicates that he or she has no more comments to make and turns the meeting over to the peer review team leader. The peer review team leader now takes control of the meeting and summarizes the factual information drawn from the review and presents the review team’s recommendations. Ideally, this is done as a group activity, but some peer review teams, especially when the process is formalized, may want some time alone to discuss among themselves what they have heard and what they are going to recommend. The findings and recommendations are then presented to the project team for their consideration.

Preparing Reports

In the formal peer review process, reports may be prepared documenting the results. However, this is optional and not an essential part of the peer review process.

Check Procedures

Three quality control checklists are provided for this chapter. Testers should complete Work Paper 9-2 at the end of the requirements phase, Work Paper 9-7 at the end of the design phase, and Work Paper 9-9 at the end of the programming phase. The checklists are designed so that “Yes” responses indicate that the verification technique was performed correctly and “No” responses warrant further investigation.

Output

The only output from Task 1 is a report indicating requirements deficiencies. These will indicate where requirements are not accurate and/or complete. It is important that this report be prepared prior to completing the requirements checkpoint.

In Task 2, both the design review and the design deliverables inspection process will produce a defects list. Because the review is more general in nature, it may include some recommendations and areas of concern. Because inspections are more specific and tied to standards, these defects are usually variances from standards and are not debatable.

One of three categories of results can be produced from each design deliverables inspection:

  • No defects found

  • Minor work required

  • Major rework required

After all the steps in Task 2 have been performed, there should be only one deliverable: the moderator’s certification of the product, releasing the product to the next phase of the process to make the organization software compliant.

Two outputs should occur from Task 3. The first is a fully debugged program, after you have used static testing to uncover and remove defects. The second is a list of the defects uncovered during testing. Note that if the organization has a quality assurance activity, that list of defects should be forwarded to it, so that weaknesses in processes can be addressed and recurrence of the same defects in other programs prevented. In the formal peer review process in Task 3, reports may be prepared documenting the results. However, this is optional and not an essential part of the peer review process.
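
The content of the Task 3 defect list is left to the organization. The following minimal sketch shows one plausible record layout for the list forwarded to quality assurance; the field names and sample values are assumptions, not a prescribed format.

from dataclasses import dataclass, asdict

@dataclass
class DefectRecord:
    program: str          # program or module in which the defect was found
    phase_found: str      # e.g., "desk debugging", "peer review"
    defect_type: str      # e.g., "syntactical", "structural", "functional"
    description: str
    corrected: bool

defect_list = [DefectRecord("AR100", "peer review", "structural",
                            "branch exits loop before limit check", corrected=True)]
print(asdict(defect_list[0]))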

Guidelines

The walkthrough test tool and risk matrix are two of the more effective test tools for the requirements phase. The use of these tools will help determine whether the requirements phase test factors have been adequately addressed. These recommendations are not meant to exclude from use the other test tools applicable to the requirements phase, but rather to suggest and explain in detail two of the more effective tools for this phase.

Many of the available test tools for systems design are relatively new and unproven. Some of the more promising techniques require design specifications to be recorded in predetermined formats. Although the long-run potential for design phase testing is very promising, few proven design phase test tools currently exist.

Two design phase test tools that are receiving widespread acceptance are scoring and design reviews. Scoring is a tool designed to identify the risk associated with an automated application. The design review concept involves a formal assessment of the completeness of the process followed during the design phase. These two recommended test tools complement each other. Scoring is a process of identifying the system attributes that correlate to risk and then determining the extent to which those attributes are present or absent in the system being scored. The result of scoring is a determination of the degree of risk in the application system, and thus establishes the extent to which testing is needed. The design review then becomes the vehicle for testing the design specifications. The higher the risk, the more detailed the design review should be; for minimal-risk systems, the design review could be limited or even nonexistent.
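
As a minimal illustration of the scoring idea described above, the sketch below weights a few risk-correlated attributes and maps the total score to a review depth. The attributes, weights, and thresholds are assumptions for illustration, not a published scoring model.

RISK_ATTRIBUTES = {          # weight = how strongly the attribute correlates to risk
    "new technology": 3,
    "interfaces with other applications": 2,
    "handles financial transactions": 3,
    "experienced project team": -2,   # presence of this attribute reduces risk
}

def risk_score(present: set[str]) -> int:
    """Sum the weights of the attributes present in the application being scored."""
    return sum(weight for attr, weight in RISK_ATTRIBUTES.items() if attr in present)

score = risk_score({"interfaces with other applications", "handles financial transactions"})
review_depth = "detailed" if score >= 5 else "limited" if score >= 2 else "minimal"
print(score, review_depth)   # 5 detailed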

Two test tools have proven themselves over the years in programming phase testing: desk debugging and peer review. These two tools are closely related and complement each other. Desk debugging is performed by the individual programmer prior to peer reviews, which are normally performed by other members of the information services department. A combination of the two tools is effective in detecting both structural and functional defects.

Summary

This chapter covers three tasks for performing verification during three phases of system development. Task 1 provides a process for assessing the accuracy and completeness of requirements. The cost of uncovering and correcting requirement deficiencies at this phase of development is significantly less than during acceptance testing; estimates indicate that correcting a requirement deficiency in acceptance testing costs at least ten times as much as correcting it during this phase. If testers can increase the accuracy and completeness of requirements at this point of development, the test effort during the design phase can emphasize structural and implementation concerns rather than identifying improper requirements at later test phases.

Task 2 describes a process for testers to evaluate the accuracy and completeness of the design process. Once verified as accurate and complete, the design can be moved to the build phase to create the code that will produce the needed results from the user-provided input.

Task 3 describes static testing during the build/programming phase. The method of generating computer code varies significantly from organization to organization, and from project to project.

The programming phase testing approach outlined in this task is designed to cover all methods of code generation. However, all of the techniques should be used when code is generated through statement languages. When code generators are used from design specifications, the program testing will be minimal. Some of these programming testing techniques may be incorporated in design phase testing. After the static verification testing is done, the testing emphasis shifts to dynamic testing.

 
