Chapter 10. Step 4: Validation Testing

This step provides the opportunity to evaluate a system in an executable mode. Although the previous verification steps ensure that the system will function as specified, it is not until the software is executed as a system that there is complete assurance it will function properly.

Testing tradeoffs can be made between the various phases of the life cycle. The more verification testing is performed during the requirements, design, and program phases, the less validation testing that needs to be performed. On the other hand, when only minimal verification testing is performed during the early phases, extensive validation testing may be needed during the test phase.

Overview

Whereas testers primarily use system documentation to conduct verification testing, they use test data and test scripts to conduct validation testing. Validation testing attempts to simulate the system’s operational environment. Validation testing is effective only when the test environment is equivalent to the production environment.

There are two types of validation testing. The first verifies that the developers implemented the software as specified; at a minimum, this is unit testing and integration testing of the units. The second type tests that the developed software system can operate in a production environment; that is, it tests that the system’s various components can be integrated effectively. Although this second type of testing may be conducted by software developers, the preferred method is to use independent testers. This second type of dynamic testing is covered in Chapter 12.

Objective

The objective of this step is to determine whether a software system performs correctly in an executable mode. The software is executed in a test environment in approximately the same mode as it would be in an operational environment. The test should be executed in as many different ways as necessary to address the 15 concerns described in this test process, with any deviation from the expected results recorded. Depending on the severity of the problems uncovered, changes may need to be made to the software before it is placed in a production status. If the problems are extensive, it may be necessary to stop testing completely and return the software to the developers.

Concerns

Validation testing presents testers with the following concerns:

  • Software not in a testable mode. The previous testing steps may not have been performed adequately to remove most of the defects, and/or the necessary functions may not have been installed, or may have been installed incorrectly. Thus, testing will become bogged down in identifying problems that should have been identified earlier.

  • Inadequate time/resources. Because of delays in development or failure to budget sufficient time and resources for testing, the testers may not have the time or resources necessary to test the software effectively. In many IT organizations, management relies on testing to ensure that the software is ready for production. When adequate time or resources are unavailable, management may still rely on the testers’ results even though the testers were unable to perform their tests as planned.

  • Significant problems will not be uncovered during testing. Unless testing is planned and executed adequately, problems that can cause serious operational difficulties may not be uncovered. This can happen because testers at this step spend too much time uncovering defects rather than evaluating the software’s operational performance.

Workbench

Figure 10-1 illustrates the workbench for executing tests and recording results. It shows that the testers use a test environment at this point in the testing life cycle. The more closely this environment resembles the actual operational environment, the more effective the testing becomes. If test data was not created earlier in the testing process, it needs to be created as part of this step. Tests are then executed and the results recorded. The test report should indicate what works and what does not work. It should also give the tester’s opinion as to whether the software is ready for operation at the conclusion of this test step.

Figure 10-1. Workbench to execute dynamic tests and record results.

Input

Validation testing of an application system requires few new inputs. Many aspects of the developmental process are unavailable for evaluation during the test phase. Therefore, the testing during this phase must rely on the adequacy of the work performed during the earlier phases. The deliverables available during validation testing include:

  • System test plan (may include a unit test plan)

  • Test data and/or test scripts

  • Results of previous verification tests

  • Inputs from third-party sources, such as computer operators

Part II of this book discussed the test environment. Ensuring that the test environment is representative of the operational environment is of critical importance to system testing and to testing the integration of the software system with other systems in the operational environment. The test environment is less important for developer-conducted unit testing and testing of the integration of units. For that type of testing, what is important is that the test data include real-world test criteria.

The test environment should include the tools needed to perform testing effectively. For example, it is difficult to conduct regression testing unless the test environment includes a capture/playback tool. Likewise, it is difficult to create large amounts of test data without a tool that can help generate test conditions. Whereas verification testing is primarily a manual function, validation testing normally requires one or more software test tools.

Do Procedures

This step involves the following three tasks:

  1. Build the test data.

  2. Execute tests.

  3. Record test results.

Task 1: Build the Test Data

The concept of test data is a simple one: to enable testers to create representative processing conditions. The complex part of creating test data is determining which transactions to include. Experience shows that it is uneconomical to test every condition in an application system. Experience further shows that most testing exercises fewer than one-half of the computer instructions. Therefore, optimizing testing by selecting the most important test transactions is the key aspect of using test data as a test tool.

Several test tools are structured methods for designing test data. For example, correctness proof, data flow analysis, and control flow analysis are all designed to develop extensive sets of test data. Unfortunately, although extremely effective, these tools require significant time and effort to implement, and few organizations allocate sufficient budget or train their IT personnel adequately to use them.

Sources of Test Data/Test Scripts

Effective validation testing requires you to gather all the test data/scripts that represent the type of processing that will occur in an operational environment. You can determine some of these transactions by studying the software development documentation; other test transactions may not be obvious from the documentation and require experienced testers to ensure that the data is accurate and complete.

Test data/scripts can come from any of the following sources:

  • System documentation. Testers can use test data and scripts to test a system’s documented specifications. For example, if the documentation indicates that a customer cannot exceed a preset credit limit, the testers could create test data that validates that a customer cannot purchase items once the credit limit would be exceeded (see the sketch following this list).

  • Use cases. The testers should obtain from the users of the application the types of transactions they will be entering. These are frequently referred to as use cases. In other words, use cases are test transactions that verify the software will correctly process the transactions users will enter as they actually use the software.

  • Test generators. Test generators, as the name implies, can create test conditions for use by testers. However, the type of test data that can be generated depends on the capability of the test data generator. The concern over test generators is that most do not have the capability to generate data that tests interrelationships such as pay grade and pay dollars.

  • Production data. Testers can use production files themselves, or they can extract specific data from them.

  • Databases. Testers need databases for testing many software applications. They can use copies of databases or live databases that have features that will block data from being updated as a result of processing test data.

  • Operational profiles. Testers, in conjunction with the stakeholders of the software system, can analyze the type of processing that occurs in an operational environment. This is particularly useful when testing error conditions or when stress- or load-testing the system.

  • Individually created test data/scripts. Testers can create their own data/scripts based on their knowledge of where errors are most likely to occur.
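The credit-limit example above can be made concrete with a small executable check. The following is a minimal Python sketch; the `can_purchase` function, its field names, and the dollar values are illustrative assumptions, not part of any documented system.

```python
# Hypothetical credit-limit rule derived from system documentation:
# a customer may not purchase items that would push the balance past
# a preset credit limit. All names and values are illustrative only.

def can_purchase(balance, credit_limit, purchase_amount):
    """Return True only if the purchase keeps the customer within the limit."""
    return balance + purchase_amount <= credit_limit

# Test data built from the documented specification: one case at the
# boundary, one just over it, and one ordinary valid purchase.
test_cases = [
    # (balance, limit, purchase, expected)
    (900.00, 1000.00, 100.00, True),   # exactly at the limit
    (900.00, 1000.00, 100.01, False),  # one cent over the limit
    (100.00, 1000.00, 50.00, True),    # normal valid purchase
]

for balance, limit, purchase, expected in test_cases:
    actual = can_purchase(balance, limit, purchase)
    status = "PASS" if actual == expected else "FAIL"
    print(f"{status}: balance={balance} limit={limit} purchase={purchase}")
```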

Testing File Design

To design an adequate file of test data, testers must be familiar with the IT department’s standards and other relevant policies, include their provisions in the simulated transactions and procedures, and supply input and output formats for all types of transactions to be processed. To gain this knowledge, testers should review and analyze system flowcharts, operating instructions, and other documentation. This knowledge can alert the test team to possible system weaknesses for which unique test transactions should be designed.

To be effective, a test file should include transactions with a wide range of valid and invalid data—valid data for testing normal processing operations and invalid data for testing programmed controls.

Only one test transaction should be processed against each master record. This permits an isolated evaluation of specific program controls by ensuring that test results will not be influenced by other test transactions processed against the same master record. General types of conditions to test include the following:

  • Tests of normally occurring transactions. To test a computer system’s ability to accurately process valid data, a test file should include transactions that normally occur. For example, in a payroll system, transactions would include the calculations of regular pay, overtime pay, and some other type of premium pay (such as shift pay), as well as setting up master records for newly hired employees and updating existing master records for other employees.

  • Tests using invalid data. Testing for the existence or effectiveness of programmed controls requires the use of invalid data. Examples of tests for causing invalid data to be rejected or “flagged” include the following:

    • Entering alphabetic characters when numeric characters are expected, and vice versa.

    • Using invalid account or identification numbers.

    • Using incomplete or extraneous data in a specific data field or omitting it entirely.

    • Entering negative amounts when only positive amounts are valid, and vice versa.

    • Entering illogical conditions in data fields that should be related logically.

    • Entering a transaction code or amount that does not match the code or amount established by operating procedures or controlling tables.

    • Entering transactions or conditions that will violate limits established by law or by standard operating procedures.

  • Tests to violate established edit checks. Based on the system’s documentation, an auditor should be able to determine which edit routines are included in the computer programs to be tested. He or she should then create test transactions to violate these edits to see whether they, in fact, exist.
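As a minimal illustration of the invalid-data conditions listed above, the following Python sketch runs a few deliberately bad transactions through a hypothetical edit routine. The field names, transaction codes, and edit rules are assumptions chosen for illustration only.

```python
# Hypothetical edit routine; field names and rules are illustrative,
# based on the invalid-data conditions listed above.
def edit_transaction(txn):
    """Return a list of edit flags raised for one transaction."""
    flags = []
    if not txn["account"].isdigit():
        flags.append("non-numeric account number")
    if txn["amount"] < 0:
        flags.append("negative amount where only positive is valid")
    if txn["code"] not in {"REG", "OT", "PRM"}:
        flags.append("transaction code not in controlling table")
    return flags

# Invalid test transactions -- each should be rejected or flagged.
invalid_data = [
    {"account": "12A45", "amount": 100.0, "code": "REG"},   # alphabetic in numeric field
    {"account": "12345", "amount": -50.0, "code": "REG"},   # negative amount
    {"account": "12345", "amount": 100.0, "code": "XYZ"},   # unknown transaction code
]

for txn in invalid_data:
    flags = edit_transaction(txn)
    print("FLAGGED" if flags else "ACCEPTED (edit check missing?)", txn, flags)
```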

Defining Design Goals

Before processing test data, the test team must determine the expected results. Any difference between actual and predetermined results indicates a weakness in the system. The test team should determine the effect of the weakness on the accuracy of master file data and on the reliability of reports and other computer products.

One of the test tools described earlier in this book was a function/test matrix. This matrix lists the software functions along one side and the test objectives on the other. Completing this matrix would help create a file of test conditions that would accomplish the test objectives for each of the software functions. Another objective of the test file is to ensure that the desired test coverage occurred. Coverage might include requirements coverage as well as branch coverage.
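A function/test matrix can be kept in something as simple as a spreadsheet or a small script. The sketch below, in Python, shows one hypothetical way to track which function/objective cells already have a planned test condition; the function and objective names are placeholders, not taken from any particular system.

```python
# A function/test matrix kept as a simple mapping: software functions on
# one axis, test objectives on the other. The entries are illustrative.
functions = ["enter order", "update customer", "calculate invoice"]
objectives = ["valid data accepted", "invalid data rejected", "audit trail written"]

# Mark which objective/function pairs already have a planned test condition.
matrix = {(f, o): False for f in functions for o in objectives}
matrix[("enter order", "valid data accepted")] = True
matrix[("enter order", "invalid data rejected")] = True

# Report coverage so untested cells can be turned into new test conditions.
covered = sum(matrix.values())
print(f"coverage: {covered}/{len(matrix)} cells have a planned test condition")
for (f, o), done in sorted(matrix.items()):
    if not done:
        print(f"missing test condition: {f} / {o}")
```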

Entering Test Data

After the types of test transactions have been determined, the test data should be entered into the system using the same method as users. To test both input and computer processing, testers should ensure that all the data required for transaction processing is entered. For example, if users enter data by completing a data entry template, the tester should use that template as well.

Applying Test Files Against Programs That Update Master Records

There are two basic approaches to test programs for updating databases and/or production files. In the first approach, copies of actual master records and/or simulated master records are used to set up a separate master file. In the second approach, special routines used during testing will stop testers from updating production records.

To use the first approach, the test team must have a part of the organization’s master file copied to create a test master file. From a printout of this file, the team selects records suitable for the test. The tester then updates the test file with both valid and invalid data by using the organization’s transaction-processing programs. Testers can simulate master records by preparing source documents and processing them with the program the organization uses to add new records to its master file. Procedures for using simulated records as test data are the same as those for copied records. An advantage of using simulated records is that they can be tailored for particular conditions and they eliminate the need to locate and copy suitable organization records. This advantage is usually offset when many records are needed because their creation can be complex and time-consuming when compared to the relatively simple procedure of copying a part of the organization’s master file.

Often, the most practical approach is to use a test master file that is a combination of copied and simulated master records. In this approach, copied records are used whenever possible and simulated records are used only when necessary to test conditions not found in the copied records.

By using copied and simulated master records in a separate test file, testers avoid the complications and dangers of running test data in a regular processing run against the current master file. A disadvantage of copied and simulated records is that computer programs must be loaded and equipment set up and operated for audit purposes only, thus involving additional cost.
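The following Python sketch illustrates, under assumed record layouts and condition names, how a test master file might be assembled from copied records plus simulated records created only for conditions the copied data does not cover.

```python
# Build a test master file from copied production records, adding
# simulated records only for conditions the copied data does not cover.
# The record layout and condition names are assumptions for illustration.

copied_records = [
    {"emp_id": "1001", "pay_grade": "GS-5", "ytd_earnings": 12_000.00},
    {"emp_id": "1002", "pay_grade": "GS-7", "ytd_earnings": 35_000.00},
]

required_conditions = {
    "near_fica_maximum": lambda r: r["ytd_earnings"] > 150_000.00,
    "new_hire_zero_ytd": lambda r: r["ytd_earnings"] == 0.00,
}

test_master = list(copied_records)
for name, predicate in required_conditions.items():
    if not any(predicate(r) for r in test_master):
        # Simulate a record tailored to the uncovered condition.
        simulated = {"emp_id": f"SIM-{name}", "pay_grade": "GS-9",
                     "ytd_earnings": 160_000.00 if "fica" in name else 0.00}
        test_master.append(simulated)

print(f"test master file: {len(test_master)} records "
      f"({len(copied_records)} copied, {len(test_master) - len(copied_records)} simulated)")
```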

Creating and Using Test Data

The following is the recommended process for creating and using test data:

  1. Identify test resources. Testing using test data can be as extensive or limited a process as desired. Unfortunately, many programmers approach the creation of test data from a “we’ll do the best job possible” perspective and then begin developing test transactions. When time expires, testing is complete. The recommended approach is to determine the amount of resources allocated for creating test data and then develop a process that creates the most important test data within that allotment.

  2. Identify test conditions. Testers should use a function/test matrix to identify the conditions to test.

  3. Rank test conditions. If resources are limited, the maximum use of those resources will be obtained by testing the most important test conditions. The objective of ranking is to identify high-priority test conditions that should be tested first.

    Ranking does not mean that low-ranked test conditions will not be tested. Ranking can be used for two purposes: first, to determine which conditions should be tested first; and second, and equally important, to determine the amount of resources allocated to each of the test conditions. For example, if testing the FICA deduction was a relatively low-ranked condition, only one test transaction might be created to test that condition, while for the higher-ranked test conditions several test transactions may be created.

  4. Select conditions for testing. Based on the ranking, the conditions to be tested should be selected. At this point, the conditions should be very specific. For example, “testing FICA” is a reasonable condition to identify and rank, but it is too general for creating specific test conditions. Three test situations might be identified: an employee whose year-to-date earnings exceed the maximum FICA deduction; an employee whose current-period earnings will exceed the difference between the year-to-date earnings and the maximum deduction; and an employee whose year-to-date earnings are more than one pay period amount below the maximum FICA deduction. Each test situation should be documented in a testing matrix. This is a detailed version of the testing matrix that was started during the requirements phase (the sketch following this list illustrates these situations and their expected results).

  5. Determine correct results of processing. The correct processing results for each test situation should be determined. Each test situation should be identified by a unique number, and then a log made of the correct results for each test condition. If a system is available to automatically check each test situation, special forms may be needed as this information may need to be converted to machine-readable media.

    The right time to determine the correct processing results is before the test transactions are created. This step helps determine the reasonableness and usefulness of test transactions. The process can also show whether there are ways to extend the effectiveness of test transactions, and whether the same condition has already been tested by another transaction.

  6. Create test transactions. The method of creating the machine-readable transaction varies based on the application and the testing rules. The most common methods of creating test transactions include the following:

    • Key entry

    • Test-data generators

    • User-prepared input forms

    • Production data

  7. Document test conditions. Both the test situations and the results of testing should be documented.

  8. Conduct test. Testers should run the executable system using the test conditions in a test or simulated production environment.

  9. Verify and correct test results. The results of testing should be verified and any necessary corrections to the programs performed. Problems detected as a result of testing can be attributable not only to system defects, but to test data defects. Testers should be aware of both situations.
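The sketch below illustrates steps 4 and 5 for the FICA example: three specific test situations, each with a predetermined expected result. It is written in Python; the deduction rate and wage base are placeholders rather than current legal values, and the calculation itself is an assumption made only for illustration.

```python
# Steps 4 and 5 applied to the FICA example: three specific test
# situations, each with a predetermined expected result. The rate and
# wage base below are placeholders, not current legal values.
FICA_RATE = 0.062
FICA_WAGE_BASE = 100_000.00  # illustrative maximum taxable earnings

def fica_deduction(ytd_earnings, period_earnings):
    """Deduct FICA only on earnings up to the wage base."""
    taxable = max(0.0, min(period_earnings, FICA_WAGE_BASE - ytd_earnings))
    return round(taxable * FICA_RATE, 2)

test_situations = [
    # (id, year-to-date earnings, current-period earnings, expected deduction)
    ("FICA-1: already over maximum", 105_000.00, 2_000.00, 0.00),
    ("FICA-2: crosses maximum this period", 99_000.00, 2_000.00, round(1_000.00 * FICA_RATE, 2)),
    ("FICA-3: well below maximum", 40_000.00, 2_000.00, round(2_000.00 * FICA_RATE, 2)),
]

for case_id, ytd, period, expected in test_situations:
    actual = fica_deduction(ytd, period)
    print(f"{case_id}: expected {expected}, actual {actual}, "
          f"{'PASS' if actual == expected else 'FAIL'}")
```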

Payroll Application Example

In making two reviews of automated civilian payroll systems, the U.S. General Accounting Office used test files to test the agencies’ computer programs for processing pay and leave data. This case shows their test file development approach.

First, all available documentation was reviewed for the manual and automated parts of each system. To understand the manual operations, they interviewed payroll supervisors and clerks, reviewed laws and regulations relating to pay and leave, and familiarized themselves with standard payroll operating procedures. For the automated part of each system they interviewed system designers and programmers and reviewed system and program documentation and operating procedures.

After acquiring a working knowledge of each system, they decided to test computer programs used to update payroll master records and those used to calculate biweekly pay and leave entitlements. Although they were concerned primarily with these particular programs, they decided that other programs used in the normal biweekly payroll processing cycle (such as programs for producing pay and leave history reports, leave records, and savings bond reports) should also be tested to see how they would handle test data.

They then designed a test file of simulated pay and leave transactions to test the effectiveness of internal controls, compliance with applicable laws and regulations, and the adequacy of standard payroll operating procedures. The test file included transactions made up of both valid and invalid data. These transactions were based on specified procedures and regulations and were designed to check the effectiveness of internal controls in each installation’s payroll processing. They used one transaction for each master record chosen.

The best method of obtaining suitable payroll master records for the test, they decided, would be to use copies of actual master records, supplemented with simulated records tailored for test conditions not found in the copied records.

Accordingly, they obtained a duplicate of each agency’s payroll master file and had a section of it printed in readable copy. From this printout, they selected a specific master record to go with each test transaction. When none of the copied records appearing on the printout fit the specifics of a particular transaction, they made up a simulated master record by preparing source documents and processing them with the program used by each installation to add records for new employees to its master file. They then added the simulated records to the copied records to create the test master file.

They next prepared working papers on which were entered, for each test transaction, the control number assigned to the transaction, the type of input document to be used, and the nature and purpose of the test. They predetermined the correct end results for all test transactions and recorded these results in the working papers for comparison with actual results.

With some help from payroll office personnel, they next coded the test transactions onto source documents. The data was then key entered and key verified. They then processed the test data against actual agency payroll programs and compared the test results with the predetermined results to see whether there were any differences.

They found both systems accepted and processed several invalid test transactions that should have been rejected or flagged by programmed computer controls. Alternative manual controls were either nonexistent or less than fully effective because they could be bypassed or compromised through fraud, neglect, or inadvertent error. They recommended that the systems’ automated controls be strengthened to ensure accurate payrolls and protect the government from improper payments.

A copy of their work papers outlining the test conditions is illustrated in Figure 10-2.

Figure 10-2. Typical payroll transactions to include in a test file.

Creating Test Data for Stress/Load Testing

The objective of stress/load testing is to verify that the system can perform properly when internal program or system limitations have been exceeded. This may require that large volumes of transactions be entered during testing.

The following are the recommended steps for determining the test data needed for stress/load testing:

  1. Identify input data used by the program. A preferred method to identify limitations is to evaluate the data. Each data element should be reviewed to determine if it poses a system limitation. This is an easier method than attempting to evaluate the programs. The method is also helpful in differentiating between system and program limitations. Another advantage is that data may need to be evaluated only once, rather than evaluating numerous individual programs.

  2. Identify data created by the program. These would be data elements that are not input into the system but are included in internal or output data records. If testers know the input and output data, they can easily identify newly created data elements.

  3. Challenge each data element for potential limitations. Testers should ask the following questions about each data element. A Yes answer to any of the questions means a limitation has been identified.

    • Can the data value in a field exceed the size of this data element?

    • Is the value in a data field accumulated?

    • Is data temporarily stored in the computer?

    • Is the information in a data element stored in the program until a following transaction is entered?

    • If a data element represents an accounting entity, does the number used to identify the accounting entity in itself provide a future limitation, such as using a one-character field to identify sales districts?

  4. Document limitations. Documentation forms the basis for volume testing. Each limitation must now be evaluated to determine the extent of testing required.

  5. Perform volume testing. The testing follows the nine steps outlined in the earlier section, “Creating and Using Test Data.”
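A minimal Python sketch of this procedure follows. It assumes, purely for illustration, that a one-character sales-district code is the limitation identified in step 3, and then generates a volume of transactions that press against that limit (step 5).

```python
import random

# Step 3: challenge a data element for a potential limitation. Here the
# assumed limitation is a one-character sales-district code, which caps
# the system at ten districts (0-9).
MAX_DISTRICTS = 10  # limit implied by a one-character numeric field

# Step 5: generate a large volume of transactions that press against the
# limit so behavior at and beyond capacity can be observed.
def generate_volume(n):
    for i in range(n):
        yield {
            "txn_id": i,
            "district": str(random.randrange(MAX_DISTRICTS + 1)),  # includes one invalid value "10"
            "amount": round(random.uniform(1, 10_000), 2),
        }

over_limit = sum(1 for t in generate_volume(100_000) if len(t["district"]) > 1)
print(f"transactions exceeding the one-character district field: {over_limit}")
```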

Creating Test Scripts

Several characteristics of scripting are different from batch test data development. These differences include the following:

  • Data entry procedures required. The test procedures take on greater significance in scripting. The person using the script needs to know the details of how to enter the transaction via the terminal. This may be more complex than simply creating a test condition.

  • Use of software packages. Scripting is a very difficult and complex task to do manually, particularly when the script has to be repeated multiple times. Therefore, most testers use a capture/playback type of software package, which enables the capture of transactions as they are entered via terminal, and then repeats them as the scripts are reused. There are many of these on the market, although they are aimed primarily at the IBM mainframe.

  • Sequencing of events. Scripts require the sequencing of transactions. In batch systems, sequencing is frequently handled by sorting during systems execution; however, with scripts, the sequence must be predefined.

  • Stop procedures. Batch testing continues until the batch is complete or processing abnormally terminates. Scripting may be able to continue, but the results would be meaningless; therefore, the script has to indicate when to stop, or if specific conditions occur, where to go in the script to resume testing.
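A test script can be represented as an ordered list of entries with instructions for what to do when a step fails, which captures the sequencing and stop-procedure differences described above. The following Python sketch is one hypothetical representation; the screens, keys, and responses are invented for illustration and do not correspond to any particular tool.

```python
# A test script as a predefined, ordered list of terminal entries.
# Unlike batch test data, the sequence is fixed in the script itself and
# each step carries entry instructions and a stop condition. All field
# names and screens here are illustrative.
script = [
    {"seq": 1, "screen": "LOGIN",   "keys": "user01/secret", "expect": "MAIN MENU",
     "on_mismatch": "stop"},              # nothing later is meaningful without a session
    {"seq": 2, "screen": "ORDER",   "keys": "item=42;qty=3", "expect": "ORDER ACCEPTED",
     "on_mismatch": "resume_at_seq_4"},   # skip the confirmation step if entry fails
    {"seq": 3, "screen": "CONFIRM", "keys": "Y",             "expect": "ORDER POSTED",
     "on_mismatch": "stop"},
    {"seq": 4, "screen": "LOGOUT",  "keys": "",              "expect": "SIGNED OFF",
     "on_mismatch": "stop"},
]

def run(script, send):
    """Drive the script in sequence; 'send' simulates entering keys on a screen."""
    i = 0
    while i < len(script):
        step = script[i]
        actual = send(step["screen"], step["keys"])
        if actual == step["expect"]:
            i += 1
        elif step["on_mismatch"].startswith("resume_at_seq_"):
            i = int(step["on_mismatch"].rsplit("_", 1)[1]) - 1   # jump to the named step
        else:
            print(f"stopping at step {step['seq']}: got {actual!r}")
            break

# Stub terminal that always returns the expected response.
run(script, send=lambda screen, keys: {"LOGIN": "MAIN MENU", "ORDER": "ORDER ACCEPTED",
                                       "CONFIRM": "ORDER POSTED", "LOGOUT": "SIGNED OFF"}[screen])
```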

To develop, use, and maintain test scripts, testers should perform the following five steps:

  1. Determine testing levels.

  2. Develop test scripts.

  3. Execute test scripts.

  4. Analyze the results.

  5. Maintain test scripts.

Determining Testing Levels

There are five levels of testing for scripts, as follows:

  • Unit scripting. Develops a script to test a specific unit/module.

  • Pseudo-concurrency scripting. Develops scripts to test when two or more users are accessing the same file at the same time.

  • Integration scripting. Determines that various modules can be properly linked.

  • Regression scripting. Determines that the unchanged portions of systems remain unchanged when the system is changed. (Note: This is usually performed with the information captured on capture/playback software systems.)

  • Stress/performance scripting. Determines whether the system will perform correctly when it is stressed to its capacity.

Developing Test Scripts

Typically, the capture/playback tool is used to develop test scripts. The development of a script involves a number of considerations, as follows:

  • Programs to be tested

  • Operating environment

  • Script components

  • Script organization

  • Terminal entry of scripts

  • Automated entry of script transactions

  • Manual entry of script transactions

  • Transaction edits

  • Transaction navigation

  • Transaction sources

  • Files involved

  • Terminal input and output

  • Online operating environment

  • Date setup

  • File initialization

  • Screen initialization

  • Secured initialization

  • File restores

  • Password options

  • Update options

  • Processing inquiries

  • Program libraries

  • File states/contents

  • Security considerations

  • Start and stop considerations

  • Logon procedures

  • Logoff procedures

  • Setup options

  • Menu navigation

  • Exit procedures

  • Re-prompting options

  • API communications

  • Single versus multiple terminals

  • Date and time dependencies

  • Inquiries versus updates

  • Unit test organization

  • Pseudo-concurrent test organization

  • Integration test organization

  • Regression test organization

Testers can use Work Paper 10-1 as an aid to developing test scripts. Table 10-1 summarizes the development strategies.

Work Paper 10-1. Developing Test Scripts

Test Item | Entered By | Sequence | Action | Expected Result | Operator Instructions

(Rows are left blank to be completed for each scripted step.)

Table 10-1. Script Development Strategies

TEST LEVEL   | SINGLE TRANSACTION | MULTIPLE TRANSACTIONS | SINGLE TERMINAL | MULTIPLE TERMINALS
Unit         | X                  |                       | X               |
Concurrent   | X                  |                       |                 | X
Integration  |                    | X                     | X               |
Regression   |                    | X                     |                 | X
Stress       |                    | X                     |                 | X

Executing Test Scripts

Testers can execute test scripts either manually or by using the capture/playback tools. Considerations to incorporate when using capture/playback tools include the following:

  • Environmental setup

  • Program libraries

  • File states/contents

  • Date and time

  • Multiple terminal arrival modes

  • Serial (cross-terminal) dependencies

  • Processing options

  • Stall detection

  • Synchronization of different types of input data

  • Volume of inputs

  • Arrival rate of input

Note

Be reluctant to use scripting extensively unless a software tool drives the script.
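Two of the considerations above, input arrival rate and stall detection, can be illustrated with a small playback loop. The Python sketch below is not a real capture/playback tool; the recorded inputs, timing values, and `submit` stand-in are assumptions for illustration.

```python
import time

# Minimal playback loop illustrating two considerations from the list
# above: controlling the arrival rate of inputs and detecting a stall.
# The recorded steps and the 'submit' function are stand-ins only.
ARRIVAL_RATE = 5.0      # inputs per second to feed the system
STALL_TIMEOUT = 2.0     # seconds without a response before declaring a stall

recorded_inputs = [f"TXN-{n}" for n in range(10)]

def submit(txn):
    """Stand-in for sending a recorded input and waiting for a response."""
    time.sleep(0.01)
    return f"OK {txn}"

for txn in recorded_inputs:
    started = time.monotonic()
    response = submit(txn)
    if time.monotonic() - started > STALL_TIMEOUT:
        print(f"stall detected on {txn}; stopping playback")
        break
    print(response)
    time.sleep(1.0 / ARRIVAL_RATE)   # pace inputs at the planned arrival rate
```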

Analyzing the Results

After executing test scripts, testers must compare the actual results with the expected results. Much of this should have been done during the execution of the script, using the operator instructions provided. Note that if a capture/playback software tool is used, analysis will be more extensive after execution. The analysis should include the following:

  • System components

  • Terminal outputs (screens)

  • File contents

  • Environment variables, such as

    • Status of logs

    • Performance data (stress results)

  • Onscreen outputs

  • Order of outputs processing

  • Compliance of screens to specifications

  • Ability to process actions

  • Ability to browse through data
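A simple way to perform this comparison is to hold expected and captured results as keyed values and report every mismatch. The Python sketch below assumes illustrative keys for screens, file contents, and logs; they are not drawn from any specific system.

```python
# Compare captured results of a script run against the expected results
# and report differences for the analysis step. Keys and values are
# illustrative placeholders.
expected = {
    "screen:ORDER_SUMMARY": "3 items, total 149.85",
    "file:orders.dat:record_count": 1203,
    "log:error_count": 0,
}
actual = {
    "screen:ORDER_SUMMARY": "3 items, total 149.85",
    "file:orders.dat:record_count": 1202,
    "log:error_count": 1,
}

differences = {k: (expected[k], actual.get(k)) for k in expected if actual.get(k) != expected[k]}
for item, (want, got) in differences.items():
    print(f"MISMATCH {item}: expected {want!r}, actual {got!r}")
print(f"{len(expected) - len(differences)} of {len(expected)} checks matched")
```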

Maintaining Test Scripts

Once developed, test scripts need to be maintained so that they can be used throughout development. The following areas should be incorporated into the script maintenance procedure:

  • Identifiers for each script

  • Purpose of scripts

  • Program/units tested by this script

  • Version of development data that was used to prepare script

  • Test cases included in script
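One hypothetical way to keep this maintenance information is a small catalog record per script, as in the Python sketch below; the field names mirror the attributes listed above, and the sample values are invented.

```python
from dataclasses import dataclass, field
from typing import List

# One maintenance record per script, carrying the attributes listed above.
# The concrete values are examples only.
@dataclass
class ScriptRecord:
    script_id: str                 # identifier for the script
    purpose: str                   # what the script is intended to test
    programs_tested: List[str]     # programs/units exercised by this script
    data_version: str              # version of development data used to prepare it
    test_cases: List[str] = field(default_factory=list)

catalog = [
    ScriptRecord("SCR-017", "pseudo-concurrent update of customer file",
                 ["CUSTUPD"], "dev-data 2024-03", ["TC-101", "TC-102"]),
]

for rec in catalog:
    print(f"{rec.script_id}: {rec.purpose} ({len(rec.test_cases)} test cases)")
```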

Task 2: Execute Tests

Effective validation testing should be based on the test plan created much earlier in the life cycle. Testing in this phase is the culmination of the preparatory work performed in the earlier phases. Without this preparation, tests may be uneconomical and ineffective.

The following describes some of the methods of testing an application system. Testers can use Work Paper 10-2 to track their progress.

  • Manual, regression, and functional testing (reliability). Manual testing ensures that the people interacting with the automated system can perform their functions correctly. Regression testing verifies that what is being installed does not affect any portion of the application already installed or other applications interfaced by the new application. Functional testing verifies that the system requirements can be performed correctly when subjected to a variety of circumstances and repeated transactions.

  • Functional and regression testing (coupling). The test phase should verify that the application being tested can correctly communicate with interrelated application systems. Both functional and regression testing are recommended. Functional testing verifies that any new function properly interconnects, while regression testing verifies that unchanged segments of the application system that interconnect with other applications still function properly.

  • Compliance testing

    • Authorization. Testing should verify that the authorization rules have been properly implemented and complied with. Test conditions should include unauthorized transactions or processes to ensure that they are rejected, as well as ensuring authorized transactions are accepted.

    • Performance. Performance criteria are established during the requirements phase. These criteria should be updated if the requirements change during later phases of the life cycle. Many of the criteria can be evaluated during the test phase, and those that can be tested should be tested. However, it may be necessary to wait until the system is placed into production to verify that all of the criteria have been achieved.

    • Security. Testers should evaluate the adequacy of the security procedures by attempting to violate them. For example, an unauthorized individual should attempt to access or modify data.

  • Functional testing

    • File integrity. Testers should verify the controls over the file integrity. For example, if integrity depends on the proper functioning of an independent control total, that function should be tested along with the automated segment of the application system. In addition, sufficient updates of the file should be performed so that the integrity controls can be tested during several iterations of executing the application system.

    • Audit trail. Testers should test the audit trail function to ensure that a source transaction can be traced to a control total, that the transaction supporting a control total can be identified, and that the processing of a single transaction or the entire system can be reconstructed using audit trail information. It is normally advisable to list part of the audit trail file to ensure that it is complete based on the test transactions entered.

    • Correctness. Functional correctness testing verifies that the application functions in accordance with user-specified requirements. Because IT personnel normally concentrate their testing on verifying that the mainline requirements function properly, you may wish to emphasize the other test concerns during validation testing, or emphasize improperly entered transactions to test the data validation and error detection functions.

  • Recovery testing (continuity of processing). If processing must continue during periods when the automated system is not operational, alternate processing procedures should be tested. In addition, the users of application systems should be involved in a complete recovery test so that not only the automated system but also the manual aspects of recovery are tested. This may involve intentionally causing the system to fail so that the recovery procedures can be exercised.

  • Stress testing (service level). The application is placed under stress to verify that the system can handle high-volume processing. Stress testing should attempt to find those levels of processing at which the system can no longer function effectively. In online systems, this may be determined by the volume of transactions, whereas in batch systems the size of the batch or large volumes of certain types of transactions can test internal tables or sort capabilities.

  • Testing complies with methodology. Testing should be performed in accordance with the organization’s testing policies and procedures. The methodology should specify the type of test plan required, the recommended test techniques and tools, as well as the type of documentation required. The methodology should also specify the method of determining whether the test is successful.

  • Manual support testing (ease of use). The ultimate success of the system is determined by whether people can use it. Because this is difficult to evaluate prior to validation testing, it is important that the system is evaluated in as realistic a test environment as possible.

  • Inspections (maintainability). Modifications made during the system’s development life cycle provide one method of testing the maintainability of the application system. However, these changes are made by the developers, who are intimately familiar with the application system. The completed system should therefore be inspected by an independent group, preferably systems maintenance specialists. System development standards should be devised with maintainability in mind.

  • Disaster testing (portability). Disaster testing simulates problems in the original environment so that an alternative processing environment can be tested. Although it is not possible to simulate all environments into which an application system may be moved, knowing that it can transfer between two different environments provides a high probability that other moves will not cause major complications.

  • Operations testing (ease of operations). Testing in this phase should be conducted by the normal operations staff. Project development personnel should not be permitted to coach or assist during the test process. It is only through having normal operation personnel conduct the test that the completeness of instructions and the ease with which the system can be operated can be properly evaluated.

Table 10-2. Test Phase Test Process

TEST FACTOR: Manual, Regression, and Functional Testing (Reliability)

TEST CRITERIA | ASSESSMENT (Very Adequate / Adequate / Inadequate / N/A) | RECOMMENDED TEST

1. Has data that does not conform to individual data element specifications been tested?

    

Verify that data validation programs reject data not conforming to data element specifications.

2. Have tests been performed to reject data relationships not conforming to system specifications?

    

Verify that the system rejects data relationships that do not conform to system specifications.

3. Have invalid identifiers been tested?

    

Verify that program rejects invalid identifiers.

4. Have tests been conducted to verify that missing sequence numbers will be detected?

    

Confirm that the system detects missing sequence numbers.

5. Have tests been conducted to verify that inaccurate batch totals will be detected?

    

Verify that the system will detect inaccurate batch totals.

6. Have tests been conducted to determine that data missing from a batch or missing scheduled data will be detected?

    

Verify that the programs will detect data missing from batches and scheduled data that does not arrive on time.

7. Have tests been conducted to verify that the unchanged parts of the system are not affected by invalid data?

    

Conduct regression test to ensure that unchanged portions of the program are not affected by invalid data.

8. Are the results obtained from the recovery process correct?

    

Verify the correctness of the results obtained from the recovery process.

TEST FACTOR: Compliance Testing (Authorization)

TEST CRITERIA | ASSESSMENT (Very Adequate / Adequate / Inadequate / N/A) | RECOMMENDED TEST

1. Do manual procedures ensure that the proper authorization is received?

    

Test manual procedures to verify that authorization procedures are followed.

2. Have automated authorization rules been tested?

    

Verify that programs enforce automated authorization rules.

3. Have the current authorization names and identifiers been included as part of the test?

    

Confirm that the actual identifiers for authorization are included in the programs.

4. Have unauthorized transactions been entered into the system to determine if they will be rejected?

    

Verify that the authorization programs reject unauthorized transactions.

5. If multiple authorization is required, do the procedures function properly?

    

Verify that multiple authorization procedures perform properly.

6. If authorizers are limited in the size of transactions they can authorize, have multiple transactions below that limit been entered to determine if the system checks against limit violations?

    

Verify that the system can identify potential violations of authorization limits caused by entering multiple transactions below the limit.

7. Have the procedures to change the name or identifier of individuals authorized to change a transaction been tested?

    

Verify that the procedure to change the authorization rules of a program performs properly.

8. Have the procedures to report authorization violations to management been tested?

    

Verify that the authorization reports are properly prepared and delivered.

TEST FACTOR: Functional Testing (File Integrity)

TEST CRITERIA | ASSESSMENT (Very Adequate / Adequate / Inadequate / N/A) | RECOMMENDED TEST

1. Have the file balancing controls been tested?

    

Verify that the procedures to balance the files function properly.

2. Have the independently maintained control totals been tested?

    

Verify that the independently maintained control totals can confirm the automated file control totals.

3. Have integrity procedures been tested to ensure that updates are properly recorded?

    

Verify that the new control totals properly reflect the updated transactions.

4. Have tests been performed to ensure that integrity can be retained after a program failure?

    

Cause a program to fail to determine if it affects the file integrity.

5. Has erroneous data been entered to determine if it can destroy the file integrity?

    

Enter erroneous data to determine that it cannot affect the integrity of the file totals.

6. Have the manual procedures to develop independent control totals been tested?

    

Verify that the manual procedures can be properly performed to produce correct independent control totals.

7. If multiple files contain the same data, will all like elements of data be changed concurrently to ensure the integrity of all computer files?

    

Change a data element in one file that is redundant in several files to verify that the other files will be changed accordingly.

8. Have nil and one record file conditions been tested?

    

Run system with one and no records on each file.

TEST FACTOR: Functional Testing (Audit Trail)

TEST CRITERIA | ASSESSMENT (Very Adequate / Adequate / Inadequate / N/A) | RECOMMENDED TEST

1. Has a test been conducted to verify that source documents can be traced to control totals?

    

Verify that a given source transaction can be traced to the appropriate control total.

2. Has a test been conducted to verify that all of the supporting data for a control total can be identified?

    

Determine for a control total that all the supporting transactions can be identified.

3. Can the processing of a single transaction be reconstructed?

    

Verify that the processing of a single transaction can be reconstructed.

4. Has a test been conducted to verify that the audit trail contains the appropriate information?

    

Examine the audit trail to verify that it contains the appropriate information.

5. Will the audit trail be saved for the appropriate time period?

    

Verify that the audit trail is marked to be saved for the appropriate time period.

6. Have tests been conducted to determine that people can reconstruct processing from the audit trail procedures?

    

Verify that by using the audit trail procedures people can reconstruct processing.

7. Have tests been conducted to verify that the audit trail is economical to use?

    

Determine the cost of using the audit trail.

8. Does the audit trail satisfy review requirements?

    

Verify with the auditors that the audit trail is satisfactory for their purpose.

TEST FACTOR: Recovery Testing (Continuity of Processing)

TEST CRITERIA | ASSESSMENT (Very Adequate / Adequate / Inadequate / N/A) | RECOMMENDED TEST

1. Has a simulated disaster been created to test recovery procedures?

    

Simulate a disaster to verify that recovery can occur after a disaster.

2. Can people perform the recovery operation from the recovery procedures?

    

Verify that a recovery can be performed directly from the recovery procedures.

3. Has a test been designed to determine whether recovery can occur within the desired time frame?

    

Conduct a recovery test to determine that it can be performed within the required time frame.

4. Have operation personnel been trained in recovery procedures?

    

Confirm with operation personnel that they have received appropriate recovery training.

5. Has each type of system failure been tested?

    

Verify that the system can recover from each of the various types of system failures.

6. Have the manual backup procedures been tested using full volume for system failures?

    

Simulate a system disaster to verify that the manual procedures are adequate.

7. Have the manual procedures been tested for entering data received during downtime into the system after the integrity of the system has been restored?

    

Verify that the system users can properly enter data that has been accumulated during system failures.

8. Can alternate processing procedures be performed using the manual procedures?

    

Require the manual alternate processing procedures to be performed exclusively from the procedures.

TEST FACTOR: Stress Testing (Service Level)

TEST CRITERIA | ASSESSMENT (Very Adequate / Adequate / Inadequate / N/A) | RECOMMENDED TEST

1. Have the limits of all internal tables and other restrictions been documented?

    

Confirm with the project leader that all the project limits are documented.

2. Has each of the documented limits been tested?

    

Verify that the application limits have been tested.

3. Have programmed procedures been included so that transactions that cannot be processed within current capacity are retained for later processing?

    

Confirm that when more transactions are entered than the system can handle they are stored for later processing.

4. Has the input portion of the system been subject to stress testing?

    

Verify that excessive input will not result in system problems.

5. Has the manual segment of the system been subject to stress testing?

    

Verify that when people get more transactions than they can process, no transactions will be lost.

6. Have communication systems been stress tested?

    

Verify that when communication systems are required to process more transactions than their capability, transactions are not lost.

7. Have procedures been written outlining the process to be followed when the system volume exceeds capacity?

    

Evaluate the reasonableness of the excess capacity procedures.

8. Have tests using backup personnel been performed to verify that the system can process normal volumes without the regular staff present?

    

Test the functioning of the system when operated by backup personnel.

TEST FACTOR: Compliance Testing (Performance)

TEST CRITERIA | ASSESSMENT (Very Adequate / Adequate / Inadequate / N/A) | RECOMMENDED TEST

1. Can systems be operated at expected volumes with the anticipated manual support?

    

Verify that the systems can be operated with anticipated manual support.

2. Can transactions be processed at expected volumes for the expected cost?

    

Verify that the transaction processing costs are within expected tolerances.

3. Has the test phase been conducted within the test budget?

    

Verify from the accounting reports that the test phase has been performed within budget.

4. Have problems been encountered in testing that will affect the cost-effectiveness of the system?

    

Confirm with the project leader that uncovered problems will not significantly affect the cost effectiveness of the system.

5. Does the test phase indicate that the expected benefits will be received?

    

Confirm with user management that the expected benefit should be received.

6. Will projected changes to hardware and software significantly reduce operational or maintenance costs?

    

Confirm with computer operations whether projected changes to hardware and software will significantly reduce operations and maintenance costs.

7. Does a test phase schedule exist that identifies tasks, people, budgets, and costs?

    

Examine the completeness of the test phase work program.

8. Is the technology used for implementation sound?

    

Confirm with an independent source the soundness of the implementation technology.

TEST FACTOR: Compliance Testing (Security)

TEST CRITERIA | ASSESSMENT (Very Adequate / Adequate / Inadequate / N/A) | RECOMMENDED TEST

1. Do the identified security risks have adequate protection?

    

Examine the completeness of the protection against the identified security risks.

2. Have tests been conducted to violate physical security?

    

Attempt to violate physical security.

3. Have tests been conducted to violate access security?

    

Attempt to violate access security.

4. Have tests been conducted to determine if computer resources can be used without authorization?

    

Attempt to use computer resources without authorization.

5. Have tests been conducted to determine if security procedures are adequate during off hours?

    

Conduct security violations during nonworking hours.

6. Are repetitive tests conducted to attempt to violate security by continual attempts?

    

Conduct repetitive security violations.

7. Are tests conducted to obtain access to program and system documentation?

    

Attempt to gain access to program and system documentation.

8. Are employees adequately trained in security procedures?

    

Verify that employees know and follow security procedures.

TEST FACTOR: Test Complies with Methodology

TEST CRITERIA | ASSESSMENT (Very Adequate / Adequate / Inadequate / N/A) | RECOMMENDED TEST

1. Does testing verify that the system processing is in compliance with the organization’s policies and procedures?

    

Verify that the operational system results comply with the organization’s policies and procedures.

2. Does testing verify that the system processing is in compliance with the information services processing policies and procedures?

    

Verify that the operational system results comply with the information services policies and procedures.

3. Does testing verify that the system processing is in compliance with the accounting policies and procedures?

    

Verify that the operational system results comply with the accounting policies and procedures.

4. Does testing verify that the system processing is in compliance with governmental regulations?

    

Verify that the operational system results comply with the governmental regulations.

5. Does testing verify that the system processing is in compliance with industry standards?

    

Verify that the operational system results comply with the industry standards.

6. Does testing verify that the system processing is in compliance with the user procedures?

    

Verify that the operational system results comply with the user department policies and procedures.

7. Did testing procedures conform to the test plan?

    

Verify that the test plan was fully implemented.

8. Has the testing verified that sensitive data is adequately protected?

    

Confirm with the user the completeness of the test to verify sensitive data is protected.

TEST FACTOR: Functional Testing (Correctness)

TEST CRITERIA | ASSESSMENT (Very Adequate / Adequate / Inadequate / N/A) | RECOMMENDED TEST

1. Do the normal transaction origination procedures function in accordance with specifications?

    

Verify that the transaction origination procedures perform in accordance with systems requirements.

2. Do the input procedures function in accordance with specifications?

    

Verify that the input procedures perform in accordance with systems requirements.

3. Do the processing procedures function in accordance with specifications?

    

Verify that the processing procedures perform in accordance with systems requirements.

4. Do the storage retention procedures function in accordance with specifications?

    

Verify that the storage retention procedures perform in accordance with systems requirements.

5. Do the output procedures function in accordance with specifications?

    

Verify that the output procedures perform in accordance with systems requirements.

6. Do the error-handling procedures function in accordance with specifications?

    

Verify that the error-handling procedures perform in accordance with systems requirements.

7. Do the manual procedures function in accordance with specifications?

    

Verify that the manual procedures perform in accordance with systems requirements.

8. Do the data retention procedures function in accordance with specifications?

    

Verify that the data retention procedures perform in accordance with systems requirements.

TEST FACTOR: Manual Support Testing (Ease of Use)

TEST CRITERIA | ASSESSMENT (Very Adequate / Adequate / Inadequate / N/A) | RECOMMENDED TEST

1. Do the clerical personnel understand the procedures?

    

Confirm with clerical personnel that they understand the procedures.

2. Are the reference documents easy to use?

    

Examine results of using reference documents.

3. Can input documents be completed correctly?

    

Examine processing for correctness.

4. Are output documents used properly?

    

Examine correctness of use of output documents.

5. Is manual processing completed within the expected time frame?

    

Identify time span for manual processing.

6. Do the outputs indicate which actions should be taken first?

    

Examine outputs for priority of use indications.

7. Are documents clearly identified regarding recipients and use?

    

Examine documents for clarity of identification.

8. Are the clerical personnel satisfied with the ease of use of the system?

    

Confirm with clerical personnel the ease of use of the system.

TEST FACTOR: Inspections (Maintainability)

TEST CRITERIA | ASSESSMENT (Very Adequate / Adequate / Inadequate / N/A) | RECOMMENDED TEST

1. Do the programs contain nonentrant code?

    

Determine all program statements are entrant.

2. Are the programs executable?

    

Examine the reasonableness of program processing results.

3. Can program errors be quickly located?

    

Introduce an error into the program.

4. Does the program conform to the documentation?

    

Verify the executable version of the program conforms to the program documentation.

5. Is a history of program changes available?

    

Examine the completeness of the history of program changes.

6. Are test criteria prepared so that they can be used for maintenance?

    

Examine the usability of test data for maintenance.

7. Are self-checking test results prepared for use during maintenance?

    

Examine the usability of expected test results for maintenance.

8. Are all errors detected during testing corrected?

    

Verify that errors detected during testing have been corrected.

TEST FACTOR: Disaster Testing (Portability)

TEST CRITERIA | ASSESSMENT (Very Adequate / Adequate / Inadequate / N/A) | RECOMMENDED TEST

1. Have alternate processing sites and/or requirements been identified?

    

Confirm that alternate site requirements have been identified.

2. Are data files readable at the new facilities?

    

Execute data files at the new facilities.

3. Are programs executable at the new facilities?

    

Execute programs at the new facilities.

4. Are operating instructions usable at the new facilities?

    

Request that normal operators execute system at the new facilities.

5. Are outputs usable at the new facilities?

    

Examine usability of outputs produced using the new facilities.

6. Is execution time acceptable at the new facilities?

    

Monitor execution time at the new facility.

7. Are programs recompilable at the new facilities?

    

Recompile programs at the new facility.

8. Are the user procedures usable at the new facilities?

    

Request users to operate system at the new facilities.

TEST FACTOR: Functional and Regression Testing (Coupling)

| TEST CRITERIA | ASSESSMENT (Very Adequate / Adequate / Inadequate / N/A) | RECOMMENDED TEST |
| --- | --- | --- |
| 1. Are inputs from other application systems correct? |  | Verify correctness of computerized data. |
| 2. Are outputs going to other applications correct? |  | Verify correctness of computerized data. |
| 3. Does input from other applications conform to specifications documents? |  | Verify actual input conforms to specifications (see the sketch following this table). |
| 4. Does output going to other applications conform to specifications documents? |  | Verify actual output conforms to specifications. |
| 5. Does input from other applications impact unrelated functions? |  | Perform appropriate regression testing. |
| 6. Can the intersystem requirements be processed within time frame specifications? |  | Monitor time span of processing for adherence to specifications. |
| 7. Are intersystem operation instructions correct? |  | Verify intersystem operation instructions are correct. |
| 8. Are the retention dates on intersystem files correct? |  | Confirm that intersystem file retention dates are correct. |
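
Criteria 3 and 4 call for verifying that data exchanged with other applications conforms to the specification documents. The sketch below, referenced in the table, is a minimal illustration of such a conformance check; the record layout, field names, and sample record are hypothetical, not taken from any particular interface specification.

```python
from typing import Dict, List

# Hypothetical layout from an intersystem specification document:
# field name -> (expected type, maximum length or None).
INTERSYSTEM_SPEC = {
    "account_id": (str, 10),
    "amount": (float, None),
    "currency": (str, 3),
}


def conforms_to_spec(record: Dict[str, object]) -> List[str]:
    """Return a list of deviations; an empty list means the record conforms."""
    deviations = []
    for field, (expected_type, max_len) in INTERSYSTEM_SPEC.items():
        if field not in record:
            deviations.append(f"missing field: {field}")
            continue
        value = record[field]
        if not isinstance(value, expected_type):
            deviations.append(f"{field}: expected {expected_type.__name__}")
        elif max_len is not None and len(value) > max_len:
            deviations.append(f"{field}: exceeds {max_len} characters")
    return deviations


# Example: an inbound record whose currency code violates the layout.
print(conforms_to_spec({"account_id": "A123", "amount": 10.0, "currency": "USDX"}))
```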

TEST FACTOR: Operations Test (Ease of Operations)

| TEST CRITERIA | ASSESSMENT (Very Adequate / Adequate / Inadequate / N/A) | RECOMMENDED TEST |
| --- | --- | --- |
| 1. Are operating instructions in the proper format? |  | Verify documented instructions conform to standards. |
| 2. Have operators been instructed in how to operate the new application? |  | Confirm with operators the completeness of instructions. |
| 3. Has a trouble call-in list been prepared? |  | Examine the call-in list. |
| 4. Are operating instructions complete? |  | Determine operator instructions are complete. |
| 5. Has appropriate operations and test time been scheduled? |  | Examine the schedule for reasonable allocation of time. |
| 6. Are data retention procedures prepared? |  | Verify completeness of retention procedures. |
| 7. Have normal operators successfully executed the application? |  | Verify that operators can operate the system using only the operator instructions. |
| 8. Have operator recommendations for improvements been reviewed? |  | Verify that operator recommendations have been adequately reviewed. |

Task 3: Record Test Results

Testers must document the results of testing so that they know what was and was not achieved. The following attributes should be developed for each test case:

  • Condition. Tells what is.

  • Criteria. Tells what should be.

    These two attributes are the basis for a finding. If the difference between the two is of little or no practical consequence, no finding exists.

  • Effect. Tells why the difference between what is and what should be is significant.

  • Cause. Tells the reasons for the deviation.

A well-developed problem statement includes each of these attributes. When one or more of these attributes is missing, questions almost always arise, such as:

  • Condition. What is the problem?

  • Criteria. Why is the current state inadequate?

  • Effect. How significant is it?

  • Cause. What could have caused the problem?
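
For teams that log test problems in a tool rather than on paper, the four attributes can be captured as a simple record. The following is a minimal sketch, assuming a Python-based test log; the class name, fields, and example values are hypothetical and only illustrate the structure described above.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class ProblemStatement:
    condition: str        # "what is" -- the facts observed during the test
    criteria: str         # "what should be" -- the documented expectation
    effect: str           # why the difference is significant
    cause: Optional[str]  # reason for the deviation, if known

    def is_finding(self) -> bool:
        """A finding exists only when condition and criteria actually differ."""
        return self.condition.strip() != self.criteria.strip()


# Example with hypothetical values:
problem = ProblemStatement(
    condition="Nightly batch run completed in 6.5 hours",
    criteria="The batch window specification allows a maximum of 4 hours",
    effect="Online start-up is delayed, postponing order entry each morning",
    cause="Unindexed customer-master lookups during invoice posting",
)
assert problem.is_finding()
```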

Documenting a statement of a user problem involves three tasks, which are explained in the following sections.

Documenting the Deviation

Problem statements derive from a process of comparison. Essentially, the user compares “what is” with “what should be.” When a deviation is identified between what actually exists and what the user thinks is correct or proper, the first essential step toward development of a problem statement has occurred. It is difficult to visualize any type of problem that is not in some way characterized by this deviation. The “what is” can be called the statement of condition. The “what should be” can be called the criteria. These concepts are the first two, and most basic, attributes of a problem statement.

Documenting the deviation means describing the condition as it currently exists and the criteria that represent what the user wants. The actual deviation is the difference, or gap, between “what is” and “what is desired.”

The statement of condition uncovers and documents the facts as they exist. What is a fact? If somebody tells you something happened, is that “something” a fact, or is the only fact that you were told it happened? The description of the statement of condition depends largely on the nature and extent of the evidence or support that is examined and noted. For the facts making up the statement of condition, the IT professional should take pains to ensure that the information is accurate, well supported, and worded as clearly and precisely as possible.

The statement of condition should document as many of the following attributes as appropriate for the problem:

  • Activities involved

  • Procedures used to perform work

  • Outputs/deliverables

  • Inputs

  • Users/customers served

  • Deficiencies noted

The criterion is the user’s statement of what is desired. It can be stated in either negative or positive terms. For example, it could indicate the need to reduce complaints or delays, or it could state the desired processing turnaround time.

Often, “should be” relates primarily to common sense or general reasonableness, and the statement of condition virtually speaks for itself. Such situations must be carefully distinguished from personal whims or subjective, fanciful notions. There is no room for such subjectivity in defining what is desired.

As much as possible, the criteria should relate directly to the statement of condition. For example, if volumes are expected to increase, the number of users served has changed, or the user’s processes have changed, these changes should be expressed in the same terms used to document the statement of condition.

Work Paper 10-3 provides space to describe the problem and document the statement of condition and the statement of criteria. Note that an additional section could be added to Work Paper 10-3 to describe the deviation. However, if the statement of condition and statement of criteria are properly worded, the deviation should be readily determinable.

Work Paper 10-3. Test Problem Documentation

| FIELD | DESCRIPTION |
| --- | --- |
| Name of Software Tested |  |
| Problem Description |  |
| Actual Results |  |
| Expected Results |  |
| Effect of Deviation |  |
| Cause of Problem |  |
| Location of Problem |  |
| Recommended Action |  |

Documenting the Effect

Whereas the legitimacy of a problem statement may stand or fall on criteria, the attention that the problem statement receives after it is reported depends largely on its significance. Significance is judged by effect.

Efficiency and economy are useful measures of effect and frequently can be stated in quantitative terms such as dollars, time, units of production, number of procedures and processes, or transactions. Whereas past effects can usually be ascertained from the records, potential future effects must often be estimated. Sometimes effects are intangible but are nevertheless of major significance.

Effect is frequently considered almost simultaneously with the first two attributes (condition and criteria) of the problem. Reviewers may suspect a bad effect even before they have clearly formulated these other attributes in their minds. After the statement of condition is identified, reviewers may search for a firm criterion against which to measure the suspected effect. They may hypothesize several alternative criteria, which are believed to be suitable based on experiences in similar situations. They may conclude that the effects under each hypothesis are so divergent or unreasonable that what is really needed is a firmer criterion—say, a formal policy in an area where no policy presently exists. The presentation of the problem statement may revolve around this missing criterion, although suspicions as to effect may have been the initial path.

The reviewer should attempt to quantify the effect of a problem wherever practical. Although the effect can be stated in narrative or qualitative terms, that frequently does not convey the appropriate message to management; for example, statements such as “Service will be delayed” or “Extra computer time will be required” do not really tell management what is happening to the organization.
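
As one illustration of quantifying an effect, the short calculation below turns “service will be delayed” into a monthly dollar figure. The volumes and rates are hypothetical; an actual estimate would use the organization’s own figures.

```python
# Hypothetical inputs describing the operational impact of a deviation.
extra_minutes_per_transaction = 2.5   # added handling time caused by the problem
transactions_per_day = 1200
working_days_per_month = 22
labor_cost_per_minute = 0.75          # dollars

monthly_effect = (extra_minutes_per_transaction
                  * transactions_per_day
                  * working_days_per_month
                  * labor_cost_per_minute)

# "Service will be delayed" becomes a figure management can weigh.
print(f"Estimated monthly cost of the deviation: ${monthly_effect:,.0f}")
```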

Documenting the Cause

In some cases, the cause may be obvious from the facts presented. In other instances, investigation is required to identify the origin of the problem.

Most findings involve one or more of the following causes:

  • Nonconformity with standards, procedures, or guidelines

  • Nonconformity with published instructions, directives, policies, or procedures from a higher authority

  • Nonconformity with business practices generally accepted as sound

  • Employment of inefficient or uneconomical practices

The determination of the cause of a condition usually requires the scientific approach, which encompasses the following steps:

  1. Define the problem (the condition that results in the finding).

  2. Identify the flow of work and/or information leading to the condition.

  3. Identify the procedures used in producing the condition.

  4. Identify the people involved.

  5. Re-create the circumstances to identify the cause of a condition.

Document the cause of the problem on Work Paper 10-3.
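
Once the cause is recorded, every heading on Work Paper 10-3 should be filled in. Where problem reports are assembled by a script rather than by hand, the headings can be kept in one place so each report carries the same fields. The following is a minimal sketch with hypothetical entries; the field names mirror the work paper above.

```python
# Headings from Work Paper 10-3, kept in one place for consistent reports.
WORK_PAPER_FIELDS = [
    "Name of Software Tested",
    "Problem Description",
    "Actual Results",
    "Expected Results",
    "Effect of Deviation",
    "Cause of Problem",
    "Location of Problem",
    "Recommended Action",
]


def format_work_paper(entries: dict) -> str:
    """Render a completed work paper, flagging any heading left blank."""
    lines = []
    for field in WORK_PAPER_FIELDS:
        value = entries.get(field, "").strip() or "(not documented)"
        lines.append(f"{field}: {value}")
    return "\n".join(lines)


# Example with hypothetical entries; undocumented headings are flagged.
print(format_work_paper({
    "Name of Software Tested": "Order entry release 3.2",
    "Problem Description": "Credit limit check skipped for phone orders",
    "Actual Results": "Orders over the credit limit are accepted",
    "Expected Results": "Orders over the credit limit are rejected",
}))
```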

Check Procedures

Work Paper 10-4 is a quality-control checklist for this step. Yes responses indicate good test practices, and No responses warrant additional investigation. A Comments column is provided to explain No responses and to record the results of investigation.

Work Paper 10-4. Quality Control Checklist

| ITEM | YES | NO | N/A | COMMENTS |
| --- | --- | --- | --- | --- |
| 1. Has an appropriate test environment been established to perform the dynamic test of the application software? |  |  |  |  |
| 2. Are the testers trained in the test tools that will be used during this step? |  |  |  |  |
| 3. Has adequate time been allocated for this step? |  |  |  |  |
| 4. Have adequate resources been assigned to this step? |  |  |  |  |
| 5. Have the methods for creating test data been appropriate for this system? |  |  |  |  |
| 6. Has sufficient test data been developed to adequately test the application software? |  |  |  |  |
| 7. Have all the testing techniques that were indicated in the test plan been scheduled for execution during this step? |  |  |  |  |
| 8. Have the expected results from testing been determined? |  |  |  |  |
| 9. Has a process been established to determine variance/deviation between expected results and actual results? |  |  |  |  |
| 10. Have both the expected and actual results been documented when there is a deviation between the two? |  |  |  |  |
| 11. Has the potential impact of any deviation been determined? |  |  |  |  |
| 12. Has a process been established to ensure that appropriate action/resolution will be taken on all identified test problems? |  |  |  |  |

Output

Validation testing has the following three outputs:

  • The test transactions to validate the software system

  • The results from executing those transactions

  • Variances from the expected results
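
The third output, variances from expected results, is simply the set of test transactions whose actual results differ from the expected ones. The following is a minimal sketch of producing that list; the transaction identifiers and result values are hypothetical.

```python
def find_variances(expected: dict, actual: dict) -> dict:
    """Return {transaction id: (expected, actual)} for every mismatch."""
    variances = {}
    for txn_id, expected_result in expected.items():
        actual_result = actual.get(txn_id, "<no result recorded>")
        if actual_result != expected_result:
            variances[txn_id] = (expected_result, actual_result)
    return variances


# Hypothetical expected and actual results for three test transactions.
expected_results = {"TXN-001": "accepted", "TXN-002": "rejected", "TXN-003": "accepted"}
actual_results = {"TXN-001": "accepted", "TXN-002": "accepted", "TXN-003": "accepted"}

# TXN-002 deviates and would be documented on Work Paper 10-3 for follow-up.
print(find_variances(expected_results, actual_results))
```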

Guidelines

Validation testing is the last line of defense against defects entering the operational environment. If no testing has occurred prior to the test phase, it is unreasonable to expect testing at this point to remove all the defects. Experience has shown that it is difficult for the test phase to be more than 80 percent effective in reducing defects. Obviously, the fewer defects that enter the test phase, the fewer defects that make their way into the production environment.

At the end of the test phase, the application system is placed into production. The test phase provides the last opportunity for the user to ensure that the system functions properly. For this reason, the user should be heavily involved in testing the application system.

The IT department has an opportunity to evaluate the application system during the program phase. During this phase, it determines whether the system functions according to the requirements. The test step is best performed by a group other than the project team. This is not to say that the project team should not be involved or help, but rather that it should not be the dominant party in the test phase. If the same individuals were responsible for both program-phase testing and test-phase testing, there would be little reason to have two different phases. When information services assume test responsibility during the program phase and the user assumes it during the test phase, the two phases complement one another.

An independent test group should be given responsibility for testing the system to determine whether it performs according to the user’s needs. Because of communication problems, differences may exist between the specifications to which the system was built and the requirements the user expected. Ideally, the test team will have been developing test conditions since the requirements phase and, during the test phase, should uncover any remaining defects in the application system.

Summary

This chapter described how to dynamically test application software. Validation testing should not focus on removing defects, but rather on whether the system can perform as specified in operational mode. Because full testing is impractical, validation testing must concentrate on those operational aspects most important to the user. The next step is to analyze the results of testing and report the results.

 
