Chapter 8. Step 2: Developing the Test Plan

The scope of the effort to determine whether software is ready to be placed into production should be defined in a test plan. To expend the resources for testing without a plan will almost certainly lead to waste and the inability to evaluate the status of corrections prior to installation. The test planning effort should follow the normal test planning process, although the content will vary because it will involve not only in-house developed software but also vendor-developed software and software embedded into computer chips.

Overview

The test plan describes how testing will be accomplished. Its creation is essential to effective testing and should take about one-third of the total testing effort. If you develop the plan carefully, test execution, analysis, and reporting will flow smoothly.

Consider the test plan as an evolving document. As the developmental effort changes in scope, the test plan must change accordingly. It is important to keep the test plan current and to follow it, for it is the execution of the test plan that management must rely on to ensure that testing is effective; and it is from this plan that testers will ascertain the status of the test effort and base opinions on its results.

This chapter contains a standard that defines what to include in the test plan. The procedures described here are amplified with work papers and checklists detailing how to develop the planning material. The organizing effort described in Chapter 7 will assist in developing the test plan. Chapters 9 through 13 discuss executing the test plan and summarizing and reporting the results.

Objective

The objective of a test plan is to describe all testing that is to be accomplished, together with the resources and schedule necessary for completion. The test plan should provide background information on the software being tested, test objectives and risks, and specific tests to be performed. Properly constructed, the test plan is a contract between the testers and the project team/users. Thus, status reports and final reports will be based on that contract.

Concerns

The concerns testers face in ensuring that the test plan will be complete include the following:

  • Not enough training. The majority of IT personnel have not been formally trained in testing, and only about half of full-time independent testing personnel have been trained in testing techniques. This causes a great deal of misunderstanding and misapplication of testing techniques.

  • Us-versus-them mentality. This common problem arises when developers and testers are on opposite sides of the testing issue. Often, the political infighting takes up energy, sidetracks the project, and negatively impacts relationships.

  • Lack of testing tools. IT management may consider testing tools to be a luxury. Manual testing can be an overwhelming task. Although more than just tools are needed, trying to test effectively without tools is like trying to dig a trench with a spoon.

  • Lack of management understanding/support of testing. If support for testing does not come from the top, staff will not take the job seriously and testers’ morale will suffer. Management support goes beyond financial provisions; management must also make the tough calls to deliver the software on time with defects or take a little longer and do the job right.

  • Lack of customer and user involvement. Users and customers may be shut out of the testing process, or perhaps they don’t want to be involved. Users and customers play one of the most critical roles in testing: making sure the software works from a business perspective.

  • Not enough time for testing. This is a common complaint. The challenge is to prioritize the plan to test the right things in the given time.

  • Over-reliance on independent testers. Sometimes called the “throw it over the wall” syndrome. Developers know that independent testers will check their work, so they focus on coding and let the testers do the testing. Unfortunately, this results in higher defect levels and longer testing times.

  • Rapid change. In some technologies, especially rapid application development (RAD), the software is created and/or modified faster than the testers can test it. This highlights the need for automation, but also for version and release management.

  • Testers are in a lose-lose situation. If the testers report too many defects, they are blamed for delaying the project; if they fail to find the critical defects, they are blamed for poor quality.

  • Having to say no. The single toughest dilemma for testers is to have to say, “No, the software is not ready for production.” Nobody on the project likes to hear that, and frequently, testers succumb to the pressures of schedule and cost.

Workbench

The workbench in Figure 8-1 shows the six tasks required to complete the test plan.

Figure 8-1. Workbench for developing a test plan.

Input

Accurate and complete inputs are critical for developing an effective test plan. The following two inputs are used in developing the test plan:

  • Project plan. This plan should address the totality of activities required to implement the project and control that implementation. The plan should also include testing.

  • Project plan assessment and status report. This report (developed from Step 1 of the seven-step process) evaluates the completeness and reasonableness of the project plan. It also indicates the status of the plan as well as the method for reporting status throughout the development effort.

Do Procedures

The following six tasks should be completed during the execution of this step:

  1. Profile the software project.

  2. Understand the software project’s risks.

  3. Select a testing technique.

  4. Plan unit testing and analysis.

  5. Build the test plan.

  6. Inspect the test plan.

Task 1: Profile the Software Project

Effective test planning can occur only when those involved understand the attributes of the software project being tested. Testers need more information than is normally contained in a software development plan. Also, because testers should begin the testing process early in the developmental process, the project plan may not be complete when planning begins.

This task can be divided into the following two subtasks:

  1. Conduct a walkthrough of the customer/user areas.

  2. Develop a profile of the software project.

Conducting a Walkthrough of the Customer/User Area

Many, including this author, believe that testers work for the customers/users of the project, particularly if the scope of testing is greater than simply testing against specifications. And because testers represent the customer/users, they should have access to the users of the system.

A walkthrough of the customer/user area serves two purposes: to give testers an overview of the totality of activities users perform, and to give them an appreciation of how the software will be used. For example, if your organization is building a software system to calculate employee tax deductions, testers should understand the totality of payroll responsibility so that they can put the tax deductions in the proper perspective of the overall payroll responsibilities.

Testers can gain an understanding of user responsibilities in two ways. The first is an orientation to the user area. This orientation should focus on user responsibilities before data is entered into the software project and on the types of processing or uses of software deliverables during and at the conclusion of the software development process. Second, testers need to follow the major business transactions as they move through the user area. If possible, it is helpful for testers to sit in the user area and observe activity for several hours. By doing so, testers can gain insight into busy and slack times in the user area, problems that users have in processing business transactions, and the frequency of transaction processing events.

Developing a Profile of the Software Project

The primary objective of understanding the business responsibilities of the user(s) is to develop a profile for the software project. Some of the needed profile information can be collected by the developmental project team, some can be collected by testers conducting a walkthrough of the user area, and other profile information can be gathered directly from the user or other stakeholders in the software project.

The following is the profile information that is helpful in preparing for test planning:

  • Project objectives. The test team should understand the high-level project objectives. Without this information, team members may make problematic test decisions. For example, if the user wants a particular screen to be easy to use but testers are not aware of that objective, they could conduct tests on the screen but never look at the “easy-to-use” attribute.

  • Development process. The type of development process used to implement the software can have a significant impact on the test plan. For example, the software could be developed in-house or outsourced, and the process could follow a waterfall, spiral, or agile model.

    An important component of the development process is the maturity of that process. Testers can expect much higher defect rates at lower levels of process maturity than at higher levels.

  • Customer/users. Testers need to identify the software’s customers and users. For example, in an insurance company, the customer might be the organizational function responsible for writing property damage, whereas the users are the independent agents who write that type of insurance. If testers know the needs and competencies of users/customers, they can develop tests to assess whether the software performs appropriately.

  • Project deliverables. Just as it is important to know the deliverables to be produced by the test team, the testers need to know the deliverables produced by the project, including both interim and final deliverables. For example, in a payroll system, an interim deliverable may be the calculation of withholding tax, whereas a final deliverable would be a paycheck. Just as objectives help focus the tester on the real needs of the software system, the deliverables focus the tester on what the system is attempting to accomplish.

  • Cost/schedule. Resources for testing should be included in a project’s budget and schedule. In the preceding walkthrough step, there were tasks for the tester to validate the project costs and schedule through status reports. For the profile, both the costs and the schedule must be defined in much more detail. The testers need to know checkpoints, and they need to know how resources will be allocated.

  • Project constraints. Every software project should have a list of constraints, or conditions that will limit the type of software system produced. For example, a constraint in the payroll system may be the implementation of a new tax withholding table. Other constraints include expected turnaround time, volumes of transactions to be processed in a specific time period, skill sets of individuals entering data, relationships with other organizations, and the number of staff assigned to the project. These constraints can affect the extensiveness of testing, as well as conditions that need to be evaluated such as the probability of implementing a new tax withholding table on January 1 with existing developmental staff.

  • Developmental staff competency. The testers need to know the competency of the developmental staff. For example, with a relatively new and inexperienced staff, the testers might expect a much higher defect rate than with a very experienced staff.

  • Legal/industry issues. Software projects need to comply with governmental laws and industry standards and guidelines. For example, when building patient systems in hospitals, developers should be aware of laws such as HIPAA (the Health Insurance Portability and Accountability Act of 1996), as well as guidelines issued by leading hospital associations.

  • Implementation technology. Systems developed using proven technologies tend to have lower defect rates than systems built using cutting-edge technology. For example, systems built around wireless technology may have to pioneer the use and control of that technology. On the other hand, the building of batch systems has been refined over many years, and testers can be reasonably confident that batch systems developed today will have minimal problems in development.

  • Databases built/used. Testers need to know the types of databases that will be used by the software system. These databases can be built specifically for that system or they can be existing databases. In establishing a software-testing environment, the testers will have to use controlled versions of databases or create equivalent databases for test purposes.

  • Interfaces to other systems. The more systems interfaced by the system being tested, the greater the test effort. Testers must ensure that proper coordination exists among all the systems affected by the software being developed. Testers should develop an inventory of systems directly interfaced as well as systems that will use the data but may not be directly interfaced. For example, if the system being tested creates a database that is used by many individuals on their own PCs, there may be issues with accounting cut-offs, which, if not controlled, would enable users to produce accounting information different than that produced by the software system that created the database.

  • Project statistics. Testers should attempt to gather as many statistics about the software system being developed as practical. For example, knowing the number of transactions, the periods in which those transactions exist, the number of users on the system, as well as any historical data on the application (such as problems encountered, downtime occurring, customer complaints, and so forth) will help testers develop appropriate test data.

Task 2: Understand the Project Risks

The test factors describe the broad objectives of testing; associated with each factor are the risks/concerns that testers need to evaluate to ensure the objectives identified by that factor have been achieved. The following discussion (and Table 8-2) delineates the types of system characteristics that testers should evaluate to determine whether the test factors have been met. (Note: Testers should customize these factors for their specific system.)

Table 8-2. Testing concerns matrix.

  • Reliability. Requirements: Tolerances established. Design: Data integrity controls designed. Program: Data integrity controls implemented. Test: Manual, regression, and functional testing. Operation: Accuracy and completeness of installation verified. Maintenance: Update accuracy requirements.

  • Authorization. Requirements: Authorization rules defined. Design: Authorization rules designed. Program: Authorization rules implemented. Test: Compliance testing. Operation: Data changes during installation prohibited. Maintenance: Preserve authorization rules.

  • File Integrity. Requirements: File integrity requirements defined. Design: File integrity controls designed. Program: File integrity controls implemented. Test: Functional testing. Operation: Integrity of production files verified. Maintenance: Preserve file integrity.

  • Audit Trail. Requirements: Reconstruction requirements defined. Design: Audit trail designed. Program: Implement audit trail. Test: Functional testing. Operation: Installation audit trail recorded. Maintenance: Update audit trail.

  • Continuity of Processing. Requirements: Impact of failure defined. Design: Contingency plan designed. Program: Write contingency plan and procedures. Test: Recovery testing. Operation: Integrity of previous system ensured. Maintenance: Update contingency plan.

  • Service Level. Requirements: Desired service level defined. Design: Method to achieve service level designed. Program: Design system to achieve service level. Test: Stress testing. Operation: Fail-safe installation plan implemented. Maintenance: Preserve service level.

  • Access Control. Requirements: Access defined. Design: Access procedure designed. Program: Implement security procedures. Test: Compliance testing. Operation: Access during integration controlled. Maintenance: Preserve security.

  • Methodology. Requirements: Requirements comply with methodology. Design: Design complies with methodology. Program: Programs comply with methodology. Test: Compliance testing. Operation: Integration complies with methodology. Maintenance: Maintenance complies with methodology.

  • Correctness. Requirements: Functional specifications defined. Design: Design conforms to requirements. Program: Programs conform to design. Test: Functional testing. Operation: Proper programs and data placed into production. Maintenance: Update requirements.

  • Ease of Use. Requirements: Usability specifications determined. Design: Design facilitates use. Program: Programs conform to design. Test: Manual support testing. Operation: Usability instructions disseminated. Maintenance: Preserve ease of use.

  • Maintainable. Requirements: Maintenance specifications determined. Design: Design is maintainable. Program: Programs are maintainable. Test: Inspection. Operation: Documentation completed. Maintenance: Preserve maintainability.

  • Portable. Requirements: Portability needs determined. Design: Design is portable. Program: Programs conform to design. Test: Disaster testing. Operation: Documentation completed. Maintenance: Preserve portability.

  • Coupling. Requirements: Systems interface defined. Design: Interface design complete. Program: Programs conform to design. Test: Functional and regression testing. Operation: Interface coordinated. Maintenance: Ensure proper interface.

  • Performance. Requirements: Performance criteria established. Design: Design achieves criteria. Program: Programs achieve criteria. Test: Compliance testing. Operation: Integration performance monitored. Maintenance: Preserve level of performance.

  • Ease of Operation. Requirements: Operational needs defined. Design: Communicate needs to operations. Program: Develop operating procedures. Test: Operations testing. Operation: Operating procedures implemented. Maintenance: Update operating procedures.

  • Reliability

    • The level of accuracy and completeness expected in the operational environment is established.

    • Data integrity controls are implemented in accordance with the design.

    • Manual, regression, and functional tests are performed to ensure the data integrity controls work.

    • The completeness of the system installation is verified.

    • The accuracy requirements are maintained as the applications are updated.

  • Authorization

    • The rules governing the authorization of transactions are defined.

    • The application is designed to identify and enforce the authorization rules.

    • The application implements the authorization rules.

    • Unauthorized data changes are prohibited during the installation process.

    • The method and rules for authorization are preserved during maintenance.

  • File integrity

    • Requirements for file integrity are defined.

    • The design provides for the controls to ensure the integrity of the file.

    • The specified file integrity controls are implemented.

    • The file integrity functions are tested to ensure they perform properly.

    • The integrity of the files is preserved during the maintenance phase.

  • Audit trail

    • The requirements to reconstruct processing are defined.

    • The audit trail requirements are incorporated into the system.

    • The audit trail functions are tested to ensure the appropriate data is saved.

    • The audit trail of installation events is recorded.

    • The audit trail requirements are updated during systems maintenance.

  • Continuity-of-processing

    • The impact of each system failure is defined.

    • The contingency plan and procedures have been written.

    • Recovery testing verifies that the contingency plan functions properly.

    • The integrity of the previous systems is ensured until the integrity of the new system is verified.

    • The contingency plan is updated and tested as system requirements change.

  • Service level

    • The desired service level for all aspects of the system is defined.

    • The method to achieve the predefined service levels is incorporated into the system design.

    • The programs and manual systems are designed to achieve the specified service level.

    • Stress testing is conducted to ensure that the system can achieve the desired service level when both normal and above normal volumes of data are processed.

    • A fail-safe plan is used during installation to ensure service is not disrupted.

    • The predefined service level is preserved as the system is maintained.

  • Access control

    • Access to the system is defined.

    • The procedures to enforce the access rules are designed.

    • The defined security procedures are implemented.

    • Compliance tests are utilized to ensure that the security procedures function in a production environment.

    • Access to the computer resources is controlled during installation.

    • The procedures controlling access are preserved as the system is updated.

  • Methodology

    • The system requirements are defined and documented in compliance with the development methodology.

    • The system design is executed in accordance with the design methodology.

    • The programs are constructed and documented in compliance with the programming methodology.

    • Testing is performed in compliance with the test methodology.

    • The integration of the application system in a production environment complies with the installation methodology.

    • System maintenance is performed in compliance with the maintenance methodology.

  • Correctness

    • The user has fully defined the functional specifications.

    • The developed design conforms to the user requirements.

    • The developed program conforms to the system design specifications.

    • Functional testing ensures that the requirements are properly implemented.

    • The proper programs and data are placed into production.

    • The user-defined requirement changes are properly implemented in the operational system.

  • Ease-of-use

    • The usability specifications for the application system are defined.

    • The system design attempts to optimize the usability of the implemented requirements.

    • The program optimizes ease of use by conforming to the design.

    • The relationship between the manual and automated system is tested to ensure the application is easy to use.

    • The usability instructions are properly prepared and disseminated to the appropriate individuals.

    • As the system is maintained, its ease of use is preserved.

  • Maintainable

    • The desired level of maintainability is specified.

    • The design is developed to achieve the desired level of maintainability.

    • The program is coded and designed to achieve the desired level of maintainability.

    • The system is inspected to ensure that it is maintainable.

    • The system documentation is complete.

    • Maintainability is preserved as the system is updated.

  • Portable

    • The portability in moving the system among hardware or software components is determined.

    • The design is prepared to achieve the desired level of portability.

    • The program is designed and implemented to conform to the portability design.

    • The system is subjected to a disaster test to ensure that it is portable.

    • The documentation is complete to facilitate portability.

    • Portability is preserved as the system is maintained.

  • Coupling

    • The interface between the system being developed and related systems is defined.

    • The design takes into account the interface requirements.

    • The program conforms to the interface design specifications.

    • Functional and regression testing are performed to ensure that the interface between systems functions properly.

    • The interface between systems is coordinated prior to the new system being placed into production.

    • The interface between systems is preserved during the systems maintenance process.

  • Performance

    • The performance criteria are established.

    • The design specifications ensure that the desired level of performance is achieved.

    • The program is designed and implemented to achieve the performance criteria.

    • The system is compliance tested to ensure that the desired performance levels are achieved.

    • System performance is monitored during the installation phase.

    • The desired level of performance is preserved during system maintenance.

  • Ease-of-operation

    • The operational needs are incorporated into the system design.

    • The operational procedures are tested to ensure they achieve the desired level of operational usability.

    • The operating procedures are implemented during the installation phase.

    • Changes to the operational system are updated in the operating procedures.

The test team should investigate the system characteristics to evaluate the potential magnitude of the risk, as follows:

  1. Define what meeting project objectives means. These are the objectives to be accomplished by the project team.

  2. Understand the core business areas and processes. Not all information systems are created equal. Systems that support mission-critical business processes are clearly more important than systems that support mission-support functions (usually administrative), although these, too, are necessary. A focus on core business areas and processes is essential to assessing the impact of a problem on the enterprise and to establishing priorities.

  3. Assess the severity of potential failures. This step must be performed for each core business area and its associated processes.

  4. Identify the system components:

    • Links to core business areas or processes

    • Platforms, languages, and database management systems

    • Operating system software and utilities

    • Telecommunications

    • Internal and external interfaces

    • Owners

    • Availability and adequacy of source code and associated documentation

  5. Identify, prioritize, and estimate the test resources required. Achieving test objectives requires significant investment in two vital resources: money and people. Accordingly, the organization has to make informed choices about priorities by assessing the costs, benefits, and risks of competing projects. In some instances, it may be necessary to defer or cancel new system development efforts and redirect the freed resources to complete and test a higher-priority project. (A simple severity-scoring sketch follows this list.)

  6. Develop validation strategies and testing plans for all converted or replaced systems and their components. The testing and validation of converted or replaced systems require a phased approach. Consider the specific objectives for the following phases:

    • Phase 1, unit testing. This phase focuses on functional and compliance testing of a single application or software module.

    • Phase 2, integration testing. This phase tests the integration of related software modules and applications.

    • Phase 3, system testing. This phase tests all the integrated components of a system.

    • Phase 4, acceptance testing. This phase tests that the system will function with live, operational data.

  7. Identify and acquire automated test tools and write test scripts. Regardless of the strategy selected, the scope of the testing and validation effort requires careful planning and use of automated tools, including test case analyzers and test data libraries.

  8. Define requirements for the test facility. Organizations should establish an adequate test environment, separate from production, to avoid potential contamination of or interference with the operation of production systems.

  9. Address implementation schedule issues. This step includes:

    1. Selecting conversion facilities

    2. Determining the time needed to put converted systems into production

    3. Converting backup and archival data

  10. Address interface and data exchange issues. The test team should consider the following issues when conducting this step:

    1. Development of a model showing the internal and external dependency links among enterprise core business areas, processes, and information systems

    2. Notification of all outside data exchange entities

    3. Data bridges and filters

    4. Contingency plans if no data is received from an external source

    5. The validation process for incoming external data

    6. Contingency plans for invalid data

  11. Evaluate contingency plans. These should be realistic contingency plans, including the development and activation of manual or contract procedures to ensure the continuity of core business processes.

  12. Identify vulnerable parts of the system and processes operating outside the information resource management area. This includes telephone and network switching equipment and building infrastructure systems. Testers should develop a separate plan for their testing.
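Steps 3 and 5 lend themselves to a simple scoring model. The sketch below is one illustrative way to rank systems by potential failure severity so that test resources can be directed to the highest-risk components first; the systems, weights, and scores are assumptions invented for the example, not part of the methodology.

```python
# Hypothetical sketch: rank systems by potential failure severity so that
# test resources can be assigned to the highest-risk components first.
# The systems, weights, and scores below are illustrative only.

from dataclasses import dataclass

@dataclass
class SystemRisk:
    name: str
    mission_critical: bool   # supports a core business process?
    failure_impact: int      # 1 (minor) .. 5 (catastrophic)
    change_volume: int       # 1 (stable) .. 5 (changing rapidly)

def severity_score(risk: SystemRisk) -> int:
    """Weight impact most heavily; double the score for mission-critical systems."""
    base = risk.failure_impact * 3 + risk.change_volume
    return base * 2 if risk.mission_critical else base

systems = [
    SystemRisk("Order entry", mission_critical=True, failure_impact=5, change_volume=4),
    SystemRisk("Payroll tax withholding", mission_critical=True, failure_impact=4, change_volume=2),
    SystemRisk("Internal phone directory", mission_critical=False, failure_impact=1, change_volume=1),
]

for s in sorted(systems, key=severity_score, reverse=True):
    print(f"{s.name:30} severity={severity_score(s)}")
```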

Task 3: Select a Testing Technique

Testing techniques should be selected based on their ability to accomplish test objectives. The technique selection process begins with selecting the test factor. Once the factor has been selected, testers know in which life cycle phase the technique will be utilized.

Both structural and functional testing can be accomplished using a predetermined set of techniques. Once the technique has been selected, the test method for implementing that technique needs to be determined. The test method can be either dynamic or static. Dynamic testing determines whether the system functions properly while the programs are executing; static testing examines the system without executing the programs.

The following describes the generic techniques testers can use for structural and functional testing.

Structural System Testing Techniques

The objective of structural testing is to ensure that the system is structurally sound. It attempts to determine that the technology has been used properly and that when all the component parts are assembled they function as a cohesive unit. The techniques are not designed to ensure that the application system is functionally correct but rather that it is structurally sound. The following techniques are briefly described in Table 8-3 and then individually explained:

  • Stress testing

  • Execution testing

  • Recovery testing

  • Operations testing

  • Compliance testing

  • Security testing

Table 8-3. Structural testing techniques.

  • Stress. Description: System performs with expected volumes. Examples: Sufficient disk space allocated; communication lines adequate.

  • Execution. Description: System achieves desired level of proficiency. Examples: Transaction turnaround time adequate; software/hardware use optimized.

  • Recovery. Description: System can be returned to an operational status after a failure. Examples: Induce failure; evaluate adequacy of backup data.

  • Operations. Description: System can be executed in a normal operational status. Examples: Determine that the system can be run using the documentation; JCL adequate.

  • Compliance. Description: System is developed in accordance with standards and procedures. Examples: Standards followed; documentation complete.

  • Security. Description: System is protected in accordance with its importance to the organization. Examples: Access denied; procedures in place.

Stress Testing

Stress testing is designed to determine whether the system can function when subjected to larger volumes than normally would be expected. Areas stressed include input transactions, internal tables, disk space, output, communications, and computer capacity. If the application functions adequately under stress testing, testers can assume that it will function properly with normal volumes of work.

Objectives

Specific objectives of stress testing include

  • Normal or above-normal volumes of transactions can be processed within the expected time frame.

  • System capacity, including communication lines, has sufficient resources to meet expected turnaround times.

  • Users can perform their assigned tasks and maintain the desired turnaround time.

How to Use Stress Testing

Stress testing should simulate the production environment as closely as possible. Online systems should be stress tested by having people enter transactions at a normal or above-normal pace. Batch systems can be stress tested with large input batches. Error conditions should be included in tested transactions. Transactions for use in stress testing can be obtained from one of the following three sources:

  • Test data generators

  • Test transactions created by the test group

  • Transactions previously processed in the production environment

In stress testing, the system should be run as it would in the production environment. Operators should use standard documentation, and the people entering transactions or working with the system should be the clerical personnel who will work with the system after it goes into production. Online systems should be tested for an extended period of time, and batch systems tested using more than one batch of transactions.
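The following is a minimal sketch of an automated stress run, assuming a hypothetical process_transaction function as the unit of work; the transaction volume, number of simulated clerks, and turnaround limit are likewise assumptions chosen for illustration.

```python
# Illustrative stress-test sketch: submit above-normal transaction volumes
# from several concurrent "clerks" and verify average turnaround stays within
# an assumed limit. process_transaction and all thresholds are stand-ins.

import time
import random
from concurrent.futures import ThreadPoolExecutor

def process_transaction(txn: dict) -> bool:
    """Stand-in for the application call being stressed."""
    time.sleep(random.uniform(0.001, 0.01))   # simulate work
    return True

def run_stress_test(volume: int = 5_000, clerks: int = 20, max_avg_seconds: float = 0.05):
    transactions = [{"id": i, "amount": random.randint(1, 500)} for i in range(volume)]
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=clerks) as pool:
        results = list(pool.map(process_transaction, transactions))
    elapsed = time.perf_counter() - start
    assert all(results), "some transactions failed under load"
    print(f"{volume} transactions via {clerks} clerks in {elapsed:.2f}s")
    assert elapsed / volume <= max_avg_seconds, "average turnaround exceeded the limit"

if __name__ == "__main__":
    run_stress_test()
```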

When to Use Stress Testing

Stress testing should be used when there is uncertainty regarding the volume of work the application system can handle without failing. Stress testing is most common with online applications because it is difficult to simulate heavy-volume transactions using the other testing techniques. The disadvantage of stress testing is the amount of time it takes to prepare for the test, as well as the number of resources consumed during the execution of the test. Testers should weigh these costs against the risk of not identifying volume-related failures until the application is placed into an operational mode.

Execution Testing

Execution testing is designed to determine whether the system achieves the desired level of proficiency in a production status. Execution testing can verify response and turnaround times, as well as design performance. The execution of a system can be tested in whole or in part, using the actual system or a simulated model.

Objectives

Specific objectives of execution testing include:

  • Determining the performance of the system structure

  • Verifying the optimum use of hardware and software

  • Determining response time to online requests

  • Determining transaction processing turnaround time

How to Use Execution Testing

Testers can conduct execution testing in any phase of the system development life cycle. The testing can evaluate a single aspect of the system—for example, a critical routine in the system—or the ability of the proposed structure to satisfy performance criteria. Execution testing can be performed in any of the following manners:

  • Using hardware and software monitors

  • Using a simulation model

  • Creating a quick and dirty program(s) to evaluate the approximate performance of a completed system

The earlier the technique is used, the greater the likelihood that the completed application will meet the performance criteria.
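As a hedged illustration, the sketch below times a stand-in critical_routine over many calls and compares the 95th-percentile response time against an assumed performance criterion; the routine, sample size, and 50 ms threshold are all placeholders.

```python
# Illustrative execution-test sketch: time a critical routine over many calls
# and compare percentile response times against hypothetical performance criteria.

import time
import statistics

def critical_routine(n: int) -> int:
    """Stand-in for the routine whose performance is being evaluated."""
    return sum(i * i for i in range(n))

def measure(samples: int = 1_000) -> None:
    timings = []
    for _ in range(samples):
        start = time.perf_counter()
        critical_routine(10_000)
        timings.append(time.perf_counter() - start)
    p95 = statistics.quantiles(timings, n=20)[-1]   # 95th percentile
    print(f"median={statistics.median(timings)*1000:.2f} ms, p95={p95*1000:.2f} ms")
    assert p95 < 0.050, "95th-percentile response time exceeds the assumed 50 ms criterion"

if __name__ == "__main__":
    measure()
```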

When to Use Execution Testing

Execution testing should be used early in the development process. Although there is value in knowing that the completed application does not meet performance criteria, if that assessment is not known until the system is operational, it may be too late or too costly to make the necessary modifications. Therefore, execution testing should be used at that point in time when the results can be used to affect or change the system structure.

Recovery Testing

Recovery is the ability to restart operations after the integrity of the application has been lost. The process normally involves reverting to a point where the integrity of the system is known, and then reprocessing transactions up until the point of failure. The time required to recover operations is affected by the number of restart points, the volume of applications run on the computer center, the training and skill of the people conducting the recovery operation, and the tools available. The importance of recovery will vary from application to application.

Objectives

Recovery testing is used to ensure that operations can be continued after a disaster. Recovery testing not only verifies the recovery process, but also the effectiveness of the component parts of that process. Specific objectives of recovery testing include the following:

  • Adequate backup data is preserved.

  • Backup data is stored in a secure location.

  • Recovery procedures are documented.

  • Recovery personnel have been assigned and trained.

  • Recovery tools have been developed.

How to Use Recovery Testing

Testers should conduct recovery testing in two modes. First, they should assess the procedures, methods, tools, and techniques. Then, after the system has been developed, they should introduce a failure into the system and test the ability to recover.

Evaluating the procedures and documentation is a process that primarily calls for judgment and checklists. The actual recovery test may involve off-site facilities and alternate processing locations. Normally, procedural testing is conducted by skilled systems analysts, professional testers, or management personnel. Testing the actual recovery procedures should be performed by computer operators and other clerical personnel who would, in fact, be involved had it been an actual disaster instead of a test disaster.

A simulated disaster is usually performed on one aspect of the application system. For example, the test may be designed to determine whether people using the system can continue processing and recover computer operations after computer operations cease. While several aspects of recovery need to be tested, it is better to test one segment at a time rather than induce multiple failures at a single time. When multiple failures are induced, it may be more difficult to pinpoint the cause of the problem than when only a single failure is induced.

It is preferable not to advise system participants when a disaster test will be conducted. When people are prepared, they may perform the recovery differently than they would if the failure occurred at an unexpected time. Even if the participants know that recovery may be part of the test, I recommend that you not let them know specifically when the test will occur or what type of recovery will be necessary.
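One narrow aspect of recovery can also be rehearsed in an automated way. The sketch below induces a failure partway through a batch and verifies that processing can restart from the last checkpoint without losing or duplicating transactions; the checkpoint file and batch logic are hypothetical stand-ins for the application's real restart mechanism.

```python
# Illustrative recovery-test sketch: induce a failure mid-batch, then verify
# processing resumes from the last checkpoint with no lost or duplicated work.

import json
import os

CHECKPOINT = "checkpoint.json"

def run_batch(transactions, fail_at=None):
    """Process transactions, persisting a checkpoint after each one."""
    done = []
    if os.path.exists(CHECKPOINT):
        with open(CHECKPOINT) as fh:
            done = json.load(fh)
    for txn in transactions[len(done):]:
        if fail_at is not None and txn == fail_at:
            raise RuntimeError("induced failure")      # simulated disaster
        done.append(txn)
        with open(CHECKPOINT, "w") as fh:
            json.dump(done, fh)
    return done

def test_recovery():
    txns = list(range(10))
    try:
        run_batch(txns, fail_at=6)
    except RuntimeError:
        pass                                           # failure induced as planned
    recovered = run_batch(txns)                        # restart from checkpoint
    assert recovered == txns, "recovery lost or duplicated transactions"
    os.remove(CHECKPOINT)

if __name__ == "__main__":
    test_recovery()
    print("batch recovered from induced failure without data loss")
```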

When to Use Recovery Testing

Recovery testing should be performed whenever the user of the application states that the continuity of operation is essential to the proper functioning of the user area. The user should estimate the potential loss associated with the inability to recover operations over various time spans—for example, the inability to recover within five minutes, one hour, eight hours, and one week. The potential loss should determine both the amount of resources devoted to disaster planning and the amount devoted to recovery testing.

Operations Testing

After an application has been tested, it is integrated into the operating environment. At this point, the application will be executed using the normal operations staff, procedures, and documentation. Operations testing is designed to verify prior to production that the operating procedures and staff can properly execute the application.

Objectives

Specific objectives of operations testing include

  • Determining the completeness of computer operator documentation

  • Ensuring that the necessary support mechanisms, such as job control language, have been prepared and function properly

  • Evaluating the completeness of operator training

How to Use Operations Testing

Operations testing evaluates both the process and the execution of the process. During the requirements phase, operational requirements can be evaluated to determine their reasonableness and completeness. During the design phase, the operating procedures should be designed and evaluated.

The execution of operations testing can normally be performed in conjunction with other tests. However, if operations testing is included, the operators should not be prompted or helped by outside parties. The test needs to be executed as though it were part of normal computer operations so that it adequately evaluates the system’s effectiveness in an operational environment.

When to Use Operations Testing

Operations testing should occur prior to placing any application into a production status. If the application is to be tested in a production-type setting, operations testing can piggyback that process at a very minimal cost.

Compliance Testing

Compliance testing verifies that the application was developed in accordance with IT standards, procedures, and guidelines. The methodologies are used to increase the probability of success, to enable the transfer of people in and out of the project with minimal cost, and to increase the maintainability of the application system.

Objectives

Specific objectives of compliance testing include the following:

  • Determining that systems development and maintenance methodologies are followed

  • Ensuring compliance to departmental standards, procedures, and guidelines

  • Evaluating the completeness and reasonableness of system documentation

How to Use Compliance Testing

Compliance testing requires that the prepared document/program be compared to organizational standards. The most effective method for compliance testing is the inspection process.
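Some compliance checks can be automated to supplement inspections. The sketch below assumes a hypothetical organizational standard (a required module header comment and docstrings on public functions) and a src directory; it illustrates the idea but is not a substitute for a full inspection against departmental standards.

```python
# Illustrative compliance-check sketch: verify that each Python module carries
# a required header comment and that public functions have docstrings.
# The standards checked and the "src" directory are assumptions.

import ast
import pathlib

REQUIRED_HEADER = "# Module:"          # assumed organizational standard

def check_module(path: pathlib.Path) -> list[str]:
    findings = []
    source = path.read_text()
    if not source.startswith(REQUIRED_HEADER):
        findings.append(f"{path}: missing standard header comment")
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef) and not node.name.startswith("_"):
            if ast.get_docstring(node) is None:
                findings.append(f"{path}:{node.lineno} public function "
                                f"'{node.name}' lacks a docstring")
    return findings

if __name__ == "__main__":
    for module in pathlib.Path("src").rglob("*.py"):
        for finding in check_module(module):
            print(finding)
```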

When to Use Compliance Testing

The type of testing conducted varies based on the phase of the development life cycle. However, it may be more important to test adherence to the process during the requirements phase than at later stages because it is difficult to correct applications when requirements are not adequately documented.

Security Testing

The level of security required depends on the risks associated with compromise or loss of information. Security testing is designed to evaluate the adequacy of protective procedures and countermeasures.

Objectives

Specific objectives of security testing include the following:

  • Determining that adequate attention has been devoted to identifying security risks

  • Determining that a realistic definition and enforcement of access to the system has been implemented

  • Determining that sufficient expertise exists to perform adequate security testing

  • Conducting reasonable tests to ensure that the implemented security measures function properly

How to Use Security Testing

Security testing is a highly specialized part of the test process. Most organizations can evaluate the reasonableness of security procedures to prevent the average perpetrator from penetrating the application. However, the highly skilled perpetrator using sophisticated techniques may use methods undetectable by novices designing security measures and/or testing those measures.

The first step is to identify the security risks and the potential loss associated with those risks. If either the loss is low or the penetration method routine, IT personnel can conduct the necessary tests. On the other hand, if either the risks are very high or the technology that might be used is sophisticated, specialized help should be acquired in conducting the security tests.

When to Use Security Testing

Security testing should be used when the information and/or assets protected by the application system are of significant value to the organization. The testing should be performed both before and after the system goes into an operational status. The extent of testing depends on the security risks, and the individual assigned to conduct the test should be selected based on the estimated sophistication that might be used to penetrate security.

Functional System Testing Techniques

Functional system testing is designed to ensure that the system requirements and specifications are achieved. The process normally involves creating test conditions to evaluate the correctness of the application. The following techniques are briefly described in Table 8-4 and then individually explained:

  • Requirements testing

  • Regression testing

  • Error-handling testing

  • Manual-support testing

  • Intersystems testing

  • Control testing

  • Parallel testing

Table 8-4. Functional testing techniques.

  • Requirements. Description: System performs as specified. Examples: Prove system requirements; compliance with policies and regulations.

  • Regression. Description: Verifies that anything unchanged still performs correctly. Examples: Unchanged system segments function; unchanged manual procedures correct.

  • Error Handling. Description: Errors can be prevented or detected, and then corrected. Examples: Error introduced into test; errors reentered.

  • Manual Support. Description: The people-computer interaction works. Examples: Manual procedures developed; people trained.

  • Intersystems. Description: Data is correctly passed from system to system. Examples: Intersystem parameters changed; intersystem documentation updated.

  • Control. Description: Controls reduce system risk to an acceptable level. Examples: File reconciliation procedures work; manual controls in place.

  • Parallel. Description: Old and new systems are run and the results compared to detect unplanned differences. Examples: Old and new systems can reconcile; operational status of old system maintained.

Requirements Testing

Requirements testing must verify that the system can perform correctly over a continuous period of time. The system can be tested throughout the life cycle, but it is difficult to test the reliability before the program becomes operational.

Objectives

Specific objectives of requirements testing include the following:

  • User requirements are implemented.

  • Correctness is maintained over extended processing periods.

  • Application processing complies with the organization’s policies and procedures.

  • Secondary user needs have been included, such as:

    • Security officer

    • Database administrator

    • Internal auditors

    • Comptroller

  • The system processes accounting information in accordance with procedures.

  • Systems process information in accordance with governmental regulations.

How to Use Requirements Testing

Requirements testing is primarily performed through the creation of test conditions and functional checklists. Test conditions are generalized during the requirements phase, and become more specific as the life cycle progresses.

Functional testing is more effective when the test conditions are created directly from user requirements. When test conditions are created from the system documentation instead, defects in that documentation will not be detected through testing; when they are created from sources other than the system documentation, defects introduced into the documentation can be detected.
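A minimal sketch of requirement-driven test conditions, written here with pytest, follows. The overtime-pay requirement, the gross_pay function, and the expected values are assumptions created for illustration; the point is that each test condition traces to the stated user requirement rather than to design documentation.

```python
# Illustrative requirements-test sketch: test conditions derived directly from
# a hypothetical user requirement -- "overtime is paid at 1.5 times the base
# rate for hours over 40" -- rather than from design documentation.

import pytest

def gross_pay(hours: float, rate: float) -> float:
    """Stand-in for the application function under test."""
    overtime = max(0.0, hours - 40.0)
    return (hours - overtime) * rate + overtime * rate * 1.5

# Each tuple is one test condition traced to the stated requirement.
@pytest.mark.parametrize("hours,rate,expected", [
    (40, 10.0, 400.0),     # boundary: no overtime at exactly 40 hours
    (41, 10.0, 415.0),     # first overtime hour at time-and-a-half
    (0, 10.0, 0.0),        # no hours worked
    (45, 20.0, 950.0),     # overtime at a different base rate
])
def test_overtime_requirement(hours, rate, expected):
    assert gross_pay(hours, rate) == pytest.approx(expected)
```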

When to Use Requirements Testing

The process should begin in the requirements phase and continue through every phase of the life cycle. It is not a question of whether requirements must be tested but, rather, of the extent and methods used.

Regression Testing

One of the attributes that has plagued IT professionals for years is the cascading effect of making changes to an application system. One segment of the system is developed and thoroughly tested, and then a change is made to another part of the system, which has a disastrous effect on the tested portion. Regression testing retests previously tested segments to ensure that they still function properly after a change has been made to another part of the application.

Objectives

Specific objectives of regression testing include the following:

  • Determining that system documentation remains current

  • Determining that system test data and conditions remain current

  • Determining that previously tested system functions perform properly after changes are introduced

How to Use Regression Testing

Regression testing is retesting unchanged segments of the application system. It normally involves rerunning tests that have been previously executed to ensure that the same results can be achieved. While the process is simple in that the test transactions have been prepared and the results known, unless the process is automated it can be a very time-consuming and tedious operation. It is also one in which the cost/benefit needs to be carefully evaluated or large amounts of effort can be expended with minimal payback.
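The sketch below illustrates one common way to automate this: capture a baseline of known-good results, then rerun the same transactions and compare. The calculate function, transaction values, and baseline file name are assumptions for the example.

```python
# Illustrative regression-test sketch: rerun previously executed transactions
# and compare the new results with a stored baseline.

import json
import os

BASELINE_FILE = "regression_baseline.json"

def calculate(txn: dict) -> float:
    """Stand-in for the unchanged application logic being regression tested."""
    return round(txn["quantity"] * txn["unit_price"] * (1 + txn["tax_rate"]), 2)

# Previously executed test transactions (illustrative values).
transactions = [
    {"quantity": 3, "unit_price": 9.99, "tax_rate": 0.07},
    {"quantity": 1, "unit_price": 250.00, "tax_rate": 0.00},
    {"quantity": 12, "unit_price": 1.25, "tax_rate": 0.07},
]

results = [calculate(t) for t in transactions]

if not os.path.exists(BASELINE_FILE):
    # First run: capture the known-good results as the baseline.
    with open(BASELINE_FILE, "w") as fh:
        json.dump(results, fh)
    print("baseline captured")
else:
    # Later runs: the same transactions must still produce the baseline results.
    with open(BASELINE_FILE) as fh:
        baseline = json.load(fh)
    for txn, expected, actual in zip(transactions, baseline, results):
        assert actual == expected, f"regression: {txn} expected {expected}, got {actual}"
    print(f"{len(transactions)} previously passed transactions still match the baseline")
```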

When to Use Regression Testing

Regression testing should be used when there is a high risk that new changes may affect unchanged areas of the application system. In the developmental process, regression testing should occur after a predetermined number of changes are incorporated into the application system. In the maintenance phase, regression testing should be conducted if the potential loss that could occur due to affecting an unchanged portion is very high. The determination as to whether to conduct regression testing should be based on the significance of the loss that could occur as a result of improperly tested applications.

Error-Handling Testing

One of the characteristics that differentiate automated from manual systems is the predetermined error-handling feature. Manual systems can deal with problems as they occur, but automated systems must preprogram error handling. In many instances, the completeness of error handling affects the usability of the application. Error-handling testing determines the ability of the application system to properly process incorrect transactions.

Objectives

Specific objectives of error-handling testing include:

  • Determining that all reasonably expected error conditions are recognizable by the application system

  • Determining that the accountability for processing errors has been assigned and that the procedures provide a high probability that the error will be corrected

  • Determining that reasonable control is maintained during the correction process

How to Use Error-Handling Testing

Error-handling testing requires a group of knowledgeable people to anticipate what can go wrong with the application system. Most other forms of testing involve verifying that the application system conforms to requirements. Error-handling testing uses exactly the opposite concept.

A successful method for developing error conditions is to have IT staff, users, and auditors brainstorm what might go wrong with the application. The totality of their thinking must then be organized by application function so that a logical set of test transactions can be created. Without this type of synergistic interaction, it is difficult to develop a realistic body of problems prior to production.

Error-handling testing should test the introduction of the error, the processing of the error, the control condition, and the reentry of the condition properly corrected.
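A minimal sketch of this approach follows, assuming a hypothetical validate_order function: each brainstormed error condition is introduced, the expected rejection is checked, and the corrected transaction is reentered.

```python
# Illustrative error-handling test sketch: feed deliberately invalid
# transactions to a hypothetical validate_order() function, confirm each is
# rejected with a usable message, then confirm the corrected transaction is
# accepted on reentry.

def validate_order(order: dict) -> list[str]:
    """Stand-in for application validation; returns a list of error messages."""
    errors = []
    if order.get("quantity", 0) <= 0:
        errors.append("quantity must be positive")
    if not order.get("customer_id"):
        errors.append("customer id is required")
    if order.get("ship_date", "") < order.get("order_date", ""):
        errors.append("ship date precedes order date")
    return errors

# Error conditions gathered by brainstorming, organized by application function.
invalid_orders = [
    ({"customer_id": "C1", "quantity": 0, "order_date": "2024-01-02", "ship_date": "2024-01-05"},
     "quantity must be positive"),
    ({"customer_id": "", "quantity": 5, "order_date": "2024-01-02", "ship_date": "2024-01-05"},
     "customer id is required"),
    ({"customer_id": "C1", "quantity": 5, "order_date": "2024-01-05", "ship_date": "2024-01-02"},
     "ship date precedes order date"),
]

for order, expected_error in invalid_orders:
    assert expected_error in validate_order(order), f"error not detected: {expected_error}"

# Reentry: the corrected transaction should now pass.
corrected = {"customer_id": "C1", "quantity": 5,
             "order_date": "2024-01-02", "ship_date": "2024-01-05"}
assert validate_order(corrected) == []
print("all brainstormed error conditions detected; corrected reentry accepted")
```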

When to Use Error-Handling Testing

Error testing should occur throughout the system development life cycle. At all points in the developmental process the impact from errors should be identified and appropriate action taken to reduce those errors to an acceptable level. Error-handling testing assists in the error management process of systems development and maintenance. Some organizations use auditors, quality assurance, or professional testing personnel to evaluate error processing.

Manual-Support Testing

The manual part of the system requires the same attention to testing as does the automated segment. Although the timing and testing methods may differ, the objectives of manual testing remain the same as testing the automated segment of the system.

Objectives

Specific objectives of manual-support testing include the following:

  • Verifying that the manual-support procedures are documented and complete

  • Determining that manual-support responsibility has been assigned

  • Determining that the manual-support personnel are adequately trained

  • Determining that the manual support and the automated segment are properly interfaced

How to Use Manual-Support Testing

Manual testing involves first the evaluation of the adequacy of the process, and second, the execution of the process. The process itself can be evaluated in all phases of the development life cycle. Rather than preparing and entering test transactions themselves, testers can have the actual clerical and supervisory people prepare, enter, and use the results of processing from the application system.

Manual-support testing normally involves several iterations of the process. Testing the people side of processing requires testing the interface between people and the application system. This means entering transactions, getting the results back from that processing, and taking additional action based on the information received, until all aspects of the manual/computer interface have been adequately tested.

Manual-support testing should occur without the assistance of the systems personnel. The manual-support group should operate using the training and procedures provided them by the systems personnel. However, the results should be evaluated by the systems personnel to determine if they have been adequately performed.

When to Use Manual-Support Testing

Although manual-support testing should be conducted throughout the development life cycle, extensive manual-support testing is best done during the installation phase so that clerical personnel do not become involved with the new system until immediately prior to its entry into operation. This avoids the confusion of knowing two systems and not being certain which rules to follow. During the maintenance and operation phases, manual-support testing may involve only providing people with instructions on the changes and then verifying that they understand the new procedures.

Intersystem Testing

Application systems are frequently interconnected with other application systems. The interconnection may involve data coming into the system from another application, data going out to another application, or both. Frequently, multiple applications—sometimes called cycles or functions—are involved. For example, there could be a revenue cycle that interconnects all the income-producing applications, such as order entry, billing, receivables, shipping, and returned goods. Intersystem testing is designed to ensure that the interconnection between applications functions correctly.

Objectives

Specific objectives of intersystem testing include the following:

  • Determining that the proper parameters and data are correctly passed between applications

  • Ensuring that proper coordination and timing of functions exists between the application systems

  • Determining that the documentation for the involved systems is accurate and complete

How to Use Intersystem Testing

One of the best testing tools for intersystem testing is the integrated test facility. This permits testing to occur in a production environment and thus the coupling of systems can be tested at minimal cost.
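Where an integrated test facility is not available, a narrower automated check can still exercise the interface. The sketch below assumes two hypothetical functions, a billing extract and a receivables load, and verifies that amounts survive the hand-off intact.

```python
# Illustrative intersystem-test sketch: verify that parameters produced by one
# application (billing) are passed to and interpreted identically by another
# (receivables). Both functions and the record layout are assumptions.

def billing_extract(orders):
    """Hypothetical upstream system: produce the interface records."""
    return [{"invoice": o["id"], "amount_cents": int(round(o["amount"] * 100))}
            for o in orders]

def receivables_load(records):
    """Hypothetical downstream system: consume the interface records."""
    return {r["invoice"]: r["amount_cents"] / 100 for r in records}

def test_interface_amounts_round_trip():
    orders = [{"id": "A-1", "amount": 19.99}, {"id": "A-2", "amount": 250.00}]
    loaded = receivables_load(billing_extract(orders))
    for order in orders:
        assert loaded[order["id"]] == order["amount"], "amount corrupted across the interface"

if __name__ == "__main__":
    test_interface_amounts_round_trip()
    print("interface parameters passed correctly between systems")
```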

When to Use Intersystem Testing

Intersystem testing should be conducted whenever there is a change in parameters between application systems. The extent and type of testing will depend on the risk associated with those parameters being erroneous. If the integrated test facility concept is used, the intersystem parameters can be verified after the changed or new application is placed into production.

Control Testing

Approximately one-half of the total system development effort is directly attributable to controls. Controls include data validation, file integrity, audit trails, backup and recovery, documentation, and the other aspects of systems related to integrity. Control testing is designed to ensure that the mechanisms that oversee the proper functioning of an application system work.

Objectives

Specific objectives of control testing include the following:

  • Accurate and complete data

  • Authorized transactions

  • Maintenance of an adequate audit trail of information

  • Efficient, effective, and economical process

  • Process meeting the needs of the user

How to Use Control Testing

The term “system of internal controls” is frequently used in accounting literature to describe the totality of the mechanisms that ensure the integrity of processing. Controls are designed to reduce risks; therefore, to test controls, the risks must be identified.

One method for testing controls is to develop a risk matrix. The matrix identifies the risks, the controls, and the segment within the application system in which the controls reside.
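A risk matrix can be represented very simply. The sketch below shows one possible structure; the risks, controls, segments, and tests listed are invented examples, not a prescribed set.

```python
# Illustrative risk-matrix sketch for control testing: each entry ties a risk
# to the control that mitigates it, the application segment where the control
# lives, and the test that exercises it. All entries are hypothetical examples.

risk_matrix = [
    {"risk": "Duplicate payments",
     "control": "Invoice number uniqueness check",
     "segment": "Accounts payable entry screen",
     "test": "Enter the same invoice twice; second entry must be rejected"},
    {"risk": "Unauthorized credit limit change",
     "control": "Supervisor approval required above $5,000",
     "segment": "Customer maintenance module",
     "test": "Attempt a $10,000 change with a clerk ID; must be blocked"},
    {"risk": "Lost transactions after a crash",
     "control": "Batch totals reconciled to input register",
     "segment": "Nightly posting job",
     "test": "Compare posted batch totals to the input control totals"},
]

for entry in risk_matrix:
    print(f"{entry['risk']}: verify '{entry['control']}' in {entry['segment']} — {entry['test']}")
```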

When to Use Control Testing

Control testing should be an integral part of system testing. Controls must be viewed as a system within a system, and tested in parallel with other systems tests. Because approximately 50 percent of the total development effort goes into controls, a proportionate part of testing should be allocated to evaluating the adequacy of controls.

Parallel Testing

In the early days of computer systems, parallel testing was one of the more popular testing techniques. However, as systems became more integrated and complex, the difficulty of conducting parallel tests increased and the popularity of the technique diminished. Parallel testing is used to determine that the results of the new application are consistent with the processing of the previous application or version of the application.

Objectives

Specific objectives of parallel testing include the following:

  • Conducting redundant processing to ensure that the new application performs correctly

  • Demonstrating consistency and inconsistency between two versions of the same application system

How to Use Parallel Testing

Parallel testing requires that the same input data be run through two versions of the same application. Parallel testing can be done with the entire application or with a segment of the application. Sometimes a particular segment, such as the day-to-day interest calculation on a savings account, is so complex and important that an effective method of testing is to run the new logic in parallel with the old logic.

If the new application changes data formats, the input data will have to be modified before it can be run through the new application. The more difficulty encountered in verifying results or preparing common input, the less attractive the parallel testing technique becomes.
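
The comparison step of a parallel test can be reduced to running identical input through the old and new logic and flagging any discrepancy. The short Python sketch below assumes two hypothetical versions of a daily interest calculation; it is not drawn from the text, only illustrative of the mechanics.

    # Parallel-testing sketch: feed the same input to the old and new versions of one
    # calculation and flag any result that differs. Both functions are hypothetical
    # stand-ins for the old and new application logic.
    def old_daily_interest(balance, annual_rate):
        return round(balance * annual_rate / 365, 2)

    def new_daily_interest(balance, annual_rate):
        return round(balance * (annual_rate / 365), 2)

    test_accounts = [(1000.00, 0.05), (2500.50, 0.035), (0.00, 0.05), (99999.99, 0.0425)]

    for balance, rate in test_accounts:
        old_result = old_daily_interest(balance, rate)
        new_result = new_daily_interest(balance, rate)
        status = "OK" if old_result == new_result else "MISMATCH"
        print(f"{status}: balance={balance:>9.2f} rate={rate:.4f} old={old_result} new={new_result}")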

When to Use Parallel Testing

Parallel testing should be used when there is uncertainty regarding the correctness of processing of the new application, and the old and new versions of the application are similar. For example, in payroll, banking, and other financial applications where the results of processing are similar, even though the methods may change significantly—for example, going from batch to online banking—parallel testing is one of the more effective methods of ensuring the integrity of the new application.

Task 4: Plan Unit Testing and Analysis

This section examines the techniques, assessment, and management of unit testing and analysis. The strategies are categorized as functional, structural, or error-oriented. Mastery of the material in this section assists the software engineer to define, conduct, and evaluate unit tests and analyses and to assess new unit testing techniques.

Unit testing and analysis are the most practiced means of verifying that a program possesses the features required by its specification. Testing is a dynamic approach to verification in which code is executed with test data to assess the presence (or absence) of required features. Analysis is a static approach to verification in which required features are detected by analyzing, but not executing, the code. Many analysis techniques, such as proof of correctness, safety analysis, and the more open-ended analysis procedures represented by code inspections and reviews, have become established technologies with their own substantial literature. These techniques are not discussed in this section.

This section focuses on unit-level verification. What constitutes a “unit” has been left imprecise; it may be as little as a single statement or as much as a set of coupled subroutines. The essential characteristic of a unit is that it can meaningfully be treated as a whole. Some of the techniques presented here require associated documentation that states the desired features of the unit. This documentation may be a comment in the source program, a specification written in a formal language, or a general statement of requirements. Unless otherwise indicated, this documentation should not be assumed to be the particular document in the software life cycle called a “software specification,” “software requirements definition,” or the like. Any document containing information about the unit may provide useful information for testing or analysis.

Functional Testing and Analysis

Functional testing and analysis ensure that major characteristics of the code are covered.

Functional Analysis

Functional analysis seeks to verify, without execution, that the code faithfully implements the specification. Various approaches are possible. In the proof-of-correctness approach, a formal proof is constructed to verify that a program correctly implements its intended function. In the safety-analysis approach, potentially dangerous behavior is identified and steps are taken to ensure such behavior is never manifested. Functional analysis is mentioned here for completeness, but a discussion of it is outside the scope of this section.

Functional Testing

Unit testing is functional when test data is developed from documents that specify a module’s intended behavior. These documents include, but are not limited to, the actual specification and the high- and low-level design of the code to be tested. The goal is to test for each software feature of the specified behavior, including the input domains, the output domains, categories of inputs that should receive equivalent processing, and the processing functions themselves.

Testing Independent of the Specification Technique

Specifications detail the assumptions that may be made about a given software unit. They must describe the interface through which access to the unit is given, as well as the behavior once such access is given. The interface of a unit includes the features of its inputs, its outputs, and their related value spaces (called domains). The behavior of a module always includes the function(s) to be computed (its semantics), and sometimes the runtime characteristics, such as its space and time complexity.

Functional testing can be based either on the interface of a module or on the function to be computed.

  • Testing based on the interface. Testing based on the interface of a module selects test data based on the features of the input and output domains of the module and their interrelationships.

    • Input domain testing. In extremal testing, test data is chosen to cover the extremes of the input domain. Similarly, midrange testing selects data from the interiors of domains. The motivation is inductive—it is hoped that conclusions about the entire input domain can be drawn from the behavior elicited by some of its representative members. For structured input domains, combinations of extreme points for each component are chosen. This procedure can generate a large quantity of data, although considerations of the inherent relationships among components can ameliorate this problem somewhat.

    • Equivalence partitioning. Specifications frequently partition the set of all possible inputs into classes that receive equivalent treatment. Such partitioning is called equivalence partitioning. A result of equivalence partitioning is the identification of a finite set of functions and their associated input and output domains. Input constraints and error conditions can also result from this partitioning. Once these partitions have been developed, both extremal and midrange testing are applicable to the resulting input domains. (A brief sketch of this kind of interface-based test selection follows this list.)

    • Syntax checking. Every robust program must parse its input and handle incorrectly formatted data. Verifying this feature is called syntax checking. One means of accomplishing this is to execute the program using a broad spectrum of test data. By describing the data with a formal grammar, instances of the input language can be generated using algorithms from automata theory.

  • Testing based on the function to be computed. Equivalence partitioning results in the identification of a finite set of functions and their associated input and output domains. Test data can be developed based on the known characteristics of these functions. Consider, for example, a function to be computed that has fixed points (that is, certain of its input values are mapped into themselves by the function). Testing the computation at these fixed points is possible, even in the absence of a complete specification. Knowledge of the function is essential in order to ensure adequate coverage of the output domains.

    • Special-value testing. Selecting test data on the basis of features of the function to be computed is called special-value testing. This procedure is particularly applicable to mathematical computations. Properties of the function to be computed can aid in selecting points that will indicate the accuracy of the computed solution.

    • Output domain coverage. For each function determined by equivalence partitioning there is an associated output domain. Output domain coverage is performed by selecting points that will cause the extremes of each of the output domains to be achieved. This ensures that modules have been checked for maximum and minimum output conditions and that all categories of error messages have, if possible, been produced. In general, constructing such test data requires knowledge of the function to be computed and, hence, expertise in the application area.
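
The interface-based techniques above are easy to sketch for a single field. The following Python fragment assumes a hypothetical "hours worked" input specified as an integer from 0 through 80; the equivalence classes and the extremal and midrange points are chosen from that specification alone, without reference to the code.

    # Interface-based test selection sketch for a hypothetical input field
    # "hours worked", specified as an integer in the range 0..80.
    VALID_MIN, VALID_MAX = 0, 80

    # Equivalence partitioning: one valid class and two invalid classes,
    # each sampled at extremal and midrange points.
    partitions = {
        "valid (0..80)": [VALID_MIN, 40, VALID_MAX],
        "invalid (below 0)": [VALID_MIN - 1, -999],
        "invalid (above 80)": [VALID_MAX + 1, 10000],
    }

    def accepts_hours(hours):
        """Hypothetical unit under test: returns True if the field passes validation."""
        return VALID_MIN <= hours <= VALID_MAX

    for name, points in partitions.items():
        expected = name.startswith("valid")
        for point in points:
            result = accepts_hours(point)
            flag = "pass" if result == expected else "FAIL"
            print(f"{flag}: class={name:<19} input={point:>6} accepted={result}")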

Testing Dependent on the Specification Technique

The specification technique employed can aid in testing. An executable specification can be used as an oracle and, in some cases, as a test generator. Structural properties of a specification can guide the testing process. If the specification falls within certain limited classes, properties of those classes can guide the selection of test data. Much work remains to be done in this area of testing.

  • Algebraic. In algebraic specification, properties of a data abstraction are expressed by means of axioms or rewrite rules. In one testing system, the consistency of an algebraic specification with an implementation is checked by testing. Each axiom is compiled into a procedure, which is then associated with a set of test points. A driver program supplies each of these points to the procedure of its respective axiom. The procedure, in turn, indicates whether the axiom is satisfied. Structural coverage of both the implementation and the specification is computed.

  • Axiomatic. Despite the potential for widespread use of predicate calculus as a specification language, little has been published about deriving test data from such specifications. A relationship between predicate calculus specifications and path testing has been explored.

  • State machines. Many programs can be specified as state machines, thus providing an additional means of selecting test data. Because the equivalence problem of two finite automata is decidable, testing can be used to decide whether a program that simulates a finite automaton with a bounded number of nodes is equivalent to the one specified. This result can be used to test those features of programs that can be specified by finite automata—for example, the control flow of a transaction-processing system.

  • Decision tables. Decision tables are a concise method of representing an equivalence partitioning. The rows of a decision table specify all the conditions that the input may satisfy. The columns specify different sets of actions that may occur. Entries in the table indicate whether the actions should be performed if a condition is satisfied. Typical entries are “Yes,” “No,” or “Don’t care.” Each row of the table suggests significant test data. Cause-effect graphs provide a systematic means of translating English specifications into decision tables, from which test data can be generated.
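
Because each rule of a decision table suggests a test case, the table can be held as data and expanded mechanically. The Python sketch below uses hypothetical order-discount conditions and actions; it only illustrates turning rules, including "don't care" entries, into test cases.

    # Decision-table sketch: conditions are rows, each rule specifies a combination of
    # condition values and the expected action. None means "don't care."
    conditions = ["order_over_1000", "preferred_customer"]
    rules = [
        ((True, True), "apply 10% discount"),
        ((True, False), "apply 5% discount"),
        ((False, None), "no discount"),      # preferred_customer is a don't-care here
    ]

    def expand(rule_values):
        """Expand don't-care entries into all concrete combinations."""
        cases = [[]]
        for value in rule_values:
            options = [True, False] if value is None else [value]
            cases = [case + [option] for case in cases for option in options]
        return cases

    for values, action in rules:
        for case in expand(values):
            named = dict(zip(conditions, case))
            print(f"Test case {named} -> expected action: {action}")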

Structural Testing and Analysis

In structural program testing and analysis, test data is developed or evaluated from the source code. The goal is to ensure that various characteristics of the program are adequately covered.

Structural Analysis

In structural analysis, programs are analyzed without being executed. The techniques resemble those used in compiler construction. The goal here is to identify fault-prone code, to discover anomalous circumstances, and to generate test data to cover specific characteristics of the program’s structure.

  • Complexity measures. As resources available for testing are always limited, it is necessary to allocate these resources efficiently. It is intuitively appealing to suggest that the more complex the code, the more thoroughly it should be tested. Evidence from large projects seems to indicate that a small percentage of the code typically contains the largest number of errors. Various complexity measures have been proposed, investigated, and analyzed in the literature.

  • Data flow analysis. A program can be represented as a flow graph annotated with information about variable definitions, references, and undefinitions.

    From this representation, information about data flow can be deduced for use in code optimization, anomaly detection, and test data generation. Data flow anomalies are flow conditions that deserve further investigation, as they may indicate problems. Examples include: defining a variable twice with no intervening reference, referencing a variable that is undefined, and undefining a variable that has not been referenced since its last definition. Data flow analysis can also be used in test data generation, exploiting the relationship between points where variables are defined and points where they are used.

  • Symbolic execution. A symbolic execution system accepts three inputs: a program to be interpreted, symbolic input for the program, and the path to follow. It produces two outputs: the symbolic output that describes the computation of the selected path, and the path condition for that path. The specification of the path can be either interactive or preselected. The symbolic output can be used to prove the program correct with respect to its specification, and the path condition can be used for generating test data to exercise the desired path. Structured data types cause difficulties, however, because it is sometimes impossible to deduce what component is being modified in the presence of symbolic values.
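
The idea is easiest to see on a two-branch fragment. In the Python sketch below, the symbolic manipulation has been done by hand and merely recorded; a real symbolic executor would derive the path conditions and symbolic outputs automatically.

    # Symbolic-execution sketch, worked by hand for the fragment:
    #     if x > y:  z = x - y
    #     else:      z = y - x
    # The inputs are the symbols X and Y rather than concrete values.
    paths = [
        {"path": "then branch", "path_condition": "X > Y", "symbolic_output": "z = X - Y"},
        {"path": "else branch", "path_condition": "X <= Y", "symbolic_output": "z = Y - X"},
    ]

    # The path condition says what test data exercises the path; the symbolic output
    # is compared against the specification (here, z should equal |X - Y|).
    for p in paths:
        print(f"{p['path']}: choose data satisfying {p['path_condition']}; expected result {p['symbolic_output']}")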

Structural Testing

Structural testing is a dynamic technique in which test data selection and evaluation are driven by the goal of covering various characteristics of the code during testing. Assessing such coverage involves the instrumentation of the code to keep track of which characteristics of the program text are actually exercised during testing. The low cost of such instrumentation has been a prime motivation for adopting this technique. More important, structural testing addresses the fact that only the program text reveals the detailed decisions of the programmer. For example, for the sake of efficiency, a programmer might choose to implement a special case that appears nowhere in the specification. The corresponding code will be tested only by chance using functional testing, whereas use of a structural coverage measure such as statement coverage should indicate the need for test data for this case. Structural coverage measures form a rough hierarchy, with higher levels being more costly to perform and analyze, but being more beneficial, as described in the list that follows:

  • Statement testing. Statement testing requires that every statement in the program be executed. While it is obvious that achieving 100 percent statement coverage does not ensure a correct program, it is equally obvious that anything less means that there is code in the program that has never been executed!

  • Branch testing. Achieving 100 percent statement coverage does not ensure that each branch in the program flow graph has been executed. For example, executing an if...then statement (no else) when the tested condition is true tests only one of the two branches in the flow graph. Branch testing seeks to ensure that every branch has been executed. Branch coverage can be checked by probes inserted at points in the program that represent arcs from branch points in the flow graph. This instrumentation suffices for statement coverage as well. (A small coverage sketch follows this list.)

  • Conditional testing. In conditional testing, each clause in every condition is forced to take on each of its possible values in combination with those of other clauses. Conditional testing thus subsumes branch testing and, therefore, inherits the same problems as branch testing. Instrumentation for conditional testing can be accomplished by breaking compound conditional statements into simple conditions and nesting the resulting if statements.

  • Expression testing. Expression testing requires that every expression assume a variety of values during a test in such a way that no expression can be replaced by a simpler expression and still pass the test. If one assumes that every statement contains an expression and that conditional expressions form a proper subset of all the program expressions, then this form of testing properly subsumes all the previously mentioned techniques. Expression testing does require significant runtime support for the instrumentation.

  • Path testing. In path testing, data is selected to ensure that all paths of the program have been executed. In practice, of course, such coverage is impossible to achieve, for a variety of reasons. First, any program with an indefinite loop contains an infinite number of paths, one for each iteration of the loop. Thus, no finite set of data will execute all paths. The second difficulty is the infeasible path problem: It is undecidable whether an arbitrary path in an arbitrary program is executable. Attempting to generate data for such infeasible paths is futile, but it cannot be avoided. Third, it is undecidable whether an arbitrary program will halt for an arbitrary input. It is therefore impossible to decide whether a path is finite for a given input.

    In response to these difficulties, several simplifying approaches have been proposed. Infinitely many paths can be partitioned into a finite set of equivalence classes based on characteristics of the loops. Boundary and interior testing require executing loops zero times, one time, and, if possible, the maximum number of times. Linear code sequence and jump (LCSAJ) criteria specify a hierarchy of successively more complex path coverage.

    Path coverage does not imply condition coverage or expression coverage because an expression may appear on multiple paths but some subexpressions may never assume more than one value. For example, in

    if a or b then S1 else S2

    b may be false and yet each path may still be executed.
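
The gap between statement and branch coverage noted above can be shown on an if with no else branch. The Python sketch below uses hand-rolled probes rather than a real coverage tool, and the late-fee function is hypothetical.

    # Statement- vs. branch-coverage sketch for an if with no else in the original code.
    # The probes record which arcs of the flow graph are taken.
    executed_arcs = set()

    def apply_late_fee(balance, days_late):
        if days_late > 30:                      # branch point
            executed_arcs.add("true arc")       # probe on the true arc
            balance += 25
        else:
            executed_arcs.add("false arc")      # probe only; the original code has no else
        return balance

    # One test with days_late > 30 executes every original statement (full statement coverage)...
    apply_late_fee(100, 45)
    print("after one test, arcs covered:", executed_arcs)    # {'true arc'}

    # ...but branch coverage also demands a test in which the condition is false.
    apply_late_fee(100, 10)
    print("after two tests, arcs covered:", executed_arcs)   # {'true arc', 'false arc'}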

Error-Oriented Testing and Analysis

Testing is necessitated by the potential presence of errors in the programming process. Techniques that focus on assessing the presence or absence of errors in the programming process are called error oriented. There are three broad categories of such techniques: statistical assessment, error-based testing, and fault-based testing. These are stated in order of increasing specificity of what is wrong with the program. Statistical methods attempt to estimate the failure rate of the program without reference to the number of remaining faults.

Error-based testing attempts to show the absence of certain errors in the programming process. Fault-based testing attempts to show the absence of certain faults in the code. Since errors in the programming process are reflected as faults in the code, both techniques demonstrate the absence of faults. They differ, however, in their starting point: Error-based testing begins with the programming process, identifies potential errors in that process, and then asks how those errors are reflected as faults. It then seeks to demonstrate the absence of those reflected faults. Fault-based testing begins with the code and asks what are the potential faults in it, regardless of what error in the programming process caused them.

Statistical Methods

Statistical testing employs statistical techniques to determine the operational reliability of the program. Its primary concern is how faults in the program affect its failure rate in its actual operating environment. A program is subjected to test data that statistically models the operating environment, and failure data is collected. From the data, a reliability estimate of the program’s failure rate is computed. This method can be used in an incremental development environment. A statistical method for testing paths that compute algebraic functions has also been developed. A prevailing sentiment is that statistical testing is a futile activity because it is not directed toward finding errors. However, studies suggest it is a viable alternative to structural testing. Combining statistical testing with an oracle appears to represent an effective tradeoff of computer resources for human time.
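
The estimation step itself is simple: draw test inputs according to an assumed operational profile, count failures, and compute the observed failure rate. The Python sketch below assumes a hypothetical transaction mix and a hypothetical unit under test with an intentionally planted fault.

    import random

    # Statistical-testing sketch: sample inputs according to an assumed operational
    # profile and estimate the failure rate from observed failures.
    random.seed(1)
    operational_profile = {"deposit": 0.60, "withdrawal": 0.35, "transfer": 0.05}

    def unit_under_test(transaction_type):
        # Hypothetical fault: transfers fail about half the time.
        return not (transaction_type == "transfer" and random.random() < 0.5)

    trials, failures = 10000, 0
    kinds, weights = list(operational_profile), list(operational_profile.values())
    for _ in range(trials):
        kind = random.choices(kinds, weights=weights)[0]
        if not unit_under_test(kind):
            failures += 1

    print(f"estimated failure rate per transaction: {failures / trials:.4f}")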

Error-Based Testing

Error-based testing seeks to demonstrate that certain errors have not been committed in the programming process. Error-based testing can be driven by histories of programmer errors, measures of software complexity, knowledge of error-prone syntactic constructs, or even error guessing. Some of the more methodical techniques are described in the list that follows:

  • Fault estimation. Fault seeding is a statistical method used to assess the number and characteristics of the faults remaining in a program. Harlan Mills originally proposed this technique and called it error seeding. First, faults are seeded into a program. Then the program is tested, and the number of faults discovered is used to estimate the number of faults yet undiscovered. A difficulty with this technique is that the faults seeded must be representative of the yet-undiscovered faults in the program. Techniques for predicting the quantity of remaining faults can also be based on a reliability model. (A short numerical sketch of the seeding estimate follows this list.)

  • Domain testing. The input domain of a program can be partitioned according to which inputs cause each path to be executed. These partitions are called path domains. Faults that cause an input to be associated with the wrong path domain are called domain faults. Other faults are called computation faults. (The terms used before attempts were made to rationalize nomenclature were “domain errors” and “computation errors.”) The goal of domain testing is to discover domain faults by ensuring that the test data limits the range of undetected faults.

  • Perturbation testing. Perturbation testing attempts to decide what constitutes a sufficient set of paths to test. Faults are modeled as a vector space, and characterization theorems describe when sufficient paths have been tested to discover both computation and domain errors. Additional paths need not be tested if they cannot reduce the dimensionality of the error space.
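
The arithmetic behind fault seeding is a simple ratio estimate. The Python sketch below uses hypothetical counts; the key assumption, stated in the comment, is that seeded faults are found at about the same rate as indigenous ones.

    # Fault-seeding (error-seeding) estimate sketch with hypothetical counts.
    # Assumption: seeded faults are discovered at the same rate as indigenous faults, so
    #   indigenous_found / indigenous_total  ~=  seeded_found / seeded_total
    seeded_total = 20        # faults deliberately planted in the program
    seeded_found = 15        # planted faults discovered during testing
    indigenous_found = 30    # real (unseeded) faults discovered during the same testing

    estimated_indigenous_total = indigenous_found * seeded_total / seeded_found
    estimated_remaining = estimated_indigenous_total - indigenous_found

    print(f"estimated indigenous faults in the program: {estimated_indigenous_total:.0f}")
    print(f"estimated faults still undiscovered: {estimated_remaining:.0f}")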

Fault-Based Testing

Fault-based testing aims at demonstrating that certain prescribed faults are not in the code. It functions well in the role of test data evaluation: Test data that does not succeed in discovering the prescribed faults is not considered adequate. Fault-based testing methods differ in both extent and breadth. One with local extent demonstrates that a fault has a local effect on computation; it is possible that this local effect will not produce a program failure. A method with global extent demonstrates that a fault will cause a program failure. Breadth is determined by whether the technique handles a finite or an infinite class of faults. Extent and breadth are orthogonal, as evidenced by the techniques described below.

  • Local extent, finite breadth. Input-output pairs of data are encoded as a comment in a procedure, as a partial specification of the function to be computed by that procedure. The procedure is then executed for each of the input values and checked for the output values. The test is considered adequate only if each computational or logical expression in the procedure is determined by the test; that is, no expression can be replaced by a simpler expression and still pass the test. Simpler is defined in a way that allows only a finite number of substitutions. Thus, as the procedure is executed, each possible substitution is evaluated on the data state presented to the expression. Those that do not evaluate the same as the original expression are rejected. The system allows methods of specifying the extent to be analyzed.

  • Global extent, finite breadth. In mutation testing, test data adequacy is judged by demonstrating that interjected faults are caught. A program with interjected faults is called a mutant, and is produced by applying a mutation operator. Such an operator changes a single expression in the program to another expression, selected from a finite class of expressions. For example, a constant might be incremented by one, decremented by one, or replaced by zero, yielding one of three mutants. Applying the mutation operators at each point in a program where they are applicable forms a finite, albeit large, set of mutants. The test data is judged adequate only if each mutant in this set is either functionally equivalent to the original program or computes different output than the original program. Inadequacy of the test data implies that certain faults can be introduced into the code and go undetected by the test data. (A small mutation sketch follows this list.)

    Mutation testing is based on two hypotheses. The competent-programmer hypothesis says that a competent programmer will write code that is close to being correct; the correct program, if not the current one, can be produced by some straightforward syntactic changes to the code. The coupling-effect hypothesis says that test data that reveals simple faults will uncover complex faults as well.

    Thus, only single mutants need be eliminated, and combinatory effects of multiple mutants need not be considered. Studies formally characterize the competent-programmer hypothesis as a function of the probability of the test set’s being reliable, and show that under this characterization, the hypothesis does not hold. Empirical justification of the coupling effect has been attempted, but theoretical analysis has shown that it does not hold, even for simple programs.

  • Local extent, infinite breadth. Rules for recognizing error-sensitive data are described for each primitive language construct. Satisfaction of a rule for a given construct during testing means that all alternate forms of that construct have been distinguished. This has an obvious advantage over mutation testing—elimination of all mutants without generating a single one! Some rules even allow for infinitely many mutants. Of course, since this method is of local extent, some of the mutants eliminated may indeed be the correct program.

  • Global extent, infinite breadth. We can define a fault-based method based on symbolic execution that permits elimination of infinitely many faults through evidence of global failures. Symbolic faults are inserted into the code, which is then executed on real or symbolic data. Program output is then an expression in terms of the symbolic faults. It thus reflects how a fault at a given location will affect the program’s output. This expression can be used to determine actual faults that could not have been substituted for the symbolic fault and remain undetected by the test.
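
Mutation analysis is easy to sketch with a single mutation operator. The Python fragment below mutates the relational operator in a hypothetical absolute-value function and shows how a surviving, non-equivalent mutant exposes an inadequate test set.

    # Mutation-testing sketch: generate mutants by swapping one relational operator,
    # then judge the test set adequate only if every non-equivalent mutant is killed
    # (produces output different from the original on at least one test point).
    def original(x):
        return x if x >= 0 else -x

    mutants = {
        ">= replaced by >": lambda x: x if x > 0 else -x,     # functionally equivalent mutant
        ">= replaced by !=": lambda x: x if x != 0 else -x,   # wrong only for negative inputs
        ">= replaced by <=": lambda x: x if x <= 0 else -x,   # wrong for any nonzero input
    }

    test_set = [0, 5]   # deliberately weak: no negative value

    for name, mutant in mutants.items():
        killed = any(mutant(t) != original(t) for t in test_set)
        print(f"mutant {name}: {'killed' if killed else 'ALIVE'}")

    # The "!=" mutant survives [0, 5], so the test set is inadequate; adding a negative
    # point such as -3 kills it. The ">" mutant is equivalent and can never be killed.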

Managerial Aspects of Unit Testing and Analysis

Administration of unit testing and analysis proceeds in two stages. First, techniques appropriate to the project must be selected, and then these techniques must be systematically applied.

Selecting Techniques

Selecting the appropriate techniques from the array of possibilities is a complex task that requires assessment of many issues, including the goal of testing, the nature of the software product, and the nature of the test environment. It is important to remember the complementary benefits of the various techniques and to select as broad a range of techniques as possible, within imposed limits. No single testing or analysis technique is sufficient. Functional testing suffers from inadequate code coverage, structural testing suffers from inadequate specification coverage, and neither technique achieves the benefits of error coverage.

  • Goals. Different design goals impose different demands on the selection of testing techniques. Achieving correctness requires use of a great variety of techniques. A goal of reliability implies the need for statistical testing using test data representative of the anticipated user environment. It should be noted, however, that proponents of this technique still recommend judicious use of “selective” tests to avoid embarrassing or disastrous situations. Testing may also be directed toward assessing the utility of proposed software. This kind of testing requires a solid foundation in human factors. Performance of the software may also be of special concern. In this case, extremal testing is essential. Timing instrumentation can prove useful.

    Often, several of these goals must be achieved simultaneously. One approach to testing under these circumstances is to order testing by decreasing benefit. For example, if reliability, correctness, and performance are all desired features, it is reasonable to tackle performance first, reliability second, and correctness third, since these goals require increasingly difficult-to-design tests. This approach can have the beneficial effect of identifying faulty code with less effort.

  • Nature of the product. The nature of the software product plays an important role in the selection of appropriate techniques.

  • Nature of the testing environment. Available resources, personnel, and project constraints must be considered in selecting testing and analysis strategies.

Control

To ensure quality in unit testing and analysis, it is necessary to control both documentation and the conduct of the test:

  • Configuration control. Several items from unit testing and analysis should be placed under configuration management, including the test plan, test procedures, test data, and test results. The test plan specifies the goals, environment, and constraints imposed on testing. The test procedures detail the step-by-step activities to be performed during the test. Regression testing occurs when previously saved test data is used to test modified code. Its principal advantage is that it ensures previously attained functionality has not been lost during a modification. Test results are recorded and analyzed for evidence of program failures. Failure rates underlie many reliability models; high failure rates may indicate the need for redesign.

  • Conducting tests. A test bed is an integrated system for testing software. Minimally, such systems provide the ability to define a test case, construct a test driver, execute the test case, and capture the output. Additional facilities provided by such systems typically include data flow analysis, structural coverage assessment, regression testing, test specification, and report generation.
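
The minimum facilities of a test bed are easy to sketch: test cases defined as data, a driver that executes them against the unit under test, and captured output that can feed reporting or be saved for regression runs. The unit and the cases in this Python sketch are hypothetical.

    # Minimal test-bed sketch: cases are data, the driver runs them, and the captured
    # results can be reported or stored for later regression testing.
    def unit_under_test(quantity, unit_price):
        return round(quantity * unit_price, 2)

    test_cases = [
        {"id": "TC-01", "input": (3, 9.99), "expected": 29.97},
        {"id": "TC-02", "input": (0, 9.99), "expected": 0.00},
        {"id": "TC-03", "input": (2, 10.00), "expected": 20.01},   # deliberately wrong expectation
    ]

    captured = []
    for case in test_cases:
        actual = unit_under_test(*case["input"])
        captured.append({**case, "actual": actual, "passed": actual == case["expected"]})

    for result in captured:
        print(f"{result['id']}: {'PASS' if result['passed'] else 'FAIL'} "
              f"(expected {result['expected']}, got {result['actual']})")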

Task 5: Build the Test Plan

The development of an effective test plan involves the following four steps:

  1. Set the test objectives.

  2. Develop a test matrix.

  3. Define test administration.

  4. Write the test plan.

Setting Test Objectives

The objectives of testing should restate the project objectives from the project plan. In fact, the test plan objectives should determine whether the project plan objectives have been achieved. If the project plan does not have clearly stated objectives, testers must develop their own. In that case, testers must have them confirmed as the project objectives by the project team. Testers can:

  • Set objectives to minimize the project risks

  • Brainstorm to identify project objectives

  • Relate objectives to the testing policy, if established

Normally, there should be ten or fewer test objectives. Having too many objectives scatters the test team’s focus.

Work Paper 8-1 is designed for documenting test objectives. To complete the Work Paper:

  • Itemize the objectives so that they can be referred to by number.

  • Write the objectives in a measurable statement to focus testers’ attention.

  • Assign a priority to the objectives, as follows:

    • High. The most important objectives to be accomplished during testing

    • Average. Objectives to be accomplished only after the high-priority test objectives have been met

    • Low. The least important of the test objectives

  • Define the completion criteria for each objective. This should state quantitatively how the testers will determine whether the objective has been accomplished. The more specific the criteria, the easier it will be for the testers to follow through.

Work Paper 8-1. Test Objective

Number | Objective | Priority | Completion Criteria
(blank rows are provided for the test team to complete)

Note

Establish priorities so that approximately one-third are high, one-third are average, and one-third are low.

Developing a Test Matrix

The test matrix is the key component of the test plan. On one side it lists what is to be tested; on the other, it indicates which test is to be performed, or “how” software will be tested. Between the two dimensions of the matrix are the tests applicable to the software; for example, one test may test more than one software module. The test matrix is also a test “proof.” It proves that each testable function has at least one test, and that each test is designed to test a specific function.

An example of a test matrix is illustrated in Table 8-1. This shows four functions in a payroll system, with three tests to validate the functions. Because payroll is a batch system, batched test data is used with various dates, the parallel test is run when posting to the general ledger, and all changes are verified through a code inspection. The test matrix can be prepared using the work papers described in the following sections. (Note: The modules that contain the function(s) to be tested will be identified.)

Table 8-1. Test Matrix Example

SOFTWARE FUNCTION       | TEST DECK TRANSACTION | PARALLEL TEST | CODE INSPECTION
FICA Calculation        | X                     |               | X
Gross Pay               | X                     |               | X
Tax Deduction           | X                     |               | X
General Ledger Charges  |                       | X             | X

The recommended test process is first to determine the test factors to be evaluated in the test process, and then to select the techniques that will be used in performing the test. Figure 8-5 is a test factor/test technique matrix that shows which techniques are most valuable for the various test factors. For example, if testers want to evaluate the system structure for reliability, the execution and recovery testing techniques are recommended. On the other hand, if testers want to evaluate the functional aspects of reliability, the requirements, error handling, manual support, and control testing techniques are recommended.

Figure 8-5. Test factor/technique matrix. The rows of the matrix list the test factors: reliability, authorization, file integrity, audit trail, continuity of processing, service level, access control, methodology, correctness, ease of use, maintainable, portable, coupling, performance, and ease of operation. The columns list the test techniques, grouped into structural testing (stress, execution, recovery, operations, compliance, security) and functional testing (requirements, regression, error handling, manual support, intersystems, control, parallel), plus unit testing. An x at an intersection marks a technique recommended for evaluating that factor.

Individual Software Modules

Testers should list the software modules to be tested on Work Paper 8-2, including the name of the module, a brief description, and the evaluation criteria. When documenting software modules, testers should include the following three categories:

  • Modules written by the IT development group

  • Modules written by non-IT personnel

  • Software capabilities embedded in hardware chips

Work Paper 8-2. Software Module

Software Project:____________________________________________________

Number | Software Module Name | Description | Evaluation Criteria
(blank rows are provided for the test team to complete)

Structural Attributes

Testers can use Work Paper 8-3 to identify the structural attributes of software that may be affected and thus require testing. The structural attributes can be those described earlier (maintainability, reliability, efficiency, usability, and so on) or specific processing concerns regarding how changes can affect the operating performance of the software.

Work Paper 8-3. Structural Attribute

Software Project:____________________________________________________

Software Module Number | Structural Attribute | Description | Evaluation Criteria
(blank rows are provided for the test team to complete)

Structural attributes also include the impact the processing of one software system has on another software system. This is classified as a structural attribute because the structure of one system may be incompatible with the structure of another.

 

Batch Tests

Batch tests are high-level tests. They must be decomposed during the execution phase into specific test transactions. For example, a test identified at the test plan level might validate that all date processing in a software module is correct. During execution, each date-related instruction in a software module would require a test transaction. (It is not necessary for test descriptions at the test planning level to be that detailed.)

Work Paper 8-4 describes each batch test to be performed during testing. If we use our earlier example of testing date-related processing, that test can be described in the test plan and related to all the software modules in which it will occur. However, during execution, the test data for each module that executes the test will be a different transaction. To complete Work Paper 8-4, you must identify the software project; if the test is applicable to all software projects, the word “all” should be used to describe the software project.

Work Paper 8-4. Batch Tests

Software Project:________________________________________

Name of Test:                                             Test No.

Test Objective:

Test Input:

Test Procedures:

Test Output:

Test Controls:

Software or Structure Attribute Tested:

Each test should be named and numbered. In our example, it might be called Date Compliance test and given a unique number. Numbering is important both to control tests and to roll test results back to the high-level test described in the test plan.

Figure 8-6 shows a completed test document for a hypothetical test of data validation routines. Although all the detail is not yet known because the data validation routines have not been specified at this point, there is enough information to enable a group to prepare the test of the data validation routines.

Figure 8-6. Conducting batch tests.

Software Project: Payroll Application

Name of Test: Validate Input

Test No. 1

Test Objective

Exercise data validation routines.

Test Input

Prepare the following types of input data for each input field:

  • valid data

  • invalid data

  • range of codes

  • validation of legitimate values and tables

Test Procedures

Create input transactions that contain the conditions described in test input. Run the entire test deck until all conditions are correctly processed.

Test Output

Reject all invalid conditions and accept all valid conditions.

Test Controls

Run the entire test each time the test is conducted. Rerun the test until all specified output criteria have been achieved.

Software or Structure Attribute Tested

The data validation function.

Conceptual Test Script for Online System Test

Work Paper 8-5 serves approximately the same purpose for online systems as Work Paper 8-4 does for batch systems. Work Paper 8-5 is a high-level description of the test script, not the specific transactions that will be entered during online testing. From the test planning perspective, it is unimportant whether the individual items will be manually prepared or generated and controlled using a software tool.

Work Paper 8-5. Conceptual Test Script for Online System Test

Software Project:________________________________________________

Software Module:___________________________ Test No.____________________

Sequence | Source | Script Event | Evaluation Criteria | Comments
(blank rows are provided for the test team to complete)

The example given for entering a batch test to validate date-related processing is also appropriate for test scripts. The primary differences are the sequence in which the events must occur and the source or location of the origin of the online event.

Figure 8-7 shows an example of developing test scripts for the data validation function of an order-entry software project. It lists two scripting events, the evaluation criteria, and comments that would be helpful in developing these tests.

Figure 8-7. Example of a test script for a data validation function.

Software Project: Order Entry

Software Module:

Test No.: 2

Sequence 1
  Source: Data entry clerk
  Script Event: The data entry clerk enters an invalid customer order.
  Evaluation Criteria: The customer number should be rejected as invalid.
  Comments: A help routine would help to locate the proper customer number.

Sequence 2
  Source: Data entry clerk
  Script Event: The data entry clerk enters a correct order into the system for one or more valid company products.
  Evaluation Criteria: The system should, first, confirm that the information entered is valid and for legitimate values, and, second, ask the data entry clerk to verify that all the information has been entered correctly.
  Comments: This tests the entry of a valid order through the data validation routines.

Verification Tests

Testers can use Work Paper 8-6 to document verification testing. Verification is a static test performed on a document developed by the team responsible for creating software. Generally, for large documents, the verification process is a review; for smaller documents, the verification process comprises inspections. Other verification methods include the following:

  • Static analyzers incorporated into the compilers

  • Independent static analyzers

  • Walkthroughs

  • Third-party confirmation of the document’s accuracy

Work Paper 8-6. Verification Tests

Software Project:_____________________________________________

Number | Verification Test | System Product | Purpose | Responsibility | Test Point/Schedule
(blank rows are provided for the test team to complete)

Verification tests normally relate to a specific software project, but because of the extensiveness of testing, a single verification test may be applicable to many software projects. For example, it may be determined that each source code listing that is changed will be inspected prior to unit testing. In this case, the software project should be indicated as “all.”

Software/Test Matrix

The objective of Work Paper 8-7 is to illustrate that the tests validate and verify all the software modules, including their structural attributes. The matrix also illustrates which tests exercise which software modules.

Work Paper 8-7. Software/Test Matrix

Software Project:____________________________________________

Software Module | Test 1 | Test 2 | Test 3 | Test 4 | Test 5 | Test 6 | Test 7 | Test 8 | Test 9 | Test 10
(blank rows are provided; check each intersection where a test exercises that software module or structural attribute)

The information to complete this matrix has already been recorded in Work Papers 8-2 through 8-6. The vertical axis of the matrix lists the software modules and structural attributes from Work Papers 8-2 and 8-3. The horizontal axis lists the tests indicated on Work Papers 8-4, 8-5, and 8-6. The intersection of the vertical and horizontal axes indicates whether the test exercises the software module/structural attributes listed. This can be indicated by a check mark or via a reference to a more detailed description that relates to the specific test and software module.

Defining Test Administration

The administrative component of the test plan identifies the schedule, milestones, and resources needed to execute the test plan as illustrated in the test matrix. This cannot be undertaken until the test matrix has been completed.

Prior to developing the test plan, the test team has to be organized. This initial test team is responsible for developing the test plan and then defining the administrative resources needed to complete the plan. Thus, part of the plan will be executed as the plan is being developed; that part is the creation of the test plan, which itself consumes resources.

The test plan, like the implementation plan, is a dynamic document—that is, it changes as the implementation plan changes and the test plan is being executed. The test plan must be viewed as a “contract” in which any modifications must be incorporated.

Work Papers 8-8 through 8-10, described in the following sections, are provided to help testers develop and document the administrative component of the test plan.

Work Paper 8-8. Test Plan General Information

Field Requirements

FIELD

INSTRUCTIONS FOR ENTERING DATA

Software Project

The name or number that uniquely identifies the project or system that will be tested for compliance.

Summary

A one- or two-paragraph overview of what is to be tested and how the testing will be performed.

Pretest Background

Summary of any previous test experiences that might prove helpful with testing. The assumption is, if there were problems in the past, they will probably continue; however, if there were few problems with test tools, the test team can expect to use those tools effectively.

Test Environment

The computer center or facilities used to test the application. In a single computer center installation, this subsection is minimal. If the software is used in multiple installations, the test environments may need to be described extensively.

Test Constraints

Certain types of testing may not be practical or possible during testing. For example, in banking systems in which the software ties into the Fed Wire system, it is not possible to test software with that facility. In other cases, the software cannot yet interface directly with production databases, and therefore the test cannot provide assurance that some of those interfaces work. List all known constraints.

References

Any documents, policies, procedures, or regulations applicable to the software being tested or the test procedures. It is also advisable to provide a brief description of why the reference is being given and how it might be used during the testing process.

When to stop testing

What type of test results or events should cause testing to be stopped and the software returned to the implementation team for more work.

Software Project:_______________________________________________

Summary:

Pretest Background:

Test Environment:

Test Constraints:

References:

When to Stop Testing:

Test Plan General Information

Work Paper 8-8 is designed to provide background and reference data on testing. In many organizations this background information will be necessary to acquaint testers with the project. It is recommended that, along with this background data, testers be required to read all or parts of Chapters 1 through 4.

Define Test Milestones

Work Paper 8-9 is designed to indicate the start and completion date of each test. These tests are derived from the matrix in Work Papers 8-4, 8-5, and 8-6. The start/completion milestones are listed as numbers. If you prefer, these may be days or dates. For example, milestone 1 could just be week 1, day 1, or November 18. The tests from the test matrix are then listed in this work paper in the Test column; a start and completion milestone are checked for each test.

Work Paper 8-9. Test Milestones

Field Requirements

FIELD

INSTRUCTIONS FOR ENTERING DATA

Tests

Tests to be conducted during execution (the tests described on Work Papers 8-4, 8-5, and 8-6 and shown in matrix format in Work Paper 8-7). The vertical column can contain the test number, the test name, or both.

Start/Completion Milestone

The names that identify when tests start and stop. The milestones shown in Work Paper 8-9 are numbers 1–30, but these could be week numbers, day numbers, or specific dates such as November 18, 1999, included in the heading of the vertical columns.

Intersection between Tests and Start/Completion Milestones

Insert a check mark in the milestone column where the test starts, and a check mark in the column where the test is to be completed.

Tests | Start/Completion Milestones (columns 1 through 30)
(blank rows are provided: for each test, check the milestone column in which it starts and the column in which it is to be completed)

Note

Organizations that have scheduling software should use that in lieu of this work paper. Both the work paper and the scheduling software should include the person responsible for performing that test as the assignment becomes known.

Define Checkpoint Administration

Test administration contains all the attributes associated with any other project. Test administration is, in fact, project management; the project is testing. Administration involves identifying what is to be tested, who will test it, when it will be tested, when it is to be completed, the budget and resources needed for testing, any training the testers need, and the materials and other support for conducting testing.

Work Paper 8-10, which is completed for each milestone, can be used to schedule work as well as to monitor its status. Work Paper 8-10 also covers the administrative aspects associated with each testing milestone. If the test plan calls for a different test at six milestones, testers should prepare six different work papers. Because budgeting information should be summarized, a total budget figure for testing is not identified in the administrative part of the plan.

Work Paper 8-10. Administrative Checkpoint

Field Requirements

FIELD

INSTRUCTIONS FOR ENTERING DATA

Software Project

The name or number that uniquely identifies the project or system that will be tested for compliance.

Project

The name of the project being tested.

Checkpoint for Test

The name of the systems development checkpoint at which testing occurs. Unless the test team knows which development documents have been completed, testing is extremely difficult to perform.

Schedule

The dates on which the following items need to be started and completed:

  • plan

  • train test group

  • obtain data

  • test execution

  • test report(s)

Budget

The test resources allocated at this milestone, including both test execution and test analysis and reporting.

Resources

The resources needed for this checkpoint, including:

  • equipment (computers and other hardware needed for testing)

  • software and test personnel (staff to be involved in this milestone test, designated by name or job function)

Testing Materials

Materials needed by the test team to perform the test at this checkpoint, including:

  • system documentation (specific products and documents needed to perform the test at this point)

  • software to be tested (names of the programs and subsystems to be tested at this point)

  • test input (files or data used for test purposes)

  • test documentation (any test documents needed to conduct a test at this point)

  • test tools (software or other test tools needed to conduct the test at this point)

Note: Not all these materials are needed for every test.

Test Training

It is essential that the test team be taught how to perform testing. They may need specific training in the use of test tools and test materials, the performance of specific tests, and the analysis of test results.

Software Project:___________________________________________________________________

Test Milestone Number:_____________________________________________________________

Schedule:                                      Start        Finish
  Test Plan:                                   __________   __________
  Tester Training:                             __________   __________
  Obtaining Data:                              __________   __________
  Execution:                                   __________   __________
  Report:                                      __________   __________

Budget:

Resources:
  Equipment:
  Support Personnel:
  Test Personnel:

Testing Materials:
  Project Documentation:
  Software to Be Tested:
  Test Input:
  Test Documentation:
  Test Tools:

Test Training:

Writing the Test Plan

The test plan can be as formal or informal a document as the organization’s culture dictates. When the test team has completed Work Papers 8-1 through 8-10, they have completed the test plan. The test plan can either be the ten work papers or the information on those work papers transcribed to a more formal test plan. Generally, if the test team is small, the work papers are more than adequate. As the test team grows, it is better to formalize the test plan.

Figure 8-8 illustrates a four-part test plan standard. It is a restatement and slight clarification of the information contained on the work papers in this chapter.

Table 8-8. System test plan standard.

1. GENERAL INFORMATION

   1.1 Summary. Summarize the functions of the software and the tests to be performed.

   1.2 Environment and Pretest Background. Summarize the history of the project. Identify the user organization and computer center where the testing will be performed. Describe any prior testing and note results that may affect this testing.

   1.3 Test Objectives. State the objectives to be accomplished by testing.

   1.4 Expected Defect Rates. State the estimated number of defects for software of this type.

   1.5 References. List applicable references, such as:
       a) Project request authorization.
       b) Previously published documents on the project.
       c) Documentation concerning related projects.

2. PLAN

   2.1 Software Description. Provide a chart and briefly describe the inputs, outputs, and functions of the software being tested as a frame of reference for the test descriptions.

   2.2 Test Team. State who is on the test team and their test assignment(s).

   2.3 Milestones. List the locations, milestone events, and dates for the testing.

   2.4 Budgets. List the funds allocated to testing by task and checkpoint.

   2.5 Testing (system checkpoint). Identify the participating organizations and the system checkpoint where the software will be tested.

       2.5.1 Schedule (and budget). Show the detailed schedule of dates and events for the testing at this location. Such events may include familiarization, training, and data preparation, as well as the volume and frequency of the input. Show the resources allocated for testing.

       2.5.2 Requirements. State the resource requirements, including:
           a) Equipment. Show the expected period of use, types, and quantities of the equipment needed.
           b) Software. List other software that will be needed to support the testing that is not part of the software to be tested.
           c) Personnel. List the numbers and skill types of personnel that are expected to be available during the test from both the user and development groups. Include any special requirements, such as multishift operation or key personnel.

       2.5.3 Testing Materials. List the materials needed for the test, such as:
           a) System documentation
           b) Software to be tested and its medium
           c) Test inputs
           d) Test documentation
           e) Test tools

       2.5.4 Test Training. Describe or reference the plan for providing training in the use of the software being tested. Specify the types of training, personnel to be trained, and the training staff.

       2.5.5 Tests to Be Conducted. Reference the specific tests to be conducted at this checkpoint.

   2.6 Testing (system checkpoint). Describe the plan for the second and subsequent system checkpoints where the software will be tested, in a manner similar to paragraph 2.5.

3. SPECIFICATIONS AND EVALUATION

   3.1 Specifications

       3.1.1 Business Functions. List the business functional requirements established by earlier documentation.

       3.1.2 Structural Functions. List the detailed structural functions to be exercised during the overall test.

       3.1.3 Test/Function Relationships. List the tests to be performed on the software and relate them to the functions in paragraph 3.1.2. (A minimal sketch of such a test/function matrix follows this standard.)

       3.1.4 Test Progression. Describe the manner in which progression is made from one test to another, so that the entire test cycle is completed.

   3.2 Methods and Constraints

       3.2.1 Methodology. Describe the general method or strategy of the testing.

       3.2.2 Test Tools. Specify the type of test tools to be used.

       3.2.3 Extent. Indicate the extent of the testing, such as total or partial. Include any rationale for partial testing.

       3.2.4 Data Recording. Discuss the method to be used for recording the test results and other information about the testing.

       3.2.5 Constraints. Indicate anticipated limitations on the test due to test conditions, such as interfaces, equipment, personnel, and databases.

   3.3 Evaluation

       3.3.1 Criteria. Describe the rules to be used to evaluate test results, such as the range of data values used, combinations of input types used, and the maximum number of allowable interrupts or halts.

       3.3.2 Data Reduction. Describe the techniques to be used for manipulating the test data into a form suitable for evaluation, such as manual or automated methods, to allow comparison of the results that should be produced to those that are produced.

4. TEST DESCRIPTIONS

   4.1 Test (Identify). Describe the test to be performed (the format will vary for online test scripts).

       4.1.1 Control. Describe the test control, such as manual, semiautomatic, or automatic insertion of inputs; sequencing of operations; and recording of results.

       4.1.2 Inputs. Describe the input data and input commands used during the test.

       4.1.3 Outputs. Describe the output data expected as a result of the test and any intermediate messages that may be produced.

       4.1.4 Procedures. Specify the step-by-step procedures to accomplish the test. Include test setup, initialization, steps, and termination.

   4.2 Test (Identify). Describe the second and subsequent tests in a manner similar to that used in paragraph 4.1.
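Section 3.1.3 of the standard calls for relating each test to the functions it exercises. The following is a minimal, hypothetical Python sketch of such a test/function matrix; the test and function names are invented for illustration, and the coverage check simply flags functions that no test exercises.

  # Minimal sketch of a test/function relationship matrix (illustrative names).
  from collections import defaultdict

  matrix = defaultdict(set)  # function -> set of test IDs that exercise it

  def relate(test_id: str, function: str) -> None:
      """Record that a test exercises a business or structural function."""
      matrix[function].add(test_id)

  relate("T-01", "Enter new customer order")
  relate("T-01", "Validate customer credit")
  relate("T-02", "Produce daily order report")

  def uncovered(functions: list[str]) -> list[str]:
      """Return the functions that no test currently exercises."""
      return [f for f in functions if not matrix[f]]

  print(uncovered([
      "Enter new customer order",
      "Validate customer credit",
      "Produce daily order report",
      "Archive completed orders",   # deliberately uncovered in this example
  ]))
  # ['Archive completed orders']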

 

Task 6: Inspect the Test Plan

This task describes how to inspect the corrected software prior to its execution. Inspection is used, first, because it is more effective at identifying defects than validation methods, and second, because it is much more economical to remove defects at the inspection stage than to wait until unit or system testing. The task covers the inspection process, including the roles and training of the inspectors and the step-by-step procedures for completing the process.

In the implementation/rework step, the project team modifies the software and supporting documentation to comply with the specified changes. Thereafter, the software needs to be tested. However, as already noted, identifying defects through dynamic testing is more costly and time-consuming than performing a static inspection of the changed products or deliverables.

Inspection, then, is a process by which completed but untested products are evaluated to determine whether the specified changes were installed correctly. To accomplish this, inspectors examine the unchanged product, the change specifications, and the changed product. They look for three classes of defects: wrong, meaning the change was not made correctly; missing, meaning something that should have been changed was not; and extra, meaning something not intended was changed or added.

The inspection team reviews the product after each inspector has reviewed it individually. The team then reaches a consensus on the wrong, missing, and extra defects. The author (the person implementing the project change) is given those defect descriptions so that the product can be corrected prior to dynamic testing. After the corrections are made, they are re-inspected to verify correctness; then dynamic testing can commence. The purpose of inspections is twofold: to conduct an examination by peers, which normally improves the quality of work because the synergy of a team is applied to the solution, and to remove defects.
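The review, rework, and reinspection cycle just described can be summarized as a simple state machine. The following Python sketch is purely illustrative; the state names and the transition rule are assumptions made for this example, not part of a prescribed procedure.

  # Illustrative model of the inspection flow: individual review, team
  # consensus, author rework, reinspection, then dynamic testing.
  from enum import Enum, auto

  class InspectionState(Enum):
      INDIVIDUAL_REVIEW = auto()
      TEAM_CONSENSUS = auto()
      AUTHOR_REWORK = auto()
      REINSPECTION = auto()
      READY_FOR_DYNAMIC_TEST = auto()

  def next_state(state: InspectionState, defects_remaining: bool) -> InspectionState:
      """Advance the inspection workflow one step."""
      if state is InspectionState.INDIVIDUAL_REVIEW:
          return InspectionState.TEAM_CONSENSUS
      if state in (InspectionState.TEAM_CONSENSUS, InspectionState.REINSPECTION):
          return (InspectionState.AUTHOR_REWORK if defects_remaining
                  else InspectionState.READY_FOR_DYNAMIC_TEST)
      if state is InspectionState.AUTHOR_REWORK:
          return InspectionState.REINSPECTION
      return state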

Inspection Concerns

The concerns regarding the project inspection process are basically the same as those associated with any inspection process. They are as follows:

  • Inspections may be perceived to delay the start of testing. Because inspection occurs after a product is complete but before testing, it does in fact delay the start of dynamic testing. Therefore, many people have trouble accepting that the inspection process will ultimately reduce implementation time. In practice, however, inspections reduce the time required for dynamic testing, so the total time is reduced.

  • There is resistance to accepting the inspection role. There are two drawbacks to becoming an inspector. The first is time; an inspector loses time on his or her own work assignments. The second is that inspectors are often perceived as criticizing their peers. Management must provide adequate time to perform inspections and encourage a synergistic team environment in which inspectors are members offering suggestions, as opposed to being critics.

  • Space may be difficult to obtain for conducting inspections. Each deliverable is inspected individually by a team; therefore, meeting space is needed in which to conduct inspections. Most organizations have limited meeting space, so this need may be difficult to fulfill. Some organizations use cafeteria space during off hours; or if the group is small enough, they can meet in someone’s work area. However, it is important to hold meetings in an environment that does not affect others’ work.

  • Change implementers may resent having their work inspected prior to testing. Traditional software implementation methods have encouraged sloppy development practices, which rely on testing to identify and correct problems. Thus, people instituting changes may resist having their products inspected prior to having the opportunity to identify and correct the problems themselves. The solution is to encourage team synergism with the goal of developing optimal solutions, not criticizing the work of individuals.

  • Inspection results may affect individual performance appraisal. In a sense, the results of an inspection are also a documented list of a person’s defects, which can result in a negative performance appraisal. Management must emphasize that performance appraisals will be based on the final product, not an interim defect list.

Products/Deliverables to Inspect

Each software project team determines the products to be inspected, unless specific inspections are mandated by the project plan. Consider inspecting the following products:

  • Project requirements specifications

  • Software rework/maintenance documents

  • Updated technical documentation

  • Changed source code

  • Test plans

  • User documentation (including online help)

Formal Inspection Roles

The selection of the inspectors is critical to the effectiveness of the process. It is important to include appropriate personnel from all impacted functional areas and to carefully assign the predominant roles and responsibilities (project, operations, external testing, etc.). There should never be fewer than three inspection participants, nor more than five.

Each role must be filled on the inspection team, although one person may take on more than one role. The following subsections outline the participants and identify their roles and responsibilities in the inspection process.

Moderator

The moderator coordinates the inspection process and oversees any necessary follow-up tasks. It is recommended that the moderator not be a member of the project team. Specifically, the moderator does the following:

  1. Organizes the inspection by selecting the participants; verifies the distribution of the inspection materials; and schedules the overview, inspection, and required follow-up sessions.

  2. Leads the inspection process; ensures that all participants are prepared; encourages participation; maintains focus on finding defects; controls flow and direction; and maintains objectivity.

  3. Controls the inspection by enforcing adherence to the entry and exit criteria; seeks consensus on defects; makes the final decision on disagreements; directs the recording and categorizing of defects; summarizes inspection results; and limits inspections to one to two hours.

  4. Ensures the author completes the follow-up tasks.

  5. Completes activities listed in moderator checklist (reference Work Paper 8-11):

    1. Determine if the product is ready for inspection, based on entry criteria for the type of inspections to be conducted.

    2. Select inspectors and assign the roles of reader and recorder.

    3. Estimate inspection preparation time (e.g., 20 pages of written documentation per two hours of inspection; a preparation-time sketch follows Work Paper 8-11).

    4. Schedule the inspection meeting and send inspection meeting notices to participants.

    5. Determine if overview is required (e.g., if the product is lengthy or complex) with author and project leader.

    6. Oversee the distribution of the inspection material, including the meeting notice.

Work Paper 8-11. Moderator Checklist

_______ Check that entry criteria (inspection package cover sheet) have been met.
_______ Meet with author and team leader to select qualified inspection participants and assign roles.
_______ Determine need for an overview session.
_______ Schedule inspection meeting; complete inspection meeting notice.
_______ Gather materials from author, and distribute to inspection participants.
_______ Talk with inspectors to ensure preparation time.
_______ Complete self-preparation of material for inspection.
_______ Conduct inspection meeting.
_______ Ensure completion and distribution of inspection defect list and inspection summary.
_______ Verify conditional completion (moderator review or reinspection).
_______ Complete inspection certification report.
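The preparation-time guideline in step 3 of the moderator activities (roughly 20 pages of written documentation per two hours of inspection) can be turned into a quick planning calculation. Below is a minimal, hypothetical Python sketch; the rate constant comes from that rule of thumb, and the function name is an assumption.

  # Hypothetical helper for estimating inspection preparation time.
  PAGES_PER_HOUR = 20 / 2  # about 10 pages reviewed per hour of preparation

  def estimated_preparation_hours(pages: int, inspectors: int = 1) -> float:
      """Total expected preparation time, in hours, across all inspectors."""
      return (pages / PAGES_PER_HOUR) * inspectors

  # Example: a 35-page design change reviewed by 4 inspectors.
  print(round(estimated_preparation_hours(35, inspectors=4), 1), "hours")  # 14.0 hours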

Reader

The reader is responsible for setting the pace of the inspection. Specifically, the reader:

  • Is not also the moderator or author

  • Has a thorough familiarity with the material to be inspected

  • Presents the product objectively

  • Paraphrases or reads the product material line by line or paragraph by paragraph, pacing for clarity and comprehension

Recorder

The recorder is responsible for listing defects and summarizing the inspection results. He or she must have ample time to note each defect because this is the only information that the author will have to find and correct the defect. The recorder should avoid using abbreviations or shorthand that may not be understood by other team members. Specifically, the recorder:

  • May also be the moderator but cannot be the reader or the author

  • Records every defect

  • Presents the defect list for consensus by all participants in the inspection

  • Classifies the defects as directed by the inspectors by type, class, and severity, based on predetermined criteria

Author

The author is the originator of the product being inspected. Specifically, the author:

  • Initiates the inspection process by informing the moderator that the product is ready for inspection

  • May also act as an inspector during the inspection meeting

  • Assists the moderator in selecting the inspection team

  • Meets all entry criteria outlined in the appropriate inspection package cover sheet

  • Provides an overview of the material prior to the inspection for clarification, if requested

  • Clarifies inspection material during the process, if requested

  • Corrects the defects and presents finished rework to the moderator for sign-off

  • Forwards all materials required for the inspection to the moderator as indicated in the entry criteria

Inspectors

The inspectors should be trained staff who can effectively contribute to meeting objectives of the inspection. The moderator, reader, and recorder may also be inspectors. Specifically, the inspectors:

  • Must prepare for the inspection by carefully reviewing and understanding the material

  • Maintain objectivity toward the product

  • Record all preparation time

  • Present potential defects and problems encountered before and during the inspection meeting

Formal Inspection Defect Classification

The classification of defects provides meaningful data for their analysis and gives the opportunity for identifying and removing their cause. This results in overall cost savings and improved product quality.

Each defect should be classified as follows (a minimal recording sketch appears after this list):

  • By origin. Indicates the development phase in which the defect was generated (requirements, design, program, etc.).

  • By type. Indicates the cause of the defect. For example, code defects could be errors in procedural logic, or code that does not satisfy requirements or deviates from standards.

  • By class. Defects should be classified as missing, wrong, or extra, as described previously.

  • By severity. There are two severity levels: major (those that either interrupt system operation or cause an incorrect result) and minor (all those that are not major).
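A defect record built on this classification scheme is easy to represent. The following Python sketch is illustrative only; the enumeration values mirror the class and severity definitions above, while the origin and defect-type strings (drawn from the legend in Work Paper 8-13) are example data, not prescribed values.

  # Illustrative defect record classified by origin, type, class, and severity.
  from dataclasses import dataclass
  from enum import Enum

  class DefectClass(Enum):
      EXTRA = "E"
      MISSING = "M"
      WRONG = "W"

  class Severity(Enum):
      MAJOR = "MAJ"  # interrupts system operation or causes an incorrect result
      MINOR = "MIN"  # everything that is not major

  @dataclass
  class Defect:
      location: str        # where in the product the defect was found
      description: str
      origin: str          # development phase that generated it, e.g. "design"
      defect_type: str     # e.g. "LO" (logic), "DC" (documentation)
      defect_class: DefectClass
      severity: Severity

  example = Defect(
      location="order entry module, paragraph 3",
      description="Credit-limit check omitted from the change",
      origin="design",
      defect_type="LO",
      defect_class=DefectClass.MISSING,
      severity=Severity.MAJOR,
  )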

Inspection Procedures

The formal inspection process is segmented into the following five subtasks, each of which is distinctive and essential to the successful outcome of the overall process:

  1. Planning and organizing

  2. Overview session (optional)

  3. Individual preparation

  4. Inspection meeting

  5. Rework and follow-up

Planning and Organizing

The planning step defines the participants’ roles and defines how defects will be classified. It also initiates, organizes, and schedules the inspection.

Overview Session

This task is optional but recommended. Its purpose is to acquaint all inspectors with the product to be inspected and to minimize individual preparation time. This task is especially important if the product is lengthy, complex, or new; if the inspection process is new; or if the participants are new to the inspection process.

Individual Preparation

The purpose of this task is to allot time for each inspection participant to acquire a thorough understanding of the product and to identify any defects (per exit criteria).

The inspector’s responsibilities are to:

  • Become familiar with the inspection material

  • Record all defects found and time spent on the inspection preparation report (see Work Paper 8-12) and inspection defect list (see Work Paper 8-13)

    Work Paper 8-12. Inspection Preparation Report

    Software Project: _______________________________ Date: _________
    Name of Item Being Inspected: _____________________________________
    Item Version Identification: ______________________________________
    Material Size (lines/pages): __________ Expected Preparation Time: _________

    Preparation Log:

    Date          Time Spent
    __________    __________
    __________    __________

    Total Preparation Time: __________

    Defect List:

    Location                     Defect Description       Exit Criteria Violated
    _______________________     ________________         ________________
    _______________________     ________________         ________________
    _______________________     ________________         ________________
    _______________________     ________________         ________________
    _______________________     ________________         ________________
    _______________________     ________________         ________________

    Work Paper 8-13. Inspection Defect List

    Field Requirements (field name followed by the instructions for entering data):

    Project Name: The name of the project in which an interim deliverable is being inspected.
    Date: The date on which this work paper is completed.
    Name of Item Being Inspected: The number or name by which the item being inspected is known.
    Item Version Identification: The version number, if more than one version of the item is being inspected.
    Material Size: The size of the item being inspected. Code is frequently described as the number of lines of executable code; written documentation as the number of pages.
    Expected Preparation Time: Total expected preparation time for all the inspectors.
    Moderator: The name of the person leading the inspection.
    Phone: The phone number of the moderator.
    Inspection Type: Indicates whether this is an initial inspection or a reinspection of the item to verify defect correction.
    Release #: A further division of the version number, indicating the sequence in which variations of a version are released into test.
    Product Type: The type of product being inspected, such as source code.
    Location: The location of an item determined to be a defect by the formal inspection meeting.
    Origin/Defect Description: The name by which the defect is known in the organization, and the inspectors' opinion as to where the defect originated.
    Defect Phase: The phase in the development process at which the defect was uncovered.
    Defect Type: A formal name assigned to the defect. This work paper suggests 17 different defect types; your organization may wish to modify or expand this list.
    Severity Class: Indicate whether the defect is of the extra, missing, or wrong class. (See Chapter 8 for an explanation of defect class.)
    Severity MAJ/MIN: Indicate whether the defect is of major or minor severity. (See Chapter 8 for a discussion of major and minor severity.)

    Note: This form is completed by the inspector filling the recorder role during the formal inspection process.

    Project Name: _______________________________ Date: ________
    Name of Item Being Inspected: _____________________________________
    Item Version Identification: ______________________________________
    Material Size (lines/pages): ____________ Expected Preparation Time: __________
    Moderator: _______________________________ Phone: __________
    Inspection Type: ________ Inspection    ________ Reinspection
    Release #: _____________    Product Type: ____________

    Location    Origin/Defect Description    Defect Phase    Defect Type    Severity Class    Severity Maj/Min
    ______      ________________             ______          ______         ______            ______
    ______      ________________             ______          ______         ______            ______
    ______      ________________             ______          ______         ______            ______
    ______      ________________             ______          ______         ______            ______
    ______      ________________             ______          ______         ______            ______
    ______      ________________             ______          ______         ______            ______

    Defect Types:
    CM  Comments
    DA  Data
    DC  Documentation
    EN  English Readability
    IF  Interface
    LD  Logical Design
    LO  Logic
    LR  Linkage Requirements
    MN  Maintainability
    MS  Messages/Return Codes
    OT  Other
    PD  Physical Design
    PF  Performance
    RQ  Requirements
    SC  Spec Clarification
    ST  Standards
    TP  Test Plan

    Defect Class:
    E  Extra
    M  Missing
    W  Wrong

Each inspector performs a “desk review” of the material, with the following recommended guidelines:

  • It should be performed in one continuous time span.

  • The inspector must disregard the style of the work product (for example, the way a programmer chooses to build a report).

  • The emphasis should be on meeting standards and ensuring that output meets the product specification.

  • Every defect must be identified.

The activities involved in performing an individual inspection are as follows (a tracing sketch appears after the recording guidelines below):

  • Review the input product (product specification).

  • Review the output product (author’s work).

  • Identify each input specification by a unique identifier on the input product document.

  • Trace specifications one by one to the output product, essentially repeating the author’s process.

  • Cross-reference the output to the input specification (block out output that relates to the input specification).

  • Continue this process until all specifications have been traced and all output has been referenced.

During the individual inspection, each inspector records defects, questions, and concerns to be addressed during the inspection meeting. Recommended guidelines for recording these items are that:

  • Every defect should be recorded, no matter how small.

  • Areas of concern regarding correctness of input specifications should be noted as issues to discuss.

  • Significant inefficiencies in the output product should be noted as issues to discuss.

  • Any output that does not have an input specification should be marked as a defect (that is, “extra”).
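The tracing activities above lend themselves to simple bookkeeping. The following Python sketch is a hypothetical illustration: it assumes each input specification carries a unique identifier and that the inspector cross-references each block of the output product to the specification it implements; unreferenced items then surface as "missing" or "extra" candidates.

  # Illustrative cross-referencing of input specifications to output blocks.
  def trace(spec_ids: set[str], output_refs: dict[str, str]) -> dict[str, list[str]]:
      """Return candidate 'missing' specifications and 'extra' output blocks."""
      referenced = set(output_refs.values())
      return {
          "missing": sorted(spec_ids - referenced),           # spec never traced to output
          "extra": sorted(block for block, spec in output_refs.items()
                          if spec not in spec_ids),           # output with no valid spec
      }

  specs = {"SPEC-1", "SPEC-2", "SPEC-3"}
  refs = {"report header": "SPEC-1", "detail line": "SPEC-2", "footer totals": "SPEC-9"}
  print(trace(specs, refs))
  # {'missing': ['SPEC-3'], 'extra': ['footer totals']}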

Inspection Meeting

The purpose of the inspection meeting is to find defects in the product, not to correct defects or suggest alternatives. A meeting notice is sent to all participants (see Work Paper 8-14). The following are the responsibilities of the meeting participants, in the sequence in which they occur:

  • Moderator responsibilities (at the beginning of the inspection)

    • Introduce participants and identify roles.

    • Restate the objective of the inspection.

    • Verify inspectors’ readiness by checking time spent in preparation and whether all material was reviewed prior to the meeting (as indicated on each inspector’s inspection preparation report). If any of the participants are not prepared, the moderator must decide whether to continue with the inspection or reschedule it to allow for further preparation.

  • Reader responsibilities

    • Read or paraphrase the material.

  • Inspector responsibilities

    • Discuss potential defects and reach a consensus about whether the defects actually exist.

  • Recorder responsibilities

    • Record defects found, by origin, type, class, and severity, on the inspection defect list.

    • Classify each defect found, with concurrence from all inspectors.

    • Prepare the inspection defect summary (see Work Paper 8-15; a tallying sketch follows this list).

  • Author responsibilities

    • Clarify the product, as necessary.

  • Moderator responsibilities (at the end of the inspection)

    • Call the inspection to an end if a large number of defects are found early, indicating that the product is not ready for inspection. The author is then responsible for reinitiating an inspection, through the moderator, once the product is ready.

    • Determine the disposition of the inspection and any necessary follow-up work.

    • Approve the inspection defect list and the inspection summary, and then forward copies to the author and quality assurance personnel.

    • Sign off on the inspection certification report if no defects were found (see Work Paper 8-16).
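The recorder's defect summary (Work Paper 8-15, shown after the meeting notice below) is essentially a tally of the inspection defect list by type, class, and severity. The following Python sketch shows one hypothetical way to compute those tallies; the defect tuples are invented example data.

  # Illustrative tallying of an inspection defect list into summary counts.
  from collections import Counter

  # Each defect: (defect_type, severity, defect_class), using Work Paper 8-13
  # codes: severity "MAJ"/"MIN", class "E"/"M"/"W".
  defects = [
      ("LO", "MAJ", "M"),
      ("LO", "MIN", "W"),
      ("DC", "MIN", "M"),
      ("RQ", "MAJ", "W"),
  ]

  summary = Counter(defects)

  def cell(defect_type: str, severity: str, defect_class: str) -> int:
      """Count for one cell of the summary matrix."""
      return summary[(defect_type, severity, defect_class)]

  print("LO major missing:", cell("LO", "MAJ", "M"))                                   # 1
  print("Total major:", sum(n for (_, sev, _), n in summary.items() if sev == "MAJ"))  # 2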

Work Paper 8-14. Inspection Meeting Notice

Project Name: _____________________________________ Date: _________
Name of Item Being Inspected: _____________________________________
Item Version Identification: ______________________________________
Material Size (lines/pages): __________________ Expected Preparation Time: ________
Moderator: ____________________________ Phone: _________
Inspection Type: __________ Inspection    __________ Reinspection

Schedule:
Date: _______________________
Time: ________________________
Location: _____________________
Duration: _____________________

Participants:

Name                      Phone                    Role
__________________        _________________        ______________
__________________        _________________        ______________
__________________        _________________        ______________
__________________        _________________        ______________
__________________        _________________        ______________

Comments:
____________________________________________________________________
____________________________________________________________________
____________________________________________________________________
____________________________________________________________________

Work Paper 8-15. Inspection Defect Summary

Project Name: ___________________________________ Date: ___________
Name of Item Being Inspected: ______________________________________
Item Version Identification: _________________________________________
Material Size (lines/pages): _________________________________________
Moderator: ________________________________ Phone: _______________
Inspection Type: ____________ Inspection    ____________ Reinspection

For each defect type, tally the defects by class (E = Extra, M = Missing, W = Wrong) within each severity:

Defect Types                   Minor Defect Class           Major Defect Class
                               E    M    W    Total         E    M    W    Total
CM (Comments)
DA (Data)
DC (Documentation)
EN (English Readability)
IF (Interfaces)
LD (Logical Design)
LO (Logic)
LR (Linkage Requirements)
MN (Maintainability)
MS (Messages/Return Codes)
OT (Other)
PD (Physical Design)
PF (Performance)
RQ (Requirements)
SC (Spec Clarification)
ST (Standards)
TP (Test Plan)
Totals:

Work Paper 8-16. Inspection Certification Report

Project Name: _________________________________________________ Date: __________
Name of Item Being Inspected: ___________________________________________________
Item Version Identification: ______________________________________________________

The following people have inspected the named item and have agreed that all technical, contractual, quality, and other requirements and inspection criteria have been satisfied:

Moderator: __________________________________________________________________
Recorder: ___________________________________________________________________
Reader: _____________________________________________________________________
Author: _____________________________________________________________________
Software Quality Representative: __________________________________________________
Inspectors: __________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________

_________________________________
Moderator Signature/Date

 

Rework and Follow-Up

The purpose of this task is to complete required rework, obtain a sign-off or initiate a reinspection, and capture inspection results. Listed next are the responsibilities of the participants, in order of occurrence:

  • Author responsibilities

    • Complete all rework to correct defects found during the inspection.

    • Reinitiate the inspection process if the inspection ended with major rework required.

    • Contact the moderator to approve the rework if the inspection ended with minor rework required.

  • Moderator responsibilities

    • Review all rework completed and sign off on the inspection report after all the defects have been corrected.

  • Recorder responsibilities

    • Summarize defect data and ensure its entry into an inspection defect database.

Check Procedures

Work Paper 8-17 contains the items to evaluate to determine the accuracy and completeness of the test plan. The questions are designed so that a Yes response is desirable and a No response requires testers to evaluate whether that item should be addressed. If an item is not applicable, place a check mark in the N/A column. For No responses, enter a comment; if action is required, record the result of that action in the Comments column as well. (A minimal sketch of recording these responses follows; the work paper itself appears after it.)
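Below is a minimal, hypothetical Python sketch of how the checklist responses might be captured and the No items flagged for follow-up; the class and field names are assumptions made for illustration, not part of the work paper itself.

  # Hypothetical capture of quality-control checklist responses (Yes / No / N/A).
  from dataclasses import dataclass, field

  @dataclass
  class ChecklistItem:
      question: str
      response: str = "N/A"   # "Yes", "No", or "N/A"
      comments: str = ""

  @dataclass
  class Checklist:
      items: list[ChecklistItem] = field(default_factory=list)

      def answer(self, question: str, response: str, comments: str = "") -> None:
          self.items.append(ChecklistItem(question, response, comments))

      def open_issues(self) -> list[ChecklistItem]:
          """Items answered 'No' that still need evaluation or follow-up action."""
          return [item for item in self.items if item.response == "No"]

  qc = Checklist()
  qc.answer("Have all the business software functions been identified?", "Yes")
  qc.answer("Are the evaluation criteria measurable?", "No", "Usability criterion is subjective")
  for issue in qc.open_issues():
      print(issue.question, "->", issue.comments)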

Work Paper 8-17. Quality Control Checklist

For each item, check YES, NO, or N/A, and use the COMMENTS column as needed.

Software Function/Software Attribute Work Papers

1. Have all the business software functions been identified?
2. Does the sponsor/user agree that these are the appropriate software functions?
3. Is the software function identified by a commonly used name?
4. Are all the software functions described?
5. Have the criteria for evaluating the software functions been identified?
6. Are the evaluation criteria measurable?
7. Has the structure addressed:
   Reliability?
   Efficiency?
   Integrity?
   Usability?
   Maintainability?
   Testability?
   Flexibility?
   Portability?
   Reusability?
   Interoperability?
8. Have the criteria for each structural attribute been stated?
9. Are the evaluation criteria measurable?
10. Has the description for each structural attribute been given?

Work Papers on Tests to Be Conducted

1. Has the test been named?
2. Has the test been given a unique identifying number?
3. Has the test objective been stated clearly and distinctly?
4. Are the tests appropriate to evaluate the functions defined?
5. Is the level of detail on the document adequate for creating actual test conditions once the system is implemented?
6. Are the verification tests directed at project products?
7. Is the verification test named?
8. Is the name of the verification test adequate for test personnel to understand the intent of the test?
9. Have the products to be tested been identified?
10. Has the purpose of the verification test been stated?
11. Has the sequence in which each online test will be performed been identified?
12. Has the name for each test been included (optional)?
13. Have the criteria that would cause testing to be stopped been indicated?
14. Are the stop criteria measurable (i.e., there is no question as to whether the criteria have been met)?
15. Are the stop criteria reasonable?

Software Function/Test Matrix

1. Does the matrix contain all the software functions defined on Work Paper 8-2?
2. Does the matrix contain all the structural attributes defined on Work Paper 8-3?
3. Does the matrix contain all the tests described in test Work Papers 8-4, 8-5, and 8-6?
4. Are the tests related to the functions?
5. Are there tests for evaluating each software function?
6. Are there tests for evaluating each structural attribute?

Administrative Work Papers

1. Has a work paper been prepared for each test milestone?
2. Has the date for starting the testing been identified?
3. Has the date for starting test team training been identified?
4. Has the date for collecting the testing material been identified?
5. Has the concluding date of the test been identified?
6. Has the test budget been calculated?
7. Is the budget consistent with the test workload?
8. Is the schedule reasonable, based on the test workload?
9. Have the equipment requirements for the test been identified?
10. Have the software and documents needed for conducting the test been identified?
11. Have the personnel for the test been identified?
12. Have the system documentation materials for testing been identified?
13. Has the software to be tested been identified?
14. Has the test input been defined?
15. Have the needed test tools been identified?
16. Has the type of training that needs to be conducted been defined?
17. Have the personnel who require training been identified?
18. Will the test team be notified of the expected defect rate at each checkpoint?
19. Has a test summary been described?
20. Does this summary indicate which software is to be included in the test?
21. Does the summary indicate the general approach to testing?
22. Has the pretest background been defined?
23. Does the pretest background describe previous experience in testing?
24. Does the pretest background describe the sponsor's/user's attitude toward testing?
25. Has the test environment been defined?
26. Does the test environment indicate which computer center will be used for testing?
27. Does the test environment indicate permissions needed before beginning testing (if appropriate)?
28. Does the test environment state all the operational requirements that will be placed on testing?
29. Have all appropriate references been stated?
30. Has the purpose for listing references been stated?
31. Is the list of references complete?
32. Are the test tools consistent with the departmental standards?
33. Are the test tools complete?
34. Has the extent of testing been defined?
35. Have the constraints of testing been defined?
36. Are the constraints consistent with the resources available for testing?
37. Are the constraints reasonable, based on the test objectives?
38. Has the general method for recording test results been defined?
39. Is the data reduction method consistent with the test plan?
40. Is the information needed for data reduction easily identifiable in the test documentation?

Test Milestones Work Paper

1. Has the start date of testing been defined?
2. Are all the test tasks defined?
3. Are the start and stop dates for each test indicated?
4. Is the amount of time allotted for each task sufficient to perform the task?
5. Will all prerequisite tasks be completed before the tasks that depend on them are started?

Output

The single deliverable from this step is the test plan. It should be reviewed with the appropriate members of management to determine its adequacy. Once the plan is approved, the testers' primary responsibility is to execute the test in accordance with it and then report the results; testers should not be held responsible for omissions in a plan that management has approved.

Guidelines

Planning can be one of the most challenging aspects of the software testing process. The following guidelines can make the job a little easier:

  1. Start early. Even though you might not have all the details at hand, you can complete a great deal of the planning by starting on the general and working toward the specific. By starting early, you can also identify resource needs and plan for them before other areas of the project claim them.

  2. Keep the test plan flexible. Make it easy to add test cases, test data, and so on. The test plan itself should be changeable, but subject to change control.

  3. Review the test plan frequently. Other people’s observations and input greatly facilitate achieving a comprehensive test plan. The test plan should be subject to quality control just like any other project deliverable.

  4. Keep the test plan concise and readable. The test plan does not need to be large and complicated. In fact, the more concise and readable it is, the more useful it will be. Remember, the test plan is intended to be a communication document. The details should be kept in a separate reference document.

  5. Calculate the planning effort. Expect roughly one-third of the total testing effort to be spent on each of planning, execution, and evaluation.

  6. Spend the time to develop a complete test plan. The better the test plan, the easier it will be to execute the tests.

Summary

The test plan drives the remainder of the testing effort. Well-planned test projects tend to cost less and get completed earlier than projects with incomplete test plans. It is not unusual to spend approximately one-third of the total test effort on planning, but that time reaps rewards during test execution and reporting.

This chapter covers test planning from a risk-oriented approach. Test objectives are designed to address the significant risks. The objectives are decomposed into test transactions. The test plan is completed when the administrative data, such as schedule and budget, are added to the written test plan.

 
