Chapter 3. Building the Software Testing Process

The testing process is the means by which the test strategy is achieved. The team that develops the testing process uses the test strategy as the requirements for the process. Their task is to determine the tests and methods of performance needed to address the risks that the test strategy identifies.

Following a test process has two significant advantages. First, the tester does not have to determine the process to be used for software testing because that process already exists. Second, when all testers follow the same process, they will develop better means for testing. These means will be incorporated into the process by continually improving the software testing process.

This chapter describes the construction of a workbench for testing software. The workbench illustrates both the “do” and the “check” procedures. The “do” procedures are the test procedures, and the “check” procedures determine whether the “do” procedures were performed correctly. The chapter then identifies the considerations for customizing a process for testing, as well as explains the need for a test process. Part Three of this book details the seven steps proposed as a generic test process.

Software Testing Guidelines

Experience has shown there are six general software testing guidelines that, if followed, can significantly improve software testing. These guidelines are the primary reason for building the software testing process:

  1. Software testing should reduce software development risk. Risk is present in all software development projects, and testing is a control that reduces those risks.

  2. Testing should be performed effectively. Testing should be performed in a manner in which the maximum benefits are achieved from the software testing efforts.

  3. Testing should uncover defects. Ideally, at the conclusion of testing there should be no defects in the software.

  4. Testing should be performed using business logic. Money should not be spent on testing unless it can be spent economically to reduce business risk. In other words, it does not make business sense to spend more money on testing than the losses that might occur from the business risk.

  5. Testing should occur throughout the development life cycle. Testing is not a phase, but rather a process. It begins when development begins and ends when the software is no longer being used.

  6. Testing should test both structure and function. Testing should test the functional requirements to ensure they are correct, and test the adequacy of the software structure to process those functional requirements in an effective and efficient manner.

Note

To learn how to customize the test process for a specific software system, see the section “Customizing the Software-Testing Process” later in this chapter.

Guideline #1: Testing Should Reduce Software Development Risk

Senior IT executives need to develop their IT strategy, and strategic plans are converted into business initiatives. The planning cycle, comprising the plan and do components of the plan-do-check-act (PDCA) cycle, is easy to understand. From a senior IT executive's perspective, the check component must address business risk.

Risk is the probability that undesirable events will occur. These undesirable events will prevent the organization from successfully implementing its business initiatives. For example, there is the risk that the information used in making business decisions will be incorrect or late. If the risk turns into reality and the information is late or incorrect, an erroneous business decision may result in a failed initiative.

Controls are the means an organization uses to minimize risk. Software testing is a control that contributes to eliminating or minimizing risks; thus, senior executives rely on controls such as software testing to assist them in fulfilling their business objectives.

The purpose of controls such as software testing is to provide information to management so that they can better react to risky situations. For example, testing may indicate that the system will be late or that there is a low probability that the information produced will be correct. Knowing this information, management can then make decisions to minimize that risk: Knowing that the project may be late, they could assign additional personnel to speed up the software development effort.

Testers must understand that their role in a business is to evaluate risk and report the results to management. Viewed from this perspective, testers must first ensure they understand the business risk, and then develop test strategies focused on those risks. The highest business risk should receive the most test resources, whereas the lowest business risk should receive the fewest resources. This way, the testers are assured that they are focusing on what is important to their management.

Guideline #2: Testing Should Be Performed Effectively

Effectiveness means getting the maximum benefit from minimum resources, which requires a well-defined process. When such a process is in place, there should be little variance in the cost of performing a task from tester to tester. If no well-defined process is in place, the cost of performing a task can vary significantly from one tester to another.

From an effectiveness viewpoint, the objective of the test process is twofold. First, a defined process reduces variance by having each tester perform the work in a consistent manner. Second, the process reduces variance further through continuous process improvement. Once variance is minimized, testers can perform the tests they commit to within the time frame and at the cost they estimate.

Guideline #3: Testing Should Uncover Defects

All testing focuses on discovering and eliminating defects or variances from what is expected. There are two types of defects:

  • Variance from specifications. A defect from the perspective of the builder of the product.

  • Variance from what is desired. A defect from a user's (or customer's) perspective.

Testers need to identify both types of defects. Defects generally fall into one of the following three categories (illustrated in the sketch after this list):

    • Wrong. The specifications have been implemented incorrectly. This defect is a variance from what the customer/user specified.

    • Missing. A specified or wanted requirement is not in the built product. This can be a variance from specification, an indication that the specification was not implemented, or a requirement of the customer identified during or after the product was built.

    • Extra. A requirement incorporated into the product was not specified. This is always a variance from specifications, but may be an attribute desired by the user of the product. However, it is considered a defect.
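
To make the three defect categories concrete, the following minimal Python sketch compares specified requirements against what was actually built; the requirement names and the set-based comparison are illustrative assumptions, not an example from this book.

    # Specified requirements mapped to expected behavior, and what the built product does.
    specified = {"login": "lock after 3 failed attempts", "report": "totals by region"}
    built = {"login": "lock after 5 failed attempts", "report": "totals by region",
             "export": "download as CSV"}

    wrong = {name for name in specified if name in built and built[name] != specified[name]}
    missing = {name for name in specified if name not in built}
    extra = {name for name in built if name not in specified}

    print("wrong:", wrong)      # {'login'}  - implemented, but not as specified
    print("missing:", missing)  # set()      - every specified requirement is present
    print("extra:", extra)      # {'export'} - built but never specified; still a defect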

Defects Versus Failures

A defect found in the system being tested can be classified as wrong, missing, or extra. The defect may be within the software or in the supporting documentation. While the defect is a flaw in the system, it has no negative impact until it affects the operational system.

A defect that causes an error in operation or negatively impacts a user/customer is called a failure. The main concern with defects is that they will turn into failures. It is the failure that damages the organization. Some defects never turn into failures. On the other hand, a single defect can cause millions of failures.

Why Are Defects Hard to Find?

Some defects are easy to spot, whereas others are more subtle. There are at least two reasons defects go undetected:

  • Not looking. Tests often are not performed because a particular test condition is unknown. Also, some parts of a system go untested because developers assume software changes don’t affect them.

  • Looking but not seeing. This is like losing your car keys only to discover they were in plain sight the entire time. Sometimes developers become so familiar with their system that they overlook details, which is why independent verification and validation should be used to provide a fresh viewpoint.

Defects typically found in software systems are the results of the following circumstances:

  • IT improperly interprets requirements. Information technology (IT) staff misinterpret what the user wants, but correctly implement what the IT people believe is wanted.

  • The users specify the wrong requirements. The specifications given to IT staff are erroneous.

  • The requirements are incorrectly recorded. Information technology staff fails to record the specifications properly.

  • The design specifications are incorrect. The application system design does not achieve the system requirements, but the design as specified is implemented correctly.

  • The program specifications are incorrect. The design specifications are incorrectly interpreted, making the program specifications inaccurate; however, the programs may still be properly coded to achieve those inaccurate specifications.

  • There are errors in program coding. The program is not coded according to the program specifications.

  • There are data entry errors. Data entry staff incorrectly enter information into the computers.

  • There are testing errors. Tests either falsely detect an error or fail to detect one.

  • There are mistakes in error correction. The implementation team makes errors in implementing your solutions.

  • The corrected condition causes another defect. In the process of correcting a defect, the correction process itself institutes additional defects into the application system.

Usually, you can identify the test tactics for any test process easily; it’s estimating the costs of the tests that is difficult. Testing costs depend heavily on when in the project life cycle testing occurs. As noted in Chapter 2, the later in the life cycle testing occurs, the higher the cost. The cost of a defect is twofold: You pay to identify a defect and to correct it.

Guideline #4: Testing Should Be Performed Using Business Logic

The cost of identifying and correcting defects increases exponentially as the project progresses. Figure 3-1 illustrates the accepted industry standard for estimating costs and shows how costs dramatically increase the later you find a defect. A defect is cheapest to fix when it is corrected in the same SDLC phase in which it occurred. Let's assume a defect found and corrected during the SDLC design phase costs x to fix. If that same defect is corrected during the system test phase, it will cost 10x to fix. The same defect corrected after the system goes into production will cost 100x. Clearly, identifying and correcting defects early is the most cost-effective way to develop an error-free system.

Figure 3-1. Relative cost versus the project phase.
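
As a minimal sketch of this arithmetic, the following Python applies the relative-cost multipliers to a set of defect counts; the phase names, the base cost, and the example counts are illustrative assumptions rather than industry figures.

    # Relative cost of fixing a defect, keyed by the phase in which it is found:
    # 1x in the phase where it was introduced, 10x in system test, 100x in production.
    RELATIVE_COST = {"design": 1, "system_test": 10, "production": 100}

    def correction_cost(defects_by_phase, base_cost=100.0):
        """Estimate total correction cost; base_cost is the assumed cost of fixing
        one defect in the same phase in which it was introduced."""
        return sum(RELATIVE_COST[phase] * count * base_cost
                   for phase, count in defects_by_phase.items())

    # The same 30 defects cost far more when most of them escape to production.
    print(correction_cost({"design": 25, "system_test": 4, "production": 1}))    # 16500.0
    print(correction_cost({"design": 5, "system_test": 5, "production": 20}))    # 205500.0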

Testing should begin during the first phase of the life cycle and continue throughout the life cycle. Although this book is centered on V-concept testing (detailed in Chapter 6), it’s important to recognize that life-cycle testing is essential to reducing the cost of testing.

Guideline #5: Testing Should Occur Throughout the Development Life Cycle

Life-cycle testing involves continuous testing of the solution even after software plans are complete and the tested system is implemented. At several points during the development process, the test team should test the system to identify defects at the earliest possible point.

Life-cycle testing cannot occur until you formally develop your test process. IT must provide and agree to a strict schedule for completing various phases of the test process for proper life-cycle testing to occur. If IT does not determine the order in which they deliver completed pieces of software, appropriate tests are impossible to schedule and conduct.

Testing is best accomplished by forming a test team. The test team must use structured methodologies, but it should not use the same methodology for testing that was used to develop the system; the effectiveness of the test team depends on developing the system under one methodology and testing it under another. As illustrated in Figure 3-2, when the project starts, both the development process and the system test process begin. Thus, the testing and implementation teams begin their work at the same time and with the same information. The development team defines and documents the requirements for implementation purposes, and the test team uses those requirements for the purpose of testing the system. At appropriate points during the development process, the test team runs the compliance process to uncover defects. The test team should use the structured testing techniques outlined in this book as a basis for evaluating the corrections.

Figure 3-2. Life-cycle testing concepts.

As you’re testing the implementation, prepare a series of tests that your IT department can run periodically after your revised system goes live. Testing does not stop once you’ve completely implemented your system; it must continue until you replace or update it again.

Guideline #6: Testing Should Test Both Function and Structure

When testers test your project team’s solution, they’ll perform functional or structural tests. Functional testing is sometimes called black box testing because no knowledge of the system’s internal logic is used to develop test cases. For example, if a certain function key should produce a specific result when pressed, a functional test would be to validate this expectation by pressing the function key and observing the result. When conducting functional tests, you’ll be using validation techniques almost exclusively.

Conversely, structural testing is sometimes called white box testing because knowledge of the system's internal logic is used to develop hypothetical test cases. Structural tests predominantly use verification techniques. If a software development team creates a block of code that will allow a system to process information in a certain way, a test team would verify this structurally by reading the code and, given the system's structure, judging whether the code could reasonably work. If they felt it could, they would plug the code into the system and run an application to structurally validate it. Each method has its pros and cons, as follows:

  • Functional testing advantages:

    • Simulates actual system usage

    • Makes no system structure assumptions

  • Functional testing disadvantages:

    • Includes the potential to miss logical errors in software

    • Offers the possibility of redundant testing

  • Structural testing advantages:

    • Enables you to test the software’s logic

    • Enables you to test structural attributes, such as efficiency of code

  • Structural testing disadvantages:

    • Does not ensure that you’ve met user requirements

    • May not mimic real-world situations

Why Use Both Testing Methods?

Both methods together validate the entire system. For example, a functional test case might be taken from the documentation description of how to perform a certain function, such as accepting bar code input. A structural test case might be taken from a technical documentation manual. To effectively test systems, you need to use both methods.
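
To make the distinction concrete, here is a minimal Python sketch that contrasts the two approaches; the shipping_cost function and its pricing rules are hypothetical illustrations, not an example from this book.

    def shipping_cost(weight_kg, express=False):
        """Unit under test: flat fee plus a per-kilogram rate, doubled for express orders."""
        if weight_kg <= 0:
            raise ValueError("weight must be positive")
        cost = 5.0 + 2.0 * weight_kg
        return cost * 2 if express else cost

    # Functional (black box) test: derived only from the stated requirement that a
    # 3 kg standard shipment costs 11.00; no knowledge of the code's internal logic.
    def test_standard_shipment_charged_per_requirement():
        assert shipping_cost(3) == 11.0

    # Structural (white box) test: written after reading the code, deliberately
    # exercising the error-handling branch that a requirements-only test could miss.
    def test_rejects_nonpositive_weight():
        try:
            shipping_cost(0)
        except ValueError:
            return
        raise AssertionError("expected ValueError for a non-positive weight")

Run under a test runner such as pytest, the first case validates behavior against the specification, while the second verifies an internal path; together they cover both what the system should do and how it does it.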

Structural and Functional Tests Using Verification and Validation Techniques

Testers use verification techniques to confirm the reasonableness of a system by reviewing its structure and logic. Validation techniques, on the other hand, apply to the physical execution of the system and determine whether the expected results occur. You'll conduct structural tests primarily using verification techniques, and functional tests primarily using validation techniques. Examples of using verification to conduct structural tests include

  • Feasibility reviews. Tests for this structural element would verify the logic flow of a unit of software.

  • Requirements reviews. These reviews verify software attributes; for example, in any particular system, the structural limits of how much load (transactions or number of concurrent users) a system can handle.

Functional tests are virtually all validation tests, and inspect how the system performs. Examples of this include

  • Unit testing. These tests validate that a single program or unit functions properly—for example, pressing a function key to complete an action.

  • Integration testing. The system runs tasks that involve more than one application or database to validate that it performed the tasks accurately.

  • System testing. These tests simulate operation of the entire system and validate that it ran correctly.

  • User acceptance. This real-world test means the most to your business. Unfortunately, there's no way to conduct it in isolation. Once your organization's staff, customers, or vendors begin to interact with your system, they'll validate that it functions properly for you.

Verification and validation are not mutually exclusive, however; during your project you will also conduct some functional tests with verification and some structural tests with validation. Table 3-3 shows the relationships just explained, listing each of the six test activities, who performs them, and whether the activity is an example of verification or validation. For example, when conducting a feasibility review, developers and users verify that the software could conceivably perform, once the solution is implemented, the way the developers expect.

Table 3-3. Functional Testing

TEST PHASE             PERFORMED BY                       VERIFICATION   VALIDATION
Feasibility Review     Developers, users                       X
Requirements Review    Developers, users                       X
Unit Testing           Developers                                             X
Integration Testing    Developers                                             X
System Testing         Developers with user assistance                        X
User Acceptance        Users                                                  X

Note

You can learn more about verification and validation techniques in Chapters 9 and 10, respectively.

Now that you’ve seen how you must verify and validate your system structurally and functionally, the last tool to introduce is a process template for employing these tactics, called the tester’s workbench.

Workbench Concept

To understand testing methodology, you must understand the workbench concept. In IT organizations, workbenches are more frequently referred to as phases, steps, or tasks. The workbench is a way of illustrating and documenting how a specific activity is to be performed. Defining workbenches is normally the responsibility of a process management committee, which in the past has been more frequently referred to as a standards committee. There are four components to each workbench:

  1. Input. The entrance criteria or deliverables needed to complete a task.

  2. Procedures to do. The work tasks or processes that will transform the input into the output.

  3. Procedures to check. The processes that determine whether the output meets the standards.

  4. Output. The exit criteria or deliverables produced from the workbench.

Note

Testing tools are not considered part of the workbench because they are incorporated into either the procedures to do or procedures to check. The workbench is illustrated in Figure 3-3, and the software development life cycle, which is comprised of many workbenches, is illustrated in Figure 3-4.

Figure 3-3. The workbench for testing software.

Figure 3-4. The test process contains multiple workbenches.

The workbench concept can be used to illustrate one of the steps involved in building systems. The programmer's workbench consists of the following steps (modeled in the sketch after this list):

  1. Input products (program specs) are given to the producer (programmer).

  2. Work is performed (e.g., coding/debugging); a procedure is followed; a product or interim deliverable (e.g., a program/module/unit) is produced.

  3. Work is checked to ensure product meets specs and standards, and that the do procedure was performed correctly.

  4. If the check process finds problems, the product is sent back for rework.

  5. If the check process finds no problems, the product is released to the next workbench.
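
The following minimal Python sketch models that do-check-rework loop; the function names and the toy "coding" workbench are illustrative assumptions, not a prescribed implementation.

    def run_workbench(work_input, do_procedure, check_procedure, rework_procedure, max_rework=3):
        """Apply the 'do' procedure, then the 'check' procedure; rework until the check
        passes (release to the next workbench) or the rework limit is exhausted."""
        product = do_procedure(work_input)
        for _ in range(max_rework):
            problems = check_procedure(product)
            if not problems:
                return product                                # release to the next workbench
            product = rework_procedure(product, problems)     # send back for rework
        raise RuntimeError("product still fails its check after the rework limit")

    # Toy programmer's workbench: the check insists that the produced module has a docstring.
    module = run_workbench(
        work_input="spec: add two numbers",
        do_procedure=lambda spec: "def add(a, b): return a + b",
        check_procedure=lambda src: [] if '"""' in src else ["missing docstring"],
        rework_procedure=lambda src, problems: 'def add(a, b):\n    """Add two numbers."""\n    return a + b',
    )
    print(module)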

Chapters 6 through 13, which walk you through testing your software development project, describe each step in workbench format. Each chapter begins with a workbench description for that step.

Testing That Parallels the Software Development Process

When the processes for developing software and testing software are shown in a single diagram, they are frequently presented as what is known as a “V diagram.” On one side of the V are the steps for developing software, and on the other side are the steps for testing software. Figure 3-5 illustrates the V diagram for the seven-step software-testing process presented in this book.

Figure 3-5. The V diagram for the seven-step software-testing process.

The process for developing software contains the following generic steps:

  1. Define the requirements for the software system.

  2. Design the software system based on the requirements.

  3. Build the software based on the design.

  4. Test the software (which involves unit testing and frequently integration testing).

  5. Install the software in an operational environment.

  6. Maintain the software as changes are needed (Note: unless changes are significant, the developers will test the changes and then install the new version.)

The process for testing software involves the following steps:

  1. Prepare for testing a software system.

  2. Plan the tests that will be conducted on the software system.

  3. Execute the steps as defined in the test plan.

  4. Conduct acceptance testing by the software system users. (Note: This testing may be assisted by the IT independent test group.)

  5. Analyze test results and report them to the appropriate software system stakeholders.

  6. Test the installation of the software into the operational environment, and test changes made to the software after it is placed into the operational environment.

  7. Conduct a post-implementation analysis to evaluate the effectiveness and efficiency of the test process.

Each of the seven steps in the software-testing process can be represented by the software testing workbench. Because each step itself comprises multiple tasks, there will be multiple workbenches within the overall workbench for each step.

An IT organization should customize the seven-step testing process for its particular situation. The seven-step process presented in this book is one that testers might use for a large, complex software system. The following sections discuss eight considerations that your organization should use when customizing the seven-step software-testing process.

Customizing the Software-Testing Process

The following are eight considerations you need to address when customizing the seven-step software-testing process:

  1. Determine the test strategy objectives.

  2. Determine the type of development project.

  3. Determine the type of software system.

  4. Determine the project scope.

  5. Identify the software risks.

  6. Determine when testing should occur.

  7. Define the system test plan standard.

  8. Define the unit test plan standard.

Note

You can use the CD included with this book to customize the templates in the seven-step software-testing process for your organization.

Determining the Test Strategy Objectives

Test strategy is normally developed by a team very familiar with the business risks associated with the software; tactics are developed by the test team. Thus, the test team needs to acquire and study the test strategy. In this study, the test team should ask the following questions:

  • What is the ranking of the test factors?

  • Which of the high-level risks are the most significant?

  • What damage can be done to the business if the software fails to perform correctly?

  • What damage can be done to the business if the software is not completed on time?

  • Which individuals are most capable of understanding the impact of the identified business risks?

Determining the Type of Development Project

The type of development project refers to the environment/methodology in which the software will be developed. As the environment changes, so does the testing risk. For example, the risks associated with a traditional development effort differ from the risks associated with off-the-shelf purchased software. Different testing approaches must be used for different types of projects, just as different development approaches are used (see Table 3-6).

Table 3-6. Test tactics for different project types.

TYPE                                 CHARACTERISTICS                          TEST TACTICS

Traditional system development       Uses a system development methodology    Test at end of each task/step/phase
(and most perfective maintenance)    User knows requirements                  Verify that specs match need
                                     Development determines structure         Test function and structure

Iterative development/prototyping    Requirements unknown                     Verify that tools are used properly
                                     Structure predefined                     Test functionality

System maintenance                   Modify structure                         Test structure
                                                                              Works best with release methods
                                                                              Requires regression testing

Purchased/contracted software        Structure unknown                        Verify that functionality matches need
                                     May contain defects                      Test functionality
                                     Functionality defined in user            Test fit into environment
                                       documentation
                                     Documentation may vary from software

Determining the Type of Software System

The type of software system refers to the processing that will be performed by that system. This step contains 16 different software system types. However, a single software system may incorporate more than one of these types. Identifying the specific software type will help build an effective test plan.

  • Batch (general). Can be run as a normal batch job and involves no unusual hardware or input/output actions (for example, a payroll program or a wind tunnel data analysis program).

  • Event control. Performs real-time data processing as a result of external events (for example, a program that processes telemetry data).

  • Process control. Receives data from an external source and issues commands to that source to control its actions based on the received data.

  • Procedure control. Controls other software (for example, an operating system that controls the execution of time-shared and batch computer programs).

  • Advanced mathematical models. Resembles simulation and business strategy software, but has the additional complexity of heavy use of mathematics.

  • Message processing. Handles input and output messages, processing the text or information contained therein.

  • Diagnostic software. Detects and isolates hardware errors in the computer where it resides or in other hardware that can communicate with that computer.

  • Sensor and signal processing. Similar to message processing, but requires greater processing to analyze and transform the input into a usable data processing format.

  • Simulation. Simulates an environment, mission situation, or other hardware, and the inputs from these, to enable a more realistic evaluation of a computer program or hardware component.

  • Database management. Manages the storage and access of (typically large) groups of data. Such software can also prepare reports in user-defined formats based on the contents of the database.

  • Data acquisition. Receives information in real time and stores it in some form suitable for later processing (for example, software that receives data from a space probe and files it for later analysis).

  • Data presentation. Formats and transforms data, as necessary, for convenient and understandable displays for humans. Typically, such displays would be for some screen presentation.

  • Decision and planning aids. Uses artificial intelligence techniques to provide an expert system to evaluate data and provide additional information and consideration for decision and policy makers.

  • Pattern and image processing. Generates and processes computer images. Such software may analyze terrain data and generate images based on stored data.

  • Computer system software. Provides services to operational computer programs.

  • Software development tools. Provides services to aid in the development of software (for example, compilers, assemblers, and static and dynamic analyzers).

Determining the Project Scope

The project scope refers to the totality of activities to be incorporated into the software system being tested—the range of system requirements/specifications to be understood. The scope of new system development is different from the scope of changes to an existing system. This step describes some of the necessary characteristics, but this list must be expanded to encompass the requirements of the specific software system being tested. The scope of the project usually delimits the scope of the testing effort. Consider the following issues:

  • New systems development:

    • What business processes are included in the software?

    • Which business processes will be affected?

    • Which business areas will be affected?

    • What existing systems will interface with this system?

    • Which existing systems will be affected?

  • Changes to existing systems:

    • Are the changes corrective or is new functionality being added?

    • Is the change caused by new standards?

    • What other systems are affected?

    • Is regression testing needed?

Identifying the Software Risks

Strategic risks are the high-level business risks faced by the software system; tactical risks are subsets of those strategic risks. The purpose of decomposing the strategic risks into tactical risks is to help create the test scenarios that will address those risks; it is difficult to create test scenarios for high-level risks.

Tactical risks can be categorized as follows:

  • Structural risks

  • Technical risks

  • Size risks

Work Papers 3-1, 3-2, and 3-3 provide the method for assessing the structural, technical, and size risks, respectively. These Work Papers are to be completed by the test team interacting with the development team and selected end users/customers. Each of the three Work Papers identifies a risk, a rating for the risk, and a weight associated with the risk. The identification of the risk and its associated weight are supplied as part of the tactical risk assessment process. Weight is an indication of the relative importance of each risk in relationship to the other risks.

Work Paper 3-1. Structural Risk Assessment

TEST DOCUMENT

Structural Risk Assessment

 

Ratings: L - Low

M - Medium

H - High

NA - Not Applicable

RATING × WEIGHT=SCORE

RISK

RATINGS

1.

Amount of time since last major change to existing area of business

 

3

 

• More than 2 years

L=1

 
 

• 1 to 2 years; unknown

M=2

 
 

• Less than 1 year

H=3

 
 

• No automated system

H=3

 

2.

Estimated frequency of change to proposed/existing systems

 

3

 

• No existing automated system; or development effort insufficient for estimate

NA=0

 
 

• Fewer than 2 per year

L=1

 
 

• 2 to 10 per year

M=2

 
 

• More than 20 per year

H=3

 

3.

Estimated extent of total changes in business area methods in last year in percentage of methods affected

 

3

 

• No changes

NA=0

  
 

• Less than 10%

L=1

 
 

• 10 to 25%

M=2

  
 

• More than 25%

H=3

 

4.

Magnitude of changes in business area associated with this project

 

3

 

• Minor change(s)

L=1

 
 

• Significant but manageable change

M=2

 
 

• Major changes to system functionality and/or resource needs

H=4

 

5.

Project performance site

 

2

 

• Company facility

L=1

 
 

• Local noncompany facility

M=2

 
 

• Not in local area

H=5

 

6.

Critical staffing of project

 

2

 

• In-house

L=1

 
 

• Contractor, sole-source

M=2

 
 

• Contractor, competitive-bid

H=6

 

7.

Type of project organization

 

2

 

• Line and staff: project has total management control of personnel

L=1

 
 

• Mixture of line and staff with matrix-managed elements

M=2

 
 

• Matrix: no management control transferred to project

H=3

 

8.

Potential problems with subcontractor relationship

 

5

 

• Not applicable to this project

NA=0

 
 

• Subcontractor not assigned to isolated or critical task: prime contractor has previously managed subcontractor successfully

L=1

 
 

• Subcontractor assigned to all development tasks in subordinate role to prime contractor: company has favorable experience with subcontractor on other effort(s)

M=2

 
 

• Subcontractor has sole responsibility for critical task; subcontractor new to company

H=3

 

9.

Status of the ongoing project training

 

2

 

• No training plan required

NA=0

 
 

• Complete training plan in place

L=1

 
 

• Some training in place

M=2

 
 

• No training available

H=3

 

10.

Level of skilled personnel available to train project team

 

3

 

• No training required

NA=0

 
 

• Knowledgeable on all systems

L=1

 
 

• Knowledgeable on major components

M=2

 
 

• Few components understood

H=3

 

11.

Accessibility of supporting reference and or compliance documents and other information on proposed/existing system

 

3

 

• Readily available

L=1

 
 

• Details available with some difficulty and delay

M=2

 
 

• Great difficulty in obtaining details, much delay

H=3

 

12.

Status of documentation in the user areas

 

3

 

• Complete and current

L=1

 
 

• More than 75% complete and current

M=2

 
 

• Nonexistent or outdated

H=6

 

13.

Nature of relationship with users in respect to updating project documentation to reflect changes that may occur during project development

 

3

 

• Close coordination

L=1

 
 

• Manageable coordination

M=2

 
 

• Poor coordination

H=5

 

14.

Estimated degree to which project documentation reflects actual business need

 

3

 

• Excellent documentation

L=1

 
 

• Good documentation but some problems with reliability

M=2

 
 

• Poor or inadequate documentation

H=3

 

15.

Quality of documentation for the proposed system

 

3

 

• Excellent standards: adherence and execution are integral part of system and program development

L=1

 
 

• Adequate standards: adherence is not consistent

M=2

 
 

• Poor or no standards: adherence is minimal

H=3

 

16.

Quality of development and production library control

 

3

 

• Excellent standards: superior adherence and execution

L=1

 
 

• Adequate standards: adherence is not consistent

M=2

 
 

• Poor or no standards: adherence is minimal

H=3

 

17.

Availability of special test facilities for subsystem testing

 

2

 

• Complete or not required

L=1

 
 

• Limited

M=2

 
 

• None available

H=3

 

18.

Status of project maintenance planning

 

2

 

• Current and complete

L=1

 
 

• Under development

M=2

 
 

• Nonexistent

H=3

 

19.

Contingency plans in place to support operational mission should application fail

 

2

 

• None required

NA=0

 
 

• Complete plan

L=1

 
 

• Major subsystems addressed

M=2

 
 

• Nonexistent

H=3

 

20.

User approval of project specifications

 

4

 

• Formal, written approval based on structured, detailed review processes

L=1

 
 

• Formal, written approval based on informal unstructured, detailed review processes

M=2

 
 

• No formal approval; cursory review

H=3

 

21.

Effect of external systems on the system

 

5

 

• No external systems involved

NA=0

 
 

• Critical intersystem communications controlled through interface control documents; standard protocols utilized: stable interfaces

L=1

 
 

• Critical intersystem communications controlled through interface control documents: some nonstandard protocols: interfaces change infrequently

M=2

 
 

• Not all critical intersystem communications controlled through interface control documents: some nonstandard protocols: some interfaces change frequently

H=3

 

22.

Type and adequacy of configuration management planning

 

2

 

• Complete and functioning

L=1

 
 

• Undergoing revisions for inadequacies

M=2

 
 

• None available

H=3

 

23.

Type of standards and guidelines to be followed by project

 

4

 

• Standards use structured programming concepts, reflect current methodology, and permit tailoring to nature and scope of development project

L=1

 
 

• Standards require a top-down approach and offer some flexibility in application

M=2

 
 

• Standards are out of date and inflexible

H=3

 

24.

Degree to which system is based on well-specified requirements

 

5

 

• Detailed transaction and parametric data in requirements documentation

L=1

 
 

• Detailed transaction data in requirements documentation

M=2

 
 

• Vague requirements documentation

H=5

 

25.

Relationships with those who are involved with system (e.g., users, customers, sponsors, interfaces) or who must be dealt with during project effort

 

3

 

• No significant conflicting needs: system primarily serves one organizational unit

L=1

 
 

• System meets limited conflicting needs of cooperative organization units

M=2

 
 

• System must meet important conflicting needs of several cooperative organization units

H=3

 
 

• System must meet important conflicting needs of several uncooperative organizational units

H=4

 

26.

Changes in user area necessary to meet system operating requirements

 

3

 

• Not applicable

NA=0

 
 

• Minimal

L=1

 
 

• Somewhat

M=2

 
 

• Major

H=3

 

27.

General user attitude

 

5

 

• Good: values data processing solution

L=1

 
 

• Fair: some reluctance

M=2

 
 

• Poor: does not appreciate data processing solution

H=3

 

28.

Status of people, procedures, knowledge, discipline, and division of details of offices that will be using system

 

4

 

• Situation good to excellent

L=1

 
 

• Situation satisfactory but could be improved

M=2

 
 

• Situation less than satisfactory

H=3

 

29.

Commitment of senior user management to system

 

3

 

• Extremely enthusiastic

L=1

 
 

• Adequate

M=2

 
 

• Some reluctance or level of commitment unknown

H=3

 

30.

Dependence of project on contributions of technical effort from other areas (e.g., database administration)

 

2

 

• None

L=1

 
 

• From within IT

M=2

 
 

• From outside IT

H=3

 

31.

User’s IT knowledge and experience

 

2

 

• Highly capable

L=1

 
 

• Previous exposure but limited knowledge

M=2

 
 

• First exposure

H=3

 

32.

Knowledge and experience of user in application area

 

2

 

• Previous experience

L=1

 
 

• Conceptual understanding

M=2

 
 

• Limited knowledge

H=4

 

33.

Knowledge and experience of project team in application area

 

3

 

• Previous experience

L=1

 
 

• Conceptual understanding

M=2

 
 

• Limited knowledge

H=4

 

34.

Degree of control by project management

 

2

 

• Formal authority commensurate with assigned responsibility

L=1

 
 

• Informal authority commensurate with assigned responsibility

M=2

 
 

• Responsibility but no authority

H=3

 

35.

Effectiveness of project communications

 

2

 

• Easy access to project manager(s); change information promptly transmitted upward and downward

L=1

 
 

• Limited access to project manager(s); downward communication limited

M=2

 
 

• Aloof project management; planning information closely held

H=3

 

36.

Test team’s opinion about conformance of system specifications to business needs based on early tests and/or reviews

 

3

 

• Operational tests indicate that procedures and operations produce desired results

L=1

 
 

• Limited tests indicate that procedures and operations differ from specifications in minor aspects only

M=2

 
 

• Procedures and operations differ from specifications in important aspects: specifications insufficient to use for testing

H=3

 

37.

Sensitivity of information

 

1

 

• None

L=0

    
 

• High

H=3

    
 

PREPARED BY:

DATE:

Total

107.00

 

Total Score/Total Weight = Risk Average

Work Paper 3-2. Technical Risk Assessment

TEST DOCUMENT

Technical Risk Assessment

 

Ratings: L - Low

M - Medium

H - High

NA - Not Applicable

RATING × WEIGHT=SCORE

RISK

RATINGS

1.

Ability to fulfill mission during hardware or software failure

 

2

 

• Can be accomplished without system

L=1

 
 

• Can be accomplished without fully operational system, but some minimum capability required

M=2

 
 

• Cannot be accomplished without fully automated system

H=6

 

2.

Required system availability

 

2

 

• Periodic use (weekly or less frequently)

L=1

 
 

• Daily use (but not 24 hours per day)

M=2

 
 

• Constant use (24 hours per day)

H=5

 

3.

Degree to which system’s ability to function relies on exchange of data with external systems

 

2

 

• Functions independently: sends no data required for the operation of other systems

L=0

 
 

• Must send and/or receive data to or from another system

M=2

 
 

• Must send and/or receive data to or from multiple systems

H=3

 

4.

Nature of system-to-system communications

 

1

 

• System has no external interfaces

L=0

 
 

• Automated communications link using standard protocols

M=2

 
 

• Automated communications link using nonstandard protocols

H=3

 

5.

Estimated system’s program size limitations

 

2

 

• Substantial unused capacity

L=1

 
 

• Within capacity

M=2

 
 

• Near limits of capacity

H=3

 

6.

Degree of specified input data control procedures

 

3

 

• Detailed error checking

L=1

 
 

• General error checking

M=2

 
 

• No error checking

H=3

 

7.

Type of system hardware to be installed

 

3

 

• No hardware needed

NA=0

 
 

• Standard batch or on-line systems

L=1

 
 

• Nonstandard peripherals

M=2

 
 

• Nonstandard peripherals and mainframes

H=3

 

8.

Basis for selection of programming and system software

 

3

 

• Architectural analysis of functional and performance requirements

L=1

 
 

• Similar system development experience

M=2

 
 

• Current inventory of system software and existing programming language skills

H=3

 

9.

Complexity of projected system

 

2

 

• Single function (e.g., word processing only)

L=1

 
 

• Multiple but related function (e.g., message generation, editing, and dissemination)

M=2

 
 

• Multiple but not closely related functions (e.g., database query, statistical manipulation, graphics plotting, text editing)

H=3

 

10.

Projected level of programming language

 

2

 

• High level, widely used

L=1

 
 

• Low-level or machine language, widely used

M=2

 
 

• Special-purpose language, extremely limited use

H=3

 

11.

Suitability of programming language to application(s)

 

2

 

• All modules can be coded in straightforward manner in chosen language

L=1

 
 

• All modules can be coded in a straightforward manner with few exit routines, sophisticated techniques, and so forth

M=2

 
 

• Significant number of exit routines, sophisticated techniques, and so forth are required to compensate for deficiencies in language selected

H=3

 

12.

Familiarity of hardware architecture

 

2

 

• Mainframe and peripherals widely used

L=1

 
 

• Peripherals unfamiliar

M=2

 
 

• Mainframe unfamiliar

H=4

 

13.

Degree of pioneering (extent to which new, difficult, and unproven techniques are applied)

 

5

 

• Conservative: no untried system components; no pioneering system objectives or techniques

L=1

 
 

• Moderate: few important system components and functions are untried; few pioneering system objectives and techniques

M=2

 
 

• Aggressively pioneering: more than a few unproven hardware or software components or system objectives

H=3

 

14.

Suitability of hardware to application environment

 

2

 

• Standard hardware

NA=0

 
 

• Architecture highly comparable with required functions

L=1

 
 

• Architecture sufficiently powerful but not particularly efficient

M=2

 
 

• Architecture dictates complex software routines

H=3

 

15.

Margin of error (need for perfect functioning, split-second timing, and significant cooperation and coordination)

 

5

 

• Comfortable margin

L=1

 
 

• Realistically demanding

M=2

 
 

• Very demanding; unrealistic

H=3

 

16.

Familiarity of project team with operating software

 

2

 

• Considerable experience

L=1

 
 

• Some experience or experience unknown

M=2

 
 

• Little or no experience

H=3

 

17.

Familiarity of project team with system environment supporting the application

 

2

 

• Considerable experience

L=1

 
 

• Some experience or experience unknown

M=2

 
 

• Little or no experience with:

  
  

Operating System

H=3

 
  

DBMS

H=3

 
  

Data Communications

H=3

 

18.

Knowledgeability of project team in the application area

 

2

 

• Previous experience

L=1

 
 

• Conceptual understanding

M=2

 
 

• Limited knowledge

H=3

 

19.

Type of test tools used

 

5

 

• Comprehensive test/debug software, including path analyzers

L=1

 
 

• Formal, documented procedural tools only

M=2

 
 

• None

H=3

 

20.

Realism of test environment

 

4

 

• Tests performed on operational system: total database and communications environment

L=1

 
 

• Tests performed on separate development system: total database, limited communications

M=2

 
 

• Tests performed on dissimilar development system: limited database and limited communications

H=3

 

21.

Communications interface change testing

 

4

 

• No interfaces required

NA=0

 
 

• Live testing on actual line at operational transaction rates

L=1

 
 

• Loop testing on actual line, simulated transactions

M=2

 
 

• Line simulations within development system

H=3

 

22.

Importance of user training to the success of the system

 

1

 

• Little training needed to use or operate system: documentation is sufficient for training

L=1

 
 

• Users and or operators need no formal training, but experience is required in addition to documentation

M=2

 
 

• Users essentially unable to operate system without formal, hands-on training in addition to documentation

H=3

 

23.

Estimated degree of system adaptability to change

 

3

 

• High: structured programming techniques used: relatively unpatched, well documented

L=1

 
 

• Moderate

M=2

 
 

• Low: monolithic program design, high degree of inner/intrasystem dependency, unstructured development, minimal documentation

H=4

 
 

PREPARED BY:

DATE:

Total

61.00

 

Total Score/Total Weight = Risk Average

Work Paper 3-3. Size Risk Assessment

TEST DOCUMENT

Size Risk Assessment

 

Ratings: L - Low

M - Medium

H - High

NA - Not Applicable

RATING × WEIGHT=SCORE

RISK

RATINGS

1.

Ranking of this project’s total worker-hours within the limits established by the organization’s smallest and largest system development projects (in number of worker-hours)

 

3

 

• Lower third of systems development projects

L=1

 
 

• Middle third of systems development projects

M=2

 
 

• Upper third of systems development projects

H=3

 

2.

Project implementation time

 

3

 

• 12 months or less

L=1

 
 

• 13 months to 24 months

M=2

 
 

• More than 24 months, with phased implementation

H=3

 
 

• More than 24 months; no phasing

H=4

 

3.

Estimated project adherence to schedule

 

1

 

• Ahead of schedule

L=1

 
 

• On schedule

M=2

 
 

• Behind schedule (by three months or less)

H=3

 
 

• Behind schedule (by more than three months)

H=4

 

4.

Number of systems interconnecting with the application

 

3

 

• 1 to 2

L=1

 
 

• 3 to 5

M=2

 
 

• More than 5

H=3

 

5.

Percentage of project resources allocated to system testing

 

2

 

• More than 40%

L=1

 
 

• 20 to 40%

M=2

 
 

• Less than 20%

H=3

 

6.

Number of interrelated logical data groupings (estimate if unknown)

 

1

 

• Fewer than 4

L=1

 
 

• 4 to 6

M=2

 
 

• More than 6

H=3

 

7.

Number of transaction types

 

1

 

• Fewer than 6

L=1

 
 

• 6 to 25

M=2

 
 

• More than 25

H=3

 

8.

Number of output reports

 

1

 

• Fewer than 10

L=1

 
 

• 10 to 20

M=2

 
 

• More than 20

H=3

 

9.

Ranking of this project’s number of lines of program code to be maintained within the limits established by the organization’s smallest and largest systems development projects (in number of lines of code)

 

3

 

• Lower third of systems development projects

L=1

 
 

• Middle third of systems development projects

M=2

 
 

• Upper third of systems development projects

H=3

 
 

PREPARED BY:

DATE:

Total

18.00

 

Total Score/Total Weight = Risk Average

To complete Work Papers 3-1, 3-2, and 3-3, perform the following steps:

  1. Understand the risk and the ratings provided for that risk. The higher the predefined rating, the greater the risk. In most instances, ratings will be between 1 and 4.

  2. Determine the applicable rating for the software system being tested. Select one of the listed ratings for each risk and place it in the Ratings column. For example, on the Structural Risk Assessment Work Paper (3-1), if you determined that the amount of time since the last major change to the existing area of business was more than two years, you would note that a low rating was indicated, and put a 1 in the Ratings column.

  3. Calculate and accumulate the risk score. The ratings you provided in the Ratings column should be multiplied by the weight to get a score. The score for each work paper should then be accumulated and the total score posted to Work Paper 3-4. When the three work papers have been completed, you will have posted three scores to the Risk Score Analysis Work Paper.

    Work Paper 3-4. Risk Score Analysis

To complete Work Paper 3-4, perform the following steps:

  1. Calculate average risk score by risk area. To do this, total the number of risks on Work Papers 3-1, 3-2, and 3-3 and divide that into the total score on Work Paper 3-4 to obtain an average score for the three risk areas. Do the same for the total risk score for the software.

  2. Post comparative ratings. After you have used these Work Papers a number of times, you will develop average scores for your application systems. Take the score totals for your application systems and rank them from high to low for each of the three risk areas. Then determine an average for the high third of the scores, the middle third of the scores, and the low third of the scores. This average is the cumulative rating for your company’s applications and can be permanently recorded on Work Paper 3-4. This will enable you to compare the score of the system you are testing against comparative ratings so you can determine whether the system you are working on is high, medium, or low risk in each of the three risk areas and overall.

  3. List at the bottom of Work Paper 3-4 all the risk attributes from the three worksheets that received a high-risk rating. Identify the area (for example, structure) and list the specific risk that was given a high rating. Then, for each of those risks, determine the specific test concern and list it on Work Paper 3-4.

When you have completed this assessment process, the tactical risks will be well defined, enabling the insight gained from this step to be embedded into the test plan. Obviously, areas of high risk may need special attention; for example, if size puts the project in a high-risk rating, extra test effort may be needed, focused on ensuring that the system can handle the volume or size of transactions specified for the software. Test concerns can be addressed by specific tests designed to evaluate the magnitude of the risk and the adequacy of controls in the system to address that risk.
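
As a minimal sketch of the scoring arithmetic on these work papers (rating times weight equals the score for each risk, and Total Score/Total Weight equals the risk average), the Python below also compares an average against comparative ratings; the example risks and the cutoff values are illustrative assumptions, not data from the work papers.

    RATING_VALUES = {"NA": 0, "L": 1, "M": 2, "H": 3}

    def score_work_paper(risks):
        """risks: list of (rating_letter, weight, override) tuples; override lets an item
        carry a nonstandard value such as H=5. Returns total score, total weight, and the
        risk average (Total Score / Total Weight)."""
        total_score = total_weight = 0
        for letter, weight, override in risks:
            rating = override if override is not None else RATING_VALUES[letter]
            total_score += rating * weight
            total_weight += weight
        return total_score, total_weight, total_score / total_weight

    def classify(average, low_cutoff=1.5, high_cutoff=2.3):
        """Compare a risk average against comparative ratings (the cutoffs are assumptions)."""
        if average >= high_cutoff:
            return "high"
        return "low" if average <= low_cutoff else "medium"

    # Three structural risks: H with weight 3, M with weight 2, and an H=5 item with weight 5.
    score, weight, average = score_work_paper([("H", 3, None), ("M", 2, None), ("H", 5, 5)])
    print(score, weight, round(average, 2), classify(average))    # 38 10 3.8 high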

Determining When Testing Should Occur

The previous steps have identified the type of development project, the type of software system, the project scope, and the technical risks. Using that information, the point in the development process when testing should occur must be determined. The previous steps have identified what type of testing needs to occur, and this step will tell when it should occur.

Testing can and should occur throughout the phases of a project (refer to Figure 3-2). Examples of test activities to be performed during these phases are:

  1. Requirements phase activities

    • Determine test strategy

    • Determine adequacy of requirements

    • Generate functional test conditions

  2. Design phase activities

    • Determine consistency of design with requirements

    • Determine adequacy of design

    • Generate structural and functional test conditions

  3. Program phase activities

    • Determine consistency with design

    • Determine adequacy of implementation

    • Generate structural and functional test conditions for programs/units

  4. Test phase activities

    • Determine adequacy of the test plan

    • Test application system

  5. Operations phase activities

    • Place tested system into production

  6. Maintenance phase activities

    • Modify and retest

Defining the System Test Plan Standard

A tactical test plan must be developed to describe when and how testing will occur. This test plan will provide background information on the software being tested, on the test objectives and risks, as well as on the business functions to be tested and the specific tests to be performed.

Information on the test environment part of the test plan is described in Part Two of this book. Reference other parts of the book for development methodologies other than the SDLC methodology; for example, Chapter 15 addresses client/server systems.

The test plan is the road map you should follow when conducting testing. The plan is then decomposed into specific tests and lower-level plans. After execution, the results are rolled up to produce a test report. The test reports included in Chapter 11 are designed around standardized test plans. A recommended test plan standard is illustrated in Table 3-7; it is consistent with most of the widely accepted published test plan standards.

Table 3-7. System test plan standard.

1. GENERAL INFORMATION

   1.1 Summary. Summarize the functions of the software and the tests to be performed.

   1.2 Environment and Pretest Background. Summarize the history of the project. Identify the user organization and computer center where the testing will be performed. Describe any prior testing and note results that may affect this testing.

   1.3 Test Objectives. State the objectives to be accomplished by testing.

   1.4 Expected Defect Rates. State the estimated number of defects for software of this type.

   1.5 References. List applicable references, such as:
       a) Project request authorization.
       b) Previously published documents on the project.
       c) Documentation concerning related projects.

2. PLAN

   2.1 Software Description. Provide a chart and briefly describe the inputs, outputs, and functions of the software being tested as a frame of reference for the test descriptions.

   2.2 Test Team. State who is on the test team and their test assignment(s).

   2.3 Milestones. List the locations, milestone events, and dates for the testing.

   2.4 Budgets. List the funds allocated to test by task and checkpoint.

   2.5 Testing (system checkpoint). Identify the participating organizations and the system checkpoint where the software will be tested.
       2.5.1 Schedule (and budget). Show the detailed schedule of dates and events for the testing at this location. Such events may include familiarization, training, and data, as well as the volume and frequency of the input. Resources allocated for testing should be shown.
       2.5.2 Requirements. State the resource requirements, including:
           a) Equipment. Show the expected period of use, types, and quantities of the equipment needed.
           b) Software. List other software that will be needed to support the testing that is not part of the software to be tested.
           c) Personnel. List the numbers and skill types of personnel that are expected to be available during the test from both the user and development groups. Include any special requirements such as multishift operation or key personnel.
       2.5.3 Testing Materials. List the materials needed for the test, such as:
           a) System documentation
           b) Software to be tested and its medium
           c) Test inputs
           d) Test documentation
           e) Test tools
       2.5.4 Test Training. Describe or reference the plan for providing training in the use of the software being tested. Specify the types of training, personnel to be trained, and the training staff.
       2.5.5 Tests to be Conducted. Reference specific tests to be conducted at this checkpoint.

   2.6 Testing (system checkpoint). Describe the plan for the second and subsequent system checkpoints where the software will be tested in a manner similar to paragraph 2.5.

3. SPECIFICATIONS AND EVALUATION

   3.1 Specifications
       3.1.1 Business Functions. List the business functional requirements established by earlier documentation or by Task 1 of Step 2.
       3.1.2 Structural Functions. List the detailed structural functions to be exercised during the overall test.
       3.1.3 Test/Function Relationships. List the tests to be performed on the software and relate them to the functions in paragraph 3.1.2.
       3.1.4 Test Progression. Describe the manner in which progression is made from one test to another so that the entire test cycle is completed.

   3.2 Methods and Constraints
       3.2.1 Methodology. Describe the general method or strategy of the testing.
       3.2.2 Test Tools. Specify the type of test tools to be used.
       3.2.3 Extent. Indicate the extent of the testing, such as total or partial. Include any rationale for partial testing.
       3.2.4 Data Recording. Discuss the method to be used for recording the test results and other information about the testing.
       3.2.5 Constraints. Indicate anticipated limitations on the test due to test conditions, such as interfaces, equipment, personnel, and databases.

   3.3 Evaluation
       3.3.1 Criteria. Describe the rules to be used to evaluate test results, such as the range of data values used, combinations of input types used, and the maximum number of allowable interrupts or halts.
       3.3.2 Data Reduction. Describe the techniques to be used for manipulating the test data into a form suitable for evaluation, such as manual or automated methods, to allow comparison of the results that should be produced to those that are produced.

4. TEST DESCRIPTIONS

   4.1 Test (Identify). Describe the test to be performed (the format will vary for an online test script).
       4.1.1 Control. Describe the test control, such as manual, semiautomatic, or automatic insertion of inputs; sequencing of operations; and recording of results.
       4.1.2 Inputs. Describe the input data and input commands used during the test.
       4.1.3 Outputs. Describe the output data expected as a result of the test and any intermediate messages that may be produced.
       4.1.4 Procedures. Specify the step-by-step procedures to accomplish the test. Include test setup, initialization, steps, and termination.

   4.2 Test (Identify). Describe the second and subsequent tests in a manner similar to that used in paragraph 4.1.

Defining the Unit Test Plan Standard

During internal design, the system is divided into the components or units that perform the detailed processing. Each of these units should have its own test plan. The plans can be as simple or as complex as the organization requires based on its quality expectations.

An important purpose of a unit test plan is to determine when unit testing is complete. It is economically unwise to submit units that contain defects to higher levels of testing. Thus, extra effort spent developing unit test plans, testing units, and ensuring that units are defect free prior to integration testing can have a significant payback in reducing overall test costs.
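To make this concrete, the sketch below shows what one executable unit test derived from such a plan might look like. It is a minimal illustration only; the function under test (compute_invoice_total) and its expected results are hypothetical examples, not part of the plan standard.

```python
# Minimal sketch of a unit test derived from a unit test plan item.
# The function under test (compute_invoice_total) is a hypothetical example.
import unittest

def compute_invoice_total(line_items, tax_rate):
    """Sum line-item amounts and apply a tax rate."""
    subtotal = sum(line_items)
    return round(subtotal * (1 + tax_rate), 2)

class InvoiceTotalTest(unittest.TestCase):
    def test_total_with_tax(self):
        # Expected test result recorded in the unit test plan (section 2.4)
        self.assertEqual(compute_invoice_total([10.00, 5.50], 0.08), 16.74)

    def test_empty_invoice(self):
        # Boundary condition: no line items should yield a zero total
        self.assertEqual(compute_invoice_total([], 0.08), 0.00)

if __name__ == "__main__":
    unittest.main()
```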

Table 3-8 presents a suggested unit test plan. This unit test plan is consistent with the most widely accepted unit test plan standards. Note that the test reporting in Chapter 11 for units assumes that a standardized unit test plan is utilized.

Table 3-8. Unit test plan standard.

1. PLAN

   1.1 Unit Description. Provide a brief description and flowchart of the unit that describes the inputs, outputs, and functions of the unit being tested as a frame of reference for the specific tests.

   1.2 Milestones. List the milestone events and dates for testing.

   1.3 Budget. List the funds allocated to test this unit.

   1.4 Test Approach. Describe the general method or strategy used to test this unit.

   1.5 Functions Not Tested. List those functions that will not be validated as a result of this test.

   1.6 Test Constraints. Indicate anticipated limitations on the test due to test conditions, such as interfaces, equipment, personnel, and databases.

2. BUSINESS AND STRUCTURAL FUNCTION TESTING

   2.1 Business Functions. List the business functional requirements included in this unit.

   2.2 Structural Functions. List the structural functions included in the unit.

   2.3 Test Descriptions. Describe the tests to be performed in evaluating business and structural functions.

   2.4 Expected Test Results. List the desired result from each test, that is, the result that will validate the correctness of the unit functions.

   2.5 Conditions to Stop Test. List the criteria that, if they occur, will cause the tests to be stopped.

   2.6 Test Number Cross-Reference. Provide a cross-reference between the system test identifiers and the unit test identifiers.

3. INTERFACE TEST DESCRIPTIONS

   3.1 Interfaces. List the interfaces that are included in this unit.

   3.2 Test Descriptions. Describe the tests to be performed to evaluate the interfaces.

   3.3 Expected Test Results. List the desired result from each test, that is, the result that will validate the correctness of the unit functions.

   3.4 Test Number Cross-Reference. Provide a cross-reference between the system test identifiers and the unit test identifiers.

4. TEST PROGRESSION

   List the progression in which the tests must be performed. Note that this is obtained from the system test plan. This section may be unnecessary if the system test plan progression worksheet can be carried forward.

Converting Testing Strategy to Testing Tactics

Developing tactics is not a component of establishing a testing environment. However, understanding the tactics that will be used to implement the strategy is important in creating work processes, selecting tools, and ensuring that the appropriate staff is acquired and trained. The objective of this section is to introduce you to the testing tactics that will be incorporated into the approach to software testing presented in this book.

The testing methodology proposed in this book incorporates both testing strategy and testing tactics. The tactics address the test plans, test criteria, testing techniques, and testing tools used in validating and verifying the software system under development.

The testing methodology cube represents a detailed work program for testing software systems (see Figure 3-9). A detailed testing work program is important to ensure that the test factors have been adequately addressed at each phase of the systems development life cycle. This book provides a detailed description of the work program represented by the testing methodology cube.

Figure 3-9. Example of a test-tactics matrix.

The cube is a three-dimensional work program. The first and most important dimension is the test factors selected for a specific application system test strategy. If the testing process can show that the selected test factors have been adequately handled by the application system, the test process can be considered satisfactorily completed. In designing the test work program, there are concerns in each phase of the life cycle that the test factors will not be achieved. While the factors are common to the entire life cycle, the concerns vary according to the phase of the life cycle. These concerns represent the second dimension of the cube. The third dimension of the cube is the test tactics: criteria that, if satisfied, assure the tester that the application system has adequately addressed the risks. Once the test tactics have ensured that the risks are addressed, the factors can be considered satisfied and the test tactics complete.
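One way to picture the cube is as a nested mapping from test factor and life-cycle phase to the tactics applied at that intersection. The sketch below is purely illustrative; the factor, phase, and tactic entries are hypothetical examples, not the work program defined in this book.

```python
# Illustrative sketch of a three-dimensional test work program:
# test factor x life-cycle phase -> test tactics for that intersection.
# Factor, phase, and tactic entries are hypothetical examples.

work_program: dict[str, dict[str, list[str]]] = {
    "Accuracy": {
        "Requirements": ["Review calculation requirements for completeness"],
        "Design":       ["Inspect rounding and precision rules in the design"],
        "Build":        ["Unit-test computational routines against known results"],
    },
    "Continuity of processing": {
        "Design":       ["Walk through recovery and restart design"],
        "Test":         ["Simulate a failure and verify recovery procedures"],
    },
}

def tactics_for(factor: str, phase: str) -> list[str]:
    """Return the tactics planned for one cell of the cube (empty if none)."""
    return work_program.get(factor, {}).get(phase, [])

print(tactics_for("Accuracy", "Build"))
# ['Unit-test computational routines against known results']
```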

The three dimensions of the cube will be explained in detail in later chapters, together with the tools and techniques needed for the testing of the application system. The test factors have been previously explained. The test tactics outline the steps to be followed in conducting the tests, together with the tools and techniques needed for each aspect of testing. The test phases are representative of the more commonly accepted system development life cycles. Later chapters are devoted to testing in each phase of the life cycle, and in those chapters, the phase and test tactics for that phase are explained in detail.

Process Preparation Checklist

Work Paper 3-5 is a checklist that you can use to assess the items to be addressed by the test planning process. Use this checklist as you build your test process; it will help ensure that the test process will address the components of effective testing.

Work Paper 3-5. Testing Tactics Checklist

  

(Each item is marked YES or NO, with a COMMENTS column for clarification.)

1. Did you use your test strategy as a guide for developing the test tactics?
2. Did you decompose your strategy into test tactics? (This may not fully occur until the test planning step.)
3. Did you consider trade-offs between test factors when developing test tactics (e.g., choosing between continuity of processing and accuracy)?
4. Did you compare your test tactics to the test strategy to ensure they support the strategy?
5. Have you identified the individuals who can perform the tests?
6. Did you compose a strategy for recruiting those individuals?
7. Did management agree to let the team members accept the proposed responsibilities on your project team?
8. Has a test plan for testing been established? If so, does the test team have the following responsibilities:
   - Set test objectives.
   - Develop a test strategy.
   - Develop the test tactics.
   - Define the test resources.
   - Execute tests needed to achieve the test plan.
9. Modify the test plan and test execution as changes occur.
   - Manage use of test resources.
   - Issue test reports.
   - Ensure the quality of the test process.
   - Maintain test statistics.
10. Does the test team adequately represent the following:
    - User personnel
    - Operations staff
    - Data administration
    - Internal auditors
    - Quality assurance staff
    - Information technology
    - Management
    - Security administrator
    - Professional testers
11. Did you develop test team assignments for each test member?
    - Does the test team accept responsibility for finding user/customer-type defects?
12. Does the test team accept responsibility for finding defects?
13. Does the team recognize the benefit of removing defects earlier in the correction life cycle process?
14. Will testing begin when the development process begins?
15. Does one person have primary responsibility for testing?
16. Will the test team perform validation tests?
17. Will the test team perform verification tests?
18. Will verification tests include requirement reviews?
19. Will verification tests include design reviews?
20. Will verification tests include code walkthroughs?
21. Will verification tests include code inspections?
22. Will validation tests include unit testing?
23. Will validation tests include integration testing?
24. Will validation tests include system testing?
25. Will validation tests include user acceptance testing?
26. Will testers develop a testers' workbench?
27. Will the workbench identify the deliverables/products to be tested?
28. Will the workbench include test procedures?
29. Will the workbench check the accuracy of test implementation?
30. Will you identify test deliverables?
31. Does your workbench identify the tools you'll use?
32. Have the testers identified a source of these generic test tools?

A Yes response to a checklist item means that you've chosen an effective process component for your test process. If you don't want to include a particular item in your test process, insert No for that item. Use the Comments column to clarify your response and to provide guidance for building the test process. A blank worksheet is provided at the end of this chapter for your use.
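If you maintain the checklist electronically, a small script can tally the responses and flag the items that still need attention. The sketch below shows one possible approach; the file name and column layout are hypothetical examples, not part of the work paper.

```python
# Minimal sketch: tally Yes/No responses on the testing tactics checklist
# and list the items still marked No. The CSV file name and layout
# (item, question, response, comments) are hypothetical examples.
import csv

def summarize_checklist(path: str) -> None:
    open_items = []
    yes_count = 0
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            if row["response"].strip().lower() == "yes":
                yes_count += 1
            else:
                open_items.append(f'{row["item"]}: {row["question"]} ({row["comments"]})')
    print(f"{yes_count} items answered Yes")
    for item in open_items:
        print("Needs attention ->", item)

# Usage example (hypothetical file):
# summarize_checklist("work_paper_3_5.csv")
```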

Summary

Effective and efficient testing will occur only when a well-defined process exists. This chapter presented six guidelines to improve the effectiveness and efficiency of the software-testing process. The chapter explained the workbench concept to be used in building your software-testing process. A seven-step software-testing process was presented that can be viewed as seven major testing workbenches; each of these steps incorporates several minor workbenches, or sub-workbenches, within the step workbench. Normally, that generic seven-step process requires customization to fit your culture and IT mission. Customization considerations were provided to help you with that process.

The seven-step process presented in this book is recommended as a generic software-testing process for use in your organization. The next chapter provides guidance on selecting and incorporating tools into the software testing process.

 
