Chapter 10. Test Program Review and Assessment

Improvements in quality always and automatically result in reductions in schedules and costs, increases in productivity, increases in market share, and consequently increases in profits.

W. Edwards Deming

Following test execution, the test team needs to review the performance of the test program to determine where enhancements can be implemented during the next testing phase or on the next project. This test program review represents the final phase of the ATLM. The ATLM is cyclical and implemented incrementally. Parts of the ATLM are repeated within a project or when the test team moves on to a new release or another project.

Throughout the test program, the test team collects various test metrics, including many during the test execution phase (see Chapter 9). The focus of the test program review includes an evaluation of earned value progress measurements and the other metrics collected. The evaluation of the test metrics should examine how the original test program cost and sizing estimates compared with the actual number of labor hours expended and test procedures developed to accomplish the test effort. If applicable, the review of test metrics should conclude with suggested adjustments and improvement recommendations.
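As a simple illustration of this comparison, the sketch below contrasts planned labor hours and planned test procedure counts with their actuals and reports the variance of each as a percentage. The function name and sample figures are hypothetical, not drawn from an actual project.

# Hypothetical sketch: compare planned test program sizing against actuals.
# All names and sample values are illustrative only.
def variance_report(planned_hours, actual_hours,
                    planned_procedures, actual_procedures):
    """Return variance percentages for labor hours and test procedures."""
    hours_variance = (actual_hours - planned_hours) / planned_hours * 100
    procedures_variance = (actual_procedures - planned_procedures) / planned_procedures * 100
    return {
        "hours_variance_pct": round(hours_variance, 1),
        "procedures_variance_pct": round(procedures_variance, 1),
    }

# Example: the effort ran 10% over on labor hours and delivered
# fewer test procedures than originally sized.
print(variance_report(planned_hours=2000, actual_hours=2200,
                      planned_procedures=300, actual_procedures=280))
# {'hours_variance_pct': 10.0, 'procedures_variance_pct': -6.7}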

Just as important, the test team should document those activities that it performed well and those done correctly, so that it can repeat these successful processes. This chapter addresses the test program review activities related to test metrics analysis, lessons learned, corrective actions, and the test program’s return on investment for test automation.

The test team should record lessons learned throughout each phase of the test program life cycle. It is not beneficial to wait until the end of the system development life cycle to document insights about improving specific procedures. Where possible, the test team needs to alter the detailed procedures during the test program, when it becomes apparent that such changes might improve the efficiency of an ongoing activity.

Once the project is complete, proposed corrective actions can still prove beneficial to the next project. Corrective actions applied during the test program, however, can be significant enough to improve the final results of the current AUT testing program. For example, the incorporation of a modular test script may save several hours of test development and execution effort, which might make the difference between the test program concluding within budget and exceeding the number of hours allocated.

The test team needs to adopt, as part of its culture, an ongoing iterative process focusing on lessons learned. Such a program would encourage test engineers to raise corrective action proposals immediately, when such actions could potentially have a significant effect on test program performance. Test managers, meanwhile, need to promote such leadership behavior from each test engineer.

10.1 Test Program Lessons Learned—Corrective Actions and Improvement Activity

Although a separate quality assurance (QA) department commonly assumes responsibility for conducting audits to verify that the appropriate processes and procedures are implemented and followed, the test team can nevertheless conduct its own test program analysis. Test personnel should continuously review the QA audit findings and follow suggested corrective actions. Additionally, throughout the entire test life cycle, it is good practice to document and begin to evaluate lessons learned at each milestone. The metrics that are collected throughout the test life cycle and especially during the test execution phase will help pinpoint problems that need to be addressed. Consequently, the test team should periodically “take a pulse” on the quality of the test effort with the objective of improving test results, whenever necessary, by altering the specific ways that the test team performs its business. Not only should the test team concentrate on lessons learned for the test life cycle, but it should also point out issues pertaining to the development life cycle.

When lessons learned and metrics evaluation sessions take place only at the end of a system development life cycle, it is too late to implement any corrective action for the current project. Nevertheless, lessons learned that are recorded at this stage may benefit subsequent test efforts. Therefore the team should document lessons learned at this late stage, rather than not at all.

Lessons learned, metrics evaluations, and any corresponding improvement activity or corrective action need to be documented throughout the entire test process in an easily accessible central repository. A test team intranet site can prove very effective in maintaining such documentation. For example, lessons learned could be documented using a requirements management tool database. Keeping an up-to-date database of important issues and lessons learned gives all individuals on the project the ability to monitor the progress and status of issues until closure.

The test manager needs to ensure that lessons learned records are viewed as improvement opportunities. The records, for example, should not list the names of the individuals involved in the particular activity. Each record should include at least one corrective action that suggests how the process or process output can be improved. (Tables 10.2 through 10.5 provide examples of lessons learned records together with such corrective actions.) Suggested corrective actions must undergo careful analysis and scrutiny. For some projects, for example, personnel might conduct a lessons learned review for each life-cycle phase and document the results. For each lesson learned, a measure showing the potential range of gain (hours saved) if the particular improvement activity were implemented could be specified. Sometimes a corrective action initially sounds like the perfect solution to a problem, but further analysis shows no specific gain. As a result, the test team must be careful when proposing improvements based on lessons learned. It is often beneficial to construct a small prototype or pilot project to implement the suggested changes so as to gather a valid estimate of the gains from the suggested corrective action or improvement.
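As one hypothetical illustration of such a record (the field names and sample values below are assumptions, not a prescribed repository format), a lessons learned entry might carry the life-cycle phase, a description of the problem without naming individuals, a suggested corrective action, an estimated range of hours saved, and a status that is tracked until closure.

# Hypothetical sketch of a lessons-learned record; field names and sample
# values are illustrative only, not a prescribed schema.
from dataclasses import dataclass

@dataclass
class LessonLearned:
    life_cycle_phase: str            # e.g., "Test Development"
    problem_summary: str             # what went wrong (no individual names)
    corrective_action: str           # how the process or output can improve
    estimated_hours_saved: tuple     # (low, high) range of potential gain
    status: str = "Open"             # tracked until closure

record = LessonLearned(
    life_cycle_phase="Test Development",
    problem_summary="Login steps duplicated in every test script",
    corrective_action="Create one modular login script reused by all tests",
    estimated_hours_saved=(20, 40),
)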

Table 10.1 gives an example of a simplified improvement table that represents the outcome of a prototype implementation of improvement opportunities. Gain estimates are expressed as a percentage of hours saved. This table shows, for example, how many hours could be saved by implementing a new process or tool.

Table 10.1. Test Program Improvements

When reviewing the test program events, recall that test program goals, strategies, and objectives were defined and documented within the test plan. Test processes were adopted or defined as required. Test design, development, and execution were performed. Following test execution, the test team must compare the actual implementation of the test program against the initially planned criteria. Several questions need to be addressed by the test team, with the QA department sharing insight:

• Was the documented test process followed in its entirety? If not, which parts were not implemented, why, and what was the effect of not following the process?

• Have the test program goals been met? Were strategies implemented as planned? If not, why not?

• Have objectives been accomplished? If not, why not?

• Were the defect prevention activities successfully implemented? If not, which defect prevention step was omitted and why?

• Were the risks made clear and documented in advance?

• Were lessons learned activities conducted throughout the life cycle and corrective actions taken, when appropriate?

Ideally, any deviations from the process and/or test plan, such as strategy changes, have been adequately documented, including the rationale for the change.

Process improvement is an iterative endeavor. Test program analysis represents an important aspect of this improvement effort. For example, where applicable, the lessons learned from previous projects and insights gained from perusing industry articles and reports should have been incorporated into the organization’s test life-cycle process. At the conclusion of a test program, test personnel should evaluate the effectiveness of the defined processes. The test team should determine whether the same mistakes were repeated and whether any suggested improvement opportunities were bypassed.

This sort of test program analysis can identify problem areas and potential corrective actions, as well as review the effectiveness of corrective actions already implemented. Examples of test program analysis activities are provided in Tables 10.2 through 10.5. For instance, test engineers might conduct schedule slippage analysis, process analysis, tool usage analysis, environment problem analysis, and more. Additionally, they might perform the test outcome evaluation activities given in Table 9.2 on page 357. Likewise, software problem report analysis can help narrow down the cause of many problems. At the conclusion of each phase of the development life cycle, the test engineer would create a table like Table 10.2, which was drawn from real-world experience. It is important not only to determine the corrective actions for each problem encountered, but also to monitor those corrective actions to ensure that they are implemented effectively, so that problems do not recur.

Table 10.2. Examples of Schedule-Related Problems and Corrective Actions

Table 10.3. Examples of Test Program/Process-Related Problems and Corrective Actions

Table 10.4. Examples of Tool-Related Problems and Corrective Actions

Table 10.5. Examples of Environment-Related Problems and Corrective Actions

10.2 Test Program Return on Investment

After collecting lessons learned and other metrics, and defining the corrective actions, the test engineers also need to assess the effectiveness of the test program, including the test program’s return on investment. For example, test engineers capture measures of the benefits of automation realized throughout the test life cycle. One benefit might be that the new tools used throughout the life cycle boosted productivity by executing repeatable test activities faster and in a more maintainable fashion, freeing test engineers to focus on more complex, nonrepeatable tests.

One method of capturing the benefits of automated testing relies on the test engineer’s comparison of the execution time of a manual test with the execution time of the equivalent automated test (see Chapter 2). Another method is to gather metrics throughout the test life cycle and evaluate them to determine what other benefits the test program realized. An additional option is to use a survey (see Figure 10.1) that is distributed to the various groups involved throughout the test life cycle. Often the success of a test program depends on everyone involved working together, correctly following the defined processes, and producing the expected outputs. Unlike reviews of lessons learned, which focus on corrective actions that avoid repeating the same mistakes in future cycles or projects, the return on investment exercise aims to identify and repeat those activities that have proven effective.
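For the first method, a rough return-on-investment figure can be worked out from the one-time automation development effort, the per-run execution times, and the number of planned test runs. The sketch below is a simplified, hypothetical illustration of that arithmetic; the figures are invented for the example and are not taken from the book.

# Simplified, hypothetical ROI sketch: cumulative manual execution effort
# versus automation development plus cumulative automated execution effort.
def automation_roi(manual_hours_per_run, automated_hours_per_run,
                   automation_dev_hours, planned_runs):
    manual_total = manual_hours_per_run * planned_runs
    automated_total = automation_dev_hours + automated_hours_per_run * planned_runs
    hours_saved = manual_total - automated_total
    roi_pct = hours_saved / automated_total * 100
    return hours_saved, roi_pct

# Example: a 40-hour manual regression suite, automated once for 200 hours,
# then executed in 4 hours per run across 10 planned test cycles.
saved, roi = automation_roi(40, 4, 200, 10)
print(f"Hours saved: {saved}, ROI: {roi:.0f}%")   # Hours saved: 160, ROI: 67%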

Test teams can conduct their own surveys to inquire about the potential value of testing process and tool changes. Individuals participating in such a survey may include business analysts, requirements management specialists, developers, test engineers, and selected end users. These surveys can help gather information on the effectiveness of the test life-cycle process as well as the test tools and the improvement activities implemented.
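Once the survey responses are collected, they need to be consolidated before conclusions are drawn. The following sketch is a hypothetical illustration of tallying numeric ratings (on a 1-to-5 agreement scale) per survey question across all respondents; the question wording and rating scale are assumptions, not the form shown in Figure 10.1.

# Hypothetical sketch: average survey ratings (1 = strongly disagree,
# 5 = strongly agree) per question across all respondents.
from collections import defaultdict

responses = [
    {"group": "developer",
     "The test tool improved requirement traceability": 4,
     "The documented test process was followed consistently": 3},
    {"group": "test engineer",
     "The test tool improved requirement traceability": 5,
     "The documented test process was followed consistently": 4},
]

totals = defaultdict(list)
for response in responses:
    for question, rating in response.items():
        if question != "group":
            totals[question].append(rating)

for question, ratings in totals.items():
    print(f"{question}: {sum(ratings) / len(ratings):.1f}")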

Figure 10.1 provides a sample survey form that could be used to solicit feedback on the potential use of requirements management tools, design tools, and development tools. Surveys are helpful in identifying potential misconceptions and gathering positive feedback.

Figure 10.1. Feedback Survey

Sometimes the results of surveys can be an eye-opener. Consider the following testimonial concerning a survey conducted to review the use of test plans on projects:

When I was at Apple, my team conducted a review of 17 test plans in use by our department. Our findings were that none of the plans [was] actually in use, at all. Some of them were almost pure boilerplate. In one case the tester wasn’t aware that his product had a test plan until we literally opened a drawer in his desk and found it lying where the previous test engineer on that product had left it! [2]

Remember that test automation is software development. It is necessary to implement a process, such as the automated test life-cycle methodology (ATLM), when automating the test effort. The test team also needs to be conscious of the benefits offered by a strategy that accepts good-enough test automation. To execute an effective test program, the test team must adopt test design and development guidelines that incorporate proven practices. Test personnel may also need the support of a test tool expert to get the test automation effort off the ground, and may require formal training on the use of one or more automated test tools.

Chapter Summary

• Following test execution, the test team needs to review the performance of the test program to determine where improvements can be implemented to benefit the next project. This test program review represents the final phase of the ATLM.

• Throughout the test program, the test team collects various test metrics. The focus of the test program review includes an assessment of whether the application satisfies acceptance criteria and is ready to go into production. This review also includes an evaluation of earned value progress measurements and other metrics collected.

• The test team needs to adopt, as part of its culture, an ongoing, iterative process composed of lessons learned activities. Such a program encourages test engineers to suggest corrective action immediately, when such actions might potentially have a significant effect on test program performance.

• Throughout the entire test life cycle, a good practice is to document and begin to evaluate lessons learned at each milestone. The metrics that are collected throughout the test life cycle, and especially during the test execution phase, help pinpoint problems that should be addressed.

• Lessons learned, metrics evaluations, and corresponding improvement activity or corrective action need to be documented throughout the entire test process in an easily accessible central repository.

• After collecting lessons learned and other metrics, and defining the corrective actions, test engineers need to assess the effectiveness of the test program by evaluating the test program’s return on investment. Test engineers capture measures of the benefits of automation realized throughout the test life cycle so as to perform this assessment.

• Test teams can perform their own surveys to inquire about the potential value of process and tool changes. A survey form can be employed to solicit feedback on the potential use of requirements management tools, design tools, and development tools. Surveys also help to identify potential misconceptions and gather positive feedback.

References

1. Used with permission of Sam Guckenheimer, Rational Software. www.rational.com.

2. Bach, J. Process Evolution in a Mad World. Bellevue, WA: Software Testing Laboratories, 1997.
