Chapter 11. Step 5: Analyzing and Reporting Test Results

Testers should write two types of reports: interim reports and final reports. Interim test reports are necessary for both testers and management; testers need to know the status of defect identification and correction, and management needs to know the status of the overall project effort and the risks the organization faces as a consequence.

This chapter builds on the material presented so far in this book. In earlier steps, the test objectives are decomposed into a test plan, which eventually is decomposed into specific tests; tests are executed, and then the results are rolled up into test reports. The test results are compared against the expected results and experience with similar software systems. Reports are then prepared to provide the information that the user of the software system needs to make effective business decisions.

Overview

The user of the software system is responsible for deciding whether the software system should be used as presented and, if so, what precautions must be taken to ensure high-quality results. It is the testers who provide the information on which those decisions will be based. Thus, the testers are responsible not only for testing, but also for consolidating and presenting data in a format that is conducive to good business decision making.

The project team is responsible for reporting the project’s status. However, experience has shown that project teams tend to be overly optimistic about their ability to complete projects on time and within budget. Testers can provide management with an independent assessment of the status of the project.

By maintaining a status report of their activities, testers can report regularly to management what works and what does not work. Not working may mean a variety of statuses, including not tested, partially working, and not working at all.

Reporting on how the system will perform in operation uses the results of acceptance testing. Management may be interested in knowing only that the software system is acceptable to the system users. Math-oriented management may want statistical reliability measurements, in addition to user acceptance. Reliability assessment would include statistical measurements such as the expected mean time between failures.

Whether to place a software system in production is a user management decision, although testers can offer factual data about the status of the system together with their opinion regarding that decision.

Concerns

Testers should have the following concerns about the development and delivery of test reports:

  • Test results will not be available when needed.

  • Test information is inadequate.

  • Test status is not delivered to the right people.

Workbench

Figure 11-1 shows the workbench for reporting test results. To report the results of testing, testers need not only the data collected during testing, but also the plans and the expected processing results. Tasks 1 and 2, which report the project’s status and interim test results, should be performed on a regular basis. In the early stages of testing, reports may be prepared only monthly, but during the later stages of testing the reports should become more frequent.


Figure 11-1. Workbench for reporting test results.

The type and number of final reports will vary based on the scope of the project and the number of software systems involved. There may be a final report for each software system or a single report if all of the software systems are placed into production concurrently.

Input

This section describes the three types of input needed to answer management’s questions about the status of the software system.

Test Plan and Project Plan

Testers need both the test plan and the project plan, both of which should be viewed as contracts. The project plan is the project’s contract with management for work to be performed, and the test plan is a contract indicating what the testers will do to determine whether the software is complete and correct. It is against these two plans that testers will report status.

Expected Processing Results

Testers report the status of actual results against expected results. To make these reports, the testers need to know what results are expected. For software systems, the expected results are the business results.

Data Collected during Testing

This section explains the four categories of data to be collected during testing.

Test Results Data

The test results data includes but is not limited to the following:

  • Test factors. The factors incorporated in the plan, the validation of which becomes the test objective.

  • Business objectives. The validation that specific business objectives have been met.

  • Interface objectives. The validation that data/objects can be correctly passed among software components.

  • Functions/subfunctions. Identifiable software components normally associated with the requirements for the software.

  • Units. The smallest identifiable software components.

  • Platform. The hardware and software environment in which the software system will operate.

Test Transactions, Test Suites, and Test Events

These are the test products produced by the test team to perform testing. They include but are not limited to the following:

  • Test transactions/events. The type of tests that will be conducted during the execution of tests, which will be based on software requirements.

  • Inspections. A verification of process deliverables against their specifications.

  • Reviews. A verification that the process deliverables/phases are meeting the user’s true needs.

Defects

This category includes a description of the individual defects uncovered during testing. Work Paper 11-1 can be used for recording and monitoring defects. This description includes but is not limited to the following:

  • Date the defect was uncovered

  • The name of the defect

  • The location of the defect

  • The severity of the defect

  • The type of defect

  • How the defect was uncovered

Work Paper 11-1. Defect Reporting: Field Requirements

(Each field name is followed by the instructions for entering the data.)

Software/System Tested: Name of the software being tested.

Date: Date on which the test occurred.

Defect Found: The name and type of a single defect found in the software being tested.

Location Found: The individual unit or system module in which the defect was found.

Severity of Defect: Critical means the system cannot run without correction; major means the defect will impact the accuracy of operation; minor means it will not impact the operation.

Type of Defect: Whether the defect represents something missing, something wrong, or something extra.

Test Data/Script Locating Defect: Which test was used to uncover the defect.

Origin of Defect/Phase of Development: The phase in which the defect occurred.

Date Corrected: The date on which the defect was corrected.

Retest Date: The date on which the testers were scheduled to validate whether the defect had been corrected.

Result of Retest: Whether the software system functions correctly and the defect no longer exists, or whether additional correction and testing will be required.

Software/System Tested: __________________________________________________

__________________________________________________________________________

Date:_____________________________________________________________________

__________________________________________________________________________

Defect Found:_____________________________________________________________

__________________________________________________________________________

Location Found:___________________________________________________________

__________________________________________________________________________

Severity of Defect:   □ Critical   □ Major   □ Minor

Type of Defect:   □ Missing   □ Wrong   □ Extra

Test Data/Script Locating Defect:

__________________________________________________________________________

__________________________________________________________________________

Origin of Defect/Phase of Development:___________________________________________

__________________________________________________________________________

Date Corrected:___________________________________________________________

__________________________________________________________________________

Retest Date:______________________________________________________________

__________________________________________________________________________

Result of Retest: ___________________________________________________________

__________________________________________________________________________

The results of later investigations should be added to this information—for example, where the defect originated, when it was corrected, and when it was entered for retest.

Efficiency

Two types of efficiency can be evaluated during testing: the efficiency of the software system and the efficiency of the test process. As the system is being developed, a process decomposes requirements into lower and lower levels. These levels normally include high- and low-level requirements, external and internal design, and the construction or build phase. While these phases are in progress, the testers decompose the requirements through a series of test phases, which are described in Steps 3 and 4 of the seven-step process.

Conducting testing is normally the reverse of the test development process. In other words, testing begins at the lowest level and the results are rolled up to the highest level. The final test report determines whether the requirements were met. Documenting, analyzing, and rolling up test results depends partially on how testing was decomposed to the detailed level. The roll-up is the exact reverse of the test strategy and tactics.

Storing Data Collected During Testing

A database should be established in which to store the results collected during testing. I also suggest that the database be available online through a client/server system so that individuals with a vested interest in the status of the project have ready access.

The most common test report is a simple spreadsheet that indicates the project component for which status is requested, the test that will be performed to determine the status of that component, and the results of testing at any point in time. Interim report examples illustrated in Task 2 of this chapter show how to use such a spreadsheet.
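For teams without a commercial tool, even a plain comma-separated file can serve as this spreadsheet-style report. The following is a minimal sketch only; the field names, statuses, and file name are illustrative assumptions, not requirements of this chapter:

```python
# Minimal sketch of a spreadsheet-style status record store (illustrative only).
import csv
from dataclasses import dataclass, asdict

@dataclass
class StatusRow:
    component: str   # project component whose status is requested
    test: str        # test performed to determine the component's status
    status: str      # e.g., "not tested", "in progress", "passed", "failed"

rows = [
    StatusRow("Order entry", "Boundary-value input test", "passed"),
    StatusRow("Invoicing", "Interface test with order entry", "failed"),
    StatusRow("Reporting", "Regression suite", "not tested"),
]

# Write the rows so anyone with a vested interest can open them in a spreadsheet.
with open("test_status.csv", "w", newline="") as fh:
    writer = csv.DictWriter(fh, fieldnames=["component", "test", "status"])
    writer.writeheader()
    for row in rows:
        writer.writerow(asdict(row))
```

Keeping the data in this simple tabular form also makes it easy to load into the online database discussed earlier, so interested parties can query current status at any time.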

Do Procedures

Three tasks are involved in reporting test results. They are described here as individual steps because each is a standalone effort. For example, reporting the status of the project is an activity independent of other test reports. Testers could issue interim and final test reports without reporting or knowing the status of the project. However, Tasks 2 and 3 are more closely linked. Interim test results will normally be used in developing the final test report. On the other hand, some testers prepare only interim reports and others only final reports.

The three tasks and their associated reports detailed in this chapter are representative of what testers could report. Testers should not limit themselves to these reports but rather use their creativity to develop others appropriate to the project and the organization.

What is important about test reports is that they supply management with the information it needs to make decisions. Reporting extraneous information is a waste of the tester’s time, and not reporting information needed by management is an ineffective use of testing. Testers are encouraged early in the project to consult with management to learn the types of reports they should prepare during and at the conclusion of testing.

Task 1: Report Software Status

This task offers an approach for reporting project status information. These reports enable senior IT management to easily determine the status of the project, and can be issued as needed.

The two levels of project status reports are as follows:

  • Summary status report. This report provides a general view of all project components. It is used to determine which projects need immediate attention and which are on schedule with no apparent problems.

  • Project status report. This report shows detailed information about a specific project component, allowing the reader to see up-to-date information about schedules, budgets, and project resources. Each report should be limited to one page so that only vital statistics are included.

Both reports are designed to present information clearly and quickly. Colorful graphics can be used to highlight status information. Senior management does not have time to read and interpret lengthy status reports from all project teams in the organization. Therefore, this step describes a process that enables management to quickly and easily assess the status of all projects.

Note

An individual software system that needs rework is referred to as a project.

The best way to produce “user-friendly” reports is to incorporate simple graphics and color-coding. For example, projects represented in green would be those with no apparent problems, projects in yellow would indicate potentially problematic situations, and projects in red would indicate those needing management’s immediate attention.
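One way to derive the color codes consistently is from schedule and budget variance thresholds. The sketch below is illustrative only; the 5 percent and 15 percent cutoffs are assumptions that each organization would set for itself:

```python
# Illustrative mapping of project variance to a green/yellow/red status code.
def status_color(schedule_slip_pct: float, budget_overrun_pct: float) -> str:
    """Return 'green', 'yellow', or 'red' for a project.

    The 5%/15% cutoffs are assumed for illustration; each organization
    should define its own thresholds.
    """
    worst = max(schedule_slip_pct, budget_overrun_pct)
    if worst <= 5:
        return "green"    # no apparent problems
    if worst <= 15:
        return "yellow"   # potentially problematic situation
    return "red"          # needs management's immediate attention

print(status_color(schedule_slip_pct=2, budget_overrun_pct=4))    # green
print(status_color(schedule_slip_pct=8, budget_overrun_pct=20))   # red
```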

This step describes reporting on three status conditions for each project: implementation, schedule, and budgets. The number of status conditions should be kept to as few as possible; four is still manageable. Some organizations list quality as the fourth, beginning with system testing in later development phases.

In addition to serving as the input to project status reports, the data collected can be used for internal benchmarking, in which case the collective data from all projects is used to determine the mean level of performance for all enterprise projects. This benchmark is used for comparison purposes, to make judgments on the performance of individual projects.

Prior to effectively implementing a project reporting process, two inputs must be in place.

  • Measurement units. IT must have established reliable measurement units that can be validated. Management must be willing to use this quantitative data as an integral part of the decision-making process. All those involved in IT projects must be trained in collecting and using this data.

  • Process requirements. Process requirements for a project reporting system must include functional, quality, and constraint attributes. Functional attributes describe the results the process is to produce; quality attributes define the particular attributes that should be included in the software requirement; and constraint attributes include tester skill levels, budget, and schedule.

The following sections describe the six subtasks for this task.

Establishing a Measurement Team

The measurement team should include individuals who:

  • Have a working knowledge of quality and productivity measurements

  • Are knowledgeable in the implementation of statistical process control tools

  • Have a working understanding of benchmarking techniques

  • Know of the organization’s goals and objectives

  • Are respected by their peers and management

Representatives should come from management and personnel responsible for software development and maintenance. For an IT organization of fewer than 50 people, the measurement team should be between three and five members.

Creating an Inventory of Existing Project Measurements

An inventory of existing measurements should be performed in accordance with a plan. If problems arise during the inventory, the plan and the inventory process should be modified accordingly. The formal inventory is a systematic and independent review of all existing measurements and metrics. All identified data must be reviewed to determine whether they are valid and reliable.

The inventory process should start with an introductory meeting of the participants. The objective of this meeting is to review the inventory plan with management and representatives of the projects. A sample agenda for the introductory meeting would be to:

  1. Introduce all members.

  2. Review the scope and objectives of the inventory process.

  3. Summarize the inventory processes to be used, including work papers and data repositories.

  4. Establish communications channels.

  5. Confirm the inventory schedule with major target dates.

Creating an inventory involves the following activities:

  1. Review all measurements.

  2. Document all findings.

  3. Conduct interviews to determine what and how measurement data is captured and processed.

Developing a Consistent Set of Project Metrics

To enable senior management to quickly access the status of each project, it is critical to develop a list of consistent measurements spanning all project lines. Initially, this can be challenging, but with cooperation and some negotiating, you can establish a reasonable list of measurements. Organizations with development and maintenance standards will have an easier time completing this step.

Defining Process Requirements

The objective of this step is to use the management criteria and measurement data developed in Steps 2 and 3 to define the process requirements for the management project reporting system. Major criteria of this specification include the following:

  • A description of desired output reports

  • A description of common measurements

  • The source of common measurements and associated software tools to capture the data

  • A determination of how the data will be stored (centralized and/or segregated)

Developing and Implementing the Process

The objective of this step is to document the work process used to output the reports of the project data. The implementation will involve the following activities:

  1. Document the workflow of the data capture and reporting process.

  2. Procure software tools to capture, analyze, and report the data.

  3. Develop and test system and user documentation.

  4. Beta-test the process using a small- to medium-sized project.

  5. Resolve all management and project problems.

  6. Conduct training sessions for management and project personnel on how to use the process and interrelate the reports.

  7. Roll out the process across all project lines.

Monitoring the Process

Monitoring the reporting process is very important because software tools are constantly being upgraded and manual supporting activities sometimes break down. The more successful the system, the better the chance that management will want to use it and perhaps expand the reporting criteria.

The two primary reports from this step are Summary Status and Project Status.

Summary Status Report

The Summary Status report (see Figure 11-2) provides general information about all projects and is divided into the following four sections:

  • Report date information. The information in the report is listed as current as of the date in the top-left corner. The date the report was produced appears in the top-right corner.

  • Project information. Project information appears in a column on the left side of the report. Each cell contains the project’s name, manager, phase, and sponsor.

  • Timeline information. Timeline information appears in a chart that displays a project’s status over an 18-month period. It shows project status by measuring technical, budgeting, and scheduling considerations. The year and month (abbreviated with initials) appear along the top of the chart to indicate the month-by-month status of each project.

    Technical (T), scheduling (S), and budget (B) information also appears in the chart, and is specific to each project. These three considerations measure the status of each project:

    • Technical status (T) shows the degree to which the project is expected to function within the defined technical and/or business requirements.

    • Scheduling status (S) shows the degree to which the project is adhering to the current approved schedule.

    • Budgeting status (B) shows the degree to which the project is adhering to the current approved budget. Expenditures for the budget include funds, human resources, and other resources.

  • Legend information. The report legend, which is located along the bottom of the page, defines the colors and symbols used in the report, including category and color codes. The following colors could be used to help to quickly identify project status:

    • A green circle could mean there are no major problems and that the project is expected to remain on schedule.

    • A yellow circle could indicate potentially serious deviation from project progression.

    • A red circle could mean a serious problem has occurred and will have a negative effect on project progression.


Figure 11-2. A Summary Status report.

Project Status Report

The Project Status report (see Figure 11-3) provides information related to a specific project component. The design of the report and the use of color enable the reader to quickly and easily access project information. It is divided into the following six sections:

  • Vital project information. Vital project information appears along the top of the report. This information includes:

    • Date the report is issued

    • Name of the executive sponsoring the project

    • Name of project manager

    • Official name of project

    • Quick-status box containing a color-coded circle indicating the overall status of the project

  • General Information. This section of the report appears inside a rectangular box that contains general information about the project. The work request number and a brief description of the project appear in the top half of the box. The lower half of the box shows the phase of the project (e.g., planning, requirements, development, and implementation), as well as important project dates and figures, which include:

    • Project start date, determined by official approval, sponsorship, and project management

    • Original target date for project completion

    • Current target date for project completion

    • Phase start date of the current phase

    • Original target date for completion of the current phase

    • Current target date for completion of the current phase

    • Original budget allotted for the project

    • Current budget allotted for the project

    • Expenses to date for the project

  • Project/Activities. The Project/Activities chart measures the status according to the phase of the project.

    Future activities for the project are indicated by a bar, which extends to the expected date of project completion, or the current target date, identified by the abbreviation Tgt.

  • Essential Elements. The Essential Elements section indicates the current status of the project by comparing it to the previous status of the project. The chart could use the color-coded circles and list considerations that allow the reader to quickly gather project statistics. These considerations ask:

    • Is the project on schedule?

    • Do the current project results meet the performance requirements?

    • Are the project costs within the projected budget?

    • Are the project costs over budget?

    • What is the dollar amount of the project budget overrun?

    These questions can be answered by comparing the previous report results (on the left side of the chart) to the current report results (on the right side of the chart).

    This section of the report also includes a graph that compares projected costs to actual costs. The projected cost line appears in one color; the actual cost line appears in another. The dollar amounts appear on the left side of the graph, and the time line appears along the bottom of the graph. This graph shows you whether the project is adhering to the current approved budget.

  • Legend information. The report legend, which is located along the bottom of the page, defines the colors and symbols used in the report, including category and color codes.

  • Project highlights information. The project highlights appear in a rectangular box located at the bottom of the report. This section may also contain comments explaining specific project developments.


Figure 11-3. A Project Status report.

Task 2: Report Interim Test Results

The test process should produce a continuous series of reports that describe the status of testing. The frequency of the test reports should be at the discretion of the team and based on the extensiveness of the test process.

This section describes ten interim reports. Testers can use all ten or select specific ones to meet their individual needs. However, I recommend that, if the available test data permits, all ten reports be prepared at the end of the testing phase and incorporated into the final test report.

Function/Test Matrix

The function/test matrix shows which tests must be performed to validate the software functions, and in what sequence to perform the tests. It will also be used to determine the status of testing.

Many organizations use a spreadsheet package to maintain test results. The intersection of the test and the function can be color-coded or coded with a number or symbol to indicate the following:

  • 1 = Test is needed but not performed.

  • 2 = Test is currently being performed.

  • 3 = Minor defect noted.

  • 4 = Major defect noted.

  • 5 = Test is complete and function is defect-free for the criteria included in this test.
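Where the matrix is kept electronically, the coding scheme above can be represented with a very simple structure. The following is a minimal sketch; the dictionary layout and sample entries are illustrative assumptions, not a prescribed format:

```python
# Illustrative function/test matrix using the 1-5 status codes listed above.
STATUS = {
    1: "needed but not performed",
    2: "currently being performed",
    3: "minor defect noted",
    4: "major defect noted",
    5: "complete and defect-free for this test's criteria",
}

# Keyed by (function, test number); values are the status codes.
matrix = {
    ("A", 1): 5, ("A", 4): 3, ("A", 8): 2,
    ("B", 3): 5, ("B", 5): 4,
    ("C", 2): 1,
}

def report(matrix: dict) -> None:
    # Print the current status of every function/test intersection recorded.
    for (function, test), code in sorted(matrix.items()):
        print(f"Function {function}, test {test}: {STATUS[code]}")

report(matrix)
```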

Testers should complete Work Paper 11-1 each time they uncover a defect. This information should be maintained electronically so that test managers and/or software users can review it. The information collected about each defect can be as simple or as complex as desired. For simplification purposes, it is suggested that the following guidelines be used:

  • Defect naming. Name defects according to the phase in which the defect most likely occurred, such as a requirements defect, design defect, documentation defect, and so forth.

  • Defect severity. Use three categories of severity, as follows:

    • Critical. The defect(s) would stop the software system from operating.

    • Major. The defect(s) would cause incorrect output to be produced.

    • Minor. The defect(s) would be a problem but would not cause improper output to be produced, such as a system documentation error.

  • Defect type. Use the following three categories:

    • Missing. A specification was not included in the software.

    • Wrong. A specification was improperly implemented in the software.

    • Extra. An element in the software was not requested by a specification.
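Where this defect information is maintained electronically, a record structure along the lines of the Work Paper 11-1 fields is sufficient. The sketch below is illustrative only; the class name and sample values are assumptions, not part of the work paper itself:

```python
# Illustrative electronic record for Work Paper 11-1 defect data.
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class DefectRecord:
    system_tested: str
    date_found: date
    defect_name: str            # named for the phase of likely origin, e.g., "design defect"
    location_found: str         # unit or module in which the defect was found
    severity: str               # "critical", "major", or "minor"
    defect_type: str            # "missing", "wrong", or "extra"
    test_locating_defect: str   # which test/script uncovered the defect
    origin_phase: str           # phase of development in which the defect occurred
    date_corrected: Optional[date] = None
    retest_date: Optional[date] = None
    retest_result: Optional[str] = None

# Sample record with assumed values, for illustration only.
d = DefectRecord(
    system_tested="Billing system",
    date_found=date(2024, 3, 4),
    defect_name="requirements defect",
    location_found="invoice calculation module",
    severity="major",
    defect_type="wrong",
    test_locating_defect="Discount calculation script",
    origin_phase="requirements",
)
print(d)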

The information from Work Paper 11-1 should be used to produce the function/test matrix, as shown in Table 11-1.

Table 11-1. Function/Test Matrix

(An X indicates which tests apply to each function.)

TEST:        1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10
Function A:  X |   |   | X |   |   |   | X |   | X
Function B:    |   | X |   | X |   |   |   | X |
Function C:    | X |   |   |   | X | X |   |   |
Function D:    | X |   |   |   |   |   |   | X |
Function E:  X |   | X |   |   |   | X |   |   | X

The report is designed to show the results of performing a specific test on a function. Therefore, no interpretation can be made about the results of the entire software system, only about the results of individual tests. However, if all the tests for a specific function are successful, testers can assume that function works. Note, however, that “working” means only that the function has met the criteria in the test plan.

Functional Testing Status Report

The purpose of this report is to show the percentage of functions in each status category: fully tested, tested but containing uncorrected errors, and not tested.

A sample of this test report is illustrated in Figure 11-4. It shows that approximately 45 percent of the functions tested have uncorrected errors, 35 percent were fully tested, and 10 percent were not tested.


Figure 11-4. A Functional Testing Status report.

The report is designed to show the status of the software system to the test manager and/or customers. How the status is interpreted will depend heavily on the point in the test process at which the report was prepared. As the implementation date approaches, a high number of functions tested with uncorrected errors and functions not tested should raise concerns about meeting the implementation date.
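The percentages behind this report are a straightforward calculation. A minimal sketch follows, with the function counts assumed purely for illustration:

```python
# Illustrative calculation of the Functional Testing Status percentages.
def functional_testing_status(fully_tested: int, tested_with_errors: int, not_tested: int) -> dict:
    """Return the percentage of functions in each status category."""
    total = fully_tested + tested_with_errors + not_tested
    return {
        "fully tested": round(100 * fully_tested / total, 1),
        "tested, uncorrected errors": round(100 * tested_with_errors / total, 1),
        "not tested": round(100 * not_tested / total, 1),
    }

# Function counts are assumed for illustration only.
print(functional_testing_status(fully_tested=35, tested_with_errors=45, not_tested=20))
```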

Functions Working Timeline Report

The purpose of this report is to show the status of testing and the probability that the software will be ready on the projected date.

Figure 11-5 shows an example of a Functions Working Timeline report. This report assumes a September implementation date and shows from January through September the percent of functions that should be working correctly at any point in time. The “Actual” line shows that the project is doing better than anticipated.


Figure 11-5. A Functions Working Timeline report.

If the actual performance is better than planned, the probability of meeting the implementation date is high. On the other hand, if the actual percent of functions working is less than planned, both the test manager and development team should be concerned and may want to extend the implementation date or add resources to testing and/or development.
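The underlying comparison is simply planned versus actual percentages per reporting period. The following sketch is illustrative; the monthly figures are assumed, not taken from Figure 11-5:

```python
# Illustrative planned-versus-actual comparison for the Functions Working Timeline.
planned = {"Jan": 10, "Feb": 20, "Mar": 35, "Apr": 50}   # percent of functions planned to be working
actual  = {"Jan": 12, "Feb": 22, "Mar": 30, "Apr": 48}   # percent of functions actually working

for month in planned:
    delta = actual[month] - planned[month]
    flag = "ahead of plan" if delta >= 0 else "behind plan -- review schedule/resources"
    print(f"{month}: planned {planned[month]}%, actual {actual[month]}% ({flag})")
```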

Expected Versus Actual Defects Uncovered Timeline Report

The purpose of this report is to show whether the number of defects is higher or lower than expected. This assumes that the organization has sufficient historical data to project defect rates. It also assumes that the development process is sufficiently stable so that the defect rates are relatively consistent.

The example chart for the Expected versus Actual Defects Uncovered Timeline report in Figure 11-6 shows a project beginning in January with a September implementation date. For this project, almost 500 defects are expected; the expected line shows the cumulative anticipated rate for uncovering those defects. The “Actual” line shows that a higher number of defects than expected have been uncovered early in the project.


Figure 11-6. An Expected versus Actual Defects Uncovered Timeline report.

Generally, when the actual defect rate varies significantly from the expected rate, it is because of special circumstances, and investigation is warranted. The cause may be the result of an inexperienced project team. Even when the actual defect count is significantly lower than expected, testers should be concerned because it may mean that the tests have not been effective and a large number of undetected defects remain in the software.

Defects Uncovered Versus Corrected Gap Timeline Report

The purpose of this report is to list the backlog of detected but uncorrected defects. It requires recording defects as they are detected, and then again when they have been successfully corrected.

The example in Figure 11-7 shows a project beginning in January with a projected September implementation date. One line on the chart shows the cumulative number of defects uncovered during testing, and the second line shows the cumulative number of defects corrected by the development team and retested to confirm the correction. The gap represents the number of uncovered but uncorrected defects at any point in time.


Figure 11-7. A Defects Uncovered versus Corrected Gap Time Line report.

The ideal project would have a very small gap between these two timelines. If the gap becomes wide, it indicates that the backlog of uncorrected defects is growing and that the probability the development team will be able to correct them prior to the implementation date is decreasing. The development team must manage this gap to ensure that it remains narrow.
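The gap itself is a simple subtraction of cumulative corrected defects from cumulative uncovered defects. The sketch below is illustrative; the period counts are assumed data:

```python
# Illustrative calculation of the uncovered-versus-corrected defect gap per period.
cumulative_uncovered = [20, 55, 110, 180]   # cumulative defects uncovered per period (assumed data)
cumulative_corrected = [15, 40, 70, 100]    # cumulative defects corrected and retested (assumed data)

for period, (found, fixed) in enumerate(zip(cumulative_uncovered, cumulative_corrected), start=1):
    gap = found - fixed    # backlog of detected but uncorrected defects
    print(f"Period {period}: uncovered {found}, corrected {fixed}, backlog gap {gap}")
```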

Average Age of Uncorrected Defects by Type Report

The purpose of this report is to show the breakdown of the gap presented in Figure 11-7 by defect type—that is, the actual number of defects by the three severity categories.

The Average Age of Uncorrected Defects by Type report example shows the three severity categories aged according to the average number of days since the defect was detected. For example, it shows that the average critical defect is about 3 days old, the average major defect is about 10 days old, and the average minor defect is about 20 days old. The average is calculated by accumulating the number of working days each open defect has been awaiting correction and dividing by the number of open defects.

Figure 11-8 shows a desirable result, demonstrating that critical defects are being corrected faster than major defects, which are being corrected faster than minor defects. Organizations should have guidelines for how long defects at each level should be maintained before being corrected. Action should be taken accordingly based on actual age.


Figure 11-8. An Average Age of Uncorrected Defects by Type report.
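The average-age calculation described above can be sketched as follows, assuming the open-defect ages (in working days) are grouped by severity; the data is illustrative only:

```python
# Illustrative average age (in working days) of uncorrected defects, by severity.
from statistics import mean

# Each value is the number of working days an open defect has been awaiting correction
# (data assumed for illustration).
open_defect_ages = {
    "critical": [2, 3, 4],
    "major": [8, 10, 12],
    "minor": [15, 20, 25],
}

for severity, ages in open_defect_ages.items():
    print(f"{severity}: average age {mean(ages):.1f} working days ({len(ages)} open defects)")
```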

Defect Distribution Report

The purpose of this report is to explain how defects are distributed among the modules/units being tested. It lists the total cumulative defects for each module being tested at any point in time.

The sample Defect Distribution report, shown in Figure 11-9, shows eight units along with the number of defects for each. The report could be enhanced to show the extent of testing that has occurred on the modules (for example, by color-coding the number of tests or by incorporating the number of tests into the bar as a number).


Figure 11-9. A Defect Distribution report.

This report can help identify modules that have an excessive defect rate. A variation of the report could list the cumulative defects by test—for example, defects uncovered in test 1, the cumulative defects uncovered by the end of test 2, the cumulative defects uncovered by test 3, and so forth. Frequently, modules that have abnormally high defect rates are those that have ineffective architecture and, thus, are candidates to be rewritten rather than for additional testing.

Normalized Defect Distribution Report

The purpose of this report is to normalize the defect distribution presented in Figure 11-9. The normalization can be by function points or lines of code. This will enable testers to compare the defect density among the modules/units.

The Normalized Defect Distribution report example in Figure 11-10 shows the same eight modules presented in Figure 11-9. However, in this example, the defect rates have been normalized to defects per 100 function points or defects per 1,000 lines of code, to enable the reader of the report to compare defect rates among the modules. This was not possible in Figure 11-9, because there was no size consideration. Again, a variation that shows the number of tests can be helpful in drawing conclusions.


Figure 11-10. A Normalized Defect Distribution report.

This report can help identify modules that have excessive defect rates. A variation of the report could show the cumulative defects by test: for example, the defects uncovered in test 1, the cumulative defects uncovered by the end of test 2, the cumulative defects uncovered by test 3, and so forth. Frequently, modules that have abnormally high defect rates are those that have ineffective architecture and, thus, are candidates for rewrite rather than additional testing.
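The normalization is a simple division of defect counts by module size. The sketch below assumes sizes in lines of code; the module names and counts are illustrative only:

```python
# Illustrative normalization of defect counts to defects per 1,000 lines of code (KLOC).
modules = {
    # module: (defects found, lines of code) -- data assumed for illustration
    "order_entry": (12, 4_000),
    "invoicing": (30, 6_000),
    "reporting": (5, 2_500),
}

for name, (defects, loc) in modules.items():
    per_kloc = defects / (loc / 1_000)
    print(f"{name}: {defects} defects, {per_kloc:.1f} defects per KLOC")
```

The same approach applies if the organization sizes modules in function points; the divisor simply becomes function points per hundred rather than lines of code per thousand.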

Testing Action Report

This is a summary action report prepared by the test team. The information contained in the report should be whatever the test manager and/or the development manager need to properly direct the team toward a successful implementation date.

The Testing Action report example (see Figure 11-11) lists four pieces of information helpful to most test managers: total number of tests behind schedule, uncorrected critical defects, major uncorrected defects more than five days old, and the number of uncovered defects not corrected.


Figure 11-11. A Testing Action report.

These items are examples of what could be included in the Testing Action report. Most are included in the other reports, but this report serves as a summation of, or a substitute for, the other reports.

The test manager should carefully monitor the status of testing and take action when testing falls behind schedule.

Interim Test Report

As testers complete an individual project they should issue an Interim Test report. The report should discuss the scope of the test, the results, what works and does not work, and recommendations (see Figure 11-12).

Figure 11-12. An Interim Test report.

1. Scope of Test

This section indicates which functions were and were not tested.

 

2. Test Results

This section indicates the results of testing, including any variance between what is and what should be.

 

3. What Works/What Does Not Work

This section defines the functions that work and do not work and the interfaces that work and do not work.

 

4. Recommendations

This section recommends actions that should be taken to:

a. Fix functions/interfaces that do not work.

b. Make additional improvements.

Any report about testing should indicate the test’s scope; otherwise, the reader will assume that exhaustive testing has occurred, which is never the case. Testing is a risk-oriented activity in which resources should be expended to minimize the major risks. Exhaustive testing is not possible, practical, or economical. Thus, testing is never designed to ensure that there are no defects remaining in the software, and the scope will explain what the testers accomplished.

The recommendations section is a critical part of the report, because the reader is usually removed from the project being tested and the technical recommendations provided by the testers can help with the reader’s business decision. For example, testers may indicate that there is a 50/50 probability that the system will terminate abnormally in production because of dating problems. A business decision might then be made to put the software into operation, but develop effective backup recovery procedures in case the termination occurs.

Task 3: Report Final Test Results

A final report should be prepared at the conclusion of each test activity. The report should summarize the information from the following reports:

  • Individual Project report

  • Integration Test report

  • System Test report

  • Acceptance Test report

A final test report is designed to document the results of testing as defined in the test plan. Without a well-developed test plan, it is difficult to develop a meaningful test report. It is designed to accomplish three objectives: to define the scope of testing (normally a brief recap of the test plan), to present the results of testing, and to draw conclusions and make recommendations. The test report may be a combination of electronic and hard copy data. For example, if the function/test matrix is maintained electronically, there is no reason to print it because the paper report will summarize that data, draw the appropriate conclusions, and present recommendations.

The test report has one immediate and two long-term purposes. The immediate purpose is to enable customers to determine whether the system is ready for production (and if not, to assess the potential consequences and initiate appropriate actions to minimize those consequences). The first of the two long-term uses is for the project development team to trace problems in the event the application malfunctions in production. By knowing which functions have been correctly tested and which functions still contain defects, testers can take corrective action. The second long-term purpose is to enable testers to analyze the rework process and make changes to prevent defects from occurring in the future.

Individual Project Test Report

This report focuses on individual projects. When different testers test individual projects, they should prepare a report on their results. Refer to Figure 11-12 for a sample report format.

Integration Test Report

Integration testing tests the interfaces between individual projects. A good test plan will identify the interfaces and institute test conditions that will validate interfaces between software systems. Given this, the integration report follows the same format as the Individual Project Test report, except that the conditions tested are the interfaces between software systems.

System Test Report

Chapter 8 presented a system test plan standard that identified the objectives of testing, what was to be tested, how it was to be tested, and when tests should occur. The System Test report should present the results of executing that test plan (see Figure 11-13). If test data is maintained electronically, it need only be referenced, not included in the report.

Figure 11-13. A System Test report.

1. General Information

 

1.1

Summary. Summarize both the general functions of the software tested and the test analysis performed.

 

1.2

Environment. Identify the software sponsor, developer, user organization, and the computer center where the software is to be installed. Assess the manner in which the test environment may be different from the operation environment, and the effects of this difference on the tests.

 

1.3

References. List applicable references, such as:

  

a. Project request (authorization).

  

b. Previously published documents on the project.

  

c. Documentation concerning related projects.

2. Test Results and Findings

Identify and present the results and findings of each test separately in paragraphs 2.1 through 2.n.

 

2.1

Test (identify)

  

2.1.1

Validation tests. Compare the data input and output results, including the output of internally generated data, of this test with the data input and output requirements. State the findings.

  

2.1.2

Verification tests. Compare what is shown on the document to what should be shown.

 

2.n

Test (identify). Present the results and findings of the second and succeeding tests in a manner similar to that of paragraph 2.1.

3. Software Function Findings

Identify and describe the findings on each function separately in paragraphs 3.1 through 3.n.

 

3.1

Function (identify)

  

3.1.1

Performance. Describe briefly the function. Describe the software capabilities designed to satisfy this function. State the findings as to the demonstrated capabilities from one or more tests.

  

3.1.2

Limits. Describe the range of data values tested. Identify the deficiencies, limitations, and constraints detected in the software during the testing with respect to this function.

 

3.n

Function (identify). Present the findings on the second and succeeding functions in a manner similar to that of paragraph 3.1.

4. Analysis Summary

 

4.1

Capabilities. Describe the capabilities of the software as demonstrated by the tests. Where tests were to demonstrate fulfillment of one or more specific performance requirements, compare the results with these requirements. Compare the effects any differences in the test environment versus the operational environment may have had on this test demonstration of capabilities.

 

4.2

Deficiencies. Describe the deficiencies of the software as demonstrated by the tests. Describe the impact of each deficiency on the performance of the software. Describe the cumulative or overall impact on performance of all detected deficiencies.

 

4.3

Risks. Describe the business risks if the software is placed in production.

 

4.4

Recommendations and estimates. For each deficiency, provide any estimates of time and effort required for its correction, and any recommendations as to:

  

a. The urgency of each correction.

  

b. Parties responsible for corrections.

  

c. How the corrections should be made.

 

4.5

Option. State the readiness for implementation of the software.

 

Acceptance Test Report

Testing has two primary objectives. The first is to ensure that the system as implemented meets the software requirements. The second objective is to ensure that the software system can operate in the real-world user environment, which includes people with varying skills, attitudes, time pressures, business conditions, and so forth. This final report should address these issues. See Chapter 12 for conducting and reporting the results of acceptance testing.

Check Procedures

Work Paper 11-2 is a questionnaire that enables testers to verify that they have performed the test reporting processes correctly. The checklist is divided into three parts: Quality Control over Writing Status Reports, Quality Control for Developing Interim Test Result Reports, and Control over Writing Final Test Reports. Work Paper 11-3 is a questionnaire that will guide testers in writing effective reports.

Work Paper 11-2. Report Writing Quality Control Checklist

(For each item, indicate YES, NO, or N/A, and add comments as needed.)

Part 1: Quality Control over Writing Status Reports

1. Has management been involved in defining the information to be used in the decision-making process?

2. Have the existing units of measure been validated?

3. Are software tools in place for collecting and maintaining a database to support the project reporting process?

4. Has the completed requirements document been signed off by management and project personnel?

5. Have management and project personnel been trained in collecting quantitative data and using the reports?

Part 2: Quality Control for Developing Interim Test Result Reports

1. Do the report writers have the expected results from testing?

2. Is there a method of reporting uncovered defects?

3. Is there a method of reporting the status of defects?

4. Is there a method to relate the defects to the function that is defective?

5. Have the testers consulted with management to determine what type of reports are wanted?

6. Have the following reports been prepared?

   • Function/Test Matrix

   • Functional Testing Status

   • Functions Working Timeline

   • Expected vs. Actual Defects Uncovered Timeline

   • Defects Uncovered vs. Corrected Gap Timeline

   • Average Age of Uncorrected Defects by Type

   • Defect Distribution

   • Normalized Defect Distribution

   • Testing Action

7. Do the reports appear reasonable to those involved in testing?

8. Have the reports been delivered to the person desiring the report?

9. Have the reports been delivered on a timely basis?

Part 3: Control over Writing Final Test Reports

1. Have reports been issued for the final results of individual project testing?

2. Have reports been issued for the final results of integration testing?

3. Has a summary report been issued on the overall results of testing?

4. Did these reports identify the scope of testing?

5. Did these reports indicate what works and what doesn’t?

6. Do these reports provide recommendations on actions to take if appropriate?

7. Do these reports provide an opinion to management on whether the software system should be placed into production?

Work Paper 11-3. Guidelines for Writing Test Reports

(For each item, indicate YES, NO, or N/A, and add comments as needed.)

Reporting Complete

1. Does it give all necessary information?

2. Is it written with the reader in mind, and does it answer all his or her questions?

3. Is there a plan for a beginning, middle, and end?

4. Are specific illustrations, cases, or examples used to best advantage?

5. Are irrelevant ideas and duplications excluded?

6. Are the beginning and the ending of the report effective?

Clarity

7. Are the ideas presented in the best order?

8. Does each paragraph contain only one main idea?

9. Is a new sentence started for each main idea?

10. Are the thoughts tied together so the reader can follow from one to another without getting lost?

11. Are most sentences active? Are the verbs mostly action verbs?

12. Is the language adapted to the readers; are the words the simplest that carry the thought?

13. Is underlining used for emphasis, or parentheses for casual mention?

14. Will your words impart exact meaning to the reader?

Concise

15. Does the report contain only essential facts?

16. Are most of the sentences kept short?

17. Are most paragraphs kept short?

18. Are unneeded words eliminated?

19. Are short words used for long ones?

20. Are roundabout and unnecessary phrases eliminated?

21. Is the practice followed of using pronouns instead of repeating nouns?

22. Is everything said in the fewest possible words?

Correct

23. Is the information accurate?

24. Do the statements conform to policy?

25. Is the writing free from errors in grammar, spelling, and punctuation?

Tone

26. Is the tone natural? Is conversational language used?

27. Is it personal? Are the “we” and “you” appropriately emphasized?

28. Is it friendly, courteous, and helpful?

29. Is it free from words that arouse antagonism?

30. Is it free from stilted, hackneyed, or technical words and phrases?

Effectiveness

31. Is there variety in the arrangement of words, sentences, and pages so that it is interesting to read?

32. Was it given the “ear” test?

Conclusion

33. Is the report satisfactory and ready for publication?

Output

This step should produce the following three categories of reports:

  • Project status reports. These reports are designed both for the project team and senior management. Senior management includes information services management, user/customer management, and executive management. These reports provide a check and balance against the status reports submitted by the project team, and any discrepancies between the two reports should be reconciled.

  • Interim test reports. These reports describe the status of testing. They are designed so that the test team can track its progress in completing the test plan. They are also important for the project implementers, as the test reports will identify defects requiring corrective action. Other staff members may wish to access the reports to evaluate the project’s status.

  • Final test reports. These reports are designed for staff members who need to make decisions regarding the implementation of developed software. The reports should indicate whether the software is complete and correct, and if not, which functions are not working.

Guidelines

Testers can use the data from individual project reports to develop a baseline for the enterprise based on mean scores of the reporting criteria. Rather than comparing quality, productivity, budget, defects, or other categories of metrics to external organizations, valuable management information can be made available. From this baseline, individual projects can be compared. Information from projects consistently scoring above the enterprise baseline can be used to improve those projects that are marginal or fall below the enterprise baseline.
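Computing such a baseline is straightforward. The sketch below uses one assumed metric (defects per KLOC) and illustrative project values; the metric and thresholds would be whatever the organization has actually collected:

```python
# Illustrative enterprise baseline: mean defects per KLOC across projects,
# with each project then compared to that baseline (data assumed for illustration).
from statistics import mean

project_defect_density = {"Project A": 4.2, "Project B": 7.5, "Project C": 3.1, "Project D": 5.9}

baseline = mean(project_defect_density.values())
print(f"Enterprise baseline: {baseline:.2f} defects per KLOC")

for project, density in project_defect_density.items():
    position = "above baseline (review practices)" if density > baseline else "at or below baseline"
    print(f"{project}: {density:.1f} -- {position}")
```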

Testers should heed the following guidelines when preparing reports:

  1. Use Work Papers 11-2 and 11-3 to ensure reports are written effectively.

  2. Allow project team members to review report drafts and make comments before reports are finalized.

  3. Don’t include names or assign blame.

  4. Stress quality.

  5. Limit the report to two or three pages stressing important items and include other information in appendixes.

  6. Eliminate small problems from the report and give these directly to the project people.

  7. Deliver the report to the project leader by hand.

  8. Offer to have the testers work with the project team to explain their findings and recommendations.

Summary

The emphasis of this chapter has been on summarizing, analyzing, and reporting test results. Once the decision to implement software has been made, the next step (as discussed in Chapter 12) is to determine whether the system meets the real needs of the users regardless of system requirements and specifications.

 
