Chapter 6. Test Planning: Smart Application of Testing

Failing to plan is a plan to fail.

Effie Jones


The cornerstone of an effective test program is test planning. The test planning element of the Automated Testing Life-cycle Methodology (ATLM) encompasses the review of all activities that will be required for the test program and the verification that processes, methodologies, techniques, people, tools, and equipment (hardware/middleware/software) are organized and applied in an efficient way.

For each test program, test goals and objectives must be defined and test requirements must be specified. Test strategies aimed at supporting test requirements need to be defined. The main events and primary activities of the test program should be entered into a schedule. The products (including deliverables) to be created by the test team during the execution of the test program must be identified as well. Finally, all pertinent information should be captured and kept up-to-date in a test plan document.

Planning activities occur and planning information is captured throughout the various phases of the ATLM, as depicted in Table 6.1. The test planning phase of the ATLM focuses attention specifically on the identification of test program documentation, the planning required to achieve the test objectives and support the test environment, and the development of the test plan document. These particular activities are addressed within this chapter; the other test planning activities are covered in the various chapters of this book, as noted in Table 6.1.

Table 6.1. ATLM Process Hierarchy


6.1 Test Planning Activities

The efficient use of automated test tools requires considerable investment in test planning and preparation. The test plan contains a wealth of information, including much of the testing documentation required for the project. The test plan will outline the team’s roles and responsibilities, the project test schedule, test design activities, test environment preparation, test risks and contingencies, and the acceptable level of thoroughness. Test plan appendixes may include test procedures, a naming convention description, and a requirements-to-test-procedure traceability matrix. The test plan also needs to incorporate the outcome of each phase of the ATLM.

Given that much of the documentation developed throughout the various ATLM phases needs to find its way into the test plan, it is beneficial to briefly review this documentation and understand its relationship to the test plan. This mapping is summarized in Figure 6.1.

Figure 6.1. Test Plan Source and Content


At the beginning of the test plan, an Introduction section typically defines the purpose of the test plan, the background of the project, a system description, and a project organization chart. Early in the test plan, all relevant documentation available related to the test effort is listed, such as the business requirements, design specifications, user manual, operations manual, GUI standards, code standards, system certification requirements, and other project information. The test team should review the various project plans to ascertain information needed to complete the background, documentation, system description, and project organization sections. Such project plans might include the software development plan, system evolution plan, migration plans, systems engineering management plan, and project management plan.

An early planning exercise involves determining whether automated testing will be beneficial for the particular project, given its testing requirements, the available test environment and personnel resources, the user environment, platform, and product features of the target application. The test engineer follows the guidance outlined in Chapter 2 pertaining to deciding whether to automate the testing. An outcome of this planning exercise is the documentation of the reasons for applying automated test tools, which should be captured within the test plan.

Another early planning exercise pertains to the effort of evaluating and selecting automated test tools to support test efforts (see Chapter 3). An outcome of this exercise includes the documentation of the reasons for selecting one or more automated test tools. A “tools” section should be included in the test plan, listing the applicable tools, their functionality, their purpose on the project, and the reason for their selection.

Test planning also includes the performance of a test process analysis exercise, as described in Chapter 4. Test process analysis documentation is generated as a result of the test team’s process review plus the analysis of test goals and objectives. An outcome of this analysis includes the refinement of test goals, objectives, and strategies of the test program.

The test team then verifies that the particular test tools will actually work in the project environment and effectively support the test requirements for a particular test effort before implementing them. The test team utilizes the test tool introduction process outlined in Chapter 4 to support this verification. A test tool consideration exercise is performed to review test requirements and test schedule, among other activities.

The test plan must identify the scope of test activities to be performed. Typically, a work breakdown structure (WBS) is developed that identifies the categories of test activities at one level and the detailed activities at the next level (see Chapter 5 for an example). The WBS is commonly used in conjunction with timekeeping activities (see Chapter 9) to account for time spent performing each test activity.
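By way of illustration, the sketch below (in Python) shows one way a two-level test WBS might be represented so that hours charged against detailed activities roll up to their categories for timekeeping purposes. The category and activity names, and the rollup_hours helper, are hypothetical assumptions for this example, not the WBS from Chapter 5.

# A minimal sketch of a two-level test WBS; names are hypothetical.
test_wbs = {
    "1.0 Project startup": ["1.1 Review project plans", "1.2 Define test goals"],
    "2.0 Test planning": ["2.1 Develop test plan", "2.2 Define test environment"],
    "3.0 Test design/development": ["3.1 Design test procedures", "3.2 Develop test scripts"],
    "4.0 Test execution": ["4.1 Execute system test", "4.2 Execute acceptance test"],
}

def rollup_hours(timesheet, wbs):
    """Roll hours charged against detailed activities up to their WBS categories."""
    return {category: sum(timesheet.get(activity, 0.0) for activity in activities)
            for category, activities in wbs.items()}

# Example: hours recorded by the timekeeping system against detailed activities.
hours = {"2.1 Develop test plan": 32.0, "3.2 Develop test scripts": 60.5}
print(rollup_hours(hours, test_wbs))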

In addition, the test plan should reflect the results of test effort sizing exercises. Sizing estimates (see Chapter 5) may specify the number of test team personnel to be applied to the project in terms of total hours or in terms of people, given a constant level of effort for some specified timeframe. Other sizing estimates, which should be documented in the test plan, when available, include the estimated number of test procedures to be developed and the estimated number of test scripts to be adopted from a reuse library.

The qualities and skills required to support a test program should be documented. The test engineer qualities and skills commonly required are described in Chapter 5. The composition of the test team, which should match the required qualities and skills, can be indicated in a test team profile. A sample profile is depicted in Table 5.10 on page 175.

Individuals assigned to the test team may not have all necessary skills and experience, however. The test manager therefore needs to assess the difference between skill requirements and team members’ actual skills to identify potential areas for training. The test planning effort, in turn, should document training requirements. Planned training should be reflected in the test schedule so that the timeframes for concurrent test activities can be adjusted.

The roles and responsibilities of test team personnel (see Chapter 5) need to be defined and documented in the test plan as well. These roles and responsibilities should be tailored for the particular project.

The test planning phase—the focus of this chapter—concentrates on the definition and documentation of test requirements, the identification of test program documentation, the planning required to support the test environment, and the development of the test plan document. Related to the documentation of test requirements is the need to define the requirements storage mechanism used to manage the test requirements.

As discussed in this chapter, every test program should have a defined scope, reflecting the fact that the test effort has limitations in terms of personnel, person-hours, and schedule. A system description or overview should define the components of the system being tested. Test program assumptions, prerequisites, and risks need to be defined and documented within the test plan. This definition includes any events, actions, or circumstances that might prevent the test program from meeting the schedule, such as the late arrival of test equipment or delays in the availability of the software application.

Associated with the system description is the need to identify critical success functions and the highest-risk functions of the system. Not everything can be tested and not everything can be automated. As noted in this chapter, identification of the highest-risk functionality ensures that the test team will focus its efforts on the critical success functions as a first priority.

A requirements traceability matrix allows the test team to keep track of test procedure coverage of requirements. This matrix is generally provided as part of the test plan and can be included as an appendix or a separate section. As described later in this chapter, each system requirement is assigned a verification method. Verification methods include demonstration, analysis, inspection, test, and certification.

The test plan will also need to identify hardware, software, and network requirements to support a test environment that mirrors the AUT environment, as described in this chapter. The procurement, installation, and setup activities for various test environment components must be carefully planned and scheduled. When unusual test environment requirements exist, the test team may need to increase the total test program sizing estimate, which is measured in terms of total person-hours. Test environment plans should identify the number and types of individuals who will need to use the test environment and verify that a sufficient number of computers is available to accommodate these individuals.

The overall test design approach is another component of the test plan. Methods for modeling the design of the (dynamic) test program are detailed in Chapter 7. A graphical representation of the test design identifies the test techniques that will be employed on the test effort, including white-box and black-box test techniques.

In addition, the test plan needs to document the various test procedures required, as outlined in Chapter 7. Documentation of the test procedure definition involves the identification of the suite of test procedures that must be developed and executed as part of the test effort. The design exercise involves the organization of test procedures into logical groups and the definition of a naming convention for the suite of test procedures.

Test planning will identify test data requirements and identify the means to obtain, generate, or develop test data. As explained in Chapter 7, the test plan should identify the mechanism for managing the integrity of the testbed, such as refreshing the test database to an original baseline state, so as to support regression testing. The test plan should indicate the names and locations of the applicable test databases necessary to exercise software applications.
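As a simple illustration of the testbed-refresh idea, the sketch below restores a working test database from a baseline copy before a regression run. The file locations and the use of a plain file copy are assumptions for the example; a real project would substitute its own database restore mechanism (for instance, a DBMS import of a baseline dump).

import shutil
from pathlib import Path

# Hypothetical file locations for the baseline and working copies of the test database.
BASELINE_DB = Path("testbed/baseline/app_test.db")
WORKING_DB = Path("testbed/current/app_test.db")

def refresh_testbed():
    """Restore the working test database to its baseline state before a regression run."""
    WORKING_DB.parent.mkdir(parents=True, exist_ok=True)
    shutil.copyfile(BASELINE_DB, WORKING_DB)
    print(f"Test database refreshed from {BASELINE_DB}")

if __name__ == "__main__":
    refresh_testbed()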

The test development architecture is documented within the test plan. As noted in Chapter 8, it provides the test team with a clear picture of the test development preparation activities (building blocks) necessary to create test procedures. A graphical illustration depicts the test development architecture by showing the major activities to be performed as part of test development.

The test procedure development/execution schedule is prepared by the test team as a means to identify the timeframe for developing and executing the various tests. As noted in Chapter 8, this schedule takes into account any dependencies between tests and includes test setup activities, test procedure sequences, and cleanup activities.

The test plan incorporates the results of modularity relationship analysis. A modularity relationship matrix is presented (see Chapter 8) that specifies the interrelationship of the various test scripts. This graphical representation allows test engineers to quickly identify opportunities for script reuse in various combinations using the wrapper format, thereby minimizing the effort required to build and maintain test scripts.
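The sketch below illustrates the reuse idea in general terms: small building-block scripts are combined by wrapper scripts into different test procedures. The function names and the app session argument are hypothetical stand-ins for recorded or hand-coded tool scripts, not an example taken from Chapter 8.

# Reusable building-block scripts; names and the 'app' argument are hypothetical.
def login(app, user):
    print(f"{app}: log in as {user}")

def create_order(app, data):
    print(f"{app}: create order {data}")

def verify_order(app, expected):
    print(f"{app}: verify order matches {expected}")

def logout(app):
    print(f"{app}: log out")

def wrapper_order_entry_test(app):
    """Wrapper script: combines reusable scripts into one end-to-end test procedure."""
    login(app, "clerk01")
    create_order(app, {"item": "widget", "qty": 2})
    verify_order(app, {"item": "widget", "qty": 2})
    logout(app)

def wrapper_order_review_test(app):
    """A second wrapper reuses the same building blocks in a different combination."""
    login(app, "supervisor01")
    verify_order(app, {"item": "widget", "qty": 2})
    logout(app)

wrapper_order_entry_test("AUT")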

Another element of test planning involves the identification of procedures for baselining test scripts. A configuration management tool should be specified for the project. The test plan needs to address the means for controlling the test configuration and environment, as discussed in Chapter 8. As test procedures are being developed, the test team needs to ensure that configuration control is performed for test design, test scripts, and test data, as well as for each individual test procedure.

Next, the test team needs to identify the test procedure development guidelines that will apply to the various test development activities. Test procedure development guidelines should be available for both manual test procedures and automated test procedures, as outlined in Table 8.5 on page 309.

The test plan needs to address the test execution activities, including the development of a test schedule and the transition from one testing phase to another. Execution and management topics are discussed in Chapter 9, where we provide a set of best practices pertaining to the development and execution of automated test procedures.

Chapter 9 also discusses the identification of a defect tracking tool as part of test planning. Likewise, defect tracking procedures and defect workflow should be defined and documented, and the test engineers need to be trained regarding these procedures.

The test plan should also define and document the test metrics that will be collected throughout the testing life cycle. Test metrics (see Chapter 9) provide the test manager with key indicators of the test coverage, progress, and quality.

During the early test planning stages, a preliminary test schedule should be created that complements the development schedule. Once test activities have been analyzed and plans documented in more detail within the test plan, the test schedule must be refined and augmented. The test team will need to review any updates to the project development schedule to ensure that the test schedule is consistent.

The development of a test plan is not a simple task, but rather takes a considerable amount of effort. The test team should begin its development by locating a test plan template and then customizing the outline as necessary. (A sample test plan outline is provided in Appendix D.) Once a test plan has been constructed and refined to fully document the test program approach, it will become the guiding instrument for ensuring the success of the test program.

6.2 Test Program Scope

This section outlines the primary activities necessary to define the scope of the test program. The scope of the test program is formulated through the definition of test goals, objectives, and strategies and the definition of test requirements. These definitions can be developed once the test team has a clear understanding of the system, has identified which automated tools will support the test program, and has documented several test program parameters.

A preliminary step in the process of defining the test program scope, as noted in Table 6.2, involves a test team review of system requirements or use cases and, when available, design documentation. Next, the test engineer reviews the system’s mission description and identifies critical and high-risk system functions. He must have a clear definition of the system and an understanding of system requirements or use cases to be able to define test goals, objectives, and strategies.

Table 6.2. Test Program Scope Definition


Automated tools to be applied to the project then need to be identified. Next in the process is the documentation of test program parameters, including any assumptions made when defining test goals, objectives, and strategies. This phase also includes the listing of prerequisite events, documentation, and products required to support various test program activities. System acceptance criteria are defined, and test program risks are assessed and mitigation plans developed. Finally, test requirements are defined. The outcomes of these various activities are all documented within the test plan.

6.2.1 System Description

To define a test program, the test engineer needs to have a clear understanding of the system being tested. A system description or overview needs to be obtained or developed and eventually documented within the test plan. This description may be derived from a system mission description or from the introductory material provided within the system requirements or use cases or a design document. The system description should define the user environment, computing platforms, and product features of the application-under-test.

In addition to describing the system, it is important to bound the system with regard to the test effort. Within a client-server or Web environment, the system under test spans more than just a software application. It may perform across multiple platforms, involve multiple layers of supporting applications, interact with a host of commercial off-the-shelf (COTS) products, utilize one or more different types of databases, and necessitate both front-end and back-end processing. It is important to identify the specific elements of the system that will be tested, including software, hardware, and network components. The test engineer should also determine which software components will be supported by COTS products and which will be supported by newly developed software. It may also be necessary to specify whether the software is developed in-house or outsourced to a different organization.

6.2.2 Critical/High-Risk Functions

The test team needs to identify the critical success functions and high-risk functions of the system, listing them in the test plan in order of severity. These functions include those most critical to the mission of the system and those whose failure would pose the greatest risk to successful system operation. The test team then needs to solicit end-user feedback to validate and refine the priority ranking. Ranking system functions helps the test team prioritize test activities and address the most critical and high-risk system functions early.

6.2.3 Test Goals, Objectives, and Strategies

Once the boundary for the system under test has been established and the critical and high-risk functions of the system have been identified, the test team is ready to define the scope of the test program. It needs to perform a test process review and an analysis of test goals and objectives, as outlined within the test process analysis exercise discussed in Chapter 4. Test process analysis documentation is then generated that reflects the refinement of test goals, objectives, and strategies.

6.2.4 Test Tools

The test team should clearly specify which test tools will be applied on the project. When the team is still contemplating the use of an automated test tool, it should follow the process outlined in Chapter 2 to determine whether to automate the testing process. An outcome of this planning exercise is the documentation of the reasons for applying automated test tools. The evaluation and selection of automated test tools for the particular project were discussed in Chapter 3. A Tools section should be included within the test plan, listing the applicable tools, their functionality, their purpose on the project, and the reason for their selection. The test team should undertake the test tool consideration exercise outlined within Chapter 4 to verify that the particular test tools actually work in the project environment. The results of the test tool compatibility checks are documented in the test plan.

6.2.5 Test Program Parameters

Test program parameters defined may include the scope, assumptions, prerequisites, system acceptance criteria, and risks, depending on the testing phase. The test program scope noted in the test plan should provide a top-level description of the anticipated test coverage. It should identify the system application to be tested and indicate whether the test effort will include the network, hardware, and databases in addition to the software application.

The scope must state whether tests will address nondevelopmental items, such as integrated COTS products. When the system is being developed in accordance with a life-cycle methodology, such as the incremental build model, the team should specify whether system-level testing will involve regression test of existing software modules from an earlier implementation. The incremental build model, by definition, involves the performance of development activities in a series of builds, each incorporating more capabilities, until the system is complete.

The unit test plan should indicate whether stubs and drivers are to be used in unit testing. In particular, the unit testing schedule should account for whether these stubs must be developed to allow for unit testing.

The acceptance testing strategies need to be identified as well. When individual site acceptance tests are planned, the team should note whether these tests will involve a complete set of acceptance test procedures or some subset.

In its development of test goals, objectives, and strategies, the test team will undoubtedly make certain assumptions, which need to be documented in the test plan. Listing these assumptions has the benefit of flagging the need for potential test program redirection when any of these assumptions are not realized during the course of project execution.

It is also beneficial to clearly define dependencies between various test program activities as well as between test activities and other activities being performed on the project. Situations where the performance of a test activity depends on a prerequisite action or activity being performed on the project should be clearly noted in the test plan. For example, performance testing cannot be performed accurately unless the production configuration is in place. Prerequisites may include actions, activities, events, documentation, and products. Documenting these prerequisites may protect the test team from unwarranted blame for schedule slips and other project anomalies. Explicitly identifying event or activity dependencies also aids in the formulation of the test program schedule.

In addition to listing assumptions and prerequisite activities, the test team needs to define a test program boundary. This boundary represents the limit of the test program effort and the team’s responsibility. The test team should define the point at which testing will be considered complete. The test team achieves this definition through the development of clear system acceptance criteria. The definition of such criteria helps to prevent various project team personnel from having different expectations for the test program. For example, the acceptance criteria might state that all defined test procedures must be executed successfully without any significant problems. Other criteria may state that all fatal, high-priority, and medium-priority defects must have been fixed by the development team and verified by a member of the test team. The test plan may also list an assumption stating that the software will probably ship with some known low-priority defects and almost certainly with some unknown defects.

Developers need to be made aware of the system acceptance criteria. The test team should therefore communicate the list of acceptance criteria to the development staff prior to submitting the test plan for approval. System acceptance criteria for the organization should be standardized, where possible, based upon proven criteria used in several projects.

Because the budget and number of test engineers allocated to the test program are limited, the scope of the test effort must have limits as well. When acceptance criteria are stated in ambiguous or poorly defined terms, then the test team will not be able to determine the point at which the test effort is complete. The test team also needs to highlight any special considerations that must be addressed, such as the test of special technologies being implemented on the project or special attention that should be paid to mission-critical functions.

Test program risks should also be identified within the test plan, assessed for their potential impact, and then mitigated with a strategy for overcoming the risks should they be realized. A test program risk might involve an element of uncertainty on the project. For example, a software application development effort might involve a significant amount of COTS software integration, and the amount and depth of COTS software component testing may still be under negotiation. Meanwhile, budgets and schedules are in place based upon certain assumptions. The final resolution to the COTS testing issue may involve the performance of test activities well beyond those planned and budgeted. The potential impact of this test program risk therefore includes a cost overrun of the test program budget or an extension of the test schedule, possibly resulting in the project missing its targeted completion date.

Given this risk and the potential impacts, the test team would need to develop and define a risk mitigation strategy that can either minimize or alleviate the impact of the risk should it be realized. One possible strategy might involve a contingent agreement with one or more COTS vendors to support the test effort on a limited basis in the event that large-scale COTS testing is required. This approach may be especially helpful in instances where one or two large vendors have a significant stake in the successful outcome of the product integration effort and accompanying tests. Such a risk mitigation strategy may have the effect of reducing cost expenditures as well as minimizing the schedule extension.

6.2.6 Verification Methods

The test team should construct a requirements traceability matrix so as to keep track of system requirements or use cases and test procedure coverage of requirements. The traceability matrix explicitly identifies every requirement that will undergo examination by the test team, ensuring that all requirements have been successfully implemented.

The term “examination” in this context means that the test team will perform a verification (or qualification) of the system implementation. The ways that this examination can be performed are known as verification methods. Verification methods include demonstration, analysis, inspection, test (automated or manual), and certification, as outlined in Table 6.3 [1].

Table 6.3. Verification/Qualification Methods


Demonstration, for example, would be used to test the following statement: “Allow on-line queries to all enrollment records by case number.” The test input would consist of a query to the database using a case number. The test engineer would then observe the results displayed on the workstation. This method is appropriate for demonstrating the successful integration, high-level functionality, and connectivity provided by nondevelopmental (NDI), COTS, and government off-the-shelf (GOTS) products.

Analysis and inspection often do not require code execution of the application-under-test. Nevertheless, analysis can sometimes be used to verify that inputs produce the required outputs. For example, the test engineer would compare the input data to a process with the report generated by the system as a result of processing that input data. The test engineer would verify that the output corresponded with the input data. Inspection is used to examine graphical user interface (GUI) screens for compliance with development standards or to examine code for logic errors and compliance with standards. The test method is used when code must be modified or generated to simulate an unavailable input or output. The certification method is employed if the application-under-test needs to meet specific certification requirements. Consider the example of a financial system that requires certification by the Federal Reserve Bank. Such a system needs to meet specific requirements and pass specific tests before it can be moved into a production environment.
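As a rough illustration of the analysis method, the sketch below compares input records with the summary figures a system might report for them; the record layout, field names, and values are assumptions made solely for this example.

# A minimal sketch of the "analysis" verification method: compare the input fed
# to a process with the report the system produced from that input.
input_records = [
    {"case_number": "C-1001", "amount": 250.00},
    {"case_number": "C-1002", "amount": 125.50},
]
system_report = {"record_count": 2, "total_amount": 375.50}

def verify_report(records, report):
    """Verify that the report's summary figures correspond to the input data."""
    expected_count = len(records)
    expected_total = round(sum(r["amount"] for r in records), 2)
    assert report["record_count"] == expected_count, "record count mismatch"
    assert report["total_amount"] == expected_total, "total amount mismatch"
    print("Report output corresponds to input data")

verify_report(input_records, system_report)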

6.2.7 Test Requirements Definition

Recall that in previous chapters, the importance of testable system requirements or use cases was emphasized. Now these system requirements need to be analyzed and specified in terms of test requirements. (Test requirements analysis and test design are described in detail in Chapter 7.) The test requirements analysis discussion addresses what to look for when identifying test requirements for the target application, how to decompose the application design or system requirements or use cases into testable test requirements, and how to analyze application documentation so as to identify test requirements. Test requirements provide a detailed outline of what is to be tested.

The development of test requirements, as noted in Table 6.2, requires the test team to complete several preliminary steps, including gaining an understanding of customer needs. These steps also include a test team review of system requirements or use case requirements and/or the system mission description to better comprehend the purpose and direction of the system. Another step includes the identification of critical and high-risk system functions. Test requirements analysis is performed to develop test goals, objectives, and strategies. The particular test tools being applied to the project are identified. Several test program parameters are then defined, such as assumptions, prerequisites, system acceptance criteria, and risks. Finally, test requirements need to be defined.

Test requirements can be derived from business requirements, functional system requirements, and use case requirements. This strategy for developing test requirements is referred to as a requirements-based or behavioral approach. Test requirements can also be derived based on the logic of the system design, which is referred to as a structural approach. The approach taken may depend on the timeframe within the development life cycle in which test requirements are defined. It can also vary according to the contractual and safety critical requirements. For example, decision coverage is often required in avionics and other safety-critical software. Various test requirements definition approaches are further outlined in Chapter 7.

When developing test requirements from system requirements or use cases, the test team can expect to develop at least one test requirement for each system requirement. The ratio of system requirements to system-level test requirements varies and can be either one-to-one or one-to-many, depending on the risk assigned to each functionality and the granularity of the requirement. The ratio of use case requirements to system-level test requirements also varies, depending on the risk and use case scenarios to be tested. System requirements or use cases that have been decomposed to a software specification or design level are often tested at the unit and integration test levels. The test team should be cognizant of the fact that a customer for an application development effort may wish to have some lower-level (derived or decomposed) software requirements addressed during system test and user acceptance test.

The mechanism for obtaining concurrence with a particular customer on the scope and depth of test requirements is the test plan. The customer reviews and eventually approves the test plan, which outlines test requirements and contains a requirements traceability matrix. The requirements traceability matrix specifies requirements information, as well as mappings between requirements and other project products. For example, the requirements traceability matrix usually identifies the test procedures that pertain to the project and maps test procedures to test requirements; and it also maps test requirements to system requirements or use cases. A requirements management tool such as DOORS allows for automatic mapping of these matrices (see Section 6.3 for more detail).

The traceability matrix explicitly identifies every requirement that will undergo examination by the test team, thereby ensuring that all requirements will be successfully implemented before the project goes forward. The term “examination” in this context means that the test team will perform a verification (or qualification) of the system implementation. Verification methods are summarized in Table 6.3.

The customer signifies his or her approval of the scope of requirements that will be examined by the test team by approving the test plan. To obtain early feedback on the scope of test coverage of requirements, the test team might submit a draft copy of the traceability matrix to the customer. (The traceability matrix may also be referred to as a test compliance matrix or a verification cross-reference matrix.) Early feedback on the traceability matrix from the customer gives the test team more time to respond to requests for changes in the test plan.

Another important benefit of early feedback on the traceability matrix is the mutual agreement reached on the verification method to be employed to support verification or qualification of each requirement. Some verification methods are easier to implement and less time-intensive than other methods. For example, the test for the various COTS products implemented for a system usually involves far more of an effort than simply qualifying the COTS products via the certification verification method.

6.3 Test Requirements Management

Test planning involves both the definition of test requirements and the development of an approach for managing test requirements. Test requirements management includes the storage of requirements, maintenance of traceability links, test requirements risk assessment, test requirements sequencing (prioritization), and identification of test verification methods. Traceability links include the mapping of test procedures to test requirements and of defects to test procedures.

In the test plan, the test team needs to outline the way in which the test requirements will be managed. Will test requirements be kept in a word-processing document, spreadsheet, or requirements management tool? Key considerations with regard to the storage of test requirements include the ease and flexibility of requirement sorting and reporting, the ease and speed of requirement entry, and the efficiency of requirements maintenance. Another consideration is whether a large number of project personnel will be able to access the requirements information simultaneously. Also important is the ability of the storage mechanism to accommodate various test requirements management data needs and to maintain the integrity and security of the requirements information.

Many organizations use word-processing documents and spreadsheets to maintain requirements information. There are many disadvantages to maintaining requirements in a word-processing document. For example, limitations exist regarding the ways that requirements can be sorted and filtered. Likewise, it is not easy to maintain a history of changes that indicates what changes were made, when, and by whom. Teams often become so frustrated with the burden of maintaining requirements in these kinds of tools that they abandon the activity of maintaining requirements information altogether.

Commercial requirements management tools are available to handle these requirements management concerns. These tools are especially beneficial in maintaining traceability links. Traceability between system requirements or use cases and various project products can become complicated very quickly. Not only is this mapping difficult to maintain when done manually, but changes to requirements and the various project products also make the mapping highly tedious and even more difficult to maintain.

6.3.1 Requirements Management Tools

Special automated tools are particularly useful when applied to tedious or otherwise time-consuming tasks. The task of test requirements management fits this description. The test team needs to verify whether the organization has established a standard tool to support requirements management. Next, it needs to determine whether the tool is being applied to the current project or whether it can be installed in the systems engineering environment to specifically support test requirements management for the project. When no standard tool has been established or planned for the project, the test team may need to raise this absence as an issue to management or take the lead in evaluating and selecting a tool. Noteworthy tools on the market include RequisitePro by Rational, DOORS by QSS, and RTM by Integrated Chipware.

Many advantages can be realized by using a requirements management (RM) tool. For example, all requirements pertinent to the project can be maintained within the tool’s database. The RM tool uses a central database repository to hold all related data. Requirements can include contractual, system, and test requirements. Adding test requirements to the RM tool database allows for easy management of testing coverage and mapping test requirements to business requirements or functional requirements.

As most RM tools support multiuser access, test team personnel need not worry that they are modifying a test requirement or procedure that is currently being modified elsewhere. These tools also offer a great benefit for test management. The test manager or lead can simply assign test procedures to the various test engineers, and then monitor test procedure development progress using the RM tool. With the DOORS tool, for example, simple filters can be set up to view the inputs provided by the various test engineers working on the project. Most RM tools also automatically maintain a history of any requirements changes (that is, the what, when, and how of the change).

Test teams can also benefit from the RM tool’s automatic capability for linking test requirements to test procedures, or linking defects to test procedures. Most RM tools provide an automated mapping of the test requirements to business requirements as well, based on the use of database object identification numbers. For example, DOORS uses DXL (a proprietary DOORS scripting language); running a simple DXL script allows for automatic linking of each test requirement to the applicable business requirement or detailed design component. With this linkage, the test team can obtain reports from the RM tool outlining the status of test requirement coverage. Additionally, test engineers can verify that a test requirement has been written for each business requirement by executing a simple script. Once the test team has completed the test procedures, a modified DXL script can be run that automatically maps or links each test procedure to all of its respective test requirements, thus creating a requirements traceability matrix. More information on the requirements traceability matrix is provided in Section 6.3.4.
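Because DXL is proprietary, the sketch below illustrates the linking idea in a tool-neutral way rather than reproducing a DOORS script: each test requirement records the identifier of the business requirement it covers, and a simple pass over the links reports any business requirement left uncovered. All identifiers and requirement statements are hypothetical.

# Tool-neutral sketch of requirement-to-requirement linking and coverage reporting.
business_requirements = {"BR-01": "On-line query by case number",
                         "BR-02": "Monthly enrollment report",
                         "BR-03": "Record archival"}

# Each test requirement is linked to the business requirement it covers.
test_requirements = {"TR-101": "BR-01", "TR-102": "BR-01", "TR-103": "BR-02"}

def coverage_report(biz_reqs, test_reqs):
    """Report which business requirements have at least one linked test requirement."""
    covered = set(test_reqs.values())
    for br_id, text in biz_reqs.items():
        status = "covered" if br_id in covered else "NOT COVERED"
        print(f"{br_id} ({text}): {status}")

coverage_report(business_requirements, test_requirements)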

When test procedure information is stored within the RM tool, the test engineer needs to update the status of only the test procedure, which includes test pass/fail information and indicates whether the procedure has been executed yet. In DOORS, a simple filter on the status of test procedure execution permits the generation of an overall test execution status report, thereby providing percentage completion status for the project.

The fact that the requirements information can be continuously updated with relative ease is another benefit of using an RM tool. Columns can be moved around as necessary, and new columns can be added, while maintaining the integrity of the data. When requirements change, the test engineer can easily and quickly identify any test procedures that are affected by the change (that is, linked to the changed requirements). The test manager or lead can also readily identify the test engineers who are responsible for the affected test procedures.

Most seasoned test professionals have likely worked on a project where requirement changes occurred in midstream, but where each requirements change was not translated into a modified test requirement or test procedure. In other situations, requirements may be deleted or moved to the next delivery of an incremental build program without changes being made to the existing requirements traceability matrix. An RM tool makes it easy to update the requirements field that keeps track of the release version. The tool can then reproduce an updated requirements traceability matrix by running a simple script using defined parameters.

The first step is to add all test procedures into the RM tool (keep all test procedures in one place). Don’t go through the trouble of keeping the manual test procedures in one place and the automated test procedures in another place. If all test procedures are in one place and the test team decides to automate 50% of them, most RM tools will allow the test procedures to be exported into a .csv file, which can then be imported into any test management tool that can handle .csv or .txt files.
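The sketch below illustrates the kind of .csv export described above. The field names and file layout are assumptions made for the example, since each RM tool defines its own export format.

import csv

# Hypothetical test procedure records with RM-tool-style attributes.
test_procedures = [
    {"id": "TP-001", "title": "Query enrollment record", "automated": "yes", "status": "passed"},
    {"id": "TP-002", "title": "Generate monthly report", "automated": "no", "status": "not run"},
]

with open("test_procedures.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["id", "title", "automated", "status"])
    writer.writeheader()
    writer.writerows(test_procedures)
# The resulting .csv file can then be imported into a test management tool
# that accepts .csv or .txt input.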

6.3.2 Assessing the Test Requirements Risk

Once the test requirements have been identified, the test team should assess the risks inherent in the various test requirements by evaluating the requirements in terms of the following factors (a simple scoring sketch follows the list):

• Impact. Assess the value of the requirement. Suppose that the particular test requirement was not incorporated in the scope of the test effort and that the applicable performance area of the system eventually failed following system deployment. What impact would this failure have on system operations and on end users’ ability to perform their jobs? Does the failure represent a potential liability for the company?

• Probability. Assess the likelihood of a failure occurring if the particular test requirement is not incorporated into the scope of the test effort. Analyze the frequency with which the applicable performance area of the system would be exercised by the end user. Gauge the experience of the user within the particular performance area.

• Complexity. Determine which functionality is most complex and then focus test team resources on that functionality.

• Source of failure. Assess the failure possibilities and identify the test requirements that are most likely to cause these failures.
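The sketch that follows shows one way the factors above might be combined into a single ranking score. The 1-to-3 ratings, the weighting scheme, and the requirement identifiers are all assumptions that a test team would tailor to its own project.

# Hypothetical factor ratings (1 = low, 3 = high) for three test requirements.
requirements = {
    "TR-101": {"impact": 3, "probability": 3, "complexity": 2},
    "TR-102": {"impact": 2, "probability": 1, "complexity": 1},
    "TR-103": {"impact": 3, "probability": 2, "complexity": 3},
}

def risk_score(factors):
    """Combine factor ratings into a single risk score (higher = riskier)."""
    return factors["impact"] * factors["probability"] + factors["complexity"]

# Rank requirements from highest to lowest risk.
ranked = sorted(requirements.items(), key=lambda item: risk_score(item[1]), reverse=True)
for req_id, factors in ranked:
    print(req_id, risk_score(factors))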

6.3.3 Prioritization of Tests

The test team needs to prioritize test requirements while assessing the risk inherent in each test requirement. It should also review the critical functions and high-risk factors identified for the system, and use this information as input for determining the order in which to group the test requirements. A practical approach is to group test requirements into those covering the most critical functionality and those covering the least critical functionality. Remember to include input from end users when determining which functionality is most critical and which is least critical. Another benefit of structuring and organizing the test requirements in this way is that it makes it easier to assign test tasks to the various test engineers.

The criteria outlined here for determining the order in which to group test requirements represent the recommendations of Rational, as outlined within its test methodology literature [2].

• Risk level. Based upon the risk assessment, test requirements are organized so as to mitigate a high risk to system performance or the potential exposure of the company to liability. Examples of high risks include functions that prohibit data entry and business rules that could corrupt data or violate regulations.

• Operational characteristics. Some test requirements will rank high on the priority list due to the frequency of usage or the lack of end-user knowledge in the area. Functions pertaining to technical resources and internal users, and infrequently used functions, are ranked lower in priority.

• User requirements. Some test requirements are vital to user acceptance. If the test approach does not emphasize these requirements, the test program may violate contractual obligations or expose the company to financial loss. It is important to assess the impact of the potential problem on the end user.

• Available resources. As usual, the test program will have constraints in the areas of staff availability, hardware availability, and conflicting project requirements. Here is where the painful process of weighing trade-offs is managed. A factor in the prioritization of test requirements is the availability of resources.

• Exposure. Exposure is defined as the risk (probability of failure) multiplied by the cost of failure. For example, a highly probable defect with a high cost of failure has a high exposure. (A small worked example follows this list.)
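A small worked example of the exposure calculation, using purely illustrative figures:

# Exposure = probability of failure x cost of failure; figures are illustrative only.
probability_of_failure = 0.30      # 30% chance the defect surfaces in production
cost_of_failure = 50_000           # estimated cost (e.g., dollars) if it does
exposure = probability_of_failure * cost_of_failure
print(f"Exposure: {exposure:,.0f}")   # 15,000 -- higher exposure, higher test priority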

6.3.4 Requirements Traceability Matrix

System requirements or use cases are usually maintained within an RM tool. Once identified during test planning or design, test procedures are documented within the RM tool and linked to the corresponding system requirements or use cases. Later, when tests are executed, their results are recorded and linked to the corresponding test procedures.

The requirements traceability matrix represents an automated output of the RM tool, helping to track system requirements or use cases and test procedure coverage of requirements. It may take any of several forms based upon the particular mapping of interest. The requirements traceability matrix identifies every requirement that will undergo examination by the test team and specifies a verification method for each system requirement. Most importantly, the matrix maps test procedures to system requirements or use cases, which helps to ensure that system requirements or use cases requiring test verification have been successfully implemented.

It is important that the test team obtain early feedback on the requirements traceability matrix from end users or system customers, as a means to reach agreement on the verification method to be employed to support verification or qualification of each requirement. This decision is especially significant because some verification methods are easier to implement and less time-intensive than other methods. Early feedback on the matrix from the customer helps provide more time for the test team to respond to potential changes.

As the requirements traceability matrix identifies the test procedures that will be performed, approval of this matrix by the customer also signifies customer satisfaction with the scope of test coverage for system requirements or use cases. When user acceptance test (UAT) is performed later, the customer will review the requirements traceability matrix to verify test coverage of system requirements or use cases. Table 6.4 provides a sample requirements traceability matrix.

Table 6.4. Requirements Traceability Matrix*


The requirements traceability matrix in Table 6.4 includes a system requirements specification paragraph identifier, requirement statement text, unique requirement identifier generated by the RM tool, qualification method, risk/priority classification of the requirement, and test procedure associated with the requirement. It also identifies the system delivery (D1, D2, or D3) in which the solution to the requirement has been implemented.
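The rows below are illustrative only; they are not the contents of Table 6.4. They simply show the column structure just described, using requirement statements drawn from the examples earlier in this chapter and hypothetical identifiers.

# Illustrative traceability-matrix rows mirroring the columns described above.
traceability_matrix = [
    {"spec_para": "3.2.1a",
     "requirement": "Allow on-line queries to all enrollment records by case number",
     "req_id": "SR-118", "method": "Demonstration", "priority": "High",
     "test_procedure": "TP-021", "delivery": "D1"},
    {"spec_para": "3.2.4c",
     "requirement": "Meet Federal Reserve Bank certification requirements",
     "req_id": "SR-142", "method": "Certification", "priority": "High",
     "test_procedure": "TP-057", "delivery": "D2"},
]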

6.4 Test Program Events, Activities, and Documentation

Key elements of test planning include the planning associated with project milestone events, test program activities, and test-program-related documentation. The technical approach for these key elements is developed, personnel are assigned, and performance timelines are defined in the test program schedule.

6.4.1 Events

The major events for the test team should be reflected in the test schedule. These events include requirements and design reviews, test readiness reviews, system configuration audits, technical interchange meetings, and formal test-related working group meetings. Other limited-timeframe activities may include the conduct of special tests, such as security testing, and the performance of acceptance tests.

To bolster the life-cycle performance of the test program, a test and integration work group (TIWG) may be established. The TIWG provides a forum that facilitates iterative interchange among test engineers, development staff, and customer representatives. TIWG meetings are held on a periodic basis and should be incorporated in the test schedule. The goals of the TIWG include the following:

• Ensure that planned testing activities support the verification of functional, performance, and technical requirements for the system.

• Ensure that testing addresses human engineering aspects of system operability.

• Identify and monitor major test program risks to ensure that test prerequisite activities are performed correctly and are proceeding according to the project schedule.

• Obtain early and informal customer feedback on draft test plans and traceability matrices so as to finalize the scope and depth of the test effort and to expedite approval of test documentation.

• Enhance the customer’s familiarity with and understanding of the detailed aspects of the test program so as to bring about a more efficient acceptance test effort.

Test readiness reviews (TRRs) may be conducted as part of a project to ensure that the test program is ready to support a UAT. On large development projects, a TRR may involve comprehensive reviews that examine requirements specification and design document changes, unit- and integration-level test status, test environment readiness, and test procedure development. Test environment readiness may need to address the availability status of test data requirements, as well as the availability status of hardware and software required to support the test system configuration.

6.4.2 Activities

The test plan must identify the scope of test activities to be performed. Typically, a work breakdown structure (WBS) is developed to identify the categories of test activities that may be carried out (see Chapter 5).

One important activity that needs to be defined is the review of project documentation. Although documentation review by the test team is an effective defect removal strategy, the test team needs to be careful about leaving the scope of this activity open-ended. As noted previously, test resources are limited but expectations of test team support may be greater than the budget allows. It is important to clearly define which project documents will be reviewed. The test team should list the title of each project document to be reviewed within the test plan.

6.4.3 Documentation

In addition to reviewing project-generated documents, the test team will produce test documentation. It should develop a list containing the title of each test document type, indicate whether the document will be delivered outside the organization (deliverable), and list the scheduled due date or timeframe in which the document will be created. When the document is produced on a monthly (periodic) basis, the due date column can simply read monthly. Table 6.5 contains a sample listing of test documentation.

Table 6.5. Test Documentation


6.5 The Test Environment

Test planning must outline the resources and activities that are required to support the timely setup of the test environment. The test team should identify the hardware, software, network, and facility requirements needed to create and sustain the test environment. The procurement, installation, and setup activities for various test environment components need to be planned and scheduled. Such test environment plans should identify the number and types of individuals who will access and use the test environment and ensure that a sufficient number of computers is planned to accommodate these individuals. Consideration should be given to the number and kinds of environment setup scripts and testbed scripts that will be required.

6.5.1 Test Environment Preparations

Early in the test planning exercise, the test team must review project plans so as to become familiar with the systems engineering (development) environment planned for the project. Project plans, which should be reviewed when available, include the software development plan, system evolution plan, migration plans, systems engineering management plan, and project management plan. While such plans are still under development, the test team should review the draft plans and identify questions and potential concerns related to the development environment or the migration of the development environment to an operational (production) environment.

The test team needs to review project planning documents specifically with regard to the plans for a separate test lab that mimics an operational environment. While unit- and integration-level (developmental) tests are usually performed within the development environment, system and user acceptance tests are ideally performed within a separate test lab setting; this lab should have a configuration identical to that of the production environment or, at a minimum, a scaled-down version of it. The test environment configuration needs to be representative of the production environment so that baseline performance and relative improvement measurements taken during testing reflect what will be seen in production. Simulators and emulators can be used in situations where the production environment cannot be reproduced; such tools can prove vital in exercising the environment and measuring performance.

Next, the test team needs to document the results of its fact gathering. It then performs the following preparation activities, which support the development of a test environment design:

• Obtain information about the customer’s technical environment architecture (when applicable), including a listing of computer hardware and operating systems. Hardware descriptions should include such items as video resolution, hard disk space, processing speed, and memory characteristics. Printer characteristics include type, capacity, and whether the printer operates as a stand-alone unit or is connected to a network server.

• Identify network characteristics of the customer’s technical environment, such as the use of leased lines, circuits, modems, and Internet connections, and the use of protocols such as Ethernet or TCP/IP.

• Obtain a listing of COTS products to be integrated into the system solution.

• Count how many automated test tool licenses will be used by the test team.

• Identify development environment software that must reside on each computer desktop within the test environment.

• Identify hardware equipment required to support backup and recovery exercises within the test environment.

• Ensure that the test environment can accommodate all test team personnel.

• Review system performance test requirements to identify elements of the test environment that may be required to support applicable tests.

• Identify security requirements for the test environment.

After completing these preparation activities, the test team develops a test environment design consisting of a graphic layout of the test environment architecture plus a list of the components required to support the test environment architecture. The list of components should be reviewed to determine which components are already in place, which can be shifted from other locations within the organization, and which need to be procured. The list of components to be procured constitutes a test equipment purchase list. This list needs to include quantities required, unit price information, and maintenance and support costs. A sample test equipment purchase list is provided in Table 6.6.

Table 6.6. Test Equipment Purchase List

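The sketch below shows how such a purchase list might be totaled to feed the test program budget. The items, quantities, prices, and support costs are placeholders, not figures from Table 6.6.

# Hypothetical test equipment purchase list with procurement and support cost rollup.
purchase_list = [
    {"item": "Test workstation",  "qty": 6, "unit_price": 1800.00, "annual_support": 120.00},
    {"item": "Network switch",    "qty": 1, "unit_price": 2500.00, "annual_support": 300.00},
    {"item": "Test tool license", "qty": 4, "unit_price": 3500.00, "annual_support": 700.00},
]

total_purchase = sum(i["qty"] * i["unit_price"] for i in purchase_list)
total_support = sum(i["qty"] * i["annual_support"] for i in purchase_list)
print(f"Procurement: {total_purchase:,.2f}  First-year support: {total_support:,.2f}")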

Next, the test team needs to identify and track adherence to the timeline for equipment receipt, installation, and setup activities. Although these activities may be performed by a network support staff from another department within the organization or from another project, the test team will need to ensure that these activities are kept on track to meet test program requirements. Where possible, it is beneficial for the test team to have at least one individual who has network, database, and system administration and integration skills. Such skills are particularly valuable during test environment setup and during the performance of manual hardware-related tests on the project.

The test team will need to monitor the purchase and receipt of test environment components carefully to ensure that hardware and software procurement delays do not affect the test program schedule. It may want to include in the test equipment order a few backup components to mitigate the risk of testing coming to a halt due to a hardware failure. The test team might consider an alternative risk mitigation option as well—the identification of hardware within the organization that can be substituted within the test environment in the event of a hardware failure.

One system requirement might be that the system remain operational 24 hours per day and never shut down. The test team may need to become involved in finding software and hardware that meet these high-availability requirements. All such special test environment considerations should be documented by the test team.

6.5.2 Test Environment Integration and Setup

At least one member of the test team should have some network, database, and system administration skills. This person can then assist in the installation and integration of hardware components on behalf of the test team, and would also be responsible for installing and configuring software, including automated test tools and any necessary development environment software. The test environment configuration needs to be created, and scripts developed and used to refresh that configuration between test runs. Chapter 8 describes the use of environment setup scripts; a brief sketch of such a refresh script follows.
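
A minimal sketch of such a refresh script appears below, assuming the test data lives under a dedicated directory tree. The paths and directory names are hypothetical, and Chapter 8 covers real environment setup scripts in detail.

```python
"""Restore the test environment data to a known baseline before each test run (illustrative sketch)."""
import shutil
from pathlib import Path

TEST_DATA_DIR = Path("/opt/testenv/data")      # hypothetical working data directory
BASELINE_DIR = Path("/opt/testenv/baseline")   # hypothetical pristine baseline copy
LOG_DIR = Path("/opt/testenv/logs")            # hypothetical location of test run logs

def refresh_environment():
    # Remove residue left behind by the previous test run
    if TEST_DATA_DIR.exists():
        shutil.rmtree(TEST_DATA_DIR)
    if LOG_DIR.exists():
        shutil.rmtree(LOG_DIR)
    LOG_DIR.mkdir(parents=True)
    # Restore the working data from the baseline so every run starts from the same state
    shutil.copytree(BASELINE_DIR, TEST_DATA_DIR)

if __name__ == "__main__":
    refresh_environment()
```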

Besides hardware and software receipt, installation, and integration, administration activities must be carried out to enable test activities within the test environment. These activities ensure that the test team personnel will have access to the necessary systems, software, networks, databases, and tools.

Plans must be defined for obtaining or developing the data and file types that must be loaded into the test environment for use in test procedure development and test execution (a small loading sketch follows). Test data creation and management are discussed further in Chapter 7.
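
As a sketch of what loading prepared test data might look like, assuming the data has been exported to a CSV file (the file name, table, and column names are hypothetical; Chapter 7 covers test data creation and management):

```python
import csv
import sqlite3

def load_test_accounts(db_path="test_env.db", csv_path="test_accounts.csv"):
    """Load prepared account records into the test database (illustrative only)."""
    conn = sqlite3.connect(db_path)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS accounts (account_id TEXT, owner TEXT, balance REAL)"
    )
    with open(csv_path, newline="") as csv_file:
        rows = [(r["account_id"], r["owner"], float(r["balance"]))
                for r in csv.DictReader(csv_file)]
    conn.executemany("INSERT INTO accounts VALUES (?, ?, ?)", rows)
    conn.commit()
    conn.close()
```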

6.6 The Test Plan

The test plan should comprehensively document test program plans, and test team personnel must become very familiar with the content of this plan. The effort to build a test plan is an iterative process, requiring feedback and agreement between the various project participants on the defined approaches, test strategies, and timelines for performance.

When the development effort will support a particular customer, the test team needs to obtain end-user or customer buy-in on the test plan. This buy-in includes acceptance of the test strategy as well as the detailed test procedures, which define the actual tests planned. That is, the end user must concur that the test plan and associated test scripts will adequately verify satisfactory coverage of system requirements or use cases. What better way to ensure success of the test effort and obtain end-user acceptance of the application than to involve the end user throughout test planning and execution?

Many test strategies could potentially be implemented on the project, but there is never enough money in the test program budget to support all possible kinds of tests. A successful, cost-effective test program therefore requires a clear vision of the goals and an explicit understanding of the various test program parameters outlined in Section 6.2, which define the boundary of the test effort. A thorough understanding of the system and system requirements or use cases, coupled with careful definition of test program parameters and test requirements, is necessary to effectively tailor the test program solution to the particular project. Communication and analysis are key to selecting the right mix of test strategies to support the accomplishment of test goals and objectives.

The purpose of the test plan can be summarized as follows:

• It provides guidance for the management and technical effort necessary to support the test program.

• It establishes the nature and the extent of tests deemed necessary to achieve test goals and objectives.

• It outlines an orderly schedule of events and activities that use resources efficiently.

• It provides assurance of test requirement coverage through the creation of a requirements traceability matrix (a minimal sketch of such a coverage check appears after this list).

• It outlines the detailed contents of test procedure scripts and describes how the test procedure scripts will be executed.

• It outlines the personnel, financial, equipment, and facility resources required to support the test program.
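
The traceability matrix mentioned in the list above can be checked mechanically for coverage gaps. The following is a minimal sketch using an in-memory mapping; the requirement and procedure identifiers are hypothetical, and a real matrix would normally live in the requirements management or test management tool.

```python
# Hypothetical requirements-to-test-procedure traceability matrix
traceability = {
    "SR-001": ["TP-101", "TP-102"],  # requirement -> verifying test procedures
    "SR-002": ["TP-103"],
    "SR-003": [],                    # no test procedure yet -- a coverage gap
}

uncovered = [req for req, procedures in traceability.items() if not procedures]
coverage = 1 - len(uncovered) / len(traceability)

print(f"Requirements with at least one test procedure: {coverage:.0%}")
if uncovered:
    print("Requirements lacking a verifying test procedure:", ", ".join(uncovered))
```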

Once the test team is satisfied that the test plan incorporates all pertinent details of the test program, an approval authority should review the plan. In some cases, a particular customer may need to approve the test plan before execution of the test program can begin. In other situations, the manager responsible for the project will review and approve the test plan. In any event, it is important that the development staff also review and endorse the test plan.

The test team may wish to organize and conduct a test plan walkthrough, which would involve the principal personnel responsible for test program execution and test plan approval. Prior to the walkthrough, the test team should circulate the test plan for review and solicit comments. The significant comments can then be addressed at the walkthrough and resolved in a single session.

It is important to remember that test planning is not a single event, but rather a process. The test plan is a living document that guides test execution through to conclusion, and it must be updated to reflect any changes. The test team should refer to the test plan often during the performance of testing on the project. Table 6.7 provides a sample outline for a test plan.

Table 6.7. Test Plan Outline

image

image

6.6.1 Test Completion/Acceptance Criteria

Before the target application goes into production, test results analysis can help to identify any defects that need to be fixed and those whose correction can be deferred. For example, corrections of some defects may be reclassified as enhancements and addressed as part of a later software release. The project or software development manager who heads the engineering review board will likely determine whether to fix a defect or risk shipping a software product with the uncorrected defect. Several questions are commonly asked in this situation. What is the rate of regressions? How often are defect corrections failing? If the rate of regressions is high for a particular subsystem and that subsystem has light test coverage, then the risk impact of a defect correction is high.
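
As an illustration of this reasoning, the regression rate can be estimated from defect-tracking data and combined with a rough coverage figure to flag subsystems where a late defect correction carries high risk. All of the subsystem names and numbers below are hypothetical.

```python
# Hypothetical per-subsystem defect-tracking data:
#   (defect fixes attempted, fixes that caused a regression, test coverage fraction)
subsystems = {
    "order-entry": (40, 12, 0.35),
    "reporting":   (25,  2, 0.80),
    "security":    (18,  1, 0.75),
}

for name, (fixes, regressions, coverage) in subsystems.items():
    regression_rate = regressions / fixes
    # A correction is risky when regressions are common and tests are unlikely to catch them
    high_risk = regression_rate > 0.20 and coverage < 0.50
    flag = "  <- high risk" if high_risk else ""
    print(f"{name:12s} regression rate {regression_rate:.0%}, coverage {coverage:.0%}{flag}")
```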

With or without tools, there comes a day when testing must be halted and the product must actually be deployed. Perhaps the most difficult question in software testing is deciding when to stop. Humphrey notes that as the number of detected defects in a piece of software increases, the probability of the existence of more undetected defects also increases:

The question is not whether all the defects have been found but whether the program is sufficiently good to stop testing. This trade-off should consider the probability of finding more defects in test, the marginal cost of doing so, the probability of the users encountering the remaining defects, and the resulting impact of these defects on the users. [3]
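
Humphrey's trade-off can be made concrete with a rough expected-cost comparison. The sketch below is only one possible way to frame that comparison, and every probability and cost figure in it is invented solely to show the shape of the calculation.

```python
# Hypothetical inputs to the "good enough to stop testing?" trade-off
p_find_defect_next_week = 0.6     # probability that another week of testing finds a defect
cost_of_another_test_week = 15_000  # marginal cost of that additional week
p_user_hits_residual = 0.3        # probability users would encounter such a remaining defect
cost_of_field_failure = 60_000    # impact on the users (and the business) if they do

# Expected benefit of continuing: the field-failure cost avoided by finding the defect now
expected_benefit = p_find_defect_next_week * p_user_hits_residual * cost_of_field_failure

print(f"Expected benefit of one more test week: ${expected_benefit:,.0f}")
print("Keep testing" if expected_benefit > cost_of_another_test_week else "Consider stopping")
```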

It is important that the test team establish quality guidelines (criteria) for the completion and release of software. These criteria should answer several questions: What types of testing and improvements need to be implemented, and when will they be completed? What resources will be needed to perform the testing? A simple acceptance criterion could state that the application-under-test will be accepted provided that no problem reports with an associated priority level of 1 (fatal), 2 (high), or 3 (medium) are outstanding. The criteria might further state that outstanding level 4 or 5 (low) problem reports are acceptable.
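
A minimal sketch of evaluating such an acceptance criterion against the outstanding problem reports follows; the report identifiers and priorities are hypothetical.

```python
# Hypothetical outstanding problem reports: (report id, priority level)
open_reports = [("PR-204", 4), ("PR-217", 5), ("PR-221", 4)]

# Accept only if no outstanding report has priority 1 (fatal), 2 (high), or 3 (medium)
blocking = [report_id for report_id, priority in open_reports if priority <= 3]

if blocking:
    print("Acceptance criteria NOT met; blocking reports:", ", ".join(blocking))
else:
    print("Acceptance criteria met: only priority 4 and 5 reports remain outstanding")
```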

6.6.2 Sample Test Plan

The sample test plan given in Appendix D indicates the test planning carried out for a fictitious company called Automation Services Incorporated (AMSI), which is testing a system called the WallStreet Financial Trading System (WFTS). The content of this sample test plan has been developed for the sole purpose of illustrating the kinds of information and the ways of presenting relevant information within a test plan. For reasons of illustration, the information contained within the sample test plan may not necessarily be consistent from one section to another.

Chapter Summary

• The test planning element of the Automated Testing Life-cycle Methodology (ATLM) incorporates the review of all activities required in the test program. It is intended to ensure that testing processes, methodologies, techniques, people, tools, schedule, and equipment are organized and applied in an efficient way.

• The test team should begin its test plan development effort by locating or creating a test plan template, and then tailoring the test plan outline as necessary. Once a test plan has been constructed and refined to fully document the intended approach, it will become the guiding instrument for the subsequent test program.

• The scope of the test program is provided in the test plan as a top-level description of test coverage. The scope is further refined through the definition of test goals, objectives, and strategies, as well as test requirements. These definitions can be recorded once the test team has a clear understanding of the system, chooses the automated tools for the test program, and documents several test program parameters.

• Test planning involves both the definition of test requirements and the development of an approach for managing those requirements. Test requirements management encompasses the storage and maintenance of requirements, maintenance of traceability links, test requirements risk assessment, test requirements sequencing (prioritization), and identification of a test verification method for each system requirement.

• The requirements traceability matrix explicitly identifies every requirement that will undergo examination by the test team, and an associated verification (qualification) method for each system requirement. The traceability matrix maps test procedures to system requirements or use cases, allowing team members to readily confirm that system requirements or use cases requiring test verification have been fully and successfully implemented.

• Key elements of test planning include the planning associated with project milestone events, test program activities, and test program-related documentation. The technical approach for these key elements is developed, personnel are assigned, and performance timelines are specified in the test program schedule.

• Test planning efforts must outline the resources and activities that are necessary to support the timely setup of the test environment. The test team must identify the hardware, software, network, and facility requirements needed to create and sustain the test environment. The procurement, installation, and setup activities for various test environment components need to be planned and scheduled.

• The test team needs to comprehensively document test program plans, and team members need to become very familiar with the content of this plan. The effort to build a test plan is an iterative process, requiring feedback and agreement between the various project participants on the defined approaches, test strategies, and timelines for performance.

• The test team needs to obtain end-user or customer buy-in on the test plan. This buy-in includes the customer’s acceptance of both the test strategy and the detailed test procedures, which define the actual tests planned. As part of this buy-in, the end user concurs that the test plan and associated test scripts adequately verify satisfactory coverage of system requirements or use cases.

• A successful, cost-effective test program requires a clear vision of goals and an explicit understanding of the various test program parameters that define the boundary of the test effort. A thorough understanding of the system requirements or use cases, coupled with careful definition of test program parameters and test requirements, is necessary to effectively tailor the test program solution to the particular project. Communication and analysis are key to selecting the right mix of test strategies to support the accomplishment of test goals and objectives.

• Test planning is not a single event, but rather a process. The test plan is the document that guides test execution through to conclusion, and it needs to be updated frequently to reflect any changes. The test team should refer to the test plan often during the performance of testing for the project.

References

1. Adapted from ANSI/IEEE Std 1008–1987.

2. Rational Unified Process 5.0. Jacobson, I., Booch, G., Rumbaugh, J. The Unified Software Development Process. Reading, MA: Addison-Wesley, 1998.

3. Humphrey, W.S. Managing the Software Process. Reading, MA: Addison-Wesley, 1989.
