Chapter 7. Test Analysis and Design

Automated testing involves a mini-development life cycle.

image

An effective test program that incorporates the automation of software testing has a development life cycle of its own. This development effort comes complete with its own strategy and goal planning, test requirement definition, analysis, design, and coding. Like software application development, the test development effort requires careful analysis and design.

With respect to the overall test program, the test effort can be classified into two primary categories: static and dynamic testing. Test strategies supporting the static test category were described in Chapter 4 and include various reviews, inspections, and walkthroughs. This chapter focuses on dynamic testing—its definition and the associated requirements and/or use case analysis and design. Dynamic testing consists of the implementation of test techniques that involve the development and execution of test procedures designed to validate requirements; various verification methods are employed in the requirements validation process.

This chapter describes several approaches to test requirements analysis, including various techniques to derive test requirements from the various application requirements (that is, business requirements, functional requirements, design requirements, sequence diagrams) and/or use case specifications. Test requirements statements should clearly outline test conditions that will provide the highest probability of finding errors. Test requirements analysis also involves the review of the system’s most critical success functions and high-risk functionality, as part of risk management. Test requirements statements should specify attributes of the most critical success functions and high-risk functionality, and testing should then verify that those requirements have been met.

In this chapter, methods for modeling the design of the (dynamic) test program are depicted. The approach to test design should ensure that the test effort verifies system requirements or use cases, increasing the probability that the system will actually succeed at what it is supposed to do. White-box and black-box test techniques that can be used in the test program design are outlined. White-box testing addresses the verification of software application internals, while black-box testing applies to the verification of application externals.

Test procedure definition is addressed along with parts of the requirements traceability matrix, which maps test procedures to test requirements (see Table 6.4 on page 210). The test engineer is reminded that these matrices can be automatically generated using a requirements management (RM) tool, such as DOORS. Test procedure definition involves the specification of the number and kinds of test procedures that will be carried out. During test procedure definition, it is important to identify the various test procedures that will exercise the test conditions mandated in the test requirements statements.

This chapter also addresses the need to analyze whether a test procedure should be performed manually or via an automated test tool, as well as the need to associate test data requirements with test procedures. Other topics considered include test procedure standardization and test procedure management. To facilitate test procedure development, test procedure design standards need to be established and then followed. Such standards promote the development of reusable, modular, maintainable, robust, and uniform test procedures.

All of these activities and issues pertain to the test analysis and design process. The progressive steps inherent in this process are outlined in Table 7.1.

Table 7.1. Test Analysis and Design Process

image

7.1 Test Requirements Analysis

Similar to the process followed in software application development, test requirements must be specified before test design is constructed. Such test requirements need to be clearly defined and documented, so that all project personnel will understand the basis of the test effort. They are derived from requirements statements as an outcome of test requirements analysis. This analysis, which is aimed at identifying the different kinds of tests needed to verify the system, can be undertaken by studying the system from several perspectives, depending on the testing phase.

As already mentioned in Chapter 6, one perspective includes the study of the system design or the functional process flow inherent in user operations, also known as the structural approach. It relies on unit and integration tests, also referred to as white-box tests. Test requirements analysis at the development test level is primarily based upon review of design specifications.

Other perspectives involve a review of system requirements or use cases, also referred to as the requirements or behavioral approach. This test requirements analysis pertains to testing performed in support of the system as a whole. This level, referred to as the system test level, most often involves black-box testing. The system test level consists of system and acceptance tests. Analysis at the system test level is geared primarily toward system (or business) requirements.

An alternative way of studying the system is to review its critical success and highest-risk functions. The extra effort made to review these important functions can yield insights that facilitate the creation of test requirements statements. Such thorough test requirements help ensure that the test team will adequately exercise these functions so that they operate properly when the system is eventually deployed.

Note, however, that the scope of test requirements that apply to the particular system needs to be limited. The test requirements definition is bounded by several parameters, including the description of the system and the system requirements definition. Other parameters include the defined scope of the test program and the test goals and objectives. Test requirements statements become the blueprint that enables the test team to design the next progressive step in the test analysis and design process by specifying a detailed outline of what is to be tested. A preliminary association of test requirements statements to test techniques is developed, and a test program model is created depicting the scope of test techniques that apply on the project.

The remainder of this section considers the test requirements analysis effort at the two test levels (the development and system levels).

7.1.1 Development-Level Test Analysis (Structural Approach)

An analysis should be performed to identify the test requirements that pertain to the development test effort. This analysis at the unit and integration testing level is also known as structural analysis.

The design-based structural analysis method requires the review of the detailed design and/or the software code. It emphasizes software module input and output. The resulting test requirements statements are based on examination of the logic of the design or, where a detailed design is not available, of the software code itself. The design-based analysis method addresses test requirements from a white-box view, which is concerned with actual processing activities such as control, logic, and data flows. Design-based analysis may also be referred to as structural coverage, reflecting its focus on the inherent structure of the software code.

DO-178B, an international avionics standard produced by RTCA, defines three different approaches for design-based test requirements analysis: statement coverage, decision coverage, and modified condition/decision coverage (MC/DC). The statement coverage approach invokes every statement in a program at least once. In decision coverage, every point of entry and exit in the program is invoked at least once and every decision in the program takes on all possible outcomes at least once. MC/DC is a stronger structural coverage criterion that additionally requires each condition (term) within a decision (expression) to be shown, by execution, to independently affect the outcome of that decision.
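To make the three coverage criteria concrete, the sketch below applies them to a hypothetical decision, `a and (b or c)`; the function and the test values are illustrative, not taken from the standard.

```python
# A minimal sketch of the DO-178B structural coverage criteria, applied to
# a hypothetical decision with three conditions: a and (b or c).
def authorize(a: bool, b: bool, c: bool) -> bool:
    """Decision under test: True when a and (b or c)."""
    return a and (b or c)

# Statement coverage: any single call executes every statement here.
# Decision coverage: the decision must evaluate to both True and False.
decision_tests = [(True, True, False), (False, False, False)]
assert authorize(*decision_tests[0]) is True
assert authorize(*decision_tests[1]) is False

# MC/DC: each condition must be shown to independently flip the decision's
# outcome while the other conditions are held fixed.
mcdc_pairs = {
    "a": [(True, True, False), (False, True, False)],  # toggling a flips the result
    "b": [(True, True, False), (True, False, False)],  # toggling b flips the result
    "c": [(True, False, True), (True, False, False)],  # toggling c flips the result
}
for condition, (case_true, case_false) in mcdc_pairs.items():
    assert authorize(*case_true) != authorize(*case_false), condition
```

Note that MC/DC is satisfied here with four distinct test cases, far fewer than the eight needed for exhaustive truth-table testing of three conditions.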

The strength of these test requirements development approaches lies in the fact that the resulting test procedures will provide step-by-step instructions that are detailed down to keystroke entry; quite often, these instructions eventually become the structure for the system operation and user manuals. Such design-based requirements analysis investigates whether test procedures need to be developed for both source and object code, in an effort to find errors introduced during design, coding, compiling, linking, and loading operations.

A potential weakness of this kind of test requirements development approach is that, because the design itself is being exercised, the test engineer is not viewing test requirements from a system-high level. This method requires that all entries and exits of the software units be tested. This level of activity can require a large number of test procedures, consume a great deal of time, and demand the work of a large number of personnel.

If the test team does plan testing at the development level, it is worthwhile to analyze test requirements that specify the need for exercising software under abnormal conditions. The test team needs to phrase test requirements statements so as to call for attempts to drive arrays out of bounds, drive loops through too many iterations, and drive input data out of range. The goal of these test requirements is to uncover errors by forcing the software to operate under unintended or abnormal conditions.
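A minimal sketch of such abnormal-condition requirements in executable form, assuming a hypothetical `moving_average` unit; the tests deliberately drive the loop through zero and too many iterations and the input data out of range.

```python
# Abnormal-condition testing sketch: moving_average is a hypothetical unit.
def moving_average(values, window):
    if window <= 0 or window > len(values):
        raise ValueError("window out of range")
    return [sum(values[i:i + window]) / window
            for i in range(len(values) - window + 1)]

# Drive the loop through zero iterations, negative bounds, and too many
# iterations; the unit must reject each abnormal condition cleanly.
for bad_window in (0, -1, 99):
    try:
        moving_average([1, 2, 3], bad_window)
        raise AssertionError("expected ValueError for window=%d" % bad_window)
    except ValueError:
        pass  # the unit rejected the abnormal condition, as required

# Normal operation remains correct.
assert moving_average([1, 2, 3, 4], 2) == [1.5, 2.5, 3.5]
```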

When analyzing development-level test requirements, the test team might also derive test requirements by analyzing the program logic pertaining to decision trees. The test engineer can examine the code entry and exit conditions and derive test requirements intended to ensure that every point of entry and exit in the program is invoked at least once. Test requirements can include statements pertaining to testing every condition for a decision within a software program.

7.1.1.1 Development Test Requirements Matrix

The test requirements for development-level testing should be defined and entered into a requirements traceability database or matrix. Table 7.2 outlines parts of such a matrix. Within this database or matrix, each test requirement should be associated with a system architecture component or a design component identification number. The architecture component is then traced to detailed software requirements (SWR ID) and to system requirements (SR ID) or use cases. Where no detailed software requirements have been defined, the architecture component is traced to system requirements or use cases. Requirements management allows the test engineer to maintain these traceability matrices in an automated fashion.

Table 7.2. Development Test Requirements Matrix

image

image

In Table 7.2, each test requirement (TR) is linked to a requirement statement, and each requirement statement is assigned a test requirement identification (TR ID) number. Once test requirements have been defined, the test team should make a preliminary decision about the test technique that can best handle each requirement.

Some test requirements may be derived as a result of analysis, and thus may not specifically relate to a software or system-level requirement. In this case, the entries for SR ID and SWR ID in Table 7.2 are cited as “various.” The derived test requirement may also pertain to a number of system architecture components; in this case, the entry in the “Architecture Component” column is also “various.” The test team may also wish to note in this column that the test requirement was derived as a result of test analysis.
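When no RM tool is available, such a matrix can be kept as plain data. The sketch below is illustrative only: the field names mirror Table 7.2, and the identifiers and requirement statements are hypothetical.

```python
# A minimal sketch of a development test requirements matrix as plain data.
requirements_matrix = [
    {"tr_id": "TR-001", "architecture_component": "SM-06",
     "swr_id": "SWR-120", "sr_id": "SR-030",
     "statement": "Verify system management start-up sequencing."},
    {"tr_id": "TR-002", "architecture_component": "various",
     "swr_id": "various", "sr_id": "various",
     "statement": "Derived from test analysis: exercise error paths."},
]

def trace(tr_id):
    """Follow one test requirement back to its software/system requirements."""
    row = next(r for r in requirements_matrix if r["tr_id"] == tr_id)
    return row["swr_id"], row["sr_id"]

# TR-001 traces to specific requirements; TR-002 is a derived requirement,
# so its SWR/SR entries are cited as "various".
assert trace("TR-001") == ("SWR-120", "SR-030")
assert trace("TR-002") == ("various", "various")
```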

7.1.2 System-Level Test Analysis (Behavioral Approach)

Test analysis also needs to be performed to identify the test requirements pertaining to the system test effort. Test requirements analysis at the system test level is also known as the behavioral approach.

The requirements-based analysis method requires the review of system/software requirement specifications. The emphasis with this analysis method is on test input and expected output. The resulting test requirements statements are based on examination of the system/software requirements specifications. The requirements-based analysis method approaches test requirements from a black-box-based perspective. It is aimed at deriving test requirements that will generate test procedures intended to show that the software performs its specified functions under normal operating conditions.

Another approach for deriving test requirements includes the use of functional threads. Test requirements created via this method result from analysis of the functional thread of the high-level business requirements. The test engineer reviews the high-level business process by examining the results of enterprise business process reengineering or by employing use case analysis.

Business process reengineering (BPR) is a structured method for analyzing the procedures that a business entity uses to accomplish its goals and redesigning the business’s processes with a focus on the end user of its products. Consequently, rather than simply providing computer systems to support the business practices as they currently exist, BPR examines the entire scope of the processes, determining the flow of information through components of the business and suggesting major improvements. Frequently, the reengineering process incorporates an infusion of new technologies and automated information systems.

Use case analysis, on the other hand, is a way of modeling requirements and a requirements analysis methodology that involves the development and subsequent analysis of a number of mini-scenarios, which themselves exercise combinations of the system-level requirements. Each use case has a defined starting point, set of discrete steps, and defined exit criteria. The use case construct defines the behavior of a system or other semantic entity without revealing its internal structure. Each use case specifies a sequence of actions, including variants, that the entity can perform by interacting with actors of the entity.

Derived requirements elicitation and the identification of system capabilities to be developed are accomplished by analyzing each step in the use case. This analysis includes the identification of what the user does, what the software does, what the user interface does, and what the database needs to do to support the step. The accumulation of the various user actions facilitates the development of training and user documentation as well as the design of test procedures.
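The elements of use case analysis described above might be modeled as plain data structures; the sketch below is illustrative, with hypothetical names, and records for each step what the user, the software, the user interface, and the database must do.

```python
# An illustrative model of a use case and its discrete steps.
from dataclasses import dataclass, field

@dataclass
class Step:
    user_action: str
    software_action: str
    ui_action: str
    database_action: str

@dataclass
class UseCase:
    name: str
    start: str            # defined starting point
    exit_criteria: str    # defined exit criteria
    steps: list = field(default_factory=list)

login = UseCase(
    name="Log in",
    start="Login screen displayed",
    exit_criteria="Main menu displayed",
)
login.steps.append(Step(
    user_action="Enter user ID and password",
    software_action="Validate credentials",
    ui_action="Mask password field",
    database_action="Look up account record",
))

# Each recorded step yields candidate test requirements, training material,
# and user documentation items.
assert login.steps[0].software_action == "Validate credentials"
```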

An auxiliary approach for deriving test requirements at the system test level involves the review of critical success and high-risk functions of the system. While it is important to outline test conditions that provide the highest probability of finding errors, it is also crucial to find errors in critical and high-risk functionality. Table 7.3 provides an example of a listing of critical and high-risk functionality that might be included in a test plan. Each defined risk is ranked, from the most critical risk to the least critical risk. The test engineer would analyze these functions in more detail and outline test requirements that would later result in test procedures intended to verify the comprehensive operation of the functionality.

Table 7.3. Critical/High-Risk Functions

image

As noted, it is important that test requirements statements also outline test conditions that provide the highest probability of finding errors. Test requirements statements should specify input values so that the test team can generate tests that utilize out-of-range input data. The goal of these test requirements is to uncover errors by forcing the software to exercise unintended or abnormal conditions.
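One common way to specify such input values is boundary-value analysis: for each bound, test just outside, on, and just inside it. The sketch below is illustrative; the valid range of 1 to 100 is hypothetical.

```python
# A sketch of deriving boundary and out-of-range input values for a
# black-box test requirement.
def boundary_values(low, high):
    """Classic boundary-value set: just outside, on, and just inside each bound."""
    return [low - 1, low, low + 1, high - 1, high, high + 1]

valid_range = (1, 100)
inputs = boundary_values(*valid_range)
assert inputs == [0, 1, 2, 99, 100, 101]

# The out-of-range values (0 and 101) should force the application down
# its error-handling path; the remaining values should be accepted.
out_of_range = [v for v in inputs if not (valid_range[0] <= v <= valid_range[1])]
assert out_of_range == [0, 101]
```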

7.1.2.1 System Test Requirements Matrix

The test requirements supporting system-level testing need to be defined and reflected in a matrix like that depicted in Table 7.4. Each test requirement is correlated with a system requirement. System requirements are reflected in the first column of Table 7.4, through the entry of the associated system requirement identification (SR ID) number. Test requirements may also be referenced to the associated system architecture component.

Table 7.4. System Test Requirements Matrix

image

image

Each test requirement (TR) is linked to a requirement statement, and each requirement statement is assigned a test requirement identification (TR ID) number. Once test requirements have been defined, the test team should make a preliminary decision about the test technique that will best meet the test requirement. Even though this effort may look like a lot of time spent cross-referencing different numbers, an RM tool, such as DOORS, can provide automated maintenance.

7.2 Test Program Design

Much as in any software development effort, the test program must be mapped out and consciously designed to ensure that the test activities performed represent the most efficient and effective tests for the system. Test program resources are limited, yet the ways of testing the system are endless. This section addresses ways of graphically portraying the test program design so as to give project and test personnel a mental framework for the boundary and scope of the test program.

7.2.1 Test Program Design Models

Test program design activities follow test analysis exercises. The results of test analysis include definition of test goals and objectives, the selection of verification methods and their mapping to system requirements or use cases, and creation of test requirements statements. The test requirements are then mapped to system requirements or use cases and/or system design components, depending upon the relevant test life-cycle phase. After this mapping is complete, a preliminary association of test requirements to test techniques is developed.

Armed with a definition of test requirements and an understanding of the test techniques that may be suited to the particular project, the test team is then ready to develop the test program design models. The first of these design models consists of a test program model. The test program model consists of a graphic illustration that depicts the scope of the test program. At a minimum, it includes test techniques that will be employed at the development test and system test levels. The model may also outline static test strategies that are utilized throughout the application development life cycle. In addition, it may explicitly identify verification methods other than testing that will be employed during development- and system-level tests.

Figure 7.1 depicts a sample test program model. This figure defines the various test techniques, some of which will be implemented on a particular test program, depending on the test requirements. The test engineers on a project would develop this model by documenting the test techniques identified during the test analysis effort. The test team would start with a template model that included a laundry list of test techniques that could be applied to any given test program and then pare the list down to match the results of the test analysis. Working from a template saves the test team time and effort.

Figure 7.1. Test Program Model

image

Having defined a test program model, the test team’s next task is to construct a test architecture for the particular project. The test architecture consists of a graphic illustration depicting the structure of the test program. The objective is to define the way that test procedures will be organized within the test effort.

The structure of the test program is commonly portrayed in two different ways. One test procedure organization method involves the logical grouping of test procedures with the system application design components; it is referred to as a design-based test architecture. The second method associates test procedures with the various kinds of test techniques represented within the test program model; it is referred to as a technique-based test architecture.

In both architecture models, a distinction is made between the architecture that applies to the development test level and the architecture that applies to the system test level. The test architecture for a test program may, however, be represented by a combination of design- and technique-based approaches. For example, the architecture that applies to the development test level might be design-based, while the architecture that applies to the system test level might be technique-based. Refer to Section 7.3.1 or Appendix D for an example of a hybrid test architecture that reflects both the design-based and technique-based approaches.

7.2.1.1 Design-Based Test Architecture

The design-based test architecture associates test procedures with the hardware and software design components of the system application. The logic of this model stems from the understanding that the hardware and software design components can be traced to system and software requirements specifications. Additionally, test requirements statements can be traced to design components, as well as to system and software requirements specifications. Therefore, the test team can refer to a requirements traceability matrix to ascertain the test techniques that correlate to each of the various design components, as portrayed by the design-based test architecture depicted in Figure 7.2.

Figure 7.2. Design-Based Test Architecture

image

The test architecture in Figure 7.2 provides the test team with a roadmap for identifying the different test procedures needed to support each design component. The various design components are represented by their associated design identification numbers. The identifier SM-06, for example, stands for a design component called system management; this design component is the sixth component within the project’s software architecture diagram. Only four design components are included in Figure 7.2. The right-hand column is entitled “Others,” signifying that other design components depicted within the project’s software architecture diagram would be represented in a column of the test architecture diagram.

With the construction of the design-based test architecture, the test team gains a clear picture of the techniques that will be employed in testing each design component. The test team can now readily define all corresponding test procedures for the design component by referring to the requirements traceability matrix.

7.2.1.2 Technique-Based Test Architecture

The technique-based test architecture associates the requirement for test procedures with the test techniques defined within the test program model. The rationale for this model stems from the understanding that the test procedures supporting a particular test technique are logically coupled. Additionally, test techniques have already been traced to test requirements statements within a requirements traceability matrix.

Only four test techniques are included in the technique-based test architecture depicted in Figure 7.3. The right-hand column is entitled “Others,” signifying that other test techniques that apply to the test effort should be listed within a column of the test architecture diagram.

Figure 7.3. Technique-Based Test Architecture

image

The technique-based test architecture provides the test team with a clear picture of the test techniques to be employed. The test team can then identify the test requirements associated with each test technique by referring to the requirements traceability matrix. They can also readily define the test procedures that correspond to the applicable test techniques and the associated test requirements.

7.2.1.3 Effective Test Program Design

The overall test program design involves both dynamic and static test activities, as shown in Figure 7.1. Effective use of test engineers’ time to support static test activities can greatly improve system application design and development. The development of a quality system application design can streamline the test effort, as can an effective test design. An effective test design enables the test team to focus its efforts where they are most clearly needed.

The test architecture serves as a roadmap for the dynamic test effort. The dynamic test effort, in turn, facilitates the development- and system-level test efforts. These two test efforts are primarily based on the use of white-box and black-box approaches to testing. The white-box approach focuses on application “internals,” while the black-box approach concentrates on “externals.” Testing performed at the development and system levels may employ one of these approaches or a combination of both approaches.

To develop the test program design models described here, test personnel need to be familiar with the test techniques associated with the white-box and black-box testing approaches. Table 7.5 provides an overview of these two test approaches.

Table 7.5. White Box/Black Box Overview

image

image

7.2.2 White-Box Techniques (Development-Level Tests)

Many books have been written about the various white-box and black-box testing techniques [1]. This book does not offer a full review of all such test techniques, but instead focuses on automated testing. It is important to note that an understanding of the most widely used test techniques is necessary when developing the test design. This section covers several widely used development-level test techniques, and Section 7.2.3 discusses system-level test techniques.

White-box testing techniques are aimed at exercising internal facets of the target software program. These techniques do not focus on identifying syntax errors, as a compiler usually uncovers these types of defects. Instead, the white-box techniques perform tests that seek to locate errors that are more difficult to detect, find, and fix. That is, they attempt to identify logical errors and verify test coverage.

Test procedures associated with the white-box approach make use of the control structure of the procedural design. They provide several services, including the following:

  1. They guarantee that all independent paths within a module have been exercised at least once.
  2. They exercise all logical decisions on their true and false sides.
  3. They execute all loops at their boundaries and within their operational bounds.
  4. They exercise internal data structures to ensure their validity.
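As a minimal sketch of the services above, the driver below exercises a hypothetical unit's decision on both its true and false sides and its loop at the boundaries of zero, one, and many iterations.

```python
# A sketch of a white-box test driver for a small hypothetical unit.
def count_positives(values):
    count = 0
    for v in values:          # loop under test
        if v > 0:             # decision under test
            count += 1
    return count

# Loop boundaries: zero, one, and many iterations.
assert count_positives([]) == 0
assert count_positives([5]) == 1
assert count_positives(list(range(-3, 4))) == 3  # positives are 1, 2, 3

# Both outcomes of the decision exercised in a single pass.
assert count_positives([-1, 1]) == 1
```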

White-box testing usually includes the unit test strategy, which involves testing at the module or function level in a program, where testing focuses on the internal paths of the module. This type of testing is also called clear-box or translucent-box testing, because the individual performing the test has insight into the operation of the program code and can see the program's internal workings. This approach to testing is referred to as a structural approach as well.

This level of testing examines the control flow (each path) performed at the unit level. Test drivers are used to guarantee that all paths within a module have been exercised at least once, all logical decisions have been exercised with all possible conditions, loops have been exercised on upper and lower boundaries, and internal data structures have been exercised.

7.2.2.1 Descriptions of White-Box Techniques

The white-box techniques discussed here are described in only brief detail. Most of these techniques can be automatically executed using the applicable testing tools. For a more complete description of white-box testing techniques, refer to books that focus on software test techniques.

Fault Insertion

Fault insertion involves forcing return codes to indicate errors and seeing how the code behaves. It represents a good way to simulate certain events, such as a full disk or exhausted memory. A popular method involves replacing malloc() with a function that returns a NULL value 10% of the time to see how many crashes result. A related technique, erroneous input testing, checks the processing of both valid and invalid inputs. Testers may also select values that exercise the range of input/output (I/O) parameters, as well as values outside the boundary range of those parameters.

Fault insertion also offers a way to measure the effectiveness of the tests being performed. In general, a defect is purposely inserted into the application being tested at a particular point where it is expected to cause a single test to fail. Following the insertion of the defect, the entire suite of tests is rerun and the results are reviewed; if no test fails, the suite is evidently unable to detect that class of defect.
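The allocator-replacement idea can be sketched as follows. Here `faulty_alloc` and `load_records` are hypothetical stand-ins, and the inserted fault is made deterministic (every tenth call fails) so that the test is repeatable.

```python
# Fault insertion sketch: a wrapper allocator that fails on every tenth
# call, simulating an out-of-memory event.
calls = {"n": 0}

def faulty_alloc(size):
    calls["n"] += 1
    if calls["n"] % 10 == 0:    # inserted fault: every tenth allocation fails
        raise MemoryError("simulated allocation failure")
    return bytearray(size)

def load_records(count):
    """Code under test: must survive allocation failures gracefully."""
    records, dropped = [], 0
    for _ in range(count):
        try:
            records.append(faulty_alloc(64))
        except MemoryError:
            dropped += 1         # degrade gracefully instead of crashing
    return records, dropped

# Of 30 allocations, calls 10, 20, and 30 fail; the code must not crash.
records, dropped = load_records(30)
assert len(records) == 27 and dropped == 3
```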

Unit Testing

To verify that software code performs adequately and correctly implements detailed design, unit tests are conducted while the code is being generated for each software unit. Unit testing verifies that the new code matches the detailed design; checks paths through the code; verifies that screens, pull-down menus, and messages are formatted properly; checks inputs for range and type; and verifies that each block of code generates exceptions or error returns when appropriate. Each software unit is tested to ensure that the algorithms and logic are correct and that the software unit satisfies the requirements and functionality assigned to it. Errors documented as a result of unit testing can include logic, overload or overflow of range, timing, and memory leakage detection errors.
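A minimal sketch of such a unit test, using a hypothetical `format_quantity` unit and Python's standard unittest framework; the range and type checks mirror the verification activities listed above.

```python
# Unit test sketch: verify range checks, type checks, and error returns.
import unittest

def format_quantity(n):
    """Unit under test: format an integer quantity as a four-digit string."""
    if not isinstance(n, int):
        raise TypeError("quantity must be an integer")
    if not 0 <= n <= 9999:
        raise ValueError("quantity out of range")
    return f"{n:04d}"

class FormatQuantityTest(unittest.TestCase):
    def test_formats_in_range(self):
        self.assertEqual(format_quantity(42), "0042")

    def test_rejects_out_of_range(self):
        with self.assertRaises(ValueError):
            format_quantity(10000)

    def test_rejects_wrong_type(self):
        with self.assertRaises(TypeError):
            format_quantity("42")

unittest.main(exit=False, argv=["unit"])
```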

Error Handling Testing

This technique acknowledges the fact that it is nearly impossible to test for every possible error condition. For this reason, an error handler can smooth the transition when an unexpected error occurs. The test engineer needs to ensure that the application returns error messages properly. For example, an application that returns a message indicating a system error generated by middleware (for example, a “Common Object Request Broker Architecture [CORBA] user exception error”) has little value to the end user or the test engineer.

Error handling tests seek to uncover instances where the system application does not return a descriptive error message, but instead crashes and reports a runtime error. This testing ensures that errors are presented and dealt with properly in the target application. An efficient error handler will act at the function or module level. The error handling functionality therefore needs to be tested at the development test level. Of course, such tests cannot uncover all possible errors, but some invalid values can be passed to the function to test the performance of error handling routines.
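A sketch of this idea follows: a hypothetical middleware exception is caught at the function level and translated into a descriptive, user-level message rather than surfacing as a raw runtime error. The exception class and repository string are stand-ins, not a real CORBA binding.

```python
# Error handling sketch: translate an opaque middleware failure into a
# descriptive, user-level error at the function level.
class MiddlewareError(Exception):
    """Stand-in for an opaque middleware exception (e.g., a CORBA user exception)."""

def fetch_account(account_id):
    # Simulated low-level failure with an unhelpful internal message.
    raise MiddlewareError("IDL:omg.org/CORBA/UNKNOWN:1.0")

def fetch_account_safely(account_id):
    try:
        return fetch_account(account_id)
    except MiddlewareError:
        # Return a descriptive, actionable message instead of crashing.
        raise RuntimeError(
            f"Account {account_id} is temporarily unavailable; "
            "please retry or contact support.") from None

try:
    fetch_account_safely("A-100")
except RuntimeError as err:
    message = str(err)

assert "temporarily unavailable" in message
```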

Memory Leak

The memory leak test technique focuses on the execution of the application, attempting to find instances where the application is not releasing or freeing up allocated memory and, as a result, experiences performance degradation or a deadlock condition. The application of this test technique is valuable in program debugging as well as in testing a complete software release. Tools are available to facilitate the execution of the memory leak test technique, which may track the application’s memory usage over a period of hours or days to see whether memory consumption continues to increase. These tools may also be able to identify the program statements where allocated memory is not released following use of the memory space.
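As a sketch, Python's standard tracemalloc module can demonstrate the pattern such tools look for: memory consumption that keeps increasing over repeated operations. The leaking `handle_request` function here is contrived for illustration.

```python
# Memory leak detection sketch using the standard tracemalloc module.
import tracemalloc

_cache = []          # deliberate leak: references are never released

def handle_request():
    _cache.append(bytearray(10_000))   # allocated memory is never freed

tracemalloc.start()
before, _ = tracemalloc.get_traced_memory()
for _ in range(100):
    handle_request()
after, _ = tracemalloc.get_traced_memory()
tracemalloc.stop()

growth = after - before
# One hundred retained 10 KB buffers should show roughly 1 MB of growth;
# sustained growth like this is the signature of a leak.
assert growth > 900_000, f"expected sustained growth, saw {growth} bytes"
```

In a real test, the same measurement would be repeated over hours or days, and `tracemalloc.take_snapshot()` could identify the program statements responsible for the retained allocations.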

Integration Testing

The purpose of integration testing is to verify that each software unit interfaces correctly with the other software units. Integration testing may utilize a top-down or bottom-up structured technique: in the bottom-up approach, the leaf modules are integrated with the next higher-level modules, while in the top-down approach, top-level modules are integrated with the next lower-level modules, until the entire software tree has been constructed. This testing technique examines not only the parameters passed between two components, but also the global parameters and, in the case of object-oriented applications, all high-level classes.

Each integration test procedure consists of a high-level test script that simulates a user performing a defined task by invoking the lower-level unit tests with the requisite parameters to exercise the interface. Units are incrementally integrated and tested together based upon control flow, after all unit test problem reports have been resolved. Because units may consist of other units, some integration testing may take place during unit testing. When unit test scripts have been developed using an automated test tool, these scripts may be combined and new scripts added to test module interconnectivity.
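A high-level script of this kind can be sketched as follows. The two lower-level units (`validate_order` and `compute_total`) are hypothetical stand-ins; the point is that the integration test exercises the interface between them, checking the parameters one unit passes to the next.

```python
# Hypothetical lower-level units, each already covered by unit tests.
def validate_order(items):
    if not items:
        raise ValueError("Order must contain at least one item")
    return [(name, qty) for name, qty in items if qty > 0]

def compute_total(validated, unit_price=2.5):
    return sum(qty for _, qty in validated) * unit_price

# Integration test: invokes the units with the requisite parameters
# and verifies the data handed across the interface.
def test_order_pipeline():
    validated = validate_order([("widget", 3), ("gadget", 0)])
    assert validated == [("widget", 3)]     # zero-quantity items filtered out
    assert compute_total(validated) == 7.5  # 3 * 2.5

test_order_pipeline()
```

When the unit tests were built with an automated tool, scripts like this are often produced by combining the existing unit scripts and adding interconnectivity checks.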

Integration test procedures are executed and refined as necessary, and trouble reports are documented and tracked. Trouble reports are generally classified in a range of 1 to 4, based on their degree of severity (1 being the most critical and 4 being the least). After handling these trouble reports, test engineers perform regression testing to verify that problems have been completely resolved.

String Testing

String testing involves the examination of a related group of modules that constitute a software function. Also known as module testing, it ensures sufficient testing of a system’s components. It determines whether modules work successfully to form a cohesive unit and whether the software unit produces accurate and consistent results.

A module consists of one or more functions. String testing concentrates on the interactions of the functions. The parameters passed from function to function within the module are tested for correctness in terms of data type and data validity. This type of testing assesses the soundness of the programmer’s assumptions, as each function in the module is examined for errors and completeness. The coding standards that apply to the development effort can be verified at this time.

During module/function-level testing, the test team verifies that each program statement executes at least once. This program statement execution takes into account all loops and conditional statements. Each conditional relationship is tested for all resulting conditions—that is, it is tested for all possible valid and invalid values. Each boundary condition is considered to either pass or fail. State changes are predicted, and the test ensures that the appropriate trigger is fired. In addition, the team determines whether individual edits are working by testing with valid and invalid values.

Coverage Analysis

When selecting a coverage analysis tool, it is important that the test team analyze the type of coverage required for the application. Coverage analysis can be obtained using many different techniques, some of which are described here.

Statement Coverage

The statement coverage technique is often referred to as C1, which also denotes node coverage. This measure reports whether each executable statement is encountered. It verifies coverage only at this statement level, without examining decision outcomes or Boolean expressions. It offers an advantage in that this measure can be applied directly to object code and does not require processing source code. Performance profilers commonly implement this testing technique.

The statement coverage technique requires that every statement in the program be invoked at least once. A weakness of this technique is that it does not verify decisions (paths/results) but is affected by computational statements [2]. Another weakness is that the technique does not test Boolean expressions thoroughly, and source code coverage does not assure object code coverage [3]. Decisions need to be tested to validate the design: checking, for example, that a logical operator is correct. Such checks include whether an and operator should actually be an or, and whether a comparison should read greater than or equal instead of greater than.

Inputs necessary to support the design of test procedures include design documents and source code listings. It is important that enough test procedures be developed to execute every statement at least once [4].
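The weakness noted above, namely that statement coverage does not verify decisions, can be shown with a small hypothetical routine; the discount logic below is invented for the illustration.

```python
# Hypothetical discount routine with a single decision.
def apply_discount(price, is_member):
    discount = 0
    if is_member:
        discount = 10
    return price - discount

# One test procedure with is_member=True executes every statement,
# achieving 100% statement (C1) coverage...
assert apply_discount(100, True) == 90

# ...yet only a decision-oriented test forces the False outcome,
# which statement coverage alone never requires:
assert apply_discount(100, False) == 100
```

If the False branch contained a defect (for example, an uninitialized `discount`), a suite satisfying only statement coverage could miss it entirely.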

Decision Coverage

The decision coverage test technique seeks to identify the percentage of all possible decision outcomes that have been exercised by a suite of test procedures. The decision coverage technique is sometimes referred to as branch coverage and is denoted as C2. It requires that every point of entry and exit in the software program be invoked at least once. It also requires that all possible conditions for a decision in the program be exercised at least once. The technique further requires that each decision in the program be tested using all possible outcomes at least once [5].

A weakness of the decision coverage test technique pertains to the fact that the tests performed are inadequate for high-integrity applications, because they do not ensure decision coverage of the object code. Additionally, an incorrect evaluation of a condition can be masked by other conditions. For example, logic errors are not necessarily visible.

Condition Coverage

Condition coverage is similar to decision coverage. It seeks to verify the accuracy of the true or false outcome of each Boolean subexpression. This technique employs tests that measure the subexpressions independently of each other. The results of these measures are similar to those obtained with decision coverage except that the former show greater sensitivity to the control flow.
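The masking weakness mentioned under decision coverage, and the added sensitivity of condition coverage, can be illustrated with a hypothetical access check. The `grant_access` function and its defect are fabricated for the example.

```python
# Hypothetical access check. Intended logic: is_admin AND has_token.
# Defect: the developer coded "or" instead of "and".
def grant_access(is_admin, has_token):
    return is_admin or has_token

# Decision coverage is satisfied by one True and one False outcome,
# and both tests pass, so the or/and defect is masked:
assert grant_access(True, True) is True
assert grant_access(False, False) is False

# Condition coverage varies each subexpression independently;
# this case exposes the defect (access should be denied here):
assert grant_access(True, False) is True  # wrong behavior, now visible
```

The two decision-coverage tests above cannot distinguish the buggy `or` from the intended `and`; only the condition-level test reveals the difference.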

Path Coverage

The path coverage technique seeks to verify whether each of the possible paths in each function has executed properly. A path is a set of branches of logic flow. Because loops introduce an unbounded number of paths, the path coverage technique employs tests that consider only a limited number of looping possibilities. Boundary-interior path testing considers two possibilities for loops: zero repetitions and more than zero repetitions [6].

The path coverage technique provides very thorough testing, but has two significant drawbacks. First, the number of possible paths that must be supported by test procedures may be prohibitively large and beyond the scope of most test programs. This drawback arises because the number of paths grows exponentially with the number of branches. Second, path coverage is very time-consuming. It should therefore be reserved for critical success functions.
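The exponential growth is easy to quantify: each independent two-way decision doubles the number of paths, so a routine with n sequential if-statements has 2**n paths. A one-line sketch makes the scale concrete.

```python
# Each independent two-way decision doubles the number of possible paths,
# so n sequential if-statements yield 2**n distinct execution paths.
def path_count(n_branches):
    return 2 ** n_branches

assert path_count(10) == 1024           # already over a thousand test procedures
assert path_count(30) == 1073741824     # far beyond any practical test program
```

This is why boundary-interior testing limits loops to two cases (zero and more-than-zero repetitions) rather than enumerating every iteration count.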

Data Flow Coverage

The data flow coverage test technique is a variation of path coverage. It seeks to incorporate the flow of data into the selection of test procedures. Such techniques are based on the selection of test path segments that satisfy some characteristic of the data flow for all possible types of data objects [7]. Test procedures derived from data flow analysis examine interactions involving definitions to program variables and subsequent references that are affected by these definitions [8].

Branch Coverage

Branch coverage measures the number of times that logical branches have been exercised for both true and false conditions. This analysis is used most often for detailed unit testing of systems.

7.2.2.2 Automated Tools Supporting White-Box Testing

When selecting white-box test techniques to include as part of the overall test program design, it is helpful to be familiar with the kinds of test tools that are available to support the development and performance of related test procedures. Table 7.6 matches white-box test techniques with various types of automated test tools. When selecting white-box test techniques and making commitments to use pertinent automated test tools, the test team should keep in mind the fact that the test tools may not be compatible with each other.

Table 7.6. White-Box Test Techniques and Corresponding Automated Test Tools

image

7.2.3 Black-Box Techniques (System-Level Tests)

Black-box testing is testing only via established, public interfaces such as the user interface or the published application programming interface (API). While white-box testing concerns itself with the program’s internal workings, black-box testing compares the application’s behavior against requirements. Additionally, the latter techniques typically seek to investigate three basic types of errors: those associated with the functional paths supported by the software, the computations performed by the software, or the range or domain of possible data values that can be executed by the software. At this level, the testers do not primarily concern themselves with the inner workings of the software components, though the inner components of the software are nevertheless exercised by default. Instead, the test team is concerned about the inputs and outputs of the software. In the context of this discussion, black-box testing is considered to be synonymous with system testing, although black-box testing can also occur during unit or integration testing.

User participation is important to black-box testing, as users are the people who are most familiar with the results that can be expected from the business functions. The correctness of data is key in successfully completing the system testing. Therefore, during the data generation phase, it is imperative that the end users give as much input as possible. Section 7.3.6.2 discusses black-box data definition.

Black-box testing attempts to derive sets of inputs that will fully exercise all functional requirements of a system. It is not an alternative to white-box testing. This type of testing attempts to find errors in many categories, including the following:

• Incorrect or missing functionality

• Interface errors

• Usability problems

• Errors in data structures or external database access

• Performance degradation problems and other performance errors

• Loading errors

• Multiuser access errors

• Initialization and termination errors

• Backup and recoverability problems

• Security problems

7.2.3.1 Black-Box Technique Descriptions

The black-box techniques outlined in this section are the most commonly used approaches.

Equivalence Partitioning

As noted in Chapter 4, exhaustive input testing typically is not possible. Instead, testing must be performed using a subset of all possible inputs.

Three basic types of equivalence classes apply when testing for range and domain errors: in-bound, out-of-bound, and on-bound situations. It is a good practice to develop test procedures that examine boundary cases plus/minus one, to avoid missing the “one too many” or “one too few” error. In addition to developing test procedures that utilize highly structured equivalence classes, the test team should perform exploratory testing. Test procedures that exercise valid values and are expected to execute successfully are called positive cases; test procedures that should result in an error when executed are referred to as negative cases.

One advantage of the equivalence partitioning technique is that it reduces the scope of exhaustive testing to a well-defined set of test procedures, as opposed to an ad hoc definition of test procedures. One disadvantage relates to the fact that the resulting test procedures do not include other types of tests that have a high probability of finding an error.
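The partitioning of a simple range requirement can be sketched as follows. The quantity field and its 1 to 100 rule are hypothetical, chosen only to show one representative value per class plus the on-bound and plus/minus-one cases.

```python
# Hypothetical requirement: a quantity field accepts integers 1..100.
def accept_quantity(qty):
    return isinstance(qty, int) and 1 <= qty <= 100

# One representative per equivalence class, with on-bound values
# and the plus/minus-one cases around each boundary.
in_bound     = [1, 50, 100]         # valid partition, including on-bound values
out_of_bound = [0, 101, -5, 1000]   # invalid partitions: boundary +/- 1 and beyond

for v in in_bound:
    assert accept_quantity(v)       # positive cases
for v in out_of_bound:
    assert not accept_quantity(v)   # negative cases
```

Seven test procedures stand in for the millions of integers that exhaustive testing would require, which is precisely the economy equivalence partitioning offers.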

Boundary Value Analysis [9]

Boundary value analysis can be applied to both structural and functional testing levels. Boundaries define three classes of data: good, bad, and on the border. Boundary testing uses values that lie in or on the boundary (such as endpoints), and maximum/minimum values (such as field lengths). The analysis should always include plus/minus one boundary values. Outside boundary testing uses a representative sample of data from outside the boundary of values—that is, invalid values. For example, data type tests should check for numeric and alphabetic values. Does the field accept numeric values only as specified or does it accept alphanumeric values?

It is a judgment call to select the representative sample so that it truly represents the intended range of values. This task is sometimes very difficult when numerous interrelationships exist among values. Consider using random samples of possible cases. When the number of possibilities is very large and possible results are very close in value, choose input values that give the largest variances in output; that is, perform sensitivity analysis.
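The field-length and data-type checks described above can be sketched with a hypothetical five-digit ZIP code field; the field definition is an assumption made for the example.

```python
import re

# Hypothetical field: a ZIP code that must be exactly five numeric digits.
def valid_zip(text):
    return bool(re.fullmatch(r"\d{5}", text))

# Boundary tests on field length and data type:
assert valid_zip("12345")        # on the boundary: exactly 5 digits
assert not valid_zip("1234")     # length minus one
assert not valid_zip("123456")   # length plus one
assert not valid_zip("12a45")    # alphanumeric value must be rejected
assert not valid_zip("")         # empty input, an outside-boundary sample
```

The alphanumeric case answers the data-type question posed above: the field accepts numeric values only, as specified.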

Cause/Effect Graphing [10]

Cause-effect graphing is a technique that provides a concise representation of logical conditions and corresponding actions. Four steps are involved in this technique. In the first step, the causes (input conditions) and effects (actions) for a module are identified by reading the functional specifications, and each cause and effect is assigned a unique identifier. In the second step, a cause-effect graph is developed: the causes are listed vertically on the left-hand side of a page, the effects are listed on the right-hand side, and part of the semantic content between the causes and effects is illustrated by directly and indirectly linking the causes to the effects with lines. The graph is then annotated with symbols representing Boolean expressions, which combine two or more causes associated with an effect. In the third step, the graph is converted to a decision table. In the fourth step, the decision table rules are converted to test procedures.
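The final two steps can be sketched in miniature. The login rule below is a hypothetical module invented for the illustration: its decision table has one rule per column of cause values, and each rule becomes one test procedure.

```python
# Hypothetical causes: C1 = "account exists", C2 = "PIN correct".
# Effects: E1 = "grant login", E2 = "show error".
# Decision table: one rule per row, each rule one test procedure.
rules = [
    # (C1,    C2,     expected effect)
    (True,  True,  "grant login"),
    (True,  False, "show error"),
    (False, True,  "show error"),
    (False, False, "show error"),
]

def login(account_exists, pin_correct):
    return "grant login" if account_exists and pin_correct else "show error"

# Step four: convert each decision-table rule into a test procedure.
for c1, c2, expected in rules:
    assert login(c1, c2) == expected
```

Real cause-effect graphs typically have many more causes and use the Boolean annotations to prune the table to a manageable rule set before this conversion.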

System Testing

“System testing” is often used as a synonym for “black-box testing,” because during system testing the test team concerns itself mostly with the application’s “externals.” System testing encompasses several testing subtypes, such as functional, regression, security, stress, performance, usability, random, data integrity, conversion, backup and recoverability, configuration, operational readiness, user acceptance, and alpha/beta testing.

Functional Testing

A functional test exercises a system application with regard to functional requirements with the intent of discovering nonconformance with end-user requirements. This test technique is central to most software test programs. Its primary objective is to assess whether the application does what it is supposed to do in accordance with specified requirements.

Test development considerations for functional tests include concentrating on test procedures that execute the functionality of the system based upon the project’s requirements. One significant test development consideration arises when several test engineers will be performing test development and execution simultaneously. When these test engineers are working independently and sharing the same data or database, a method needs to be identified to ensure that test engineer A does not modify or affect the data being manipulated by test engineer B, potentially invalidating the test results produced by test engineer B. Chapter 9 discusses ways to structure the test procedure execution schedule so as to avoid these issues.

Another test development consideration pertains to the organization of tests into groups related to a business function. Automated test procedures should be organized in such a way that effort is not duplicated. The test team should review the test plan and the test design to perform the following analyses:

• Determine the order or sequence in which specific transactions must be tested, either to accommodate database issues or as a result of control or workflow

• Identify any patterns of similar actions or events that are used by multiple transactions

• Review critical and high-risk functions so as to place a greater priority on tests of this functionality and to address associated test procedures early in the development schedule

• Create a modularity-relationship matrix (see Chapter 8)

These analyses will help the test team organize the proper sequence of test development, ensuring that test procedures can be properly linked together and played back in a specific order to permit contiguous flow of playback and operation of the target application.

Another consideration when formulating functional tests pertains to the creation of certain test procedures with the sole purpose of supporting screen navigation. Such test procedures are not intended to validate specific functional requirements, but reflect user interface actions. For example, the test team may record a procedure that navigates the application through several windows and concludes at the particular window under test. The test engineer may then record a separate procedure to validate the destination window. Such navigation test procedures may be shared and reused several times during the test development effort.

Regression Testing

The whole notion of conducting tests is aimed at finding and documenting defects and tracking them to closure. The test engineer needs to be sure that a fix applied to the software does not, in turn, create a new error in another area of the system. Regression testing determines whether any errors have been introduced during the error-fixing process. It is in the area of regression testing that automated test tools offer the largest return on investment. All previously developed scripts can be executed in sequence to verify that no new errors have been introduced by the changes made to fix another error. This goal is easily achieved because the scripts can be run without manual intervention and therefore can be executed as many times as deemed necessary to detect errors.

Test engineers often feel that once they have tested something manually and the software checks out, no further testing is required. In this instance, the test engineer does not take into consideration that a module change might have introduced a bug that affects a different module. Therefore, once the application is somewhat stable, the test team should focus on automating some or all regression tests, especially those involving high-risk functionality, repetitive tasks, and reusable modules. Regression tests may be reused many times during the development life cycle, including for the various new releases of the application-under-test. Some regression tests can also be reused for stress, volume, and performance testing.
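A regression suite replayed without manual intervention can be sketched as below. The `normalize_name` routine and its earlier empty-input defect are hypothetical, invented to show how previously developed checks accumulate and are rerun after every fix.

```python
# A minimal regression suite: previously developed checks are collected
# and replayed in sequence after every fix, with no manual intervention.
def normalize_name(name):
    return " ".join(name.split()).title()

regression_suite = [
    lambda: normalize_name("ada lovelace") == "Ada Lovelace",
    lambda: normalize_name("  grace   hopper ") == "Grace Hopper",
    lambda: normalize_name("") == "",   # guards an earlier empty-input defect
]

results = [check() for check in regression_suite]
assert all(results)   # no new errors introduced by the latest change
```

Because the suite runs unattended, it can be executed after every fix and on every new release, which is where the automation investment pays for itself.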

Security Testing

Security tests involve checks to verify the proper performance of system access and data access mechanisms. Test procedures are devised that attempt to subvert the program’s security checks. The test engineer uses security tests to validate security levels and access limits and thereby verify compliance with specified security requirements and any applicable security regulations.

Test development considerations for security tests include the creation of test procedures based upon security specifications. Depending upon the particular application, security testing might involve tests of a COTS product that is integrated into the application to support security.

Stress Testing

Stress testing involves the exercise of a system without regard to design constraints, with the intent to discover the as-built limitations of the system. These tests are performed when processing of transactions reaches its peak and steady loads of high-volume data are encountered. Stress testing measures the capacity and resiliency of the system on each hardware platform. In this technique, multiple users exercise specific functions concurrently and some use values outside of the norm. The system is asked to process a huge amount of data or perform many function calls within a short period of time. A typical example could be to perform the same function from all workstations simultaneously accessing the database.

Tools are available to support stress tests intended to ensure that an application performs successfully under various operating conditions. For example, tools can exercise the system by creating virtual users and allowing for an incremental increase of the number of end-user workstations that are concurrently exercising the application. Response times can be captured and logged by the tool throughout the progressive increase in user load on the system to track any performance degradation.

Automated tools can be applied to exercise the application system under high-stress scenarios, such as system operations involving complex queries, large query responses, and large data object retrievals. Other high-stress scenarios include an application being exercised for many hours and the concurrent operation of a large number of test procedures. Stress test tools typically monitor resource usage, including usage of global memory, DOS memory, free file handles, and disk space, and can identify trends in resource usage so as to detect problem areas, such as memory leaks and excess consumption of system resources and disk space.

Several types of stress testing exist. Unit stress tests generate stress on a single interface. Module stress testing verifies that business functions are processed in a common area. System stress testing involves loading the system using high-volume transactions. It can include data volume tests, where the test engineer verifies that the application can support the volume of data and the number of transactions required. Also included are concurrency tests, where the test engineer verifies that the application can support multiple users accessing the same data without lockouts or multiuser access problems. The test team will also want to conduct some scalability tests, verifying that the system can support planned and unplanned growth in the user community or in the volume of data that the system is required to handle.
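The concurrency tests described above can be sketched with simulated users updating shared data. This is a simplified single-process illustration using Python threads; real stress tools drive virtual users against the actual application and database.

```python
import threading

# Concurrency sketch: many simulated users exercise the same function
# at once; the shared counter must end up consistent (no lost updates).
lock = threading.Lock()
counter = {"value": 0}

def user_transaction():
    for _ in range(1000):
        with lock:                  # without the lock, updates would be lost
            counter["value"] += 1

threads = [threading.Thread(target=user_transaction) for _ in range(20)]
for t in threads:
    t.start()
for t in threads:
    t.join()

assert counter["value"] == 20 * 1000   # all 20,000 updates survived
```

Removing the lock typically produces a final count below 20,000, which is exactly the multiuser-access defect a concurrency test is designed to expose.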

Test development considerations for stress tests include planning to determine the number of transactions to be run by the test script, the number of iterations, the number of virtual users, the kinds of transactions, and the length of time to run the test script. The test team will need to identify the system limitations and perform tests to discover what happens when the system is pushed to and beyond these limits.

The test team should understand and define the transaction performance monitoring considered necessary in terms of database time, response time, and central processing unit (CPU) time. Test outcome information can include minimum transaction response time, maximum transaction response time, mean transaction response time, number of parsed transactions, and number of failed transactions.
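The response-time statistics listed above can be gathered with a crude probe like the one below. The `transaction` body is a stand-in for the real operation under test; commercial tools collect the same minimum/maximum/mean figures across many virtual users.

```python
import time

# A crude performance probe: time one transaction repeatedly and report
# minimum, maximum, and mean response times, as a monitoring tool would.
def transaction():
    sum(range(10_000))   # stand-in for the operation under test

samples = []
for _ in range(50):
    start = time.perf_counter()
    transaction()
    samples.append(time.perf_counter() - start)

report = {
    "min": min(samples),
    "max": max(samples),
    "mean": sum(samples) / len(samples),
}
assert report["min"] <= report["mean"] <= report["max"]
```

Captured over a progressive increase in load, these figures reveal the degradation trends and threshold failures that stress testing looks for.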

The test team needs to develop a benchmark plan to support stress testing. Benchmarks should include month, day, and week values based upon functional requirements. The test team also needs to determine occurrence rates and probabilities. It uses the benchmark plan and applicable values to perform stress testing on the software baseline. Benefits of stress testing include the identification of threshold failures, such as limitations involving not enough memory, SWAP or TEMP space, maximum number of concurrent open files, maximum number of concurrent users, maximum concurrent data, and DBMS page locking during updating.

Performance Testing

Performance tests verify that the system application meets specific performance efficiency objectives. Performance testing can measure and report on such data as input/output (I/O) rates, total number of I/O actions, average database query response time, and CPU utilization rates. The same tools used in stress testing can generally be used in performance testing to allow for automatic checks of performance efficiency.

To conduct performance testing, the following performance objectives need to be defined:

• How many transactions per second need to be processed?

• How is a transaction defined?

• How many concurrent users and total users are possible?

• Which protocols are supported?

• With which external data sources or systems does the application interact?

Usability Testing

Usability tests verify that the system is easy to use and that the user interface appearance is appealing. Such tests consider the human element in system operation. That is, the test engineer needs to evaluate the application from the perspective of the end user. Table 7.7 outlines the various kinds of tests to consider as part of usability testing.

Table 7.7. Usability Test Considerations

image

Test development considerations for usability tests include approaches in which the user exercises a prototype of the actual application that lacks the real functionality. By running a capture/playback tool in capture mode while the user exercises the prototype, recorded mouse movements and keystrokes track where the user moves and how he or she would exercise the system. Reviewing the captured scripts can help the designers understand how users actually approach the application and assess the usability of its design.

Random Testing

Random tests consist of spontaneous tests identified by the test engineer during test development or execution. These types of tests are also referred to as monkey tests, reflecting the spontaneous nature of their creation. With these tests, there is no formal design, nor are the tests commonly rerun as part of regression testing. The most important consideration with this type of testing is that it be documented, so that the developer will understand what test sequence was executed during random testing when a defect was discovered.

Random testing—one of the more common test strategies—does not assume any knowledge of the system under test, its specifications, or its internal design. This technique is insufficient for validating complex, safety-critical, or mission-critical software. Instead, it is best applied as a complementary test strategy with the use of other more specific and structured test strategies. Random testing can be applied throughout all test phases.

Data Integrity Testing

The data integrity test technique verifies that data are stored by the system in such a way that they are not compromised by updating, restoration, or retrieval processing. Validation to be performed can include checking data fields for alphabetic and numeric characters, for information that is too long, and for correct date format (with regard to year 2000 compliance verification). Data consistency checks include both internal and external validations of essential data fields. Internal checks involve data type checking and ensure that columns are of the correct data types; external checks involve the validation of relational integrity to determine whether duplicate data are being loaded from different files. Additionally, this type of test is intended to uncover design flaws that may result in data corruption, unauthorized data access, lack of data integrity across multiple tables, and lack of adequate transaction performance.
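The internal field checks described above can be sketched against a hypothetical customer record; the field names, the 40-character limit, and the four-digit-year date format are assumptions made for the example.

```python
from datetime import datetime

# Internal data-consistency checks on a hypothetical customer record:
# data types, field lengths, and a four-digit-year date format.
def check_record(record):
    errors = []
    if not isinstance(record.get("id"), int):
        errors.append("id must be numeric")
    if len(record.get("name", "")) > 40:
        errors.append("name exceeds 40 characters")
    try:
        datetime.strptime(record.get("created", ""), "%Y-%m-%d")
    except ValueError:
        errors.append("created must use a four-digit-year date format")
    return errors

assert check_record({"id": 7, "name": "Acme", "created": "1999-12-31"}) == []
assert "id must be numeric" in check_record(
    {"id": "7", "name": "Acme", "created": "1999-12-31"}
)
```

External checks, such as detecting duplicate rows loaded from different files, would run similar validations across tables rather than within a single record.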

Conversion Testing

Conversion testing measures and reports the capability of the software to convert existing application data to new formats. Conversion accuracy is measured by comparing the test data file dump with the new database. Conversion testing will be done separately as part of the initial database load software test.

Backup and Recoverability Testing

The backup and recoverability test technique verifies that the system meets specified backup and recoverability requirements. These tests help to prove that the database and software can recover from partial or full catastrophic failures of system hardware or software. They are conducted to determine the durability and recoverability levels of the software on each hardware platform. The aim of recovery testing is to discover the extent to which data can be recovered after a system breakdown. Does the system provide possibilities to recover all of the data or just part of it? How much can be recovered and how? Are the recovered data still correct and consistent? This type of test technique is especially appropriate for applications that must meet high reliability standards.

Configuration Testing

Configuration testing verifies that an application operates properly on machines with different hardware and software configurations. Such tests check for compatibility issues and help to determine the optimal configuration of hardware and software to support an application.

Operational Readiness Testing

This test technique helps determine whether a system is ready for normal production operations. All valid and invalid values are defined and applied during these tests. Each value is passed to the AUT, and the resulting behavior of the software is observed. It is very important to test all possible invalid values to verify that the AUT will meet the project’s specifications.

Operational readiness tests also verify that the AUT can be installed on its targeted hardware platforms using the documentation provided by the development/test group and determine whether the application runs as expected. Uninstall instructions are tested for their impact on the environment.

Apart from general usability-related aspects, procedures supporting operational readiness tests are particularly useful for assessing the interoperability of the software system. This test technique verifies that the different software components making up the system can function correctly and communicate with one another once integrated. It also involves checks to determine whether all components that make up the AUT (such as .dll and .vbx libraries and .exe files) have been included as part of the installation package or have been installed correctly when the installed or actual production environment is tested.

User Acceptance Testing

Getting the user involved early in the testing process pays off at this stage. This way, the users will be familiar with the software at this point rather than experiencing a rude awakening when seeing the software for the first time. The acceptance test phase includes testing performed for or by end users of the software product. Its purpose is to ensure that end users are satisfied with the functionality and performance of the software system. Commercial software products don’t generally undergo customer acceptance testing, but do often allow a large number of users to receive an early copy of the software so that they can provide feedback as part of a beta test.

In a controlled environment, where a customer or end user is required to evaluate a system and make a determination of whether to accept the system, the acceptance test may be composed of test scripts performed during system testing. In an uncontrolled environment, where end users may be free to exercise a beta version of a software product at will, the purpose of the test may be to solicit end-user feedback. This feedback would then be evaluated and changes to the software product would be contemplated prior to formal release of the software.

The acceptance test phase begins only after the successful conclusion of system testing and the successful setup of a hardware and software configuration to support acceptance testing, when this configuration differs from the system test environment.

Alpha/Beta Testing

Most software vendors use alpha and beta testing to uncover errors through testing by the end user. Customers at the developer’s site, with the developer present, usually conduct alpha testing. Beta testing is usually conducted at one or more customer sites with the developers not present.

7.2.3.2 Automated Tools Supporting Black-Box Testing

When selecting black-box test techniques as part of developing the overall test program design, it is beneficial to be familiar with the various kinds of test tools available. Table 7.8 maps the black-box test techniques to various types of automated test tools. When choosing the desired black-box test techniques and making commitments to use pertinent automated test tools, the test team should keep in mind that some test tools may not be compatible with each other. Data from one tool might not be accessible from another tool, for example, if there is no import/export facility. The tools might use different databases, different database setups, and so on. Refer to Appendix B for more details on various test tools.

Table 7.8. Black-Box Techniques and Corresponding Automated Test Tools

image

7.2.4 Test Design Documentation

Test design is a complex task that should be documented in the test plan. Various documents and information that support test design activities are outlined in Table 7.9.

Table 7.9. Documents Supporting Test Design

image

image

7.3 Test Procedure Design

After test requirements have been derived, test procedure design can begin. Test procedure definition involves organizing the test procedures into logical groups and establishing a naming convention for the suite of test procedures. With a test procedure definition in place, each test procedure is then identified as either an automated or a manual test. The test team now has an understanding of the number of test techniques being employed, an estimate of the total number of test procedures that will be required, and estimates of how many test procedures will be performed manually and how many with an automated test tool.

The next step in the test procedure design process, as depicted in Table 7.10, is to identify the more sophisticated test procedures that must be defined further as part of detailed test design. These test procedures are flagged, and a detailed design document is prepared in support of the more sophisticated test procedures. Next, test data requirements are mapped to the defined test procedures. To create a repeatable, reusable process for producing test procedures, the test team needs to create a document that outlines test procedure design standards. Only when these standards are followed can the automated test program achieve real efficiency and success by being repeatable and maintainable.

Table 7.10. Test Procedure Design Process

image

7.3.1 Test Procedure Definition

Test procedures address preconditions for a test, data inputs necessary for the test, actions to be taken, expected results, and verification methods. Because the goal of the test effort is to find defects in the application under test while verifying that the system meets the test requirements, an effective test procedure design will consist of tests that have a high probability of finding previously undiscovered errors. A good test procedure design should not only cover expected inputs and outputs, but also attempt to account for unexpected input and output values. An effective suite of test procedures, therefore, should account for what the system should do and include tests for unexpected conditions.
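The idea of pairing expected inputs with unexpected ones can be illustrated with a small sketch. Here `validate_amount` is a hypothetical stand-in for a function under test, and its accepted range is an assumption made purely for illustration:

```python
# validate_amount() is a hypothetical stand-in for the function under test;
# the accepted range (1 to 10,000) is assumed for illustration.
def validate_amount(value):
    return isinstance(value, int) and 1 <= value <= 10_000

# Expected inputs: what the system should accept, including the boundaries.
expected_cases = [(1, True), (5_000, True), (10_000, True)]
# Unexpected inputs: out-of-range and wrong-type values the system must reject.
unexpected_cases = [(0, False), (10_001, False), (-5, False), ("abc", False)]

for value, should_pass in expected_cases + unexpected_cases:
    assert validate_amount(value) is should_pass, value
print("all expected and unexpected cases verified")
```

A procedure designed this way documents not only what the system should do, but what it must refuse to do.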

Unfortunately, the scope of the test effort can be infinite, as noted in Chapter 4. As a result, the scope of the test program needs to be bounded. The exercise of developing the test procedure definition not only aids in test development, but also helps to quantify or bound the test effort. The development of the test procedure definition involves the identification of a suite of test procedures that will need to be created and executed in support of the test effort. The design exercise involves the organization of test procedures into logical groups and the definition of a naming convention for the suite of test procedures.

To construct the test procedure definition, the test team must review the test architecture. Figure 7.4 depicts the test architecture of a project for which development-level tests are design-based and system-level tests are technique-based. In this example, the design components referenced were retrieved by the test team from the project’s software architecture. Five components are being tested at the development level: System Management (SM-06), Security Guard (SG-07), Distributed Computing (DC-08), Support Applications (SA-09), and Active Trade Visibility (TV-10). For each of these design components, the test techniques that will be applied are noted.

Figure 7.4. Sample Test Architecture

image

At the system test level, Figure 7.4 identifies the test techniques that are being applied. For each test technique, the scope of each technique test area is defined in terms of the design components involved and extraneous system requirement sources, such as security requirements outlined within a security plan.

The test architecture provides the test team with a clear picture of the test techniques that need to be employed. The test team can further identify the test requirements associated with each test technique by referring to the requirements traceability matrix. Test personnel can now readily define the test procedures that correspond to the applicable test techniques and the associated test requirements.

Table 7.11 gives a sample test procedure definition for development-level tests. Column 1 of this table identifies the series of test procedures allotted to test the particular design component using the particular test technique. Column 2 identifies the software or hardware design components that need to be tested.

Table 7.11. Test Procedure Definition (Development Test Level)

image

In the example given in Table 7.11, the design components referenced were retrieved from the test architecture. The SA component has been allocated test procedures numbered from 850 to 1399, and it has 32 software units (901–932) associated with it, as indicated in column 2. The test technique is listed in column 3, and the number of test procedures involved in each set of tests (row) is estimated in column 4.

Table 7.12 gives a sample test procedure definition for system-level tests. Column 1 of this table identifies the series of test procedures allotted to support each particular test technique. Column 2 identifies the test technique, which is derived from the test architecture. Although Table 7.12 includes only software tests, hardware tests could be represented as well.

Table 7.12. Test Procedure Definition (System Test Level)

image

Columns 3 through 5 provide information to identify the number of test procedures that will be involved at the system test level. The number of design units or functional threads that will be involved in the tests appears in column 3. A functional thread represents a useful or logical way for an end user to navigate (follow a functional path) through an application. If process flow documentation or user interface design information is available, the test team may include numbers in column 3 pertaining to threads to be used. In the example in Table 7.12, four functional threads are planned to support stress and performance testing. Usability tests will be conducted as part of functional testing, and, as a result, no additional test procedures are developed for this test technique.

The number of system requirements or use cases involved in the tests appears in column 4, and the number of applicable test requirements is noted in column 5. The value in the test requirements column reflects the need to have at least one test requirement per system requirement. Note that test requirements may specify different conditions to be applied against a number of system requirements or use cases. Testing against some system requirements or use cases might necessitate that two or three different conditions be exercised. As a result, the total number of test requirements may exceed the number of system requirements or use cases in any row.

The last column in Table 7.12 gives the estimated number of test procedures that will be required for each test technique listed. For functional and security tests, there may be one test procedure for every test requirement. For stress and performance testing, four threads will be altered for each test procedure so as to examine 12 or 14 different system requirements or use cases. Additionally, the test team may choose to examine two different levels of system load for each stress and performance test—expected usage and double expected usage. By using the two levels and capturing two different measurements, the test team would be able to examine the performance degradation between the two levels.

With the test procedure definition in place for both the development and system levels, it is now time to adopt a test procedure naming convention that will uniquely distinguish the test procedures on the project from test procedures developed in the past or on other projects. Table 7.13 provides the test procedure naming scheme for a fictitious project called the WallStreet Financial Trading System (WFTS). The test procedure numbering scheme used in the previous test procedure definitions has been augmented by attaching the prefix WF.

Table 7.13. Test Procedure Naming Convention

image

7.3.2 Automated Versus Manual Test Analysis

Tests at the white-box, or development, level consist primarily of automated tests. Tests at the system level generally represent a combination of automated and manual tests. At the system level, the test team needs to review all test procedure requirements to determine which test procedures can be automated and which should be performed manually.

This section will describe an approach for deciding when to automate and when to test manually. Not everything should be automated immediately; instead, the test team should take the automation approach step by step. It is wise to base the automation effort on the test procedure execution schedule. While conducting the automated versus manual test analysis, keep in mind that it can take as much effort to create an automated test script for a complex functionality as it took to develop the code. The team should therefore analyze the automation effort. If it takes too much effort and time, a better approach might be to manually test the functionality. Remember that one of the test automation goals is to avoid duplicating the development effort.

If an automated test cannot be reused, the associated test automation effort may represent an inefficient use of test team resources. The test team needs to focus the automation effort on repetitive tasks, which can save on manual testing time and enable test engineers to focus on other, more pressing test issues and concerns.
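One way to make this analysis concrete is a rough break-even calculation. The rule of thumb below is an assumption for illustration, not a formula prescribed in this chapter, and all figures are invented: automation pays off when the cost to build and maintain a script falls below the manual effort it replaces over the planned number of runs.

```python
# Rule-of-thumb break-even check (an assumption, not from the chapter):
# automation pays off when build-plus-maintenance effort is less than the
# manual effort it replaces over the planned number of test runs.
def automation_pays_off(manual_hours_per_run, planned_runs,
                        build_hours, maintain_hours_per_run):
    manual_total = manual_hours_per_run * planned_runs
    automated_total = build_hours + maintain_hours_per_run * planned_runs
    return automated_total < manual_total

# A one-off complex test: 6 hours manual, run twice, 40 hours to automate.
print(automation_pays_off(6, 2, 40, 0.5))    # False -- test it manually
# A repetitive regression test: 2 hours manual, 30 runs, 10 hours to automate.
print(automation_pays_off(2, 30, 10, 0.25))  # True -- automate it
```

The second case shows why repetitive tasks are the natural first target for automation: the build cost is amortized across many runs.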

Part of test procedure definition involves determining whether a test will be executed manually or whether it lends itself to automation. During unit testing, it is a relatively simple task to use an automated testing tool for a variety of different kinds of tests. For example, a code coverage tool or a memory leakage tool may be used without much concern for identifying which parts of the application are amenable to automation. During system testing, the task of deciding what to automate is more complex when using a capture/playback tool or a server test tool. Analyzing what to automate is one of the most crucial aspects of the automated testing life cycle. Several guidelines for performing the automated versus manual test analysis are outlined in this section.

7.3.2.1 Step by Step—Don’t Try to Automate Everything at Once

If the test team is not experienced with the use of automated test tools across a number of different projects, it is best to take a more cautious approach for introducing automation. Avoid trying to automate everything at once. Take a step-by-step approach by automating the more obvious applications of test tools first and postponing the automation of other tests until more experience is gained with test automation.

Consider the example of a test team that decided to enter all of its test requirements into the test management tool in support of system-level testing. The test team was eager to automate every possible test and identified 1,900 test procedures amenable to automation. When the time came to develop these test procedures, however, the team discovered that the automated test tool was not as easy to use as had been thought, and that loading the test requirements into the tool was a substantial effort in itself. It eventually called in consultants to help with the test automation effort in an attempt to stay on schedule. The test team had not fully appreciated the magnitude of the automation effort. The lesson of this example is that a test team should not try to automate every test without experience with the kinds of tests planned and the particular test tools purchased. It is better to apply an incremental approach toward increasing the depth and breadth of test automation.

7.3.2.2 Not Everything Can Be Tested or Automated

As noted in Chapter 2, not everything can be tested, and consequently not everything can be automated. This fact is worth remembering when deciding which test procedures to automate. For example, it is not possible to automate the verification of a printed output: the test engineer must manually retrieve the output at the printer and inspect it against the expected result. The application might indicate a printer error, for instance, when the printer is simply out of paper. In addition, schedule and budget constraints make it infeasible to automate every required test.

7.3.2.3 Don’t Lose Sight of Testing Goals

When determining what to automate, the test engineer should not lose sight of the overall testing goal: to find defects early. A test engineer could spend weeks feverishly creating elegant automated scripts and, along the way, forget about executing manual tests that could discover defects immediately. The automation effort might thus postpone the discovery of defects because the test engineer is too involved in creating complex automated test scripts.

7.3.2.4 Don’t Duplicate Automation of the AUT’s Program Logic

When analyzing what to automate, keep in mind that the test program should not duplicate the AUT’s program logic. One rule of thumb is that if it takes as much or more effort to automate a test script for a specific test requirement as it did to code the function, a new testing approach is required. Moreover, if the automated test tool duplicates the AUT’s program logic and a problem exists in that logic, the automated script will reproduce the same faulty behavior and thus will not be able to detect the logic error in the AUT.

7.3.2.5 Analyze the Automation Effort

The suggestion that the initial automation effort should be based on the highest-risk functionality has one caveat. Experience shows that the highest-risk functionality is often the most complex and thus the most difficult to automate. The test team should therefore analyze the automation effort first. Also, whenever reviewing the effort required to automate test procedures for a particular piece of functionality, it is important to be sensitive to the test schedule. If only two weeks are allotted for test development and execution, the schedule may not permit the creation of elaborate automated test scripts. In such a situation, it may be preferable not to use an automated test tool at all.

7.3.2.6 Analyze the Reuse Potential of Automated Modules

When determining which test procedures to automate, keep reuse in mind. Suppose that the test team decided to automate the highest-risk functionality of the application, but did not contemplate the level of effort required to automate the test procedures or consider the extent to which test scripts could be reused. If the scripts cannot be reused, automation efforts are wasted. Instead, the test team should examine the ability to reuse the test scripts in a subsequent software application release.

Another question to pose is whether and how much the baseline functionality would be expected to change. The test team should investigate whether the initial software baseline represents a one-time complex functionality that could change significantly with the next release. If so, automation is unlikely to produce labor-hour savings during test development. Automation may still permit test execution and regression test schedule savings, which may be more important for the particular project than overall test budget considerations.

7.3.2.7 Focus Automation on Repetitive Tasks—Reduce the Manual Test Effort

In addition to focusing the initial test automation efforts on the high-risk functionality of a stable module, it is beneficial to consider automation of repetitive tasks. If repetitive tasks are automated, test engineers can be freed up to test more complex functionality.

Consider a test engineer who must test a requirement that states “the system should allow for adding 1 million account numbers.” This task lends itself perfectly to automation. The test engineer would record the activity of adding one account number once, then modify the tool-generated program code to replace the hard-coded values with variables. A program loop could increment and test the account number, with iterations up to a specified level. This kind of script could be developed in less than 30 minutes, while it would take the test engineer weeks to test this specific requirement by manually keying in 1 million account numbers and their descriptions.
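A minimal sketch of this kind of script follows. The `add_account` function is a hypothetical stand-in for the tool-generated code that replays the recorded GUI steps; the loop shows the hard-coded value replaced by an incrementing variable (run here with 1,000 accounts rather than 1 million for brevity):

```python
# Hypothetical sketch: add_account() stands in for the tool-generated code
# that replays the recorded "add account" steps, with the hard-coded account
# number replaced by a variable.
def add_account(account_number, description, accounts):
    """Simulates adding one account through the recorded script steps."""
    if account_number in accounts:
        return False  # duplicate account -- the AUT would reject it
    accounts[account_number] = description
    return True

def run_bulk_add(start, count):
    """Loops the recorded steps, incrementing the account number each time."""
    accounts = {}
    failures = []
    for i in range(count):
        number = start + i
        if not add_account(number, f"Account {number}", accounts):
            failures.append(number)
    return accounts, failures

# 1,000 iterations here for brevity; the same loop scales to 1 million.
accounts, failures = run_bulk_add(100_000, 1_000)
print(len(accounts), len(failures))  # 1000 0
```

The 30-minute script replaces weeks of manual data entry, and the `failures` list gives the test engineer an immediate record of any rejected iteration.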

7.3.2.8 Focus Automation on Data-Driven Tasks—Reduce the Manual Test Effort

An example of automating a data-driven task is century date testing performed by entering year 2000 data values. The test team should write a script that reads such values from a file to perform the add, delete, and update activities associated with various tests. Drawing values from a file enables the test engineer to spend more time conducting complex and important test activities. Another consideration when choosing to perform repetitive test tasks manually is that such manual efforts are prone to errors: individual test engineers do not perform as well on repetitive tasks as computers and software programs do.
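Such a data-driven script might look like the following sketch. The file format (a simple CSV of actions and date values) and the in-memory store are assumptions for illustration; a real script would drive the AUT rather than a Python set:

```python
import csv
import io

# Illustrative data file: each row names an action and a year 2000 date value.
# Inlined here for self-containment; a real suite would read an external file.
TEST_DATA = """action,date
add,2000-01-01
add,2000-02-29
update,2000-02-29
delete,2000-01-01
"""

def run_data_driven(stream):
    """Replays add/delete/update actions read from a data file."""
    store = set()  # stands in for the AUT's stored records
    log = []
    for row in csv.DictReader(stream):
        action, date = row["action"], row["date"]
        if action == "add":
            store.add(date)
        elif action == "update":
            # an update is only valid for a record that already exists
            log.append(("update", date, date in store))
            continue
        elif action == "delete":
            store.discard(date)
        log.append((action, date, True))
    return store, log

store, log = run_data_driven(io.StringIO(TEST_DATA))
print(sorted(store))  # ['2000-02-29']
```

Because the data lives in a file rather than in the script, new cases (for example, additional leap-year dates) can be added without touching the code.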

7.3.2.9 Consider the Test Tool’s Capabilities

When evaluating which test procedures to automate, the test engineer needs to keep in mind the test tool’s capabilities. What parts of the application can be automated, using what tool? The test engineer should view client-side GUI tests as different from server-side tests, because more than one test tool may be required for each environment. When deciding which parts of the GUI or server function tests to automate, the test engineer should review the capabilities of the GUI or server test tool.

7.3.2.10 Automate Test Requirements Based on Risk

One way that the test team can select which test procedures to automate relies on risk analysis. Chapter 6 discussed how the test requirements can be ordered by risk and by most critical functionality. When reviewing the defined test procedures to determine which are amenable to automation, take a look at the highest-risk functionality and its related test requirements, and analyze whether those requirements warrant priority attention with regard to the application of automation. The test team should also review the test procedure execution schedule when selecting test procedures to automate because the schedule sequence is generally based on risk, among other issues.
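A risk-ranked selection of automation candidates can be sketched as follows. The requirement IDs, risk scores, and repetitiveness flags are hypothetical; in practice they would come from the risk analysis described in Chapter 6:

```python
# Hypothetical requirement records; real risk scores would come from the
# risk analysis described in Chapter 6.
requirements = [
    {"id": "TR-101", "function": "trade execution", "risk": 9, "repetitive": True},
    {"id": "TR-205", "function": "report footer",   "risk": 2, "repetitive": False},
    {"id": "TR-150", "function": "security login",  "risk": 8, "repetitive": True},
]

# Automate the repetitive, highest-risk requirements first.
candidates = sorted(
    (r for r in requirements if r["repetitive"]),
    key=lambda r: r["risk"],
    reverse=True,
)
print([r["id"] for r in candidates])  # ['TR-101', 'TR-150']
```

The filter-then-sort order mirrors the guidance above: repetitiveness determines whether automation is worthwhile at all, and risk determines the sequence.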

By applying these guidelines, the test team should be able to decide which test procedures warrant automation and which can be performed most efficiently using manual methods. Table 7.14 gives a portion of a traceability matrix that breaks down each test procedure required in system-level testing. Such tables can be updated and automatically generated using an RM tool.

Table 7.14. Automated Versus Manual Tests

image

RM tools such as DOORS allow the test engineer to cross-reference each test procedure (as in Table 7.14) to several other elements, such as design components and test techniques, and automatically generate a report. The last column in Table 7.14 indicates whether the test will be performed using an automated test tool (A) or whether it will be performed manually (M). Note that the matrix depicted in Table 7.14 has two columns for requirement identifiers. The “SR ID” column lists the associated system requirement, while the “SWR ID” column identifies a more detailed software requirement. When detailed software requirements have not been defined for a project, the “SWR ID” column would be left blank.
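The kind of breakdown report such a tool generates can be approximated in a few lines. This sketch does not use the DOORS API; the records, with their WF-prefixed IDs and A/M designations, are invented for illustration:

```python
# Illustrative records mirroring the columns of an automated-versus-manual
# matrix (this is not the DOORS API; IDs and modes are invented).
procedures = [
    {"id": "WF0850", "sr_id": "SR-32", "swr_id": "SWR-120", "mode": "A"},
    {"id": "WF0851", "sr_id": "SR-33", "swr_id": "",        "mode": "M"},
    {"id": "WF0852", "sr_id": "SR-34", "swr_id": "SWR-121", "mode": "A"},
]

def summarize(procs):
    """Counts automated (A) versus manual (M) test procedures."""
    counts = {"A": 0, "M": 0}
    for p in procs:
        counts[p["mode"]] += 1
    return counts

print(summarize(procedures))  # {'A': 2, 'M': 1}
```

Note the empty "SWR ID" value in the second record, corresponding to a project where no detailed software requirement has been defined.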

7.3.3 Automated Test Design Standards

To develop a repeatable and reusable process, a document needs to be created that lists the test procedure design standards that everyone involved in the test design effort should follow. Enforcing compliance with these standards is important to achieving a successful automated test program. Test procedure design standards will promote consistency and will facilitate the integration of various test procedures into the testing framework discussed in Chapter 8. Test design for automated test procedures should seek to minimize the script development effort, minimize maintenance, and promote reusability and flexibility of scripts so as to accommodate later changes to the AUT. It should also lead to more robust test procedures.

7.3.3.1 When to Design

As mentioned throughout this book, test development and particularly test design are most efficiently performed in parallel with the application development effort. Test requirements and test design can be initially addressed during the application requirements-gathering phase. At this time, the test engineer can begin to evaluate whether each requirement can be tested or decide on another verification method to verify that a requirement has been met. During the application design phase, he or she can provide input with regard to whether an application design is testable and influence the testability of the resulting application code. During the unit and integration development phase, test requirements can be gathered and test design can be initiated. Note that the test design effort should not interrupt application development work, but should be integrated smoothly into the application development life cycle.

7.3.3.2 What to Design

The previous sections in this chapter described how test requirements are derived from the various system requirements, depending on the current testing phase. Now it is time to design the test procedures, based on these test requirements. As the goal of testing is to find defects in the AUT while verifying that the system application meets test requirements, well-designed test procedures should have a high probability of finding previously undiscovered errors. A good test design needs to cover expected inputs and outputs and attempt to account for unexpected input and output. Good test procedures, therefore, should not only account for what the system should do, but also include test exercises to verify performance for unexpected conditions.

Test procedures are created to verify test requirements. Tests are therefore designed to answer several questions, including the following:

Has an automated versus manual test analysis been conducted and documented?

What is the sequence of actions necessary to satisfy the test requirements?

What are the inputs and expected outputs for each test procedure?

What data are required for each test procedure used?

How is the function’s validity verified?

What classes of input will make for good test procedures?

Is the system particularly sensitive to certain input values?

How are the boundaries of a data class isolated?

What data rates and data volume can the system tolerate?

What effect will specific combinations of data have on system operation?

7.3.3.3 How to Design

Tests should be developed that cover the important aspects of the application. The test design needs to comply with design standards, which mandate such things as use of templates and naming conventions and define the specific elements of each test procedure. Before designing tests, the test engineer needs to employ the test design techniques discussed so far to derive test requirements. She can then use these test requirements as a baseline for test procedure design. A robust test design should make the automated test reusable and repeatable as well as helpful in identifying errors or defects in the target software. It should also tackle the high-priority tasks first. If test engineers solicit customer and end-user involvement early in test design, they may avoid surprises during test implementation.

7.3.3.4 Test Procedure Modularity

Test procedure design standards or guidelines need to address the size of a test. For example, the standard may stipulate the maximum number of steps allowed in a single test procedure. It is beneficial to limit the scope of a test procedure to a single function, so it will remain manageable and maintainable. Chapter 8 gives suggestions on how to develop maintainable test procedures.

7.3.3.5 Test Procedure Independence

When designing test procedures, it is a good idea to avoid data dependency between test procedures, whenever feasible. Execution of data-dependent test procedures can result in a domino effect, where the failure of one test procedure affects the next test procedure. Whenever a data dependency exists, make sure to document it in the modularity-relationship matrix, discussed in Chapter 8.

It is also important to avoid creating test procedures in a context-dependent fashion, such that one test procedure begins where another ends. Whenever possible, test procedures should start and end at the same place. This approach is not always feasible, but it remains useful as a rule of thumb. Otherwise, if one test procedure fails to complete execution and the next test procedure depends on its outcome to start running, testing may stall. Make sure to document any test procedure context dependency in the modularity-relationship matrix.

7.3.3.6 Scripting Language

Some automated test tools come with multiple scripting languages. The test organization will need to document within its test design standards which scripting language has been adopted. For example, SQA Suite, an automated test tool, supports both SQA Basic and Visual Basic scripting languages. The test team needs to adopt a standard language for its test procedures. This choice eliminates any dependencies and allows any member of the team to read and interpret the test procedure code that is generated.

7.3.3.7 Test Tool Database

Some automated test tools also come with a variety of databases. Test Studio, for example, allows the use of either an MS Access or SQL Anywhere database. The test team must evaluate the benefits and drawbacks to each particular database. It should consider the rate of database corruption problems encountered, any database record size limitations, requirements for ODBC connections, and the use of SQL statements. If a large test database is expected and many test engineers will require access to the database simultaneously, a more robust database, such as SQL Anywhere, should be selected. When the test effort involves only a handful of test engineers, an Access database may be sufficient.

7.3.3.8 Test Procedure Template

A test procedure design template provides a structure for the individual test. It facilitates the design of the test and promotes consistency among the automated tests. Table 7.15 provides an example of a test procedure design template. A test procedure design template should be adopted by the test team and included within the test procedure design standard. The team should use the test procedure template in conjunction with the test execution schedule and the test modularity model.

Table 7.15. Automated Test Procedure Template

image

image

7.3.3.9 Naming Conventions

A fairly complex AUT will have a large number of test procedures associated with it. When creating these test procedures, it is very important to follow a naming convention that ensures the names of the different test procedures follow a standard format. Such conventions also help to prevent duplication of test procedure IDs and, more importantly, help avoid duplications of test procedures.

Standard naming conventions also enable the entire test team to quickly ascertain the kind of test being performed, making the entire test suite more maintainable. For example, a standard test procedure ID of ACCADD1 would be understood to mean account add test procedure 1. The naming convention may address issues such as whether positive and negative test data are tested within the same test procedure or whether the test for positive data will be separate from the test for negative data.

Test procedure naming conventions make procedures manageable and readable. Test procedures that test a particular functionality of an application should therefore have similar names. Identify the file naming limitations of the particular automated test tool being used, if any.
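A naming convention is easiest to enforce with a small checker. The area and action codes below describe a hypothetical convention modeled on the ACCADD1 pattern mentioned above; a real project would substitute its own codes:

```python
import re

# Hypothetical convention modeled on ACCADD1: a three-letter functional area,
# a three-letter action, and a sequence number. Codes are illustrative.
AREAS = {"ACC": "account", "SEC": "security", "TRD": "trade"}
ACTIONS = {"ADD": "add", "DEL": "delete", "UPD": "update"}
ID_PATTERN = re.compile(r"^(?P<area>[A-Z]{3})(?P<action>[A-Z]{3})(?P<seq>\d+)$")

def check_procedure_id(proc_id):
    """Returns a readable description, or raises if the ID breaks convention."""
    m = ID_PATTERN.match(proc_id)
    if not m or m["area"] not in AREAS or m["action"] not in ACTIONS:
        raise ValueError(f"{proc_id!r} violates the naming convention")
    return f"{AREAS[m['area']]} {ACTIONS[m['action']]} test procedure {int(m['seq'])}"

print(check_procedure_id("ACCADD1"))  # account add test procedure 1
```

Running such a check when test procedures are created catches duplicate or malformed IDs before they spread through the test suite.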

7.3.4 Manual Test Design Guidelines

Standards for test procedure design need to be enforced so that everyone involved follows the same guidelines. Such standards are necessary whether the test team is developing manual test procedures or automated test procedures.

7.3.4.1 Naming Convention

Like automated test procedures, manual test procedures should follow a naming convention.

7.3.4.2 Test Procedure Detail

The standards for manual test procedures should include an example indicating how much detail a test procedure should contain. The level of detail may be as simple as step 1—click on the File menu, step 2—select Open, step 3—select the directory, and so on. Depending on the size of the AUT, there might not be time to write extensive test procedure descriptions, in which case the test procedure would contain only a high-level description. Also, if a test procedure is written at a very low level of detail, maintaining it can become very difficult: every time a button or control on the AUT changes, the test procedure would need to reflect the change.

Test Procedure ID. Use the adopted naming convention when filling in the test procedure ID.

Test Procedure Name. This field provides a longer description of the test procedure.

Test Author. Identify the author of the test procedure.

Verification Method. The method of verification can be certification, automated test, manual test, inspection, or analysis.

Action. The clear definition of goals and expectations within a test procedure helps ensure its success. Document the steps needed to create a test procedure, much as you might write pseudocode in software development. This effort forces the test engineer to clarify and document his thoughts and intentions.

Criteria/Prerequisites. Test engineers need to fill in criteria or prerequisite information that must be satisfied before the test procedure can be run, such as specific data setup requirements.

Dependency. This field is completed when the test procedure depends on a second test procedure—for example, where the second test procedure needs to be performed before the first test procedure can be carried out. This field is also completed in instances where two test procedures would conflict with one another when both are performed at the same time.

Requirement Number. This field needs to identify the requirement identification number that the test procedure is validating.

Expected Results. This field defines the expected results associated with the execution of the particular test procedure.

Actual Results. The automated test tool may have a default value for the Actual Results field, which may read “Same as Expected Result.” The value of this field is changed if the test procedure fails.

Status. Status may include the following possibilities: testable/passed, testable/failed, not testable, partially not testable/passed, or partially not testable/failed. A test requirement might not be testable, for example, because the functionality has not been implemented yet or has been only partially implemented. The status field is updated by the test engineer following test execution. Using the status field, a database management system or RM tool (such as DOORS) can automatically calculate the percentage of test procedures that executed and passed versus the percentage that executed and failed, and then create a progress report.
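The progress calculation described for the Status field can be sketched in a few lines. The status counts are illustrative, and the sketch considers only the testable/passed and testable/failed values for simplicity:

```python
# Illustrative status values; the partial statuses are omitted for simplicity.
statuses = (["testable/passed"] * 45
            + ["testable/failed"] * 5
            + ["not testable"] * 10)

executed = [s for s in statuses if s.startswith("testable/")]
passed = sum(1 for s in executed if s.endswith("/passed"))
failed = len(executed) - passed

print(f"passed: {passed / len(executed):.0%}, failed: {failed / len(executed):.0%}")
# passed: 90%, failed: 10%
```

Note that the not-testable procedures are excluded from the denominator, so the percentages describe only the tests that actually executed.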

7.3.4.3 Expected Output

Test procedure standards can include guidelines on how the expected results are documented. The standard should address several questions. Will tests require screen prints? Will tests require sign-off by a second test engineer who observes the execution of the test?

7.3.4.4 Manual Test Procedure Example

An example of a manual test procedure created in DOORS is provided below. In this example, the details of the test procedure are either generated by the system or completed by a test engineer.

Object Level (system-generated)—Shows the hierarchy and relationships of test procedures.

Object Number (system-generated)

Object Identifier (system-generated)—Links defects to each test procedure.

Absolute Number (system-generated)

Created by (system-generated)—Gives the name of the test engineer generating the test procedure.

Created on (system-generated)—Gives a date.

Created Thru (system-generated)

Criteria/Prerequisites—Includes criteria or prerequisite information, completed by the test engineer, that must be satisfied before the test procedure can be run (such as specific data setup necessary).

Expected Results—Explains the expected results of executing the test procedure.

Actual Result—Lists “Same as Expected Result” as the default, but is changed if the test procedure fails.

Last Modified by (system-generated)

Last Modified on (system-generated)

Object Heading (system-generated)

Object Short Text

Object Text

Precondition/Dependency—Filled in if the test procedure has a dependency on another test procedure or when two test procedures would conflict with each other if run at the same time.

Requirement Number—Gives the number of the system or software requirement that applies.

Status—Identifies the status of the executed test.

Step—Documents the steps needed to create a test procedure. It is the equivalent of pseudocode in software development.

Test Procedure ID—Defines the naming convention used to document the test procedure ID.

Test Procedure Name—Provides a full description of the test procedure.

Verification Method—Automated or manual test, inspection, analysis, demonstration, or certification.
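The engineer-completed fields above can be modeled as a simple record. The sketch below is a hypothetical structure, not the actual DOORS schema; system-generated attributes (object numbers, audit stamps) are omitted, and the sample values are invented:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class TestProcedure:
    """Illustrative record mirroring the engineer-completed fields above."""
    procedure_id: str                    # Test Procedure ID (naming convention)
    name: str                            # Test Procedure Name
    requirement_number: str              # requirement being validated
    verification_method: str             # e.g., "automated test", "inspection"
    steps: List[str] = field(default_factory=list)
    prerequisites: List[str] = field(default_factory=list)
    dependency: Optional[str] = None     # ID of a procedure that must run first
    expected_results: str = ""
    actual_results: str = "Same as Expected Result"  # default per the standard
    status: str = "not executed"

# Hypothetical example instance:
tp = TestProcedure(
    procedure_id="SM2012",
    name="Verify account query returns only past-due accounts",
    requirement_number="REQ-3.2.1a",
    verification_method="automated test",
)
```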

7.3.5 Detailed Test Design

When testing at the system level, it may be worthwhile to develop a detailed test design for sophisticated tests. These tests might involve test procedures that perform complex algorithms, consisting of both manual and automated steps, and test programming scripts that are modified for use in multiple test procedures.

The first step in the detailed design process is to review the test procedure definition at the system test level so as to flag or identify those test procedures that may warrant detailed design. The test team could begin this exercise by printing out a list of all planned system-level test procedures, including a blank column as shown in Table 7.16. This blank column can be filled in depending on whether each test procedure should be further defined as part of a detailed design effort.

Table 7.16. Detailed Design Designation for System Test Approach

image

With Table 7.16 in hand, the test team now has a clear picture of how many test procedures will benefit from further definition as part of a detailed design effort. The test team should next create a detailed design document, as shown in Table 7.17. The detailed design document is intended to be an aid to the test engineers while they develop the test procedures. As a result of the detailed design effort, test procedures should be more consistent and include all of the tests required.

Table 7.17. Detailed Design Document Outline

image

The detailed design may take the form of program pseudocode when test programming is required. That is, it may be represented simply as a sequence of steps that need to be performed during testing. When programming variables and multiple data values are involved, the detailed design may include a loop to indicate an iterative series of tests involving different values plus a list or table identifying the kinds or ranges of data required for these tests.
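A detailed design of this form might look like the following sketch, in which one test sequence is repeated in a loop over a table of data values. The function under test is stubbed, and the discount rules and values are invented for illustration:

```python
def order_discount(total):
    """Function under test (stubbed for the sketch): tiered order discount."""
    if total >= 1_000:
        return 0.10
    if total >= 100:
        return 0.05
    return 0.0

# The list/table of data values called for by the detailed design.
DATA_TABLE = [      # (order total, expected discount)
    (50, 0.0),
    (100, 0.05),
    (999, 0.05),
    (1_000, 0.10),
]

# The loop indicating an iterative series of tests over different values.
outcomes = [(total, order_discount(total) == expected)
            for total, expected in DATA_TABLE]
```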

7.3.6 Test Data Requirements

After the creation of the detailed test design, test data requirements need to be mapped against the defined test procedures. Once test data requirements are outlined, the test team should plan the means for obtaining, generating, or developing the test data. The mechanism for refreshing the test database to an original baseline state—a necessity in regression testing—also needs to be documented within the project test plan. In addition, the project test plan needs to identify the names and locations of the applicable test databases and repositories necessary to exercise software applications.

The data flow coverage testing technique described earlier in this chapter seeks to incorporate the flow of data into the selection of test procedures. Using this technique will help the test team to identify those test path segments that exhibit certain characteristics of data flows for all possible types of data objects. The following sections address test data requirements for both the white-box and black-box testing approaches.

7.3.6.1 White-Box Test Data Definition

Most testing techniques require that test data be defined and developed for the resulting test procedures. The identification of test data requirements is an important step in the definition of any test design. Test engineers should define the test data needed for activities such as executing every program statement at least once, ensuring that each condition is tested, and verifying that the expected results cover as many variations and combinations as is feasible. Test data are also required for exercising every boundary condition.
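Boundary-condition data of the kind described above can be derived mechanically. The sketch below assumes a numeric field with a documented valid range and generates values just below, on, and just above each boundary:

```python
def boundary_values(low, high, step=1):
    """Generate test data around the boundaries of a numeric range:
    just below, on, and just above each boundary."""
    return [low - step, low, low + step, high - step, high, high + step]

# e.g., a field accepting values 1..31 (a day of month; illustrative):
day_data = boundary_values(1, 31)
# → [0, 1, 2, 30, 31, 32]
```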

When available, data dictionary and detailed design documentation can be very helpful in identifying sample data for use in test procedures. In addition to providing data element names, definitions, and structures, the data dictionary may provide data models, edits, cardinality, formats, usage rules, ranges, data types, and domains. As part of the process of identifying test data requirements, it is beneficial to develop a matrix listing the various test procedures in one column and the test data requirements in another column. Table 7.18 presents such a matrix.

Table 7.18. White-Box (Development-Level) Test Data Definition

image

In addition to defining the requirements for test data, the test team should identify a means of developing or obtaining the necessary test data. When identifying white-box test data sources, it is worthwhile to keep in mind design-related issues, such as the use of arrays, pointers, memory allocations, and decision endpoints. When reviewing white-box (development-level) data concerns, it is also beneficial to be cognizant of possible sources of sample data. Such source documents may clarify issues and questions pertaining to the kind of test data required and may consist of the following items:

• Flow graphs (cyclomatic complexity)

• Data models

• Program analyzers

• Design documents, such as structure charts, decision tables, and action diagrams

• Detail function and system specifications

• Data flow diagrams

• Data dictionaries, which include data structures, data models, edit criteria, ranges, and domains

• Detailed designs, which specify arrays, networking, memory allocation, data/program structure, and decision endpoints

7.3.6.2 Black-Box Test Data Definition

In black-box testing, test data are required that will ensure that each system-level requirement is adequately tested and verified. A review of test data requirements should address several data concerns [11].

• Depth—volume or size of databases

• Breadth—variation of data values and data value categories

• Scope—the accuracy and completeness of the data

• Test execution data integrity—the ability to maintain data integrity

• Conditions—the ability to store particular data conditions

Depth

The test team must consider the volume or size of the database records needed for testing. It should identify whether 10 records within a database or particular table are sufficient or whether 10,000 records are necessary. Early life-cycle tests, such as unit or build verification tests, should use small, hand-crafted databases that offer maximum control and minimal disturbance. As the test effort progresses through the different phases and types of tests, the size of the database should increase to a size that is appropriate for the particular tests. For example, performance and volume tests are not meaningful if the production environment database contains 1,000,000 records but the tests are performed against a database containing only 100 records.
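The scaling of database volume by test phase can be sketched as follows. The record counts per phase and the table layout are illustrative only:

```python
import sqlite3

# Record counts per test phase are illustrative; performance and volume
# tests need volumes representative of the production database.
PHASE_VOLUMES = {"unit": 10, "integration": 1_000, "performance": 1_000_000}

def build_account_db(phase, volumes=PHASE_VOLUMES):
    """Create an in-memory account table sized for the given test phase."""
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE account (id INTEGER PRIMARY KEY, balance REAL)")
    con.executemany("INSERT INTO account VALUES (?, ?)",
                    ((i, 100.0) for i in range(volumes[phase])))
    con.commit()
    return con

con = build_account_db("unit")
(count,) = con.execute("SELECT COUNT(*) FROM account").fetchone()
# count == 10 for the small, hand-crafted unit-test database
```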

Breadth

Test engineers need to investigate the variation of the data values (for example, 10,000 different accounts and a number of different types of accounts). A well-designed test should incorporate variations of test data, as tests for which all data are similar will produce limited results. For example, tests may need to consider the fact that some accounts may have negative balances while others have balances in the low range (hundreds of dollars), moderate range (thousands of dollars), high range (hundreds of thousands of dollars), and very high range (tens of millions of dollars). Tests must also use data that represent an average range.

In the case of a bank, customer accounts might be classified in several ways, such as savings, checking, loans, student, joint, and business.
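A breadth-oriented data set covering the balance ranges and account types above might be sketched as follows; the specific balance values are invented:

```python
import itertools

# Balance values chosen to cover the ranges discussed above (negative, low,
# moderate, high, very high, and an average case); account types follow the
# bank example. All specific values are illustrative.
BALANCES = [-120.0, 450.0, 7_500.0, 350_000.0, 40_000_000.0, 2_300.0]
ACCOUNT_TYPES = ["savings", "checking", "loan", "student", "joint", "business"]

accounts = [
    {"id": i, "type": acct_type, "balance": balance}
    for i, (acct_type, balance) in enumerate(
        itertools.product(ACCOUNT_TYPES, BALANCES))
]
# 6 types x 6 balance values = 36 varied test accounts
```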

Scope

The relevance of the data values must be investigated by the test team. The scope of test data includes considerations of the accuracy, relevance, and completeness of the data. For example, when testing the queries used to identify the various kinds of accounts at a bank that have a balance due greater than $100, not only should there be numerous accounts meeting this criterion, but the tests also need to employ data such as reason codes, contact histories, and account owner demographic data. The inclusion of a complete set of test data enables the test team to fully validate and exercise the system and provides better results. The test engineer would also need to verify that the return of a record by this query indicates a specific condition (more than 90 days past due), rather than a missing or inappropriate value.
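The last point can be illustrated with a small sketch that seeds both qualifying records and records with missing values, then confirms that the query returns only rows reflecting the true condition. The table and column names are invented:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE account
               (id INTEGER PRIMARY KEY, balance_due REAL, days_past_due INTEGER)""")
con.executemany("INSERT INTO account VALUES (?, ?, ?)", [
    (1, 250.0, 120),   # genuinely more than 90 days past due
    (2, 500.0, 95),    # genuinely more than 90 days past due
    (3, None, None),   # missing values -- must NOT appear in results
    (4, 40.0, 30),     # under both thresholds
])

rows = con.execute(
    "SELECT id FROM account "
    "WHERE balance_due > 100 AND days_past_due > 90 ORDER BY id"
).fetchall()
past_due_ids = [r[0] for r in rows]
# NULL comparisons are not true in SQL, so record 3 is excluded: [1, 2]
```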

Test Execution Data Integrity

Another test data consideration involves the need for the test team to maintain data integrity while performing tests. The test team should be able to segregate data, modify selected data, and return the database to its initial state throughout test operations. Chapter 8, in its discussion of the test execution schedule, addresses these test data management concerns. The test team also needs to make certain that when several test engineers are performing tests at the same time, one test will not adversely affect other tests.

Another data integrity concern relates to data used in testing that cannot be accessed through the user interface. This information might include a date value that is updated from another server. Such values and elements should be identified, and a method or resource for setting them found, in cases where the data are readable but not writable. Following test execution, the test team needs to be able to reset the test data set to an initial (baseline) state. Chapter 8 provides more information on this type of activity.
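The refresh-to-baseline step can be sketched as a snapshot-and-restore operation; the class and field names below are illustrative:

```python
import copy

class TestDataSet:
    """Minimal sketch: snapshot a baseline, let tests modify the data, then
    restore the baseline so the next run starts from a known state."""
    def __init__(self, records):
        self._baseline = copy.deepcopy(records)  # snapshot of the initial state
        self.records = records

    def reset(self):
        """Return the data to its initial (baseline) state."""
        self.records = copy.deepcopy(self._baseline)

data = TestDataSet([{"id": 1, "status": "open"}])
data.records[0]["status"] = "closed"   # a test mutates the data
data.reset()                           # refresh to the original baseline
# data.records[0]["status"] is "open" again
```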

Conditions

Another concern pertains to the management of test data intended to reflect specific conditions. For example, health information systems commonly perform a year-end closeout. Storing data in the year-end condition enables the year-end closeout to be tested without actually entering the data for the entire year. When the test team is testing a health information system application for which the year-end closeout function has not yet been implemented as part of an operational system, it would create a set of test data to stand in for the entire year.

As part of the process of identifying test data requirements, it is beneficial to develop a matrix listing the various test procedures in one column and test data requirements in another column. When developing the list of test data requirements, the test team needs to review the black-box (system-level) data concerns mentioned earlier. Table 7.19 depicts such a matrix that cross-references test data requirements to individual test procedures.

Table 7.19. Black-Box (System-Level) Test Data Definition

image

When reviewing black-box data concerns, the test team will need to be cognizant of possible sources of sample data. Such source documents may also clarify issues and questions pertaining to the kind of test data required. Such source documents may include the following items:

• System concept documents (business proposals, mission statements, concept of operations documents)

• System requirements documentation (customer requirements definition, system/software specifications)

• Business rules (functional/business rule documentation)

• Entity relationship diagrams (and other system design documentation)

• Use case scenarios and data flow diagrams (and other business process documentation)

• Event partitioning (and state transition diagrams)

• Data dictionaries (and data element and interface standards, application programming interface documentation)

• Help desk logs (pertaining to existing operational systems)

• User expertise (end-user input)

• Regulations and standards (industry and corporate standards)

• White-box data definition documentation (developed by test team)

Test Data Generators

The test team may wish to look into the use of test data generator tools, which automatically generate data for an application based on a set of rules. These rules may be derived from specifications or database documentation, or they can be manually modified by the test team to fit its particular requirements. Test data generators can quickly produce test data when needed, for example, to support load testing. For more information on test data generators and other test performance and support tools, see Appendix B.
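A rule-driven generator of this kind can be sketched in a few lines; the field rules below are invented rather than derived from any real data dictionary:

```python
import random

# Each rule maps a field name to a generator function. Rules like these could
# be derived from a data dictionary; the fields and ranges here are invented.
RULES = {
    "account_id": lambda rng: rng.randint(100_000, 999_999),
    "balance":    lambda rng: round(rng.uniform(-500.0, 50_000.0), 2),
    "type":       lambda rng: rng.choice(["savings", "checking", "loan"]),
}

def generate_records(n, rules=RULES, seed=0):
    """Produce n records satisfying the field rules, reproducibly."""
    rng = random.Random(seed)  # fixed seed so test runs are repeatable
    return [{name: gen(rng) for name, gen in rules.items()} for _ in range(n)]

sample = generate_records(1_000)
# every record conforms to its rules, e.g. -500.0 <= balance <= 50_000.0
```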

Test Procedure (Case) Generators

Some test procedures may be generated automatically through the use of a test procedure generator. Some test procedure generators, such as StP/T, are highly integrated with analysis and design products and enable developers to test code functionality against system design specifications. Other test procedure generators take documented requirement information from a common repository and create test procedures automatically. Because this generation is automatic, test procedures can be created as soon as application requirements are recorded. For more information, see Appendix B.
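The repository-driven generation described above might be sketched as follows; the requirement record structure and the naming convention are assumptions, not the behavior of any particular generator product:

```python
def generate_procedures(requirements, prefix="TP"):
    """Create a skeleton test procedure for each recorded requirement,
    so that procedures exist as soon as requirements are captured."""
    return [
        {
            "procedure_id": f"{prefix}-{i:04d}",
            "requirement_number": req["id"],
            "name": f"Verify: {req['text']}",
            "status": "not executed",
        }
        for i, req in enumerate(requirements, start=1)
    ]

procs = generate_procedures([
    {"id": "REQ-100",
     "text": "The system shall lock an account after three failed logins."},
])
# procs[0]["procedure_id"] == "TP-0001"
```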

Chapter Summary

• An effective test program incorporating the automation of software testing involves a mini-development life cycle of its own, complete with strategy and goal planning, test requirements definition, analysis, design, and coding.

• Similar to the process followed in software application development, test requirements must be specified before a test design is constructed. Test requirements need to be clearly defined and documented, so that all project personnel will understand the basis of the test effort. Test requirements are defined within requirements statements as an outcome of test requirements analysis.

• Much as in a software development effort, the test program must be mapped out and consciously designed to ensure that the test activities performed represent the most efficient and effective tests for the target system. Test program resources are limited, yet ways of testing the system are endless. A test design that graphically portrays the test effort will give project and test personnel a mental framework for the boundary and scope of the test program.

• Following test analysis, the test team develops the test program design models. The first of these design models, the test program model, consists of a graphic illustration that depicts the scope of the test program. This model typically shows the test techniques required for the dynamic test effort and outlines static test strategies.

• Having defined a test program model, the test team constructs a test architecture, which depicts the structure of the test program and defines the organization of test procedures.

• The structure of the test program (test architecture) is commonly portrayed in two different ways. One test procedure organization method, known as a design-based test architecture, logically groups test procedures with the system application design components. A second method, known as a technique-based test architecture, associates test procedures with the various kinds of test techniques represented within the test program model.

• An understanding of test techniques is necessary when developing test designs and test program design models. Test personnel need to be familiar with the test techniques associated with the white-box and black-box test approach methods. White-box test techniques are aimed at exercising the software program’s internal workings, while black-box techniques generally compare the application’s behavior with requirements via established, public interfaces.

• When selecting white-box and black-box test techniques as part of the development of the overall test program design, it is beneficial to be familiar with the kinds of test tools that are available to support the development and performance of related test procedures.

• The exercise of developing the test procedure definition not only aids in test development, but also helps to quantify or bound the test effort. The development of the test procedure definition involves the identification of the suite of test procedures that will ultimately need to be created and executed. The design exercise involves the organization of test procedures into logical groups and the definition of a naming convention for the suite of test procedures.

• At the system level, it may be worthwhile to develop a detailed test design for sophisticated tests. These tests might include test procedures that perform complex algorithms, procedures that consist of both manual and automated steps, and test programming scripts that are modified for use in multiple test procedures. The first step in the detailed design process is to review the test procedure definition at the system test level. This review enables the test team to identify those test procedures that stand out as being more sophisticated and should therefore be defined further as part of detailed test design.

• Detailed test design may take the form of test program pseudocode, when test programming is required. The detailed design may be represented simply as a sequence of steps that need to be performed during testing. When programming variables and multiple data values are involved, the detailed design may include a loop to carry out an iterative series of tests involving different values plus a list or table identifying the kinds of data or ranges of data required for the test.

• After the detailed test design is complete, test data requirements need to be mapped against the defined test procedures. Once test data requirements are outlined, the test team needs to plan the means for obtaining, generating, or developing the test data.

References

1. Book examples include Boris Beizer’s Software Testing Techniques and John Joseph Chilenski’s Applicability of Modified Condition/Decision Coverage to Software Testing, to mention only a few.

2. Software Considerations in Airborne Systems and Equipment Certification. RTCA SC-167, EUROCAE WG-12, Washington, DC: RTCA, 1992.

3. Myers, G.J. The Art of Software Testing. New York: John Wiley and Sons, 1979.

4. Ibid.

5. Ibid.

6. Ntafos, S. “A Comparison of Some Structural Testing Strategies.” IEEE Transactions on Software Engineering 1988; 14:868–874.

7. Beizer, B. Software Testing Techniques, 2nd ed. New York: Van Nostrand Reinhold, 1990.

8. See note 7.

9. Adapted from Myers, G.J. The Art of Software Testing. New York: John Wiley and Sons, 1979.

10. Adapted from Myers, G.J. The Art of Software Testing. New York: John Wiley and Sons, 1979.

11. Adapted from SQA Suite Process, January 1996. (See www.rational.com.)
