Chapter 3. Automated Test Tool Evaluation and Selection

If the only tool you have is a hammer, you tend to see every problem as a nail.

Abraham Maslow


Frequently, the selection of an automated test tool takes place long after the development platform and development tools have been determined. In an ideal situation, the organization’s test team would be able to select a test tool that fits the criteria of the organization’s systems engineering environment, as well as a pilot project that is in the early stages of the system development life cycle. In reality, a project often has a detailed system design in place before the concern for software testing is addressed.

Regardless of the particular situation, test tool cost and the formal and on-the-job training for the tool received by test team personnel represent an investment by the organization. Given this fact, the selected tool should fit the organization’s entire systems engineering environment. This approach allows the entire organization to make the most use of the tool. To accomplish this goal, the test team needs to follow a structured approach for performing test tool evaluation and selection.

This chapter systematically steps the test engineer through pertinent evaluation and selection criteria. Figure 3.1 provides a high-level overview of the automated test tool selection process. As a test engineer concerned with the selection of an automated test tool, you should have followed the process for supporting a decision to automate testing outlined in Chapter 2. Specifically, you should have developed a test tool proposal for management that outlined the test tool requirement and the justification for the tool. Test tool proposal development and its acceptance by management are intended to secure management’s commitment for the resources needed to properly implement the test tool and support the automated testing process.

Figure 3.1. Automated Test Tool Selection Process


Once management has approved the proposal and a commitment for resources has been obtained, the test engineer needs to take a methodical approach toward identifying the tool best suited for the situation. He or she must review the organization’s systems engineering environment or, when not feasible, review the systems engineering environment for a particular project, as outlined in Section 3.1. In this way, the test engineer becomes familiar with the system and software architectures for each of the various projects within the organization. Next, the test engineer defines the criteria for a tool evaluation domain based upon review of the system and software architectures supported by the defined system engineering environment.

The test engineer then identifies which of the various test tool types might apply to a particular project. Section 3.2 outlines the types of automated test tools available to support the testing effort throughout the various development life-cycle phases. The test engineer must assess the automated test tools available on the market, ascertaining which can support the organization’s systems engineering environment. A determination needs to be made whether the defined system requirements can be verified through the use of one or more test tools.

Next, the test engineer matches the test tool requirements with the types of test tools available so as to derive a list of candidate test tools. When one or more test tools exist within a single test type category, the test engineer needs to determine which of the candidate tools is the best option. He or she scores the candidate test tools using functional evaluation criteria, which reflect defined tool requirements. A sample evaluation form is provided in Section 3.3.
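The scoring step can be sketched as a simple weighted matrix. The criteria, weights, and candidate scores below are hypothetical placeholders for illustration; in practice they come from the documented tool requirements and the completed evaluation forms.

```python
# Hypothetical evaluation criteria and weights (illustrative only; real
# weights should be derived from the documented tool requirements).
criteria = {"scripting language": 0.30, "platform support": 0.25,
            "reporting": 0.20, "ease of use": 0.15, "price": 0.10}

# Illustrative 1-5 scores for two fictitious candidate tools.
candidates = {
    "Tool A": {"scripting language": 5, "platform support": 3,
               "reporting": 4, "ease of use": 4, "price": 2},
    "Tool B": {"scripting language": 3, "platform support": 5,
               "reporting": 3, "ease of use": 5, "price": 4},
}

def weighted_score(scores):
    # Sum of each criterion score multiplied by its weight.
    return sum(weight * scores[name] for name, weight in criteria.items())

ranking = sorted(candidates, key=lambda c: weighted_score(candidates[c]),
                 reverse=True)
for name in ranking:
    print(f"{name}: {weighted_score(candidates[name]):.2f}")
```

Note that the weights encode organizational priorities: a tool that is weaker on its strongest individual criterion can still rank first overall, which is exactly why the criteria should be agreed upon before any candidate is scored.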

Once one or more automated test tools have been chosen to support the organization’s test efforts, the test engineer must identify an evaluation domain or a pilot project with which to apply the tool and develop some experience and lessons learned. Section 3.5 outlines evaluation domain and pilot project considerations. Once the technical environment in which the automated test tool will operate has been defined, the test engineer can work with test tool candidate vendors. He or she should request an evaluation copy of the software, as described in Section 3.5. Pricing, support, and maintenance information on the test tool can also be obtained and reviewed.

After screening the test tool for its functional capabilities and reviewing the pricing, support, and maintenance information, the test engineer should conduct a hands-on evaluation of the tool in an isolated test environment and produce an evaluation report, as outlined in Section 3.5. Provided that the test tool performs satisfactorily in the isolated test environment, it can then be installed to support the pilot project. The process for automated test tool selection may be an iterative one, as a particular test tool selection could fail at numerous decision points, requiring the test engineer to reenter the selection process with a different tool.

Section 2.3 outlined the development process for a test tool proposal, which is aimed at persuading management to release funds to support automated test tool procurement and automated test tool training. The test tool proposal should have estimated the cost of the resources needed to research and evaluate the tool and to procure the tool. At this stage, it is assumed that management has approved the proposal and is aware of the benefits of planned test tool implementation. In addition, management should have set aside appropriate funding to support the test tool implementation and should be visibly supportive of automated test efforts.

3.1 The Organization’s Systems Engineering Environment

Once management is committed to providing the required resources, the test engineer reviews the organization’s systems engineering environment. He or she will want to ensure that the tool is compatible with as many operating systems, programming languages, and other aspects of the technical environment used within the organization as possible. The test engineer reviews the organization’s system engineering environment by addressing the questions and concerns described in this section and documenting his or her findings.

3.1.1 Third-Party Input from Management, Staff, and End Users

It is valuable to survey management, project staff, and end-user customers to understand their expectations pertaining to automated testing. Such a survey allows the test engineer to ascertain and document the functionality that needs to be provided by the test tool. This activity ensures that all personnel potentially involved understand and support the automated test tool’s requirements and goals.

The following questions should be addressed as part of a data-gathering exercise, ensuring that the desired tool functionality is defined adequately:

• How will the tool be used within the organization?

• Will other groups and departments use the tool?

• What is the most important function of the tool?

• What is the least important function of the tool?

• How will the tool mainly be used?

• How portable must the tool be?

During the survey, the test engineer identifies the database architecture and technical application architecture, including any middleware, databases, and operating systems most commonly used within the organization or on a particular project. The test engineer also identifies the languages used to develop the GUI for each application. Additionally, he or she needs to gain an understanding of the detailed architecture design, which can affect performance requirements. A review of prevalent performance requirements—including performance under heavy loads, intricate security mechanisms, and high measures of availability and reliability for a system—is beneficial. In particular, the test engineer should inquire about whether a majority of the applications support mission-critical operations.

The test team needs to understand the data managed by the application under test and define how the automated test tool supports data verification concerns. A primary purpose of most applications is the transformation of data into meaningful information. The test team should understand how this transformation takes place for the application, so that test strategies can be defined to support data verification and to validate that this transformation occurs correctly.

From the survey, the test engineer obtains an overall idea of the organization’s engineering environment, so that the organization’s tool needs can be properly identified. If the organization develops mainframe and client-server applications, but most of the system’s trouble reports pertain to client-server applications, then the test tool requirements should focus on the client-server environment. If the organization’s client-server applications experience primarily performance problems, then test tool selection efforts should concentrate on performance-monitoring tools.

The test team responsible for implementing a test tool must also account for its own expectations. As a single tool will not generally satisfy all organizational test tool interests and requirements, the tool should at a minimum satisfy the more immediate requirements. As the automated test tool industry continues to evolve and grow, test tool coverage for system requirements is likely to expand and the chance that a single tool can provide most desired functionality may improve. At present, one tool might be perfect for GUI testing, another tool might be needed for performance testing, and a third tool might support unit testing. Thus the use of multiple test tools needs to be considered, and expectations for the use of the tools must be managed. Likewise, the significance of tool integration should be assessed. The test engineer needs to document the inputs received from the test tool survey and compare these inputs with the features of the tools being considered.

The test tools applied to an automated test effort should accurately reflect customer quality concerns and priorities. The test engineer needs to clearly state customers’ needs and expectations to ensure that the proper breadth and scope of testing can be performed. Some customers may wish to apply the test tool used during system testing to the acceptance test effort.

3.1.2 Tool Criteria Reflecting the Systems Engineering Environment

Next, the test engineer should review the system and software architectures that pertain to the majority of the projects and environments within the organization (depending on the organization’s size). Different projects and environments may have different sets of test goals and objectives, and many types of tests can be performed. To which phase of the software development life cycle does the test team wish to apply automated testing? How will the test tool be used in the organization? Is there interest in a requirements management tool that could be applied during the requirements definition phase? Is there interest in performing usability testing during the design phase?

Section 3.2 describes the various types of test tools that are available for the different system development phases. The test engineer must identify which types of test tools are the most applicable. Ideally, an automated tool is used in support of each phase of the software development life cycle. It is generally not the test team’s responsibility to identify an automated tool to support each software engineering activity; rather, a software engineering manager or a process improvement group takes on this responsibility. Preferably, the tools will be integrated, allowing the output of one tool to be used as input to another tool.

With regard to the review of system requirements, several questions need to be posed. Do you want to concentrate the automated test effort during the development phase? Do you need a tool that supports memory leak testing? A memory leak tool analyzes a running program to find code that allocates memory but never releases it. Some tools support module complexity analysis—that is, they map the interdependencies among program modules in a given system. This capability may be useful in identifying computer programs that need to be restructured or separated into two or more programs. Complexity information may also be helpful in identifying program code that warrants more detailed code inspection. Again, these types of issues might already have been identified in an organizational needs analysis.
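The class of defect a memory leak tool hunts for can be illustrated with Python’s standard tracemalloc module. The ever-growing cache below is a deliberately planted leak for demonstration, not a pattern taken from any particular tool:

```python
import tracemalloc

cache = []  # module-level list that is appended to but never pruned

def handle_request(payload):
    # Planted defect: every request permanently retains memory,
    # because nothing ever evicts entries from the cache.
    cache.append(payload * 100)

tracemalloc.start()
baseline = tracemalloc.get_traced_memory()[0]
for _ in range(1000):
    handle_request("x")
growth = tracemalloc.get_traced_memory()[0] - baseline
tracemalloc.stop()
print(f"retained after 1,000 requests: {growth} bytes")  # keeps climbing
```

A dedicated memory leak tool performs the same kind of allocation tracking continuously and attributes the retained memory to the source lines responsible, which is what makes the defect actionable for the developer.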

The test team will generally concentrate its review on those test tools that have the greatest applicability to the testing phase. It is important to analyze the need for tools that support regression, stress, or volume testing. The test engineer needs to understand the importance of system performance requirements and determine which technical criteria are most important during the testing phase. These criteria might include the ability to maintain software test scripts and the ability to perform regression testing quickly and thoroughly. The level of sophistication inherent within an automated test tool is an important factor to consider when selecting a tool. (Questions and concerns pertaining to test tool sophistication are outlined later in this chapter in Table 3.2).

The specific selection criteria for any given effort will depend on the applicable system requirements of the target applications. If the test team may be able to take advantage of one or more automated test tools across several projects or applications, then it might be necessary to narrow the selection criteria to the most significant applications under test within the organization. Ideally, you should not limit the automated test tool selection criteria to a single project. Such a constraint may lead to an investment that is useful for that project alone, with the test tool becoming shelfware after the immediate project has been completed.

3.1.3 Level of Software Quality

When considering an automated test tool, the test engineer should define the level of software quality expected for the project and determine which aspects of the software development are the most crucial to a particular project or effort. He or she should inquire about whether the organization is seeking to comply with industry quality guidelines, such as ISO 9000 or the Software Engineering Institute’s Capability Maturity Model (CMM). The test engineer should also gain insight into the size of the planned application development effort. If the application development effort involves, for example, five full-time developers over the duration of the project, then extensive use of a variety of test tools would likely prove too expensive for a project of that size. For projects involving as many as 30 or more developers, however, the size and complexity of the effort would justify the use of a greater variety of test tools.

The test engineer also needs to define the crucial aspects of the organization’s primary applications. A high level of software quality would be critical, for example, for a company that develops patient monitors, the electronic devices that monitor the physiological parameters (ECG, heart rate, blood pressure, oxygen saturation) of critically ill patients in real time. On the other hand, the quality criteria for an organization that develops noncritical software would not be as extensive. At financial institutions, which employ systems that manage the flow of millions of dollars each day, high availability of hardware and software is a critical concern.

3.1.4 Help Desk Problem Reports

When an application or version of an application is in operation, the test team can monitor help desk trouble reports so as to review the history of the most prevalent problems of the application. If a new version of the application is being developed, the team can focus its effort on the most prevalent problems of the operational system, identifying a test tool that supports this kind of testing.

3.1.5 Budget Constraints

When the test engineer has obtained management commitment for a test tool and must satisfy a large number of organizational test tool requirements on a limited budget, he or she must be both selective and cost-conscious when looking for one or more tools to support requirements. The test engineer may need to purchase a single tool that meets a majority of the requirements or one that best satisfies the most important requirements.

3.1.6 Types of Tests

As many types of tests can be performed on any given project, it is necessary to review the types of testing of interest. The types of tests to consider include regression testing, stress or volume testing, and usability testing. What is the most important feature needed in a tool? Will the tool be used mainly for stress testing? Some test tools specialize in source code coverage analysis; that is, they identify all possible source code paths that need to be verified through testing. Is this capability required for the particular project or set of projects? Other test tool applications might include support for process automation or bulk data loading through input files. Consider what the test team is trying to accomplish with the test tool. What is the goal? What functionality is desired?
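The core idea behind source code coverage analysis, recording which statements actually execute while the tests run, can be sketched with Python’s sys.settrace hook. Commercial coverage tools are far more sophisticated (branch and path analysis, report generation), but the principle is the same:

```python
import dis
import sys

def classify(n):
    if n < 0:
        return "negative"
    if n == 0:
        return "zero"
    return "positive"

# Every line that carries executable code in classify.
all_lines = {line for _, line in dis.findlinestarts(classify.__code__)
             if line is not None}
executed = set()

def tracer(frame, event, arg):
    # Record each line event raised inside classify during the test run.
    if event == "line" and frame.f_code is classify.__code__:
        executed.add(frame.f_lineno)
    return tracer

sys.settrace(tracer)
classify(5)            # a single test that exercises only one path
sys.settrace(None)

missed = all_lines - executed
print(f"line coverage: {len(executed)}/{len(all_lines)}, "
      f"missed lines: {sorted(missed)}")
```

Running only `classify(5)` leaves the "negative" and "zero" branches unexecuted, which is precisely the gap a coverage analyzer exposes: passing tests are not the same as thorough tests.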

3.1.7 Long-Term Investment Considerations

The use of one or more automated test tools should be contemplated over a period longer than a single year; the tool should therefore be viewed as a long-term investment, involving multiple tool upgrades. Test tool selection criteria should include the staying power of the tool. Who is the vendor? Does the product have a good track record? What is its industry acceptance? Ideally, the test tool vendor should be available to answer questions and should provide regular upgrades to keep pace with technological development.

Another element to consider is the potential for the tool to be more widely used within the organization. This issue represents another reason why the test engineer should examine the entire organization’s system engineering make-up.

3.1.8 Test Tool Process

When evaluating a test tool, keep in mind that the test team will need to introduce the test tool within the organization. Thus the test engineer needs to verify that management is willing to commit adequate resources to support the test tool introduction process. Provided enough room exists in the project schedule to permit the introduction of an appropriate test tool for the organization, the test team needs to ensure that the tool is implemented in a manner that promotes its adoption. After all, if no one in the organization is using the tool, the effort to obtain and incorporate the tool will have been wasted. An appropriate test tool introduction process should be followed, as outlined in Chapter 4, and all stakeholders in the tool should receive education on the use of the tool and become involved in its implementation.

3.1.9 Avoiding Shortcuts

To determine the test tool requirements, the test engineer will have to address additional questions such as, “Will I be supporting a large testing effort?” Again, when considering the requirements for a project involving a small testing effort, it is beneficial to consider the future application of the test tool on other projects. Remember to think of test tool selection as a long-term investment!

Another concern when selecting a test tool is its impact and fit with the project schedule. Will there be enough time for the necessary people to learn the tool within the constraints of the schedule? Given a situation where the project schedule precludes the introduction of an appropriate test tool for the organization, it may be advisable not to introduce an automated test tool. By postponing the introduction of a test tool until a more opportune time, the test team may avoid the risk of rushing the launch of the right tool on the wrong project or choosing the wrong tool for the organization. In either case, the test tool likely will not be received well, and potential champions for the use of automated test tools may become their biggest opponents.

3.2 Tools That Support the Testing Life Cycle

When performing an organizational improvement or needs analysis, it is important to become familiar with the different types of tools available on the market. This section provides an overview of tools that support the various testing life-cycle phases. The discussion is not intended to be comprehensive; instead, it presents a sample set of tools that can improve the testing life cycle. Table 3.1 lists tools that support each phase of the testing life cycle. In addition to testing tools, other tools are included in the table because they support the production of a testable system. Even though some tools are used throughout various phases (such as defect tracking tools, configuration management tools, and test procedure generation tools), the table lists each tool only under the first phase in which it is used. Appendix B provides examples and details of the tool types listed here, together with other sources of tool information.

Table 3.1. Test Life-Cycle Tools


The tools listed in Table 3.1 are considered valuable in improving the testing life cycle. Before an organization selects a particular tool to purchase, however, it should conduct a needs/improvement analysis. The organization needs to determine which tools could be most beneficial in improving the system development process. This assessment is made by comparing the current process with a target process, evaluating potential improvement indicators, and conducting a cost-benefit analysis. Before a tool is purchased to support a software engineering activity, an evaluation should be performed for that tool, similar to the automated test tool evaluation process described in this chapter.

3.2.1 Business Analysis Phase Tools

Numerous tools on the market support the business analysis phase. Some tools support various methodologies, such as the Unified Modeling Language (UML). Other business analysis tools have the ability to record process improvement opportunities and allow for data organization capabilities, thus improving the testing life cycle. For more details on each of the tools listed in this phase, refer to Appendix B.

3.2.1.1 Business Modeling Tools

Business modeling tools support the creation of process models, organization models, and data models. They allow for recording definitions of user needs and for automating the rapid construction of flexible, graphical client-server applications. Some business modeling tools integrate with other phases of the system/testing life cycle, such as the data modeling phase, design phase, programming phase, and testing and configuration management phase. These tools can be very valuable in supporting the testing effort. Using these tools correctly and efficiently will enhance process modeling throughout the system life cycle, while simultaneously supporting the production of testable systems.

3.2.1.2 Configuration Management Tools

Configuration management tools should be deployed early in the life cycle, so as to manage change and institute a repeatable process. Although this tool category is depicted as part of the business analysis phase in Table 3.1, configuration management tools are actually used throughout the entire life cycle. The final outputs of each system life-cycle phase should be baselined in a configuration management tool.

3.2.1.3 Defect Tracking Tools

Just as it is important to use configuration management tools throughout the testing life cycle, so it is also important to use defect tracking tools from the beginning and throughout the testing life cycle. All defects or software problem reports encountered throughout the system life cycle should be documented and managed to closure. The identification of defects is the primary goal of testing and quality assurance activities. For defects to be removed successfully from the application, they must be identified and monitored until their elimination.
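"Managed to closure" is typically enforced by the tracking tool as a state machine over the defect’s life cycle: the tool refuses any transition the workflow does not allow. The states and transitions below are a hypothetical minimal workflow, not the scheme of any specific product:

```python
# Hypothetical defect states and the transitions a tracking tool permits.
TRANSITIONS = {
    "new": {"open", "rejected"},
    "open": {"fixed"},
    "fixed": {"retest"},
    "retest": {"closed", "open"},  # reopened when the fix fails retest
    "rejected": set(),
    "closed": set(),               # terminal: the defect is eliminated
}

class Defect:
    def __init__(self, summary):
        self.summary = summary
        self.state = "new"
        self.history = ["new"]

    def move(self, new_state):
        # Refuse transitions the workflow does not allow, so no defect
        # can silently skip verification on its way to closure.
        if new_state not in TRANSITIONS[self.state]:
            raise ValueError(f"illegal transition: {self.state} -> {new_state}")
        self.state = new_state
        self.history.append(new_state)

d = Defect("login accepted with empty password")
for step in ("open", "fixed", "retest", "closed"):
    d.move(step)
print(d.history)  # ['new', 'open', 'fixed', 'retest', 'closed']
```

Because a defect cannot jump from "fixed" straight to "closed", every reported problem is forced through a retest step, which is what gives the team confidence that closure means elimination rather than neglect.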

3.2.1.4 Technical Review Management Tools

One of the best defect detection strategies relies on technical reviews or inspections, because such reviews allow defects to be discovered early in the life cycle. Reviews and inspections represent a formal evaluation technique applicable to software requirements, design, code, and other software work products. They entail a thorough examination by a person or a group other than the author.

Technical review management tools allow for automation of the inspection process, while facilitating communication. They also support automated collection of key metrics, such as action items discovered during a review.

3.2.1.5 Documentation Generation Tools

Documentation generation tools can simplify the testing life cycle by reducing the effort required to manually produce software documentation. Again, they are helpful throughout the entire testing life cycle.

3.2.2 Requirements Definition Phase Tools

Software needs to be assessed relative to an understanding of what the software is intended to do. This understanding is encapsulated within requirements specifications or use case definitions. The quality of defined requirements can make the test (and, of course, development) effort relatively painless or extremely arduous.

When a requirements specification contains all of the information needed by a test engineer in a usable form, the requirements are said to be test-ready. Testable requirements minimize the effort and cost of testing. If requirements are not test-ready or testable, test engineers must search for missing information—a long, tedious process that can significantly prolong the test effort. See Appendix A for more information on testable requirements.

3.2.2.1 Requirements Management Tools

Requirements management tools permit requirements to be captured quickly and efficiently. Requirements can be recorded in a natural language, such as English, using a text editor within a requirements management tool. They can also be written in a formal language such as LOTOS or Z [1] using a syntax-directed editor. Additionally, requirements can be modeled graphically, using tools such as Validator/Req (see Appendix B for more details on this tool).

One method of modeling requirements involves use cases. The use case construct defines the behavior of a system or other semantic entity without revealing the entity’s internal structure. Each use case specifies a sequence of actions, including variants, that the entity can perform by interacting with actors of the entity.

Many requirements management tools support information traceability. Traceability involves more than just tracking requirements. Linking all information together—such as links forged between requirements/use cases and either design, implementation, or test procedures—is a critical factor in demonstrating project compliance and completeness. For example, the designer might need to trace the design components to the detailed system requirements, while the developer needs to trace the code components to the design components. The test engineer needs to trace the system requirements to test procedures, thereby measuring how much of the test procedure creation is complete. Requirements management tools can also automatically determine which test procedures are affected when a requirement is modified.
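At its core, such traceability is a set of links from each requirement to its downstream artifacts. The identifiers and file names below are invented for illustration; a real requirements management tool maintains these links in its own repository and keeps them current as artifacts change:

```python
# Hypothetical traceability links from requirements to downstream artifacts.
links = {
    "REQ-101": {"design": ["DES-3"], "code": ["login.c"],
                "tests": ["TP-17", "TP-18"]},
    "REQ-102": {"design": ["DES-4"], "code": ["report.c"],
                "tests": ["TP-21"]},
    "REQ-103": {"design": ["DES-5"], "code": [],
                "tests": []},
}

def affected_tests(requirement_id):
    """Test procedures that must be revisited when this requirement changes."""
    return links[requirement_id]["tests"]

def untested_requirements():
    """Requirements with no linked test procedure: a completeness gap."""
    return [r for r, artifacts in links.items() if not artifacts["tests"]]

print(affected_tests("REQ-101"))   # ['TP-17', 'TP-18']
print(untested_requirements())     # ['REQ-103']
```

The second query is the traceability payoff for the test engineer: requirements with no linked test procedure are exactly the measure of how much test procedure creation remains incomplete.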

In addition, requirements management tools support information management. Regardless of whether the project involves software, hardware, firmware, or a process, data must be managed and traced to ensure compliance with requirements through all phases of development. Tools such as these can be used to support lessons learned review activities (discussed in Chapter 10 in detail) and to facilitate the management of issues and defects.

3.2.2.2 Requirements Verifiers

Requirements recorders are well-established tools that continue to be updated with new features and methods, such as use cases. Requirements verifiers [2], on the other hand, are relatively new tools. Before requirements verifiers appeared on the market, recorded requirements information could be checked in two ways: (1) by using another function, available in some requirements analysis tools, to verify that information conformed to certain methodology rules, or (2) by performing manual reviews on the information. Neither of these verifications, however, could assure that the requirements information represented a testable product.

To be testable, requirements information must be unambiguous, consistent, quantifiable, and complete. A term or word in a software requirements specification is unambiguous if it has one, and only one, definition. A requirements specification is consistent if each of its terms is used in one, and only one, way. Consider, for example, the word report. Within a requirements specification, report must be used as either a noun or a verb. To use the word report as both a noun and a verb, however, would make the specification inconsistent.

From a test engineer’s point of view, completeness means that the requirements contain necessary and sufficient information for testing. Every action statement must have a defined input, function, and output. Also, the tester needs to know that all statements are present. If any statement is incomplete or if the collection of statements known as the requirements specification is incomplete, testing will be difficult. Worse, some organizations have no requirement specifications, making testing impossible. For more details on testable requirements, see Appendix A.

Requirements verifiers quickly and reliably check for ambiguity, inconsistency, and statement completeness. An automated verifier, however, cannot determine whether the collection of requirement statements is complete. The tool can check only what is entered into it—not what should be entered. Checking the completeness of the requirements specification must therefore be performed manually.
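A tiny illustration of the consistency check described above, using deliberately crude part-of-speech heuristics (a word after an article is treated as a noun, a word after a modal verb as a verb). A commercial verifier’s linguistic analysis is far more thorough; this sketch only shows the principle:

```python
import re

def check_consistency(statements):
    """Flag terms used both as a noun (after an article) and as a verb
    (after 'shall'/'must'/'will') -- a simplified stand-in for a
    requirements verifier's consistency check."""
    roles = {}  # term -> set of grammatical roles observed
    for s in statements:
        for m in re.finditer(r"\b(the|a|an)\s+(\w+)", s, re.IGNORECASE):
            roles.setdefault(m.group(2).lower(), set()).add("noun")
        for m in re.finditer(r"\b(shall|must|will)\s+(\w+)", s, re.IGNORECASE):
            roles.setdefault(m.group(2).lower(), set()).add("verb")
    return [term for term, seen in roles.items() if len(seen) > 1]

reqs = [
    "The system shall report all failed logins.",
    "The report shall include a timestamp.",
]
print(check_consistency(reqs))  # flags 'report': used as both noun and verb
```

Even this toy check catches the report example from the text automatically, which is why verifiers are quicker and more reliable than manual review for this class of defect.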

Most testing and development tools used later in the software life cycle depend on the availability of reliable requirements information, so a requirements verifier can prove valuable in creating this sound foundation. Unlike most test tools, which are packaged separately, requirements verifiers are usually embedded within other tools.

3.2.3 Tools for the Analysis and Design Phase

A requirements specification defines what a software system is expected to do, while the design phase determines how these requirement specifications will be implemented.

3.2.3.1 Visual Modeling Tools

Visual modeling tools, which are used during the business analysis phase, may also prove helpful during the design phase. These tools, such as Rational Rose, allow developers to define and communicate a software architecture. They accelerate development by improving communication among team members, improve quality by mapping business processes to software architecture, and increase visibility and predictability by making critical design decisions visually explicit.

Additionally, the information gathered via the requirements management tools during the requirements phase can be reused in the design phase. A detailed design is generated from these detailed systems requirements, use cases, and use case diagrams. In addition to deploying the tools classified under earlier system life-cycle phases during the design phase, specific database design tools and application design tools can enhance the design effort and thus the testing phase. Structure charts, flowcharts, and sequence diagrams may support process management. For more details on these types of tools, see Appendix B.

3.2.3.2 Test Procedure Generators

The requirements management tools discussed in Section 3.2.2 may be coupled with a specification-based test procedure (case) generator. A requirements management tool will capture requirements information, which the generator then processes to create test procedures. A test procedure generator creates test procedures by statistical, algorithmic, or heuristic means. In statistical test procedure generation, the tool chooses input structures and values to form a statistically random distribution, or a distribution that matches the usage profile of the software under test. In algorithmic test procedure generation, the tool follows a set of rules or procedures, commonly called test design strategies or techniques. Most often test procedure generators employ action-, data-, logic-, event-, and state-driven strategies. Each of these strategies probes for a different kind of software defect. When generating test procedures by heuristic or failure-directed means, the tool employs information from the test engineer, such as failures that the test team discovered frequently in the past. The tool then becomes knowledge-based, using the knowledge of historical failures to generate test procedures.
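The statistical strategy can be sketched in a few lines: draw test steps so that the distribution of operations matches an observed usage profile. The profile and operation names below are hypothetical, and real generators also derive the input structures themselves, not just the operation mix:

```python
import random

# Hypothetical usage profile: operation -> observed frequency in production.
usage_profile = {"query": 0.70, "update": 0.20, "delete": 0.10}

def generate_procedures(n, seed=42):
    """Draw n test steps whose operation mix matches the usage profile --
    the 'statistical' generation strategy described in the text."""
    rng = random.Random(seed)  # fixed seed: reproducible procedures
    ops = list(usage_profile)
    weights = [usage_profile[op] for op in ops]
    return [{"step": i + 1,
             "operation": rng.choices(ops, weights)[0],
             "record_id": rng.randint(1, 1000)}
            for i in range(n)]

steps = generate_procedures(1000)
freq = {op: sum(s["operation"] == op for s in steps) / len(steps)
        for op in usage_profile}
print(freq)  # frequencies close to the profile, e.g. query near 0.70
```

Because generation is driven by data rather than hand-authoring, changing the profile (or, in a real tool, the captured requirements) regenerates the procedures automatically, which is the source of the revision-time savings noted below.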

In the past, test engineers primarily focused on the creation and modification of test procedures. Coming up with test procedures was a slow, expensive, and labor-intensive process. If one requirement changed, the tester had to redo many existing test procedures and create new ones. With modern test procedure generators, however, test procedure creation and revision time can be reduced to a matter of CPU seconds [3].

These types of tools can be used during the design and development phases and during the testing phases. For example, Interactive Development Environments (IDE) has announced that a version of its StP/T tool will be made available to software development teams. This tool allows developers to automatically test software code against the functionality specified in the analysis and design stages. IDE’s test procedure generation tool links the testing process to the initial analysis and design. As a result, developers can create test-ready designs from the start, thereby slashing the time and costs associated with multiple design iterations. Through its link to analysis and design tools, StP/T builds test cases directly from specifications outlined in the application’s data and object models.

3.2.4 Programming Phase Tools

During the business analysis and design phases, the modeling tools used often allow code generation from the models created in the earlier phases. If these setup and preparation activities have been carried out correctly, programming is simplified. Programmers should follow standards and convey much of what a system can do through the judicious choice of names for methods, functions, classes, and other structures. They should also include extensive preambles or comments in their code that describe and document the purpose and organization of the program. In addition, application developers should create program logic and algorithms that support code execution.

The preambles, algorithms, and program code can serve as inputs to testing tools during the test automation development phase. These inputs make it easier for a test engineer to design a test. Preambles may be envisioned as requirements descriptions for the small units of software code that the programmer will develop. The test tools employed during the requirements phase may be reused to test these units of software code.

Tools such as the metrics reporter, code checker, and instrumentor may also support testing during the programming phase. Sometimes these tools are classified as static analysis tools, because they check code while it is not being executed and is in a static state. They are discussed in Section 3.2.5.

3.2.4.1 Syntax Checkers/Debuggers

Syntax checkers and debuggers are usually bundled within a high-level language compiler. These tools are important in improving the testability of software during the programming phase. Debugging features include setting breakpoints and stop conditions that halt the executing program, viewing source code while the program is stopped, and inspecting and modifying program variables.

3.2.4.2 Memory Leak and Runtime Error Detection Tools

Memory leak and runtime error detection tools can check for memory leaks by showing where memory has been allocated but no pointers to it remain; such memory can never be used or freed. Runtime errors may be detected in third-party libraries, shared libraries, and other code. These tools can also identify problems such as uninitialized local variables, stack overflow errors, and static memory access errors, to name just a few, and are very beneficial additions to the testing life cycle.
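The snapshot-and-diff idea behind these tools can be illustrated with Python's built-in tracemalloc module; commercial C/C++ detectors work at a much lower level, but the principle of flagging allocations that grow without a matching release is the same. The leak below is deliberate:

```python
import tracemalloc

# Sketch of the allocation-tracking idea behind leak detectors.
tracemalloc.start()
before = tracemalloc.take_snapshot()

leaked = []
for _ in range(1000):
    leaked.append(bytearray(1024))  # allocations we "forget" to release

after = tracemalloc.take_snapshot()
# Diff the snapshots: growth with no matching release suggests a leak,
# and the traceback points at the allocating line.
stats = after.compare_to(before, "lineno")
top = stats[0]
print(f"suspect: {top.traceback}, grew by {top.size_diff} bytes")
```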

3.2.4.3 Source Code Testing Tools

An early code checker test tool, called LINT, was provided to application developers as part of the UNIX operating system. LINT is still available in today’s UNIX systems. Many other code checkers are now offered to support other operating systems as well.

The name “LINT” was aptly chosen, because the code checker goes through code and picks out all the “fuzz” that makes programs messy and error-prone. This type of tool looks for misplaced pointers, uninitialized variables, and deviations from standards. Application development teams that utilize software inspections as an element of static testing can minimize the static test effort by invoking a code checker to identify minor mechanical problems prior to each inspection [4]. Code checkers such as Abraxas Software’s Codecheck measure maintainability, portability, complexity, and standards compliance of C and C++ source code.
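One of the checks such a tool performs, flagging variables that are assigned but never read, can be sketched with a short script. This is a toy illustration of the idea, not how LINT itself is implemented:

```python
import ast

# Minimal illustration of one lint-style check: report names that are
# assigned (Store context) but never read (Load context).
def unused_assignments(source):
    tree = ast.parse(source)
    assigned, used = set(), set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Name):
            if isinstance(node.ctx, ast.Store):
                assigned.add(node.id)
            else:
                used.add(node.id)
    return sorted(assigned - used)

SAMPLE = "total = price * qty\nunused_rate = 0.07\nprint(total)\n"
print(unused_assignments(SAMPLE))  # flags the dead assignment
```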

3.2.4.4 Static and Dynamic Analyzers

Some tools allow for static and dynamic analysis of code. Tools such as LDRA (see Appendix B for more details) perform static analysis, assessing the code in terms of programming standards, complexity metrics, unreachable code, and much more. These tools also support dynamic analysis, which involves the execution of code using test data to detect defects at runtime, as well as detection of untested code, analysis of statement and branch execution, and much more. These types of products analyze the source code, producing reports in both textual and graphical form.

3.2.5 Metrics Tools

Metrics tools, which are used during the unit and integration testing phases, can identify untested code and support dynamic testing. These types of tools provide coverage analysis to verify that the code is tested in as much detail as possible.

3.2.5.1 Metrics Reporter

The metrics reporter tool [5] has been around for years and remains valuable. This tool reads source code and displays metrics information, often in graphical format. It reports complexity metrics in terms of data flow, data structure, and control flow. It also provides metrics about code size in terms of modules, operands, operators, and lines of code. Such a tool can help the programmer correct and groom code and help the test engineer determine which parts of the software code require the most test attention.
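A minimal sketch of such a reporter might count each function's size and estimate its cyclomatic complexity from decision points. The node list and the formula (decision points plus one) are simplifying assumptions for illustration:

```python
import ast

# Rough sketch of a metrics reporter: size in lines plus a cyclomatic
# complexity estimate (decision points + 1), computed per function.
DECISION_NODES = (ast.If, ast.For, ast.While, ast.BoolOp, ast.ExceptHandler)

def function_metrics(source):
    report = {}
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.FunctionDef):
            decisions = sum(isinstance(n, DECISION_NODES)
                            for n in ast.walk(node))
            loc = node.end_lineno - node.lineno + 1
            report[node.name] = {"loc": loc, "complexity": decisions + 1}
    return report

SAMPLE = (
    "def grade(score):\n"
    "    if score >= 90:\n"
    "        return 'A'\n"
    "    if score >= 60:\n"
    "        return 'pass'\n"
    "    return 'fail'\n"
)
print(function_metrics(SAMPLE))
```

Functions scoring high on such a report are the ones the test engineer would target for the most test attention.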

3.2.5.2 Code Coverage Analyzers and Code Instrumentors

Measurement of structural coverage gives the development team insight into the effectiveness of tests and test suites. For example, tools such as McCabe’s Visual Test Tool can quantify the complexity of the design, measure the number of integration tests required to qualify the design, produce the desired integration tests, and measure the number of integration tests that have not been executed.

Other tools, such as Hindsight, measure multiple levels of test coverage, including segment, branch, and conditional coverage. The appropriate level of test coverage will depend upon the importance of the particular application. (For more detail on these tools, see Appendix B.)
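The core mechanism of a statement-coverage analyzer, recording which lines execute during a test run, can be sketched with Python's tracing hook. Real instrumentors insert probes at build time, but the bookkeeping is similar:

```python
import sys

# Sketch of statement-coverage measurement: record which lines of the
# traced function actually execute during a test run.
def measure_coverage(func, *args):
    executed = set()
    def tracer(frame, event, arg):
        if event == "line" and frame.f_code is func.__code__:
            executed.add(frame.f_lineno)
        return tracer
    sys.settrace(tracer)
    try:
        func(*args)
    finally:
        sys.settrace(None)
    return executed

def classify(n):
    if n < 0:
        return "negative"       # never reached by the test input below
    return "non-negative"

hit = measure_coverage(classify, 5)
print(f"{len(hit)} lines executed")
```

Comparing the executed set against the function's full line span reveals the untested branch, exactly the gap a coverage report highlights.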

3.2.5.3 Usability Measurement

These types of tools evaluate the usability of a client/server application. Please see Appendix B for more detail.

3.2.6 Other Testing Life-Cycle Support Tools

3.2.6.1 Test Data Generators

Many tools now on the market support the generation of test data and populate database servers. Such test data can be used for all testing phases, especially during performance and stress testing, simplifying the testing process. For examples of test data generators, see Appendix B.
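A minimal test data generator might look like the sketch below. The customer fields, value ranges, and row count are invented for illustration; a fixed seed keeps the generated data reproducible across runs:

```python
import random
import string

# Sketch of a test data generator populating a customer table; the
# schema is a hypothetical example, not any vendor's format.
def random_customer(rng):
    name = "".join(rng.choices(string.ascii_uppercase, k=8))
    return {
        "id": rng.randrange(1, 10**6),
        "name": name,
        "balance": round(rng.uniform(0, 5000), 2),
        "active": rng.random() < 0.9,
    }

def generate_rows(count, seed=7):
    rng = random.Random(seed)          # reproducible data set
    return [random_customer(rng) for _ in range(count)]

rows = generate_rows(10_000)           # volume suitable for stress testing
print(rows[0])
```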

3.2.6.2 File Compare Utilities

File compare utilities search for discrepancies between files that should be identical in content. Such comparisons are useful in validating that regression tests produce the same output files as the baseline captured before code fixes were implemented. File compare utilities are often built into capture/playback tools.
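The comparison step can be sketched with Python's standard difflib module: diff the baseline output captured before a code fix against the output of the new run, and flag any discrepancy as a regression suspect. The file contents here are invented examples:

```python
import difflib

# Sketch of a file-compare step in regression testing: baseline output
# (captured before the fix) versus the new run's output.
baseline = ["order accepted\n", "total: 42.50\n", "status: shipped\n"]
actual   = ["order accepted\n", "total: 43.00\n", "status: shipped\n"]

diff = list(difflib.unified_diff(baseline, actual,
                                 fromfile="baseline.out",
                                 tofile="actual.out"))
if diff:
    print("regression suspect:")
    print("".join(diff))
else:
    print("outputs identical")
```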

3.2.6.3 Simulation Tools

Simulation modeling tools can simulate the behavior of application-under-test models, using various modifications of the target application environment as part of “what-if” scenarios. These tools provide insight into the performance and behavior of existing or proposed networks, systems, and processes. For examples of simulation modeling tools, see Appendix B.

3.2.7 Testing Phase Tools

3.2.7.1 Test Management Tools

Test management tools support the testing life cycle by allowing for planning, managing, and analyzing all aspects of it. Some tools, such as Rational’s TestStudio, are integrated with requirement and configuration management tools, thereby simplifying the entire testing life-cycle process. For more details on test management tools, see Appendix B.

3.2.7.2 Network Testing Tools

The advent of applications operating in a client-server, multitier, or Web environment has introduced new complexity to the test effort. The test engineer is no longer exercising a single closed application operating on a single system, as in the past. Instead, the client-server architecture involves three components: the server, the client, and the network. Interplatform connectivity also increases the potential for errors. As a result, testing must focus on the performance of the server and the network, as well as the overall system performance and functionality across the three components. Many network test tools allow the test engineer to monitor, measure, test, and diagnose performance across the entire network. For more details on network test tools, see Appendix B.

3.2.7.3 GUI Application Testing Tools (Record/Playback Tools)

Many automated GUI test tools are on the market. These tools usually include a record and playback feature, which allows the test engineer to create (record), modify, and run (play back) automated tests across many environments. Tools that record the GUI components at the widget level (rather than the bitmap level) are the most useful. The record activity captures the keystrokes entered by the test engineer and automatically generates a program in a high-level language in the background; this recording is referred to as a test “script.” Using only the capture/playback features typically exploits about 10% of a test tool’s capacity. To get the best value from a capture/playback tool, it is necessary to work with its inherent scripting language. The test engineer will need to modify the script to create a reusable and maintainable test procedure (see Chapter 8 on automated test development guidelines). This script becomes the baseline test and can later be played back against a new software build for comparison.
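The kind of rework described above, turning a raw recording into a reusable, data-driven procedure, can be sketched as follows. The field names and pseudo-actions are hypothetical stand-ins for whatever a real tool's scripting language records:

```python
# Sketch of reworking a raw recorded script into a data-driven procedure.
# A raw recording hard-codes one login; the procedure below replays the
# same steps for any account.
RECORDED_VALUES = [            # values captured during the recording
    ("username", "jsmith"),
    ("password", "secret1"),
]

def login_procedure(actions, credentials):
    """Replay the login steps for any account, returning the fields used."""
    log = []
    for field, value in credentials:
        actions.append(f"fill {field}={value}")
        log.append(field)
    actions.append("press OK")
    return log

# Data-driven reuse: one maintainable procedure, many test accounts.
script = []
for creds in (RECORDED_VALUES, [("username", "adavis"), ("password", "x9!")]):
    login_procedure(script, creds)
print(script)
```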

Test tools that provide a recording capability are usually bundled with a comparator, which automatically compares actual outputs with expected outputs and logs the results. Results can be compared pixel by pixel or character by character, with the tool automatically pinpointing the difference between the expected and actual result. In the case of the Rational Robot test tool, a positive result is logged in the Test Log Viewer as a pass and depicted on the screen in green, while a fail is shown in red.

3.2.7.4 Load/Performance/Stress Testing Tools

Performance testing tools, such as Rational’s PerformanceStudio, allow for load testing, where the tool can be programmed to run a number of client machines simultaneously to load the client/server system and measure response time. Load testing typically involves various scenarios to analyze how the client/server system responds under various loads.

Stress testing involves running the client machines in high-stress scenarios to determine when and whether they break.
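The basic shape of a load driver, many simulated clients issuing requests concurrently while response times are collected, can be sketched with threads. The handle_request function below is a stand-in for a real client call against the system under test:

```python
import threading
import time

# Sketch of a load driver: N simulated clients run concurrently and
# per-request response times are collected for analysis.
def handle_request():
    time.sleep(0.01)                     # simulated server latency

def client(times, lock):
    start = time.perf_counter()
    handle_request()
    elapsed = time.perf_counter() - start
    with lock:
        times.append(elapsed)            # lock protects the shared list

def run_load(clients=20):
    times, lock = [], threading.Lock()
    threads = [threading.Thread(target=client, args=(times, lock))
               for _ in range(clients)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return times

times = run_load()
print(f"max response: {max(times)*1000:.1f} ms over {len(times)} clients")
```

Raising the client count until response times degrade or requests fail is the stress scenario; holding it at expected production levels is the load scenario.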

3.2.7.5 Environment Testing Tools

Numerous types of tools are available to support the various environments (mainframe, UNIX, X-Windows, and Web). The number of test tools supporting Web applications is increasing, and specific tools geared toward testing Web applications are now on the market, as noted in Appendix B.

3.2.7.6 Year 2000 (Y2K) Testing Tools

A number of tools on the market support Y2K testing. Such tools parse and report on mainframe or client-server source code regarding the date impact. Some Y2K tools support the baseline creation of Y2K data; others allow for data aging for Y2K testing. Still other Y2K tools provide data simulation and simulate the Y2K testing environment.

3.2.7.7 Product-Based Test Procedure Generators

The product-based test procedure generator has been known since the 1970s [6]. It reads and analyzes source code, and then derives test procedures from this analysis. This tool tries to create test procedures that exercise every statement, branch, and path (structural coverage). While attaining comprehensive structural coverage is a worthwhile goal, the problem with this tool is that it tries to achieve structural coverage by working from the code structure rather than from the requirements specification.

Criticism of this kind of test method springs from the fact that program code structure represents only what a software product does—not what the system was intended to do. Program code can have missing or incorrect functions, and a test tool utilizing the code structure has no way of compensating for such errors. Because it cannot distinguish between good program code and bad program code, the test tool attempts to generate test procedures to exercise every part of all program code. Thus it will not warn the test engineer that some of the program code may be faulty.

When utilizing a product-based test procedure generator, the determination of whether program code is good or bad is left up to the test engineer. As discussed earlier, the test engineer makes such a determination by comparing the actual behavior of the code to its specified or expected behavior. When written requirements specifications are not available and test engineers must work from their recollection of specifications, the test team will be inclined to trust the product-based test procedure generator; after all, the test engineers have no other reference point to support the test effort. In this situation, the test team mistakenly places its faith in reports showing high structural coverage. In contrast, test engineers with written or modeled specifications have the definitive reference point required to perform complete software testing. As a result, they do not need to test software against itself and therefore have no need for a product-based test procedure generator.

3.3 Test Tool Research

The information given in Section 3.2 will assist the test engineer in fashioning a test tool feature “wish list” that supports the organization’s systems engineering environment. It outlines the various types of test tools to consider. Next, the test engineer must translate the need for a test tool type into one or more specific test tool candidates.

Based on test tool requirements, the test team needs to develop a test tool specification and evaluation form. The organization may already have a standard form to support tool evaluation. It is beneficial to check the organization’s process or standards library to obtain any existing guidelines and forms.

It is important that the test tool functionality that you require, as the test engineer responsible for implementing a test tool, be factored into the evaluation process. The test engineer should investigate whether a requirements management tool will be used, or any other tool that can potentially be integrated with a testing tool. A features checklist should be developed for each type of tool needed during the various system life-cycle phases, as each phase will have different tool requirements and needs.

Several questions need to be posed. Do you need a capture/playback tool or a code coverage tool, or both? What are you trying to accomplish with the test tool? Do you need a tool for test management? Do you plan to use the tool for load testing or only regression testing? Test tool goals and objectives will need to be considered when outlining the criteria and desired features of the test tool. Table 3.2 provides an example of an evaluation scorecard for an automated GUI test tool.

Table 3.2. Evaluation Scorecard—Automated GUI Testing (Record/Playback) Tool

image

image

image

image

image

image

3.3.1 Improvement Opportunities

At the end of the test life cycle, test program review activities are performed, as discussed in Chapter 10. The outcome of these activities might suggest the need for automated testing tools and identify those processes or products that need improvement as well as outline how an automated tool could be expected to help improve the process or product. The test engineer needs to incorporate the results of test program review activities when selecting the criteria for a new test tool.

Armed with documented expectations of the functionality that the test tools should provide, the test engineer begins to research the tools that fit the specific needs. A thorough evaluation requires a rich field of candidate vendors and products, and a number of ways exist to identify candidates. A wealth of information is available on the World Wide Web and in software testing publications. Read magazines, especially software and database application development publications, looking for in-depth articles, technical reviews, and vendor advertisements. Use software programs such as Computer Select, which catalogs and indexes tens of thousands of articles from hundreds of magazines. Tool vendors often have their success stories published in industry periodicals, and many software-related magazines research and rate the test tools on the market. Another option is to use the research services of companies such as the Gartner Group.

Talk to associates and ask them to recommend tools and vendors. Join testing newsgroups and testing discussion groups, and obtain tool feedback and other expert opinions.

Narrow down the search by eliminating tools that don’t meet the minimal expectations, and focus additional research on the tools that fulfill at least the minimal requirements. Many questions need to be asked to ascertain whether the tool provides the required functionality. How will the tool be used in the organization? What is its most important function? How portable must the tool be to support multiple platforms? With which system life-cycle phases should the tool integrate?

The test team needs to research whether other groups or departments within the organization are already using specific tools and can share good insight about the tools. Once the test engineer has narrowed the search for a particular type of test tool down to two or three lead candidates, the evaluation scorecard depicted in Table 3.2 can be used to determine which tools best fit the particular requirements.

As the weighted values for the test tool characteristics will vary with each type of test tool, the test team may wish to develop an evaluation scorecard form for each type of test tool required. In Table 3.2, an automated GUI test tool (capture/playback) candidate is evaluated against the desired test tool characteristics. The total value of 2,638 for this candidate must then be compared with the total values derived for the other two candidates. As noted in the sample scorecard summary below, Candidate 3 achieved a rating of 75.3% in being able to provide coverage for all the desired test tool characteristics:

image

An optional evaluation scoring method involves sizing up the three candidates using only the most important test tool characteristics. Note that 12 of the characteristics were assigned a weight of 10. Table 3.3 reflects the scores for the three test tool candidates using a preferred scorecard form based upon product information obtained from each vendor.

Table 3.3. Preferred Scorecard—GUI Record/Playback Tool

image

Using this model for scoring, Candidate 2 achieves a higher rating than Candidate 3, which had posted the highest rating using the evaluation scorecard method. Candidate 2 achieved a rating of 90.0% for being able to provide coverage for the highest-priority test tool characteristics.

image
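The scorecard arithmetic in both tables reduces to the same calculation: multiply each characteristic's weight by the candidate's score, sum the products, and express the total as a percentage of the maximum possible. The weights and scores below are invented for illustration and do not reproduce the book's tables:

```python
# Sketch of the evaluation scorecard arithmetic. Each characteristic has
# a weight; each candidate is scored 1-5; the rating is the weighted
# total as a percentage of the maximum achievable total.
WEIGHTS = {"ease of use": 10, "scripting language": 10,
           "platform support": 8, "reporting": 6, "price": 4}

def rate(scores, weights, top_score=5):
    total = sum(weights[c] * scores[c] for c in weights)
    best = top_score * sum(weights.values())
    return total, round(100 * total / best, 1)

candidate = {"ease of use": 4, "scripting language": 5,
             "platform support": 3, "reporting": 4, "price": 5}
total, pct = rate(candidate, WEIGHTS)
print(f"weighted total {total}, rating {pct}%")
```

The preferred-scorecard variant in Table 3.3 simply restricts the weights dictionary to the highest-weight characteristics before rating each candidate.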

The evaluation for each kind of test tool being considered for an organization or project will differ, because each type of test tool has its own desired characteristics and necessitates a different weighting scheme. The guidelines for what to look for and weigh when evaluating a GUI test tool, for example, will differ from those for evaluating a network monitoring tool.

3.4 Evaluation Domain Definition

Even though the test tool vendor may guarantee a test tool’s functionality, experience shows that tools often don’t work as expected within a particular environment. It is worthwhile to develop an evaluation test plan that outlines in detail how to test the tool, so as to verify whether it fits the particular system requirements and is compatible with the target environment. The scope of this evaluation test plan depends largely on the length of time available to review the tool.

To evaluate one or more candidate test tools, it is advantageous to first test the tool in an isolated test environment (test lab) before applying the test tool on a pilot project (target evaluation domain). Ideally, the test environment will be similar enough to the pilot project environment to provide assurance that the test tool will perform satisfactorily on the pilot project. Both the isolated test environment and the pilot project environment constitute evaluation domains. Evaluation within the test environment is aimed at proving the claims of test tool product literature and supporting a first-hand evaluation of the actual test tool itself. Evaluation within the pilot project environment is aimed at assessing the test tool’s actual performance on a first project.

The hardware/software configuration within the test lab, together with the end-user application selected to support testing, constitutes the test environment. If the hardware/software configuration will be used for only a single application, then the isolated test environment is simple to construct. If a broader evaluation domain is preferred, however, the automated test tool should be evaluated against the needs of several projects.

In support of a broader evaluation domain, the test environment needs to be expanded to include a number of applications, provided that such applications and the resources to test the applications are available. Larger evaluation domains not only establish better and broader selection requirements, but also can enhance partnerships across sections or departments within the organization. This broader base can be helpful in expanding the acceptance of the selection process. It might include a test of the tool within a multitude of operating system environments and a test with applications developed in several programming languages. By making the selection process as inclusive as possible, later automated test tool introduction will be viewed as voluntary rather than forced.

The test team may be able to select an application development project as a pilot for applying the test tool; on other occasions, however, the test team must select a test tool to support a specified project. Table 3.4 provides one set of guidelines for selecting an application development project as a pilot when a record/playback tool candidate is being applied.

Table 3.4. Pilot Project Selection Guidelines—Record/Playback Tool [7]

image

Along with the evaluation domain, the organizational structure of the testing team will need to be considered. In particular, you should identify as early as possible whether the organization intends to establish a centralized test team. Such a test team would carry out all testing functions required for the multiple application development groups. An alternative way of structuring the test organization is to develop a distributed organization in which the different application development project groups are responsible for the test of their own application, with only limited cross-application requirements.

The structure of the test organization affects the characteristics of the desired test tool. A centralized test team will want to utilize a more powerful automated test tool that offers tremendous flexibility, programming language capabilities, and growth potential. A decentralized organization will be better served by an easy-to-use tool that minimizes the cost and time associated with learning how to use it. Chapter 5 provides more details on how the test team can be structured within the organization.

While the hardware and software configuration of the test lab is important, as is the identification of an application to be used to support testing, it is also critical to identify the individuals who will perform the evaluation of the test tool in the test lab. In addition, it will be necessary to define the role that each test engineer will play in the test tool evaluation process. See Chapter 5 for further discussion of these roles and responsibilities.

3.5 Hands-on Tool Evaluation

As the test engineer responsible for selecting an automated test tool, you have now performed several of the steps necessary to support test tool evaluation and selection. The test engineer has become familiar with the system and software architectures of the application projects within the organization by surveying the systems engineering environment. The test group has reviewed the different types of test tools available on the market, used an evaluation scorecard to grade each candidate test tool against desired test tool characteristics, identified an isolated test environment, defined a target evaluation domain, and identified the individuals who will perform a hands-on evaluation of the test tool in the test environment.

Now, with a lead test tool candidate in mind, the test engineer needs to contact the test tool vendor to request a product demonstration. During the demonstration, the test engineer should note any questions or uncertainties and follow up with the vendor on any questions that cannot be answered during the demonstration. When working with a vendor representative, the test engineer should consider the professionalism demonstrated by the representative. It is important to assess whether the vendor representative will be supportive and easy to work with following the actual procurement of the test tool.

The test engineer should ask for a test tool evaluation copy from the vendor. Nearly all vendors have programs for allowing potential customers to try products for some specified period of time without an obligation to buy. The duration of this trial period may range from two weeks to 30 days. Some may extend the evaluation period even longer. The test engineer must clearly understand the duration specified, as failure to return the product within the vendor’s specified timeframe may automatically obligate its purchase.

Some vendors may want a purchase order prepared prior to shipping their products for a no-obligation product evaluation. The test engineer should avoid this kind of arrangement if possible. Ideally, the evaluation process should not take more than two months, although its length will depend on the number of tools to be evaluated. If the timeframe for making a test tool decision is limited, the test team will likely need to have more than one test tool installed within the evaluation domain at the same time. In such a case, it is important to make sure that enough resources are dedicated to performing the evaluations and that the evaluators understand the required functions of each candidate test tool. For each type of test tool required, a test plan should be created. Remember: the goal is to ensure that the test tool performs as advertised and that the tool works within the required environment.

3.5.1 Evaluation Report

During the test tool demonstration (or exercise of the test tool evaluation copy), the test engineer should compare the test tool’s performance with its rating on desired test tool characteristics documented using the evaluation and preferred scorecard forms (see Tables 3.2 and 3.3). If the test tool’s rating significantly differs from the baseline score developed as part of the test tool research exercise outlined in Section 3.3, then the test engineer may need to reconsider whether that test tool represents the best product for the particular requirement.

Following the conclusion of the evaluation process, an evaluation report should be prepared that documents the results of the first-hand examination of the test tool [8]. This report is produced only when the evaluation process is complete—that is, after a test tool demonstration by the tool vendor and exercise of the test tool evaluation copy in an isolated test environment.

The evaluation report formally outlines the evaluation results using clear, precise language and is targeted toward addressing management concerns. The distribution for the report should include all functional areas that participated in the evaluation process or otherwise will be affected by the tool’s introduction. The report should contain background information, technical findings, product summaries, and a conclusion. It should also include a summary of the questions and answers addressed during the test tool demonstration, notes about the tool’s performance documented during a test tool evaluation period, and an updated evaluation scorecard.

An example of a typical evaluation report document outline is provided here. The test team will want to tailor this format to suit the particular needs for its own organization.

1.0 Introduction. The introduction identifies the document, describes its purpose and scope, and provides some background information. For example, it is important to document whether the scope of the test tool evaluation is geared toward 32-bit Visual Basic applications or some other class of applications. If this information is omitted, someone could read the report a year later and assume it also covers mainframe applications. Within the introduction (and conclusion), it is wise to acknowledge the individuals within the functional areas who participated in the evaluation process or otherwise will be affected by the introduction of the tool.

2.0 Summary. Summarize the process that has taken place as well as the roles and participation of particular groups. Identify any assumptions that were made during the selection process, such as the anticipated organization structure of the project, preferred operating systems, and certain technical requirements. The summary is the area where the test team will want to allay any management concerns. How will this tool help? Where is the return on the investment?

3.0 Background Information. Include names, addresses, and contact information for all potential vendors, including information pertaining to test tools that were not formally evaluated. The list should be extensive enough to demonstrate that the test team conducted a thorough search. For those companies and products that did not pass early screening, list the names of the products and indicate why they were rejected. Describe the test environment utilized to evaluate the test tool or tools, and describe the application(s) used to support testing. Be brief, but be sure that the team articulates that it understands the applications used to evaluate the tool.

4.0 Technical Findings. Summarize the results and highlight points of particular interest. Note which product received the best score and why. This section is an overview of technical findings only; don’t attempt to address each evaluation criterion here.

5.0 Product Summaries. This section should summarize the results of the evaluation of each vendor and its product. The focus is on the company and the tool, rather than on the requirements. Provide the results of the evaluation scorecard for each test tool and present the findings in descending order of their scores, from best to worst. Raise issues that go beyond the absolute score. Although the process should be objective, “gut feelings” and instincts, if pertinent, should be mentioned. A table of prices (base tool, hotline support, maintenance, training, and other costs) would be a useful addition to this section. If a price table is included, make sure that anticipated quantities are noted as well as the unit cost and total cost.

6.0 Conclusion. Reiterate the objective and the evaluation team’s recommendation. Be brief. In addition to the Introduction, the Conclusion could also be a good place in which to acknowledge test personnel who performed well.

3.5.2 License Agreement

Once a decision has been made to use a particular test tool, the test engineer needs to ensure that the resulting purchase (license) agreement satisfies operational requirements. The test engineer can also potentially reduce the organization’s costs by working out arrangements for site licenses for the tools or volume (purchase) discounts.

The test engineer needs to review the license agreement before it is accepted by the organization’s Purchasing Department. He or she needs to fully understand the license agreement, as the test team will be required to comply with it. Even though the initial agreement is written to protect the licensor (the tool vendor), not the licensee, it does not mean that the test team must accept it as written. The topics below can be considered when negotiating a test tool license agreement. (Note that this discussion is not to be construed as legal advice.)

Named Users versus Concurrent Users

Modify the agreement to allow for the concurrent use of the tool, regardless of location. Normally, test tool software is licensed to run on a single desktop computer, and installing the tool on another desktop computer requires that it first be removed from the initial one. This transferral can become a logistical nightmare if the environment is large and dynamic. A concurrent-users stipulation alleviates this problem by limiting the organization to the number of copies that can run simultaneously, rather than the number of desktop computers on which the tool can be installed.

Extended Maintenance

Maintenance agreements will usually renew automatically each year unless the test team explicitly terminates the agreement sufficiently ahead of time. Consider changing the maintenance agreement so that the positive act of payment (or notification of payment) extends the maintenance contract, and nonpayment terminates the extended maintenance.

Be aware, though, that vendors generally assess a penalty to reinstate maintenance if the maintenance agreement is allowed to lapse. If the licensor grants itself the right to cancel the extended maintenance, have this text removed from the maintenance agreement. (As long as you pay for the service, the right to end it should remain with you.) Finally, be sure that the license agreement explicitly caps the amount that the maintenance cost may increase from one year to the next.

Termination

If the contract mentions termination due to “material breach,” make sure that the licensor specifies exactly what this phrase entails or cites a governing law. Some licenses indicate that the laws of a particular state will govern all disputes; consider changing the license agreement to a more favorable jurisdiction, such as New York or Delaware.

Upgrades

During the evaluation process, the vendor may indicate that certain functionality will be released in a later version. Consider attaching an addendum that commits the vendor to a specific date by which the new functionality (or beta releases) will be available and obligates the vendor to provide the upgrades [9].

Chapter Summary

• In an ideal situation, the organization’s test team will select a test tool that fits the criteria of the organization’s system engineering environment, when feasible, and will select a pilot project that is in the early stages of the system development life cycle.

• With an understanding of the systems engineering environment, the test engineer needs to identify which of the various test tool types might potentially apply on a particular project. A determination needs to be made whether the defined system requirements can be verified with one or more test tools. The test tool requirements need to be matched with the types of test tools available so as to derive a list of candidate test tools.

• When considering each test tool type, the test engineer should define the level of software quality that is expected from the project, and determine which aspects of the software development are the most crucial to a particular project or effort.

• The test engineer narrows down the test tool search by eliminating tools that fail to meet minimal expectations and focuses additional research on the tools that fulfill at least the minimal requirements.

• Once the test engineer has narrowed down the search for a particular type of test tool to two or three lead candidates, an evaluation scorecard can be used to determine which tool best fits the particular requirements.
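The evaluation scorecard mentioned above typically works as a weighted sum: each criterion receives a weight reflecting its importance to the project, each candidate tool receives a raw score per criterion, and the tool with the highest weighted total leads. The sketch below illustrates the mechanics; the criteria, weights, and scores are hypothetical placeholders, not a recommended set.

```python
# Minimal weighted-scorecard sketch. Criteria, weights, and raw
# scores are hypothetical; a real scorecard would use the evaluation
# criteria and weights defined for the specific project.
def weighted_score(weights, scores):
    """Return the weighted sum of raw scores for one candidate tool."""
    return sum(weights[c] * scores[c] for c in weights)

# Weight each criterion by its importance to the project (1-5).
weights = {"ease of use": 3, "platform support": 5, "scripting": 4, "price": 2}

# Raw scores (1-10) assigned to each candidate during hands-on evaluation.
candidates = {
    "Tool A": {"ease of use": 8, "platform support": 6, "scripting": 9, "price": 5},
    "Tool B": {"ease of use": 7, "platform support": 9, "scripting": 6, "price": 8},
}

# Rank candidates from best to worst by weighted total.
ranked = sorted(candidates.items(),
                key=lambda kv: weighted_score(weights, kv[1]),
                reverse=True)
for name, scores in ranked:
    print(f"{name}: {weighted_score(weights, scores)}")
```

The optional scoring method described next, which uses only the most important characteristics, amounts to running the same calculation with the low-weight criteria dropped from the `weights` dictionary.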

• An optional evaluation scoring method involves sizing up the candidates using only the most important test tool characteristics.

• With a lead test tool candidate in mind, the test engineer needs to contact the test tool vendor to request a product demonstration and ask for an evaluation copy. Even though the test tool vendor may guarantee a test tool’s functionality, experience has shown that tools do not always work as expected in the specific environment. It is worthwhile to develop an evaluation test plan that outlines in detail how to test the tool, allowing the test team to verify whether it fits the particular system requirements and whether it is compatible with the target environment.

• Following the conclusion of the evaluation process, an evaluation report should be prepared that documents the results of the first-hand examination of the test tool. This report is produced only when the evaluation process is complete—that is, after a demonstration by the tool vendor and exercise of the test tool evaluation copy in an isolated test environment.

References

1. Poston, R. A Guided Tour of Software Testing Tools. San Francisco: Aonix, 1988. www.aonix.com.

2. Ibid.

3. Ibid.

4. Ibid.

5. Ibid.

6. Ibid.

7. Adapted from SQA Process “Cust_Chk.doc,” January 1996. See www.rational.com.

8. Greenspan, S. “Selecting Automated Test Tools During a Client/Server Migration.” Paper presented at STAR conference, Orlando, Florida, May 13–17, 1996.

9. Used with permission of Steven Greenspan. “Selecting Automated Test Tools During a Client/Server Migration.” Paper presented at STAR conference, Orlando, Florida, May 13–17, 1996.
