Chapter 2. Decision to Automate Test

If you want a high quality software system, you must ensure each of its parts is of high quality.

Watts Humphrey


An organization has determined that its current testing program is not effective. An organizational needs analysis has been conducted, and the outcome has shown that the current manual testing process requires improvement. The organization is looking for a more repeatable and less error-prone testing approach. An improvement analysis determines that automated testing should be introduced.

The organization’s test lead has just been informed about the decision to introduce automated testing, although a pilot project remains to be identified. Questions begin to surface immediately. Does the application being developed as part of the current project lend itself to automation? The test lead rounds up all information gathered during the improvement analysis regarding automated testing, searches the Web for more automated testing information, and contemplates the automated test tools that might apply to the current project. Yet, with regard to the decision about the best approach to automated testing, the test lead is not sure where to begin.

This chapter outlines a structured way of approaching the decision to automate test. Figure 2.1 depicts this step-by-step methodology. Between each step appears a decision point—that is, should the process continue or should it terminate with a decision not to automate test for that particular project?

Figure 2.1. Automated Test Decision Process


The steps outlined in Figure 2.1 address the concerns of a test lead or manager facing a test effort on a new project. Which automated test tool should be used? How can management be convinced that automated testing is or is not beneficial to this project? At first, these issues may seem overwhelming. Manual testing may, in fact, be commonplace in the organization. Likewise, few or no test engineers in the organization may have been exposed to automated testing, and therefore few or no test automation advocates may exist.

How would the test engineer go about introducing a new concept such as automated testing? How can the test engineer determine whether the application lends itself to automated testing? The material outlined in this chapter and the structured approach it presents will help sort out these various issues. Step-by-step instructions will provide guidance regarding the decision about whether an application is suitable for automated testing.

The potential for unrealistic expectations of automated testing will also be examined, as some software engineers and managers perceive automated testing as a panacea for all quality-related problems. This chapter points out some of the misconceptions about automated testing and addresses ways to manage some of these “ivory tower” expectations. (Chapter 3 describes the types of tools available). The potential benefits of automated testing are outlined here, and guidance is provided on how to convince management that automated testing augments the quality of the product. Additionally, a structured approach for seeking resource commitment and acquiring management support is described.

2.1 Overcoming False Expectations for Automated Testing

At a new project kick-off meeting, the project manager introduces you as the test lead. She mentions that the project will use an automated test tool and adds that, due to the planned use of an automated test tool, the test effort is not expected to be significant. The project manager concludes by requesting that you submit within the next week a recommendation for the specific test tool required plus a cost estimate for its procurement. You are caught by surprise by the project manager’s remarks and wonder about her expectations with regard to automated testing. Clearly, any false automated testing expectations need to be cleared up immediately.

Along with the idea of automated testing come high expectations. Much is demanded from technology and automation. Some people believe that an automated test tool should be able to accomplish everything from test planning to test execution, without any manual intervention. Although it would be great if such a tool existed, no such capability is available today. Others believe—wrongly—that a single test tool can support all test requirements, regardless of environment parameters, such as the operating system or programming language used.

Some may incorrectly assume that an automated test tool will immediately reduce the test effort and the test schedule. Although automated testing can produce a return on investment, an immediate payback on investment is not always achieved. This section addresses some of the misconceptions that persist in the software industry and provides guidelines on how to manage these utopian expectations of automated testing.

2.1.1 Automatic Test Plan Generation

Currently, no commercially available tool can automatically create a comprehensive test plan, while also supporting test design and execution. This shortcoming may be a bitter pill for management to swallow.

Throughout the course of his or her career, a test engineer can expect to witness test tool demonstrations and review an abundance of test tool literature. Often the test engineer will be asked to give an overview of test tool functionality to a senior manager or a small number of managers. As always, the presenter must pay attention to the audience’s identity. In this case, the audience may include individuals who have just enough technical knowledge to make them enthusiastic about automated testing, but who are not aware of the complexity involved with an automated test effort. Specifically, the managers may have obtained third-hand information about automated test tools and may have reached the wrong conclusions about the capabilities of those tools.

The audience at the management presentation may be waiting to hear that the proposed tool automatically develops the test plan, designs and creates the test procedures, executes all test procedures, and analyzes the results automatically. Instead, you start the presentation by informing the group that automated test tools should be viewed as enhancements to manual testing, and that they will not automatically develop the test plan, design and create the test procedures, and execute the test procedures.

Soon into the presentation and after several management questions, it becomes very apparent how much of a divide exists between the reality of the test tool capabilities and the perceptions of the individuals in the audience. The term automated test tool seems to bring out a great deal of wishful thinking that is not closely aligned with reality. Such a tool will not replace the human factor necessary for testing a product. In fact, the services of test engineers and other quality assurance experts will still be needed to keep the testing machinery running. Thus a test tool can be viewed as an additional part of the machinery that supports the release of a good product.

2.1.2 Test Tool Fits All

Currently, no single test tool exists that supports all operating system environments; thus a single test tool will not fulfill all the testing requirements of most organizations. Consider the experience of one test engineer, Dave, who encountered this misconception. Dave’s manager asked him to find a test tool that could automate all of the department’s year 2000 tests. The department was using various technologies: mainframe computers and Sun workstations; operating systems such as MVS, UNIX, Windows 3.1, Windows 95, and Windows NT; programming languages such as COBOL, C, C++, MS Access, and Visual Basic; other client-server technologies; and Web technologies.

Expectations have to be managed. That is, the test lead must make it clear that no single tool currently on the market is compatible with all operating systems and programming languages. More than one tool is required to test the various technologies.

2.1.3 Immediate Test Effort Reduction

Introduction of automated test tools will not immediately reduce the test effort. Again, this issue may run counter to management’s expectations.

A primary impetus for introducing an automated test tool as part of a project is to reduce the test effort. Experience has shown that a learning curve is associated with attempts to apply automated testing to a new project and to achieve effective use of automated testing. Test or project managers may have read the test tool literature and be anxious to realize the potential of the automated tools. They should be made aware that test effort savings do not necessarily come immediately.

Surprisingly, there is a good chance that the test effort will actually become more arduous when an automated test tool is first brought into an organization. The introduction of an automated test tool to a new project adds a whole new level of complexity to the test program. Not only must the test engineers climb a learning curve to become familiar with, and efficient in, the use of the tool, but manual tests must still be performed on the project. The reasons why an entire test effort generally cannot be automated are outlined later in this section.

The initial introduction of automated testing also requires careful analysis of the target application to determine which sections of the application are amenable to automation. In addition, test automation requires that the team pay careful attention to automated test procedure design and development. The automated test effort can be viewed as a mini-development life cycle, complete with the planning and coordination issues that come along with any development effort. Introducing an automated test tool requires the test team to perform the additional activities outlined in step 3 (automated testing introduction process) of the ATLM, which is discussed in detail in Chapter 4.

2.1.4 Immediate Schedule Reduction

Another misconception involves the expectation that the use of an automated testing tool on a new project will immediately minimize the test schedule. Because the testing effort may actually increase, as described in Section 2.1.3, the testing schedule will not experience the anticipated decrease at first but may instead become extended. An allowance for schedule increase is therefore required when initially introducing an automated test tool. When rolling out an automated test tool, the current testing process must be augmented, or an entirely new testing process must be developed and implemented. The entire test team, and possibly the development team, needs to become familiar with the new automated testing process (that is, the ATLM) and learn to follow it. Once an automated testing process has been established and effectively implemented, the project can expect to experience gains in productivity and turnaround time that have a positive effect on schedule and cost.

2.1.5 Tool Ease of Use

An automated tool requires new skills, so additional training is required. Plan for training and a learning curve!

Many tool vendors try to sell their tools by exaggerating the tools’ ease of use. They deny that any learning curve is associated with the use of a new tool. Vendors are quick to point out that the tool can simply capture (record) test engineer keystrokes and (like magic) create a script in the background, which then can be reused for playback. In fact, efficient automation is not that simple. The test scripts that the tool generates automatically during recording must be modified manually, which requires tool scripting knowledge, so as to make the scripts robust, reusable, and maintainable. To be able to modify the scripts, the test engineer must be trained on the tool and the tool’s built-in scripting language. Thus new training requirements and a learning curve can be expected with the use of any new tool.
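
To make the distinction concrete, the fragment below contrasts the style of a raw recorded sequence with a hand-modified, reusable version. It is only a sketch: the gui object, the window names, and the log_in routine are hypothetical stand-ins for whatever playback API and object names a particular capture/replay tool generates.

    # Hypothetical stand-in for a capture/replay tool's playback API.
    class gui:
        @staticmethod
        def click(widget): print(f"click {widget}")
        @staticmethod
        def type_text(widget, text): print(f"type {text!r} into {widget}")

    # A raw recorded script hard-codes every keystroke and value, e.g.:
    #   gui.click("LoginWindow.UserField")
    #   gui.type_text("LoginWindow.UserField", "jsmith")
    #   ...repeated verbatim in every test that needs to log in.
    # Modified by hand into a parameterized routine, the same steps become
    # robust, reusable, and maintainable:
    def log_in(user, password):
        gui.click("LoginWindow.UserField")
        gui.type_text("LoginWindow.UserField", user)
        gui.click("LoginWindow.PasswordField")
        gui.type_text("LoginWindow.PasswordField", password)
        gui.click("LoginWindow.OKButton")

    log_in("jsmith", "secret1")   # data-driven: one script, many inputs

When a window name changes, only the one routine needs updating, rather than every recorded script that touches the login screen. Producing this kind of structure is precisely the scripting work that the learning curve covers.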

2.1.6 Universal Application of Test Automation

As discussed earlier, automated testing represents an enhancement to manual testing, but it can’t be expected that all of the tests on a project can be automated. For example, when an automated GUI test tool is first introduced, it is beneficial to conduct some compatibility tests on the target application to see whether the tool will be able to recognize all objects and third-party controls. Chapter 4 provides further discussion on compatibility testing.

The performance of compatibility tests is especially important for GUI test tools, because such tools have difficulty recognizing some custom-control features within the application. These features include the little calendars or spin controls that are incorporated into many applications, especially in Windows applications. These controls or widgets were once called VBXs, then became known as OCXs, and are now referred to as ActiveX controls in the Windows interface world. They are usually written by third parties, and most test tool manufacturers cannot keep up with the hundreds of clever controls churned out by the various companies.

A test tool might be compatible with all releases of Visual Basic and PowerBuilder, for example, but if an incompatible third-party custom control is introduced into the application, the tool might not recognize the object on the screen. Perhaps most of the target application uses a third-party grid that the test tool does not recognize. The test engineer must then decide whether to automate this part of the application by finding a work-around solution or to test this control only manually. The incompatibility issues can be circumvented if, from the start, the test engineer evaluates and picks a tool that matches the project’s needs, as noted in Chapter 3.

Other tests are physically impossible to automate, such as verifying a printout. The test engineer can automatically send a document to the printer, but then must verify the results by physically walking over to the printer to make sure that the document really printed. After all, the printer could have been off-line or out of paper.

Often associated with the idea that an automated test tool will immediately reduce the testing effort is the fallacy that such a tool can automate 100% of the test requirements of any given test effort. Given an endless number of permutations and combinations of system and user actions possible with n-tier (client/middle-layer/server) architecture and GUI-based applications, a test engineer does not have enough time to test every possibility.

Needless to say, the test team will not have enough time or resources to support 100% test automation of an entire application. As outlined earlier in this section, the complete testing of modern system applications has become an infinite task. It is not possible to test all inputs or all combinations and permutations of all inputs. Even on a moderately complex system, it is impossible to test exhaustively all paths. As a result, it is not feasible to approach the test effort for the entire application-under-test with the goal of testing 100% of the entire software application.

Another limiting factor is cost. Some tests can be more expensive to automate than to execute manually. A test that is executed only once is often not worth automating. For example, an end-of-year report for a health claim system might be run only once because of all the setup activity involved to generate it. As a result, it might not pay off to automate this specific test. When deciding which test procedures to automate, a test engineer must evaluate the value or payoff for investing the time in developing an automated script.

The test engineer should perform a careful analysis of the application when determining which test requirements warrant automation and which should be executed manually. When performing this analysis, the test engineer must also weed out redundant tests. This “manual versus automated test analysis” activity is further discussed in Chapter 7. The goal for test procedure coverage, using automated testing, is for each single test to exercise multiple items but to avoid duplication of tests.

2.1.7 One Hundred Percent Test Coverage

Even with automation, not everything can be tested. One major reason why testing has the potential to be an infinite task is that, to verify that no problems exist for a function, the function must be tested with all possible data—both valid and invalid. Automated testing may increase the breadth and depth of test coverage, yet there still will not be enough time or resources to perform a 100% exhaustive test.

It is impossible to perform a 100% test of all possible simple inputs to a system. The sheer volume of permutations and combinations is simply too staggering. Take, for example, the test of a function that handles verification of a user password. Each user on a computer system has a password, which is generally six to eight characters long, where each character is an uppercase letter or a digit. Each password must contain at least one digit. How many password character combinations are possible? According to Kenneth H. Rosen in Discrete Mathematics and Its Applications, 2,684,483,063,360 possible variations of passwords exist. Even if it were possible to create a test procedure each minute, or 60 test procedures per hour (that is, 480 test procedures per day), it would still take more than 15 million years to prepare and execute a complete test. Therefore, not all possible inputs could be exercised during a test. With this type of combinatorial expansion, it is infeasible to exercise all inputs; in fact, exhaustive input testing is impossible in general.

It is impossible to exhaustively test every combination of a system. Consider the test of the telephone system in North America. The format of the telephone numbers in North America is specified by a numbering plan. A telephone number consists of ten digits: a three-digit area code, a three-digit office code, and a four-digit station code. Because of signaling considerations, certain restrictions apply to some of these digits. To specify the allowable format, let X denote a digit that can take any of the values of 0 through 9 and let N denote a digit that can take any of the values of 2 through 9.

The formats of the three segments of a telephone number are NXX, NXX, and XXXX, respectively. How many different North American telephone numbers are possible under this plan? There are 8 × 10 × 10 = 800 possible area codes with format NXX and, likewise, 800 possible office codes. There are also 10 × 10 × 10 × 10 = 10,000 station codes with format XXXX. Consequently, there are 800 × 800 × 10,000 = 6,400,000,000 different numbers available. This number includes only valid numbers and inputs to the system; it does not even touch the invalid numbers that could be applied. Thus this example shows how it is impossible to test all combinations of input data for a system [1].
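
Both counts are easy to reproduce. The short calculation below is a sketch in Python: the password count takes the 36 uppercase-letter-or-digit symbols per position and subtracts the all-letter strings that contain no digit, and the telephone count multiplies the three segment counts.

    # Password count: length 6-8 over 36 symbols (A-Z, 0-9), minus the
    # all-letter strings, which violate the "at least one digit" rule.
    passwords = sum(36**n - 26**n for n in (6, 7, 8))
    print(passwords)                         # 2684483063360

    # At 480 test procedures per day, a complete test is hopeless:
    print(passwords / (480 * 365))           # roughly 15.3 million years

    # Telephone numbers under the NXX-NXX-XXXX plan (N: 2-9, X: 0-9):
    print((8 * 10 * 10) * (8 * 10 * 10) * 10**4)   # 6400000000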

Clearly, testing is potentially an infinite task. In view of this possibility, test engineers often rely on random code reviews of critical modules. They may also rely on the testing process to discover defects early. Such test activities, which include requirements, design, and code walkthroughs, support the process of defect prevention. (Defect prevention and detection technologies are discussed in detail in Chapter 4.) Given the potential magnitude of any test, the test team needs to rely on test procedure design techniques, such as equivalence testing, that employ only representative data samples. (Test design techniques are described in Chapter 7.)

2.2 Benefits of Automated Testing

Automated testing can provide several benefits when it is implemented correctly and follows a rigorous process. The test engineer must evaluate whether the potential benefits fit the required improvement criteria and whether the pursuit of automated testing on the project is still a logical fit, given the organizational needs. Three significant automated test benefits (in combination with manual testing) have been identified: (1) production of a reliable system, (2) improvement of the quality of the test effort, and (3) reduction of the test effort and minimization of the schedule.

2.2.1 Production of a Reliable System

A strategic goal of the test effort is to find defects, thereby minimizing errors in the application so that the system runs as expected with very little downtime. Another major goal is to ensure that the system’s performance requirements meet or exceed user expectations. To support these goals effectively, the test effort should be initiated during the development cycle’s requirements definition phase, when requirements are developed and refined.

The use of automated testing can improve all areas of testing, including test procedure development, test execution, test results analysis, error status/correction monitoring, and report creation. It also supports all test phases including unit, integration, regression, system, user acceptance, performance, and stress and configuration testing, among others.

In all areas of the development life cycle, automated testing helps to build reliable systems, provided that automated test tools and methods are implemented correctly, and a defined testing process, such as the ATLM, is followed. Table 2.1 indicates the specific benefits that can be expected through the use of automated testing.

Table 2.1. Production of a Reliable System


2.2.1.1 Improved Requirements Definition

As discussed previously, reliable and cost-effective software testing starts in the requirements phase, with the goal of building highly reliable systems. If requirements are unambiguous and consistently delineate all of the information that a test engineer needs in a testable form, the requirements are said to be test-ready or testable. Many tools on the market can facilitate the generation of testable requirements. Some tools allow the requirements to be written in a formal language such as LOTOS or Z [2] using a syntax-directed editor. Other tools allow for modeling requirements graphically. (Chapter 3 provides more details on the types of requirement tools available.)

Test-ready requirements minimize the test effort and cost. Requirements that are in test-ready condition help support the preparation of an efficient test design and requirements to test design/test procedure traceability. This better traceability, in turn, provides the project team with greater assurance of test completeness. Refer to Appendix A for more information on test-ready requirements.

2.2.1.2 Improved Performance Testing

Performance data are no longer gathered with stopwatches. As recently as 1998, in one Fortune 100 company, performance testing was conducted while one test engineer sat with a stopwatch, timing the functionality that another test engineer was executing manually. This method of capturing performance measures is both labor-intensive and highly error-prone, and it does not allow for automatic repeatability. Today many load-testing tools are on the market that allow the test engineer to test the system functionality automatically, producing timing numbers and graphs and pinpointing the bottlenecks and thresholds of the system. A test engineer no longer needs to sit with a stopwatch in hand. Instead, he or she initiates a test script to capture the performance statistics automatically, leaving the test engineer free to do more creative and intellectually challenging testing work.

In the past, a number of different computers and people would be required to execute a multitude of tests over and over again to produce statistically valid performance figures. New automated performance test tools allow the test engineer to take advantage of programs that read data from a file or table or that use tool-generated data, whether the information consists of one line of data or hundreds of lines. Still other programs can be developed or reused from test program libraries to support looping constructs and conditional statements.

The new generation of test tools enables the test engineer to run performance tests unattended, because they allow a time for the execution of the test to be set up in advance; a test script then kicks off automatically, without any manual intervention. Many automated performance test tools permit virtual user testing, in which the test engineer can simulate tens, hundreds, or even thousands of users executing various test scripts.
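
The sketch below illustrates the virtual-user idea in miniature, using nothing more than threads: many concurrent "users" drive the same scripted transaction while the harness records response times. The submit_order routine is a hypothetical placeholder for the transaction under test; a commercial tool adds scheduling, ramp-up, and reporting, but the principle is the same.

    import threading, time, random

    timings = []
    lock = threading.Lock()

    def submit_order():
        time.sleep(random.uniform(0.01, 0.05))   # placeholder for real work

    def virtual_user(iterations):
        for _ in range(iterations):
            start = time.perf_counter()
            submit_order()
            elapsed = time.perf_counter() - start
            with lock:                            # record each response time
                timings.append(elapsed)

    # Simulate 50 concurrent users, each executing the script 10 times.
    users = [threading.Thread(target=virtual_user, args=(10,))
             for _ in range(50)]
    for u in users: u.start()
    for u in users: u.join()
    print(f"{len(timings)} transactions, worst response {max(timings):.3f}s")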

The objective of performance testing is to demonstrate that a system functions in accordance with its performance requirement specifications regarding acceptable response times, while processing the required transaction volumes on a production-size database. During performance testing, production loads are used to predict behavior and a controlled and measured load is used to measure response time. The analysis of performance test results helps support performance tuning.

2.2.1.3 Improved Stress Testing

A test tool that supports performance testing also supports stress testing; the difference between the two involves only how the tests are executed. Stress testing is the process of running client machines in high-volume scenarios to see when and where the application breaks under the pressure. The system is subjected to extreme and maximum loads to find out whether and where it breaks, to identify what breaks first, and thereby to expose the weak points of the system. System requirements should define these thresholds and describe the system’s response to an overload. Stress testing is useful for operating a system at its maximum design load to verify that it works properly. This type of testing also reveals whether the system behaves as specified when subjected to an overload.

It no longer takes ten or more test engineers to conduct stress testing. The automation of stress tests has benefits for all concerned. One story shared by a test engineer named Steve demonstrates this point. Steve was one of 20 test engineers supporting a large project. At the end of one week of testing, Steve’s manager required that all 20 test engineers work on Saturday to support the stress testing effort. The manager emphasized that the whole test team needed to be present so that each test engineer could exercise the system at a high rate, which would “stress test” the system.

As the manager explained, each test engineer would execute the most complex functionality of the system at the same time. Steve and the other test engineers dutifully dragged themselves into work that Saturday morning. As each arrived at work, it quickly became apparent that the manager had omitted one small detail from the weekend plan. None of the employees had an access key for the building—and the facility did not have a security guard or magnetic access key readers. Each test engineer eventually turned around and went back home; the trip in to work that morning was a complete waste of time. As Steve put it, “I think that each one of us was personally being ‘stress tested.’”

With an automated test tool, the test team does not need the extra resources and test engineers can often be spared from working after hours or on weekends. Likewise, managers can avoid paying overtime wages. With an automated stress-testing tool, the test engineer can instruct the tool when to execute a stress test, which tests to run, and how many users to simulate—all without user intervention.

It is expensive, difficult, inaccurate, and time-consuming to stress test an application adequately using purely manual methods. A large number of users and workstations are required to conduct the testing process. It is costly to dedicate sufficient resources to tests and difficult to orchestrate the necessary users and machines. A growing number of test tools provide an alternative to manual stress testing by simulating the interaction of many users with the system from a limited number of client workstations. Generally, the process begins by capturing user interactions with the application and the database server within a number of test scripts. The testing software then runs multiple instances of test scripts to simulate a large number of users.

Many automated test tools include a load simulator, which enables the test engineer to simulate hundreds or thousands of virtual users simultaneously working on the target application. No one need be present to kick off the tests or monitor them; timing can be set to specify when the script should be kicked off, and the test scripts can run unattended. Most such tools produce a test log output listing the results of the stress test. The tool can record any unexpected active window (such as an error dialog box), and test personnel can review the message contained in the unexpected window (such as an error message). An unexpected active window might arise, for example, when the test engineer records a test with window A open, but finds during script playback that window B is unexpectedly open as well.

Examples of stress testing include running a client application continuously for many hours or running a large number of different testing procedures to simulate a multiuser environment. Typical types of errors uncovered by stress testing include memory leakage, performance problems, locking problems, concurrency problems, excess consumption of system resources, and exhaustion of disk space.

2.2.1.4 Quality Measurements and Test Optimization

Automated testing produces quality metrics and allows for test optimization; its results can later be measured and analyzed. Indeed, the automated testing process itself can be measured and repeated. With a manual testing process, the steps taken during the first iteration of a test may not be the exact steps taken during the second iteration, so it is difficult to produce any kind of comparable quality measurements with this approach. With automated testing, however, the testing steps are repeatable and measurable.

Test engineers’ analysis of quality measurements supports the effort of optimizing tests, but only when tests are repeatable. As noted earlier, automation allows for repeatability of tests. For example, consider the situation where a test engineer manually executes a test and finds an error, then tries to recreate the test without any success. With an automated test, the script could simply have been played again, making the test both repeatable and measurable. The automated tool produces many metrics, usually creates a test log, and can accommodate automated metrics reporting.

Automated testing also supports optimization. For example, a test engineer can optimize a regression test suite by performing the following steps; a minimal sketch of the supporting bookkeeping follows the list.

  1. Run the regression test set.
  2. If errors surface later in functionality for which the regression test set had run acceptably, identify the test procedures that uncovered those defects and add them to the regression test set.
  3. By repeating these steps, the regression test suite of scripts is continuously optimized using quality measurements (in this case, the metric would be the number and type of defects that escaped the regression test set).
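
The sketch below shows this bookkeeping in miniature; the procedure names and defect records are illustrative only, and a real test management tool would track far more.

    # When a defect escapes the regression set, the procedure that later
    # uncovered it is promoted into the set, so the suite improves with
    # each release.
    regression_set = {"login_ok", "save_record", "print_report"}
    escaped_defects = [              # defects found after regression passed
        {"id": "DR-101", "found_by": "concurrent_update"},
        {"id": "DR-102", "found_by": "login_ok"},       # already covered
    ]

    for defect in escaped_defects:
        if defect["found_by"] not in regression_set:
            regression_set.add(defect["found_by"])      # optimize the suite
            print(f'{defect["id"]}: added {defect["found_by"]} to set')

    print(sorted(regression_set))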

2.2.1.5 Improved Partnership with Development Team

When a test engineer named John was implementing automated testing at a Fortune 100 company, one of his many duties included mentoring other junior test engineers. John consulted or acted as a mentor for many project teams. On one project, he was asked to help a developer in a group with the use of an automated test tool. John installed the tool for the developer in approximately 15 minutes, and helped the developer begin a compatibility test between the automated test tool and the target application.

John gave the developer an overview of the major functionality of the tool, then started to conduct some compatibility tests. He observed that the tool could recognize some of the objects. Finally, the compatibility tests revealed that the automated test tool could not recognize one of the main third-party controls (widgets) used on each screen of the application under development. It was decided that the tool was not compatible with the target application. Later, the developer told John that he really appreciated John’s support in learning about the tool. The developer shared a short story about his experience:

I was working on this “death-march” project with a very tight schedule, when my boss walked over to me with the large box containing the XYZ automated test tool software and said, “Here, use this automated test tool; it will speed up our testing.” I put the box aside and wondered when to get started using this tool, given the tight development schedule and the many other responsibilities for which I had been tasked. I asked myself, “When would I have the time to learn and understand the tool?”

The developer explained in great detail how he worried about trying to become proficient with the tool without any training, help, or guidance. He noted how John had unexpectedly entered into the picture and installed the tool for him, gave him an overview, showed him how it worked, and got him jump-started. It took John a fraction of the time it would have taken the developer to begin using the automated tool and to determine that the automated testing tool was incompatible with the target application. To the developer, John’s dedication in helping him get on his feet with the tool was instrumental in fostering a close working relationship between the two.

Automated testing provides a convenient way for the test engineer and the application developer to work together. As the test engineer now needs to have similar software skills, more opportunities will arise for collaboration and for mutual respect. In the past, individuals who possessed only data entry skills often performed test program activities. In this type of environment, the developer did not view the person performing testing as a peer or confidant.

Because automated test tools are used to perform developmental testing, developers themselves will be working with these tools to carry out unit testing, memory leak testing, code coverage testing, user interface testing, and server testing. As a result, testers need to have the same qualifications—and the same career opportunities (remuneration, appreciation)—as developers. The respect between the application developer and the test engineer will continue to grow, and ideally the relationship will be viewed more as a partnership.

2.2.1.6 Improved System Development Life Cycle

Automated testing can support each phase of the system development life cycle. Today, automated test tools are available for everything from requirements definition through test execution. For example, tools exist for the requirements definition phase that help produce test-ready requirements so as to minimize the test effort and cost of testing. Likewise, tools supporting the design phase, such as modeling tools, can record the requirements within use cases. Use cases represent user scenarios that exercise various combinations of system-level (operational-oriented) requirements. These use cases have a defined starting point, a defined user (either a person or an external system), a set of discrete steps, and defined exit criteria.

Tools also exist for the programming phase, such as code checkers, metrics reporters, code instrumentors, and product-based test procedure generators. If requirement definition, software design, and test procedures have been prepared properly, application development may be the easiest activity of the bunch. Test execution will surely run more smoothly given these conditions.

Many other tools are available that can support the test effort, including performance test tools and capture/playback tools. The many different test tools contribute in one way or another to the overall system development life cycle and the final quality of the end product. Although each different tool has its purpose, it is not likely that all the tools have the same utility for a given project, nor is it likely that the organization will have every tool on hand.

2.2.2 Improvement of the Quality of the Test Effort

By using an automated test tool, the depth and breadth of testing can be increased. Specific benefits are outlined in Table 2.2.

Table 2.2. Improved Quality of the Test Effort


2.2.2.1 Improved Build Verification Testing (Smoke Test)

The smoke test (build verification test) focuses on test automation of the system components that make up the most important functionality. Instead of repeatedly retesting everything manually whenever a new software build is received, a test engineer plays back the smoke test, verifying that the major functionality of the system still exists. An automated test tool allows the test engineer to record the manual test steps that would usually be followed in software build/version verification. With the automated test tool, tests that confirm the presence of all major functionality can be performed before any manual testing time is invested in an unstable build.

The automated test tool supports the smoke test by allowing the test engineer to play back the script. The script will automatically walk through each step that the test engineer would otherwise have performed manually, again reducing test effort. During the time that the script replays, the test engineer can concentrate on other testing issues, thereby enhancing the capabilities of the entire test team.

Smoke testing ensures that no effort is wasted in trying to test an incomplete build. At one large company, a test engineer named Mary Ellen had the following experience performing test on a new software version/build:

  1. The business users were called to come from the fifth floor to the fourth-floor testing room to verify that specific problem reports had been fixed in the new software build. The business users were often asked to drop what they were doing to perform a regression test on a new version/build. Sometimes they would start testing and immediately find a show-stopper bug (a bug that does not allow the system to be exercised any further). They would report the error and then return upstairs, because they couldn’t continue testing until the fix was incorporated into the build. Just finding one bug wasted at least an hour of time for five people.
  2. During manual test execution, a newly introduced defect to previously working functionality might not be found until hours into the regression testing cycle. As a result, even more time was wasted, as the entire new build had to be redone and then retested. That case was observed often, because test personnel would become complacent, thinking, “I’ve tested this functionality the last time already and it worked fine; instead, I will concentrate on the things that didn’t work last time.”

With a smoke test, when a developer has created a new software version or build, the developer or independent test engineer merely replays the test to verify that the major functionality, which worked in the previous version of code, still works with the latest release. Configuration management personnel can also benefit by using this test to verify that all versions of the build have been checked out correctly. The configuration management specialist also can immediately ascertain whether a version of the build or part of the build is missing. Thus a smoke test can save developers, configuration management personnel, business users, and test engineers much valuable time.

2.2.2.2 Improved Regression Testing

A regression test is a test or set of tests that is executed on a baselined system or product (baselined in a configuration management system) when a part of the total system product environment has been modified. The objective is to verify that the functions provided by the modified system or product match the specifications and that no unintended change in operational functions has been made. An automated test tool provides for simplified regression testing. Automated regression testing can verify in an expedient manner that no new bugs were introduced into a new build. Experience shows that modifying an existing program is a more error-prone process (in terms of errors per statements written) than writing a new program [3].

Regression testing should occur after each release of a previously tested application. The smoke test described in Section 2.2.2.1 is a smaller, rapid regression test of major high-level functionality. Regression testing expands on the smoke test to include all existing functionality that has already been proved viable. The regression test suite represents a subset of the overall test procedures that exercise the basic functionality of the application. It may include test procedures that have the highest probability of detecting the most errors. This type of testing should be performed via an automated tool because it is usually lengthy and tedious and thus prone to human error.

Executing a test shell or wrapper function containing a complete set of system test scripts is an example of the efficient use of automated scripts during regression testing. A test shell is a test procedure that calls or groups several test procedures, then plays them back in a specific, predefined sequence. Such a procedure allows the test engineer to create and execute a comprehensive test suite and subsequently store the results of the test in a single log output.
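
The fragment below sketches such a shell in miniature: one wrapper calls several test procedures in a predefined sequence and writes every outcome to a single log. The three test procedures are hypothetical placeholders for tool-generated scripts.

    import datetime

    def test_login():     return True
    def test_query():     return True
    def test_reports():   return False    # pretend this one regressed

    def test_shell(procedures, log_path="regression.log"):
        # Play back each grouped procedure in a fixed, predefined order
        # and store all results in one log output.
        with open(log_path, "w") as log:
            for proc in procedures:
                outcome = "PASS" if proc() else "FAIL"
                log.write(f"{datetime.datetime.now()} "
                          f"{proc.__name__}: {outcome}\n")

    test_shell([test_login, test_query, test_reports])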

2.2.2.3 Improved Multiplatform Compatibility Testing

Another example of the savings attributable to the use of automated testing is the reuse of test scripts to support testing from one platform (hardware configuration) to another. Changes in computer hardware, network versions, and operating systems can cause unexpected compatibility problems with the existing configuration. Prior to a production rollout of a new application to a large number of users, the execution of automated test scripts can provide a clean method of ensuring that these changes did not adversely affect current applications and operating environments.

Prior to the advent of automated testing, a test engineer would have to repeat each manual test required for a Windows 95 environment step by step when testing in a Windows NT environment, for example. Now when the test engineer creates the test scripts for an application-under-test on a Windows 95 platform, he or she can simply execute the same test scripts on the Windows NT platform, using multiplatform-compatible tools, such as Rational’s TestStudio or AutoScriptor Inferno. (Refer to Appendix B for more information on these tools.)

2.2.2.4 Improved Software Configuration Testing

The same principle that drives multiplatform compatibility testing applies to software configuration testing. Software changes (such as upgrades or the implementation of a new version) can cause unexpected compatibility problems with existing software. The execution of automated test scripts can provide a clean method of ensuring that these software changes did not adversely affect current applications and operating environments.

2.2.2.5 Improved Execution of Mundane Tests

An automated test tool will eliminate the monotony of repetitious testing. Mundane, repetitive tests are the culprits that allow many errors to escape detection; a test engineer may simply tire of executing the same monotonous steps over and over again. One test engineer named Jo Ann was responsible for performing year 2000 testing using an automated test tool. Jo Ann’s test scripts entered hundreds of dates on as many as 50 screens, with various cycle dates, and some steps had to be performed repeatedly. During one cycle Jo Ann would add rows of data containing the dates; during another cycle, she would delete the data; in another, she would perform an update operation. In addition, the system date had to be reset to accommodate high-risk year 2000 dates.

The same steps were repeated over and over again, the only change from one run to the next being the kind of operation (add, delete, update) being performed. An end user performing acceptance testing would have tired very quickly of executing these mundane and repetitive tests and might omit some, hoping that the system would execute properly. An important testing issue such as year 2000 verification can’t be short-circuited, however. The tests were therefore automated in Jo Ann’s case. This instance is another good example of when automation pays off: a test script does not care whether it has to execute the same monotonous steps over and over again, and it can automatically validate the results.

2.2.2.6 Improved Focus on Advanced Test Issues

Automated testing allows for simple repeatability of tests. A significant amount of testing is conducted on the basic user interface operations of an application. When the application is sufficiently operable, test engineers can proceed to test business logic in the application and other behavior. With both manual and automated regression testing, the test teams repeatedly expend effort in redoing the same basic operability tests. For example, with each new release, the test team would need to verify that everything that worked in the previous version still works in the latest product.

Besides delaying other testing, the tedium of these tests exacts a very high toll on manual testers. Manual testing can become stalled due to the repetition of these tests, at the expense of progress on other required tests. Automated testing presents the opportunity to move on more quickly and to perform a more comprehensive test within the schedule allowed. That is, automated execution of user interface operability tests gets these tests out of the way rapidly. It also frees up test resources, allowing test teams to turn their creativity and effort to more complicated problems and concerns.

2.2.2.7 Execution of Tests That Manual Testing Can’t Accomplish

Software systems and products are becoming more complex, and sometimes manual testing is not capable of supporting all desired tests. As discussed in the introduction to this section, many kinds of testing analysis simply cannot be performed manually today, such as decision coverage analysis or cyclomatic complexity metrics collection. Decision coverage analysis verifies that every point of entry and exit in the program has been invoked at least once and that every decision in the program has taken on all possible outcomes at least once [4]. Cyclomatic complexity, which is derived from an analysis of potential paths through the source code, was originally published by Tom McCabe and is now part of the IEEE Standard Dictionary of Measures to Produce Reliable Software. It would require many man-hours to produce the cyclomatic complexity of the code for any large application. Likewise, performing memory leakage tests by manual methods would be nearly impossible.
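
For reference, McCabe’s metric is computed mechanically from a program’s control-flow graph as V(G) = E − N + 2P, where E is the number of edges, N the number of nodes, and P the number of connected components. The snippet below applies the formula to a small illustrative graph; extracting and counting such graphs across thousands of modules is exactly the work a tool automates.

    # Cyclomatic complexity from a control-flow graph: V(G) = E - N + 2P.
    def cyclomatic_complexity(edges, nodes, components=1):
        return edges - nodes + 2 * components

    # Control-flow graph of an if/else followed by a while loop:
    # 7 nodes, 8 edges, one component -> V(G) = 3 (two decisions + 1).
    print(cyclomatic_complexity(edges=8, nodes=7))   # 3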

Today, tools on the market can test that the application’s Web links are up and running in a matter of seconds. Performing these tests manually would require many hours or days.

2.2.2.8 Ability to Reproduce Software Defects

How many times has a test engineer, in the course of performing manual tests, uncovered a defect, only to discover that he or she cannot recall the steps exercised that led to the error? Automated testing eliminates this problem. With an automated test tool, the manual steps taken to create a test are recorded and stored in a test script. The script will play back the exact same sequence of steps that were initially performed. To simplify matters even further, the test engineer can inform the appropriate developer about the defect, and the developer has the option of playing back the script to experience firsthand the sequence of events that produced the software bug.

2.2.2.9 Enhancement of Business Expertise

Many test managers have probably experienced a situation where the one resident functional expert on the test team is absent from the project for a week during a critical time of testing. A test engineer named Bill went through one such dilemma. Bill was shocked to learn one day that the primary business area expert designated to support the test was on vacation. Clearly, communication among project members was a problem. For many areas of the target application, only the business area expert had the requisite knowledge. Fortunately for Bill and the rest of the test team, the business area expert had scripted all of the business functionality of his expertise into automated test scripts.

Another business user was assigned to replace the one who had left on vacation. This business area expert, however, was not equally familiar with the application. Nonetheless, the new business user was able to play back the business test scripts that the expert had created. The use of these test scripts allowed the test team to verify that the original functionality still behaved in the correct manner. It also prevented the test team from worrying about the fact that the resident expert had left for a week to the sunny beaches of Hawaii. At the same time, the new business user learned more about the “other side” of the business functionality of the application-under-test by watching the script play back the exact sequence of steps required to exercise the functionality.

2.2.2.10 After-Hours Testing

As noted previously, automated testing allows for simple repeatability of tests. As most tools allow scripts to be set up to kick off at any specified time, automated testing allows for after-hours testing without any user interaction. The test engineer can set up a test script program in the morning, for example, to be initiated automatically by the automated test tool at 11 P.M., while the test team is at home sound asleep. When the test team returns to work the next day, it can review the test script output (log) and conduct analysis.

Another convenient time for kicking off a script is when the test engineer goes to lunch or attends a meeting, or just before he or she departs for home at the end of the work day. Initiating tests at these times makes maximum use of the test lab and time.
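
A stdlib-only sketch of such unattended kickoff follows; run_regression_suite is a hypothetical placeholder for the command that starts the tool’s script playback.

    import sched, time, datetime

    def run_regression_suite():
        print("regression suite started", datetime.datetime.now())

    def seconds_until(hour, minute=0):
        # Delay from now until the next occurrence of the given time.
        now = datetime.datetime.now()
        target = now.replace(hour=hour, minute=minute,
                             second=0, microsecond=0)
        if target <= now:                  # already past it? run tomorrow
            target += datetime.timedelta(days=1)
        return (target - now).total_seconds()

    scheduler = sched.scheduler(time.time, time.sleep)
    scheduler.enter(seconds_until(23), 1, run_regression_suite)
    scheduler.run()                        # waits until 11 P.M., then fires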

2.2.3 Reduction of Test Effort and Minimization of Schedule

As outlined in Section 2.1, a test team will not necessarily experience an immediate or large reduction in the testing effort. Initially, the effort may even increase, because of the setup tasks explained in Section 2.1. While the testing effort will likely increase at first, a payback on the test tool investment will appear after the first iteration of the implementation of an automated test tool, due to improved productivity of the test team. The use of an automated test tool can minimize both the test effort and the schedule. The case study at the end of this section entitled “Value of Test Automation Measurement” provides more details on how the test effort may be reduced through the use of test automation. The specific benefits that are associated with this more efficient testing are described here.

A benchmark comparison conducted by the Quality Assurance Institute analyzed the specific difference in effort, measured in man-hours, to perform testing using manual methods as compared with using automated test tools. The study showed that the total test effort using automated testing consumed only 25% of the number of man-hours required to perform testing using manual methods [5].

Test effort reduction perhaps has the biggest impact on shortening the project schedule during the test execution phase. Activities during this phase typically include test execution, test result analysis, error correction, and test reporting. The benchmark study showed that manual testing required nearly seven times more effort than automated testing.

Table 2.3 gives the results of this benchmark comparison of manual and automated testing effort for various test steps, as conducted by the Quality Assurance Institute in November 1995. The testing involved 1,750 test procedures and 700 errors. The figures in Table 2.3 reflect an overall 75% reduction in test effort achieved through the benefits of test automation.

Table 2.3. Manual Versus Automated Testing


Test Plan Development—Test Effort Increase

Before the decision is made to introduce an automated test tool, many facets of the testing process must be considered. A review of the planned application-under-test (AUT) requirements should be conducted to determine whether the AUT is compatible with the test tool. The availability of sample data to support automated testing needs to be confirmed. The kinds and variations of data required should be outlined, and a plan should be developed for obtaining and/or developing sample data. For scripts to be reusable, test design and development standards must be defined and followed, and thought must be given to the modularity and reuse of test scripts. Automated testing therefore necessitates its own kind of development effort, complete with its own mini-development life cycle. The planning required to support the automated test development life cycle, operating in parallel with an application development effort, has the effect of adding to the test planning effort. For further details on test planning, see Chapter 6.

Test Procedure Development—Test Effort Decrease

In the past, the development of test procedures was a slow, expensive, and labor-intensive process. When a software requirement or a software module changed, a test engineer often had to redevelop existing test procedures and create new test procedures from scratch. Today’s automated test tools, however, allow for the selection and execution of a specific test procedure with the click of an icon. With modern automated test procedure (case) generators (see Chapter 3 for more details), test procedure creation and revision time is greatly reduced relative to manual test methods, with some test procedure creation and revision taking only a few seconds. The use of test data generation tools (also described in Chapter 3) contributes to the reduction of the test effort.

Test Execution—Test Effort/Schedule Decrease

Manual performance of test execution is labor-intensive and error-prone. A test tool allows test scripts to be played back at execution time with minimal manual intervention. With the proper setup, and in the ideal world, the test engineer simply kicks off the script and the tool executes unattended. The tests can be performed as many times as necessary and can be set up to kick off at a specified time and run overnight, if necessary. This unattended playback capability allows the test engineer to focus on other, higher-priority tasks.

Test Results Analysis—Test Effort/Schedule Decrease

Automated test tools generally include some kind of test result report mechanism and are capable of maintaining test log information. Some tools produce color-coded results, where green output might indicate that the test passed while red output indicates that the test failed. Most tools can distinguish between a passed and a failed test. This kind of test log output improves the ease of test analysis. Most tools also allow for comparison of the failed data to the original data, pointing out the differences automatically and again supporting the ease of test output analysis.
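
The sketch below shows the kind of baseline comparison such tools perform internally: captured output is compared with stored expected results, and only the differences are reported. The data values are illustrative.

    import difflib

    baseline = ["balance=100.00", "status=ACTIVE", "rows=42"]
    actual   = ["balance=100.00", "status=LOCKED", "rows=42"]

    # Compare captured output with the stored baseline line by line.
    diff = list(difflib.unified_diff(baseline, actual,
                                     fromfile="expected", tofile="actual",
                                     lineterm=""))
    verdict = "PASS" if not diff else "FAIL"
    print(verdict)
    print("\n".join(diff))      # pinpoints exactly which values changed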

Error Status/Correction Monitoring—Test Effort/Schedule Decrease

Some automated tools on the market allow for automatic documentation of defects with minimal manual intervention after a test script has discovered a defect. The information documented in this way might include the identification of the script that produced the defect/error, identification of the test cycle that was being run, a description of the defect/error, and the date/time that the error occurred. For example, the tool TestStudio allows for creation of a defect report as soon as a script has detected an error, simply by selecting the option to generate a defect report. The defect can then be automatically and dynamically linked to the test requirement, allowing for simplified metrics collection.
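
The fragment below sketches the idea; the record fields match those listed above, but the format is illustrative and is not TestStudio’s actual output.

    import datetime, json

    def log_defect(script, cycle, description, path="defects.log"):
        # Write one defect record per detected failure, carrying the
        # script name, test cycle, description, and timestamp.
        record = {
            "script": script,
            "cycle": cycle,
            "description": description,
            "timestamp": datetime.datetime.now().isoformat(),
        }
        with open(path, "a") as f:
            f.write(json.dumps(record) + "\n")

    log_defect("verify_invoice_totals", "cycle-3",
               "Grand total off by 0.01 for multi-line invoices")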

Report Creation—Test Effort/Schedule Decrease

Many automated test tools have built-in report writers, which allow the user to create and customize reports tailored to a specific need. Even those test tools that do not include built-in report writers might allow for import or export of relevant data in a desired format, making it a simple task to integrate the test tool output data with databases that support report creation.

Another benefit concerns the use of automated test tools to support the test engineer in the performance of test methods and techniques that previously had been performed manually. As noted earlier, automated test tools cannot completely eliminate the need to perform manual tests. Some test activities still must be performed manually by the test engineer. For example, many test setup activities must be conducted in a manual fashion. As a result, both manual test expertise and expertise in the use of automated test tools are necessary to execute a complete test and produce a system that meets the requirements of the end user. Automated test tools, therefore, cannot be viewed as a panacea for all test issues and concerns.

2.3 Acquiring Management Support

Whenever an organization tries to adopt a new technology, it faces a significant effort to determine how to apply the technology to its needs. Even with completed training, organizations wrestle with time-consuming false starts before they become adept with the new technology. For the test team interested in implementing automated test tools, the challenge is how to make the best case for implementation of a new test automation technology to the management team.

Test engineers need to influence management’s expectations for the use of automated testing on projects. They can help to manage these expectations by forwarding helpful information to the management staff. Bringing up test tool issues during strategy and planning meetings can also help develop a better understanding of test tool capabilities among everyone involved on a project or within the organization. For example, a test engineer might develop training material on the subject of automated testing and advocate to management that a seminar be scheduled to train staff members.

The first step in moving toward a decision to automate testing on a project requires that the test team influence management’s understanding of the appropriate application of this technology for the specific need at hand. The test team, for example, needs to determine whether management is cost-averse and would be unwilling to accept the estimated cost of automated test tools for a particular effort. If so, test personnel need to convince management of the potential return on investment by conducting a cost-benefit analysis.

In some instances, management is willing to invest in an automated test tool but is unable or unwilling to staff the test team with individuals who have the proper software skill level, or to provide adequate test tool training. The test team will then need to point out the risks involved and may need to reconsider a recommendation to automate testing.

Assuming that management has appropriate expectations on the use and execution of automated testing, then the test team can move on to the next step in the decision to automate test, which pertains to defining the objectives of this type of testing. What does the test team intend to accomplish, and which need will be met, by using automated testing tools?

Management also needs to be made aware of the additional cost involved when introducing a new tool—not only in terms of the tool purchase itself, but also the initial schedule/cost increase, additional training costs, and costs for enhancing an existing testing process or new testing process implementation.

Test automation represents a highly flexible technology, and one that provides several ways to accomplish an objective. Use of this technology requires new ways of thinking, which amplifies the problem of test tool implementation. Many organizations can readily come up with examples of technology that failed to deliver on its potential because of the difficulty of overcoming the “now what?” syndrome. The potential obstacles that organizations must overcome when adopting automated test systems include the following:

• Finding and hiring test tool experts

• Using the correct tool for the task at hand

• Developing and implementing an automated testing process, which includes developing automated test design and development standards

• Analyzing various applications to determine which are best suited for automation

• Analyzing the test requirements to determine which are suitable for automation

• Training the test team on the automated testing process, including automated test design, development, and execution

• Dealing with the initial increase in schedule and cost

2.3.1 Test Tool Proposal

As a test engineer for a new project, you will have followed the structured approach outlined in the previous sections to obtain several results. You will have aligned management expectations to be consistent with the actual potential and impact of automated test on the project. Analyses associated with the automated test decision process will have been performed, and you will have worked through each of the decision points (quality gates). You also will need to gain an understanding of the types of tools available to determine which ones match the testing needs (see Section 3.2, which covers testing life-cycle support tools). The next step involves the development of a test tool proposal to present to project management. Management must support the decision to bring in a test tool by providing a tangible commitment to its use. The test tool proposal needs to convince management that a positive cost benefit is associated with the purchase of an automated test tool.

The test tool proposal effort is aimed at persuading management to release funds to support automated test tool research and procurement, as well as test tool training and implementation. The proposal may also serve the long-range budget planning requirements of the organization. Typically, organizations forecast budget requirements for one or more years out from the present fiscal year. As a result, funds such as those needed to support the procurement of automated test tools and training on such tools can be factored into the budget process early in the game.

The development of a test tool proposal helps to outline in detail the cost of test tool procurement and training requirements. The proposal also documents planned phases, in which test tool licenses are procured incrementally over a period of time. In addition, it may help document the need for a phased buildup of the test engineering staff, as well as the desired skills sought for the bolstered test team.

The test engineer needs to ascertain whether sufficient funding has been allocated within the organization’s budget for the purchase of software development support tools. Although funding may not be set aside specifically for automated test tools, perhaps management wants the test team to provide a proposal (or plan) that outlines the organization’s test tool investment requirements. Often it is the test team’s responsibility to define the test tool requirements of the organization and provide an associated cost estimate.

The test tool proposal should identify the benefits and give an idea of the features of the proposed automated test tool or tools. It should indicate the potential evaluation domain that will include the best software applications on which to exercise the test tool. When identifying these target applications, it is important to review the associated development schedules to ensure that they provide adequate time for the introduction of one or more automated test tools. Chapter 3 discusses pilot/evaluation domain selection criteria.

It is also important to verify that the associated project teams have the requisite skills to successfully utilize the automated test tool. Where skills are insufficient, the possibility of training must be examined. Most importantly, the team should follow the automated test tool introduction process discussed in Chapter 4.

In short, resource commitment from management is necessary to procure and introduce an automated test tool and successfully use the tool. Eventually, the test engineer will need to develop a budget that includes reasonable and accurate estimates for hardware and software procurement, personnel training, and other acquisition and administration costs. Chapter 3 helps the test team further define the features of the tool, and Chapter 5 helps the team define its roles and responsibilities. To win management backing for the resources needed, the test tool proposal should ideally contain the elements depicted in Table 2.5.

Table 2.5. Test Tool Proposal

image

2.3.1.1 Estimated Improvement Opportunities

At the end of the test life cycle, the test team needs to conduct a test program review (as discussed in detail in Chapter 9). Through an organizational needs analysis or the review of lessons learned, it may become apparent to the organization that a need exists for the introduction of automated test tools. In other situations, a test team may turn to automated testing after reviewing industry literature that highlights its potential advantages. In this case, it is especially important to identify a measure that shows the potential range of gain (hours saved) through the implementation of a particular automated tool. Sometimes an automated tool sounds like the perfect solution to a problem, but further analysis fails to reveal a specific gain. It is often beneficial to implement the suggested changes via a small prototype/pilot, thereby allowing the test team to make a valid estimate of suggested corrective action/improvement gains.
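The arithmetic behind such a gain estimate is straightforward, as the sketch below shows for a hypothetical pilot: measure the hours spent per activity manually and with the tool, then express the savings as a percentage. The hours shown are hypothetical measurements, not benchmarks.

```python
# A minimal sketch of a pilot-based gain estimate. The hours are
# hypothetical measurements, not benchmarks.
manual_hours = {"procedure development": 120, "execution": 80, "analysis": 40}
automated_hours = {"procedure development": 70, "execution": 10, "analysis": 15}

for activity, manual in manual_hours.items():
    saved = manual - automated_hours[activity]
    gain = 100 * saved / manual
    print(f"{activity}: {saved} hours saved ({gain:.0f}% of effort)")
```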

Table 2.6 provides an example of a simplified improvement table, produced as a result of research into automated testing capabilities. The third column, labeled “Gain M,” reflects gains in productivity achieved through the use of manual test methods, while the column labeled “Gain A” reflects gains in productivity achieved through the use of automated test methods. The table depicts productivity gain estimates, expressed as the percentage of test effort saved through the application of an automated test tool.

Table 2.6. Gain Estimates

image

image

The organization may require one or more automated test tools, with each test tool having its own features and strengths. These tools may support unique test interests and have special niche values. This consideration can be especially important when the type of test supported is of special interest to the end user or when the type of test has particular value because of the type of system or application being supported. Management needs to be well aware of the functionality and value of each test tool. A list of tool benefits needs to be provided within the proposal, as discussed in Section 2.2.

2.3.1.2 Criteria for Selecting the Correct Tool

The return on investment obtained by using an automated test tool largely depends on the appropriate selection of a test tool. Automated test tools can vary from ones having simple functionality to those offering complex functionality, and their performance can vary from mediocre to excellent. A test tool with only minimal functionality will often cost less than a tool with extensive functionality. The challenge for the test engineer is to select the best test tool for the organization and/or the particular test effort and to understand what types of tools are available that could meet these needs. For example, a requirements management tool such as DOORS can also be employed as a test management tool, even though DOORS is not advertised as such. With regard to selecting a test tool, the test engineer needs to outline the most important criteria for tool selection.

The criteria definition guidelines given here are provided for the purpose of supporting budget planning; they are covered in detail within Section 3.1:

• Gather third-party input from management, staff, and customers regarding tool needs.

• Select tool criteria to reflect the organization’s system engineering environment.

• Specify tool criteria based on long-term investment assumptions.

• Ensure that the tool will be usable in many testing phases.

It is important to point out in the proposal that determining the tool requirements and researching and evaluating the various tools will require personnel resources, time, and money.
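One simple way to make these criteria concrete in the proposal is a weighted scorecard, sketched below. The criteria weights, candidate tool names, and 1-to-5 scores are hypothetical placeholders, not recommendations.

```python
# A minimal sketch of a weighted scorecard for ranking candidate tools.
# Criteria weights, tool names, and 1-5 scores are hypothetical.
weights = {
    "fits engineering environment": 0.35,
    "long-term investment value": 0.25,
    "usable across test phases": 0.25,
    "stakeholder input": 0.15,
}

candidates = {
    "Tool A": {"fits engineering environment": 4,
               "long-term investment value": 3,
               "usable across test phases": 5,
               "stakeholder input": 4},
    "Tool B": {"fits engineering environment": 5,
               "long-term investment value": 4,
               "usable across test phases": 3,
               "stakeholder input": 3},
}

for tool, scores in candidates.items():
    total = sum(weights[c] * scores[c] for c in weights)
    print(f"{tool}: weighted score {total:.2f} out of 5")
```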

2.3.1.3 Tool Cost Estimate

After describing the benefits of the proposed automated test tools, it is necessary to estimate their cost. It may be necessary to outline a phased implementation of the test tools so that costs can be spread over a period of time. For larger purchases, pricing discounts may be available. A good resource for obtaining tool feature and cost information quickly is the World Wide Web.

Once the test team has identified the prospective cost of an initial automated test tool purchase, it is valuable to perform a quick assessment of whether this cost is in line with management’s expectations. Recall that, in the steps described in Section 2.1, the test team did some investigative research to ascertain and help shape management expectations. Did management express a tolerable cost range at that stage? Does the estimate for an initial tool purchase lie within this range? Again, a plan for the phased implementation of the test tools may need to be modified to align the short-term implementation strategy with the budget reality. Another option is to make a case for augmenting the budget so as to align the budget reality with the test performance requirements of the organization. A cost-benefit analysis should be conducted, with the test team ensuring that funding is available to support this analysis.

Provided that management expectations, tool purchase budgets, and test tool costs are consistent, it is beneficial to organize a vendor demonstration of the proposed test tool for management, complete with a presentation that reiterates the tool’s benefits to the organization. The presentation may also need to revisit tool cost. Costs associated with the implementation of the test tool may include costs to upgrade hardware to meet performance requirements, any necessary software maintenance agreements, hotline support, and tool training.

If, at the test tool proposal stage, the test team members are unsure which specific tool they prefer, it may be necessary to estimate tool costs by providing a cost range for each kind of automated test tool of interest. If the test team has identified a very capable test tool that meets the organization’s requirements but is significantly more costly than the planned budget allows, several options are available. First, the test team could select a less expensive tool that supports test requirements adequately for the near term. Second, it could outline the cost savings or performance-enhancing benefits in a way that convinces management that the tool is worth the up-front investment. Third, it could scale down the implementation of the test tool and plan for additional implementation during the next budget period.
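A phased cost estimate can be reduced to simple arithmetic, as in the sketch below. Every figure is a hypothetical placeholder intended only to show the structure of the estimate, not real pricing.

```python
# A minimal sketch of a phased tool cost estimate. Every figure is a
# hypothetical placeholder showing the arithmetic, not real pricing.
LICENSE_COST = 3000        # per seat, assumed
TRAINING_COST = 1200       # per person, assumed
MAINTENANCE_RATE = 0.18    # annual maintenance as a fraction of license cost

phases = {
    "Phase 1 (pilot)":   {"licenses": 2, "training_seats": 3},
    "Phase 2 (rollout)": {"licenses": 8, "training_seats": 10},
}

for phase, plan in phases.items():
    licenses = plan["licenses"] * LICENSE_COST
    training = plan["training_seats"] * TRAINING_COST
    maintenance = licenses * MAINTENANCE_RATE
    print(f"{phase}: licenses ${licenses:,}, training ${training:,}, "
          f"annual maintenance ${maintenance:,.0f}")
```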

2.3.1.4 Additional Time to Introduce Tool

A major concern when selecting a test tool is its impact on, and fit with, the project schedule. Will there be enough time for the necessary people to learn the tool within the constraints of the schedule? If there is not sufficient time to support implementation of a sophisticated tool, can the team deploy an easy-to-use tool instead? The scope of automated testing might be reduced, but the effort might nevertheless benefit from the use of this type of tool. This approach may not be advisable in instances where money for software support tools is difficult to obtain. Spending the money on a less expensive, easy-to-use tool that offers minimal functionality may put at risk the test team’s ability to follow up later with the procurement of a tool that better supports the organization’s long-term outlook.

Where the project schedule does not include enough time for introducing an appropriate test tool for the organization, it may be advisable to decide against implementing an automated testing tool. By postponing the introduction of a test tool until a more opportune time, the test team may avoid the risk of applying the right tool on the wrong project or selecting the wrong tool for the organization. In either case, the test tool likely will not be received well, and those who might otherwise become champions for the use of automated test tools may instead turn into their biggest opponents.

Provided the project schedule permits the introduction of an appropriate test tool for the organization, then the test team needs to ensure that the tool is launched in a manner that leads to its adoption. After all, if no one in the organization uses the tool, the effort to obtain and incorporate it will have been wasted.

2.3.1.5 Tool Expertise

When introducing automated testing tools, many believe—incorrectly—that the test team skill set does not need to include technical skills. In fact, the test team skill profile needs to include personnel with technical expertise on the operating system, database management system, network software, hardware device drivers, and development support software, such as configuration and requirements management tools. In addition to these skills, a proficiency in the scripting language of the automated test tool is necessary.

Some members of the test team should have a technical or software development background, ensuring that the features of the automated tool will be exercised sufficiently. Likewise, the test team needs to maintain its manual testing expertise.

When the test team consists of nontechnical individuals, it may need to obtain commitment from management to augment the test group with additional test tool experts. For example, the team might hire new personnel, borrow personnel from other projects, or use outside consultants. If enough lead time exists, a software professional might be retained to become more proficient in the use of the tool. This individual may then be able to provide tool leadership to others on the test team. (Chapter 5 further discusses test team composition strategies.)

The introduction of a new test tool to the project or organization adds short-term complexity and overhead. Additional effort is expended to support tool evaluation and implementation, as well as to conduct test planning and development. The appropriate test team composition may help mitigate performance risk, especially when the team is populated with individuals who have a strong technical background.

2.3.1.6 Tool Training Cost

The increased use of automated test tools on software projects has correspondingly reduced the amount of manual test activity. Even though the know-how and analytical skills that pertain to manual testing will always be needed on the test effort, expertise with automated test tools and test automation must be developed. Test engineers, as a result, need to broaden their skill set to include more technical skills and more experience with automated test tools. Some test engineers may take the initiative on their own to obtain additional technical training. These individuals may also volunteer their time on projects involving automated test tools so as to obtain further experience with such tools.

Managers supporting project planning or project start-up for an effort involving automated testing must carefully consider the test team’s composition. Do members of the test team need refresher training on the pertinent automated test tools? Do some members of the team lack automated test experience altogether?

The test tool proposal should specify the cost of training required to successfully implement the automated test tool. The test team, in turn, needs to identify each individual who requires training and specify the kind of training necessary. Such training may include refresher courses on a specific test tool, introductory training on a specific tool for the individual who does not have experience with it, or more advanced test design and development training that could have application to a wide variety of test tools.

Once a shopping list of training needs has been developed, the test team should identify organizations that offer the desired training. Cost estimates from several sources should be obtained for each type of training required. Organizations that offer such training include test tool manufacturers and test consulting organizations.

Test teams may also wish to consider the temporary use of test consultants as mentors on the project. Such consultants may be able to provide helpful guidance to the members of the test team in areas such as test design and test development. Note, however, that the test team should not depend entirely upon consultants for the execution of the test program. If it does, then once the consultants leave and are no longer available to provide support, test program repeatability within the remaining test organization will decrease and automated script maintenance could become difficult.

The automated test tool proposal should list the costs associated with various sources of training and mentoring that will be required for a specific project or organization. These costs may be rough estimates based upon information obtained from Web sites or from telephone conversations with training organizations.

In summary, the test team must develop test tool expertise if it is to take advantage of the powerful time-saving features of the tool. Training may be required at different levels. For instance, training may be required for the test engineer who has a manual test background, for the engineer who has automated test tool experience but not with the specific tool being applied, or for a business user or other professional who is being temporarily assigned to the test team.

2.3.1.7 Tool Evaluation Domain

As part of the test tool proposal, consideration should be given to the method with which the test tool or tools will be evaluated. For example, the test tool will need to be exercised during the evaluation against a particular application and a particular operating environment. This evaluation itself may carry a cost that must be factored into the proposal. In addition, it may require advanced coordination and approvals from several different managers. These cost and logistics factors should be made apparent in the test tool proposal.

The particular application and the particular operating environment utilized to support test tool evaluation may, in fact, be the first project on which the test team seeks to implement the test tool. Chapter 3 provides a set of pilot application selection guidelines that a test team can follow when picking a pilot application and listing the findings of the pilot in the test tool proposal.

When selecting an evaluation domain, it is beneficial to pick an application-under-test within the organization that has high visibility. A long-term goal for the test team is for the organization’s testing expertise to be held in high regard. If one or more success stories become known, other test teams within the organization will have an easier time advocating and implementing automated testing. Likewise, interest in automated testing will spread among the organization’s application developers.

When first implementing a test tool on a high-visibility project, the benefits of success are great—but so is the downside of failure. It is important that the test team follow a defined process when introducing automated test tools to a new project team. Chapter 4 includes a further discussion on the process of introducing automated test tools.

2.3.1.8 Tool Rollout Process

Another concern for the test proposal involves the method by which the test tool is implemented or rolled out within the organization. Once a test tool has successfully passed through the decision points of initial selection, tool evaluation, and final management approval, the test team needs to execute a plan to roll out the tool to the target project or projects.

The test team may advocate that a particularly strong application developer or test engineer be assigned responsibility for reviewing the test tool, developing simplified implementation procedures, and then teaching or mentoring members of the test team in the tool’s use. This person can become the tool champion. Alternatively, the test team may assign a selected tool to several developers, who then learn its functionality, streamline access to it, and teach other developers.

Experience has shown that the best rollout strategy involves the use of a separate test team to implement the tool. Under this strategy, a separate test team evaluates the new tool, gains expertise in its use, and then assists project teams in rolling out the tool. The use of such an independent test team represents a structured approach that generally eliminates many frustrating weeks of inefficient concurrent trial-and-error learning by application developers. For more information on the setup of test teams, see Chapter 5; for more information on test tool rollout strategies, see Chapter 4.

In addition, the test team may wish to use test tool presentations and demonstrations so as to increase tool buy-in and interest throughout the project or organization. It might also post information about one or more automated test tools on the organization’s intranet—it is, after all, important to advertise the potential benefits of the various tools. The test team might consider organizing a test tool user group within the organization so as to transfer knowledge about the tool.

If the test tool proposal is accepted and funded by management, the test team then needs to obtain permission to proceed with test automation. It should next complete test tool selection and evaluation as outlined in Chapter 3, and follow the guidelines in Chapter 4 on rolling out an automated test tool on a new project.

Chapter Summary

• The steps outlined within the automated test decision process provide a structured way of approaching the decision to automate testing. These step-by-step instructions guide the test engineer toward making a decision on whether the software application is suitable for automated testing.

• One step in moving toward a decision to automate testing on a project requires that the test team make sure that management understands the appropriate application of automated testing for the specific need at hand.

• Another step in moving toward a decision to automate testing on a project requires that the test team decide how much of the test effort can be supported using an automated test tool given the type of application being developed, the hardware environment, and the project schedule.

• The benefits of automated testing (when implemented correctly) may include a reduction in the size of the test effort, a reduction of the test schedule, the production of a reliable system, and the enhancement of the test process.

• With proper planning, an appropriate test tool, and a defined process for introducing automated testing, the total test effort required with automated testing can eventually represent only a fraction of the test effort required with manual methods.

• An automated test tool cannot be expected to support 100% of the test requirements of any given test effort.

• Automated testing may increase the breadth and depth of test coverage, yet there will still not be enough time or resources to perform a 100% exhaustive test.

• The optimal value of test automation is obtained through the proper match of a test tool with the technical environment and the successful application of the Automated Test Life-cycle Methodology (ATLM).

• To obtain management backing for the resources needed, it is beneficial to develop an automated test tool proposal. This proposal should define the organization’s test tool requirements and benefits and provide a cost estimate. The test tool proposal may be especially helpful in persuading management to set aside future budget dollars for test tool support.
