Appendix E. Best Practices

Following the ATLM, as outlined in this book, constitutes the adoption of a set of proven practices. This appendix summarizes those practices and offers additional recommendations and suggestions for the development and execution of automated testing. These best practices are aimed at helping the test team avoid the kinds of test program missteps that consume test engineers' time and increase test program effort. General practices that have served as ongoing themes in this book include the notion that not all test requirements can be supported by test automation. Other general practices include thorough test planning and related management activities, such as tracking test environment development activities.

Good planning adheres to the philosophy that the test effort must be approached from a systematic, structured, step-by-step perspective. Each test activity should be valuable in and of itself, and it should support the next step in the test life-cycle process. In some cases, patience is a virtue, especially when the application-under-test (AUT) is unstable and constantly undergoing change. During such a timeframe, the test team needs to postpone the automation of black-box test procedures and instead focus its time and effort on other test activities.

Test automation should be treated as a professional discipline, performed much like software development. Best practices and standards for test automation should be developed and applied just as they would be for a software development effort. Several test automation best practices are described in Table E.1.

Table E.1. Best Automated Testing Practices

Documented process (E.1)
Managed expectations (E.2)
Pilot project (E.3)
Test tool compatibility checks (E.4)
Test tool upgrades (E.5)
Baselined system setup and configuration (E.6)
Software installations in the test environment baseline (E.7)
Overall test program objectives (E.8)
Keep automation simple (E.9)
Test procedure design and development standards (E.10)
Automated versus manual test analysis (E.11)
Reuse analysis (E.12)
Test team communication with other teams (E.13)
Schedule compatibility (E.14)
Customer involvement (E.15)
Defect documentation and reporting (E.16)
Automated test advocates and experts (E.17)
Test team assignments (E.18)
User group participation (E.19)
Test tool improvement suggestions (E.20)
Become a beta testing site (E.21)
Specialty topic experts (E.22)

E.1 Documented Process

The test team cannot automate a process that is not defined. Automated test tools do not impose a process on the test engineer; rather, they support a test process. Tools also do not offer application expertise. As a result, technical problems can obscure process issues.

The test team should start by documenting the current test life-cycle process used by the organization and then modify this process to reflect the incorporation of automated test tools. For a detailed discussion of test process definition, refer to Chapter 4.

E.2 Managing Expectations

The long-term success of the test team depends upon its ability to obtain the capital, training, and personnel resources needed to adequately perform test activities on projects. The political leverage required to obtain these resources is achieved by test team performance that is consistent with senior management expectations within the organization. Chapter 2 covers the establishment of expectations, communication of the benefits of test automation, and the salesmanship of test automation to management—all important facets of a winning political strategy.

E.3 Pilot Project

The test team needs to obtain experience with an automated test tool and test automation practices on a small project before undertaking automated testing on a large project. It is beneficial to identify an application in development that could be used as the pilot project on which to apply test automation or a new test tool for the first time.

A good strategy is to first apply test automation in an isolated test environment (test lab) and then on a pilot project before initiating test automation on a wide scale across two or more projects concurrently. (Refer to the pilot application selection guidelines outlined in Chapter 3.) Ideally, the test environment and pilot project will be similar enough to the organization's typical projects that the test team can be adequately prepared for a larger-scale test effort. Note also that testing within the pilot project environment serves as an evaluation of the test tool's actual performance on a first project.

E.4 Test Tool Compatibility Checks

It is desirable to have candidate test tools checked out within an isolated environment before they are installed within the target test environment. If part of the target application exists in some form at the time of test tool consideration, then the test team should install the test tool or tools along with the application and determine whether the two are compatible. If the compatibility check reveals problems, the test team will need to investigate whether work-around solutions are possible. See Chapter 4 for further discussion of compatibility checks.

When third-party controls (widgets, OCX, or ActiveX) are used to develop the application, the development team needs to know in advance whether the automated tool is compatible with each third-party control. It is a good idea to ask the tool vendor for a list of third-party controls with which the tool is compatible and to hand this list to the developers who plan to implement these various controls.
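One way to support such a check is to inventory the controls actually registered in the test environment and compare them against the vendor's list. The following is a minimal sketch, assuming a Windows environment; the vendor list file name, vendor_supported_controls.txt, is hypothetical, and the sketch is not any particular tool's API.

```python
# A minimal sketch (Windows only) that inventories the ActiveX/COM controls
# registered in the test environment so they can be compared against the tool
# vendor's published compatibility list. The vendor list file name is
# hypothetical.
import winreg

def registered_progids():
    """Yield ProgIDs that have an associated CLSID (registered COM controls)."""
    root = winreg.HKEY_CLASSES_ROOT
    index = 0
    while True:
        try:
            name = winreg.EnumKey(root, index)
        except OSError:  # no more subkeys
            break
        index += 1
        try:
            # Only keys with a CLSID subkey identify registered COM components.
            winreg.CloseKey(winreg.OpenKey(root, name + r"\CLSID"))
            yield name
        except OSError:
            continue

if __name__ == "__main__":
    with open("vendor_supported_controls.txt") as handle:
        supported = {line.strip() for line in handle}
    for progid in registered_progids():
        status = "supported" if progid in supported else "CHECK WITH VENDOR"
        print(f"{progid}: {status}")
```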

Also note that test tools need to be compatible with both the current software application and future development environments. The primary test tool adopted by the organization should be compatible with the primary development tools in the market. The test team should keep abreast of the evolution of development tools and environments, enabling it to anticipate the environment in which testing will be applied in the near future.

One special concern is the availability of memory to support both the application and the automated test tool.

E.5 Test Tool Upgrades

A test team may have extensively tested a test tool in an isolated environment and then applied the test tool on one or more projects. As a result, the test team may be familiar with the particular test tool, capable of using it effectively, and confident that it works as specified within the product literature. Suppose that the test tool vendor announces a bigger and better version of the test tool product as part of a product upgrade. Should the test team immediately adopt the new version and apply it to the project at hand?

The test team will want to make sure that the scripts created in the old version of the tool still work with the new version of the tool. Most vendors promise backward compatibility, meaning that scripts created in the current version of the tool can be reused in the new version of the tool, without any changes to the existing scripts. Past experience, however, indicates that these promises are not always fulfilled.

For example, one test engineer named Neil worked on a project that had invested as much as $250,000 in an automated test tool. When the new version of the test tool was implemented in the test environment, Neil discovered that it supported MS Exchange as the mail engine; the previous version of the tool and Neil’s company, on the other hand, employed MS Mail. A work-around solution was identified that required the installation of both MS Mail and MS Exchange on desktop computers that were outfitted with the automated test tool. This approach allowed MS Mail to be used to support e-mail requirements, while MS Exchange supported defect notification and coordination requirements. This elaborate effort could have been avoided if the test tool vendor had brought this major change to the attention of the test team.

Any test team needs to approach the use of a new product release with caution. Some tests should be carried out to ensure that test team activities can still be performed with the new version in the same manner as with the previous test tool release. The test team also needs to make sure that the test tool upgrade doesn't adversely affect the current system setup. One valuable check is to ensure that the new version of the test tool remains compatible with the organization's systems engineering environment. Without these types of checks, the test team may suddenly discover that the test tool upgrade has disrupted its ability to perform activities with other tools within the test environment. When the team discovers that part of the test tool no longer functions in the same way, it may need to identify or develop an extensive work-around solution.
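One simple safeguard is a backward-compatibility smoke test that replays the existing script library under the new tool version before the upgrade is rolled out. Below is a minimal sketch; the "toolrunner" executable, its --run flag, and the .scr script extension are hypothetical stand-ins for whatever the actual tool provides.

```python
# A minimal sketch of a backward-compatibility smoke test, assuming the test
# tool exposes a command-line runner. The runner name, its flag, and the
# script extension are hypothetical.
import subprocess
from pathlib import Path

SCRIPT_DIR = Path("tests/scripts")  # location of the existing script library

def smoke_test_upgrade():
    """Replay every existing script under the new tool version."""
    failures = []
    for script in sorted(SCRIPT_DIR.glob("*.scr")):
        result = subprocess.run(
            ["toolrunner", "--run", str(script)],
            capture_output=True, text=True, timeout=600,
        )
        if result.returncode != 0:
            failures.append((script.name, result.stderr.strip()))
    return failures

if __name__ == "__main__":
    for name, error in smoke_test_upgrade():
        print(f"BACKWARD-COMPATIBILITY FAILURE: {name}: {error}")
```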

E.6 Baselined System Setup and Configuration

To ensure the integrity of the current system environment, it is a good practice for the test team to back up the current system setup/configuration baseline before installing any new automated test tool or new version of an automated test tool. Specifically, it is beneficial to back up .dll files prior to new installations to ensure that these files are not overwritten. This precautionary activity should be performed even when the test team has successfully tested the new test tool software within an isolated test environment.
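The backup can be scripted so that it is run the same way before every installation. The sketch below snapshots shared .dll files and records checksums so that any file an installer overwrites can be identified and restored; both directory paths are hypothetical and should be adapted to the actual test environment.

```python
# A minimal sketch that snapshots shared .dll files and records checksums
# before a tool installation. The paths are hypothetical.
import hashlib
import shutil
from pathlib import Path

SYSTEM_DIR = Path(r"C:\Windows\System32")       # directory to baseline
BACKUP_DIR = Path(r"D:\baselines\pre_install")  # where the snapshot goes

def backup_dlls():
    BACKUP_DIR.mkdir(parents=True, exist_ok=True)
    manifest = []
    for dll in SYSTEM_DIR.glob("*.dll"):
        shutil.copy2(dll, BACKUP_DIR / dll.name)  # copy2 preserves timestamps
        digest = hashlib.sha256(dll.read_bytes()).hexdigest()
        manifest.append(f"{dll.name},{digest}")
    # The manifest lets the team detect exactly which files an installer changed.
    (BACKUP_DIR / "manifest.csv").write_text("\n".join(manifest))

if __name__ == "__main__":
    backup_dlls()
```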

E.7 Software Installations in the Test Environment Baseline

It is a good practice to avoid the installation of unnecessary new software within the target test environment once the test environment becomes operational and has been baselined. On one occasion, a test engineer installed a product called Microsoft Plus! that provided elaborate new mouse icons and screen savers. Test scripts that had worked perfectly prior to the installation of Microsoft Plus! no longer worked, however. The test environment had changed, and the change affected test integrity.

E.8 Overall Test Program Objectives

The test team needs to be careful to avoid the trap of becoming consumed with the development of test scripts that distract the test engineer from his or her primary mission—finding errors. Specifically, the test engineer should not overautomate tests and should resist the impulse to automate a test that is more effectively performed manually. An obvious example is spending three days automating a test that could have been performed manually in a few minutes.

Remember to analyze what should be automated. It is not necessary or feasible to automate everything. Why take days to automate a feature that can be tested by hand in five minutes, is used frequently through normal use of the program, and will be tested heavily only at key milestones?

E.9 Keep Automation Simple

The most elaborate testing script is not always the most useful or cost-effective way of conducting automated testing. When using a table-driven approach, the test team should keep in mind the size of the application, the size of the test budget, and the return on investment that might be expected from applying a data-driven approach. Consider the example of the test engineer named Bill, who demonstrated an elaborate table-driven approach at a test tool user group meeting. Bill had developed a significant number of scripts to support a data-driven approach, even though the application functionality he was trying to test in an automated fashion was quite basic—that is, simple record add, delete, and update functions. It would have been much more efficient to use a data file to enter the various records, which would have amounted to a test development effort of no more than half an hour. The resulting script also could be reused as often as necessary. The table-driven approach that Bill presented took him two weeks to develop.
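To make the contrast concrete, the following is a minimal sketch of the simpler data-file approach, assuming the records live in a plain CSV file; the apply_record() function is a hypothetical stand-in for the test tool's own playback call.

```python
# A minimal sketch of the data-file approach: the records to add, update, and
# delete live in a plain CSV file, and one short script replays them.
import csv

def apply_record(action, record):
    """Placeholder for the test tool's add/update/delete playback call."""
    print(f"{action}: {record}")

def run_data_driven_test(data_file="records.csv"):
    with open(data_file, newline="") as handle:
        for row in csv.DictReader(handle):
            action = row.pop("action")  # each row names its own action
            apply_record(action, row)

if __name__ == "__main__":
    # records.csv might look like:
    #   action,name,phone
    #   add,Smith,555-0100
    #   update,Smith,555-0199
    #   delete,Smith,
    run_data_driven_test()
```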

Test engineers, for example, might find a way to use a GUI test tool to circumvent the GUI entirely through the use of API or RPC calls. This elaborate effort may produce an elegant test automation solution, but at what cost? Test engineers should be careful not to spend more time programming the automation code than the test schedule and budget allow, or than it would take to develop the AUT itself.

E.10 Test Procedure Design and Development Standards

Test engineers need to take a disciplined approach to test procedure design and development. Specifically, they need to adhere rigorously to design and development standards so as to promote maximum reusability and maintainability of the resulting automated testing scripts. Test engineers need to flag those test procedures that are repetitive in nature and therefore lend themselves perfectly to automation. Initially, the test team should focus on the augmentation of manual tests through automation. Test development should be a natural extension of the detailed test design activities, performed consistently with the guidelines described in Chapter 7. In particular, high-risk and mission-critical functionality should be addressed early within the test development and execution schedules.

E.11 Automated versus Manual Test Analysis

Part of the test design effort outlined in Chapter 7 involves an analysis intended to determine when to automate and when to test manually. The test team should realize that not everything should be automated immediately. Instead, it should take a step-by-step approach to automation. It is wise to base the automation effort on the test procedure execution schedule, with one goal being to not duplicate the development effort.

E.12 Reuse Analysis

As described in Chapter 8, the test engineer needs to conduct reuse analysis of already-existing test scripts, in an effort to avoid duplication of automation efforts. Test resources are limited, yet expectations of test team support may exceed what was budgeted. As a result, the test team cannot afford to squander precious time and energy duplicating the test effort. Measures that can be taken to avoid duplication include performing a disciplined test design. Consider the example of test procedure scripts being allocated strictly according to functional areas of testing. Such an allocation can needlessly multiply the number of automated test scripts produced. In this situation, the test design should include a strategy for having the test procedures cut across several functional areas.
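A lightweight way to support such reuse analysis is to keep a searchable inventory of the existing script library. The sketch below indexes scripts by the keywords in their header comments so an engineer can look for overlap before writing a new script; the .scr extension and the apostrophe comment convention are hypothetical.

```python
# A minimal sketch of a script-library inventory to support reuse analysis.
# The script extension and comment convention are hypothetical.
from pathlib import Path

SCRIPT_DIR = Path("tests/scripts")

def build_index():
    """Map each script name to the keyword set found in its header comments."""
    index = {}
    for script in SCRIPT_DIR.rglob("*.scr"):
        header = script.read_text(errors="ignore").splitlines()[:5]
        keywords = {
            word.lower()
            for line in header if line.startswith("'")  # comment lines
            for word in line.strip("' ").split()
        }
        index[script.name] = keywords
    return index

def find_overlap(index, candidate_keywords):
    """Return existing scripts that share keywords with a proposed new script."""
    wanted = {word.lower() for word in candidate_keywords}
    return [name for name, keywords in index.items() if keywords & wanted]

if __name__ == "__main__":
    print(find_overlap(build_index(), ["record", "add", "update"]))
```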

E.13 Test Team Communication with Other Teams

The test team cannot work in isolation. Instead, it needs to be involved from the beginning of the system development life cycle and partner with all teams involved in the life cycle so as to implement an efficient test program. The test team needs to clarify any automated test tool add-on or code intrusion up-front with the developers. When source code must be expanded or augmented by inserting probes, wrappers, or additional statements around source code statements and functions, the tool is considered intrusive, and the developers need to be advised of this potential problem.
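For readers unfamiliar with what such intrusion looks like, the following is a minimal sketch of a logging probe wrapped around an application function. It illustrates the general technique only; it is not any particular tool's instrumentation.

```python
# A minimal sketch of an intrusive "probe": a wrapper that logs entry, exit,
# and exceptions around an application function.
import functools
import logging

logging.basicConfig(level=logging.INFO)

def probe(func):
    """Wrap a function with entry/exit logging, as an intrusive tool might."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        logging.info("ENTER %s args=%r kwargs=%r", func.__name__, args, kwargs)
        try:
            result = func(*args, **kwargs)
            logging.info("EXIT %s -> %r", func.__name__, result)
            return result
        except Exception:
            logging.exception("FAULT in %s", func.__name__)
            raise
    return wrapper

@probe
def compute_total(price, quantity):  # stands in for an application function
    return price * quantity

if __name__ == "__main__":
    compute_total(9.99, 3)
```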

For example, one test team deployed a test tool that required an add-on product when testing applications developed in a particular version of Visual Basic. The test team did not inform the developers until test execution (late in the process) that an add-on product was being used and that the cause of a defect might be attributable to it. The developers were surprised to learn of the add-on product. Afterward, the development team asserted that the add-on product had produced several of the reported defects. The test team realized that it would have been better to advise the developers of the use of this add-on earlier in the test process.

E.14 Schedule Compatibility

The project schedule should include enough time to accommodate the introduction and use of automated test tools. An automated test tool is best introduced at the beginning of the development life cycle. Early introduction ensures that the test team has adequate lead time to become familiar with the particular automated test tool and its advanced features. Sufficient lead time is also necessary so that system requirements can be loaded into a test management tool, test design activities can adequately incorporate test tool capabilities, and test procedures and scripts can be generated in time for scheduled test execution.

Without a review of the project schedule, the test team can find itself in a no-win situation, where it is asked to do too much with too little time. That is, expectations of test team performance may be greater than what the test team can actually deliver. Friction and animosity often result. The test team or the automated test tool may bear the brunt of the blame for schedule slippage and cost overruns. Test automation needs to be embraced by the project early in the development life cycle to be truly effective. Also, the test team needs to be selective when identifying which tests will be supported by test automation and which would be more effectively performed manually. Chapter 4 discusses the test team's review of the project schedule when contemplating the use of automation on the test effort.

E.15 Customer Involvement

When establishing the test environment, the test team needs to be conscious of how test results will ultimately be presented to senior management and to customers. The test engineer can gauge management and customer reaction to report formats and test result output early, to obtain a feel for how well the test tool output is understood. The test team may need to port test output to different office automation tools that allow test results to be more clearly understood and conceptualized.
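As a simple illustration, raw tool output can be ported to a spreadsheet-friendly format for review. The sketch below converts a hypothetical pipe-delimited log layout ("name | status | duration") into a CSV file; the actual tool's log format will differ.

```python
# A minimal sketch of porting raw test tool output into a CSV file for
# management and customer review. The log layout parsed here is hypothetical.
import csv

def port_results(log_file="tool_output.log", report_file="results.csv"):
    with open(log_file) as source, open(report_file, "w", newline="") as dest:
        writer = csv.writer(dest)
        writer.writerow(["Test Procedure", "Status", "Duration (s)"])
        for line in source:
            fields = [field.strip() for field in line.split("|")]
            if len(fields) == 3:  # skip blank or malformed lines
                writer.writerow(fields)

if __name__ == "__main__":
    port_results()
```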

The form and appearance of test output constitute an extremely important issue with regard to satisfying the application customer or end user. Although the test team could have the most wonderful script for producing test results, the customer or end user may not like the format and appearance of the output.

Another customer involvement issue pertains to the end user's or customer's understanding of the particular test strategies and test design implemented. This consideration is particularly important when the end user or customer was not involved in the project during the time that system and test requirements were developed. The best solution is to obtain customer involvement early in the life cycle and to maintain it throughout the test effort.

E.16 Defect Documentation and Reporting

During test procedure development and even before official test execution has begun, the test team may identify defects. If so, the team should document the defects and share them with the development team. It should avoid postponing defect documentation and reporting until the official start of test execution.

E.17 Automated Test Advocates and Experts

Before deciding to develop automated test procedures to support test requirements, the test team needs to be sure that personnel on the team are proficient in test automation. Ideally, the project manager should act as an automated testing advocate. Given the many misconceptions about automated testing (see Chapter 2), an automated testing advocate is necessary to resolve any misunderstandings surrounding the test effort.

For example, successful automated test efforts require that test engineers become familiar with the application early in the life cycle and that enough time be allocated in the schedule for the development of test procedures. Project management personnel who are not familiar with these requirements may not allocate enough time in the schedule for the test effort. When the test effort then falls behind schedule, the test tool may be viewed as the problem, causing test automation to be abandoned on the project. To overcome this obstacle, a project needs one or more automated testing advocates who understand the value of and the requirements for an automated test effort and who can effectively communicate the automated test life-cycle process to management and the rest of the test team.

E.18 Test Team Assignments

Although it is important to include an automated testing expert and advocate on the test team, not everyone on the team should be focused on performing test automation. When the entire test team focuses on automation, no one is left to perform test evaluation activities, manual tests, and other test analyses. It is best to separate these assignments, with one test engineer focusing on the automation of test scripts while another concentrates on the business issues associated with the AUT.

It is not uncommon for test engineers to become so involved in trying to automate one or more test scripts and develop the best automated testing library that they lose sight of the testing objective—to find defects. The test procedures may take longer to execute because the test engineer is trying to automate each one; problems that could have been discovered earlier through the use of manually executed scripts are therefore not found until each test procedure has been automated. Even with automation, the manual test effort is still a very valid approach for some tasks. The test engineer thus needs to conduct automation analysis and reuse analysis, and not lose sight of the fact that not everything can be automated.

E.19 User Group Participation

Numerous user groups have been established for leading test tools. The test team should ask the particular test tool manufacturer whether any user groups for the tool operate in the team's metropolitan area. At user group meetings, the test team can find out how others are using the tool, learn tips and tricks, and gather other useful information about the tool and its future.

Additionally, the test team can participate in many testing discussion groups or testing newsgroups to expand its collective testing knowledge.

E.20 Test Tool Improvement Suggestions

During the design of a test procedure, the test engineer may think of a better way to develop tests, provided that the test tool would support the idea. In these situations, the test engineer should send in an enhancement request to the test tool manufacturer. Companies welcome such suggestions. There are often places on the company Web site where test engineers are encouraged to submit their enhancement requests on-line.

E.21 Become a Beta Testing Site

Another great way of learning about the latest developments from the test tool vendor is to volunteer to become a beta tester. As a beta tester, the test engineer can discover problems specific to his or her AUT that can be escalated and fixed in time for the official release of the tool.

E.22 Specialty Topic Experts

No matter which tool the test team elects to use, some test automation experts will have thought of ways to improve upon the use of the tool. The test team should surf the Internet to identify free add-on utility software, automation ideas, and expert and mentoring support; these resources will increase the team's return on its automation investment. See the authors' Web site (http://www.autotestco.com/) for information on test tool expert support and links to various test tool training avenues.
