Chapter 8. Test Development

Continuing the mini-development life cycle.


Chapter 7 outlined the approach for performing test analysis and design. The test team is now ready to perform test development. Table 8.1 correlates the development process phases to the test process phases. In the table, the testing processes and steps are strategically aligned with the development process. The execution of these steps results in the refinement of test procedures at the same time that developers are creating the software modules. Automated and/or manual test procedures are developed during the integration test phase with the intention of reusing them during the system test phase.

Table 8.1. Development–Test Relationship


Many preparation activities need to take place before test development can begin. The test development architecture (Figure 8.1) provides the test team with a clear picture of the test development preparation activities or building blocks necessary for the efficient creation of test procedures. As described in Section 8.1, the test team will need to modify and tailor this sample test development architecture to reflect the priorities of their particular project. These setup and preparation activities include tracking and management of test environment setup activities, where material procurements may have long lead times. This preparation activity was described in detail within Chapter 6. Prior to the commencement of test development, the test team also needs to identify the potential for reuse of already-existing test procedures and scripts within the automation infrastructure (reuse library).

Figure 8.1. Building Blocks of the Test Development Architecture


The test team should develop test procedures to meet the test procedure development/execution schedule. This schedule allocates personnel resources and reflects development due dates, among other things. The test team needs to monitor development progress and produce progress status reports. Prior to the creation of a complete suite of test procedures, it performs a modularity relationship analysis. The results of this analysis help to define data dependencies, plan for workflow dependencies between tests, and identify common scripts that can be repeatedly applied to the test effort. As test procedures are being developed, the test team should perform configuration control for the entire testbed, including the test design, test scripts, and test data, as well as for each individual test procedure. The testbed needs to be baselined using a configuration management tool.

Test development involves the creation of test procedures that are maintainable, reusable, simple, and robust, which in itself can prove as challenging as the development of the application-under-test (AUT). Test procedure development standards need to be in place, supporting structured and consistent development of automated tests. Such standards can be based on the scripting language standards of a particular test tool. For example, Rational’s Robot uses SQABasic, a Visual Basic-like scripting language; the script development standards could therefore be based on the Visual Basic development standards.

Usually internal development standards exist that can be followed if the organization chooses to work in a language similar to the tool’s scripting language. The adoption or slight modification of existing development standards generally represents a better approach than creating a standard from scratch. If no development standards exist within the organization for the particular tool scripting language, then the test team must develop script development guidelines. Section 8.2 provides an example of test development standards and guidelines. Such guidelines can include directions on context independence, which specifies the particular place where a test procedure should start and where it should end. Additionally, modularity and reusability guidelines need to be addressed.
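The flavor of such guidelines can be sketched in a short harness in which every test procedure declares a fixed starting and ending context, so that procedures remain context independent and can be sequenced in any order. Everything in this sketch is hypothetical: the class names, the `navigate` call, and the `FakeApp` stand-in are illustrative, not part of any test tool's API.

```python
# Illustrative sketch of context-independent, modular test procedures.
# All names here (TestProcedure, FakeApp, navigate) are hypothetical.

class TestProcedure:
    """A test procedure that starts and ends at a known application state."""

    name = "unnamed"

    def setup(self, app):
        app.navigate("main_menu")   # context independence: fixed start point

    def steps(self, app):
        raise NotImplementedError   # each procedure supplies its own steps

    def teardown(self, app):
        app.navigate("main_menu")   # return to the start point when done

    def run(self, app):
        self.setup(app)
        try:
            self.steps(app)
        finally:
            self.teardown(app)      # cleanup runs even if a step fails


class FakeApp:
    """Stand-in for the application under test, for demonstration only."""
    def __init__(self):
        self.screen = "main_menu"
        self.log = []

    def navigate(self, screen):
        self.screen = screen
        self.log.append(screen)


class OpenSecurityScreen(TestProcedure):
    name = "OpenSecScreen"
    def steps(self, app):
        app.navigate("security_screen")


app = FakeApp()
OpenSecurityScreen().run(app)
print(app.screen)   # the procedure leaves the application where it started
```

Because every procedure begins and ends at the same known state, any procedure can follow any other without hidden dependencies, which is precisely what the context-independence guideline is meant to guarantee.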

By developing test procedures based on the development guidelines described in Section 8.2, the test team creates the initial building blocks for an automation infrastructure. The automation infrastructure, described in detail in Section 8.3, will eventually contain a library of common, reusable scripts. Throughout the test effort and in future releases, the test engineer can use the automation infrastructure to support reuse of archived test procedures, minimize duplication, and thus enhance the entire automation effort.

8.1 Test Development Architecture

Test team members responsible for test development need to be prepared with the proper materials. These personnel should follow a test development architecture that, for example, lists the test procedures assigned and the outcomes of automated versus manual test analysis. Additionally, the test engineers should adhere to the test procedure development and execution schedule, test design information, automated test tool user manuals, and test procedure development guidelines. Armed with the proper instructions, documentation, and guidelines, they will have a foundation of information that allows them to develop a more cohesive and structured set of test procedures. Note that the test team’s ability to repeat a process and repeatedly demonstrate the strength of a test program depends on the availability of documented processes and standard guidelines such as the test development architecture.

Figure 8.1 illustrates the major activities to be performed as part of one such test development architecture. Test development starts with test environment setup and preparation activities. Once they are concluded, the test team needs to ensure that information necessary to support development has been documented or gathered. It will need to modify and tailor the sample test development architecture depicted in Figure 8.1 to better reflect the priorities of its particular project.

8.1.1 Technical Environment

Several setup activities precede the actual test procedure development. Test development must be supported by a technical environment that facilitates the creation of test procedures, and this environment must be set up and ready before development begins. The technical environment may include facility resources as well as the hardware and software necessary to support test development and execution. The test team needs to ensure that enough workstations are available to support the entire team. The various elements of the test environment need to be outlined within the test plan, as discussed in Chapter 6.

Environment setup activities can also include the use of an environment setup script, as described in Section 8.3, as well as the calibration of the test tool to match the specific environment. When test tool compatibility problems arise with the AUT, work-around solutions must be identified. The test procedure development schedule should be consistent with the test execution schedule, and the test team should follow the test procedure development guidelines.

The test team will need to ensure that the proper test room or laboratory facilities are reserved and set up. Once the physical environment is established, the test team must verify that all necessary equipment is installed and operational. Recall that in Chapter 6, the test plan defined the required technical environment and addressed test environment planning. Also within the test environment section of the test plan, the test team should have already identified operational support required to install and check out the operational readiness of the technical environment. These staff members must likewise ensure that operational support activities have been properly scheduled and must monitor progress of these tasks.

Specific tasks and potential issues outlined in the test plan should have been addressed and resolved at this point. Such issues could include network installation, network server configuration and allocated disk space, network access privileges, required desktop computer processing speed and memory, number and types of desktop computers (clients), video resolution requirements, and any additional software required to support the application, such as browser software. Automated test tools should have been scheduled for installation and assessment. These tools now should be configured to support the test team and operate within the specific test environment.

As part of test environment setup, the test team tracks and manages the various setup activities, some of which involve material procurements with long lead times. These activities include the scheduling and tracking of environment setup tasks; the installation of test environment hardware, software, and network resources; the integration and validation of test environment resources; the procurement and refinement of test databases; and the development of environment setup scripts and testbed scripts.

The hardware supporting the test environment must be able to ensure complete functionality of the production application and to support performance analysis. In cases where the test environment uses hardware resources that also support other development or management activities, special arrangements may be necessary during actual performance testing. The hardware configuration supporting the test environment needs to be designed to support processing, storage, and retrieval activities, which may be performed across a local or wide area network, reflecting the target environment. During system testing, the software configuration loaded within the test environment must be a complete, fully integrated release with no patches and no disabled sections.

The test environment design also must account for stress testing requirements. Stress and load tests may require the use of multiple workstations to run multiple test procedures simultaneously. Some automated test tools include a virtual user simulation functionality that eliminates or greatly minimizes the need for multiple workstations under these circumstances.
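The virtual-user idea can be sketched with ordinary threads: a single workstation drives many concurrent "users" instead of dedicating a machine to each. The transaction performed by each virtual user below is a placeholder; a real load test would issue requests against the AUT.

```python
# Sketch of virtual user simulation: one workstation drives many concurrent
# "users" with threads instead of separate machines. The transaction body is
# a stand-in for whatever request a real load test would exercise.
import threading

results = []
lock = threading.Lock()

def virtual_user(user_id, transactions=5):
    for i in range(transactions):
        outcome = f"user{user_id}-txn{i}-ok"   # placeholder for a real request
        with lock:                              # serialize access to shared list
            results.append(outcome)

threads = [threading.Thread(target=virtual_user, args=(u,)) for u in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(len(results))   # 10 virtual users x 5 transactions each
```

Commercial tools implement this far more elaborately (think times, ramp-up schedules, protocol-level playback), but the underlying economy is the same: concurrency in software replaces racks of physical workstations.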

Test data will need to be obtained with enough lead time to allow their refinement and manipulation so as to better satisfy testing requirements. Data preparation activities include the identification of conversion data requirements, preprocessing of raw data files, loading of temporary tables (possibly in a relational database management system format), and performance of consistency checks. Identifying conversion data requirements involves performing in-depth analysis on data elements, which includes defining data mapping criteria, clarifying data element definitions, confirming primary keys, and defining acceptable data parameters.
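A consistency check of the kind described above can be sketched as a small validation pass over raw records before they are loaded into temporary test tables: confirm that primary keys are unique and that values fall within acceptable parameters. The record layout and field names here are invented for illustration.

```python
# Sketch of a data preparation consistency check: unique primary keys and
# acceptable value ranges are verified before loading test tables.
# The claim-record fields are hypothetical.

raw_records = [
    {"claim_id": 101, "amount": 250.00, "status": "OPEN"},
    {"claim_id": 102, "amount": -40.00, "status": "OPEN"},    # bad amount
    {"claim_id": 101, "amount": 90.00,  "status": "CLOSED"},  # duplicate key
]

VALID_STATUSES = {"OPEN", "CLOSED"}

def consistency_check(records):
    errors = []
    seen_keys = set()
    for rec in records:
        if rec["claim_id"] in seen_keys:
            errors.append(f"duplicate primary key {rec['claim_id']}")
        seen_keys.add(rec["claim_id"])
        if rec["amount"] < 0:
            errors.append(f"amount out of range for {rec['claim_id']}")
        if rec["status"] not in VALID_STATUSES:
            errors.append(f"bad status for {rec['claim_id']}")
    return errors

errors = consistency_check(raw_records)
print(errors)   # problems to resolve before the data enters the testbed
```

Running such checks early, while the data is still in staging tables, leaves lead time to repair or re-procure bad data before test execution depends on it.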

During test planning, the test team defined and scheduled the test environment activities. Now it tracks the test environment setup activities. That is, it identifies the resources required to install hardware, software, and network resources into the test environment and integrate the test environment resources. The test environment materials and the AUT need to be baselined within a configuration management tool. Other test environment materials may include test data and test processes.

The test team will need to obtain and modify any test databases necessary to exercise software applications, and develop environment setup scripts and testbed scripts. In addition, it should perform product reviews and validate all test source materials. The location of the test environment for each project or task should be specified in the test plan for each project. Early identification of the test site is critical to cost-effective test environment planning and development.

8.1.2 Environment Readiness Checks

Once the environment setup and tracking activities have been performed, the test team is then ready to perform some final environment readiness checks. These checks include a review of the organization’s automation infrastructure (reuse library) to ascertain whether existing test program code can be applied to the project.

The test team should check the status of any software functionality that pertains to its test development assignment and ascertain the stability of the AUT. When an application is constantly changing—for example, in the case of GUI test automation development—automation efforts might prove futile. Experience shows that it is best to start automating when parts of the AUT are somewhat stable, meaning that functionality does not change constantly with each release. Ideally, a prototype of the application will exist and the test engineer can develop the object table layout for table-driven scripts. (See Section 8.3 for more detail on the use of table-driven scripts.)

The ATLM supports the iterative development approach, where a system development life cycle contains multiple iterative builds. This iterative methodology, which involves a “design a little, code a little, test a little” approach [1], is applied to each build. Before starting to execute test procedures, the test team must verify that the correct version of the AUT is installed. In a GUI application, the version is usually updated in the About selection of the opening GUI menu.
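Such a build check can itself be automated as the first step of a suite. The sketch below assumes a hypothetical `get_about_version` call standing in for whatever tool-specific mechanism reads the version string from the About box.

```python
# Sketch of an automated build check run before a test suite executes.
# get_about_version is a placeholder for a tool-specific query of the
# application's About box; the version strings are invented.

EXPECTED_BUILD = "3.2.1"

def get_about_version():
    return "3.2.1"   # a real script would read this from the AUT's About box

def verify_build(expected):
    actual = get_about_version()
    if actual != expected:
        raise RuntimeError(f"wrong build installed: {actual}, expected {expected}")
    return actual

print(verify_build(EXPECTED_BUILD))   # suite proceeds only on the right build
```

Failing fast here prevents an entire execution cycle from producing results against the wrong build, which would otherwise be discovered only during results analysis.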

8.1.3 Automation Framework Reuse Analysis

Prior to the commencement of test development, the test team needs to analyze the potential for reusing existing test procedures and scripts within the automation infrastructure (reuse library) that might already exist (if this project is not a first-time automation effort). In Chapter 7, the test design effort identified which test procedures were to be performed manually and which were to be automated. For each test procedure that will be supported by an automated test tool, the test team now needs to research the automation infrastructure (reuse library) to determine the extent to which existing test procedures can be reused. The test team will find it beneficial to modify the matrix that was already created during test design (see Table 7.15) and add a column listing existing test script library assets (see Table 8.2) that potentially can support the current test development. The outcome of this analysis provides input into the test modularity matrix.

Table 8.2. Automation Reuse Analysis


Table 8.2 gives a sample matrix depicting the results of a reuse analysis performed by one test team. In this example, the test team identified four test scripts available within the automation infrastructure that might be reusable as part of the current project. These test procedures were originally defined within the test procedure definition that was developed as part of the test design effort.
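The cross-referencing behind a reuse matrix like Table 8.2 can be sketched as a simple lookup of planned test procedures against the assets held in the reuse library. The procedure and script names below are invented for illustration, not taken from the table.

```python
# Sketch of an automation reuse analysis: cross-reference the planned test
# procedures against assets already in the reuse library. Names are
# hypothetical examples, not entries from Table 8.2.

planned_procedures = ["LogIntoApp", "OpenSecScreen", "TransferFunds", "CloseApp"]
reuse_library = {"LogIntoApp", "OpenSecScreen", "CloseApp", "PrintReport"}

reuse_matrix = [
    {"procedure": p, "reusable_asset": p if p in reuse_library else None}
    for p in planned_procedures
]

reusable = [row["procedure"] for row in reuse_matrix if row["reusable_asset"]]
print(reusable)   # procedures that need not be written from scratch
```

In practice the match is rarely exact by name; the test engineer still inspects each candidate script to judge how much modification it needs, but even a rough cross-reference like this scopes the development effort early.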

To enhance the usefulness of test script library assets within the automation infrastructure, the test team should be careful to adhere to test procedure creation standards, which are discussed in Section 8.3.

8.1.4 Test Procedure Development/Execution Schedule

The test procedure development/execution schedule is prepared by the test team as a means to identify the timeframe for developing and executing the various tests. The schedule takes into account various factors, such as the following:

• The individuals responsible for each particular test activity are identified.

• Setup activities are documented.

• Test procedure sequences and dependencies are included.

• Testing accommodates the various processing cycles that pertain to the application.

• Test procedures that could potentially conflict with one another are documented.

• Test engineers are allowed to work independently.

• Test procedures can be grouped according to specific business functions.

• Test procedures are organized in such a way that effort is not duplicated.

• The organization of test procedures considers the priorities and risks assigned to tests.

• A plan exists for the various testing phases and each activity in a particular phase.

The schedule aids in identifying the individual(s) responsible for each particular test activity. By defining a detailed test procedure development/execution schedule, the test team can help prevent duplication of test effort by personnel. The test procedure modularity-relationship model (described later in this chapter) is essential to developing this schedule.

The test procedure development/execution schedule needs to include test setup activities, test procedure sequences, and cleanup activities. Setup activities need to be documented to ensure that they adhere to configuration management standards, as it is essential that testing be performed within a controlled environment. For example, the test engineer needs to be able to return the environment to its original state after executing a particular test.

Test procedure sequence and dependencies need to be documented, because for many application test efforts, a particular function cannot be executed until a previous function has produced the necessary data setup. For example, a security access control cannot be verified until the security access privilege has been established. For a financial management application, a financial securities instrument cannot be transferred to a bank until the securities instrument has first been posted to an account and verified. Additionally, the financial management application might require a transaction summary or end-of-day report produced at the conclusion of business, which identifies the number of securities or funds delivered to the Federal Reserve for a particular day. Such an end-of-day summary report cannot be constructed and verified until the prerequisite has been met, that is, until an entire day’s transactions have been properly recorded.

The test procedure development/execution schedule must accommodate testing that examines the various processing cycles pertaining to the application. For example, a health information management system may need to perform daily, weekly, quarterly, and annual claim report transaction summaries. The setup activities required to produce and test a yearly claim summary report need to be included in the test procedure development and execution schedule.

The test procedure development/execution schedule will also need to document all test procedures that could potentially create conflicts, thereby allowing test engineers to execute functionality and workflow without running into unknown dependencies or wrongfully affecting one another’s outcome. It is beneficial to determine the order or sequence in which specific transactions must be tested so as to better accommodate control or workflow.

Test engineers will need to be able to work independently and must be able to share the same data or database. A test procedure development/execution schedule helps ensure that one test engineer does not modify or affect the data being manipulated by another test engineer. Such interference could potentially invalidate the test results produced by one or both of these team members. As noted earlier, the schedule also identifies the individuals who will be developing and performing the various test procedures and the sequence for when particular test procedures are executed; the goal is to avoid execution mishaps and to clarify roles and responsibilities.

One of the primary tasks to be undertaken when developing the test procedure development and execution schedule pertains to organizing tests into groups. For example, tests may be grouped according to the specific business function. By scheduling separate groups of tests, business functionality A, for example, can be assigned to test engineer Cordula. Business functionality B, on the other hand, can be assigned to test engineer Thomas. A third business functionality, C, could be assigned to test engineer Karl. By assigning specific functionality to a separate test engineer, the test manager will be better able to monitor progress and check status by setting due dates for the completion of each test procedure.

It is important that automated test procedures be organized in such a way that effort is not duplicated. The test team should review the test plan and the test design to verify that the results of test analysis and design are incorporated into the test schedule. When developing the test procedure execution schedule, the test team should be aware of the test schedule considerations discussed below.

The test procedure development/execution schedule must take into consideration the priorities and risks assigned to the various tests. The test schedule should place greater emphasis on execution of mission-critical and high-risk functionality. These tests should be performed early in the schedule, giving more time for this functionality to be tested and regression-tested as necessary. The test procedure execution schedule will need to document the breakdown of the various testing phases and discuss the activities in each phase (that is, functional, regression, or performance testing).

When defining the test procedure execution schedule, the test team must allow time to accommodate multiple iterations of test execution, time to correct documented discrepancies, and time to carry out the regression testing required to verify the proper implementation of software fixes. It is also important to plan for the release of multiple application builds that incorporate software fixes. As a result, the test procedure execution schedule must reflect the planned delivery for each new build, which may be daily or weekly, and may be defined with a detailed development schedule.

Inevitably, changes will occur within the project schedule or within the detailed development schedule. The test team must monitor these changes and alter the test schedule accordingly. The project schedule may slip in one situation, or the timeframe for system test activities may be shortened in another. Other modifications to the test schedule arise when functionality that was supposed to be implemented in a particular release is omitted instead. Additionally, personnel who had been assigned to support test activities may be reassigned.

Given the variety of changes that require adjustments to the test schedule, it is beneficial to baseline the original test schedule and each subsequent major change to the schedule. Each schedule change needs to be documented, using schedule-tracking systems such as the earned value management system discussed in Chapter 9. The original test schedule and subsequent baselined changes to it should be reviewed with the proper authority to obtain approval for each version of the schedule. Formal approval of each test schedule baseline helps to keep performance expectations in line with test program implementation.

The test procedure execution schedule can be created using a project scheduling tool or developed within a spreadsheet or a table using a word-processing package, as noted in Table 8.3. Test procedure execution is initiated after the execution of the environment setup scripts, which are also noted on this schedule. These scripts perform a variety of functions, such as setting up video resolution, shutting down screen savers, and checking and setting the date format.
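An environment setup script of this kind can be sketched as a pass over the required settings that corrects any that do not match the test configuration. The setting names and the in-memory dictionaries below are stand-ins for platform- or tool-specific calls.

```python
# Sketch of an environment setup script: check each environment setting and
# correct any that do not match the required test configuration. The settings
# and the dictionaries standing in for OS state are hypothetical.

REQUIRED = {
    "video_resolution": "1024x768",
    "screen_saver": "off",
    "date_format": "MM/DD/YYYY",
}

current = {
    "video_resolution": "800x600",
    "screen_saver": "on",
    "date_format": "MM/DD/YYYY",
}

def setup_environment(current, required):
    corrected = []
    for setting, value in required.items():
        if current.get(setting) != value:
            current[setting] = value      # a real script would call an OS API
            corrected.append(setting)
    return corrected

corrected = setup_environment(current, REQUIRED)
print(corrected)   # settings the script had to change before execution
```

Running such a script at the head of the execution schedule makes each test run start from the same known configuration, regardless of what the previous user of the workstation left behind.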

Table 8.3. Test Procedure Execution Schedule—System Test Phase


8.1.5 Modularity-Relationship Analysis

Prior to the creation of a complete suite of test procedures, it is very important to create a layout or logic flow that shows the interrelationship of the various scripts. This logic flow should reflect the test plan and the test team’s goals. It consists of the high-level design of the scripts and takes their integration into account. It can be based on a use case model or a system sequence diagram, among other workflow artifacts. An important reason for performing this modularity-relationship analysis is to identify any data dependencies or workflow dependencies between automated test procedures. The test engineer needs to be aware of the state of the data when performing tests. For example, a record must exist before it can be updated or deleted. Thus the test team must determine which scripts need to be run together in a particular sequence using the modularity-relationship matrix.

The modularity-relationship analysis also enables the test team to plan for dependencies between tests and to avoid scenarios where failure of one test procedure affects a subsequent test procedure. In addition, it allows test engineers to identify common scripts that can be repeatedly applied to the test effort.

The first step in creating the modularity-relationship matrix involves the creation of a visual flow using boxes for each section of the application under test. Next, the test engineer breaks down each box into smaller boxes or lists each component or screen for each section of the AUT. This process will bring to light dependencies (preconditions and post-conditions) that are key to developing, debugging, and maintaining the scripts.

The relationships between test scripts are reflected within a modularity-relationship matrix, as depicted in Table 8.4. Such a matrix graphically represents how test scripts will interact with one another and how they are related to one another, either by script modularity or via functional hierarchy. This graphical representation allows test engineers to identify opportunities for script reuse, thereby minimizing the effort required to build and maintain test scripts.

Table 8.4. Sample Modularity-Relationship Matrix


The test procedure modularity-relationship analysis will also help the test team organize the proper sequence of test execution, so that test procedures can be correctly linked together and played back in a specific order to ensure continuous flow of playback and maximum benefit. Modularity definition helps the team associate test procedures to the application design by using naming conventions and following the design hierarchy.

The modularity-relationship matrix shows how the various test procedures fit together and indicates how test procedures may be reused or modified. This matrix includes information such as the test procedure ID or names of the test procedures that must be executed prior to the start of a particular test procedure, as well as the data (or state of the data) that should already exist in the database being used for testing. It therefore helps to identify patterns of similar actions or events that are used by one or more transactions.
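The prerequisite column of such a matrix is, in effect, a dependency table, and a legal execution sequence can be derived from it mechanically. The sketch below represents the matrix as a dictionary of prerequisites and orders the procedures so that every prerequisite runs first; the procedure names follow the chapter's financial-instrument example but the table contents are illustrative.

```python
# Sketch of a modularity-relationship matrix as a dependency table, plus a
# topological ordering that derives a legal execution sequence from it.
# The entries are illustrative, not the contents of Table 8.4.

dependencies = {
    "LogIntoApp":        [],
    "OpenSecScreen":     ["LogIntoApp"],
    "CreateNewSecurity": ["OpenSecScreen"],
    "VerifySecurity":    ["CreateNewSecurity"],
    "DeliverSecurity":   ["VerifySecurity"],
}

def execution_order(deps):
    order, resolved = [], set()
    pending = dict(deps)
    while pending:
        # a procedure is ready once all of its prerequisites have run
        ready = [p for p, pre in pending.items() if set(pre) <= resolved]
        if not ready:
            raise ValueError("circular dependency among test procedures")
        for p in ready:
            order.append(p)
            resolved.add(p)
            del pending[p]
    return order

order = execution_order(dependencies)
print(order)   # a sequence in which every prerequisite runs first
```

The same table also flags the failure cascades discussed above: if `CreateNewSecurity` fails, everything downstream of it in the ordering is known in advance to be affected.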

8.1.6 Explanation of the Sample Modularity-Relationship Matrix

In the modularity-relationship matrix portrayed in Table 8.4, the Shell Procedure (Wrapper) Name column identifies the name of the parent procedure that calls subordinate procedures. In this case, the parent procedures are called “Set Sight Verify Privilege,” “Set Key Verify Privilege,” and “Set No Verify Privilege.” Each parent shell procedure consists of a single main procedure that calls many subordinate procedures. The three shell procedures identified in Table 8.4 each call multiple test procedures. Even though the shell procedures invoke largely the same test procedures (differing in only a single test procedure), they accomplish different tests and support different requirements or use case scenarios. This example shows how shell procedures can prove very effective in testing various requirements by simply adding or deleting a specific test procedure.

Each of the first three shell procedures logs into the application under test and opens a screen that contains information about a financial instrument called a security. The three shell procedures differ in that each tests different privileges that are required to verify a security (financial instrument). The first shell procedure tests system security using a sight verify function, where the privilege has been set up so that a user must verify the correctness of a security by visual examination. For example, the script conducts object property comparisons. The second shell script tests the key verify privilege, where a user (the script) has to reenter some values to verify the security. The third shell script tests the delivery of the security where no privilege has been established, which could apply when a particular user has special access authority. The detailed descriptions below outline the functions of the first three shell procedures depicted in Table 8.4.

Shell Procedure called Set Sight Verify Privilege, as created using
Rational's TestStudio.
********
Sub Main
    Dim Result As Integer

    ' Establish context: log in and open the security screen
    CallProcedure "LogIntoApp"
    CallProcedure "OpenSecScreen"

    ' Set the sight verify privilege, then create and sight-verify a security
    CallProcedure "SetSightVerifyPrivilege"
    CallProcedure "CreateNewSecurity"
    CallProcedure "SightVerifySecurity"

    ' Deliver the security and restore the original context
    CallProcedure "DeliverSecurity"
    CallProcedure "ReceiveAcknowledgment"
    CallProcedure "CloseScreen"
    CallProcedure "CloseApp"

End Sub
********

Shell Procedure called Set Key Verify Privilege, as created using
Rational's TestStudio.
********
Sub Main
    Dim Result As Integer

    CallProcedure "LogIntoApp"
    CallProcedure "OpenSecScreen"
    CallProcedure "SetKeyVerifyPrivilege"    ' Key verify instead of sight verify
    CallProcedure "CreateNewSecurity"
    CallProcedure "KeyVerifySecurity"
    CallProcedure "DeliverSecurity"
    CallProcedure "ReceiveAcknowledgment"
    CallProcedure "CloseScreen"
    CallProcedure "CloseApp"

End Sub
********

Shell Procedure called Set No Verify Privilege, as created using
Rational's TestStudio.
********
Sub Main
    Dim Result As Integer

    CallProcedure "LogIntoApp"
    CallProcedure "OpenSecScreen"
    CallProcedure "SetNoPrivilege"       ' No verification privilege required
    CallProcedure "CreateNewSecurity"
    CallProcedure "DeliverSecurity"      ' Delivered without a verify step
    CallProcedure "ReceiveAcknowledgment"
    CallProcedure "CloseScreen"
    CallProcedure "CloseApp"

End Sub
********

Although these shell procedures appear very similar, they accomplish different tests simply by adding or deleting one subordinate procedure. They also provide a good example of the reuse of test procedures. The purpose of each test procedure called by the three shell procedures is described below.

LogIntoApp logs the user into the application and verifies that the password is correct. Error checking is built in.

OpenSecScreen opens the security (financial instrument) screen, verifies the menu, and verifies that the correct screen is open and the security screen hasn’t changed from the previous build. Error checking is built in.

SetSightVerifyPrivilege sets the security (financial instrument) verify privilege to “sight verify,” meaning that sight verification is necessary before a security can be delivered to the Federal Reserve Bank. Error checking is built in.

SetKeyVerifyPrivilege sets the security (financial instrument) verify privilege to “key verify,” meaning that a specific data field must be verified by rekeying specific information before a security can be delivered to the Federal Reserve Bank. Error checking is built in.

SetNoPrivilege resets the security (financial instrument) verify privilege to none, meaning that no verification is necessary before a security can be delivered to the Federal Reserve Bank. Error checking is built in.

CreateNewSecurity creates a new security. Error checking is built in.

SightVerifySecurity uses the sight verify method by checking the object properties of the security to be delivered. If the object properties test passes, it goes on to the next procedure; if it fails, it sends the user an error message and the procedure ends. Other error checking is built in.

KeyVerifySecurity uses the “key verify” method by automatically rekeying the information and then checking the object properties of the security to be delivered. If the object properties test passes, it goes on to the next procedure; if it fails, it sends the user an error message and the procedure ends. Other error checking is built in.

DeliverSecurity delivers the security to the Federal Reserve Bank.

ReceiveAcknowledgment receives acknowledgment of receipt from the Federal Reserve Bank. Error checking is built in.

CloseScreen closes the security screen. Error checking is built in.

CloseApp closes the application. Error checking is built in.
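The composition pattern behind the shell procedures can be sketched as follows. This is an illustrative sketch in Python rather than a tool scripting language; the step bodies are stubs standing in for real tool-driven actions against the AUT.

```python
# Illustrative sketch: shell test procedures composed from a shared pool of
# subordinate procedures. Each stub records its name; a real script would
# drive the application under test through the test tool's API.

def log_into_app(log):        log.append("LogIntoApp")
def open_sec_screen(log):     log.append("OpenSecScreen")
def set_sight_verify(log):    log.append("SetSightVerifyPrivilege")
def create_new_security(log): log.append("CreateNewSecurity")
def sight_verify(log):        log.append("SightVerifySecurity")
def deliver_security(log):    log.append("DeliverSecurity")
def receive_ack(log):         log.append("ReceiveAcknowledgment")
def close_screen(log):        log.append("CloseScreen")
def close_app(log):           log.append("CloseApp")

def run_shell(steps):
    """Run a shell procedure: execute its subordinate procedures in order."""
    log = []
    for step in steps:
        step(log)   # each subordinate procedure has error checking built in
    return log

# A sight-verify shell differs from its sibling shells only in the
# verification step it includes.
SIGHT_VERIFY_SHELL = [log_into_app, open_sec_screen, set_sight_verify,
                      create_new_security, sight_verify, deliver_security,
                      receive_ack, close_screen, close_app]
```

Swapping one subordinate procedure (for example, replacing the sight-verify step with a key-verify step) yields a different shell while every other building block is reused unchanged.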

8.1.7 Calibration of the Test Tool

At this stage, the test team has established the required test environment and, during the test design effort, has defined test data requirements and specified the scripting language. Poised to begin test development, the test team first needs to calibrate the test tool.

In Chapter 4, the test team worked out tool-related third-party custom control (widget) issues. Now, when calibrating the automated test tool to match the environment, it will have to decide upon the tool playback speed, even though the playback speed can change from one test procedure to the next. Many tools allow for modification of the playback speed. This capability is important, because the test engineer might want to slow down the playback speed when playing back the script for an end user or to allow for synchronization.

The test team needs to make other decisions, such as what to do when an unexpected active window is observed, and then adjust the automated test tool accordingly. A common problem for unattended testing involves failures that have the domino effect. For example, following the failure of one test, an unexpected active window might appear displaying an error message and leaving the application in an unexpected state. Some tools, such as Test Studio, can handle these unexpected active windows by allowing the test engineer to set a parameter that instructs the test tool to automatically shut down unexpected windows during playback. As a result, the subsequent test scripts continue to execute even though an unexpected window has been encountered.

When the automated test tool does not have a provision for overcoming such failures, then the domino effect is often observed. When the product is left in an unexpected state, subsequent script logic cannot execute because the error message continues to be displayed. To run the test suite, the product must be reset and the test suite must be restarted following the test failure. Successive failures will require the tests to be restarted repeatedly. Therefore, it is beneficial if the chosen test tool provides for automatic handling of unexpected active windows and other unexpected application states.
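The difference between a tool that handles unexpected active windows and one that suffers the domino effect can be sketched as a simple suite runner. The names below are hypothetical; the sketch only shows the control flow, not any real tool API.

```python
# Illustrative sketch: a suite runner that dismisses an unexpected active
# window after a failure, so subsequent test scripts still execute instead
# of falling like dominoes.

class UnexpectedWindowError(Exception):
    """Raised when a script runs into an unexpected active window."""

def run_suite(scripts, dismiss_window):
    """Run (name, script) pairs; on an unexpected window, dismiss it,
    record the failure, and continue with the next script."""
    results = {}
    for name, script in scripts:
        try:
            script()
            results[name] = "pass"
        except UnexpectedWindowError:
            dismiss_window()          # shut down the unexpected window
            results[name] = "fail"    # the rest of the suite still runs
    return results
```

Without the `dismiss_window` step, the error message would remain on screen and every subsequent script would fail against the wrong application state.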

8.1.8 Compatibility Work-Around Solutions

As previously mentioned, as part of the effort to verify that an automated test tool is compatible with the AUT, the test team may wish to consider work-around solutions for any incompatibility problems that arise. When a previous version of the application (or a prototype) already exists, the test engineer needs to analyze it to determine which parts of the application should and can be supported by automated tests. He or she can install the test tool on the workstation where the application resides and conduct a preliminary compatibility analysis. A good understanding of the AUT is beneficial when contemplating a work-around solution to an incompatibility problem.

A particular test tool might work well under one operating system but behave differently on another. It is very important to test for compatibility between the automated tools and the third-party controls or widgets in use for the particular AUT. The automated test tool might not recognize some third-party objects, so the test engineer may need to develop extensive work-around solutions for these objects. Some of these fixes can take time to develop, so it is worthwhile to work on them concurrently with application development.

8.1.9 Manual Execution of Test Procedures

Another test development readiness activity is the manual execution of test procedures prior to automated test procedure development. The test engineer steps through the test procedures associated with a particular segment or functionality of the system once manually and then decides, based on the outcome, whether to automate them. This step verifies that all intended/designed functionality is present in the system or in the particular segment under test. When functionality is missing from the application, automating the test procedures is inefficient and unproductive. By executing the manual tests first, the test team can ensure that test engineers do not begin to automate a test procedure, only to find halfway through the effort that it cannot be automated.

The test engineer may be able to streamline the overall test effort by investing some time in this manual verification process. After executing all of the steps in the test procedure without any problems, he or she can start automating the test script for later reuse. Even when some steps in the test procedure do not pass, the test engineer can still automate the script by editing the recorded baseline to reflect the expected result. Some automated test tools, such as Rational’s TestStudio, allow for manipulation of the recorded baseline data. For example, if a test engineer records an edit box labeled Las Name (a typo that should read Last Name), the baseline data can be corrected to the expected result, Last Name. Afterward, when the script is played back against a new software build and performs successfully (passes), the test engineer knows that the fix has been properly implemented. If the script fails, the defect remains unresolved.
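The baseline-correction idea can be sketched as a simple comparison. The labels mirror the Las Name/Last Name example in the text; the comparison function itself is a hypothetical stand-in for the tool's verification step.

```python
# Illustrative sketch: verifying a fix by comparing captured output against a
# hand-corrected baseline. A step passes only when the captured property
# matches the (possibly edited) expected value.

def verify_against_baseline(captured, baseline):
    """Return "pass" when the captured property matches the baseline."""
    return "pass" if captured == baseline else "fail"

# The tool recorded the typo "Las Name"; the engineer corrects the baseline
# to the expected value before testing the next build.
EXPECTED_LABEL = "Last Name"
```

A failing comparison against the corrected baseline indicates the defect is still present; a passing one indicates the fix made it into the build.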

8.1.10 Test Procedure Inspections—Peer Reviews

When test procedures are being developed for the various phased test components, the test team may find it worthwhile to conduct peer review inspections of the test procedures being developed by the individual test engineers. Test procedure inspections are intended to detect defects, incorrect use of business rules, violations of development standards, and test coverage issues, as well as review test programming code and ensure that test procedure development is consistent with test design.

Test procedure inspections represent an excellent way to uncover discrepancies among the test procedures, test requirements, and system requirements. Functional requirements can serve as a resource for developing test requirements and subsequent test procedures. Because every test requirement will be tested, the peer review process can clear up any questions that arise during the development of test procedures. In this review, the test team examines design information, such as computer-aided software engineering (CASE) tool information (if a CASE tool is used), to clarify the use of system paths and business functions. Test procedure design walkthroughs are therefore a helpful technique for discovering problems with the test requirements, errors in the system requirements, or flawed test procedure design.

8.1.11 Test Procedure Configuration Management

During the development of test procedures, the test team needs to ensure that configuration control is performed for test design, test scripts, and test data, as well as for each individual test procedure. Automated test scripts need to be baselined using a configuration management (CM) tool. Groups of reusable test procedures and scripts are usually maintained in a catalog or library of test data comprising a testbed.

Testbeds have many uses—for example, during regression testing that seeks to verify the integrity of an application following software updates implemented to correct defects. It is especially important to baseline automated test procedures, so as to maintain a basket of reusable scripts that apply to one application version and can be implemented again in a subsequent version of the application. The reuse of the scripts helps to verify that the new software in the subsequent application release has not adversely affected software that should have remained unchanged from the previous release.

CM tools, such as Source Safe or CCC Harvest, can provide this function (for more CM tools, see Appendix B). Numerous CM tools are on the market, and one will undoubtedly be compatible with the specific automated test tool being used. In the case where a project does not have a budget to purchase a configuration management tool, the test team needs to make sure that backups are performed daily for test program files, test databases, and everything else that is part of the test repositories. In some cases, twice-daily backups may be warranted.
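For a project without a CM tool, the daily backup of the test repository can be sketched as a small script. The paths and the scheduling assumption (run daily, or twice daily, from a scheduler) are illustrative, not prescriptive.

```python
# Illustrative sketch: a timestamped full copy of the test repository
# (test program files, test databases, documents) for projects that lack a
# CM tool budget. Intended to be invoked daily or twice daily by a scheduler.
import shutil
import time
from pathlib import Path

def back_up_testbed(repo_dir, backup_root):
    """Copy the entire test repository into a timestamped folder under
    backup_root and return the path of the new backup folder."""
    stamp = time.strftime("%Y%m%d-%H%M%S")
    dest = Path(backup_root) / f"testbed-{stamp}"
    shutil.copytree(repo_dir, dest)   # full copy preserves the whole testbed
    return dest
```

Because each run writes a new timestamped folder, earlier backups are never overwritten, which gives the team a crude but workable version history.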

When the test team uses a requirement management tool, such as DOORS, to maintain test requirements and test procedures, baseline control of test procedures is provided automatically. A tool such as DOORS also keeps a history of each test procedure modification or any other modification. This history typically includes the name of the person making the change, the date of the change, the reason for the change, and a description of the change.

When the test team creates test procedures in a simple word-processing document, it needs to ensure that the documents are baselined. Scripts saved in a database can be destroyed if the database later becomes corrupt, which is not an uncommon occurrence. The database should therefore be backed up regularly.

A test procedure script that is not baselined can easily be lost when safeguards are not in place: an individual may simply modify the script or, even worse, accidentally overwrite or delete an existing script. Similarly, a script that is not baselined in a CM tool can be destroyed when a server or the network goes down in the middle of playback, leaving the script corrupted.

Another concern for the test team pertains to the maintenance of the testbed. After test procedures have been executed, the team should perform cleanup activities. Team members can use SQL scripts or play back automated scripts that automatically clean up whatever is necessary to restore the application to its original state.
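A cleanup step of this kind can be sketched with SQL. The table and column names below (`securities`, `created_by_run`) are hypothetical, chosen only to illustrate deleting the rows a test run created.

```python
# Illustrative sketch: post-execution cleanup that deletes the records a
# test run created, restoring the testbed to its original state.
# Schema names are assumptions for illustration.
import sqlite3

def clean_up_test_data(conn, run_id):
    """Delete records tagged with this test run's identifier and return how
    many rows remain for that run (should be zero)."""
    with conn:   # commits the delete on success
        conn.execute("DELETE FROM securities WHERE created_by_run = ?",
                     (run_id,))
    return conn.execute(
        "SELECT COUNT(*) FROM securities WHERE created_by_run = ?",
        (run_id,)).fetchone()[0]
```

Tagging each inserted row with a run identifier is what makes the cleanup safe: baseline data created outside the run is left untouched.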

Testbed management also includes management of changes to software code, control of the new-build process, test procedures, test documentation, project documentation, and all other test-related data and information. In most organizations, a separate CM group is responsible for testbed management and other CM tasks. This group typically uses a CM tool to support these efforts.

It is important that testbed management include configuration control. As multiple versions of the AUT exist, multiple versions of the test procedure library will exist as well. Testbed management should ensure that the correct test procedure library is used with the corresponding and current AUT version and baselined test environment.

CM tools also facilitate coordination between project team members with regard to changes. The test team needs to keep abreast of any changes that can affect the test environment integrity, such as changes to the test lab, testbed, and AUT. Specific changes that may alter the test environment include a change to a network connection, modification of disk space, or the implementation of a faster processor or other AUT upgrades.

Poor CM can cause many problems. For example, a script that previously incorporated programmed code may now incorporate only tool-generated code. A defect that had been fixed at a great expense may reappear or a fully tested program may no longer work. Good CM can help avoid these problems. If common test procedures are modified, for example, the modification should be communicated to all test engineers. Without test procedure management, all test engineers affected might not be notified of these changes.

8.2 Test Development Guidelines

Once the test team has performed test development setup, including the setup of the test development architecture, it has a clear picture of the test procedures that must be created for the various testing phases. The team next needs to identify the test procedure development guidelines that will apply to the project’s various test development activities. Figure 8.2 provides an overview of test development activities, including the input and output of each activity.

Figure 8.2. Test Development Activities Using a Capture/Playback Tool

image

The automated test developer needs to follow the development standards of the scripting language that applies to the particular tool in use. For example, the test tool Test Studio uses SQA Basic, a Visual Basic-like language. In this particular case, it is recommended that the test engineer follow a published Visual Basic development standard. In the case of the test tools offered by Mercury, which use a C-like language, the test engineer should follow one of the many published C development standards. This section addresses some of the important programming considerations in test procedure script development, including those related to the scripting language used in the respective testing tool.

For the entire team of test engineers to be able to simultaneously develop test procedures that are consistent, reusable, and maintainable, the test team should consolidate all development guidelines into a single document. Where an organization-level test development standard exists, the team can adopt or modify those guidelines as necessary to support a particular project. Once a test development standard has been adopted, all test team personnel should receive this document. A means of ensuring that the guidelines are actually followed, such as test procedure walkthroughs, also needs to be applied.

Test procedure development guidelines should be available to support the development of both manual and automated test procedures, as outlined in Table 8.5. Reusability is one of the most important test procedure creation factors. If a test procedure is not reusable, much of the test engineer’s effort is wasted: the test procedure has to be re-created, and frustration sets in. Additionally, the test procedure should be maintainable, simple, and robust.

Table 8.5. Test Development Guidelines

image

image

8.2.1 Design-to-Development Transition

Provided that design standards are in place, the test team needs to ensure that test development represents a natural transition from test design. It is important that tests be properly designed before test development begins. Test design, which is the baseline for test development, is discussed in detail in Chapter 7. The test design baseline consists of several elements, including the test program model, test architecture, and test procedure definition. Furthermore, this baseline consists of several test procedure matrices, such as mappings of test procedures to test requirements, the automated/manual test mapping, and the mapping of test procedures to test data. It should provide the test team with a clear picture of how test development needs to be structured and organized.

With the test design baseline in place, the test team needs to perform the setup activities. Once test development setup activities have concluded, the test team then defines a test development architecture, which itself defines the overall test development approach. Such a diagram graphically illustrates the major components of the test development and execution activity to be performed. The test procedure design and development are closely intertwined. For example, the test team needs to be aware of any dependencies between test procedures. It should identify any test procedures that must be developed before other test procedures as well as procedures that must be executed before others. The test procedure execution schedule discussed in this chapter further addresses these issues.

The test development architecture, which was depicted in Figure 8.1, should be the foundation from which the test development and execution schedules are developed. The test team needs to create test procedures according to a test development schedule that allocates personnel resources and specifies development due dates. In addition, it needs to monitor development progress and produce progress status reports. The test team should hold a meeting to discuss the specific way in which test design, development setup, and the test development architecture will be translated into test development action. The results of this meeting should be recorded within a test development guideline document and promulgated to all test personnel.

8.2.2 Reusable Test Procedures

Automated test procedure scripts can be extended through the use of custom code. For example, as noted in Section 8.2.2.5, a test script created using only the recording feature of an automated test tool is not very reusable. This method records every mouse click and every screen, so as soon as a button moves or an expected screen is missing, the test procedure script fails. By adding procedural control and conditional logic, test scripts can be extended for continued reusability.

The most important issue when developing or modifying a test procedure is reusability. The test team may find it beneficial to establish a separate test creation standard that addresses the development of reusable test procedures. Reusability of the test procedures is important in improving test team efficiency. For example, when the user interface for an application changes, the test team will not want to spend the time and energy updating each test script so that it accurately reflects and tests the application. Instead, the test team would be better off if it built in flexibility from the start.

To create a library of reusable functions, the best practice calls for the separation of functionality such as data read/write/validation, navigation, logic, and error checking. This section provides guidelines on several topics related to practices that help to build in test procedure reusability. These guidelines are based on the same principles that underlie good software development practices.

image

8.2.2.1 Data

One way to design reusable test procedures is to use a data-driven approach to test development, in which data values are read from and written to data files rather than being hard-coded within the test procedure script. In this case, the test engineer identifies the data elements and the data value or data variable names that can be used by each test procedure. This information enables the test engineer to assess the benefit of using a particular data file.

When recording a test procedure script using an automated test tool, the tool automatically embeds hard-coded values. The action of simply recording test procedures that rely on hard-coded values limits the reuse of the test procedure. For example, a date field recorded today as a constant will prevent the script from functioning properly tomorrow. Also, consider the need to add a series of account numbers into an application. If account numbers represent a key data element, the account numbers added each time need to be unique; otherwise, an error message may be displayed, such as Duplicate Account Number.

Generally, it is recommended that the test engineer not hard-code data values into a test procedure. To avoid execution failure, the test engineer needs to replace such hard-coded values with variables and, whenever possible and feasible, read data from a .csv or ASCII file, spreadsheet, or word-processing software.

As part of the test development effort, the test team identifies the data elements, data value names, or data variable names used by the various test procedures. The key pieces of data added to the test procedure script are the variable definitions and the definition of the data source (that is, the location of the data file).

Another reason for externalizing the input data is to facilitate maintenance of the data being entered and to make the scripts more efficient. Whenever the test procedure must be played back with a different set of data, the data file can simply be updated.

The following sample test script reads data from an input file.

Data Read from a File (using Rational Robot’s SQA Basic)

'  This is a sample script using SQA Basic.
'  It creates a data-driven test that reads a .csv file to obtain the
'  input data and the associated test procedure name(s).
'
'  Separate sample input file (input.txt):
'    "01/02/1998","VB16-01A"
'    "12/31/1998","VB16-01B"
'    "09/09/1999","VB16-01C"
'    "12/31/1999","VB16-01D"
'    "12/31/2000","VB16-01E"
'    ...
'    "10/10/2000","VB16-01Z"

Sample Code:

hInput = FreeFile
Open "input.txt" For Input Shared As #hInput
Do While Not Eof(hInput)
    Input #hInput, sDateIn, sTCName
    Window SetContext, "Name=fDateFuncTest", ""
    InputKeys sDateIn
    PushButton Click, "Name=cmdRun"
    Result = WindowTC (CompareProperties, "Name=fDateFuncTest", _
             "CaseID=" & sTCName & "")
Loop
Close #hInput

The pseudocode for this sample test script follows:

Open Input File
For Each Record in the Input File:
    Read Input Date, Test Procedure Name
    Enter Input Date into Date Field
    Click on Run Command
    Verify Results in the Date Field
Close Input File
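The same data-driven pattern can be sketched in Python. The `run_date_test` callable stands in for the tool calls in the SQA Basic sample (key in the date, click Run, compare properties) and is an assumption, not a real tool API.

```python
# Illustrative sketch of the data-driven pattern above: the test data lives
# in a CSV file, not in the script, so a new data set requires no script
# change.
import csv
import io

def run_data_driven(input_file, run_date_test):
    """For each (input date, test-case name) record, drive the AUT via
    run_date_test and collect results keyed by test-case name."""
    results = {}
    for date_in, tc_name in csv.reader(input_file):
        results[tc_name] = run_date_test(date_in)
    return results
```

Swapping in a different input file exercises the same script against a different data set, which is exactly the reuse the data-driven approach is meant to buy.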

8.2.2.2 Application Navigation

Test procedure development standards should address the ways that automated test procedures navigate application screens. To achieve higher reusability of the test scripts, a test engineer needs to select the navigation method that is least susceptible to changes in the AUT. Navigation functionality should reside in a separate module in the test library. The standards should specify whether test procedure development using a test tool recorder moves through the application by use of tabs, keyboard accelerator keys (hotkeys), or mouse clicks, capturing the x, y coordinates or the object names assigned to a particular object. For example, within environments that do not use object names (such as C, Visual C, and Uniface), the “caption,” or window title, is used to identify a window.

All of these choices may potentially be affected by design changes. For example, if test procedure scripts use the tab method to navigate between objects, the resulting scripts are tab-order-dependent. When the tab order changes, all scripts must be corrected, which can pose a monumental challenge. If accelerator keys are employed as a navigation method and new accelerator keys are later assigned, the recording will again become useless. If the mouse is used, a field could change position and the mouse click might occur on a nonexistent field; once again, the script would need to be re-created. The best way to navigate through an application is to identify each field by the object name assigned by the particular development tool (for example, Visual Basic, PowerBuilder, or Centura). A field can then be moved to any position on the screen without affecting the script. A script recorded in this way is less susceptible to code changes and therefore more reusable. Whenever possible, the test procedure needs to retrieve a window’s object name, which is most likely to stay static, instead of identifying the window by its caption.
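The object-name approach can be sketched as a small lookup layer. The window model here (a list of control dictionaries) is an assumption for illustration; real tools expose object names through their own APIs.

```python
# Illustrative sketch: resolving a field by its object name rather than by
# tab order or x, y coordinates, so that moving the field on screen does
# not break the script.

def find_by_name(window, name):
    """Return the control whose object name matches, regardless of where
    the control sits on the screen."""
    for control in window:
        if control["name"] == name:
            return control
    raise LookupError(f"no control named {name!r}")
```

A coordinate-based click would break the moment the field moved; the name-based lookup keeps working after a screen redesign as long as the object name is stable.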

8.2.2.3 Bitmap Image Recording

Test development guidelines should address the application of the bitmap image recording method for developing reusable test procedures. Guidelines should stress that the use of test tools to perform bitmap image recording does not represent efficient test automation and that the use of this method of test procedure recording should be minimized.

Most automated GUI testing tools allow the test engineer to record bitmap images, also known as screen shot recordings. This method of developing a test procedure may prove valuable when trying to determine whether one or more pixels in the application’s screen display have changed. It allows a test procedure to compare a screen or an edit box against a baseline recording of that screen or edit box; the comparison is performed by matching pixel information at the recorded x, y coordinates. The resulting test is overly sensitive to any change, including changes that are appropriate. Test procedure maintenance using this method can prove too burdensome, given the significant amount of effort required.

Another drawback to the use of bitmap image recording pertains to the fact that when test procedures are played back on a system that uses a different video resolution or different color setting for screen display, the procedures will fail. For example, test procedures recorded using a system with a screen display of 1024 × 768 cannot be applied to a screen display with a resolution of 640 × 480.

Yet another concern with bitmap image recording is that test procedures that capture screen images consume a large amount of storage space. At a 1024 × 768 screen resolution with 24 bits per pixel (16 million colors), or 3 bytes per pixel, a single snapshot requires 1024 × 768 × 3 = 2,359,296 bytes, or roughly 2.3 megabytes.
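The snapshot-size arithmetic above reduces to a one-line helper: width times height times bytes per pixel gives the uncompressed size of one capture.

```python
# Uncompressed storage for a single bitmap screen capture, in bytes:
# width x height x (bits per pixel / 8).

def snapshot_bytes(width, height, bits_per_pixel):
    """Size of one uncompressed screen capture, in bytes."""
    return width * height * (bits_per_pixel // 8)
```

Multiplying by the number of captured screens in a test suite quickly shows why heavy use of bitmap recording strains storage.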

8.2.2.4 Automation Wildcards

Some automated testing tools permit the use of “wildcards.” Standards need to be developed for the use of wildcards within test program code. Wildcards, such as the use of the asterisk “*”, enable a line in the test program to seek a match of a certain value or to identify when conditions meet certain criteria.

When using an automated test tool to record test procedures, a wildcard can be employed to look for a caption of a window so as to identify the window. For example, when using the Test Studio test tool, the Window Set Context command identifies the window in which subsequent actions should occur. Within environments that do not use object names (such as C, Visual C, and Uniface), the “caption,” or window title, is used to identify the window.

The test engineer can edit the test procedure script, remove the caption, and then insert standard wildcard expressions so that when the test script runs, it will place focus on a window; that is, a specific window is selected, even when the window title changes. This consideration is especially important when window titles change and the application’s window titles contain the current date or a customer number. In some testing tools, such as SQA Suite, the caption terminator technique can allow the test procedure to use a single character to designate the end of a window caption string. Note that the caption itself must be enclosed in braces ({ }), as reflected in the following code:

Test Studio Example

Window SetContext, "Caption={Customer No: 145}"
' can be replaced with either of the following:
Window SetContext, "Caption={Customer No: *}"
Window SetContext, "Caption={Customer No: 1}"
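The wildcard matching itself can be sketched in Python, in the spirit of the Test Studio example above, using `fnmatch` for the `*` pattern.

```python
# Illustrative sketch: wildcard matching of window captions, so a script can
# focus a window whose title embeds a changing value such as a customer
# number or the current date.
from fnmatch import fnmatch

def caption_matches(caption, pattern):
    """True when the live window caption fits the wildcard pattern."""
    return fnmatch(caption, pattern)
```

With the pattern "Customer No: *", the script places focus on the customer window no matter which customer number the title happens to contain.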

8.2.2.5 Capture/Playback

Reusable test development guidelines should address the use of capture/playback methods of test procedure recording. Guidelines should stress that the out-of-the-box use of GUI test tools does not provide the most efficient test automation and that the use of the capture/playback method of test procedure recording should be minimized. As explained in Appendix B, in capture/playback an automated test tool records the keystrokes of user interactions that execute the functionality of an application. These keystrokes are recorded as part of test procedure development and then played back as part of test execution. Test procedures created using this simplified method have significant limitations and drawbacks.

A major drawback of the capture/playback method for test procedure development is that values are hard-coded within the underlying script language code that the test tool automatically produces during recording. For example, input values, window coordinates, window captions, and other values are all fixed within the code generated during recording. These fixed values represent a potential problem during test execution when any number of them has changed within the product being tested. Because of the changed values in the AUT, the test procedure will fail during script playback. Because a particular window may appear in a number of test procedures, a single change to a single window can ripple through a significant number of test procedures and render them unusable. Another example of a fixed value captured within a test procedure is the current date stamp recorded during test development. When the test engineer attempts to exercise the test procedure the following day or any day thereafter, it will fail because the hard-coded date value no longer matches the current date.

Clearly, test procedures created using the capture/playback method are not reusable and, as a result, are not maintainable. Basic scripts are useful in a few situations, but most often test engineers developing test procedures under this method will need to re-record the test procedures many times during the test execution process to accommodate any changes in the AUT. The potential advantages of using a test tool are negated by this need to continually recreate the test procedures. The use of this test procedure development method will result in a high level of frustration among test personnel, and the automated tool in use is likely to be shelved in favor of manual test development methods.

Instead of simply using the capture/playback capability of an automated testing tool, the test team should take advantage of the scripting language by modifying the code automatically generated by the capture/playback tool to make the resulting test scripts reusable, maintainable, and robust.
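One concrete modification of recorded code is to replace values frozen at record time with values computed at playback time. The sketch below uses the date-stamp example from the text; the date format is an assumption (it matches the input-file sample earlier in this chapter).

```python
# Illustrative sketch: a recorded script freezes the record-time date, so it
# fails the next day. Computing the date at playback time keeps the script
# valid on any day.
from datetime import date

def current_date_input(today=None):
    """Return the date string to key into the AUT, computed at playback
    time instead of being hard-coded at record time."""
    d = today or date.today()
    return d.strftime("%m/%d/%Y")
```

The same substitution applies to window captions, account numbers, and any other value that the recorder would otherwise embed as a constant.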

8.2.3 Maintainable Test Procedures

In addition to creating reusable test procedures, it is important that the test team follow guidelines that support the creation of maintainable test procedures. Developing test procedures with the standards described in this section can result in more easily maintained test procedures.

By carefully designing and developing modular procedures, the team can simplify the maintenance of the test scripts. The time spent actually building and maintaining tests can be minimized by designing carefully and by understanding the interdependence of the test procedures.

image

8.2.3.1 Cosmetic Standards

The test team should establish and adopt standards that promote the development of test program code that is easy to read, understand, and maintain. These standards stipulate the cosmetic appearance of test program code. Cosmetic coding standards would specify how to represent if/then/else statements and case statements within code, for example. They would also stipulate how to align the indentation of the first and last statements of each individual routine (loop) within the code. Ideally, the alignment for the first and last statements should be the same.

Cosmetic coding standards might articulate rules specifying that a line of program code not exceed the width of the screen, so that the individual does not have to scroll back and forth (or up and down) to follow the code logic. Rules for the use of continuation characters, such as “_” in Visual Basic, should be promulgated. These characters allow one long statement to be broken across several lines, thereby minimizing the length of each line of code and making test procedure scripts more readable.

When an automated test tool records test procedures in object mode, the resulting code is easy to read. The object mode makes scripts readable by clarifying the context of each command or action; that is, it recognizes context such as field controls and labels. If the test tool records in analog mode, the resulting test procedure script is nearly incomprehensible, because only row-column and pixel coordinates are recorded. The first sample program code in this section illustrates code that is difficult to read, while the second example shows code that follows cosmetic coding standards.

Sample Test Script Using No Layout Standard

Sub Main
Dim sInput As String
Dim word, numeric
Dim Counter As Integer
'Initially Recorded: 12/01/97  23:52:43
Open "C:\temp\wordflat.txt" For Input As #1
Counter = 1
Do While Counter < 5000
Input #1, sInput
word = Trim(Left$(sInput,Instr(sInput," ")))
numeric = Trim(Mid$(sInput,Instr(sInput," ")))
Window SetContext, "Caption=Microsoft Word - Document1", ""
MenuSelect "Tools->AutoCorrect..."
Window SetContext, "Caption=AutoCorrect", ""
inputKeys word &"{TAB}" &numeric
Window Click, "", "Coords=358,204"
Counter = Counter +1
Loop
Close #1

End Sub

Sample Test Script Using Layout Standard

Sub Main

    Dim sInput As String
    Dim word, numeric

    Dim Counter As Integer

    'Initially Recorded: 12/01/97  23:52:43
    Open "C:\temp\wordflat.txt" For Input As #1

    Counter = 1
    Do While Counter < 5000

        Input #1, sInput
        word = Trim(Left$(sInput,Instr(sInput," ")))
        numeric = Trim(Mid$(sInput,Instr(sInput," ")))

        Window SetContext, "Caption=Microsoft Word - Document1", ""
        MenuSelect "Tools->AutoCorrect..."

        Window SetContext, "Caption=AutoCorrect", ""
        InputKeys word &"{TAB}" &numeric
        Window Click, "", "Coords=358,204"
        Counter = Counter +1

    Loop
    Close #1

End Sub

The structure of the script layout does not enhance performance, but it does facilitate debugging and the review of the test scripts. A tremendous payback is realized when trying to make sense of the different program constructs, such as loops.

8.2.3.2 Test Script Comments

Another guideline that helps to create maintainable test scripts relates to the use of comments within a test procedure. Such comments are meant to clarify the scope of the test. They represent test engineers’ notes and logical thoughts that enable other individuals to more easily understand the purpose and structure of the test scripts. This embedded documentation has no effect on script performance, but it offers a tremendous return on investment with regard to the maintenance of the scripts in the long term.

Comments should be used liberally in both manual and automated test procedures. As a standard practice, each test procedure should be prefaced with comments outlining the nature of the test. Additionally, the steps making up the procedure should be clearly explained in complete sentences.

Guidelines should specify how comments are to be incorporated when using an automated test tool to develop test procedures. Most automated test tools allow the test engineer to add comments to the test procedure while recording the test. In instances where the automated test tool does not permit the addition of comments during recording, guidelines should specify that the test engineer add comments after test procedure recording ends.

The practice of outlining comments within automated test scripts helps to avoid confusion when the test team develops test automation scripts or attempts to reuse scripts from an existing automation infrastructure (reuse library). Detailed comments within the test procedure make it easier for test engineers to complete accompanying test scripts, revise existing test scripts, and allow archived test scripts to be reused on other projects. Such comments also facilitate the conduct of test procedure peer reviews and the audit of test scripts by outside teams, such as the quality assurance group or an independent verification and validation team.

Another potential benefit of documenting comments within each test procedure arises from the automatic extraction of comments from within test scripts so as to create a special report outlining the scope of the entire suite of test scripts. For example, a test engineer could create a simple program that parses all comments from within the test scripts to create an output that details the scope and purpose of the various tests being performed [3]. Functional analysts, application end users, and other project personnel can later review this report to obtain a more complete understanding of the test program’s scope.
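As a rough illustration of such a parsing program, the sketch below (written in Python for brevity; the chapter's tool scripts use SQA Basic, and the script names and contents here are hypothetical) extracts every comment line from a set of test scripts and assembles them into a simple scope report:

```python
def extract_comments(scripts):
    """Assemble a scope report from the comment lines (prefixed with ')
    found in each test script.

    scripts: dict mapping script name -> script text.
    """
    lines = []
    for name in sorted(scripts):
        lines.append(f"== {name} ==")
        for raw in scripts[name].splitlines():
            stripped = raw.strip()
            if stripped.startswith("'"):       # SQA Basic comment marker
                lines.append("  " + stripped.lstrip("'").strip())
    return "\n".join(lines)

# Hypothetical test scripts, commented per the guidelines above
scripts = {
    "login.rec":  "'Verify login with valid ID\nInputKeys sUserId\n"
                  "'Check the error message",
    "orders.rec": "'Add five account numbers\nFOR nLoopCounter = 1 TO 5",
}
report = extract_comments(scripts)
```

A report of this kind could then be circulated to functional analysts and end users for review.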

8.2.3.3 Test Script Documentation

Along with commenting scripts to enhance their maintainability, it is very important that the test team document the test scripts. Such documentation is beneficial for any individual reviewing the test script who must understand the test script, yet doesn’t have the programming background to be able to read the scripting language. Additionally, this documentation supports the greater use of the script within a reuse library or automation infrastructure, as described in Section 8.3.

8.2.3.4 Test Procedure Header

Another guideline that needs to be in place pertains to the use of the test procedure introduction. A test procedure header is used as an introduction that defines the purpose of the test procedure. Guidelines need to specify the contents of this header. Information to be documented may include the test procedure ID, test procedure name, preconditions, data criteria, input arguments, test conditions, expected results, status, and related requirements (system, software, test) that are being verified by the test procedure.

At a minimum, each header should contain the test procedure name, the test procedure ID, the author of the test procedure, and the date the test procedure was created. It should also define the prerequisites for running the test procedure, the functionality exercised by the test procedure, the window or screen in which the test procedure must start, and the window or screen where it concludes. The range of information to be documented can be ascertained from the test procedure design matrix depicted in Chapter 7.

Automated test management tools, such as TeamTest, provide a built-in test procedure template that can be modified to accommodate the particular needs of the test team. Once modified, the new template serves as the standard for test procedure development. During development, each test engineer would operate with the same test procedure format and the required header information would be predefined. As a result, all test scripts will have the same look and feel. Figure 8.3 provides an example of a test procedure developed using a template managed by an automated test tool. When the automated test tool does not come with a built-in template, each test engineer should be instructed to use the same test procedure format, including the same style of test procedure introduction.

Figure 8.3. Sample Test Procedure Introduction

image

The use of a standard test procedure template together with test procedure introduction guidance allows test engineers and project personnel to share the same interpretation of test documentation. Such standardized information helps to create a common look and feel for each test procedure that makes all such procedures easy to read and understand.

8.2.3.5 Synchronization

Guidelines should be documented that address the synchronization between test procedure execution and application execution. The response time to an information request, which includes both the server response and the network response, can vary from run to run, so the application may fall out of step with the timing captured in recorded test procedures.

A major factor in synchronization is the communications protocol employed. Such protocols provide the means for communication between the workstation computer and the server or host computer. This communication may consist of dial-up, network, or Internet access. Programs interact when one program requests something from another program, sends a message, or provides data to another program.

In test procedure script execution, wait times need to be incorporated into the script that allow for synchronization (that is, the script should wait until a specific response occurs). For example, the test engineer might add a wait state to pause the program while an hourglass changes back to its normal state. Note that this technique may not always work, because the hourglass might flicker. Alternatively, test engineers could record a message that appears at the bottom of a window and add a wait state until the message changes. Also, a wait state could last until a specific window appears—for example, a window that displays a message indicating that some task is complete. Similarly, test personnel can add a wait state to wait for the focus to change.

When designing test procedure scripts, the test team should ensure that the test procedure scripts remain synchronized with the AUT. For example, when a test procedure script performs a complex query to a database, it might take several extra milliseconds for the query to produce results. To account for this wait time, the test engineer will need to synchronize the test procedure script with the application response; otherwise, the script will fail.

A standard guideline needs to be in place that promulgates synchronization methods so that test procedure scripts can be reused. A particular synchronization method might direct test engineers on how to implement synchronization. For example, the guideline could identify a preference for a WaitState approach instead of a DelayFor approach when using the test tool TeamTest. The WaitState method (wait for a window to pop up) is generally preferable, because the DelayFor delay is hard-coded within the test script and may add unnecessary overhead to test procedure execution. For example, a call such as DelayFor 6000 will make the script wait 6 seconds, even if the desired state is reached within 4 seconds.
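The difference between the two approaches can be sketched in a few lines. The following Python fragment is an illustrative sketch, not TeamTest code; the function names and the simulated query are invented. It contrasts a fixed DelayFor-style pause with a WaitState-style poll that returns as soon as the expected condition holds:

```python
import time

def delay_for(milliseconds):
    """DelayFor-style wait: always burns the full, hard-coded interval."""
    time.sleep(milliseconds / 1000.0)

def wait_state(condition, timeout_ms, poll_ms=10):
    """WaitState-style wait: return as soon as condition() holds, or
    raise if the timeout expires (so a hung AUT cannot stall the suite)."""
    deadline = time.monotonic() + timeout_ms / 1000.0
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(poll_ms / 1000.0)
    raise TimeoutError("application never reached the expected state")

# Simulated AUT: a 'query' that completes 50 ms after it starts
start = time.monotonic()
def query_done():
    return (time.monotonic() - start) >= 0.05

t0 = time.monotonic()
wait_state(query_done, timeout_ms=2000)   # returns shortly after the query finishes
waited = time.monotonic() - t0
```

The polling version costs only as much time as the application actually needs, while the fixed delay always consumes the full interval.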

8.2.3.6 Test Procedure Index

With test programs that involve hundreds of test procedures, it quickly becomes difficult for the test team to readily ascertain the purpose and scope of every test procedure, and maintainability therefore becomes a problem. Sometimes the test team may need to pinpoint tests for certain application functionality or for a certain type of testing, as in the reuse analysis described in Section 8.1.3, Automation Reuse Analysis. Therefore, a guideline should document how the test team can quickly locate test procedures supporting a particular kind of test or a particular functionality. Essentially, the test team needs to maintain some type of index to find applicable test procedures.

When the test team has followed development guidelines that specify the inclusion of comments in the test procedures, it can use the parsing program discussed in Section 8.2.3.2 to generate a special report outlining the scope and purpose of the various tests performed. One way to find relevant test procedures is then to carry out an electronic search within this report file.

The test team could also develop and maintain a test dictionary. The test dictionary contains information associated with test procedures, test scripts, and pass/fail results. Additionally, it can include information such as document names, windows/screen names, fields/objects, vocabulary for tests, and cross references.
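A minimal version of such a test dictionary can be built mechanically from the header information described in Section 8.2.3.4. The Python sketch below (the field names and procedure IDs are hypothetical) indexes procedures by screen name and test type so that applicable tests can be looked up directly:

```python
def build_test_dictionary(procedures):
    """Index test procedures by the screen and test type named in their
    headers, so applicable procedures can be located directly."""
    index = {}
    for procedure in procedures:
        for key in ("screen", "test_type"):
            index.setdefault(procedure[key], []).append(procedure["id"])
    for ids in index.values():
        ids.sort()
    return index

# Hypothetical header information drawn from three test procedures
procedures = [
    {"id": "TP-001", "screen": "Login",  "test_type": "functional"},
    {"id": "TP-002", "screen": "Orders", "test_type": "functional"},
    {"id": "TP-003", "screen": "Login",  "test_type": "regression"},
]
dictionary = build_test_dictionary(procedures)
```

A lookup such as `dictionary["Login"]` then lists every procedure touching the login screen, which is exactly the question reuse analysis needs answered.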

8.2.3.7 Error Handling

Test procedure development standards should account for the fact that error checking needs to be coded into the script at the points where errors are most likely to occur. Errors should be reported by the test procedure that detects the error and therefore knows what the error is. Incorporating an error-handling capability to take care of the most likely errors will increase the maintainability and stability of the test procedures.

Many test procedures are designed without considering the possible errors that a test might encounter. This omission can cause problems, for example, when the tests are played back unattended and one test fails; a subsequent test then cannot execute because error recovery is not built in. The entire test suite will therefore fail. To make a suite of tests truly automated, error recovery must be built in and must take into consideration the various types of errors that the test script could encounter.
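The following Python sketch (hypothetical test and cleanup routines; the chapter's own examples use SQA Basic) shows the essence of such error recovery: a suite runner that records a failure, invokes a cleanup routine to restore a known state, and continues with the remaining tests rather than letting the whole suite fail:

```python
def run_suite(tests, cleanup):
    """Run each test in order; on failure, log the error, call the cleanup
    routine to restore a known state, and continue with the next test."""
    results = {}
    for name, test in tests:
        try:
            test()
            results[name] = "pass"
        except Exception as exc:
            results[name] = f"fail: {exc}"
            cleanup()  # recover so subsequent tests can still execute
    return results

# Hypothetical tests: the second one fails mid-suite
def t1(): pass
def t2(): raise RuntimeError("window not found")
def t3(): pass

log = []
results = run_suite([("t1", t1), ("t2", t2), ("t3", t3)],
                    cleanup=lambda: log.append("cleaned"))
```

Without the `except` branch, the failure of `t2` would abort unattended playback; with it, `t3` still runs and the error is reported with a self-explanatory message.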

Logic within a test script can be implemented to allow for branching to a different script or calling a script that cleans up the error condition. The error message generated should be self-explanatory to the test engineer. This approach facilitates debugging of a test procedure, because the error message should pinpoint the problem immediately. Guidelines need to be documented to specify how test procedures should handle these errors and to provide examples on how a script should handle errors. The test code given here, which was developed using SQA Basic, checks for a Test Station Name; when it doesn’t find this name, it exits the procedure. The test engineer can then code the script to allow it to start the next procedure.

Test Station Check

DataSource = SQAGetTestStationName()

    'Check for valid test station name
    If DataSource = "" Then
        MsgBox "No Test Station Name Found"
        Exit Sub
    End If

The following test script, which was also developed using SQA Basic, checks for the existence of a file. When the test script does not find the file, the test script returns an error message stating that the file could not be found and then exits the procedure. When the test script does find the file, a message box pops up on the screen to inform the user (that is, to write a message to the log) that the file was found and the script continues to execute.

File Check

'Make sure the file exists
DataSource = SQAGetDir(SQA_DIR_PROCEDURES) & "CUSTPOOL.CSV"
If Dir(DataSource) = "" Then
    MsgBox "Cannot run this test.  File not found: " & DataSource
    Result = 0
    Exit Sub
Else
    WriteLogMessage "Found " & DataSource
End If

When designing modules, the test engineer should pay special attention to pre- and post-execution condition requirements. Each module should provide for this type of error checking to verify that pre-conditions are met. For example, the post-conditions for one test procedure may invoke the steps to clean up and set up the environment necessary for another test procedure to execute, thus fulfilling the preconditions for that test procedure.

8.2.3.8 Naming Standards

Naming standards improve script maintainability. The benefits of using naming standards include those listed here:

• They help test engineers standardize and decode the structure and logic of scripts.

• Variables are self-documenting as to the data they represent.

• Variables are consistent within and across applications.

• The resulting script is precise, complete, readable, memorable, and unambiguous.

• The standards ensure that scripts are consistent with programming language conventions.

• They promote efficiency from a string-size and labor standpoint, allowing a greater opportunity for longer, fuller variable names, procedure names, function names, and so on.

The test team should follow the Hungarian Notation or another naming standard that is agreeable to the team. Briefly, the Hungarian Notation is

a naming convention that (in theory) allows the programmer to determine the type and use of an identifier (variable, function, constant). It was originally developed by Charles Simonyi of Microsoft and has become an industry standard in programming. [5]

Variable names need to be standardized with automated test scripts. In Hungarian Notation, a variable name consists of three parts: the prefix (or constructor), base type (or tag), and qualifier. The qualifier is the part that gives the most meaning to a variable name or a function name. Ideally, the name should describe the function being performed. It is important to be aware of the limitations of variable names when creating such a standard.

• Examples of bad variable naming conventions include Dim Var1, Counter1, and Test1 As Integer.

• Examples of good variable naming conventions include Dim nCustomerCounter As Integer, Dim sCustomerName As String, and Dim nMnuCounter as Integer.

• Some prefixes for commonly used variable types are listed here.

image

Some examples of tags are listed below.

image

To make test scripts efficient, it is very important to understand the scope of the variables. The larger the scope of a variable, the more the variable tends to use the valuable memory that could be available to the AUT. A good scripting practice is to limit the number of variables that have global scope. Of course, some variables must have global scope to maintain continuity in running test scripts.

8.2.3.9 Modularity

A modular script increases maintainability. A smaller test script is easier to understand and debug, so dividing a script into logical modules is one way to handle complex scripting tasks. If the test team maps out the logical flow correctly in the planning stage, each box in the resulting flow diagram becomes a module. Additionally, through the use of modularity, the scripting tasks can be divided among the test engineers in the group. If a test procedure is designed and developed in a modular fashion and part of the AUT changes, the test team needs to alter only the affected modular test script components. As a result, changes usually must be made in only a single place.

Each module can consist of several small functions. A function comprises several lines of script that perform a task. For example, the Login( ) function will perform the following actions:

  1. Start the AUT.
  2. Input the login user ID.
  3. Verify the login user ID (error checking).
  4. Input the login password.
  5. Verify the login password (error checking).
  6. Hit OK.
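A sketch of such a Login( ) module, written in Python against an invented application driver purely for illustration, might look like this; the error checking of steps 3 and 5 appears as explicit verification after each input:

```python
def login(app, user_id, password):
    """Modular Login() routine following steps 1-6 above; `app` is a
    hypothetical driver for the AUT."""
    app.start()                                       # 1. start the AUT
    app.input_field("user_id", user_id)               # 2. input user ID
    if app.read_field("user_id") != user_id:          # 3. verify (error check)
        raise RuntimeError("user ID was not entered correctly")
    app.input_field("password", password)             # 4. input password
    if app.read_field("password") != password:        # 5. verify (error check)
        raise RuntimeError("password was not entered correctly")
    app.click("OK")                                   # 6. press OK

class FakeApp:
    """Minimal stand-in for an AUT driver, for illustration only."""
    def __init__(self):
        self.fields, self.actions = {}, []
    def start(self):                    self.actions.append("start")
    def input_field(self, name, value): self.fields[name] = value
    def read_field(self, name):         return self.fields.get(name)
    def click(self, button):            self.actions.append("click " + button)

app = FakeApp()
login(app, "tester1", "secret")
```

Because the routine is self-contained, any test procedure that needs an authenticated session can call it instead of repeating the six steps.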

Rather than including a long sequence of actions in the same test procedure, test procedures should be short and modular. They should focus on a specific area of testing, such as a single dialog box, or on a related set of recurring actions, such as navigation and error checking. For more comprehensive testing, modular test procedures can easily be called from or copied into other test procedures. In addition, they can be grouped into a combination of procedures (wrappers or shell procedures) that represent top-level, ordered groups of test procedures. Modular test procedures offer several benefits:

• Modular test procedures can be called, copied, or combined into umbrella-level shell procedures.

• They can be easily modified if the developers make intentional changes to the AUT.

• Modular test procedures are easier to debug.

• Changes have to be made only in one place, thereby avoiding the cascading effect.

• Maintenance of modular test procedures becomes easier.

While recording a test procedure, most automated test tools allow the test engineer to call previously recorded test procedures. By reusing existing test procedure functionality, the test engineer can avoid having to create repetitive actions within a test procedure.

Automated tests should be designed to operate as stand-alone tests, meaning that the output for one test does not serve as the input to the next test. Under such a strategy, common scripts can be used in any test procedure in any order. A database reset script is one such common script that is designed to restore the database to a known baseline. This script can be used to “reset” the database after a test has altered its contents, allowing the next test to start with the contents of the database being a known entity. The reset script would be called only by those tests that alter the database’s contents. Structuring tests in this fashion allows tests to be randomly executed, which more closely simulates real-life use of the system.

Random sequencing of all tests may not be possible in cases where the results of some tests affect other tests. These instances represent an exception to the overall design strategy. Such test scripts could be annotated with a unique descriptive extension to ensure that they execute properly.

8.2.3.10 Looping Constructs

Test engineers need to use looping constructs just as application developers do. Looping constructs support modularity and thus maintainability. Almost all test tool scripting languages support two kinds of loops: the counted loop, which acts as a FOR loop, and the continuous loop, which performs as a WHILE loop. When application requirements specify, or the test engineer can determine, the number of times a task needs to be performed, it is best to use a FOR loop.

FOR Loop Examples

Suppose that the requirement specifies that the application must be able to add five account numbers to a database:

FOR nLoopCounter=1 to 5
    Add account_number
END FOR LOOP

Giving the test script some more thought, the test engineer may consider enhancing the script further by using a variable for the endpoint of the loop, so that the loop can be further reused by the passing of a variable. Passing in a variable for the FOR loop in this manner gives the following script:

FOR nLoopCounter = 1 to nEndLoop
    Add account_number
END FOR LOOP

Looping constructs are an essential part of any program code. When a test procedure must read data from a file, the looping construct might contain a statement such as "read the data until the end of file is reached." In such a case, all records would be located in a file, and the test procedure script would read them until it reached the end of the file.
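A read-until-end-of-file loop of this kind looks much the same in any language. The Python sketch below (the file contents are hypothetical, echoing the AutoCorrect-style pairs in the layout examples) reads "word replacement" records until the end of the input is reached:

```python
import io

def load_records(handle):
    """Read 'word replacement' records until the end of file is reached,
    like the wordflat.txt loop in the layout examples above."""
    records = []
    for line in handle:            # the loop ends when EOF is reached
        line = line.strip()
        if not line:
            continue               # skip blank lines
        word, replacement = line.split(None, 1)
        records.append((word, replacement))
    return records

# Hypothetical flat-file contents (AutoCorrect-style pairs)
flat_file = io.StringIO("teh the\nrecieve receive\n")
records = load_records(flat_file)
```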

Looping constructs are useful in other situations. Consider a test procedure that records the contents of a third-party data grid after data are added. The approach taken to develop the test procedure would require repeated recording of the state of the data grid so as to capture changes to its content. This approach is cumbersome, and the resulting test script would not be reusable should the data grid contents change in any unexpected way. A better approach involves the modification of the test procedure script to programmatically parse the data grid information and compare it with a baseline data file. The resulting test script can be repeated much more easily.

In addition, looping constructs could be applied to a test procedure that must handle expected changes to the state of a record. For example, the status of a record might be expected to change from edit pending to edit complete. To capture the time required for a record to make this change, the specific status could be included in a loop and timed until the status of the record changed from pending to complete. Such a change to the test procedure allows for unattended playback of the script, timing of the functionality, and repeatability of the script.

8.2.3.11 Branching Constructs

Another good test procedure coding practice involves the use of branching constructs. As with looping constructs, branching constructs support modularity and thus maintainability of a test procedure. Branching involves the exercise of an application by taking a different path through an application, based upon the value of a variable. A parameter used within the test script can have a value of either true or false. Depending on the value, the test script may exercise a different path through the application.

Many reasons exist for using branching constructs. Test engineers can use branching constructs such as If..then..else, case, or GOTO statements in order to make automated test procedures reusable. If..then..else statements are most often used for error checking and other conditional checking. For example, they can be incorporated in a test procedure that checks whether a window exists.

As already described in Section 8.2.3.7, branching constructs can be used to branch from one test script to another if the test encounters a specific error. When an application could possibly be left in several different states, the test procedure should be able to gracefully exit the application. Instead of hard-coding steps to exit the application within the test procedure, the test procedure can call a cleanup script that uses conditional steps to exit the AUT, given the state of the application at the time. Consider the following cleanup script sample:

if windowA is active then
    do this
    and this
else if windowB is active then
    do this
else
    exception error
endif

The use of GOTO routines often inspires debate within programming circles. It can create spaghetti code, meaning unstructured code that is hard to follow, and only in very rare circumstances is its use warranted. For scripting purposes, however, the occasional GOTO can prove advantageous to the test engineer, especially when testing complex applications. To save time, the test engineer can use GOTO statements to skip already-working sections of a script and jump directly to the point where debugging needs to begin. In the case of SQA Basic scripting, for example, a GOTO statement lets the script go directly to the part that fails, allowing the test engineer to debug that section of the code rather than stepping through all input fields.

Another reason for using branching constructs is to verify whether a particular file exists. Before a test procedure opens a file to retrieve data, for example, it needs to verify the existence of the file. When the file doesn’t exist, the user needs to be prompted with an error message. This error message can also be written to the test result log and the script might end, depending on the logic setup of the built-in error-checking routine.

A branching construct can also prove valuable when a test script must operate at a specific video resolution. A test procedure can include logic that says: if the video resolution is 640 × 480, execute this script; else, if the resolution can be changed, update it to 640 × 480; otherwise, provide an error message and/or exit the program.

8.2.3.12 Context Independence

Another guideline for a test team that is trying to create maintainable scripts pertains to the development of context-independent test procedures. These guidelines should define where test procedures begin and end; they can be adapted from the test procedure design matrix given in Chapter 7. Context independence is facilitated by the implementation of the modularity principles already described in this chapter. Given the highly interdependent nature of test procedures and the interest in executing a sequential string of tests, a suite of test procedures are often incorporated into a test script shell or wrapper function. In this case, the test procedures are incorporated into a file in a specified sequence and can then be executed as a single test script or procedure.

In instances when a test script shell or wrapper function is not used, the test team must manually ensure that the AUT is at the correct starting point and that the test procedures are played back in the correct order. The required sequencing for test procedure execution could be documented within each test procedure as a precondition for execution. This manual approach to performing context-independent test procedures minimizes the benefits of automation. In any event, the test team needs to ensure that the guidelines are in place to direct the development and documentation of test procedures, given the relationship between test procedures that may exist.

Guidelines could specify that all test procedures pertaining to a GUI application must start at the same initial screen and end at that same window or that the application must be in the same state. This approach represents the ideal way of implementing modularity, because the scripts are independent and therefore result-independent, meaning that the expected result of one test will not affect the outcome of others and that data and context are not affected. This guideline would simplify interchangeability of test procedures, allowing for the creation of shell procedures. (See the modularity model depicted in Table 8.4.) It is not always feasible, however, especially when the test procedure must step through many lower-level windows to reach the final functionality. In such a case, it would not be practical to require that the test procedure trace back repeatedly through all windows.

A work-around solution for this situation could create test procedures in a modular fashion—that is, where one test procedure ends, another test procedure could begin. With this model, a series of test procedures can be linked together as part of a higher-level script and played back in a specific order to ensure continuous flow of test script playback as well as ongoing operation of the AUT. When using this strategy, however, test failure can cause domino effects. That is, the test sequence outcome of one test procedure affects the outcome of another procedure. Error recovery can easily become complex.
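The higher-level script described here can be sketched as a simple wrapper (Python; the procedure and screen names are invented). The shell runs a list of modular test procedures in a specified sequence as one top-level script, with each procedure ending where the next one begins:

```python
def make_shell(procedures):
    """Build a wrapper (shell) procedure that runs modular test
    procedures in a specified sequence as one top-level script."""
    def shell(context):
        for procedure in procedures:
            procedure(context)   # each ends where the next one begins
        return context
    return shell

# Hypothetical modular procedures, recording their execution order
trace = []
def open_orders(ctx):  trace.append("open orders screen")
def add_order(ctx):    trace.append("add order")
def close_orders(ctx): trace.append("close orders screen")

order_regression = make_shell([open_orders, add_order, close_orders])
order_regression({})
```

The ordering is fixed by the list passed to the shell, which makes the dependency between procedures explicit but also illustrates the domino risk: a failure in `add_order` leaves `close_orders` starting from the wrong state.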

8.2.3.13 Global Files

Automated test procedures can be maintained more easily through the use of global files. The scripting language used by an automated test tool requires the test engineer to declare and define the program code for each test procedure, and test engineers should take advantage of library files when developing test automation. In the case of the TeamTest tool, for example, header files have .sbh extensions and contain the procedure declarations and global variables referred to in the test procedure script files, while source code files have .sbl extensions and contain the procedure definitions used in the test script files.

Global files offer a benefit because the globally declared procedures are available to any test procedure. If a globally declared procedure must be modified, the change must be made in only one place, not in all of the procedures that use it. The test team can develop these global files in parallel with application development, or later when software modules have become stable enough to support test procedure scripting.

A prime candidate for inclusion within a global file would be a test procedure related to the login screen. Each time the AUT displays the login screen, the test script can pass the required user ID and password values to the application.

To take advantage of the .sbh or header file in SQA Basic code, the test engineer must place an include statement in the main body of the test script. This statement simply pulls in the source code to be included as part of the procedure during compilation time. An include file allows the test engineer to maintain one central copy of common code instead of repeating the same code in many test procedures. The .sbh files are normally restricted to holding the declarations for external functions as well as some information about where to find those functions (that is, BasicLib “Global”).

The external functions themselves (.sbl/.sbx) are similar to .dll files. The .sbl file contains the source code, which is compiled to create the executable stored in a .sbx file. If the .sbl source file changes, the .sbx file remains the same until it is recompiled. The .sbx file is not included in the source code because the two are linked dynamically at run time; it is what actually executes when the external functions are called. The .sbl files must be compiled, and their parameters must exactly match the function declarations included in the .sbh file.

8.2.3.14 Constants

Use of constants enhances the maintainability of test procedures. The test team should store constants in one or more global files so that they can easily be maintained. For example, the DelayFor command in SQA Suite permits synchronization with an application and accepts a time given in milliseconds; a statement such as DelayFor 60000 means "hold the next step for 1 minute." Because every wait time must be converted to milliseconds, the test engineer can simplify matters by placing the conversion factors in constants. The following SQA Basic script uses constants in this way:

Test Station Check

******************
Sub Main
    Const MILLI_SEC = 1000
    Const MILLI_MIN = 60000
    Const MILLI_HOUR = 3600000

    DelayFor ((2 * MILLI_SEC) + (2 * MILLI_MIN))

    StartApplication "C:\notepad"
    ......

8.2.4 Other Guidelines

8.2.4.1 Output Format

During the test planning phase, the test team should identify the test result output format, which is based on end-user input. When developing test procedures, it is necessary to know the desired test procedure output format. For example, the test engineer can add the output format into the automated test script via Write to Log statements or other output statements.

For one project, the test engineer developed an elaborate output format that relied on an automated script. The output information was embedded in the test script and written to the test log. The end users for the application did not like this solution, however. They believed that the test engineer could programmatically manipulate the results output to their liking. Instead, they wanted to see screen prints of each resulting output, as they had in the past. To satisfy this request, the test engineer used the automated testing tool to create a script that captured these screen prints. From this experience, the test engineer learned how important it is to obtain customer or end-user approval for the planned test results output format.

8.2.4.2 Test Procedures/Verification Points

As part of test procedure design standards, the test team needs to clarify which test procedures/verification points to use and when to use them. Testing tools come with various test procedures, also called verification points. Test procedures can examine the object properties of the various controls on a GUI screen. They can verify the existence of a window, test the menu, and test the OCX and VBX files of an application. For example, Test Studio allows for recording and insertion of 20 verification points: object properties, data window, OCX/VBX data, alphanumeric, list, menu, notepad, window existence, region image, window image, wait state test procedure, start application, start timer, end timer, call test procedure, write to test log, insert comment, object data, and Web site.

The test engineers might have a preference as to which verification points they use most often and which ones they avoid. For example, when working with Test Studio, some test engineers avoid the region image and window image verification points, because these tests are sensitive to every x, y coordinate (pixel) change.

8.2.4.3 User-Defined Verification Methods

Another place to incorporate variability is in the definition of the data that the test procedure requires to determine whether a test has passed or failed. When using standard test procedures and verification methods, the test team defines the exact string that must appear for a test to pass. In contrast, when using the Test Studio tool, instead of expecting a specific date the test script can simply check that the data appear in date form; in this example, any date is acceptable.

To allow for variation (beyond that offered by built-in verification methods, such as a numeric range), the test may incorporate a custom-built verification method. Test automation, involving code development, is necessary to create such a verification method. The test engineer can also take advantage of application programming interfaces (API) calls and dynamic link library (.dll) files.
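A minimal sketch of such a user-defined verification method, written in Python for illustration: rather than comparing against one specific date, the check passes whenever the value merely looks like a date. The MM/DD/YYYY pattern is an assumption; a real implementation would match the AUT's date format.

```python
import re

def verify_date_format(value: str) -> bool:
    """User-defined verification: pass if the value *looks like* a date
    (MM/DD/YYYY assumed here), regardless of which date it is."""
    if not re.fullmatch(r"\d{2}/\d{2}/\d{4}", value):
        return False
    month, day, _ = (int(part) for part in value.split("/"))
    # Accept any plausible month and day; the specific date is irrelevant.
    return 1 <= month <= 12 and 1 <= day <= 31
```

The same idea extends to numeric ranges, currency formats, or any other field where the form matters more than the exact value.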

8.2.4.4 API Calls and .dll Files

The use of API calls can drastically enhance test scripts. API calls represent a (rather large) set of functions built into Windows that can be used within a test script to enhance the script’s capabilities.

The test team needs to verify whether its chosen test tool supports the use of API calls. Usually, it can consult the test tool vendor to identify the various APIs that are compatible with the tool. To create an API call, the test engineer declares the API function within the test procedure code, usually in the declarations section of the program module. The program then calls the function as it would any other function.

There are many ways to use APIs in support of the test effort and many books have been published that address the use of APIs. A few examples of API calls employing the WinRunner test tool are outlined below.

Example 1. Determine Available Free Memory

Declaration: Declare Function GetFreeSpace& Lib "Kernel" (ByVal flag%)

API call: x& = GetFreeSpace(0)

Example 2. Identify System Resources in Use

Declaration: Declare Function GetFreeSystemResources& Lib "User" (ByVal flags%)

API call: x& = GetFreeSystemResources(0)

The use of .dll files allows the test engineer to extend the usefulness of the script. Like API calls, they can be used in many ways.

Example 1. [6]: Sort the array contents by means of a .dll function.

Syntax: array_sort_C( array[ ], property1, property2 );

array[ ]: The array (field) to be sorted.

property1: Primary sorting criterion (object property).

property2: Secondary sorting criterion (object property); optional.

This code sorts a field that consists of object names. The new order is determined by comparing object properties. The new index begins with 1. As many as two sorting criteria can be specified; multistep sorting is thus possible.

Example 2. Sort a field according to class and x-position

# init sample
win_get_objects(window,all,M_NO_MENUS);
# sort array
array_sort_C(all, "class", "x");
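The behavior of array_sort_C can be sketched in Python, assuming each GUI object is represented as a dictionary of properties; the control names and property values are invented for illustration.

```python
def sort_objects(objects, prop1, prop2=None):
    """Sort a list of object dictionaries by a primary and optional secondary
    property, mirroring array_sort_C(array, property1, property2)."""
    keys = (prop1,) if prop2 is None else (prop1, prop2)
    return sorted(objects, key=lambda obj: tuple(obj[k] for k in keys))

# Hypothetical GUI controls captured from a window.
controls = [
    {"name": "btnOK", "class": "PushButton", "x": 120},
    {"name": "edtName", "class": "EditField", "x": 40},
    {"name": "btnCancel", "class": "PushButton", "x": 30},
]

# Sort by class, then by x-position, as in Example 2 above.
ordered = sort_objects(controls, "class", "x")
```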

8.3 Automation Infrastructure

The application of test automation to a project, including the development of test scripts (programs) or the manipulation of automated test tool-generated test programs, is best supported by the use of a library of reusable functions. This library is known as an automation infrastructure or automation framework.

The creation and maintenance of an automation infrastructure is a key component to any long-term test automation program. Its implementation generally requires a test team organization structure that supports the cohesiveness of the test team across multiple projects.

By following the development guidelines described in Section 8.2, the test team will start building reusable functions that become the building blocks of its automation infrastructure. An automation infrastructure represents a library of reusable functions that may have been created for test efforts on different projects or to test multiple or incremental versions of a particular application. These functions may be used to minimize the duplication of the test procedure development effort and to enhance the reusability of test procedures.

Typically, a minimal set of functionality is incorporated into a single test procedure. For example, one test procedure might encapsulate the activation of a menu item, and other test procedures would activate that item by calling it. If the application menu item changes, the test team must make the change in only a single place, the test procedure that exercises the menu item, and then reexecute the dependent test procedures. An automation infrastructure is augmented and developed over time, as test personnel create reusable functions (subprocedures) to support test procedures for a multitude of application development and maintenance projects.

It is important to document the contents and functionality of the automation infrastructure in support of reuse analysis efforts, as discussed in Section 8.1. The following functional scripts might prove especially valuable within an automation infrastructure:

Table-driven test automation

PC environment setup script

Automated recording options

Login function

Exit function

Navigation function

Verifying GUI standards function

Smoke test

Error-logging routines

Help function verification script

Timed message boxes function

Advanced math functions

8.3.1 Table-Driven Test Automation

A table-driven approach to testing [7] is similar to the use of a data template, as discussed in Section 8.2. This approach makes further use of input files. Not only is data input from a file or spreadsheet, but controls, commands, and expected results are incorporated in testing as well. Test script code is therefore separated from data, which minimizes the script modification and maintenance effort. When using this approach, it is important to distinguish between the action of determining "what requirements to test" and the effort of determining "how to test the requirements." The functionality of the AUT is documented within a table, together with step-by-step instructions for each test. An example is provided in Table 8.6.

Table 8.6. Table Driven Automated Testing

image

Once this table has been created, a simple parsing program can read the steps from the table, determine how to execute each step, and perform error checking based on the error codes returned. This parser extracts information from the table for the purpose of developing one or multiple test procedure(s). Table 8.6 data are derived from the following SQA Basic code:

Script From Which Table 8.6 Is Derived

Window SetContext, "VBName=StartScreen;VisualText=XYZ Savings Bank", ""
    PushButton Click, "VBName=PrequalifyButton;VisualText=Prequalifying"

Window SetContext, "VBName=frmMain;VisualText=Mortgage Prequalifier", ""
    MenuSelect "File->New Customer"

    ComboListBox Click, "ObjectIndex=" & TestCustomer.Title, "Text=Mr."
    InputKeys TestCustomer.FirstName & "{TAB}" & TestCustomer.LastName & _
        "{TAB}" & TestCustomer.Address & "{TAB}" & TestCustomer.City
    InputKeys "{TAB}" & TestCustomer.State & "{TAB}" & TestCustomer.Zip

    PushButton Click, "VBName=UpdateButton;VisualText=Update"
    .
    .
    .
'End of recorded code
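The parsing program described above can be sketched in Python: it reads each step row (command, target, data), dispatches it to a handler, and checks the returned error code. The handlers only log their actions here; a real parser would drive the test tool. All names are illustrative.

```python
import csv
import io

LOG = []  # Records what each dispatched step did, for inspection.

def do_click(target, _data):
    """Hypothetical handler for a 'Click' step; returns 0 on success."""
    LOG.append(f"click {target}")
    return 0

def do_input(target, data):
    """Hypothetical handler for an 'Input' step; returns 0 on success."""
    LOG.append(f"type '{data}' into {target}")
    return 0

ACTIONS = {"Click": do_click, "Input": do_input}

def run_table(table_text):
    """Read step rows (command, target, data) and dispatch each one,
    returning the step number of the first failure, or 0 if all pass."""
    for number, row in enumerate(csv.reader(io.StringIO(table_text)), start=1):
        command, target, data = (field.strip() for field in row)
        if ACTIONS[command](target, data) != 0:
            return number
    return 0

# Steps loosely modeled on the recorded script above.
steps = "Click,PrequalifyButton,\nInput,FirstName,James\nClick,UpdateButton,"
result = run_table(steps)
```

Because the steps live in data rather than code, a change to the AUT's workflow means editing the table, not the parser.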

The test team could create a GUI map containing entries for every type of GUI control that would require testing. Controls would include every push button, pull-down menu, drop-down box, and scroll button. Each entry in the GUI map would contain information on the type of control, the control item’s parent window, and the size and location of the control in the window. Each entry would contain a unique identifier similar in concept to control IDs. The test engineer uses these unique identifiers within test scripts much in the same way that object recognition strings are used.

The GUI map serves as an index to the various objects within the GUI and the corresponding test scripts that perform tests on the objects. It can be implemented in several ways, including via constants or global variables. That is, every GUI object is replaced with a constant or global variable. The GUI map can also take advantage of a data file, such as a spreadsheet. The map information can then be read into a global array. By placing the information into a global array, the test engineer makes the map information available to every test script in the system; the same data can be reused and called repeatedly.

In addition to reading GUI data from a file or spreadsheet, expected result data can be placed into a file or spreadsheet and retrieved. An automated test tool can then compare the actual result produced by the test with the expected result maintained within a file.
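A minimal Python sketch of loading such a GUI map from spreadsheet-style data into a lookup structure keyed by unique control IDs; the column names and control entries are assumptions for illustration.

```python
import csv
import io

def load_gui_map(csv_text):
    """Build a GUI map keyed by a unique control ID. Assumed columns:
    id, type, parent, x, y, width, height."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return {row["id"]: row for row in reader}

# Hypothetical map data as it might appear in a spreadsheet export.
MAP_DATA = """id,type,parent,x,y,width,height
BTN_UPDATE,PushButton,frmMain,200,340,80,25
CBO_TITLE,ComboBox,frmMain,20,60,120,25
"""

gui_map = load_gui_map(MAP_DATA)
```

Once loaded into a global structure like this, every test script can look up a control by its ID instead of hard-coding recognition strings.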

The test team should continually add reusable and common test procedures and scripts to the electronic test program library. This library can offer a high return on investment, regardless of whether the test engineer wants to create a common function that sets up the PC environment or develop a common script to log all errors. A key to the usefulness of the program library is the ease with which test engineers can search for and find functions within the library. The naming convention adopted should enable them to readily locate desired functions, as outlined in the discussion of test development guidelines in Chapter 7. A naming convention is also helpful in supporting reuse analysis. For example, the creation of a table like Table 8.2 on page 292, only without the column "Reuse Asset" filled in, would aid in this endeavor.

8.3.2 PC Environment Automated Setup Script

As mentioned throughout this book, a PC environment setup script is a useful addition to the reuse library. For a script to be played back successfully, the state of the PC environment must exactly match its state during the recording stage; otherwise, environment differences can cause playback problems.

To prevent such playback problems from happening, the test team should develop a common and often reused test environment setup script. This setup script should prepare the environment for test execution by verifying the PC configuration. It can ensure, for example, that all desktop computers (PCs) allocated to the test environment are configured in the same way. It could check that the computer drive mapping is the same for all pertinent PCs and verify all network connections. One subsection of the script can check that the video resolutions for all PCs are the same, while another could ensure that the screen saver for each computer has been turned off. Still another script might synchronize the date and time for each desktop computer. The setup script can also verify that the correct version of the AUT is in place. It can provide data initialization, restore the state of the data to its original state after add, delete, or update functions are executed, and provide file backups.

This test environment setup script can also include functionality that checks the installation of all dynamic link libraries (.dll) and verifies registry entries. It could turn on or off the operation of system messages that might interfere with the development or playback of the scripts. This script can verify available disk space and memory, sending an appropriate warning message if either is too low.
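The comparison at the heart of such a setup script can be sketched in Python: the current PC configuration is checked against a recorded baseline, and any mismatches are reported before test execution begins. The configuration keys shown are hypothetical.

```python
def check_environment(config, baseline):
    """Compare the PC's configuration against the recorded baseline and
    return a list of mismatches (empty means the environment is ready)."""
    problems = []
    for key, expected in baseline.items():
        actual = config.get(key)
        if actual != expected:
            problems.append(f"{key}: expected {expected}, found {actual}")
    return problems

# Hypothetical baseline captured when the scripts were recorded.
baseline = {
    "resolution": "1024x768",
    "screen_saver": "off",
    "drive_map_f": r"\\server\testdata",
}

# Hypothetical state of one test PC; note the wrong video resolution.
pc = {
    "resolution": "800x600",
    "screen_saver": "off",
    "drive_map_f": r"\\server\testdata",
}

issues = check_environment(pc, baseline)
```

A real setup script would also query the operating system for disk space, memory, and registry entries rather than reading from a dictionary.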

Because the setup script can incorporate significant functionality, the test engineer must take care to keep modularity in mind. It is best to restrict each major piece of functionality to a single script and to consolidate the separate scripts into one setup shell procedure (or wrapper).

8.3.3 Automated Recording Options

Another useful script in a test library repertoire might automate all recording options. Most automated test tools allow the test engineer to set up script recording options in various ways. So that members of the test team will use a consistent approach, a script can be created that automates this setup. This function will provide instructions on how to select a number of parameters, such as mouse drags, certain objects, window settings, and an object window.

In some automated test tools, such as Robot, the test engineer can specify how the tool should identify list and menu contents and unsupported mouse drags. Test personnel can specify which prefix to use in script auto-naming, whether to record think time, and whether Robot should save and restore the sizes and positions of active windows [8]. These sorts of setup activities can be automated to ensure that they remain consistent for each PC setup.

Additionally, most automated test tools will provide the capability to set up script playback options. For the playback option in Robot, the test engineer can specify how much of a delay should separate commands and keystrokes, whether to use the recorded think time and typing delays, whether to skip verification points, whether to display an acknowledge results box, and what happens to the Robot window during playback. Test personnel can specify which results to save in a log and whether the log should appear after playback. Other selections in Robot include the ability to specify “Caption Matching,” “Wait State,” “Unexpected Active Window” handling, “Error Recovery” handling, and “Trap” handling [9]. Again, this setup should remain consistent among all PCs within the test environment—perfect justification for automating the test environment setup.

8.3.4 Login Function

Another important part of a reuse library is a login script. This script can start and end at a specific point. It can start the AUT and verify the login ID and password. In addition, it can verify the available memory before and after the application starts, among other things. This script can be called at the beginning of all procedures.

8.3.5 Exit Function

An endless number and variety of scripts can be added to the automation infrastructure (reuse library). The automation infrastructure includes common functions found within the test environment that can be called by a test procedure or incorporated within a new test procedure. Scripts can be created that select the various ways of opening, closing, deleting, cutting, and pasting records. For example, a test procedure script that randomly selects the method to exit an application could be called when needed from another test procedure script. The actions necessary for exiting an application might include the selection of an Exit button, selection of the Exit value from the File pull-down menu, selection of the Close value under the File pull-down menu, and double-clicking on the Window Control menu box, just to name a few. The single test procedure step (exit application) would be performed differently each time the script executed, thereby fully exercising the system.
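The random selection of an exit method can be sketched in Python; the method descriptions are placeholders for the tool-specific actions a real script would perform.

```python
import random

# Placeholder descriptions of equivalent ways to exit the AUT.
EXIT_METHODS = [
    "click Exit button",
    "File -> Exit menu item",
    "File -> Close menu item",
    "double-click control-menu box",
]

def exit_application(rng=random):
    """Randomly pick one of several equivalent exit paths so that repeated
    runs exercise different parts of the user interface."""
    return rng.choice(EXIT_METHODS)

# Seeding the generator makes a single demonstration run repeatable.
chosen = exit_application(random.Random(7))
```

Each call may take a different path, so a suite that runs this step many times ends up exercising all of the exit mechanisms over time.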

8.3.6 Navigation

As already explained in the test development guidelines in Section 8.2, the navigation of an application requires its own script. The test team needs to add test procedures for navigation purposes only to the automation infrastructure. These test procedures are not designed to validate specific functional requirements, but rather to support user interface actions. For example, a test engineer may record a procedure that navigates the application through several windows and then exits at a particular window. A separate test procedure may then validate the destination window. Navigation procedures may be shared and reused throughout the design and development effort.

8.3.7 Verifying GUI Standards

Another script that could be added to the library of reusable scripts might verify the GUI standards. The test team can consider developing specific test procedures for performing checks against GUI standards (such as those examining font, color, and tab order choices) that could be run as part of a regression test, rather than as part of the system test. Team members might create a script that verifies GUI standards based on a set of rules that appear in a .csv file. The script then could read the rules from the .csv file and compare them with the actual GUI implementation.
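Reading GUI-standards rules from a .csv file and comparing them with the actual GUI implementation can be sketched in Python as follows; the rule columns and property names are assumptions for illustration.

```python
import csv
import io

def check_gui_standards(rules_csv, actual_props):
    """Compare each rule (property, required value) from the .csv text
    against the properties captured from a window; return the names of
    any properties that violate the standard."""
    violations = []
    for rule in csv.DictReader(io.StringIO(rules_csv)):
        prop, required = rule["property"], rule["required_value"]
        if actual_props.get(prop) != required:
            violations.append(prop)
    return violations

# Hypothetical GUI standards, as they might appear in the rules .csv file.
RULES = "property,required_value\nfont,MS Sans Serif\nbackground,gray\n"

# Hypothetical properties captured from one window of the AUT.
window = {"font": "MS Sans Serif", "background": "white"}

bad = check_gui_standards(RULES, window)
```

Keeping the rules in a .csv file means the GUI standards can be updated without touching the verification script itself.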

8.3.8 Smoke Test

A very important addition to the library of reusable scripts consists of a build verification test, otherwise known as a smoke test. This test, which was described in Chapter 2, focuses on automating the critical high-level functionality of the application. Instead of repeatedly retesting everything manually whenever a new software build is received, a test engineer plays back the smoke test, verifying that the major functionality of the system still exists. An automated test tool allows the test engineer to record the manual test steps that would usually be taken in this software build verification.

The test team first determines which parts of the AUT account for the high-level functionality. It then develops automated test procedures targeting this major functionality. This smoke test can be replayed by the test engineers or developers whenever a new software release is received to verify that the build process did not create a new problem and to prove that the configuration management checkout process was not faulty.

When the test effort is targeted at the first release of an application, smoke tests may consist of a series of tests that verify that the database points to the correct environment, the correct (expected) version of the database is employed, sessions can be launched, all screens and menu selections are accessible, and data can be entered, selected, and edited. One to two days of test team time may be required to perform a smoke test involving the creation and execution of automated scripts for that software release and the analysis of results. When testing the first release of an application, the test team may want to perform a smoke test on each segment of the system so as to be able to initiate test development as soon as possible without waiting for the entire system to become stable.

If the results meet expectations, meaning that the smoke tests passed, then the software is formally moved into the test environment. If the results do not meet expectations and the failures do not result from setup/configuration, script or test engineer error, or software fixes, then the test team should reject the software as not ready for test.

8.3.9 Error-Logging Routine

As mentioned in the development guidelines, error-logging routines should be part of a reuse library. The test engineer can create an error-checking routine by recording any error information that the tool might not already collect. For example, it might gather such information as “actual versus expected result,” application status, and environment information at time of error.
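A sketch of such an error-logging routine in Python, assembling the "actual versus expected" result together with environment information at the time of the error; the field names are illustrative.

```python
import datetime

def log_error(step, expected, actual, environment):
    """Build an error-log record capturing actual vs. expected results plus
    environment details at the time of the failure."""
    return {
        "timestamp": datetime.datetime.now().isoformat(timespec="seconds"),
        "step": step,
        "expected": expected,
        "actual": actual,
        "environment": dict(environment),  # snapshot, not a live reference
        "status": "PASS" if expected == actual else "FAIL",
    }

# Hypothetical failure record for one verification step.
entry = log_error(
    "verify balance",
    expected="1000.00",
    actual="999.99",
    environment={"build": "3.2.1", "os": "Windows NT 4.0"},
)
```

Records built this way can be appended to the test log alongside whatever the tool already collects, so failures carry their context with them.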

8.3.10 Help Function Verification Script

A help verification script can verify that the help functionality is accessible, that the content is correct, and that correct navigation is occurring. What follows are some examples of reusable script libraries implemented by various test engineers.

A group of test engineers using WinRunner, a test tool from Mercury Interactive, created a library of reusable scripts and made the scripts available to all test engineers by making them accessible via the Web [10]. This library contains templates for test scripts, extensions of the test script language, and standardized test procedures for GUI testing using WinRunner. Table 8.7 provides examples of reusable functions written to support this project.

Table 8.7. Reusable Functions of WinRunner Scripts

image

The library consists of three basic components: script templates, functions, and GUI checks. All three components are designed to simplify the work of the test developer and to standardize the test results. The script templates give test engineers a starting point for writing their own scripts; a template can be copied and developed into complete test scripts or function sets. GUI checks are integrated into the WinRunner environment by starting the corresponding installation script. They work much like built-in GUI checks.

8.3.11 Timed Message Boxes Function

Another example of a reusable script is the timed message boxes function. This function allows message boxes to be displayed and then disappear after a specified amount of time, so as to avoid script failure.

8.3.12 Advanced Math Functions

Advanced math functions, once created, can be reused whenever applicable for any AUT.

Chapter Summary

• Test development involves the creation of test procedures that are maintainable, reusable, simple, and robust, which in itself can be as challenging as the development of the AUT. To maximize the benefit derived from test automation, test engineers need to conduct test procedure development, like test design, in parallel with development of the target application.

• The test development architecture provides the test team with a clear picture of the test development preparation activities (building blocks) necessary to create test procedures. This architecture graphically depicts the major activities to be performed as part of test development. The test team must modify and tailor the test development architecture to reflect the priorities of the particular project.

• Test procedure development needs to be preceded by several setup activities, such as tracking and managing test environment setup activities, where material procurements may have long lead times.

• Prior to the commencement of test development, the test team needs to perform an analysis to identify the potential for reuse of existing test procedures and scripts within the automation infrastructure (reuse library).

• The test procedure development/execution schedule is prepared by the test team as a means to identify the timeframe for developing and executing the various tests. This schedule identifies dependencies between tests and includes test setup activities, test procedure sequences, and cleanup activities.

• Prior to the creation of a complete suite of test procedures, the test team should perform modularity-relationship analysis. The results of this analysis help to clarify data independence, plan for dependencies between tests, and identify common scripts that can be repeatedly applied to the test effort. A modularity-relationship matrix is created that specifies the interrelationship of the various test scripts. This graphical representation allows test engineers to identify opportunities for script reuse in various combinations using the wrapper format, thereby minimizing the effort required to build and maintain test scripts.

• As test procedures are being developed, the test team needs to perform configuration control for the test design, test scripts, and test data, as well as for each individual test procedure. Groups of reusable test procedures and scripts are usually maintained in a catalog or library of test data called a testbed. The testbed needs to be baselined using a configuration management tool.

• The team next needs to identify the test procedure development guidelines that will apply on the project to support the various test development activities. These guidelines should apply to the development of both manual and automated test procedures that are reusable and maintainable.

• An automation infrastructure (or automation framework) is a library of reusable functions that may have been originally created for test efforts on different projects or to test multiple or incremental versions of a particular application. The key to these functions is their potential for reuse, which minimizes the duplication of the test procedure development effort and enhances the reusability of the resulting test procedures.

References

1. “Analyze a little, design a little, code a little, test a little” has been attributed to Grady Booch.

2. Linz, T., Daigl, M. GUI Testing Made Painless, Implementation and Results of the ESSI Project Number 24306. Moehrendorf, Germany, 1998. www.imbus.de

3. http://www3.sympatico.ca/michael.woodall/sqa.htm.

4. http://www3.sympatico.ca/michael.woodall.

5. VB Flavor Hungarian Notation: www.strangecreations.com/library/c/naming.txt. Also see http://support.microsoft.com/support/kb/articles/Q110/2/64.asp.

6. Linz, T., Daigl, M. How to Automate Testing of Graphical User Interfaces, Implementation and Results of the ESSI Project Number 24306. Moehrendorf, Germany, 1998. www.imbus.de.

7. Pettichord, B. Success with Test Automation. (Web page.) 1995. www.io.com/~wazmo/qa.html.

8. Jacobson, I., Booch, G., Rumbaugh, J. The Unified Software Development Process. Reading, MA: Addison-Wesley, 1999.

9. Ibid.

10. See note 2.
