Chapter 4. Automated Testing Introduction Process

A tool is only as good as the process being used to implement the tool. How a tool is implemented and used is what really matters.

Anonymous


A new technology is often met with skepticism—and software test automation is no exception. How test teams introduce an automated software test tool on a new project is nearly as important as the selection of the most appropriate test tool for the project.

Over the last several years, test teams have largely implemented automated testing tools on projects without having a process or strategy in place describing in detail the steps involved in using the tool productively. This approach commonly results in the development of test scripts that are not reusable, meaning that a test script serves a single test but cannot be applied to a subsequent release of the software application. With incremental software builds, these test scripts must be re-created repeatedly and adjusted again and again to accommodate even minor software changes. This approach increases the testing effort and leads to schedule slips and cost overruns.

Perhaps the most dreaded consequence of an unstructured test program is the need for extending the period of actual testing. Test efforts that drag out unexpectedly tend to receive a significant amount of criticism and unwanted management attention. Unplanned extensions to the test schedule may have several undesirable consequences to the organization, including loss of product market share or loss of customer or client confidence and satisfaction with the product.

On other occasions, the test team may attempt to implement a test tool too late in the development life cycle to adequately accommodate the learning curve for the test tool. The test team may find that the time lost while learning to work with the tool or ramping up on its features and capabilities has put the testing effort behind schedule. In such situations, the team may become frustrated with the tool and even abandon it to achieve short-term gains in test progress. The test team may be able to make up some time and meet an initial test execution date, but these gains are soon forfeited during regression testing and subsequent test cycles.

In the preceding scenarios, the test team may have had the best intentions in mind, but unfortunately was simply unprepared to exercise the best course of action. The test engineer did not have the requisite experience with the tool or had not defined a way of successfully introducing it. What happens in these cases? The test tool itself usually absorbs most of the blame for the schedule slip or the poor test performance. In fact, the real underlying cause of the test failure was the absence of a defined test process or, where one was defined, the failure to adhere to that process.

The fallout from a bad experience with a test tool on a project can have a ripple effect throughout an organization. The experience may tarnish the reputation of the test group. Confidence in the tool by product and project managers may have been shaken to the point where the test team may have difficulty obtaining approval for use of a test tool on future efforts. Likewise, when budget pressures materialize, planned expenditures for test tool licenses and related tool support may be scratched.

By developing and following a strategy for rolling out an automated test tool, the test team can avoid having to make major unplanned adjustments throughout the test process. Such adjustments often prove nerve-wracking for the entire test team. Likewise, projects that require test engineers to perform tests manually may experience significant turnover of test personnel.

It is worth the effort to invest adequate time in the analysis and definition of a suitable test tool introduction process. This process is essential to the long-term success of an automated test program. Test teams need to view the introduction of an automated test tool into a new project as a process, not an event. The test tool needs to complement the process, not the reverse. Figure 4.1 depicts the test tool introduction process that should be used to avoid false starts.

Figure 4.1. Test Tool Introduction Process


Following the analysis of the overall test process and the development of test goals, objectives, and strategies (as outlined in Section 4.1), the test team will need to verify that an automated test tool supports most of the project’s specific test needs. Recall that the selection criteria in Chapter 3 stated that a truly useful automated test tool will meet the needs of the organization’s system engineering environment. These selection criteria are the basis for purchasing a particular test tool.

Section 4.2 outlines the steps necessary to verify that the test tool meets the project’s specific test needs and provides guidelines for determining whether it is feasible to introduce an automated testing tool, given the project schedule and other criteria. This section also seeks to ensure that automated test tool expertise is in place and that team members understand who is responsible for rolling out the testing tool and who will design, create, and execute the automated test scripts.

Once the test team has concluded that the test tool is appropriate for the current project, it continues with the ATLM by performing Test Planning (Chapter 6), Test Analysis and Design (Chapter 7), and Test Development (Chapter 8). The outcomes of the activities described in Sections 4.1 and 4.2 need to be recorded as part of the test planning activities that are documented within the formal test plan.

4.1 Test Process Analysis

The test team initiates the test tool introduction process by analyzing the organization’s current test process. Generally, some method of performing tests is already in place, and therefore the exercise of process definition itself may actually result in process improvement. In any case, process improvement begins with process definition.

The test process must be documented in such a way that it can be communicated to others. An undocumented test process cannot be communicated or executed in a repeatable fashion, and a process that cannot be communicated is rarely implemented consistently.

In addition, if the process is not documented, then it cannot be consciously and uniformly improved. On the other hand, if a process is documented, it can be measured and therefore improved.

If the organization’s overall test process is not yet documented, or it is documented but outdated or inadequate, the test team may wish to adopt an existing test process, in whole or in part. The test team may wish to adopt the Automated Test Life-cycle Methodology (ATLM) outlined in this book as the organization’s test process, or to adopt the ATLM with modifications, tailoring it to accommodate the test goals and interests of its particular organization. When defining or tailoring a test process, it may prove useful for the test engineer to review the organization’s product development or software development process document, when available.

When defining a test process for an organization, the test team should become familiar with the organization’s quality and process improvement objectives. Perhaps the organization is seeking to comply with industry quality and process maturity guidelines, such as the Software Engineering Institute’s Capability Maturity Model (CMM). The CMM for software was established as a guide for providing software development organizations with a structured way of instilling discipline into their process for creating and maintaining software. This model instructs organizations to implement this discipline in an evolutionary manner, where levels or plateaus of maturity are reached and then surpassed.

Implementation of CMM guidelines is intended to create an infrastructure of people and proven practices that enable the organization to produce quality products, achieve customer satisfaction, and meet project objectives. Consistent with the CMM, the test team will want to define and refine test process inputs, outputs, and process-specific metrics. The test team should not be content to know that the overall organization is performing in a mature manner, if test procedures are not defined and test documentation is not being produced in a consistent manner. Only when the test process has been documented and metrics have been defined, collected, and analyzed can the test team make effective improvements to the test process.

To further support process objectives, the test team will want to maintain a repository of objective evidence that documents performance of testing in accordance with the defined test process. Should the organization undergo an assessment of its compliance with CMM guidelines, the assessment team will attempt to verify that the test team’s activities match the activities defined within the test process. The assessment team will also check whether process outputs or artifacts produced comply with the defined test process.

The purpose of analyzing the organization’s test process is to identify the test goals, objectives, and strategies that may be inherent in the test process. These top-level elements of test planning serve as the cornerstones for a project’s test program. The purpose of documenting the test tool introduction process is to ensure that the test team has a clearly defined way of implementing automated testing, thereby allowing the team to fully leverage the functionality and time-saving features of the automated test tool.

The additional time and cost associated with documenting and implementing a test tool introduction process sometimes emerges as a contentious issue. In fact, a well-planned and well-executed process will pay for itself many times over by ensuring a higher level of defect detection (and thus fewer fixes to fielded software), shortening product development cycles, and providing labor savings. A test team that is disciplined in defining test goals and that reflects those goals in its definition of processes, its selection of skills for test team staff, and its selection of a test tool will perform well. This kind of discipline, exercised incrementally, supports the test team’s (and the entire organization’s) advancement in quality and maturity from one level to the next.

4.1.1 Process Review

As noted earlier, the test engineer needs to analyze the existing development and test process. During this analytical phase, he or she determines whether the current testing process meets several prerequisites:

• Testing goals and objectives have been defined.

• Testing strategies have been defined.

• The tools needed are available to implement planned strategies.

• A testing methodology has been defined.

• The testing process is communicated and documented.

• The testing process is being measured.

• The testing process implementation is audited.

• Users are involved throughout the test program.

• The test team is involved from the beginning of the system development life cycle.

• Testing is conducted in parallel to the system development life cycle.

• The schedule allows for process implementation.

• The budget allows for process implementation.

This section provides information to help the test engineer determine whether the testing process meets these criteria, outlines what to look for during the evaluation of the test process, and supports the test process enhancement or implementation effort. Test goals and objectives are discussed along with strategies that can help to achieve these goals and objectives. As depicted in Figure 4.2, it is important to document the outcome of each test tool introduction phase in the test plan.

Figure 4.2. Documenting the Results of Test Process Analysis


A successfully implemented test process will minimize the test schedule, achieve a high defect detection rate, improve software quality, and support the development of reliable systems that keep users happy. This section describes proven practices to apply when reviewing or revising the organization’s test process.

4.1.1.1 Early Involvement of Test Team

The test process should be initiated at the beginning of the system development life cycle. Early test team involvement is essential. During the business analysis phase, the test team augments its understanding of business rules and processes, and it obtains greater understanding of customer needs. During the requirements phase, the test team verifies the testability of requirements. Early test involvement also permits earlier detection of errors and prevents migration of errors from requirements specification to design, and from design into code.

4.1.1.2 Repeatable Process

The organization’s test process should be repeatable. A repeatable test process can be measured, fine-tuned, and improved. This process should also result in quality and efficiency whenever it is undertaken. Such a process will be measurable, reliable, and predictable. A repeatable process can be achieved by documenting every step of the process. When a test process is thoroughly documented, then testing can be controlled and implemented uniformly.

In addition to documentation, automation of tests is the most efficient means of achieving repeatability. Conventional test design techniques should be supplemented with automated static and dynamic analysis. A repeatable process can be further augmented through the application of reusable test scripts. Chapter 8 provides detailed information on how to create such reusable test scripts.
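
A minimal sketch of the reusable, data-driven style of script discussed above may help illustrate the idea; the function under test, the test data, and the names used here are hypothetical and not drawn from any particular tool. The test logic is written once, and subsequent builds or additional cases are covered by extending the data table rather than rewriting the script.

    # Hypothetical, minimal sketch of a reusable, data-driven test script.
    import unittest

    # Stand-in for the application-under-test: a simple discount calculator.
    def apply_discount(price, percent):
        if not (0 <= percent <= 100):
            raise ValueError("percent must be between 0 and 100")
        return round(price * (1 - percent / 100.0), 2)

    # Test data lives outside the test logic, so it can grow with each release.
    TEST_CASES = [
        # (price, percent, expected)
        (100.00, 10, 90.00),
        (59.99, 0, 59.99),
        (20.00, 100, 0.00),
    ]

    class DiscountRegressionTest(unittest.TestCase):
        def test_discount_table(self):
            for price, percent, expected in TEST_CASES:
                with self.subTest(price=price, percent=percent):
                    self.assertEqual(apply_discount(price, percent), expected)

    if __name__ == "__main__":
        unittest.main()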

Collection and analysis of measurements also need to be part of the test process. Within many organizations, a quality assurance group is responsible for performing audits to ensure that activities are being performed in accordance with defined processes. As a result, the test team needs to verify that the ATLM process has been properly implemented. To this end, the test team should review criteria by which the ATLM process can be measured. Several such criteria are as follows [2]:

Performance of the process. Obtain a measure of product attributes that result from the ATLM process, then measure the attributes of the process itself.

Repeatability. Would someone else be able to repeat the measurements and obtain the same results?

Product traceability. Measure the traceability of products to standards and products to process.

Stability of process. Are products on schedule according to plan? Are variations predictable and unexpected results rare? Does the process need to be improved to produce better and higher-quality products?

Compliance of process. Actual performance must be consistent with the defined test process. Table 4.1 outlines entities and attributes that can be measured to address process compliance.

Table 4.1. Process Compliance Measures


Fitness to execute process. Are project personnel aware of the process, trained in it, and given the tools needed to execute it? Are tools and procedures effective in accomplishing what was intended?

Use of defined process. Consistent execution of process is required. Is the process being executed as defined?

Capability of process. When a process is stable and conforming to requirements, it is termed capable.

4.1.1.3 Continuous Improvement

Another best practice to apply when maintaining a test process involves continuous process improvement. The primary goal is continuous refinement and improvement of the test process. To achieve process improvement, it is necessary to document lessons learned and QA audit findings throughout the testing life cycle and take corrective action before it is too late. It is also beneficial to document lessons learned at the end of the application development life cycle. The purpose of this effort is to identify needed improvement activities and ensure that mistakes are not repeated in the next test phase, during the next incremental software delivery, or on the next project. Chapter 10 discusses these lessons learned.

Determining and documenting the benefits reaped as a result of utilizing an automated test tool and making this information available to everyone in the organization also support continuous process improvement. Likewise, this information enhances the project team’s understanding of the benefits of using such tools. Chapter 9 discusses these benefits.

Surveys represent another way of determining how the process can be improved. The test team can send out a survey that asks project team personnel to give their impressions of the test process and the outputs produced by that process. The survey can also seek to identify ways in which the process could be improved. Project team personnel may include application developers, functional analysts/business users, and test engineers.

Process assessments and audits are very valuable in verifying that processes are being implemented correctly, thus supporting continuous improvement. The quality assurance department usually conducts these activities.

Root-cause analysis is another way of figuring out why a defect was introduced. Chapter 9 describes this type of analysis.

4.1.1.4 Safeguarding the Integrity of the Automated Test Process

To safeguard the integrity of the automated test process, the test team needs to exercise new releases of an automated test tool in an isolated environment and validate that the tool performs up to product specifications and marketing claims. The test team should verify that the upgrades will run in the organization’s current environment. Although the previous version of the tool may have performed correctly and a new upgrade may perform well in other environments, the upgrade might adversely affect the team’s particular environment. The verification of the test tool upgrade should be performed in an isolated environment, as described in Chapter 3. Additionally, using a configuration management tool to baseline the test repository will help safeguard the integrity of the automated testing process.

Although process definition, metric-gathering, and process improvement activities can be expensive and time-consuming, the good news is that creating and documenting standards and procedures for an automated test program may be no more expensive than the same activities for a manual test program. In fact, use of an automated test tool with scripting, test identification, and automatic documentation capabilities can reduce costs by providing some of the framework and content required.

Following performance of the test process analysis as outlined so far, the test team must decide whether the process needs to be revised or whether the team can proceed with the existing test process. Once a test process has been defined, reviewed, and then updated through a couple of iterations to shake out the bugs, the test team is ready to define test goals, objectives, and test strategies for a particular project or product development effort.

4.1.2 Goals and Objectives of Testing

What does the test effort hope to accomplish? Testing in general is conducted to verify that the software meets specific criteria and satisfies the requirements of the end user or customer. The high-level goal of testing is to identify defects in the application, thereby permitting the prevention, detection, and subsequent removal of defects and the creation of a stable system.

The primary goal of testing is to increase the probability that the application-under-test will behave correctly under all circumstances and will meet defined requirements, thus satisfying the end users by detecting (and managing to closure) as many defects as possible. One objective of automated testing is to support manual testing efforts intended to achieve this testing goal. Automated testing, when implemented correctly, promotes faster, better, and more efficient testing. It eventually can lead to a reduction in the size of the test effort, a reduction of the test schedule, the production of a reliable system, and the enhancement of a repeatable test process.

Testing verifies that software programs work according to their specifications. A program is said to work correctly when it satisfies the following criteria (a brief illustrative sketch follows the list):

  1. Given valid input, the program produces the correct output.
  2. Given invalid input, the program correctly and gracefully rejects the input.
  3. The program doesn’t hang or crash, given either valid or invalid input.
  4. The program keeps running correctly for as long as expected.
  5. The program behaves as specified.
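
As a brief, hypothetical sketch of how the first three criteria translate into concrete checks, consider a trivial function that parses an age value from text; the function and the values are invented purely for illustration.

    # Hypothetical sketch: the first three criteria expressed as executable checks.
    def parse_age(text):
        value = int(text)                     # non-numeric text raises ValueError
        if not (0 <= value <= 150):
            raise ValueError("age out of range")
        return value

    # Criterion 1: valid input produces the correct output.
    assert parse_age("42") == 42

    # Criterion 2: invalid input is rejected correctly and gracefully.
    for bad_input in ("forty-two", "200"):
        try:
            parse_age(bad_input)
        except ValueError:
            pass                              # rejected as expected
        else:
            raise AssertionError("invalid input %r was accepted" % bad_input)

    # Criterion 3: no hang or crash on boundary values.
    for text in ("0", "150"):
        assert isinstance(parse_age(text), int)

    print("all criteria checks passed")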

In addition to verifying that software programs work correctly and without any major defects, the test team, together with quality assurance personnel, seeks to verify that other outputs of the application development life cycle are correct or work as required. These outputs include requirement specifications, development and development support procedures, project documentation, test designs and test procedures, and other outputs specific to a particular development effort. Testing also serves the purpose of finding defects so that they can be fixed. In addition, the test effort helps define “quality” when criteria exist for deciding when software is ready to be deployed.

Once test goals are defined and understood by the test team, these personnel must define more tangible or specific objectives that should be achieved during the test effort. Achievement of the test goals requires satisfaction of the test objectives. Once clear test objectives have been established, the test team needs to outline the test strategies required to attain test objectives. Test strategies include very specific activities that will be performed by the test team. The different test strategies that support the development life cycle are described in further detail in Chapter 7.

As an example of test goal and objective definition, consider the experience of a test engineer named Erika. A major redesign effort was under way for the largest application at her company. The application comprised many parts and incorporated a broad range of functionality. The system was supposed to be redesigned from scratch, but because it was a mission-critical system, it could be redesigned only in small fragments that collectively represented an upgrade of major functionality.

As a result, the system was upgraded incrementally. Each incremental release had its own catchy name, such as Skywalker and Chewy. Erika needed to quantify the test goal in terms that reflected the mission of the application effort. Erika’s test goal could be stated as follows:

• Increase the probability that all applications that make up the final system can be integrated with the existing production system into a working system, while meeting all system requirements and acceptance criteria, by detecting as many defects as possible and managing them to closure.

Her test objectives included the following points:

• Ensure that each incremental application release will meet its specific requirements without any major defects, while producing a repeatable integration process that meets all system requirements.

• Incorporate an automated test tool that can record a baseline of reusable scripts to be played back repeatedly for regression, performance, and stress testing once the new modules have been integrated with the existing operational system.

• Create and maintain a baseline of reusable test scripts to streamline future test efforts.

The specific test objectives can vary from one test phase to the next, so the test engineer needs to ask several questions. What is the test team trying to accomplish during this test phase? What is the purpose of the software being tested? Section 4.1.3 provides further detail on how the concerns of the various test phases can affect the choice of a defect detection strategy.

Test objectives can also vary according to system requirements. Test objectives for a commercial-off-the-shelf (COTS) tool, for example, will differ from the test objectives applicable to an in-house developed system. When testing a COTS tool, a primary objective would be the integration of the COTS tool with the rest of the system, so the corresponding test objective would be based on black-box testing. On the other hand, when testing a homegrown system, the test team will be concerned with the application’s internal workings, which necessitates white-box testing, and with its integration with other systems, which involves black-box testing.

With white-box testing, the individual performing the test can see into the program code and check for defects related to the execution path, coverage, decision points, loops, and logic constructs. Black-box testing focuses on the external behavior of inputs and related outputs, and assesses the software’s ability to satisfy functional requirements. Chapter 7 provides more information on white-box and black-box testing.
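
The following hypothetical sketch makes the distinction concrete for a small function with one decision point: the black-box checks are derived from the stated requirement alone, while the white-box checks are written with knowledge of the branch structure so that both paths and the exact boundary are exercised.

    # Hypothetical example contrasting black-box and white-box test design.
    def shipping_fee(order_total):
        if order_total >= 50:      # decision point: orders of $50 or more ship free
            return 0.0
        return 4.95

    # Black-box checks: the code is treated as opaque; only inputs and outputs matter.
    assert shipping_fee(10.00) == 4.95
    assert shipping_fee(120.00) == 0.0

    # White-box checks: exercise both sides of the decision point and its boundary.
    assert shipping_fee(50.00) == 0.0      # true branch (boundary value)
    assert shipping_fee(49.99) == 4.95     # false branch

    print("black-box and white-box checks passed")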

Finally, test objectives may vary by test phase. A test objective that is pertinent during stress testing may not be relevant during functional requirement testing. During stress testing, the test objective is to establish an environment where multiple machines can access one or multiple servers simultaneously so as to obtain measurements of client and server response times. During functional requirements testing, the test objective is to determine whether the system meets business requirements.
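
As a rough sketch of the stress-testing objective described above, several client threads can exercise a server concurrently while response times are recorded for analysis; the URL, client count, and timeout below are hypothetical placeholders, and a commercial load-testing tool would provide far richer control and reporting.

    # Hypothetical sketch of a simple stress-test driver.
    import threading
    import time
    import urllib.request

    TARGET_URL = "http://testserver.example.com/login"   # hypothetical endpoint
    CLIENTS = 20                                          # simulated concurrent users
    response_times = []
    lock = threading.Lock()

    def one_client():
        start = time.perf_counter()
        try:
            urllib.request.urlopen(TARGET_URL, timeout=30).read()
        except OSError:
            pass                                          # failures analyzed separately
        elapsed = time.perf_counter() - start
        with lock:
            response_times.append(elapsed)

    threads = [threading.Thread(target=one_client) for _ in range(CLIENTS)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    if response_times:
        print("max response time: %.2fs" % max(response_times))
        print("avg response time: %.2fs" % (sum(response_times) / len(response_times)))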

Test objectives should be outlined early in the planning process and need to be clearly defined. It is a common practice to list test objectives within the Introduction section of the test plan. Table 4.2 provides sample test process analysis documentation, which would be reflected within the Introduction of the project test plan. This documentation is generated as a result of the test team’s process review together with its analysis of test goals and objectives.

Table 4.2. Test Process Analysis Documentation


4.1.3 Test Strategies

A multitude of test strategies can be implemented to support defined test goals and objectives. A careful examination of goals, objectives, and constraints should culminate with the identification of a set of systematic test strategies that produce more predictable, higher-quality test results and that support a greater degree of test automation. Test strategies can be lumped into two different categories: defect prevention technologies and defect detection technologies. Table 4.3 lists selected test strategies.

Table 4.3. Test Strategies and Techniques


Defect prevention provides the greatest cost and schedule savings over the duration of the application development effort. Given the complexity of systems and the various human factors involved, defect prevention technologies, by themselves, cannot always prohibit defects from entering into the application-under-test. Therefore defect detection technologies are best applied in combination with defect prevention technologies.

The specific test strategies required on a particular project will depend upon the test goals and objectives defined for that project. The test engineer should therefore review these test objectives and then identify suitable strategies for fulfilling them.

4.1.3.1 Defect Prevention Strategies

The Testing Maturity Model (TMM) developed by the Illinois Institute of Technology lists the highest maturity as level 5: optimization, defect prevention, and quality control [3]. The existence of defect prevention strategies not only reflects a high level of test discipline maturity, but also represents the most cost-beneficial expenditure associated with the entire test effort. The detection of defects early in the development life cycle helps to prevent the migration of errors from requirement specification to design, and from design into code.

Recognition that testing should take place at the earliest stages of the application development process is clearly a break from the general approach to testing pursued over the last several decades. In the past, the test effort was commonly concentrated at the end of the system development life cycle. It focused on testing the executable form of the end product. More recently, the software industry has come to understand that to achieve best system development results, the test effort needs to permeate all steps of the system development life cycle. Table 1.1 on page 8 lists some of the cost savings that are possible with early life-cycle involvement by the test team.

Examination of Constraints

Testing at program conception is intended to verify that the product is feasible and testable. A careful examination of goals and constraints may lead to the selection of an appropriate set of test strategies that will produce a more predictable, higher-quality outcome and support a high degree of automation. Potential constraints may include a short time-to-market schedule for the software product or the limited availability of engineering resources on the project. Other constraints may reflect the fact that a new design process or test tool is being introduced. The test team needs to combine a careful examination of these constraints, which influence the defect prevention technology, with the use of defect detection technologies to derive test strategies that can be applied to a particular application development effort.

Early Test Involvement

The test team needs to be involved in an application development effort from the beginning. Test team involvement is particularly critical during the requirements phase. A report from the Standish Group estimates that a staggering 40% of all software projects fail, while an additional 33% of projects are completed late, over budget, or with reduced functionality. According to the report, only 27% of all software projects are successful. The most significant factors for creating a successful project are requirements-related. They include user involvement, clear business objectives, and well-organized requirements. Requirements-related management issues account for 45% of the factors necessary to ensure project development success [4].

During the requirements definition phase, the test effort should support the achievement of explicit, unambiguous requirements. Test team involvement in the requirements phase also needs to ensure that system requirements are stated in terms that are testable. In this context, the word testable means that given an initial system state and a set of inputs, a test engineer can predict exactly the composition of system outputs.

Appendix A provides a detailed discussion of requirements testing. The test of system requirements should be an integral part of building any system. Software that is based on inaccurate requirements will be unsatisfactory, regardless of the quality of the detailed design documentation or the well-written code that makes up the software modules. Newspapers and magazines are full of stories about catastrophic software failures that ensued from vendor failure to deliver the desired end-user functionality. The stories, however, usually do not indicate that most of the failed systems’ glaring problems can be traced back to wrong, missing, vague, or incomplete requirements. In recent years, the importance of ensuring quality requirements has become more thoroughly understood. Project and product managers now recognize that they must become familiar with ways to implement requirements testing before they launch into the construction of a software solution.

During the design and development phases, test activities are intended to verify that design and development standards are being followed and other problems are avoided. Chapter 3 discusses the various automated testing tools that can be used during these stages.

Use of Standards

Many reasons exist for using standards. Coherent standards development will help prevent defects. Use of standard guidelines also facilitates the detection of defects and improves the maintainability of a software application. Test activities are interdependent, requiring a significant amount of teamwork among project personnel. Teamwork, in turn, requires rules for effective interaction. Standards provide the rules or guidance governing the interaction of the project personnel.

A multitude of different standards have been developed. There are standards for software design, program coding, and graphical user interfaces. There are standards issued by product vendors such as Microsoft, others issued by software industry organizations, and standards promulgated by the U.S. Department of Defense. There are communication protocol standards, safety standards, and many more. In addition, many large companies define and promote their own internal standards.

A software design standard may require the development of structure charts, and a coding standard may require that each software module have a single point of entry and a single exit point. The coding standard may specify that software code employ maximum cohesion (intramodule relationship) and minimum coupling (intermodule relationship). Typically, such standards promote the modularity of code in an effort to create functional independence. In some companies, the test team may take the lead in ensuring that the developers follow software design standards. This approach may be applied when the QA team does not exist or is understaffed (for example, during inspections, unit tests, and system tests). Any particular application development project will likely be bound to conform to a number of development standards. The test team needs to obtain a listing of these standards—perhaps from the project plan, where one exists.
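
As a small, hypothetical illustration of the kind of rule such a coding standard might impose, the function below does one thing (cohesion), depends only on the data passed to it (loose coupling), and returns from a single point of exit.

    # Hypothetical illustration of a coding-standard-compliant function.
    def invoice_total(line_items, tax_rate):
        """Compute the invoice total from (quantity, unit_price) pairs."""
        total = 0.0
        if line_items:                       # empty input handled without a second exit
            subtotal = sum(qty * price for qty, price in line_items)
            total = subtotal * (1.0 + tax_rate)
        return total                         # single point of exit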

The test team may also be responsible for promulgating standards for the performance of testing within the overall organization. It may decide to advocate the use of standard test tools or adopt the ATLM (discussed in Chapter 1) as the organization’s standard test process. Because standards represent methodologies or techniques that have proved successful over time, adherence to such standards supports the development of quality software products. Test activities therefore need to verify that the application-under-test adheres to required standards. Similarly, the development of test procedures should be performed in accordance with a standard.

Inspections and Walkthroughs

Inspections and walkthroughs represent formal evaluation techniques that can be categorized as either defect prevention or defect detection technology, depending upon the particular scope of the activity. The technology of using inspections and walkthroughs is listed as one of the principal software practices by the Airlie Software Council [5].

Walkthroughs and inspections provide for a formal evaluation of software requirements, design, code, and other software work products, such as test procedures and automated test scripts. They entail an exhaustive examination by a person or a group other than the author. Inspections are intended to detect defects, violations of development standards, test procedure issues, and other problems. Walkthroughs address the same work products as inspections, but perform a more cursory review.

Examples of defect prevention activities include walkthroughs and inspections of system requirements and design documentation. These activities are performed to avoid defects that might otherwise crop up later within the application code. When requirements are defined in terms that are testable and correct, errors that would eventually surface as defects in the overall system are prevented from entering the development pipeline. Design walkthroughs can ensure that the design is consistent with defined requirements, conforms to standards and the applicable design methodology, and contains few errors.

Walkthroughs and inspections have several benefits: They support the detection and removal of defects early in the development and test cycle; prevent the migration of defects to later phases of software development; improve quality and productivity; and reduce cost, cycle time, and maintenance efforts. These types of technical reviews and inspections have proved to be the most effective forms of defect detection and removal. As discussed in Chapter 3, technical review management tools are available to automate this process.

Quality Gates

Successful completion of the activities prescribed by the test process (such as walkthroughs and inspections) should be the only approved gateway to the next phase of software development. Figure 4.3 depicts typical quality gates that apply during the testing life cycle. Quality gates also exist throughout the entire development life cycle after each iterative phase.

Figure 4.3. Testing Life Cycle and Quality Gates


The test team needs to verify that the output of any one stage represented in Figure 4.3 is fit to be used as the input for the next stage. Verification that output is satisfactory may be an iterative process; it is accomplished by comparing the output against applicable standards or project-specific criteria.

The results of walkthroughs, inspections, and other test activities should adhere to the levels of quality outlined within the applicable software development standards. Test team compliance with these standards, together with involvement of the test team early in the development life cycle, improves the likelihood that the benefits of defect prevention will be realized.

4.1.3.2 Defect Detection Strategies

Although the defect prevention methodologies described in Section 4.1.3.1 are effective, they cannot always prevent defects from entering into the application-under-test. Applications are very complex, and it is difficult to catch all errors. Defect detection and subsequent removal techniques complement the defect prevention efforts. The two methodologies work hand in hand to increase the probability that the test team will meet its defined test goals and objectives.

Defect detection strategies seek to exercise the application so as to find previously undiscovered errors. Further detection and removal of errors increase the developers’ confidence that the system will satisfy the user requirements and perform correctly. As a result, end users of the system can entrust the operation of their business to the system. In the final analysis, thoroughly tested systems support the needs of end users, reduce the nuisance factors that are associated with misbehaving systems, and reduce the system maintenance cost and effort.

Defect tracking is very important during the defect prevention and detection stages. Everyone involved in the testing and development life cycle must keep track of defects found and maintain a log of defect status. Chapter 7 provides additional details concerning defect tracking.

Inspections and Walkthroughs

As previously noted, the performance of inspections and walkthroughs can be categorized as either defect prevention or defect detection technology, depending upon the particular scope of the activity. Defect prevention activities focus on eliminating errors prior to the start of coding; defect detection activities concentrate on eliminating errors after program coding has begun. Inspections and walkthroughs associated with defect detection include reviews of program code and test procedures.

A code walkthrough consists of a top-level or cursory review of program code to ensure that the code complies with relevant standards. It will typically involve the use of a checklist, which ensures that the most important guidelines addressed within the coding standard are being applied. A code inspection examines the code in more detail by having the programmers narrate their code and orally work through sample test procedures as a group. Similarly, a test procedure walkthrough is performed at a cursory level, while a test procedure inspection involves a more detailed examination of test procedures and test scripts.

As an example of a code inspection, consider the experience of an application developer named Jason, who developed some program code and now must attend his first code inspection. Jason has been informed by the development manager that the code inspection team will review his work in detail, and that Jason will be expected to explain the structure and workings of his code. Jason understands that he must outline the logic of his program, statement by statement. The development manager further explains that the code inspections are conducted with a group of people in attendance to include his programmer peers, a software quality assurance engineer, and a software test engineer. The group will analyze his program by comparing it with a list of common programming errors. During the code inspection, the group will mentally execute test procedures of his program by talking them out. Jason learns that usually three or more people attend a code inspection. The defined roles for the code inspection are as follows:

Moderator. The moderator distributes the program code before the actual meeting, schedules the meeting, leads the meeting, records the meeting minutes, and follows up on any resulting actions.

Programmer. Each programmer is responsible for narrating his or her code from beginning to end.

Test Lead. The test lead should come prepared with several test procedures to talk through with the group.

Peers. Programmers help review code developed by others and provide objective feedback.

Even though the code inspection detects errors in Jason’s program, no solutions to the errors are provided. Instead, the moderator of the meeting records the errors in his program. Jason is informed that corrective actions resulting from the inspection will be gathered by the moderator, who will then pass them to the development manager for final disposition. (Jason will probably end up fixing the coding errors, unless the corrective action involves other areas of the system development life cycle.)

Testing Product Deliverables

Another effective defect detection strategy involves the review of product deliverables. Deliverables consist of the work products resulting from the development effort that include documentation of the system. Documented work products may actually be delivered to the end user or customer, or the work product may represent an internal document deliverable. Product deliverables may include requirement specifications, design documentation, test plans, training material, user manuals, system help manuals, system administration manuals, and system implementation plans. Product deliverables may consist of hard-copy documents or on-line system documentation.

The test team should review product deliverables and document suspected errors. For example, following the review of test procedures, errors relating to requirement issues, implementation issues, and usability issues may be discovered. Product deliverables may be reviewed in isolation or in comparison with other product deliverables. For example, an effective way to detect errors is to compare a user manual against the actual application-under-test. When differences are detected, the test engineer needs to investigate further to ascertain which of the two items is in error.

Designing Testability into the Application

Defect detection becomes more effective when the concern for system testability is factored into the development process. Incorporating testability into the application is a global process that begins at project or product conception. The IEEE Standard Glossary of Software Engineering Terminology defines “testability” as: (1) the degree to which a system or component facilitates the establishment of test criteria and the performance of tests to determine whether those criteria have been met, and (2) the degree to which a requirement is stated in terms that permit establishment of test criteria and performance of tests to determine whether those criteria have been met [6].

Product or project mission statements should be defined in such a way that they can be traced to end-user requirements statements. End-user requirements, in turn, should be concise and clearly understood (see Appendix A for more detail on this point). These requirements should then be transformed into system requirements specifications, which need to be stated in terms that can be tested or verified.

The documentation of the system requirements specifications is often a critical juncture for the development effort. As outlined in Section 1.3, a poor requirements specification is one of the primary culprits in failed projects. When specified requirements are not testable, then neither development personnel nor end users can verify that a delivered system application will properly support the targeted business operation.

Incorporating testability into the system design is important to increase the likelihood of effective development. Structured and object-oriented design methodologies provide a means to structure or modularize design components in the form of structure charts or objects. This transformation facilitates the review of design components. Design documentation that is considered testable uses design components that permit the dissection of output and the analysis of complete threads of functionality. It is more productive to inspect and change graphical design documentation than it is to correct errors in code later, because code-based errors may have migrated there as a result of poor design. Incorporating testability within design documentation reduces the number of logic and structural errors that can be expected within program code. Additionally, program code derived from design documentation that incorporates testability requires less debugging.

Testability must be factored into development strategies as well. If program modules are small and functionally associative (cohesive), they can be developed with fewer internal logic and specification errors. If program modules are linked only by required data (loosely coupled), then fewer opportunities exist for one module to corrupt another. Following these two development strategies reduces the number of severe error types. As a result, the remaining errors within the program modules will consist primarily of syntax and other minor error types, many of which can be caught by a compiler or program analyzer.

Another development strategy that incorporates testability is called application partitioning. In this strategy, applications are separated into several layers, which include the user interface, application-specific logic, business rules (or remote logic), and data access logic. The GUI and application-specific logic should reside on the client; the business rules (remote logic) and data access logic should reside on a server (“a server” here means possibly both database and application servers). The testing advantage of layered application design derives from the fact that it results in far more reusability, easier performance tuning, and expedited and more isolated debugging [7].
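
A minimal sketch of this layered partitioning follows; all class, method, and data names are hypothetical. Because the data access logic, business rules, and application-specific logic live in separate layers, each layer can be tested, tuned, or replaced with a stub in isolation.

    # Hypothetical sketch of application partitioning into layers.

    class CustomerDataAccess:
        """Data access layer: the only layer that knows where the data lives."""
        def __init__(self, records):
            self._records = records          # stand-in for a database connection

        def find(self, customer_id):
            return self._records.get(customer_id)

    class DiscountRules:
        """Business rules layer: pure logic, no I/O, easy to unit test."""
        def __init__(self, data_access):
            self._data = data_access

        def discount_for(self, customer_id):
            customer = self._data.find(customer_id)
            if customer and customer["loyalty_years"] >= 5:
                return 0.10
            return 0.0

    class OrderScreen:
        """Application-specific logic behind the GUI (client side)."""
        def __init__(self, rules):
            self._rules = rules

        def price_label(self, customer_id, list_price):
            discount = self._rules.discount_for(customer_id)
            return "Your price: $%.2f" % (list_price * (1 - discount))

    # The business rules layer is exercised here against a stubbed data layer.
    stub = CustomerDataAccess({1: {"loyalty_years": 7}})
    print(OrderScreen(DiscountRules(stub)).price_label(1, 100.0))   # Your price: $90.00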

Many of the same characteristics that make a program testable also make it maintainable. Following coding standards, for example, represents another way of incorporating testability within the program code. The use of such coding standards reduces the cost and effort associated with software maintenance and testing. For instance, preambles or comments in front of code that describe and document the purpose and organization of a program make the resulting program more readable and testable. Such preambles can become test requirement inputs during the test automation development phase.
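
A hypothetical example of such a preamble appears below; the module name, inputs, and outputs are invented, but the stated purpose and input/output description are exactly the kind of material that can later be lifted into automated test requirements.

    # -------------------------------------------------------------------------
    # Module:   currency_convert.py          (hypothetical example)
    # Purpose:  Convert monetary amounts between currencies using a supplied
    #           rate table; no network access, no side effects.
    # Inputs:   amount (float), source/target currency codes (ISO 4217 strings),
    #           rates (dict mapping currency code -> rate relative to USD)
    # Outputs:  converted amount (float), rounded to 2 decimal places
    # Errors:   raises KeyError if a currency code is not in the rate table
    # -------------------------------------------------------------------------
    def convert(amount, source, target, rates):
        usd_value = amount / rates[source]
        return round(usd_value * rates[target], 2)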

Both application developers and test engineers must bear in mind that the guidelines promulgated within standards should not be overenforced. Even the most mature guidelines sometimes need to be adjusted or waived for a particular development effort. Although the use of standards facilitates later quality reviews and maintainability objectives, standard compliance should not obstruct the overall project goal—that is, creating an effective product.

Custom controls, widgets, or third-party add-on products, whether in Visual Basic or any other programming language, often cause an application to be incompatible with the testing tool, thus decreasing the application’s automated testability. To counter this problem, test engineers should produce a list of the custom controls (provided by the vendor) with which the test tool is compatible. Use of the controls on this list should be made a standard, thereby making developers aware of which controls are compatible with the test tool and ensuring that they use only supported third-party controls. Senior management approval should be required for any exception to the rule of using only third-party controls that the organization’s automated testing tool supports. With such a standard in place, chances are good that the resulting applications will have the same look and feel and that test engineers will not have to continuously create work-arounds for incompatibility problems caused by the use of an unapproved third-party control.

Another testability issue to consider is whether the application-under-test has been installed on the test machines in the exact same way that it will be installed on end-user machines. A good installation procedure, and one that is consistently followed throughout the entire application life cycle, will improve testability.

Use of Automated Test Tools

In addition to the manually intensive review and inspection strategies outlined earlier in this section, automated test tools can be applied to support the defect detection process. Increasingly, software testing must take advantage of test automation tools and techniques if the organization hopes to meet today’s challenging schedules and adequately test its increasingly complex applications.

Code analysis and code testing tools, for example, can generate statistics on code complexity and provide information on what types of unit tests to generate. Code coverage analyzers examine the source code and generate reports, including statistics such as the number of times that a logical branch, path, or function call is executed during a test suite run.
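
To illustrate the kind of information a coverage analyzer gathers, the hypothetical sketch below uses Python's built-in tracing hook to count how many times each line of one function executes during a small "test run." A real coverage analyzer works on the same principle but far more efficiently, and it also reports branch and call-path statistics.

    # Hypothetical sketch of line-coverage measurement with sys.settrace.
    import sys
    from collections import Counter

    line_hits = Counter()

    def tracer(frame, event, arg):
        if event == "line" and frame.f_code.co_name == "classify":
            line_hits[frame.f_lineno] += 1
        return tracer

    def classify(value):
        if value < 0:
            return "negative"
        return "non-negative"

    sys.settrace(tracer)
    classify(5)            # a tiny "test suite" of two calls
    classify(-3)
    sys.settrace(None)

    for lineno, count in sorted(line_hits.items()):
        print("line %d executed %d time(s)" % (lineno, count))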

Test tools are being applied to generate test data, catalog the tests in an organized fashion, execute tests, store test results, and analyze data. (Chapter 3 describes the various test tools now available.) These tools broaden the scope of tests that can be applied to a development effort. Because so many specialty tools and niche testing capabilities are now available, the use of automated test tools itself can be considered a strategy that supports the achievement of test goals and objectives.

Project schedule pressures are another reason why the use of automated test tools should be viewed as a strategy for achieving test goals and objectives. Test automation is the only way that software test engineers can keep pace with application developers and still test each new build with the same level of confidence and certainty as previous builds.

Traditional Testing Phases

The various test techniques that are commonly applied during the traditional test phases represent the most widespread defect detection test strategies. In the book, Software System Testing and Quality Assurance, Boris Beizer identifies three phases of testing: unit, integration, and system testing. Commonly, acceptance testing appears in project schedules as an extension to system testing. During acceptance testing, feedback is gathered from end users for a specified time following the performance of system testing.

Unit testing, also known as module or component testing, involves testing of the smallest unit or block of program code. A software unit is defined as a collection of code segments that make up a module or function.

The purpose of the integration testing is to verify that each software unit interfaces correctly with other software units.

System testing seeks to test all implementation aspects of the system design. It relies on a collection of testing subtypes, including regression, load/stress, volume, and performance testing, among many other types of testing.

These various testing phases and subtypes are described in detail in Chapter 7.

Adherence to a Test Process

The test team’s adherence to a test process can serve as an effective defect detection strategy. Faithful adherence helps to ensure that the requisite activities of an effective test program are properly exercised. Following the required steps in the proper sequence and performing all of the necessary activities guarantees that test resources are applied to the maximum extent within given constraints. The end result is a software application product that is as correct and as responsive to defined requirements as possible, given the limitations of the project schedule and available manpower. Test team adherence to the defined test process makes it possible to identify and refine the defined process, thereby permitting continual process improvement.

As outlined in Section 4.1, the test team’s test goals, objectives, and strategies for a particular test effort should be reflected in the defined test process. These top-level elements of test planning represent the cornerstone on which a project’s test program is developed.

Risk Assessment

An important part of a testing strategy involves risk assessment. The principle here is simple. The test engineer identifies the parts of a project that pose the greatest risk and the functionality that is most likely to cause problems. The test engineer then develops tests for these parts first. Test goals and objectives generally include some considerations of minimizing the risk of failure, where “failure” is defined in terms of cost overruns, schedule slippage, critical software errors, and the like. The test team therefore needs to weigh the risk that system requirements cannot be successfully supported.

Risk assessments should include a determination of the probability that a defined risk will happen, as well as an estimate of the magnitude or impact of the consequence should the risk be realized. Risk mitigation strategies should be defined for those system requirements that are deemed the most critical. Chapters 6 and 8 discuss test planning and test development initiatives that further address risk management.
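
A minimal sketch of this prioritization step follows; the requirements and the probability and impact figures are hypothetical. Risk exposure is computed as probability multiplied by impact, and test development proceeds from the highest exposure downward.

    # Hypothetical sketch of risk-based test prioritization.
    requirements = [
        # (requirement, probability of failure 0-1, impact if it fails 1-10)
        ("Nightly billing batch completes",         0.30, 9),
        ("Password reset e-mail is sent",           0.10, 4),
        ("Report footer shows correct page number", 0.50, 1),
    ]

    ranked = sorted(requirements, key=lambda r: r[1] * r[2], reverse=True)

    for name, probability, impact in ranked:
        print("exposure %.1f  %s" % (probability * impact, name))
    # Tests for the highest-exposure requirements are designed and automated first.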

Strategic Manual and Automated Test Design

An effective test strategy specifies the way in which test design is approached. When tests are designed to be reusable and maintainable, they can be executed repeatedly to identify defects across successive builds. A test design that incorporates critical success functionality is also important in eliminating defects within the most critical component of the software application. This critical component may represent the part of the software that gets the most use or the part that is most important. In either case, the most critical component should be as close to error-free as possible. Test design is discussed in more detail in Chapter 7.

Development of Automated Tests

Another part of an effective test strategy involves the development of automated test development guidelines. If tests are to be uniform, repeatable, maintainable, and effective, the test team must follow its test development guidelines. Chapter 8 discusses examples of automated test development guidelines.

Execution and Management of Automated Tests

The way that automated software test is executed and managed can also serve as a defect detection strategy. Once again, the test team needs to realize that often the project schedule is tight and limited project resources are available for the testing effort. Invoking the proper elements of test execution and performing these elements in the correct manner (that is, with the proper management) help to ensure that the test effort will produce the maximum results. The end result should be a software application product that operates correctly and performs in accordance with defined requirements. Part IV addresses test execution, test script configuration management, defect tracking and reporting, test progress monitoring, and test metrics.

Test Verification Method

The test verification method is another part of a testing strategy. With this strategy, a test qualification method is employed to verify that the application satisfies all system requirements. This method involves the creation of a test verification summary matrix, which outlines the various system requirements and identifies a specific method for testing each one. In this matrix, each system requirement is assigned a testability indicator, otherwise known as a test verification method. Verification methods include demonstration, analysis, inspection, and test. Section 6.2 discusses verification methods.
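
A small, hypothetical sketch of what such a matrix might look like in data form is shown below; the requirement identifiers, wording, and procedure numbers are invented, and each requirement is assigned one of the four verification methods.

    # Hypothetical sketch of a test verification summary matrix.
    verification_matrix = {
        "SR-001 User can log in with valid credentials": ("Test",          "TP-LOGIN-01"),
        "SR-014 95% of queries return within 3 seconds": ("Analysis",      "TP-PERF-02"),
        "SR-027 Source code follows coding standard":    ("Inspection",    None),
        "SR-033 Operator can print the daily summary":   ("Demonstration", None),
    }

    for requirement, (method, procedure) in verification_matrix.items():
        print("%-50s %-14s %s" % (requirement, method, procedure or "-"))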

User Involvement

Test team interaction with prospective end users is likely the most important strategy for ensuring that defect prevention or defect detection becomes incorporated into the test process. As noted earlier, the test team should participate in the requirements phase. Specifically, the test engineers should work closely with end users to ensure that system requirements are stated in testable terms.

User involvement with the test team continues as part of a defect detection strategy. The test team needs to obtain end-user or customer buy-in for the test plan, which outlines the overall testing strategy, and for the test procedures and scripts, which define the actual tests planned. This buy-in consists of end-user concurrence that the test plan and test scripts will satisfy the end user’s system functionality and performance concerns. What better way is there to ensure success of the test effort and to obtain end-user acceptance of the application than to involve the end user through the entire testing life cycle?

This section has introduced many testing strategies that a test team could potentially follow. The test strategies applied to a given project will depend upon the application development environment as well as the test objectives and requirements. A successful, cost-effective testing strategy requires a clear vision of the project goals and the kinds of constraints that may be encountered along the way. Be careful to select a testing strategy that applies to the specific project and one that will be most useful to the test team. Remember, no one solution will fit all situations. Communication and analysis are key considerations in selecting the right mix of test strategies to support the accomplishment of test goals and objectives.

4.2 Test Tool Consideration

As a test engineer seeking to outline a test program in support of the next project and considering the use of automated test tools as part of that effort, you will have followed the course outlined in Chapters 2 and 3 to get started. These two chapters laid out a plan for performing a preliminary assessment intended to determine whether the test team should incorporate the use of automated test tools in a specific project and for identifying the potential benefits inherent in the use of these tools. In Chapter 3, the test team evaluated a host of prospective test tools, finding one or more that support all of the operating environments and most of the GUI languages that apply. Now let’s assume that the next project requiring test support has been identified.

After following the guidance outlined in Section 4.1, the test engineer has reviewed the test process and defined test goals, objectives, and strategies. With this understanding, he or she can decide whether to continue pursuing the use of an automated test tool. Specifically, the test engineer seeks to verify that the previously identified automated test tools will actually work in the environment and effectively meet the system requirements. Figure 4.4 depicts the various steps involved in test tool consideration.

Figure 4.4. Test Tool Introduction Process (Phase 2)—Test Tool Consideration

image

The first step in test tool consideration is to review the system requirements. The test team needs to verify that the automated test tool can support the user environment, computing platform, and product features. If a prototype or part of the application-under-test (AUT) already exists, the test team should ask for an overview of the application. An initial determination of the specific sections of the application that can be supported with automated testing can then be made.

Next, the test schedule should be reviewed. Is there sufficient time left in the schedule or allocated within the schedule to support the introduction of the test tool? Remember, automated testing should ideally be incorporated from the beginning of the development life cycle. The project schedule may need to be adjusted to include enough time to introduce an automated testing tool.

During test tool consideration, the automated test tool should be demonstrated to the new project team, enabling all pertinent individuals to gain an understanding of the tool’s capabilities. Project team personnel in this case should include application developers, test engineers, quality assurance specialists, and configuration management specialists. Remember that software professionals on the project may have a preconceived notion of the capabilities of the test tool, which may not match the tool’s actual application on the project.

If part of the application exists at the time of test tool consideration, conduct a test tool compatibility check. Install the testing tool in conjunction with the application and determine whether the two are compatible. One special concern relates to the availability of memory to support both the application and the automated test tool. Another concern is the compatibility of third-party controls used in the application. If the compatibility check surfaces problems, the test team will need to investigate whether work-around solutions are possible.

The use of automated test tools with a particular application requires the services of a test team that has the appropriate blend of skills needed to support the entire scope of the test effort. Roles and responsibilities of these people need to be clearly defined, and the skills and skill levels of test team personnel need to be considered carefully.

Another element of test tool consideration relates to the need to determine whether the test team has sufficient technical expertise to take advantage of all of the tool’s capabilities. If this technical expertise is not resident within the test team, individuals who can mentor the test team on the advanced features of the test tool might be brought onto the project on a short-term basis. Another possibility is test tool training for all test team personnel. Section 4.2.7 discusses training considerations.

After completing the test tool consideration phase, the test team can perform the final analysis necessary to support a decision of whether to commit to the use of an automated test tool for a given project effort.

4.2.1 Review of Project-Specific System Requirements

Section 3.1 discussed the organization’s systems engineering environment and touched on the overall organization’s requirements. During the test tool consideration phase for a specific project, the test engineer is interested in the specific project’s requirements. Before he or she can effectively incorporate and use an automated test tool on a project, the test engineer needs to understand the requirements for the AUT. Once project-specific system requirements have been set, the test team can be more certain that the test tool will satisfy the particular needs of the specified project.

The test team must understand the user environment, computing platforms, and product features of the application-under-test, collectively referred to as the system architecture. For example, a client-server project can require a DBMS, application builder, network operating system, database middleware, source code control software, installation utility, help compiler, performance modeling tools, electronic distribution software, help-desk call tracking, and possibly systems management software to enforce security and track charge-backs. For a decision support system (perhaps as a replacement for an application builder), a project might also include a DSS front-end tool (such as one geared toward Excel with the Microsoft EIS toolkit) and possibly some local temporary tables to expedite queries from previously downloaded data. For a high-volume, on-line transaction-processing system, the project may use a transaction-processing (TP) monitor and remote procedure call software. For a lookup or browsing-intensive application, the project might use a variety of database back ends and simply create a front-end browser using an integrated shell; alternatively, it might employ data-mining tools [8].

The team then needs to analyze and provide a preliminary determination of whether a particular test tool is compatible with the specific system/software architecture and the system requirements of the AUT. The potential benefits of using the particular test tool for a project should be defined and documented as part of the test plan.

It is critically important that the test team develop an understanding of the primary business tasks addressed by the application. Several questions should be posed. For example, what are the transactions or scenarios that are performed to fulfill the tasks? What sort of data does each transaction or scenario require?

When the test team is involved on a project from the beginning of the development life cycle, it should investigate whether the development team plans to use a tool-incompatible operating environment or tool-incompatible application features, such as incompatible widgets. If such an incompatibility exists, the test team can negotiate recommended changes. If the incompatibility cannot be resolved, the test team can investigate the use of a different test tool or consider abandoning plans for the use of an automated test tool on the project.

The test team should determine the types of database activity (add, delete, change) that will be invoked, and identify when these transactions will occur within the application. It is beneficial to determine the calculations or processing rules provided by the application, as well as the time-critical transactions, functions, and conditions and possible causes of poor performance. Conditions that pose a stress to the system, such as low memory or disk capacity, should be identified. The test team should become familiar with the different configurations employed for the particular application, and should identify additional functions that are called to support the application-under-test. Other concerns include the plan for installing the application-under-test and any user interface standards, rules, and events.
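One way to keep these review questions from getting lost is to record the answers as structured data that feeds directly into test planning. The Python sketch below shows one possible layout; every field value is hypothetical.

```python
# A rough way to capture the application review as structured data so the
# findings feed directly into test planning. All field values are hypothetical.

aut_review = {
    "database_activity": {
        "add":    ["create customer", "open account"],
        "change": ["update address"],
        "delete": ["close account"],
    },
    "processing_rules": ["interest accrual", "overdraft fee calculation"],
    "time_critical_transactions": ["end-of-day batch posting"],
    "stress_conditions": ["low memory", "low disk capacity", "peak concurrent users"],
    "configurations": ["standard workstation / central DB", "laptop / replicated DB"],
    "install_plan": "packaged installer, per-workstation",
    "ui_standards": ["tab-order rules", "mandatory-field highlighting"],
}

# A quick completeness check before moving on to the tool decision.
unanswered = [key for key, value in aut_review.items() if not value]
print("Open review items:", unanswered or "none")
```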

Once the test team has found answers to these questions and addressed all relevant concerns, a decision needs to be made about whether to move forward with the particular test tool. The test team may have some leeway to consider using the selected test tool on a different project. The whole effort associated with reviewing system requirements is meant to verify that a particular automated test tool can support a specific set of test needs and requirements.

When attempting to perform a system requirements review, the team should identify the operational profile and user characteristics of the application-under-test. That is, it needs to understand the frequency of use for each system requirement, as well as the number, type, and knowledge level of the users associated with each requirement. It is also beneficial to understand whether any conditions are attached to each system requirement. Obtaining operational profiles and user characteristics is a very difficult and time-consuming task, and one that requires management approval.

4.2.2 Application-Under-Test Overview

Even if an automated test tool has been adopted as a standard for the organization’s general systems engineering environment, the test team still needs to verify that the particular tool will be compatible with a specific application development effort. If parts of the AUT are already available or a previous release exists, the test team members—if not already familiar—should familiarize themselves with the target application. They should request an overview of the specific application as part of the test tool consideration phase, as noted earlier. This overview may consist of a system prototype when only parts of the system application are available. Indeed, it could take the form of a user interface storyboard, if only a detailed design exists.

Technical aspects of the application now need to be explored. What is the GUI language or development environment? Is the application being developed to operate in a two-tier or a multitier environment? If it is intended for a multitier environment, which middleware is being used? Which database has been selected?

The test team should also review the application to determine which section or part can be supported with an automated test tool. Not all system requirements can be supported via automated testing, and not all system requirements can be supported with a single tool. The best approach to take is to divide and conquer the system requirements. That is, determine which automated test tool can be used for which system requirement or section of the application.

As tools differ and a variety of tests may be performed, the test team needs to include test engineers who have several types of skills. In support of the GUI test, a capture/playback tool, such as Rational’s TestStudio, may be applied. In support of the back-end processing, a server load test tool, such as Rational’s Performance Studio, may prove beneficial. UNIX system requirements may be fulfilled by UNIX shell script test utilities. Performance monitoring tools, such as Landmark TMON, can satisfy database system testing requirements. An automated tool may also support network testing. Chapter 3 offers more details on the various tools available.
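A simple way to document the divide-and-conquer result is a mapping from requirement areas to candidate tool categories, as in the Python sketch below; the requirement areas are invented, and the tool names merely echo the examples mentioned above.

```python
# A sketch of the "divide and conquer" mapping from requirement areas to
# candidate tool categories; the requirement areas are hypothetical, and
# the tool names simply repeat the examples given in the text.

tool_map = {
    "GUI functional tests":        "capture/playback tool (e.g., Rational TestStudio)",
    "Back-end server load":        "load test tool (e.g., Rational Performance Studio)",
    "UNIX batch processing":       "UNIX shell script test utilities",
    "Database performance":        "performance monitor (e.g., Landmark TMON)",
    "Network throughput":          "network test tool",
    "Printed report verification": "manual test (no suitable automation identified)",
}

for area, tool in tool_map.items():
    print(f"{area:28} -> {tool}")
```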

The benefits of this step of the test tool consideration phase include the identification of potential design or technical issues that might arise with the use of the automated test tool. If incompatibility problems emerge during test tool consideration, then work-around solutions can be considered and the test design effort can focus on alternative test procedure approaches.

4.2.3 Review of Project Schedule

As noted earlier, an automated test tool is best introduced at the beginning of the development life cycle. Early introduction assures that lead time is adequate for the test team to become familiar with the tool and its advanced features. Sufficient lead time also ensures that system requirements can be loaded into a test management tool, test design activities can adequately incorporate test tool capabilities, and test procedures and scripts can be generated in time for scheduled test execution.

Beware of development project managers who invite test team participation near the end of the development life cycle! The project manager may be frantic about testing the particular application and desperate to obtain test engineering resources. This scenario presents some glaring problems. The test engineers have not become familiar with the application. The quality of the system requirements may be questionable. No time is available to assess the test design or cross-reference this design to the system requirements. The test team may not be able to determine whether its standard automated test tool is even compatible with the application and environment. Generation of test procedures and scripts may be rushed. Given these and other obstacles to test performance, test team personnel who step in to bail out the project will likely face a significant amount of scrutiny and pressure. It should not come as a great surprise if a few test engineers on the team leave the organization to seek greener pastures.

Clearly, a decision to incorporate automated testing at the end of the development life cycle carries risks. The test team is exposed to the risk of executing a test effort that does not catch enough application defects. On the other hand, it may perform sufficient testing but exceed the allocated schedule, thereby delaying a product from being released on time.

Deploying products on schedule has a multitude of implications. Product market share—and even the organization’s ability to survive—may hang in the balance. As a result, test teams need to review the project schedule and assess whether sufficient time is available or has been allocated to permit the introduction of an automated test tool.

Even when the test team is involved at the beginning of the development life cycle, it is important for team members to make sure that the automated test tools are made available and introduced early. In situations when an automated tool is an afterthought and does not become available until the end of the development process, the test team and project manager may both be in for a surprise. They may discover that insufficient time has been allocated in the schedule for a test process that incorporates the new tool.

When utilizing an automated test tool and exercising an automated testing process, a majority of the test effort will involve test planning, design, and development. The actual performance of tests and the capture of defects account for a much shorter period of time. It is imperative, therefore, that automated testing be introduced in parallel with the development life cycle, as described in Figure 1.3 on page 15.

During the test tool consideration phase, the test team must decide whether the project schedule permits the utilization of an automated test tool and the performance of an automated testing process. It may be able to avoid an embarrassing and painful situation by reviewing and commenting on the feasibility of the project schedule. At a minimum, testing personnel must clearly understand the specific timeframes and associated deadline commitments that are involved. If potential schedule conflicts are minor, they might be resolved with judicious project management. The results of this project review—including identification of issues, updated schedules, and test design implications—should be documented within the test plan.

Without a review of the project schedule, the test team may find itself in a no-win situation, where it is asked to do too much in too little time. This kind of situation can create animosity between the test team and other project personnel, such as the development team, who are responsible for the development and release of the software application. In the heated exchanges and blame game that inevitably follow, the automated test tool becomes the scapegoat. As a result of such a bad situation, the automated test tool may develop a poor reputation within the organization and become shelfware. The organization may then abandon the use of the particular test tool, perhaps even shunning automated testing in general.

4.2.4 Test Tool Compatibility Check

In Section 4.2.2, the usefulness of an application overview was discussed. During the application overview exercise, the test team may have identified some compatibility issues. Nevertheless, it still needs to conduct a hands-on compatibility check, provided that a part of the application or a prototype exists, to verify that the test tool will support the AUT.

In particular, the test group needs to install the test tool on a workstation with the application resident and perform compatibility checks. It should verify that all application software components that are incorporated within or are used in conjunction with the application are included as part of the compatibility check. Application add-ins such as widgets and other third-party controls should come from the organization’s approved list. Most vendors will provide a list of the controls that are compatible with the particular tool.

Some tools can be customized to recognize a third-party control. In this situation, the vendor should provide a list of compatible controls together with instructions on how to customize the tool to recognize the third-party control. Depending on these requirements, the test engineer might want to verify that the resulting tool is still compatible with the application by conducting the following tests (a minimal harness sketch follows the list):

  1. Presentation layer calling the database server
  2. Presentation layer calling the functionality server
  3. End-to-end testing (presentation calling functionality server, calling database server)
  4. Functionality server calling another functionality server
  5. Functionality server calling the database server
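The Python sketch below shows one minimal way to organize and record the outcome of these five checks; the check functions are empty placeholders, since the real work of driving the tool against each tier depends entirely on the tool and application at hand.

```python
# A minimal harness sketch for recording the compatibility checks listed above.
# The check functions are placeholders; in practice each would drive the test
# tool against the installed application tier by tier.

def check(name: str, fn) -> bool:
    try:
        fn()
        print(f"PASS  {name}")
        return True
    except Exception as exc:              # record the failure for work-around analysis
        print(f"FAIL  {name}: {exc}")
        return False

def presentation_to_db(): ...             # hypothetical: GUI script exercises database server
def presentation_to_functional(): ...     # hypothetical: GUI script exercises functionality server
def end_to_end(): ...                     # hypothetical: presentation -> functionality -> database
def functional_to_functional(): ...       # hypothetical: server-to-server call
def functional_to_db(): ...               # hypothetical: functionality server exercises database

results = [
    check("1. Presentation layer -> database server", presentation_to_db),
    check("2. Presentation layer -> functionality server", presentation_to_functional),
    check("3. End-to-end", end_to_end),
    check("4. Functionality server -> functionality server", functional_to_functional),
    check("5. Functionality server -> database server", functional_to_db),
]

print("Compatible" if all(results) else "Investigate work-around solutions")
```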

Beware of the vendor that does not offer full disclosure for its tool. Gather information on industry acceptance of the vendor and its tools.

When tool conflicts occur with the application, work-around solutions should be investigated. The potential requirement for these solutions is another reason to introduce the automated test tool early in the development life cycle. When time permits, the test team might install and test the test tool on a variety of operating systems that the tool supposedly supports.

In addition, the test team should verify that it has access to the application’s internal workings, such as hidden APIs and protocols.

The compatibility check offers another benefit. Once the test tool has been installed, the test team can verify that the targeted hardware platform is compatible with the tool. For example, the test engineer can verify that disk space and memory are sufficient. The results of the compatibility check should be documented within the test plan.
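For the hardware portion of the check, even a small script can confirm that the platform meets the tool’s stated resource requirements before installation proceeds. The Python sketch below assumes hypothetical disk and memory thresholds; substitute the figures published by the tool vendor.

```python
# A small sketch for the hardware-platform portion of the compatibility check:
# confirm free disk space and physical memory before installing the tool.
# The thresholds are hypothetical; use the tool vendor's stated requirements.

import shutil

MIN_FREE_DISK_MB = 500     # assumed tool requirement
MIN_TOTAL_RAM_MB = 256     # assumed tool requirement

free_disk_mb = shutil.disk_usage("/").free // (1024 * 1024)

try:
    import psutil                      # third-party package; install separately if used
    total_ram_mb = psutil.virtual_memory().total // (1024 * 1024)
except ImportError:
    total_ram_mb = None                # fall back to a manual check

print(f"Free disk: {free_disk_mb} MB (need {MIN_FREE_DISK_MB} MB)")
if total_ram_mb is not None:
    print(f"Total RAM: {total_ram_mb} MB (need {MIN_TOTAL_RAM_MB} MB)")

if free_disk_mb < MIN_FREE_DISK_MB:
    print("Insufficient disk space for the test tool -- document in the test plan")
```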

4.2.5 Demonstration of the Tool to the Project Team

It is important to ensure project-wide understanding and acceptance of the automated test tool. Such understanding and acceptance may prove valuable later, should the test team encounter difficulties during the test effort. When other project personnel, such as managers and application developers, understand the usefulness and value of the automated test tool, they are generally more patient and supportive when problems arise.

A demonstration of the proposed test tool or tools can help to win such support. It is especially useful when the tool will perform developmental testing during the unit test phase. Tools that may be employed in the unit test phase include code coverage, memory leakage, and capture/playback test tools, among others.

It is valuable for project personnel to see the tool first-hand. A demonstration will provide them with a mental framework for how the tool works and how it can be applied to support the needs of the project. Without such a demonstration, people may confuse other tools with the particular test tool being applied to the effort or fail to fully grasp its capabilities.

Some people expect a test tool to do everything from designing the test procedures to executing them. It is important that members of the project team understand both the capabilities and the limitations of the test tool. Following the test tool demonstration, the test team can gauge the level of acceptance by reviewing any issues raised and noting the tone of the discussion about the tool. The test group then must decide whether to continue with the demonstrated test tool or to pursue another tool. If it chooses to pursue another tool, the test team should document the reasons for abandoning the demonstrated test tool.

Project managers and other software professionals often assume that a testing tool necessarily automates and speeds up the entire test effort. In fact, automated test scripts generally reduce the required timeframe for test execution and really pay off on the second release of software, where scripts can be reused—but not during initial setup of test scripts with the initial release. Automated test scripts may also provide benefits with the initial release when performing load testing and regression testing. Further information about the benefits of automated testing is provided in Chapter 2.
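A back-of-the-envelope calculation illustrates why the payoff usually arrives with later releases. In the Python sketch below, all hour figures are invented; with these assumptions, automation falls behind on the first release (because of script development) and pulls ahead from the second release onward.

```python
# A back-of-the-envelope sketch of why automation tends to pay off on later
# releases rather than the first one. All hour figures are invented.

manual_hours_per_release = 400        # hypothetical: execute the suite by hand
automation_setup_hours = 600          # hypothetical: one-time script development
automated_hours_per_release = 80      # hypothetical: maintain and rerun the scripts

manual_total = automated_total = 0
for release in range(1, 5):
    manual_total += manual_hours_per_release
    automated_total += automated_hours_per_release
    if release == 1:
        automated_total += automation_setup_hours
    status = "automation ahead" if automated_total < manual_total else "manual ahead"
    print(f"Release {release}: manual={manual_total}h  automated={automated_total}h  ({status})")
```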

4.2.6 Test Tool Support Profile

An important factor to consider when making a final determination on a test tool is the availability of test team personnel who have sufficient experience with the tool to plan, prepare, and execute testing. Another question to pose is whether any individual on the test team has enough experience with the tool to leverage the tool’s more advanced features. The test team may have received a slick test tool demonstration by tool professionals during a marketing demonstration, and the tool may have looked easy to use on the surface, but once heavily into the test process the team may discover that its skills with the tool are insufficient.

The test team manager should map out a staffing profile of each of the team members to ascertain the team’s strengths and weaknesses. Upon review of this staffing profile, the manager can determine whether there is sufficient coverage for all system requirements areas as well as sufficient experience to justify the use of the automated test tool. On a team of five test engineers, for example, all should have general experience with the tool or at least have had introductory-level training on the test tool and its capabilities.

On a five-person test team, at least one test engineer should be able to provide leadership in the area of test automation. This test automation leader should have advanced experience with the tool, have attended an advanced training course on the tool, or have a software programming background. Skills that are generally beneficial for the automated test engineer include familiarity with SQL, C, UNIX, MS Access, and Visual Basic. Preferably, the test team would include two test engineers with advanced skills, in the event that the one advanced-skill test engineer had to leave the project for some reason.

Table 4.4 provides a sample test tool support profile. In this example, the test team is considering the use of Rational’s TestStudio. The second column indicates whether the test team member has at least some experience with Rational’s TestStudio or a similar kind of test tool. The third and fourth columns indicate whether the individual has had advanced training in Rational’s TestStudio or advanced experience with other, similar test tools. If team members have sufficient experience with the automated test tool, then they can proceed to adopt the test tool. If not, the test team manager can entertain work-around solutions, such as general or advanced training to shore up the team’s capability with the tool. Another option is to hire a part-time consultant to mentor the test team through test design, development, and execution.

Table 4.4. Test Tool Support Profile

image
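The profile in Table 4.4 can also be tallied programmatically to highlight coverage gaps, as in the Python sketch below; the engineer names and skill flags are invented for illustration.

```python
# A sketch of how the support profile in Table 4.4 might be tallied to spot gaps.
# Names and skill flags are hypothetical.

from dataclasses import dataclass

@dataclass
class Engineer:
    name: str
    tool_experience: bool        # some experience with TestStudio or a similar tool
    advanced_training: bool      # advanced TestStudio training
    advanced_experience: bool    # advanced experience with other similar tools

team = [
    Engineer("Engineer A", True,  True,  False),
    Engineer("Engineer B", True,  False, True),
    Engineer("Engineer C", True,  False, False),
    Engineer("Engineer D", False, False, False),
    Engineer("Engineer E", True,  False, False),
]

with_basics = sum(e.tool_experience for e in team)
leads = [e.name for e in team if e.advanced_training or e.advanced_experience]

print(f"{with_basics}/{len(team)} engineers have at least basic tool experience")
print(f"Potential automation leads: {leads or 'none identified'}")
if len(leads) < 2:
    print("Fewer than two advanced engineers -- consider training or a part-time mentor")
```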

If the test team is not able to place personnel experienced with the designated test tool on the project and the right levels of training are not available, the test team will need to consider either an alternative test tool or an alternative test method, such as manual testing.

It may be worthwhile to document the desirable skills for test team members within the application’s test plan, then ensure that the staffing profile matches the desired skill set. Roles and responsibilities of the test team are further defined in Chapter 5.

4.2.7 Review of Training Requirements

The test tool consideration phase also must account for the need to incorporate training into the project schedule. To determine whether such training may be necessary, the manager may review the test tool support profile, such as the one depicted in Table 4.4.

It is important to develop test process expertise among those individuals involved in software test. It is not sufficient for the test team to have a well-defined test process; the test engineers must also be familiar with and use the process. Likewise, test team members should develop expertise in one or more automated test tools through training, knowledge transfer achieved as a result of attending user group meetings, membership in a user group Web site, or participation in a software quality assurance organization.

Test teams that find themselves short of needed experience may become frustrated and actually abandon the use of the automated test tool so as to achieve short-term gains in test progress, only to experience a protracted test schedule during actual and regression testing. Often the tool draws the blame for inadequate testing, when in reality the test process was not followed closely or was nonexistent.

Chapter Summary

• How test teams introduce an automated software test tool into a new project is nearly as important as the selection of the appropriate test tool for the project.

• The purpose of analyzing the organization’s test process, which is required as part of the test analysis phase, is to ascertain the test goals, objectives, and strategies that may be inherent in the test process.

• Test process analysis documentation is generated through the test team’s process review and its analysis of test goals and objectives. This documentation outlines test goals, test objectives, and test strategies for a particular effort. It is a common practice to include this information within the Introduction section of the project’s test plan.

• Test strategies can be classified into two different categories: defect prevention technologies and defect detection technologies. Defect prevention provides the greatest cost and schedule savings over the duration of the application development effort.

• Defect prevention methodologies cannot always prevent defects from entering into the application-under-test, because applications are very complex and it is impossible to catch all errors. Defect detection techniques complement defect prevention efforts, and the two methodologies work hand in hand to increase the probability that the test team will meet its defined test goals and objectives.

• Unit testing is often performed by the developer of the unit or module. This approach may be problematic because the developer may not have an objective viewpoint when testing his or her own product.

• The test strategies that apply to a particular project depend on the application development environment as well as the test objectives and requirements.

• One of the most significant steps in the test tool consideration phase requires that the test team decide whether the project schedule will permit the appropriate utilization of an automated test tool and whether an automated testing process can offer value under the particular circumstances.

• The compatibility check is intended to ensure that the application will work with the automated testing tool and, where problems exist, to investigate work-around solutions.

• The test team manager must determine whether the test team has sufficient skills to support the adoption of the automated test tool.

References

1. Kuhn, T. The Structure of Scientific Revolutions, Foundations of the Unity of Science, Vol. II. Chicago: University of Chicago, 1970.

2. Florac, W.A., Park, R.E., Carleton, A.D. Practical Software Measurement: Measuring for Process Management and Improvement. Guidebook CMU/SEI-97-HB-003. Pittsburgh, PA: Software Engineering Institute, Carnegie Mellon University, April 1997.

3. Burnstein, I., Suwanassart, T., Carlson, C.R. Developing a Testing Maturity Model, Part II. Chicago: Illinois Institute of Technology, 1996.

4. Standish Group. http://www.standishgroup.com/chaos.html.

5. A Best Practices initiative created by the U.S. Department of Defense in late 1994 brought together a group of consultants and advisors, dubbed the Airlie Software Council after its meeting place in the Virginia countryside. Yourdon, E. “The Concept of Software Best Practices.” http://www.yourdon.com/articles/BestPractice.html.

6. Voas, J., and Miller, K. “Software Testability: The New Verification.” IEEE Software, May 1995, p. 3. http://www.rstcorp.com/papers/chrono-1995.html.

7. Corporate Computing Inc. Corporate Computing’s Top Ten Performance Modeling Tips. Monroe, LA, 1994.

8. Corporate Computing Inc. Corporate Computing’s Test to Evaluate Client/Server Expertise. Monroe, LA, 1994.
