Chapter 4. Selecting and Installing Software Testing Tools

A tool can be defined as “anything that serves as a means to get something done.” It is important to recognize that you first must determine what that something is before acquiring a tool. Chapter 3 discussed the concept of a work process (the means for accomplishing a testing objective). Within the work process would be one or more tools to accomplish the objective. For example, in developing scripts, one might wish to use a capture/playback tool.

This chapter describes the relationship between tools and work processes. The chapter then explains the steps involved in selecting and installing a tool, as well as creating a toolbox for testers. Finally, the chapter proposes how to train testing staff in the use of tools, as well as designate a tool manager to provide testers the support they need in using those tools.

Integrating Tools into the Tester’s Work Processes

It is important to recognize the relationship between a tool and a technique. A technique is a procedure for performing an operation; a tool is anything that serves as a means to get something done. Let’s look at a non-testing example. If you want to attach two pieces of wood together, you might choose a nail as the means for accomplishing that bonding process. Joining the two pieces of wood together is a technique for building an object; a nail is a tool used to join two pieces of wood together. A technique for inserting the nail into the two pieces of wood might be a swinging motion hitting the nail on the head; a hammer is a tool that would help that technique.

Stress testing is a technique that a software tester might use to validate that software can process large volumes of data. Tools that would be helpful in stress testing software might include a test data generator or a capture/playback tool for using and re-using large amounts of test data.
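To make the tool/technique distinction concrete, here is a minimal sketch in Python of what a test data generator contributes to a stress test: given a few parameters, it produces transactions in volume. The record fields and parameter names are hypothetical illustrations, not the interface of any particular vendor tool.

```python
import random
import string

# Minimal sketch of a test data generator supporting a stress test.
# The record fields and parameters are hypothetical illustrations.
def generate_transactions(count, amount_range=(0.01, 9999.99)):
    """Yield `count` synthetic transactions built from simple parameters."""
    for i in range(count):
        yield {
            "transaction_id": i,
            "account": "".join(random.choices(string.digits, k=8)),
            "amount": round(random.uniform(*amount_range), 2),
        }

# A stress test is about volume: drive a large number of generated
# transactions through the system under test.
for txn in generate_transactions(1_000_000):
    pass  # submit txn to the system under test here
```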

Although software testing techniques are limited, software tools are almost unlimited. Testers can select a variety of software tools to accomplish any specific software testing technique, just as a carpenter could use tools such as nails, screws, or glue to fasten two pieces of wood together.

Note

This chapter will not discuss specific vendor tools. There are too many operating platforms and too many vendor tools to identify and describe them effectively in this book. Search the Web for “software testing tools” and you will find a variety of sources identifying what is currently available in the marketplace.

It is important that tools be integrated into the software tester’s work processes. The use of tools should always be mandatory. This does not mean that an individual tester may not select among several tools to accomplish a specific task, but rather that the process should identify specific tools or provide the tester a choice of tools to accomplish a specific work task. However, for that work task, the tester must use one of the tools specified in the work process.
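As an illustration of how a work process might designate tools, the designation can be as simple as a lookup from work task to the approved options, from which the tester must choose. The task names and tool choices below are hypothetical, not a prescription.

```python
# Hypothetical sketch: a work process names the approved tool(s) for
# each testing task; the tester must pick one of the listed options.
APPROVED_TOOLS = {
    "script development": ["capture/playback", "test data generator"],
    "defect tracking": ["spreadsheet"],
    "regression testing": ["capture/playback"],
}

def tools_for_task(task):
    """Return the approved tools for a task; fail loudly if none defined."""
    if task not in APPROVED_TOOLS:
        raise ValueError(f"no approved tool defined for task: {task!r}")
    return APPROVED_TOOLS[task]

print(tools_for_task("script development"))
# ['capture/playback', 'test data generator']
```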

Tools Available for Testing Software

This section is designed to cause you to think “outside of the box” regarding tools available for software testing. When the concept of the software testing tool is discussed, many testers think of automated tools provided by vendors of testing software. However, there are many manual tools available that can aid significantly in testing software (for example, code inspections).

The objective of this discussion is to categorize the tools used by testers into generic categories. A test script, for example, is a means for accomplishing some aspect of software testing. There are both manual tools to help you create scripts, such as building use cases, as well as automated tools that can both generate and execute a test script.

Testing tools are the aids used by individuals with testing responsibility to fulfill that responsibility. The tools cover a wide range of activities and are applicable in all phases of the systems development life cycle. Some of the tools are manual, some automated; some perform static tests, others dynamic; some evaluate the system structure, and others, the system function.

The skill required to use the tools and the cost of executing them vary significantly. Some tools are highly technical and require in-depth knowledge of computer programming and the system being tested. Others are general in nature and useful to almost anyone with testing responsibilities. Some involve only a short expenditure of man-hours, whereas others must be used by a team and make heavy use of computer resources in the test process.

The following is a list of the more common testing tools:

  • Boundary value analysis. A method of dividing application systems into segments so that testing can occur within the boundaries of those segments. The concept complements top-down system design. (In its common input-domain form, it also targets values at the edges of valid input ranges; see the sketch following this list.)

  • Capture/playback. A technique that enables you to capture test data and results, and then play them back for future tests.

  • Cause-effect graphing. Attempts to show the effect of each test event processed. The purpose is to categorize tests by the effect that will occur as a result of testing. This should reduce the number of test conditions by eliminating the need for multiple test events that all produce the same effects.

  • Checklist. A series of probing questions designed to review a predetermined area or function.

  • Code comparison. Identifies differences between two versions of the same program. You can use this tool with either object or source code.

  • Compiler-based analysis. Utilizes the diagnostics produced by a compiler or diagnostic routines added to a compiler to identify program defects during the compilation of the program.

  • Confirmation/examination. Verifies the correctness of many aspects of the system by contacting third parties, such as users, or examining a document to verify that it exists.

  • Control flow analysis. Requires the development of a graphic representation of a program to analyze the branch logic within the program to identify logic problems.

  • Correctness proof. Involves developing a set of statements or hypotheses that define the correctness of processing. These hypotheses are then tested to determine whether the application system performs processing in accordance with these statements.

  • Data dictionary. A documentation tool for recording data elements and their attributes that, under some implementations, can produce test data to validate the system’s data edits.

  • Data flow analysis. A method of ensuring that the data used by the program has been properly defined, and that the defined data is properly used.

  • Database. A repository of data collected for testing or about testing that can be summarized, re-sequenced, and analyzed for test purposes.

  • Design-based functional testing. Recognizes that functions within an application system are necessary to support the requirements. This process identifies those design-based functions for test purposes.

  • Design reviews. Reviews conducted during the systems development process, normally in accordance with the systems development methodology. The primary objective of design reviews is to ensure compliance with the design methodology.

  • Desk checking. Reviews by the originator of the requirements, design, or program as a check on the work performed by that individual.

  • Disaster test. A procedure that predetermines a disaster as a basis for testing the recovery process. The test group then causes or simulates the disaster as a basis for testing the procedures and training for the recovery process.

  • Error guessing. Uses the experience or judgment of people to predict the most probable errors and then tests to ensure that the system can handle those conditions.

  • Executable specs. Requires system specifications to be written in a language that can be compiled into a testable program. The compiled specs have less detail and precision than the final implemented programs will, but they are sufficient to evaluate the completeness and proper functioning of the specifications.

  • Fact finding. An investigative process for obtaining the information needed to conduct a test or to verify the correctness of documented information about a predetermined condition.

  • Flowchart. Graphically represents the system and/or program flow in order to evaluate the completeness of the requirements, design, or program specifications.

  • Inspections. A highly structured step-by-step review of the deliverables produced by each phase of the systems development life cycle in order to identify potential defects.

  • Instrumentation. The use of monitors and/or counters to determine the frequency with which predetermined events occur.

  • Integrated test facility. A concept that permits the introduction of test data into a production environment so that applications can be tested at the same time they are running in production. The concept permits testing the accumulation of data over many iterations of the process, and facilitates intersystem testing.

  • Mapping. A process that analyzes which parts of a computer program are exercised during the test and how frequently each statement or routine in a program is executed. This can be used to detect system flaws, determine how much of a program is executed during testing, and identify areas where more efficient code may reduce execution time.

  • Modeling. A method of simulating the functioning of the application system and/or its environment to determine if the design specifications will achieve the system objectives.

  • Parallel operation. Runs both the old and new version within the same time frame in order to identify differences between the two processes. The tool is most effective when there is minimal change between the old and new processing versions of the system.

  • Parallel simulation. Develops a less precise version of a segment of a computer system in order to determine whether the results produced by the test are reasonable. This tool is effective when used with large volumes of data to automatically determine the correctness of processing results. Normally, this tool only approximates actual processing.

  • Peer review. A review process that uses peers to review the aspect of the systems development life cycle with which they are most familiar. Typically, the peer review assesses compliance with standards, procedures, guidelines, and the use of good practices, as opposed to the efficiency, effectiveness, and economy of the design and implementation.

  • Ratios/relationships. Quantitative analysis that enables testers to draw conclusions about the reasonableness of some aspect of the software. For example, in test planning, testers may want to compare the proposed test budget to the number of function points being tested.

  • Risk matrix. Tests the adequacy of controls through the identification of risks and the controls implemented in each part of the application system to reduce those risks to a level acceptable to the user.

  • Scoring. A method of determining which aspects of the application system should be tested by rating the applicability of problem criteria to the application being tested. The process can be used to determine the degree of testing (for example, high-risk systems would be subject to more tests than low-risk systems) or to identify the areas within the application that need the most testing.

  • Snapshot. A method of printing the status of computer memory at predetermined points during processing. Computer memory can be printed when specific instructions are executed or when data with specific attributes are processed.

  • Symbolic execution. Permits the testing of programs without test data. The symbolic execution of a program results in an expression that can be used to evaluate the completeness of the programming logic.

  • System logs. Uses information collected during the operation of a computer system to analyze how well the system performed. System logs are produced by operating software such as database management systems, operating systems, and job accounting systems.

  • Test data. System transactions that are created for the purpose of testing the application system.

  • Test data generator. Software systems that can be used to automatically generate test data for test purposes. Frequently, these generators require only parameters of the data element values in order to generate large amounts of test transactions.

  • Test scripts. A sequential series of actions that a user of an automated system would enter to validate the correctness of software processing.

  • Tracing. A representation of the paths followed by computer programs as they process data or the paths followed in a database to locate one or more pieces of data used to produce a logical record for processing.

  • Use cases. Test transactions that focus on how users will use the software in an operational environment.

  • Utility programs. General-purpose software packages that can be used to test an application system. The most valuable utilities are those that analyze or list data files.

  • Walkthroughs. A process that asks the programmer or analyst to explain the application system to a test team, typically by using a simulation of the execution of the application system. The objective of the walkthrough is to provide a basis for questioning by the test team to identify defects.
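As one example of how an item from this list turns into concrete test input: in its common input-domain form (a narrower use than the segment-oriented definition given above), boundary value analysis selects values at and just around the edges of a valid range. A minimal sketch follows, with a hypothetical field range.

```python
# Boundary value analysis sketch for a numeric field valid in [low, high]:
# test at each boundary, just inside it, and just outside it.
# The field and its range are hypothetical illustrations.
def boundary_values(low, high):
    """Return classic boundary test values for an inclusive range."""
    return [low - 1, low, low + 1, high - 1, high, high + 1]

print(boundary_values(1, 100))
# [0, 1, 2, 99, 100, 101]
```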

Selecting and Using Test Tools

This chapter presents an extensive array of tools for systems testing. Many of these tools have not been widely used. The principal reasons for this include: 1) specialized use (simulation); 2) the high cost of their use (symbolic execution); and 3) their unproven applicability (correctness proof). Many of these tools represent the state of the art and are in areas where research is continuing. However, this should not prevent organizations from experimenting with some of the newer test concepts. The tools attracting the most interest and activity at present include automated test support systems (capture/playback) and automated analysis (compiler-based analysis).

As better tools are developed for testing during the requirements and design phases, an increase in automatic analysis becomes possible. In addition, more sophisticated analysis tools are being applied to the code during construction. More complete control and automation of actual test execution, both in generating the test cases and in managing the testing process and its results, are also taking place.

It is important that testing occur throughout the software development life cycle. One reason for the great success of disciplined manual techniques is their uniform applicability across the requirements, design, and coding phases. These tools can be used without massive capital expenditure. However, to be most effective they require a serious commitment and a disciplined application. Careful planning, clearly stated testing objectives, precisely defined tools, good management, organized record keeping, and a strong commitment are critical to successful testing. A disciplined approach must be followed during both planning and execution of the testing activities.

An integral part of this process is the selection of the appropriate testing tool. The following four steps are involved in selecting the appropriate testing tool (a code sketch of the sequence follows the list):

  1. Match the tool to its use.

  2. Select a tool appropriate to its life cycle phase.

  3. Match the tool to the tester’s skill level.

  4. Select an affordable tool.
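The sketch below models the four steps as successive filters over a candidate catalog. The catalog entries and attribute names are hypothetical illustrations, not a transcription of this chapter’s tables.

```python
# Hypothetical tool catalog: each entry lists the uses it supports,
# the SDLC phases where it is effective, the skill it requires,
# and a rough cost category. Entries are illustrative only.
CATALOG = [
    {"name": "capture/playback", "uses": {"regression", "stress"},
     "phases": {"test", "maintenance"}, "skill": "programmer", "cost": "medium"},
    {"name": "checklist", "uses": {"review"},
     "phases": {"requirements", "design", "test"}, "skill": "user", "cost": "low"},
    {"name": "symbolic execution", "uses": {"path analysis"},
     "phases": {"program"}, "skill": "programmer", "cost": "high"},
]

def select_tools(use, phase, skills, budget):
    """Apply the four selection steps in order; may return an empty list."""
    candidates = [t for t in CATALOG if use in t["uses"]]          # 1. match use
    candidates = [t for t in candidates if phase in t["phases"]]   # 2. match phase
    candidates = [t for t in candidates if t["skill"] in skills]   # 3. match skill
    candidates = [t for t in candidates if t["cost"] in budget]    # 4. affordability
    return [t["name"] for t in candidates]

print(select_tools("regression", "test", {"programmer"}, {"low", "medium"}))
# ['capture/playback']
```

Note that the filters are applied in the order the steps prescribe, and the result may legitimately be empty; that situation is discussed at the end of this section.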

Matching the Tool to Its Use

The better a tool is suited to its task, the more efficient the test process will be. The wrong tool not only decreases the efficiency of testing, it may prevent testers from achieving their objectives. The test objective can be a specific task in executing tests, such as using an Excel spreadsheet to track defects, or accomplishing a testing technique, such as stress testing with a capture/playback tool.
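For the defect-tracking example, the “tool” can be as simple as a CSV file that a spreadsheet opens directly. Here is a minimal sketch; the column names are hypothetical illustrations.

```python
import csv
import os

# Minimal defect log kept as a CSV file that a spreadsheet can open.
# The column names are hypothetical illustrations.
FIELDS = ["id", "summary", "severity", "status"]

def log_defect(path, defect):
    """Append one defect record, writing the header row on first use."""
    write_header = not os.path.exists(path) or os.path.getsize(path) == 0
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if write_header:
            writer.writeheader()
        writer.writerow(defect)

log_defect("defects.csv", {"id": 1, "summary": "total overflows at high volume",
                           "severity": "high", "status": "open"})
```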

The objective for using a tool should be integrated into the process in which the tool is to be incorporated. Again, the tool is the means to accomplish a test objective. When test processes are developed, a decision is made as to whether a specific task should be performed manually or whether it can be more effectively and efficiently performed using a test tool. The test process comes first, the test tool second.

In some instances, an IT testing organization will become aware of a testing tool that offers an opportunity to do more effective testing than is currently being performed. It may be necessary to modify the test process to incorporate the capabilities of the new tool. In this instance, the tool will help determine the process. What is important is that the tool is integrated into the process and not used externally to the process at the discretion of the tester.

As test processes are continually improved, new tools will be integrated into the process. The search for and analysis of available tools is a continuous process. The objective is to improve the testing process by incorporating more effective and efficient tools.

You can use Work Paper 4-1 to identify the tools that will be considered for selection. Note that this Work Paper does not contain all the tools that might be considered.

Work Paper 4-1. Selecting Tools

For each tool listed below, indicate in the “Include in Tester’s Toolbox?” column whether the tool will be considered for selection (Yes or No).

  • Boundary value analysis. Divides the system top down into logical segments and then limits testing to within the boundaries of each segment.
  • Capture/playback. Captures transactions from the testing process for re-use in future tests.
  • Cause-effect graphing. Limits the number of test transactions by determining which of the variable conditions pose minimal risk based on system actions.
  • Checklist. Provides a series of questions designed to probe potential system problem areas.
  • Code comparison. Compares two versions of the same program in order to identify differences between them.
  • Compiler-based analysis. Detects errors during the program-compilation process.
  • Confirmation/examination. Verifies that a condition has or has not occurred.
  • Control flow analysis. Identifies processing inconsistencies, such as routines with no entry point, potentially unending loops, and branches into the middle of a routine.
  • Correctness proof. Requires a proof hypothesis to be defined and then used to evaluate the correctness of the system.
  • Data dictionary. Generates test data to verify data validation programs, based on the data contained in the dictionary.
  • Data flow analysis. Identifies defined data that is not used and used data that is not defined.
  • Database. Serves as a repository for collecting information for or about testing, for later use and analysis.
  • Design-based functional testing. Evaluates functions attributable to the design process, as opposed to the stated requirements.
  • Design reviews. Requires reviews at predetermined points throughout systems development in order to examine progress and ensure the development process is followed.
  • Desk checking. Provides an evaluation by the programmer or analyst of the propriety of program logic after the program is coded or the system is designed.
  • Disaster test. Simulates an operational or systems failure to determine whether the system can be correctly recovered after the failure.
  • Error guessing. Relies on the experience of testers and the organization’s history of problems to create test transactions that have a high probability of detecting an error.
  • Executable specs. Provides a high-level interpretation of the system specs in order to create the response to test data; requires the specs to be written in a language that can be compiled.
  • Fact finding. Performs the steps necessary to obtain facts to support the test process.
  • Flowchart. Pictorially represents computer system logic and data flow.
  • Inspections. Requires a step-by-step explanation of the product, with each step checked against a predetermined list of criteria.
  • Instrumentation. Measures the functioning of a system structure by using counters and other monitoring instruments.
  • Integrated test facility. Permits the integration of test data in a production environment so that testing can run during production processing.
  • Mapping. Identifies which parts of a program are exercised during a test, and at what frequency.
  • Modeling. Simulates the functioning of the environment or system structure in order to determine how efficiently the proposed system solution will function.
  • Parallel operation. Verifies that the old and new versions of the application system produce equal or reconcilable results.
  • Parallel simulation. Approximates the expected results of processing by simulating the process to determine whether test results are reasonable.
  • Peer review. Provides an assessment by peers of the efficiency, style, adherence to standards, and so on of the product, designed to improve its quality.
  • Ratios/relationships. Provides a high-level quantitative check that some aspect of the software or testing is reasonable.
  • Risk matrix. Produces a matrix showing the relationship between system risk, the segment of the system where the risk occurs, and the presence or absence of controls to reduce that risk.
  • Scoring. Identifies areas in the application that require testing, through the rating of criteria that have been shown to correlate with problems.
  • Snapshot. Shows the content of computer storage at predetermined points during processing.
  • Symbolic execution. Identifies processing paths by testing the programs with symbolic rather than actual test data.
  • System logs. Provides an audit trail of monitored events occurring in the environment controlled by system software.
  • Test data. Creates transactions for use in determining the functioning of a computer system.
  • Test data generator. Provides test transactions based on the parameters that need to be tested.
  • Test scripts. Creates test transactions in the sequence in which those transactions will be processed in an online software system.
  • Tracing. Follows and lists the flow of processing and database searches.
  • Use case. Prepares test conditions that represent real-world uses of the software.
  • Volume testing. Identifies system restrictions (for example, internal table sizes) and then creates a large volume of transactions that exceeds those limits.
  • Walkthroughs. Leads a test team through a manual simulation of the product using test transactions.

Chapter 8 describes a variety of testing techniques. Many of the tools used in testing will be utilized to effectively perform those techniques. Again, stress testing is a technique for which tools are necessary to support a large volume of test data.

Selecting a Tool Appropriate to Its Life Cycle Phase

The type of testing varies by the life cycle phase in which it occurs. Just as the methods change, so do the tools. Thus, it becomes necessary to select a tool appropriate to the life cycle phase in which it will be used.

As the life cycle progresses, the tools tend to shift from manual to automatic. However, this should not imply that the manual tools are less effective than the automatic, because some of the most productive testing can occur during the early phases of the life cycle using manual tools.

Table 4-1 lists the life cycle phases in which each of the identified test tools is most effective. The matrix covers the 41 test tools and indicates, for each, the systems development life cycle phases in which it is most appropriate. You can use this matrix for the second step of the selection process, in which the population of tools identified in the first step is reduced to those tools that are effective in the life cycle phase where the test will occur.

Table 4-1. SDLC Phase/Test Tool Matrix

Each tool is listed with the SDLC phases (Requirements, Design, Program, Test, Operation, Maintenance) in which it is most effective; “all phases” means the tool applies throughout the life cycle.

  • Boundary value analysis: Program, Test
  • Capture/playback: Test, Maintenance
  • Cause-effect graphing: Design, Program
  • Checklist: all phases
  • Code comparison: Maintenance
  • Compiler-based analysis: Program
  • Confirmation/examination: all phases
  • Control flow analysis: Test, Maintenance
  • Correctness proof: Design, Test
  • Data dictionary: Test
  • Data flow analysis: Program
  • Database: Test, Maintenance
  • Design-based functional testing: Design, Test
  • Design reviews: Design
  • Desk checking: Requirements, Design, Program, Maintenance
  • Disaster test: Test, Maintenance
  • Error guessing: all phases
  • Executable specs: Design
  • Fact finding: all phases
  • Flowchart: Requirements, Design, Program
  • Inspections: all phases
  • Instrumentation: Test, Operation, Maintenance
  • Integrated test facility: Maintenance
  • Mapping: Program
  • Modeling: Requirements, Design
  • Parallel operation: Operation
  • Parallel simulation: Test
  • Peer review: all phases
  • Ratios/relationships: Test, Maintenance
  • Risk matrix: Requirements, Design
  • Scoring: Requirements, Design
  • Snapshot: Test
  • Symbolic execution: Program
  • System logs: Test, Operation, Maintenance
  • Test data: Design, Program, Test, Maintenance
  • Test data generator: Test, Maintenance
  • Test scripts: Test, Maintenance
  • Tracing: Program, Maintenance
  • Use cases: Test, Maintenance
  • Utility programs: Test, Operation, Maintenance
  • Walkthroughs: Requirements, Design, Program
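One way to apply the matrix in step 2 is to encode it as data and filter by phase. The excerpt below transcribes a few rows of Table 4-1; the remaining rows would follow the same pattern.

```python
# Excerpt of Table 4-1 as data: tool -> SDLC phases where it is most
# effective. Only four rows are shown here.
PHASE_MATRIX = {
    "checklist": {"requirements", "design", "program",
                  "test", "operation", "maintenance"},
    "code comparison": {"maintenance"},
    "compiler-based analysis": {"program"},
    "walkthroughs": {"requirements", "design", "program"},
}

def tools_for_phase(phase):
    """Step 2 of selection: keep only tools effective in this phase."""
    return sorted(t for t, phases in PHASE_MATRIX.items() if phase in phases)

print(tools_for_phase("program"))
# ['checklist', 'compiler-based analysis', 'walkthroughs']
```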

Matching the Tool to the Tester’s Skill Level

The individual performing the test must select a tool that conforms to his or her skill level. For example, it would be inappropriate for a user to select a tool that requires programming skills when the user does not possess those skills. This does not mean that an individual will not have to be trained before the tool can be used but rather that he or she possesses the basic skills necessary to undertake training to use the tool. Table 4-2 presents the tools divided according to the skill required. This table divides skills into user skill, programming skill, system skill, and technical skill.

Table 4-2. Skill Levels for Using Testing Tools

Parenthetical comments qualify a tool or note additional skills needed.

User skill:
  • Checklist
  • Integrated test facility
  • Peer review
  • Risk matrix
  • Scoring
  • Use case
  • Walkthroughs

Programmer skill:
  • Boundary value analysis
  • Capture/playback
  • Checklist
  • Code comparison
  • Control flow analysis
  • Correctness proof
  • Coverage-based metric testing
  • Data dictionary
  • Data flow analysis
  • Database
  • Design-based functional testing
  • Desk checking
  • Error guessing
  • Flowchart
  • Instrumentation
  • Mapping
  • Modeling
  • Parallel simulation
  • Peer review
  • Snapshot
  • Symbolic execution
  • System logs
  • Test data
  • Test data generator
  • Test scripts
  • Tracing
  • Volume testing
  • Walkthroughs

System skill:
  • Cause-effect graphing
  • Checklist
  • Confirmation/examination
  • Correctness proof
  • Design-based functional testing
  • Design reviews
  • Desk checking
  • Disaster test
  • Error guessing
  • Executable specs (few such languages in existence)
  • Fact finding
  • Flowchart
  • Inspections (helpful to have application knowledge)
  • Integrated test facility (skills are needed to develop, but not to use, the ITF)
  • Mapping
  • Modeling
  • Parallel simulation
  • Peer review
  • System logs
  • Test data
  • Test scripts
  • Tracing
  • Volume testing
  • Walkthroughs

Technical skill:
  • Checklist
  • Coverage-based metric testing (requires statistical skill to develop)
  • Instrumentation (requires system programmer skill)
  • Parallel operation (requires operations skill)
  • Peer review (reviewers must be taught how to conduct the review)
  • Ratios/relationships (requires statistical skills to identify, calculate, and interpret the results of a statistical analysis)

  • User skill. Requires the individual to have an in-depth knowledge of the application and the business purpose for which it is used. Skills needed include general business skills specific to the area being computerized, general management skills used to achieve the mission of the user area, and the ability to identify and deal with user problems.

  • Programming skill. Requires understanding of computer concepts, flowcharting, programming in the languages used by the organization, debugging, and documenting computer programs.

  • System skill. Requires the ability to translate user requirements into computer system design specifications. Specific skills include flowcharting, problem analysis, design methodologies, computer operations, some general business skills, error identification and analysis in automated applications, and project management. The individual normally possesses a programming skill.

  • Technical skill. Requires an understanding of a highly technical specialty and the ability to exhibit reasonable performance at that specialty.

Table 4-2 indicates which skills are required to use which tools. In some instances, different skills are needed to develop the tool than to use it; where this is the case, it is noted in the parenthetical comment. The comments also indicate any skill qualification or specific technical skill needed.
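Step 3 of the selection process can be expressed the same way as the phase filter shown earlier. The excerpt below transcribes a few rows of Table 4-2; the rest would follow the same pattern.

```python
# Excerpt of Table 4-2 as data: skill level -> tools usable at that
# level. Only a few entries per skill are shown here.
SKILL_MATRIX = {
    "user": {"checklist", "risk matrix", "scoring", "use case"},
    "programmer": {"capture/playback", "code comparison", "test scripts"},
}

def tools_for_skills(skills):
    """Step 3 of selection: union of tools usable at the tester's skills."""
    usable = set()
    for skill in skills:
        usable |= SKILL_MATRIX.get(skill, set())
    return sorted(usable)

print(tools_for_skills({"user", "programmer"}))
```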

 

Selecting an Affordable Tool

Typically, testing must be accomplished within a budget or time span. An extremely time-consuming and hence costly tool, while desirable, may not be affordable under the test budget and schedule. Therefore, the last selection criterion is to pick those tools that are affordable from the population of tools remaining after the preceding step. Work Paper 4-2 can be used to document selected tools.

Work Paper 4-2. Documenting Tools

Tool Name: ________________________________________________________

Tool Vendor: _______________________________________________________

Tool Capabilities: ____________________________________________________

__________________________________________________________________

__________________________________________________________________

Tool Purpose: ______________________________________________________

__________________________________________________________________

__________________________________________________________________

Process That Will Use Tool: ____________________________________________

__________________________________________________________________

__________________________________________________________________

Tool Training Availability: ______________________________________________

__________________________________________________________________

__________________________________________________________________

Tool Limitations: _____________________________________________________

__________________________________________________________________
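Work Paper 4-2 can also be kept in machine-readable form so that the tool portfolio is searchable. Here is a minimal sketch whose fields mirror the form above; the sample values are illustrative only.

```python
from dataclasses import dataclass

# Machine-readable form of Work Paper 4-2; fields mirror the form above.
@dataclass
class ToolRecord:
    name: str
    vendor: str
    capabilities: str
    purpose: str
    process_using_tool: str
    training_availability: str
    limitations: str = ""

record = ToolRecord(
    name="capture/playback",                 # illustrative values only
    vendor="(vendor name here)",
    capabilities="records online test transactions and replays them",
    purpose="regression and stress testing",
    process_using_tool="system test execution",
    training_availability="vendor-supplied course",
)
print(record.name)
```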

Some test tools are extremely costly to execute, whereas others involve only nominal costs. It is difficult to put a specific price tag on many of the tools because they require the acquisition of hardware or software, the cost of which may vary significantly from vendor to vendor.

Table 4-3 groups the tools into three categories of cost: high, medium, and low. Where costs are unusually high or low, a parenthetical comment further clarifies the cost category.

Table 4-3. Cost to Use Testing Tools

High cost:
  • Correctness proof
  • Coverage-based metric testing (cost to develop the metrics is high, not their usage)
  • Executable specs
  • Inspections
  • Modeling
  • Parallel operation
  • Parallel simulation
  • Symbolic execution
  • Test data (cost varies by volume of test transactions)

Medium cost:
  • Capture/playback
  • Cause-effect graphing
  • Code comparison (major cost is acquisition of the utility program)
  • Control flow analysis
  • Database
  • Design-based functional testing
  • Design reviews (cost varies with size of review team)
  • Disaster test (cost varies with size of test)
  • Instrumentation
  • Integrated test facility (major cost is building the ITF)
  • Mapping (software is the major cost)
  • Peer review
  • Risk matrix
  • Snapshot (major cost is building snapshot routines into programs)
  • System logs (assumes logs are already in operation)
  • Test data generator (major cost is acquiring the software)
  • Test scripts
  • Utility programs (assumes the utility is already available)
  • Volume testing
  • Walkthroughs (cost varies with size of walkthrough team)

Low cost:
  • Boundary value analysis (requires establishing boundaries during development)
  • Checklist
  • Compiler-based analysis
  • Confirmation/examination
  • Data dictionary (assumes the cost of the data dictionary is not a test cost)
  • Desk checking
  • Error guessing
  • Fact finding
  • Flowchart (assumes software is available)
  • Ratios/relationships
  • Scoring

It is possible that you will have gone through the selection process and ended up with no tools to select from. In this instance, you have two options. First, you can repeat the process and be more generous in your selection criteria. In other words, be more inclined to include tools as you move from step to step. Second, you can ignore the formal selection process and use judgment and experience to select the tool that appears most appropriate to accomplish the test objective.

Training Testers in Tool Usage

Training testers in the use of test tools is a “no-brainer.” Letting testers use tools in practice without training is like letting someone drive an automobile without instruction: It’s dangerous. The danger is that costs can escalate unnecessarily and that, by misusing the test tools, testers may fail to perform effective testing.

It is recommended that test tools be used only by testers who have demonstrated proficiency in their use. If it is necessary for the testers to use a tool in which they are not proficient, a mentor or supervisor should assist the tester in the use of that tool to ensure its effective and efficient use.

Appointing Tool Managers

The objective of appointing a tool manager is twofold:

  • More effective tool usage. Having a tool manager is, in fact, like establishing a help desk for testers. Because the tool manager is knowledgeable in what the tool does and how it works, that individual can speed the learning of other users and assist with problems associated with the tool’s usage.

  • Managerial training. The individual appointed to be a tool’s manager should have total responsibility for that tool. This includes contacting the vendor, budgeting for maintenance and support, overseeing training, and providing supervisory support. Being appointed a tool manager is an effective way to provide managerial training for individuals; it is also effective in evaluating future managerial candidates.

Managing a tool should involve budgeting, planning, training, and related managerial responsibilities.

The workbench for managing testing tools using a tool manager is illustrated in Figure 4-1. The three steps involve appointing a tool manager; assigning the duties the tool manager will perform; and limiting the tool manager’s tenure. This concept not only facilitates the use of tools but builds future managers at the same time.

Figure 4-1. Tool manager’s workbench for managing testing tools.

Prerequisites to Creating a Tool Manager Position

Before appointing a tool manager, IT management should answer the following questions:

  1. Has management established objectives for the tool to be managed?

  2. Has the use of the tool been specified in IT work procedures?

  3. Has a training program been established for using the tool?

  4. Have the potential candidates for tool manager been trained in the use of the tool they would manage?

  5. Have potential candidates for tool manager effectively used the tool in a production environment?

  6. Do the candidates for tool manager have managerial potential?

  7. Does the individual selected as tool manager want to manage the tool?

  8. Do the candidates for tool manager believe that the tool is effective in accomplishing the organization’s mission?

  9. Does the candidate for tool manager have sufficient time to perform the tool manager duties?

  10. Have reasonable duties been assigned to the tool manager?

  11. Does the tool manager understand and agree that these are reasonable duties to perform?

  12. Has a limit been established on the length of service for tool managers?

Once management has determined that a specific tool is needed (and selected that tool), a tool manager can be appointed. There are two inputs needed for this workbench: a clear definition of the objective for acquiring and using the tool, and a list of potential tool manager candidates.

Tool usage should be mandatory. In other words, work processes should indicate when to use a specific tool. The work process should indicate whether a tool user can select among two or more recommended tools. The tool manager should not be in the business of marketing a tool but rather of assisting users and making tool usage more effective.

This section describes a three-step process for using a tool manager.

Selecting a Tool Manager

Ideally, the tool manager should be selected during the process of selecting the tool, and have ownership in the selection decision. The tool manager should possess the following:

  • Training skills

  • Tool skills

  • Managerial skills

    • Planning

    • Organizing

    • Directing

    • Controlling

If the tool manager candidate lacks some of the preceding skills, those skills can be developed during the individual’s tenure as tool manager. If the tool manager position is used to train future managers, technical proficiency and competency in tool usage are the only real prerequisites; the remaining skills can be developed on the job. In that case, a mentor must be assigned to the tool manager to develop the missing skills.

In addition to the tool manager, an assistant tool manager should also be named for each tool. This individual will not have any direct managerial responsibilities but will serve as backup for the tool manager. The primary responsibility of the assistant tool manager will be to gain competency in the use of the tool. Normally, the assistant tool manager is a more junior employee than the tool manager. The assistant is the most logical person to become the next manager for the tool.

Assigning the Tool Manager Duties

A tool manager can be assigned any or all of the following duties:

  • Assist colleagues in the use of the tool. The tool manager should be available to assist other staff members in the use of the tool. This is normally done using a “hotline.” Individuals having problems using the tool or experiencing operational problems with the tool can call the tool manager for assistance. Note: The hours of “hotline” activities may be restricted; for example, 8 to 9 A.M. and 2 to 5 P.M. This restriction will be dependent upon the other responsibilities of the tool manager and the expected frequency of the calls.

  • Train testers how to use the tool. The initial tool training normally comes from the vendor. However, additional tool training is the responsibility of the tool manager. Note that the tool manager may subcontract this training to the training department, the tool vendor, or other competent people. The tool manager has the responsibility to ensure the training occurs and may or may not do it personally.

  • Act as the tool vendor’s contact. The tool manager would be the official contact for the tool vendor. Questions from staff regarding the use of the tool that can only be answered by the vendor should be funneled through the tool manager to the vendor. Likewise, information from the vendor to the company should be directed through the tool manager.

  • Develop annual tool plans. The tool manager should develop an annual tool plan complete with planned tool usage, schedule, and resources needed to effectively utilize the tool. Tool managers may want to define penetration goals (the percent of the department that will use the tool by the end of the planning period) and should budget for upgrades, training, and other expenditures involved in tool usage. The tool manager’s time should be budgeted and accounted for.

  • Install tool upgrades. As vendors issue new versions of the tool, the tool manager is responsible for ensuring that those upgrades are properly incorporated and that the involved parties are made aware and trained, if necessary. Note that the tool manager may not have to do a lot of this personally but is responsible to make sure it happens.

  • Prepare annual reports. At the end of each year, or planning period, the tool manager should prepare for IT management an overview of the use of the tool during the year. This will require the tool manager to maintain statistics on tool usage, problems, costs, upgrades, and so forth. (Note that tool usage for mainframe tools can normally be obtained from job accounting software systems. Non-mainframe usage may have to be estimated.)

  • Determine timing of tool replacements. The tool manager, being responsible for a specific software tool, should also be responsible for determining when the tool is no longer effective or when better tools can be acquired to replace it. When these situations occur, the tool manager should prepare proposals to senior IT management regarding tool replacement.

The role of a tool manager can be enhanced in the following ways:

  • Allow individuals adequate time to perform the tool manager’s role. The assignment of a tool manager should be scheduled and budgeted so that the individual knows the amount of time and resources that can be allocated to it.

  • Incorporate tool manager performance into individual performance appraisals. The performance of the tool manager’s duties should be considered an important part of an individual’s work.

Limiting the Tool Manager’s Tenure

It is recommended that an individual serve two years as the manager for a specific tool. The rationale for the two years is that individuals tend to lose interest over time; after a period, the manager also tends to lose perspective on new uses for the tool or on its deficiencies. Bringing in a new tool manager every two years tends to revitalize the use of that tool in the organization. Note that tool managers can be transferred to manage another tool.

In instances where tools are highly specialized, very complex, or have minimal usage, it may be desirable to keep an individual manager for longer than a two-year period.

Summary

Efficient testing necessitates the use of testing tools. Each testing organization should have a portfolio of tools used in testing. This chapter described the more common software-testing tools. It also proposed the establishment of a tool manager function for each tool.

The selection of the appropriate tool in testing is an important aspect of the test process. Techniques are few in number and broad in scope, whereas tools are large in number and narrow in scope. Each provides different capabilities; each tool is designed to accomplish a specific testing objective.

 
