Chapter 24. Building Agility into the Testing Process

To enhance a software testing process, you must follow an improvement plan. Traditional improvement plans focus on identifying a defective component and then minimizing the impact of that defect, thus enhancing the testing process under review. This chapter uses a significantly different approach to improvement: time compression. Time compression drives agility. This process to add agility to software testing has proven in practice to be much more effective than traditional process improvement methods.

This chapter explains in detail each of the seven steps needed to build agility into your software testing process. Each step is described in a “how-to” format. This chapter assumes that a team of testers has been assigned the responsibility to build agility into the software testing process (as discussed in Chapter 23).

Step 1: Measure Software Process Variability

A process is a method for making or doing something in which there are a number of steps. The steps and the time required to execute those steps comprise a timeline to produce the desired result. This chapter explains how to define and document those steps so that they represent a software testing process timeline. In this chapter, you also learn how to reduce variability.

If the timeline is lengthy, or if some steps are bottlenecks, or if the steps do not permit flexibility in execution, testing cannot be performed in an agile manner. Developing a software testing timeline will help identify those testing components that inhibit agility in software testing.

Timelines

A timeline is a graphic representation of the sequence in which the steps are performed and the time needed to execute those steps. The timeline not only shows the time required to execute a process but also the time required for each step. It can also show the time for substeps. Timelines enable someone who is responsible for building an agile testing process to evaluate that process.

A process is the means by which a task is performed. Whereas in manufacturing most processes are automated, professional processes rely much more on the competence of the individual performing the process. A professional process consists of two components: the people tasked with completing the process, and the process itself. The process steps normally assume a level of competence for the individual performing the process, and therefore much of the process need not be documented. For example, a programmer coding in a specific language follows a process, which assumes that the programmer following the process is competent in that specific language. Therefore, the process does not attempt to explain the programming language.

A poorly defined process relies much more on the competency of the individual performing that process than does a well-defined process. For example, a poorly defined requirement-gathering process may require only that the defined requirements be easy to understand. A well-defined process may utilize an inspection team of requirement-gathering analysts to determine whether the requirements are easily understandable. In addition, a well-defined process may have a measurement process to measure easy-to-understand requirements.

Normally, as processes mature, more of the process is incorporated into the steps and less depends on the competency of the individual. Therefore, the reliance on people performing the process tends to go down as the process matures.

As the timeline is defined, the variability inherent in the process is also defined. Variability can include the quality of the products produced by the process, as well as the time required to produce those products. A process with extensive variability is considered to be “out of control,” whereas a process with acceptable variability is considered to be “in control.”

Figure 24-1 illustrates variability in the software testing process. Chart A shows a bell-shaped curve showing extensive variability. For example, to perform a specific task in the software testing process may take an average of 100 hours but have a variability of between 24 and 300 hours to perform that task. Chart B shows that same task in the software testing process with minimal variability. For example, the average time to perform that task, as illustrated in Chart B, may be 100 hours, with a range of between 90 and 110 hours to perform that task. Thus, Chart B shows a process under control, which is more desirable than the out-of-control process illustrated in Chart A.


Figure 24-1. Variability in the software testing process.
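The distinction between an in-control and an out-of-control process can be made concrete with a short calculation. The following Python sketch is purely illustrative (the durations and the 20 percent tolerance are assumptions, not values from this chapter): it averages the hours recorded for one testing task across several projects and flags the task as out of control when the spread exceeds the agreed tolerance.

```python
from statistics import mean

def variability_report(durations_hours, tolerance_pct=20):
    """Summarize timeline variability for one testing task across several projects.

    durations_hours: observed hours for the same task on different projects.
    tolerance_pct: agreed acceptable spread around the average, in percent (an assumption here).
    """
    avg = mean(durations_hours)
    low, high = min(durations_hours), max(durations_hours)
    allowed = avg * tolerance_pct / 100.0
    in_control = (avg - low) <= allowed and (high - avg) <= allowed
    return {"average": avg, "range": (low, high), "in_control": in_control}

# Chart A style: wide spread, from 24 to 300 hours -> reported as out of control.
print(variability_report([24, 60, 100, 140, 300]))
# Chart B style: average 100 hours, spread 90 to 110 hours -> reported as in control.
print(variability_report([90, 95, 100, 105, 110]))
```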

Process Steps

A process step has the following attributes:

  • A self-contained task

  • Defined (standards) input and output (i.e., entrance and exit) criteria

  • The specific work task(s) necessary to convert the input product(s) to the output product(s)

  • The quality control task(s) necessary to determine that specific work tasks have been correctly performed

  • The tools required

  • A rework procedure in case the output product does not meet exit criteria (standards)

Workbenches

The preferred way to illustrate a process step is to define a “workbench” to perform that step. The components of a workbench for unit testing are as follows (a minimal data-structure sketch follows the list):

  • Input products. The input product is the software to be unit tested.

  • Standards. The standards for unit testing are the entrance and exit criteria for the workbench. The entrance standards state what the test specifications produced by the coding workbench must include to be acceptable for unit testing; the exit standards state the attributes of the completed unit test (e.g., one exit criterion might be that every branch is tested both ways).

  • Do procedures. This would be the task(s) necessary to unit test the code.

  • Toolbox. One tool in the unit testing toolbox might be the test data generator.

  • Check procedures. Many check procedures might be used: One might be a routine in the test data generator, which would do a static analysis of the test. If the test data generator indicated an error, rework would occur. In addition, a unit test inspection process may be performed in which peers inspect the unit test data to ensure it meets the appropriate test “standards.”

  • Output product. If the check procedure indicates no errors in the unit test specifications, it becomes an output product from the workbench. The output product, then, would become the input product for integration and/or system testing.
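To make these components more tangible, the sketch below represents a workbench as a simple Python record. This is an illustration only, not part of the formal testing process; the unit-testing contents shown are assumptions drawn loosely from the bullets above.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Workbench:
    name: str
    input_products: List[str]      # products received from the previous workbench
    entrance_criteria: List[str]   # standards the inputs must meet
    do_procedures: List[str]       # work tasks that convert inputs to outputs
    check_procedures: List[str]    # quality control tasks
    toolbox: List[str]             # tools available to the worker
    exit_criteria: List[str]       # standards the outputs must meet
    output_products: List[str]     # products passed to the next workbench

unit_test_workbench = Workbench(
    name="Unit testing",
    input_products=["Software (code unit) to be unit tested"],
    entrance_criteria=["Test specifications acceptable for unit testing"],
    do_procedures=["Prepare unit test data", "Execute unit tests against the code"],
    check_procedures=["Static analysis by the test data generator",
                      "Peer inspection of the unit test data"],
    toolbox=["Test data generator"],
    exit_criteria=["Every branch tested both ways"],
    output_products=["Unit-tested code for integration and/or system testing"],
)
```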

Time-Compression Workbenches

The workbench concept needs to be expanded and used to identify causes of variability. Figure 24-2 shows a time-compression workbench. Added to this workbench are the activities that provide the greatest opportunity to reduce test-time variability. These additional activities are the ones that most commonly cause variability of a software test step.


Figure 24-2. A time-compression workbench.

The following four factors cause the most testing variability (the sketch following this list illustrates where each enters the workbench flow):

  • Rework associated with not meeting entrance criteria. The entrance criteria define the criteria the input products must meet to be utilized by the workbench. Failure to meet entrance criteria means that the previous workbench has not met the exit criteria for that workbench.

    A more common scenario is that the entrance criteria for the input are not checked. In other words, the worker for the workbench accepts defective input products. This means that the defects are incorporated into the workbench activities. The net result is that the longer a defect “lives,” the more costly it is to correct it. An obvious time-compression strategy is to check entrance criteria.

  • Worker competency. If the worker is not competent (in the required skill sets or in using the work process), the probability of introducing defects increases significantly. Note that using the workbench effectively also assumes the worker is competent in using the tools in the toolbox. Therefore, an obvious time-compression strategy is to ensure the competency of the individuals performing the workbench step.

  • Internal rework. Internal rework generally indicates either the worker is incompetent or the processes provided to the worker are defective. In our testing workbench example, if the tester did not prepare tests for all the code specifications and those defects were uncovered by a test inspection process, internal rework would occur. An obvious time-compression strategy is to reduce internal rework.

  • External rework. External rework refers to a situation in which the worker for a specific workbench cannot complete the workbench without additional input from previous workbench step(s).
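The following minimal Python sketch illustrates the flow implied by the time-compression workbench in Figure 24-2; the function and callable names are hypothetical. It shows where entrance-criteria rework, external rework, and internal rework enter that flow, and worker competency appears indirectly as how often the internal rework loop repeats.

```python
def run_workbench(inputs, meets_entrance_criteria, do_work, passes_check, needs_more_input):
    """Illustrative flow only; each argument after 'inputs' is a caller-supplied callable."""
    # Rework for not meeting entrance criteria: defective inputs are sent back
    # to the previous workbench instead of being accepted.
    if not meets_entrance_criteria(inputs):
        return {"status": "returned to previous workbench", "rework": "entrance criteria"}

    product = do_work(inputs)

    # External rework: the worker cannot finish without additional upstream input.
    if needs_more_input(product):
        return {"status": "waiting on previous workbench", "rework": "external"}

    # Internal rework: check procedures find defects, so the do procedures repeat.
    # Worker competency largely determines how often this loop executes.
    rework_cycles = 0
    while not passes_check(product):
        product = do_work(inputs)
        rework_cycles += 1

    return {"status": "output released", "internal_rework_cycles": rework_cycles}
```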

Reducing Variability

Measuring testing-time variability is itself a project: it must be recognized and managed as one, which means time and resources must be allocated to the time-compression effort. You compress testing time by reducing variability.

Although preparation is needed to compress the testing effort, the actual compressing project will require only minimal staff and resources. The preparation that helps make “compressing” successful includes the following:

  1. Find a “compressing” champion. Identify someone in the IT organization to be the champion for this effort. It can be the IT director or anyone well-respected in the IT organization.

  2. Teach those having a vested interest in compressing software test time the process to compress testing. The champion or someone appointed by the champion should learn the seven-step process to compress test time and provide an overview of that process to those who have a vested interest in compressing testing time. Generally, those individuals are project leaders, system designers, systems analysts, system programmers, quality assurance personnel, testers, and standards and/or software engineering personnel. Everyone having a “stake” in compressing test time should understand the time-compression process.

  3. Find individuals to identify “compressible” components of the testing process. Identify two to five individuals who want to be involved in compressing the testing process. Generally, it should be at least two individuals, but no more than five. They will become the “agile implementation team.”

  4. Assign a budget and timeline to identify implementable time-compression improvements to the testing process. If the individuals on the team are knowledgeable about the testing process, this should require no more than two to three days for each team member. Generally, these individuals already have an idea about how testing might be compressed. The process that they follow will confirm these ideas and/or identify the probable root causes for excessive test time. Because these individuals are knowledgeable, it is recommended that they spend approximately two to four hours per week over a four to six week period to complete Steps 1 through 6 of the seven-step testing time-compression process. Step 7 deals with selecting these individuals and obtaining resources for the project.

The software testing agile implementation team needs to understand the components of a software testing process for two reasons:

  • To identify testing workbenches. If your organization has an immature testing process, you may have difficulty identifying specific workbenches because they may be integrated into larger tasks and not easily identifiable.

  • To help define root causes of variability. As the agile implementation team identifies a specific workbench as one having great variability, they may have difficulty identifying the root cause. By examining their knowledge of specific workbench activity, the agile implementation team may be able to determine that the root cause of variability is a lack of performing a specific activity, or performing it in an ineffective manner.

Developing Timelines

A testing timeline shows the number of workdays needed to complete the testing. The three tasks that need to be followed to develop a software testing timeline are as follows:

  1. Identify the workbenches.

  2. Measure the time for each workbench via many testing projects.

  3. Define the source of major variability in selected workbenches.

Identifying the Workbenches

The workbenches that create the software testing timeline must be defined. These are not all the workbenches in the software test process, just those that might be defined as “the critical path.” These workbenches, if lined end to end, determine the time span to complete the software testing.

There may be many other workbenches in software testing that need not be considered in this timeline definition. The types of workbenches that need not be considered include the following:

  • Workbenches performed in parallel with another workbench that by themselves do not influence the timeline. For example, a testing estimation workbench may be performed in parallel with the test-plan definition workbench, but it is the test-plan definition workbench that constrains test time, not the estimating workbench.

  • Report generation workbenches, such as time reporting, status reporting, and so forth.

  • Workbenches/tasks normally performed during wait time, such as software testing documentation. Note that as the timeline is “shrunk,” some of these workbenches may, at a later time, affect the timeline.

  • Estimating changes to the software testing plan.

  • Staff training/staff meetings, unless they delay the completion of a critical workbench.

It is important to note that the process for “compressing” software testing time need not be a high-precision exercise. For example, if a critical workbench is left out, or a noncritical one added, it will not significantly affect the process for compressing testing time. Those corrections can be made as the process is repeated over time.

The workbenches defined for the timeline may or may not be the same workbenches/tasks/steps defined in the organization’s software testing process. A timeline workbench may divide a workbench in the organization’s software testing process, or it may combine one or more steps/tasks of the testing process into a single workbench. What is important is that the workbench be a critical component of software testing (in the opinion of the team responsible for compressing testing).

The workbenches must be identified and defined in a sequence from the beginning to the end of the testing process. For each workbench, the following information is needed:

  • Workbench name. The name may be one assigned by the software testing agile implementation team or the name given in the organization’s software testing process.

  • Input(s). The entrance criteria that will initiate action for the workbench. The names of the input(s) should be those used in your organization’s software testing process.

  • Workbench objective(s). The specific purpose for performing the workbench. This should be as specific as possible and, ideally, measurable. Note that in compressing the workbench, it is extremely important that those involved clearly understand the objective(s) for performing that workbench.

  • Output(s). The exit criteria for products produced by the workbench. The output(s) should be identified by the name used in the organization’s software testing process.

  • Approximate estimated timeline. The approximate time required to execute the workbench. The agile implementation team can choose the unit of measure for the timeline: workdays, half-days, or hours.

Once defined, this information should be transcribed to Work Paper 24-1. Note that this work paper is representative of what is needed; the actual work paper should be much larger.

Work Paper 24-1. Define the Timeline Software Testing Workbenches

 

WORKBENCH 1

WORKBENCH 2

WORKBENCH 3

WORKBENCH 4

WORKBENCH 5

Input(s)

     

Workbench Name

     

Workbench Objective(s)

     

Output(s)

     

Approximate Estimated Workdays Timeline

     

The timeline is a completion timeline and not a person-hour timeline. In other words, a three-day timeline is three workdays. It may, in fact, take several individuals all working the three available workdays to complete the task in three days.

Measuring the Time for Each Workbench via Many Testing Projects

The objective of this task is to measure an approximate workbench timeline in workdays for each workbench included in Work Paper 24-1. Note that this would normally be done only for those workbenches that have an extended (that is, large) calendar-day timeline. For each workbench selected to calculate the completion timeline, Work Paper 24-2 should be completed. The objective of this work paper is to measure the completion timeline for a specific workbench for many different projects. If historical data is available, measuring the timeline for 10 to 20 different projects is ideal. However, in practice, most organizations calculate the timeline for fewer projects. If historical data is not available, it should be collected from projects currently being tested.

Work Paper 24-2. Workbench Completion Calendar Day Timeline

Workbench Name:

Project Timelines:

Project(s)

Start Date

Date “Do” Procedures Completed First Time

Date Workbench Completed

Minimal Timeline Calendar Days

Actual Timeline Calendar Days

 
       
       
       
       
       
       
       
       
       
       
       
       
       
       
       

Average No Rework Calendar Days Timeline:

No Rework Calendar Days Variability:

Average Actual Calendar Days Timeline:

Actual Calendar Days Variability:

For each project for which the workbench completion timeline is being calculated, the following information should be collected and recorded on Work Paper 24-2:

  • Workbench name. The name assigned to the specific workbench on Work Paper 24-1.

  • Project. The name of the software testing project for which the completion timeline is being documented.

  • Start date. The calendar date that identifies when the workbench activity commenced.

  • Date “do” procedures completed first time. The date on which the workbench would have been completed had no rework been required; in other words, the completion date if everything had been done correctly the first time. If rework was required, that rework extends the actual completion date.

  • Date workbench completed. This is the date that the work assigned this specific workbench was completed, including any rework.

  • Minimum (no-rework) timeline days. The number of days between the start date for this project and the date the do procedures were completed the first time, that is, the date the workbench could have been completed had there been no rework.

  • Actual timeline days. The number of workdays from the start date of this workbench to the date the workbench was completed, including any rework.

After the workbench completion timeline data has been collected for a reasonable number of projects, the following should be calculated (a short calculation sketch follows this list):

  • Average no-rework timeline. The average of the days recorded in the minimum-timeline-days column: the no-rework completion timeline days are totaled for all the projects and divided by the number of projects.

  • Variability of no-rework days. The variability around the average no-rework timeline is the number of days by which a workbench was completed earlier than the average and the number of days by which it was completed later. For example, if the average is five days and one project completed the workbench in three days and another in eight days, the variability is minus two to plus three days.

  • Average actual days timeline. Calculated by totaling the days in the actual-timeline-days column and dividing by the number of projects.

  • Variability of actual days. Calculated by determining which project was completed earliest and which latest relative to the average actual days; the days early and days late give the minus and plus variability.
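These calculations are simple enough to script. The Python sketch below mirrors the arithmetic of Work Paper 24-2, using the three test-planning projects from Figure 24-3 as sample data; it is a convenience illustration, not a required tool.

```python
from statistics import mean

# Per project: (minimum no-rework workdays, actual workdays), as recorded on Work Paper 24-2.
# The values below are the three test-planning projects from Figure 24-3.
projects = {"A": (14, 20), "B": (26, 33), "C": (14, 22)}

def timeline_summary(days):
    avg = mean(days)
    # Variability is expressed as workdays earlier (minus) and later (plus) than the average.
    return avg, min(days) - avg, max(days) - avg

no_rework = [minimum for minimum, _ in projects.values()]
actual = [actual_days for _, actual_days in projects.values()]

avg_nr, minus_nr, plus_nr = timeline_summary(no_rework)
avg_ac, minus_ac, plus_ac = timeline_summary(actual)

print(f"Average no-rework timeline: {avg_nr:.0f} workdays ({minus_nr:+.0f} to {plus_nr:+.0f})")
print(f"Average actual timeline:    {avg_ac:.0f} workdays ({minus_ac:+.0f} to {plus_ac:+.0f})")
```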

Figure 24-3 shows an example of calculating the completion timeline for one testing workbench across three projects. Note that to complete a timeline like this, you should use projects of equal size and complexity (perhaps by classifying projects tested as small, medium, or large).

Figure 24-3. Workbench completion workday timeline.

Workbench Name: Test Planning

Project Timelines:

Project | Start Date | Date “Do” Procedures Completed First Time | Date Workbench Completed | Minimum Timeline Workdays | Actual Timeline Workdays
A | June 1 | June 18 | June 26 | 14 | 20
B | July 30 | August 30 | September 8 | 26 | 33
C | November 3 | November 18 | December 1 | 14 | 22
Average (÷ 3): Minimum = 18, Actual = 25

Average No Rework Workdays Timeline: 18

No Rework Workdays Variability: -4 to +8

Average Actual Workdays Timeline: 25

Actual Workdays Variability: -5 to +8

As you can see in Figure 24-3, test planning was performed for three projects. For each project, the minimum timeline workdays is the number of workdays between the start date and the date on which the first draft of the test plan was completed. The actual timeline workdays for each project is the number of workdays from the start of test planning until the test plan was complete. The example shows that the minimum average number of workdays for test planning is 18, with a variability of minus 4 to plus 8: one project was completed 4 workdays earlier than the average, and another required 8 workdays more than the average. The same calculation is performed for the actual timeline workdays.

Defining the Source of Major Variability in Selected Workbenches

For each workbench selected for calculating the workbench completion timeline using Work Paper 24-2, a variability completion timeline analysis should be performed. This analysis should be performed for both the best projects (i.e., completed in less than the average number of workdays) and the least-efficient projects (i.e., required more than the average number of workdays to be completed).

For projects with both the below- and above-average variability, an analysis should be done by the agile implementation team to determine where they believe the major source of variability occurred. For example, if a particular workbench took an average of five workdays and one project completed it in three workdays, the agile implementation team would need to determine the source of the two-day variability. For example, the project team that completed it in three days may have used a tool that none of the other projects used. In this case, the tool would be the source of variability. On the other hand, if a workbench is completed in an average of 5 workdays and one team took 10 workdays to complete it, the source of that variability may be lack of training or competency on the part of the individual performing the workbench. That lack of competency would be the source of the process variability.

The workbench components on Work Paper 24-3 are those previously described in this chapter. For the workbench being analyzed, the agile implementation team identifies the component(s) they believe are the source(s) of variability and then attempts to determine the probable cause. To complete Work Paper 24-3, the following needs to be determined (a brief project-selection sketch follows this list):

  • Variability analyzed. This analysis should be performed both for those projects below the average timeline workdays and again for those projects above the average timeline workdays. (Note: The team may decide only to evaluate the single best and worst performances of the workbench, or it may analyze several of the better and several of the worst performances of the workbench.)

  • Source of variability. For each of the ten workbench components, the team should determine whether they believe that component is or is not the source of variability.

  • Root cause. For each workbench component the team believes is the source of variability, they should try to determine the root cause of that variability. Again, it might be the use of a tool by one project, or the lack of training or competence on the part of the worker in another project.
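A minimal sketch of how the team might shortlist projects for this analysis follows (Python; the data reuses the Figure 24-3 example). It simply separates projects that beat the average actual timeline from those that exceed it; the former are examined for good practices and the latter for root causes on Work Paper 24-3.

```python
from statistics import mean

# Actual timeline workdays per project for one workbench (values from Figure 24-3).
actual_workdays = {"A": 20, "B": 33, "C": 22}

avg = mean(actual_workdays.values())
below_average = {p: d for p, d in actual_workdays.items() if d < avg}  # candidates for best practices
above_average = {p: d for p, d in actual_workdays.items() if d > avg}  # candidates for root-cause analysis

print(f"Average actual timeline: {avg:.0f} workdays")
print("Analyze for good practices:", below_average)
print("Analyze for sources of delay:", above_average)
```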

Work Paper 24-3. Completion Timeline Variability Analysis

Workbench Name:

Variability Analyzed:

Below Average

Above Average

Workbench Component

Source of Variability

Root Cause

Yes

No

Input Criteria

   

Checking Input Criteria

   

Do Procedures

   

Check Procedures

   

Toolbox

   

Worker Competency

   

Internal Rework

   

External Rework

   

Exit Criteria

   

Other (specify)

   

Improvement Shopping List

The objective of performing Step 1 of the seven-step process to compress testing time is to identify some specific components of a testing process in which you can reduce the variability. When Work Papers 24-1 through 24-3 are complete, the agile implementation team should begin to develop a “shopping list” of potential process completion timeline improvements. Work Paper 24-4 is provided for that purpose. Note that this work paper is also used in Steps 2 and 3.

Work Paper 24-4. Software Testing Completion Timeline Process

 

Ideas for Completion Timeline Improvement

Reference Number

Priority

   

High

Low

1

    

2

    

3

    

4

    

5

    

6

    

7

    

8

    

9

    

10

    

11

    

12

    

13

    

14

    

15

    

To complete Work Paper 24-4, the agile implementation team needs to provide the following information:

  • Ideas for completion time improvement. From the analysis performed on the timeline, the team can identify the following types of potential improvements:

    • Workbench(es) requiring a large number of workdays to complete. Large workbenches provide opportunities for improvement, such as dividing one workbench into two or more workbenches or providing a better, more efficient way to perform the task.

    • Workbench(es) with large negative and/or positive variability. Workbenches with large variability make it possible to identify the good and bad projects, transfer the techniques used in the good projects to improve the timeline, and eliminate the less efficient practices in how the workbench is performed.

    • The root cause of the variability. Knowing the probable cause of variability, both positive and negative, provides an opportunity for improvement.

    • Experience gained from the analysis process. Sometimes when a team performs an analysis and does impromptu brainstorming, they identify an opportunity for timeline improvement.

  • Reference number. This column is designed to let the agile implementation team reference other documents that contain more information about improvement suggestions.

  • Priority. The agile implementation team should give its first impression as to whether this is a good opportunity (higher priority) for timeline compression or an idea that might provide only minimal improvement (lower priority).

Quality Control Checklist

Work Paper 24-5 is a quality control checklist for Step 1 that the agile implementation team can use to minimize the probability of performing this step incorrectly.

Work Paper 24-5. Quality Control Checklist for Step 1

  

YES

NO

COMMENTS

1.

Has an agile implementation team been established?

   

2.

If so, are the members of the team respected individuals in the IT organization?

   

3.

If so, does the team comprise no less than two members and no more than five members?

   

4.

Does the agile implementation team understand the relationship of process variability to performing processes effectively?

   

5.

Does the agile implementation team understand that the skill sets of the individual performing a professional process are assumed and not incorporated into the software testing process?

   

6.

Does the agile implementation team understand that a process is broken up into steps/tasks?

   

7.

Does the agile implementation team understand the concept of a process workbench and the various components in the workbench?

   

8.

Does the agile implementation team understand the time-compression workbench?

   

9.

Has the agile implementation team identified the key workbenches in the software testing process?

   

10.

Has the agile implementation team eliminated from consideration those software testing workbenches that do not affect the time to complete the software testing process?

   

11.

Have the inputs and outputs for each identified workbench been defined?

   

12.

Have the objectives for each identified workbench been stated in a manner in which the results are measurable?

   

13.

Is there general consensus on the approximate estimated completion timeline for each of the key workbenches?

   

14.

Has a reasonable number of workbenches been selected to provide reliable information on the completion timeline for that workbench? (Note: This assumes a reasonable process is used for selecting the workbenches for investigation)

   

15.

For the workbenches selected for completion time analysis, has a reasonable number of projects been identified and the calendar dates for those projects been documented?

   

16.

Have the projects for the identified workbenches that are significantly better or significantly worse than the average calendar days been identified?

   

17.

For each workbench where projects have been identified that were implemented more efficiently than the average timeline, has a variability completion timeline analysis been performed?

   

18.

For each workbench where projects have been identified that were implemented less efficiently than the average timeline, has a variability completion timeline analysis been performed?

   

19.

For each of the workbench components for the identified projects, have the source of variability and the probable cause been determined?

   

20.

Has a reasonable process been followed to identify ideas for completion time improvement?

   

21.

For those ideas identified for completion timeline improvement, has the agile implementation team assigned a high or low priority to that idea?

   

22.

Are measurements and analysis performed for testing workbenches executed for software projects of equal size and complexity?

   

If the investigation indicates that a particular aspect of this step was incorrectly performed, it should be repeated. (Note: Some teams prefer to review the quality control checklist before they begin the step to give them a fuller understanding of the intention of this step.)

Conclusion

This step has explained work processes and how an analysis of those work processes can lead to ideas for completion timeline compression. The step has provided a tool, the workbench concept, for analyzing each critical task in the software testing process. The step also provided an expanded workbench for timeline compression. This expanded workbench provides a process to identify ideas to compress the testing timeline. These ideas are used in Step 6 to identify the specific ideas that will be turned into an improvement process. The next step, Step 2, is designed to help the team identify potential causes of weakness in the testing process.

Agile systems maximize flexibility and minimize variability. The primary purpose of this step has been to identify the variability in the software testing process so that the variability in those steps/tasks that would be included in an agile testing process is minimized.

Step 2: Maximize Best Practices

By identifying the best-of-the-best practices, you can begin to move all software testers toward maximum testing competency. Agile software testing needs best practices to be truly “agile.” When testers are proficient in the basics (i.e., best testing practices), they can incorporate agility into those processes and perform software testing more effectively and efficiently.

This step describes how to maximize best practices. To do so, you must consider the skill sets of the individuals involved in testing, the testable attributes of software, and the processes used to test the software. This step will enable you to define any capability barriers in your organization’s software testing, determine the best practices available, and continue to develop your list of ideas for compressing time to make your testing process more agile.

Tester Agility

The traditional role of software testers is to validate that the requirements documented by the development team have, in fact, been implemented. This role makes two assumptions. First, that the requirements are, in fact, what the customer/user truly needs. Second, that part of the role of the tester is to identify, for the development team, those requirements that have been incorrectly implemented.

Two problems are associated with testers validating only the defined requirements:

  • The belief that the implemented system will satisfy customer needs. This has proven not to be so, and the result is excessive change and maintenance.

  • The belief that the development team will capture all the needed requirements. In fact, development teams tend to concentrate on functional requirements, not quality factors.

Quality factors are those attributes of software that relate to how the functional requirements are implemented. For example, ease of use is a quality factor. Requirements can be implemented in a manner that is not easy to use. However, customers/users rarely specify an ease-of-use requirement. The quality factors may be the deciding factors as to whether sales will be made.

Testers must change their role from validating developer-defined requirements to representing customers/users. As this discussion continues, keep in mind these two distinct focuses and how testers (representing users/customers) can eliminate many of the tester capability barriers.

Software Testing Relationships

The interaction of four relationships in software testing helps to determine the agility and the performance of testing. Each party has a perspective concerning the software testing, and that perspective affects test team performance capabilities:

  1. The customer/user. The perspective of the customer/user focuses on what they need to accomplish, that is, their business objectives (a subjective determination). They may or may not be able to express these business objectives in the detail software testers need. This perspective is often called “quality in perception.”

  2. The software development team. The development team focuses on how to implement user requirements. They seek to define the requirements to a level that allows the software to be developed (an objective determination). (If they can build that software and meet the requirements, however, they may or may not be concerned as to whether it is the right system for the user/customer.) This is often called the “quality in fact” perspective.

  3. IT management. The environment established by IT management determines the testing methodology, methodology tools, estimating testing tools, and so forth. The environment determines what testers can do. For example, if they do not have an automated tool to generate test data, testers may be restricted as to the number of test transactions they can deal with.

  4. The testers. The perspective of the testers focuses on building and executing a test plan to ensure the software meets the true needs of the user.

Operational Software

The testing capability barrier is two dimensional: One dimension is efficiency; the other dimension is effectiveness. However, these two dimensions are affected by software “quality factors.” Figure 24-4 shows that these quality factors affect how each party views its role compared to another stakeholder’s perspective.


Figure 24-4. Relationships affecting software testing.

Software Quality Factors

Software quality is judged based on a number of factors, as outlined in Figure 24-5. These factors are frequently referred to as “success factors,” in that if you satisfy user desire for each factor, you generally have a successful software system. These should be as much a part of the application specifications as are functional requirements.

Figure 24-5. Software attributes (quality factors).

Factor

Definition

Correctness

Extent to which a program satisfies its specifications and fulfills the user’s mission objectives.

Reliability

Extent to which a program can be expected to perform its intended function with required precision.

Efficiency

The amount of computing resources and code required by a program to perform a function.

Integrity

Extent to which access to software or data by unauthorized persons can be controlled.

Usability

Effort required to learn, operate, prepare input, and interpret output of a program.

Maintainability

Effort required to locate and fix an error in an operational program.

Testability

Effort required to test a program to ensure that it performs its intended function.

Flexibility

Effort required to modify an operational program.

Portability

Effort required to transfer a program from one hardware configuration and/or software system environment to another.

Reusability

Extent to which a program can be used in other applications related to the packaging and scope of the functions that program performs.

Interoperability

Effort required to couple one system with another.

Tradeoffs

It is incorrect to assume that with enough time and resources all quality factors can be maximized. For example, optimizing both integrity and usability is an unrealistic expectation: as the integrity of the system is improved (e.g., through more elaborate security procedures), the system becomes more difficult to use (because the user must satisfy the security requirements). Thus, an inherent conflict is built into these quality factors.

Figure 24-6 shows the relationships among the 11 software quality factors. Note in this figure that a relationship exists between efficiency and most of the other factors.


Figure 24-6. Relationships between software quality factors.

Figure 24-7 shows the impact of not specifying or incorporating all the quality factors in software testing. Consider one quality factor: maintainability. Figure 24-7 shows that maintainability must be addressed during software test design, coding, and testing. However, if maintainability has not been addressed, there is no significant impact on the system test, and initially no impact on operation either. The high impact comes later, when the software must be revised and the changes transitioned into operation. Thus, a high cost is associated with software that is difficult to maintain.


Figure 24-7. The impact of not specifying software quality factors.

The objective of discussing quality factors, software testing relationships, and the tradeoffs among them is to help you understand some of the reasons for performance barriers. Your software testing staff can perform only at a given level of effectiveness and efficiency under current conditions. If you intend to compress software testing time, you must understand these barriers and incorporate them into the software testing time-compression activities.

Capability Chart

The easiest way to understand the capability barrier is to illustrate that barrier on a capability chart, shown in Figure 24-8. This figure shows the two dimensions of the chart: efficiency and effectiveness. Efficiency is a measure of the productivity of software testing, and effectiveness is a measurement of whether the test objectives are being met.


Figure 24-8. Software testing capability chart.

Figure 24-8 shows ten different projects. At the conclusion of each project, the project is measured for efficiency and effectiveness. The location on the software testing capability chart is determined by that efficiency/effectiveness measurement.

Let’s look at two examples. Project A rates very high on efficiency but low on effectiveness. In other words, the test team optimized the resources available to test a project in a very efficient manner. However, the customer of Project A is not satisfied with the test results. Project J is just the opposite. The customer of Project J is extremely pleased with the results, but the test team was very inefficient in testing that project.

In an effort to compress testing time, an agile implementation team should expect that tested projects on this chart will provide the solutions to compress testing time. For example, the practices used in testing Project A, if they were transferable to the other projects, might result in high test efficiency. Likewise, if the practices used in Project J could be transferred to all tested projects, there might be high customer satisfaction with all projects. Identifying these practices is the key component of this time-compression step.

The capability barrier line illustrated in Figure 24-8 represents the best an IT test team can do given current practices, staff competency, and management support. Agile testers must use new and innovative practices to enable an organization to break through their capability barrier.

A question that people ask about this capability chart is why the capability barrier line does not represent the results of the most efficient and most effective project. Note that if Project A were as effective as Project J, it would lie outside the capability barrier line. The reason is the relationship between the quality factors: as a project becomes more efficient, other quality factors suffer; likewise, if it becomes more effective, efficiency deteriorates. Therefore, the best compromise between effectiveness and efficiency will be less than the most effective project or the most efficient project. (Note: This is true only with current best practices; new test practices may enable an organization to break through its testing capability barrier.)

Measuring Effectiveness and Efficiency

Measuring software testing efficiency and effectiveness involves two activities: defining the measurement criteria, and developing a metric for efficiency and a metric for effectiveness. Completed testing projects can then be measured by these two metrics. The result of measuring a specific software testing project is then posted to the software testing capability chart (see Figure 24-8).

There is no single correct way to measure software testing effectiveness and efficiency. The IT industry has not agreed on such a metric. On the other hand, many organizations perform these measurements regularly. The measurement is one of the activities that should be conducted at the conclusion of a software testing project.

The correct metric for efficiency and effectiveness is one that is agreed on by the involved parties. In a similar manner, there is no perfect way to measure the movement of the stock market; however, a metric called the Dow Jones Average has been developed and agreed on by the involved parties, and it has thereby become an acceptable metric for measuring stock market movement. Does everyone agree that the Dow Jones Average is perfect? No. But there is general consensus that it is a good measurement of the movement of the stock market.

General rules must be followed to create an acceptable metric, as follows:

  • Easy to understand. The metric cannot be complex. The recommendation is that the metric range be between 0 and 100, because many measurements are on a scale of 0 to 100.

  • Limited criteria. If there are too many criteria involved in the metric, it becomes overly complex. The recommendation is that no more than five criteria be included in the metric.

  • Standardized and objective criteria. There needs to be a general definition of each criterion so that it can be used consistently by different people, and, as much as possible, the criteria should be objective (that is, countable rather than a matter of judgment).

  • Weighted criteria. All criteria are not equal. Therefore, the metric should weight criteria. The most important criteria should get the most weight.

Defining Measurement Criteria

The most common way to measure software testing effectiveness is to determine whether testers can validate the presence or absence of the development team’s defined requirements. In this case, four metrics enable you to measure software testing effectiveness (a brief tallying sketch follows the list):

  • Requirements tested and found correct.

  • Requirement does not execute as specified.

  • Requirement is missing.

  • Requirement found in tested software, but not specified by development team.
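As a minimal illustration (Python; the counts are hypothetical), the four outcomes can be tallied per project and expressed as simple percentages of the specified requirements.

```python
def requirements_effectiveness(tested_correct, not_as_specified, missing, unspecified_found):
    """Summarize requirements-based test effectiveness from the four outcome counts."""
    specified = tested_correct + not_as_specified + missing  # requirements defined by the development team
    return {
        "specified requirements": specified,
        "percent tested and found correct": 100.0 * tested_correct / specified,
        "percent not executing as specified": 100.0 * not_as_specified / specified,
        "percent missing": 100.0 * missing / specified,
        "requirements found but never specified": unspecified_found,
    }

# Hypothetical project: 80 requirements correct, 12 defective, 8 missing, 3 found but never specified.
print(requirements_effectiveness(80, 12, 8, 3))
```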

Measuring Quality Factors

The quality factor of “correctness” refers to testers validating whether the requirements specified by the development team work. However, the quality factor of correctness does not include all the other quality factors that the test team should be addressing. For example, it does not address whether the software is maintainable. Note that the software can be implemented with all the functional requirements in place and working, but cannot be efficiently or effectively maintained because the software was not built with maintenance in mind. For example, the logic may be so complex that it is difficult for a maintainer to implement a change in the software.

If the tester represents the customer/user, the tester’s responsibility may include testing some or all of the quality factors. This is generally determined in the relationship of the tester to the customer/user. If the tester is to evaluate more than the correctness quality factor, additional effectiveness criteria must be determined and used.

The generally accepted criteria used to define software quality are as follows:

  • Traceability. Those attributes of the software that provide a thread from the requirements to the implementation with respect to the specific testing and operational environment.

  • Completeness. Those attributes of the software that provide full implementation of the functions required.

  • Consistency. Those attributes of the software that provide uniform design and implementation techniques and notation.

  • Accuracy. Those attributes of the software that provide the required precision in calculations and outputs.

  • Error tolerance. Those attributes of the software that provide continuity of operation under non-nominal conditions.

  • Simplicity. Those attributes of the software that provide implementation of functions in the most understandable manner (usually avoidance of practices that increase complexity).

  • Modularity. Those attributes of the software that provide a structure of highly independent modules.

  • Generality. Those attributes of the software that provide breadth to the functions performed.

  • Expandability. Those attributes of the software that provide for expansion of data storage requirements or computational functions.

  • Instrumentation. Those attributes of the software that provide for the measurement of usage or identification of errors.

  • Self-descriptiveness. Those attributes of the software that provide explanation of the implementation of a function.

  • Execution efficiency. Those attributes of the software that provide for minimum processing time.

  • Storage efficiency. Those attributes of the software that provide for minimum storage requirements during operation.

  • Access control. Those attributes of the software that provide for control of the access of software and data.

  • Access audit. Those attributes of the software that provide for an audit of the access of software and data.

  • Operability. Those attributes of the software that determine operation and procedures concerned with the operation of the software.

  • Training. Those attributes of the software that provide transition from current operation or initial familiarization.

  • Communicativeness. Those attributes of the software that provide useful inputs and outputs that can be assimilated.

  • Software system independence. Those attributes of the software that determine its dependency on the software environment (operating systems, utilities, input/output routines, etc.).

  • Machine independence. Those attributes of the software that determine its dependency on the hardware system.

  • Communications commonality. Those attributes of the software that provide the use of standard protocols and interface routines.

  • Data commonality. Those attributes of the software that provide the use of standard data representations.

  • Conciseness. Those attributes of the software that provide for implementation of a function with a minimum amount of code.

The software criteria listed here relate to the 11 software quality factors previously described. Figure 24-9 shows this relationship. For each of the factors, this figure shows which software criterion should be measured to determine whether the software factor requirements have been accomplished.

Figure 24-9. Software criteria and related quality factors.

Factor | Software Criteria
Correctness | Traceability, Consistency, Completeness
Reliability | Error Tolerance, Consistency, Accuracy, Simplicity
Efficiency | Storage Efficiency, Execution Efficiency
Integrity | Access Control, Access Audit
Usability | Operability, Training, Communicativeness
Maintainability | Consistency, Simplicity, Conciseness, Modularity, Self-descriptiveness
Flexibility | Modularity, Generality, Expandability, Self-descriptiveness
Testability | Simplicity, Modularity, Instrumentation, Self-descriptiveness
Portability | Modularity, Self-descriptiveness, Machine Independence, Software System Independence
Reusability | Generality, Modularity, Software System Independence, Machine Independence, Self-descriptiveness
Interoperability | Modularity, Communications Commonality, Data Commonality

Note that some of the software criteria relate to more than one quality factor. For example, the modularity criterion is included in six of the factors, and consistency in three. As a general rule, you could assume that criteria that appear most frequently have a stronger relationship to the overall quality of the application system than those that appear only once. On the other hand, if the user rated a specific quality factor very high in importance, and that factor had a criterion that appeared only once, that criterion would still be important to the success of the application as viewed from the user’s perspective.

The quality factors and the software criteria relate to the specific application system being developed. The desire for quality is heavily affected by the environment created by management to encourage the creation of quality products. An environment favorable to quality must incorporate those principles that encourage quality.

Defining Efficiency and Effectiveness Criteria

Efficiency has sometimes been defined as “doing the job right,” and effectiveness as “doing the right job.” Because of the lack of standards, IT metrics, and common software testing processes, it is difficult to reach agreement on what constitutes efficient versus inefficient testing. For example, one would like an efficiency measure such as X hours to test a requirement; however, because requirements are not equal, a requirement weighting factor would have to be developed and agreed on before this efficiency criterion could be used.

The lack of industry-accepted efficiency metrics should not eliminate the objective to measure testing efficiency.

There are many ways to measure software testing efficiency. One is the efficiency of the work within the testing process: Do testers have the appropriate skills to use the testing tools? Does management support early involvement of testers in the development project? Are the testing processes stable, and do they produce consistent results? Another is the efficiency of the testing process itself.

Measuring Effectiveness

Measuring effectiveness of software testing normally focuses on the results achieved from testing. The generally acceptable criteria used to define/measure the effectiveness of testing are as follows:

  • Customer satisfaction. How satisfied the software customers are with the results of software testing.

  • Success criteria. Customers predefine quantitative success criteria. (If testers meet those criteria, it will be considered a successful project.)

  • Measurable test objectives. Objectives stated in a manner that can be measured objectively so that it can be determined specifically whether the objectives are achieved.

  • Service-level agreement. The contract between the customer and the testers as to the expected results of the testing efforts, how it will be measured, responsibilities in completing the testing, and other criteria considered important to the success of the test effort.

  • Inconsistency between the test and customer culture. If customer and tester cultures are of different types, this can impede the success of the project. For example, if the customer believes in using and complying with a process, but testers do not, conflicts can occur. Step 5 addresses these culture differences.

Measuring Efficiency

Generally acceptable criteria used to define/measure software testing efficiency are as follows:

  • Staff competency. Staff members are trained in the skills necessary to perform the tasks within the software testing process. (For example, if staff members use a specific automated test tool, they should be trained in the use of that tool.)

  • Maturity of software testing process. Usually a commonly accepted maturity level, such as the levels in SEI’s CMMI.

  • Toolbox. The software testing effort contains the tools needed to do the testing efficiently.

  • Management support. Management supports the use of testing processes and tools, meaning that management requires compliance with the process and with the use of specified tools, and rewards staff based on whether processes are followed and tools are used efficiently.

  • Meets schedule. The software testing is completed in accordance with the defined schedule for software testing.

  • Meets budget. The software testing is completed within budget.

  • Software test defects. The number of defects that software testers make when testing software.

  • Software testing rework. The percent of the total test effort expended in rework because of defects made by solution testers.

  • Defect-removal efficiency. The percent of developer defects that were identified in a specific test phase as compared to the total number of defects in that phase. For example, if the developers made 100 requirement defects during the requirement phase, and the testers found 60 of those defects, the defect-removal efficiency for the requirements phase would be .6, or 60 percent.

  • Percent of deliverables inspected. The extent to which defect detection is moved from validation to verification, removing defects at a point where they are cheaper to remove.

  • Software testing tool rework. The amount of resources consumed in using testing tools because of defects in those tools (or defects in how those tools were used).

Building Effectiveness and Efficiency Metrics

The agile implementation team should select the criteria used to measure the effectiveness and efficiency of the testing process. They should select what they believe are reasonable measurement criteria for both efficiency and effectiveness. These can be taken from the criteria examples described in this step or from criteria agreed on by the agile implementation team. They should document the criteria selected.

The measurement criteria should be recorded on Work Paper 24-6 and should include the following:

  • Criteria name

  • Description

  • Efficiency

  • Effectiveness

  • Rank

Work Paper 24-6. Criteria Recommended to Measure Software Testing Effectiveness and Efficiency

  

Measuring

 

Criteria

Description

Efficiency

Effectiveness

Rank

     
     
     
     
     
     
     
     
     
     
     
     
     
     
     

After the agile implementation team has agreed on the criteria used to measure test efficiency and effectiveness, they need to rank those criteria. The ranking should be a two-part process:

  1. Rank the criteria high, medium, or low. (Those ranked high are the best criteria for measurement.)

  2. Starting with the high criteria, select no more than five criteria for efficiency and five for effectiveness (a selection sketch follows this list). Note: These do not have to be only those ranked high, but those ranked high should be considered before those ranked medium.
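As a rough illustration of this two-part ranking, the sketch below (in Python; the criteria names and rank assignments are invented for the example) selects up to five criteria, taking those ranked high before those ranked medium or low.

    # Illustrative criteria and rankings; the names and ranks are assumptions
    # made for this sketch, not values prescribed by the chapter.
    ranked_criteria = {
        "Meets schedule": "high",
        "Meets budget": "high",
        "Software test defects": "high",
        "Staff competency": "medium",
        "Software testing rework": "medium",
        "Toolbox": "low",
    }

    def select_criteria(ranked: dict, limit: int = 5) -> list:
        """Pick up to `limit` criteria, taking high-ranked ones before medium,
        and medium before low, as the two-part ranking process describes."""
        order = {"high": 0, "medium": 1, "low": 2}
        return sorted(ranked, key=lambda name: order[ranked[name]])[:limit]

    print(select_criteria(ranked_criteria))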

Both the efficiency and effectiveness metrics are developed the same way, and three to five criteria are recommended for each. The criteria must then be made measurable, using the following method:

  1. A method for calculating a score for each criterion must be determined. It is recommended that the score for an individual criterion fall within a range of 0 to 100. This is not required, but it simplifies the measurement process because most individuals are used to scoring a variable on a 0-to-100 scale.

  2. The criteria used for efficiency or effectiveness must then be weighted, with the weights totaling 100 percent. For example, if there are five criteria and they are all weighted equally, each is given a 20 percent weight, or 20 percent of the total effectiveness or efficiency score.

  3. To calculate a specific efficiency or effectiveness score for a project, each criterion score is multiplied by its weighting percentage, and the weighted scores are added to produce the total efficiency or effectiveness score (see the sketch following this list).
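The following Python sketch illustrates the weighting arithmetic described above. The criterion names, scores, and weights are hypothetical; the point is only that each criterion score is multiplied by its weight and the results are summed.

    # Hypothetical criterion scores (0 to 100) and weights (fractions of 1.0)
    # for a single testing project; both the names and the numbers are
    # illustrative assumptions, not figures taken from the chapter.
    efficiency_criteria = {
        "Meets schedule":        (80, 0.40),   # (criterion score, weight)
        "Meets budget":          (75, 0.30),
        "Software test defects": (70, 0.30),
    }

    def total_score(criteria: dict) -> float:
        """Multiply each criterion score by its weight and sum the results."""
        total_weight = sum(weight for _, weight in criteria.values())
        assert abs(total_weight - 1.0) < 1e-9, "weights must total 100 percent"
        return sum(score * weight for score, weight in criteria.values())

    print(total_score(efficiency_criteria))  # 75.5, the project's efficiency score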

Work Paper 24-7 is designed to record the criteria score and weighting. You can also use this work paper to calculate a total effectiveness and efficiency score for the selected software testing projects.

Work Paper 24-7. Measuring Software Testing Effectiveness/Efficiency

Software Project Name: ____________

Efficiency section (Columns: Efficiency Criteria | Method to Calculate Criteria Score | Weight | Efficiency Score). The weights must total 100 percent, and the final row records the Total Efficiency Score.

Effectiveness section (Columns: Effectiveness Criteria | Method to Calculate Criteria Score | Weight | Effectiveness Score). The weights must total 100 percent, and the final row records the Total Effectiveness Score.

Figure 24-10 shows an example of an efficiency and effectiveness metric. For both efficiency and effectiveness, three criteria are defined, and for each criterion a method to calculate the criterion score has been determined. Each criterion score is then multiplied by its weighting percentage; Figure 24-10 shows an example project with an efficiency score of 76 percent and an effectiveness score of 80 percent.


Figure 24-10. Examples of an effectiveness and efficiency metric.

After Work Paper 24-7 has been developed, the efficiency and effectiveness scores for a reasonable number of software testing projects should be calculated. These can be historical projects or projects that will be completed in the near future. Those scores are then posted to Work Paper 24-8, with each project posted at the intersection of its efficiency score and effectiveness score. Figure 24-10 shows an example of this posting. (Note: The circled criteria scores are the ones assigned for this “Project Example.”)


Work Paper 24-8. Recording Efficiency and Effectiveness Scores
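As a rough illustration of this posting, the following Python sketch tabulates a few hypothetical projects by their two scores and flags those that clear an assumed capability barrier on both axes; the project names, scores, and barrier value are illustrative only.

    # Illustrative (efficiency, effectiveness) scores for completed projects;
    # the project names, scores, and the 70-point "barrier" are assumptions.
    projects = {
        "Payroll":   (76, 80),
        "Billing":   (55, 62),
        "Inventory": (88, 71),
    }

    def post_to_chart(projects: dict, barrier: int = 70) -> None:
        """List each project at the intersection of its two scores and flag
        those that clear the assumed capability barrier on both axes."""
        for name, (efficiency, effectiveness) in sorted(projects.items()):
            note = ""
            if efficiency >= barrier and effectiveness >= barrier:
                note = "  <- candidate source of best practices"
            print(f"{name:>10}: efficiency {efficiency:3d}, "
                  f"effectiveness {effectiveness:3d}{note}")

    post_to_chart(projects)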

Identifying Best Practices from Best Projects

The projects that score best in efficiency and the projects that score best in effectiveness should be selected for analysis. The agile implementation team should analyze those projects to determine the practices that cause some to be most efficient and some to be most effective. If the agile implementation team is knowledgeable in the software testing methodology, and has in-depth discussions with the test teams whose projects are either efficient or effective, the best practices can usually be identified.

The best practices identified should be posted to Work Paper 24-9. The information posted should include the following:

  • Best practice. The name of the best practice, which may be a name assigned by the project team or the name of a commonly used best practice.

  • Description. The description should clearly state the objective of the best practice.

  • Project used in. The name of the software testing project that used this practice so that the project team can be consulted if more information is needed.

  • Application efficiency/effectiveness. Indicate whether this is a best practice that improves efficiency or effectiveness. Check the appropriate column.

Work Paper 24-9. Potential Best Practices for Compressing Software Testing Completion Time

(Columns: Best Practice | Description | Project Used In | Application Efficiency | Application Effectiveness. Blank rows are provided for the team’s entries.)

By using these practices, the software testing process can become more agile.

Improvement Shopping List

The end objective of performing Steps 1 through 3 of the seven-step process is to identify the specific components of a software testing process where the timeline can be reduced.

Work Paper 24-9 has identified the potential best practices for compressing software testing time. The practicality of using these best practices should be investigated. Those that the agile implementation team believes would enhance the agility of the software testing process should be identified as software testing agile best practices and posted to Work Paper 24-10.

Work Paper 24-10. Best Practices Shopping List

(Columns: # | Best Practices for Time Improvement | Reference # | Priority: High or Low. Rows numbered 1 through 15 are provided for the team’s entries.)

To complete Work Paper 24-10, the agile implementation team needs to provide the following information:

  • Best practices for time improvement. From the analysis performed on the best practices, the team can identify those best practices that, if used in new testing projects, should result in improved agility in testing.

  • Reference #. This column is designed to let the agile implementation team reference other documents that contain more information about the idea suggested for improvement.

  • Priority. The agile implementation team should give their first impression as to whether this is a good opportunity for timeline compression or an idea that might provide only minimal improvement.

Quality Control Checklist

Step 2 is an important aspect of compressing software testing time. Misunderstanding the process or making an error in performing Step 2 can lead to missed opportunities to compress software testing time. A quality control checklist is provided for the agile implementation team to use to minimize the probability of performing this step incorrectly.

Work Paper 24-11 is a quality control checklist for Step 2. The investigation should focus on determining whether a specific aspect of the step was performed correctly or incorrectly.

Work Paper 24-11. Quality Control Checklist for Step 2

(Record a consensus YES or NO response and any comments for each item.)

  1. Have the roles and responsibilities of the testers been identified?

  2. Does an appropriate relationship exist between the customer/user, project development team, testers, and IT management to ensure that the project is tested correctly?

  3. Are the quality factors understood by the agile implementation team?

  4. Are the quality factors applicable to the projects being tested in your IT organization?

  5. Are the quality factors complete for assessing the quality of the projects in your IT organization, or are additional factors needed?

  6. Is the concept of trade-offs understood by the agile implementation team?

  7. In each software development project, is someone responsible for making trade-offs? (It may be more than one group, depending on the type of trade-off.)

  8. Does the time-compression team understand the type of trade-offs that exist in all software testing projects?

  9. Does the software testing team understand the impact of not making the trade-offs during software testing?

  10. Does the agile implementation team understand the criteria that can be used to evaluate effectiveness and efficiency of a software testing project?

  11. Does the agile implementation team understand the software testing capability barrier chart?

  12. Does the agile implementation team understand why a capability barrier exists, and why it is difficult to break through that barrier?

  13. Has the agile implementation team developed an inventory of criteria they believe will be applicable for measuring testing efficiency and effectiveness?

  14. Has the agile implementation team selected 3–5 criteria to evaluate projects for efficiency?

  15. Has the agile implementation team selected 3–5 criteria to evaluate projects for effectiveness?

  16. Has the agile implementation team determined how they will create a score for each criterion?

  17. Has the agile implementation team weighted the criteria for both effectiveness and efficiency?

  18. Has the agile implementation team developed efficiency and effectiveness scores for a reasonable number of projects?

  19. Are the projects selected by the agile implementation team representative of the type of testing projects undertaken by the IT organization?

  20. Has the agile implementation team posted the scored projects to the capability barrier chart?

  21. Using the capability barrier chart, has the agile implementation team identified some best practices for both efficiency and effectiveness?

  22. Has the agile implementation team identified which of those best practices they believe has the greatest probability for time compression?

  23. Have the selected best practices been recorded on the improvement shopping list work paper?

Team members should first review and answer the questions individually. Then the agile implementation team should review the questions as a team. A consensus Yes or No response should be determined. “No” responses must be explained and investigated. If the investigation indicates that a particular aspect of the step was performed incorrectly, that aspect should be repeated. (Note: Some teams prefer to review the quality control checklist before they begin the step to give them a fuller understanding of the intention of this step.)

Conclusion

The objective of this step is to identify those best practices that, when implemented in new projects, will “compress” software testing time. The method to identify these best practices is to develop a metric that scores the effectiveness and efficiency of completed software testing projects. Those projects that scored high in efficiency are candidates to contain practices that will improve efficiency in future testing efforts. Those projects that scored high in effectiveness are candidates to contain best practices that will improve the effectiveness of future testing projects. By focusing on improving effectiveness and efficiency of the software testing effort, the total completion time should compress. The product of this step is a list of best practices identified as ideas or candidates to use to compress software testing time. The next step focuses on assessing the strengths and weaknesses of the existing software testing process.

Step 3: Build on Strength, Minimize Weakness

After determining the software testing process timeline and determining the testing process best practices, it is time to take an overall look at the software testing process. This will be done by conducting a macro self-assessment of the process. The objective is to identify the major strengths and weaknesses of the process. All projects should capitalize on the strengths and minimize the weaknesses. Doing so can help compress software testing delivery time.

Effective Testing Processes

Software development processes were developed years before much effort was expended on building software testing processes. Even today, many organizations have loosely defined testing processes. In addition, many computer science curricula do not include software testing as a topic. This step defines four criteria for an effective software testing process, and provides you with an assessment to evaluate your software testing process against those criteria.

It is common today to relate effectiveness of a process to the maturity of the process, maturity meaning primarily that variability is removed from the process. This means that each time the process is performed, it produces similar results. For example, if an IT organization desires a defect-removal efficiency of 95 percent in the requirements phase, a process that meets that objective time after time is considered a mature or effective process.

The characteristics of a software testing process make some processes more effective than others. In simple terms, the answers to the following two questions will help you determine the effectiveness of a process:

  1. Does the software testing process do the right thing?

  2. Does the software testing process do the thing right? In other words, is the software testing process efficient?

The five primary components of a software testing process are as follows:

  1. Create an environment conducive to software testing.

  2. Plan the testing.

  3. Execute the task in accordance with the plan.

  4. Analyze the results and report them to software stakeholders.

  5. Analyze the test results and improve the testing process based on that analysis.

Efficient test processes perform those five components with minimal resources. They do the work “right the first time.” This means that there are minimal defects and minimal rework performed by the software testers.

When software testing is treated as an art form, and depends on the experience and judgment of the testers, it may or may not be effective and efficient. If it is effective and efficient, it is primarily because of the competency of the individual testers. However, when a test process is people dependent, it is not necessarily repeatable or transferable between projects.

Assessing the Process

An inefficient software testing process tends to expand the software testing delivery time. Fine-tuning the process so that it is efficient will compress the testing time. The purpose of an assessment is to identify the strengths and weaknesses of your specific testing process. (Note again that as you focus on efficiency, effectiveness is a by-product of that effort.)

Work Paper 24-12 provides the assessment items. They include the primary components of the software tester’s workbench (the process to do work and the process to check work), but the assessment emphasizes the aspects that make those processes effective: whether testers will do and check work in accordance with the process, and whether the Do and Check procedures themselves are effective. The three components that make processes work are management commitment to the software testing processes, the environment established by management in which software testers perform their work, and management’s support and allocation of resources to continuously improve the software testing process.

Work Paper 24-12. Software Testing Process Self-Assessment

Criteria 1: Management Commitment to Software Testing (record a YES or NO response and any comments for each item, then total the number of Yes responses)

  1. Does management devote as much personal attention and involvement to software testing as it does to software development?

  2. Does management understand the challenges and impediments it will face in moving its IT organization to a quality software testing culture?

  3. Does IT management demonstrate its belief in the software testing process by allocating adequate resources to ensure the testing process is used effectively?

  4. Does management support processes such as management checkpoints, software reviews, inspections, checklists, and other methods that support implementing software testing principles and concepts in day-to-day work?

  5. Does management, on a regular basis, make decisions that reinforce and reward software testing initiatives, such as ensuring that quality will not be compromised for schedule and budget constraints? (Note: This does not mean that requirements and standards will not be negotiated; it means there will be agreement on quality if it conflicts with schedule or budget.)

  Number of Yes Responses: ____

Criteria 2: Software Testing Environment (record a YES or NO response and any comments for each item, then total the number of Yes responses)

  1. Does the IT organization have a software testing policy that clearly defines the responsibilities and objectives of the software testing function?

  2. Are the software testers organizationally independent from the software developers, except for unit testing?

  3. Does the IT organization allot as many resources for the acquisition and development of software testing processes and tools as it does for software development processes and tools?

  4. Does the IT organization have a detailed plan to promote and improve software testing throughout the IT organization?

  5. Does the IT organization have an educational plan for all staff members in software testing principles, concepts, and other methods; and is that plan operational?

  Number of Yes Responses: ____

Criteria 3: Process to Do Work (record a YES or NO response and any comments for each item, then total the number of Yes responses)

  1. Are there formal work processes outlining the detailed step-by-step procedures to perform all software testing projects within the IT organization?

  2. If so, do those work processes consist of a policy, standards, and procedures to both do and check work?

  3. Does management both enforce compliance to work processes and reward compliance to work processes?

  4. Are the work processes developed and/or approved by those who will use the work processes in their day-to-day work?

  5. Are IT staff members hired to use specific work processes, and then trained sufficiently so that they can perform those work processes to a high level of competence?

  Number of Yes Responses: ____

Criteria 4: Processes to Check Work (record a YES or NO response and any comments for each item, then total the number of Yes responses)

  1. Are check procedures developed in a formal manner for each work process?

  2. Is the combination of the work and check procedures integrated so that they are included in the project budget, and executed in a manner so that both become part of the day-to-day work of the IT staff?

  3. Are the check procedures developed commensurate with the degree of risk associated with not performing the “do work procedures” correctly?

  4. Are the results of the check procedures provided to the appropriate decision-makers so they can make any needed changes to the software in order to ensure they will meet the customer’s needs?

  5. Are the workers adequately trained in the performance of the check procedures so that they can perform them in a highly competent manner?

  Number of Yes Responses: ____

Criteria 5: Continuous Improvement to the Software Testing Process (record a YES or NO response and any comments for each item, then total the number of Yes responses)

  1. Is information regarding defects associated with the software testing products and processes regularly gathered, recorded, and summarized?

  2. Is an individual or an organizational unit such as quality assurance charged with the responsibility of maintaining defect information and initiating quality improvement efforts?

  3. Does the IT budget include the money and staff necessary to perform continuous quality improvement?

  4. Is there a process in place that establishes a baseline for the current process, and then measures the variance from that baseline once the processes are improved?

  5. Are resources and programs in place to adequately train workers to effectively use the new and improved work processes?

  Number of Yes Responses: ____

The time-compression team should use this questionnaire to self-assess their software testing process, as follows:

  1. Read each item individually aloud as a team.

  2. Have the team discuss the item to ensure a consensus of understanding.

  3. Reach consensus on a Yes or No response to the item. If consensus cannot be reached, then a No response should be recorded.

Software testers face many unique challenges. The root cause of these challenges is often the organizational structure and management’s attitude toward testing and testers. In many organizations, testers have lower job classifications and pay than developers. Many IT managers consider testing an annoyance, something to be completed in whatever time is available between the end of development and the start of operational status for the software.

Developing and Interpreting the Testing Footprint

At the conclusion of assessing the five criteria in Work Paper 24-12, total the number of Yes responses for each criterion. Then post the number of Yes responses to Work Paper 24-13. This work paper is a software testing process assessment footprint chart. To complete the chart, put a dot on each criterion line at the circle that represents the number of Yes responses for that category. For example, if in category 1 (i.e., management commitment to quality software testing) you have two Yes responses, put a dot on that line at the number 2 circle. After posting the number of Yes responses for all five categories, draw a line between the five dots. Connecting the five dots results in a “footprint” that illustrates the assessment results of your software testing process. Explain your No responses in the Comments column.


Work Paper 24-13. Software Testing Process Assessment Footprint Chart
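The footprint itself is drawn on the chart, but the underlying data is just five Yes counts. The following Python sketch (the criterion labels follow Work Paper 24-12; the counts are invented for illustration) renders a crude text version that makes relative strengths, weaknesses, and variability easy to see.

    # Assumed Yes-response counts (0 to 5) for the five assessment criteria in
    # Work Paper 24-12; the counts here are purely illustrative.
    yes_counts = {
        "Management commitment":  2,
        "Testing environment":    3,
        "Process to do work":     4,
        "Process to check work":  1,
        "Continuous improvement": 2,
    }

    def print_footprint(counts: dict) -> None:
        """Render a crude text footprint: one bar per criterion whose length
        equals the number of Yes responses (out of a maximum of five)."""
        for criterion, yes in counts.items():
            print(f"{criterion:<24} {'#' * yes:<5} ({yes}/5 Yes)")

    print_footprint(yes_counts)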

Three footprint interpretations can be easily made, as follows:

  • Software testing process weaknesses. The criterion or criteria that score low in the number of Yes responses are areas of weakness in your software testing process. The items checked No indicate how you could strengthen a particular category.

  • Software testing process strengths. Those criteria that have a high number of Yes responses indicate the strengths of your software testing process. Where your software testing process is strong, you want to ensure that all of your software testing projects benefit from those strengths.

  • Minimal variability of strengths and weaknesses. Building an effective process involves moving the footprint envelope outward equally in all five criteria toward the five-Yes-response level. The management commitment criterion tends to push the other four criteria in the right direction. However, there should not be significant variability between the five criteria; ideally, they all move outward toward the five-Yes-response level one level at a time.

Examples of the type of analysis that you might make looking at this footprint are as follows:

  1. If you scored a 5 for management commitment to the process, but the other categories are low in Yes responses, this indicates that the organization is not “walking the walk” in its use of an effective software testing process.

  2. When there is a wide discrepancy between the numbers of Yes responses in different categories, it indicates a waste of resources. For example, investing heavily in the Do procedures for the process, but not building an effective environment, does not encourage consistent use of those procedures.

It is generally desirable to have the time-compression team evaluate their software testing footprint. They need to look at the footprint, draw some conclusions, discuss those conclusions, and then attempt to reach a consensus about them.

Poor Testing Processes

If a worker is given a poor process, that process reduces the probability of the individual being successful at performing the work task. Rather than the process supporting success, the worker must modify, circumvent, or create new steps to accomplish the work task. In other words, the worker is diverted from doing an effective software testing job by the time-consuming activity of surviving the use of a poor work process.

The objective of this self-assessment is to emphasize that work processes are much more than just the steps the software tester follows in performing the work task. For a work process to be effective, management must be committed to making that work process successful; management must provide the resources and training necessary to ensure worker competency and motivation in following the process; and management must provide the resources and skill sets needed to keep the software testing process current with the organization’s business and technical needs.

If that work process is enhanced using the principles of agile work processes, the workers will have the competency, motivation, empowerment, and flexibility needed to meet today’s software testing needs.

Improvement Shopping List

At the conclusion of the self-assessment process, the time-compression team should identify ideas for improving the delivery timeline of your software testing process. Consider every category item assessed with a No response as a potential improvement idea. Record these ideas on Work Paper 24-14.

Work Paper 24-14. Delivery Timeline Process Improvement Shopping List

(Columns: # | Ideas for Delivery Timeline Improvement | Reference # | Priority: High or Low. Rows numbered 1 through 15 are provided for the team’s entries.)

Quality Control Checklist

Work Paper 24-15 is a quality control checklist for Step 3. The quality control investigation should focus on determining whether a specific aspect of the step was performed correctly or incorrectly.

Work Paper 24-15. Quality Control Checklist for Step 3

(Record a consensus YES or NO response and any comments for each item.)

  1. Is the software testing process self-assessment being performed by the agile implementation team?

  2. Does the agile implementation team know the software testing process?

  3. Does the agile implementation team know management’s attitude about the use of the software testing process (e.g., rewarding for use of the process)?

  4. Does the agile implementation team know the type of support a tester would get if they use the software testing process (e.g., type of training, who can answer the questions, etc.)?

  5. Did the agile implementation team follow the self-assessment process as described in this chapter?

  6. Does the agile implementation team understand the meaning of Yes and No responses?

  7. For items in which the agile implementation team could not arrive at a consensus, was a No response given?

  8. Did the agile implementation team prepare the software testing process footprint and then discuss and draw conclusions about that footprint?

  9. Was each category item that had a No response evaluated as a potential improvement idea to compress the software testing delivery timeline?

The agile implementation team should review these questions as a team. A consensus Yes or No response should be determined. “No” responses should be explained and investigated. If the investigation indicates that the particular aspect of the step was incorrectly performed, it should be repeated. (Note: Some teams prefer to review the quality control checklist before they begin the step to give them a fuller understanding of its intention.)

Conclusion

This step has proposed a process to self-assess the software testing process to identify ideas to compress testing delivery time. The step provided the criteria associated with effective software testing processes. These effectiveness criteria were converted to a software testing self-assessment work paper. The result of conducting the self-assessment was to develop a software testing process footprint. This footprint, coupled with the results of the self-assessment, should enable the agile implementation team to develop many ideas for compressing software testing delivery time.

Step 4: Identify and Address Improvement Barriers

Many good ideas are never implemented because the barriers and obstacles associated with them cannot be overcome. If the implementation attempt is undertaken without fully understanding those barriers and obstacles, success is less likely. Therefore, if software testing time compression is to be effective, these barriers and obstacles must be understood and addressed in the implementation plan.

The software testing agile implementation team will face two types of barriers when attempting to compress testing time: people barriers and organizational/administrative barriers. Both can impede progress. This step identifies a process to identify both barriers, and provides guidance for the agile implementation team on how these barriers and obstacles might be addressed. According to the dictionary, a barrier and an obstacle are approximately the same thing. Both terms are used in this step to highlight the importance of identifying and addressing anything that reduces the probability of effectively reducing delivery time.

Organizations consist of people. People have different wants and desires, which surface when change is imminent. Not only do people normally resist change, they also have many techniques for stopping change or turning it in a different direction.

The Stakeholder Perspective

The activity of compressing software testing time involves changing the way people do work. Thus, these individuals have a “stake” in the change happening or not happening. Individuals react differently as they consider what their stake is in the time-compression effort.

An important component in the people barrier concept is the “WIIFM” concept. WIIFM stands for “What’s in it for me?” If an individual cannot identify WIIFM in a proposed change, he or she will either not help the change occur, or will openly resist the change. Individuals have four different reactions to change:

  • Make it happen. An individual having this stake wants to be a driving force in reducing software testing delivery time. This person is willing to assume a leadership role to make change happen.

  • Help it happen. This individual does not want to lead the change, but will actively support the change. That support is expressed in a willingness to help change happen. In other words, if there is an assignment to help compress software testing time, the individual will work on activities such as data gathering, building solutions, writing new software testing procedures, and so forth to help compress software testing time.

  • Let it happen. This individual neither supports nor objects to the change. They see nothing in the change for themselves; whether the change occurs or not is unimportant to them, and they will neither support nor resist it.

  • Stop it from happening. This individual does not believe that the change is worthwhile, or the individual just objects to the change. This individual may openly object to the change and take whatever action possible to stop the change from happening. Or even worse, some of these individuals outwardly support the change but quietly work hard to stop the change from happening by raising barriers and obstacles.

The agile implementation team must identify the key individuals in the activities for compressing software testing time and determine what stake they have in the change. These individuals usually include the following:

  • IT management

  • Testers

  • User management

  • Quality assurance/quality project leaders

  • Control personnel

  • Software designers

  • Administrative support

  • Programmers

  • Trainers

  • Test managers

  • Auditors

It may be desirable to identify these individuals by name. For example, list the individual names of IT management. This should be done whenever the individual is considered personally involved in the change or can influence a change. In other instances, the stakeholders can be grouped (for example, testers). If the agile implementation team determines that the testers all have approximately the same stake in the change, they do not need to name them individually.

In viewing which stake an individual has, the agile implementation team must carefully evaluate the individual’s motives. For example, some individuals may outwardly support compressing software testing time because it is politically correct to do so. However, behind the scenes, they may believe that the wrong group is doing it or that it is being done at the wrong time; they openly support it but are, in fact, in the “Stop It From Happening” stakeholder quadrant.

Stakeholder Involvement

After reading the preceding section, you might have this logical question: “Why are so many stakeholders involved in software testing?” The answer is that the goal of software testing is to identify defects and deficiencies in the software. Because someone is potentially responsible and accountable for those defects, they are concerned about how software testing is performed, what software testing does, and how and to whom the defects will be reported.

Let’s look at just a few examples. Developers and programmers are most likely the individuals responsible for those defects, so they may view testers as affecting their performance appraisals. Project leaders may consider the testers as “that group” that is delaying the implementation of the software. IT management may consider testers as the group that stops them from meeting their budgets and schedules. While believing that it is important to identify defects and deficiencies before software goes into operation, they may also believe it is more important to get the software into operation than to complete a test plan.

The bottom line is that as testing becomes more effective and efficient, resistance to testing may increase, because testers are identifying more defects. What this means is that stakeholders’ involvement in change is an important component in making the software testing process more effective and efficient. Testers should not believe that, just because they are testing more effectively and efficiently, their changes will be welcomed by those having a stake in the testing.

Performing Stakeholder Analysis

The agile implementation team needs to do the following to analyze the various stakes held by those who have a vested interest in the change. Work Paper 24-16 is designed to document this analysis.

Work Paper 24-16. Stakeholder Analysis

(Columns: Stakeholder (Name or Function) | Current Stake | Reason(s) | Desired Stake | How to Address. Blank rows are provided for the team’s entries.)
  1. Identify the stakeholder. This can be the name or the function (for example, the name Pete Programmer or the users of the software).

  2. Identify the current stake. For that individual or function, determine which of the four stakeholder quadrants that individual is in (for example, the “Make It Happen” stake).

  3. Document the reasons the individual has that stake. The agile implementation team must carefully analyze and document why they believe an individual has that stake. For example, testers may be in the “Stop It From Happening” stake because they believe that by compressing time, the organization will need fewer testers.

  4. Identify the desired stake for the individual or function. If the individual/function does not have what the agile implementation team believes is the correct stake to make change happen, the team must decide which stake is the most desirable for making the change occur. For example, if the software designers are in the “Stop It From Happening” stake, the team may want to move them to the “Let It Happen” stake. In addition, if there are two or more individuals in the “Make It Happen” stake, to avoid leadership conflict, one of the leaders should be moved to the “Help It Happen” stake.

  5. Develop a plan to address stakeholder barriers. If an individual or function is not in a desired stake, a plan must be developed to move that individual/function to the desired stake. For example, moving the software designers from the “Stop It From Happening” stake to the “Let It Happen” stake might occur if the agile implementation team can assure them that the amount of time allocated for software designing will not be reduced.

Red-Flag/Hot-Button Barriers

This people barrier is highly individual. A red flag or hot button is something that causes a negative reaction on the part of a single individual. It is normally associated with someone in a management position, but it can be associated with anyone who can exercise influence over the successful completion of a compression project. An individual, for a variety of reasons, may be strongly against a specific proposal. For example, if an improvement effort is framed as a way to “reduce defects,” the word defect may raise a red flag with a specific individual. They do not like the word; they prefer words such as problem or incident. Thus, by changing a word, a red flag can be avoided.

Other examples of red flags/hot buttons include the following:

  • Changing a procedure that was developed by an individual who likes that procedure and wants to keep it in place.

  • People who do not like anything that was not “invented here.”

  • The individual or group proposing the change is not liked, and someone does not want them to get credit for anything.

  • The idea has been tried before. Some people believe that because something was tried before, a variation or additional attempt to do the same thing will result in failure.

  • Not the way I like it done. Some individuals want extreme formal approval processes to occur before any change happens. This means rather than taking a “fast track” process to compressing software testing time, they want it carefully analyzed and reviewed by multiple layers of management before anything happens.

Document these red flag/hot button barriers on Work Paper 24-17.

Work Paper 24-17. Barriers/Obstacles

(Columns: Barrier/Obstacle | Source | Root Cause | How to Address. Blank rows are provided for the team’s entries.)
  1. Barrier/obstacle/red flag. Name the barrier/obstacle/red flag in enough detail so it is understandable.

  2. Source. The name of the individual or condition creating the barrier/obstacle/red flag.

  3. Root cause. The reason the barrier/obstacle/red flag exists.

  4. How to address. The team’s initial idea to overcome the barrier/obstacle/red flag.

Staff-Competency Barriers

Changes to the software testing process that can compress software testing time may be desirable, but the necessary skills must be available if the implementation of the change is to be successful. In some instances, this is a judgment call; for example, a manual work process needs to be changed, and the agile implementation team believes that no one in the IT organization has the skills needed for that specific process. In other instances, missing skills are obvious; for example, a new tool is recommended, but no one in the organization knows how to use that tool.

Document these competency barriers on Work Paper 24-17.

Administrative/Organizational Barriers

In many instances, the administrative procedures are organizational barriers that inhibit change. Many believe that these administrative and organizational barriers are designed to slow change. Some people believe that rapid change is not good, and by imposing barriers they will cause more analysis before a change is made. Also, delaying implementation will enable all individuals to make known any personal concerns about the change.

A partial list of the administrative and organizational barriers that can inhibit compressing software testing time follows:

  • Funding. Funds are not available to pay for making the change happen. For example, overtime may be needed, new tools may need to be acquired, specialized training contracted for, and so forth. Some of these money constraints are real, and others represent individual priorities. For example, if you invite someone to go to dinner with you, but they decline because they cannot afford it, it may mean they cannot afford it, but it also may mean that they have higher priorities for using their funds.

  • Staff. The individuals needed to develop and implement a change may have no time available. Staff resources may be committed to current projects, and therefore a request that new projects be undertaken may be declined because no staff is available to implement them.

  • Improvement approvals. Approval by one or more individuals may be required before work can be performed. Although the approval process is designed to stop unwanted consumption of resources, it can also inhibit activities that are not desired by the individual authorized to approve those activities. The approval process also gives an individual the opportunity to “sit” on this request to delay the activity until involved individuals become discouraged. If the approval process involves two or more people, delays are built in to the process, which can also discourage individuals from taking action.

  • Paperwork. Sometimes individuals who want to undertake a work activity might be required to complete a proposal form to have the work done. The organization may have a procedure that indicates how to complete a proposal request. Some of these can include developing budgets and schedules, naming involved parties and affected parties, listing the steps to be performed, identifying the controls built in to the process, and so forth. The paperwork itself might discourage an individual from undertaking a voluntary activity; but when the paperwork is coupled with the approval process, which can delay approval because the paperwork is not properly completed, good projects can be stopped from occurring.

  • Organizational turf. Questions may arise as to who has the authority to undertake a project. Third parties sometimes become involved in a project because they believe that their “organizational area” is the one that should undertake the project. For example, if a software testing team decides to improve a testing process, but the quality assurance or process engineering group believes that improving processes is their “turf,” that group might object to the agile implementation team doing it and thus delay or kill the project.

  • Value received. Sometimes individuals must demonstrate the value they expect to receive from undertaking the work activity. This may involve accounting procedures with detailed return on investment calculations. Sometimes the return on investment calculation must be submitted to the accounting department for review. In many instances, it is easier to eliminate seven workdays from the software testing time than it is to demonstrate quantitatively the value that will be received from undertaking the time-compression project.

  • Priorities. Individuals and groups have priorities that they believe dictate the way in which work should be performed. If a time-compression project does not fit into that prioritization scheme, they will want to delay the time-compression project to deal with their own priorities. Obviously, work priorities tend to take precedence over process-improvement priorities. This means that there is always enough time to correct bad work but never enough time to eliminate the problem that causes the bad work.

The organizational and administrative barriers should also be documented on Work Paper 24-17. At this point, it is unimportant whether a specific barrier/obstacle will stop a time-compression project. It is important to list as many barriers and obstacles as the agile implementation team feels might affect any time-compression project. The relationship between a specific barrier/obstacle and a specific time-compression project will be addressed when a plan is developed to implement a specific time-compression idea.

Determining the Root Cause of Barriers/Obstacles

The barrier/obstacle cannot be adequately addressed until the root cause is determined. Likewise, the barrier/obstacle will continue to exist until the root cause of the barrier/obstacle has been addressed. The process of determining the root cause is an analytical process.

Consider an example. Assume the agile implementation team believes that a new software testing estimating tool will create a better estimate for software testing, in that it will allocate the appropriate time needed for defining software testing objectives. The agile implementation team may believe that if the testing objectives are better defined, they will “compress” the remainder of the software testing time. When the idea is presented for approval, the involved manager indicates that no funds are available to acquire the estimation tool. One might assume that the lack of funds is the root cause. However, when an analysis is done, it becomes obvious that the approving manager does not want that specific estimating tool, but prefers the current estimation method. If the agile implementation team focuses on obtaining funding, the tool will never be acquired. If team members identify the root cause as the manager’s resistance to the tool, their efforts will focus on convincing the manager that the tool will be beneficial. If they address that root cause, the funds might become available.

Work Paper 24-18 is the recommended analysis method the agile implementation team should undertake to identify the root cause for a specific barrier or obstacle. It is also called the “Why-Why” analysis. One of these work papers should be used for each barrier/obstacle listed on Work Paper 24-17 that the team believes should be analyzed to identify the root cause.


Work Paper 24-18. Barrier/Obstacle (“Why-Why”) Analysis

The agile implementation team then begins the “Why-Why” analysis. Let’s revisit the previous example. The barrier/obstacle is the inability to obtain management approval because of a lack of funding. The agile implementation team begins by asking the question “Why?”: in this example, “Why can’t we get management funding for the estimation tool?” The first answer is that no funds are available. The team then asks, “Why are no funds available?” and arrives at the conclusion that no funds are available because the approving manager does not like the estimation tool. If the agile implementation team believes that is the root cause of the barrier/obstacle, they post that root cause to Work Paper 24-17.

This analysis may be simple to conclude, or it may be complex. A complex analysis may involve many primary and secondary contributors. The “Why-Why” analysis should continue until the agile implementation team believes they have found the root cause. If the barrier identified affects a specific compression-improvement plan, the plan should include the approach recommended to overcome the barrier. If that plan does not work, the “Why-Why” work paper should be revisited to look for another potential root cause.

The success of using this analysis depends on the agile implementation team having an in-depth understanding of how the organization works. They also need to know the traits and characteristics of the individual in authority to overcome the barrier. The analysis will not always lead to the root cause, but in most instances it does.
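To make the estimating-tool example concrete, the following Python sketch represents the “Why-Why” chain as question-and-answer pairs and treats the final answer as the candidate root cause. Representing the analysis as data in this way is our own device for illustration, not part of the work paper.

    # A minimal "Why-Why" chain for the estimating-tool example above.
    why_why_chain = [
        ("Why can't we get management funding for the estimation tool?",
         "No funds are available."),
        ("Why are no funds available?",
         "The approving manager prefers the current estimation method."),
    ]

    def root_cause(chain: list) -> str:
        """Treat the answer to the last 'Why?' as the candidate root cause."""
        return chain[-1][1]

    for question, answer in why_why_chain:
        print(f"Q: {question}\n   A: {answer}")
    print("Candidate root cause:", root_cause(why_why_chain))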

Addressing the Root Cause of Barriers/Obstacles

After the potential root cause of a specific barrier/obstacle has been identified, the agile implementation team needs to determine how they will address that specific root cause. The “how to address” component can be completed in this step, or the agile implementation team can wait until they are developing a specific time-compression plan and then determine how to address the root cause.

There are no easy answers to how to address the root cause. Again, the more the agile implementation team knows about the organization and the characteristics of the key individuals in the organization, the easier the solutions become. Some of the solutions as to how to address the root cause of the barrier/obstacle include the following:

  • Education. Sometimes the individual involved does not have the necessary knowledge to make a good decision for the IT organization. By providing that education to the individual, the individual is in a better position to make the proper decision. In our estimating tool example, if the approving manager had a better knowledge of how the estimation tool works and why it is more reliable in estimating than the current system, the individual might make that approval.

  • Training. Training provides the specific skills necessary to do a task. Resistance sometimes occurs because individuals feel inadequate in performing a task they will be required to do. For example, in the estimating tool example, they may not want to use the estimating tool because they are unsure they will be able to use it effectively. If they are provided the training on how to use it before the decision is made, that concern will be alleviated.

  • Champion. Sometimes a highly respected individual in the organization needs to be recruited to champion a specific improvement idea. The individual can be a member of management, someone from a customer/user area, or a highly respected person in the IT department. After that individual makes his/her position known as a champion for a specific idea, other people will normally accept that recommendation and support the idea.

  • Marketing/communicating. Marketing first must identify an individual’s need, and then provide the solution to satisfy that need. For example, if your approving manager for an estimating tool has a need to complete projects on time with high customer satisfaction, marketing the estimation tool to meet that need can help get approval for the idea. Marketing should not be viewed negatively, but rather should be viewed as a method of getting ideas accepted. Individuals unfamiliar with marketing techniques should read books, such as those by Zig Ziglar, that explain the various steps in a marketing program. Although these books are written for the marketing professional, they have proven beneficial for IT staff members in getting ideas accepted.

  • Rewards. Individuals tend to do what they are rewarded for. If IT management establishes a reward system for software testing time compression, it will greatly encourage individuals to support those efforts. The rewards can be financial, extra time off, or certain benefits such as special parking areas and so forth. Sometimes just a lunch recognizing success, paid for by the IT organization, will encourage people to work harder because they recognize the reward system as something management wants. (Note: It may be important to reward most or all of the stakeholders.)

  • Competitions. Organizing a competition to compress software testing time has been effective in some organizations. One organization set up a horse race track. The track was divided into ten parts. The parties were then asked to determine ways to compress software testing time. For each day they were able to compress a software testing effort, their horse moved one position. The first horse that completed the ten-day time-compression goal was rewarded. However, it was equally important that there be prizes for second place, third place, and so on.

The determination about how to address the root cause of a specific barrier/obstacle should be posted to Work Paper 24-17. Note that the previous examples are only a partial list of the solutions for addressing the root cause(s) of barriers/obstacles. The agile implementation team should select specific solutions to address those root causes.

Quality Control Checklist

Work Paper 24-19 is a quality control checklist for Step 4. The investigation should focus on determining whether a specific aspect of the step was performed correctly or incorrectly.

Work Paper 24-19. Quality Control Checklist for Step 4

(Record a consensus YES or NO response and any comments for each item.)

  1. Does the agile implementation team recognize the impact that a barrier/obstacle can have on implementing a time compression idea?

  2. Does the agile implementation team understand the various views a stakeholder can have on a proposed time compression idea?

  3. Has the agile implementation team identified all of the potential stakeholders in compressing the software testing delivery time?

  4. Has the agile implementation team determined which stakeholders have to be individually identified and which stakeholders can be identified by job position?

  5. Has the current stake for each stakeholder been identified?

  6. Has the agile implementation team defined what they believe is the reason the person holds that specific stake?

  7. Has the desired stake for each individual/job position been determined?

  8. Has the agile implementation team developed a solution on how to address moving an individual from a current stake to a desired stake?

  9. Have the barriers associated with staff competency been identified?

  10. Have the barriers associated with individuals’ red flags/hot buttons been identified?

  11. Does the agile implementation team understand that the individual looks at an idea from the viewpoint of “What’s In It For Me?”

  12. Have the administrative/organizational barriers been identified?

  13. Does the agile implementation team understand how to determine the root cause of each administrative/organizational barrier?

  14. Has a reasonable solution been developed for each root cause to address that root cause should it become necessary?

  15. Is the agile implementation team in agreement that the important people, administrative, and organizational barriers that can affect time compression projects have been identified?

The agile implementation team should review these questions as a team. A consensus Yes or No response should be determined. “No” responses should be explained and investigated. If the investigation indicates that the particular aspect of the step was incorrectly performed, it should be repeated. (Note: Some teams prefer to review the quality control checklist before they begin the step to give them a fuller understanding of the intention of this step.)

Conclusion

Everybody favors compressing software testing time. What they object to is the method proposed to compress it. They might also object to the allocation of resources for compressing software testing time, especially when the IT organization is behind in implementing business software projects. It is important to recognize that individuals may believe that their careers are partially dependent on completing a specific project; anything that might delay that effort will be viewed negatively.

Many organizations establish approval processes designed to delay the quick implementation of projects. These processes ensure that the appropriate safeguards are in place and that only those projects desired by management are, in fact, implemented. Those same safeguards also become barriers and obstacles to implementing ideas that can compress software testing time.

Individuals who want to compress software testing time must be aware of what these barriers and obstacles are. Some are people related, whereas others are administrative or organizational. For all of them, the team must look for the root cause and then develop a solution to address that root cause. This step has provided a process to do just that.

Step 5: Identify and Address Cultural and Communication Barriers

The “management culture” of an IT organization refers to the approach management uses to manage the IT organization. The Quality Assurance Institute has identified five different IT cultures. These range from a culture that emphasizes managing people by setting objectives to a culture that encourages and supports innovation. Each of the five cultures requires a different type of solution to be used to compress software testing time. This step describes the five cultures, and then helps the agile implementation team through a process to identify the barriers and constraints imposed by the IT management culture.

The culture affects the way people do work, and the way people work affects their lines of communications. Open and complete communication is a key component of an agile software testing process. Thus, opening communication lines is an important part of building an agile software testing process.

This step explains how organizational cultures go through an evolutionary process. Some believe that the hierarchical organizational structure is patterned after the methods Moses used to organize his people in the desert. Up until the 1950s, most cultures were hierarchical in structure. The newer cultures have flattened organizational structures and emphasize teams and empowerment. Obviously, those cultures that emphasize teams and empowerment are better suited to agile software testing processes.

Management Cultures

There are five common management cultures in IT organizations worldwide (see Figure 24-11), as follows:

  • Manage people

  • Manage by process

  • Manage competencies

  • Manage by fact

  • Manage business innovation


Figure 24-11. The five management cultures.

The five cultures are generally additive. In other words, when an organization moves to the "manage by process" culture, it does not stop managing people. When it moves to the "manage competencies" culture, it is still addressing the people and work-process issues of the earlier cultures. Measurement does not become effective until the work processes have stabilized at the "manage competencies" level.

Culture 1: Manage People

In this culture, people are managed; this is sometimes called management by objectives. The staff is given specific objectives to accomplish. Management's concern is that those objectives are accomplished, but generally management is not concerned with how they are accomplished.

The management philosophy for this culture is that good work is accomplished by hiring good people, setting appropriate objectives for those people, setting reasonable constraints, and then managing people to meet those objectives. This is a results-oriented culture. The people performing the work are responsible for determining the means to accomplish the results. Management is more concerned with satisfying constraint objectives than how products are built.

The management environment within this culture has these characteristics:

  • Manage people to produce deliverables. Results are emphasized and processes de-emphasized when people are given the responsibility to produce the deliverables (i.e., results) on which their performance will be evaluated. How products are built is of little concern to a Culture 1 manager.

  • Control people through budgets, schedules, staffing, and performance appraisals. Without processes, management cannot directly control or assess interim status of deliverables; thus, management places constraints on the workers and uses those constraints to control the results.

  • Hierarchical organization. Direction and communication flow from the top of the organization downward. Politically driven personal agendas often take precedence over doing the right thing.

  • Reactionary environment. Management does not anticipate, but rather reacts to, unfavorable situations as they arise.

  • Emphasis on testing quality into deliverables. Product testing uses a “fix on failure” (code and fix) approach to eliminate defects from products prior to delivery. This method does not prevent the same type of defect from occurring again in another project.

  • Success depends on who is assigned to a project. This culture emphasizes assigning the best people to the most important projects because success is primarily based on people’s skills and motivation.

  • Objective measures used. Measurement is based on things that can be counted, such as headcount, workdays, budgets, schedules, etc.

  • Outsourcing. Outside resources are used to fill competency gaps rather than training existing staff.

Why Organizations Continue with Culture 1

The primary reasons organizations stay at Culture 1 include the following:

  • Inexperience with other cultures. Management learned this culture prior to becoming management. They were managed using this culture and believe that goals and objectives can be met using this culture. Management is in a comfort zone that is difficult to change.

  • Pressures to meet schedules. Customers, users, and senior management evaluate IT management on results.

  • Time/resource constraints. The IT organization thinks it does not have the resources needed to develop and implement disciplined processes.

  • Quality is a discretionary expense. The actions and resources needed to improve quality are above and beyond current project cost, and thus, come under different budget categories. Many believe that quality costs are not recoverable.

  • The IT staff believes their profession is creative. Being creative means having the freedom to perform tasks in a manner that is subject to as few constraints as possible, which leads to variability among projects. Management may fear staff turnover if emphasis is placed on disciplined processes.

  • Customers/users control the IT budget. Under charge-out systems, IT management may not have the option to fund process improvement efforts without concurrence from customers/users.

  • Difficulty. The change from Culture 1 to Culture 2 is the most difficult of all the culture changes to undertake.

Why Organizations Might Want to Adopt Culture 2

  • Current performance is not acceptable to the customers/users.

  • Without improvement, the IT function may be outsourced.

  • Without improvement, IT management may be replaced. Outsourcing is fast becoming a viable alternative.

  • Response to executive management requests to improve quality, performance, and productivity: executive management finds the IT organization’s performance unacceptable and demands improvement.

  • Improve staff morale and reduce turnover. The IT staff feels overworked and inadequately rewarded and believes the probability of success is low; thus, staff members want to move to another organization.

  • Do more with less. Executive management increases the IT workload without corresponding increases in resources to complete the additional work.

  • Products delivered by suppliers do not meet the true needs of the IT organization, even though they might meet the purchasing or contractual specifications.

Culture 2: Manage by Process

The second culture manages by work processes. The movement from the “manage people” culture to the “manage by process” culture is significant. In the “manage people” culture, people are held responsible for results. In the “manage by process” culture, management is held responsible for results because they provide the work processes that people follow. If the work processes cannot produce the proper results, management is accountable because they are responsible for those work processes.

The management philosophy behind this culture is that processes increase the probability that the desired results will be achieved. To be successful, this culture requires management to provide the leadership and discipline needed to make technology professionals want to follow the work processes. At Culture 1, the workers are made responsible for success, whereas at Culture 2, management assumes the responsibility for success, while the workers are responsible for effectively executing the work processes. The management environment within the “manage by process” culture has these characteristics:

  • Management provides the means (i.e., processes) for people to perform work with a higher probability of repeatable success than without processes. Resources (time, people, budget) are allocated to process testing and improvement. Management budgets the funds needed to develop and improve processes.

  • Select staffs/teams are empowered to define and/or improve the processes. The individuals who use the processes develop them; thus, the developers of the processes become the "owners" of those processes.

  • Management is proactive in reducing and eliminating the reoccurrence of problems through improved processes. The IT organization identifies and anticipates problems and then takes action, so that the same problem will not reoccur when the same series of tasks are performed in the future.

  • Cross-functional organizational structures are established. Teams composed of members of different organizational units are established to resolve cross-functional issues, such as the testing of processes and change management.

  • Subjective measurement is added to objective measurement. Subjective measures, such as customer surveys, become part of the operational culture.

  • Inputs and outputs are defined. Detailed definitions reduce misunderstanding and process variability.

Why Organizations Continue with Culture 2

The primary reasons organizations stay at Culture 2 include the following:

  • Management is comfortable with the culture. After investing the time and resources to move to Culture 2, both management and staff have learned to operate with this approach. Management is not anxious to initiate additional significant changes to the way work is performed.

  • Cost/time to move to another culture. Each move to a different culture has an associated cost—monetary and staff time allocated. Based on current workloads, those resources may not be available.

  • Culture 2 has provided relief from Culture 1 problems. Many of the problems facing management, such as budget overruns and missed schedules, have been at least partially resolved by moving to Culture 2. Given this, management may follow an "if it isn't broken, don't fix it" philosophy.

  • Project priorities. Management must devote full effort to meet current project priorities and thus, does not have the time or energy to initiate significant change. These significant changes require moving to another culture.

  • Cross-functional politics. The timing may not be right to change the way the IT organization operates. For example, changing to another culture may put the IT organization out of step with the way its customers and senior management operate.

Why Organizations Might Want to Adopt Culture 3

  • Processes are incomplete. Culture 2 processes tend to be developed to address a wide variety of needs. For example, one system testing process may be built for implementing any and all projects. These processes tend to be large in order to encompass the many needs the processes have to fill. In addition, some processes may not be defined.

  • Processes are not easily customized. The organization may not be able to customize a process to meet a specific need of a customer/user.

  • Company needs/goals have changed. What the company does, and how the IT organization fits into its needs has changed, making the generic process culture (Culture 2) incompatible with the new needs/goals.

  • Processes are not fully integrated. Culture 2 processes tend to be independently developed and may not take into account the organization’s overall mission.

  • There is pressure from executive management and/or customers/users to improve. The improvements achieved by moving from Culture 1 to Culture 2 may not meet management's current improvement expectations, thus expediting the need to move to another management culture.

  • Suppliers focus on the purchasing and the contractual specifications rather than spending effort on understanding the true business needs of the IT organization. The IT organization is normally unwilling to invite suppliers into an IT planning session to familiarize suppliers with the strategy and direction of the IT organization.

Culture 3: Manage Competencies

The third culture manages competencies. This means that the hiring and training of people is focused on making them competent in using the organization’s work processes. In addition, IT organizations using this cultural approach do not accept work that is outside their areas of competency.

The underlying philosophy for this culture is that an IT organization must identify the needed core competencies and then build an organization that is capable of performing those competencies effectively and efficiently. Requests for work outside those core competencies should be declined; however, IT may help obtain outsourcing assistance to perform that work.

At this culture level, IT builds trust with its customer base because it is capable of performing as it says it can, within cost and budget constraints.

The management environment within the “manage competencies” approach has these characteristics:

  • Processes that support the core competencies. The IT organization decides which products and services they will support and then builds the competencies (i.e., work processes) to produce and deliver those products and services.

  • Employee hiring based on competencies. Individuals are hired and trained to perform certain work processes (i.e., competencies).

  • Competencies that support organizational business objectives. The competencies selected by the IT organization should relate to the information needs incorporated into the organization's business objectives.

  • High level of process customization. Processes can be quickly customized and adapted to the specific customer/user needs.

  • Empowered teams. Teams are organized and empowered to take action and make decisions needed to meet business objectives.

  • Establishment of defect databases. Defects are recorded and entered into databases for analysis and improvement of core competencies.

  • Continuous learning. Education and training become an integral part of day-to-day work activities.

Why Organizations Continue with Culture 3

The primary reasons organizations stay at Culture 3 include the following:

  • New skills are required to move to Culture 4. Management and staff must be convinced that the effort and skill sets needed to move to a statistical approach are worthwhile. The background needed to make that decision may not be available.

  • Rapid changes in technology make it difficult to keep pace. The effort required to maintain the competencies at this culture is already consuming available resources, making it difficult to institute the changes needed to move to another culture. It may be more effective and economical to outsource.

  • Overcoming learning and unlearning curves is difficult. Reaching and maintaining competencies involves unlearning old processes and learning new processes. Maintaining learning and unlearning curves, as process changes accelerate, becomes a major activity for the IT staff.

  • Continuous training is expensive. Incorporating more training in addition to the Culture 3 training will significantly increase the training cost.

Why Organizations Might Want to Adopt Culture 4

  • Improvement is needed to remain competitive. In a global marketplace, organizations are pressured to continually improve productivity and quality, add capabilities, and reduce cost to customers.

  • Need for more reliable data. Management of Culture 3 produces data about processes, but that data normally does not have the consistency and reliability needed for use in decision making.

  • Reduce judgment factor in decision making. Without reliable data, managers must make decisions based heavily on judgment. Judgment, although often effective, does not produce the consistency in decisions that encourages confidence from customers/users and senior management.

  • Suppliers are not aligned with customer demand. Suppliers are not positioned to react to customer demand as it occurs, so specialized orders, or products and services not previously requested, cannot be obtained quickly enough to satisfy IT customer demand.

Culture 4: Manage by Fact

The fourth culture is "manage by fact." Quantitative expectations are established, and measures are collected and compared against expected results. Based on those measures, management makes whatever changes are needed to ensure that the outcome of the work processes is what IT customers need.

The underlying management philosophy of this approach is that decision making should be based on fact (however, those facts will be tempered by judgment and experience). The stability of the Culture 3 work processes produces reliable quantitative data, which management can depend on in decision making. The quantitative feedback data, normally produced as a work process by-product, will be used for managing and adjusting work in progress, as well as identifying defect-prone products and processes as candidates for improvement.

The management environment within the “manage by fact” culture has these characteristics:

  • Making decisions using reliable data. The more reliable the data, the more it can be incorporated into decision making and the better the decisions that can be made.

  • Identifying and improving processes using reliable data. Having reliable data enables improvement teams to select processes and where within those processes the most productive improvements can be made.

  • Workers measure their own performance. Workers are provided the quantitative data needed to measure and manage their own performance.

  • Processes managed quantitatively. Project teams and workers have the quantitative data needed to effectively manage their work processes.

  • Measurement integrated from low levels to high levels. Quantitative data can be rolled up from low-level data to high-level data, showing how low-level components affect the goals and objectives of the organization.

  • Defect rates anticipated and managed. The defect databases established at Culture 3 can be used to predict defect rates by processes and work products, so that defects can be anticipated and acted upon.

Why Organizations Continue with Culture 4

The primary reasons organizations stay at Culture 4 include the following:

  • Current processes have been optimized. The integrated processes and integrated measurement data allow optimization of current work processes.

  • Allow others to innovate. It may be cheaper to follow the lead of others than to be the leader.

  • Management is comfortable with the culture. Organizations operate effectively and efficiently at Culture 4. If management is comfortable with that approach, they may not have the desire to initiate changes in the work environment.

  • Knowledge of business is not required. It is a major educational effort to train the IT staff in business activities.

  • Time and effort are required to optimize a measurement program. Management may wish to use its scarce resources for purposes other than innovating new technological and work approaches.

  • Unwilling to share processes with other organizations. Moving to Culture 5 normally involves sharing current work processes with other organizations. IT management may decide that work processes are proprietary and prefer not to share them.

Why Organizations Might Want to Adopt Culture 5

  • Leapfrog the competition. Innovative business cultures may provide the organization with a competitive advantage.

  • Become an industry leader. Culture 5 produces world-class organizations that tend to be recognized by leading industry associations and peer groups. In turn, this leads to other organizations wanting to share solutions.

  • Receive pressure from customers/users and senior management to improve. Even with existing processes optimized, executive management and customers/users may demand additional improvements in productivity and quality.

  • Partner with customers in reengineering business solutions. With current capabilities optimized, IT management and staff can redirect their efforts toward reengineering business solutions and be assured IT has the capabilities and customer trust needed to build these solutions.

  • Need to develop a business-to-business partnership for reducing supplier cost. The Internet, coupled with business-to-business partners for jointly ordering supplies and services, can minimize cost, but becomes effective only when cultures support business-to-business activities.

Culture 5: Manage Business Innovation

The fifth culture is one of business innovation. Innovation is possible because at Culture 4, management is confident that they can do what they say they will do with the resources estimated, and the work can be completed within or ahead of schedule. The types of improvements now possible are sometimes called “breakthrough” improvements, meaning significantly new ways to do work. Also in this culture, the focus is on using technology to drive business objectives.

The underlying management philosophy at this culture is using information as a strategic weapon of the business organization. IT is looking at new and innovative technological and system approaches to improve the overall business success of the organization. The culture requires information technology and management to be knowledgeable about the business side of the organization.

The management environment within the “manage business innovation” culture has these characteristics:

  • Supports e-commerce activities. E-commerce systems are integrated with the "back-office" systems so that they become an integrated network of systems. In many organizations, this involves interfacing with legacy systems.

  • Supports e-business activities. E-business activities involve rethinking the way organizations conduct their business. They involve developing new relationships with customers, providing customers new ways to acquire products and information regarding product testing and delivery, and increasing the value of products and services to the customer. For e-business to be successful, IT normally must be a major driver of the e-business effort.

  • Finds innovative solutions to business problems. Where existing processes are ineffective or cause productivity bottlenecks, innovative solutions may be the only option for productivity breakthroughs.

  • Acquires solutions from other industries. Organizations that share their work processes with other organizations benefit from mutual sharing, which can lead to productivity breakthroughs.

  • Enables workforce to become an “alliance” between employees, external customers, suppliers, and other parties having a vested interest in improvement. After IT organizations have optimized their work processes, they can then build an alliance with all of the parties having a vested interest in the success of the information technology organization.

  • Uses innovative technologies. New technologies, such as knowledge management and data warehousing, can be incorporated into process-improvement activities involving external parties (e.g., suppliers, customers, auditors).

  • Strategic business planning and information technology become equal partners in setting business directions. IT becomes a leader in integrating technology into solving business problems and/or creating new business opportunities.

Cultural Barriers

From the discussion of the five IT management cultures, the agile implementation team must identify their organization’s IT culture and the barriers and obstacles associated with that culture. Five tasks are involved in completing this step. The results of the tasks should be recorded on Work Paper 24-20.

Work Paper 24-20. Cultural Barrier Work Paper

The work paper provides write-in space for each of the following entries:

  • Current IT management culture

  • Barrier posed by culture

  • What can be done in current culture

  • Desired culture for time compression

  • How to address cultural barriers

Identifying the Current Management Culture

Using the description of the five cultures, the agile implementation team must determine which of the five cultures is representative of their organization's IT management culture. Normally, this is not difficult because at least 90 percent of IT organizations are in Culture 1 (i.e., manage people) or Culture 2 (i.e., manage by process). A Culture 1 organization is one in which IT management focuses more on stating the objectives to be accomplished. In Culture 2, IT management expects its staff to follow the software testing process specifically; if the project team follows it, management must assume responsibility for the results.

Identifying the Barriers Posed by the Culture

The barriers posed by the IT management culture are normally those described as the reasons why the IT organization wants to stay with its existing culture. For example, if the IT organization is at Culture 1 (i.e., manage people) and management likes to set objectives for people to accomplish and then manage against those objectives, any solution that does not place responsibility on the individual for completing a task would probably not be acceptable to IT management. The agile implementation team should review all the reasons why IT management would want to stay with its current culture and, from those reasons, extrapolate what they believe are the barriers posed by that culture. These barriers should be transcribed to Work Paper 24-20.

Determining What Can Be Done in the Current Culture

This task turns the identified barriers into positive statements. In the preceding example, we identified as a barrier that any solution not focused on establishing and managing objectives for individuals would not be acceptable to IT management. Given this barrier, any solution should include specific objectives for individuals to accomplish.

Determining the Desired Culture for Time Compression

The agile implementation team generally cannot change the IT management culture. However, by this point in the time-compression process, the team members should have some general idea of how they can compress the software testing time. If those solutions would be more effectively implemented in a different culture, the agile implementation team should identify which culture is most conducive to the type of solutions they are thinking about.

Although an agile implementation team cannot change the culture, they may be able to change an aspect of the culture that will enable them to implement one of their time-compression solutions. If that culture would be desirable, the agile implementation team should indicate which is the desired culture and record it on Work Paper 24-20.

Determining How to Address Culture Barriers

Work Paper 24-20 identifies two categories of culture barriers: those imposed by the current IT management culture, and those that exist in a current culture, but could be alleviated if a new culture were in place. Given these two categories of barriers, the agile implementation team should determine how they could address those culture barriers. If the barrier cannot be adequately addressed, any solution inhibited by that barrier should not be attempted. If the barrier can be adequately addressed, the solution must include a plan to address that culture barrier. Those recommendations would be included in Work Paper 24-20. You can use the “Why-Why” analysis to help you identify the root cause of the cultural barrier and then apply the recommended solution to adjust to those cultural barriers.

Open and Effective Communication

Agile processes depend on open and effective communication. Open and effective communication has these three components:

  • Understanding and consensus on objectives. Those working on an agile software testing project must know the objectives they are to accomplish and believe they are the right objectives. To accomplish this effectively, the test objectives need to be related to business objectives. For example, when testing a website, if a business objective is that the website be easy to use, a test objective must be related to that business objective. In addition, the testing team must believe, as a team, that these are the proper objectives for them to achieve. Taking the time and effort to ensure that the testers understand the objectives, and gaining their support, is an important component of building an effective software test team.

  • Respect for the individual. Open and effective communication depends on those communicating having respect for one another. Disagreements are normal, but they must be in an environment of respect. Showing that appropriate respect exists is an important component of team building.

  • Conflict-resolution methods. Teams need methods to resolve conflict. Individuals may have differing opinions about how to perform a specific task, and those differences must be resolved. The resolution process will not always mean that individuals change their opinions, but it does mean they will support the decision made.

For an in-depth discussion of the challenges testers face, refer to Chapter 8, “Step 2: Developing the Test Plan.”

Lines of Communication

The first step to improve information flow is to document the current information flow. This should be limited to information obtained and/or needed by a software testing team in the performance of their tasks. However, other information flows can be documented, if needed, such as the flow of information to improve the software testing process.

You can document the lines of communication using the communication graph. To complete this graph, you need to do the following:

  1. Identify the individuals/groups involved in communication. All the stakeholders in the success of the tasks, such as software testing, need to be identified. The agile implementation team can determine whether the graph should show a specific individual, such as the project leader, or a function, such as users of the software. Figure 24-12 is an example of a communication graph showing the five different parties involved in communication (in this case, identified as A, B, C, D, and E).


    Figure 24-12. Lines of communication graph.

  2. Identify each type of communication going between the individuals/functions. Each type of communication, such as reporting the results of a test, is indicated on the graph as a line between the parties. For example, if Party D did the testing and reported the results to Party C, the line going between D and C represents that communication. Each type of communication should be documented. This can be done on the graph or separately.

  3. Determine the importance of the communication. A decision needs to be made about how important the communicated information is to the overall success of the project. A rating of 3 is given if not having the information would have a significant negative impact on the project; a rating of 2 is given if its absence would have a minor impact; and a rating of 1 is given if the information is merely informative, with little or no impact, but helpful to the success of the project. For example, if the goal of informing Party C of the results of testing was just to keep that party informed, the communication would get a rating of 1.

  4. Develop a communication score for each party. A communication score is developed for each party by adding the importance ratings for each of that party's lines of communication. In the Figure 24-12 example, Party A has a score of 4, and Party C has a score of 9. The objective of developing a party score is to determine which parties are most important to the flow of communication. In Figure 24-12, Party C is the most important individual; this would probably be the software testing manager in the lines of communication for conducting testing for a specific software project. A small sketch of this scoring calculation follows this list.
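To make the scoring mechanics concrete, the following is a minimal sketch in Python. The parties, communication lines, and importance ratings in the sketch are hypothetical illustrations invented for this example; they are not the actual data behind Figure 24-12.

```python
from collections import defaultdict

# Each tuple is (from_party, to_party, importance), where importance is
# 3 = very important, 2 = important, and 1 = merely informative.
# These lines are hypothetical; substitute the lines from your own graph.
communication_lines = [
    ("D", "C", 3),  # e.g., test results reported by the testers to the manager
    ("A", "C", 2),
    ("A", "B", 2),
    ("B", "C", 1),
    ("C", "E", 3),
]

def communication_scores(lines):
    """Sum the importance ratings of every line touching each party."""
    scores = defaultdict(int)
    for from_party, to_party, importance in lines:
        scores[from_party] += importance
        scores[to_party] += importance
    return dict(scores)

for party, score in sorted(communication_scores(communication_lines).items()):
    print(f"Party {party}: communication score {score}")
```

With these illustrative ratings, Party A totals 4 and Party C totals 9, matching the scores cited for the Figure 24-12 example; the party with the highest total is the one most critical to the flow of communication.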

Information/Communication Barriers

To identify the barriers to effective and open communication, the agile implementation team should analyze their lines of communication graph (see Figure 24-12) to determine whether there is adequate communication, nonexistent communication, communication to the wrong individual, and/or information that is needed by an individual but not provided to that individual. These barriers to effective and open communication should be recorded on Work Paper 24-21. To complete the work paper, you must address the following:

  • Information needed. Indicate information that is needed but not communicated, or any information that is available but not communicated to the right individual, or from the right individual.

  • Importance. The importance of the information communicated, using the ranking of 3, 2, and 1; also indicate the appropriate person to communicate the information (By) and the appropriate person to receive it (To).

  • Barrier. The barrier the agile implementation team believes is inhibiting open and effective communication.

  • How to address barrier. The agile implementation team’s recommendation of what they might do to overcome this communication barrier.

Work Paper 24-21. Information and Communication Flow Barrier

The work paper is a blank grid, to be completed by the team for each information flow, with the following columns: Information Needed; Importance; Should Be Communicated By; Should Be Communicated To; Barrier; How to Address Barrier.

Effective Communication

Communication involves dealing with expectations and responsibilities of individuals and groups. Effective communication must occur down, across, and up an organization, and with parties external to the IT organization. Effective communication to manage testing activities requires the following:

  • Management is provided with the necessary reports on the tester’s performance relative to established objectives. For example, consider whether:

    • Mechanisms are in place to obtain relevant information on project development changes.

    • Internally generated information critical to achievement of the testing objectives, including that relative to critical success factors, is identified and regularly reported.

    • The information that managers need to carry out their responsibilities is reported to them.

  • Information is provided to the right people in sufficient detail and on time to enable them to carry out their responsibilities efficiently and effectively. For example, consider whether:

    • Managers receive analytical information that enables them to identify what action needs to be taken.

    • Information is provided at the right level of detail for different levels of the test effort.

    • Information is summarized appropriately, providing pertinent information while permitting closer inspection of details as needed rather than just large amounts of data.

    • Information is available on a timely basis to allow effective monitoring of events and activities, and so that prompt action can be taken on business factors and control issues.

  • Management’s support for the development of necessary information systems is demonstrated by the commitment of appropriate resources (human and financial). For example, consider whether:

    • Sufficient resources (managers, analysts, programmers, all with the requisite technical abilities) are provided as needed to develop new or enhanced information systems.

  • Effectiveness with which employees’ duties and control responsibilities are communicated. For example, consider whether:

    • Communication vehicles (formal and informal training sessions, meetings, and on-the-job supervision) are sufficient in effecting such communication.

    • Employees know the objectives of their own activity and how their duties contribute to achieving those objectives.

    • Employees understand how their duties affect, and are affected by, duties of other employees.

  • Receptivity of management to employee suggestions of ways to enhance productivity, quality, or other similar improvements. For example, consider whether:

    • Realistic mechanisms are in place for employees to provide recommendations for improvement.

    • Management acknowledges good employee suggestions by providing cash awards or other meaningful recognition.

  • Adequacy of communication across the organization (for example, between testers and users) and the completeness and timeliness of information and its sufficiency to enable people to discharge their responsibilities effectively. For example, consider whether:

    • Salespeople inform engineering, production, and marketing of customer needs.

    • Accounts receivable personnel advise the credit approval function of slow payers.

    • Information on competitors’ new products or warranties reaches engineering, marketing, and sales personnel.

  • Openness and effectiveness of channels with and among all parties communicating information on changing customer needs. For example, consider whether:

    • Feedback mechanisms with all pertinent parties exist.

    • Suggestions, complaints and other input are captured and communicated to relevant internal parties.

    • Information is reported upstream as necessary and follow-up action taken.

  • Timely and appropriate follow-up action by project and user management resulting from communications received from testers. For example, consider whether:

    • Personnel are receptive to reported problems regarding defects or other matters.

    • Errors in software are corrected and the source of the error is investigated and corrected.

    • Appropriate actions are taken, and there is follow-up communication with the appropriate stakeholders.

    • IT management is aware of the nature and volume of defects.

Quality Control Checklist

Work Paper 24-22 is a quality control checklist for Step 5. The investigation should focus on determining whether a specific aspect of the step was performed correctly or incorrectly.

Work Paper 24-22. Quality Control Checklist for Step 5

  

YES

NO

COMMENTS

1.

Does the agile implementation team have a good understanding of how an IT management culture affects the operation of the IT organization?

   

2.

Does the agile implementation team understand the five different cultures that can exist in an IT organization?

   

3.

Did the agile implementation team reach consensus on the current IT organization’s management culture?

   

4.

Given the discussion of why IT management would want to keep their current culture, can the agile implementation team identify barriers posed by the current IT culture?

   

5.

Can the agile implementation team convert those barriers into positive statements of how time compression solutions must be implemented?

   

6.

Has the agile implementation team determined whether or not a different culture would be more advantageous in implementing the proposed time compression solutions?

   

7.

For each of the barriers identified, has the agile implementation team determined whether those barriers can be adequately addressed in implementing time compression solutions?

   

8.

For those barriers that the agile implementation team believes can be adequately addressed in the time compression solutions, have they determined a potential solution for addressing those culture barriers?

   

9.

Does the agile implementation team recognize the importance of information and communication in building an agile software testing process?

   

10.

Does the agile implementation team understand the three components of effective communication?

   

11.

Has the team developed a lines of communication graph for software testing?

   

12.

Has the graph been analyzed to determine:

   
 

a. Information missing from the graph

   
 

b. Information not communicated to the right individual

   

13.

Has the team determined the importance of each communication and developed a communication score for each individual/function identified on the communication graph?

   

14.

Has the team studied and understood the guidelines for information and communication?

   

15.

Has the team identified the barriers for effective communication in the performance of software testing?

   

The agile implementation team should review these questions as a team. A consensus Yes or No response should be determined. “No” responses should be explained and investigated. If the investigation indicates that the particular aspect of the step was incorrectly performed, it should be repeated. (Note: Some teams prefer to review the quality control checklist before they begin the step to give them a fuller understanding of the intention of this step.)

Conclusion

This step, like Step 4, has identified some of the barriers and obstacles that need to be addressed to compress software testing time. Whereas Step 4 identified the root cause of the people and organizational barriers, the root cause of the culture barriers is known; it is the culture itself. It is the culture that primarily determines what information is shared and to whom it is shared. Ineffective and closed communication inhibits the building of an agile software testing process. Knowing these culture and communication barriers, and how to address the barriers, is an important part of the plan to compress software testing time.

Step 6: Identify Implementable Improvements

The best idea for compressing software testing time is one that is "implementable" (that is, "doable"). Implementable means that the idea will receive appropriate management support, have the resources needed for implementation, and have the acceptance of those who will be using the improved software testing process. This step describes what an implementable is and provides a process for identifying the most implementable ideas from the idea list developed in Steps 1 through 3. Those ideas are then ranked by their importance in jumpstarting an agile software testing process.

What Is an Implementable?

An implementable refers to something that can actually be realized in an organization. It may not be the most effective or most efficient idea, but it is doable. The implementable is not the idea that is easiest or the quickest to realize, but it is doable. It is not the idea that will compress software testing time by the most workdays, necessarily, but it is doable.

Experience from IT organizations that have been effective in compressing software testing time shows there are four criteria for implementables, as follows:

  • User acceptance. Those individuals who will be required to use the improved process in software testing are willing to use the idea. If the users do not want to use the idea, they will find a way to stop the idea from being effective. An idea can be stopped by blaming problems on the idea, or by taking extra time and then attributing it to the new idea. However, when the users like the idea and want to use it (i.e., it is acceptable to them), they will make the idea work.

  • Barrier free. The greater the number of obstacles/barriers imposed on an idea, the greater the probability the idea will not be effective. As discussed in the previous two steps, the barriers can be people oriented, cultural, communicational, administrative, or organizational. The more barriers that need to be overcome, the less chance the idea has of being successful. In addition, not all barriers and obstacles are equal: some are almost insurmountable, whereas others are easily overcome. For example, if the senior manager does not like a particular idea, that barrier is almost insurmountable. On the other hand, if the barrier is just paperwork, completing the paperwork effectively overcomes the barrier.

  • Obtainable. Ideas that are easy to implement are more obtainable than ideas that are difficult to implement. The difficulty can be a lack of competency, the amount of resources and time needed to implement the idea, or a lack of motivation on the part of the implementation team to implement it successfully.

  • Effective. Effectiveness is the number of workdays that can be reduced by implementing the idea. If the number of workdays reduced is significant, the implemented idea will be more effective. A less-effective idea is one that might only eliminate a part of a single workday from software testing. Ideas from which no estimate can be made for the number of workdays that can be reduced should not be considered effective.

Obviously, other criteria affect whether an idea is doable. However, the preceding four criteria are most commonly associated with effective ideas.

Identifying Implementables via Time Compression

Ultimately the decision about what is implemented to compress testing time must be based on the judgment of the agile implementation team. The team has knowledge of the software testing process and the IT organization. They should be motivated to do the job, and responsible for the success of the effort. However, many organizations use a process that may help in determining which are the better ideas for implementation.

This process uses the four criteria previously mentioned: user acceptance, barrier free, attainability, and effectiveness. By using a simple scoring system, organizations can score an idea based on these four criteria; that score will help determine how doable an idea is.

The recommended scoring system allocates a maximum of three points to each of the four criteria. The agile implementation team evaluates each criterion on a scale of 0 to 3, where 3 is the most desirable state of the criterion for a specific idea and 0 is an unacceptable state. For each idea, the scores for the four criteria are multiplied together. Multiplication is used rather than addition because multiplying by 0 (zero) eliminates any idea that is awarded a 0 on any of the four criteria.

Listed here is a scoring guide for the four criteria:

  • Criteria 1: User acceptability. Acceptability is evaluated using the stakeholders' four quadrants. The most significant stake among the stakeholders must be determined; this does not mean identifying who holds which stake, but rather the stake held by the majority of the stakeholders.

    SCORE   HOW THE SCORE IS ALLOCATED
    0       Stakeholders want to stop it from happening
    1       Stakeholders will let it happen
    2       Stakeholders will help it happen
    3       Stakeholders will make it happen

  • Criteria 2: Barrier free. Barriers/obstacles need to be classified as major or minor. A major barrier/obstacle is one with little probability of being overcome, or one that would take extensive effort to overcome. A minor barrier is one that the team can overcome. The score for barrier free is determined as follows:

    SCORE   HOW THE SCORE IS ALLOCATED
    0       Two or more major barriers
    1       One major barrier or three minor barriers
    2       One to two minor barriers
    3       No barriers

  • Criteria 3: Obtainable. The agile implementation team has to decide how obtainable the idea is. The easier the idea is to obtain or implement, the higher the score. Based on the size of the organization and the skill sets of the implementers, the difficulty of implementation should be placed into one of four groups.

    SCORE   HOW THE SCORE IS ALLOCATED
    0       Very difficult to implement
    1       Difficult to implement
    2       Some difficulty to implement
    3       Easy to implement

  • Criteria 4: Effectiveness. Effectiveness should be measured by the estimated reduction in the workdays needed to test a project. The more compression that occurs, the higher the score. The agile implementation team must estimate, for each idea, how much that idea will compress delivery time. The percentage of reduction should be based on the total software testing effort. For example, if it takes 100 workdays to write a test plan, a 2 percent reduction is a two-workday reduction. The scoring for effectiveness is as follows:

    SCORE   HOW THE SCORE IS ALLOCATED
    0       No known reduction, or the reduction cannot be estimated
    1       Less than a 1-workday reduction
    2       A 1- to 3-workday reduction
    3       More than a 3-workday reduction

To complete the scoring, the agile implementation team should first list the high-priority improvement ideas developed in Steps 1 through 3 and post them to Work Paper 24-23. For each idea, the team then determines the score for each of the four criteria used to select an improvement idea for implementation. These four individual criteria scores are multiplied to obtain a doable score. The result is an overall score between 0 and 81, where zero means the idea will be discarded and 81 indicates the best possible idea identified through performing Steps 1 through 3.

Work Paper 24-23. Software Testing Time Compression Idea

The work paper is a blank grid, to be completed for each improvement idea, with the following columns: Improvement Idea; User Acceptable; × Barrier Free; × Attainability; × Effectiveness; = Doable Score.

At the end of this process, the ideas for compressing the software testing process would be ranked from 0 to 81 based on the actual scores. The idea getting the highest overall doable score should be considered the first idea to implement. The idea with the second highest score should be the second idea to implement.
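To make the calculation and ranking concrete, here is a minimal sketch in Python. The improvement ideas, criterion scores, and workday estimates are hypothetical illustrations, not entries from any completed work paper; the effectiveness mapping follows the 0-to-3 scoring guide given earlier in this step.

```python
from dataclasses import dataclass
from typing import Optional

def effectiveness_score(workdays_saved: Optional[float]) -> int:
    """Map an estimated workday reduction to the 0-3 effectiveness score."""
    if workdays_saved is None:   # no estimate can be made
        return 0
    if workdays_saved > 3:
        return 3
    if workdays_saved >= 1:
        return 2
    if workdays_saved > 0:
        return 1
    return 0                     # no known reduction

@dataclass
class ImprovementIdea:
    name: str
    user_acceptance: int             # 0-3, from the Criteria 1 guide
    barrier_free: int                # 0-3, from the Criteria 2 guide
    obtainable: int                  # 0-3, from the Criteria 3 guide
    workdays_saved: Optional[float]  # estimated reduction, or None if unknown

    def doable_score(self) -> int:
        # The four criterion scores are multiplied, not added, so a 0 on any
        # criterion eliminates the idea; the maximum is 3 * 3 * 3 * 3 = 81.
        return (self.user_acceptance * self.barrier_free * self.obtainable
                * effectiveness_score(self.workdays_saved))

ideas = [
    ImprovementIdea("Verify entrance criteria before each workbench", 3, 2, 2, 4.0),
    ImprovementIdea("Reduce rework in the test-planning workbench", 2, 2, 2, 2.0),
    ImprovementIdea("Add another approval checkpoint", 1, 0, 2, 0.5),  # a 0 removes it
]

# Rank the ideas from highest to lowest doable score, as on Work Paper 24-23.
for idea in sorted(ideas, key=lambda i: i.doable_score(), reverse=True):
    print(f"{idea.doable_score():>2}  {idea.name}")
```

Because the criteria are multiplied, the third hypothetical idea is eliminated by its 0 on the barrier-free criterion, while the remaining ideas are ranked by their overall doable scores.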

You need to recognize that this selection process is not statistically valid; it includes judgment. Nevertheless, it does help rank the ideas by their probability of success. Obviously, a score of 27 may not be significantly different from a score of 24. However, a score of 24 would be significantly different from a score of 54.

The purpose of this selection process is to help the agile implementation team select the best overall ideas for implementation. The team should look first at the ideas with the highest overall doable scores. However, judgment still applies: an idea with a slightly lower doable score might be determined to be the better one to implement, even though it scored below the highest-ranked idea.

After the ideas have been selected, a plan needs to be developed for implementation. It is generally recommended that a single idea be implemented and its results evaluated before the next idea is implemented; this assumes the ideas can be implemented in a relatively short time span. Another advantage of implementing ideas in series rather than simultaneously is that it makes it easier for the software testing staff to assimilate the change and support it.

Prioritizing Implementables

A scoring process that ranks implementables is an effective way to select ideas. However, the agile implementation team may want to prioritize the better ideas. Prioritization might prove beneficial in jumpstarting the agile software testing process, by implementing the ideas that could quickly add some agility to the existing test process.

Three sets of guidelines are provided to the agile implementation team to help identify the best doable ideas. These are as follows:

  • The top-six best testing process workbench improvements. If it is determined that the best approach to achieving agility in a software testing process is to remove variability from the software testing process workbenches, these six ideas should be selected first.

  • The top-six idea categories for use in building an agile testing process. If the agile implementation team believes that the best approach to adding agility to the software testing process is to focus on the environment in which testers perform work, these six categories will help place ideas into the category ranked highest by the agile implementation team (see Table 24-1). To use these guidelines, the categories must be ranked from one through six in the sequence the team believes will best achieve agility in the software testing process. The team should work on the top-ranked categories first. To do this, the ideas, scored in sequence of doability, need to be cross-referenced to the six categories.

    Table 24-1. Top Six for Compressing Software Testing Delivery Time

    1. Remove non-essential tasks from the software testing critical path.
    2. Reduce rework if it occurs frequently in a workbench activity.
    3. Verify the entrance criteria before starting a workbench activity.
    4. Verify the exit criteria before completing a workbench activity.
    5. Move testing activities to the front end of software testing.
    6. Substitute an in-house best practice for a less effective work practice.

    Because these have been discussed throughout this book, they are not described individually.

  • The top-ten approaches effective for building an agile testing process. If the majority of the test agility team wants to go back to the basics, these concepts should be their implementation focus (see Table 24-2). To do this, the team needs to rank the ten approaches from 1 to 10 and then focus on the highest-rated approaches first.

    Because these approaches are discussed extensively in this book, they are not individually described.

    Table 24-2. Top Ten Approaches for Building an Agile Testing Process
    (The team assigns each concept a rank from 1 to 10.)

    1. Eliminate the readiness barriers to building an agile testing process
    2. Minimize process variability
    3. Identify and use the best testing practices
    4. Expand the tester's role to test needs, not just specifications
    5. Restrict test process compliance activities to those activities proven to be effective
    6. Bypass the barriers to building an agile testing process
    7. Incorporate the agile testing process into the existing IT culture
    8. Build the agile testing process by performing the most doable tasks first
    9. Identify and improve the flow of information
    10. Develop and follow a plan to build the agile testing process

Documenting Approaches

If the agile implementation team determines that it wants to apply judgment to the scored implementables, they should document the basis of that selection process. Work Paper 24-24 can be used for that purpose. To complete this work paper, the following needs to be documented:

  • Implementable improvement idea ranked by doable score. All of the implementables that scored high on Work Paper 24-23 should be transcribed to Work Paper 24-24 and ranked. Note that the team may decide to eliminate some ideas in this transition; they should also eliminate low-scoring ideas that do not appear likely to make a significant change to the software testing process.

  • Prioritization considerations. In determining priority beyond the doable score, the team should indicate the basis on which they want to prioritize a specific implementable. This step has provided three categories of guidelines for making this determination. However, the team should feel free to use other methods for prioritization.

  • Prioritization rank. It is suggested that the team rank ideas for prioritization into these three categories (a small sketch of this ranking appears after this list):

    • High. Those ideas they will implement first.

    • Medium. Those ideas that will be implemented after the high-priority ideas.

    • Low. Ideas that may or may not be implemented.
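As a purely illustrative sketch of this bucketing, the following Python fragment records ranked ideas using the Work Paper 24-24 fields and lists them in implementation order. The ideas, scores, considerations, and priority assignments are hypothetical; the actual prioritization is left to the team's judgment and the guideline sets described earlier in this step.

```python
from dataclasses import dataclass

@dataclass
class PrioritizedIdea:
    idea: str           # implementable improvement idea
    doable_score: int   # from Work Paper 24-23 (0-81)
    considerations: str # basis used to prioritize beyond the doable score
    rank: str           # "High", "Medium", or "Low"

# Hypothetical entries standing in for a completed Work Paper 24-24.
work_paper_24_24 = [
    PrioritizedIdea("Verify entrance criteria before each workbench", 36,
                    "Top-six workbench improvement; low disruption", "High"),
    PrioritizedIdea("Reduce rework in the test-planning workbench", 16,
                    "Supports the category the team ranked second", "Medium"),
    PrioritizedIdea("Introduce a new defect database", 12,
                    "Depends on a culture change; revisit later", "Low"),
]

# Implement High-priority ideas first, one at a time, evaluating each result
# before starting the next (as recommended at the end of Step 6).
for priority in ("High", "Medium", "Low"):
    for entry in work_paper_24_24:
        if entry.rank == priority:
            print(f"{entry.rank:<6} score {entry.doable_score:>2}  {entry.idea}")
```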

Work Paper 24-24. Establishing the Priority of Doable Ideas for Jumpstarting an Agile Software Testing Process

The work paper is a blank grid, to be completed for each idea, with the following columns: Implementable Improvement Idea Ranked by Doable Score; Prioritization Considerations; and Prioritization Rank (High, Medium, or Low).

Quality Control Checklist

Work Paper 24-25 is a quality control checklist for Step 6. The investigation should focus on determining whether a specific aspect of the step was performed correctly or incorrectly.

Work Paper 24-25. Quality Control Checklist for Step 6

  

YES

NO

COMMENTS

1.

Has the agile implementation team agreed upon a list of improvement ideas they will consider?

   

2.

Does the agile implementation team believe that an algorithm to score each idea from best to worst would assist them in selecting the best ideas?

   

3.

Does the agile implementation team understand the four criteria proposed for the selection for the best idea?

   

4.

Does the agile implementation team understand and accept the 0 to 3 scoring method for each of the four criteria?

   

5.

Has the agile implementation team scored each idea using the selection process criteria?

   

6.

Has the agile implementation team then ranked all the ideas from highest score to lowest score?

   

7.

Does the agile implementation team believe that the best idea is among the highest scoring ideas?

   

8.

Has the agile implementation team reviewed the few highest scoring ideas to determine which of those they believe are the best regardless of the final score?

   

9.

Has the agile implementation team reviewed the top six ideas for compressing software testing time to determine if the idea they selected is consistent with the top six?

   

10.

Has the agile implementation team agreed upon one idea for implementation?

   

11.

If the agile implementation team wants to do further prioritization to select doable ideas to implement, have team members determined how they will do that additional prioritization?

   

The agile implementation team should review these questions as a team. A consensus Yes or No response should be determined. “No” responses should be explained and investigated. If the investigation indicates that the particular aspect of the step was incorrectly performed, it should be repeated. (Note: Some teams prefer to review the quality control checklist before they begin the step to give them a fuller understanding of the intention of this step.)

Conclusion

Action must be taken if the time required to test software is to be reduced. The action proposed is to select and implement doable ideas to reduce that time. This step has provided a process to help the agile implementation team determine which idea to implement first. After that idea has been identified and implemented, the agile implementation team will come back to this process and select another idea for implementation. Thus, the time required to test software will continue to compress.

Step 7: Develop and Execute an Implementation Plan

A software testing organization can use three approaches to achieve agile software testing. The first is to acquire the agile testing process from a supplier. The second is to build the agile testing process from the bottom up. The third is to convert the current software testing process to an agile process.

I am unaware of any agile testing processes on the market; therefore, the first approach won’t work. Building an agile testing process from the bottom up is a large, time-consuming, and risky project. Because IT management in general has not been willing to invest large sums of money in software testing processes, it is unlikely they will support the time and resources needed to build such a process. Experience has shown that it normally costs ten times as much to deploy a new process as it does to build it. Thus, the bottom-up approach would rarely get enough of IT management’s support and resources to make it work. The bottom line is that the only realistic and effective approach to acquiring an agile testing process is to continuously change the current testing process, building more and more agility into it.

An old but true saying is “If you fail to plan, plan to fail.” Having a good idea to compress software testing time is not enough. That idea must be put into action. A good plan will help facilitate action.

The process for planning to compress testing time should be viewed as a project. Thus, the planning process used for software projects is applicable to the planning process to compress software testing time. However, because the compression project is normally a small project, the planning does not have to be nearly as comprehensive. This step provides a simplified, but effective, planning process for implementing an idea to compress software testing time.

Planning

The planning process should start with the assumption that building agility into the software testing process will take time. A pilot project should be used because piloting provides an opportunity to evaluate the compression effort and, if necessary, “kill” it. The effort, once started, should be continuous, and everyone involved in the compression process must buy in to this concept.

Each idea to be implemented involves all four PDCA (Plan, Do, Check, Act) components discussed in detail in Chapter 1. If the implemented idea does not meet its objective, the PDCA cycle may have to be repeated, based on the action taken to improve, expand, or modify the implemented idea.

Implementing Ideas

The proposed implementation plan uses much of the information gathered in Steps 1 through 6. Using this information, the plan will involve modifying the testing environment and the workbench(es) involved in implementing the idea. For example, if the idea relates to the test plan, the implemented idea will modify the test planning workbench. The components of a workbench (i.e., the input criteria, do procedures, check procedures, skills competency, tools, and exit criteria) can be deleted, modified, or expanded, as illustrated in the sketch that follows. Addressing management commitment and overcoming impediments may require changes to the IT management process.
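To make the workbench components concrete, the following Python sketch shows one possible representation of a workbench and of a component-level change made to implement an improvement idea. The class and function names and the example change are hypothetical illustrations, not the chapter’s notation.

    from dataclasses import dataclass, field
    from typing import List

    # Hypothetical representation of a testing workbench and a planned change to it.
    @dataclass
    class Workbench:
        name: str
        input_criteria: List[str] = field(default_factory=list)
        do_procedures: List[str] = field(default_factory=list)
        check_procedures: List[str] = field(default_factory=list)
        skills: List[str] = field(default_factory=list)
        tools: List[str] = field(default_factory=list)
        exit_criteria: List[str] = field(default_factory=list)

    @dataclass
    class WorkbenchChange:
        component: str    # e.g., "do_procedures", "tools", "exit_criteria"
        action: str       # "add", "modify", or "delete"
        description: str

    def apply_change(wb: Workbench, change: WorkbenchChange) -> None:
        # Record an added item directly; record modify/delete actions as notes.
        items = getattr(wb, change.component)
        if change.action == "add":
            items.append(change.description)
        else:
            items.append(f"[{change.action.upper()}] {change.description}")

    # Example: the improvement idea adds a check procedure to the test planning workbench.
    test_planning = Workbench(name="Test Planning")
    apply_change(test_planning, WorkbenchChange(
        component="check_procedures",
        action="add",
        description="Peer review of the test plan before exit",
    ))
    print(test_planning.check_procedures)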

Work Paper 24-26 is used to document the implementation work plan. The work plan should include the following:

  • Improvement idea. The name of the idea (from the idea list) that is to be implemented.

  • Objective. The objective to be accomplished by implementing the idea. (Note: This should be stated as a measurable objective.) In most instances, the measurable objective is the number of workdays to be compressed by implementing this idea. (Note: In evaluating the ideas for selection for implementation, an estimate has already been made of the number of workdays that could be compressed by using this idea.) If the objective is focused on the management process, the objective is a prerequisite to implementing an idea.

  • Current results. This is normally the number of workdays now required to complete the involved workbench (this comes from Step 2). Note that if the software testing process is immature, the current results most likely will be expressed in average days, including the variance associated with that workbench.

  • Expected results. This is the number of workdays to complete the task after the time-compression idea has been implemented. The difference between the current and expected results should equal the measurable objective.

  • Method of measurement. The unit of measurement is typically workdays. However, if there is a significant variance in the average workdays for the workbench, the method of measurement may need to account for that variance. For example, the method of measurement may measure four or five projects to get the new average and the new variance (see the sketch following this list).

  • Work tasks. Two types of work tasks are involved. One type addresses the obstacles/barriers associated with implementing this idea; the other comprises the actual tasks to implement the improvement idea. The barriers and obstacles that need to be overcome are those identified in Steps 4 and 5 that apply to this specific improvement idea. (Note: The work papers in Steps 4 and 5 also identify how to address each barrier/obstacle; thus, the task is the process to address the barrier/obstacle from the Step 4 and 5 work papers.)

    The tasks to implement the improvement normally involve modifying a workbench. They may add new do or check procedures, change the entrance and exit criteria (which may affect the amount of rework required), require additional training for the individuals performing the workbench, or add a new tool to the workbench.

    The changed workbench should change the work processes used in that segment of the software testing process. Modifications to the organization’s work processes must be coordinated with the standards committee/process engineering committee (i.e., the group responsible for defining, implementing, and modifying IT work processes).

  • Resources. The resources to perform a task will be the people’s time and any other resources involved in implementing the work task. Normally, it will just be the people’s time, but it may require some test time if automated procedures are modified.

  • Time frame. The time frame for starting the work task and the target completion date for the work task should also be documented as part of the work plan.
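The following Python sketch illustrates how the current and expected results might be calculated when workdays are the unit of measurement. The project data, the compression objective, and the use of a simple sample variance are assumptions made for illustration only.

    import statistics

    # Hypothetical workdays observed for the affected workbench on five recent projects.
    current_workdays = [12.0, 15.0, 11.0, 14.0, 13.0]

    current_avg = statistics.mean(current_workdays)      # current results (average workdays)
    current_var = statistics.variance(current_workdays)  # variability of the workbench

    compression_objective = 3.0                          # measurable objective: workdays to compress
    expected_avg = current_avg - compression_objective   # expected results after implementation

    print(f"Current results:  {current_avg:.1f} workdays (variance {current_var:.1f})")
    print(f"Expected results: {expected_avg:.1f} workdays")

After the idea has been implemented, the same calculation would be repeated on four or five new projects to obtain the new average and variance for comparison.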

Table 24-26. Software Testing Time Compression Tactical Work Plan

Objective to Accomplish:

Improvement Idea:

Objective | Current Results | Expected Results | Actual Results | Method of Measurement

Work Plan: Tasks | Resources | Time Frame (Start Date / Target Completion Date)

(The rows of this work paper are left blank for the team to complete.)

Executing the Work Plan

The work plan as defined in Work Paper 24-26 should be executed, and the plan as written should be followed. As the plan is executed, the actual start date and actual completion date for each task should be recorded. If the resources were inadequate or too extensive, a note indicating the actual resources expended should be made in the Resources column.

Checking the Results

When the work plan is complete, the actual results from the improved workbench should be measured and recorded, using the method of measurement specified in the work plan. If the actual results approximate the expected results, the improvement idea should be considered successful. If the actual results are better than the expected results (that is, more time was compressed than planned), the improvement idea was very successful. However, if the actual results are no better than the current results, the improvement idea can be considered unsuccessful. In that instance, an assessment needs to be made as to whether the idea was effectively implemented, whether some modification to the idea can make it successful, or whether the idea should be dropped for now and a new improvement idea implemented.
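A minimal Python sketch of this check follows, assuming workdays as the unit of measurement; the function name, the tolerance value, and the intermediate category are illustrative assumptions rather than part of the chapter’s method.

    def check_results(current: float, expected: float, actual: float,
                      tolerance: float = 0.5) -> str:
        """Classify the outcome of a time-compression idea, measured in workdays."""
        if actual <= expected - tolerance:
            return "very successful"   # more time compressed than expected
        if abs(actual - expected) <= tolerance:
            return "successful"        # actual results approximate expected results
        if actual >= current:
            return "unsuccessful"      # no compression achieved
        # Some compression, but less than expected: the team decides whether to
        # modify the idea and repeat the PDCA cycle or move on to the next idea.
        return "less successful than expected"

    # Example: 13 workdays currently, 10 expected, 10.3 measured after the change.
    print(check_results(current=13.0, expected=10.0, actual=10.3))  # successful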

Taking Action

If the results are successful, action should be taken to make the modification to the workbench permanent, incorporating the improvement idea. If the improvement idea was not successful, the action can be to modify the plan and re-execute it, or not to implement the improvement idea in a workbench. At that point, the team can either re-execute the PDCA cycle using a modified improvement idea or implement the next improvement idea.

You must identify a leader to implement the plan. Ideally, that leader would be a member of the agile implementation team; however, if the IT organization has a process to change processes, the plan should be led by someone in that improvement process.

Requisite Resources

Time-compression efforts should be considered projects. Therefore, they need to be budgeted and scheduled like any other project. However, the budgetary resources for implementing these projects can derive from any of the following:

  1. Charge the current test budget. The implementation costs can be charged to the testing project to which the implementers are currently assigned. The assumption is that if the idea works, that project’s workdays will be reduced, and that reduction should pay for the time-compression project.

  2. Charge the resources to an appropriate IT budget category. IT management can decide that time-compression projects are worthwhile, review their budget, and select an appropriate budgetary account in which to record the resources. If resources are budgeted for building/improving work processes, that account is the logical one in which to record time-compression projects. Another potential budgetary account is training.

  3. Establish a special budgetary account for time compression. IT management can allocate a certain percentage of their budget for time-compression projects. If they do this, they can have project personnel develop a test process work improvement plan and submit it to IT management for approval. Thus, staff members from different projects can become involved in suggesting and implementing ideas to compress software testing delivery time. IT management can control the work by approving the implementation work plan proposals. (Note: Some IT organizations allocate 2 to 5 percent of their testing budget for testing process improvement projects such as building an agile testing process.)

Quality Control Checklist

Work Paper 24-27 is a quality control checklist for Step 7. The investigation should focus on determining whether a specific aspect of the step was performed correctly or incorrectly.

Table 24-27. Quality Control Checklist for Step 7

Each item is answered YES or NO, with space for comments.

1. Has the agile implementation team gathered all the appropriate information related to the selected improvement idea from Steps 1 through 6?

2. Does the agile implementation team have a project planning process that it can use to implement the improvement idea?

3. Does the agile implementation team understand the “Plan-Do-Check-Act” cycle and its relationship to planning and implementing a time-compression improvement idea?

4. Can the agile implementation team express the improvement objective in measurable terms?

5. Does the agile implementation team know the current results from the workbench that is designated to be improved?

6. Has the agile implementation team agreed upon a method for measuring the expected results from implementing the time-compression idea?

7. Do the work tasks include both tasks to modify the workbench and tasks to address the obstacles/barriers that may impede implementing the improvement idea?

8. Has the agile implementation team been allocated the resources needed to implement the improvement idea?

9. After implementation, have the actual results been documented?

10. Was a reasonable process used to record the actual results?

11. If the actual results indicate a successful implementation of an improvement idea, has the agile implementation team taken the action necessary to make that improvement idea part of the affected workbench?

The agile implementation team should review these questions as a team. A consensus Yes or No response should be determined. “No” responses should be explained and investigated. If the investigation indicates that the particular aspect of the step was incorrectly performed, it should be repeated. (Note: Some teams prefer to review the quality control checklist before they begin the step to give them a fuller understanding of the intention of this step.)

Conclusion

An idea is not valuable until it is implemented. This step has examined the process for taking the highest-priority recommended idea and developing a plan to implement it. The plan needs to involve both modifying the work tasks of the affected workbench and addressing any obstacles/barriers that might impede implementation of the improvement idea. The process proposed is the “Plan, Do, Check, Act” cycle. At the end of this cycle, a decision is made whether to make the idea permanent, to modify and re-execute a revised plan for the idea, or to eliminate the idea and move on to the next high-priority improvement idea.

Summary

Agile processes are more effective than highly structured processes. Because agile processes seem to work best with small teams, and because most software testing projects are performed by small teams, testing is an ideal candidate for incorporating agility into its processes.

The process proposed in this chapter was built on these time-proven concepts:

  • It is far better to change the current process than to acquire/build and implement an entirely new process.

  • Focusing on time compression (i.e., reducing the time required to perform a task) yields testing effectiveness and process agility as by-products.

  • The quickest way to compress time in a testing process is to reduce process variability.

  • It is more important to determine that ideas are implementable than to select the best idea, which may not be doable.

  • Continuous small improvements are superior to a few major improvements.

  • Don’t make any improvements until you know that the organization, and those involved, will support the improvement (i.e., don’t begin a task that you know has a high probability of failure).

 
