Chapter 5. Building Software Tester Competency

Effective software testing will not occur unless the testers are competent. They must be proficient in both testing basics and in the use of their organization’s testing processes and tools. Building the competency of individual testers is as important as building test processes and acquiring test tools.

Many colleges and universities that offer curriculums in computer science do not include courses on software testing. In many IT organizations, it is assumed that if you can build software, you can test it. This concept is changing, albeit slowly.

The emphasis in this chapter will be on building the competency of the software tester. The chapter will use the Common Body of Knowledge (CBOK) for the Certified Software Tester (CSTE) designation offered by Software Certifications (www.softwarecertifications.org) and administered by the Quality Assurance Institute (www.qaiusa.com).

What Is a Common Body of Knowledge?

Many professions have the following characteristics in common:

  • A common body of knowledge

  • A code of ethics

  • An examination to demonstrate competency

  • Continuing professional education

Normally, senior members of the profession establish a board to oversee certification. This board comprises individuals who are well respected within the profession and who define the CBOK, administer the certification examination, develop the code of ethics, and oversee the profession’s continuing education policies and the conduct of certified professionals.

Software Certifications, an organization that offers IT certifications, recently approved the 2006 Common Body of Knowledge for Software Testers. The new CBOK contains ten knowledge categories, each of which will be discussed in this chapter.

The CBOK is what the certification board believes individuals need to know to practice software testing effectively. If, based on results of the CSTE examination, it is determined that an individual is competent in software testing, that individual will receive a certification in software testing.

Who Is Responsible for the Software Tester’s Competency?

IT management is responsible for the competency of software testers. They, or their designated subordinates, develop job descriptions for software testers, select the individuals who will become software testers, and approve the necessary resources for training.

This, of course, does not preclude the individual’s responsibility for competency. Would-be software testers need to demonstrate to management that they have the necessary competency to practice software testing. This competency can be obtained by self-study or by formal study paid for by the tester and conducted on his or her own time. Competency can also be acquired at the employer’s expense and on the employer’s time.

How Is Personal Competency Used in Job Performance?

To understand the role of personal competency in effective job performance, you need to understand the continuum of work processes (see Figure 5-1). A work process comprises both personal competency and the maturity or effectiveness of the work process. Maturity of the work process defines the amount of variance expected when the work procedures are followed precisely. Implicit in process maturity is the worker’s ability to understand and follow the process.

Figure 5-1. Continuum of work processes.

Personal competency is the experience that the worker brings to the job. This experience is assumed in the process, but not integrated into the process. For example, the process to write a computer program in a particular language assumes that the programmer knows how to code in that language and thus the focus is on how the language is used, not how to use the language. Consider a non-IT example: When performing an operation on an individual, it is assumed that the doctor has been trained in surgery prior to following the surgical process in a specific hospital.

As shown in Figure 5-1, the processes in a manufacturing environment are very mature and require workers to be only minimally competent. One would expect a worker whose responsibility is simply to follow a routine series of steps to recognize any obvious deviations. For example, if an auto worker is confronted with three black tires and one yellow tire, he should speak up.

In a “job shop” environment, on the other hand, no two like products are created (although the products are similar), so greater personal competency is required than in a manufacturing environment.

Professional work requires extensive personal competency and relies on less mature work processes. For most IT organizations, software testing is a professional work process. In many IT organizations, the testing processes serve more as guidelines than as step-by-step procedures for conducting tests. For example, the testing process may state that all branches in a computer program should be tested. If a tester encounters a large number of very similar decision instructions, it may be more prudent to spend the time on other tests rather than testing each decision both ways.

Using the 2006 CSTE CBOK

The CSTE CBOK can be used for any of the following purposes:

  • Developing the job description for software testers

  • Assessing an individual’s competency in software testing

  • Developing an examination to evaluate an individual’s competency

  • Formulating a curriculum to improve an individual’s software testing competency

Work Paper 5-1 presents the discussion draft of the 2006 CSTE CBOK in a format that will enable you to identify skills in which you are competent and those you need to improve. Each Knowledge Category in the CBOK lists multiple skills. For example, Knowledge Category 1, “Software Testing Principles and Concepts,” requires testers to be proficient in the vocabulary of testing.

Work Paper 5-1. 2006 Common Body of Knowledge

Knowledge Category 1: Software Testing Principles and Concepts The “basics” of software testing are represented by the vocabulary of testing, testing approaches, methods, and techniques, as well as the materials used by testers in performing their test activities.

  

FULLY COMPETENT

PARTIALLY COMPETENT

NOT COMPETENT

1

Testing Techniques

Understanding the various approaches used in testing, including static (e.g., desk checking), white-box (logic-driven), black-box (requirements-driven), load testing, coverage testing, and regression testing. Also included are the methods for designing and conducting tests.

   

2

Levels of Testing

Identifying testing levels such as unit, string, integration, systems recovery, acceptance, parallel, performance, and interface testing.

   

3

Testing Different Types of Software

The changes in the approach to testing when testing different development approaches such as batch processing, client/server, Web-based, object-oriented, and wireless systems.

   

4

Independent Testing

Testing by individuals other than those involved in product/system development.

   

5

Vocabulary

The technical terms used to describe various testing techniques, tools, principles, concepts, and activities.

   

6

The Multiple Roles of Software Testers

The objectives that can be incorporated into the mission of software testers. These include testing to determine whether requirements are met, testing for effectiveness and efficiency, testing user needs versus software specifications, and testing software attributes such as maintainability, ease of use, and reliability.

   

7

Tester’s Workbench

An overview of the process that testers use to perform a specific test activity, such as developing a test plan or preparing test data.

   

8

The V Concept of Testing

The V concept relates the build components of the development phases to the test components that occur during the test phases.

   

Knowledge Category 2: Building the Test Environment The test environment comprises all the conditions, circumstances, and influences surrounding and affecting software testing. The environment includes the organization’s policies, procedures, culture, attitudes, rewards, test processes, test tools, methods for developing and improving test processes, management’s support of software testing, as well as any test labs developed for the purpose of testing software and multiple operating environments. This category also includes ensuring the test environment fairly represents the production environment.

  

FULLY COMPETENT

PARTIALLY COMPETENT

NOT COMPETENT

1

Knowledge of Test Process Selection and Analysis

   
 

Concepts of Test Processes—The concepts of policies, standards, and procedures, and their integration into the test process.

   
 

Test Process Selection—Selecting processes that lead to efficient and effective testing activities and products.

   
 

Acquisition or Development of a Test Bed/Test Lab/Test Processes—Designing, developing, and acquiring a test environment that simulates the “real” world, including the capability to create and maintain test data.

   
 

Quality Control—Testing quality control to ensure that the test process has been performed correctly.

   
 

Test Process Analysis—Analyzing the test process to ensure

  1. Its effectiveness and efficiency

  2. Test objectives are applicable, reasonable, adequate, feasible, and affordable

  3. The test program meets the test objectives

  4. The correct test program is being applied to the project

  5. The test methodology, including the processes, infrastructure, tools, methods, and planned work products and reviews, is adequate to ensure that the test program is conducted correctly

  6. Test progress, performance, and process adherence are assessed to determine the adequacy of the test program

  7. Adequate, not excessive, testing is performed

   
 

Continuous Improvement—Identifying and making improvements to the test process using formal process improvement processes.

   
 

Adapting the Test Environment to Different Software Development Methodologies—Establishing the environment to properly test the methodologies used to build software systems, such as waterfall, Web-based, object-oriented, agile, and so forth.

   
 

Competency of the Software Testers—Providing the training necessary to ensure that software testers are competent in the processes and tools included in the test environment.

   

2

Test Tools

   
 

Tool Development and/or Acquisition—Understanding the processes for developing and acquiring test tools.

   
 

Tool Usage—Understanding how tools are used for automated regression testing, defect management, performance/load testing; understanding manual tools such as checklists, test scripts, and decision tables; using traceability tools, code coverage, and test case management.

   

3

Management Support for Effective Software Testing

   
 

Creating a tone that encourages testers to work in an efficient and effective manner.

   
 

Aligning test processes with organizational goals, business objectives, release cycles, and different developmental methodologies.

   

Knowledge Category 3: Managing the Test Project Software testing is a project with almost all the same attributes as a software development project. Software testing involves project planning, project staffing, scheduling and budgeting, communicating, assigning and monitoring work, and ensuring that changes to the project plan are incorporated into the test plan.

  

FULLY COMPETENT

PARTIALLY COMPETENT

NOT COMPETENT

1

Test Planning, Scheduling, and Budgeting

   
 

Alignment—Ensuring the test processes are aligned with organizational goals, user business objectives, release cycles, and different development methodologies.

   
 

Test Performance—Monitoring test performance for adherence to the plan, schedule and budget, reallocating resources as required, and averting undesirable trends.

   
 

Staffing—Acquiring, training, and retaining a competent test staff.

   
 

Management of Staff—Keeping staff appropriately informed, and effectively utilizing the test staff.

   
 

Differences Between Traditional Management and Quality Management—Traditional management uses a hierarchical structure, whereas quality management uses a flattened organizational structure.

   

2

Personal and Organizational Effectiveness

   
 

Communication Skills

  1. Written Communication—Providing written confirmation and explanation of a variance from expectations. Being able to describe on paper a sequence of events to reproduce the defect.

  2. Oral Communication—Demonstrating the ability to articulate a sequence of events in an organized and understandable manner.

  3. Listening Skills—Actively listening to what is said, asking for clarification when needed, and providing feedback.

  4. Interviewing Skills—Developing and asking questions for the purpose of collecting data for analysis or evaluation.

  5. Analyzing Skills—Determining how to use the information received.

   
 

Personal Effectiveness Skills

  1. Negotiation—Working effectively with one or more parties to develop options that will satisfy all parties.

  2. Conflict Resolution—Bringing a situation into focus and satisfactorily concluding a disagreement or difference of opinion between parties.

  3. Influence and Motivation—Influencing others to participate in a goal-oriented activity.

  4. Judgment—Applying beliefs, standards, guidelines, policies, procedures, and values to a decision.

  5. Facilitation—Helping a group to achieve its goals by providing objective guidance.

   
 

Project Relationships—Developing an effective working relationship with project management, software customers, and users.

   
 

Recognition—Showing appreciation to individuals and teams for work accomplished.

   
 

Motivation—Encouraging individuals to do the right thing and do it effectively and efficiently.

   
 

Mentoring—Working with testers to ensure they master the needed skills.

   
 

Management and Quality Principles—Understanding the principles needed to build a world-class testing organization.

   

3

Leadership

   
 

Meeting Chairing—Organizing and conducting meetings to provide maximum productivity over the shortest time period.

   
 

Facilitation—Helping the progress of an event or activity. Formal facilitation includes well-defined roles, an objective facilitator, a structured meeting, decision-making by consensus, and defined goals to be achieved.

   
 

Team Building—Aiding a group in defining a common goal and working together to improve team effectiveness.

   

Knowledge Category 4: Test Planning Testers need the skills to plan tests. Test planning assesses the business and technical risks of the software application and then develops a plan to determine whether the software minimizes those risks. Test planners must understand the development methods and environment to plan for testing effectively.

  

FULLY COMPETENT

PARTIALLY COMPETENT

NOT COMPETENT

1

Prerequisites to Test Planning

   
 

Identifying Software Risks—Demonstrating knowledge of the most common risks associated with software development.

   
 

Identifying Testing Risks—Demonstrating knowledge of the most common risks associated with software testing.

   
 

Identifying Premature Release Risk—Understanding how to determine the risk associated with releasing unsatisfactory, untested software products.

   
 

Risk Contributors—Identifying the contributors to risk.

   
 

Identifying Business Risks—Demonstrating knowledge of the most common risks associated with the business using the software.

   
 

Risk Methods—Understanding the strategies and approaches for identifying risks or problems associated with implementing and operating information technology, products, and processes; assessing their likelihood; and initiating strategies to test for those risks.

   
 

Risk Magnitude—Demonstrating the ability to calculate and rank the severity of a risk quantitatively.

   
 

Risk Reduction Methods—Understanding the strategies and approaches that can be used to minimize the magnitude of a risk.

   
 

Contingency Planning—Planning to reduce the magnitude of a known risk.

   

2

Test Planning Entrance Criteria

   
 

Success Criteria/Acceptance Criteria—Understanding the criteria that must be validated to provide user management with the information needed to make an acceptance decision.

   
 

Test Objectives—Understanding the objectives to be accomplished through testing.

   
 

Assumptions—Establishing the conditions that must exist for testing to be comprehensive and on schedule.

   
 

Issues—Identifying specific situations/products/processes that, unless mitigated, will impact forward progress.

   
 

Constraints—Limiting factors to success.

   
 

Entrance Criteria/Exit Criteria—Understanding the criteria that must be met prior to moving software to the next level of testing or into production.

   
 

Test Scope—Understanding what is to be tested.

   
 

Test Plan—Understanding the activities and deliverables to meet a test’s objectives.

   
 

Requirements/Traceability—Defining the tests needed and relating them to the requirements to be validated.

   
 

Estimating—Determining the resources and timeframes required to accomplish the planned activities.

   
 

Scheduling—Establishing milestones for completing the testing effort and their dependencies on meeting the rest of the schedule.

   
 

Staffing—Selecting the size and competency of the staff needed to achieve the test plan objectives.

   
 

Test Check Procedures—Incorporating test cases to ensure that tests are performed correctly.

   
 

Software Configuration Management—Organizing the components of a software system, including documentation, so that they fit together in working order.

   
 

Change Management—Modifying and controlling the test plan in relationship to actual progress and scope of system development.

   
 

Version Control—Understanding the methods to control, monitor, and achieve change.

   

Knowledge Category 5: Executing the Test Plan This category addresses the skills required to execute tests, design test cases, use test tools, and monitor testing.

  

FULLY COMPETENT

PARTIALLY COMPETENT

NOT COMPETENT

1

Test Design and Test Data/Scripts Preparation

   
 

Specifications—Ensuring test data/scripts meet the objectives included in the test plan.

   
 

Cases—Developing test cases, including techniques and approaches for validating the product, and determining the expected result for each test case.

   
 

Test Design—Understanding test design strategies and attributes.

   
 

Scripts—Developing the online steps to be performed in testing; focusing on the purpose and preparation of procedures; emphasizing entrance and exit criteria.

   
 

Data—Developing test inputs; using data generation tools; determining the data set or sub-sets to ensure a comprehensive test of the system; determining data that suits boundary value analysis and stress testing requirements.

   
 

Test Coverage—Achieving the coverage objectives specified in the test plan for specific system components.

   
 

Platforms—Identifying the minimum configuration and platforms on which the test must function.

   
 

Test Cycle Strategy—Determining the number of test cycles to be conducted during the test execution phase of testing; determining what type of testing will occur during each test cycle.

   

2

Performing Tests

   
 

Execute Tests—Performing the activities necessary to execute tests in accordance with the test plan and test design—including setting up tests, preparing test database(s), obtaining technical support, and scheduling resources.

   
 

Compare Actual Versus Expected Results—Determining whether the actual results meet expectations.

   
 

Documenting Test Results—Recording test results in the appropriate format.

   
 

Use of Test Results—Understanding how test results should be used and who has access to them.

   

3

Defect Tracking

   
 

Defect Recording—Recording defects to describe and quantify deviations from requirements/expectations.

   
 

Defect Reporting—Reporting the status of defects, including severity and location.

   
 

Defect Tracking—Monitoring defects from the time of recording until satisfactory resolution has been determined and implemented.

   

4

Testing Software Changes

   
 

Static Testing—Evaluating changed code and associated documentation at the end of the change process to ensure correct implementation.

   
 

Regression Testing—Testing the whole product to ensure that unchanged functionality performs as it did prior to implementing a change.

   
 

Verification—Reviewing requirements, design, and associated documentation to ensure they are updated correctly as a result of the change.

   

Knowledge Category 6: Test Status, Analysis, and Reporting Testers need to demonstrate the ability to develop status reports. These reports should show the status of the testing based on the test plan. Reporting should document what tests have been performed and the status of those tests. To properly report status, testers should review and conduct statistical analysis on the test results and discovered defects. The lessons learned from the test effort should be used to improve the next iteration of the test process.

  

FULLY COMPETENT

PARTIALLY COMPETENT

NOT COMPETENT

1

Metrics of Testing

   
 

Using quantitative measures and metrics to manage the planning, execution, and reporting of software testing.

   

2

Test Status Reports

   
 

Code Coverage—Monitoring the execution of software and reporting on the degree of coverage at the statement, branch, or path level.

   
 

Requirement Coverage—Monitoring and reporting the number of requirements tested, and whether they are correctly implemented.

   
 

Test Status Metrics—Understanding the following metrics:

  1. Metrics Used to Test—Includes metrics such as defect removal efficiency, defect density, and mean time to last failure.

  2. Complexity Measurements—Quantitative values, accumulated by a predetermined method, that measure the complexity of a software product.

  3. Project Metrics—The status of a project, including milestones, budget and schedule variance, and scope changes.

  4. Size Measurements—Methods primarily developed for measuring the software size of information systems, such as lines of code and function points.

  5. Defect Metrics—Values associated with the number or types of defects, usually related to system size, such as “defects/1000 lines of code” or “defects/100 function points.”

  6. Product Measures—Measures of a product’s attributes, such as performance, reliability, and usability.

   

3

Final Test Reports

   
 

Reporting Tools—Using word processing, database, defect tracking, and graphic tools to prepare test reports.

   
 

Test Report Standards—Defining the components that should be included in a test report.

   
 

Statistical Analysis—Demonstrating the ability to draw statistically valid conclusions from quantitative test results.

   

Knowledge Category 7: User Acceptance Testing The objective of software development is to meet the true needs of the user, not just the system specifications. Testers should work with the users early in a project to clearly define the criteria that would make the software acceptable in meeting user needs. Once the acceptance criteria have been established, they should, as much as possible, be integrated into all aspects of development.

  

FULLY COMPETENT

PARTIALLY COMPETENT

NOT COMPETENT

1

Concepts of Acceptance Testing

   
 

Understanding the difference between system test and acceptance test.

   

2

Acceptance Test Planning Process

   
 

Defining the acceptance criteria.

   
 

Developing an acceptance test plan for execution by user personnel.

   
 

Developing test data based on use cases.

   

3

Acceptance Test Execution

   
 

Executing the acceptance test plan.

   
 

Developing an acceptance decision based on the results of acceptance testing.

   
 

Signing off on successful completion of the acceptance test plan.

   

Knowledge Category 8: Testing Software Developed by Outside Organizations Many organizations do not have the resources to develop the type and/or volume of software needed to effectively manage their business. The solution is to obtain or contract for software developed by another organization. Software can be acquired by purchasing commercial off-the-shelf (COTS) software or by contracting for all or part of the software development to be done by outside organizations.

  

FULLY COMPETENT

PARTIALLY COMPETENT

NOT COMPETENT

1

Understanding the difference between testing software developed in-house and software developed by outside organizations.

   

2

Understanding the selection process for choosing COTS software.

   

3

Verifying that testers are able to

  1. Ensure that requirements are testable.

  2. Review the adequacy of the test plan to be performed by the outsourcing organization.

  3. Oversee acceptance testing.

  4. Issue a report on the adequacy of the software to meet the contractual specifications.

  5. Ensure compatibility of software standards, communications, change control, and so on between the two organizations.

   

4

Using the same approach as for in-house developed software, modified as needed based on the documentation available from the developer.

   

5

Understanding the following objectives:

  1. Testing the changed portion of the software

  2. Performing regression testing

  3. Comparing the documentation to the actual execution of the software

  4. Issuing a report regarding the status of the new version of the software

   

Knowledge Category 9: Testing Software Controls and the Adequacy of Security Procedures The software system of internal control includes the totality of the means developed to ensure the integrity of the software system and the products created by the software. Controls are employed to govern the processing components of software and to ensure that software processing conforms to the organization’s policies and procedures and to applicable laws and regulations.

  

FULLY COMPETENT

PARTIALLY COMPETENT

NOT COMPETENT

1

Principles and Concepts of a Software System of Internal Control and Security

   
 

Vocabulary of Internal Control and Security—Understanding the vocabulary of internal control and security, including terms such as risk, threat, control, exposure, vulnerability, and penetration.

   
 

Internal Control and Security Models—Understanding internal control and security models (specifically, the COSO [Committee of Sponsoring Organizations] model).

   

2

Testing the System of Internal Controls

   
 

Perform Risk Analysis—Determining the risk faced by the transactions/events processed by the software.

   
 

Determining the controls for each of the processing segments for transactions processing, including

  1. Transaction origination

  2. Transaction entry

  3. Transaction processing

  4. Database control

  5. Transaction results

   
 

Determining whether the identified controls are adequate to reduce the risks to an acceptable level.

   

3

Testing the Adequacy of Security for a Software System

   
 

Evaluating the adequacy of management’s security environment.

   
 

Determining the types of risks that require security controls.

   
 

Identifying the most probable points where the software could be penetrated.

   
 

Determining the controls at those points of penetration.

   
 

Assessing whether those controls are adequate to reduce the security risks to an acceptable level.

   

Knowledge Category 10: Testing New Technologies Testers require skills in their organization’s current technology, as well as a general understanding of the new information technology that might be acquired by their organization.

  

FULLY COMPETENT

PARTIALLY COMPETENT

NOT COMPETENT

1

Understanding the Challenges of New Technologies

   
 

New application architecture

   
 

New application business models

   
 

New communication methods

   
 

New testing tools

   

2

Evaluating New Technologies to Fit into the Organization’s Policies and Procedures

   
 

Assessing the adequacy of the controls within the technology and the changes to existing policies and procedures that will be needed before the new technology can be implemented effectively. This would include:

   
 

Testing new technology to evaluate actual performance versus supplier’s stated performance.

   
 

Determining whether current policies and procedures are adequate to control the operation of the new technology, and modifying them as needed to bring them up to date.

   
 

Assessing the need to acquire new staff skills to implement the new technology effectively.

   

For each skill, you should make one of the following three assessments:

  • Not Competent. It is a skill you do not have or a skill you do not believe you could use in the process of testing software. For example, for the vocabulary skill, you do not have a sufficient vocabulary to adequately discuss the job of software testing. Terms such as “regression testing,” “black box testing,” and “boundary value analysis” are not within your vocabulary.

  • Partially Competent. You have learned the skill but have not practiced it sufficiently to believe you have fully mastered it. For example, you understand regression testing and know what to do, but you have not practiced it enough to feel you could perform it effectively.

  • Fully Competent. You understand the skill, know what to do, and feel very confident that you can perform the skill effectively. For example, you can develop and execute a regression test with high confidence that you can identify changes that occurred in the unchanged portion of the software.

At this point, read each skill in Work Paper 5-1 and assess your competency in one of the three assessment categories.

To develop a competency score, total the number of skills you have checked in each of the three columns. Then, at the bottom of Work Paper 5-2, multiply the number of skills checked in the Fully Competent column by 3, the number in the Partially Competent column by 2, and the number in the Not Competent column by 1. Total those three amounts and divide by 120 (the number of skills assessed).

Work Paper 5-2. Evaluating Individual Competency

 

KNOWLEDGE CATEGORY

NUMBER OF SKILLS

FULLY COMPETENT

PARTIALLY COMPETENT

NOT COMPETENT

1

Software Testing Principles and Concepts

8

   

2

Building the Test Environment

12

   

3

Managing the Test Project

16

   

4

Test Planning

27

   

5

Executing the Test Plan

19

   

6

Test Status, Analysis and Reporting

8

   

7

User Acceptance Testing

5

   

8

Testing Software Developed by Outside Organizations

6

   

9

Testing Software Controls and the Adequacy of Security Procedures

11

   

10

Testing New Technologies

8

   
 

Total

120

   
 

Multiply Total By

 

3

2

1

 

Multiplied Total

    
 

Total the Sum in Each of the Three Columns

    
 

Divide by 120

    
 

Software Testing Competency Score

    

The resulting score will be between 1 and 3. A score of 3 indicates that you are a world-class software tester, whereas a score of 1 means that you are not competent in software testing. If your score is between 1 and 2, you do not yet have the basic skills necessary to perform software testing; if your score is between 2 and 3, you can consider yourself a software tester. The closer your score is to 3, the more competent you are.
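
To make the scoring and its interpretation concrete, the following Python sketch performs the Work Paper 5-2 arithmetic described above. It is illustrative only; the function names and the example counts are assumptions, not part of the CBOK.

# Illustrative sketch of the Work Paper 5-2 scoring arithmetic.
# Function names and example counts are hypothetical, not actual assessment data.

TOTAL_SKILLS = 120  # number of skills assessed in Work Paper 5-1

def competency_score(fully: int, partially: int, not_competent: int) -> float:
    """Weight each column (3, 2, 1), sum, and divide by the 120 skills assessed."""
    if fully + partially + not_competent != TOTAL_SKILLS:
        raise ValueError("every skill must be rated exactly once")
    weighted = fully * 3 + partially * 2 + not_competent * 1
    return weighted / TOTAL_SKILLS

def interpret(score: float) -> str:
    """Map the 1-to-3 score to the interpretation given in the chapter."""
    if score >= 2:
        return "software tester (the closer to 3, the more competent)"
    return "does not yet have the basic skills to perform software testing"

# Example: 70 skills rated Fully Competent, 30 Partially Competent, 20 Not Competent
score = competency_score(70, 30, 20)          # (70*3 + 30*2 + 20*1) / 120 = 2.42
print(f"Competency score: {score:.2f} -> {interpret(score)}")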

Developing a Training Curriculum

Every software testing organization should develop a curriculum for training software testers. When an individual is hired or transferred to become a software tester, that individual’s skill competency should be assessed. The competency assessment on Work Papers 5-1 and 5-2 can be used for that purpose. Based on that assessment, the individual can be placed into the curriculum at the appropriate point.

The following is a proposed curriculum to move individuals from “not competent” to “fully competent.”

  • Course 1: The Basics of Software Testing. Individuals need a basic understanding of the vocabulary, principles, and concepts for testing. Consider a job in the math profession: The basics include the ability to add, subtract, multiply, and divide. Without these basic abilities, it would be difficult to perform any significant mathematical computation. Likewise, without mastering the basics of software testing, one could not test effectively.

  • Course 2: The Process for Testing the Software System. Testers need to know the right way to test a software project. Without an understanding of how to prepare for testing or how to develop and execute a test plan, testers might just prepare and run test conditions. The equivalent to this course is the seven-step testing process presented in this book.

  • Course 3: Software Testing Tools. If the tester’s organization uses tools to test software, the tester should become proficient in the use of those tools. It is recommended that testers not be allowed to use a specific tool until they have been sufficiently trained.

  • Course 4: Test Case Design. Preparing the appropriate test cases is an important part of testing software. Testers need to know sources of test data, the various types of test data that can be prepared (for example, use cases), and how to prepare, use, and maintain those test cases.

  • Course 5: Variance Analysis and Defect Tracking. Testers need to know how to identify a variance from expected processes. Once they have identified the variance, testers need to know how to document that variance and how to track it until appropriate action has been taken.

  • Course 6: Preparing Test Reports. Testers need to know the type of reports that should be prepared, how to prepare them, who should get them, and how to present them in an acceptable manner.

  • Course 7: Test Process Improvement. Testers need to know how to use the results of testing many different projects to identify opportunities for improving the testing process.

Note

QAI offers public, in-house, and e-learning courses to assist you in improving your competency in software testing. For more information, visit www.qaiworldwide.org.

Table 5-1 cross-references the seven courses described in this chapter to the corresponding chapters in the book. If testers do not go to a formal course, a mentor should be assigned to help them master the material for each of the courses.

Table 5-1. Chapters Supporting the Software Tester’s Curriculum

COURSE NAME

SEE CHAPTER(S)

Course 1: The Basics of Software Testing

1–3

Course 2: The Process for Testing the Software System

6–13

Course 3: Software Testing Tools

4

Course 4: Test Case Design

9–10

Course 5: Variance Analysis and Defect Tracking

11

Course 6: Preparing Test Reports

11

Course 7: Test Process Improvement

4, 23

Using the CBOK to Build an Effective Testing Team

You can use Work Paper 5-3 to create a team that has mastery of all the competencies in the CSTE CBOK. Simply transfer the rating number you developed in Work Paper 5-2 to the corresponding column for each team member. For example, if team member A was rated “Partially Competent” in Knowledge Category 1, enter a 2 in the corresponding column of Work Paper 5-3.

Work Paper 5-3. Building Test Team Competency

 

CATEGORY

SOFTWARE TEST TEAM MEMBER

A

B

C

D

E

1

Software Testing Principles and Concepts

     

2

Building the Test Environment

     

3

Managing the Test Project

     

4

Test Planning

     

5

Executing the Test Plan

     

6

Test Status, Analysis and Reporting

     

7

User Acceptance Testing

     

8

Testing Software Developed by Outside Organizations

     

9

Testing Software Controls and the Adequacy of Security Procedures

     

10

Testing New Technologies

     

After all the team members’ ratings are recorded, you can determine whether there is adequate competency in each of the knowledge categories deemed necessary for this specific software project. For example, if knowledge of testing security was not necessary for a specific project, team members would not have to be competent in that particular knowledge category.

Generally, you would look for at least one member to be fully competent in each of the knowledge categories needed. However, if no one is fully competent in a specific skill category, having two or more individuals who are partially competent in that category would probably be adequate to make the team effective.
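
As a rough illustration of that rule, the Python sketch below checks each needed knowledge category against the team’s Work Paper 5-2 ratings (3 = fully competent, 2 = partially competent, 1 = not competent). The member ratings, category names, and function names are hypothetical.

# Illustrative check of the team-coverage rule described above: a knowledge category
# is covered if at least one member is fully competent (3), or, failing that, if two
# or more members are partially competent (2). Example data is hypothetical.
from typing import Dict, List

def category_covered(ratings: List[int]) -> bool:
    """Apply the coverage rule to one knowledge category's team ratings."""
    if any(r == 3 for r in ratings):                 # someone is fully competent
        return True
    return sum(1 for r in ratings if r == 2) >= 2    # two or more partially competent

def coverage_gaps(team: Dict[str, List[int]], needed: List[str]) -> List[str]:
    """Return the needed knowledge categories that the proposed team does not cover."""
    return [category for category in needed if not category_covered(team[category])]

# Example: ratings for team members A through E in two categories needed for a project
team_ratings = {
    "Test Planning":           [2, 2, 1, 1, 1],  # covered: two partially competent members
    "User Acceptance Testing": [1, 2, 1, 1, 1],  # not covered
}
print(coverage_gaps(team_ratings, list(team_ratings)))  # ['User Acceptance Testing']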

If the proposed software testing team does not have the necessary competency, you should take one of the following actions:

  • Replace one member with another tester who possesses the needed competency.

  • Add another tester to the team with the needed competency.

  • Assign a mentor to work with one or more team members to help them in testing tasks in which that knowledge competency is needed.

The following are additional guidelines that can help to build an effective team:

  • You can match team personalities by using techniques such as the Myers-Briggs Type Indicator (MBTI).

  • It is better to have a smaller test team than to add a tester who has a very negative attitude about testing or the assignment, which demoralizes the team and requires extra supervisory effort.

  • The number one skill for success in software testing is the ability to communicate. Any member of the test team who will interact with developers and/or users should be an effective communicator.

  • Test teams are most effective when there is only one leader. If two members want to set direction for the test team, conflict usually occurs.

Summary

Effective testing cannot occur unless the testers are competent. The best way to measure a tester’s competency is to assess him or her against the CSTE CBOK, which represents the most current thinking in software tester competency. You can use the results of this assessment for two purposes. The first is to determine the strengths and weaknesses of an individual software tester so that a plan can be developed to improve his or her competency. The second is to help build a software testing team that, as a group, has the necessary competency to test a specific software project.

 
