Chapter 7. Step 1: Organizing for Testing

Software development involves two concurrent processes: building the software and testing it. It does not matter whether testing is performed by developers or by an independent test team; what is important is that someone has responsibility for testing. This chapter defines the tasks to prepare for testing and to organize the test team.

If the developers do the testing, it is probably not necessary for the testers to ensure the project estimate is adequate or to develop a process to track the project’s status. However, when independent testers perform the testing, they should perform the last task unless they control their own test budget and the project team already has an effective project status reporting process.

Objective

Testing can fall short of expectations for two reasons. First, the necessary preparation may not be accomplished. This chapter and the next discuss the needed preparatory work prior to executing tests. Second, many testing tasks are never completed because inadequate resources are allocated.

The objective of this chapter is to enable you to define the scope of testing and ensure that adequate time and resources are available for testing. If testing is included within the developer’s budget, the test manager needs to ensure that the estimate is adequate for testing. The test manager must also ensure that overruns in project development will not restrict the amount of testing as defined in the test plan.

Workbench

Figure 7-1 shows the workbench for organizing for testing. The workbench input is the current documentation for the software system being tested. Five tasks are listed, but some of the tasks may have been completed prior to starting the first task. The output from this step is an organized test team, ready to begin testing.


Figure 7-1. Workbench for organizing testing.

Input

The following two inputs are required to complete this step:

  • Project documentation. This includes the project plan, objectives, scope, and defined outputs.

  • Software development process. This includes the procedures and standards to be followed during the project’s implementation.

Do Procedures

The following five tasks are recommended for organizing the testing process:

  1. Appoint the test manager. If testing is part of the in-house development effort, the project leader should determine who is responsible for testing. If testing is performed by independent testers, IT management should appoint the test manager.

  2. Define the scope of testing. The test manager defines the scope of testing, although all or part of the scope may be defined by testing standards.

  3. Appoint the test team. The test manager, project manager, or IT management should appoint the test team.

  4. Verify the development documentation. The test manager should verify that adequate development documentation is available to perform effective testing.

  5. Validate the test estimate and project status process. The test estimate can be developed by either the test manager or the project manager.

Task 1: Appoint the Test Manager

Regardless of whether testing is performed by in-house developers or independent testers, someone needs to be responsible for testing. The test manager has the following responsibilities:

  • Define the scope of testing

  • Appoint the test team

  • Define the testing process and the deliverables produced

  • Write/oversee the test plan

  • Analyze test results and write the test report(s)

If the test manager cannot fulfill these responsibilities alone, other individuals should be assigned to the test team to assist him or her. Responsibilities may change based on the size of the project.

The skills required to be a test manager vary with the size of the project. For small projects (1–2 testers), the more experienced tester can fulfill the manager role; for medium-sized projects (3–5 testers), the test manager must be both a tester and a manager; and for larger projects (6 or more testers), managerial skills are more important than testing skills.

Task 2: Define the Scope of Testing

Chapters 1–3 discussed the options available for testing scope. Traditionally, software testing validated that the specifications were implemented as specified. Previous discussions on testing scope expanded that definition to include determining whether user needs are met, identifying whether the project was implemented in the most effective and efficient manner, ensuring that the software system meets the desired quality factors, and testing for specialized software attributes, such as the adequacy of the system of internal control.

The scope of testing may be defined in the test mission. In other words, if the testers are to ensure that the system meets the user’s needs, the test manager would not have to define that in the test scope. Likewise, if testers are to assist users in developing and implementing an acceptance test plan, it would not have to be defined in the scope of testing for a specific project.

If the test mission is not specific about testing scope and/or there are specific objectives to be accomplished from testing, the test manager should define that scope. It is important to understand the scope of testing prior to developing the test plan.

Task 3: Appoint the Test Team

The test team is an integral part of the testing process. Without the formalization of the test team, it is difficult to introduce a formalized testing concept into the development process. Extensive “desk checking” by the individual who developed the work is not a cost-effective testing method. The disadvantages of a person checking his or her own work include the following:

  • Misunderstandings will not be detected, because the checker will assume what he or she was told is correct.

  • Improper use of the development process may not be detected, because the individual may not understand the process.

  • The individual may be “blinded” into accepting erroneous test results because he or she falls into the same trap during testing that led to the introduction of the defect in the first place.

  • The IT staff is optimistic about its ability to do defect-free work and thus sometimes underestimates the need for extensive testing.

  • Without a formal division between software development and software testing, an individual may be tempted to improve the system structure and documentation rather than allocate that time and effort to the testing process.

This section describes the four approaches to appointing a test team (see Figure 7-2).

Figure 7-2. Test team composition.

| TEST TEAM APPROACH | COMPOSITION OF TEST TEAM MEMBERS | ADVANTAGES | DISADVANTAGES |
| --- | --- | --- | --- |
| Internal | Project team | Minimize cost | Time allocation |
|  |  | Training | Lack of independence |
|  |  | Knowledge of project | Lack of objectivity |
| External | Quality assurance | Independent view | Cost |
|  | Professional testers | IT professionals | Overreliance |
|  |  | Multiple project testing experience | Competition |
| Non-IT | Users | Independent view | Cost |
|  | Auditors | Independence in assessment | Lack of IT knowledge |
|  | Consultants | Ability to act | Lack of project knowledge |
| Combination | Any or all of the above | Multiple skills | Cost |
|  |  | Education | Scheduling reviews |
|  |  | Clout | Diverse backgrounds |

Internal Team Approach

In the internal test team approach, the members of the project team become the members of the test team. In most instances, the systems development project leader is the test team project leader. It is not necessary to have all the development team members participate on the test team, although there is no reason why they cannot. What is important is that one member of the test team will be primarily responsible for testing other members’ work. The objective of the team is to establish a test process that is independent of the people who developed the particular part of the project being tested.

The advantage of the internal IT test team approach is that it minimizes the cost of the test team. The project team is already responsible for testing, so using project members on the test team is merely an alternate method for conducting the tests. Testing using the test team approach not only trains the project people in good testing methods, it cross-trains them in other aspects of the project. The internal IT test team approach uses those people in testing who are most knowledgeable about the project.

A potential disadvantage of the internal test team approach is that the team will not allocate appropriate time for testing. In addition, the project team members may lack independence and objectivity in conducting the test. The tendency is for the project team members to believe that the project solution is correct and thus find it difficult to challenge the project assumptions.

External Team Approach

Testing by an external team does not relieve the project personnel of responsibility for the correctness of the application system. The external team approach provides extra assurance of the correctness of processing. Typically, external testing occurs after the project team has performed the testing it deems necessary. The development team verifies that the system structure is correct, and the independent test team verifies that the system satisfies user requirements.

External testing is normally performed by either quality assurance personnel or a professional testing group in the IT department. While the project team is involved in all aspects of the development, the quality assurance test teams specialize in the testing process (although most individuals in these testing groups have experience in systems design and programming).

The advantage of external testers is the independent perspective they bring to the testing process. The group comprises IT professionals who have specialized in the area of testing. In addition, these groups have testing experience in multiple projects and, thus, are better able to construct and execute tests than those individuals who test only periodically.

The disadvantage of external IT testing is the additional cost required to establish and administer the testing function. Also, the development team may place too much reliance on the test team and thus fail to perform adequate testing themselves. In addition, the competition between the test team and the project team may result in a breakdown of cooperation, making it difficult for the test team to function properly.

Non-IT Team Approach

Testing also can be performed by groups external to the IT department. The three most common groups are users, auditors, and consultants. These groups represent the organizational needs and test on behalf of the organization. The advantage of a non-IT test team is that it provides an independent assessment. The non-IT group is not restricted by loyalty to the IT department when reporting unfavorable results, and it has a greater capacity than a group within the IT department to cause action to occur once problems are detected.

The disadvantage of non-IT testing is its cost. Generally, these groups are not familiar with the application and must first learn the application, and then learn how to test within the organization.

Combination Team Approach

In the combination test team approach, any or all the preceding groups can participate on a test team. The combination team can be assembled to meet specific testing needs. For example, if the project has significant financial implications, an auditor could be added to the test team; if the project has communication concerns, a communications consultant could be added.

The advantage of drawing on multiple skills for the test team is to enable a multi-disciplined approach to testing. In other words, the skills and backgrounds of individuals from different disciplines can be drawn into the test process. For some of the test participants, particularly users, it can be helpful to make them aware of both the system and the potential pitfalls in an automated system. In addition, a combination test team has greater clout in approving, disapproving, or modifying the application system.

The disadvantage of the combination test team is the cost associated with assembling and administering the test team. It also may pose some scheduling problems determining when the tests will occur. Finally, the diverse backgrounds of the test team may make the determination of a mutually acceptable test approach difficult.

Task 4: Verify the Development Documentation

Testers rely on the development documentation to prepare tests and to determine the desired results. If the development documentation is vague, testers cannot determine the expected results. For example, an expectation that the system should be “easy to use” is not specific enough to test. It is not good practice for the tester to define the expected result or to indicate results are “adequate.”

Prior to test planning, it is important to determine the completeness and correctness of the development documentation. In organizations where good development documentation standards exist, and IT management enforces compliance with those standards, this task is not necessary. In that case, however, the testers must still have a thorough understanding of the development documentation standards.

Testers should be concerned that the documentation process will fail to

  • Assist in planning and managing resources

  • Help to plan and implement testing procedures

  • Help to transfer knowledge of software development throughout the life cycle

  • Promote common understanding and expectations about the system within the organization and—if the software is purchased—between the buyer and seller

  • Define what is expected and verify that it is what is delivered

  • Provide managers with technical documents to review at the significant development milestones, to determine that requirements have been met and that resources should continue to be expended

Development Phases

Programs and systems are developed in phases, from the initial idea for a system to a properly working system. The terminology used to identify these inputs, phases, and the stages within these phases is defined in the following list:

  • Initiation. The objectives and general definition of the software requirements are established during the initiation phase. Feasibility studies, cost/benefit analyses, and the documentation prepared in this phase are determined by the organization’s procedures and practices.

  • Development. During the development phase, the requirements for the software are determined and then the software is defined, specified, programmed, and tested. The following documentation is prepared during the four stages of this phase:

    • Definition. During the definition stage, the requirements for the software and documentation are determined. The functional requirements document and the data requirements document should be prepared.

    • Design. During this stage, the design alternatives, specific requirements, and functions to be performed are analyzed and a design is specified. Documents that may be prepared include the system/subsystem specification, program specification, database specification, and test plan.

    • Programming. During the programming stage, the software is coded and debugged. Documents that should be prepared during this stage include the user manual, operations manual, program maintenance manual, and test plan.

    • Testing. During the test stage, the software and related documentation are evaluated and the test analysis report is prepared.

  • Operation. During the operation phase, the software is maintained, evaluated, and changed as additional requirements are identified.

The 14 documents needed for system development, maintenance, and operations are listed in Figure 7-3 and described in the following list:

  • Project request. The purpose of the project request document is to provide a means for users to request the development, purchase, or modification of software or other IT-related services. It serves as the initiating document in the software life cycle and provides a basis for communication with the requesting organization to further analyze system requirements and assess the possible effects of the system.

  • Feasibility study. Feasibility studies help analyze system objectives, requirements, and concepts; evaluate alternative approaches for achieving objectives; and identify proposed approaches. The feasibility study document, in conjunction with a cost/benefit analysis, should help management make decisions to initiate or continue an IT project or service. The study can be supplemented with an appendix containing details of a cost/benefit analysis or considered with a separate cost/benefit analysis document.

  • Cost/benefit analysis. Such analyses can help managers, users, designers, and auditors evaluate alternative approaches. The analysis document, in conjunction with the feasibility study document, should help management decide to initiate or continue an IT project or service.

  • Software summary. This document is used for very small projects to substitute for other development-phase documentation when only a minimal level of documentation is needed.

    Figure 7-3. Documentation within the software life cycle.

    • Initiation phase: project request document, feasibility study document, cost/benefit analysis document.

    • Development phase:

      • Definition stage: functional requirements document, data requirements document.

      • Design stage: system/subsystem specification, program specification, database specification, test plan.

      • Programming stage: user manual, operations manual, program maintenance manual, test plan.

      • Test stage: test analysis report.

    • Operation phase: uses and updates many of the initiation and development phase documents.

    • Software summary: prepared in place of the documents above for very small projects.

  • Functional requirements. The purpose of the functional requirements document is to provide a basis for users and designers to mutually develop an initial definition of the software, including the requirements, operating environment, and development plan.

  • Data requirements. During the definition stage, the data requirements document provides data descriptions and technical information about data collection requirements.

  • System/subsystem specifications. Designed for analysts and programmers, this document specifies requirements, operating environment, design characteristics, and program specifications.

  • Program specification. The purpose of the program specification is to specify program requirements, operating environment, and design characteristics.

  • Database specifications. This document specifies the logical and physical characteristics of a particular database.

  • User manual. Written in non-technical terminology, this manual describes system functions so that user organizations can determine their applicability and when and how to use them. It should serve as a reference document for preparation of input data and parameters and for interpretation of results.

  • Operations manual. The purpose of this manual is to provide computer operation personnel with a description of the software and its required operational environment.

  • Program maintenance manual. This manual provides the information necessary to understand the programs, their operating environment, and their maintenance procedures.

  • Test plan. This document provides detailed specifications, descriptions, and procedures for all tests and test data reduction and evaluation criteria.

  • Test analysis report. The purpose of the test analysis report is to document test results, present the proven capabilities and deficiencies for review, and provide a basis for preparing a statement of software readiness for implementation.

The standards for preparing documentation, as developed by your IT organization, are the second input to this test process.

Measuring Project Documentation Needs

The formality, extent, and level of detail of the documentation to be prepared depend on the organization’s IT management practices and the project’s size, complexity, and risk. What is adequate for one project may be inadequate for another.

Too much documentation can also be wasteful. An important part of testing documentation is to determine first that the right documentation is prepared; there is little value in confirming that unneeded documentation is adequately prepared.

The testing methodology uses 12 criteria to establish the need for documentation:

  • Originality required. The uniqueness of the application within the organization.

  • Degree of generality. The amount of rigidity associated with the application and the need to handle a variety of situations during processing.

  • Span of operation. The percentage of total corporate activities affected by the system.

  • Change in scope and objective. The frequency of expected change in requirements during the life of the system.

  • Equipment complexity. The sophistication of the hardware and communications lines needed to support the application.

  • Personnel assigned. The number of people involved in developing and maintaining the application system.

  • Developmental cost. The total dollars required to develop the application.

  • Criticality. The importance of the application system to the organization.

  • Average response time to program change. The average amount of time available to install a change to the application system.

  • Average response time to data input. The average amount of time available to process an application transaction.

  • Programming language. The language used to develop the application.

  • Concurrent software development. Other applications and support systems that need to be developed concurrently to fulfill the total mission.

A five-point weighting system is used for each of the 12 criteria, as shown in Figure 7-4. For example, if two people have been assigned to the project, a weight of 1 is allocated for criterion 6, but if seven people were assigned, a weight of 3 would be used.

Figure 7-4. Example of weighting criteria.

| CRITERION | WEIGHT 1 | WEIGHT 2 | WEIGHT 3 | WEIGHT 4 | WEIGHT 5 |
| --- | --- | --- | --- | --- | --- |
| 1. Originality required | None—reprogram on different equipment | Minimum—more stringent requirements | Limited—new interfaces | Considerable—apply existing state of the art to environment | Extensive—requires advance in state of the art |
| 2. Degree of generality | Highly restricted—single purpose | Restricted—parameterized for a range of capacities | Limited flexibility—allows some change in format | Multipurpose—flexible format, range of subjects | Very flexible—able to handle a broad range of subject matter on different equipment |
| 3. Span of operation | Local or utility | Small group | Department | Division | Entire corporation |
| 4. Change in scope and objective | None | Infrequent | Occasional | Frequent | Continuous |
| 5. Equipment complexity | Single machine—routine processing | Single machine—routine processing, extended peripheral system | Multicomputer—standard peripheral system | Multicomputer—advanced programming, complex peripheral system | Master control system—multicomputer, auto input/output, and display equipment |
| 6. Personnel assigned | 1 to 2 | 3 to 5 | 6 to 10 | 11 to 18 | More than 18 |
| 7. Developmental cost ($) | 1K to 10K | 10K to 50K | 50K to 200K | 200K to 500K | More than 500K |
| 8. Criticality | Limited to data processing | Routine corporate operations | Important corporate operations | Area/product survival | Corporate survival |
| 9. Average response time to program change | 2 or more weeks | 1 to 2 weeks | 3 to 7 days | 1 to 3 days | 1 to 24 hours |
| 10. Average response time to data input | 2 or more weeks | 1 to 2 weeks | 1 to 7 days | 1 to 24 hours | 0 to 60 minutes |
| 11. Programming languages | High-level language | High-level and limited assembly language | High-level and extensive assembly language | Assembly language | Machine language |
| 12. Concurrent software development | None | Limited | Moderate | Extensive | Exhaustive |

Work Paper 7-1 should be used in developing the total weighted documentation score, as follows:

  • Determine the weight for each of the 12 criteria. This is done by determining which weights for each criterion are appropriate for the application being tested. The descriptive information in the five weight columns should be the basis of this determination.

  • Enter the weight number on Work Paper 7-1 for each of the 12 criteria. For example, if under the originality criterion weight 5 is most applicable, enter 5 in the Weight column.

  • Total the weights for the 12 criteria. The minimum score is 12; the maximum is 60.
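
To make the tallying concrete, the following minimal Python sketch (an illustration, not part of the methodology) computes the total weighted score from Work Paper 7-1. The criterion names come from this chapter; the sample weights are hypothetical.

```python
# Work Paper 7-1 tally: each of the 12 criteria receives a weight from 1 to 5.
CRITERIA = [
    "Originality required", "Degree of generality", "Span of operation",
    "Change in scope and objective", "Equipment complexity",
    "Personnel assigned", "Developmental cost", "Criticality",
    "Average response time to program change",
    "Average response time to data input", "Programming languages",
    "Concurrent software development",
]

def total_weighted_score(weights: dict) -> int:
    """Sum the 12 criterion weights; the total falls between 12 and 60."""
    missing = set(CRITERIA) - set(weights)
    if missing:
        raise ValueError("Missing criteria: %s" % sorted(missing))
    if any(not 1 <= w <= 5 for w in weights.values()):
        raise ValueError("Each weight must be between 1 and 5")
    return sum(weights[c] for c in CRITERIA)

# Hypothetical project: seven people assigned (weight 3 for criterion 6),
# minimal weights everywhere else.
example = {c: 1 for c in CRITERIA}
example["Personnel assigned"] = 3
print(total_weighted_score(example))  # prints 14
```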

Work Paper 7-1. Calculation of Total Weighted Documentation Criteria Score

| CRITERION | WEIGHT | EXPLANATION |
| --- | --- | --- |
| 1. Originality required |  |  |
| 2. Degree of generality |  |  |
| 3. Span of operation |  |  |
| 4. Change in scope and objective |  |  |
| 5. Equipment complexity |  |  |
| 6. Personnel assigned |  |  |
| 7. Developmental cost |  |  |
| 8. Criticality |  |  |
| 9. Average response time to program change |  |  |
| 10. Average response time to data input |  |  |
| 11. Programming languages |  |  |
| 12. Concurrent software development |  |  |
| Total Weighted Criteria Score: |  |  |

The weighted score is used in determining what specific documents should be prepared for the software system being tested.

Determining What Documents Must Be Produced

Figure 7-5 relates the total weighted criteria score in Work Paper 7-1 to the previously described software documents and recommends which document testers should prepare. The need for several of the documents depends on the situation. (For example, database specifications and data requirement documents are usually required for systems using database technology.) A project request document is needed in organizations that require formal approvals before conducting a feasibility study. Cost/benefit analysis documents are needed in organizations requiring that such analyses be performed before a project is put into development.


Figure 7-5. Total weighted documentation criteria versus required document types.

With the total weighted criteria score developed, Figure 7-5 can be used as follows:

  • The appropriate row for selecting documents is determined by cross-referencing the score developed in Work Paper 7-1 to the score in the Total Weighted Criteria column. Some of the scores in this column overlap to accommodate highly critical projects, regardless of their scores.

  • For the row selected, the columns indicate which documents are needed.

If the project did not generate these documents, the test team should question the documentation. If unneeded documents were prepared, the test team should challenge the need for maintaining them.

 

Figure 7-6 illustrates a simpler method to determine the level of documentation needed. The four levels of documentation are:

  1. Minimal. Level 1 documentation applies to single-use programs of minimal complexity. This documentation should identify the type of work being produced and describe what the program actually does. The documentation that results from developing the programs (e.g., program abstract, compile listing, test cases) should be retained as well. The criteria for categorizing a program as level 1 can be its expected use or its cost to develop (in hours or dollars) and may be modified for the particular requirements of the installation. Suggested cost criteria are programs requiring less than one worker-month of effort or less than $1,000 (these are not assumed to be equal).

  2. Internal. Level 2 documentation applies to special-purpose programs that, after careful consideration, appear to have no sharing potential and to be designed for use only by the requesting department. Large programs with a short life expectancy also fall into this category. The documentation required (other than level 1) includes input/output formats, setup instructions, and sufficient comments in the source code to provide clarification in the compile listing. The effort spent toward formal documentation for level 2 programs should be minimal.

  3. Working document. Level 3 documentation applies to programs that are expected to be used by several people in the same organization or that may be transmitted on request to other organizations, contractors, or grantees. This level should include all documentation types. The documentation should be typed but need not be in a finished format suitable for publication. Usually, it is not formally reviewed or edited; however, certain programs that are important to the using organization but not considered appropriate for publication should undergo a more stringent documentation review.

  4. Formal publication. Level 4 documentation applies to programs that are of sufficient general interest and value to be announced outside the originating installation. This level of documentation is also desirable if the program is to be referenced by a technical publication or paper. Programs that are critical to installation activities should also be included at this level; they should be formally documented, reviewed in depth, and subjected to configuration control procedures. Recurring applications (payroll, for example) should also be included at this level so that an accurate history is maintained of how the system has conformed to changing laws, rules, and regulations.

Figure 7-6. Alternate method for determining documentation.

| LEVEL | USE | DOCUMENTATION ELEMENTS | EXTENT OF EFFORT |
| --- | --- | --- | --- |
| 1 | Minimal | Software summary plus any incidentally produced documentation. | No special effort; general good practice. |
| 2 | Internal | Level 1 plus user manual and operations manual. | Minimal documentation effort spent on informal documentation. No formal documentation effort. |
| 3 | Working document | Level 2 plus functional requirements document, program specification, program maintenance manual, test plan, test analysis report, system/subsystem specification, and feasibility study document.[*] | All basic elements of documentation should be typewritten, but need not be prepared in finished format for publication or require external edit or review. |
| 4 | Formal publication | Level 3 produced in a form suitable for publication.[*] | At a minimum, all basic elements prepared for formal publication, including external review and edit. |

[*] In addition, the following documents should be prepared, depending on the situation: data requirements, database specification, project report, and cost/benefit analysis.

Figure 7-6 summarizes the criteria for determining these levels of documentation. Additional criteria specific to an installation regarding program-sharing potential, life expectancy, and use frequency should also be considered when determining documentation levels.
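
The level-selection criteria can also be expressed as a simple decision procedure. The sketch below is one possible encoding in Python: the level-1 cost thresholds (one worker-month or $1,000) come from the text, while the boolean inputs and their precedence are a paraphrase of the criteria above, not an official algorithm.

```python
def documentation_level(single_use: bool,
                        cost_dollars: float,
                        effort_worker_months: float,
                        sharing_potential: bool,
                        announced_externally: bool,
                        critical_or_recurring: bool) -> int:
    """Return a documentation level (1=minimal, 2=internal,
    3=working document, 4=formal publication)."""
    # Level 4: general external interest, programs critical to installation
    # activities, or recurring applications such as payroll.
    if announced_externally or critical_or_recurring:
        return 4
    # Level 3: expected use by several people or other organizations.
    if sharing_potential:
        return 3
    # Level 1: single-use programs of minimal complexity; the text suggests
    # less than one worker-month of effort or less than $1,000.
    if single_use and (effort_worker_months < 1 or cost_dollars < 1000):
        return 1
    # Level 2: special-purpose programs with no sharing potential.
    return 2

# Example: a department-only program costing $15,000 -> level 2.
print(documentation_level(False, 15000.0, 3.0, False, False, False))
```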

You can determine which of the four documentation levels is appropriate:

  • As an alternative to the total weighted criteria score method.

  • As a means of validating the total weighted criteria score for the application system. If both methods indicate the same types of documentation, you have greater assurance that the indicated documentation is correct.

Determining the Completeness of Individual Documents

Testers should use Work Paper 7-2 to document the results of the completeness test. If the documentation does not meet a criterion, the Comments column should be used to explain the deficiency. This column becomes the test report on the completeness of the documentation.

Work Paper 7-2. Testing Documentation Completeness

| COMPLETENESS CRITERION | ADEQUATE | INADEQUATE | COMMENTS |
| --- | --- | --- | --- |
| 1. Content |  |  |  |
| 2. Audience |  |  |  |
| 3. Redundancy |  |  |  |
| 4. Flexibility |  |  |  |
| 5. Size |  |  |  |
| 6. Combining and expanding of document types |  |  |  |
| 7. Format |  |  |  |
| 8. Content sequence |  |  |  |
| 9. Documenting of multiple programs or multiple files |  |  |  |
| 10. Section titles |  |  |  |
| 11. Flowcharts and decision tables |  |  |  |
| 12. Forms |  |  |  |

The 12 criteria used to evaluate the completeness of a document are as follows:

  • Content. The suggested content for all the documents (except the software summary) is included in a later section. A table of contents for each document is followed by a brief description of each element within the document. These document content guidelines should be used to determine whether the document contains all the needed information.

  • Audience. Each document type is written for a particular audience. The information should be presented with the terminology and level of detail appropriate to the audience.

  • Redundancy. The 14 document types in this section have some redundancy. In addition, most document types are specific (e.g., descriptions of input, output, or equipment). Information that should be included in each of the document types differs in context and sometimes in terminology and level of detail because it is intended to be read by different audiences at different points in the software life cycle.

  • Flexibility. Flexibility in the use of the document results from the organization of its contents.

  • Size. Each document-type outline can be used to prepare documents that range in size from a few to several hundred pages. Length depends on the size and complexity of the project and the project manager’s judgment as to the level of detail necessary for the environment in which the software will be developed or run.

  • Combining and expanding document types. It is occasionally necessary to combine several document types under one cover or to produce several volumes of the same document type. Document types that can be combined are manuals for users, operations, and program maintenance. The contents of each document type should then be presented with the outline (e.g., Part I—Users, Part II—Operations, and Part III—Program Maintenance).

    For large systems, you can prepare a document for each module. Sometimes, the size of a document may require it to be issued in multiple volumes to allow ease of use. In such cases, the document should be separated at a section division (e.g., the contents of the test plan may be divided into sections of plan, specifications and evaluation, and specific test descriptions).

  • Format. The content guidelines have been prepared in a generally consistent format. This particular format has been tested, and its use is encouraged.

  • Content sequence. In general, the order of the sections and paragraphs in a particular document type should be the same as shown in the content guidelines. The order may be changed if it significantly enhances the presentation.

  • Documenting multiple programs or multiple files. Many of the document content outlines anticipate and are adaptable to documenting a system and its subsystems, multiple programs, or multiple files. All these outlines can, of course, be used for a single system, subsystem, program, database, or file.

  • Section titles. These titles are generally the same as those shown in the content guidelines. They may be modified to reflect terminology unique to the software being documented if the change significantly enhances the presentation. Sections or paragraphs may be added or deleted as local requirements dictate.

  • Flowcharts and decision tables. The graphic representations of some problem solutions in the form of flowcharts or decision tables may be included in or appended to the documents produced.

  • Forms. The use of specific forms depends on organizational practices. Some of the information specified in a paragraph in the content guidelines may be recorded on such forms.

Determining Documentation Timeliness

Documentation that is not current is worthless. Most IT professionals believe that if one part of the documentation is incorrect, other parts are probably incorrect, and they are reluctant to use it.

The documentation test team can use any or all of the following four tests to validate the timeliness of documentation. These tests can be performed on complete documents or on parts of documents. Testers familiar with statistics can sample the documentation and validate the timeliness of the sample; they should strive for a 95 percent confidence level that the documentation is current (a sample-sizing sketch follows this list).

  • Use the documentation to change the application. Timeliness can be validated with the same test process described in the preceding section. The timeliness test enables the tester to search for and confirm consistency between the various documents and to determine whether the documentation supports the operational system.

  • Compare the code with the documentation. This test uses the current version of the programs as the correct basis for documentation. This test is usually done on a sampling basis; the tester randomly selects several parts of the program and traces them to the appropriate levels of documentation. The objective is to determine whether the code is properly represented in the documentation. Because this test is done statistically, a few variations might imply extensive segments of obsolete documentation.

  • Confirm documentation timeliness with documentation preparers. The individuals who prepare the documentation should be asked whether it is current. Specific questions include:

    • Is this documentation 100 percent representative of the application in operation?

    • Is the documentation changed every time that a system change is made?

    • Do the individuals who change the system rely on the documentation as correct?

  • Confirm the timeliness of documentation with end users. End users should be asked whether the documentation for the system is current. Because end users might not be familiar with all the contents of the documentation, they may need to be selected on a sampling basis and asked about specific pieces of documentation. Again, because sampling is used, a few variances may mean extensive amounts of obsolete documentation.
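
For testers who want to size such a sample, the standard normal-approximation formula for estimating a proportion can be used. The sketch below is a generic statistical aid, not a procedure from this methodology; the worst-case proportion p = 0.5 and the ±10 percent margin of error are assumptions chosen for illustration.

```python
import math

def sample_size(z: float = 1.96,          # z-value for 95% confidence
                margin_of_error: float = 0.10,
                p: float = 0.5) -> int:
    """Sample size needed to estimate the proportion of documentation
    items that are current, using the normal approximation."""
    return math.ceil((z ** 2) * p * (1 - p) / margin_of_error ** 2)

print(sample_size())  # 97 documentation items at 95% confidence, +/-10%
```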

Task 5: Validate the Test Estimate and Project Status Reporting Process

Troubled projects have two common characteristics: The project estimate is inadequate and the status report of the development effort is misleading.

The objective of validating the project estimate is to determine what resources will be available to produce and test software. Resources include staff, computer time, and elapsed time. Because a good estimate shows when and how costs will be incurred, it can be used not only to justify software development and testing but also as a management control tool.

Testers need to know the progress of the system under development. The purpose of project management systems and accounting systems is to monitor this progress. However, many of these systems are more budget- and schedule-oriented than project completion–oriented.

The tester’s main concern during the development is that inadequate resources and time will be allocated to testing. Because much of the testing will be performed after development is complete, the time period between completing development and the due date for production may be inadequate for testing.

There are three general concerns regarding available time and resources for testing:

  • Inaccurate estimate. The estimate for resources and time will not be sufficient to complete the project as specified.

  • Inadequate development process. The tools and procedures will not enable developers to complete the project within the time constraints.

  • Incorrect status reporting. The project leaders will not know the correct status of the project during early developmental stages and thus cannot take the necessary corrective action in time to meet the scheduled completion dates.

Validating the Test Estimate

Many software projects are essentially innovative, and both history and logic suggest that cost overruns may be the result of an ineffective estimating system. Software cost estimating is a complicated process because project development is influenced by a large number of variables, many of which are subjective, non-quantifiable, and interrelated in complex ways.

Some reasons for not obtaining a good estimate include:

  • A lack of understanding of the process of software development and maintenance

  • A lack of understanding of the effects of various technical and management constraints

  • A view that each project is unique, which inhibits project-to-project comparisons

  • A lack of historic data against which the model can be checked

  • A lack of historic data for calibration

  • An inadequate definition of the estimate’s objective (whether it is intended as a project management tool or purely to aid in making a go-ahead decision) and at what stage the estimate is required so that inputs and outputs can be chosen appropriately

  • Inadequate specifications of the scope of the estimate (what is included and what is excluded)

  • An inadequate understanding of the premises on which the estimate is based

Strategies for Software Cost Estimating

There are five commonly used methods for estimating the cost of developing and maintaining a software system:

  • Seat-of-the-pants method. This method, which is often based on personal experience, is still very popular because no better method has been proven. One of its problems is that each estimate is based on different experience, and therefore different estimates of the cost of a single project may vary widely. A second problem is that the estimator must have experience with a similar project of a similar size. Experience does not transfer to systems larger than those used for comparison, nor to systems with totally different content.

  • Constraint method. The constraint method is equivalent to taking an educated guess. Based on schedule, cost, or staffing constraints, a manager agrees to develop the software within the constraints. The constraints are not related to the complexity of the project. In general, this method will result in delivery of the software within the specified constraints, but with the specification adjusted to fit the constraints.

  • Percentage-of-hardware method. This method is based on the following two assumptions:

    • Software costs are a fixed percentage of hardware costs.

    • Hardware cost estimates are usually reasonably accurate.

    A study on estimating has indicated that the first of these assumptions is not justified.

  • Simulation method. Simulation is widely used in estimating life cycle support costs for hardware systems, but it is not appropriate for software cost estimating because it is based on a statistical analysis of hardware failure rates and ignores logistics for which there is no software equivalent.

  • Parametric modeling method. Parametric models comprise the most commonly used estimating method and are described in the following section.

Parametric Models

Parametric models can be divided into three classes: regression, heuristic, and phenomenological.

  • Regression models. The quantity to be estimated is mathematically related to a set of input parameters. The parameters of the hypothesized relationship are arrived at by statistical analysis and curve fitting on an appropriate historical database. There may be more than one relationship to deal with different databases, different types of applications, and different developer characteristics.

  • Heuristic models. In a heuristic model, observation and interpretation of historical data are combined with supposition and experience. Relationships between variables are stated without justification. The advantage of heuristic models is that they need not wait for formal relationships to be established that describe how the cost-driving variables are related. Over a period of time, a given model can become very effective in a stable predicting environment. If the model fails, it is adjusted to deal with the situation. It therefore becomes a repository for the collected experiences and insights of the designers.

  • Phenomenological models. The phenomenological model is based on a hypothesis that the software development process can be explained in terms of some more widely applicable process or idea. For example, the Putnam model is based on the belief that the distribution of effort during the software life cycle has the same characteristics as the distribution of effort required to solve a given number of problems given a constant learning rate.
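
To illustrate the regression class, a power-law relationship of the form effort = a × size^b can be fitted to a historical project database by ordinary least squares on log-transformed data. The sketch below uses only the Python standard library; the sample (size, effort) pairs are invented for demonstration and do not come from any real database.

```python
import math

def fit_power_law(sizes, efforts):
    """Fit effort = a * size**b by least squares in log-log space."""
    xs = [math.log(s) for s in sizes]
    ys = [math.log(e) for e in efforts]
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    b = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
    a = math.exp(mean_y - b * mean_x)
    return a, b

# Invented historical database: (KLOC, person-months) per completed project.
sizes = [10, 25, 50, 100]
efforts = [26, 72, 160, 350]
a, b = fit_power_law(sizes, efforts)
print("effort ~= %.2f * size^%.2f" % (a, b))
```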

Most of the estimating models follow a similar pattern, based on the following six steps; a minimal code sketch of the pattern follows the list. Not all steps occur in all models. For example, some models do not initially perform a total project labor or cost estimate, but start by estimating the different phases separately, so Step 4 aggregates the separate estimates instead of dividing up the total estimate. Similarly, the adjustments for special project characteristics may occur between Steps 1 and 2 as well as, or instead of, between Steps 2 and 3.

  1. Estimate the software size. Most models start from an estimate of project size, although some models include algorithms for computing size from various other system characteristics, such as units of work.

  2. Convert the size estimate (function points or lines of code) to an estimate of the person-hours needed to complete the test task. Some models convert from size to labor, whereas others go directly from size to money estimates.

  3. Adjust the estimate for special project characteristics. In some models, an effective size is calculated from the basic size estimate obtained in Step 1; in others, a person-hours or cost estimate is calculated from the estimates obtained in Step 2. The estimate is an adjustment of the basic estimate intended to take account of any special project characteristics that make it dissimilar to the pattern absorbed in the underlying historical database. Such variations, which include the effect of volatility of the requirements, different software tools, difficulty above the level of projects in the database, or a different method of dealing with support costs, are frequently based on intuitively derived relationships, unsupported by statistical verification.

    The adjustment may precede amalgamation of the costs of the different phases, or a single adjustment may be applied to the total.

  4. Divide the total estimate into the different project phases. Each model dealing with a project’s schedule makes assumptions about the allocation of effort in the different project phases. The simplest assumption defines a percentage of the effort for each phase, for example, the much-quoted 40 percent design, 20 percent code, and 40 percent test rule. It should be noted that this is not a universally agreed-on rule. Some estimating research shows that other percentages may be more appropriate, and the percentage in each phase may depend on other software characteristics. Some models assume that staffing allocation with respect to time follows a rectangular distribution; others assume that it follows a beta distribution, or a Rayleigh distribution. In general, the assumptions on staffing allocation with respect to time are based on historical data. The effect of deviating from historical patterns has not been considered.

  5. Estimate the computer time and non-technical labor costs. Where these costs are explicitly included, they are often calculated as a percentage of the technical labor costs. Sometimes such costs are included implicitly because they were included in the database from which the model was derived.

  6. Sum the costs. The non-technical labor costs and the cost of computer time, where these are included in the estimates, are added to the technical costs of the different phases of the software life cycle to obtain an aggregated cost estimate.
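
Putting the six steps together, the following sketch shows one way the pattern could look in code. It converts size to person-hours, applies an adjustment for special project characteristics, splits the total using the much-quoted 40-20-40 rule, and adds non-technical costs as a percentage of technical labor. Every constant here is a hypothetical placeholder, not a calibrated value from any real model.

```python
def parametric_estimate(size_kloc: float,
                        hours_per_kloc: float = 160.0,   # hypothetical productivity
                        adjustment: float = 1.0,         # special project characteristics
                        nontechnical_pct: float = 0.15): # Step 5, % of technical labor
    # Steps 1-2: size -> person-hours of technical labor.
    technical = size_kloc * hours_per_kloc
    # Step 3: adjust for special project characteristics.
    technical *= adjustment
    # Step 4: divide across phases (40% design, 20% code, 40% test).
    phases = {"design": 0.40 * technical,
              "code": 0.20 * technical,
              "test": 0.40 * technical}
    # Steps 5-6: add non-technical labor and sum.
    total = technical + nontechnical_pct * technical
    return phases, total

phases, total = parametric_estimate(size_kloc=50, adjustment=1.2)
print(phases, round(total))  # 9,600 technical person-hours plus 15% overhead
```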

Testing the Validity of the Software Cost Estimate

An improper cost estimate can do more damage to the quality of a software project than almost any other single factor. People tend to do that which they are measured on. If they are measured on meeting a software cost estimate, they will normally meet that estimate. If the estimate is incorrect, the project team will make whatever compromises are necessary to meet that estimate. This process of compromise can significantly undermine the quality of the delivered project. The net result is increased maintenance costs, dissatisfied customers, increased effort in the customer area to compensate for software system weaknesses, and discouraged, demoralized project personnel.

Estimating software costs is just that—estimating. No one can guarantee that the software estimate will be correct for the work to be performed. However, testing can increase the validity of the estimate, and thus is a worthwhile endeavor. Testing of a software estimate is a three-part process, as follows:

  1. Validate the reasonableness of the estimating model.

  2. Validate that the model includes all the needed factors.

  3. Verify the correctness of the cost-estimating model estimate.

Validate the Reasonableness of the Estimating Model

Work Paper 7-3 lists the 14 characteristics of a desirable estimating model. The worksheet provides a place to indicate whether the attributes are present or absent, and any comments you care to make about the inclusion or exclusion of those characteristics. The closer the number comes to 14, the more reliance you can place on your software-estimating model.

Work Paper 7-3. Characteristics Included in/Excluded from Your Organization’s Software Estimating Model

Name of Model: _____________________________________________________

For each characteristic, mark whether it is included in or excluded from the model, and add any comments about its inclusion or exclusion.

  1. The model should have a well-defined scope. (It should be clear which activities associated with the software life cycle are taken into account in the model and which are excluded. It should also be clear which resources—manpower, computer time, and elapsed time—are being estimated, and whether costs of support software are included.)

  2. The model should be widely applicable. (It should be possible to tailor the model to fit individual organizations and types of software development.)

  3. The model should be easy to use. (Input requirements should be kept to a minimum, and output should be provided in an immediately useful format.)

  4. The model should be able to use actual project data as it becomes available. (Initial project cost estimates are likely to be based on inadequate information. As a project proceeds, more accurate data becomes available for cost estimating. It is essential that any estimating model be capable of using actual data gathered at any stage in the project life to update the model and provide refined estimates, probably with a narrower likely range of values than achieved initially. Estimating is based on a probabilistic model: an estimate is a number in the likely range of the quantity being estimated, and confidence in the estimate depends on that likely range. The better the information on which an estimate is based, the smaller the likely range and the greater the confidence.)

  5. The model should allow for the use of historic data in the calibration for a particular organization and type of software.

  6. The model should have been checked against a reasonable number of historic projects.

  7. The model should require only inputs based on properties of the project that are well defined and can be established with a reasonable degree of certainty at the time the estimate is required.

  8. The model should favor inputs based on objective rather than subjective criteria.

  9. The model should not be oversensitive to subjective input criteria.

  10. The model should be sensitive to all the parameters of a project that have been established as having a marked effect on cost, and should not require input of parameters that do not have a marked effect on cost.

  11. The model should include estimates of how and when the resource will be needed. (This is particularly important if the estimates are to be used for resource allocation, but also important if the results are given in financial terms, since inflation needs to be taken into account.)

  12. The model should produce a range of likely values for the quantity being estimated. (An estimate cannot provide a precise prediction of the future. It must predict sufficiently closely to be useful, and to do this it should ideally be able to place bounds on either side of the estimate within a stated probability that the actual figures will lie within those bounds.)

  13. The model should include possibilities for sensitivity analysis, so that the response of the estimates to variation of selected input parameters can be seen.

  14. The model should include some estimate of the risk of failure to complete within the estimated time or cost.

TOTAL CHARACTERISTICS INCLUDED: ____

Validate That the Model Includes All the Needed Factors

The factors that influence the cost of a software project can be divided into those contributed by the development and maintenance organization, many of which are subjective, and those inherent in the software project itself. Current models differ with respect to the factors that are required as specific inputs. Many different factors may be subsumed in a single parameter in some models, particularly the more subjective parameters.

Depending on the information fed to the model, the estimate produced can vary significantly. It is important that all the factors that influence software costs are properly entered into the model. Models can produce incorrect results in two ways. First, the factor can be excluded from the model, resulting in an incorrect estimate. Second, the factor can be incomplete or incorrectly entered into the model, again causing incorrect software cost estimates to be produced.

Work Paper 7-4 lists the factors that can influence software costs. Testers must determine whether a missing factor will significantly affect the actual costs of building the software. Factors that influence the software system include:

  • Size of the software. A favorite measure for software system size is lines of operational code, or deliverable code (operational code plus supporting code, for example, for hardware diagnostics) measured either in object code statements or in source code statements. It is rarely specified whether source code statements include non-executable code, such as comments and data declarations.

  • Percentage of the design and/or code that is new. This is relevant when moving existing software systems to new hardware, when planning an extension to or modification of an existing software system, or when using software prototypes.

  • Complexity of the software. Different software projects have different degrees of complexity, usually measured by the amount of interaction between the different parts of the software system, and between the software and the external world. The complexity affects programmer productivity and is an input parameter for several models.

  • Difficulty of implementing the software requirements. Different application areas are considered to have different levels of difficulty in design and coding, affecting programmer productivity. For example, operating system software is usually regarded as more difficult than standalone commercial applications. Software projects might be given a difficulty or an application mix rating, according to the degree to which they fall into one (or more) of the following areas:

    • Operating systems

    • Self-contained real-time projects

    • Standalone, non–real-time applications

    • Modifications to an existing software system

    There are, of course, other categories. Each model deals with the difficulty in its own way, some requiring estimates of the percentage of each type of software system, others asking for a number on a predefined scale. Others merge the factor with the complexity rating.

  • Quality. Quality, documentation, maintainability, and reliability standards required are all included in a single factor. This factor is sometimes called the platform type, reflecting the fact that the documentation and reliability requirements for software in a manned spacecraft are higher than in a standalone statistical package. The documentation and reliability requirements may be given a defined numeric scale—for example, from 1 to 10. Some estimating models include a parameter for the number of different locations at which the software will be run.

  • Languages to be used. The choice of programming language affects the cost, size, timescale, and documentation effort.

  • Security classification. The higher the security classification of the project, the more it will cost because of the additional precautions required. The security classification is not an input factor in most models.

  • Volatility of the requirement. The firmness of the requirement specification and the interface between developer and customer affect the amount of rework that will be needed before the software is delivered. This highly subjective but nonetheless important factor is an input factor to several models. The following are included in this factor:

    • Amount of change expected in the requirement

    • Amount of detail omitted from the requirement specification

    • Concurrent development of associated hardware, causing changes in the software specification

    • Unspecified target hardware

Work Paper 7-4. Factors That Influence the Software Cost Estimate

| FACTOR | INCLUDED | EXCLUDED | COMMENTS |
|--------|----------|----------|----------|
| Project-Specific Factors | | | |
| 1. Size of the software | | | |
| 2. Percentage of the design and/or code that is new | | | |
| 3. Complexity of the software system | | | |
| 4. Difficulty of design and coding | | | |
| 5. Quality | | | |
| 6. Programming language | | | |
| 7. Security classification level | | | |
| 8. Target machine | | | |
| 9. Utilization of the target hardware | | | |
| 10. Requirement volatility | | | |
| Organization-Dependent Factors | | | |
| 1. Project schedule | | | |
| 2. Personnel (technical competence; nontechnical manpower) | | | |
| 3. Development environment (development machine; availability of associated software and hardware; software tools and techniques to be used during design and development) | | | |
| 4. Resources not directly attributable to technical aspects of the project | | | |
| 5. Computing resources | | | |
| 6. Labor rates | | | |
| 7. Inflation | | | |

    Organization-dependent factors include the following:

  • Project schedule. Attempts to save time by adding more people to a project often prove counterproductive, because more time and effort are expended on communication between project team members than is gained by adding the extra people. There must therefore be either a minimum time below which the project cannot be completed, or at least a time below which the cost of saving a small amount of time becomes prohibitive. Conversely, if more time is allocated to a project than is required, it has been argued that the cost decreases. However, other models show costs increasing if more than some optimum time is allocated, because more personnel-hours are consumed. One effect of compressed timescales is that work that should be done in series is undertaken in parallel, with the increased risk that some of the work will have to be scrapped (e.g., if coding is started before design is complete).

    Not all models deal with project schedules. Of those that do, some assume the 40-20-40 rule (40 percent design, 20 percent coding, and 40 percent testing), and others use more elaborate scheduling assumptions. Some research throws doubt on the validity of the 40-20-40 rule and indicates that phases are strongly interrelated, so effort skimped in one phase will probably result in a considerable increase in the effort needed in a later phase.

  • Personnel. The personnel assigned to a project contribute to the cost. Most projects are resource limited, in that the number of people with a given skill who are available to the project is limited. The number of personnel available at any stage in a project will affect the timescales, and hence the cost. An industrial engineering model called the Rayleigh curve shows the relationship between the number of assigned staff and project effectiveness.

    • Technical competence. Effective projects are staffed with personnel with the competence needed to complete the project successfully. A less competent staff will increase the cost and schedule of the defined testing tasks.

    • Non-technical staff. Estimates of the non-technical personnel levels required by a project are frequently made as a percentage of the technical personnel levels.

  • Development environment. The adequacy of the development environment, both in hardware and software, depends largely on the management of the development organization. This factor is not usually requested as an explicit input to a model, but may be implicit in the calibration of the model, or in some general management parameter. The following are three aspects of the development environment that are sometimes required as inputs:

    • Development machine. The adequacy of the development machine as a host for developing software for the selected target, and the availability of the development machine to the software development personnel, will affect both the schedule and the cost of a software development. Studies have shown that time-sharing, where the development machine is constantly available, can be on the order of 20 percent more productive than batch systems for software development.

    • Availability of associated software and hardware. Projected late delivery of some item of associated hardware or software can affect schedules and costs.

    • Software tools and techniques used during system design and development. Newer tools and techniques, properly applied, can reduce development effort.

  • Resources not directly attributable to technical aspects of the project. The development organization’s management style affects the amount of effort team members expend communicating with each other, the level of non-technical effort involved, as well as software/hardware costs, subcontracting costs, and profit. These factors are usually ignored, are implicit in the database from which the model is derived, or are taken care of by a general management factor. The geographical distribution of the development organization may affect costs because of travel and the cost of transmitting data between sites, and is therefore input to some models.

  • Labor rates. If the model estimates costs in terms of money rather than staff-hours, the relationship of labor costs to staff-hours within the development organization may be required by the model. The model may be capable of reflecting increased rates for staff required to work irregular hours because of decreases in the development timescale or lack of availability of development tools.

  • Inflation. Costs estimated in terms of money rather than staff hours should include inflation rates if the project will last more than 12 months.
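Most parametric estimating models combine a size measure with adjustment multipliers for factors like those above. As a hedged illustration only (this chapter does not prescribe a particular model), the following sketch uses the published basic-COCOMO form, effort = a × KLOC^b, with hypothetical multipliers for complexity and requirement volatility:

```python
# A minimal sketch of a parametric cost model in the COCOMO style.
# The coefficients are the published basic-COCOMO values for an "organic"
# project; the adjustment multipliers are hypothetical stand-ins for the
# factors discussed above (complexity, volatility, and so on).

def estimate_effort(kloc: float, multipliers: dict[str, float]) -> float:
    """Return estimated effort in person-months."""
    a, b = 2.4, 1.05  # basic COCOMO, organic mode
    nominal = a * kloc ** b
    adjustment = 1.0
    for value in multipliers.values():
        adjustment *= value
    return nominal * adjustment

# Example: 32 KLOC, above-average complexity, volatile requirements.
effort = estimate_effort(32, {"complexity": 1.15, "volatility": 1.10})
print(f"Estimated effort: {effort:.0f} person-months")
```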

Verify the Correctness of the Cost-Estimating Model Estimate

The amount of testing of the produced estimate depends on the reasonableness of the estimating model and the completeness of the influencing factors included in it. The less the tester can rely on the model, the more testing needs to be performed on the validity of the estimate it produces.

The following four steps are suggested when testing the validity of the estimate produced by the software cost-estimating model:

  • Recalculate the estimate. The tester can validate the processing of the estimate by rerunning the estimating model. The purpose of this is to:

    • Validate the input was entered correctly

    • Validate the input was reasonable

    • Validate the mathematical computation was performed correctly

    This test can be done in totality or in part. For example, the tester can completely recalculate the estimate, check that the input going into the estimating model was correct, or test the reasonableness of the input by recalculating only parts of the estimate.

  • Compare produced estimate to similar projects. The tester can determine how long it took to develop projects of similar size and complexity. These actual project costs should be available from the organization’s accounting system. The estimate produced by the estimating system is then compared to the actual costs for like projects completed in the past. If there is any significant difference, the tester can challenge the validity of the estimate. This challenge may result in a recalculation or change of the estimate based on previous experience.

  • The prudent person test. This test is similar to the preceding test in that past experience is utilized. The tester documents the factors influencing the cost estimate, documents the estimate produced by the estimating system, and then validates the reasonableness of that estimate by asking experienced project leaders for their opinions regarding the validity of the estimate. It is recommended that three experienced project leaders be asked to validate the estimate. If one or more does not feel that the estimate is reasonable, the validity of the estimate should be challenged.

  • Redundancy in software cost estimating. This test has the tester recalculate the estimate using another cost-estimating model. For example, assume that your organization has developed a cost-estimating model and the project people have used that model to develop the cost estimate. The tester then uses another method, for example, a software-estimating package. If the two estimating systems produce approximately the same estimate, reliance on that estimate is increased. On the other hand, if there is a significant variance between the two estimates, the tester needs to investigate further (see the sketch following this list).

    Sources of software estimating models include:

    • Organization-developed estimating models

    • Estimating models included in system development methodologies

    • Software packages for developing software estimates

    • Function points in estimating software costs
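As a concrete (and purely illustrative) rendering of the comparison and redundancy tests above, the sketch below applies a hypothetical 20 percent challenge threshold; the project figures are assumptions, not values from this chapter:

```python
# Sketch of two validation tests: comparison with actual costs of similar
# past projects, and redundancy between two estimating methods.

def variance_ratio(estimate: float, reference: float) -> float:
    return abs(estimate - reference) / reference

# Comparison test: estimate versus like projects from the accounting
# system (hypothetical figures, in person-months).
similar_project_actuals = [105, 118, 97]
estimate = 116
baseline = sum(similar_project_actuals) / len(similar_project_actuals)
if variance_ratio(estimate, baseline) > 0.20:  # challenge beyond 20%
    print("Challenge the estimate against historical actuals")

# Redundancy test: recalculate with a second model and compare.
second_model_estimate = 150  # hypothetical output of another model
if variance_ratio(estimate, second_model_estimate) > 0.20:
    print("Significant variance between models; investigate further")
```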

Calculating the Project Status Using a Point System

The suggested project status test is based on a simple point-accumulation system. The points can then be compared to the project management or accounting system progress report. If the results of the two progress measurement systems differ, testers can challenge the validity of the results produced by the project management and/or accounting system.

The point system provides an objective, accurate, efficient means of collecting and reporting performance data in an engineering field that often lacks visibility. The method uses data based on deliverable software items and collected as a normal part of the development process. The results are easily interpreted and can be presented in a number of formats and sub-divisions. The scheme is flexible and can be modified to meet the needs of projects, both large and small.

Overview of the Point Accumulation Tracking System

The increasing complexity of software systems, combined with the requirements for structured and modular designs, has significantly increased the number of software elements developed and delivered on recent simulator programs. The increased number of elements and the traditionally “soft” milestones used to measure progress have made monitoring software development and predicting future progress time-consuming, subjective, and often unreliable.

A tracking system that uses an earned point scheme has been successfully used to monitor software development on several large projects. Points are assigned for each step in the software development cycle on a per-element basis. The steps are “hard” milestones in which a generated product is accepted by program management. As the products are accepted, the associated points are earned. The ratio of earned points to total possible points is compiled on an element, functional area, or total software system basis to determine progress achieved. A program that generates reports, usually resident on the simulator computational system, tabulates the data in a variety of management reports.

The system as implemented is flexible, highly automated, and closely coupled to configuration management systems and software quality assurance procedures to ensure the validity of data. Simple calculations or comparisons of the accumulated point values provide an accurate measure of progress, deviation from schedule, and prediction of future progress.

Typical Methods of Measuring Performance

Performance in software development is typically measured either by estimating the percentage of a task completed or by counting the number of predetermined milestones that have been reached. In the percent-completed method, the person performing the work estimates the percentage of the work that has been accomplished in reaching a milestone or completing a task. This method has several faults. The major fault is that the measurement is subjective: The manager is asking a person with a vested interest in completing the task as early as possible to make an educated guess as to how nearly complete it is. Most people tend to be optimistic about their ability to complete a task, particularly if their manager subtly encourages optimism. The old bromide of a task being 95 percent complete for months is all too true.

A potential shortcoming of this method when used with tasks rather than milestones is that the definition of completion is not always stated. Therefore, the person making the estimate may have one perception of what the task includes, whereas the manager may have another. Hence, when the programmer states the task is 100 percent complete—written, tested, and documented—the manager may have an unpleasant surprise when he or she asks to see the installation guide. Therefore, because the end of the task may not be clearly defined, the estimates of completion may be quite inaccurate.

Because the estimates are subjective, the interpretation of the results may also be subjective. In trying to ascertain the degree of completeness of a job, a manager may ask who made the estimate and then apply a “correction factor” to that person's estimate to get a number he or she feels comfortable with.

The second method, the milestone method, attempts to alleviate these problems by defining specific milestones that must be met and measuring performance by summing the number of milestones that have been met. This method is much more objective, tends to describe the overall task more fully, and, as a result, is easier to interpret. Its shortcomings lie in the tradeoff between the resolution of the measurement and the efficiency of collecting, collating, and presenting the results in a meaningful way.

To get the resolution of measurement fine enough to show incremental progress on a periodic basis, numerous milestones need to be defined. However, the large number of milestones makes it more difficult to collect and present the data in a timely and meaningful way. A common method is to present the data on bar graphs, but on a large project with thousands of milestones, the upkeep of bar graphs can be a slow, expensive effort.

Another potential problem is that the milestone may not accurately reflect the real task. If care is not taken to define the milestone, the milestone may not be based on deliverable items, but on something that appears to show progress, such as lines of code generated. Also, if the milestones are not carefully chosen, it may be difficult to determine if the milestone has been reached.

These performance measurement tools and techniques emphasize functions performed early in the life of a project. Less information is available on the ongoing management function of control. Control can be thought of as a three-step process: An attribute or characteristic of interest is measured, the measured value is compared with an expected or baseline value, and an appropriate action is taken if an unacceptable deviation exists. Any number of items of interest during software development may be controlled in this manner. Development time, development costs, computer memory usage, and computer time are some of the more common items.

A performance measurement scheme should meet several criteria. First and most important, the scheme should be objective. The person claiming performance should not be required to estimate degree of completion. Likewise, the person monitoring performance should know exactly what a performance measurement represents. Ideally, the state of development should be sufficiently visible and the measurement means sufficiently clear to enable any project member to make the actual measurement.

Second, the scheme should measure performance in accomplishing the real task (i.e., the development of deliverable software). Further, the resolution of the measuring scheme should be sufficiently fine to measure incremental progress on a weekly or monthly basis, and the measurement should be timely in that it measures the current state of development. Providing accurate, current performance information on a periodic basis can be a positive motivating factor for a programming staff.

Finally, the scheme must be efficient. It should require minimal resources to collect, collate, and report performance data and should require minimum time to interpret the results. Systems that require constant inputs from the programming staff, updates by clerical personnel, or integration of large amounts of data by management should not be used.

Using the Point System

The point system is really an extension to the milestone system that lends itself to automation. In its simplest form, it is assumed that each software module goes through a similar development process and that the process comprises clearly identifiable milestones. For example, assume ten modules will be developed and four milestones will define the development process. The milestones may represent a reviewed and accepted design, a completed code walkthrough, verified test results, and a released module.

In the simple case, each milestone for each software item is worth 1 point. In the case of the system with ten modules, 40 points can be earned. As each design review, code walkthrough, test verification, or release audit is passed, the milestone is achieved and the corresponding point earned. By recording the milestones achieved (points earned) and creating a few simple report generators, you can get an objective, accurate, and timely measure of performance. Table 7-7 shows what a simple status report might look like; a short sketch of the tally logic follows the report.

Table 7-7. Simple status report.

SOFTWARE STATUS REPORT

| MODULE | DESIGN | CODE | TEST | RELEASE | POINTS EARNED |
|--------|--------|------|------|---------|---------------|
| Module A | 1 | 1 |  |  | 2 |
| Module B | 1 |  |  |  | 1 |
| Module C | 1 |  |  |  | 1 |
| Module D | 1 | 1 | 1 |  | 3 |
| Module E | 1 | 1 |  |  | 2 |
| Module F | 1 |  |  |  | 1 |
| Module G | 1 | 1 |  |  | 2 |
| Module H | 1 | 1 | 1 | 1 | 4 |
| Module I | 1 |  |  |  | 1 |
| Module J | 1 | 1 |  |  | 2 |
| TOTALS | 10 | 6 | 2 | 1 | 19 |

PERCENT COMPLETE = 19/40 = 48%
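The tally behind such a report is easy to automate. Here is a minimal sketch, assuming one point per milestone and showing only a few of the modules from Table 7-7:

```python
# A minimal tally in the spirit of Table 7-7: one point per milestone,
# per module. Only a few modules are shown; the rest follow the same form.

MILESTONES = ("design", "code", "test", "release")

status = {
    "Module A": {"design", "code"},
    "Module B": {"design"},
    "Module H": {"design", "code", "test", "release"},
}

earned = sum(len(done) for done in status.values())
possible = len(status) * len(MILESTONES)
print(f"Percent complete: {earned}/{possible} = {earned / possible:.0%}")
```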

This simplified scheme works well for a homogeneous set of modules, where all modules are of the same complexity and each milestone represents an approximately equal amount of work. By introducing weighting factors, you can easily handle modules of varying complexity or milestones representing unequal effort to complete.

Before this and other extensions are discussed, however, a brief description of implementation is in order. The heart of the system is a computer data file and a few simple report generators. The data file is simply a collection of records, one for each item that is to be tracked, that contains fields to indicate whether a particular milestone has been met. Usually, it is advantageous to include the following fields: item description, analyst responsible, work package identification, as well as various file identification fields. Often such a file will serve multiple uses, particularly if a few additional fields are added. Typical uses include family tree definition, specification cross-references, configuration control lists, and documentation cross-references.
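A minimal sketch of one such record, with the fields named above plus per-milestone status, follows; the field names and types are assumptions, not a prescribed layout:

```python
from dataclasses import dataclass, field

# One record in the status file; the field names are illustrative only.
@dataclass
class TrackedItem:
    description: str        # item description
    analyst: str            # analyst responsible
    work_package: str       # work package identification
    milestones_met: set[str] = field(default_factory=set)

item = TrackedItem("Fuel gauge driver", "J. Smith", "WP-104")
item.milestones_met.add("design")  # recorded when the design review is accepted
```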

Maintaining or updating the file can be as straightforward as modifying records with a line editor or as complex as building a special-purpose interactive update program. Some form of access control should be used to prevent unauthorized modification of the file, particularly if some of the other uses of the file are sensitive to change.

Once the file is updated to include an entry of the module under development, the milestone status fields are updated as the milestones are met. In some cases this may be a manual process; once an event has occurred and the milestone achieved, a program librarian or other authorized person updates the status file. In other instances, in a more sophisticated system, a computer program could determine that a milestone event has occurred and automatically update the milestone status.

After the file has been built, programs to generate reports are written to print the status. For smaller projects, a program that simply prints each record, sums the earned and defined points, and calculates the percent points earned may be sufficient. Larger projects may need several reports for different subsystems or summary reports that emphasize change.

Extensions

A number of extensions can be added to the scheme as described so far. The first is to add a method of weighting modules and/or milestones. While weighting all modules equally on a large program where many (over 1,000) modules exist appears to give good results, smaller programs with few modules may need to weight the modules to give a sufficiently accurate performance measurement. Also, depending on the level of visibility of the measuring system and the attitude of the personnel involved, there may be a tendency to do all the “easy” modules first to show early performance.

A similar argument can be made for weighting milestones. Depending on the acceptance criteria, some milestones may involve more work than others. Therefore, achieving those milestones represents accomplishing a greater amount of work than in meeting other milestones. Further, there may be instances where a combination of module weight and milestone weight may interact. An example is a module that was previously written on another project in a different language. The amount of design work for that module may be considerably less than a module designed from scratch, but the amount of effort to code the routine might be more because an unfamiliar language may be involved.

The weighting scheme is easily implemented by assigning points to each milestone for all modules. Then, as a milestone is earned, the assigned points are added to the total earned and divided by the total defined points to compute percent completion. The number of points assigned to each milestone is in proportion to the difficulty in achieving the milestone, and, in fact, relates directly to the estimated number of hours needed to complete the milestone. When assigning points, it is recommended that points first be assigned to each of the modules and then reapportioned to the milestones.
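The computation itself is a small extension of the simple tally. A sketch, with hypothetical point assignments proportional to estimated effort:

```python
# Sketch of the weighted scheme: points per milestone in proportion to
# the estimated effort for that milestone. All values are hypothetical.

weights = {
    "Module A": {"design": 3, "code": 2, "test": 4, "release": 1},
    "Module B": {"design": 5, "code": 8, "test": 6, "release": 1},
}
met = {"Module A": {"design", "code"}, "Module B": {"design"}}

earned = sum(weights[mod][ms] for mod in weights for ms in met.get(mod, ()))
defined = sum(sum(w.values()) for w in weights.values())
print(f"Weighted percent complete: {earned / defined:.0%}")
```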

A second extension is to add selecting and sorting options to the programs that generate the reports. Selecting options allows the user to select all entries in the file by some field, such as work package number, file name, software family tree component, or responsible analyst. Once the entries of interest are selected, the sort option allows the user to order the entries by some key. The points earned and points defined are summed from the selected entries and the percent complete calculated. Therefore, reports can be printed listing all modules and percent complete for a certain analyst, work package, or other selected criteria. It has been found valuable to allow Boolean operations on selection fields (analyst A and subsystem B) and to provide for major and minor sort fields (for example, to list modules in alphabetical order by analyst).
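A sketch of these options over illustrative records, using a Boolean selection (analyst A and work package B) and major/minor sort keys:

```python
# Sketch of select-and-sort report options: Boolean selection on record
# fields, then ordering by major and minor keys. Records are hypothetical.

records = [
    {"name": "nav_update", "analyst": "A", "wp": "B", "earned": 3, "defined": 4},
    {"name": "io_handler", "analyst": "A", "wp": "C", "earned": 1, "defined": 4},
    {"name": "display",    "analyst": "B", "wp": "B", "earned": 2, "defined": 4},
]

# Selection with a Boolean condition: analyst A AND work package B.
selected = [r for r in records if r["analyst"] == "A" and r["wp"] == "B"]
earned = sum(r["earned"] for r in selected)
defined = sum(r["defined"] for r in selected)
print(f"Selected work is {earned / defined:.0%} complete")

# Major sort on analyst, minor sort on module name.
for r in sorted(records, key=lambda r: (r["analyst"], r["name"])):
    print(r["analyst"], r["name"])
```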

A third extension is to add target dates and actual completion dates to each module record. In this extension the individual milestone status fields are replaced by two dates. The first date field is a target date indicating when the milestone should be met. The target dates do not have to be used for all modules or milestones but are useful where interdependency exists between a particular module milestone and some other element in the system. These interdependencies may exist in the design stage to some extent, but they become very important during the integration phase of a project.

The actual completion date field becomes a flag identifying when the milestone is achieved. By adding up the points assigned to a milestone that have an actual date entered in the file, the percent complete can be computed.

Using the two date fields has two advantages: You can monitor schedule interdependence and a historical record exists for future analysis. By making the date fields selectable and sortable, additional reports can be generated. Assuming that an integration milestone has been identified, a list of all modules of interest can be selected by work package number, family tree identification, or individual module name. Target dates for the milestone of interest can then be entered. As the date of the integration milestone comes closer, lists of all modules of interest that have a particular due date and have not been completed can be provided to the responsible analyst or work package manager. Judicious use of these lists on a periodic basis can be used to monitor and motivate the programming staff to ensure that the milestone is met. Usually, several of these lists in various stages are active at once as key milestones are coming up. It has been found that choosing approximately one major milestone a month and starting the list several months in advance of the target date is very effective. More milestones than this tend to set up multiple or conflicting goals for the individual analysts. Also, the lists need to be started well enough in advance to allow suitable time for the work to be completed and to enable you to institute workarounds if problems arise.
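A sketch of the two-date extension, in which an actual date marks a milestone as earned and target dates drive the periodic overdue lists (all dates, names, and point values are hypothetical):

```python
from datetime import date

# An actual date marks the milestone as earned; target dates drive
# overdue lists for follow-up.
milestones = [
    {"module": "Module A", "target": date(2024, 3, 1),
     "actual": date(2024, 2, 27), "points": 2},
    {"module": "Module B", "target": date(2024, 3, 1),
     "actual": None, "points": 3},
]

earned = sum(m["points"] for m in milestones if m["actual"] is not None)
defined = sum(m["points"] for m in milestones)
print(f"Percent complete: {earned / defined:.0%}")

# Periodic list of items past their target date with no actual date.
today = date(2024, 3, 15)
overdue = [m["module"] for m in milestones
           if m["actual"] is None and m["target"] <= today]
print("Overdue for follow-up:", overdue)
```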

It should be noted that the meeting of these interdependency dates is really separate from performance measurement. It is possible that in a given subsystem the performance may be fully adequate, say 75 percent complete, but a key integration event may have been missed. The manager must be aware of both elements. If performance is where it should be but an integration event has been missed, it may mean the manager’s people are not concentrating on the right item and need to be redirected.

Rolling Baseline

A potential problem with the point system described thus far has to do with an effect known as a rolling baseline. The rolling baseline occurs over the life of a program as new items are continually defined and added to the status file. This has the effect of changing the baseline, which causes percent complete to vary independently of milestones earned. During periods when few new items are added to the file, the percent complete accurately reflects real performance. At other times, as new items are added as quickly as previously defined milestones are met, reported progress tends to flatten out. In some instances where more new items were added than old items completed, negative progress is reported.

This problem is overcome by freezing the baseline for a unit of work or work package and reporting progress on the unit. That is, once a package of work is defined, no new points are allocated to the package. If, for some reason, certain modules have to be split up for the sake of modularity or computing efficiency, the points are likewise split up. Where the scope of work changes because of an oversight or contract change, the effort is reprogrammed and either new work packages are created or existing work packages are expanded, with a corresponding increase in points.

This has the effect of measuring performance on active or open work packages only, not on the system as a whole. However, because performance is being compared to an established schedule, which is also made up of units of work, the comparison is valid and useful.
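A sketch of this per-package reporting, grouping earned and defined points by frozen work package (identifiers and point values are hypothetical):

```python
from collections import defaultdict

# Progress is computed per frozen work package, so newly opened packages
# do not dilute the figures for packages already under way.
records = [
    {"wp": "WP-1", "earned": 8, "defined": 10},
    {"wp": "WP-1", "earned": 2, "defined": 10},
    {"wp": "WP-2", "earned": 1, "defined": 12},  # newly opened package
]

totals = defaultdict(lambda: [0, 0])
for r in records:
    totals[r["wp"]][0] += r["earned"]
    totals[r["wp"]][1] += r["defined"]

for wp, (earned, defined) in sorted(totals.items()):
    print(f"{wp}: {earned}/{defined} = {earned / defined:.0%}")
```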

Reports

Several informative detail and summary reports can be generated from the data file. The most encompassing detail report, of course, is a listing of all elements. Such a list may be useful in creating inventory lists of software items to be delivered and might be used during physical configuration audits. Other lists may be sorted and/or selected by work package or family tree identification number. Such lists show the status of specific modules within subsets of the work breakdown structure or functional items of the system. Sorts or selections by responsible analyst show the status of a particular individual's effort. Figures 7-8 through 7-11 show sample detail and summary reports. Collecting data from several summary runs allows rates of completion to be calculated, from which trends or predictions of future performance can be made.

Figure 7-8. Detail interdependency listing.

Figure 7-9. Detail status listing.

Figure 7-10. Summary report.

Figure 7-11. Summary status report.

Check Procedures

Work Paper 7-5 is a quality control checklist for this step. It is designed so that Yes responses indicate good test practices and No responses warrant additional investigation. A Comments column is provided to explain No responses and to record investigation results. The N/A column should be used when the checklist item is not applicable to the test situation.

Work Paper 7-5. Organizing for Testing Quality Control Checklist

| # | ITEM | YES | NO | N/A | COMMENTS |
|---|------|-----|----|-----|----------|
| 1 | Has the test team manager been appointed? | | | | |
| 2 | Has the test team manager's role been defined? | | | | |
| 3 | Is the scope of testing consistent with the competency of the test manager? | | | | |
| 4 | Is the test team competent? | | | | |
| 5 | Are there standards for system documentation? | | | | |
| 6 | Are all members of the test team knowledgeable about the intent and content of those standards? | | | | |
| 7 | Are the standards customizable for systems of various sizes, so that small projects may not need as extensive documentation as large projects? | | | | |
| 8 | Are the testers provided a complete copy of system documentation, current to the point where the tests occur? | | | | |
| 9 | Have the testers measured the documentation needs for the project based on the twelve criteria included in this chapter? | | | | |
| 10 | Have the testers determined what documents must be produced? | | | | |
| 11 | Do the project personnel agree with the testers' assessment as to what documents are needed? | | | | |
| 12 | Have the testers determined the completeness of individual documents using the 13 criteria outlined in Task 3? | | | | |
| 13 | Have the testers used the inspection process to determine the completeness of system documentation? | | | | |
| 14 | Have the testers determined the currency of the project documentation at the point of test? | | | | |
| 15 | Have the testers prepared a report that outlines documentation deficiencies? | | | | |
| 16 | Do the testers ensure that the documentation deficiencies outlined in their report are acted upon? | | | | |
| 17 | Does project management support the concept of having the test team assess the development estimate and status? | | | | |
| 18 | If so, is the test team knowledgeable in the estimation process? | | | | |
| 19 | If so, is the test team knowledgeable in the method that will be used to report project status? | | | | |
| 20 | Does the test team understand how the software estimate was calculated? | | | | |
| 21 | Has the test team performed a reasonable test to determine the validity of the estimate? | | | | |
| 22 | If the test team disagrees with the validity of the estimate, will a reasonable process be followed to resolve that difference? | | | | |
| 23 | Does the project team have a reasonable status reporting system? | | | | |
| 24 | Have the testers determined that the project status system will be used on a regular basis? | | | | |
| 25 | Is there a process to follow if the status reporting system indicates that the project is ahead of or behind estimates? | | | | |
| 26 | Has the test team taken into account the influencing factors (e.g., size of the software) in evaluating the estimate? | | | | |
| 27 | Will the test team receive copies of the status reports? | | | | |
| 28 | Is there a process in the test plan to act upon the status reports when received? | | | | |
| 29 | Does the test team know how projects are planned and how the content of a project plan is determined? | | | | |
| 30 | Does the test team understand the estimating process used for this project? | | | | |
| 31 | Does the project team understand the development process that will be used to build the software specified in this project? | | | | |
| 32 | Is the project plan complete? | | | | |
| 33 | Is the project estimate fully documented? | | | | |
| 34 | Is the development process documented? | | | | |
| 35 | Is the estimating method used for this project reasonable for the project characteristics? | | | | |
| 36 | Is the estimate reasonable for completing the project as specified in the plan? | | | | |
| 37 | Has the project been completed using the development process? | | | | |
| 38 | Does the project team have a method for determining and reporting project status? | | | | |
| 39 | Is that project status method used? | | | | |
| 40 | Do the testers agree that the project status as reported is representative of the actual status? | | | | |

Output

The output from this step includes a test manager, a definition of the scope of testing, an organized test team, and verification that development documentation is complete. Another output from this step is a report to the project personnel on the adequacy of the test estimate and the reasonableness of the project status. Note that this step may need to be repeated periodically as the project plan changes. Testers may also want to evaluate the reasonableness of the status report multiple times during the development process.

Summary

Time spent organizing for a test project reduces the overall test effort. The five organizing tasks described in this chapter are an important prerequisite to test planning. Who performs the tasks, and when they are performed, matters less than the fact that they are performed.

 
