In its role as an agent for improving software technology use within the U.S. Air Force, the Software Technology Support Center (STSC) supports metrics technology improvement activities for its customers. These activities include disseminating information regarding the U.S. Air Force policy on software metrics [AF93M-017], providing metrics information to the public through CrossTalk, conducting customer workshops in software metrics, guiding metrics technology adoption programs at customer locations, and researching new and evolving metrics methodologies.
Helping customers become proficient in developing and using software metrics to support their software development and/or management activities is crucial to customer success. The STSC metrics support activities must be tailored to the customer’s needs to ensure success.
Customer support includes activities matched to the organization’s apparent metrics capability, as well as activities focused on the organizational and cultural issues that often must be addressed to facilitate change.
This guide covers the following:
The foundation for the evaluation method is “A Method for Assessing Software Measurement Technology” [DASK90].* Metrics capability maturity consists of five maturity levels that are analogous to the software capability maturity model (CMM) levels defined by the Software Engineering Institute (SEI) [PAUL93]. This guide has been designed to cover metrics capability maturity Levels 1 through 3. When metrics capability evaluations show a strong percentage (e.g., 25% or more) of organizations at metrics capability maturity Level 3, the scope of the evaluation (and this guide) will be expanded to cover metrics capability maturity Levels 4 and 5.
This guide defines a set of questions to elicit information that will help characterize an organization’s metrics capability. The themes used in the questionnaire and their relationships to an organization’s metrics capability maturity (for Levels 1 through 3) are shown in Appendix A.
The guide contains two metrics capability questionnaires (one for acquisition organizations and one for software development/maintenance organizations). The questions in the questionnaires are used as the basis for interviews with an organization’s representative(s) to help determine their metrics capability maturity. After the interviews are complete, the results are collated and reported in an evaluation report that is delivered to the evaluated organization. Additional work with the evaluated organization will depend on the organization’s needs. Section 2.2 discusses the evaluation process. Appendix B contains a brief metrics customer profile form, which is filled out as a precursor to the metrics capability evaluation. Appendix C is an annotated outline of the metrics capability evaluation report, and Appendix D contains the customer organization information form.
The software metrics capability evaluation process consists of three basic parts:
These sets of activities are discussed in Sections 2.2.1 through 2.2.3.
In addition to evaluation, there may be follow-up activities. These include more detailed work with the customer that will provide a metrics capability improvement strategy and plan when applicable. Section 2.3 discusses the follow-up activities.
The initial contact with a customer generally is set up through an STSC customer consultant. The customer consultant briefs an assigned member of the STSC metrics team regarding a customer’s need for a metrics capability evaluation and provides a contact for the metrics team member at the customer’s site.
The metrics team member contacts the customer by phone to gain an initial understanding of the customer’s organization and to set up the evaluation interview. The metrics customer profile form is used to help gather that information. Information collected during this initial contact will be used to help determine the proper approach for the introduction briefing presented during the evaluation interview visit. Only the point of contact information must be completed at this time; however, it is highly desirable to include the STSC business information. When the profile is not completed during the initial contact, it needs to be completed prior to (or as an introduction to) the evaluation interview at the customer’s site.
Two STSC metrics team members conduct the interviews as a metrics evaluation team. On the same day as the evaluation interview, an introduction briefing is provided to key people within the organization (to be determined jointly by the evaluation team members, the customer consultant assigned to the organization, and the organization’s primary point of contact). The purpose of the briefing is to manage customer expectations. This is accomplished, in part, by providing education with respect to
The interviews are conducted with the manager most closely associated with the software development activities for the program (or project) in question.* One other representative from the program (or project) should participate in the interview (a staff member responsible for metrics analysis and reporting would be most appropriate). The first part of the interview is to complete the metrics customer profile. When this is completed, the metrics capability questionnaire most related to the organization (either acquirer or development/maintenance organization) is used as the input to the remainder of the evaluation process. The questionnaire sections for both Levels 2 and 3 are used regardless of the customer’s perceived metrics capability.
The questions in the metrics capability evaluation questionnaires have been formalized to require answers of yes, no, not applicable (NA), or don’t know (?). If an answer is yes, the customer needs to relate examples or otherwise prove performance that fulfills the question. If the answer is no, comments may be helpful but are not required. (If the answer is don’t know, a no answer is assumed.) If the answer is NA and it can be shown to be NA, the question is ignored and the answer is not counted as part of the score. The chosen metrics capability evaluation questionnaires need to be completed before the interview is considered complete.
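The answer-handling rules above can be sketched in a few lines. The function name and the answer codes (`yes`, `no`, `na`, `?`) are illustrative conveniences, not part of the guide.

```python
from typing import Optional

def normalize(answer: str) -> Optional[str]:
    """Map a raw questionnaire answer to its scoring interpretation.

    Per the rules above: a don't-know ("?") answer is assumed to be no,
    and a demonstrated NA answer is excluded from scoring entirely.
    """
    answer = answer.strip().lower()
    if answer == "yes":
        return "yes"   # must be backed by examples or other proof of performance
    if answer in ("no", "?"):
        return "no"    # a don't-know answer is assumed to be no
    if answer == "na":
        return None    # not applicable: not counted as part of the score
    raise ValueError(f"unexpected answer: {answer!r}")
```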
An evaluation interview should not take more than one day for one program (or software project). If an entire organization is to be evaluated, a representative sample of its programs (or software projects) needs to be assessed, and each requires a separate interview.
The metrics capability questionnaires completed during the interview(s) and their associated examples (or other evidence of metrics capability maturity; see Section B.1) are collated and returned to the STSC for analysis. The metrics capability evaluation team that conducted the interview(s) is responsible for analyzing and reporting the results. An assessed program (or software project) is at Level 2 if at least 80% of the applicable Level 2 questions are answered yes; otherwise, it is at Level 1. Higher levels are determined in the same manner [DASK90]. (Scoring is discussed in more detail in Section B.1. The contents of the metrics capability evaluation report are outlined in Appendix C.)
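The level-determination rule can be expressed as a minimal sketch; the function name is hypothetical, and only the 80% threshold and the exclusion of NA answers come from the method.

```python
def rate_level2(answers):
    """Return 2 if at least 80% of the applicable Level 2 questions
    (NA answers excluded) were answered yes; otherwise return 1.

    answers: list of "yes" / "no" / "na" strings, one per Level 2 question.
    """
    applicable = [a for a in answers if a != "na"]
    if not applicable:
        return 1  # nothing to score; the program stays at Level 1
    yes_ratio = sum(a == "yes" for a in applicable) / len(applicable)
    return 2 if yes_ratio >= 0.80 else 1
```

Higher levels would be rated analogously against their own question sets.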
The questions in the metrics capability questionnaires are organized by metrics capability maturity themes to help focus the interviews and the results analysis. (The themes, as defined in [DASK90], and their characteristics at metrics capability maturity Levels 2 and 3 are reported in Appendix A.) The customer’s strengths and weaknesses can be addressed directly with the information gathered during the interview session(s). In addition, activities for becoming more effective in implementing and using metrics can be highlighted in the metrics capability evaluation report and in the project plan.
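Collating answers by theme might look like the following hypothetical sketch; the 50% cut-off separating strengths from weaknesses is an illustrative assumption, not a figure from the guide.

```python
from collections import defaultdict

def theme_summary(responses):
    """Classify each measurement theme as a strength or a weakness.

    responses: iterable of (theme, answer) pairs, where answer is
    "yes", "no", or "na".  NA answers are excluded, as in scoring.
    """
    tallies = defaultdict(lambda: [0, 0])  # theme -> [yes count, applicable count]
    for theme, answer in responses:
        if answer == "na":
            continue
        tallies[theme][1] += 1
        if answer == "yes":
            tallies[theme][0] += 1
    # The 50% cut-off below is an assumed illustration, not from the guide.
    return {
        theme: ("strength" if yes / total >= 0.5 else "weakness")
        for theme, (yes, total) in tallies.items()
    }
```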
Software metrics capability evaluation follow-up includes two sets of activities:
The report details the evaluation results and provides recommendations for an initial set of improvement activities.
The project plan consists of a customer-approved, detailed plan to improve the customer’s metrics capability (which may include other aspects of support to the customer such as software process definition, project management support, or requirements management workshops, etc.).
The customer’s organizational culture is important in developing the content and phasing of the project plan. Issues such as the organization’s ability to incorporate change, management’s commitment to software technology improvement, and so on often need to be addressed in developing a success-oriented plan.*
Metrics capability improvement implementation consists of the physical implementation of the project plan and a periodic evaluation of the customer’s status to determine the program’s improvement and any required modifications to the plan. The project plan and implementation are described in Section 2.3.2.
The metrics capability evaluation report consists of two parts:
The results portion of the report is organized to discuss the customer’s overall software metrics capability and to define the areas of strengths and weaknesses based on each of the measurement themes. The recommendations portion of the report describes an overall improvement strategy that provides a balanced approach toward metrics capability improvement based on the customer’s current evaluation results. Appendix C contains an annotated outline of the report.
If a customer has the interest to proceed with a project plan, the STSC will develop the plan in conjunction with the customer. The contents of the project plan, the estimates for plan implementation, and the schedule will be developed specifically for each customer’s needs. Due to the possible variations in customer needs, it is difficult to determine the exact contents of the plan. At a minimum, the project plan contains the following information:
After the plan is approved, the metrics capability improvement implementation follows the plan. The periodic evaluations of the customer’s products provide feedback regarding the customer’s progress and an opportunity to revise the plan if the improvement is not proceeding according to the plan. In this way, the plan and implementation process can be adjusted as necessary to support the customer’s ongoing needs.
AF93M-017 | Software metrics policy—action memorandum, February 1994. |
DASK90 | Daskalantonakis, Michael K., Robert H. Yacobellis, and Victor R. Basili, “A method for assessing software measurement technology,” Quality Engineering, Vol. 3, No. 1, 1990–1991, pp. 27–40. |
PAUL93 | Paulk, Mark C., et al., Capability maturity model for software, Version 1.1, CMU/SEI-93-TR-24, ESC-TR-93-177, February 1993. |
SEI94 | Software process maturity questionnaire, CMM, Version 1.1, April 1994. |
Table A4.1 shows the six metrics themes and relates the themes to software metrics capability maturity Levels 1 through 3.
THEME | INITIAL (LEVEL 1) | REPEATABLE (LEVEL 2) | DEFINED (LEVEL 3) |
1. Formalization of development process | Process unpredictable; project depends on seasoned professionals; no/poor process focus | Projects repeat previously mastered tasks; process depends on experienced people | Process characterized and reasonably understood |
2. Formalization of metrics process | Little or no formalization | Formal procedures established; metrics standards exist | Documented metrics standards; standards applied |
3. Scope of metrics | Occasional use on projects with seasoned people, or not at all | Used on projects with experienced people; project estimation mechanisms exist; metrics have project focus | Goal/question/metric package development and some use; data collection and recording; specific automated tools exist in the environment; metrics have product focus |
4. Implementation support | No historical data or database | Data (or database) available on a per-project basis | Product-level database; standardized database used across projects |
5. Metrics evolution | Little or no metrics conducted | Project metrics and management in place | Product-level metrics and management in place |
6. Metrics support for mgmt control | Management not supported by metrics | Some metrics support for management; basic control of commitments | Product-level metrics and control |
* The information in this table has been extracted directly from [DASK90].
This appendix contains scoring information for the software metrics capability evaluations along with copies of the metrics customer profile form and the two software metrics capability evaluation questionnaires.
The metrics customer profile form helps gather general customer information for choosing the metrics capability evaluation questionnaire and for defining the contents of the project plan. The two software metrics capability evaluation questionnaires are as follows:
These two metrics capability evaluation questionnaires provide the contents of the evaluation interviews described in Section 2.2.2. The questions from the questionnaires are asked as written. The questions for Levels 2 and 3 are used for all interviews. The comments for each question are used to point to examples and other evidence of metrics capability maturity based on the activities referred to in the question. The answers to the questions and the examples and comments are the inputs to the scoring activity presented in Section B.1.2.
Scoring from the two metrics capability evaluation questionnaires is relatively simple:
The organization’s metrics capability level, as indicated from the scoring process, the proofs of conformance, and comments are all used as inputs to the metrics capability evaluation report. Appendix C contains an annotated outline of a metrics capability evaluation report.
The goals of the software metrics capability evaluation report are as follows:
Box C.1 is the annotated outline for the software metrics capability evaluation report.
BOX C.1 Software Metrics Capability Evaluation Results and Recommendations Report: Annotated Outline
Use the following sentence to identify the evaluation report: “This report provides the results of a software metrics capability evaluation conducted on (review dates, in mm/dd/yy format) for,” then provide the organization’s name, office symbol, location, and address. In addition, provide the approximate size of the organization appraised, the names and office symbols for any branches or sections that were represented from within a larger organization, the basic “type” of organization (i.e., acquisition, software development, software maintenance), and the number of individuals interviewed.
Identify the document’s organization and provide a summary of the information contained in each major section.
Give the metrics capability level for the organization, and provide backup for that result.
Provide a listing of general areas within the six metrics themes represented in the evaluation where the organization showed strengths, for example, establishment and general use of a metrics database or general examples of management decision-making based on metrics results.
Provide a listing of general areas within the six measurement themes represented in the evaluation where the organization showed weaknesses, for example, no metrics database or identification of metrics from the Air Force metrics mandate that are not being collected or used.
For each of the six measurement themes, provide a description of the weakness(es) for that theme. Include the following topics in that description:
Provide any general recommendations that resulted from analyzing the appraisal results, for example, need to determine general management approach and commitment to change before charting a detailed metrics improvement plan, and so on.
Give the background and rationale for the recommendations, and provide a set of positive steps the organization could take to improve their metrics capabilities. This section should be used as a place to recommend (or propose) possible first steps that the metrics customer and the STSC could explore to determine whether an ongoing relationship would be mutually beneficial. (In the case of metrics capability Level 1 organizations, examples are: to undertake a study of the organization’s culture to determine the easy and high-payback activities that would give the organization some positive results for minimal effort, to work with the organization’s management to determine their commitment to change, and so on. Other recommendations could include working with the STSC or another support organization to develop a project plan.)
Appendix A contains the measurement theme and relationships table (Table A4.1 herein). Also, if necessary, starting with Appendix B, provide background information (e.g., the customer profile) that would be difficult to incorporate in the main body of the report or that would interfere with the readability and understandability of the evaluation results.
An organization’s culture is often extremely important in determining how best to pursue any type of software process improvement, including establishing a working metrics program. This appendix was developed to elicit cultural information about the metrics customer that will help the STSC develop the project plan and work with the customer on metrics capability improvement.
* Metrics capability maturity (or metrics capability) refers to how well an organization uses metrics to help manage and control project performance, product quality, and process implementation and improvement. This concept is discussed in more detail in [DASK90].
* The assessment method defined in [DASK90] was based on the Software Engineering Institute (SEI) process assessment methodology, which is currently exemplified in the Capability Maturity Model (CMM) for Software, Version 1.1 [PAUL93].
* In the case of the acquirer, this will be the individual responsible for overseeing the software development organization. In the case of a development or maintenance organization, this will be the software project manager.
* Appendix D contains an organization information form that the STSC uses to help define cultural issues that need to be addressed in the project plan.
* Throughout these questionnaires, acquirer refers to an organization that acquires software or systems. Developer refers to an organization that develops or maintains software or systems for an acquirer. (For example, a developer could refer to a nonmilitary organization (e.g., a defense contractor, a university) that works under the terms of a legal contract; an external government or military organization that works under the terms of a memorandum of agreement (MOA); or an organic organization tasked with developing or maintaining software under an informal agreement.) Contract refers to an agreement between the acquirer and the contractor, regardless of its actual form (e.g., an MOA).