Appendix VIII: Value Measuring Methodology

The purpose of the value measuring methodology (VMM) is to define, capture, and measure value associated with electronic services unaccounted for in traditional return-on-investment (ROI) calculations, to fully account for costs, and to identify and consider risk. Developed in response to the changing definition of value brought on by the advent of the Internet and advanced software technology, VMM incorporates aspects of numerous traditional business analysis theories and methodologies, as well as newer hybrid approaches.

VMM was designed to be used by organizations across the federal government to steer the development of an e-government initiative, to assist decision-makers in choosing among investment alternatives, to provide the information required to manage effectively, and to maximize the benefit of an investment to the government.

VMM is based on public and private sector business and economic analysis theories and best practices. It provides the structure, tools, and techniques for comprehensive quantitative analysis and comparison of value (benefits), cost, and risk at the appropriate level of detail.

This appendix provides a high-level overview of the four steps that form the VMM framework. The terminology used to describe the steps should be familiar to those involved in developing, selecting, justifying, and managing an information technology (IT) investment:

  • Step 1: Develop a decision framework
  • Step 2: Analyze alternatives
  • Step 3: Pull together the information
  • Step 4: Communicate and document

Step 1: Develop a Decision Framework

A decision framework provides a structure for defining the objectives of an initiative, analyzing alternatives, and managing and evaluating ongoing performance. Just as an outline defines a paper’s organization before it is written, a decision framework creates an outline for designing, analyzing, and selecting an initiative for investment, and then managing the investment. The framework can be a tool that management uses to communicate its agency, government-wide, or focus-area priorities.

The framework facilitates establishing consistent measures for evaluating current and/or proposed initiatives. Program managers may use the decision framework as a tool to understand and prioritize the needs of customers and the organization’s business goals. In addition, it encourages early consideration of risk and thorough planning practices directly related to effective e-government initiative implementation.

The decision framework should be developed as early as possible in the development of a technology initiative. Employing the framework at the earliest phase of development makes it an effective tool for defining the benefits that an initiative will deliver, the risks that are likely to jeopardize its success, and the anticipated costs that must be secured and managed.

The decision framework is also helpful later in the development process as a tool to validate the direction of an initiative, or to evaluate an initiative that has already been implemented.

The decision framework consists of value (benefits), cost, and risk structures, as shown in Figure A8.1. Each of these three elements must be understood to plan, justify, implement, evaluate, and manage an investment.

The tasks and outputs involved with creating a sound decision framework include

  • Tasks:
    1. Identify and define value structure
    2. Identify and define risk structure
    3. Identify and define cost structure
    4. Begin documentation
Figure A8.1 The decision framework.

  • Outputs:
    1. Prioritized value factors
    2. Defined and prioritized measures within each value factor
    3. Risk factor inventory (initial)
    4. Risk tolerance boundary
    5. Tailored cost structure
    6. Initial documentation of basis of estimate of cost, value, and risk

Task 1: Identify and Define the Value Structure

The value structure describes and prioritizes benefits in two layers. The first considers an initiative’s ability to deliver value within each of the five value factors (user value, social value, financial value, operational and foundational value, and strategic value). The second layer delineates the measures to define those values.

By defining the value structure, managers gain a prioritized understanding of the needs of stakeholders. This task also requires the definition of metrics and targets critical to the comparison of alternatives and performance evaluation.

The value factors consist of five separate, but related, perspectives on value. As defined in Figure A8.2, each factor contributes to the full breadth and depth of the value offered by the initiative.

Because the value factors are usually not equal in importance, they must be “weighted” in accordance with their importance to executive management.

Identification, definition, and prioritization of measures of success must be performed within each value factor, as shown in Figure A8.3. Valid results depend on project staff working directly with representatives of user communities to define and array the measures in order of importance. These measures are used to define alternatives, and also serve as a basis for alternatives analysis, comparison, and selection, as well as ongoing performance evaluation.

Figure A8.2 Value factors.

Figure A8.3 A value factor with associated metrics.

In some instances, measures may be defined at a higher level to be applied across a related group of initiatives, such as organization-wide or across a focus-area portfolio. These standardized measures then facilitate “apples-to-apples” comparison across multiple initiatives. This provides a standard management “yardstick” against which to judge investments.

Whether a measure has been defined by project staff or at a higher level of management, it must include the identification of a metric, a target, and a normalized scale. The normalized scale provides a method for integrating objective and subjective measures of value into a single decision metric. The scale used is not important; what is important is that the scale remains consistent.
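
As a concrete illustration, a normalized scale can be as simple as a linear mapping from a measure's metric onto 0–100. The sketch below is hypothetical (VMM does not prescribe an implementation); the anchor values and the response-time example are invented.

```python
# Hypothetical sketch of a normalized scale: linearly map a raw metric
# reading onto 0-100 so objective and subjective measures can be
# combined into one decision metric. Anchor values are invented.

def normalize(raw, worst, target, scale_max=100.0):
    """Map a raw value onto [0, scale_max]: `worst` scores 0 and
    `target` scores scale_max; results are clamped to the scale."""
    if target == worst:
        raise ValueError("target and worst anchors must differ")
    score = (raw - worst) / (target - worst) * scale_max
    return max(0.0, min(scale_max, score))

# Example: average response time in seconds, where lower is better,
# so the worst anchor exceeds the target.
score = normalize(raw=6.0, worst=10.0, target=2.0)  # -> 50.0
```

Because both "higher is better" and "lower is better" metrics map onto the same 0–100 range, the scale stays consistent across measures, which is the property the text emphasizes.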

The measures within the value factors are prioritized by representatives from the user and stakeholder communities during facilitated group sessions.

Task 2: Identify and Define Risk Structure

The risk associated with an investment in a technology initiative may degrade performance, impede implementation, and/or increase costs. Risk that is not identified cannot be mitigated or managed, causing a project to fail either in the pursuit of funding or, more dramatically, during implementation. The greater the attention paid to mitigating and managing risk, the greater the probability of success.

The risk structure serves a dual purpose. First, the structure provides the starting point for identifying and inventorying potential risk factors that may jeopardize an initiative’s success and ensures that plans for mitigating their impact are developed and incorporated into each viable alternative solution.

Second, the structure provides the information management needs to communicate their organization’s tolerance for risk. Risk tolerance is expressed in terms of cost (what is the maximum acceptable cost “creep” beyond projected cost) and value (what is the maximum tolerable performance slippage).

Risks are identified and documented during working sessions with stakeholders. Issues raised during preliminary planning sessions are discovered, defined, and documented. The result is an initial risk inventory.

To map risk tolerance boundaries, selected knowledgeable staff are polled to identify at least five data points that will define the highest acceptable level of risk for cost and value.
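
One way to operationalize such a boundary is to interpolate between the polled data points and test whether an alternative's risk falls inside it. The sketch below is an assumption-laden illustration: the five points and the cost-creep framing are invented, not taken from VMM.

```python
# Illustrative sketch: a risk tolerance boundary built from five polled
# data points of (probability of slippage %, maximum acceptable cost
# creep %). All point values are invented for illustration.

def boundary_at(points, x):
    """Linearly interpolate the boundary at x; `points` must be sorted
    by x. Outside the polled range, the nearest value is used."""
    if x <= points[0][0]:
        return points[0][1]
    if x >= points[-1][0]:
        return points[-1][1]
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        if x0 <= x <= x1:
            return y0 + (y1 - y0) * (x - x0) / (x1 - x0)

def within_tolerance(points, x, y):
    """True if cost creep y at probability x is inside the boundary."""
    return y <= boundary_at(points, x)

# As the chance of slippage grows, the tolerable cost creep shrinks.
polled = [(0, 30), (25, 20), (50, 12), (75, 6), (100, 2)]
```

Under these invented points, an alternative with a 60% chance of slippage and 20% projected cost creep falls outside the boundary, flagging it for additional mitigation.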

Task 3: Identify and Define the Cost Structure

A cost structure is a hierarchy of elements created specifically to accomplish the development of a cost estimate, and is also called a cost element structure (CES).

The most significant objective in the development of a cost structure is to ensure a complete, comprehensive cost estimate and to reduce the risk of missing costs or double counting. An accurate and complete cost estimate is critical for an initiative’s success. Incomplete or inaccurate estimates can result in exceeding the budget for implementation, requiring justification for additional funding or a reduction in scope. The cost structure developed in this step will be used during Step 2 to estimate the cost for each alternative.

Ideally, a cost structure will be produced early in development, prior to defining alternatives. However, a cost structure can be developed after an alternative has been selected or, in some cases, in the early stage of implementation. Early structuring of costs guides refinement and improvement of the estimate during the progress of planning and implementation.

Task 4: Begin Documentation

Documentation of the elements leading to the selection of a particular alternative above all others is the “audit trail” for the decision. The documentation of assumptions, the analysis, the data, the decisions, and the rationale behind them are the foundation for the business case and the record of information required to defend a cost estimate or value analysis.

Early documentation will capture the conceptual solution, desired benefits, and attendant global assumptions (e.g., economic factors such as the discount and inflation rates). The documentation also includes project-specific drivers and assumptions, derived from tailoring the structures.

The basis for the estimate, including assumptions and business rules, should be organized in an easy-to-follow manner that links to all other analysis processes and requirements. This will provide easy access to information supporting the course of action, and will also ease the burden associated with preparing investment justification documents. As an initiative evolves through the life cycle, becoming better defined and more specific, the documentation will also mature in specificity and definition.

Figure A8.4 Risk can be bundled across categories.

Step 2: Alternatives Analysis (Estimate Value, Cost, and Risk)

An alternatives analysis is an estimation and evaluation of all value, cost, and risk factors (Figure A8.4) leading to the selection of the most effective plan of action to address a specific business issue (e.g., service, policy, regulation, business process or system). An alternative that must be considered is the “base case.” The base case is the alternative where no change is made to current practices or systems. All other alternatives are compared against the base case, as well as with each other.

An alternatives analysis requires a disciplined process to consider the range of possible actions to achieve the desired benefits. The rigor of the process to develop the information on which to base the alternatives evaluation yields the data required to justify an investment or course of action. It also provides the information required to support the completion of the budget justification documents. The process also produces a baseline of anticipated value, costs, and risks to guide the management and ongoing evaluation of an investment.

An alternatives analysis must consistently assess the value, cost, and risk associated with more than one alternative for a specific initiative. Alternatives must include the base case and accommodate specific parameters of the decision framework. VMM, properly used, is designed to avoid “analysis paralysis.”

The estimation of cost and the projection of value use ranges to define the individual elements of each structure. Those ranges are then subject to an uncertainty analysis (see Note 1). The result is a range of expected values and cost. Next, a sensitivity analysis (see Note 2) identifies the variables that have a significant impact on this expected value and cost. The analyses will increase confidence in the accuracy of the cost and predicted performance estimates (Figure A8.5). However, a risk analysis is critical to determine the degree to which other factors may drive up expected costs or degrade predicted performance.

Figure A8.5 Predicting performance.

An alternatives analysis must be carried out periodically throughout the life cycle of an initiative. The following list provides an overview of how the business value resulting from an alternatives analysis changes, depending on where in the life cycle the analysis is conducted.

  1. Strategic planning (predecisional)
    1. How well will each alternative perform against the defined value measures?
    2. What will each alternative cost?
    3. What is the risk associated with each alternative?
    4. What will happen if no investment is made at all (base case)?
    5. What assumptions were used to produce the cost estimates and value projections?

  2. Business modeling and pilots
    1. What value is delivered by the initiative?
    2. What are the actual costs to date? Do estimated costs need to be reexamined?
    3. Have all risks been addressed and managed?

  3. Implementation and evaluation
    1. Is the initiative delivering the predicted value? What is the level of value delivered?
    2. What are the actual costs to date?
    3. Which risks have been realized, how are they affecting costs and performance, and how are they being managed?

The tasks and outputs involved with conducting an alternatives analysis include

  • Tasks:
    1. Identify and define alternatives
    2. Estimate value and cost
    3. Conduct risk analysis
    4. Ongoing documentation

  • Outputs:
    • Viable alternatives
    • Cost and value analyses
    • Risk analyses
    • Tailored basis of estimate documenting value, cost, and risk economic factors and assumptions

Task 1: Identify and Define Alternatives

The challenge of this task is to identify viable alternatives that have the potential to deliver an optimum mix of both value and cost-efficiency. Decision-makers must be given, at a minimum, two alternatives plus the base case to make an informed investment decision.

The starting point for developing alternatives should be the information in the value structure and preliminary drivers identified in the initial basis of estimate (see Step 1).

Using this information will help to ensure that the alternatives and, ultimately, the solution chosen, accurately reflect a balance of performance, priorities, and business imperatives. Successfully identifying and defining alternatives requires cross-functional collaboration and discussion among the stakeholders.

The base case explores the impact of identified drivers on value and cost if an alternative solution is not implemented. That may mean that current processes and systems are kept in place or that organizations will build a patchwork of incompatible, disparate solutions. There should always be a base case included in the analysis of alternatives.

Task 2: Estimate Value and Cost

Comparison of alternatives, justification for funding, creation of a baseline against which ongoing performance may be compared, and development of a foundation for more detailed planning require an accurate estimate of an initiative’s cost and value. The more reliable the estimated value and cost of the alternatives, the greater confidence one can have in the investment decision.

The first activity to pursue when estimating value and cost is the collection of data. Data sources and detail will vary based on an initiative’s stage of development. Organizations should recognize that more detailed information may be available at a later stage in the process and should provide best estimates in the early stages, rather than delaying the process by continuing to search for information that is likely not available.

To capture cost and performance data, and conduct the VMM analyses, a VMM model should be constructed. The model facilitates the normalization and aggregation of cost and value, as well as the performance of uncertainty, sensitivity, and risk analyses.

Analysts populate the model with the dollar amounts for each cost element and projected performance for each measure. These predicted values, or the underlying drivers, will be expressed in ranges (e.g., low, expected, or high). The range between the low and high values will be determined based on the amount of uncertainty associated with the projection.

Initial cost and value estimates are rarely accurate. Uncertainty and sensitivity analyses increase confidence that likely cost and value have been identified for each alternative.

Task 3: Conduct Risk Analysis

The only risks that can be managed are those that have been identified and assessed. A risk analysis considers the probability and potential negative impact of specific factors on an organization’s ability to realize projected benefits or estimated cost, as shown in Figure A8.6.

Even after diligent and comprehensive risk mitigation during the planning stage, some level of residual risk will remain that may lead to increased costs and decreased performance. A rigorous risk analysis will help an organization better understand the probability that a risk will occur and the level of impact its occurrence will have on both cost and value. Additionally, risk analysis provides a foundation for building a comprehensive risk-management plan.

Figure A8.6 Assessing probability and impact.

Task 4: Ongoing Documentation

Inherent in these activities is the need to document the assumptions and research that compensate for gaps in information or understanding. For each alternative, the initial documentation of the high-level assumptions and risks will be expanded to include a general description of the alternative being analyzed, a comprehensive list of cost and value assumptions, and assumptions regarding the risks associated with a specific alternative. This often expands the initial risk inventory.

Step 3: Pull Together the Information

Figure A8.7 Risk and cost benefit analysis.

As shown in Figure A8.7, the estimates of cost, value, and risk provide important data points for investment decision-making. However, when analyzing an alternative and making an investment decision, it is critical to understand the relationships among them. The tasks and outputs associated with Step 3 include

  • Tasks:
    1. Aggregate the cost estimate
    2. Calculate the ROI
    3. Calculate the value score
    4. Calculate the risk scores (cost and value)
    5. Compare value, cost, and risk
  • Outputs:
    • Cost estimate
    • ROI metrics
    • Value score
    • Risk scores (cost and value)
    • Comparison of cost, value, and risk

Task 1: Aggregate the Cost Estimate

A complete and valid cost estimate is critical to determining whether or not a specific alternative should be selected. It also is used to assess how much funding must be requested. Understating cost estimates to gain approval, or not considering all costs, may create doubt as to the veracity of the entire analysis. An inaccurate cost estimate might lead to cost overruns, create the need to request additional funding, or reduce scope.

The total cost estimate is calculated by aggregating expected values for each cost element.
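
As a sketch, this aggregation amounts to summing the expected value of every leaf element of the cost structure. The nested dictionary below is a hypothetical, much-abridged CES; the figures are invented.

```python
# Minimal sketch: total cost as the sum of expected values over a
# nested (hypothetical) cost element structure. Figures are invented.

def aggregate_cost(ces):
    """Recursively sum the leaf expected costs of a nested dict."""
    total = 0.0
    for item in ces.values():
        total += aggregate_cost(item) if isinstance(item, dict) else item
    return total

ces = {
    "planning_and_development": {"hardware": 120_000, "software": 80_000},
    "acquisition_and_implementation": {"procurement": 250_000, "training": 40_000},
    "maintenance_and_operations": 60_000,
}
total_cost = aggregate_cost(ces)  # -> 550000.0
```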

Task 2: Calculate the Return-On-Investment

ROI metrics express the relationship between the funds invested in an initiative and the financial benefits the initiative will generate. Simply stated, it expresses the financial “bang for the buck.” Although it is not considered the only measure on which an investment decision should be made, ROI is, and will continue to be, a critical data point for decision-making.
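
The calculation behind this metric can be sketched as follows; the formula is the conventional net-benefit-over-investment ratio, not a VMM-specific definition, and the figures are invented.

```python
# Sketch of the standard ROI relationship: net financial benefit
# divided by the funds invested. Figures are invented.

def roi(financial_benefits, investment):
    """Return ROI as a fraction, e.g. 0.3 for a 30% return."""
    return (financial_benefits - investment) / investment

example_roi = roi(financial_benefits=1_300_000, investment=1_000_000)  # -> 0.3
```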

Task 3: Calculate the Value Score

The value score quantifies the full range of value that will be delivered across the five value factors, as measured against the prioritized measures within the decision framework. The interpretation of a value score varies based on the level from which it is viewed. At the program level, the value score represents how alternatives performed against a specific set of measures. These scores are used to make an “apples-to-apples” comparison of the value delivered by multiple alternatives for a single initiative.

For example, the alternative that has a value score of 80 will be preferred over the alternative with a value score of 20, if no other factors are considered. At the organizational or portfolio level, value scores are used as data points in the selection of initiatives to be included in an investment portfolio. Since the objectives and measures associated with each initiative will vary, decision-makers at the senior level use value scores to determine what percentage of identified value an initiative will deliver. For example, an initiative with a value score of 75 is providing 75% of the possible value the initiative has the potential to deliver. In order to understand what exactly is being delivered, the decision-maker will have to look at the measures of the value structure.

Consider the value score as a simple math problem. The scores projected for each of the measures within a value factor should be aggregated according to their established weights. The weighted sum of these scores is a factor’s value score. The sum of the factors’ value scores, aggregated according to their weights, is the total value score.
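
That two-layer aggregation can be sketched directly; the factor names, weights, and measure scores below are invented, and the weights at each layer are assumed to sum to 1.

```python
# Sketch of the two-layer value score roll-up: measure scores are
# weighted into factor scores, and factor scores into the total.
# All names, weights, and scores are invented for illustration.

def weighted_score(items):
    """items: iterable of (weight, score) pairs; weights sum to 1."""
    return sum(w * s for w, s in items)

# Layer 2: normalized measure scores (0-100) within each value factor.
factor_scores = {
    "user":      weighted_score([(0.6, 80), (0.4, 60)]),  # 72.0
    "financial": weighted_score([(1.0, 50)]),             # 50.0
}

# Layer 1: factor scores rolled up by factor weight into the total.
total_value_score = weighted_score(
    [(0.7, factor_scores["user"]), (0.3, factor_scores["financial"])]
)  # -> 65.4
```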

Task 4: Calculate the Risk Scores

After considering the probability and potential impact of risks, risk scores are calculated to represent a percentage of overall performance slippage or cost increase.

Risk scores provide decision-makers with a mechanism to determine the degree to which value and cost will be negatively affected and whether that degree of risk is acceptable based on the risk tolerance boundaries defined by senior staff. If a selected alternative has a high cost and/or high-value risk score, program management is alerted to the need for additional risk mitigation, project definition, or more detailed risk-management planning. Actions to mitigate the risk may include the establishment of a reserve fund, a reduction of scope, or a refinement of the alternative’s definition. Reactions to excessive risk may also include reconsideration of whether it is prudent to invest in the project at all, given the potential risks, the probability of their occurrence, and the actions required to mitigate them.
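
Reading a risk score as a fractional slippage or cost creep, its effect on the point estimates can be sketched as below; the adjustment rule and all figures are illustrative assumptions, not a prescribed VMM formula.

```python
# Hedged sketch: apply a value risk score (performance slippage) and a
# cost risk score (cost creep), each a fraction, to point estimates.
# The adjustment rule and figures are illustrative assumptions.

def risk_adjust(value_score, cost_estimate, value_risk, cost_risk):
    """Return (risk-adjusted value score, risk-adjusted cost)."""
    return value_score * (1 - value_risk), cost_estimate * (1 + cost_risk)

adj_value, adj_cost = risk_adjust(
    value_score=80.0, cost_estimate=500_000.0,
    value_risk=0.10,  # 10% expected performance slippage
    cost_risk=0.20,   # 20% expected cost creep
)
# adj_value -> 72.0, adj_cost -> 600000.0
```

Comparing the adjusted figures against the risk tolerance boundaries defined in Step 1 shows whether the residual risk is acceptable.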

Task 5: Compare Value, Cost, and Risk

Tasks 1–4 of this step analyze and estimate the value, cost, and risk associated with an alternative. In isolation, each data point does not provide the depth of information required to ensure sound investment decisions.

Prior to the advent of VMM, only financial benefits could be compared with investment costs through the development of an ROI metric. When comparing alternatives, the consistency of the decision framework allows the determination of how much value will be received for the funds invested. Additionally, the use of risk scores provides insight into how all cost and value estimates are affected by risk.

By performing straightforward calculations, it is possible to model the relationships among value, cost, and risk:

  1. The effect risk will have on estimated value and cost
  2. The financial ROI
  3. If comparing alternatives, the value “bang for the buck” (total value returned compared with total required investment)
  4. If comparing initiatives to be included in the investment portfolio, the scope of benefits (senior managers can look beyond overall scores to examine the measures and their associated targets within the decision framework)

Step 4: Communicate and Document

Regardless of the projected merits of an initiative, its success will depend heavily on the ability of its proponents to generate internal support, to gain buy-in from targeted users, and to foster the development of active leadership supporters (champions). Success or failure may depend as much on the utility and efficacy of an initiative as it does on the ability to communicate its value in a manner that is meaningful to stakeholders with diverse definitions of value. The value of an initiative can be expressed to address the diverse definitions of stakeholder value in funding justification documents and in materials designed to inform and enlist support.

Using VMM, the value of a project is decomposed according to the different value factors. This gives project-level managers the tools to customize their value proposition according to the perspective of their particular audience. Additionally, the structure provides the flexibility to respond accurately and quickly to project changes requiring analysis and justification.

The tasks and outputs associated with Step 4:

  • Tasks:
    1. Communicate value to customers and stakeholders
    2. Prepare budget justification documents
    3. Satisfy ad hoc reporting requirements
    4. Use lessons learned to improve processes
  • Outputs:
    • Documentation, insight, and support:
      • To develop results-based management controls
      • For Exhibit 300 data and analytical needs
      • For communicating an initiative’s value
      • For improving decision-making and performance measurement through “lessons learned”

    • Change and ad hoc reporting requirements

Task 1: Communicate the Value to Customers and Stakeholders

Leveraging the results of VMM analysis can facilitate relations with customers and stakeholders. VMM makes communication to diverse audiences easier by incorporating the perspectives of all potential audience members from the outset of analysis. Since VMM calculates the potential value that an investment could realize for all stakeholders, it provides data pertinent to each of those stakeholder perspectives that can be used to bolster support for the project. It also fosters substantive discussion with customers regarding the priorities and detailed plans of the investment. These stronger relationships not only prove critical to the long-term success of the project, but can also lay the foundation for future improvements and innovation.

Task 2: Prepare Budget Justification Documents

Many organizations require comprehensive analysis and justification to support funding requests. IT initiatives may not be funded unless they have demonstrated:

  1. Their applicability to executive missions
  2. Sound planning
  3. Significant benefits
  4. Clear calculations and logic justifying the amount of funding requested
  5. Adequate risk identification and mitigation efforts
  6. A system for measuring effectiveness
  7. Full consideration of alternatives
  8. Full consideration of how the project fits within the confines of other government entities and current law

After completing the VMM analysis, one will have the data required to complete, or support the completion of, budget justification documents.

Task 3: Satisfy Ad Hoc Reporting Requirements

Once a VMM model is built to assimilate and analyze a set of investment alternatives, it can easily be tailored to support ad hoc requests for information or other reporting requirements. In the current, rapidly changing political and technological environment, there are many instances when project managers need to be able to perform rapid analysis. For example, funding authorities, agency partners, market pricing fluctuations, or portfolio managers might impose modifications on the details (e.g., the weighting factors) of a project investment plan; many of these parties are also likely to request additional investment-related information later in the project life cycle. VMM’s customized decision framework makes such adjustments and reporting feasible under short time constraints.

Task 4: Use Lessons Learned to Improve Processes

Lessons learned through the use of VMM can be a powerful tool when used to improve overall organizational decision-making and management processes. For example, in the process of identifying metrics, one might discover that adequate mechanisms are not in place to collect critical performance information. Using this lesson to improve measurement mechanisms would give an organization better capabilities for (a) gauging the project’s success and mission-fulfillment, (b) demonstrating progress to stakeholders and funding authorities, and (c) identifying shortfalls in performance that could be remedied.

Note 1: Uncertainty Analysis

Figure A8.8 Output of Monte Carlo simulation.

Conducting an uncertainty analysis requires the following:

  1. Identify the variables: Develop a range of values for each variable. This range expresses the level of uncertainty about the projection. For example, an analyst may be unsure whether an Internet application will serve a population of 100 or 100,000. It is important to be aware of and express this uncertainty in developing the model in order to define the reliability of the model in predicting results accurately.
  2. Identify the probability distribution for the selected variables: For each variable identified, assign a probability distribution. There are several types of probability distributions (see “Technical Definitions”). A triangular probability distribution is frequently used for this type of analysis. In addition to establishing the probability distribution for each variable, the analyst must also determine whether the actual amount is more likely to fall toward the high or the low end of the range.
  3. Run the simulation: Once the variables’ level of uncertainty is identified and each one has been assigned a probability distribution, run the Monte Carlo simulation. The simulation provides the analyst with the information required to determine the range (low to high) and “expected” results for both the value projection and cost estimate. As shown in Figure A8.8, the output of the Monte Carlo simulation is a range of possible results and their distribution, from which the “mean” (the average of the simulated results) is derived. The analyst then surveys the range and selects the expected value.
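
The three activities above can be sketched with Python's standard library alone; the two cost drivers, their triangular parameters, and the trial count are invented for illustration.

```python
# Minimal uncertainty-analysis sketch: assign triangular distributions
# to uncertain drivers, run a Monte Carlo simulation, and report the
# low/expected/high results. Drivers and parameters are invented.
import random
import statistics

random.seed(42)  # fixed seed for a reproducible illustration

def simulate_cost(n_trials=10_000):
    totals = []
    for _ in range(n_trials):
        # random.triangular(low, high, mode) for each uncertain driver
        dev_cost = random.triangular(400_000, 900_000, 600_000)
        ops_cost = random.triangular(100_000, 300_000, 150_000)
        totals.append(dev_cost + ops_cost)
    return min(totals), statistics.mean(totals), max(totals)

low, expected, high = simulate_cost()
# `expected` falls near the sum of the two triangular means (~816,700).
```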

Note 2: Sensitivity Analysis

Sensitivity analysis is used to identify the business drivers that have the greatest impact on potential variations of an alternative’s cost and its returned value. Many of the assumptions made at the beginning of a project’s definition phase will be found inaccurate later in the analysis. Therefore, one must consider how sensitive a total cost estimate or value projection is to changes in the data used to produce the result. Insight from this analysis allows stakeholders not only to identify variables that require additional research to reduce uncertainty, but also to justify the cost of that research.

The information required to conduct a sensitivity analysis is derived from the same Monte Carlo simulation used for the uncertainty analysis.
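
Continuing the illustrative simulation, sensitivity can be approximated by correlating each sampled driver with the simulated totals; the drivers, the toy cost model, and the parameters below are invented.

```python
# Illustrative sensitivity sketch: rank invented drivers by the
# correlation between their sampled values and the simulated totals.
import random

random.seed(7)

def correlation(xs, ys):
    """Pearson correlation, computed directly."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

samples = {"schedule_slip": [], "license_fees": []}
totals = []
for _ in range(5_000):
    slip = random.triangular(0, 12, 3)      # months; wide uncertainty
    fees = random.triangular(90, 110, 100)  # $K; narrow uncertainty
    samples["schedule_slip"].append(slip)
    samples["license_fees"].append(fees)
    totals.append(80 * slip + fees)         # invented cost model ($K)

sensitivity = {k: correlation(v, totals) for k, v in samples.items()}
# schedule_slip dominates, the same reading the text draws from the
# "Build 5/6 Schedule Slip" bar in Figure A8.9.
```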

Figure A8.9 is a sample sensitivity chart. Based on this chart, it is clear that “Build 5/6 Schedule Slip” is the most sensitive variable.

Figure A8.9 Sensitivity chart.

Definitions

  • Analytic hierarchy process (AHP): AHP is a proven methodology that uses pairwise comparisons of elements to determine mathematically the relative importance of criteria.
  • Benchmark: A measurement or standard that serves as a point of reference by which process performance is measured.
  • Benefit: A term used to indicate an advantage, profit, or gain attained by an individual or organization.
  • Benefit-to-cost ratio (BCR): The ratio of an initiative’s financial benefits to its costs, computed as Benefits ÷ Costs.
  • Cost element structure (CES): A hierarchical structure created to facilitate the development of a cost estimate. May include elements that are not strictly products to be developed or produced, for example, travel, risk, program management reserve, life-cycle phases, and so on. A sample CES follows:
  1. System planning and development
    • 1.1 Hardware
    • 1.2 Software
      • 1.2.1 Licensing fees

    • 1.3 Development support
      • 1.3.1 Program management oversight
      • 1.3.2 System engineering architecture design
      • 1.3.3 Change management and risk assessment
      • 1.3.4 Requirement definition and data architecture
      • 1.3.5 Test and evaluation

    • 1.4 Studies
      • 1.4.1 Security
      • 1.4.2 Accessibility
      • 1.4.3 Data architecture
      • 1.4.4 Network architecture

    • 1.5 Other
      • 1.5.1 Facilities
      • 1.5.2 Travel

  2. System acquisition and implementation
    • 2.1 Procurement
      • 2.1.1 Hardware
      • 2.1.2 Software
      • 2.1.3 Customized software

    • 2.2 Personnel
    • 2.3 Training

  3. System maintenance and operations
    • 3.1 Hardware
      • 3.1.1 Maintenance
      • 3.1.2 Upgrades
      • 3.1.3 Life-cycle replacement

    • 3.2 Software
      • 3.2.1 Maintenance
      • 3.2.2 Upgrades
      • 3.2.3 License fees

    • 3.3 Support
      • 3.3.1 Helpdesk
      • 3.3.2 Security
      • 3.3.3 Training
  • Cost estimate: The estimation of a project’s life-cycle costs, time-phased by fiscal year, based on the description of a project or system’s technical, programmatic, and operational parameters. A cost estimate may also include related analyses such as cost–risk analyses, cost–benefit analyses, schedule analyses, and trade studies.
  • Commercial cost estimating tools:
    • PRICE S: A parametric model used to estimate software size, development cost, and schedules, along with software operations and support costs. Software size estimates can be generated for source lines of code, function points, or predictive objective points. Software development costs are estimated based on input parameters reflecting the difficulty, reliability, productivity, and size of the project. These same parameters are used to generate operations and support costs. Monte Carlo risk simulation can be generated as part of the model output. Government agencies (e.g., NASA, IRS, U.S. Air Force, U.S. Army, U.S. Navy) as well as private companies have used PRICE S.
    • PRICE H, HL, M: A suite of hardware parametric cost models used to estimate hardware development, production and operations, and support costs. These hardware models provide the capability to generate a total ownership cost to support program management decisions. Monte Carlo risk simulation can be generated as part of the model output. Government agencies (e.g., NASA, U.S. Air Force, U.S. Army, U.S. Navy) as well as private companies have used the PRICE suite of hardware models.
    • SEER-SEM (system evaluations and estimation of resources-software estimating model): A parametric modeling tool used to estimate software development costs, schedules, and manpower resource requirements. Based on the input parameters provided, SEER-SEM develops cost, schedule, and resource requirement estimates for a given software development project.
    • SEER-H (system evaluations and estimation of resources-hybrid): A hybrid cost-estimating tool that combines analogous and parametric cost-estimating techniques to produce models that accurately estimate hardware development, production, and operations and maintenance costs. SEER-H can be used to support a program manager’s hardware life-cycle cost estimate or provide an independent check of vendor quotes or estimates developed by third parties. SEER-H is part of a family of models from Galorath Associates, including SEER-SEM (which estimates the development and production costs of software) and SEER-DFM (used to support design for manufacturability analyses).

  • Data sources (by phase of development):
    1. Strategic planning
      • 1.1 Strategic and performance plans
      • 1.2 Subject-matter expert input
      • 1.3 New and existing user surveys
      • 1.4 Private/public sector best practices, lessons learned, and benchmarks
      • 1.5 Enterprise architecture
      • 1.6 Modeling and simulation
      • 1.7 Vendor market survey

    2. Business modeling and pilots
      • 2.1 Subject-matter expert input
      • 2.2 New and existing user surveys
      • 2.3 Best practices, lessons learned, and benchmarks
      • 2.4 Refinement of modeling and simulation

    3. Implementation and evaluation
      • 3.1 Data from phased implementation
      • 3.2 Actual spending/cost data
      • 3.3 User group/stakeholder focus groups
      • 3.4 Other performance measurement

  • Internal rate of return (IRR): The IRR is the discount rate that sets the net present value of the program or project to zero. While the internal rate of return does not generally provide an acceptable decision criterion, it does provide useful information, particularly when budgets are constrained or there is uncertainty about the appropriate discount rate.
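A minimal sketch of computing an IRR, finding by bisection the discount rate at which NPV equals zero. The cash flows are hypothetical:

```python
def npv(rate, cash_flows):
    """NPV of a stream of cash flows, one per period; cash_flows[0] is period 0."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def irr(cash_flows, lo=-0.99, hi=10.0, tol=1e-9):
    """Find the discount rate where NPV = 0 by bisection.
    Assumes a single sign change in NPV over [lo, hi]."""
    for _ in range(200):
        mid = (lo + hi) / 2
        if npv(mid, cash_flows) > 0:
            lo = mid   # NPV still positive: the rate must rise
        else:
            hi = mid
        if hi - lo < tol:
            break
    return (lo + hi) / 2

rate = irr([-1000, 400, 400, 400])  # initial investment, then yearly savings
```

By construction, discounting the same cash flows at the returned rate drives their NPV to (approximately) zero.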
  • Life-cycle costs: The overall estimated cost for a particular program alternative over the time period corresponding to the life of the program, including direct and indirect initial costs plus any periodic or continuing costs of operation and maintenance.
  • Monte Carlo simulation: A simulation is any analytical method that is meant to imitate a real-life system, especially when other analyses are too mathematically complex or too difficult to reproduce. Spreadsheet risk analysis uses both a spreadsheet model and simulation to analyze the effect of varying inputs on outputs of the modeled system. One type of spreadsheet simulation is Monte Carlo simulation, which randomly generates values for uncertain variables over and over to simulate a model. (Monte Carlo simulation was named for Monte Carlo, Monaco, where the primary attractions are casinos containing games of chance.) Analysts identify all key assumptions for which the outcome is uncertain. For the life cycle, numerous inputs are each assigned one of several probability distributions. The type of distribution selected depends on the conditions surrounding the variable. During simulation, the value used in the cost model is selected randomly from the defined possibilities.
  • Net present value (NPV): NPV is defined as the difference between the present value of benefits and the present value of costs. The benefits referred to in this calculation must be quantified in cost or financial terms in order to be included.

    Net Present Value = [PV(Internal Project Cost Savings, Operational) + PV(Mission Cost Savings)] − PV(Initial Investment)
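Under assumed savings streams and discount rate (all figures hypothetical), the formula can be evaluated as:

```python
def present_value(amounts, rate):
    """Discount a stream of year-end amounts (year 1, year 2, ...) to today."""
    return sum(a / (1 + rate) ** t for t, a in enumerate(amounts, start=1))

rate = 0.07                          # illustrative discount rate
internal_savings = [120, 130, 140]   # operational cost savings, $K per year
mission_savings = [200, 210, 220]    # mission cost savings, $K per year
investment = [400]                   # initial investment, $K in year 1

net_present_value = (present_value(internal_savings, rate)
                     + present_value(mission_savings, rate)
                     - present_value(investment, rate))
# A positive NPV means discounted savings exceed the discounted investment
```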

  • Polling tools:
    • Option finder: A real-time polling device that lets participants vote on questions using handheld remotes; results are displayed immediately, along with statistical information such as the degree of variance, and then discussed.
    • Group systems: A tool that allows participants to answer questions using individual laptops. Answers are displayed to all participants anonymously to spur discussion and the free-flowing exchange of ideas. Group systems also include a polling feature.

  • Return on investment (ROI): A financial management approach used to explain how well a project delivers benefits in relation to its cost. Several methods are used to calculate a return on investment; refer to internal rate of return (IRR), net present value (NPV), and savings to investment ratio (SIR).
  • Risk: A term used to define the class of factors that (a) have a measurable probability of occurring during an investment’s life cycle, (b) have an associated cost or effect on the investment’s output or outcome (typically an adverse effect that jeopardizes the success of an investment), and (c) have alternatives from which the organization may choose.

  • Risk categories:
    1. Project resources/financial: Risk associated with “cost creep,” misestimating life-cycle costs, reliance on a small number of vendors without cost controls, and (poor) acquisition planning.
    2. Technical/technology: Risk associated with immaturity of commercially available technology; reliance on a small number of vendors; risk of technical problems/failures with applications and their ability to provide planned and desired technical functionality.
    3. Business/operational: Risk associated with business goals; risk that the proposed alternative fails to result in process efficiencies and streamlining; risk that business goals of the program or initiative will not be achieved; risk that the program effectiveness targeted by the project will not be achieved.
    4. Organizational and change management: Risk associated with organizational/agency/government-wide cultural resistance to change and standardization; risk of bypassing, nonuse, or improper use of or adherence to new systems and processes due to organizational structure and culture; inadequate training planning.
    5. Data/information: Risk associated with the loss/misuse of data or information, risk of increased burdens on citizens and businesses due to data collection requirements if the associated business processes or the project requires access to data from other sources (federal, state, and/or local agencies).
    6. Security: Risk associated with the security/vulnerability of systems, websites, information, and networks; risk of intrusions and connectivity to other (vulnerable) systems; risk associated with the misuse (criminal/fraudulent) of information; must include level of risk (high, medium, basic) and what aspect of security determines the level of risk, for example, need for confidentiality of information associated with the project/system, availability of the information or system, or reliability of the information or system.
    7. Strategic: Risk that the proposed alternative fails to achieve the organization’s strategic goals or to contribute to them.
    8. Privacy: Risk associated with the vulnerability of information collected on individuals, or risk of vulnerability of proprietary information on businesses.

  • Risk analysis: A technique to identify and assess factors that may jeopardize the success of a project or achieving a goal. This technique also helps define preventive measures to reduce the probability of these factors from occurring and identify countermeasures to successfully deal with these constraints when they develop.
  • Savings to investment ratio (SIR): SIR represents the ratio of savings to investment. The “savings” in the SIR computation are generated by internal operational savings and mission cost savings. The flow of costs and cost savings into the SIR formula is as shown in Figure A8.10.
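A sketch of the SIR computation described above, with hypothetical savings and investment streams and an illustrative discount rate:

```python
def present_value(amounts, rate):
    """Discount a stream of year-end amounts (year 1, year 2, ...) to today."""
    return sum(a / (1 + rate) ** t for t, a in enumerate(amounts, start=1))

rate = 0.07                            # illustrative discount rate
operational_savings = [150, 150, 150]  # internal operational savings, $K/year
mission_savings = [100, 100, 100]      # mission cost savings, $K/year
investment = [600]                     # up-front investment, $K in year 1

sir = ((present_value(operational_savings, rate)
        + present_value(mission_savings, rate))
       / present_value(investment, rate))
# SIR > 1.0 means the discounted savings exceed the discounted investment
```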
  • Sensitivity analysis: Analysis of how sensitive outcomes are to changes in the assumptions. The assumptions that deserve the most attention should depend largely on the dominant benefit and cost elements and the areas of greatest uncertainty of the program or process being analyzed.
  • Stakeholder: An individual or group with an interest in the success of an organization in delivering intended results and maintaining the viability of the organization’s products and services. Stakeholders influence programs, products, and services.
Figure A8.10 Savings to investment ratio.

Note

* This appendix is based on the Value Measuring Methodology How-To-Guide, published by the U.S. Chief Information Officers Council.
