3
DESIGNING METRICS

We measure productivity and quality to quantify the project’s progress as well as the attributes of the product. A metric enables us to understand and manage the process, and to measure the impact of changes to the process—that is, new methods, training, and so on. The use of metrics also enables us to know when we have met our goals—that is, usability, performance, and test coverage.

In measuring software systems, we can create metrics based on the different parts of a system—that is, requirements, specifications, code, documentation, tests, and training. For each of these components, we can measure attributes that include usability, maintainability, extendibility, size, defect level, performance, and completeness. While the majority of organizations will use metrics found in books such as this one, it is possible to generate metrics specific to a particular task. Whatever their origin, metrics should be collectable, reproducible, pertinent, and system independent.

Actually creating these sorts of metrics often runs into obstacles. Many employees complain that it is just not possible to measure (that is, develop metrics for) what they do. That is simply not true. Areas previously thought to be “unmeasurable” have been shown to be measurable if someone is motivated and creative enough to pursue an innovative approach.

Many employees stress the unfairness of being measured because they feel that they do not have any control over the outcome or the impact. Although it is rare that any one specific person has total control over the outcome, the impact on the results should be clearly demonstrable. These same employees also suggest that measurement will invite unfair comparisons. However, comparison is going to happen whether they like it or not. By taking the initiative, the employee can help the team or organization by proactively comparing performance, determining how well they are doing, and seeking ways to improve their performance.

Employees also fear that the results will be used against them. They need to be convinced that demonstrating openness and accountability, even when the news is not so good, inspires trust. If they are open about where they need to improve, most people will give them the benefit of the doubt as long as they demonstrate that they are sincerely seeking to improve.

Two of the biggest complaints are that the data to be used for measurement are not available and/or the team simply does not have the resources to collect the data. In this age of information technology, it is hard to believe that performance data are not available. If a project is important enough to fund, staff should be able to find some way to collect data on its effectiveness. It can be as simple as a desktop spreadsheet fed from a hard-copy log, or as elaborate as trained observer ratings, with numerous variations in between. What is important is that critical indicators of success are identified and measured consistently and conscientiously. Dedicating a small percentage of staff time to devising thoughtful measures, collecting the data on those measures, and then using the data to manage for results will generally save far more time than would otherwise be spent correcting problems down the road.

What Constitutes a Good Metric?

Now that the team is on board with the measurement process, they need to spend some time preparing meaningful performance measures. Table 3.1 provides 10 criteria for effective measures.

Table 3.1 Criteria of Effective Measures

Results oriented: Focused primarily on desired outcomes, with less emphasis on outputs
Important: Concentrates on significant matters
Reliable: Provides accurate, consistent information over time
Useful: Information is valuable to both policy and program decision-makers and can be used to provide continuous feedback on performance to agency staff and managers
Quantitative: Expressed in terms of numbers or percentages
Realistic: Measures are set that can actually be calculated
Cost-effective: The measures themselves are sufficiently valuable to justify the cost of collecting the data
Easy to interpret: Do not require an advanced degree in statistics to use and understand
Comparable: Can be used for benchmarking against other organizations, internally and externally
Credible: Users have confidence in the validity of the data

A wide variety of metrics are available. You will have to determine which metrics are right for your organization. However, before you even select the metrics you will be using, you will need to gear your company up for the process of creating and/or selecting from those available metrics. A typical method for a benchmarking initiative consists of

  1. Selecting the process and building support. It is more than likely that there will be many processes to benchmark. Break down a large project into discrete, manageable subprojects. These subprojects should be prioritized, with those critical to the goals of the organization taking priority.
  2. Determining current performance. Quite a few companies decide to benchmark because they have heard the wonderful success stories of Motorola, General Electric, or more modern companies such as Facebook and Google. During my days with the New York Stock Exchange, the chairman was forever touting the latest management fad and insisting that we all follow suit. The problem is that every organization is different and benchmarking is extremely issue-specific. Before embarking on a benchmarking effort, the planners need to really investigate and understand the business environment and the impact of specific business processes on overall performance.
  3. Determining where performance should be. Perhaps just as importantly, the organization should benchmark itself against one of its successful competitors. This is how you can determine where “you should be” in terms of your own organization’s performance.
  4. Determining the performance gap. You now know where you are (No. 2 on this list) as well as where you would like to be (No. 3 on this list). The difference between the two is referred to as the performance gap. The gap must be identified, organized, and categorized. In other words, the causal factor should be attributed to people, process, technology, or cultural influences, and then prioritized.
  5. Designing an action plan. Technologists are most comfortable with this step as an action plan is really the same thing as a project plan. It should list the chronological steps for solving a particular problem as identified in number 4. Information in this plan should also include problem-solving tasks, who is assigned to each task, and the time frame.
  6. Striving for continuous improvement. In the process-improvement business, there are two catchphrases: “process improvement” and “continuous improvement.” The former is reactive to a current set of problems and the latter is proactive, meaning that the organization should continuously be searching for ways to improve.

One of the reasons why there are more than a handful of performance measurement implementation failures is that the metrics were poorly defined. Therefore, one of the most critical tasks confronting the team is the selection of metrics. However, you cannot just select some from column A and some from column B. Different metrics work differently for different companies, and even for different divisions within the same company.

One method that can be used to select among the plethora of metrics is the analytic hierarchy process (AHP). AHP is a framework of logic and problem-solving that organizes data into a hierarchy of forces that influence decision results. It is a simple, adaptable methodology used by government as well as many commercial organizations. One of its chief selling points is that it is participative, promotes consensus, and requires no specialized skills to use.

AHP is based on a series of paired comparisons in which users provide judgments about the relative dominance of the two items. Dominance can be expressed in terms of preference, quality, importance, or any other criterion. Metric selection usually begins by gathering participants together for a brainstorming session. The number of participants selected should be large enough to ensure that a sufficient number of metrics are initially identified.

Table 3.2 AHP Pairwise Comparisons

COMPARATIVE IMPORTANCE DEFINITION EXPLANATION
1 Equally important Two decision elements (e.g., indicators) equally influence the parent decision element.
3 Moderately more important One decision element is moderately more influential than the other.
5 Strongly more important One decision element has stronger influence than the other.
7 Very strongly more important One decision element has significantly more influence over the other.
9 Extremely more important The difference between influences of the two decision elements is extremely significant.
2, 4, 6, 8 Intermediate judgment values Judgment values between equally, moderately, strongly, very strongly, and extremely.
Reciprocals If v is the judgment value when i is compared with j, then 1/v is the judgment value when j is compared with i.

Participants, moderated by a facilitator, brainstorm a set of possible metrics, and the most important metrics are selected. Using a written survey, each participant is asked to compare all possible pairs of metrics in each of the four categories as to their relative importance, using the scale shown in Table 3.2.

From the survey responses, the facilitator computes the decision model for each participant that reflects the relative importance of each metric. Each participant is then supplied with the decision models of all other participants and is asked to rethink their original metric choices. The group meets again to determine the final set of metrics for the scorecard. The beauty of this process is that it makes readily apparent any inconsistencies in making paired comparisons and prevents metrics from being discarded prematurely.
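To make the facilitator’s computation step concrete, the following minimal Python sketch derives one participant’s relative weights from a pairwise comparison matrix using the geometric-mean approximation of AHP’s principal eigenvector. The metric names and judgment values are illustrative assumptions, not taken from the text.

# A minimal AHP sketch: derive relative weights from one participant's
# pairwise judgments using the geometric-mean (approximate eigenvector)
# method. The matrix and metric names below are illustrative assumptions.
import math

metrics = ["defect density", "test coverage", "cycle time"]

# comparisons[i][j] is the judgment value when metric i is compared with
# metric j on the 1-9 scale of Table 3.2; the mirror image holds the
# reciprocals, and the diagonal is 1 (a metric compared with itself).
comparisons = [
    [1.0, 3.0, 5.0],
    [1 / 3, 1.0, 3.0],
    [1 / 5, 1 / 3, 1.0],
]

# Geometric mean of each row, normalized so the weights sum to 1.
row_means = [math.prod(row) ** (1 / len(row)) for row in comparisons]
total = sum(row_means)
weights = [m / total for m in row_means]

for name, weight in zip(metrics, weights):
    print(f"{name}: {weight:.3f}")
# Prints approximately 0.637, 0.258, and 0.105.

Comparing each participant’s computed weights in this way is also what exposes inconsistent judgments (e.g., A over B, B over C, but C over A) before any metric is discarded.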

Clinton et al. (2002) provide an example of using AHP to determine how to weight the relative importance of the categories and metrics used in a balanced scorecard framework. A group of participants meets to compare the relative importance of the four balanced scorecard categories in the first level of the AHP hierarchy. They may want to consider the current product life-cycle stage when doing their comparisons. For example, while in the product-introduction stage, formalizing business processes may be of considerable relative importance. When dealing with a mature or declining product, on the other hand, the desire to minimize variable cost per unit may dictate that the financial category be of greater importance than the other three scorecard categories. They provide the following illustrative sample survey question that might deal with this issue:

Survey question: In measuring success in pursuing a differentiation strategy, indicate for each pair which of the two balanced scorecard categories is more important. If you believe that the categories being compared are equally important, mark a “1.” Otherwise, mark the box with the number from the scale above that corresponds to the intensity of importance, on the side of the category you consider more important.

Consider the following examples:

[Sample pairwise survey item: customer category vs. financial category]

In this example, the customer category is judged to be strongly more important than the financial category.

[Sample pairwise survey item: customer category vs. internal business processes category]

In this example, the customer category is judged to be equally important to the internal business processes category.

The values can then be entered into AHP software, such as Expert Choice (http://www.expertchoice.com/software/), which will compute local and global weights, with each set of weights summing to 1. Local weights are the relative importance of each metric within a category, and global weights are the relative importance of each metric to the overall goal. The software will show the relative importance of all metrics and scorecard categories. In our example, the category results might have been

CATEGORY RELATIVE WEIGHT
Innovation and learning .32
Internal business processes .25
Customer .21
Financial .22
Total 1.00

The results show that the participants believe that the most important category is innovation and learning. If, within the innovation and learning category, it is determined that the market share metric is the most important, with a local weight of .40, then we can calculate the global outcome by multiplying the local decision weights from level 1 (categories) by the local decision weights from level 2 (metrics).

Using determined metrics for each of the four perspectives of the balanced scorecard, an example of the final calculation is shown in Table 3.3.

The results indicate that the least important metric is revenue from the customer category and the most important metric is market share from the innovation and learning category.

Table 3.3 AHP Global Outcome Worksheet

BALANCED SCORECARD
STRATEGIC OBJECTIVE: SUCCESS IN PURSUING A DIFFERENTIATION STRATEGY
CATEGORIES AND METRICS LEVEL ONE × LEVEL TWO GLOBAL OUTCOME
INNOVATION AND LEARNING
Market share (.40×.32) .128
No. of new products (.35×.32) .112
Revenue from new products (.25×.32) .080
Total: Innovation and learning .320
INTERNAL BUSINESS PROCESSES
No. of product units produced (.33×.25) .08333
Minimizing variable cost per unit (.33×.25) .08333
No. on-time deliveries (.33×.25) .08333
Total internal business processes .250
CUSTOMER
Revenue (.20×.21) .042
Market share (.38×.21) .080
QFD (quality function deployment) score (.42×.21) .088
Total customer .210
FINANCIAL
Cash value-added (.28×.22) .062
Residual income (.32×.22) .070
Cash flow ROI (.40×.22) .088
Total financial .220
Sum of the global weights 1.00
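The arithmetic behind Table 3.3 is simple enough to sketch: each metric’s global weight is its local (level 2) weight multiplied by its category’s (level 1) weight. The figures below are the innovation and learning rows from the table.

# Global weight = level 1 (category) weight x level 2 (metric) local weight.
# The figures are the innovation and learning rows of Table 3.3.
category_weight = 0.32
local_weights = {
    "Market share": 0.40,
    "No. of new products": 0.35,
    "Revenue from new products": 0.25,
}

global_weights = {name: w * category_weight for name, w in local_weights.items()}
for name, gw in global_weights.items():
    print(f"{name}: {gw:.3f}")   # .128, .112, .080

# Sanity check: a category's global weights sum to the category's own weight.
assert abs(sum(global_weights.values()) - category_weight) < 1e-9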

IT-Specific Measures

The four balanced scorecard perspectives might require some modification to be effective as an information technology (IT) scorecard. The reason for this is that the IT department is typically an internal rather than external service supplier and projects are commonly carried out for the benefit of both the end users and the organization as a whole—rather than individual customers within a large market.

Four alternative perspectives might include

  1. User orientation (end-user view)
    1. Mission: Deliver value-adding products and services to end users
    2. Objectives: Establish and maintain a good image and reputation with end users, exploit IT opportunities, establish good relationships with the end-user community, satisfy end-user requirements, and be perceived as the preferred supplier of IT products and services

  2. Business value (management’s view)
    1. Mission: Contribute to the value of the business
    2. Objectives: Establish and maintain a good image and reputation with management, ensure that IT projects provide business value, control IT costs, and sell appropriate IT products and services to third parties

  3. Internal processes (operations-based view)
    1. Mission: Deliver IT products and services in an efficient and effective manner
    2. Objectives: Anticipate and influence requests from end users and management, be efficient in planning and developing IT applications, be efficient in operating and maintaining IT applications, be efficient in acquiring and testing new hardware and software, and provide cost-effective training that satisfies end users

  4. Future readiness (innovation and learning view)
    1. Mission: Deliver continuous improvement and prepare for future challenges
    2. Objectives: Anticipate and prepare for IT problems that could arise, continuously upgrade IT skills through training and development, regularly upgrade IT applications portfolio, regularly upgrade hardware and software, and conduct cost-effective research into emerging technologies and their suitability for the business

It is then possible to drill down to provide IT-specific measures for each of these four perspectives. Most of the metrics that appear in Table 3.4 have been derived from mainstream literature.

It is important to note that the three key balanced scorecard principles of cause-and-effect relationships, sufficient performance drivers, and linkage to financial measures are built into this IT scorecard. Cause-and-effect relationships can involve one or more of the four perspectives. For example, better staff skills (future readiness perspective) will reduce the frequency of bugs in an application (internal operations perspective).

In a typical company, senior management might question the benefits of large investments in IT and want IT to be better aligned with corporate strategy. Some of the concerns of the different stakeholder groups might be

  1. Senior management
    1. Does IT support the achievement of business objectives?
    2. What value does the expenditure on IT deliver?
    3. Are IT costs being managed effectively?
    4. Are IT risks being identified and managed?
    5. Are targeted intercompany IT synergies being achieved?

  2. Business unit executives
    1. Are IT services delivered at a competitive cost?
    2. Does IT deliver on its service-level commitments?

    3. Do IT investments positively affect business productivity or the customer experience?
    4. Does IT contribute to the achievement of our business strategies?

  3. Corporate compliance internal audit
    1. Are the organization’s assets and operations protected?
    2. Are the key business and technology risks being managed?
    3. Are proper processes, practices, and controls in place?

  4. IT organization
    1. Are we developing the professional competencies needed for successful service delivery?
    2. Are we creating a positive workplace environment?
    3. Do we effectively measure and reward individual and team performance?
    4. Do we capture organizational knowledge to continuously improve performance?
    5. Can we attract/retain the talent we need to support the business?

Table 3.4 IT Scorecard Metrics

USER ORIENTATION
Customer satisfaction

BUSINESS VALUE
Cost control: Percentage over/under IT budget; allocation to different budget items; IT budget as a percentage of revenue; IT expenses per employee
Sales to third parties: Revenue from IT-related products/services
Business value of an IT project: Traditional measures (e.g., ROI, payback); business evaluation based on information economics (value linking, value acceleration, value restructuring, technological innovation); strategic match with business contribution to product/service quality, customer responsiveness, management information, and process flexibility
Risks: Unsuccessful strategy risk; IT strategy risk; definitional uncertainty (e.g., low degree of project specification); technological risk (e.g., bleeding-edge hardware or software); development risk (e.g., inability to put things together); operational risk (e.g., resistance to change); IT service delivery risk (e.g., human/computer interface difficulties)
Business value of the IT department/functional area: Percentage of resources devoted to strategic projects; percentage of time spent by the IT manager in meetings with corporate executives; perceived relationship between IT management and top management

INTERNAL PROCESSES
Planning: Percentage of resources devoted to planning and review of IT activities
Development: Percentage of resources devoted to applications development; time required to develop a standard-sized new application; percentage of applications programming with reused code; time spent to repair bugs and fine-tune new applications
Operations: Number of end-user queries handled; average time required to address an end-user problem

FUTURE READINESS
IT specialist capabilities: IT training and development budget as a percentage of the overall IT budget; expertise with specific technologies; expertise with emerging technologies; age distribution of IT staff
Satisfaction of IT staff: Turnover/retention of IT employees; productivity of IT employees
Applications portfolio: Age distribution; platform distribution; technical performance of the applications portfolio; user satisfaction with the applications portfolio
Research into emerging technologies: IT research budget as a percentage of the IT budget; perceived satisfaction of top management with the reporting on how specific emerging technologies may or may not be applicable to the company

One of the most important things a chief information officer (CIO) can do is convince senior management that IT is not a service provider, but a strategic partner. As shown in Table 3.5, there are some important differences.

Being a strategic partner enables us to develop an IT scorecard that will encompass the following four quadrants:

  1. Customer orientation: To be the supplier of choice for all information services, either directly or indirectly through supplier relationships
  2. Corporate contribution: To enable and contribute to the achievement of business objectives through effective delivery of value-added information services
  3. Operational excellence: To deliver timely and effective services at targeted service levels and costs
  4. Future orientation: To develop the internal capabilities to continuously improve performance through innovation, learning, and personal and organizational growth

The relationship between IT and business can be more explicitly expressed through a cascade of balanced scorecards, as shown in Figure 3.1.

Table 3.5 Service Provider to Strategic Partner

SERVICE PROVIDER vs. STRATEGIC PARTNER
IT is for efficiency vs. IT is for business growth
Budgets are driven by external benchmarks vs. budgets are driven by business strategy
IT is separable from the business vs. IT is inseparable from the business
IT is seen as an expense to control vs. IT is seen as an investment to manage
IT managers are technical experts vs. IT managers are business problem solvers

Cascading scorecards can be used within IT as well. Each set of scorecards is actually composed of one or more unit scorecards. For example, the IT operations scorecard might also include a scorecard for the IT service desk. The resulting IT scorecard consists of objectives, measures, and benchmarks, as shown in Tables 3.6 through 3.9.

Figure 3.1 Cascade of balanced scorecards.

Table 3.6 Corporate Contribution Scorecard Evaluates IT from the Perspective of Senior Management

OBJECTIVE MEASURES BENCHMARKS
Business/IT alignment Operational plan/budget approval Not applicable
Value delivery Measured in business unit performance Not applicable
Cost management Attainment of expense and recovery targets Industry expenditure comparisons
Attainment of unit cost targets Compass operational ‘top performance’ levels
Risk management Results of internal audits Defined sound business practices
Execution of security initiative Not applicable
Delivery of disaster recovery assessment Not applicable
Intercompany synergy achievement Single system solutions Merger and acquisition guidelines
Target state architecture approval Not applicable
Attainment of targeted integrated cost reductions Not applicable
IT organization integration Not applicable

The measure of each of these unit scorecards is aggregated in the IT scorecard. This, in turn, is fed into and evaluated against the business scorecard.

There are a wide variety of other IT-oriented metrics that an organization can utilize, as shown in Table 3.10. Others can be found in various appendices of this book.

Hopefully by now you understand the importance of developing cascading sets of interlinked scorecards. From a departmental perspective, you will need to review, understand, and adhere to the organizational scorecard at the macro level, while reviewing the departmental- and system-level scorecards at the micro level.

Table 3.7 Customer Orientation Scorecard Evaluates the Performance of IT from the Perspective of Internal Business Users

OBJECTIVE MEASURES BENCHMARKS
Customer satisfaction Business unit survey ratings
• Cost transparency and levels
• Service quality and responsiveness
• Value of IT advice and support
• Contribution to business objectives
Not applicable
Competitive costs • Attainment of unit cost targets
• Blended labor rates
Compass operational ‘top level performing’ levels Market comparisons
Development services performance Major project success scores
• Recorded goal attainment
• Sponsor satisfaction ratings
• Project governance rating
Not applicable
Operational services performance Attainment of targeted service levels Competitor comparisons

Table 3.8 Operational Excellence Scorecard Views IT from the Perspective of IT Managers and Audit and Regulatory Bodies

OBJECTIVE MEASURES BENCHMARKS
Development process performance Function point measures of:
• Productivity
• Quality
• Delivery rate
To be determined
Operational process performance Benchmark based measures of:
• Productivity
• Responsiveness
• Change management effectiveness
• Incident occurrence levels
Selected compass benchmark studies
Process maturity Assessed level of maturity and compliance in priority processes within:
• Planning and organization
• Acquisition and implementation
• Delivery and support
• Monitoring
To be defined
Enterprise architecture management • Major project architecture approval
• Product acquisition compliance to technology standards
• “State of the infrastructure” assessment
Sound business practices

System-Specific Metrics

Systems compose the micro level. For example, enterprise resource planning (ERP) is one of the most sophisticated and complex of all software systems. It is a customizable software package that includes integrated business solutions for core business processes such as production planning and control and warehouse management.

Table 3.9 Future Orientation Perspective Shows IT Performance from the Perspective of the IT Department Itself: Process Owners, Practitioners, and Support Professionals

OBJECTIVE MEASURES BENCHMARKS
Human resource management Results against targets:
• Staff complement by skill type
• Staff turnover
• Staff “billable” ratio
• Professional development days per staff member
Market comparison
Industry standard
Employee satisfaction Employee satisfaction survey scores in:
• Compensation
• Work climate
• Feedback
• Personal growth
• Vision and purpose
North American technology-dependent companies
Knowledge management • Delivery of internal process improvements to library
• Implementation of “lessons-learned” sharing process
Not applicable

Table 3.10 Frequently Used Metrics

SYSTEM/SERVICE/FUNCTION POSSIBLE METRIC(S)
R&D Innovation capture
No. of quality improvements
Customer satisfaction
Process improvement Cycle time, activity costs
No. supplier relationships
Total cost of ownership
Resource planning, account management Decision speed
Lowering level of decision authority
Groupware Cycle time reduction
Paperwork reduction
Decision support Decision reliability
Timeliness
Strategic awareness
Lowering level of decision authority
Management information systems Accuracy of data
Timeliness
e-Commerce Market share
Price premium for products/services
Information-based products and services Operating margins
New business revenues
Cash flow
Knowledge retention

Rosemann and Wiese (1999) use a modified balanced scorecard approach to

  1. Evaluate the implementation of ERP software
  2. Evaluate the continuous operation of the ERP installation

Along with the four balanced scorecard perspectives of financial, customer, internal processes, and innovation and learning, they have added a fifth for the purposes of ERP installation—the project perspective. The individual project requirements, such as identification of critical path, milestones, and so on, are covered by this fifth perspective that represents all the project-management tasks. Figure 3.2 represents the Rosemann–Wiese approach.

Most ERP implementers concentrate on the financial and business process aspects of ERP implementation. Using the ERP balanced scorecard would enable them to also focus on customer, innovation, and learning perspectives. The latter is particularly important as it enables the development of alternative values for the many conceivable development paths that support a flexible system implementation.

Figure 3.2 The ERP balanced scorecard.

Implementation measures might include

  1. Financial: Total cost of ownership, which would enable identification of modules where overcustomization took place
  2. Project: Processing time along the critical path, remaining time to the next milestone, and time delays that would affect financial perspective
  3. Internal processes: Processing time before and after ERP implementation, and coverage of individual requirements for a process
  4. Customer: Linkage of customers to particular business processes automated, and resource allocation per customer
  5. Innovation and learning: Number of alternative process paths to support a flexible system implementation, number of parameters representing unused customizing potential, and number of documents describing customizing decisions

As in all well-designed balanced scorecards, this one demonstrates a very high degree of linkage in terms of cause-and-effect relationships. For example, “customer satisfaction” within the customer perspective might affect “total cost of ownership” in the financial perspective, “total project time” in the project perspective, “fit with ERP solution” in the internal process perspective, and “user suggestions” in the innovation and learning perspective.

Rosemann and Wiese do not require the project perspective in the balanced scorecard for evaluating the continuous operation of the ERP installation. Here, the implementation follows a straightforward balanced scorecard approach. Measures include

  1. Financial: Compliance with budget for hardware, software, and consulting
  2. Customer:
    1. Coverage of business processes: Percentage of covered process types, percentage of covered business transactions, and percentage of covered transactions valued good or fair
    2. Reduction of bottlenecks: Percentage of transactions not finished on schedule, and percentage of canceled telephone order processes due to noncompetitive system response time

  3. Internal process:
    1. Reduction of operational problems: number of problems with customer order processing system, percentage of problems with customer order processing system, number of problems with warehouse processes, number of problems with standard reports, and number of problems with reports on demand
    2. Availability of the ERP system: average system availability, average downtime, and maximum downtime
    3. Avoidance of operational bottlenecks: average response time in order processing, average response time in order processing at peak time, average number of online transaction processing (OLTP) transactions, and maximum number of OLTP transactions
    4. Actuality of the system: average time to upgrade the system, release levels behind the actual level
    5. Improvement in system development: punctuality index of system delivery, and quality index
    6. Avoidance of developer bottlenecks: average workload per developer, rate of sick leave per developer, and percentage of modules covered by more than two developers

  4. Innovation and learning:
    1. Qualification: Number of training hours per user, number of training hours per developer, qualification index of developer (i.e., how qualified is this developer to do what he or she is doing)
    2. Independence from consultants: Number of consultant days per module in use >2 years, number of consultant days per module in use <2 years
    3. Reliability of software vendor: number of releases per year, number of functional additions, number of new customers

It should be noted that these metrics can be used outside of the balanced scorecard approach as well.

Financial Metrics

Cost–benefit analysis and return on investment (ROI) are typically utilized during the project proposal stage to win management approval. However, these and other financial metrics provide a wonderful gauge for performance.

Cost–benefit analysis is quite easy to understand. The process compares the costs of the system with the benefits of having that system. We all do this on a daily basis. For example, if we go out to buy a new $1000 personal computer, we weigh the cost of expending that $1000 against the benefits of owning the personal computer. For example, these benefits might be

  1. No longer have to rent a computer. Cost savings $75 per month.
  2. Possible to earn extra money by typing term papers for students. Potential earnings $300 per month.

We can summarize this as shown in Table 3.11.

Table 3.11 Cost–Benefit Analysis

COSTS/ONE TIME BENEFITS/YEAR
$1000 1. Rental computer savings: $75 × 12 = $900
2. Typing income: $300 × 12 = $3600
$1000/one time $4500/year
Potential savings/earnings $3500/first year; $4500 subsequent years

One-time capital costs such as computers are usually amortized over a certain period of time. For example, a computer costing $1000 can be amortized over 5 years, which means that instead of comparing a one-time cost of $1000 with the benefits of purchasing the PC, we can compare a monthly cost of about $16.67 ($1000/60 months) instead. Not all cost–benefit analyses are so clear-cut, however. In our previous example, the benefits were both financially based. Not all benefits are so easily quantifiable. We call benefits that cannot be quantified “intangible benefits.” Examples are

  1. Reduced turnaround time.
  2. Improved customer satisfaction.
  3. Compliance with mandates.
  4. Enhanced interagency communication.

Aside from having to deal with both tangible and intangible benefits, most cost–benefit analyses also need to deal with several alternatives. For example, let’s say that a bank uses a loan processing system that is old and often has problems. There might be several alternative solutions:

  1. Rewrite the system from scratch
  2. Modify the existing system
  3. Outsource the system

In each case, a spreadsheet should be created that details one-time as well as continuing costs. These should then be compared with the benefits of each alternative, both tangible as well as intangible.

An associated formula is the benefit–cost ratio (BCR), computed simply as benefits divided by costs. All projects have associated costs, and all projects will also have associated benefits. At the outset of a project, costs will far exceed benefits. At some point, however, the benefits will start outweighing the costs. This point is called the break-even point, and the analysis done to figure out when it will occur is called break-even analysis. In Table 3.12, we see that the break-even point comes during the first year.

Calculating the break-even point in a project with multiple alternatives enables the project manager to select the optimum solution. The project manager will generally select the alternative with the earliest break-even point.

Most organizations want to select projects that have a positive ROI. The ROI is the additional amount earned after costs are earned back. In our aforementioned “buy versus not buy” PC decision, we can see that the ROI is quite positive during the first, and especially during subsequent years of ownership.

ROI is probably the most favored and critical of all finance metrics from a management stand-point. Table 3.13 provides a list of questions that ROI can help answer.

Table 3.12 Break-Even Analysis

COSTS/ONE TIME BENEFITS/YEAR
$1000 1. Rental computer savings: $75 × 12 = $900
2. Typing income: $300 × 12 = $3600
$1000/one time $4500/year
Potential savings/earnings $3500/first year; $4500 subsequent years
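A small sketch can locate the break-even point by accumulating the monthly benefits from Table 3.12 against the one-time cost:

# A minimal break-even sketch using the figures from Table 3.12: a $1000
# one-time cost against $75/month rental savings plus $300/month typing income.
one_time_cost = 1000.0
monthly_benefit = 75.0 + 300.0   # $375 per month

cumulative = 0.0
month = 0
while cumulative < one_time_cost:
    month += 1
    cumulative += monthly_benefit

print(f"Break-even in month {month} (cumulative benefit ${cumulative:,.0f})")
# Prints month 3, consistent with a break-even point during the first year.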

Table 3.13 Questions ROI Can Answer

Required investment: How much investment–including capital expense, planning and deployment, application development, and ongoing management and support–will the project require?
IT operating efficiency: How will the project improve IT, such as simplifying management, reducing support costs, boosting security, or increasing IT productivity?
Financial benefits: What are the expected financial benefits of the project, measured according to established financial metrics, including ROI,… savings, and payback period?
Risk: What are the potential risks associated with the project? How likely are risks to impact the implementation schedule, proposed spending, or derived target benefits?
Strategic advantage: What are the project’s specific business benefits, such as operational savings, increased availability, increased revenue, or achievement of specific goals?
Competitive impact: How does the proposed project compare with competitors’ spending plans?
Accountability: How will we know when the project is a success? How will that success be measured (metrics and time frames)?

The IT department and the finance department need to be joint owners of the ROI process.

The basic formula for ROI is

ROI = (Benefit − Cost)/Cost

The results of this calculation can be used to either measure costs or measure benefits, each having its own advantages and disadvantages, as shown in Table 3.14.

ROI calculations require the availability of large amounts of accurate data, which is sometimes unavailable to the IT manager. Many variables need to be considered and decisions made regarding which factors to calculate and which to ignore. Before starting an ROI calculation, identify the following factors:

  1. Know what you are measuring: Successful ROI calculations isolate the project’s true impact from other factors, including the work environment and the level of management support.
  2. Do not saturate: Instead of analyzing every factor involved, pick a few. Start with the most obvious factors that can be identified immediately.
  3. Convert to money: Converting data into hard monetary values is essential in any successful ROI study. Translating intangible benefits into dollars is challenging and might require some assistance from the accounting or finance departments. The goal is to demonstrate the impact on the bottom line.
  4. Compare apples with apples: Measure the same factors before and after the project.

Table 3.14 Measuring Costs or Measuring Benefits

MEASUREMENT QUESTION: Can we afford this and will it pay for itself?
Measuring costs: Financial metrics; defined by policy and accepted accounting principles; reporting and control oriented; standards-based or consistent; not linked to business process; ignores important cost factors; short time frame; data routinely collected/reported
Measuring benefits: Savings as measured in accounting categories; narrow in focus and impact; increased revenues, reduced total costs, acceptable payback period

MEASUREMENT QUESTION: How much ‘bang for the buck’ will we get out of this project?
Measuring costs: Financial and outcome/quality metrics; operations and management oriented; defined by program and business process; may or may not be standardized; often requires new data collection; may include organizational and managerial factors
Measuring benefits: Possible efficiency increases; increased output; enhanced service/product quality; enhanced access and equity; increased customer/client satisfaction; increased organizational capability; spillovers to other programs or processes

MEASUREMENT QUESTION: Is this the most I can get for this much investment?
Measuring costs: Financial and organizational metrics; management and policy oriented; nonstandardized; requires new data collection and simulation or analytical model; can reveal hidden costs
Measuring benefits: Efficiency increases; spillovers; enhanced capabilities; avoidance of wasteful or suboptimal strategies

MEASUREMENT QUESTION: Will the benefits justify the overall investment in this project?
Measuring costs: Financial, organizational, social, individual metrics; individual and management oriented; nonstandard; requires new data collection and expanded methods; reveals hidden costs; potentially long timeframe
Measuring benefits: Enhanced capabilities and opportunities; avoiding unintended consequences; enhanced equity; improved quality of life; enhanced political support

There are a variety of ROI techniques:

  1. Treetop: Treetop metrics investigate the impact on profitability for the entire company. Profitability can take the form of cost reductions arising from the IT department’s potential to reduce workforce size for any given process.
  2. Pure cost: There are several varieties of pure cost ROI techniques. Total cost of ownership (TCO) details the hidden support and maintenance costs over time, providing a more complete picture of the total cost. The normalized cost of work produced (NOW) index measures the cost of conducting a work task versus the cost to others doing similar work.
  3. Holistic IT: This is the same as the IT scorecard, where the IT department aligns itself with the traditional balanced scorecard perspectives of financial, customer, internal operations, and employee learning and innovation.
  4. Financial: Aside from ROI, economic value added (EVA) tries to optimize a company’s shareholder wealth.

There are also a variety of ways of actually calculating ROI. Typically, the following are measured

  1. Productivity: Output per unit of input
  2. Processes: Systems, workflow
  3. Human resources: Costs and benefits for a specific initiative
  4. Employee factors: Retention, morale, commitment, and skills

The ROI calculation is not complete until the results are converted to dollars. This includes looking at combinations of hard and soft data. Hard data include such traditional measures as output, time, quality, and costs. In general, hard data are readily available and relatively easy to calculate. Soft data are hard to calculate and include morale, turnover rate, absenteeism, loyalty, conflicts avoided, new skills learned, new ideas, successful completion of projects, and so on, as shown in Table 3.15.

After the hard and/or soft data have been determined, they need to be converted to monetary values:

  • Step 1: Focus on a single unit.
  • Step 2: Determine a value for each unit.
  • Step 3: Calculate the change in performance. Determine the performance change after factoring out other potential influences on the training results.
  • Step 4: Obtain an annual amount. The industry standard for an annual performance change is equal to the total change in performance data during 1 year.
  • Step 5: Determine the annual value. The annual value of improvement equals the annual performance change multiplied by the unit value. Compare this annual value with the cost of the program using the formula ROI = (net annual value of improvement − program costs)/program costs, as in the sketch that follows.
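Here is a compact, hypothetical walk-through of the five steps; the unit, its dollar value, the performance change, and the program cost are all assumed numbers for illustration:

# Illustrative walk-through of the five conversion steps; every number
# here is an assumption for the sake of the example.
unit = "rework incident avoided"               # Step 1: a single unit
value_per_unit = 150.0                         # Step 2: $ value of one unit
monthly_change = 20                            # Step 3: performance change,
                                               #   other influences factored out
annual_change = monthly_change * 12            # Step 4: annualize
annual_value = annual_change * value_per_unit  # Step 5: annual value

program_cost = 24_000.0
roi = (annual_value - program_cost) / program_cost
print(f"Annual value of improvement: ${annual_value:,.0f}")  # $36,000
print(f"ROI: {roi:.0%}")                                     # 50%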

What follows is an example of an ROI analysis for a system implementation. Spreadsheets were used to calculate ROI at various stages of the project: planning, development, and implementation.

Initial Benefits Worksheet

Calculation: hours/person average × cost/hour × no. of people = total $ saved

  1. Reduced time to learn system/job (worker hours)
  2. Reduced supervision (supervision hours)
  3. Reduced help from coworkers (worker hours)
  4. Reduced calls to help line
  5. Reduced down time (waiting for help, consulting manuals, etc.)
  6. Fewer or no calls from help line to supervisor about overuse of help service

Continuing Benefits Worksheet

Calculation: hours/person average × cost/hour × no. of people = total $ saved

  1. Reduced time to perform operation (worker time)
  2. Reduced overtime
  3. Reduced supervision (supervisor hours)

  4. Reduced help from coworkers (worker hours)
  5. Reduced calls to help line
  6. Reduced down time (waiting for help, consulting manuals, etc.)
  7. Fewer or no calls from help line to supervisor about overuse of help service
  8. Fewer mistakes (e.g., rejected transactions)
  9. Fewer employees needed
  10. Total savings in 1 year
  11. Expected life of system in years

Table 3.15 Hard Data versus Soft Data

HARD DATA
Output: Units produced; items assembled or sold; forms processed; tasks completed
Quality: Scrap; waste; rework; product defects or rejects
Time: Equipment downtime; employee overtime; time to complete projects; training time
Cost: Overhead; variable costs; accident costs; sales expenses

SOFT DATA
Work habits: Employee absenteeism; tardiness; visits to nurse; safety-rule violations
Work climate: Employee grievances; employee turnover; discrimination charges; job satisfaction
Attitudes: Employee loyalty; employee self-confidence; employee’s perception of job responsibility; perceived changes in performance
New skills: Decisions made; problems solved; conflicts avoided; frequency of use of new skills
Development and advancement: Number of promotions or pay increases; number of training programs attended; requests for transfer; performance appraisal ratings
Initiative: Implementation of new ideas; successful completion of projects; number of employee suggestions

Quality Benefits Worksheet

Calculation: unit cost × no. of units = total $ saved

  1. Fewer mistakes (e.g., rejected transactions)
  2. Fewer rejects-ancillary costs
  3. Total savings in 1 year
  4. Expected life of system in years

Other Benefits Worksheet

Calculation: $ saved per year

  1. Reduced employee turnover
  2. Reduced grievances
  3. Reduced absenteeism/tardiness (morale improvements)

ROI Spreadsheet Calculation

Calculation: ROI = (Benefits − Costs)/Costs

  1. Initial time saved total over life of system
  2. Continuing worker hours saved total over life of system
  3. Quality improvements with fixed costs total over life of system
  4. Other possible benefits total over life of system
  5. Total benefits
  6. Total system costs (development, maintenance, and operation)

These ROI calculations are based on valuations of improved work product, what is referred to as a cost-effectiveness strategy.
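The spreadsheet itself reduces to a few totals. The following sketch mirrors the worksheet structure above; the dollar figures are illustrative assumptions rather than data from an actual project:

# A minimal sketch of the ROI spreadsheet calculation. All dollar figures
# are illustrative assumptions.
benefits_over_system_life = {
    "initial time saved": 40_000.0,
    "continuing worker hours saved": 120_000.0,
    "quality improvements (fixed costs)": 25_000.0,
    "other possible benefits": 10_000.0,
}
total_benefits = sum(benefits_over_system_life.values())

# Total system costs: development, maintenance, and operation.
total_costs = 110_000.0

roi = (total_benefits - total_costs) / total_costs
print(f"Total benefits: ${total_benefits:,.0f}")   # $195,000
print(f"Total costs:    ${total_costs:,.0f}")      # $110,000
print(f"ROI: {roi:.0%}")                           # 77% on these assumptions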

ROI evaluates an investment’s potential by comparing the magnitude and timing of expected gains with the investment costs. For example, suppose a new initiative costs $500,000 and will deliver an additional $700,000 in increased profits. Simple ROI = (gains − investment costs)/investment costs: ($700,000 − $500,000)/$500,000 = 40%. This calculation works well in situations where benefits and costs are easily known, and it is usually expressed as an annual percentage return.

However, technology investments frequently have financial consequences that extend over several years. In that case, the metric has meaning only when the time period is clearly stated. Net present value (NPV) recognizes the time value of money by discounting costs and benefits over a period of time, and focuses on the impact on cash flow or savings rather than on net profit.

A meaningful NPV requires sound estimates of the costs and benefits and use of the appropriate discount rate. An investment is acceptable if the NPV is positive. For example, suppose an investment costing $1 million has an NPV of savings of $1.5 million. Then ROI = (NPV of savings − initial investment cost)/initial investment cost: ($1,500,000 − $1,000,000)/$1,000,000 = 50%. This may also be expressed as ROI = $1.5M (NPV of savings)/$1M (initial investment) × 100 = 150%.

The internal rate of return (IRR) is the discount rate that sets the net present value of the program or project to zero. While the internal rate of return does not generally provide an acceptable decision criterion, it does provide useful information, particularly when budgets are constrained or there is uncertainty about the appropriate discount rate.
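To make the NPV and IRR mechanics concrete, the sketch below discounts an assumed cash-flow stream (a $1 million investment followed by five years of $400,000 savings) and then bisects for the rate that drives NPV to zero. The cash flows and the 10% discount rate are illustrative assumptions:

# A minimal NPV/IRR sketch. The cash flows and 10% discount rate are
# illustrative assumptions: -$1M today, then $400K of savings per year.
def npv(rate, cash_flows):
    # cash_flows[t] occurs t years from now; cash_flows[0] is today.
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def irr(cash_flows, lo=0.0, hi=1.0, tol=1e-7):
    # Bisection: NPV falls as the rate rises for this flow pattern,
    # so search for the rate where it crosses zero.
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid, cash_flows) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

flows = [-1_000_000.0] + [400_000.0] * 5
investment = 1_000_000.0

npv_of_savings = npv(0.10, flows) + investment   # discounted savings alone
print(f"NPV of savings at 10%: ${npv_of_savings:,.0f}")          # ~$1.52M
print(f"ROI: {(npv_of_savings - investment) / investment:.0%}")  # ~52%
print(f"IRR: {irr(flows):.1%}")   # ~28.6%, the rate at which NPV = 0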

The U.S. CIO Council developed (see Appendix VIII) the value measuring methodology (VMM) to define, capture, and measure value associated with electronic services unaccounted for in traditional ROI calculations, to fully account for costs, and to identify and consider risk.

Most companies track the cost of a project using only two dimensions: planned costs versus actual costs. Using this particular metric, if managers spend all of the money that has been allocated to a particular project, they are right on target. If they spend less money, they have a cost underrun—a greater expenditure results in a cost overrun. However, this method ignores a key third dimension—the value of work performed.

Earned-value management—or EVM—enables you to measure the true cost of performance of long-term capital projects. Even though EVM has been in use for years, government contractors are the major practitioners of this method.

The key tracking EVM metric is the cost performance index or CPI, which has proved remarkably stable over the course of most projects. The CPI shows the relationship between the value of work accomplished (“earned value”) and the actual costs, as shown in the following example.

If the project is budgeted to have a final value of $1 billion, but the CPI is running at 0.8 when the project is, say, one-fifth complete, the actual cost at completion can be expected to be around $1.25 billion ($1 billion/0.8). You are earning only 80 cents of value for every dollar you are spending. Management can take advantage of this early warning by reducing costs while there is still time.
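The same early-warning arithmetic can be sketched in a few lines; the actual-cost figure is an assumption chosen to reproduce the CPI of 0.8 in the example above:

# A minimal EVM sketch reproducing the example above: a $1B budget,
# one-fifth complete, with a CPI of 0.8. The actual cost to date is an
# assumed figure consistent with that CPI.
budget_at_completion = 1_000_000_000.0

earned_value = 0.20 * budget_at_completion   # budgeted value of work done
actual_cost = 250_000_000.0                  # spent to date (assumption)

cpi = earned_value / actual_cost             # cost performance index
estimate_at_completion = budget_at_completion / cpi

print(f"CPI: {cpi:.2f}")                                         # 0.80
print(f"Estimate at completion: ${estimate_at_completion:,.0f}")
# ~$1.25B: only 80 cents of value earned for each dollar spent.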

Several software tools, including Microsoft Project, have the capability of working with EVM.

Examples of Performance Measures

Table 3.16 provides examples of performance measures that are typical for many IT projects. While the category and metrics columns are fairly representative of those used in IT projects in general, the measure of success will vary greatly and should be established for each individual project, as appropriate.

Table 3.16 IT Performance Measures

CATEGORY FOCUS PURPOSE MEASURE OF SUCCESS
Schedule performance Tasks completed vs. tasks planned at a point in time. Assess project progress. Apply project resources. 100% completion of tasks on critical path; 90% all others
Major milestones met vs. planned. Measure time efficiency. 90% of major milestones met.
Revisions to approved plan. Understand and control project “churn.” All revisions reviewed and approved.
Changes to customer requirements. Understand and manage scope and schedule. All changes managed through approved change process.
Project completion date. Award/penalize (depending on contract type). Project completed on schedule (per approved plan).
Budget performance Revisions to cost estimates. Assess and manage project cost. 100% of revisions are reviewed and approved.
Dollars spent vs. dollars budgeted. Measure cost-efficiency. Project completed within approved cost parameters.
Return on investment (ROI). Track and assess performance of project investment portfolio. ROI (positive cash flow) begins according to plan.
Acquisition cost control. Assess and manage acquisition dollars. All applicable acquisition guidelines followed.
Product quality Defects identified through quality activities. Track progress in, and effectiveness of, defect removal. 90% of expected defects identified (e.g., via peer reviews, inspections).
Test case failures vs. number of cases planned. Assess product functionality and absence of defects. 100% of planned test cases execute successfully.
Number of service calls. Track customer problems. 75% reduction after 3 months of operation.
Customer satisfaction index. Identify trends. 95% positive rating.
Customer satisfaction trend. Improve customer satisfaction. 5% improvement each quarter.
Number of repeat customers. Determine if customers are using the product multiple times (could indicate satisfaction with the product). “X” percentage of customers use the product “X” times during a specified time period.
Number of problems reported by customers. Assess quality of project deliverables. 100% of reported problems addressed within 72 h.
Compliance Compliance with Enterprise Architecture model requirements. Track progress toward department-wide architecture model. Zero deviations without proper approvals.
Compliance with interoperability requirements. Track progress toward system interoperability. Product works effectively within system portfolio.
Compliance with standards. Alignment, interoperability, consistency. No significant negative findings during architect assessments.
For website projects, compliance with style guide. To ensure standardization of website. All websites have the same “look and feel.”
Compliance with Section 508. To meet regulatory requirements. Persons with disabilities may access and utilize the functionality of the system.
Redundancy Elimination of duplicate or overlapping systems. Ensure return on investment. Retirement of 100% of identified systems.
Decreased number of duplicate data elements. Reduce input redundancy and increase data integrity. Data elements are entered once and stored in one database.
Consolidate help desk functions. Reduce dollars spent on help desk support. Approved consolidation plan by June 30, 2002.
Cost avoidance System is easily upgraded. Take advantage of, for example, COTS upgrades. Subsequent releases do not require major “glue code” project to upgrade.
Avoid costs of maintaining duplicate systems. Reduce IT costs. 100% of duplicate systems have been identified and eliminated.
System is maintainable. Reduce maintenance costs. New version (of COTS) does not require “glue code.”
Customer satisfaction System availability (up time). Measure system availability. 100% of requirement is met. (e.g., 99% M–F, 8 a.m. to 6 p.m., and 90% S & S, 8 a.m. to 5 p.m.).
System functionality (meets customer’s/user’s needs). Measure how well customer needs are being met. Positive trend in customer satisfaction survey(s).
Absence of defects (that impact customer). Number of defects removed during project life cycle. 90% of defects expected were removed.
Ease of learning and use. Measure time to becoming productive. Positive trend in training survey(s).
Time it takes to answer calls for help. Manage/reduce response times. 95% of severity one calls answered within 3 h.
Rating of training course. Assess effectiveness and quality of training. 90% of responses of “good” or better.
Business goals/mission Functionality tracks reportable inventory. Validate system supports program mission All reportable inventory is tracked in system.
Turnaround time in responding to congressional queries. Improve customer satisfaction and national interests. Improve turnaround time from 2 days to 4 h.
Maintenance costs. Track reduction of costs to maintain system. Reduce maintenance costs by two-thirds over a 3-year period.
Standard desktop platform. Reduce costs associated with upgrading user’s systems. Reduce upgrade costs by 40%.
Time taken to complete tasks. To evaluate estimates. Completions are within 90% of estimates.
Number of deliverables produced. Assess capability to deliver products. Improve product delivery 10% in each of the next 3 years.

In Conclusion

The following set of questions is intended to assist in stimulating the thought process to determine performance measures that are appropriate for a given project or organization.

Project/Process Measurement Questions

  • What options are available if the schedule is accelerated by 4 months to meet a tight market window?
  • How many people must be added to get 2 months of schedule compression and how much will it cost?
  • How many defects are still in the product and when will it be good enough so that I can ship a reliable product and have satisfied customers?
  • How much impact does requirements growth have on schedule, cost, and reliability?
  • Is the current forecast consistent with our company’s historical performance?

Organizational Measurement Questions

  • What is the current typical time cycle and cost of our organization’s development process?
  • What is the quality of the products our organization produces?
  • Is our organization’s development process getting more or less effective and efficient?
  • How does our organization stack up against the competition?
  • How does our organization’s investment in process improvement compare with the benefits we have achieved?
  • What impact are environmental factors such as requirements volatility and staff turnover having on our process productivity?
  • What level of process productivity should we assume for our next development project?

References

Clinton, B., Webber, S. A., and Hassell, J. M. (2002). Implementing the balanced scorecard using the analytic hierarchy process. Management Accounting Quarterly, 3(3), 1–11.

Rosemann, M. and Wiese, J. (1999). Measuring the performance of ERP software—A balanced scorecard approach. Proceedings of the 10th Australasian Conference on Information Systems, Wellington: Victoria University of Wellington.
