We measure productivity and quality to quantify the project’s progress as well as the attributes of the product. A metric enables us to understand and manage the process and to measure the impact of changes to the process, such as new methods or training. The use of metrics also enables us to know when we have met our goals for attributes such as usability, performance, and test coverage.
In measuring software systems, we can create metrics based on the different parts of a system: requirements, specifications, code, documentation, tests, and training. For each of these components, we can measure attributes that include usability, maintainability, extensibility, size, defect level, performance, and completeness. While the majority of organizations will use metrics found in books such as this one, it is possible to generate metrics specific to a particular task. Whatever their source, metrics should be collectable, reproducible, pertinent, and system independent.
Actually creating these sorts of metrics often runs into obstacles. Many employees complain that it is simply not possible to measure, that is, to develop metrics for, what they do. That is not true. Areas previously thought to be “unmeasurable” have been shown to be measurable when someone is motivated and creative enough to pursue an innovative approach.
Many employees stress the unfairness of being measured because they feel that they do not have any control over the outcome or the impact. Although it is rare that any one specific person has total control over the outcome, each person’s impact on the results should be clearly demonstrable. These same employees also suggest that measurement will invite unfair comparisons. However, comparison is going to happen whether they like it or not. By taking the initiative, the employee can help the team or organization by proactively comparing performance, determining how well they are doing, and seeking ways to improve.
Employees also fear that the results will be used against them. They need to be convinced that demonstrating openness and accountability, even when the news is not so good, inspires trust. If they are open about where they need to improve, most people will give them the benefit of the doubt as long as they demonstrate that they are sincerely seeking to improve.
Two of the biggest complaints are that the data to be used for measurement are not available and/or the team simply does not have the resources to collect them. In this age of information technology, it is hard to believe that performance data are not available. If a project is important enough to fund, staff should be able to find some way to collect data on its effectiveness. This can be as simple as a desktop spreadsheet populated from a hard-copy log, or as elaborate as trained observer ratings, with numerous variations in between. What is important is that critical indicators of success are identified and measured consistently and conscientiously. Dedicating a small percentage of staff time to devising thoughtful measures, collecting the data on those measures, and then using the data to manage for results will generally save far more time than would otherwise be spent correcting problems down the road.
Now that the team is on board with the measurement process, they need to spend some time preparing meaningful performance measures. Table 3.1 provides 10 criteria for effective measures.
Results oriented | Focused primarily on desired outcomes, with less emphasis on outputs |
Important | Concentrate on significant matters |
Reliable | Accurate, consistent information over time |
Useful | Information is valuable to both policy and program decision-makers and can be used to provide continuous feedback on performance to agency staff and managers |
Quantitative | Expressed in terms of numbers or percentages |
Realistic | Measures are set that can be calculated |
Cost-effective | The measures themselves are sufficiently valuable to justify the cost of collecting the data |
Easy to interpret | Do not require an advanced degree in statistics to use and understand |
Comparable | Can be used for benchmarking against other organizations, internally and externally |
Credible | Users have confidence in the validity of the data |
A wide variety of metrics are available, and you will have to determine which are right for your organization. However, before you even select the metrics you will be using, you will need to gear your company up for the process of creating and/or selecting metrics. A typical method for a benchmarking initiative consists of
One of the reasons why there have been more than a handful of performance measurement implementation failures is that the metrics were poorly defined. Therefore, one of the most critical tasks confronting the team is the selection of metrics. However, you cannot just select some from column A and some from column B. Different metrics work differently for different companies, and even for different divisions of the same company.
One method that can be used to select among the plethora of available metrics is the analytic hierarchy process (AHP). AHP is a framework of logic and problem solving that organizes data into a hierarchy of forces that influence decision results. It is a simple, adaptable methodology used by government as well as many commercial organizations. Its chief selling points are that it is participative, promotes consensus, and does not require specialized skills to use.
AHP is based on a series of paired comparisons in which users provide judgments about the relative dominance of the two items. Dominance can be expressed in terms of preference, quality, importance, or any other criterion. Metric selection usually begins by gathering participants together for a brainstorming session. The number of participants selected should be large enough to ensure that a sufficient number of metrics are initially identified.
COMPARATIVE IMPORTANCE | DEFINITION | EXPLANATION |
1 | Equally important | Two decision elements (e.g., indicators) equally influence the parent decision element. |
3 | Moderately more important | One decision element is moderately more influential than the other. |
5 | Strongly more important | One decision element has stronger influence than the other. |
7 | Very strongly more important | One decision element has significantly more influence over the other. |
9 | Extremely more important | The difference between influences of the two decision elements is extremely significant. |
2, 4, 6, 8 | Intermediate judgment values | Judgment values between equally, moderately, strongly, very strongly, and extremely. |
Reciprocals | If v is the judgment value when i is compared with j, then 1/v is the judgment value when j is compared with i. |
Participants, moderated by a facilitator, brainstorm a set of possible metrics, and the most important metrics are selected. Using a written survey, each participant is then asked to compare all possible pairs of metrics in each of the four areas as to their relative importance, using a scale such as the one shown in Table 3.2.
From the survey responses, the facilitator computes the decision model for each participant that reflects the relative importance of each metric. Each participant is then supplied with the decision models of all other participants and is asked to rethink their original metric choices. The group meets again to determine the final set of metrics for the scorecard. The beauty of this process is that it makes readily apparent any inconsistencies in making paired comparisons and prevents metrics from being discarded prematurely.
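The facilitator’s computation of each participant’s decision model can be sketched in code. The following is a minimal illustration, assuming three hypothetical candidate metrics and using the row geometric-mean approximation of AHP priority weights (the judgment values are invented for illustration):

```python
import math

# Hypothetical judgments for three candidate metrics, using the 1-9
# scale of Table 3.2. Only the upper triangle is elicited; the lower
# triangle holds reciprocals (if i vs. j is v, then j vs. i is 1/v).
matrix = [
    [1.0,  3.0,  5.0],
    [1/3,  1.0,  3.0],
    [1/5,  1/3,  1.0],
]

def ahp_weights(m):
    """Approximate AHP priority weights via the row geometric mean,
    normalized so the weights sum to 1."""
    n = len(m)
    geo_means = [math.prod(row) ** (1.0 / n) for row in m]
    total = sum(geo_means)
    return [g / total for g in geo_means]

weights = ahp_weights(matrix)
# The first metric dominates: roughly [0.64, 0.26, 0.10]
```

The geometric-mean method is a common approximation of the principal-eigenvector calculation used by commercial AHP tools; for consistent judgment matrices the two coincide.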
Clinton et al. (2002) provide an example of using AHP to determine how to weight the relative importance of the categories and metrics used in a balanced scorecard framework. A group of participants meet to compare the relative importance of the four balanced scorecard categories in the first level of the AHP hierarchy. They may want to consider the current product life-cycle stage when doing their comparisons. For example, while in the product-introduction stage, formalizing business processes may be of considerable relative importance. When dealing with a mature or declining product, on the other hand, the desire to minimize variable cost per unit may dictate that the financial category be of greater importance than the other three scorecard categories. They provide the following illustrative sample survey question that might deal with this issue:
Survey question: In measuring success in pursuing a differentiation strategy, indicate for each pair which of the two balanced scorecard categories is more important. If you believe that the categories being compared are equally important, mark a “1.” Otherwise, mark the box with the number from the scale above that corresponds to the intensity of importance, on the side of the category you consider more important.
Consider the following examples:
In this example, the customer category is judged to be strongly more important than the financial category.
In this example, the customer category is judged to be equally important to the internal business processes category.
The values can then be entered into AHP software, such as Expert Choice (http://www.expertchoice.com/software/), which will compute local and global weights, with each set of weights summing to 1. Local weights are the relative importance of each metric within a category; global weights are the relative importance of each metric to the overall goal. The software will show the relative importance of all metrics and scorecard categories. For example, in our prior example the results might have been
CATEGORY | RELATIVE WEIGHT |
Innovation and learning | .32 |
Internal business processes | .25 |
Customer | .21 |
Financial | .22 |
Total | 1.00 |
The results show that the participants believe that the most important category is innovation and learning. If, within the innovation and learning category, it is determined that the market share metric is the most important, with a local weight of .40, then we can calculate the global outcome by multiplying the local decision weights from Level 1 (categories) by the local decision weights from Level 2 (metrics).
Using determined metrics for each of the four perspectives of the balanced scorecard, an example of the final calculation is shown in Table 3.3.
The results indicate that the least important metric is revenue from the customer category and the most important metric is market share from the innovation and learning category.
BALANCED SCORECARD | ||
STRATEGIC OBJECTIVE: SUCCESS IN PURSUING A DIFFERENTIATION STRATEGY | ||
CATEGORIES AND METRICS | LEVEL ONE × LEVEL TWO | GLOBAL OUTCOME |
INNOVATION AND LEARNING | ||
Market share | (.40×.32) | .128 |
No. of new products | (.35×.32) | .112 |
Revenue from new products | (.25×.32) | .080 |
Total: Innovation and learning | .320 | |
INTERNAL BUSINESS PROCESSES | ||
No. of product units produced | (1/3×.25) | .08333
Minimizing variable cost per unit | (1/3×.25) | .08333
No. on-time deliveries | (1/3×.25) | .08333
Total internal business processes | .250 | |
CUSTOMER | ||
Revenue | (.20×.21) | .042 |
Market share | (.38×.21) | .080
QFD (quality function deployment) score | (.42×.21) | .088 |
Total customer | .210 | |
FINANCIAL | ||
Cash value-added | (.28×.22) | .062 |
Residual income | (.32×.22) | .070 |
Cash flow ROI | (.40×.22) | .088 |
Total financial | .220 | |
Sum of the global weights | 1.00 |
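The arithmetic behind Table 3.3 is a straight multiplication of local and category weights. A short sketch, using the innovation and learning and customer categories from the table:

```python
# Category (level-1) weights and local (level-2) metric weights taken
# from the worked example; global outcome = local weight x category weight.
categories = {
    "innovation_and_learning": (0.32, {
        "market_share": 0.40,
        "no_of_new_products": 0.35,
        "revenue_from_new_products": 0.25,
    }),
    "customer": (0.21, {
        "revenue": 0.20,
        "market_share": 0.38,
        "qfd_score": 0.42,
    }),
}

global_outcomes = {}
for category, (cat_weight, metrics) in categories.items():
    for metric, local_weight in metrics.items():
        global_outcomes[(category, metric)] = local_weight * cat_weight

# Market share in innovation and learning: 0.40 x 0.32 = 0.128, and
# each category's global outcomes sum back to its category weight.
```

Because the local weights within each category sum to 1, the global outcomes of a category always sum to that category’s Level 1 weight, and the grand total across categories is 1.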
The four balanced scorecard perspectives might require some modification to be effective as an information technology (IT) scorecard. The reason for this is that the IT department is typically an internal rather than external service supplier and projects are commonly carried out for the benefit of both the end users and the organization as a whole—rather than individual customers within a large market.
Four alternative perspectives might include
It is then possible to drill down to provide IT-specific measures for each of these four perspectives. Most of the metrics that appear in Table 3.4 have been derived from mainstream literature.
It is important to note that the three key balanced scorecard principles of cause-and-effect relationships, sufficient performance drivers, and linkage to financial measures are built into this IT scorecard. Cause-and-effect relationships can involve one or more of the four perspectives. For example, better staff skills (future readiness perspective) will reduce the frequency of bugs in an application (internal operations perspective).
In a typical company, senior management might question the benefits of large investments in IT and want IT to be better aligned with corporate strategy. Some of the concerns of the different stakeholder groups might be
PERSPECTIVE | METRIC |
User orientation | Customer satisfaction |
BUSINESS VALUE | |
Cost control | Percentage over/under IT budget |
Allocation to different budget items | |
IT budget as a percentage of revenue | |
IT expenses per employee | |
Sales to third parties | Revenue from IT-related products/services |
Business value of an IT project | Traditional measures (e.g., ROI, payback) |
Business evaluation based on information economics: value linking, value acceleration, value restructuring, technological innovation | |
Strategic match with business contribution to: product/service quality, customer responsiveness, management information, process flexibility | |
Risks | Unsuccessful strategy risk, IT strategy risk, definitional uncertainty (e.g., low degree of project specification), technological risk (e.g., bleeding edge hardware or software), development risk (e.g., inability to put things together), operational risk (e.g., resistance to change), IT service delivery risk (e.g., human/computer interface difficulties) |
Business value of the IT department/functional area | Percentage of resources devoted to strategic projects |
Percentage of time spent by IT manager in meetings with corporate executives | |
Perceived relationship between IT management and top management | |
INTERNAL PROCESSES | |
Planning | Percentage of resources devoted to planning and review of IT activities |
Development | Percentage of resources devoted to applications development |
Time required to develop a standard-sized new application | |
Percentage of applications programming with reused code | |
Time spent to repair bugs and fine-tune new applications | |
Operations | Number of end-user queries handled |
Average time required to address an end-user problem | |
FUTURE READINESS | |
IT specialist capabilities | IT training and development budget as a percentage of overall IT budget |
Expertise with specific technologies | |
Expertise with emerging technologies | |
Age distribution of IT staff | |
Satisfaction of IT staff | Turnover/retention of IT employees |
Productivity of IT employees | |
Applications portfolio | Age distribution |
Platform distribution | |
Technical performance of applications portfolio | |
User satisfaction with applications portfolio | |
Research into emerging technologies | IT research budget as percentage of IT budget |
Perceived satisfaction of top management with the reporting on how specific emerging technologies may or may not be applicable to the company |
One of the most important things a chief information officer (CIO) can do is convince senior management that IT is not a service provider, but a strategic partner. As shown in Table 3.5, there are some important differences.
Being a strategic partner enables us to develop an IT scorecard that will encompass the following four quadrants:
The relationship between IT and business can be more explicitly expressed through a cascade of balanced scorecards, as shown in Figure 3.1.
SERVICE PROVIDER | STRATEGIC PARTNER |
IT is for efficiency | IT for business growth |
Budgets are driven by external benchmarks | Budgets are driven by business strategy |
IT is separable from the business | IT is inseparable from the business |
IT is seen as an expense to control | IT is seen as an investment to manage |
IT managers are technical experts | IT managers are business problem solvers |
Cascading scorecards can be used within IT as well. Each set of scorecards is actually composed of one or more unit scorecards. For example, the IT operations scorecard might also include a scorecard for the IT service desk. The resulting IT scorecard consists of objectives, measures, and benchmarks, as shown in Tables 3.6 through 3.9.
OBJECTIVE | MEASURES | BENCHMARKS |
Business/IT alignment | Operational plan/budget approval | Not applicable |
Value delivery | Measured in business unit performance | Not applicable |
Cost management | Attainment of expense and recovery targets | Industry expenditure comparisons |
Attainment of unit cost targets | Compass operational ‘top performance’ levels | |
Risk management | Results of internal audits | Defined sound business practices |
Execution of security initiative | Not applicable | |
Delivery of disaster recovery assessment | Not applicable | |
Intercompany synergy achievement | Single system solutions | Merger and acquisition guidelines |
Target state architecture approval | Not applicable | |
Attainment of targeted integrated cost reductions | Not applicable | |
IT organization integration | Not applicable |
The measure of each of these unit scorecards is aggregated in the IT scorecard. This, in turn, is fed into and evaluated against the business scorecard.
There are a wide variety of other IT-oriented metrics that an organization can utilize, as shown in Table 3.10. Others can be found in various appendices of this book.
Hopefully by now you understand the importance of developing cascading sets of interlinked scorecards. From a departmental perspective, you will need to review, understand, and adhere to the organizational scorecard at the macro level, while reviewing the departmental- and system-level scorecards at the micro level.
OBJECTIVE | MEASURES | BENCHMARKS |
Customer satisfaction | Business unit survey ratings: • Cost transparency and levels • Service quality and responsiveness • Value of IT advice and support • Contribution to business objectives | Not applicable
Competitive costs | • Attainment of unit cost targets • Blended labor rates | Compass operational ‘top level performing’ levels; market comparisons
Development services performance | Major project success scores: • Recorded goal attainment • Sponsor satisfaction ratings • Project governance rating | Not applicable
Operational services performance | Attainment of targeted service levels | Competitor comparisons |
OBJECTIVE | MEASURES | BENCHMARKS |
Development process performance | Function point measures of: • Productivity • Quality • Delivery rate | To be determined
Operational process performance | Benchmark-based measures of: • Productivity • Responsiveness • Change management effectiveness • Incident occurrence levels | Selected Compass benchmark studies
Process maturity | Assessed level of maturity and compliance in priority processes within: • Planning and organization • Acquisition and implementation • Delivery and support • Monitoring | To be defined
Enterprise architecture management | • Major project architecture approval • Product acquisition compliance to technology standards • “State of the infrastructure” assessment | Sound business practices
Systems are what compose the micro level. For example, enterprise resource planning (ERP) is one of the most sophisticated and complex of all software systems. It is a customizable software package that includes integrated business solutions
OBJECTIVE | MEASURES | BENCHMARKS |
Human resource management | Results against targets: • Staff complement by skill type • Staff turnover • Staff “billable” ratio • Professional development days per staff member | Market comparison; industry standard
Employee satisfaction | Employee satisfaction survey scores in: • Compensation • Work climate • Feedback • Personal growth • Vision and purpose | North American technology-dependent companies
Knowledge management | • Delivery of internal process improvements to library • Implementation of “lessons-learned” sharing process | Not applicable
SYSTEM/SERVICE/FUNCTION | POSSIBLE METRIC(S) |
R&D | Innovation capture |
No. of quality improvements | |
Customer satisfaction | |
Process improvement | Cycle time, activity costs |
No. supplier relationships | |
Total cost of ownership | |
Resource planning, account management | Decision speed |
Lowering level of decision authority | |
Groupware | Cycle time reduction |
Paperwork reduction | |
Decision support | Decision reliability |
Timeliness | |
Strategic awareness | |
Lowering level of decision authority | |
Management information systems | Accuracy of data |
Timeliness | |
e-Commerce | Market share |
Price premium for products/services | |
Information-based products and services | Operating margins |
New business revenues | |
Cash flow | |
Knowledge retention |
for core business processes such as production planning and control and warehouse management. Rosemann and Wiese (1999) use a modified balanced scorecard approach to
Along with the four balanced scorecard perspectives of financial, customer, internal processes, and innovation and learning, they have added a fifth for the purposes of ERP installation—the project perspective. The individual project requirements, such as identification of critical path, milestones, and so on, are covered by this fifth perspective that represents all the project-management tasks. Figure 3.2 represents the Rosemann–Wiese approach.
Most ERP implementers concentrate on the financial and business process aspects of ERP implementation. Using the ERP balanced scorecard would enable them to also focus on customer, innovation, and learning perspectives. The latter is particularly important as it enables the development of alternative values for the many conceivable development paths that support a flexible system implementation.
Implementation measures might include
As in all well-designed balanced scorecards, this one demonstrates a very high degree of linkage in terms of cause-and-effect relationships. For example, “customer satisfaction” within the customer perspective might affect “total cost of ownership” in the financial perspective, “total project time” in the project perspective, “fit with ERP solution” in the internal process perspective, and “user suggestions” in the innovation and learning perspective.
Rosemann and Wiese do not require the project perspective in the balanced scorecard for evaluating the continuous operation of the ERP installation. Here, the implementation follows a straightforward balanced scorecard approach. Measures include
It should be noted that these metrics can be used outside of the balanced scorecard approach as well.
Cost–benefit analysis and return on investment (ROI) are typically utilized during the project proposal stage to win management approval. However, these and other financial metrics also provide a useful gauge of ongoing performance.
Cost–benefit analysis is quite easy to understand. The process compares the costs of the system with the benefits of having that system. We all do this on a daily basis. For example, if we go out to buy a new $1000 personal computer, we weigh the cost of expending that $1000 against the benefits of owning the personal computer. For example, these benefits might be
We can summarize this as shown in Table 3.11.
COSTS/ONE TIME | BENEFITS/YEAR |
$1000 | 1. Rental computer savings: $75 × 12 = $900 |
2. Typing income: $300 × 12 = $3600 | |
$1000/one time | $4500/year |
Potential savings/earnings | $3500/first year; $4500 subsequent years |
One-time capital costs such as computers are usually amortized over a certain period of time. For example, a computer costing $1000 can be amortized over 5 years, which means that instead of comparing a one-time cost of $1000 with the benefits of purchasing the PC, we can compare a monthly cost instead. Not all cost–benefit analyses are so clear-cut, however. In our previous example, the benefits were both financially based. Not all benefits are so easily quantifiable. We call benefits that cannot be quantified “intangible benefits.” Examples are
Aside from having to deal with both tangible and intangible benefits, most cost–benefit analyses also need to deal with several alternatives. For example, let’s say that a bank uses a loan processing system that is old and often has problems. There might be several alternative solutions:
In each case, a spreadsheet should be created that details one-time as well as continuing costs. These should then be compared with the benefits of each alternative, both tangible as well as intangible.
An associated formula is the benefit–cost ratio (BCR), computed simply as benefits divided by costs. All projects have associated costs, and all projects will also have associated benefits. At the outset of a project, costs far exceed benefits; at some point, however, the benefits start outweighing the costs. This point is called the break-even point, and the analysis done to figure out when it will occur is called break-even analysis. In Table 3.12, we see that the break-even point comes during the first year.
Calculating the break-even point in a project with multiple alternatives enables the project manager to select the optimum solution. The project manager will generally select the alternative with the shortest break-even point.
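The BCR and break-even computations for the PC example are simple enough to sketch. The figures come from Table 3.11; the yearly granularity of the loop is an assumption for illustration:

```python
# One-time cost and annual benefits from the PC purchase example:
# $900/year rental savings plus $3600/year typing income.
one_time_cost = 1000
annual_benefit = 75 * 12 + 300 * 12   # = 4500

def benefit_cost_ratio(total_benefits, total_costs):
    """BCR = benefits / costs; a ratio above 1 means benefits outweigh costs."""
    return total_benefits / total_costs

def break_even_year(cost, benefit_per_year, horizon=10):
    """Return the first year in which cumulative benefits cover the
    one-time cost, or None if that never happens within the horizon."""
    cumulative = 0
    for year in range(1, horizon + 1):
        cumulative += benefit_per_year
        if cumulative >= cost:
            return year
    return None

# First-year benefits of $4500 already exceed the $1000 cost,
# so break-even comes during the first year.
```

For multiple alternatives, the same functions would be run once per alternative’s spreadsheet, and the alternative with the shortest break-even point selected.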
Most organizations want to select projects that have a positive ROI. The ROI is the additional amount earned after costs have been recovered. In our aforementioned “buy versus not buy” PC decision, we can see that the ROI is quite positive during the first, and especially during subsequent, years of ownership.
ROI is probably the most favored and critical of all finance metrics from a management stand-point. Table 3.13 provides a list of questions that ROI can help answer.
COSTS/ONE TIME | BENEFITS/YEAR |
$1000 | 1. Rental computer savings: $75 × 12 = $900 |
2. Typing income: $300 × 12 = $3600 | |
$1000/one time | $4500/year |
Potential savings/earnings | $3500/first year; $4500 subsequent years |
REQUIRED INVESTMENT | IT OPERATING EFFICIENCY |
How much investment, including capital expense, planning and deployment, application development, and ongoing management and support, will the project require? | How will the project improve IT, such as simplifying management, reducing support costs, boosting security, or increasing IT productivity?
FINANCIAL BENEFITS | RISK |
What are the expected financial benefits of the project, measured according to established financial metrics, including ROI,… savings, and payback period? | What are the potential risks associated with the project? How likely will risks impact the implementation schedule, proposed spending, or derived target benefits? |
STRATEGIC ADVANTAGE | COMPETITIVE IMPACT |
What are the project’s specific business benefits, such as operational savings, increased availability, increased revenue, or achievement of specific goals? | How does the proposed project compare with competitors’ spending plans?
ACCOUNTABILITY | |
How will we know when the project is a success? How will the success be measured (metrics and time frames)? |
The IT department and the finance department need to be joint owners of the ROI process.
The basic formula for ROI is: ROI = (Benefits − Costs)/Costs, usually expressed as a percentage.
The results of this calculation can be used to either measure costs or measure benefits, each having its own advantages and disadvantages, as shown in Table 3.14.
ROI calculations require the availability of large amounts of accurate data, which is sometimes unavailable to the IT manager. Many variables need to be considered and decisions made regarding which factors to calculate and which to ignore. Before starting an ROI calculation, identify the following factors:
MEASUREMENT QUESTION | MEASURING COSTS | MEASURING BENEFITS |
Can we afford this and will it pay for itself? | Financial metrics; defined by policy and accepted accounting principles; reporting and control oriented; standards-based or consistent; not linked to business process; ignores important cost factors; short time frame; data routinely collected/reported | Savings as measured in accounting categories; narrow in focus and impact; increased revenues, reduced total costs, acceptable payback period |
How much ‘bang for the buck’ will we get out of this project? | Financial and outcome/quality metrics; operations and management oriented; defined by program and business process; may or may not be standardized; often requires new data collection; may include organizational and managerial factors | Possible efficiency increases; increased output; enhanced service/product quality; enhanced access and equity; increased customer/client satisfaction; increased organizational capability; spillovers to other programs or processes |
Is this the most I can get for this much investment? | Financial and organizational metrics; management and policy oriented; nonstandardized; requires new data collection and simulation or analytical model; can reveal hidden costs | Efficiency increases; spillovers; enhanced capabilities; avoidance of wasteful or suboptimal strategies |
Will the benefits justify the overall investment in this project? | Financial, organizational, social, individual metrics; individual and management oriented; nonstandard; requires new data collection and expanded methods; reveals hidden costs; potentially long timeframe | Enhanced capabilities and opportunities; avoiding unintended consequences; enhanced equity; improved quality of life; enhanced political support |
There are a variety of ROI techniques:
There are also a variety of ways of actually calculating ROI. Typically, the following are measured
The ROI calculation is not complete until the results are converted to dollars. This includes looking at combinations of hard and soft data. Hard data include such traditional measures as output, time, quality, and costs. In general, hard data are readily available and relatively easy to calculate. Soft data are hard to calculate and include morale, turnover rate, absenteeism, loyalty, conflicts avoided, new skills learned, new ideas, successful completion of projects, and so on, as shown in Table 3.15.
After the hard and/or soft data have been determined, they need to be converted to monetary values
What follows is an example of an ROI analysis for a system implementation. Spreadsheets were used to calculate ROI at various stages of the project: planning, development, and implementation.
Calculation: hours/person average × cost/hour × no. of people = total $ saved
Calculation: hours/person average × cost/hour × no. of people = total $ saved
Table 3.15 Hard and Soft Data

HARD DATA | |
Output | Units produced |
| Items assembled or sold |
| Forms processed |
| Tasks completed |
Quality | Scrap |
| Waste |
| Rework |
| Product defects or rejects |
Time | Equipment downtime |
| Employee overtime |
| Time to complete projects |
| Training time |
Cost | Overhead |
| Variable costs |
| Accident costs |
| Sales expenses |
SOFT DATA | |
Work habits | Employee absenteeism |
| Tardiness |
| Visits to nurse |
| Safety-rule violations |
Work climate | Employee grievances |
| Employee turnover |
| Discrimination charges |
| Job satisfaction |
Attitudes | Employee loyalty |
| Employee self-confidence |
| Employee’s perception of job responsibility |
| Perceived changes in performance |
New skills | Decisions made |
| Problems solved |
| Conflicts avoided |
| Frequency of use of new skills |
Development and advancement | Number of promotions or pay increases |
| Number of training programs attended |
| Requests for transfer |
| Performance appraisal ratings |
Initiative | Implementation of new ideas |
| Successful completion of projects |
| Number of employee suggestions |
Calculation: unit cost × no. of units = total $ saved
Calculation: = $ saved per year
Calculation: ROI = (Benefits − Costs)/Costs
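The spreadsheet-style calculations above can be sketched as simple functions. The function names and all dollar figures below are illustrative assumptions, not values from the case study:

```python
# A minimal sketch of the savings and ROI calculations described above.
# All inputs are hypothetical examples.

def labor_savings(hours_per_person, cost_per_hour, num_people):
    """hours/person average x cost/hour x no. of people = total $ saved"""
    return hours_per_person * cost_per_hour * num_people

def unit_savings(unit_cost, num_units):
    """unit cost x no. of units = total $ saved"""
    return unit_cost * num_units

def roi(benefits, costs):
    """ROI = (Benefits - Costs) / Costs"""
    return (benefits - costs) / costs

# Example: 4 hours/week saved per person over 52 weeks, $50/hour, 25 people
labor = labor_savings(4 * 52, 50, 25)   # 208 h x $50 x 25 = $260,000
units = unit_savings(2.50, 40_000)      # $2.50 x 40,000 = $100,000
print(roi(labor + units, 250_000))      # (360,000 - 250,000)/250,000 = 0.44
```

Keeping each calculation as its own function mirrors the separate spreadsheet cells used at the planning, development, and implementation stages.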
These ROI calculations are based on valuations of improved work product, what is referred to as a cost-effectiveness strategy.
ROI evaluates an investment’s potential by comparing the magnitude and timing of expected gains to the investment costs. For example, a new initiative costs $500,000 and will deliver an additional $700,000 in increased profits. Simple ROI = (gains − investment costs)/investment costs: ($700,000 − $500,000)/$500,000 = $200,000/$500,000 = 40%. This calculation works well in situations where benefits and costs are easily known, and is usually expressed as an annual percentage return.
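The simple ROI arithmetic in the $500,000 example works out as follows (the function name is a hypothetical convenience):

```python
def simple_roi(gains, investment):
    """Simple ROI: net gain expressed as a fraction of the investment."""
    return (gains - investment) / investment

# $700,000 in increased profits on a $500,000 initiative
print(f"{simple_roi(700_000, 500_000):.0%}")  # 40%
```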
However, technology investments frequently involve financial consequences that extend over several years. In this case, the metric has meaning only when the time period is clearly stated. Net present value (NPV) recognizes the time value of money by discounting costs and benefits over a period of time, and focuses on the impact on cash flow or savings rather than on net profit.
A meaningful NPV requires sound estimates of the costs and benefits and use of the appropriate discount rate. An investment is acceptable if the NPV is positive. For example, an investment costing $1 million has an NPV of savings of $1.5 million. Therefore, ROI = (NPV of savings − initial investment cost)/initial investment cost: ($1,500,000 − $1,000,000)/$1,000,000 = 50%. This may also be expressed as ROI = $1.5M (NPV of savings)/$1M (initial investment) × 100 = 150%.
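The discounting behind NPV can be sketched in a few lines. The cash-flow figures and 10% discount rate below are illustrative assumptions:

```python
def npv(rate, cash_flows):
    """Discount a series of yearly cash flows (year 0 first) to present value."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

# Hypothetical: $1M invested now, $600K in savings in each of the next
# 3 years, discounted at 10%
flows = [-1_000_000, 600_000, 600_000, 600_000]
value = npv(0.10, flows)
print(round(value))  # positive NPV -> the investment is acceptable
```

Note how discounting matters: the undiscounted savings total $1.8M, but at a 10% rate their present value is only about $1.49M, so the net figure is well below the naive $800K surplus.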
The internal rate of return (IRR) is the discount rate that sets the net present value of the program or project to zero. While the internal rate of return does not generally provide an acceptable decision criterion, it does provide useful information, particularly when budgets are constrained or there is uncertainty about the appropriate discount rate.
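Because the IRR is defined as the rate where NPV crosses zero, it can be found numerically. This sketch uses a simple bisection search (one of several possible root-finding methods) on the same hypothetical cash flows as before:

```python
def npv(rate, cash_flows):
    """Discount yearly cash flows (year 0 first) to present value."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def irr(cash_flows, lo=0.0, hi=1.0, tol=1e-6):
    """Bisection search for the discount rate that sets NPV to zero.
    Assumes NPV is positive at `lo` and negative at `hi`."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid, cash_flows) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Hypothetical: $1M invested, $600K saved per year for 3 years
flows = [-1_000_000, 600_000, 600_000, 600_000]
print(f"IRR = {irr(flows):.1%}")
```

Comparing the IRR against a range of plausible discount rates is one way to use it when there is uncertainty about which rate is appropriate.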
The U.S. CIO Council developed the value measuring methodology (VMM; see Appendix VIII) to define, capture, and measure value associated with electronic services unaccounted for in traditional ROI calculations, to fully account for costs, and to identify and consider risk.
Most companies track the cost of a project using only two dimensions: planned costs versus actual costs. Using this particular metric, if managers spend all of the money that has been allocated to a particular project, they are right on target. If they spend less money, they have a cost underrun—a greater expenditure results in a cost overrun. However, this method ignores a key third dimension—the value of work performed.
Earned-value management—or EVM—enables you to measure the true cost of performance of long-term capital projects. Although EVM has been in use for years, government contractors remain its major practitioners.
The key tracking EVM metric is the cost performance index or CPI, which has proved remarkably stable over the course of most projects. The CPI shows the relationship between the value of work accomplished (“earned value”) and the actual costs, as shown in the following example.
If the project is budgeted to have a final value of $1 billion, but the CPI is running at 0.8 when the project is, say, one-fifth complete, the actual cost at completion can be expected to be around $1.25 billion ($1 billion/0.8). You are earning only 80 cents of value for every dollar you are spending. Management can take advantage of this early warning by reducing costs while there is still time.
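The CPI arithmetic from the $1 billion example can be sketched as follows; the earned-value and actual-cost inputs are illustrative figures chosen to produce the 0.8 CPI described in the text:

```python
def cpi(earned_value, actual_cost):
    """Cost performance index: value of work accomplished per dollar spent."""
    return earned_value / actual_cost

def estimate_at_completion(budget_at_completion, cpi_value):
    """Projected final cost if the current CPI holds for the rest of the project."""
    return budget_at_completion / cpi_value

# One-fifth into a project budgeted at $1 billion: $160M of value earned
# against $200M actually spent gives a CPI of 0.8
bac = 1_000_000_000
index = cpi(earned_value=160_000_000, actual_cost=200_000_000)
print(estimate_at_completion(bac, index))  # 1250000000.0, i.e., $1.25 billion
```

A CPI below 1.0 this early is the warning signal the text describes: every dollar spent is earning less than a dollar of planned value.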
Several software tools, including Microsoft Project, have the capability of working with EVM.
Table 3.16 provides examples of performance measures that are typical for many IT projects. While the category and metrics columns are fairly representative of those used in IT projects in general, the measure of success will vary greatly and should be established for each individual project, as appropriate.
Table 3.16 Typical Project Performance Measures

CATEGORY | FOCUS | PURPOSE | MEASURE OF SUCCESS |
Schedule performance | Tasks completed vs. tasks planned at a point in time. | Assess project progress. Apply project resources. | 100% completion of tasks on critical path; 90% all others |
| Major milestones met vs. planned. | Measure time efficiency. | 90% of major milestones met. |
| Revisions to approved plan. | Understand and control project “churn.” | All revisions reviewed and approved. |
| Changes to customer requirements. | Understand and manage scope and schedule. | All changes managed through approved change process. |
| Project completion date. | Award/penalize (depending on contract type). | Project completed on schedule (per approved plan). |
Budget performance | Revisions to cost estimates. | Assess and manage project cost. | 100% of revisions are reviewed and approved. |
| Dollars spent vs. dollars budgeted. | Measure cost-efficiency. | Project completed within approved cost parameters. |
| Return on investment (ROI). | Track and assess performance of project investment portfolio. | ROI (positive cash flow) begins according to plan. |
| Acquisition cost control. | Assess and manage acquisition dollars. | All applicable acquisition guidelines followed. |
Product quality | Defects identified through quality activities. | Track progress in, and effectiveness of, defect removal. | 90% of expected defects identified (e.g., via peer reviews, inspections). |
| Test case failures vs. number of cases planned. | Assess product functionality and absence of defects. | 100% of planned test cases execute successfully. |
| Number of service calls. | Track customer problems. | 75% reduction after 3 months of operation. |
| Customer satisfaction index. | Identify trends. | 95% positive rating. |
| Customer satisfaction trend. | Improve customer satisfaction. | 5% improvement each quarter. |
| Number of repeat customers. | Determine if customers are using the product multiple times (could indicate satisfaction with the product). | “X” percentage of customers use the product “X” times during a specified time period. |
| Number of problems reported by customers. | Assess quality of project deliverables. | 100% of reported problems addressed within 72 h. |
Compliance | Compliance with Enterprise Architecture model requirements. | Track progress toward department-wide architecture model. | Zero deviations without proper approvals. |
| Compliance with interoperability requirements. | Track progress toward system interoperability. | Product works effectively within system portfolio. |
| Compliance with standards. | Alignment, interoperability, consistency. | No significant negative findings during architect assessments. |
| For website projects, compliance with style guide. | To ensure standardization of website. | All websites have the same “look and feel.” |
| Compliance with Section 508. | To meet regulatory requirements. | Persons with disabilities may access and utilize the functionality of the system. |
Redundancy | Elimination of duplicate or overlapping systems. | Ensure return on investment. | Retirement of 100% of identified systems. |
| Decreased number of duplicate data elements. | Reduce input redundancy and increase data integrity. | Data elements are entered once and stored in one database. |
| Consolidate help desk functions. | Reduce dollars spent on help desk support. | Approved consolidation plan by June 30, 2002. |
Cost avoidance | System is easily upgraded. | Take advantage of, for example, COTS upgrades. | Subsequent releases do not require major “glue code” project to upgrade. |
| Avoid costs of maintaining duplicate systems. | Reduce IT costs. | 100% of duplicate systems have been identified and eliminated. |
| System is maintainable. | Reduce maintenance costs. | New version (of COTS) does not require “glue code.” |
Customer satisfaction | System availability (up time). | Measure system availability. | 100% of requirement is met (e.g., 99% M–F, 8 a.m. to 6 p.m., and 90% S & S, 8 a.m. to 5 p.m.). |
| System functionality (meets customer’s/user’s needs). | Measure how well customer needs are being met. | Positive trend in customer satisfaction survey(s). |
| Absence of defects (that impact customer). | Number of defects removed during project life cycle. | 90% of defects expected were removed. |
| Ease of learning and use. | Measure time to becoming productive. | Positive trend in training survey(s). |
| Time it takes to answer calls for help. | Manage/reduce response times. | 95% of severity one calls answered within 3 h. |
| Rating of training course. | Assess effectiveness and quality of training. | 90% of responses of “good” or better. |
Business goals/mission | Functionality tracks reportable inventory. | Validate system supports program mission. | All reportable inventory is tracked in system. |
| Turnaround time in responding to congressional queries. | Improve customer satisfaction and national interests. | Improve turnaround time from 2 days to 4 h. |
| Maintenance costs. | Track reduction of costs to maintain system. | Reduce maintenance costs by two-thirds over a 3-year period. |
| Standard desktop platform. | Reduce costs associated with upgrading user’s systems. | Reduce upgrade costs by 40%. |
| Time taken to complete tasks. | To evaluate estimates. | Completions are within 90% of estimates. |
| Number of deliverables produced. | Assess capability to deliver products. | Improve product delivery 10% in each of the next 3 years. |
The following set of questions is intended to stimulate thinking about performance measures that are appropriate for a given project or organization.