7

THE PROMISE AND PERILS OF THE BALANCED SCORECARD

The balanced scorecard, the methodology developed by Drs. Robert S. Kaplan and David Norton, recognises the shortcomings of executive management’s excessive emphasis on after-the-fact, short-term financial results. It resolves this myopia and improves organisational performance by shifting attention from financial measures to managing non-financial operational measures related to customers, internal processes, and employee innovation, learning and growth. These influencing measures are reported during the period, when reactions can occur sooner, which, in turn, leads to better financial results.

The balanced scorecard is one of the underpinnings needed to complete the full vision of the performance management framework. Will the adoption of the balanced scorecard encounter the same difficulty crossing the chasm that activity-based cost management (ABC/M) systems encountered in the 1990s? It took many failed ABC/M implementations before organisations learned what ABC/M is and how to shape, size and level the detail of those systems to make them ready for use. Are balanced scorecard implementations going to experience the same difficulty?

WHAT IS A BALANCED SCORECARD?

An early indication of trouble is the confusion about what a balanced scorecard is and, even more, about what its purpose is. There is little consensus. If you ask executives whether they are using a balanced scorecard, many say they are. However, if you next ask them to describe it, you’ll get widely different descriptions. There is no standard—yet. Some executives say they have successfully transferred their old columnar management reports into visual dashboards with flashing red and green lights and directional arrows. Some realise a scorecard is more than that, and they have compressed their old measures into a smaller, more manageable number of more relevant measures. Neither may be the correct method.

How does anyone know if those measures—the so-called ‘key performance indicators’ (KPIs)—support the strategic intent of the executive team? Are the selected measures the right measures? Or are they what you can measure rather than what you should measure? Is the purpose of the scorecard only to better monitor the dials, rather than facilitate the employee actions needed to move the dials?

Discussion about balanced scorecards and dashboards is regularly appearing in business magazines, website discussion groups and at conferences. Today’s technology makes it relatively simple to convert reported data into a dashboard dial, but what are the consequences? What actions are suggested from just monitoring the dials?

In Chapter 4, ‘A Taxonomy of Accounting and Costing Methods,’ I said that results and outcome information should answer three questions: What? So what? and Then what? These same questions apply here. Sadly, most scorecards and dashboards only answer the first question. Worse yet, answering the ‘what’ may not even focus on a relevant ‘what.’ Organisations struggle with determining what to measure.

Organisations need to think deeper about what measures drive value and reflect achieving the direction-setting strategic objectives of their executive team. With the correct measures, organisations should strive to optimise these measures and ideally be continuously forecasting their expected results.

Balanced Scorecards Are Companions to Strategy Maps

Why are so many people familiar with the term balanced scorecard but so few are familiar with the term strategy maps? I believe the strategy map is orders of magnitude more important than the scorecard, which is merely a feedback mechanism. Why do executives want a balanced scorecard but without a strategy map? One possible explanation is the mistaken belief that those vital few KPI measures, rather than the trivial many, can be derived without first requiring employee teams and managers to understand the answer to a key question: Where does the executive team want the organisation to go? This question is best answered by the executive team’s vision and mission—and they must point in the direction they want the organisation to go. That is the executive team’s primary job—setting direction. The strategy map and its companion balanced scorecard are also important, but their combination answers a different question: How will we get there?

Figure 7-1 illustrates a generic strategy map with its four stacked, popular perspectives. Each rectangle represents a strategic objective and its associated projects, processes or competencies at which to excel plus their appropriate measures and targets.

Figure 7-1: Generic Strategy Map Architecture


Source: Copyright Gary Cokins. Used with permission.

Note that there are dependency linkages in a strategy map with an upward direction of cumulating effects of contributions. The derived KPIs are not in isolation but, rather, have context to the mission and vision. To summarise, a strategy map’s linkages, read from the bottom perspective upward, do the following:

• Accomplishing the employee innovation, learning and growth objectives contributes to the internal process improvement objectives.

• Accomplishing the internal process objectives contributes to the customer satisfaction and loyalty objectives.

• Accomplishing the customer-related objectives results in achieving the financial objectives, typically a combination of turnover growth and cost management objectives.

The strategy map is like a force field in physics, as with magnetism, where the energy, priorities and actions of people are mobilised, aligned and focused. One can say that at the top of the map, maximising shareholder wealth (or for public sector organisations, maximising community and citizen value) is not really a goal—it is a result. It is a result of accomplishing all the linked strategic objectives with cause and effect relationships.

The peril that threatens the success of this methodology is executive teams that are anxious to assign measures with targets to employees and hold them accountable. Executives typically skip two critical steps: involving employees to gain their buy-in (and their commitment to the measures) so they understand the executive team’s strategy, and the even more critical prior step of identifying the mission-essential projects and initiatives that will achieve the strategic objectives. The presence of enabling projects and initiatives goes to the heart of what distinguishes a strategic objective from just getting better at what you have already been doing.

Figure 7-2 illustrates who ideally should be responsible for each of the five elements of a strategic objective: the executive team or the managers and employees. Sadly, many organisations neglect the first two elements identified in a strategy map. They begin with the third column to select KPIs without constructing a strategy map. The enterprise performance management (EPM) intelligence resides in the strategy map.

Figure 7-2: Who is Responsible for What?


Source: Copyright Gary Cokins. Used with permission.

Strategy maps and their derived balanced scorecard are navigational tools to guide the organisation to implement the strategy, not necessarily to formulate the strategy. Executive teams are pretty good at defining strategy, but a high involuntary CEO turnover rate and the increasingly shorter tenure of CEOs are evidence of their failure to implement their strategy.

Measurements Are More of a Social System Than a Technical One

Do not misinterpret what I am saying. Selecting and measuring KPIs are critical. You get what you measure, and strategy maps and scorecards serve a greater social purpose than a technical one (although IT and software are essential enablers). Performance measures motivate people and focus them on what matters most.

Imagine if every day, every employee in an organisation, from the janitor to the CEO, could answer the question ‘How am I doing on what is important?’ The first half of the question can be easily displayed on a dial with a target. It is reported in a scorecard or dashboard. However, it is the second half of the question that is the key—on what is important—and that is defined by the strategy map.

The risk and peril of the balanced scorecard involves the process of identifying and integrating appropriate cause and effect linkages of strategic objectives that are each supported by the vital few measures and then subsequently cascading the KPIs down through the organisation. KPIs ultimately extend into performance indicators (PIs)—operational performance indicators—that employees can relate to and directly affect.

The primary task of a strategy map and its companion balanced scorecard is to align people’s work and priorities with multiple strategic objectives that, if accomplished, will achieve the strategy and consequently realise the end-game of maximising shareholder wealth (or maximising citizen value). The strategic objectives are located in the strategy map, not in the balanced scorecard. The KPIs in the balanced scorecard reflect the strategic objectives in the strategy map.
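To make this cascading relationship concrete, the following is a minimal sketch in Python of how a single strategic objective might carry its scorecard KPIs and the operational PIs cascaded from them. The objective, measure names, targets and actuals are hypothetical illustrations, not measures prescribed by the methodology.

```python
from dataclasses import dataclass, field

@dataclass
class Measure:
    name: str      # KPI or PI name (hypothetical examples below)
    target: float  # pre-defined target level
    actual: float  # latest reported result

    def gap(self) -> float:
        # A positive gap means the actual result falls short of its target
        return self.target - self.actual

@dataclass
class StrategicObjective:
    name: str
    kpis: list[Measure] = field(default_factory=list)          # strategic scorecard measures
    cascaded_pis: list[Measure] = field(default_factory=list)  # operational dashboard measures

# A hypothetical customer-perspective objective with one KPI cascaded into two PIs
objective = StrategicObjective(
    name="Increase customer loyalty",
    kpis=[Measure("Customer retention rate (%)", target=92.0, actual=88.5)],
    cascaded_pis=[
        Measure("On-time delivery (%)", target=97.0, actual=95.0),
        Measure("First-call resolution (%)", target=80.0, actual=74.0),
    ],
)

for m in objective.kpis + objective.cascaded_pis:
    print(f"{m.name}: gap to target = {m.gap():.1f}")
```

The point of the structure is that every PI an employee can directly affect traces upward to a KPI and, through it, to a strategic objective on the strategy map.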

Debate will continue about how to select the vital few KPIs for workgroups. The following are two approaches:

1. Newtonian-style managers, who believe the world is a big machine with dials, pulleys and levers to push and pull, find appeal in looking at benchmark data to identify which relevant and unfavourably large performance gaps they should focus on. They want to know, ‘What must we get better at?’ The KPIs are then derived, and strategies are deduced from recognising deficiencies.

2. In contrast, Darwinian-style managers, who believe the organisation is a sense-and-respond organism, find appeal in having the executive team design the strategy map by applying a SWOT (strengths, weaknesses, opportunities and threats) approach. This approach begins with the executive team freely brainstorming and recording an organisation’s SWOTs. They then cluster similar, individual SWOTs into strategic objectives with causal linkages in the strategy map. Following this initial step, the middle managers and core process owners are then tasked with identifying the few and manageable projects and core processes to improve that will attain the executive team’s strategic objectives in the strategy map. After that step, those same middle managers can identify the KPIs that will indicate progress toward achieving the projects and improving critical core processes. This latter approach not only assures that middle managers and employee teams will understand the executive’s strategy, of which most middle managers and employees are typically unaware, but it further generates their buy-in and ownership of the balanced scorecard’s KPIs because these have not been mandated to them from the executives. (Of course, the executive team can subsequently challenge and revise their lower managers’ selected projects and KPIs [debate is always healthy] but only after the buy-in and learning has occurred.)

SCORECARD OR REPORT CARD? THE IMPACT OF SENIOR MANAGEMENT’S ATTITUDE

Regardless of which technique or other method is used to identify the KPIs, the KPIs ideally should reflect the executive team’s strategic intent and not be reported in isolation, the way the annual financial budget is typically disconnected from the strategy. This is the peril of the balanced scorecard. Its main purpose is to communicate the executive team’s strategy to employees in a way they can understand and to report the impact of their contribution to attaining it. However, starting with a KPI definition without context to the executive team’s mission and vision denies this important step.

Research from Raef Lawson, a former professor at the State University of New York at Albany, suggests that a major differentiator between success and failure in a balanced scorecard implementation is senior management’s attitude. Is it a scorecard or a report card? Will we use it for punishment or remedy? Do we work for bosses we must ‘obey’? Or do we work for coaches, like on a sports team, and mentors who guide and advise us?

As an example, is senior management anxiously awaiting those dashboards so they can follow the cascading score meters downward in order to micro-manage the workers under their middle managers? Or will the executives appropriately restrict their primary role and responsibility to define and continuously adjust strategy (which is dynamic, not static, always reacting to new insights) and then allow the empowerment of employee teams to select KPIs from which employees can actively determine the corrective interventions to align with the strategy?

The superior strategy map and scorecard systems embrace employee teams that communicate among themselves to take action, rather than a supervisory command-and-control style from senior managers. An executive team micro-managing the KPI score performance of employees can be harsh. If the strategy map and cascading KPI and PI selection exercise is done well and subsequently maintained, then higher level managers need only view their own score performance, share their results with the employee teams below them and coach the teams to improve their KPI and PI scores or re-consider adding or deleting KPIs or PIs. More mature balanced scorecard users with commercial software can re-adjust the KPI and PI weighting coefficients to steer toward better alignment with the strategic objectives.
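As a rough illustration of how such weighting coefficients might work, the following Python sketch rolls a few KPI scores up into a single weighted score for one strategic objective. The measure names, scores and weights are assumptions chosen for illustration; commercial scorecard software would handle this in its own way.

```python
# Hypothetical KPI scores (0-100 scale) and weighting coefficients for one objective
kpis = {
    "Customer retention rate": {"score": 78.0, "weight": 0.5},
    "Net promoter score":      {"score": 65.0, "weight": 0.3},
    "Complaint resolution":    {"score": 90.0, "weight": 0.2},
}

def composite_score(measures: dict) -> float:
    """Weight each KPI's score and normalise by the total weight."""
    total_weight = sum(m["weight"] for m in measures.values())
    weighted_sum = sum(m["score"] * m["weight"] for m in measures.values())
    return weighted_sum / total_weight

print(f"Objective score: {composite_score(kpis):.1f}")
# Re-adjusting the weights changes which KPIs pull the composite score the most,
# which is how weighting can steer alignment toward the strategic objectives.
```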

GPS NAVIGATORS FOR AN ORGANISATION

In Chapter 2, ‘Enterprise Performance Management: Myth or Reality?’ I described a car analogy in which the latest thing is to have a global positioning system (GPS) route navigator in our car. As with most new technologies, such as when handheld calculators replaced slide rules or when laptop computers emerged, a GPS is another gadget that is evolving into a necessity. It gets you to your destination, and a comforting voice guides you along the way. Why can’t an organisation have a similar device? It can.

My belief is that the refined use of strategy maps and their companion balanced scorecards is becoming the GPS route navigator for organisations. For organisations, the destination input into the GPS is the executive team’s strategy. As described earlier, the executive team’s primary job is to set strategic direction, and the top of their strategy map is their destination. However, unlike a GPS’s knowledge of roads and algorithms to determine the best route, managers and employee teams must ‘map’ which projects, initiatives and business process improvements are best for reaching the destination of realising the strategy. In addition, when you are driving a car with a GPS and you make a wrong turn, the GPS’s voice tells you that you are off track—and it then provides a corrective action instruction. However, with most organisations’ calendar-based and long cycle-time reporting, reaction is delayed. The EPM framework includes a GPS.

Next, the organisation itself needs to be included as the car. The motor and driveshaft are the employees, with their various methodologies, such as customer value management and service delivery, that propel the organisation toward its target. Collectively, the many methodologies, including lean management and activity-based costing, constitute performance management as the organisation’s gears.

Just as a poorly performing car with some broken gears, misaligned tires and poor lubrication will yield poor miles per gallon (or kilometres per litre), poorly integrated methodologies, impure raw data and a lack of digitisation and analytics result in a poor rate of shareholder financial wealth creation. The full vision of EPM removes the friction, vibration and weak torque to not only optimise the consumption of the organisation’s resources—its employees and spending—but also get the organisation to its strategy destination better, faster, cheaper, smarter and safer. The result? A higher shareholder wealth creation yield.

Finally, as earlier mentioned, a strategy is never static but is constantly adjusted. It is dynamic, which means the destination input to the GPS navigator is constantly changing. This places increasing importance on predictive analytics to determine where the best destination for stakeholders is located. How much longer do you want to drive your existing car when an EPM car with a GPS is now available to lift wealth creation efficiency and yield?

Some proposed management improvement methodologies, like the lights-out, fully automated manufacturing factory touted in the 1980s, are fads that come and go, but the strategy map and its companion, the balanced scorecard for feedback, are certain to be a sustained methodology in the long term—perhaps forever. It makes sense that executive teams provide direction-setting, and employee teams then perform the actions to get there. Are these early 21st century missteps and misunderstandings in implementing the balanced scorecard due to arrogance, ignorance or inexperience? I suggest it is due to inexperience.

Conflict and tension are natural in all organisations. Therefore, it takes managers and employees time to stabilise what ultimately is a behavioural measurement mechanism of cause-and-effect KPIs, to distinguish between KPIs and PIs, and then to master how to use both types of measures to navigate, power and steer as an integrated enterprise. As stated by the author Peter Senge, a thought leader in the field of organisational change management, the differentiator between successful and failing organisations will be the rate, and not just the amount, of organisational learning. Those intangible assets—employees as knowledge workers and the information provided to them—are what truly power the EPM framework.

HOW ARE BALANCED SCORECARDS AND DASHBOARDS DIFFERENT?

Confusion exists about the difference between a balanced scorecard and a dashboard. There is similar confusion differentiating KPIs from the normal and routine measures that we refer to as PIs. The word ‘key’ in KPI is the operative term. When an organisation proudly proclaims it has 300 KPIs, one must ask: how can they all be key?

An organisation has only so much energy and so many resources to focus. To use a radio analogy, KPIs are what distinguish the signal from the noise—the measures of progress toward strategy implementation. As a negative result of this confusion, organisations are including an excessive number of PIs in their balanced scorecard, which should be restricted to KPIs.

A misconception about a balanced scorecard is that its primary purpose is to monitor results. That is secondary. Its primary purposes are to report the carefully selected measures that reflect the strategic intent of the executive team and then enable ongoing understanding about what should be done to align the organisation’s work and priorities to attain the executive team’s strategic objectives. The strategic objectives should ideally be articulated in a strategy map, which serves as the visual vehicle from which to identify the projects and initiatives needed to accomplish each objective or the specific core processes at which the organisation needs to excel. After this step is completed, KPIs are selected, and their performance targets are set. With this understanding, it becomes apparent that the strategy map’s companion scorecard, on its surface, serves more as a feedback mechanism to allow everyone in the organisation, from front-line workers up to the executive team, to answer that previously posed question, ‘How are we doing on what is important?’ More importantly, the scorecard should facilitate analysis to also know why. The idea is not to just monitor the dials but to move the dials.

Michael Hammer, the author who introduced the concept of business process re-engineering, described the sad situation of measurement abuse in his book, The Agenda: What Every Business Must Do to Dominate the Decade:

In the real world ... a company’s measurement systems typically deliver a blizzard of nearly meaningless data that quantifies practically everything in sight, no matter how unimportant; that is devoid of any particular rhyme or reason; that is so voluminous as to be unusable; that is delivered so late as to be virtually useless; and that then languishes in printouts and briefing books without being put to any significant purpose.... In short, measurement is a mess. We measure far too much and get far too little for what we measure because we never articulated what we need to get better at, and our measures aren’t tied together to support higher-level decision making.1

Hammer did not hide his feelings, but has the cure been worse than the ailment? Simply reducing the number of measures can still result in an organisation measuring what it can measure as opposed to what it should measure. To determine what you should measure requires deeper understanding of the underlying purposes of a balanced scorecard relative to a dashboard.

Scorecards and Dashboards Serve Different Purposes

The two terms—scorecards and dashboards—tend to be confused or, rather, used interchangeably, when each brings a different set of capabilities. The sources of the confusion are as follows:

• Both represent a way to track results.

• Both make use of traffic light colouring systems, dials, sliders and other visual aids.

• Both can have targets, thresholds and alert messages.

• Both can provide drill down to other measurements and reports.

The difference comes from the context in which they are applied. To provide some history, as busy executives and managers have struggled to keep up with the amount of information being thrust at them, the concept of ‘traffic lighting’ has been applied to virtually any and all types of reporting. As technology has improved, more features have been added. An example is the ability to link to other reports and to drill down to finer levels of detail. The common denominator was the speed of being able to focus on something that required action or further investigation. The terminology evolved to reflect how technology vendors described what provided this capability. As a consequence, both dashboard and scorecard terms are being used interchangeably.

Figure 7-3 illustrates the difference between scorecards and dashboards, starting with all measurements as their source. Scorecards and dashboards are not contradictory. They both display measurement, and they are both important, but they serve different purposes.

Figure 7-3: Strategic KPI Scorecards Versus PI Dashboards


Source: Copyright Gary Cokins. Used with permission.

The top of the figure is the realm of scorecards. Scorecards are intended to be strategic. They align the behaviour of employees and partners with the strategic objectives formulated by the executive team. In contrast, dashboards, at the bottom of the figure, are intended to be operational.

Some refer to dashboards as ‘dumb’ reporting and scorecards as ‘intelligent’ reporting. The reason is that dashboards primarily are for data visualisation. They display what is happening during a time period. Most organisations begin with identifying what they are already measuring and construct a dashboard dial from there. However, dashboards do not communicate why something matters, why someone should care about the reported measure or what the impact may be if an undesirable declining measure continues. In short, dashboards report what you can measure.

In contrast, a scorecard does provide the information lacking in dashboards. A scorecard additionally answers questions by providing deeper analysis, drill-down capabilities, traffic light alert messaging and forecasting for inferences of performance potential to determine motivational targets. Scorecards do not start with the existing data but, rather, they begin with identifying what strategic projects to complete and core processes to improve and excel in.

The selection and validation of the correct or best KPIs is a constant debate. Statistical correlation interaction analysis among KPIs can determine the degree of influence and ‘lift’ that various cascaded KPIs have on the higher level, enterprise-wide KPIs. Hence, correlation analysis validates or improves the KPI selection. In addition, this type of analysis can automatically uncover previously unknown statistical relationships that may suggest cause and effect and can be used for predictive power. The goal is to act on anticipated targets and constantly refocused outcomes so that employees can proactively make changes before unexpected events occur that would require a much more expensive reaction. In short, scorecards report what you should measure.
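For example, a first pass at this kind of correlation analysis could look like the following Python sketch using pandas. The measure names and monthly figures are invented for illustration, and a strong coefficient only suggests, rather than proves, a cause-and-effect linkage worth testing further.

```python
import pandas as pd

# Hypothetical monthly history of two cascaded measures and an enterprise-wide KPI;
# real figures would come from the scorecard system's measure history.
history = pd.DataFrame({
    "on_time_delivery_pct":   [91, 93, 92, 95, 96, 97, 94, 98],
    "customer_retention_pct": [85, 86, 86, 88, 89, 90, 88, 91],
    "revenue_growth_pct":     [2.1, 2.4, 2.2, 3.0, 3.3, 3.6, 2.9, 3.9],
})

# Pairwise Pearson correlations among the measures
print(history.corr(method="pearson").round(2))
```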

The following are some guidelines for understanding the differences:2

Scorecards monitor progress toward accomplishing strategic objectives. A scorecard displays periodic snapshots of performance associated with an organisation’s strategic objectives and plans. It measures organisational activity at a summary level against pre-defined targets to see if performance is within acceptable ranges. Its selection of KPIs helps executives communicate strategy to employees and focuses users on the highest priority projects, initiatives, actions and tasks required to implement plans. The word ‘key’ differentiates KPIs from the PIs reported in dashboards.
Scorecard KPIs ideally should be derived from a strategy map, rather than just a list of important measures that the executives have requested to be reported. Regardless of whether the suggested Kaplan and Norton four-stacked perspectives, or some variant, are used, scorecard KPIs should have cause and effect linkages (eg, statistical correlations). Directionally upward from the employee-centric innovation, learning and growth perspective, the KPIs should reveal the cumulative build of potential to realised economic value.
There are two key distinctions of scorecards: (1) Each KPI must have a pre-defined target measure, and (2) KPIs should comprise both project-based KPIs (eg, milestones, progress percentage of completion, degree of planned versus accomplished outcome) and process-based KPIs (eg, customer satisfaction, percentage of on-time deliveries against customer promise dates). A scorecard comprising mainly or exclusively process-based KPIs is not an efficient engine of change. It merely monitors whether progress from the traditional drivers of improvement, such as quality or cycle-time improvement, is occurring. Process improvement is important, but innovation and change are even more important. Strategy is all about change and not just doing the same things better.

Dashboards monitor and measure processes. A dashboard, however, is operational and reports information typically more frequently than scorecards, usually with operational measures. Each dashboard measure is reported with little regard to its relationship to other dashboard measures. Dashboard measures do not directly reflect the context of strategic objectives.
This information can be more real-time in nature, like a car dashboard that shows drivers their current speed, fuel level and engine temperature. A dashboard ideally should be directly linked to systems that capture events as they happen, and it should warn users through alerts or exception notifications when performance against any number of measurements deviates from the norm or what is expected.
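A minimal sketch of that kind of exception alerting, assuming hypothetical PI names and threshold values, might look like this in Python:

```python
# Flag any PI whose latest reading falls outside its acceptable band.
pi_readings = {
    "Orders shipped per hour": {"value": 42,  "lower": 45,   "upper": None},
    "Scrap rate (%)":          {"value": 1.8, "lower": None, "upper": 2.0},
    "Call wait time (sec)":    {"value": 95,  "lower": None, "upper": 60},
}

def traffic_light(reading: dict) -> str:
    """Return 'red' when a reading breaches a threshold, otherwise 'green'."""
    below = reading["lower"] is not None and reading["value"] < reading["lower"]
    above = reading["upper"] is not None and reading["value"] > reading["upper"]
    return "red" if (below or above) else "green"

for name, reading in pi_readings.items():
    if traffic_light(reading) == "red":
        print(f"ALERT: {name} = {reading['value']} is outside its threshold")
```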

I caution organisations that are paying more attention to their performance measurements about (1) the linkage of scorecard KPIs to the strategy map and also to the fiscal budget (as well as rolling financial forecasts), and (2) the linkage of dashboard PIs selected to influence behaviour that will ultimately result in achieving or exceeding the KPI targets. Strategy diagrams and the budget are located in Figure 7-3 and are described in the following text.

Scorecards Link the Executives’ Strategy to Operations and the Budget

A strategy diagram is located in the upper left of Figure 7-3. The figure denotes that KPIs should be derived from the executives’ strategic objectives and plans. If KPIs are selected independent of the strategy, then they will likely report only what can be measured, as opposed to what should be measured. Failure to implement a strategy is one of a CEO’s major concerns and, therefore, KPIs should either reflect mission-critical projects and initiatives or core business processes that must be excelled at. (Hence, there is the need for both project-based and process-based KPIs.)

The budget (and increasingly rolling financial forecasts) should be derived from the required funding of the projects (ie, the nonrecurring strategy expenses and capital investments) and the operational processes (ie, the recurring operational capacity-related expenses that vary with driver volumes, such as customer demand).

A strategy is dynamic and never static, as executives appropriately shift directions based on their new insights and observations. Reliably accurate forecasting is critical for both strategy formulation and future resource capacity management. Hence, both the KPIs and the necessary funding to realise the strategic plans will continuously be derived from the ‘living’ strategy diagram.

Dashboards Move the Scorecard’s Dials

The organisation’s traction and torque are reflected in the dashboard’s PI measures—the more frequently reported operational measures. Although some PIs may have pre-defined targets, PIs serve more to monitor trends across time or results against upper or lower threshold limits. As PIs are monitored and responded to, the corrective actions will contribute to achieving the KPI target levels with actual results.

Cause and effect relationships between and among measures underlie the entire approach to integrating the strategy map (formulation), balanced scorecard (appraisal), dashboards (implementation) and fiscal budgets (the fuel).

Strategy Is More Than Performing Better

A key to organisational survival involves differentiation from competitors. An important role of the executive team is to exhibit vision and constantly determine innovation to differentiate their organisation from others. This explains a misunderstanding about strategic objectives. Some mistakenly believe the purpose of strategic objectives is to keep an organisation adhered to a single, unbroken path. This is certainly not the case. As mentioned earlier, strategy is dynamic, not static. The purpose of strategic objectives in a strategy map is to re-direct the organisation from the tyranny of maintaining the status quo. Strategy is about constant change. If an organisation does not constantly change, then it is exposed to competitors constantly converging on similar products, services and processes. Differentiation is key to maintaining a competitive edge. Strategic objectives are about the changes an organisation should make to maintain a competitive edge.

Dashboards and scorecards are not mutually exclusive. In fact, the best dashboards and scorecards merge elements from one another.

A simple rule is to use the term dashboard when you merely want to keep score, like during a sporting event, and use the term scorecard when you want to understand the context of key scores in terms of how they influence achievement of strategic outcomes. A scorecard’s measures will be fewer in number—they are strategic and carry more weight and influence. In contrast, the number of dashboard measures could number in the hundreds or thousands—but you still need a way to focus on the unfavourable-to-target ones fast for tactical action. However, action with respect to a single measurement in a dashboard is less likely to change strategic outcomes as dramatically compared to when reported in a scorecard.

In general, scorecard KPIs are associated with the domain of the performance management framework. In contrast, dashboard PIs are associated with business intelligence.

Getting Past the Speed Bumps

I believe that the balanced scorecard and dashboard components of commercial EPM software should have some, but not many, pre-defined KPIs. That is, the vendor’s software should deliberately come with a limited, rather than a comprehensive, selection of KPIs that are commonly used by each type of industry. Otherwise, they will be used as a crutch without the deeper thinking. The same goes for websites with KPI dictionaries. The purpose of providing standard KPIs should only be to jump-start an organisation’s construction of its scorecard and dashboard system.

The reason for not providing a comprehensive and exhaustive list of industry-specific measures is that caution is needed whenever an organisation is identifying its measures. Measures drive employee behaviour. Caution is needed for two major reasons:

1. Measures should be tailored to an organisation’s unique needs.

2. Organisations should understand the basic concepts that differentiate scorecards from dashboards and KPIs from PIs.

My interest is that organisations successfully implement and sustain an integrated strategic scorecard and operational dashboard system. Hence, organisations should understand the distinctions described here. This is why I caution against simply using a generic list of various industries’ common KPIs and PIs, regardless of their source.

As with any improvement methodology, experience through use refines the methodology’s effectiveness and impact. The ‘plan-do-check-act’ cycle is a great practice for learning organisations. With improvement methodologies, it is difficult to get it perfectly right the first time. There will always be a learning curve. Many organisations over-plan and under-implement. With regard to KPI and PI selection, first learn the principles and then apply them through selecting, monitoring and refining the KPIs. Strategy maps and balanced scorecards are a craft, not a science.

Endnotes

1 Hammer, Michael. The Agenda: What Every Business Must Do to Dominate the Decade. New York: Crown Business, 2001, p. 101.

2 Eckerson, Wayne W. Performance Dashboards: Measuring, Monitoring, and Managing Your Business. Hoboken, NJ: John Wiley & Sons, 2006, p. 8.
