Chapter 4

The Enterprise Excellence Index


WHY AN ORGANIZATION MIGHT TRACK THIS
Questions Answered
  • Are employees supportive of the enterprise excellence initiatives we are implementing?
  • Are people learning the concepts of these programs?
Why Is This Information Important?
Organizations waste millions of dollars each year on programs and practices that don’t work, frustrate employees, and infuriate customers. It doesn’t matter whether your job is in business, government, health care, or education: everyone will have experienced at least some of these programs. The mystery is how each of these practices produces great results for a handful of organizations and fails for the rest. This chapter provides useful tips for spotting bad programs and practices, eliminating the waste they cause, and making other initiatives successful.

WHY TRASH ALL THESE SACRED COWS?

Well, first of all, it is fun to challenge sacred cows. In fact, according to authors Drs. Robert Kriegel and David Brandt, “Sacred cows make the best burgers.”1 Coming down on something most people are already down on is easy and ruffles very few feathers. Attacking the things people believe in, support, and are emotionally attached to gets a good reaction, however. Sure, you do get people mad, and they may attack you in response for trashing the program their company sells or is enamored with. However, at least you get a reaction and may cause people to think. If nothing else, it might encourage them to improve their programs and practices to make them more successful.

One of the most important skills to learn in life is to become a critical thinker—in other words, to be able to critically evaluate information, assess its truthfulness and accuracy, and decide whether you need to take action or use it in your life. We are bombarded with advertising messages at home and at work. People are constantly trying to sell us a product, service, or idea. Even those of us who pride ourselves on having good internal BS detectors are scammed every now and then. One of my doctors frequently tries to prescribe tests I don’t want or need.

We all claim to want the truth, but what we really want is hope. We want to believe that the new exercise machine purchased after watching a late-night infomercial will really give us six-pack abs. We really want to believe that a new president is going to fix our country, or a new CEO is going to fix our company. We all really want to believe that the financial planner we hired will invest our money wisely so we will be able to retire. Smart or not so smart, we are all human beings, and we all fall for some product, service, or program that does not really live up to its promises. Many of my friends are accountants, and they all say that doctors are pretty much the worst group of people when it comes to finances. As a group, I would guess that most doctors are a lot smarter than the average citizen, so it is puzzling that they are so bad with money. On the other hand, most of my accountant friends are not too handy with power tools. Even routine maintenance like painting a room, changing a light switch, or waxing a car can present a formidable challenge to those who are more comfortable with balance sheets and alternative minimum taxes.

We are all bad at something, just as we all have areas of expertise. We are most prone to buying nonsense when it comes from an area where we have little or no expertise. Try selling CRM software to a group of sales managers and they may buy it, because most don’t know anything about software. Try selling it to IT specialists, and you might have a lot of hard questions to answer. Try selling a new CT scanner to a group of radiologists and the questions will be very different from those that come from the hospital administrators and procurement people.

Smart business people are no different from talented plumbers, smart doctors, brilliant artists, or intelligent engineers; we are all susceptible to buying a line of bull from someone adept at selling it. What sells business people more than anything else is fear of losing out to the competition. Want to sell to Pepsi? Tell them that Coke is already a customer. Want to sell to Mayo Clinic? Tell them Cedars-Sinai and Cleveland Clinic are already using your program. Getting endorsements from a few brand-name clients goes a long way toward selling your program or practice to others. We also tend to trust our peers in the business. Most professionals belong to associations and groups where they interact with their peers in other organizations. If most of the sales vice presidents at the conference are raving about a particular sales training program, you are more likely to try it yourself. If everyone at the Conference Board meeting is talking about how one-question customer surveys have saved them money and improved customer satisfaction, you will probably start doing them as well. This herd mentality starts early, when some kid in third grade has the cool shoes or toy, and continues through the rest of our lives.

TYPES OF ORGANIZATIONS WHERE THIS METRIC IS APPROPRIATE

This measure is worth considering for any large organization: hospitals, universities, public school systems, manufacturers, service companies, the military, and government. No one is immune from the lure of these programs, which are mostly peddled by consultants with books, training programs, and impressive credentials. As organizations grow they become harder to manage, and results that were easy to achieve in their earlier years become more elusive. Large organizations also become sales targets for consultants and salespeople because they have both the resources and the perceived needs. If yours is an organization that never buys any of these management programs and never develops its own, there is no need for this metric on your scorecard. However, those organizations are quite rare, and even if you are not implementing one of these programs today, you no doubt will in the future. These enterprise excellence initiatives are not just management programs with three letters either. They are often associated with expensive new software. I recall working with a big insurance company in Minneapolis that had contracted with Ross Perot’s former company to redesign all of its operational processes and implement new software to improve efficiency and reduce costs. This was about a three-year project, and there were a lot of Perot consultants there for months on end, racking up those billable hours. If I were CEO of this company, I would want a way to track whether this initiative was going well, other than by looking at the monthly bills I got from the consultants and the milestones completed on their project plan.

HOW DOES THIS IMPACT PERFORMANCE?

The most direct way these programs impact performance is that they increase your costs considerably. Most of these programs cost at least a few hundred thousand dollars, and it is not uncommon for organizations to spend millions on them. The biggest cost is not the checks you write to the consultants or software company, but the disruption caused to the organization. Often the entire place is in turmoil, with all the extra meetings, training programs, committees, and teams, and with other projects put on hold. One way to get an accurate measure of this is to use the distraction index described in Chapter 19. At least this will tell you how many hours per week are spent on improvement programs. Computing the cost is then a simple matter of multiplying employee hours by the average compensation. These programs also have a big impact on employee engagement. For every three people who think these programs add value and are important, there are seven who think they are a waste of time and resent having to take time away from job tasks to participate. For some employees, these programs are interesting, and participation gives them hope that the organization is really becoming a better place to work and one that produces better results. For others, just the opposite feelings occur. If successful, these programs can actually improve many aspects of performance: cycle times will improve, costs will go down, productivity will increase, customer satisfaction will go up, profits will increase, and your stock value might actually go up. A big reason you need this metric is to see whether any of these promised results actually occur, along with measures of success along the way.
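The hours-times-compensation arithmetic above can be sketched as a quick back-of-the-envelope calculation; the head count, hours, and compensation figures below are hypothetical, for illustration only.

```python
# Rough disruption cost from distraction-index data (Chapter 19).
# All figures below are hypothetical.
employees = 500                      # staff pulled into program activities
hours_per_week = 4                   # avg hours/week from the distraction index
avg_hourly_compensation = 55.0       # fully loaded hourly rate, in dollars

weekly_cost = employees * hours_per_week * avg_hourly_compensation
annual_cost = weekly_cost * 50       # assume roughly 50 working weeks per year
print(f"${weekly_cost:,.0f}/week, ${annual_cost:,.0f}/year")
```

Even with these modest assumptions, the disruption cost dwarfs many consulting invoices, which is exactly why it belongs in the metric.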

Another big cost of management programs is a loss of senior management credibility. If people view many of these programs as being a waste of time, and if time reveals that results do not improve, the judgment of senior management may be questioned, with people wondering how they could have fallen for these snake-oil-salesman consultants. The risks are high for all of these programs, but so are the potential rewards.

COST AND EFFORT TO MEASURE

The effort needed to track the enterprise excellence index is minimal compared to the cost of the programs themselves, but the overall cost of measurement falls in the medium range. You will need to agree on an overall approach for all programs, and then decide how to assign weights and track the performance of each individual program. You may enlist the consultants who are helping you develop and implement the program, but that might be like hiring the fox to watch the henhouse. A good place to start with outcome metrics is the presentation that sold you on the program in the first place. Chances are that PowerPoint deck is full of promised results and examples of results achieved by others. One consulting firm just made a pitch to one of my clients, offering a money-back guarantee if its program did not result in at least a 25 percent increase in customer satisfaction. Of course, the program will cost $200,000 the first year, and there are some pretty stringent requirements that must be met in order to invoke the guarantee. There is also no mention of longer-term impact. If the consultant is promising cost savings, that should be one of the metrics; if the consultant is promising that customer or patient satisfaction will improve, that should be one of the metrics.

Coming up with good outcome metrics should not be hard. The more challenging measures will be the input and process measures. You may also face a political challenge in trying to show hard outcomes linked to programs that are softer and more philosophical in nature, like the learning organization (Peter Senge’s ideas) or teaching everyone the habits of successful people. When I suggested to a military client that we should try to measure the impact of the millions of dollars it spent teaching everyone these seven magic habits of success, I was met with rejection: “We just have to accept on faith that this program will make us better.”

HOW DO I MEASURE IT?

Your first task is to identify the initiatives that are currently being implemented in your organization and establish a weight for each one, depending on its importance and the resources needed for implementation. One aerospace organization had two major initiatives and two minor ones being implemented, so the company set the weights as follows:

Example Enterprise Excellence Analytic (Aerospace Company)

[Table: percentage weights for the four initiatives, with heavy weights on Lean Six Sigma and knowledge management and light weights on activity-based management and ISO.]

Activity-based management and ISO were both initiatives that had been going on for a number of years, and the company was more in the maintenance mode with these programs. However, major resources were being dumped into both Lean Six Sigma (a combination of Lean and Six Sigma) and knowledge management, so it was important to put a high weight on those. Those two initiatives were also deemed important because they were of interest to senior management due to the potential bottom-line payoff.

A hospital wanted a metric on the CEO’s scorecard that told him how well the various initiatives were going. The hospital had a vision of winning a Baldrige Award and achieving Magnet status, and it was also working to maintain Joint Commission certification, which is now done by surprise audit. There was a great deal of overlap in the standards and results assessed by all three of these initiatives, but the CEO wanted to look at them separately and collectively, so this was the type of situation where an analytic metric makes perfect sense. The enterprise excellence analytic we constructed looked like the one in the following table. The process of getting the organization aligned with the Baldrige criteria is given the most weight because the Baldrige model is very comprehensive and covers all hospital processes and results. Improving alignment with Baldrige will also help meet the standards for the Joint Commission and Nursing Magnet.

Example Enterprise Excellence Analytic (Hospital)

Joint Commission    25%
Baldrige            50%
Nursing Magnet      25%

Once you have identified the improvement initiatives and assigned a weight to each one based on its importance, the next task is to figure out how to measure whether the programs are successful. As with any process, there are four categories of metrics that should go into the enterprise excellence analytic: inputs, processes, outputs, and outcomes. Some examples of each of these four types of measures for currently popular improvement initiatives are shown in the next section.
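Rolled up as a weighted average, the analytic might look like the following sketch, which uses the weights from the hospital example; the 0–100 scores for each initiative are hypothetical, for illustration only.

```python
# Weighted enterprise excellence analytic, using the hospital example's
# weights. The 0-100 initiative scores are hypothetical.
weights = {"Joint Commission": 0.25, "Baldrige": 0.50, "Nursing Magnet": 0.25}
scores = {"Joint Commission": 82.0, "Baldrige": 64.0, "Nursing Magnet": 75.0}

def excellence_index(weights, scores):
    """Weighted average of initiative scores; weights must total 100%."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must total 100%"
    return sum(w * scores[name] for name, w in weights.items())

index = excellence_index(weights, scores)   # 0.25*82 + 0.50*64 + 0.25*75
```

Each initiative’s score would itself be a rollup of its input, process, output, and outcome submetrics, as described in the next section.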

KNOWLEDGE MANAGEMENT METRICS

A big challenge for many organizations today is to document important knowledge and pass it on to others so they can benefit from it. A long time ago this was a challenge as well, so we invented books, which remain the major way that most knowledge is documented and passed on. Today we also have electronic databases that provide access not only to books, but to papers, research, videos, and online diagnostics. A study by the American Productivity and Quality Center found that many companies implementing knowledge management (KM) systems have no way to measure their effectiveness. These companies point to traditional lagging measures like growth and profits, but it would be a real stretch to suggest that a company’s growth is mostly due to its knowledge management program. Organizations that are spending thousands or even millions on KM need a metric on executives’ scorecards that tells them whether the program is adding value. “How’s it going?” data is not sufficient for any effort this large and expensive.

An effective knowledge management analytic should be composed of four types of data:

1. Awareness or input measures. Employee awareness of what types of knowledge need to be documented, KM tools and system, benchmarking data from KM systems in other companies, forms and processes for documenting knowledge.
2. Behavior or process measures. Attendance at KM training, participation in KM activities such as committees or teams, making presentations, leading knowledge sharing sessions, creating KM databases, documenting knowledge, researching best practices, building a KM web site or database.
3. Output measures. Measures of the quality, accuracy, completeness, and timeliness of KM outputs like best practices documentation, decision-making aids, white papers, presentations, and training materials.
4. Outcome measures. Adoption of best practices by others, awards and recognition for KM system, impact of new knowledge on key outcome measures or organizational performance, such as new product sales, productivity, growth, profits, cost reduction, or quality improvement.

A navy client of mine searched for best practices in KM metrics and found that most of the activity measures that various companies tracked did not correlate with any meaningful outcomes. These companies were counting the number of databases built, web site hits on those databases, presentations made, and knowledge-sharing meetings held. There was lots of activity, but no real evidence that any of it improved company performance. The KM metric they were most impressed with was the approach used by Ford Motor Company, which was to measure only outputs and outcomes. Ford had been down the road of measuring activity and found that all this did was reward people for what could be wasteful effort. What Ford measures for its KM program is how many ideas developed in one part of the company are then adopted and implemented in other parts of the company. Ford also measures how the implementation of these approaches and ideas has paid off in bottom-line outcome measures.

Navy Carrier Team One loved the Ford approach, but it was concerned that such a metric was strictly rearview mirror, or lagging. This concern was even more important because the team was just getting started with KM, and it would take a while for outcomes to materialize. The KM analytic it came up with is shown next. You can see that the KM vitality index (or analytic) is made up of two tier-two metrics: KSN (knowledge sharing network) commitment (40 percent) and proven practice replication (60 percent). The proven practice replication metric is the lagging measure derived from Ford’s metric. The KSN commitment metric is the leading indicator of activity and looks at how engaged various parts of the organization are in knowledge management practices. A low level of engagement indicates an organization that sends a few low-level people to the knowledge-sharing meetings, where those individuals rarely contribute anything or complete any assignments given to them. A high level of engagement is shown when an organization not only sends a large number of talented, high-level people, but also takes on leadership roles and completes important assignments.

Example Knowledge Management Analytic (Navy Carrier Team One)

KSN Commitment                 40%
Proven Practice Replication    60%

Knowledge Sharing Network Vitality Analytic

Participation    25%
Engagement       75%
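The two-tier rollup in the tables above can be sketched in a few lines; the weights come from the tables, while the 0–100 submetric scores are hypothetical, for illustration only.

```python
# Two-tier rollup of the Navy Carrier Team One KM vitality index.
# Weights come from the tables above; the 0-100 scores are hypothetical.
participation, engagement = 60.0, 80.0     # KSN commitment submetrics
proven_practice_replication = 40.0         # lagging, Ford-style metric

# Tier-two leading indicator: 25% participation, 75% engagement.
ksn_commitment = 0.25 * participation + 0.75 * engagement

# Top-level analytic: 40% KSN commitment, 60% proven practice replication.
km_vitality = 0.40 * ksn_commitment + 0.60 * proven_practice_replication
```

Note how the structure mirrors the team’s intent: the leading KSN commitment score props up the index early on, while the heavier weight on proven practice replication keeps the lagging outcome dominant once results start to materialize.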

LEAN OR SIX SIGMA METRICS

Both Lean and Six Sigma have helped many organizations improve the efficiency of their work processes. Building a metric or analytic that tells senior management how well the programs are working is very similar to the approach I described for creating a KM metric. An analytic for any performance improvement initiative needs to include the four basic categories of metrics already mentioned: input, process, output, and outcome. A Six Sigma or Lean analytic should also evolve over time as your organization more fully deploys the program. Early on, the weight should fall on the input and process submetrics, later evolving toward a greater focus on outputs and outcomes. Some possible metrics to consider in each category are:

  • Input metrics. Number of people trained and level of training (e.g., green belt, black belt, etc.), processes identified for study and/or improvement, number of teams formed, direction and goals from senior management, resources received, process documentation, performance data.
  • Process metrics. Number of team meetings held, level of engagement of team members, processes documented, benchmarking studies completed, processes analyzed using proper approach, research and studies conducted using proper methods, use of systematic processes while doing projects, proper documentation of progress, involvement of process stakeholders and owners, knowledge sharing activities with other organizations.
  • Output metrics. Number of process maps created, research study quality and thoroughness, integrity of process data, sufficient trend data collected, thoroughness of analyses completed, milestones completed on time, budget performance on improvement projects, stakeholder feedback, presentations made, clarity and thoroughness of documentation on improved processes, linkages of Six Sigma or Lean initiatives to company goals and objectives.
  • Outcome metrics. Cost reductions, cycle time improvements, improvements in safety, waste or scrap reduction, quality or yield improvement, improved margins or profits, increases in employee satisfaction, awards and recognition for team projects, increased resources for Lean or Six Sigma efforts, deployment of Lean or Six Sigma in daily operation of the organization, increased support of effort by employees and management.

Six Sigma and Lean should both have a positive effect on many scorecard metrics in an organization. The test of whether you are measuring the right things in your Lean or Six Sigma analytic is the link to other scorecard metrics. If the Lean gauge is always green and the key measures of company performance like productivity and profitability are consistently red, you probably have the wrong metrics in your Lean or Six Sigma analytic. This metric ought to be a leading indicator or predictor of many measures of organizational success. On the other hand, if company profits increase, you can’t just point to your Six Sigma effort as the major reason for that increase.

This same sort of breakdown of four types of metrics could be used for an initiative such as a major new software program, a leadership development program, or any other initiative that costs a lot of money and is supposed to improve organizational performance.

FORMULA AND FREQUENCY

In order to create an enterprise excellence analytic, you have to make sure that senior leaders agree on what is meant by this term and get their views on what to include and not include in this metric. By following the steps outlined next, you can create a customized analytic that suits your own organization.

Step 1 is to create a list of the various enterprise excellence initiatives going on in the organization. A good place to start is to look at all the meetings, teams, and training programs that are going on to make sure you have a complete list of programs. Try to limit the list to the top two to six initiatives or programs that are of concern to senior management.
Step 2 is to assign a percentage weight to each one based on the following factors:
  • Cost
  • Number and percentage of employees involved
  • Time to implement
  • Potential return on investment (ROI)
  • Urgency
Step 3 is to develop input, process, output, and outcome metrics for each of the two to six initiatives that you want to include in this index. Try to develop four to six key factors that can be measured in each of the four categories and eliminate those metrics that can only be tracked once or twice a year.
Step 4 is to assign weights to each of the four categories of metrics based on the life-cycle phase the initiative is in. The following straw man shows how one client assigned weights for a metric to assess a knowledge management program.
If I were constructing a KM analytic for a company just getting started with the program, I would put 70 percent or more of the weight on the input and process measures, since outputs and outcomes will be few or none in the first year or two. As the company more fully deploys a comprehensive KM system, the weights of the individual metrics in the analytic should shift toward outputs and outcomes. The following example depicts the changing weights of the individual submetrics:
[Table: submetric weights shifting away from inputs and processes toward outputs and outcomes as the KM program matures.]
Step 5 is to assign individual percentage weights to each of the submetrics in each of the four dimensions. Weights should be based on validity of the metric, data integrity, and links of input and process measures to output and outcome measures.
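Steps 4 and 5 can be sketched as a weighting table keyed to life-cycle phase; the yearly weights below are illustrative assumptions in the spirit of the straw man above, not prescriptions.

```python
# Category weights that evolve with an initiative's life-cycle phase.
# These yearly weights are illustrative assumptions, not prescriptions.
LIFECYCLE_WEIGHTS = {
    "year 1": {"input": 0.40, "process": 0.35, "output": 0.15, "outcome": 0.10},
    "year 3": {"input": 0.15, "process": 0.25, "output": 0.30, "outcome": 0.30},
    "year 5": {"input": 0.05, "process": 0.15, "output": 0.30, "outcome": 0.50},
}

def program_score(phase, category_scores):
    """Weighted 0-100 score for one initiative in the given phase."""
    weights = LIFECYCLE_WEIGHTS[phase]
    return sum(weights[c] * category_scores[c] for c in weights)
```

With the same underlying category scores, an initiative rich in activity but thin on results will look green in year 1 and red by year 5, which is exactly the shift in scrutiny the evolving weights are meant to produce.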

VARIATIONS

One variation I have seen is simply to survey employees and ask them about the value and usefulness of these enterprise excellence programs. You don’t need to survey all of them; you could do a sample or even hold some focus groups to get more detailed data on what people think of these programs. Of course, the problem with this data is that it is just people’s opinions and often the programs that are interesting and easy get much higher ratings than those that are challenging but produce real results. Surveys also tend to produce the “emperor’s new clothes” syndrome, wherein people see value because that is what they have been taught to expect.

Another variation is to skip the input and process metrics, which have a high risk of being false success indicators, and just focus on the outputs and outcomes. This sort of data tends to have much more credibility with management, and ultimately these are the only real measures that matter. The big problem with this approach is that you may be several years and several million dollars into a program before finding out that it does not work.

TARGETS AND BENCHMARKS

Individualized targets need to be set for the enterprise excellence metrics you have selected. However, a good place to start when coming up with targets is the promises made by the vendor who sold you the program. They may promise 30 percent cost reduction, 300 percent ROI, 25 percent improvement in customer satisfaction, or other things. These are good targets to use for defining the “green” level of performance for your own outcome metrics. You might also get some comparative data from other organizations that have implemented similar programs to help develop targets for the input and process metrics.
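Using a vendor promise as the green threshold can be sketched as a simple status mapping; the 50-percent-of-promise cutoff for yellow is my assumption, not a rule from the text.

```python
# Map an outcome metric to red/yellow/green, using the vendor's promised
# improvement as the green threshold. The halfway cutoff for yellow is
# an assumption for illustration.
def status(actual, promised):
    if actual >= promised:
        return "green"            # vendor's promise met
    if actual >= 0.5 * promised:
        return "yellow"           # partial progress
    return "red"

# e.g., vendor promised a 25 percent gain in customer satisfaction
light = status(actual=0.18, promised=0.25)
```

Tying the green threshold directly to the sales pitch keeps the vendor accountable for the numbers that closed the deal.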

BENEFITS OF DATA

The benefits of having hard data on each of your enterprise excellence initiatives are huge. Evaluating these expensive and disruptive efforts by relying on anecdotal “How’s it going?” data or activity measures like teams and training sessions completed is the height of foolishness, yet quite common. The advantages of having valid measures of these programs are:

  • Being able to hold program vendors to promises made during the sales process.
  • Pointing out benefits to your board or other stakeholders.
  • Showing the effectiveness of the programs to employees, which should help garner their further support.
  • Stopping programs early on that are not producing anything besides increased costs and disruptions.
  • Having objective data on initiatives that can be reviewed each month to see if progress is being made.
  • Adjusting resources and priorities to focus more on programs that are producing results and less on those that are not.

NOTE

1. Robert Kriegel and David Brandt, Sacred Cows Make the Best Burgers: Developing Change-Ready People and Organizations (New York: Warner Books, 1996).
