Introduction: What Are Predictive Analytics?

You’ve probably heard or read something about analytics, which are a hot topic these days among finance and IT types. You are probably also a little confused about what analytics are and how they might be useful to your organization. The world of finance is characterized by tracking and studying individual statistics like sales revenue and costs, as well as ratios like assets to liabilities or overhead costs to total costs. While these are important statistics, analytics are more sophisticated numbers used to track and predict future performance in a business.

Organizations spend a lot of time and money measuring the past. Pretty much all traditional financial metrics (sales, profits, costs, adherence to budget, stock price, etc.) are measures of the past. Most nonfinancial metrics are also measures of good and bad things that have already happened: lost customers, accidents, new accounts landed, acquisitions completed or sold, employee turnover, overtime hours or costs, customer complaints, patients discharged, and so on. The good thing about measures of the past is that the data tends to have high integrity. In other words, there is no uncertainty when an employee quits, a customer closes her account, or we buy another company. Past-focused metrics tend to be based on reality and hard to deny.

LEARNING FROM PAST MISTAKES

The problem with this type of performance measurement is that it is too late to do anything about it. When I canceled DirecTV and signed up with Verizon Fios, it was too late for DirecTV. Sure, the company tried to lure me back with better prices than it had offered when I was a loyal customer for years, but this made me hate it even more. One thing is certain: DirecTV knows that it lost a customer. Data on past incidents like this can be useful to collect and analyze to try to prevent future problems. If we bought a company that ended up being a dog, we can analyze how the decision was made, and perhaps be more careful the next time an acquisition target comes onto our radar. If we lost a customer or employee, we can determine the reasons for those losses and try to improve next time. Learning from one’s mistakes can be very effective but also very expensive. Having a performance measurement system that includes only measures of the past ensures that an organization is always in a reactive mode, doing damage control when there are problems. The typical organizational scorecard includes 80 to 90 percent past-focused metrics. These organizations are always looking in the rearview mirror, tracking things that have already happened and trying to solve problems that have already occurred.

It is possible to combine a number of lagging metrics into an index to help predict future performance on another measure. Psychology teaches us that the best predictor of future behavior is past behavior. Therefore, a measure of how often you stayed in a Marriott hotel over the past 12 months might be a good predictive metric for future Marriott stays. This singular metric might become an even better predictor if it is combined with other past metrics like use of Marriott points for free stays, customer satisfaction, and Marriott dining experiences. Data might show that the customer most likely to stay in Marriotts in the future is one who has frequently stayed in Marriotts over the last year, eaten many meals there, filled out multiple surveys indicating satisfaction, and made use of Marriott Rewards points for free stays. This analytic is composed entirely of lagging metrics (number of stays, money spent on food, customer satisfaction levels, and free days and stays at Marriotts) and could be used to predict the likelihood of this customer continuing to spend money at Marriott properties the next year.

The problem with metrics like this is that they are never 100 percent accurate as predictors. Human beings are fickle creatures, and how we spend our money may change with time. I was very loyal to Starwood properties when I was doing a lot of work in St. Louis for a couple of years and spent many nights in the Westin there. However, when my contract in St. Louis ended, I found myself working in cities that did not have any Starwood hotels, so I started staying in Marriott properties. I was always really happy with Starwood, as I have been with Marriott, but my spending and loyalty are determined by the location of my clients, not my past experience with the brand.
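
As a rough illustration of how several lagging metrics can be rolled into a single predictive index, here is a minimal sketch in Python. The submetrics follow the Marriott example above, but the normalization ceilings and weights are hypothetical numbers invented for illustration, not figures from any real loyalty model.

```python
# Minimal sketch: combine lagging metrics into a 0-100 loyalty index.
# Ceilings and weights are hypothetical illustrations only.

def loyalty_index(stays_last_12_mo, dining_spend, satisfaction_score, points_redemptions):
    """Return a 0-100 composite built from normalized lagging metrics."""
    normalized = {
        "stays": min(stays_last_12_mo / 30, 1.0),         # 30+ nights = max score
        "dining": min(dining_spend / 2000, 1.0),          # $2,000+ on meals = max score
        "satisfaction": satisfaction_score / 10,          # survey on a 0-10 scale
        "redemptions": min(points_redemptions / 5, 1.0),  # 5+ free stays = max score
    }
    # Assumed importance of each lagging factor.
    weights = {"stays": 0.40, "dining": 0.20, "satisfaction": 0.25, "redemptions": 0.15}
    return 100 * sum(weights[k] * normalized[k] for k in weights)

print(f"{loyalty_index(22, 1500, 9, 3):.1f}")  # e.g., a frequent, satisfied guest
```

The caveat in the paragraph above still applies: a high index only says the customer has behaved loyally so far, not that circumstances will let the pattern continue.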

ORGANIZATIONAL CHOLESTEROL

The problem with measuring heart attacks is that a lot of people who have them don’t survive. It is clear that a person who has a heart attack has cardiac problems, but this is an expensive, painful, and perhaps deadly way to learn about a health problem. A better way to measure health is to track the predictive factors that lead to heart disease and manage those measures, rather than simply to count heart attacks. Most physicians today believe that the ratio of high-density lipoprotein (HDL), or good, cholesterol to low-density lipoprotein (LDL), or bad, cholesterol is a better predictive measure than the total of HDL plus LDL. In fact, people with HDL levels of about 60 milligrams per deciliter or higher have been found to have a very low risk of heart disease. Monitoring blood sugar may be a better way of predicting and preventing diabetes than simply counting the number of people who already have the disease. Tracking waist size, body mass index, and weight has proven to be a good predictor of future hip and knee problems. Huge advances have been made in community medicine that allow doctors, insurance companies, and the government to predict the likelihood that a population will experience health problems in the future.

I don’t see many organizational cholesterol metrics; instead, I see mostly heart attack metrics. The few predictive metrics I do see are most often flawed, not supported by research, or based on data that cannot be trusted. For example, surveys are most often used to try to predict customer and employee loyalty. People who complete surveys tend to be the ones who are really satisfied or really dissatisfied; if you are somewhere in the middle, you probably don’t bother filling out the survey.

USES OF PREDICTIVE ANALYTICS

Predictive analytics are being used by a wide variety of organizations to improve planning, decision making, and marketing. Some of the specific uses are detailed in the following sections.

Managing Risk and Preparing for Disasters

Insurance companies are supposed to be masters of predicting and preparing for risks, so it was shocking when insurance giant AIG had to ask for a bailout to avoid going under. Apparently even insurance companies can be surprised and caught unprepared to deal with economic or other types of crises. Of course, greed tends to be more powerful than fear, so companies often assume great risks because of the great potential for rewards. The banking crisis of the last few years in the United States and Europe has caused entire industries to rethink the way they measure and assess risk, and to reevaluate what levels of risk are appropriate to balance against other factors such as growth and profitability. Good analytics can provide better business intelligence about existing and projected future risks and an organization’s level of preparedness. With all the violent weather of the last few years, as well as acts of terrorism, it has become more and more important for organizations to get a reading on how well prepared they are for disasters. There are all sorts of risks besides bad weather that might go into a risk analytic. For an oil, pharmaceutical, or medical devices company, legal risks are a huge factor that must be considered. Judgments in recent years have exceeded $1 billion, which could cripple all but the largest of organizations.

Predicting Customer Loyalty

Waiting for a customer to leave is an expensive way to measure loyalty. Today’s most successful organizations have conducted extensive research to uncover the factors that impact consumer loyalty to their products, services, and brand. Asking customers to predict their future loyalty turns out to be a waste of time, because most people do not do a good job of predicting their future actions. We might predict we will be loyal Lexus customers until we see that new Jaguar, read the reviews, and trade the Lexus in for the new Jag. If I were Lexus, I would want to know what factors I could measure about a consumer’s relationship with a car and the company that might predict his or her loyalty. If Lexus found out that a certain segment of its customers considered sexy styling slightly more important than a car’s reliability, those customers might be more easily lured away by an attractive new body style. If it found that another segment of Lexus owners is more concerned with price and value, those consumers might be lured away by the new top-of-the-line Hyundai. Predictive analytics can be extremely useful not just in measuring future customer loyalty but in ensuring it. Many of the studies I have seen on predicting loyalty or likelihood of future purchases are based on survey questions asking people whether they are more likely to purchase a product or service in the future. Unless you are 100 percent dissatisfied with a product or service, a survey question like this is not likely to be a good predictor of future loyalty.

My wife and I went to a great Neapolitan pizza restaurant, Gjelina, in Venice, California, for my birthday, and the food, ambiance, and service were outstanding. I wrote a stellar review on Yelp, indicating that this was the best Neapolitan pizza I had eaten outside of Pizzeria Bianco in Phoenix (rated number one in the United States) and Keste, which is rated number one in New York City. Sadly, we have not been back in nine months. Getting a reservation takes about 30 days, and driving to Venice is a pain. In this case, satisfaction did not predict my future visits and spending, even though I was 100 percent satisfied. It seems that dissatisfaction is a better predictor that you will not buy a product or service again than satisfaction is a predictor that you will. My wife and I went to another gourmet pizza place in Los Angeles that was rated as the best new restaurant by Los Angeles magazine, and we hated it. We never went back. The food, service, wine, and atmosphere were all bad. In this case, dissatisfaction was a clear and accurate predictor of a lack of loyalty and future spending.

Suggestive Selling

Analytics are used to customize recommendations to customers, increasing sales. Music sellers like the iTunes Store suggest other artists and songs you might like based on your most recent purchase. Amazon does the same thing, and their recommendations have helped me find a few new authors whose books I like. These recommendations also help Amazon sell more books. The analytics used to do this suggestive selling are fairly simplistic, like counting beats per minute for songs, but companies are becoming more sophisticated at combining a number of individual factors into more comprehensive analytics that are more accurate. If a web site recommends a book or song that you end up hating, you will probably not trust these recommendations in the future. However, if every recommendation is right on and you love the song, book, or whatever is recommended, you are more likely to accept this advice in the future for purchases. Thus a good analytic might comprise whether I purchased a recommended book or song, my history of buying things recommended in the past, and my review of the recommended purchase.

Attracting and Keeping Talented Employees

Most of the biggest organizations today spend a lot more on payroll than they do on machinery, suppliers, and raw materials. In a company like Apple, manufacturing is done by outside vendors, but key processes like research and development (R&D), marketing, and retail sales are handled by Apple’s own army of geniuses. Some forward-thinking organizations have developed predictive analytics that help them assess the future performance of new employees, as well as the likelihood that they will stay with the company. A consumer products company I worked with has dramatically improved the caliber of its new hires and reduced recruiting costs by using predictive analytics to score job applicants. That same organization is also able to predict the likelihood of turnover for certain categories of employees, and for individuals, based on predictive analytics. Google has found that eight key management behaviors correlate with managers’ success and high levels of employee engagement. The eight behaviors are anything but a surprise (e.g., “provides coaching” and “empowers versus micromanages”), but now Google has some pretty strong evidence that certain behaviors are important for managers to exhibit on a regular basis. These same data can be used to assess the performance of managers, which Google does every six months.

Some authors suggest that employee engagement depends on the person, not the job or work environment. Their research suggests that some people are likely to be engaged employees and some are not, so careful selection is the key to having an engaged workforce. While I agree that there is some truth to this, I also think engagement has a lot more to do with the job and work environment. I can recall many organizations, like ARCO and Alcoa, that had big groups of highly engaged engineers. They loved their jobs and had very high levels of engagement until they became managers or supervisors. Engagement levels dropped almost immediately when they had to deal with people issues and could no longer work on the cool engineering projects.

Targeted Marketing

Predictive analytics can be used to segment the customers of a product or service and develop a targeted marketing pitch tailored to their interests and likelihood of purchase. A mail order catalog might buy the mailing list of a similar catalog and use that list to mail its own catalog. The use of analytics to map out tastes and consumer preferences can help ensure that your marketing dollars are not wasted on mass e-mails or catalogs sent to thousands of people who will mostly delete them or throw them away. Analytics can also be used to help identify influential people to market your product or service to, in the hope that they will promote it to others. Virgin America gave free round-trip airline tickets from Toronto to Los Angeles or several other cities to a couple hundred influential individuals who were identified using an analytic called a Klout score.

Product Differentiation

R&D groups in many big pharma firms today must “sell” their new products to business units, thus ensuring that the research function is closely aligned with the needs of consumers and the business. One firm I worked with has an analytic that looks at drug differentiation. Each new drug being proposed gets a differentiation score based on its unique properties and on how it is different from and better than competing drugs. Technology companies use the same types of analytics to determine whether a new product is going to be a real game changer, as the iPhone and iPad were when they were released. These differentiation indices can help predict the sales and overall success or failure of a new product, and they rely on having good intelligence about existing and planned competitor offerings. The more dimensions on which your product or service is unique, the higher the score, and the more likely the company is to invest money in its design and marketing.

ANALYTICS VERSUS FORMULAS VERSUS SINGULAR METRICS

The easiest way to measure anything is just to count something. We can count dollars in sales, number of accidents, heartbeats, weight in pounds or kilograms, goals in soccer, new accounts, completed projects, and all sorts of other things in our personal or work lives. Counting metrics are preferred whenever they suffice because they are simple and objective. Keeping score in both bowling and golf simply involves counting, whether it’s the total number of pins knocked down or the number of times you hit the ball before completing the course. Simple, right? Most sports are based on counting metrics, with some judgment for variables like balls, strikes, or fouls. Most metrics in business, government, and health care are also based on counting: number of patients seen, flights completed, hotel rooms booked, days without an accident, transactions completed, products shipped, products sold, money in costs or profits, billable hours, or reports developed. When a single count tells you what you need to know, measuring performance with singular metrics is the preferred approach. The problem is that most of what is easy to count may not matter much. In the words of Albert Einstein:

Not everything that counts can be counted, and not everything that can be counted counts.

The second type of metric is a formula. We see all kinds of formulas used in business and nonprofits. Most are X/Y comparisons, such as assets/liabilities or sales/costs. Ratios of all sorts are found in financial statements as well as in measures of human resources (percent turnover, average performance appraisal scores, etc.), IT (average time to resolve trouble calls, percent of milestones met on projects, etc.), customer service (percent on-time deliveries, percent returns, percent repeat business, average survey score, etc.), and operations (revenue per room, average restaurant check, inventory turns, etc.). Every industry has its own unique metrics that are usually some type of formula. Airlines track seat miles, banks track share of wallet, and hospitals track infection and mortality rates. Formula metrics are easy to compute and well understood by most employees. If you can develop some key ratios like these to track every day or week, it is easy to monitor performance. If your entire suite of performance measures consisted of singular and formula metrics, measuring and managing performance would be easy, and there would be no need for expensive business intelligence software. The problem with singular and formula metrics is that most of them are measures of the past, and they rarely provide answers to important business questions on their own.

An index or analytic metric allows you to combine unlike units of measurement on a single dimension of performance. A predictive analytic metric we all know something about is our credit or FICO score. Lenders calculate a score from 300 to 850 points based on a wide variety of counting and formula metrics, such as the ratio of income to monthly expenses, number of late payments, amount of available credit, monthly expenses, and net worth versus liabilities, among other factors. Each of these variables is given a percentage weight based on its importance in determining someone’s creditworthiness.
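
As a rough illustration of how weighted factors can be combined and scaled into a credit-style score, here is a minimal sketch in Python. The factor names, ratings, and weights are assumptions for illustration only; the actual FICO formula is proprietary and considerably more complex.

```python
# Sketch: scale a set of weighted factor ratings into a credit-style score.
# Factors, weights, and scoring rules are hypothetical, not the real FICO model.

FACTOR_WEIGHTS = {
    "payment_history":    0.35,
    "credit_utilization": 0.30,
    "length_of_history":  0.15,
    "new_credit":         0.10,
    "credit_mix":         0.10,
}

def credit_style_score(factor_ratings, lo=300, hi=850):
    """factor_ratings: dict of factor name -> 0.0-1.0 rating (1.0 is best)."""
    weighted = sum(FACTOR_WEIGHTS[name] * factor_ratings[name] for name in FACTOR_WEIGHTS)
    return round(lo + weighted * (hi - lo))  # map the 0-1 composite onto the score range

print(credit_style_score({
    "payment_history": 0.95, "credit_utilization": 0.60,
    "length_of_history": 0.70, "new_credit": 0.80, "credit_mix": 0.75,
}))
```

The design point is the same one the paragraph makes: no single factor decides the score; each contributes in proportion to its assumed importance.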

The FICO score is believed to be a better predictor of creditworthiness than any singular or formula metric by itself. The value of such a composite number is that lenders don’t need to review 30 to 40 individual numbers or ratios to determine a person’s creditworthiness. Your car dealer can access your credit score online and provide an instant decision on whether to give you a car loan and what interest rate you will pay. Savvy consumers have learned that canceling a credit card with a zero balance and a $5,000 limit makes their credit score go down rather than up. This seems a little counterintuitive, but available credit is one of the variables that goes into the FICO score. The challenge with analytic metrics is that they are harder to understand and usually require software for data analysis. A navy shipyard I worked with spent several years getting managers to use and understand a scorecard populated with analytic metrics, only to have the leadership change and the new commanding officer go back to tracking a few singular and formula metrics like overtime hours and milestones met on maintenance projects.

TIME PERSPECTIVES

Regardless of whether your metrics are analytics, formulas, or singular counts, they can represent measures of either the past (lagging indicators) or the future (leading indicators). All measures are actually measures of the past or present, but a leading indicator is usually something you would not care about by itself. You only care about cholesterol because it predicts heart disease. Blood pressure is a similar leading or future-focused metric. There is no pain with high blood pressure, but this measure predicts other health problems like strokes and heart attacks. Lagging metrics or past-focused measures tend to be water under the bridge. In other words, nothing can be done about them now. The employee already quit, the budget is spent, or the deadline was missed. Lagging metrics are good or bad things that have already occurred. The only way this data is useful is for avoiding future problems or repeating past successes. Leading indicators, on the other hand, are very useful for tracking and managing aspects of performance linked to success, or for identifying minor problems so that they can be addressed before they become severe. Detecting a slight level of dissatisfaction from a customer is much more useful than waiting until the customer is so angry that they give their business to one of your competitors.

A scorecard or collection of performance metrics for an organization should ideally consist of about 75 percent leading and 25 percent lagging metrics, and most of the metrics for executives should be analytics that drill down through many layers of detail and individual submetrics. Data is stacked in layers like a pyramid. Top-level analytics at the peak of each pyramid indicate “red,” “yellow,” or “green” performance. Further analysis and drill-downs are necessary only when top-level measures show yellow or red performance. Supplementing your traditional lagging financial and operational metrics with some good predictive analytics can go a long way toward making an organization more agile and reducing the number of surprises it encounters.
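
A minimal sketch of the pyramid idea, assuming made-up submetrics, weights, and red/yellow/green thresholds: the submetrics roll up to a single top-level score, and the detail is displayed only when that score is not green.

```python
# Sketch: roll submetrics up to one top-level status; drill down only on yellow/red.
# Submetric names, weights, and thresholds are hypothetical.

def status(score):
    return "green" if score >= 0.85 else "yellow" if score >= 0.70 else "red"

def roll_up(submetrics):
    """submetrics: dict of name -> (score 0-1, weight). Returns top score and status."""
    total_weight = sum(w for _, w in submetrics.values())
    top = sum(s * w for s, w in submetrics.values()) / total_weight
    return top, status(top)

submetrics = {
    "customer_complaints": (0.90, 0.3),
    "on_time_delivery":    (0.65, 0.5),  # a leading indicator trending badly
    "employee_turnover":   (0.88, 0.2),
}

top_score, top_status = roll_up(submetrics)
print(f"Top-level analytic: {top_score:.2f} ({top_status})")
if top_status != "green":
    # Only now is the detail worth a reviewer's time.
    for name, (score, _) in submetrics.items():
        print(f"  {name}: {score:.2f} ({status(score)})")
```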

PAST, PRESENT, AND FUTURE

Another approach that is even more comprehensive is to include past, present, and future metrics in a single analytic. For example, a financial health index might include a past measure of revenue from last month, a present measure of the dollars and aging of accounts receivable, and a future measure of orders or proposals. A health index might include family history (past), current health statistics (present), and knowledge of nutrition and exercise techniques (future). The challenge in coming up with good future-focused metrics is ensuring they are correlated with the past and present success measures. You might find, for example, that knowledge of nutrition and exercise is not at all correlated with eating healthy food or engaging in regular exercise. In that case, knowledge of nutrition and exercise would be a false indicator that does not link to improved health. Most businesses don’t have the time, patience, or expertise to conduct controlled studies that identify links between predictive and lagging measures. Consequently, they often rely on the research of others or on anecdotal evidence and logic. Many strategy maps I’ve seen that are supposed to document causal relationships between leading metrics and lagging outcome metrics are nothing more than a nice diagram of assumptions and opinions.
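
One simple way to test whether a candidate future-focused metric deserves a place in the analytic is to check its correlation with the lagging outcome it is supposed to predict. The sketch below uses invented data for the nutrition-knowledge example; the specific numbers and the 0.5 cutoff are assumptions for illustration.

```python
# Sketch: screen a candidate leading indicator against the lagging outcome
# before adding it to an analytic. All data is made up for illustration.
from statistics import correlation  # available in Python 3.10+

knowledge_scores   = [55, 60, 72, 80, 65, 90, 85, 70]  # candidate leading measure
healthy_eating_idx = [44, 40, 46, 41, 45, 40, 42, 46]  # observed behavior (lagging)

r = correlation(knowledge_scores, healthy_eating_idx)
print(f"correlation = {r:.2f}")
if abs(r) < 0.5:  # an assumed cutoff; a real study would justify its threshold
    print("Weak link: drop or reweight this candidate leading indicator.")
```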

ANALYTICS ARE SUPERIOR TO INDIVIDUAL METRICS

The following are the top nine reasons analytics are superior to individual metrics:

1. Improve data integrity. Measuring a dimension of performance like risk, customer engagement, or financial health is complicated, and good performance is determined by a variety of individual factors. Rarely is an important dimension of organizational performance accurately assessed by looking at one or two individual measures. Ask any chief financial officer what the key measures of financial health are, and you are likely to hear a long list of variables that need to be measured. Ask your doctor what the two or three best indicators of your overall health are, and you are likely to get another long list. In order to accurately measure broad areas of performance, it is critical to include a number of different metrics in your analysis.
2. Minimize the total number of metrics reviewed. Research on balanced scorecards conducted by the American Productivity and Quality Center (APQC) suggests that no executive should regularly review more than 20 high-level metrics. The problem with having to review 50-plus charts every month is lack of focus. No one can keep track of that many variables, and the likelihood of missing important factors increases, as does the tendency to micromanage. If most of the executive-level metrics are analytics, executives can regularly scan 10 to 15 high-level gauges to see how the organization is performing and then drill down into the details when necessary.
3. Minimize cheating and game playing. Employees quickly figure out how to make performance look good on a few key metrics if that is the focus of their bosses. Real estate agents being measured on customer satisfaction hand out surveys only to satisfied clients. Car dealers offer incentives for good scores on J.D. Power surveys. Salespeople inflate sales projections to achieve arbitrary targets, and they sell customers things they don’t really need because they are measured and compensated on sales and margins. Cheating and manipulation are still possible with analytic metrics that are composed of four to six submeasures; they just become much more difficult. If the analytic includes both leading and lagging metrics, it is also harder to cheat, because different strategies are likely to be required for improving each of the individual measures. Not revealing the exact formula for computing the analytic (as with a FICO score) further minimizes deliberate manipulation or cheating to make performance look good.
4. Keep management focused on the big picture. One of the big problems with leaders who review 50 to 100 charts every month is that they tend to get down in the weeds too much. I’ve sat in countless monthly review meetings where leaders spend way too much time looking at detailed charts on measures that are way down on the hierarchy of important things executives need to track. Meeting time runs out before they get a chance to review major dimensions of performance because too much time has been spent discussing lower-level measures and micromanaging the details of how to improve performance. Part of the job of leaders is to detect and solve small problems before they become bigger problems, but most of the time they should be managing the forest rather than analyzing the leaves on the trees.
5. Improve forecasting and projections. Most scorecards I’ve reviewed contain individual or counting metrics that are measures of the past. While it is important to learn from the past, it is more important to predict and prepare for future events. Predictive analytics allow organizations to detect minor past problems and correct them so that big future problems are prevented. Waiting for a customer to leave or an employee to quit is an expensive way to learn about performance problems. Predictive analytics allow organizations to more accurately forecast the future behavior of markets, customers, competitors, and employees. If banks and mortgage companies had better risk analytics, they might have predicted the banking crisis of recent years.
6. Avoid wasting time on measures that show good performance. When data is stacked in layers that roll up to high-level analytics, it is unnecessary to review the graphs and tables showing good performance. Data is reviewed in hierarchical fashion from top to bottom, drilling down only when a top-level analytic shows there are problems with lower-level individual measures. Many clients have found that monthly review meetings take less than half the time they used to once analytic metrics are in place, and more time is spent analyzing and solving problems than listening to presenters drone on with hundreds of PowerPoint charts or unreadable spreadsheets. By focusing the meeting on areas of performance needing improvement, the emphasis is on diagnosing and improving performance rather than on a “show and tell.”
7. Focus employees on a few key metrics. Employees in an aircraft maintenance and overhaul company get daily feedback on a few key measures, such as a project management index, which balances cost, quality, and schedule with differential weights depending on customer priorities, and billable hours, which is an individual metric. FedEx and JetBlue both focus employees on three key dimensions of performance:
1. People—employees and other members of the workforce
2. Service—customers
3. Profit—shareholders who care about financial performance
Having a few key analytic metrics can make it easy for employees to track how they are doing, and having three or four ensures the proper balance in focusing on different stakeholders and dimensions of success. In fact, FedEx takes this “People-Service-Profit” model all the way from the hourly workforce up to the CEO. Of course, the CEO monitors a lot more than three metrics, but they all fall into these three categories.
8. Review past, present, and future perspectives in a single metric. Some of the best analytics include a mixture of leading or predictive metrics (e.g., diet, exercise, stress), present-focused metrics (current weight, blood pressure, body mass index [BMI], HDL/LDL cholesterol), and lagging metrics (genetic factors, previous health problems). The best way to measure any dimension of performance is with an analytic metric that incorporates all three time perspectives. By combining a number of indicators into a single analytic, it is possible to get a more holistic view of performance. The performance of any dimension is determined by looking at both past performance indicators and predictive indicators. Leading indicators are the most useful for predicting and managing performance, but lagging indicators tend to have the highest data integrity, so a combined view of both tends to be the most useful.
9. Find correlations between leading and lagging measures. It all starts with a hypothesis that some variable or factor is predictive of an important outcome. Someone suggests that engaged employees tend to predict higher levels of customer satisfaction. Or someone suggests that lowering admission standards on GPAs will lead to more revenue from tuition. Finding links between two individual variables with singular metrics is fairly simple research, but rarely is one factor completely predictive of a key outcome. In spite of how strong the link is between high levels of LDL cholesterol and heart disease, this factor on its own is not enough to be a good predictor. In fact, a recent article I read suggests that there are 17 key metrics that have been found to be at least somewhat predictive of heart disease, with some better predictors than others. If one were to construct an analytic metric composed of these 17 metrics, weighted based on their importance, it would likely be a very accurate predictive measure of the likelihood of someone getting heart disease. Further research could then be done to determine whether there is a stronger correlation between this analytic metric and heart disease than between heart disease and any of the singular metrics that make up the analytic (see the sketch following this list). Understanding correlations like this enables organizations to fine-tune their analytic metrics to better predict organizational performance.
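
The sketch below illustrates item 9 with invented data: a handful of hypothetical predictors are combined into a weighted composite, and the composite’s correlation with the outcome is compared against each singular metric’s correlation. The predictor names, weights, and figures are assumptions, not results from any real study.

```python
# Sketch of item 9: does a weighted composite correlate with the outcome better
# than any single predictor? Predictors, weights, and data are invented.
from statistics import correlation  # available in Python 3.10+

predictors = {
    # name: (assumed weight, readings for one hypothetical population segment)
    "ldl":            (0.5, [160, 150, 170, 140, 180, 155, 165, 145]),
    "blood_pressure": (0.3, [135, 128, 142, 120, 150, 130, 138, 125]),
    "bmi":            (0.2, [31, 29, 33, 27, 34, 30, 32, 28]),
}
outcome = [22, 18, 25, 14, 29, 19, 23, 16]  # hypothetical heart-disease risk score

n = len(outcome)
composite = [sum(w * series[i] for w, series in predictors.values()) for i in range(n)]

for name, (_, series) in predictors.items():
    print(f"{name:>14}: r = {correlation(series, outcome):.2f}")
print(f"{'composite':>14}: r = {correlation(composite, outcome):.2f}")
```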

MYTHS AND FACTS ABOUT ANALYTICS

Here are some common myths and facts about analytics:

Myth: Analytics hide important facts about organizational performance by only providing a summary of overall performance.

Fact: The best business intelligence and data visualization software includes an overall view of performance showing red, yellow, or green; a trend line; and a warning light that lets the viewer know when one of the subsidiary metrics is yellow or red, or is trending in the wrong direction. With a few simple keystrokes, reviewers can drill into the data to see minor problems and diagnose their causes. The fact is that an analytics-based scorecard like this makes it more likely that minor problems will be detected, since the data reporting screens alert viewers when they need to drill deeper into the data that makes up the high-level analytic. Without the warning-light feature, one would need to drill down into several layers of subsidiary metrics to make sure that the overall analytic measure is not hiding something.

Myth: You need to learn and memorize complicated formulas in order to use analytic metrics to review and manage organizational performance.

Fact: Analytics we use in everyday life, like our FICO or credit score, help us manage our finances and predict the likelihood that we will obtain credit, yet no one I know understands the formula for computing a FICO score. I use the “real age” number I get from the RealAge.com analytic to monitor and manage my health without having to consult my doctor or understand the exact formula used to compute my “real age” versus my chronological age. People need to understand the basic factors that make up an analytic, but they certainly don’t need to know the exact formula.

Myth: Your workforce needs to be highly educated to be able to understand analytic metrics.

Fact: Many of my clients have mostly analytic metrics on scorecards for teams of employees, and they monitor them daily. Many of these organizations have large groups of employees with less than a high school education, and they understand the measures and what they mean. Younger Brothers Construction builds components for houses and buildings, and has a scorecard that is posted daily for employees that includes many indices such as a safety index, quality index, and productivity index. Employees understand how their job performance makes each of the gauges move, and most like getting daily feedback on how their teams performed. Another client has hundreds of employees in a call center handling insurance claims, and workers have no problem understanding analytics that look at customer service and operational efficiency. You don’t need a staff of PhDs or engineers to make use of analytic metrics, nor do you need to understand the formula for computing the analytics for the data to be useful information.

Myth: It is better to keep searching for the ideal individual statistic that provides the best measurement of a performance dimension than to combine a bunch of measures into a summary index, watering down the meaning of the metric.

Fact: If there were some magic statistic we could track that would tell us everything we need to know about the health of ourselves or our organizations, that would be the best choice, as opposed to developing complicated analytic metrics. However, every metric that was thought to be the holy grail of performance measurement has turned out to be not quite as revealing or predictive as we initially thought. C-reactive protein is a substance measured in your blood that indicates the amount of inflammation in your body, which is a predictor of all sorts of diseases and health problems. This measure has fallen out of favor with many doctors, since inflammation is only one of many factors that need to be measured to assess health. Total cholesterol was once thought to be the best number for assessing a person’s likelihood of getting heart disease. Once again, it turns out that other factors need to be included in the equation, such as BMI, glucose, blood pressure, and the HDL/LDL ratio. In business, Net Promoter Score (NPS) was hawked as the one magic number organizations could track that would measure customer satisfaction and predict customer loyalty. But many firms have moved away from tracking NPS as their only measure of customer satisfaction, because the majority of people don’t fill out surveys (even if they are one question long), and sometimes extremely satisfied customers are not loyal. The bottom line is that running an organization, or even managing your own health, is extremely complicated, and it is unlikely that you can monitor and manage performance by tracking a few simple statistics. Hence there are only two choices: monitor and track hundreds of individual measures, or try to roll them up into a dozen or so high-level analytics. The latter is the only reasonable choice.

Myth: Analytics are not sensitive enough to move much because all the different subsidiary measures tend to cancel one another out as one improves and another gets worse.

Fact: This is a valid concern that is commonly experienced with analytic metrics. Pay off your $12,000 Visa bill for the first time in two years and get rid of your car lease, and your overall credit score barely moves. Take a daily aspirin and your “real age” might drop by two years, whereas diligent exercise and a healthy diet might reduce it by only five. The secret to constructing a good analytic is to make it sensitive enough to move up or down with changes in highly weighted variables while moving only a little with changes in measures of lesser importance. Tuning the weights of the individual metrics in an analytic often requires some research up front and quite a bit of trial and error. Periodic studies help you adjust weights and add and delete metrics as appropriate. The bad news about performance measurement is that measures need continual evaluation and improvement. New variables and techniques for collecting data are being developed all the time, so it is important to keep your metrics current with the latest discoveries and research. Software can help with the data analysis behind analytics. Business intelligence programs can be set to trigger an alert when there is a change in the level or trend of any of the submeasures in an analytic. Actuate has had this feature in its software for more than 10 years, and SAS software also includes warnings for changes in the subsidiary measures that make up an index or analytic.
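
Here is a minimal sketch of the kind of warning described above, assuming hypothetical submeasures and a made-up tolerance: flag any submeasure whose latest value departs sharply from its recent average so reviewers know to drill down. It is not a depiction of any particular vendor’s feature.

```python
# Sketch: warning-light check on the submeasures behind an analytic.
# Submeasure names, history, and the 10 percent tolerance are hypothetical.

def needs_alert(history, latest, tolerance=0.10):
    """Flag if the latest value deviates from the trailing average by more than tolerance."""
    baseline = sum(history) / len(history)
    return abs(latest - baseline) / baseline > tolerance

submeasures = {
    "on_time_delivery": ([0.94, 0.95, 0.93, 0.96], 0.82),  # sudden drop
    "billable_hours":   ([1480, 1510, 1495, 1505], 1500),  # steady
}

for name, (history, latest) in submeasures.items():
    if needs_alert(history, latest):
        print(f"ALERT: {name} moved from ~{sum(history) / len(history):.2f} to {latest}")
```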

ANALYTICS USED FOR STUDIES VERSUS ONGOING PERFORMANCE MEASUREMENT

A health care client learned through voice-of-the-customer research that men and women want different things from their health care providers. One of men’s biggest concerns was speed. Men do not want to go to the doctor in the first place, and their number one priority is getting in and out of there as quickly as possible with a prescription or some other solution to their problem. Women, on the other hand, care less about how fast they get in and out; they care about whether they have time to describe all their symptoms to the doctor, understand the doctor’s explanations, and get answers to all their questions about side effects, treatment options, and so on. This study might lead to the development of different cycle-time standards for male and female patients that could be monitored easily without an analytic. The same organization might do another study to investigate the hypothesis that adhering to these cycle-time standards correlates with higher levels of patient satisfaction. If this proves to be a valid hypothesis, the information might be helpful in constructing a patient satisfaction index that includes both a survey and a measure of total cycle time, with different standards for male and female patients.
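
If the hypothesis held up, the resulting index could be computed along these lines. This is a minimal sketch with hypothetical standards, weights, and visit data, not the client’s actual formula.

```python
# Sketch: patient satisfaction index blending survey scores with cycle-time
# compliance against different standards for male and female patients.
# Standards, weights, and visit data are hypothetical.

CYCLE_TIME_STANDARD_MIN = {"male": 30, "female": 45}  # assumed targets in minutes

def visit_score(sex, cycle_time_min, survey_score_0_to_10):
    """Return a 0-100 index for one visit: 40% cycle-time compliance, 60% survey."""
    standard = CYCLE_TIME_STANDARD_MIN[sex]
    compliance = min(standard / cycle_time_min, 1.0)  # 1.0 if at or under the standard
    return 100 * (0.4 * compliance + 0.6 * survey_score_0_to_10 / 10)

visits = [("male", 25, 9), ("male", 55, 6), ("female", 40, 8), ("female", 70, 7)]
index = sum(visit_score(*v) for v in visits) / len(visits)
print(f"Patient satisfaction index: {index:.1f}")
```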

Studies are very useful for testing hypotheses and finding links between one factor and another. Studies sometimes result in a change to a product or service that does not need to be monitored over time; you just make the change once. For example, when airlines found out that leg and seat room did impact loyalty and repeat purchases, airlines like American and United created sections of the plane with more leg room and slightly larger seats. They don’t have to continue to monitor seat size and leg room; they just change the seats on the planes and they are done. What they do need to monitor is whether the additional room still links to loyalty. Southwest Airlines just announced that it is going in the opposite direction by reducing seat size, pitch, and leg room, and EasyJet and similar airlines in Europe are talking about making everyone stand on short flights, just as passengers do on a subway. The point is, when your research indicates that certain variables will impact business performance, you change your product or service and closely monitor whether the desired outcomes really occur.

Studies of links between two variables might also lead to the creation of new measures and/or targets to continuously monitor performance. For example, a hospital that won the coveted Malcolm Baldrige National Quality Award found that a key phrase spoken by patient care personnel had a big impact on patient satisfaction: “Is there anything I can do today to make you more comfortable?” Not surprisingly, most patients had no problem responding to this question: “Yeah, either get me out of here or quit waking me up every hour to do something painful to me.” The study resulted in training for patient care personnel, but it also required continual monitoring to make sure that staff ask the question of every patient every day. Getting an employee to engage consistently in a new behavior usually requires much more than training and is certainly a lot harder than installing new seats in an airplane.

Most of the articles I read about analytics are more about using them for studies to do things like predict customer buying behavior. While these studies are really useful and important, so is ongoing monitoring of analytic measures to ensure compliance with the new process or change that leads to a positive outcome. Too many organizations that conduct these sophisticated analytic studies still rely on simplistic singular metrics to evaluate ongoing performance. Analytics are useful for both purposes. This book is about 20 different analytic metrics that are useful for ongoing monitoring of performance versus simply conducting periodic studies. Many of these metrics will not be important or appropriate for your organization, and I certainly don’t recommend using them all. A good scorecard will include a mix of a few past-focused singular metrics like sales, profits, customers served, and so on, combined with a number of ratios or percentages and four to six good analytics or index metrics.

Some of the metrics in this book may not go on your chief executive officer’s (CEO) scorecard but may find their way to other people’s scorecards. An ongoing performance measure that is tracked weekly with complicated analytic metrics might only be tracked monthly and with simpler metrics for the CEO. For example, a human resources (HR) vice president I worked with had a comprehensive analytic that looked at the quality of new hires. The CEO just wanted to track the percent of time he hired his first choice, which was one of the submetrics in the new hire analytic. The more information you need about a particular performance dimension, the more comprehensive the metrics tend to be.
