INTRODUCTION

FOUR ERAS IN TEN YEARS

A REVOLUTION IN ANALYTICS

The world of extracting insights from data was relatively stable for its first thirty years or so. There were certainly technological advances, but the act of creating a bar chart or running a regression analysis didn’t change much. An analyst in 1977 submitted the analysis program and the data to a computer on a deck of punched paper cards; the analyst in 2005 submitted it from a keyboard. But the other details were pretty similar.

Since the turn of the millennium, however, the pace of change has accelerated markedly. If we call the way that business intelligence and analytics were practiced before 2007 “Analytics 1.0,” we’ve seen the advent of 2.0, 3.0, and 4.0 in the ten years since then—three massive changes in a decade in how analytics are undertaken within companies.

When we were researching and writing Competing on Analytics in 2005 and 2006, we were largely describing that earliest era and the companies that excelled at it (we’ll recap the idea of “Analytics 1.0” in a moment). The companies that competed on analytics then were largely making the best of those older approaches to managing data and turning it into something valuable.

There is a lesson here. Extracting value from information is not primarily a matter of how much data you have or what technologies you use to analyze it, though these can help. Instead, it’s how aggressively you exploit these resources and how much you use them to create new or better approaches to doing business. The star companies of Competing on Analytics didn’t always use the latest tools, but they were very good at building their strategies and business models around their analytical capabilities. They were run by executives who believed that facts are the best guide to decisions and actions. They made data and analytics an integral component of their cultures.

That said, if the external world of analytics changes, the best companies will change along with it. We haven’t checked to see whether all of the organizations we described in the original version of this book have evolved beyond Analytics 1.0, but we know many of them have. In this introduction, we’ll describe the new opportunities for exploiting data and revolutionizing business that have emerged over the last decade. And we’ll briefly describe the earlier eras—not for a history lesson, but to examine what we can learn from them.

Analytics 1.0 and Its Implications for Today

In the mid-2000s, when we wrote Competing on Analytics, the most sophisticated companies had mastered Analytics 1.0 and were beginning to think about the next stage. But many firms today are still solidly ensconced in a 1.0 environment. And even though there are more advanced analytical technologies and processes available, every organization still needs to do some 1.0 activities. So it’s worth understanding this era even if you have generally moved on.

Analytics 1.0 was (or is, if you’re still practicing it) heavy on descriptive analytics—reports and visuals explaining what happened in the past—and light on using analytics to predict the future (predictive analytics) or to make recommendations on how to do a job better (prescriptive analytics). While we’ve spent much of the past decade trying to encourage companies to move beyond descriptive analytics, they are still necessary; you need to know what has happened in your organization in the recent past and how that compares with the more distant past.

Sophisticated companies in 2017 still generate descriptive analytics, but they try to control their volume, and they try to get users (rather than analytical professionals) to create them. A new set of tools has emerged to make “self-service analytics” much easier, particularly for creating visual analytics. Of course, many analytics users have long employed spreadsheets as their primary analytical tool, and that’s still the case, despite issues around errors and the ease of creating “multiple versions of the truth” in spreadsheets.

One consistent problem throughout the eras has been data—getting it, cleaning it, putting it into databases for later access, and so on. As data proliferated over the past decades, a solution was needed in Analytics 1.0. The primary data storage solution developed and used during this period was the relational data warehouse. This was a big step forward from previous approaches to data storage, but it also brought substantial challenges. Getting data in through a process called extract, transform, and load (ETL) consumed a lot of time and resources. All data had to be structured in the same way (in rows and columns) before it could be stored. Eventually, data warehouses became so big and popular that it was difficult to know what resources were in them. And while the goal of the warehouse was to separate data for analysis from transactional systems, analytics became so important that some warehoused data was used in production applications.
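
For readers who have never watched ETL happen, here is a minimal sketch in Python of the pattern as we understand it; the file, table, and column names are hypothetical, but the extract-transform-load rhythm is the point: pull raw records out, force them into rows and columns, and load them into a relational store.

```python
import csv
import sqlite3

# Extract: read raw transaction records from a flat file (hypothetical layout).
with open("transactions.csv", newline="") as f:
    raw_rows = list(csv.DictReader(f))

# Transform: force everything into a consistent, structured shape --
# the warehouse only accepts clean rows and columns.
clean_rows = [
    (row["order_id"], row["customer_id"], float(row["amount"]), row["order_date"][:10])
    for row in raw_rows
    if row.get("amount")  # drop records missing a sale amount
]

# Load: append the structured rows into a relational warehouse table.
conn = sqlite3.connect("warehouse.db")
conn.execute(
    """CREATE TABLE IF NOT EXISTS sales
       (order_id TEXT, customer_id TEXT, amount REAL, order_date TEXT)"""
)
conn.executemany("INSERT INTO sales VALUES (?, ?, ?, ?)", clean_rows)
conn.commit()
conn.close()
```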

It was not just technology that caused problems in Analytics 1.0. The culture of this era was reactive and slow. One analytical expert who grew up in those days described her role as “order taker.” Managers would ask for some analysis of a problem they were facing, and an analyst would come back—often after a month or so of rounding up data and doing some form of quantitative analysis—with an answer. The manager might not understand the analytical methods used, and might not actually use the results in making a decision. But at least he or she looked like a data-driven executive.

One of the terms used to describe analytics during the Analytics 1.0 era was decision support. And the word support is appropriately weak. Analytics were used only to support internal decisions, and they were often ignored. Managers didn’t typically have a close relationship with quantitative analysts, who largely stayed in the back office. As a result, many decisions continued to be made on intuition and gut feel.

Despite these challenges, the companies we found that were competing on analytics in the mid-2000s were making the best of a difficult situation. They figured out where analytics could help improve their decision making and performance, and they produced analytics in spades. It may have been slower and more difficult than it should have been, but they were dedicated enough to make analytics work for them. Their efforts inspired us, and they inspired a lot of readers and listeners who encountered the “competing on analytics” idea. But out in Silicon Valley, the world was already beginning to change.

Analytics 2.0: Big Data Dawns in the Valley

Ten years or so ago in Silicon Valley, the leading firms in the online industry (Google, eBay, PayPal, LinkedIn, Yahoo!, and so forth) were moving beyond Analytics 1.0. They had adopted a new paradigm for data and analytics, based on the need to make sense of all the customer clickstream data they had generated. This data was voluminous, fast-moving, and fast-changing, and didn’t always come in rows and columns. In short, it was big data. The Analytics 2.0 era applies mostly to those pioneering firms. We don’t recommend that other types of organizations adopt their approaches directly, but there are many lessons that other companies can learn from Analytics 2.0.

To store, analyze, and act on all that data, the online firms needed some new technologies. So in 2006, Doug Cutting and Mike Cafarella created Hadoop, an open-source framework for storing large amounts of data across distributed servers. Hadoop doesn’t do analytics, but it can do minimal processing of data, and it’s an inexpensive and flexible way to store big data.

Hadoop became the core of a collection of oddly named open-source technologies for processing big data. Pig, Hive, Python, Spark, R, and a variety of other tools became the preferred (at least in Silicon Valley) way to store and analyze big data. The analytics that were created were typically not that sophisticated (a data scientist friend referred to this as the “big data equals small math” syndrome), but the flexibility and low cost of the technologies, and the application of analytics to less structured forms of data, were big steps forward. The open-source development and ownership of these technologies began a slow but significant shift that continues today. Proprietary analytics and data management tools are often combined with open-source tools in many applications.
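
To give a feel for what these tools look like in use, here is a minimal, hypothetical PySpark sketch of the 2.0-era pattern: a simple count over semi-structured clickstream logs held in a distributed store. The path and field names are our own invention, not any particular company’s.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("clickstream-demo").getOrCreate()

# Read semi-structured clickstream logs (one JSON record per line; path is hypothetical).
clicks = spark.read.json("hdfs:///logs/clickstream/*.json")

# "Big data equals small math": a simple count of page views per user,
# but computed across a distributed store rather than a single warehouse table.
views_per_user = (
    clicks.filter(F.col("event") == "page_view")
          .groupBy("user_id")
          .count()
          .orderBy(F.desc("count"))
)

views_per_user.show(10)
```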

A new job category seemed in order for people who could both program in these new tools and do some data analysis as well. Practitioners of big data analytics began to call themselves data scientists. As Tom and his coauthor D. J. Patil (until recently, the chief data scientist at the White House) noted in their article “Data Scientist: The Sexiest Job of the 21st Century,” these people were different from the average quantitative analyst.1

First of all, they weren’t content to remain in the back office. Patil kept telling Tom that they wanted to be “on the bridge”—next to the CEO or some other senior executive, helping to guide the ship. Patil himself, for example, went from being a data scientist at LinkedIn to working in venture capital to being head of product at a startup, and then to the White House (where he admitted to having an office in the basement, but was at least in the right building).

Secondly, the data scientists we interviewed weren’t interested in decision support. One called advising senior executives on decisions “the Dead Zone.” They preferred in many cases to work on products, features, demos, and so forth—things that customers would use. LinkedIn developed data products like People You May Know, Jobs You May Be Interested In, and Groups You Might Like—and those offerings have been instrumental in that company’s rapid growth and acquisition by Microsoft for $26 billion. Practically everything Google does—except, perhaps, for its phones and thermostats—is a product or service derived from data and analytics. Zillow has its Zestimates and several other data products. Facebook has its own version of People You May Know, and also Trending Topics, News Feed, Timeline, Search, and many different approaches to ad targeting.

It’s also clear that analytics were core to the strategies of many of these firms. Google, for example, was formed around its PageRank algorithm. These companies competed on analytics perhaps more than any of the others we wrote about in the first version of this book. Such an alternative view of the objective and importance of analytics is a key lesson from Analytics 2.0 practitioners.

There was also a much more impatient, experimental culture to Analytics 2.0. The most common educational background we discovered among data scientists was a PhD in experimental physics. Facebook, a major employer of this new profession, referred to data scientists and developers as “hackers” and had the motto, “Move fast and break things.” This is an interesting component of Silicon Valley culture, although it perhaps would not fit well within many large organizations.

Analytics 3.0: Big (and Small) Data Go Mainstream

It became clear to many companies after 2010 or so that the big data topic was not a fad, and that there were important technologies and lessons to be adopted from this movement. However, given the cultural mismatch between Analytics 2.0 and large, established companies, there was a need for a new way of thinking about analytics at this point.

Analytics 3.0 is in many ways a combination of 1.0 and 2.0; it’s “big data for big companies,” but small data is still important in this era. Companies may want to analyze clickstream data, social media sentiments, sensor data from the Internet of Things, and customer location information—all “big”—but they are also interested in combining it with such “small data” as what customers have bought from them in the past. It’s really not big data or small data, but all data.

In the 3.0 world, analytics no longer stand alone. They become integrated with production processes and systems—what our friend Bill Franks, the chief analytics officer at the International Institute for Analytics, calls “operational analytics.” That means that marketing analytics don’t just inform a new marketing campaign; they are integrated into real-time offers on the web. Supply chain optimization doesn’t happen in a separate analytics run; instead it is incorporated into a supply chain management system, so that the right number of products is always held in the warehouse.

Companies in the 3.0 era also have a combination of objectives for analytics—shaping decisions as well as new products and services. They still want to influence decisions with data and analysis, but are interested in doing so on a larger scale and scope. There is no better example than UPS’s massive ORION project, which is also a great example of operational analytics. ORION, which took about a decade to develop and roll out fully across UPS, is an analytical application for driver routing. Instead of having drivers follow the same route every day, ORION bases routing on the addresses where packages need to be picked up or dropped off. Today, ORION produces a different route every day; eventually, it will change routings in real time based on factors like weather, a pickup call, or traffic.

The spending on and payoff from ORION have both been impressive—UPS has spent several hundred million dollars on the project and reaps even greater annual benefits. UPS has calculated (in typical analytical fashion) that the ORION project will save the company about half a billion dollars a year in labor and fuel costs. That’s the kind of scale that analytics can bring in the 3.0 era.

Decisions are important to companies that have moved into the 3.0 era, but these firms realize that analytics and data can stand behind not only decisions, but also products and services. The same data products that online startups offered in the 2.0 period can also be offered by big companies like GE, Monsanto, and United Healthcare. GE has a new “digital industrial” business model powered by sensor data in jet engines, gas turbines, windmills, and MRI machines. The data is used to create new service models based on prediction of need, not regular service intervals. Monsanto has a “prescriptive planting” business called Climate Pro that uses weather, crop, and soil data to tell a farmer the optimal times to plant, water, and harvest. United Healthcare has a business unit called Optum that generates $67 billion in annual revenues from selling data, analytics, and information systems.

Clearly, in the Analytics 3.0 era, data and analytics have become mainstream business resources. They have become critical in many companies’ strategies and business models. In short, competing on analytics has become much more accepted as a concept. Of course, that doesn’t mean that it’s easy to succeed with analytical innovations, or that companies don’t need to continue innovating over time.

Analytics 4.0: The Rise of Autonomous Analytics

The first three eras of analytics had one thing in common: the analytics were generated by human analysts or data scientists after they had gathered data, created a hypothesis, and told a computer what to do. But the most recent change in analytics is profound: it involves removing the human “quants” from the equation, or more accurately, limiting their role.

Artificial intelligence or cognitive technologies are widely viewed as perhaps the most disruptive technological force that the world is facing today. It is less widely known that most cognitive tools are based on analytical or statistical models. There are a variety of different technologies under the cognitive umbrella, but machine learning is one of the most common, and it is largely statistical in nature. In machine learning, however, the machine creates the models, determines whether they fit the data or not, and then creates some more models. For some forms of machine learning, one might say that the data itself creates the model, in that the model is trained on a set of data and can adapt to new forms of it.
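
To make the idea that the machine creates the model a bit more concrete, here is a small, hypothetical sketch using scikit-learn and synthetic data. The analyst chooses an algorithm and supplies labeled examples; the fitting routine, not the analyst, works out the model’s parameters, and the result is judged on data it has not seen.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# Synthetic labeled data standing in for historical outcomes (e.g., bought / didn't buy).
X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

# The analyst chooses an algorithm; the machine learns the coefficients from the data.
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# The fit is judged on held-out data, not on the analyst's hypothesis.
print("holdout accuracy:", accuracy_score(y_test, model.predict(X_test)))
```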

To a large degree, the rise of machine learning is a response to the rapid growth of data, the availability of software, and the power of today’s computing architectures. Neural networks, for example—a version of statistical machine learning—have been used since the 1950s and have been popular for business applications since the 1990s. But current versions—some of which are called deep learning because they have multiple layers of features or variables with which to predict an outcome or make a decision—require large amounts of data to learn from, and a high level of computing power to solve the complex problems they address. Fortunately, Moore’s Law (which predicts that processing power will double roughly every eighteen months) has supplied the needed computing horsepower. Labeled data (used to train machine learning models) is somewhat harder to come by. But in many cases, there are data sources at the ready for training purposes. The ImageNet database, for example—a free database used for training cognitive technologies to recognize images—contains over 14 million images on which a deep learning system can be trained.

In terms of software, both proprietary and open-source tools are widely available to perform various types of machine cognition. Google, Microsoft, Facebook, and Yahoo! have all released open-source machine learning libraries. Startups like DataRobot and Loop AI Labs have proprietary offerings for machine learning. And some of the world’s largest IT companies are adding machine learning capabilities to their offerings. Cognitive technologies are available both as standalone software and, increasingly, as embedded capabilities within other types of software. SAS offers machine learning methods to augment its traditional hypothesis-based analytical software. IBM has placed a big bet on Watson, either as a stand-alone software offering or as a series of smaller programs (APIs) that link to others. Salesforce.com recently announced Einstein, a set of cognitive capabilities embedded within its “clouds” for sales, marketing, and service. We think that virtually every major software vendor will eventually embed cognitive capabilities in its business transaction systems.

In hardware, the most important computers are off-premise. The availability of virtually unlimited computing capability at reasonable prices in the cloud means that researchers and application developers can readily obtain the horsepower they need to crunch data with cognitive tools—without even buying a computer. And relatively new types of processors like graphics processing units (GPUs) are particularly well suited to addressing some cognitive problems such as deep learning. There are also emerging computational infrastructures that combine multiple processors in a mesh to enable an entire “stack” of complex cognitive algorithms and tools.

Leading analytical organizations, then, are rapidly making a strategic shift toward cognitive technologies in general, and machine learning in particular. To handle the amount of data they have at their disposal and to create the personalized, rapidly adapting models they need, these organizations find that machine learning is generally the only feasible option.

These technologies won’t replace human analysts anytime soon, but at a minimum machine learning is a powerful productivity aid for them. With these semi-autonomous technologies, thousands of models can be created in the time that a human analyst historically took to create one. Building many models quickly means that an organization can be much more granular in its approach to customers and markets, and can react to rapidly changing data. Machine learning models may be more accurate than those created by artisanal methods (analytics that are hypothesized and painstakingly modeled by human analysts) because they often consider more variables in different combinations. Some machine learning approaches can also test an “ensemble” of different algorithm types to see which ones best explain the factor in question. The downside of these approaches is that models generated by machine learning are often not very transparent or interpretable by their human users.
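
As a hedged illustration of testing several algorithm types against the same question, the sketch below (synthetic data, algorithms of our own choosing) scores each candidate by cross-validation and keeps the best performer; a commercial model factory automates the same idea at far larger scale.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier

X, y = make_classification(n_samples=3000, n_features=15, random_state=1)

# Candidate algorithm types; a model factory would try many more, automatically.
candidates = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "random_forest": RandomForestClassifier(n_estimators=200, random_state=1),
    "gradient_boosting": GradientBoostingClassifier(random_state=1),
}

# Score each candidate the same way and let the numbers pick the winner.
scores = {
    name: cross_val_score(model, X, y, cv=5).mean()
    for name, model in candidates.items()
}
best = max(scores, key=scores.get)
print(scores)
print("best algorithm by cross-validated accuracy:", best)
```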

If your company already has an analytics group and is doing some work with statistical models in marketing, supply chain, human resources, or some other area, how might it transition to machine learning? Your company’s analytics experts will need some new skills. Instead of slowly and painstakingly identifying variables and hypothesizing models, machine learning analysts or data scientists need to assemble large volumes of data and monitor the outputs of machine learning for relevance and reasonableness.

They may also have to work with some new tools. As we’ve already noted, established vendors of proprietary analytics software are rapidly adding machine learning capabilities, while many algorithms are also available in open-source form; these do the job but may provide less support to users. And they may need to work with new hardware as well. Since machine learning models typically operate on large amounts of data and are computationally intensive, they may require in-memory architectures or cloud-based hardware environments that can be expanded as needed.

If there is already a central analytics group or center of excellence in place, it probably has the statistical expertise to interpret machine learning models to some degree. But as we’ve suggested, full and logical interpretation is very difficult. If there are thousands of models and tens of thousands of variables being used to support a business process, it’s probably impossible to interpret each one. And some variations on machine learning—neural networks and their more complex cousin, deep learning—are virtually impossible to interpret. We can say which variables (or features, as they are sometimes called in machine learning) predict an outcome, but we may not know why or know what the variables mean in human terms.

This “black box” problem—the difficulty of interpreting machine learning models—is both a cultural and a leadership challenge, particularly when the models are used in a highly regulated industry. Internal managers and external regulators may have to learn to trust models that they depend on but don’t fully understand. The key is to be vigilant about whether the models are actually working. If, for example, they no longer do a good job of predicting sales from a marketing program or conversion rates from sales force attention to customers, it’s probably time to revisit them.
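
What might that vigilance look like in practice? One simple approach, sketched here with made-up numbers and a threshold of our own choosing, is to keep scoring the deployed model against fresh outcomes and flag it when its discriminating power drops.

```python
from sklearn.metrics import roc_auc_score

def model_still_working(actual_outcomes, predicted_scores, minimum_auc=0.70):
    """Return True if the model still separates outcomes acceptably well.

    actual_outcomes: what actually happened (1 = converted, 0 = did not)
    predicted_scores: the black-box model's recent propensity scores
    """
    auc = roc_auc_score(actual_outcomes, predicted_scores)
    print(f"current AUC: {auc:.3f} (threshold {minimum_auc})")
    return auc >= minimum_auc

# Example check on a small batch of recent results (numbers are made up).
if not model_still_working([1, 0, 1, 1, 0, 0, 1, 0], [0.9, 0.2, 0.7, 0.8, 0.4, 0.3, 0.6, 0.5]):
    print("Model performance has degraded -- time to revisit it.")
```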

To illustrate the movement from artisanal analytics to autonomous analytics, we’ll provide a detailed (but anonymous) example. The company involved is a big technology and services vendor. It has over 5 million businesses around the world as customers, fifty major product and service categories, and hundreds of applications. Each customer organization has an average of four key executives as buyers. That’s a lot of data and complexity, so in order to succeed the company needed to target sales and marketing approaches to each company and potential buyer. If a propensity score could be modeled that reflected each customer executive’s likelihood of buying the company’s products, both sales and marketing could be much more effective.

This approach is called propensity modeling, and it can be done with either traditional or autonomous analytics approaches. Using traditional artisanal modeling, the company once employed thirty-five offshore statisticians to generate 150 propensity models a year. Then it hired a San Diego-based company called Modern Analytics, which specializes in analytics that are created autonomously by what it calls the “Model Factory.” Using machine learning let the company quickly bump the number of models up from 150 to 350 in the first year, 1,500 in the second, and now to about 5,000 models. The models use a mere 5 trillion pieces of information to generate over 11 billion scores a month, each predicting a particular customer executive’s propensity to buy particular products or respond to particular marketing approaches. Eighty thousand different tactics are recommended to help persuade customers to buy. Achieving this level of granularity with traditional approaches to propensity modeling would require thousands of human analysts if it were possible at all.
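
Propensity modeling itself is not exotic; what the autonomous approach changes is the scale. Below is a minimal, hypothetical sketch of a single propensity model built on synthetic data; a model factory repeats something like this thousands of times, once per product, segment, or marketing tactic.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

# Hypothetical features for each customer executive: e.g., past purchases, web visits, firm size.
X = rng.normal(size=(10_000, 3))
# Hypothetical historical outcome: 1 if the executive bought this product, else 0.
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=10_000) > 0).astype(int)

# Fit one propensity model for one product; a model factory does this per product and segment.
model = LogisticRegression().fit(X, y)

# Score today's prospects: the propensity that each will buy this particular product.
prospects = rng.normal(size=(5, 3))
print(model.predict_proba(prospects)[:, 1])
```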

Of course, there is still some human labor involved—but not very much. Modern Analytics uses fewer than 2.5 full-time employees to create the models and scores. Ninety-five percent of the models are produced without any human intervention, but analysts need to make adjustments to the remainder. The technology company does have to employ several people to translate and evangelize for the models to sales and marketing people, but far fewer than the thirty-five statisticians it previously used.

Going back to the presumption that your company already has some analytical skills: if so, it may be able to do this sort of thing by itself. Cisco Systems’ internal analysts and data scientists, for example, moved from creating tens of artisanal propensity models per quarter to tens of thousands of autonomously generated ones.

The world is a big and complex place, and there is increasingly data available that reflects its size and complexity. We can’t deal with it all using traditional, artisanal analytical methods, so it’s time to move to Analytics 4.0. Organizations with some experience and capabilities with traditional methods, however, will have an easier time transitioning to approaches involving greater autonomy.

What This Revolution Means for Organizations

Of course, all of these rapid changes in how analytics are done have important consequences for organizations. They mean new skills, new behaviors from employees, new ways of managing, and new business models and strategies. The details are still emerging, but we’ll try to give you a sense of what’s already visible.

First of all, so much change in such a short time means that organizations wanting to compete on analytics have to be very nimble. They have to integrate new technologies and new methods into their repertoires. For example, Capital One, the consumer bank that we profiled extensively in the first version of this book, was certainly a leader in 1.0 analytics. But it has kept pace with the times, and now is making extensive use of cognitive technologies for cybersecurity, risk, and marketing. The company has hired lots of data scientists and specialists in machine learning and artificial intelligence. And it uses all the latest open-source tools, from Hadoop to Python and a machine learning technology called H2O. It has no intention of retreating from its long-term goal of competing on analytics, in whatever form that may take.

The skills for doing analytics across the eras, unfortunately, are cumulative. That is, the skills necessary for doing Analytics 1.0 don’t go away as we move to the next era. That’s in part because companies still have a need for reporting and the other activities performed in 1.0, and also because the skills required for that era still apply in later eras. To be more specific, 1.0 quantitative analysts need to know statistics, of course, but also need to be able to integrate and clean data. They also require an understanding of the business, an ability to communicate effectively about data and analytics, and a talent for inspiring trust among decision-makers. For better or worse, none of these requirements go away when organizations move into Analytics 2.0.

But there are new skills required in the 2.0 era. As we noted above, data scientists in this environment need experimentation capabilities, as well as the ability to transform unstructured data into structures suitable for analysis. That typically means a familiarity with open-source development tools. If the data scientists are going to help develop data products, they need to know something about product development and engineering. And for reasons we don’t entirely understand, the time that big data took off was also the time that visual analytics took off, so a familiarity with the visual display of data and analytics is also important.

And all of those 1.0 and 2.0 skills are still required in the 3.0 era. What gets added to them? Well, in addition to the new technologies used in combining big and small data, there’s a lot of organizational change to be undertaken. If operational analytics means that data and analytics will be embedded into key business processes, there’s going to be a great need for change management skills. At UPS, for example, the most expensive and time-consuming factor by far in the ORION project was change management—teaching drivers about, and getting them to accept, the new way of routing.

Analytics 4.0, of course, involves a heavy dose of new technical skills—machine and deep learning, natural language processing, and so forth. There is also a need for work design skills to determine what tasks can be done by smart machines, and which ones can be performed by (hopefully) smart humans.

Thus far, we’ve described the skills for quantitative analysts and data scientists across the ages, but there is just as much change required of managers and executives. The shift to a data- and analytics-driven organizational culture falls primarily on them. And for many, it has not been an easy transition.

As an illustration of the problem, the consulting firm NewVantage Partners has for several years surveyed companies about their progress with big data. The most recent survey, conducted in late 2016 among fifty large, sophisticated companies, had a lot of good news. For example, 80.7 percent of the respondents—business and technology executives—felt that their big data initiatives had been successful.2 Forty-eight percent said that their firms had already achieved “measurable results” from their big data investments. Only 1.6 percent said their big data efforts were a failure; for some, it was still too early to tell.

But the organizational and human transitions were less successful. Forty-three percent mentioned “lack of organizational alignment” as an impediment to their big data initiatives; 41 percent pointed specifically to middle management as the culprit, and the same percentage faulted “business resistance or lack of understanding.” Eighty-six percent said their companies had tried to create a data-driven culture, but only 37 percent said they’d been successful at it.

The problem, we believe, is that most organizations lack strong leadership on these topics. Middle managers can’t be expected to jump on the analytics bandwagon if no one is setting the overall tone for how this will improve their jobs and results. Culture change of any type seldom happens without committed leadership, and not enough leaders are committed to making decisions and competing in the marketplace on the basis of data and analytics. This situation has certainly improved over the past ten years, but it hasn’t improved enough.

New management skills will also be required to create new strategies and business models. Many firms today feel the threat from digital startups—the Ubers and Airbnbs of the world—and are attempting to create new digital business models. They’re also trying to harness new technologies like the Internet of Things and social media. What they need to realize is that digital business models are also analytical business models. Digital, data-rich strategies and processes aren’t of much value unless the organization learns from the data and adopts new analytically driven behaviors and tactics. These are already second nature to digital startups, but often difficult for established firms to master.

Perhaps these changes in skills and strategies will require a generational change in company leadership. The physicist Max Planck is often paraphrased as observing that “science progresses one funeral at a time.” The same might be said of analytical orientations in companies.

What’s in This Book

We didn’t invent the idea of competing on analytics, but we believe that this book (and the articles we wrote that preceded it) was the first to describe the phenomenon.3 In this book, you’ll find more on the topic than has ever been compiled: more discussion of the concept, more examples of organizations that are pursuing analytical competition, more management issues to be addressed, and more specific applications of analytics.

Part I of the book lays out the definition and key attributes of analytical competition, and discusses (with some analytics!) how it can lead to better business performance. The end of this part describes a variety of applications of competitive analytics, first internally and then externally, with customers and suppliers.

In chapter 1, we’ve attempted to lay out the general outlines of analytical competition and to provide a few examples in the worlds of business and sports. Chapter 2 describes the specific attributes of firms that compete on analytics and lays out a five-stage model of just how analytically oriented an organization is. Chapter 3 describes how analytics contribute to better business performance, and includes some data and analysis on that topic. Chapters 4 and 5 describe a number of applications of analytics in business; they are grouped into internally oriented applications and those primarily involving external relationships with customers and suppliers.

Part II is more of a how-to guide. It begins with an overall road map for organizations wishing to compete on their analytical capabilities. Whole chapters are devoted to each of the two key resources—human and technological—needed to make this form of competition a reality. We conclude by discussing some of the key directions for business analytics in the future.

There are a lot of words here, but we knew they wouldn’t be the last on the topic. Since the initial publication of this book, we’ve been gratified to see how businesses and the public sector have embraced the concept of competing on analytics. Many academics and consultants have embraced the topic, too. Many excellent books and a raft of articles have helped advance the field. There are now books on implementing business intelligence, leveraging big data, creating and modifying analytical models in such areas as supply chain and marketing, data visualization, machine learning, and basic quantitative and statistical analysis. If analytics are to continue to prosper and evolve, the world will have to spend a lot of time and energy focusing on them, and we’ll need all the guidance we can get.

We do our best to help organizations embark upon this path to business and organizational success. However, it’s important to remember that this is just an overview. Our goal is not to give businesspeople all the knowledge they’ll ever need to do serious analytical work, but rather to get you excited about the possibilities for analytical competition and motivated enough to pursue further study.

What’s Changed in This Book

Since a lot of things have changed in the world of analytics over the last decade, we’ve changed a lot in this book as well. Other than this totally new introduction, we’ve maintained the first edition’s chapter structure. But every chapter has been revised, with new content topics, new examples, new research, and so forth. Chapters 4 and 5, which include many examples of how analytics are used internally and externally within organizations, have both had their examples substantially updated. Chapter 8, on technology architecture, has changed dramatically, as any reader would expect. The future isn’t what it used to be, so chapter 9, on the future of analytics, also isn’t what it used to be. Throughout the book, we’ve added content on such topics as:

  • Data scientists and what they do
  • Big data and the changes it has wrought in analytics
  • Hadoop and other open-source software for managing and analyzing data
  • Data products—new products and services based on data and analytics
  • Machine learning and other artificial intelligence technologies
  • The Internet of Things (IoT) and its implications for analytics
  • New computing architectures, including cloud computing
  • Embedding analytics within operational systems
  • Visual analytics

We’ve also added some content that has been around for a while, but that we hadn’t developed yet when we wrote the first edition. The DELTA model is an easily (we hope) remembered acronym for the factors an organization has to address to get better at analytics. It’s already in our book (with Bob Morison) Analytics at Work, but we still think it’s a nifty framework and we’ve added it to this one too—primarily in chapter 2 and chapter 6.

Like it or not, some things in the world of analytics haven’t changed much. Issues like developing an analytical culture, the important role of leadership, and the critical need to focus your analytics on a pressing business problem, are all pretty similar to what they were in 2007. We’ve left all those lessons pretty constant in this edition, other than trying to find new examples of how important they are. They were the hardest things to pull off successfully a decade ago, and they’re still the hardest today.
