Chapter 8. “Data, Take the Wheel!”

These days, it seems like everybody wants to be—or hire—a “data-driven product manager.” And why wouldn’t they? For a product manager, “data-driven” can be a handy shorthand for “in this ambiguously defined role full of squishy human complexity, I know how to do serious data business things.” And for a hiring manager, “data-driven” can be a handy shorthand for “don’t make any mistakes, ever.” What could possibly go wrong?

In all seriousness, there is a lot to be gained from looking to user, product, and market data to help guide our decision-making, as opposed to, say, “whatever the last thing is that my boss told me.” But if the goal of a data-driven approach is to guide us toward better decisions, how much of the driving can data actually do? Even if we are using data to inform our decision-making, we still need to know what we are trying to decide, why that decision is important, and what the possible outcomes of that decision might be. Not all decisions are created equal, and not all data will lead us to the best possible decisions for our business, our product, and our users.

At its best, a data-driven approach encourages us to use the information at our disposal to better understand our product and our users. At its worst, a data-driven approach encourages endless busywork that actually makes it more difficult for us to succeed as product managers. Here, again, we are compelled to follow the guiding principle: “live in your user’s reality.” If we spend all of our time as product managers with our heads buried in spreadsheets, charts, and dashboards, we are literally living in a different reality from that of our users. In the world of charts and dashboards, information is tidy, neatly categorized, and easily manipulable. In the real world…yeah, not so much.

In your work as a product manager, you will likely encounter a wide variety of off-the-shelf and custom data tools, dashboards, and widgets. In this chapter, we focus on the high-level, toolset-agnostic approaches that will help you to use data to your advantage without handing over the wheel.

The Trouble with the “D” Word

Let’s begin with the word data itself. This word can be used to describe a lot of things. In theory, data describes objective information, whether it is qualitative or quantitative. In practice, I have often seen the word data thrown around to describe conclusions drawn from information, filtered and structured representations or visualizations of data—or “anything that kinda looks like a number or a chart.” In its common and colloquial use, the word data does little to clarify what it’s actually describing, while handily imparting an air of certainty and rigor. The word data is dangerous for the very reason it is useful: it wields authority without specificity.

To that end, I’ve often advised product managers I work with to implement a counterintuitive-seeming rule if they want to take a truly data-driven approach: don’t use the word data. If you’re discussing a particular set of information, describe that specific set of information. If you’re discussing conclusions that you’ve made based on that information, describe those specific conclusions and how you reached them.

Take, for example, this somewhat-hypothetical sentence: “Our data shows that millennials are highly receptive to our value proposition.” Now imagine rephrasing this as, “The email survey we conducted shows that millennials are highly receptive to our value proposition.” There are still many points here that need clarification. (What is the value proposition? How does the email survey show this?) But at the very least, this rephrasing opens up a more meaningful conversation about what information was gathered, how it was gathered, and how it is being interpreted.

To use a more general example, imagine replacing the overused and often misapplied phrase “social data” with something more specific and descriptive like “sentiment analysis conducted on our customers’ Tweets.” The latter phrasing seems to invite more questions, but these are the very questions that make information both accessible and actionable. The absence of the d-word makes it easier to distinguish information from assumption, to have an informed conversation about methodology, and to set clear and reasonable expectations.

Don’t Hide Your Assumptions—Document Them!

As we turn toward data to help guide our decisions, it is absolutely inevitable that we are going to have to make some assumptions. We might assume that the data we are working from is representative of the people we are trying to understand. We might assume that the outliers in a particular dataset are not important for us to consider. In many cases, we assume that the data that does not support our initial hypothesis is somehow less accurate, important, or worthy of consideration than the data that does support our initial hypothesis.

Perhaps the single most meaningful step a product manager can take toward being truly data-driven is to be completely upfront and transparent about these assumptions. In many cases, this means presenting data that contradicts our decisions as well as data that supports our decisions. This is by no means an easy thing to do, given that product managers are ostensibly supposed to know the user and know the product—not show up with a bunch of wishy-washy ideas and blunt admissions that their best guesses about the user and the product are still riddled with untested assumptions.

Here, our first product management principle “clarity over comfort” can offer crucial guidance. Clarity does not mean presenting a single absolute, uncomplicated point of view. It means being clear and transparent about the limitations and assumptions at play in any conclusions you draw, and any datasets from which you draw those conclusions. Doing this is, at times, very uncomfortable, especially when you are presenting your conclusions to people who are hoping that a data-driven approach will remove all uncertainty and risk. But choosing not to document your assumptions does not mean that you are not making any assumptions. Instead, it means that you are failing to draw attention to the specific assumptions that informed your decision-making, and in turn depriving your organization of an opportunity to address those assumptions.

Imagine, for example, that you are working at a ride-sharing startup that is planning to expand from America to Europe. You have been tasked with proposing the three European markets in which you think your product should launch. You want to be data-driven in your approach, so you seek out publicly available information that might help you make your decision. And bingo, Google’s public data platform has some great figures about motorization rates in European countries. (Fun fact: it really does!) You know that, as a ride-sharing company, you need a pretty big fleet of potential drivers. So, you look for the countries with the highest motorization rates. Italy, Cyprus, Malta. Data-driven decision made.

The next week, you present your findings to senior leadership. “I consulted a canonical open data source,” you say, “and based on that data, I concluded that the areas where we will have the most seamless market entry from an operational perspective will be Italy, Cyprus, and Malta.” That sounds pretty legit!

Senior leaders, meanwhile, have their own concerns. “What about regulation?” one of them asks. “Who are the competitors in those markets?” asks another. You promise them that you will research their questions thoroughly, and you prepare to defend your brilliant data-driven decision-making.

There are, necessarily, a lot of assumptions that go into making any decision of this magnitude. But there is one specific assumption you made in your data-driven work that nobody thought to ask about: you assumed that a higher motorization rate was an indicator that a country would make a better market for your initial launch. In making this assumption, you also implicitly made the assumption that supply was a more important factor than demand in identifying your three initial markets. And when you presented to senior leaders, you gave them absolutely no opportunity to understand, confirm, or correct this assumption.

Of course, one of the most challenging steps toward documenting your assumptions is identifying them in the first place. Depending on how anxious and/or sneaky you are, you might be acutely aware of the assumptions you are choosing to gloss over in the interest of presenting an uncomplicated and definitive path forward. When I feel like I’m struggling to identify the assumptions at play in my work, I like to begin with the question, “What other things would need to be true in order for my interpretation to be correct?” But in some cases, the assumptions you’re making might be so deeply held that you yourself are not aware of them, even if you set out to think them through carefully. This is one of the many reasons why it is so important for you to open up a conversation about assumptions with your colleagues. It is inevitable that there are some important assumptions that you will not identify—but your colleagues might.

One of the best ways to open up a conversation about assumptions is to build a formal template for every data-driven decision that provides an opportunity to document goals, assumptions, and questions. Here is a rough template you can start with and customize for the particular needs of your organization:

The decision I’m trying to make or problem I’m trying to solve:




The data I’m using to make this decision:




Why I believe that this data will help me make this decision:




What I believe the data is telling me:




What assumptions are present in my interpretation of this data:




How we might test those assumptions:




The next steps I intend to take:




Returning to our previous example, this template would force a product manager to explain why they believe that motorization rate is a helpful statistic for deciding which European markets would be suitable for launch. Even if the product manager does not identify this as an “assumption,” this template asks them to reflect on the thought process behind their decision, and allows that product manager’s colleagues to participate in the conversation in a more constructive way.

Again, you can customize this template however you’d like; what’s important is that you frame documenting your assumptions as a critical part of working with data, not a misstep to be avoided. All work with data includes assumptions. It’s up to you to make sure that your organization can navigate, discuss, and test those assumptions as needed.

Focusing on Metrics That Matter

With all the data available to you, it is critical that you have a strong point of view about which metrics matter and why. Alistair Croll and Benjamin Yoskovitz’s book Lean Analytics (O’Reilly, 2013) provides an excellent framework for addressing this challenge: the “One Metric That Matters.” Although the idea of actually committing to a single success metric might seem implausible, it is incredibly valuable as a thought exercise. Do you know what the single most important metric for your product is right now? Can you clearly state why it is the most important metric? Are your company’s overall strategy and vision making it easier or more difficult for you to know which metric you should be looking at and why?

Committing at the outset to what you are going to measure and why helps you to avoid what Lean Startup author Eric Ries describes as “vanity metrics.” In short, these are whatever metrics make it look like you’re doing a good job, even if they are not tied to your underlying goals. If you do not do the work of connecting specific metrics to your underlying goals, all of your metrics are essentially “vanity metrics,” because you have no idea what they are really telling you or what you should do about it.

Suppose, for example, that you’re a product manager working on a search product. You see a sudden decrease in daily page views. What does this mean? And what do you do about it?

This is a classic Google product manager interview question, and for good reason. If you are working on a product for which the goal is to get people the right information as quickly as possible, a decrease in page views might be a good thing. If you are working on a product whose revenue is directly proportional to its page views, a decrease in page views might be a very, very bad thing. The same metric can mean very different things depending on how it aligns with the overall goals and strategy of your product and organization.

Having a strong point of view about metrics might also help you discover things that you are not currently measuring but should be. If none of your current metrics align as closely as you would like to the overall goals you are working toward, this could be a critical signal that you need to change what you are measuring and how you are measuring it. More than once, I have worked on a product that was developed with little to no thought about how exactly its success would be measured, only to find myself working frantically to add basic instrumentation and metrics after the product has already launched. The sooner you develop a strong point of view about what metrics matter, the less likely you are to find yourself having to retool an existing product because you cannot effectively measure its success.

“Up and to the Right” Is a Signal, Not a Strategy

A few years ago, I was doing some training work with an ad agency that represents a large automotive brand. About halfway into the workshop, an executive stormed into the room and excitedly announced that the sales numbers for that automotive brand had exceeded their projections for the last quarter. A cheer went up throughout the room. “That’s great,” I said, “Why do you think that is?” The mood changed very quickly. The same executive who had been applauding just seconds earlier looked around and said, somberly, “I guess we’ve got some work to do.”

We all want things to go up and to the right…right? But if we don’t understand why we’re doing well, we are powerless to build on that momentum, and completely helpless if the trend reverses. As product managers, we need to be as relentless about understanding the “why” behind metrics going in the right direction as we are about understanding the “why” behind metrics going in the wrong direction.

In practice, this often means that our quantitative data simply must be supplemented with some kind of qualitative data. If we see an increase in monthly active users, but we aren’t actually speaking with any of our new users to understand what drew them to our product, we basically have no idea what is actually happening or what to do about it. Maybe a competitor just went out of business. Maybe an influential person shared a recommendation. Or, maybe our overall category is seeing significant growth, and our product is actually falling behind. The specific actions we might take are wildly different in each of these scenarios. And unless we take the time to dig deeper, any metric, even a metric that we decided upfront has clear and immediate importance to our business, is a vanity metric.

From “Accountability” to Action

In the past, I’ve often recommended that product managers ask to be held accountable for specific success metrics. In theory, this ensures that product managers have a clear point of view about what metrics matter, and stay focused on what will move the product and the company in the right direction.

In practice, though, I’ve often seen this backfire pretty badly. When product managers are held directly accountable for hitting a specific quantitative number, they often disengage when they feel that number is out of reach. If you’re being held accountable for a certain percentage increase in user growth, for example, and a competitor launches a product that you know is going to chip away at your market share, you might be tempted to just throw your hands up and get ready for an unpleasant quarterly review. And, as we discussed earlier in this chapter, you might be just as disengaged if you realize early on that the metric against which you’re being evaluated is on a one-way ride to successville.

Therein lies one of the most uncomfortable and difficult challenges around data-driven “accountability” for product managers. If you are being held accountable for a specific number, but the fate of that number is ultimately outside of your control, how do you stay focused on the actions you should be taking to move that number in the right direction?

This is a tough question, and there is no obvious or all-encompassing answer to it. However, I have found one shift in framing that helps product managers feel more connected to what they can do, rather than what is being done to them: Rather than being accountable for hitting a specific metric, product managers can seek to be held accountable for the following things:

  • Knowing which metrics matter and why

  • Having clear targets for these metrics

  • Knowing what is going on with these metrics right now

  • Identifying the underlying issues that are causing these metrics to do what they are doing

  • Determining which underlying issues can be effectively addressed by you and your team

  • Having an action plan for addressing these issues

In other words, rather than attaching your value as a product manager to a number that might be outside of your control, ask for accountability in a way that makes you wholly responsible for seeking out and acting on the things you can control. If the numbers for which you are accountable are moving in the right direction, but you don’t know why or what to do about it, you are not doing your job as a product manager. And, if the numbers for which you are accountable are moving in the wrong direction, but you have taken the time to understand why and develop a plan of action, you are doing your job as a product manager.

Acknowledging the Limitations of Obfuscating Quantitative Proxies

What if there were a single number that could tell you everything you need to know about your product or your users? That would be pretty great, wouldn’t it? It would certainly make your job as a product manager a whole lot easier. And, hey, it would also make that whole “One Metric That Matters” thing a lot easier, too, because there would literally be only one metric that matters!

Numbers and “scores” that purport to tell you everything you need to know about complex user behaviors and product health issues seem to be popping up in more and more articles, consulting proposals, and company reports. I have taken to calling these numbers and scores Obfuscating Quantitative Proxies (or OQPs)—which is to say, single-dimensional numerical answers for broader qualitative questions that obscure the complexity of the underlying issue. Here are some signs that you might have an OQP on your hands:

  • OQPs purport to capture complex and widely variable trends or behaviors in a single number or score.

  • OQPs generally disguise or omit the raw data from which they are generated, and the manner in which that data was collected.

  • OQPs are presented as being equally applicable to all organizations regardless of those organizations’ specific goals and needs.

That last point clearly differentiates OQPs from the “One Metric That Matters.” While the idea of “One Metric That Matters” is designed to focus your efforts on the specific goals and priorities of your team and organization at this very moment, OQPs offer the compelling fantasy that you don’t have to focus at all; the OQP will figure out what’s important for you.

Again, this is a pretty appealing pitch. But behind every OQP is a set of decisions, assumptions, omissions, and limitations. Of course, you can’t reduce complex human interactions and behaviors down to a single, easy-to-understand score. But you can summarize those behaviors in a way that might or might not be useful for solving specific problems and answering specific questions. If you find yourself confronted with an OQP, I would suggest using the following template to assess how you might put it to use:

What is this OQP attempting to represent?




What raw information went into generating this OQP?




How was that raw information turned into this OQP?




What questions can this OQP alone not answer?




How might we answer those questions?




Based on the above, how might we use this OQP to meet our specific goals and needs?




This template provides you with an opportunity to assess both the limitations and the usefulness of OQPs, moving the conversation beyond “it’s perfect” versus “it’s garbage.”

Let’s take a look at an OQP that has been widely debunked but remains maddeningly popular: Net Promoter Score. Net Promoter Score is a number (from –100 to 100) that purports to represent how likely your customers are to recommend your product. The Harvard Business Review introduced this score in 2003 as “The One Number You Need to Grow,” and it remains widely used among marketing and research teams all over the world.

Net Promoter is a classic OQP—a single score that purports to tell a complex and comprehensive story. But using the template above, we can begin to understand the real-world limitations of this score, and in doing so, gain a clearer sense of how we might actually use it to our benefit. To answer the first question in our template, Net Promoter Score purports to capture the likelihood that your customers will recommend your product or service moving forward. Sounds good so far. But what raw information goes into Net Promoter Score? And how is that raw information actually turned into a single score?

The entire process for calculating Net Promoter Score is actually quite simple and straightforward for an OQP. You use a one-question survey to ask your customers how likely they are to recommend your product to a friend or colleague on a scale of 0 to 10. Then you count up all the 9s and 10s (your “promoters”) and subtract the count of 0s through 6s (your “detractors”); the 7s and 8s are treated as neutral. You take the resulting number as a percentage of the total number of customers who responded to the survey, and that’s your Net Promoter Score.
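
To make that arithmetic concrete, here is a minimal sketch of the calculation in Python. The function name and the sample responses are purely illustrative:

    def net_promoter_score(responses):
        """Calculate Net Promoter Score from 0-10 survey responses.

        Promoters are 9s and 10s, detractors are 0s through 6s;
        7s and 8s count toward the total but otherwise cancel out.
        Returns a score between -100 and 100.
        """
        if not responses:
            raise ValueError("Need at least one survey response")
        promoters = sum(1 for r in responses if r >= 9)
        detractors = sum(1 for r in responses if r <= 6)
        return 100 * (promoters - detractors) / len(responses)

    # Hypothetical survey results: 4 promoters, 3 neutrals, 3 detractors
    print(net_promoter_score([10, 9, 9, 10, 8, 7, 7, 6, 4, 2]))  # 10.0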

So, what’s the problem? Well, that all depends on how you’re trying to use Net Promoter Score. If you simply want to measure high-level trends in likelihood to recommend—that is to say, whether people are more or less likely to recommend your product today than they were six months ago—Net Promoter might work out just fine. Whatever detail and complexity is lost in generating that score is presumably lost the same way whether you’re running the survey now, six months ago, or six months from now.

But suppose that you’re using Net Promoter Score to compare likelihood to recommend across two different products, or across the same product in different markets. Can you say definitively that a “7” for a toaster oven means the same thing as a “7” for a peer-to-peer marketplace that facilitates the sale of toaster ovens? Does a “7” mean the same thing to somebody in Dubai as it does to somebody in Dubuque? Is recommending something to a colleague the same as recommending something to a friend, and is it equally valuable for a given product?

The goal of critically evaluating OQPs is not to unmask them as wholly useless, but rather to reveal the specific ways in which they can maximize our knowledge while minimizing our untested assumptions. OQPs often have real value as ways to summarize complex things enough to recognize broader trends that might become lost in the details. But they do not make the details any less relevant or impactful.

Keeping It Accessible

Early in my career, I was very easily intimidated by all things data-related. Any references to specific data tools, technologies, or concepts—even something as simple as the word algorithm—could leave me feeling hopelessly shut out of a conversation. I spent many long nights scrolling through Wikipedia pages in the hopes of being able to contribute something to these conversations, even if it was just enough to prove that I had even the faintest idea of what these data people were talking about.

Eventually, though, it became clear to me that just being able to define these terms wasn’t enough. As a product manager, it is your job to connect and align people with specialized skill sets, including people who are experts in data. In many cases, these people were hired because they excel at a specific technical skill, not because they are expected to connect that skill with the needs of your users and the goals of your business. That is your job.

In practice, this means breaking down technical challenges around data into goal-oriented concepts that anybody can understand. If, for example, you are working with your engineers to choose between a “relational” database and a “NoSQL” database, take the time to walk through with them what the relative tradeoffs of each approach might be, in clear and descriptive terms that are tied back to the product’s goals.
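
One way to keep that conversation concrete is to put the same record side by side in both shapes. Here is a minimal, purely illustrative sketch in Python (the table, field names, and values are all hypothetical), using a relational table for one shape and a JSON document for the other:

    import json
    import sqlite3

    # Relational shape: a fixed schema that is easy to query and join,
    # but requires a migration when the product needs new fields.
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE riders (id INTEGER PRIMARY KEY, name TEXT, city TEXT)")
    db.execute("INSERT INTO riders VALUES (1, 'Ada', 'Dubuque')")
    print(db.execute("SELECT name FROM riders WHERE city = 'Dubuque'").fetchall())

    # Document shape: a flexible, nested record that is easy to evolve,
    # but harder to query and aggregate across records without extra work.
    rider_doc = {
        "id": 1,
        "name": "Ada",
        "city": "Dubuque",
        "payment_methods": [{"type": "card", "last4": "4242"}],
    }
    print(json.dumps(rider_doc, indent=2))

However rough, an artifact like this gives everyone in the room, not just the engineers, something concrete to point at when weighing the tradeoffs against the product’s goals.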

This becomes particularly important when you’re working with data scientists—those who hold the “sexiest job of the 21st century” according to the Harvard Business Review, and who are often described as “unicorns” for their exceptional value and rarity. The technical jargon around data science can be particularly intimidating precisely because so much value is placed upon it. And for this very reason, it is even more important for you to create a strong connection between the technical concepts deployed by data scientists and the plainspoken goals of your product and organization. In a truly excellent article titled “Recommendation Engines Aren’t for Maximising Metrics, They Are for Designing Experiences,” data scientist Michael Dewar explains what happens when a product manager does not recast tactical data science decisions in more accessible terms:

You will have left your design to the vagaries of the data scientists who will take their favorite metric (mine’s a Hamming distance) and figure out how to apply that to your problem of return purchases, or time on page, or completion rate, or whatever.

In other words, if you do not have a plainspoken conversation about the goals of a complex data-related initiative, you cannot assume that these goals are driving specific implementation decisions. As a product manager, it is your responsibility to make sure that technical conversations are opened up and made accessible, so that people with different types of expertise can collaborate on deciding how to address a given user need. This is true not only of working with data systems, but of working with any kind of technical system you encounter as a product manager.
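
And in the spirit of keeping it accessible: the “Hamming distance” mentioned in that quote is simply a count of the positions at which two equal-length sequences differ. Here is a minimal illustrative sketch, using nothing beyond the standard library:

    def hamming_distance(a, b):
        """Count the positions at which two equal-length sequences differ."""
        if len(a) != len(b):
            raise ValueError("Sequences must be the same length")
        return sum(x != y for x, y in zip(a, b))

    print(hamming_distance("karolin", "kathrin"))  # 3

Whether a metric like this has anything to do with your users’ goals is exactly the kind of plainspoken question you should be asking.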

Summary: No Shortcuts!

The notion of data-driven product management can promise an almost magical-seeming, worry- and risk-free future. But in reality, data often becomes a black hole into which unasked questions and untested assumptions disappear. If used thoughtfully and thoroughly, data can be a critical tool for understanding your users and your product—but it won’t do your job for you.

Your Checklist:

  • Recognize that a data-driven approach still means that you will have to set priorities and make decisions.

  • Avoid using the word data to generalize specific information. Say what that information is and how it was gathered.

  • Rather than hiding or erasing the assumptions that go into working with data, document those assumptions so that you and your team can address them together.

  • Have a clear and strong point of view about what metrics matter and why.

  • As a thought exercise, ask yourself to decide on the “One Metric That Matters.” If you’re having trouble narrowing it down, go back to your high-level goals and see if you can make them more specific and actionable.

  • Think through how you will measure a product’s success before you launch it, to avoid having to go back and add instrumentation after a product is already released.

  • Be just as curious and active about understanding metrics moving “the right way” as you are about metrics moving “the wrong way.”

  • Rather than being accountable for a number hitting a target, seek to be accountable for knowing why that number is moving toward or away from that target and having a plan for addressing whatever underlying issues are within your control.

  • Resist the siren call of scores and numbers that purport to tell you “everything that you need to know” about anything. Take the time to understand how these quantitative proxies are developed, and do the work of figuring out what specific questions they can and cannot answer based on your goals and priorities.

  • No matter how complex the data systems you’re working with, resist the pull of jargon. Keep conversations about technical decisions rooted in high-level goals that can be understood by everyone in the organization to make as much room as possible for collaboration.
