Chapter 3. Act to Learn

This chapter is about how to learn well so that we can make better decisions. We explore how to articulate our beliefs and riskiest assumptions, and then design experiments that help us learn the relevant things. To help put theory into practice, there are loads of methods here for validating problems, evaluating potential solutions, and testing market demand.

Strategy is about doing. Doing is how we learn. Learning is how we win.

Action is how we push a theory toward reality, and in our complex world, winning is often about how we learn and respond, not how much we know.1

Learning your way to a strategy that works begins by doing the following:

  1. Defining your beliefs and assumptions (so that they can be tested)

  2. Deciding the most important thing to learn, and how you’ll learn it

  3. Designing experiments that will deliver learning

Defining Your Beliefs and Assumptions

Before we talk with customers or run an experiment, we need to identify what we need to learn. Otherwise, we’ll have a lovely chat, but might not learn anything that lets us know we’re on the right track.

The problem-assumption model in Figure 3-1 helps break down our beliefs and identify the underlying assumptions in our thinking. Those assumptions are the basis of what we test. We can translate them into questions for customer interviews or use them to design experiments that create a measurable result.

Figure 3-1. The problem-assumption model helps express problems, solutions, assumptions, and questions (source: created by Jonny Schneider and Barry O’Reilly)

The problem-assumption model is flexible. You can begin from anywhere—problems, solutions, assumptions, or questions—and elaborate to fill out your thinking. Many people begin with solutions and then explore the problem being solved, later moving on to the implied assumptions. As adoption of Design Thinking continues, more and more teams begin with the problem and then elaborate their solutions, assumptions, and questions.
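
To make this concrete, here's a minimal sketch of one belief captured with the model, expressed as plain data. The product, problem statement, assumptions, and questions are entirely hypothetical; the point is that each assumption is phrased so it can be tested, and each question points at a specific learning method.

```python
# A hypothetical worked example of the problem-assumption model.
# The product and all details below are invented for illustration.
belief = {
    "problem": "Busy commuters skip breakfast because preparing it takes too long.",
    "solution": "A subscription box of ready-to-eat breakfasts, delivered weekly.",
    "assumptions": [
        "Commuters want breakfast but lack time (not appetite).",
        "They will pay a premium for convenience over a supermarket trip.",
    ],
    "questions": [
        "In interviews, how do commuters describe their mornings?",  # qualitative interviews
        "What share of a facade-ad cohort clicks 'subscribe'?",      # demand validation
    ],
}

# The riskiest assumption is the one that, if wrong, kills the idea.
# That is the one to test first.
for assumption in belief["assumptions"]:
    print("Test:", assumption)
```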

Decide What to Learn and How to Learn It

Know what you need to learn. That sounds obvious, but it’s surprising how often teams choose research methods that can’t deliver the insight needed to move forward. Overuse of online surveys to quiz customers about their desire, intent, or behavior is one such antipattern. A well-designed and well-executed survey has its place, but much of the time, product teams will learn more through other methods like observation or conversation. Another antipattern is conflating a good result in a prototype test with strong customer affinity for the problem, or with demand for the solution. Without confidence in the problem being solved, and without measuring true demand for the solution, teams risk shipping awesome products that customers are unaware of, don’t care about, or never use.

Research and Experiments for Learning

Sometimes, we learn things that we weren’t expecting to learn. Perhaps more often, we don’t learn enough to make conclusive or definitive decisions. Finding the signal in the noise is challenging enough, so anything that reduces bad ambiguity5 and helps us to look in the relevant places with the appropriate tools is a good thing.

Following is a range of ways to think about learning, along with practical methods that help us carry out the work:

  • Validating the problem

  • Evaluating potential solutions

  • Testing market demand for a solution

Problem Validation

Many solutions fail because they solve no meaningful problem. Charlie Guo learned that lesson the hard way. We fall in love with our ideas and our biases get the better of us. We focus too much on the solution, without properly understanding if there’s really a problem worth solving.6

I don’t want to start another company until I find a problem that I care about. A problem that I eat, sleep and breathe. A problem worth solving.

Charlie Guo, cofounder of FanHero

Customer Problems

To understand customer value, we must know customers.

Go to where they are. Watch them. Talk to them. Build an understanding of what it’s like to be them. Challenge our assumptions about what matters to them, how they behave, and what they need. This seems simple—and it is.

No facts exist inside the building, only opinions.

Steve Blank

Organizational Problems

Sometimes the problem to solve is an organizational one, not a customer one. In this case, it’s not a customer problem that we need to validate; rather, it’s a problem or inefficiency in the way we’re solving customers’ problems.

Table 3-1 describes many options for identifying and validating meaningful problems.

Table 3-1. Learning for problem validation

Method | What is it | Why it’s good | Note
In situ consumer study | Observation of behavior in the context that it occurs. | See how people interact with the product/service/brand from their point of view. |
Design probes | Study of participants’ behavior and attitudes over a period of time, often recorded in a journal. | High-quality data, collected close to when events occur. | How to do a design probe
Qualitative interviews (semi-structured interviews) | Interviews with customers to explore their attitudes, needs, and problems. | Fast way to test early assumptions, or discover real customer needs. Flexible. | Mia Northrop[a] on developing your interviewing technique
Surveys | A structured questionnaire to gather data on a specific research question. | Reach a large number of people cheaply and efficiently. | Designing a statistically accurate, unbiased survey is a skilled activity
Experience mapping | A visualization showing the outside-in view of the end-to-end experience for customers. | Pinpoints problems and creates alignment of business goals to customer value. | Adaptive Path guide to experience mapping
Analytics | Quantitative analysis of measured behavior. | Data is empirical, representing actual behavior, not reported behavior. | Tells you nothing about why
Value stream mapping | A process map showing the inside-out view of everything that happens in the organization to deliver value to the customer. | Lightweight, paper-and-pencil study. Helps to understand time-to-completion and identify waste and bottlenecks in the process. | Rother & Shook, Learning to See[b]
Demand study | An empirical study of how value/failure demand flows through an organization, from concept to cash. | Understand the volume of rework versus value-generating work that is happening. Identifies inefficiencies, problems, and opportunities for improvement. | Vanguard on failure demand

[a] Mia is a terrific design researcher—the queen of qualitative interviewing!

[b] Mike Rother, John Shook, and Lean Enterprise Institute, Learning to See: Value Stream Mapping to Add Value and Eliminate MUDA (Cambridge, Massachusetts: Lean Enterprise Institute, 1998).
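
One reason the survey row above calls survey design a skilled activity is basic statistics: the sample size you need depends on the margin of error you can tolerate. The sketch below shows the standard sample-size formula for estimating a proportion; the numbers it produces are illustrative, not a recommendation for any particular study.

```python
import math

def sample_size(margin_of_error, confidence_z=1.96, p=0.5):
    """Minimum sample size to estimate a proportion.

    p=0.5 is the worst case (largest variance), so this is conservative.
    confidence_z=1.96 corresponds to 95% confidence.
    """
    return math.ceil(confidence_z**2 * p * (1 - p) / margin_of_error**2)

# Roughly 385 responses for a +/-5% margin at 95% confidence
print(sample_size(0.05))  # 385
# Tighten to +/-2% and the requirement balloons to 2,401
print(sample_size(0.02))  # 2401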

Solution Evaluation

Table 3-2 describes ways to evaluate how well our proposed solutions solve a given customer problem. It’s the familiar ground of prototypes, analytics and testing with customers.

Table 3-2. Learning for solution evaluation

Method | What is it | Why it’s good | Note
Concept prototype | Low-fidelity, throwaway sketches and mock-ups, for rapidly exploring concepts with customers. | Fast, low cost. Participants are more comfortable giving critical feedback, because sketches are low effort. |
High-fidelity prototype | A detailed and interactive mock-up of the product experience. | Validates the nuts and bolts of the solution, like interaction design, content, and look and feel. Easy to iterate and build upon based on feedback. |
Concierge | Personalized service provided to a small cohort of early customers, to learn what works before building an automated solution. | Generates solution options through exploring the problem with customers. | Difficult to scale. Risk of building for a niche, because the cohort might not represent the market.
Working prototype | A limited implementation of a product, focusing on the happy path, and tested with a pilot cohort of customers to measure how well the solution performs. | High confidence in results, because it’s real software with real customers. |
Multivariate/split tests | Testing multiple variants of a solution with cohorts of live customers to quantitatively learn which elements perform best. | Particularly effective for optimizing an existing product or service with a high volume of traffic and low cost to deploy working solutions as software. |
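
When running the split tests described above, raw conversion counts alone don’t tell you whether variant B genuinely beats variant A or you’re just seeing noise. A common check is a two-proportion z-test; the sketch below implements it from scratch, with invented traffic numbers purely for illustration.

```python
import math

def two_proportion_ztest(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for a difference in conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical numbers: variant A converts 120/2400, variant B 156/2400
z, p = two_proportion_ztest(120, 2400, 156, 2400)
print(f"z = {z:.2f}, p = {p:.3f}")  # p < 0.05 suggests a real difference
```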

Demand Validation

Validating true customer demand is where we ask customers to put their money where their mouths are. Our goal is to measure behavior that indicates demand for the solution we are offering, in real market conditions.

It’s the reverse of “build it, and they will come.” Demand validation says, “When they come, build it.” Measuring demand helps with choosing where to invest, or whether to invest at all. Before spending money to build what we think people want, we aim to measure what solutions they actually want or need. Table 3-3 describes ways to validate demand.

Table 3-3. Learning for demand validation

Method | What is it | Why it’s good | Note
Facade ads | A targeted search-marketing ad directing customers to a simple landing page about the product. | Easy to target specific groups of people and experiment with different variants to see what resonates most. |
Preorders | Get paying customers signed up to a product or service before it has been created. | Strong signal of demand, as customers put their money where their mouths are. |
Wizard of Oz prototype | The appearance and experience of a complete and working service, where all of the back-of-house processes are carried out manually. | No investment in supporting infrastructure required. A realistic test of actual market demand. | Zerocater started with a spreadsheet
Competitive market analysis (for existing markets) | Quantitative data and reports from independent agencies on market size, growth trends, geographies, segments, etc. | Find out what has or hasn’t worked for companies competing in the same space. | Not very helpful for nascent markets!
Google trend analysis | Compare search trends using Google search data by time, location, and other dimensions as a proxy for customer interest. | Free, fast, and kinda fun. |
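
Facade ads and preorders ultimately yield a conversion rate, and with small cohorts that rate comes with wide error bars. The sketch below computes a Wilson score interval for a hypothetical landing-page test, a reminder that 12 sign-ups out of 200 visits is a signal with a range, not a precise number.

```python
import math

def wilson_interval(successes, trials, z=1.96):
    """95% Wilson score interval for a conversion rate."""
    p = successes / trials
    denom = 1 + z**2 / trials
    centre = (p + z**2 / (2 * trials)) / denom
    half = (z * math.sqrt(p * (1 - p) / trials + z**2 / (4 * trials**2))) / denom
    return centre - half, centre + half

# Hypothetical facade-ad result: 12 sign-ups from 200 landing-page visits
low, high = wilson_interval(12, 200)
print(f"Conversion: 6.0%, plausible range {low:.1%} to {high:.1%}")
```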

The Cost of Experiments

We’re looking for the shortest path to find out whether our strategy works. We want to move fast, challenge our beliefs, and mold the appropriate strategy, while also investing enough time and effort to properly explore uncertainty. Knowing when to stop exploring and commit to something in market is a balancing act.

Looking at experiments by cost and confidence helps guide us to select the best approaches for our circumstances (see Figure 3-2).

Cost
  • Time

  • Effort

  • Resources

  • Financial cost

Confidence
  • Accuracy of data or insight

  • Usefulness to inform decisions

A costly activity is one that takes time to prepare and execute, and involves significant people and resources. Traditional research and high-fidelity prototypes are examples of costly activities. A low-cost activity could be a sketch and a conversation examining a specific question you’re exploring.

An example of an activity that produces accurate results—results in which we can have high confidence—might be a preorder campaign, for which prospective customers literally vote with their money, promising to pay if you deliver the product. An activity with low-accuracy results would be a competitor analysis of the current market.

Activities that are high-cost or low-accuracy are not bad. It’s about choosing the best activities for a given stage of product development. It’s about knowing what you need to learn and determining how you’ll learn it.

Figure 3-2. The cost of experiments

Conclusion

Product success relies on a lot of stars lining up properly. We need a problem worth solving, access to customers who need what we’re offering, a solution that’s technically feasible and commercially viable, a good sense of market timing, and a measure of good luck. Developing solutions is an economic activity, taking a lot of time, money, and effort. And great products fail all the time, for a wide range of reasons. Our goal here is to learn quickly and fail fast.

Design Thinking helps put the customer into focus and brings empathy to problem solving, along with creativity and innovation to solution exploration. Lean gives us a framework for scientific learning. We identify what to learn, and run experiments that help us make decisions as we navigate uncertainty. Although a great deal of learning can happen faster, cheaper, and more effectively without writing a line of code, Agile software development still has a big role to play. The cost of creating software continues to decrease, meaning that many organizations are choosing to move to software-as-an-experiment earlier. Software prototypes with real customers can be a great way to learn what really works, especially when building new products. When it’s an existing product/service, quantitative analytics and split/multivariate testing create feedback loops that help Agile product delivery teams decide what to do next.

1 Remember, nothing is truly knowable in complex adaptive systems.

5 Good ambiguity is the kind we explore to find opportunities. Bad ambiguity is where our learning is questionable because we’re not confident in our approach to learning.

6 That’s not the only reason things fail. Sometimes the solution just isn’t very good. Or, a competitor eats you for breakfast. Or you’re too early or too late. Or, or, or...
