Chapter 7. Prioritizing–with Science!

What you’ll learn in this chapter

Why prioritization is crucial

Bad (but common) ways to prioritize

Five prioritization frameworks

Limitations of using a scoring approach to product decisions

Wouldn’t it be great if no one could argue with your decisions?

In this chapter, we will detail five prioritization frameworks you can use and why you might choose each.

Leveraging an objective and collaborative prioritization method will help stakeholders focus on what’s important and come to alignment. It will help you get the buy-in you need to put together a product roadmap that inspires. If it were any more scientific, you’d need a lab coat.

Brenda rushed over from her desk in the sales pit. She was clearly excited. She’d uncovered an opportunity to partner with a local firm that would distribute the company’s free demo to a few thousand of their customers, and she was convinced it would be an easy way to generate a lot of new business quickly. Printing demo discs was cheap, so she just wanted my OK to get some cobranded materials made to package them with, and maybe we could do a seminar together and dedicate some telesales resources to following up on the leads. It was nearly free marketing, right?

I said no. I also helped Brenda understand why this didn’t fit with our strategy, and where she could more profitably focus her sales efforts. The company had been using a free demo disc for a couple of years to bring in small business customers. We’d run the numbers and it wasn’t a highly profitable go-to-market approach. We weren’t ready to phase out the SMB business altogether, but it was clear we needed to go up-market.

I redirected Brenda’s efforts that day (and for the next year) by sitting down with her and showing her our prioritized list of initiatives and—this is the important part—the underlying strategic goals that informed it. Everything on the list was ranked as to how much we estimated it would help with those goals versus how much effort we estimated it would take.

I put her idea through this model and, although the effort was small, it didn’t contribute to our new strategic goal of capturing larger customers, it would probably hurt our conversion rate, and it was unlikely to make much of a dent in our overall revenue picture. Brenda seemed happy to be heard, but being a smart salesperson she also absorbed the changed goals of the company and refocused her prospecting efforts.

A year later, Brenda became our first national accounts salesperson and the highest-paid person in the company. Our move toward larger customers doubled the company’s revenue and improved the company’s eventual acquisition price not long after.

—Bruce McCarthy, 2010

Why Prioritization Is Crucial

Opportunity Cost

In Bruce’s case study, why couldn’t the company move up to larger customers and also pursue the small ones with Brenda’s low-cost idea? The answer lies in opportunity cost, or the fact that, to paraphrase Milton Friedman (and Robert Heinlein), there’s no such thing as a free demo.

Long experience has taught us that you can never get everything done you would like or even might think is minimally required. Resources are limited, priorities shift, executives have ADHD, and on and on. So if you are not going to get it all done, you have to be very sure you get the most important things done before something changes and your resources are redirected. If you are not doing the most important things right now, you risk not having the opportunity to do them in the future.

In fact, this concept is so important to prioritization and roadmapping that we’ve developed a rule—nay, a law—to reinforce it:

Always assume you may have to stop work at any time.

This law is a core tenet of the Lean Startup movement popularized by Eric Ries. In his book The Lean Startup (Crown Business), Ries points out that the reason most startups fail is that they run out of money before they find a viable business model.

Now you may argue that you don’t work for a startup. Maybe you work for a well-funded medium-sized company, or a Fortune 500 behemoth. These companies have more than enough resources for everything on your list, right? The thing is, though, even in these larger companies, your project is in constant competition with every other good idea the executive team has or hears about. So all projects are always at risk of cancellation or downsizing.

If your funding could be diverted to somebody else’s bright idea at any moment, what do you do? You ruthlessly prioritize your work so that you get the most important things done first and begin to demonstrate value quickly. Opportunity cost is the price you pay when you never get the chance to do something important because you chose to work on something else instead.

Shiny Object Syndrome

What about doing things in parallel? With sufficient resources, couldn’t Brenda’s company have done her little project alongside the move to support bigger customers?

We work with a lot of companies that are—like this company was—just coming out of the startup phase. They’ve been acquired, or gotten funding, or reached profitability and are looking to add to their product line or expand their market. A lot of them suffer from a lack of focus. Either they have 100 number-one priorities that the CEO somehow magically thinks they can pursue in parallel, or the priorities shift from day to day, or even hour to hour. What the startup CEO sees as responding with agility to opportunity, the development team secretly refers to as shiny object syndrome.

What the developers intuitively know is that there are a lot of hidden costs to any given development effort. Sure, you can build a feature that wins you a deal in a different segment from your target market, and you can probably figure out how to build it quickly on a shoestring. But then what?

Every feature you build has a carrying cost. For each new feature, you may have to:

  • Retest (and fix if necessary) the feature with each new release and in each supported environment

  • Document the feature

  • Produce training material for the feature

  • Handle support requests from customers (and train the support team to handle them)

  • Figure out how to incorporate the feature into your pricing

  • Figure out how to demo and sell the feature (and train the sales team)

  • Figure out how to position and market the feature (and train the marketing and channels teams)

It’s hard enough for an organization to do all of this well for a core feature set and a core target market. Imagine having to duplicate this effort for small numbers of customers in a variety of nonstrategic customer segments. The cost in distraction alone is huge, but never mind that. Splitting your resources means that doing two things in parallel makes each take at least twice as long, and the added overhead of communication, coordination, and mental switching usually makes it even longer.

Exponential Test Matrix Growth

Have you ever noticed that the more features your team develops, the longer it seems to take to develop the next one? Why is that? We’ve mentioned a lot of factors here, but the testing matrix is a large contributor. As you add features, you add the burden of testing not just each feature, but each feature in combination with every other feature. In software, this is commonly called regression testing, and unfortunately the size of this test matrix grows exponentially with the number of features (or modules or compatible third parties).

What does this mean? Think of it this way. If you have one feature, you have only that feature to test. If you have two features, you must test both individually, and also together (so now the number of tests is three). If you add a third feature, the number of test combinations rises to seven. So far, no big deal, but the numbers begin to rise rapidly from here, roughly doubling each time you add a feature. (For the math nerds, the progression is 2^n – 1.) The test matrix for 5 features is already 31. For 10 features, it is 1,023. And to make your product go to 11 requires 2,047 test combinations. (Maybe that’s why Nigel Tufnel’s amplifiers don’t go any higher.)

In practice, testing teams can’t manually run through every possible combination every time, and so automation and sampling help reduce the amount of work involved, but in principle you can see how quickly the overhead of maintaining features and capabilities grows with every addition (Figure 7-1).

Brenda wasn’t proposing a feature, but she was proposing having the company split its attention by marketing to, selling to, supporting, and possibly developing features demanded by a segment that had proven to have low ROI.

The inverse of (and antidote to) shiny object syndrome is focus. If you focus as an organization on one set of problems for a strategic set of target customers, you minimize the increasing drag of bad decisions and seemingly small diversions.

Have we convinced you that prioritization is critical to success? We hope so. The next step is how to prioritize—but let’s start with how not to.

Figure 7-1. The size of the test matrix grows exponentially with the number of features

Bad (but Common) Ways to Prioritize

A good product person spends much of their time gathering data as input to good requirements and good priorities. They talk to customers, prospects, investors, partners, executives in all departments, salespeople, support reps, and analysts. The list is never-ending, but the best decisions come when you have all the right inputs, and (as you’ll see shortly) when you involve stakeholders in the process.

That is not to say that product management is a democracy or some kind of commune-like decision-by-consensus process. Not at all. In fact, letting others define your strategy is one of the most common mistakes we’ve seen in product planning. Other common pitfalls, or anti-patterns, include prioritizing based on the following factors:

Your, or someone else’s, gut

Prioritizing solely on your or some other executive’s gut instinct can be a killer for team productivity and morale, and it usually results in high turnover, low productivity, and subpar results. Why? Two reasons. First, the lack of rigorous analysis means that the executive in question is very likely to change their mind, confidently proclaiming that X is the future, only to make an equally confident claim for Y a few days later. Second, while your CEO or other founding member of the executive team may have once been close to customers (or even been one at some point), their day-to-day experience is likely different now, and they are no longer in touch with the market.

A product person must learn to take executive gut opinion as input and apply some rigor to it, understanding what problem the executive is trying to solve, whether solving this problem aligns well with the product strategy, and whether the proposed solution is the best available. It is then often necessary to explain politely why, while this is not a bad idea, there are others that take higher priority.

Analyst opinions

You probably know more about your business and your customers than industry analysts do, and it seldom pays to substitute their judgment for your own. In the early 2000s, analysts projected that prices for flat-panel monitors would remain high for many years due to limitations in the supply chain. This was a logical but flawed straight-line extension of existing trends that failed to take into account the emergence of a few high-volume Chinese producers. Today, monitors that only a few years ago sold for over $1,000 can be had for less than $200. Take the time to do your own research and analysis, and you will be ahead of yesterday’s trends.

Popularity

We truly believe that outsourcing your product strategy to your customers is a mistake in nearly all situations. An inexperienced product manager will, with the best of intentions, rank feature requests by frequency or size of customer. It makes sense, initially. Why not give customers what they ask for? The issue is that customers often can’t articulate what they need from your product. Yes, a seasoned product person will seek to understand customers and their needs—but this is fundamentally different from asking customers what they want.

Steve Jobs is famous for ignoring market research, saying, “A lot of times, people don’t know what they want until you show it to them.” This perhaps oversimplifies the problem a bit but resonates with our experience. A roadmap made up entirely of customer requests generally results in a product with no focus, unclear market positioning, and poor usability. If you understand what underlying problems motivated your endless list of customer requests, you and your team can usually develop a more elegant set of solutions that render that list moot.

Sales requests

As author and Silicon Valley Product Group founder Marty Cagan says, “Your job is not to prioritize and document feature requests. Your job is to deliver a product that is valuable, usable, and feasible.” Similarly, asking for input from your sales team is smart—they have insight into how buyers think and what will get their attention—but prioritizing based on what will help close the deals in the pipeline this quarter is short-term thinking. It may help make the numbers once or twice, but successful product people are focused primarily on how to serve a market rather than individual customers.

Carol Meyers, CMO of Rapid7, says: “You can lock yourself into creating something that really only one company needs. Maybe nobody else really needs it. I think it’s a big part of deciding who your real target customer set is and how that fits in with your business.” On the other hand, everything is a spectrum, as Meyers points out: “Some companies only serve the biggest of the big, in which case, their roadmap often is totally determined by a couple of large customers because that’s the focus of their business.” Even then, we would argue, these companies depend on your expertise in devising the best solutions to their underlying problems. This dictates a two-way dialogue between a product producer and a product consumer. Otherwise, you are running nothing more than a custom development shop.

Support requests

The customer support team is a terrific source of insight for product people. Many can provide you with a list of common complaints or trouble spots in your product, ranked by frequency or rep time spent. This is good data for prioritizing enhancements under the general heading of usability, and it makes a lot of sense to prioritize work here if usability is one of your key goals. Take special note of that “if,” though. We are not questioning the value of usability in general, but we are suggesting that this goal needs to be weighed in the context of all of your goals. A particular focus on usability makes all the sense in the world when you have a product people are buying but not using or not renewing sufficiently. It is probably not your #1 priority, however, if customers find it easy to use but it’s missing critical functionality that causes them not to buy it in the first place. In that situation, feedback from current customers doesn’t help you expand your appeal to prospects.

Competitive me-too features

The quickest way we’ve found to reduce the value of your product is to get into a tit-for-tat feature war with your competition. Once you and they are easily comparable, you have collaboratively created a commodity market where the guys with the longest feature list and the lowest price will win (though whoever makes it to the bottom of this race to the lowest price will have little to no profit margin left). Much better than trying to match the competition feature for feature is to differentiate yourself with capabilities that are perfectly matched to your chosen customers’ needs and that your competitors can’t or won’t match. Thus, as the best (or ideally only) option for your particular niche, you can charge based on the value you provide rather than a few dollars less than your nonexistent competition.

Prioritization Frameworks

Fortunately, there are also some good ways to set priorities, and some core underlying principles to prioritization that you can adapt to your needs. Here we’ve described a few that we’ve found useful, including critical path analysis; Kano; desirability, feasibility, viability; and (Bruce’s favorite) the ROI scorecard. We’ll also cover a related principle called MoSCoW, which helps you categorize your priorities once you’ve set them.

Throughout the discussion we’ll describe when you might use each framework, but you should use your own judgment, test different approaches, and see what each teaches you about your own thinking—and then mix and match!

Critical Path

As we discussed in Chapter 3, a user journey map provides an opportunity for product professionals to tease out various dimensions at each step in the customer’s journey, including their emotions and state of mind during each moment. Negative emotions can help you uncover key pain points and home in on those that are causing the most distress. These hot-button aggravations are often what brings users to the breaking point and convinces them to find a better way to do something.

Once you’ve identified those critical pain points, your job is to insert your elegant solution into your potential customer’s path at the exact moment when their pain or frustration is highest. Like a marathon volunteer handing out water and energy bars at the bottom of a big hill, a product person needs to understand the major struggles in the customer’s journey and offer the right solution at the right time.

Your goal is to answer the question, “What is the one thing (or set of things) our solution needs to get right?” These are the most painful steps in the process that you have to address or improve in order for the user to be convinced to hire your solution. You might identify numerous negative emotion states in your user’s journey, but the key here is to narrow those down to the “must haves.” Linking these key moments together into a critical path journey gives you a blueprint for creating a great product (or what some would call a minimum viable product).

Let’s look at a real-life example.

In the middle of 2016 my team was approached by a new startup called BarnManager for help with strategy and design on a relaunch of their product (see Figure 7-2). The founder of BarnManager, Nicole Lakin, grew up riding horses and competing in hunter-jumper competitions. Lakin was in a unique position to empathize fully with her customers. After years of both owning horses and working in barns, she noticed a large majority of facilities still relied on whiteboards and notepads to communicate between staff, and traditional manila envelopes and filing cabinets to store records. Over time she realized these outdated tools were causing all sorts of problems for horse management teams.

First, due to the constant demands and frantic pace of horse management, many miscommunications would arise. For example, a discussion would happen between two people, but they would forget to inform the rest of the team. Or worse, a decision would be made but no accountability would be assigned and important tasks would go undone. These misalignments led to losses of time and money, not to mention undermining team chemistry and morale.

Second, managing paperwork had become a big issue. On any given day, a horse team could be inundated with vet visits, farrier reports, competition forms, treatment plans, prescriptions, feed supplements, x-rays, ultrasounds, and much more. Without a consistent and reliable way to collect and share this information, horse teams again wasted time and money. Even worse, they lost critical information needed to properly care for a horse. In some cases, the mismanagement of these elements led to health issues for the horses that could have been avoided (or even untimely death in extreme cases). Other pain points for barn management teams included things like scheduling, hiring and retaining talent, travel logistics, and ordering supplies.

During the user research portion of this project, we worked to understand the journey of each key role, including owners, barn managers, groomers, vendors, and more. We attended competitions and helped muck stalls. We spent hours poring over notes and folders, and shadowed team members as they went about their days.

We uncovered a lot of pain points and had many pages of notes, numerous sketches, and dozens of interview transcripts to prove it. There were a lot of valuable things we could have included in an initial product design, but when we finally had enough information to understand the full ecosystem, we were able to clearly identify the critical path pain points.

In this case, we realized that at a bare minimum BarnManager’s new product had to provide a secure and easy-to-use repository for horse records. Through our research we were able to determine that if Lakin’s team didn’t get this right, the rest of the pain points didn’t matter. Without a solution for digital record keeping, there was no way the low-tech horse management world would adopt a new software product. (We also realized that if we got this right, there was ample opportunity to improve on other important pain points down the road.) Once we identified our critical path was to solve for record keeping, we were off to the races (please excuse the cheesy pun)!

—Evan Ryan, 2017

Critical path for existing products

BarnManager was a brand new product, designed from scratch around the customer’s critical path needs. However, this approach can also be helpful in prioritizing enhancements for an existing product or growth product.

We’ve seen too many talented product people make the mistake of forgetting that user behaviors, needs, and preferences change rapidly—including in response to your product! As a customer gets comfortable with your solution to the needs that initially prompted them to choose your product, new needs may become the critical path. Now that BarnManager solves the worst pains around document storage, perhaps their users feel the pain of daily communication more keenly. This may be an opportunity to expand the value of your application (or, if you ignore this, an opportunity for a competitor to use the new critical path to get in the door with your customer).

Staying in touch with your users helps you understand trends and anticipate new needs. There are many strategies for seeking direct customer feedback, from focus groups to surveys to product demos to A/B testing and more. As a product person, you will experiment and find the tools that work best for your team. Even in today’s data-driven world, however, direct contact with the customer is as important as it was in Sam Walton’s early days. That hasn’t changed, and we encourage all product people to interact regularly, and face-to-face if possible, with actual human customers.

Figure 7-2. BarnManager

Kano

The Kano model, developed by Dr. Noriaki Kano, is a way of classifying customer expectations into three broad categories: expected needs, normal needs, and exciting needs. This hierarchy can be used to help with your prioritization efforts by clearly identifying the value of solutions to the needs in each of these buckets.

The customer’s expected needs are roughly equivalent to the critical path. If those needs are not met, they become dissatisfiers, and your chosen customer will not buy your product. If you’ve met expected needs, however, customers will typically begin to articulate normal needs, or satisfiers—things they do not strictly need in your product but that will satisfy them. Once normal needs are largely met, exciting needs (delighters or wows) go beyond the customer’s expectations. These typically define the offerings of category leaders. Table 7-1 shows an example of applying the Kano model to a common automobile feature.

One thing to bear in mind is that over time, expectations rise. There was a time when intermittent wipers were exciting (the 70s, anyone?), but now they are pretty, well, normal. In order to continue to differentiate themselves, high-end brands like Mercedes-Benz had to develop something even better, a rain sensor that adjusts wiping frequency automatically based on how much rain is falling on the windshield.

Table 7-1. How the Kano model might be applied to one automobile feature
Needs                                  Example
Expected (Dissatisfier if missing)     Windshield wipers
Normal (Satisfier)                     Intermittent wipers
Exciting (Delighter)                   Rain-sensing wipers

Figure 7-3 illustrates how customers’ satisfaction and perception of product quality increases as the product moves from expected to normal to exciting needs. Note that in most cases, merely meeting expected needs is not sufficient to get out of the dissatisfied half of the diagram. This is especially true in a competitive market, where the customer can choose among several adequate solutions. Customers have high expectations!

The Kano model is most useful when you’re trying to prioritize among customer needs based on the customer’s own perception of value—for example, when you’ve covered the critical path needs and you’re trying to decide among other ideas of increasing value. Perception is the key word here. If the customer lives in an arid climate, rain-sensing wipers may seem unimportant to them, and there will be no delight. Using the Kano model (or any other model incorporating customer value) requires you to know your customer well. See Chapter 5 and Chapter 6 for more on understanding and solving customer problems.

Figure 7-3. Customers’ satisfaction and perception of product quality increases as the product moves from expected to normal to exciting needs

Desirability, Feasibility, Viability

While useful and popular, the critical path and Kano model by themselves are, in our view, insufficient to effectively judge the value of solving problems or of particular solutions to those problems. You must solve for critical path or expected needs (dissatisfiers) at a minimum, but that alone won’t guarantee you success. Nor will you usually have the luxury to develop every satisfier or delighter you can think of. Resource and time constraints dictate that you pick among them and choose the ones that will make the biggest difference first.

One method we’ve used to prioritize potential solutions is to score each idea in terms of desirability, feasibility, and viability (see Figure 7-4).

Figure 7-4. Desirability, Feasibility, Viability Venn diagram. No product book is complete without this.

Desirability

Desirability indicates the value to your customer of solving a problem, providing a feature, or performing a function. Things customers value most have higher desirability. (Counterintuitively, exciting needs have lower value than expected needs because the latter are absolute minimum requirements. The distinction between exciting needs and normal needs is trickier, and depends on the perception of value versus alternatives.)

Feasibility

Feasibility indicates how easy it will be for your organization to solve this problem, build this feature, or perform this function. Solutions that require more money, effort, or time to deliver have lower feasibility. (Easy stuff scores high.)

Viability

Viability indicates how valuable this solution is for your organization, often measured in revenue or profit. Solutions that make the company more successful have higher viability.

Using a simple 1 (low), 2 (medium), 3 (high) scale for each criterion allows you to add up the scores across the three axes and come up with a composite priority score. Ideas that score high on most or all of these criteria will rise to the top of the heap, above those that may score highly on one measure but not the others.

Let’s consider an example. Imagine you are a car maker, and your proposed design already meets the minimum expectations for safety, performance, and economy. What should you add to make your cars more attractive to buyers? Table 7-2 shows how you might answer that question by calculating a potential feature’s priority score based on desirability, feasibility, and viability. Perhaps customers have expressed a desire for custom colors and, based on your research, you give that a medium desirability score of 2 out of 3. However, manufacturing is telling you that they can’t make colors to order without a separate paint process, which lowers both the feasibility (it’s hard) and the viability (it will cost more and lower profits) to 1. The composite score is a disappointing 4 out of a possible 9.

On the other hand, your company has been investing in hybrid technology for some years, giving that a high feasibility score of 3. What’s more, customers are clamoring for better fuel economy, driving the desirability up to 3 as well—and many are willing to pay a premium for hybrids, so profits are higher and viability is also a 3. This feature is a home run at a priority score of 9!

This method of ranking ideas may not be intuitive, but once you grasp it, it is simple enough to explain to your stakeholders (see Chapter 8) to gain alignment on which potential features or solutions meet all the core criteria for success—and which don’t measure up.

Table 7-2. Calculating an automobile feature’s priority score based on its desirability, feasibility, and viability
Feature             Desirability   Feasibility   Viability   Priority Score
Hybrid drivetrain   3              3             3           9
Leather interior    2              2             2           6
Custom colors       2              1             1           4
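For readers who prefer code to tables, the arithmetic behind Table 7-2 can be sketched in a few lines of Python (the feature names and scores are the ones from the example above; the function and variable names are ours, purely for illustration):

```python
# Each idea gets a 1 (low) to 3 (high) score on each axis;
# the composite priority score is simply the sum of the three.
ideas = {
    "Hybrid drivetrain": {"desirability": 3, "feasibility": 3, "viability": 3},
    "Leather interior":  {"desirability": 2, "feasibility": 2, "viability": 2},
    "Custom colors":     {"desirability": 2, "feasibility": 1, "viability": 1},
}

def priority_score(scores: dict) -> int:
    """Composite score out of a possible 9."""
    return sum(scores.values())

# Rank ideas from highest to lowest composite score.
ranked = sorted(ideas.items(), key=lambda kv: priority_score(kv[1]), reverse=True)
for name, scores in ranked:
    print(f"{name}: {priority_score(scores)}")
# Hybrid drivetrain: 9, Leather interior: 6, Custom colors: 4
```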

ROI Scorecard

The priority score was a perfect way to determine which of three potential car features to implement, but what if your feature list is dozens or hundreds of items long? Many product people are in this situation, and what’s more, they often find that a lot of proposed features score an 8 or a 9. When you have too many good ideas, a simple prioritization spreadsheet known as an ROI scorecard can help you tease out and quantify finer differences in priority.

Fundamental to this approach is the concept of ROI, or return on investment. You can’t do everything at once, so you should do the most leveraged things first—those that have the most bang for the least buck. This step can then provide funding for what you want to do next. We discuss how to define both bang and buck shortly, but the math is simple and intuitive:

  • Value/Effort = Priority

Value is the benefit the customer or the company gets out of a proposed feature or other change. Effort is what it takes for the organization to deliver that feature. If you get more benefit for less effort, that’s an idea you should prioritize.

This is the traditional formula used in finance to justify dollar investments that have an expected future payout. If you think about it, that’s exactly what we are doing when we invest time and money in employees to have them develop features or other product/service enhancements.

Consultants and product managers love a 2 × 2 grid, and it’s easy to plot value versus effort this way and look for the items that fall deepest into the upper-right quadrant (see Figure 7-5). With a long list of features, though, this kind of chart can get pretty busy, so later we’ll go over how the ROI scorecard can help you organize and rank ideas for improving your product.

Bruce has used models like this for years to set priorities based on relative ROI. He and others have used it successfully to prioritize features, projects, investments, lean experiments, acquisition candidates, and OEM partners. He even used it once to figure out (and get his wife on board with) which car to buy. It seldom fails to support both the right decisions and (just as important) the right conversations. And at the right time, it helps frame up your business case. So next let’s dive a little deeper into the equation underlying the ROI approach.

Figure 7-5. Plotting Value versus Effort on a 2 × 2 grid

Strategy defines value

The first part of the equation is value. In Chapter 5 and Chapter 6, we gave you guidance on identifying and solving for customer needs. Providing the critical path, satisfiers, and delighters that will attract and please customers is obviously a part of delivering value, but what’s in it for you?

An equally important component of value is the benefit the company will receive from changing, adding, or enhancing some aspect of their product or service. As product people, we all assume that a better product will generate more success and more money for the company, but not all enhancements are equal in this regard. It makes sense to prioritize things that we expect to drive business results. This is similar to the viability concept we’ve discussed.

It’s easy to be tempted into using dollars and cents here, and many companies judge the value of proposed work based simply on a revenue projection. But hang on—what if your product is pre-revenue and you need to focus on getting your core feature set in place? What if some of your customers are unprofitable? What if the mandate from your executive team is to grab market share? Or to enter a new and untested market? And even if revenue is priority #1, is new customer revenue or renewal revenue more important for your product? Revenue usually does come into it somewhere, but in most cases there are multiple drivers of value.

This is why we spent Chapter 4 talking about vision and objectives. If you understand your customers’ critical needs, and you’ve developed a set of business-outcome goals for your product, you have everything you need to define the value side of the ROI equation:

  • Value = Customer’s Needs + Organization’s Goals

Let’s go back to our Wombat garden hose example from Chapter 2. You’ve determined that your customers have two critical path needs: reliable water delivery and reliable nutrient delivery. Delivering on either provides value, but delivering on both provides more value, and your team is busily working on features, materials, and manufacturing methods to meet these needs. If you could assign a simple score for each idea, rating how much you believe it will contribute to each goal, then you could add these components of value together simply.

You can do the same with your organization’s goals. If your garden hose company has goals to grow new customer revenue and to increase the size of their addressable market, the expected contribution of proposed features can also be scored and added to the customer value scores, like this:

  • Value = CN1 + CN2 + OG1 + OG2

In this case, CN1 is the first Customer Need, CN2 is the second, OG1 is the first Organizational Goal—you get the picture.
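To make the addition concrete, here is a minimal sketch of the value calculation in Python. The feature names and their 0–2 scores are invented for illustration, not taken from the book:

```python
# Hypothetical Wombat garden hose ideas, each scored 0-2 against two
# customer needs (CN1, CN2) and two organization goals (OG1, OG2).
ideas = {
    "kink-proof liner":  {"CN1": 2, "CN2": 0, "OG1": 1, "OG2": 1},
    "nutrient injector": {"CN1": 0, "CN2": 2, "OG1": 2, "OG2": 1},
}

def value_score(scores):
    """Value = CN1 + CN2 + OG1 + OG2 (a simple unweighted sum)."""
    return sum(scores.values())

for name, scores in ideas.items():
    print(name, value_score(scores))
```

Because the components are just added, the model stays simple enough for stakeholders to check in their heads; weighting schemes are possible but, as discussed below, often add more friction than insight.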

Deliberate imprecision

We can use a small range of numbers here, as we did when quantifying desirability, feasibility, and viability earlier. Once you get the idea of modeling decisions in math, it’s easy to get caught up in perfecting your model. And since we are measuring business goals, which sometimes equate to money, it’s tempting to try to be as precise as you can.

Be wary of complexity here, however. You are not actually trying to make a sales forecast or project schedule; you’re merely trying to prioritize, so the need for precision is relatively low. You can always look more closely at the most promising ideas if you need additional validation before making a final decision.

In fact, much of the power of this model derives from this deliberate imprecision. Your team might argue about whether the revenue opportunity is $500K or $600K, but you’ll usually be quick to agree on whether it is high, medium, or low. The math will work well with a 1–3 or 1–5 scale. However, 0 is also possible and quite useful when a particular idea seems promising for solving one problem but helps not at all on another. Bruce often employs a scale of 0 (low enough that it doesn’t matter), 1 (medium or some positive effect), or 2 (large positive effect) to keep things as simple as possible.

A simple model is also easy for your stakeholders to understand. Try to make it so they can do the math in their head, so it’s obvious when and why one thing scores higher than another. We’ve seen models with a dozen or more goals and types of effort, as well as complex weighting schemes that make using the model more difficult without adding much to its utility for decision-making.

The effort side of the equation

The second part of the ROI equation, effort, is whatever is necessary to meet your customer’s needs and achieve your organization’s goals by adding the proposed feature, enhancement, or solution. It is the reverse of feasibility, so in this formula, higher effort scores are bad.

Many product people who develop their own prioritization scorecard will leave out the effort side of the equation. Often they will say that business value is all that matters or that estimates are someone else’s responsibility. We take issue with both of these claims. The reason is simple.

Let’s say you have two proposals—two new feature ideas—that you estimate are about equal in adding value according to your scoring. Feature A will take three months of effort to deliver, but feature B will take only a couple of days. Isn’t it obvious you should do feature B first, and start collecting that benefit while you are working on feature A? (Couldn’t you actually use the money generated from feature B to fund your work on A?)

OK, you say, but I can’t ask engineering for an estimate on every half-baked idea we might or might not ever do. They don’t have the time for that and, besides, every time I ask for an estimate they want detailed requirements, and I don’t have time for that.

Estimating effort without frightening engineers away

You’re right, of course. Engineers are rightfully afraid of providing estimates for things they don’t know enough about. If they’ve been around the block, they’ve probably been burned by a finger-in-the-air estimate (also known as a WAG, or wild-ass guess) that somehow turned into a commitment to a date when someone in management got hold of it.

A quick way to get over initial reluctance around estimates is to do the estimates yourself and then ask someone involved in the implementation for a reality check. By taking responsibility for the estimate, you absolve others from that feeling of premature commitment. (Also, it turns out it’s much easier to get an engineer to tell you why you’re wrong than to get them to give an estimate themselves.)

T-shirt sizing

One other thing you can do to get estimates without a lot of angst is to make them simple and abstract using T-shirt sizing. This common practice uses the familiar extra-small–small–medium–large–extra-large scale to roughly estimate the necessary effort without getting bogged down in sprints, days, or person-months—and if you assign numbers to the various sizes, it works well with the prioritization formula:

  • 1 – Extra Small

  • 2 – Small

  • 3 – Medium

  • 4 – Large

  • 5 – Extra Large

Remember, you’re not actually trying to schedule anything at this stage; you’re just trying to prioritize. So a simple scale like this allows you to quickly gauge what things are promising enough to spend the time to get a real estimate on.

T-shirt sizing also borrows a principle from agile estimation using story points called relative sizing. It’s not important whether a small is one to three weeks (or sprints or person-months). What’s important for your prioritization formula is that it is small-er than a medium.
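The size-to-number mapping from the list above can be captured in a couple of lines. Only the ordering matters for relative sizing:

```python
# T-shirt sizes mapped to the 1-5 effort numbers used in the text.
TSHIRT_EFFORT = {"XS": 1, "S": 2, "M": 3, "L": 4, "XL": 5}

# Relative sizing: a small just needs to be smaller than a medium.
assert TSHIRT_EFFORT["S"] < TSHIRT_EFFORT["M"] < TSHIRT_EFFORT["XL"]
```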

Think cross-functionally

In some cases, effort is simply the amount of time the engineering team will need to spend to code, test, and release the product. But usually there are other costs to consider as well. To get a realistic feel for the effort the company as a whole will incur, consider whether the marketing team will need to research and target a new market segment, or the sales team will need extensive retraining. Maybe you will need to strike a partnership with another company for a key component, or recruit new channel partners. The particulars will vary, but if you are trying to estimate ROI, you must think of the effort of the whole company. What good is a shiny new feature if your company can’t market, sell, or service it?

Risks and unknowns

Just as you may run into a traffic jam or bad weather on a road trip, there are always unknowns in any product development effort. Especially in R&D efforts like software, you are building things that have never been built before, so there is no exact template, recipe, or blueprint to follow. Some teams build a fudge factor into their estimates to account for this. This risk factor applies (at least as well) on the value side of the equation because you cannot predict with 100% certainty how customers will react to your product before you’ve built it.

A risk factor can be used to discount the expected ROI of any proposed investment. A simple way to do this is to multiply the result of your value/effort equation by a confidence percentage. This is essentially the inverse of risk. You should be less confident about risky things.
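Putting the confidence discount together with the ratio, the calculation might be sketched like this (the numbers are illustrative):

```python
def priority(value, effort, confidence=1.0):
    """Priority = (Value / Effort) * Confidence.

    confidence is a 0.0-1.0 multiplier, the inverse of risk:
    riskier ideas get a lower multiplier, shrinking expected ROI.
    """
    return value / effort * confidence

# Two ideas with equal raw ROI; the riskier one ranks lower.
print(priority(value=6, effort=3, confidence=0.9))  # 1.8
print(priority(value=6, effort=3, confidence=0.5))  # 1.0
```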

A Formula for Prioritization

The full formula for prioritizing investments in your product might look something like this:

  • Priority = (Value / Effort) × Confidence

A simple scorecard

Let’s put this all together now so that we can compare the ROI of proposed initiatives and derive a priority list. Scoring every job, theme, feature idea, initiative, or solution allows you to develop a scorecard ranking each against the others, as shown in Figure 7-6. (Just remember to compare only like things; it’s hard to rank a theme versus a feature intended to fit within that or another theme.)

In this example, feature A ranks highly because it contributes to both goal 1 and goal 2 and has a high confidence. In contrast, feature B ranks lower despite the higher confidence because it supports only one of the goals.

Feature C has the lowest effort of the three and might have ranked higher, but it suffers both from low confidence and a negative score on goal 2. Why would a feature or other idea have a negative goal score? Sometimes, goals can conflict with each other. A given idea may contain an inherent trade-off such as quality versus quantity or power versus weight.

It is common, for example, in retail to discount prices to increase sales, but this comes at the expense of profit margin. Choosing to discount or not will depend on the balance of all your goals combined. In this example, another goal like unit sales or category sales may be the deciding factor.

Figure 7-6. A simple ROI scorecard
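As a sketch of how such a scorecard computes out, here is the three-feature example in Python. The goal scores, efforts, and confidences are made up to reproduce the ranking described in the text, not taken from Figure 7-6:

```python
# (name, goal 1, goal 2, effort, confidence) -- illustrative values.
features = [
    ("Feature A", 2,  2, 4, 0.8),  # hits both goals, high confidence
    ("Feature B", 2,  0, 3, 0.9),  # only one goal, despite higher confidence
    ("Feature C", 2, -1, 2, 0.5),  # lowest effort, but low confidence and
                                   # a negative goal-2 score (goal conflict)
]

# Priority = (sum of goal scores / effort) * confidence, ranked high to low.
scored = sorted(
    ((name, (g1 + g2) / effort * conf) for name, g1, g2, effort, conf in features),
    key=lambda row: row[1],
    reverse=True,
)
for name, score in scored:
    print(f"{name}: {score:.2f}")
# Feature A: 0.80
# Feature B: 0.60
# Feature C: 0.25
```

Note how Feature C’s negative goal-2 score partially cancels its goal-1 contribution before the effort and confidence adjustments are applied.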

A more complex scorecard

Now let’s look at an ROI scorecard that’s a little more detailed. Imagine you are in charge of a website called Plush Life that sells stuffed animals. You have a long list of ideas to expand your business, including enhancements to the site, expansion into China, and new concepts for toys to sell.

If you know your company goals are to increase sales, increase margins, and increase customer lifetime value (LTV), you can rank each idea against each of these three goals—and also against level of effort—to derive a priority score based on relative ROI.

The Plush Life scorecard is shown in Figure 7-7; you can build a similar scorecard yourself in Excel or Google Sheets.

Scorecard models can be built in a variety of ways. This one takes the basic concepts of desirability, feasibility, and viability, and rearranges them to create an ROI ratio between desirability + viability on the one hand and feasibility on the other. It also allows you to quantify separate components of each value (e.g., multiple aspects of desirability) and, finally, it adds confidence to account for risk.

Chapter 8 covers ways to use a scorecard like this to discuss and align on priorities with your key stakeholders.

So, to recap: when your proposed feature list is long, there are multiple factors to consider, and the answers are not obvious to everyone at first glance, the ROI scorecard approach can add just enough rigor to help illuminate and quantify subtler differences in priority.

Figure 7-7. The Plush Life ROI scorecard

MoSCoW

Whatever method you use in your prioritization process, you must communicate those priorities clearly to your development team. MoSCoW is a method for categorizing your prioritized list of requirements into unambiguous buckets, making it clear what your release criteria are. It has nothing to do with Red Square or St. Basil’s Cathedral, but is an acronym for:

  • Must have

  • Should have

  • Could have

  • Won’t have

Must-haves are requirements that must be met for the product to be launched. These are the critical path items or dissatisfiers—those expected needs without which no one will buy or use your product. These are also sometimes called minimum-to-ship features because (if you’ve classified things correctly) you can’t launch until they are delivered.

Should-haves are not critical to launch, but are important and may be painful to leave out. Possibly including satisfiers, these are the last items you would cut in order to meet budget or deadline pressures.

Could-haves are features that are wanted, but not as important as should-haves. These are the first items you would cut if they introduced budget or deadline risk. You can distinguish could-haves from should-haves by the degree of pain that leaving them out would cause the customer, or the reduction in value of the solution. Another way some teams think of this category is as a list of things to include “if easy.” Your delighters may also fall into this bucket.

Won’t-haves are requirements deemed “out of scope” for a particular release. Won’t-haves could contain both satisfiers and delighters, but should not contain dissatisfiers or critical path items (otherwise, what’s the point of releasing?). Won’t-haves may be included in future releases, of course. It’s useful to agree on these items up front to avoid misunderstandings about scope, rehashing scope mid-project, or what is often called scope creep.

MoSCoW is not itself a prioritization method, but we mention it here as a way to clearly communicate what your priorities mean in terms of release criteria.

Tools Versus Decisions

Numerical prioritization methods are controversial, and some experienced product people feel this left-brain approach can be misleading, providing a false sense of confidence in the math. Roger Cauvin, Director of Products, Cauvin, Inc., argues that this approach is an attempt “to address organizational dysfunction with formulas and analysis that ignore human factors” or to make up for a “lack of a shared understanding of the product strategy.” He contends that a scorecard approach tends to “distract the team from a singular focus on delivering the product’s unique value proposition.”

We agree that many organizations claim to be data-driven when they are actually seeking data to support political decisions. In our experience, however, introducing the structure of a scorecard often forces a team to clarify, articulate, and align on their strategy and the value proposition so that they can effectively pick the right columns for the scorecard.

Some participants in roadmapping workshops have said, “I can make these numbers look however I want.” We have found that going through this exercise tends to reduce the amount of opinion and emotion in discussions of priority, forcing people to frame their arguments in terms of relative contribution to common goals. A scorecard approach makes it clear to everyone that there are multiple criteria for success and that the ideas that hit on all or most of them will be winners.

All of that said, no one should be a slave to a formula. Frameworks like these should be used as an aid to decision-making, not as the decision itself. Table 7-3 outlines some limitations to be aware of when using a scoring approach to product decisions.

Table 7-3. Scoring pros and cons
Con: Scoring lots of little features or requests could provide a false sense of confidence or progress.

Pro: Scoring items being considered anyway forces discussion of the underlying problems and the value of solving them.

Con: A simple scale misses finer differences.

Pro: Simplicity keeps teams from arguing over unimportant details.

Con: Keeping to a few goals misses important factors that should be considered.

Pro: Forcing teams to narrow down to a few goals forces them to face the reality that they can’t do it all.

Con: Scoring models do not include intangible factors such as generating “buzz” or level of “innovation.”

Pro: Objective criteria support a more rational and open discussion of trade-offs.

Con: Scoring models do not include dependencies, resource availability, or promises made to key customers, the board, Wall Street, and so on.

Pro: Scoring models uncover where resources and promises do not align with priorities.

Dependencies, Resources, and Promises (Oh, My!)

If you’ve done a good job selecting and leveraging one of these frameworks, your resulting priorities should be directionally correct, but they will require an additional layer of practical considerations before they can be scheduled for work.

You may be forced, for example, to begin with an item that your model says is your #2 or #3 priority, for reasons that have nothing to do with dissatisfiers, must-haves, or ROI. It may be that certain critical resources required for priority #1 will not be available until a future date, or that #1 is dependent on #2 being completed. It may also be that priority #47 has already been promised to the board of directors or written into a customer contract.

These additional factors don’t affect priorities, per se, but they can affect scheduling. It’s simple to make notes on such details in the margin of your scorecard, and then refer to them when sitting down to plan out the sequencing in the roadmap.

Prioritization Frameworks

Table 7-4. Prioritization frameworks overview

Framework: Critical Path
Use to: Identify the “one thing” that will drive a customer to buy
Choose when: Designing an MVP or making a major expansion in product scope
Downsides: Does not take into account effort, risk, or business goals; does not rank needs finer than “critical” or “noncritical”

Framework: Kano Model
Use to: Understand how customers perceive relative value
Choose when: Identifying possible add-ons or enhancements
Downsides: Does not take into account effort, risk, or business goals

Framework: Desirability, Feasibility, Viability
Use to: Identify opportunities that meet all key criteria for success
Choose when: Prioritizing among a relatively small set of initiatives or solutions to a particular problem
Downsides: Categories are not clearly defined in terms of customer needs, organization goals, or different types of effort or risk

Framework: ROI Scorecard
Use to: Rank ideas according to return-on-investment criteria
Choose when: Weighing multiple factors and/or a long list of possible initiatives, problems to solve, features, or solutions
Downsides: More complex model requires alignment on different components of value and effort

Framework: MoSCoW
Use to: Communicate launch criteria
Choose when: Feeling uncertain about what must be included in a product, service, or release
Downsides: Does not help set priorities, only communicate them

Note

Value/Effort = Priority

Summary

Companies that consistently prioritize and focus on a few highly leveraged initiatives invariably learn faster, grow larger, and become successful by getting everyone pulling in the same direction. You can’t do it all, so pick your bets thoughtfully.

Don’t fall into the trap of prioritizing by gut feel or outsourcing your decisions to your customers, competition, or industry analysts. Use one of the frameworks described here to develop, get input on, and make decisions about priorities in an objective and transparent way. Table 7-4 provides a quick summary of what each approach is best at and when you might choose to use it.

Regardless of which prioritization framework you use, how you order items on the roadmap will reflect your priorities, and make them starkly clear to your customers and other stakeholders. It’s important that you can explain why you’ve chosen these items and placed them in this order, so use these frameworks wisely and, as we describe in Chapter 4, keep your timeframes as loose as you can to preserve your flexibility.

With all of this information in place, you are ready to begin laying out a roadmap that will drive you quickly and efficiently toward your goals and your vision of a successful product—that is, one that adds value to customers’ lives and businesses.

First, however, you’re going to need buy-in. In the next chapter, we will discuss how to use both shuttle diplomacy and group workshops like design sprints to drive organizational alignment and gain the buy-in you’ll need for your roadmap to succeed. We’ll also show you how to leverage your prioritization framework to facilitate that cross-functional work.
