#1 – Data we need before we build the feature idea

In Chapter 3, Identifying the Solution and its Impact on Key Business Outcomes, we discussed how to derive impact scores for a feature idea. For any impact on Key Business Outcomes rated higher than 5 on a 0-10 scale, we came up with detailed success metrics. But how did stakeholders arrive at these ratings? What made them rate an idea's impact as 2 on one key business outcome and 8 on another? Was it a gut feeling? Going with our gut is not entirely a bad idea. There are always situations where we have no data or no existing reference. In those cases, we are essentially placing a bet that the feature idea will meet our key business outcomes.

However, we cannot presume that this gut feeling is right and jump straight into building the feature. It is important to step back and look for indicators that point us toward the accuracy or inaccuracy of our gut feeling. We need to find ways to validate the core assumptions we're making about the impact a feature will have on key business outcomes, without building the product or setting up elaborate operations. These could be small experiments that we run to test some hypotheses without spending our resources. I refrain from using the term minimum viable product here, because in many cases, technology or business viability isn't what we're going after. These experiments are more about getting a pulse of the market. They are somewhat like putting up posters for a movie to tease the audience's interest before actually making the movie.

We can activate the interest of our target group by introducing pricing plans with the proposed feature included in one bundle and excluded from another, and see whether customers show an interest in the new feature. We can also try teaser campaigns, landing pages with sign-up options, and so on, to see whether the feature piques our customers' interest, and whether they are willing to pay for it. Problem interviews with a targeted group of customers can also be a useful input. For instance, let's say ArtGalore wants to find out whether introducing a gift-wrapping option will increase purchases of artworks during the festival season in India. We can add content to the ArtGalore website introducing the concept of gifting artworks for the festive season, and track the number of views, clicks, and so on. The entire gifting experience and the end-to-end process need not be built or even thought through until we know that there is enough interest from customers.
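To illustrate how the results of such a landing-page or pricing-bundle experiment might be read, here is a minimal sketch (all function names and numbers are hypothetical, not from any actual ArtGalore data) that compares click-through between two variants with a standard two-proportion z-test, using only the Python standard library:

```python
import math

def two_proportion_ztest(clicks_a, views_a, clicks_b, views_b):
    """Compare conversion rates of two experiment variants.

    Returns (z, p_value) for a two-sided two-proportion z-test.
    """
    rate_a = clicks_a / views_a
    rate_b = clicks_b / views_b
    # Pooled conversion rate under the null hypothesis (no real difference)
    pooled = (clicks_a + clicks_b) / (views_a + views_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / views_a + 1 / views_b))
    z = (rate_a - rate_b) / se
    # Two-sided p-value from the standard normal distribution
    p_value = math.erfc(abs(z) / math.sqrt(2))
    return z, p_value

# Hypothetical numbers: landing page with the gifting teaser vs. without
z, p = two_proportion_ztest(clicks_a=120, views_a=1000, clicks_b=80, views_b=1000)
print(f"z = {z:.2f}, p = {p:.4f}")
```

A small p-value (conventionally below 0.05) suggests the observed difference in interest is unlikely to be noise, which is exactly the kind of early signal that can replace a pure gut feeling before any gifting workflow is built.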

A big advantage of product experiments, especially in software, is that we can be Agile. We have the opportunity to make minor tweaks quickly, run multiple experiments in parallel, and respond fast to what we're observing. This allows us to conserve our resources and direct them toward the things that are working for us.

We need to figure out the best way to validate our bets, but what doesn't work in the early stages of a product may work well at a later stage of maturity. What works well with early adopters may not work with a scaling consumer base. What works in one demographic may not work in another. If we hold onto our opinions without an open mind, we're in for trouble.

Agility and learnability are key when we're figuring out how to survive. Having a testable hypothesis is about validating our riskiest proposition. If our hypothesis is falsified, then it's time to pivot (if the feature idea is core to the business model) or to drop the idea rather than add it to the product backlog. As author Ash Maurya puts it, "Life is too short to build something that nobody wants." We can keep our product backlog lean by adding only those feature ideas that have the backing of early validation metrics.
