Is our product working well for the customer?

We identified features based on business investments/goals. We assigned value scores, identified costs, and found the smartest thing to build within the given constraints of the business. We then defined time-bound success metrics to help us evaluate whether our feature is working well. We also put in place effective mechanisms to capture quantitative and qualitative feedback from consumers.

The metrics and feedback can give us a good sense of how close or how far we are from meeting our success goals. However, as we build different features and work under constraints of time and resources, we tend to make trade-offs. We also build the product experience through multiple features, both within the product itself and as part of the extended service around it, with support from the business functions.

The success metrics here are time-bound, mostly to the near future (perhaps three to five months), but measured over hours, days, or weeks, depending on the nature of the business. The typical metrics that we can capture at a feature level, and that can point us toward the core goals of growth, influence, and sustainability, are as follows (a small computation sketch follows the list):

  • Number of new customers acquired
  • Number of paying customers
  • Retention rate
  • Number of new leads
  • Ratio of leads to conversions
  • Number of renewals (subscribers)
  • Engagement rates
  • Number (and nature) of support requests received
  • Time to convert (from lead to paying customer)
  • Average time taken to complete flow
  • Feature usage patterns (frequency of use, how many users use a feature, and so on)
  • Feature adoption (drop off rates, completion of flow, workflow/activity metrics, and so on)

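Purely as an illustration, the following Python sketch shows one way a few of these feature-level numbers could be computed from raw customer records. The record structure, field names (`acquired_on`, `lead_created_on`, `is_paying`, `churned`), and formulas are assumptions for this example only, not part of any tool or method described in this book.

```python
from dataclasses import dataclass
from datetime import date
from typing import List, Optional

@dataclass
class CustomerRecord:
    """Hypothetical per-customer record; all field names are illustrative."""
    acquired_on: date                # date the customer signed up
    lead_created_on: Optional[date]  # date they first appeared as a lead (None if not lead-sourced)
    is_paying: bool                  # whether they converted to a paying customer
    churned: bool                    # whether they were lost during the measurement window

def feature_metrics(records: List[CustomerRecord], total_leads: int) -> dict:
    """Compute a handful of the feature-level metrics listed above."""
    new_customers = len(records)
    paying = sum(1 for r in records if r.is_paying)
    retained = sum(1 for r in records if not r.churned)
    lead_conversions = [r for r in records if r.lead_created_on and r.is_paying]
    days_to_convert = [(r.acquired_on - r.lead_created_on).days for r in lead_conversions]
    return {
        "new_customers_acquired": new_customers,
        "paying_customers": paying,
        "retention_rate": retained / new_customers if new_customers else 0.0,
        "lead_to_conversion_ratio": len(lead_conversions) / total_leads if total_leads else 0.0,
        "avg_days_lead_to_paying": sum(days_to_convert) / len(days_to_convert) if days_to_convert else None,
    }

# Tiny made-up example: two customers acquired in the window, out of 10 leads.
records = [
    CustomerRecord(date(2023, 2, 1), date(2023, 1, 10), is_paying=True, churned=False),
    CustomerRecord(date(2023, 2, 5), None, is_paying=False, churned=True),
]
print(feature_metrics(records, total_leads=10))
```

The point is not the code itself but that each metric in the list above can be reduced to a simple, repeatable calculation over data we already capture.
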
These are only some indicators of performance. These indicators are also dependent on the nature of the product. B2C metrics could be different from B2B metrics. Moreover, the way we benchmark these metrics also matters.

Benchmarking

We already touched upon the idea of defining realistic and achievable metrics in the context of available business resources, time and other constraints, and product maturity, as well as our own ambitious goals. In this context, benchmarking against industry averages may not really be advisable, especially for an early-stage product. This is partly because industry average benchmarks are not easily available. More so, if you are building a product that is disrupting an industry, chances are that there are no benchmarks at all. Even as a competitive player in an industry with widespread demand, easily available benchmarks may not exist.

It is possible to get caught in the trap of going after numbers that someone outside of our business defines for us. This can be quite damaging, because we will be unable to reconcile our motivations and constraints with those of someone external to the business. Each business context is unique, and it is necessary to set goals that are pertinent to the sustenance and growth of the business, based on that unique context.

In this context, vanity metrics deserve a mention. It is rather convenient to chase likes, views, and shares on social media. They can make us feel great about ourselves without adding an ounce of value to the business. Sometimes, time is spent creating posts on social media, and feeling good about the likes and views, when our customers are corporate decision-makers who aren't really looking for business partners on Facebook. Getting views on posts almost seems to justify the effort put into creating them, but this clearly misses the point.

Let's take hypothetical data for two products. Both products measure the same metrics every quarter:

Product 1

| Features | Number of new customers acquired | Number of paying customers | Number of leads | Number of customers lost |
|---|---|---|---|---|
| Features 1 and 2 | 50 | 0 | 200 | 0 |

Product 2

| Features | Number of new customers acquired | Number of paying customers | Number of leads | Number of customers lost |
|---|---|---|---|---|
| Feature A | 180 | 0 | 200 | 0 |

This data tells us a few notable things. One thing that stands out when comparing the two products is that the ratio of new customers acquired to leads is much higher for product 2 (180 out of 200 leads, or 90%) than for product 1 (50 out of 200 leads, or 25%). More fundamentally, there seems to be no basis for comparing the metrics of these two products, even if we assume they belong to the same domain. What can be gleaned individually is that features 1 and 2 haven't contributed toward getting paying customers, although they haven't led to any customer loss either; the same holds for feature A of product 2.

Metrics that validate or falsify success criteria

Comparing success metrics for a different business or product, or even considering industry benchmarks without context isn't going to help us to validate the success of our product. The only way to figure out whether any of this data is relevant is to see how it ties into any of the success criteria that we have.

Product 1

| Features | Number of new customers acquired | Number of paying customers | Number of leads | Number of customers lost |
|---|---|---|---|---|
| Target success metrics for features 1 and 2 | 100 | 70 | 120 | NA |
| Actual data for features 1 and 2 | 50 | 0 | 200 | 0 |

Product 2

| Features | Number of new customers acquired | Number of paying customers | Number of leads | Number of customers lost |
|---|---|---|---|---|
| Target success metrics for feature A | 200 | 50 | 200 | 0 |
| Actual data for feature A | 180 | 0 | 200 | 0 |

Table: Target success metrics versus actuals; this is only an indicative, simplified example

Now, if we compare these results against the defined success criteria for each feature, we get a much clearer indication of how far from or close to the original goals we are. For product 1, features 1 and 2 were meant to bring in 100 new customers. They brought in 50, meeting only 50% of the goal. The paying customers metric was not met at all, but the number of leads was well above the mark. Customer retention or loss wasn't relevant at all. This is a good indicator that the product needs to improve its approach to acquiring customers and converting them into paying customers. Similarly, for product 2, there is some lag in acquiring new customers, but it is the conversion to paying customers that is clearly lagging.
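
The same comparison can be mechanized. The following sketch is only an illustration, not a prescribed method; it restates product 1's targets and actuals from the table above and computes the fraction of each goal that was met.

```python
# Targets and actuals for product 1 (features 1 and 2), restated from the table
# above. None means the metric had no target (marked NA in the table).
targets = {"new_customers": 100, "paying_customers": 70, "leads": 120, "customers_lost": None}
actuals = {"new_customers": 50, "paying_customers": 0, "leads": 200, "customers_lost": 0}

def goal_attainment(targets: dict, actuals: dict) -> dict:
    """Return the fraction of each target that was met, skipping metrics with no target."""
    return {
        metric: actuals.get(metric, 0) / target
        for metric, target in targets.items()
        if target  # skip None or zero targets
    }

# Prints {'new_customers': 0.5, 'paying_customers': 0.0, 'leads': 1.666...}
print(goal_attainment(targets, actuals))
```

A value well below 1.0 flags a gap against the goal, while a value well above 1.0 (as with leads here) suggests the top of the funnel is healthy but conversion is not.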

Contextual qualitative insights

There may be multiple other minor and major indicators of success, and a relative comparison of original goals versus those achieved will tell us where our product needs improvement. Sometimes, we may need to wait a week more for some potential leads in the pipeline that could convert into customers. Factors preventing success could include budget cycles in the customers' businesses that are delaying their actual conversion into paying customers. Alternatively, there could be a change needed in the layout of the webpage or the placement of the purchase option. These aspects can be gleaned by digging deeper into what these numbers mean and the context behind them.

However, these are still only the quantitative aspects. They can be complemented with qualitative data on contextual insights, user experience, adoption, usage patterns, and so on. Interesting questions to ask when seeking qualitative information about the product experience could be as follows:

  • What was the most frustrating problem we solved for a customer?
  • What was the highest expression of appreciation we received from a customer?
  • What was the worst complaint we received from a customer?
  • Which customer persona is missing from our product users?
  • When was the last time a customer who had dropped off came back and revived their activity?
  • Which cohort/user is most likely to recommend our product to other customers or influence them?

These questions can kindle many kinds of contextual information gathering. Again, these are only indicative questions to start exploring aspects of the customer experience that we may find hard to quantify. The individual consumer's context can add a personal touch and make the data more relevant. It also helps us focus on the consumer without getting bogged down in data alone.

At a feature level, these metrics offer great visibility into areas that need improvement or those that are already working in our favor. Where these metrics fall short is in giving us real insight into whether our product strategy is working well or not. They also do not tell us whether the decisions and trade-offs we made today will hurt us or help us in the long run. Many of the technical success criteria have a longer cycle of value realization. In the short term, making compromises on some product aspects (technical or functional) may work well. This will help us meet immediate timelines, beat competitors, and gain market share, but in the longer run this approach may drag us down.

It is these longer-running strategic insights that will help us to make pivot-or-persevere decisions. In the preceding example, feature 1 seems to be failing to get paying customers. But that alone is not a compelling reason to pivot on our product strategy; it is very hard to conclude this objectively. Even if feature 1 is the crux of the product and epitomizes the riskiest proposition that our product is putting forward, the business must still dig deeper to find out what gap in the product experience was not anticipated, which we can discover from product usage or actual customer feedback. The Febreze case study in Chapter 7, Track, Measure, and Review Customer Feedback, tells us exactly this. We cannot possibly predict to a tee how the market will react to our product. However, we cannot pivot at the first sight of trouble. Pivoting is a decision that is based on how much investment the business can make in validating the riskiest proposition before resources run out. Alignment to strategy, therefore, needs a longer-term outlook. That brings us to the second part of the product insights.

Is our product working well for the business?

Businesses make investments in different functional areas. When a product is core to the success of a business (either complementing the core business drivers or being a core driver itself), the investments that the business makes toward the product will determine how we channelize our resources toward creating value.

The Investment Game that we played as part of our product strategy is crucial not only for determining what to build but also for highlighting the most valuable goals for the business, and for determining whether the business is even investing in the right product goals.

We often miss out on this aspect of product insight. Sometimes, this is because product technology teams get treated as external to business strategy. However, having established that goal setting and defining the Impact Driven Product are collaborative, collectively owned activities, we can assume that business stakeholders will work closely with the product teams to define product direction and strategy.

The purpose of restricting the Investment Game to a limited amount of money is to enforce artificial constraints (of actual money available, human capital, partners, and so on) and to determine where the business wants to place its biggest bet. So far, we have run this with a specific time frame in mind. We seek to answer: which aspects of the business do we want to channelize our resources toward in the next three months in order to realize the most desired business outcomes?

However, within three months we may only have insights about whether our product features (the what and the how) are working well. We also need to ask whether we made the right investments and went after the right goals (the why). The answers to this will tell us how well our product strategy is working. To do this, we will need to review the investments and outcomes over a longer period of time. For this, once again, let's look at the example of product 2 shown earlier in this chapter. This is not based on the ArtGalore case study, but on the target success metrics versus actuals shown earlier. Again, this is a hypothetical example of product performance for a select few metrics, but over four quarters.

Figure: Q1 investments made – product 2

Q2 insights for product 2

| Features | Number of new customers acquired | Number of paying customers |
|---|---|---|
| Goals | 200 | 50 |
| Feature A | 180 | 0 |

Seeing that our acquisition strategy is working well, but that we are unable to generate revenue, let's say that the product investments for the next quarter will be made as shown in the following diagram:

Figure: Q2 investments made – product 2

Q3 performance

| Features | Number of new customers acquired | Number of paying customers |
|---|---|---|
| Goals | 250 | 70 |
| Feature B | 240 | 20 |

Increasing the investments made to generate revenue seems to show minimal results. There is some progress, but not enough to justify the amount of investment made here. We could now analyze what we need to change about our product experience to convert more registered customers into paying users. So, let's say that we figure out that customer engagement may be an area worth investing in. Here are the product investments for Q3:

Figure: Q3 investments made – product 2

Q4 performance

| Features | Number of new customers acquired | Number of paying customers |
|---|---|---|
| Goals | 300 | 100 |
| Features C, D, E | 280 | 50 |

Looking at the investments made in engagement and revenue generation, with the goal of increasing the number of paying customers, this strategy seems to be somewhat working. Let's say that we continue to invest in these three areas of the product for the next quarter:

Figure: Q4 investments made – product 2

Q5 performance

| Features | Number of new customers acquired | Number of paying customers |
|---|---|---|
| Goals | 350 | 150 |
| Feature F | 250 | 60 |

Individually, each product strategy seems to be working well when looking at investments made in each quarter, as we can see in the following figure:

The following is a split of how product investments were made across the four quarters:

The correlation between investments made, product goals, and actual realization may be represented as shown in the following figures:

It appears that where investments have been made in acquisition, the product has delivered fairly well. However, the product has failed to deliver revenue in terms of paying customers. Even though the business has made 45% of its product investment toward gaining paying customers, and then some more toward engagement, which was also intended to meet the revenue metrics, the product strategy hasn't really taken off here. This could mean one of two things: either the investment made is not enough to create significant value in the product, or the product features aren't making the best use of the investment made. Understanding this will mean getting into the details of the product-level success metrics and learning more from the growing pool of new customers who are not yet ready to pay.
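
To make the longer-term view concrete, here is a minimal sketch, again only an illustration rather than a prescribed method, that restates product 2's quarterly goals and actuals from the tables above and computes goal attainment per quarter, so the trend can be read at a glance.

```python
# Quarterly goals and actuals for product 2, restated from the tables above.
# Each tuple is (new customers acquired, paying customers).
quarters = {
    "Q2": {"goal": (200, 50),  "actual": (180, 0)},
    "Q3": {"goal": (250, 70),  "actual": (240, 20)},
    "Q4": {"goal": (300, 100), "actual": (280, 50)},
    "Q5": {"goal": (350, 150), "actual": (250, 60)},
}

def attainment(goal, actual):
    """Fraction of each goal achieved, e.g. (0.9, 0.0) for Q2."""
    return tuple(a / g if g else None for g, a in zip(goal, actual))

for quarter, data in quarters.items():
    acq, pay = attainment(data["goal"], data["actual"])
    print(f"{quarter}: acquisition {acq:.0%} of goal, paying customers {pay:.0%} of goal")

# Expected output (approximately):
# Q2: acquisition 90% of goal, paying customers 0% of goal
# Q3: acquisition 96% of goal, paying customers 29% of goal
# Q4: acquisition 93% of goal, paying customers 50% of goal
# Q5: acquisition 71% of goal, paying customers 40% of goal
```

Read as a trend, acquisition attainment stays high and then dips, while paying-customer attainment improves slowly but never approaches the goal, which is exactly the pattern described above.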

Another aspect that we can see here is where the business is failing to invest. Individual feature metrics may be targeted at finding out how well we meet the feature goals, but what about the overall business metrics? For instance, do we know if we are losing customers along the way? Do we know which channels we are acquiring customers through? What do we lose if we don't increase our marketing and reach beyond the current channels? Will our acquisition strategy continue to work for us?

We seem to be slowly increasing the number of paying customers, but we fall far short of meeting the goals. Are our goals for paying customers far too high? What about other factors, such as actual revenue numbers? Is a smaller number of users bringing in higher value? How do product investments compare against similar investments made in other aspects of the business? How do the actual investments made compare against profits? Should we take a hit on acquiring new customers while we change our strategy for revenue? Should we discover new channels for growth? Is it time to pivot? The answers to some of these questions may be far more relevant when seen as a trend across time, rather than in the context of quarter-to-quarter performance.

Delayed gratification

Especially in areas such as marketing and reach, the cycle time before we can even see the results of our investments is much longer. In order to get the right kind of leads, we may need to experiment with different channels, content, messaging, and product experiences, and we will still not reap immediate results. The trend could look something like this: a high investment is made initially in marketing and reach, and it doesn't show the desired results until Q4, when the continued investments start to pay off.

The same can be said for application stability, security, privacy, or any of the technical success metrics. Sometimes, the success of these investments is the absence of a harmful incident that threatens the product or the business. For some products dealing with highly confidential information, even a single security breach can cause a lot of damage. Anticipating this and planning for the long term require a different mindset and different data trends and insights.

Focusing on the short-term metrics can give the product much-needed acceleration to meet the needs of the customer and to react to the demands from the market. However, looking at longer-term trends and metrics can enable the business to invest in the right business aspects and look out for the longer-term benefits. This will lead to informed trade-offs, rather than looking only at the immediate performance of the product.

Looking at micro-level, short-term indicators alone cannot tell us how to steer our product in an ambiguous business ecosystem. For instance, if we are the first movers and are growing slowly and steadily, delivering on adoption and revenue per our success metrics, should we continue at the same pace or scale? If we were to scale, what opportunities or threats exist? If a new competitor emerges, making huge investments in marketing a similar product and capturing market share, we may have to factor that into our investments. This could also enable us to make informed decisions on which aspects of the product to invest in, or not to enhance, in view of the macro-level factors of market opportunities and the threat from competition.
