It is imperative that today's product teams start with a data mindset. Data strategy and accessibility must be built into a product team's DNA. We cannot assume that we will handle data needs if and when they arise. In many cases, stakeholders don't know the power of data until we show them. Stakeholders also hold back from seeking data because the process of getting it is hard and cumbersome, especially when it feels like they are imposing on the technology team's time. So, it becomes a Catch-22: technology teams don't build a data strategy because they don't see stakeholders asking for data, and stakeholders don't ask for data because there isn't an easy way to access it.
Product strategy must proactively plan and set up ways to collect data and share it transparently, without elaborate procedures. The discussion on success metrics is a good indicator of the type of Key Performance Indicators that should be captured. An effective data strategy doesn't always need complicated digital tools to capture data; simple paper-based observations are sometimes enough. Key metrics around revenue, acquisitions, sales, and so on can even be shared on a whiteboard, with a person assigned exclusively to doing this. This works in a small team with an early-stage product, but finding digital tools in the market that allow real-time visualization isn't very hard either.
The finance team was convinced that investors were not investing every month because we hadn't made recurring investments easy for them. The product team was asked to pick this up as a priority, and we started designing the feature. We soon realized that this was a huge feature to implement, purely based on the complexity of rolling it out and the dependencies on banks to deliver it successfully. We didn't have to estimate story points to figure out how big this was. Also, the paperwork aspect was a government regulation and outside of our control. So, while we could build requests for auto-debits into the workflow of the product, the paperwork still had to be done.
The team was being pressured into delivering this, so we started to gather some data. Why did the finance team think this feature would be so impactful in ensuring predictable monthly investments? The finance team insisted that every single customer they had spoken to wanted this option. Now, 100% of consumers wanting to invest every month is too compelling to ignore. Everyone in the leadership team was now convinced that implementing this feature was crucial for us to get repeat investments. Yet as we dug deeper and looked at our data, we found that only a minuscule percentage of our investors were investing through direct bank debits. The finance team, it turned out, had spoken to only 15 people over the past three months. In a consumer base of over 9,000 folks, 15 (the numbers are only indicative and not actuals) was not a sample big enough to base our product decisions on. Essentially, this was a decision based not on facts, but on an opinion arising out of a limited context. Did it make sense for us to invest in a feature that was impacting so few of our consumers? If all our investors who were investing through other payment options, such as credit cards, debit cards, and payment wallets, had to transition into paying through auto-debit, it presented a huge operational burden for us, given the paperwork involved. It was clear that, given our finance team's capacity, this was not doable.
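To get a feel for why 15 interviews was far too few, consider the standard sample-size calculation for estimating a proportion. The sketch below is purely illustrative (the function name and the ±5% margin are my assumptions, and the 9,000 figure is the indicative one from the story), but it shows the gap between what the finance team had and what a defensible estimate would require:

```python
import math

def sample_size(population: int, margin: float = 0.05,
                z: float = 1.96, p: float = 0.5) -> int:
    """Minimum survey sample to estimate a proportion.

    Standard formula n0 = z^2 * p(1-p) / e^2, with a
    finite-population correction; p = 0.5 is the worst case.
    """
    n0 = (z ** 2) * p * (1 - p) / margin ** 2
    n = n0 / (1 + (n0 - 1) / population)
    return math.ceil(n)

# For a base of ~9,000 investors, a +/-5% margin at 95% confidence:
print(sample_size(9000))  # -> 369 respondents, far more than 15
```

Even this understates the problem: the 15 customers were self-selected (they had reached out to the finance team), so the sample was biased as well as small.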
Once we had invalidated the basis for the claimed impact on business outcomes, we ran a new experiment. We were now trying to validate whether our investors (who were investing through other payment options such as credit cards, debit cards, and payment wallets) were even inclined to invest in us every month, and if so, how many of them were ready to.
We built something very simple to validate this. We introduced an option for users to tell us whether they wanted a reminder service that would nudge them to invest in rural entrepreneurs every month. It took us half a day to add this option to our investment workflow. If they chose this option, we informed them that we hadn't yet built the feature and thanked them for helping us improve our product. After three months of observation, we found that ~12% (the numbers are only indicative and not actuals) of the consumer base who transacted on our website had opted in.
This was a big improvement over our earlier target base. While it was a good enough indicator and worth exploring, we were still limited by our inability to automatically charge credit cards. So, we limited our solution to a reminder service that sent automated emails on specific dates to the customers who had opted in for reinvestment, and we tracked conversions from those emails. We explored our data to see if investments peaked on certain days of the month, found that they did, and scheduled our reminder emails to go out on the peak investment date of each month.
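The peak-date analysis itself needs nothing more sophisticated than counting transactions by day of month. A minimal sketch, assuming a transaction log of (investor, date) pairs — the data here is invented for illustration, not our actual numbers:

```python
from collections import Counter
from datetime import date

# Hypothetical transaction log: (investor_id, transaction_date).
transactions = [
    (1, date(2020, 1, 2)), (2, date(2020, 1, 2)), (3, date(2020, 1, 15)),
    (4, date(2020, 2, 2)), (5, date(2020, 2, 28)), (6, date(2020, 3, 2)),
]

# Count investments by day of month, pooled across all months.
by_day = Counter(d.day for _, d in transactions)

# The most common day is the candidate send date for reminder emails.
peak_day, count = by_day.most_common(1)[0]
print(f"Peak investment day of month: {peak_day} ({count} investments)")
```

With this toy data the peak falls on the 2nd, which is plausible in practice too: salaries often land at the start of the month, so investment activity tends to cluster there.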
After three months of observing conversions from reminder emails, we figured that this strategy was working well enough for us. We continued to sign up more investors and to socialize the payment reminder on our website.
Basing product decisions on data alone is not enough. It is necessary to collect ample verifiable evidence, but it is also important to capture this data at a time when the consumer is in the right context. For instance, asking for feedback on a website's payment process two weeks after a customer purchased something trivial may not work very well. Context, timing, content, and sample size are key to finding data that is relevant and usable.
For instance, my biases influence how I configure my social feeds. I found that a lot of content on my feeds was not appealing to my tastes or opinions. I started unfollowing a lot of people. I got picky about the groups and people I followed. Voilà, my social feed was suddenly palatable and full of things I wanted to hear.
This personal bias could potentially trickle into how we make recommendations on product platforms. We recommend songs/movies/products/blogs based on our consumers' own likes and dislikes. This means that we are essentially appealing to the confirmation bias of our consumers. The more content we show them that appeals to their existing interests, the more likely they are to engage with us. This shows a positive trend in our engagement rates, and our recommendation strategy gets further reinforced. In the long run, though, we are slowly but silently creating highly opinionated individuals who have very little tolerance for anything but their own preferences.
Whether this is good or bad for business is dependent on the business intent itself. However, the bigger question to ask is: how do we learn something new about our customers, if we don't go beyond their current preferences?
Our bias also influences how we interpret data. For example, we might start with a hypothesis that women don't apply for core technology jobs. This might mean that our ads, websites, and social content have nothing that appeals to women. Yet, if the messaging and imagery on our careers website are attuned to middle-aged men in white-collar jobs, can we really claim that we can't find women who are qualified to work with us? Does this prove our hypothesis correct?