Mismatch in expectations from technology

Every business function needs to align toward the Key Business Outcomes and conform to the constraints under which the business operates. In our example here, the deadline is for the business to launch this feature idea before the Big Art show. So, meeting timelines is already a necessary measure of success.

Other indicators of product technology success could be quality, usability, response times, latency, reliability, data privacy, security, and so on. These are traditionally clubbed together under NFRs (nonfunctional requirements). They indicate how the system has been designed or how it operates, and are not really about user behavior. Yet there is no aspect of a product that is nonfunctional or without a bearing on business outcomes. In that sense, "nonfunctional requirements" is a misnomer. NFRs are really technical success criteria. They are also a business stakeholder's decision, based on which outcomes the business wants to pursue.

In many time- and budget-bound software projects, trade-offs on technical success criteria happen without understanding the business context or thinking about the end-to-end product experience.

Let's take a couple of examples. Our app's performance may be fine when handling 100 users, but it could take a hit when we get to 10,000 users. By then, the business has moved on to other priorities and the product isn't ready to make the leap.

We can also think of cases where a product was always meant to be launched in many languages, but the Minimum Viable Product was designed to target users of one language only. When we want to expand to other countries, significant effort will be involved in enabling both the product and the operations to scale and adapt. Also, the effort required to scale software to one new location is not the same as the effort required to scale it to 10 new locations. This is true of operations as well, but that effort is more relatable, since it has more to do with people and process. So, the business is ready to accept the effort needed to set up scalable processes and to hire, train, and retain people. The problem is that the expectations of the technology are so misplaced that the business assumes the technology can scale with minimal investment and effort. The limitations of the technology can then be perceived as a lack of skill or capability in the technology team.

Much depends on how well each team can communicate the impact of doing or not doing something today in terms of a cost tomorrow. Engineering may be able to create software that scales to 5,000 users with minimal effort, but scaling to 500,000 users requires effort of a different order of magnitude. The frame of reference can be vastly skewed here. In the following figure, the increase in the number of users correlates to an increase in costs:

[Figure: the increase in the number of users correlates to an increase in costs]
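The skewed frame of reference can be sketched with a toy cost model. The function and its exponent below are entirely made up for illustration (just as the figure is not based on actual data); the only point is that if scaling effort grows superlinearly, extrapolating from a small deployment badly understates the cost of a large one.

```python
# Toy model: relative engineering effort to support N users.
# The exponent 1.6 is a hypothetical coefficient, not real data; it
# encodes the assumption that effort grows faster than linearly.

def scaling_effort(users: int, base_users: int = 5_000) -> float:
    """Relative effort to support `users`, where supporting
    `base_users` costs 1.0 unit of effort."""
    return (users / base_users) ** 1.6

small = scaling_effort(5_000)    # 1.0 by definition
large = scaling_effort(500_000)  # 100x the users...
print(large / small)             # ...but far more than 100x the effort
```

Under this (assumed) model, a hundredfold increase in users costs over a thousandfold increase in effort, which is exactly the gap between what the business expects and what engineering has to deliver.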

Let's consider a technology that is still in the realm of research, such as artificial intelligence, image recognition, or face recognition. Even with market-ready technology (where viability has been proven and can be applied to business use cases) in these domains, it may be possible to get to 50% accuracy in image matching with some effort. Going from 50% to 80% might require about as much effort again as it took to get to 50%. However, going from 80% to 90% accuracy would be far more complicated, with a significant increase in costs and effort. Every 1% increase after 90% would be herculean, or near impossible, given where the technology in that field currently stands. For instance, the number of variations in image quality that need to be considered could be a factor. The amount of blur, image compression quality, brightness, missing pixels in an image, and so on can all impact the accuracy of results (https://arxiv.org/pdf/1710.01494.pdf). Face recognition from video footage brings in even more dimensions of complexity. The following figure is only for illustrative purposes and is not based on actual data:

[Figure: illustrative curve of cost and effort rising steeply as accuracy increases]
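The diminishing returns described above can be sketched with a hypothetical effort curve in which effort grows without bound as accuracy approaches 100%. The 1/(1 - accuracy) form is an assumption chosen purely for illustration, like the figure itself, not a measurement of any real system.

```python
# Hypothetical diminishing-returns model: each step toward perfect
# accuracy costs more than the previous one. Illustrative only.

def effort_to_reach(accuracy: float) -> float:
    """Relative effort to reach a given accuracy (0 < accuracy < 1)."""
    if not 0 < accuracy < 1:
        raise ValueError("accuracy must be strictly between 0 and 1")
    return 1.0 / (1.0 - accuracy)

# Each incremental step is costlier than the one before it:
step_50_to_80 = effort_to_reach(0.80) - effort_to_reach(0.50)
step_80_to_90 = effort_to_reach(0.90) - effort_to_reach(0.80)
step_90_to_95 = effort_to_reach(0.95) - effort_to_reach(0.90)
print(step_50_to_80, step_80_to_90, step_90_to_95)
```

Under this assumed curve, closing the last few percentage points costs more than all the earlier gains combined, which is the shape of the conversation the business and engineering need to have.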

Now, getting our heads around something like this is going to be hard. We tend to create an analogy with a simple application. However, it's hard to get an apples-to-apples comparison of the effort involved in creating software. The potential of software is in the possibilities it can create, but that's also a bane: once the bar is set so high, anything that lowers expectations can be interpreted as "Maybe you're not working hard enough at this!"

Sometimes the technology isn't ready for the business case, or it is the wrong technology for the use case, or we shouldn't be building it at all when existing products can do it for us. Facial recognition with 50% accuracy may suit a noncritical use case, but when applied to identifying criminals or missing people, the accuracy needs are far higher. In an online ads start-up that was built to show ads based on the images in the website content a user was browsing, the context of the images was also important. The algorithm for showing ads based on celebrity images worked with an accuracy that was acceptable to the business. The problem was that in some cases the news item was about a tragedy involving a celebrity, or a scandal the celebrity was caught up in. Showing the ads without that context could damage the brand's image, a real threat for a start-up looking to win new business. With a limited amount of resources and a highly skewed ratio of technology costs to viability, it remains a business decision whether the investment in technology is worth the value of the business outcomes. This is why I'm making the case that outcomes need to justify technical success criteria.

A different approach is needed when building solutions for short-term benefits than when building systems for long-term benefits. It is not possible to generalize that just because we build an application quickly, it is likely to be full of defects or insecure. By contrast, just because we build a lot of robustness into an application does not mean the product will sell better. There is a cost to building something, a cost to not building something, and a cost to rework. The cost will be justified by the benefits we can reap, but it is important for product, technology, and business stakeholders to align on the loss or gain to the end-to-end product experience caused by the technical approach we are taking today.

In order to arrive at these decisions, the business does not really need to understand design patterns, coding practices, or nuanced technology details. They need to know whether the technology is viable for meeting business outcomes. This viability is based on technology possibilities, constraints, effort, skills needed, resources (hardware and software), time, and other prerequisites. What we can expect and what we cannot expect must both be agreed upon. In every scope-related discussion, I have seen that there are better insights and conversations when we highlight what the business or customer does not get from a product release. When we only highlight the value they will get, the discussions tend to go toward improving that value. When the business realizes what it doesn't get, the discussions lean toward improving the end-to-end product experience.

Should a business care that we wrote unit tests? Does the business care what design patterns we used, or what language or software we chose? We can have general guidelines for healthy and effective ways to follow best practices within our lines of work, but best practices don't define us; outcomes do.
