Chapter Six

Choose your route – Evaluate your alternatives

It’s usually not possible to pursue all alternatives; at least not simultaneously. Many of the important decisions we face are forks in the road, where choosing one path precludes us from pursuing others that might also be attractive. So, after having thought divergently to create potential alternatives, let’s think convergently to identify the alternative that would best help us achieve our goals, given limited resources.

Chapter 5 helped us prepare that decision by clarifying what matters to us in an alternative. Because we typically want more than one thing – say, a solution that is fast and cheap and high quality – and there is usually no alternative that scores best on all counts, we’ll need to trade off something that we value for getting a little more of something that we value even more. To be sure, if you have found an alternative with a great score on all criteria, feel free to skip this chapter!

A loop diagram depicts FrED. The steps are: frame, explore and decide. Decide comprises two parts, evaluate alternatives and convince; convince is in bold.

To decide which alternative is best, let’s bring the alternatives and criteria together. It’s an important step, a bit like the moment at a car factory when the chassis, transmission and engine come together on the production line – a procedure tellingly called ‘marriage’. How do we accomplish a marriage in decision making? Well, a simple decision matrix can be surprisingly helpful, as it incorporates the four components of the decision: the quest that we identified in Chapters 1–3, the alternatives that we obtained in Chapter 4, and the criteria identified in the last chapter. The matrix enables us to systematically evaluate each alternative on each criterion (these evaluations being the fourth component), thereby exposing each alternative’s trade-offs and helping us identify which, on balance, works best.

 ASSEMBLE THE MATRIX’S STRUCTURE

Think of a decision matrix as a whole made of two parts: its structure – which consists of the quest, the alternatives and the criteria – and its inside, the evaluation of each alternative on each criterion.

A matrix structure of the decision matrix.

By separating the structure from the evaluations, you delay discussing your preferred alternative. This is valuable because the accuracy of intuitive judgment is improved when people don’t make a global evaluation until the end of a structured process.1

The preceding chapters helped us assemble the structure; with it in place, let’s populate its content.

 EVALUATE ALTERNATIVES

Systematically evaluating each alternative is good for two reasons. First, using one yardstick for all alternatives enables you to be fairer. Second, you create accountability in your thinking, vis-à-vis yourself and others.

Given these high returns and how easy decision matrices are to use, they should be in wide use. And yet we remain surprised by how rarely we see seasoned executives use a matrix to guide their difficult decisions. Some say that it takes too much time; others that it’s useless because they can make any matrix turn out the way they want it to. We’ll address these critiques shortly. But first, let’s take a look at what kind of evaluations you can use.

Depending on the complexity of your problem and on how much you can invest in solving it, you may want to rely on evaluations that are either qualitative or quantitative.

Qualitative evaluation: In this simple approach, you rate each alternative using a basic scoring system, say from zero to five stars. The approach is so easy to set up that you might do it on the back of a napkin at your favourite restaurant. Such a qualitative matrix can help you and your team get a quick feel for the benefits and drawbacks of the alternatives, which helps surface the major trade-offs.

Although qualitative matrices are easy to set up, their value is limited, primarily because they can’t factor in the differences in importance you attach to each criterion. If, as is often the case, you value criteria differently – say, to you, quality is much more important than affordability – just tallying the number of stars scored by each alternative won’t tell you which is on balance the best. In fact, when alternatives tally lots of stars on low-importance criteria, this qualitative evaluation might even be misleading.

Because of these limitations, qualitative matrices are useful for getting an initial sense of the overall trade-offs, but they don’t lend themselves to evidence-based decision making on complex problems. For those, a quantitative matrix is usually needed.

Quantitative evaluation: To gain a better understanding of your alternatives’ trade-offs, it is often sensible to evaluate them numerically on each criterion and to assign a numerical weight to each criterion, as we discussed in the previous chapter. Doing so enables you to calculate a weighted sum for each alternative.2 You can do that yourself, or you can use the Dragon Master™ app, which will also colour code the results so that it is easier to see the winning alternative.
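As a minimal sketch of the arithmetic (the criteria, weights and scores below are invented for illustration, not taken from any real decision), a weighted sum multiplies each criterion’s score by that criterion’s weight and adds the results. The example also shows why a naive star tally can mislead when criteria matter unequally:

```python
# Hypothetical criteria and weights (weights sum to 1); quality matters most.
criteria = {"quality": 0.5, "speed": 0.3, "affordability": 0.2}

# Each alternative scored 0-5 on each criterion (illustrative numbers).
alternatives = {
    "A": {"quality": 5, "speed": 2, "affordability": 2},
    "B": {"quality": 1, "speed": 5, "affordability": 5},
}

def weighted_sum(scores):
    """Aggregate one alternative's scores using the criterion weights."""
    return sum(criteria[c] * s for c, s in scores.items())

for name, scores in alternatives.items():
    tally = sum(scores.values())  # naive, unweighted star count
    print(f"{name}: tally={tally}, weighted={weighted_sum(scores):.1f}")
```

Here B ‘wins’ the unweighted tally (11 stars to 9), but A scores higher on the weighted sum (3.5 to 3.0), because quality carries half the weight – exactly the reversal a qualitative matrix can hide.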

A table for quantitative evaluation.

Note that some managers fall into the trap of focusing on the aggregate score without understanding the sub-scores on each criterion.

How you evaluate alternatives on criteria is both a science and an art. It’s a science because you use a structured, transparent and verifiable process to break down your problem-solving approach into discrete steps, systematically challenging your thinking at each stage. But setting up a matrix is also an art, because you make many choices – quite a few of them implicit – about how you gather and evaluate evidence. In the end, your goal is less to find the objectively right answer – which most often doesn’t exist – than to improve the quality of the conversations that you have with yourself and others in your search for a great outcome.

In our experience, executives often make it too much of an art, though, not using an evidence-based analysis for their scoring. As a result, they might not detect their biases. Furthermore, such a casual approach to evaluation means that the support for their recommendation is weak, and believing them becomes a matter of opinion. Remember, what is asserted without evidence can be dismissed without evidence. If you plug in numbers from thin air to populate your matrix, don’t expect that it will be as convincing as if you use a verifiable analysis.

Therefore, your challenge is to be as rigorous in your analysis as your limited resources allow you to be. Ask yourself what evidence would be needed to evaluate each alternative on each criterion, be explicit with your assumptions, test your approach with people who disagree with you, and, overall, challenge your assumptions.

In general, you want to argue as solidly as possible for and against each alternative, rather than just look at one perspective.3 It might be useful to think of yourself as a lawyer preparing to plead a case to a judge – and attempting to make the plea as compelling as possible – but not knowing until the last minute whether you will plead in favour or against the case. It might also be useful to adopt various perspectives, such as organisational, personal, and technical, to gain further insight into your alternatives.4

 MAKE AN ON-BALANCE DECISION

Evaluating each alternative on each criterion will help you eliminate those that are clearly inferior across the board. Doing so is straightforward with a quantitative matrix: If alternative A scores below alternative B on all criteria, A is dominated by B, and you can safely remove it from further consideration.5
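The dominance screen described above is mechanical enough to sketch in a few lines of code (the alternative names and scores below are hypothetical). Following the definition in the text, A is dominated by B if B scores better on every criterion:

```python
# Hypothetical scores per alternative, one number per criterion.
scores = {
    "A": (2, 2, 1),
    "B": (4, 3, 2),  # beats A on every criterion, so A is dominated
    "C": (1, 5, 1),  # trades off against B, so neither dominates the other
}

def dominates(b, a):
    """True if b scores strictly better than a on every criterion."""
    return all(x > y for x, y in zip(b, a))

non_dominated = {
    name for name in scores
    if not any(dominates(scores[other], scores[name])
               for other in scores if other != name)
}
print(non_dominated)  # A drops out; B and C stay for the trade-off discussion
```

The survivors are the alternatives worth discussing; the real work of weighing trade-offs only starts once the dominated ones are out of the way.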

Often, though, you will face trade-offs that pose difficult dilemmas. For a recent example, consider how politicians needed to develop a response to Covid that balanced limiting the spread of the virus against limiting the economic damage and psychological distress of locked-down populations. Balancing competing objectives is difficult, but not trying to balance them misses the point. Yes, it is always possible to maximise performance on one criterion, but at some point the price paid becomes exorbitant, so decision makers must identify what is reasonable.

The point is that even though you might have diligently followed the FrED process, you are still likely to face difficult dilemmas come decision time. FrED doesn’t remove the dilemmas; it merely exposes them.

That may seem suboptimal but merely exposing dilemmas is still an important contribution: it makes it easier to seek input from stakeholders to help you refine your question, alternatives, criteria and evaluations. Exposing dilemmas also lays the groundwork for creating new alternatives that bypass the trade-offs (see the integrative thinking section below).

Recall the caveat we raised earlier in the chapter: numerous executives mention that they can make the matrix produce the outcome that they want simply by adjusting the weights of the criteria and the evaluations. ‘Sure, you can do that’, we reply, ‘but committing your thinking to a matrix will enable others to identify your (conscious or unconscious) biases much more easily than if the process took place during a fleeting discussion.’ In short, using a matrix makes you more accountable for your thinking.

In our experience, you will often find discrepancies between your intuitively favoured alternative and the one that scores best in the matrix. If that’s the case, investigate them. Perhaps some of the criteria aren’t sufficiently mutually exclusive? Or maybe an important criterion is missing? For an illustration, go back to the last chapter, where a team thought that they had identified a great alternative, but only because they had missed some key criteria – an omission they spotted through their matrix’s results. When they modified their analysis to account for it, they reached different conclusions.

Naturally, no amount of analysis can guarantee that your chosen alternative is the best possible one but, as a general rule, it pays off to treat any deviation from your gut feeling as a signal that more analysis is needed. (More on how to manage uncertainty in Chapter 9.)

Generate support by conveying procedural fairness

Not everyone will agree with your decision. Saying yes to one alternative means saying no to many others, which is challenging. As management scholar Richard Rumelt points out: ‘There is difficult psychological, political and organisational work involved in saying “no” to whole worlds of hopes, dreams and aspirations.’6

Furthermore, given that we don’t all value the same things, there will inevitably be important stakeholders who will not agree with your conclusions. We have all experienced disappointing some people in our efforts to achieve the goals we set for our organisation, so we probably all know that just telling people what you have decided can fall short.

But, if not that, then, how should you explain your decision? Well, you may want to stress how you’ve integrated their perspectives in the decision. Research on procedural fairness in courts has shown that defendants are more ‘satisfied’ with even a relatively severe punishment when they feel that their perspectives were earnestly considered throughout the trial. In other words, people want to know that their side of the story is heard.7

What applies to court proceedings may also hold when making complex decisions. To create procedural fairness, integrate others’ preferred alternatives and relevant concerns early in the problem-solving process. This will enable you to provide a more balanced account of your final decision, pointing to the strengths of those alternatives that were not chosen.

Beyond demonstrating that you have considered your stakeholders’ concerns, also pay attention to how you present your preferred alternative. Research has shown that painting an overly positive picture raises red flags.8 We agree: we’ve noticed that painting a balanced picture is particularly important when more junior team members present to senior executives, as it demonstrates that they have grasped the complexity of the issue and understood the underlying trade-offs. Likewise, if you are being presented with a recommendation, hearing that the preferred alternative scores best on all criteria should be a signal that something is off. There is no free lunch; if something seems too good to be true, it probably is. So, dig deeper to surface the trade-offs that remain implicit.

Demonstrating your rigour and thoughtfulness doesn’t guarantee that everyone will support your solution, of course, but it should improve the odds that even those people who initially opposed your plan support it. That’s just one additional reason why investing time upfront is worthwhile, as it can save you time and convincing efforts down the road.

To sum up, your challenge leading the problem-solving effort is to provide direction by deciding what to do while being empathetic, which you can do by conveying procedural fairness. Finding a good balance is delicate: Although you clearly need to invest effort to get your stakeholders on board, you shouldn’t over-engage. You face many constraints when solving complex problems – not least limitations on the time that you can invest – and you need to judiciously decide how much engagement serves you best.

 TREAT DIFFICULT TRADE-OFFS AS OPPORTUNITIES – INTEGRATIVE THINKING

Max creates his own car to sidestep a difficult trade-off

Max Reisböck was an engineer at BMW in Bavaria in the 1980s.9 Max and his wife wanted to go on holiday with the family and their two young children’s toys, including bikes and tricycles. As he pondered his alternatives, he faced a difficult trade-off: they could take the family’s 3-series sedan, a sporty car that was fun to drive but too small to fit all their stuff. Or they could take their VW estate car, which had lots of boot space but handled a bit like, well, a donkey. In short, they were at a fork in the road. Not a life-or-death situation, yet painful enough for Max to explore whether he could bypass the trade-off altogether.

So, Max got creative. He bought a 3-series sedan, cut off its back, and replaced it with a custom design to create his own estate car. Now called a BMW Touring, Max’s car had the best of both worlds: a sporty car that was spacious enough for the family and toys. It was Max’s unwillingness to accept the established trade-offs, choosing instead to engage in integrative thinking, that enabled him to create a third way.

Interestingly enough, BMW management had been considering building this type of car but had held back, thinking that it would not fit BMW’s sporty image. When senior managers saw Max’s homebuilt model, they were intrigued. In fact, they kept the car at headquarters, and Max had to go on vacation with his VW after all! But they also launched an official project to pursue the Touring, which is now one of the most popular models in BMW’s fleet.

Max’s insistence on looking for a better solution emphasises that, as we stand at a fork in the road, it might be worthwhile to explore whether the tensions the fork unveils can be a catalyst for developing another alternative. We ought to do this particularly when the trade-offs of the current alternatives are too painful to accept.

Management scholar Roger Martin defines integrative thinking as ‘the ability to face constructively the tension of opposing ideas and, instead of choosing one at the expense of the other, generate a creative resolution of the tension in the form of a new idea that contains elements of the opposing ideas but is superior to each.’10

Each step of FrED promotes integrative thinking: Thinking about your quest, exploring a wide range of alternatives, systematically defining your criteria, and systematically evaluating the alternatives creates a foundation to identify what it would take to resolve trade-offs. In other words, the pain that you feel as you ponder having to choose between one of two imperfect solutions becomes the launch pad for developing a third way. Use the tension to ask: ‘How might we create a new alternative using existing building blocks that help us to eliminate the trade-offs?’

Jørgen Vig Knudstorp, former CEO of Lego, stated: ‘When you are a CEO, you are sort of forced all the time to have a simple hypothesis. You know there’s one answer. But instead of reducing everything to one hypothesis, you may actually get wiser if you can contain multiple hypotheses. You notice trade-offs, you notice opportunities’.11 By considering multiple alternatives simultaneously, Knudstorp explored opportunities for different solutions. In short, the goal of integrative thinking is to find an answer that takes the best of various alternatives to produce an outcome that is preferable to any of the existing ones.

 INTERPRET AND CHALLENGE YOUR RESULTS

As you systematically evaluate your alternatives, your matrix highlights the best one. Great! But this isn’t the end of the journey just yet. In fact, the output of the evaluation shouldn’t be viewed as the solution to your original problem but, rather, as offering a clearer picture of the consequences of choosing one alternative or another.12 The process so far is a decision aid, but now you need to evaluate how good an aid it is. To help you do so, evaluate the quality of your analysis as a function of the quality of your reasoning and that of your evidence.

You need high-quality reasoning and high-quality evidence. In the words of mathematician and physicist Henri Poincaré: ‘Science is built up with evidence, as a house is with stones. But a collection of evidence is no more science than a heap of stones is a house.’13

Quality of analysis = Quality of reasoning × Quality of evidence

So, how do you conduct such a quality check? You have various avenues:

  • Perform a sensitivity analysis. What happens to the ranking of your alternatives if you modify the weight of your criteria or the evaluations of the alternatives? If small changes result in drastic reordering of the best alternatives, assume that your results are not robust and lower your confidence in them (more about confidence in Chapter 9). If, on the other hand, even sizeable variations in weights and evaluations keep the ranking in order, you might be more confident in your conclusions.14 Either way, think critically about the ‘so what?’ of your analysis.
  • Take an external perspective: It’s often easier to give thoughtful advice to others than it is to counsel ourselves. Research on construal-level theory shows that distance can bring clarity.15 When we give advice, we find it easier to focus on the most important factors, while our own thinking flits among many variables. In other words, when we think of others, we see the forest; when we think of ourselves, we get stuck in the trees. To gain more distance, ask yourself a few ‘what if?’ questions. Former Intel CEO Andy Grove exemplified this when, faced with a difficult decision around terminating an important project, he asked his top team: ‘What would our successors do?’ Doing so helped the team add some distance to the decision.16
  • Find a devil’s advocate: Remember Chapter 4 where we highlighted the value of promoting (constructive!) dissent? Now might be another good time for one or two devil’s advocates to poke holes into your reasoning. In other words, create a safe environment for the dissenting voices to speak up (see below).
  • Multitrack alternatives if possible: In our teaching and consulting work, executives often tell us that, instead of choosing one alternative, they would like to pursue several simultaneously ‘to keep their options open’. No doubt, if your setting allows it, this can be an effective approach. By not making a difficult decision that doesn’t have to be made, you avoid foregoing other attractive opportunities and you don’t face the risk of failing with the one alternative you selected. In short, you reduce the risk in your portfolio. As you run pilots of these multiple alternatives, you can collect additional information and see if your predictions hold. In an increasingly digital world, it is becoming more feasible to run rapid A/B tests that give you a first understanding of how alternatives perform in real life. Because this can often be done at low marginal cost, multitracking can be one tool to push back the final moment of decision. Beware, though, that doing so is already a decision in and of itself, so you need to keep the cost of piloting under control. Although the benefit of not committing is that your options stay open for longer, you also spread your limited resources across multiple alternatives, which dilutes their effectiveness. You need to assess whether you can afford that dilution.
  • Trust aggregated averages of independent viewpoints over individual estimates: Consider how many of us book our holiday stays these days. Instead of talking to a single friend who just got back from a wonderful place, we consult travel guides and sites that aggregate user opinions. By consolidating independent viewpoints, these resources give us a more solid base for predicting whether that trip to Tuscany will be what we always dreamed of. To be clear, these estimates aren’t foolproof – one of us remembers a restaurant in Rome that had splendid reviews but turned out to be miserable – but, by and large, aggregating independent data points17 into large-scale studies can help us form better judgments.18 Data for your strategic challenge might not always be as readily available as hotel reviews, but if we only have anecdotal evidence – a story someone told us – to justify the evaluation of an alternative, we should take it with a grain of salt. Furthermore, note that using various viewpoints to triangulate on what to do is beneficial primarily if the viewpoints are independent.19 In the end, the quality of our analysis is only as good as the quality of the evidence we can muster to support the evaluations.
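The sensitivity analysis from the first bullet can be sketched quickly (all weights and scores below are invented for illustration): perturb the criterion weights at random many times and count how often the top-ranked alternative survives.

```python
import random

# Hypothetical baseline weights and per-criterion scores.
weights = [0.5, 0.3, 0.2]
scores = {"A": [5, 2, 2], "B": [1, 5, 5], "C": [3, 3, 3]}

def best(w):
    """Return the alternative with the highest weighted sum under weights w."""
    return max(scores, key=lambda name: sum(wi * si for wi, si in zip(w, scores[name])))

random.seed(42)  # make the illustrative run reproducible
baseline = best(weights)
trials, stable = 1000, 0
for _ in range(trials):
    # Jitter each weight by up to +/-0.1; only relative weights matter for ranking.
    perturbed = [max(0.0, w + random.uniform(-0.1, 0.1)) for w in weights]
    if best(perturbed) == baseline:
        stable += 1
print(baseline, stable / trials)  # high share = robust ranking; low share = fragile
```

If the winner holds across nearly all perturbations, the ranking sits on a flat maximum and deserves more confidence; if it flips often, treat the matrix’s verdict as fragile and investigate which weights drive the reversal.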

Create a safe environment

Psychological safety is the extent to which people on a team feel that they can admit mistakes, voice a dissenting opinion, willingly seek feedback, contribute honest feedback, take risks, or acknowledge confusion without risking being rejected or penalised. Empirical evidence supports that it is a strong predictor of team effectiveness across a variety of organisational contexts and geographies.20

Research also shows that psychological safety is associated with learning, which is particularly relevant in complex and fast-changing environments.21

So what? Well, create an environment where dissent is acceptable, in fact, where it’s encouraged. In an ideal setting, team members first dissent and then commit to the decision.

 CHAPTER TAKEAWAYS

Consistently evaluate each alternative on each criterion. Make these evaluations as rigorous as your limited resources allow you to.

Make your stakeholders feel heard – even if you don’t choose their preferred alternative, they should feel that their views were integrated.

Use your decision matrix to improve your sensemaking: identify the ‘so what?’ of your analysis.

Surface trade-offs. Cases where an alternative scores highest on all criteria are extremely rare, so if that’s the case with yours, assume something is off. Likewise, if you are recommended an alternative that only has upsides, treat this as a red flag.

 CHAPTER 6 NOTES

  1.   Kahneman, D., D. Lovallo and O. Sibony (2019). ‘A structured approach to strategic decisions.’ MIT Sloan Management Review, Spring 2019.
  2.   Although this simple additive model is very popular, it faces limitations when the criteria aren’t fully mutually exclusive. For more on the topic, see, for instance, p. 48 and pp. 54–55 of Goodwin, P. and G. Wright (2014). Decision analysis for management judgment, John Wiley & Sons. For a review of multi-criteria decision analysis, see Marttunen, M., J. Lienert and V. Belton (2017). ‘Structuring problems for multi-criteria decision analysis in practice: A literature review of method combinations.’ European Journal of Operational Research 263(1): 1–17.
  3.   For discussions, see Lovallo, D. and O. Sibony (2010). ‘The case for behavioral strategy.’ McKinsey Quarterly. See also pp. 103–104 of Chevallier, A. (2016). Strategic thinking in complex problem solving. Oxford, UK, Oxford University Press.
  4.   Nutt, P. C. (2004). ‘Expanding the search for alternatives during strategic decision-making.’ Academy of Management Perspectives 18(4): 13–28.
  5.   See p. 49 of Goodwin, P. and G. Wright (2014). Decision analysis for management judgment, John Wiley & Sons.
  6.   See p. 62 of Rumelt, R. P. (2011). Good strategy/bad strategy: The difference and why it matters.
  7.   See, for instance, Lind, E. A., C. T. Kulik, M. Ambrose and M. V. de Vera Park (1993). ‘Individual and corporate dispute resolution: Using procedural fairness as a decision heuristic.’ Administrative Science Quarterly: 224–251.
  8.   Friestad, M. and P. Wright (1994). ‘The persuasion knowledge model: How people cope with persuasion attempts.’ Journal of Consumer Research 21(1): 1–31.
  9.   For a short description of Max Reisböck’s story, see BMW (2020). ‘The seven generations of the BMW 3 series.’ Retrieved 29 July 2021, from https://www.bmw.com/en/automotive-life/bmw-3-series-generations.html.
  10.  See p. 15 of Martin, R. L. (2009). The opposable mind: How successful leaders win through integrative thinking, Harvard Business Press.
  11.  See p. 8 of Riel, J. and R. L. Martin (2017). Creating great choices: A leader’s guide to integrative thinking, Harvard Business Press.
  12.  Riabacke, M., M. Danielson and L. Ekenberg (2012). ‘State-of-the-art prescriptive criteria weight elicitation.’ Advances in Decision Sciences 2012.
  13.  See p. 156 of Poincaré, H. (1905). Science and hypothesis. New York, The Walter Scott Publishing Co., Ltd. See also pp. 124–131 and p. 269 of Gauch, H. G. (2003). Scientific method in practice, Cambridge University Press.
  14.  This is called a ‘flat maximum’; see p. 51 of Goodwin, P. and G. Wright (2014). Decision analysis for management judgment, John Wiley & Sons.
  15.  Trope, Y. and N. Liberman (2010). ‘Construal-level theory of psychological distance.’ Psychological Review 117(2): 440.
  16.  For the value of an outside-in perspective, see also Kahneman, D. and D. Lovallo (1993). ‘Timid choices and bold forecasts: A cognitive perspective on risk taking.’ Management Science 39(1): 17–31.
  17.  On the dangers of aggregating viewpoints that aren’t independent, imagine a turkey on a US farm before Thanksgiving (for those of you, dear readers, who are not from the US, Americans eat a lot of turkeys for Thanksgiving). Observing that the farmer feeds it every day, this American turkey might conclude that the farmer is its friend and come to expect that he will continue to feed it ad infinitum. It might also ask the opinion of its fellow turkeys on the farm who, based on the same evidence, might reach the same conclusion: ‘Yep, the farmer feeds us every day, therefore the farmer is our friend. Expect more food tomorrow.’ Unfortunately for this turkey, reality catches up on Thanksgiving morning. Our turkey might have been better served by triangulating evidence from independent sources – for instance, looking for old turkeys on the farm, to see if there were such a thing, or asking the dog what happens to turkeys on the farm. For more on the turkey, see pp. 40–42 of Taleb (2007), or refer back to the original, Bertrand Russell’s chicken – either way, it unfortunately doesn’t end well for the feathered fellows. If the turkey example illustrates the dangers of pooling biased estimates, Galton’s ox helps demonstrate how pooling independent estimates reduces noise. At a village fair, Sir Francis Galton asked 787 villagers to estimate the weight of an ox. Although none guessed the right answer, the average was near perfect (a 1,207-lb estimate for a true weight of 1,198 lb). Galton, F. (1907). ‘Vox populi.’ Nature 75: 450–451. Additional empirical results suggest that combining the opinions of independent agents outperforms the opinion of the best independent agent only when the accuracy of the agents is relatively similar. Kurvers, R. H., S. M. Herzog, R. Hertwig, J. Krause, P. A. Carney, A. Bogart, G. Argenziano, I. Zalaudek and M. Wolf (2016). ‘Boosting medical diagnostics by pooling independent judgments.’ Proceedings of the National Academy of Sciences 113(31): 8777–8782.
  18.  This is the principle behind meta-analyses, which, although imperfect, provide an excellent standard of evidential quality. See, for instance, Stegenga, J. (2011). ‘Is meta-analysis the platinum standard of evidence?’ Studies in History and Philosophy of Science Part C: Studies in History and Philosophy of Biological and Biomedical Sciences 42(4): 497–507. Greco, T., A. Zangrillo, G. Biondi-Zoccai and G. Landoni (2013). ‘Meta-analysis: Pitfalls and hints.’ Heart, Lung and Vessels 5(4): 219.
  19.  Wallsten, T. S. and A. Diederich (2001). ‘Understanding pooled subjective probability estimates.’ Mathematical Social Sciences 41(1): 1–18. Johnson, T. R., D. V. Budescu and T. S. Wallsten (2001). ‘Averaging probability judgments: Monte Carlo analyses of asymptotic diagnostic value.’ Journal of Behavioral Decision Making 14(2): 123–140.
  20.  See Tannenbaum, S. I., A. M. Traylor, E. J. Thomas and E. Salas (2021). ‘Managing teamwork in the face of pandemic: Evidence-based tips.’ BMJ Quality & Safety 30(1): 59–63. Frazier, M. L., S. Fainshmidt, R. L. Klinger, A. Pezeshkan and V. Vracheva (2017). ‘Psychological safety: A meta-analytic review and extension.’ Personnel Psychology 70(1): 113–165.
  21.  Edmondson, A. C. and Z. Lei (2014). ‘Psychological safety: The history, renaissance, and future of an interpersonal construct.’ Annual Review of Organizational Psychology and Organizational Behavior 1(1): 23–43.