So far, in Part 2 you have explored alternatives. This chapter shows you how to explore criteria that you can use to evaluate these alternatives. You will learn to identify what matters to you and to your stakeholders so that you can deliberately weigh off an alternative’s benefits and drawbacks. You’ll also learn how to prioritise criteria so that they best represent the group’s viewpoint.
Picture this: You face a difficult decision that requires agreeing with one of your colleagues, and it looks like you won’t. You have scheduled a meeting where you hope to reach common ground. Before the meeting, you remind yourself to be open-minded and you commit to seeing the situation from her viewpoint. But once the meeting starts, your colleague states right away what she wants to do and why. You counter with your preferred alternative, revealing an unbridgeable gap. Both of you dig in your heels, emotions dial up to 11, and you get nowhere. It’s a stalemate.
All too often we quickly jump into defending the alternative we prefer. This is what the Center for Applied Rationality’s Julia Galef calls a soldier mindset.1 But our intuitive judgment improves markedly if we delay choosing until the end of a structured process.2 Therefore, it is advisable to start with an open mind or, as Galef calls it, a scout mindset. Part of this open-mindedness is developing various alternatives, which we did in the previous chapter. Another part entails expressing what matters to us as we pursue our treasure – our decision criteria.
Making strategic decisions requires pursuing competing objectives and, thus, inevitably involves making trade-offs.3 For instance, you might be developing a go-to-market strategy for a new product that must be effective and low-cost and aligned with your organisational values. As a result, you will need to give up something that you value for something that you value more. In other words, whichever alternative you select will have a cost. Realising that there is no free lunch is a good reminder that if something seems too easy in the decision process, you’re probably missing something.
If an alternative seems too good to be true, it probably is
We worked with the 20 most senior executives of an organisation from the travel industry on an innovation exercise. It was an intensive multi-day programme during which the teams were furiously developing new offerings. One morning, after a long night of work, one team showed up, grinning. ‘We did it’, they boasted, ‘we found an offer that looks great on all criteria!’ ‘That’s fantastic’, we replied. ‘Just one question: if it’s so good, why haven’t you already done it?’ Their reply was immediate, ‘Well, our CEO would never go for it’. And then they stopped, processing what they had just said.
It took them a minute to realise what was going on. With further probing, they realised that they had forgotten to consider whether the initiative fitted with the overall strategy and whether its implementation risks were acceptable – two concerns high on the CEO’s agenda. So they went back to the drawing board, refining their recommendation using the newly surfaced criteria.
Like you and us, these were smart, hardworking people who had spent significant time working on their challenge. And yet, they missed something fundamental. We bring up this example to highlight that integrating all the relevant criteria – yours and those of your key stakeholders – is far from obvious. Rather, it takes a conscious effort.
Just as we have a tendency to settle too early for a limited set of alternatives, we naturally think of only a portion of the evaluation criteria. We think we know what we want, but a closer examination often proves us wrong. Now, to be fair to ourselves, criteria don’t just pop up in neat lists, and complex decisions often require deep soul searching and iterating between the question, the alternatives, and the criteria. Furthermore, what matters differs from one stakeholder to another, so there is often not one objectively superior alternative. This is worth repeating: Complex problems do not have one right answer, only better (and worse) ones.4 We are not solving for the right answer, we’re solving for an excellent alternative or, at least, an acceptable one.
So, let’s look at what makes a good set of criteria and how we can develop it. The good news is that we can leverage some of the tools from previous chapters, as a good set has criteria that are reasonably collectively exhaustive, mutually exclusive, and insightful.5
A good list has criteria that are distinct from one another, so that you don’t double count anything. It is also reasonably collectively exhaustive, so that you don’t forget anything important.
A basic universal set of criteria might be feasibility and desirability – as you will want to choose an alternative that you can implement (feasibility) and that you want to implement more than the others (desirability). Although this criteria set applies to virtually all decisions, it remains generic – and it is usually worth clarifying what feasibility and desirability mean for your specific decision.
The (not so) quiet Swiss countryside
Making critical decisions without a comprehensive perspective of what matters to you is a common pitfall. Even after ten years, one of us (Albrecht) remembers the purchase of a house here in Switzerland on Lake Geneva. It was a tight market with few opportunities to buy anything. He had a set of criteria, including purchase price, number of rooms, construction quality, proximity to schools, size of garden, and so on. When an opportunity arose, he jumped in quickly to close the deal, fearing that if he didn’t, someone else would snatch it away. He moved into the house and during the first night, with the windows open, the noise from the highway a few kilometres below kept him up all night! It was a huge shock, since a good night’s sleep, preferably with open windows to let in the fresh breeze, was something he valued highly. He just hadn’t acknowledged this criterion clearly enough in his decision-making process.
It took massive renovations – moving windows to the back of the house, in fact – to fix this problem! Now, he has the quiet nights with open windows he was looking for all along, but he might have had an easier time had he been more deliberate with his criteria specification at the outset. If nothing else, he would have walked into this situation with his eyes (and ears!) open, and the first night surprise would not have been as unpleasant. Now, you might say that this should have been obvious. Maybe so. What we see over and over again though (with ourselves and others) is that when time pressure and other stressors kick in during critical decision moments, it’s challenging to see clearly the things that really matter.
Criteria are not just beneficial to help you choose among alternatives but can also serve as a launchpad for creating new alternatives. Think of the house purchase example: If we had been more explicit about the quietness consideration from the outset, it might have led us to look at other neighbourhoods or even other villages.
Note, however, that you want to keep the size of the set of criteria manageable. It should balance being collectively exhaustive with being parsimonious, which might mean excluding secondary criteria.6
Often, criteria overlap. Consider, for instance, the trip from NYC to London. You might care about leg space, privacy, and comfort. Although all of them are potentially important, they mix conceptual levels. Leg space, for instance, might be a sub-criterion of comfort. If you treat them as separate criteria on the same conceptual level, you double count some aspects of the alternatives, which introduces problems in our weighted-sum approach.
Now, in the case of comfort, it is reasonably easy to identify that some criteria overlap. However, with more abstract criteria, such as organisational fit, alignment with cultural values and other similarly intangible notions, it can be more challenging to explore criteria that are conceptually distinct from one another. Nonetheless, it’s critical that you create this clarity both for the sake of your problem-solving efforts and for communicating your conclusions later on. If your audience gets confused because you have not been clear in your thinking, they are less likely to trust your judgment and be convinced by your arguments.
How do you go about doing so? By always asking why something matters. For instance, you might say, leg space is important to you. Why? Because it allows me to stretch out my legs. Why does that matter? Because I can avoid cramps. Why is that important? Because it helps me be more comfortable during the journey. Here you might stop and say that this is enough specificity and that you don’t need to justify why it’s important to be comfortable. The act of asking why a few times has helped you move from instrumental criteria to fundamental criteria. It’s those fundamental criteria that you ultimately want to use to evaluate your alternatives. As a rule of thumb, if two criteria score similarly across alternatives, explore whether one is a sub-criterion of the other. If that’s the case, clean up the set!
Good criteria shed light on what matters in our solution. Two key characteristics of insightful criteria are that they are unambiguous and measurable. Unambiguous means that the criterion is explicitly defined. Consider ‘compensation’ as a criterion for deciding among job opportunities. Are you just considering base pay, or are you also including annual bonuses, health insurance, retirement benefits, disability insurance, and other ancillary benefits? If you do not explicitly define what comes under ‘compensation’, different people will have different interpretations, inviting disagreement.
Relatedly, you should also specify how each criterion will be measured. With financial compensation and other readily quantifiable criteria such as price, weight, time, distance and so on, this can be reasonably straightforward.
However, not all criteria lend themselves to this approach. Qualitative criteria, such as cultural fit, happiness, perceived risk and others, require you and your stakeholders to make judgment calls for how to score them.7 To be clear, there is nothing wrong with making judgment calls, but when presenting your conclusions, you need to be able to explain why alternatives scored high or low in a convincing manner to a potentially critical audience. In these settings, conclusions formed on weaker-quality evidence – say anecdotes, stories, or analogies – will be weaker than those based on ‘hard data’.
On a practical note, it is useful to make all your criteria vary in a consistent direction – for instance, one where a higher score is consistently better. Such criteria are called benefit criteria. Imagine that what matters to us travelling from New York City to London is price, speed, comfort, and greenness (absence of emissions). The latter three – speed, comfort, and greenness – are benefit criteria: for each, an alternative with a higher score is more desirable than one with a lower score. But price isn’t. With price, a higher score makes an alternative worse. There’s an easy fix, though: all we have to do is replace ‘price’ with ‘affordability’. Although seemingly trivial, making all criteria benefit criteria will save you headaches when calculating overall scores, interpreting results, and discussing viewpoints with stakeholders.
Finally, specify an appropriate range for each criterion. A good way to do this is to develop a 0–100 scale for each criterion. So, if we continue with our NYC-to-London example, an alternative scoring 0 in affordability would be the most expensive one. But don’t leave it as such a qualitative description. Instead, define what that means for you. Maybe it’s $1000, maybe it’s $100k, depending on your viewpoint. What matters is that you be as unambiguous as possible, using quantitative measures in lieu of qualitative ones wherever possible.
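To make the 0–100 scale concrete, here is a minimal sketch of how a raw price could be rescaled into an affordability score, so that the most expensive alternative scores 0 and the cheapest scores 100. The fares and alternative names are invented for illustration, not taken from the text:

```python
def affordability(price, cheapest, most_expensive):
    """Rescale a raw price to a 0-100 benefit score.

    The most expensive alternative maps to 0, the cheapest to 100,
    so a higher score is consistently better (a 'benefit criterion').
    """
    return 100 * (most_expensive - price) / (most_expensive - cheapest)

# Hypothetical NYC-to-London fares, in dollars:
fares = {"economy flight": 600, "business flight": 3500, "ocean liner": 5000}
cheapest, most_expensive = min(fares.values()), max(fares.values())

scores = {name: affordability(p, cheapest, most_expensive)
          for name, p in fares.items()}
# economy flight -> 100.0, business flight -> ~34.1, ocean liner -> 0.0
```

The same min-max rescaling works for any quantifiable criterion; for qualitative ones, the 0 and 100 anchors must instead be defined by explicit judgment calls.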
If you’re still making fun of Albrecht for omitting to consider quietness when choosing his house, thinking that developing a good list of criteria can’t be that hard, be careful. In empirical studies, researchers at Duke and Georgia Tech have demonstrated that decision makers can omit nearly half of the criteria they would later consider relevant. Not only that, the participating decision makers perceived the omitted criteria not only as relevant but as almost as important as the criteria they had initially considered!8
So, identifying criteria might not be as simple as it looks; here are five ideas to help.9
IMD explores new markets
Consider a discussion we have been having here at IMD, as we think of becoming more active in northern Africa, and more specifically Tunisia. Instead of discussing this idea in the abstract and figuring out the criteria to evaluate different market entry approaches, we could look at two concrete, yet vastly different, alternatives: entering the Tunisian market with our own operations, as a hub for IMD’s activities, versus entering it with a local partner who would run these operations.
Thinking about what we like and dislike about these alternatives helps us quickly surface various issues: going by ourselves has the benefit of control and higher organisational learning but also the downsides of high investments and associated risk, a slow start-up phase, and a lack of market understanding, among others. Partnering would enable us to be fast, limit our investment, and leverage the know-how of our partner. But we would be less in control of the overall initiative, which might put our brand in danger. You get the point.
Looking at concrete alternatives helps you envision what you might like and dislike in alternatives and, ultimately, lets you ask: ‘What are the conditions that would have to be true for this solution to be a good one?’ Doing so with two alternatives rather than one reduces the blind spots you might otherwise have.
Promote active engagement
Despite your best efforts to engage stakeholders, it might very well be that they don’t contribute to their full potential. One key obstacle that can get in the way is Power Distance (PD). The concept of Power Distance captures ‘the extent to which the less powerful members of institutions and organisations within a country expect and accept that power is distributed unequally’.14 Various cultural aspects influence PD, including national cultures. Countries that score high on PD, such as Brazil, France, or Thailand, are more stratified – economically, socially, and politically. This, in turn, means that people more readily accept autocratic leadership styles.15 Likewise, organisations in these countries tend to have more hierarchical decision-making processes, with limited participation and one-way communication.16
In an extensive study using data from 421 organisational units of a multinational company in 24 countries, organisational scientists Xu Huang and colleagues found that as Power Distance increased, employees were less likely to speak their mind – a phenomenon known as ‘organisational silence’.
The researchers identified two mechanisms that encouraged employees to speak up: First, involving employees in decision-making activities as well as team-building or management-change programmes. Second, and particularly important in cultures with high PD, creating an open and participative climate in which employees perceive that management is supportive of new ideas, suggestions, and even dissenting opinions.17
Finally, not all criteria are equally important, so you need to clarify their relative priority. Say there is a trade-off to be made between customer satisfaction and cost. How much do you value one over the other? This is a complex enough task, but it gets even more difficult in group decision making, where each stakeholder has an opinion.
Over the past decades, decision analysis has yielded many ways to assign weights to criteria to match decision makers’ preferences, yet the community hasn’t converged on a single, widely accepted approach.18 To keep things simple, we propose a direct-rating approach: assign a weight from 1 (weakest) to 5 (strongest) to each criterion.
Weighting is usually iterative, as your understanding of the problem evolves along the way. In particular, you might suffer from the equalising bias, the tendency to allocate similar weights to all criteria. To counter this tendency, one debiasing technique consists of ranking the criteria first and only then assigning them weights.19
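Putting the pieces together, the direct-rating approach can be sketched in a few lines: give each criterion a 1–5 weight, score each alternative 0–100 on every benefit criterion, and compute a weighted average. The weights and scores below are invented for illustration, not the authors’ data:

```python
# Hypothetical 1-5 weights for the NYC-to-London trip (5 = strongest)
weights = {"affordability": 4, "speed": 5, "comfort": 2, "greenness": 3}

# Hypothetical 0-100 scores per alternative on each benefit criterion
alternatives = {
    "economy flight": {"affordability": 90, "speed": 95,
                       "comfort": 30, "greenness": 10},
    "ocean liner":    {"affordability": 20, "speed": 5,
                       "comfort": 85, "greenness": 40},
}

def overall_score(scores, weights):
    """Weighted average: multiply each 0-100 score by its criterion
    weight, then divide by the total weight so the result stays on
    the same 0-100 scale as the individual criteria."""
    total_weight = sum(weights.values())
    return sum(weights[c] * scores[c] for c in weights) / total_weight

ranking = {name: overall_score(s, weights)
           for name, s in alternatives.items()}
```

Because all criteria are benefit criteria on the same 0–100 range, the overall scores are directly comparable, and changing a weight immediately shows how sensitive the ranking is to that stakeholder’s priority.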
Having come this far, use your list to evaluate alternatives, asking yourself whether you would be comfortable living with the resulting decision. If not, you may have overlooked or misstated some criteria. Test whether your criteria would help you explain a prospective decision to someone else; if they wouldn’t, spend more time refining them: What’s unclear? What’s missing?
Summarise what matters in a set of criteria that is reasonably MECE and insightful.
For criteria, ‘insightful’ means that you’re balancing being collectively exhaustive with being parsimonious; you’re giving more weight to the important criteria; you’re making all your criteria vary in a consistent direction; and you’re choosing an appropriate range of performance for each criterion.
Identifying a good list of criteria can be surprisingly challenging. To help you, consider taking several cracks at it; using scenarios; leveraging frameworks; contrasting alternatives; and enlisting others.