CHAPTER 6
An Ivory Tower of Babel: Fixing the Confusion about Risk

If you wish to converse with me, define your terms.

—VOLTAIRE

Concepts about risk and even the word risk are sources of considerable confusion even among those who specialize in the topic. There are a lot of well-entrenched and mutually exclusive ideas about risk and risk management, and if we are going to make any progress, we have to work out these differences.

You might think that agreement on what the word risk means should be relatively simple and, for that matter, should have been resolved long ago. If only that were the case. Multiple definitions have evolved in different professions. Some professionals will not even know they are using the term differently from others and may incorrectly believe they are clearly communicating with other risk professionals.

We need our vocabulary and concepts on firm footing before we can begin any heavy lifting with risk management. First, let's clear up some confusion about how the word risk is used in different fields. I offered a clear definition of risk in chapter 2, but it is worth restating here. While we're here, let's also clarify the related concept of uncertainty and distinguish between the qualitative and quantitative use of these terms. (Note that this is the same distinction I make in my earlier book, How to Measure Anything: Finding the Value of Intangibles in Business.)

This specific use of the terms not only represents their de facto use in the insurance industry and certain other professions and areas of research but is also closest to how the general public uses them. And although risk professionals need to be a bit more precise in their use of these terms than the general public, these definitions are otherwise entirely consistent with the definitions offered in all of the major English dictionaries.

But a risk manager needs to know that this specific language is not universally adopted—not even by all risk professionals and academics. Some circles will use a language all their own, and many of them will insist that their definition is the “formal” or the “accepted” definition among experts—unaware that other experts believe the same of other definitions. The lack of a common vocabulary can actually be the root of many disagreements and misconceptions about how to manage risk. So let's review how the definitions of risk and related terms differ and how this confusion can be resolved.

THE FRANK KNIGHT DEFINITION

Frank Knight was an influential economist of the early twentieth century who wrote a text titled Risk, Uncertainty and Profit (1921). The book, which expanded on his 1917 doctoral dissertation, has become what many economists consider a classic. In it, Knight makes a distinction between uncertainty and risk that still influences a large circle of academics and professionals today:

[To differentiate] the measurable uncertainty and an unmeasurable one we may use the term “risk” to designate the former and the term “uncertainty” for the latter.1

According to Knight, we have uncertainty when we are unable to quantify the probabilities of various outcomes, whereas risk applies to situations where the odds of various possible outcomes can be known. But Knight's definition was and is a significant deviation from both popular use and the practical use of these terms in insurance, statistics, engineering, public health, and virtually every other field that deals with risk.

First, Knight makes no mention of the possibility of loss as being part of the meaning of risk. He states that all we need to specify risk is to quantify probabilities for outcomes—contrary to almost every other use of the term in any field. Whether any of those outcomes are undesirable in some way is irrelevant to Knight's definition. In fact, the same year Knight published his book, the influential economist John Maynard Keynes published A Treatise on Probability, in which he defined risk differently. Within the context of making an investment, Keynes defined risk in terms of the probability of a “sacrifice” that may not be rewarded.2 Keynes's definition of risk was not only mathematically well-defined but also consistent with the popular understanding of the word.

Second, Knight's definition of uncertainty seems to be routinely contradicted by other researchers and professionals who speak of “quantifying uncertainty” by applying probabilities to various outcomes. In effect, Knight's definition of risk is what most others would call uncertainty.

Knight starts the preface of his book by stating, “There is little that is fundamentally new in this book.” But his definitions of uncertainty and risk were quite new—in fact, perhaps previously unheard of. Even Knight must have felt that he was breaking new ground because he apparently believed there were no adequate definitions to date that distinguished risk from uncertainty. He wrote in the same text, “Uncertainty must be taken in a sense radically distinct from the familiar notion of risk, from which it has never been properly separated.”3

In reality, there was already an extremely consistent, and sometimes mathematically unambiguous, use of these terms in many fields. Even within economics, it was generally understood that uncertainty can be represented quantitatively by probabilities and, similar to Keynes's definition, that risk must include loss. Consider the following quotes from economics journals, one published just after Knight's text and one well before it:

Probability, then, is concerned with professedly uncertain [emphasis added] judgments. —Economica, 1922.4

The word risk has acquired no technical meaning in economics, but signifies here as elsewhere [emphasis added] chance of damage or loss. —Quarterly Journal of Economics, 1895.5

The first quote speaks of probabilities—a term that is widely understood in economics, math, and statistics to be a quantity—as something that applies to uncertainty in judgments. The second quote acknowledges that risk as a chance of loss is generally understood.

The definitions I previously presented for risk and uncertainty were also used consistently in mathematics, especially in regard to games of chance, long before Knight wrote his book. Prior to 1900, many famous mathematicians such as Bayes, Poisson, and Bernoulli discussed uncertainty as being expressed by quantified probabilities. This directly contradicts Knight's use of the word uncertainty as something immeasurable. And there was so much of this work that I could have written an entire book just about the measurement of uncertainty before 1900. Fortunately, I didn't need to, because one was already written: The History of Statistics: The Measurement of Uncertainty before 1900.6

One interesting definition of uncertainty that I came across was in the field of the psychology of gambling (where, again, uncertainties are quantified) in the early 1900s. Clemens J. France defined uncertainty as “a state of suspense” in his 1902 article, “The Gambling Impulse,” in the American Journal of Psychology. By 1903, this use of the concept of uncertainty within gambling was common enough that it showed up in the International Journal of Ethics: “Some degree of uncertainty, therefore, and willingness to take the risk are essential for a bet.”7

Even shortly after Knight proposed his definitions, other fields carried on quantifying uncertainty and treating risk as the chance of a loss or injury. In 1927, for example, the physicist Werner Heisenberg formulated his famous uncertainty principle, which quantifies the minimum combined uncertainty of the position and momentum of a particle. The mathematicians who dealt with decisions under uncertainty continued to define uncertainty and risk as we have. And the entire insurance industry carried on doing business as usual, apparently without any regard for Knight’s proposed alternative definitions.

A simple test will demonstrate that Knight's use of the term uncertainty is not the way common sense would tell us to use it. Ask people around you the following three questions:

  1. “If I were to flip a coin, would you be uncertain of the outcome before I flipped it?”
  2. “What is the chance that the outcome will be tails?”
  3. “Assume you are not betting anything on the flip or depending on the flip in any other way. Do you have risk in the coin flip?”

Almost anyone you ask would answer “yes, 50 percent, and no.” Knight would have to answer “no, 50 percent, and yes” if he were serious about his definitions. Because our answer to question 2 indicates the odds are quantifiable, Knight would have to say a coin flip is not uncertain (he says uncertainty is immeasurable), even though almost anyone would say it is. Also, because the coin flip meets his only criterion for risk (that the odds are quantifiable), he has to answer yes to question 3, even though the lack of any stake in the outcome would cause most of the rest of us to say there is no risk.

Although Knight’s definitions are quite different from those of many risk management professionals, they influence the topic even today. I was corresponding with a newly minted PhD who had conducted what she called a prequantitative risk analysis of a major government program. As we discussed risk, it became clear that we had different vocabularies. She was using the term uncertainty to mean unquantifiable randomness, just as Knight did. She didn’t mention Knight specifically but pointed out that, even though it was not the common use, this is how the term is “defined in the literature.” For evidence of this, she cited a definition proposed by the editors of a fairly important anthology of decision science, Judgment and Decision Making: An Interdisciplinary Reader, which defined the terms as Knight did.8 I happened to have a copy of this book and in less than five minutes found another article in the same text that discusses how uncertainty is “expressed in terms of probabilities,”9 which is consistent with nearly every other source I find.

Knight himself recognized that this was not the common use of these terms. But, for some reason, despite the volume of prior work that quantified both risk and uncertainty, he felt they were not defined properly. Unfortunately, Knight’s views held a lot of sway with many economists and noneconomists alike, and they still contribute to confusion in the advancement of risk management. Let’s just call it what it is—a blunder. This will brand me a heretic with fans of legendary economists (and there is more of that to come), but it was ill-conceived and didn’t clarify anything.

KNIGHT'S INFLUENCE IN FINANCE AND PROJECT MANAGEMENT

According to Knight's definition, risk doesn't necessarily involve a loss. In fact, risk could be a probability of a good thing happening, and this is the common use in some fields. Terms such as upside risk can be heard in the professions of finance and project management.

In the world of finance, words that are often equated with risk are volatility and variance. If a stock price tends to change drastically and frequently, it is considered to be volatile and, therefore, risky. This definition is sometimes associated with Harry Markowitz, the economist who won the Nobel Prize in Economics for modern portfolio theory (MPT). As briefly mentioned in chapter 5, MPT attempts to define how a rational investor would select investments for a portfolio in a way that produces the best overall risk and return.

Markowitz never explicitly promotes such a definition. He merely states that, in most financial articles, “if … ‘risk’ [were replaced] by ‘variance of return,’ then little change of apparent meaning would result.” He treats volatility, like risk, as something that is acceptable if the return is high enough. In practice, though, analysts who use MPT often equate historical volatility of returns with risk.

Although it is true that a stock with historically high volatility of returns is probably also a risky stock, we have to be careful about how this is different from the definitions I proposed previously. First—and this may seem so obvious that it’s hardly worth mentioning—volatility of a stock is risky for you only if you hold a position in that stock. I usually have a lot of uncertainty about the outcome of the Super Bowl (especially because I don’t follow it closely), but unless I bet money on it, I have no risk.

Second, even if I have something at stake, volatility doesn't necessarily equate to risk. For example, suppose we played a game where I roll a six-sided die and whatever comes up on the roll I multiply by $100 and pay you that amount. You can, therefore, win anywhere from $100 to $600 on a roll. You only have to pay me $100 to play. Is there uncertainty (i.e., variance or volatility) in the outcome of the roll? Yes; you could net nothing from the game or you could net as much as $500. Do you have risk? No; there is no possible result that ends up as a loss for you.

Of course, games such as that don’t usually exist in the market, and that’s why it is understandable how volatility might be used as a sort of synonym for risk. In an actively traded market, the price of such a game would be “bid up” until there was at least some chance of a loss. Imagine if I took the same game and, instead of offering it only to you, I offered it to whoever in your office would give me the highest bid for it. It is very likely that someone out of a group of several people would be willing to pay more than $100 for one roll of the die, in which case that person would be accepting a chance of a loss. The market would make any investment with a highly uncertain outcome cost enough that there is a chance of a loss—and therefore a risk for anyone who invests in it.
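
To make the die-game distinction concrete, here is a minimal simulation sketch in Python (the code, and the $400 bid-up price, are my own illustrative assumptions rather than anything from the original example):

    import random

    def simulate_die_game(price, trials=100_000, seed=1):
        """Net winnings for the 'roll times $100' game at a given price to play."""
        rng = random.Random(seed)
        nets = [rng.randint(1, 6) * 100 - price for _ in range(trials)]
        worst = min(nets)
        average = sum(nets) / trials
        prob_loss = sum(1 for n in nets if n < 0) / trials
        return worst, average, prob_loss

    for price in (100, 400):  # $400 is a hypothetical bid-up price
        worst, average, prob_loss = simulate_die_game(price)
        print(f"price ${price}: worst net ${worst}, average net ${average:,.0f}, "
              f"chance of loss {prob_loss:.0%}")
    # price $100: the worst possible net is $0 -- uncertainty, but no risk
    # price $400: rolls of 1, 2, or 3 lose money -- now there is risk

At the $100 price the output shows plenty of variance but a zero probability of loss; only at the bid-up price does the variance start to include losses.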

But what works in the financial markets is not always relevant to managers dealing with investments in the operation of a firm. If you have the opportunity to invest in, say, better insulated windows for your office building, you may easily save substantially more than the investment. Even though energy costs are uncertain, you might determine that, in order for the new windows not to be cost effective, energy costs would have to be a small fraction of what they are now. The difference between this and a stock is that there is no wider market that can compete with you for this investment. You have an exclusive opportunity to make this investment and other investors cannot just bid up the price (although, eventually, the price of the windows may go up with demand).

It is also possible for operational investments with very little variance to be risky when the expected return is so small that even a slight variance would make it undesirable. You would probably reject such an investment, but in the market, the investment would be priced down until it was attractive to someone.

The idea of risk as potentially a good thing as well as a bad thing also appears in the field of project management. Consider the following definition of project risk provided in The Guide to the “Project Management Body of Knowledge” (PMBoK), 2018 edition, published by the Project Management Institute (PMI): “an uncertain event or condition that, if it occurs, has a positive or negative [emphasis added] effect on a project objective.”

Partly due to the influence of PMI, this definition is acknowledged by a large number of people in project management. The PMI was founded in 1969 and by 2018 it had more than five hundred thousand members worldwide. In addition to publishing the PMBoK, it certifies individuals as project management professionals (PMPs). Although PMI attempts to cover projects of all sorts in all fields, there is a large presence of information technology (IT) project managers in its membership.

There are also UK-based organizations that define risk in this way. The Project Risk Analysis & Management Guide (PRAM Guide, 2010) of the UK Association for Project Management (APM) defines risk as “an uncertain event or set of circumstances which, should it occur, will have an effect on achievement of objectives,” and further notes that “consequences can range from positive to negative.” And the British Standards BS6079–1: 2010 Principles and Guidelines for the Management of Projects and BS6079–2: Project Management Vocabulary define risk as a “combination of the probability or frequency of occurrence of a defined threat or opportunity [emphasis added] and the magnitude of the consequences of the occurrence.”

I was discussing this definition of risk with a PMI-certified PMP, and I pointed out that including positive outcomes as part of risk is a significant departure from how the term is used in the decision sciences, insurance, probabilistic risk analysis in engineering, and most other professions that have been dealing with risks for decades. He asked why we wouldn't want to include all possible outcomes as part of risk and not just negative outcomes. I said, “Because there is already a word for that—uncertainty.”

I had another project manager tell me that risk can be a good thing because “sometimes you have to take risk to gain something.” It is true that you often have to accept a risk in order to gain some reward. But, if you could gain the same reward for less risk, you would. This is like saying that expenses—by themselves—are a good thing because you need them for business operations. But, again, if you could maintain or improve operations while reducing spending, you would certainly try. The fact that rewards often require other sacrifices is not the same thing as saying that those sacrifices are themselves desirable. That's why they are called sacrifices—you are willing to endure them to get something else that you want. If it were a good thing, you would want more of it even if all other things were held constant. You accept more costs or more risks, however, only if you think you are getting more of something else.

The fact is that every English dictionary definition you can find—including Merriam-Webster, American Heritage, Oxford English, or even Dictionary.com—defines risk in terms of peril, danger, chance of loss, injury, or harm. Not one mentions risk as including the possibility of a positive outcome alone. Risk as opportunity, in and of itself (as opposed to something one is willing to accept in exchange for opportunity), also contradicts the most established use of the word in the practical world of insurance as well as the theoretical world of decision theory. As we will see, risk aversion as used in decision theory is always in the context of aversion to a probability of a loss, not aversion to a probability of a gain.

Because PMBoK and the other project management standards don't appear to ever cite Knight's work, it isn't clear that PMI was influenced by it. At least we know it wasn't informed by decision science, actuarial science, or probabilistic risk analysis in general. And being confused about the meaning of the word risk isn't the only problem with PMI's approach to risk management. I will be discussing PMI again when I talk about problems with their risk assessment approach.

In summary, potentially varied outcomes imply risk only if some of the outcomes involve losses. Our definition of risk applies equally well regardless of whether the investment is traded on the market or is an operational investment exclusive to the management of a business.

A CONSTRUCTION ENGINEERING DEFINITION

I came across another use of the term risk when I was consulting on risk analysis in the construction engineering industry. It was common for engineers to put ranges on the costs of an engineering project and they would refer to this as the variance model. The price of steel might vary during the course of construction, so they would have to put a range on this value. This was likewise done for the hourly rates of various labor categories or the amount of effort required for each category. The uncertainty about these items would be captured as ranges such as “The hourly cost of this labor next year will be $40 to $60 per hour” or “This structure will take seventy-five to ninety-five days to finish.”

Fair enough; but they didn't consider this a risk of the project. The separate risk model was a list of specific events that may or may not happen, such as “There is a 10 percent chance of an onsite accident that would cause a work stoppage” or “There is a 20 percent chance of a strike among the electricians.” This use of the word risk makes an arbitrary distinction about risk based on whether the source of the uncertainty is a continuous value or a discrete event.

In the definition I propose for risk, the price of steel and labor, which could be much higher than they expected, would be a legitimate source of risk. A construction project has some expected benefit and it is quite possible for increasing costs and delayed schedules to wipe out that benefit and even cause a net loss for the project. Some uncertain outcomes result in a loss and that is all we need to call it a risk. Risk should have nothing to do with whether the uncertainty is a discrete event or a range of values.

RISK AS EXPECTED LOSS

I sometimes come across risk defined as “the chance of an unfortunate event times the cost if such an event occurred.” I’ve encountered this use of the term in nuclear power, many government agencies, and sometimes IT projects. The product of the probability of some event and the loss from the event is called the expected loss of the event.

Any reader new to the decision sciences should note that when risk analysts or decision scientists use the word expected they mean “probability-weighted average.” An expected loss is the chance of each possible loss times the size of the loss totaled for all losses (this value can be very different from the loss that is the most likely).

This definition was going down the right path before it took an unnecessary turn. It acknowledges the need for measurable uncertainty and loss, but it also makes an unnecessary assumption about the decision-maker: it assumes the decision-maker is “risk neutral” instead of “risk averse,” as most people are. To a risk-neutral person, the value of an uncertain outcome is equal to its expected value, that is, the probability-weighted average of all the outcomes. For example, consider which of the following you would prefer:

  • A coin flip that pays you $20,000 on heads and costs you $10,000 on tails.
  • A certain payment to you of $5,000.

To a risk-neutral person, these are identical, because they both have the same expected value: 0.5 × $20,000 + 0.5 × (−$10,000) = $5,000. However, because most people are not risk neutral, it’s too presumptuous to just compute the expected loss and equate that to their risk preference.

How much the manager values a given risk (that is, how much she is willing to pay to avoid it) depends on her risk aversion, and this cannot be determined from simply knowing the odds and the losses involved. Some people might consider the two presented options equivalent if the certain payment were $2,000. Some might even be willing to pay not to have to flip the coin to avoid the chance of a $10,000 loss. But we will get to quantifying risk aversion later.

We can, instead, just leave the risk in its separate components until we apply it to a given risk-averse decision-maker. This treats risk as a sort of vector quantity. Vector quantities, which are common in physics, are quantities that can be described only in two or more dimensions. Single-dimension quantities, such as mass or charge, are expressed with one number, such as “mass of 11.3 kilograms” or “charge of .005 coulombs.” But vector quantities, such as velocity or angular momentum, require both a magnitude and a direction to fully describe them.

As with vector quantities in physics, we don't have to collapse the magnitude of the losses and the chance of loss into one number. We can even have a large number of possible outcomes, each with its own probability and loss. If there are many negative outcomes, and they each have a probability and a magnitude of loss, then that entire table of data is the risk (see Exhibit 6.1). Of course, losses and their probabilities often have a continuum of values. If a fire occurs at a major facility, there is a range of possible loss and each point on that range has an associated probability.

Any of the definitions you might find for risk that state that risk is “the probability/chance and magnitude/amount/severity of a danger/harm/loss/injury” implicitly treat risk as a vector. The quantification of risk is both the probability and the consequence and doesn’t require that they be multiplied together.

EXHIBIT 6.1 Example of the Risk of a Project Failure Expressed as a Vector Quantity

Event                                              Probability  Loss
Total project failure—loss of capital investment   4%           $5–12 million
Partial failure—incomplete adoption                7%           $1–4 million
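
As a minimal sketch of this idea in code (the class and field names are my own illustrative choices, not from the text), Exhibit 6.1 could be represented with the probability and loss components kept separate:

    from dataclasses import dataclass

    @dataclass
    class RiskEvent:
        description: str
        probability: float  # probability the event occurs
        loss_low: float     # low end of the loss range, in dollars
        loss_high: float    # high end of the loss range, in dollars

    # The risk IS this whole table. Nothing is multiplied together,
    # so no assumption of risk neutrality is baked in.
    project_risk = [
        RiskEvent("Total project failure—loss of capital investment",
                  0.04, 5_000_000, 12_000_000),
        RiskEvent("Partial failure—incomplete adoption",
                  0.07, 1_000_000, 4_000_000),
    ]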

In order to determine whether one set of probabilities and losses is more undesirable than another, we will still need to compare them on a single quantity. We just don't need to assume risk neutrality. Instead, we need to know how to quantify aversion to risk. In other words, we need to measure our tolerance for risk.

DEFINING RISK TOLERANCE

How much risk are you willing to take? As we just discussed, most firms are risk averse to some degree instead of risk neutral. But exactly how risk averse? This is a very specific question that should get an equally specific answer. There is a mathematically unambiguous way to state this, but, unfortunately, it is often described in terms so ambiguous it becomes virtually useless in real decision-making.

A statement about how much risk an individual or organization is willing to endure may be called a risk tolerance or sometimes risk appetite. Many managers will use these interchangeably and they are often communicated using agreed-on policy statements such as the following:

The company will only tolerate low-to-moderate gross exposure to delivery of operational performance targets including network reliability and capacity and asset condition, disaster recovery and succession planning, breakdown in information systems or information integrity.

I found this example in the white paper of a consulting firm that helps companies come up with these types of policy statements. This is apparently a real risk appetite from one of their clients.

Now, let's consider what this statement is actually telling its target audience. What is “low-to-moderate”? If they have a plan for a $4 million investment that will reduce network outages by half, is it justified? What if that same investment could reduce the chance of “breakdown in information systems” from an annual probability of 6 percent to 2 percent? Statements such as these require so much more interpretation when applied to real-world decisions that they have little practical relevance as a standard policy.

In chapter 4, I described one way to establish—in an unambiguous way—how much risk an organization is willing to take. Using the chart for the loss exceedance curve (LEC), we described another curve that we want the LEC to be entirely under. We called this the risk tolerance curve. If the LEC is above the risk tolerance curve at any point, then we say we have too much risk. It is a clear-cut rule and leaves no room for interpretation.
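
As a sketch of just how clear-cut that rule is, the comparison can be expressed in a few lines of Python (the curve points below are made up for illustration; each point is a loss level and the annual probability of losing that much or more):

    # Illustrative curves: (loss in $ millions, annual probability of exceeding it)
    lec = [(1, 0.25), (5, 0.10), (10, 0.04), (50, 0.01)]
    tolerance = [(1, 0.30), (5, 0.12), (10, 0.03), (50, 0.005)]

    def too_much_risk(lec, tolerance):
        """True if the loss exceedance curve rises above the tolerance curve anywhere."""
        allowed = dict(tolerance)
        return any(prob > allowed[loss] for loss, prob in lec if loss in allowed)

    print(too_much_risk(lec, tolerance))  # True: at $10M, 4% exceeds the tolerated 3%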

Still, there is more we can clarify. We can also quantify how much risk is acceptable depending on a potential reward. The risk tolerance curve on an LEC chart is really just a way of stating the maximum bearable risk, independent of potential reward. That in itself is much more useful than the wordy and ambiguous “risk appetite” example from the white paper, but if we also quantify the trade-off of risk and return, then the risk tolerance can be even more useful.

For example, would your firm pay $220,000 to avoid a 2 percent chance of losing $10 million? You wouldn't if you were risk neutral, because a risk neutral person would consider that risk to be exactly equal to losing $200,000 for certain and would not pay more than that to avoid the risk. But your firm would pay that if it was more like the typical insurance customer.

A way to answer a question like that is to determine a certain monetary equivalent (CME). A CME is an exact and certain amount that someone would consider equivalent to a bet with multiple uncertain outcomes. If we raise the payment until you are just barely indifferent between paying to avoid the risk and keeping it, then that amount is the CME for that risk, expressed as a negative value. For example, if you would pay up to $250,000 to avoid that risk, then the risk has a CME of −$250,000.

The CME can also apply to uncertain rewards. For example, consider a choice between a 20 percent chance of winning $10 million and a certain payment of $1 million. Which would your organization prefer? If you really were risk neutral, then you would value the uncertain reward at exactly $2 million and would prefer it over a certain payoff of $1 million. But some firms (and most individuals, I think) would prefer the certain amount. Now, if the $1 million cash in hand is preferred, perhaps an even lower certain amount would still be preferred over the uncertain reward. Again, whatever the point of indifference is for a given person or organization is the CME for that uncertain reward, but now it would be expressed as a positive amount (because it is a gain, not a loss). If your CME is, say, $300,000 in this case, then you are indifferent between being paid $300,000 and taking your chances with a 20 percent chance of winning $10 million.

We could always do this for any combination of uncertain losses and gains, but we don't want to do this on a case-by-case basis. For reasons we will see in the next chapter, our risk tolerance is something that changes frequently and unconsciously. If you had to make such judgments for every bet, you would inevitably act more risk averse on some than others. Fortunately, we can infer what your risk tolerance is by looking at just a few choices. Then the CME can be calculated for any other situations without having to make more subjective judgments.

One way to do this calculation comes from decision analysis (DA), which was first discussed in chapter 5. DA is based on the work of Ron Howard at Stanford, and it was inspired by the earlier work of other researchers, such as Oskar Morgenstern and John von Neumann. As previously described, this is a large body of theoretical and applied work that deals with making decisions under a state of uncertainty. A central component is establishing a quantitative tolerance for risk.

The basic idea Morgenstern and von Neumann developed is that we don’t think about risk and reward just in terms of probability-weighted dollar amounts, but rather probability-weighted utility. The reason most of us (other than risk-neutral individuals) don’t consider a 20 percent chance of winning $10 million equivalent to $2 million is that utility is nonlinear. In other words, the value you perceive from $10 million (i.e., its utility) is not five times as much as the value you perceive from $2 million. From the point of view of most individuals (not a large firm), the first $2 million may have a much bigger impact on his or her life than the next $2 million. Another $2 million after that would have even less utility, and so on.

If we can describe a person’s (or organization’s) utility function, then we have a way to work out the CME for any set of uncertain outcomes. Morgenstern and von Neumann provided a rigorous proof for how to do this under certain assumptions about a rational person. Because the person is rational, he or she would have to follow certain commonsense rules about preferences (for example, if you prefer A to B and B to C, then you can’t prefer C to A). They found several fascinating, nonintuitive results.

One utility function that meets the basic mathematical requirements is shown below. Now, I will tell you in advance that there are some problems with this approach when modeling the risk tolerance of real decision-makers. But the solutions are still based on modifications of this method. So bear with me while I explain the exponential utility function and how to use it to compute the CME. I’ve written the following function in the form of an Excel formula. (An example of this can be downloaded at www.howtomeasureanything.com/riskmanagement.)

Utility = 1 − EXP(−X/S)

X is a given reward or loss (so −X is negative when X is a gain and positive when X is a loss). S represents a scale that is unique to a given decision-maker or organization, and it is another way to define risk tolerance. Exhibit 6.2 shows an example of the exponential utility function where S is set equal to $5 million.

It is important to note that the exponential utility function produces a maximum utility of 1 no matter how much the reward is. If S is small the decision-maker reaches “maximum utility” with a smaller reward than if S were larger. A loss, however, has a limitless negative utility. In other words, pleasure maxes out; pain does not.

If we want to work out the CME for a given uncertain outcome, we can use the following formula. (Again, this is written as you would write it in Excel, and this example is included in the spreadsheet on the website.)

CME = −S × LN(1 − Pr × Utility)

Here, Pr is the probability of a reward and S and Utility refer to the same quantities mentioned in the previous formula. To apply this to our example of a 20 percent chance of winning $10 million, a firm with an S of $5 million would compute its utility for the gain using the previous utility formula, which gives us a utility of 0.8647. Using the utility of the gain and the probability of the gain in the CME formula we get

CME = −$5,000,000 × LN(1 − 0.2 × 0.8647) = $949,348

EXHIBIT 6.2 The Exponential Utility Function Where S = $5 Million

In other words, the firm would be indifferent between a certain cash payment of $949,348 and a 20 percent chance of winning $10 million. If you, individually, would be indifferent at a much lower certain payoff, that only means your S is much smaller. For example, if your CME were $50,000 (the point where you would be indifferent between that certain amount and the uncertain reward), then your S would be just $224,091.

If we used the $5 million value for S for the 2 percent chance of losing $10 million, the CME would be −$601,260. If the company could pay $500,000 for insurance to avoid that risk entirely, it would pay it.
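
The two Excel formulas above translate directly into Python. The following sketch (a straightforward reimplementation, not the book’s own spreadsheet) reproduces the worked numbers in the text:

    import math

    def utility(x, s):
        """Exponential utility of an outcome x (gain if positive, loss if negative).
        Excel equivalent: =1 - EXP(-X/S)"""
        return 1 - math.exp(-x / s)

    def cme(pr, x, s):
        """Certain monetary equivalent of a probability pr of outcome x.
        Excel equivalent: =-S * LN(1 - Pr * Utility)"""
        return -s * math.log(1 - pr * utility(x, s))

    S = 5_000_000
    print(round(cme(0.20, 10_000_000, S)))   # ~949,348: 20% chance of winning $10M
    print(round(cme(0.02, -10_000_000, S)))  # ~-601,260: 2% chance of losing $10M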

Ron Howard proposed a kind of game to measure S. Imagine we play a game in which the outcome is based on a single coin flip. You could win some amount of money x or lose half of x based on that flip. For example, you could win $200 if the coin flip lands on heads or lose $100 if it lands on tails. Would you take that bet? It is a good bet if you assume you are going to play it a very large number of times. You would definitely come out ahead in the long run. But suppose you could only play it once. If so, how large would you be willing to make x so that the game is still just barely acceptable to you? We can use that as a measure of risk tolerance. For example, if I said I am willing to make x equal to $1,000, then I would be willing to win $1,000 or lose $500 on a single coin flip.

Howard shows how the largest number you would be willing to accept for x can actually be used as a measure of S. Just like the expected value approach described previously, we could reduce risk to a single monetary value, except that we don't have to assume that a decision-maker is risk neutral.
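
Here is a numerical sketch of Howard’s game under the exponential utility above (my own illustration of the relationship, not Howard’s derivation): bisect for the largest x at which the flip, winning x versus losing x/2, still has nonnegative expected utility. The answer lands close to S itself:

    import math

    def expected_bet_utility(x, s):
        """Expected utility of a single flip: win x on heads, lose x/2 on tails."""
        u = lambda v: 1 - math.exp(-v / s)
        return 0.5 * u(x) + 0.5 * u(-x / 2)

    def indifference_x(s):
        """Bisect for the x at which the bet is just barely acceptable."""
        lo, hi = 1.0, 100.0 * s  # the indifference point lies inside this bracket
        for _ in range(80):
            mid = (lo + hi) / 2
            if expected_bet_utility(mid, s) > 0:
                lo = mid  # the bet is still attractive, so the stakes can rise
            else:
                hi = mid
        return (lo + hi) / 2

    S = 5_000_000
    print(round(indifference_x(S)))  # ~4.81 million: the acceptable x roughly equals S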

Now, you may have noticed that these formulas can produce unrealistic results. First, because utility cannot exceed 1, increasing the reward beyond some point adds no value to the decision-maker. For the case where S = $5 million, the CME of a 20 percent chance of winning $50 million is almost the same as that of the same chance of winning $500 million or $5 billion. No matter how big the reward, the CME of a 20 percent chance of winning it never exceeds $1,115,718 for a decision-maker whose S = $5 million.

Also, for large losses, the CME is surprisingly large for even tiny probabilities of a loss. The same decision-maker would pay almost $31 million to avoid a one-in-a-million chance of losing $100 million.
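
Both pathologies are easy to reproduce with the same formulas (again a sketch; the CME function is repeated so the fragment stands alone):

    import math

    def cme(pr, x, s):
        """CME of a probability pr of outcome x under the exponential utility."""
        return -s * math.log(1 - pr * (1 - math.exp(-x / s)))

    S = 5_000_000
    # Gains saturate: the CME of a 20% chance approaches the hard cap of
    # -S * ln(1 - 0.2), about $1,115,718, no matter how large the prize grows.
    for prize in (50_000_000, 500_000_000, 5_000_000_000):
        print(round(cme(0.20, prize, S)))  # ~1,115,661, then the cap to the dollar

    # Losses explode: a one-in-a-million chance of losing $100 million
    # carries a CME of roughly -$31 million.
    print(round(cme(1e-6, -100_000_000, S)))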

One assumption in the proof of the exponential utility function is that utilities are independent of the previous wealth of the individual or firm. In other words, your preferences for these bets would be the same whether you were a billionaire or broke. This assumption is likely one reason why research shows that the risk preferences of individuals deviate from the exponential utility function. The psychologist Daniel Kahneman (whom we will discuss more later) won the 2002 Nobel Prize in Economics for showing empirical evidence of this. Kahneman and his collaborator Amos Tversky called their alternative approach to quantifying real risk preferences prospect theory.

For example, suppose decision-makers said they would be indifferent among the three options described in exhibit 6.3. Executives may describe their risk tolerance by identifying three such bets as equally preferable. Unfortunately, no version of the earlier exponential function can make all three of these equally preferable (the sketch following the exhibit demonstrates this numerically). Does that mean that the decision-maker is irrational? Well, it just means that if there is a model that is rational, it’s something other than that particular exponential function.

EXHIBIT 6.3 Example of Three Bets Considered Equivalent by a Decision-Maker

Reward Amount Probability of Winning
$10 million  20%
$2 million  50%
$500,000 100%
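
A numerical check of this claim (my own sketch, reusing the exponential CME formula): solve for the S implied by setting each uncertain bet’s CME equal to the certain $500,000. The first two bets imply very different values of S, so no single exponential utility curve can make all three equivalent:

    import math

    def cme(pr, x, s):
        return -s * math.log(1 - pr * (1 - math.exp(-x / s)))

    def implied_s(pr, x, target):
        """Bisect for the S at which a pr chance of winning x has a CME of target."""
        lo, hi = 1e4, 1e9  # the CME rises with S, so a simple bisection works
        for _ in range(80):
            mid = (lo + hi) / 2
            if cme(pr, x, mid) < target:
                lo = mid
            else:
                hi = mid
        return (lo + hi) / 2

    print(round(implied_s(0.20, 10_000_000, 500_000)))  # ~2.27 million
    print(round(implied_s(0.50, 2_000_000, 500_000)))   # ~0.82 million
    # Two different implied S values: the three "equivalent" bets cannot all
    # fit a single exponential utility function.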

The trick, therefore, for any solution to the CME is satisfying both the fundamental axioms and the preferences of decision-makers. Plainly irrational preferences can simply be pointed out to the executives. At least in my experience, they quickly come around and adjust accordingly. In other cases, the problems may be less obvious. It may be necessary to accept that the rules produce rational results only within a given range of preferences but, in practice, this is an entirely reasonable constraint. Even if the model produces nonsensical results (i.e., a decision-maker would prefer investment A to B, B to C, and C to A), that would be okay as long as such irrational outcomes appear only in unrealistic situations (such as investments much larger than the entire portfolio). The spreadsheet example you can download provides some options.

If we can get executives to converge on an internally consistent, rational set of equivalent bets, we will have a way of computing the CME for any combination of risks and rewards. Using the CME, we can evaluate every possible uncertain risk-and-reward choice for a firm or individual. We can say which risk is worse: a 1 percent chance of losing $50 million or a 10 percent chance of losing $10 million. And we can say whether we would approve of an investment that has a range of possible rewards and losses, each with its own probabilities, and whether it is preferable to any other investment. Even if a bet has a range of many possible outcomes, some involving losses and some involving gains, we can work out the CME. If the CME is negative, we would not only reject the bet but also pay to avoid it if necessary. If the CME were positive, we would accept the bet even if we had to pay for the opportunity to take it.

DEFINING PROBABILITY

Does everyone really mean the same thing when they use the word probability? Getting on common footing with this term should be a prerequisite to a common understanding of risk. After all, being able to assess what events are more or less probable should be a key component of risk assessment and, therefore, risk management. What might surprise some risk analysts is that the meaning of this term is somewhat controversial even among statisticians. L. J. Savage once observed the following:

It is unanimously agreed that statistics depends somehow on probability. But, as to what probability is and how it is connected with statistics, there has seldom been such complete disagreement and breakdown of communication since the Tower of Babel.10

Perhaps some of the differing definitions of risk are related to differences in the use of the term probability. When people say that some probabilities are “immeasurable,” they are actually presuming a particular meaning of probability. To some writers on this topic, probability is a kind of idealized frequency of a thing. To them, probability means the frequency of an event for which three criteria are met: (1) it results from a truly random process, (2) that is “strictly repeatable,” and (3) observed over an infinite number of trials. This is the definition promoted by the group of statisticians who call themselves frequentists. This group includes some great statisticians, such as R. A. Fisher.

But this is not a universally held notion even among other statisticians and mathematicians of equal status to Fisher. L. J. Savage and many others take the view that probability is a statement about the uncertainty of the observer, not the objective state of some system. You may assign a different probability to something than I would because you have more information than I do; therefore, your state of uncertainty would be different than mine.

A coin flip illustrates the difference between the two approaches. Saying that a coin flip has a 50 percent chance of landing on heads is a statement about the objective state of a physical system. Here, the frequentist view and subjectivist view would agree on the probability. But these views diverge if I already flipped the coin, looked at the result, and kept the result from you. If you were a subjectivist, you would still say there is a 50 percent chance the coin landed heads. I would say it was either 100 percent or 0 percent because I know the outcome exactly. If you were a frequentist and did not see the result, you would not say the probability was 50 percent. You would say it must be either heads or tails but the state is unknown.

In engineering risk analysis, sometimes a distinction is made among these types of uncertainties. “Aleatory” uncertainty is similar to the frequentist's use of probability. It refers to an “objective” state of the system independent of our knowledge of it, such as the known variability in a population of parts. Engineers may know that 10 percent of a given part fails after one hundred hours of use because they've observed failure rates for thousands of parts. But if they didn't have that data, they may put a probability of failure on a part based on “epistemic” uncertainty. That is, they lack perfect knowledge about the objective facts. There are cases where this distinction can be ambiguous but, fortunately, we don't really even need to make that distinction.

I take the side that the frequentist view of probability has little to do with real-world decisions. We just treat all uncertainty as epistemic. Perhaps any uncertainty could be reduced with more detailed study and measurements. If I discovered that there was one particular reason for a part failure that had to do with a slightly different alloy being used for a metal component, then some engineers would argue that was really epistemic uncertainty and not aleatory. Perhaps you can never conclusively prove that some uncertainty isn't ultimately epistemic. The only practical difference is that it may be more economical to reduce some uncertainties than others.

The criteria of truly random, strictly repeatable, and infinite trials make the frequentist definition a pure mathematical abstraction that never matches problems we have in the real world. A frequentist such as Fisher would have said there is no way to put a probability on anything that does not involve random, repeatable samplings over infinite trials. So you could not put a probability on a kind of cyberattack that never happened before.

Actuaries, however, have to make decisions even for risks for which there could not be even a hundred trials, much less infinite trials. If insurance companies took the frequentist definition literally, they could rarely, if ever, legitimately use probability theory.

In the early twentieth century, the actuary Bruno de Finetti proposed a pragmatic, “operational” definition of probability. Insurance companies are nearly risk neutral for each of the large number of risks they insure. If they believe that you have a 1 percent chance of making a life insurance claim of $1 million next year, then they consider that nearly equivalent to a liability of exactly $10,000 for certain (a little less, actually, if we consider the interest they could make on the premium before having to pay a claim). Because your family is probably more risk averse about the financial consequences of your death, you would be willing to pay a little more than how the insurance company would value the liability. That’s what makes insurance possible. Looking at it another way, if the insurer is indifferent between a certain loss of $10,000 and an uncertain loss of $1 million, then the insurer is, for operational purposes, treating the probability of the uncertain loss as 1 percent ($10,000 divided by $1 million).

Decision psychology gives us another reason to consider the subjectivist view of probability. Later, we will describe another area of research conducted by Daniel Kahneman related to “calibrated probability assessments.” The research of Kahneman and many others shows that even experts providing subjective estimates (with training and other adjustments) can produce realistic probabilities. Hubbard Decision Research has also gathered data on more than one thousand people who have participated in our calibration training, which confirms this finding. That is, when they say something is 80 percent probable, it actually happens about 80 percent of the time; when they say 60 percent probable, it happens about 60 percent of the time; and so on. Because these data contain well over 140,000 individual estimates, we have a number of trials that should make even a frequentist take notice. Over a very large number of trials, it can be shown that subjective estimates can closely match the observed frequencies of events—even when the individual estimates were of one-off events.

In the subjectivist sense of the word probability, a probability is never really immeasurable. Properly trained people can state a probability that best represents their uncertainty about an event. Uncertainty about any event, stated as set of outcomes and their probabilities, can vary from person to person. If you have more or less uncertainty than a colleague for some item, you will state a different probability than he or she would. When it comes to quantifying your own uncertainty, you are the world's leading expert. When you gather more information, your uncertainty will change. This is the practical use of probabilities not only by actuaries but also most real-world decision-makers.

Even with the large amount of empirical evidence supporting the realism of subjective estimates (for trained experts, that is), the frequentist view apparently seems more objectively scientific to some. If a person believes that the subjective view of probability is unscientific or less rigorous in any way than the frequentists’ position, that would be a mistake. In fact, the subjectivist view is often associated with the “Bayesian” school of thought, based on what is known as Bayes’ theorem, a fundamental concept in probability theory developed by Thomas Bayes in the eighteenth century. In 1995, the physicist Edwin T. Jaynes, who specialized in the undeniably scientific field of quantum mechanics, argued for the Bayesian view of probability even in physics:

We are now in possession of proven theorems and masses of worked out numerical examples. As a result, the superiority of Bayesian methods is now a thoroughly demonstrated fact in a hundred different areas.11

ENRICHING THE LEXICON

Let’s summarize the risk terminology and add a few more items to our lexicon. We just reviewed several definitions of risk. Many of these were mutually exclusive, contradicted commonsense uses of the language, and defied even the academic literature available at the time. A risk manager in a large organization with professionals in finance, IT, and perhaps engineering could easily have encountered more than one of these definitions just within his or her own firm. If a risk manager does run into these alternative uses of the word, we have to respond as follows:

  • Risk has to include some probability of a loss—this excludes Knight's definition.
  • Risk involves only losses (not gains)—this excludes PMI's definition.
  • Outside of finance, volatility may not necessarily entail risk—this excludes considering volatility alone as synonymous with risk.
  • Risk is not just the product of probability and loss. Multiplying them together unnecessarily presumes that the decision-maker is risk neutral. Keep risk as a vector quantity in which probability and magnitude of loss are separate until we compare it to the risk aversion of the decision-maker.
  • Risk can be made of discrete or continuous losses and associated probabilities. We do not need to make the distinctions sometimes made in construction engineering that risk is only discrete events.

At the beginning of this chapter, I provided definitions of both risk and uncertainty that are perfectly compatible with all these points. They are more consistent with the common use of the terms as well as being sufficient for quantitative analysis.

An enriched professional vocabulary doesn’t necessitate shoehorning disparate concepts into a single word (as PMI did with risk). We have different terms for different concepts, and the distinctions seem to me to be less about hair-splitting semantics than about clear-cut, night-and-day differences. Here is a summary of the terms we just introduced, along with a couple of other terms that may come in handy:

  • Uncertainty: This includes all sorts of uncertainties, whether they are about negative or positive outcomes. This also includes discrete values (such as whether there will be a labor strike during the project) or continuous values (such as what the cost of the project could be if the project is between one and six months behind schedule). Uncertainty can be measured (contrary to Knight's use of the term) by the assignment of probabilities to various outcomes. Although upside risk doesn't make sense in our terminology, the speaker can communicate the same idea by saying upside uncertainty.
  • Strict uncertainty: This is what many modern decision scientists would call Knight's version of uncertainty. Strict uncertainty is when the possible outcomes are identified but we have no probabilities for each. For the reasons we already stated, this should never have to be the case.
  • Probability: Probability is a quantitative expression of the state of uncertainty of the decision-maker (or the expert the decision-maker is relying on). As such, a probability is always attainable for any situation. The person providing the probability just has to be trained.
  • Risk tolerance: Risk tolerance is described with a mathematically explicit calculation that can tell you if a risk is acceptable. It could refer to the “maximum bearable” risk, represented by a curve that the loss exceedance curve should be under. It can also be a CME function that converts different uncertain outcomes to a fixed dollar amount. A bet with a negative CME is undesirable (you would be willing to pay, if necessary, to avoid the bet) and a bet with a positive CME is desirable (you would even pay more for the opportunity to make the bet).
  • Risk/return analysis: This considers the uncertain downside as well as the uncertain upside of the investment. By explicitly acknowledging that this includes positive outcomes, we don't have to muddy the word risk by shoehorning positive outcomes into it. Part of risk/return analysis is also the consideration of the risk aversion of the decision-maker, and we don't have to assume the decision-maker is risk neutral.
  • Ignorance: This is worse than strict uncertainty because in the state of ignorance, we don't even know the possible outcomes, much less their probabilities. This is what former US Secretary of Defense Donald Rumsfeld and others would have meant by the term unknown unknowns. In effect, most real-world risk models must have some level of ignorance, but this is no showstopper toward better risk management.

One final note about this terminology is that it has to be considered part of a broader field of decision analysis. Just as risk management must be a subset of management in the organization, risk analysis must be a subset of decision analysis. Decisions cannot be based entirely on risk analysis alone but require an analysis of the potential benefits if managers decide to accept a risk.

NOTES

  1. F. Knight, Risk, Uncertainty and Profit (New York: Houghton Mifflin, 1921), 19–20.
  2. Keynes described risk as the product of E and q, where “E measures the net immediate sacrifice which should be made in the hope of obtaining A; q is the probability that this sacrifice will be made in vain; so that qE is the ‘risk.’” J. M. Keynes, A Treatise on Probability (London: Macmillan, 1921), 360.
  3. Knight, Risk, Uncertainty and Profit.
  4. A. Wolf, “Studies in Probability,” Economica 4 (January 1922): 87–97.
  5. J. Haynes, “Risk as an Economic Factor,” Quarterly Journal of Economics 9, no. 4 (July 1895): 409–49.
  6. Stephen Stigler, The History of Statistics: The Measurement of Uncertainty before 1900 (Cambridge, MA: Harvard University Press, 1986).
  7. W. R. Sorley, “Betting and Gambling,” International Journal of Ethics (July 1903).
  8. T. Connolly, H. R. Arkes, and K. R. Hammond, Judgment and Decision Making: An Interdisciplinary Reader, 2nd ed. (New York: Cambridge University Press, 1999).
  9. Baruch Fischhoff, “What Forecasts (Seem to) Mean,” in Judgment and Decision Making: An Interdisciplinary Reader, 2nd ed. (New York: Cambridge University Press, 2000), 362.
  10. Leonard J. Savage, The Foundations of Statistics (New York: John Wiley & Sons, 1954), 2.
  11. Edwin T. Jaynes, Probability Theory: The Logic of Science (St. Louis, MO: Washington University, 1995), xii.