CHAPTER 2

Have We Lost the Plot?

An axiom is a statement or a fact that is universally accepted to be true. The concept of an axiom originated in mathematics and logic. For example, the reflexive axiom states that “a” equals “a”. A closely related axiom states that things that are equal to the same thing must be equal to each other. These statements are so trivial that they are actually difficult to state in words. But we all know and understand, without questioning it, that five equals five.

Axioms prove to be extremely useful in developing mathematics as a field of study. All of the math that you learned in school is based on a few basic axioms. In fact, if you took a basic math theory course in high school or university, you probably recall memorizing Euclid’s axioms and postulates. Without having to reprove these axioms each time, extremely complicated concepts can be developed and expanded upon very efficiently.

In life, we also have axioms: the sun will rise in the east, apple pie is good, and the cable repairman will be late for the scheduled appointment, leaving you to waste a good portion of your day waiting for someone to show up and implement a five-minute fix. As in mathematics, axioms in life tend to be useful, although in life we are not often trying to be as rigorously precise as in mathematics. Axioms, however, are only useful to the extent that they are true. Acting on a false axiom can lead to mistakes, incorrect conclusions, mishaps, and unintended consequences.

Risk management has a series of axioms as well. However, in risk management we tend to be quick to assume that a hunch is truly a truth, something that we should accept as an axiom. We are also quick to build a series of more complicated theories and strategies for risk management upon these unexamined, and perhaps shaky, half-truths that we all too readily accept as axioms.

The purpose of this chapter is to examine more critically some of these axioms and the possible effects and unintended consequences that may arise from accepting them so readily. It seems at times as if we have lost the plot in managing risk. We have become slaves to perfecting the axioms rather than striving for better ideas for managing risk. Let’s begin to explore some of these false axioms.

Frequentist Statistics

Let’s play a little gambling game. I will flip a coin in the air 1,000 times. Each time it comes up heads, I will pay you $1 million. However, each time it comes up tails, you will have to pay me $900,000. We will settle up the net amount won and lost on each flip at the end of the 1,000 flips.

Would you enter into this game of chance with me? Of course, you will—or at least you should! On each flip, you expect to win on average $50,000. Yes, there will be times that you lose on a flip, but if we flip the coin 1,000 times, you expect that the coin will come up heads approximately 500 times and tails approximately 500 times. Multiplying the expected gain per flip by the 1,000 flips, you calculate your expected net winnings to be $50 million. You will be set on easy street for life and I will need to find some deep-pocketed financial backers. If you are really mathematically inclined, you might calculate the value at risk for your gambling position, but assuming a fair coin (am I perhaps trying to deceive you with some weighted coins?), you will quickly calculate that the probability of your losing money is exceedingly low. You will take me for a sucker and plan your early retirement and life of future luxury.

Let’s however change the game ever so slightly. It is boring and tiring to flip a coin 1,000 times, so let’s just flip it once and multiply the outcome by 1,000. Mathematically, the expected outcome is the same, so it should not be a problem for you to readily accept this slight change. In fact, you will get your expected winnings that much quicker, as you do not need to wait for all of that flipping to be conducted.

Despite your state of excitement about becoming instantly rich, you have a nagging feeling about agreeing to the gamble, as you should. With the one-time flip, although the expected outcome is the same, the odds of your losing money are now 50 percent. Furthermore, the amount of money you stand to lose with that 50 percent probability, $900 million, is enough to bankrupt you unless you are one of the approximately 2,000 billionaires in the world.
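For readers who want to check the arithmetic, here is a minimal sketch in Python (assuming the scipy library is available) that computes the expected winnings and the probability of finishing with a net loss under each version of the gamble. It is an illustration of the coin-flipping story only, not a template for formal risk analysis.

```python
# A sketch of the two coin-flipping gambles described above.
# Requires Python with scipy installed.
from scipy.stats import binom

n = 1000                          # flips in the original game
win, lose = 1_000_000, 900_000    # payoff for heads, payment for tails

# The expected gain per flip is the same in both games: $50,000.
ev_per_flip = 0.5 * win - 0.5 * lose

# Game 1: 1,000 independent flips. The net result is a loss only if the
# number of heads H satisfies win*H - lose*(n - H) < 0, i.e. H <= 473 here.
max_losing_heads = int(lose * n / (win + lose))
p_loss_many_flips = binom.cdf(max_losing_heads, n, 0.5)

# Game 2: a single flip with the payoff multiplied by 1,000.
# You lose money whenever the coin comes up tails.
p_loss_one_flip = 0.5

print(f"Expected winnings over the game: ${ev_per_flip * n:,.0f}")
print(f"Probability of a net loss, 1,000 flips: {p_loss_many_flips:.3f}")
print(f"Probability of a net loss, one magnified flip: {p_loss_one_flip:.1f}")
```

The expected value is identical in both versions; only the chance of walking away poorer, and the size of the possible loss, changes.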

What is the difference between the two gambles? The difference is frequentist statistics. As brilliantly explained by finance professor and author Riccardo Rebonato in his enlightening book Plight of the Fortune Tellers,1 frequentist statistics are frequently abused in risk management. Frequentist statistics applies when an event is repeated many times. In such cases, and assuming we know the underlying statistical distribution of outcomes (a sometimes very bold and incorrect assumption), we can use statistical analysis to determine what the outcomes are likely to be.

Frequentist statistics, for instance, is the basis of actuarial science. For any given person, an actuary cannot tell whether the person will die in the next year or not. However, given a large number of people with similar characteristics, say a collection of 1,000 men between the ages of 60 and 65, of a given socioeconomic status and in otherwise good health, the actuary can predict quite accurately how many of the 1,000 men will die within the next year or even the next five years. This, of course, is in part how life insurance rates are calculated. Frequentist statistics is also the basis behind using credit scores for credit analysis in the granting of credit cards, and behind other types of risk analysis involving large numbers of transactions or interactions.

Frequentist statistics is also the basis for much of risk management analysis. Virtually every program or course in risk management begins with a session on statistical analysis. In fact, as I write this chapter, I am taking breaks from preparing lectures on statistical analysis for a Masters-level university course in risk management. The use of frequentist statistics in risk analysis is so ubiquitous that it is taken as axiomatic that the use of statistics is a valid and valuable risk management tool. However, this is an axiom that needs to be questioned.

There are four critical problems with frequentist statistics. The first is that in life, and particularly so with major decisions that have long-lasting effects, we rarely, if ever, get to make the decision multiple times. In the real world of business, we get to choose a strategic direction perhaps once every 5 to 10 years. (Of course, some organizations seem to change their strategic direction every other week, but that is a known and certain strategy for low employee morale and eventual organizational failure.) The real world of management is more akin to our coin flipping gamble where we only get to flip the coin once and the outcome is magnified. Few of us have a situation where we can grant 5,000 credit cards and rely on the statistical averages to even out among the various cardholders who turn out to be good or bad credits.

Assuming that statistics, and implicitly frequentist statistics, apply when the number of events, or transactions, is small is a serious abuse of statistics. The results can be disastrous, as disastrous as miscalculating the odds in our modified coin flipping gamble.

The second issue with frequentist statistics is that we rarely, if ever, know the true statistics of the underlying process. For instance, in our coin flipping gamble, we can reasonably assume that the coin is fair, and thus assume a 50 percent chance of turning up heads and a 50 percent chance of turning up tails. However, knowing the underlying distribution of events in managerial decisions is not a luxury that we usually have, and we thus need to make bold assumptions. For instance, it is frequently assumed that financial returns follow a normal distribution. The unfortunate reality is that realized returns are actually leptokurtic, which is a fancy-pants way of saying that the odds of extremely positive returns or extremely negative returns are higher than the statistics of the normal distribution tell us they should be. Well, you say, that is an easy fix. Simply replace the normal distribution with a leptokurtic distribution and recalculate the statistics. There is a small problem though: we do not know with any certainty which leptokurtic distribution to use, and its statistics are far harder to work with. We thus revert to using the normal distribution, which approximately works, and then profess amazement when we get financial bubbles and our mathematics appears wonky in hindsight.
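To make the fat-tail point a little more concrete, the sketch below compares the chances that a normal distribution and a fat-tailed distribution assign to extreme moves. The Student-t distribution is my own stand-in for a leptokurtic alternative, chosen purely for illustration; the chapter’s argument does not depend on that particular choice.

```python
# Sketch: tail probabilities under a normal distribution versus a fat-tailed
# (leptokurtic) alternative. A Student-t with few degrees of freedom is used
# here purely as an illustrative stand-in for a fat-tailed distribution.
import numpy as np
from scipy.stats import norm, t

df = 3                                # fewer degrees of freedom = heavier tails
scale = np.sqrt((df - 2) / df)        # rescale so the t also has unit variance

for k in (3, 4, 5):                   # "k standard deviation" moves
    p_normal = norm.sf(k)             # P(move beyond k sigma) under the normal
    p_fat = t.sf(k / scale, df)       # same threshold under the unit-variance t
    print(f"{k}-sigma move: normal {p_normal:.1e}, fat-tailed {p_fat:.1e}, "
          f"ratio about {p_fat / p_normal:.0f}x")
```

The further out into the tail you look, the more the normal assumption understates the odds of an extreme outcome.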

The third issue with frequentist statistics is that it blinds us to paradigm shifts. Consider, for instance, the 2008 financial crisis, which in large part was based on a bust in the housing markets. Using frequentist statistics on the historical data available at the time, it was quite legitimate for the quantitative analysts at the ultraconservative insurance company AIG to state that there was basically no chance that they could ever lose money trading securities based on housing defaults in the United States. A simple analysis of the historical data would have confirmed this. Of course, AIG almost went bankrupt and had to be bailed out by the government to prevent a financial collapse that threatened to take down a significant portion of the U.S. financial sector. What the highly trained analysts at AIG failed to appreciate was that a paradigm shift in the housing market was occurring and that, indeed, people in large numbers would start defaulting on their personal home mortgages with an unprecedented frequency. By definition, paradigm shifts are hard to predict, and thus easy to overlook or ignore when relying on the convenience of a frequentist statistics assumption.

A final issue with frequentist statistics is the problem of distinguishing between a systemic event and a distinct event. This issue is closely related to an event not being repeated frequently enough for the technique to apply. To illustrate, assume that you are a participant in a seminar that I am conducting. There are about 200 people in this seminar, and thus you feel comfortable in applying frequentist statistics. As I am talking, and as you are nodding off due to boredom, I suddenly drop dead at the front of the room. While you might be somewhat shocked, you likely will not be worried for your own well-being. For starters, you know that in a given room there is a nonzero probability that someone will drop dead, and it is better for it to be me who drops dead and not you. Also, you might surmise that my love of pizza has caught up with me, and that with your superior diet of healthy salads you have little to worry about. You quickly conclude that my dropping dead was an idiosyncratic event. However, about 20 seconds later, someone else in the front row drops dead. And then another person, and then another, and soon almost everyone sitting at the front of the room is dead. Now your analysis of the situation changes from my love of pizza idiosyncratically killing me to one of a systemic effect at work, such as poisonous gas being dispersed throughout the room through the air conditioning vents. You correctly start to run out of the room for some fresh air, and you abandon your frequentist statistical thinking.

When using frequentist statistics in the context of risk management, we rarely have the luxury of knowing if our use of frequentist statistics is valid or not until well after the fact. By the time the realization sets in, it is much too late to change our analysis and reverse our decisions based on that analysis.

Frequentist statistics is very tempting. It gives an illusion of precision and predictability. It allows for a wide variety of well-known statistical techniques to be used. It makes the analyst appear smart and knowledgeable. Frequentist statistics makes the effort exerted in staying awake in statistics classes seem to pay off. It is true that it is very useful for those cases where large numbers of events with known statistical processes or distributions apply. The problem is that it can be very tricky to understand when it can be used with confidence and when it should be avoided. The differences between the instances are subtle, just as they were for our coin flipping gamble. Too often, we forget that the universal validity of frequentist statistical analysis is a false axiom of risk management.

Mark Twain once famously said that “there are lies, damned lies, and statistics.” If those fields had been developed in his day, I am sure that he would have added “and then furthermore there is econometrics and big data.” Frequentist statistics is great, but only when the conditions for its use are understood and respected. Otherwise, it is for humorists like Mark Twain to scorn.

Measurement Error

Everyone probably fondly remembers the science lab in high school. Every step of the way you had to take measurements, and your lab instructor continually cautioned you that nothing could be measured with exactness. Thus, you also had to record your measurement error at each step and then correctly propagate those errors through your calculations. The tedious calculation of those measurement errors is why everyone who has the talent becomes a theoretical physicist, rather than an experimental physicist.2

When was the last time you saw a risk measurement made with a measurement error? More importantly, when was the last time you saw those errors carried through the calculation? I suspect, very few times. In risk, we have the false axiom that data, particularly financial data, is measured with zero error. For instance, we see that the closing stock price is $50.38. We take it as fact that the price is $50.38 and not $50.45 or even $50.37. The issue, though, is that the stock price is constantly fluctuating through time. The price of $50.38 just happened to be the price at the instant that we looked at it. All financial prices are constantly moving, and thus it is much more accurate to state a price as a range, with measurement error, than as a fixed and precise number. In fact, there is probably more uncertainty in the exact value of the stock than there was in any of the measurements that you made in your high school science lab. The error is of course magnified when we realize that many risk measurements, such as calculating the value of a large portfolio of stocks, sum up many of these uncertain prices.
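If the individual prices are treated as independent measurements, the standard rule from the science lab applies: the uncertainties add in quadrature. A minimal sketch of that rule, under the independence assumption (in reality stock prices tend to be positively correlated, which typically makes the combined uncertainty larger still):

$$ V = \sum_{i=1}^{N} n_i\, p_i, \qquad \sigma_V = \sqrt{\sum_{i=1}^{N} n_i^{2}\, \sigma_{p_i}^{2}} $$

where $n_i$ is the number of shares of stock $i$, $p_i$ is its measured price, $\sigma_{p_i}$ is the uncertainty in that price, and $\sigma_V$ is the resulting uncertainty in the portfolio value $V$.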

Now, the astute stock portfolio analyst will argue that this is a false issue, as they almost always state the value along with a standard deviation. However, it is well known that standard deviations are notoriously unstable. When was the last time you saw a standard deviation number given with an error measurement?

If you think back to the mathematics of calculating errors from your high school science lab, you quickly realize that the net error range for most risk calculations is often going to be extremely large. Think, for instance, of all of the assumptions that go into something as seemingly objective as pricing a share of stock. Academic finance tells us the factors are the future dividends, the growth rate of those dividends, and the appropriate discount rate that accounts for the riskiness of those dividends. Each of these three factors is inherently uncertain, with a wide range of reasonable values. Yet, I have never seen an analysis that puts an error range on these values, or propagates those errors through the calculation. While it is true that a good analyst will publish a range for what they believe the stock price will be, the range they provide is almost always much smaller than the range that would result from properly propagating the known measurement errors through the calculation.
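As a rough illustration of how wide the resulting range can be, the sketch below pushes some plausible input uncertainties through the simplest dividend-discount formula, P = D/(r - g), by simulation. Every input figure is invented for illustration; the point is the width of the answer, not the particular numbers.

```python
# Sketch: propagating input uncertainty through a dividend-discount
# (Gordon growth) valuation, P = D / (r - g), by Monte Carlo simulation.
# All input ranges below are invented purely for illustration.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

D = rng.normal(2.00, 0.10, n)     # next year's dividend: $2.00 +/- $0.10
g = rng.normal(0.03, 0.01, n)     # dividend growth rate: 3% +/- 1%
r = rng.normal(0.08, 0.01, n)     # discount rate: 8% +/- 1%

valid = r > g                     # the formula is only defined when r exceeds g
P = D[valid] / (r[valid] - g[valid])

lo, mid, hi = np.percentile(P, [2.5, 50, 97.5])
print(f"Median price ${mid:.2f}, 95% range ${lo:.2f} to ${hi:.2f}")
```

Even with these fairly modest input uncertainties, the resulting price range spans a factor of three or more, which is far wider than the ranges analysts typically publish.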

Continue with the argument and consider the measurements of risk that need to be made on nonfinancial data. What is the measurement error in calculating the tolerance of a manufacturing machine? What is the measurement error in calculating the confidence of consumers? What is the measurement error in calculating the potential destruction of an oncoming hurricane?

As risk managers, we especially need to consider the measurement error of Rumsfeld’s famous “unknown unknowns.” The then U.S. secretary of defense, Donald Rumsfeld, was soundly ridiculed for stating that “...there are known knowns; there are things we know we know. We also know there are known unknowns; that is to say we know there are some things we do not know. But there are also unknown unknowns—the ones we don’t know we don’t know.” It is a valid and profound statement, but one that risk managers too frequently try to avoid acknowledging.

What is the measurement error of virtually anything in the future? (Remember that risk is concerned with the future.) No matter what definition of risk you decide upon, it includes some aspect of uncertainty. It is thus very ironic that measurement error is missing in action. The problem, of course, is that incorporating measurement error into risk measurements makes the field seem rather suspect. Far better, for the health of the risk management profession, to assume that measurement error is unworthy of attention.

One particular example illustrates the point. The recovery rates of corporations after bankruptcy, compiled by a well-known rating agency, are reported in one textbook on derivatives.3 The average rate of recovery turns out to be approximately 50 percent. However, the same study shows that the standard deviation, or variation in the measurement, is approximately 25 percent. If we take two standard deviations as our measurement error, that implies that the range for recovery after default is between zero and 100 percent! Incorporating the measurement error in this one instance, of course, makes the risk calculations interesting.
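The back-of-the-envelope arithmetic behind that range is simply:

$$ 50\% \pm 2 \times 25\% \;=\; [0\%,\ 100\%] $$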

Admittedly, I played a little fast and loose with the calculation in the previous paragraph. Also, it is a lot more difficult to incorporate measurement error into most risk calculations than it was in your high school lab experiments. However, the point holds: risk management as a field would be a lot less confident in its projections, but perhaps a lot more accurate (though not as precise) and truthful, if it regularly included measurement error in its calculations.

There is another major benefit to incorporating measurement error, and that is the discipline it imposes of thinking long and hard about the errors in your measurements. There is value in seriously considering the limits of how well you can measure a value or know the true value of something. It is humbling, but it is also enlightening.

It is past time to get rid of the false axiom of no measurement error. It is far more honest, and more productive in the long term, to embrace the old adage that it is better to be approximately right than to be precisely wrong.

Unfortunately, measurement error in risk management is implicitly assumed to be zero most of the time. This is a serious error and a significant false axiom in risk management. It is also highly ironic when one considers that risk is the discipline of uncertainty of the future.

Optimization

Have you ever had lunch with a colleague at work and had something like this statement come up: “Can you imagine that any organization can be as screwed up as ours?” I bet you have. At virtually any organization where two or more people believe they can talk freely, something like that statement will be uttered. I bet you have said it, and I bet that the people who report to you have said it about you behind your back. I know that I have uttered something like that many times, and I am highly confident that the people who have reported to me have said it about me and the decisions that I have made.

Frederick Taylor, the father of scientific management, led us down a path of management that made us believe that virtually all organizational processes could be optimized. Furthermore, if you have been to business school, or even just a management seminar, you have almost certainly drunk that organizational best-practices Kool-Aid yourself. Sadly, organizational optimization is a false axiom—a myth. The reality is at best “satisfication”—a search not for the best solution, but for an acceptable solution that, to borrow a phrase from the millennial generation, does not totally suck.

As will be discussed at length in the next chapter, What Is Complexity, and in a number of other contexts throughout this book, organizations are not machines that conform nicely to something like the laws of physics and engineering. Organizations cannot be, and are not, optimized. They are not subject to a nice set of formulas for which mathematical derivatives can be taken to calculate a maximum or minimum. Nor can risk management be optimized. However, a quick survey of the financial advertisements playing during weekend football games will turn up company after company explaining how they will optimize your financial future by optimizing risk to your personal preferences. It sounds so nice and reassuring, but it is pure poppycock.

There is another problem with optimization. What exactly does it mean to optimize risk? Does it mean that there is as little risk as possible? That, however, would limit the upside risk, and no company wants to do that (do they?). Does the optimization of risk mean that the upside risk is maximized while the downside risk is minimized? Nice thought, but how exactly do you pull that off?

Anyone who has taken a formal course in decision making under risk will recognize the various frameworks: minimax (minimize the maximum loss), maximax (maximize the maximum gain), and various others. These, of course, are all mathematical guidelines, or philosophies, or taxonomies for characterizing a decision, and not hard-and-fast rules on which everyone will obviously agree about the method to use, as the sketch below illustrates. Optimization in risk is a philosophical choice, and despite being a subject of discussion under olive trees since the time of Socrates (and likely even before that), no one has come up with a universally accepted way to optimize philosophical decisions. This is especially so for risk, where everyone, and every organization, has a naturally different tolerance for risk, and thus a different calculus for calculating the risk–return tradeoff.
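Here is a tiny sketch, with a payoff table invented purely for illustration, of how these rules can point to completely different choices for the same set of possible outcomes:

```python
# Sketch: minimax versus maximax applied to the same (invented) payoff table.
payoffs = {                       # option -> payoffs under different scenarios
    "cautious":   [10,   5,   0],
    "balanced":   [40,  10, -20],
    "aggressive": [100,  0, -80],
}

# Minimax: protect against the worst case by maximizing the minimum payoff
# (equivalently, minimizing the maximum loss).
minimax_choice = max(payoffs, key=lambda option: min(payoffs[option]))

# Maximax: chase the best possible outcome and ignore the downside.
maximax_choice = max(payoffs, key=lambda option: max(payoffs[option]))

print(f"Minimax picks:  {minimax_choice}")   # the cautious option
print(f"Maximax picks:  {maximax_choice}")   # the aggressive option
```

Neither answer is wrong; the two rules simply encode different philosophies about which risk matters most, which is exactly the point.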

Thus, we have a false axiom of risk management: the unexamined assumption that risk can be optimized. The reality is “satisfication,” not optimization. In other words, the modus operandi is a word so imprecise that it is not even a real word, rather than the very technical and precise-sounding term optimization.

If It Cannot Be Measured, It Cannot Be Managed

Management guru Peter Drucker stated that “what gets measured gets managed.” Probably in no area of management is this truer than in risk management. It seems at times as if risk management is actually measurement management. I have to personally come clean and admit that I often fall into this trap myself. I regret confidently claiming in a business school discussion that I believed that anything in business could be measured. I was young and foolish when I said it. The problem is that a corollary of this statement is all too often taken to be true as well. That false axiom is that “if it cannot be measured, it cannot be managed.”

Any seasoned and wise risk manager knows that there are quantitative risks as well as qualitative risks. There are risks that we have techniques to measure—even if those measurement techniques may at times be suspect—and there are risks that cannot be measured. Additionally, experienced risk managers realize that it is often the unmeasurable qualitative risks that are the more significant ones and the issues that need management the most and benefit from management the most.

It is nice to be able to measure progress, to be able to definitively answer the question of whether or not things are improving. However, there are a lot of things in life that it would be nice to have be true that simply are not.

You cannot measure stupidity, absentmindedness, emotions, social movements, market complexity, politics, actions of competitors and suppliers, weather, acts of nature, Black Swans, acts of God, and a host of other common yet critical events. These are the types of events that have a much more significant impact on the uncertainty of an organization’s operations than the measurable risks.

Just because something is not measurable with a realistic degree of precision does not mean that you should not at least try to measure it. Furthermore, it most certainly does not mean that you should not manage it. However, this is a message that risk managers, and particularly regulators, seem to forget. The rise of the mathematician who understands measurable risk, and the demise of the experienced yet less quantitatively educated risk manager, is a tragedy as well as a travesty of modern risk management.

Risk is closely tied to human emotion. The response to risk is, at the individual level, an emotional artifact. For an organization, it is a sociological artifact. Emotions, either individually or sociologically, cannot be reduced to a number, nor should they be. Even if they could be tied to a number, that number or value would be constantly changing as the context changes. The emotional nature of risk, however, requires more human input into its management, not less. It requires more empathy and less mathematics. Uber may have self-driving cars, but risk management should not, in a similar fashion, be quantified and digitized so that a bot or a computer makes the decisions—or at least not for the majority of complex risk issues—a point that will be expanded and clarified in the following chapter on complexity.

For the risk decisions that can be quantified, it is fine, and probably preferable, for a computer or a bot to make the decisions. However, risk managers need to develop the wisdom to distinguish between measurable risks and nonmeasurable risks, and to manage accordingly.

Related to the role of measurable risk, and also to the axiom of optimization, is the concept that answers should be calculable. With many risks, the answer is not calculable, and the best that one can hope for is an answer of “maybe.” Not exactly satisfying, but it is usually both a reasonable and a realistic answer and needs to be recognized as such. A real problem is that regulators, and in particular financial regulators, as well as shareholders, senior managers, and board members, will rarely accept “maybe” as an answer. In truth, the old risk saying that “it is better to be approximately right than precisely wrong” seems to have been forgotten.

As a graduate student in physics, I had one professor stress the importance of never starting a calculation until you intuitively know the answer. His reasons for making us do so also apply to risk management. First, the professor had a learning goal in mind. By forcing us to intuitively develop an answer, we as students were forced to develop our intuitions, and thus our understanding of physics. Second, when the resulting calculation, or experiment, produced a different result, we knew that either we had a mistake in our intuitive knowledge, or a mistake in the calculation, or, in the best-case scenario, we had discovered a new phenomenon. When risk management is based on a precise black-box calculation, all of these benefits are needlessly thrown away. The focus on measurable risk tempts us into such black-box calculations, and the learning unfortunately only takes place after painful and costly mistakes.

You are likely familiar with the serenity prayer by Reinhold Niebuhr. It goes, “God grant me the serenity to accept the things I cannot change, courage to change the things I can, and wisdom to know the difference.” I think this should be changed to “God grant me the serenity to accept the things I cannot calculate, courage to calculate the things I can, and wisdom to know the difference.” It is time to regain the plot and to start managing the risks that cannot be measured.

Noise

In 1985, the renowned finance academic Fischer Black wrote a paper that was presented at the annual meeting of the prestigious American Finance Association. The paper was simply titled “Noise.”4 In this very insightful thought piece, which has unfortunately been largely forgotten, Black highlights that too often what we take for meaningful data is, in fact, nothing more than meaningless noise.

Acting on noise, or perhaps more accurately confusing noise for information, leads us to calculate values and measure risk based on false premises. The law of one price in finance states that similar sets of cash flows should have similar prices. Thus, there is a search for similar situations, or arbitrages, from which the value of a situation is calculated. The reality, however, is that false arbitrage is the norm. A false arbitrage is when you think something is similar when, in fact, it is not. For instance, every movement in the financial markets or in commodity prices is seen as significant and debated at length by the TV pundits. While the media needs to fill space and time, we must carefully filter the real data from the noise.

In the quest for measurement and quantification, there is a need (a lust?) for quantitative risk data. Frequently, however, noise is confused for data. The confusion leads to more problems than it solves. The problem obviously becomes even greater in this age when Big Data and the techniques for processing it are becoming ever more popular.

Brains Win

Leslie Orgel was a chemist and a biologist who studied the early origins of life. His studies in chemistry and biology led him to famously claim that “evolution is cleverer than you are.” This also applies to risk management. Organizations are essentially evolutionary systems, as are industries and economies. They are directly akin to biological systems in how they change and evolve. Thus, Orgel’s law about evolution applies to organizations as well. The assumption in risk management, however, is that “brains win”; namely, that with enough brainpower any risk problem can be solved, or better yet optimized. The reality is that, just like all the king’s horses and all the king’s men, risk management cannot put Humpty Dumpty back together again.

Intelligence, creativity, and intellectual effort certainly help in risk management. However, that does not necessarily mean that they win. In any type of complex evolutionary system, the power and the mysteries of evolution will dominate. Referring again to the hit television sitcom The Big Bang Theory, it is the dim-witted waitress Penny who generally succeeds, while the superintelligent physicists bumble through life as their hyper-rationalism and knowledge continually trip them up in their everyday real-life activities. While brains are useful, and necessary, hyper-rationalism rarely succeeds when human emotion is involved. In risk, human emotion is always playing a role.

Concluding Thoughts

While we may not have completely lost the plot in risk management, I do fear that we are well along that path. Risk management needs to be based in real life. It needs to be based in human emotions, and it needs to be based in complexity, which is the subject of the next chapter. Relying on axioms has some significant limits. Axioms make risk management seem like more of a science, and mathematically based axioms make risk management easier to deal with. However, relying on false axioms does not make risk management more realistic or effective. Realizing when these axioms apply and when they do not is key to successful risk management. Without these realizations being made, risk management truly is in danger of losing the plot.

 

1 Riccardo Rebonato, “Plight of the Fortune Tellers: Why We Need to Manage Risk Differently,” Princeton University Press, 2007.

2 If you need anecdotal proof of this statement, refer to the television sitcom The Big Bang Theory where the talented physicist Sheldon Cooper is a theorist, while his presumably less-talented roommate Leonard Hofstadter is an experimental physicist.

3 John Hull, “Options, Futures and Other Derivatives,” 8th ed., Prentice Hall, 2011.

4 Fischer Black, “Noise,” Journal of Finance, Volume 41, Issue 3, July 1986, 529–543.
