CHAPTER 5
The “Four Horsemen” of Risk Management: Some (Mostly) Sincere Attempts to Prevent an Apocalypse

History is a race between education and catastrophe.

—H. G. WELLS

The biggest disasters, like the 2008 financial crisis or the crashes of the 737 MAX aircraft, generate a search for a cause, and in response to that demand, experts will provide a wide variety of theories. Most of these theories are judgment-laden. Explanations involving conspiracy, greed, and even stupidity are easier to generate and accept than more complex explanations that may be closer to the truth.

A bit of wisdom called Hanlon's razor advises us, “Never attribute to malice that which can be adequately explained by stupidity.”1 I would add a clumsier but more accurate corollary to this: “Never attribute to malice or stupidity that which can be explained by moderately rational individuals following incentives in a complex system.” People behaving with no central coordination and acting in their own best interest can still create results that appear to some to be clear proof of conspiracy or a plague of ignorance.

With that in mind, we need to understand how very different forces have evolved to create the state of risk management methods as we see them today. Similar to most systems, cultures, and habits, the current state of risk management is a result of gradual pressures and sudden events that happened along the way. Influential individuals with great ideas appear where and when they do more or less randomly. Wartime necessities and new technologies drove other developments that affect risk management today. Institutions with their own motivations arose and would create momentum for certain methods. These institutions had different research objectives and methods than those created by academics, and they had very different perspectives on the same problem. Those who would apply these methods were influenced by associations that were accidental at least as often as designed.

To map out the current state of affairs, I've divided risk management into four general groups according to the types of problems they focus on and the methods they use. There is a lot of overlap in these sets, and I'm sure others could come up with different and equally valid taxonomies. But I think that individuals who think of themselves as risk managers will tend to associate with one of these groups or their methods.

The “Four Horsemen” of Risk Management

  • Actuaries: These original professional risk managers use a variety of scientific and mathematical methods. Originally they focused on assessing and managing the risks in insurance and pensions, but they have branched out into other areas of risks.
  • War quants: Engineers and scientists during World War II used simulations and set up most decisions as a particular type of mathematical game. Today, their descendants are users of probabilistic risk analysis, decision analysis, and operations research.
  • Economists: After World War II, a new set of financial analysis tools were developed to assess and manage risk and return of various instruments and portfolios. Today, financial analysts of various sorts are the primary users of these methods. There is some overlap with the war quants.
  • Management consultants: Most managers and their advisors use more intuitive approaches to risk management that rely heavily on individual experience. They have also developed detailed “methodologies” for these softer methods, especially after the rising influence of managers addressing information technology. Users and developers of these methods are often business managers themselves or nontechnical business analysts. I'll include auditors of various sorts (safety, accounting, etc.) in this group because certain influential methods they use have a common origin.

Which of these groups are you in? Someone with a management consulting orientation may not have heard of some of the methods used by engineers or actuaries or, if they have, are probably thinking that such methods are impractical. An engineer reading this book may already know that the methods I'm going to discuss are entirely practical but may be unaware that their methods contain systematic errors. A financial analyst or economist may be vaguely aware of some of these solutions from other fields but probably not all of them. Academic researchers (who could have a research focus on any combination of these methods) might not necessarily be following how well methods they research are used in the real world. No matter who you are, there is also a good chance that we will discuss at least some issues outside of your area of focus.

ACTUARIES

Certainly, the oldest profession (in risk management) is practiced in the insurance industry by actuaries. The insurance industry is now often an example of fairly quantitative risk analysis, but there was a period of time—a long one—when insurance existed without what we know today as actuaries.

The word actuary was used as early as the sixteenth century to refer to someone who was a clerk keeping records of accounts. At that time, being an actuary didn't have much to do with probability theory or statistics, which appeared in insurance no earlier than the seventeenth century. Even when these methods did appear, they would not be common—and certainly there would not be standard requirements for another couple of centuries.

Prior to the mid-1800s, having an ownership stake in an insurance company was more like gambling than investing (although shareholders in AIG in 2008 would probably claim this hasn't changed much). And buying an insurance policy was no guarantee that the insurer would be financially able to cover your losses in a legitimate claim. In the days before the general acceptance of (and legal requirement for) actuaries in insurance, using quantitative methods for assessing risk was a kind of competitive advantage, and those who did not use statistical methods paid the price for it. In chapter 2, for example, I mentioned how in the United Kingdom between 1844 and 1853, 149 insurance companies were formed, of which only 59 survived.2 This is far worse than the failure rate of insurers in modern times, even in 2008.

Those that failed tended to be those that did not use mathematically valid premium calculations. Insurance companies have to estimate contingent losses and make sure they have enough reserves on hand to pay out claims if and when they come. The companies that did not calculate this correctly eventually would not be able to pay claims when a disaster occurred or, on the other extreme, would charge too much to stay competitive and keep far too much in reserve at the expense of paying too few dividends to investors (although anxious investors would ensure the latter was almost never the case). According to the International Actuarial Association, one particular insurer—Equitable—survived this period “in good shape and flourished because of the scientific methods it employed.”3
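To make the reserve problem concrete, here is a minimal Monte Carlo sketch in Python. Every number in it is invented (the portfolio size, the claim frequency, and the average claim size are hypothetical), and real actuarial models are far richer, but it shows the basic question: how large a reserve covers, say, 99 percent of possible years?

```python
import numpy as np

# A toy sketch of the reserve problem, with invented numbers throughout.
# Each simulated year: how many of 10,000 policies file a claim, and what
# do those claims cost in total? The reserve is set to cover 99% of years.
rng = np.random.default_rng(42)
trials = 100_000
n_policies = 10_000
p_claim = 0.01          # assumed chance any one policy has a claim this year
mean_claim = 20_000     # assumed average claim size in dollars

claim_counts = rng.binomial(n_policies, p_claim, size=trials)
# Simplification: treat every claim as average-sized; a real model would
# draw individual claim sizes from a (usually heavy-tailed) distribution.
total_losses = claim_counts * mean_claim

reserve = np.percentile(total_losses, 99)
print(f"Expected annual losses: ${total_losses.mean():,.0f}")
print(f"Reserve covering 99% of simulated years: ${reserve:,.0f}")
```

An insurer that sets premiums and reserves near the expected annual loss rather than near a high percentile is the one that cannot pay claims in a bad year, which is the failure mode described above.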

In 1848, in the midst of this turmoil in the quickly growing insurance industry, the Institute of Actuaries in London was formed as a society for the actuarial profession. Actuarial societies in other countries soon followed. Today, when it comes to the question of whether more quantitative methods add value, there is not much of a debate in the insurance industry. It is generally understood that it would be foolish to attempt to compete in the insurance industry without sound actuarial methods (even if going without actuarial methods were legal in most industrialized countries, which it isn't).

But, after events such as the financial crisis of 2008/2009, some might wonder whether actuaries really had any more answers than anyone else. If actuarial science were effective, would the US government have had to take over the insurance giant AIG when it was in danger of becoming insolvent? This is another example of how anecdotes are not that helpful for evaluating risk management approaches, especially when the facts are misunderstood. AIG had taken a large position on instruments called credit default swaps (CDS). A CDS is purchased by mortgage banks to offset the risk of borrowers defaulting on loans. It is called a swap in the financial world because the parties both exchange cash but with different conditions and payment schedules. In the case of a CDS, one party pays cash up front to the other in exchange for a future cash payment on the condition that a borrower defaults on a loan.

This looks like insurance, sounds like insurance, feels like insurance—but, legally, it's not regulated like insurance. The actuaries of AIG, as with any other insurance company, had to validate the reserves of the firm to ensure it could meet its responsibility to pay claims. But because a CDS is not legally insurance, actuaries were not responsible for reviewing this risk. The part of AIG's business that actuaries did review was not the part that hurt the company. The actuarial profession, unfortunately, is one of a narrow focus. Outside of insurance and pensions, certified, regulated professions are rare in risk management.

The basic idea of the actuarial profession is sound. They are professional risk managers using scientifically and mathematically sound methods, and they are held to high standards of conduct. When an actuary signs a statement claiming that an insurance company can meet its contingent liabilities and is in a position to weather all but the rarest catastrophes, he or she puts his or her license to practice on the line. As with engineers, doctors, or auditors, actuaries are duty-bound to report their best judgment about truth and, if necessary, resign if pressured to do otherwise.

Similar to most venerable institutions, actuarial societies were not always known for keeping up with the latest developments in related fields. The name actuarial science aside, actuaries are not primarily trained to be scientists. Although some actuaries may get involved in original research, most are more like engineers and accountants applying already-established methods. Because they are a necessarily conservative lot, it's understandable that actuaries would be cautious about adopting new ideas. Even a slew of new developments coming out of World War II would take some time to be adopted by actuaries. But by now, the new and powerful methods born of wartime necessity are considered standard risk analysis tools in actuarial science.

In 2009 (shortly after the publication of the first edition of this book), a global association of actuaries created the chartered enterprise risk actuary (CERA) certification. The purpose is to extend the proven methods of actuarial science to topics outside of what is traditionally associated with insurance. CERA-certified actuaries have started showing up in areas such as enterprise risk management and operational risk management, where soft consulting methods used to be more the norm. For reasons explained in detail in the rest of this book, this is a welcome development.

WAR QUANTS: HOW WORLD WAR II CHANGED RISK ANALYSIS FOREVER

When Churchill said, “Never was so much owed by so many to so few,” he was talking about the pilots of the Royal Air Force (RAF) defending the citizens of Britain from German bombers. Of course, the RAF deserved every bit of this recognition, but Churchill might as well have been talking about an even smaller group of mathematicians, statisticians, and scientists solving critical problems in the war effort. Mathematicians and scientists have had some influence on business and government operations for centuries, but World War II arguably offered a unique showcase for the power and practicality of such methods. During the war, such thinkers would develop several interesting approaches to problem-solving that would affect business and government operations for decades to come, including the analysis of risk.

One of these groups of wartime mathematicians was the Statistical Research Group (SRG) at Columbia University. The SRG and similar groups among the Allies had been working on complicated problems such as estimating the effectiveness of offensive operations and developing tactics that improved antisubmarine operations.4 In military intelligence, such statistical analyses were consistently better than spies at estimating monthly German tank production.5 This diverse group of problems and methods was the origin of the field of operations research (OR).

I briefly mentioned in the previous chapter how, later in the war, a group of physicists and mathematicians working on the Manhattan Project developed the Monte Carlo simulation method. They were running into a particularly difficult problem that required a truly revolutionary solution. The problem was how to model fission reactions. Radioactive materials such as uranium or plutonium gradually decay to produce lighter elements and neutrons. When one atom of a heavy element such as uranium splits (i.e., undergoes fission), it releases energy and more neutrons. Those neutrons cause other atoms to split. If this process occurs at a certain sustained, steady rate, it is called critical, and the heat it generates can be harnessed to create electricity for consumption. If the chain reaction rapidly accelerates, it creates a runaway effect called supercriticality. The heat suddenly released from this process creates a rather powerful explosion or at least a meltdown. As you might guess, getting this distinction right is important.

The problem is that lots of factors affect the rate of reaction. How much fissile material there is in a given volume is one factor. Another factor is that the container housing this reaction might be made of material that absorbs neutrons or reflects neutrons, which decelerates or accelerates the reaction. And the geometry of the fuel and the container affect the rate of reaction. Even under ideal conditions, physicists could not calculate exact trajectories of neutrons—they could merely model them as probabilities. Modeling the behavior of this system proved to be impossible with conventional mathematical methods. This was the original reason why the Monte Carlo method was developed—it's a way to do math without exact numbers.
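A branching process captures the gist of the criticality question. The Python sketch below is a toy, not reactor physics: the chance that a neutron triggers a fission and the two-neutrons-per-fission yield are both invented. But it shows the Monte Carlo move itself, which is answering a question by running many random trials instead of solving equations exactly.

```python
import random

# Toy branching-process model of a chain reaction (illustrative only, not
# real reactor physics). Each neutron either is absorbed/escapes or causes
# a fission that releases two new neutrons. Expected offspring per neutron
# is 2 * P_FISSION, so 0.45 makes this configuration subcritical.
P_FISSION = 0.45
TRIALS = 10_000

def chain_dies_out(max_population=1_000):
    """Simulate one chain; True if it fizzles before reaching the cap."""
    neutrons = 1
    while 0 < neutrons < max_population:
        fissions = sum(random.random() < P_FISSION for _ in range(neutrons))
        neutrons = 2 * fissions
    return neutrons == 0

fizzle_rate = sum(chain_dies_out() for _ in range(TRIALS)) / TRIALS
print(f"Share of simulated chains that died out: {fizzle_rate:.1%}")
```

Nudge P_FISSION above 0.5 and the expected offspring per neutron exceeds one, so a growing share of simulated chains run away instead of fizzling, which is the supercritical case.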

After the war, the Monte Carlo simulation would find other applications in related fields. Norman C. Rasmussen of MIT developed probabilistic risk analysis (PRA) as a basis for managing risks in nuclear power safety. PRA initially used Monte Carlo models to a limited extent6 to simulate detailed components of nuclear reactors and the interactions among them. The idea is that if the probability of failures of each of the components of a complex system could be described, the risk of failure of the entire system (e.g., a release of radioactive coolant, a meltdown, etc.) could be computed. This should apply even if that event had never occurred before or even if that particular reactor had not yet been built. PRA using Monte Carlo simulations continued to grow in scope, complexity, and influence in risk management in nuclear safety. It is now considered an indispensable part of the field.
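A minimal sketch of the PRA idea in Python follows. The components and failure probabilities are hypothetical and do not come from any real reactor study; the point is only the mechanic of rolling up component-level probabilities into a system-level one.

```python
import random

# A minimal PRA-style sketch with hypothetical components and failure
# probabilities (invented numbers, not from any real reactor study).
# Simulate many "years" and count how often the system as a whole fails.
P_PUMP_FAILS = 0.02
P_BACKUP_PUMP_FAILS = 0.05
P_VALVE_FAILS = 0.01
TRIALS = 100_000

system_failures = 0
for _ in range(TRIALS):
    pump = random.random() < P_PUMP_FAILS
    backup = random.random() < P_BACKUP_PUMP_FAILS
    valve = random.random() < P_VALVE_FAILS
    # Assumed system logic: cooling is lost if the valve fails,
    # or if both the main pump and its backup fail together.
    if valve or (pump and backup):
        system_failures += 1

print(f"Estimated system failure probability: {system_failures / TRIALS:.4f}")
```

The feature the PRA pioneers exploited is visible even in this toy: the system-level failure rate emerges from component-level estimates, so it can be computed for an event that has never actually been observed.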

One of the Manhattan Project scientists, John Von Neumann, was helping to develop and promote the idea of Monte Carlo simulations while he was, nearly in parallel, developing what he called game theory, the mathematical description of games of all sorts. In 1944, Von Neumann coauthored a seminal work in the field—Theory of Games and Economic Behavior—with the economist Oskar Morgenstern. One of Von Neumann's fans was the young Abraham Wald, a member of the SRG, who also contributed ideas central to games under uncertainty.

In one important type of game, the player had no competitor but did have to make a decision under uncertainty—in a way, nature was the other player. Unlike competitive games, we don't expect nature to act rationally—just unpredictably. One such decision that can be modeled this way might be whether to invest in a new technology. If a manager invests, and the investment succeeds, then the manager gains some specified reward. But the investment could also be lost with nothing to show for it. However, if the manager rejects the opportunity, the investment itself can't be lost but a big opportunity instead might be missed.
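With made-up numbers, the arithmetic of this game is short. Everything below (the success probability, the payoff, and the amount at stake) is a hypothetical assumption chosen only to illustrate the structure:

```python
# A one-person game against nature, with invented numbers. Nature "decides"
# whether the technology succeeds; the manager decides whether to invest.
p_success = 0.3              # assumed probability the technology pays off
payoff_success = 5_000_000   # hypothetical net gain if it succeeds
investment = 1_000_000       # lost outright if it fails

ev_invest = p_success * payoff_success - (1 - p_success) * investment
ev_reject = 0.0              # nothing risked, but the opportunity is missed

print(f"Expected value of investing: ${ev_invest:,.0f}")
print("Invest" if ev_invest > ev_reject else "Reject")
```

Wald and others also studied decision rules other than expected value, such as minimax, for players who care more about limiting the worst case than about the average outcome.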

It turns out that quite a few decisions in both business and government can be described as types of one-person games against nature. This evolved into decision theory. After the war, the ideas behind decision theory were being turned into practical tools for business and government. The RAND Corporation, founded just after the war, began applying the theories, Monte Carlo simulations, and a variety of other methods to everything from social welfare policy analysis to cold war nuclear strategies. It also attracted a variety of thinkers who would influence the field of decision-making and risk assessment for the rest of the twentieth century.

In 1966, the term decision analysis (DA) was coined by Ron Howard at Stanford University to refer to practical applications of the theory to real-world problems. As was the focus of game theory and decision theory, Howard's original use of the term decision analysis was prescriptive.7 That is, it was meant to specify what decision-makers should do, not necessarily describe what they will do (to some, the term has since expanded to include both).

The introduction of personal computers (PCs) made the Monte Carlo method far more practical. In the 1990s, companies such as Decisioneering (now owned by Oracle) and Palisade developed software tools that allowed users to run Monte Carlo simulations on PCs. The intellectual descendants of the World War II team continue to promote these tools as both a practical and theoretically sound way to model risks.

One such person, Professor Sam Savage of Stanford University, is an actual descendant of one of the members of the World War II team. Leonard “Jimmie” Savage, his father, was part of the SRG and also the chief statistical consultant to John Von Neumann (this alone is just about the most impressive thing I've ever heard of any statistician). Jimmie Savage went on to author The Foundations of Statistics, which included practical applications of game theory and probabilistic reasoning in general. Sam Savage, the son, is the author of his own Monte Carlo tools and an innovator of modeling methods in his own right. He founded ProbabilityManagement.org, a not-for-profit that I've had the pleasure of being involved with for many years.

This is the culture of risk management for many engineers, scientists, some financial analysts, and others who might have a quantitative background. Risk is something that is modeled quantitatively, often using simulations of systems. Actuaries, too, have adopted Monte Carlo simulations as a standard tool of risk analysis.

This group, similar to actuaries, is generally surprised to learn what passes as risk management as practiced by other people. They are steeped in quantitative methods on a daily basis, they are often subjected to peer reviews by other mathematically oriented people, and their emphasis is on improving their own quantitative models more than studying the nonquantitative methods used by some people. When I describe softer methods to them (i.e., qualitative scoring methods, categorizing risks as medium or high, etc.), they shake their heads and wonder how anyone could believe an approach like that could work. When exposed to some of the more popular methods in risk analysis, they react in a way that I suspect is something similar to how an astrophysicist would react to theories proposed by an astrologer.

I also tend to see the quantitative risk analysts react positively to the question, “How do you know decisions are any better with your modeling approach?” So far, I see much more of a genuine interest in the question and less of a reaction of defensiveness. Although most have not been collecting the data to validate their models, they agree that answering such a question is critical and have generally been helpful in efforts to gather data to answer this question. When I point out known problems with common methods used in Monte Carlo simulations, they seem eager to adopt the improvements. As a group with a scientific orientation, they seem ever wary of the weaknesses of any model and are open to scrutinizing even the most basic assumptions. I believe that it is from actuaries and war quants that we will find the best opportunity for improving risk management.

ECONOMISTS

Prior to the 1990s, Nobel Prizes in economics were generally awarded for explanations of macroeconomic phenomena such as inflation, production levels, unemployment, money supply, and so on. For most of the history of economics, risk and probabilistic methods were treated superficially. Prior to World War II, arguably one of the key academic accomplishments on that topic in economics was Frank Knight's 1921 book titled Risk, Uncertainty and Profit8—a book that never once resorts to using a single equation or calculation about risk, uncertainty, profit, or anything else. By contrast, the economist and mathematician John Maynard Keynes's book of the same year, A Treatise on Probability,9 was mathematically rigorous and probably had more influence on the subsequent work of the early decision theorists such as Von Neumann and Wald. When it comes to ideas about how basic terms such as risk and uncertainty are used by economists, Knight's more mathematically ambiguous ideas gained more traction.

I will mention again that my attempt at a taxonomy of risk management has overlaps between groups. (I think any taxonomy of this topic would.) There are mathematicians and economists in both the war quant and economist groups but, by an accident of history, they diverged a bit in methods and diverged a lot in the groups they influenced. So, when I speak of the war quants, I will refer more to the methods of operations research and certain engineers. I will group economists more with Knight and how they subsequently influenced the world of financial investments. (More about competing definitions of risk in the next chapter.)

Knight did not pay much attention to optimization problems for individuals—that is, how a person should ideally behave in a given situation—such as the decisions under uncertainty described by Wald and others. It wasn't until just after World War II, with at least indirect influence from the war quants, that economists (Keynes being the earlier exception) started to consider the problems of risk mathematically. And it was not until very recently that economics considered the issues of actually measuring human behavior regarding decisions under uncertainty in a manner more like a science.

Consider how investors have always had to make decisions under uncertainty. Uncertainty about future returns affects how much they value a stock, how they hedge against losses, and how they select investments for a portfolio. But, as incredible as it seems today, the literature on the economic theory of investments was almost silent on the issue of risk until the 1950s. In 1952, twenty-five-year-old Harry Markowitz, a former student of L. J. Savage and new employee of the RAND Corporation, noticed this absence of risk in investment theory.

At RAND, Markowitz would meet George Dantzig, who, similar to Savage, earned his stripes as a war quant (Dantzig was with the US Air Force Office of Statistical Control). The older Dantzig would introduce Markowitz to some powerful OR optimization methods. Dantzig developed a method called linear programming, which would be influential for decades in OR and which gave Markowitz an idea about how to approach portfolio diversification mathematically. The same year that Markowitz started at RAND, his ideas were published in the Journal of Finance.10

Markowitz explained in his new theory that a portfolio of investments, similar to the investments that comprise it, has its own variance and return. By changing the proportion of various investments in a portfolio, it is possible to generate a wide variety of possible combinations of returns and volatility of returns. Furthermore, because some investments vary somewhat independently of each other, the variability of the portfolio in principle could be less than the variability of any single investment. By analogy, you are uncertain about the roll of one die but you would be far less uncertain about the average of one hundred rolls of dice. The effect of diversification together with the flexibility of setting the proportion of each investment in the portfolio enables the investor to optimize the portfolio for a given set of preferences for risk versus return. Markowitz's approach was to use Dantzig's linear programming method to find the optimal combination of investments depending on how much risk the investor was willing to accept for a given return.
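The dice analogy can be checked directly. In this Python sketch the two assets' returns are made up (same mean, same volatility, fully independent), which is the friendliest possible case for diversification:

```python
import numpy as np

# A sketch of the diversification effect with invented numbers: two assets,
# each volatile on its own, combined into a 50/50 portfolio. The returns
# are hypothetical draws, not market data.
rng = np.random.default_rng(1)
years = 100_000
asset_a = rng.normal(0.08, 0.20, years)   # assumed 8% mean return, 20% sd
asset_b = rng.normal(0.08, 0.20, years)   # independent of asset A

portfolio = 0.5 * asset_a + 0.5 * asset_b
print(f"Single asset volatility:    {asset_a.std():.3f}")    # ~0.200
print(f"50/50 portfolio volatility: {portfolio.std():.3f}")  # ~0.141
```

With two independent assets the portfolio's volatility falls by a factor of about the square root of two, with no loss of expected return. In real markets, correlations between investments limit how much of this benefit can be captured, which is why estimating those correlations matters so much in MPT.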

When Markowitz presented this solution for his PhD dissertation in 1955, Milton Friedman (who would win the Economics Nobel Prize in 1976) was on his review board. According to Markowitz, Friedman initially argued that Markowitz's modern portfolio theory (MPT) was not part of economics. Friedman might not have been all that serious because Markowitz did successfully pass his orals. But it is true that the issue of optimizing the choice of an individual making decisions with risk was not previously part of economics. Friedman himself developed mathematical models about several economic topics as if the calculations were all deterministic. Clearly, discussing risk in a quantitative, probabilistic sense was a new idea to many economists.

This fits with a general century-long trend within economics to address risk in probabilistic terms and as a problem for individual decision-makers, not just some vague macroeconomic force. At the beginning of the twentieth century, articles in the economics literature that discussed risk rarely mentioned probability. Until the latter part of the twentieth century, most articles in economics journals did not mention the word probability, much less do any math with it.

Using the academic research database JSTOR, I looked at how often the word risk appeared in the economics literature and how often those same articles used the word probability. Exhibit 5.1 shows the percentage of economics articles on the topic of risk that mentioned the concept of probability. Prior to 1960, most of these articles (over 80 percent) didn't even mention probability. But the tendency to treat risk quantitatively (which makes an article much more likely to mention the word probability) has since grown to the point that such articles are now the majority.

About two decades after Markowitz would first publish MPT, another influential theory would be proposed for using the risk of an investment to price an option. Options are types of derivatives that give the holder the right, but not the obligation, to buy or sell (depending on the type of option) another financial instrument at a fixed price at some future point. The instrument being bought or sold with the option is called the underlying asset and it could be a stock, bond, or commodity. This future point is called the expiration date of the option and the fixed price is called the exercise price. This is different from futures, which obligate both parties to make the trade at the future date at a prearranged price.


EXHIBIT 5.1 Risk and Probability in Economic Literature

A put option gives the holder the right to sell, say, a share of some stock at a certain price on a certain date. A call option gives the holder the right to buy the share at a certain price on a certain date. Depending on the price of the underlying instrument at the expiration date of the option, the holder could make a lot of money—or nothing.

The holder of a call option would use it only if the selling price of the underlying instrument were higher than the exercise price of the option. If the underlying instrument is selling at $100 the day the option expires, and the exercise price is $80, then the owner of the option can buy a $100 share for just $80. The option has a value equal to the difference: $20 per share. But if the shares were selling at just $60, then the option would be of no value (the right to buy something at $80 is worth nothing if the going price is $60).
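The payoff logic in that example is just the maximum of two values. A two-line Python version makes it explicit:

```python
def call_payoff(share_price: float, exercise_price: float) -> float:
    """Value of a call option at expiration, per share."""
    return max(share_price - exercise_price, 0.0)

print(call_payoff(100, 80))  # 20.0: buy at $80 what is selling for $100
print(call_payoff(60, 80))   # 0.0: the right to overpay is worth nothing
```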

But since the price of the underlying instrument at the expiration date is uncertain—which may be months in the future—it was not always clear how to price an option. A solution to this problem was proposed in 1973 by Robert C. Merton, an economist who was first educated as an applied mathematician, engineer, and scientist before receiving a doctorate in economics from MIT. The idea was developed further by Fischer Black, another applied mathematician, and Myron Scholes (the only one in the group with degrees solely in economics). Merton and Scholes would receive the Nobel Prize in Economics in 1997 for the development of options theory. (Black would probably have shared the prize, but he died two years before and it is not awarded posthumously.) The model is now known as the Black-Scholes equation for pricing options.
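For reference, the standard Black-Scholes price of a European call can be written in a few lines of Python using only the standard library. It takes five inputs: the share price, the exercise price, the time to expiration, the risk-free rate, and the volatility of the share's returns. The sample values at the bottom are illustrative assumptions, not market data.

```python
from math import log, sqrt, exp
from statistics import NormalDist

def black_scholes_call(s, k, t, r, sigma):
    """Standard Black-Scholes price of a European call option.
    s: share price today, k: exercise price, t: years to expiration,
    r: risk-free rate, sigma: annualized volatility of the share's returns."""
    n = NormalDist().cdf
    d1 = (log(s / k) + (r + sigma**2 / 2) * t) / (sigma * sqrt(t))
    d2 = d1 - sigma * sqrt(t)
    return s * n(d1) - k * exp(-r * t) * n(d2)

# Illustrative inputs: $100 share, $80 strike, 6 months, 2% rate, 30% vol
print(f"${black_scholes_call(100, 80, 0.5, 0.02, 0.30):.2f}")  # about $22
```

Note that the price (about $22 here) exceeds the $20 the option would be worth if exercised today; the difference is the value of the remaining uncertainty.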

The next major development in economics introduced the idea of empirical observation. Some would say it would be the first time economics could even legitimately be called a science. MPT and options theory (OT) were about what people should do in ideal situations instead of describing what people actually do. Earlier economics tried to be descriptive but assumed market participants acted rationally. This idealized participant was called Homo economicus—the economically rational human. But around the 1970s, a group of researchers started to ask how people actually do behave in these situations. These researchers were not economists at all and, for a long time, had no impact on the momentum in the field of economics. However, by the 1990s, the idea of behavioral economics was starting to have an influence on economic thought. The tools developed in this field were even adopted by the most advanced users of PRA.

OT and MPT have at least one important conceptual difference from the PRA done in the nuclear power industry. A PRA is what economists would call a structural model. The components of a system and their relationships are modeled in Monte Carlo simulations. If valve x fails, it causes a loss of backpressure on pump y, causing a drop in flow to vessel z, and so on.

But in the Black-Scholes equation and MPT, there is no attempt to explain an underlying structure to price changes. Various outcomes are simply given probabilities. And, unlike the PRA, if there is no history of a particular system-level event such as a liquidity crisis, there is no way to compute the odds of it. If nuclear engineers ran risk management this way, they would never be able to compute the odds of a meltdown at a particular plant until several similar events occurred in the same reactor design.

Of course, there is some attempt in finance to find correlations among various factors such as the price of a given stock and how it has historically moved with oil prices or the price of another stock. But even correlations are simple linear interpretations of historical movements without the attempt to understand much about the underlying mechanisms. It's like the difference between meteorology and seismology—both systems are extremely complex but at least the former gets to directly observe and model major mechanisms (e.g., storm fronts). Often, the seismologist can merely describe the statistical distribution of earthquakes and can't say much about what goes on deep in the Earth at a given moment. A PRA is more like the former and MPT and OT are more like the latter.

Other methods have evolved from OT and MPT, although none are especially novel improvements on these earlier ideas. Value at risk (VaR), for example, is widely used by many financial institutions as a basis for quantifying risk. VaR is the loss exceeded at a given probability (e.g., a 5 percent VaR of $10 million means there is only a 5 percent chance that losses on a given portfolio of investments will exceed $10 million). A VaR is really just a single point on an LEC. Like an LEC, a VaR is a method of communicating risk—although much less information is conveyed with a single point than with the entire curve. Numerous other esoteric methods that I won't bother to list in detail have also grown out of these tools. But if the foundation of the house needs fixing, I'm not going to worry about the curtains just yet.
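The relationship between a VaR and an LEC is easy to see in a simulation. In this Python sketch the portfolio losses are drawn from a made-up lognormal distribution rather than from any market data:

```python
import numpy as np

# A sketch of VaR as one point on a loss exceedance curve, using simulated
# (invented) portfolio losses rather than market data.
rng = np.random.default_rng(7)
losses = rng.lognormal(mean=15, sigma=1.0, size=100_000)  # hypothetical $

# 5% VaR: the loss exceeded with only 5% probability (the 95th percentile)
var_5 = np.percentile(losses, 95)
print(f"5% VaR: ${var_5:,.0f}")

# The full LEC reports the exceedance probability at every loss level.
for loss in [1e6, 5e6, 10e6, 50e6]:
    p = (losses > loss).mean()
    print(f"P(loss > ${loss:,.0f}) = {p:.1%}")
```

The single printed VaR answers one question about the portfolio; the exceedance probabilities beneath it are points on the full curve, which is why the LEC conveys strictly more information.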

Even though OT, MPT, and VaR are widely used, they were the target of criticism well before the 2008/2009 financial crisis (and much more so afterward). As this book will explain in more detail later, OT and MPT make some assumptions that do not match observed reality. Major losses are far more common than these models predict. And because they don't attempt to model components of financial markets (e.g., individual banks, periodic major bankruptcies, etc.) the way that a PRA might, these models may fail to account for known interactions that produce common mode failures. Also, we will show how VaR can paint a very misleading picture of risk compared to the loss exceedance curve presented in chapter 4.

The financial crisis caused many to think of these financial tools as being the risk management techniques most in need of repair. Certainly, there is plenty of room for improvement. But simply reacting to the most recent event is counter to good risk management. Risk management is about the next crisis. Calls are already being heard for improvements in popular financial tools. The really big problem may be in a far more popular approach to risk management promoted by the best salesmen among the four horsemen: management consultants.

MANAGEMENT CONSULTING: HOW A POWER TIE AND A GOOD PITCH CHANGED RISK MANAGEMENT

In the late 1980s, I started what I considered a dream job for a brand-new MBA, especially one from a small Midwestern university. It was the era of the Big 8 accounting firms, when, long before the demise of Enron and Arthur Andersen, all the major accounting firms had management consulting divisions under the same roof. I was hired to join the management consulting services (MCS) of Coopers & Lybrand and, being in the relatively small Omaha office, we had no specialists. I was able to work on a variety of different problems in lots of organizations.

I tended to define the problems we were working on as fundamentally quantitative challenges, which also played to my key interests and talents. But that was not the modus operandi for most management consultants I saw. For some of my superiors, I noticed a tendency to see value in what I might now call PowerPoint thinking. We all love our own PowerPoint slides more than our audience probably will. The slides were built around the “smart art” images in PowerPoint but were often light on content. Because these graphics would get tweaked in committee, whatever meaning a chart first had would sometimes get diluted even further. For many management consulting engagements, even some of significant size and scope, the PowerPoint slides together with an oral presentation were the only deliverable.

The other junior-level consultants and I joked about the process as the random deliverable generator (RDG), as if the actual content of the presentation didn't matter as much as the right combination of sexy graphics and buzzwords. Fortunately, Coopers also had pragmatic managers and partners that would keep the RDG from running completely unchecked. But what surprised me the most was how often the RDG actually seemed to generate deliverables (was deliverable even a word before the 1980s?) that satisfied the customers. Possibly, the credibility of the Big 8 name made some clients a little less critical than they otherwise would be.

I suppose some would expect me to be writing about the influence of Peter Drucker or W. E. Deming on management consulting if I claim to have a reasonably complete explanation of the field. But from where I sat, I saw another important trend more influenced by names such as Tom Peters, author of In Search of Excellence, Mike Hammer, author of Reengineering the Corporation, and software engineer James Martin, the author of Information Engineering. They had a flashier pitch and pep talk for an audience of frustrated executives who were looking for a competitive edge. I recall that their books were consistently on required reading lists for us consultants who wanted to show clients that we were literate in what they paid attention to.

Traditionally, the top management consultants were experienced managers themselves with a cadre of MBAs, typically from the best schools in the country. But by the 1980s, a new kind of management consulting related to information technology was changing the industry. It became more common for management consultants not to actually be consulting managers at all. Sometimes they would be software developers and project managers trying (with varying degrees of success) to solve the clients' problems with information technology.

When I started at Coopers & Lybrand, the IBM PC was only a few years old and still not taken seriously by many big organizations. Most critical software applications were on mainframes using COBOL with relational databases. The Big 8 and others in the software development industry were providing services to help organize disparate development efforts in a way that, in theory, would put business needs first. This is where James Martin, a former IBM executive who evangelized developing systems based on a systematic way of documenting business needs, had a major impact.

But innovators such as James Martin gave the Big 8 an even more important idea. Developing software for a client could be risky. If something went wrong, operations could be delayed, the wrong data could be generated, and the client's business could be seriously harmed. Consultants found a way to get the same lucrative business of IT consulting—lots of staff billed at good rates for long periods—without any of the risks and liabilities of software. They could, instead, develop methodologies. Instead of spending that effort developing software, they could spend time developing nearly equally detailed written procedures for some management practice such as, say, running big software projects. The methodology could be licensed and, of course, would often require extensive training and support from the firm that sold it. James Martin had been licensing and supporting his “information engineering” methodology in the same way.

Methodologies such as this were something the Big 8 knew they could sell. If you can't replicate a few superstar consultants, document some structured methodology and have an army of average management consultants implement it. The ideal situation for a consulting firm is a client with whom you can park dozens of junior associates for long periods of time and bill them out at a handsome daily rate befitting the Big 8 name. The business of getting “alignment” between business and computers was just the ticket.

The familiar risk matrix provides exactly this kind of opportunity for the consultants. It was visual and helped everyone feel like they were really analyzing something. The risk matrix is a simple method that could easily have been developed independently more than once, but I did some research into whether there might have been a “patient zero” of the risk matrix. I did find a couple of strong candidates for the origins of the risk matrix and both are related to different types of auditors.

Whether or not individuals in this group had auditor in their title, I'm referring to those whose jobs are hunting for errors, flaws, and irregularities. These include conventional uses of the term in accounting, but also jobs related to regulatory compliance, safety, and so on. Auditors need to be thorough and systematic, with checklist approaches to just about everything they do. But when it comes to risk management, this group was probably isolated from the earlier work by the other three horsemen.

A group involved in a particular type of audit, reliability and safety engineers, used the term criticality matrix as early as the 1970s. Similar to the current risk matrix, the criticality matrix plotted different potential events on a chart with an axis for probability and another axis for the severity of an event. In at least one of the standards that defined this approach, the probability and severity were grouped into ordinal categories.11 The standard that defined the criticality matrix also defined the related procedure for failure modes, effects, and criticality analysis (FMECA).

A similar chart was developed by an auditor—of the financial sort—in the oil and gas industry in the 1980s. Jim DeLoach, the previously mentioned managing director at Protiviti, was witness to the rise of this method while he was at Arthur Andersen in the 1980s and 1990s (just before its demise). He explains how a financial auditor at the oil and gas company Gulf Canada developed what he called a control self-assessment (CSA), which was presented in the same form as a risk matrix. Perhaps this auditor was influenced by engineering safety because FMECA was probably used by that firm. The difference was that the financial auditor applied it much more broadly than the engineering safety assessors applied FMECA.

According to DeLoach, the original author of this method actively promoted it at conferences, and it seemed to catch on with the major consulting firms. “By the mid-1990s,” DeLoach told me, “every major consulting firm got on the train.” They all had developed their own versions of the risk matrix, sometimes a three-by-three version but oftentimes a five-by-five version—that is, with five likelihood categories and five impact categories.

While at Andersen, DeLoach himself was influential in internal audit and control circles and was one of many voices promoting the approach. DeLoach would later renounce the risk matrix altogether in favor of quantitative methods. He was aware of growing research refuting the value of these methods (as we will see in following chapters) and he also became aware that these methods were not informed by more foundational concepts in risk. The standards, textbooks, and other sources published in that field rarely if ever cited the earlier work of actuarial science, probability theory, or decision theory. But the risk matrix had taken hold and is now the most familiar version of risk assessment by far.

For better or worse, management consultants are, hands down, the most effective sales reps among the four horsemen. Making money also means being able to produce consulting on a large scale and keeping expenses low with a large number of consultants and less experienced staff. As a result, a set of strategies has naturally evolved for most successful management consultants in the area of risk management or any other area. (See the following How to Sell Analysis Placebos box.)

These selling strategies work well regardless of whether the product was developed in complete isolation from more sophisticated risk management methods known to actuaries, engineers, and financial analysts.

The influence of these popular methods cannot be overstated. They are used for major decisions of all sorts and have worked their way into the “best practices” promoted by respected standards organizations. These methods were quickly being adopted by organizations all over the world who wanted to be able to say they were at least following convention. Here are some examples of standards that have much more in common with these consulting methods I have described than any of the previous quantitative methods:

  • Control Objectives for Information and Related Technology (CobIT). This standard was developed by the Information Systems Audit and Control Association (ISACA) and the IT Governance Institute (ITGI). This includes a scoring method for IT risks.
  • The Project Management Body of Knowledge (PMBoK). This standard was developed by the Project Management Institute (PMI). Similar to CobIT, it includes a scoring method for evaluating project risk.
  • The 800-30 Risk Management Guide for Information Technology Systems. This was developed by the National Institute of Standards & Technology (NIST). It advocated another scoring method based on a high, medium, low evaluation of likelihood and impact.

Not only are these not best practices in risk management (because they leave out all of the improvements developed in earlier quantitative methods) but also one might have a hard time believing that some of these organizations even represent the best practices in their own field. For example, PMI's own Organizational Project Management Maturity Model (OPM3) manual says the manual was three years overdue in the making. Presumably, PMI is the premier project management authority for how to get things done on time.

Other standards organizations do not recommend specific methods but explicitly condone softer scoring methods as an adequate solution. The ISO 31000 standard stipulates only that “analysis can be qualitative, semi-quantitative or quantitative, or a combination of these, depending on the circumstances.”12 It does add, “When possible and appropriate, one should undertake more specific and quantitative analysis of the risks as a following step,” but does not indicate what constitutes “quantitative.” This gives the adopter of this standard plenty of room for interpretation. Because the scoring methods are easier to implement, this virtually ensures that such methods will be the predominant approach taken to comply with the standard.

And don't forget that some of this is making its way into legislation, as first mentioned in chapter 2. Dodd-Frank explicitly requires that the Federal Deposit Insurance Corporation use a risk matrix. The regulation doesn't specifically require it for individual banks, but it probably has some influence on banks who want to show regulators they are making a reasonable effort to manage risks.

The reader probably has determined quite a few pages ago that much of this book is an argument for why probabilistic methods are, in fact, entirely practical and justified for most major decisions once a few improvements are made. At the same time, I'll be arguing for the discontinuation of popular but ineffectual scoring methods regardless of how practical they seem to be.

COMPARING THE HORSEMEN

The four horsemen represent four different, although sometimes related, lineages of risk management methods. They all have different challenges, although some have more than others. Exhibit 5.2 sums up the issues.

Even though there are impressive individuals in other areas, actuarial practice is the only area wherein there are some formal, professional standards and ethics. Actuaries tend to eventually adopt the best quantitative methods from other fields but, as AIG unfortunately proved, the biggest risks are often outside of the actuaries' legal and professional responsibilities.

EXHIBIT 5.2 Summary of the Four Horsemen

Actuaries
  Used by/for: Historically, insurance and pensions (but branching out into other areas)
  Short description: Highly regulated and structured certification process; build on established methods; conservative
  Challenges: Early adopters of mathematics for risk but, since then, tend to be conservatively slow adopters; authority not as wide as it could be

War quants
  Used by/for: Engineers, a small minority of business analysts, and some financial analysts
  Short description: Tend to see the risk analysis problem as an engineering problem; detailed systems of components and their interactions are modeled
  Challenges: Where subjective inputs are required, known systemic errors are not adjusted for; empirical analysis is rarely incorporated into modeling

Economists
  Used by/for: Financial analysts, some application to nonfinancial investments (projects, equipment investments, etc.)
  Short description: Focus on statistical analysis of historical data instead of detailed structural models (although there are exceptions)
  Challenges: Still make assumptions known to be false regarding the frequency of extreme market changes; tend to avoid structural models or see them as impossible

Management consultants
  Used by/for: Consultants from big and small firms, auditors of many types, and almost everyone else not listed in the previous categories
  Short description: Mostly experience based; may have detailed documented procedures for analysis; use scoring schemes
  Challenges: Methods not validated; errors introduced by subjective inputs are further magnified by the scoring method

The nuclear engineers and others who use PRA and other methods inherited from wartime quantitative analysts also, like actuaries, tend to use mathematically sound methods. However, they are not immune to some errors, and their powerful methods are still considered esoteric and too difficult to use. Methods and tools exist that would overcome this objection, but most risk analysts are not aware of them.

Whereas some financial analysts are extraordinarily gifted mathematicians and scientists themselves, many of the basic assumptions of their financial models seem to go unquestioned. The kinds of common mode failures and cascade effects that caused the 2008/2009 financial crisis perhaps could have been caught by the more detailed modeling approach of a PRA, if anyone had built it (or if they did, they apparently didn't influence management). Instead, the financial models use simple statistical descriptions of markets that ignore these sorts of system failures.

Finally, the management consultants have the softest sell, the easiest sell, and the most successful sell of all the major risk management schools of thought. Unfortunately, they are also the most removed from the science of risk management and may have done far more harm than good.

I should remind the reader at this point of my modification to Hanlon's razor at the beginning of this chapter. We should consider all parties blameless for any shortcomings of the methods that have evolved around them so far, for reasons they had little control over. Most management consultants and auditors, similar to everyone else, are using the methods held up as best practices in their fields. These methods evolved before most consultants started using them. What they use now is a result of nothing more than historical accidents. The question is what they should do now, especially given the critiques of methods later in this book.

MAJOR RISK MANAGEMENT PROBLEMS TO BE ADDRESSED

The remainder of this book is an attempt to analyze the problems faced by one or more of these schools of thought and propose methods to fix them. Six of these challenges are summarized in the next box. The first five points are addressed in the remainder of part 2 of this book (“Why It's Broken”) and map one-to-one to the five following chapters. The last point will be addressed in multiple locations.

NOTES

  1. Attributed to Robert Hanlon by Arthur Bloch in Murphy's Law Book Two: More Reasons Why Things Go Wrong (Little Rock, AR: Leisure Arts, 1981), but a similar quote was also used by Robert Heinlein in his short story “Logic of Empire,” in The Green Hills of Earth (New Rochelle, NY: Baen Books, 1951).
  2. H. Bühlmann, “The Actuary: The Role and Limitations of the Profession since the Mid-19th Century,” ASTIN Bulletin 27, no. 2 (November 1997): 165–71.
  3. Ibid.
  4. D. Christopherson and E. C. Baughan, “Reminiscences of Operational Research in World War II by Some of Its Practitioners: II,” Journal of the Operational Research Society 43, no. 6 (June 1992): 569–77.
  5. R. Ruggles and H. Brodie, “An Empirical Approach to Economics Intelligence in World War II,” Journal of the American Statistical Association 42, no. 237 (March 1947): 72–91.
  6. R. A. Knief, Nuclear Engineering: Theory and Technology of Commercial Nuclear Power (Washington, DC: Taylor & Francis, 1992), 391.
  7. R. Howard, “Decision Analysis: Applied Decision Theory,” Proceedings of the Fourth International Conference of Operations Research (Boston, 1966).
  8. F. Knight, Risk, Uncertainty and Profit (Boston: Houghton Mifflin, 1921).
  9. J. M. Keynes, A Treatise on Probability (New York: Macmillan and Co., 1921).
  10. H. M. Markowitz, “Portfolio Selection,” Journal of Finance 7, no. 1 (March 1952): 77–91.
  11. MIL-STD-1629, Procedures for Performing Failure Modes, Effects and Criticality Analysis (Washington, DC: Department of Defense, 1974).
  12. ISO/DIS 31000—Risk Management: Principles and Guidelines on Implementation (Geneva, Switzerland: ISO, 2008/2009).