CHAPTER 6

Measuring Risk

The Importance of Identifying and Measuring Risk

Management guru Peter Drucker famously stated that “what gets measured gets managed.” The corollary is also likely true: what doesn’t get measured doesn’t get managed. Identifying and measuring risk is critically important for a variety of reasons, reasons that extend beyond Drucker’s assertion. Intelligently identifying and carefully measuring risk are the central elements of successful ERM.

The first reason for having a complete identification of risk is what we call the first law of risk management: the mere fact that you acknowledge that a risk exists automatically increases the probability and magnitude of it occurring if it is a good risk, or automatically decreases the probability and severity of it occurring if it is a bad risk. In other words, by doing nothing more than identifying a risk you automatically improve outcomes. Sounds silly and useless, doesn’t it? Before you dismiss this concept, consider the situation of buying a new model of car. If you have ever bought a new model of car, you probably drove off the lot with a sense of pride, and part of that pride was that you had a car that very few other people have. However, very soon you notice that it seems like a lot of other people have that car, and furthermore they seem to have it in the exact same color as yours! It is still a fact that few people own that model of car; what has changed is your level of awareness and your reaction to seeing people in that model of car. It is the same with risk: if we are aware of a risk, then we are quicker to perceive it if it does become a factor, and thus more likely to exploit it if it is a good risk, or mitigate it if it is a bad risk.

Risk identification and measurement are key to designing and testing the implementation of the risk management strategy. Without risk measurement it is impossible to accurately prioritize risks and determine the appropriate level and type of response required. Too often firms do a preliminary and cursory identification of risks without a robust measurement component. This amateur approach to starting a risk management program leads to all risks being treated with an equal level of seriousness, which is very inefficient. Furthermore, it generally leads to the symptoms of risk being managed rather than the root causes, which are much more important and effective to manage.

Identifying and measuring risk is also important for grading the risk management function. At the heart of a successful ERM program is being able to answer the questions: Is the firm getting riskier (or is the firm at its desired level of risk)? Is the firm getting better at risk management? Is risk management adding value to the organization? Risk management needs to be assessed just like any other organizational function. If risk management is not adding value, then its implementation needs to be changed and improved, or the function needs to be abandoned. These assessments, however, cannot be made if risk is not being properly identified and measured.

Communication of risks and their values is another essential part of any reasonable risk management program, and effective communication requires some form of measurement. Stakeholders, both internal and external, need the risk measurement data to make their own assessments of the operations of the organization. It should be noted that in many industries the communication requirements of the regulatory aspects of risk management have heavily influenced the risk measurement function, which may or may not be to the detriment of the overall effectiveness of risk management.

It should be obvious that risk measurement is an ongoing process. Risk levels are constantly changing, so risk management is not a static process. Neither should risk identification be static. New risks are constantly arising and evolving. Consider, for instance, that cyber risk, a mission-critical risk for many organizations, did not even exist as little as 30 years ago.

A very valuable, but infrequently considered, benefit of having a process in place to regularly identify and monitor risks is that it gives the organization the confidence to stick to its knitting; that is, to focus on the business at hand with complete concentration without wasting energy worrying about the risks and what might happen.

Identifying and measuring risks does not mean that the firm will not face risk—it will. However, by properly identifying and measuring the risks, the firm will be much better positioned to take advantage of the good risks when they arise and to mitigate the bad. It comes back to “what gets measured gets managed” and to our first law of risk management. An efficient ERM system and strategy will have identification and measurement of risks at its core, which in turn allows for the natural and complete integration of ERM thinking into the day-to-day operations of the firm. Risk management becomes management, but it all starts with the identification and measurement of risk.

Risk Types

It is frequently useful, though not always necessary, to classify risks to aid in their identification. One classification is to think of risks as being either qualitative or quantitative; we cover this classification later in this section. The classification we will begin with is the distinction between external risks and internal risks. External risks are the risks that arise outside the organization; the risks it cannot control but must prepare for and be ready to react to. Internal risks are risks caused internally by the actions and operations of the organization; these are the risks the organization is directly responsible for. These internal risks must also be planned for and reacted to.

Examples of external risks include economic and market risks, political risks, demographic risks, technological risks, attack risks such as cyber risks, competitive risks arising from the actions of competitors, and reputational risks. Examples of internal risks include operational risks, strategic risks, financial risks, and cultural risks.

External risks may be systemic and affect all organizations to more or less the same extent, or they may affect just a handful of firms with special sensitivities. While it is obvious that both internal and external risks are important to manage, the prudent management of external risks has the potential to give an organization a long-term competitive advantage. External risks usually cannot be changed by any single organization, but they can be managed. For instance, a single organization cannot prevent a stock market crash or a significant change in interest rates. Likewise, a firm cannot do anything about changing demographics, and despite significant efforts spent on lobbying there is relatively little an organization can do to affect large-scale political events. However, a firm can be prepared for these risks and can be ready to react quickly, or have preemptive processes in place that mitigate or capitalize on any fallout effects.

Economic and market risks are the changes in the markets that can affect the firm. A partial list of these risks would include the availability and cost of financing, the overall demand for goods and services, the state of the labor market, and the ability to hire quality employees at reasonable wages. As with all external risks, it is important to note that it is not just the economic events and risks in the home market that should be on the firm’s risk radar. The economic world is global, and events in one market have a high potential to cross over to other markets. For instance, a significant change in exchange rates can change the competitive landscape very quickly, with one group of organizations all of a sudden possessing a significant cost advantage or disadvantage even if all of their operations and sales are in the domestic market. As another example, events in the Middle East will tend to affect energy prices on a global basis for the foreseeable future, and of course create spillover effects on political events and global demographics.

Examples of political risk include not only changes in governments, but also changes in regulations that can have far-reaching effects. For instance, at the time of this writing the North American Free Trade Agreement is being renegotiated between the United States, Mexico, and Canada. The trade agreement, first signed in 1994, has had long-lasting impacts, changing the fate not only of individual companies but of entire industries and economic regions. Changes, if any, in the agreement are likely to have similar impacts going forward. Some companies and some industries will profit from any changes, and some will likely lose. Those organizations that are unprepared to react will likely be at a significant disadvantage no matter what happens.

Demographic risks include the changing proportions of age groups in various populations as well as changing mindsets. In North America, the largest segment of the population is on the verge of retirement as the baby boomers start to roll into retirement. This is having significant impacts on pension plans, which in turn is changing the investment landscape as well as corporate pension plan strategies. It is changing the composition of the workforce, and perhaps more significantly the supply and demand of various goods and services. Questions such as whether the next generation will be as consumer focused as the boomers were, or more experience focused in their spending habits, are key risks for devising long-term product development strategies. The demographic issues are of course intertwined with the political risks. Additionally, it is interesting to note the different nature of demographic risks on a global scale. While North America is aging and starting to deal with issues similar to those that Japan dealt with in the 1990s, the Middle East faces the opposite set of demographics, with a population that is getting much younger, which in turn creates its own unique set of risks.

Technological risk is an ongoing part of business, yet many may not consider it part of the risk plan. In an ERM context, however, it certainly makes sense to integrate it. Robotic automation, artificial intelligence, the increasing practicality of big data analytics, and of course the ever-evolving social media and Internet of Things make for an interesting future for the risk manager.

Some external risks the firm can get out ahead of. For instance, a firm can protect against reputational risk through consistent branding and positioning. Likewise, firms can get ahead of technological risk through aggressive research and development and by maintaining a high level of training and knowledge for their employees. However, for the most part, external risks are what happens to the firm, and the key is preparation and early recognition so the risks can be managed rather than reacted to in crisis mode.

Internal risks are quite different from external risks in that a firm conceptually has some level of control over its own internal risks. The reality, however, is that surprises will continue to happen. The main internal risks almost always tend to be operational in nature. Operational risks involve the people and processes of the organization. Process risk evolves as the culture, and the context in which the organization exists, change at different rates from the management processes.

The harder to manage, but perhaps most critical, operational risk is the people component. People risk could arise from the actions of a single individual or a few individuals, or it could be systemic within the culture of an organization. We prefer to treat culture as separate from operational risk to highlight its systemic nature.

There are a variety of people risk factors and measures, ranging from the ability to attract and retain the right quality and quantity of employees to the training and compliance operations. People risks include elements such as the integrity of employees, their skill sets, and their competitive ability relative to their function.

The diversity of employees, both identity diversity and cognitive diversity, is becoming an especially prominent issue, in part due to demographic changes. As the baby boomers begin to retire in significant numbers, companies are learning that dealing with the newer generations, particularly millennials, is not always a straightforward task. Cultural norms that worked well in the past do not always have the same effectiveness. Different strategies are required for sourcing, training, and retaining the appropriate employees. For some industries the effect is compounded by technological change. Personal workflow processes that worked well in the past are not guaranteed to retain their effectiveness. Workstyles are changing, and what high-quality employees want from their careers is quite different from earlier generations. Expectations are different, and alternative opportunities are different. While stating that there is a war for talent may be overstating the case, there are definitely issues that can make or break an organization.

Internal financial risks surround the financial flexibility that the firm has established for itself. These include issues such as the level of debt financing, the structure of the financing, the financial relationships with stakeholders such as lenders and investors, the financial strategies for managing operating capital and working capital, the financial relationships with suppliers, and the all-important relationship with customers, including issues such as the extension of credit and financial purchasing incentives offered. A general rule of thumb is that the greater the operational and business risks of the firm, the lower the financial risk should be.

Strategic risk arises from the strategic choices the firm makes. This includes decisions such as which markets to operate in, what products or services to sell, the marketing strategy, and whether the firm will be well diversified or vertically integrated, or more focused in a few select areas. Risk management techniques aid in the management of strategic risks by highlighting the relative sensitivities of competing strategies and placing them in the context of the overall risk level of the firm.

The final risk we will discuss is the cultural risk of the firm. Organizational culture can be difficult to get a handle on. In fact, most large organizations will have several different cultures depending on geography, functional area, or division. However, knowledge of the firm’s sensitivity to its culture, the susceptibility of the culture to shift, and the factors that might cause a shift certainly aids in the management of those risks. Ultimately, culture risk may be the most significant risk of all. Including culture risk, and changes in culture risk, in the overall set of risk metrics is a prudent thing to do for effective risk management.

Another way to separate risks into categories is by those risks that are qualitative versus those that are quantitative. As many risk management techniques grew out of engineering ideas about risk, or from financial risks, the bias in risk management is toward those risks that can be quantitatively measured. This brings us back to the quote from Peter Drucker that we started the chapter with: “what gets measured gets managed.” Arguably some of the most critical risks, such as cultural risk or strategic risk, are qualitative as well as subjective in nature. This does not mean, however, that an effort should not be made to measure these qualitative risks and include them in the ERM process. Later in this chapter we will discuss some ideas and techniques for getting a measure of some of these qualitative risks. They are very important and cannot be ignored.

Finally, it bears repeating that another way to categorize risks is into those that are simple and complicated versus those that are complex. The different nature of dealing with complicated versus complex risks has been discussed in various places in this book. In categorizing risks, it can be very helpful to explicitly separate out those risks that are complex. This will help in ensuring that they are properly managed.

Quantitative Risk Measurement

There are a wide variety of quantitative risk measures. Some use historical data; some use forward-looking data. Some are based on simple measures such as the range of historical results, while others use more advanced statistical techniques. It can be very tempting to get carried away with quantitative risk measures. They are generally available, they appear to be very precise, and a large number of mathematical tools have been developed to deal with them. However, just because sophisticated techniques are available does not necessarily mean that they will always be better. Many advanced quantitative techniques are like a modern exotic supercar: very advanced, expensive, and glamorous, but not all that practical for necessary daily tasks such as picking up groceries or taking the kids to their soccer game.

The rise of ERM as a popular management tool has led to the rise of the quantitative risk analyst. It is important to note that, in our opinion, wisdom gained through experience and thinking will always trump textbook learning. We will elaborate at the end of this chapter on the dangers of an overreliance on quantitative techniques. Our aim in this section is to cover the basics and to give you, the reader, the fundamentals for understanding and implementing the major tools and concepts.

The main set of quantitative risk measures focus on how much a variable can vary. The two main measures are the range and the standard deviation. The range is simply the distance from the minimum value of the variable to its maximum value. The standard deviation is the square root of the average of the squared differences of the values from their mean. The formula for standard deviation is shown in Equation 6.1 as follows.

\sigma = \sqrt{\frac{1}{n-1}\sum_{i=1}^{n}\left(x_i - \bar{x}\right)^2}

Equation 6.1 Standard deviation

The use of the standard deviation is well established for risk management purposes. The standard deviation basically gives the “fatness” of a distribution, as shown in Figure 6.1. The distribution shown by the dotted line has a smaller standard deviation than the distribution illustrated by the solid line.


Figure 6.1 Normal distributions with different standard deviations

The larger the standard deviation, the more widespread the distribution is, and the more volatile we say the variable is. Thus, standard deviation and its associated variance (which is just the standard deviation squared) are often directly associated with the riskiness of a variable; the greater the standard deviation, the riskier the variable is said to be.

Standard deviation is very popular as a risk measure because a large number of mathematical tools are associated with it. For instance, we can calculate probabilities of observing certain values knowing just the average value of the distribution and the standard deviation.
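Both basic measures are straightforward to compute. The following minimal sketch calculates the range and the sample standard deviation of Equation 6.1 using Python's standard library; the return figures are purely hypothetical.

```python
import statistics

# Hypothetical monthly returns (in percent) for an asset
returns = [1.2, -0.5, 0.8, 2.1, -1.4, 0.9, 1.7, -0.3]

value_range = max(returns) - min(returns)   # the range measure
mean = statistics.mean(returns)
std_dev = statistics.stdev(returns)         # sample standard deviation (Equation 6.1)

print(f"range = {value_range:.1f}, mean = {mean:.3f}, std dev = {std_dev:.3f}")
```

Note that `statistics.stdev` uses the sample convention of dividing by n − 1; `statistics.pstdev` would divide by n instead.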

The use of the standard deviation, however, involves a few key assumptions. The first is that the underlying variable being measured is normally distributed; that is, the plot of the frequency of values versus the values themselves has the familiar “bell shape.” For many of the variables that we are interested in for risk management, the assumption of the normal distribution is a reasonable one. When we plot the frequency of outcomes we do indeed see that the probability of obtaining values near the average is much higher than the probability of obtaining values in the “tails”; that is, the values far from the average. This is evident in Figure 6.1, where values near the average occur with a much greater frequency than extreme high or low values.

Although the assumption of normally distributed variables is often reasonable, the reality is generally a bit different. What we observe in real life is that many risk variables, particularly financial variables (for example, changes in interest rates or exchange rates), are actually leptokurtic. Leptokurtic means that the distribution has fat tails; that is, extreme events occur with a higher probability than the standard deviation and the normal distribution lead us to believe. This is why you will hear financial market commentators talk about a once-in-a-lifetime event that seems to occur several times in a decade!

The errors introduced by fat tails may seem small and insignificant, and indeed most of the time they are and can be ignored. However, for many risk management purposes they cannot. Risk management is often most concerned about extraordinary events; that is, events that are extreme. Thus risk practices are designed to manage these extremes. Ignoring fat tails means that you will have a small error in the absolute probability, but a very large percentage error. For instance, assuming a normal distribution may give a probability of occurrence of only a tenth of a percent. The real probability that takes into account the leptokurtosis might be only two percent; still very small. However, the real probability is 20 times the calculated probability. When constructing risk metrics and risk mitigation strategies, this error is huge, and risk managers ignore it at their peril.

A second issue is that risk management often occurs in the tails; that is, risk management is most needed when extreme tail events are occurring. This makes the errors all the more significant, since the time when risk management is most needed is when we are likely to have the most inaccurate measures of risk.

A third issue with using standard deviation as a risk measure is that it ignores the difference between good risk and bad risk, and simply lumps all risk together into the same calculation. For instance, assume that on average you expect to gain or lose exactly zero dollars today. However, if you undertake activity “A,” there is a probability that you will lose 20 dollars out of your pocket, and if you undertake activity “B” there is a probability that you will find 20 dollars while walking down the street. Calculating risk using the standard deviation, the risk of activity “A” will be the same as the calculated risk of activity “B.” Clearly, however, activity “B” would be preferred, as it entails “good risk,” while activity “A” entails “bad risk.” Using standard deviation as a measure ignores this important distinction between bad risk and good risk.

To overcome this deficiency, a variation of standard deviation called semi-standard deviation has been developed. With semi-standard deviation, the deviation is measured as before, but from a baseline value rather than the average, and the deviations for the values above the baseline (called the positive semi-standard deviation) are measured separately from the values below the baseline (called the negative semi-standard deviation). In other words, to calculate the positive semi-standard deviation you only include those values in the sum where the observation is above the baseline value. Likewise, for the negative semi-standard deviation you only include those values where the observation is below the baseline value. The respective formulas are shown in Equations 6.2a and 6.2b.

\sigma^{+} = \sqrt{\frac{1}{n}\sum_{i:\,x_i > B}\left(x_i - B\right)^2}

Equation 6.2a Positive semi-standard deviation

\sigma^{-} = \sqrt{\frac{1}{n}\sum_{i:\,x_i < B}\left(x_i - B\right)^2}

Equation 6.2b Negative semi-standard deviation

The use of semi-standard deviation allows one to calculate the amount of upside risk relative to downside risk. It also allows the measurement of risk relative to a benchmark, rather than relative to the average, which may or may not be a relevant reference point. A disadvantage of semi-standard deviation is that the mathematical tools for assessing probabilities are no longer available.
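As a rough illustration, the two semi-standard deviations can be computed as in the following sketch. The return figures are hypothetical, and note that conventions vary as to whether the sum is divided by the full sample size or by the count of included observations; this sketch divides by the full sample size.

```python
import math

def semi_std(values, baseline, side):
    """Semi-standard deviation about a baseline.
    side='up' keeps observations above the baseline (positive semi-standard
    deviation); side='down' keeps those below it (negative semi-standard
    deviation). Divides by the full sample size; conventions vary here."""
    if side == "up":
        devs = [(x - baseline) ** 2 for x in values if x > baseline]
    else:
        devs = [(x - baseline) ** 2 for x in values if x < baseline]
    return math.sqrt(sum(devs) / len(values))

returns = [1.2, -0.5, 0.8, 2.1, -1.4, 0.9, 1.7, -0.3]
upside = semi_std(returns, 0.0, "up")      # positive semi-standard deviation
downside = semi_std(returns, 0.0, "down")  # negative semi-standard deviation
print(f"upside = {upside:.3f}, downside = {downside:.3f}")
```

Here the upside measure exceeds the downside measure, suggesting that, relative to the zero baseline, this hypothetical series carries more good risk than bad risk.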

A related measure to standard deviation is the skew. The skew of a distribution measures whether there is a bias toward the upside or the downside relative to the normal distribution, which is symmetric. Figure 6.2 illustrates skew in a distribution.

With risk measures, it is often the case that the co-movement between variables is the key factor. The correlation measures this co-movement. Variables that have a correlation of positive one always move in the same direction together, while variables that have a correlation of minus one always move in opposite directions. Variables that have a correlation of zero have no relationship in their co-movement.


Figure 6.2 Skew
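A common way to estimate skew from sample data is the standardized third moment. The sketch below uses the population convention (dividing by n); other conventions apply small-sample corrections, so treat the exact formula as one of several in use.

```python
import math

def skewness(values):
    """Skew estimated as the standardized third moment (population convention):
    the average cubed deviation from the mean, scaled by the cubed standard
    deviation. Positive values indicate a long right tail, negative a long
    left tail, and zero a symmetric distribution."""
    n = len(values)
    m = sum(values) / n
    s = math.sqrt(sum((x - m) ** 2 for x in values) / n)
    return sum(((x - m) / s) ** 3 for x in values) / n

print(skewness([1, 1, 1, 1, 10]))  # positive: one large value pulls the tail right
print(skewness([1, 2, 3]))         # symmetric data: essentially zero
```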

Co-movement is a key concept in risk management, and thus by extension the correlation is a key risk measure. Correlation in part explains the sensitivity of a change in one variable to a change in another variable; for instance, how a change in interest rates changes the demand for a company’s products or services.

Besides just explaining the tendency of variables to move together, correlation is used to design hedge ratios and hedging instruments. For instance, airlines use oil futures contracts to manage the risk of changing jet fuel prices. Although jet fuel and crude oil are quite different things, the changes in their respective prices exhibit a high correlation, and thus oil contracts are a good way to hedge jet fuel. Another common example is the negative correlation between gold prices and the economy. When the economy is slumping badly, gold prices tend to do well, and vice versa. Thus many investors include gold in their portfolios as a hedge against the overall economy and the stock market slumping.

Using items with a correlation near one (or near negative one) is a common risk management method. It is the basis of street vendors in New York selling sunglasses, but also having umbrellas ready to sell in case the weather changes. The sales of sunglasses are negatively correlated with the sales of umbrellas, so having both items to sell helps to even out sales. When the sales of one product are weak, it is likely that the sales of the other product are strong.
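The correlation in the street-vendor example can be computed directly. The following sketch implements the Pearson correlation coefficient from first principles; the weekly sales figures are entirely hypothetical.

```python
import math

def pearson_corr(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical weekly unit sales for the street-vendor example:
# sunny weeks sell sunglasses, rainy weeks sell umbrellas
sunglasses = [30, 45, 12, 50, 8, 40]
umbrellas = [10, 5, 35, 3, 40, 9]

corr = pearson_corr(sunglasses, umbrellas)
print(f"correlation = {corr:.2f}")   # strongly negative
```

The strongly negative coefficient confirms that carrying both products smooths total sales across weather conditions.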

There is a danger, however, of relying too much on correlation-based portfolio techniques. In times of crisis, correlations tend to be very unstable. There is a common saying that in times of crisis all correlations go to one; that is, in a crisis everything tends to go down in price together, no matter whether the assets are normally negatively correlated with the market.

Extending beyond correlation is the technique of regression. A regression equation is a mathematical technique whereby a variable, such as sales demand, is related to a variety of other variables. For instance, a house builder might use a regression equation to predict house sales based on the mathematical relation between house sales and the economic variables of interest rates, GDP growth, stock market returns, and wage inflation. A regression takes a set of variables that are correlated with the dependent variable in question (in our example, house sales) and in effect calculates both the correlation and the sensitivity to the explanatory variables (interest rates, GDP growth, stock market returns, and wage inflation). The output of the regression is a series of betas “b,” one for each explanatory variable. The beta says how much the dependent variable will change for a change of one unit in each of the explanatory variables. For example, see Equation 6.3, which illustrates a regression equation to explain house sales for a hypothetical house construction company.

\text{House Sales} = a + b_1(\text{Interest Rates}) + b_2(\text{GDP Growth}) + b_3(\text{Stock Market Returns}) + b_4(\text{Wage Inflation}) + \varepsilon

Equation 6.3 Regression equation

Regression analysis is a very powerful tool for risk management, as it can be used to build a model of the factors that are driving the economic performance of the firm. Interestingly, a company’s beta, which is found by regressing the (generally monthly) returns of the company’s stock against the returns of the stock market index, is a measure frequently used by investors to gauge the overall riskiness of a company. Companies whose beta is greater than one are said to be riskier than average, and these stocks are expected by investors to produce a higher expected return in order to compensate for the higher level of investment risk accepted by investing in them. Conversely, stocks with a beta less than one are said to have lower risk than average and are frequently referred to as defensive stocks because of their lower levels of investment risk.
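A regression of this kind can be fit by ordinary least squares. The following sketch is simplified to a single explanatory variable (mortgage rates) with entirely hypothetical data; a model like Equation 6.3 would include several explanatory variables and would typically be fit with a statistical package.

```python
def ols_fit(xs, ys):
    """Ordinary least squares fit of y = a + b * x; returns (a, b)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = my - b * mx
    return a, b

# Hypothetical history: average mortgage rate (%) vs. houses sold per quarter
rates = [3.0, 3.5, 4.0, 4.5, 5.0, 5.5]
sales = [520, 480, 455, 410, 380, 340]

a, b = ols_fit(rates, sales)    # b is the beta on the rate variable
forecast = a + b * 4.25         # predicted sales if rates sit at 4.25%
print(f"beta = {b:.1f} sales per percentage point of rates")
```

The negative beta quantifies the intuition that higher rates depress house sales, and gives the sensitivity in units (sales lost per percentage point) that a risk manager can act on.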

Regression models use historical data to build a model of how sensitive the firm’s performance is to a variety of variables. If one knows the values of the explanatory variables, one can use a regression model to get an estimate of the future value of the dependent variable. This is a very common use of regression analysis. However, there are two issues. The first is that a regression model is developed using historical data. In other words, the model explains how the dependent variable changed in value based on the historical changes in the explanatory variables. Thus, the regression model is only as good as the ability of the historical relations to hold going forward. If there are significant changes in the market dynamics, then a regression model can produce very misleading results. For example, historical regression models developed using data from the 1990s and before showed that there was a minuscule probability of a large number of households defaulting on their mortgages at once. However, changes in the mortgage market, and changes in the dynamics of the interest rate market, meant that the results of regression models using data from the 1990s were no longer valid in the 2000s. That change in the underlying dynamics (along with the previously discussed issue of leptokurtosis) meant that the prediction models for mortgage defaults were wildly inaccurate and misleading.

The second issue with using regression models to predict outcomes is that it can only be done to the extent that we can predict the future values of the explanatory variables. That is, we can only get good prediction estimates for our dependent variable of house sales if we have good estimates of our explanatory variables of interest rates, GDP growth, stock market returns, and wage inflation. One technique is to make predictions based on an “expected” case, a “best” case, and a “worst” case set of scenarios for the explanatory variables. This, however, has some obvious limitations.

To overcome this limitation of regression models, a more advanced technique called Monte Carlo simulation is utilized. Monte Carlo simulation uses the power of a computer to generate thousands of different future scenarios. The speed of modern computers means that they can very quickly model thousands, or even tens of thousands, of potential future scenarios. Furthermore, these future scenarios are generated in such a way that the explanatory variables occur with the same probabilities, in the same proportions, as they have occurred in the past. It is like spinning a roulette wheel many times: over many spins, the ball lands in red, black, and green spots in proportion to the wheel’s layout. Knowing, for a given simulation, the simulated values of interest rates, GDP growth, stock market returns, and wage inflation, we can use our model to predict the value of house sales for that one scenario. Doing this thousands of times (again, utilizing the speed of a computer to do the calculations) gives us thousands of predictions for house sales. We thus start to see a distribution of potential future outcomes, and from this we can calculate the expected value of future outcomes, as well as the range and the level of uncertainty of future outcomes.

Monte Carlo simulation sounds very complicated, but in reality it is very easy to do if one has a good model of the variables that affect the firm. Computer programs exist to produce Monte Carlo results, and the technique is now taught in most business school programs. There are two distinct issues that the manager needs to be cognizant of with Monte Carlo simulation. The first is that the technique is only as good as the accuracy and relevance of the model used to drive it. If the analyst does not have a good model of how the relevant factors affect the performance of the firm, then Monte Carlo simulation will be a very elegant technique that produces irrelevant output. The second issue, which might be considered an advantage from a risk standpoint, is that you do not get a single answer from a Monte Carlo simulation but instead a distribution of answers; indeed, you get an answer for each simulation that you run. The output of the thousands of simulations is generally plotted on a frequency histogram so that patterns can be established, but a single definitive answer is not forthcoming from a Monte Carlo analysis. For risk management this may be an advantage, as it is the distribution, or range of outcomes, that the risk manager needs to prepare for. A risk manager, almost by definition, knows that the future cannot be known to a single point in any case.

From Monte Carlo simulation comes a very powerful and widely used risk measure called value at risk (VAR). VAR is the amount that a company may expect to lose over a given period of time with a given confidence level. For instance, the VAR for a company might be stated as: with 95 percent confidence, the loss over the next month is expected to be $10 million or less. Variations of VAR are cash flow at risk (CFAR), where a minimum level of cash flows is estimated with a given confidence level, and earnings at risk (EAR), where minimum earnings are estimated with a given confidence level.

VAR can be calculated through a variety of methods, although a Monte Carlo simulation is frequently used. The distribution of the expected value (or cash flows, or earnings) is projected using a Monte Carlo simulation, and the projected value at the stated confidence level is chosen. Figure 6.3 illustrates this. Suppose that the chosen confidence level is 95 percent. This implies finding the level of value such that 5 percent of the simulated results lie below that level. This is shown by the value X in Figure 6.3, and it implies that 95 percent of the time the value of the company (or of the company's portfolio) will be above the level of X.

Image

Figure 6.3 Value at risk
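In code, picking the value X from a simulated distribution amounts to taking a percentile. The profit-and-loss distribution below is hypothetical (a normal distribution standing in for the output of a full Monte Carlo model of the firm), but the final step is the same regardless of how the simulations were produced.

```python
import numpy as np

# Sketch of extracting 95% VAR from a simulated distribution of
# monthly profit and loss. The P&L distribution here is hypothetical:
# normal with mean $2M and standard deviation $8M (in $ millions).
rng = np.random.default_rng(1)
pnl = rng.normal(2.0, 8.0, 100_000)   # simulated monthly P&L, $M

# The 5th percentile of P&L is the level X such that 95% of outcomes
# lie above it; the loss at that level is the 95% VAR.
x = np.percentile(pnl, 5)
var_95 = -x
print(f"95% monthly VAR: ${var_95:.1f}M")
```

Note that nothing in this calculation says anything about how bad the outcomes below X can get, which is the first caveat discussed next.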

VAR, CFAR, or EAR is a convenient way to express a negative risk case for a company. There are, however, a few caveats to be cognizant of when using VAR. Firstly, VAR does not state the worst-case scenario. It only states a threshold: a given percentage of scenarios will be worse than the calculated value, and the VAR number tells you little about how bad those scenarios might be. For example, in Figure 6.3, the VAR was given at X. The actual loss, however, might be far to the left in the graph and thus a large multiple of X.

A second caveat is that VAR is only as accurate as the model used to build it. If the value model of the company is inaccurate, or does not take into account all significant variables, then the results of the VAR calculation may be quite misleading. A third caveat concerns the previous statements about the significance of Black Swan events,1 leptokurtosis (fat tails), and the fact that historical correlations may not be valid in times of crisis. The VAR calculation does not explicitly take into account Black Swan events and can be quite sensitive to leptokurtosis as well as to changing correlations.

The issues with the quantitative measures that we have discussed necessitate a further check. To provide one, most companies will supplement their quantitative risk measures with stress testing. Stress testing involves calculating outcomes under worst plausible case scenarios. There is often much debate about what constitutes a worst plausible case; some will argue for something that is unrealistically catastrophic, while others will argue for something too mild to reflect the possible extent of a confluence of negative events.

When doing stress testing it is important to examine the correlations and to realize that historical relationships will often change when conditions are stressed. Thus, it is prudent to examine outcomes under a variety of different scenarios. Additionally, there is significant value in developing scenarios, or stories, of how a confluence of events may arise. Furthermore, we believe there is value in developing upside scenarios as well, to improve the opportunities for capturing the benefits of positive events. We will discuss scenario analysis more in the next section on qualitative risk analysis.
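The effect of stressing correlations can be made concrete with a small simulation. The numbers below are hypothetical: two assets with assumed volatilities of 20 and 30 percent are simulated first at a "normal" correlation and then at a stressed correlation closer to one, the kind of shift often seen in a crisis, and the 95 percent VAR of an equally weighted portfolio is compared under each regime.

```python
import numpy as np

# Sketch of a correlation stress test (all parameters hypothetical).
rng = np.random.default_rng(7)

def portfolio_var(correlation, n_sims=100_000):
    # Covariance matrix for two assets with 20% and 30% volatility.
    cov = np.array([[0.04, correlation * 0.2 * 0.3],
                    [correlation * 0.2 * 0.3, 0.09]])
    returns = rng.multivariate_normal([0.0, 0.0], cov, n_sims)
    portfolio = returns.mean(axis=1)     # equal weights
    return -np.percentile(portfolio, 5)  # 95% VAR

normal_var = portfolio_var(correlation=0.3)    # "historical" regime
stressed_var = portfolio_var(correlation=0.9)  # crisis regime
print(f"VAR at rho=0.3: {normal_var:.3f}")
print(f"VAR at rho=0.9: {stressed_var:.3f}")
```

The stressed VAR comes out noticeably larger even though neither asset's individual volatility changed, which is why a VAR built on historical correlations can understate crisis losses.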

Qualitative Risk Measurement

Measuring and assessing qualitative risks is a much more subtle and subjective task than dealing with quantitative risks. Due to their objectivity, quantitative risks are generally considered easier to work with, at least for the quantitatively minded risk analyst. Many techniques have been developed to assess and track quantitative risks, but the number of techniques for qualitative risks is much smaller. Although they cannot be measured on any objective mathematical basis, qualitative risks are nonetheless critical to assess. As mentioned at various times throughout this book, the major source of risk is ultimately people; and people are fundamentally qualitative in nature.

One simple yet popular technique for assessing qualitative risks is a risk survey. In a risk survey, various stakeholders are asked what they consider the major risks to be, and to rank those risks according to their probability of occurrence as well as their impact should they occur. To make a risk survey more effective, and to build a common dialogue, it is advised to provide a set of definitions with the survey that clearly and objectively lays out the parameters for a low, medium, or high assessment. For some survey participants, a high probability might be anything over 5 percent, while others might consider the threshold for high to be anything over 90 percent; a shared set of definitions removes this source of disagreement. Of course, the survey participants will still be subjective in their assessments, but at least the subjectivity will not be caused by differences in definitions.

An advantage of conducting a risk survey, versus convening a group of experts for a brainstorming session, is that a survey allows the organization to gain a broader perspective and helps to avoid group-think bias. The disadvantage of a risk survey is that it forgoes the value of a group discussion in which ideas are generated and beliefs about risk probabilities and impacts are revised after discussion with others. Such revision of assessments after discussion has been shown to be very helpful in producing more accurate risk estimates and forecasts.

To overcome the deficiencies of a risk survey, the Delphi method is frequently used for risk identification and subjective risk measurement. The Delphi method involves getting a diverse group of people into a facilitated meeting. The group begins by listing the set of risks for consideration (or a preliminary list may have been provided beforehand). A facilitated discussion is then conducted in which the group discusses the list of risks, the relative probabilities of the risks occurring, and their relative importance. So far, the Delphi method sounds just like any other focus group. However, this is where the critical step occurs: the group votes anonymously on the relative probability and the relative impact of the discussed risks. The anonymous voting is critical as it prevents group-think bias, and it also gives those who may be reluctant to speak up a way for their opinions to still count and be seen by others. After a round of voting, the anonymous results are displayed for the group and a facilitated discussion takes place. Rounds of anonymous voting, each followed by display and facilitated discussion of the results, continue until a consensus is reached.

The Delphi method is a powerful way to get a rich picture of the risks facing an organization. By drawing on a diverse group of people (people from different levels of seniority, different job functions, and different areas of specialization), it provides a methodology that has been shown to produce more reliable and robust results, free from group-think and personality effects.

To illustrate the importance of having a diverse group of participants producing the risk forecast, consider the case of prediction markets. In a prediction market, a mechanism is set up for participants to trade "securities" representing different scenarios. Participants are allotted "securities" which represent various risk scenarios. Participants then trade these scenarios (for example, trading three scenarios of a cyber-attack on the company for one scenario of a major lawsuit), and the value of each scenario is determined at the end of several trading sessions. Perhaps the best-known example of a prediction market is the Iowa presidential prediction market, which is a frequently cited source for predicting the outcome of the U.S. presidential election.2 In a typical election prediction market, participants buy a futures contract that pays $1 if their candidate wins and nothing otherwise. The price at which this futures contract trades is an indicator of the probability that the candidate will actually win. For instance, if a futures contract on candidate A winning is trading at $0.62, that implies there is a 62 percent chance that candidate A will indeed win the election.
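The price-to-probability mapping is simple enough to state as code. The sketch below uses hypothetical prices; the consistency check at the end reflects the fact that in a winner-take-all market the prices of mutually exclusive outcomes should sum to about $1.

```python
# In a winner-take-all prediction market, a contract pays $1 if the
# event occurs and $0 otherwise, so its trading price is a direct
# estimate of the event's probability (prices here are hypothetical).
def implied_probability(contract_price: float) -> float:
    """Convert a $1-payoff contract price into an implied probability."""
    if not 0.0 <= contract_price <= 1.0:
        raise ValueError("price of a $1-payoff contract must be in [0, 1]")
    return contract_price

# A contract on candidate A trading at $0.62 implies a 62% chance.
print(f"implied probability: {implied_probability(0.62):.0%}")

# Prices across mutually exclusive outcomes should sum to about $1;
# a large deviation would present an arbitrage opportunity.
prices = {"candidate A": 0.62, "candidate B": 0.35}
total = sum(prices.values())
print(f"sum of prices: ${total:.2f}")
```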

Prediction markets have been used in a wide variety of situations and to predict far more than presidential elections. They have also been shown to be far more reliable and accurate than the projections of experts or pundits. For a very interesting look at prediction markets refer to “The Wisdom of Crowds” by James Surowiecki.3

Perhaps the most commonly used method is to produce a thorough set of scenarios and assess the likelihood and associated impact of each one. Obviously, the usefulness of scenario analysis as a risk planning tool will depend on the quality of the scenarios developed. The quality of a scenario analysis is a function of the creativity, the plausibility, and the richness of the scenarios produced. A scenario does not necessarily have to accurately predict the future in order to be useful. Even the most well-developed scenarios are very unlikely to predict actual future outcomes; we have the well-known statement that "truth is stranger than fiction" to remind us of that fact. The value in scenario analysis is that it starts the process of realizing what the potentialities for future risks are, and ultimately this is the best that any risk forecasting method can hope for.

A portfolio of scenario analyses should produce a portfolio of themes as well as a portfolio of potential specifics. In particular, scenario analysis should try to look critically at the unforeseen consequences of events and at how correlations and causation between events may unfold and change as time passes. Scenario analysis is an exercise that combines creativity with a deep understanding of how the various external and internal risks affect the operations and outcomes of the organization. It is challenging to do well.

Risk Maps

Once the various risks have been listed, and an estimate of their likelihoods and impacts has been made, it is time to produce some form of ranking or prioritization of the risks. One of the most common ways of doing this is by developing a risk map. A risk map is a chart with the likelihood of a risk on the vertical axis and the impact on the horizontal axis, as shown in Figure 6.4.

Image

Figure 6.4 Risk map

The risk map gives a pictorial representation of the risks. In the version of a risk map shown in Figure 6.4, only negative or harmful risks have been considered. In this common form of the risk map, those risks in the top right-hand corner, such as the one illustrated by risk A, are the main concerns. These are the risks that have both high impact and a high likelihood of occurrence. The goal is to develop risk management strategies so that risks such as the one indicated by point A are eventually assessed to be at point B, where they are considered to be of low impact and low probability.
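The bookkeeping behind a risk map can be sketched as a small prioritization routine. The risks, scores, and 1-to-5 scale below are hypothetical; the point is simply that each risk's likelihood and impact determine both its quadrant on the map and its rank.

```python
# A minimal sketch of placing risks on a risk map (scores hypothetical).
# Each risk gets a likelihood and an impact on a 1-5 scale; risks that
# score high on both land in the top right-hand corner of the map.
risks = {
    "cyber attack":   {"likelihood": 4, "impact": 5},
    "major lawsuit":  {"likelihood": 2, "impact": 4},
    "key-staff loss": {"likelihood": 3, "impact": 2},
    "fraud":          {"likelihood": 1, "impact": 5},
}

def quadrant(risk, threshold=3):
    high_likelihood = risk["likelihood"] >= threshold
    high_impact = risk["impact"] >= threshold
    if high_likelihood and high_impact:
        return "high priority"   # the top right-hand corner
    if not high_likelihood and not high_impact:
        return "low priority"
    return "monitor"

# Rank by likelihood x impact as a simple prioritization score.
ranked = sorted(risks,
                key=lambda name: risks[name]["likelihood"] * risks[name]["impact"],
                reverse=True)
for name in ranked:
    print(f"{name}: {quadrant(risks[name])}")
```

A plotting library could render the same data as the familiar two-axis chart; the quadrant logic is what turns the picture into a prioritized work list.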

Image

Figure 6.5 A better risk map

There are a couple of variations on a risk map that should be noted. Some risk maps plot risks with different-sized dots, where the size of the dot indicates the amount of control that the organization believes it has over the risk. Another variation is the risk map shown in Figure 6.5.

In the risk map shown in Figure 6.5, the horizontal axis has been separated into good risk and bad risk. This takes into account the two-sided nature of risk; namely, that risk can be positive or negative. In this case, one goal of the risk management function is to move those negative risks that have high likelihood and high negative impact toward lower impact and lower likelihood; this is moving from risk A to risk B as before. However, a second objective is to identify those positive risks that have low impact and low probability and manage them so as to increase both the impact and the likelihood of these positive risks occurring, while putting in place practices that give an increased level of control. This is shown in Figure 6.5 as trying to move risk C to the point where risk D is plotted.

Risk maps are a powerful visual tool not only for assessing risks but also for assessing how the risk management function is performing. By tracking where various risks plot on a risk map through time, the organization gets an idea of how effectively its risk management function is operating and how the risks of the firm are trending.

Concluding Thoughts

Yogi Berra, amongst others, is claimed to have famously said, "it's difficult to make predictions, especially about the future." By its very nature, risk is uncertain. Thus the task of identifying and measuring risk is anything but an exact science; and that is fine. The purpose of risk identification and measurement is to create awareness, preparedness, and a set of flexible risk management plans for the most likely scenarios; plans that hopefully can be adapted to a wide variety of risk situations.

Predicting and measuring future levels of risk is almost by definition an oxymoron. Risk deals in uncertainties and in probabilities. Even with the most sophisticated of forecasts, the best that we can do is develop a set of relative probabilities for events. Risk management is often blamed for failing to predict, but this is a false argument; it is like blaming a child for being too tall or too short, when no one can predict or control how tall a child will grow.

Predicting and measuring risk is completely in sync with U.S. Secretary of Defense Donald Rumsfeld's famous quote about "unknown unknowns." In full, his statement was:

Reports that say that something hasn’t happened are always interesting to me, because as we know, there are known knowns; there are things we know we know. We also know there are known unknowns; that is to say we know there are some things we do not know. But there are also unknown unknowns—the ones we don’t know we don’t know. And if one looks throughout the history of our country and other free countries, it is the latter category that tend to be the difficult ones.

Although Rumsfeld was widely ridiculed for this statement, the reality is that it is a very accurate and astute statement about identifying and measuring risk.

1As a reminder to the reader, a Black Swan event is a low probability but high impact event.

2https://tippie.biz.uiowa.edu/iem/ (accessed February 1, 2018).

3Surowiecki, J. 2005. The Wisdom of Crowds. Anchor.
