CHAPTER 5
Risk in Capital Markets

Peter Reynolds and Mike Hepinstall

INTRODUCTION

The capital markets provide an invaluable mechanism within the economy to enable private‐ and public‐sector investment and growth by facilitating the allocation of resources and sharing of risk. Such activities by their very nature require careful and appropriate risk management. The level of complexity and speed inherent within modern capital markets firms makes sound risk management extremely important, as problems can emerge and escalate quickly. In particular, following the financial crisis, there has been an elevated regulatory focus on risk management practices, risk measures, and capital requirements.

This chapter provides a comprehensive overview of the risk management and measurement approaches used within capital markets firms. Where regulatory concepts are introduced, the focus is primarily on regulations that apply to banks active in the capital markets; however, the underlying risk management principles apply to non‐bank capital markets firms as well. This chapter will outline a common taxonomy used to parse risks into digestible types, review the major metrics used to measure and monitor risk, provide a brief overview of the regulatory environment, and discuss the intersection of risk and strategy.

The risk metrics section includes more detailed reviews of market and counterparty risk measures that are uniquely important to the capital markets, and provides an introduction to other relevant risk types such as operational risk and liquidity risk.

Even the in‐depth sections of this chapter only skim the surface of the thinking, practice, and approaches used to measure, monitor, and react to risks. Thousands of books, academic papers, regulatory rules, and other thought‐pieces have been written about risk management, and the approaches continue to evolve. This chapter provides an introduction to key concepts of that world. Finally, it closes with some thoughts on the future of risk management.

OVERVIEW OF RISK MANAGEMENT IN CAPITAL MARKETS

Defining Risk

Given its importance, there is surprisingly little consensus on the definition of risk. Early debates centered on the distinction between “risk” and “uncertainty,” with a view that risk needs to be inherently quantifiable.1 This thinking has evolved notably with Holton (2004),2 who proposes that risk exists when there is uncertainty about potential outcomes and those outcomes have an impact on utility.

For the purposes of this chapter, we define risk as uncertainty around future expectations of earnings. We should highlight that risk lies in the deviation from expectations. Within a distribution of potential outcomes, the expected value may be negative (for example, a bank may anticipate that there will be expected losses on a loan portfolio). But this is not in itself risk, as the expected value is known and it can be priced into the transaction at inception. Risk lies in the unexpected: the volatility around that expectation.

A Taxonomy of Risks

In their paper from 2007,3 Schuermann and Kuritzkes define a mutually exclusive, collectively exhaustive taxonomy of risk (Figure 5.1). They use this to determine the proportion of earnings volatility attributable to each risk‐type. This serves as a good starting point for parsing risk into digestible types that can be measured and monitored.


Figure 5.1: Taxonomy of Risks

Source: Oliver Wyman

  • Market risk is the earnings impact of a change in the market value of a position held by an institution. This risk‐type is most clearly seen in activities that involve principal investment (e.g., hedge funds and private equity firms). The failure of Long Term Capital Management in 1998, documented by Roger Lowenstein,4 serves as a quintessential case study on losses resulting from market risk.
  • Credit risk is the set of risks resulting from the default of other parties to whom you have exposure. In commercial and retail banking, this risk mainly arises from defaults of those to whom a bank has lent money. Within the capital markets, credit risk largely stems from defaults by trading counterparties resulting in nonperformance on their contractual obligations, such as those arising from derivative contracts. This is known more specifically as counterparty credit risk. For example, firm A may hedge an equity market risk exposure by purchasing put options from firm B, where the put references the same equity security. Yet even if the option is in the money, firm A may lose the value of that option in the event that firm B defaults and does not fulfill its obligation under the contract.

    As a high‐profile example, much of the fallout from the collapse of Lehman Brothers in 2008 stemmed from concern that losses arising from one counterparty default could be large enough to destabilize other counterparties, leading to a chain of further defaults.

  • Asset–liability risk, also known as liquidity risk, stems from a potential mismatch between a firm's asset and liability profile. Such a mismatch may lead to a firm being unable to meet short‐term financial demands. This risk may arise when a firm has a material portion of short‐term funding (e.g., overnight repo) supporting long‐term assets (e.g., multi‐month derivative contracts). If such an institution undergoes a crisis of confidence and short‐term funding becomes unavailable, long‐term assets may not be able to be liquidated in time, at a sufficient price to meet obligations. The failure of Bear Stearns is a good example of this risk.
  • Operational risk, as defined by the Bank for International Settlements, is “the risk of loss resulting from inadequate or failed internal processes, people, and systems, or from external events.”5 Such losses are referred to as non‐financial, as they are not driven by market prices, although they can be significantly influenced by market movements. For example, Société Générale lost EUR 4.9BN in 2008 due to rogue trading by Jerome Kerviel and a lack of adequate internal processes and controls. Other examples of operational risk include systems breakdowns, data hacking, and business practice lawsuits.

    More recently, two subsets of operational risk have been the subject of close scrutiny: conduct risk and model risk. Conduct risk is the risk that losses may occur due to misconduct of employees of a bank, for example, the selling of inappropriate products to consumers. Model risk is the risk that errors in models or in their use lead to losses (e.g., by incorrectly pricing trades or supporting faulty business decisions with model output).

  • Business risk is the risk caused by uncertainty in profits due to changes in the competitive, economic, or sociopolitical environment. Examples include increased costs and/or decreased revenue opportunities arising from regulatory changes, margin erosion or market share loss from increased competition, or the ramifications of a prolonged low interest rate environment.

Since the financial crisis, the approach to measuring and managing risk has materially evolved.

First, there has been an increased focus on the connections among various risk types, including more stringent approaches to handling correlation across risks. This has included greater regulatory emphasis on broad‐based stress tests—exemplified by the U.S. Federal Reserve's CCAR process, which considers major macroeconomic shocks that could influence multiple categories of risk.

Second, the prior focus on capitalization, which is the buffer held to mitigate all risk‐types covered above, has been expanded. Many new measures are much more blunt, and purposefully do not seek to risk‐weight assets (e.g., the leverage ratio), reflecting the view that models of risk are inherently limited. In addition, the regulatory focus has extended beyond capital to also include buffers for liquidity and funding risks (“asset–liability risk” in the framework above). This reflects the reality that many of the major firms that failed during the financial crisis did so primarily due to liquidity and funding issues, rather than pure insolvency.

Risk management as a discipline, and regulators in particular, have also taken a broader look at the overall interconnectivity of the whole financial system. This includes the designation of a number of non‐banks as systemically important financial institutions (SIFIs) covered by banking regulation, as well as reviews of the “shadow” banking sector (e.g., repo markets, non‐bank financial intermediaries). Furthermore, the overall functioning of the markets—including depth of liquidity—has been under close review to ensure that the financial system is sound.

Organizational Structures in Place to Manage Risks

The organization of controls and structures within capital markets firms has evolved materially over the past 20 years. The core principle used is that of “three lines of defense”:

  1. The first line responsible for risk management is the business itself—in capital markets firms, that is the front office. Much work has been done to ensure that appropriate controls are in place to govern the day‐to‐day activities of traders and bankers. Furthermore, incentives and compensation plans have been altered to align the remuneration of traders and bankers with the risk goals of the firm (e.g., compensation claw‐back provisions). Often, this first line is heavily supported by the middle office and product control functions.
  2. The second line is responsible for ensuring that policies, procedures, and controls are firmly in place to measure and monitor risks. This includes establishing the firm's risk appetite, developing risk measurement models, monitoring risk measures, and liaising with regulators. Segregation of the lines of defense is critical. In most major capital markets firms, the second line reports to a chief risk officer (CRO), who has a direct line to both the CEO and board of directors.
  3. The third line of defense is Internal Audit, which provides independent testing and evaluation of the effectiveness of risk management, control, and governance processes, and independent advice to management and the board of directors on improving such effectiveness. Internal audit provides an independent evaluation of both the first and the second line of defense.

Capital markets firms often maintain a matrix risk reporting structure for the second line, the first dimension of this matrix being the primary lines of business—with a risk officer dedicated to ensuring that each business unit is appropriately managed, and acting as the primary point‐of‐contact for the business line management team. The second dimension in the matrix is based on risk type, with a risk officer charged with ensuring that the aggregate risk profile of the business is appropriate for each risk type incurred. Recently, a third dimension has grown in importance, as national regulators increasingly require global firms to “ring‐fence” their businesses in each geography, and hence regional risk officers are of growing importance.

In order for the risk organization to be effective, a clearly established set of policies and procedures needs to be put in place. Policies and procedures must be supported by a clear risk appetite statement, limit framework, robust risk measures, and systematic reporting of the risks and associated actions taken to mitigate these risks. Key risk measures, limits, and risk mitigation actions are reported through to a clearly defined committee structure, leading up to the board of directors, who often have a standalone risk committee.

METRICS USED TO MEASURE AND MONITOR RISK

At the core of risk management is a set of metrics comparing the ratio of risk to the buffer held by an institution against that risk. Quantification of these metrics requires two sets of calculations: (1) determining what qualifies as a buffer for the risk in question; and (2) quantifying the potential risk itself, and comparing this to the buffer to ensure adequate protection.

Key Buffers Against Risk in Capital Markets Firms

In order to mitigate risks, banks hold two distinct but related forms of cushion: capital and liquidity.

Through time, banks have failed or have required government assistance because they had shortfalls in capital, lack of liquidity, or a combination of the two. In addition, liquidity and capital shortfalls can blend together in times of stress, with perceptions of inadequate capital leading to liquidity shortfalls, and liquidity shortfalls leading to inadequate capital. All key stakeholders in capital markets firms, including regulators and investors, maintain a keen focus on ensuring adequate capital and liquidity levels.

Capital

In its simplest form, capital is the equity in a bank's balance sheet: the difference between assets and liabilities. If assets decline in value, capital is the cushion that banks hold against these losses.

Bank balance sheets are complex. A bank's total capital is made up of various types of capital, including:

  • Tier 1 capital, which is seen as a core measure of financial strength, composed of:
    • Core equity capital (common stock)
    • Disclosed reserves/retained earnings
    • Certain forms of nonredeemable, noncumulative preferred stock
  • Tier 2 capital is supplementary capital that either (a) is already set aside for known losses (rather than being a cushion for unexpected losses); or (b) shares characteristics with debt rather than equity. It is composed of:
    • Revaluation reserves
    • General loan‐loss reserves
    • Other undisclosed reserves
    • Hybrid capital instruments and subordinated term debt

Recently, banks have issued a number of contingent convertible capital instruments (CoCos). These are hybrid securities that convert from debt‐like instruments to equity‐like capital on specific trigger events designed to capture stressful environments. Depending on the precise nature of these contractual triggers, these instruments can be classed as either additional Tier 1 or Tier 2 capital.

Most jurisdictions set a regulatory minimum for each type of capital defined previously. In general, these minimum levels are presented as a percentage of risk‐weighted assets (RWA). Risk‐weighted assets are defined in a later section. Banks seek to be as efficient as possible in terms of required capital, as the cost of equity is generally higher than the cost of debt.
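As a simple illustration of how such ratios are expressed, the sketch below computes a Tier 1 capital ratio from hypothetical balance‐sheet figures. The exposure amounts, risk weights, and 8% minimum used are illustrative assumptions only, not a complete regulatory calculation.

```python
# Illustrative sketch: a simple risk-based capital ratio.
# All exposures, risk weights, and the minimum ratio are hypothetical.

exposures = {                       # (exposure amount, assigned risk weight)
    "cash_and_sovereigns":    (400.0, 0.00),
    "interbank_claims":       (200.0, 0.20),
    "residential_mortgages":  (300.0, 0.50),
    "corporate_loans":        (500.0, 1.00),
}

tier1_capital = 60.0                # hypothetical Tier 1 capital
minimum_ratio = 0.08                # Basel 1-style 8% minimum, for illustration

rwa = sum(amount * weight for amount, weight in exposures.values())
tier1_ratio = tier1_capital / rwa

print(f"Risk-weighted assets: {rwa:.0f}")
print(f"Tier 1 ratio: {tier1_ratio:.1%} (illustrative minimum {minimum_ratio:.0%})")
```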

Liquidity

Following the recent financial crisis, the focus of regulators and risk management practitioners turned first to capital adequacy measurement and management. Shortly thereafter, liquidity and funding management also became a focus through many regulatory initiatives.

Capital and liquidity are distinct but related. While capital is fundamentally a measure of the solvency of a bank (the difference between assets and liabilities), liquidity reflects the ability a bank has to find the liquid resources (usually cash) to meet demands. A bank can be solvent from an accounting perspective, maintaining adequate capital, but still face a material crisis due to lack of liquidity. Indeed, most recent cases of bank failure have manifested themselves through liquidity, rather than capital, shortfalls.

Measures of asset liquidity for banks essentially have two dimensions:

  1. Measures of assets held in cash (in currency or on deposit with central banks) or securities that are readily convertible to cash, such as U.S. government discount notes or U.S. Treasury bills.
  2. Measures of the maturity profile of less liquid assets. While liquidity crises can be sudden, they are rarely instant. As a result, a number of less liquid assets are likely to mature in the window under consideration, generating additional funds for the bank.

As with capital, liquidity is costly, as expected returns on short‐term highly liquid positions, such as cash, are low relative to the alternatives.

Since liquidity risk arises from mismatches of assets and liabilities, the measurement of funding stability is also critical to liquidity risk management. Both asset‐side and funding measures of liquidity are further detailed later in this chapter.

BRIEF OVERVIEW OF REGULATORY LANDSCAPE

The financial system plays a critical role in the functioning of the modern economy. When healthy, it provides businesses and consumers access to the capital markets to fund their growth opportunities—be it a new factory, a new home or car, and so on. When the financial system is unhealthy, such growth opportunities may be systematically forgone, resulting in slower growth, recession, or even depression.

As a result, banks have unique access to government support, including access to central bank funding such as the discount window at the U.S. Federal Reserve. Given this, and the importance of their role to the economy as a whole, banks are subject to comprehensive regulation by multiple bodies.

At a global level, regulation is developed by the Basel Committee on Banking Supervision, a sister organization of the Bank for International Settlements (BIS), which was established in 1930 and is headquartered in Basel, Switzerland. The Basel Committee has played a central role in development of new capital and liquidity standards for financial services firms. These capital accords are referred to in shorthand as “Basel accords,” and have evolved through time:

  • Basel 1,6 established in 1988, was primarily focused on ensuring that banks had adequate capital to withstand credit risk events. The simple framework proposed five categories of assets, each with different risk weights ranging from 0% to 100%. Banks were required to hold capital above 8% of their risk‐weighted assets.
  • A Market Risk Amendment7 was added to Basel 1 in 1997, to include market risks within the regulatory capital landscape. The amendment established a standardized measurement method, covering all major capital markets exposures (rates, equities, foreign‐exchange, commodities, and options). In addition, for the first time, the amendment also proposed allowing firms to develop internal models to measure market risk.
  • Basel 2,8 finalized in 2004, completely revised the proposed capital framework. The capital adequacy framework was updated to include three separate pillars: minimum requirements, supervisory review, and market discipline. The minimum capital requirements were materially adjusted in the following aspects:
    • Granular standardized approaches to credit risk capital
    • The option to develop internal ratings‐based approaches to capital
    • A framework for capitalization of securitizations
    • Addition of operational risk capital
    • Major updates to trading book capital
  • “Basel 2.5” is shorthand used to refer to the 2009 revisions to the market risk framework.9 Again, a number of risk‐weights in the standardized measures were adjusted. In addition, a set of approaches for internal models to quantify “specific risk” were added.
  • Basel 310 was established in 2010–11 following the crisis, and again refreshed certain capital requirements. Most significantly, however, the framework was revised beyond capital to add a liquidity coverage ratio and net stable funding ratio, aiming to address liquidity risk in regulated entities.

The global regulatory landscape continues to evolve markedly. The BIS has conducted a fundamental review of the trading book (FRTB),11 which resulted in proposals to remove a number of the more complex modeling approaches for market risk, establish capital floors, and move management of market risk to the desk‐level. Further capital proposals are also under development for both the standardized approach to credit risk12 as well as specific approaches to counterparty credit risk.13 These recent regulatory proposals present a clear trend toward limiting the use of internal models to determine regulatory capital.

While the global regulatory framework is the product of negotiation and agreement across major central banks, the implementation of the rules falls to national regulators. The precise national interpretation of rules may differ by geography. In the United States, a web of regulatory bodies exists. Within the capital markets space, these bodies include:

  1. Securities and Exchange Commission (SEC)
  2. Financial Industry Regulatory Authority (FINRA)
  3. Federal Reserve System (Fed)
  4. Office of the Comptroller of the Currency (OCC)
  5. Federal Deposit Insurance Corporation (FDIC)—for those institutions that are deposit‐taking (e.g., Bank of America, JP Morgan, Citi)
  6. Commodity Futures Trading Commission (CFTC)

Each of these bodies, as part of their supervisory mandate, is able to develop and enforce regulation. In some cases this is done by implementing global regulations—for example, the FDIC, Federal Reserve Board, and the OCC have jointly implemented the Basel requirements in the United States.

In a number of cases regulation is local rather than global. For example, the Volcker Rule14 prohibiting capital markets firms from engaging in proprietary trading was part of the Dodd‐Frank Act, and hence only applies to firms with a U.S. presence. Furthermore, the supervisory process itself may be materially different based on geography. In the United States, for example, Dodd‐Frank mandated stress‐testing as the primary tool used by regulators to ensure appropriate capital adequacy and robustness in risk management processes. This manifests itself through the annual Comprehensive Capital Analysis and Review (CCAR). CCAR requires major bank holding companies (BHCs) and intermediate holding companies (IHCs) of large foreign‐owned banks to estimate the impact on income statements and balance sheets of a severely adverse economic scenario, and demonstrate adequacy of capital through this stressed forecast.

Outside of the United States, multiple regulatory agencies have evolved post‐crisis. They have a keen focus on real‐world stress testing of banks. Similar to CCAR, these stress tests act as a tool for determining adequate capital. Examples include the European Union–wide stress tests run by the European Banking Authority (EBA) and the Bank of England stress testing of UK banks, run by the Prudential Regulation Authority (PRA). Furthermore, additional regulations have required international firms to ring‐fence and manage risk for their local legal entities on a standalone basis, reducing the fungibility of moving capital across regions.

METRICS FOR MAJOR RISK TYPES

A number of analytical techniques are used to determine the riskiness of a bank's positions. In many cases, these are directly used to determine a risk‐weighted assets (RWA) estimate, which can then be compared to capital.

Market Risk Metrics

Risk Factors as Building Blocks of Market Risk

Changes in the market value of traded instruments are typically explained by the use of pricing models, which relate a position's market value to a set of individual market‐derived inputs known as risk factors. For example, using the classic Black‐Scholes model for a call option, the risk factors would include the stock price, discount curve, expected dividends, and implied volatility (the strike price and maturity would also be key static inputs, but not risk factors).

The use of risk factors is instrumental to the aggregation of market risk measurements across a portfolio of various instruments. Extending the above example, the translation of the option position into a set of risk factor sensitivities allows a risk manager to understand how its sensitivity to an equity price shock combines with other direct long and short positions to form overall portfolio sensitivities and conduct portfolio‐level scenario analysis. Such sensitivities may be linear or nonlinear.

Value at Risk (VaR)

Value‐at‐Risk (VaR) is a measure of the risk of loss on a portfolio of liquid assets in a trading book, for a given probability and holding period. VaR is defined as the threshold of a one‐sided confidence interval, such that the probability of a mark‐to‐market loss exceeding this threshold equals a specified level (Figure 5.2). For example, if a given portfolio has a 1‐day holding period and has a 99% confidence level VaR of $1MM, this means that on average, a daily loss exceeding $1MM should happen in only 1 out of 100 trading days.


Figure 5.2: Illustrative Calculation of VaR

Source: Oliver Wyman

Typically, VaR is computed assuming a static portfolio, that is, assuming no new positions or hedges are taken on and no existing positions or hedges are exited during the horizon.

Framework Variations in Modeling VaR

There are multiple methods for creating the distribution of portfolio gains and losses needed to calculate VaR. We review several of the alternatives. Models generally follow one of three broad frameworks:

  1. A “delta‐normal” parametric framework, in which all position sensitivities to risk factors are approximated as linear, all risk factor distributions are approximated as normal, and all dependence structures between risk factors are approximated as fixed correlations. The advantage of this framework is simplicity: If all those assumptions hold, VaR can be expressed by an analytic formula and calculated quickly without simulation. But these assumptions rarely hold completely, and the inaccuracy of these assumptions weighs heavily against the use of such a framework for sophisticated trading books.
  2. A Monte Carlo simulation framework, in which risk factor distributions are estimated from historical data (and may take a variety of non‐normal distributions) and then combined via simulation in which correlated random draws are taken for each risk factor. After the portfolio value is computed for each of a large number of simulations, a percentile can be computed from the simulation results. This form of aggregation is in principle very flexible for complex distributions and nonlinear sensitivities, but the tradeoff is that it can be highly computationally demanding—to the point that if valuation calculations are individually time‐consuming, repeating them across tens or hundreds of thousands of simulation paths can be prohibitive.
  3. A historical simulation framework, in which the joint distribution of risk factor changes is approximated by simply drawing from the past 1–3 years of returns over a particular holding period (e.g., 1‐day or 10‐day). In the example of 1‐day holding period returns, if there are 250 trading days in a year (rounded for simplicity), a 2‐year historical VaR would be calculated by drawing from the last 500 daily returns. Each of those 500 historical days can be viewed as a separate scenario that could apply to today's portfolio, in the sense that the movements observed in each risk factor from that day can be mapped into potential movements today. This can be done without explicitly estimating the shape of the distribution assumptions or the correlation between risk factors—instead, both are empirically matched to a specific historical window. By utilizing fewer scenarios, this approach is also less computationally challenging than the Monte Carlo simulation.
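To make the historical simulation framework concrete, the short sketch below computes a 1‐day, 99% VaR by re‐applying hypothetical historical risk factor returns to a portfolio of linear dollar sensitivities. The return series, position sizes, and linear revaluation are illustrative assumptions, not a production implementation.

```python
import numpy as np

# Minimal historical-simulation VaR sketch (assumptions: linear positions,
# 500 days of hypothetical daily returns, 99% confidence, 1-day holding period).
rng = np.random.default_rng(42)

# Hypothetical history of daily returns for two risk factors (e.g., an equity
# index and an FX rate); in practice these come from observed market data.
returns = rng.normal(0.0, [0.01, 0.006], size=(500, 2))

# Current dollar sensitivities of the portfolio to each risk factor.
positions = np.array([5_000_000, -2_000_000])

# Re-apply each historical day's factor moves to today's portfolio ("scenarios").
pnl = returns @ positions

# 99% 1-day VaR: the loss exceeded on only 1% of historical scenarios.
var_99 = -np.percentile(pnl, 1)
print(f"1-day 99% historical VaR: ${var_99:,.0f}")
```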
Selection of Holding Period

Another core building block of any VaR model is the selection of the appropriate holding period to use for analysis. Given that the VaR framework measures potential changes to the portfolio assuming no rebalancing, the selection of a longer holding period will generally lead to a higher VaR‐based risk measure. However, since trading books turn over quickly and tend to be managed closely, this fundamental modeling assumption becomes less representative of reality when it is applied to longer horizons.

While classical VaR models generally do not relax the assumption of a static portfolio, the rapid turnover of positions is one reason why VaR model time horizons are generally short. Historically, many trading businesses used a 1‐day VaR metric for internal management reporting, alongside a 10‐day VaR metric for regulatory capital and the associated reporting. Some other businesses use intermediate values (e.g., 2‐ or 3‐day VaR).

However, each of these is a fairly one‐size‐fits‐all metric, where highly liquid assets that could be managed down or hedged within hours are commingled with less liquid positions that would take significantly longer.

Percentile Threshold versus Expected Shortfall

In its classic definition as noted above, VaR refers to a threshold value, that is, the size of loss for which there is only an x% chance of a larger loss within a certain period of time.

Another measure that has emerged over time is Expected Shortfall (ES), which measures the expected value (i.e., probability‐weighted average) of all the potential losses that are beyond a certain percentile of the distribution. Since VaR measures the threshold value while ES measures the values beyond it, for any given value of x, ES will give a larger dollar‐loss figure than VaR. ES has certain benefits for risk management purposes; in particular it always considers the most extreme potential losses, whereas VaR is mostly insensitive to differences in risk that are beyond the threshold value.

For instance, say that one trading desk (A) has entered a trade that pays off $5MM in 99.9% of cases, but loses $1 BN in 0.1% of cases, while another (B) has a position that pays off $5MM in 99% of cases, but loses $100MM in the other 1%. Using a 99% VaR, the risk in position B would be captured well but A could slip under the radar as its risk only arises further out in the tail than was considered. ES would capture them both, weighing the larger loss potential of A against the higher likelihood of loss in B.
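The contrast can be reproduced with the two hypothetical desks above. The sketch below computes 99% VaR and 99% expected shortfall for each discrete payoff distribution (figures in $MM); the function and the tail conventions used for discrete distributions are illustrative choices.

```python
import numpy as np

# Hypothetical desks from the example above (P&L in $MM, losses positive).
# Desk A: gains 5 with prob 99.9%, loses 1,000 with prob 0.1%.
# Desk B: gains 5 with prob 99%,   loses 100 with prob 1%.
desks = {
    "A": ([-5.0, 1000.0], [0.999, 0.001]),
    "B": ([-5.0,  100.0], [0.99,  0.01]),
}

def var_and_es(losses, probs, alpha=0.99):
    """VaR = smallest loss exceeded with probability strictly below (1 - alpha);
    ES = probability-weighted average loss in the worst (1 - alpha) tail."""
    order = np.argsort(losses)                       # ascending losses
    losses, probs = np.array(losses)[order], np.array(probs)[order]
    exceed = 1.0 - np.cumsum(probs)                  # P(loss > each outcome)
    var = losses[exceed < 1.0 - alpha][0]
    tail = 1.0 - alpha
    es, remaining = 0.0, tail
    for loss, p in zip(losses[::-1], probs[::-1]):   # walk down from the worst loss
        take = min(p, remaining)
        es += take * loss
        remaining -= take
        if remaining <= 0:
            break
    return var, es / tail

for name, (losses, probs) in desks.items():
    var, es = var_and_es(losses, probs)
    print(f"Desk {name}: 99% VaR = {var:,.1f}MM, 99% ES = {es:,.1f}MM")
```

Running this, desk A's 99% VaR is a small gain (the $1BN loss sits beyond the threshold), while its 99% ES is roughly $95.5MM, comparable to desk B's $100MM, which is the point of the example.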

Limitations of VaR

While VaR models are commonly used in risk management, they also have significant limitations, including:

  • As noted above, the threshold definition of VaR is fairly insensitive to risks that are beyond the specified percentile.
  • Inferences of risk factor distributions are generally drawn from past market movements, which are not necessarily indicative of the future. In particular, key market variables like equity index prices, credit spreads, and FX rates have all exhibited periods of low volatility followed by periods of high volatility, so recent calm does not necessarily signal future calm in the market. Similarly, certain market risk factors have also shown high correlation during some years and low correlation during others. As a result, VaR may significantly understate or overstate risk depending on the time period used.
  • For credit‐sensitive trading exposures, VaR models may not adequately capture certain price risks associated with credit events (e.g., default), which may be meaningful risks even if they have not occurred to a given issuer in the historical window.
  • The choice of a single time horizon (e.g., 10 days) for a VaR model is generally a simplification rather than a reflection of instrument‐specific risks that may unfold over different horizons.

Regulatory Measures Aimed at Addressing Shortcomings of VaR

In response to the financial crisis, global regulators have taken additional measures—first through Basel 2.5, and subsequently through the proposals arising from the Fundamental Review of the Trading Book—to address VaR's limitations. Comparing the post‐FRTB capital proposals against Basel 2, key enhancements include:

  • Regulatory capital models will be adjusted to use an Expected Shortfall (ES) metric instead of threshold VaR.
  • The ES metric will use a stressed calibration of the return distributions, that is, a period of significant financial stress rather than simply the most recent year or two.
  • For credit‐sensitive trading exposures, a separate default risk charge (DRC) will be added to address the default risks not captured in VaR. Banks may use separate internal models to compute this charge based on the 99.9% threshold value from the probability distribution of default‐related credit losses. To capture the shape of this distribution, such models must take into account not only the individual probabilities of default of different issuers, but also the correlations between the creditworthiness of different issuers, which may be higher if issuers are in the same industry or region (a simplified simulation sketch follows this list). Where banks do not receive approval to use internal DRC models, prescriptive standardized add‐ons will be applied.
  • ES‐based regulatory capital models will be adjusted to differentiate the holding periods used for assessing the market risk associated with different factors, depending on assessments of the liquidity and depth of markets in those risk factors.
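The correlation treatment described for the default risk charge can be illustrated with a toy one‐factor simulation. The issuer exposures, default probabilities, and single systematic factor below are hypothetical, and actual DRC models are considerably richer; this is only a sketch of how correlated defaults fatten the tail of the loss distribution.

```python
import numpy as np
from scipy.stats import norm

# Toy sketch of a correlated default-loss distribution (hypothetical portfolio;
# a one-factor Gaussian model stands in for richer DRC correlation structures).
rng = np.random.default_rng(7)

exposures = np.array([40.0, 25.0, 35.0, 50.0])   # jump-to-default loss per issuer, $MM
pd_1y     = np.array([0.02, 0.01, 0.03, 0.005])  # annual default probabilities
rho       = 0.30                                  # loading on a common systematic factor

n_sims = 200_000
z_systematic = rng.standard_normal((n_sims, 1))
z_idio       = rng.standard_normal((n_sims, len(exposures)))
asset_values = np.sqrt(rho) * z_systematic + np.sqrt(1 - rho) * z_idio

# An issuer defaults when its latent asset value falls below its PD threshold.
defaults = asset_values < norm.ppf(pd_1y)
losses = defaults.astype(float) @ exposures

drc_999 = np.percentile(losses, 99.9)   # 99.9% threshold of the loss distribution
print(f"99.9% default-loss threshold: ${drc_999:,.1f}MM "
      f"(expected loss ${losses.mean():,.2f}MM)")
```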

In addition to modeling of general and specific risks covered previously, capital markets firms are also required to identify risks not in VaR/ES (RNIVs) and to ensure that they are appropriately capitalized.

Market Risk Stress Testing

In addition to VaR or ES, risk managers also use other complementary measures for market risk. Several of these are variants of stress testing, including scenario analysis. Generally, in stress tests, instead of attempting to define an entire probability distribution for possible losses, the focus is on defining specific hypothetical situations and then using valuation models and other analytic tools to determine the impact the scenario would have on the portfolio. By considering hypothetical scenarios, stress tests can help risk managers to assess risks that have not occurred but may materialize in the future. On the other hand, stress tests are limited to whatever scenarios have been considered (i.e., they are not exhaustive). Nonetheless, to develop a robust stress‐testing framework, risk managers often consider three categories of scenarios:

  1. Single‐factor scenarios, such as simple parallel shifts up or down in a yield or spread curve, or shocks to equity index values, which are useful for identifying/confirming major sensitivities and concentrations.
  2. Targeted multifactor scenarios (e.g., equity price and equity volatility shocked at once to capture cross‐risks, i.e., the situation where a change in one risk factor leads to increased or decreased sensitivity to another risk factor). Alternatively, different yields or spreads may be shocked to different degrees in a way that exposes a basis risk (i.e., where opposite exposures to two related risk factors fail to offset as expected).
  3. Complete market environment scenarios, which, given the large number of risk factors to stress simultaneously, are often selected from historical events that caused major market disruptions. Custom scenarios may also be designed to explore a hypothesized vulnerability.
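As a simple illustration of applying the first two categories, the sketch below revalues a book of linear ("delta") factor sensitivities under a single‐factor shock and a targeted multifactor shock. The sensitivities and shock sizes are hypothetical, and real stress tests would use full revaluation for nonlinear positions.

```python
# Illustrative scenario P&L sketch: hypothetical linear sensitivities per unit
# move in each risk factor, revalued under stress scenarios.
sensitivities = {
    "equity_index_pct": 250_000,    # $ P&L per +1% move in the equity index
    "equity_vol_pt":    -80_000,    # $ P&L per +1 point of implied volatility
    "rates_10y_bp":      12_000,    # $ P&L per +1bp move in the 10y yield
}

scenarios = {
    "single_factor_equity_down_10pct": {"equity_index_pct": -10},
    "multifactor_equity_down_vol_up": {
        "equity_index_pct": -10, "equity_vol_pt": +8, "rates_10y_bp": -25},
}

for name, shocks in scenarios.items():
    pnl = sum(sensitivities[factor] * move for factor, move in shocks.items())
    print(f"{name}: stressed P&L = ${pnl:,.0f}")
```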

Another common risk measure used to evaluate credit risk concentrations in a trading portfolio (equivalent to a form of stress testing) is to simply identify and monitor the top 10, 20, or 50 counterparty exposures, ranked by the total loss incurred if each counterparty were to immediately jump to default. One or more alternative Loss Given Default (LGD) assumptions are used when utilizing this approach (e.g., JTD_100, JTD_50, JTD_0, JTD_worst).
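A minimal sketch of such a concentration report follows, ranking hypothetical counterparties by jump‐to‐default loss under two of the LGD conventions mentioned (JTD_100 assuming no recovery, and a 50% LGD variant). The names and exposure amounts are invented for illustration.

```python
# Hypothetical counterparty exposures ($MM): rank by loss if each jumped to default.
exposures = {"Counterparty A": 120.0, "Counterparty B": 310.0,
             "Counterparty C": 75.0,  "Counterparty D": 210.0}

def jtd_losses(exposures, lgd):
    """Jump-to-default loss per counterparty under a flat LGD assumption."""
    return {name: expo * lgd for name, expo in exposures.items()}

jtd_100 = jtd_losses(exposures, lgd=1.0)   # JTD_100: no recovery assumed
jtd_50  = jtd_losses(exposures, lgd=0.5)   # JTD_50: 50% recovery assumed

top = sorted(jtd_100.items(), key=lambda kv: kv[1], reverse=True)[:3]
for name, loss in top:
    print(f"{name}: JTD_100 = ${loss:,.0f}MM, JTD_50 = ${jtd_50[name]:,.0f}MM")
```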

Counterparty Credit Risk Metrics

Counterparty risk can be defined as the risk of a financial loss due to the potential default of a trading counterparty. It arises mainly from two types of activity:

  1. Derivatives trading (e.g., swaps, options)
  2. Securities financing (e.g., repo/reverse repo, margin lending against client portfolios)

Compared with the credit risk that arises from lending, counterparty risk presents an additional measurement challenge as the value of exposures fluctuates through time and is dependent on market movements. It can be helpful to think of counterparty risk as a mixture of market risk and credit risk—indeed, the tools used to measure and manage counterparty risk incorporate some elements from each, as well as some unique concepts.

For instance, concentration of exposure to any one entity or group of related entities is a major consideration across credit risk types whether arising from lending or counterparty activity. For credit risk management purposes, many organizations define total exposure limits and reporting to combine lending and counterparty activities.

On the other hand, like market risk, the potential variability in exposure to a given counterparty can be expressed in a VaR‐like measure. Such measures are commonly used in internal risk reporting, and in some cases also embedded in contractual margin requirements (discussed further in the following).

Exposure Definitions and Measurement

Given that exposure can vary over time, the management and capitalization of counterparty risk typically considers both current exposure and future exposure.

In the case of an uncollateralized derivative, current exposure can be measured simply as:

$$\text{Current Exposure} = \max(MV,\ 0)$$

where the derivative's market value could be either positive or negative, but only positive market values create exposure to the counterparty.

Future exposure is a somewhat more nebulous concept since the future is uncertain and there are typically multiple time horizons of interest between the present and the maturity of a position (or portfolio of positions under one netting agreement). We use this term broadly to encompass a family of distinct metrics, some key members of which are defined more precisely in Table 5.1.

Table 5.1: Metrics and Their Corresponding Equations

Metric                              Equation
Expected exposure (EE)              $EE(t) = \mathbb{E}\big[\max(MV_t,\ 0)\big]$
Effective expected exposure (EEE)   $EEE(t) = \max_{s \le t} EE(s)$
Potential future exposure (PFE)     $PFE_\alpha(t) = q_\alpha\big(\max(MV_t,\ 0)\big)$, where $\alpha$ is a confidence level such as 95% or 99% and $q_\alpha$ denotes the $\alpha$-quantile
Maximum PFE/Peak PFE                $\max_{t} PFE_\alpha(t)$

As shown, potential future exposure (PFE) is strongly analogous to VaR, except that it refers to the upper tail of potential market values at a particular future date, rather than the lower tail of potential changes in value over a particular horizon from today. When setting risk management limits on individual counterparties, it is common to utilize PFE‐related measures as they provide advance visibility into the more extreme possible exposure concentrations that could result from unexpected market movements.
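To illustrate the exposure measures in Table 5.1, the sketch below simulates the market value of a single uncollateralized position under a driftless random walk and computes EE and 95% PFE profiles. The volatility, horizon, and payoff dynamics are purely illustrative assumptions.

```python
import numpy as np

# Toy exposure-profile sketch: simulate MV paths of one uncollateralized trade
# and compute expected exposure (EE) and 95% potential future exposure (PFE).
rng = np.random.default_rng(1)

n_paths, n_steps = 50_000, 12              # monthly steps over a 1-year horizon
monthly_vol = 2.0                           # hypothetical $MM volatility of MV per month

shocks = rng.normal(0.0, monthly_vol, size=(n_paths, n_steps))
mv_paths = np.cumsum(shocks, axis=1)        # MV starts at zero and diffuses over time

exposure = np.maximum(mv_paths, 0.0)        # only positive MV creates exposure
ee  = exposure.mean(axis=0)                 # expected exposure profile
pfe = np.percentile(exposure, 95, axis=0)   # 95% PFE profile

for month in (3, 6, 12):
    print(f"Month {month:2d}: EE = ${ee[month - 1]:.2f}MM, 95% PFE = ${pfe[month - 1]:.2f}MM")

print(f"Peak 95% PFE over the horizon: ${pfe.max():.2f}MM")
```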

Credit Valuation Adjustment (CVA)

Measurements of counterparty exposure are also intrinsically related to a second key counterparty risk concept, the Credit Valuation Adjustment (CVA). CVA measures the difference between the market value of a particular derivative contract and the market value the same contract would have had if the counterparty (or both the counterparty and the firm)15 had zero probability of default.

A simplified example will illustrate:

A derivative contract is structured such that there are two potential outcomes: In 50% of cases, the counterparty will owe the firm $15MM, and in the other 50% of cases, the firm will owe the counterparty $10MM. Assuming no credit risk on either party and a short enough horizon that discounting is immaterial, the NPV of the contract is $2.5 MM. Yet if the counterparty has a 10% chance of defaulting (with no recoveries), this decreases the positive contribution to NPV by 10%*50%*$15MM or $750 K. If the firm itself has a 1% chance of defaulting (with no recoveries), this decreases the negative contribution to NPV by 1%*50%*$10MM or $50 K. Netting these two effects, the bilateral CVA would be –$700 K and the final derivative value would be $1.8 MM.

In general mathematical form, unilateral CVA can be expressed as:

$$\mathrm{CVA} = \mathbb{E}\left[\sum_{i=1}^{n} \mathrm{LGD}_{t_i} \cdot DF(t_i) \cdot \max(MV_{t_i},\ 0) \cdot PD(t_{i-1}, t_i)\right]$$

where the expectation is taken over joint scenarios of market value (MV), probability of default (PD), and loss given default (LGD), $DF(t_i)$ is the discount factor, and $PD(t_{i-1}, t_i)$ is the probability of the counterparty defaulting in the interval $(t_{i-1}, t_i]$.

If the scenarios of MV, PD, and LGD are completely independent of each other (which may not necessarily hold in all cases), then this can be simplified to the more tractable expression below:

$$\mathrm{CVA} \approx \mathrm{LGD} \cdot \sum_{i=1}^{n} DF(t_i) \cdot EE(t_i) \cdot PD(t_{i-1}, t_i)$$

where $EE(t_i)$ is the expected exposure defined in Table 5.1.
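Under the independence assumption, the simplified expression can be evaluated directly from an EE profile and a term structure of default probabilities. The sketch below uses hypothetical inputs for the LGD, exposure profile, discount curve, and default intensity.

```python
import numpy as np

# Simplified CVA sketch under independence:
# CVA ~ LGD * sum_i DF(t_i) * EE(t_i) * PD(t_{i-1}, t_i). All inputs hypothetical.
lgd = 0.60
times = np.array([0.25, 0.50, 0.75, 1.00])         # quarterly grid, in years
discount = np.exp(-0.03 * times)                   # flat 3% discount curve
ee = np.array([1.2, 1.6, 1.9, 2.1])                # expected exposure profile, $MM

hazard = 0.02                                       # flat 2% annual default intensity
survival = np.exp(-hazard * np.concatenate(([0.0], times)))
marginal_pd = survival[:-1] - survival[1:]          # probability of default per interval

cva = lgd * np.sum(discount * ee * marginal_pd)
print(f"Unilateral CVA: approximately ${cva:.4f}MM")
```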

Prior to the global financial crisis, measuring CVA had already become a common accounting practice, used to create a reserve for expected credit losses on derivatives, which could be viewed as analogous to loan loss reserves. However, the approach to estimating the counterparty's PD and LGD could either be a historical/actuarial approach or a market‐driven approach derived from spreads, such as from traded CDS contracts. In recent years, the market‐driven approach has come to dominate, at least where the counterparty's spreads are observable or can be inferred from comparable companies and indices.

While CVA considers the possibility of default, it does so from an expected value point of view, so considering our definition of risk as “deviations from expected,” CVA is not in that sense a risk measure but simply a pricing or valuation measure. However, changes in CVA do present a second form of risk, since such changes affect the fair value of the firm's positions and ultimately flow through to earnings. In an environment where the counterparty PD is derived from the counterparty's market‐observed spreads, counterparty spread volatility can be a major driver of variations in CVA, and hence another source of risk to earnings.

Many firms have also expanded a similar approach to adjust derivatives valuations for other effects beyond CVA, such as the risk of their own default (debt valuation adjustment, or DVA), and the cost of funding the positions (funding valuation adjustment, or FVA). Further valuation adjustments for cost of capital (KVA) and margin (MVA) have even been considered. The term XVA is used as a catchall for the set of all valuation adjustments on derivatives.

Managing Counterparty Risk

There are a number of contractual mechanisms by which counterparty risk can be mitigated, if not entirely eliminated:

  • Right of offset and netting provisions—where the firm and counterparty owe each other certain gross amounts, these allow the firm not to make gross payments that would be owed back by the counterparty, reducing exposure from a gross to net basis.
  • Guarantees, especially from central counterparties (clearinghouses).
  • Collateral held as margin, under agreements such as a Credit Support Annex (CSA).
  • Haircuts applied to collateral posted.
  • Rating‐based triggers that may require greater overcollateralization if the counterparty is downgraded, or allow the firm to terminate the contract at that point.

Collateral posted and held as margin is an important element in both cleared and bilateral derivative trading. For instance, a clearinghouse will generally require each of its counterparties to post two forms of margin:

  1. Initial margin covers potential future exposure and hence is more driven by estimates of volatility in the risk factors.
  2. Variation margin is the additional collateral posted directly to cover price movements that change the current exposure.

Wrong‐Way Risk

Wrong‐way risk refers to the situation where a counterparty's exposure is strongly correlated, or even causally linked, to its Probability of Default (PD) and/or Loss Given Default (LGD). When this is the case, multiplying independent estimates of exposure, PD, and LGD together does not give a sufficient measure of expected loss, because under the scenarios that generate the most exposure, PD or LGD is also higher.

For illustration, consider a simple example where there are two equally probable scenarios: one in which exposure = $100, PD = 20%, and LGD = 100%, and another in which exposure = $20, PD = 0%, and LGD = 100%. The first scenario leads to loss of $20 and the second $0, for an average of $10. Whereas if exposure, PD, and LGD were each averaged and then multiplied as though independent, the expected loss computed as $60 × 10% × 100% = $6 would be understated.
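The gap between the scenario‐consistent loss and the product of averages can be checked directly; the short sketch below uses the illustrative scenario figures above.

```python
# Wrong-way risk sketch: scenario-consistent expected loss vs. product of averages,
# using the illustrative figures above (two equally probable scenarios).
scenarios = [
    {"exposure": 100.0, "pd": 0.20, "lgd": 1.0},   # stress: high exposure, high PD
    {"exposure":  20.0, "pd": 0.00, "lgd": 1.0},   # benign: low exposure, no default risk
]

scenario_loss = sum(s["exposure"] * s["pd"] * s["lgd"] for s in scenarios) / len(scenarios)

avg = {k: sum(s[k] for s in scenarios) / len(scenarios) for k in ("exposure", "pd", "lgd")}
independent_loss = avg["exposure"] * avg["pd"] * avg["lgd"]

print(f"Scenario-consistent expected loss: ${scenario_loss:.0f}")    # $10
print(f"Product-of-averages estimate:      ${independent_loss:.0f}")  # $6, understated
```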

The most severe examples of wrong‐way risk are when the counterparty is directly affiliated with an issuer whose securities are referenced on a derivative or used as collateral. This is referred to as “specific wrong‐way risk,” and is generally addressed through controls to restrict such transactions, or by giving no credit for such collateral, rather than through advanced measurement.

However, broader forms of general wrong‐way risk can lurk where there are underlying economic drivers that affect, but do not fully determine, both the counterparty's creditworthiness and the exposure. Examples include:

  • Counterparty is in the same sector as underlying (e.g., oil & gas).
  • FX trades where the counterparty and the exposure are both more sensitive to a particular foreign economy.

In the presence of general wrong‐way risk, firms may make further adjustments to their exposure or LGD estimates to be more consistent with values conditional on default, or adjustments to their PD or LGD estimates to be more consistent with values conditional on a scenario that would drive high exposure.

The opposite of wrong‐way risk is sometimes referred to as “right‐way risk” (i.e., when a key factor that could lead to an increase in exposure would also lead to an improvement in counterparty creditworthiness). This can sometimes be seen in commodity hedging with certain corporate clients (e.g., where the client's profits are correlated to a commodity price, and the derivatives used for hedging involve paying the client a fixed price for a certain volume of that commodity). In principle, right‐way risk may reduce CVA or other measures of counterparty risk.

Liquidity Risk Metrics

Basel 3 introduced two distinct metrics to measure and manage liquidity risk, the Liquidity Coverage Ratio (LCR) and the Net Stable Funding Ratio (NSFR).

  1. The LCR is designed to ensure that banks would have necessary liquid resources to survive a 30‐day market crisis. It is calculated as the ratio of high‐quality liquid assets to projected cash claims (a simplified calculation is sketched after this list). Assumptions are made as to the quality of assets, with differentiated haircuts applied to values of certain assets, as well as rollover rates on assets scheduled to mature in the 30‐day window. Further, liabilities such as retail deposits are categorized into levels of “stickiness” (i.e., the amount that is expected to persist through a financial crisis to support the asset base). The minimum LCR is specified at 100% by Basel, although national regulators have discretion over thresholds. In practice, banks will generally choose to ensure they have a buffer above the required minimum.
  2. The NSFR is designed to ensure banks have appropriately stable sources of funding. It is calculated as the ratio of stable sources of funds (Tier 1 and 2 capital, other preferred shares, liabilities with maturities >1 year, and portions of certain other liabilities such as deposits) to assets, adjusted to reflect their ability to be liquidated. As with the LCR, the requirement is for banks to have an NSFR above 100%. Again, banks are likely to target additional buffers.
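A highly simplified LCR calculation is sketched below. The asset haircuts, run‐off rates, inflow cap treatment, and balances are hypothetical and ignore the many category‐level rules in the actual standard.

```python
# Highly simplified LCR sketch: HQLA after haircuts vs. projected 30-day net outflows.
# All balances, haircuts, and run-off rates below are hypothetical.
hqla = {                     # (balance, haircut applied to value)
    "central_bank_reserves": (40.0, 0.00),
    "sovereign_bonds":       (30.0, 0.00),
    "covered_bonds":         (20.0, 0.15),
}

outflows = {                 # (balance, assumed 30-day run-off rate)
    "stable_retail_deposits": (500.0, 0.05),
    "less_stable_deposits":   (200.0, 0.10),
    "unsecured_wholesale":    (150.0, 0.40),
}
inflows_30d = 40.0           # contractual inflows expected within 30 days

hqla_value = sum(balance * (1 - haircut) for balance, haircut in hqla.values())
gross_outflows = sum(balance * runoff for balance, runoff in outflows.values())
net_outflows = max(gross_outflows - inflows_30d, 0.25 * gross_outflows)  # simplified inflow cap

lcr = hqla_value / net_outflows
print(f"HQLA: {hqla_value:.0f}, net 30-day outflows: {net_outflows:.0f}, LCR: {lcr:.0%}")
```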

In addition to the metrics above, firms are also expected to conduct periodic stress testing of their liquidity profile. As an example, in the United States, the Federal Reserve Board conducts a comprehensive liquidity analysis and review (CLAR) process to evaluate the liquidity position and liquidity risk management practices at the largest firms.16

Operational Risk Metrics

Since the financial crisis, the banking sector has seen a number of significant operational risk losses. Notable high‐profile cases include rogue trader Jérôme Kerviel at Société Générale, unauthorized or unknown risk positions put on by the “London Whale” at JP Morgan Chase, and legal fines levied on firms related to the mis‐selling of financial assets.

The Basel 2 accords outline seven categories of operational risk events:

  1. Internal Fraud—misappropriation of assets, tax evasion, intentional mismarking of positions, bribery
  2. External Fraud—theft of information, hacking, third‐party theft, forgery
  3. Employment Practices and Workplace Safety—discrimination, workers compensation, employee health and safety
  4. Clients, Products, and Business Practice—market manipulation, antitrust, improper trade, product defects, fiduciary breaches, account churning
  5. Damage to Physical Assets—natural disasters, terrorism, vandalism
  6. Business Disruption and Systems Failures—utility disruptions, software failures, hardware failures
  7. Execution, Delivery, and Process Management—data entry errors, accounting errors, failed mandatory reporting, negligent loss of client assets

Basel 2 also outlines two key approaches to quantification of operational risks:

  1. The advanced measurement approach (AMA) allows a firm to develop internal modeling of operational risks.
  2. The standardized approach, which defines “beta factors” for specific banking business lines. These factors are applied to the gross income of each business line to estimate the capital required.
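A rough sketch of the beta‐factor mechanics follows. The business‐line betas and income figures shown are illustrative placeholders rather than the exact regulatory values, and the per‐year floor is a simplification of the actual rules.

```python
# Rough sketch of a standardized-approach style operational risk charge:
# capital = three-year average of beta-weighted gross income by business line.
# Betas and income figures below are illustrative placeholders.
betas = {"trading_and_sales": 0.18, "retail_banking": 0.12, "asset_management": 0.12}

gross_income = {             # annual gross income by business line, last three years ($MM)
    "trading_and_sales": [400.0, 350.0, 420.0],
    "retail_banking":    [300.0, 310.0, 305.0],
    "asset_management":  [150.0, 160.0, 155.0],
}

yearly_charges = []
for year in range(3):
    charge = sum(betas[line] * gross_income[line][year] for line in betas)
    yearly_charges.append(max(charge, 0.0))   # negative income floored at zero (simplified)

op_risk_capital = sum(yearly_charges) / 3
print(f"Operational risk capital (standardized-style): ${op_risk_capital:,.1f}MM")
```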

In addition to the regulatory metrics, recent focus has once again been on stress testing and scenario analysis to test the vulnerability of firms to operational risk losses.

Internal Metrics

Beyond regulatory requirements, capital markets firms have developed their own in‐house approaches used to measure, monitor, and manage risks. These range from simple exposure levels, which are easily understood, to more complex internal “economic capital” measures that attempt to combine multiple risk types in a common metric. In addition, real‐world scenario analysis is often used. For example, a firm may analyze the impact of a default by two of its largest counterparties or customers.

THE INTERSECTION BETWEEN RISK AND STRATEGY

Risk Appetite

Capital markets firms use a “risk appetite” statement to express the firm's risk tolerance in the context of their overall risk framework. This is a communication tool used with both internal and external stakeholders—including regulators and the board of directors. A firm's risk appetite statement is a starting point for business units to assess the products and services offered to clients, and to analyze the trade‐offs of risk versus return on a day‐to‐day basis.

Limit Frameworks

A limit framework is often set up to translate a firm's risk appetite to an operational level. For example, specific limits are assigned at the desk and trader levels. The limit framework is continuously monitored by Risk to ensure that each trader is in compliance with the limit structure. Risk also maintains a formal process for the management of breaches.

Linking Management of Limited Resources to Strategy

Given the limited nature of, and costs associated with, financial resources (liquidity, funding, capital), Risk, Treasury, Finance, and the Front Office need to work hand in hand to optimize these scarce resources and maximize shareholder returns. This is often done by multiple mechanisms—one more top‐down and another bottom‐up.

A top‐down approach involves measuring returns on one or more of these financial resources for each line of business, and using the comparison of relative returns to inform strategic plans to grow certain businesses more quickly, challenge underperforming units to make do with fewer financial resources, or exit businesses that are expected to continue to underperform.

A bottom‐up approach involves creating internal mechanisms to charge businesses for the financial resources they use, at costs that may be higher for the scarcest financial resources. When these costs are pushed down, essentially every business is challenged to root out the constituent activities that consume a greater share of resources than the share of profits they generate, and to expand those that yield a greater share of profits. Those activities may be distinguished at a much more granular level (e.g., offering of specific product subtypes, or even covering specific clients).

Risk‐Adjusted Returns Metrics

Given the relationship between risk and returns, banks look to be appropriately compensated for the risk they take, using a measure of risk‐adjusted returns for optimizing business decisions as well as for management reporting. Since there is no single industry‐standard approach, firms often tailor their choice of metric to fit their business models.
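One common family of such measures is a RAROC‐style ratio of risk‐adjusted earnings to allocated capital. The sketch below shows one illustrative formulation with hypothetical figures; as noted above, actual definitions and components vary by firm.

```python
# Illustrative RAROC-style calculation (definitions vary by firm; figures hypothetical).
def raroc(revenue, costs, expected_loss, allocated_capital, capital_credit_rate=0.02):
    """Risk-adjusted return on capital: risk-adjusted earnings / allocated capital."""
    risk_adjusted_earnings = (revenue - costs - expected_loss
                              + capital_credit_rate * allocated_capital)
    return risk_adjusted_earnings / allocated_capital

business_lines = {
    # revenue, costs, expected loss, allocated (economic) capital, all $MM
    "rates_trading":    (900.0, 600.0, 40.0, 2_000.0),
    "equities_trading": (700.0, 500.0, 20.0, 1_200.0),
}

for name, (rev, cost, el, cap) in business_lines.items():
    print(f"{name}: RAROC = {raroc(rev, cost, el, cap):.1%}")
```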

FUTURE OUTLOOK

Effective risk management is, and will remain, critical to the success of capital markets businesses. If anything, the importance will continue to increase as firms become increasingly interconnected. This trend is driven by the velocity of trading in an electronic age, as well as post‐crisis consolidation of the larger capital markets firms. Ongoing product development will only add further complexity to the product‐set, and hence risk management will need to continue to evolve to provide effective control and balance.

Metrics and regulatory requirements will continue to evolve through time; thus in highly competitive markets, risk management functions will need to be:

  • Agile and creative: Given the continued evolution in products, and ever‐changing macro‐ and microeconomic environments, managing risks effectively requires agility to reflect changing product characteristics, and imagination to test what might go wrong. In some cases, this will require a broadening of the current capabilities of risk teams.
  • Connected to strategy: Risk needs to be not only a control and compliance function, but also an advisor and enabler for the business to grow and succeed. Especially in an era of tightened margins and constrained financial resources, the risk function needs to partner with the first line in identifying and considering risk‐return tradeoffs. This requires risk to be embedded, ex‐ante, in the development of the strategic direction of the firm.
  • Pragmatic enablers of decision making: Risk management in capital markets requires careful balancing of complexity with real‐world management of risk. The advent of stress testing, for example, represents a much more digestible output than many of the complex economic capital models that had come before. Risk functions need to continue to develop metrics and reports that enable senior management to understand the risk–return trade‐offs embedded in their business, and take action.

NOTES
