Peter Reynolds and Mike Hepinstall
The capital markets provide an invaluable mechanism within the economy to enable private‐ and public‐sector investment and growth by facilitating the allocation of resources and sharing of risk. Such activities by their very nature require careful and appropriate risk management. The complexity and speed inherent in modern capital markets firms make sound risk management extremely important, as problems can emerge and escalate quickly. In particular, following the financial crisis, there has also been an elevated regulatory focus on risk management practices, risk measures, and capital requirements.
This chapter provides a comprehensive overview of the risk management and measurement approaches used within capital markets firms. Where regulatory concepts are introduced, the focus is primarily on regulations that apply to banks active in the capital markets; however, the underlying risk management principles apply to non‐bank capital markets firms as well. This chapter will outline a common taxonomy used to parse risks into digestible types, review the major metrics used to measure and monitor risk, provide a brief overview of the regulatory environment, and discuss the intersection of risk and strategy.
The risk metrics section includes more detailed reviews of market and counterparty risk measures that are uniquely important to the capital markets, and provides an introduction to other relevant risk types such as operational risk and liquidity risk.
Even the in‐depth sections of this chapter only skim the surface of the thinking, practice, and approaches used to measure, monitor, and react to risks. Thousands of books, academic papers, regulatory rules, and other thought‐pieces have been written about risk management, and the approaches continue to evolve. This chapter provides an introduction to key concepts of that world. Finally, it closes with some thoughts on the future of risk management.
Given its importance, there is surprisingly little consensus on the definition of risk. Early debates centered on the distinction between “risk” and “uncertainty,” with a view that risk needs to be inherently quantifiable.1 This thinking has evolved notably with Holton (2004),2 who proposes that risk exists when there is uncertainty about potential outcomes and those outcomes have an impact on utility.
For the purposes of this chapter, we define risk as uncertainty around future expectations of earnings. We should highlight that risk lies in the deviation from expectations. Within a distribution of potential outcomes, the expected value may be negative (for example, a bank may anticipate expected losses on a loan portfolio). But this is not in itself risk, as the expected value is known and can be priced into the transaction at inception. Risk is the unexpected: the volatility around that expectation.
In their paper from 2007,3 Schuermann and Kuritzkes define a mutually exclusive, collectively exhaustive taxonomy of risk (Figure 5.1). They use this to determine the proportion of earnings volatility attributable to each risk‐type. This serves as a good starting point for parsing risk into digestible types that can be measured and monitored.
As a high‐profile example, much of the fallout from the collapse of Lehman Brothers in 2008 stemmed from concern that losses arising from one counterparty default could be large enough to destabilize other counterparties, leading to a chain of further defaults.
More recently two subsets of operational risk have been the subject of close scrutiny: conduct risk and model risk. Conduct risk is the risk that losses may occur due to misconduct of employees of a bank, for example, the selling of inappropriate products to consumers. Model risk is the risk that errors in models or in their use lead to losses (e.g., by incorrectly pricing trades or drawing other faulty business decisions from model output).
Since the financial crisis, the approach to measuring and managing risk has materially evolved.
First, there has been an increased focus on the connections among various risk types, including more stringent approaches to handling correlation across risks. This has included greater regulatory emphasis on broad‐based stress tests—exemplified by the U.S. Federal Reserve's CCAR process, which considers major macroeconomic shocks that could influence multiple categories of risk.
Second, the prior focus on capitalization, which is the buffer held to mitigate all risk‐types covered above, has been expanded. Many new measures are much more blunt, and purposefully do not seek to risk‐weight assets (e.g., the leverage ratio), reflecting the view that models of risk are inherently limited. In addition, the regulatory focus has extended beyond capital to also include buffers for liquidity and funding risks (“asset–liability risk” in the framework above). This reflects the reality that many of the major firms that failed during the financial crisis did so primarily due to liquidity and funding issues, rather than pure insolvency.
Risk management as a discipline, and regulators in particular, have also taken a broader look at the overall interconnectivity of the whole financial system. This includes the designation of a number of non‐banks as systemically important financial institutions (SIFIs) covered by banking regulation, as well as reviews of the “shadow” banking sector (e.g., repo markets, non‐bank financial intermediaries). Furthermore, the overall functioning of the markets—including depth of liquidity—has been under close review to ensure that the financial system is sound.
The organization of controls and structures within capital markets firms has evolved materially over the past 20 years. The core principle used is that of “three lines of defense”:
Capital markets firms often maintain a matrix risk reporting structure for the second line. The first dimension of this matrix is the primary lines of business, with a risk officer dedicated to ensuring that each business unit is appropriately managed and acting as the primary point of contact for the business line management team. The second dimension is risk type, with a risk officer charged with ensuring that the aggregate risk profile of the business is appropriate for each risk type incurred. Recently, a third dimension has grown in importance: as national regulators increasingly require global firms to “ring‐fence” their businesses in each geography, regional risk officers are playing a growing role.
In order for the risk organization to be effective, a clearly established set of policies and procedures needs to be put in place. Policies and procedures must be supported by a clear risk appetite statement, limit framework, robust risk measures, and systematic reporting of the risks and associated actions taken to mitigate these risks. Key risk measures, limits, and risk mitigation actions are reported through to a clearly defined committee structure, leading up to the board of directors, who often have a standalone risk committee.
At the core of risk management is a set of metrics comparing the ratio of risk to the buffer held by an institution against that risk. Quantification of these metrics requires two sets of calculations: (1) determining what qualifies as a buffer for the risk in question; and (2) quantifying the potential risk itself, and comparing this to the buffer to ensure adequate protection.
In order to mitigate risks, banks hold two distinct but related forms of cushion: capital and liquidity.
Through time, banks have failed or have required government assistance because they had shortfalls in capital, lack of liquidity, or a combination of the two. In addition, liquidity and capital shortfalls can blend together in times of stress, with perceptions of inadequate capital leading to liquidity shortfalls, and liquidity shortfalls leading to inadequate capital. All key stakeholders in capital markets firms, including regulators and investors, maintain a keen focus on ensuring adequate capital and liquidity levels.
In its simplest form, capital is the equity in a bank's balance sheet: the difference between assets and liabilities. If assets decline in value, capital is the cushion that banks hold against these losses.
Bank balance sheets are complex. A bank's total capital is made up of various types of capital, including:
Recently, banks have issued a number of contingent convertible capital instruments (CoCos). These are hybrid securities that convert from debt‐like instruments to equity‐like capital on specific trigger events designed to capture stressful environments. Depending on the precise nature of these contractual triggers, these instruments can be classed as either additional Tier 1 or Tier 2 capital.
Most jurisdictions set a regulatory minimum for each type of capital defined previously. In general, these minimum levels are presented as a percentage of risk‐weighted assets (RWA). Risk‐weighted assets are defined in a later section. Banks seek to be as efficient as possible in terms of required capital, as the cost of equity is generally higher than the cost of debt.
Following the recent financial crisis, the focus of regulators and risk management practitioners turned first to capital adequacy measurement and management. Shortly thereafter, liquidity and funding management became the focus of many regulatory initiatives.
Capital and liquidity are distinct but related. While capital is fundamentally a measure of the solvency of a bank (the difference between assets and liabilities), liquidity reflects the ability a bank has to find the liquid resources (usually cash) to meet demands. A bank can be solvent from an accounting perspective, maintaining adequate capital, but still face a material crisis due to lack of liquidity. Indeed, most recent cases of bank failure have manifested themselves through liquidity, rather than capital, shortfalls.
Measures of asset liquidity for banks essentially have two dimensions:
As with capital, liquidity is costly, as expected returns on short‐term highly liquid positions, such as cash, are low relative to the alternatives.
Since liquidity risk arises from mismatches of assets and liabilities, the measurement of funding stability is also critical to liquidity risk management. Both asset‐side and funding measures of liquidity are further detailed later in this chapter.
The financial system plays a critical role in the functioning of the modern economy. When healthy, it provides businesses and consumers access to the capital markets to fund their growth opportunities—be it a new factory, a new home or car, and so on. When the financial system is unhealthy, such growth opportunities may be systematically forgone, resulting in slower growth, recession, or even depression.
As a result, banks have unique access to government support, including access to central bank funding such as the discount window at the U.S. Federal Reserve. Given this, and the importance of their role to the economy as a whole, banks are subject to comprehensive regulation by multiple bodies.
At a global level, regulation is developed by the Basel Committee on Banking Supervision, a sister organization of the Bank for International Settlements (BIS), which was established in 1930 and is headquartered in Basel, Switzerland. The Basel Committee has played a central role in development of new capital and liquidity standards for financial services firms. These capital accords are referred to in shorthand as “Basel accords,” and have evolved through time:
The global regulatory landscape continues to evolve markedly. The BIS has conducted a fundamental review of the trading book (FRTB),11 which resulted in proposals to remove a number of the more complex modeling approaches for market risk, establish capital floors, and move management of market risk to the desk‐level. Further capital proposals are also under development for both the standardized approach to credit risk12 as well as specific approaches to counterparty credit risk.13 These recent regulatory proposals present a clear trend toward limiting the use of internal models to determine regulatory capital.
While the global regulatory framework is the product of negotiation and agreement across major central banks, the implementation of the rules falls to national regulators. The precise national interpretation of rules may differ by geography. In the United States, a web of regulatory bodies exists. Within the capital markets space, these bodies include:
Each of these bodies, as part of their supervisory mandate, is able to develop and enforce regulation. In some cases this is done by implementing global regulations—for example, the FDIC, Federal Reserve Board, and the OCC have jointly implemented the Basel requirements in the United States.
In a number of cases regulation is local rather than global. For example, the Volcker Rule14 prohibiting capital markets firms from engaging in proprietary trading was part of the Dodd‐Frank Act, and hence only applies to firms with a U.S. presence. Furthermore, the supervisory process itself may be materially different based on geography. In the United States, for example, Dodd‐Frank mandated stress‐testing as the primary tool used by regulators to ensure appropriate capital adequacy and robustness in risk management processes. This manifests itself through the annual Comprehensive Capital Analysis and Review (CCAR). CCAR requires major bank holding companies (BHCs) and intermediate holding companies (IHCs) of large foreign‐owned banks to estimate the impact on income statements and balance sheets of a severely adverse economic scenario, and demonstrate adequacy of capital through this stressed forecast.
Outside of the United States, multiple regulatory agencies have evolved post‐crisis with a keen focus on stress testing of banks. Similar to CCAR, these stress tests act as a tool for determining adequate capital. Examples include the European Union–wide stress tests run by the European Banking Authority (EBA) and the Bank of England stress tests of UK banks, run by the Prudential Regulation Authority (PRA). Furthermore, additional regulations have required international firms to ring‐fence and manage risk for their local legal entities on a standalone basis, reducing the fungibility of capital across regions.
A number of analytical techniques are used to determine the riskiness of a bank's positions. In many cases, these are directly used to determine a risk‐weighted assets (RWA) estimate, which can then be compared to capital.
Changes in the market value of traded instruments are typically explained by the use of pricing models, which relate a position's market value to a set of individual market‐derived inputs known as risk factors. For example, using the classic Black‐Scholes model for a call option, the risk factors would include the stock price, discount curve, expected dividends, and implied volatility (the strike price and maturity would also be key static inputs, but not risk factors).
The use of risk factors is instrumental to the aggregation of market risk measurements across a portfolio of various instruments. Extending the above example, the translation of the option position into a set of risk factor sensitivities allows a risk manager to understand how its sensitivity to an equity price shock combines with other direct long and short positions to form overall portfolio sensitivities and conduct portfolio‐level scenario analysis. Such sensitivities may be linear or nonlinear.
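To make the risk-factor mapping concrete, the sketch below prices a call option with the classic Black-Scholes formula and derives sensitivities to individual risk factors by finite-difference bumps. It is a minimal illustration, assuming a flat discount rate and no dividends; the position parameters are hypothetical.

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    # Standard normal CDF via the error function (no external libraries needed).
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(spot, strike, vol, rate, maturity):
    # Classic Black-Scholes price of a European call (flat rate, no dividends).
    d1 = (log(spot / strike) + (rate + 0.5 * vol ** 2) * maturity) / (vol * sqrt(maturity))
    d2 = d1 - vol * sqrt(maturity)
    return spot * norm_cdf(d1) - strike * exp(-rate * maturity) * norm_cdf(d2)

# Hypothetical position: an at-the-money 1-year call.
spot, strike, vol, rate, T = 100.0, 100.0, 0.20, 0.02, 1.0
base = bs_call(spot, strike, vol, rate, T)

# Sensitivities to two risk factors via small finite-difference bumps:
delta = (bs_call(spot + 0.01, strike, vol, rate, T) - base) / 0.01   # to the stock price
vega = (bs_call(spot, strike, vol + 1e-4, rate, T) - base) / 1e-4    # to implied volatility
```

Sensitivities like these, expressed per risk factor, are what allow positions in different instruments to be aggregated into portfolio-level exposures.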
Value‐at‐Risk (VaR) is a measure of the risk of loss on a portfolio of liquid assets in a trading book, for a given probability and holding period. VaR is defined as the threshold of a one‐sided confidence interval, such that the probability of a mark‐to‐market loss exceeding this threshold equals a specified probability (Figure 5.2). For example, if a portfolio has a 1‐day holding period and a 99% confidence level VaR of $1MM, then on average a daily loss exceeding $1MM should occur on only 1 out of 100 trading days.
Typically, VaR is computed assuming a static portfolio, that is, assuming no new positions or hedges are taken on and no existing positions or hedges are exited during the horizon.
There are multiple methods for creating the distribution of portfolio gains and losses needed to calculate VaR. We review several of the alternatives. Models generally follow one of three broad frameworks:
Another core building block of any VaR model is the selection of the appropriate holding period to use for analysis. Given that the VaR framework measures potential changes to the portfolio assuming no rebalancing, the selection of a longer holding period will generally lead to a higher VaR‐based risk measure. However, since trading books turn over quickly and tend to be managed closely, this fundamental modeling assumption becomes less representative of reality when it is applied to longer horizons.
While classical VaR models generally do not relax the assumption of a static portfolio, the rapid turnover of positions is one reason why VaR model time horizons are generally short. Historically, many trading businesses used a 1‐day VaR metric for internal management reporting, alongside a 10‐day VaR metric for regulatory capital and the associated reporting. Some businesses use intermediate values (e.g., 2‐ or 3‐day VaR).
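As a concrete illustration, the sketch below computes a 1-day 99% VaR by historical simulation (one common modeling framework) and scales it to a 10-day horizon using the square-root-of-time rule, a widely used approximation that assumes independent, identically distributed daily returns. The P&L history here is randomly generated stand-in data, not real portfolio figures.

```python
import random

random.seed(7)
# Stand-in for roughly four years of observed daily P&L (positive = gain), in $MM.
pnl = [random.gauss(0.0, 1.0) for _ in range(1000)]

losses = sorted(-x for x in pnl)            # convert P&L to losses, ascending
var_1d = losses[int(0.99 * len(losses))]    # 99th-percentile daily loss
var_10d = var_1d * 10 ** 0.5                # square-root-of-time scaling
```

Note that the scaling step inherits the static-portfolio assumption discussed above, which becomes less realistic over longer horizons.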
However, each of these is a fairly one‐size‐fits‐all metric, where highly liquid assets that could be managed down or hedged within hours are commingled with less liquid positions that would take significantly longer.
In its classic definition as noted above, VaR refers to a threshold value, that is, the size of loss for which there is only an x% chance of a larger loss within a certain period of time.
Another measure that has emerged over time is Expected Shortfall (ES), which measures the expected value (i.e., probability‐weighted average) of all the potential losses that are beyond a certain percentile of the distribution. Since VaR measures the threshold value while ES measures the values beyond it, for any given value of x, ES will give a larger dollar‐loss figure than VaR. ES has certain benefits for risk management purposes; in particular it always considers the most extreme potential losses, whereas VaR is mostly insensitive to differences in risk that are beyond the threshold value.
For instance, say that one trading desk (A) has entered a trade that pays off $5MM in 99.9% of cases, but loses $1 BN in 0.1% of cases, while another (B) has a position that pays off $5MM in 99% of cases, but loses $100MM in the other 1%. Using a 99% VaR, the risk in position B would be captured well but A could slip under the radar as its risk only arises further out in the tail than was considered. ES would capture them both, weighing the larger loss potential of A against the higher likelihood of loss in B.
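The desk A/B comparison can be reproduced numerically. The sketch below is a simple discrete implementation, not a production method: VaR is taken as the smallest loss whose cumulative probability strictly exceeds the confidence level, and ES averages the worst 1% of outcomes, splitting probability mass at the tail boundary where needed.

```python
def var_es(outcomes, alpha=0.99):
    # Discrete VaR and ES from (loss, probability) pairs in $MM;
    # negative losses represent gains.
    tail = 1.0 - alpha
    # ES: probability-weighted average loss over the worst (1 - alpha) of outcomes.
    es, remaining = 0.0, tail
    for loss, prob in sorted(outcomes, reverse=True):   # worst losses first
        take = min(prob, remaining)
        es += take * loss
        remaining -= take
        if remaining <= 0.0:
            break
    es /= tail
    # VaR: smallest loss whose cumulative probability strictly exceeds alpha.
    cum, var = 0.0, None
    for loss, prob in sorted(outcomes):                 # best outcomes first
        cum += prob
        if cum > alpha:
            var = loss
            break
    return var, es

desk_a = [(-5.0, 0.999), (1000.0, 0.001)]   # gains $5MM, or loses $1BN
desk_b = [(-5.0, 0.99), (100.0, 0.01)]      # gains $5MM, or loses $100MM
var_a, es_a = var_es(desk_a)
var_b, es_b = var_es(desk_b)
# 99% VaR misses desk A's tail risk entirely (var_a is a gain), while ES flags it.
```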
While VaR models are commonly used in risk management, they also have significant limitations, including:
In response to the financial crisis, global regulators have taken additional measures—first through Basel 2.5, and subsequently through the proposals arising from the Fundamental Review of the Trading Book—to address these limitations. Comparing the post‐FRTB capital proposals against Basel 2, key enhancements include:
In addition to modeling of general and specific risks covered previously, capital markets firms are also required to identify risks not in VaR/ES (RNIVs) and to ensure that they are appropriately capitalized.
In addition to VaR or ES, risk managers also use other complementary measures for market risk. Several of these are variants of stress testing, including scenario analysis. Generally, in stress tests, instead of attempting to define an entire probability distribution of possible losses, the focus is on defining specific hypothetical situations and then using valuation models and other analytic tools to determine the impact each scenario would have on the portfolio. By considering hypothetical scenarios, stress tests can help risk managers assess risks that have not occurred but may materialize in the future. On the other hand, stress tests are limited to whatever scenarios have been considered, and so are never completely exhaustive. To develop a robust stress‐testing framework, risk managers therefore often consider three categories of scenarios:
Another common approach to evaluating credit risk concentrations in a trading portfolio (equivalent to a form of stress testing) is simply to identify and monitor the top 10, 20, or 50 counterparty exposures, ranked by the total loss that would be incurred if each counterparty were to immediately jump to default. One or more alternative Loss Given Default (LGD) assumptions may be used with this approach (e.g., JTD_100, JTD_50, JTD_0, JTD_worst).
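A minimal sketch of such a ranking is shown below, with hypothetical counterparty names and exposures; the lgd parameter switches between conventions such as JTD_100 (lgd=1.0) and JTD_50 (lgd=0.5).

```python
# Hypothetical current exposures per counterparty, in $MM.
exposures = {"CP_A": 120.0, "CP_B": 340.0, "CP_C": 75.0, "CP_D": 210.0}

def top_jtd(exposures, lgd=1.0, n=3):
    # Loss if each counterparty jumped to default today, ranked largest first.
    jtd = {name: e * lgd for name, e in exposures.items()}
    return sorted(jtd.items(), key=lambda kv: kv[1], reverse=True)[:n]

top3 = top_jtd(exposures, lgd=0.5)   # JTD_50 ranking of the top 3
```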
Counterparty risk can be defined as the risk of a financial loss due to the potential default of a trading counterparty. It arises mainly from two types of activity:
Compared with the credit risk that arises from lending, counterparty risk presents an additional measurement challenge as the value of exposures fluctuates through time and is dependent on market movements. It can be helpful to think of counterparty risk as a mixture of market risk and credit risk—indeed, the tools used to measure and manage counterparty risk incorporate some elements from each, as well as some unique concepts.
For instance, concentration of exposure to any one entity or group of related entities is a major consideration across credit risk types whether arising from lending or counterparty activity. For credit risk management purposes, many organizations define total exposure limits and reporting to combine lending and counterparty activities.
On the other hand, like market risk, the potential variability in exposure to a given counterparty can be expressed in a VaR‐like measure. Such measures are commonly used in internal risk reporting, and in some cases also embedded in contractual margin requirements (discussed further in the following).
Given that exposure can vary over time, the management and capitalization of counterparty risk typically considers both current exposure and future exposure.
In the case of an uncollateralized derivative, current exposure can be measured simply as:

Current Exposure = max(Market Value, 0)

where the derivative's market value could be either positive or negative, but only positive market values create exposure to the counterparty.
Future exposure is a somewhat more nebulous concept since the future is uncertain and there are typically multiple time horizons of interest between the present and the maturity of a position (or portfolio of positions under one netting agreement). We use this term broadly to encompass a family of distinct metrics, some key members of which are defined more precisely in Table 5.1.
Table 5.1: Metrics and Their Corresponding Equations

Metric | Equation
Expected exposure (EE) | EE(t) = E[max(MV(t), 0)]
Effective expected exposure | EffEE(t) = max over s ≤ t of EE(s)
Potential future exposure (PFE) | PFE_α(t) = q_α(max(MV(t), 0)), where α is a confidence level such as 95% or 99% and q_α denotes the α‐quantile
Maximum PFE/Peak PFE | Max PFE = max over t of PFE_α(t)
As shown, potential future exposure (PFE) is strongly analogous to VaR, except that it refers to the upper tail of potential market values at a particular future date, rather than the lower tail of potential changes in value over a particular horizon from today. When setting risk management limits on individual counterparties, it is common to utilize PFE‐related measures as they provide advance visibility into the more extreme possible exposure concentrations that could result from unexpected market movements.
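The EE and PFE definitions in Table 5.1 can be estimated by Monte Carlo simulation. The sketch below uses a deliberately simplified toy model (the netted portfolio market value at horizon t is normally distributed with mean zero and a volatility that grows with the square root of time); all parameters are hypothetical.

```python
import random

random.seed(42)

def exposure_profile(horizons, annual_vol=10.0, n_paths=20000, alpha=0.99):
    # Returns {horizon: (EE, PFE_alpha)} in $MM under the toy model above.
    profile = {}
    for t in horizons:
        mvs = [random.gauss(0.0, annual_vol * t ** 0.5) for _ in range(n_paths)]
        expo = sorted(max(mv, 0.0) for mv in mvs)   # only positive MV is exposure
        ee = sum(expo) / n_paths                    # expected exposure
        pfe = expo[int(alpha * n_paths)]            # 99th-percentile exposure
        profile[t] = (ee, pfe)
    return profile

prof = exposure_profile([0.25, 1.0, 2.0])   # horizons in years
```

In this toy model both EE and PFE grow monotonically with the horizon; for amortizing products such as interest-rate swaps, real profiles typically rise and then fall as remaining cash flows run off.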
Measurements of counterparty exposure are also intrinsically related to a second key counterparty risk concept, the Credit Valuation Adjustment (CVA). CVA measures the difference between the market value of a particular derivative contract and the market value the same contract would have had if the counterparty (or both the counterparty and the firm)15 had zero probability of default.
A simplified example will illustrate:
A derivative contract is structured such that there are two potential outcomes: In 50% of cases, the counterparty will owe the firm $15MM, and in the other 50% of cases, the firm will owe the counterparty $10MM. Assuming no credit risk on either party and a short enough horizon that discounting is immaterial, the NPV of the contract is $2.5 MM. Yet if the counterparty has a 10% chance of defaulting (with no recoveries), this decreases the positive contribution to NPV by 10%*50%*$15MM or $750 K. If the firm itself has a 1% chance of defaulting (with no recoveries), this decreases the negative contribution to NPV by 1%*50%*$10MM or $50 K. Netting these two effects, the bilateral CVA would be –$700 K and the final derivative value would be $1.8 MM.
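The arithmetic of this example can be verified in a few lines (a sketch of this specific example only, not a general CVA engine):

```python
# Two-outcome derivative from the example; values in $MM, discounting ignored,
# zero recovery on default.
p_up, v_up = 0.5, 15.0        # counterparty owes the firm
p_dn, v_dn = 0.5, -10.0       # firm owes the counterparty
pd_cpty, pd_own = 0.10, 0.01  # default probabilities

npv_riskfree = p_up * v_up + p_dn * v_dn   # $2.5MM ignoring credit risk
cva = pd_cpty * p_up * v_up                # $0.75MM lost when counterparty defaults
dva = pd_own * p_dn * -v_dn                # $0.05MM gained if the firm defaults
bilateral_cva = -cva + dva                 # -$0.70MM
value = npv_riskfree + bilateral_cva       # $1.8MM final derivative value
```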
In general mathematical form, unilateral CVA can be expressed (ignoring discounting) as:

CVA = E[ LGD × max(MV(τ), 0) × 1{τ ≤ T} ]

where τ is the counterparty's default time, T is the final maturity, MV(τ) is the market value of the position at default, and LGD is the loss given default. If the scenarios of MV, PD, and LGD are completely independent of each other (which may not necessarily hold in all cases), then this can be simplified to the more tractable expression below:

CVA ≈ Σ over i of LGD × EE(t_i) × PD(t_{i−1}, t_i)

where the sum runs over time buckets up to maturity, EE(t_i) is the expected exposure at time t_i (as defined in Table 5.1), and PD(t_{i−1}, t_i) is the marginal probability of default within bucket i.
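Under the independence assumption, the simplified sum is straightforward to implement. The sketch below uses hypothetical expected-exposure and marginal default probability profiles over four time buckets:

```python
# Hypothetical profiles per time bucket (e.g., quarterly to a 1-year maturity).
ee = [4.0, 5.5, 6.2, 5.8]                    # EE(t_i) in $MM
marginal_pd = [0.004, 0.005, 0.005, 0.006]   # PD(t_{i-1}, t_i) per bucket
lgd = 0.6                                    # assumed loss given default

# CVA as the sum over buckets of LGD x EE(t_i) x PD(t_{i-1}, t_i),
# with discounting ignored for simplicity.
cva = sum(lgd * e * q for e, q in zip(ee, marginal_pd))
```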
Prior to the global financial crisis, measuring CVA had already become a common accounting practice, used to create a reserve for expected credit losses on derivatives, which could be viewed as analogous to loan loss reserves. However, the approach to estimating the counterparty's PD and LGD could either be a historical/actuarial approach or a market‐driven approach derived from spreads, such as from traded CDS contracts. In recent years, the market‐driven approach has come to dominate, at least where the counterparty's spreads are observable or can be inferred from comparable companies and indices.
While CVA considers the possibility of default, it does so from an expected value point of view, so considering our definition of risk as “deviations from expected,” CVA is not in that sense a risk measure but simply a pricing or valuation measure. However, changes in CVA do present a second form of risk, since such changes affect the fair value of the firm's positions and ultimately flow through to earnings. In an environment where the counterparty PD is derived from the counterparty's market‐observed spreads, counterparty spread volatility can be a major driver of variations in CVA, and hence another source of risk to earnings.
Many firms have also expanded a similar approach to adjust derivatives valuations for other effects beyond CVA, such as the risk of their own default (debt valuation adjustment, or DVA), and the cost of funding the positions (funding valuation adjustment, or FVA). Further valuation adjustments for cost of capital (KVA) and margin (MVA) have even been considered. The term XVA is used as a catchall for the set of all valuation adjustments on derivatives.
There are a number of contractual mechanisms by which counterparty risk can be mitigated, if not entirely eliminated:
Collateral posted and held as margin is an important element in both cleared and bilateral derivative trading. For instance, a clearinghouse will generally require each of its counterparties to post two forms of margin:
Wrong‐way risk refers to the situation where a counterparty's exposure is strongly correlated, or even causally linked, to its Probability of Default (PD) and/or Loss Given Default (LGD). When this is the case, multiplying independent estimates of exposure, PD, and LGD together does not give a sufficient measure of expected loss, because under the scenarios that generate the most exposure, PD or LGD is also higher.
For illustration, consider a simple example where there are two equally probable scenarios: one in which Exposure = $100, PD = 20%, and LGD = 100%, and another in which Exposure = $20, PD = 0%, and LGD = 100%. The first scenario leads to an expected loss of $20 and the second $0, for an average of $10. If exposure, PD, and LGD were instead each averaged and then multiplied as though independent, the expected loss computed as $60 × 10% × 100% = $6 would be materially understated.
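One consistent reconstruction of this comparison, with hypothetical scenario values chosen to match the figures above (average exposure $60, average PD 10%, scenario-consistent expected loss $10), can be checked in code:

```python
# (probability, exposure $, PD, LGD) for two equally likely scenarios.
scenarios = [
    (0.5, 100.0, 0.20, 1.0),
    (0.5, 20.0, 0.00, 1.0),
]

# Scenario-consistent expected loss: average the per-scenario losses.
el_joint = sum(p * e * pd * lgd for p, e, pd, lgd in scenarios)   # $10

# Independence-based expected loss: average each input, then multiply.
avg_e = sum(p * e for p, e, _, _ in scenarios)                    # $60
avg_pd = sum(p * pd for p, _, pd, _ in scenarios)                 # 10%
avg_lgd = sum(p * lgd for p, _, _, lgd in scenarios)              # 100%
el_indep = avg_e * avg_pd * avg_lgd                               # $6, understated
```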
The most severe examples of wrong‐way risk are when the counterparty is directly affiliated with an issuer whose securities are referenced on a derivative or used as collateral. This is referred to as “specific wrong‐way risk,” and is generally addressed through controls to restrict such transactions, or provide no credit for such collateral, rather than advanced measurements.
However, broader forms of general wrong‐way risk can lurk where there are underlying economic drivers that affect, but do not fully determine, both the counterparty's creditworthiness and the exposure. Examples include:
In the presence of general wrong‐way risk, firms may make further adjustments to their exposure or LGD estimates to be more consistent with values conditional on default, or adjustments to their PD or LGD estimates to be more consistent with values conditional on a scenario that would drive high exposure.
The opposite of wrong‐way risk is sometimes referred to as “right‐way risk” (i.e., when a key factor that could lead to an increase in exposure would also lead to an improvement in counterparty creditworthiness). This can sometimes be seen in commodity hedging with certain corporate clients (e.g., where the client's profits are correlated to a commodity price, and the derivatives used for hedging involve paying the client a fixed price for a certain volume of that commodity). In principle, right‐way risk may reduce CVA or other measures of counterparty risk.
Basel 3 introduced two distinct metrics to measure and manage liquidity risk, the Liquidity Coverage Ratio (LCR) and the Net Stable Funding Ratio (NSFR).
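As a simplified illustration of the first metric, the LCR compares a stock of high-quality liquid assets (HQLA) against projected net cash outflows over a 30-day stress window, with a regulatory minimum of 100%. The sketch below uses hypothetical balances and applies the Basel 3 cap that recognizes inflows only up to 75% of gross outflows; real calculations apply prescribed run-off and inflow rates by exposure category.

```python
def lcr(hqla, outflows, inflows):
    # Basel 3 caps recognized inflows at 75% of gross stressed outflows.
    net_outflows = outflows - min(inflows, 0.75 * outflows)
    return hqla / net_outflows

# Hypothetical 30-day stressed figures, in $BN.
ratio = lcr(hqla=120.0, outflows=150.0, inflows=80.0)
compliant = ratio >= 1.0   # regulatory minimum of 100%
```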
In addition to the metrics above, firms are also expected to conduct periodic stress testing of their liquidity profile. As an example, in the United States, the Federal Reserve Board conducts a comprehensive liquidity analysis and review (CLAR) process to evaluate the liquidity position and liquidity risk management practices at the largest firms.16
Since the financial crisis, the banking sector has seen a number of significant operational risk losses. High‐profile cases include rogue trader Jérôme Kerviel at Société Générale, the unauthorized or unknown risk positions put on by the “London Whale” at JP Morgan Chase, and legal fines levied on firms for the mis‐selling of financial assets.
The Basel 2 accords outline seven categories of operational risk events:
Basel 2 also outlines two key approaches to quantification of operational risks:
In addition to the regulatory metrics, recent focus has once again been on stress testing and scenario analysis to test the vulnerability of firms to operational risk losses.
Beyond regulatory requirements, capital markets firms have developed their own in‐house approaches used to measure, monitor, and manage risks. These range from simple exposure levels, which are easily understood, to more complex internal “economic capital” measures that attempt to combine multiple risk types in a common metric. In addition, real‐world scenario analysis is often used. For example, a firm may analyze the impact of a default by two of its largest counterparties or customers.
Capital markets firms use a “risk appetite” statement to express the firm's risk tolerance in the context of their overall risk framework. This is a communication tool used with both internal and external stakeholders—including regulators and the board of directors. A firm's risk appetite statement is a starting point for business units to assess the products and services offered to clients, and to analyze the trade‐offs of risk versus return on a day‐to‐day basis.
A limit framework is often set up to translate a firm's risk appetite to an operational level. For example, specific limits are assigned at the desk and trader levels. The limit framework is continuously monitored by Risk to ensure that each trader is in compliance with the limit structure. Risk also maintains a formal process for the management of breaches.
Given the limited nature of, and costs associated with, financial resources (liquidity, funding, capital), Risk, Treasury, Finance, and the Front Office need to work hand in hand to optimize these scarce resources and maximize shareholder returns. This is often done by multiple mechanisms—one more top‐down and another bottom‐up.
A top‐down approach involves measuring returns on one or more of these financial resources for each line of business, and using the comparison of relative returns to inform strategic plans to grow certain businesses more quickly, challenge underperforming units to make do with fewer financial resources, or exit businesses that are expected to continue to underperform.
A bottom‐up approach involves creating internal mechanisms to charge businesses for the financial resources they use, at costs that may be higher for the scarcest resources. When these costs are pushed down, every business is in effect challenged to find and curtail the constituent activities that consume a greater share of its financial resources than the profits they generate, and to expand those that contribute a greater share of profits. These activities may be distinguished at a much more granular level (e.g., the offering of specific product subtypes, or even the coverage of specific clients).
Given the relationship between risk and returns, banks look to be appropriately compensated for the risk they take, using a measure of risk‐adjusted returns for optimizing business decisions as well as for management reporting. Since there is no single industry‐standard approach, firms often tailor their choice of metric to fit their business models.
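One widely used family of such metrics is RAROC (risk-adjusted return on capital). A minimal sketch, with hypothetical figures, deducts expected loss (which, per the definition of risk earlier in the chapter, is priced rather than held against) from earnings and divides by the economic capital held against unexpected loss:

```python
def raroc(revenue, costs, expected_loss, economic_capital):
    # Risk-adjusted return: earnings net of expected loss, per unit of
    # economic capital held against unexpected loss.
    return (revenue - costs - expected_loss) / economic_capital

# Hypothetical business-unit figures, in $MM.
r = raroc(revenue=50.0, costs=20.0, expected_loss=5.0, economic_capital=125.0)
```

A hurdle rate, such as the firm's cost of equity, would then determine whether the unit creates shareholder value.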
Effective risk management is, and will remain, critical to the success of capital markets businesses. If anything, the importance will continue to increase as firms become increasingly interconnected. This trend is driven by the velocity of trading in an electronic age, as well as post‐crisis consolidation of the larger capital markets firms. Ongoing product development will only add further complexity to the product‐set, and hence risk management will need to continue to evolve to provide effective control and balance.
Metrics and regulatory requirements will continue to evolve through time; thus in highly competitive markets, risk management functions will need to be: