CHAPTER 12

Internal Loss Data

What is the point of internal loss data? It is so hard to get and it is so hard to use.

When considering enterprise risk management, internal loss data does have a role; but perhaps it is not as sophisticated as some might think. Basically, internal loss data can tell you quite a lot about what has previously gone wrong, but it is rarely complete or accurate. The logic is that, in part, it will tell you about the losses that you generally incur and assist you with working out what budgeted losses should be. But how useful is it? Let’s explore that further.

Internal losses are the losses that arise because of your choice of internal control system. Generally, the weaker your control system, then the higher the level of losses that you will probably incur. But you need to be careful here. If, on average, you lose $25,000 a year from an event or style of loss and to prevent it would cost $50,000, then you would take the $25,000 loss every time. But are losses really like that? Do they work on average?
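As a toy illustration of why averages can mislead, the following sketch compares two hypothetical loss processes with the same $25,000 mean from the example above but very different worst years (the figures beyond the $25,000 average are made up):

```python
import random

random.seed(42)  # fixed seed so the illustration is repeatable

# Two hypothetical loss processes, each averaging $25,000 a year.
# Process A: a steady $25,000 loss every year.
# Process B: nothing most years, but a $500,000 loss in 1 year out of 20.
years = 100_000
steady = [25_000.0] * years
spiky = [500_000.0 if random.random() < 0.05 else 0.0 for _ in range(years)]

mean_steady = sum(steady) / years
mean_spiky = sum(spiky) / years

print(f"mean of A: {mean_steady:,.0f}")  # 25,000
print(f"mean of B: {mean_spiky:,.0f}")   # close to 25,000
print(f"worst year A: {max(steady):,.0f}, worst year B: {max(spiky):,.0f}")
```

On averages alone the two processes look identical, yet a control costed against the average would leave process B's $500,000 year entirely unprovided for.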

There are the losses that you expect to take place all the time and others that occur less frequently. Controls are often designed to prevent or identify normal losses, but they are often weak at identifying less usual losses, such as those arising from fraud.

It is unrealistic to expect a control designed to prevent problems in the day-to-day to be effective when you are considering extreme values.

Risk management therefore needs to consider what internal loss data is and what it is for. There are two types of internal loss data:

  1. The losses that result from events that you expected to happen; and
  2. The losses that result from events that you did not expect to happen.

Considering first the losses that you did expect to happen, these fall into many categories. As already discussed, risk management is not about budgeting; loss distributions are curves showing loss severity and likelihood over the extent of a distribution.

You might expect five losses from a particular type of event over a year, but you might not expect 50. You might expect an individual loss of up to $120,000, but not a loss of $4 million. As you can see, a loss that is expected by nature might still be unexpected due to either its value or its frequency.

Unexpected losses are really the manifestation of some form of scenario and we will consider this in depth later. Generally, any loss that is incurred that is a manifestation of a scenario is not relevant for internal loss data modelling. It is the occurrence of an unlikely event. This does not make the event necessarily more likely tomorrow or tell you that much about the nature of the next event that might occur. If such events are in the database, take them out.

For internal loss data to be useful, it needs to be aligned to the nature of the business that you intend to conduct. The key value of internal loss data is in assisting management in assessing the quality of their control environment and how it contributes to the achievement of the goals and missions of the firm. As such, it needs to be related to the business that you are intending to conduct.

That means that any loss that took place a few years ago needs to be considered to ensure that it remains relevant, but the value also needs to be fully considered. A loss from 3 years ago, were it to occur today, could occur at a different value. Perhaps the average value of the transactions that you conduct has changed. Perhaps the loss is no longer relevant.

Taking the scaling issue first, to make internal loss data effective, it is necessary for the risk management professionals to consider what the loss would be were an equally significant loss to occur now or in the future. This, by its nature, may have a different value. The consequence of this is that an internal loss database is not static and all losses included within it need to be regularly reassessed to ensure that the values remain valid.
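The rescaling idea can be sketched as follows. The use of average transaction size as the scaling index is an assumption for illustration only; a firm might equally scale on volumes, headcount, or revenue:

```python
def rescale_loss(historical_loss: float,
                 avg_txn_then: float,
                 avg_txn_now: float) -> float:
    """Restate a historical loss at the value an equally significant
    event would have today, scaling by average transaction size."""
    return historical_loss * (avg_txn_now / avg_txn_then)

# A $120,000 loss from 3 years ago, when the average transaction was
# $8,000, restated for today's $12,000 average transaction (made-up figures).
print(rescale_loss(120_000, 8_000, 12_000))  # 180000.0
```

Because the scaling index itself moves over time, a calculation like this would need to be rerun periodically, which is what makes the database non-static.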

However, the loss itself might not be relevant. If you are ceasing a certain product or service, then clearly, the losses related to that product or service will cease to be relevant and should be excluded from any future analysis.

Similarly, if you are replacing a computer system or control structure due to its ineffectiveness, then the losses that relate to the discontinued system will also no longer be relevant. However, the loss database will need to consider the profile of losses that could arise from the new system and that will be undertaken by considering a scenario.

Now you have your database of losses that have been incurred, but how should you assess which value to include? Realistically, most firms do not have effective and fully implemented activity-based costing. What this means is that some areas are considered as cost centers and the costs that they incur are not fully allocated to income streams.

Basically, the internal losses should be fully costed and include both direct and indirect costs. The direct costs are those that directly relate to the event that has caused the loss. This could be the income that is written off or the external consultants that need to be hired to solve matters—for example, lawyers. Then there are the other costs that need to be considered such as supervision, reporting and monitoring.

These costs may well be expensed within other units and would have been incurred, whether the event took place or not. You do need to recognize that the event uses up management and other resources and therefore, these also need to be considered. Internal audit and legal departments are not free and if they are working on non-goal correlated events, then they will not be able to do their job effectively.
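A minimal sketch of fully costing an event, splitting out the direct and indirect components described above. The field names and figures are illustrative, not a standard taxonomy:

```python
from dataclasses import dataclass

@dataclass
class LossEvent:
    written_off_income: float = 0.0   # direct: income written off
    external_advisers: float = 0.0    # direct: e.g. lawyers, consultants
    management_time: float = 0.0      # indirect: supervision and reporting
    internal_audit_time: float = 0.0  # indirect: audit/legal resource used

    @property
    def direct_cost(self) -> float:
        return self.written_off_income + self.external_advisers

    @property
    def indirect_cost(self) -> float:
        return self.management_time + self.internal_audit_time

    @property
    def full_cost(self) -> float:
        return self.direct_cost + self.indirect_cost

event = LossEvent(written_off_income=80_000, external_advisers=25_000,
                  management_time=12_000, internal_audit_time=6_000)
print(event.full_cost)  # 123000.0
```

Note that the indirect components would often sit in other units' cost centers and would be incurred anyway; the point is to attribute the resource the event consumed.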

So ideally, losses should be fully costed and include both direct and indirect costs. They should be regularly reassessed to confirm that they remain relevant, both in terms of their nature and their value. Also, internal losses appearing within an internal loss database need to be regularly reported. As you can see, they can be a bit of a pain, and you do not want a load of them due to the level of modelling that is required.

In terms of their use, you need to look toward the development of a loss distribution. This is done by grouping the losses into value bands, for example:

Value band ($)            Errors identified
0–10,000
10,001–25,000
25,001–50,000
50,001–75,000
75,001–100,000
Greater than 100,000

Each loss that occurs will then be placed into its band, and the shape of a distribution will develop. The distribution will have a tail, which can be truncated if required. If you do not collect small losses, be very careful.

Just because you do not collect them does not mean they do not exist and have not been already considered in provisioning.

As an example, most managers would not be surprised if they went to one of their staff’s houses and found a pencil belonging to the company. That might be considered as pilferage and acceptable to the firm, but it is still a loss. If they went there and found a painting and some tables belonging to the company, then this would be considered as both fraud and a loss, yet both are really losses and both should be considered.
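Returning to the banding step, a minimal sketch of grouping losses into the value bands from the table above (the band edges come from the table; the individual loss figures are made up):

```python
from bisect import bisect_left

# Upper edges of the closed bands; anything above the last edge falls
# into the open-ended ">100,000" band.
BAND_UPPER = [10_000, 25_000, 50_000, 75_000, 100_000]
BAND_LABELS = ["0-10,000", "10,001-25,000", "25,001-50,000",
               "50,001-75,000", "75,001-100,000", ">100,000"]

def band_counts(losses):
    """Count how many losses fall into each value band."""
    counts = [0] * len(BAND_LABELS)
    for loss in losses:
        # bisect_left puts a loss equal to an edge into the lower band
        counts[bisect_left(BAND_UPPER, loss)] += 1
    return dict(zip(BAND_LABELS, counts))

losses = [4_200, 18_500, 18_900, 32_000, 61_000, 98_000, 140_000, 7_750]
print(band_counts(losses))
```

The resulting counts per band form the columnar distribution that the next step turns into a curve.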

Most firms already own software to turn such a columnar distribution into a curve on which calculations can be conducted. Again, be careful with the tails to the distribution. If you do not collect small losses, it does not mean that they do not exist or are not included within budgets or provisions. Likewise, if losses are included within scenario modelling or stress testing, then these should also be excluded. The loss value from the stress testing or scenario modelling will then be used to estimate the shape of the other end of the distribution.

Be careful when you are using curve-fitting software for modelling purposes. Some software has only a limited set of curves available, and the curve selected may underestimate median or mode values and also distort tail estimates. This could undermine what you are seeking to achieve. Some software automatically selects the best-fitting curve for you from the ones that are available. Such a selection might fit the expected part of the curve really well but could fail to have the right shape for either of the tails of the distribution. You do need to know what your software is trying to do.
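To illustrate how the choice of curve can distort the tail, the following sketch (assuming scipy and numpy are available, and using synthetic data rather than real losses) fits both a lognormal and a normal curve to the same sample and compares the 99.9th percentile each implies:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
losses = rng.lognormal(mean=10.0, sigma=1.2, size=2_000)  # synthetic losses

# Fit two candidate curves: a lognormal (location fixed at zero) and a normal.
shape, loc, scale = stats.lognorm.fit(losses, floc=0)
mu, sigma = stats.norm.fit(losses)

# Both fits may describe the body of the data adequately, but the normal
# curve badly understates the 99.9th percentile of heavy-tailed losses.
lognorm_q = stats.lognorm.ppf(0.999, shape, loc, scale)
norm_q = stats.norm.ppf(0.999, mu, sigma)
print(f"lognormal 99.9th percentile: {lognorm_q:,.0f}")
print(f"normal    99.9th percentile: {norm_q:,.0f}")
```

If the software's available family had been limited to the normal curve here, the tail would have been understated by a large multiple even though the body of the fit looked plausible.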

So, what then is the real point of all this work? Clearly, when a loss event occurs, the firm needs to see if they can learn from the event to prevent its recurrence. However, as we shall see, the true value is in confirming the consequence of your control environment and that is where risk and control self-assessment comes in.

These loss investigations should not be a witch hunt. The investigation has many objectives, but avoiding a blame culture is important. If staff know that they will be heavily criticized for an event, then they will naturally attempt to suppress knowledge of the event, which is contrary to the implementation of a successful risk management framework.

The analysis conducted needs to consider whether the event was foreseeable and a consequence of the control environment or whether it was totally unexpected. If a control has been designed to prevent one type of event, then the occurrence of a different event that was not envisaged in designing the control cannot really be a measure of the success of that control.

For example, a firm might have a sprinkler system designed to put out a fire within a business unit. If a fire was to occur, then the fire system operating and the loss resulting is expected. That is why the fire system had been installed. However, if there is a failure of an electronic panel causing the fire system to become operational when there was no fire, then this might be termed an unexpected loss. It is a secondary loss and a consequence of your control environment.

If there is a gas explosion, the fire deterrence system will not be effective. You still have the loss, but this is an unexpected loss and should not be compared to the control. The deterrence system was not designed to prevent a gas explosion, so of course, it would not be effective were this to occur.

Indeed, would any control be effective in such a case? You might consider business continuity planning, but this is predicated on staff being available to implement the plan. Mostly, such losses need to be accepted, which is why firms have loss acceptance policies: the losses are kept under watch but simply accepted.

Once again, you will be judging the quality and consequences of the control in terms of Value at Risk compared against unitary risk appetite, that is, the risk appetite when seen at the level of the control. The loss to be compared will need to be assessed over a suitable period. Normally, 1 year is selected, and that is generally consistent with regulatory expectations.

The next challenge is to select a suitable confidence level. As soon as such a confidence level is selected, you run the risk that a loss above that level will be experienced. For this purpose, selecting a 99.9 percent confidence level, also known as the soundness standard, will generally ensure that everything is captured except matters that are the consequence of a really remote scenario occurring, one of those rare events about which we are so concerned.

One way of looking at this loss figure is to consider it as the area under a loss distribution. That means, you look at all the losses that might occur and identify the relevant probability of that type of loss occurring, adding it to the database and including it in the loss distribution. This is then fitted to a curve on which the calculation is taken with the Value at Risk being 99.9 percent of the area under the curve. For more information on this, please refer to the Mathematics of Banking and Finance, which is also published by Wiley Finance.
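As a sketch of reading that figure off a fitted curve (assuming scipy is available, with purely illustrative lognormal parameters rather than a real fit):

```python
from scipy import stats

# Hypothetical fitted lognormal severity curve (illustrative parameters).
shape, scale = 1.2, 30_000
confidence = 0.999  # the "soundness standard" confidence level

# The Value at Risk is the loss level below which 99.9 percent of the
# area under the fitted probability curve lies, i.e. its 99.9th percentile.
var_999 = stats.lognorm.ppf(confidence, s=shape, scale=scale)
print(f"99.9% Value at Risk: {var_999:,.0f}")
```

The `ppf` (percent point function) call is simply the inverse of the cumulative area under the curve, which is what "99.9 percent of the area under the curve" amounts to in practice.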
