© Raymond Pompon 2016

Raymond Pompon, IT Security Risk Control Management, 10.1007/978-1-4842-2140-2_3

3. Risk Analysis: Assets and Impacts

Raymond Pompon

(1)Seattle, Washington, USA

The real trouble with this world of ours is not that it is an unreasonable world, nor even that it is a reasonable one. The commonest kind of trouble is that it is nearly reasonable, but not quite. Life is not an illogicality; yet it is a trap for logicians. It looks just a little more mathematical and regular than it is.

—Gilbert K. Chesterton

All your decisions regarding security and meeting compliance requirements should be driven by risk. The more accurate your picture of risk, the more efficient and effective your security program will be. The next three chapters explore risk in the IT world.

Why Risk

Imagine the CFO of your company telling you that there’s an extra $50,000 in the budget that you can spend on security. How would you spend it? After that, what would you say to the CFO when he asked you to explain your choices? Many of us are technologists and we love to buy new gadgets. But that may not always be what our organization needs. The well-organized security professional has a prioritized list of what she needs to do next. Where does that list come from? Risk analysis.

Risk is about avoiding unnecessary costs while maximizing revenue. Being hacked is expensive. So is installing a bunch of firewalls that don’t do anything. Therefore, risk should drive security decisions. Before you spend any money, understand what risks you have, to which key assets, and how you plan to deal with them. Ideally, you align controls such that the most important controls are reducing the biggest risks. This also necessitates that you measure how much a control is reducing risk. The goal is that you work your way down, reducing risk after risk on the list, based on that priority. Risk analysis tells you where to focus your attention.

No, risk analysis won’t always be perfect. However, careful and calibrated analysis is provably far superior to off-the-cuff guesses. Despite the apparent fuzziness of measuring risk, the components of risk can be quantified—at least enough to do comparative rankings of best to worst problems.

You should customize your risk analysis to your specific organization, at a specific point in time. Otherwise, there is a danger of missing major risks or overspending on insignificant risks. Risk analysis is dynamic. You need to update it on a regular basis to reflect the changing world. Valuable risk analysis is realistic, actionable, and reproducible.

Risk Is Context Sensitive

Risk on its own is merely a number. A measure of risk is only meaningful when compared against another measure of risk. For some people, driving on a crowded freeway is too risky. For others, skydiving is just a fun way to spend a weekend. Saying something is “too risky” implies that the risk in question exceeds some other level of acceptable risk. Saying “it is safe” means it does not. Risk is a relative measure. What is acceptable to your organization?

Furthermore, that risk measure itself is only meaningful to a particular system and organization. Robert Courtney Jr. of IBM summarized this in his first law: “You cannot say anything interesting (i.e., significant) about the security of a system except in the context of a particular application and environment.”1

Where do we find the context for examining risk? We can look right into our own organizations. In 2007, the Institute of Internal Auditors (IIA) released the Guide to the Assessment of IT (GAIT) Principles. The GAIT states: “Technology exists to further a business goal or objective. The failure of technology to perform as intended (i.e., technology risk) may result in or contribute to a business risk—the risk that business goals and objectives are not achieved effectively and efficiently.” Furthermore, GAIT published principles pertaining to risk.

The GAIT created these principles to guide auditors and the organizations they audit. It is a good compass for navigating the confusing world of audits and compliance requirements. The GAIT principles are useful for dealing with IT risk as well as any audit-related dilemmas.

Components of Risk

IT security professionals have a specific definition of risk: the chance of something bad happening to something we care about. Risk is composed of two subcomponents: likelihood (the chance) and impact (the something bad).

Saying, “If this machine gets hacked, all our data is owned” is a description of an impact, not a risk. Risk analysis is also more informative than a list of disaster scenarios. Statements about “what could go wrong” are impacts, which do not mean anything without an attached likelihood. The key to spotting these kinds of incomplete risk statements is to look for the word could. “The sun could go supernova!” Is that a risk that you need to consider?

Someone may complain, “We’re getting tons of malware spam every day on our support e-mail box.” This is a statement about likelihood, not a fully formed risk statement. Without an impact (or a worthwhile measure of the chance of it leading to an impact), it is not a risk.

Watch out, it can get confusing, especially when non-security people say things like, “Our password lengths are too short and that’s risky!” This is an incomplete risk statement. Insufficient password length is a statement about control deficiency that informs likelihood, which is a part of risk. Risk analysis tells you more about actual threats to your organization than doing a gap analysis against which best practice controls you’ve implemented. Missing controls are vulnerabilities, which may or may not be relevant to risk. “Hackers are getting cleverer and better organized” is not a risk. It’s an evaluation of a threat actor, which is also a subcomponent of likelihood. Likelihood is the combination of a threat and a vulnerability.

Statements like “the cloud is risky” convey no useful IT security information. Risk statements must include probabilities and calculated impacts to provide you with the data that you need to make trade-offs.
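The idea that a risk statement is only usable once it carries both components can be sketched as a small data structure. This is illustrative only; the class and field names are my own, not from any standard.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class RiskStatement:
    """A risk is only complete when it pairs a likelihood with an impact."""
    description: str
    likelihood: Optional[str] = None  # e.g., "daily malware spam volume"
    impact: Optional[str] = None      # e.g., "customer database exposed"

    def is_complete(self) -> bool:
        # Both components must be present before the statement can be
        # ranked against other risks or used for trade-off decisions.
        return self.likelihood is not None and self.impact is not None

# "All our data is owned" describes only an impact:
impact_only = RiskStatement("Machine gets hacked", impact="all data exposed")
# "Tons of malware spam every day" describes only a likelihood:
likelihood_only = RiskStatement("Spam on support mailbox", likelihood="daily")

print(impact_only.is_complete())      # False
print(likelihood_only.is_complete())  # False
```

A check like this makes incomplete statements such as “the cloud is risky” stand out immediately: there is nothing to put in either field.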

Calculating Likelihood

Sometimes when IT security professionals discuss likelihood, the response they receive is “It’ll never happen.” If this comes from a C-level executive, it is often the end of the discussion. To prevent this, you need to do your homework. Calculating the chance of something happening has two primary pieces: the likelihood of a threat acting against you and the likelihood of the threat leveraging a vulnerability in your systems. Here are some examples of threats:

  • Malware

  • Malicious insider

  • Non-malicious insider

  • Burglar/office thief

  • Internet hacker

  • Earthquake

  • Pandemic

You can break these down even further into factors like the prevalence and capability of each threat, as shown in Table 3-1.

Table 3-1. Common IT Security Threats

Threat                | Capability                    | Prevalence
----------------------|-------------------------------|------------
Malware               | Untargeted, generic           | Common
Malware               | Targeted, customized          | Rare
Malicious insider     | Normal user, no rights        | Common
Malicious insider     | Sysadmin, full rights         | Very rare
Non-malicious insider | Normal user has an accident   | Common
Non-malicious insider | Sysadmin user has an accident | Rare
Burglar/office thief  | Opportunistic                 | Common
Burglar/office thief  | Targeted                      | Rare
Internet hacker       | Script kiddie                 | Very common
Internet hacker       | Cyber-criminal                | Very rare
Earthquake            | Richter 5.0 and under         | Uncommon
Earthquake            | Richter 5.1 and higher        | Very rare
Pandemic              | Debilitating                  | Uncommon
Pandemic              | Fatal                         | Very rare

Remember that threats have to be able to act on something in order to cause problems. Threats and vulnerabilities are tied together so that certain types of threats only use certain types of vulnerabilities. Here are some examples of vulnerabilities associated with malware:

  • Web sites

  • Mail servers

  • Web browsers

  • Users, social engineering

Like threats, you can examine vulnerabilities in terms of attack surface, exploitable weaknesses, and resistance capability, yielding the updated list in Table 3-2.

Table 3-2. Examples of Vulnerability Factors

Target       | Attack Surface           | Weakness                       | Resistance
-------------|--------------------------|--------------------------------|---------------------------
Web sites    | Many sites               | Moderately patched             | High resistance
Mail servers | Few sites                | Well patched                   | High resistance
Web browsers | Large number of browsers | Poorly patched                 | Moderate resistance
Users        | Social engineering       | Large number of users clicking | Low to moderate resistance

Combining these two factors, you can begin to estimate the magnitude of likelihood. You may have noticed that, in general, the threat vectors fall into two major types: natural disasters and human-caused attacks. You can assess these types of threats differently, with differing methodologies and models. These two types are covered in more detail in the next two chapters.
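One hedged way to combine the two factors is a lookup matrix rather than arithmetic on the labels. The pairings and output labels below are illustrative assumptions, not a standard scale; your organization would fill in its own matrix.

```python
# A lookup matrix (not arithmetic) that combines a threat's prevalence
# with a target's resistance into a rough likelihood label.
LIKELIHOOD_MATRIX = {
    ("very common", "low resistance"):      "high",
    ("very common", "moderate resistance"): "high",
    ("very common", "high resistance"):     "medium",
    ("common",      "low resistance"):      "high",
    ("common",      "moderate resistance"): "medium",
    ("common",      "high resistance"):     "low",
    ("rare",        "low resistance"):      "medium",
    ("rare",        "moderate resistance"): "low",
    ("rare",        "high resistance"):     "low",
}

def likelihood(threat_prevalence: str, resistance: str) -> str:
    return LIKELIHOOD_MATRIX[(threat_prevalence, resistance)]

# Script kiddies are very common, but a well-patched, firewalled mail
# server offers high resistance:
print(likelihood("very common", "high resistance"))  # medium
```

A table like this keeps every combination explicit and reviewable, which matters later when defending the analysis to decision makers.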

Calculating Impact

Going back to the GAIT principles, we know that risk flows from business objectives or the parts of the organization that make money, engage with customers, and/or do something useful. When in doubt, go back to the company’s mission. Note that unless the organization is specifically a technology company, these business objectives are likely non-technical in nature. That doesn’t mean that you won’t quickly discover a lot of IT systems that directly support those objectives.

Ultimately, impacts relate to assets. Assets can be tangibles such as money, people, facilities, and IT systems. Assets can also cover intangibles like market reputation, intellectual property, contracts, or market advantage. Assets are anything your organization considers valuable and would feel pain losing.

When looking at impact on assets, you can break these down into breaches of confidentiality, integrity, and availability. Some assets have a different magnitude of impact to different types of breaches. For example, a breach of confidentiality against a database of payment cards would likely be considered a higher magnitude impact than a loss of availability to that same database. Having the system down is bad, but leaking all the confidential data on the Internet is worse.

Just as risk analysis entails an impact analysis, an impact analysis presupposes a complete and current asset inventory. For risk analysis, asset inventory is one of the first steps. Your goal is to have a prioritized list of your most important assets.

IT Asset Inventory

Of all the types of assets, IT assets are often the most difficult to nail down. This is because it requires a lot of tedious grunt work to identify intangible things like software, data, and configurations. This means scanning with tools, interviewing people, reviewing documentation, and examining configurations. You take all of that, cross-reference the results, and repeat the scanning, interviewing, and reviewing. Why? Because IT systems are everywhere in an organization and the data is even more widespread. Wired magazine founder and visionary Kevin Kelly famously said, “Every bit of data ever produced on any computer is copied somewhere. The digital economy is thus run on a river of copies.” Tracking down data in a dynamic organization, much less deciding on its value, is extremely challenging.

Chasing down data can begin by creating a detailed map of all the systems and data in your organization. Onto this map, you can draw out critical data flows and user action chains. Sometimes commercial tools can help identify critical data, like data leak prevention solutions or network scanners. Ultimately, you want to gather as much information as you can. What is on all of your file shares and who administers them? What are the most important servers on your network? The following are some ideas.

Key systems to include in an asset analysis:

  • Domain controllers

  • Mail servers

  • Accounting servers

  • HR servers

  • Database servers

  • Sales servers (customer and prospect lists)

  • Internet-facing anything

  • Point-of-Sale systems

  • File shares

  • Anything holding confidential data of any kind

It sometimes helps to do a little archaeology and dig up old network diagrams and documentation. Glean what you can from these documents and use it to generate questions when you interview key personnel both in IT and in business services. After all the work that you’re doing to come up with your asset inventory, it’s a good idea to document how you did it. Not only will this help you (or someone else) do it the next time (and you want to do this at least annually), but some audit standards require a documented asset inventory process. Later chapters describe how to document processes, but consider this a placeholder to write down how to do it.

Asset Value Assessment

Once you have an inventory of assets, you need to value them. Asset values change over time, so you should document and revisit this process at least annually. At this stage, you are looking at the value of assets to your organization, not necessarily the value of your assets to others. A file of customer credit cards has a different value to you than it does to a Latverian hacker. The valuation of your assets to malicious actors is covered in the discussion on adversarial risk analysis in Chapter 5.

You can categorize organizational data into different classes. This process, called information classification, usually breaks things down into several major categories. The most common types are confidential (secret), internal-use only (protected), and public (shared with the world).

You would likely classify the most valuable data objects as confidential, such as customer data, usernames/passwords, HR records, payment card numbers, product designs, sales plans, and financial records. Internal information normally includes things like source code, e-mail, or internal memos. Public information is the free stuff that you give away, like sales brochures, press releases, or product demo videos. Another thing to consider in valuing information is how old it is compared to its useful life. Obsolete data may not be as important as current data. Note that these are examples only.

Some organizations place different values on different things. Some government entities may classify much more information as public, as they may be bound by transparency rules. Firms doing research in highly competitive fields, like medical drug manufacture, may classify things much higher. You need to roll up your sleeves and customize asset analysis to your organization.
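A classification scheme like the one described can be sketched as a simple lookup. The mapping below is an illustrative default drawn from the examples above; every organization should customize it, which is exactly why a default for unknown data matters.

```python
# A minimal rule-based classifier mapping data types to the three common
# classes (confidential, internal-use only, public).
CLASSIFICATION = {
    "customer data":   "confidential",
    "payment cards":   "confidential",
    "hr records":      "confidential",
    "passwords":       "confidential",
    "source code":     "internal-use only",
    "internal memos":  "internal-use only",
    "press releases":  "public",
    "sales brochures": "public",
}

def classify(data_type: str, default: str = "internal-use only") -> str:
    # Defaulting unknown data to internal-use (rather than public) is the
    # safer failure mode: under-sharing beats accidental disclosure.
    return CLASSIFICATION.get(data_type.lower(), default)

print(classify("Payment cards"))    # confidential
print(classify("vacation photos")) # internal-use only (safe default)
```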

The following are examples of valuable assets:

  • Intellectual property (source code, product designs, copyrighted material)

  • Personal information (customer credit/debit cards, Social Security numbers, customer names, driver’s license numbers, account numbers, passwords, medical information, health insurance information, vehicle registration information)

  • Usernames/e-mail addresses and passwords stored together

  • Message repositories (e-mail, chat logs)

  • Customer-facing IT services that are revenue generating (e-commerce sites, streaming media, product catalogs)

  • Customer-facing IT services that are support/non-revenue (web support forums, documentation, demo sites)

  • Critical internal IT services (chats, help desks, accounting, payroll)

  • Semi-critical internal IT services (web sites, e-mail, SharePoint)

However you estimate the significance of your assets, you should ensure that upper management agrees with your valuation. Talking to your leadership is an important part of asset valuation. Lastly, the information classification scheme should be formalized in policy and communicated to all users; this is covered in more detail in later chapters.

Assessing Impact

With a prioritized list of assets and critical business functions, you can work on the impact component of risk. IT systems can fail in many ways, and you can categorize them all. A simpler way is to look at three major aspects: confidentiality, integrity, and availability. A breach of confidentiality means that information that should have remained secret has been exposed to unauthorized parties (e.g., hackers broke into the mail server, or malware just transmitted all the payment cards to a botnet). An integrity breach refers to an unauthorized and possibly undetected change (e.g., a disgruntled employee just broke into the HR system and gave herself a big raise). A breach of availability is the unplanned functionality loss of something important (e.g., our web server got denial-of-serviced, or our server room lost cooling and shut down). The National Institute of Standards and Technology (NIST) has codified this way of doing impact classification in its standard FIPS-PUB-199.

Note

NIST FIPS-199 Standards for Security Categorization is good for asset classification. See http://csrc.nist.gov/publications/fips/fips199/FIPS-PUB-199-final.pdf .

Impacts against confidentiality, integrity, and availability are rated “low” for limited adverse effects, “moderate” for serious adverse effects, and “high” for severe or catastrophic adverse effects. It’s simple, but it’s a good place to start thinking about impact scenarios.
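FIPS-199 rates each of the three objectives separately; a common convention (the “high-water mark,” formalized in the companion FIPS-200) takes the worst rating across confidentiality, integrity, and availability as the overall category. A minimal sketch:

```python
# FIPS-199 impact levels, ordered from least to most severe.
LEVELS = ["low", "moderate", "high"]

def overall_category(confidentiality: str, integrity: str, availability: str) -> str:
    # High-water mark: the overall category is the worst of the three
    # objective-level ratings.
    return max((confidentiality, integrity, availability), key=LEVELS.index)

# A payment-card database: leaking it (confidentiality) is worse than
# it being down (availability), and the worst rating wins.
print(overall_category("high", "moderate", "low"))  # high
```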

A common impact calculation is to estimate monetary damages from a breach of customer confidential records. Some analysts assign a dollar figure per record, such as “It will cost us $150 per credit card number that falls into cyber-criminals’ hands.” However, there is some nuance in obtaining an accurate estimate of financial impact. At very low numbers of records, total costs can be extremely low but very high on a per-record basis. I have worked on some cases where the breached bank simply got on the phone and called the few dozen affected customers. There was no public disclosure or associated bad press. In very large breaches, affected parties can leverage economies of scale in dealing with millions of affected customers and lower their impact costs as well. Like many other things, it is worth the time and effort to research and brainstorm what the actual impact of losing customer records would be for your organization. A good place to start is to come up with some plausible scenarios and run them by the legal or compliance department.

If your organization provides IT services with an associated service level agreement (SLA), then the impact estimation becomes much clearer. You can calculate losses due to downtime in hard numbers, down to the second. If you don’t know the SLAs in place, check the customer contractual language for uptime commitments and penalty clauses.
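As a sketch, downtime losses under an SLA reduce to straightforward arithmetic. The 99.9% target and the credit tiers below are hypothetical stand-ins for whatever your contracts actually say.

```python
# Hypothetical SLA: 99.9% monthly uptime, with tiered service credits.
SECONDS_PER_MONTH = 30 * 24 * 3600

def sla_credit(downtime_seconds: int, monthly_fee: float) -> float:
    """Return the service credit owed for a month's downtime."""
    uptime = 1.0 - downtime_seconds / SECONDS_PER_MONTH
    if uptime >= 0.999:               # SLA met: no credit owed
        return 0.0
    elif uptime >= 0.99:              # missed 99.9% but held 99%
        return monthly_fee * 0.10     # 10% credit
    else:                             # worse than 99% uptime
        return monthly_fee * 0.25     # 25% credit

# Four hours down in a month on a $20,000/month contract:
print(sla_credit(4 * 3600, 20_000.0))  # 2000.0
```

Multiply figures like this across your customer base and you have a defensible, contract-backed availability impact number.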

Indirect Impacts

Beyond the direct impacts stemming from impairment of services to customers, successful attacks cause damage beyond immediate monetary losses. There can be a loss of user productivity due to downtime, which you can convert back to dollars with some help from business leads. Impact can also include the slowing of response time and delay of service from IT as they clean up the security incident. There is also time lost in investigation and reporting to law enforcement, management, and regulatory bodies. There is also reputation damage, which can be controversial to quantify but is nonetheless a critical impact for some industries. Depending on the nature of the attack, there can also be a loss of competitive advantage. Some breaches involve the theft of industrial secrets, which has obvious competitive impacts.

Some breaches involve public leakage attacks, as with WikiLeaks, Sony, or Ashley Madison, where malicious parties spill company secrets out into the Internet for the whole world to gawk over. Often such breaches result in interested parties like journalists and activists creating custom search tools to parse and scan through the leaked data to magnify the exposure of the breach. Even small organizations can experience ill effects from this kind of breach. Consider how your customers and executives would react if the contents of all their e-mail going back for years were made available to Internet search engines. In fact, some customers may find a vendor’s e-mail system has compromised their own information. Think of what may be sitting in your lawyer’s e-mail inbox.

Indirect impacts can also have a technical dimension that drives up IT resource and user overhead. These can include data corruption (requiring restoration from backup, assuming there is backup), impromptu software upgrades, loss of cryptographic keys or passwords (many breaches force everyone in the company to reset their passwords), increased regulatory scrutiny, investigative resources, and performance degradation (resource exhaustion from attacks, causing general slowness).

Compliance Impacts

Some of the worst kinds of impacts are regulatory. Compliance regimes like PCI and HIPAA carry hefty monetary penalties for security breaches. One of the highest fines for security non-compliance comes from the North American Electric Reliability Corporation (NERC), which can fine a utility company up to $1 million per day for security non-compliance. On top of direct fines, organizations victimized by breaches can also be subject to class-action lawsuits from victims for failing to protect data. In addition, the Federal Trade Commission (FTC) has the authority to sue for “unfair or deceptive business practices” when organizations “failed to adequately protect” customer data. Even if an organization successfully escapes a legal judgment, there could still be legal costs involved.

In the end, impact goes far beyond records lost. Impact calculations can be simple or complex, depending on your needs. The most important thing is to think things through and tie the results to reality.

Qualitative vs. Quantitative

When you put together your risk analysis, one of the big decisions to make is whether your analysis is qualitative or quantitative. Qualitative risk analysis does not use tangible values, but instead uses relative measures like high, medium, and low. This is how FIPS-199 is structured. Quantitative analysis uses numerical values, such as dollars, and can entail a bit of work.

Qualitative Analysis

Qualitative risk assessment is subjective, usually based on subject matter experts rating the various factors on a scale. You can represent this scale as anything measurable, from using words like “high/medium/low” to temperatures or colors, and even numerals. Note that I said numerals, not numbers. I do this to keep from making a cognitive error. A numeral is the name or symbol for the number. Think Roman numerals. You should not be performing mathematical operations with qualitative risk factors. The expression “Risk = Likelihood × Impact” is not an actual mathematical formula, but a description of the relationship between likelihood and impact. One way to keep yourself honest when you do mathematical operations is to remember what you learned in middle school: keep track of your units. Just as you would never say “3 feet × 4 degrees = 12 feet-degrees,” you shouldn’t say “Likelihood 3 × Impact 4 = Risk 12.”

Qualitative risk assessment gets a bad rap as being too vague or misleading. It’s understandable when compared to quantitative analysis, where you say things like “There is a 20% chance in the next year of having six to ten malware incidents that cost the organization $900 apiece.” It’s not as clear when you say, “The risk of malware infection is medium-high.” However, remember that humans make decisions based on this information, and ultimately our brains convert everything to a subjective score anyway. The weakness of a qualitative analysis is that two different people may look at the same data and one may say the risk is high while the other says it is medium. The opposite mistake can also happen. Just because someone filled in hard numbers like “20%” and “$900” doesn’t mean those numbers weren’t guesstimates pulled out of a hat, making them just as subjective as saying “medium-low.”

Clarifying Your Qualitative

So how do we avoid confusion? One good way is to define your qualitative labels and put them in two dimensions. Let’s start with this simple but unclear risk list in Table 3-3.

Table 3-3. Simple Risk List

Risk                     | Rating
-------------------------|----------
Malware on LAN           | Not bad
CEO phished              | Kinda bad
Insider                  | Bad
Stolen backup tape       | Kinda bad
Stolen laptop            | Not bad
Web site defaced         | Kinda bad
E-mail hacked            | Kinda bad
Colocation facility fire | Bad
Wi-Fi hacked             | Kinda bad
Wire transfer fraud      | Bad

You might guess that a risk is “kinda bad” if it fell into any of three different categories:

  1. High impact and low likelihood

  2. Low impact and high likelihood

  3. Medium impact and medium likelihood

All three of these situations may be vastly different. High impact and low likelihood could describe a pandemic outbreak. Low impact and high likelihood could mean a flood of spam. Medium/medium could mean a malware-infected laptop. In a single-dimensional chart, all of these risks aggregate out to the same level. With respect to a decision process, it is hard to differentiate between these levels of risk. Remember, the goal of risk analysis is to clarify and reveal information for good judgment.

Heat Maps

I recommend presenting qualitative risk analysis as a heat map so that these two dimensions are visible. Figure 3-1 is a simple graphical way to show the two risk components. Here the one-dimensional risk labels are spread out into a heat map. You can still see the worst risk in the upper-right quadrant, and the three “kinda bads” as a diagonal band across.

Figure 3-1. Risk heat map

With this format, you can now place risks directly into the graph axes at their proper magnitudes. The darkening effect represents “badness” and is optional. Figure 3-2 shows the same risk list expanded with qualitative numerals instead of badness labels, which is easier to graph. You can see the difference in clarity.

Figure 3-2 shows multiple risks on a single qualitative table with proper labelling. You now have a tool that allows decision makers to rank and discuss risks in a relative manner. The goal here is to make decisions on which risks are treated with controls, avoided by stopping the activity, or accepted by taking on the risk. Even with no hard numbers, it is obvious in Figure 3-2 which items are considered more risky and why. This type of diagram can also serve as a discussion aid, so that decision makers can weigh in and say, “I feel the risk of malicious insider hacking is not that high an impact. Move it down.” Now you have a valid and tangible starting point to discuss the factors involved in that particular risk.

Figure 3-2. Simple two-dimensional qualitative risk map
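A two-dimensional map like this can be sketched without any plotting library by printing risks onto a text grid. The 1-to-5 coordinates assigned to each risk below are illustrative guesses, not values taken from the figures.

```python
# A text-only risk heat map: rows are impact (highest at the top),
# columns are likelihood, and each risk is plotted by its first letter.
risks = {
    "Wire transfer fraud": (2, 5),  # (likelihood, impact), both 1..5
    "Insider":             (3, 4),
    "CEO phished":         (3, 3),
    "Malware on LAN":      (4, 2),
    "Stolen laptop":       (4, 1),
}

def heat_map(risks: dict, size: int = 5) -> str:
    grid = [["." for _ in range(size)] for _ in range(size)]
    for name, (likelihood, impact) in risks.items():
        # Row 0 is the top of the chart, i.e., the highest impact.
        grid[size - impact][likelihood - 1] = name[0]
    rows = [" ".join(row) for row in grid]
    return "\n".join(["Impact (top = high) vs. likelihood (right = high)"] + rows)

print(heat_map(risks))
```

When a decision maker says “move that risk down,” you change one coordinate and reprint, which keeps the discussion concrete.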

Explaining Your Qualitative Ratings

Another way to reduce misunderstanding with qualitative models is to define your terms. You should do this at the outset as you are polling your subject matter experts and filling in the values. You can present this scale alongside the analysis so that it is clear to everyone involved what “moderate risk” actually means. Take a look at Table 3-4 and Table 3-5 for sample qualitative rating scales to include with your risk analysis. In Table 3-5, you can see that undetectable alterations to integrity are higher in impact than detectable ones. This is because if there are potential undetectable alterations, all the records are now untrustworthy.

Table 3-4. Example of Clarifying Likelihood

Rating        | Meaning
--------------|--------------------------------------------------------------
Very Unlikely | Expected to occur less than once every 5 years
Remote        | Expected to occur between once per year and once every 5 years
Occasional    | Expected to occur between 1 and 10 times per year
Frequent      | Expected to occur more than 10 times per year

Table 3-5. Example of Clarifying Impact

Rating   | Confidentiality Impact | Integrity Impact | Availability Impact
---------|------------------------|------------------|--------------------
Minor    | Under 10 records of confidential data exposed on an internal system, but no proof of exploitation | Under 10 transaction records altered in a noticeable, correctable manner | Several users offline for 1 to 5 business days; customer-facing service offline up to an hour
Major    | Under 10 records of confidential data exposed internally to several unauthorized employees | More than 10 but fewer than 100 records altered in a noticeable, correctable manner | Customer-facing service or data offline for more than an hour but less than a business day
Critical | Under 10 records of confidential data exposed externally, or more than 10 records disclosed internally | More than 100 records altered in a noticeable, correctable manner, or records altered in an undetectable manner | Customer-facing service or data offline for more than a business day
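Definitions like the likelihood scale in Table 3-4 can also be enforced in tooling, so that ratings stay tied to their written meanings rather than to gut feel. The boundary handling below is one illustrative reading of that scale.

```python
# Translate an observed or estimated annual event frequency into the
# likelihood labels defined in Table 3-4.
def likelihood_rating(events_per_year: float) -> str:
    if events_per_year > 10:
        return "Frequent"       # more than 10 times per year
    elif events_per_year >= 1:
        return "Occasional"     # 1 to 10 times per year
    elif events_per_year >= 0.2:
        return "Remote"         # once a year down to once every 5 years
    else:
        return "Very Unlikely"  # less than once every 5 years

print(likelihood_rating(24))   # Frequent (about twice a month)
print(likelihood_rating(0.1))  # Very Unlikely (about once a decade)
```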

Quantitative Analysis

Quantitative risk analysis uses real numbers and data in place of subjective measures. Remember, in the end it’s likely that the security steering committee will make subjective judgements on how to manage these risks, but with a quantitative analysis, they have the best possible data in hand to make decisions. What does this look like? Let’s take the earlier table of vulnerability factors and update it with real data, as shown in Table 3-6.

Table 3-6. An Example of Analysis Using Quantitative Values

Target       | Attack Surface     | Weakness                     | Resistance
-------------|--------------------|------------------------------|---------------------
Web sites    | 10 sites           | Avg. 2 patches missing each  | Firewalled
Mail servers | 2 sites            | No patches missing           | Firewalled
Web browsers | 400 browsers       | Avg. 14 patches missing each | Host-based antivirus
Users        | Social engineering | 350 users                    | 4% failed last phish pen-test training

As you can see, finding real data isn’t that hard. If you’ve done a good asset analysis, many of these things should be at your fingertips. Your asset and impact analysis can also feed in real numbers in terms of monetary loss. You can find more data in industry source web sites and intelligence reports. The specifics of doing quantitative analysis in risk modeling are covered in more detail in the next two chapters.

External Sources for Quantitative Data

You can find more data in government and industry security reports. For example, California passed S.B. 1386, which requires that any organization suffering an exposure of more than 500 California residents’ data issue a notification to those persons. The organization also has to disclose the breach to the state’s Office of the Attorney General, which now publishes this information on its web site at https://oag.ca.gov/ecrime/databreach/reporting , which is very useful for working out likely risk scenarios.

Figure 3-3. Breach data from the California Attorney General for 2014

This data can give you a good idea of the likelihood of various risk scenarios. The site also publishes many more details on the breaches, including the actual notification letters sent out to victims.

Chronology of Data Breaches

Another free source of breach data is the Privacy Rights Clearinghouse ( https://www.privacyrights.org/data-breach/new ), a California nonprofit organization dedicated to educating people about privacy. They have a database going back to 2005 on reported data breaches found in news reports and media. Unlike California’s Office of the Attorney General database, this database is a bit vaguer because of the limitations of public reporting. However, there is high-level data on thousands of breach cases stretching back a decade to examine. Their data is also automatically broken down into useful categories and available for direct download into a spreadsheet or statistical tool. The confusing part is that they use different categories than the California Office of the Attorney General.

Verizon Data Breach Investigations Report

The biggest, most widely read source of data breach data is Verizon’s annual Data Breach Investigations Report ( www.verizonenterprise.com/DBIR/ ). Verizon began by analyzing and reporting on the incident response reports from their consulting team. Over the past decade, the report has expanded to include data from a wide variety of sources, including law enforcement, United States Computer Emergency Readiness Team (US-CERT), several Information Sharing and Analysis Centers (ISACs), and many other security companies. Contemporary reports include data on tens of thousands of incidents all over the world.

VERIS Community Database

There is also a community version of the Verizon breach report, called the VERIS Community Database (VCDB), which uses the same reporting and analysis structure. The VCDB ( http://veriscommunity.net/vcdb.html ) contains data submitted by security professionals like you. You can download the database for your own analysis.

Annualized Loss Expectancy

One of the simplest forms of quantitative risk analysis is annualized loss expectancy (ALE). This is a way of expressing the predicted dollar loss for a particular asset from a particular risk over a one-year period. The formula for ALE is single-loss expectancy (SLE) multiplied by the annualized rate of occurrence (ARO). SLE is a calculation based on the asset value and the exposure factor (EF). The exposure factor is a subjective estimate of the percentage of the asset’s value that would be lost in an incident, so an EF of 0.5 means that the asset has lost half its value. You can think of ARO as the likelihood side of risk and SLE as the impact side. Therefore, if the likelihood of losing a backup tape with customer data is 10% per year and the calculated impact is $50,000, then the ALE is $5,000.

The utility of ALE is that you can calculate a simple monetary “cost” of the risk. You can compare tape encryption software against the expected annual loss of five grand. If the encryption system cost $10,000, then you show that it will pay for itself in two years. This is very useful for cost-justifying new control purchases. The hard part is coming up with plausible, defendable numbers for the SLE. As stated before, this is covered more in later chapters.
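The ALE chain can be written out in a few lines. The asset value and exposure factor below are illustrative figures chosen to reproduce the backup-tape example from the text.

```python
# ALE: exposure factor (EF) scales asset value into a single-loss
# expectancy (SLE), and the annualized rate of occurrence (ARO) turns
# that into an expected yearly loss.
def ale(asset_value: float, exposure_factor: float, aro: float) -> float:
    sle = asset_value * exposure_factor  # loss per incident
    return sle * aro                     # expected loss per year

# Backup tape example: a $100,000 asset, half its value lost per
# incident (EF 0.5), expected once a decade (ARO 0.1):
annual_loss = ale(100_000, 0.5, 0.1)
print(annual_loss)  # 5000.0

# A $10,000 encryption control breaks even against this risk in:
print(10_000 / annual_loss, "years")  # 2.0 years
```

The break-even calculation at the end is exactly the cost-justification argument described above; everything hinges on how defensible the SLE inputs are.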

Formalizing Your Risk Process

All major compliance requirements require a foundational risk analysis based on an industry standard model. Without a model for guidance, a risk analysis can become distorted by individual biases and selective perception. This is especially true regarding cyber-risk, which is complicated and non-intuitive compared to physical risks. Here are some formal IT risk models to choose from:

  • NIST Special Publication 800-30, Guide for Conducting Risk Assessments

  • ISO/IEC 27005, Information Security Risk Management

  • OCTAVE (Operationally Critical Threat, Asset, and Vulnerability Evaluation)

  • FAIR (Factor Analysis of Information Risk)

Whatever risk modeling method you choose (and more are introduced in the next two chapters), it is important that you document the risk assessment process. The process should be systematic, meaning that someone else can follow your process and come up with the same results. Many audit requirements, including PCI and ISO 27001, require that organizations have an annual formal risk assessment process that identifies threats and vulnerabilities, with documented results. You should also redo risk assessments whenever there is a major infrastructure change.
