1.3 Identifying Risks

To identify risks, we look at threat agents and attacks: who attacks our assets and how the attacks might take place. We might also have information about vulnerabilities and defenses, but for now we use it only as extra information to help identify plausible attacks. For example, we might keep valuable articles in a room, like an office or storage locker. We need to identify threat agents and attacks that affect the assets stored in that room.

Based on news reports, stories, and personal experience, most of us should be able to identify threat agents and attacks. Here are examples of threat agents, attacks, and risks associated with a store’s computer equipment:

  • ■   Threat agents—thieves and vandals

  • ■   Attacks—theft, destruction, or damage associated with a break-in

  • ■   Risks—computer equipment stolen or damaged by thieves or vandals

If the assets are really valuable or an attack can cause serious damage, then it’s worthwhile to list every possible threat agent and attack. For example, banks analyze the risk of attacks through vault walls because their assets are quite valuable and portable. We decide whether to consider a particular type of attack, like those through walls, by looking at the rewards the attacker reaps with a difficult or unusual attack. The more difficult attacks may be less likely, so we balance the likelihood against the potential loss.

We develop the list of risks in three steps. First, we identify threat agents by identifying types of people who might want to attack our assets. Second, we identify the types of attacks threat agents might perform. Third, we build a risk matrix to identify the attacks on specific assets.

Example: Risks to Alice’s Arts

The rest of this chapter will use Alice’s Arts as an example. We will perform the rest of PRMF Step A using Alice’s list of assets developed in Section 1.2.2. The remainder of this section develops Alice’s list of risks.

1.3.1 Threat Agents

We start our risk assessment by asking, “Who threatens our assets?” We might not be able to identify specific individuals, but we can usually identify categories of people. Those are our threat agents.

Natural disasters represent a well-known, nonhuman threat agent. Tornados, hurricanes, and other major storms can damage communications and power infrastructures. Major storms, like Katrina in 2005 and Sandy in 2012, destroyed communities and small businesses.

Other threat agents arise because some people act maliciously. We categorize such agents according to their likely acts and motivation: What might they do and why? For example, a thief will physically steal things and is probably motivated by a need for cash or possessions.

If our assets truly face no human threat agents, then our risk assessment is simple: We face nothing but natural disasters. In practice, even the smallest computer today is a target. Botnet operators happily collect ancient, underpowered desktops running obsolete software because they can exploit those computers.

Alice and her store face risks from specific threat agents. Someone might want to steal her computer or steal money from her online accounts. A competitor might want to interfere with her publicity on social media. Shoplifters and burglars also want to steal from her shop. Even her clerks might be tempted to steal. Petty thefts could put her out of business if they happen often enough. Door locks and a loud alarm may help protect her store when closed. Door locks on Alice’s office area and her storage area may discourage her less-trustworthy clerks.

Identifying Threat Agents

We identify Alice’s threat agents as a first step to identifying her risks. To identify human threat agents, we think about people in terms of their interest in attacking our assets (stealing, abusing, damaging, and so on). We don’t try to profile a particular person like Jesse James or John Dillinger. Instead, we try to capture a particular motivation. Criminal threat agents won’t hesitate to do us harm for their own benefit. Our friends and family, however, may be threat agents even if they aren’t intending to harm us. A friend could accidentally harm us greatly when using an unprotected computer.

A threat agent list starts with classic threat agents, like thieves or vandals, and grows to incorporate stories we’ve heard in the news, from other people, or other sources. Here are features of a human threat agent:

  • ■   Driven by a specific mission or specific goals—The mission may be to make money, make news, or achieve some ideological victory. Revolutionary threat agents may seek a government overthrow, and your enterprise might be one of their perceived stepping-stones.

  • ■   Interested in your assets or activities—If your assets or activities can be exploited to forward the threat agent’s mission or goals, then you are a possible target. Any computer on the internet represents a potential asset in the cyber underground, either to operate as part of a botnet or to be mined for sensitive data, like bank account passwords. A hotel, restaurant, or shop could be a terrorist target if it serves a person associated with, or who resides near, an important objective.

  • ■   Motivated to benefit at your expense—When we look at lesser threat agents like friends, family members, colleagues, and employees, their trustworthiness is balanced against their motivation to benefit at your expense. When we look at more significant threat agents, like criminal organizations, their level of motivation determines whether or not agents stop short of causing serious damage or even taking human lives.

  • ■   Established modus operandi (MO) at some level—Different threat agents have different resources in terms of money, mobility, equipment, and manpower. This leads them to focus on particular types of attacks. Bank thieves and politically motivated terrorists may use guns and bombs, while botnet builders rely primarily on network-based attacks. Financially motivated agents always work in conjunction with a money stream.

  • ■   Makes strategic decisions based on costs and benefits—The difference between a benign and serious threat agent is whether or not the agent can leverage your enterprise in a practical manner. Innovative attacks are, in fact, rare; most threat agents prefer tried-and-true methods.

Here are categories of typical threat agents at the time of this text’s publication:

  • ■   Individual and petty criminals—often individuals, and occasionally small groups, whose MO opportunistically targets vulnerable individuals or assets.

    • Petty thieves

    • Petty vandals

    • Con artists who prey on individuals or small businesses

    • Identity thieves who prey on individuals

    • Mass murderers

  • ■   Criminal organizations—organized groups that perform coordinated criminal or terrorist activities.

    • Geographically limited examples include gangs or organized crime in a city or neighborhood.

    • Larger-scale organizations include drug cartels.

    • Terrorist groups may include groups like Al Qaeda that execute strategic gestures or localized militant groups like Al-Shabaab.

Cyber-criminal teams may reside in multiple countries and rely on others to convert stolen information into cash. Different teams may specialize in particular aspects of the crime. Here are examples:

  • ■   Collecting exploits and incorporating them into malware to use in a criminal activity—Malware authors often rent their creations to other teams for actual use. The alleged developer of the “Blackhole” exploit kit, a 27-year-old Russian, earned as much as $50,000 a month from renting the kit to other cyber criminals.

  • ■   Collecting databases of user information and offering them for sale—Underground marketplaces sell such information. After the 2013 data breach at Target Corporation, batches of stolen cards sold on an underground market for as much as $135 a card.

  • ■   Botnet herders—The computers in a given botnet are typically subverted by similar malware, and each net is generally controlled by a particular individual or team: the botnet herder. A herder may offer access to the botnet to other cyber criminals through underground marketplaces.

  • ■   Money-mule networks—When an attack manages to transfer money to a bank account somewhere, the money must be quickly converted to cash before the transfer can be reversed and the account closed. The money mule is a person who receives the money transfer, converts it to cash, and wires it to an accomplice in another country.

While some criminal organizations are geographically limited, cybercrime organizations often span multiple countries and specialize in particular parts of the operation, like exploit development, botnet management, data collection, or conversion to cash through money mules. The list of threat agent categories continues:

  • ■   Hacktivists—These threat agents are usually a loosely organized source of widespread attacks. Selected targets often reflect outrage at a political, social, or cultural phenomenon. The hacktivist group Anonymous, for example, has attacked internet sites associated with copyright enforcement, fringe pornography, and Wall Street.

  • ■   Nation-level competitors—These threat agents forward the interests of particular nations. They can include:

    • Intelligence agents: the traditional “spies,” people who collect information on behalf of competing countries.

    • Technical collectors: people and organizations who use remote sensing, surveillance, and intercepted communications to spy on other countries. The National Security Agency (NSA) intercepts communications on behalf of the U.S. government.

    • Military actors: groups who use military force on behalf of a nation.

  • ■   Business and personal associates—People often interact with a broad range of others through their normal activities, and some of these people may be threat agents:

    • Competitors: people who are competing against us for some limited resource: a job, sales prospects, or other things.

    • Suite/room/family/housemates: people who share our living space.

    • Malicious acquaintances: people we know who would be willing to do us harm for their own benefit. This could include people who share our living space.

    • Maintenance crew: people who have physical access to our private living or working space. They might not usually intend to do us harm, but they might do so if the benefit is large enough and the risk is small enough.

    • Administrators: people who have administrative access to our computing resources. This may be limited to access to network resources or may include administrative access to our computer. As with maintenance people, administrators are rarely motivated to do harm to anyone in particular, but they might be tempted by easy prey.

  • ■   Natural threats—Actions of the natural environment may cause damage or loss, like severe weather or earthquakes.

A threat agent’s level of motivation suggests the degree to which the agent is willing to do damage to achieve its goals. We use a six-level scale that includes the risk levels in NIST’s RMF:

  • ■   Unmotivated—not motivated to do harm.

  • ■   Scant motivation—limited skills and mild motivation; may exploit opportunities like unsecured doors, logged-in computers, or written-down passwords.

  • ■   Stealth motivation—skilled and motivated to exploit the system, but not to cause significant, visible damage. For example, a dishonest employee might steal small items from his or her employer’s inventory if the item is hard to count accurately.

  • ■   Low motivation—will do harm that causes limited damage to assets. For example, a casual shoplifter rarely steals enough individually to put a store out of business.

  • ■   Moderate motivation—will do harm that causes significant damage to an enterprise or its assets, or injures a person, but does not cause critical injury. For example, a professional shoplifter could seriously hurt a store financially but would not threaten a clerk with a weapon.

  • ■   High motivation—will cause significant disruptions and even critical injuries to people to achieve objectives. This includes armed robbers, highly motivated terrorists, and suicide bombers.
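Because the six levels are ordered from least to most dangerous, they lend themselves to direct comparison. Here is a minimal sketch in Python; the class and constant names are our own, not drawn from NIST or this text:

```python
from enum import IntEnum

class Motivation(IntEnum):
    """Six-level threat agent motivation scale, least to most dangerous."""
    UNMOTIVATED = 0  # not motivated to do harm
    SCANT = 1        # exploits easy opportunities only
    STEALTH = 2      # skilled, but avoids visible damage
    LOW = 3          # causes limited damage to assets
    MODERATE = 4     # significant damage, but no critical injuries
    HIGH = 5         # will cause critical injuries to achieve objectives

# Ordering lets us compare agents directly:
casual_shoplifter = Motivation.LOW
armed_robber = Motivation.HIGH
assert armed_robber > casual_shoplifter
```

Using an ordered type makes it easy to filter a threat agent list, for example, to keep only agents at or above a chosen motivation level.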

In her quiet neighborhood, Alice is unlikely to encounter armed robbers, although any storekeeper might want to plan for that threat. Casual thieves and professional shoplifters pose the more likely threat. Depending on her situation, she may face threats with stealth motivation.

Profiling a Threat Agent

An accurate risk assessment requires up-to-date profiles of the relevant threat agents. The threat agents Alice probably faces might not change enough to affect her security strategy, but threat agents working against larger enterprises are always developing new techniques. Threat agent profiling becomes part of the risk assessment process.

We profile threat agents by studying their recent behavior. The threat agents worth studying have earned coverage in the public media. Large-scale news outlets cover some threat agents, while others are covered by industry-specific or technology-specific bloggers. Brian Krebs is one of several bloggers who cover computer security incidents, especially those involving malware or multinational cybercrime organizations. The Internet Storm Center, Peter Neumann’s Risks List, and Bruce Schneier’s Cryptogram also serve as important resources for recent reports of cybersecurity incidents.

We use these sources to develop our profile of specific threat agents. A well-developed profile relies on data reported by one or more reliable sources. The profile describes four major elements of the threat agent:

  1. Goals: Does the threat agent seek news coverage, financial gain, an ideological victory, or government overthrow? Financial goals are often achieved with less impact on the target than other goals.

  2. Typical MO: Does the threat agent use physical attacks or cyberattacks; does the agent coordinate attacks somehow; does the agent kill people; will the agent be sacrificed? Modes of operation should also illustrate the leadership and command structure of the organization.

  3. Level of motivation: How much harm are agents willing to do to achieve their goals?

  4. Capabilities and logistical constraints: How do financial costs, number of people, geographical limitations, operational complexity, or a need for special material or training affect the agent’s choice of activities? Governmental actors and well-funded multinational enterprises may have significant capabilities.

The risk assessment for a large-scale system may require detailed profiles of the threat agents. Available time and resources may be the only limit on the content and level of detail. In other cases, the profiles simply highlight the threat agents’ principal features. Organize a basic profile of a threat agent as follows:

  • ■   Title—Use a recognizable phrase to identify the threat agent.

  • ■   Overview—In one to two paragraphs, identify the threat agent in terms of newsworthy events and activities associated with that agent. The overview should identify specific examples of actions by that threat agent.

  • ■   Goals—In one paragraph, describe the goals of the threat agent. Illustrate this with specific examples from reports of the agent’s activities.

  • ■   Mode of operation—In one paragraph, describe how the threat agent seems to choose targets and how operations are led. Add an additional paragraph to describe each specific type of operation the agent has used. Illustrate with specific examples from reliable sources.

  • ■   Level of motivation—Use the six-level scale described earlier to specify the level of motivation. Write a paragraph that gives examples of meeting that level of motivation. If the agent is motivated at a less than “high” level, try to show the agent avoiding higher degrees of damage.

  • ■   Capabilities and constraints—Describe the threat agent’s logistical capabilities in one to three paragraphs. Use specific examples from reports to illustrate these capabilities.

  • ■   References—Include the sources used to produce this profile. We want the highest-quality information we can find.

The best sources are primary sources: firsthand accounts of the author’s own observations and experiences. A story from a large news outlet might say, “Cybersecurity experts uncovered a flaw in SSL.” This is not a primary source. To use the primary source, we must find out who the cybersecurity experts were and what they actually said.

In practice, we will use a lot of secondary sources. When a blogger describes a visit to a malicious website, we rely on the blogger’s reputation for describing such things accurately. Malicious websites rarely persist for long, and responsible bloggers avoid publishing malware links. Vendor documentation is an authoritative source, even if it isn’t a primary source.

A report in a newsletter or news outlet will usually identify its sources. When we track down such a source, we should find a report by someone who was closer to the actual event. Proximity should reduce errors in reporting. Wikipedia, on the other hand, makes a virtue of never being a primary source. At best, we might find decent sources in the article’s list of references. See BOX 1.2.

1.3.2 Potential Attacks

We review threat agents to confirm that our assets are at risk and to identify their modes of operation. Next, we identify potential attacks that could use those modes of operation. We start with broad categories of attacks. Our basic categories arise from the CIA properties: What attacks arise when those properties fail?

  • ■   Disclosure—data that should be kept confidential is disclosed.

  • ■   Subversion—software has been damaged, or at least modified, injuring system behavior.

  • ■   Denial of service—the use of computing data or services is lost temporarily or permanently, without damage to the physical hardware.

System subversion captures the essence of an integrity attack, but we also identify two other integrity attacks: forgery and masquerade. In forgery, the attacker constructs or modifies a message that directs the computer’s behavior. In a masquerade, the attacker takes on the identity of a legitimate computer user, and the computer treats the attacker’s behavior as if performed by the user.

Denial of service often arises from flooding attacks like those practiced by Anonymous against its targets. The system usually returns to normal service after a typical attack ends—not always, though. Fax attacks on the Church of Scientology intentionally used up fax paper and ink to make the machines unusable for a longer period. There are also “lockout” attacks in which the attacker triggers an intrusion defense that also blocks access by the target’s system administrators.

Physical theft or damage poses the ultimate DOS attack. Its physical nature gives it different properties from cyber-based DOS attacks. It requires different defenses. We treat it as a separate type of attack. BOX 1.3 summarizes our list of general attacks.

These general attacks usually aim at particular assets, while attack vectors exploit particular vulnerabilities. We also distinguish between passive and active attacks. A passive attack simply collects information without modifying the cyber system under attack. Disclosure is the classic passive attack. An active attack either injects new information into the system or modifies information already there.

The Attack Matrix

In a simple risk analysis, we can use the six general attack types and construct the risk matrix, described in the next section. If we need a more detailed analysis, we build an attack matrix. This yields a more specific set of attacks tied to our particular threat agents. The matrix lists threat agents along one axis and the general types of attacks on the other axis. For each agent and general attack, we try to identify more specific attacks that apply to our cyber assets.

Here is a list of Alice’s threat agents, focusing on cyber threats:

  • ■   Shoplifters—Some people will visit her store and prefer to steal merchandise instead of paying for it. These are often crimes of opportunity and aren’t intended to get Alice’s attention.

  • ■   Malicious employees—Most employees she hires will probably be honest, but Alice has to anticipate the worst. Like shoplifting, employee crimes are often crimes of opportunity and rely on not being detected.

  • ■   Thieves—Unlike shoplifters, these crimes won’t be overlooked. A thief might be an armed robber or a burglar. In either case, Alice will know that a theft has occurred.

  • ■   Identity thieves—For some thieves, it’s enough to steal a legitimate name and associated identity data like birth date or Social Security number, even if Alice herself doesn’t have any money.

  • ■   Botnet operators—Every computer has some value to a botnet, so attackers are going to try to collect Alice’s computers into their botnet.

We construct Alice’s attack matrix below. We compare the generic list of attacks in Box 1.3 to Alice’s threat agents, and we produce a more specific list of possible attacks. We use a table to ensure that we cover all possibilities. On one axis we list the generic attacks, and on the other we list the threat agents. Then we fill in the plausible cyberattacks (TABLE 1.1).

TABLE 1.1 Attack Matrix for Alice’s Arts


To fill in the table, we look at what motivates the threat agents, compare that against types of attacks, and identify specific kinds of attacks those agents might perform. This yields 12 specific attacks on Alice’s store. The following is a brief description of each:

  1. Burglary: Someone steals Alice’s laptop, POS terminal, or other computing items, like program disks or her USB drive.

  2. Shoplifting: A customer or malicious employee steals from Alice’s store without being detected, usually stealing merchandise. Portable computing equipment could also be stolen.

  3. Robbing the till: A malicious employee steals money from the cash drawer used to collect customer payments; this cash drawer is also called “the till.”

  4. Embezzlement: A malicious employee takes advantage of Alice’s trust to steal in some other way. For example, an employee might produce a bogus bill for purchased supplies; when Alice pays it, the money goes to the employee instead.

  5. Armed robbery: The thief confronts Alice or an employee with a weapon and demands that she hand over cash or other store assets.

  6. Social forgery: Someone sends false messages and makes false electronic statements masquerading as Alice, and those statements cause her personal damage or embarrassment.

  7. Password theft: Someone steals Alice’s password. Alice realizes the theft took place before the password is misused.

  8. Bogus purchase: Someone poses as Alice to make a purchase, using a credit card or other electronic financial instrument.

  9. Identity theft: Someone poses as Alice in one or more major financial transactions, for example, applying for a loan or credit card.

  10. Back door: Someone installs backdoor software in Alice’s computer so the computer may take part in a botnet.

  11. Computer crash: Someone installs software in Alice’s computer that causes the computer to crash.

  12. Files lost: Someone removes, erases, or otherwise damages some of Alice’s files, making those files unusable. This includes physical loss of the device storing the files.
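The matrix-filling step above can be sketched in code. This is an illustrative fragment, not a complete transcription of Table 1.1: It maps a few (threat agent, generic attack) pairs to specific attacks and then collects the deduplicated attack list.

```python
# Generic attacks from Box 1.3 and Alice's threat agents from the text.
generic_attacks = ["physical theft", "denial of service", "subversion",
                   "masquerade", "disclosure", "forgery"]
threat_agents = ["shoplifters", "malicious employees", "thieves",
                 "identity thieves", "botnet operators"]

# Each filled-in cell maps (agent, generic attack) to specific attacks.
# Only a sample of cells is shown here.
attack_matrix = {
    ("thieves", "physical theft"): ["burglary", "armed robbery"],
    ("malicious employees", "physical theft"): ["shoplifting", "robbing the till"],
    ("malicious employees", "masquerade"): ["embezzlement"],
    ("identity thieves", "masquerade"): ["bogus purchase", "identity theft"],
    ("botnet operators", "subversion"): ["back door"],
}

def specific_attacks(matrix):
    """Collect the deduplicated list of specific attacks from the matrix."""
    found = []
    for attacks in matrix.values():
        for attack in attacks:
            if attack not in found:
                found.append(attack)
    return found

print(specific_attacks(attack_matrix))
```

Filling in every cell for all five agents and six generic attacks would reproduce the full list of 12 specific attacks.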

The list of potential attacks must be realistic and relevant. There should be documented examples of each attack, and those examples should indicate the type of damage that occurs. We focus on relevant attacks by building our list from Alice’s threat agents.

Remember: The attack matrix is optional. We need to create a reasonable list of attacks. We can often use the generic attacks listed in Box 1.3 instead. The attack matrix is useful if we face rapidly changing threat agents or if we need a more detailed analysis.

1.3.3 Risk Matrix

A risk is a potential attack against an asset. A risk carries a likelihood of occurring and a cost if it does occur. We identify risks by combining the list of assets and the list of attacks. We use the attacks identified earlier in this chapter to construct this list. This may be the generic list of six attacks from Box 1.3, or it may be the output of the attack matrix in Table 1.1. The risk matrix is a table to associate specific attacks with assets. We mark each attack that applies to each asset.

The risk matrix helps us focus on relevant attacks and eliminate attacks that don’t apply to our assets. For example, physical theft will apply to physical computing devices and to software that is installed on specific devices. Fraudulent transactions will apply to resources associated with those transactions, like the server providing the defrauded service, or the bank supporting the fraudulent financial transaction.

The risk matrix lets us look for patterns among assets and attacks. We combine assets or attacks when they apply to a single, measurable risk. Earlier in this chapter we identified a list of nine cyber assets used in Alice’s Arts. We will combine some of them to simplify the matrix even further. This yields a list of six assets:

  1. Computer hardware and software

  2. Software recovery disks

  3. Computer customization

  4. Spreadsheets

  5. Online business and credentials

  6. Social media and credentials

The list distinguishes between these assets because they pose different security concerns. Computer hardware and software represent the working collection of cyber assets contained in Alice’s laptop and POS terminal. The recovery disk, customization, and spreadsheets represent different elements of those assets:

  • ■   Third-party software—represented by the recovery disks

  • ■   Site-specific configuration—represented by Alice’s computer customization, and includes the files she installed, how she arranged them, and how she organized her desktop

  • ■   Working files—represented by Alice’s spreadsheets

We use different tools and strategies to preserve and protect these different cyber resources. It’s relatively straightforward to save working files on a backup device. Site-specific configuration poses a challenge: It often resides in system files that are hard to reliably copy and restore. Third-party software poses a different challenge: Commercial software vendors may make it difficult to save a working copy of their application programs to reduce losses from software piracy.

TABLE 1.2 shows the risk matrix. Alice bought all of her computer equipment off-the-shelf, so disclosure poses no risk. Her arrangement of files, desktop, and so on reflects similar choices by others and poses no disclosure risk. Her spreadsheets contain business details, including employee salaries, and it’s best to keep such information confidential. A subversion attack on Alice specifically would target her computer hardware and software.

TABLE 1.2 Risk Matrix for Alice’s Arts


Alice’s local assets suggest other risks we can omit. Subversion, forgery, or masquerade involving the recovery disks could happen, but it’s unlikely given her threat agents: Software subversion is usually part of a remote hacker attack, especially in a small retail store, and the recovery disks aren’t remotely accessible. Subversion applies to Alice’s executable software, so it doesn’t apply to her other assets. A malicious employee could attack her computer customization or spreadsheets, but that doesn’t fit the typical objectives of employee attacks. Employees are most likely to steal tangible things while leaving no obvious evidence. Changes to Alice’s spreadsheets or computer customization won’t put money in an employee’s pocket; when Alice detects the damage, it merely causes her extra work and inconvenience.

Both physical damage and DOS attacks could prevent Alice from using cyber resources. Alice could face DOS attacks due to power failures or outages at her internet service provider (ISP). These could be accidental or intentional; in both cases, she must address the risks. Physical attacks include storm damage, vandalism, and theft.

When we look at attacks related to Alice’s online activities, we must keep our security boundary in mind. While some of Alice’s assets reside online, we won’t analyze those online systems. We will focus on risks she can address within her security boundary. Alice can’t personally counteract a DOS attack against her online resources, but she can take the risk into account. For example, if her credit card processor is offline, can she still sell merchandise? If not, how much will she lose?

In a masquerade against Alice’s online accounts, the attacker transmits forged messages as if she were Alice. This may arise from disclosure of Alice’s credentials or from other weaknesses in the online site. While we can identify separate, specific attacks on Alice’s online sites that involve only forgery, disclosure, or masquerade, we will treat them as a single “identity theft” risk.

This analysis yields a list of 11 risks to Alice’s assets:

  1. Physical damage to computer hardware and software

  2. Physical damage to recovery disks

  3. Physical damage to computer customization

  4. Physical damage to spreadsheets

  5. Denial of service for online business and credentials

  6. Denial of service for social media and credentials

  7. Subversion of computer hardware and software

  8. Denial of service for computer hardware and software

  9. Disclosure of spreadsheets

  10. Identity theft of online business and credentials

  11. Identity theft of social media and credentials
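The derivation above, marking the attacks that apply to each asset and then reading each mark off as a risk, can be sketched as follows. The marks reflect the analysis of Table 1.2; the uniform "attack of asset" phrasing is a simplification of the wording used in the list above.

```python
# Alice's six assets and the marked cells from the risk matrix analysis.
risk_matrix = {
    "physical damage": ["computer hardware and software", "recovery disks",
                        "computer customization", "spreadsheets"],
    "denial of service": ["online business and credentials",
                          "social media and credentials",
                          "computer hardware and software"],
    "subversion": ["computer hardware and software"],
    "disclosure": ["spreadsheets"],
    "identity theft": ["online business and credentials",
                       "social media and credentials"],
}

def enumerate_risks(matrix):
    """Each marked cell becomes a risk: a potential attack against an asset."""
    return [f"{attack} of {asset}"
            for attack, marked in matrix.items()
            for asset in marked]

risks = enumerate_risks(risk_matrix)
print(len(risks))  # prints 11, matching the list above
```

Representing the matrix this way also makes the next step of risk management easier: Each enumerated risk can later be paired with a likelihood and a cost estimate.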
