Reviewing the Risk Assessment for the IT Infrastructure

Once a risk assessment has been completed and approved, it can be reviewed and used to build a risk mitigation plan for the IT infrastructure. A risk assessment includes the following high-level steps:

  • Identify and evaluate relevant threats.
  • Identify and evaluate relevant vulnerabilities.
  • Identify and evaluate countermeasures.
  • Develop mitigating recommendations.

Next, management reviews the risk assessment. Management can approve, reject, or modify the recommendations. The management decisions are then documented and included in a plan of action and milestones document.

The next step translates the risk assessment into an actual risk mitigation plan. Before jumping into this step, the risk assessment should be reviewed, paying special attention to the following key items:

  • In-place countermeasures—The risk assessment may have addressed some of the countermeasures that are already being used. Some of the countermeasures may need to be upgraded or reconfigured, and some may need to be replaced completely. If a countermeasure is to be replaced, the original countermeasure shouldn’t be removed until the new one has been installed.
  • Planned countermeasures—A planned countermeasure is one that has been approved and has a date for implementation. Planned countermeasures are documented in the risk assessment. These countermeasures should be reviewed to determine their status. A countermeasure may have been installed since the risk assessment was published. The date a planned countermeasure will be implemented might also affect the timeline for an approved countermeasure. These countermeasures should be documented in the plan of action and milestones.
  • Approved countermeasures—Approved countermeasures are the controls previously approved by management. They need to be added to the implementation pipeline. Some will be easy to implement, whereas others may be complex and require extra steps. Some may need to be purchased, and some of the work may need to be delegated. All of them need to be tracked for completion, as in the sketch after this list.
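
To make the tracking concrete, the following minimal Python sketch shows one way the status of in-place, planned, and approved countermeasures might be recorded for the plan of action and milestones. The class and field names are illustrative assumptions, not part of any standard.

```python
from dataclasses import dataclass
from datetime import date
from typing import List, Optional

@dataclass
class Countermeasure:
    """One tracked countermeasure for the plan of action and milestones."""
    name: str
    status: str                          # "in-place", "planned", or "approved"
    risk_addressed: str
    target_date: Optional[date] = None   # implementation date, if scheduled
    completed: bool = False

def open_items(countermeasures: List[Countermeasure]) -> List[Countermeasure]:
    """Return the countermeasures that still need to be tracked to completion."""
    return [c for c in countermeasures if not c.completed]

# Illustrative entries only
plan = [
    Countermeasure("Upgrade perimeter firewall", "in-place", "External attack", completed=True),
    Countermeasure("Deploy IDS on server VLAN", "planned", "Undetected intrusion", date(2025, 9, 1)),
    Countermeasure("Account management policy", "approved", "Inactive accounts"),
]
for c in open_items(plan):
    print(f"{c.name}: {c.status}, target date {c.target_date}")
```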

Overlapping Countermeasures

Another important consideration when reviewing the plan is to determine whether there is any overlap among the countermeasures. One countermeasure may reduce or resolve more than one risk. Additionally, some risks may be mitigated by more than one countermeasure.

The overlap may be purposeful or accidental. In other words, multiple countermeasures may be implemented for a single risk as a defense-in-depth strategy. This ensures that the risk is mitigated even if a countermeasure fails. An accidental overlap occurs when two or more countermeasures mitigate the same risk but the overlap wasn’t intentional. As long as the countermeasures aren’t mitigating the same risk in the same way, this isn’t a problem. However, any countermeasure overlaps should be identified.

If a countermeasure overlaps with another countermeasure, conflicts may occur between the two. Although many security countermeasures work together, some countermeasures may cause problems for other countermeasures.

For example, a vulnerability scanner and an intrusion detection system (IDS) used to protect a server may conflict. The vulnerability scanner could be configured to scan this server on a daily basis. However, the IDS will likely detect this scan and send an alert because it recognizes the scan as a potential attack. This notification could be an email to a group of administrators.

In this example, the IDS alert is a false alert, or false positive. It requires an administrator to investigate and review the alert. Because the internal vulnerability scanner is causing the alert, it clearly isn’t an actual attack. However, it still takes time to investigate.

This doesn’t mean that either countermeasure should be avoided, because there may be ways to avoid the conflict. Perhaps the IDS could be configured so that it doesn’t flag scans from the vulnerability scanner. Maybe the IDS detects only one specific type of scan, and the scanner can be configured to skip that scan. If the conflict can’t be avoided, personnel should at least be educated about it. They should know what is causing the alert and that other alerts should still be investigated thoroughly.
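
One common way to reduce this particular false alert is to exclude the authorized vulnerability scanner from notification. The following Python sketch illustrates the idea with hypothetical alert records and a hypothetical scanner address; in a real deployment this would be an IDS configuration setting rather than custom code.

```python
# Hypothetical data: the IP address of the authorized internal vulnerability scanner.
KNOWN_SCANNERS = {"10.0.0.50"}

def should_notify(alert: dict) -> bool:
    """Suppress alerts caused by the authorized scanner; pass everything else through."""
    return alert["source_ip"] not in KNOWN_SCANNERS

alerts = [
    {"source_ip": "10.0.0.50", "signature": "TCP port scan"},    # the scheduled daily scan
    {"source_ip": "203.0.113.7", "signature": "TCP port scan"},  # unknown host -- investigate
]
for alert in alerts:
    if should_notify(alert):
        print(f"ALERT: {alert['signature']} from {alert['source_ip']}")
```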

Attacks Ignored for a Full Weekend

A large network operations center had several countermeasures in place to detect attacks. These countermeasures provided notification to network operations center personnel on a large monitor viewable by all personnel.

On one weekend, an IDS sent alerts on a potential attack. One of the administrators investigated and realized it was a false alert. Three more alerts occurred in the next hour, and other administrators investigated. Each time they were false alerts.

These false alerts continued, but, at some point in the next few hours, an actual attack started, which also caused alerts. However, the administrators began to expect false alerts, and they gave all the alerts less and less attention. The IDS had become the IDS that cries wolf. When the real wolf was at the door, no one believed it, and none of the alerts were recognized as valid.

When administrators came on duty Monday morning, they completed a review of activity and detected the actual attack. Luckily, the attack didn’t take down any systems, but the attacker did gather reconnaissance data.

False alerts should be minimized if possible. Personnel can get accustomed to seeing alerts and dismiss them without investigation. This activity of reducing false alerts is called “tuning the IDS.” Without tuning, personnel may dismiss a live attack before even investigating it.

As long as two countermeasures don’t conflict with each other, overlapping countermeasures are acceptable. In fact, a defense-in-depth strategy encourages having more than one countermeasure for the same risk. If one countermeasure fails or is circumvented, the other still provides protection.

Risk Assessments: Understanding Threats and Vulnerabilities

One of the methods that can be used to determine whether countermeasures overlap is to conduct a risk assessment, which maps the countermeasure to threats, vulnerabilities, and the assets being protected. This helps to paint a complete picture of risk, which is often represented with the following formula:

Risk = Threat × Vulnerability × Asset

A vulnerability is a weakness; by itself, it doesn’t present a risk. Similarly, threats by themselves don’t present a risk. Risk is the probability of a threat taking advantage of a vulnerability to cause loss, damage, or harm to an asset. Countermeasures either reduce or eliminate the impact of the threat or the vulnerability on the asset.

NOTE

Risk = Threat × Vulnerability × Asset isn’t a mathematical formula. In other words, numerical values are not assigned to threats and vulnerabilities to determine a numerical value for risk. Instead, the formula shows that risks occur when both threats and vulnerabilities are combined to harm an asset.

According to NIST 800-30, a risk assessment is used to identify, estimate, and prioritize risk to the operations of a business or organization. The risk assessment helps decision makers in these organizations identify and evaluate how these threats impact them and what countermeasures can be implemented to reduce vulnerabilities and harm to their assets. The eventual outcome is a determination of risk.

FYI

Nonrepudiation prevents individuals from denying they took an action. By logging usernames, audit logs include details on who performed an action. Because the activity is logged, users can’t deny they took the action. Effective nonrepudiation is lost if one user can use another user’s account. The same goes for shared accounts where all team members use, for example, “admin.” Some logs include Internet Protocol (IP) addresses and computer names. This audit trail helps an investigator determine what actually happened.

For example, consider user accounts of terminated employees. As a best practice, accounts should be disabled but not deleted when the employee leaves. If necessary, administrators can enable the account later with a different password. This allows a supervisor to access the ex-employee’s data. After the supervisor reviews and copies important data, administrators delete the ex-employee’s account.

Imagine that a company doesn’t do anything with old accounts. As long as an account remains enabled, anyone who has its credentials can access it.

If previous employees have physical access to the network, they can log on. Some networks will even allow them to log on remotely. They would have the same permissions as if they had never left the job and could read, modify, or delete all the same data they could access as employees.

Perhaps a previous employee still has friends on the job. The previous employee could give his or her credentials to a friend, and the friend could log on using the ex-employee’s credentials. At this point, nonrepudiation is lost. If any of the activity is logged, it looks as if the ex-employee is taking the action. For example, Bob is an ex-employee, but Sally learns his username and password. With that information, Sally can log on as Bob. Audit logs may record what Sally does, but they record Bob’s username. This might send security personnel on a wild goose chase trying to determine how Bob gained access to the network.

In this situation, the vulnerability is that inactive accounts are still enabled. User accounts aren’t managed, leaving them available even if they aren’t needed. The threat is that a previous employee or someone else may log on and access the account. The assets that could be lost, damaged, or harmed include valuable company information in the accounts.

Identifying Countermeasures

Risks are mitigated by adding countermeasures. The following countermeasures could be implemented to mitigate the risks from not disabling inactive accounts:

  • Creating an account management policy—An account management policy is a written policy that spells out exactly what should be done with accounts. The policy may cover much more than just ex-employee accounts. For example, it could also address the format used to create accounts, such as firstname.lastname. It could include requirements for an account lockout policy and details for a password policy.
  • Creating a script to check account usage—Administrators could be tasked with writing a script to identify inactive accounts. An organization might define an inactive account as any account that hasn’t been used in the past 30 days. The script would scan accounts and automatically disable inactive accounts. Administrators could schedule the script to run once a week, log the results, and email the results to interested personnel. A sketch of such a script appears after this list.
  • Controlling physical access to employee areas—Access to employee-only areas could be controlled. Limiting access can be as simple as posting signs to discourage nonemployees from entering or as involved as using physical locks, cipher locks, badges, or proximity cards.
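
The following is a minimal sketch of such a script, assuming account data is already available as simple records with a last-logon timestamp. In practice the records would come from the directory service, and the disable step would call its administrative interface; both are represented here by placeholders.

```python
from datetime import datetime, timedelta

INACTIVE_DAYS = 30  # the organization's definition of an inactive account

def find_inactive(accounts, now=None):
    """Return enabled accounts whose last logon is more than INACTIVE_DAYS in the past."""
    now = now or datetime.now()
    cutoff = now - timedelta(days=INACTIVE_DAYS)
    return [a for a in accounts if a["enabled"] and a["last_logon"] < cutoff]

def disable(account):
    """Placeholder for the directory call that would actually disable the account."""
    account["enabled"] = False
    print(f"Disabled {account['name']} (last logon {account['last_logon']:%Y-%m-%d})")

# Illustrative data; a real script would query the directory service instead.
accounts = [
    {"name": "bob",   "enabled": True, "last_logon": datetime(2024, 1, 15)},
    {"name": "sally", "enabled": True, "last_logon": datetime.now()},
]

for account in find_inactive(accounts):
    disable(account)
```

Scheduled through the operating system's task scheduler, a script like this examines every account each week without further administrative effort.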

Similarly, the risk assessment may determine that users are not using strong passwords or changing their passwords regularly. The vulnerability is that the passwords are weak because password-cracking tools can easily crack weak passwords. The threat is that an attacker may use one of the many tools available to crack the weak password. Attackers can then use the cracked passwords to log on to a system or network.

The solution is to implement a password policy. A password policy is often part of an overall account management policy. Password policies can be enforced using technical means. For example, Microsoft domains allow IT administrators to enforce strong password practices with Group Policy.

A password policy would specify the following:

TIP

Group Policy settings allow an administrator to configure a setting once in a domain. This setting will then apply to all users and computers in the domain. If desired, the administrator can also configure a Group Policy Object to apply to specific users and computers. Once configured, Group Policy works the same in a network with 5 users and computers as it does in a network with 50,000 users and computers.

  • Password length—Common recommended lengths are at least 8 characters for regular users and at least 15 characters for administrators. Although 15-character passwords may seem outrageous to an administrator who hasn’t used them, they are used. In practice, passphrases are commonly used instead of passwords at these lengths. For example, a password could be IL0veR1$kM@n@gement. This is a complex 19-character password, but it isn’t hard to remember. (A sketch that checks the length and complexity rules appears after this list.)
  • Complexity—The complexity refers to the mixture of characters. Complex passwords commonly have a mixture of at least three of the four character types. Character types are uppercase letters, lowercase letters, numbers, and special characters. Some requirements specify all four character types must be used. A complex password is more difficult to crack than a simple password.
  • Maximum age—The maximum age identifies when the password must be changed. For example, a maximum age of 45 indicates the password must be changed at least every 45 days. Once the maximum age passes, the user is unable to log on until the password has been changed.
  • Password history—Some users will try to rotate between just one or two passwords. They use password 1 until they are forced to change it, switch to password 2, and then switch back to password 1 at the next change. When password history is used, users are prevented from reusing a password they have used before. For example, Windows domain systems can remember the past 24 passwords, which prevents users from reusing a password until they have used 24 other passwords.
  • Minimum age—Establishing a minimum age prevents users from changing their passwords until a minimum amount of time has passed. Administrators commonly use one day as a minimum password age, which works with the password history to prevent users from changing their passwords right away to get back to their original passwords. With a password history set to 24 and the minimum age set to one day, users would have to change their passwords every day for the next 25 days to get back to the original password, which makes circumventing the intended policy too difficult for the users. Users will instead comply with the intention of the policy, which is to change the password to a new password.
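
The length and complexity requirements above can also be checked programmatically. The following Python sketch validates a candidate password against those two rules; the thresholds are the example values from this list, not fixed standards, and maximum age, history, and minimum age are enforced by the directory service rather than by code like this.

```python
import string

def meets_policy(password: str, min_length: int = 8, required_classes: int = 3) -> bool:
    """Check length and complexity: at least required_classes of the four character types."""
    if len(password) < min_length:
        return False
    classes = [
        any(c in string.ascii_uppercase for c in password),  # uppercase letters
        any(c in string.ascii_lowercase for c in password),  # lowercase letters
        any(c in string.digits for c in password),           # numbers
        any(c in string.punctuation for c in password),      # special characters
    ]
    return sum(classes) >= required_classes

print(meets_policy("IL0veR1$kM@n@gement", min_length=15))  # True: 19 characters, all four types
print(meets_policy("password1"))                           # False: only two character types
```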

TIP

NIST 800-63B shares valuable tips on passwords:

  • Having an 8-character minimum when the password is set by individuals
  • Having a 6-character minimum when the password is set by a system
  • Supporting at least 64 characters maximum length
  • Allowing at least 10 password attempts before lockout
  • Checking chosen passwords against known dictionaries

This list is not exhaustive but is worth reviewing. A sketch of a few of these checks follows.
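
As a small illustration, the Python sketch below applies three of these recommendations: the 8-character minimum for user-chosen passwords, the 64-character maximum that must be supported, and a check against known passwords. The blocklist here is a placeholder; a real check would use a much larger dictionary.

```python
COMMON_PASSWORDS = {"password", "123456", "qwerty", "letmein"}  # illustrative blocklist only

def acceptable(password: str) -> bool:
    """Apply NIST 800-63B-style checks: minimum and maximum length plus a dictionary check."""
    if not 8 <= len(password) <= 64:
        return False
    if password.lower() in COMMON_PASSWORDS:
        return False
    return True

print(acceptable("letmein"))               # False: too short and on the blocklist
print(acceptable("correct horse staple"))  # True: long enough and not a known password
```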

At this point, the countermeasures can be matched with the threat/vulnerability pairs. TABLE 11-1 shows the threat/vulnerability pairs matched to recommended countermeasures.

TABLE 11-1 Matching Threat/Vulnerability Pairs with Countermeasures
THREAT | VULNERABILITY | COUNTERMEASURE(S)
Previous employee | Inactive accounts not disabled | Implement account management policy; write script to deactivate accounts; restrict access to employees only
Attacker using password-cracking tools | Weak passwords | Implement account management policy; implement Group Policy password policy
The table shows that the account management policy addresses two threat/vulnerability pairs. This information can be valuable: if administrators recognize that a single countermeasure addresses multiple risks, they may decide to increase that countermeasure's priority. While threat/vulnerability pairing is an effective way to evaluate threats and vulnerabilities, the risk assessment approach described in NIST 800-30 is an effective alternative.
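
Once the pairs are recorded, finding countermeasures that address more than one threat/vulnerability pair can be automated. The following Python sketch counts coverage using data that mirrors Table 11-1; the priority note is only an example of how the count might be used.

```python
from collections import Counter

# Threat/vulnerability pairs mapped to countermeasures, mirroring Table 11-1.
pairs = {
    ("Previous employee", "Inactive accounts not disabled"): [
        "Account management policy",
        "Script to deactivate inactive accounts",
        "Restrict access to employees only",
    ],
    ("Attacker using password-cracking tools", "Weak passwords"): [
        "Account management policy",
        "Group Policy password policy",
    ],
}

# Count how many pairs each countermeasure addresses.
coverage = Counter(cm for cms in pairs.values() for cm in cms)

for countermeasure, count in coverage.most_common():
    note = " -- consider raising its priority" if count > 1 else ""
    print(f"{countermeasure}: covers {count} pair(s){note}")
```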

Scripting as a Technical Countermeasure

Technical countermeasures don’t have to be expensive. Some can be created at no cost with scripts if administrators have the expertise.

The difference between good administrators and great administrators is often the ability to write administrative scripts. Good administrators can get tasks done, but they often take longer. This is especially true for repetitive tasks. Great administrators can accomplish tasks much more quickly and with little effort.

One of the great benefits of scripts is that they can be automated. For example, an administrator wants to disable inactive accounts. He or she could write a script to identify and disable accounts that haven’t been used in the past 30 days and then schedule that script to run every Saturday night. All accounts would be automatically examined on a weekly basis and inactive accounts disabled without any additional administrative effort.

Compare this with the administrator who can’t script. The same tasks could still be accomplished, but they would take time each week.

As administrators become more proficient with scripts, they can add bells and whistles. For example, a script can log results or send an email. After the script has located and disabled inactive accounts, it can email a list of the accounts that were disabled.
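
For example, after the disable step runs, a script might mail a summary to the administrators. The sketch below uses Python's standard smtplib and email modules; the server name and addresses are placeholders.

```python
import smtplib
from email.message import EmailMessage

def email_summary(disabled_accounts, smtp_host="mail.example.com"):
    """Send a summary of disabled accounts to the administrators' mailbox."""
    msg = EmailMessage()
    msg["Subject"] = "Weekly inactive-account report"
    msg["From"] = "scripts@example.com"   # placeholder sender
    msg["To"] = "admins@example.com"      # placeholder distribution list
    msg.set_content("Accounts disabled this week:\n" + "\n".join(disabled_accounts))

    with smtplib.SMTP(smtp_host) as server:   # placeholder SMTP server
        server.send_message(msg)

# email_summary(["bob", "old-service-acct"])  # called after the inactive-account script runs
```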

Scripts can meet most administrative needs. A common saying among scripting administrators is “If you can envision it, you can script it.”
