2.4 Access Control Strategies

Even though the Morris worm attacked systems that implemented many access control mechanisms, these did not halt the worm’s spread. To understand why, we need to examine those mechanisms. First, we look at the general problem of access control. Computer-based access control falls into four categories that correspond to these real-world situations:

  1. Islands: A potentially hostile process is marooned on an island. The process can only use resources brought to it.

  2. Vaults: A process has the right to use certain resources within a much larger repository. The process must ask for access to the resources individually, and the system checks its access permissions on a case-by-case basis.

  3. Puzzles: A process uses secret or hidden information in order to retrieve particular data items. This provides effective protection only if it is not practical to try to guess the unknown information through exhaustive trial and error.

  4. Patterns: The data items and programs made available to a process are compared against patterns associated with hostile data. If the data item matches a pattern, the system discards the data or at least blocks access to it by other processes. This technique is not very reliable and is used as a last resort.

Islands

In practice, computer-based access control begins by making each process into its own island. If you are marooned on an island, you are restricted to the resources already there, plus whatever resources are brought to you. If we look at a process as an island, the operating system gives the process access to its own RAM and to carefully metered turns using the CPU. The island forms its own security domain, within which the process has free rein.

This type of access control is called “isolation and mediation.” We isolate the process, or group of processes, to create the island. We then “mediate” its access to other resources. We try to do this with risky processes so that we may restrict the damage they might do. All operating systems isolate processes from one another, at least to start. Traditional systems provide APIs to reach shared resources in RAM or in the file systems. Sandboxes in mobile systems provide a greater degree of isolation than processes in traditional systems.

Vaults

Processes rarely operate simply as islands. Often the operating system provides access to a vault of computing resources. Each request is checked to ensure that the process has permission to use the file or other resource. If you have a safe-deposit box at a bank, the clerk may admit you to the vault, but only to the area containing your box, and only for retrieving that particular box. Section 2.5 discusses the RAM access controls applied by typical operating systems to typical processes.
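
To make the mediation idea concrete, here is a minimal sketch of vault-style access checking in Python. The permission table, user names, and function names are illustrative assumptions, not features of any real operating system.

    # A minimal sketch of vault-style mediation: the system checks each
    # request against a permission table before granting access.
    # All names here are invented for illustration.
    PERMISSIONS = {
        ("alice", "report.txt"): {"read", "write"},
        ("bob",   "report.txt"): {"read"},
    }

    def open_file(user, filename, mode):
        """Grant access only if 'mode' appears in the user's permission set."""
        if mode not in PERMISSIONS.get((user, filename), set()):
            raise PermissionError(f"{user} may not {mode} {filename}")
        print(f"{user} opened {filename} for {mode}")

    open_file("bob", "report.txt", "read")     # allowed
    # open_file("bob", "report.txt", "write")  # would raise PermissionError

The essential point is that every request passes through the check; the process never touches the resource directly.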

2.4.1 Puzzles and Patterns

Puzzle-based access control appears in many ways in the real world and in computer systems. Its most powerful form appears in cryptography, or crypto for short. Crypto provides us with a variety of mathematical techniques to hide data through encryption or to authenticate its contents.

Another puzzle technique is called steganography, in which we try to hide one collection of information inside another. A simple example might be to write a message in invisible ink. Digital steganography often encodes the data in a special way and then spreads it among seemingly random data bits in another file. For example, the digital image on the left of FIGURE 2.10 contains the 243-word document shown at right. For every spot in the digital image, there is a binary number representing its brightness in the image. We encode the data in the low-order bits of individual spots. Changes to those bits rarely change the picture’s general appearance.

FIGURE 2.10 Example of steganography. The text on the right has been encoded into the image shown on the left, a photograph of the United States Capitol building.

Left: Courtesy of Dr. Richard Smith. Right: Courtesy of Department of Defense.
EMISSIONS SECURITY ASSESSMENT REQUEST (ESAR) FOR ALL CLASSIFIED SYSTEMS
The information below are emission security conditions (EMSEC) requirements that must be complied with by government contractors before the processing of classified data can begin. These are the minimum requirements established for the processing of DOD SECRET information. The processing of higher than DOD SECRET calls for more stringent requirements.
(1) The contractor shall ensure that EMSEC related to this contract are minimized.
(2) The contractor shall provide countermeasure assessment data to the Contracting Officer as an ESAR. The ESAR shall provide only specific responses to the data required in paragraph 3 below. The contractor’s standard security plan is unacceptable as a “standalone” ESAR response. The contractor shall NOT submit a detailed facility analysis/assessment. The ESAR information will be used to complete an EMSEC Countermeasures Assessment Review of the contractor’s facility to be performed by the government EMSEC authority using current Air Force EMSEC directives.
(3) When any of the information required in paragraph 4 below changes (such as location or classification level), the contractor shall notify the contracting officer of the changes, so a new EMSEC Countermeasures Assessment Review is accomplished. The contractor shall submit to the Program Management Office a new ESAR. The new ESAR will identify the new configuration, a minimum of 30 days before beginning the change(s). The contractor shall not commence classified processing in the new configuration until receiving approval to do so from the contracting officer.
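
The low-order-bit technique that FIGURE 2.10 illustrates can be sketched in a few lines of Python. This is a toy version under simplifying assumptions: the image is a flat list of 8-bit brightness values, and the message is plain ASCII.

    # A toy sketch of low-order-bit steganography. Function names and the
    # sample image are invented for illustration.
    def embed(pixels, message):
        """Hide each bit of 'message' in the low-order bit of one pixel."""
        bits = []
        for byte in message.encode("ascii"):
            bits.extend((byte >> i) & 1 for i in range(7, -1, -1))
        if len(bits) > len(pixels):
            raise ValueError("image too small to hold the message")
        stego = list(pixels)
        for i, bit in enumerate(bits):
            stego[i] = (stego[i] & ~1) | bit   # replace only the low bit
        return stego

    def extract(pixels, length):
        """Recover 'length' characters by reading the low-order bits back."""
        chars = []
        for i in range(length):
            byte = 0
            for pixel in pixels[i * 8:(i + 1) * 8]:
                byte = (byte << 1) | (pixel & 1)
            chars.append(chr(byte))
        return "".join(chars)

    image = [128] * 16          # a tiny all-gray "image"
    stego = embed(image, "HI")
    print(extract(stego, 2))    # prints "HI"; each pixel changed by at most 1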

Puzzles also provide a popular but less-effective form of protection called security through obscurity (STO). This approach relies on hiding resources to keep them safe, without weighing the risk that attackers will find them through systematic searching.

An Incident: Several years ago, a spreadsheet vendor provided a feature that let customers password-protect their spreadsheets. The feature used a very weak encryption scheme. A programmer who examined how it worked realized that there was a very easy way to break it. He reported the problem to the vendor, expecting the vendor to fix it and update the software.

Instead, the vendor threatened the programmer with legal action if he revealed the problem to anyone else. The vendor decided it was easier to threaten a lawsuit than to fix the problem. Of course, this did not prevent others from finding the flaw on their own and announcing it themselves.

Encryption becomes STO when it is easy to crack. Steganography used by itself may also qualify as STO: although it can hide information, it isn’t effective once people know where to look. If steganography carries sensitive information, the information itself should also be encrypted so that an attacker who finds it still can’t read it.

Open Design: A Basic Principle

STO is best known for its failures. For example, house thieves seem to find the hidden valuables first. The opposite of STO is the principle of Open Design.

When we practice Open Design, we don’t keep our security mechanisms secret. Instead, we build a mechanism that is strong by itself and keep it secure with a component we can easily change. For example, we can replace the locks on doors without replacing the doors and latches themselves.

Open Design acknowledges the fact that attackers may “reverse engineer” a device to figure out how it works. If a security device relies on STO, reverse engineering allows the attacker to bypass its protection. A well-designed security device still provides protection even after reverse engineering. For example, safecrackers know how safe mechanisms work. Even so, it may take them hours to open a safe unless they know its secret combination.

Open Design, however, is not the same as “open source.” An open-source software package is made freely available on terms to ensure that the free portions, and improvements made to them, remain available freely to everyone. Although all open-source systems are also Open Design by intent, not all Open Design systems are open source.

Cryptography and Open Design

Properly designed cryptosystems use Open Design. The concept arises from Kerckhoffs’s principle: we assume that potential attackers already know everything about how the cryptosystem works. All real security should rest in a piece of secret information, the key, which we can easily change without replacing the whole system.

Kerckhoffs was a 19th-century Dutch linguist and cryptographer whose 1883 essay “La Cryptographie militaire” laid out the principle. Claude Shannon restated it succinctly as “the enemy knows the system” in his 1949 paper “Communication Theory of Secrecy Systems”; the restatement is sometimes called Shannon’s maxim. It arose as part of Shannon’s development of information theory and his work on the mathematical properties of cryptosystems.

If we build an access control system using puzzles and Open Design, we end up with a variation of the vault. Encryption presents a puzzle: We possess the data, but it’s effectively “locked up” in its encrypted form. We can retrieve the encrypted data only if we have the key.
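
As a toy illustration of this idea, the following Python sketch uses XOR with a repeating key. The algorithm is completely public, and all the secrecy rests in the key, which we can change at will. XOR with a short repeating key is easy to break and is shown only to illustrate the principle; real systems use vetted ciphers such as AES.

    # Kerckhoffs's principle in miniature: a public algorithm, a secret key.
    from itertools import cycle

    def xor_crypt(data: bytes, key: bytes) -> bytes:
        """Encrypts and decrypts: XOR with the key is its own inverse."""
        return bytes(b ^ k for b, k in zip(data, cycle(key)))

    key = b"changeable-secret"                  # invented key
    ciphertext = xor_crypt(b"attack at dawn", key)
    print(xor_crypt(ciphertext, key))           # b'attack at dawn'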

Pattern-Based Access Control

Pattern-based access control is very common, although it is notoriously unreliable. Photo ID cards provide a common real-world example: A guard or clerk compares the fuzzy photo on an ID card to the face of the person. These checks are very unreliable because the photos are rarely of high quality or updated to reflect significant changes in appearance, like changes to facial hair or hairstyle.

Mathematically, there is no 100 percent reliable way to analyze a block of data or a computer program and conclusively determine what it could do. Antivirus software therefore looks for patterns associated with known viruses and other potentially harmful data, and it blocks vulnerable processes from accessing that data. Pattern matching is rarely exact, so there is always a risk of error: the scanner may mistake safe data for malware, or it may fail to identify malware as such. Moreover, pattern detection can only block malware the system has been taught to recognize; it is not effective against new threats.
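
Here is a toy sketch of that pattern matching. The byte patterns are made up for illustration; they are not real malware indicators.

    # Signature-based scanning in miniature: look for known byte patterns.
    SIGNATURES = {
        b"\x4d\x5a\x90\xee": "Example.Dropper.A",   # invented pattern
        b"EVIL_PAYLOAD": "Example.Worm.B",          # invented pattern
    }

    def scan(data: bytes):
        """Return the names of any known signatures found in 'data'."""
        return [name for pattern, name in SIGNATURES.items() if pattern in data]

    suspect = b"...header...EVIL_PAYLOAD...trailer..."
    print(scan(suspect))    # ['Example.Worm.B']

A brand-new worm whose bytes match no entry in the table passes the scan untouched, which is exactly the weakness described above.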

Biometrics

Another computer-based application of patterns is in biometrics: techniques that try to identify individual people based on personal traits like voice, fingerprints, eye features, and so on. In this case, as with malware detection, decisions are based on approximate matches. Again, we try to avoid both a mismatch of the legitimate person and a successful match against someone else. We examine biometrics further in Section 6.6.
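
A toy sketch of threshold-based matching shows why both error types arise. The feature vectors and threshold here are invented for illustration; real biometric systems use far richer features.

    # Biometric matching in miniature: compare a sample against an enrolled
    # template and accept if the similarity score clears a threshold.
    def similarity(a, b):
        """Fraction of matching features between two templates."""
        return sum(x == y for x, y in zip(a, b)) / len(a)

    ENROLLED = [1, 0, 1, 1, 0, 1, 0, 0]   # stored template (invented)
    THRESHOLD = 0.75                      # raising it cuts false accepts
                                          # but raises false rejects

    def authenticate(sample):
        return similarity(ENROLLED, sample) >= THRESHOLD

    print(authenticate([1, 0, 1, 1, 0, 1, 0, 1]))  # True: 7/8 features match
    print(authenticate([0, 1, 0, 1, 0, 1, 0, 1]))  # False: 4/8 features match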

2.4.2 Chain of Control: Another Basic Principle

When Alice starts up her laptop, it runs the operating system on her hard drive. She protects the files on her hard drive by telling the OS how to restrict access. If Alice ensures that the computer always runs her OS the way she set it up, then the system will protect her files. We call this the Chain of Control.

If Alice’s laptop boots from a different device instead of her hard drive, then it runs a different operating system. This other system does not need to follow the rules Alice established on her hard drive’s OS. This breaks the Chain of Control.

We retain the Chain of Control as long as the following is true:

  • Whenever the computer starts, it runs software that enforces our security requirements, and

  • If the software we start (typically an operating system) can start other software, then that other software either:

    • Complies with the security requirements, or

    • Is prevented from violating the requirements by other defensive measures, like operating system restrictions on RAM and file access.

Chain of Control is similar to another concept: control flow integrity (CFI). The flow of control between different parts of a program forms a data structure called a “graph.” To use CFI, we look at a particular program combined with its requirements and create a graph of how control should flow. Research has shown that we can also monitor the program’s execution and compare its flow against the graph. The monitor enforces CFI by blocking the program’s execution if it detects a break from the expected flow.
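
As a toy illustration (real CFI operates on machine code, not on named functions, and all names here are invented), the following sketch stores the allowed control-flow graph as a table of permitted transitions and has the monitor reject any execution trace that departs from it:

    # CFI checking in miniature: enforce an allowed control-flow graph.
    ALLOWED = {
        "start": {"check_password"},
        "check_password": {"grant_access", "deny_access"},
    }

    def monitor(trace):
        """Verify that each transition in 'trace' follows the allowed graph."""
        for src, dst in zip(trace, trace[1:]):
            if dst not in ALLOWED.get(src, set()):
                raise RuntimeError(f"CFI violation: {src} -> {dst}")
        return True

    print(monitor(["start", "check_password", "grant_access"]))   # True
    # monitor(["start", "grant_access"]) would raise a CFI violation,
    # catching an attempt to jump straight past the password check.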

Controlling the BIOS on Traditional Computers

Operating systems programmers work hard to ensure the Chain of Control. Once the OS is running, it prevents most programs from violating security requirements. It grants a few users and programs special privileges that, if abused, can violate Chain of Control. The OS is usually successful at blocking attempts to circumvent Chain of Control by untrustworthy users or programs. This is the first line of defense against computer viruses.

In many computers, the biggest risk to Chain of Control is during the bootstrap process. When a computer starts up, it starts the BIOS program described in Section 2.1.1. Many computers recognize a special “escape key” during bootstrap (sometimes this is the key marked “ESC” for “escape”) that activates the BIOS user interface. The operator uses this interface to adjust certain computer features controlled by the motherboard hardware. An important feature is selection of the “bootstrap device” from which the BIOS loads the operating system. The user may choose the OS stored on an attached hard drive, USB drive, or optical storage.

Many vendors provide computer software on optical media like CDs or digital versatile disks (DVDs). Even software downloaded from the internet is often formatted as if stored on a CD or DVD, and a user may “burn” it onto a large-enough optical disk. If a computer includes a DVD drive, the BIOS often allows the user to bootstrap from the DVD. OS vendors and backup services may provide their software on a boot-ready DVD so that users can run a DVD-based OS to repair or restore a damaged hard drive.

When most systems start up, they try to bootstrap the operating system from the first hard drive they find. If that fails, then they may search for another device. Some systems, like the Macintosh, allow the user to type a key on the keyboard (like the letter “C”) to force a bootstrap from a CD or DVD.

Although the ability to bootstrap the operating system from any device is useful at home and in some technical environments, it can compromise the computer’s Chain of Control. If this poses a real security risk, administrators may take another step: They may disable bootstrapping from any device except the approved hard drive containing the approved operating system. However, if the hard drive fails, the administrator will not be able to boot recovery software from a DVD. The administrator will need physical access to the computer to install a new hard drive or to reenable other boot devices in the BIOS.

Administrators may further protect the Chain of Control by enabling password protection in the BIOS. This protection prevents changes to the BIOS configuration unless the operator knows the password, thereby allowing the administrator to change the boot device if there is a hardware failure.

Unfortunately, the BIOS is not the only risk to the Chain of Control. The Morris worm illustrated another: malicious software can exploit a weakness in a running program to subvert the system.

Subverting the Chain of Control

Here are three ways to subvert the Chain of Control on Alice’s computer:

  1. Bootstrap the computer from a separate USB drive or DVD that contains an OS controlled by the attacker. This bypasses Alice’s normal operating system and uses the different, bootstrapped one to modify her hard drive.

  2. Trick Alice into running software that attacks her files. If she starts a process, the process has full access to her files.

  3. Trick the operating system, or an administrator, into running a subverted program with administrative or system privileges. This often allows a program to bypass security restrictions. This is how the Morris worm attacked through the finger process.

We can install security measures to block these vulnerabilities, but first we need to decide whether these risks are worth addressing. That depends on the threat agents who might attack Alice’s computer.

Chain of Control on Mobile Devices

Modern mobile devices use sandboxes and other measures to provide stronger security than traditional operating systems. Here’s how mobile devices address the three types of attacks described above:

  1. Bootstrapping from another device:

    • Mobile devices generally won’t bootstrap from any source except their built-in flash memory.

    • Both iOS and Android use ROM-based code to check the operating system software cryptographically during the boot operation (see the sketch following this list). This detects modifications made to the system and prevents a modified operating system from running.

  2. An application attacks other files:

    • Sandboxing prevents an individual application, even a subverted one, from accessing files and other resources belonging to other applications.

    • iOS and Android vendors provide “App Stores” that offer applications tested for malicious behavior. This does not guarantee the absence of malicious behavior, but it reduces the risk.

  3. Giving privileges to a subverted application: Applications don’t receive privileges except when explicitly granted by the user, and some critical privileges can’t be granted by the user at all.
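
The ROM-based check mentioned in item 1 can be sketched as follows. Real verified boot checks digital signatures over firmware images; this toy version, with an invented image and digest, uses a bare SHA-256 hash to show the idea.

    # Cryptographic boot verification in miniature.
    import hashlib

    # Digest of the approved OS image, assumed to live in read-only storage.
    TRUSTED_DIGEST = hashlib.sha256(b"approved OS image v1.0").hexdigest()

    def verify_and_boot(image: bytes):
        """Refuse to boot unless the image matches the trusted digest."""
        if hashlib.sha256(image).hexdigest() != TRUSTED_DIGEST:
            raise RuntimeError("boot blocked: OS image has been modified")
        print("booting verified OS image")

    verify_and_boot(b"approved OS image v1.0")      # boots normally
    # verify_and_boot(b"tampered image") would raise and halt the boot.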

There are a handful of practical techniques for interfering with a mobile device’s Chain of Control. For example, some Android owners can “root” their devices; without rooting, the owner might be restricted to a particular App Store or prevented from removing particular apps. Some versions of iOS allow “jailbreaking,” which permits installation of apps from outside the Apple App Store. Some users prefer this flexibility, although it may open them up to malware attacks.
