2.3 Buffer Overflows and the Morris Worm

In a perfect world, a computer should not crash or misbehave just because it is connected to a network and receives the “wrong” data. The network software (the computer’s protocol process) should identify a badly formed message and discard it. In practice, network software may contain flaws that allow cleverly designed messages to crash the software (and perhaps the computer’s operating system) or that provide an attacker with an attack vector.

A “back door” is an attack vector that lets the attacker take control of a computer through a network connection. In the film WarGames (1983), a teen computer enthusiast found a back door connection. The teen thought he had reached a computer games company; instead, he reached a military computer that could launch nuclear missiles. The simplest back door gives the attacker control of a command shell, preferably one with administrative privileges. This allows the attacker to control the system remotely by typing keyboard commands.

In 1988, a graduate student at Cornell University named Robert T. Morris wrote an experimental worm program that could replicate itself across computers on the internet—then a research network with 60,000 host computers. Morris did not want to control these systems remotely; he just wanted to see if it would be possible to build a program that could distribute itself unobtrusively across the internet. The worm moved from one computer to another by seeking attack vectors on the next computer it found.

Morris had learned about several common vulnerabilities in the Unix operating system, which was heavily used on the internet. He programmed his worm, now called the Morris worm, to take advantage of those vulnerabilities. He selected several attack vectors and programmed them into the worm. If one attack vector failed, the worm automatically tried another.

The “Finger” Program

One of the vulnerabilities Morris used was in a networking service called “finger.” The purpose of finger was to report the status of individual computer users. A person could type the keyboard command:

    finger jsl@bu.edu

The command includes the user name of interest and the host computer on which the user resides, all packaged together like an email address (of course, this often served as the user’s email address).

The command starts a program that contacts the finger server at the computer bu.edu and asks it about the user named jsl. The server returns an answer that reports whether the user is currently logged in, or when last logged in, and may deliver additional information (like address and phone number) if the user jsl has provided it.
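The client side of this exchange is simple enough to sketch. The following Python function is a modern illustration, not the 1988 software; the finger protocol was later standardized in RFC 1288. The client opens a TCP connection to port 79, sends the user name followed by a carriage return and line feed, and reads back the server’s reply:

```python
import socket

def build_finger_request(user):
    # A finger request is just the user name followed by CRLF (RFC 1288).
    return (user + "\r\n").encode("ascii")

def finger_query(user, host, port=79):
    # Connect to the host's finger server, send the request, and
    # collect the reply. (Few hosts still run a finger server today.)
    with socket.create_connection((host, port), timeout=10) as sock:
        sock.sendall(build_finger_request(user))
        reply = b""
        while True:
            chunk = sock.recv(4096)
            if not chunk:
                break
            reply += chunk
    return reply.decode("ascii", errors="replace")

# Example (would only work against a live finger server):
# print(finger_query("jsl", "bu.edu"))
```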

2.3.1 The “Finger” Overflow

Researchers familiar with the Unix system, and with the finger service in particular, had discovered a buffer overflow flaw in the software. When a program sets up a buffer in RAM, it sets aside a specific amount of space. Properly written programs always keep track of the amount of data moved into or out of the buffer. In a buffer overflow, the program reads too much data into the buffer. The extra data is written over other data items nearby in RAM. To understand the problem, examine FIGURE 2.7, which shows how the finger server program arranged its data.

FIGURE 2.7 Data section for the finger service.

The finger data section shown here is actually part of its stack. When the host computer’s network software recognized that it had received a finger request, it started the finger process, leaving the network software’s return address in RAM at 1000, the first location on the stack. The finger process began with a procedure that set aside variables at locations 1010 and 1120. It also set aside a 100-byte buffer at location 1020 to contain the user ID or identification (underlined in Figure 2.7).

Then the procedure called a network “read” function to retrieve the network message that just arrived. When calling the function, finger saved its return address at location 1180. The network read function created some variables starting at location 1200. (These RAM addresses are meant to illustrate what happened and are not exact.)

The finger service assumed that a user’s name and host computer name together would never exceed 100 bytes, including a null character at the end. The query shown earlier (“jsl@bu.edu”) is 10 bytes long, or 11 including the trailing null. In 1988, most internet user and host names were similarly small. The finger service did not check the size of the incoming text, assuming that users would not be inclined to type exceedingly long names.
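The arithmetic behind that assumption is easy to verify in a few lines of Python (the 100-byte limit comes from the buffer in Figure 2.7, and the query is the earlier example for user jsl at bu.edu):

```python
query = "jsl@bu.edu"            # the example query, as typed by a user
BUF_SIZE = 100                  # the finger buffer's size, per Figure 2.7

text_bytes = len(query)         # 10 bytes of text
total_bytes = text_bytes + 1    # 11 bytes with the trailing null

# The assumption held for ordinary 1988-era names...
assert total_bytes <= BUF_SIZE
# ...but the service never performed this comparison on real input.
```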

Exploiting “Finger”

Morris had heard about this flaw and was aware of how an attacker might exploit it. When the worm program started running on a particular host computer, it would try to connect across the internet to other host computers. Then the worm would use the finger connection to try to attack the victim computer.

FIGURE 2.8 shows how the worm used a network connection to attack the finger service. The infected host computer (right) connected to the finger process (left). It then sent an excessively long string as the “name” of the user.

FIGURE 2.8 Attacking a computer via the finger process.

The finger service read this string into its buffer, but the string was much longer than the buffer. As the finger service didn’t check the size of the incoming data, it simply kept copying the data into RAM past the end of the buffer’s assigned space. Thus, the input string “overflowed” the buffer. In FIGURE 2.9, the overflow spans the variables filled with “XXX.”

FIGURE 2.9 Buffer overflow in the finger service.
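The copy-past-the-end behavior shown in Figure 2.9 can be sketched in a few lines of Python. This is a simulation of the flawed C pattern, not the original finger code: RAM is modeled as a byte array, and the copy loop trusts the sender’s length instead of the buffer’s size.

```python
ram = bytearray(64)            # a small model of RAM
BUF_START, BUF_SIZE = 0, 16    # a 16-byte buffer at the start...
NEIGHBOR = 16                  # ...with another data item right after it

ram[NEIGHBOR] = 0x7F           # the neighboring item's original value

def unchecked_copy(data):
    # The flaw: copy every incoming byte, never comparing
    # len(data) against BUF_SIZE.
    for i, byte in enumerate(data):
        ram[BUF_START + i] = byte

unchecked_copy(b"X" * 20)      # 20 bytes into a 16-byte buffer
print(hex(ram[NEIGHBOR]))      # 0x58 ('X'): the neighbor was overwritten
```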

The Shellcode

Morris carefully crafted the overflow data so that the worm process then could infect the computer it attacked. Morris had studied how the finger service used RAM and where it saved program counters for returning from procedures. He designed the overflow data to contain two essential components:

  1. A sequence of computer instructions (a program called “shellcode”) would start a command shell, like “cmd” on Windows. Instead of taking commands from the keyboard, this shell took its commands from the attacking host computer. The commands traveled over the same connection the worm used to attack the finger service. In Figure 2.9, the shellcode program starts at location 1260 and ends before 1450.

  2. A storage location contained the return address for the network read function. The overflow data overwrote the return address. Once the read function finished, it tried to return to its caller by using the address stored at location 1180. The overflow rewrote that location to point to the shellcode (location 1260).

The attack took place as follows. Morris’ worm program sent a query to the computer’s finger service. The query was so long that it overflowed the buffer as shown. When the finger program called the network read function, that function copied the overlong query into the too-short buffer. The message overwrote the function’s return address.

When the network read function ended, it tried to return to its caller by retrieving the return address. Instead, however, that return address redirected the CPU to execute the shellcode. When executed, the shellcode started a command shell that took its commands from the opened network connection shown in Figure 2.8. The computer that sent the attack then gave commands to upload and install the worm program.
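The whole sequence can be simulated using the illustrative addresses from Figures 2.7 and 2.9. This is a Python sketch of the mechanism; the real attack corrupted a C program’s stack, and the real shellcode was machine instructions, not placeholder bytes.

```python
RAM_BASE = 1000
ram = bytearray(500)                 # models addresses 1000-1499

BUF_ADDR = 1020                      # the 100-byte user-ID buffer
RET_ADDR = 1180                      # the read function's saved return address
SHELLCODE_ADDR = 1260                # where the shellcode will land

def poke16(addr, value):             # store a 2-byte address in the model RAM
    ram[addr - RAM_BASE:addr - RAM_BASE + 2] = value.to_bytes(2, "little")

def peek16(addr):                    # fetch a 2-byte address back
    return int.from_bytes(ram[addr - RAM_BASE:addr - RAM_BASE + 2], "little")

poke16(RET_ADDR, 1004)               # the legitimate return address

# Build the overlong "query": padding up to the saved return address,
# the shellcode's address, more padding, then the shellcode itself.
attack = b"X" * (RET_ADDR - BUF_ADDR)                 # fills 1020-1179
attack += SHELLCODE_ADDR.to_bytes(2, "little")        # overwrites 1180
attack += b"X" * (SHELLCODE_ADDR - RET_ADDR - 2)      # fills 1182-1259
attack += b"\x90\x90"                # stand-in bytes for the shellcode

# The unchecked read copies the whole message, starting at the buffer.
start = BUF_ADDR - RAM_BASE
ram[start:start + len(attack)] = attack

# When the read function "returns," the CPU jumps to whatever the
# saved return address now holds: the start of the shellcode.
print(peek16(RET_ADDR))              # 1260
```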

This attack worked because of shortcomings in the system. The first problem was that the CPU could not distinguish between instructions and data inside RAM. Although some CPUs can be programmed to treat control sections as instructions and data sections as data, not all Unix systems used the technique. If the system had done so, the CPU would not have executed the shellcode written on the stack. Even today, not all systems and applications use this feature. On Microsoft systems, there is a specific data execution prevention (DEP) feature. The operating system provides it, but only if applications take the trouble to distinguish between their control and data sections.

Another feature of the finger process made this attack especially powerful: The finger process ran with “root” privileges. In other words, the finger process had full administrative power on the computer. Once finger was subverted, the attacker likewise had full administrative access to the computer.

This is a notable case in which Least Privilege could have reduced the damage of an attack. In essence, the finger process had “Most Privilege” instead of Least Privilege. There were ways to arrange the Unix network software so that the finger process could run with user privileges, but it was easier to give it “root” privileges. This trade-off made sense in an experimental environment, but not in the face of growing risks.

The Worm Released

The Morris worm was successful in one sense: When Morris released it in late 1988, the worm soon infested about 10 percent of the machines on the internet. Morris had included a check in the program to limit the number of copies of the worm that ran on each computer, but the check did not halt its spread. Soon dozens and even hundreds of copies of the worm were running on each infested computer. Each computer would infest its neighbors dozens of times over, and those neighbors returned the favor.

Buffer overflows are an ongoing problem in computer software. Some blame the C programming language, which has been used to write many modern programs, protocol processes, and operating systems. The original programming libraries for C did not provide boundary checking for text-oriented input. This may have made programs more efficient when computers were smaller and slower, but the risks of buffer overflow no longer justify the efficiency gains.

Modern programming languages like Java and many scripting languages will automatically check for buffer overflow. However, many programmers still use C. Modern C libraries provide ways to check for buffer overflow, but not all programmers understand and use them.
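A sketch of the difference in Python: the same unchecked copy loop that silently corrupts memory in C stops with an exception here, because the language runtime bounds-checks every write.

```python
buffer = bytearray(100)        # a 100-byte buffer, as in the finger example
message = b"X" * 242           # an attack-sized incoming message

try:
    for i, byte in enumerate(message):
        buffer[i] = byte       # raises IndexError once i reaches 100
except IndexError:
    print("overflow blocked")  # no neighboring memory was touched
```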

2.3.2 Security Alerts

The Morris worm was the first internet-wide security incident. The worm was active on the internet by late afternoon, November 2. By 9 p.m., major internet sites at Stanford University and at MIT saw the effects of the worm. The University of California at Berkeley was attacked around 11 p.m. An expert in California reported characteristics of the attack via email early the following morning.

An informal network of experts at major universities quickly began to analyze the worm. An annual meeting of Unix experts was in progress at Berkeley when the attack took place, and the participants soon went to work analyzing the attack.

Some site managers voluntarily disconnected themselves from the internet to deal with the worm. In some cases, they disconnected before they were infected. In other cases, the disconnection made it difficult to get information to combat the site’s own worm infestations.

Establishing CERT

In response to the Morris worm, the United States government established its first Computer Emergency Response Team (CERT) at the Software Engineering Institute of Carnegie Mellon University. CERT provides an official clearinghouse for reporting security vulnerabilities in computing systems. The CERT Coordination Center (CERT/CC) is a division of CERT that was established within weeks of the worm attack to coordinate the response to future internet-wide security incidents. More recently, the U.S. Department of Homeland Security transferred the traditional CERT reporting and coordination activities to the government-operated US-CERT organization. Other countries and large entities have established their own CERT organizations. Computing vendors have their own processes for handling and reporting vulnerabilities in their products.

For several years, CERT published “CERT Advisories” to report security vulnerabilities. Organizations and security experts would use the CERT Advisory number to refer to well-known vulnerabilities. Today, we typically use “CVE numbers” taken from the Common Vulnerabilities and Exposures (CVE) database. The MITRE Corporation maintains this database on behalf of the U.S. government. As with CERT Advisories, the system relies on the discovery of vulnerabilities by vendors or other interested parties, and the reporting of these vulnerabilities through the CVE process. The CVE database helps ensure that every security vulnerability has a single designation that is used consistently by all vendors, software tools, and commentators in the information security community.

Although the Morris worm attack took place in 1988, it took a decade for worm attacks to become a recurring problem on the internet. The “Internet Storm Center” was established in 1998 to watch internet traffic patterns and look for disruptions caused by large-scale worm attacks. It is sponsored by the SANS Institute (SysAdmin, Audit, Network, Security Institute), a cooperative that provides education and training in system administration and security.

2.3.3 Studying Cyberattacks

When we profile threat agents, we use news reports and other published sources to fill out the profile. Here we use similar sources to study potential and actual attacks. These studies take two forms:

  1. Attack scenarios: These study possible attacks. While a scenario doesn’t need to describe an attack that has actually occurred, it should be based on proven attack vectors. For example, the scenario might combine attack vectors from two or more previous attacks to illustrate a new risk. A scenario doesn’t identify relevant threat agents; it focuses on goals and resources instead.

  2. Attack case studies: These review actual attacks, both successes and failures. A case study always includes an attack scenario. Additional sections identify the threat agent responsible for the attack, and they assess the effectiveness of pre-attack risk management activities.

Be sure to use authoritative sources when writing a scenario or case study. The recommendations for sources of technical information in threat agent profiling apply to attack studies as well: primary sources are best, but other authoritative sources may be used if they are trustworthy.

As with threat agents, attacks often affect non-cyber resources. We can use these techniques to study non-cyberattacks as well as cyberattacks.

Attack Scenario

An attack scenario describes an attack that is theoretically possible and that may or may not have happened. In this text, attack scenarios contain specific information in a specific format:

  ■   Goals—a few sentences describing the goals of the attack. The goals should match goals associated with recognized threat agents.

  ■   Resources required:

    • Skills and/or training—special skills required for the attack

    • Personnel—number and types of people required for the attack

    • Equipment—special equipment required for the attack

    • Preparation time—amount of lead time required to set up the attack

    • Timing constraint—whether the attack is tied to a particular schedule or event

  ■   How it happens—how the attack takes place

  ■   Collateral results—attack results in addition to the goals noted above

  ■   Recommended mitigation—basic steps that could prevent the attack. This section identifies the acts or omissions that make the attack feasible.

  ■   References—authoritative sources supporting the scenario description

We can identify likely attacks using a well-written scenario. The goals and resources help identify threat agents who have the means and motive to perform the attack. The recommended mitigation identifies conditions that could prevent the attack.

Attack Case Study

An attack case study builds on an attack scenario to describe an actual attack. The case study reviews the attack as an effort by the identified perpetrator: the threat agent. We assess the attack’s success or failure based on the threat agent’s motivation. We also assess how risk management efforts prior to the attack affected its outcome. The case study follows this format:

  ■   Overview—a paragraph that summarizes the attack, including what happened, when it happened, and the degree to which the attack succeeded.

  ■   Perpetrator—brief description of the threat agent who performed the attack. This does not need to be a complete profile; the attack scenario fills in many details about the agent’s resources and motivation.

  ■   Attack scenario—description of the attack as it took place, using the scenario format. Omit the “References” section from the scenario; combine scenario references with the others at the end of this case study.

  ■   Risk management—description of how pre-attack risk management activities affected the attack’s outcome. Identify these effects using the steps of the Risk Management Framework (Section 1.1.2).

  ■   References—a list of authoritative references used in the attack scenario and in other sections of the case study.

Studying an Attack: The Morris Worm

To illustrate the elements of an attack scenario and case study, we will look at how the Morris worm fits into this format (BOX 2.1). We will not repeat the worm’s details, except to show where the details fit into these studies.

The “Risk Management” section omits Steps 3, 4, and 5 from the case study. Step 3, implementation, is essentially redundant with Step 2 because the selected controls existed in the computers and just needed to be activated. Step 4, assessment, takes place only if there are explicit security requirements to validate. Internet sites at that time considered security requirements only in the most general manner. A site might require users to log in and be authenticated, but sites rarely tested the mechanism. They assumed the operating system developers had already tested it. Step 5, authorization, rarely took place; if the system could connect to the internet, it was implicitly authorized to connect. The internet’s decentralized nature prevented authorities from restricting access or exercising oversight.
