2.5 Keeping Processes Separate

Worms and other malware aren’t the only risks to computers. Operator errors and software bugs can pose as big a threat. Operating systems use access control mechanisms to keep processes separate.

Computing pioneers didn’t need computer-based access control mechanisms. In the 1940s and 1950s, a user typically had exclusive use of the computer while his or her programs ran. Access control relied on physical protection of the computer itself. Users shared the computer by taking turns, but they didn’t have to share the space inside it. Each user brought along his or her own persistent storage to use while the program ran; once a program finished, a computer operator swapped that user’s storage for the next user’s. As computer hardware and software became more sophisticated, designers realized a computer could do more work if it ran several users’ programs at once. This was the origin of multitasking.

The earliest computing systems were incredibly expensive to build and operate. A typical high-performance “mainframe” computer system in the 1960s might rent time at the rate of $100 per second of execution time. Today, we can buy an entire computer for the cost of renting less than a minute of 1960s mainframe time. Back then, computer owners had to keep their computers busy every moment in order to recoup the high costs. Multitasking helped keep the expensive mainframes busy.

Separation Mechanisms

Effective multitasking depended on protection to keep the processes separate. The University of Manchester, England, developed such protections for the Atlas computer in the early 1960s. In the United States, Fernando Corbató’s team at MIT used similar techniques to allow about 30 people to share a large IBM 7090 mainframe. These systems relied on access control mechanisms to protect processes from one another and to allow sharing of other computing resources. The protections relied on special circuits within the CPU to provide two features:

  1. Two “modes” to distinguish between a user’s application programs and special operating system programs

  2. Mechanisms to restrict a process to those specific regions of RAM assigned to it

Program Modes

The two program modes are user mode and kernel mode. Most programs run in user mode, which provides all of the CPU’s standard instructions for calculating, comparing, and making decisions. A handful of special instructions are available only in kernel mode. The kernel mode instructions switch the CPU between processes and control which parts of RAM are visible to the CPU. When the operating system switches between processes, it runs a short kernel-mode program called the dispatcher. The dispatcher switches the program counter and RAM permissions from the previous process to the next one (see Section 2.7).

Keep in mind that these modes are different from the “administrative” or “root” privileges referred to when discussing the Morris worm. Such access privileges are enforced by decisions of the operating system’s software. The CPU itself enforces the kernel and user modes when instructions execute.
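
To make the dispatcher’s bookkeeping concrete, here is a minimal C sketch that simulates it entirely in user space: save the outgoing process’s program counter, point the CPU at the incoming process’s page table, and restore the incoming process’s state. Every name in it (the process and cpu structures, dispatch, the toy page tables) is invented for illustration; a real dispatcher runs in kernel mode and relies on privileged, hardware-specific instructions.

/* A toy, user-space simulation of a dispatcher's job: save the "CPU state"
 * of one process, restore another's, and switch which page table the CPU
 * consults. Real dispatchers run in kernel mode with privileged
 * instructions; everything here is illustrative. */
#include <stdio.h>

struct page_table { long base; long limit; };   /* stand-in for a real page table */

struct process {
    const char *name;
    long saved_program_counter;                 /* where this process resumes */
    struct page_table *page_table;              /* RAM this process may touch */
};

struct cpu {
    long program_counter;
    struct page_table *active_page_table;       /* switched only in kernel mode */
};

void dispatch(struct cpu *cpu, struct process *from, struct process *to)
{
    from->saved_program_counter = cpu->program_counter;  /* save outgoing state  */
    cpu->active_page_table = to->page_table;             /* privileged operation */
    cpu->program_counter = to->saved_program_counter;    /* resume incoming one  */
    printf("dispatch: %s -> %s (resume at %ld)\n",
           from->name, to->name, to->saved_program_counter);
}

int main(void)
{
    struct page_table pt_a = { 0, 4096 }, pt_b = { 8192, 4096 };
    struct process a = { "editor", 100, &pt_a };
    struct process b = { "spreadsheet", 500, &pt_b };
    struct cpu cpu = { 100, &pt_a };            /* the "editor" is running now */

    dispatch(&cpu, &a, &b);                     /* give the spreadsheet a turn */
    dispatch(&cpu, &b, &a);                     /* and switch back again       */
    return 0;
}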

RAM Protection

When a process performs some work, it collects the results in RAM. In modern operating systems, each process operates in its own “island” of RAM. The island is implemented using a page table that lists the regions of RAM assigned to that process. When the process fetches an instruction or data from RAM, the CPU consults the page table. If the table doesn’t include that location, the CPU stops the process with a memory error. The table also indicates whether the process may write to parts of RAM as well as read them.

When a process begins, the operating system constructs a page table that lists its control and data sections. A program touches other parts of RAM only by mistake. A program might reach “out of bounds” if its code contains an error. These memory restrictions don’t interfere when a program operates correctly.

The CPU can read or write any part of RAM while in kernel mode. When the operating system switches from one process to another, it switches to the page table for the new process. The new table points to different parts of RAM than those used by other processes.
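
The C sketch below models the check described above. The page_entry structure, the addresses, and the section sizes are invented for illustration; a real CPU performs this lookup in hardware on every memory reference, using page tables the operating system builds.

/* A simplified model of RAM protection: the "page table" lists the regions a
 * process may use and whether each one is writable. An access outside the
 * table, or a write to a read-only region, is refused. */
#include <stdbool.h>
#include <stdio.h>

struct page_entry { unsigned long start, length; bool writable; };

/* Return true if the access is allowed; false stands for a memory error. */
bool check_access(const struct page_entry *table, int entries,
                  unsigned long address, bool is_write)
{
    for (int i = 0; i < entries; i++) {
        const struct page_entry *e = &table[i];
        if (address >= e->start && address < e->start + e->length)
            return is_write ? e->writable : true;  /* listed: reads always allowed */
    }
    return false;             /* not in the table: the CPU stops the process */
}

int main(void)
{
    struct page_entry table[] = {
        { 0x1000, 0x1000, false },   /* control section: read-only */
        { 0x8000, 0x2000, true  },   /* data section: read/write   */
    };

    printf("read  0x1800: %s\n", check_access(table, 2, 0x1800, false) ? "ok" : "memory error");
    printf("write 0x1800: %s\n", check_access(table, 2, 0x1800, true)  ? "ok" : "memory error");
    printf("read  0x5000: %s\n", check_access(table, 2, 0x5000, false) ? "ok" : "memory error");
    return 0;
}

Running the sketch reports the first read as “ok” and the other two accesses as memory errors: one because the region is read-only, the other because the address isn’t in the table at all.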

RAM protections such as page tables, together with the distinction between kernel and user mode, keep the system running smoothly. They keep processes safe from one another and keep the operating system itself safe from misbehaving processes.

User Identities

Building on process protection, an operating system provides the “vault” from which it serves up individual files and other resources. This mechanism typically employs user identification. When a particular individual starts to use the computer, the operating system associates a simple, unique name, the user identity or user ID, with that person. When processes try to use a system’s resources (e.g., files and printers), the operating system may grant or deny access according to the user’s identity.
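
As a small illustration on a POSIX system such as Linux, the C sketch below compares the user ID that owns a file with the user ID of the process asking for it. The file name example.txt is hypothetical, and real operating systems consult much richer permission information than a single ownership check; the point is simply that access decisions hinge on the user identity.

/* Compare the user ID that owns a file with the user ID of the running
 * process, and grant or deny access on that basis. POSIX-specific sketch. */
#include <stdio.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    const char *path = "example.txt";        /* hypothetical file name */
    struct stat info;

    if (stat(path, &info) != 0) {            /* ask the OS who owns the file */
        perror("stat");
        return 1;
    }

    if (info.st_uid == getuid())             /* same user ID as this process? */
        printf("User %u owns %s; access granted.\n", (unsigned)getuid(), path);
    else
        printf("User %u does not own %s; access denied.\n", (unsigned)getuid(), path);
    return 0;
}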

Evolution of Personal Computers

The cost of computing circuits dropped dramatically as the circuits changed from expensive vacuum tubes to lower-cost transistors and then to integrated circuits. Over the same period, the circuits became correspondingly faster and more powerful.

After about 5 years of watching integrated circuit development, engineer Gordon Moore published the observation that computer speed and circuit density seemed to double every year. This observation was dubbed Moore’s law. After closer examination, the industry concluded that the doubling actually averages every 18 months. This rate of improvement has been sustained since the 1960s. For example, a hundred million bytes of hard drive storage cost about $100,000 to purchase in 1970; the same amount of storage cost $20,000 6 years later. By 1990, the cost had dropped to $2,500.

The falling cost of circuits also produced a market for less-capable computers that cost far less than mainframes. Individuals often used these computers interactively, as we use computers today. The internal software was intentionally kept simple because the computers lacked the speed and storage to run more complex programs.

“Microcomputers” emerged in the 1970s with the development of the microprocessor. The first microcomputers were slow and provided limited storage, but they cost less than $2,000. The earliest machines were operated using text-based commands typed on a keyboard. The Unix shell and MS-DOS both arose from lower-cost computers with limited storage and performance. Graphics improved as circuits and storage improved, leading to modern personal computers.

Security on Personal Computers

Process protection rarely appeared on personal computers until the late 1990s. The first low-cost desktop operating systems were fairly simple, although some could run multiple processes. Those operating systems did only the minimum needed to provide multitasking, and they did very little to protect one process from another. Computer viruses could easily install themselves in RAM and wreak havoc on the rest of the system. Even if the system was virus-free, the lack of protection left every running program vulnerable to software bugs in any of the others.

For example, Alice’s dad used to run both the word-processing program and the spreadsheet on his 1990-vintage computer. If a nasty error arose in the word-processing program, it could damage the spreadsheet’s code stored in RAM. When the spreadsheet program took its turn to run, the damaged instructions could then damage Dad’s saved spreadsheet data. This was not a big problem while programs remained simple and reliable, but programs constantly added features. These changes made the programs larger, and they often made them less reliable. Process protection made it safer to run several programs at once, and it also made it easier to detect and fix software errors.

Mobile devices rarely run programs on behalf of multiple users, but they do share programs between applications. For example, different applications may run the same file system functions on their own behalf, relying on the system’s protections to keep their files separate.

Operating System Security Features

To summarize, a modern operating system contains the following features to run multiple processes and keep them safe and separate:

  ■   Different programs take turns using the CPUs or cores.

  ■   Separate parts of RAM are given to separate programs.

  ■   RAM used by one process is protected from damage by other processes.

  ■   A separate, window-oriented user interface is provided for each program, so one program doesn’t put its results in a different program’s window.

  ■   Access to files or other resources is granted based on the user running the program.

2.5.1 Sharing a Program

We keep processes separate by giving them individual islands of RAM. The protections between processes keep them from interfering with one another by keeping them out of each other’s RAM. This is fine if our only concern is protection. However, there are times when it makes sense to share RAM between processes. For example, if two processes are running the same program (FIGURE 2.11), then we can save RAM if the processes share the same control section.

FIGURE 2.11 Two processes sharing a control section.

Screenshots used with permission from Microsoft.

Problems arise if one process changes the control section while another process is using it. This could cause the second process to fail. The operating system prevents such problems by setting up an access restriction on the control section; the processes may read the RAM containing the control section, but they can’t write to that RAM. The CPU blocks any attempts to write to the control section. This is an example of a read-only (RO) access restriction.
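
On a POSIX system such as Linux, an ordinary program can watch a read-only restriction being enforced. The C sketch below marks a page of RAM read-only with the standard mprotect() call, standing in for a shared control section; reading it works, while the commented-out write would make the CPU raise a fault and the operating system stop the process with a segmentation fault.

/* Demonstrate a read-only (RO) restriction on a page of RAM using the POSIX
 * mmap() and mprotect() calls. After the page is marked read-only, reads
 * succeed; a write would trigger a hardware fault. */
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    size_t size = (size_t)sysconf(_SC_PAGESIZE);           /* one page of RAM */

    /* Acquire a page with read/write access and put something in it. */
    char *region = mmap(NULL, size, PROT_READ | PROT_WRITE,
                        MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (region == MAP_FAILED) { perror("mmap"); return 1; }
    strcpy(region, "pretend shared control section");

    /* Now mark the page read-only, as the OS does for a shared control section. */
    if (mprotect(region, size, PROT_READ) != 0) { perror("mprotect"); return 1; }

    printf("reading is fine: %s\n", region);

    /* region[0] = 'X';    <- this write would raise a hardware fault and
     *                        the operating system would stop the process  */

    munmap(region, size);
    return 0;
}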

In Figure 2.11, the arrows indicate the flow of data between each process (the ovals) and other resources. The control section is read-only, so the arrow flows from the control section to the processes. The window is part of an output device, so the arrow flows to the window from the processes. The processes can both read and write their data sections, so those arrows point in both directions. We isolate each process by granting it exclusive access to its own data section and blocking access to other data sections.

However, some processes need to share a data section with other processes. For example, networking software will often use several separate processes. These processes pass network data among one another using shared buffers in RAM. The operating system establishes shared blocks of RAM and grants full read/write access to both processes. This is the fastest and most efficient way to share data between processes.

Access Matrix

TABLE 2.1 takes the access permissions shown in Figure 2.11 and displays them in a two-dimensional form called an access matrix. Some researchers call this “Lampson’s matrix” after Butler Lampson, the computer scientist who introduced the idea in 1971. Lampson developed the matrix after working on a series of timesharing system designs.

TABLE 2.1 Access Matrix for Processes in Figure 2.11

                     Control section   Process #1 data   Process #2 data
Process #1           R-                RW                --
Process #2           R-                --                RW
Operating system     RW                RW                RW

Access Rights

Each cell in the matrix represents the access rights granted to a particular subject for interacting with a particular object. Because the matrix lists all subjects and all objects on the system, it shows all rights granted. When we talk about RAM access, the access choices include:

  ■   Read/write access allowed (abbreviated RW)

  ■   Read access only, no writing (omit the W; show only R-)

  ■   No access allowed (omit both R and W; show two hyphens --)

The operating system always has full read/write access to RAM used by processes it runs. This allows it to create processes, to ensure that they take turns, and to remove them from the system when they have finished execution or when they misbehave.
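
An access matrix maps naturally onto a two-dimensional array, with one cell per subject and object. The C sketch below encodes the rights for Figure 2.11 as the text describes them; the subject and object names are illustrative rather than copied from the figure.

/* A small model of Lampson's access matrix for the two processes sharing a
 * control section. Each cell holds the rights string "RW", "R-", or "--". */
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

enum { SUBJECTS = 3, OBJECTS = 3 };

static const char *subjects[SUBJECTS] = { "Process #1", "Process #2", "the operating system" };
static const char *objects[OBJECTS]   = { "the control section", "Process #1 data", "Process #2 data" };

/* matrix[s][o] is the cell for subject s and object o. */
static const char *matrix[SUBJECTS][OBJECTS] = {
    { "R-", "RW", "--" },    /* Process #1: shared code read-only, own data RW */
    { "R-", "--", "RW" },    /* Process #2: likewise, with its own data        */
    { "RW", "RW", "RW" },    /* the OS has full access to process RAM          */
};

static bool allowed(int s, int o, char right)   /* right is 'R' or 'W' */
{
    return strchr(matrix[s][o], right) != NULL;
}

int main(void)
{
    printf("May %s write %s? %s\n", subjects[0], objects[0],
           allowed(0, 0, 'W') ? "yes" : "no");
    printf("May %s read %s? %s\n",  subjects[1], objects[2],
           allowed(1, 2, 'R') ? "yes" : "no");
    printf("May %s read %s? %s\n",  subjects[0], objects[2],
           allowed(0, 2, 'R') ? "yes" : "no");
    return 0;
}

Checking a right amounts to looking in one cell; granting or revoking a right amounts to changing that cell.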

In modern systems, programs often contain several separate control sections. One control section contains the “main” procedure where the program begins and ends. Other control sections often contain procedure libraries that are shared among many different programs. On Microsoft Windows, these are called “dynamic link libraries” and carry a “.dll” suffix.

2.5.2 Sharing Data

In modern computers, two or more processes often need to work on the same data. When we need to send data to a printer or other output device, for example, our own program uses the operating system to give that data to the printer’s device driver. FIGURE 2.12 shows how the operating system arranges this in RAM.

FIGURE 2.12 Two processes (a program and a device driver) share a data section.

First, the operating system acquires a data section to share with the printer device driver. This data section holds the buffer for the data we send to the printer.

When our program produces data to be printed, the program places that data in the buffer. When the buffer is full, the operating system tells the device driver process, which then retrieves the data from the buffer and sends it to the printer.
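
The C sketch below imitates this arrangement with two ordinary processes on a POSIX system: mmap() with the MAP_SHARED flag provides the shared buffer, the parent process plays the application, and the child process stands in for the device driver. A real driver lives inside the operating system and uses proper signaling rather than a timed delay, so treat this only as an illustration of a shared read/write data section.

/* Two processes share a read/write buffer in RAM. The parent (the
 * "application") fills the buffer; the child (the pretend "device driver")
 * retrieves the data. POSIX-specific, user-space illustration only. */
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    /* Acquire a data section that both processes will be able to reach. */
    char *buffer = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                        MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    if (buffer == MAP_FAILED) { perror("mmap"); return 1; }

    pid_t pid = fork();                       /* create the second process */
    if (pid < 0) { perror("fork"); return 1; }

    if (pid == 0) {                           /* child: the pretend driver */
        sleep(1);                             /* crude stand-in for real signaling */
        printf("driver sends to printer: %s\n", buffer);
        return 0;
    }

    /* parent: the application producing data to be printed */
    strcpy(buffer, "Hello from the word processor");
    wait(NULL);                               /* let the pretend driver finish */
    munmap(buffer, 4096);
    return 0;
}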

If we focus on the left and center of Figure 2.12, the two processes look simple and conventional; each has its own control section and one data section. The special part appears on the right, with the shared buffer stored in its own data section. Both processes have read and write access to it; this allows them to leave signals for each other in RAM to help keep track of the status of the output operations. TABLE 2.2 presents the access matrix for the system in Figure 2.12.

TABLE 2.2 Access Matrix for Figure 2.12

The device driver is part of the operating system, and it’s not unusual to grant the device driver full access to all of RAM, just like the rest of the operating system. It is, however, much safer for the operating system and the computer’s owner if we apply Least Privilege to device drivers. In other words, we grant the driver only the access rights it really needs.

Windows NT was the first version of Windows to provide modern process protection. Vintage Windows systems, like NT, occasionally suffered from the “Blue Screen of Death.” This was a failure in which the Windows system encountered a fatal error, switched the display into a text mode (with a blue background), and printed out the CPU contents and key information from RAM. The only way to recover was to reboot the computer. Process protection kept application programs from causing “blue screen” failures; many such failures on NT were traced to faulty device drivers.

One of the risks with device drivers is that the operating system developers rarely write them. Microsoft writes a few key drivers for Windows, but hardware manufacturers write most of them. Although many driver writers are careful to follow the rules and test their software, not all are as careful.
