Inside the Box

First we'll need to expand on the definition of hardware. As noted earlier, hardware means the physical components of a computer, the ones you can touch.[3] Examples are the monitor, which displays your data while you're looking at it, the keyboard, the printer, and all of the interesting electronic and electromechanical components inside the case of your computer.[4] Right now, we're concerned with the programmer's view of the hardware. The hardware components of a computer with which you'll be primarily concerned are the disk, RAM (Random Access Memory), and last but certainly not least, the CPU.[5] We'll take up each of these topics in turn.

[3] Whenever I refer to a computer, I mean a modern microcomputer capable of running MS-DOS® or some version of Windows®; these are commonly referred to as PCs. Most of the fundamental concepts are the same in other kinds of computers, but the details differ.

[4] Although it's entirely possible to program without ever seeing the inside of a computer, you might want to look in there anyway, just to see what the CPU, RAM chips, disk drives, and other components look like. Some familiarity with the components will give you a head start if you ever want to expand the capacity of your machine.

By the way, Susan recommends that you clean out the dust bunnies with a computer vacuum cleaner while you are in there; it's amazing how much dust can accumulate inside a computer case in a year or two!

[5] Other hardware components can be important to programmers of specialized applications; for example, game programmers need extremely fine control of how information is displayed on the monitor. However, we have enough to keep us busy learning how to write general data-handling programs. You can always learn how to write games later, if you're interested in doing so.

Disk

When you sit down at your computer in the morning, before you turn it on, where are the programs you're going to run? To make this more specific, suppose you're going to use a word processor to revise a letter you wrote yesterday before you turned the computer off. Where is the letter, and where is the word processing program?

You probably know the answer to this question; they are stored on a disk inside the case of your computer. Technically, this is a hard disk, to differentiate it from a floppy disk, the removable storage medium often used to distribute software or transfer files from one computer to another.[6] Disks use magnetic recording media, much like the material used to record speech and music on cassette tapes, to store information in a way that will not be lost when power is turned off. How exactly is this information (which may be either executable programs or data such as word processing documents) stored?

[6] Although at one time many small computers used floppy disks for their main storage, the tremendous decrease in hard disk prices means that today even the most inexpensive computer stores programs and data on a hard disk.

We don't have to go into excruciating detail on the storage mechanism, but it is important to understand some of its characteristics. A disk consists of one or more circular platters, which are extremely flat and smooth pieces of metal or glass covered with a material that can be very rapidly and accurately magnetized in either of two directions, “north” or “south”. To store large amounts of data, each platter is divided into many billions of small regions, each of which can be magnetized in either direction, independently of the other regions. The magnetization is detected and modified by recording heads, similar in principle to those used in tape cassette decks. However, in contrast to the cassette heads, which make contact with the tape while they are recording or playing back music or speech, the disk heads “fly” a few millionths of an inch away from the platters, which rotate at very high velocity inside a sealed enclosure.[7]

[7] The heads have to be as close as possible to the platters because the influence of a magnet (called the magnetic field) drops off very rapidly with distance. Thus, the closer the heads are, the more powerful the magnetic field is, the smaller the region that can be used to store data so that it can be retrieved reliably, and therefore the more data that can be stored in the same physical space. Of course, this leaves open the question of why the heads aren't in contact with the surface; that would certainly solve the problem of being too far away. Unfortunately, this seemingly simple solution would not work at all. There is a name for contact between the heads and the disk surface while the disk is spinning: a head crash. The friction caused by such an event destroys both the heads and the disk surface almost instantly.

The separately magnetizable regions used to store information are arranged in groups called sectors, which are in turn arranged in concentric circles called tracks. All tracks on one side of a given platter (a recording surface) can be accessed by a recording head dedicated to that recording surface; each sector is used to store some number of bytes of data, generally a few hundred to a few thousand. Byte is a coined word meaning a group of binary digits, or bits for short; there are 8 bits in a byte in just about every modern general-purpose computer system, including PCs and Macintoshes.[8] You may wonder why the data aren't stored in the more familiar decimal system, which of course uses the digits from 0 through 9. This is not an arbitrary decision; on the contrary, there are a couple of very good reasons that data on a disk are stored using the binary system, in which each digit has only two possible states, 0 and 1. One of these reasons is that it's a lot easier to determine reliably whether a particular area on a disk is magnetized “north” or “south” than it is to determine 1 of 10 possible levels of magnetization. Another reason is that the binary system is also the natural system for data storage using electronic circuitry, which is used to store data in the rest of the computer.

[8] In some old machines, bytes sometimes contained more or fewer than 8 bits, and there are specialized machines today that have different byte sizes. The C++ language specification requires only that a byte contain at least 8 bits.
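If you're curious, here is a tiny C++ program (a sketch you can safely skip for now, since we haven't covered any C++ syntax yet) that asks the compiler how many bits make up a byte on your machine; on a PC it will report 8.

```cpp
#include <climits>   // defines CHAR_BIT, the number of bits in a byte
#include <iostream>

int main()
{
    // On PCs and virtually all other modern general-purpose machines this
    // prints 8; the C++ standard guarantees it is at least 8.
    std::cout << "Bits per byte on this machine: " << CHAR_BIT << "\n";
    return 0;
}
```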

While magnetic storage devices have been around in one form or another since the very early days of computing, the advances in technology just in the last 17 years have been staggering. To comprehend just how large these advances have been, we need to define the term used to describe storage capacities: the megabyte. The standard engineering meaning of mega is “multiply by 1 million”, which would make a megabyte equal to 1 million (1,000,000) bytes. As we have just seen, however, the natural number system in the computer field is binary. Therefore, one megabyte is often used instead to specify the nearest round number in the binary system, which is 2^20 (2 to the power of 20), or 1,048,576 bytes. This wasn't obvious to Susan, so I explained it some more:

Susan: Just how important is it to really understand that the megabyte is 2^20 (1,048,576) bytes? I know that a meg is not really a meg; that is, it's more than a million. But I don't understand 2^20, so is it enough to just take your word on this and not get bogged down as to why I didn't go any further than plane geometry in high school? You see, it makes me worry and upsets me that I don't understand how you “round” a binary number.

Steve: 2^20 would be 2 to the power of 20; that is, 20 twos multiplied together. This is a “round” number in binary, just as 10^6 (1,000,000, or 6 tens multiplied together) is a round number in decimal.
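For readers who would like to check this for themselves, here is a small C++ sketch (again, feel free to skip the syntax for now) that builds 2^20 the way Steve describes it: by multiplying 1 by 2, twenty times over.

```cpp
#include <iostream>

int main()
{
    // Multiply 1 by 2, twenty times over: this is 2 to the power of 20.
    long megabyte = 1;
    for (int i = 0; i < 20; i++)
        megabyte = megabyte * 2;

    std::cout << "2 to the power of 20 = " << megabyte << "\n";   // prints 1048576
    return 0;
}
```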

Seventeen Years of Progress

With that detail out of the way, we can see just how far we've come in a relatively short time. In 1985, I purchased a 20 megabyte disk for $900 ($45 per megabyte) and its access time, which measures how long it takes to retrieve data, was approximately 100 milliseconds (milli = 1/1000, so a millisecond is 1/1000 of a second). In October 2001, a 40,000 megabyte disk cost as little as $130, or approximately 0.3 cent per megabyte; in addition to delivering almost 14,000 times as much storage per dollar, this disk had an access time of 9 milliseconds, which is approximately 11 times as fast as the old disk. Of course, this significantly understates the amount of progress in technology in both economic and technical terms. For one thing, a 2001 dollar is worth considerably less than a 1985 dollar. In addition, the new drive is superior in every other measure as well. It is much smaller than the old one, consumes much less power, and has many times the projected reliability of the old drive.
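If you would like to verify the arithmetic behind those claims, here is a short C++ sketch that works out the cost per megabyte of each drive and the improvement ratio; the prices and capacities are the ones quoted above.

```cpp
#include <iostream>

int main()
{
    // The 1985 drive: $900 for 20 megabytes.
    double old_cost_per_mb = 900.0 / 20.0;       // $45 per megabyte
    // The late-2001 drive: $130 for 40,000 megabytes.
    double new_cost_per_mb = 130.0 / 40000.0;    // about $0.00325, or roughly 0.3 cent

    std::cout << "1985: $" << old_cost_per_mb << " per megabyte\n";
    std::cout << "2001: $" << new_cost_per_mb << " per megabyte\n";
    std::cout << "Storage per dollar improved by a factor of about "
              << old_cost_per_mb / new_cost_per_mb << "\n";      // roughly 13,800
    return 0;
}
```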

This tremendous increase in performance and decrease in price has prevented the long-predicted demise of disk drives in favor of new technology. However, the inherent speed limitations of disks still require us to restrict their role to the storage and retrieval of data for which we can afford to wait a relatively long time.

You see, while 9 milliseconds isn't very long by human standards, it is a long time indeed to a modern computer. This will become more evident as we examine the next essential component of the computer, the RAM.

RAM

The working storage of the computer, where data and programs are stored while we're using them, is called RAM, which is an acronym for random access memory. For example, a word processor is stored in RAM while you're using it. The document you're working on is likely to be there as well unless it's too large to fit all at once, in which case parts of it will be retrieved from the disk as needed. Since we have already seen that both the word processor and the document are stored on the disk in the first place, why not leave them there and use them in place, rather than copying them into RAM?

The answer, in a word, is speed. RAM, which is sometimes called “internal storage”, as opposed to “external storage” (the disk), is physically composed of millions of microscopic switches on a small piece of silicon known as a chip: a 4-megabit RAM chip has approximately 4 million of them.[9] Each of these switches can be either on or off; we consider a switch that is on to be storing a 1, and a switch that is off to be storing a 0. Just as in storing information on a disk, where it is easier to magnetize a region in either of two directions, it's a lot easier to make a switch that can be turned on or off reliably and quickly than one that can be set to any value from 0 to 9 reliably and quickly. This is particularly important when you're manufacturing millions of them on a silicon chip the size of your fingernail.

[9] Each switch is made of several transistors. Unfortunately, an explanation of how a transistor works would take us too far afield. Consult any good encyclopedia, such as the Encyclopedia Britannica, for this explanation.

One major difference between disk and RAM is what steps are needed to access different areas of storage. In the case of the disk, the head has to be moved to the right track (an operation known as a seek), and then we have to wait for the platter to spin until the region we want to access is under the head (a wait called rotational delay). With RAM, on the other hand, the entire process is electronic; we can read or write any byte immediately as long as we know which byte we want. To specify a given byte, we have to supply a unique number, called its memory address, or just address for short.

Memory Addresses

What is an address good for? Let's see how my discussion with Susan on this topic started.

Susan: About memory addresses, are you saying that each little itty bitty tiny byte of RAM is a separate address? Well, this is a little hard to imagine.

Steve: Actually, each byte of RAM has a separate address, which doesn't change, and a value, which does.

In case the notion of an address of a byte of memory on a piece of silicon is too abstract, it might help to think of an address as a set of directions to find the byte being addressed, much like directions to someone's house. For example, “Go three streets down, then turn left. It's the second house on the right”. With such directions, the house number wouldn't need to be written on the house. Similarly, the memory storage areas in RAM are addressed by position; you can think of the address as telling the hardware which street and house you want, by giving directions similar in concept to the preceding example. Therefore, it's not necessary to encode the addresses into the RAM explicitly.

Susan wanted a better picture of this somewhat abstract idea:

Susan: Where are the bytes on the RAM, and what do they look like?

Steve: Each byte corresponds to a microscopic region of the RAM chip. As to what they look like, have you ever seen a printed circuit board such as the ones inside your computer? Imagine the lines on that circuit board reduced thousands of times in size to microscopic dimensions, and you'll have an idea of what a RAM chip looks like inside.

Since RAM has no moving parts, storing and retrieving data in it is much faster than waiting for the mechanical motion of a disk platter turning.[10] As we've just seen, disk access times are measured in milliseconds, or thousandths of a second. However, RAM access times are measured in nanoseconds (ns); nano means one-billionth. In late 2001, a typical speed for RAM was 10 ns, which means that it is possible to read a given data item from RAM about 1,000,000 times as quickly as from a disk. In that case, why not use disks only for permanent storage, and read everything into RAM in the morning when we turn on the machine?

[10] There's also another kind of electronic storage, called ROM, for read-only memory; as its name indicates, you can read from it, but you can't write to it. This is used for storing permanent information, such as the program that allows your computer to read a small program from your boot disk; that program, in turn, reads in the rest of the data and programs needed to start up the computer. This process, as you probably know, is called booting the computer. In case you're wondering where that term came from, it's an abbreviation for bootstrapping, which is intended to suggest the fanciful notion of pulling yourself up by your bootstraps. Also, you may have noticed that the terms RAM and ROM aren't symmetrical; why isn't RAM called RWM, read-write memory? Probably because it's too hard to pronounce.

The reason is cost. In late 2001, the cost of 512 megabytes of RAM was approximately $120. For that same amount of money, you could have bought 37,000 megabytes of disk space! Therefore, we must reserve RAM for tasks where speed is all-important, such as running your word processing program and holding a letter while you're working on it. Also, since RAM is an electronic storage medium (rather than a magnetic one), it does not maintain its contents when the power is turned off. This means that if you had a power failure while working with data only in RAM, you would lose everything you had been doing.[11] This is not merely a theoretical problem, by the way; if you don't remember to save what you're doing in your word processor once in a while, you might lose a whole day's work from a power outage of a few seconds.[12]

[11] The same disaster would happen if your system were to crash, which is not that unlikely under certain operating systems.

[12] Most modern word processors can automatically save your work once in a while, for this very reason. I heartily recommend using this facility; it's saved my bacon more than once.

Before we get to how a program actually works, we need to develop a better picture of how RAM is used. As I've mentioned before, you can think of RAM as consisting of a large number of bytes, each of which has a unique identifier called an address. This address can be used to specify which byte we mean, so the program might specify that it wants to read the value in byte 148257, or change the value in byte 66666. Susan wanted to make sure she had the correct understanding of this topic:

Susan: Are the values changed in RAM depending on what program is loaded in it?

Steve: Yes, and they also change while the program is executing. RAM is used to store both the program and its data.
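To make this a little more concrete, here is a sketch that treats a C++ array of bytes as a stand-in for RAM, with each element's index playing the role of its address. This is only an illustration of the idea of addressing, not how RAM is actually wired up; the particular addresses are the ones mentioned above, and the value 42 is arbitrary.

```cpp
#include <iostream>
#include <vector>

int main()
{
    // Pretend this vector is a (very small) bank of RAM: each element is one
    // byte, and an element's index plays the role of its address.
    std::vector<unsigned char> simulated_ram(1048576);   // one "megabyte" of simulated RAM

    simulated_ram[66666] = 42;                            // change the value in byte 66666
    unsigned int value = simulated_ram[148257];           // read the value in byte 148257

    std::cout << "Byte 66666 now holds " << static_cast<unsigned int>(simulated_ram[66666])
              << ", and byte 148257 holds " << value << "\n";
    return 0;
}
```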

This is all very well, but it doesn't answer the question of how the program actually uses or changes values in RAM, or performs arithmetic and other operations; that's the job of the CPU, which we will take up next.

The CPU

The CPU (central processing unit) is the “active” component in the computer. Like RAM, it is physically composed of millions of microscopic transistors on a chip; however, the organization of these transistors in a CPU is much more complex than on a RAM chip, as the latter's functions are limited to the storage and retrieval of data. The CPU, on the other hand, is capable of performing dozens or hundreds of different fundamental operations called machine instructions, or instructions for short. While each instruction performs a very simple function, the tremendous power of the computer lies in the fact that the CPU can perform (or execute) tens or hundreds of millions of these instructions per second.

These instructions fall into a number of categories: instructions that perform arithmetic operations such as adding, subtracting, multiplying, and dividing; instructions that move information from one place to another in RAM; instructions that compare two quantities to help make a determination as to which instructions need to be executed next and instructions that implement that decision; and other, more specialized types of instructions.[13]

[13] Each type of CPU has a different set of instructions, so programs compiled for one CPU cannot in general be run on a different CPU. Some CPUs, such as the very popular Pentium series from Intel, fall into a “family” of CPUs in which each new CPU can execute all of the instructions of the previous family members. This allows upgrading to a new CPU without having to throw out all of your old programs, but limits the ways in which the new CPU can be improved without affecting this “family compatibility”.

Of course, adding two numbers together, for example, requires that the numbers be available for use. Possibly the most straightforward way of making them available is to store them in and retrieve them from RAM whenever they are needed, and indeed this is done sometimes. However, as fast as RAM is compared to disk drives (not to mention human beings), it's still pretty slow compared to modern CPUs. For example, the computer I'm using right now has a 500 megahertz (MHz) Pentium III CPU, which can execute an instruction in 2 ns.[14]

[14] Since frequency is measured in decimal units rather than in binary units, the mega in megahertz means one million (10^6), not 1,048,576 (2^20) as it does when referring to memory and disk capacity. I'm sorry if this is confusing, but it can't be helped.

To see why RAM is a bottleneck, let's calculate how long it would take to execute an instruction if all the data had to come from and go back to RAM. A typical instruction would have to read some data from RAM and write its result back there; first, though, the instruction itself has to be loaded (or fetched) into the CPU before it can be executed. Let's suppose we have an instruction in RAM, reading and writing data also in RAM. Then the minimum timing to do such an instruction could be calculated as in Figure 2.1.

Figure 2.1. RAM vs. CPU speeds


To compute the effective speed of a CPU, we divide 1 second by the time it takes to execute one instruction.[15] Given the assumptions in this example, the CPU could execute only about 31 MIPS (million instructions per second), which is a far cry from the 500 MIPS or more that we might expect.[16] This seems very wasteful; is there a way to get more speed?

[15] In fact, the Pentium III or Pentium 4 can execute more than one instruction at a time if conditions are right. I'll ignore this detail in my analysis, but if I considered it, the discrepancy between memory speeds and CPU speeds would be even greater.

[16] 1 second/32 ns per instruction = 31,250,000 instructions per second.
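If you'd like to check this for yourself, here is a small sketch that reproduces the arithmetic behind Figure 2.1, assuming one 10 ns RAM access each to fetch the instruction, read its data, and write its result, plus the 2 ns the CPU needs to execute the instruction.

```cpp
#include <iostream>

int main()
{
    // Assumed timings, in nanoseconds: three trips to 10 ns RAM plus 2 ns of
    // actual execution inside a 500 MHz CPU.
    double fetch_ns = 10.0, read_ns = 10.0, execute_ns = 2.0, write_ns = 10.0;
    double total_ns = fetch_ns + read_ns + execute_ns + write_ns;   // 32 ns per instruction

    double mips = 1e9 / total_ns / 1e6;   // 1 second divided by 32 ns, in millions
    std::cout << "About " << mips << " million instructions per second\n";   // roughly 31.25
    return 0;
}
```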

In fact, there is. As a result of a lot of research and development both in academia and in the semiconductor industry, it is possible to approach the rated performance of fast CPUs, as will be illustrated in Figure 2.12. Some of these techniques have been around for as long as we've had computers; others have fairly recently trickled down from supercomputers to microcomputer CPUs. One of the most important of these techniques is the use of a number of different kinds of storage devices having different performance characteristics; the arrangement of these devices is called the memory hierarchy. Figure 2.2 illustrates the memory hierarchy of my home machine. Susan and I had a short discussion about the layout of this figure.

Susan: OK, just one question on Figure 2.2. If you are going to include the disk in this hierarchy, I don't know why you have placed it over to the side of RAM and not above it, since it is slower and you appear to be presenting this figure in ascending order of speed from the top of the figure downward. Did you do this because it is external rather than internal memory and it doesn't “deserve” to be in the same lineage as the others?

Steve: Yes; it's not the same as “real” memory, so I wanted to distinguish it.

Figure 2.12. Instruction execution time, using registers and prefetching


Figure 2.2. A memory hierarchy


Before we get to the diagram, I should explain that a cache is a small amount of fast memory where frequently used data are stored temporarily. According to this definition, RAM functions more or less as a cache for the disk; after all, we have to copy data from a slow disk into fast RAM before we can use it for anything. However, while this is a valid analogy, I should point out that the situations aren't quite parallel. Our programs usually read data from disk into RAM explicitly; that is, we're aware of whether it's on the disk or in RAM, and we have to issue commands to transfer it from one place to the other. On the other hand, caches are “automatic” in their functioning. We don't have to worry about them and our programs work in exactly the same way with them as without them, except faster. In any event, the basic idea is the same: to use a faster type of memory to speed up repetitive access to data usually stored on a slower storage device.

We've already seen that the disk is necessary to store data and programs when the machine is turned off, while RAM is needed for its higher speed in accessing data and programs we're currently using.[17] But why do we need the external cache?

[17] These complementary roles played by RAM and the disk explain why the speed of the disk is also illustrated in the memory hierarchy.

Actually, we've been around this track before, when we asked why everything isn't loaded into RAM rather than read from the disk as needed: we're trading off speed against cost. To have a cost-effective computer with good performance requires the designer to choose the correct amount of each storage medium.

So just as with the disk vs. RAM trade-off, the reason that we use the external cache is to improve performance. While RAM can be accessed about 100 million times per second, the external cache is made from a faster type of memory chip, which can be accessed about 250 million times per second. While not as extreme as the speed differential between disk and RAM, it is still significant.

However, we can't afford to use external cache exclusively instead of RAM because there isn't enough of it. Therefore, we must reserve external cache for tasks where speed is all-important, such as supplying frequently used data or programs to the CPU.

The same analysis applies to the trade-off between the external cache and the internal cache. The internal cache's characteristics are similar to those of the external cache, but to a greater degree; it's even smaller and faster, allowing access at the rated speed of the CPU. Both characteristics have to do with its privileged position on the same chip as the CPU; this reduces the delays in communication between the internal cache and the CPU, but means that chip area devoted to the cache has to compete with area for the CPU, as long as the total chip size is held constant.

Unfortunately, we can't just increase the size of the chip to accommodate more internal cache because of the expense of doing so. Larger chips are more difficult to make, which reduces their yield, or the percentage of good chips. In addition, fewer of them fit on one wafer, which is the unit of manufacturing. Both of these attributes make larger chips more expensive to make.

How Caches Improve Performance

To oversimplify a bit, here's how caching reduces the effects of slow RAM. Whenever a data item is requested by the CPU, there are three possibilities (a small code sketch after the list illustrates them).

  1. It is already in the internal cache. In this case, the value is sent to the CPU without referring to RAM at all.

  2. It is in the external cache. In this case, it will be “promoted” to the internal cache and sent to the CPU at the same time.

  3. It is not in either the internal or external cache. In this case, it has to be entered into a location in the internal cache. If there is nothing already stored in that cache location, the new item is simply added to the cache. However, if there is a data item already in that cache location, then the old item is displaced to the external cache, and the new item is written in its place.[18] If the external cache location is empty, that ends the activity; if it is not empty, then the item previously in that location is written out to RAM and its slot is used for the one displaced from the internal cache.[19]

    [18] It's also possible to have a cache that stores more than one item in a “location”, in which case one of the other items already there will be displaced to make room for the new one. The one selected is usually the one that hasn't been accessed for the longest time, on the theory that it's probably not going to be accessed again soon; this is called the least recently used (abbreviated LRU) replacement algorithm.

    [19] This is fairly close to the actual way caches are used to reduce the time it takes to get frequently used data from RAM (known as caching reads); reducing the time needed to write changed values back to RAM (caching writes) is more complicated.
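To see these cases in action, here is a deliberately simplified, single-level version of the idea, written as ordinary C++ rather than hardware. Real caches hold multi-byte lines, are built into the chips, and, as described above, displace old items to the external cache rather than straight back to RAM; the class and function names here are just made up for the illustration.

```cpp
#include <cstddef>
#include <iostream>
#include <vector>

// A tiny, single-level, direct-mapped cache sitting in front of a simulated RAM.
// Each cache slot holds one byte and remembers which RAM address it came from.
struct CacheEntry
{
    bool valid = false;        // does this slot hold anything yet?
    std::size_t address = 0;   // which RAM address is cached here
    unsigned char value = 0;   // the cached copy of that byte
};

class TinyCache
{
public:
    TinyCache(std::vector<unsigned char>& ram, std::size_t slots)
        : ram_(ram), entries_(slots) {}

    unsigned char read(std::size_t address)
    {
        CacheEntry& slot = entries_[address % entries_.size()];
        if (slot.valid && slot.address == address)
        {
            std::cout << "hit for address " << address << "\n";   // case 1: already in the cache
            return slot.value;
        }
        std::cout << "miss for address " << address << "\n";      // case 3: not in the cache
        if (slot.valid)
            ram_[slot.address] = slot.value;   // displace the slot's old occupant
        slot.valid = true;                     // load the newly requested byte into the slot
        slot.address = address;
        slot.value = ram_[address];
        return slot.value;
    }

private:
    std::vector<unsigned char>& ram_;
    std::vector<CacheEntry> entries_;
};

int main()
{
    std::vector<unsigned char> ram(1048576);   // one "megabyte" of simulated RAM
    TinyCache cache(ram, 256);                 // 256 one-byte cache slots

    cache.read(1000);   // miss: fetched from RAM and kept in the cache
    cache.read(1000);   // hit: served from the cache without touching RAM
    cache.read(1256);   // miss: maps to the same slot as 1000, so 1000 is displaced
    return 0;
}
```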

How Registers Improve Performance

Another way to improve performance which has been employed for many years is to create a small number of private storage areas, called registers, that are on the same chip as the CPU itself.[20] Programs use these registers to hold data items that are actively in use; data in registers can be accessed within the time allocated to instruction execution (2 ns in our example), rather than in the much longer time needed to access data in RAM. This means that the time needed to access data in registers is predictable, unlike data that may have been displaced from the internal cache by more recent arrivals and thus must be reloaded from the external cache or even from RAM. Most CPUs have some dedicated registers, which aren't available to application programmers (that's us), but are reserved for the operating system (e.g., DOS, Unix, OS/2) or have special functions dictated by the hardware design; however, we will be concerned primarily with the general registers intended for our use.[21] These are used to hold working copies of data items called variables, which otherwise reside in RAM during the execution of the program. These variables represent specific items of data that we wish to keep track of in our programs.

[20] In case you're wondering how a small number of registers can help the speed of a large program, I should point out that no matter how large a program is, the vast majority of instructions and data items in the program are inactive at any given moment. In fact, perhaps only a dozen instructions are in various stages of execution at any given time even in the most advanced microprocessor CPU available in 2001. The computer's apparent ability to run several distinct programs simultaneously is an illusion produced by the extremely high rate of execution of instructions.

[21] All of the registers are physically similar, being just a collection of circuits in the CPU used to hold a value. As indicated here, some registers are dedicated to certain uses by the design of the CPU, whereas others are generally usable. In the case of the general registers, which are all functionally similar or identical, a compiler often uses them in a conventional way; this stylized usage simplifies the compiler writer's job.

The notion of using registers to hold temporary copies of variables wasn't crystal clear to Susan. Here's our discussion:

Susan: Here we go, getting lost. When you say, “The general registers are used to hold working copies of data items called variables, which reside in RAM”, are you saying RAM stores info when not in use?

Steve: During execution of a program, when data aren't in the general registers, they are generally stored in RAM.

Susan: I didn't think RAM stores anything when turned off.

Steve: You're correct; RAM doesn't retain information when the machine is turned off. However, it is used to keep the “real” copies of data that we want to process but won't fit in the registers.[22]

[22] Since RAM doesn't maintain its contents when power is turned off, anything that a program needs to keep around for a long time, such as inventory data to be used later, should be saved on the disk. We'll see how that is accomplished in a future chapter.

You can put something in a variable and it will stay there until you store something else there; you can also look at it to find out what's in it. As you might expect, several types of variables are used to hold different kinds of data; the first ones we will look at are variables representing whole numbers (the so-called integer variables), which are a subset of the category called numeric variables. As this suggests, other variable types can represent numbers with fractional parts. We'll use these so-called floating-point variables later.

Different types of variables require different amounts of RAM to store them, depending on the amount of data they contain; a very common type of numeric variable, known as a short, as implemented by the compiler on the CD in the back of the book, requires 16 bits (that is, 2 bytes) of RAM to hold any of 65,536 different values, from -32,768 to 32,767, including 0.[23] As we will see shortly, these odd-looking numbers are the result of using the binary system. By no coincidence at all, the early Intel CPUs such as the 8086 had general registers that contained 16 bits each; these registers were named ax, bx, cx, dx, si, di, and bp. Why does it matter how many bits each register holds? Because the number (and size) of instructions it takes to process a variable is much smaller if the variable fits in a register; therefore, most programming languages, C++ included, relate the size of a variable to the size of the registers available to hold it. A short is exactly the right size to fit into a 16-bit register and therefore could be processed efficiently by the early Intel machines, whereas longer variables had to be handled in pieces, causing a great decline in the efficiency of the program.

[23] The size of a short varies from one compiler and machine to another, but on the most common current compilers, especially for machines such as the ubiquitous “PC”, it is 16 bits.
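If you want to see what your own compiler does with a short, here is a sketch that asks it directly; on the compilers described in the text the answers will be 2 bytes and the range -32768 to 32767, but as the footnote says, other machines may differ.

```cpp
#include <iostream>
#include <limits>

int main()
{
    // sizeof reports the size in bytes; numeric_limits reports the range of values.
    std::cout << "A short occupies " << sizeof(short) << " bytes here\n";
    std::cout << "and can hold values from " << std::numeric_limits<short>::min()
              << " to " << std::numeric_limits<short>::max() << "\n";
    return 0;
}
```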

Progress marches on: more recent Intel CPUs, starting with the 80386, have 32-bit general registers; these registers are called eax, ebx, ecx, edx, esi, edi, and ebp. You may have noticed that these names are simply the names of the old 16-bit registers with an e tacked onto the front. The reason for the name change is that when Intel increased the size of the registers to 32 bits with the advent of the 80386, it didn't want to change the behavior of previously existing programs that (of course) used the old names for the 16-bit registers. So the old names, as illustrated in Figure 2.2, now refer to the bottom halves of the “real” (that is, 32-bit) registers; instructions using these old names behave exactly as though they were accessing the 16-bit registers on earlier machines. To refer to the 32-bit registers, you use the new names eax, ebx, and so on, for extended ax, extended bx, and so forth.
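As a rough software analogy (this is not real register access, just ordinary C++ arithmetic), you can think of eax as a 32-bit value and ax as nothing more than its bottom 16 bits:

```cpp
#include <iostream>

int main()
{
    // On the PCs discussed here, unsigned int is 32 bits and unsigned short is 16 bits.
    unsigned int eax = 0x12345678;                                  // the "whole" 32-bit register
    unsigned short ax = static_cast<unsigned short>(eax & 0xFFFF);  // just its bottom half

    std::cout << std::hex << "eax = 0x" << eax << ", ax = 0x" << ax << "\n";
    return 0;
}
```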

We still have some way to go before we get to the end of our investigation of the hardware inside a computer, but let's pause here for a question from Susan about how much there is to know about this topic:

Susan: You said before that we were going to look at the computer at the “lowest level accessible to a programmer”. Does it get any deeper than this?

Steve: Yes, it certainly does. There are entire disciplines devoted to layers below those that we have any access to as programmers. To begin with, the machine instructions and registers that we see as assembly language programmers are often just the surface manifestation of underlying code and hardware that operates at a deeper level (the so-called “microcode” level), just as C++ code is the surface manifestation of assembly language instructions.

Below even the lowest level of programmable computer hardware is the physical implementation of the circuitry as billions of transistors, and at the bottom we have the quantum mechanical level that allows such things as transistors in the first place. Luckily, the engineers who build the microprocessors (or actually, build the machines that make the microprocessors) have taken care of all those lower levels for us, so we don't have to worry about them.

Now that we've cleared that up, let's get back to the question of what it means to say that instructions using the 16-bit register names behave exactly as though they were accessing the 16-bit registers on earlier machines. Before I can explain this, you'll have to understand the binary number system on which all modern computers are based. To make this number system more intelligible, I have written the following little fable.
