Chapter 5. Color, Materials, and Lighting: The Basics

By Richard S. Wright, Jr.

WHAT YOU'LL LEARN IN THIS CHAPTER:

How To

Functions You'll Use

Specify a color in terms of RGB components

glColor

Set the shading model

glShadeModel

Set the lighting model

glLightModel

Set lighting parameters

glLight

Set material reflective properties

glColorMaterial/glMaterial

Use surface normals

glNormal

This is the chapter where 3D graphics really start to look interesting (unless you really dig wireframe models!), and it only gets better from here. You've been learning OpenGL from the ground up—how to put programs together and then how to assemble objects from primitives and manipulate them in 3D space. Until now, we've been laying the foundation, and you still can't tell what the house is going to look like! To recoin a phrase, “Where's the beef?”

To put it succinctly, the beef starts here. For most of the rest of this book, science takes a back seat and magic rules. According to Arthur C. Clarke, “Any sufficiently advanced technology is indistinguishable from magic.” Of course, there is no real magic involved in color and lighting, but it sure can seem that way at times. If you want to dig into the “sufficiently advanced technology” (mathematics), see Appendix A, “Further Reading.”

Another name for this chapter might be “Adding Realism to Your Scenes.” You see, there is more to an object's color in the real world than just what color we might tell OpenGL to make it. In addition to having a color, objects can appear shiny or dull or can even glow with their own light. An object's apparent color varies with bright or dim lighting, and even the color of the light hitting an object makes a difference. An illuminated object can even be shaded across its surface when lit or viewed from an angle.

What Is Color?

First let's talk a little bit about color itself. How is a color made in nature, and how do we see colors? Understanding color theory and how the human eye sees a color scene will lend some insight into how you create a color programmatically. (If color theory is old hat to you, you can probably skip this section.)

Light as a Wave

Color is simply a wavelength of light that is visible to the human eye. If you had any physics classes in school, you might remember something about light being both a wave and a particle. It is modeled as a wave that travels through space much like a ripple through a pond, and it is modeled as a particle, such as a raindrop falling to the ground. If this concept seems confusing, you know why most people don't study quantum mechanics!

The light you see from nearly any given source is actually a mixture of many different kinds of light. These kinds of light are identified by their wavelengths. The wavelength of light is measured as the distance between the peaks of the light wave, as illustrated in Figure 5.1.

Figure 5.1. How a wavelength of light is measured.

Wavelengths of visible light range from 390 nanometers (one billionth of a meter) for violet light to 720 nanometers for red light; this range is commonly called the visible spectrum. You've undoubtedly heard the terms ultraviolet and infrared; they represent light not visible to the naked eye, lying beyond the ends of the spectrum. You will recognize the spectrum as containing all the colors of the rainbow (see Figure 5.2).

Figure 5.2. The spectrum of visible light.

Light as a Particle

“Okay, Mr. Smart Brain,” you might ask. “If color is a wavelength of light and the only visible light is in this 'rainbow' thing, where is the brown for my Fig Newtons or the black for my coffee or even the white of this page?” We begin answering that question by telling you that black is not a color, nor is white. Actually, black is the absence of color, and white is an even combination of all the colors at once. That is, a white object reflects all wavelengths of colors evenly, and a black object absorbs all wavelengths evenly.

As for the brown of those fig bars and the many other colors that you see, they are indeed colors. Actually, at the physical level, they are composite colors. They are made of varying amounts of the “pure” colors found in the spectrum. To understand how this concept works, think of light as a particle. Any given object when illuminated by a light source is struck by “billions and billions” (my apologies to the late Carl Sagan) of photons, or tiny light particles. Remembering our physics mumbo jumbo, each of these photons is also a wave, which has a wavelength and thus a specific color in the spectrum.

All physical objects consist of atoms. The reflection of photons from an object depends on the kinds of atoms, the number of each kind, and the arrangement of atoms (and their electrons) in the object. Some photons are reflected and some are absorbed (the absorbed photons are usually converted to heat), and any given material or mixture of materials (such as your fig bar) reflects more of some wavelengths than others. Figure 5.3 illustrates this principle.

Figure 5.3. An object reflects some photons and absorbs others.

Your Personal Photon Detector

The reflected light from your fig bar, when seen by your eye, is interpreted as color. The billions of photons enter your eye and are focused onto the back of your eye, where your retina acts as sort of a photographic plate. The retina's millions of cone cells are excited when struck by the photons, and this causes neural energy to travel to your brain, which interprets the information as light and color. The more photons that strike the cone cells, the more excited they get. This level of excitation is interpreted by your brain as the brightness of the light, which makes sense; the brighter the light, the more photons there are to strike the cone cells.

The eye has three kinds of cone cells. All of them respond to photons, but each kind responds most to a particular wavelength. One is more excited by photons that have reddish wavelengths; one, by green wavelengths; and one, by blue wavelengths. Thus, light that is composed mostly of red wavelengths excites red-sensitive cone cells more than the other cells, and your brain receives the signal that the light you are seeing is mostly reddish. You do the math: A combination of different wavelengths of various intensities will, of course, yield a mix of colors. All wavelengths equally represented are thus perceived as white, and the absence of light at any wavelength is perceived as black.

You can see that any “color” that your eye perceives actually consists of light all over the visible spectrum. The “hardware” in your eye detects what it sees in terms of the relative concentrations and strengths of red, green, and blue light. Figure 5.4 shows how brown is composed of a photon mix of 60% red photons, 40% green photons, and 10% blue photons.

Figure 5.4. How the “color” brown is perceived by the eye.

The Computer as a Photon Generator

Now that you understand how the human eye discerns colors, it makes sense that when you want to generate a color with a computer, you do so by specifying separate intensities for the red, green, and blue components of the light. It so happens that color computer monitors are designed to produce three kinds of light (can you guess which three?), each with varying degrees of intensity. In the back of your computer monitor is an electron gun that shoots electrons at the back of the screen you view. This screen contains phosphors that emit red, green, and blue light when struck by the electrons. The intensity of the light emitted varies with the intensity of the electron beam. (Okay, then, how do color LCDs work? We leave this question as an exercise for you!) These three color phosphors are packed closely together to make up a single physical dot on the screen (see Figure 5.5).

Figure 5.5. How a computer monitor generates colors.

You might recall that in Chapter 2, “Using OpenGL,” we explained how OpenGL defines a color exactly as intensities of red, green, and blue, with the glColor command.

PC Color Hardware

There once was a time when state-of-the-art PC graphics hardware meant the Hercules graphics card. This card could produce bitmapped images with a resolution of 720×348. The drawback was that each pixel had only two states: on and off. At that time, bitmapped graphics of any kind on a PC were a big deal, and you could produce some great monochrome graphics—even 3D!

Actually predating the Hercules card was the Color Graphics Adapter (CGA) card. Introduced with the first IBM PC, this card could support resolutions of 320×200 pixels and could place any 4 of 16 colors on the screen at once. A higher resolution (640×200) with 2 colors was also possible but wasn't as effective or cost conscious as the Hercules card. (Color monitors = $$$.) CGA was puny by today's standards; it was even outmatched by the graphics capabilities of a $200 Commodore 64 or Atari home computer at the time. Lacking adequate resolution for business graphics or even modest modeling, CGA was used primarily for simple PC games or business applications that could benefit from colored text. Generally, it was hard to make a good business justification for this more expensive hardware.

The next big breakthrough for PC graphics came when IBM introduced the Enhanced Graphics Adapter (EGA) card. This one could do more than 25 lines of colored text in new text modes, and for graphics, it could support 640×350-pixel bitmapped graphics in 16 colors! Other technical improvements eliminated some flickering problems of the CGA ancestor and provided for better and smoother animation. Now arcade-style games, real business graphics, and even simple 3D graphics became not only possible but even reasonable on the PC. This advance was a giant move beyond CGA, but still PC graphics were in their infancy.

The last mainstream PC graphics standard set by IBM was the VGA card (which stood for Video Graphics Array rather than the commonly held Video Graphics Adapter). This card was significantly faster than the EGA; it could support 16 colors at a higher resolution (640×480) and 256 colors at a lower resolution of 320×200. These 256 colors were selected from a palette of more than 16 million possible colors. That's when the floodgates opened for PC graphics. Near photo-realistic graphics became possible on PCs. Ray tracers, 3D games, and photo-editing software began to pop up in the PC market.

IBM, as well, had a high-end graphics card—the 8514—for its “workstations.” This card could do 1,024×768 graphics at 256 colors. IBM thought this card would be used only by CAD and scientific applications! But one thing is certain about consumers: They always want more. It was this short-sightedness that cost IBM its role as standard setter in the PC graphics market. Other vendors began to ship “Super-VGA” cards that could display higher and higher resolutions, with more and more colors. First, we saw 800×600, then 1,024×768 and even higher, first with 256 colors, then 32,768, and then 65,536. Today, 24-bit color cards can display 16 million colors at resolutions far greater than 1,024×768. Even entry-level Windows PCs sold today can support at least 16 million colors at resolutions of 1,024×768 or more.

All this power makes for some really cool possibilities—photo-realistic 3D graphics, to name just one. When Microsoft ported OpenGL to the Windows platform, that move enabled the creation of high-end graphics applications for PCs. Combine today's fast processors with 3D-accelerated graphics cards, and you can get the kind of performance possible only a few years ago on $100,000 graphics workstations—for the cost of a Wal-Mart Christmas special! Today's typical home machines are capable of sophisticated simulations, games, and more. Already the term virtual reality has become as antiquated as those old Buck Rogers rocket ships as we begin to take advanced 3D graphics for granted.

PC Display Modes

Microsoft Windows and the Apple Macintosh revolutionized the world of PC graphics in two respects. First, they created mainstream graphical operating environments that were adopted by the business world at large and, soon thereafter, the consumer market. Second, they made PC graphics significantly easier for programmers to do. The graphics hardware was “virtualized” by display device drivers. Instead of having to write instructions directly to the video hardware, programmers today can write to a single API (such as OpenGL!), and the operating system handles the specifics of talking to the hardware.

Screen Resolution

Screen resolution for today's computers can vary from 640×480 pixels up to 1,600×1,200 or more. The lower resolutions of, say, 640×480 are considered adequate for some graphics display tasks, and people with eye problems often run at the lower resolutions, but on a large monitor or display. You must always take into account the size of the window with the clipping volume and viewport settings (see Chapter 2). By scaling the size of the drawing to the size of the window, you can easily account for the various resolutions and window size combinations that can occur. Well-written graphics applications display the same approximate image regardless of screen resolution. The user should automatically be able to see more and sharper details as the resolution increases.

Color Depth

If an increase in screen resolution or in the number of available drawing pixels in turn increases the detail and sharpness of the image, so too should an increase in available colors improve the clarity of the resulting image. An image displayed on a computer that can display millions of colors should look remarkably better than the same image displayed with only 16 colors. In programming, you really need to worry about only three color depths: 4-bit, 8-bit, and 24-bit.

The 4-Bit Color Mode

On the low end, your program might run in 16 colors—called 4-bit mode because 4 bits are devoted to color information for each pixel. These 4 bits represent a value from 0 to 15 that provides an index into a set of 16 predefined colors. (When you have a limited number of colors that are accessed by an index, this is called a palette.) With only 16 colors at your disposal, you can do little to improve the clarity and sharpness of your image. It is generally accepted that most serious graphics applications can safely ignore the 16-color mode. We can be thankful that most of the newer display hardware available does not support this display mode any longer.

The 8-Bit Color Mode

The 8-bit color mode supports up to 256 colors on the screen. This is a substantial improvement over 4-bit color, but still limiting. Most PC OpenGL hardware accelerators do not accelerate 8-bit color, but for software rendering, you can obtain satisfactory results under Windows with certain considerations. The most important consideration is the construction of the correct color palette. This topic is covered briefly in Chapter 13, “Wiggle: OpenGL on Windows.”

The 24-Bit Color Mode

The best quality image production available today on PCs is 24-bit color mode. In this mode, a full 24 bits are devoted to each pixel to hold 8 bits of color data for each of the red, green, and blue color components (8 + 8 + 8 = 24). You have the capability to put any of more than 16 million possible colors in every pixel on the screen. The most obvious drawback to this mode is the amount of memory required for high-resolution screens (more than 2MB for a 1,024×768 screen). Indirectly, moving larger chunks of memory around is also much slower when you're doing animation or just drawing on the screen. Fortunately, today's accelerated graphics adapters are optimized for these types of operations and are shipping with larger amounts of onboard memory to accommodate the extra memory usage.

The 16- and 32-Bit Color Modes

For saving memory or improving performance, many display cards also support various other color modes. In the area of performance improvement, some cards support a 32-bit color mode sometimes called true color mode. Actually, the 32-bit color mode cannot display any more colors than the 24-bit mode, but it improves performance by aligning the data for each pixel on a 32-bit (4-byte) address boundary. Unfortunately, this results in a wasted 8 bits (1 byte) per pixel. On today's 32-bit Intel PCs, memory accesses aligned on 4-byte boundaries are much faster than unaligned accesses. Modern OpenGL accelerators also support 32-bit mode, with 24 bits being reserved for the RGB colors and 8 bits being used for destination alpha storage. You will learn more about the alpha channel in the next chapter.

Another popular display mode, 16-bit color, is sometimes supported to use memory more efficiently. This allows one of 65,536 possible colors for each pixel. This display mode is practically as effective as 24-bit color for photographic image reproduction, as it can be difficult to tell the difference between 16-bit and 24-bit color modes for most photographic images. The savings in memory and increase in display speed have made this popular for the first generation of consumer “game” 3D accelerators. The added color fidelity of 24-bit mode, however, really adds to an image's quality, especially with shading and blending operations.

Using Color in OpenGL

You now know that OpenGL specifies an exact color as separate intensities of red, green, and blue components. You also know that modern PC hardware might be able to display nearly all these combinations or only a very few. How, then, do we specify a desired color in terms of these red, green, and blue components?

The Color Cube

Because a color is specified by three positive color values, we can model the available colors as a volume that we call the RGB colorspace. Figure 5.6 shows what this colorspace looks like at the origin with red, green, and blue as the axes. The red, green, and blue coordinates are specified just like x, y, and z coordinates. At the origin (0,0,0), the intensity of each component is zero, and the resulting color is black. With 24-bit color on the PC, 8 bits are available for each component, so let's say that a value of 255 along an axis represents full saturation of that component. We then end up with a cube measuring 255 on each side. The corner directly opposite black (0,0,0) is white, with relative intensities of (255,255,255). At full saturation (255) from the origin along each axis lie the pure colors red, green, and blue.

Figure 5.6. The origin of RGB colorspace.

This “color cube” (see Figure 5.7) contains all the possible colors, either on the surface of the cube or within the interior of the cube. For example, all possible shades of gray between black and white lie internally on the diagonal line between the corner at (0,0,0) and (255,255,255).

Figure 5.7. The RGB colorspace.

Figure 5.8 shows the smoothly shaded color cube produced by a sample program from this chapter, CCUBE. The surface of this cube shows the color variations from black on one corner to white on the opposite corner. Red, green, and blue are present on their corners 255 units from black. Additionally, the colors yellow, cyan, and magenta have corners showing the combination of the other three primary colors. You can also spin the color cube around to examine all its sides by pressing the arrow keys.

Figure 5.8. Output from CCUBE is this color cube.

Setting the Drawing Color

Let's briefly review the glColor function. It is prototyped as follows:

void glColor<x><t>(red, green, blue, alpha);

In the function name, the <x> represents the number of arguments; it might be 3 for three arguments of red, green, and blue or 4 for four arguments to include the alpha component. The alpha component specifies the translucency of the color and is covered in more detail in the next chapter. For now, just use a three-argument version of the function.

The <t> in the function name specifies the argument's data type and can be b, d, f, i, s, ub, ui, or us, for byte, double, float, integer, short, unsigned byte, unsigned integer, and unsigned short data types, respectively. Another version of the function has a v appended to the end; this version takes an array that contains the arguments (the v stands for vectored). In the reference section, you will find an entry with more details on the glColor function.

Most OpenGL programs that you'll see use glColor3f and specify the intensity of each component as 0.0 for none or 1.0 for full intensity. However, it might be easier, if you have Windows programming experience, to use the glColor3ub version of the function. This version takes three unsigned bytes, from 0 to 255, to specify the intensities of red, green, and blue. Using this version of the function is like using the Windows RGB macro to specify a color:

glColor3ub(0,255,128) = RGB(0,255,128)

In fact, this approach might make it easier for you to match your OpenGL colors to existing RGB colors used by your program for other non-OpenGL drawing tasks. However, we should say that, internally, OpenGL represents color values as floating-point values, and you may incur some performance penalties due to the constant conversion to floats that must take place at runtime. It is also possible that in the future, higher resolution color buffers may evolve (in fact, floating-point color buffers are already starting to appear), and your color values specified as floats will be more faithfully represented by the color hardware.

Shading

Our previous working definition for glColor was that this function sets the current drawing color, and all objects drawn after this command have the last color specified. After discussing the OpenGL drawing primitives in the preceding chapter, we can now expand this definition as follows: The glColor function sets the current color that is used for all vertices drawn after the command. So far, all our examples have drawn wireframe objects or solid objects with each face a different solid color. If we specify a different color for each vertex of a primitive (either point, line, or polygon), what color is the interior?

Let's answer this question first regarding points. A point has only one vertex, and whatever color you specify for that vertex is the resulting color for that point. Easy enough.

A line, however, has two vertices, and each can be set to a different color. The color of the line depends on the shading model. Shading is simply defined as the smooth transition from one color to the next. Any two points in the RGB colorspace (refer to Figure 5.7) can be connected by a straight line.

Smooth shading causes the colors along the line to vary as they do through the color cube from one color point to the other. Figure 5.9 shows the color cube with the black and white corners identified. Below it is a line with two vertices, one black and one white. The colors selected along the length of the line match the colors along the straight line in the color cube, from the black to the white corners. This results in a line that progresses from black through lighter shades of gray and eventually to white.

Figure 5.9. How a line is shaded from black to white.

You can do shading mathematically by finding the equation of the line connecting two points in the three-dimensional RGB colorspace. Then you can simply loop through from one end of the line to the other, retrieving coordinates along the way to provide the color of each pixel on the screen. Many good books on computer graphics explain the algorithm to accomplish this effect, scale your color line to the physical line on the screen, and so on. Fortunately, OpenGL does all this work for you!

The shading exercise becomes slightly more complex for polygons. A triangle, for instance, can also be represented as a plane within the color cube. Figure 5.10 shows a triangle with each vertex at full saturation for the red, green, and blue color components. The code to display this triangle is shown in Listing 5.1 and in the sample program titled TRIANGLE on the CD that accompanies this book.

Example 5.1. Drawing a Smooth-Shaded Triangle with Red, Green, and Blue Corners

// Enable smooth shading
glShadeModel(GL_SMOOTH);

// Draw the triangle
glBegin(GL_TRIANGLES);
    // Red Apex
    glColor3ub((GLubyte)255,(GLubyte)0,(GLubyte)0);
    glVertex3f(0.0f,200.0f,0.0f);

    // Green on the right bottom corner
    glColor3ub((GLubyte)0,(GLubyte)255,(GLubyte)0);
    glVertex3f(200.0f,-70.0f,0.0f);

    // Blue on the left bottom corner
    glColor3ub((GLubyte)0,(GLubyte)0,(GLubyte)255);
    glVertex3f(-200.0f, -70.0f, 0.0f);
glEnd();

Figure 5.10. A triangle in RGB colorspace.

Setting the Shading Model

The first line of Listing 5.1 actually sets the shading model OpenGL uses to do smooth shading—the model we have been discussing. This is the default shading model, but it's a good idea to call this function anyway to ensure that your program is operating the way you intended.

The other shading model that can be specified with glShadeModel is GL_FLAT for flat shading. Flat shading means that no shading calculations are performed on the interior of primitives. Generally, with flat shading, the color of the primitive's interior is the color that was specified for the last vertex. The only exception is for a GL_POLYGON primitive, in which case the color is that of the first vertex.

Next, the code in Listing 5.1 sets the top of the triangle to be pure red, the lower-right corner to be green, and the remaining bottom-left corner to be blue. Because smooth shading is specified, the interior of the triangle is shaded to provide a smooth transition between each corner.

The output from the TRIANGLE program is shown in Figure 5.11. This output represents the plane shown graphically in Figure 5.10.

Figure 5.11. Output from the TRIANGLE program.

Polygons, more complex than triangles, can also have different colors specified for each vertex. In these instances, the underlying logic for shading can become more intricate. Fortunately, you never have to worry about it with OpenGL. No matter how complex your polygon, OpenGL successfully shades the interior points between each vertex.

Color in the Real World

Real objects don't appear in a solid or shaded color based solely on their RGB values. Figure 5.12 shows the output from the program titled JET from the CD. It's a simple jet airplane, hand plotted with triangles using only the methods covered so far in this book. As usual, JET and the other programs in this chapter allow you to spin the object around by using the arrow keys to better see the effects.

Figure 5.12. A simple jet built by setting a different color for each triangle.

The selection of colors is meant to highlight the three-dimensional structure of the jet. Aside from the crude assemblage of triangles, however, you can see that the jet looks hardly anything like a real object. Suppose you constructed a model of this airplane and painted each flat surface the colors represented. The model would still appear glossy or flat depending on the kind of paint used, and the color of each flat surface would vary with the angle of your view and any sources of light.

OpenGL does a reasonably good job of approximating the real world in terms of lighting conditions. Unless an object emits its own light, it is illuminated by three different kinds of light: ambient, diffuse, and specular.

Ambient Light

Ambient light doesn't come from any particular direction. It has a source, but the rays of light have bounced around the room or scene and become directionless. Objects illuminated by ambient light are evenly lit on all surfaces in all directions. You can think of all previous examples in this book as being lit by a bright ambient light because the objects were always visible and evenly colored (or shaded) regardless of their rotation or viewing angle. Figure 5.13 shows an object illuminated by ambient light.

Figure 5.13. An object illuminated purely by ambient light.

Diffuse Light

Diffuse light comes from a particular direction but is reflected evenly off a surface. Even though the light is reflected evenly, the object surface is brighter if the light is pointed directly at the surface than if the light grazes the surface from an angle. A good example of a diffuse light source is fluorescent lighting or sunlight streaming in a side window at noon. In Figure 5.14, the object is illuminated by a diffuse light source.

Figure 5.14. An object illuminated by a purely diffuse light source.

Specular Light

Like diffuse light, specular light comes from a particular direction; unlike diffuse light, however, it is reflected sharply and in a particular direction. A highly specular light tends to cause a bright spot on the surface it shines upon, which is called the specular highlight. A spotlight and the sun are examples of specular light. Figure 5.15 shows an object illuminated by a purely specular light source.

Figure 5.15. An object illuminated by a purely specular light source.

Putting It All Together

No single light source is composed entirely of any of the three types of light just described. Rather, it is made up of varying intensities of each. For example, a red laser beam in a lab is composed of almost a pure-red specular component. However, smoke or dust particles scatter the beam, so it can be seen traveling across the room. This scattering represents the diffuse component of the light. If the beam is bright and no other light sources are present, you notice objects in the room taking on a red hue. This is a very small ambient component of that light.

Thus, a light source in a scene is said to be composed of three lighting components: ambient, diffuse, and specular. Just like the components of a color, each lighting component is defined with an RGBA value that describes the relative intensities of red, green, and blue light that make up that component. (For the purposes of light color, the alpha value is ignored.) For example, our red laser light might be described by the component values in Table 5.1.

Table 5.1. Color and Light Distribution for a Red Laser Light Source

 

          Red     Green   Blue    Alpha
Specular  0.99    0.0     0.0     1.0
Diffuse   0.10    0.0     0.0     1.0
Ambient   0.05    0.0     0.0     1.0

Note that the red laser beam has no green or blue light. Also, note that specular, diffuse, and ambient light can each range in intensity from 0.0 to 1.0. You could interpret this table as saying that the red laser light in some scenes has a very high specular component, a small diffuse component, and a very small ambient component. Wherever it shines, you are probably going to see a reddish spot. Also, because of conditions (smoke, dust, and so on) in the room, the diffuse component allows you to see the beam traveling through the air. Finally, the ambient component—likely due to smoke or dust particles, as well—scatters a tiny bit of light all about the room. Ambient and diffuse components of light are frequently combined because they are so similar in nature.

Materials in the Real World

Light is only part of the equation. In the real world, objects do have a color of their own. Earlier in this chapter, we described the color of an object as defined by its reflected wavelengths of light. A blue ball reflects mostly blue photons and absorbs most others. This assumes that the light shining on the ball has blue photons in it to be reflected and detected by the observer. Generally, most scenes in the real world are illuminated by a white light containing an even mixture of all the colors. Under white light, therefore, most objects appear in their proper or “natural” colors. However, this is not always so; put the blue ball in a dark room with only a yellow light, and the ball appears black to the viewer because all the yellow light is absorbed and there is no blue to be reflected.

Material Properties

When we use lighting, we do not describe polygons as having a particular color, but rather as consisting of materials that have certain reflective properties. Instead of saying that a polygon is red, we say that the polygon is made of a material that reflects mostly red light. We are still saying that the surface is red, but now we must also specify the material's reflective properties for ambient, diffuse, and specular light sources. A material might be shiny and reflect specular light very well, while absorbing most of the ambient or diffuse light. Conversely, a flat colored object might absorb all specular light and not look shiny under any circumstances. Another property to be specified is the emission property for objects that emit their own light, such as taillights or glow-in-the-dark watches.

Adding Light to Materials

Setting lighting and material properties to achieve the desired effect takes some practice. There are no color cubes or rules of thumb to give you quick and easy answers. This is the point at which analysis gives way to art, and science yields to magic. When drawing an object, OpenGL decides which color to use for each pixel in the object. That object has reflective “colors,” and the light source has “colors” of its own. How does OpenGL determine which colors to use? Understanding these principles is not difficult, but it does take some simple grade-school multiplication. (See, that teacher told you you'd need it one day!)

Each vertex of your primitives is assigned an RGB color value based on the net effect of the ambient, diffuse, and specular illumination multiplied by the ambient, diffuse, and specular reflectance of the material properties. Because you make use of smooth shading between the vertices, the illusion of illumination is achieved.

Calculating Ambient Light Effects

To calculate ambient light effects, you first need to put away the notion of color and instead think only in terms of red, green, and blue intensities. For an ambient light source of half-intensity red, green, and blue components, you have an RGB value for that source of (0.5, 0.5, 0.5). If this ambient light illuminates an object with ambient reflective properties specified in RGB terms of (0.5, 1.0, 0.5), the net “color” component from the ambient light is

  • (0.5 * 0.5, 0.5 * 1.0, 0.5 * 0.5) = (0.25, 0.5, 0.25)

This is the result of multiplying each of the ambient light source terms by each of the ambient material property terms (see Figure 5.16).

Figure 5.16. Calculating the ambient color component of an object.

Thus, the material color components actually determine the percentage of incident light that is reflected. In our example, the ambient light had a red component that was at one-half intensity, and the material ambient property of 0.5 specified that one half of that one-half intensity light was reflected. Half of a half is a fourth, or 0.25.
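This per-channel multiplication is simple to express in code. The following sketch uses a hypothetical helper (ComputeAmbient is our own name, not part of OpenGL or glTools) to compute the reflected ambient component:

```c
/* Multiply each channel of the incident ambient light by the
   material's ambient reflectance for that channel.
   ComputeAmbient is a hypothetical helper for illustration only. */
void ComputeAmbient(const float light[3], const float material[3],
                    float result[3])
    {
    int i;

    for(i = 0; i < 3; i++)
        result[i] = light[i] * material[i];
    }
```

With the values from the example, a (0.5, 0.5, 0.5) light and a (0.5, 1.0, 0.5) material yield (0.25, 0.5, 0.25).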

Diffuse and Specular Effects

Calculating ambient light is as simple as it gets. Diffuse light also has RGB intensities that interact in the same way with material properties. However, diffuse light is directional, and the intensity at the surface of the object varies depending on the angle between the surface and the light source, the distance to the light source, any attenuation factors (whether it is foggy between the light and surface), and so on. The same goes for specular light sources and intensities. The net effect in terms of RGB values is figured the same way as for ambient light, with the intensity of the light source (adjusted for the angle of incidence) being multiplied by the material reflectance. Finally, all three RGB terms are added to yield a final color for the object. If any single color component is greater than 1.0, it is clamped to that value. (You can't get more intense than full intensity!)
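The final summation and clamping step can be sketched with another hypothetical helper (CombineChannel is our own name; OpenGL performs the real calculation internally, with the directional factors already folded into each term):

```c
/* Sum the ambient, diffuse, and specular contributions for a single
   color channel and clamp the result to full intensity (1.0).
   CombineChannel is a hypothetical helper for illustration only. */
float CombineChannel(float ambient, float diffuse, float specular)
    {
    float sum = ambient + diffuse + specular;

    return (sum > 1.0f) ? 1.0f : sum;
    }
```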

Generally, the ambient and diffuse components of light sources and materials are the same and have the greatest effect in determining the color of the object. Specular light and material properties tend to be light gray or white. The specular component depends significantly on the angle of incidence, and specular highlights on an object are usually white.

Adding Light to a Scene

This text might seem like a lot of theory to digest all of a sudden. Let's slow down and start exploring some examples of the OpenGL code needed for lighting; this exploration will also help reinforce what you've just learned. We demonstrate some additional features and requirements of lighting in OpenGL. The next few examples build on our JET program. The initial version contains no lighting code and just draws triangles with hidden surface elimination (depth testing) enabled. When we're done, the jet's metallic surface will glisten in the sunlight as you rotate it with the arrow keys.

Enabling the Lighting

To tell OpenGL to use lighting calculations, call glEnable with the GL_LIGHTING parameter:

glEnable(GL_LIGHTING);

This call alone tells OpenGL to use material properties and lighting parameters in determining the color for each vertex in your scene. However, without any specified material properties or lighting parameters, your object remains dark and unlit, as shown in Figure 5.17. Look at the code for any of the JET-based sample programs, and you can see that we have called the function SetupRC right after creating the rendering context. This is the place where we do any initialization of lighting parameters.

Figure 5.17. An unlit jet reflects no light.

Setting Up the Lighting Model

After you enable lighting calculations, the first thing you should do is set up the lighting model. The three parameters that affect the lighting model are set with the glLightModel function.

The first lighting parameter used in our next example (the AMBIENT program) is GL_LIGHT_MODEL_AMBIENT. It lets you specify a global ambient light that illuminates all objects evenly from all sides. The following code specifies a bright white light:

// Bright white light – full intensity RGB values
    GLfloat ambientLight[] = { 1.0f, 1.0f, 1.0f, 1.0f };

    // Enable lighting
    glEnable(GL_LIGHTING);

    // Set light model to use ambient light specified by ambientLight[]
    glLightModelfv(GL_LIGHT_MODEL_AMBIENT,ambientLight);

The variation of glLightModel shown here, glLightModelfv, takes as its first parameter the lighting model parameter being modified or set and then an array of the RGBA values that make up the light. The default RGBA values of this global ambient light are (0.2, 0.2, 0.2, 1.0), which is fairly dim. Other lighting model parameters allow you to determine whether the front, back, or both sides of polygons are illuminated and how the calculation of specular lighting angles is performed. See the reference section at the end of the chapter for more information on these parameters.

Setting Material Properties

Now that we have an ambient light source, we need to set some material properties so that our polygons reflect light and we can see our jet. There are two ways to set material properties. The first is to use the function glMaterial before specifying each polygon or set of polygons. Examine the following code fragment:

GLfloat gray[] = { 0.75f, 0.75f, 0.75f, 1.0f };
...
...
glMaterialfv(GL_FRONT, GL_AMBIENT_AND_DIFFUSE, gray);

glBegin(GL_TRIANGLES);
    glVertex3f(-15.0f,0.0f,30.0f);
    glVertex3f(0.0f, 15.0f, 30.0f);
    glVertex3f(0.0f, 0.0f, -56.0f);
glEnd();

The first parameter to glMaterialfv specifies whether the front, back, or both (GL_FRONT, GL_BACK, or GL_FRONT_AND_BACK) take on the material properties specified. The second parameter tells which properties are being set; in this instance, both the ambient and diffuse reflectances are set to the same values. The final parameter is an array containing the RGBA values that make up these properties. All primitives specified after the glMaterial call are affected by the last values set, until another call to glMaterial is made.

Under most circumstances, the ambient and diffuse components are the same, and unless you want specular highlights (sparkling, shiny spots), you don't need to define specular reflective properties. Even so, it would still be quite tedious if we had to define an array for every color in our object and call glMaterial before each polygon or group of polygons.

Now we are ready for the second and preferred way of setting material properties, called color tracking. With color tracking, you can tell OpenGL to set material properties by only calling glColor. To enable color tracking, call glEnable with the GL_COLOR_MATERIAL parameter:

glEnable(GL_COLOR_MATERIAL);

Then the function glColorMaterial specifies the material parameters that follow the values set by glColor. For example, to set the ambient and diffuse properties of the fronts of polygons to track the colors set by glColor, call

glColorMaterial(GL_FRONT,GL_AMBIENT_AND_DIFFUSE);

The earlier code fragment setting material properties would then be as follows. This approach looks like more code, but it actually saves many lines of code and executes faster as the number of different colored polygons grows:

// Enable color tracking
glEnable(GL_COLOR_MATERIAL);

// Front material ambient and diffuse colors track glColor
glColorMaterial(GL_FRONT,GL_AMBIENT_AND_DIFFUSE);

...
...
glColor3f(0.75f, 0.75f, 0.75f);
glBegin(GL_TRIANGLES);
    glVertex3f(-15.0f,0.0f,30.0f);
    glVertex3f(0.0f, 15.0f, 30.0f);
    glVertex3f(0.0f, 0.0f, -56.0f);
glEnd();

Listing 5.2 contains the code we add with the SetupRC function to our jet example to set up a bright ambient light source and to set the material properties that allow the object to reflect light and be seen. We have also changed the colors of the jet so that each section is a different color rather than each polygon. The final output, shown in Figure 5.18, is not much different from the image before we had lighting. However, if we reduce the ambient light by half, we get the image shown in Figure 5.19. To reduce it by half, we set the ambient light RGBA values to the following:

GLfloat ambientLight[] = { 0.5f, 0.5f, 0.5f, 1.0f };

Figure 5.18. Output from completed AMBIENT sample program.

Figure 5.19. Output from the AMBIENT program when the light source is cut in half.

You can see how we might reduce the ambient light in a scene to produce a dimmer image. This capability is useful for simulations in which dusk approaches gradually or when a more direct light source is blocked, as when an object is in the shadow of another, larger object.

Listing 5.2. Setup for Ambient Lighting Conditions

// This function does any needed initialization on the rendering
// context.  Here it sets up and initializes the lighting for
// the scene.
void SetupRC()
    {
    // Light values
    // Bright white light
    GLfloat ambientLight[] = { 1.0f, 1.0f, 1.0f, 1.0f };

    glEnable(GL_DEPTH_TEST);    // Hidden surface removal
    glEnable(GL_CULL_FACE);        // Do not calculate inside of jet
    glFrontFace(GL_CCW);        // Counterclockwise polygons face out

    // Lighting stuff
    glEnable(GL_LIGHTING);    // Enable lighting

    // Set light model to use ambient light specified by ambientLight[]
    glLightModelfv(GL_LIGHT_MODEL_AMBIENT,ambientLight);


    glEnable(GL_COLOR_MATERIAL);    // Enable material color tracking

    // Front material ambient and diffuse colors track glColor
    glColorMaterial(GL_FRONT,GL_AMBIENT_AND_DIFFUSE);

    // Nice light blue background
    glClearColor(0.0f, 0.0f, 1.0f, 1.0f);
    }

Using a Light Source

Manipulating the ambient light has its uses, but for most applications attempting to model the real world, you must specify one or more specific sources of light. In addition to their intensities and colors, these sources have a location and/or a direction. The placement of these lights can dramatically affect the appearance of your scene.

OpenGL supports at least eight independent light sources located anywhere in your scene or out of the viewing volume. You can locate a light source an infinite distance away and make its light rays parallel or make it a nearby light source radiating outward. You can also specify a spotlight with a specific cone of light radiating from it, as well as manipulate its characteristics.

Which Way Is Up?

When you specify a light source, you tell OpenGL where it is and in which direction it's shining. Often, the light source shines in all directions, but it can be directional. Either way, for any object you draw, the rays of light from any source (other than a pure ambient source) strike the surface of the polygons that make up the object at an angle. Of course, in the case of a directional light, the surfaces of all polygons might not necessarily be illuminated. To calculate the shading effects across the surface of the polygons, OpenGL must be able to calculate the angle.

In Figure 5.20, a polygon (a square) is being struck by a ray of light from some source. The ray makes an angle (A) with the plane as it strikes the surface. The light is then reflected at an angle (B) toward the viewer (or you wouldn't see it). These angles are used in conjunction with the lighting and material properties we have discussed thus far to calculate the apparent color of that location. It happens by design that the locations used by OpenGL are the vertices of the polygon. Because OpenGL calculates the apparent colors for each vertex and then does smooth shading between them, the illusion of lighting is created. Magic!

Figure 5.20. Light is reflected off objects at specific angles.

From a programming standpoint, these lighting calculations present a slight conceptual difficulty. Each polygon is created as a set of vertices, which are nothing more than points. Each vertex is then struck by a ray of light at some angle. How then do you (or OpenGL) calculate the angle between a point and a line (the ray of light)? Of course, you can't geometrically find the angle between a single point and a line in 3D space because there are an infinite number of possibilities. Therefore, you must associate with each vertex some piece of information that denotes a direction upward from the vertex and away from the surface of the primitive.

Surface Normals

A line from the vertex in the upward direction starts in some imaginary plane (or your polygon) at a right angle. This line is called a normal vector. The term normal vector might sound like something the Star Trek crew members toss around, but it just means a line perpendicular to a real or imaginary surface. A vector is a line pointed in some direction, and the word normal is just another way for eggheads to say perpendicular (intersecting at a 90° angle). As if the word perpendicular weren't bad enough! Therefore, a normal vector is a line pointed in a direction that is at a 90° angle to the surface of your polygon. Figure 5.21 presents examples of 2D and 3D normal vectors.

Figure 5.21. A 2D and a 3D normal vector.

You might already be asking why we must specify a normal vector for each vertex. Why can't we just specify a single normal for a polygon and use it for each vertex? We can—and for our first few examples, we do. However, sometimes you don't want each normal to be exactly perpendicular to the surface of the polygon. You may have noticed that many surfaces are not flat! You can approximate these surfaces with flat, polygonal sections, but you end up with a jagged or multifaceted surface. Later, we discuss a technique to produce the illusion of smooth curves with flat polygons by “tweaking” surface normals (more magic!). But first things first…

Specifying a Normal

To see how we specify a normal for a vertex, let's look at Figure 5.22—a plane floating above the xz plane in 3D space. We've made this illustration simple to demonstrate the concept. Notice the line through the vertex (1,1,0) that is perpendicular to the plane. If we select any point on this line, say (1,10,0), the line from the first point (1,1,0) to the second point (1,10,0) is our normal vector. The second point specified actually indicates that the direction from the vertex is up in the y direction. This convention is also used to indicate the front and back sides of polygons, as the vector travels up and away from the front surface.

Figure 5.22. A normal vector traveling perpendicular from the surface.

You can see that this second point is the number of units in the x, y, and z directions for some point on the normal vector away from the vertex. Rather than specify two points for each normal vector, we can subtract the vertex from the second point on the normal, yielding a single coordinate triplet that indicates the x, y, and z steps away from the vertex. For our example, this is

  • (1,10,0) – (1,1,0) = (1 – 1, 10 – 1, 0 – 0) = (0, 9, 0)

Here's another way of looking at this example: If the vertex were translated to the origin, the point specified by subtracting the two original points would still specify the direction pointing away and at a 90° angle from the surface. Figure 5.23 shows the newly translated normal vector.
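In code, deriving the normal this way is just a componentwise subtraction. PointsToVector below is our own illustrative helper, not part of glTools:

```c
/* Subtract the vertex from a second point on the normal line,
   yielding the x, y, and z steps away from the vertex.
   PointsToVector is an illustrative helper, not part of glTools. */
void PointsToVector(const float point[3], const float vertex[3],
                    float vector[3])
    {
    vector[0] = point[0] - vertex[0];
    vector[1] = point[1] - vertex[1];
    vector[2] = point[2] - vertex[2];
    }
```

For the example above, subtracting the vertex (1,1,0) from the point (1,10,0) yields the normal vector (0,9,0).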

Figure 5.23. The newly translated normal vector.

The vector is a directional quantity that tells OpenGL which direction the vertices (or polygon) face. This next code segment shows a normal vector being specified for one of the triangles in the JET sample program:

glBegin(GL_TRIANGLES);
    glNormal3f(0.0f, -1.0f, 0.0f);
    glVertex3f(0.0f, 0.0f, 60.0f);
    glVertex3f(-15.0f, 0.0f, 30.0f);
    glVertex3f(15.0f,0.0f,30.0f);
glEnd();

The function glNormal3f takes the coordinate triplet that specifies a normal vector pointing in the direction perpendicular to the surface of this triangle. In this example, the normals for all three vertices have the same direction, which is down the negative y-axis. This is a simple example because the triangle is lying flat in the xz plane, and it actually represents a bottom section of the jet. You'll see later that often we want to specify a different normal for each vertex.

The prospect of specifying a normal for every vertex or polygon in your drawing might seem daunting, especially because few surfaces lie cleanly in one of the major planes. Never fear! We shortly present a reusable function that you can call again and again to calculate your normals for you.

Unit Normals

As OpenGL does its magic, all surface normals must eventually be converted to unit normals. A unit normal is just a normal vector that has a length of 1. The normal in Figure 5.23 has a length of 9. You can find the length of any normal by squaring each component, adding them together, and taking the square root. Divide each component of the normal by the length, and you get a vector pointed in exactly the same direction, but only 1 unit long. In this case, our new normal vector is specified as (0,1,0). This is called normalization. Thus, for lighting calculations, all normal vectors must be normalized. Talk about jargon!

You can tell OpenGL to convert your normals to unit normals automatically, by enabling normalization with glEnable and a parameter of GL_NORMALIZE:

glEnable(GL_NORMALIZE);

This approach does, however, have performance penalties. It's far better to calculate your normals ahead of time as unit normals instead of relying on OpenGL to perform this task for you.

You should note that calls to the glScale transformation function also scale the length of your normals. If you use glScale and lighting, you can obtain undesired results from your OpenGL lighting. If you have specified unit normals for all your geometry and used a constant scaling factor with glScale (all geometry is scaled by the same amount), a new alternative to GL_NORMALIZE (new to OpenGL 1.2) is GL_RESCALE_NORMALS. You enable this parameter with a call such as

glEnable(GL_RESCALE_NORMALS);

This call tells OpenGL that your normals are not unit length, but they can all be scaled by the same amount to make them unit length. OpenGL figures this out by examining the modelview matrix. The result is fewer mathematical operations per vertex than are otherwise required.

Because it is better to give OpenGL unit normals to begin with, the glTools library comes with a function that will take any normal vector and “normalize” it for you:

void gltNormalizeVector(GLTVector vNormal);

Finding a Normal

Figure 5.24 presents another polygon that is not simply lying in one of the axis planes. The normal vector pointing away from this surface is more difficult to guess, so we need an easy way to calculate the normal for any arbitrary polygon in 3D coordinates.

Figure 5.24. A nontrivial normal problem.

You can easily calculate the normal vector for any polygon by taking three points that lie in the plane of that polygon. Figure 5.25 shows three points—P1, P2, and P3—that you can use to define two vectors: vector V1 from P1 to P2, and vector V2 from P1 to P3. Mathematically, two vectors in three-dimensional space define a plane. (Your original polygon lies in this plane.) If you take the cross product of those two vectors (written mathematically as V1 X V2), the resulting vector is perpendicular to that plane. Figure 5.26 shows the vector V3 derived by taking the cross product of V1 and V2.

Figure 5.25. Two vectors defined by three points on a plane.

Figure 5.26. A normal vector as the cross product of two vectors.

Again, because this is such a useful and often-used method, the glTools library contains a function that calculates a normal vector based on three points on a polygon:

void gltGetNormalVector(GLTVector vP1, GLTVector vP2,
                        GLTVector vP3, GLTVector vNormal);

To use this function, pass it three vectors (each just an array of three floats) from your polygon or triangle (specified in counterclockwise winding order) and another vector array that will contain the normal vector on return.

Setting Up a Source

Now that you understand the requirements of setting up your polygons to receive and interact with a light source, it's time to turn on the lights! Listing 5.3 shows the SetupRC function from the sample program LITJET. Part of the setup process for this sample program creates a light source and places it to the upper left, slightly behind the viewer. The light source GL_LIGHT0 has its ambient and diffuse components set to the intensities specified by the arrays ambientLight[] and diffuseLight[]. This results in a moderate white light source:

GLfloat  ambientLight[] = { 0.3f, 0.3f, 0.3f, 1.0f };
GLfloat  diffuseLight[] = { 0.7f, 0.7f, 0.7f, 1.0f };
...
...
// Set up and enable light 0
glLightfv(GL_LIGHT0,GL_AMBIENT,ambientLight);
glLightfv(GL_LIGHT0,GL_DIFFUSE,diffuseLight);

Finally, the light source GL_LIGHT0 is enabled:

glEnable(GL_LIGHT0);

The light is positioned by this code, located in the ChangeSize function:

GLfloat lightPos[] = { -50.0f, 50.0f, 100.0f, 1.0f };
...
...
glLightfv(GL_LIGHT0,GL_POSITION,lightPos);

Here, lightPos[] contains the position of the light. The last value in this array is 1.0, which specifies that the designated coordinates are the position of the light source. If the last value in the array is 0.0, it indicates that the light is an infinite distance away along the vector specified by this array. We touch on this issue more later. Lights are like geometric objects in that they can be moved around by the modelview matrix. By specifying the light's position after the viewing transformation is performed, we ensure that the light is in the proper location regardless of how we transform the geometry.

Listing 5.3. Light and Rendering Context Setup for LITJET

// This function does any needed initialization on the rendering
// context.  Here it sets up and initializes the lighting for
// the scene.
void SetupRC()
    {
    // Light values and coordinates
    GLfloat  ambientLight[] = { 0.3f, 0.3f, 0.3f, 1.0f };
    GLfloat  diffuseLight[] = { 0.7f, 0.7f, 0.7f, 1.0f };

    glEnable(GL_DEPTH_TEST);    // Hidden surface removal
    glFrontFace(GL_CCW);        // Counterclockwise polygons face out
    glEnable(GL_CULL_FACE);        // Do not calculate inside of jet

    // Enable lighting
    glEnable(GL_LIGHTING);

    // Set up and enable light 0
    glLightfv(GL_LIGHT0,GL_AMBIENT,ambientLight);
    glLightfv(GL_LIGHT0,GL_DIFFUSE,diffuseLight);
    glEnable(GL_LIGHT0);

    // Enable color tracking
    glEnable(GL_COLOR_MATERIAL);

    // Set material properties to follow glColor values
    glColorMaterial(GL_FRONT, GL_AMBIENT_AND_DIFFUSE);

    // Light blue background
    glClearColor(0.0f, 0.0f, 1.0f, 1.0f );
    }

Setting the Material Properties

Notice in Listing 5.3 that color tracking is enabled, and the properties to be tracked are the ambient and diffuse reflective properties for the front surface of the polygons. This is just as it was defined in the AMBIENT sample program:

// Enable color tracking
glEnable(GL_COLOR_MATERIAL);

// Set material properties to follow glColor values
glColorMaterial(GL_FRONT, GL_AMBIENT_AND_DIFFUSE);

Specifying the Polygons

The rendering code from the first two JET samples changes considerably now to support the new lighting model. Listing 5.4 is an excerpt taken from the RenderScene function from LITJET.

Listing 5.4. Code Sample That Sets Color and Calculates and Specifies Normals and Polygons

GLTVector vNormal;    // Storage for calculated surface normal

...
...
// Set material color
glColor3ub(128, 128, 128);
glBegin(GL_TRIANGLES);
        glNormal3f(0.0f, -1.0f, 0.0f);
        glVertex3f(0.0f, 0.0f, 60.0f);
        glVertex3f(-15.0f, 0.0f, 30.0f);
        glVertex3f(15.0f,0.0f,30.0f);


       // Vertices for this panel
       {
       GLTVector vPoints[3] = {{ 15.0f, 0.0f,  30.0f},
                               { 0.0f,  15.0f, 30.0f},
                               { 0.0f,  0.0f,  60.0f}};

      // Calculate the normal for the plane
      gltGetNormalVector(vPoints[0], vPoints[1], vPoints[2], vNormal);
      glNormal3fv(vNormal);
      glVertex3fv(vPoints[0]);
      glVertex3fv(vPoints[1]);
      glVertex3fv(vPoints[2]);
      }


     {
     GLTVector vPoints[3] = {{ 0.0f, 0.0f, 60.0f },
                             { 0.0f, 15.0f, 30.0f },
                             { -15.0f, 0.0f, 30.0f }};

     gltGetNormalVector(vPoints[0], vPoints[1], vPoints[2], vNormal);
     glNormal3fv(vNormal);
     glVertex3fv(vPoints[0]);
     glVertex3fv(vPoints[1]);
     glVertex3fv(vPoints[2]);
     }

Notice that we are calculating the normal vector using the gltGetNormalVector function from glTools. Also, the material properties are now following the colors set by glColor. One other thing you may notice is that not every triangle is bracketed by its own glBegin/glEnd pair. You can specify once that you are drawing triangles, and every three vertices are used for a new triangle until you specify otherwise with glEnd. For very large numbers of polygons, this technique can considerably boost performance by eliminating many unnecessary function calls and primitive batch setup.

Figure 5.27 shows the output from the completed LITJET sample program. The jet is now a single shade of gray instead of multiple colors. We changed the color to make it easier to see the lighting effects on the surface. Even though the plane is one solid “color,” you can still see the shape due to the lighting. By rotating the jet around with the arrow keys, you can see the dramatic shading effects as the surface of the jet moves and interacts with the light.

Figure 5.27. Output from the LITJET program.

TIP

The most obvious way to improve the performance of this code is to calculate all the normal vectors ahead of time and store them for use in the RenderScene function. Before you pursue this, read Chapter 11, “It's All About the Pipeline: Faster Geometry Throughput,” for the material on display lists and vertex arrays. Display lists and vertex arrays provide a means of storing calculated values not only for the normal vectors, but for the polygon data as well. Remember, these examples are meant to demonstrate the concepts. They are not necessarily the most efficient code possible.

Lighting Effects

The ambient and diffuse lights from the LITJET example are sufficient to provide the illusion of lighting. The surface of the jet appears shaded according to the angle of the incident light. As the jet rotates, these angles change and you can see the lighting effects changing in such a way that you can easily guess where the light is coming from.

We ignored the specular component of the light source, however, as well as the specular reflectivity of the material properties on the jet. Although the lighting effects are pronounced, the surface of the jet is rather flatly colored. Ambient and diffuse lighting and material properties are all you need if you are modeling clay, wood, cardboard, cloth, or some other flatly colored object. But for metallic surfaces such as the skin of an airplane, some shine is often desirable.

Specular Highlights

Specular lighting and material properties add needed gloss to the surface of your objects. This shininess has a whitening effect on an object's color and can produce specular highlights when the angle of incident light is sharp in relation to the viewer. A specular highlight is what occurs when nearly all the light striking the surface of an object is reflected away. The white sparkle on a shiny red ball in the sunlight is a good example of a specular highlight.

Specular Light

You can easily add a specular component to a light source. The following code shows the light source setup for the LITJET program, modified to add a specular component to the light:

// Light values and coordinates
GLfloat  ambientLight[] = { 0.3f, 0.3f, 0.3f, 1.0f };
GLfloat  diffuseLight[] = { 0.7f, 0.7f, 0.7f, 1.0f };
GLfloat  specular[] = { 1.0f, 1.0f, 1.0f, 1.0f};

...
...

// Enable lighting
glEnable(GL_LIGHTING);

// Set up and enable light 0
glLightfv(GL_LIGHT0,GL_AMBIENT,ambientLight);
glLightfv(GL_LIGHT0,GL_DIFFUSE,diffuseLight);
glLightfv(GL_LIGHT0,GL_SPECULAR,specular);
glEnable(GL_LIGHT0);

The specular[] array specifies a very bright white light source for the specular component of the light. Our purpose here is to model bright sunlight. The following line simply adds this specular component to the light source GL_LIGHT0:

glLightfv(GL_LIGHT0,GL_SPECULAR,specular);

If this were the only change you made to LITJET, you wouldn't see any difference in the jet's appearance. We haven't yet defined any specular reflectance properties for the material properties.

Specular Reflectance

Adding specular reflectance to material properties is just as easy as adding the specular component to the light source. This next code segment shows the code from LITJET, again modified to add specular reflectance to the material properties:

// Light values and coordinates
GLfloat  specref[] =  { 1.0f, 1.0f, 1.0f, 1.0f };

...
...


// Enable color tracking
glEnable(GL_COLOR_MATERIAL);

// Set material properties to follow glColor values
glColorMaterial(GL_FRONT, GL_AMBIENT_AND_DIFFUSE);

// All materials hereafter have full specular reflectivity
// with a high shine
glMaterialfv(GL_FRONT, GL_SPECULAR,specref);
glMateriali(GL_FRONT,GL_SHININESS,128);

As before, we enable color tracking so that the ambient and diffuse reflectance of the materials follow the current color set by the glColor functions. (Of course, we don't want the specular reflectance to track glColor because we are specifying it separately and it doesn't change.)

Now, we've added the array specref[], which contains the RGBA values for our specular reflectance. This array of all 1s produces a surface that reflects nearly all incident specular light. The following line sets the material properties for all subsequent polygons to have this reflectance:

glMaterialfv(GL_FRONT, GL_SPECULAR,specref);

Because we do not call glMaterial again with the GL_SPECULAR property, all materials have this property. We set up the example this way on purpose because we want the entire jet to appear made of metal or very shiny composites.

What we have done here in our setup routine is important: We have specified that the ambient and diffuse reflective material properties of all future polygons (until we say otherwise with another call to glMaterial or glColorMaterial) change as the current color changes, but that the specular reflective properties remain the same.

Specular Exponent

As stated earlier, high specular light and reflectivity brighten the colors of the object. For this example, the extremely high specular light (full intensity) and specular reflectivity (full reflectivity) result in a jet that appears almost totally white or gray except where the surface points away from the light source (in which case, it is black and unlit). To temper this effect, we use the next line of code after the specular component is specified:

glMateriali(GL_FRONT,GL_SHININESS,128);

The GL_SHININESS property sets the specular exponent of the material, which specifies how small and focused the specular highlight is. A value of 0 specifies an unfocused specular highlight, which is what produces the even brightening of the colors across the entire polygon. As you increase this value, you reduce the size and increase the focus of the specular highlight, causing a shiny spot to appear. The larger the value, the smaller and more focused the highlight and the shinier the surface appears. The range of this parameter is 0–128 for all implementations of OpenGL.

Listing 5.5 shows the new SetupRC code in the sample program SHINYJET. This is the only code that has changed from LITJET (other than the title of the window) to produce a very shiny and glistening jet. Figure 5.28 shows the output from this program, but to fully appreciate the effect, you should run the program and hold down one of the arrow keys to spin the jet about in the sunlight.

Example 5.5. Setup from SHINYJET to Produce Specular Highlights on the Jet

// This function does any needed initialization on the rendering
// context.  Here it sets up and initializes the lighting for
// the scene.
void SetupRC()
    {
    // Light values and coordinates
    GLfloat  ambientLight[] = { 0.3f, 0.3f, 0.3f, 1.0f };
    GLfloat  diffuseLight[] = { 0.7f, 0.7f, 0.7f, 1.0f };
    GLfloat  specular[] = { 1.0f, 1.0f, 1.0f, 1.0f};
    GLfloat  specref[] =  { 1.0f, 1.0f, 1.0f, 1.0f };


    glEnable(GL_DEPTH_TEST);    // Hidden surface removal
    glFrontFace(GL_CCW);        // Counterclockwise polygons face out
    glEnable(GL_CULL_FACE);        // Do not calculate inside of jet

    // Enable lighting
    glEnable(GL_LIGHTING);

    // Set up and enable light 0
    glLightfv(GL_LIGHT0,GL_AMBIENT,ambientLight);
    glLightfv(GL_LIGHT0,GL_DIFFUSE,diffuseLight);
    glLightfv(GL_LIGHT0,GL_SPECULAR,specular);
    glEnable(GL_LIGHT0);

    // Enable color tracking
    glEnable(GL_COLOR_MATERIAL);

    // Set material properties to follow glColor values
    glColorMaterial(GL_FRONT, GL_AMBIENT_AND_DIFFUSE);

    // All materials hereafter have full specular reflectivity
    // with a high shine
    glMaterialfv(GL_FRONT, GL_SPECULAR,specref);
    glMateriali(GL_FRONT,GL_SHININESS,128);


    // Light blue background
    glClearColor(0.0f, 0.0f, 1.0f, 1.0f );
    }
Output from the SHINYJET program.

Figure 5.28. Output from the SHINYJET program.

Normal Averaging

Earlier, we mentioned that by “tweaking” your normals, you can produce apparently smooth surfaces with flat polygons. This technique, known as normal averaging, produces some interesting optical illusions. Say you have a sphere made up of quads and triangles like the one shown in Figure 5.29.

A typical sphere made up of quads and triangles.

Figure 5.29. A typical sphere made up of quads and triangles.

If each face of the sphere had a single normal specified, the sphere would look like a large faceted jewel. If you specify the “true” normal for each vertex, however, the lighting calculations at each vertex produce values that OpenGL smoothly interpolates across the face of the polygon. Thus, the flat polygons are shaded as if they were a smooth surface.

What do we mean by “true” normal? The polygonal representation is only an approximation of the true surface. Theoretically, if we used enough polygons, the surface would appear smooth. This is similar to the idea we used in Chapter 3, “Drawing in Space: Geometric Primitives and Buffers,” to draw a smooth curve with a series of short line segments. If we consider each vertex to be a point on the true surface, the actual normal value for that surface is the true normal for the surface.

For our case of the sphere, the normal would point directly out from the center of the sphere through each vertex. We show this graphically for a simple 2D case in Figures 5.30 and 5.31. In Figure 5.30, each flat segment has a normal pointing perpendicular to its surface. We did this just like we did for our LITJET example previously. Figure 5.31, however, shows how each normal is not perpendicular to the line segment but is perpendicular to the surface of the sphere, or the tangent line to the surface.

An approximation with normals perpendicular to each face.

Figure 5.30. An approximation with normals perpendicular to each face.

Each normal is perpendicular to the surface itself.

Figure 5.31. Each normal is perpendicular to the surface itself.

The tangent line touches the curve in one place and does not penetrate it. The 3D equivalent is a tangent plane. In Figure 5.31, you can see the outline of the actual surface and that the normal is actually perpendicular to the line tangent to the surface.

For a sphere, calculation of the normal is reasonably simple. (The normal actually has the same values as the vertex relative to the center!) For other, nontrivial surfaces, the calculation might not be so easy. In such cases, you calculate the normal for each polygon that shares the vertex. The actual normal you assign to that vertex is the average of these normals. The visual effect is a nice, smooth, regular surface, even though it is actually composed of numerous small, flat segments.

Putting It All Together

Now it's time for a more complex sample program. We demonstrate how to use normals to create a smooth surface appearance, move a light around in a scene, create a spot light, and finally, identify one of the drawbacks of OpenGL lighting.

Our next sample program, SPOT, performs all these tasks. Here, we create a solid sphere in the center of our viewing volume with glutSolidSphere. We shine a spotlight on this sphere that we can move around, and we change the “smoothness” of the normals and demonstrate some of the limitations of OpenGL lighting.

So far, we have been specifying a light's position with glLight as follows:

// Array to specify position
GLfloat  lightPos[] = { 0.0f, 150.0f, 150.0f, 1.0f };

...
...

// Set the light position
glLightfv(GL_LIGHT0,GL_POSITION,lightPos);

The array lightPos[] contains the x, y, and z values that specify either the light's actual position in the scene or the direction from which the light is coming. The last value, 1.0 in this case, indicates that the light is actually present at this location. By default, the light radiates equally in all directions from this location, but you can change this default to make a spotlight effect.

To make a light source that is an infinite distance away and shines from the direction specified by this vector, you place 0.0 in the last lightPos[] array element. A directional light source, as this is called, strikes the surface of your objects evenly. That is, all the light rays are parallel. With a positional light source, on the other hand, the light rays diverge from the light source.

Creating a Spotlight

Creating a spotlight is no different from creating any other positional light source. The code in Listing 5.6 shows the SetupRC function from the SPOT sample program. This program places a blue sphere in the center of the window. It also creates a spotlight that you can move vertically with the up- and down-arrow keys and horizontally with the left- and right-arrow keys. As the spotlight moves over the surface of the sphere, a specular highlight follows it on the surface.

Example 5.6. Lighting Setup for the SPOT Sample Program

// Light values and coordinates
GLfloat  lightPos[] = { 0.0f, 0.0f, 75.0f, 1.0f };
GLfloat  specular[] = { 1.0f, 1.0f, 1.0f, 1.0f};
GLfloat  specref[] =  { 1.0f, 1.0f, 1.0f, 1.0f };
GLfloat  ambientLight[] = { 0.5f, 0.5f, 0.5f, 1.0f};
GLfloat  spotDir[] = { 0.0f, 0.0f, -1.0f };


// This function does any needed initialization on the rendering
// context.  Here it sets up and initializes the lighting for
// the scene.
void SetupRC()
    {
    glEnable(GL_DEPTH_TEST);    // Hidden surface removal
    glFrontFace(GL_CCW);        // Counterclockwise polygons face out
    glEnable(GL_CULL_FACE);        // Do not try to display the back sides

    // Enable lighting
    glEnable(GL_LIGHTING);

    // Set up and enable light 0
    // Supply a slight ambient light so the objects can be seen
    glLightModelfv(GL_LIGHT_MODEL_AMBIENT, ambientLight);

    // The light is composed of just diffuse and specular components
    glLightfv(GL_LIGHT0,GL_DIFFUSE,ambientLight);
    glLightfv(GL_LIGHT0,GL_SPECULAR,specular);
    glLightfv(GL_LIGHT0,GL_POSITION,lightPos);

    // Specific spot effects
    // Cut off angle is 60 degrees
    glLightf(GL_LIGHT0,GL_SPOT_CUTOFF,60.0f);

    // Enable this light in particular
    glEnable(GL_LIGHT0);

    // Enable color tracking
    glEnable(GL_COLOR_MATERIAL);

    // Set material properties to follow glColor values
    glColorMaterial(GL_FRONT, GL_AMBIENT_AND_DIFFUSE);

    // All materials hereafter have full specular reflectivity
    // with a high shine
    glMaterialfv(GL_FRONT, GL_SPECULAR,specref);
    glMateriali(GL_FRONT, GL_SHININESS,128);


    // Black background
    glClearColor(0.0f, 0.0f, 0.0f, 1.0f );
    }

The following line from the listing is actually what makes a positional light source into a spotlight:

// Specific spot effects
// Cut off angle is 60 degrees
glLightf(GL_LIGHT0,GL_SPOT_CUTOFF,60.0f);

The GL_SPOT_CUTOFF value specifies the radial angle of the cone of light emanating from the spotlight, from the center line to the edge of the cone. For a normal positional light, this value is 180° so that the light is not confined to a cone. In fact, for spotlights, only values from 0 to 90° are valid. Spotlights emit a cone of light, and objects outside this cone are not illuminated. Figure 5.32 shows how this angle translates to the cone width.

The angle of the spotlight cone.

Figure 5.32. The angle of the spotlight cone.

Drawing a Spotlight

When you place a spotlight in a scene, the light must come from somewhere. Just because you have a source of light at some location doesn't mean that you see a bright spot there. For our SPOT sample program, we placed a red cone at the spotlight source to show where the light was coming from. Inside the end of this cone, we placed a bright yellow sphere to simulate a light bulb.

This sample has a pop-up menu that we use to demonstrate several things. The pop-up menu contains items to set flat and smooth shading and to produce a sphere for low, medium, and high tessellation. Tessellation means to break a mesh of polygons into a finer mesh of polygons (more vertices). Figure 5.33 shows a wireframe representation of a highly tessellated sphere next to one that has few polygons.

On the left is a highly tessellated sphere; on the right, a sphere made up of few polygons.

Figure 5.33. On the left is a highly tessellated sphere; on the right, a sphere made up of few polygons.

Figure 5.34 shows our sample in its initial state with the spotlight moved off slightly to one side. (You can use the arrow keys to move the spotlight.) The sphere consists of few polygons, which are flat shaded. In Windows, use the right mouse button to open a pop-up menu (Ctrl-click on the Mac), where you can switch between smooth and flat shading and between very low, medium, and very high tessellation for the sphere. Listing 5.7 shows the complete code to render the scene.

Example 5.7. The Rendering Function for SPOT, Showing How the Spotlight Is Moved

// Called to draw scene
void RenderScene(void)
    {
    if(iShade == MODE_FLAT)
        glShadeModel(GL_FLAT);
    else //     iShade = MODE_SMOOTH;
        glShadeModel(GL_SMOOTH);

    // Clear the window with current clearing color
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    // First place the light
    // Save the coordinate transformation
    glPushMatrix();
        // Rotate coordinate system
        glRotatef(yRot, 0.0f, 1.0f, 0.0f);
        glRotatef(xRot, 1.0f, 0.0f, 0.0f);

        // Specify new position and direction in rotated coords
        glLightfv(GL_LIGHT0,GL_POSITION,lightPos);
        glLightfv(GL_LIGHT0,GL_SPOT_DIRECTION,spotDir);

        // Draw a red cone to enclose the light source
        glColor3ub(255,0,0);

        // Translate origin to move the cone out to where the light
        // is positioned.
        glTranslatef(lightPos[0],lightPos[1],lightPos[2]);
        glutSolidCone(4.0f,6.0f,15,15);

        // Draw a smaller displaced sphere to denote the light bulb
        // Save the lighting state variables
        glPushAttrib(GL_LIGHTING_BIT);

            // Turn off lighting and specify a bright yellow sphere
            glDisable(GL_LIGHTING);
               glColor3ub(255,255,0);
            glutSolidSphere(3.0f, 15, 15);

        // Restore lighting state variables
        glPopAttrib();

    // Restore coordinate transformations
    glPopMatrix();


    // Set material color and draw a sphere in the middle
    glColor3ub(0, 0, 255);

    if(iTess == MODE_VERYLOW)
        glutSolidSphere(30.0f, 7, 7);
    else
        if(iTess == MODE_MEDIUM)
            glutSolidSphere(30.0f, 15, 15);
        else // otherwise iTess requests very high tessellation
            glutSolidSphere(30.0f, 50, 50);

    // Display the results
    glutSwapBuffers();
    }
The SPOT sample—low tessellation, flat shading.

Figure 5.34. The SPOT sample—low tessellation, flat shading.

The variables iTess and iShade are set by the GLUT menu handler and control how many sections the sphere is broken into and whether flat or smooth shading is employed. Note that the light is positioned before any geometry is rendered. As pointed out in Chapter 2, OpenGL is an immediate-mode API: If you want an object to be illuminated, you have to put the light where you want it before drawing the object.

You can see in Figure 5.34 that the sphere is coarsely lit and each flat face is clearly evident. Switching to smooth shading helps a little, as shown in Figure 5.35.

Smoothly shaded but inadequate tessellation.

Figure 5.35. Smoothly shaded but inadequate tessellation.

Increasing the tessellation helps, as shown in Figure 5.36, but you still see disturbing artifacts as you move the spotlight around the sphere. These lighting artifacts are one of the drawbacks of OpenGL lighting. A better way to characterize this situation is that these artifacts are a drawback of vertex lighting (not necessarily OpenGL!). By lighting the vertices and then interpolating between them, we get a crude approximation of lighting. This approach is sufficient for many cases, but as you can see in our spot example, it is not sufficient in others. If you switch to very high tessellation and move the spotlight, you see the lighting blemishes all but vanish.

Choosing a finer mesh of polygons yields better vertex lighting.

Figure 5.36. Choosing a finer mesh of polygons yields better vertex lighting.

As OpenGL hardware accelerators begin to accelerate transformations and lighting effects, and as CPUs become more powerful, you will be able to more finely tessellate your geometry for better OpenGL-based lighting effects.

The final observation you need to make about the SPOT sample appears when you set the sphere for medium tessellation and flat shading. As shown in Figure 5.37, each face of the sphere is flatly lit. The lighting calculation still runs at every vertex, modulated by the vertex normal and the light, but with flat shading each polygon takes on the color computed for the last vertex specified rather than smoothly interpolating among all its vertices.

A multifaceted sphere.

Figure 5.37. A multifaceted sphere.

Shadows

A chapter on color and lighting naturally calls for a discussion of shadows. Adding shadows to your scenes can greatly improve their realism and visual effectiveness. In Figures 5.38 and 5.39, you see two views of a lighted cube. Although both are lit, the one with a shadow is more convincing than the one without the shadow.

Lighted cube without a shadow.

Figure 5.38. Lighted cube without a shadow.

Lighted cube with a shadow.

Figure 5.39. Lighted cube with a shadow.

What Is a Shadow?

Conceptually, drawing a shadow is quite simple. A shadow is produced when an object blocks light from a light source, preventing it from striking a surface behind the object. The area on the shadowed surface, outlined by the object casting the shadow, appears dark. We can produce a shadow programmatically by flattening the original object into the plane of the surface on which the shadow lies. The object is then drawn in black or some dark color, perhaps with some translucence. There are many methods and algorithms for drawing shadows, some quite complex. This book's primary focus is on the OpenGL API. It is our hope that, after you've mastered the tool, some of the additional reading suggested in Appendix A will provide you with a lifetime of learning new applications for this tool. Chapter 18, “Depth Textures and Shadows,” covers some newer direct support in OpenGL for making shadows; for our purposes in this chapter, we demonstrate one of the simpler methods, which works quite well when casting shadows on a flat surface (such as the ground). Figure 5.40 illustrates this flattening.

Flattening an object to create a shadow.

Figure 5.40. Flattening an object to create a shadow.

We squish an object against another surface by using some of the advanced matrix manipulations we touched on in the preceding chapter. Here, we boil down this process to make it as simple as possible.

Squish Code

We need to flatten the modelview matrix so that any and all objects drawn through it end up in this flattened two-dimensional world. No matter how the object is oriented, it is squished into the plane in which the shadow lies. The next two considerations are the distance and direction of the light source. The direction of the light source determines the shape of the shadow and influences its size. If you've ever seen your shadow in the early or late morning hours, you know how long and warped your shadow can appear depending on the position of the sun.

The function gltMakeShadowMatrix from the glTools library, shown in Listing 5.8, takes three points that lie in the plane in which you want the shadow to appear (these three points cannot be along the same straight line!), the position of the light source, and finally a pointer to a transformation matrix that this function constructs. We won't delve too much into linear algebra, but you do need to know that this function deduces the coefficients of the equation of the plane in which the shadow appears and uses it along with the lighting position to build a transformation matrix. If you multiply this matrix by the current modelview matrix, all further drawing is flattened into this plane.

Example 5.8. Function to Make a Shadow Transformation Matrix

// Creates a shadow projection matrix out of the plane equation
// coefficients and the position of the light. The return value is stored
// in destMat
void gltMakeShadowMatrix(GLTVector3 vPoints[3], GLTVector4 vLightPos,
                                                            GLTMatrix destMat)
    {
    GLTVector4 vPlaneEquation;
    GLfloat dot;

    gltGetPlaneEquation(vPoints[0], vPoints[1], vPoints[2], vPlaneEquation);

    // Dot product of plane and light position
    dot =   vPlaneEquation[0]*vLightPos[0] +
            vPlaneEquation[1]*vLightPos[1] +
            vPlaneEquation[2]*vLightPos[2] +
            vPlaneEquation[3]*vLightPos[3];


    // Now do the projection
    // First column
    destMat[0] = dot - vLightPos[0] * vPlaneEquation[0];
    destMat[4] = 0.0f - vLightPos[0] * vPlaneEquation[1];
    destMat[8] = 0.0f - vLightPos[0] * vPlaneEquation[2];
    destMat[12] = 0.0f - vLightPos[0] * vPlaneEquation[3];

    // Second column
    destMat[1] = 0.0f - vLightPos[1] * vPlaneEquation[0];
    destMat[5] = dot - vLightPos[1] * vPlaneEquation[1];
    destMat[9] = 0.0f - vLightPos[1] * vPlaneEquation[2];
    destMat[13] = 0.0f - vLightPos[1] * vPlaneEquation[3];

    // Third Column
    destMat[2] = 0.0f - vLightPos[2] * vPlaneEquation[0];
    destMat[6] = 0.0f - vLightPos[2] * vPlaneEquation[1];
    destMat[10] = dot - vLightPos[2] * vPlaneEquation[2];
    destMat[14] = 0.0f - vLightPos[2] * vPlaneEquation[3];

    // Fourth Column
    destMat[3] = 0.0f - vLightPos[3] * vPlaneEquation[0];
    destMat[7] = 0.0f - vLightPos[3] * vPlaneEquation[1];
    destMat[11] = 0.0f - vLightPos[3] * vPlaneEquation[2];
    destMat[15] = dot - vLightPos[3] * vPlaneEquation[3];
    }

A Shadow Example

To demonstrate the use of the function in Listing 5.8, we suspend our jet in air high above the ground. We place the light source directly above and a bit to the left of the jet. As you use the arrow keys to spin the jet around, the shadow cast by the jet appears flattened on the ground below. The output from this SHADOW sample program is shown in Figure 5.41.

Output from the SHADOW sample program.

Figure 5.41. Output from the SHADOW sample program.

The code in Listing 5.9 shows how the shadow projection matrix was created for this example. Note that we create the matrix once in SetupRC and save it in a global variable.

Example 5.9. Setting Up the Shadow Projection Matrix

GLfloat lightPos[] = { -75.0f, 150.0f, -50.0f, 0.0f };
...
...

// Transformation matrix to project shadow
GLTMatrix shadowMat;
...
...

// This function does any needed initialization on the rendering
// context.  Here it sets up and initializes the lighting for
// the scene.
void SetupRC()
    {
    // Any three points on the ground (counterclockwise order)
    GLTVector3 points[3] = {{ -30.0f, -149.0f, -20.0f },
                            { -30.0f, -149.0f, 20.0f },
                            { 40.0f, -149.0f, 20.0f }};

    glEnable(GL_DEPTH_TEST);    // Hidden surface removal
    glFrontFace(GL_CCW);        // Counterclockwise polygons face out
    glEnable(GL_CULL_FACE);        // Do not calculate inside of jet

    // Enable lighting
    glEnable(GL_LIGHTING);

    ...
// Code to set up lighting, etc.
    ...


    // Light blue background
    glClearColor(0.0f, 0.0f, 1.0f, 1.0f );

    // Calculate projection matrix to draw shadow on the ground
    gltMakeShadowMatrix(points, lightPos, shadowMat);
    }

Listing 5.10 shows the rendering code for the shadow example. We first draw the ground. Then we draw the jet as we normally do, restore the modelview matrix, and multiply it by the shadow matrix. This procedure creates our squish matrix. Then we draw the jet again. (We've modified our code to accept a flag telling the DrawJet function to render in color or black.) After restoring the modelview matrix once again, we draw a small yellow sphere to approximate the position of the light. Note that we disable depth testing before we draw a plane below the jet to indicate the ground.

This rectangle lies in the same plane in which our shadow is drawn, and we want to make sure the shadow is drawn. We have never before discussed what happens if we draw two objects or planes in the same location. We have discussed depth testing as a means to determine what is drawn in front of what, however. If two objects occupy the same location, usually the last one drawn is shown. Sometimes, however, an effect called z-fighting causes fragments from both objects to be intermingled, resulting in a mess!

Example 5.10. Rendering the Jet and Its Shadow

// Called to draw scene
void RenderScene(void)
    {
    // Clear the window with current clearing color
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    // Draw the ground; we do manual shading to a darker green
    // in the background to give the illusion of depth
    glBegin(GL_QUADS);
        glColor3ub(0,32,0);
        glVertex3f(400.0f, -150.0f, -200.0f);
        glVertex3f(-400.0f, -150.0f, -200.0f);
        glColor3ub(0,255,0);
        glVertex3f(-400.0f, -150.0f, 200.0f);
        glVertex3f(400.0f, -150.0f, 200.0f);
    glEnd();

    // Save the matrix state and do the rotations
    glPushMatrix();

    // Draw jet at new orientation; put light in correct position
    // before rotating the jet
    glEnable(GL_LIGHTING);
    glLightfv(GL_LIGHT0,GL_POSITION,lightPos);
    glRotatef(xRot, 1.0f, 0.0f, 0.0f);
    glRotatef(yRot, 0.0f, 1.0f, 0.0f);

    DrawJet(FALSE);

    // Restore original matrix state
    glPopMatrix();


    // Get ready to draw the shadow and the ground
    // First disable lighting and save the projection state
    glDisable(GL_DEPTH_TEST);
    glDisable(GL_LIGHTING);
    glPushMatrix();

    // Multiply by shadow projection matrix
    glMultMatrixf((GLfloat *)shadowMat);

    // Now rotate the jet around in the new flattened space
    glRotatef(xRot, 1.0f, 0.0f, 0.0f);
    glRotatef(yRot, 0.0f, 1.0f, 0.0f);

    // Pass true to indicate drawing shadow
    DrawJet(TRUE);

    // Restore the projection to normal
    glPopMatrix();

    // Draw the light source
    glPushMatrix();
    glTranslatef(lightPos[0],lightPos[1], lightPos[2]);
    glColor3ub(255,255,0);
    glutSolidSphere(5.0f,10,10);
    glPopMatrix();

    // Restore lighting state variables
    glEnable(GL_DEPTH_TEST);

    // Display the results
    glutSwapBuffers();
    }

Sphere World Revisited

Our last example for this chapter is too long to list the source code in its entirety. In the preceding chapter's SPHEREWORLD sample program, we created an immersive 3D world with animation and camera movement. In this chapter, we revisit Sphere World and have added lights and material properties to the torus and sphere inhabitants. Finally, we have also used our planar shadow technique to add a shadow to the ground! We will keep coming back to this example from time to time as we add more and more of our OpenGL functionality to the code. The output of this chapter's version of SPHEREWORLD is shown in Figure 5.42.

Fully lit and shadowed Sphere World.

Figure 5.42. Fully lit and shadowed Sphere World.

Summary

This chapter introduced some of the more magical and powerful capabilities of OpenGL. We started by adding color to 3D scenes and smooth shading. We then saw how to specify one or more light sources and define their lighting characteristics in terms of ambient, diffuse, and specular components. We explained how the corresponding material properties interact with these light sources and demonstrated some special effects, such as adding specular highlights and softening sharp edges between adjoining triangles.

Also covered were lighting positions and the creation and manipulation of spotlights. The high-level matrix munching function presented here makes shadow generation as easy as it gets for planar shadows.

Reference

glColor

Purpose:

Sets the current color when in RGBA color mode.

Include File:

<gl.h>

Variations:

void glColor3b(GLbyte red, GLbyte green, GLbyte blue);
void glColor3d(GLdouble red, GLdouble green, GLdouble blue);
void glColor3f(GLfloat red, GLfloat green, GLfloat blue);
void glColor3i(GLint red, GLint green, GLint blue);
void glColor3s(GLshort red, GLshort green, GLshort blue);
void glColor3ub(GLubyte red, GLubyte green, GLubyte blue);
void glColor3ui(GLuint red, GLuint green, GLuint blue);
void glColor3us(GLushort red, GLushort green, GLushort blue);
void glColor4b(GLbyte red, GLbyte green, GLbyte blue, GLbyte alpha);
void glColor4d(GLdouble red, GLdouble green, GLdouble blue, GLdouble alpha);
void glColor4f(GLfloat red, GLfloat green, GLfloat blue, GLfloat alpha);
void glColor4i(GLint red, GLint green, GLint blue, GLint alpha);
void glColor4s(GLshort red, GLshort green, GLshort blue, GLshort alpha);
void glColor4ub(GLubyte red, GLubyte green, GLubyte blue, GLubyte alpha);
void glColor4ui(GLuint red, GLuint green, GLuint blue, GLuint alpha);
void glColor4us(GLushort red, GLushort green, GLushort blue, GLushort alpha);
void glColor3bv(const GLbyte *v);
void glColor3dv(const GLdouble *v);
void glColor3fv(const GLfloat *v);
void glColor3iv(const GLint *v);
void glColor3sv(const GLshort *v);
void glColor3ubv(const GLubyte *v);
void glColor3uiv(const GLuint *v);
void glColor3usv(const GLushort *v);
void glColor4bv(const GLbyte *v);
void glColor4dv(const GLdouble *v);
void glColor4fv(const GLfloat *v);
void glColor4iv(const GLint *v);
void glColor4sv(const GLshort *v);
void glColor4ubv(const GLubyte *v);
void glColor4uiv(const GLuint *v);
void glColor4usv(const GLushort *v);

Description:

This function sets the current color by specifying separate red, green, and blue components of the color. Some variations also accept an alpha component. Each component represents the range of intensity from zero (0.0) to full intensity (1.0). Functions with the v suffix take a pointer to an array that specifies the components. All elements in the array must be of the same type. When the alpha component is not specified, it is implicitly set to 1.0. When non–floating-point types are specified, the range from zero to the largest value representable by that type is mapped to the floating-point range 0.0 to 1.0.
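For instance, the mapping for unsigned bytes can be sketched as follows (ubyte_to_color is an illustrative helper, not an OpenGL function):

```c
/* Illustrative helper: maps an unsigned byte, as passed to
   glColor3ub, onto the 0.0-1.0 intensity range used internally. */
float ubyte_to_color(unsigned char c)
{
    return (float)c / 255.0f;
}
```

So glColor3ub(255, 128, 0) specifies the same color as glColor3f(1.0f, 128.0f/255.0f, 0.0f).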

Parameters:

red

Specifies the red component of the color.

green

Specifies the green component of the color.

blue

Specifies the blue component of the color.

alpha

Specifies the alpha component of the color. Used only in variations that take four arguments.

*v

A pointer to an array of red, green, blue, and possibly alpha values.

Returns:

None.

See Also:

glColorMaterial, glMaterial

glColorMask

Purpose:

Enables or disables modification of color components in the color buffers.

Include File:

<gl.h>

Syntax:

void glColorMask(GLboolean bRed, GLboolean bGreen,
                 GLboolean bBlue, GLboolean bAlpha);

Description:

This function allows changes to individual color components in the color buffer to be disabled or enabled. (All are enabled by default.) For example, setting the bAlpha argument to GL_FALSE disallows changes to the alpha color components.

Parameters:

bRed

GLboolean: Specifies whether the red component can be modified.

bGreen

GLboolean: Specifies whether the green component can be modified.

bBlue

GLboolean: Specifies whether the blue component can be modified.

bAlpha

GLboolean: Specifies whether the alpha component can be modified.

Returns:

None.

See Also:

glColor

glColorMaterial

Purpose:

Allows material colors to track the current color as set by glColor.

Include File:

<gl.h>

Syntax:

void glColorMaterial(GLenum face, GLenum mode);

Description:

This function allows material properties to be set without having to call glMaterial directly. By using this function, you can set certain material properties to follow the current color as specified by glColor. By default, color tracking is disabled; to enable it, you must also call glEnable(GL_COLOR_MATERIAL). To disable color tracking again, you call glDisable(GL_COLOR_MATERIAL).

Parameters:

face

GLenum: Specifies whether the front (GL_FRONT), back (GL_BACK), or both (GL_FRONT_AND_BACK) should follow the current color.

mode

GLenum: Specifies which material property should follow the current color. This can be GL_EMISSION, GL_AMBIENT, GL_DIFFUSE, GL_SPECULAR, or GL_AMBIENT_AND_DIFFUSE.

Returns:

None.

See Also:

glColor, glMaterial, glLight, glLightModel

glGetLight

Purpose:

Returns information about the current light source settings.

Include File:

<gl.h>

Variations:

void glGetLightfv(GLenum light, GLenum pname,
                  GLfloat *params);
void glGetLightiv(GLenum light, GLenum pname,
                  GLint *params);

Description:

You use this function to query the current settings for one of the eight supported light sources. The return values are stored at the address pointed to by params. For most properties, this is an array of four values containing the RGBA components of the properties specified.
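
Querying a light property follows the pattern below (a sketch; assumes a current rendering context, and the function name is illustrative). Note that GL_POSITION is returned in eye coordinates:

```c
#include <GL/gl.h>

void query_light0_position(void)
{
    GLfloat pos[4];   /* x, y, z, w -- most properties need four elements */
    glGetLightfv(GL_LIGHT0, GL_POSITION, pos);
    /* pos[3] == 0.0 means the light is directional */
}
```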

Parameters:

light

GLenum: Specifies the light source for which information is being requested. Implementations support at least eight lights (GL_MAX_LIGHTS is at least 8 by specification); constant light values are enumerated from GL_LIGHT0 to GL_LIGHT7.

pname

GLenum: Specifies which property of the light source is being queried. Any of the following values are valid: GL_AMBIENT, GL_DIFFUSE, GL_SPECULAR, GL_POSITION, GL_SPOT_DIRECTION, GL_SPOT_EXPONENT, GL_SPOT_CUTOFF, GL_CONSTANT_ATTENUATION, GL_LINEAR_ATTENUATION, and GL_QUADRATIC_ATTENUATION.

params

GLfloat* or GLint*: Specifies an array of integer or floating-point values in which the return values are stored. Depending on the property, the result is a four-element array, a three-element array, or a single value. Table 5.2 shows the meaning of the return values for each property.

Table 5.2. Valid Lighting Parameters for glGetLight

Property

Meaning of Return Values

GL_AMBIENT

Four RGBA components.

GL_DIFFUSE

Four RGBA components.

GL_SPECULAR

Four RGBA components.

GL_POSITION

Four elements that specify the position of the light source in eye coordinates. The first three elements specify the position of the light. If the fourth is 1.0, the light is positional, located at that position. If it is 0.0, the light source is directional and all its rays are parallel.

GL_SPOT_DIRECTION

Three elements specifying the direction of the spotlight. This vector is not normalized and is in eye coordinates.

GL_SPOT_EXPONENT

A single value representing the spot exponent.

GL_SPOT_CUTOFF

A single value representing the cutoff angle of the spot source.

GL_CONSTANT_ATTENUATION

A single value representing the constant attenuation of the light.

GL_LINEAR_ATTENUATION

A single value representing the linear attenuation of the light.

GL_QUADRATIC_ATTENUATION

A single value representing the quadratic attenuation of the light.
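
The three attenuation properties in Table 5.2 combine into a single factor that scales a light's contribution at distance d from the light: 1 / (kc + kl·d + kq·d²). A plain-C sketch (no GL context needed; the function name is illustrative):

```c
#include <assert.h>

/* The constant (kc), linear (kl), and quadratic (kq) attenuation
   factors combine into one scale applied to a light's contribution
   at distance d. */
static float attenuation(float kc, float kl, float kq, float d)
{
    return 1.0f / (kc + kl * d + kq * d * d);
}
```

With the default values (kc = 1, kl = kq = 0), the factor is always 1.0, so the light does not fall off with distance.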

Returns:

None.

See Also:

glLight

glGetMaterial

Purpose:

Returns the current material property settings.

Include File:

<gl.h>

Variations:

void glGetMaterialfv(GLenum face, GLenum pname,
                     GLfloat *params);
void glGetMaterialiv(GLenum face, GLenum pname,
                     GLint *params);

Description:

You use this function to query the current front or back material properties. The return values are stored at the address pointed to by params. For most properties, this is an array of four values containing the RGBA components of the property specified.

Parameters:

face

GLenum: Specifies whether the front (GL_FRONT) or back (GL_BACK) material properties are being sought.

pname

GLenum: Specifies which material property is being queried. Valid values are GL_AMBIENT, GL_DIFFUSE, GL_SPECULAR, GL_EMISSION, GL_SHININESS, and GL_COLOR_INDEXES.

params

GLint* or GLfloat*: Specifies an array of integer or floating-point values representing the return values. For GL_AMBIENT, GL_DIFFUSE, GL_SPECULAR, and GL_EMISSION, this is a four-element array containing the RGBA values of the property specified. For GL_SHININESS, a single value representing the specular exponent of the material is returned. GL_COLOR_INDEXES returns an array of three elements containing the ambient, diffuse, and specular components in the form of color indexes. GL_COLOR_INDEXES is used only for color index lighting.

Returns:

None.

See Also:

glMaterial

glLight

Purpose:

Sets light source parameters for one of the available light sources.

Include File:

<gl.h>

Variations:

void glLightf(GLenum light, GLenum pname, GLfloat param);
void glLighti(GLenum light, GLenum pname, GLint param);
void glLightfv(GLenum light, GLenum pname,
               const GLfloat *params);
void glLightiv(GLenum light, GLenum pname,
               const GLint *params);

Description:

You use this function to set the lighting parameters for one of the eight supported light sources. The first two variations of this function require only a single parameter value to set one of the following properties: GL_SPOT_EXPONENT, GL_SPOT_CUTOFF, GL_CONSTANT_ATTENUATION, GL_LINEAR_ATTENUATION, and GL_QUADRATIC_ATTENUATION. The second two variations are used for lighting parameters that require an array of multiple values. They include GL_AMBIENT, GL_DIFFUSE, GL_SPECULAR, GL_POSITION, and GL_SPOT_DIRECTION. You can also use these variations with single-valued parameters by specifying a single element array for *params.
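
A minimal point-light setup looks like the following sketch (assumes a current rendering context; the values and wrapper name are illustrative):

```c
#include <GL/gl.h>

/* Configure and enable GL_LIGHT0 as a positional light. */
void setup_light0(void)
{
    GLfloat ambient[]  = { 0.2f, 0.2f, 0.2f, 1.0f };
    GLfloat diffuse[]  = { 0.8f, 0.8f, 0.8f, 1.0f };
    GLfloat position[] = { 0.0f, 0.0f, 10.0f, 1.0f };  /* w = 1: positional */

    glLightfv(GL_LIGHT0, GL_AMBIENT, ambient);
    glLightfv(GL_LIGHT0, GL_DIFFUSE, diffuse);
    glLightfv(GL_LIGHT0, GL_POSITION, position);
    glEnable(GL_LIGHTING);
    glEnable(GL_LIGHT0);
}
```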

Parameters:

light

GLenum: Specifies which light source is being modified. Implementations support at least eight lights (GL_MAX_LIGHTS is at least 8); constant light values are enumerated from GL_LIGHT0 to GL_LIGHT7.

pname

GLenum: Specifies which lighting parameter is being set by this function call. See Table 5.2 for a complete list and the meaning of these parameters.

param

GLfloat or GLint: Specifies the value for parameters that are specified by a single value. These parameters are GL_SPOT_EXPONENT, GL_SPOT_CUTOFF, GL_CONSTANT_ATTENUATION, GL_LINEAR_ATTENUATION, and GL_QUADRATIC_ATTENUATION. The spot parameters have meaning only for spotlights; the attenuation factors apply to any positional light.

params

GLfloat* or GLint*: Specifies an array of values that fully describe the parameters being set. See Table 5.2 for a list and the meaning of these parameters.

Returns:

None.

See Also:

glGetLight

glLightModel

Purpose:

Sets the lighting model parameters used by OpenGL.

Include File:

<gl.h>

Variations:

void glLightModelf(GLenum pname, GLfloat param);
void glLightModeli(GLenum pname, GLint param);
void glLightModelfv(GLenum pname,
                    const GLfloat *params);
void glLightModeliv(GLenum pname,
                    const GLint *params);

Description:

You use this function to set the lighting model parameters used by OpenGL. You can set any or all of four lighting model parameters. GL_LIGHT_MODEL_AMBIENT is used to set the default ambient illumination for a scene. By default, this light has an RGBA value of (0.2, 0.2, 0.2, 1.0). Only the last two variations can be used to set this parameter because they take pointers to an array that can contain the RGBA values.

The GL_LIGHT_MODEL_TWO_SIDE parameter indicates whether both sides of polygons are illuminated. By default, only the front (defined by winding) of each polygon is illuminated, using the front material properties as specified by glMaterial. GL_LIGHT_MODEL_LOCAL_VIEWER modifies the calculation of specular reflection angles: the view direction is taken either to be parallel to the negative z-axis or to point from each vertex toward the origin of the eye coordinate system. Finally, GL_LIGHT_MODEL_COLOR_CONTROL specifies whether specular lighting produces a second color that is applied after texturing, so textured surfaces still show specular highlights (GL_SEPARATE_SPECULAR_COLOR), or whether all three lighting components are combined into a single color (GL_SINGLE_COLOR).
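
A typical lighting-model setup looks like this sketch (assumes a current rendering context; the values and wrapper name are illustrative):

```c
#include <GL/gl.h>

void setup_lighting_model(void)
{
    GLfloat ambient[] = { 0.3f, 0.3f, 0.3f, 1.0f };

    /* Raise the global ambient illumination above the (0.2, 0.2, 0.2, 1.0)
       default, and light both faces of each polygon. */
    glLightModelfv(GL_LIGHT_MODEL_AMBIENT, ambient);
    glLightModeli(GL_LIGHT_MODEL_TWO_SIDE, GL_TRUE);
}
```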

Parameters:

pname

GLenum: Specifies a lighting model parameter. GL_LIGHT_MODEL_AMBIENT, GL_LIGHT_MODEL_LOCAL_VIEWER, GL_LIGHT_MODEL_TWO_SIDE, and GL_LIGHT_MODEL_COLOR_CONTROL are accepted.

param

GLfloat or GLint: For GL_LIGHT_MODEL_LOCAL_VIEWER, a value of 0.0 indicates that specular lighting angles take the view direction to be parallel to and in the direction of the negative z-axis; any other value indicates that the view is from the origin of the eye coordinate system. For GL_LIGHT_MODEL_TWO_SIDE, a value of 0.0 indicates that only the fronts of polygons are included in illumination calculations; any other value indicates that both the front and back are included. This parameter has no effect on points, lines, or bitmaps. For GL_LIGHT_MODEL_COLOR_CONTROL, the valid values are GL_SEPARATE_SPECULAR_COLOR and GL_SINGLE_COLOR.

params

GLfloat* or GLint*: For GL_LIGHT_MODEL_LOCAL_VIEWER, GL_LIGHT_MODEL_TWO_SIDE, or GL_LIGHT_MODEL_COLOR_CONTROL, this points to an array of integer or floating-point values, only the first element of which is used to set the parameter value. For GL_LIGHT_MODEL_AMBIENT, the array contains four values that indicate the RGBA components of the ambient light.

Returns:

None.

See Also:

glLight, glMaterial

glMaterial

Purpose:

Sets material parameters for use by the lighting model.

Include File:

<gl.h>

Variations:

void glMaterialf(GLenum face, GLenum pname,
                 GLfloat param);
void glMateriali(GLenum face, GLenum pname,
                 GLint param);
void glMaterialfv(GLenum face, GLenum pname,
                  const GLfloat *params);
void glMaterialiv(GLenum face, GLenum pname,
                  const GLint *params);

Description:

You use this function to set the material reflectance properties of polygons. The GL_AMBIENT, GL_DIFFUSE, and GL_SPECULAR properties affect how these components of incident light are reflected. GL_EMISSION is used for materials that appear to give off their own light. GL_SHININESS can vary from 0 to 128, with the higher values producing a larger specular highlight on the material surface. You use GL_COLOR_INDEXES for material reflectance properties in color index mode.

Parameters:

face

GLenum: Specifies whether the front, back, or both material properties of the polygons are being set by this function. May be GL_FRONT, GL_BACK, or GL_FRONT_AND_BACK.

pname

GLenum: Specifies the single-valued material parameter being set for the first two variations. Currently, the only single-valued material parameter is GL_SHININESS. The second two variations, which take arrays for their parameters, can set the following material properties: GL_AMBIENT, GL_DIFFUSE, GL_SPECULAR, GL_EMISSION, GL_SHININESS, GL_AMBIENT_AND_DIFFUSE, or GL_COLOR_INDEXES.

param

GLfloat or GLint: Specifies the value to which the parameter specified by pname (GL_SHININESS) is set.

params

GLfloat* or GLint*: Specifies an array of floats or integers that contain the components of the property being set.

Returns:

None.

See Also:

glGetMaterial, glColorMaterial, glLight, glLightModel

glNormal

Purpose:

Defines a surface normal for the next vertex or set of vertices specified.

Include File:

<gl.h>

Variations:

void glNormal3b(GLbyte nx, GLbyte ny, GLbyte nz);
void glNormal3d(GLdouble nx, GLdouble ny,
                GLdouble nz);
void glNormal3f(GLfloat nx, GLfloat ny, GLfloat nz);
void glNormal3i(GLint nx, GLint ny, GLint nz);
void glNormal3s(GLshort nx, GLshort ny, GLshort nz);
void glNormal3bv(const GLbyte *v);
void glNormal3dv(const GLdouble *v);
void glNormal3fv(const GLfloat *v);
void glNormal3iv(const GLint *v);
void glNormal3sv(const GLshort *v);

Description:

The normal vector points away from, and perpendicular to, the surface of the polygon; it is used in lighting and shading calculations. Specifying unit vectors of length 1 improves rendering speed. OpenGL automatically converts your normals to unit normals if you enable this with glEnable(GL_NORMALIZE);.

Parameters:

nx

Specifies the x magnitude of the normal vector.

ny

Specifies the y magnitude of the normal vector.

nz

Specifies the z magnitude of the normal vector.

v

Specifies an array of three elements containing the x, y, and z magnitudes of the normal vector.

Returns:

None.

See Also:

glTexCoord, glVertex

glShadeModel

Purpose:

Sets the default shading to flat or smooth.

Include File:

<gl.h>

Syntax:

void glShadeModel(GLenum mode);

Description:

OpenGL primitives are always shaded, but the shading model can be flat (GL_FLAT) or smooth (GL_SMOOTH). In the simplest scenario, one color is set with glColor before a primitive is drawn; the primitive is then solid and flat (does not vary) throughout, regardless of the shading model. If a different color is specified for each vertex, the resulting image varies with the shading model. With smooth shading, the colors of the polygon's interior points are interpolated from the colors specified at the vertices, so the color varies gradually between any two vertices; the variation follows a line through the color cube between the two colors. If lighting is enabled, OpenGL performs additional calculations to determine the correct color value for each vertex. With flat shading, the color specified for the last vertex is used throughout the region of the primitive. The only exception is GL_POLYGON, in which case the color used throughout the region is the one specified for the first vertex.
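
The interpolation performed under GL_SMOOTH is a linear blend between vertex colors. A plain-C sketch of blending one pair of colors (the types and function name are illustrative, not GL code):

```c
#include <assert.h>

typedef struct { float r, g, b; } RGB;

/* Linear blend between two vertex colors, as smooth shading performs
   for each interior point; t runs from 0 at the first vertex to 1 at
   the second. */
static RGB lerp_color(RGB a, RGB b, float t)
{
    RGB out = { a.r + (b.r - a.r) * t,
                a.g + (b.g - a.g) * t,
                a.b + (b.b - a.b) * t };
    return out;
}
```

Halfway between a red vertex and a blue vertex, for example, the interpolated color is magenta at half intensity.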

Parameters:

mode

GLenum: Specifies the shading model to use, either GL_FLAT or GL_SMOOTH. The default is GL_SMOOTH.

Returns:

None.

See Also:

glColor, glLight, glLightModel
