Texture Mapping

One of the keys to making realistic-looking environments is the proper use of texture maps. Texture maps are special images that are applied to 3D geometry, most often to simulate detail that would be impractical to model with geometric primitives (for example, grass). Texture maps are also used extensively to show mountainous scenes or cities in the far distance. When used properly, texturing can add a great deal of realism to virtual environments; when used improperly, it can destroy rendering performance. An understanding of basic texture mapping is therefore invaluable to the application developer. We provide a short introduction here and follow up with more detail in Chapter 11.

You saw in Chapter 2 how textures could easily be applied to 2D objects. The extension to 3D is slightly more complex. The process is a series of steps for mapping pixels from a 2D image onto coordinates on an arbitrary polygon. The standard notation for the coordinates of the texture is (u, v); by convention, u and v are each in the range 0.0–1.0. The 3D shape is flattened so that it, too, occupies a 2D coordinate space, which is denoted by (s, t). The texture mapping algorithm's job is to fit the (u, v) coordinate system onto the (s, t) coordinate system prior to rendering into screen coordinates.
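To make the normalized (u, v) convention concrete, the sketch below shows the simplest possible texture lookup: converting a normalized coordinate pair into integer texel indices. The function name and the representation of the texture as a nested list are our own illustrative choices, not part of this chapter.

```python
def sample_nearest(texture, u, v):
    """Nearest-neighbor lookup: map normalized (u, v) in [0, 1]
    to an integer texel index in a 2D texture (nested list)."""
    height = len(texture)
    width = len(texture[0])
    # Clamp so out-of-range coordinates land on an edge texel.
    u = min(max(u, 0.0), 1.0)
    v = min(max(v, 0.0), 1.0)
    # Scale to texel indices; the min() keeps u == 1.0 in bounds.
    x = min(int(u * width), width - 1)
    y = min(int(v * height), height - 1)
    return texture[y][x]
```

For example, on a 2×2 texture, (0, 0) returns the top-left texel and (1, 1) the bottom-right one.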

The texture map, then, refers to a 2D array in which the first column holds the points of the polygon that are to receive the texture, expressed as (u, v) pairs, and the second column holds the corresponding image value for each pair. Each element of this array is called a texel. The process of texture mapping is to determine the array of texels given the texture (a rectangular array of pixels) and the polygon (a list of triangles). Once the texels are determined, rendering them is trivial.

Determining the Texels

The easiest form of texture mapping is linear mapping. In this case, a reduced set of texture coordinates (u, v) is determined for each vertex of the polygon, basically anchoring the edges of the image to the edges of the polygon. It is then a simple matter to interpolate along vertical and then horizontal lines of the polygon, thus generating the remaining texels from the reduced set of texels at the anchor points.
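The interpolation step described above can be sketched as follows. This is a simplified, one-scanline illustration with helper names of our own choosing: given the anchor texture coordinates at the two ends of a horizontal line of the polygon, it linearly generates the coordinates in between.

```python
def lerp_uv(uv0, uv1, t):
    """Linearly interpolate between two (u, v) anchor points, t in [0, 1]."""
    return (uv0[0] + t * (uv1[0] - uv0[0]),
            uv0[1] + t * (uv1[1] - uv0[1]))

def scanline_uvs(uv_left, uv_right, samples):
    """Generate evenly spaced texture coordinates along one scanline."""
    return [lerp_uv(uv_left, uv_right, i / (samples - 1))
            for i in range(samples)]
```

A full implementation would first interpolate down the polygon's left and right edges (the vertical lines) to obtain `uv_left` and `uv_right` for each scanline, then sweep horizontally as above.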

Although linear mapping is the easiest to conceptualize and compute, it suffers from undesirable distortions when perspective projection is used to compute the polygon's image on the screen. This is because perspective projection is a non-linear transformation. In many cases, this linear mapping distortion might not be a problem. However, it is a serious problem when putting textures on walls and floors in virtual environments, because the perspective foreshortening can be severe in these cases.
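The standard remedy (not covered in this section) is perspective-correct interpolation: rather than interpolating u directly, one interpolates u/w and 1/w linearly in screen space, where w is the perspective depth term, and divides at each pixel. A one-dimensional sketch, with names of our own choosing:

```python
def perspective_correct_u(u0, w0, u1, w1, t):
    """Interpolate a texture coordinate with perspective correction.
    u0, u1 are the endpoint coordinates; w0, w1 their depth terms;
    t in [0, 1] is the screen-space interpolation parameter."""
    # Interpolate u/w and 1/w linearly, then recover u by division.
    num = (1 - t) * (u0 / w0) + t * (u1 / w1)
    den = (1 - t) * (1 / w0) + t * (1 / w1)
    return num / den
```

With unequal depths (say w0 = 1, w1 = 2), the midpoint of the screen-space span maps to u = 1/3 rather than the 0.5 that plain linear mapping would produce, which is exactly the distortion described above.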

MIPMAPing

MIPMAPing is an intimidating word for a fairly straightforward concept. The term MIPMAP was first introduced by Lance Williams in a 1983 SIGGRAPH paper. The MIP part of the acronym is derived from the Latin phrase multum in parvo (many things in a small place) and refers to the central idea of a MIPMAP, which is to store many versions of an image in a single memory buffer. The technique can be used to reduce a form of flicker that occurs when textures are reinterpolated as a result of a change in the size of the rendered texture polygon. Additionally, MIPMAPing can be used to perform Level of Detail (LOD) rendering (see the following discussion), which can be useful to reduce the rendering time for a scene.

Recall from the previous discussion on texture mapping that the texture is a fixed entity based on the number of pixels. If a large texture is mapped to a small polygon, there will be minification. Conversely, a small image mapped onto a large polygon will undergo magnification. Occasionally both processes occur at the same time when, for example, a texture needs to be magnified in one dimension and minified in the other.
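Per axis, whether a mapping minifies or magnifies depends on the ratio of texels to the screen pixels they cover. A toy classifier (the function and its inputs are illustrative, not from the text):

```python
def scaling_mode(texels_across, pixels_across):
    """Classify one axis of a texture mapping: minification when
    several texels collapse into one pixel, magnification when one
    texel stretches over several pixels."""
    ratio = texels_across / pixels_across
    if ratio > 1.0:
        return "minification"
    if ratio < 1.0:
        return "magnification"
    return "unit"
```

Running it per axis also captures the mixed case mentioned above: a 256×64 texture drawn into a 64×256-pixel region is minified horizontally and magnified vertically.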

The need for MIPMAPing arises from the fact that the processes of magnification and minification often exhibit discrete jumps as the size of the rendered polygon changes. These jumps can look like a brief flash or shimmering. Additionally, magnification and minification can be computationally expensive (minification in particular). It is more efficient to have a stack of prefiltered images available in memory.
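That stack of images can be precomputed by repeatedly averaging 2×2 blocks of texels, halving the resolution at each level until a single texel remains. A minimal sketch, assuming a square, power-of-two-sized grayscale image stored as a nested list:

```python
def build_mipmaps(image):
    """Build a mip chain by repeated 2x2 box-filter downsampling.
    Level 0 is the original image; each level halves the resolution."""
    levels = [image]
    while len(levels[-1]) > 1:
        src = levels[-1]
        n = len(src) // 2
        # Each destination texel is the average of a 2x2 source block.
        dst = [[(src[2 * y][2 * x] + src[2 * y][2 * x + 1] +
                 src[2 * y + 1][2 * x] + src[2 * y + 1][2 * x + 1]) / 4.0
                for x in range(n)] for y in range(n)]
        levels.append(dst)
    return levels
```

At render time, the renderer can then pick the level whose resolution best matches the rendered polygon's size, avoiding expensive on-the-fly minification and the discrete jumps it causes.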
