Chapter 6
Texturing
“All it takes is for the rendered image to look right.”
—Jim Blinn
A surface’s texture is its look and feel—just think of the texture of an oil
painting. In computer graphics, texturing is a process that takes a surface
and modifies its appearance at each location using some image, function,
or other data source. As an example, instead of precisely representing the
geometry of a brick wall, a color image of a brick wall is applied to a single
polygon. When the polygon is viewed, the color image appears where the
polygon is located. Unless the viewer gets close to the wall, the lack of
geometric detail (e.g., the fact that the image of bricks and mortar is on a
smooth surface) will not be noticeable. Color image texturing also provides
a way to use photographic images and animations on surfaces.
However, some textured brick walls can be unconvincing for reasons
other than lack of geometry. For example, if the mortar is supposed to be
glossy, whereas the bricks are matte, the viewer will notice that the gloss is
the same for both materials. To produce a more convincing experience, a
second image texture can be applied to the surface. Instead of changing the
surface’s color, this texture changes the wall’s gloss, depending on location
on the surface. Now the bricks have a color from the color image texture
and a gloss value from this new texture.
Once the gloss texture has been applied, however, the viewer may notice
that now all the bricks are glossy and the mortar is not, but each brick
face appears to be flat. This does not look right, as bricks normally have
some irregularity to their surfaces. By applying bump mapping, the surface
normals of the bricks may be varied so that when they are rendered, they
do not appear to be perfectly smooth. This sort of texture wobbles the
direction of the polygon’s original surface normal for purposes of computing
lighting.
From a shallow viewing angle, this illusion of bumpiness can break
down. The bricks should stick out above the mortar, obscuring it from
view. Even from a straight-on view, the bricks should cast shadows onto
Figure 6.1. Texturing. Color, bump, and parallax occlusion texture mapping methods
are used to add complexity and realism to a scene. (Image from “Toyshop” demo
courtesy of Natalya Tatarchuk, ATI Research, Inc.)
the mortar. Parallax and relief mapping use a texture to appear to deform
a flat surface when rendering it. Displacement mapping actually displaces
the surface, creating triangles between the texels. Figure 6.1 shows an
example.
These are examples of the types of problems that can be solved with
textures, using more and more elaborate algorithms. In this chapter, tex-
turing techniques are covered in detail. First, a general framework of the
texturing process is presented. Next, we focus on using images to texture
surfaces, since this is the most popular form of texturing used in real-time
work. Procedural textures are briefly discussed, and then some common
methods of getting textures to affect the surface are explained.
6.1 The Texturing Pipeline
Texturing, at its simplest, is a technique for efficiently modeling the sur-
face’s properties. One way to approach texturing is to think about what
happens for a single shaded pixel. As seen in the previous chapter, the
color is computed by taking into account the lighting and the material, as
well as the viewer’s position. If present, transparency also affects the sam-
ple. Texturing works by modifying the values used in the shading equation.
The way these values are changed is normally based on the position on the
surface. So, for the brick wall example, the diffuse color at any point on
the surface is replaced by a corresponding color in the image of a brick
wall, based on the surface location. The pixels in the image texture are
often called texels, to differentiate them from the pixels on the screen. The
gloss texture modifies the gloss value, and the bump texture changes the
direction of the normal, so each of these changes the result of the lighting
equation.
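To make this concrete, here is a minimal C++ sketch; the structure names and the texture-sampling stubs are illustrative assumptions rather than functions defined in this chapter. The shading routine fetches its diffuse color and gloss from textures at the surface's texture coordinates instead of using per-material constants:

#include <cstdio>

struct Color { float r, g, b; };
struct UV    { float u, v; };

// Stand-ins for real texture lookups; a renderer would sample image data here.
Color sampleColorTexture(UV uv) { return Color{0.55f, 0.25f, 0.20f}; } // brick color
float sampleGlossTexture(UV uv) { return 0.1f; }                       // matte brick

Color shade(UV uv) {
    Color diffuse = sampleColorTexture(uv); // replaces a constant material color
    float gloss   = sampleGlossTexture(uv); // replaces a constant material gloss
    // A full shading equation would combine these per-location values with the
    // light and view directions; only the diffuse term is returned here.
    (void)gloss;
    return diffuse;
}

int main() {
    Color c = shade(UV{0.32f, 0.29f});
    std::printf("shaded color: (%.2f, %.2f, %.2f)\n", c.r, c.g, c.b);
}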
Texturing can be described by a generalized texture pipeline. Much
terminology will be introduced in a moment, but take heart: Each piece of
the pipeline will be described in detail. Some steps are not always under
explicit user control, but each step happens in some fashion.
A location in space is the starting point for the texturing process. This
location can be in world space, but is more often in the model’s frame of
reference, so that as the model moves, the texture moves along with it.
Using Kershaw’s terminology [646], this point in space then has a projector
function applied to it to obtain a set of numbers, called parameter-space
values, that will be used for accessing the texture. This process is called
mapping, which leads to the phrase texture mapping.¹ Before these new
values may be used to access the texture, one or more corresponder func-
tions can be used to transform the parameter-space values to texture space.
These texture-space locations are used to obtain values from the texture,
e.g., they may be array indices into an image texture to retrieve a pixel.
The retrieved values are then potentially transformed yet again by a value
transform function, and finally these new values are used to modify some
property of the surface, such as the material or shading normal. Figure 6.2
shows this process in detail for the application of a single texture. The
reason for the complexity of the pipeline is that each step provides the user
with a useful control.
Figure 6.2. The generalized texture pipeline for a single texture: an object-space location
is passed through a projector function to yield parameter-space coordinates, through one
or more corresponder functions to yield a texture-space location, which is used to obtain
a texture value that a value transform function converts into the final transformed
texture value.
¹ Sometimes the texture image itself is called the texture map, though this is not
strictly correct.
Figure 6.3. Pipeline for a brick wall.
Using this pipeline, this is what happens when a polygon has a brick
wall texture and a sample is generated on its surface (see Figure 6.3). The
(x, y, z) position in the object’s local frame of reference is found; say it is
(2.3, 7.1, 88.2). A projector function is then applied to this position. Just
as a map of the world is a projection of a three-dimensional object into
two dimensions, the projector function here typically changes the (x, y, z)
vector into a two-element vector (u, v). The projector function used for
this example is equivalent to an orthographic projection (see Section 2.3.3),
acting something like a slide projector shining the brick wall image onto
the polygon’s surface. To return to the wall, a point on its plane could
be transformed into a pair of values ranging from 0 to 1. Say the values
obtained are (0.32, 0.29). These parameter-space values are to be used to
find what the color of the image is at this location. The resolution of our
brick texture is, say, 256 × 256, so the corresponder function multiplies
the (u, v) by 256 each, giving (81.92, 74.24). Dropping the fractions, pixel
(81, 74) is found in the brick wall image, and is of color (0.9, 0.8, 0.7). The
original brick wall image is too dark, so a value transform function that
multiplies the color by 1.1 is then applied, giving a color of (0.99, 0.88, 0.77).
This color is then used in the shading equation as the diffuse color of the
surface.
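The same steps can be expressed in code. The following minimal C++ sketch assumes a simple Texture structure and a nearest-texel fetch stub; the constants are those of the example above. It applies the corresponder function, retrieves the texel, and applies the value transform:

#include <cstdio>

struct Color   { float r, g, b; };
struct Texture { int width, height; };

// Corresponder function: scale parameter-space (u, v) to texel coordinates
// and drop the fractions.
void correspond(const Texture& tex, float u, float v, int& x, int& y) {
    x = int(u * tex.width);   // 0.32 * 256 = 81.92 -> 81
    y = int(v * tex.height);  // 0.29 * 256 = 74.24 -> 74
}

// Stand-in for reading the brick image; a real texture would index pixel data.
Color fetchTexel(const Texture&, int, int) { return Color{0.9f, 0.8f, 0.7f}; }

int main() {
    Texture brick{256, 256};
    float u = 0.32f, v = 0.29f;   // parameter-space values produced by the projector
    int x, y;
    correspond(brick, u, v, x, y);
    Color c = fetchTexel(brick, x, y);
    // Value transform function: brighten the too-dark source image by 1.1.
    Color diffuse{c.r * 1.1f, c.g * 1.1f, c.b * 1.1f};
    std::printf("texel (%d, %d) -> diffuse (%.2f, %.2f, %.2f)\n",
                x, y, diffuse.r, diffuse.g, diffuse.b);
}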
6.1.1 The Projector Function
The first step in the texture process is obtaining the surface’s location and
projecting it into parameter space, usually two-dimensional (u, v) space.
Modeling packages typically allow artists to define (u, v) coordinates per
vertex. These may be initialized from projector functions or from mesh un-
wrapping algorithms. Artists can edit (u, v) coordinates in the same way
they edit vertex positions.
Figure 6.4. Different texture projections. Spherical, cylindrical, planar, and natural (u, v)
projections are shown, left to right. The bottom row shows each of these projections
applied to a single object (which has no natural projection).
Projector functions typically work by convert-
ing a three-dimensional point in space into texture coordinates. Projector
functions commonly used in modeling programs include spherical, cylin-
drical, and planar projections [85, 646, 723]. Other inputs can be used
by a projector function. For example, the surface normal can be used to
choose which of six planar projection directions is used for the surface.
Problems in matching textures occur at the seams where the faces meet;
Geiss [386, 387] discusses a technique of blending among them. Tarini et
al. [1242] describe polycube maps, where a model is mapped to a set of cube
projections, with different volumes of space mapping to different cubes.
Other projector functions are not projections at all, but are an implicit
part of surface formation. For example, parametric curved surfaces have
a natural set of (u, v) values as part of their definition. See Figure 6.4.
The texture coordinates could also be generated from all sorts of differ-
ent parameters, such as the view direction, temperature of the surface, or
anything else imaginable. The goal of the projector function is to generate
texture coordinates. Deriving these as a function of position is just one
way to do it.
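As a small illustration of what such projector functions can look like, the following C++ sketch implements a planar and a spherical projection; the coordinate conventions and the planar extent are assumptions, and modeling packages differ in their exact definitions:

#include <cmath>
#include <cstdio>

struct Vec3 { float x, y, z; };
struct UV   { float u, v; };

// Planar projection along +z: drop z and rescale x and y into [0, 1],
// assuming the object fits inside [-halfExtent, halfExtent] on those axes.
UV planarProject(Vec3 p, float halfExtent) {
    return UV{ p.x / (2.0f * halfExtent) + 0.5f,
               p.y / (2.0f * halfExtent) + 0.5f };
}

// Spherical projection about the origin: the longitude and latitude of the
// direction from the center to the point become (u, v).
UV sphericalProject(Vec3 p) {
    const float pi = 3.14159265f;
    float longitude = std::atan2(p.y, p.x);                              // [-pi, pi]
    float latitude  = std::atan2(p.z, std::sqrt(p.x * p.x + p.y * p.y)); // [-pi/2, pi/2]
    return UV{ longitude / (2.0f * pi) + 0.5f,
               latitude / pi + 0.5f };
}

int main() {
    UV a = planarProject(Vec3{2.3f, 7.1f, 88.2f}, 10.0f);
    UV b = sphericalProject(Vec3{2.3f, 7.1f, 88.2f});
    std::printf("planar (%.2f, %.2f), spherical (%.2f, %.2f)\n", a.u, a.v, b.u, b.v);
}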
Noninteractive renderers often call these projector functions as part of
the rendering process itself. A single projector function may suffice for the
whole model, but often the artist has to use tools to subdivide the model
and apply various projector functions separately [983]. See Figure 6.5.