Figure 6.28. A wavy heightfield bump image and its use on a sphere, rendered with
per-pixel illumination.
Another way to represent bumps is to use a heightfield to modify the
surface normal’s direction. Each monochrome texture value represents a
height, so in the texture, white is a high area and black a low one (or vice
versa). See Figure 6.28 for an example. This is a common format used when
first creating or scanning a bump map and was also introduced by Blinn in
1978. The heightfield is used to derive u and v signed values similar to those
used in the first method. This is done by taking the differences between
neighboring columns to get the slopes for u, and between neighboring rows
for v [1127]. A variant is to use a Sobel filter, which gives a greater weight
to the directly adjacent neighbors [401].
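As a concrete illustration, the following is a minimal sketch (the HeightField type and function names are hypothetical) that derives the signed slopes with central differences; a Sobel filter would replace the two-sample differences with weighted 3 x 3 sums:

#include <vector>

struct HeightField {
    int width, height;
    std::vector<float> data;          // row-major, width*height texels in [0,1]
    float at(int x, int y) const {    // clamp-to-edge addressing
        x = x < 0 ? 0 : (x >= width  ? width  - 1 : x);
        y = y < 0 ? 0 : (y >= height ? height - 1 : y);
        return data[y * width + x];
    }
};

// Signed slope values at texel (x, y): differences between neighboring
// columns give the u slope, differences between neighboring rows the v
// slope. A user-chosen scale controls the apparent bump strength.
void heightToSlopes(const HeightField& h, int x, int y,
                    float scale, float& bu, float& bv) {
    bu = scale * (h.at(x + 1, y) - h.at(x - 1, y)) * 0.5f;
    bv = scale * (h.at(x, y + 1) - h.at(x, y - 1)) * 0.5f;
}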
6.7.2 Normal Mapping
The preferred implementation of bump mapping for modern graphics cards is to directly store a normal map. (In older, fixed-function hardware, this technique is known as dot product bump mapping.) This is preferred because the storage cost of three components of the perturbed normal versus the two offsets or single bump height is no longer considered prohibitive, and directly storing the perturbed normal reduces the number of computations needed per pixel during shading. The algorithms and results are mathematically identical to bump mapping; only the storage format changes.
The normal map encodes (x, y, z) mapped to [−1, 1]; e.g., for an 8-bit texture the x-axis value 0 represents −1.0 and 255 represents 1.0. An example is shown in Figure 6.29.
Figure 6.29. Bump mapping with a normal map. Each color channel is actually a surface normal coordinate. The red channel is the x deviation; the more red, the more the normal points to the right. Green is the y deviation, and blue is z. At the right is an image produced using the normal map. Note the flattened look on the top of the cube. (Images courtesy of Manuel M. Oliveira and Fabio Policarpo.)
The color [128, 128, 255], a light blue, would represent a flat surface for the color mapping shown, i.e., a normal of [0, 0, 1].
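As a concrete illustration of this encoding, here is a minimal decode sketch (hypothetical names; it assumes the 8-bit mapping just described):

#include <cstdint>

struct Vec3 { float x, y, z; };

// Map an 8-bit channel from [0, 255] to [-1, 1]; 128 decodes to nearly 0.
inline float decodeChannel(std::uint8_t c) {
    return c / 255.0f * 2.0f - 1.0f;
}

inline Vec3 decodeNormal(std::uint8_t r, std::uint8_t g, std::uint8_t b) {
    return { decodeChannel(r), decodeChannel(g), decodeChannel(b) };
}

// decodeNormal(128, 128, 255) yields approximately (0.004, 0.004, 1.0),
// i.e., the flat light-blue normal [0, 0, 1] mentioned above.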
The normal map representation was originally introduced as the world-
space normal map [182, 652], which is rarely used in practice. In that case,
the perturbation is straightforward: At each pixel, retrieve the normal
from the map and use it directly, along with a light’s direction, to compute
the shade at that location on the surface. This works because the normal
map’s stored values, where +z is up, align with how the square happens
to be oriented. As soon as the square is rotated to another position, this
direct use of the normal map is no longer possible. If the square is not
facing up, one solution could be to generate a new normal map, in which
all the stored normals are transformed to match the new orientation.
This technique is rarely used, as the same normal map could not then be
used on, say, two walls facing different directions. Even if the normal map
is used but once, the object it is applied to cannot change orientation or
location, or deform in any way.
Normal maps can also be defined in object space. Such normal maps
will remain valid as an object undergoes rigid transformations, but not
any type of deformation. Such normal maps also usually cannot be reused
between different parts of the object, or between objects. Although the
light’s direction needs to be transformed into object space, this can be
done in the application stage and does not add any overhead in the shader.
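A minimal sketch of that application-stage step follows (hypothetical names; it assumes the model transform's upper 3 x 3 part is a pure rotation, so its inverse is its transpose):

struct Vec3 { float x, y, z; };

// Rotate a world-space light direction into object space by applying
// the transpose (= inverse, for a pure rotation) of the model's 3x3
// rotation matrix, once per light on the CPU.
Vec3 lightToObjectSpace(const float rot[3][3], const Vec3& lWorld) {
    return {
        rot[0][0] * lWorld.x + rot[1][0] * lWorld.y + rot[2][0] * lWorld.z,
        rot[0][1] * lWorld.x + rot[1][1] * lWorld.y + rot[2][1] * lWorld.z,
        rot[0][2] * lWorld.x + rot[1][2] * lWorld.y + rot[2][2] * lWorld.z
    };
}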
Usually the perturbed normal is retrieved in tangent space, i.e., relative
to the surface itself. This allows for deformation of the surface, as well as
maximal reuse of the normal maps. It is also easier to compress tangent
space normal maps, since the sign of the z component (the one aligned with
the unperturbed surface normal) can usually be assumed to be positive.
The downside of tangent-space normal maps is that more transformations
are required for shading, since the reference frame changes over the surface.
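Such two-channel compression schemes reconstruct z in the shader from the other components; a minimal sketch (hypothetical names), relying on unit length and a non-negative z:

#include <cmath>

struct Vec3 { float x, y, z; };

// Rebuild z from a two-channel (x, y) tangent-space normal. The
// guard against a negative radicand protects against small errors
// introduced by quantization or filtering.
Vec3 reconstructNormal(float x, float y) {
    float z2 = 1.0f - x * x - y * y;
    return { x, y, z2 > 0.0f ? std::sqrt(z2) : 0.0f };
}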
To use illumination within a typical shading model, both the surface
and lighting must be in the same space: tangent, object, or world. One
method is to transform each light’s direction (as viewed from the vertex)
into tangent space and interpolate these transformed vectors across the
triangle. Other light-related values needed by the shading equation, such
as the half vector (see Section 5.5), could also be transformed, or could be
computed on the fly. These values are then used with the normal from the
normal map to perform shading. It is only the relative direction of the light
from the point being shaded that matters, not its absolute position in
space. The idea here is that the light’s direction slowly changes, so it can be
interpolated across a triangle. For a single light, this is less expensive than
transforming the surface’s perturbed normal to world space every pixel.
This is an example of frequency of computation: The light’s transform is
computed per vertex, instead of needing a per-pixel normal transform for
the surface.
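A sketch of this per-vertex step follows (hypothetical names; the tangent frame t, b, n is assumed orthonormal, so transforming into tangent space reduces to three dot products):

struct Vec3 { float x, y, z; };

inline float dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// Per vertex: express the world-space light direction in the local
// frame of tangent t, bitangent b, and normal n. The result is then
// interpolated across the triangle and renormalized per pixel.
Vec3 lightToTangentSpace(const Vec3& t, const Vec3& b, const Vec3& n,
                         const Vec3& lWorld) {
    return { dot(t, lWorld), dot(b, lWorld), dot(n, lWorld) };
}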
However, if the application uses more than just a few lights, it is more
efficient to transform the resulting normal to world space. This is done by
using the inverse of Equation 6.5, i.e., its transpose. Instead of interpolating
a large number of light directions across a triangle, only a single transform
is needed for the normal to go into world space. In addition, as we will see
in Chapter 8, some shading models use the normal to generate a reflection
direction. In this case, the normal is needed in world space regardless,
so there is no advantage in transforming the lights into tangent space.
Transforming to world space can also avoid problems due to any tangent
space distortion [887]. Normal mapping can be used to good effect to
increase realism—see Figure 6.30.
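Conversely, here is a minimal sketch of taking the fetched normal into world space once per pixel, under the same orthonormal-frame assumption (hypothetical names):

struct Vec3 { float x, y, z; };

// Per pixel: the world-space normal is the sum of the frame axes
// weighted by the tangent-space components fetched from the map.
Vec3 normalToWorldSpace(const Vec3& t, const Vec3& b, const Vec3& n,
                        const Vec3& nTangent) {
    return {
        t.x * nTangent.x + b.x * nTangent.y + n.x * nTangent.z,
        t.y * nTangent.x + b.y * nTangent.y + n.y * nTangent.z,
        t.z * nTangent.x + b.z * nTangent.y + n.z * nTangent.z
    };
}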
Filtering normal maps is a difficult problem, compared to filtering color
textures. In general, the relationship between the normal and the shaded
color is not linear, so standard filtering methods may result in objectionable
aliasing. Imagine looking at stairs made of blocks of shiny white marble.
At some angles, the tops or sides of the stairs catch the light and reflect
a bright specular highlight. However, the average normal for the stairs is
at, say, a 45-degree angle; it will capture highlights from entirely different
directions than the original stairs. When bump maps with sharp specular
highlights are rendered without correct filtering, a distracting sparkle effect
can occur as highlights wink in and out by the luck of where samples fall.
Figure 6.30. An example of normal map bump mapping used in a game scene. The details on the player's armor, the rusted walkway surface, and the brick surface of the column to the left all show the effects of normal mapping. (Image from "Crysis" courtesy of Crytek.)

Lambertian surfaces are a special case where the normal map has an almost linear effect on shading. Lambertian shading is almost entirely a dot product, which is a linear operation. Averaging a group of normals and performing a dot product with the result is equivalent to averaging the individual dot products with the normals:
$$\mathbf{l} \cdot \left( \frac{\sum_{j=1}^{n} \mathbf{n}_j}{n} \right) = \frac{\sum_{j=1}^{n} \left( \mathbf{l} \cdot \mathbf{n}_j \right)}{n}. \qquad (6.6)$$
Note that the average vector is not normalized before use. Equation 6.6
shows that standard filtering and mipmaps almost produce the right result
for Lambertian surfaces. The result is not quite correct because the Lam-
bertian shading equation is not a dot product; it is a clamped dot product—
max(l · n, 0). The clamping operation makes it nonlinear. This will overly
darken the surface for glancing light directions, but in practice this is usu-
ally not objectionable [652]. One caveat is that some texture compression
methods typically used for normal maps (such as reconstructing the z-
component from the other two) do not support non-unit-length normals,
so using non-normalized normal maps may pose compression difficulties.
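To make Equation 6.6 and the clamping caveat concrete, this small illustrative test (values chosen arbitrarily) compares shading with the averaged normal against the correct clamped average:

#include <algorithm>
#include <cstdio>

struct Vec3 { float x, y, z; };

static float dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

int main() {
    // Two unit normals tilted opposite ways, and a glancing unit light.
    Vec3 n1 = {  0.6f, 0.0f, 0.8f };
    Vec3 n2 = { -0.6f, 0.0f, 0.8f };
    Vec3 l  = { 0.96f, 0.0f, 0.28f };

    // Averaged, unnormalized normal, as mipmap filtering would produce.
    Vec3 avg = { (n1.x + n2.x) * 0.5f, (n1.y + n2.y) * 0.5f,
                 (n1.z + n2.z) * 0.5f };

    float filtered = dot(l, avg);   // equals the average of the raw dots: 0.224
    float correct  = (std::max(dot(l, n1), 0.0f) +
                      std::max(dot(l, n2), 0.0f)) * 0.5f;   // 0.4
    // dot(l,n1) = 0.8 and dot(l,n2) = -0.352, so the filtered value is
    // darker than the correct clamped average at this glancing angle.
    std::printf("filtered = %f, clamped average = %f\n", filtered, correct);
    return 0;
}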
In the case of non-Lambertian surfaces, it is possible to produce better
results by filtering the inputs to the shading equation as a group, rather
than filtering the normal map in isolation. Techniques for doing so are
discussed in Section 7.8.1.
6.7.3 Parallax Mapping
A problem with normal mapping is that the bumps never block each other.
If you look along a real brick wall, for example, at some angle you will not
see the mortar between the bricks. A bump map of the wall will never show
this type of occlusion, as it merely varies the normal. It would be better
to have the bumps actually affect which location on the surface is rendered
at each pixel.
The idea of parallax mapping was introduced in 2001 by Kaneko [622]
and refined and popularized by Welsh [1338]. Parallax refers to the idea
that the positions of objects move relative to one another as the observer
moves. As the viewer moves, the bumps should occlude each other, i.e.,
appear to have heights. The key idea of parallax mapping is to take an
educated guess of what should be seen in a pixel by examining the height
of what was found to be visible.
For parallax mapping, the bumps are stored in a heightfield texture.
When viewing the surface at a given pixel, the heightfield value is retrieved
at that location and used to shift the texture coordinates to retrieve a
different part of the surface. The amount to shift is based on the height
retrieved and the angle of the eye to the surface. See Figure 6.31. The
heightfield values are either stored in a separate texture, or packed in an
unused color or alpha channel of some other texture (care must be taken
when packing unrelated textures together, since this can negatively impact
compression quality). The heightfield values are scaled and biased before
being used to offset the values. The scale determines how high the height-
field is meant to extend above or below the surface, and the bias gives the
“sea-level” height at which no shift takes place. Given a location p, an adjusted heightfield height h, and a normalized view vector v with a height v_z and horizontal component v_xy, the new parallax-adjusted position is

$$\mathbf{p}_{\text{adj}} = \mathbf{p} + \frac{h \cdot \mathbf{v}_{xy}}{v_z}. \qquad (6.7)$$
Figure 6.31. On the left is the goal: The actual position on the surface is found from where the view vector pierces the heightfield. Parallax mapping does a first-order approximation by taking the height at the location on the polygon and using it to find a new location p_adj. (After Welsh [1338].)
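A minimal sketch of this offset computation follows (hypothetical names; heightSample is the raw texel value, with scale and bias as described above):

struct Vec2 { float x, y; };
struct Vec3 { float x, y, z; };

// Shift the texture coordinate p by the adjusted height, scaled by the
// ratio of the view vector's horizontal component to its height. v is
// a normalized tangent-space view vector from the surface to the eye.
Vec2 parallaxAdjust(Vec2 p, float heightSample,   // raw texel value in [0,1]
                    float scale, float bias,      // extent and "sea level"
                    const Vec3& v) {
    float h = heightSample * scale + bias;        // adjusted heightfield height
    p.x += h * v.x / v.z;
    p.y += h * v.y / v.z;
    return p;
}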