Figure 6.22. Left: the unit normal on a sphere only needs to encode the x- and y-components. Right: for BC5/3Dc, a box in the xy-plane encloses the normals, and 8×8 normals inside this box can be used per 4×4 block of normals (for clarity, only 4×4 normals are shown here).
of wasting a component (or having to pack another quantity in the fourth component). Further compression is usually achieved by storing the x- and y-components in a BC5/3Dc-format texture (see Figure 6.22). Since the reference values for each block demarcate the minimum and maximum x- and y-component values, they can be seen as defining a bounding box on the xy-plane. The three-bit interpolation factors allow for the selection of eight values on each axis, so the bounding box is divided into an 8×8 grid of possible normals.
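To make the decoding concrete, here is a minimal C++ sketch of BC5/3Dc normal reconstruction. It assumes the unsigned eight-value interpolation mode described above (a full decoder must also handle the six-value mode, selected when the first reference value is not greater than the second), and the function names are illustrative, not from any real API.

#include <cmath>
#include <cstdint>

// Decode one BC4-style channel: two 8-bit reference values followed by
// sixteen 3-bit indices (48 bits) that select among 8 interpolated values.
static void decodeBC4Channel(const uint8_t block[8], float out[16]) {
    float ref0 = block[0] / 255.0f;   // these two references are the
    float ref1 = block[1] / 255.0f;   // bounding-box extents on this axis
    float palette[8];
    palette[0] = ref0;
    palette[1] = ref1;
    for (int i = 2; i < 8; ++i)       // six evenly spaced interior values
        palette[i] = ((8 - i) * ref0 + (i - 1) * ref1) / 7.0f;
    uint64_t bits = 0;                // gather the 48 index bits
    for (int i = 0; i < 6; ++i)
        bits |= uint64_t(block[2 + i]) << (8 * i);
    for (int t = 0; t < 16; ++t)
        out[t] = palette[(bits >> (3 * t)) & 0x7];
}

// Reconstruct the normals of a 4x4 block: x and y come from the two
// channels (remapped from [0,1] to [-1,1]), and z is rederived.
void decodeBC5Normals(const uint8_t block[16],
                      float nx[16], float ny[16], float nz[16]) {
    float xs[16], ys[16];
    decodeBC4Channel(block, xs);      // first 8 bytes: x components
    decodeBC4Channel(block + 8, ys);  // second 8 bytes: y components
    for (int t = 0; t < 16; ++t) {
        nx[t] = xs[t] * 2.0f - 1.0f;
        ny[t] = ys[t] * 2.0f - 1.0f;
        nz[t] = std::sqrt(std::fmax(0.0f,
                  1.0f - nx[t] * nx[t] - ny[t] * ny[t]));
    }
}

Note how the two reference values per channel are exactly the bounding-box extents discussed above, and the 3-bit indices select one of eight positions along each axis.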
On hardware that does not support the BC5/3Dc format, a common fallback [887] is to use a DXT5-format texture and store the two components in the green and alpha components (since those are stored with the highest precision). The other two components are unused.
By using unexploited bit combinations in BC5/3Dc, Munkberg et al.
[908] propose some extra modes that increase the normal quality at the
same bit rate cost. One particularly useful feature is that the aspect ratio
of the box determines the layout of the normal inside a block. For example,
if the box of the normals is more than twice as wide as high, one can use
16×4 normals inside the box instead of 8×8 normals. Since this is triggered
by the aspect ratio alone, no extra bits are needed to indicate the normal
layout per block. Further improvements for normal map compression are
also possible. By storing an oriented box and using the aspect ratio trick,
even higher normal map quality can be obtained [910].
Several formats for texture compression of high dynamic range images
have been suggested, as well. Here, each color component is originally
stored using a 16- or 32-bit floating-point number. Most of these schemes
start compression of the data after conversion to a luminance-chrominance
color space. The reason for this is that the component with high dynamic
range is usually only the luminance, while the chrominances can be compressed more easily. Roimela et al. [1080] present a very fast compressor and decompressor algorithm, where the chrominances are downsampled and the luminance is stored more accurately. Munkberg et al. [909, 911] store the logarithm of the luminance using a DXTC-inspired variant and introduce shape transforms for efficient chrominance encodings. Munkberg et al.'s algorithm produces compressed HDR textures with better image quality, while Roimela et al.'s requires much simpler decompression hardware, and its compression is also faster. Both of these algorithms require new hardware. Wang et al. [1323] take another approach, which reuses the DXTC decompression hardware and stores the content of an HDR texture in two sets of DXTC textures. This gives real-time performance today, but with apparent artifacts in some cases [911].
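As a rough illustration of the shared first step, here is a C++ sketch of a luminance-chrominance conversion. The weights, the log encoding, and the structure names are illustrative assumptions, not the transform used by any of the cited schemes.

#include <cmath>

struct RGB       { float r, g, b; };
struct LumChroma { float logLum, u, v; };  // hypothetical layout

// Split an HDR color into log-luminance plus two low-range chrominance
// values; the chrominances can then be downsampled and quantized coarsely.
LumChroma toLumChroma(const RGB& c) {
    float lum = 0.2126f * c.r + 0.7152f * c.g + 0.0722f * c.b;
    float sum = c.r + c.g + c.b + 1e-6f;
    LumChroma out;
    out.logLum = std::log2(lum + 1e-6f);  // log keeps quantization error
                                          // roughly uniform across the range
    out.u = c.r / sum;                    // chromaticity-like coordinates,
    out.v = c.g / sum;                    // confined to [0,1]
    return out;
}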
6.3 Procedural Texturing
Performing an image lookup is one way of generating texture values, given
a texture-space location. Another is to evaluate a function, thus defining
a procedural texture.
Although procedural textures are commonly used in offline rendering
applications, image textures are far more common in real-time rendering.
This is due to the extremely high efficiency of the image texturing hardware
in modern GPUs, which can perform many billions of texture accesses
in a second. However, GPU architectures are evolving toward cheaper
computation and (relatively) more costly memory access. This will make
procedural textures more common in real-time applications, although they
are unlikely to ever replace image textures completely.
Volume textures are a particularly attractive application for procedural
texturing, given the high storage costs of volume image textures. Such
textures can be synthesized by a variety of techniques. One of the most
common is using one or more noise functions to generate values [295, 1003,
1004, 1005]. See Figure 6.23. Because of the cost of evaluating the noise
function, the lattice points in the three-dimensional array are often precomputed and used to interpolate texture values. There are methods that use the accumulation buffer or color buffer blending to generate these arrays [849]. Perlin [1006] presents a rapid, practical method for sampling this
noise function and shows some uses. Olano [960] provides noise generation
algorithms that permit tradeoffs between storing textures and performing
computations. Green [448] gives a higher quality method, but one that
is meant more for near-interactive applications, as it uses 50 pixel shader
instructions for a single lookup. The original noise function presented by
Perlin [1003, 1004, 1005] can be improved upon. Cook and DeRose [197]
present an alternate representation, called wavelet noise, which avoids aliasing problems with only a small increase in evaluation cost.

Figure 6.23. Two examples of real-time procedural texturing using a volume texture. The marble texture is continuous over the surface, with no mismatches at edges. The object on the left is formed by cutting a pair of spheres with a plane and using the stencil buffer to fill in the gap. (Images courtesy of Evan Hart, ATI Technologies Inc.)
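For concreteness, here is a minimal C++ sketch of lattice noise with interpolation, in the spirit of the techniques above. It hashes lattice points in place of a stored array and interpolates values rather than gradients (as Perlin noise proper does), so it is illustrative only.

#include <cmath>
#include <cstdint>

// Hash a lattice point to a pseudorandom value in [0,1) (illustrative hash).
static float latticeValue(int x, int y, int z) {
    uint32_t h = uint32_t(x) * 73856093u ^ uint32_t(y) * 19349663u
               ^ uint32_t(z) * 83492791u;
    h = (h ^ (h >> 13)) * 1274126177u;
    return (h & 0xffffffu) / 16777216.0f;
}

static float smooth(float t) { return t * t * (3.0f - 2.0f * t); }  // fade curve
static float lerp(float a, float b, float t) { return a + (b - a) * t; }

// Trilinearly interpolate the eight lattice values surrounding (x, y, z).
float valueNoise(float x, float y, float z) {
    int xi = int(std::floor(x)), yi = int(std::floor(y)), zi = int(std::floor(z));
    float fx = smooth(x - xi), fy = smooth(y - yi), fz = smooth(z - zi);
    float c00 = lerp(latticeValue(xi, yi,   zi),   latticeValue(xi+1, yi,   zi),   fx);
    float c10 = lerp(latticeValue(xi, yi+1, zi),   latticeValue(xi+1, yi+1, zi),   fx);
    float c01 = lerp(latticeValue(xi, yi,   zi+1), latticeValue(xi+1, yi,   zi+1), fx);
    float c11 = lerp(latticeValue(xi, yi+1, zi+1), latticeValue(xi+1, yi+1, zi+1), fx);
    return lerp(lerp(c00, c10, fy), lerp(c01, c11, fy), fz);
}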
Other procedural methods are possible. For example, a cellular texture
is formed by measuring distances from each location to a set of “feature
points” scattered through space. Mapping the resulting closest distances
in various ways, e.g., changing the color or shading normal, creates patterns that look like cells, flagstones, lizard skin, and other natural textures.
Griffiths [456] discusses how to efficiently find the closest neighbors and
generate cellular textures on the GPU.
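Here is a minimal C++ sketch of the cellular idea, with one hashed feature point per lattice cell; it is 2D for brevity, and the hash and the one-point-per-cell layout are illustrative assumptions.

#include <cmath>
#include <cstdint>

// Hash a cell to a pseudorandom offset in [0,1) (illustrative hash).
static float hash01(int x, int y, uint32_t seed) {
    uint32_t h = uint32_t(x) * 374761393u + uint32_t(y) * 668265263u + seed;
    h = (h ^ (h >> 13)) * 1274126177u;
    return (h & 0xffffffu) / 16777216.0f;
}

// Distance from (x, y) to the closest feature point, one point per cell.
// Searching the 3x3 cell neighborhood finds the closest point in nearly
// all cases; a strictly correct search examines a slightly larger ring.
float cellularDistance(float x, float y) {
    int xi = int(std::floor(x)), yi = int(std::floor(y));
    float best = 1e30f;
    for (int j = -1; j <= 1; ++j)
        for (int i = -1; i <= 1; ++i) {
            float px = float(xi + i) + hash01(xi + i, yi + j, 0u);
            float py = float(yi + j) + hash01(xi + i, yi + j, 1u);
            float dx = x - px, dy = y - py;
            best = std::fmin(best, dx * dx + dy * dy);
        }
    return std::sqrt(best);  // map this through a color ramp for "cells"
}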
Another type of procedural texture is the result of a physical simulation
or some other interactive process, such as water ripples or spreading cracks.
In such cases, procedural textures can produce effectively infinite variability
in reaction to dynamic conditions.
When generating a procedural two-dimensional texture, parameterization issues can pose even more difficulties than for authored textures, where stretching or seam artifacts can be manually touched up or worked around. One solution is to avoid parameterization completely by synthesizing textures directly onto the surface. Performing this operation on complex
surfaces is technically challenging and is an active area of research. See Lefebvre and Hoppe [750] for one approach and an overview of past research.
Antialiasing procedural textures is both easier and more difficult than antialiasing image textures. On one hand, precomputation methods such as mipmapping are not available. On the other hand, the procedural texture author has "inside information" about the texture content and so can tailor it to avoid aliasing. This is particularly the case with procedural textures created by summing multiple noise functions: the frequency of each noise function is known, so any frequencies that would cause aliasing can be discarded (actually making the computation cheaper). Various techniques exist for antialiasing other types of procedural textures [457, 1085, 1379].
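A sketch of that frequency-clamping idea in C++, assuming the valueNoise() function from the earlier sketch; the footprint parameter and the fade heuristic are illustrative assumptions.

#include <cmath>

float valueNoise(float x, float y, float z);  // from the earlier sketch

// Sum noise octaves, skipping frequencies the current pixel footprint
// cannot represent; the last audible octave is faded in smoothly so the
// pattern does not pop as the footprint changes with distance.
float fbmAntialiased(float x, float y, float z, float pixelFootprint) {
    float nyquist = 0.5f / pixelFootprint;  // highest representable frequency
    float sum = 0.0f, amplitude = 0.5f, frequency = 1.0f;
    for (int octave = 0; octave < 8 && frequency < nyquist; ++octave) {
        float fade = std::fmin(1.0f, nyquist / frequency - 1.0f);
        sum += fade * amplitude
             * valueNoise(x * frequency, y * frequency, z * frequency);
        amplitude *= 0.5f;
        frequency *= 2.0f;
    }
    return sum;
}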
6.4 Texture Animation
The image applied to a surface does not have to be static. For example, a
video source can be used as a texture that changes from frame to frame.
The texture coordinates need not be static, either. In fact, for environment mapping, they usually change with each frame because of the way they are computed (see Section 8.4). The application designer can also explicitly change the texture coordinates from frame to frame. Imagine that a waterfall has been modeled and that it has been textured with an image that looks like falling water. Say the v coordinate is the direction of flow. To make the water move, one must subtract an amount from the v coordinates on each successive frame. Subtraction from the texture coordinates has the effect of making the texture itself appear to move forward.
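In code, the scroll amounts to a one-line texture-coordinate update. This C++ sketch stands in for the vertex shader; the names are illustrative.

#include <cmath>

struct TexCoord { float u, v; };

// Make the water flow: subtract a growing offset from v, wrapping into
// [0,1) so a repeating texture tiles seamlessly.
TexCoord scrollWater(TexCoord tc, float time, float flowSpeed) {
    tc.v -= flowSpeed * time;
    tc.v -= std::floor(tc.v);
    return tc;
}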
More elaborate effects can be created by modifying the texture coordinates in the vertex or pixel shader. Applying a matrix to the texture coordinates (an operation that is supported on fixed-function pipelines, as well) allows for linear transformations such as zoom, rotation, and shearing [849, 1377], image warping and morphing transforms [1367], and generalized projections [475]. By performing other computations on the
texture coordinates, many more elaborate effects are possible, including
waterfalls and animated fire.
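For example, a rotation and zoom about a texture-space center point could look like the following C++ sketch, standing in for shader code; the names are illustrative.

#include <cmath>

struct TexCoord { float u, v; };

// Rotate and zoom the texture coordinates about the point (cu, cv);
// the 2x2 matrix entries are the usual rotation terms scaled by zoom.
TexCoord rotateZoomUV(TexCoord tc, float angle, float zoom,
                      float cu, float cv) {
    float s = std::sin(angle), c = std::cos(angle);
    float du = tc.u - cu, dv = tc.v - cv;
    TexCoord out;
    out.u = cu + zoom * (c * du - s * dv);
    out.v = cv + zoom * (s * du + c * dv);
    return out;
}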
By using texture blending techniques, one can realize other animated
effects. For example, by starting with a marble texture and fading in a
flesh texture, one can make a statue come to life [874].
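The fade itself is just a linear interpolation between the two fetched texture colors, as in this small illustrative C++ sketch.

struct Color { float r, g, b; };

// Fade between two fetched texture colors; t animates from 0 (marble)
// to 1 (flesh) over the course of the effect.
Color fadeBlend(const Color& marble, const Color& flesh, float t) {
    Color out;
    out.r = marble.r + (flesh.r - marble.r) * t;
    out.g = marble.g + (flesh.g - marble.g) * t;
    out.b = marble.b + (flesh.b - marble.b) * t;
    return out;
}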
6.5 Material Mapping
A common use of a texture is to modify a material property affecting the
shading equation, for example the equation discussed in Chapter 5:
\[
L_o(\mathbf{v}) = \left( \frac{\mathbf{c}_{\mathrm{diff}}}{\pi} + \frac{m+8}{8\pi} \cos^m\theta_h \,\mathbf{c}_{\mathrm{spec}} \right) \otimes \mathbf{E}_L \cos\theta_i .
\tag{6.4}
\]
In the implementation of this model in Section 5.5, the material parameters were accessed from constants. However, real-world objects usually have material properties that vary over their surface. To simulate such objects, the pixel shader can read values from textures and use them to modify the material parameters before evaluating the shading equation. The parameter that is most often modified by a texture is the diffuse color c_diff; such a texture is known as a diffuse color map. The specular color c_spec is also commonly textured, usually as a grayscale value rather than as RGB, since for most real-world materials c_spec is uncolored (colored metals such as gold and copper are notable exceptions). A texture that affects c_spec is properly called a specular color map. Such textures are sometimes referred to as gloss maps, but this name is more properly applied to textures that modify the surface smoothness parameter m. The three parameters c_diff, c_spec, and m fully describe the material in this shading model; other shading models may have additional parameters that can similarly be modified by values that are read from textures.
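Putting this together, here is an illustrative C++ sketch of Equation 6.4 evaluated with textured parameters. The fetches themselves are left as comments because the lookup function is hypothetical, and the cosine terms are assumed already clamped to [0,1].

#include <cmath>

struct Color { float r, g, b; };

// Evaluate Equation 6.4 for one light, with parameters already fetched
// from textures, e.g. (hypothetical lookup API):
//   Color cdiff = sampleTexture(diffuseMap, uv);
//   Color cspec = sampleTexture(specularMap, uv);  // usually grayscale
//   float m     = sampleTexture(glossMap, uv).r * maxExponent;
Color shade(const Color& cdiff, const Color& cspec, float m,
            const Color& EL, float cosThetaH, float cosThetaI) {
    const float kPi = 3.14159265f;
    float spec = (m + 8.0f) / (8.0f * kPi) * std::pow(cosThetaH, m);
    // (cdiff/pi + spec * cspec) (x) EL * cos(theta_i), componentwise:
    Color out;
    out.r = (cdiff.r / kPi + spec * cspec.r) * EL.r * cosThetaI;
    out.g = (cdiff.g / kPi + spec * cspec.g) * EL.g * cosThetaI;
    out.b = (cdiff.b / kPi + spec * cspec.b) * EL.b * cosThetaI;
    return out;
}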
As discussed previously, shading model inputs like c_diff and c_spec have a linear relationship to the final color output from the shader. Thus, textures containing such inputs can be filtered with standard techniques, and aliasing is avoided. Textures containing nonlinear shading inputs, like m, require a bit more care to avoid aliasing. Filtering techniques that take account of the shading equation can produce improved results for such textures. These techniques are discussed in Section 7.8.1.
6.6 Alpha Mapping
The alpha value can be used for many interesting effects. One texture-related effect is decaling. As an example, say you wish to put a picture of a flower on a teapot. You do not want the whole picture, but just the parts where the flower is present. By assigning an alpha of 0 to a texel, you make it transparent, so that it has no effect. So, by properly setting the decal texture's alpha, you can replace or blend the underlying surface with the decal. Typically, a clamp corresponder function is used with a transparent border to apply a single copy of the decal (versus a repeating texture) to the surface.
A similar application of alpha is in making cutouts. Say you make a
decal image of a bush and apply it to a polygon in the scene. The principle is
the same as for decals, except that instead of being flush with an underlying
surface, the bush will be drawn on top of whatever geometry is behind it.
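Cutouts (and fully transparent decal texels) come down to a per-texel alpha test in the pixel shader, sketched here in C++; the threshold value and names are illustrative.

struct RGBA { float r, g, b, a; };

// Keep the fragment only where the decal or cutout is present; a pixel
// shader would call discard when this returns false.
bool alphaKeep(const RGBA& texel, float threshold = 0.5f) {
    return texel.a >= threshold;
}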