Figure 6.12. Minification: A view of a checkerboard-textured polygon through a row of
pixel cells, showing roughly how a number of texels affect each pixel.
Figure 6.13. The top image was rendered with point sampling (nearest neighbor), the
center with mipmapping, and the bottom with summed area tables.
Better solutions are possible. As discussed in Section 5.6.1, the problem
of aliasing can be addressed by sampling and filtering techniques. The
signal frequency of a texture depends upon how closely spaced its texels
are on the screen. Due to the Nyquist limit, we need to make sure that the
texture’s signal frequency is no greater than half the sample frequency. For
example, say an image is composed of alternating black and white lines,
a texel apart. The wavelength is then two texels wide (from black line to
black line), so the frequency is
1/2. To properly display this texture on a
screen, the frequency must then be at least 2 × 1/2, i.e., at least one pixel
per texel. So, for textures in general, there should be at most one texel per
pixel to avoid aliasing.
To achieve this goal, either the pixel’s sampling frequency has to in-
crease or the texture frequency has to decrease. The antialiasing methods
discussed in the previous chapter give ways to increase the pixel sampling
rate. However, these give only a limited increase in sampling frequency.
To more fully address this problem, various texture minification algorithms
have been developed.
The basic idea behind all texture antialiasing algorithms is the same: to
preprocess the texture and create data structures that will help compute a
quick approximation of the effect of a set of texels on a pixel. For real-time
work, these algorithms have the characteristic of using a fixed amount of
time and resources for execution. In this way, a fixed number of samples
are taken per pixel and combined to compute the effect of a (potentially
huge) number of texels.
Mipmapping
The most popular method of antialiasing for textures is called mipmap-
ping [1354]. It is implemented in some form on even the most modest
graphics accelerators now produced. “Mip” stands for multum in parvo,
Latin for “many things in a small place”—a good name for a process in
which the original texture is filtered down repeatedly into smaller images.
When the mipmapping minification filter is used, the original texture
is augmented with a set of smaller versions of the texture before the actual
rendering takes place. The texture (level zero) is downsampled to a quar-
ter of the original area, with each new texel value often computed as the
average of four neighbor texels in the original texture. The new, level-one
texture is sometimes called a subtexture of the original texture. The re-
duction is performed recursively until one or both of the dimensions of the
texture equals one texel. This process is illustrated in Figure 6.14. The set
of images as a whole is often called a mipmap chain.
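
As a rough illustration of how the chain's resolutions shrink, the following C++ sketch (the function name is illustrative, and power-of-two dimensions are assumed) lists the size of each level, halving both dimensions until one of them reaches a single texel:

#include <cstdio>

// Minimal sketch: list the resolutions of a mipmap chain for a
// power-of-two texture, halving each dimension until one of them
// (and, for square textures, both) reaches one texel.
void printMipChain(int width, int height)
{
    int w = width, h = height;
    for (int level = 0; ; ++level) {
        std::printf("level %d: %d x %d\n", level, w, h);
        if (w == 1 || h == 1)   // stop once a dimension is one texel
            break;
        w /= 2;                 // power-of-two sizes halve exactly
        h /= 2;
    }
}

For example, printMipChain(256, 256) lists nine levels, from 256 × 256 at level zero down to 1 × 1.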
Two important elements in forming high-quality mipmaps are good
filtering and gamma correction. The common way to form a mipmap level
is to take each 2×2 set of texels and average them to get the mip texel value.
Figure 6.14. A mipmap is formed by taking the original image (level 0), at the base
of the pyramid, and averaging each 2 × 2 area into a texel value on the next level up.
The vertical axis is the third texture coordinate, d. In this figure, d is not linear; it is a
measure of which two texture levels a sample uses for interpolation.
The filter used is then a box filter, one of the worst filters possible. This
can result in poor quality, as it has the effect of blurring low frequencies
unnecessarily, while keeping some high frequencies that cause aliasing [120].
It is better to use a Gaussian, Lanczos, Kaiser, or similar filter; fast, free
source code exists for the task [120, 1141], and some APIs support better
filtering on the GPU itself. Care must be taken when filtering near the edges of
textures, paying attention to whether the texture repeats or is a single copy.
Use of mipmapping on the GPU is why square textures with powers-of-two
resolutions are the norm for use on models.
For textures encoded in a nonlinear space (such as most color textures),
ignoring gamma correction when filtering will modify the perceived bright-
ness of the mipmap levels [121]. As you get farther away from the object
and the uncorrected mipmaps get used, the object can look darker overall,
and contrast and details can also be affected. For this reason, it is impor-
tant to convert such textures into linear space, perform all mipmap filtering
in that space, and convert the final results back into nonlinear space for
storage.
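
As a minimal sketch of this linear-space filtering, assuming the nonlinear encoding can be approximated by a simple 2.2 gamma (the true sRGB transfer function differs slightly) and using the basic 2 × 2 box filter discussed above:

#include <cmath>

// Sketch: filter one mip texel from a 2x2 block of nonlinearly encoded
// texels (one color channel). A 2.2 gamma approximates the sRGB encoding.
float decodeGamma(float encoded) { return std::pow(encoded, 2.2f); }
float encodeGamma(float linear)  { return std::pow(linear, 1.0f / 2.2f); }

float downsampleTexel(float t00, float t01, float t10, float t11)
{
    // Convert to linear space, average there, then re-encode for storage.
    float linearAvg = 0.25f * (decodeGamma(t00) + decodeGamma(t01) +
                               decodeGamma(t10) + decodeGamma(t11));
    return encodeGamma(linearAvg);
}

Averaging the encoded values directly would skip the decode and encode steps, shifting the perceived brightness of the smaller levels, which is exactly the artifact described above.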
As mentioned earlier, some textures have a fundamentally nonlinear
relationship to the final shaded color. Although this poses a problem for
filtering in general, mipmap generation is particularly sensitive to this issue,
since many hundreds or thousands of texels are being filtered. Specialized
mipmap generation methods are often needed for the best results. Such
methods are detailed in Section 7.8.1.
Figure 6.15. On the left is a square pixel cell and its view of a texture. On the right is
the projection of the pixel cell onto the texture itself.
The basic process of accessing this structure while texturing is straight-
forward. A screen pixel encloses an area on the texture itself. When the
pixel’s area is projected onto the texture (Figure 6.15), it includes one or
more texels.^6 The goal is to determine roughly how much of the texture
influences the pixel. There are two common measures used to compute d
(which OpenGL calls λ, and which is also known as the level of detail). One
is to use the longer edge of the quadrilateral formed by the pixel’s cell to
approximate the pixel’s coverage [1354]; another is to use as a measure the
largest absolute value of the four differentials ∂u/∂x, ∂v/∂x, ∂u/∂y, and
∂v/∂y [666, 1011]. Each differential is a measure of the amount of change in
the texture coordinate with respect to a screen axis. For example, ∂u/∂x is
the amount of change in the u texture value along the x-screen-axis for one
pixel. See Williams’s original article [1354] or the article by Flavell [345] or
Pharr [1011] for more about these equations. McCormack et al. [840] dis-
cuss the introduction of aliasing by the largest absolute value method, and
they present an alternate formula. Ewins et al. [327] analyze the hardware
costs of several algorithms of comparable quality.
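
As a sketch of the second measure, the following illustrative C++ version scales the four differentials into texel units, takes the largest absolute value, and converts the result to a level of detail; the clamp that maps magnification to level zero is an assumption about typical behavior:

#include <algorithm>
#include <cmath>

// Sketch of the "largest absolute value of the four differentials" measure.
// dudx, dvdx, dudy, dvdy are derivatives of (u,v) in [0,1] with respect to
// screen x and y; texWidth and texHeight convert them into texel units.
float computeLevelOfDetail(float dudx, float dvdx, float dudy, float dvdy,
                           float texWidth, float texHeight)
{
    float maxChange = std::max(
        std::max(std::fabs(dudx * texWidth), std::fabs(dvdx * texHeight)),
        std::max(std::fabs(dudy * texWidth), std::fabs(dvdy * texHeight)));
    // One texel per pixel maps to level 0; each doubling of the footprint
    // moves one level up the pyramid. Magnification is clamped to level 0.
    return std::log2(std::max(maxChange, 1.0f));
}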
These gradient values are available to pixel shader programs using
Shader Model 3.0 or newer. Since they are based on the differences be-
tween values in adjacent pixels, they are not accessible in sections of the
pixel shader affected by dynamic flow control (see Section 3.6). For texture
reads to be performed in such a section (e.g., inside a loop), the deriva-
tives must be computed earlier. Note that since vertex shaders cannot
access gradient information, the gradients or the level of detail need to be
computed in the vertex shader itself and supplied to the GPU when doing
vertex texturing.
^6 Using the pixel's cell boundaries is not strictly correct, but is used here to simplify
the presentation. Texels outside of the cell can influence the pixel's color; see
Section 5.6.1.
The intent of computing the coordinate d is to determine where to
sample along the mipmap’s pyramid axis (see Figure 6.14). The goal is a
pixel-to-texel ratio of at least 1:1 in order to achieve the Nyquist rate. The
important principle here is that as the pixel cell comes to include more
texels and d increases, a smaller, blurrier version of the texture is accessed.
The (u, v, d) triplet is used to access the mipmap. The value d is analogous
to a texture level, but instead of an integer value, d has the fractional value
of the distance between levels. The texture levels above and below the d
location are sampled. The (u, v) location is used to retrieve a bilinearly
interpolated sample from each of these two texture levels. The resulting
sample is then linearly interpolated, depending on the distance from each
texture level to d. This entire process is called trilinear interpolation and
is performed per pixel.
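
The following C++ sketch shows the idea for a single-channel texture; the data structure, the clamp-style addressing, and the half-texel centering convention are illustrative choices, not a description of any particular GPU:

#include <algorithm>
#include <cmath>
#include <vector>

// Hypothetical single-channel mip level (names are illustrative).
struct MipLevel {
    int width, height;
    std::vector<float> texels;   // row-major, width * height values
};

// Bilinearly filtered fetch from one level at (u,v) in [0,1].
float bilinearSample(const MipLevel& level, float u, float v)
{
    float x = u * level.width  - 0.5f;   // texel-center convention
    float y = v * level.height - 0.5f;
    int x0 = static_cast<int>(std::floor(x));
    int y0 = static_cast<int>(std::floor(y));
    float fx = x - x0;
    float fy = y - y0;

    auto fetch = [&](int xi, int yi) {
        xi = std::clamp(xi, 0, level.width  - 1);   // clamp addressing
        yi = std::clamp(yi, 0, level.height - 1);
        return level.texels[yi * level.width + xi];
    };
    float top    = fetch(x0, y0)     + fx * (fetch(x0 + 1, y0)     - fetch(x0, y0));
    float bottom = fetch(x0, y0 + 1) + fx * (fetch(x0 + 1, y0 + 1) - fetch(x0, y0 + 1));
    return top + fy * (bottom - top);
}

// Trilinear interpolation: bilinearly sample the two levels that bracket d,
// then blend them by the fractional distance between the levels.
float trilinearSample(const std::vector<MipLevel>& chain,
                      float u, float v, float d)
{
    int lower = std::clamp(static_cast<int>(std::floor(d)),
                           0, static_cast<int>(chain.size()) - 1);
    int upper = std::min(lower + 1, static_cast<int>(chain.size()) - 1);

    float a = bilinearSample(chain[lower], u, v);
    float b = bilinearSample(chain[upper], u, v);
    float t = std::clamp(d - static_cast<float>(lower), 0.0f, 1.0f);
    return a + t * (b - a);              // blend by the fractional part of d
}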
One user control on the d coordinate is the level of detail bias (LOD
bias). This is a value added to d, and so it affects the relative perceived
sharpness of a texture. If we move further up the pyramid to start (in-
creasing d), the texture will look blurrier. A good LOD bias for any given
texture will vary with the image type and with the way it is used. For
example, images that are somewhat blurry to begin with could use a nega-
tive bias, while poorly filtered (aliased) synthetic images used for texturing
could use a positive bias. The bias can be specified for the texture as a
whole, or per-pixel in the pixel shader. For finer control, the d coordinate
or the derivatives used to compute it can be supplied by the user, in any
shader stage.
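
As a small sketch, the bias is simply folded into d before the mipmap is sampled; the clamp to the valid level range is an assumed detail:

#include <algorithm>

// Sketch: apply a level-of-detail bias. Positive values select blurrier
// levels, negative values sharper ones; the result is kept in range.
float applyLodBias(float d, float bias, int numLevels)
{
    return std::clamp(d + bias, 0.0f, static_cast<float>(numLevels - 1));
}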
The result of mipmapping is that instead of trying to sum all the texels
that affect a pixel individually, precombined sets of texels are accessed
and interpolated. This process takes a fixed amount of time, no matter
what the amount of minification. However, mipmapping has a number of
flaws [345]. A major one is overblurring. Imagine a pixel cell that covers a
large number of texels in the u direction and only a few in the v direction.
This case commonly occurs when a viewer looks along a textured surface
nearly edge-on. In fact, it is possible to need minification along one axis
of the texture and magnification along the other. The effect of accessing
the mipmap is that square areas on the texture are retrieved; retrieving
rectangular areas is not possible. To avoid aliasing, we choose the largest
measure of the approximate coverage of the pixel cell on the texture. This
results in the retrieved sample often being relatively blurry. This effect can
be seen in the mipmap image in Figure 6.13. The lines moving into the
distance on the right show overblurring.
One extension to mipmapping is the ripmap. The idea is to extend the
mipmap to include downsampled rectangular areas as subtextures that can
be accessed [518]. The mipmap is stored 1 × 1, 2 × 2, 4 × 4, etc., but all
possible rectangles are also stored 1 × 2, 2 × 4, 2 × 1, 4 × 1, 4 × 2, etc.
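
To make the storage pattern concrete, here is a small illustrative sketch that lists every ripmap level for a power-of-two texture; unlike the mipmap loop given earlier, the two axes are halved independently:

#include <cstdio>

// Sketch: enumerate ripmap level sizes. Every combination of independent
// halvings in width and height is stored, including the original texture;
// a mipmap keeps only the levels where both axes are halved together.
void printRipmapLevels(int width, int height)
{
    for (int w = width; w >= 1; w /= 2)
        for (int h = height; h >= 1; h /= 2)
            std::printf("%d x %d\n", w, h);
}

Summing the two geometric series shows that a full ripmap occupies roughly four times the memory of the original texture, whereas a mipmap adds only about one third.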