The u and v coverage determines the rectangle’s dimensions. While this
technique produces better visuals than mipmapping, it comes at a high cost
and so normally is avoided in favor of other methods. Instead of adding
just one-third more storage space, as mipmapping does, the cost here is an
additional three times the space of the original image.
Summed-Area Table
Another method to avoid overblurring is the summed-area table (SAT) [209].
To use this method, one first creates an array that is the size of the texture
but contains more bits of precision for the color stored (e.g., 16 bits or more
for each of red, green, and blue). At each location in this array, one must
compute and store the sum of all the corresponding texture’s texels in the
rectangle formed by this location and texel (0, 0) (the origin). During tex-
turing, the pixel cell’s projection onto the texture is bound by a rectangle.
The summed-area table is then accessed to determine the average color of
this rectangle, which is passed back as the texture’s color for the pixel. The
average is computed using the texture coordinates of the rectangle shown
in Figure 6.16. This is done using the formula given in Equation 6.2:
c = \frac{s[x_{ur}, y_{ur}] - s[x_{ur}, y_{ll}] - s[x_{ll}, y_{ur}] + s[x_{ll}, y_{ll}]}{(x_{ur} - x_{ll})(y_{ur} - y_{ll})}.   (6.2)
Here, x and y are the texel coordinates of the rectangle and s[x, y] is the
summed-area value for that texel. This equation works by taking the sum
of the entire area from the upper right corner to the origin, then subtracting
off areas A and B by subtracting the neighboring corners’ contributions.
Figure 6.16. The pixel cell is back-projected onto the texture, bound by a rectangle; the
four corners of the rectangle are used to access the summed-area table.
Area C has been subtracted twice, so it is added back in by the lower
left corner. Note that (x_ll, y_ll) is the upper right corner of area C, i.e.,
(x_ll + 1, y_ll + 1) is the lower left corner of the bounding box.
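To make the table construction and lookup concrete, the following C++ sketch
(not from the original text; the class and function names are illustrative)
builds a single-channel summed-area table and evaluates the rectangle average
of Equation 6.2. It assumes the bounding rectangle does not touch the left or
bottom texture edge, so that x_ll and y_ll are valid indices.

#include <cstdint>
#include <vector>

// A single-channel summed-area table. sat[y * width + x] holds the sum of
// all texels in the rectangle spanned by (0, 0) and (x, y), inclusive.
// Wider precision (64 bits here) is used so large sums do not lose accuracy.
struct SummedAreaTable {
    int width = 0, height = 0;
    std::vector<uint64_t> sat;

    SummedAreaTable(const std::vector<uint8_t>& texels, int w, int h)
        : width(w), height(h), sat(size_t(w) * h, 0) {
        for (int y = 0; y < h; ++y) {
            for (int x = 0; x < w; ++x) {
                uint64_t sum = texels[size_t(y) * w + x];
                if (x > 0)          sum += sat[size_t(y) * w + (x - 1)];
                if (y > 0)          sum += sat[size_t(y - 1) * w + x];
                if (x > 0 && y > 0) sum -= sat[size_t(y - 1) * w + (x - 1)];
                sat[size_t(y) * w + x] = sum;
            }
        }
    }

    // Average over the texel rectangle whose lower left corner is
    // (xll + 1, yll + 1) and whose upper right corner is (xur, yur),
    // as in Equation 6.2. Assumes 0 <= xll < xur and 0 <= yll < yur.
    float Average(int xll, int yll, int xur, int yur) const {
        uint64_t sum = sat[size_t(yur) * width + xur]
                     - sat[size_t(yll) * width + xur]
                     - sat[size_t(yur) * width + xll]
                     + sat[size_t(yll) * width + xll];
        return float(sum) / (float(xur - xll) * float(yur - yll));
    }
};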
The results of using a summed-area table are shown in Figure 6.13. The
lines going to the horizon are sharper near the right edge, but the diagonally
crossing lines in the middle are still overblurred. Similar problems occur
with the ripmap scheme. The problem is that when a texture is viewed
along its diagonal, a large rectangle is generated, with many of the texels
situated nowhere near the pixel being computed. For example, imagine
a long, thin rectangle representing the pixel cell’s back-projection lying
diagonally across the entire texture in Figure 6.16. The whole texture
rectangle’s average will be returned, rather than just the average within
the pixel cell.
Ripmaps and summed-area tables are examples of what are called an-
isotropic filtering algorithms [518]. Such algorithms are schemes that can
retrieve texel values over areas that are not square. However, they are able
to do this most effectively in primarily horizontal and vertical directions.
Both schemes are memory intensive. While a mipmap’s subtextures take
only an additional third of the memory of the original texture, a ripmap’s
take an additional three times as much as the original. Summed-area tables
take at least two times as much memory for textures of size 16 × 16 or less,
with more precision needed for larger textures.
Ripmaps were available in high-end Hewlett-Packard graphics accelera-
tors in the early 1990s. Summed area tables, which give higher quality for
lower overall memory costs, can be implemented on modern GPUs [445].
Improved filtering can be critical to the quality of advanced rendering tech-
niques. For example, Hensley et al. [542, 543] provide an efficient imple-
mentation and show how summed area sampling improves glossy reflec-
tions. Other algorithms in which area sampling is used can be improved
by SAT, such as depth of field [445, 543], shadow maps [739], and blurry
reflections [542].
Unconstrained Anisotropic Filtering
For current graphics hardware, the most common method to further im-
prove texture filtering is to reuse existing mipmap hardware. The basic idea
is that the pixel cell is back-projected, this quadrilateral (quad) on the tex-
ture is then sampled a number of times, and the samples are combined.
As outlined above, each mipmap sample has a location and a squarish area
associated with it. Instead of using a single mipmap sample to approximate
this quad’s coverage, the algorithm uses a number of squares to cover the
quad. The shorter side of the quad can be used to determine d (unlike in
mipmapping, where the longer side is often used); this makes the averaged
area smaller (and so less blurred) for each mipmap sample. The quad’s
Figure 6.17. Anisotropic filtering. The back-projection of the pixel cell creates a quadri-
lateral. A line of anisotropy is formed between the longer sides.
longer side is used to create a line of anisotropy parallel to the longer side
and through the middle of the quad. When the amount of anisotropy is
between 1:1 and 2:1, two samples are taken along this line (see Figure 6.17).
At higher ratios of anisotropy, more samples are taken along the axis.
This scheme allows the line of anisotropy to run in any direction, and so
does not have the limitations that ripmaps and summed-area tables had. It
also requires no more texture memory than mipmaps do, since it uses the
mipmap algorithm to do its sampling. An example of anisotropic filtering
is shown in Figure 6.18.
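As a rough illustration of this sampling scheme, the C++ sketch below picks
the mipmap level from the shorter side of the back-projected quad and averages
several trilinear probes along the line of anisotropy. This is a conceptual
sketch only, not how any particular GPU implements it; the probe placement,
the equal weighting, and the helper SampleTrilinear are assumptions.

#include <algorithm>
#include <cmath>

struct Vec2  { float u, v; };
struct Color { float r, g, b; };

// Hypothetical helper, assumed to exist: an ordinary trilinear mipmap lookup.
Color SampleTrilinear(Vec2 uv, float lod);

// dUVdx and dUVdy are the texture-space derivatives of the pixel cell;
// texSize is the texture resolution in texels (assumed square).
Color SampleAnisotropic(Vec2 uv, Vec2 dUVdx, Vec2 dUVdy,
                        float texSize, int maxAniso) {
    float lenX = std::hypot(dUVdx.u, dUVdx.v);
    float lenY = std::hypot(dUVdy.u, dUVdy.v);

    Vec2  axis    = (lenX >= lenY) ? dUVdx : dUVdy;  // line of anisotropy
    float longer  = std::max(lenX, lenY);
    float shorter = std::max(std::min(lenX, lenY), 1e-6f);

    // The number of probes grows with the anisotropy ratio, up to a limit.
    int n = std::min(std::max(1, int(std::ceil(longer / shorter))), maxAniso);

    // d (the mipmap level) comes from the shorter side, so each probe
    // averages a smaller, less blurred area than plain mipmapping would.
    float lod = std::log2(std::max(shorter * texSize, 1.0f));

    // Place n probes evenly along the line of anisotropy, through the
    // middle of the quad, and average them with equal weights.
    Color sum = {0.0f, 0.0f, 0.0f};
    for (int i = 0; i < n; ++i) {
        float t = (i + 0.5f) / n - 0.5f;   // in (-0.5, 0.5)
        Vec2 p = { uv.u + t * axis.u, uv.v + t * axis.v };
        Color c = SampleTrilinear(p, lod);
        sum.r += c.r; sum.g += c.g; sum.b += c.b;
    }
    return { sum.r / n, sum.g / n, sum.b / n };
}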
This idea of sampling along an axis was first introduced by Schilling et
al. with their Texram dynamic memory device [1123]. Barkans describes the
Figure 6.18. Mipmap versus anisotropic filtering. Trilinear mipmapping has been done on
the left, and 16:1 anisotropic filtering on the right, both at a 640 × 450 resolution, using
ATI Radeon hardware. Towards the horizon, anisotropic filtering provides a sharper
result, with minimal aliasing.
algorithm’s use in the Talisman system [65]. A similar system called Feline
is presented by McCormack et al. [841]. Texram’s original formulation
has the samples along the anisotropic axis (also known as probes) given
equal weights. Talisman gives half weight to the two probes at opposite
ends of the axis. Feline uses a Gaussian filter kernel to weight the probes.
These algorithms approach the high quality of software sampling algorithms
such as the Elliptical Weighted Average (EWA) filter, which transforms the
pixel’s area of influence into an ellipse on the texture and weights the texels
inside the ellipse by a filter kernel [518].
6.2.3 Volume Textures
A direct extension of image textures is three-dimensional image data that
is accessed by (u, v, w) (or (s, t, r)) values. For example, medical imaging
data can be generated as a three-dimensional grid; by moving a polygon
through this grid, one may view two-dimensional slices of this data. A
related idea is to represent volumetric lights in this form. The illumination
on a point on a surface is found by finding the value for its location inside
this volume, combined with a direction for the light.
Most GPUs support mipmapping for volume textures. Since filtering
inside a single mipmap level of a volume texture involves trilinear interpo-
lation, filtering between mipmap levels requires quadrilinear interpolation.
Since this involves averaging the results from 16 texels, precision problems
may result, which can be solved by using a higher precision volume tex-
ture. Sigg and Hadwiger [1180] discuss this and other problems relevant
to volume textures and provide efficient methods to perform filtering and
other operations.
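To make the cost of quadrilinear filtering concrete, it can be viewed as a
linear blend of two trilinear lookups in adjacent mipmap levels, touching
2 x 8 = 16 texels in total. The sketch below assumes a hypothetical helper
TrilinearVolumeSample that performs the usual eight-texel filter within one
level of a volume texture.

#include <cmath>

struct VolumeTexture;  // details omitted in this sketch

// Hypothetical helper, assumed to exist: trilinear filtering within one level.
float TrilinearVolumeSample(const VolumeTexture& tex, int level,
                            float u, float v, float w);

float QuadrilinearVolumeSample(const VolumeTexture& tex,
                               float u, float v, float w, float lod) {
    int   level0 = int(std::floor(lod));
    int   level1 = level0 + 1;
    float frac   = lod - float(level0);
    float c0 = TrilinearVolumeSample(tex, level0, u, v, w);  // 8 texels
    float c1 = TrilinearVolumeSample(tex, level1, u, v, w);  // 8 texels
    return (1.0f - frac) * c0 + frac * c1;  // blend the two mipmap levels
}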
Although volume textures have significantly higher storage require-
ments and are more expensive to filter, they do have some unique ad-
vantages. The complex process of finding a good two-dimensional param-
eterization for the three-dimensional mesh can be skipped, since three-
dimensional locations can be used directly as texture coordinates. This
avoids the distortion and seam problems that commonly occur with two-
dimensional parameterizations. A volume texture can also be used to rep-
resent the volumetric structure of a material such as wood or marble. A
model textured with such a texture will appear to be carved from this
material.
Using volume textures for surface texturing is extremely inefficient,
since the vast majority of samples are not used. Benson and Davis [81] and
DeBry et al. [235] discuss storing texture data in a sparse octree structure.
This scheme fits well with interactive three-dimensional painting systems,
as the surface does not need explicit texture coordinates assigned to it at
the time of creation, and the octree can hold texture detail down to any
level desired. Lefebvre et al. [749] discuss the details of implementing octree
textures on the modern GPU. Lefebvre and Hoppe [751] discuss a method
of packing sparse volume data into a significantly smaller texture.
6.2.4 Cube Maps
Another type of texture is the cube texture or cube map, which has six
square textures, each of which is associated with one face of a cube. A
cube map is accessed with a three-component texture coordinate vector
that specifies the direction of a ray pointing from the center of the cube
outwards. The point where the ray intersects the cube is found as follows.
The texture coordinate with the largest magnitude selects the correspond-
ing face (e.g., the vector (-3.2, 5.1, -8.4) selects the -z face). The re-
maining two coordinates are divided by the absolute value of the largest
magnitude coordinate, i.e., 8.4. They now range from -1 to 1, and are
simply remapped to [0, 1] in order to compute the texture coordinates.
For example, the coordinates (-3.2, 5.1) are mapped to ((-3.2/8.4 + 1)/2,
(5.1/8.4 + 1)/2) ≈ (0.31, 0.80). Cube maps are useful for representing val-
ues which are a function of direction; they are most commonly used for
environment mapping (see Section 8.4.3).
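A minimal C++ sketch of this face selection and remapping follows. The face
numbering and the choice of which components become u and v are illustrative
assumptions; real graphics APIs additionally mirror some faces.

#include <cmath>

struct Vec3 { float x, y, z; };
struct Vec2 { float u, v; };

// Select the cube face from the largest-magnitude component, divide the
// other two components by that magnitude, and remap from [-1, 1] to [0, 1].
int CubeMapCoords(Vec3 dir, Vec2* uv) {
    float ax = std::fabs(dir.x), ay = std::fabs(dir.y), az = std::fabs(dir.z);
    float ma;      // largest magnitude
    float sc, tc;  // the two remaining coordinates
    int face;
    if (ax >= ay && ax >= az) {        // +x or -x face
        ma = ax; sc = dir.y; tc = dir.z; face = (dir.x > 0.0f) ? 0 : 1;
    } else if (ay >= az) {             // +y or -y face
        ma = ay; sc = dir.x; tc = dir.z; face = (dir.y > 0.0f) ? 2 : 3;
    } else {                           // +z or -z face
        ma = az; sc = dir.x; tc = dir.y; face = (dir.z > 0.0f) ? 4 : 5;
    }
    // Divide by the largest magnitude (range [-1, 1]), then remap to [0, 1].
    uv->u = (sc / ma + 1.0f) * 0.5f;
    uv->v = (tc / ma + 1.0f) * 0.5f;
    return face;
}

For the example direction (-3.2, 5.1, -8.4), the z component has the largest
magnitude, so the -z face is selected and the resulting coordinates come out
to about (0.31, 0.80), matching the worked example above.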
Cube maps support bilinear filtering as well as mipmapping, but prob-
lems can occur along seams of the map, where the square faces join. Cube
maps are supposed to be continuous, but almost all graphics hardware can-
not sample across these boundaries when performing bilinear interpolation.
Also, the normal implementation for filtering cube maps into mipmap lev-
els is that no data from one cube face affects the other. Another factor that
Figure 6.19. Cube map filtering. The leftmost two images use the 2 × 2 and 4 × 4
mipmap levels of a cube map, generated using standard cube map mipmap chain gen-
eration. The seams are obvious, making these mipmap levels unusable except for cases
of extreme minification. The two rightmost images use mipmap levels at the same reso-
lutions, generated by sampling across cube faces and using angular extents. Due to the
lack of seams, these mipmap levels can be used even for objects covering a large screen
area. (Images using CubeMapGen courtesy of ATI Technologies Inc.)