124 5. Visual Appearance
Figure 5.27. On the left, the sampled signal and the reconstructed signal. On the right, the filter width has been doubled in order to double the interval between the samples, that is, minification has taken place.
aliasing. Instead, it has been shown that a filter using sinc(x/a) should be used to create a continuous signal from the sampled one [1035, 1194]. After
that, resampling at the desired intervals can take place. This can be seen
in Figure 5.27. Said another way, by using sinc(x/a) as a filter here, the
width of the lowpass filter is increased, so that more of the signal’s higher
frequency content is removed. As shown in the figure, the filter width (of
the individual sinc’s) is doubled in order to decrease the resampling rate
to half the original sampling rate. Relating this to a digital image, this is
similar to first blurring it (to remove high frequencies) and then resampling
the image at a lower resolution.
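This reconstruct-and-resample idea can be sketched in a few lines of Python. The code below is our own illustration, not from the book: unit-spaced samples are reconstructed with the widened filter sinc(x/a), normalized by 1/a, and the result is point-sampled at the coarser rate. With a = 2, an alternating (Nyquist-frequency) signal is smoothed toward its average instead of aliasing.

```python
import math

def sinc(x):
    # Normalized sinc: sin(pi x) / (pi x), with sinc(0) = 1.
    return 1.0 if x == 0.0 else math.sin(math.pi * x) / (math.pi * x)

def resample(samples, a, new_positions):
    """Reconstruct a signal from unit-spaced samples with the widened
    filter sinc(x / a), then point-sample it at new_positions.
    Widening by a (> 1) lowers the filter's cutoff frequency by the
    same factor, removing content that would alias at the coarser rate."""
    out = []
    for x in new_positions:
        # Sum of scaled, shifted sinc lobes; 1/a normalizes the wider filter.
        value = sum(s * sinc((x - i) / a) / a for i, s in enumerate(samples))
        out.append(value)
    return out

# Halve the sampling rate (a = 2): resample at every other position.
signal = [0.0, 1.0, 0.0, 1.0, 0.0, 1.0, 0.0, 1.0]
coarse = resample(signal, a=2.0, new_positions=[0, 2, 4, 6])
```

The interior values land near 0.5, the average of the alternating input; the endpoints deviate because the truncated signal provides no samples beyond its edges.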
With the theory of sampling and filtering available as a framework, the
various algorithms used in real-time rendering to reduce aliasing are now
discussed.
5.6.2 Screen-Based Antialiasing
Edges of polygons produce noticeable artifacts if not sampled and filtered
well. Shadow boundaries, specular highlights, and other phenomena where
the color is changing rapidly can cause similar problems. The algorithms
discussed in this section help improve the rendering quality for these cases.
They have the common thread that they are screen based, i.e., that they
operate only on the output samples of the pipeline and do not need any
knowledge of the objects being rendered.
Some antialiasing schemes are focused on particular rendering primi-
tives. Two special cases are texture aliasing and line aliasing. Texture
antialiasing is discussed in Section 6.2.2. Line antialiasing can be performed
in a number of ways. One method is to treat the line as a quadrilateral
one pixel wide that is blended with its background; another is to consider
it an infinitely thin, transparent object with a halo; a third is to render the
line as an antialiased texture [849]. These ways of thinking about the line
Figure 5.28. On the left, a red triangle is rendered with one sample at the center of the
pixel. Since the triangle does not cover the sample, the pixel will be white, even though
a substantial part of the pixel is covered by the red triangle. On the right, four samples
are used per pixel, and as can be seen, two of these are covered by the red triangle,
which results in a pink pixel color.
can be used in screen-based antialiasing schemes, but special-purpose line
antialiasing hardware can provide rapid, high-quality rendering for lines.
For a thorough treatment of the problem and some solutions, see Nelson’s
two articles [923, 924]. Chan and Durand [170] provide a GPU-specific
solution using prefiltered lines.
In the black triangle example in Figure 5.18, one problem is the low
sampling rate. A single sample is taken at the center of each pixel’s grid
cell, so the most that is known about the cell is whether or not the center
is covered by the triangle. By using more samples per screen grid cell and
blending these in some fashion, a better pixel color can be computed.9 This is illustrated in Figure 5.28.
The general strategy of screen-based antialiasing schemes is to use a
sampling pattern for the screen and then weight and sum the samples to
produce a pixel color, p:
p(x, y) = \sum_{i=1}^{n} w_i c(i, x, y),    (5.16)
where n is the number of samples taken for a pixel. The function c(i, x, y) is a sample color and w_i is a weight, in the range [0, 1], that the sample will contribute to the overall pixel color. The sample position is taken based on which sample it is in the series 1, . . . , n, and the function optionally also uses the integer part of the pixel location (x, y). In other words, where the sample is taken on the screen grid is different for each sample, and optionally the sampling pattern can vary from pixel to pixel. Samples are normally point samples in real-time rendering systems (and most other rendering systems, for that matter). So the function c can be thought of as
two functions. First a function f(i, n) retrieves the floating-point (x_f, y_f) location on the screen where a sample is needed. This location on the screen is then sampled, i.e., the color at that precise point is retrieved.

9 Here we differentiate a pixel, which consists of an RGB color triplet to be displayed, from a screen grid cell, which is the geometric area on the screen centered around a pixel's location. See Smith's memo [1196] and Blinn's article [111] to understand why this is important.
The sampling scheme is chosen and the rendering pipeline configured to
compute the samples at particular subpixel locations, typically based on a
per-frame (or per-application) setting.
The other variable in antialiasing is w_i, the weight of each sample. These weights sum to one. Most methods used in real-time rendering systems give a constant weight to their samples, i.e., w_i = 1/n. Note that the default
mode for graphics hardware, a single sample at the center of the pixel,
is the simplest case of the antialiasing equation above. There is only one
term, the weight of this term is one, and the sampling function f always
returns the center of the pixel being sampled.
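Equation 5.16 is simple enough to sketch directly. The Python fragment below is our own illustration, not code from the book: it computes a pixel color as a weighted sum of RGB samples, using the usual constant weights w_i = 1/n, and reproduces the pink result of Figure 5.28.

```python
def pixel_color(samples, weights):
    """Weighted sum of per-pixel samples (Equation 5.16):
    p = sum_i w_i * c_i, with the weights summing to one.
    Each sample is an (r, g, b) triplet; weights are scalars."""
    assert abs(sum(weights) - 1.0) < 1e-9
    return tuple(
        sum(w * c[channel] for w, c in zip(weights, samples))
        for channel in range(3)
    )

# Four samples with constant weights w_i = 1/4. Two samples land on a
# red triangle and two on the white background, as in Figure 5.28.
samples = [(1, 0, 0), (1, 0, 0), (1, 1, 1), (1, 1, 1)]
pink = pixel_color(samples, [0.25] * 4)  # (1.0, 0.5, 0.5)
```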
Antialiasing algorithms that compute more than one full sample per
pixel are called supersampling (or oversampling) methods. Conceptually
simplest, full-scene antialiasing (FSAA) renders the scene at a higher res-
olution and then averages neighboring samples to create an image. For
example, say an image of 1000 × 800 pixels is desired. If you render an image of 2000 × 1600 offscreen and then average each 2 × 2 area on the screen, the desired image is generated with 4 samples per pixel. Note that this corresponds to 2 × 2 grid sampling in Figure 5.29. This method is costly, as all subsamples must be fully shaded and filled, with a Z-buffer depth per sample. FSAA's main advantage is simplicity. Other, lower quality versions of this method sample at twice the rate on only one screen axis, and so are called 1 × 2 or 2 × 1 supersampling.
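The resolve step of 2 × 2 FSAA is just a box average over each 2 × 2 block of the high-resolution render. A minimal sketch (the helper name is ours, and grayscale values stand in for RGB for brevity):

```python
def downsample_2x2(image):
    """Average each 2x2 block of a supersampled image (a list of rows of
    grayscale values) into one output pixel -- the resolve step of 2x2
    full-scene antialiasing."""
    h, w = len(image), len(image[0])
    return [
        [
            (image[2 * y][2 * x] + image[2 * y][2 * x + 1]
             + image[2 * y + 1][2 * x] + image[2 * y + 1][2 * x + 1]) / 4.0
            for x in range(w // 2)
        ]
        for y in range(h // 2)
    ]

# A 4x4 offscreen render shrinks to the desired 2x2 image,
# giving 4 samples per output pixel.
big = [
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [1, 1, 1, 1],
    [0, 0, 0, 0],
]
small = downsample_2x2(big)  # [[0.0, 1.0], [0.5, 0.5]]
```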
A related method is the accumulation buffer [474, 815]. Instead of
one large offscreen buffer, this method uses a buffer that has the same
resolution as, and usually more bits of color than, the desired image. To
obtain a 2×2 sampling of a scene, four images are generated, with the view
moved half a pixel in the screen x- or y-direction as needed. Essentially,
each image generated is for a different sample position within the grid cell.
These images are summed up in the accumulation buffer. After rendering,
the image is averaged (in our case, divided by 4) and sent to the display.
Accumulation buffers are a part of the OpenGL API [9, 1362]. They can
also be used for such effects as motion blur, where a moving object appears
blurry, and depth of field, where objects not at the camera focus appear
blurry. However, the additional costs of having to rerender the scene a few times per frame and copy the result to the screen make this algorithm costly for real-time rendering systems.
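The accumulation-buffer loop can be sketched as follows. This is our own illustration, assuming a hypothetical render_pass(dx, dy) callback that renders the scene with the view jittered by the given subpixel offset and returns an image as a list of rows:

```python
def accumulate(render_pass, offsets):
    """Sketch of accumulation-buffer antialiasing: render the scene once
    per subpixel offset, sum the images into a high-precision buffer,
    then divide by the number of passes to average them."""
    acc = None
    for dx, dy in offsets:
        img = render_pass(dx, dy)
        if acc is None:
            # Allocate the accumulation buffer at the image's resolution.
            acc = [[0.0] * len(row) for row in img]
        for y, row in enumerate(img):
            for x, v in enumerate(row):
                acc[y][x] += v
    n = len(offsets)
    return [[v / n for v in row] for row in acc]

# 2x2 sampling: four passes, the view moved by half a pixel as needed.
offsets = [(0.0, 0.0), (0.5, 0.0), (0.0, 0.5), (0.5, 0.5)]
```

Each pass is independent, which is exactly why arbitrary sampling patterns (such as the rotated grid discussed below) are possible with this scheme.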
Modern GPUs do not have dedicated accumulation buffer hardware,
but one can be simulated by blending separate images together using pixel
operations [884]. If only 8 bit color channels are used for the accumulated
image, the low-order bits of each image will be lost when blending, potentially causing color banding. Higher-precision buffers (most hardware supports buffers with 10 or 16 bits per channel) avoid the problem.

Figure 5.29. A comparison of some pixel sampling schemes. The 2 × 2 rotated grid captures more gray levels for the nearly horizontal edge than a straight 2 × 2 grid. Similarly, the 8 rooks pattern captures more gray levels for such lines than a 4 × 4 grid, despite using fewer samples.
An advantage that the accumulation buffer has over FSAA (and over
the A-buffer, which follows) is that sampling does not have to be a uniform
orthogonal pattern within a pixel’s grid cell. Each pass is independent
of the others, so alternate sampling patterns are possible. Sampling in
a rotated square pattern such as (0, 0.25), (0.5, 0), (0.75, 0.5), (0.25, 0.75)
gives more vertical and horizontal resolution within the pixel. Sometimes
called rotated grid supersampling (RGSS), this pattern gives more levels
of antialiasing for nearly vertical or horizontal edges, which usually are
most in need of improvement. In fact, Naiman [919] shows that humans are most disturbed by aliasing on near-horizontal and near-vertical edges. Edges with a slope near 45 degrees are next most disturbing. Figure 5.29
shows how the sampling pattern affects quality.
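The claim about gray levels can be checked numerically. The sketch below (our own helper, not from the book) sweeps a horizontal edge vertically through a pixel and counts how many distinct coverage fractions each four-sample pattern can report; the RGSS positions are those given in the text, and the regular grid uses the usual centered 2 × 2 placement.

```python
def coverage_levels(pattern, steps=200):
    """Count the distinct coverage fractions a sample pattern reports as
    a horizontal edge y = b sweeps vertically through a unit pixel.
    More distinct levels means smoother gray gradations on such edges."""
    levels = set()
    for k in range(steps):
        b = k / steps  # edge height within the pixel
        covered = sum(1 for (_, y) in pattern if y < b)
        levels.add(covered)
    return len(levels)

# Rotated-grid (RGSS) sample positions from the text, and a regular 2x2 grid.
rgss = [(0.0, 0.25), (0.5, 0.0), (0.75, 0.5), (0.25, 0.75)]
grid = [(0.25, 0.25), (0.75, 0.25), (0.25, 0.75), (0.75, 0.75)]
# RGSS yields five coverage levels (0..4 samples covered); the regular
# grid only three (0, 2, or 4), since its samples flip in pairs.
```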
Techniques such as supersampling work by generating samples that
are fully specified with individually computed shades, depths, and loca-
tions. The cost is high and the overall gains relatively low; in particu-
lar, each sample has to run through a pixel shader. For this reason, Di-
rectX does not directly support supersampling as an antialiasing method.
Multisampling strategies lessen the high computational costs of these al-
gorithms by sampling various types of data at differing frequencies. A
multisampling algorithm takes more than one sample per pixel in a single
pass, and (unlike the methods just presented) shares some computations
among the samples. Within GPU hardware these techniques are called
multisample antialiasing (MSAA) and, more recently, coverage sampling
antialiasing (CSAA).
Additional samples are needed when phenomena such as object edges,
specular highlights, and sharp shadows cause abrupt color changes. Shad-
ows can often be made softer and highlights wider, but object edges remain
as a major sampling problem. MSAA saves time by computing fewer shad-
ing samples per fragment. So pixels might have 4 (x, y) sample locations
per fragment, each with their own color and z-depth, but the shade is com-
puted only once for each fragment. Figure 5.30 shows some MSAA patterns
used in practice.
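A toy model of the MSAA idea follows; this is our own sketch, and real hardware resolves are considerably more involved. The key point it captures is that one shade is computed per fragment and applied to every positional sample the fragment covers, with the resolve averaging the per-sample colors.

```python
def msaa_pixel(coverage, fragment_shade, background):
    """Sketch of MSAA for one pixel: the fragment's shade is computed
    once and stored at each positional sample it covers; uncovered
    samples keep the background color. The resolve averages the
    per-sample colors. coverage is one boolean per sample location."""
    sample_colors = [fragment_shade if hit else background
                     for hit in coverage]
    n = len(sample_colors)
    return tuple(sum(c[ch] for c in sample_colors) / n for ch in range(3))

# Four sample positions, two covered by a red fragment over white:
# the single red shade is reused for both covered samples.
color = msaa_pixel([True, True, False, False], (1, 0, 0), (1, 1, 1))
```

Compared to the supersampling sketch earlier, the shade appears once per fragment rather than once per sample; only coverage and depth are stored at sample rate.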
If all MSAA positional samples are covered by the fragment, the shading
sample is in the center of the pixel. However, if the fragment covers fewer
positional samples, the shading sample’s position can be shifted. Doing so
avoids shade sampling off the edge of a texture, for example. This position
adjustment is called centroid sampling or centroid interpolation and is done
automatically by the GPU, if enabled.10

10 Centroid sampling can cause derivative computations to return incorrect values, so it should be used with care.