p(x, y) = Σ_{i=1}^{n} w_i c(i, x, y)
126 5. Visual Appearance
two functions. First a function f(i, n) retrieves the floating-point (x_f, y_f)
location on the screen where a sample is needed. This location on the
screen is then sampled, i.e., the color at that precise point is retrieved.
The sampling scheme is chosen and the rendering pipeline configured to
compute the samples at particular subpixel locations, typically based on a
per-frame (or per-application) setting.
The other variable in antialiasing is w_i, the weight of each sample. These
weights sum to one. Most methods used in real-time rendering systems
give a constant weight to their samples, i.e., w_i = 1/n. Note that the default
mode for graphics hardware, a single sample at the center of the pixel,
is the simplest case of the antialiasing equation above. There is only one
term, the weight of this term is one, and the sampling function f always
returns the center of the pixel being sampled.
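As a concrete illustration, the weighted-sum resolve with uniform weights can be sketched as below. The function name resolve_pixel, the sample_color callback, and the particular subpixel offsets are illustrative assumptions, not part of any real API; a real renderer performs this sum in hardware.

```python
# Sketch of the antialiasing equation: the final pixel color is a
# weighted sum of n samples taken at subpixel locations within the pixel.
# sample_color(x, y) stands in for the renderer's color lookup; the
# offset tables below are example sampling patterns.

def resolve_pixel(px, py, offsets, sample_color):
    """Weighted sum p = sum_i w_i * c(x_i, y_i), uniform weights w_i = 1/n."""
    n = len(offsets)
    w = 1.0 / n                       # constant weight per sample
    r = g = b = 0.0
    for dx, dy in offsets:            # f(i, n): where sample i is taken
        sr, sg, sb = sample_color(px + dx, py + dy)
        r += w * sr
        g += w * sg
        b += w * sb
    return (r, g, b)

# Single centered sample (the hardware default): one term, weight one.
center = [(0.5, 0.5)]
# 2 x 2 grid sampling, as used by 4-sample supersampling.
grid2x2 = [(0.25, 0.25), (0.75, 0.25), (0.25, 0.75), (0.75, 0.75)]
```

With the center table this reduces exactly to the single-sample default described above: one term with weight one at the pixel center.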
Antialiasing algorithms that compute more than one full sample per
pixel are called supersampling (or oversampling) methods. Conceptually
simplest, full-scene antialiasing (FSAA) renders the scene at a higher res-
olution and then averages neighboring samples to create an image. For
example, say an image of 1000 × 800 pixels is desired. If you render an
image of 2000 × 1600 offscreen and then average each 2 × 2 area on the screen,
the desired image is generated with 4 samples per pixel. Note that this
corresponds to 2 × 2 grid sampling in Figure 5.29. This method is costly,
as all subsamples must be fully shaded and filled, with a Z-buffer depth
per sample. FSAA’s main advantage is simplicity. Other, lower quality
versions of this method sample at twice the rate on only one screen axis,
and so are called 1 × 2 or 2 × 1 supersampling.
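The FSAA resolve step can be sketched as a simple box filter over the high-resolution image. The downsample_2x2 name and the grayscale row-major representation are assumptions for illustration; a GPU would do this per color channel during the copy to the final image.

```python
# Sketch of the FSAA resolve: render at twice the resolution offscreen,
# then average each 2 x 2 block of subsamples into one final pixel.
# 'hires' is a row-major list of rows of grayscale intensity values.

def downsample_2x2(hires):
    h, w = len(hires), len(hires[0])
    assert h % 2 == 0 and w % 2 == 0   # offscreen buffer is 2x in each axis
    out = []
    for y in range(0, h, 2):
        row = []
        for x in range(0, w, 2):
            block = (hires[y][x] + hires[y][x + 1] +
                     hires[y + 1][x] + hires[y + 1][x + 1])
            row.append(block / 4.0)    # uniform weights: w_i = 1/4
        out.append(row)
    return out
```

Averaging a 2000 × 1600 buffer this way yields the desired 1000 × 800 image with four samples per pixel.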
A related method is the accumulation buffer [474, 815]. Instead of
one large offscreen buffer, this method uses a buffer that has the same
resolution as, and usually more bits of color than, the desired image. To
obtain a 2×2 sampling of a scene, four images are generated, with the view
moved half a pixel in the screen x- or y-direction as needed. Essentially,
each image generated is for a different sample position within the grid cell.
These images are summed up in the accumulation buffer. After rendering,
the image is averaged (in our case, divided by 4) and sent to the display.
Accumulation buffers are a part of the OpenGL API [9, 1362]. They can
also be used for such effects as motion blur, where a moving object appears
blurry, and depth of field, where objects not at the camera focus appear
blurry. However, the additional costs of having to rerender the scene a few
times per frame and copy the result to the screen make this algorithm
costly for real-time rendering systems.
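The accumulation-buffer idea can be sketched as follows. Here render_view is a placeholder for the application's own rendering code, which re-renders the full scene with the view jittered by a subpixel offset; the function and offset names are assumptions for illustration only.

```python
# Sketch of an accumulation-buffer resolve: render the scene once per
# sample position, sum the images into a buffer at display resolution,
# then divide by the sample count before sending to the display.

def accumulate(render_view, offsets):
    acc = None
    for dx, dy in offsets:                  # one full scene render per sample
        img = render_view(dx, dy)           # 2D list of intensity values
        if acc is None:
            acc = [[0.0] * len(img[0]) for _ in img]
        for y, row in enumerate(img):
            for x, v in enumerate(row):
                acc[y][x] += v              # sum into the accumulation buffer
    n = len(offsets)
    return [[v / n for v in row] for row in acc]   # average (here, divide by 4)

# 2 x 2 sampling: four renders, view moved half a pixel as needed.
jitter = [(0.0, 0.0), (0.5, 0.0), (0.0, 0.5), (0.5, 0.5)]
```

The same loop extends naturally to motion blur or depth of field: vary the time or the lens position per pass instead of the subpixel offset.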
Modern GPUs do not have dedicated accumulation buffer hardware,
but one can be simulated by blending separate images together using pixel
operations [884]. If only 8 bit color channels are used for the accumulated
image, the low-order bits of each image will be lost when blending, po-