Figure 5.35. Magnified grayscale antialiased and subpixel antialiased versions of the
same word. When a colored pixel is displayed on an LCD screen, the corresponding
colored vertical subpixel rectangles making up the pixel are lit. Doing so provides
additional horizontal spatial resolution. (Images generated by Steve Gibson’s “Free &
Clear” program.)
In this way, supersampling can be applied selectively to the surfaces
that would most benefit from it.
The eye is more sensitive to differences in intensity than to differences
in color. This fact has been used since at least the days of the Apple
II [393] to improve perceived spatial resolution. One of the latest uses
of this idea is Microsoft's ClearType technology, which is built upon one
of the characteristics of color liquid-crystal displays (LCDs). Each
pixel on an LCD display consists of three vertical colored rectangles, red,
green, and blue—use a magnifying glass on an LCD monitor and see for
yourself. Disregarding the colors of these subpixel rectangles, this con-
figuration provides three times as much horizontal resolution as there are
pixels. Using different shades fills in different subpixels. The eye blends
the colors together and the red and blue fringes become undetectable. See
Figure 5.35.
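As a rough illustration of the idea (not ClearType's actual filter, which applies additional per-subpixel filtering to suppress color fringes), the sketch below maps a coverage image sampled at three times the horizontal pixel resolution onto the red, green, and blue subpixel stripes of each output pixel. The `coverage` callback is a hypothetical function supplied by the caller.

```cpp
#include <cstdint>

// A minimal sketch of subpixel antialiasing for an LCD with RGB-ordered
// vertical subpixel stripes. Coverage is sampled at 3x the horizontal
// resolution, and each of the three samples drives one subpixel of the
// output pixel. 'coverage' is a hypothetical function returning glyph
// coverage in [0,1] at a high-resolution sample position.
struct RGB { uint8_t r, g, b; };

RGB shadeSubpixel(int x, int y, float (*coverage)(int hiResX, int y)) {
    float r = coverage(3 * x + 0, y); // red stripe (leftmost)
    float g = coverage(3 * x + 1, y); // green stripe (middle)
    float b = coverage(3 * x + 2, y); // blue stripe (rightmost)
    // Black text on a white background: full coverage darkens the subpixel.
    return { uint8_t(255.0f * (1.0f - r)),
             uint8_t(255.0f * (1.0f - g)),
             uint8_t(255.0f * (1.0f - b)) };
}
```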
In summary, there is a wide range of schemes for antialiasing edges,
with tradeoffs of speed, quality, and manufacturing cost. No solution is per-
fect, nor can it be, but methods such as MSAA and CSAA offer reasonable
tradeoffs between speed and quality. Undoubtedly, manufacturers will offer
even better sampling and filtering schemes as time goes on. For example, in
the summer of 2007, ATI introduced a high-quality (and computationally
expensive) antialiasing scheme based on fragment edge detection, called
edge detect antialiasing.
5.7 Transparency, Alpha, and Compositing
There are many different ways in which semitransparent objects can allow
light to pass through them. In terms of rendering algorithms, these can
be roughly divided into view-based effects and light-based effects. View-
based effects are those in which the semitransparent object itself is being
rendered. Light-based effects are those in which the object causes light to
be attenuated or diverted, causing other objects in the scene to be lit and
rendered differently.
In this section we will deal with the simplest form of view-based trans-
parency, in which the semitransparent object acts as a color filter or con-
stant attenuator of the view of the objects behind it. More elaborate view-
and light-based effects such as frosted glass, the bending of light (refrac-
tion), attenuation of light due to the thickness of the transparent object,
and reflectivity and transmission changes due to the viewing angle are dis-
cussedinlaterchapters.
A limitation of the Z-buffer is that only one object is stored per pixel.
If a number of transparent objects overlap the same pixel, the Z-buffer
alone cannot hold and later resolve the effect of all the visible objects.
This problem is solved by accelerator architectures such as the A-buffer,
discussed in the previous section. The A-buffer has “deep pixels” that
store a number of fragments that are resolved to a single pixel color after
all objects are rendered. Because the Z-buffer absolutely dominates the
accelerator market, we present here various methods to work around its
limitations.
One method for giving the illusion of transparency is called screen-door
transparency [907]. The idea is to render the transparent polygon with
a checkerboard fill pattern. That is, every other pixel of the polygon is
rendered, thereby leaving the object behind it partially visible. Usually
the pixels on the screen are close enough together that the checkerboard
pattern itself is not visible. The problems with this technique include:

• A transparent object looks best when 50% transparent. Fill patterns other than a checkerboard can be used, but in practice these are usually discernible as shapes themselves, detracting from the transparency effect.

• Only one transparent object can be convincingly rendered on one area of the screen. For example, if a transparent red object and a transparent green object are rendered atop a blue object, only two of the three colors can appear in the checkerboard pattern.
That said, one advantage of this technique is its simplicity. Transparent
objects can be rendered at any time, in any order, and no special hard-
ware (beyond fill pattern support) is needed. The transparency problem
essentially goes away by making all objects opaque. This same idea is used
for antialiasing edges of cutout textures, but at a subpixel level, using a
feature called alpha to coverage. See Section 6.6.
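As a minimal sketch of the idea (the helper name is ours), the checkerboard fill pattern can be expressed as a simple test on screen coordinates, applied by the rasterizer or a pixel shader:

```cpp
// A sketch of screen-door transparency: draw only the pixels of the
// transparent polygon that fall on one half of a screen-aligned
// checkerboard, leaving the other half showing whatever is behind it.
// This gives roughly 50% transparency with no blending or sorting.
bool screenDoorKeep(int x, int y) {
    return ((x + y) & 1) == 0; // checkerboard fill pattern
}
```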
What is necessary for more general and flexible transparency effects is
the ability to blend the transparent object’s color with the color of the
object behind it. For this, the concept of alpha blending is needed [143,
281, 1026]. When an object is rendered on the screen, an RGB color and a
Z-buffer depth are associated with each pixel. Another component, called
alpha (α), can also be defined for each pixel the object covers. Alpha is
a value describing the degree of opacity of an object fragment for a given
pixel. An alpha of 1.0 means the object is opaque and entirely covers the
pixel’s area of interest; 0.0 means the pixel is not obscured at all.
To make an object transparent, it is rendered on top of the existing
scene with an alpha of less than 1.0. Each pixel covered by the object will
receive a resulting RGBα (also called RGBA) from the rendering pipeline.
Blending this value coming out of the pipeline with the original pixel color
is usually done using the over operator, as follows:
$$\mathbf{c}_o = \alpha_s \mathbf{c}_s + (1 - \alpha_s)\,\mathbf{c}_d \qquad \text{[over operator]}, \tag{5.17}$$

where $\mathbf{c}_s$ is the color of the transparent object (called the source), $\alpha_s$ is the
object's alpha, $\mathbf{c}_d$ is the pixel color before blending (called the destination),
and $\mathbf{c}_o$ is the resulting color due to placing the transparent object over the
existing scene. In the case of the rendering pipeline sending in $\mathbf{c}_s$ and $\alpha_s$,
the pixel's original color $\mathbf{c}_d$ gets replaced by the result $\mathbf{c}_o$. If the incoming
RGB$\alpha$ is, in fact, opaque ($\alpha_s = 1.0$), the equation simplifies to the full
replacement of the pixel's color by the object's color.
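A direct implementation of Equation 5.17 is straightforward. The sketch below is a software version; the comment notes the standard OpenGL blend state that performs the same computation in hardware.

```cpp
struct Color { float r, g, b; };

// The "over" operator of Equation 5.17: place a source color cs with
// alpha as on top of the existing destination color cd.
// In OpenGL the equivalent blend state is:
//   glEnable(GL_BLEND);
//   glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
Color over(const Color& cs, float as, const Color& cd) {
    return { as * cs.r + (1.0f - as) * cd.r,
             as * cs.g + (1.0f - as) * cd.g,
             as * cs.b + (1.0f - as) * cd.b };
}
```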
To render transparent objects properly into a scene usually requires
sorting. First, the opaque objects are rendered, then the transparent ob-
jects are blended on top of them in back-to-front order. Blending in arbi-
trary order can produce serious artifacts, because the blending equation is
order dependent. See Figure 5.36. The equation can also be modified so
that blending front-to-back gives the same result, but only when rendering
the transparent surfaces to a separate buffer, i.e., without any opaque ob-
jects rendered first. This blending mode is called the under operator and is
used in volume rendering techniques [585]. For the special case where only
two transparent surfaces overlap and the alpha of both is 0.5, the blend
order does not matter, and so no sorting is needed [918].

Figure 5.36. On the left the model is rendered with transparency using the Z-buffer.
Rendering the mesh in an arbitrary order creates serious errors. On the right, depth peel-
ing provides the correct appearance, at the cost of additional passes. (Images courtesy
of NVIDIA Corporation.)
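The under operator composites front to back. As a rough sketch of one common formulation (the exact equations used in [585] may differ in details such as alpha premultiplication), each new, more distant surface contributes only through the transparency left by everything already accumulated:

```cpp
struct Color { float r, g, b; };

// A sketch of front-to-back ("under") compositing: surfaces arrive in
// near-to-far order, and each new surface contributes only through the
// remaining transparency (1 - accumulated alpha) of what is in front.
// The final result is composited over the opaque scene as
//   acc.c + (1 - acc.alpha) * background.
struct Accum { Color c{0.0f, 0.0f, 0.0f}; float alpha = 0.0f; };

void compositeUnder(Accum& acc, const Color& cs, float as) {
    float t = (1.0f - acc.alpha) * as; // visible fraction of new surface
    acc.c.r += t * cs.r;
    acc.c.g += t * cs.g;
    acc.c.b += t * cs.b;
    acc.alpha += t;                    // total opacity accumulated so far
}
```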
Sorting individual objects by, say, their centroids does not guarantee
the correct sort order. Interpenetrating polygons also cause difficulties.
Drawing polygon meshes can cause sorting problems, as the GPU renders
the polygons in the order given, without sorting.
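A sketch of the usual approximate approach follows: sort transparent objects back to front by the view-space depth of some reference point (here a precomputed centroid depth, a per-object field we assume the application fills in), then draw them in that order. As just noted, this does not guarantee a correct sort for all geometry.

```cpp
#include <algorithm>
#include <vector>

// A sketch of approximate back-to-front sorting for transparent objects.
struct TransparentObject {
    float viewDepth; // distance from the camera to the object's centroid
    // ... mesh data, shader state, etc.
};

void sortBackToFront(std::vector<TransparentObject>& objects) {
    std::sort(objects.begin(), objects.end(),
              [](const TransparentObject& a, const TransparentObject& b) {
                  return a.viewDepth > b.viewDepth; // farthest drawn first
              });
}
```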
If sorting is not possible or is only partially done, it is often best to use
Z-buffer testing, but no z-depth replacement, for rendering the transparent
objects. In this way, all transparent objects will at least appear. Other
techniques can also help avoid artifacts, such as turning off culling, or
rendering transparent polygons twice, first rendering backfaces and then
frontfaces [918].
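In OpenGL terms, this might look like the following sketch, assuming an application-supplied callback `renderTransparentObjects()` (hypothetical) and an opaque scene already rendered with depth writes enabled:

```cpp
#include <GL/gl.h>

// A sketch of rendering transparent geometry with depth testing but no
// depth writes, drawing backfaces before frontfaces to reduce
// blend-order artifacts. renderTransparentObjects() is a hypothetical
// application callback that issues the transparent draw calls.
void drawTransparent(void (*renderTransparentObjects)()) {
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    glDepthMask(GL_FALSE);   // test against opaque depths, but don't write
    glEnable(GL_CULL_FACE);

    glCullFace(GL_FRONT);    // pass 1: backfaces only
    renderTransparentObjects();

    glCullFace(GL_BACK);     // pass 2: frontfaces only
    renderTransparentObjects();

    glDepthMask(GL_TRUE);    // restore state
    glDisable(GL_BLEND);
}
```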
Other methods of correctly rendering transparency without the appli-
cation itself needing to sort are possible. An advantage to the A-buffer
multisampling method described on page 128 is that the fragments can be
combined in sorted order by hardware to obtain high quality transparency.
Normally, an alpha value for a fragment represents either transparency,
the coverage of a pixel cell, or both. A multisample fragment’s alpha rep-
resents purely the transparency of the sample, since it stores a separate
coverage mask.
Transparency can be computed using two or more depth buffers and
multiple passes [253, 643, 815]. First, a rendering pass is made so that the
opaque surfaces’ z-depths are in the first Z-buffer. Now the transparent
objects are rendered. On the second rendering pass, the depth test is
modified to accept the surface that is both closer than the depth of the
first buffer’s stored z-depth, and the farthest among such surfaces. Doing
so renders the backmost transparent object into the frame buffer and the z-
depths into a second Z-buffer. This Z-buffer is then used to derive the next-
closest transparent surface in the next pass, and so on. See Figure 5.37.
The pixel shader can be used to compare z-depths in this fashion and so
perform depth peeling, where each visible layer is found in turn [324, 815].
One initial problem with this approach was knowing how many passes were
sufficient to capture all the transparent layers. This problem is solved using
the pixel draw counter, which tells how many pixels were written during
rendering; when no pixels are rendered by a pass, rendering is done. Al-
ternatively, peeling could be cut short when the number of pixels rendered
by a pass falls below some minimum.
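A sketch of the peeling control loop is shown below, using an occlusion query as the pixel draw counter (assuming an OpenGL 1.5+ context where these query entry points are available). The per-pass peeling itself, i.e., the pixel shader that rejects fragments at or in front of the previously peeled depth, is hidden inside the hypothetical `renderTransparentLayer()` callback.

```cpp
#include <GL/gl.h>

// A sketch of the depth-peeling control loop. Each pass renders all
// transparent geometry; a pixel shader (not shown) discards fragments
// unless they lie behind the depth peeled on the previous pass, so each
// pass extracts the next-closest transparent layer. An occlusion query
// acts as the pixel draw counter: when a pass writes no pixels (or fewer
// than some minimum), peeling stops.
void depthPeel(void (*renderTransparentLayer)(int pass),
               GLuint query, GLuint minPixels) {
    for (int pass = 0; ; ++pass) {
        glBeginQuery(GL_SAMPLES_PASSED, query);
        renderTransparentLayer(pass);     // peels one layer
        glEndQuery(GL_SAMPLES_PASSED);

        GLuint pixelsWritten = 0;
        glGetQueryObjectuiv(query, GL_QUERY_RESULT, &pixelsWritten);
        if (pixelsWritten <= minPixels)
            break;                        // no visible fragments remain
    }
}
```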
While depth peeling is effective, it can be slow, as each layer peeled
is a separate rendering pass of all transparent objects. Mark and Proud-
foot [817] discuss a hardware architecture extension they call the “F-buffer”
Figure 5.37. Each depth peel pass draws one of the transparent layers. On the left is
the first pass, showing the layer directly visible to the eye. The second layer, shown in
the middle, displays the second-closest transparent surface at each pixel, in this case the
backfaces of objects. The third layer, on the right, is the set of third-closest transparent
surfaces. Final results can be found on page 394. (Images courtesy of Louis Bavoil.)
that solves the transparency problem by storing and accessing fragments
in a stream. Bavoil et al. [74] propose the k-buffer architecture, which also
attacks this problem. Liu et al. [781] and Pangerl [986] explore the use
of multiple render targets to simulate a four-level deep A-buffer. Unfortu-
nately, the method is limited by the problem of concurrently reading and
writing to the same buffer, which can cause transparent fragments to be
rendered out of order. Liu et al. provide a method to ensure the fragments
are always sorted properly, though at the cost of performance. Nonetheless,
they can still render depth-peeled transparency layers about twice as fast
in overall frame rate. That said, DirectX 10 and its successors do not allow
concurrent read/write from the same buffer, so these approaches cannot be
used with newer APIs.
The over operator can also be used for antialiasing edges. As discussed
in the previous section, a variety of algorithms can be used to find the
approximate percentage of a pixel covered by the edge of a polygon. Instead
of storing a coverage mask showing the area covered by the object, an alpha
can be stored in its place. There are many methods to generate alpha values
that approximate the coverage of an edge, line, or point.
As an example, if an opaque polygon is found to cover 30% of a screen
grid cell, it would then have an alpha of 0.3. This alpha value is then used to
blend the object’s edge with the scene, using the over operator. While this
alpha is just an approximation of the area an edge covers, this interpretation
works fairly well in practice if generated properly. An example of a poor
way to generate alphas is to create them for every polygon edge. Imagine
that two adjacent polygons fully cover a pixel, with each covering 50% of it.
If each polygon generates an alpha value for its edge, the two alphas would
combine to cover only 75% of the pixel (by Equation 5.17, 0.5 + 0.5 × (1 − 0.5) = 0.75),
incorrectly letting 25% of the background show through.