9.1. Shadows 347
Figure 9.14. Forming a shadow volume using the vertex shader. This process is done
twice, once for each of the two stencil buffer passes. The occluder is shown in the left
figure. In the middle figure, all its edges are sent down as degenerate quadrilaterals,
shown schematically here as thin quadrilaterals. On the right, those edges found by the
vertex shader to be silhouette edges have two of their vertices projected away from the
light, so forming quadrilaterals that define the shadow volume sides. Edges that are not
silhouettes render as degenerate (no area) polygons and so cover no pixels.
To drastically cut down on the number of polygons rendered, the silhou-
ette edges of the object could be found. Only the silhouette edges need to
generate shadow volume quadrilaterals—a considerable savings. Silhouette
edge detection is discussed in detail in Section 11.2.4.
The vertex shader also offers the ability to create shadow volumes on
the fly. The idea is to send every edge of the object down the pipeline as a
degenerate quadrilateral [137, 141, 506]. The geometric normals of the two
triangles that share the edge are sent with it. Specifically, the two vertices
of one edge of the degenerate quadrilateral get one face’s surface normal;
the other edge’s vertices get the second face’s normal. See Figure 9.14. The
vertex shader then checks these normals against the light direction. If the
vertex’s stored normal faces toward the light, the vertex is passed through
unperturbed. If it faces away from the light, the vertex shader projects
the vertex far away along the vector from the light’s position to the
vertex.
The effect of these two rules is to form silhouette shadow volumes auto-
matically. For edge quadrilaterals that have both triangle neighbors facing
the light, the quadrilateral is not moved and so stays degenerate, i.e., never
gets displayed. For edge quadrilaterals with both normals facing away, the
entire degenerate quadrilateral is moved far away from the light, but stays
degenerate and so is never seen. Only for silhouette edges is one edge
projected outward while the other remains in place. If a geometry shader is
available, these edge quadrilaterals can be created on the fly, thus saving
storage and processing overall. Stich et al. [1220] describe this technique in
detail, as well as discussing a number of other optimizations and extensions.
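The per-vertex decision described above can be sketched on the CPU side. The following is a minimal Python illustration, not actual shader code; the function names and the fixed extrusion distance are assumptions for illustration only:

```python
import math

def sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def extrude_vertex(vertex, face_normal, light_pos, distance=1000.0):
    """Move a shadow-volume vertex away from the light if its stored
    face normal points away from the light; otherwise pass it through
    unperturbed, leaving the quadrilateral degenerate."""
    to_light = sub(light_pos, vertex)
    if dot(face_normal, to_light) >= 0.0:
        return vertex  # face toward the light: leave the vertex in place
    away = sub(vertex, light_pos)
    length = math.sqrt(dot(away, away))
    # Project far away along the vector from the light to the vertex.
    return tuple(v + distance * (a / length) for v, a in zip(vertex, away))
```

Applied to both vertices of each degenerate edge quadrilateral, this rule extrudes only silhouette edges into shadow volume sides, exactly as described above.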
The shadow volume algorithm has some advantages. First, it can be
used on general-purpose graphics hardware. The only requirement is a
stencil buffer. Second, since it is not image based (unlike the shadow map
algorithm described next), it avoids sampling problems and thus produces
correct sharp shadows everywhere. This can sometimes be a disadvantage.
For example, a character’s clothing may have folds that give thin, sharp
shadows that alias badly.
The shadow volume algorithm can be extended to produce visually con-
vincing soft shadows. Assarsson and Akenine-Möller [49] present a method
called penumbra wedges, in which the projected shadow volume planes are
replaced by wedges. A reasonable penumbra value is generated by deter-
mining the amount a given location is inside a wedge. Forest et al. [350]
improve this algorithm for the case where two separate shadow edges cross,
by blending between overlapping wedges.
There are some serious limitations to the shadow volume technique.
Semitransparent objects cannot receive shadows properly, because the sten-
cil buffer stores only one object’s shadow state per pixel. It is also difficult
to use translucent occluders, e.g., stained glass or other objects that atten-
uate or change the color of the light. Polygons with cutout textures also
cannot easily cast shadows.
A major performance problem is that this algorithm burns fill rate, as
shadow volume polygons often cover many pixels many times, and so the
rasterizer becomes a bottleneck. Worse yet, this fill rate load is variable
and difficult to predict. Silhouette edge computation can cut down on
the fill rate, but doing so on the CPU is costly. Lloyd et al. [783] use
culling and clamping techniques to lower fill costs. Another problem is
that curved surfaces that are created by the hardware, such as N-patches,
cannot also generate shadow volumes. These problems, along with the
limitation to opaque, watertight models, and the fact that shadows are
usually hard-edged, have limited the adoption of shadow volumes. The
next method presented is more predictable in cost and can cast shadows
from any hardware-generated surface, and so has seen widespread use.
9.1.4 Shadow Map
In 1978, Williams [1353] proposed that a common Z-buffer-based renderer
could be used to generate shadows quickly on arbitrary objects. The idea
is to render the scene, using the Z-buffer algorithm, from the position of
the light source that is to cast shadows. Note that when the shadow map
is generated, only Z-buffering is required; that is, lighting, texturing, and
the writing of color values into the color buffer can be turned off.
When a single Z-buffer is generated, the light can “look” only in a
particular direction. Under these conditions, the light source itself is either
a distant directional light such as the sun, which has a single view direction,
or some type of spotlight, which has natural limits to its viewing angle.
For a directional light, the light’s view is set to encompass the viewing
volume that the eye sees, so that every location visible to the eye has a
corresponding location in the light’s view volume. For local lights, Arvo
and Aila [38] provide an optimization that renders a spotlight’s view volume
to set the image stencil buffer, so that pixels outside the light’s view are
ignored during shading.
Each pixel in the Z-buffer now contains the z-depth of the object closest
to the light source. We call the entire contents of the Z-buffer the shadow
map, also sometimes known as the shadow depth map or shadow buffer. To
use the shadow map, the scene is rendered a second time, but this time with
respect to the viewer. As each drawing primitive is rendered, its location
at each pixel is compared to the shadow map. If a rendered point is farther
away from the light source than the corresponding value in the shadow
map, that point is in shadow, otherwise it is not. This technique can be
Figure 9.15. Shadow mapping. On the top left, a shadow map is formed by storing
the depths to the surfaces in view. On the top right, the eye is shown looking at two
locations. The sphere is seen at point v_a, and this point is found to be located at texel a
on the shadow map. The depth stored there is not (much) less than point v_a is from the
light, so the point is illuminated. The rectangle hit at point v_b is (much) farther away
from the light than the depth stored at texel b, and so is in shadow. On the bottom left
is the view of a scene from the light’s perspective, with white being further away. On
the bottom right is the scene rendered with this shadow map.
Figure 9.16. Shadow mapping problems. On the left there is no bias, so the surface
erroneously shadows itself, in this case producing a Moiré pattern. The inset shows a
zoom of part of the sphere’s surface. On the right, the bias is set too high, so the shadow
creeps out from under the block object, giving the illusion that the block hovers above
the surface. The shadow map resolution is also too low, so the texels of the map appear
along the shadow edges, giving it a blocky appearance.
implemented by using texture mapping hardware [534, 1146]. Everitt et
al. [325] and Kilgard [654] discuss implementation details. See Figure 9.15.
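The per-pixel comparison can be illustrated with a minimal Python sketch. The texel addressing, map layout, and the small epsilon are simplifying assumptions; a real renderer would first transform the point into the light’s clip space:

```python
def in_shadow(shadow_map, texel, receiver_depth, epsilon=1e-3):
    """Basic shadow-map test: the receiver is shadowed if it lies farther
    from the light than the depth the light recorded at that texel.
    shadow_map is a row-major 2D list of light-space depths; texel is (x, y)."""
    stored = shadow_map[texel[1]][texel[0]]
    return receiver_depth > stored + epsilon

# A tiny 2x2 shadow map as seen from the light:
sm = [[0.5, 0.9],
      [0.7, 0.2]]
```

A point at depth 0.8 behind the surface stored at texel (0, 0) (depth 0.5) is in shadow; a point at depth 0.9 at texel (1, 0), matching the stored depth, is lit.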
Advantages of this method are that the cost of building the shadow
map is linear with the number of rendered primitives, and access time is
constant. One disadvantage is that the quality of the shadows depends on
the resolution (in pixels) of the shadow map, and also on the numerical
precision of the Z-buffer.
Since the shadow map is sampled during the comparison, the algorithm
is susceptible to aliasing problems, especially close to points of contact
between objects. A common problem is self-shadow aliasing, often called
“surface acne,” in which a polygon is incorrectly considered to shadow
itself. This problem has two sources. One is simply the numerical limits of
precision of the processor. The other source is geometric, from the fact that
the value of a point sample is being used to represent an area’s depth. That
is, samples generated for the light are almost never at the same locations
as the screen samples. When the light’s stored depth value is compared to
the viewed surface’s depth, the light’s value may be slightly lower than the
surface’s, resulting in self-shadowing. Such errors are shown in Figure 9.16.
Hourcade [570] introduced a different form of shadow mapping that
solves the biasing problem by avoiding depth compares entirely. This
method stores ID numbers instead of depths. Each object or polygon is
given a different ID in the map. When an object is then rendered from the
eye’s viewpoint, the ID map is checked and compared at the four neighbor-
ing texels. If no ID matches, the object is not visible to the light and so is
shadowed. With this algorithm, self-shadowing is not possible: Anything
that should shadow itself will never do so, since its ID in the shadowed
Figure 9.17. A shadow map samples a surface at discrete locations, shown by the blue
arrows. The gray walls show the shadow map texel boundaries. Within each texel, the
red line represents the distance that the shadow map stores for the surface, i.e., where
it considers the surface to be located. Where the actual surface is below these red (and
dark gray) lines, its distance is greater than the stored depth for that texel, so will
be considered, erroneously, to be in shadow. By using a bias to shift the receiver so
that it is considered above the red texel depth lines, no self-shadowing occurs. (After
Schüler [1138].)
area will match that found in the ID map. Decomposing an object into
convex sub-objects solves the problem, but is often impractical. Even with
proper convex data, an object can fall through the cracks and not cover
any texels, dooming it to always be considered in shadow.
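The ID-map lookup can be sketched as follows; the texel addressing and bounds handling here are simplified assumptions for illustration:

```python
def id_shadow_test(id_map, x, y, object_id):
    """Hourcade-style ID test: sample the four neighboring texels in the
    ID map; if none stores this object's ID, the object is not visible
    to the light and is therefore considered shadowed."""
    neighbors = [(x, y), (x + 1, y), (x, y + 1), (x + 1, y + 1)]
    for nx, ny in neighbors:
        if 0 <= ny < len(id_map) and 0 <= nx < len(id_map[0]):
            if id_map[ny][nx] == object_id:
                return False  # ID match: the object is lit here
    return True  # no match at any neighbor: in shadow
```

Note how the test depends only on ID equality, never on depth values, which is why biasing is unnecessary but self-shadowing becomes impossible.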
Dietrich [257] and Forsyth [357] describe a hybrid approach with an
ID map and depth buffer. The ID map is used for determining object-to-
object shadowing, and a low-precision depth buffer stores depths for each
object relative to its z-depth extents and is used only for self-shadowing.
Pelzer [999] presents a similar hybrid scheme and provides further imple-
mentation details and code.
For depth shadow maps, if the receiver does not itself cast shadows,
it does not need to be rendered, and the self-shadowing problem can be
avoided. For shadows on occluders, one common method to help renderers
avoid (but not always eliminate) this artifact is to introduce a bias factor.
When checking the distance found in the shadow map with the distance of
the location being tested, a small bias is subtracted from the receiver’s dis-
tance. See Figure 9.17. This bias could be a constant value [758], but doing
so fails when a light is at a shallow angle to the receiver. A more accurate
method is to use a bias that is proportional to the angle of the receiver to
the light. The more the surface tilts away from the light, the greater the
bias grows, to avoid the problem. This type of bias is called depth-slope
scale bias (or some variant on those words). Schüler [1138, 1140] discusses
this problem and some solutions in depth. Because a slope bias solution
will not fix all sampling problems for triangles inside concavities, these
various bias controls usually have to be hand tweaked for the application;
there are no perfect settings.
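One common way to compute such a bias is from the cosine of the angle between the surface normal and the light direction; as that cosine shrinks, the tangent (the slope) grows, and the bias with it. The following Python sketch illustrates the idea; the particular constants are illustrative, not canonical values:

```python
import math

def slope_scale_bias(n_dot_l, constant_bias=0.002,
                     slope_scale=0.004, max_bias=0.02):
    """Bias grows with the tangent of the surface's tilt away from the
    light: the shallower the lighting angle, the larger the bias. A clamp
    keeps extreme grazing angles from producing an unbounded bias."""
    n_dot_l = max(n_dot_l, 1e-4)  # avoid division by zero at grazing angles
    tan_tilt = math.sqrt(max(1.0 - n_dot_l * n_dot_l, 0.0)) / n_dot_l
    return min(constant_bias + slope_scale * tan_tilt, max_bias)
```

A surface facing the light head-on (n_dot_l = 1) receives only the constant bias, while a steeply tilted surface is clamped at the maximum; the receiver’s depth is then reduced by this amount before the shadow map comparison.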