Variance Shadow Maps
One algorithm that allows filtering of the shadow maps generated is Don-
nelly and Lauritzen's variance shadow map (VSM) [272]. The algorithm
stores the depth in one map and the depth squared in another map. MSAA
or other antialiasing schemes can be used when generating the maps. These
maps can be blurred, mipmapped, put in summed-area tables [739], or
processed by any other filtering method. The ability to treat these maps as
filterable textures is a huge advantage, as the entire array of sampling and
filtering techniques can be brought to bear when retrieving data from them.
Overall this gives a noticeable increase in quality for the amount of time
spent processing, since the GPU's optimized hardware capabilities are used
efficiently. For example, while PCF needs more samples (and hence more
time) to avoid noise when generating softer shadows, VSM can work with
just a single (high-quality) sample to determine the entire sample area's
effect and produce a smooth penumbra. This ability means shadows can be
made arbitrarily soft at no additional cost, within the limitations of the
algorithm.
To begin, for VSM the depth map is sampled (just once) at the receiver's
location to return an average depth of the closest light occluder. When this
average depth $M_1$ (also called the first moment) is greater than the depth
on the shadow receiver $t$, the receiver is considered fully in light. When
the average depth is less than the receiver's depth, the following equation
is used:

$$p_{\max}(t) = \frac{\sigma^2}{\sigma^2 + (t - M_1)^2}, \qquad (9.10)$$

where $p_{\max}$ is the maximum percentage of samples in light, $\sigma^2$
is the variance, $t$ is the receiver depth, and $M_1$ is the average
(expected) depth in the shadow map. The depth-squared shadow map's sample
$M_2$ (the second moment) is used to compute the variance:

$$\sigma^2 = M_2 - M_1^2. \qquad (9.11)$$
The value $p_{\max}$ is an upper bound on the visibility percentage of the
receiver. The actual illumination percentage $p$ cannot be larger than this
value. This upper bound comes from the one-tailed version of Chebychev's
inequality. The equation attempts to estimate, using probability theory,
how much of the distribution of occluders at the surface location is beyond
the surface's distance from the light. Donnelly and Lauritzen show that for
a planar occluder and planar receiver at fixed depths, $p = p_{\max}$, so
Equation 9.10 can be used as a good approximation of many real shadowing
situations.
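
To make these two equations concrete, here is a minimal sketch of the
lookup in C++. It assumes the moments $M_1$ and $M_2$ have already been
fetched, with whatever filtering is desired, from the two maps; the
function name and the small minimum-variance clamp are illustrative
additions, not part of Donnelly and Lauritzen's formulation.

    #include <algorithm>

    // Sketch of the VSM lookup, following Equations 9.10 and 9.11.
    // M1, M2: filtered samples of the depth and depth-squared maps.
    // t: the receiver's depth as seen from the light.
    float VsmVisibility(float M1, float M2, float t)
    {
        // Receiver no farther from the light than the average
        // occluder: fully lit.
        if (t <= M1)
            return 1.0f;

        // Equation 9.11; the clamp guards against the numerical
        // cancellation discussed later in this section.
        float variance = std::max(M2 - M1 * M1, 1e-5f);

        // Equation 9.10: Chebychev's one-tailed upper bound.
        float d = t - M1;
        return variance / (variance + d * d);
    }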
Myers [915] builds up an intuition as to why this method works. The
variance over an area increases at shadow edges. The greater the difference
in depths, the greater the variance. The $(t - M_1)^2$ term is then a
significant
determinant in the visibility percentage. If this value is just slightly above
zero, this means the average occluder depth is slightly closer to the light
than the receiver, and $p_{\max}$ is then near 1 (fully lit). This would happen
along the fully lit edge of the penumbra. Moving into the penumbra, the
average occluder depth gets closer to the light, so this term becomes larger
and $p_{\max}$
drops. At the same time the variance itself is changing within the
penumbra, going from nearly zero along the edges to the largest variance
where the occluders differ in depth and equally share the area. These terms
balance out to give a linearly varying shadow across the penumbra. See
Figure 9.32 for a comparison with other algorithms.
Figure 9.32. In the upper left, standard shadow mapping. Upper right,
perspective shadow mapping, which increases the shadow-map texel density
near the viewer. Lower left, percentage-closer soft shadows, softening the
shadows as the occluder's distance from the receiver increases. Lower
right, variance shadow mapping with a constant soft shadow width, each
pixel shaded with a single variance map sample. (Images courtesy of Nico
Hempe, Yvonne Jung, and Johannes Behr.)
Figure 9.33. Variance shadow mapping, varying the light distance. (Images from the
NVIDIA SDK 10 [945] samples courtesy of NVIDIA Corporation.)
One significant feature of variance shadow mapping is that it can deal
with the problem of surface bias due to geometry in an elegant fashion.
Lauritzen [739] gives a derivation of how the surface's slope is used
to modify the value of the second moment. Bias and numerical stability
can be a problem for variance mapping. For example, Equation 9.11
subtracts one large value from another similar value. This type of
computation tends to magnify the lack of accuracy of the underlying
numerical representation. Using floating-point textures helps avoid this
problem.
As with PCF, the width of the filtering kernel determines the width
of the penumbra. By finding the distance between the receiver and the
closest occluder, the kernel width can be varied, thereby giving convincing
soft shadows. Mipmapped samples are poor estimators of coverage for a
penumbra with a slowly increasing width, creating boxy artifacts. Lauritzen
[739] details how to use summed-area tables to give considerably better
shadows. An example is shown in Figure 9.33.
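
As a sketch of why summed-area tables help here: an inclusive 2D prefix
sum over the moment maps lets an arbitrary filter rectangle be averaged in
constant time, independent of the kernel width. The layout and names below
are illustrative, not Lauritzen's actual implementation.

    // Averages the two moments over a filter rectangle in O(1) time
    // using summed-area tables (inclusive 2D prefix sums).
    struct Moments { float M1, M2; };

    static float BoxSum(const float* sat, int width,
                        int x0, int y0, int x1, int y1)
    {
        // Standard four-corner summed-area table evaluation.
        float s = sat[y1 * width + x1];
        if (x0 > 0)           s -= sat[y1 * width + (x0 - 1)];
        if (y0 > 0)           s -= sat[(y0 - 1) * width + x1];
        if (x0 > 0 && y0 > 0) s += sat[(y0 - 1) * width + (x0 - 1)];
        return s;
    }

    Moments AverageMoments(const float* satDepth, const float* satDepthSq,
                           int width, int x0, int y0, int x1, int y1)
    {
        float area = float((x1 - x0 + 1) * (y1 - y0 + 1));
        return { BoxSum(satDepth,   width, x0, y0, x1, y1) / area,
                 BoxSum(satDepthSq, width, x0, y0, x1, y1) / area };
    }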
One place variance shadow mapping breaks down is along the penum-
brae areas when two or more occluders cover a receiver and one occluder is
close to the receiver. The Chebychev inequality will produce a maximum
light value that is not related to the correct light percentage. Essentially,
the closest occluder, by only partially hiding the light, throws off the equa-
tion’s approximation. This results in light bleeding (a.k.a. light leaks),
where areas that are fully occluded still receive light. See Figure 9.34. By
taking more samples over smaller areas, this problem can be resolved, which
essentially turns variance shadow mapping into PCF. As with PCF, speed
and quality trade off, but for scenes with low shadow depth
complexity, variance mapping works well. Lauritzen [739] gives one artist-
controlled method to ameliorate the problem, which is to treat low
percentages as 0% lit and to remap the rest of the percentage range to
0% to 100%. This approach darkens light bleeds, at the cost of narrowing
penumbrae overall. While light bleeding is a serious limitation, VSM is
excellent for producing shadows from terrain, since such shadows rarely
involve multiple occluders [887].

Figure 9.34. On the left, variance shadow mapping applied to a teapot. On
the right, a triangle (not shown) casts a shadow on the teapot, causing
objectionable artifacts in the shadow on the ground. (Images courtesy of
Marco Salvi.)
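
A minimal sketch of Lauritzen's remapping described above, assuming a
single artist-chosen threshold below which $p_{\max}$ is treated as fully
shadowed (the names here are illustrative):

    // Remaps p_max so values below bleedThreshold become 0% lit and
    // the remaining range is rescaled to cover 0% to 100%.
    float ReduceLightBleeding(float pMax, float bleedThreshold)
    {
        float p = (pMax - bleedThreshold) / (1.0f - bleedThreshold);
        return p < 0.0f ? 0.0f : (p > 1.0f ? 1.0f : p); // clamp to [0,1]
    }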
The promise of being able to use filtering techniques to rapidly produce
smooth shadows generated much interest in variance shadow mapping; the
main challenge is solving the bleeding problem. Annen et al. [26] introduce
the convolution shadow map. Extending the idea behind Soler and Sillion's
algorithm for planar receivers [1205], this method portrays a shadow's
effect as a set of statistical basis functions. As with variance shadow
mapping, such maps can be filtered. The method converges to the
correct answer, so the light leak problem is avoided. Salvi [1110] discusses
saving the exponential of the depth into a buffer. An exponential function
approximates the step function that a shadow map performs (i.e., in light
or not), so this can work to significantly reduce bleeding artifacts.
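
A sketch of the exponential idea, assuming the map stores a filtered
exp(c * z) for some constant c; the names and the choice of c are
illustrative. Larger values of c approximate the step function more
closely, but risk floating-point overflow when the map is generated.

    #include <cmath>

    // filteredExpDepth approximates E[exp(c * z)] over the filter
    // region; t is the receiver's depth from the light.
    float ExpShadowVisibility(float filteredExpDepth, float t, float c)
    {
        // exp(-c*t) * E[exp(c*z)] = E[exp(c*(z - t))], which is near 1
        // for occluders at the receiver's depth and falls off rapidly
        // as they move closer to the light.
        float v = std::exp(-c * t) * filteredExpDepth;
        return v < 1.0f ? v : 1.0f; // the estimate can exceed 1; clamp
    }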
Another problem for shadow map algorithms in general
is the nonlinear way in which the raw z-depth value varies with distance
from the light. Generating the shadow map so that the values stored vary
linearly within the range can also help PCF, but it is particularly impor-
tant for variance shadow mapping. Setting the near and far planes of the
lighting frustum as tight as possible against the scene also helps precision
problems [136, 739, 915].
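
For illustration, a sketch of writing linearized depth in the shadow pass,
assuming viewZ is the distance along the light's view direction and the
near and far planes have been fit tightly as suggested above:

    // Maps [near, far] linearly to [0, 1]. The hyperbolic z/w of a
    // perspective projection instead crowds most precision near the
    // near plane, which hurts the moment computations.
    float LinearShadowDepth(float viewZ, float nearPlane, float farPlane)
    {
        return (viewZ - nearPlane) / (farPlane - nearPlane);
    }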
Related Algorithms
Lokovic and Veach [786] present the concept of deep shadow maps, in which
each shadow map texel stores a function of how light drops off with depth.
Figure 9.35. Hair rendering using deep opacity maps for shadows [1398]. (Images cour-
tesy of Cem Yuksel, Texas A&M University.)
This is useful for rendering objects such as hair and clouds, where objects
are either small (and cover only part of each texel) or semitransparent.
Self-shadowing such objects is critical for realism. St. Amour et al. [1210]
use this concept of storing a function to compute penumbrae. Kim and
Neumann [657] introduce a method they call opacity shadow maps that
is suited for the GPU. They generate a set of shadow maps at different
depths from the light, storing the amount of light received at each texel at
each depth. Linear interpolation between the two neighboring depth levels
is then used to compute the amount of obscurance at a point. Nguyen
and Donnelly [929] give an updated version of this approach, producing
images such as Figure 13.1 on page 577. Yuksel and Keyser [1398] improve
efficiency and quality by creating opacity maps that more closely follow the
shape of the model. Doing so allows them to reduce the number of layers
needed, as evaluation of each layer is more significant to the final image.
See Figure 9.35. Ward et al. [1327] provide an excellent in-depth survey of
hair modeling techniques.
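
As a sketch of the opacity shadow map lookup described above, interpolating
between the two depth layers that bracket a sample; uniformly spaced layers
and the data layout are assumptions made for illustration.

    #include <vector>

    // layerOpacity: accumulated opacity at one texel for each layer,
    // with layer i stored at depth i * layerSpacing from the light.
    float OpacityAtDepth(const std::vector<float>& layerOpacity,
                         float layerSpacing, float depth)
    {
        if (layerOpacity.empty() || depth <= 0.0f)
            return 0.0f;
        float fIndex = depth / layerSpacing;
        int i0 = static_cast<int>(fIndex);
        if (i0 + 1 >= static_cast<int>(layerOpacity.size()))
            return layerOpacity.back(); // beyond the last layer
        float frac = fIndex - float(i0);
        // Linear interpolation between the neighboring depth levels.
        return layerOpacity[i0] * (1.0f - frac)
             + layerOpacity[i0 + 1] * frac;
    }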
In 2003 Chan and Durand [168] and Wyman [1385] simultaneously pre-
sented algorithms that work by using shadow maps and creating addi-
tional geometry along silhouette edges to generate perceptually convinc-
ing penumbrae. The technique of using cones and sheets is similar to
that shown on the right in Figure 9.7 on page 337, but it works for re-
ceivers at any distance. In their paper, Chan and Durand compare their
smoothies technique with Wyman’s penumbra maps method. A major ad-
vantage of these algorithms is that a smooth, noiseless penumbra is gen-