Rather than an ambient occlusion factor, a spherical harmonic visibility
function is computed. The first (order 0) coefficient can be used as the
ambient occlusion factor k_A, and the next three (order 1) coefficients can
be used to compute the bent normal n_bent. Higher-order coefficients can be
used to shadow environment maps or circular light sources. Since geometry
is approximated as bounding spheres, occlusion from creases and other
small details is not modeled.
Hegeman et al. [530] propose a method that can be used to rapidly
approximate ambient occlusion for trees, grass, or other objects that are
composed of a group of small elements filling a simple volume such as a
sphere, ellipsoid, or slab. The method works by estimating the number
of blockers between the occluded point and the outer boundary of the
volume, in the direction of the surface normal. The method is inexpensive
and produces good results for the limited class of objects for which it was
designed.
Evans [322] describes an interesting dynamic ambient occlusion approxi-
mation method based on distance fields. A distance field is a scalar function
of spatial position. Its magnitude is equal to the distance to the closest
object boundary, and it is negative for points inside objects and positive
for points outside them. Sampled distance fields have many uses in graph-
ics [364, 439]. Evans’ method is suited for small volumetric scenes, since the
signed distance field is stored in a volume texture that covers the extents
of the scene. Although the method is nonphysical, the results are visually
pleasing.
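
As an illustration of the general idea (a minimal sketch in the spirit of
distance-field occlusion, not Evans' exact algorithm), an occlusion estimate
can be obtained by marching a few steps along the surface normal and
comparing the sampled distance to the distance traveled: in empty space the
two are equal, while nearby geometry makes the sampled distance smaller. The
sample_distance_field function below is a hypothetical stand-in for a lookup
into the volume texture.

import numpy as np

def ambient_occlusion_from_sdf(p, n, sample_distance_field,
                               num_steps=5, step=0.1, falloff=0.75):
    """Estimate an ambient occlusion factor k_A at surface point p with
    unit normal n by marching through a signed distance field.

    sample_distance_field(q) must return the signed distance at point q
    (negative inside objects, positive outside), e.g., by trilinearly
    interpolating a sampled 3D distance texture.
    """
    occlusion = 0.0
    weight = 1.0
    for i in range(1, num_steps + 1):
        t = i * step                           # distance marched so far
        d = sample_distance_field(p + t * n)   # true distance to geometry
        occlusion += weight * max(t - d, 0.0)  # d < t implies nearby occluders
        weight *= falloff                      # nearer samples count more
    return float(np.clip(1.0 - occlusion, 0.0, 1.0))  # 1 = unoccluded

# Example: a point on top of a unit sphere (analytic distance field)
# marches straight into empty space, so it comes out fully unoccluded.
sphere = lambda q: np.linalg.norm(q) - 1.0
p = n = np.array([0.0, 1.0, 0.0])
print(ambient_occlusion_from_sdf(p, n, sphere))  # -> 1.0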
Screen-Space Methods
The expense of object-space methods is dependent on scene complexity.
Spatial information from the scene also needs to be collected into some
data structure that is amenable for processing. Screen-space methods are
independent of scene complexity and use readily available information, such
as screen-space depth or normals.
Crytek developed a screen-space dynamic ambient occlusion approach
used in Crysis [887]. The ambient occlusion is computed in a full-screen
pass, using the Z-buffer as the only input. The ambient occlusion factor
k_A of each pixel is estimated by testing a set of points (distributed in a
sphere around the pixel's location) against the Z-buffer. The value of k_A
is a function of the number of samples that pass the Z-buffer test (i.e., are
in front of the value in the Z-buffer). A smaller number of passing samples
results in a lower value for k_A—see Figure 9.38. The samples have weights
that decrease with distance from the pixel (similarly to the obscurance
factor [1409]).
Figure 9.38. Crytek’s ambient occlusion method applied to three surface points (the
yellow circles). For clarity, the algorithm is shown in two dimensions (up is farther from
the camera). In this example, ten samples are distributed over a disk around each surface
point (in actuality, they are distributed over a sphere). Samples failing the z-test (which
are behind the Z-buffer) are shown in red, and passing samples are green. The value of
k_A is a function of the ratio of passing samples to total samples (we ignore the variable
sample weights here for simplicity). The point on the left has six passing samples out
of ten total, resulting in a ratio of 0.6, from which k_A is computed. The middle point
has three passing samples (one more is outside the object but fails the z-test anyway,
as shown by the red arrow), so k_A is determined from the ratio 0.3. The point on the
right has one passing sample, so the ratio 0.1 is used to compute k_A.
Note that since the samples are not weighted by a cos θ_i factor, the
resulting ambient occlusion is incorrect,⁷ but the results were deemed by
Crytek to be visually pleasing.
To keep performance reasonable, no more than about 16 samples should
be taken per pixel. However, experiments indicated that as many as 200
samples are needed for good visual quality. To bridge the gap between
these two numbers, the sample pattern is varied for each pixel in a 4 × 4
block of pixels (giving effectively 16 different sample patterns).⁸ This con-
verts the banding artifacts caused by the small number of samples into
high-frequency noise. This noise is then removed with a 4 × 4-pixel post-
process blur. This technique allows each pixel to be affected by its neigh-
bors, effectively increasing the number of samples. The price paid for
this increase in quality is a blurrier result. A “smart blur” is used that
does not blur across depth discontinuities, to avoid loss of sharpness on
edges. An example using Crytek’s ambient occlusion technique is shown in
Figure 9.39.
⁷ Most notably, samples under the surface are counted when they should not be.
This means that a flat surface will be darkened, with edges being brighter than their
surroundings. It is hard to avoid this without access to surface normals; one possible
modification is to clamp the ratio of passing samples to be no larger than 0.5, multiplying
the clamped value by 2.
⁸ The variation is implemented by storing a random direction vector in each texel of
a 4 × 4 texture that is repeated over the screen. At each pixel, a fixed three-dimensional
sample pattern is reflected about the plane perpendicular to the direction vector.
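
A minimal sketch of this style of depth-only occlusion estimation follows,
much simplified relative to Crytek's actual implementation: samples are
offset directly in pixels and in depth rather than being projected from view
space, the distance-based sample weights are omitted, and the clamp from
footnote 7 is applied. All parameter values are illustrative assumptions.

import numpy as np

def crytek_style_ssao(depth, radius_px=8, radius_z=0.05, num_samples=16,
                      seed=0):
    """Per-pixel ambient occlusion factors k_A from a depth buffer alone.

    depth: 2D array of scene depths (larger values are farther from the
    camera). Returns an array of k_A values in [0, 1]; 1 = unoccluded.
    """
    rng = np.random.default_rng(seed)
    # num_samples points distributed uniformly inside the unit sphere.
    dirs = rng.normal(size=(num_samples, 3))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    offsets = dirs * rng.uniform(size=(num_samples, 1)) ** (1.0 / 3.0)

    h, w = depth.shape
    k_a = np.zeros_like(depth, dtype=float)
    for y in range(h):
        for x in range(w):
            passing = 0
            for ox, oy, oz in offsets:
                sx = min(max(int(round(x + ox * radius_px)), 0), w - 1)
                sy = min(max(int(round(y + oy * radius_px)), 0), h - 1)
                sample_z = depth[y, x] + oz * radius_z
                if sample_z < depth[sy, sx]:  # sample passes the z-test
                    passing += 1
            ratio = passing / num_samples
            # Clamp-and-rescale fix from footnote 7, so that a flat open
            # surface (ratio near 0.5) is not darkened.
            k_a[y, x] = min(ratio, 0.5) * 2.0
    return k_a

In a real implementation this would run as a full-screen shader pass, with
the per-pixel randomized sample pattern and smart blur described above.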
Figure 9.39. The effect of screen-space ambient occlusion is shown in the upper left. The
upper right shows the albedo (diffuse color) without ambient occlusion. In the lower
left the two are shown combined. Specular shading and shadows are added for the final
image, in the lower right. (Images from “Crysis” courtesy of Crytek.)
Figure 9.40. The Z-buffer unsharp mask technique for approximate ambient occlusion.
The image on the left has no ambient occlusion; the image on the right includes an ap-
proximate ambient occlusion term generated with the Z-buffer unsharp mask technique.
(Images courtesy of Mike Pan.)
A simpler (and cheaper) approach that is also based on screen-space
analysis of the contents of the Z-buffer was proposed by Luft et al. [802].
The basic idea is to perform an unsharp mask filter on the Z-buffer. An
unsharp mask is a type of high-pass filter that accentuates edges and other
discontinuities in an image. It does so by subtracting a blurred version
from the original image. This method is inexpensive. The result only
superficially resembles ambient occlusion, but with some adjustment of
scale factors, a pleasing image can be produced—see Figure 9.40.
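
A sketch of the idea follows; blur_sigma and strength play the role of the
scale factors mentioned above, and the specific values are placeholders that
would need per-scene adjustment.

import numpy as np
from scipy.ndimage import gaussian_filter

def zbuffer_unsharp_ao(depth, blur_sigma=8.0, strength=4.0):
    """Approximate ambient occlusion by unsharp masking the Z-buffer:
    subtract a blurred copy of the depth buffer from the original and
    darken where the difference indicates a local depth concavity.

    depth: 2D array of scene depths (larger = farther from the camera).
    """
    blurred = gaussian_filter(depth, sigma=blur_sigma)
    # Positive where the pixel is farther than its neighborhood average,
    # i.e., in a crease or concavity; these regions are darkened.
    difference = depth - blurred
    k_a = 1.0 - strength * np.maximum(difference, 0.0)
    return np.clip(k_a, 0.0, 1.0)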
Shanmugam and Arikan [1156] describe two approaches in their paper.
One generates fine ambient occlusion from small, nearby details. The other
generates coarse ambient occlusion from larger objects. The results of the
two are combined to produce the final ambient occlusion factor.
Their fine ambient occlusion method uses a full-screen pass that accesses
the Z-buffer along with a second buffer containing the surface normals of
the visible pixels. For each shaded pixel, nearby pixels are sampled from the
Z-buffer. The sampled pixels are represented as spheres, and an occlusion
term is computed for the shaded pixel (taking its normal into account).
Double shadowing is not accounted for, so the result is somewhat dark.
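
The occlusion contributed by one such sphere can be approximated from the
solid angle it subtends, weighted by the cosine of the angle between the
surface normal and the direction to the sphere. The sketch below uses one
common approximation of this kind; it is not necessarily the exact term from
the paper, and, as noted above, simply summing such terms over all spheres
over-darkens wherever occluders shadow each other.

import numpy as np

def sphere_occlusion(p, n, center, radius):
    """Approximate occlusion of surface point p (unit normal n) by a
    single sphere; returns a value in [0, 1].
    """
    to_center = center - p
    dist = np.linalg.norm(to_center)
    if dist <= radius:  # p is inside the sphere: fully occluded
        return 1.0
    cos_theta = max(np.dot(n, to_center / dist), 0.0)
    # Solid angle of a sphere of the given radius seen from distance
    # dist, expressed as a fraction of the hemisphere's 2*pi steradians.
    solid_angle = 2.0 * np.pi * (1.0 - np.sqrt(1.0 - (radius / dist) ** 2))
    return min(cos_theta * solid_angle / (2.0 * np.pi), 1.0)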
Their coarse occlusion method is similar to the object-space method of
Ren et al. [1062] (discussed on page 381) in that the occluding geometry is
approximated as a collection of spheres. However, Shanmugam and Arikan
accumulate occlusion in screen space, using screen-aligned billboards cov-
ering the “area of effect” of each occluding sphere. Double shadowing is
not accounted for in the coarse occlusion method either (unlike the method
of Ren et al.).
Sloan et al. [1190] propose a technique that combines the spherical har-
monic exponentiation of Ren et al. [1062] with the screen-space occlusion
approach of Shanmugam and Arikan [1156]. As in the technique by Ren et
al., spherical harmonic visibility functions from spherical occluders are ac-
cumulated in log space, and the result is exponentiated. As in Shanmugam
and Arikan’s technique, these accumulations occur in screen space (in this
case by rendering coarsely tessellated spheres rather than billboards). The
combined technique by Sloan et al. accounts for double shadowing and can
produce not only ambient occlusion factors and bent normals, but also
higher-frequency visibility functions that can be used to shadow environ-
ment maps and area lights. It can even handle interreflections. However,
it is limited to coarse occlusion, since it uses spherical proxies instead of
the actual occluding geometry.
One interesting option is to replace Shanmugam and Arikan’s fine occlu-
sion method [1156] with the less expensive Crytek method [887], to improve
performance while retaining both coarse and fine occlusion. Alternatively,
their coarse method could be replaced with the more accurate one proposed
by Sloan et al. [1190], to improve visual quality.
9.3 Reflections
Environment mapping techniques for providing reflections of objects at a
distance have been covered in Sections 8.4 and 8.5, with reflected rays com-
puted using Equation 7.30 on page 230. The limitation of such techniques
is that they work on the assumption that the reflected objects are located
far from the reflector, so that the same texture can be used by all reflection
rays. Generating planar reflections of nearby objects will be presented in
this section, along with methods for rendering frosted glass and handling
curved reflectors.
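
For reference, the reflected ray of that equation can be computed as
follows; the sign convention assumed here is that d is the incident
direction pointing toward the surface and n is the unit surface normal.

import numpy as np

def reflect(d, n):
    """Reflect incident direction d about the unit surface normal n."""
    return d - 2.0 * np.dot(n, d) * n

# Example: a ray heading diagonally down onto a floor (normal +y)
# reflects into a ray heading diagonally up.
d = np.array([1.0, -1.0, 0.0]) / np.sqrt(2.0)
n = np.array([0.0, 1.0, 0.0])
print(reflect(d, n))  # -> [0.7071, 0.7071, 0.]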
9.3.1 Planar Reflections
Planar reflection, by which we mean reflection off a flat surface such as
a mirror, is a special case of reflection off arbitrary surfaces. As often
occurs with special cases, planar reflections are easier to implement and
can execute more rapidly than general reflections.
An ideal reflector follows the law of reflection, which states that the
angle of incidence is equal to the angle of reflection. That is, the angle
between the incident ray and the normal is equal to the angle between the
reflected ray and the normal. This is depicted in Figure 9.41, which illus-
trates a simple object that is reflected in a plane. The figure also shows
an “image” of the reflected object. Due to the law of reflection, the re-
flected image of the object is simply the object itself, physically reflected
through the plane. That is, instead of following the reflected ray, we could
follow the incident ray through the reflector and hit the same point, but
on the reflected object.
Figure 9.41. Reflection in a plane, showing angle of incidence and reflection, the reflected
geometry, and the reflector.
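
In practice, the reflected image can be rendered by transforming the scene
geometry with a reflection matrix before drawing it. A minimal sketch for
the plane n · x + d = 0 with unit normal n is given below (the column-vector
convention, p' = Mp, is an assumption). Since reflection reverses
handedness, the triangle winding of the reflected geometry flips, so face
culling must be swapped when rendering the reflected scene.

import numpy as np

def reflection_matrix(n, d):
    """4x4 matrix reflecting homogeneous points through the plane
    n . x + d = 0, where n is the plane's unit normal."""
    n = np.asarray(n, dtype=float)
    m = np.identity(4)
    m[:3, :3] -= 2.0 * np.outer(n, n)  # mirror directions about the plane
    m[:3, 3] = -2.0 * d * n            # account for the plane's offset
    return m

# Example: reflect a point through the plane y = 0 (n = (0,1,0), d = 0).
M = reflection_matrix([0.0, 1.0, 0.0], 0.0)
print(M @ np.array([1.0, 2.0, 3.0, 1.0]))  # -> [1., -2., 3., 1.]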