422 9. Global Illumination
phase, the vector irradiance can be computed by integrating over the hemisphere around the unperturbed surface normal. Three separate vectors are computed (one each for R, G, and B). During rendering, the perturbed surface normal can be dotted with the three vectors to produce the R, G, and B components of the irradiance. The computed irradiance values may be incorrect due to occlusion effects, but this representation should compare well with the ones previously discussed.
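The dot products above can be sketched in a few lines (plain Python; the vector-irradiance values are hypothetical):

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def rgb_irradiance(n, e_r, e_g, e_b):
    """Dot the perturbed surface normal with the three precomputed
    vector-irradiance vectors (one per color channel), clamping
    negative results to zero."""
    return tuple(max(dot(n, e), 0.0) for e in (e_r, e_g, e_b))

# Hypothetical vector irradiance pointing mostly up (+z):
e_r, e_g, e_b = (0.0, 0.0, 1.0), (0.0, 0.0, 0.8), (0.0, 0.0, 0.6)
n = (0.0, 0.0, 1.0)  # unperturbed normal, facing straight up
print(rgb_irradiance(n, e_r, e_g, e_b))  # (1.0, 0.8, 0.6)
```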
PDI developed an interesting prelighting representation to render indirect illumination in the film Shrek 2 [1236]. Unlike the previously discussed representations, the PDI representation is designed for prelighting bumped surfaces with arbitrary BRDFs, not just Lambertian surfaces. The PDI representation consists of the surface irradiance E and three light direction vectors l_R, l_G, and l_B. These values are sampled in a light gathering phase that is analogous to the prelighting phase for an interactive application.
Sampling is performed by shooting many rays over the hemisphere of each
sample point and finding the radiance from each intersected surface. No
rays are shot to light sources since only indirect lighting is represented.
The ray casting results are used to compute the surface irradiance E. The ray directions, weighted by the radiance's R, G, and B components, are averaged to compute three lighting vectors l_R, l_G, and l_B:
l_R = (Σ_k L_{k,R} l_k) / ‖Σ_k L_{k,R} l_k‖,   l_G = (Σ_k L_{k,G} l_k) / ‖Σ_k L_{k,G} l_k‖,   l_B = (Σ_k L_{k,B} l_k) / ‖Σ_k L_{k,B} l_k‖,   (9.41)
where L_k is the radiance from the surface intersected by the kth ray (and L_{k,R}, L_{k,G}, and L_{k,B} are its R, G, and B components). Also, l_k is the kth ray's direction vector (pointing away from the surface). Each weighted sum of direction vectors is divided by its own length to produce a normalized result (the notation ‖x‖ indicates the length of the vector x).
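Equation 9.41 can be sketched as follows (a minimal Python version; the sample radiances and directions are made up for illustration):

```python
import math

def pdi_light_vectors(samples):
    """Equation 9.41: for each color channel, sum the ray directions
    weighted by that channel's radiance, then normalize the sum.

    samples: list of (L, l) pairs, where L = (R, G, B) is the radiance
    returned along the ray and l is the ray's unit direction (pointing
    away from the surface)."""
    result = []
    for c in range(3):  # channel index: 0 = R, 1 = G, 2 = B
        s = [0.0, 0.0, 0.0]
        for L, l in samples:
            for i in range(3):
                s[i] += L[c] * l[i]
        length = math.sqrt(sum(x * x for x in s))
        # Fall back to straight up for a channel that received no light:
        result.append(tuple(x / length for x in s) if length > 0.0
                      else (0.0, 0.0, 1.0))
    return tuple(result)

# Two rays: a reddish surface up-left, a bluish surface up-right.
samples = [((0.9, 0.2, 0.1), (-0.707, 0.0, 0.707)),
           ((0.1, 0.2, 0.9), (0.707, 0.0, 0.707))]
l_R, l_G, l_B = pdi_light_vectors(samples)
# l_R leans toward the red surface (negative x), l_B toward the blue one.
```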
During final rendering, E, l_R, l_G, and l_B are averaged from nearby sample points (in an interactive application, they would be individually interpolated from vertex or texel values). They are then used to light the surface:
L_o(v) = Σ_{λ=R,G,B} f(v, l_λ) (E_λ / max(n_orig · l_λ, 0)) max(n · l_λ, 0),   (9.42)
where L_o(v) is the outgoing radiance resulting from indirect lighting (PDI computes direct lighting separately). Also, f(v, l_λ) is the BRDF evaluated
at the appropriate view and light directions, E_λ is a color component (R, G, or B) of the precomputed irradiance, and n_orig is the original unperturbed surface normal (not to be confused with n, the perturbed surface normal). If normal mapping is not used, then Equation 9.42 can be simplified somewhat.
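A sketch of Equation 9.42 in Python, assuming a scalar-valued BRDF and hypothetical prelighting values:

```python
import math

def clamped_dot(a, b):
    return max(sum(x * y for x, y in zip(a, b)), 0.0)

def shade_pdi(f, E, n_orig, n, light_dirs, view):
    """Equation 9.42: per channel,
    f(view, l) * E_lambda / max(n_orig . l, 0) * max(n . l, 0)."""
    out = []
    for E_l, l in zip(E, light_dirs):
        denom = clamped_dot(n_orig, l)
        if denom == 0.0:
            out.append(0.0)  # light direction at or below the horizon
        else:
            out.append(f(view, l) * (E_l / denom) * clamped_dot(n, l))
    return tuple(out)

# Lambertian BRDF (albedo / pi) and hypothetical prelighting values:
brdf = lambda v, l: 0.8 / math.pi
n_orig = (0.0, 0.0, 1.0)        # unperturbed normal
n = (0.1, 0.0, 0.995)           # normal-mapped (perturbed) normal
dirs = ((0.0, 0.0, 1.0),) * 3   # one light vector per channel
color = shade_pdi(brdf, (1.0, 0.9, 0.8), n_orig, n, dirs, (0.0, 0.0, 1.0))
```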
A possible lower-cost variant would use just a single lighting direction (in this case, luminance should be used for weighting, instead of the radiance's R, G, or B color components). Another possibility is to optimize the implementation by scaling the direction vectors by the irradiance and cosine factor in the prelighting phase, to produce three illumination vectors i_R, i_G, and i_B:
i_λ = (E_λ / max(n_orig · l_λ, 0)) l_λ,   (9.43)
for λ = R, G, B. Then Equation 9.42 is simplified to

L_o(v) = Σ_{λ=R,G,B} f(v, l_λ) max(n · i_λ, 0),   (9.44)

where l_λ is computed by normalizing i_λ.
Care should be taken when performing this optimization, since the results of interpolating i_R, i_G, and i_B may not be the same as interpolating E, l_R, l_G, and l_B. This form of the PDI representation is equivalent to the
RGB vector irradiance representation suggested earlier. Although in theory this representation is only correct for Lambertian surfaces, PDI reports good results with arbitrary BRDFs, at least for indirect illumination.
As in the case of simple prelighting, all of these prelighting representations can be stored in textures, vertex attributes, or a combination of the two.
9.9.3 Volume Prelighting
Many interactive applications (games in particular) feature a static environment in which characters and other dynamic objects move about. Surface prelighting can work well for lighting the static environment, although we need to factor in the occlusion effects of the characters on the environment. However, indirect light from the environment also illuminates the dynamic objects. How can this be precomputed?
Greger et al. [453] proposed the irradiance volume, which represents the
five-dimensional (three spatial and two directional dimensions) irradiance
function with a sparse spatial sampling of irradiance environment maps.
That is, there is a three-dimensional grid in space, and at the grid points
are irradiance environment maps. Dynamic objects interpolate irradiance
values from the closest of these environment maps. Greger et al. used
Figure 9.65. Interpolation between two spherical harmonic irradiance map samples. In the middle row, simple linear interpolation is performed. The result is clearly wrong with respect to the light source in the top row. In the bottom row, the spherical harmonic gradients are used for a first-order Taylor expansion before interpolation. (Image courtesy of Chris Oat, ATI Research, Inc.)
a two-level adaptive grid for the spatial sampling, but other volume data
structures, such as octrees or tetrahedral meshes, could be used. Kontkanen
and Laine [687] describe a method for reducing aliasing artifacts when
precomputing irradiance volumes.
A variety of representations for irradiance environment maps were described in Section 8.6. Of these, compact representations such as spherical harmonics [1045] and Valve's ambient cube [848, 881] are best suited for irradiance volumes. Interpolating such representations is straightforward, as it is equivalent to interpolating the individual coefficient values. The use of spherical harmonic gradients [25] can further improve the quality of spherical harmonic irradiance volumes. Oat [952, 953] describes an irradiance volume implementation that uses cubic interpolation of spherical harmonics and spherical harmonic gradients. The gradient is used to improve the quality of the interpolation (see Figure 9.65).
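Since spherical harmonic evaluation is linear in the coefficients, interpolating SH irradiance maps amounts to interpolating their coefficient arrays. A minimal sketch using plain trilinear interpolation (without the cubic filtering or gradients that Oat's method adds):

```python
def lerp_sh(a, b, t):
    """Blend two SH coefficient arrays coefficient by coefficient;
    valid because SH reconstruction is linear in the coefficients."""
    return [(1.0 - t) * x + t * y for x, y in zip(a, b)]

def trilinear_sh(grid, x, y, z):
    """Trilinearly interpolate SH coefficient arrays stored at the
    corners of one grid cell; grid[i][j][k] is the array at corner
    (i, j, k), and (x, y, z) in [0, 1]^3 is the position in the cell."""
    c00 = lerp_sh(grid[0][0][0], grid[1][0][0], x)
    c10 = lerp_sh(grid[0][1][0], grid[1][1][0], x)
    c01 = lerp_sh(grid[0][0][1], grid[1][0][1], x)
    c11 = lerp_sh(grid[0][1][1], grid[1][1][1], x)
    c0 = lerp_sh(c00, c10, y)
    c1 = lerp_sh(c01, c11, y)
    return lerp_sh(c0, c1, z)

# Four-coefficient (band 0 + band 1) example arrays at the cell corners:
a = [1.0, 0.0, 0.0, 0.0]                 # corners at x = 0
b = [3.0, 0.0, 0.0, 0.0]                 # corners at x = 1
grid = [[[a, a], [a, a]], [[b, b], [b, b]]]
mid = trilinear_sh(grid, 0.5, 0.5, 0.5)  # [2.0, 0.0, 0.0, 0.0]
```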
The irradiance volumes in Valve's Half-Life 2 are interesting in that no spatial interpolation is performed. The nearest ambient cube to the character is always selected. Popping artifacts are avoided via time averaging.
One way to bypass the need to compute and store an irradiance volume is to use the prelighting on adjacent surfaces. In Quake III: Arena, the lightmap value for the floor under the character is used for ambient lighting [558]. A similar approach is described by Hargreaves [502], who uses radiosity textures to store the radiosity, or exitance, of the ground plane.
The values beneath the object are then used as ground colors in a hemisphere lighting irradiance representation (see Section 8.6.3 for a description of hemisphere lighting). This technique was used for an outdoor driving game, where it worked quite well.
Evans [322] describes an interesting trick used for the irradiance volumes in LittleBigPlanet. Instead of a full irradiance map representation, an average irradiance is stored at each point. An approximate directionality factor is computed from the gradient of the irradiance field (the direction in which the field changes most rapidly). Instead of computing the gradient explicitly, the dot product between the gradient and the surface normal n is computed by taking two samples of the irradiance field, one at the surface point p and one at a point displaced slightly in the direction of n, and subtracting one from the other. This approximate representation is motivated by the fact that the irradiance volumes in LittleBigPlanet are computed dynamically.
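The finite-difference trick can be sketched as follows (the irradiance field and step size below are hypothetical):

```python
def grad_dot_n(irradiance, p, n, eps=0.1):
    """Approximate dot(grad E, n) at point p by differencing the scalar
    irradiance field between p and a point displaced by eps along the
    surface normal n (a one-sided finite difference)."""
    p_off = tuple(pi + eps * ni for pi, ni in zip(p, n))
    return (irradiance(p_off) - irradiance(p)) / eps

# Hypothetical linear irradiance field E(p) = 2x + z, so grad E = (2, 0, 1):
E_field = lambda p: 2.0 * p[0] + p[2]
d = grad_dot_n(E_field, (0.0, 0.0, 0.0), (0.0, 0.0, 1.0))  # 1.0
```

For a linear field the finite difference is exact; in general the step size eps trades off truncation error against sampling noise in the stored volume.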
Nijasure et al. [938] also compute irradiance volumes dynamically. The irradiance volumes are represented as grids of spherical harmonic irradiance environment maps. Several iteration steps are performed, where the scene geometry lit by the previous iteration is used to compute the next iteration's irradiance volume. Their method can compute diffuse interreflections (similar to those produced by radiosity) at interactive rates for simple scenes.
Little work has been done on volume prelighting for specular and glossy
surfaces. Valve’s Half-Life 2 uses environment maps rendered and stored at
artist-determined locations [848, 881]. Objects in the scene use the nearest
environment map.
9.10 Precomputed Occlusion
Global illumination algorithms can be used to precompute various quantities other than lighting. These quantities can be stored over the surfaces or volumes of the scene, and used during rendering to improve visuals. Some measure of how much some parts of the scene block light from others is often precomputed. These precomputed occlusion quantities can then be applied to changing lighting in the scene, yielding a more dynamic appearance than precomputing the lighting.
9.10.1 Precomputed Ambient Occlusion
The ambient occlusion factor and bent normal (both discussed in Section 9.2) are frequently precomputed and stored in textures or vertices. The most common approach to performing this precomputation is to cast
rays over the hemisphere around each surface location or vertex. The cast rays may be restricted to a certain distance, and in some cases the intersection distance may be used in addition to, or instead of, a simple binary hit/miss determination. The computation of obscurance factors is one example where intersection distances are used.
The computation of ambient occlusion or obscurance factors usually includes a cosine weighting factor. The most efficient way to incorporate this factor is by means of importance sampling. Instead of casting rays uniformly over the hemisphere and cosine-weighting the results, the distribution of ray directions is cosine-weighted (so rays are more likely to be cast closer to the surface normal). Most commercially available modeling and rendering software packages include an option to precompute ambient occlusion.
Ambient occlusion precomputations can also be accelerated on the
GPU, using GPU features such as depth maps [1012] or occlusion
queries [362].
Strictly speaking, precomputed ambient occlusion factors are only valid
as long as the scene’s geometry does not change. For example, ambient
occlusion factors can be precomputed over a racetrack, and they will stay
valid as the camera moves through the scene. The effect of secondary
bounce lighting for light sources moving through the environment can be
approximated by applying the ambient occlusion factors to the lights, as
discussed in Section 9.2.2. In principle, the introduction of additional objects (such as cars) invalidates the precomputation. In practice, the simple approach of precomputing ambient occlusion factors for the track (without cars), and for each car in isolation, works surprisingly well.
For improved visuals, the ambient occlusion of the track on the cars
(and vice versa) can be approximated by placing each car on a large, flat
plane when performing its ambient occlusion precomputation. The ambient
occlusion factors on the plane can be captured into a texture and projected
onto the track underneath the car [728]. This works well because the track
is mostly flat and the cars are likely to remain in a fixed relationship to the
track surface.
Another reason the simple scheme described above works well is that the cars are rigid objects. For deformable objects (such as human characters) the ambient occlusion solution loses its validity as the character changes pose. Kontkanen and Aila [690] propose a method for precomputing ambient occlusion factors for many reference poses, and for finding a correspondence between these and animation parameters, such as joint angles. At rendering time, they use over 50 coefficients stored at each vertex to compute the ambient occlusion from the parameters of the current pose (the computation is equivalent to performing a dot product between two long vectors). Kirk and Arikan [668] further extend this approach. Both