9.9. Precomputed Lighting 417
Photon mapping is often used together with ray tracing. Ray tracing is
applied in the final gather phase, where rays are shot from each rendered
location to collect indirect illumination (from the photon maps) and direct
illumination (from the light sources).
GPU implementations of these methods have been developed [383, 472,
1039, 1215], but are not yet fast enough to be used in real-time applications.
Rather, the motivation is to speed up offline rendering.
An interesting development is the invention of new global illumination
algorithms designed specifically for the GPU. One of the earliest of these
is “Instant Radiosity” [641], which, despite its name, has little in common
with the radiosity algorithm. The basic idea is simple: Rays are cast out-
ward from the light sources. For each place where a ray hits, a light source
is placed and rendered with shadows, to represent the indirect illumination
from that surface element. This technique was designed to take advantage
of fixed-function graphics hardware, but it maps well onto programmable
GPUs. Recent extensions [712, 1111, 1147] have been made to this algo-
rithm to improve performance or visuals.
Other techniques have been developed in the same spirit of representing
bounce lighting with light sources, where the light sources are stored in
textures or splatted on the screen [219, 220, 221, 222].
Another interesting GPU-friendly global illumination algorithm was
proposed by Sloan et al. [1190]. This technique was discussed on page 385
in the context of occlusion, but since it models interreflections as well, it
should be counted as a full global illumination algorithm.
9.9 Precomputed Lighting
The full global illumination algorithms discussed in the previous section
are very costly for all but the simplest scenes. For this reason, they are
usually not employed during rendering, but for offline computations. The
results of these precomputations are then used during rendering.
There are various kinds of data that can be precomputed, but the most
common is lighting information. For precomputed lighting or prelighting
to remain valid, the scene and light sources must remain static. Fortu-
nately, there are many applications where this is at least partially the case,
enabling the use of precomputed lighting to good effect. Sometimes the
greatest problem is that the static models look much better than the dy-
namic (changing) ones.
9.9.1 Simple Surface Prelighting
The lighting on smooth Lambertian surfaces is fully described by a single
RGB value—the irradiance. If the light sources and scene geometry do
418 9. Global Illumination
not change, the irradiance will be constant and can be precomputed ahead
of time. Due to the linear nature of light transport, the effect of any
dynamic light sources can simply be added on top of the precomputed
irradiance.
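The additivity described above can be made concrete with a small sketch. This is a minimal illustration (the function names, light parameters, and per-channel tuples are assumptions for the example, not from the text): the precomputed static irradiance and any dynamic light contributions simply sum, thanks to the linearity of light transport.

```python
# Sketch: linearity of light transport lets dynamic lights add on top of
# precomputed irradiance. All names/values here are illustrative.

def lambert_irradiance(normal, light_dir, light_color):
    """Irradiance from one directional light on a Lambertian surface."""
    n_dot_l = max(sum(n * l for n, l in zip(normal, light_dir)), 0.0)
    return tuple(c * n_dot_l for c in light_color)

def shade(albedo, precomputed_irradiance, dynamic_lights, normal):
    """Total irradiance = precomputed static part + sum over dynamic lights."""
    e = list(precomputed_irradiance)
    for light_dir, light_color in dynamic_lights:
        contrib = lambert_irradiance(normal, light_dir, light_color)
        e = [a + b for a, b in zip(e, contrib)]
    # Exitance: irradiance modulated by the surface's diffuse color.
    return tuple(a * c for a, c in zip(albedo, e))
```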
In 1997, Quake II by id Software was the first commercial interac-
tive application to make use of irradiance values precomputed by a global
illumination algorithm. Quake II used texture maps that stored irra-
diance values that were precomputed offline using radiosity. Such tex-
tures have historically been called light maps, although they are more pre-
cisely named irradiance maps. Radiosity was a good choice for Quake II’s
precomputation since it is well suited to the computation of irradiance
in Lambertian environments. Also, memory constraints of the time re-
stricted the irradiance maps to be relatively low resolution, which matched
well with the blurry, low-frequency shadows typical of radiosity solutions.
Born [127] discusses the process of making irradiance map charts using
radiosity techniques.
Irradiance values can also be stored in vertices. This works particu-
larly well if the geometry detail is high (so the vertices are close together)
or the lighting detail is low. Vertex and texture irradiance can even be
used in the same scene, as can be seen in Figure 9.42 on page 388. The
castle in this figure is rendered with irradiance values that are stored on
vertices over most of the scene. In flat areas with rapid lighting changes
(such as the walls behind the torches) the irradiance is stored in texture
maps.
Precomputed irradiance values are usually multiplied with diffuse color
or albedo maps stored in a separate set of textures. Although the exitance
(irradiance times diffuse color) could in theory be stored in a single set of
maps, many practical considerations rule out this option in most cases. The
color maps are usually quite high frequency and make use of various kinds
of tiling and reuse to keep the memory usage reasonable. The irradiance
data is usually much lower frequency and cannot easily be reused. The
combination of two separate signals consumes much less memory. The low-
frequency irradiance changes also tend to mask repetition resulting from
tiling the color map. A combination of a diffuse color map and an irradiance
map can be seen in Figure 9.63.
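The combination of a tiled, high-frequency color map with an untiled, low-frequency irradiance map can be sketched as follows. The resolutions, the 4x tiling factor, and nearest-neighbor filtering are illustrative assumptions, not values from the text:

```python
# Sketch: final color = tiled albedo map * low-resolution irradiance map.
# Tiling factor and nearest filtering are illustrative choices.

def texel(image, u, v):
    """Fetch a texel by normalized (u, v) with nearest filtering."""
    h, w = len(image), len(image[0])
    x = min(int(u * w), w - 1)
    y = min(int(v * h), h - 1)
    return image[y][x]

def shade_texel(albedo_map, irradiance_map, u, v):
    """Albedo wraps (tiles); the irradiance map covers the surface once."""
    albedo = texel(albedo_map, (u * 4.0) % 1.0, (v * 4.0) % 1.0)  # 4x tiling
    irr = texel(irradiance_map, u, v)  # untiled, low frequency
    return tuple(a * e for a, e in zip(albedo, irr))
```

Because the two signals are sampled independently, the albedo texture can be reused across the scene while each surface keeps its own unique irradiance data.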
Another advantage of storing the irradiance separately from the diffuse
color is that the irradiance can then be more easily changed. Multiple
irradiance solutions could be stored and reused with the same geometry and
diffuse color maps (for example, the same scene seen at night and during
the day). Using texture coordinate animation techniques (Section 6.4) or
projective texturing (Section 7.4.3), light textures can be made to move on
walls—e.g., a moving spotlight effect can be created. Also, light maps can
be recalculated on the fly for dynamic lighting.
Figure 9.63. Irradiance map combined with a diffuse color, or albedo, map. The diffuse
color map on the left is multiplied by the irradiance map in the middle to yield the result
on the right. (Images courtesy of J.L. Mitchell, M. Tatro, and I. Bullard.)
Although additional lights can be added to a precomputed irradiance so-
lution, any change or addition of geometry invalidates the solution, at least
in theory. In practice, relatively small dynamic objects, such as charac-
ters, can be introduced into the scene and their effect on the precomputed
solution can be either ignored or approximated, e.g., by attenuating the
precomputed irradiance with dynamic shadows or ambient occlusion. The
effect of the precomputed solution on the dynamic objects also needs to be
addressed—some techniques for doing so will be discussed in Section 9.9.3.
Since irradiance maps have no directionality, they cannot be used with
glossy or specular surfaces. Worse still, they also cannot be used with high-
frequency normal maps. These limitations have motivated a search for ways
to store directional precomputed lighting. In cases where indirect lighting
is stored separately, irradiance mapping can still be of interest. Indirect
lighting is weakly directional, and the effects of small-scale geometry can
be modeled with ambient occlusion.
9.9.2 Directional Surface Prelighting
To use low-frequency prelighting information with high-frequency normal
maps, irradiance alone will not suffice. Some directional information must
be stored, as well. In the case of Lambertian surfaces, this information will
represent how the irradiance changes with the surface normal. This is effec-
tively the same as storing an irradiance environment map at each surface
location. Since irradiance environment maps can be represented with nine
RGB spherical harmonic coefficients, one straightforward approach is to
store those in vertex values or texture maps. To save pixel shader cycles,
these coefficients can be rotated into the local frame at precomputation
time.
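Evaluating the stored nine-coefficient representation at a given normal can be done with the quadratic-polynomial form and constants of Ramamoorthi and Hanrahan's irradiance environment map work. This is a per-channel sketch; the coefficient ordering L[0..8] = (L00, L1-1, L10, L11, L2-2, L2-1, L20, L21, L22) is an assumption for the example:

```python
# Sketch: irradiance from nine SH coefficients (one color channel) using
# the Ramamoorthi-Hanrahan quadratic polynomial. Coefficient ordering in
# L is an assumption: (L00, L1-1, L10, L11, L2-2, L2-1, L20, L21, L22).

C1, C2, C3, C4, C5 = 0.429043, 0.511664, 0.743125, 0.886227, 0.247708

def sh_irradiance(L, normal):
    x, y, z = normal  # unit-length normal in the coefficients' frame
    return (C4 * L[0]
            + 2.0 * C2 * (L[3] * x + L[1] * y + L[2] * z)
            + C1 * L[8] * (x * x - y * y)
            + C3 * L[6] * z * z - C5 * L[6]
            + 2.0 * C1 * (L[4] * x * y + L[7] * x * z + L[5] * y * z))
```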
Storing nine RGB values at every vertex or texel is very memory inten-
sive. If the spherical harmonics are meant to store only indirect lighting,
four coefficients may suffice. Another option is to use a different basis
Figure 9.64. Half-Life 2 lighting basis. The three basis vectors have elevation angles
of about 35.26° above the tangent plane, and their projections into that plane are spaced
equally (at 120° intervals) around the normal. They are unit length, and each one is
perpendicular to the other two.
that is optimized for representing functions on the hemisphere, rather than
the sphere. Often normal maps cover only the hemisphere around the
unperturbed surface normal. Hemispherical harmonics [382] can repre-
sent functions over the hemisphere using a smaller number of coefficients
than spherical harmonics. First-order hemispherical harmonics (four coef-
ficients) should in theory have only slightly higher error than second-order
spherical harmonics (nine coefficients) for representing functions over the
hemisphere.
Valve uses a novel representation [848, 881], which it calls radiosity
normal mapping, in the Half-Life 2 series of games. It represents the direc-
tional irradiance at each point as three RGB irradiance values, sampled in
three directions in tangent space (see Figure 9.64). The coordinates of the
three mutually perpendicular basis vectors in tangent space are
$$\mathbf{m}_0 = \left(-\frac{1}{\sqrt{6}},\; \frac{1}{\sqrt{2}},\; \frac{1}{\sqrt{3}}\right), \quad \mathbf{m}_1 = \left(-\frac{1}{\sqrt{6}},\; -\frac{1}{\sqrt{2}},\; \frac{1}{\sqrt{3}}\right), \quad \mathbf{m}_2 = \left(\sqrt{\frac{2}{3}},\; 0,\; \frac{1}{\sqrt{3}}\right). \tag{9.37}$$
At rendering time, the tangent-space normal n is read from the normal
map and the irradiance is interpolated from the three sampled irradiance
values ($E_0$, $E_1$, and $E_2$):¹²

$$E(\mathbf{n}) = \frac{\sum_{k=0}^{2} \max(\mathbf{m}_k \cdot \mathbf{n},\, 0)^2\, E_k}{\sum_{k=0}^{2} \max(\mathbf{m}_k \cdot \mathbf{n},\, 0)^2}. \tag{9.38}$$
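As a concrete check of Equations 9.37 and 9.38, the basis construction and irradiance interpolation can be sketched per color channel (the function names are illustrative):

```python
import math

# Sketch: Half-Life 2 basis (Eq. 9.37) and irradiance interpolation
# (Eq. 9.38), evaluated for one color channel. Names are illustrative.

M = [(-1.0 / math.sqrt(6.0),  1.0 / math.sqrt(2.0), 1.0 / math.sqrt(3.0)),
     (-1.0 / math.sqrt(6.0), -1.0 / math.sqrt(2.0), 1.0 / math.sqrt(3.0)),
     ( math.sqrt(2.0 / 3.0),  0.0,                  1.0 / math.sqrt(3.0))]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def irradiance(n, E):
    """Eq. 9.38: blend the three sampled irradiance values E[0..2] with
    weights max(m_k . n, 0)^2, normalized so the weights sum to one."""
    w = [max(dot(m, n), 0.0) ** 2 for m in M]
    return sum(wk * ek for wk, ek in zip(w, E)) / sum(w)
```

Note that for the unperturbed normal (0, 0, 1), all three weights are equal, so the result is simply the average of the three samples.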
Green [440] points out that shading can be made significantly cheaper if
the following three values are stored in the map instead of the tangent-space
normal:
$$d_k = \frac{\max(\mathbf{m}_k \cdot \mathbf{n},\, 0)^2}{\sum_{j=0}^{2} \max(\mathbf{m}_j \cdot \mathbf{n},\, 0)^2}, \tag{9.39}$$
for k =0, 1, 2. Then Equation 9.38 simplifies to the following:
$$E(\mathbf{n}) = \sum_{k=0}^{2} d_k E_k. \tag{9.40}$$
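The precomputation of Equation 9.39 and the simplified shading of Equation 9.40 can be sketched as follows; since the weights are baked into the map, the runtime cost reduces to a single three-term dot product (function names are illustrative):

```python
import math

# Sketch: Green's optimization. Eq. 9.39 is evaluated once, offline, and
# the weights d_k are stored in the map; Eq. 9.40 is the runtime shading.

M = [(-1.0 / math.sqrt(6.0),  1.0 / math.sqrt(2.0), 1.0 / math.sqrt(3.0)),
     (-1.0 / math.sqrt(6.0), -1.0 / math.sqrt(2.0), 1.0 / math.sqrt(3.0)),
     ( math.sqrt(2.0 / 3.0),  0.0,                  1.0 / math.sqrt(3.0))]

def precompute_weights(n):
    """Eq. 9.39: normalized squared clamped dot products (done offline)."""
    w = [max(sum(mi * ni for mi, ni in zip(m, n)), 0.0) ** 2 for m in M]
    total = sum(w)
    return [wk / total for wk in w]

def shade_simplified(d, E):
    """Eq. 9.40: no clamps or divides remain at runtime."""
    return sum(dk * ek for dk, ek in zip(d, E))
```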
Green describes several other advantages to this representation (some of
them are discussed further in Section 9.10.2). However, it should be noted
that some of the performance savings are lost if the tangent space normal
n is needed for other uses, such as specular lighting, since the normal must
then be reconstructed in the pixel shader. Also, although the required
storage is equivalent in theory to a normal map (both use three numbers), in
practice normal maps are easier to compress. For these reasons, the original
formulation in Equation 9.38 may be preferable for some applications.
The Half-Life 2 representation works well for directional irradiance.
Sloan [1189] found that this representation produces results superior to
low-order hemispherical harmonics.
Another representation of directional irradiance was used by Crytek in
the game Far Cry [887]. Crytek refers to the maps used in its representation
as dot3 lightmaps. The Crytek representation consists of an average light
direction in tangent space, an average light color, and a scalar directionality
factor. The directionality factor expresses to what extent the incoming light
varies in direction. It works to weaken the effect of the n · l cosine term
when the incoming light is scattered over the hemisphere.
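One plausible way to evaluate such an entry (this is a reconstruction for illustration, not Crytek's published shader) is to let the directionality factor blend between a fully directional n · l term and direction-independent lighting:

```python
# Hypothetical sketch of evaluating a dot3 lightmap entry: average light
# direction, average color, and a scalar directionality factor. The blend
# below is an assumption, not Crytek's actual formulation.

def dot3_lightmap_shade(normal, avg_dir, avg_color, directionality):
    n_dot_l = max(sum(n * l for n, l in zip(normal, avg_dir)), 0.0)
    # directionality = 1: pure n.l lighting; 0: no directional variation.
    weight = (1.0 - directionality) + directionality * n_dot_l
    return tuple(c * weight for c in avg_color)
```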
Vector irradiance (discussed in Section 8.2) can also be used as a di-
rectional irradiance representation for prelighting. During the prelighting
¹² The formulation given in the GDC 2004 presentation [848] is incorrect; the form in
Equation 9.38 is from a SIGGRAPH 2007 presentation [440].