Figure 8.7. Highlights on smooth objects are sharp reflections of the light source shape.
On the left, this appearance has been approximated by thresholding the highlight value
of a Blinn-Phong shader. On the right, the same object rendered with an unmodified
Blinn-Phong shader for comparison. (Image courtesy of Larry Gritz.)
In principle, the reflectance equation does not distinguish between light
arriving directly from a light source and indirect light that has been scat-
tered from the sky or objects in the scene. All incoming directions have
radiance, and the reflectance equation integrates over them all. However, in
practice, direct light is usually distinguished by relatively small solid angles
with high radiance values, and indirect light tends to diffusely cover the
rest of the hemisphere with moderate to low radiance values. This provides
good practical reasons to handle the two separately. This is even true for
offline rendering systems—the performance advantages of using separate
techniques tailored to direct and indirect light are too great to ignore.
8.3 Ambient Light
Ambient light is the simplest model of indirect light, where the indirect
radiance does not vary with direction and has a constant value $L_A$. Even
such a simple model of indirect light improves visual quality significantly. A
scene with no indirect light appears highly unrealistic. Objects in shadow
or facing away from the light in such a scene would be completely black,
which is unlike any scene found in reality. The moonscape in Figure 8.1
comes close, but even in such scenes some indirect light is bouncing from
nearby objects.
The exact effects of ambient light will depend on the BRDF. For Lambertian surfaces, the constant radiance $L_A$ results in a constant contribution to outgoing radiance, regardless of surface normal n or view direction v:
$$L_o(\mathbf{v}) = \frac{\mathbf{c}_{\mathrm{diff}}}{\pi}\, L_A \int_{\Omega} \cos\theta_i \, d\omega_i = \mathbf{c}_{\mathrm{diff}}\, L_A. \qquad (8.18)$$
When shading, this constant outgoing radiance contribution is added to
the contributions from direct light sources:
$$L_o(\mathbf{v}) = \frac{\mathbf{c}_{\mathrm{diff}}}{\pi} \left( \pi L_A + \sum_{k=1}^{n} E_{L_k} \cos\theta_{i_k} \right). \qquad (8.19)$$
For arbitrary BRDFs, the equivalent equation is
$$L_o(\mathbf{v}) = L_A \int_{\Omega} f(\mathbf{l}, \mathbf{v}) \cos\theta_i \, d\omega_i + \sum_{k=1}^{n} f(\mathbf{l}_k, \mathbf{v})\, E_{L_k} \cos\theta_{i_k}. \qquad (8.20)$$
We define the ambient reflectance $R_A(\mathbf{v})$ thus:
$$R_A(\mathbf{v}) = \int_{\Omega} f(\mathbf{l}, \mathbf{v}) \cos\theta_i \, d\omega_i. \qquad (8.21)$$
Like any reflectance quantity, $R_A(\mathbf{v})$ has values between 0 and 1 and may vary over the visible spectrum, so for rendering purposes it is an RGB color. In real-time rendering applications, it is usually assumed to have a view-independent, constant value, referred to as the ambient color $\mathbf{c}_{\mathrm{amb}}$. For Lambertian surfaces, $\mathbf{c}_{\mathrm{amb}}$ is equal to the diffuse color $\mathbf{c}_{\mathrm{diff}}$. For other surface types, $\mathbf{c}_{\mathrm{amb}}$ is usually assumed to be a weighted sum of the diffuse and specular colors [192, 193]. This tends to work quite well in practice, although the Fresnel effect implies that some proportion of white should ideally be mixed in, as well. Using a constant ambient color simplifies Equation 8.20, yielding the ambient term in its most commonly used form:
$$L_o(\mathbf{v}) = \mathbf{c}_{\mathrm{amb}} L_A + \sum_{k=1}^{n} f(\mathbf{l}_k, \mathbf{v})\, E_{L_k} \cos\theta_{i_k}. \qquad (8.22)$$
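To make Equation 8.22 concrete, here is a minimal C++ sketch of shading with a constant ambient term plus a sum over direct light sources. The Vec3 and Light types and the ShadeWithAmbient function are hypothetical helpers invented for this illustration, not part of any particular API; the BRDF f is assumed to be supplied by the caller.

#include <algorithm>
#include <vector>

struct Vec3 { float x, y, z; };

// Small helpers for the sketch (componentwise color math and a dot product).
static Vec3  operator*(const Vec3& a, const Vec3& b) { return {a.x * b.x, a.y * b.y, a.z * b.z}; }
static Vec3  operator*(const Vec3& a, float s)       { return {a.x * s, a.y * s, a.z * s}; }
static Vec3  operator+(const Vec3& a, const Vec3& b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static float dot(const Vec3& a, const Vec3& b)       { return a.x * b.x + a.y * b.y + a.z * b.z; }

// One direct light source: its direction l_k and irradiance E_Lk.
struct Light { Vec3 l; Vec3 irradiance; };

// Equation 8.22: L_o(v) = c_amb * L_A + sum_k f(l_k, v) * E_Lk * cos(theta_ik).
Vec3 ShadeWithAmbient(const Vec3& n, const Vec3& v,
                      const Vec3& cAmb, const Vec3& ambientRadiance,   // c_amb and L_A
                      const std::vector<Light>& lights,
                      Vec3 (*f)(const Vec3& l, const Vec3& v))         // the BRDF
{
    Vec3 Lo = cAmb * ambientRadiance;                       // constant ambient contribution
    for (const Light& light : lights) {
        float cosTheta = std::max(dot(n, light.l), 0.0f);   // clamped cos(theta_ik)
        Lo = Lo + f(light.l, v) * light.irradiance * cosTheta;
    }
    return Lo;
}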
The reflectance equation ignores occlusion—the fact that many surface
points will be blocked from “seeing” some of the incoming directions by
other objects, or other parts of the same object. This reduces realism in
general, but it is particularly noticeable for ambient lighting, which appears
extremely flat when occlusion is ignored. Methods for addressing this will
be discussed in Sections 9.2 and 9.10.1.
8.4 Environment Mapping
Since the full reflectance equation is expensive to compute, real-time ren-
dering tends to utilize simplifying assumptions. In the previous section, we
discussed the assumption (constant incoming radiance) behind the ambi-
ent light model. In this section, the simplifying assumption relates to the
BRDF, rather than the incoming lighting.
An optically flat surface or mirror reflects an incoming ray of light into one direction, the light's reflection direction $\mathbf{r}_i$ (see Section 7.5.3). Similarly, the outgoing radiance includes incoming radiance from just one direction, the reflected view vector $\mathbf{r}$. This vector is computed similarly to $\mathbf{r}_i$:

$$\mathbf{r} = 2(\mathbf{n} \cdot \mathbf{v})\mathbf{n} - \mathbf{v}. \qquad (8.23)$$
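As a small sketch, Equation 8.23 maps directly to code. It reuses the hypothetical Vec3 type and dot() helper from the earlier sketch and assumes that n is normalized and that v points from the surface toward the viewer.

// Vector subtraction, added for this sketch.
static Vec3 operator-(const Vec3& a, const Vec3& b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }

// Equation 8.23: r = 2(n . v)n - v.
Vec3 ReflectedViewVector(const Vec3& n, const Vec3& v)
{
    return n * (2.0f * dot(n, v)) - v;
}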
The reflectance equation for mirrors is greatly simplified:
$$L_o(\mathbf{v}) = R_F(\theta_o)\, L_i(\mathbf{r}), \qquad (8.24)$$
where $R_F$ is the Fresnel term (see Section 7.5.3). Note that unlike the Fresnel terms in half vector-based BRDFs (which use $\alpha_h$, the angle between the half vector h and l or v), the Fresnel term in Equation 8.24 uses the angle $\theta_o$ between v and the surface normal n. If the Schlick approximation is used for the Fresnel term, mirror surfaces should use Equation 7.33, rather than Equation 7.40 (substituting $\theta_o$ for $\theta_i$; the two are equal here).
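As a sketch, the Schlick approximation evaluated at theta_o (using n · v rather than the half-vector angle) might look like the following. The helper name and the RF0 parameter, the reflectance at normal incidence, are assumptions of this illustration; the vector operators come from the earlier sketches.

#include <algorithm>

// Schlick's approximation evaluated at theta_o, the angle between n and v:
// RF(theta_o) ~= RF0 + (1 - RF0) * (1 - cos(theta_o))^5.
Vec3 FresnelSchlick(const Vec3& RF0, float nDotV)
{
    float m  = 1.0f - std::max(nDotV, 0.0f);   // 1 - cos(theta_o), clamped
    float m5 = m * m * m * m * m;
    Vec3 white = {1.0f, 1.0f, 1.0f};
    return RF0 + (white - RF0) * m5;            // blends toward white at grazing angles
}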
If the incoming radiance $L_i$ is only dependent on direction, it can be
stored in a two-dimensional table. This enables efficiently lighting a mirror-
like surface of any shape with an arbitrary incoming radiance distribution,
by simply computing r for each point and looking up the radiance in the
table. Such a table is called an environment map, and its use for rendering is
called environment mapping (EM). Environment mapping was introduced
by Blinn and Newell [96]. Its operation is conceptualized in Figure 8.8.
As mentioned above, the basic assumption behind environment mapping is that the incoming radiance $L_i$ is only dependent on direction. This
requires that the objects and lights being reflected are far away, and that
the reflector does not reflect itself. Since the environment map is a two-
dimensional table of radiance values, it can be interpreted as an image.
The steps of an EM algorithm are:
• Generate or load a two-dimensional image representing the environment.
• For each pixel that contains a reflective object, compute the normal at the location on the surface of the object.
• Compute the reflected view vector from the view vector and the normal.
Figure 8.8. Environment mapping. The viewer sees an object, and the reflected view
vector r is computed from v and n. The reflected view vector accesses the environment’s
representation. The access information is computed by using some projector function
to convert the reflected view vector’s (x, y, z) to (typically) a (u, v) value, which is used
to retrieve texture data.
• Use the reflected view vector to compute an index into the environment map that represents the incoming radiance in the reflected view direction.
• Use the texel data from the environment map as incoming radiance in Equation 8.24.
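A compact C++ sketch of these steps for a mirror surface follows, combining Equations 8.23 and 8.24 with the Fresnel helper above. A latitude-longitude projector function is assumed here purely for illustration; the projector functions used in practice are described in the remainder of this chapter. The EnvironmentMap structure and its SampleRadiance callback are hypothetical.

#include <cmath>

// Hypothetical environment map: SampleRadiance is assumed to perform a
// (bilinearly filtered) texture fetch at the given (u, v) coordinate.
struct EnvironmentMap {
    Vec3 (*SampleRadiance)(float u, float v);
};

// Mirror shading, Equation 8.24: L_o(v) = RF(theta_o) * L_i(r).
// Assumes n and v are normalized, so r is unit length as well.
Vec3 ShadeMirror(const Vec3& n, const Vec3& v, const Vec3& RF0, const EnvironmentMap& env)
{
    // Compute the reflected view vector from the view vector and the normal.
    Vec3 r = ReflectedViewVector(n, v);

    // Projector function: convert r's (x, y, z) into a (u, v) texture coordinate.
    // A simple latitude-longitude mapping is used here as a stand-in.
    const float kPi = 3.14159265358979f;
    float tu = (std::atan2(r.x, r.z) + kPi) / (2.0f * kPi);
    float tv = std::acos(std::fmin(std::fmax(r.y, -1.0f), 1.0f)) / kPi;

    // Use the texel as the incoming radiance L_i(r), scaled by the Fresnel term.
    Vec3 Li = env.SampleRadiance(tu, tv);
    return FresnelSchlick(RF0, dot(n, v)) * Li;
}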
A potential stumbling block of EM is worth mentioning. Flat surfaces
usually do not work well when environment mapping is used. The problem
with a flat surface is that the rays that reflect off of it usually do not vary
by more than a few degrees. This results in a small part of the EM texture’s
being mapped onto a relatively large surface. Normally, the individual tex-
els of the texture become visible, unless bilinear interpolation is used; even
then, the results do not look good, as a small part of the texture is extremely
magnified. We have also been assuming that perspective projection is being
used. If orthographic viewing is used, the situation is much worse for flat
surfaces. In this case, all the reflected view vectors are the same, and so
the surface will get a constant color from some single texel. Other real-time
techniques, such as planar reflections (Section 9.3.1), may be of more use for
flat surfaces.
The term reflection mapping is sometimes used interchangeably with
environment mapping. However, this term has a specific meaning. When
the surface’s material properties are used to modify an existing environment
map, a reflection map texture is generated. A simple example: To make a
red, shiny sphere, the color red can be multiplied with an environment map
to create a reflection map. Reflection mapping techniques are discussed in
depth in Section 8.5.
Figure 8.9. The character’s armor uses normal mapping combined with environment
mapping, giving a shiny, bumpy surface that reflects the environment. (Image from
“Hellgate: London” courtesy of Flagship Studios, Inc.)
Unlike the colors and shader properties stored in other commonly used
textures, the radiance values stored in environment maps have a high dy-
namic range. Environment maps should usually be stored using high dy-
namic range texture formats. For this reason, they tend to take up more
space than other textures, especially given the difficulty of compressing
high dynamic range values (see Section 6.2.6).
The combination of environment mapping with normal mapping is par-
ticularly effective, yielding rich visuals (see Figure 8.9). It is also straight-
forward to implement—the normal used to compute the reflected view vector is simply perturbed first by the normal map. This combination of
features is also historically important—a restricted form of bumped envi-
ronment mapping was the first use of a dependent texture read in consumer-
level graphics hardware, giving rise to this ability as a part of the pixel
shader.
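A brief sketch of this combination, again with hypothetical helpers: the tangent-space normal fetched from the normal map is transformed to world space and normalized before the mirror shading above is applied.

#include <cmath>

// Normalization helper for this sketch.
static Vec3 Normalize(const Vec3& a) { return a * (1.0f / std::sqrt(dot(a, a))); }

// Environment mapping with a normal map: the shading normal is perturbed first,
// then used to compute the reflected view vector. SampleNormalMap is assumed to
// return a tangent-space normal already remapped from [0,1] to [-1,1].
Vec3 ShadeBumpyMirror(const Vec3& geometricN, const Vec3& tangent, const Vec3& bitangent,
                      const Vec3& v, const Vec3& RF0, const EnvironmentMap& env,
                      Vec3 (*SampleNormalMap)(float u, float v), float u, float vTex)
{
    Vec3 tn = SampleNormalMap(u, vTex);
    Vec3 n  = Normalize(tangent * tn.x + bitangent * tn.y + geometricN * tn.z);
    return ShadeMirror(n, v, RF0, env);
}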
There are a variety of projector functions that map the reflected view
vector into one or more textures. Blinn and Newell’s algorithm was the first
function ever used, and sphere mapping was the first to see use in graphics
accelerators. Greene’s cubic environment mapping technique overcomes
many of the limitations of these early approaches. To conclude, Heidrich
and Seidel’s parabolic mapping method is discussed.