Absorption changes the amount of light (and possibly its spectral distribution), but not its direction of propagation. Any discontinuities in
the substance, such as air bubbles, foreign particles, density variations, or
structural changes, may cause the light to be scattered. Unlike absorption,
scattering changes the direction, but not the amount, of light. Scattering
inside a solid object is called subsurface scattering. The visual effect of
scattering in air, fog, or smoke is often simply referred to as “fog” in real-
time rendering, though the more correct term is atmospheric scattering.
Atmospheric scattering is discussed in Section 10.15.
In some cases, the scale of scattering is extremely small. Scattered light
is re-emitted from the surface very close to its original point of entry. This
means that the subsurface scattering can be modeled via a BRDF (see
Section 7.5.4). In other cases, the scattering occurs over a distance larger
than a pixel, and its global nature is apparent. To render such effects,
special methods must be used.
9.7.1 Subsurface Scattering Theory
Figure 9.54 shows light being scattered through an object. Scattering
causes incoming light to take many different paths through the object.
Since it is impractical to simulate each photon separately (even for offline
rendering), the problem must be solved probabilistically, by integrating
over possible paths, or by approximating such an integral.
Figure 9.54. Light scattering through an object. Initially the light transmitted into the
object travels in the refraction direction, but scattering causes it to change direction
repeatedly until it leaves the material. The length of the path through the material
determines the percentage of light lost to absorption.
Besides scattering, light traveling through the material also undergoes absorption. The
absorption obeys an exponential decay law with respect to the total travel
distance through the material (see Section 9.4). Scattering behaves simi-
larly. The probability of the light not being scattered obeys an exponential
decay law with distance.
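
As a concrete illustration of these two decay laws, the following C++ sketch evaluates the fraction of light that survives absorption over a path of length d, with a separate absorption coefficient per color channel, along with the probability of traveling that distance without being scattered. The coefficient names (sigmaA, sigmaS) and the choice of a spectrally uniform scattering coefficient are illustrative assumptions, not measured material data.

#include <cmath>

// Illustrative per-channel quantity; the values fed in are placeholders,
// not measured material data.
struct Rgb { float r, g, b; };

// Fraction of light surviving absorption after traveling distance d
// (in the same units as the coefficients): exponential decay per channel.
Rgb absorptionTransmittance(const Rgb& sigmaA, float d) {
    return { std::exp(-sigmaA.r * d),
             std::exp(-sigmaA.g * d),
             std::exp(-sigmaA.b * d) };
}

// Probability that light travels distance d without being scattered.
// The scattering coefficient is taken to be spectrally uniform here,
// as the text notes is typical for most materials.
float unscatteredProbability(float sigmaS, float d) {
    return std::exp(-sigmaS * d);
}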
The absorption decay constants are often spectrally variant (have differ-
ent values for R, G, and B). In contrast, the scattering probability constants
usually do not have a strong dependence on wavelength. That said, in cer-
tain cases, the discontinuities causing the scattering are on the order of a
light wavelength or smaller. In these circumstances, the scattering proba-
bility does have a significant dependence on wavelength. Scattering from
individual air molecules is an example. Blue light is scattered more than
red light, which causes the blue color of the daytime sky. A similar effect
causes the blue colors often found in bird feathers.
One important factor that distinguishes the various light paths shown
in Figure 9.54 is the number of scattering events. For some paths, the light
leaves the material after being scattered once; for others, the light is scat-
tered twice, three times, or more. Scattering paths are commonly grouped
into single scattering and multiple scattering paths. Different rendering
techniques are often used for each group.
9.7.2 Wrap Lighting
For many solid materials, the distances between scattering events are short
enough that single scattering can be approximated via a BRDF. Also, for
some materials, single scattering is a relatively weak part of the total scat-
tering effect, and multiple scattering predominates—skin is a notable ex-
ample. For these reasons, many subsurface scattering rendering techniques
focus on simulating multiple scattering.
Perhaps the simplest of these is wrap lighting [139]. Wrap lighting was
discussed on page 294 as an approximation of area light sources. When
used to approximate subsurface scattering, it can be useful to add a color
shift [447]. This accounts for the partial absorption of light traveling
through the material. For example, when rendering skin, a red color shift
could be used.
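
As a minimal sketch of this idea, the code below evaluates a wrap-lighting diffuse term with a per-channel wrap amount, so that one channel (red, for skin) wraps further past the terminator than the others and produces the color shift described above. The wrap values and the bare Lambertian setup are placeholder assumptions, not a production skin shader; n and l are assumed normalized.

#include <algorithm>
#include <cmath>

struct Vec3 { float x, y, z; };
struct Rgb  { float r, g, b; };

float dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Wrap lighting: a wrap of 0 reproduces the standard clamped n.l term, while
// larger values let the lit region extend past the terminator. Giving red a
// larger wrap than green or blue yields a reddish transition zone.
Rgb wrapDiffuse(const Vec3& n, const Vec3& l, const Rgb& wrap) {
    float ndotl = dot(n, l);
    auto wrapTerm = [ndotl](float w) {
        return std::max((ndotl + w) / (1.0f + w), 0.0f);
    };
    return { wrapTerm(wrap.r), wrapTerm(wrap.g), wrapTerm(wrap.b) };
}

For example, a wrap of {0.4f, 0.15f, 0.1f} gives a soft, reddened terminator; such values are tuned by eye rather than measured.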
When used in this way, wrap lighting attempts to model the effect of
multiple scattering on the shading of curved surfaces. The “leakage” of light
from adjacent points into the currently shaded point softens the transition
area from light to dark where the surface curves away from the light source.
Kolchin [683] points out that this effect depends on surface curvature, and
he derives a physically based version. Although the derived expression is
somewhat expensive to evaluate, the ideas behind it are useful.
9.7.3 Normal Blurring
Stam [1211] points out that multiple scattering can be modeled as a dif-
fusion process. Jensen et al. [607] further develop this idea to derive an
analytical BSSRDF model.⁹ The diffusion process has a spatial blurring effect on the outgoing radiance.
This blurring affects only diffuse reflectance. Specular reflectance occurs
at the material surface and is unaffected by subsurface scattering. Since
normal maps often encode small-scale variation, a useful trick for subsurface
scattering is to apply normal maps to only the specular reflectance [431].
The smooth, unperturbed normal is used for the diffuse reflectance. Since
there is no added cost, it is often worthwhile to apply this technique when
using other subsurface scattering methods.
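
The trick is simple enough to show directly. The sketch below, assuming a Blinn-Phong-style specular term purely for illustration, drives the diffuse term with the smooth interpolated normal and only the specular term with the normal fetched from the normal map.

#include <algorithm>
#include <cmath>

struct Vec3 { float x, y, z; };

float dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

float shade(const Vec3& smoothNormal,  // interpolated vertex normal
            const Vec3& mappedNormal,  // perturbed normal from the normal map
            const Vec3& lightDir,
            const Vec3& halfVec,
            float specPower) {
    // Diffuse: the unperturbed normal, mimicking the blurring that small-scale
    // subsurface scattering applies to diffuse reflectance.
    float diffuse = std::max(dot(smoothNormal, lightDir), 0.0f);
    // Specular: the mapped normal, since specular reflection occurs at the
    // surface and keeps the small-scale detail.
    float specular = std::pow(std::max(dot(mappedNormal, halfVec), 0.0f), specPower);
    return diffuse + specular; // weight and tint as the material requires
}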
For many materials, multiple scattering occurs over a relatively small
distance. Skin is an important example, where most scattering takes place
over a distance of a few millimeters. For such materials, the trick of not
perturbing the diffuse shading normal may be sufficient by itself. Ma et
al. [804] extend this method, based on measured data. They measured
reflected light from scattering objects and found that while the specular
reflectance is based on the geometric surface normals, subsurface scatter-
ing makes diffuse reflectance behave as if it uses blurred surface normals.
Furthermore, the amount of blurring can vary over the visible spectrum.
They propose a real-time shading technique using independently acquired
normal maps for the specular reflectance and for the R, G and B channels
of the diffuse reflectance [166]. Since these diffuse normal maps typically
resemble blurred versions of the specular map, it is straightforward to mod-
ify this technique to use a single normal map, while adjusting the mipmap
level. This adjustment should be performed similarly to the adjustment of
environment map mipmap levels discussed on page 310.
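
A hedged sketch of the single-normal-map variant follows. The sampleNormalLod() helper, stubbed here so the code is self-contained, stands in for whatever mip-biased normal-map fetch an engine provides, and the per-channel mip biases are illustrative rather than derived from measured data.

struct Vec2 { float x, y; };
struct Vec3 { float x, y, z; };

// Hypothetical stand-in for an engine's mip-biased normal-map fetch; stubbed
// with a flat tangent-space normal so the sketch compiles on its own.
Vec3 sampleNormalLod(const Vec2& /*uv*/, float /*lod*/) {
    return {0.0f, 0.0f, 1.0f};
}

struct DiffuseNormals { Vec3 r, g, b; };

// The specular term would sample the map at the base LOD; each diffuse color
// channel samples a blurrier mip level. Red scatters farthest in skin, so it
// receives the largest bias.
DiffuseNormals fetchDiffuseNormals(const Vec2& uv, float baseLod) {
    return { sampleNormalLod(uv, baseLod + 3.0f),   // R
             sampleNormalLod(uv, baseLod + 2.0f),   // G
             sampleNormalLod(uv, baseLod + 1.5f) }; // B
}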
9.7.4 Texture Space Diffusion
Blurring the diffuse normals accounts for some visual effects of multiple
scattering, but not for others, such as softened shadow edges. Borshukov
and Lewis [128, 129] popularized the concept of texture space diffusion.¹⁰ They formalize the idea of multiple scattering as a blurring process. First,
the surface irradiance (diffuse lighting) is rendered into a texture. This is
done by using texture coordinates as positions for rasterization (the real
positions are interpolated separately for use in shading). This texture is
blurred, and then used for diffuse shading when rendering.
⁹ The BSSRDF is a generalization of the BRDF for the case of global subsurface scattering [932].
¹⁰ This idea was introduced by Lensch et al. [757] as part of a different technique, but the version presented by Borshukov and Lewis has been the most influential.
Figure 9.55. Texture space multilayer diffusion. Six different blurs are combined using
RGB weights. The final image is the result of this linear combination, plus a specular
term. (Images courtesy NVIDIA Corporation.)
The shape and size of the filter depend on the material, and often on the wavelength,
as well. For example, for skin, the R channel is filtered with a wider
filter than G or B, causing reddening near shadow edges. The correct
filter for simulating subsurface scattering in most materials has a narrow
spike in the center, and a wide, shallow base. This technique was first
presented for use in offline rendering, but real-time GPU implementations
were soon proposed by NVIDIA [246, 247, 248, 447] and ATI [431, 432, 593,
1106]. The presentations by d’Eon et al. [246, 247, 248] represent the most
complete treatment of this technique so far, including support for complex
filters mimicking the effect of multi-layered subsurface structure. Donner
and Jensen [273] show that such structures produce the most realistic skin
renderings. The full system presented by d’Eon produces excellent results,
but is quite expensive, requiring a large number of blurring passes (see
Figure 9.55). However, it can easily be scaled back to increase performance,
at the cost of some realism.
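
As an illustration of the final combination step, the sketch below blends several progressively wider blurred copies of the irradiance texture using per-channel weights, in the spirit of d'Eon et al.'s multilayer approach: the red channel receives relatively more of the wide blurs than green or blue. The fixed count of six blurs matches Figure 9.55, but the weights themselves are placeholders, not the published skin profile.

#include <array>
#include <cstddef>

struct Rgb { float r, g, b; };

// blurredIrradiance[0] holds the narrowest blur, blurredIrradiance[5] the
// widest; one texel's worth of each blurred texture is passed in.
Rgb combineBlurs(const std::array<Rgb, 6>& blurredIrradiance,
                 const std::array<Rgb, 6>& weights) {
    Rgb sum{0.0f, 0.0f, 0.0f};
    for (std::size_t i = 0; i < blurredIrradiance.size(); ++i) {
        sum.r += weights[i].r * blurredIrradiance[i].r;
        sum.g += weights[i].g * blurredIrradiance[i].g;
        sum.b += weights[i].b * blurredIrradiance[i].b;
    }
    return sum; // modulate by the diffuse color and add the specular term afterward
}

The per-channel weights are normally chosen so that each channel's weights sum to one, keeping the overall diffuse level unchanged while redistributing it spatially.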
9.7.5 Depth-Map Techniques
The techniques discussed so far model scattering over only relatively small
distances. Other techniques are needed for materials exhibiting large-scale
scattering. Many of these focus on large-scale single scattering, which is
easier to model than large-scale multiple scattering.
The ideal simulation for large-scale single scattering can be seen on the
left side of Figure 9.56. The light paths change direction on entering and
exiting the object, due to refraction. The effects of all the paths need to be
summed to shade a single surface point.
Figure 9.56. On the left, the ideal situation, in which light refracts when entering the
object, then all scattering contributions that would properly refract upon leaving the
object are computed. The middle shows a computationally simpler situation, in which
the rays refract only on exit. The right shows a much simpler, and therefore faster,
approximation, where only a single ray is considered.
Absorption also needs to be taken into account—the amount of absorption in a path depends on its length
inside the material. Computing all these refracted rays for a single shaded
point is expensive even for offline renderers, so the refraction on entering
the material is usually ignored, and only the change in direction on exiting
the material is taken into account [607]. This approximation is shown in
the center of Figure 9.56. Since the rays cast are always in the direction of
the light, Hery [547, 548] points out that light space depth maps (typically
used for shadowing) can be used instead of ray casting. Multiple points
(shown in yellow) on the refracted view ray are sampled, and a lookup into
the light space depth map, or shadow map, is performed for each one. The
result can be projected to get the position of the red intersection point. The
sum of the distances from red to yellow and yellow to blue points is used
to determine the absorption. For media that scatter light anisotropically,
the scattering angle also affects the amount of scattered light.
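
The sketch below shows the distance bookkeeping for a single sample point in a Hery-style depth-map approach: the light-space depth map supplies the distance from the light's entry point to the sample (red to yellow in Figure 9.56), the remaining distance from the sample to the shaded point along the refracted view ray (yellow to blue) is added, and the total drives exponential absorption. The names and the per-channel extinction coefficients are illustrative; contributions from all samples along the ray would be summed, and a phase-function term added for anisotropically scattering media.

#include <cmath>

struct Rgb { float r, g, b; };

Rgb singleScatterSample(float entryToSampleDist, // red -> yellow, from the depth map
                        float sampleToExitDist,  // yellow -> blue, along the view ray
                        const Rgb& sigmaT,       // per-channel extinction coefficients
                        const Rgb& lightColor) {
    // Exponential falloff with the total path length inside the material.
    float d = entryToSampleDist + sampleToExitDist;
    return { lightColor.r * std::exp(-sigmaT.r * d),
             lightColor.g * std::exp(-sigmaT.g * d),
             lightColor.b * std::exp(-sigmaT.b * d) };
}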
Performing depth map lookups is faster than ray casting, but the multi-
ple samples required make Hery’s method too slow for most real-time ren-
dering applications. Green [447] proposes a faster approximation, shown
on the right side of Figure 9.56. Instead of multiple samples along the
refracted ray, a single depth map lookup is performed at the shaded point.
Although this method is somewhat nonphysical, its results can be con-
vincing. One problem is that details on the back side of the object can
show through, since every change in object thickness will directly affect
the shaded color. Despite this, Green’s approximation is effective enough
to be used by Pixar for films such as Ratatouille [460]. Pixar refers to
this technique as Gummi Lights. Another problem (shared with Hery’s
implementation, but not Pixar’s) is that the depth map should not con-
tain multiple objects, or highly nonconvex objects. This is because it is