9.5. Refractions 397
Figure 9.49. Refraction and reflection by a glass ball of a cubic environment map, with
the map itself used as a skybox background. (Image courtesy of NVIDIA Corporation.)
is needed to properly warp an image generated from a refracted viewpoint.
In a similar vein, Vlachos [1306] presents the shears necessary to render
the refraction effect of a fish tank.
Section 9.3.1 gave some techniques where the scene behind a refractor
was used as a limited-angle environment map. A more general way to give
an impression of refraction is to generate a cubic environment map from
the refracting object’s position. The refracting object is then rendered,
accessing this EM by using the refraction direction computed for the front-
facing surfaces. An example is shown in Figure 9.49. These techniques give
the impression of refraction, but usually bear little resemblance to physical
reality. The refraction ray gets redirected when it enters the transparent
solid, but the ray never gets bent the second time, when it is supposed to
leave this object; this backface never comes into play. This flaw sometimes
does not matter, because the eye is forgiving about what the correct appearance
should be.
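The refraction direction used to look up the environment map follows from Snell’s law. As an illustrative sketch (plain Python rather than shader code; `eta` is the ratio n1/n2 of refractive indices, following the GLSL `refract` convention):

```python
import math

def refract(I, N, eta):
    """Refract unit incident direction I at unit surface normal N.

    eta is the ratio n1/n2 of refractive indices. Returns None when
    total internal reflection occurs.
    """
    cos_i = -sum(i * n for i, n in zip(I, N))     # cosine of incidence angle
    k = 1.0 - eta * eta * (1.0 - cos_i * cos_i)   # Snell's law discriminant
    if k < 0.0:
        return None                               # total internal reflection
    s = eta * cos_i - math.sqrt(k)
    return tuple(eta * i + s * n for i, n in zip(I, N))
```

In a shader this is the built-in refract() function; the resulting direction indexes the cubic environment map directly.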
Oliveira and Brauwers [965] improve upon this simple approximation
by taking into account refraction by the backfaces. In their scheme, the
backfaces are rendered and the depths and normals stored. The frontfaces
are then rendered and rays are refracted from these faces. The idea is to
find where on the stored backface data these refracted rays fall.

Figure 9.50. On the left, the transmitter refracts both nearby objects and the surrounding
skybox [1386]. On the right, caustics are generated via hierarchical maps similar in
nature to shadow maps [1388]. (Images courtesy of Chris Wyman, University of Iowa.)

Once the backface texel is found where the ray exits, the backface’s data at that point
properly refracts the ray, which is then used to access the environment map.
The hard part is to find this backface pixel. The procedure they use to trace
the rays is in the spirit of relief mapping (Section 6.7.4). The backface z-
depths are treated like a heightfield, and each ray walks through this buffer
until an intersection is found. Depth peeling can be used for multiple
refractions. The main drawback is that total internal reflection cannot be
handled. Using Heckbert’s regular expression notation [519], described at
the beginning of this chapter, the paths simulated are then L(D|S)SSE:
The eye sees a refractive surface, a backface then also refracts as the ray
leaves, and some surface in an environment map is then seen through the
transmitter.
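The core search can be sketched as follows. This is a simplified CPU version under assumed conventions (a normalized texture space with a callable backface depth buffer), not the authors’ actual shader:

```python
def march_to_backface(origin, direction, depth_buffer, step=0.01, max_t=10.0):
    """Walk a refracted ray through a backface depth buffer (heightfield).

    origin and direction are (x, y, z) in a normalized texture space where
    (x, y) index the buffer and z is depth. depth_buffer(x, y) returns the
    stored backface depth. Returns the parameter t where the ray first
    reaches the backface, or None if it leaves the stored data.
    """
    t = 0.0
    while t <= max_t:
        x = origin[0] + t * direction[0]
        y = origin[1] + t * direction[1]
        z = origin[2] + t * direction[2]
        if not (0.0 <= x <= 1.0 and 0.0 <= y <= 1.0):
            return None                      # ray left the buffer: no hit
        if z >= depth_buffer(x, y):          # ray passed the heightfield
            return t
        t += step
    return None
```

A real implementation refines the hit with a binary search between the last two samples, as in relief mapping, rather than accepting the fixed-step result.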
Davis and Wyman [229] take this relief mapping approach a step further,
storing both back and frontfaces as separate heightfield textures. Nearby
objects behind the transparent object can be converted into color and depth
maps so that the refracted rays treat these as local objects. An example
is shown in Figure 9.50. In addition, rays can have multiple bounces, and
total internal reflection can be handled. This gives a refractive light path of
L(D|S)S + SE. A limitation of all of these image-space refraction schemes
is that if a part of the model is rendered offscreen, the clipped data cannot
refract (since it does not exist).
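Detecting total internal reflection amounts to checking the sign of the discriminant in Snell’s law; when it goes negative, the ray is reflected instead of refracted. A sketch in the same illustrative style (plain Python, not the authors’ code):

```python
import math

def reflect(I, N):
    """Mirror-reflect unit direction I about unit normal N."""
    d = sum(i * n for i, n in zip(I, N))
    return tuple(i - 2.0 * d * n for i, n in zip(I, N))

def refract_or_reflect(I, N, eta):
    """Refract I through a surface with ratio eta = n1/n2, falling back
    to mirror reflection on total internal reflection.

    Returns (direction, tir_flag).
    """
    cos_i = -sum(i * n for i, n in zip(I, N))
    k = 1.0 - eta * eta * (1.0 - cos_i * cos_i)
    if k < 0.0:                               # beyond the critical angle
        return reflect(I, N), True
    s = eta * cos_i - math.sqrt(k)
    return tuple(eta * i + s * n for i, n in zip(I, N)), False
```

For glass (eta = 1.5 when leaving the medium), incidence angles beyond roughly 42 degrees take the reflection branch, which is what allows rays to bounce internally before finally exiting.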
Simpler forms of refraction ray tracing can be used directly for basic
geometric objects. For example, Vlachos and Mitchell [1303] generate the
refraction ray and use ray tracing in a pixel shader program to find which
wall of the water’s container is hit. See Figure 9.51.
Figure 9.51. A pool of water. The walls of the pool appear warped, due to refraction
through the water. (Image courtesy of Alex Vlachos, ATI Technologies Inc.)
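For a box-shaped container, the ray/wall test reduces to the standard slab method against an axis-aligned bounding box, which is cheap enough to evaluate per pixel. An illustrative CPU sketch (not the shader from the paper):

```python
def hit_wall(origin, direction, box_min, box_max, eps=1e-12):
    """Find which wall of an axis-aligned box a ray starting inside hits.

    Returns (t, axis, side), where axis is 0/1/2 for x/y/z and side is
    "min" or "max". Assumes origin lies inside the box.
    """
    best = None
    for axis in range(3):
        d = direction[axis]
        if abs(d) < eps:
            continue                          # ray parallel to these walls
        for side, bound in (("min", box_min[axis]), ("max", box_max[axis])):
            t = (bound - origin[axis]) / d
            if t > eps and (best is None or t < best[0]):
                best = (t, axis, side)
    return best
```

Knowing which wall is hit, and where, lets the shader sample that wall’s texture at the intersection point.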
9.6 Caustics
The type of refraction discussed in the previous section is that of radiance
coming from some surface, traveling through a transmitter, then reaching
the eye. In practice, the computation is reversed: The eye sees the surface
of the transmitter, then the transmitter’s surfaces determine what location
and direction are used to sample the radiance. Another phenomenon can
occur, called a caustic, in which light is focused by reflective or refractive
surfaces. A classic example for reflection is the cardioid caustic seen inside
a coffee mug. Refractive caustics are often more noticeable: light focused
through a crystal ornament, a lens, or a glass of water is an everyday
example. See Figure 9.52.
There are two major classes of caustic rendering techniques: image
space and object space. The first class uses images to capture the effect
of the caustic volume [222, 584, 702, 1153, 1234, 1387]. As an example,
Wyman [1388] presents a three-pass algorithm for this approach. In the
first, the scene is rendered from the view of the light, similar to shadow
mapping. Each texel where a refractive or reflective object is seen will cause
the light passing through to be diverted and go elsewhere. This diverted
illumination is tracked; once a diffuse object is hit, the texel location is
recorded as having received illumination. This image with depth is called
the photon buffer. The second step is to treat each location that received
light as a point object. These points represent where light has accumu-
lated. By treating them as small spheres that drop off in intensity, known
as splats, these primitives are then transformed with a vertex shader
program to the eye’s viewpoint and rendered to a caustic map. This caustic
map is then projected onto the scene, along with a shadow map, and is used
to illuminate the pixels seen.

Figure 9.52. Real-world caustics from reflection and refraction.

A problem with this approach is that only so
many photons can be tracked and deposited, and results can exhibit seri-
ous artifacts due to undersampling. Wyman extends previous techniques
by using a mipmap-based approach to treat the photons in a more efficient
hierarchical manner. By analyzing the amount of convergence or diver-
gence of the photons, various contributions can be represented by smaller
or larger splats on different levels of the caustic’s mipmap. The results of
this approach can be seen in Figure 9.50.
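The splatting step can be illustrated with a simple fixed-radius version. Wyman’s method adapts the splat size per photon and works across mipmap levels; the sketch below uses a single level, a linear falloff, and hypothetical names throughout:

```python
import math

def splat_photons(photons, size, radius=2):
    """Accumulate point photons into a size x size caustic map.

    Each photon is ((x, y), energy), with x and y in [0, 1). The energy
    is spread over a small disc with linear falloff, normalized so each
    photon's total contribution equals its energy.
    """
    grid = [[0.0] * size for _ in range(size)]
    for (x, y), energy in photons:
        cx, cy = x * size, y * size
        # gather falloff weights over the splat footprint, then normalize
        weights = {}
        for j in range(int(cy) - radius, int(cy) + radius + 1):
            for i in range(int(cx) - radius, int(cx) + radius + 1):
                if 0 <= i < size and 0 <= j < size:
                    d = math.hypot(i + 0.5 - cx, j + 0.5 - cy)
                    w = max(0.0, 1.0 - d / radius)
                    if w > 0.0:
                        weights[(i, j)] = w
        total = sum(weights.values())
        if total > 0.0:
            for (i, j), w in weights.items():
                grid[j][i] += energy * w / total
    return grid
```

Because each photon’s energy is normalized over its footprint, enlarging the splat radius blurs the caustic without brightening or darkening it, which is the property the hierarchical scheme exploits when it moves divergent photons to coarser levels.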
Ernst et al. [319] provide an object-space algorithm, using the idea of
a warped caustic volume. In this approach, objects are tagged as able
to receive caustics, and able to generate them (i.e., they are specular or
refractive). Each vertex on each generator has a normal. The normal
and the light direction are used to compute the refracted (or reflected) ray
direction for that vertex. For each triangle in the generator, these three
refracted (or reflected) rays will form the sides of a caustic volume. Note
that these sides are in general warped, i.e., they are the surface formed
by two points from the generating triangle, and the corresponding two
points on the “receiver” triangle. This makes the sides of the volume into
bilinear patches. The volume represents how the light converges or diverges
as a function of the distance from the generating triangle. These caustic
volumes will transport the reflected and refracted light.
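A side of such a volume, spanned by an edge of the generating triangle and the corresponding edge swept out along the two refracted rays, can be evaluated as a bilinear patch. A sketch with illustrative names:

```python
def bilinear_patch(p0, p1, q0, q1, u, v):
    """Point on the bilinear patch spanned by a generator edge (p0, p1)
    and the corresponding swept edge (q0, q1) farther along the rays.

    u moves along the edge; v moves from the generating triangle (v=0)
    toward the far end of the volume (v=1).
    """
    def lerp(a, b, t):
        return tuple(x + t * (y - x) for x, y in zip(a, b))
    return lerp(lerp(p0, p1, u), lerp(q0, q1, u), v)
```

Unlike a planar quad, the four corners of such a side are in general not coplanar, which is why the inside/outside test against a caustic volume is more involved than against a frustum.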
Figure 9.53. The caustic volumes technique can generate various types of caustics, such
as those from a metallic ring or from the ocean’s surface [319].
Caustics are computed in two passes. First the receivers are all rendered
to establish a screen-filling texture that holds their world positions. Then
each of the caustic volumes is rendered in turn. This rendering process
consists of drawing the bounding box for a caustic volume in order to force
evaluation of a set of pixels. The bounding box also limits the number of
pixels that need to be evaluated. At each pixel, a fragment shader program
tests whether the receiver’s location is inside the caustic volume. If it is,
the caustic’s intensity at the point is computed and accumulated. Caustic
volumes can converge on a location, giving it a high intensity. This process
results in light being added where it focuses.
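The accumulated intensity follows from energy conservation: the flux through the generating triangle is spread over the triangle swept by the three rays at the receiver’s distance, so intensity scales with the ratio of the two areas. A sketch of this idea (illustrative, not the paper’s exact formulation):

```python
import math

def caustic_intensity(tri, dirs, t, eps=1e-9):
    """Relative caustic intensity at distance t along the three refracted
    rays of a generator triangle: generator area over swept-triangle area.

    tri holds the three vertex positions, dirs the three ray directions.
    """
    def area(a, b, c):
        u = tuple(b[i] - a[i] for i in range(3))
        v = tuple(c[i] - a[i] for i in range(3))
        cx = u[1] * v[2] - u[2] * v[1]
        cy = u[2] * v[0] - u[0] * v[2]
        cz = u[0] * v[1] - u[1] * v[0]
        return 0.5 * math.sqrt(cx * cx + cy * cy + cz * cz)

    moved = [tuple(p[i] + t * d[i] for i in range(3)) for p, d in zip(tri, dirs)]
    return area(*tri) / max(area(*moved), eps)
```

Converging rays shrink the swept triangle and drive the ratio well above one, which is exactly the bright focus of a caustic; the eps clamp merely avoids a division by zero at the focal point.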
See Figure 9.53. The right-hand image shows a caustic pattern due to
ocean water. Such caustic patterns from large bodies of water are a special
case that is often handled by particular shaders tuned for the task. For
example, Guardado and Sánchez-Crespo [464] cast a ray from the ocean
bottom straight up to the ocean surface. The surface normal is used to
generate a refraction ray direction. The more this ray’s direction deviates,
the less sunlight is assumed to reach the location. This simple shader is
not physically correct—it traces rays in the wrong direction and assumes
the sun is directly overhead—but rapidly provides plausible results.
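The flavor of this shader can be sketched as follows (plain Python with assumed parameters, an eta for water and an ad hoc sharpness exponent, rather than the authors’ actual code):

```python
import math

def ocean_light(surface_normal, eta=1.33, sharpness=4.0):
    """Sunlight attenuation at a sea-floor point, in the spirit of the
    Guardado and Sánchez-Crespo shader: refract a ray cast straight up
    through the surface and darken as it deviates from vertical.

    Assumes the sun is directly overhead; returns a value in [0, 1].
    """
    up = (0.0, 0.0, 1.0)
    M = tuple(-n for n in surface_normal)       # normal facing the up-ray
    cos_i = -sum(u * m for u, m in zip(up, M))
    k = 1.0 - eta * eta * (1.0 - cos_i * cos_i)
    if k < 0.0:
        return 0.0                              # total internal reflection
    s = eta * cos_i - math.sqrt(k)
    r = tuple(eta * u + s * m for u, m in zip(up, M))
    return max(0.0, r[2]) ** sharpness
```

A flat surface returns full brightness, while a tilted normal bends the refracted ray away from vertical and darkens the floor, producing the animated light patterns as the surface waves.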
9.7 Global Subsurface Scattering
When light travels inside a substance, some of it may be absorbed (see Sec-
tion 9.4). Absorption changes the light’s amount (and possibly its spectral