depth can cause the unusual effect that the user can see farther into the
fog along the edges of the screen than in the center. A more accurate way
to compute fog is to use the true distance from the viewer to the object.
This is called radial fog, range-based fog, or Euclidean distance fog [534].
Figure 10.39 shows what happens when radial fog is not used. The highest-
quality fog is generated by using pixel-level radial fog.
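To make the distinction concrete, the following sketch contrasts a depth-based fog factor with a radial one; the exponential falloff and the function names are illustrative assumptions rather than a formulation taken from the text.

#include <cmath>

struct Vec3 { float x, y, z; };

// Euclidean distance between two points.
static float Distance(const Vec3& a, const Vec3& b) {
    float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return std::sqrt(dx * dx + dy * dy + dz * dz);
}

// Planar (depth-based) fog uses only the view-space depth of the surface,
// so points toward the screen edges receive too little fog.
float PlanarFogFactor(float viewSpaceDepth, float density) {
    return std::exp(-density * viewSpaceDepth);   // 1 = no fog, 0 = fully fogged
}

// Radial (range-based) fog uses the true distance from the eye to the
// surface, giving a consistent amount of fog across the field of view.
float RadialFogFactor(const Vec3& eye, const Vec3& surfacePos, float density) {
    return std::exp(-density * Distance(eye, surfacePos));
}

The resulting factor f is then used in the usual way to blend the surface color with the fog color, f * surfaceColor + (1 − f) * fogColor.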
The traditional fog factor described here is actually an approximation to
scattering and absorption of light in the atmosphere. The depth cue effect
caused by atmospheric scattering is called aerial perspective and includes
many visual effects not modeled by the somewhat simplistic traditional
fog model. The color and intensity of scattered light change with viewing
angle, sun position, and distance in subtle ways. The color of the sky is a
result of the same phenomena as aerial perspective, and can be thought of
as a limit case.
Hoffman and Preetham [556, 557] cover the physics of atmospheric scat-
tering in detail. They present a model with the simplifying assumption of
constant density of the atmosphere, allowing for a closed-form solution of
the scattering equations and a unified model for sky color and aerial per-
spective. This closed-form solution enables rapid computation on the GPU
for a variety of atmospheric conditions. Although their model produces
good results for aerial perspective, the resulting sky colors are unrealistic,
due to the simplified atmosphere model [1209].
Spörl [1209] presents a GPU implementation of empirical skylight and
aerial perspective models proposed by Preetham et al. [1032]. Compared
to Hoffman and Preetham’s model, the sky colors were found to be signif-
icantly more realistic, for about double the computation cost. However,
only clear skies can be simulated. Although the aerial perspective model is
considerably more costly than Hoffman and Preetham’s model, the visual
results are similar, so Spörl suggests combining his skylight implementation
with Hoffman and Preetham’s aerial perspective model.
O’Neil [967] implements a physically based skylight model proposed
by Nishita et al. [937], which takes the shape of the earth and variation
in atmospheric density into account. The same model is implemented by
Wenzel [1341, 1342]. Since the full Nishita model is very expensive, the
sky is rendered into a low-resolution sky dome texture and the computation
is distributed over many frames. For aerial perspective, Wenzel uses a
simpler model that
is quicker to compute.
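The idea of spreading an expensive sky update over many frames can be sketched as follows; the row-band update scheme, the class and function names, and the placeholder per-texel evaluation are illustrative assumptions, not Wenzel's actual implementation.

#include <cstddef>
#include <vector>

struct RGBA { float r, g, b, a; };

// Placeholder standing in for the expensive per-texel sky scattering evaluation.
RGBA EvaluateSky(std::size_t /*x*/, std::size_t y) {
    float g = static_cast<float>(y) * 0.001f;   // stand-in gradient
    return { 0.4f, 0.6f + g, 1.0f, 1.0f };
}

class SkyDomeCache {
public:
    SkyDomeCache(std::size_t w, std::size_t h) : width(w), height(h), texels(w * h) {}

    // Each frame, recompute only a small band of rows; after height / rowsPerFrame
    // frames the whole low-resolution dome texture has been refreshed once.
    void UpdateSlice(std::size_t rowsPerFrame) {
        for (std::size_t i = 0; i < rowsPerFrame; ++i) {
            std::size_t y = (nextRow + i) % height;
            for (std::size_t x = 0; x < width; ++x)
                texels[y * width + x] = EvaluateSky(x, y);
        }
        nextRow = (nextRow + rowsPerFrame) % height;
    }

private:
    std::size_t width, height;
    std::size_t nextRow = 0;
    std::vector<RGBA> texels;   // stands in for the GPU sky dome texture
};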
Mitchell [885] and Rohleder & Jamrozik [1073] present image-processing
methods using radial blur to create a beams-of-light effect for backlit ob-
jects (e.g., a skyline with the sun in view).
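A minimal sketch of such a radial blur follows, stepping from each pixel toward the light's projected screen position and accumulating progressively dimmer samples; the sample count, decay factor, and single-channel image representation are illustrative assumptions.

#include <cstddef>
#include <vector>

struct Image {
    std::size_t width, height;
    std::vector<float> pixels;   // row-major, single brightness channel

    float At(int x, int y) const {
        if (x < 0 || y < 0 || x >= int(width) || y >= int(height)) return 0.0f;
        return pixels[std::size_t(y) * width + std::size_t(x)];
    }
};

Image RadialBlur(const Image& src, float lightX, float lightY,
                 int numSamples = 32, float decay = 0.95f) {
    Image dst = src;
    for (std::size_t y = 0; y < src.height; ++y) {
        for (std::size_t x = 0; x < src.width; ++x) {
            // Step from this pixel toward the light's screen position,
            // weighting samples less the farther along the path they are.
            float dx = (lightX - float(x)) / float(numSamples);
            float dy = (lightY - float(y)) / float(numSamples);
            float sx = float(x), sy = float(y);
            float sum = 0.0f, weight = 1.0f, totalWeight = 0.0f;
            for (int i = 0; i < numSamples; ++i) {
                sum += weight * src.At(int(sx), int(sy));
                totalWeight += weight;
                weight *= decay;
                sx += dx;
                sy += dy;
            }
            dst.pixels[y * src.width + x] = sum / totalWeight;
        }
    }
    return dst;
}

The blurred brightness is then typically added back onto the scene to give the impression of beams streaming from the backlit source.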
The atmospheric scattering methods discussed so far handle scattering
of light from only directional light sources. Sun et al. [1229] propose an
analytical model for the more difficult case of a local point light source
and give a GPU implementation. The effects of atmospheric scattering on
reflected surface highlights are also handled. Zhou et al. [1408] extend this
approach to inhomogeneous media.

Figure 10.40. Layered fog. The fog is produced by measuring the distance from a plane
at a given height and using it to attenuate the color of the underlying object. (Image
courtesy of Eric Lengyel.)
An area where the medium strongly affects rendering is when viewing
objects underwater. The transparency of coastal water has a transmission
of about (30%, 73%, 63%) in RGB per linear meter [174]. An early, impres-
sive system of using scattering, absorption, and other effects is presented
by Jensen and Golias [608]. They create realistic ocean water effects in real
time by using a variety of techniques, including the Fresnel term, environ-
ment mapping to vary the water color dependent on viewing angle, normal
mapping for the water’s surface, caustics from projecting the surface of
the water to the ocean bottom, simplified volume rendering for godrays,
textured foam, and spray using a particle system. Lanza [729] uses a more
involved approach for godrays that generates and renders shafts of light, a
topic discussed further on in this section.
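For a sense of scale, if the quoted per-meter transmission is assumed to compose multiplicatively with distance (a Beer-Lambert-style model), the fraction of light surviving a given path length through the water can be computed directly.

#include <cmath>
#include <cstdio>

int main() {
    const float perMeter[3] = { 0.30f, 0.73f, 0.63f };   // coastal water, RGB per linear meter
    const float distance = 5.0f;                          // meters of water traversed
    // After d meters, each channel is attenuated to perMeter^d of its original value.
    std::printf("transmission after %.1f m: R=%.4f G=%.4f B=%.4f\n",
                distance,
                std::pow(perMeter[0], distance),
                std::pow(perMeter[1], distance),
                std::pow(perMeter[2], distance));
    // Red is extinguished fastest, which is why distant underwater objects
    // shift toward green and blue.
    return 0;
}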
Other types of fog effects are certainly possible, and many methods
have been explored. Fog can be a localized phenomenon: Swirling wisps of
fog can present their own shape, for example. Such fog can be produced by
overlaying sets of semitransparent billboard images, similar to how clouds
and dust can be represented.
Nuebel [942] provides shader programs for layered fog effects. Lengyel
presents an efficient and robust pixel shader for layered fog [762]. An
example is shown in Figure 10.40. Wenzel [1341, 1342] shows how other
volumes of space can be used and discusses other fog and atmospheric
effects.
The idea behind these volumetric fog techniques is to compute how
much of some volume of space is between the object seen and the viewer,
and then use this amount to change the object’s color by the fog color [294].
In effect, a ray-object intersection is performed at each pixel, from the eye
to the level where the fog begins. The distance from this intersection to
the underlying surface beyond it, or to where the fog itself ends, is used to
compute the fog effect.
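As a concrete illustration for the simplest such volume, a fog layer filling all space below a horizontal plane, the traversed amount can be found by clipping the eye-to-surface segment against that plane. The names and the assumption of uniform fog density below are illustrative.

#include <cmath>

struct Vec3 { float x, y, z; };

// Distance traveled inside a fog layer that fills all space below y = fogTop,
// along the segment from the eye to the visible surface point.
float FogLayerThickness(const Vec3& eye, const Vec3& surface, float fogTop) {
    bool eyeIn  = eye.y     <= fogTop;
    bool surfIn = surface.y <= fogTop;
    if (!eyeIn && !surfIn) return 0.0f;   // both endpoints are above the fog

    float t0 = 0.0f, t1 = 1.0f;           // parametric clip range along the segment
    if (eyeIn != surfIn) {
        // The segment crosses the fog's top plane exactly once.
        float tPlane = (fogTop - eye.y) / (surface.y - eye.y);
        if (eyeIn) t1 = tPlane;           // eye inside the fog, ray exits at tPlane
        else       t0 = tPlane;           // eye above the fog, ray enters at tPlane
    }
    float dx = surface.x - eye.x, dy = surface.y - eye.y, dz = surface.z - eye.z;
    float length = std::sqrt(dx * dx + dy * dy + dz * dz);
    return (t1 - t0) * length;            // portion of the segment inside the fog
}

The returned thickness can then drive an attenuation, for example exp(-density * thickness), used to blend the surface color toward the fog color.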
For volumes of space such as a beam of fog illuminated by a headlight,
one idea is to render the volume so as to record the maximum and minimum
depths at each pixel. These depths are then retrieved and used to compute
the fog’s thickness. James [601] gives a more general technique, using
additive blending to sum the depths for objects with concavities. The
backfaces add to one buffer, the frontfaces to another, and the difference
between the two sums is the thickness of the fog. The fog volume must
be watertight, where each pixel will be covered by the same number of
frontfacing and backfacing triangles. For objects in the fog, which can hide
backfaces, a shader is used to record the object’s depth instead. While
there are some limitations to this technique, it is fast and practical for
many situations.
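The arithmetic behind this depth-sum idea can be shown with a scalar, per-pixel sketch; in a real renderer both sums are accumulated with additive blending into separate buffers, so the containers and function below are only a CPU-side stand-in.

#include <numeric>
#include <vector>

// For one pixel: sum the depths of all backfaces and all frontfaces of the fog
// volume that cover it. For a watertight volume the difference of the two sums
// is the total distance the view ray spends inside the volume, even when the
// volume is concave and the ray enters and exits it several times.
float FogThicknessAtPixel(const std::vector<float>& backfaceDepths,
                          const std::vector<float>& frontfaceDepths) {
    float backSum  = std::accumulate(backfaceDepths.begin(),  backfaceDepths.end(),  0.0f);
    float frontSum = std::accumulate(frontfaceDepths.begin(), frontfaceDepths.end(), 0.0f);
    return backSum - frontSum;
}

If an opaque object inside the fog hides some backfaces, its own depth is accumulated in their place, as described above.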
There are other related methods. Grün and Spoerl [461] present an effi-
cient method that computes entry and exit points for the fog volume at the
vertices of the backfaces of a containing mesh and interpolates these val-
ues. Oat and Scheuermann [956] give a clever single-pass method method
of computing both the closest entry point and farthest exit point in a vol-
ume. They save the distance, d, to a surface in one channel, and 1 − d in
another channel. By setting the alpha blending mode to save the minimum
value found, after the volume is rendered, the first channel will have the
closest value and the second channel will have the farthest value, encoded
as 1 − d.
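A CPU-side simulation of that encoding, assuming a MIN blend operation and distances normalized to [0, 1] (the structure and method names are illustrative):

#include <algorithm>

struct MinBlendTarget {
    float channel0 = 1.0f;   // converges to min(d)     = closest entry distance
    float channel1 = 1.0f;   // converges to min(1 - d) = 1 - farthest exit distance

    // Called once per rendered surface of the volume covering this pixel.
    void WriteSurface(float d) {
        channel0 = std::min(channel0, d);
        channel1 = std::min(channel1, 1.0f - d);
    }

    float ClosestEntry() const { return channel0; }
    float FarthestExit() const { return 1.0f - channel1; }
    // Span between entry and exit, usable as a fog thickness for convex volumes.
    float Thickness()    const { return FarthestExit() - ClosestEntry(); }
};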
It is also possible to have the fog itself be affected by light and shadow.
That is, the fog itself is made of droplets that can catch and reflect the
light. James [603] uses shadow volumes and depth peeling to compute the
thickness of each layer of fog that is illuminated. Tatarchuk [1246] simulates
volumetric lighting by using a noise texture on the surface of a cone light’s
extents, which fades off along the silhouette.
There are some limitations to using the computed thickness to render
shafts of light. Complex shadowing can be expensive to compute, the
density of the fog must be uniform, and effects such as illumination from
stained glass are difficult to reproduce. Dobashi et al. [265] present a
method of rendering atmospheric effects using a series of sampling planes of
this volume. These sampling planes are perpendicular to the view direction
and are rendered from back to front. The shaft of light is rendered where it
overlaps each plane. These slices blend to form a volume. A variant of this
technique is used in the game Crysis. Mitchell [878] also uses a volume-
rendering approach to fog. This type of approach allows different falloff
patterns, complex gobo light projection shapes, and the use of shadow
maps. Rendering of the fog volume is performed by using a layered set of
textures, and this approach is described in the next section.
10.16 Volume Rendering
Volume rendering is concerned with rendering data that is represented by
voxels. “Voxel” is short for “volumetric pixel,” and each voxel represents a
regular volume of space. For example, creating clinical diagnostic images
(such as CT or MRI scans) of a person’s head may create a data set of
256 × 256 × 256 voxels, each location holding one or more values. This
voxel data can be used to form a three-dimensional image. Voxel rendering
can show a solid model, or make various materials (e.g., the skin and skull)
partially or fully transparent. Cutting planes can be used to show only
parts of the model. In addition to its use for visualization in such diverse
fields as medicine and oil prospecting, volume rendering can also produce
photorealistic imagery. For example, Fedkiw et al. [336] simulate the ap-
pearance and movement of smoke by using volume rendering techniques.
There are a wide variety of voxel rendering techniques. For solid objects,
implicit surface techniques can be used to turn voxel samples into polygonal
surfaces [117]. The surface formed by locations with the same value is called
an isosurface. See Section 13.3. For semitransparency, one method that
can be used is splatting [738, 1347]. Each voxel is treated as a volume
of space that is represented by an alpha-blended circular object, called a
splat, that drops off in opacity at its fringe. The idea is that a surface
or volume can be represented by screen-aligned geometry or sprites that,
when rendered together, form a surface [469]. This method of rendering
solid surfaces is discussed further in Section 14.9.
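A splat's footprint can be sketched as a radially symmetric opacity falloff; the Gaussian kernel, radius, and center opacity below are illustrative choices rather than a specific published kernel.

#include <cmath>

// Opacity of a single splat at a given distance from its center, fading
// smoothly toward zero at the fringe of the footprint.
float SplatOpacity(float distanceFromCenter, float splatRadius, float centerAlpha) {
    if (distanceFromCenter >= splatRadius) return 0.0f;    // outside the footprint
    float r = distanceFromCenter / splatRadius;            // 0 at center, 1 at fringe
    return centerAlpha * std::exp(-4.0f * r * r);          // smooth Gaussian-style drop-off
}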
Lacroute and Levoy [709] present a method of treating the voxel data
as a set of two-dimensional image slices, then shearing and warping these
and compositing the resulting images. A common method for volume
rendering makes direct use of the texturing and compositing capabilities
of the GPU by rendering volume slices directly as textured quadrilater-
als [585, 849, 879]. The volume dataset is sampled by a set of equally
spaced slices in layers perpendicular to the view direction. These slice im-
ages are then rendered in sorted order so that alpha compositing works
properly to form the image. OpenGL Volumizer [971] uses this technique
for GPU volume rendering. Figure 10.41 shows a schematic of how this
works, and Figure 10.42 shows examples. Ikits et al. [585] discuss this
technique and related matters in depth. As mentioned in the previous
section, this method is used to render shafts of light using complex volumes [878].
The main drawback is that many slices can be necessary to avoid banding
artifacts, with corresponding costs in terms of fill rate.

Figure 10.41. A volume is rendered by a series of slices parallel to the view plane. Some
slices and their intersection with the volume are shown on the left. The middle shows
the result of rendering just these slices. On the right the result is shown when a large
series of slices are rendered and blended. (Figures courtesy of Christof Rezk-Salama,
University of Siegen, Germany.)

Figure 10.42. On the left, volume visualization mixed with the slicing technique, done
with OpenGL Volumizer using layered textures. Note the reflection of the voxel-based
skull in the floor. On the right, volume visualization done using a ray casting technique
on the GPU. (Left image courtesy of Robert Grzeszczuk, SGI. Right image courtesy
of Natalya Tatarchuk, Game Computing Applications Group, Advanced Micro Devices,
Inc.)
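The blending that makes the sorted slices form a volume is the standard "over" compositing operation applied from the farthest slice to the nearest. A small CPU-side sketch, assuming premultiplied-alpha slice samples (types and names are illustrative):

#include <vector>

struct RGBA { float r, g, b, a; };

// Composite one pixel's slice samples, ordered from farthest to nearest.
RGBA CompositeBackToFront(const std::vector<RGBA>& slicesFarToNear) {
    RGBA dst = { 0.0f, 0.0f, 0.0f, 0.0f };
    for (const RGBA& src : slicesFarToNear) {
        // "Over" blend: the nearer slice partially covers what is behind it.
        dst.r = src.r + (1.0f - src.a) * dst.r;
        dst.g = src.g + (1.0f - src.a) * dst.g;
        dst.b = src.b + (1.0f - src.a) * dst.b;
        dst.a = src.a + (1.0f - src.a) * dst.a;
    }
    return dst;
}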
Instead of slices, Krüger and Westermann [701] developed the idea of
casting rays through the volume using the GPU. Tatarchuk and Shopf [1250]
perform medical imaging with this algorithm; see Figure 10.42. Crane et
al. [205] use this technique for rendering smoke, fire, and water. The ba-
sic idea is that at each pixel, a ray is generated that passes through the
volume, gathering color and transparency information from the volume at
regular intervals along its length. More examples using this technique are
shown in Figure 10.43.
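A minimal front-to-back ray-marching sketch of this idea follows; the volume lookup is a hypothetical procedural blob standing in for a real dataset, and the step count and early-out threshold are illustrative.

#include <cstddef>

struct Vec3 { float x, y, z; };
struct RGBA { float r, g, b, a; };

// Hypothetical volume lookup: a soft spherical blob around the origin,
// returning a premultiplied-alpha sample.
RGBA SampleVolume(const Vec3& p) {
    float d2 = p.x * p.x + p.y * p.y + p.z * p.z;
    float a  = d2 < 1.0f ? 0.05f * (1.0f - d2) : 0.0f;
    return { a, a, a, a };
}

// March one ray through the volume, gathering color and opacity at regular
// intervals until the ray runs out of steps or becomes nearly opaque.
RGBA RayMarch(Vec3 pos, const Vec3& step, std::size_t numSteps) {
    RGBA acc = { 0.0f, 0.0f, 0.0f, 0.0f };
    for (std::size_t i = 0; i < numSteps && acc.a < 0.99f; ++i) {
        RGBA s = SampleVolume(pos);
        // Front-to-back "under" compositing: later samples contribute only
        // through the transparency accumulated so far.
        float t = 1.0f - acc.a;
        acc.r += t * s.r;
        acc.g += t * s.g;
        acc.b += t * s.b;
        acc.a += t * s.a;
        pos.x += step.x;
        pos.y += step.y;
        pos.z += step.z;
    }
    return acc;
}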