10.15. Fog 501
The idea behind these volumetric fog techniques is to compute how
much of some volume of space is between the object seen and the viewer,
and then use this amount to change the object’s color by the fog color [294].
In effect, a ray-object intersection is performed at each pixel, from the eye
to the level where the fog begins. The distance from this intersection to
the underlying surface beyond it, or to where the fog itself ends, is used to
compute the fog effect.
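Once the thickness of fog along the view ray is known, the surface color can be blended toward the fog color. A minimal sketch of that blend, assuming a hypothetical `fog_blend` helper and an exponential falloff (one common choice of fog density function, not the only one):

```python
import math

def fog_blend(surface_color, fog_color, thickness, density=0.5):
    """Blend a surface color toward the fog color by the amount of
    fog along the view ray (illustrative sketch; `density` and the
    exponential falloff are assumptions for the example)."""
    # Fraction of the surface color that survives the fog.
    f = math.exp(-density * thickness)
    return tuple(f * s + (1.0 - f) * g
                 for s, g in zip(surface_color, fog_color))
```

With zero thickness the surface color is unchanged; as thickness grows, the result approaches the fog color.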
For volumes of space such as a beam of fog illuminated by a headlight,
one idea is to render the volume so as to record the maximum and minimum
depths at each pixel. These depths are then retrieved and used to compute
the fog’s thickness. James [601] gives a more general technique, using
additive blending to sum the depths for objects with concavities. The
backfaces add to one buffer, the frontfaces to another, and the difference
between the two sums is the thickness of the fog. The fog volume must
be watertight, where each pixel will be covered by the same number of
frontfacing and backfacing triangles. For objects in the fog, which can hide
backfaces, a shader is used to record the object’s depth instead. While
there are some limitations to this technique, it is fast and practical for
many situations.
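The arithmetic behind this additive-blending technique can be sketched per pixel: for a watertight volume, summing all backface depths and all frontface depths and taking the difference gives the total fog thickness along the ray, even when concavities produce several front/back pairs. The function below is an illustrative model of that sum, not the original shader code:

```python
def fog_thickness(front_depths, back_depths):
    """Total fog thickness at one pixel for a watertight fog volume:
    the sum of backface depths minus the sum of frontface depths.
    Each list holds the depths of all faces covering this pixel."""
    # Watertight means equal counts of frontfaces and backfaces.
    assert len(front_depths) == len(back_depths), "volume must be watertight"
    return sum(back_depths) - sum(front_depths)
```

For a concave volume crossed twice, e.g. entries at depths 1 and 5 and exits at 3 and 8, the thickness is (3 + 8) − (1 + 5) = 5, matching the two traversed segments of lengths 2 and 3.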
There are other related methods. Grün and Spoerl [461] present an efficient method that computes entry and exit points for the fog volume at the vertices of the backfaces of a containing mesh and interpolates these values.
ues. Oat and Scheuermann [956] give a clever single-pass method method
of computing both the closest entry point and farthest exit point in a vol-
ume. They save the distance, d, to a surface in one channel, and 1 − d in
another channel. By setting the alpha blending mode to save the minimum
value found, after the volume is rendered, the first channel will have the
closest value and the second channel will have the farthest value, encoded
as 1 − d.
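The trick works because a MIN blend applied to 1 − d is equivalent to a MAX over d. A small sketch of the idea, assuming depths normalized to [0, 1] and a hypothetical `min_blend_extremes` helper in place of the actual blend hardware:

```python
def min_blend_extremes(depths):
    """Closest entry and farthest exit of a volume via one MIN-blend
    pass, in the spirit of Oat and Scheuermann: each fragment writes
    (d, 1 - d); MIN-blending the first channel keeps the nearest
    depth, and 1 minus the MIN of the second channel recovers the
    farthest depth. Depths are assumed normalized to [0, 1]."""
    channel0 = min(depths)                 # MIN blend over d
    channel1 = min(1.0 - d for d in depths)  # MIN blend over 1 - d
    nearest = channel0
    farthest = 1.0 - channel1
    return nearest, farthest
```

Both extremes thus come out of a single pass with one blend mode, rather than separate MIN and MAX passes.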
It is also possible to have the fog itself be affected by light and shadow.
That is, the fog itself is made of droplets that can catch and reflect the
light. James [603] uses shadow volumes and depth peeling to compute the
thickness of each layer of fog that is illuminated. Tatarchuk [1246] simulates
volumetric lighting by using a noise texture on the surface of a cone light’s
extents, which fades off along the silhouette.
There are some limitations to using the computed thickness to render
shafts of light. Complex shadowing can be expensive to compute, the
density of the fog must be uniform, and effects such as illumination from
stained glass are difficult to reproduce. Dobashi et al. [265] present a
method of rendering atmospheric effects using a series of sampling planes of
this volume. These sampling planes are perpendicular to the view direction
and are rendered from back to front. The shaft of light is rendered where it
overlaps each plane. These slices blend to form a volume. A variant of this