Figure 10.35. On the left an accumulation buffer rendering is visualized. The red set
of pixels represents an object moving four pixels to the right over a single frame. The
results for six pixels over the five frames are averaged to get the final, correct result at
the bottom. On the right, an image and x direction velocity buffer are generated at time
0.5 (the y velocity buffer values are all zeroes, since there is no vertical movement). The
velocity buffer is then used to determine how the image buffer is sampled. For example,
the green annotations show how the velocity buffer speed of 4 causes the sampling kernel
to be four pixels wide. For simplicity of visualization, five samples, one per pixel, are
taken and averaged. In reality, the samples usually do not align on pixel centers, but
rather are taken at equal intervals along the width of the kernel. Note how the three
background pixels with a velocity of 0 will take all five samples from the same location.
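As a concrete illustration of the gather step the figure describes, here is a minimal C++ sketch, assuming the color and velocity buffers are plain arrays and using nearest-pixel samples for brevity (a real implementation would take samples at equal, possibly subpixel, intervals along the kernel):

```cpp
#include <algorithm>
#include <vector>
#include <glm/glm.hpp>

// Blur one pixel by averaging samples taken along its screen-space
// velocity, as stored in the velocity buffer (image-space motion blur).
glm::vec3 MotionBlurPixel(const std::vector<glm::vec3>& image,
                          const std::vector<glm::vec2>& velocity,
                          int width, int height, int x, int y,
                          int numSamples = 5)
{
    glm::vec2 v = velocity[y * width + x];   // pixels moved over one frame
    glm::vec3 sum(0.0f);
    for (int i = 0; i < numSamples; ++i) {
        // Equal steps across the kernel, centered on the pixel.
        float t = float(i) / float(numSamples - 1) - 0.5f;
        int sx = std::clamp(int(x + t * v.x + 0.5f), 0, width - 1);
        int sy = std::clamp(int(y + t * v.y + 0.5f), 0, height - 1);
        sum += image[sy * width + sx];
    }
    return sum / float(numSamples);
}
```

Note that a pixel with zero velocity averages the same sample five times, leaving it unblurred, as in the figure.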
provide motion blur by stretching the model itself. For example, imagine
a ball moving across the screen. A blurred version of the ball will look
like a cylinder with two round caps [1243]. To create this type of object,
triangle vertices facing along the velocity vector (i.e., where the vertex nor-
mal and velocity vector dot product is greater than zero) are moved half a
frame forward, and vertices facing away are moved half a frame backward.
This effectively cuts the ball into two hemispheres, with stretched triangles
joining the two halves, so forming the “blurred” object representation.
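A minimal sketch of this vertex-stretching step, using C++ with the glm math library; doing the work on the CPU is an illustrative simplification, as this would normally be done in a vertex shader:

```cpp
#include <vector>
#include <glm/glm.hpp>

struct Vertex {
    glm::vec3 position;
    glm::vec3 normal;
};

// Stretch a mesh along its per-frame velocity so that it covers the area
// it sweeps during one frame. Vertices whose normals face along the
// velocity are pushed half a frame forward; the rest are pulled half a
// frame backward.
void StretchAlongVelocity(std::vector<Vertex>& vertices,
                          const glm::vec3& velocityPerFrame)
{
    const glm::vec3 halfStep = 0.5f * velocityPerFrame;
    for (Vertex& v : vertices) {
        if (glm::dot(v.normal, velocityPerFrame) > 0.0f)
            v.position += halfStep;   // leading hemisphere
        else
            v.position -= halfStep;   // trailing hemisphere
    }
}
```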
The stretched representation itself does not form a high-quality blurred
image, especially when the object is textured. However, a velocity buffer
built using this model has the effect of permitting sampling off the edges
of the object in a smooth fashion. Unfortunately, there is a side effect that
causes a different artifact: Now the background near the object is blurred,
even though it is not moving. Ideally we want to have just the object
blurred and the background remain steady.
This problem is similar to the depth-of-field artifacts discussed in the
previous section, in which per pixel blurring caused different depths’ results
to intermingle incorrectly. One solution is also similar: Compute the ob-
ject’s motion blur separately. The idea is to perform the same process, but
against a black background with a fully transparent alpha channel. Alpha
is also sampled by LIC when generating the blurred object. The resulting
image is a blurred object with soft, semitransparent edges. This image
is then composited into the scene, where the semitransparent blur of the
object will be blended with the sharp background. This technique is used
in the DirectX 10 version of Lost Planet, for example [1025]. Note that no
alpha storage is needed for making background objects blurry [402].

Figure 10.36. Radial blurring to enhance the feeling of motion. (Image from Assassin's
Creed, courtesy of Ubisoft.)
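As a sketch of the compositing step described above, assuming the blurred object was rendered over a black, fully transparent background (so its color is effectively premultiplied by alpha), the standard over operator blends it with the sharp background:

```cpp
#include <glm/glm.hpp>

// Composite one pixel of the blurred object (rendered over a black,
// fully transparent background, so its color is premultiplied by alpha)
// onto the sharp background image using the "over" operator.
glm::vec3 CompositeBlurredObject(const glm::vec4& blurredObject, // rgb premultiplied, a = coverage
                                 const glm::vec3& background)
{
    return glm::vec3(blurredObject) + (1.0f - blurredObject.a) * background;
}
```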
Motion blur is simpler for static objects that are blurring due to camera
motion, as no explicit velocity buffer is necessary. Rosado [1082] describes
using the previous frame’s camera view matrices to compute velocity on
the fly. The idea is to transform a pixel’s screen space and depth back to
a world space location, then transform this world point using the previous
frame’s camera to a screen location. The difference between these screen-
space locations is the velocity vector, which is used to blur the image for
that pixel. If what is desired is the suggestion of motion as the camera
moves, a fixed effect such as a radial blur can be applied to any image.
Figure 10.36 shows an example.
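A sketch of the per-pixel velocity computation, assuming glm, a [0, 1] depth convention, and access to the current frame's inverse view-projection matrix and the previous frame's view-projection matrix (the names and conventions here are illustrative, not Rosado's exact formulation):

```cpp
#include <glm/glm.hpp>

// Given a pixel's normalized device coordinates (x, y in [-1, 1]) and its
// stored depth, reconstruct the world-space position with the current
// inverse view-projection matrix, reproject it with the previous frame's
// view-projection matrix, and return the screen-space velocity.
glm::vec2 ScreenSpaceVelocity(glm::vec2 ndcXY, float depth,
                              const glm::mat4& invViewProjCurrent,
                              const glm::mat4& viewProjPrevious)
{
    // Current clip-space position; depth assumed to already be in [0, 1].
    glm::vec4 clipNow(ndcXY, depth, 1.0f);
    glm::vec4 world = invViewProjCurrent * clipNow;
    world /= world.w;                        // back to world space

    glm::vec4 clipPrev = viewProjPrevious * world;
    glm::vec2 ndcPrev = glm::vec2(clipPrev) / clipPrev.w;

    return ndcXY - ndcPrev;                  // blur direction and length for this pixel
}
```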
There are various optimizations and improvements that can be done
for motion blur computations. Hargreaves [504] presents a method of mo-
tion blurring textures by using sets of preblurred images. Mitchell [880]
discusses motion-blurring cubic environment maps for a given direction of
motion. Loviscach [261, 798] uses the GPU’s anisotropic texture sampling
hardware to compute LICs efficiently. Composited objects can be rendered
at quarter-screen size, both to save on pixel processing and to filter out
sampling noise [1025].
10.15 Fog
Within the fixed-function pipeline, fog is a simple atmospheric effect that
is performed at the end of the rendering pipeline, affecting the fragment
just before it is sent to the screen. Shader programs can perform more
elaborate atmospheric effects. Fog can be used for several purposes. First,
it increases the level of realism and drama; see Figure 10.37. Second, since
the fog effect increases with the distance from the viewer, it helps the
viewer of a scene determine how far away objects are located. For this
reason, the effect is sometimes called depth cueing. Third, it can be used in
conjunction with culling objects by the far view plane. If the fog is set up
so that objects located near the far plane are not visible due to thick fog,
then objects that go out of the view frustum through the far plane appear
to fade away into the fog. Without fog, the objects would be seen to be
sliced by the far plane.
The color of the fog is denoted $\mathbf{c}_f$ (which the user selects), and the
fog factor is called $f \in [0, 1]$, which decreases with the distance from the
viewer. Assume that the color of a shaded surface is $\mathbf{c}_s$; then the final color
of the pixel, $\mathbf{c}_p$, is determined by

$$\mathbf{c}_p = f\mathbf{c}_s + (1 - f)\mathbf{c}_f. \qquad (10.13)$$

Figure 10.37. Fog used to accentuate a mood. (Image courtesy of NVIDIA Corporation.)
Note that f is somewhat nonintuitive in this presentation; as f decreases,
the effect of the fog increases. This is how OpenGL and DirectX present
the equation, but another way to describe it is with $f' = 1 - f$. The main
advantage of the approach presented here is that the various equations used
to generate $f$ are simplified. These equations follow.
Linear fog has a fog factor that decreases linearly with the depth from
the viewer. For this purpose, there are two user-defined scalars, $z_{\mathrm{start}}$ and
$z_{\mathrm{end}}$, that determine where the fog is to start and end (i.e., become fully
foggy) along the viewer's z-axis. If $z_p$ is the z-value (depth from the viewer)
of the pixel where fog is to be computed, then the linear fog factor is

$$f = \frac{z_{\mathrm{end}} - z_p}{z_{\mathrm{end}} - z_{\mathrm{start}}}. \qquad (10.14)$$
There are also two sorts of fog that fall off exponentially, as shown in
Equations 10.15 and 10.16. These are called exponential fog:

$$f = e^{-d_f z_p}, \qquad (10.15)$$

and squared exponential fog:

$$f = e^{-(d_f z_p)^2}. \qquad (10.16)$$

The scalar $d_f$ is a parameter that is used to control the density of the
fog. After the fog factor, $f$, has been computed, it is clamped to $[0, 1]$,
and Equation 10.13 is applied to calculate the final value of the pixel.
Examples of what the fog fall-off curves look like for linear fog and for the
two exponential fog factors appear in Figure 10.38.

Figure 10.38. Curves for fog fall-off: linear, exponential, and squared-exponential, using
various densities.

Of these functions, the exponential fall-off is physically justifiable, as it is
derived from the Beer-Lambert Law, presented in Section 9.4. Also known as
Beer's Law, it states that the intensity of the outgoing light is diminished
exponentially with distance.
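The fog factors and the blend of Equation 10.13 translate directly into code; here is a minimal sketch (function and parameter names are illustrative):

```cpp
#include <cmath>
#include <glm/glm.hpp>

// Linear fog, Equation 10.14.
float LinearFogFactor(float zp, float zStart, float zEnd)
{
    return (zEnd - zp) / (zEnd - zStart);
}

// Exponential fog, Equation 10.15.
float ExpFogFactor(float zp, float density)
{
    return std::exp(-density * zp);
}

// Squared exponential fog, Equation 10.16.
float ExpSquaredFogFactor(float zp, float density)
{
    float dz = density * zp;
    return std::exp(-dz * dz);
}

// Clamp the fog factor and apply Equation 10.13.
glm::vec3 ApplyFog(float f, const glm::vec3& surfaceColor, const glm::vec3& fogColor)
{
    f = glm::clamp(f, 0.0f, 1.0f);
    return f * surfaceColor + (1.0f - f) * fogColor;
}
```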
A table (typically stored as a one-dimensional texture) is sometimes
used in implementing these fog functions in GPUs. That is, for each depth,
a fog factor f is computed and stored in advance. When the fog factor at
a given depth is needed, the fog factor is read from the table (or linearly
interpolated from the two nearest table entries). Any values can be put into
the fog table, not just those in the equations above. This allows interesting
rendering styles in which the fog effect can vary in any manner desired [256].
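A sketch of such a table, built here on the CPU and sampled with linear interpolation; in practice the table would be uploaded as a one-dimensional texture and the GPU's texture filtering would do the interpolation:

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// Precompute a fog-factor table over [0, maxDepth]; any fall-off function
// can be baked in, not just the standard equations.
std::vector<float> BuildFogTable(int entries, float maxDepth, float density)
{
    std::vector<float> table(entries);
    for (int i = 0; i < entries; ++i) {
        float depth = maxDepth * float(i) / float(entries - 1);
        table[i] = std::exp(-density * depth);   // exponential fog, for example
    }
    return table;
}

// Look up the fog factor for a depth, linearly interpolating between the
// two nearest table entries (as a texture sampler would).
float LookupFogFactor(const std::vector<float>& table, float depth, float maxDepth)
{
    float t = std::clamp(depth / maxDepth, 0.0f, 1.0f) * float(table.size() - 1);
    int i0 = int(t);
    int i1 = std::min(i0 + 1, int(table.size()) - 1);
    float frac = t - float(i0);
    return table[i0] * (1.0f - frac) + table[i1] * frac;
}
```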
In theory, that is all there is to the fog effect: The color of a pixel is
changed as a function of its depth. However, there are a few simplifying
assumptions that are used in some real-time systems that can affect the
quality of the output.
First, fog can be applied on a vertex level or a pixel level [261]. Applying
it on the vertex level means that the fog effect is computed as part of the
illumination equation and the computed color is interpolated across the
polygon. Pixel-level fog is computed using the depth stored at each pixel.
Pixel-level fog usually gives a better result, though at the cost of extra
computation overall.
The fog factor equations use a value along the viewer’s z-axis to com-
pute their effect. For a perspective view, the z-values are computed in a
nonlinear fashion (see Section 18.1.2). Using these z-values directly in the
fog-factor equations gives results that do not follow the actual intent of the
equations. Using the z-depth and converting back to a distance in linear
space gives a more physically correct result.
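As a sketch of that conversion, assuming a standard perspective projection that maps view-space depth to a [0, 1] z-buffer value (the exact formula depends on the projection and depth conventions used):

```cpp
// Convert a nonlinear [0, 1] z-buffer value back to a linear distance along
// the view direction, for a perspective projection with the given near and
// far plane distances.
float LinearizeDepth(float zBuffer, float nearPlane, float farPlane)
{
    return (nearPlane * farPlane) /
           (farPlane - zBuffer * (farPlane - nearPlane));
}
```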
Another simplifying assumption is that the z-depth is used as the depth
for computing the fog effect. This is called plane-based fog. Using the z-
Figure 10.39. Use of z-depth versus radial fog. On the left is one view of two objects,
using view-axis-based fog. In the middle, the view has simply been rotated, but in the
rotation, the fog now encompasses object 2. On the right, we see the effect of radial fog,
which will not vary when the viewer rotates.
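A sketch of the difference, assuming the pixel's view-space position is available: plane-based fog uses only the distance along the view axis, while radial fog uses the Euclidean distance to the viewer, which is unchanged when the camera rotates:

```cpp
#include <glm/glm.hpp>

// Plane-based fog depth: distance along the view axis only.
float PlaneFogDepth(const glm::vec3& viewSpacePos)
{
    return -viewSpacePos.z;   // assuming a right-handed view space looking down -z
}

// Radial fog depth: true Euclidean distance from the viewer, invariant
// under camera rotation.
float RadialFogDepth(const glm::vec3& viewSpacePos)
{
    return glm::length(viewSpacePos);
}
```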