494 10. Image-Based Effects
[Figure 10.35 image: the left panel shows an object rendered at t = 0, 0.25, 0.5, 0.75, and 1, with the five images averaged into the final result; the right panel shows the image buffer, the x velocity buffer for the object and background, the resulting blurred image, and the LIC sampling pattern.]
Figure 10.35. On the left an accumulation buffer rendering is visualized. The red set
of pixels represents an object moving four pixels to the right over a single frame. The
results for six pixels over the five frames are averaged to get the final, correct result at
the bottom. On the right, an image and x direction velocity buffer are generated at time
0.5 (the y velocity buffer values are all zeroes, since there is no vertical movement). The
velocity buffer is then used to determine how the image buffer is sampled. For example,
the green annotations show how the velocity buffer speed of 4 causes the sampling kernel
to be four pixels wide. For simplicity of visualization, five samples, one per pixel, are
taken and averaged. In reality, the samples usually do not align on pixel centers, but
rather are taken at equal intervals along the width of the kernel. Note how the three
background pixels with a velocity of 0 will take all five samples from the same location.
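The sampling described in the caption can be sketched in a one-dimensional form. This is an illustrative simplification, not code from the text: `motion_blur_1d`, its parameter names, and the nearest-neighbor clamped fetch are all assumptions made for the sketch.

```python
# Minimal 1-D sketch of velocity-buffer motion blur sampling, as in
# Figure 10.35 (right). All names here are illustrative assumptions.

def motion_blur_1d(image_buf, vel_buf, num_samples=5):
    """Blur each pixel by averaging samples spread across a kernel
    whose width equals that pixel's velocity (in pixels)."""
    width = len(image_buf)
    result = []
    for x in range(width):
        v = vel_buf[x]                       # kernel width in pixels
        total = 0.0
        for i in range(num_samples):
            # Samples at equal intervals along the kernel, centered on x.
            t = i / (num_samples - 1) - 0.5  # ranges -0.5 .. +0.5
            sx = x + t * v
            # Nearest-neighbor fetch, clamped to the buffer edges.
            sx = min(max(int(round(sx)), 0), width - 1)
            total += image_buf[sx]
        result.append(total / num_samples)
    return result
```

Note that a pixel with velocity 0 takes all of its samples from its own location and so is left unchanged, matching the behavior of the background pixels in the figure.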
provide motion blur by stretching the model itself. For example, imagine
a ball moving across the screen. A blurred version of the ball will look
like a cylinder with two round caps [1243]. To create this type of object,
triangle vertices facing along the velocity vector (i.e., where the vertex normal and velocity vector dot product is greater than zero) are moved half a
frame forward, and vertices facing away are moved half a frame backward.
This effectively cuts the ball into two hemispheres, with stretched triangles
joining the two halves, so forming the “blurred” object representation.
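The vertex test and displacement just described can be sketched as follows. The function name, tuple-based vector representation, and `dt` parameter are assumptions made for this illustration, not part of the original technique's description.

```python
# Sketch of velocity-based vertex stretching. Vertices whose normals face
# along the velocity move half a frame forward; those facing away move
# half a frame backward. Names and data layout are illustrative.

def stretch_vertices(positions, normals, velocity, dt=1.0):
    """positions/normals: lists of (x, y, z) tuples; velocity: (x, y, z)
    in units of distance per frame."""
    def dot(a, b):
        return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

    out = []
    for p, n in zip(positions, normals):
        # dot > 0: facing along the velocity, move half a frame forward;
        # otherwise, move half a frame backward.
        s = 0.5 * dt if dot(n, velocity) > 0 else -0.5 * dt
        out.append((p[0] + s * velocity[0],
                    p[1] + s * velocity[1],
                    p[2] + s * velocity[2]))
    return out
```

For a sphere, the vertices on the leading and trailing hemispheres are pushed apart along the velocity, producing the capped-cylinder shape the text describes.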
The stretched representation itself does not form a high-quality blurred
image, especially when the object is textured. However, a velocity buffer
built using this model has the effect of permitting sampling off the edges
of the object in a smooth fashion. Unfortunately, there is a side effect that
causes a different artifact: Now the background near the object is blurred,
even though it is not moving. Ideally we want to have just the object
blurred and the background remain steady.
This problem is similar to the depth-of-field artifacts discussed in the
previous section, in which per-pixel blurring caused results from different depths to intermingle incorrectly. One solution is also similar: Compute the object's motion blur separately. The idea is to perform the same process, but
against a black background with a fully transparent alpha channel. Alpha
is also sampled by LIC when generating the blurred object. The resulting
image is a blurred object with soft, semitransparent edges. This image
is then composited into the scene, where the semitransparent blur of the
object will be blended with the sharp background. This technique is used