486 10. Image-Based Effects
blending usually saturates a color and goes to white, which is just what is
desired. An example is shown in Figure 10.28. Variants are possible, e.g.,
the previous frame’s results can also be added to the current frame, giving
animated objects a streaking glow [602].
10.13 Depth of Field
Within the field of photography there is a range where objects are in focus.
Objects outside of this range are blurry; the farther outside, the blurrier
they appear. In photography, the amount of blur is determined by the
aperture size and by the object's distance from the plane of focus. Reducing
the aperture size increases the depth of field, but decreases the amount of light forming
the image. A photo taken in an outdoor daytime scene typically has a very
large depth of field because the amount of light is sufficient to allow a small
aperture size. Depth of field narrows considerably inside a poorly lit room.
So one way to control depth of field is to have it tied to tone mapping,
making out-of-focus objects blurrier as the light level decreases. Another
is to permit manual artistic control, changing focus and increasing depth
of field for dramatic effect as desired.
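The relationship between aperture and blur can be made concrete with the thin-lens model, which gives the diameter of the circle of confusion for an out-of-focus object. The sketch below is illustrative only; the function name, parameter choices, and use of meters are assumptions, not from the text:

```python
def circle_of_confusion(obj_dist, focus_dist, focal_len, aperture):
    """Thin-lens circle-of-confusion diameter (all distances in meters).

    aperture is the lens diameter, i.e., focal_len / f-number.
    Objects exactly at focus_dist yield zero blur; halving the aperture
    halves the blur diameter, which is why a small aperture gives a
    large depth of field.
    """
    return (aperture * focal_len * abs(obj_dist - focus_dist)
            / (obj_dist * (focus_dist - focal_len)))

# Example: a 50 mm lens focused at 2 m, object at 4 m.
wide = circle_of_confusion(4.0, 2.0, 0.05, 0.05 / 1.8)  # f/1.8, bright room
narrow = circle_of_confusion(4.0, 2.0, 0.05, 0.05 / 8.0)  # f/8, daylight
```

Comparing `wide` and `narrow` shows the daylight photograph's smaller aperture producing a much smaller blur circle, i.e., a deeper depth of field.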
The accumulation buffer can be used to simulate depth of field [474].
See Figure 10.29. By varying the view and keeping the point of focus fixed,
objects will be rendered blurrier relative to their distance from this focal
point. However, as with most accumulation buffer effects, this method
comes at a high cost of multiple renderings per image. That said, it does
converge to the correct answer. The rest of this section discusses faster
image-based techniques, though by their nature these methods can achieve
only a limited degree of realism.
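The accumulation-buffer scheme can be sketched as jittering the eye position over the lens aperture while keeping the focal point fixed, then averaging the resulting renders. The disk-sampling helper and the `render` callback below are hypothetical stand-ins for a real renderer's view setup and frame output:

```python
import math
import random

def lens_samples(n, lens_radius, seed=0):
    """Uniformly distributed offsets on a disk of radius lens_radius.

    Each offset displaces the eye; the view matrix built from it must keep
    the focal point fixed, so only surfaces at the focal distance project
    to the same pixels in every view.
    """
    rng = random.Random(seed)
    samples = []
    for _ in range(n):
        r = lens_radius * math.sqrt(rng.random())  # sqrt for uniform area
        theta = 2.0 * math.pi * rng.random()
        samples.append((r * math.cos(theta), r * math.sin(theta)))
    return samples

def accumulate(render, offsets):
    """Average the images from all jittered views.

    render(dx, dy) is a hypothetical callback returning one image as a
    flat list of pixel values; the average of many such renders converges
    to the correct depth-of-field result.
    """
    images = [render(dx, dy) for dx, dy in offsets]
    n = len(images)
    return [sum(px) / n for px in zip(*images)]
```

The cost is one full scene render per sample, which is why this method converges to the right answer but is expensive in practice.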
The accumulation buffer technique of shifting the view location pro-
vides a reasonable mental model of what should be recorded at each pixel.
Surfaces can be classified into three zones: those at the distance of the focal
point, those beyond, and those closer. For surfaces at the focal distance,
each pixel shows an area in sharp focus, as all the accumulated images have
approximately the same result. A pixel “viewing” surfaces out of focus is
a blend of all surfaces seen in the different views.
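The three-zone classification can be written down directly; the zone names and the boundary parameters below are illustrative assumptions, since where the "in focus" band begins and ends depends on the chosen blur tolerance:

```python
def depth_zone(depth, focus_near, focus_far):
    """Classify a per-pixel depth into the three zones described above:
    closer than the focal region, within it, or beyond it."""
    if depth < focus_near:
        return "near"
    if depth > focus_far:
        return "far"
    return "focus"

# A depth buffer could be split into three layer masks this way, one per zone.
zones = [depth_zone(d, 1.8, 2.4) for d in (0.5, 2.0, 9.0)]
```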
One limited solution to this problem is to create separate image lay-
ers. Render one image of just the objects in focus, one of the objects
beyond, and one of the objects closer. This can be done by simply mov-
ing the near/far clipping plane locations. The two out-of-focus images
are blurred, and then all three images are composited together in back-to-front
order [947]. This 2.5-dimensional approach, so called because
two-dimensional images are given depths and combined, provides a reasonable
result under some circumstances. The method breaks down when