Figure 10.27. Lens flare, star glare, and bloom effects, along with depth of field and
motion blur [261]. (Image from “Rthdribl,” by Masaki Kawase.)
back into the normal image. One approach is to identify bright objects and
render just these to the bloom image to be blurred [602]. Another more
common method is to bright-pass filter: any bright pixels are retained, and
all dim pixels are made black, possibly with some blend or scaling at the
transition point [1161, 1206].
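As a concrete illustration, a bright-pass filter reduces to a per-pixel luminance test. The following is a minimal C++ sketch, not code from any of the cited systems; the threshold and the width of the soft transition (the "knee") are hypothetical tuning parameters.

#include <algorithm>

struct Color { float r, g, b; };

// Bright-pass filter: keep pixels above a luminance threshold, fade to
// black below it. The "knee" softens the transition point.
Color brightPass(const Color& c, float threshold = 1.0f, float knee = 0.2f)
{
    // Rec. 709 luminance weights.
    float luma = 0.2126f * c.r + 0.7152f * c.g + 0.0722f * c.b;
    // 0 below (threshold - knee), 1 above threshold, a linear ramp between.
    float t = std::clamp((luma - (threshold - knee)) / knee, 0.0f, 1.0f);
    return { c.r * t, c.g * t, c.b * t };
}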
This bloom image can be rendered at a low resolution, e.g., anywhere
from one-half the width by one-half the height to one-eighth by one-eighth
of the original. Doing so both saves time and helps increase the effect of
filtering; after blurring, bilinear interpolation will increase the area of the
blur when this image is magnified and combined with the original. The blur
process is done using a separable Gaussian filter in two one-dimensional
passes to save on processing costs, as discussed in Section 10.9. Because
the goal is an image that looks overexposed where it is bright, this image’s
colors are scaled as desired and added to the original image.
Figure 10.28. High-dynamic range tone mapping and bloom. The lower image is
produced by using tone mapping on, and adding a post-process bloom to, the original
image [1340]. (Image from Far Cry courtesy of Ubisoft.)
Additive blending usually saturates a color and goes to white, which is just what is
desired. An example is shown in Figure 10.28. Variants are possible, e.g.,
the previous frame’s results can also be added to the current frame, giving
animated objects a streaking glow [602].
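The blur-and-combine step can be sketched as follows. This is a minimal CPU-side illustration, assuming a grayscale image stored as a flat float array; the 5-tap kernel, image layout, and bloomScale strength are illustrative choices, and the downsampling and upsampling described earlier are omitted for brevity.

#include <algorithm>
#include <vector>

// Grayscale shown for brevity; apply per channel for RGB.
using Image = std::vector<float>; // row-major, w * h floats

// One 1D pass of a 5-tap Gaussian; run horizontally then vertically, so
// the cost is 2 * 5 taps per pixel instead of 25 for the full 2D kernel.
Image blur1D(const Image& src, int w, int h, bool horizontal)
{
    const float k[5] = { 1/16.f, 4/16.f, 6/16.f, 4/16.f, 1/16.f };
    Image dst(src.size());
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x) {
            float sum = 0.0f;
            for (int i = -2; i <= 2; ++i) {
                int sx = horizontal ? std::clamp(x + i, 0, w - 1) : x;
                int sy = horizontal ? y : std::clamp(y + i, 0, h - 1);
                sum += k[i + 2] * src[sy * w + sx];
            }
            dst[y * w + x] = sum;
        }
    return dst;
}

// Blur the bright-pass image, then add it (scaled) onto the original frame.
void addBloom(Image& frame, const Image& bright, int w, int h,
              float bloomScale = 0.5f) // hypothetical strength
{
    Image blurred = blur1D(blur1D(bright, w, h, true), w, h, false);
    for (size_t i = 0; i < frame.size(); ++i)
        frame[i] += bloomScale * blurred[i]; // additive blend saturates toward white
}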
10.13 Depth of Field
Within the field of photography there is a range where objects are in focus.
Objects outside of this range are blurry; the further outside, the blurrier.
In photography, this blurriness is determined by the aperture size and the
object's distance from the focal plane. Reducing the aperture
size increases the depth of field, but decreases the amount of light forming
the image. A photo taken in an outdoor daytime scene typically has a very
large depth of field because the amount of light is sufficient to allow a small
aperture size. Depth of field narrows considerably inside a poorly lit room.
So one way to control depth of field is to have it tied to tone mapping,
making out-of-focus objects blurrier as the light level decreases. Another
is to permit manual artistic control, changing focus and increasing depth
of field for dramatic effect as desired.
The accumulation buffer can be used to simulate depth of field [474].
See Figure 10.29. By varying the view and keeping the point of focus fixed,
objects will be rendered blurrier relative to their distance from this focal
point. However, as with most accumulation buffer effects, this method
comes at a high cost of multiple renderings per image. That said, it does
converge to the correct answer. The rest of this section discusses faster
image-based techniques, though by their nature these methods can reach
only a certain level of realism before their limits are reached.
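In code, the accumulation approach is a loop that averages renders taken from slightly shifted eye positions, all aimed at the focal point. The sketch below shows the control flow only; renderScene() is a hypothetical hook standing in for a full render pass, and the ring of jitter offsets is a simplification (a true disk sampling of the aperture is preferable).

#include <cmath>
#include <functional>
#include <vector>

struct Vec3 { float x, y, z; };
using Image = std::vector<float>;

// Accumulation-buffer depth of field: jitter the eye within the aperture,
// keep the look-at point on the focal plane, and average the results.
Image accumulationDoF(std::function<Image(Vec3 eye, Vec3 lookAt)> renderScene,
                      Vec3 eye, Vec3 focalPoint, float aperture, int n)
{
    Image accum;
    for (int i = 0; i < n; ++i) {
        // Offsets on a ring of the aperture, for simplicity; a Poisson or
        // concentric disk mapping gives better results in practice.
        float angle = 6.2831853f * i / n;
        Vec3 jittered = { eye.x + aperture * std::cos(angle),
                          eye.y + aperture * std::sin(angle),
                          eye.z };
        Image img = renderScene(jittered, focalPoint); // view aims at focal point
        if (accum.empty()) accum.assign(img.size(), 0.0f);
        for (size_t p = 0; p < img.size(); ++p) accum[p] += img[p];
    }
    for (float& v : accum) v /= n; // average: converges to the correct answer
    return accum;
}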
The accumulation buffer technique of shifting the view location provides
a reasonable mental model of what should be recorded at each pixel.
Surfaces can be classified into three zones: those at the distance of the focal
point, those beyond, and those closer. For surfaces at the focal distance,
each pixel shows an area in sharp focus, as all the accumulated images have
approximately the same result. A pixel “viewing” surfaces out of focus is
a blend of all surfaces seen in the different views.
One limited solution to this problem is to create separate image layers.
Render one image of just the objects in focus, one of the objects beyond,
and one of the objects closer. This can be done by simply moving the
near/far clipping plane locations. The two out-of-focus images are blurred,
and then all three images are composited together in back-to-front
order [947]. This 2.5-dimensional approach, so called because
two-dimensional images are given depths and combined, provides a
reasonable result under some circumstances.
Figure 10.29. Depth of field via accumulation. The viewer's location is moved a small
amount, keeping the view direction pointing at the focal point, and the images are
accumulated.
The method breaks down when objects span multiple images, going abruptly
from blurry to in focus. Also, all blurry objects have a uniform blurriness,
without variation with distance [245].
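The merge itself is ordinary back-to-front compositing. A minimal per-pixel sketch, assuming premultiplied-alpha RGBA layers that have already been rendered and blurred:

struct Rgba { float r, g, b, a; };

// Standard "over" operator: src composited on top of dst (premultiplied alpha).
Rgba over(const Rgba& src, const Rgba& dst)
{
    float ia = 1.0f - src.a;
    return { src.r + dst.r * ia, src.g + dst.g * ia,
             src.b + dst.b * ia, src.a + dst.a * ia };
}

// Back-to-front merge at one pixel: blurred far layer, sharp in-focus
// layer, then blurred near layer on top.
Rgba compositeDoFLayers(const Rgba& farBlurred, const Rgba& focus,
                        const Rgba& nearBlurred)
{
    Rgba result = over(focus, farBlurred); // focus layer over far layer
    return over(nearBlurred, result);      // near layer over both
}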
Another way to view the process is to think of how depth of field affects
a single location on a surface. Imagine a tiny dot on a surface. When the
surface is in focus, the dot is seen through a single pixel. If the surface is
out of focus, the dot will appear in nearby pixels, depending on the different
views. At the limit, the dot will define a filled circle on the pixel grid. This
is termed the circle of confusion. In photography, the aesthetic quality of this
blur is called bokeh, and the shape of the circle is related to the aperture
blades. An inexpensive lens
will produce blurs that have a hexagonal shape rather than perfect circles.
One way to compute the depth-of-field effect is to take each location on
a surface and scatter its shading value to its neighbors inside this circle.
Sprites are used to represent the circles. The averaged sum of all of the
overlapping sprites for the visible surface at a pixel is the color to display.
This technique is used by batch (non-interactive) rendering systems to
compute depth-of-field effects and is sometimes referred to as a forward
mapping technique [245].
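A forward-mapping scatter can be sketched on the CPU as splatting each pixel's color over a disk whose radius is that pixel's circle of confusion, then averaging the overlapping contributions. All names here are illustrative, and visibility ordering between surfaces, which a production renderer must respect, is ignored.

#include <algorithm>
#include <cmath>
#include <vector>

struct Color { float r, g, b; };

// Forward-mapping (scatter) sketch: each source pixel splats its color over
// its circle of confusion. Each splat is weighted by the inverse of its
// area, so in-focus pixels (tiny circles) dominate where they overlap
// blurrier ones.
void scatterDoF(const std::vector<Color>& src, const std::vector<float>& coc,
                std::vector<Color>& dst, int w, int h)
{
    std::vector<float> weight(w * h, 0.0f);
    dst.assign(w * h, Color{ 0, 0, 0 });
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x) {
            float rad = std::max(coc[y * w + x], 0.5f); // clamp to half a pixel
            int r = (int)std::ceil(rad);
            float wgt = 1.0f / (3.14159265f * rad * rad);
            for (int dy = -r; dy <= r; ++dy)
                for (int dx = -r; dx <= r; ++dx) {
                    int sx = x + dx, sy = y + dy;
                    if (sx < 0 || sx >= w || sy < 0 || sy >= h) continue;
                    if (dx * dx + dy * dy > rad * rad) continue; // outside circle
                    Color& d = dst[sy * w + sx];
                    const Color& s = src[y * w + x];
                    d.r += s.r * wgt; d.g += s.g * wgt; d.b += s.b * wgt;
                    weight[sy * w + sx] += wgt;
                }
        }
    // Normalize to the averaged sum of overlapping splats at each pixel.
    for (int i = 0; i < w * h; ++i)
        if (weight[i] > 0.0f) {
            dst[i].r /= weight[i]; dst[i].g /= weight[i]; dst[i].b /= weight[i];
        }
}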
One problem with using scattering is that it does not map well to pixel
shader capabilities. Pixel shaders can operate in parallel because they do
not spread their results to neighbors. That said, the geometry shader can
be used for generating sprites that, by being rendered, scatter results to
Figure 10.30. A scatter operation takes a pixel’s value and spreads it to the neighboring
area, for example by rendering a circular sprite. In a gather, the neighboring values are
sampled and used to affect a pixel. The GPU’s pixel shader is optimized to perform
gather operations via texture sampling.
other pixels. This functionality can be used to aid in computing depth of
field [1025].
Another way to think about circles of confusion is to make the assumption
that the local neighborhood around a pixel has about the same depth. With
this assumption in place, a gather operation can be done. Pixel shaders
are optimized for gathering results from previous computations. See
Figure 10.30. So, one way to perform depth-of-field effects is
to blur the surface at each pixel based on its depth [1204]. The depth
defines a circle of confusion, which is how wide an area should be sampled.
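The per-pixel radius typically comes from the thin-lens model. A minimal sketch, with all parameters in world units; converting the result to a pixel count using the sensor size and image resolution is omitted:

#include <cmath>

// Thin-lens circle of confusion diameter for a surface at objectDist, with
// the lens focused at focusDist. The aperture diameter and focal length
// are illustrative inputs; all values are in the same world units.
float circleOfConfusion(float objectDist, float focusDist,
                        float aperture, float focalLength)
{
    return aperture * std::fabs(objectDist - focusDist) / objectDist
                    * focalLength / (focusDist - focalLength);
}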
Wloka [1364] gives one scheme to perform this task. The scene is rendered
normally, with no special operations other than storing in the alpha channel
the radius of the circle of confusion. The scene image is post-processed
using filtering techniques to blur it, and blur it again, resulting in a total
of three images: sharp, blurry, and blurrier. Then a pixel shader is used
to access the blur factor and interpolate among the three textures for each
pixel. Multiple textures are needed because of limitations on the ability of
the GPU to access mip levels in mipmaps [245].
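The interpolation step might look like the following per-pixel sketch; the blur factor read from the alpha channel is assumed to be remapped to [0, 1], with 0 selecting the sharp image and 1 the blurriest, and the even split at 0.5 is an illustrative choice.

struct Color { float r, g, b; };

Color lerp(const Color& a, const Color& b, float t)
{
    return { a.r + (b.r - a.r) * t,
             a.g + (b.g - a.g) * t,
             a.b + (b.b - a.b) * t };
}

// Choose the output color from the sharp/blurry/blurrier images using the
// per-pixel blur factor (e.g., read from the alpha channel).
Color depthOfFieldBlend(const Color& sharp, const Color& blurry,
                        const Color& blurrier, float blur)
{
    if (blur < 0.5f)
        return lerp(sharp, blurry, blur * 2.0f);        // first half of range
    return lerp(blurry, blurrier, (blur - 0.5f) * 2.0f); // second half
}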
The idea behind this technique is to use the blur factor to vary the filtering
kernel’s size. With modern GPUs, mipmaps or summed-area tables [445,
543] can be used for sampling an area of the image texture. Another
approach is to interpolate between the image and a small, blurry version of
it. The filter kernel size increases with the blurriness, and the blurred image
is given more weight. Sampling artifacts are minimized by using a
Poisson-disk pattern [1121] (see Figure 9.30 on page 365). Gillham [400] discusses
implementation details on newer GPUs. Such gather approaches are also
called backwards mapping or reverse mapping methods. Figure 10.31 shows
an example.
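A gather along these lines might look like the sketch below. The eight Poisson-disk offsets are a hypothetical precomputed set (a real implementation would use a carefully distributed pattern), scaled per pixel by the circle of confusion radius.

#include <algorithm>
#include <vector>

struct Color { float r, g, b; };

// Gather (reverse-mapping) depth of field: sample the neighborhood with a
// precomputed Poisson-disk pattern scaled by the pixel's circle of
// confusion radius, in pixels.
Color gatherDoF(const std::vector<Color>& img, int w, int h,
                int x, int y, float cocRadius)
{
    static const float offsets[8][2] = {
        { 0.53f, 0.21f }, { -0.41f, 0.64f }, { -0.70f, -0.23f }, { 0.19f, -0.83f },
        { 0.91f, -0.35f }, { -0.06f, 0.17f }, { 0.33f, 0.87f }, { -0.88f, 0.42f }
    };
    Color sum = img[y * w + x]; // center tap
    for (const auto& o : offsets) {
        int sx = std::clamp(x + (int)(o[0] * cocRadius), 0, w - 1);
        int sy = std::clamp(y + (int)(o[1] * cocRadius), 0, h - 1);
        const Color& s = img[sy * w + sx];
        sum.r += s.r; sum.g += s.g; sum.b += s.b;
    }
    const float inv = 1.0f / 9.0f; // center plus eight taps
    return { sum.r * inv, sum.g * inv, sum.b * inv };
}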