7.8. Implementing BRDFs 271
accesses will cause rendering of such BRDFs to be significantly slower than
most analytic BRDFs. In addition, note that most of the texture accesses
need to be repeated for each light source.
7.8.1 Mipmapping BRDF and Normal Maps
In Section 6.2, we discussed a problem with texture filtering: Mechanisms
such as bilinear filtering and mipmapping are based on the assumption that
the quantity being filtered (which is an input to the shading equation) has
a linear relationship to the final color (the output of the shading equation).
Although this is true for some quantities, such as diffuse and specular
colors, it is not true in general. Artifacts can result from using linear
mipmapping methods on normal maps, or on textures containing nonlinear
BRDF parameters such as cosine powers. These artifacts can manifest as
flickering highlights, or as unexpected changes in surface gloss or brightness
with a change in the surface’s distance from the camera.
To understand why these problems occur and how to solve them, it
is important to remember that the BRDF is a statistical description of
the effects of subpixel surface structure. When the distance between the
camera and surface increases, surface structure that previously covered
several pixels may be reduced to subpixel size, moving from the realm of
bump maps into the realm of the BRDF. This transition is intimately tied
to the mipmap chain, which encapsulates the reduction of texture details
to subpixel size.
Let us consider how the appearance of an object such as the cylinder in
Figure 7.44 is modeled for rendering. Appearance modeling always assumes
a certain scale of observation. Macroscale (large scale) geometry is modeled
as triangles, mesoscale (middle scale) geometry is modeled as textures, and
microscale geometry, smaller than a single pixel, is modeled via the BRDF.
Figure 7.44. A shiny, bumpy cylinder, modeled as a cylindrical mesh with a normal map.
(Image courtesy of Patrick Conran, ILM.)
272 7. Advanced Shading
Figure 7.45. Part of the surface from Figure 7.44. The top shows the oriented NDFs, as
well as the underlying surface geometry they implicitly define. The center shows ideal
NDFs collected from the underlying geometry at lower resolutions. The bottom left
shows the result of normal averaging, and the bottom right shows the result of cosine
lobe fitting.
Given the scale shown in the image, it is appropriate to model the
cylinder as a smooth mesh (macroscale), and represent the bumps with a
normal map (mesoscale). A Blinn-Phong BRDF with a fixed cosine power
is chosen to model the microscale normal distribution function (NDF). This
combined representation models the cylinder appearance well at this scale.
But what happens when the scale of observation changes?
Study Figure 7.45. The black-framed figure at the top shows a small
part of the surface, covered by four normal map texels. For each normal
map texel, the normal is shown as a red arrow, surrounded by the cosine
lobe NDF, shown in black. The normals and NDF implicitly specify an
underlying surface structure, which is shown in cross section. The large
hump in the middle is one of the bumps from the normal map, and the
small wiggles are the microscale surface structure. Each texel in the normal
map, combined with the cosine power, can be seen as collecting the NDF
across the surface area covered by the texel.
The ideal representation of this surface at a lower resolution would
exactly represent the NDFs collected across larger surface areas. The center
of the figure (framed in purple) shows this idealized representation at half
and one quarter of the original resolution. The gray dotted lines show which
areas of the surface are covered by each texel. This idealized representation,
if used for rendering, would most accurately represent the appearance of
the surface at these lower resolutions.
In practice, the representation of the surface at low resolutions is the
responsibility of the lower mipmap levels in the mipmap chain. At the
bottom of the figure, we see two such sets of mipmap levels. On the bottom
left (framed in green) we see the result of averaging and renormalizing
the normals in the normal map, shown as NDFs oriented to the averaged
normals. These NDFs do not resemble the ideal ones—they are pointing
in the same direction, but they do not have the same shape. This will lead
to the object not having the correct appearance. Worse, since these NDFs
are so narrow, they will tend to cause aliasing, in the form of flickering
highlights.
We cannot represent the ideal NDFs directly with the Blinn-Phong
BRDF. However, if we use a gloss map, the cosine power can be varied from
texel to texel. Let us imagine that, for each ideal NDF, we find the rotated
cosine lobe that matches it most closely (both in orientation and overall
width). We store the center direction of this cosine lobe in the normal
map, and its cosine power in the gloss map. The results are shown on the
bottom right (framed in yellow). These NDFs are much closer to the ideal
ones. With this process, the appearance of the cylinder can be represented
much more faithfully than with simple normal averaging, as can be seen in
Figure 7.46.
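This fitting step can be sketched offline. The helper below is illustrative, not the book's actual tool: it moment-matches a single rotated cosine lobe to the set of high-resolution normals under one low-resolution texel, assuming a lobe density proportional to cos^m θ about the center direction, for which the mean cosine is (m + 1)/(m + 2). The function name and the numeric guard are our own inventions:

```python
import math

def fit_cosine_lobe(normals):
    """Fit a rotated cosine lobe to a set of unit normals by moment matching.

    Returns (center, m): the lobe's center direction (stored in the
    low-resolution normal map) and its cosine power (stored in the gloss
    map).  Assumes a density proportional to cos^m(theta) about the
    center; its mean cosine is (m + 1)/(m + 2), which is solved for m.
    """
    avg = [sum(n[i] for n in normals) / len(normals) for i in range(3)]
    length = math.sqrt(sum(c * c for c in avg))
    center = [c / length for c in avg]
    # Empirical mean cosine of the normals about the fitted center.
    mean_cos = sum(sum(n[i] * center[i] for i in range(3))
                   for n in normals) / len(normals)
    m = (2.0 * mean_cos - 1.0) / max(1.0 - mean_cos, 1e-8)
    # Very wide distributions (mean cosine below 1/2) clamp to a flat lobe.
    return center, max(m, 0.0)

# Four texel normals tilted 15 degrees away from +z: the fitted lobe is
# centered on +z with a moderate cosine power (near 27) instead of the
# needle-sharp power a single authored gloss value would keep.
t = math.radians(15.0)
normals = [(math.sin(t), 0.0, math.cos(t)), (-math.sin(t), 0.0, math.cos(t)),
           (0.0, math.sin(t), math.cos(t)), (0.0, -math.sin(t), math.cos(t))]
center, m = fit_cosine_lobe(normals)
```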
Figure 7.46. The cylinder from Figure 7.44. On the left, rendered with the original nor-
mal map. In the center, rendered with a much lower-resolution normal map containing
averaged and renormalized normals (as shown in the bottom left of Figure 7.45). On
the right, the cylinder is rendered with textures at the same low resolution, but contain-
ing normal and gloss values fitted to the ideal NDF, as shown in the bottom right of
Figure 7.45. The image on the right is a significantly better representation of the origi-
nal appearance. It will also be less prone to aliasing when rendered at low resolutions.
(Image courtesy of Patrick Conran, ILM.)
In general, filtering normal maps in isolation will not produce the best
results. The ideal is to filter the entire surface appearance, represented by
the BRDF, the normal map, and any other maps (gloss map, diffuse color
map, specular color map, etc.). Various methods have been developed to
approximate this ideal.
Toksvig [1267] makes a clever observation: if normals are averaged
and not renormalized, the length of the averaged normal correlates inversely
with the width of the normal distribution. That is, the more the original
normals point in different directions, the shorter the normal averaged from
them. Assuming that a Blinn-Phong BRDF is used with cosine power m,
Toksvig presents a method to compute a new cosine power m′. Evaluating
the Blinn-Phong BRDF with m′ instead of m approximates the spreading
effect of the filtered normals. The equation for computing m′ is

    m′ = (n_a m) / (n_a + m(1 − n_a)),    (7.62)

where n_a is the length of the averaged normal. Toksvig’s method has the
advantage of working with the most straightforward normal mipmapping
scheme (linear averaging without normalization). This feature is particu-
larly useful for dynamically generated normal maps (e.g., water ripples),
for which mipmap generation must be done on the fly. Note that com-
mon methods of compressing normal maps require reconstructing the z-
component from the other two, so they do not allow for non-unit-length
normals. For this reason, normal maps used with Toksvig’s method may
have to remain uncompressed. This issue can be partially compensated for
by the ability to avoid storing a separate gloss map, as the normals in the
normal map can be shortened to encode the gloss factor. Toksvig’s method
does require a minor modification to the pixel shader to compute a gloss
factor from the normal length.
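The adjustment itself is tiny. Below is a minimal Python sketch of Equation 7.62; the function name and the epsilon guard are our own, and in practice this computation would live in the pixel shader, operating on the unnormalized normal fetched from the mipmapped normal map:

```python
import math

def toksvig_gloss(filtered_normal, m):
    """Adjust a Blinn-Phong cosine power for a filtered normal (Eq. 7.62).

    filtered_normal: the averaged, *unnormalized* normal fetched from the
    normal-map mipmap chain; its length n_a shrinks as the underlying
    normals diverge.
    m: the cosine power authored for the full-resolution surface.
    """
    n_a = math.sqrt(sum(c * c for c in filtered_normal))
    # Small epsilon guards against a fully scattered distribution (n_a -> 0).
    return (n_a * m) / (n_a + m * (1.0 - n_a) + 1e-8)

# A unit-length normal leaves the power unchanged ...
print(round(toksvig_gloss((0.0, 0.0, 1.0), 100.0), 1))   # -> 100.0
# ... while a shortened one broadens the lobe considerably.
print(round(toksvig_gloss((0.0, 0.0, 0.9), 100.0), 2))   # -> 8.26
```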
Conran at ILM [191] proposes a process for computing the effect of fil-
tered normal maps on specular response. The resulting representation is
referred to as SpecVar maps. SpecVar map generation occurs in two stages.
In the first stage, the shading equation is evaluated at a high spatial reso-
lution for a number of representative lighting and viewing directions. The
result is an array of shaded colors associated with each high-resolution sur-
face location. Each entry of this array is averaged across surface locations,
producing an array of shaded colors for each low-resolution texel. In the
second stage a fitting process is performed at each low-resolution texel.
The goal of the fitting process is to find the shading parameters that pro-
duce the closest fit to the array values for the given lighting and viewing
directions. These parameters are stored in texture maps, to be used dur-
ing shading. Although the details of the process are specific to Pixar’s
RenderMan renderer, the algorithm can also be used to generate mipmap
chains for real-time rendering. SpecVar map generation is computation-
ally expensive, but it produces very high quality results. This method was
used at ILM for effects shots in several feature films. Figure 7.46 was also
generated using this technique.
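The two stages can be illustrated for a single low-resolution texel. Everything below (the function names, the direction sampling, the brute-force power search, and the use of a bare Blinn-Phong specular term) is a simplified assumption for exposition; the actual SpecVar pipeline runs inside Pixar's RenderMan and fits the full set of shading parameters:

```python
import math

def blinn_phong(n, l, v, m):
    """Bare Blinn-Phong specular term: (n . h)^m with h the half vector."""
    h = [l[i] + v[i] for i in range(3)]
    hl = math.sqrt(sum(c * c for c in h))
    h = [c / hl for c in h]
    return max(sum(n[i] * h[i] for i in range(3)), 0.0) ** m

def fit_specvar(hi_res_normals, m, directions, powers):
    """Two-stage SpecVar-style fit for one low-resolution texel (sketch).

    Stage 1: shade every high-resolution normal for each representative
    (light, view) pair and average the results per pair.
    Stage 2: search the candidate cosine powers for the one whose single
    lobe, centered on the averaged normal, best reproduces those averages.
    """
    # Stage 1: averaged shaded response per direction pair.
    targets = [sum(blinn_phong(n, l, v, m) for n in hi_res_normals)
               / len(hi_res_normals) for (l, v) in directions]
    # Averaged, renormalized normal serves as the fitted lobe center.
    avg = [sum(n[i] for n in hi_res_normals) / len(hi_res_normals)
           for i in range(3)]
    ln = math.sqrt(sum(c * c for c in avg))
    center = [c / ln for c in avg]
    # Stage 2: least-squares search over the candidate powers.
    def err(p):
        return sum((blinn_phong(center, l, v, p) - t) ** 2
                   for (l, v), t in zip(directions, targets))
    return center, min(powers, key=err)
```

As a sanity check, a texel whose high-resolution normals all agree should fit back to the original cosine power, since the averaged lobe then reproduces the stage-1 targets exactly.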
Schilling [1124] proposed storing three numbers at each location in the
mipmap, to represent the variance of the normals in the u and v axes. Un-
like the previously discussed methods, this supports extracting anisotropic
NDFs from the normal map. Schilling describes an anisotropic variant of
the Blinn-Phong BRDF using these three numbers. In later work [1125],
this was extended to anisotropic shading with environment maps.
Looking at the ideal lowest-resolution NDF in the middle of Figure 7.45,
we see three distinct lobes, which are only somewhat approximated by the
single Phong lobe on the bottom right. Such multi-lobe NDFs appear when
the normal map contains non-random structure. Several approaches have
been proposed to handle such cases.
Tan et al. [1239, 1240] and Han et al. [496] discuss methods to fit a mix-
ture of multiple BRDF lobes to a filtered normal map. They demonstrate
significant quality improvement over Toksvig’s method, but at the cost of
considerable additional storage and shader computation.
7.9 Combining Lights and Materials
Lighting computations occur in two phases. In the light phase, the light
source properties and surface location are used to compute the light’s di-