Figure 5.14. Side views of triangle meshes (in black, with vertex normals) representing
curved surfaces (in red). On the left smoothed vertex normals are used to represent
a smooth surface. On the right the middle vertex has been duplicated and given two
normals, representing a crease.
One option is to use the triangle normals directly for shading. However, triangle meshes are
typically used to represent an underlying curved surface. To this end, the
model description includes surface normals specified at each vertex (Sec-
tion 12.3.4 will discuss methods to compute vertex normals). Figure 5.14
shows side views of two triangle meshes that represent curved surfaces, one
smooth and one with a sharp crease.
At this point we have all the math needed to fully evaluate the shading
equation. A shader function to do so is:
float3 Shade(float3 p,
             float3 n,
             uniform float3 pv,
             uniform float3 Kd,
             uniform float3 Ks,
             uniform float m,
             uniform uint lightCount,
             uniform float3 l[MAXLIGHTS],
             uniform float3 EL[MAXLIGHTS])
{
    // View vector: from the surface point toward the viewer.
    float3 v = normalize(pv - p);
    float3 Lo = float3(0.0f, 0.0f, 0.0f);
    for (uint k = 0; k < lightCount; k++)
    {
        // Half vector between the view and light directions.
        float3 h = normalize(v + l[k]);
        float cosTh = saturate(dot(n, h));
        float cosTi = saturate(dot(n, l[k]));
        // Add this light's diffuse and specular contribution.
        Lo += (Kd + Ks * pow(cosTh, m)) * EL[k] * cosTi;
    }
    return Lo;
}
Figure 5.15. Per-vertex evaluation of shading equations can cause artifacts that depend
on the vertex tessellation. The spheres contain (left to right) 256, 1024, and 16,384
triangles.
The arguments marked as uniform are constant over the entire model.
The other arguments (p and n) vary per pixel or per vertex, depending on
which shader calls this function. The saturate intrinsic function returns
its argument clamped between 0 and 1. Here we only need the clamp at 0,
but the argument is known not to exceed 1, and saturate is faster
than the more general max function on most hardware. The normalize
intrinsic function divides the vector passed to it by its own length, returning
a unit-length vector.
So which frequency of evaluation should we use when calling the
Shade() function? When vertex normals are used, per-primitive evaluation
of the shading equation (often called flat shading) is usually undesirable,
since it results in a faceted look, rather than the desired smooth appear-
ance (see the left image of Figure 5.17). Per-vertex evaluation followed by
linear interpolation of the result is commonly called Gouraud shading [435].
In a Gouraud (pronounced guh-row) shading implementation, the vertex
shader would pass the world-space vertex normal and position to Shade()
(first ensuring the normal is of length 1), and then write the result to an
interpolated value. The pixel shader would take the interpolated value and
directly write it to the output. Gouraud shading can produce reasonable
results for matte surfaces, but for highly specular surfaces it may produce
artifacts, as seen in Figure 5.15 and the middle image of Figure 5.17.
These artifacts are caused by the linear interpolation of nonlinear light-
ing values. This is why the artifacts are most noticeable in the specular
highlight, where the lighting variation is most nonlinear.
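To make this concrete, here is a minimal sketch of a Gouraud-style shader pair. It is not code from the text: the struct, semantic, and matrix names (VSInput, VSOutput, World, WorldViewProj) are assumptions for illustration, and pv, Kd, Ks, m, lightCount, l, and EL are taken to be the uniform inputs of Shade() above, assumed declared at global scope.
float4x4 World;         // object-to-world transform (assumed name)
float4x4 WorldViewProj; // object-to-clip transform (assumed name)

struct VSInput
{
    float3 pos    : POSITION;
    float3 normal : NORMAL;
};

struct VSOutput
{
    float4 pos   : SV_Position;
    float3 color : COLOR0;
};

VSOutput GouraudVS(VSInput vsIn)
{
    VSOutput vsOut;
    // Clip-space position for the rasterizer.
    vsOut.pos = mul(float4(vsIn.pos, 1.0f), WorldViewProj);
    // World-space position and unit-length normal for shading.
    // (Casting World assumes no nonuniform scaling; otherwise the
    // inverse transpose should be used for the normal.)
    float3 worldPos = mul(float4(vsIn.pos, 1.0f), World).xyz;
    float3 worldNrm = normalize(mul(vsIn.normal, (float3x3)World));
    // Evaluate the shading equation once per vertex; the result is
    // linearly interpolated across each triangle.
    vsOut.color = Shade(worldPos, worldNrm, pv, Kd, Ks, m,
                        lightCount, l, EL);
    return vsOut;
}

float4 GouraudPS(VSOutput psIn) : SV_Target
{
    // The pixel shader just writes the interpolated value.
    return float4(psIn.color, 1.0f);
}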
At the other extreme from Gouraud shading, we have full per-pixel eval-
uation of the shading equation. This is often called Phong shading [1014].
In this implementation, the vertex shader writes the world-space normals
and positions to interpolated values, which the pixel shader passes to
Shade(). The return value is written to the output. Note that even if
Figure 5.16. Linear interpolation of unit normals at vertices across a surface gives
interpolated vectors with lengths of less than one.
the surface normal is scaled to length 1 in the vertex shader, interpolation
can change its length, so it may be necessary to do so again in the pixel
shader. See Figure 5.16.
Phong shading is free of interpolation artifacts (see the right side of Fig-
ure 5.17) but can be costly. Another option is to adopt a hybrid approach
where some evaluations are performed per vertex and some per pixel. As
long as the vertex interpolated values are not highly nonlinear, this can
often be done without unduly reducing visual quality.
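For comparison, here is a per-pixel (Phong-style) sketch under the same assumptions as the Gouraud example above. The only change is that the world-space position and normal are interpolated, and Shade() is called from the pixel shader:
struct PhongVSOut
{
    float4 pos      : SV_Position;
    float3 worldPos : TEXCOORD0;
    float3 worldNrm : TEXCOORD1;
};

PhongVSOut PhongVS(VSInput vsIn)
{
    PhongVSOut vsOut;
    vsOut.pos      = mul(float4(vsIn.pos, 1.0f), WorldViewProj);
    // Pass raw shading inputs to the rasterizer for interpolation.
    vsOut.worldPos = mul(float4(vsIn.pos, 1.0f), World).xyz;
    vsOut.worldNrm = mul(vsIn.normal, (float3x3)World);
    return vsOut;
}

float4 PhongPS(PhongVSOut psIn) : SV_Target
{
    // Interpolation shortens unit normals (Figure 5.16), so
    // renormalize before evaluating the shading equation.
    float3 n = normalize(psIn.worldNrm);
    float3 Lo = Shade(psIn.worldPos, n, pv, Kd, Ks, m,
                      lightCount, l, EL);
    return float4(Lo, 1.0f);
}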
Implementing a shading equation is a matter of deciding what parts
can be simplified, how frequently to compute various expressions, and how
the user will be able to modify and control the appearance. This section
has presented a model derived from theory, but that ignores some physical
phenomena. More elaborate shading effects and equations will be presented
in the chapters that follow. The next sections will cover sampling and
filtering, transparency, and gamma correction.
Figure 5.17. Flat, Gouraud, and Phong shading. The flat-shaded image has no specular
term, as shininess looks like distracting flashes for flat surfaces as the viewer moves.
Gouraud shading misses highlights on the body and legs from this angle because it
evaluates the shading equation only at the vertices.
5.6 Aliasing and Antialiasing
Imagine a large black triangle moving slowly across a white background.
As a screen grid cell is covered by the triangle, the pixel value representing
Figure 5.18. The upper row shows three images with different levels of antialiasing of
a triangle, a line, and some points. The lower row images are magnifications of the
upper row. The leftmost column uses only one sample per pixel, which means that no
antialiasing is used. The middle column images were rendered with four samples per
pixel (in a grid pattern), and the right column used eight samples per pixel (in a 4 × 4
checkerboard). All images were rendered using InfiniteReality graphics [899].
this cell should smoothly drop in intensity. What typically happens in basic
renderers of all sorts is that the moment the grid cell’s center is covered, the
pixel color immediately goes from white to black. Standard GPU rendering
is no exception. See the leftmost column of Figure 5.18.
Polygons show up in pixels as either there or not there. Lines drawn
have a similar problem. The edges have a jagged look because of this, and
so this visual artifact is called "the jaggies," which turn into "the crawlies"
when animated. More formally, this problem is called aliasing, and efforts
to avoid it are called antialiasing techniques. (Another way to view this is
that the jaggies are not actually due to aliasing; it is only that the edges are
"forced" into the grid formed by the pixels [1330, 1332].)
The subject of sampling theory and digital filtering is large enough to
fill its own book [422, 1035, 1367]. As this is a key area of rendering, the
basic theory of sampling and filtering will be presented. Then, we will focus
on what currently can be done in real time to alleviate aliasing artifacts.
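As a simple preview of such techniques, the grid-pattern images of Figure 5.18 can be produced by taking several samples within each pixel and averaging them. Below is a minimal box-filter sketch, not code from the text; ShadeSample() is a hypothetical function that returns the scene color at one subpixel position:
// Supersampling sketch: average a 2x2 grid of samples per pixel.
// ShadeSample() is hypothetical; it returns the scene color at a
// given subpixel coordinate.
float3 SuperSamplePixel(float2 pixelCoord)
{
    float3 color = float3(0.0f, 0.0f, 0.0f);
    for (uint j = 0; j < 2; j++)
    {
        for (uint i = 0; i < 2; i++)
        {
            // Place the four samples in a uniform grid within
            // the pixel: (0.25,0.25), (0.75,0.25), etc.
            float2 offset = float2(0.25f + 0.5f * i,
                                   0.25f + 0.5f * j);
            color += ShadeSample(pixelCoord + offset);
        }
    }
    return color * 0.25f; // equal (box-filter) weights
}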
5.6.1 Sampling and Filtering Theory
Figure 5.19. A continuous signal (left) is sampled (middle), and then the original signal
is recovered by reconstruction (right).

The process of rendering images is inherently a sampling task. This is
because generating an image is a matter of sampling a three-
dimensional scene in order to obtain color values for each pixel in the image
(an array of discrete pixels). To use texture mapping (see Chapter 6), texels
have to be resampled to get good results under varying conditions. To
generate a sequence of images in an animation, the animation is often sampled
at uniform time intervals. This section is an introduction to the topic of
sampling, reconstruction, and filtering. For simplicity, most material will
be presented in one dimension. These concepts extend naturally to two
dimensions as well, and can thus be used when handling two-dimensional
images.
Figure 5.19 shows how a continuous signal is being sampled at uniformly
spaced intervals, that is, discretized. The goal of this sampling process is
to represent information digitally. In doing so, the amount of information
is reduced. However, the sampled signal needs to be reconstructed in order
to recover the original signal. This is done by filtering the sampled signal.
Whenever sampling is done, aliasing may occur. This is an unwanted
artifact, and we need to battle aliasing in order to generate pleasing images.
In real life, a classic example of aliasing is a spinning wheel being filmed by
Figure 5.20. The top row shows a spinning wheel (original signal). This is inadequately
sampled in the second row, making it appear to move in the opposite direction. This is
an example of aliasing due to a too-low sampling rate. In the third row, the sampling
rate is exactly two samples per revolution, and we cannot determine in which direction
the wheel is spinning. This is the Nyquist limit. In the fourth row, the sampling rate is
higher than two samples per revolution, and we suddenly can see that the wheel spins
in the right direction.
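The caption's rule can be made numeric with a small worked example (the specific numbers are illustrative, not from the text). A signal with maximum frequency $f_{\max}$ must be sampled at a rate $f_s$ satisfying
$$ f_s > 2 f_{\max}. $$
For instance, a wheel pattern repeating 10 times per second filmed at $f_s = 12$ frames per second appears to move at $|10 - 12| = 2$ repetitions per second in the reverse direction; $f_s = 20$ is exactly the Nyquist limit of two samples per revolution, and only $f_s > 20$ reveals the true motion.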