boundary of an object. For example, in a side view of a head, the computer
graphics definition includes the edges of the ears.
The definition of silhouette edges can also sometimes include boundary
edges, which can be thought of as joining the front and back faces
of the same triangle (and so in a sense are always silhouette edges). We
define silhouette edges here specifically to not include boundary edges.
Section 12.3 discusses processing polygon data to create connected meshes
with a consistent facing, as well as how to determine boundary, crease, and
material edges.
11.2.1 Surface Angle Silhouetting
In a similar fashion to the surface shader in Section 11.1, the dot product
between the direction to the viewpoint and the surface normal can be used
to give a silhouette edge [424]. If this value is near zero, then the surface is
nearly edge-on to the eye and so is likely to be near a silhouette edge. The
technique is equivalent to shading the surface using a spherical environment
map (EM) with a black ring around the edge. See Figure 11.5. In practice,
a one-dimensional texture can be used in place of the environment map.
Marshall [822] performs this silhouetting method by using a vertex shader.
Instead of computing the reflection direction to access the EM, he uses
the dot product of the view ray and vertex normal to access the
one-dimensional texture. Everitt [323] uses the mipmap pyramid to perform
the process, coloring the topmost layers with black. As a surface becomes
edge-on, it accesses these top layers and so is shaded black. Since no vertex
interpolation is done, the edge is sharper. These methods are extremely
fast, since the accelerator does all the work in a single pass, and the texture
filtering can help antialias the edges.
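As a rough illustration of the idea (not Marshall's actual shader code), the
per-vertex computation can be sketched as follows. The names Vec3 and
silhouetteTexCoord are made up for this example; the returned value would be
used to index a one-dimensional texture that is black near zero:

#include <algorithm>
#include <cmath>

struct Vec3 { float x, y, z; };

static float dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// Near-zero results mean the surface is nearly edge-on to the viewer, so a
// one-dimensional texture that is black near 0 darkens the silhouette region.
float silhouetteTexCoord(const Vec3& unitToViewer, const Vec3& unitNormal) {
    float d = std::fabs(dot(unitToViewer, unitNormal));
    return std::clamp(d, 0.0f, 1.0f);
}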
This type of technique can work for some models, in which the
assumption that there is a relationship between the surface normal and the
silhouette edge holds true. For a model such as a cube, this method fails,
as the silhouette edges will usually not be caught. However, by explicitly
drawing the crease edges, such sharp features will be rendered properly,
though with a different style than the silhouette edges. A feature or
drawback of this method is that silhouette lines are drawn with variable
width, depending on the curvature of the surface. Large, flat polygons will
turn entirely black when nearly edge-on, which is usually not the effect desired. In
experiments, Wu found that for the game Cel Damage this technique gave
excellent results for one quarter of the models, but failed on the rest [1382].
Figure 11.5. Silhouettes rendered by using a spheremap. By widening the circle along
the edge of the spheremap, a thicker silhouette edge is displayed. (Images courtesy of
Kenny Hoff.)
11.2.2 Procedural Geometry Silhouetting
One of the first techniques for real-time silhouette rendering was presented
by Rossignac and van Emmerik [1083], and later refined by Raskar and
Cohen [1046]. The basic idea is to render the frontfaces normally, then
render the backfaces in a way that makes their silhouette edges visible.
There are a number of methods of rendering these backfaces, each with
its own strengths and weaknesses. Each method begins by drawing the
frontfaces. Then frontface culling is turned on and backface culling is
turned off, so that only backfaces are displayed.
One method to render the silhouette edges is to draw only the edges
(not the faces) of the backfaces. Using biasing or other techniques (see
Section 11.4) ensures that these lines are drawn just in front of the frontfaces.
In this way, all lines except the silhouette edges are hidden [720, 1083].
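As a minimal sketch (not code from the cited papers), the two passes can be
set up in classic OpenGL roughly as follows; drawModel() is a hypothetical
function that issues the mesh's triangles, and the line width and offset
values are arbitrary:

#include <GL/gl.h>

extern void drawModel();  // hypothetical: issues the mesh's triangles

void drawSilhouetteByBackfaceEdges() {
    glEnable(GL_CULL_FACE);

    // Pass 1: draw the frontfaces normally.
    glCullFace(GL_BACK);
    glPolygonMode(GL_FRONT_AND_BACK, GL_FILL);
    drawModel();

    // Pass 2: draw only the backfaces, as black lines pulled slightly toward
    // the viewer so that everything but the silhouette edges is hidden.
    glCullFace(GL_FRONT);
    glPolygonMode(GL_FRONT_AND_BACK, GL_LINE);
    glLineWidth(2.0f);
    glColor3f(0.0f, 0.0f, 0.0f);
    glEnable(GL_POLYGON_OFFSET_LINE);
    glPolygonOffset(-1.0f, -1.0f);  // negative values bias toward the eye
    drawModel();

    // Restore state for subsequent rendering.
    glDisable(GL_POLYGON_OFFSET_LINE);
    glLineWidth(1.0f);
    glPolygonMode(GL_FRONT_AND_BACK, GL_FILL);
    glCullFace(GL_BACK);
}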
Figure 11.6. The z-bias method of silhouetting, done by translating the backface forward.
If the frontface is at a different angle, as shown on the right, a different amount of the
backface is visible. (Illustration after Raskar and Cohen [1046].)
One way to make wider lines is to render the backfaces themselves in
black. Without any bias, these backfaces would remain invisible. The
backfaces are therefore moved forward in screen Z by biasing them. In
this way, only the edges of the backfacing triangles are visible. Raskar and
Cohen give a number of biasing methods, such as translating by a fixed
amount, or by an amount that compensates for the nonlinear nature of
the z-depths, or using a depth-slope bias call such as glPolygonOffset.
Lengyel [758] discusses how to provide finer depth control by modifying the
perspective matrix. A problem with all these methods is that they do not
create lines with a uniform width. To do so, the amount to move forward
depends not only on the backface, but also on the neighboring frontface(s).
See Figure 11.6. The slope of the backface can be used to bias the polygon
forward, but the thickness of the line will also depend on the angle of the
frontface.
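A sketch of this filled, z-biased backface pass (again assuming the
hypothetical drawModel() from the earlier listing, with the frontfaces
already drawn) might look like the following; the offset values are
arbitrary and would need tuning per scene:

#include <GL/gl.h>

extern void drawModel();  // hypothetical: issues the mesh's triangles

void drawBiasedBlackBackfaces() {
    glEnable(GL_CULL_FACE);
    glCullFace(GL_FRONT);            // keep only the backfaces
    glColor3f(0.0f, 0.0f, 0.0f);     // render them in black
    glEnable(GL_POLYGON_OFFSET_FILL);
    glPolygonOffset(-2.0f, -2.0f);   // depth-slope bias toward the viewer
    drawModel();
    glDisable(GL_POLYGON_OFFSET_FILL);
    glCullFace(GL_BACK);
}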
Raskar and Cohen [1046] solve this neighbor dependency problem by
instead fattening each backface triangle out along its edges by the amount
needed to see a consistently thick line. That is, the slope of the triangle and
the distance from the viewer determine how much the triangle is expanded.
Figure 11.7. Triangle fattening. On the left, a backface triangle is expanded along its
plane. Each edge moves a different amount in world space to make the resulting edge
the same thickness in screen space. For thin triangles, this technique falls apart, as one
corner becomes elongated. On the right, the triangle edges are expanded and joined to
form mitered corners to avoid this problem.
Figure 11.8. Silhouettes rendered with backfacing edge drawing with thick lines,
z-bias, and fattened triangle algorithms. The backface edge technique gives poor joins
between lines and nonuniform lines due to biasing problems on small features. The
z-bias technique gives nonuniform edge width because of the dependence on the angles
of the frontfaces. (Images courtesy of Raskar and Cohen [1046].)
One method is to expand the three vertices of each triangle outwards along
its plane. A safer method of rendering the triangle is to move each edge of
the triangle outwards and connect the edges. Doing so avoids having the
vertices stick far away from the original triangle. See Figure 11.7. Note that
no biasing is needed with this method, as the backfaces expand beyond the
edges of the frontfaces. See Figure 11.8 for results from the three methods.
An improved way of computing the edge expansion factors is presented in
a later paper by Raskar [1047].
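A rough sketch of the mitered expansion for a single triangle is given below,
assuming a simple Vec3 type. Converting a desired pixel width into the
world-space amount widthWorld (which, as noted above, depends on the
triangle's slope and its distance from the viewer) is omitted, and the
formulation is illustrative rather than Raskar and Cohen's exact one:

#include <cmath>

struct Vec3 {
    float x, y, z;
    Vec3 operator+(const Vec3& b) const { return {x + b.x, y + b.y, z + b.z}; }
    Vec3 operator-(const Vec3& b) const { return {x - b.x, y - b.y, z - b.z}; }
    Vec3 operator*(float s)       const { return {x * s, y * s, z * s}; }
};
static float dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
static Vec3 cross(const Vec3& a, const Vec3& b) {
    return {a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x};
}
static Vec3 normalize(const Vec3& v) {
    return v * (1.0f / std::sqrt(dot(v, v)));
}

// In-plane unit normal of edge (a,b), pointing away from the opposite vertex c.
static Vec3 edgeNormal(const Vec3& a, const Vec3& b, const Vec3& c, const Vec3& faceN) {
    Vec3 n = normalize(cross(faceN, b - a));
    if (dot(n, c - a) > 0.0f) n = n * -1.0f;  // flip so it points outward
    return n;
}

// Expand a backface triangle in its own plane so it overhangs the original by
// roughly widthWorld units; corners are mitered so thin triangles do not
// produce long spikes.
void fattenTriangle(Vec3 v[3], float widthWorld) {
    Vec3 faceN = normalize(cross(v[1] - v[0], v[2] - v[0]));
    Vec3 out[3];
    for (int i = 0; i < 3; ++i)
        out[i] = edgeNormal(v[i], v[(i + 1) % 3], v[(i + 2) % 3], faceN);
    Vec3 result[3];
    for (int i = 0; i < 3; ++i) {
        // Vertex i touches edges (i-1 -> i) and (i -> i+1); offsetting both
        // edges by widthWorld and intersecting them gives the mitered corner.
        const Vec3& n1 = out[(i + 2) % 3];
        const Vec3& n2 = out[i];
        Vec3 miter = (n1 + n2) * (widthWorld / (1.0f + dot(n1, n2)));
        result[i] = v[i] + miter;
    }
    for (int i = 0; i < 3; ++i) v[i] = result[i];
}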
In the method just given, the backface triangles are expanded along
their original planes. Another method is to move the backfaces outwards
by shifting their vertices along the shared vertex normals, by an amount
Figure 11.9. The triangle shell technique creates a second surface by shifting the surface
along its vertex normals.
proportional to their z-distance from the eye [506]. This is referred to as
the shell or halo method, as the shifted backfaces form a shell around the
original object. Imagine a sphere. Render the sphere normally, then expand
the sphere by a radius that is 5 pixels wide with respect to the sphere’s
center. That is, if moving the sphere’s center one pixel is equivalent to
moving it in world space by 3 millimeters, then increase the radius of the
sphere by 15 millimeters. Render only this expanded version’s backfaces
in black. (A forcefield or halo effect can be made by expanding further and
shading these backfaces dependent on their angle.)
The silhouette edge will be 5 pixels wide. See Figure 11.9. This
method has some advantages when performed on the GPU. Moving vertices
outwards along their normals is a perfect task for a vertex shader, so the
accelerator can create silhouettes without any help from the CPU. This
type of expansion is sometimes called shell mapping. Vertex information is
shared and so entire meshes can be rendered, instead of individual polygons.
The method is simple to implement, efficient, robust, and gives steady
performance. It is the technique used by the game Cel Damage [1382], for
example. See Figure 11.10.
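As a sketch of the per-vertex work (the kind of computation the vertex shader
would perform), the expansion can be written as follows. The
symmetric-frustum assumption and the parameter names pixelWidth, fovY, and
screenHeight are illustrative, not taken from the cited implementation:

#include <cmath>

struct Vec3 { float x, y, z; };

// Expand a vertex along its unit vertex normal so the shell overhangs the
// original surface by roughly pixelWidth pixels at the vertex's view depth.
Vec3 shellExpand(const Vec3& posView,     // vertex position in view space
                 const Vec3& normalView,  // unit vertex normal in view space
                 float pixelWidth, float fovY, float screenHeight) {
    // World-space size of one pixel at this depth, for a symmetric frustum.
    float depth = -posView.z;  // view space looks down -z
    float worldPerPixel = 2.0f * depth * std::tan(0.5f * fovY) / screenHeight;
    float offset = pixelWidth * worldPerPixel;
    return { posView.x + normalView.x * offset,
             posView.y + normalView.y * offset,
             posView.z + normalView.z * offset };
}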
This shell technique has a number of potential pitfalls. Imagine looking
head-on at a cube so that only one face is visible. Each of the four backfaces
Figure 11.10. An example of real-time toon-style rendering from the game Cel Damage,
using backface shell expansion to form silhouettes. (Image courtesy of Pseudo
Interactive Inc.)