522 11. Non-Photorealistic Rendering
part of the silhouette, the quadrilateral’s points are moved so that it is no
longer degenerate (i.e., is made visible). This results in a thin quadrilateral
“fin,” representing the edge, being drawn. This technique is based on the
same idea as the vertex shader for shadow volume creation, described on
page 347. Boundary edges, which have only one neighboring triangle, can
also be handled by passing in a second normal that is the negation of this
triangle’s normal. In this way, the boundary edge will always be flagged
as one to be rendered. The main drawbacks to this technique are a large
increase in the number of polygons sent through the pipeline, and that
it does not perform well if the mesh undergoes nonlinear transforms [1382].
McGuire and Hughes [844] present work to provide higher-quality fin lines
with endcaps.
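The per-edge test that decides whether a fin quad should be expanded can be illustrated on the CPU. The following is a minimal sketch, not code from any particular engine: an edge lies on the silhouette when its two adjacent triangles face opposite ways relative to the view direction, and a boundary edge with a negated second normal always passes the test.

```python
# Sketch of the silhouette-edge test behind the fin-quad technique.
# Vectors are plain tuples; classification is against a single view
# direction for simplicity (names here are illustrative).

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def is_silhouette_edge(n_front, n_back, view_dir):
    """An edge is on the silhouette when one adjacent face is
    front facing and the other is back facing."""
    return dot(n_front, view_dir) * dot(n_back, view_dir) < 0.0

def is_boundary_silhouette(n, view_dir):
    """A boundary edge has only one neighboring triangle; passing the
    negated normal as the second normal makes the product negative,
    so the edge is always flagged for rendering."""
    return is_silhouette_edge(n, tuple(-c for c in n), view_dir)
```

In a real implementation this test runs per vertex in the shader, using the two normals stored with each degenerate fin quad.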
If the geometry shader is a part of the pipeline, these additional fin
polygons do not need to be generated on the CPU and stored in a mesh.
The geometry shader itself can generate the fin quadrilaterals as needed.
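As a rough illustration of what such on-the-fly fin generation produces (a CPU sketch, not actual geometry-shader code; the extrusion height and vertex ordering are assumptions):

```python
# CPU illustration of fin-quad generation: for a silhouette edge,
# emit a thin quadrilateral by extruding the edge's endpoints along
# their vertex normals. The default height is an arbitrary choice.

def emit_fin_quad(p0, p1, n0, n1, height=0.02):
    """Return the four corners of a fin quad for edge (p0, p1),
    extruded along the per-vertex normals n0 and n1."""
    def offset(p, n):
        return tuple(pc + height * nc for pc, nc in zip(p, n))
    # Strip-friendly order: the base edge, then the extruded edge.
    return [p0, p1, offset(p0, n0), offset(p1, n1)]
```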
Other silhouette finding methods exist. For example, Gooch et al. [423]
use Gauss maps for determining silhouette edges. In the last part of Sec-
tion 14.2.1, hierarchical methods for quickly categorizing sets of polygons
as front or back facing are discussed. See Hertzmann's article [546] or either
NPR book [425, 1226] for more on this subject.
11.2.5 Hybrid Silhouetting
Northrup and Markosian [940] use a silhouette rendering approach that
has both image and geometric elements. Their method first finds a list of
silhouette edges. They then render all the object’s triangles and silhouette
edges, assigning each a different ID number (i.e., giving each a unique
color). This ID buffer is read back and the visible silhouette edges are
determined from it. These visible segments are then checked for overlaps
and linked together to form smooth stroke paths. Stylized strokes are then
rendered along these reconstructed paths. The strokes themselves can be
stylized in many different ways, including effects of taper, flare, wiggle,
and fading, as well as depth and distance cues. An example is shown in
Figure 11.12.
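The readback step can be sketched as follows. Here the rendered ID buffer is modeled as a plain 2D array of integers, with silhouette edges occupying a reserved ID range (an assumption made for this illustration):

```python
# Sketch of the ID-buffer readback: every triangle and silhouette edge
# was rendered with a unique integer ID, so scanning the buffer yields
# the set of silhouette edges that survived the depth test.

def visible_edge_ids(id_buffer, edge_id_start):
    """Collect IDs of silhouette edges that are visible, i.e., that
    won the depth test for at least one pixel."""
    visible = set()
    for row in id_buffer:
        for pixel_id in row:
            if pixel_id >= edge_id_start:  # assumed reserved edge range
                visible.add(pixel_id)
    return sorted(visible)
```

The visible segments recovered this way are what get linked into smooth stroke paths in the next step.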
Kalnins et al. [621] use this method in their work, which attacks an
important area of research in NPR: temporal coherence. Obtaining a sil-
houette is, in one respect, just the beginning. As the object and viewer
move, the silhouette edges change. With stroke extraction techniques, some
coherence is available by tracking the separate silhouette loops. However,
when two loops merge, corrective measures need to be taken or a noticeable
jump from one frame to the next will be visible. A pixel search and “vote”
algorithm is used to attempt to maintain silhouette coherence from frame
to frame.