forming the silhouette edge will move in the direction of its corresponding
cube face, so leaving gaps at the corners. This occurs because while there
is a single vertex at each corner, each face has a different vertex normal.
The problem is that the expanded cube does not truly form a shell, because
each corner vertex is expanding in a different direction. One solution is to
force vertices in the same location to share a single, new, average vertex
normal. Another technique is to create degenerate geometry at the creases
that then gets expanded into polygons with area [1382].
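As a rough illustration of the first fix, the following C++ sketch merges the
normals of all vertices that share a position, so that displacing along the
averaged normals produces a closed shell. The function and type names and the
position-quantization tolerance are assumptions for illustration, not part of
any cited technique:

#include <cmath>
#include <map>
#include <tuple>
#include <vector>

struct Vec3 { float x = 0, y = 0, z = 0; };

static Vec3 add(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3 normalize(Vec3 v) {
    float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return {v.x / len, v.y / len, v.z / len};
}

struct Vertex { Vec3 position; Vec3 normal; };

// Make all vertices that share a position use one averaged "shell" normal,
// so that displacing outward along it keeps the expanded hull closed at corners.
void averageCoincidentNormals(std::vector<Vertex>& vertices)
{
    // Quantize positions so vertices at the "same" location map to the same key.
    auto key = [](const Vec3& p) {
        return std::make_tuple(std::round(p.x * 1e4f),
                               std::round(p.y * 1e4f),
                               std::round(p.z * 1e4f));
    };
    std::map<std::tuple<float, float, float>, Vec3> sum;
    for (const Vertex& v : vertices) {
        Vec3& s = sum[key(v.position)];   // value-initialized to (0,0,0)
        s = add(s, v.normal);
    }
    for (Vertex& v : vertices)
        v.normal = normalize(sum[key(v.position)]);
}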
Shell and fattening techniques waste some fill, since all the backfaces are
rendered. Fattening techniques cannot currently be performed on curved
surfaces generated by the accelerator. Shell techniques can work with
curved surfaces, as long as the surface representation can be displaced
outwards along the surface normals. The z-bias technique works with all
curved surfaces, since the only modification is a shift in z-depth. Other
limitations of all of these techniques are that there is little control over
the edge appearance, semitransparent surfaces are difficult to render with
silhouettes, and without some form of antialiasing the edges look fairly
poor [1382].
One worthwhile feature of this entire class of geometric techniques is
that no connectivity information or edge lists are needed. Each polygon is
processed independently from the rest, so such techniques lend themselves
to hardware implementation [1047]. However, as with all the methods
discussed here, each mesh should be preprocessed so that the faces are
consistent (see Section 12.3).
This class of algorithms renders only the silhouette edges. Other edges
(boundary, crease, and material) have to be rendered in some other fashion.
These can be drawn using one of the line drawing techniques in Section 11.4.
For deformable objects, the crease lines can change over time. Raskar [1047]
gives a clever solution for drawing ridge lines without having to create and
access an edge connectivity data structure. The idea is to generate an
additional polygon along each edge of the triangle being rendered. These
edge polygons are bent away from the triangle’s plane by the user-defined
critical dihedral angle that determines when a crease should be visible.
Now if two adjoining triangles are at greater than this crease angle, the
edge polygons will be visible, else they will be hidden by the triangles. For
valley edges, this technique can be performed by using the stencil buffer
and up to three passes.
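The following C++ sketch illustrates the geometric idea of such an edge
polygon. It is a simplified interpretation, not Raskar's exact construction;
the fin width and all names are assumptions:

#include <cmath>

struct Vec3 { float x = 0, y = 0, z = 0; };

static Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3 cross(Vec3 a, Vec3 b) {
    return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}
static Vec3 normalize(Vec3 v) {
    float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return {v.x / len, v.y / len, v.z / len};
}
static Vec3 scaleAdd(Vec3 p, Vec3 d, float s) {
    return {p.x + d.x * s, p.y + d.y * s, p.z + d.z * s};
}

struct Quad { Vec3 v[4]; };

// Build an edge polygon along edge (p0,p1) of triangle (p0,p1,p2), bent below
// the triangle's plane by creaseAngle radians and extruded by finWidth.
Quad makeEdgeFin(Vec3 p0, Vec3 p1, Vec3 p2, float creaseAngle, float finWidth)
{
    Vec3 n = normalize(cross(sub(p1, p0), sub(p2, p0)));  // triangle normal
    Vec3 e = normalize(sub(p1, p0));                      // edge direction
    Vec3 outward = cross(e, n);             // in-plane, pointing away from the triangle
    // Tilt the outward direction below the plane by the crease threshold angle.
    Vec3 finDir = {outward.x * std::cos(creaseAngle) - n.x * std::sin(creaseAngle),
                   outward.y * std::cos(creaseAngle) - n.y * std::sin(creaseAngle),
                   outward.z * std::cos(creaseAngle) - n.z * std::sin(creaseAngle)};
    // If the adjoining triangle meets this one at a sharper dihedral angle than
    // creaseAngle, this quad pokes out past it and becomes visible.
    return Quad{{p0, p1, scaleAdd(p1, finDir, finWidth), scaleAdd(p0, finDir, finWidth)}};
}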
11.2.3 Silhouetting by Image Processing
The algorithms in the previous section are sometimes classified as image-
based, as the screen resolution determines how they are performed. An-
other type of algorithm is more directly image-based, in that it operates
entirely on data stored in buffers and does not modify (or even know about)
the geometry in the scene.
Saito and Takahashi [1097] first introduced this G-buffer concept, which
is also used for deferred shading (Section 7.9.2). Decaudin [238] extended
the use of G-buffers to perform toon rendering. The basic idea is simple:
NPR can be done by performing image processing techniques on various
buffers of information. By looking for discontinuities in neighboring Z-
buffer values, most silhouette edge locations can be found. Discontinu-
ities in neighboring surface normal values signal the location of bound-
ary (and often silhouette) edges. Rendering the scene in ambient col-
ors can also be used to detect edges that the other two techniques may
miss.
Card and Mitchell [155] perform these image processing operations in
real time by first using vertex shaders to render the world space normals
and z-depths of a scene to a texture. The normals are written as a normal
map to the color channels and the most significant byte of z-depths as the
alpha channel.
Once this image is created, the next step is to find the silhouette, bound-
ary, and crease edges. The idea is to render a screen-filling quadrilateral
with the normal map and the z-depth map (in the alpha channel) and de-
tect edge discontinuities [876]. The same texture is sampled six times in a
single pass, by sending six pairs of texture coordinates down with the
quadrilateral, and a Sobel edge detection filter [422] is applied to these
samples. This filter is actually applied twice to
the texture, once along each axis, and the two resulting images are com-
posited. One other feature is that the thickness of the edges generated can
be expanded or eroded by using further image processing techniques [155].
See Figure 11.11 for some results.
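Although the actual implementation is a pixel shader, the following C++
sketch shows the same kind of Sobel edge detection applied on the CPU to a
buffer holding the normal in the rgb channels and the high byte of z-depth in
the alpha channel. For brevity it combines the two filter directions in a
single pass, and the edge threshold is an arbitrary assumption:

#include <algorithm>
#include <cmath>
#include <vector>

// One texel of the buffer: world-space normal in r,g,b; high byte of depth in a.
struct Texel { float r = 0, g = 0, b = 0, a = 0; };

// Returns a binary edge image: 1 where a discontinuity in normals or depth is found.
std::vector<float> detectEdges(const std::vector<Texel>& img, int w, int h,
                               float threshold = 0.25f)   // threshold is an assumption
{
    // Sobel kernels for horizontal and vertical gradients.
    const int kx[3][3] = {{-1, 0, 1}, {-2, 0, 2}, {-1, 0, 1}};
    const int ky[3][3] = {{-1, -2, -1}, {0, 0, 0}, {1, 2, 1}};

    auto at = [&](int x, int y) -> const Texel& {
        x = std::min(std::max(x, 0), w - 1);   // clamp at the borders
        y = std::min(std::max(y, 0), h - 1);
        return img[y * w + x];
    };

    std::vector<float> edges(w * h, 0.0f);
    for (int y = 0; y < h; ++y) {
        for (int x = 0; x < w; ++x) {
            float gx[4] = {}, gy[4] = {};
            for (int j = -1; j <= 1; ++j) {
                for (int i = -1; i <= 1; ++i) {
                    const Texel& t = at(x + i, y + j);
                    const float c[4] = {t.r, t.g, t.b, t.a};
                    for (int k = 0; k < 4; ++k) {
                        gx[k] += kx[j + 1][i + 1] * c[k];
                        gy[k] += ky[j + 1][i + 1] * c[k];
                    }
                }
            }
            // Large gradients mean a discontinuity in the normals (rgb) or depth (a).
            float strength = 0.0f;
            for (int k = 0; k < 4; ++k)
                strength += std::sqrt(gx[k] * gx[k] + gy[k] * gy[k]);
            edges[y * w + x] = (strength > threshold) ? 1.0f : 0.0f;
        }
    }
    return edges;
}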
This algorithm has a number of advantages. The method handles all
primitives, even curved surfaces, unlike most other techniques. Meshes do
not have to be connected or even consistent, since the method is image-
based. From a performance standpoint, the CPU is not involved in creating
and traversing edge lists.
There are relatively few flaws with the technique. For nearly edge-on
surfaces, the z-depth comparison filter can falsely detect a silhouette edge
pixel across the surface. Another problem with z-depth comparison is that
if the differences are minimal, then the silhouette edge can be missed. For
example, a sheet of paper on a desk will usually have its edges missed.
Similarly, the normal map filter will miss the edges of this piece of paper,
since the normals are identical. One way to detect this case is to add a
filter on an ambient or object ID color rendering of the scene [238]. This is
still not foolproof; for example, a piece of paper folded onto itself will still
create undetectable edges where the edges overlap [546].
Figure 11.11. The normal map (upper left) and depth map (middle left) have edge
detection applied to their values. The upper right shows edges found by processing the
normal map, the middle right from the z-depth map. The image on the lower left is
a thickened composite. The final rendering in the lower right is made by shading the
image with Gooch shading and compositing in the edges. (Images courtesy of Drew
Card and Jason L. Mitchell, ATI Technologies Inc.)
With the z-depth information being only the most significant byte,
thicker features than a sheet of paper can also be missed, especially in
large scenes where the z-depth range is spread. Higher precision depth
information can be used to avoid this problem.
11.2.4 Silhouette Edge Detection
Most of the techniques described so far have the disadvantage of needing
two passes to render the silhouette. For procedural geometry methods, the
second, backfacing pass typically tests many more pixels than it actually
shades. Also, various problems arise with thicker edges, and there is little
control of the style in which these are rendered. Image methods have
similar problems with creating thick lines. Another approach is to detect
the silhouette edges and render them directly. This form of silhouette edge
rendering allows more fine control of how the lines are rendered. Since
the edges are independent of the model, it is possible to create effects
such as having the silhouette jump in surprise while the mesh is frozen in
shock [1382].
A silhouette edge is one in which one of the two neighboring triangles
faces toward the viewer and the other faces away. The test is
    (n₀ · v > 0) ≠ (n₁ · v > 0),                         (11.1)

where n₀ and n₁ are the two triangle normals and v is the view direction
from the eye to the edge (i.e., to either endpoint). For this test to work
correctly, the surface must be consistently oriented (see Section 12.3).
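In code, Equation 11.1 is a simple comparison of signs; a minimal sketch, with
assumed vector helpers, follows:

struct Vec3 { float x = 0, y = 0, z = 0; };

static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }

// Equation 11.1: the edge is a silhouette edge if one adjacent face points
// toward the viewer and the other points away.
bool isSilhouetteEdge(Vec3 n0, Vec3 n1, Vec3 edgePoint, Vec3 eye)
{
    Vec3 v = sub(edgePoint, eye);   // view direction from the eye to the edge
    return (dot(n0, v) > 0.0f) != (dot(n1, v) > 0.0f);
}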
The standard method for finding the silhouette edges in a model is
to loop through the list of edges and perform this test [822]. Lander [726]
notes that a worthwhile technique is to cull out edges that are inside planar
polygons. That is, given a connected triangle mesh, if the two neighboring
triangles for an edge lie in the same plane, do not add this edge to the list
of edges to test for being a silhouette edge. Implementing this test on a
simple clock model dropped the edge count from 444 edges to 256.
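A sketch of this preprocessing step is shown below; the Edge structure, the
face-normal array, and the tolerance are assumptions. The surviving candidate
edges are then the only ones run through the Equation 11.1 test each frame:

#include <vector>

struct Vec3 { float x = 0, y = 0, z = 0; };
static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

struct Edge {
    int v0, v1;        // endpoint vertex indices
    int face0, face1;  // indices of the two adjacent triangles
};

// Keep only edges whose adjacent faces are not coplanar; an edge between two
// coplanar faces can never become a silhouette edge, so it need not be tested.
std::vector<Edge> cullPlanarEdges(const std::vector<Edge>& edges,
                                  const std::vector<Vec3>& faceNormals,
                                  float epsilon = 1e-4f)   // tolerance is an assumption
{
    std::vector<Edge> candidates;
    for (const Edge& e : edges) {
        // For unit normals of two triangles sharing an edge, a dot product of 1
        // means the triangles lie in the same plane.
        if (dot(faceNormals[e.face0], faceNormals[e.face1]) < 1.0f - epsilon)
            candidates.push_back(e);
    }
    return candidates;
}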
There are other ways to improve efficiency of silhouette edge searching.
Buchanan and Sousa [144] avoid the need for doing separate dot product
tests for each edge by reusing the dot product test for each individual
face. Markosian et al. [819] start with a set of silhouette loops and use a
randomized search algorithm to update this set. For static scenes, Aila and
Miettinen [4] take a different approach, associating a valid distance with
each edge. This distance is how far the viewer can move and still have the
silhouette or interior edge maintain its state. By careful caching, silhouette
recomputation can be minimized.
In any model each silhouette always consists of a single closed curve,
called a silhouette loop. It follows that each silhouette vertex must have
an even number of silhouette edges [12]. Note that there can be more than
one silhouette curve on a surface. Similarly, a silhouette edge can belong
to only one curve. This does not necessarily mean that each vertex on the
silhouette curve has only two incoming silhouette edges. For example, a
curve shaped like a figure eight has a center vertex with four edges. Once
an edge has been found in each silhouette, this edge’s neighbors are tested
to see whether they are silhouette edges as well. This is done repeatedly
until the entire silhouette is traced out.
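A minimal sketch of this tracing step follows, assuming an edge list, a
per-vertex list of incident edges, and the per-edge results of Equation 11.1
are already available; all names are assumptions:

#include <utility>
#include <vector>

// Starting from one known silhouette edge, repeatedly test the neighboring
// edges and follow those that are also silhouette edges until the loop closes.
std::vector<int> traceSilhouetteLoop(
    int startEdge,
    const std::vector<std::pair<int, int>>& edgeVerts,   // endpoints of each edge
    const std::vector<std::vector<int>>& edgesAtVertex,  // incident edges per vertex
    const std::vector<bool>& isSilhouette)               // result of Equation 11.1
{
    std::vector<int> loop;
    std::vector<bool> visited(edgeVerts.size(), false);
    std::vector<int> stack = {startEdge};
    while (!stack.empty()) {
        int e = stack.back();
        stack.pop_back();
        if (visited[e] || !isSilhouette[e])
            continue;
        visited[e] = true;
        loop.push_back(e);
        // Consider the edges sharing either endpoint; silhouette neighbors extend the loop.
        int ends[2] = {edgeVerts[e].first, edgeVerts[e].second};
        for (int v : ends)
            for (int next : edgesAtVertex[v])
                stack.push_back(next);
    }
    return loop;
}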
If the camera view and the objects move little from frame to frame,
it is reasonable to assume that the silhouette edges from previous frames
might still be valid silhouette edges. Therefore, a fraction of these can be
tested to find starting silhouette edges for the next frame. Silhouette loops
are also created and destroyed as the model changes orientation. Hall [493]
discusses detection of these, along with copious implementation details.
Compared to the brute-force algorithm, Hall reported as much as a seven
times performance increase. The main disadvantage is that new silhouette
loops can be missed for a frame or more if the search does not find them.
The algorithm can be biased toward better speed or quality.
Once the silhouettes are found, the lines are drawn. An advantage of
explicitly finding the edges is that they can be rendered with line drawing,
textured impostors (see Section 10.7.1), or any other method desired. Bi-
asing of some sort is needed to ensure that the lines are properly drawn in
front of the surfaces. If thick edges are drawn, these can also be properly
capped and joined without gaps. This can be done by drawing a screen-
aligned circle at each silhouette vertex [424].
One flaw of silhouette edge drawing is that it accentuates the polygonal
nature of the models. That is, it becomes more noticeable that the model’s
silhouette is made of straight lines. Lake et al. [713] give a technique
for drawing curved silhouette edges. The idea is to use different textured
strokes depending on the nature of the silhouette edge. This technique
works only when the objects themselves are given a color identical to the
background; otherwise the strokes may form a mismatch with the filled
areas. A related flaw of silhouette edge detection is that it does not work
for vertex blended, N-patch, or other accelerator-generated surfaces, since
the polygons are not available on the CPU.
Another disadvantage of explicit edge detection is that it is CPU in-
tensive. A major problem is the potentially nonsequential memory access.
It is difficult, if not impossible, to order faces, edges, and vertices simulta-
neously in a manner that is cache friendly [1382]. To avoid CPU process-
ing each frame, Card and Mitchell [155] use the vertex shader to detect
and render silhouette edges. The idea is to send every edge of the model
down the pipeline as a degenerate quadrilateral, with the two adjoining
triangle normals attached to each vertex. When an edge is found to be a
silhouette edge, the vertex shader moves the quadrilateral's vertices apart
so that it forms a thin, visible fin; otherwise the quadrilateral stays
degenerate and generates no pixels.
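A CPU-side emulation of the per-vertex test that such a vertex shader performs
might look as follows; the offset direction (here the vertex normal) and the
fin width are assumptions, not details from the cited implementation:

struct Vec3 { float x = 0, y = 0, z = 0; };
static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3 scaleAdd(Vec3 p, Vec3 d, float s) {
    return {p.x + d.x * s, p.y + d.y * s, p.z + d.z * s};
}

// Each vertex of the degenerate edge quad carries both adjoining face normals.
// If the edge is a silhouette edge (Equation 11.1), the vertex is pushed
// outward to open the quad into a visible fin; otherwise it stays put and the
// quad remains degenerate.
Vec3 expandEdgeQuadVertex(Vec3 position, Vec3 vertexNormal,
                          Vec3 faceNormal0, Vec3 faceNormal1,
                          Vec3 eye, float finWidth)
{
    Vec3 v = sub(position, eye);   // view vector from the eye to the vertex
    bool silhouette = (dot(faceNormal0, v) > 0.0f) != (dot(faceNormal1, v) > 0.0f);
    return silhouette ? scaleAdd(position, vertexNormal, finWidth) : position;
}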