660 14. Acceleration Algorithms
Scene graphs themselves can be used to provide some computational
efficiency. A node in the scene graph often has a bounding volume (BV), and
is thus quite similar to a BVH. A leaf in the scene graph stores geometry.
However, one often allows this geometry to be encoded in any spatial data
structure that is desired. So, a leaf may hold an entire BSP tree that stores
the geometry of, say, a car.
It is important to realize that entirely unrelated efficiency schemes can
be used alongside a scene graph. This is the idea of spatialization, in which
the user’s scene graph is augmented with a separate data structure (e.g.,
BSP tree, BVH, etc.) created for a different task, such as faster culling or
picking. The leaf nodes, where most models are located, are shared, so the
expense of an additional spatial efficiency structure is relatively low.
Another idea related to scene graphs is the display list. In
OpenGL, objects are put into a special display list structure in order to
precompile them into a form more efficient to render. Objects in one node
in a scene graph can be put into a display list for convenience and speed.
14.2 Culling Techniques
To cull means to “remove from a flock,” and in the context of computer
graphics, this is exactly what culling techniques do. The flock is the whole
scene that we want to render, and the removal is limited to those portions
of the scene that are not considered to contribute to the final image. The
rest of the scene is sent through the rendering pipeline. Thus, the term visi-
bility culling is also often used in the context of rendering. However, culling
can also be done for other parts of a program. Examples include collision
detection (by doing less accurate computations for offscreen or hidden ob-
jects), physics computations, and AI. Here, only culling techniques related
to rendering will be presented. Examples of such techniques are backface
culling, view frustum culling, and occlusion culling. These are illustrated
in Figure 14.9. Backface culling eliminates polygons facing away from the
viewer. This is a simple technique that operates on only a single polygon
at a time. View frustum culling eliminates groups of polygons outside the
view frustum. As such, it is a little more complex. Occlusion culling elim-
inates objects hidden by groups of other objects. It is the most complex
culling technique, as it requires computing how objects affect each other.
The actual culling can theoretically take place at any stage of the rendering pipeline, and for some occlusion culling algorithms, it can even be precomputed. For culling algorithms that are implemented in hardware, we can sometimes only enable/disable, or set some parameters for, the culling function. For full control, the programmer can implement the algorithm in the application stage (on the CPU). Assuming the bottleneck is not on the CPU, the fastest polygon to render is the one never sent down the accelerator's pipeline. Culling is often achieved by using geometric calculations, but is in no way limited to these. For example, an algorithm may also use the contents of the frame buffer.

Figure 14.9. Different culling techniques. Culled geometry is dashed. (Illustration after Cohen-Or et al. [185].)
The ideal culling algorithm would send only the exact visible set (EVS) of primitives through the pipeline. In this book, the EVS is defined as all primitives that are partially or fully visible. One data structure that allows for ideal culling is the aspect graph, from which the EVS can be extracted, given any point of view [398]. Creating such data structures is possible in theory, but not really in practice, since the worst-case time complexity can be as bad as O(n^9) [185]. Instead, practical algorithms attempt to find a set, called the potentially visible set (PVS), that is a prediction of the EVS. If the PVS fully includes the EVS, so that only invisible geometry is discarded, the PVS is said to be conservative. A PVS may also be approximate, in which case the EVS is not fully included. This type of PVS may therefore generate incorrect images. The goal is to make these errors as small as possible. Since a conservative PVS always generates correct images, it is often considered more useful. By overestimating or approximating the EVS, the idea is that the PVS can be computed much faster. The difficulty lies in how these estimations should be done to gain overall performance. For example, an algorithm may treat geometry at different granularities, i.e., polygons, whole objects, or groups of objects. When a PVS has been found, it is rendered using the Z-buffer, which resolves the final per-pixel visibility [185].
In the following sections, we treat backface and clustered backface
culling, hierarchical view frustum culling, portal culling, detail culling, and
occlusion culling.
14.2.1 Backface Culling
Imagine that you are looking at an opaque sphere in a scene. Approxi-
mately half of the sphere will not be visible. The obvious conclusion from
this observation is that what is invisible need not be rendered since it does
not contribute to the image. Therefore, the back side of the sphere need
not be processed, and that is the idea behind backface culling. This type
of culling can also be done for whole groups at a time, and so is called
clustered backface culling.
All backfacing polygons that are part of a solid opaque object can be
culled away from further processing, assuming the camera is outside of, and
does not penetrate (i.e., near clip into), the object. A consistently oriented
polygon (see Section 12.3) is backfacing if the projected polygon is known
to be oriented in, say, a counterclockwise fashion in screen space. This test
can be implemented by computing the normal of the projected polygon in
two-dimensional screen space: n = (v1 − v0) × (v2 − v0). This normal will either be (0, 0, a) or (0, 0, −a), where a > 0. If the negative z-axis is pointing into the screen, the first result indicates a frontfacing polygon.
This test can also be formulated as a computation of the signed area of
the polygon (see Section A.5.4). Either culling test can be implemented
immediately after the screen-mapping procedure has taken place (in the
geometry stage). Backface culling decreases the load on the rasterizer since
we do not have to scan convert the backfacing polygons. But the load on
the geometry stage might increase because the backface computations are
done there.
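As a minimal sketch of the screen-space test (not code from the book; function names are illustrative), the signed area of the projected polygon can be computed with the shoelace formula, with a negative result marking a backfacing polygon under the counterclockwise-is-frontfacing convention:

```python
def signed_area_2d(verts):
    """Signed area of a 2D polygon (shoelace formula).
    Positive when the vertices wind counterclockwise."""
    area2 = 0.0
    n = len(verts)
    for i in range(n):
        x0, y0 = verts[i]
        x1, y1 = verts[(i + 1) % n]
        area2 += x0 * y1 - x1 * y0
    return area2 / 2.0

def is_backfacing_screen(verts):
    """Backfacing if the projected polygon winds clockwise in screen
    space, i.e., its signed area is negative (assuming consistently
    counterclockwise-ordered frontfaces)."""
    return signed_area_2d(verts) < 0.0
```

Only the sign is needed, so the division by two could be skipped in practice.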
Another way to determine whether a polygon is backfacing is to create
a vector from an arbitrary point on the plane in which the polygon lies (one
of the vertices is the simplest choice) to the viewer's position.³ Compute
the dot product of this vector and the polygon’s normal. A negative dot
product means that the angle between the two vectors is greater than π/2
radians, so the polygon is not facing the viewer. This test is equivalent
to computing the signed distance from the viewer’s position to the plane
of the polygon (see Section A.5.2). If the sign is positive, the polygon
is frontfacing. Note that the distance is obtained only if the normal is
normalized, but this is unimportant here, as only the sign is of interest.
These culling techniques are illustrated in Figure 14.10.
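The eye-space test above can be sketched as follows (a minimal illustration, not the book's code; names are my own). Only the sign of the dot product matters, so the normal need not be normalized:

```python
def is_backfacing_eye(polygon_vertex, polygon_normal, eye):
    """Eye-space backface test: form the vector from a point on the
    polygon's plane to the viewer and dot it with the polygon normal.
    A negative dot product means the angle between the two vectors
    exceeds pi/2, so the polygon faces away from the viewer."""
    to_eye = tuple(e - v for e, v in zip(eye, polygon_vertex))
    d = sum(n * t for n, t in zip(polygon_normal, to_eye))
    return d < 0.0
```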
In the article “Backface Culling Snags” [105], Blinn points out that these two tests are geometrically the same. Both compute the dot product between the normal and the vector from a point on the polygon to the eye. In the test that is done in screen space, the eye has been transformed to (0, 0, ∞), and the dot product is thus only the z-component of the polygon vector in screen space. In theory, what differentiates these tests is the space where the tests are computed—nothing else. In practice, the screen-space test is often safer, because edge-on polygons that appear to face slightly backward in eye space can become slightly forward in screen space. This happens because the eye-space coordinates get rounded off to screen-space integer pixel or subpixel coordinates.

³For orthographic projections, the vector to the eye position is replaced with the negative view direction, which is constant for the scene.

Figure 14.10. Two different tests for determining whether a polygon is backfacing. The left figure shows how the test is done in screen space. The triangle and the quadrilateral are frontfacing, while the seven-sided polygon is backfacing and can be omitted from rasterization. The right figure shows how the backface test is done in eye space. Polygon A is backfacing, while B and C are frontfacing.
Using an API such as OpenGL or DirectX, backface culling is normally
controlled with a few functions that either enable backface or frontface
culling or disable all culling. Be aware that a mirroring transform (i.e.,
a negative scaling operation) turns backfacing polygons into frontfacing
ones and vice versa [105] (see Section 4.1.3). Finally, it is worth noting
that DirectX Shader Model 3.0 introduces a register that lets pixel shader
programs know which side of a triangle is visible. Prior to this addition, the main way to display two-sided objects properly was to render them twice, first culling backfaces, then culling frontfaces and reversing the normals.
A common misconception about standard backface culling is that it cuts
the number of polygons rendered by about half. While backface culling will
remove about half of the polygons in many objects, it will provide little
gain for some types of models. For example, the walls, floor, and ceiling
of interior scenes are usually facing the viewer, so there are relatively few
backfaces of these types to cull in such scenes. Similarly, with terrain rendering, most of the polygons are often visible; only those on the back sides of hills or ravines benefit from this technique.
While backface culling is a simple technique for avoiding rasterizing in-
dividual polygons, it would be even faster if the CPU could decide with
a single test if a whole set of polygons should be sent through the entire
Figure 14.11. Left: a set of polygons and their normals. Middle-left: the normals are
collected (top), and a minimal cone (bottom), defined by one normal n, and a half-angle,
α, is constructed. Middle-right: the cone is anchored at a point a, and truncated so
that it also contains all points on the polygons. Right: a cross section of a truncated
cone. The light gray region on the top is the frontfacing cone, and the light gray region
at the bottom is the backfacing cone. The points f and b are respectively the apexes of
the front and backfacing cones.
pipeline or not. Such techniques are called clustered backface culling algo-
rithms. The basic concept that many such algorithms use is the normal
cone [1173]. For some section of a surface, a truncated cone is created
that contains all the normal directions and all the points. See Figure 14.11
for an example. As can be seen, a cone is defined by a normal, n, a half-angle, α, an anchor point, a, and some offset distances along the normal that truncate the cone. In the right part of Figure 14.11, a cross
section of a normal cone is shown. Shirman and Abi-Ezzi [1173] prove that
if the viewer is located in the frontfacing cone, then all faces in the cone
are frontfacing, and similarly for the backfacing cone. There has been fur-
ther work in this field by a number of researchers [612, 704, 1103]. Such
techniques are less useful on modern GPUs, where a model is usually represented by a few complex meshes rather than many smaller patches. They can still be useful in some situations, such as culling entire backfacing terrain tiles.
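The cone-membership test can be sketched as follows. This is a minimal illustration of the idea in [1173], not their code; it assumes the frontfacing and backfacing apex points f and b have been precomputed from the cluster's geometry, and all names are my own. A viewer inside the frontfacing cone (which opens around n with half-angle π/2 − α from apex f) sees every face in the cluster as frontfacing; inside the backfacing cone, the whole cluster can be culled:

```python
import math

def in_cone(eye, apex, axis, half_angle):
    """True if eye lies inside the cone with the given apex,
    unit-length axis, and half-angle."""
    v = tuple(e - a for e, a in zip(eye, apex))
    length = math.sqrt(sum(c * c for c in v))
    if length == 0.0:
        return True  # eye at the apex: treat as inside
    cos_angle = sum(c * ax for c, ax in zip(v, axis)) / length
    return cos_angle >= math.cos(half_angle)

def classify_cluster(eye, n, alpha, front_apex, back_apex):
    """Clustered backface test using a normal cone (axis n, half-angle
    alpha). Returns "frontfacing" or "backfacing" when the viewer lies
    in the respective cone, else "inconclusive" (fall back to per-face
    tests or just render the cluster)."""
    opening = math.pi / 2.0 - alpha
    if in_cone(eye, front_apex, n, opening):
        return "frontfacing"
    if in_cone(eye, back_apex, tuple(-c for c in n), opening):
        return "backfacing"
    return "inconclusive"
```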
14.3 Hierarchical View Frustum Culling
As seen in Section 2.3.4, only primitives that are totally or partially inside
the view frustum need to be rendered. One way to speed up the rendering
process is to compare the bounding volume (BV) of each object to the
view frustum. If the BV is outside the frustum, then the geometry it
encloses can be omitted from rendering. Since these computations are
done within the CPU, this means that the geometry inside the BV does not
need to go through the geometry and the rasterizer stages in the pipeline.
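A common BV-versus-frustum test can be sketched as follows (an illustrative example, not from the book; the plane convention is an assumption): with each frustum plane stored as (a, b, c, d), unit normal pointing into the frustum, a bounding sphere is rejected when it lies entirely outside some plane.

```python
def sphere_outside_frustum(center, radius, planes):
    """Conservative sphere-vs-frustum culling test. Each plane is
    (a, b, c, d) with unit normal (a, b, c) pointing into the frustum,
    so a point p is inside the half-space when a*px + b*py + c*pz + d >= 0.
    Returns True only when the sphere is fully outside some plane; a
    sphere that passes may still be outside the frustum (near a corner),
    which is acceptable since it only means rendering a bit too much."""
    for a, b, c, d in planes:
        dist = a * center[0] + b * center[1] + c * center[2] + d
        if dist < -radius:
            return True  # entirely outside this plane: cull the BV
    return False
```

In a hierarchy, this test is applied to a node's BV first; only if the BV is not culled are its children (and ultimately the enclosed geometry) considered.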