Figure 14.29. The geometry involved in the estimation of the solid angle.
other lookup is used to actually find the silhouette vertices. After they
have been projected to the screen, the area is computed. Source code is
available on the web.
To select a good occluder, Coorg and Teller [199] estimate the solid
angle that a polygon subtends. This can also be used for LOD selection.
Their approximation is
r = \frac{a(\mathbf{n} \cdot \mathbf{v})}{\mathbf{d} \cdot \mathbf{d}}.   (14.4)
Here, a is the area of the polygon, n is the normal of the polygon, v is the
view direction vector, and d is the vector from the viewpoint to the center
of the polygon. Both v and n are assumed to be normalized. The geometry
involved is shown in Figure 14.29. The higher the value of r, the greater
the benefit. The solid angle approximation estimates the benefit because
the larger the area, the larger the value, and the value is inversely
proportional to the squared distance to the polygon. Also, the maximum
value is reached when the viewer looks at a polygon “head-on,” and the
value decreases with an increasing angle between the polygon normal and
the view direction [199].
However, this property is mostly useful for occluder selection and not for
LOD selection.
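Translated into code, Equation 14.4 is only a few lines. The following is a
minimal sketch; the Vec3 type and helper below are assumed for the example,
and n and v are taken to be unit length and oriented so that their dot
product is positive when the polygon is seen head-on:

// A minimal sketch of the occluder benefit from Equation 14.4.
struct Vec3 { float x, y, z; };

static float dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// area   : polygon area, a
// normal : unit polygon normal, n
// viewDir: unit view direction, v
// center : center of the polygon
// eye    : viewpoint
float OccluderBenefit(float area, const Vec3& normal, const Vec3& viewDir,
                      const Vec3& center, const Vec3& eye) {
    // d is the vector from the viewpoint to the polygon's center.
    Vec3 d = { center.x - eye.x, center.y - eye.y, center.z - eye.z };
    // r = a (n . v) / (d . d); a higher r suggests a better occluder.
    return area * dot(normal, viewDir) / dot(d, d);
}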
Hysteresis
Unnecessary popping can occur if the metric used to determine which LOD
to use varies from frame to frame around some value, r_i. A rapid cycling
back and forth between levels can occur.

Figure 14.30. The gray areas illustrate the hysteresis regions for the LOD technique.
This can be solved by introducing some hysteresis around the r_i value
[661, 1079]. This is illustrated in Figure 14.30 for a range-based LOD, but
the idea applies to any type. Here, the upper row of LOD ranges is used
only when r is increasing. When r is decreasing, the bottom row of ranges
is used.
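In code, hysteresis amounts to remembering last frame's choice and requiring
the metric to cross a boundary by half the band width before switching. The
sketch below is our own illustration, assuming r grows with distance and
that ranges[i] holds the boundary between LOD i and LOD i+1:

// Range-based LOD selection with hysteresis. ranges has numLods - 1
// entries; band is the width of the hysteresis region around each
// boundary. currentLod is the LOD chosen in the previous frame.
int SelectLodWithHysteresis(float r, const float* ranges, int numLods,
                            float band, int currentLod) {
    int lod = currentLod;
    // Move to a coarser LOD only once r has passed the boundary by
    // half the band; move back to a finer LOD only once it has
    // dropped below the boundary by half the band.
    while (lod < numLods - 1 && r > ranges[lod] + 0.5f * band)
        ++lod;
    while (lod > 0 && r < ranges[lod - 1] - 0.5f * band)
        --lod;
    return lod;  // anywhere in between, keep last frame's choice
}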
Other Selection Methods
Range-based and projected area-based LOD selection are the most commonly
used metrics. However, many others are possible, and some are mentioned
here. Besides projected area, Funkhouser and Séquin [371] also
suggest using the importance of an object (e.g., walls are more important
than a clock on the wall), motion, hysteresis (when switching LOD the
benefit is lowered), and focus. The viewer’s focus of attention can be an
important factor. For example, in a sports game, the figure controlling
the ball is where the user will be paying the most attention, so the other
characters can have relatively lower levels of detail [661].
Depending on the application, other strategies may be fruitful. Overall
visibility can be used, e.g., a nearby object seen through dense foliage can
be rendered with a lower LOD. More global metrics are possible, such as
limiting the overall number of highly detailed LODs used in order to stay
within a given polygon budget [661]. See the next section for more on this
topic. Other factors are visibility, colors, and textures. Perceptual metrics
can also be used to choose a LOD [1051].
14.7.3 Time-Critical LOD Rendering
It is often desirable for a rendering system to maintain a constant frame
rate. In fact, this is what is often referred to as “hard real time” or
time-critical. Such a system is given a specific amount of time, say 30 ms,
and must complete its task (e.g., render the image) within that time. When
time is up, the system has to stop processing. A hard real-time rendering
algorithm can be achieved if the objects in a scene are represented by, for
example, LODs.
Funkhouser and Séquin [371] have presented a heuristic algorithm that
adapts the selection of the level of detail for all visible objects in a scene to
meet the requirement of constant frame rate. This algorithm is predictive
in the sense that it selects the LOD of the visible objects based on desired
frame rate and on which objects are visible. Such an algorithm contrasts
with a reactive algorithm, which bases its selection on the time it took to
render the previous frame.
An object is called O and is rendered at a level of detail called L, which
gives (O, L) for each LOD of an object. Two heuristics are then defined.
One heuristic estimates the cost of rendering an object at a certain level of
detail: Cost(O, L). Another estimates the benefit of an object rendered at
a certain level of detail: Benefit(O, L). The benefit function estimates the
contribution to the image of an object at a certain LOD.
Assume the objects inside or intersecting the view frustum are called
S. The main idea behind the algorithm is then to optimize the selection
of the LODs for the objects S using the heuristically chosen functions.
Specifically, we want to maximize

\sum_{S} \mathrm{Benefit}(O, L)   (14.5)

under the constraint

\sum_{S} \mathrm{Cost}(O, L) \leq \mathrm{TargetFrameTime}.   (14.6)
In other words, we want to select the level of detail for the objects that
gives us “the best image” within the desired frame rate. Next we describe
how the cost and benefit functions can be estimated, and then we present
an optimization algorithm for the above equations.
Both the cost function and the benefit function are hard to define so
that they work under all circumstances. The cost function can be estimated
by timing the rendering of a LOD several times with different viewing pa-
rameters. See Section 14.7.2 for different benefit functions. In practice, the
projected area of the BV of the object often suffices as a benefit function.
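For example, the projected area of a bounding sphere can be approximated as
below. This small-angle estimate and its parameter names are our own
illustration, not Funkhouser and Séquin's code:

#include <cmath>

// Rough benefit estimate: the screen-space area, in pixels, of an
// object's bounding sphere. fovY is the vertical field of view in
// radians, screenHeight the viewport height in pixels, radius the
// bounding sphere radius, and distance the eye-to-center distance.
float ProjectedSphereArea(float radius, float distance,
                          float fovY, float screenHeight) {
    // Projected radius in pixels (small-angle approximation; it
    // breaks down when the sphere is very close to the eye).
    float projRadius = (screenHeight * 0.5f) * radius /
                       (distance * std::tan(fovY * 0.5f));
    const float pi = 3.14159265f;
    return pi * projRadius * projRadius;  // area of the projected disk
}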
Finally, we will discuss how to choose the level of detail for the objects
in a scene. First, we note the following: For some viewpoints, a scene may
be too complex to be able to keep up with the desired frame rate. To
solve this, we can define a LOD for each object at its lowest detail level,
which is simply an object with no primitives—i.e., we avoid rendering the
object [371]. Using this trick, we render only the most important objects
and skip the unimportant ones.
To select the “best” LODs for a scene, Equation 14.5 has to be optimized
under the constraint shown in Equation 14.6. This is an NP-complete prob-
lem, which means that to solve it correctly, the only thing to do is to test
all different combinations and select the best. This is clearly infeasible for
any kind of algorithm. A simpler, more feasible approach is to use a greedy
algorithm that tries to maximize Value = Benefit(O, L)/Cost(O, L) for
each object. This algorithm sorts all the objects inside the view frustum
by value and renders them in descending order, i.e., the one with the
highest value first. If an object has the same value for more than one
LOD, then the LOD with the highest benefit is selected and rendered. This
approach gives the most “bang for the buck.” For n objects inside the view
frustum, the algorithm runs in O(n log n) time, and it produces a solution
that is at least half as good as the best [371, 372]. Funkhouser and Séquin
also exploit frame-to-frame coherence for speeding up the sorting of the
values.
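A minimal sketch of the greedy step might look as follows. The Candidate
struct and the assumption of one candidate LOD per object are ours; the
full algorithm also picks among several LODs per object (keeping, on ties,
the highest-benefit one) and reuses last frame's ordering:

#include <algorithm>
#include <vector>

// One candidate (object, LOD) pair with its estimated heuristics.
struct Candidate {
    int   object;   // object index
    int   lod;      // LOD index for that object
    float benefit;  // Benefit(O, L)
    float cost;     // Cost(O, L), in milliseconds; assumed positive
};

// Greedy selection: take candidates in order of decreasing
// Value = Benefit/Cost until the frame-time budget is exhausted.
std::vector<Candidate> SelectLods(std::vector<Candidate> candidates,
                                  float targetFrameTime) {
    std::sort(candidates.begin(), candidates.end(),
              [](const Candidate& a, const Candidate& b) {
                  return a.benefit / a.cost > b.benefit / b.cost;
              });
    std::vector<Candidate> chosen;
    float timeUsed = 0.0f;
    for (const Candidate& c : candidates) {
        if (timeUsed + c.cost <= targetFrameTime) {
            chosen.push_back(c);  // render this (object, LOD) pair
            timeUsed += c.cost;
        }
    }
    return chosen;  // objects not chosen are simply skipped
}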
More information about LOD management and the combination of LOD
management and portal culling can be found in Funkhouser’s Ph.D. the-
sis [372]. Maciel and Shirley [806] combine LODs with impostors and
present an approximately constant-time algorithm for rendering outdoor
scenes. The general idea is that a hierarchy of different representations
(some LODs, hierarchical impostors, etc.) of an object is used. Then the
tree is traversed in some fashion to give the best image given a certain
amount of time. Mason and Blake [824] present an incremental hierar-
chical LOD selection algorithm. Again, the different representations of
an object can be arbitrary. Eriksson et al. [317] present hierarchical levels
of detail (HLODs). Using these, a scene can be rendered with constant
frame rate as well, or rendered such that the rendering error is bounded.
Wimmer and Schmalstieg present analytic formulae for selecting balanced
choices of the polygon count for continuous levels of detail [1357]. They
use Lagrange multipliers to solve the same problem Funkhouser and Séquin
solved for managing static LODs [371].
14.8 Large Model Rendering
So far it has been implied that the model that is rendered fits into the
main memory of the computer. This may not always be the case. One
example is to render a model of the earth. This is a complex topic, and
so we only point to some relevant literature. Very briefly, several nested
data structures are used. Often a quadtree-like data structure is used to
cover the surface of the earth [228, 330]. Inside each leaf node different
data structures can be used, depending on its contents. Also, in order
to maintain a reasonable frame rate, areas that are about to come into
view are paged in from disk just before they are needed. The quadtree is
used for that as well. The combination of different acceleration algorithms
is nontrivial. Aliaga et al. [17, 18, 19] have combined several algorithms
for extremely large scenes. Section 6.2.5 discusses clip-mapping, a related
technique for managing large textures. Dietrich et al. [255] provide an
excellent overview of work in this field as a whole.
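As a rough illustration of the paging idea, here is a sketch of a quadtree
walk that requests leaf data within a lookahead distance of the viewer. The
node layout and the commented-out disk request are assumptions made for the
example, not taken from any of the cited systems:

#include <memory>

// A minimal quadtree node over a square region of terrain. Leaves
// hold the actual geometry and textures (omitted here) and a flag
// saying whether that data is currently resident in memory.
struct QuadNode {
    float minX, minZ, size;              // square region covered
    std::unique_ptr<QuadNode> child[4];  // all null for a leaf
    bool resident = false;               // leaf data loaded from disk?
};

// Request paging for all leaves within lookahead distance of the
// viewer, so their data is on its way before it comes into view.
void PageIn(QuadNode& node, float viewX, float viewZ, float lookahead) {
    float cx = node.minX + 0.5f * node.size;
    float cz = node.minZ + 0.5f * node.size;
    float dx = viewX - cx, dz = viewZ - cz;
    // Conservative reach: lookahead plus the node's half-diagonal.
    float reach = lookahead + 0.7071f * node.size;
    if (dx * dx + dz * dz > reach * reach) return;  // skip subtree
    if (!node.child[0]) {                           // leaf node
        if (!node.resident) {
            // RequestLoadFromDisk(node);  // asynchronous load, assumed
            node.resident = true;
        }
        return;
    }
    for (int i = 0; i < 4; ++i)
        PageIn(*node.child[i], viewX, viewZ, lookahead);
}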
14.9 Point Rendering
In 1985, Levoy and Whitted wrote a pioneering technical report [766] where
they suggested the use of points as a new primitive used to render every-
thing. The general idea is to represent a surface using a large set of points
Figure 14.31. These models were rendered with point-based rendering, using circular
splats. The left image shows the full model of an angel named Lucy, with 10 million
vertices. However, only about 3 million splats were used in the rendering. The middle
and right images zoom in on the head. The middle image used about 40,000 splats during
rendering. When the viewer stopped moving, the result converged to the image shown
to the right, with 600,000 splats. (Images generated by the QSPlat program by Szymon
Rusinkiewicz. The model of Lucy was created by the Stanford Graphics Laboratory.)
and render these. In a subsequent pass, Gaussian filtering is performed in
order to fill in gaps between rendered points. The radius of the Gaussian
filter depends on the density of points on the surface, and the projected
density on the screen. Levoy and Whitted implemented this system on a
VAX-11/780. However, it was not until about 15 years later that point-
based rendering again became of interest. Two reasons for this resurgence
are that computing power reached a level where point-based rendering truly
made sense, and that extremely large models obtained from laser range
scanners became available [768]. Such models are initially represented as
unconnected three-dimensional points. Rusinkiewicz and Levoy present a
system called QSplat [1091], which uses a hierarchy of spheres to repre-
sent the model. This hierarchy is used for hierarchical backface and view
frustum culling, and level of detail rendering. The nodes in this tree are
compressed in order to be able to render scenes consisting of several hun-
dred million points. A point is rendered as a shape with a radius, called a
splat. Different splat shapes that can be used are squares, opaque circles,
and fuzzy circles. See Figure 14.31 for an example. Rendering may stop at
any level in the tree. The nodes at that level are rendered as splats with
the same radius as the node’s sphere. Therefore, the bounding sphere hier-
archy is constructed so that no holes are visible at all levels. Since traversal
of the tree can stop at any level, interactive frame rates can be obtained by
stopping the traversal at an appropriate level. When the user stops mov-
ing around, the quality of the rendering can be refined repeatedly until the
leaves of the hierarchy are reached. Pfister et al. [1009] present the surfel,
a surface element used as a rendering primitive.
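To make the QSplat-style traversal described above concrete, here is a
minimal sketch. The node layout and parameter names are our assumptions,
not the actual QSplat data structure, which also quantizes and compresses
the nodes and performs frustum and backface culling (omitted here):

#include <cmath>

// A bounding sphere node; numChildren is zero for a leaf.
struct SplatNode {
    float x, y, z, radius;  // bounding sphere of this subtree
    SplatNode* child[8];    // unused entries are null
    int numChildren;
};

void DrawSplat(const SplatNode&) { /* submit one splat to the GPU */ }

// Recurse until a node's sphere projects to no more than
// maxSplatPixels on screen, then draw it as a single splat with the
// sphere's radius. pixelsPerUnit converts world size at unit distance
// to pixels (it folds in screen height and field of view). Because
// the hierarchy is built so that parent spheres cover their subtrees,
// stopping at any level leaves no holes.
void RenderSplats(const SplatNode& n, float ex, float ey, float ez,
                  float pixelsPerUnit, float maxSplatPixels) {
    float dx = n.x - ex, dy = n.y - ey, dz = n.z - ez;
    float dist = std::sqrt(dx * dx + dy * dy + dz * dz);
    float projRadius = n.radius * pixelsPerUnit / dist;
    if (n.numChildren == 0 || projRadius <= maxSplatPixels) {
        DrawSplat(n);  // coarser budget -> fewer, larger splats
        return;
    }
    for (int i = 0; i < n.numChildren; ++i)
        RenderSplats(*n.child[i], ex, ey, ez,
                     pixelsPerUnit, maxSplatPixels);
}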