Figure 14.24. The cone in the middle is rendered using an alpha LOD. The transparency
of the cone is increased when the distance to it increases, and it finally disappears. The
images on the left are shown from the same distance for viewing purposes, while the
images to the right of the line are shown at different sizes.
decreased), and the object finally disappears when it reaches full transparency
(α = 0.0). This happens when the metric value is larger than
a user-defined invisibility threshold. There is also another threshold that
determines when an object shall start to become transparent. When the
invisibility threshold has been reached, the object need not be sent through
the rendering pipeline at all as long as the metric value remains above the
threshold. When an object has been invisible and its metric falls below the
invisibility threshold, then it decreases its transparency and starts to be
visible again.
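As a rough sketch of this fade logic (the struct, function, and threshold names below are illustrative assumptions, not taken from the text), the alpha value and the decision to skip rendering could be derived from the metric value and the two user-defined thresholds like this:

// Hypothetical alpha-LOD controller: the object starts to fade at
// fadeStart and is fully invisible beyond invisibilityThreshold.
struct AlphaLOD {
    float fadeStart;              // metric value where fading begins
    float invisibilityThreshold;  // metric value where alpha reaches 0.0

    // Returns the alpha to render with; 0.0 means the object can be
    // skipped and need not be sent through the rendering pipeline.
    float alphaFor(float metric) const {
        if (metric <= fadeStart) return 1.0f;             // fully opaque
        if (metric >= invisibilityThreshold) return 0.0f; // invisible, cull
        // Linear fade between the two thresholds (assumed; any monotonic
        // falloff would do).
        return 1.0f - (metric - fadeStart) /
                      (invisibilityThreshold - fadeStart);
    }
};

When alphaFor() returns 0.0, the object stays culled until the metric again drops below the invisibility threshold, at which point it begins to fade back in.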
The advantage of using this technique standalone is that it is experi-
enced as much more continuous than the discrete geometry LOD method,
and so avoids popping. Also, since the object finally disappears altogether
and need not be rendered, a significant speedup can be expected. The dis-
advantage is that the object entirely disappears, and it is only at this point
that a performance increase is obtained. Figure 14.24 shows an example of
alpha LODs.
One problem with using alpha transparency is that sorting by depth
needs to be done to ensure transparency blends correctly. To fade out
distant vegetation, Whatley [1350] discusses how a noise texture can be
used for screen-door transparency. This has the effect of a dissolve, with
more texels on the object disappearing as the distance increases. While
the quality is not as good as a true alpha fade, screen-door transparency
means that no sorting is necessary.
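Screen-door transparency is normally performed per pixel in a shader; the following C++ fragment only sketches the underlying test, assuming a small tiled noise pattern and an opacity value derived from distance (the pattern values and names here are hypothetical, not from Whatley's article):

#include <cstdint>

// Hypothetical 4x4 noise/dither pattern with thresholds in [0,1).
static const float kNoise4x4[16] = {
    0.03f, 0.53f, 0.16f, 0.66f,
    0.78f, 0.28f, 0.91f, 0.41f,
    0.22f, 0.72f, 0.09f, 0.59f,
    0.97f, 0.47f, 0.84f, 0.34f
};

// Returns true if the pixel at (x, y) should be kept (drawn).
// 'opacity' goes from 1.0 (fully drawn) down to 0.0 as the object fades
// out with distance; more pixels are dropped as opacity decreases,
// giving the dissolve effect without any depth sorting.
bool keepPixel(uint32_t x, uint32_t y, float opacity) {
    float threshold = kNoise4x4[(y % 4) * 4 + (x % 4)];
    return opacity > threshold;
}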
CLODs and Geomorph LODs
The process of mesh simplification can be used to create various LOD
models from a single complex object. Algorithms for performing this sim-
plification are discussed in Section 12.5.1. One approach is to create a set
of discrete LODs and use these as discussed previously. However, edge
collapse methods have an interesting property that allows other ways of
making a transition between LODs.
A model has two fewer polygons after each edge collapse operation is
performed. What happens in an edge collapse is that an edge is shrunk un-
til its two endpoints meet and it disappears. If this process is animated, a
smooth transition occurs between the original model and its slightly simpli-
fied version. For each edge collapse, a single vertex is joined with another.
Over a series of edge collapses, a set of vertices move to join other vertices.
By storing the series of edge collapses, this process can be reversed, so that
a simplified model can be made more complex over time. The reversal of
an edge collapse is called a vertex split. So one way to change the level of
detail of an object is to precisely base the number of polygons visible on
the LOD selection value. At 100 meters away, the model might consist of
1000 polygons, and moving to 101 meters, it might drop to 998 polygons.
Such a scheme is called a continuous level of detail (CLOD) technique.
There is not, then, a discrete set of models, but rather a huge set of models
available for display, each one with two fewer polygons than its more
complex neighbor. While appealing, using such a scheme in practice has some
drawbacks. Not all models in the CLOD stream look good. Polygonal
meshes, which can be rendered much more rapidly than single triangles,
are more difficult to use with CLOD techniques than with static models.
If there are a number of the same objects in the scene, then each CLOD
object needs to specify its own specific set of triangles, since it does not
match any others. Bloom [113] and Forsyth [351] discuss solutions to these
and other problems.
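One way the recorded collapse sequence might be stored and replayed is sketched below; the record layout and function names are assumptions for illustration, not a specific published scheme. Applying more records coarsens the mesh, and undoing them performs the corresponding vertex splits:

#include <vector>

// One recorded edge collapse: vertex 'from' is merged into vertex 'to',
// removing (typically) two triangles. Reversing a record is a vertex split.
struct CollapseRecord {
    int from;
    int to;
};

// Build a remap table for a given level of detail: remap[v] is the vertex
// that v has been merged into after 'numApplied' collapses (or v itself if
// it still exists at this level).
std::vector<int> buildRemap(int vertexCount,
                            const std::vector<CollapseRecord>& collapses,
                            int numApplied) {
    std::vector<int> remap(vertexCount);
    for (int v = 0; v < vertexCount; ++v) remap[v] = v;
    for (int i = 0; i < numApplied && i < (int)collapses.size(); ++i)
        remap[collapses[i].from] = collapses[i].to;
    // Flatten chains: a vertex may have been merged into one that was
    // itself collapsed by a later record.
    for (int v = 0; v < vertexCount; ++v) {
        int target = v;
        while (remap[target] != target) target = remap[target];
        remap[v] = target;
    }
    return remap;
}

Triangles whose three remapped indices are no longer distinct have degenerated and are dropped when the index buffer for that level is built.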
In a vertex split, one vertex becomes two. What this means is that
every vertex on a complex model comes from some vertex on a simpler
version. Geomorph LODs [560] are a set of discrete models created by
simplification, with the connectivity between vertices maintained. When
switching from a complex model to a simple one, the complex model’s
vertices are interpolated between their original positions and those of the
simpler version. When the transition is complete, the simpler level of detail
model is used to represent the object. See Figure 14.25 for an example of a
transition. There are a number of advantages to geomorphs. The individual
static models can be selected in advance to be of high quality, and easily
Figure 14.25. The left and right images show a low detail model and a higher detail
model. The image in the middle shows a geomorph model interpolated approximately
halfway between the left and right models. Note that the cow in the middle has the
same number of vertices and triangles as the model to the right. (Images generated
using Melax's
“Polychop” simplification demo [852].)
can be turned into polygonal meshes. Like CLOD, popping is also avoided
by smooth transitions. The main drawback is that each vertex needs to be
interpolated; CLOD techniques usually do not use interpolation, so the set
of vertex positions themselves never changes. Another drawback is that
the objects always appear to be changing, which may be distracting. This
is especially true for textured objects. Sander and Mitchell [1107] describe
a system in which geomorphing is used in conjunction with static, GPU-
resident vertex and index buffers.
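A minimal sketch of the geomorph interpolation follows, assuming each vertex of the complex model already knows the position of the vertex it maps to in the simpler model (the types and names are illustrative):

#include <vector>

struct Vec3 { float x, y, z; };

// Linearly interpolate each vertex of the detailed model toward the
// position of the vertex it collapses to in the simpler model.
// t = 0 gives the detailed shape; t = 1 matches the simpler LOD, at which
// point the simpler model can be swapped in without a pop.
void geomorph(const std::vector<Vec3>& detailedPos,
              const std::vector<Vec3>& targetPos, // positions in simpler LOD
              float t,
              std::vector<Vec3>& out) {
    out.resize(detailedPos.size());
    for (std::size_t i = 0; i < detailedPos.size(); ++i) {
        out[i].x = detailedPos[i].x + t * (targetPos[i].x - detailedPos[i].x);
        out[i].y = detailedPos[i].y + t * (targetPos[i].y - detailedPos[i].y);
        out[i].z = detailedPos[i].z + t * (targetPos[i].z - detailedPos[i].z);
    }
}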
A related idea called fractional tessellation has been finding its way onto
hardware. In such schemes, the tessellation factor for a curved surface can
be set to any floating point number, and so popping can be avoided.
Fractional tessellation has been used for Bézier patches and displacement
mapping primitives. See Section 13.6.2 for more on these techniques.
14.7.2 LOD Selection
Given that different levels of detail of an object exist, a choice must be
made for which one of them to render, or which ones to blend. This is
the task of LOD selection, and a few different techniques for this will be
presented here. These techniques can also be used to select good occluders
for occlusion culling algorithms.
In general, a metric, also called the benefit function, is evaluated for the
current viewpoint and the location of the object, and the value of this metric
picks an appropriate LOD. This metric may be based on, for example, the
projected area of the bounding volume (BV) of the object or the distance
from the viewpoint to the object. The value of the benefit function is
denoted r here. See also Section 13.6.4 on how to rapidly estimate the
projection of a line onto the screen.
Figure 14.26. The left part of this illustration shows how range-based LODs work. Note
that the fourth LOD is an empty object, so when the object is farther away than r3,
nothing is drawn, because the object is not contributing enough to the image to be
worth the effort. The right part shows a LOD node in a scene graph. Only one of the
children of a LOD node is descended based on r.
Range-Based
A common way of selecting a LOD is to associate the different LODs of an
object with different ranges. The most detailed LOD has a range from zero
to some user-defined value r1, which means that this LOD is visible when
the distance to the object is less than r1. The next LOD has a range from
r1 to r2, where r2 > r1. If the distance to the object is greater than or equal
to r1 and less than r2, then this LOD is used, and so on. Examples of four
different LODs with their ranges, and their corresponding LOD node used
in a scene graph, are illustrated in Figure 14.26.
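In code, range-based selection amounts to finding the first range whose upper bound exceeds the distance. The sketch below assumes the ranges are stored in increasing order and that a distance beyond the last range means nothing is drawn (as with the empty fourth LOD in Figure 14.26); the function name is illustrative:

#include <vector>

// ranges[i] is the upper distance bound of LOD i; ranges must be sorted
// in increasing order. Returns the LOD index to use, or -1 if the object
// is beyond the last range and should not be drawn at all.
int selectLOD(float distance, const std::vector<float>& ranges) {
    for (std::size_t i = 0; i < ranges.size(); ++i) {
        if (distance < ranges[i]) return static_cast<int>(i);
    }
    return -1; // farther than the last range: draw nothing
}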
Projected Area-Based
Another common metric for LOD selection is the projected area of the
bounding volume (or an estimation of it). Here, we will show how the
number of pixels of that area, called the screen-space coverage, can be
estimated for spheres and boxes with perspective viewing, and then present
how the solid angle of a polygon can be efficiently approximated.
Starting with spheres, the estimation is based on the fact that the size
of the projection of an object diminishes with the distance from the viewer
along the view direction. This is shown in Figure 14.27, which illustrates
how the size of the projection is halved if the distance from the viewer is
doubled. We define a sphere by its center point c and a radius r. The
viewer is located at v, looking along the normalized direction vector d. The
distance to the sphere along the view direction is simply the projection of the
sphere's center onto the view vector: d · (c − v). We also assume that the distance
from the viewer to the near plane of the view frustum is n. The near
plane is used in the estimation so that an object that is located on the near
plane returns its original size. The estimation of the radius of the projected
sphere is then
p =
nr
d · (c v)
. (14.3)
The area of the projection is thus πp
2
. A higher value selects a more detailed
LOD.
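Equation 14.3 translates directly into code; the vector type and the near-plane clamp below are assumptions made for illustration:

struct Vec3 { float x, y, z; };

static float dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// Estimated projected area of a sphere (center c, radius r), seen from
// viewpoint v along normalized view direction d, with near plane
// distance n (Equation 14.3).
float projectedSphereArea(const Vec3& c, float r,
                          const Vec3& v, const Vec3& d, float n) {
    Vec3 cv = { c.x - v.x, c.y - v.y, c.z - v.z };
    float dist = dot(d, cv);  // distance along the view direction
    // Clamp (an assumption): at or in front of the near plane, return the
    // sphere's original size, as the text states for objects on the near plane.
    if (dist <= n) return 3.14159265f * r * r;
    float p = n * r / dist;             // Equation 14.3
    return 3.14159265f * p * p;         // projected area, pi * p^2
}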
Figure 14.27. This illustration shows how the size of the projection of objects is halved
when the distance is doubled.
It is common practice to simply use a bounding sphere around an ob-
ject’s bounding box. Thin or flat objects can vary considerably in the
amount of projected area actually covered. Schmalstieg and Tobler have
developed a rapid routine for calculating the projected area of a box [1130].
The idea is to classify the viewpoint of the camera with respect to the box,
and use this classification to determine which projected vertices are in-
cluded in the silhouette of the projected box. This process is done via a
look-up table (LUT). Using these vertices, the area can be computed using
the technique presented on page 910. The classification is categorized into
three major cases, shown in Figure 14.28. Practically, this classification
is done by determining on which side of the planes of the bounding box
the viewpoint is located. For efficiency, the viewpoint is transformed into
the coordinate system of the box, so that only comparisons are needed
for classification. The results of the comparisons are put into a bitmask,
which is used as an index into a LUT. This LUT determines how many
vertices there are in the silhouette as seen from the viewpoint. Then, an-
Figure 14.28. Three cases of projection of a cube, showing one, two, and three frontfaces.
(Illustration after Schmalstieg and Tobler [1130].)
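The classification itself can be sketched as six plane comparisons producing a bitmask; the look-up table that maps the mask to silhouette vertices is not reproduced here, and the types and names below are assumptions rather than Schmalstieg and Tobler's actual code:

struct Vec3 { float x, y, z; };
struct AABox { Vec3 min, max; };

// Classify the viewpoint (already transformed into the box's coordinate
// system) against the six face planes of the box. Each bit records on
// which side of one slab the viewpoint lies; the resulting mask would
// index the silhouette look-up table of Schmalstieg and Tobler [1130].
unsigned classifyViewpoint(const Vec3& eye, const AABox& box) {
    unsigned mask = 0;
    if (eye.x < box.min.x) mask |= 1u << 0;
    if (eye.x > box.max.x) mask |= 1u << 1;
    if (eye.y < box.min.y) mask |= 1u << 2;
    if (eye.y > box.max.y) mask |= 1u << 3;
    if (eye.z < box.min.z) mask |= 1u << 4;
    if (eye.z > box.max.z) mask |= 1u << 5;
    return mask; // 0 means the viewpoint is inside the box
}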