Note that irradiance is additive; the total irradiance from multiple directional light sources is the sum of individual irradiance values,

$$E = \sum_{k=1}^{n} E_{L_k} \cos \theta_{i_k}, \tag{5.2}$$

where $E_{L_k}$ and $\theta_{i_k}$ are the values of $E_L$ and $\theta_i$ for the kth directional light source.
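As a concrete illustration of Equation 5.2, the following C++ sketch (all type and function names are hypothetical, not from the text) accumulates the irradiance contributions of several directional lights for a single color channel. It clamps the cosine to zero for lights below the horizon, a conventional assumption not spelled out in the equation above:

```cpp
#include <algorithm>
#include <vector>

// Hypothetical helper types for illustration.
struct Vec3 { float x, y, z; };

float dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

struct DirectionalLight {
    Vec3  l;   // unit vector from the surface point toward the light
    float EL;  // E_L: irradiance measured perpendicular to l (one channel)
};

// Equation 5.2: total irradiance is the sum over all lights of
// E_L_k * cos(theta_i_k), where cos(theta_i_k) = dot(n, l_k) for a
// unit surface normal n. Lights below the horizon contribute zero.
float totalIrradiance(const Vec3& n, const std::vector<DirectionalLight>& lights) {
    float E = 0.0f;
    for (const DirectionalLight& light : lights)
        E += light.EL * std::max(dot(n, light.l), 0.0f);
    return E;
}
```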
5.3 Material
In rendering, scenes are most often presented using the surfaces of objects.
Object appearance is portrayed by attaching materials to models in the
scene. Each material is associated with a set of shader programs, textures,
and other properties. These are used to simulate the interaction of light
with the object. This section starts with a description of how light interacts
with matter in the real world, and then presents a simple material model. A
more general treatment of material models will be presented in Chapter 7.
Fundamentally, all light-matter interactions are the result of two phenomena: scattering and absorption.²
Scattering happens when light encounters any kind of optical discon-
tinuity. This may be the interface between two substances with different
optical properties, a break in crystal structure, a change in density, etc.
Scattering does not change the amount of light—it just causes it to change
direction.
Figure 5.6. Light scattering at a surface—reflection and refraction.
²Emission is a third phenomenon. It pertains to light sources, which were discussed in the previous section.
Figure 5.7. Interactions with reflected and transmitted light.
Absorption happens inside matter and causes some of the light to be
converted into another kind of energy and disappear. It reduces the amount
of light but does not affect its direction.
The most important optical discontinuity in rendering is the interface
between air and object that occurs at a model surface. Surfaces scatter
light into two distinct sets of directions: into the surface (refraction or
transmission) and out of it (reflection); see Figure 5.6 for an illustration.
In transparent objects, the transmitted light continues to travel through
the object. A simple technique to render such objects will be discussed in
Section 5.7; later chapters will contain more advanced techniques. In this
section we will only discuss opaque objects, in which the transmitted light
undergoes multiple scattering and absorption events, until finally some of
it is re-emitted back away from the surface (see Figure 5.7).
As seen in Figure 5.7, the light that has been reflected at the surface
has a different direction distribution and color than the light that was
transmitted into the surface, partially absorbed, and finally scattered back
out. For this reason, it is common to separate surface shading equations
into two terms. The specular term represents the light that was reflected at
the surface, and the diffuse term represents the light which has undergone
transmission, absorption, and scattering.
To characterize the behavior of a material by a shading equation, we
need to represent the amount and direction of outgoing light, based on the
amount and direction of incoming light.
Incoming illumination is measured as surface irradiance. We measure
outgoing light as exitance, which, similarly to irradiance, is energy per second
per unit area. The symbol for exitance is M. Light-matter interactions are
linear; doubling the irradiance will double the exitance. Exitance divided
by irradiance is a characteristic property of the material. For surfaces
that do not emit light on their own, this ratio must always be between 0
and 1. The ratio between exitance and irradiance can differ for light of
different colors, so it is represented as an RGB vector or color, commonly
called the surface color c. To represent the two different terms, shading
equations often have separate specular color ($c_{\text{spec}}$) and diffuse color ($c_{\text{diff}}$)
properties, which sum to the overall surface color c. As RGB vectors with
values between 0 and 1, these are regular colors that can be specified using
standard color picking interfaces, painting applications, etc. The specular
and diffuse colors of a surface depend on its composition, i.e., whether it is
made of steel, colored plastic, gold, wood, skin, etc.
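To illustrate, here is a minimal C++ sketch (hypothetical names, not from the text) of a material holding the two color terms. Exitance is the per-channel product of the overall surface color c and the irradiance E, reflecting the linearity described above:

```cpp
// RGB triple; color channels lie in [0, 1], while irradiance and
// exitance values are unbounded.
struct Color { float r, g, b; };

// Hypothetical material: the two terms sum to the overall surface color c.
struct Material {
    Color cDiff;  // diffuse color
    Color cSpec;  // specular color
};

// Exitance M = c * E, applied per color channel. The interaction is
// linear: doubling the irradiance E doubles the exitance M.
Color exitance(const Material& m, const Color& E) {
    Color c = { m.cDiff.r + m.cSpec.r,   // overall surface color c
                m.cDiff.g + m.cSpec.g,
                m.cDiff.b + m.cSpec.b };
    return { c.r * E.r, c.g * E.g, c.b * E.b };
}
```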
In this chapter we assume that the diffuse term has no directionality—
it represents light going uniformly in all directions. The specular term
does have significant directionality that needs to be addressed. Unlike the
surface colors that were primarily dependent on the composition of the
surface, the directional distribution of the specular term depends on the
surface smoothness. We can see this dependence in Figure 5.8, which shows
diagrams and photographs for two surfaces of different smoothness. The
beam of reflected light is tighter for the smoother surface, and more spread
out for the rougher surface. In the accompanying photographs we can
see the visual result of this on the reflected images and highlights. The
smoother surface exhibits sharp reflections and tight, bright highlights; the
rougher surface shows blurry reflections and relatively broad, dim highlights.
Figure 5.8. Light reflecting off surfaces of different smoothness.
Figure 5.9. Rendering the same object at different scales.
Shading equations typically incorporate a smoothness parameter
that controls the distribution of the specular term.
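As one concrete example of such a parameter, the sketch below uses a Blinn-Phong-style cosine power (a common choice, not the model this book develops later; Vec3 and dot are as in the earlier sketch). A larger exponent models a smoother surface, concentrating the specular term into a tighter, brighter highlight:

```cpp
#include <algorithm>
#include <cmath>

// Blinn-Phong-style distribution factor for the specular term. n is the
// unit surface normal, h the unit half vector between the light and view
// directions, and `smoothness` the cosine-power exponent. A high exponent
// models a smooth surface: sharp reflections and a tight, bright highlight.
// A low exponent models a rough surface: blurry reflections and a broad,
// dim highlight.
float specularDistribution(const Vec3& n, const Vec3& h, float smoothness) {
    return std::pow(std::max(dot(n, h), 0.0f), smoothness);
}
```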
Note that, geometrically, the objects in the Figure 5.8 photographs
appear quite similar; we do not see any noticeable irregularities in either
surface. These irregularities are present, but they are too small to see in
these photographs. Whether surface detail is rendered directly as geometry
and texture, or indirectly by adjusting parameters in a shading equation,
depends on the scale of observation. In Figure 5.9 we see several views of the
same object. In the first photograph on the left we see a magnified view of
the surface of a single leaf. If we were to render this scene, the vein patterns
would be modeled as textures. In the next photograph we see a cluster of
leaves. If we were to render this image, each leaf would be modeled as
a simple mesh; the veins would not be visible, but we would reduce the
smoothness parameter of the shading equation to take account of the fact
that the veins cause the reflected light to spread out. The next image shows
a hillside covered with trees. Here, individual leaves are not visible. For
rendering purposes, each tree would be modeled as a triangle mesh and
the shading equation would simulate the fact that random leaf orientations
scatter the light further, perhaps by using a further reduced smoothness
parameter. The final (rightmost) image shows the forested hills from a
greater distance. Here, even the individual trees are not visible. The hills
would be modeled as a mesh, and even effects such as trees shadowing other
trees would be modeled by the shading equation (perhaps by adjusting the
surface colors) rather than being explicitly rendered. Later in the book we
shall see other examples of how the scale of observation affects the rendering
of visual phenomena.
5.4 Sensor
After light is emitted and bounced around the scene, some of it is absorbed
in the imaging sensor. An imaging sensor is actually composed of many
small sensors: rods and cones in the eye, photodiodes in a digital camera,
Figure 5.10. Imaging sensor viewing a scene.
or dye particles in film. Each of these sensors detects the irradiance value
over its surface and produces a color signal. Irradiance sensors themselves
cannot produce an image, since they average light rays from all incoming
directions. For this reason, a full imaging system includes a light-proof
enclosure with a single small aperture (opening) that restricts the directions
from which light can enter and strike the sensors. A lens placed at the
aperture focuses the light so that each sensor only receives light from a
small set of incoming directions. Such a system can be seen in Figure 5.10.
The enclosure, aperture, and lens have the combined effect of causing
the sensors to be directionally specific; they average light over a small area
and a small set of incoming directions. Rather than measuring average
irradiance (the density of light flow—from all directions—per surface area),
these sensors measure average radiance. Radiance is the density of light
flow per area and per incoming direction. Radiance (symbolized as L in
equations) can be thought of as the measure of the brightness and color
of a single ray of light. Like irradiance, radiance is represented as an
RGB vector with theoretically unbounded values. Rendering systems also
“measure” (compute) radiance, similarly to real imaging systems. However,
they use a simplified and idealized model of the imaging sensor, which can
be seen in Figure 5.11.
In this model, each sensor measures a single radiance sample, rather
than an average. The radiance sample for each sensor is along a ray that
goes through a point representing the sensor and a shared point p, which
is also the center of projection for the perspective transform discussed in
Section 4.6.2. In rendering systems, the detection of radiance by the sensor
is replaced by computing radiance values along such rays.
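To make the idealized model concrete, here is a minimal C++ sketch (reusing the hypothetical Vec3 and dot helpers from the earlier sketch; the camera conventions are assumptions, not from the text) that builds, for the sensor at pixel (x, y), the ray along which a single radiance sample is taken, passing through the shared center of projection p:

```cpp
#include <cmath>

struct Ray {
    Vec3 origin;     // the shared point p (center of projection)
    Vec3 direction;  // unit vector along which radiance is sampled
};

Vec3 normalize(const Vec3& v) {
    float len = std::sqrt(dot(v, v));
    return { v.x / len, v.y / len, v.z / len };
}

// Build the sampling ray for the sensor at pixel (x, y) on a width-by-height
// grid. The image plane is assumed square, spanning [-1, 1] in both axes at
// distance `focal` in front of p, looking down -z in camera space.
Ray sensorRay(int x, int y, int width, int height, float focal, const Vec3& p) {
    float u = 2.0f * (x + 0.5f) / width  - 1.0f;  // pixel-center x in [-1, 1]
    float v = 1.0f - 2.0f * (y + 0.5f) / height;  // pixel-center y, top = +1
    Vec3 dir = normalize(Vec3{ u, v, -focal });
    return Ray{ p, dir };
}
```

A renderer would then evaluate its shading equation along each such ray to produce the radiance sample for that sensor.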