Chapter 6

Lighting and Shading

We see objects in the real world only if light from their surface or volume reaches our eye. Cameras capture the images of objects only when light coming from them falls on their sensors. The appearance of an object in a scene, as seen by the eye or as captured in photographs, depends on the amount and the type of light coming out of the object towards the eye or the camera, respectively. So, to create realistic-looking images of our virtual scene, we must learn how to compute the amount of light coming out from every 3D object of our synthetic world.

Light coming from an object may originate from the object itself. In such a case, light is said to be emitted from the object, the process is called emission, and the object is called an emitter. An emitter is a primary source of light in the scene, like a lamp or the sun; an essential component of the visible world. However, only a few objects in a scene are emitters. Most objects redirect light reaching them from elsewhere. The most common form of light redirection is reflection. Light is said to be reflected from the surfaces of objects, and such objects are called reflectors. For an emitter, its shape and its inherent emission property determine its appearance. But for a reflector, its appearance depends not only on its shape, but also on the reflection property of its surface, the amount and the type of light incident on the surface, and the direction from which the light is incident.

In the following, after a brief introduction about what happens when light interacts with the surface of an object and is reflected by it, we will describe in depth the physics concepts that help us better understand such interaction from a mathematical point of view. Then, we will show how such a complex mathematical formulation can be simplified in order to implement it more easily, for example with the local Phong illumination model. After this, more advanced reflection models will be presented. As usual, after the theoretical explanation, we will put into practice all that we explain by putting our client under a new light, so to speak.

6.1 Light and Matter Interaction

By light we mean visible electromagnetic radiation in the wavelength range 400 to 700 nm. We have seen in Chapter 1 that this is the visible range of light. The visible light may contain radiation of only one wavelength, in which case it is also called monochromatic light, or it may be composed of many wavelengths in the visible range; then it is called polychromatic light. In the real world essentially only lasers are monochromatic; no other emitter emits light at a single wavelength. There are emitters, such as gas discharge lamps, that emit light composed of a finite number of wavelengths, but the emission of most emitters spans the whole visible range. Reflectors can be nonselective in reflection, for example white surfaces, or can be selective, for example colored surfaces. So, unless specified otherwise, by light we mean polychromatic light, and hence it must be specified as a spectrum, that is, the amount of electromagnetic radiation at every wavelength of the visible range. The spectrum is more or less continuous in wavelength. A spectral representation of light can be expensive for rendering: if the visible range is discretized at 1 nm intervals, then 300 values must be stored for every pixel of the image. Fortunately, as we have already discussed in Chapter 1, the color of a visible spectrum can be very well approximated by a linear combination of three primary light sources. So, it is common practice in computer graphics to represent everything related to a visible spectrum as an (R, G, B) triplet. We will follow this practice in the rest of this chapter.

Light, like any other electromagnetic radiation, is a form of energy. More precisely, it is a flow of radiant energy, where the energy is transported or propagated from one place (source) to another (receiver). The source could be an emitter, also called primary source, or a reflector, called secondary source. The energy flows at a speed close to 3 × 10⁸ m/s. Because of this high speed, independent of the size of the scene in the real world, light takes an insignificant amount of time to travel from one place to another. Hence, light transport is considered instantaneous. Light travels from the source to the receiver in a straight line. Because of this nature, light is often represented as straight rays. In this representation, light travels along straight rays, and changes direction only when redirected by its interaction with matter. Though the ray representation of light is an approximation, it is conceptually and computationally simple, and hence is a widely adopted model of light. The area of physics that uses the ray representation of light is known as ray optics or geometric optics. Most of the properties of light are well and easily explained by ray optics.

As previously stated, for rendering realistic-looking images, we must learn how to compute the amount of light reflected by the objects that enters the camera/eye after a certain number of direct or indirect reflections. Typically, we distinguish the reflected light into two types: the direct light, due to the reflection of light coming from one or more primary sources, and the indirect light, due to the reflection of light coming from one or more secondary sources.

When light hits the surface of an object, it interacts with the matter and, depending on the specific properties of the material, different types of interaction can take place. Figure 6.1 summarizes most of the light-matter interaction effects. A part of the light reflected from a surface point is generally distributed uniformly in all directions; this type of reflection is called diffuse reflection and it is typical of matte and dull materials such as wood, stone, paper, etc. When the material reflects the received light in a preferred direction that depends on the direction of the incident light, the reflection is called specular reflection. This is the typical reflection behavior of metals. A certain amount of light may travel into the material; this phenomenon is called transmission. A part of the transmitted light can reach the opposite side of the object and ultimately leave the material; this is the case with a transparent object and this light is named refracted light. A part of the transmitted light can also be scattered in random directions depending on the internal composition of the material; this is the scattered light. Sometimes a certain amount of scattered light leaves the material at points away from the point at which the light entered the surface; this phenomenon is called sub-surface scattering and makes the object look as if a certain amount of light were suspended inside the material. Our skin exhibits this phenomenon: if you illuminate your ear at a point from behind, you will notice that the area around that point gets a reddish glow, particularly noticeable from the front. This reddish glow is due to sub-surface scattering.

Figure 6.1


Schematization of the effects that happen when light interacts with matter.

6.1.1 Ray Optics Basics

Before discussing in detail the mathematical and physical aspects behind the lighting computation, and how to simplify the related complex formulas in order to include lighting in our rendering algorithms, we give here a brief introduction about the computational aspects of diffuse and specular reflection according to ray optics. We also give a brief treatment of refraction. The basic concepts and formulas provided here can help the reader to get a better understanding of what is stated in the following sections.

It has been common practice in computer graphics to express the complex distribution of the reflected light as a sum of two components only: one is the directionally independent uniform component, that is, the diffuse reflection, and the other is the directionally dependent component, that is, the specular reflection. So the amount of reflected light Lreflected reaching the synthetic camera is a combination of diffusely reflected light, Ldiffuse, and specularly reflected light, Lspecular:

$L_{\mathrm{reflected}} = L_{\mathrm{diffuse}} + L_{\mathrm{specular}}$   (6.1)

A part of the incident light could also be transmitted inside the material; in this case Equation (6.1) can be extended to a balance of energy:

$L_{\mathrm{outgoing}} = L_{\mathrm{reflected}} + L_{\mathrm{refracted}} = L_{\mathrm{diffuse}} + L_{\mathrm{specular}} + L_{\mathrm{refracted}}.$   (6.2)

These three components of the outgoing light will be treated in the next paragraphs.

6.1.1.1 Diffuse Reflection

Diffuse reflection is the directionally independent component of the reflected light. This means that the fraction of the incident light reflected is independent of the direction of reflection. So, diffusely reflecting surfaces look equally bright from any direction. A purely diffusive material is said to exhibit Lambertian reflection. The amount of light reflected in this way depends only on the direction of light incidence. In fact, a diffusive surface reflects the most when the incident light direction is perpendicular to the surface, and the reflection decreases as the incident light direction becomes more inclined with respect to the surface normal. This reduction is modeled as the cosine of the angle between the normal and the direction of incidence. So the amount of reflected light, Ldiffuse, from this type of surface is given by the following equation:

$L_{\mathrm{diffuse}} = L_{\mathrm{incident}}\, k_{\mathrm{diffuse}} \cos\theta$   (6.3)

where Lincident is the amount of the incident light, θ is the angle of inclination of the incident light, that is, the angle between the normal N, which indicates the surface orientation, and the incident light direction Lincident (see Figure 6.2); kdiffuse is a constant term indicating how diffusive the surface is. Using the relation between the dot product of two normalized vectors and the cosine of the angle between them, we can express the cosine term in Equation (6.3) as:

Figure 6.2


Diffuse reflection.

$\cos\theta = \mathbf{N} \cdot \mathbf{L}_{\mathrm{incident}}$   (6.4)

and this leads to the standard expression for the Lambertian reflection:

$L_{\mathrm{diffuse}} = L_{\mathrm{incident}}\, k_{\mathrm{diffuse}}\, (\mathbf{N} \cdot \mathbf{L}_{\mathrm{incident}}).$   (6.5)
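As a concrete illustration, Equation (6.5) maps directly to a few lines of code. The following JavaScript sketch works per color channel on plain three-component arrays; the function names and the array-based vector representation are illustrative choices of this sketch, not part of the book's client code.

// Minimal sketch of Lambertian (diffuse) reflection, Equation (6.5).
// n and l are normalized arrays [x, y, z] (surface normal and direction
// towards the light); lIncident and kDiffuse are RGB triplets.
function dot3(a, b) {
  return a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
}

function diffuseReflection(lIncident, kDiffuse, n, l) {
  var nDotL = Math.max(0.0, dot3(n, l)); // clamp: no light from behind
  return [
    lIncident[0] * kDiffuse[0] * nDotL,
    lIncident[1] * kDiffuse[1] * nDotL,
    lIncident[2] * kDiffuse[2] * nDotL
  ];
}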

6.1.1.2 Specular Reflection

Specular reflection is the directionally dependent component of the reflected light. The amount of specularly reflected light depends both on the incident and on the reflection direction. Figure 6.3 shows the specular reflection for an ideal specular surface, that is, a mirror. In this case the material reflects the incident light exactly at the mirror angle, equal to the angle of incidence. In the non-ideal case the specular reflection is partially diffused around the mirror direction (see Figure 6.3, on the right). The mirror direction is computed by taking into account the geometry of the isosceles triangle formed by the normalized vectors involved. Using simple vector algebra, we can express the mirror reflection direction R as:

$\mathbf{R} = 2\,\mathbf{N}\,(\mathbf{N} \cdot \mathbf{L}) - \mathbf{L}$   (6.6)

Figure 6.3


Specular reflection. (Left) Perfect mirror. (Right) Non-ideal specular material.

The vectors used in Equation (6.6) are shown in Figure 6.4. The equation is easily understood by noting that the direction R can be obtained by adding to the normalized vector −L two times the vector x = N(N · L), which corresponds to the edge BC of the triangle ABC shown in the figure.
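Equation (6.6) is just as direct to translate into code. A minimal JavaScript sketch, assuming normalized input vectors; the names are illustrative and not taken from the client code.

// Mirror reflection direction, Equation (6.6): R = 2 N (N . L) - L.
// n and l are normalized arrays [x, y, z]; l points from the surface
// towards the light.
function reflectDirection(n, l) {
  var nDotL = n[0] * l[0] + n[1] * l[1] + n[2] * l[2];
  return [
    2.0 * n[0] * nDotL - l[0],
    2.0 * n[1] * nDotL - l[1],
    2.0 * n[2] * nDotL - l[2]
  ];
}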

Figure 6.4


Mirror direction equation explained.

6.1.1.3 Refraction

Refraction happens when a part of the light is not reflected but passes through the material surface. The difference in the material properties causes the light direction to change when the light crosses from one material to the other. The amount of refracted light depends strictly on the material properties of the surface hit; the modification of the light direction depends both on the material in which the light travels (e.g., the air, the vacuum, the water) and on the material of the surface hit. Snell's Law (Figure 6.5), also called the law of refraction, models this optical phenomenon. The name of the law derives from the Dutch astronomer Willebrord Snellius (1580–1626), but it was first accurately described by the Arab scientist Ibn Sahl, who in 984 used it to derive lens shapes that focus light with no geometric aberrations [36, 43]. Snell's Law states that:

Figure 6.5


Refraction. The direction of the refracted light is regulated by Snell’s Law.

$\eta_1 \sin\theta_1 = \eta_2 \sin\theta_2$   (6.7)

where η1 and η2 are the refractive indices of material 1 and material 2, respectively. The refractive index is a number that characterizes the speed of light inside a medium. Hence, according to Equation (6.7), it is possible to evaluate the direction change when a ray of light passes from one medium to another. It is possible to see this phenomenon directly by putting a straw inside a glass of water and looking at the glass from the side. The straw appears to have different inclinations in the air and in the water, as if it were formed by two pieces. This visual effect is caused by the difference in the refractive indices of the air and the water.
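Snell's Law also yields the refracted direction in vector form, which is how it is typically used in rendering code. A minimal JavaScript sketch, assuming normalized vectors with the incident direction pointing towards the surface; names are illustrative and total internal reflection is only signalled, not handled further.

// Refraction direction from Snell's Law (Equation 6.7), vector form.
// i: normalized incident direction pointing towards the surface,
// n: normalized surface normal pointing against i,
// eta1/eta2: refractive indices of the two media.
// Returns null on total internal reflection.
function refractDirection(i, n, eta1, eta2) {
  var eta = eta1 / eta2;
  var cos1 = -(n[0] * i[0] + n[1] * i[1] + n[2] * i[2]);
  var k = 1.0 - eta * eta * (1.0 - cos1 * cos1);
  if (k < 0.0) return null; // total internal reflection
  var cos2 = Math.sqrt(k);
  return [
    eta * i[0] + (eta * cos1 - cos2) * n[0],
    eta * i[1] + (eta * cos1 - cos2) * n[1],
    eta * i[2] + (eta * cos1 - cos2) * n[2]
  ];
}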

6.2 Radiometry in a Nutshell

In the previous section we wrote equations for a quantity of light L without actually defining the quantity. Light is the flow of radiant energy. So the fundamental measurable quantity of light is radiant flux or simply flux (conventionally indicated with Φ). This is the total amount of light passing through (reaching or leaving) an area or a volume. It is the flow of energy per unit time, and its unit is the Watt (W). It is understandable that more flux means more, and hence brighter, light. However, flux in itself is not indicative of the extent of area or volume through which the radiant energy is flowing. If the same amount of flux is flowing out from one square millimeter of area and from one square meter of area, then the smaller area must be much brighter than the larger area. Similarly, flux is also not indicative of the direction towards which the light is flowing. Light flux may be flowing through or from the area uniformly in all directions, or may be preferentially more in some directions and less in other directions, or may even be flowing along only one direction. In the case where the light is flowing out of the surface uniformly in all directions, the area looks equally bright from every direction; this is the case with matte or diffuse surfaces. But in the nonuniform case, the surface will look much brighter from some directions compared to others. Hence, we have to be more selective in terms of specifying the distribution of the flow in space and in direction. So we have to specify the exact area, or the direction, or better define density terms that specify flux per unit area and/or per unit direction. Here we introduce three density terms: irradiance, radiance and intensity.

The first density term we introduce is the area density, or flux per unit area. To distinguish light that is arriving or incident at the surface from the light that is leaving the surface, two different terms are used to specify area density. They are: irradiance and exitance. The conventional symbol for either of the quantities is E. Irradiance represents the amount of flux incident on or arriving at the surface per unit area, and exitance represents the flux leaving or passing through per unit area. They are expressed as the ratio dΦ/dA. So irradiance and exitance are related to flux by the expression

$d\Phi = E\, dA,$   (6.8)

where dΦ is the amount of flux reaching or leaving a sufficiently small (or differential) area dA. Light from a surface with nonuniform flux distribution is mostly represented by the function E(x), where x represents a point on the surface. For a surface with uniform flux, irradiance is simply the ratio of total flux and surface area, that is,

$E = \dfrac{\Phi}{A}.$   (6.9)

The unit of irradiance (or exitance) is Watt per meter squared (W·m⁻²). Radiosity (B), a term borrowed from the heat transfer field, is also used to represent areal density. To distinguish between irradiance and exitance, radiosity is often qualified with additional terms, for example incident radiosity and exiting radiosity.

The next density term is directional density, which is known as intensity (I). It represents the flux per unit solid angle exiting from a point around a direction. The solid angle (ω) represents a cone of directions originating from a point x (Figure 6.6). The base of the cone can have any shape. The unit of solid angle is the Steradian (sr). A solid angle measures 1 Steradian if the area of the cone intercepted by a sphere of radius r is r². For example, a solid angle that intercepts one square meter of a sphere of radius 1 meter measures 1 Steradian. Intensity is related to flux as:

Figure 6.6


Solid angle.

$I = \dfrac{d\Phi}{d\omega}.$   (6.10)

The unit of intensity is Watt per Steradian (W·sr⁻¹). From the definition of Steradian we can say that the sphere of directions around a point subtends a solid angle that measures 4π Steradians, since the surface area of a sphere of radius r is 4πr². So the intensity of a point light source emitting radiant flux Φ Watt uniformly in all directions around the point is Φ/4π W·sr⁻¹. The intensity can be dependent on the direction ω, in which case it will be represented as the function I(ω). We would like to bring to the attention of the reader that the term intensity is frequently used improperly to represent more or less intense light coming out of an object and is also incorrectly used in the lighting equation. The correct use of intensity is to represent the directional flux density of a point light source and it is often not the quantity of interest in lighting computation.

Flux leaving a surface may vary over the surface and along the directions. So we introduce the final density term, actually a double density term, which is the area and direction density of flux. It is known as radiance (L). It represents the flow of radiation from a surface per unit of projected area and per unit of solid angle along the direction of the flow. The term projected area in the definition of radiance indicates that the area is to be projected along the direction of flow. Depending on the orientation of the surface, the same projected area may refer to different amounts of actual area of the surface. So with exitance remaining the same, the flux leaving along a direction will be different depending on the flow direction. The projected area term in the denominator takes into account this dependence of the light flow on the surface orientation. With this definition radiance can be related to flux as:

$L(\omega) = \dfrac{d^2\Phi}{dA^{\perp}\, d\omega} = \dfrac{d^2\Phi}{dA \cos\theta\, d\omega}.$   (6.11)

where θ is the angle that the normal to the surface makes with the direction of light flow (see Figure 6.7). The unit of radiance is Watt per meter squared per Steradian (W·m⁻²·sr⁻¹). The cosine-weighted integral of the incoming radiance over the hemisphere gives the irradiance:

Figure 6.7

Radiance incoming from the direction ωi (L(ωi)). Irradiance (E) is the total radiance arriving from all the possible directions.

$E = \int_{\Omega} L(\omega) \cos\theta\, d\omega.$   (6.12)

Radiance is the quantity of actual interest in rendering. The value of a pixel in a rendered image is directly proportional to the radiance of the surface point visible through that pixel. So to compute the color of a pixel we must compute the radiance from the surface point visible to the camera through that pixel. This is the reason we used the symbol L in the previous section, and provided the equation for computing radiance from a point on the reflector surface.

6.3 Reflectance and BRDF

Most objects we come across are opaque and reflect light incident on their surfaces. The reflection property determines whether the object surface is bright or dark, colored or gray. So the designer of the synthetic world must specify the reflection property of every synthetic object in the scene. We use this property and the incident light on it to compute reflected radiance from every visible surface of the scene. There are two commonly used specifications of the surface reflection property: reflectance and BRDF. Both these properties are wavelength dependent; despite this, they are usually specified per color channel.

Reflectance, also known as hemispherical surface reflectance, is a fraction that specifies the ratio of the reflected flux to the incident flux, that is, Φr/Φi, or equivalently, the ratio of the reflected exitance to the incident irradiance, that is, Er/Ei. Notice that we use the subscripts r and i to distinguish between reflected and incident radiation. ρ and k are the two commonly used symbols for reflectance. Reflectance, by definition, does not take direction into account. Incident flux may be due to light incident from any one direction, from the whole hemisphere of directions, or from part of it. Similarly, the reflected flux may be the flux leaving towards any one direction, towards the whole hemisphere of directions, or towards a selected few directions. Because of its independence from incident and reflected direction, it is a useful property only for dull matte surfaces that reflect light almost uniformly in all directions.

Most real world reflectors have directional dependence. Directional-Hemispherical reflectance is used to represent this incident direction dependent reflection. Keeping the same symbols of reflectance, the dependence of incident direction is specified by making it a function of incident direction ωi. So we use ρ(ωi) or k(ωi) to represent directional-hemispherical reflectance, and we define it as:

$\rho(\omega_i) = \dfrac{E_r}{E_i(\omega_i)},$   (6.13)

where Ei(ωi) is the irradiance due to radiation incident from a single direction ωi.

For most reflectors, the reflected radiance over the hemisphere of directions is non-uniform and varies with the direction of incidence. So the general surface reflection function is actually a bidirectional function of incident and reflected directions. We use the bidirectional reflectance distribution function or BRDF to represent this function. The commonly used symbol for such a function is fr. It is defined as the ratio of directional radiance to directional irradiance, that is,

$f_r(\omega_i, \omega_r) = \dfrac{dL(\omega_r)}{dE(\omega_i)}$   (6.14)

where ωi is an incident direction and ωr is the reflection direction originating at the point of incidence. So the domain of each of the directions in the function is the hemisphere around the surface point, and is defined with respect to the local coordinate system set-up at the point of incidence of light (see Figure 6.8). By exploiting the relation between directional radiance and directional irradiance, that is

Figure 6.8


Bidirectional Reflectance Distribution Function (BRDF). θi and θr are the inclination angles and ϕi and ϕr are the azimuthal angles. These angles define the incident and reflection directions.

$dE(\omega_i) = L(\omega_i)\cos\theta_i\, d\omega_i$   (6.15)

we can rewrite the BRDF definition in terms of the radiance only:

$f_r(\omega_i, \omega_r) = \dfrac{dL(\omega_r)}{L(\omega_i)\cos\theta_i\, d\omega_i}$   (6.16)

The direction vectors may be specified in terms of inclination angle, θ, and azimuth angle, ϕ. This means that a direction is a function of two angular dimensions, and hence the BRDF at a certain surface point is a 4D function.

Using the definitions of the directional reflectance (6.13) and of the BRDF (6.14), we can derive the following relation between them:

$\rho(\omega_i) = \dfrac{\int_{\omega_r \in \Omega} L(\omega_r)\cos\theta_r\, d\omega_r}{E(\omega_i)} = \dfrac{E(\omega_i)\int_{\omega_r \in \Omega} f_r(\omega_i, \omega_r)\cos\theta_r\, d\omega_r}{E(\omega_i)} = \int_{\omega_r \in \Omega} f_r(\omega_i, \omega_r)\cos\theta_r\, d\omega_r.$   (6.17)

Earlier we mentioned that reflectance ρ is mostly used to specify direction-independent reflections, particularly from matte and dull surfaces. We recall that ideal diffusive surfaces are called Lambertian surfaces. For such reflectors, the reflected radiance is constant in all reflection directions, that is, $f_r(\omega_i, \omega_r) = f_r = \text{constant}$. Very few surfaces in the real world are exactly Lambertian in nature. However, the surfaces of many real-world objects approximate the Lambertian reflection property well. Diffuse or matte surfaces are widely used in rendering. We can use the directional independence to simplify the relation between reflectance and BRDF as

$\rho = \rho(\omega_i) = \int_{\omega_r \in \Omega} f_r(\omega_i, \omega_r)\cos\theta_r\, d\omega_r = f_r \int_{\omega_r \in \Omega} \cos\theta_r\, d\omega_r = f_r\, \pi$   (6.18)

or simply,

$f_r = \dfrac{\rho}{\pi}.$   (6.19)

So, for Lambertian surfaces BRDF is only a constant factor of the surface reflectance and hence is a zero-dimensional function.
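The factor π appearing in Equation (6.18) (and again in Equation (6.31) later on) is just the cosine-weighted integral over the hemisphere, which can be verified once in spherical coordinates:

$\int_{\Omega} \cos\theta\, d\omega = \int_{0}^{2\pi}\!\int_{0}^{\pi/2} \cos\theta\, \sin\theta\, d\theta\, d\phi = 2\pi \left[\frac{\sin^2\theta}{2}\right]_{0}^{\pi/2} = \pi.$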

Another ideal reflection property is exhibited by optically smooth surfaces. Such a reflection is also known as mirror reflection. From such surfaces reflection is governed by Fresnel’s law of reflection:

$L(\omega_r) = \begin{cases} R(\omega_i)\, L(\omega_i) & \text{if } \theta_i = \theta_r \\ 0 & \text{otherwise} \end{cases}$   (6.20)

where θi and θr are the inclination angles of the incident and reflection vectors, and the two vectors and the normal at the point of incidence all lie in one plane. R(ωi) is the Fresnel function of the material. We provide more detail about the Fresnel function later. Note that the direction of the reflected light, θi = θr, is the same as the mirror direction just discussed for the ideal specular material treated in Section 6.1.1. The BRDF of a perfect mirror is thus a delta function of the incident direction, and hence is a one-dimensional function.

A number of real-world surfaces exhibit invariance with respect to the rotation of the surface around the normal vector at the incident point. For such surfaces, BRDFs depend only on the three parameters θi, θr and ϕ = ϕi − ϕr, and thus they are three-dimensional in nature. To distinguish such reflectors from general four-dimensional BRDFs we call the former isotropic BRDFs and the latter anisotropic BRDFs.

Light is a form of energy. So all reflection-related functions must satisfy the law of energy conservation. That means, unless a material is an emitting source, the total reflected flux must be less than or equal to the flux incident on the surface. As a consequence, ρ ≤ 1 and

$\int_{\omega_r \in \Omega} f_r(\omega_i, \omega_r)\cos\theta_r\, d\omega_r \leq 1.$   (6.21)

The latter expression uses the relation between BRDF and reflectance. Note that while reflectance can never exceed one, the above expressions do not restrict the BRDF values to less than one. Surface BRDF values can be more than one for certain incident and outgoing directions. In fact, as we just noted, the BRDF of a mirror reflector is a delta function, that is, it is infinite along the mirror reflection direction and zero otherwise. In addition to the energy conservation property, the BRDF satisfies a reciprocity property according to which the function value is identical if we interchange the incidence and reflection directions. This property is known as Helmholtz Reciprocity.

Concluding, we point out that the BRDF can be generalized to account for subsurface scattering effects. In this case the light incident from any direction at a surface point xi gives rise to reflected radiance at another point of the surface xo along its hemispherical directions. In this case the reflection property is a function of the incident point and direction and of the exiting point and direction, fr(xi, xo, ωi, ωo), and gets the name of bidirectional scattering surface reflectance distribution function (BSSRDF). The BSSRDF is a function of eight variables instead of four, assuming that the surface is parameterized and thus that each surface point can be identified with two variables.

6.4 The Rendering Equation

We now know that for computing images we must compute radiance of the surfaces visible to the synthetic camera through each pixel. We also know that the surface reflection properties of the reflectors are mostly specified by their surface BRDF. So now all we need is an equation to compute the reflected surface radiance along the view direction. The equation for radiance reflected towards any direction ωr due to light incident from a single direction ωi is easily derived from the BRDF definition:

$dL(\omega_r) = f_r(\omega_i, \omega_r)\, dE(\omega_i) = f_r(\omega_i, \omega_r)\, L(\omega_i)\cos\theta_i\, d\omega_i.$   (6.22)

In a real-world scene light reaches every surface point of a reflector from all the directions of the hemisphere around that point. So the total reflected radiance along ωr is:

$L(\omega_r) = \int_{\omega_i \in \Omega} dL(\omega_r) = \int_{\omega_i \in \Omega} f_r(\omega_i, \omega_r)\, L(\omega_i)\cos\theta_i\, d\omega_i.$   (6.23)

This latter equation is referred to as the radiance equation or rendering equation. We may generalize this equation to include emitters in the scene and express the outgoing radiance from a surface along any outgoing direction ωo as a sum of radiance due to emission and radiance due to reflection. So the generalized rendering equation is:

$L(\omega_o) = L_e(\omega_o) + L_r(\omega_o) = L_e(\omega_o) + \int_{\omega_i \in \Omega} f_r(\omega_i, \omega_o)\, L(\omega_i)\cos\theta_i\, d\omega_i$   (6.24)

where Le and Lr are respectively radiance due to emission and reflection. We would like to point out that in this equation we replace the subscript r by o to emphasize the fact that the outgoing radiance is not restricted to reflection alone.

According to the definition of the BSSRDF previously given, the rendering equation in the most general case becomes:

$L(x_o, \omega_o) = L_e(x_o, \omega_o) + \int_{A} \int_{\omega_i \in \Omega} f_r(x_i, x_o, \omega_i, \omega_o)\, L(\omega_i)\cos\theta_i\, d\omega_i\, dA$   (6.25)

We can see that, to account for sub-surface scattering, the equation must also be integrated over the area A around xo.

6.5 Evaluate the Rendering Equation

A fundamental goal in any photo-realistic rendering system is to accurately evaluate the rendering equation previously derived. At the very least, the evaluation of this equation requires that we know L(ωi) from all incident directions. L(ωi) in the scene may originate from an emitter, or from another reflector, that is, an object of the scene. Evaluating radiance coming from another reflector would require evaluation of radiance at some other reflector, and so on, making it a recursive process. This process is made more complicated by the shadowing effect, that is, objects that could occlude other objects. Finally, for an accurate evaluation, we need to know for each point of the surface the BRDF (or the BSSRDF, for translucent materials or materials that exhibit scattering) function.
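To make the structure of the problem concrete, the following JavaScript sketch shows how the reflection integral of Equation (6.23) could be estimated at a single surface point by Monte Carlo sampling of the hemisphere. The callbacks brdf and incomingRadiance are hypothetical placeholders (the second one hides the recursive radiance query and the visibility problem), the BRDF and radiance are treated as single-channel scalars, and the hemisphere is taken around a local frame whose z axis is the normal; this is an illustration of the computation, not part of the book's client code.

// One-point Monte Carlo estimate of the reflection integral (Eq. 6.23):
// L(wr) ~ (1/N) * sum of f_r(wi, wr) * L(wi) * cos(theta_i) / pdf(wi),
// with wi sampled uniformly over the hemisphere (pdf = 1 / (2*pi)).
function estimateReflectedRadiance(brdf, incomingRadiance, wr, nSamples) {
  var sum = 0.0;
  for (var s = 0; s < nSamples; s++) {
    // uniform sample on the hemisphere around the normal (z axis here)
    var u1 = Math.random(), u2 = Math.random();
    var cosTheta = u1;                        // z component of wi
    var sinTheta = Math.sqrt(1.0 - u1 * u1);
    var phi = 2.0 * Math.PI * u2;
    var wi = [sinTheta * Math.cos(phi), sinTheta * Math.sin(phi), cosTheta];
    // divide by the pdf 1/(2*pi), i.e. multiply by 2*pi
    sum += brdf(wi, wr) * incomingRadiance(wi) * cosTheta * (2.0 * Math.PI);
  }
  return sum / nSamples;
}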

In the rest of this chapter we will use some restrictions in order to make the evaluation of the rendering equation, and so the lighting computation of our scene, simpler. First, we use the restriction that L(ωi) originates only from an emitter. In other words, we do not consider the effects of indirect light, that is, the light reflected by other objects of the scene. Second, we do not take into account the visibility of the emitters: every emitter is considered visible from any surface of the scene. Another restriction we make is the use of relatively simple mathematical reflection models for the BRDF. Global lighting computation methods, which account for the recursive nature of the rendering equation and for visibility, allowing a very accurate evaluation, will be treated in Chapter 11. We concentrate here on local lighting computation in order to focus on the basic aspects of the lighting computation such as the illumination models and the light source types. To better visualize the differences between local and global lighting effects we refer to Figure 6.9. Some global lighting effects that cannot be obtained with local lighting computation are:

Figure 6.9


Global illumination effects. Shadows, caustics and color bleeding. (Courtesy of Francesco Banterle http://www.banterle.com/francesco.)

Indirect light: As just stated, this is the amount of light received by a surface through the reflection (or diffusion) by another object.

Soft shadows: Shadows are global effects since they depend on the position of the objects with respect to each other. Real shadows are usually "soft" due to the fact that real light sources have a certain area extent and are not points.

Color bleeding: This particular effect of the indirect light corresponds to the fact that the color of an object is influenced by the color of the neighboring objects. In Figure 6.9 we can see that the sphere is green because of the neighboring green wall.

Caustics: The caustics are regions of the scene where the reflected light is concentrated. An example is the light concentrated around the base of the glass sphere (Bottom-Right of Figure 6.9).

In Section 6.7 we will derive simplified versions of local lighting equations for a few simple light sources: directional source, point or positional source, spotlight source, area source, and environment source, whose evaluations will be mostly straightforward. Then, we will describe some reflection models, starting from the basic Phong illumination model, and going to more advanced ones such as the Cook-Torrance model for metallic surfaces, the Oren-Nayar model for retro-reflective materials and the Minnaert model for velvet.

6.6 Computing the Surface Normal

As we have seen, light-matter interaction involves the normal to the surface. The normal at point p is a unit vector np perpendicular to the plane tangent to the surface at point p.

The problem with triangle meshes (and with any discrete representation) is that they are not smooth at the vertices and edges (except for the trivial case of a flat surface), which means that if we move on the surface around a vertex the tangent plane does not change in a continuous way. As a practical example consider vertex v in Figure 6.10. If we move away a little bit from v we see that we have four different tangent planes. So which is the normal (and hence the tangent plane) at vertex v? The answer to this question is “there is no correct normal at vertex v,” so what we do is to find a reasonable vector to use as normal. Here “reasonable” means two things:

Figure 6.10


How to compute vertex normals from the triangle mesh.

  • that it is close to the normal we would have on the continuous surface we are approximating with the triangle mesh;
  • that it is as much as possible independent of the specific triangulation.

The most obvious way to assign the normal at vertex v is by taking the average value of the normals of all the triangles sharing vertex v:

$\mathbf{n}_v = \dfrac{1}{|S^*(v)|} \sum_{i \in S^*(v)} \mathbf{n}_{f_i}$   (6.26)

This intuitive solution is widely used but it is easy to see that it is highly dependent on the specific triangulation. Figure 6.10.(b) shows the very same surface as Figure 6.10.(a), but normal n2 contributes more than the others to the average computation and therefore the result consequently changes. An improvement over Equation (6.26) is to weight the contribution with the triangle areas:

$\mathbf{n}_v = \dfrac{1}{\sum_{i \in S^*(v)} \mathrm{Area}(f_i)} \sum_{i \in S^*(v)} \mathrm{Area}(f_i)\, \mathbf{n}_{f_i}$   (6.27)

However, if we consider the triangulation in Figure 6.10.(c) we may see that very long triangles may have a large area and, again, influence the normal. The problem with Formula (6.27) is that parts that are far away from v contribute to the areas and hence influence its normal, while the normal should depend only on the immediate neighborhood (infinitely small in the continuous case).

This problem is avoided if we weight the normals with the angle formed by the triangle at v:

$\mathbf{n}_v = \dfrac{1}{\sum_{i \in S^*(v)} \alpha(f_i, v)} \sum_{i \in S^*(v)} \alpha(f_i, v)\, \mathbf{n}_{f_i}$   (6.28)

That said, please note that in the average situation we do not create bad tessellations just for the fun of breaking the algorithms, so even Formula (6.26) generally produces good results.
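For reference, a minimal sketch of the angle-weighted averaging of Equation (6.28) on an indexed triangle list could look like the following. The flat positions/indices layout and the function name are assumptions of this sketch, not necessarily the layout used by the book's code.

// Per-vertex normals by angle-weighted averaging (Equation 6.28).
// positions: flat array [x0,y0,z0, x1,y1,z1, ...]; indices: flat triangle
// index array. Returns a flat Float32Array of normalized per-vertex normals.
function computeAngleWeightedNormals(positions, indices) {
  var normals = new Float32Array(positions.length);
  var sub = function (a, b) { return [a[0]-b[0], a[1]-b[1], a[2]-b[2]]; };
  var cross = function (a, b) {
    return [a[1]*b[2]-a[2]*b[1], a[2]*b[0]-a[0]*b[2], a[0]*b[1]-a[1]*b[0]];
  };
  var dot = function (a, b) { return a[0]*b[0]+a[1]*b[1]+a[2]*b[2]; };
  var norm = function (a) { return Math.sqrt(dot(a, a)); };
  var vertex = function (i) {
    return [positions[3*i], positions[3*i+1], positions[3*i+2]];
  };
  for (var t = 0; t < indices.length; t += 3) {
    var ia = indices[t], ib = indices[t+1], ic = indices[t+2];
    var a = vertex(ia), b = vertex(ib), c = vertex(ic);
    var faceN = cross(sub(b, a), sub(c, a));   // (unnormalized) face normal
    var len = norm(faceN);
    if (len === 0.0) continue;                 // skip degenerate triangles
    faceN = [faceN[0]/len, faceN[1]/len, faceN[2]/len];
    // weight the face normal by the angle of the triangle at each vertex
    var corners = [[ia, a, b, c], [ib, b, c, a], [ic, c, a, b]];
    for (var k = 0; k < 3; k++) {
      var i = corners[k][0], p = corners[k][1], q = corners[k][2], r = corners[k][3];
      var e1 = sub(q, p), e2 = sub(r, p);
      var cosA = dot(e1, e2) / (norm(e1) * norm(e2));
      var angle = Math.acos(Math.max(-1.0, Math.min(1.0, cosA)));
      normals[3*i]   += angle * faceN[0];
      normals[3*i+1] += angle * faceN[1];
      normals[3*i+2] += angle * faceN[2];
    }
  }
  for (var v = 0; v < normals.length; v += 3) {  // final normalization
    var l = Math.sqrt(normals[v]*normals[v] + normals[v+1]*normals[v+1] + normals[v+2]*normals[v+2]);
    if (l > 0.0) { normals[v] /= l; normals[v+1] /= l; normals[v+2] /= l; }
  }
  return normals;
}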

Note that if the triangle mesh is created by connecting vertices placed on a known continuous surface, that is, a surface of which we know the analytic formulation, we do not need to approximate the normal from the faces, we may simply use the normal of the continuous surface computed analytically.

For example, consider the cylinder in Figure 6.11. We know the parametric function for the points on the sides:

$\mathrm{Cyl}(\alpha, r, y) = \begin{bmatrix} r\cos\alpha \\ y \\ r\sin\alpha \end{bmatrix}$

Figure 6.11


Using the known normal.

and the normal to the surface is none other than

$\mathbf{n}(\alpha, r, y) = \dfrac{\mathrm{Cyl}(\alpha, r, y) - [0.0,\, y,\, 0.0]^T}{\left\|\mathrm{Cyl}(\alpha, r, y) - [0.0,\, y,\, 0.0]^T\right\|}$

So we can use it directly without approximating it from the triangulated surface.

Please note that this is the first example of a process called redetail, that is, re-adding original information on the approximation of a surface. We will see a more complex example of redetail in Chapter 7.

6.6.1 Crease Angle

With Equations (6.26), (6.27) and (6.28), we have shown how to compute the normal at a vertex v of a triangle mesh, overcoming the problem that the surface of the triangle mesh is not smooth at the vertices. However, there may be points where the surface is not smooth itself and not because of the tessellation, for example at the points along the bases of the cylinder in Figure 6.11. In these points one normal is simply not enough, because we do not want to hide the discontinuity but represent the two (or more) orientations of the surface in the neighborhood of the vertex. So there are two questions. The first is how do we decide which vertices are not smooth because of the tessellation and which ones are not smooth because of the surface itself. This is commonly done by checking the dihedral angle between edge-adjacent faces and deciding that if the angle is too big then the edge must be along a crease of the surface (see Figure 6.12). This technique works well on the assumption that the surface is tessellated finely and well enough not to create big dihedral angles where the surface should be smooth. The second question is how to encode this in the data structure shown in Section 3.9. This is typically done in two alternative ways: the first way is to simply duplicate the vertices along a crease, assign them the same position but different normals, and this is what we will do. The second way is to encode the normal attribute on the face data, so that each face stores the normal at its vertices. This involves a useless duplication of all the normal values for all the vertices that are on smooth points (usually the vast majority) but it does not change the connectivity of the mesh.
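A sketch of the crease test between two edge-adjacent faces can be as simple as comparing the angle between their (unit) face normals with a user-chosen threshold; the function name and the threshold value below are illustrative, not taken from the book's code.

// Crease test between two edge-adjacent faces (see Figure 6.12).
// n1, n2: normalized face normals; creaseAngle: threshold in radians.
// If the deviation between the face normals exceeds the threshold, the
// shared edge is treated as a crease and its vertices get duplicated
// with separate normals.
function isCreaseEdge(n1, n2, creaseAngle) {
  var d = n1[0]*n2[0] + n1[1]*n2[1] + n1[2]*n2[2];
  d = Math.max(-1.0, Math.min(1.0, d));   // guard acos against rounding
  return Math.acos(d) > creaseAngle;      // e.g. creaseAngle = Math.PI / 4
}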

Figure 6.12


Crease angle and vertex duplication.

6.6.2 Transforming the Surface Normal

We know from Section 4.3 that generic affine transformations do not preserve angles and lengths. Consider the normal n and a tangent vector u at a point p. Those two vectors are orthogonal, hence:

$\mathbf{n}\, \mathbf{u}^T = 0$

But if we apply an affine transformation M to both, we have no guarantee that (Mn)(Mu^T) = 0. Figure 6.13 shows a practical example where a nonuniform scaling is applied. So the normal should be transformed so that it stays perpendicular to the transformed surface:

$\mathbf{n}\, M^{-1}\, M\, \mathbf{u}^T = 0$   (6.29)

$(\mathbf{n}\, M^{-1})\,(M\, \mathbf{u}^T) = 0$   (6.30)

Figure 6.13


How the normal must be transformed.

Since we prefer to express transformations as a matrix multiplying a column vector on the left, we can transpose the expression:

$(\mathbf{n}\, M^{-1})^T = (M^{-1})^T\, \mathbf{n}^T$

Hence the normal must be transformed by the inverse transpose of the matrix applied to the positions.
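In practice this means deriving a 3×3 normal matrix from the 4×4 model-view matrix. The client code shown later in this chapter simply uses the upper-left 3×3 block (SglMat4.to33), which is sufficient as long as the transformation contains only rotations, translations and uniform scalings (the normals are re-normalized in the shader anyway). A sketch of the general case, computing the inverse transpose of that block for a column-major 16-element array (the WebGL layout), is given below; the function name is illustrative.

// Normal matrix: inverse transpose of the upper-left 3x3 block of a
// column-major 4x4 model-view matrix m. Returns a column-major
// 9-element array, or null if the matrix is singular.
function normalMatrix(m) {
  // upper-left 3x3, column-major: a[col*3 + row]
  var a = [m[0], m[1], m[2],  m[4], m[5], m[6],  m[8], m[9], m[10]];
  var det = a[0]*(a[4]*a[8] - a[5]*a[7])
          - a[3]*(a[1]*a[8] - a[2]*a[7])
          + a[6]*(a[1]*a[5] - a[2]*a[4]);
  if (det === 0.0) return null; // singular transformation
  var id = 1.0 / det;
  // the inverse transpose is the cofactor matrix divided by the determinant
  return [
    (a[4]*a[8] - a[5]*a[7]) * id, -(a[3]*a[8] - a[5]*a[6]) * id,  (a[3]*a[7] - a[4]*a[6]) * id,
   -(a[1]*a[8] - a[2]*a[7]) * id,  (a[0]*a[8] - a[2]*a[6]) * id, -(a[0]*a[7] - a[1]*a[6]) * id,
    (a[1]*a[5] - a[2]*a[4]) * id, -(a[0]*a[5] - a[2]*a[3]) * id,  (a[0]*a[4] - a[1]*a[3]) * id
  ];
}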

6.7 Light Source Types

Here, we present different types of light sources and we learn how to compute lighting for each of them. In all cases we assume the material to be Lambertian. How to put the theory into practice is shown along with the description of each light source type.

We remind the reader that for Lambertian surfaces, radiance is independent of direction. For spatially uniform emitters, radiance is independent of the position on the emitting surface. This directional and positional independence of radiance gives rise to a simple relationship between radiance from a uniform Lambertian emitter, and its radiosity (or exitance), and flux. The relation is:

$E = \pi L, \qquad \Phi = E\, A = \pi L\, A.$   (6.31)

The derivation of this relationship is as follows. For a spatially uniform emitter the surface emits flux uniformly over the whole area of the emitter. So E, the flux per unit area, is constant over the whole surface; from Equation (6.8) this gives $\Phi = \int d\Phi = \int E\, dA = E \int dA = E\, A$. From the definition of radiance (6.11), $d^2\Phi = L\, dA \cos\theta\, d\omega$. The total flux from the emitter surface area is then the double integration:

$\Phi = \int\!\!\int d^2\Phi = \int\!\!\int L\, dA \cos\theta\, d\omega = \left(\int \cos\theta\, d\omega\right)\left(\int L\, dA\right) = \pi L\, A = E\, A.$   (6.32)

The relations are also the same for a reflector if the surface is Lambertian, and the reflected light is spatially uniform over the surface of the reflector.

We will now derive the equations for computing this exiting radiance from Lambertian reflectors due to the lighting coming from various light sources. In the derivation we will use the subscripts out and in to distinguish light exiting from and light incident on the reflector surface. Before doing this, we summarize below the properties of a Lambertian reflector:

  • The reflected radiance from the surface of the reflector is direction independent, i.e., Lout is constant along any direction.
  • The relation between exitance (Eout), i.e., the areal density of reflected flux, and the outgoing radiance Lout due to reflection, is $L_{out} = \frac{E_{out}}{\pi}$.
  • The relation between irradiance (Ein), i.e., the areal density of incident flux, and the outgoing radiance Lout is $L_{out} = k_D \frac{E_{in}}{\pi}$, where kD is the ratio of exiting flux to the incident flux at the surface, also known as the diffuse surface reflectance.

6.7.1 Directional Lights

Directional lights are the simplest to specify. All we have to do is define the direction from which the light will reach a surface. Then light flows from that direction to every surface point in the scene. The origin of such a light could be an emitter positioned at a very far distance, for example the stars, the sun, the moon, and so on. Along with the direction, we specify the exitance, E0, of such light sources. As the light flows only from one direction, along the path of the flow the incident irradiance on any surface area perpendicular to the path is the same as the exitance, that is,

$E_i = E_0$   (6.33)

To compute the incident irradiance on surface areas with other orientations, we must project them and use their projected area (see Figure 6.14). Thus the incident irradiance on a surface of arbitrary orientation is

$E_i = E_0 \cos\theta_i$   (6.34)

Figure 6.14


(Left) Lighting due to directional light source. (Right) Lighting due to point or positional light source.

Substituting the value of Ei in the rendering equation and using the fact that light is incident from only one direction we get:

$L(\omega_r) = f_r(\omega_i, \omega_r)\, E_0 \cos\theta_i$   (6.35)

For Lambertian surfaces BRDF is a constant, independent of incident and reflection direction. We can then use the relation between BRDF and reflectance, obtaining:

$L(\omega_r) = \dfrac{\rho}{\pi}\, E_0 \cos\theta_i$   (6.36)

6.7.2 Upgrade Your Client: Add the Sun

Until now we did not care about light in our client; we simply drew the primitives with a constant color. By adding the sun we are taking the first (small) step towards photorealism.

6.7.2.1 Adding the Surface Normal

Here we will add the attribute normal to every object in the scene for which we want to compute lighting. In order to add the normal to our objects we will simply add the lines in Listing 6.1 to the function createObjectBuffers.

25 if (createNormalBuffer) {
26  obj.normalBuffer = gl.createBuffer();
27  gl.bindBuffer(gl.ARRAY_BUFFER, obj.normalBuffer);
28  gl.bufferData(gl.ARRAY_BUFFER, obj.vertex_normal, gl.STATIC_DRAW);
29  gl.bindBuffer(gl.ARRAY_BUFFER, null);
30}

LISTING 6.1: Adding a buffer to store normals. (Code snippet from http://envymycarbook.com/chapter6/0/0.js.)

Similarly, we modify the function drawObject by adding the lines in Listing 6.2:

91 if (shader.aNormalIndex && obj.normalBuffer && shader.uViewSpaceNormalMatrixLocation) {
92  gl.bindBuffer(gl.ARRAY_BUFFER, obj.normalBuffer);
93  gl.enableVertexAttribArray(shader.aNormalIndex);
94  gl.vertexAttribPointer(shader.aNormalIndex, 3, gl.FLOAT, false, 0, 0);
95  gl.uniformMatrix3fv(shader.uViewSpaceNormalMatrixLocation, false, SglMat4.to33(this.stack.matrix));
96}

LISTING 6.2: Enabling vertex normal attribute. (Code snippet from http://envymycarbook.com/chapter6/0/0.js.)

We saw in Section 6.6 how normals can be derived from the definition of the surface (for example for isosurfaces) or derived from the tessellation. In this client we use the function computeNormals(obj) (see file code/chapters/chapters3/0/compute-normals.js), which takes an object, computes the normal per vertex as the average of the normals of the faces incident on it and creates a Float32Array called vertex_normal. In this way we can test if an object has per-vertex normals by writing:

1 if (obj.vertex_normal) ...

So we can have objects with normal per vertex. Now all we need is a program shader that uses the normal per vertex to compute lighting, which we will call lambertianShader. The vertex shader is shown in Listing 6.3.

6 precision highp float;				   
7              
8 uniform mat4 uProjectionMatrix;       
9 uniform mat4 uModelViewMatrix;       
10 uniform mat3 uViewSpaceNormalMatrix;
11 attribute vec3 aPosition;        
12 attribute vec3 aNormal;         
13 attribute vec4 aDiffuse;        
14 varying vec3 vpos;          
15 varying vec3 vnormal;          
16 varying vec4 vdiffuse ;          
17             
18 void main()            
19 {             
20 // vertex normal (in view space)      
21  vnormal = normalize(uViewSpaceNormalMatrix * aNormal); 
22             
23 // color (in view space)        
24 vdiffuse = aDiffuse;         
25             
26 // vertex position (in view space)      
27 vec4 position = vec4(aPosition, 1.0);     
28 vpos = vec3(uModelViewMatrix * position);    
29             
30 // output           
31 gl_Position = uProjectionMatrix * uModelViewMatrix *
32  position;
33 }

LISTING 6.3: Vertex shader. (Code snippet from http://envymycarbook.com/chapter6/0/shader.js.)

Note that, with respect to the shader we wrote in Section 5.3.4, we added the varyings vpos and vnormal to have these values interpolated per fragment. See the fragment shader code in Listing 6.4. Both these variables are assigned coordinates in view space. For the position this means that only uModelViewMatrix is applied to aPosition, while the normal is transformed by the matrix uViewSpaceNormalMatrix. This also means that the light direction must be expressed in the same reference system (that is, in view space). We use the uniform variable uLightDirection to pass the light direction expressed in view space, which means that we take the variable this.sunLightDirection and transform it by the normal matrix.

36 shaderProgram.fragment_shader = "
37 precision highp float;          
38              
39 varying vec3 vnormal;         
40 varying vec3 vpos;          
41 varying vec4 vdiffuse;          
42 uniform vec4 uLightDirection;        
43              
44 // directional light: direction and color
45 uniform vec3 uLightColor;        
46              
47 void main()           
48 {            
49  // normalize interpolated normal       
50  vec3 N = normalize(vnormal);       
51              
52  // light vector (directional light)
53  vec3 L = normalize(-uLightDirection.xyz);     
54              
55  // diffuse component          
56  float NdotL = max(0.0, dot(N, L));      
57  vec3 lambert = (vdiffuse.xyz * uLightColor) * NdotL; 
58              
59  gl_FragColor = vec4(lambert, 1.0);       
60 }";

LISTING 6.4: Fragment shader. (Code snippet from http://envymycarbook.com/chapter6/0/shaders.js.)
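On the JavaScript side, the transformation of this.sunLightDirection into view space, before it is passed as the uLightDirection uniform, could be sketched as follows. The sketch uses plain array arithmetic so as not to assume any particular SpiderGL helper; the function name and the assumption that the direction is a 4-component array with w = 0 are illustrative.

// Transform the sun direction into view space before passing it as the
// uLightDirection uniform. viewNormalMatrix is the column-major 3x3 matrix
// used for normals (e.g. SglMat4.to33(this.stack.matrix)); dir is
// this.sunLightDirection as [x, y, z, 0].
function directionToViewSpace(viewNormalMatrix, dir) {
  var m = viewNormalMatrix;
  var v = [
    m[0] * dir[0] + m[3] * dir[1] + m[6] * dir[2],
    m[1] * dir[0] + m[4] * dir[1] + m[7] * dir[2],
    m[2] * dir[0] + m[5] * dir[1] + m[8] * dir[2]
  ];
  var len = Math.sqrt(v[0] * v[0] + v[1] * v[1] + v[2] * v[2]);
  return [v[0] / len, v[1] / len, v[2] / len, 0.0]; // keep w = 0 (direction)
}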

6.7.2.2 Loading and Shading a 3D Model

Even if we have become attached to our boxy car, we may want to introduce into the scene some nicer 3D models. SpiderGL implements the concept of a tessellated 3D model as SglModel and also provides a function to load it from a file and to enable the attributes it finds in it (color, normal and so on). This can be done with the lines shown in Listing 6.5:

158 NVMCClient.loadCarModel = function (gl, data) {
159  if (!data)
160 data = "../../../media/models/cars/ferrari.obj";
161  var that = this;
162  this.sgl_car_model = null;
163  sglRequestObj(data, function (modelDescriptor) {
164 that.sgl_car_model = new SglModel(that.ui.gl, modelDescriptor);
165 that.ui.postDrawEvent();
166 }) ;
167};

LISTING 6.5: How to load a 3D model with SpiderGl. (Code snippet from http://envymycarbook.com/chapter6/0/0.js.)

SpiderGl also provides a way to specify the shaders to use for rendering a SglModel and the values that must be passed to such shaders with an object called SglTechnique. Line 170 in Listing 6.6 shows the creation of a SglTechnique for our simple single light shader.

169 NVMCClient.createCarTechnique = function (gl) {
170  this.sgl_technique = new SglTechnique(gl, {
171 vertexShader: this.lambertianShader.vertex_shader,
172 fragmentShader: this.lambertianShader.fragment_shader,
173 vertexStreams : {
174  "aPosition": [0.0, 0.0, 0.0, 1.0],
175  "aNormal": [1.0, 0.0, 0.0, 0.0],
176  "aDiffuse": [0.4, 0.8, 0.8, 1.0],
177},
178 globals: {
179  "uProjectionMatrix": {
180  semantic: "PROJECTION_MATRIX",
181  value: this.projectionMatrix
182 },
183  "uModelViewMatrix": {
184  semantic: "WORLD_VIEW_MATRIX",
185  value: this.stack.matrix
186 },
187  "uViewSpaceNormalMatrix": {
188  semantic: "VIEW_SPACE_NORMAL_MATRIX",
189  value: SglMat4.to33(this.stack.matrix)
190 },
191  "uLightDirection": {
192  semantic: "LIGHT0_LIGHT_DIRECTION",
193  value: this.sunLightDirectionViewSpace
194 },
195  "uLightColor": {
196  semantic: "LIGHT0_LIGHT_COLOR",
197  value: [0.9, 0.9, 0.9]
198 },}});};

LISTING 6.6: The SglTechnique. (Code snippet from http://envymycarbook.com/chapter6/0/0.js.)

The parameters of the function SglTechnique are fairly self-explanatory: it takes a WebGL context and everything needed to render the model, that is: the vertex and fragment shader sources, the attributes and the list of uniform variables. Furthermore, it allows us to assign names to the uniform variables to be used in the program (referred to as semantic). In this manner we can create a correspondence between the set of names we use in our JavaScript code and the names of the uniform variables in the program shader. This will become useful when we want to use shaders written by third parties with their own naming convention: instead of going to look for all the gl.getUniformLocation calls in our code and changing the name of the uniform variable, we just define a technique for the program shader and set all the correspondences with our name set.

Now we are ready to redefine the function drawCar (See Listing 6.7):

142 NVMCClient.drawCar = function (gl) {
143  this.sgl_renderer.begin();
144  this.sgl_renderer.setTechnique(this.sgl_technique);
145  this.sgl_renderer.setGlobals({
146 "PROJECTION_MATRIX" : this.projectionMatrix ,
147 "WORLD_VIEW_MATRIX": this .stack .matrix ,
148 "VIEW_SPACE_NORMAL_MATRIX": SglMat4.to33(this.stack.matrix),
149 "LIGHT0_LIGHT_DIRECTION": this.sunLightDirectionViewSpace,
150 });
151
152  this.sgl_renderer.setPrimitiveMode("FILL");
153  this.sgl_renderer.setModel(this.sgl_car_model);
154  this.sgl_renderer.renderModel();
155  this.sgl_renderer.end();
156};

LISTING 6.7: Drawing a model with SpiderGl. (Code snippet from http://envymycarbook.com/chapter6/0/0.js.)

At line 144 we assign to the renderer the technique we have defined and at line 145 we pass the uniform values that we want to update with respect to their initial assignment in the definition of the technique. In this example LIGHT0_LIGHT_COLOR is not set since it does not change from frame to frame, while all the other variables do (note that the sun direction is updated by the server to make it change with time). Finally, at line 154 we invoke this.sgl_renderer.renderModel, which performs the rendering.

Note that there is nothing in these SpiderGL functionalities that we do not already do for the other elements of the scene (the trees, the buildings). These functions encapsulate all the steps so that it is simpler for us to write the code for rendering a model. Note that this is complementary to, but not a replacement for, directly using WebGL calls. In fact, we will keep the rest of the code as is and use the SglRenderer only for the models we load from external memory.

Figure 6.15 shows a snapshot from the client with a single directional light.

Figure 6.15


Scene illuminated with directional light. (See client http://envymycarbook.com/chapter6/0/0.html.)

6.7.3 Point Lights

As the name suggests, point light sources are specified by their position in the scene. These light sources are meant to represent small size emitters in the scene and are approximated as points. If we assume that the point light source is emitting light with uniform intensity I0 in every direction, then the expression for Ei at the reflector point located at a distance r away from the light source is:

$E_i = \dfrac{I_0 \cos\theta_i}{r^2}$   (6.37)

This expression is derived as follows: let dA be the differential area around the reflector point. The solid angle subtended by this differential area from the location of the point light source is $\frac{dA \cos\theta_i}{r^2}$. Intensity is flux per solid angle. So the total flux reaching the differential area is $I_0 \frac{dA \cos\theta_i}{r^2}$. Irradiance is the flux per unit area. So the incident irradiance, Ei, on the differential area is $E_i = \frac{I_0 \cos\theta_i}{r^2}$. From the incident irradiance, we can compute the exiting radiance as

$L(\omega_r) = f_r(\omega_i, \omega_r)\, E_i = f_r(\omega_i, \omega_r)\, \dfrac{I_0 \cos\theta_i}{r^2}$   (6.38)

If the reflector is Lambertian then

$L(\omega_r) = \dfrac{\rho}{\pi}\, \dfrac{I_0 \cos\theta_i}{r^2}$   (6.39)

Thus the rendering equation for computing direct light from a Lambertian reflector due to a point light source is

$L(\omega_r) = \dfrac{\rho}{\pi}\, \dfrac{I_0 \cos\theta_i}{r^2},$   (6.40)

that is, the reflected radiance from a perfectly diffuse reflector due to a point light source is inversely proportional to the square of the distance of the light source from the reflector and directly proportional to the cosine of the orientation of the surface with respect to the light direction. This is an important result and it is the reason why many rendering engines assume that the intensity of the light decays with the square of the distance.

6.7.4 Upgrade Your Client: Add the Street Lamps

In this update we will light a few lamps of the circuit, considered as point lights. Let us add a simple object to represent light sources (see Listing 6.8):

7 function Light(geometry , color) {
8 if (!geometry) this.geometry = [0.0, -1.0, 0.0, 0.0];
9 else this.geometry = geometry;
10  if (!color) this.color = [1.0, 1.0, 1.0, 1.0];
11  else this.color = color;
12}

LISTING 6.8: Light object. (Code snippet from http://envymycarbook.com/chapter6/1/1.js.)

Parameter geometry is a point in homogeneous coordinates that represents both directional and point lights, and color is the color of the light. We introduce the function drawLamp, which, just like drawTree, assembles the basic primitives created in Section 3.9 to make a shape that resembles a street lamp (in this case a thick cylinder with a small cube on the top). The only important change to our client is the introduction of a new shader, called lambertianMultiLightShader (see Listing 6.9), which is the same as the lambertianShader with two differences: it takes not one light but an array of nLights lights, and it handles both directional and point lights.

36 precision highp float;           
37               
38 const int uNLights = " + nLamps + ";
39 varying vec3 vnormal;           
40 varying vec3 vpos;           
41 varying vec4 vdiffuse ;          
42               
43 // positional light: position and color       
44 uniform vec4 uLightsGeometry[uNLights];       
45 uniform vec4 uLightsColor[uNLights];        
46               
47 void main()             
48 {              
49  // normalize interpolated normal        
50  vec3 N = normalize(vnormal);          
51  vec3 lambert= vec3(0,0,0);         
52  float r,NdotL;            
53  vec3 L;             
54  for (int i = 0; i < uNLights; ++i) {
55  if (abs(uLightsGeometry[i].w-1.0)<0.01) {     
56  r = 0.03*3.14*3.14*length(uLightsGeometry[i].xyz-vpos);    
57 // light vector (positional light)				 
58 L = normalize(uLightsGeometry[i].xyz-vpos);      
59 }              
60  else {            
61 L = -uLightsGeometry[i].xyz;         
62 r = 1.0;             
63}              
64 // diffuse component          
65 NdotL = max(0.0, dot(N, L))/(r*r);       
66 lambert += (vdiffuse.xyz * uLightsColor[i].xyz) * NdotL;    
67 }              
68  gl_FragColor = vec4(lambert ,1.0) ;        
69}";

LISTING 6.9: The lambertianMultiLightShader fragment shader. (Code snippet from http://envymycarbook.com/chapter6/1/shaders.js.)

In lines 44-45 the arrays of light geometry and color are declared, and in lines 54-67 we accumulate the contribution of each light to the final color. For each light we test its fourth component to check whether it is a directional or a positional light, and then we compute the vector L accordingly.

With this implementation the number of lights is limited by the size of the array we can pass to the shader program, which depends on the specific hardware. We will see other ways to pass many values to the shaders using textures (see Chapter 7). The number of lights may greatly impact the performance of the fragment shader, which must loop over the whole array and perform floating-point computations. You can test this on the device you are using simply by increasing the number of lights and observing the drop in frames per second.

Again, all the lighting computation is done in view space, so the geometry of all the lights will have to be transformed before being passed to the shader. Figure 6.16 shows a snapshot from the client showing the light of the street lamps.

Figure 6.16


Adding point light for the lamps. (See client http://envymycarbook.com/chapter6/1/1.html.)

6.7.5 Spotlights

Spotlights represent a cone of light originating from a point, so they are basically point light sources with directionally varying intensity. These light sources are specified by the position of the origin of the light source and by the direction of the central axis of the cone. The direction of the axis is also called the spot direction; the spot intensity is maximum along that direction and may fall off away from it. Additional specifications are the intensity fall-off exponent (f) and the intensity cutoff angle (β). The cutoff angle is the angle around the spot direction beyond which the spotlight intensity is zero. The exponent determines how quickly the intensity decreases for directions away from the spot direction; the intensity is computed as follows:

I(\omega_i) = I_0\, (\cos\alpha)^f \qquad (6.41)

where α is the angle between the cone axis and ωi, the direction of incidence (see Figure 6.17, on the left). Using the derivation from the previous paragraph, we can write the expression for the reflected radiance due to a spotlight as:

Figure 6.17


(Left) Lighting due to spot light source. (Right) Lighting due to area light source.

L(\omega_r) = \begin{cases} f_r(\omega_i, \omega_r)\, \dfrac{I_0 (\cos\alpha)^f \cos\theta}{r^2} & \text{if } \alpha < \beta/2 \\ 0 & \text{otherwise} \end{cases} \qquad (6.42)

If the reflector is Lambertian then

L(\omega_r) = \begin{cases} \dfrac{\rho}{\pi}\, \dfrac{I_0 (\cos\alpha)^f \cos\theta}{r^2} & \text{if } \alpha < \beta/2 \\ 0 & \text{otherwise} \end{cases} \qquad (6.43)
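A minimal GLSL sketch of Equation (6.43) could look like the following; the names (spotPos, spotDir, fallOff, cutOff) are illustrative, and the spot direction is assumed to be normalized and to point from the light towards the scene.

// Sketch of Eq. (6.43): Lambertian reflection due to a spotlight.
// cutOff is the cutoff angle beta (radians), fallOff is the exponent f.
vec3 lambertSpotLight(vec3 P, vec3 N, vec3 spotPos, vec3 spotDir,
                      vec3 I0, vec3 rho, float fallOff, float cutOff) {
  vec3 toLight = spotPos - P;
  float r = length(toLight);
  vec3 L = toLight / r;                      // direction from surface to light
  float cosAlpha = dot(spotDir, -L);         // cosine of the angle to the cone axis
  if (cosAlpha < cos(cutOff / 2.0))          // alpha >= beta/2: outside the cone
    return vec3(0.0);
  float cosTheta = max(dot(N, L), 0.0);
  float spot = pow(cosAlpha, fallOff);       // directional falloff (cos alpha)^f
  return rho / 3.14159265 * I0 * spot * cosTheta / (r * r);
}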

6.7.6 Area Lights

Area light sources are defined by their geometry, for example a sphere or a rectangle, and by the flux (if the emission is spatially and directionally uniform), the exitance function (if spatially varying but directionally uniform), or the radiance function (if varying both spatially and directionally). We will derive the lighting equation for area light sources by assuming that we know the radiance function at every point on the emitter surface. Let this radiance at point p of the emitter towards the reflector surface be Lp(ωp). Then the expression for the incident irradiance due to a differential area dAp around point p on the light source is:

\frac{L_p(\omega_p)\, \cos\theta\, \cos\theta_p\, dA_p}{r_p^2} \qquad (6.44)

This expression is derived as follows (refer to Figure 6.17, on the right). Let dAp be the differential area around a point p on the light source. The solid angle subtended by the differential reflector area dA at p is dA cos θ / rp². The projection of dAp along the direction towards the receiver is dAp cos θp. Radiance is the flux per unit projected area per unit solid angle. So the total flux reaching dA at the reflector is

L_p(\omega_p)\, dA_p \cos\theta_p\, \frac{dA\, \cos\theta}{r_p^2} \qquad (6.45)

and hence, the incident irradiance at the receiver is

\frac{L_p(\omega_p)\, dA_p \cos\theta_p \cos\theta}{r_p^2} \qquad (6.46)

The reflected radiance dLr due to this incidence is:

dL_r(\omega_r) = f_r(\omega_p, \omega_r)\, L_p(\omega_p)\, \frac{\cos\theta\, \cos\theta_p}{r_p^2}\, dA_p \qquad (6.47)

The total reflected radiance due to the whole area light source is the integration of dLr, where the domain of integration is the whole area of the light source. So the irradiance is:

L_r(\omega_r) = \int_{p \in A} dL_r(\omega_r) = \int_{p \in A} f_r(\omega_p, \omega_r)\, L_p(\omega_p)\, \frac{\cos\theta\, \cos\theta_p}{r_p^2}\, dA_p \qquad (6.48)

If we further assume that the radiance is constant over the light source and the reflecting surface is Lambertian, then the equation simplifies slightly to

L_r = \frac{\rho}{\pi}\, L_p \int_{p \in A} \frac{\cos\theta\, \cos\theta_p}{r_p^2}\, dA_p \qquad (6.49)

As we see here, the computation of lighting due to an area light requires integration over an area, that is, a two-dimensional integration. Except for very simple area lights, such as a uniformly emitting hemisphere, closed-form integration is difficult, or even impossible, to compute. Thus, one must resort to numerical quadrature techniques, in which the integral is estimated as a finite summation. A quadrature technique that extends easily to multidimensional integration divides the domain into a number of sub-domains, evaluates the integrand at the center (or at a jittered location around the center) of each sub-domain, and computes the weighted sum of these evaluations as the estimate of the integral. A simple subdivision strategy is to convert the area into bi-parametric rectangles, uniformly divide each parameter, and create equi-area sub-rectangles in the bi-parametric space. Such a conversion may require a domain transformation. A sketch of this idea is shown below.
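As an illustration of this midpoint quadrature, the following GLSL sketch estimates Equation (6.49) for a rectangular emitter; the parameterization by a corner and two edge vectors, the 4 × 4 grid and all the names are our own illustrative choices, not taken from the client code.

// Midpoint-quadrature sketch of Eq. (6.49): sum, over sub-rectangles, of
// Lp * cos(theta) * cos(theta_p) / r_p^2 * dA_p, scaled by rho/pi.
const int NU = 4;
const int NV = 4;
vec3 areaLightLambert(vec3 P, vec3 N, vec3 corner, vec3 edgeU, vec3 edgeV,
                      vec3 lightNormal, vec3 Lp, vec3 rho) {
  float dAp = length(cross(edgeU, edgeV)) / float(NU * NV); // area of one sub-rectangle
  vec3 sum = vec3(0.0);
  for (int iu = 0; iu < NU; ++iu)
    for (int iv = 0; iv < NV; ++iv) {
      // midpoint of the (iu, iv) sub-rectangle on the emitter
      vec3 p = corner + edgeU * ((float(iu) + 0.5) / float(NU))
                      + edgeV * ((float(iv) + 0.5) / float(NV));
      vec3 toLight = p - P;
      float r = length(toLight);
      vec3 L = toLight / r;
      float cosTheta  = max(dot(N, L), 0.0);            // angle at the receiver
      float cosThetaP = max(dot(lightNormal, -L), 0.0); // angle at the emitter
      sum += Lp * cosTheta * cosThetaP / (r * r) * dAp;
    }
  return rho / 3.14159265 * sum;
}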

An alternative way to deal with area lights is to approximate them with a set of point lights. This approach is simple, although it can be computationally expensive; we will use it in the next example.

6.7.7 Upgrade Your Client: Add the Car’s Headlights and Lights in the Tunnel

We can use spotlights to implement the car's headlights. First of all, we define an object SpotLight as shown in Listing 6.10:

9 SpotLight = function () {
10   this.pos = [];
11   this.dir = [];
12   this.posViewSpace = [];
13   this.dirViewSpace = [];
14   this.cutOff = [];
15   this.fallOff = [];

LISTING 6.10: Light object including spotlight. (Code snippet from http://envymycarbook.com/chapter6/2/2.js.)

We define pos and dir of the headlights in model space, that is, in the frame where the car is defined, but remember that our shader performs the lighting computation in view space; therefore we must express the headlights' position and direction in view space before passing them to the shader. The modifications to the shader are straightforward. We proceed in the same way as for the point lights, by adding arrays of uniform variables and iterating over all the spotlights (two in our specific case; see Listing 6.11).

324 for (var i in this.spotLights) {
325   this.spotLights[i].posViewSpace = SglMat4.mul4(this.stack.matrix, SglMat4.mul4(this.myFrame(), this.spotLights[i].pos));
326   this.spotLights[i].dirViewSpace = SglMat4.mul4(this.stack.matrix, SglMat4.mul4(this.myFrame(), this.spotLights[i].dir));
327 }

LISTING 6.11: Bringing headlights in view space. (Code snippet from http://envymycarbook.com/chapter6/2/2.js.)

Adding area lights in the tunnel is also very straightforward. We define an area light as a rectangular portion of the XY plane of a local reference frame, with a given size and color. As suggested in the previous section, we implement the lighting due to area lights simply as a set of point lights distributed over the rectangle. In this example we simply use a grid of 3 × 2 lights. You can play with these numbers, that is, you can increase or decrease the number of point lights, and observe the effect on frame rate.

The implementation is very similar to what we did for the headlights. There are only two differences worth noticing. First, we have to convert an area light into a set of point lights. In our implementation we do this in the fragment shader. That is, we pass the frame of the area light to the shader and then we implicitly consider its contribution as due to 3 × 2 point lights, as shown in Listing 6.12. Second, the point lights are not really point lights because they illuminate only the −y half space.

303 for (int i = 0; i < uNAreaLights; ++i)
304 {
305   vec4 n = uAreaLightsFrame[i] * vec4(0.0, 1.0, 0.0, 0.0);
306   for (int iy = 0; iy < 3; ++iy)
307     for (int ix = 0; ix < 2; ++ix)
308     {
309       float y = float(iy) * (uAreaLightsSize[i].y / 2.0)
310         - uAreaLightsSize[i].y / 2.0;
311       float x = float(ix) * (uAreaLightsSize[i].x / 1.0)
312         - uAreaLightsSize[i].x / 2.0;
313       vec4 lightPos = uAreaLightsFrame[i] * vec4(x, 0.0, y, 1.0);
314       r = length(lightPos.xyz - vpos);
315       L = normalize(lightPos.xyz - vpos);
316       if (dot(L, n.xyz) > 0.0) {
317         NdotL = max(0.0, dot(N, L)) / (0.01 * 3.14 * 3.14 * r * r);
318         lambert += (uColor.xyz * uAreaLightsColor[i].xyz) * NdotL / (3.0 * 2.0);
319       }
320     }
321 }

LISTING 6.12: Area light contribution (fragment shader). (Code snippet from http://envymycarbook.com/chapter6/2/shaders.js.)

Figure 6.18 shows a snapshot of the client with headlights on the car.

Figure 6.18


Adding headlights on the car. (See client http://envymycarbook.com/chapter6/2/2.html.)

6.8 Phong Illumination Model

6.8.1 Overview and Motivation

As we have seen, solving the rendering equation is a complex task that requires a high computational burden and sophisticated data structures to account for all the contributions of light coming from the light sources and the reflectors. To allow easier computation, we have stated that one approach is to consider only local illumination, ignoring visibility and taking into account only direct light sources. In other words, the local illumination model evaluates the light contribution at any point on the surface, for example at a vertex, taking into account only the material properties and the light coming directly from the light sources. In this context, one of the most used local illumination models for many years has been the so-called Phong illumination model, developed by Bui Tuong Phong [35] in 1975. This model was designed empirically and offers a very good tradeoff between simplicity and the degree of realism obtained in the rendering. For this reason, it was the standard way to compute illumination in the fixed rendering pipeline, until the advent of the programmable rendering pipeline.

This reflection model is composed of three distinct components:

L_{reflected} = k_A L_{ambient} + k_D L_{diffuse} + k_S L_{specular} \qquad (6.50)

The constants kA, kD and kS define the color and the reflection properties of the material. For example, a material with kA and kS set to zero exhibits purely diffusive reflection; conversely, a perfect mirror is characterized by kA = 0, kD = 0, kS = 1. The role of the ambient component will be clarified in the following sections, where each component is described in depth.

We emphasize that the sum of the contributions can be greater than 1, which means that the energy of the light may not be preserved. This is another peculiarity of the Phong model: it is not a physical model but is based on empirical observations. An advantage of this independence from physical behavior is that each component of the Phong model can be freely tuned to give the object the desired appearance. Later on, when we look at other reflection models, we will analyze models that follow the rule of conservation of energy, such as the Cook-Torrance model.

6.8.2 Diffuse Component

The diffuse component of the Phong illumination model corresponds to the Lambertian model seen in Section 6.1.1. So,

L_{diffuse} = L_{incident}\, (L \cdot N) \qquad (6.51)

where Lincident is the amount of light arriving from the direction L = ωi at the surface point considered. We remind the reader that for a diffuse material the light is reflected uniformly in every direction.

6.8.3 Specular Component

In specular reflection, the amount of reflected light depends both on the incident direction and on the direction of observation: it is the directionally dependent component of the reflected light. Perfect mirrors reflect only along the mirror reflection direction of the incident light vector. Rough mirrors reflect the most along the mirror reflection direction and, additionally, reflect a reduced amount along the directions close to it (as shown in Figure 6.3). The reduction is modeled as a power of the cosine of an angle related to the incident direction and the direction along which we wish to compute the amount of reflection. In lighting computations related to rendering, the direction of interest is the direction connecting the surface point to the synthetic camera, also called the view direction, V. So the amount of reflected light along V is

L_{specular} = L_{incident}\, \cos(\alpha)^{n_s} \qquad (6.52)

where the exponent ns > 1 is the shininess coefficient of the surface. The role of the exponent is to accelerate the falloff: the larger the value of ns, the faster the reflection falls off as the direction of interest moves away from the mirror reflection direction, and hence the closer the specular reflection is to perfect mirror reflection. The exact definition of the angle α distinguishes between two variants of the Phong reflection model. The original Phong reflection model uses the angle between the mirror reflection direction of the light vector, R, and the view vector V. The cosine of such an angle, cos α, may be computed as

\cos\alpha = R \cdot V \qquad (6.53)

where, according to (6.6),

R = 2(L \cdot N)\, N - L \qquad (6.54)

James Blinn proposed a variant of the Phong model [4] to avoid the computation of the reflection direction. Such a variant is called the Blinn-Phong model and it uses the angle between the normal to the surface and the so-called half vector H, which is the halfway vector between the light vector and the view vector. So, when using the Blinn-Phong model, cos α may be computed as

\cos\alpha = N \cdot H \qquad (6.55)

where

H = \frac{L + V}{\lVert L + V \rVert} \qquad (6.56)

Figure 6.19 shows the difference in the angle α used in the two definitions. Note that all the vectors mentioned up until now are assumed to be normalized.

Figure 6.19


(Left) Specular component of the Phong illumination model. (Right) The variant proposed by Blinn.
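As a small sketch, the Blinn-Phong specular term of Equations (6.55)-(6.56) could be written in GLSL as follows, assuming L, V and N are already normalized; the function name is illustrative.

// Sketch of the Blinn-Phong specular term, Eqs. (6.55)-(6.56).
float blinnPhongSpecular(vec3 L, vec3 V, vec3 N, float ns) {
  vec3 H = normalize(L + V);          // half vector between light and view
  float NdotH = max(dot(N, H), 0.0);  // cos(alpha) in the Blinn-Phong variant
  return pow(NdotH, ns);              // falloff controlled by the shininess exponent
}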

6.8.4 Ambient Component

The ambient component of the Phong illumination model is used to “simulate” the effect of the indirect lighting present in the scene. Secondary light sources, that is, the nearby reflectors, can make significant contributions to the illumination of surfaces. We will see in later chapters that such computation can be very complex and computationally very expensive. Since speed of computation is of prime concern, the inter-reflection component of reflected light is approximated by introducing an ambient term that is simply the product of an ambient incident light Lambient and an ambient reflection coefficient kA. The idea of adding this term to the other terms is that a certain amount of light (coming from all directions) always reaches all the surfaces of the scene because of the inter-reflections between the objects in the scene.

6.8.5 The Complete Model

By putting the ambient, the diffuse and the specular component together, we obtain the final formulation of the Phong model:

L_{refl} = k_A L_{ambient} + L_{incident}\left(k_D \max(\cos\theta, 0) + k_S \max(\cos(\alpha)^{n_s}, 0)\right) \qquad (6.57)

The max function in the equation guarantees that the amount of reflected light due to any light source is not less than zero. Equation (6.57) is valid for one light source. In the presence of multiple light sources the reflection due to each individual light is accumulated to get the total amount of reflected light. This is a fundamental aspect of the light: the effect of lighting in a scene is the sum of all the light contributions. By accumulating the contributions of the different light sources, Equation (6.57) becomes:

L_{refl} = k_A L_{ambient} + \sum_i L_{incident,i}\left(k_D \max(\cos\theta_i, 0) + k_S \max(\cos^{n_s}\alpha_i, 0)\right) \qquad (6.58)

where the subscript i is used to represent the dependence of the corresponding term on the i-th light source. We point out that the ambient term does not depend on the number of light sources, since its role is to account for all indirect lighting contributions.

As previously stated, the specular and diffuse coefficients are functions of the wavelength of light, but for practical reasons they are normally represented as triplets consisting of the three color components R, G and B. So the final model, for a single light source, becomes:

\begin{pmatrix} R \\ G \\ B \end{pmatrix} =
\begin{pmatrix} K_{A,r} L_{A,r} \\ K_{A,g} L_{A,g} \\ K_{A,b} L_{A,b} \end{pmatrix} +
\begin{pmatrix} K_{D,r} L_{p,r} \\ K_{D,g} L_{p,g} \\ K_{D,b} L_{p,b} \end{pmatrix} (L \cdot N) +
\begin{pmatrix} K_{S,r} L_{p,r} \\ K_{S,g} L_{p,g} \\ K_{S,b} L_{p,b} \end{pmatrix} (V \cdot R)^{n_s}
\qquad (6.59)

where the cosines of the involved angles have been replaced by the dot products of the corresponding vectors. Figure 6.20 shows the effect of the different components.

Figure 6.20


(Top-Left) Ambient component. (Top-Right) Diffuse component. (Bottom-Left) Specular component. (Bottom-Right) The components summed up together (kA = (0.2, 0.2, 0.2), kD = (0.0, 0.0, 0.6), kS = (0.8, 0.8, 0.8), ns = 1.2).

6.9 Shading Techniques

Shading is the way the computed light is combined with the color of the surface of the object to obtain the final look of the rendered object. This can be achieved in different ways. For example, we can evaluate the illumination model at each vertex, that is, in the vertex shader, and then let the rasterization stage of the rendering pipeline interpolate the resulting color. Another way to shade the object is to evaluate the illumination model for each pixel of the final image, that is, in the fragment shader, instead of for every vertex. In this section we analyze three classic shading techniques, particularly important for didactic purposes: flat shading, Gouraud shading and Phong shading.

6.9.1 Flat and Gouraud Shading

The main difference between flat and Gouraud shading is that flat shading produces a final color for each face, while Gouraud shading produces a final color for each vertex; the color inside each triangle is then generated by interpolation during the rasterization stage. For this reason it is usually said that Gouraud shading computes the object's illumination and color per-vertex. Figure 6.21 shows the difference between the two shading techniques. As we can see, Gouraud shading produces a pleasantly smooth visual effect that hides the tessellation of the surface. This interpolation effect is very important since it reduces the discontinuities we perceive on the surface due to an effect named Mach banding. Briefly, Mach banding is caused by the fact that our perception tends to enhance the color differences we perceive at edges. For this reason, Gouraud shading is not only useful to give a smooth look to our 3D object, but it is truly necessary every time we want to reduce the visibility of the faces that compose the object. Obviously, sometimes we want to visualize the faces of our object clearly; in these cases flat shading is more useful than Gouraud shading.

Figure 6.21


Flat and Gouraud shading. As it can be seen, the flat shading emphasizes the perception of the faces that compose the model.

6.9.2 Phong Shading

First of all, we would like to advise the reader not to confuse the Phong illumination model, just described, with Phong shading. The first is a local lighting model; the second is a method to compute and interpolate the lighting over the surfaces of our 3D scene.

Phong shading is a shading technique that consists of calculating the lighting contribution over the object's surface instead of only at its vertices. To achieve this, from a practical point of view, the lighting should be computed in the fragment shader; in this way we evaluate the lighting equation for each pixel. This is the reason why such lighting computation is referred to as per-pixel, or more precisely per-fragment, lighting. The per-fragment lighting computation requires that each fragment knows the normal of the surface at the point visible through the fragment. To obtain such normals we declare a varying variable in the vertex shader that is then interpolated linearly by the rasterization stage, thus providing a normal for each pixel. The rendered effect is shown in Figure 6.22. The visual differences between Gouraud and Phong shading are strictly related to the number of triangles rendered in the viewport with respect to the total number of pixels of the viewport. If the object is composed of so many triangles that their average screen-size is around one or two pixels, the two renderings will look very similar.

Figure 6.22


Gouraud shading vs Phong shading. (Left) Gouraud shading. (Right) Phong shading. Note that some details result in a better look with Phong shading (per-pixel) due to the non-dense tessellation.
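As a minimal sketch of the per-fragment approach described above, the vertex shader below only transforms the position and forwards the normal as a varying; the matching fragment shader then normalizes the interpolated normal and evaluates the illumination model per pixel (as done, for example, by the phongShading function of Listing 6.13). The attribute and uniform names are illustrative and do not necessarily match the client code.

// Vertex shader sketch for per-fragment (Phong) shading: no lighting is
// computed here; position and normal are forwarded to the rasterizer.
uniform mat4 uModelViewMatrix;        // illustrative names
uniform mat4 uProjectionMatrix;
uniform mat3 uViewSpaceNormalMatrix;
attribute vec3 aPosition;
attribute vec3 aNormal;
varying vec3 vpos;      // view-space position, interpolated per fragment
varying vec3 vnormal;   // view-space normal, interpolated per fragment
void main() {
  vec4 p = uModelViewMatrix * vec4(aPosition, 1.0);
  vpos = p.xyz;
  vnormal = uViewSpaceNormalMatrix * aNormal;
  gl_Position = uProjectionMatrix * p;
}

In the fragment shader the interpolated normal is in general no longer unit length, so it must be normalized (vec3 N = normalize(vnormal);) before evaluating the lighting.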

6.9.3 Upgrade Your Client: Use Phong Lighting

The last client of this chapter incorporates only a shading model update. We drop the assumption that every material is Lambertian and use the Phong model explained in Section 6.8. The only modification with respect to the previous client is the way the fragment shader computes the contribution of each light source (see Listing 6.13).

74 vec3 phongShading(vec3 L, vec3 N, vec3 V, vec3 lightColor) {
75   vec3 mat_ambient  = vambient.xyz;
76   vec3 mat_diffuse  = vdiffuse.xyz;
77   vec3 mat_specular = vspecular.xyz;
78
79   vec3 ambient = mat_ambient * lightColor;
80
81   // diffuse component
82   float NdotL = max(0.0, dot(N, L));
83   vec3 diffuse = (mat_diffuse * lightColor) * NdotL;
84
85   // specular component
86   vec3 R = (2.0 * NdotL * N) - L;
87   float RdotV = max(0.0, dot(R, V));
88   float spec = pow(RdotV, vshininess.x);
89   vec3 specular = (mat_specular * lightColor) * spec;
90   vec3 contribution = ambient + diffuse + specular;
91   return contribution;
92 }

LISTING 6.13: Function computing the Phong shading used in the fragment shader. (Code snippet from http://envymycarbook.com/chapter6/3/shaders.js.)

6.10 Advanced Reflection Models

In the previous section we have seen a local illumination model that has been a reference for many years. With the possibility to program the rendering pipeline through vertex and fragment shaders, many other illumination models have become widely used to produce realistic renderings of different materials, such as metals, textiles, hair, etc. In the following we briefly describe some of them to increase the expressive tools at our disposal. We suggest that the reader also try to implement other models and experiment on his or her own.

6.10.1 Cook-Torrance Model

We know that the Phong illumination model has several limitations. In particular, it is not able to produce a realistic look for non-plastic, non-diffuse materials, and it is mainly based on empirical observations instead of physical principles.

The first local lighting model based on physical principles was proposed by James Blinn in 1977. In practice, Blinn's model was based on a physical model of reflection developed by Torrance and Sparrow. The Torrance-Sparrow reflection model assumes that the surface of an object is composed of thousands of micro-facets that act as small mirrors, oriented more or less randomly. For a given piece of surface, the distribution of the micro-facets determines, at a macroscopic level, the behavior of the specular reflection. Later, Cook and Torrance [5] extended this model to reproduce the complex reflection behavior of metals.

The Cook-Torrance model is defined as:

L_r = L_p\, \frac{D\, G\, F}{(N \cdot L)(N \cdot V)} \qquad (6.60)

where F, D, and G at the numerator are its three fundamental components.

The D term models the micro-facets assumption of Torrance and Sparrow, and it is called the roughness term. This term is modeled using the Spizzichino-Beckmann distribution:

D = \frac{1}{m^2 \cos^4\alpha}\, e^{-\left(\frac{\tan\alpha}{m}\right)^2} \qquad (6.61)

where m is the average slope of the microfacets.

G is the geometric term and models the self-shadowing effects. Referring to Figure 6.23, we can note that the micro-facets can create self-shadowing effects, reducing the radiance arriving at a certain point of the surface (shadowing effect), or can block part of the reflected light, reducing the outgoing radiance (masking effect). This term is calculated as:

G_1 = \frac{2\, (N \cdot H)(N \cdot V)}{(V \cdot H)} \qquad (6.62)

G_2 = \frac{2\, (N \cdot H)(N \cdot L)}{(V \cdot H)} \qquad (6.63)

G = \min\{1, G_1, G_2\} \qquad (6.64)

Figure 6.23


Masking (left) and shadowing (right) effects.

where G1 accounts for the masking effects and G2 accounts for the shadowing effects.

The term F is the Fresnel term and takes into account the Fresnel law of reflection. The original work of Cook and Torrance is a valuable source of information about the Fresnel equation for different types of materials. The Fresnel effect depends not only on the material but also on the wavelength/color of the incoming light. Even removing this dependency, its formulation is quite complex:

F = \frac{1}{2}\, \frac{(g - c)^2}{(g + c)^2}\left[1 + \frac{\left(c(g + c) - 1\right)^2}{\left(c(g - c) + 1\right)^2}\right] \qquad (6.65)

where c = V · H, g = √(c² + η² − 1) and η is the refraction index of the material. Due to the complexity of this formula, when no high degree of realism is required, a good approximation can be achieved with the following simpler formulation:

F = \rho + (1 - \rho)(1 - N \cdot L)^5 \qquad (6.66)
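Putting the three terms together, a GLSL sketch of the Cook-Torrance specular term might look as follows; we use the approximate Fresnel term of Equation (6.66), assume L, V and N normalized, and take α in the Beckmann distribution as the angle between N and the half vector H, following the usual micro-facet convention. The parameter names m and rho are illustrative, and the result is meant to be multiplied by Lp.

// Sketch of the Cook-Torrance specular term, Eqs. (6.60)-(6.64), with the
// Beckmann roughness distribution and the approximate Fresnel term (6.66).
float cookTorrance(vec3 L, vec3 V, vec3 N, float m, float rho) {
  vec3  H     = normalize(L + V);
  float NdotH = max(dot(N, H), 0.001);
  float NdotV = max(dot(N, V), 0.001);
  float NdotL = max(dot(N, L), 0.001);
  float VdotH = max(dot(V, H), 0.001);

  // D: Beckmann distribution, Eq. (6.61); alpha is assumed to be the angle
  // between N and H (micro-facet convention)
  float cosA2 = NdotH * NdotH;
  float tanA2 = (1.0 - cosA2) / cosA2;
  float D = exp(-tanA2 / (m * m)) / (m * m * cosA2 * cosA2);

  // G: geometric (masking/shadowing) term, Eqs. (6.62)-(6.64)
  float G1 = 2.0 * NdotH * NdotV / VdotH;
  float G2 = 2.0 * NdotH * NdotL / VdotH;
  float G  = min(1.0, min(G1, G2));

  // F: approximate Fresnel term, Eq. (6.66)
  float F = rho + (1.0 - rho) * pow(1.0 - NdotL, 5.0);

  return D * G * F / (NdotL * NdotV);  // Eq. (6.60), to be multiplied by Lp
}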

The Top-Right image in Figure 6.24 gives us an idea of how an object rendered with the Cook-Torrance model appears. As we can see, the car seems effectively composed of metal.

Figure 6.24


A car rendered with different reflection models. (Top-Left) Phong. (Top-Right) Cook-Torrance. (Bottom-Left) Oren-Nayar. (Bottom-Right) Minnaert.

6.10.2 Oren-Nayar Model

Oren and Nayar [32] proposed this reflection model to improve the realism of the diffuse component of a Lambertian material. In fact, some diffusive materials are not well described by the Lambertian model; for example, clay and some textiles, which exhibit the phenomenon of retro-reflection. Retro-reflection is an optical phenomenon that consists of reflecting the light back in the direction of the light source. Mathematically, the Oren-Nayar model is defined as:

L_r = k_D\, L_p\, (N \cdot L)\left(A + B\, C \sin(\alpha)\tan(\beta)\right) \qquad (6.67)

Equation (6.67) requires further explanation: α is the angle between the normal of the surface and the incident light, α = arccos(N · L); β is the angle between the normal and the viewing direction, β = arccos(N · V); A and B are parameters related to the roughness of the surface; and C is the cosine of the azimuthal angle between the light vector L and the view vector V (see Equation (6.69)).

The roughness, determined by assuming also in this case a micro-facets model for the surface, is modeled as a Gaussian distribution with zero mean. Hence, in this case the roughness is related to the standard deviation of the Gaussian (σ). With this premise, the parameters A and B are calculated on the basis of σ as:

A = 1.0 - 0.5\, \frac{\sigma^2}{\sigma^2 + 0.33} \qquad B = 0.45\, \frac{\sigma^2}{\sigma^2 + 0.09} \qquad (6.68)

The parameter C requires some computational effort to calculate. An intuitive way to compute it is to project the light and view vectors onto the plane tangent to the surface and then recover the azimuthal angle between the projections. In other words, we have to compute:

C = \cos(\phi_V - \phi_L) = (L' \cdot V') \qquad (6.69)

L' = L - (L \cdot N)\, N \qquad (6.70)

V' = V - (V \cdot N)\, N \qquad (6.71)

It may be noted that this is the first reflection model we describe where the reflected light depends not only on the angle of incidence of the incoming light but also on the azimuthal angle.
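A GLSL sketch of the Oren-Nayar term could be the following; sigma and the function name are illustrative, the vectors are assumed normalized, and the tangent-plane projections are normalized before the dot product so that C is indeed a cosine. The result is meant to be multiplied by kD Lp as in Equation (6.67).

// Sketch of the Oren-Nayar diffuse term, Eqs. (6.67)-(6.71).
// sigma is the roughness (standard deviation of the micro-facet slopes).
float orenNayar(vec3 L, vec3 V, vec3 N, float sigma) {
  float s2 = sigma * sigma;
  float A = 1.0 - 0.5 * s2 / (s2 + 0.33);   // Eq. (6.68)
  float B = 0.45 * s2 / (s2 + 0.09);

  float NdotL = max(dot(N, L), 0.0);
  float NdotV = max(dot(N, V), 0.0);
  float alpha = acos(NdotL);                // incidence angle
  float beta  = acos(NdotV);                // viewing angle

  // C: cosine of the azimuthal angle between L and V, Eqs. (6.69)-(6.71)
  vec3 Lt = L - dot(L, N) * N;
  vec3 Vt = V - dot(V, N) * N;
  float C = 0.0;
  if (length(Lt) > 0.0001 && length(Vt) > 0.0001)
    C = dot(normalize(Lt), normalize(Vt));

  // clamping C to zero follows the original Oren-Nayar formulation
  return NdotL * (A + B * max(0.0, C) * sin(alpha) * tan(beta));
}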

6.10.3 Minnaert Model

The last local illumination model to be described was developed a long time ago (1941) by Marcel Minnaert [30]. This model is basically a Lambertian model with the addition of a “darkening factor” capable of describing well the behavior of certain materials, like the reflection behavior of velvet or the visual aspects of the moon. In fact, Minnaert developed such an optical model to try to explain from an optical point of view the visual appearance of the moon. Mathematically, it is defined as:

L_r = \underbrace{k_D\, L_p\, (N \cdot L)}_{\text{diffuse}}\; \underbrace{\left((N \cdot L)^{K} (N \cdot V)^{K-1}\right)}_{\text{darkening factor}} \qquad (6.72)

where K is an exponent used to tune the look of the material. For a visual comparison with the Phong illumination model, take a look at Figure 6.24.
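A GLSL sketch of the darkened diffuse term of Equation (6.72) could be as simple as the following; the names are illustrative, the vectors are assumed normalized, and the result is meant to be multiplied by kD Lp.

// Sketch of the Minnaert model, Eq. (6.72): a Lambertian term multiplied
// by a darkening factor controlled by the exponent K.
float minnaert(vec3 L, vec3 V, vec3 N, float K) {
  float NdotL = max(dot(N, L), 0.0);
  float NdotV = max(dot(N, V), 0.001);
  return NdotL * pow(NdotL, K) * pow(NdotV, K - 1.0); // diffuse * darkening factor
}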

6.11 Self-Exercises

6.11.1 General

  1. What kind of geometric transformations can also be applied to the normal without the need to be inverted and transposed?
  2. If you look at the inside of a metal spoon, you will see your image inverted. However, if you look at the backside of the spoon it does not happen. Why? What would happen if the spoon surface was diffusive?
  3. Find for which values of V and L the Blinn-Phong model gives exactly the same result as the Phong model.
  4. Consider a cube and suppose the ambient component is 0. How many point lights do we need to guarantee that all the surface is lit?
  5. Often the ambient coefficient is set to the same value for all elements of the scene. Discuss a way to calculate the ambient coefficient that takes into account the scene. Hint: for example, should the ambient coefficient inside a deep tunnel be the same as the ground in an open space?

6.11.2 Client Related

  1. Modify the “Phong model” client by setting the properties of the lighting and the materials to give the sensation of morning, daylight, sunset and night. Then, use the timer to cycle between these settings to simulate the natural light changes during the day.
  2. Modify the “Phong model” client in order to add several light sources to it. What happens due to its non-energy-preserving nature? How do you solve this problem?
  3. Modify the “Cook-Torrance” client and try to implement a per-vertex and a per-pixel Minnaert illumination model.

1 Note that there is only a hemisphere of directions around every point on an opaque surface. We will use symbol Ω to represent the hemisphere. Where it is not obvious from the context, we may use subscripting to distinguish the hemisphere of incoming directions from the hemisphere of outgoing directions, for example: Ωin and Ωout.
