Chapter 8

Shadows

8.1 The Shadow Phenomenon

In Chapter 6 we learned that many types of interaction between light and matter can occur before the photons leaving the emitters eventually reach our eye. We have seen that, because of the complexity of the problem, local illumination models are used in interactive applications, and we have then seen some techniques for adding more global lighting effects to the local model in order to improve realism. For example, in Section 7.7.4 we used cube mapping to add reflections to the car, that is, to add one bounce of light.

In this chapter we will show how to add another global effect: shadows. We all have an intuition of what a shadow is: a region that is darker than its surroundings because something blocks the light arriving at it. The shadow phenomenon is critical for creating more realistic images. The presence of shadowed areas helps us to better perceive the spatial relations among the objects in the scene and, in the case of complex objects, the shadows created by the object on itself, known as self-shadows, allow for a better understanding of its shape. We now give a more formal definition of shadow: a point p is in shadow with respect to a light source if the light rays leaving the source towards p hit a point p’ “before” reaching p.

Figure 8.1 shows three examples with different types of light: point light, directional light and area light. The peculiarity of area lights is that many light rays leaving the same light source may reach the same point. This means that, potentially, only a portion of the rays that may reach a point on the surface actually do so, while the others are blocked. This is the reason why, near the edges of a shadowed area, the amount of darkness gradually increases as we move inward from the edge. This phenomenon is called penumbra and, if you look around, it is the kind of shadow you will observe most often, because it is rare to find emitters so small that they can be considered point lights. When the penumbra effect is not considered, due to the type of light or to simplify the implementation, the shadows we obtain are called hard shadows.

Figure 8.1


(Left) Shadow caused by a point light. (Middle) Shadow caused by a directional light. (Right) Shadow caused by an area light.

Note that we gave only a geometric definition of shadow without saying anything about how much darker a point in shadow should look, because that depends on the illumination model we are using and on the material properties of our objects. For example, let us consider a complete illumination model, that is, a perfect simulation of reality, and a scene made of a closed room with a light source inside, where all the surfaces are perfectly diffusive with albedo 1, so that they entirely reflect all the incoming light in every direction. In this situation it does not matter whether a point is in shadow, it will be lit anyway, because even if the straight path from the light source is blocked, photons will bounce around until they reach every point of the scene (with the only, negligible, difference that they will not travel the same distance for all the points). Now let us change this setting by assuming that all the surfaces are perfectly specular. In this case it is not guaranteed that all the points will be lit, but, again, this does not depend on whether they are in shadow or not. Finally, consider using a local lighting model such as the Phong model. With this model the outgoing light depends only on the light coming directly from the emitter and on the ambient term. So, we can state that a point in shadow will look darker; how much darker depends on the ambient term: if it is 0, that is, if no indirect lighting is considered, each point in shadow will be pure black, otherwise the ambient term multiplied by the color gives the final, darker color.

We discussed these examples to underline the fact that the determination of regions in shadow makes sense with a local illumination model, where there is a direct connection between being in shadow and receiving less light.

In this chapter we will see some of the techniques for rendering shadows in real time. Because synthesizing accurate shadow phenomena is a difficult task, we will make some assumptions that allow us to concentrate on the fundamental concepts. These assumptions are:

  • the illumination model is local (see Section 6.5): we allow the emitted photons to perform at most one bounce before hitting the camera;
  • the light sources are either point lights or directional lights, that is, they are not area lights: this will simplify the shadow algorithms but will not allow us to generate penumbra regions.

8.2 Shadow Mapping

Shadow mapping is a straightforward technique: perform the rendering of the scene and, for each fragment produced, test if the corresponding 3D point is in shadow or not.

The cleverness of shadow mapping resides in how the test is performed. Let us consider the case of directional lights illustrated in Figure 8.2: if we set up a virtual camera with an orthogonal projection and with the projectors having the same orientation as our directional light, and perform a rendering of the scene from this camera, the result will correspond to the portion of surface visible from the camera (the cyan parts in the central illustration of Figure 8.1). But if a point is visible from this camera, it means that the path from the point to the camera image plane is free of obstacles, and since the camera is modeling the emitter, the visible points are those that are not in shadow. We will refer to this camera as a light camera. In this sense we can now restate the definition of being in shadow as follows: a point p is in shadow with respect to a light source if it is not visible from the corresponding light camera. Furthermore, the depth buffer, called the shadow map, will contain, for each pixel, the distance of the corresponding point from the light source.

Figure 8.2


(Left) A simple scene composed of two parallelepipeds is illuminated by a directional light source. (Right) A rendering with this setup.

After this rendering, if we want to know whether a given point p of the scene is lit or not, we can project it onto the light camera and compare its distance along the z axis with the value contained in the depth buffer. If it is greater, the point is in shadow, because there is another point p′ along the line from p to the light source that generated a smaller depth value than p would (see Figure 8.2).

At this point we have all the ingredients for the Shadow Mapping algorithm:

  1. [Shadow Pass] Draw the scene with the parameters of the light camera and store the depth buffer;
  2. [Lighting Pass] Draw the scene with the parameters of the viewer’s camera and, for each generated fragment, project the corresponding point on the light camera. Then, access the stored depth buffer to test whether the fragment is in shadow from the light source and compute lighting accordingly.

Note that in the practical implementation of this algorithm we cannot use the depth buffer because WebGL does not allow us to bind it for sampling. To overcome this limitation we will use a texture as explained in Section 8.3. Also note that in the shadow pass we do not need to write fragment colors to the color buffer.
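To make the two passes concrete, here is a minimal JavaScript sketch of how a WebGL client could orchestrate them; the names shadowFramebuffer, shadowMapTexture, shadowMapSize, lightMatrix and drawScene are hypothetical placeholders, not part of the client developed in this book.

// Shadow pass: render the scene from the light camera into the off-screen target.
gl.bindFramebuffer(gl.FRAMEBUFFER, shadowFramebuffer);
gl.viewport(0, 0, shadowMapSize, shadowMapSize);
gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);
gl.useProgram(shadowPassProgram);
gl.uniformMatrix4fv(shadowPassProgram.uShadowMatrixLocation, false, lightMatrix);
drawScene(shadowPassProgram);                 // geometry only, no materials needed

// Lighting pass: render from the viewer's camera with the shadow map bound as a texture.
gl.bindFramebuffer(gl.FRAMEBUFFER, null);
gl.viewport(0, 0, canvas.width, canvas.height);
gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);
gl.useProgram(lightingPassProgram);
gl.activeTexture(gl.TEXTURE1);
gl.bindTexture(gl.TEXTURE_2D, shadowMapTexture);
gl.uniform1i(lightingPassProgram.uShadowMapLocation, 1);
drawScene(lightingPassProgram);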

8.2.1 Modeling Light Sources

In the introductory example we modelled a directional light source with an orthogonal projection. Here we see, more generally, how the types of light described in Section 6.7 are modelled with virtual cameras. What we need from our light camera is that:

  • every point that is visible from the light source is seen by the light camera
  • the projectors of the light camera are the same as light rays but with opposite direction

Note that the first requirement is actually stronger than necessary, because we are only interested in the part of the scene seen by the viewer’s camera. We will discuss this aspect in more detail in Section 8.4; here we fulfill the first requirement by setting the view volume so that the light camera includes the whole scene.

8.2.1.1 Directional Light

This is the easiest case and the one we used in the first example. Let d be a unit vector indicating the direction of the light rays. Set −d as the z axis, compute the other two axes as shown in Section 4.5.1.1 and set the center of the scene’s bounding box as the origin (see Figure 8.3). Set an orthogonal projection so that the viewing volume is a box centered at the origin (please recall that the projection is expressed in view space, so “the origin” is the origin of the light camera frame) and with sides equal to the diagonal of the bounding box, so that we are guaranteed the bounding box of the scene (and hence the scene) is inside the view volume.
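As a sketch of this construction, assuming the glMatrix library (mat4, vec3) and a bounding box described by its center bboxCenter and its diagonal length bboxDiagonal (both illustrative names), the light camera matrices for a directional light could be built as follows:

// View matrix: the light camera sits at the bounding box center and looks along d,
// so that its z axis is -d as required.
var target = vec3.add(vec3.create(), bboxCenter, d);
var lightView = mat4.lookAt(mat4.create(), bboxCenter, target, [0, 1, 0]); // any up vector not parallel to d

// Orthographic volume: a cube centered at the light camera origin whose sides equal
// the bounding box diagonal, so that the whole scene is guaranteed to be inside it.
var half = 0.5 * bboxDiagonal;
var lightProjection = mat4.ortho(mat4.create(), -half, half, -half, half, -half, half);

// This is the uShadowMatrix used in Listing 8.1 (times the object's model matrix, if any).
var uShadowMatrix = mat4.multiply(mat4.create(), lightProjection, lightView);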

Figure 8.3


(Left) Light camera for directional light. (Right) Light camera for point light.

8.2.1.2 Point Light

A point light source is identified only by its position, let us call it c, from which light rays propagate in every direction. In this case we do the same as we did for adding real on-the-fly reflections in Section 7.7.4.1, that is, we use six cameras centered at c and oriented along the main axes (both in the positive and negative directions). Like we did for the directional light, we could set the far plane to a value large enough to include the whole bounding box, but we can do better and compute, for each of the six directions, the distance from the boundary of the bounding box and set the far plane to that distance, as shown in Figure 8.3 (Right). In this manner we will have a tighter enclosure of the bounding box in the viewing volumes and hence more precise depth values (see Section 5.2.4).
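A possible set-up of the six light cameras, again assuming glMatrix; distToBBox is a hypothetical helper returning the distance from c to the boundary of the scene bounding box along a given direction:

// One 90-degree perspective camera per axis direction, all centered at the light position c.
var dirs = [[1,0,0], [-1,0,0], [0,1,0], [0,-1,0], [0,0,1], [0,0,-1]];
var ups  = [[0,-1,0], [0,-1,0], [0,0,1], [0,0,-1], [0,-1,0], [0,-1,0]]; // conventional cube map up vectors
var lightMatrices = dirs.map(function (dir, i) {
  var target = vec3.add(vec3.create(), c, dir);
  var view   = mat4.lookAt(mat4.create(), c, target, ups[i]);
  var far    = distToBBox(c, dir);        // tighter far plane, hence more precise depth values
  var proj   = mat4.perspective(mat4.create(), Math.PI / 2, 1.0, 0.1, far);
  return mat4.multiply(mat4.create(), proj, view);
});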

8.2.1.3 Spotlights

A spotlight source is defined as L = (c, d, β, f), where c is its position in space, d the spot direction, β the aperture angle and f the intensity fall-off exponent. Here we are only interested in the geometric part, so f is ignored. The light rays of a spotlight originate from c and propagate along all the directions inside the cone with apex c, symmetry axis d and angle β. We set the z axis of the light camera’s frame to −d, compute the other two as shown in Section 4.5.1.1 and set the origin to c (see Figure 8.4). The projection is perspective, but finding its parameters is slightly more involved.

Figure 8.4


Light camera for a spotlight.

Let us start with the far plane distance far. We can set it to the maximum among the projections of the bounding box vertices onto the direction d, as shown in Figure 8.4 (Left). Now that we have far, we can compute the side of the base of the smallest pyramidal frustum containing the cone as:

b = 2 · tan(β) · far

and scale it down to obtain the size of the viewing plane (that is, at the near distance)

b′ = b · near/far

and therefore left = bottom = −b′/2 and top = right = b′/2. So far we have computed the smallest pyramidal frustum containing the cone, not the cone itself. The last part is done in image space: during the shadow pass, we discard all the fragments that are farther away from the image center than b′/2.
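The following sketch puts these formulas together, assuming glMatrix, an array bboxVertices containing the eight corners of the scene bounding box, and beta holding the aperture angle β (all illustrative names):

// Far plane: maximum projection of the bounding box vertices onto the spot direction d.
var far = 0.0;
bboxVertices.forEach(function (v) {
  var toV = vec3.subtract(vec3.create(), v, c);
  far = Math.max(far, vec3.dot(toV, d));
});

var near   = 0.1;                          // any small positive value
var b      = 2.0 * Math.tan(beta) * far;   // side of the pyramid base containing the cone
var bPrime = b * near / far;               // side of the viewing plane at the near distance

var lightProjection = mat4.frustum(mat4.create(),
    -bPrime / 2, bPrime / 2, -bPrime / 2, bPrime / 2, near, far);
var target    = vec3.add(vec3.create(), c, d);
var lightView = mat4.lookAt(mat4.create(), c, target, [0, 1, 0]);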

8.3 Upgrade Your Client: Add Shadows

We will now give the objects of our scene the ability to cast shadows generated by the sun, which is well modelled as a directional light. The first thing to do is to prepare all the graphics resources necessary to perform a render-to-texture operation that will fill our shadow map. As mentioned before, in WebGL we cannot take the depth buffer and bind it as a texture, so what we do is create a framebuffer object that will be used as the target for generating the shadow map in the shadow pass. Ideally, the framebuffer should consist only of a texture that acts as the depth buffer and that will eventually be accessed in the lighting pass.

Unfortunately, textures with an adequate pixel format are not directly exposed by the core WebGL specification, unless you use extensions.1 More specifically, a single channel of a color texture has at most 8 bits, while we commonly use 24 bits for depth values. In the next subsection we will show how to exploit standard 8-bit-per-channel RGBA textures to encode a 24-bit depth value.
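A minimal sketch of such a render-to-texture target, using only core WebGL 1.0 calls, could look as follows; the helper name createShadowFramebuffer and the layout of the returned object are illustrative:

function createShadowFramebuffer(gl, size) {
  // The RGBA texture that will store the encoded depth values (our shadow map).
  var texture = gl.createTexture();
  gl.bindTexture(gl.TEXTURE_2D, texture);
  gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, size, size, 0, gl.RGBA, gl.UNSIGNED_BYTE, null);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);

  // A renderbuffer used as depth buffer during the off-screen rendering.
  var depth = gl.createRenderbuffer();
  gl.bindRenderbuffer(gl.RENDERBUFFER, depth);
  gl.renderbufferStorage(gl.RENDERBUFFER, gl.DEPTH_COMPONENT16, size, size);

  // The framebuffer object binding texture and renderbuffer together.
  var framebuffer = gl.createFramebuffer();
  gl.bindFramebuffer(gl.FRAMEBUFFER, framebuffer);
  gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.TEXTURE_2D, texture, 0);
  gl.framebufferRenderbuffer(gl.FRAMEBUFFER, gl.DEPTH_ATTACHMENT, gl.RENDERBUFFER, depth);
  gl.bindFramebuffer(gl.FRAMEBUFFER, null);

  return { framebuffer: framebuffer, texture: texture, size: size };
}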

Listings 8.1 and 8.2 show the code of the vertex and fragment shaders of the shadow pass, respectively. This is the most basic rendering from the light camera, done for the sole purpose of producing the depth buffer. Note that the depth buffer is still there; we are just making a copy of it into a texture that will be our shadow map. Let us see in detail how this is done.

8.3.1 Encoding the Depth Value in an RGBA Texture

We recall that the transformation from NDC space to window space is set with the viewport transformation (gl.viewport(cornerX, cornerY, width, height)), which maps (x, y) from [(−1, −1), (1, 1)] to [(0, 0), (w, h)], and with gl.depthRange(nearval, farval), which maps z from [−1, 1] to [nearval, farval]. We assume that nearval and farval are set to their default values, 0 and 1, respectively, and explain how to encode a value in the interval [0, 1] into a four-channel texture. Note that otherwise we can simply encode z′ = (z − nearval)/(farval − nearval).

So let d = gl_FragCoord.z (gl_FragCoord is a GLSL built-in variable that contains the coordinates of the fragment in window space) be the floating point value to encode in a four-channel texture with eight bits per channel (the number of channels and bits may vary, this is just the most common setup). The idea is to treat the four channels as the coefficients of a number between 0 and 1 in base B, that is, we want to express d as:

d = a0 · 1/B + a1 · 1/B² + a2 · 1/B³ + a3 · 1/B⁴    (8.1)

Let us start with the more intuitive case where B = 10, so each channel may store an integer between 0 and 9, and let us take the practical example where d = 0.987654. In this case we would have a0 = 9, a1 = 8, a2 = 7 and a3 = 6:

d = 9 · 1/10 + 8 · 1/10² + 7 · 1/10³ + 6 · 1/10⁴ = 0.9 + 0.08 + 0.007 + 0.0006 = 0.9876

Obviously, with this simple encoding we are only approximating the value of d, but this is more than enough for practical applications. Now what we need is an efficient algorithm to find the coefficients ai. What we want is simply to take the first decimal digit for a0, the second for a1, the third for a2 and the fourth for a3. Unfortunately, in GLSL we do not have a built-in function for singling out the ith decimal digit of a number, but we do have the function fract(x), which returns the fractional part of x, that is, fract(x) = x − floor(x), so we can write:

ith digit(d) = ( fract(d · 10^(i−1)) − fract(d · 10^i)/10 ) · 10    (8.2)

where we will refer to fract(d · 10^(i−1)) as the shift expression and to fract(d · 10^i)/10 as the mask expression.

For example, the second digit of 0.9876 is:

2nd digit(0.9876) = ( fract(0.9876 · 10) − fract(0.9876 · 10²)/10 ) · 10 = (0.876 − 0.076) · 10 = 8

This is a very simple mechanism to mask out all the digits except the one we want. We first place the decimal point to the left of the desired digit, 0.9876 · 10 = 9.876, then use fract to remove the integer part and remain with 0.876. Then we mask out the other digits by subtracting 0.076. The same result can be obtained in many other ways, but in this form we can exploit the parallel execution of component-wise multiplication and subtraction on values of type vec4.

Now we can comment on the implementation of the function pack_depth in Listing 8.2. First of all, we have eight-bit channels, so the value for B is 2⁸ = 256. The vector bit_shift contains the coefficients that multiply d in the shift expression of Equation (8.2), while bit_mask contains the ones in the mask expression. Note that the values in res are in the interval [0, 1], that is, the final multiplication by B in Equation (8.2) is not performed. The reason is that the conversion between [0, 1] and [0, 255] is done at the moment of writing the values into the texture. Getting back a float value previously encoded in the texture is simply a matter of implementing Equation (8.1), and it is done by the function Unpack in Listing 8.4.

var vertex_shader = "\
  uniform mat4 uShadowMatrix;                             \n\
  attribute vec3 aPosition;                               \n\
  void main(void)                                         \n\
  {                                                       \n\
    gl_Position = uShadowMatrix * vec4(aPosition, 1.0);   \n\
  }";

LISTING 8.1: Shadow pass vertex shader.

var fragment_shader = "\
  precision highp float;                                                        \n\
  float Unpack(vec4 v) {                                                        \n\
    return v.x + v.y / 256.0 + v.z / (256.0*256.0) + v.w / (256.0*256.0*256.0); \n\
    // return v.x;                                                              \n\
  }                                                                             \n\
  vec4 pack_depth(const in float d)                                             \n\
  {                                                                             \n\
    if (d == 1.0) return vec4(1.0, 1.0, 1.0, 1.0);                              \n\
    const vec4 bit_shift = vec4(1.0, 256.0, 256.0*256.0, 256.0*256.0*256.0);    \n\
    const vec4 bit_mask  = vec4(1.0/256.0, 1.0/256.0, 1.0/256.0, 0.0);          \n\
    vec4 res = fract(d * bit_shift);                                            \n\
    res -= res.yzwx * bit_mask;                                                 \n\
    return res;                                                                 \n\
  }                                                                             \n\
  void main(void)                                                               \n\
  {                                                                             \n\
    gl_FragColor = vec4(pack_depth(gl_FragCoord.z));                            \n\
  }";

LISTING 8.2: Shadow pass fragment shader.

Once we have filled the shadow map with the scene depth as seen from the light source in the shadow pass, we are ready to render the scene from the actual observer point of view and apply lighting and shadowing. This lighting pass is slightly more complicated than a standard lighting pass, but it is not difficult to implement, as shown by the vertex and fragment shader code in Listings 8.3 and 8.4, respectively. The vertex shader transforms the vertices into the observer’s clip space as usual, using the model-view and projection matrices (uModelViewMatrix and uProjectionMatrix). At the same time, it transforms the same input position as if it were transforming the vertex in light space, as happened in the shadow pass, and makes the result available to the fragment shader through the varying vShadowPosition. The fragment shader is now in charge of completing the transformation pipeline to retrieve the coordinates needed to access the shadow map (uShadowMap) and compare the occluder depth (Sz) with the occludee depth (Fz) in the shadow test.

var vertex_shader = "\
  uniform mat4 uModelViewMatrix;                          \n\
  uniform mat4 uProjectionMatrix;                         \n\
  uniform mat4 uShadowMatrix;                             \n\
  attribute vec3 aPosition;                               \n\
  attribute vec2 aTextureCoords;                          \n\
  varying vec2 vTextureCoords;                            \n\
  varying vec4 vShadowPosition;                           \n\
  void main(void)                                         \n\
  {                                                       \n\
    vTextureCoords = aTextureCoords;                      \n\
    vec4 position = vec4(aPosition, 1.0);                 \n\
    vShadowPosition = uShadowMatrix * position;           \n\
    gl_Position = uProjectionMatrix * uModelViewMatrix    \n\
                  * vec4(aPosition, 1.0);                 \n\
  }";

LISTING 8.3: Lighting pass vertex shader.

var fragment_shader = "\
  precision highp float;                                          \n\
  uniform sampler2D uTexture;                                     \n\
  uniform sampler2D uShadowMap;                                   \n\
  varying vec2 vTextureCoords;                                    \n\
  varying vec4 vShadowPosition;                                   \n\
  float Unpack(vec4 v) {                                          \n\
    return v.x + v.y / 256.0 +                                    \n\
           v.z / (256.0*256.0) + v.w / (256.0*256.0*256.0);       \n\
  }                                                               \n\
  bool IsInShadow() {                                             \n\
    vec3 normShadowPos = vShadowPosition.xyz / vShadowPosition.w; \n\
    vec3 shadowPos = normShadowPos * 0.5 + vec3(0.5);             \n\
    float Fz = shadowPos.z;                                       \n\
    float Sz = Unpack(texture2D(uShadowMap, shadowPos.xy));       \n\
    bool inShadow = (Sz < Fz);                                    \n\
    return inShadow;                                              \n\
  }                                                               \n\
  void main(void) {                                               \n\
    vec4 color = texture2D(uTexture, vTextureCoords);             \n\
    if (IsInShadow())                                             \n\
      color.xyz *= 0.6;                                           \n\
    gl_FragColor = color;                                         \n\
  }";

LISTING 8.4: Lighting pass fragment shader.

Note that the inclusion of the shadow test only affects the lighting computation, and all other shader machinery is unchanged.

8.4 Shadow Mapping Artifacts and Limitations

When we apply an algorithm to introduce shadows into the scene, we are basically trying to solve a visibility problem. Even if correct and sound from a theoretical point of view, these techniques must deal with the inaccuracies and limitations of the mathematical tools we use. In particular, with shadow mapping we have to face the inherent issues that arise when using the machine’s finite arithmetic, and the limited resolution of the depth map texture.

8.4.1 Limited Numerical Precision: Surface Acne

In the lighting pass, once we calculate the fragment z component Fz in light space and retrieve the value Sz stored in the shadow map, we just need to compare them to see if the fragment is in shadow. Note that, theoretically, the two values can relate to each other in only one of two possible ways:

Fz > Sz    or    Fz = Sz

The third relation, Fz < Sz, cannot occur because we used depth testing in the shadow pass, hence Sz is the smallest value the whole scene can produce at that particular location of the depth buffer. The real problem comes from the fact that we are using finite arithmetic (that is, floating point numbers) from the beginning of the pipeline to the shadow test. This implies that the calculated values incur rounding errors that accumulate over and over and, in the end, cause two conceptually identical numbers to differ in practice. The visual effect of these numerical inaccuracies is shown in Figure 8.5 and is referred to as surface acne: every z component deviates from the exact value by a very small amount, causing the shadow test to behave apparently randomly on fragments that:

Figure 8.5


Shadow map acne. Effect of the depth bias.

  1. should be lit but fail the shadow test when compared against themselves
  2. are just below lit ones but pass the shadow test.

A commonly used solution to the acne problem is to give some advantage to the fragment being tested, Fz, over the shadow map value, Sz. This means bringing Fz nearer to the light, drastically reducing the misclassification of lit fragments as shadowed ones (false positives). This is accomplished by replacing the shadow test of Listing 8.4 (the line bool inShadow = (Sz < Fz);) with the following:

bool inShadow = ((Fz - DepthBias) > Sz);

By subtracting a small value from Fz (or adding it to Sz), lit fragments are identified more accurately. Note that a good value for DepthBias that does not depend on the scene being rendered does not exist, so it must be determined approximately, by trial and error.

In this way we have worked against false positives, but unfortunately we have increased the number of false negatives, that is, fragments that are in shadow but are incorrectly classified as lit. Given that this last problem is far less noticeable than the first one, this is often considered an acceptable quality trade-off.
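For reference, a possible biased version of the IsInShadow function of Listing 8.4 is sketched below; uDepthBias is an assumed uniform (a small value such as 0.002, to be tuned by trial and error), not part of the original listing.

bool IsInShadow() {
  vec3 normShadowPos = vShadowPosition.xyz / vShadowPosition.w;
  vec3 shadowPos = normShadowPos * 0.5 + vec3(0.5);
  float Fz = shadowPos.z;
  float Sz = Unpack(texture2D(uShadowMap, shadowPos.xy));
  return (Fz - uDepthBias) > Sz;  // give the tested fragment a small advantage over the shadow map
}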

8.4.1.1 Avoid Acne in Closed Objects

We can, however, exploit some properties of the scene being rendered to relieve ourselves of the burden of searching for an adequate amount of depth bias. In particular, if all the objects in the scene are watertight we can completely abandon the use of depth offsetting. The idea is to render in the shadow pass only the back faces of the objects, that is, to set the face culling mode to cull away front faces instead of back ones as usual. In this way, we move the occurrence of the acne phenomenon from front faces to back ones:

  • the subtle precision issues that caused parts of the surfaces exposed to light to be wrongly considered in shadow are not an issue anymore, because back surfaces are sufficiently distant from front ones, which now do not self-shadow themselves;
  • on the other hand, now back surfaces are self shadowing, but this time precision issues will cause light leakage, making them incorrectly classified as lit.

The net effect of this culling reversal in the shadow pass is to eliminate the false positives (lit fragments classified as in shadow) but to introduce false negatives (shadowed fragments classified as lit). However, removing this misclassification of back surfaces is easily accomplished: in fact, for a closed object, observing that a surface points away from the light source, and is thus back-facing from the light’s point of view, is enough to correctly classify that surface as not being lit. To detect this condition, in the lighting pass the fragment shader must check whether the fragment being shaded belongs to a part of the surface that is back-facing from the point of view of the light, that is, whether the interpolated surface normal N points in the same hemisphere as the light vector (point and spot lights) or the light direction (directional lights) L, that is, if N · L > 0.
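On the JavaScript side, the culling reversal in the shadow pass amounts to a couple of state changes; in this sketch, drawScene and shadowPassProgram are placeholders for the shadow pass draw call and program:

gl.enable(gl.CULL_FACE);
gl.cullFace(gl.FRONT);          // shadow pass: render only the back faces seen from the light
drawScene(shadowPassProgram);
gl.cullFace(gl.BACK);           // restore the usual culling for the lighting pass

In the lighting pass fragment shader, the additional test could then be written as if (dot(N, L) > 0.0), where N is the interpolated normal and L the light ray direction, both assumed to be available as varyings or uniforms.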

8.4.2 Limited Shadow Map Resolution: Aliasing

With the shadow mapping technique, what we are doing is approximating the visibility function V with a lookup table, namely the shadow map: in the shadow pass we construct the table, which is then used in the lighting pass. As with all approximation methods based on lookup tables, the quality of the output depends on the table size, that is, on its granularity or resolution. This means that the shadow test heavily depends on the size of the shadow map texture: the larger the texture, the more accurate the shadow test will be. More formally, the problem that affects the accuracy of the method is the same one that causes jagged edges during line or polygon rasterization (see Section 5.3.3), that is, aliasing. In particular, this occurs when a texel in the shadow map projects onto more than one fragment generated from the observer’s point of view: in the lighting pass, every such fragment Fi will be back-projected to the same location T in the shadow map, and thus the footprint of T will be larger than a single fragment. In other words, we have magnification of the shadow map texture. Figure 8.6 shows a typical example of aliasing for a directional light. Although the problem exists for any type of light source, with perspective light cameras it is most noticeable when the part of the scene being rendered is far away from the light camera origin.

Figure 8.6


Aliasing due to the magnification of shadow map.

8.4.2.1 Percentage Closer Filtering (PCF)

A way to mitigate the sharp boundaries of the footprint of T, that is, to soften the shadow edges, is to perform not just a single shadow test against the corresponding reprojected texel T, but a series of tests on samples taken in the neighborhood of T, and then to average the boolean results, obtaining a value in the interval [0, 1] to use for lighting the fragment. We may compare this technique with the area averaging technique for segment antialiasing discussed in Section 5.3.3 to find out that it follows essentially the same idea.

It is very important to underline that we do not perform a single shadow test against the average depth value of the samples (that would mean testing against a non-existent surface element given by the average depth); instead, we execute the test for every sample and then average the results. This process, known as Percentage Closer Filtering (PCF), helps improve the quality of the shadow rendering at the cost of multiple accesses to the depth map.
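A possible PCF variant of the shadow test of Listing 8.4 is sketched below: it returns the fraction of lit samples over a 3 × 3 neighborhood instead of a single boolean. The uniform uShadowMapSize (the side of the shadow map, in texels) is an assumption, not part of the original listing.

float LitPercentage() {
  vec3 shadowPos = vShadowPosition.xyz / vShadowPosition.w * 0.5 + vec3(0.5);
  float Fz  = shadowPos.z;
  float lit = 0.0;
  for (int i = -1; i <= 1; ++i)
    for (int j = -1; j <= 1; ++j) {
      vec2 offset = vec2(float(i), float(j)) / uShadowMapSize;
      float Sz = Unpack(texture2D(uShadowMap, shadowPos.xy + offset));
      if (Fz <= Sz) lit += 1.0;     // one shadow test per sample...
    }
  return lit / 9.0;                 // ...and only then the results are averaged
}

The final color could then be modulated with, for example, color.xyz *= mix(0.6, 1.0, LitPercentage());.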

Softening the shadow edges can also be used to mimic the visual effect of a penumbra (only when the penumbra region is small). At any rate, this should not be confused with a method to calculate the penumbra effect.

Figure 8.7 shows a client using PCF to reduce aliasing artifacts.

Figure 8.7


PCF shadow mapping. (See client http://envymycarbook.com/chapter8/0/0.html.)

8.5 Shadow Volumes

A shadow volume is a volume of 3D space where every point is in shadow. Figure 8.8 (Left) illustrates the shadow volume generated by a sphere and a point light. We can define the shadow volume as the volume bounded by the extrusion of the caster’s silhouette along the light rays to infinity, plus the portion of its surface directly lit (the upper part of the sphere). Since we are not interested in the world outside the scene, we consider the intersection with the scene’s bounding box. We can use the boundary of the shadow volume to find out whether a point is in shadow or not as follows. Suppose we assign a counter, initially set to 0, to each view ray, and follow the ray as it travels through the scene. Every time the ray crosses the boundary entering a shadow volume we increment the counter by 1, and every time it crosses the boundary exiting a shadow volume we decrement the counter by 1. We know whether the ray is entering or exiting by using the dot product between the ray direction and the boundary normal: if it is negative the ray is entering, otherwise it is exiting. So, when the ray hits a surface, we only need to test whether the value of the counter is 0 or greater. If it is 0 the point is not in any shadow volume; if it is greater, it is. This may be referred to as the disparity test. Figure 8.8 (Right) shows a more complex example with four spheres and nested and overlapping shadow volumes. As you can verify, the method also works in these situations.

The problem with this approach arises when the view point is itself in shadow. The example in Figure 8.9 shows a situation where the view ray hits a surface without having entered any shadow volume, because the ray origin is already inside one. Luckily, this problem is solved by counting only the intersections of the ray after its first hit with a surface: if it exits more times than it enters, the point hit is in shadow, otherwise it is not.

Figure 8.8


(Left) Example of shadow volume cast by a sphere. (Right) The shadow volume of multiple objects is the union of their shadow volumes.

Figure 8.9


If the viewer is positioned inside the shadow volume the disparity test fails.

8.5.1 Constructing the Shadow Volumes

As mentioned earlier, we need to extrude the silhouette edges of the shadow caster along the light rays, so the first step is to find them. Assuming the objects are watertight meshes, we observe that an edge is on the silhouette with respect to a given light camera if and only if one of the two faces sharing the edge is front-facing and the other is back-facing with respect to that camera, as shown in Figure 8.10 (Left).

Figure 8.10


(Left) Determining silhouette edges. (Right) Extruding silhouette edges and capping.

For each silhouette edge we form a quadrilateral whose bases are the silhouette edge itself and its projection onto a plane orthogonal to the z direction (and outside the bounding box of the scene). In so doing we have swept the silhouette and created a cone-like boundary that we need to cap. The opening nearest to the light can be capped by using the front faces of the object, the farthest one by projecting the same faces onto the same plane on which we projected the edges. Note that the projected faces will have to be inverted (that is, two of their indices need to be swapped) so that they are oriented towards the exterior of the bounding volume.
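As a sketch, and assuming an illustrative mesh representation where each edge stores the normals of its two adjacent faces, the silhouette extraction for a directional light of direction d could be written as:

function silhouetteEdges(mesh, d) {
  return mesh.edges.filter(function (edge) {
    var frontA = vec3.dot(edge.faceNormalA, d) < 0.0;  // face A is front facing the light
    var frontB = vec3.dot(edge.faceNormalB, d) < 0.0;  // face B is front facing the light
    return frontA !== frontB;                          // silhouette: the two faces disagree
  });
}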

8.5.2 The Algorithm

We described the approach as if we could travel along each single view ray, but in our rasterization pipeline we do not follow rays, we rasterize primitives. Luckily, we can obtain the same result using the stencil buffer (see Section 5.3.1) as follows:

  1. Render the scene and compute the shading as if all the fragments were in shadow.
  2. Disable writes to the depth and color buffers. Note that from now on the depth buffer will not be changed and it contains the depth of the scene from the viewer’s camera.
  3. Enable the stencil test.
  4. Set the stencil test to increment on depth fail.
  5. Enable front-face culling and render the shadow volume. After this step each pixel of the stencil buffer will contain the number of times the corresponding ray has exited the shadow volumes after hitting the surface of some object.
  6. Set the stencil test to decrement on depth fail.
  7. Enable back-face culling and render the shadow volume. After this step each pixel of the stencil buffer will be decremented by the number of times the corresponding ray has entered the shadow volumes after hitting the surface of some object. Therefore, if the value at the end of the rendering pass is 0 it means the fragment is not in shadow.
  8. Set the stencil test to pass on 0, that is, if the number of front face and back face fragments behind the surface hit are equal.
  9. Render the scene as completely lit. This is correct because fragments in shadow are masked by the stencil test.
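In WebGL, steps 2–9 map directly onto stencil and mask state; a possible sketch, where drawShadowVolumes and drawScene are placeholders for the corresponding draw calls, is the following:

gl.colorMask(false, false, false, false);   // step 2: disable color writes
gl.depthMask(false);                        //         and depth writes
gl.enable(gl.STENCIL_TEST);                 // step 3
gl.stencilFunc(gl.ALWAYS, 0, 0xFF);
gl.enable(gl.CULL_FACE);

gl.cullFace(gl.FRONT);                      // step 5: render only the back faces of the volumes
gl.stencilOp(gl.KEEP, gl.INCR, gl.KEEP);    // step 4: increment on depth fail
drawShadowVolumes();

gl.cullFace(gl.BACK);                       // step 7: render only the front faces of the volumes
gl.stencilOp(gl.KEEP, gl.DECR, gl.KEEP);    // step 6: decrement on depth fail
drawShadowVolumes();

gl.colorMask(true, true, true, true);       // steps 8-9: restore the writes and
gl.depthMask(true);                         // draw the scene fully lit where the stencil is 0
gl.stencilFunc(gl.EQUAL, 0, 0xFF);
gl.stencilOp(gl.KEEP, gl.KEEP, gl.KEEP);
drawScene(lightingProgram);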

Even if with the shadow volume technique we can achieve pixel-accurate shadow edges and do not suffer from the texture magnification aliasing of shadow maps, it requires a high fill rate and does not lend itself to easy modifications for softening the resulting shadow boundaries. Note that the shadow volumes need to be recomputed every time the relative position of the light and of the object casting the shadow changes, so the technique may easily become too cumbersome. In fact, the construction of the shadow volumes is typically done on the CPU side and requires great care to avoid inaccuracies due to numerical precision. For example, if the normals of two adjacent triangles are very similar, an edge may be misclassified as a silhouette edge and so create a “hole” in the shadow volume.

8.6 Self-Exercises

8.6.1 General

  1. Describe what kind of shadow you would obtain if the viewer camera and the light camera coincide or if they coincide but the z axes of the view frame are the opposite of each other.
  2. We know that the approximation errors of shadow mapping are due to numerical precision and texture resolution. Explain how reducing the light camera viewing volume would affect those errors.
  3. If we have n lights, we need n rendering passes to create the shadow maps, and the fragment shader in the lighting pass will have to make at least n accesses to texture to determine if the fragment is in shadow for some of the lights. This will surely impact on the frame rate. Can we use frustum culling to reduce this cost for: directional light sources, point light sources or spotlights?
  4. What happens if we enable mipmapping on the shadow map texture?
  5. Suppose we have a scene where all the lights are directional and all the objects casting shadows are spheres. How could the shadow volumes technique benefit from these assumptions?

8.6.2 Client Related

  1. UFO over the race! Suppose there are a few unidentified flying objects over the race. The objects are disk-shaped, completely flat, with a 3-meter radius, and they always stay parallel to the ground. Add the disks to the scene and make them cast shadows by implementing a version of shadow mapping without the shadow map, that is, without performing the shadow pass. Hint: Think about what problem is solved with the shadow map and why in this case the problem is so simple that you do not need the shadow map.

    Variation 1: The UFOs are not always parallel to the ground, they can change orientation.

    Variation 2: This time the UFOs are not simple disks, they can be of any shape. However, we know that they fly so high they will always be closer to the sun than anything else. Think how to optimize the cast shadows with shadow mapping, reducing the texture for the depth value and simplifying the shader. Hint: How many bits would be enough to store the depth in the shadow map?

  2. Place a 2 × 2 meter glass panel near a building. On this panel map a texture with an RGBA image where the value for the α channel is either 0 or 1 (you can use the one in http://envymycarbook.com/media/textures/smiley.png). Implement shadow mapping for the car’s headlights so that when they illuminate the glass panel, the image on it is mapped on the walls. Hint: account for the transparency in the shadow pass.

1 Probably they will be included in the specification very shortly, but this trick we are going to show is worth knowing anyway.
