Chapter 9

Image-Based Impostors

According to the original definition given in Reference [27], “An impostor is an entity that is faster to render than the true object, but that retains the important visual characteristics of the true object.” We have already seen an example of an impostor technique with the skybox in Section 7.7.3, where a panoramic image of the environment was mapped onto a cube to represent the faraway scene. In that case, it looks as if we are looking at mountains when, in fact, they are just images.

In this chapter we will describe some image-based impostors, that is, impostors made of very simple geometry. Images arranged to represent a certain 3D object should be rendered in such a way as to maximize the impression that we are looking at the geometric representation of the object. The geometry of the impostor is not necessarily related to the geometry of the object: it is merely the medium used to obtain the final rendered image.

Consider the rendering of a tree. A tree has a very intricate geometric description (think about modeling all the branches and the leaves), but most of the time we are not so close to a tree as to tell one leaf from another, so we can put up a rectangle with the picture of a tree mapped onto it and orient the rectangle towards the point of view, so as to minimize the perception of depth inconsistencies. In this chapter we will describe some simple techniques that are both useful and interesting from a didactic point of view. We will see that these techniques are sufficient to considerably improve the photorealistic look of our client with little implementation effort.

Image-based impostors are a subset of image-based rendering techniques, usually shortened to IBR. While impostors are an alternative representation of some specific part of the scene, IBR refers in general to all the techniques that use digital images to produce the final photorealistic synthetic image. Since a high number of IBR techniques exists, different categorizations have been developed over the last few years to better formalize the various algorithms and assess the differences between them. One of the most interesting categorizations, which is simple and effective, is the so-called IBR continuum proposed by Lengyel [23]. This categorization arranges the rendering techniques according to the amount of geometry used to achieve the rendering of the impostor (see Figure 9.1).

Figure 9.1

A categorization of image-based rendering techniques: the IBR continuum.

It is important to underline that the definition of impostor also includes pure geometric approximations of the original object, while here we focus only on image-based approximations. Along the IBR continuum we can distinguish techniques ranging from those that employ a high number of images and no geometric information at all, to those that use some auxiliary geometry (usually very simple) in conjunction with the images to obtain a convincing final rendering, to those that replace a complex geometry with a simpler one plus a set of images. In the following we present some simple but effective techniques, starting from the ones that employ no or very simple geometry and moving to the ones that rely on more complex geometry.

9.1 Sprites

A sprite is a two-dimensional image or animation inserted in a scene, typically used to show the action of a character. Figure 9.2 shows sprites from the well-known video game Pac-Man®. The right part of the image shows the animation of Pac-Man. Note that, since the sprite is overlaid on the background, the pixels that are not part of the drawing are transparent. Knowing what we now know, sprites may look naive and pointless, but they were a breakthrough in the game industry when Atari® introduced them back in 1977. At a time when the refresh rate did not allow a game to redraw the whole screen for moving characters, hardware sprites (circuitry dedicated to lighting small squares of pixels in a predetermined sequence at any point of the screen) made it possible to show an animation in overlay mode, without requiring a redraw of the background. As you may note by looking at one of these old video games, there may be aliasing effects in the transition between the sprite and the background, because sprites were prepared beforehand and were the same at every position of the screen. With 3D games, sprites became less central and more a tool for things like lens flares, an effect we will see in Section 9.2.4. However, in recent years there has been an outbreak of 2D games on the Web and for mobile devices, and sprites have become popular again, although they are now implemented as textures on rectangles and no sprite-specialized hardware is involved.

Figure 9.2

Examples of sprites. (Left) The main character, the ghost and the cherry of the famous Pac-Man® game. (Right) Animation of the main character.

9.2 Billboarding

We anticipated the example of a billboard in the introduction to this chapter. More formally, a billboard consists of a rectangle with a texture, usually with an alpha channel. So billboards also include sprites, only they live in the 3D scene and may be oriented in order to provide a sense of depth that is not possible to achieve with sprites. Figure 9.3 shows the representation of the billboard we will refer to in this section. We assume the rectangle is specified in an orthogonal frame B. Within this frame, the rectangle is symmetric with respect to the y axis, lies on the XY plane, and occupies the Y+ half space.

Figure 9.3

(Left) Frame of the billboard. (Right) Screen-aligned billboards.

The way frame B is determined divides the billboard techniques into the following classes: static, screen-aligned, axis-aligned and spherical.

9.2.1 Static Billboards

With static billboards the frame B is simply fixed in world space once and for all. The most straightforward application is to implement real advertisement billboards along the street sides or on the buildings. Stretching the definition a little, the skybox is a form of static billboard, although we have a cube instead of a simple rectangle.

Usually you do not find static billboards mentioned in other textbooks, because if B is static then the billboard is just part of the geometry and that is all. We added this case in the hope of giving a more structured view of billboarding.

9.2.2 Screen-Aligned Billboards

With screen-aligned billboards the axes of frame B and of the view reference frame V coincide, so the billboard is always parallel to the view plane (hence the name) and only the position (the origin of frame B) changes. These billboards are essentially equivalent to sprites that can be zoomed, and can be used to simulate lens flares, to overlay text and other gadgets, or to replace very faraway geometry.

9.2.3 Upgrade Your Client: Add Fixed-Screen Gadgets

This is going to be a very simple add-on to our client, which will not introduce any new concept or WebGL notion. To keep the code clean, let us define a class OnScreenBillboard as shown in Listing 9.1, which simply contains the rectangle, the texture to map on it and the frame B, and the function to render the billboard.

7 function OnScreenBillboard(pos, sx, sy, texture, texcoords) {
8   this.sx = sx;            // scale width
9   this.sy = sy;            // scale height
10  this.pos = pos;          // position
11  this.texture = texture;  // texture
12  var quad_geo = [-1, -1, 0, 1, -1, 0, 1, 1, 0, -1, 1, 0];
13  this.billboard_quad = new TexturedQuadrilateral(quad_geo, texcoords);
14 };

LISTING 9.1: Definition of a billboard. (Code snippet from http://envymycarbook.com/chapter9/0/0.js.)

The only interesting things to point out are how we build the frame B for screen-aligned impostors and at which point of our rendering we draw them. For things like the speedometer or the image of the driver's avatar, which we always want to overlay the rest of the scene, we simply repeat what we did in Section 5.3.4: we express the impostors directly in NDC space and draw them after everything else, after disabling the depth test. We may want fancier effects, like making some writing appear as if it were in the middle of the scene, that is, partially covered by a building or a tree. In this case we may simply express the frame B in view space and draw the billboard just after the rest of the scene but before drawing the inside of the cabin.

Listing 9.2 shows the initialization of the speedometer that you can see in Figure 9.4. We create both an analog version with a needle and a digital one.

Figure 9.4

Client with gadgets added using plane-oriented billboards. (See client http://envymycarbook.com/chapter9/0/0.html.)

15 NVMCClient.initializeScreenAlignedBillboard = function (gl) {
16  var textureSpeedometer = this.createTexture(gl, "../../../media/textures/speedometer.png");
17  var textureNeedle = this.createTexture(gl, "../../../media/textures/needle2.png");
18  this.billboardSpeedometer = new OnScreenBillboard([-0.8, -0.65], 0.15, 0.15, textureSpeedometer, [0.0, 0.0, 1.0, 0.0, 1.0, 1.0, 0.0, 1.0]);
19  this.createObjectBuffers(gl, this.billboardSpeedometer.billboard_quad, false, false, true);
20  this.billboardNeedle = new OnScreenBillboard([-0.8, -0.58], 0.09, 0.09, textureNeedle, [0.0, 0.0, 1.0, 0.0, 1.0, 1.0, 0.0, 1.0]);
21  this.createObjectBuffers(gl, this.billboardNeedle.billboard_quad);
22
23  var textureNumbers = this.createTexture(gl, "../../../media/textures/numbers.png");
24  this.billboardDigits = [];
25  for (var i = 0; i < 10; i++) {
26    this.billboardDigits[i] = new OnScreenBillboard([-0.84, -0.27], 0.05, 0.08, textureNumbers, [0.1 * i, 0.0, 0.1 * i + 0.1, 0.0, 0.1 * i + 0.1, 1.0, 0.1 * i, 1.0]);
27    this.createObjectBuffers(gl, this.billboardDigits[i].billboard_quad, false, false, true);
28  }
29 };

LISTING 9.2: Initialization of billboards. (Code snippet from http://envymycarbook.com/chapter9/0/0.js.)

For the analog version, we use two different billboards, one for the plate and one for the needle. When we render the speedometer, we first render the plate and then the needle, rotated according to the current speed of the car. For the version with digits, we create 10 billboards, all referring to the same texture textureNumbers, which contains the images of the digits 0…9, making sure that the texture coordinates of billboard i map to the rectangle of the texture containing digit i.
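
As an illustration, here is a minimal sketch of how plate, needle and digits could be drawn each frame. It is not the book's client code: drawBillboard, the rotation field and the 240-degree needle range are assumptions made for this example.

// Sketch (not the actual client code) of drawing the speedometer gadgets.
// drawBillboard and the rotation field are hypothetical helpers.
NVMCClient.drawSpeedometer = function (gl, speedKmh) {
  gl.disable(gl.DEPTH_TEST); // gadgets always overlay the scene
  this.drawBillboard(gl, this.billboardSpeedometer); // the plate first
  // map the speed to a needle angle; the range -120..+120 degrees
  // for 0..200 km/h is an assumption about the plate image
  var angleRad = (-120.0 + (speedKmh / 200.0) * 240.0) * Math.PI / 180.0;
  this.billboardNeedle.rotation = SglMat4.rotationAngleAxis(angleRad, [0.0, 0.0, 1.0]);
  this.drawBillboard(gl, this.billboardNeedle); // needle over the plate
  // digital readout: one billboard per digit, shifted to the right
  var digits = Math.floor(speedKmh).toString();
  for (var i = 0; i < digits.length; ++i) {
    var d = this.billboardDigits[parseInt(digits.charAt(i), 10)];
    d.pos = [-0.84 + i * 0.06, -0.27];
    this.drawBillboard(gl, d);
  }
  gl.enable(gl.DEPTH_TEST);
};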

9.2.4 Upgrade Your Client: Adding Lens Flare Effects

When light enters the lens of a camera, the light rays are refracted by the lens system to produce the image on the film or CCD. However, when rays come directly from a bright light source, there are also unwanted internal reflections that cause artifacts like those shown in Figure 9.5, a real photograph; these are called lens flares. Lens flares typically appear as round or hexagonal shapes, mostly along the line that goes from the projection of the light source (the sun in the image) to its opposite position. Another artifact, also visible in the image, is blooming, the spreading in the image of a very bright source.

These effects are fairly complex to model optically in real time (although there are recent techniques that tackle this problem), but they can be nicely emulated using screen-aligned impostors, and they have commonly been found in video games as far back as the late 1990s. Figure 9.6 illustrates how to determine the position and size of the flares. A flare can be done as a post-processing effect, which means that it is applied to the final image after the rendering of the 3D scene is done.

A flare is simply a brighter, colored region. Figure 9.6 (Right) shows what is called a luminance texture, which is simply a single-channel texture. If this image is set as the texture of a, say, red rectangle, we can modulate the color of the textured polygon by multiplying the luminance value by the polygon color in our fragment shader; the result is a shade of red ranging from full red to black. If we draw this textured rectangle with blending enabled and the blending coefficients set to gl.ONE, gl.ONE, the result will simply be the sum of the color in the framebuffer and the color of the textured polygon, which will increase the red channel by the value of the luminance texture (note that black is 0). That is all: we can combine several of these impostors to obtain any sort of flare we want. For the main flare, that is, the light source itself, we may use a few star-shaped textures and some round ones. The overlap between impostors with colors on all three channels will produce a white patch, so we also achieve a kind of blooming.

Figure 9.5

Lens flare effect. Light scattered inside the optics of the camera produces flares of light on the final image. Note also the increased diameter of the sun, called the blooming effect.

Figure 9.6

(Left) Positions of the lens flare in screen space. (Right) Examples of textures used to simulate the effect.
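
As a concrete illustration of the modulation just described, a flare fragment shader can be as small as the following sketch (the uniform and varying names are illustrative, not those of the book's shaders):

precision mediump float;
uniform sampler2D uFlareTexture; // single-channel luminance texture
uniform vec4 uFlareColor;        // flare tint, e.g., (1,0,0,1) for red
varying vec2 vTexCoord;
void main(void) {
  // black texels contribute 0; with gl.blendFunc(gl.ONE, gl.ONE)
  // the tinted luminance is simply added to the framebuffer
  float lum = texture2D(uFlareTexture, vTexCoord).r;
  gl_FragColor = uFlareColor * lum;
}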

Note that if the light source is not visible we do not want to create lens flares. As we learned in Section 5.2, a point is not visible either because it is outside the view frustum or because it is hidden by something closer to the point of view along the same line of sight.

9.2.4.1 Occlusion Query

Listing 9.3 shows how the test for the visibility of a specific point could be implemented using the feature called occlusion query, which is not available in WebGL 1.0 but most likely will be in the near future. With an occlusion query we make the API count the number of fragments that pass the depth test. A query is an object created by the function gl.genQueries at line 2. The call gl.beginQuery(gl.SAMPLES_PASSED, idQuery) at line 3 tells WebGL to start counting and the call gl.endQuery() to stop. Then the result is read by calling gl.getQueryObjectiv(idQuery, gl.QUERY_RESULT) (line 7). In our specific case, we would render a vertex at the light source position after the rest of the scene has been drawn. If the result of the query is not 0, it means that the point is visible. Obviously we do not need to actually see the rendered vertex, so at line 4 we disable writing on the color buffer.

1 isPositionVisible = function (gl, lightPos) {
2   gl.genQueries(1, idQuery);
3   gl.beginQuery(gl.SAMPLES_PASSED, idQuery);
4   gl.colorMask(false, false, false, false);
5   this.renderOneVertex(lightPos);
6   gl.endQuery();
7   var n_passed = gl.getQueryObjectiv(idQuery, gl.QUERY_RESULT);
8   return (n_passed > 0); }

LISTING 9.3: Would-be implementation of a function to test if the point at position lightPos is visible. This function should be called after the scene has been rendered.

As we do not have occlusion queries in the WebGL 1.0 specification, we need some other way to test the visibility of a point. We can try to mimic the occlusion query with the following steps:

  1. Render the scene
  2. Render a vertex at the light source position, assigning to its color attribute the value (1, 1, 1, 1) (that is, white)
  3. Use gl.readPixels to read back the pixel corresponding to the vertex projection and check if its color is (1, 1, 1, 1): if so, the vertex passed the depth test and wrote a white pixel, hence it is visible (see the sketch below)

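A minimal sketch of these steps follows; isLightVisible and the viewport parameters are illustrative names, the NDC position of the light is assumed to be already available, and renderOneVertex is assumed to draw a single white vertex with the depth test enabled.

// Sketch of the readback-based visibility test. Call after the scene
// has been rendered, with the depth buffer already filled.
NVMCClient.isLightVisible = function (gl, lightPosNDC, viewportW, viewportH) {
  this.renderOneVertex(lightPosNDC); // assumed to draw one white vertex
  // window coordinates of the projected vertex
  var px = Math.floor((lightPosNDC[0] * 0.5 + 0.5) * viewportW);
  var py = Math.floor((lightPosNDC[1] * 0.5 + 0.5) * viewportH);
  var pixel = new Uint8Array(4);
  gl.readPixels(px, py, 1, 1, gl.RGBA, gl.UNSIGNED_BYTE, pixel);
  // a white pixel means the vertex passed the depth test
  // (beware: the pixel may have been white already, a false positive)
  return pixel[0] === 255 && pixel[1] === 255 && pixel[2] === 255;
};
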
Unfortunately, this approach does not always work. The problem is that the pixel we test may already have been white before the light source was rendered, so we may have false positives. We will suggest a way around this problem in the exercise section at the end of the chapter, but, as a rule of thumb, it is always better to avoid readbacks from GPU to CPU memory, both because they are slow and because if you read back data from the GPU it means you will run some JavaScript code to use it, and that is also slow. Here we show a way that does not require readbacks:

  1. Render the scene, setting the shader to store the depth values in a texture as in the shadow pass of shadow mapping (see Listing 8.2 in Section 8.3)
  2. Render the scene normally
  3. Bind the depth texture created at step 1 and render the billboards for the lens flares. In the fragment shader, test if the z value of the projection of the light source position is smaller than the value in the depth texture. If it is not, discard the fragment.

Listing 9.4 shows the fragment shader that implements the technique just described. At lines 30-31 the light position is transformed into NDC space, and the tests at lines 32-37 check if the position is outside the view frustum, in which case the fragment is discarded. Line 38 reads the depth buffer at the coordinates of the projection of the light, and at lines 39-40 we test whether the light is occluded, in which case the fragment is discarded.

20 uniform sampler2D uTexture;
21 uniform sampler2D uDepth;
22 precision highp float;
23 uniform vec4 uColor;
24 varying vec2 vTextureCoords;
25 varying vec4 vLightPosition;
26 float Unpack(vec4 v) {
27   return v.x + v.y / (256.0) + v.z / (256.0*256.0) + v.w / (256.0*256.0*256.0);
28 }
29 void main(void)
30 { vec3 proj2d = vec3(vLightPosition.x / vLightPosition.w, vLightPosition.y / vLightPosition.w, vLightPosition.z / vLightPosition.w);
31   proj2d = proj2d * 0.5 + vec3(0.5);
32   if (proj2d.x < 0.0) discard;
33   if (proj2d.x > 1.0) discard;
34   if (proj2d.y < 0.0) discard;
35   if (proj2d.y > 1.0) discard;
36   if (vLightPosition.w < 0.0) discard;
37   if (proj2d.z < -1.0) discard;
38   vec4 d = texture2D(uDepth, proj2d.xy);
39   if (Unpack(d) < proj2d.z)
40     discard;
41   gl_FragColor = texture2D(uTexture, vTextureCoords);
42 }
43 ";

LISTING 9.4: Fragment shader for lens flare accounting for occlusion of light source. (Code snippet from http://envymycarbook.com/chapter9/2/shaders.js.)

On the JavaScript side, Listing 9.5 shows the piece of code drawing the lens flares. Please note that the function drawLensFlares is called after the scene has been rendered. At lines 65-67 we disable the depth test and enable blending, and at line 73 we update the position in NDC space of the billboards. Just like for shadow mapping, we bind the texture attachment of the framebuffer where the depth buffer has been stored (this.shadowMapTextureTarget.texture) and, of course, the texture of the billboard.

64 NVMCClient.drawLensFlares = function (gl, ratio) {
65  gl.disable(gl.DEPTH_TEST);
66  gl.enable(gl.BLEND);
67  gl.blendFunc(gl.ONE, gl.ONE);
68  gl.useProgram(this.flaresShader);
69  gl.uniformMatrix4fv(this.flaresShader.uProjectionMatrixLocation, false, this.projectionMatrix);
70  gl.uniformMatrix4fv(this.flaresShader.uModelViewMatrixLocation, false, this.stack.matrix);
71  gl.uniform4fv(this.flaresShader.uLightPositionLocation, this.sunpos);
72
73  this.lens_flares.updateFlaresPosition();
74  for (var bi in this.lens_flares.billboards) {
75    var bb = this.lens_flares.billboards[bi];
76    gl.activeTexture(gl.TEXTURE0);
77    gl.bindTexture(gl.TEXTURE_2D, this.shadowMapTextureTarget.texture);
78    gl.uniform1i(this.flaresShader.uDepthLocation, 0);
79    gl.activeTexture(gl.TEXTURE1);
80    gl.bindTexture(gl.TEXTURE_2D, this.lens_flares.billboards[bi].texture);
81    gl.uniform1i(this.flaresShader.uTextureLocation, 1);
82    var model2viewMatrix = SglMat4.mul(SglMat4.translation([bb.pos[0], bb.pos[1], 0.0, 0.0]),
83      SglMat4.scaling([bb.s, ratio * bb.s, 1.0, 1.0]));
84    gl.uniformMatrix4fv(this.flaresShader.uQuadPosMatrixLocation, false, model2viewMatrix);
85    this.drawObject(gl, this.billboard_quad, this.flaresShader);
86  }
87  gl.disable(gl.BLEND);
88  gl.enable(gl.DEPTH_TEST);
89 };

LISTING 9.5: Function to draw lens flares. (Code snippet from http://envymycarbook.com/chapter9/1/1.js.)

This is just the shadow mapping concept again, in the special case where the light camera and the view camera are the same and the depth test is always done against the same texel. We can do better than this: we will propose a first improvement in the exercises at the end of this chapter and discuss a major improvement in the exercises of Chapter 10.

Figure 9.7 shows a snapshot of the client implementing lens flares.

Figure 9.7

A client with the lens flare effect. (See client http://envymycarbook.com/chapter7/4/4.html.)

9.2.5 Axis-Aligned Billboards

With axis-aligned billboarding $\mathbf{y}_B=[0,1,0]^T$ and $\mathbf{z}_B$ points toward the point of view (see Figure 9.9, Left), that is, toward its projection on the plane $y=o_{B_y}$:

$$\begin{aligned}
\mathbf{y}_B &= [0,1,0]^T\\
\mathbf{z}'_B &= (\mathbf{o}_V - \mathbf{o}_B) \odot [1,0,1]^T\\
\mathbf{z}_B &= \frac{\mathbf{z}'_B}{\|\mathbf{z}'_B\|}\\
\mathbf{x}_B &= \mathbf{y}_B \times \mathbf{z}_B
\end{aligned}$$

where $\odot$ denotes component-wise multiplication, which zeroes the $y$ component of $\mathbf{o}_V - \mathbf{o}_B$.

Note that for an orthographic projection the axes would be the same as for screen-aligned billboards. Axis-aligned billboards are typically used for objects with a roughly cylindrical symmetry, which look roughly the same from every direction, assuming the viewer is on the same plane (that is, not above or below). This is why trees are the typical objects replaced with axis-aligned billboards.

9.2.5.1 Upgrade Your Client: Better Trees

We will now rewrite the function drawTree introduced in Section 4.8.1 to use axis aligned billboards instead of cylinders and cones.

The image we will use as texture for the billboard has an alpha channel, with non-zero alpha values on the pixels representing the tree. Figure 9.8 shows the alpha channel remapped to gray scale for illustration purposes. Note that alpha is not just 1 or 0, but is modulated to reduce the aliasing effect of the discrete representation. The color of the rectangle in this case is unimportant, since we will replace it with the color (and alpha) written in the texture. Just like we did for the windshield in Section 5.3.4, we will use blending to combine the color of the billboard with the color currently in the framebuffer. We recall that, in order to handle transparency correctly, we need to draw the non-opaque objects back-to-front as in the painter's algorithm.

Listing 9.6 shows the salient code for rendering the trees, provided that drawTrees is called after all the rest of the scene has been rendered. The sorting is done at line 36. Here this.billboard_trees.order is an array of indices where position i indicates the back-to-front order of tree i. The JavaScript function sort performs the sorting of the array using a comparison function, which we define to compare the distances of two billboards from the viewer. Lines 39-47 compute the orientation of each billboard as explained above. Writing on the depth buffer is disabled (line 48), blending is set up (lines 49-50) and the billboards are rendered. Note that the order in which they are rendered is determined by the array we sorted (see line 59).

Figure 9.8

Alpha channel of a texture for showing a tree with a billboard. (See client http://envymycarbook.com/chapter9/2/2.html.)

33 NVMCClient.drawTrees = function (gl) {
34  var pos = this.cameras[this.currentCamera].position;
35  var billboards = this.billboard_trees.billboards;
36  this.billboard_trees.order.sort(function (a, b) {
37    return SglVec3.length(SglVec3.sub(billboards[b].pos, pos)) - SglVec3.length(SglVec3.sub(billboards[a].pos, pos)); });
38
39  for (var i in billboards) {
40    var z_dir = SglVec3.to4(SglVec3.normalize(SglVec3.sub(pos, billboards[i].pos)), 0.0);
41    var y_dir = [0.0, 1.0, 0.0, 0.0];
42    var x_dir = SglVec3.to4(SglVec3.cross(y_dir, z_dir), 0.0);
43    billboards[i].orientation = SglMat4.identity();
44    SglMat4.col$(billboards[i].orientation, 0, x_dir);
45    SglMat4.col$(billboards[i].orientation, 1, y_dir);
46    SglMat4.col$(billboards[i].orientation, 2, z_dir);
47  }
48  gl.depthMask(false);
49  gl.enable(gl.BLEND);
50  gl.blendFunc(gl.ONE, gl.ONE_MINUS_SRC_ALPHA);
51
52  gl.useProgram(this.textureShader);
53  gl.uniformMatrix4fv(this.textureShader.uProjectionMatrixLocation, false, this.projectionMatrix);
54  gl.activeTexture(gl.TEXTURE0);
55  gl.uniform1i(this.textureShader.uTextureLocation, 0);
56  gl.bindTexture(gl.TEXTURE_2D, this.billboard_trees.texture);
57
58  for (var i in billboards) {
59    var b = billboards[this.billboard_trees.order[i]];
60    this.stack.push();
61    this.stack.multiply(SglMat4.translation(b.pos));
62    this.stack.multiply(b.orientation);
63    this.stack.multiply(SglMat4.translation([0.0, b.s[1], 0.0]));
64    this.stack.multiply(SglMat4.scaling([b.s[0], b.s[1], 1.0, 1.0]));
65    gl.uniformMatrix4fv(this.textureShader.uModelViewMatrixLocation, false, this.stack.matrix);
66    this.drawObject(gl, this.billboard_quad, this.textureShader, [0.0, 0.0, 0.0, 0.0]);
67    this.stack.pop();
68  }
69  gl.disable(gl.BLEND);
70  gl.depthMask(true);
71 };

LISTING 9.6: Rendering axis-aligned billboards with depth sort. (Code snippet from http://envymycarbook.com/chapter9/2/2.js.)

As we increase the number of trees, at some point you may wonder if the cost of sorting the billboards on the CPU will become the bottleneck. It will. We have two simple options to avoid depth sorting. The first is not to use blending, but instead to discard the fragments with a small alpha value, so that in the fragment shader we have a test like if (color.a < 0.5) discard; (see the sketch below). The second option is: do not sort; just ignore the problem! Both these solutions produce aliasing artifacts, but we may find them acceptable to some extent, especially the second one. As a hint, consider that if two fragments with different depths have the same color, it does not matter whether they are sorted or not; since the trees are all copies and they are mostly shades of green, it makes sense that the missing sorting may go unnoticed.
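
A minimal sketch of this discard-based variant of the fragment shader could be as follows (the names are illustrative):

precision mediump float;
uniform sampler2D uTexture;
varying vec2 vTextureCoords;
void main(void) {
  vec4 color = texture2D(uTexture, vTextureCoords);
  // fragments with low alpha are treated as empty space and never
  // reach the framebuffer, so no blending or sorting is needed
  if (color.a < 0.5) discard;
  gl_FragColor = color;
}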

9.2.6 On-the-fly Billboarding

So far we have seen billboards whose image is prepared beforehand, independently of time and of the view frame. With on-the-fly billboarding, instead, the image for the billboard is created at run time by rendering the geometry of the object to a texture. The advantage of on-the-fly billboarding is better understood with a practical example.

Let us say that our race goes through a city with many buildings, and we are approaching the city from a distance. Until we get close, the city's projection on the screen will not change much, so we can render the city to a texture, then forget the geometry and use the texture to build a billboard. As we get closer, the projection of the billboard becomes bigger and at some point we will start to see magnification (that is, texels bigger than pixels, see Section 7.3.1). Before that happens, we refresh the texture by re-rendering the real model, and so on. On-the-fly billboarding may save us a lot of computation, but it also requires some criterion to establish when the billboard has become obsolete, for example the one sketched below.
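
A possible such criterion is to refresh the impostor as soon as one texel would cover more than one pixel. The following sketch assumes a perspective projection stored as a flat column-major array whose first element, projectionMatrix[0], is the focal scale along x; the function name is ours.

// Sketch: decide whether the impostor texture has become obsolete.
// widthWorld is the width of the billboard in world units, texWidth the
// width of its texture in texels, distance the eye-billboard distance.
function impostorNeedsRefresh(widthWorld, texWidth, distance, projectionMatrix, viewportWidth) {
  // projected width of the billboard, in pixels
  var widthPixels = (widthWorld * projectionMatrix[0] / distance) * viewportWidth * 0.5;
  // fewer than one texel per pixel means magnification is starting
  return (texWidth / widthPixels) < 1.0;
}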

This technique is often referred to as an “impostor” (for example, by Reference [1]). We refer to it as on-the-fly billboarding in order to highlight its main characteristic and to preserve the original, more general meaning of the term impostor.

9.2.7 Spherical Billboards

With spherical billboards the axis $\mathbf{y}_B$ coincides with that of V and the axis $\mathbf{z}_B$ points toward the point of view (see Figure 9.9). Note that in general we cannot set $\mathbf{z}_B = \mathbf{o}_V - \mathbf{o}_B$ directly, because then $\mathbf{z}_B$ and $\mathbf{y}_B$ would not be orthogonal, but we can write:

$$\begin{aligned}
\mathbf{y}_B &= \mathbf{y}_V\\
\mathbf{z}'_B &= \frac{\mathbf{o}_V - \mathbf{o}_B}{\|\mathbf{o}_V - \mathbf{o}_B\|}\\
\mathbf{x}_B &= \mathbf{y}_B \times \mathbf{z}'_B\\
\mathbf{z}_B &= \mathbf{x}_B \times \mathbf{y}_B
\end{aligned}$$

Figure 9.9

(Left) Axis-aligned billboarding. The billboard may only rotate around the y axis of its frame B. (Right) Spherical billboarding: the axis zB always points to the point of view oV.

This is exactly what we showed in Section 4.5.1.1: we build a non-orthogonal frame $[\mathbf{x}_B, \mathbf{y}_B, \mathbf{z}'_B]^T$ and then recompute $\mathbf{z}_B$ to be orthogonal to the other two axes (see Figure 9.9, Right).

This kind of billboard makes sense when the object has a spherical symmetry. You probably cannot think of many objects of this type except for bushes, balls or planets. However, consider clouds, fog or smoke: entities without a precise, well-defined contour that occupy a portion of volume. These participating media can be rendered by combining several spherical billboards, not because they have spherical symmetry but because they have no well-defined shape.
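
Translating the formulas above into code is straightforward. The following sketch builds the orientation matrix with the same SpiderGL helpers used in Listing 9.6; the function name is ours.

// Sketch: orientation matrix for a spherical billboard.
function sphericalBillboardOrientation(viewerPos, viewerUp, billboardPos) {
  var z_tmp = SglVec3.normalize(SglVec3.sub(viewerPos, billboardPos));
  var y_dir = viewerUp;                            // y_B = y_V
  var x_dir = SglVec3.normalize(SglVec3.cross(y_dir, z_tmp));
  var z_dir = SglVec3.cross(x_dir, y_dir);         // re-orthogonalized z_B
  var m = SglMat4.identity();
  SglMat4.col$(m, 0, SglVec3.to4(x_dir, 0.0));
  SglMat4.col$(m, 1, SglVec3.to4(y_dir, 0.0));
  SglMat4.col$(m, 2, SglVec3.to4(z_dir, 0.0));
  return m;
}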

9.2.8 Billboard Cloud

Recently, billboards have been extended to become a more accurate representation of a complex 3D object while preserving its simplicity in terms of rendering. One of these extensions is the billboard cloud [6]. As the name suggests, it consists of a set of freely oriented billboards that form, all together, an alternative representation of the object.

The textures for the billboards are obtained by projecting (that is, rasterizing) the original geometry. This can be done by placing a camera so that its view plane coincides with the billboard rectangle and then rendering the original geometry; the result of the rendering is the texture of the impostor. Note that we need the background alpha initialized to 0. This is an oversimplified explanation, since there are several details to take care of to ensure all the original geometry is represented and adequately sampled. Figure 9.10 shows an example from the original paper that introduced billboard clouds and proposed a technique to build them automatically from the original textured 3D geometry [6], although they can also be built manually, like the one we will use for our client.

Figure 9.10

Billboard cloud example from the paper [6]. (Courtesy of the authors.) (Left) The original model and a set of polygons resembling it. (Right) The texture resulting from the projections of the original model on the billboards.

This kind of impostor is “more 3D” than simple billboards. In fact, it can be direction-independent, and thus nothing short of an alternative representation of the object. In other words, there is no need to rotate the billboards in some way when the camera moves.

9.2.8.1 Upgrade Your Client: Even Better Trees

We do not really have to add anything to render a billboard cloud, since it is encoded as textured geometry, just like the car. The only significant novelty is to remember to discard the fragments with a small alpha, as we proposed in Section 9.2.5.1, to avoid the sorting. Figure 9.11 shows a snapshot of the corresponding client.

Figure 9.11

Snapshot of the client using billboard clouds for the trees. (See client http://envymycarbook.com/chapter9/3/3.html.)

9.3 Ray-Traced Impostors

Since the GPU has become programmable, allowing branching and iteration, a number of new types of impostor techniques have been proposed. Here we will try to show the building blocks of these algorithms and provide a unified view of them.

In Section 7.8.1 we introduced the idea that a texture may contain displacement values and so encode a height field. Then, in Section 7.8.2, we showed how to tweak the normals in order to make it look like the height field is really there, but we know that that solution is limited to front-facing geometry, that is, the height field is not visible on the silhouette. Here we introduce the idea of applying ray tracing in order to show the height field. We recall from Section 1.3.1 that with ray tracing we shoot rays from the point of view towards the scene and find their intersections with the objects. Figure 9.12 shows the rays intersecting the height field encoded in a billboard (shown in 2D for illustration purposes).

We know that the most time-consuming part of ray tracing is finding these intersections, but here we can exploit rasterization to find out exactly which rays may possibly intersect the height field. The first step is to draw a box whose base is the billboard and whose height is the maximum height value contained in the texture, plus a little offset so that the box properly includes the whole height field. Note that the pixels covered by the rasterization of this box are the only ones whose corresponding rays may hit the height field. If we associate with each vertex its position as an attribute, the attribute interpolation will give, for each fragment, the entry point of the corresponding ray (marked with i in the figure). Subtracting the view position from this point and normalizing the result, we have the direction of the ray. So the fragment shader will have the starting point and direction of the ray; it will compute the intersection with the height field encoded in the texture and perform the shading computation. The intersection between the ray and the height map may be found using linear search, that is, starting from the origin of the ray (on the box surface), which is surely outside the height field, and proceeding by small steps δ, or even by rasterizing the ray in texture space. Note that we want to express the ray in the frame of the impostor, and therefore the values of the vertex positions p_i, as well as the viewer position, will be specified in this space.

Figure 9.12

The way height field is ray traced by the fragment shader.
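
The following fragment shader sketch shows the linear search just described. It is only a sketch under several assumptions: the ray is expressed in the impostor frame, the box is scaled so that positions and heights lie in [0, 1], the height field is sampled on the xz plane, and all names are illustrative.

precision highp float;
uniform sampler2D uHeightMap; // height field, one value per texel
varying vec3 vEntryPoint;     // interpolated entry point on the box
varying vec3 vRayDir;         // ray direction (from the viewer)
void main(void) {
  const int N_STEPS = 64;     // number of steps of size delta
  float delta = 1.0 / float(N_STEPS);
  vec3 dir = normalize(vRayDir);
  vec3 p = vEntryPoint;
  bool hit = false;
  for (int k = 0; k < N_STEPS; ++k) {
    p += dir * delta;         // advance along the ray
    float h = texture2D(uHeightMap, p.xz).r;
    if (p.y <= h) { hit = true; break; } // ray fell below the surface
  }
  if (!hit) discard;          // the ray misses the height field
  gl_FragColor = vec4(vec3(p.y), 1.0); // e.g., shade by height
}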

Summarizing, we can say there are two “interconnected” building blocks for these techniques:

  • what is encoded in the texture, and
  • what the ray tracing implementation does to find the intersection and then to shade the intersection point.

Let us consider a few examples. We can use an RGBA texture to include the color and store the depth in the alpha channel, or we can add a second texture to include other material properties, such as the diffuse and specular coefficients and the normal.

We can modify the shader to compute the normal on the fly by looking at the neighbor texels of the point hit, as in the sketch below.
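
For instance, the normal can be estimated by central differences on the height map; in this sketch, uTexelSize (the size of one texel in texture coordinates) is an assumed parameter.

// Sketch: normal of the height field at uv, by central differences.
vec3 heightFieldNormal(sampler2D heightMap, vec2 uv, float uTexelSize) {
  float hl = texture2D(heightMap, uv - vec2(uTexelSize, 0.0)).r;
  float hr = texture2D(heightMap, uv + vec2(uTexelSize, 0.0)).r;
  float hd = texture2D(heightMap, uv - vec2(0.0, uTexelSize)).r;
  float hu = texture2D(heightMap, uv + vec2(0.0, uTexelSize)).r;
  // the slopes along x and z give the (unnormalized) surface gradient
  return normalize(vec3(hl - hr, 2.0 * uTexelSize, hd - hu));
}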

We can change the fragment shader to determine if the hit point is in shadow with respect to a certain light. This is done by passing the light position to the shader and then tracing a ray from the hit point along the light direction to see if it intersects the height field. Note that in this way we only take care of self-shadowing, that is, of the shadows cast by the height field on itself. If we want to consider the whole scene, we should do a two-pass rendering: first render the box from the light source, performing the ray tracing as above and writing to a texture which texels are in shadow, and then render from the viewer.

9.4 Self-Exercises

9.4.1 General

  1. Discuss the following statements:
    • If the object to render is completely flat there is no point in creating a billboard.
    • If the object to render is completely flat the billboard is completely equivalent to the original object.
    • The size in pixels of the texture used for a billboard must be higher than or equal to the viewport size.
  2. What are the factors that influence the distance from which we can notice that a billboard has replaced the original geometry?
  3. Can we apply mipmapping when rendering a billboard?

9.4.2 Client Related

  1. Improve the implementation of the lens flare by changing the viewing window in the first pass so that it is limited to the projection of the light source position.
  2. Create a very simple billboard cloud for the car. The billboard cloud is made of five faces of the car’s bounding box (all except the bottom one); the textures are found by making one orthogonal rendering for each face. Change the client to use the billboard cloud instead of the textured model when the car is far enough from the viewer. Hint: For finding the distance consider the size of the projection of a segment with length equal to the car bounding box’s diagonal. To do so proceed as follows:
    1. Consider the segment defined by the positions $[0, 0, z_{car}]$ and $[0, diag, z_{car}]$ (in view space), where $z_{car}$ is the z coordinate of the car's bounding box center and $diag$ is its diagonal.
    2. Compute the length $diag_{ss}$ of the segment in screen space (in pixels).
    3. Find heuristically a threshold for $diag_{ss}$ to switch from the original geometry to the billboard cloud.

Note that this is not the same as taking the distance of the car from the viewer: by also considering the size of the bounding box, we indirectly estimate at which distance texture magnification would happen.
