6

Animation

 

For many of us, the word ‘animation’ immediately conjures up childhood images of the characters from Walt Disney’s animated cartoon classics like Snow White or Jungle Book. In fact, the cartoon dates back long before the days of Hollywood and animation has come a long way since the early days of the studio artist!

Traditional Animation

The term ‘cartoon’ derives from the Italian word cartone, meaning pasteboard. In the world of fine arts, a cartoon was a full-sized drawing used as a model for a work to be executed in paint, mosaic, tapestry, stained glass, or another medium. The cartoon provided a vehicle for the artist to make alterations in the design before commencing the final project.

The simple line drawing style of the classical cartoon was developed, in the 18th and 19th centuries, by English caricaturists such as William Hogarth and Thomas Rowlandson, giving rise to the political and satirical cartoons which remain a stock ingredient of newspapers and magazines to this day.

A logical evolution of the cartoon – the comic strip – adopted some of its characteristics, such as the speech balloon, but also added the important dimension of time, as the left to right and top to bottom sequence of comic strip frames was used to represent the passage of time. From modest beginnings, the comic strip evolved to the point where series such as Superman and Dick Tracy attracted almost a cult following, and not only among the young; on a recent visit to France, while browsing in a book shop, I was surprised to see middle-aged men poring over the latest comic strip adventures of Asterix, which has clearly developed a special Gallic appeal.

The animated movie was the offspring of a marriage between the drawing techniques developed for the comic strip and Roget’s persistence of vision principle. The movie background, which contains only static objects, is drawn only once. The figures to be animated are drawn on a series of transparent plastic sheets, called cels, and then superimposed on the background and photographed in sequence, frame by frame. A chief animator sketches characters at important points in the action, called key frames. Assistant animators then draw the ‘in-between’ frames to complete the animation.

The stop-frame movie camera used in animation, called a rostrum camera, is placed above a special table on which layers of background and cels are held horizontally. The height of the camera and the horizontal movement of the background are controlled by very precise gearing to ensure proper scenic composition.

The animation technique for photographing three-dimensional puppets, or Plasticine characters like those featured in the popular Wallace and Gromit series, is very similar. Movement and changes of expression are accomplished by careful sequential readjustment of the characters between each exposure of the rostrum camera.

Technical Term

Animation is the technique of using film or videotape to create the illusion of movement from a series of two-dimensional drawings or three-dimensional objects

The creation of an animated movie normally begins with the preparation of a storyboard – a series of sketches which portray the important sequences of the story and also include some basic dialogue. Additional sketches are then prepared to create backgrounds and to introduce the appearance and temperaments of the characters.

In some cases the series of sketches is prepared first and then the composition and arrangement of the music or other sound effects and the style and pace of the dialogue are timed to correlate with the visual content of the sketches. In other cases, the music and dialogue are recorded before the final animation is created, so that the sequence of drawings can be synchronised with the sound track. It is common for both types of synchronization to be used within the same production.

The full length animated feature films from the Disney studio represent the ultimate achievement of the application of traditional animation techniques. Created at the full 24 fps, in order to optimise the flowing quality of the action, a 90-minute feature requires nearly 130 000 frames, each consisting of a number of overlapping, individually drawn cels – an enormous labour-intensive creative task.
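The arithmetic behind that figure is straightforward: 90 minutes × 60 seconds × 24 frames per second = 129 600 frames, before allowing for the several cels which may be layered to build each individual frame.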

Computer Animation

As this chapter will show, computer animation offers a number of advantages over traditional animation methods, including the following:

  The animator has access to an enormous and growing range of 2D and 3D drawing and painting tools and techniques for the creation of still frames

  Objects within a scene can be manipulated with CAD precision

  Digital Cut, Copy and Paste commands, in conjunction with cloning and tracing techniques, make it simple to replicate and move objects, or parts of objects, from frame to frame

  The use of standard colour models ensures consistency of colour from frame to frame and clip to clip

  Digital layers or the use of ‘floating objects’ mimic the use of traditional cels, but provide greater flexibility, as digital layers can be easily reordered, duplicated or combined with each other or with the background. Masking and transparency controls provide further means of controlling how different layers interact

  Objects can be selected and combined in a group, so that effects can be applied simultaneously and uniformly to all members of the group

  After saving an animation in a format like AVI or MOV, it can be manipulated like any other video clip in a video-editing application, where it can be combined with live video, still image and text clips and can be edited using the filters and special effects described earlier

  In 3D applications, animation can be applied to object position, size, shape and shading attributes and also to cameras, lights, ambient lighting, backdrops and atmospheric effects

  3D objects can be linked in a parent/child relationship and inverse kinematics (more on this later) can be used to provide realistic movement of interconnected parts

  3D landscapes and figures can be created offline in specialist applications like Bryce and Poser and then integrated within the final animation

If there is a downside to computer animation then it lies in the rendering which is required to create the finished animation clip. Frames which involve the use of raytracing software to reproduce complex textures, reflections or atmospheric effects, for example, may take hours, or even days, to render even using a powerful processor. To speed up the process, professional systems distribute the work over a number of interconnected processors.

While still in its infancy, computer animation is already beginning to produce impressive results. The feature-length film Toy Story demonstrated that it is already possible, with the benefits of the new technology, to create an animated cartoon with all the appeal of the Walt Disney classics. Later films like Titanic use animation techniques subtly blended with live action so that it is nearly impossible to see the join! The dramatic panning and zooming shots, and the use, in a number of scenes, of animated 3D figures, gave a taste of what was to follow.

In computer graphics, animation can be accomplished in either 2D or 3D applications. In the 2D method, which simulates the traditional cel animation drawing board approach, an image is drawn on the screen of a drawing or painting application and the image is saved. Using an onion skin technique (see sidebar), a copy of the first image is partially erased and then redrawn in a slightly different position on the screen. Each of the frames created in this way is saved in sequence within a frame stack and the illusion of motion is created when the frames are played back at a high enough rate. Because animation drawings contain much less detail than live-action images, animations can be produced at frame rates significantly below those used for live action. Because of the smoothness of colour fills and continuity between images, animations can look quite acceptable at rates between 10 and 14 frames per second. Animation of cartoons for film, for example, usually runs at 12 fps, but each frame is printed twice so that film animation actually displays at the standard 24 fps.

In the 3D time-based animation, which simulates the traditional use of a stop-motion camera to film three-dimensional puppets, virtual 3D characters are first created on screen in a 3D modelling application and then animated by placing the characters on a timeline and altering their shape, size, orientation, etc. at important points in the action. These changes are called key events. When the key event changes have been completed, the software fills in the gaps, or transitions, between the key events to complete the sequence. This technique is known as ‘tweening’ – short for ‘in betweening’.
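As a rough illustration of the principle – not the algorithm used by any particular package – the short Python sketch below fills in the values of a single property (here a rotation angle) between key events by simple linear interpolation; the frame numbers and angles are invented for the example.

# A minimal sketch of tweening: linear interpolation of a property
# between key events. The frame numbers and values are illustrative only.

key_events = {0: 0.0, 12: 90.0, 24: 180.0}    # frame number -> rotation in degrees

def tween(frame, keys):
    """Return the in-between value for 'frame' from the surrounding key events."""
    frames = sorted(keys)
    if frame <= frames[0]:
        return keys[frames[0]]
    if frame >= frames[-1]:
        return keys[frames[-1]]
    for start, end in zip(frames, frames[1:]):
        if start <= frame <= end:
            t = (frame - start) / (end - start)    # 0 at one key event, 1 at the next
            return keys[start] + t * (keys[end] - keys[start])

for f in range(25):
    print(f, round(tween(f, key_events), 1))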

2D Animation Applications

A number of simple dedicated 2D animation applications are available on the market for both the Mac and the Windows PC but most of these offer limited painting and editing tools. Increasingly, however, applications originally designed for sophisticated painting and photo-editing work are including animation capabilities within their later releases. While these animation features may as yet be fairly rudimentary, the ready access to sophisticated painting and image editing features while creating animations offers an enormous range of creative possibilities.

Photo-Paint

Selecting Create a New Image from PHOTO-PAINT’s File/New menu opens the dialog box in Figure 6.1. After selecting Create a movie, the Colour mode, Image size, Resolution and Number of frames for the movie can be set from within the dialog box.

Figure 6.1 Configuring a new movie in PHOTO-PAINT

Several options are available for creating a background for the movie:

  It can be left as the default solid white fill

  A solid RGB colour fill can be applied from within the dialog box

  Any of PHOTO-PAINT’s fountain or pattern fill types can be used to fill the background

  Any combination of PHOTO-PAINT’s painting tools and techniques can be used to paint a custom background

  An imported image can be used as a background using the Movie/Insert from file command

Figure 6.2a shows the first frame of a 50 frame movie into which a Photo CD image of a Martian landscape has been imported as a background. Objects – like the clipart lunar excursion module shown in Figure 6.2b – can be inserted into the scene by opening the clipart object in a drawing application, copying it to the clipboard and then pasting it on top of the background. The dotted line appearing around the LEM means that it remains selectable after pasting, i.e. it can be dragged around, altered in size, skewed or rotated. It can also be masked from the background by applying the Mask/Create from Object command so that paint, filters or special effects can be applied to the object without affecting the background. The opacity slider can also be used to allow the background to show through the active object if desired.

Figure 6.2 (a) Using a Photo CD image as a background and (b) pasting an object into the scene

While it remains active, an object will appear in every frame of the movie. To incorporate an object into a single frame, the Object/Combine/Combine Objects with Background command is used. A copy of the object can then be pasted into the next frame, where it can be manipulated, combined with the background, and so on. Figure 6.3a shows a copy of the LEM pasted into Frame 2 of the movie, where it has been scaled, rotated and repositioned to give the impression that it has blasted off from the Martian surface. Any number of objects can be created within a frame using the Object/Create New Object command. All objects in the frame remain individually selectable and editable until incorporated into the background. Figure 6.3b shows a second object – an astronaut – pasted into Frame 3 of the movie as the LEM starts to move out of the frame leaving him alone on the surface.

Figure 6.3 (a) Modifying the original object and (b) pasting in a second object
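Although PHOTO-PAINT does all of this interactively, the same frame-by-frame logic can be sketched in a few lines of Python using the Pillow imaging library; the file names, frame count and movement values below are assumptions made purely for the sake of the example.

# A rough sketch of frame-by-frame 2D animation: paste an object on to a
# background at a new position and rotation in each frame, then save the
# sequence. 'background.png' and 'lem.png' are assumed example files.
from PIL import Image

background = Image.open("background.png").convert("RGBA")
lem = Image.open("lem.png").convert("RGBA")

frames = []
for i in range(50):
    frame = background.copy()
    rotated = lem.rotate(i * 2, expand=True)     # gradually rotate the object
    position = (20 + i * 6, 300 - i * 5)         # drift up and to the right
    frame.paste(rotated, position, rotated)      # the object's alpha acts as a mask
    frames.append(frame.convert("P"))            # palette mode for GIF output

frames[0].save("blast_off.gif", save_all=True,
               append_images=frames[1:], duration=80, loop=0)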

PHOTO-PAINT provides an onion skin feature in the form of its Frame Overlay feature (Figure 6.4), selected from the Movie menu. With the slider in the Current position, only the contents of the current frame are seen, but as the slider is dragged to the left the contents of the previous or next frame (depending on whether the Previous Frame or Next Frame button has been selected) gradually become visible, to assist correct positioning of objects in the current frame. Figure 6.5 shows how the LEM in Frame 2 can be positioned relative to its position in Frame 1 with the use of Frame Overlay.

Figure 6.4 PHOTO-PAINT’s Frame Overlay dialog box

Figure 6.5 Using Frame Overlay in Frame 2 to reveal the position of the LEM in Frame 1
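In principle, this kind of onion-skin preview is no more than blending one or more earlier frames, at reduced opacity, underneath the frame being worked on. A minimal Pillow sketch, assuming the frames have already been saved as equally sized image files with invented names, might look like this:

# A minimal onion-skin preview: the current frame with ghosted copies of
# the previous frames blended underneath. File names are illustrative.
from PIL import Image

frames = [Image.open(f"frame_{n}.png").convert("RGBA") for n in range(1, 4)]

def onion_skin(frames, layers=2):
    """Blend up to 'layers' previous frames, progressively fainter, under the latest frame."""
    preview = frames[-1].copy()
    for depth, older in enumerate(reversed(frames[-(layers + 1):-1]), start=1):
        preview = Image.blend(preview, older, 0.35 / depth)   # older frames appear fainter
    return preview

onion_skin(frames).show()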

Like the layers in the stack of cels making up a traditional animation frame, the order of objects in a digital frame can be altered. Figure 6.6a shows three geometric objects in a frame, with a square (Object 2) sandwiched between a circle (Object 1) and a triangle (Object 3). As objects are added to a frame, thumbnails are displayed and numbered sequentially in PHOTO-PAINT’s Objects palette (Figure 6.6b). The position of an object in the Objects palette corresponds to its relative position in the frame, e.g. the black triangle appears at the top of the palette because it is the top object in the frame. The relative position of an object can be changed by simply dragging its thumbnail up or down within the Objects palette.

Figure 6.6 Overlapping objects in a frame (a) and their corresponding thumbnails in PHOTO-PAINT’s Objects palette (b)

Technical Term

Onion Skin – Traditional cartoon animators work on an onion skin paper that allows them to see a sequence of frames through the transparent layers. They then draw successive frames using the previous frames for reference. Seeing several images superimposed helps in incrementing the action evenly

PHOTO-PAINT also provides full control over the duration of each frame within a movie, making it easy to create slow motion or speeded up effects. Clicking on Movie/Frame Rate opens the window shown in Figure 6.7. Frames can be selected contiguously or non-contiguously and a frame delay (the time that the frame is displayed on screen) can be typed into the Frame delay window. The Select All command can be used to apply the same delay to all frames in the movie clip.

Figure 6.7 Setting Frame delay times in PHOTO-PAINT
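The same idea of a per-frame display time exists outside PHOTO-PAINT; an animated GIF, for example, stores an individual delay for every frame. The brief Pillow sketch below, with invented file names and timings, holds the first frame for a second and then plays the rest quickly:

# A sketch of per-frame delays when saving an animated GIF.
from PIL import Image

frames = [Image.open(f"frame_{n}.png") for n in range(1, 11)]
delays = [1000] + [50] * (len(frames) - 1)     # milliseconds each frame stays on screen

frames[0].save("timed.gif", save_all=True,
               append_images=frames[1:], duration=delays, loop=0)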

Controls are provided for copying, deleting and moving frames within the same movie and also to add frames from another movie. This is accomplished by:

  clicking Movie/Insert From File

  choosing Full Image from the Loading Method list box in the Insert A Movie From Disk dialog box

  double-clicking the movie or image to be inserted and clicking Before to insert the frames before the frame specified in the Frame box or After (Figure 6.8) to insert the frames after the frame specified in the Frame box

Figure 6.8 Specifying the frame for file insertion

  typing the frame number where the new file is to appear

For navigating around the frames of an animation project and for previewing the work as it progresses, PHOTO-PAINT provides the special toolbar shown in Figure 6.9.

Figure 6.9 PHOTO-PAINT’s Movie control bar

The default format for a PHOTO-PAINT animation is AVI, but when work is complete, it can alternatively be saved in GIF, MOV or MPG format. Saved animations can be opened in a video-editing application where sound effects and other finishing touches can be added.

Painter

In Painter, original animations can be created by:

  drawing each frame by hand

  manipulating ‘floaters’ (Painter’s term for selectable objects)

  cloning or tracing video

The first step in creating a new animation is to create a new movie file by opening the New Picture dialog box (Figure 6.10a). After selecting frame size, number of frames and background colour and naming the new file, the New Frame Stack dialog appears (Figure 6.10b). After choosing the Layers of Onion Skin and the Storage Type, which determines the colour depth for the new movie, clicking OK opens the Frame window and displays the Frame Stack control panel window (Figure 6.10c). The VCR-like controls at the bottom of the window are used to preview the project either dynamically or frame by frame. As objects are added to frames, they appear in thumbnail windows in the Frame Stack; between two and five successive frames can be displayed. The project is automatically saved as work proceeds from frame to frame. When work is complete, the Save As command is used to export the animation to other formats.

Figure 6.10 Starting a new movie in Painter: (a) setting frame size, length of movie and background colour; (b) choosing the number of onion skin layers and; (c) displaying the first frame in the Frame Stack

Figure 6.11 and Figure 6.12 show the way the onion skin effect works in Painter. The animation in Figure 6.11 is using a two-layered onion skin, while the animation in Figure 6.12 is using a five-layered onion skin. The onion skin feature can be switched off and on by selecting or deselecting Canvas/Tracing Paper. Within the Frame Stack window, the down arrow positioned above one of the frame thumbnails indicates the active frame. Any frame can be selected by clicking on its thumbnail.

Figure 6.11 (a) The current frame (Frame 2) showing the current figure position in black and the figure position from the previous frame (Frame 1) in grey and (b) The Frame Stack window showing thumbnails of Frames 1 and 2

Figure 6.12 (a) The current frame (Frame 5) showing the current figure position in black and the figure positions from the previous four frames (Frames 1 to 4) in grey and; (b) the Frame Stack showing thumbnails of Frames 1 to 5

Painter’s full suite of natural-media tools and effects can be used to work on each image in a frame stack, offering unprecedented potential for creating original animation. Readers interested in the results which can be created are referred to one of several publications devoted to the application, or to the author’s earlier book Digital Graphic Design. Figure 6.13 shows an example in which the tools have been used to create an animation depicting the movements of a butterfly’s wings.

Figure 6.13 A Painter animation depicting the fluttering wings of a butterfly

Figure 6.14 shows another example, in which letters build up the word ‘Painter 5’ by first overlaying and then displacing four hand prints. The effect, when animated, is similar to that of a cycling neon sign.

Figure 6.14 (Top) Frames 1, 7 and 14 of a fourteen frame animation and (Bottom) the corresponding Frame Stack window

Another method of creating an animation in Painter or PHOTO-PAINT is to move a floating object across a series of frames. Figure 6.15 shows an example. The object to be animated can be painted or imported into the first frame. In the example the bicycle was imported from Painter’s Objects library (the dotted outline indicates that the object is floating) and dragged to the left so that only the front section appeared in the frame. Clicking the Frame Forward button on the Frame Stack palette adds a frame and advances to it, dropping the floater (the bicycle) in the first frame. The bicycle becomes merged with the background in the first frame, but keeps floating above the second frame. After repositioning the bicycle to the right in the second frame, the above sequence is repeated so that the bicycle moves incrementally to the right in successive frames (Figure 6.15b).

Figure 6.15 Creating an animation in Painter by manipulating a single object. The object is created or imported into the first frame (a) and then copied and moved in stages in subsequent frames (b)

When the required number of frames has been completed, clicking the Play button on the Frame Stacks palette animates the movement of the bicycle across the screen.

The action contained in the few bicycle frames can be repeated, or looped, if the beginning and ending images are the same, i.e. if the end of one cycle is hooked up to the beginning of the next, the action can appear to continue smoothly. It is only necessary to draw the cycle once, as it can then be duplicated as many times as required. Many animated actions, such as a person walking, cycle repeatedly through the same sequence of actions.

Scrolling a backdrop is another example of a cycled action. Commonly, a subject remains in one place while the backdrop scrolls by. This technique is illustrated in Figure 6.16. After placing a walking figure in the first frame of a new Painter clip (a), the image to be used as a backdrop (b) was opened in a separate window, selected, copied and pasted behind the figure in (a). Using Painter’s positioning tool, the background was dragged to the position shown in (c). Both objects were then merged with the background. The Forward arrow in the Frame Stacks window was then clicked to advance to the next frame and an edited copy of the walking figure was pasted into the frame. The backdrop was next pasted behind the new figure and dragged to the left as shown in (d). Once again, both images were merged into the background before the Forward arrow was used to advance to the third frame. The process was repeated to produce the result shown in Figure 6.16e and so on. As the frame stack is played back, the background image scrolls from right to left, as the figure cycles through a walking motion, creating the impression that the figure is walking from left to right.

Figure 6.16 Animating a background in Painter by means of scrolling
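The bookkeeping behind such a cycled, scrolling sequence is simple enough to sketch in code: the walk poses repeat with a modulo, while the backdrop offset increases by a fixed amount each frame and wraps around for a seamless loop. The Pillow sketch below assumes the poses and backdrop already exist as image files with invented names.

# A sketch of a cycled walk over a scrolling backdrop.
from PIL import Image

backdrop = Image.open("backdrop.png").convert("RGBA")
walk_cycle = [Image.open(f"walk_{n}.png").convert("RGBA") for n in range(1, 9)]

frames = []
scroll_per_frame = 12                                   # pixels the backdrop moves each frame
for i in range(48):
    frame = Image.new("RGBA", walk_cycle[0].size)
    offset = (i * scroll_per_frame) % backdrop.width
    frame.paste(backdrop, (-offset, 0))                 # backdrop slides from right to left
    frame.paste(backdrop, (backdrop.width - offset, 0)) # second copy wraps the join
    pose = walk_cycle[i % len(walk_cycle)]              # cycle through the walk poses
    frame.paste(pose, (0, 0), pose)                     # the figure itself stays in place
    frames.append(frame.convert("P"))

frames[0].save("walker.gif", save_all=True, append_images=frames[1:],
               duration=80, loop=0)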

Power Goo

An application generally used to apply grotesque distortions to facial images, Power Goo by MetaTools has sophisticated image manipulation capabilities. The effects which can be produced are reminiscent of those achieved in still image editing applications using distortion filters or distortion brushes. After opening an image either from a bitmap file or from a video clip, Power Goo’s many tools can be used to push, pull, tweak, twist and otherwise redistribute the pixels comprising the original image. The effect is like working on a painted image with a dry paintbrush while the painting is still wet – real-time liquid image editing as MetaTools calls it.

After importing an image into the workspace, either of two sets of tools can be called up by clicking on one of the two coloured ‘necklaces’ in the top left of the screen (Figure 6.17); a tool is selected by clicking on one of the large buttons encircling the image. Dragging in the image with the mouse then causes the image to Bulge or Smear, for example, at that point. The effect can be localised to one part of the image or repeated to affect a wider area. Figure 6.18 shows an example in which repetitive use of the Nudge tool was used to extend a particular anatomical feature. The frame being worked on appears in the film strip which can be seen faintly at the bottom of the main window. Clicking on a frame of the film clip pastes a copy of the edited image in the main window into that frame. Extra frames can be added to the end of the film strip as required. At any time, clicking on the movie camera which can be seen faintly at the right hand side of the main window plays back an animation of the frames so far completed.

Figure 6.17 Power Goo’s two sets of manipulation tool buttons

Figure 6.18 Working on an image in the main Power Goo window (a) and then saving a series of progressively edited frames as video clip frames (b), (c), (d) and (e)

When the work is finished, individual frames can be saved as bitmapped files or the whole film strip can be saved as an AVI file or else saved directly to videotape via a digitising card. Figure 6.19 shows the output options.

Figure 6.19 The Power Goo Save As window. The current image can be saved as a bitmap file or the series of images can be saved either as a native Goo file, as an AVI movie clip, or direct to videotape via a digitising card

Combining 2D Animations with Live Video

By using a solid background like chroma blue when creating an animation, it is a relatively simple matter to merge the animation with a live video clip. Figure 6.20 shows an example. The animation clip is placed on the overlay track V1 (in this case, in MediaStudio) and the live video clip is placed on track Va (Figure 6.20a). Opening the Clip/Overlay Options window (Figure 6.20b) shows just the animation clip which is overlaying, and therefore obscuring, the live video clip. Clicking with the Eyedropper tool selects the background colour of the animation clip. When Color Key is selected as the overlay type (Figure 6.20c), everything in the animation clip which is the same colour as the background (in this case including the hands and face of the figure) becomes transparent, revealing the live video clip below. When the result is played back in MediaStudio, the animated figure strolls across the screen, showing the live clip in the background.

Figure 6.20 Combining animation with live video: (a) placing the live video and animation files in MediaStudio; (b) viewing the overlay clip and; (c) sampling the background colour and choosing Color Key as the Overlay Type
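The colour key principle itself is easy to express in code: any pixel of the animation frame that is close enough to the chosen key colour is replaced by the corresponding pixel of the live frame. The NumPy sketch below works on single frames already loaded as arrays; the key colour and tolerance are illustrative values, not MediaStudio's internals.

# A sketch of Color Key overlay: animation pixels matching the key colour
# (within a tolerance) are replaced by the live video frame beneath.
import numpy as np

def color_key(animation, live, key_rgb, tolerance=30):
    """animation and live are HxWx3 uint8 arrays of the same size."""
    diff = np.abs(animation.astype(int) - np.array(key_rgb)).sum(axis=2)
    mask = diff < tolerance                     # True wherever the key colour shows
    composite = animation.copy()
    composite[mask] = live[mask]                # let the live clip show through
    return composite

# Synthetic example: a dark figure on a chroma blue background, keyed over green.
anim = np.zeros((240, 320, 3), dtype=np.uint8)
anim[..., 2] = 255                              # chroma blue background
anim[100:140, 100:220] = 20                     # the 'figure'
live = np.zeros((240, 320, 3), dtype=np.uint8)
live[..., 1] = 128                              # stand-in live frame
result = color_key(anim, live, key_rgb=(0, 0, 255))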

3D Animation Applications

Animating in three dimensions is about much more than adding a fourth dimension – time – to illustrations created in 3D drawing applications. It is about the creation of virtual reality, giving the animator the ability to design unique characters, scenes and landscapes with attributes paralleling those found in the real world.

Because of the hugely expensive demands that such applications place on hardware, the use of such techniques has been largely confined to the film and television industries, but now, with the power of desktop PCs increasing in leaps and bounds, access to such techniques is increasing rapidly. A growing market will fuel increased investment in both 3D hardware and software, expanding the desktop video environment into challenging new territory.

With the limited space available in this book, it is not feasible to cover this subject in depth, but the following pages show examples of desktop applications which are already available and the exciting possibilities which they provide.

Ray Dream Studio

In Ray Dream Studio (RDS), 3D animation can be applied to:

  the position of objects in a scene

  object sizes, shapes, and shading attributes

  the motion of objects, lights, and cameras

  camera and light parameters (e.g. colour or intensity)

  ambient lighting, background, backdrop, and atmospheric effects

RDS uses the tweening technique described earlier in the chapter; after movements within a scene have been set at key points, corresponding to key frames within the video clip, the application software fills in the gaps to complete the animation. The animation process involves four distinct stages:

  Creating an object or objects

  Building a scene

  Animating the scene

  Rendering the final animation

The RDS workspace is shown in Figure 6.21. Objects, cameras and lights are manipulated in the Perspective window to create 3D scenes. The Browser window gives access to stored 3D Objects, Shaders (for applying colours and textures to objects), Deformers (for applying controlled deformation to objects), Behaviours (like bounce or spin), Links (like balljoint or slider), Lights, Cameras and Render Filters.

Figure 6.21 Ray Dream Studio’s workspace consisting of the Perspective window, where scenes are assembled, the Browser window, from which shaders, etc. are selected and the Time Line animation window

Movements within a scene are controlled by means of a Time Line window which provides a visual representation of the key events which make up an animation. Controls within the window are used to manipulate key events and move to different points in time. The Time Line window consists of two main areas – the Hierarchy Area, located on the left side of the window, which displays the scene’s hierarchical structure, and the Time Line area, to the right of the hierarchy area, which displays a time track for each item (object, effect, or property) appearing in the hierarchy area. The time axis extending across the bottom of the window acts as a time ruler, with marks indicating time increments.

Key event markers on the Time Line tracks represent key events in the animation – e.g. changes to the properties of objects, intensity of lights, position of cameras in the scene, at specific points in time. Key events are created by moving the vertical Current Time Bar to a position along the time line and then modifying an object or rendering effect. A key event marker then appears on the appropriate track in the time line. RDS automatically calculates the state of the objects and effects in the scene in between the various key events.

Figure 6.22 shows a simple example. The scene in Figure 6.22a involves a camera, two copies of a spotlight and an object described as a Tire. All of these items are listed in the Objects hierarchy in Figure 6.22. A default key event marker appears at time zero on each track. After rotating the tire 90° around the vertical axis, a new marker was set at Frame 12 (Figure 6.22c). The tire was then rotated by another 90° and a marker was set at Frame 24, and so on. The positions of the tire corresponding to the frames between the key frames were calculated by RDS, so that when the animation was played back using the animation controls (Figure 6.23), the tire rotated smoothly through 360° and back to its starting point.

Figure 6.22 Using key events to animate a 3D object in Ray Dream Studio

Figure 6.23 Ray Dream Studio’s animation control tool bar

When RDS calculates the in-between frames, it does so using one of four different types of tweener. By specifying which tweener should be used for each transition, the rate of change between key events in the animation can be controlled:

  The Discrete tweener produces instantaneous change, i.e. objects move abruptly from position to position

  The Linear tweener produces a constant rate of change, i.e. objects move at a constant velocity from position to position

  The Oscillate tweener (Figure 6.24a) creates alternating back and forth movement between key events

Figure 6.24 Applying tweeners – Oscillate (a) and Bezier (b)

  The Bézier tweener (Figure 6.24b) produces smooth motion paths and greater control over acceleration and deceleration

Tweeners make it easy to create more realistic and subtle changes in the transitions between key events and save time by automatically creating movements and changes which would be very time-consuming with key events alone.
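One way to picture the four tweener types is as different easing functions, each mapping the normalised time t (0 at one key event, 1 at the next) to an interpolation amount. The Python sketch below is an illustrative approximation of each behaviour, not Ray Dream Studio's actual implementation.

# Illustrative approximations of the four tweener types as easing functions.
import math

def discrete(t):
    return 0.0 if t < 1.0 else 1.0          # snap abruptly to the new value

def linear(t):
    return t                                # constant rate of change

def oscillate(t, cycles=2):
    return 0.5 - 0.5 * math.cos(2 * math.pi * cycles * t)   # back-and-forth motion

def bezier(t, p1=0.1, p2=0.9):
    u = 1 - t                               # a 1D cubic Bezier: eased start and finish
    return 3 * u * u * t * p1 + 3 * u * t * t * p2 + t ** 3

def tween(start, end, t, easing=linear):
    return start + easing(t) * (end - start)

print(tween(0, 90, 0.5, easing=bezier))     # half-way in time, with eased velocity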

The construction tools provided with RDS can be used to create a wide range of objects and scenes like those shown in Figure 6.25. Objects can also be saved to, and imported from, a library in native RDS format; objects created in other applications and saved in DXF or 3DMF format can also be imported and included in scene composition.

Figure 6.25 Ray Dream Studio objects (a, b, c) and Scenes (d, e, f)

RDS provides a number of powerful features for manipulating objects within a scene:

  Cloaking causes an object to enter or exit (appear or disappear) during the course of an animation. Cloaked objects can be manipulated in the Perspective window; they simply are not included in the rendering of the animation

  Shaders are texture maps simulating a wide range of organic and inorganic materials. Dragging a shader on to an object applies the texture to that object

  Behaviors apply sets of instructions to objects, which determine or modify their behaviour during the animation (e.g. Bounce or Spin). Inverse Kinematics is a specialised behaviour applied to linked objects such as a hand linked to an arm. Inverse Kinematics allows simulation of organic movement, so that raising the hand is accompanied by natural movement of the arm

  Specialised links – BallJoint, Lock, Slider, Axis, Shaft, or 2D plane – between objects constrain the way one object moves in relation to movements of another, e.g. in the Shaft link, the child object can both rotate around one of its axes and slide up and down the same axis

  Rotoscoping allows the playing of video within video. Animations or live video files can be applied to objects as texture maps or as backdrops within an animation

Figure 6.26 shows an example of rotoscoping in which the stick man animation we saw in Figure 6.20 has been mapped on to the surface of a sphere. By animating the sphere to rotate about its vertical axis, the stick man can be made to ‘walk around’ the surface of the sphere. A similar technique can be used to animate, for example, the screen of a television set or the view from a window contained within a scene.

Figure 6.26 Using rotoscoping to map an animation clip on to an object in Ray Dream Studio: (a) importing the animation file into the Shader Editor; (b) applying the animation to the surface of a sphere object and; (c) rendering the frame

Before an RDS animation can be brought into a video-editing application, for the addition of sound or compositing with other video clips or stills, it must first be rendered and saved in AVI or MOV format. Rendering is analogous to taking a photograph of each frame of the animation. The result is photorealistic because the final rendering procedure includes all of the objects and the background in a scene simultaneously and calculates not only objects, colours and textures, but also the interaction of ambient and fixed lights with the various objects within the scene. Rendering also includes any atmospheric effects, like fog or smoke, which have been specified during the design process.

In post production work, rendered animations can be opened in an image-editing program like Painter. A mask can be included during rendering to facilitate such editing.

Poser

Poser is a remarkable application from MetaCreations designed to pose and animate figures in three dimensions. Using key frame animation, Poser makes it possible for the animator to pose and animate human motion with almost uncanny realism.

Figure 6.27 shows Poser’s workspace, with the default clothed male adult figure displayed in the central document window. Surrounding the Document window are a number of palettes which are used to control and edit what appears in the Document window. The Libraries palette, seen on the right-hand side of the screen, gives easy access to all the figures and poses available from the application’s libraries, as well as to libraries of facial expressions, hair styles, hand positions, props, lighting effects and camera positions. Clicking on any of these categories opens the corresponding palette, where previews of the contents are provided (Figure 6.28).

Figure 6.27 MetaCreations Poser workspace

Figure 6.28 Poser’s Libraries palettes

The Editing Tools – displayed above the document window – are used to adjust the position of the figure’s body parts to create specific poses. Controls include Rotate, Twist, Translate, Scale and Taper. These can be used in combination to pose figures in an infinite number of ways. All the models in Poser employ inverse kinematics, so that body parts interact just as they do in the real world. Moving a figure’s hips downwards by dragging down with the Translate tool causes the knees to bend in a natural way as shown in Figure 6.29a and rotating the chest using the Twist tool causes the shoulders, arms and head to rotate in harmony (Figure 6.29b). Figure 6.30 shows just a few samples of the figure types and poses which can be created using the Editing Tools.

Figure 6.29 Using the Editing Tools to alter figure poses

Figure 6.30 A few examples of Poser figure types and poses
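The 'knees bend when the hips drop' behaviour can be illustrated with a little planar trigonometry: given the hip position and a fixed foot position, a two-link solver works out how far the knee must bend for the limb to reach. This is only a toy example of the inverse kinematics idea, not Poser's internal solver, and the limb lengths are invented.

# A toy 2D inverse kinematics solver for a two-segment limb (thigh + shin).
import math

def two_link_ik(hip, foot, thigh=0.45, shin=0.45):
    """Return the hip angle and knee bend (radians) needed to reach 'foot' from 'hip'."""
    dx, dy = foot[0] - hip[0], foot[1] - hip[1]
    dist = max(min(math.hypot(dx, dy), thigh + shin - 1e-9), 1e-9)   # clamp to reachable range
    cos_knee = (thigh**2 + shin**2 - dist**2) / (2 * thigh * shin)   # law of cosines
    knee = math.pi - math.acos(max(-1.0, min(1.0, cos_knee)))
    cos_inner = (thigh**2 + dist**2 - shin**2) / (2 * thigh * dist)
    hip_angle = math.atan2(dy, dx) - math.acos(max(-1.0, min(1.0, cos_inner)))
    return hip_angle, knee

# Lowering the hip towards a fixed foot makes the knee bend further,
# just as dragging a Poser figure's hips downwards does.
print(two_link_ik(hip=(0.0, 0.9), foot=(0.1, 0.0)))   # leg nearly straight
print(two_link_ik(hip=(0.0, 0.6), foot=(0.1, 0.0)))   # knee bends noticeably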

The Parameter Dials – displayed to the right of the document window – work exactly like the Editing Tools except that they provide numerical precision for posing a figure. Other palettes within Poser’s workspace provide control over lighting parameters, camera angles and document display styles – e.g. Lit Wireframe or Flat Shaded.

Animation in Poser – with the inverse kinematics feature built into every model – is relatively simple. All that is required is to set up a pose, move to a different point in time, and set up another pose. Poser then fills in the gaps between the two poses. Three tools are provided for the animation process:

  The Animation Controls (Figure 6.31) located at the bottom of the workspace, are used to set which poses are saved as key frames as well as to delete frames or preview an animation. The main area displays the Timeline. The Current Time Indicator is used to move through time, setting up key frames. The counters in the centre of the panel display the number of the frame currently appearing in the working window and the total number of frames in the animation. The VCR-like controls on the left side of the panel are used to control preview and playback. The controls on the right side are used to edit the key frames on the Timeline. Clicking the button bearing the inscription of a key provides access to the Animation Palette

Figure 6.31 The Animation Controls panel

  The Animation Palette (Figure 6.32) is used to edit key frame positions and create more complex animations. When first displayed, the palette shows all the keyframes previously created using the Animation Controls. It can be used to animate individual body parts and to edit all the keyframes within an animation. The palette displays the timeline of the current animation and shows all the key frames created for the project. The Animation Palette is divided into three sections – the Setup Controls, the Hierarchy Area and the Timeline Area. The Setup Controls are used to set the frame rate, duration, frames, and display options for the Timeline Area. The Hierarchy Area displays a listing of all the objects in the studio. The Timeline Area displays all the keyframes stored for each of the body parts

Figure 6.32 The Animation Palette

  The Walk Designer (Figure 6.33) provides the means of applying walking motions to figures. The deceptively simple act of walking actually involves a complex interaction between many body parts and would normally be a very time-consuming posing exercise. With the use of Walk Designer, creating incredibly realistic walking motions is a simple task.

Figure 6.33 The Walk Designer dialog box (a) and the alternative Side, Front and Top views which can be selected – (b); (c) and (d)

Creating a walking figure is a two-step process. First the path to be followed is created in the document window and then the walking motion is applied to the figure. The Walk Path – a line or a curve – determines the figure’s course as it moves about the scene. Selecting Create Walk Path from the Figure menu causes a path to appear on the ground plane of the document window. The position or shape of the path can be adjusted using one of the Editing Tools.

After the path has been defined, applying a walk simply involves opening the Walk Designer window, clicking the Walk button to start the real-time preview of the walk, dragging the Blend Styles sliders to set the motions of the walk and then clicking the Apply button to apply the walk to a Walk Path. When the walk is applied, the figure starts walking at the start of the path and stops at the end of the path.

Figure 6.34 and Figure 6.35 show examples of key frames from two Poser animations created using the above tools.

Figure 6.34 Applying Walk Designer to a Poser figure (Figure mode is Lit Wireframe)

Figure 6.35 Animating this Velociraptor model in Poser produces an amazingly lifelike motion

Poser can import sound clips which are added to the beginning of an animation and play every time the clip is played. The start and end point of the sound clip can be set by dragging on the Sound Range bar which appears at the bottom of the Animation Palette when the sound is imported. The Graph Palette (Figure 6.36) displays a waveform representing the sound, showing where marked changes in amplitude occur. Key frames in the animation can be positioned to synchronise with these changes; for example, speech can be simulated by matching the peaks in the sound waveform with changes in mouth position. Within its Faces library, Poser includes a number of phonemes – faces posed such that the position of the teeth and tongue correspond to particular sounds. A series of phonemes linked together produces speech. Thus by posing the face to represent different phonemes, speech can be represented visually. By synchronizing the sounds of speech on an audio track with key frames of an animation consisting of the correct series of phonemes, the sound can be married to the movements of the face.

Figure 6.36 Using Poser's Graph Palette, an imported audio clip can be synchronized with the key frames of an animation
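Stripped of the interface, the underlying task is to find the points in the waveform where the amplitude jumps and convert them into frame numbers at which key frames can be placed. The NumPy sketch below does this crudely; the sample rate, frame rate and threshold are assumed values.

# A sketch of lip-sync preparation: locate loud moments in an audio track
# and convert them to animation frame numbers for key frame placement.
import numpy as np

def loud_frames(samples, sample_rate=44100, frame_rate=30, threshold=0.6):
    """Return the animation frame numbers whose audio is louder than the threshold."""
    samples = samples / (np.abs(samples).max() + 1e-9)    # normalise to -1..1
    window = sample_rate // frame_rate                    # audio samples per animation frame
    usable = len(samples) - len(samples) % window
    per_frame = np.abs(samples[:usable]).reshape(-1, window).mean(axis=1)
    return np.nonzero(per_frame > threshold)[0]

# Synthetic example: two seconds of silence with two short bursts of 'speech'.
audio = np.zeros(44100 * 2)
audio[11025:15000] = 1.0
audio[55125:60000] = 1.0
print(loud_frames(audio))        # frames at which mouth-position key frames might go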

Poser is a truly amazing application with enormous creative potential. As well as offering the possibility of creating and animating figures using an infinite range of poses, it allows replacement of a body part with a prop – e.g. by importing props of body parts from other creatures, mythical figures like Pegasus or the Minotaur can be recreated. After replacement, the new parts can be manipulated just like other elements of the figure; even their colour can be changed and texture and bump maps can be applied to them.

Poser figures can be exported in DXF format to other 3D applications like Ray Dream Studio for inclusion as objects within a 3D scene. Animations can be saved in AVI or MOV format, with an alpha channel masking the figure, for easy compositing with other video clips in a video-editing application.

Bryce 3D

Like Poser, Bryce 3D is a unique application designed for a particular purpose – in this case to create and animate virtual environments from mountain ranges to seascapes, from cityscapes to extraterrestrial worlds. Bryce 3D provides separate controls for combining water planes, terrains and skies to create stunningly realistic scenes and for creating and importing a wide variety of objects into the environment. Once a scene is complete, objects within it can be animated or a camera can be animated to fly the viewer through the scene.

Figure 6.37 shows the Bryce 3D workspace. The scene which appears in the window is viewed through a camera which can be moved to show different perspective views. During construction, planes and objects appear in wireframe form but a Nano Preview in the top left corner of the screen shows a real time rendered version of the scene as work proceeds. Below the Nano Preview, the View Control is used to select different views of the scene and, below the View Control are Camera Controls similar to those we saw in Poser. At the top of the screen are three selectable palettes for creating all the types of objects which can be used in the scene (Create Palette), for transforming objects and accessing editors (Edit Palette) and for creating the atmospheric environment for the scene (Sky & Fog Palette).

Figure 6.37 The Bryce 3D workspace

After a terrain is placed in the scene it can be selected and edited using the Terrain Editor (Figure 6.38) which contains tools for reshaping and refining terrains. Once a terrain, water plane or other object has been selected, a material preset can be applied from the Material Presets Library (Figure 6.39).

Figure 6.38 Bryce 3D’s Terrain Editor

Figure 6.39 The Materials Presets Library

If a suitable material cannot be found among the presets, the Materials Lab (Figure 6.40) can create textures which simulate virtually any material found in the natural world as well as many which are not.

Figure 6.40 The Materials Lab

Using the above array of tools and palettes, even an inexperienced user can create and render scenes of quite breathtaking realism – or surrealism (Salvador Dali would have loved Bryce 3D!). Figure 6.41 shows just two examples, although, sadly, colour is needed to do them justice.

Figure 6.41 Scenes from MetaCreations Bryce

Animating a scene in Bryce involves setting up the arrangement of objects and scene settings and then adjusting the settings over time. The application software fills in the gaps between adjustments. The steps to creating an animation are as follows:

  Objects are first created using the Create tools or the Terrain Editor

  A scene is then built by arranging and transforming objects, lights and camera settings

  The position, orientation and/or scale of objects is adjusted as required

  A key event is created as each change is made

  Next the shape or placement of the motion path is adjusted

  The speed at which the object moves along the motion path is set using the Advanced Motion Lab

  Finally the animation is rendered as a video clip

In Auto Record mode, Bryce automatically adds key frames and each change made to a scene is recorded as a key event. Changes can include things like moving an object, changing a material or changing the shape of a terrain. The Advanced Motion Lab button located at the bottom right of the workspace opens the Advanced Motion Lab which contains tools for controlling the detailed properties of an animation. The lab is used to view object hierarchies, remap key events, adjust the position of key frames on the timeline and preview the animation. The four areas of the Lab window can be seen in Figure 6.42.

Figure 6.42 Bryce 3D’s Advanced Motion Lab

The Hierarchy List area, located in the bottom-left corner of the window, displays a visual representation of the object hierarchies in the scene. The Sequencer area, in the bottom-centre of the window, displays timelines containing all the key events in the animation. A timeline is displayed for each object or scene property in the animation. Every time an object property is changed, the change is registered as a key event in the Sequencer. The Time Mapping Curve Editor, located at the top-left of the window, displays a curve for each object or scene property listed in the hierarchy. The editor is used to control the length of time between key events. The time to complete an action can be adjusted by changing the shape of the curve – a time mapping curve can speed up some events while slowing down others, or even reverse them. The Preview area, located at the top right of the window, contains tools for previewing the animation.

In the simple example shown in Figure 6.42, the Preview window shows a sphere following a simple bouncing motion path. The sphere is listed as ‘Sphere 1’ in the Hierarchy window. The key event markers which appear along the sphere’s ‘Position’ timeline in the Sequencer window correspond to the dots along the sphere’s path in the Preview window. The path can be edited by dragging the points on the path to adjust its trajectory through the course of the animation. The straight line shown in the Time Mapping Curve means that the sphere will travel at uniform speed from start of motion to end of motion. The speed at different points along the path can be adjusted by changing the profile of the curve.
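In effect, the time mapping curve is a function applied to playback time before the key-event interpolation is evaluated. The small sketch below drives a uniformly moving object through two such remapping functions – an ease-in/ease-out curve and a reversing curve – purely as an illustration of the idea.

# A sketch of time remapping: the animation is defined once, but playback
# time is passed through a mapping curve before it is evaluated.
def ease_in_out(t):
    return t * t * (3 - 2 * t)        # smoothstep: slow start, fast middle, slow end

def reverse(t):
    return 1.0 - t                    # a curve can even run the events backwards

def position_at(t):
    return 100.0 * t                  # the underlying animation: uniform motion from x=0 to x=100

for frame in range(11):
    t = frame / 10
    print(frame, round(position_at(ease_in_out(t)), 1), round(position_at(reverse(t)), 1))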

Figure 6.43, Figure 6.44 and Figure 6.45 show examples of animation in Bryce 3D. Dramatic effects can be obtained by setting up a scene and then using camera panning and zooming to create a ‘fly through’ effect. Figure 6.43 shows three frames from such an example using a scene consisting of a water plane bounded on either side by steep rocky terrains creating the illusion of a river flowing through a gorge. By panning and zooming the scene camera between key events, the viewer has the impression of flying through the gorge, low above the water. In the second example, in Figure 6.44, the fly through effect is combined with object animation; the central terrain was first animated to create the impression of mountains rising up out of the plain, using the animation controls provided at the bottom of the Terrain Editor (see Figure 6.38) and then, using camera animation, the viewer is taken on a ride through the new mountain range to the valley beyond.

Figure 6.43 Animation in Bryce 3D using camera panning and zooming

Figure 6.44 A combination of object animation and camera animation

Figure 6.45 A combination of texture animation and object animation

In Figure 6.45, the texture applied to the surface of a sphere has been animated using the animation controls at the bottom of the Materials Lab window (see Figure 6.40), to simulate the swirling surface of a gas covered planet. To add to the effect, a group of meteorite-like objects have been added, textured and animated to travel around the planet, as if in orbit.

The Environment Attributes dialog boxes (Figure 6.46) are used to control every aspect of the Bryce environment. Controls are provided to edit the look and motion of clouds, the position and look of the sun or moon and to add atmospheric effects like rainbows or the rings sometimes seen around the sun due to light scattering effects.

Figure 6.46 Editing environmental and atmospheric effects in Bryce 3D

Bryce 3D supports import of DXF, OBJ, 3DS and 3DMF files, so figures from Poser and objects from Ray Dream Studio and other 3D applications can be imported and included within animation projects.

Tips and Techniques

Animation is the process of simulating the passage of time. In the real world, we perceive time through the changes in our environment – the simplest examples being the moving hands of a clock or the changing position of the sun. The most obvious type of change is motion. The skill of animation is the skill of portraying that motion convincingly, whether in a realistic or ‘cartoonish’ way. When an object changes position at different points in time, it appears to be moving. But an object doesn’t have to be moving to indicate the passage of time. A change in colour, texture or geometry can also serve the same purpose.

Over the last 50 or so years, traditional animators have developed methods and techniques to perfect their art. Many of these still apply today to the work of the 2D or 3D digital animator and are included in the following list of tips and techniques.

  Storyboarding – A technique used by traditional cel animators to plan the sequence of events in an animation. A storyboard is a series of drawn images which describe an animation, scene by scene, as it develops over time. Outlining the animation sequence at storyboard stage, before drawing or modelling begins, can help avoid unnecessary work and save considerable time. The storyboard can be a series of simple sketches showing the overall action of the animation as viewed through the rendering camera, accompanied by diagrams showing the position of objects, lights, and cameras at key points. Sample storyboards can be created by drawing a series of horizontal screen outlines on a sheet of paper, using a 4 to 3 aspect ratio. Space for pencilling in narration or annotations should be left below each row of drawings

  Simplifying Scenes – Because the eye tends to be drawn toward motion and elements in the immediate foreground, static objects and background elements need not be drawn or modelled in detail. Reducing unnecessary detail reduces rendering time dramatically and keeps the size of scene files to a minimum

  Smooth Motion with Hot Points – Complex motion is generally animated by moving an object through a sequence of positions along a motion path, with key events created at each position. When animating along simple curved paths in an application like Ray Dream Studio however, it is often easier to offset an object’s hot point and animate its motion using rotation rather than translation. For example, after pointing a camera at an object, the camera’s hot point can be moved to the centre of the object and then the camera can be rotated around its own hot point to animate a fly-around of the object. This approach generally requires fewer key events than creating a similar motion path in the normal way, and produces equally good results

  Inverse Kinematics – A specialized behaviour applied to linked 3D objects in a hierarchy, inverse kinematics creates organic movement, reduces the time it takes to create realistic animations and provides versatile child to parent control. Normally, movement is transmitted downward from parent to child in the hierarchy. When a child is linked to a parent object using inverse kinematics, movement can also be propagated upwards, i.e. when a child object moves, the parent follows. Motion cannot propagate from a linked child to its parent without the use of inverse kinematics

  Rendering without Compression – Unless hard disk space is severely limited, rendering without compression provides the highest quality animation clip to work with. Working from the uncompressed original, copies can be used for experimentation with different compression settings to determine an acceptable image quality and playback rate. Rendering should always be carried out without compression if post-processing in another application is planned

  Duplicating Relative Motion with Groups – Duplicating a 3D object or effect also duplicates its animation data, such as key events and tweeners. Used carefully, this can avoid the unnecessary work of having to recreate the same data for different objects

  Animating with Deformers – Deformers in 3D applications are used to alter the shape of an object dynamically. Deformers like Stretch, Bend, Twist, Explode, Dissolve and Shatter produce interesting animated effects that cannot be achieved by other means. Deformers can be used to animate the shape of entire groups and imported DXF objects, which cannot be edited directly in a modeller

  Animating Textures – Some spectacular animation effects can be created simply by animating object textures. Virtually any type of change to a texture can be animated, from a simple colour change to a shifting geometric pattern. An object can appear to change from metal to stone, or from glass to wood. Editable parameters include values for attributes like transparency or shininess and procedural function parameters like the number of squares in a checker pattern

Over the years, traditional cartoon cel animators have also developed a set of motion and timing principles which contributed significantly to the success of the cartoon as a form of entertainment. Again, many of these principles still apply in the new digital environment:

  Squash and Stretch – The animator’s terms for the exaggerated redistribution of an object’s mass as it changes position, conveying the qualities of elasticity and weight in a character or an object. An example is a bouncing ball; as it falls it stretches; as soon as it hits the ground it is squashed. Without the change of shape, the viewer would interpret the ball as a solid, rigid mass

  Lag and Overlap – When an object moves from one point to another, not every part of it has to move at once. To simulate real-life movement, action that is secondary to the main activity can lag or overlap. For example, when a car stops abruptly, the car body is thrown forward by its own momentum, before settling back on its springs. The cartoon animator exaggerates this real-life observation for effect

  Arc versus Straight Line Movement – Character motion appears more realistic if it follows an arc or curved path instead of a straight line. Most objects affected by gravity also follow curved, rather than straight, trajectories

  Secondary Motion – Secondary motion adds realism and credibility to a scene. A character turning his head to stare at something in surprise should not just turn his head; his jaw should drop and his eyes should open wide as well. The viewer focuses on the main action, but registers the secondary motion as supporting it

  Exaggeration – Exaggerating an action emphasises it, making it more pronounced. For example, if a story line calls for stealth, the character should sneak, not just walk. Virtually any type of action can be exaggerated to ensure effective communication with the viewer

  Timing – Timing is as important in cartoon animation as it is in any form of dramatic presentation. In general, motion which continues at a constant pace lacks interest and seems unnatural. To animate realistic character action, it can be helpful to ‘act out’ the sequence, timing how long each pose is sustained and how long each action takes. Key events defined at different points on the time line need to be synchronized with those which come before and after. A key advantage of computer animation is the ability to fine-tune timing
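Several of these principles translate directly into the numbers an animator (or a script) feeds to each frame. The sketch below combines an arced trajectory, gravity-like timing and squash-and-stretch for a single bounce of a ball; the frame count and scaling constants are purely illustrative.

# A sketch combining arcs, timing and squash-and-stretch for one bounce of a ball.
frames = 24
for frame in range(frames + 1):
    t = frame / frames                      # normalised time across the bounce
    height = 4 * t * (1 - t)                # parabolic arc: up, then back down
    speed = abs(2 * (1 - 2 * t))            # fastest at launch and just before impact
    if height < 0.05:
        scale_x, scale_y = 1.4, 0.6         # squash on contact with the ground
    else:
        scale_y = 1.0 + 0.3 * speed         # stretch along the direction of travel
        scale_x = 1.0 / scale_y             # keep the apparent volume constant
    print(f"frame {frame:2d}  height {height:4.2f}  scale {scale_x:4.2f} x {scale_y:4.2f}")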

Summary

By its nature, traditional animation has always been a labour-intensive task. While the work of traditional lead animators requires a high degree of skill and creativity, that of the rank and file animators – who are responsible for creating the ‘in between’ frames – is largely repetitive. Fortunately, computers are particularly good at repetitive tasks and take much of the drudgery out of animation work. The advantages of computer animation, however, go far beyond just reducing the tedium and improving the productivity of the task.

As we have seen from the examples in this chapter, the 2D and 3D animation applications now available to the desktop animator offer enormous flexibility through the ability to re-order layers and to manipulate and reuse all or parts of objects within any scene; objects can be effortlessly scaled, rotated or skewed, their transparency can be adjusted and they can be cloned and edited using precise masking techniques.

In 3D applications, animation can be applied to object position, size, shape and shading attributes and also to cameras, lights, ambient lighting, backdrops and atmospheric effects. The use of inverse kinematics and specialised links like ball-joints assist the animator in achieving realistic three-dimensional motion with minimal effort.

The ability to work at pixel level within individual frames provides an unprecedented degree of drawing and editing precision, while digital colour control assures consistency of colour rendering across all the elements of a project and between projects. Electronic archival and retrieval techniques put backgrounds and objects at the instant disposal of the animator, while specialist applications like Poser and Bryce 3D significantly reduce the effort needed to generate new figure poses or landscapes.

The long rendering times for animated 3D sequences do present an obstacle, but the relentless progress of CPU speeds into the gigahertz range and the exploitation of multiprocessing techniques will reduce and eventually resolve even this problem.
