More and more computer-generated movies are being made, with every aspect of the production created within the 3D world. A single scene can contain hundreds or even thousands of objects, often built by multiple artists. The Autodesk® Maya® software provides a special set of tools for tackling large scenes and complex objects, and it makes working within a team easy.
When all of the assets are built, it’s time to film. As with all filmmaking, the story is told through the camera. The camera is used to manipulate the elements of the scene, letting viewers know what they need to focus on and how they should feel about what is happening in the scene.
Although the technical aspects of using the cameras are not difficult to learn, mastering the art of virtual cinematography can take years of practice.
In this chapter, you will learn to
A production pipeline consists of a number of artists with specialized tasks. Modelers, riggers, animators, lighters, technical directors (TDs), and many others work together to create an animation from a director’s vision. Organizing a complex animation sequence from all of the nodes in a scene can be a daunting task. Assets (collections of nodes that you choose to group together for the purpose of organization) are designed to help a director separate the nodes in a scene and their many attributes into discrete interfaces so that each team of specialized artists can concern itself only with its individual part of the project.
An asset is not the same as a group node; assets do not have an associated transform node and do not appear in the viewport of a scene. For example, a model, its animation controls, and its shaders can all be placed in a single asset. This example demonstrates some of the ways that you can create and work with assets.
In this example, you’ll create an asset for the front wheels of a vehicle:
Open the vehicle_v01.ma file from the chapter16/scenes directory at the book’s web page (www.sybex.com/go/masteringmaya2016). You’ll see a three-wheeled vehicle. In the Outliner, the vehicle is grouped.

Select the steering node in the Outliner. This is the animation control for the steering. Switch to the Rotate tool, and rotate the steering node on the y-axis. The front wheels match the arrow’s orientation (see the left image in Figure 16.1).
If you select one of the front_wheel groups in the Outliner, you’ll see that its Rotate Y channel is colored yellow in the Channel Box, indicating that it has an incoming connection. The steering curve’s Y rotation is connected to the Y rotation of both front_wheel groups.
Move steering up and down along the y-axis. The front wheels rotate on the x-axis based on the height of the steering object, making them tilt (see the right image in Figure 16.1).
If you look in the Channel Box for either of the front_wheel groups, you’ll see that the Rotate X channel is colored orange, indicating that it has been keyframed. The Rotate X channel of each group uses a driven key to determine its value; the keyframe’s driver is the Y translation of the arrow group.
Drag the red arrow of the Move tool to translate the vehicle back and forth along the x-axis. All three wheels rotate as the car moves.
If you expand the front_wheel1 group in the Outliner and select the wheel1Rotate child group node, you’ll see that the Rotate Z channel is colored purple in the Channel Box, indicating that an expression is controlling its z-axis rotation. You can open the Attribute Editor for the front_wheel1 group and switch to the expression4 tab to see the expression, as shown in Figure 16.2. (The field isn’t large enough to display the entire expression; you can click the field and drag left or right to read the whole expression.)
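The expression itself isn’t reproduced here, but a typical rolling-wheel expression of this kind relates Z rotation to X translation through the wheel’s circumference. The following is a minimal Python sketch of that math, assuming a known wheel radius (the function name and radius value are illustrative, not taken from the scene file):

```python
import math

def wheel_rotation_degrees(translate_x, wheel_radius):
    """Degrees of Z rotation for a wheel that has rolled translate_x units.

    One full revolution (360 degrees) covers the wheel's circumference,
    2 * pi * radius, so rotation scales linearly with distance traveled.
    """
    circumference = 2.0 * math.pi * wheel_radius
    return (translate_x / circumference) * 360.0

# A wheel of radius 1 rolled half its circumference (pi units)
# turns half a revolution.
print(wheel_rotation_degrees(math.pi, 1.0))  # 180.0
```

An expression like this is why the wheels spin automatically as the vehicle translates: the rotation is computed from the translation rather than keyframed by hand.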
This model uses a simple rig, but already many nodes are connected to the vehicle geometry to help the animator. To simplify things, you can create an asset for just the wheels and their connected nodes so that the animator can focus on this part of the model without having to hunt through all of the different nodes grouped in the Outliner.
Click Apply And Close to create the asset.
In the Hypergraph, you’ll see a gray box labeled front_wheels; this box is also visible in the Outliner. The time1 and vehicle nodes are still visible in the Hypergraph. It may appear as though they have been disconnected, but that’s not actually the case.
Double-click the thick border of the asset to collapse it, or click the Collapse Selected Assets icon at the top of the Hypergraph.
You can select a node inside the container and remove it from the container by right-clicking the node and choosing Remove From Container. In addition, you can resize the container by moving nodes against the container’s borders.
Save the scene as vehicle_v02.ma.

You can publish selected attributes of the container’s nodes to the top level of the container. This means that the animator can select the asset node and have all of the custom controls available in the Channel Box without having to hunt around the various nodes in the network. You can also template your asset for use with similar animation rigs.
In this exercise, you’ll publish the attributes of the front_steering asset:
Open the scene vehicle_v02.ma from the chapter16/scenes folder at the book’s web page.

The Asset Editor can help you further customize and manage your scene’s assets. You can use it as another way to publish specific attributes of an asset.
Open the scene vehicle_v03.ma from the chapter16/scenes folder at the book’s web page.
In the Outliner, you’ll see two containers, one named front_wheels and another named carPaint (which holds the blue paint shader applied to the car).
To open the Asset Editor, choose Windows ➣ General Editors ➣ Asset Editor. The editor is in two panels; on the left side, you’ll see all of the assets in the scene.
The Asset Editor opens in View mode. In the list of assets on the left, you can click the plus sign in the square to see the nodes within each container. You can see the attributes of each node listed by clicking the plus sign in the circle next to each node.
A dialog box will open prompting you to name the selected attribute. Name it steer (see Figure 16.7).
Note that steps 5 and 6 are just another way to publish an attribute; the end result is the same as when you published the wheelTilt attribute from the Channel Box in the previous section.
The Asset Editor has a number of advanced features, including the ability to create templates of complex assets that can be saved to disk and used in other scenes for similar assets.
Published assets and their connections can be viewed in the Node Editor. Assets are shown as unique nodes. Their published attributes are shown with ports for connecting while their non-published attributes are hidden. However, connections for non-published attributes can be accessed through the superport of the asset node.
File referencing is another workflow tool that can be used when a team of artists is working on the same scene. For example, by using file references, an animator can begin animating a scene while the modeler is still perfecting the model. This is also true for any of the other team members. A texture artist can work on textures for the same model at the same time. The animator and texture artist can import a reference of the model into their Maya scene, and each time the modeler saves a change, the model reference in the other scenes will update (when the animator or the texture artist reloads either the scene or the reference).
In this example, you’ll reference a model into a scene, animate it, and then make changes to the original reference, to get a basic idea of the file-referencing workflow:
Locate the vehicleReference_v01.ma scene and the street_v01.ma scene in the chapter16/scenes directory at the book’s web page. Copy both of these files to your local hard drive. I recommend that you put them in the scenes directory of your current project.

Open street_v01.ma from the scenes directory of the current project (or wherever you placed the file on your local drive). The scene contains a simple street model. A locator named carAnimation is attached to a curve in the center of the street. If you play the animation, you’ll see the locator zip along the curve.

Find the vehicleReference_v01.ma scene that you copied to your local drive. Select it and choose Reference in the Reference dialog box. After a few moments, the car will appear in the scene (see Figure 16.9).
In the Outliner, you’ll see the vehicleReference_v01:vehicle node, the vehicleReference_v01:front_wheels and carPaint container nodes, and the vehicleReference_v01RN node. (The container node is an asset with both the wheelTilt and steer attributes created in the previous section.) You can choose to display the reference node, or RN, by checking or unchecking it from the Outliner’s Display menu.
Save the scene as street_v02.ma.

Open the vehicleReference_v01.ma scene from the directory where you copied this file on your local drive. Make your changes to the vehicle, and save the scene under a new name (vehicleReference_v02.ma).

Open the street_v02.ma scene. The scene is still referencing vehicleReference_v01.ma.
If you had saved the changes to the vehicle under the same referenced scene filename, the changes would have shown up automatically when you opened the street_v02.ma scene. However, since you changed the version number in the scene filename, you have to replace the reference. Select vehicleReference_v01RN in the Outliner. RMB-click in the Outliner, and choose Reference ➣ Replace. Browse for the vehicleReference_v02.ma scene. Select it and choose Reference in the Reference dialog box. The car is now updated with its wider, red body (see Figure 16.10). Play the animation or scrub the Time slider to see this.
If the car had been animated in the referenced scene, you could make changes to its animation after it is loaded into your scene. Once the changes are made, you can export the reference as an offline file. Allow Referenced Animation Curves To Be Edited must be turned on in the Animation Preferences in order for this to work.
This is the basic file-referencing workflow; however, a file-referencing structure can be made much more complex to accommodate the needs of multiple teams. In addition, a referenced file can use other references so that a file-referencing tree can be constructed by layering several levels of file references. This kind of structure is best planned out and agreed upon at the beginning of a project in order to minimize confusion and keep a consistent workflow.
Bounding-box representations allow you to use stand-ins for high-resolution objects, hierarchies, or animations. This can make dealing with large scenes a lot easier because the stand-ins improve performance and update faster in Maya as you work with all aspects of your scene.
Multiple versions of the model can be created and used as proxies to facilitate different needs in the scene. A proxy should be the same size and roughly the same shape as the referenced file.
Open the street_v03.ma scene from the chapter16/scenes directory at the book’s web page. This scene has the same street as before, with the same animated locator.

Select vehicleReference_v01.ma from the browser. The vehicle is imported into the scene.

When starting a new project in Maya, you should first determine the final size of the rendered image or image sequence, as well as the film speed (frames per second). These settings will affect every aspect of the project, including texture size, model tessellation, render time, how the shots are framed, and so on. You should raise this issue as soon as possible and make sure that every member of the team—from the producer to the art director, to the compositor, and to the editor—is aware of the final output of the animation. This includes the image size, resolution, frames per second, and any image cropping that may occur after rendering. Nothing is worse than having to redo a render or even an animation because of a miscommunication concerning details such as resolution settings or frames per second.
The settings for the image size and resolution are located in the Render Settings window under the Image Size rollout on the Common tab (shown in Figure 16.13). When you start a new scene, visit this section first to make sure that the settings are what you need.
Image size refers to the number of pixels on the horizontal axis by the number of pixels on the vertical axis. Thus a setting of 640 × 480 means 640 pixels wide by 480 pixels tall.
Resolution refers to how many pixels fit within an inch (or centimeter, depending on the setting). Generally you’ll use a resolution of 72 pixels per inch when rendering for animations displayed on computer screens, television screens, and film. Print resolution is much higher, usually between 300 and 600 pixels per inch.
You can create any settings you’d like for the image size and resolution, or you can use one of the Maya Image Size presets. The list of presets is divided so that common film and video presets are at the top of the list and common print settings are at the bottom of the list. In addition to the presets, there are fields that allow you to change the size and resolution units.
Resolution is expressed in a number of ways in Maya:
Image Aspect Ratio The ratio of width over height. An image that is 1280 × 720 has a ratio of 1.778.
Pixel Aspect Ratio The ratio of the actual pixel size. Computer monitors use square pixels: the height of the pixel is 1, and the width of the pixel is 1; thus the pixel aspect ratio is 1.
Device Aspect Ratio The image aspect ratio multiplied by the pixel aspect ratio. High-definition displays use a pixel aspect ratio of 1.0; prior to HD, standard video had a pixel aspect ratio of 0.9.
Film Aspect Ratio The film aspect ratio is found in the Attribute Editor for the selected camera. For a typical 35 mm video image, this would be 0.816 ÷ 0.612 = 1.333.
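These ratios are simple enough to verify numerically. The sketch below computes the image and device aspect ratios for the HD 720 preset and for a 720 × 480 frame (the latter size is an assumption chosen to illustrate a common pre-HD video format):

```python
def image_aspect(width, height):
    """Image aspect ratio: pixel width divided by pixel height."""
    return width / height

def device_aspect(image_aspect_ratio, pixel_aspect_ratio):
    """Device aspect ratio: image aspect times pixel aspect."""
    return image_aspect_ratio * pixel_aspect_ratio

# HD 720: square pixels, so the device aspect equals the image aspect.
ia = image_aspect(1280, 720)
print(round(ia, 3))                       # 1.778
print(round(device_aspect(ia, 1.0), 3))   # 1.778

# A pre-HD 720 x 480 frame with 0.9 pixel aspect narrows the device aspect.
print(round(device_aspect(image_aspect(720, 480), 0.9), 3))  # 1.35
```

Checking these numbers by hand is a quick sanity test when a render looks stretched or squashed on the target display.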
The film speed (also known as transport speed) is specified in frames per second. You can find this setting in the Maya Preferences window (Windows ➣ Settings/Preferences ➣ Preferences). Under the Categories column on the left side of the window, choose Settings. In the Working Units area, use the Time drop-down list to specify the frames per second of the scene. You can change this setting after you’ve started animating, but it’s a good idea to set it at the start of a project to avoid confusion or mistakes. When changing this setting on a scene that already has keyframed animation, you can choose to keep the keyframes at their current frame numbers or have Maya adjust the keyframe position automatically based on the new time setting (see Figure 16.14).
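When Maya adjusts keyframe positions for a new time setting, each key stays at the same moment in real time, which maps it to a new frame number. A rough sketch of that remapping, assuming simple linear scaling (which is how the adjustment behaves conceptually; the function itself is illustrative, not a Maya API):

```python
def remap_keyframe(frame, old_fps, new_fps):
    """Keep a key at the same moment in time when the scene fps changes.

    A key at frame 24 in a 24 fps scene sits at the 1-second mark; in a
    30 fps scene, that same moment is frame 30.
    """
    seconds = frame / old_fps
    return seconds * new_fps

print(remap_keyframe(24, 24, 30))  # 30.0
print(remap_keyframe(48, 24, 30))  # 60.0
```

This is why changing the time setting mid-project is risky: every keyframe either moves to a new frame number or drifts in real time, depending on which option you choose.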
When you add a camera to a scene, you should think about how the shot will be composed and whether the camera will be animated. The composition of a shot affects the mood and tells the viewer which elements visible within the frame are most important to the story. The camera settings allow you to fine-tune the composition of the shot by controlling what is visible within the frame and how it appears.
Most of the attributes of a camera can be animated, allowing you to set the mood of a scene and create special camera effects. Three types of cameras offer different animation controls. These are the one-, two-, and three-node cameras. The controls available for each camera type are suited to different styles of camera movement. This section covers how to create different camera types for a scene and how to establish and animate the settings.
Every new Maya scene has four preset cameras by default. These are the front, side, top, and perspective (persp) cameras. You can render a scene using any of these cameras; however, their main purpose is to navigate and view the 3D environment shown in the viewport. It’s always a good idea to create new cameras in the scene for the purpose of rendering the animation. By keeping navigation and rendering cameras separate, you can avoid confusion when rendering.
Open the chase_v01.ma scene from the chapter16/scenes folder at the book’s web page. You’ll find that a simple animatic of a car racing down a track has been created.

Select shotCam1 in the Outliner, and press the f hot key to focus on this camera in the viewport.
The icon for the camera looks like a movie camera. It has a transform node and a shape node. The camera attributes are located on the shape node.
From the viewport panel menu, turn on the Resolution Gate display (the blue sphere on a white background icon). Click the Camera Attributes icon to open the Attribute Editor for shotCam1.
The image size of this scene is set to 1280 × 720, which is the HD 720 preset. You can see the image resolution at the top of the screen when the Resolution Gate is activated. Working with the Resolution Gate on is extremely helpful when you’re establishing the composition of your shots (see Figure 16.17).
When you create a new camera to render the scene, you need to add it to the list of renderable cameras in Render Settings. You can render the scene using more than one camera.
Open the Render Settings window. In the Renderable Cameras area, you’ll see both the shotCam1 and persp cameras listed (see Figure 16.18). Remove the perspective camera from the list of renderable cameras by clicking the Trash Can to the right of the listing.
To change the renderable camera, choose a different camera from the list. To add another camera, choose Add Renderable Camera at the bottom of the list. The list shows all of the available cameras in the scene.
At the top of the Attribute Editor for the camera’s shape node, you’ll find the basic settings for the camera available in the Camera Attributes rollout.
Single-Node Camera A single-node camera is just a plain camera like the perspective camera. You can change its rotation and translation by setting these channels in the Channel Box, by using the Move and Rotate tools, or by tumbling and tracking while looking through the camera.
Two-Node Camera A two-node camera is a camera that has a separate aim control. The Camera and Aim controls are contained within a group. When you switch to this type of camera (or create one using the Create ➣ Cameras menu), the rotation of the camera is controlled by the position of the aim node, which is simply a locator. It works much like the Show Manipulators tool except that the locator has a transform node of its own. This makes it easy to visualize where the camera is looking in the scene and makes animating the camera easier. You can keyframe the position of the aim locator and the position of the camera separately and easily edit their animation curves in the Graph Editor.
Three-Node Camera A three-node camera is created when you choose Camera, Aim, and Up from the Controls menu. This adds a third locator, which is used to control the camera’s rotation around the z-axis. These controls and alternative arrangements will be explored later in the “Creating Custom Camera Rigs” section.
When working with two- or three-node cameras, resist the temptation to move or keyframe the position of the group node that contains both the camera and the aim locator. Instead, expand the group in the Outliner, and keyframe the camera and aim nodes separately. This will keep the animation simple and help avoid confusion when editing the animation. If you need to move the whole rig over a large distance, Shift+click both the camera and the aim locator and move them together. Moving the group node separately is asking for trouble.
For most situations, a two-node camera is a good choice since you can easily manipulate the aim node to point the camera accurately at specific scene elements, yet at the same time, it doesn’t have additional nodes, like the three-node camera, which can get in the way. In this example, you’ll use a two-node camera to create an establishing shot for the car-chase scene.
The focal length of the camera has a big impact on the mood of the scene. Adjusting the focal length can exaggerate the perspective of the scene, creating more drama.
Adjusting the focal length of the camera has a similar effect on the camera as changing the angle of view; however, it is inversely related to the angle of view. Increasing the focal length zooms in on the scene, and decreasing it zooms out. The two settings are connected; they can’t be set independently of each other.
In a real camera, as you adjust the focal length, you are essentially repositioning the lens in the camera so that the distance between the lens and the film gate (where the sensor is exposed to light) is increased or decreased. As you increase the focal length, objects appear larger in the frame. The camera zooms in on the subject. The viewable area also decreases—this is the angle of view. As you decrease the focal length, you move the lens back toward the film gate, increasing the viewable area in the scene and making objects in the frame appear smaller. You’re essentially zooming out (see Figure 16.21).
By default, Maya cameras have a focal length of 35. Roughly speaking, the human eye has a focal length of about 50. A setting of 20 is a good way to increase drama in an action scene by exaggerating the perspective. Higher settings can flatten out the view, which creates a different type of mood; by reducing perspective distortion, you can make the elements of a scene feel large and distant.
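The inverse relationship between focal length and angle of view follows from simple trigonometry on the film back. Here is a hedged sketch, assuming a 36 mm film-back width (roughly Maya’s default 1.417-inch horizontal aperture; the exact constant is an assumption for illustration):

```python
import math

def angle_of_view_degrees(focal_length_mm, aperture_mm=36.0):
    """Horizontal angle of view for a given focal length.

    aperture_mm is the film-back (film gate) width. The angle of view is
    2 * atan(aperture / (2 * focal length)): longer focal lengths give a
    narrower angle (zoomed in), shorter ones a wider angle (zoomed out).
    """
    return math.degrees(2.0 * math.atan(aperture_mm / (2.0 * focal_length_mm)))

# Longer focal length -> narrower angle of view.
for f in (20, 35, 50, 100):
    print(f, round(angle_of_view_degrees(f), 1))
```

With these assumptions, the default focal length of 35 works out to an angle of view of roughly 54 degrees, which matches the wide, slightly dramatic feel of Maya’s default camera.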
Clipping planes are used to determine the range of renderable objects in a scene. Objects that lie outside the clipping planes are not visible or renderable in the current camera. Clipping planes can affect the quality of the rendered image; if the ratio between the near clipping plane and the far clipping plane is too large, image quality can suffer. (If the near clipping plane is 0.1, the far clipping plane should be no more than 20,000.) Keep the far image plane just slightly beyond the farthest object that needs to be rendered in the scene, and keep the detail of distant objects fairly low.
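The rule of thumb above amounts to keeping the far/near ratio at or below roughly 200,000. A quick sketch of that check (the function and threshold are illustrative, not a Maya API):

```python
def clip_ratio_ok(near, far, max_ratio=200_000):
    """Flag near/far clipping-plane pairs whose ratio risks depth artifacts.

    With a near plane of 0.1, the far plane should be no more than 20,000,
    which corresponds to a far/near ratio of 200,000.
    """
    return (far / near) <= max_ratio

print(clip_ratio_ok(0.1, 20_000))    # True
print(clip_ratio_ok(0.001, 10_000))  # False (ratio is 10,000,000)
```

If the check fails, either pull in the far plane or push out the near plane rather than simply scaling the scene.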
The Auto Render Clip Plane option automatically determines the position of the clipping planes when rendering with Maya software. (This setting does not affect animations rendered with mental ray, Maya hardware, or vector renders.) It’s always a good idea to turn off this option and set the clipping-plane values manually:
Zoom in on the shot cam in the perspective view, and click the blue manipulator switch twice (located just below the camera when the Show Manipulator tool is active) to switch to the clipping-plane display (see Figure 16.22). After you click the manipulator, it turns yellow.
The clipping-plane manipulator consists of two blue rectangles connected by lines. The near clipping plane is a small rectangle close to the camera; the far clipping plane is large and far from the camera.
Save the scene as chase_v02.ma.

To see a version of the scene to this point, open chase_v02.ma from the chapter16/scenes directory at the book’s web page.
In an actual film camera, the film back refers to the plate where the negative is placed when it is exposed to light. The size of the film determines the film-back setting, so 35 mm film uses a 35 mm film back. The film gate is the gate that holds the film to the film back. Unless you are trying to match actual footage in Maya, you shouldn’t need to edit these settings.
Ideally, you want the Film Gate and Resolution Gate to be the same size in the viewport. If you turn on the display of both the Film Gate and the Resolution Gate in the camera’s Display Options rollout (toward the bottom of the Attribute Editor—you can’t turn on both the Film Gate and Resolution Gate using the icons in the panel menu bar), you may see that the Film Gate appears to be larger than the Resolution Gate in the viewport; the gates are displayed as boxes. You can fix this by adjusting the Film Aspect Ratio setting. Simply divide the resolution width by the resolution height (1280 ÷ 720 = 1.777777), and put this value in the Film Aspect Ratio setting under the Film Back rollout (see Figure 16.24).
The Film Gate drop-down list has presets available that you can use to match footage if necessary. The presets will adjust the camera aperture, film aspect ratio, and lens squeeze ratio as needed. If you’re not trying to match film, you can safely leave these settings at their defaults and concern yourself only with the Image Size and Resolution attributes in the Render Settings window.
The Film Fit Offset and Film Offset controls in the Film Back rollout can be useful in special circumstances when you need to change the center of the rendered area without altering the position of the camera. The parallax caused by the perspective of the 3D scene in the frame does not change even though the camera view has changed. Creating an offset in an animated camera can create a strange but stylistic look.
The Film Fit Offset value has no effect if Fit Resolution Gate is set to Fill or Overscan. If you set Fit Resolution Gate to Horizontal or Vertical and then adjust the Film Fit Offset, the offset will be either horizontal or vertical based on the Fit Resolution Gate setting. The Film Offset values accomplish the same thing; however, they don’t depend on the setting of Fit Resolution Gate. The following steps demonstrate how to alter the Film Offset:
Open the chase_v02.ma scene from the chapter16/scenes directory at the book’s web page. Set the current camera in the viewport to shotCam1 and the timeline to frame 61.

The Shake attribute is an easy way to add a shaky, vibrating motion to a camera. The first field is the horizontal shake, and the second field is the vertical shake. The values that you enter in the shake fields modify the current settings for Film Offset. When you apply a shake, you’re essentially shaking the film back, which is useful because it does not change how the camera itself is animated. You can apply expressions, keyframes, or animated textures to one or both of these fields. The Shake Enabled option allows you to turn the shaking on or off while working in Maya; it can’t be keyframed. However, you can easily animate the amount of shaking over time.
In this example, you’ll use an animated fractal texture to create the camera-shake effect. You can use an animated fractal texture any time that you need to generate random noise values for an attribute. One advantage fractal textures have over mathematical expressions is that they are easier to animate over time.
Set the Rotate UV value to 45. This rotates the texture so that the output of this animated texture differs from that of the other, ensuring a more random motion.
You may notice that the shaking is nice and strong but that you’ve lost the original composition of the frame. To bring it back to where it was, adjust the range of values created by each texture. The Fractal Amplitude of both textures is set to 0.1, which means that each texture is adding a random value between 0 and 0.1 to the film offset. You need to equalize these values by adjusting the Alpha Offset and Alpha Gain settings of the textures.
If you look at what’s going on with the fractal texture, you’ll see that when the Amplitude setting of the texture is 0, the outAlpha value is 0.5. (You can see this by switching to the shotCamShape1 tab and looking at the Horizontal Shake field.) The fractal texture itself is a flat gray color (value = 0.5). As you increase the Amplitude setting, the variation in the texture is amplified. At an Amplitude value of 1, the outAlpha attribute ranges from 0 to 1. You can see this in the values generated for the Shake attribute in the camera node. This is a large offset and causes the shaking of the camera to be extreme. You can set Amplitude to a low value, but this means that the outAlpha value generated will remain close to 0.5, so as the shake values are added to the film offset, the composition of the frame is changed—the view shifts up to the right.
To fix this, you can adjust the Alpha Gain and Alpha Offset attributes found in the Color Balance rollout of each fractal texture. Alpha Gain is a scaling factor. When Alpha Gain is set to 0.5, the outAlpha values are cut in half; when Alpha Gain is set to 0, outAlpha is also 0, and thus the Shake values are set to 0 and the camera returns to its original position. If you want to shake the camera but keep it near its original position, it seems as though the best method is to adjust the Alpha Gain value of the fractal texture.
However, there is still one problem with this method. You want the outAlpha value of the fractal to produce both negative and positive values so that the camera shakes around its original position in all directions. If you set Alpha Gain to a positive or negative number, the values produced will be either positive or negative, which makes the view appear to shift in one direction or the other. To adjust the output of these values properly, you can use the Alpha Offset attribute to create a shift.
Set Alpha Offset to negative one-half of Alpha Gain to get a range of values that are both positive and negative; 0 will be in the middle of this range. Figure 16.29 shows how adjusting the Amplitude, Alpha Gain, and Alpha Offset attributes affects the range of values produced by the animated fractal texture.
You can reduce the number of controls needed to animate the camera shake by automating the Alpha Offset setting on the fractal node. The best way to set this up is to create a simple expression where Alpha Offset is multiplied by negative one-half of the Alpha Gain setting. You can use this technique any time you need to shift the range of the fractal texture’s outAlpha to give both positive and negative values.
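The arithmetic behind this setup is easy to verify outside Maya. The sketch below models outAlpha as alphaGain times the raw fractal sample plus alphaOffset, and shows how setting alphaOffset to negative one-half of alphaGain centers the shake range on zero (the function is illustrative; Maya performs this mapping internally in the texture’s color balance):

```python
def shake_value(fractal_sample, alpha_gain):
    """Map a raw fractal sample (0..1) to a shake offset centered on zero.

    outAlpha = alphaGain * sample + alphaOffset. With
    alphaOffset = -0.5 * alphaGain, the output range shifts from
    [0, alphaGain] to [-alphaGain/2, +alphaGain/2].
    """
    alpha_offset = -0.5 * alpha_gain
    return alpha_gain * fractal_sample + alpha_offset

gain = 0.1
print(shake_value(0.0, gain))  # darkest sample: full negative shake (-0.05)
print(shake_value(0.5, gain))  # mid-gray sample: camera at rest (0.0)
print(shake_value(1.0, gain))  # brightest sample: full positive shake (0.05)
```

Because the mid-gray value of the fractal now maps to zero, the camera vibrates around its original framing instead of drifting off in one direction.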
In the field for Alpha Offset, type =-0.5*fractal1.alphaGain;. Then press the Enter key to enter the expression (see Figure 16.30). Note that the correct fractal node must be explicitly named in the expression or you will get an error. If the node is named something other than fractal1, make sure that the expression uses that name instead. When in doubt, check the name at the top of the Attribute Editor in the Fractal field.
You can create the same setup for the fractal2 node. However, it might be a better idea to create a direct connection between the attributes of fractal1 and fractal2 so that you need only adjust the Alpha Gain of fractal1, and all other values will update accordingly.
Save the scene as chase_v03.ma.

To see a version of the scene to this point, open the chase_v03.ma scene from the chapter16/scenes directory at the book’s web page.
The Shake Overscan attribute moves the film back and forth on the z-axis of the camera as opposed to the Shake settings, which move the film back and forth horizontally and vertically. Try animating the Shake Overscan setting using a fractal texture to create some dramatic horror-movie effects.
The three camera types in Maya (Camera, Camera and Aim, Camera Aim and Up) work well for many common animation situations. However, you’ll find that sometimes a custom camera rig gives you more creative control over a shot. This section shows you how to create a custom camera rig for the car-chase scene. Use this example as a springboard for ideas to design your own custom camera rigs and controls.
This rig involves attaching a camera to a NURBS circle so that it can easily swivel around a subject in a perfect arc:
Open the chase_v03.ma scene from the chapter16/scenes directory at the book’s web page, or continue with the scene from the previous section. In the Display tab of the Layer Editor, turn off both the choppers and the buildings layers.

Click Attach to attach the camera to the circle (see Figure 16.32). You may get a warning in the Script Editor when you attach a camera to a curve, stating that the camera may not evaluate as expected. You can safely ignore this warning.
The camera is now attached to the circle via the motion path; the camera will stay in a fixed position on the circle curve. This is a fast and easy way to attach any object or other type of transform node (such as a group) to a curve.
The camera follows the car, but things don’t get interesting until you start to animate the attributes of the rig. To cut down on the number of node attributes through which you need to hunt to animate the rig, you’ll create an asset for the camera and rig and publish attributes for easy access in the Channel Box.
Try setting the following keyframes to create a dramatic camera move using the rig (see Figure 16.36).
Frame   Rise    Swivel     Push   Aim X   Aim Y    Aim Z
1       3.227   48.4116    6      0       0        0
41      0.06    134.265    0.3    0       0        0
92      0.06    246.507    0.3    0       0.091    0.046
145     0.13    290.819    0.8    0       0.167    –0.087
160     0       458.551    0.4    0       0.132    –0.15
200     0.093   495.166    0.4    0       0.132    –0.15
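Between these keys, Maya's animation curves interpolate the values (spline tangents by default). As a rough illustration of what happens between keyframes, here is the same idea with simple linear interpolation, using a subset of the Swivel keys above:

```python
def lerp_key(keys, frame):
    """Linear interpolation between keyframes, a simplified stand-in
    for what Maya's animation curves do between keys.
    `keys` maps frame -> value; frames outside the range clamp."""
    frames = sorted(keys)
    if frame <= frames[0]:
        return keys[frames[0]]
    if frame >= frames[-1]:
        return keys[frames[-1]]
    for f0, f1 in zip(frames, frames[1:]):
        if f0 <= frame <= f1:
            t = (frame - f0) / (f1 - f0)
            return keys[f0] + t * (keys[f1] - keys[f0])

# Swivel keys from the table above (a subset)
swivel = {1: 48.4116, 41: 134.265, 92: 246.507}
value_at_21 = lerp_key(swivel, 21)   # halfway between frames 1 and 41
```

Spline tangents ease in and out of each key instead of changing direction abruptly, which is why the rendered camera move feels smoother than this linear version would.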
To see a finished version of the animation, open the chase_v04.ma scene from the chapter16/scenes directory at the book's web page.
Depth of field and motion blur are two effects meant to replicate real-world camera phenomena. Both of these effects can increase the realism of a scene as well as the drama. However, they can both increase render times significantly, so it’s important to learn how to apply them efficiently when rendering a scene. In this section, you’ll learn how to activate these effects and the basics of how to work with them. Using both effects well is closely tied to render-quality issues.
The depth of field (DOF) settings in Maya simulate the photographic phenomenon in which some areas of an image are in focus and other areas are out of focus. Artistically, this can greatly increase the drama of the scene, because it forces the viewers to focus their attention on a specific element in the composition of a frame.
Depth of field is a ray-traced effect, and it can be created using both Maya Software and mental ray; however, the mental ray DOF feature is far superior to Maya Software's. This section describes how to render depth of field using mental ray.
There are two ways to apply the mental ray depth-of-field effect to a camera in a Maya scene:
Both methods produce the same effect. In fact, when you turn on the Depth Of Field option in the Depth of Field rollout, you’re essentially applying the mental ray physical DOF lens shader to the camera. The mia_lens_bokeh lens shader is a more advanced DOF lens shader that has a few additional settings that can help improve the quality of the DOF render. For more on lens shaders, consult Chapter 8, “mental ray Shading Techniques.”
The controls in the camera’s Attribute Editor are easier to use than the controls in the physical DOF shader, so this example will describe only this method of applying DOF:
Open the chase_v05.ma scene from the chapter16/scenes directory at the book's web page. Set the timeline to frame 136, click in the viewport to set the rendering view, and choose Render ➣ Render Current Frame to create a test render (refer back to Figure 16.38).
As you can see from the test render, the composition of this frame is confusing to the eye and does not read very well. There are many conflicting shapes in the background and foreground. Using depth of field can help the eye separate background elements from foreground elements and sort out the overall composition.
Use the scroll bar at the bottom of the Render View window to compare the images. There's almost no discernible difference. This is because the DOF settings need to be adjusted. There are only three settings:
Focus Distance This setting determines the area of the image that is in focus. Areas in front of or behind this area will be out of focus.
F Stop This setting describes the relationship between the diameter of the aperture and the focal length of the lens. Essentially, it controls the amount of blurriness seen in the rendered image. F Stop values used in Maya are based on real-world f-stop values: the lower the value, the blurrier the areas beyond the focus distance will be. Changing the focal length of the lens affects the amount of blur as well, so if you are happy with a camera's DOF settings but then change the focal length or angle of view, you'll probably need to readjust the F Stop setting. Typical values range from 2.8 to about 12.
Focus Region Scale You can use this value to adjust the area in the scene that you want to stay in focus. Lowering this value will also increase the blurriness. Use this option to fine-tune the DOF effect once you have the Focus Distance and F Stop settings.
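These settings interact roughly the way a real lens does. The following is not Maya's documented formula, but the standard thin-lens circle-of-confusion relationship shows the behavior described above; the 0.035 focal length and the distances are illustrative values:

```python
def blur_diameter(obj_dist, focus_dist, focal_len, f_stop):
    """Thin-lens circle-of-confusion diameter (same units as the
    distances). Blur is zero at the focus distance, grows with
    distance from it, and shrinks as the F Stop value rises
    (i.e., as the aperture narrows)."""
    aperture = focal_len / f_stop
    return aperture * focal_len * abs(obj_dist - focus_dist) / (
        obj_dist * (focus_dist - focal_len))

# An object at the focus distance is perfectly sharp...
sharp = blur_diameter(obj_dist=15.0, focus_dist=15.0, focal_len=0.035, f_stop=2.8)
# ...a distant object blurs, and more so at a low F Stop than a high one.
blurry = blur_diameter(obj_dist=30.0, focus_dist=15.0, focal_len=0.035, f_stop=2.8)
less_blurry = blur_diameter(obj_dist=30.0, focus_dist=15.0, focal_len=0.035, f_stop=8.0)
```

This also explains the note about focal length: the aperture diameter is focal length divided by F Stop, so a longer lens at the same F Stop produces a wider aperture and more blur.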
Select the DOF_cam; set Focus Distance to 15, F Stop to 2.8, and Focus Region Scale to 0.1; and create another test render from the DOF_cam.
The blurriness in the scene is much more obvious, and the composition is a little easier to understand. The blurring is grainy. You can improve this by adjusting the Overall Quality slider in the Render Settings. For now, you can leave the settings where they are as you adjust the DOF (see Figure 16.39).
To see a version of the scene so far, open chase_v06.ma from the chapter16/scenes directory at the book's web page.
A rack focus refers to a depth of field that changes over time. It’s a common technique used in cinematography as a storytelling aid. By changing the focus of the scene from elements in the background to the foreground (or vice versa), you control what the viewer looks at in the frame. In this section, you’ll set up a camera rig that you can use to change the focus distance of the camera interactively.
Open the chase_v06.ma file from the chapter16/scenes directory at the book's web page. Select unitConversion14 to switch to the unitConversion node in the Attribute Editor, and set Conversion Factor to 1.
Occasionally, when you create this rig and the scene size is set to something other than centimeters, Maya converts the units automatically and you end up with an incorrect number for the Focus Distance attribute of the camera. This node may not always be necessary when setting up this rig. If the value of the Focus Distance attribute of the camera matches the distance shown by the distanceDimension node, you don’t need to adjust the unitConversion’s Conversion Factor setting.
The area around the helicopter is now in focus (see Figure 16.44).
If you render a sequence of this animation for the frame range between 120 and 180, you’ll see the focus change over time. To see a finished version of the camera rig, open chase_v07.ma
from the chapter16/scenes
directory at the book’s web page.
If an object changes position while the shutter on a camera is open, this movement shows up as a blur. Maya cameras can simulate this effect using the Motion Blur settings found in the Render Settings as well as in the camera’s Attribute Editor. Not only can motion blur help make an animation look more realistic, but it can also help smooth the motion in the animation.
Like depth of field, motion blur is expensive to render, meaning that it can take a long time. Also much like depth of field, there are techniques for adding motion blur in the compositing stage after the scene has been rendered. You can render a motion vector pass using mental ray’s passes and then add the motion blur using the motion vector pass in your compositing software. For jobs that are on a short timeline and a strict budget, this is often the way to go. In this section, however, you’ll learn how to create motion blur in Maya using mental ray.
There are many quality issues closely tied to rendering with motion blur. In this chapter, you’ll learn the basics of how to apply the different types of motion blur.
You enable the Motion Blur setting in the Render Settings window, so unlike the Depth Of Field setting, which is activated per camera, all cameras in the scene will render with motion blur once it has been turned on. Likewise, all objects in the scene have motion blur applied to them by default. You can, and should, turn off the Motion Blur setting for those objects that appear in the distance or do not otherwise need motion blur. If your scene involves a close-up of an asteroid whizzing by the camera while a planet looms in the distance surrounded by other slower-moving asteroids, you should disable the Motion Blur setting for those distant and slower-moving objects. Doing so will greatly reduce render time.
To disable the Motion Blur setting for a particular object, select the object, open its Attribute Editor to its Shape Node tab, expand the Render Stats rollout, and deselect the Motion Blur option. To disable the Motion Blur setting for a large number of objects at the same time, select the objects and open the Attribute Spread Sheet (Windows ➣ General Editors ➣ Attribute Spread Sheet). Switch to the Render tab, and select the Motion Blur header at the top of the column to select all of the values in the column. Enter 0 to turn off the Motion Blur setting for all of the selected objects (see Figure 16.45).
There are two types of motion blur in mental ray for Maya: No Deformation and Full. No Deformation calculates only the blur created by an object’s transformation—meaning its translation, rotation, and scale. A car moving past a camera or a helicopter blade should be rendered using No Deformation.
The Full setting calculates motion vectors for all of an object’s vertices as they move over time. Full should be used when an object is being deformed, such as when a character’s arm geometry is weighted to joints and animated moving past the camera. Using Full motion blur will give more accurate results for both deforming and nondeforming objects, but it will take a longer time to render than using No Deformation.
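The difference between the two modes comes down to how motion vectors are computed. This toy example (not Maya's implementation) shows why only Full blur captures a deformation:

```python
def transform_blur_vectors(verts, velocity):
    """'No Deformation' style: every vertex shares the object's single
    transform motion vector, so only rigid motion is captured."""
    return [velocity for _ in verts]

def full_blur_vectors(verts_t0, verts_t1):
    """'Full' style: a motion vector is computed per vertex, so
    deforming geometry (e.g., a skinned arm) blurs correctly."""
    return [(x1 - x0, y1 - y0, z1 - z0)
            for (x0, y0, z0), (x1, y1, z1) in zip(verts_t0, verts_t1)]

verts_t0 = [(0, 0, 0), (1, 0, 0)]
verts_t1 = [(0, 0, 0), (1, 2, 0)]   # only the second vertex moved: a deformation
rigid = transform_blur_vectors(verts_t0, (0, 0, 0))  # sees no motion at all
full = full_blur_vectors(verts_t0, verts_t1)          # sees the moving vertex
```

The per-vertex bookkeeping is also why Full blur costs more to render: the work scales with vertex count rather than being a single vector per object.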
The following procedure shows how to render with motion blur:
Open chase_v08.ma from the chapter16/scenes directory at the book's web page. In the Render View panel, click the Render Region icon (second icon from the left) to render the selected region in the scene. When it's finished, store the image in the render view. You can use the scroll bar at the bottom of the render view to compare stored images (see Figure 16.46).
In this case, the motion blur did not add a lot to the render time; however, consider that this scene has no textures, simple geometry, and default lighting. Once you start adding more complex models, textured objects, and realistic lighting, you’ll find that the render times will increase dramatically.
In the Scene tab of the Render Settings window, take a look at the settings for Motion Blur:
Motion Blur By This setting is a multiplier for the motion blur effect. A setting of 1 produces a realistic motion blur. Higher settings create more stylistic or exaggerated effects.
Motion Steps Increasing Motion Steps forces motion blur to be calculated in between frames.
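Conceptually, motion steps add sample times within the shutter interval so that curved motion (such as spinning rotor blades) blurs along an arc rather than a straight line between two endpoints. A sketch of the sampling, with shutter times expressed as frame offsets (an assumption for illustration, not Maya's exact scheme):

```python
def motion_samples(frame, shutter_open, shutter_close, motion_steps):
    """Times at which a moving object is sampled within the shutter
    interval. Each added motion step subdivides the interval further,
    so fast curved motion is traced more accurately."""
    n = motion_steps + 1  # steps subdivide the interval into n segments
    return [frame + shutter_open +
            i * (shutter_close - shutter_open) / n for i in range(n + 1)]

one_step = motion_samples(10, 0.0, 0.5, motion_steps=1)   # [10.0, 10.25, 10.5]
```

More samples mean more geometry evaluations per frame, which is another reason to raise Motion Steps only for objects that actually need it.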
Orthographic cameras are generally used for navigating a Maya scene and for modeling from specific views. A stereoscopic, or stereo, camera is a special rig that can be used for rendering stereoscopic 3D movies.
The front, top, and side cameras that are included in all Maya scenes are orthographic cameras. An orthographic view is one that lacks perspective. Think of a blueprint drawing, and you get the basic idea. There is no vanishing point in an orthographic view.
Any Maya camera can be turned into an orthographic camera. To do this, open the Attribute Editor for the camera and, in the Orthographic Views rollout, turn on the Orthographic option (see Figure 16.47). Once a camera is in orthographic mode, it appears in the Orthographic section of the viewport’s Panels menu. You can render animations using orthographic cameras; just add the camera to the list of renderable cameras in the Render Settings window. The Orthographic Width is changed when you dolly an orthographic camera in or out.
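The difference between the two projections comes down to whether depth divides the result. A minimal sketch (simplified; real projection matrices include more terms):

```python
def project_perspective(point, focal=1.0):
    """Perspective: x and y shrink with distance, which is what
    creates a vanishing point."""
    x, y, z = point
    return (focal * x / z, focal * y / z)

def project_orthographic(point, ortho_width=10.0):
    """Orthographic: depth is ignored, so parallel lines stay parallel.
    Dollying only changes ortho_width (the visible slice of the scene)."""
    x, y, z = point
    return (x / ortho_width, y / ortho_width)

near = (2.0, 0.0, 5.0)
far = (2.0, 0.0, 50.0)
# Same x in both points: identical under orthographic projection,
# different under perspective projection.
```

This is why orthographic views are ideal for modeling from blueprints: an edge measures the same on screen no matter how far from the camera it sits.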
You can use stereo cameras when rendering a movie that is meant to be watched using special 3D glasses. Follow the steps in this example to learn how to work with stereo cameras:
In the perspective view, select the center camera and open the Attribute Editor to stereoCameraCenterCamShape.
In the Stereo rollout, you can choose which type of stereo setup you want; this is dictated by how you plan to use the images in the compositing stage. Interaxial Separation adjusts the distance between the left and right cameras, and Zero Parallax defines the point on the z-axis (relative to the camera) at which an object directly in front of the camera appears in the same position in the left and right cameras.
In the perspective view, switch to a top view and make sure that the NURBS sphere is directly in front of the center camera and at the same position as the Zero Parallax plane (Translate Z = –10).
As you change the Zero Parallax value, the left and right cameras will rotate on their y-axes to adjust, and the Zero Parallax Plane will move back and forth depending on the setting.
In the top view, move the sphere back and forth, toward, and away from the camera rig. Notice how the sphere appears in the same position in the frame in the left- and right-camera views when it is at the Zero Parallax plane. However, when it is in front of or behind the plane, it appears in different positions in the left and right views.
If you hold a finger up in front of your eyes and focus on the finger, the position of the finger is at the Zero Parallax point. Keep your eyes focused on that point, but move your finger toward and away from your face. You see two fingers when it's in front of or behind the Zero Parallax point (more obvious when it's closer to your face). When a stereo camera rig is rendered and composited, the same effect is achieved and, with the help of 3D glasses, the image on the two-dimensional screen appears in three dimensions.
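The finger experiment corresponds to a simple relationship between depth and on-screen parallax. This small-angle sketch is an approximation for illustration, not Maya's exact convergence math:

```python
def screen_parallax(depth, interaxial, zero_parallax):
    """Horizontal offset between the left- and right-eye images for a
    point at `depth` in front of a converged stereo rig. Zero at the
    Zero Parallax plane, negative in front of it (object appears to
    pop out of the screen), positive behind it (appears behind it)."""
    return interaxial * (depth - zero_parallax) / depth

at_plane = screen_parallax(depth=10.0, interaxial=0.65, zero_parallax=10.0)
in_front = screen_parallax(depth=5.0, interaxial=0.65, zero_parallax=10.0)
behind = screen_parallax(depth=20.0, interaxial=0.65, zero_parallax=10.0)
```

Widening Interaxial Separation scales all of these offsets up, which strengthens the depth effect but can become uncomfortable to watch if pushed too far.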
The cameras will render as separate sequences, which can then be composited together in compositing software to create the final output for the stereo 3D movie.
You can preview the 3D effect in the Render View window by choosing Render ➣ Stereo Camera from the Render menu in the Render View. The Render View window will render the scene and combine the two images. You can then choose one of the options in the Render View's Display ➣ Stereo Display menu to preview the image. If you have a pair of red/green 3D glasses handy, choose the Anaglyph option and put on the glasses, and you'll be able to see how the image will look in 3D.
The upper-right viewport window has been set to StereoCamera, which enables a Stereo menu in the panel menu bar. This menu has a number of viewing options that you can choose from when working in a stereo scene, including viewing through just the left or right camera. Set the shading mode to Smooth Shade All, and switch to Anaglyph mode to see the objects in the scene shaded red or green to correspond with the left or right camera. (This applies to objects that are in front of or behind the Zero Parallax plane.)
The Camera Sequencer is a nonlinear editing interface that allows you to stitch together multiple camera views into a single sequence. The Camera Sequencer editing interface itself is similar to editing interfaces found in video editing and compositing programs, but instead of editing a sequence of images, the Camera Sequencer edits the animation of cameras in an existing 3D scene. This allows you to work out the timing of shots in a scene without having to render any images.
This exercise will demonstrate the basic functions of the Camera Sequencer:
Open the chase_v09.ma scene from the chapter16/scenes folder at the book's web page. In the perspective viewport, choose Panels ➣ Saved Layouts ➣ Persp/Camera Sequencer (see Figure 16.49).
The Camera Sequencer interface appears at the bottom, below the persp view. Notice that it has its own timeline. When you work with the Camera Sequencer, you do not need to move the playhead on the main Time slider; in fact, this can get a little confusing when you first start using the sequencer, so it is not a bad idea to hide the Time slider.
From the main menu bar, choose Display ➣ UI Elements. Uncheck both Time Slider and Range Slider. Hold Ctrl while deselecting to prevent the menu from closing.
Now you can add a camera to the sequencer and start stitching together a sequence from all three cameras.
Hold the Alt key, and drag to the right in the sequencer so that you can see more of the timeline.
When you create the shot, a blue bar is added to the sequencer. This represents the range of shot1. Notice that the shot is placed at the start of the Time slider in the sequencer, even though the shot itself starts at frame 50 (see Figure 16.51).
Choose Create ➣ Shot ➣ ❒ to create a second shot. In the Create Shot Options dialog box, set the shot name to shot2, set Shot Camera to swivelCam, and set Start Time to 40 and End Time to 150. Leave New Shot Placement set to Current Frame. Click the Create Shot button.
A second blue bar appears in the Camera Sequencer editing interface below the shot1 bar, on a new track. This is because New Shot Placement was set to Current Frame. You can add the shot to the same track as the original by choosing After Current Shot instead, but I think it's easier to work with the shots if they are on separate tracks. As you'll see, you can easily move the shots around in the Camera Sequencer editing interface.
In the Camera Sequencer editing interface, click the long blue bar in track 2 (it turns yellow when selected) and drag it to the right so that the left end of shot2 is below the right end of shot1 (see Figure 16.52).
The numbers at either end of a shot bar correspond to frame numbers in the animation. The upper number is the frame of the original animation; the lower number is the frame number in the Camera Sequencer. Look at the blue bar for shot1: the number 41 indicates that you're on frame 41 in the sequencer, and the number 90 indicates frame 90 of the actual animation. At the left end of the bar for shot2, the number 42 indicates that you're on frame 42 of the sequence, and the number 40 indicates frame 40 of the actual animation.
Click the Playback Sequence button in the sequencer to play the animation. In spite of the fact that technically the animation is jumping backward from the end of shot1 to the start of shot2, it looks seamless. In fact, you could even drag shot1 so that it comes after shot2.
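The frame mapping the sequencer performs can be sketched directly from the numbers above:

```python
def anim_frame(seq_frame, seq_start, source_start):
    """Map a Camera Sequencer frame to the frame of the underlying
    animation: a shot plays its source range wherever its bar sits
    in the sequence."""
    return source_start + (seq_frame - seq_start)

# shot1: source range starts at frame 50, placed at sequence frame 1
shot1_at_41 = anim_frame(41, seq_start=1, source_start=50)   # 90
# shot2: source range starts at frame 40, placed at sequence frame 42
shot2_at_42 = anim_frame(42, seq_start=42, source_start=40)  # 40
```

Because each shot carries its own mapping, the sequence can jump backward in animation time (from frame 90 to frame 40 here) and still play seamlessly.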
The shot at the top of the stack is what you’ll see when the animation plays.
Play the sequence.
Now things are taking shape. With little effort, you’re already editing a film before a single frame has been rendered!
To see a version of this scene, open the chase_v10.ma scene in the chapter16/scenes directory at the book's web page.
The Camera Sequencer is a powerful tool. In addition to rearranging camera sequences in a nonlinear fashion, you can change the speed of a shot simply by dragging left or right on the frame number in the lower-left or lower-right corner of the shot bar. When you do this, you'll notice that the percentage value at the center of the bar updates. So if you extend a shot to 200 percent of its original length, the camera move and the animation in the shot will be slowed to half speed. If you try this for shot3, you'll see the blades of the helicopters rotate more slowly than in the original shot. However, the actual animation has not been changed; if you leave the Camera Sequencer as is, the original animation in the scene will not be altered.
You can create an ubercam that incorporates all of the changes created in the Camera Sequencer into a single camera. To do this, choose Create ➣ Ubercam in the Camera Sequencer. The main caveat is that you cannot alter the duration of the shots in the camera sequence.
Use assets. An asset is a way to organize the attributes of any number of specified nodes so that the attributes are easily accessible in the Channel Box. This means that members of each team in a pipeline only have to see and edit the attributes they need to get their job done, thus streamlining production.
Master It Create an asset from the nodes in the miniGun_v04.ma scene in the chapter1/scenes folder. Make sure that only the Y rotation of the turret, the X rotation of the guns, and the Z rotation of the gun barrels are available to the animator.
Create file references. File references can be used so that as part of the team works on a model, the other members of the team can use it in the scene. As changes to the original file are made, the referenced file in other scenes will update automatically.
Master It Create a file reference for the miniGun_v04.ma scene; create a proxy from the miniGun_loRes.ma scene.
Determine the camera’s image size and film speed. You should determine the final image size of your render at the earliest possible stage in a project. The size will affect everything from texture resolution to render time. Maya has a number of presets that you can use to set the image resolution.
Master It Set up an animation that will be rendered to be displayed on a high-definition progressive-scan television.
Create and animate cameras. The settings in the Attribute Editor for a camera enable you to replicate real-world cameras as well as add effects such as camera shaking.
Master It Create a camera setting where the film shakes back and forth in the camera. Set up a system where the amount of shaking can be animated over time.
Create custom camera rigs. Dramatic camera moves are easier to create and animate when you build a custom camera rig.
Master It Create a camera in the car-chase scene that films from the point of view of chopperAnim3 but tracks the car as it moves along the road.
Use depth of field and motion blur. Depth of field and motion blur replicate real-world camera effects and can add a lot of drama to a scene. Both are expensive to render and therefore should be applied with care.
Master It Create a camera asset with a built-in focus distance control.
Create orthographic and stereo cameras. Orthographic cameras are used primarily for modeling because they lack a sense of depth or a vanishing point. A stereoscopic rig uses three cameras and special parallax controls that enable you to render 3D movies from Maya.
Master It Create a 3D movie from the point of view of the driver in the chase scene.
Use the Camera Sequencer. The Camera Sequencer can be used to edit together multiple camera shots within a single scene. This is useful when blocking out an animatic for review by a director or client.
Master It Add a fourth camera from the point of view of the car, and edit it into the camera sequence created in the section “Using the Camera Sequencer” in this chapter.