The Autodesk® Maya® software offers a number of options for dividing the individual elements of a render into separate passes. These passes can then be reassembled and processed with additional effects using compositing software such as Adobe After Effects, Autodesk Composite, or The Foundry Nuke. In this chapter, you’ll learn how to use the render layers in Maya and render passes with the mental ray® renderer to split rendered images into elements that can then be used in your compositing software.
For best results when working on the project files in this chapter, you should copy the chapter12 project folder to your local drive and make sure that it is the current project by choosing File ⇒ Set Project. Doing so ensures that links to textures and Final Gathering maps remain intact and that the scenes render correctly.
Render layers are best used to isolate geometry, shaders, and lighting so that you can create different versions of the same animation. They strike a balance between rendering efficiency and creative flexibility. This chapter explains the typical workflow; however, you may develop your own way of using render layers over time.
You can create and manage render layers using the Layer Editor in Render mode (called the Render Layer Editor). You can access the Layer Editor in the lower-right corner of the interface layout, just below the Channel Box.
In addition to Render mode, the Layer Editor has Display and Animation modes, which correspond to the three types of layers you can create in Maya. You change the mode by clicking one of the tabs at the top of the Layer Editor. Figure 12-1 shows the Render Layer Editor with a scene that has two custom render layers and the default render layer.
By default, every Maya scene has at least one render layer labeled masterLayer. All the lights and geometry of the scene are included in masterLayer. When you create a new render layer, you can specify precisely which lights and objects are included in that layer. As you add render layers, you can create alternate lights for each layer, use different shaders on each piece of geometry, render one layer using mental ray and another using Maya Software, use indirect lighting effects on one layer and not on another, and so on. A render layer can be rendered using any camera, or you can specify which camera renders which layer. In this section, you’ll use many of these techniques to render different versions of the same scene.
In this exercise, you’ll render Anthony Honn’s vehicle model in a studio environment and in an outdoor setting. Furthermore, the car is rendered using a different shader on the body for each layer.
Start by opening the carComposite_v01.ma scene from the chapter12\scenes folder at the book’s web page (www.sybex.com/go/masteringmaya2014).
The scene is set up in a studio environment. The lighting consists of two point lights that have mental ray Physical Light shaders applied. These lights create the shadows and are reflected in the body of the car. An Area light and a Directional light are used as simple fill lights.
The car itself uses several mia materials for the metallic, glass, chrome, and rubber parts. The body uses a shading network that combines the mib_glossy_reflection shader and the mi_metallic_paint_x shader.
The shader used for the car body is named blueCarBody. You can select it in the Hypershade and graph the input and output connections in the Work Area to see how the shader is arranged. (Select the shader in the Hypershade, and choose Graph ⇒ Input And Output Connections from the Hypershade menu bar.) Figure 12-2 shows the graphed network.
The renderCam camera has a lens shader applied to correct the exposure of the image. As you learned in Chapter 10, “mental ray Shading Techniques,” mia materials and physical lights are physically accurate, which means their range of values does not always look correct when displayed on a computer screen. The mia_exposure_simple lens shader is applied to the camera to make sure the scene looks acceptable when rendered.
To create two alternative versions of the scene, you’ll want to use two separate render layers:
Generally, when you start to add render layers, the master layer is not rendered; only the layers that you add to the scene are used for rendering.
The first step is to create a new render layer for the scene:
Copying a layer is a fast and easy way to create a new render layer. You can instead create an empty layer as follows:
Another way to create a new layer is to select objects in the scene, and choose Create Layer From Selected from the Layers menu. A new render layer containing all the selected objects is created.
You can add new objects at any time by right-clicking the render layer and choosing Add Selected Objects. Likewise, you can remove objects by selecting the objects and choosing Remove Selected Objects. You can delete a render layer by right-clicking the layer and choosing Delete Layer. This does not delete the objects, lights, or shaders in the scene, but just the layer itself.
To see a version of the scene up to this point, open the carComposite_v02.ma scene from the chapter12\scenes folder at the book’s web page.
An object’s visibility can be on for one render layer and off for another. Likewise, if an object is on a display layer and a render layer, the display layer’s visibility affects whether the object is visible in the render layer. This is easy to forget, and you may find yourself unable to figure out why an object that has been added to a render layer is not visible. Remember to double-check the settings in the Layer Editor’s Display mode if you can’t see a particular object.
You can use the Relationship Editor to see the layers to which an object belongs. Choose Window ⇒ Relationship Editors ⇒ Render Layers.
To create a different lighting and shading setup for a second layer, you’ll use render layer overrides. An override changes an attribute for a specific layer. So, for example, if you wanted Final Gathering to calculate on one layer but not another, you would create an override in the Render Settings window from the Final Gathering attribute. To create an override, right-click next to an attribute and choose Create Layer Override. As long as you are working in a particular layer that has an override enabled for an attribute, you’ll see the label of the attribute highlighted in orange. Settings created in the master layer apply to all other layers unless there is an override.
This next exercise shows you how to use overrides as you create a new layer for the outdoor lighting of the car:
Something has gone wrong; the lighting has changed for this layer. Final Gathering is not calculating, but you’ll see that the render takes a long time and the lighting no longer matches the original studioLighting render. This is not because of render layers per se, but because of the Physical Sun and Sky network that was added to the scene. Remember from Chapter 9, “Lighting with mental ray,” that when you add a Physical Sun and Sky network, a number of nodes are added to the scene, including the renderable cameras. Normally this feature saves time and work, but in this case it’s working against the scene.
The easiest way to fix the problem is to create a duplicate render camera. One camera can be used to render the studioLighting layer; the other can be used to render the outdoorLighting layer. You can make sure that the correct lens shaders are applied to both cameras. You can use overrides to specify which camera is available from which layer.
To see a version of the scene up to this point, open the carComposite_v03.ma scene from the chapter12\scenes folder at the book’s web page.
Notice that you do not need to add cameras to render layers when you add them to a scene. You can if you want, but it makes no difference. The cameras that render the scene are listed on the Common tab of the Render Settings window.
If you’re rendering an animated sequence using two cameras with different settings as in the carComposite example, you’ll want to use overrides so that you don’t render more images than you need.
To see a version of the scene up to this point, open the carComposite_v04.ma scene from the chapter12\scenes folder at the book’s web page.
After you create the overrides for the cameras, it is still possible to render with either camera in the render view. The overrides ensure that the correct camera is used for each layer during a batch render.
The flexibility of render layers becomes even more apparent when you apply different shaders to the same object on different layers. This allows you to render alternate versions of the same animation.
The car renders with a different material applied to the body. If you render the studioLighting layer (using the studioCam camera), you’ll see that the car is still blue. The new shader appears only when the outdoorLighting layer is rendered.
You don’t need to create overrides to apply different materials on different render layers; however, you can create overrides for the attributes of render nodes used on different layers (for instance, one shader could have different transparency values on different render layers).
Shaders applied to selected polygons can also differ from one render layer to the next. However, if for your render layers you choose to apply materials on a per-polygon as opposed to a per-object basis, be sure to check your renders, because it is possible to get unexpected results.
To see a finished version of the scene, open the carComposite_v05.ma scene from the chapter12\scenes folder at the book’s web page.
A material override applies a material to all the objects within a particular layer. To create a material override, right-click one of the layers in the Render Layer Editor, and choose Overrides ⇒ Create New Material Override. You can then select a new material to be created from the list.
Render layers can use blend modes, which combine the results of the render to form a composite. You can preview the composite in the Render View window. Typically, you render each layer separately, import the render sequences into compositing software (such as Adobe After Effects, Autodesk Composite, or The Foundry Nuke), and then apply the blend modes using the controls in the compositing software. Maya gives you the option of creating a simple composite using render layers, which you can view in the Render View window.
Blend modes use simple algorithms to combine the numeric color values of each pixel to create a composite. A composite is created by layering two or more images on top of each other. The image on top is blended with the image below. If both images are rendered as Normal, then the top image covers the bottom image, except where the top layer is transparent due to alpha. If the blend mode is set to Multiply, then the light pixels in the top image are transparent, and the darker pixels of the top image darken the pixels in the bottom image. This technique is often used to add shadowing to a composite. If the blend mode of the top image is set to Screen, then the darker pixels are transparent, and the lighter pixels brighten the pixels of the lower image. You can use this to composite glowing effects.
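As a rough sketch of the math involved, the standard per-channel compositing formulas for Normal, Multiply, and Screen look like the following. The color values are normalized to the 0.0–1.0 range, and the function names are purely illustrative; they are not part of any Maya API.

```python
# Standard per-channel compositing formulas for Normal, Multiply, and Screen.
# All values are normalized color channels in the 0.0-1.0 range; the function
# names are illustrative, not part of any Maya API.

def normal(top, bottom, alpha):
    # The top pixel covers the bottom except where the top layer's alpha
    # makes it transparent.
    return top * alpha + bottom * (1.0 - alpha)

def multiply(top, bottom):
    # Light top pixels (near 1.0) leave the bottom unchanged; dark top
    # pixels darken it -- useful for layering shadow passes.
    return top * bottom

def screen(top, bottom):
    # Dark top pixels (near 0.0) leave the bottom unchanged; light top
    # pixels brighten it -- useful for layering glow effects.
    return 1.0 - (1.0 - top) * (1.0 - bottom)

print(multiply(0.5, 0.5))      # 0.25 -- midtones darken each other
print(screen(0.5, 0.5))        # 0.75 -- midtones brighten each other
print(normal(0.5, 0.25, 0.0))  # 0.25 -- fully transparent top shows bottom
```

Notice that Multiply and Screen are mirror images of each other, which is why one suits shadow passes and the other suits glows.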
The blend modes available in Maya are as follows:
In this exercise, you’ll use blend modes to create soft shadows for the render of the car in the studio lighting scenario.
This scene shows the car in the studio lighting scenario. A single render layer exists already. Using the technique in this exercise, you’ll eliminate the harsh cast shadows that appear on the ground in the rendered image (shown earlier in Figure 12-3) and replace them with soft shadows created using an ambient occlusion shader. First, you’ll remove the shadows cast on the ground by the physical lights in the scene (note that physical lights always cast shadows; there is no option for turning shadows off when you use these lights).
To see a finished version of the scene, open the carComposite_v07.ma scene from the chapter12\scenes folder at the book’s web page.
This is a good way to preview basic composites; however, in practice you will most likely want more control over how the layers are composited. To do this, you should use more advanced compositing software such as Adobe Photoshop (for still images) or Adobe After Effects, Autodesk Composite, or The Foundry Nuke (for animations).
Render passes divide the output created by a render layer into separate images or image sequences. Using render passes, you can separate the reflections, shadows, diffuse color, ambient occlusion, specular highlights, and so on, into images or image sequences, which can then be reassembled in compositing software. By separating things such as the reflections from the diffuse color, you can then exert maximum creative control over how the different images work together in the composite. This approach also allows you to make changes or variations easily or fix problems in individual elements rather than re-rendering the entire image or sequence every time you make a change.
Render passes replace the technique of using multiple render layers to separate things like reflections and shadows in older versions of Maya. (Render passes also replace the layer presets; more on this in a moment.) Each layer can be split into any number of render passes. When render passes are created, each layer is rendered once, and the passes are taken from data stored in the frame buffer. This means that each layer needs to render only once to create all the necessary passes. Render time for each layer increases as you add more passes.
A typical workflow using passes is to separate the scene into one or more render layers, as demonstrated in the first part of this chapter, and then assign any number of render passes to each render layer. When you create a batch render, the passes are stored in subfolders in the images folder of the current project. You can then import the images created by render passes into compositing software and assemble them into layers to create the final composite.
Render passes work only with mental ray; they are not available for any other renderer (Maya Software or Maya Hardware). It’s also crucial to understand that at this point not all materials will work with render passes. If you find that objects in your scene are not rendering correctly, double-check that you are using a material compliant with render passes.
The materials that work with render passes are as follows:
Also, each shader does not necessarily work with every type of render pass listed in the render pass interface. For more information about specific shaders, consult the Maya documentation.
Note that the mental ray DGS, Dielectric, mib_glossy_reflection, and mib_glossy_refraction shaders, as well as the other mib shaders, are not supported by render passes. Even if you use a supported shader (such as mi_metallic_paint_x_passes) as a base material for these shaders, it will not render correctly. When using these shaders, you may need to devise an alternate workflow involving render layers and material overrides.
The decision to render a scene in passes for compositing affects the type of lighting and materials you use on the surfaces in your scene. As noted earlier, not all materials work with render passes. In addition, light shaders, such as the mental ray physical light shader, can behave unpredictably with certain types of passes.
Generally speaking, any of the mental ray shaders that end with the “_passes” suffix are a good choice to use when rendering passes. If you have already applied the mia_material or mia_material_x shader to objects in the scene, you can easily upgrade these shaders to the mia_material_x_passes shader. The same is true for the mi_car_paint, mi_metallic_paint, and misss_fast_shader materials.
The following example illustrates how to upgrade the mia_material_x shader to the mia_material_x_passes shader in order to prepare for the creation of render passes.
This scene uses an HDR image to create reflections on the surface of the metal. To render the scene correctly, you need the building_probe.hdr image from Paul Debevec’s website at http://ict.debevec.org/~debevec/Probes/. This image is connected to the mentalrayIbl1 node in the helmetComposite_v01.ma scene. For more information on using the mentalrayIbl node, consult Chapter 10.
To see a version of the scene, open the helmetComposite_v02.ma scene from the chapter12\scenes folder at the book’s web page.
In this example, you’ll create multiple passes for reflection, specular, depth, and shadow using the space helmet scene:
The reflection pass shows only the reflections on the surface of the objects; the other parts of the image are dark. Thus the reflections are isolated. You can view the other passes using the File menu in the Render View window. Figure 12-22 shows each pass.
The shadow pass will appear inverted in IMF_display. When you import this image into your compositing program, you can invert the colors and adjust as needed to create the effect you need.
To see a finished version of the scene, open the helmetComposite_v03.ma scene from the chapter12\scenes folder at the book’s web page.
You can add render passes to a render layer in the Render Layer Editor. To do so, follow these steps:
To remove a pass from a render layer, follow these steps:
The mental ray renderer has a built-in ambient occlusion pass, which creates ambient occlusion shadowing in a render pass without the use of a custom shader network. Before the introduction of render passes in Maya 2009, the standard practice was to use a shader network to create the look of ambient occlusion, and a separate render layer used this shader as a material override. This can still be done, but in many cases using a render pass is faster and easier.
As explained in Chapter 9, ambient occlusion is a type of shadowing that occurs when indirect light rays are prevented from reaching a surface. Ambient occlusion is a soft and subtle type of shadowing. It’s usually found in the cracks and crevices of objects in diffuse lighting.
To create ambient occlusion shadowing, mental ray uses raytracing to determine how the shading of a surface is colored. When a ray from the camera intersects with geometry, a number of secondary rays are shot from the point of intersection on the surface back into the scene. Imagine all the secondary rays as a hemisphere above each point on the surface that receives an initial ray from the camera. If a secondary ray detects another object (or part of the same object) within a given distance from the original surface, that point on the original surface has the dark color applied (which by default is black). If no other nearby surfaces are detected, then the bright color is applied (which by default is white). The proportion of dark to bright color at each point is determined by the fraction of secondary rays that detect nearby surfaces.
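The shading logic just described can be sketched as a toy calculation. The function and the sample counts here are hypothetical illustrations, not mental ray's actual implementation:

```python
# Toy version of the ambient occlusion calculation described above: the
# fraction of secondary rays that hit nearby geometry blends the dark color
# into the bright color. This is an illustration, not mental ray's code.

def ambient_occlusion(hit_flags, bright=1.0, dark=0.0):
    """hit_flags holds one boolean per secondary ray; True means the ray
    found another surface within the sampling distance."""
    occluded = sum(hit_flags) / len(hit_flags)
    # Fully open point -> bright color; fully occluded point -> dark color.
    return bright * (1.0 - occluded) + dark * occluded

# A point deep in a crevice: 12 of 16 secondary rays hit nearby geometry.
print(ambient_occlusion([True] * 12 + [False] * 4))  # 0.25 (mostly dark)
# A point on an open surface: no secondary rays hit anything.
print(ambient_occlusion([False] * 16))               # 1.0 (fully bright)
```

This is why cracks and crevices darken gradually rather than producing hard-edged shadows: the occluded fraction changes smoothly across the surface.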
In this section, you’ll practice creating an ambient occlusion pass for the space helmet scene.
The scene has a single render layer named helmet. This layer is a duplicate of the masterLayer. You can create render passes for the masterLayer, but for the sake of simulating a production workflow, you’ll use a render layer in this demonstration.
To see a finished version of the scene, open the helmetComposite_v04.ma scene from the chapter12\scenes folder at the book’s web page.
Render pass contribution maps can be used to customize the render passes further for each layer. A contribution map specifies which objects and lights are included in a render pass. By default, when you create a render pass and associate it with a particular render layer, all the objects on the layer are included in the pass. Using a contribution map, you can add only certain lights and objects to the render pass. The whole point is to give you even more flexibility when rendering for compositing. This exercise demonstrates how to set up contribution maps.
To see a version of the scene up to this point, open the minigunComposite_v02.ma scene from the chapter12\scenes folder at the book’s web page.
Lights can also be included in contribution maps. If no lights are specified, all the scene lights are added. In the minigun scene, the directional light is the only light that casts shadows; the other two lights have shadow casting turned off. You can use pass contribution maps to create a shadow pass just for this light.
In the composite, the shadow pass should be inverted and color-corrected. When creating a shadow-pass contribution map, you may want to include just the shadow-casting lights. In some cases, including all the lights can produce strange results.
To see a finished version of the scene, open the miniGun_v03.ma scene from the chapter12\scenes folder at the book’s web page.
Render pass sets are simply a way to organize large lists of render passes on the Passes tab of the Render Settings window. You can create different groupings of the passes listed in the Scene Passes section, give them descriptive names, and then associate the set with the render layer or the associated contribution maps. If you have a complex scene that has a large number of passes, you’ll find it’s easier to work with the pass sets than with all of the individual passes.
You can create a render pass set as you create the render passes or add them to the set later. In the Create Render Passes window, select the Create Pass Set box, and give the pass set a descriptive name. The new pass set appears in the Scene Passes section of the Render Settings window along with all the newly created passes (see Figure 12-36). To associate a render layer with the new pass set, you only have to move the pass set to the Associated Passes section. All the passes included in the set will be associated with the layer even though they do not appear in the Associated Passes section.
To verify which passes are included in the set, open the Relationship Editor for render passes (Window ⇒ Relationship Editors ⇒ Render Pass Sets). When you highlight the pass set on the left, the passes in the set are highlighted on the right. You can add or remove passes from the set by selecting them from the list on the right of the Relationship Editor (see Figure 12-37).
You can add a new set on the Passes tab of the Render Settings window by clicking the Create New Render Pass Set icon. You can then use the Relationship Editor to add the passes to the set. A render pass can be a member of more than one set.
Rendering is the process of translating a Maya animation into a sequence of images, which are processed and saved to disk. The rendered image sequences can then be brought into compositing software, where they can be layered together, edited, color-corrected, combined with live footage, and given additional effects. The composite can then be converted to a movie file or a sequence of images for distribution, or imported into editing software for further processing.
Generally, you want to render a sequence of images from Maya. You can render directly to a movie file, but this usually is not a good idea. If the render stops while rendering directly to a movie file, it may corrupt the movie, and you will need to restart the whole render. When you render a sequence of images and the render stops, you can easily restart the render without re-creating any of the images that have already been saved to disk.
When you set up a batch render, you can specify how the image sequence will be labeled and numbered. You also set the image format of the final render, which render layers and passes will be included and where they will be stored, and other aspects related to the rendered sequences. You can use the Render Settings window to determine these properties or perform a command-line render using your operating system’s terminal. In this section, you’ll learn important features of both methods.
Batch rendering is also accomplished using render-farm software, such as Backburner, which is included with Maya. This allows you to distribute the render across multiple computers. Consult the help documents on how to use Backburner, since this subject is beyond the scope of this book.
File tokens are a way to automate the organization of your renders. If your scene has a lot of layers, cameras, and passes, you can use tokens to specify where all the image sequences will be placed on your computer’s hard drive, as well as how they are named.
The image sequences created with a batch render are placed in the images folder of the current project or whichever folder is specified in the Project Settings window (see Chapter 1, “Working in Autodesk Maya,” for information regarding project settings). Tokens are placed in the File Name Prefix field found on the Common tab of the Render Settings window. If this field is left blank, the scene name is used to label the rendered images (see Figure 12-38).
By default, if the scene has more than one render layer, Maya creates a subfolder for each layer. If the scene has more than one camera, a subfolder is created for each camera. For scenes with multiple render layers and multiple cameras, Maya creates a subfolder for each camera within the subfolder for each layer.
You can specify any folder you want by typing the folder names into the File Name Prefix field. For example, if you want your image sequences to be named marshmallow and placed in a folder named chocolateSauce, you can type chocolateSauce/marshmallow in the File Name Prefix field. However, explicitly naming a file sequence lacks the flexibility of using tokens and runs the risk of allowing you to overwrite file sequences by mistake when rendering. You can see a preview of how the images will be named in the upper portion of the Render Settings window (see Figure 12-39).
The whole point of tokens is to allow you to change the default behavior and specify how subfolders will be created dynamically for a scene. To use a token to specify a folder, place a slash after the token name. For example, to create a subfolder named after each camera, type <camera>/ in the File Name Prefix field. To use a token to name the images, omit the slash. For example, typing <scene>/<camera> results in a folder named after the scene containing a sequence of images named after the camera.
Here are some common tokens:
Note that the capitalization of the token name does not matter. Suppose you have a scene named chocolateSauce with a render layer named banana that uses a specular and a diffuse pass, rendered with two cameras named shot1 and shot2, and you want to add the version label v05. The following tokens specified in the File Name Prefix field
<Scene>/<RenderLayer>/<Camera>/<RenderPass>/<RenderPass>_<Version>
would create a file structure that looks like this:
chocolateSauce/banana/shot1/specular/specular_v05.#.ext
chocolateSauce/banana/shot1/diffuse/diffuse_v05.#.ext
chocolateSauce/banana/shot2/specular/specular_v05.#.ext
chocolateSauce/banana/shot2/diffuse/diffuse_v05.#.ext
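The token substitution that produces this structure can be sketched with a small hypothetical helper. Maya performs this expansion internally when it writes rendered files; expand_prefix is not a real Maya function.

```python
# Hypothetical sketch of the token substitution shown above. Maya performs
# this expansion internally; expand_prefix is not a real Maya function.

def expand_prefix(prefix, values):
    for token, value in values.items():
        prefix = prefix.replace("<" + token + ">", value)
    return prefix

prefix = "<Scene>/<RenderLayer>/<Camera>/<RenderPass>/<RenderPass>_<Version>"
path = expand_prefix(prefix, {
    "Scene": "chocolateSauce",
    "RenderLayer": "banana",
    "Camera": "shot1",
    "RenderPass": "specular",
    "Version": "v05",
})
print(path)  # chocolateSauce/banana/shot1/specular/specular_v05
```

Each combination of layer, camera, and pass yields a different expansion of the same prefix, which is how one prefix produces the whole folder tree shown above.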
Use underscores or hyphens when combining tokens in the folder or image name. Avoid using periods.
You can right-click the File Name Prefix field to access a list of commonly used token keywords. This is a handy way to save a little typing.
For multiframe animations, you have a number of options for specifying the frame range and the syntax for the filenames in the sequence. These settings are found on the Common tab of the Render Settings window. To enable multiframe rendering, choose one of the presets from the Frame/Animation Ext drop-down list in the File Output section. When rendering animation sequences, the safest choice is usually the name.#.ext option. This names the images in the sequence by placing a dot between the image name and the image number and another dot between the image number and the file extension. The Frame Padding option allows you to specify the number of digits in the image number, and it will insert zeros as needed. So a sequence named marshmallow using the Maya IFF format with a Frame Padding of 4 would be marshmallow.0001.iff.
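The name.#.ext convention with Frame Padding can be illustrated with a short sketch; the helper function is hypothetical, not part of Maya.

```python
# The name.#.ext convention with Frame Padding: zeros are inserted so the
# frame number always fills the padded width. The helper is hypothetical.

def frame_filename(name, frame, ext, padding=4):
    return "%s.%0*d.%s" % (name, padding, frame, ext)

print(frame_filename("marshmallow", 1, "iff"))    # marshmallow.0001.iff
print(frame_filename("marshmallow", 240, "iff"))  # marshmallow.0240.iff
```

Consistent padding keeps the files sorted in frame order in file browsers and compositing software, which is why it is the safest choice.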
The Frame Range settings specify which frames in the animation will be rendered. The By Frame setting allows you to render each frame (using a setting of 1), skip frames (using a setting higher than 1), or render twice as many frames (using a setting of 0.5, which renders essentially at half speed). You can also set Skip Existing Frames to have Maya automatically find frames that have already been rendered and skip over them.
It is possible to render backward by specifying a higher frame number for the Start Frame value than the End Frame value and using a negative number for By Frame. You would then want to use the Renumber Frames option so that the frame numbers move upward incrementally.
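The frame numbers produced by the Start Frame, End Frame, and By Frame settings can be sketched as follows. This mimics the numbering behavior described above, not Maya's internal code.

```python
# Illustration of the frame times produced by the Start Frame, End Frame,
# and By Frame settings; this mimics the numbering, not Maya's internals.

def frame_times(start, end, by):
    times = []
    t = start
    # Step until the end is passed; a By Frame below 1.0 yields extra
    # in-between frames, and a negative By Frame steps backward.
    while (by > 0 and t <= end) or (by < 0 and t >= end):
        times.append(t)
        t += by
    return times

print(frame_times(1, 3, 1))    # [1, 2, 3]
print(frame_times(1, 2, 0.5))  # [1, 1.5, 2.0]
print(frame_times(3, 1, -1))   # [3, 2, 1] -- rendering backward
```

The middle example shows why a By Frame of 0.5 produces twice as many images, which plays back as half-speed motion at the original frame rate.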
The Renumber Frames option allows you to customize the labeling of the image sequence numbers.
The rendering cameras are specified in the Renderable Cameras list. To add a camera, expand the Renderable Cameras list and choose Add Renderable Camera (see Figure 12-42). To remove a rendering camera, click the trashcan icon next to the renderable camera. As noted earlier in the chapter, you can use a layer override to include a specific camera with a render layer.
Each camera has the option of rendering alpha and Z-depth channels. The Z-depth channel stores information about the depth in the scene. This is included as an extra channel in the image (only a few formats, such as Maya IFF and OpenEXR, support this extra channel). Not all compositing software supports the Maya Z-depth channel. You may find it easier to create a camera depth pass using the custom passes (passes are described earlier in this chapter). The render depth pass can be imported into your compositing software and used with a filter to create depth-of-field effects.
When Maya renders a scene, the data stored in the frame buffer is converted into the native IFF format and then translated to the file type specified in the Image Format menu. Thus if you specify the TIFF format, for example, Maya translates the TIFF image from the native IFF format.
Many compositing packages (such as Adobe After Effects, Autodesk Composite, and The Foundry Nuke) support the IFF format, so it’s generally safe to render to this file format. The IFF format uses four 8-bit channels by default, which is adequate for most viewing purposes. If you need to change the file to a different bit depth or a different number of channels, you can choose one of the options from the Data Type menu in the Framebuffer section of the Quality tab. This is where you will also find the output options, such as Premultiply (see Figure 12-43).
Render passes use the secondary frame buffer to store the image data. You can specify the bit depth of this secondary buffer in the Attribute Editor for each render pass.
A complete list of supported image formats is available in the Maya documentation. Note that Maya Software and mental ray may support different file formats.
When you are satisfied that your animation is ready to render, and all the settings have been specified in the Render Settings window, you’re ready to start a batch render. To start a batch render, set the main Maya menu set to Rendering and choose Render ⇒ Batch Render ⇒ Options. If you are rendering with mental ray, you can specify memory limits and multithreading, as well as local and network rendering.
One of the most useful options is Verbosity Level. This refers to the level of detail of the messages displayed in the Maya Output window as the render takes place. (This works only when using Maya with Windows.) You can use these messages to monitor the progress of the render as well as diagnose problems that may occur while rendering. The Progress Messages setting is the most useful option in most situations (see Figure 12-44).
To start the render, click the Batch Render (or the Batch Render And Close) button. As the batch render takes place, you’ll see the Script Editor update (see Figure 12-45). For detailed information on the progress of each frame, you can monitor the progress in the Output window.
To stop a batch render, choose Render ⇒ Cancel Batch Render. To see how the current frame in the batch render looks, choose Render ⇒ Show Batch Render.
When the render is complete, you’ll see a message in the Script Editor that says Rendering Completed. You can then use FCheck to view the sequence (File ⇒ View Sequence) or import the sequence into your compositing software.
A batch render can be initiated using your operating system’s command prompt or terminal window. This is known as a command-line render. A command-line render takes the form of a series of commands typed into the command prompt. These commands include information about the location of the Maya scene to be rendered, the location of the rendered image sequence, the rendering cameras, the image size, the frame range, and many other options similar to the settings found in the Render Settings window.
Command-line renders tend to be more stable than batch renders initiated from the Maya interface. This is because when the Maya application is closed, more of your computer’s RAM is available for the render. You can start a command-line render regardless of whether Maya is running. In fact, to maximize system resources, it’s best to close Maya when starting a command-line render. In this example, you can keep Maya open.
In this exercise, you’ll see how you can start a batch render on both a Windows computer and a Mac. You’ll use the solarSystem_v01.ma scene, which is a simple animation showing two planets orbiting a glowing sun.
Open the solarSystem_v01.ma scene from the chapter12\scenes folder, which you can download from the book’s web page.
This scene has a masterLayer render layer, which should not be rendered, and two additional layers: orbitPaths and solarSystem.
On the Common tab of the Render Settings window, no filename prefix has been specified, and a frame range has not been set. Maya will use the default file structure when rendering the scene, and the frame range will be set in the options for the command line.
The first example starts a command-line render using Windows 7:
When starting a batch render, you can either specify the path to the scenes folder in the command-line options or set the command prompt to the folder that contains the scene.
To start a batch render, use the render command in the command prompt, followed by option flags and the name of the scene you want to render. The option flags are preceded by a hyphen. The flags are followed by a space and then the flag setting. For example, to start a scene using the mental ray renderer, you would type render -r mr myscene.ma. The render command starts the batch renderer, the -r flag specifies the renderer, and mr sets the -r flag to mental ray. The command ends with the name of the scene (or the folder path to the scene if you’re not already in the folder with the scene).
If render is not recognized as a command and the command line produces an error, the path to the render executable was not added to the system’s PATH environment variable at the time of installation. You can sidestep this error by executing the render command from the Maya program bin folder. For example, in the Command Prompt window, navigate to the C:\Program Files\Autodesk\Maya2014\bin folder before launching the batch render.
The render command accepts many option flags, but you need to use them only when you want to override a setting stored in the scene. If you want all the layers to render using mental ray regardless of the layer settings in the scene, specify mental ray with the -r mr flag. If you omit the -r flag, Maya uses the default renderer, which is Maya Software. If you have a scene with several layers that use different renderers (as in the case of the solarSystem_v01.ma scene), type -r file. This sets the renderer to whatever is specified in the file, including what is specified for each layer.
Other common flags include the following:
There is a complete list of the flags in the Maya documentation. You can also print a description of commands by typing render -help. To see mental ray–specific commands, type render -help -r mr.
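To see how several of these flags fit together, here is a hedged sketch. The flag names (-s, -e, -b, -cam, -rd, -im) are standard Maya command-line render flags, but the output directory and image name below are assumptions for illustration, and the command is echoed rather than run:

```shell
# Sketch: building a render command that combines several common flags.
# -s/-e  start and end frames    -b   frame step
# -cam   rendering camera        -rd  output directory    -im  image name
SCENE=solarSystem_v01.ma
CMD="render -r file -s 1 -e 24 -b 1 -cam renderCam1 -rd ./images -im solarTest $SCENE"
echo "$CMD"   # in practice you would run this command, not echo it
```

Echoing the assembled command first is a cheap way to double-check your flags before committing a machine to a long render.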
For example, if you want to render the scene using renderCam1, starting on frame 1 and ending on frame 24, type the following in the command prompt (see Figure 12-48):
render -r file -s 1 -e 24 -cam renderCam1 solarSystem_v01.ma
You’ll see the render execute in the command prompt. When it’s finished, you can use FCheck to view each sequence. In the Images folder, you’ll see two directories named after the layers in the scene. The orbitPaths folder has the Paint Effects orbit paths rendered with Maya Software. The solarSystem folder has the rendered sequence of the planets and sun as well as subdirectories for the diffuse, incandescence, and MasterBeauty passes. (The MasterBeauty pass is created by default when you add passes to a scene.)
Let’s say you want to render only the orbitPaths layer using renderCam2 for the frame range 16 to 48. You want to specify Maya Software as the renderer. You may want to name the sequence after the camera as well. Type the following into the command prompt (use a single line with no returns):
render -r sw -s 16 -e 48 -rl orbitPaths -cam renderCam2 -im solarSystemCam2 solarSystem_v01.ma
For a Mac, the Maya command-line render workflow is similar except that, instead of the command prompt, you use a special Terminal window that is included when you install Maya. This is an application called Maya Terminal.term, and it’s found in the Applications/Autodesk/Maya 2014 folder. It’s probably a good idea to add this application to the Dock so that you can easily open it whenever you need to run a batch render.
You need to navigate in the terminal to the scenes folder that contains the scene you want to render:
The commands for rendering on a Mac are the same as they are for Windows. From here, you can pick up at step 6 of the previous exercise.
It’s possible to create a text file that can initiate a series of batch renders for a number of different scenes. Doing so can be useful when you need a machine to perform several renders overnight or over a long weekend. This approach can save you the trouble of starting every batch render manually. This section describes how to create a batch script for Windows and Mac.
To create a batch script for Windows, open a plain-text editor (such as Notepad) and type each render command on its own line:
render -r file -s 20 -e 120 -cam renderCam1 myScene.mb
render -r file -s 121 -e 150 -cam renderCam2 myScene.mb
render -r file -s 1 -e 120 -cam renderCam1 myScene_part2.mb
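As a sketch, the same batch file could also be generated from a shell; on Windows you would normally just save the three lines above from a text editor with a .bat extension. The filename renderJobs.bat is an assumption for illustration:

```shell
# Write the three render commands to a batch file, then verify the line count.
cat > renderJobs.bat <<'EOF'
render -r file -s 20 -e 120 -cam renderCam1 myScene.mb
render -r file -s 121 -e 150 -cam renderCam2 myScene.mb
render -r file -s 1 -e 120 -cam renderCam1 myScene_part2.mb
EOF
grep -c '^render' renderJobs.bat   # prints 3: three render jobs queued
```

Double-clicking the .bat file (or running it from the command prompt) then starts the three renders in order.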
You’ll probably want to close Maya to maximize system resources for the render. Maya will render each scene in the order it is listed in the batch file. Be very careful when naming the files and the image sequences so that one render does not overwrite a previous render. For example, if you render overlapping frame sequences from the same file, use the -im flag in each batch render line to give the image sequences different names.
A few extra steps are involved in creating a Mac batch render script, but the process is similar to the Windows workflow:
render -r file -s 20 -e 120 -cam renderCam1 myScene.mb
render -r file -s 121 -e 150 -cam renderCam2 myScene.mb
render -r file -s 1 -e 120 -cam renderCam1 myScene_part2.mb
The scenes will render in the order in which they are listed in the batch file.
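A hedged sketch of the Mac side follows. The script name renderJobs.sh is an assumption, and Maya’s render command must be on your PATH (or the script must be run from the Maya Terminal) before it will actually render:

```shell
# Save the render commands as an executable shell script.
cat > renderJobs.sh <<'EOF'
#!/bin/sh
render -r file -s 20 -e 120 -cam renderCam1 myScene.mb
render -r file -s 121 -e 150 -cam renderCam2 myScene.mb
render -r file -s 1 -e 120 -cam renderCam1 myScene_part2.mb
EOF
chmod +x renderJobs.sh   # mark the script as executable
# ./renderJobs.sh        # uncomment to start the three renders in order
```

The chmod step is what distinguishes the Mac workflow from the Windows .bat file, which is executable as soon as it is saved.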
The quality of your render is determined by a number of related settings, some of which appear in the Render Settings window and some of which appear in the Attribute Editor of nodes within the scene. Tessellation, antialiasing, sampling, and filtering all play a part in how good the final render looks. You will always have to strike a balance between render quality and render time. As you raise the level of quality, you should test your renders and make a note of how long they take. Five minutes to render a single frame may not seem like much until you’re dealing with a multilayered animation that is several thousand frames long. Remember that you will almost always have to render a sequence more than once as changes are requested by the director or client (even when you are sure it is the absolute final render!).
In this section, you’ll learn how to use the settings on the Quality tab as well as other settings to improve the look of the final render.
At render time, all the geometry in the scene, regardless of whether it is NURBS, polygons, or subdivision surfaces, is converted to polygon triangles by the renderer. Tessellation refers to the number and placement of the triangles on the surface when the scene is rendered. Objects that have a low tessellation will look blocky when compared to those with a high tessellation. However, low-tessellation objects take less time to render than high-tessellation objects (see Figure 12-50). Tessellation settings can be found in the shape nodes of surfaces. In Chapter 3, “Modeling I,” the settings for NURBS surface tessellation are discussed. The easiest way to set tessellation for NURBS surfaces is to use the Tessellation controls in the shape node of the surface. Additionally, you can set tessellation for multiple surfaces at the same time by opening the Attribute Spreadsheet (Window ⇒ General Editors ⇒ Attribute Spread Sheet) to the Tessellation tab.
You can also create an approximation node that can set the tessellation for various types of surfaces. To create an approximation node, select the surface and choose Window ⇒ Rendering Editors ⇒ mental ray ⇒ Approximation Editor.
The editor allows you to create approximation nodes for NURBS surfaces, displacements (when using a texture for geometry displacement), and subdivision surfaces.
To create a node, click the Create button. To assign the node to a surface, select the surface, and select the node from the drop-down menu in the Approximation Editor; then click the Assign button. The Unassign button allows you to break the connection between the node and the surface. The Edit button allows you to edit the node’s settings in the Attribute Editor, and the Delete button removes the node from the scene (see Figure 12-51).
You can assign a subdivision surface approximation node to a polygon object so that the polygons are rendered as subdivision surfaces, giving them a smooth appearance similar to a smooth mesh or subdivision surface. In Figure 12-52, a polygon cube has been duplicated twice. The cube on the far left has a subdivision approximation node assigned to it. The center cube is a smooth mesh. (The cube is converted to a smooth mesh by pressing the 3 key. Smooth mesh polygon surfaces are covered in Chapter 4, “Modeling II.”) The cube on the far right has been converted to a subdivision surface (Modify ⇒ Convert ⇒ Polygons To Subdiv). When the scene is rendered using mental ray, the three cubes are almost identical. This demonstrates the various options available for rendering smooth polygon surfaces.
When editing the settings for the subdivision approximation node, the Parametric Method option is the simplest to use. You can use the N Subdivisions setting to set the smoothness of the render. Each time you increase the number of subdivisions, the number of polygons is multiplied by 4, so a setting of 3 means that each polygon face on the original object is divided into 64 (4 × 4 × 4) faces.
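The quadrupling is easy to verify with a quick arithmetic sketch (a bash loop that simply prints 4^N for the first few subdivision levels):

```shell
# Each increase in N Subdivisions multiplies the face count by 4,
# so one original face becomes 4^N faces at subdivision level N.
for n in 1 2 3; do
  echo "N Subdivisions = $n -> $((4 ** n)) faces per original face"
done
# level 3 yields 64 faces per original face
```

This is why high N Subdivisions values get expensive quickly: the triangle count, and with it the render time, grows geometrically.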
Unified Sampling offers a simplified approach to your primary sampling settings. Instead of setting individual antialiasing and sampling values, Unified Sampling uses a single Quality slider. The Quality slider employs enhanced sampling to avoid many common artifacts; for instance, Unified Sampling removes moiré patterns in your render.
Another advantage of Unified Sampling is the ability to use progressive rendering with the Interactive Photorealistic Render (IPR). When in IPR Progressive Mode, the rendered image starts with a low sample rate and is refined with more samples until it achieves the final result. The amount of sampling is derived from your Unified Sampling Quality setting.
Progressive rendering allows you to see an initial preview of your render. Although low quality, the initial rendering provides quick feedback for your render settings, light position, and other attributes that would otherwise take minutes to hours to render. Detecting problems in your render early on enables you to stop the render before wasting time on calculating expensive antialiasing or other quality settings that may not have any bearing on your current refinements. You can further refine your IPR render time with the following options:
Filtering occurs after sampling as the image is translated in the frame buffer. You can apply a number of filters, which are found in the menu in the Multi-Pixel Filtering section of the Render Settings window’s Quality tab. Some filters blur the image, whereas others sharpen the image. The Filter Size fields determine the height and width of the filter as it expands from the center of the sample across neighboring pixels. A setting of 1×1 covers a single pixel, and a setting of 2×2 covers four pixels. Most of the time, the default setting for each filter type is the best one to use.