26

Simulation Integration

As we’ve seen in the first half of this book, and again in Chapter Twenty-Four, the assembly of story-driven RT3D simulations is highly complex. Once learning objectives have been established and story and gameplay have been conceptualized, the work has only begun.

Terrains need designing. Sets and props need creating, and characters need building and animating. Levels need constructing, and all story and interactive events need placing and triggering. Cameras, lighting, and shadowing need to be put in place. Resource, inventory, and player history data need managing.

As a consequence, significant programming and procedural language skills are needed to move from the asset creation stage (script, art, etc.) to a fully working level of an RT3D environment. Although commercial game development suites and middleware provide tools that attempt to integrate some of this workflow, much of the level building comes down to “hand coding,” which necessarily slows development and implementation.

However, work has begun to create a more integrated design environment meant to be used by nonprogrammers. One such project, Narratoria, began during the development of Leaders, and is now a licensable technology from USC’s Office of Technology and Licensing (www.usc.edu/otl), where it is already being used to author follow-up simulation story worlds.

One of Narratoria’s inventors, USC senior research programmer Martin van Velsen, notes that other efforts to streamline authoring in 3D environments exist, “removing the programmer out of the production pipeline” but “ironically, [these] efforts take the burden of code development and place it instead on the artist”—clearly, an imperfect solution. In contrast, “Narratoria replaces the limited tools currently available with authoring tools that allow fine-grained control over virtual worlds. . . . Instant feedback allows real-time editing of what is effectively the finished product. In essence, we’ve combined the editing and shooting of a film, where there is no longer any difference between the raw materials and the final product.”

Van Velsen argues that most traditional game and simulation authoring uses a “bottom-up” paradigm, laboriously building up animations, processes, behaviors, interactions, levels, and finally the project itself. He sees the traditional Hollywood production model, by contrast, as pursuing a “top-down” paradigm, and has designed Narratoria to do the same, treating the different authoring activities (scripting, animating, level building, etc.) as different language sets that can be tied together and translated between.

The approach is one of “decomposition.” If learning and story objectives and outcomes can be decomposed (i.e., deconstructed), along with all the elements, assets, and types of interactivity used to deliver these (levels, sets, props, characters, behaviors, camera movements, timelines, etc.), then it should be possible for this granularized data to be reassembled and built up as needed.

Narratoria accomplishes this through the processing of XML metadata attached to the assets it works with. These assets will typically call or trigger prescripted activities such as lighting, camera movements, resource and inventory evaluations, collision detection, and natural language processing. Scene and sequence (i.e., story and gameplay) scripts will designate the ordering of events, entrances and exits of characters, and so on.
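
To make this concrete, here is a minimal sketch, in Python, of how XML metadata might designate an ordering of events that call prescripted activities. The element names, attributes, and activities are invented for illustration; they are not Narratoria’s actual schema or API.

```python
# A minimal sketch of metadata-driven event triggering. The XML schema
# and the activity names are hypothetical, invented for illustration.
import xml.etree.ElementTree as ET

SCENE_XML = """
<scene id="warehouse_03" location="warehouse" set="loading_dock">
  <character name="Sgt_Reyes" entrance="0:00" exit="1:42"/>
  <event time="0:15" type="camera" action="dolly_in"/>
  <event time="0:30" type="dialogue" speaker="Sgt_Reyes" emotion="urgent"/>
</scene>
"""

# Prescripted activities that the metadata can call or trigger.
ACTIVITIES = {
    "camera": lambda e: print(f"camera: {e.get('action')} at {e.get('time')}"),
    "dialogue": lambda e: print(f"line by {e.get('speaker')} ({e.get('emotion')})"),
}

def run_scene(xml_text):
    scene = ET.fromstring(xml_text)
    print(f"loading set '{scene.get('set')}' in location '{scene.get('location')}'")
    # The scene script designates the ordering of events, entrances, and exits.
    for event in scene.iter("event"):
        ACTIVITIES[event.get("type")](event)

run_scene(SCENE_XML)
```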

Figure 26.1   A conceptualization of how Narratoria translates screenplay metadata to an event timeline. Image reproduced with permission of Martin van Velsen.

The actual authoring of these events takes place in Narratoria’s menu-driven, drag-and-drop environments (actually, a set of plug-in modules that handle individual tasks like asset management, character animation, camera controls, scriptwriting, natural language processing, etc.), so that artists, content designers, and even training leaders can directly and collaboratively build some or all of a level.

Once usable terrains, characters, and objects have been placed in the Narratoria system, any collaborator can immediately see how a level will look and feel by requesting a visualization using the chosen game engine for the project. (Currently, Narratoria works with the Unreal, Gamebryo, and TVML game engines, with other engines expected to be added.)

Let’s take a look at an example. The simulation authoring might begin with a very detailed screenplay containing extensive XML metadata to represent scenes and interactivity (Figure 26.1). The metadata would obviously include the scene’s characters, props, timeline, location, and set, and could suggest basic character blocking, available navigation, the mood of characters (perhaps a variable dependent on previous interactions), emotional flags on dialogue, and when and what type of interactivity will be available (perhaps a user avatar can talk to a nonplayer character to seek out information, or perhaps a user will need to locate a piece of equipment in a confined space, or perhaps an NPC requires a decision from the user avatar).
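
As a rough sketch of what a fragment of such screenplay metadata might look like (the tags, attributes, and the mood-variable convention below are hypothetical, invented purely for illustration):

```python
# A hypothetical screenplay-metadata fragment. The tag names, attributes,
# and the ${history:...} convention are invented for illustration.
SEQUENCE_XML = """
<sequence id="depot_interview" location="supply_depot" set="office">
  <blocking character="NPC_Clerk" mark="behind_counter"/>
  <character name="NPC_Clerk" mood="${history:clerk_mood}"/>
  <line speaker="NPC_Clerk" emotion="wary">Can I help you?</line>
  <interaction type="dialogue" target="NPC_Clerk" goal="seek_information"/>
  <interaction type="search" target="field_radio" space="storage_room"/>
</sequence>
"""

# A mood variable might be resolved from the user's interaction history.
player_history = {"clerk_mood": "friendly"}  # e.g., the user helped the clerk earlier

resolved = SEQUENCE_XML.replace("${history:clerk_mood}", player_history["clerk_mood"])
print(resolved)
```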

The script sequence itself could be directly authored in Narratoria (using the plug-in module designed for reading and handling scripts), or authored in another tool (e.g., Final Draft) and then imported into the proper plug-in module.

The sequence’s XML data could then call up previously input camera and lighting routines that will read location, character blocking, and character mood, and set up the right cameras and lights in the appropriate terrain or set, all at the correct placement within a level.
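
A minimal sketch of how such a routine might select lights and cameras from the metadata follows; the preset names, values, and fallback rule are illustrative assumptions, not Narratoria’s actual behavior:

```python
# A sketch of a prescripted camera/lighting routine keyed on location and
# mood. The preset names, values, and rules are illustrative assumptions.
LIGHTING_PRESETS = {
    ("warehouse", "tense"): {"key_light": 0.4, "fill": 0.1, "color_temp": 4200},
    ("warehouse", "calm"):  {"key_light": 0.8, "fill": 0.5, "color_temp": 5600},
}

CAMERA_PRESETS = {
    "two_person_blocking": ["over_shoulder_A", "over_shoulder_B", "wide_master"],
    "solo_blocking":       ["medium_close", "wide_master"],
}

def rig_scene(location, mood, blocking):
    """Select lights and cameras for placement at the right spot in a level."""
    return {
        "lights": LIGHTING_PRESETS.get((location, mood),
                                       LIGHTING_PRESETS[("warehouse", "calm")]),
        "cameras": CAMERA_PRESETS[blocking],
    }

print(rig_scene("warehouse", "tense", "two_person_blocking"))
```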

Character bibles (discussed in Chapter Twelve), which define very specific character behaviors (e.g., this character, when depressed, shuffles his feet listlessly), will then drive the AI so that character animation within the scene becomes fully realized (the character won’t just move from point A to point B, but will shuffle between the two points, with his head held down).
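
A sketch of how bible-defined behaviors might drive animation selection could look like this (the character, moods, and movement parameters are invented for illustration):

```python
# A sketch of bible-defined behaviors driving animation selection.
# The behavior names and parameters are illustrative assumptions.
CHARACTER_BIBLE = {
    "Sgt_Reyes": {
        "depressed": {"gait": "shuffle", "head": "down", "speed": 0.6},
        "confident": {"gait": "stride",  "head": "up",   "speed": 1.0},
    },
}

def move_character(name, mood, start, goal):
    """Walk a character between two marks in its bible-defined manner."""
    style = CHARACTER_BIBLE[name][mood]
    return (f"{name} moves from {start} to {goal}: "
            f"{style['gait']} gait, head {style['head']}, speed x{style['speed']}")

print(move_character("Sgt_Reyes", "depressed", "point_A", "point_B"))
```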

Integrating with other sequences and keeping track of all the variables, Narratoria will determine whether a piece of equipment is currently available (e.g., character A took away equipment X earlier in the level; therefore, character B will be unable to find equipment X, and so be unable to complete the task), or whether NPCs will be forthcoming in offering information (e.g., the NPC has previously been rewarded by the user avatar, so will quickly offer necessary information if asked the right question by the user).
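
A simple sketch of this kind of state tracking, with invented state keys and rules standing in for Narratoria’s actual internal model:

```python
# A sketch of cross-sequence state tracking. The state keys and rules
# are illustrative assumptions.
world_state = {
    "inventory": {"equipment_X": "character_A"},   # A took X earlier in the level
    "npc_rapport": {"NPC_Clerk": 2},               # rewarded twice by the user avatar
}

def can_find(item, seeker):
    """An item is findable only if no other character currently holds it."""
    holder = world_state["inventory"].get(item)
    return holder is None or holder == seeker

def npc_will_share(npc, threshold=1):
    """NPCs offer information readily once rapport passes a threshold."""
    return world_state["npc_rapport"].get(npc, 0) >= threshold

print(can_find("equipment_X", "character_B"))  # False: character_A holds it
print(npc_will_share("NPC_Clerk"))             # True: rapport is high enough
```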

If all of the sequence’s XML data has been detailed enough, Narratoria should be able to build, shoot, and edit a first cut of the sequence automatically, visualizing it within the game engine itself. The author can then fine-tune this portion of the level (e.g., adding more background characters to a scene, or adding another variable that will improve on the desired learning objective), either within the confines of Narratoria or working more directly with the game engine and its editing tools.

Narratoria can work with multiple instances of the game engine simultaneously, so that it becomes possible to test camera controls and moves within one visualization while testing the placement of props, character movement related to them, and collision detection issues in another. All of this can happen while artists and writers work with interfaces and language familiar to them, rather than requiring them to master a particular editor for a particular game engine (which they might never use again).

Work on a similar authoring tool has been undertaken by the Liquid Narrative Group at North Carolina State. Although their tool isn’t yet licensable (at the time of this writing), it also attempts to integrate and automate the creation and production workflow in building a 3D simulation story world.

Part of this automation is the development and refinement of cinematic camera control intelligence, resulting in a discrete system that can map out all the necessary camera angles, moves, and selections for an interactive sequence. Arnav Jhala, a doctoral candidate at North Carolina State, worked on the Leaders project and continues his pioneering work in this area for the Liquid Narrative Group.

As Jhala and co-author Michael Young write in a recent paper: “In narrative-oriented virtual worlds, the camera is a communicative tool that conveys not just the occurrence of events, but also affective parameters like the mood of the scene, relationships that entities within the world have with other entities and the pace/tempo of the progression of the underlying narrative.” If rules for composition and transition of shots can be defined and granularized, it should be possible to automate camerawork based on the timeline, events, and emotional content of a scene. As Jhala and Young put it:

Information about the story is used to generate a planning problem for the discourse planner; the goals of this problem are communicative, that is, they involve the user coming to know the underlying story events and details and are achieved by communicative actions (in our case, actions performed by the camera). A library of . . . cinematic schemas is utilized by the discourse planner to generate sequences of camera directives for specific types of action sequences (like conversations and chase sequences). (From A Discourse Planning Approach to Cinematic Camera Control for Narratives in Virtual Environments by Jhala and Young; see bibliography for full citation).
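
In that spirit, a toy sketch of a schema library that maps action-sequence types to camera directives might look like this (the schemas and directive names are illustrative assumptions, not the Liquid Narrative Group’s implementation):

```python
# A toy schema library mapping action-sequence types to camera directives,
# in the spirit of Jhala and Young's description. The schemas and
# directive names are illustrative assumptions.
CINEMATIC_SCHEMAS = {
    "conversation": ["establishing_wide", "over_shoulder_A",
                     "over_shoulder_B", "reaction_close_up"],
    "chase": ["tracking_side", "handheld_pov", "crane_overhead"],
}

def plan_camera(story_events):
    """Expand story events into a flat sequence of camera directives."""
    directives = []
    for event in story_events:
        directives.extend(CINEMATIC_SCHEMAS[event["type"]])
    return directives

print(plan_camera([{"type": "conversation"}, {"type": "chase"}]))
```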

Having generated these camera directives, the system has one other task. We can easily imagine the system defining a camera tracking move that ignores the geometry and physics of the environment (i.e., one in which the camera would crash into a wall while trying to capture a specific move). A “geometric constraint solver” will need to evaluate the scene’s physical constraints and determine necessary shot substitutions before the game engine attempts to render the scene.
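
A toy sketch of such a check, using invented geometry and shot names: sample the proposed camera path and, if any point would fall inside a wall, substitute a safer shot.

```python
# A sketch of a geometric constraint check on a camera move. The room
# geometry and shot names are illustrative assumptions.
def path_clear(start, end, walls, samples=50):
    """Sample a straight-line path; reject it if any point lies inside a wall box."""
    for i in range(samples + 1):
        t = i / samples
        x = start[0] + t * (end[0] - start[0])
        y = start[1] + t * (end[1] - start[1])
        for (x0, y0, x1, y1) in walls:           # axis-aligned wall boxes
            if x0 <= x <= x1 and y0 <= y <= y1:
                return False
    return True

walls = [(4.0, 0.0, 5.0, 10.0)]                  # a wall across the room
dolly = ((0.0, 5.0), (9.0, 5.0))                 # a tracking move straight through it

# Substitute a shot when the tracking move would crash into geometry.
shot = "dolly_track" if path_clear(*dolly, walls) else "static_wide"
print(shot)  # static_wide: the dolly would pass through the wall
```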

Not surprisingly, while a full implementation of this kind of automated camera system won’t eliminate the need for a director (as discussed in Chapter Twenty-Four), it could eliminate days or even weeks of laborious level design, concentrating manpower on the thornier issues of how a sequence plays and its relationship to learning objectives and user experience.

SUMMARY

The difficulties of building RT3D simulations have brought forth a new generation of software suites that promise to make authoring easier for nonprogrammers. USC is now licensing Narratoria, the result of its venture into this arena, and anyone embarking on an RT3D simulation should give this suite consideration. North Carolina State’s Liquid Narrative Group is working on a similar suite, and an eye should be kept on their research as well. By integrating the disparate tasks of simulation building, more focus can ultimately be given to the pedagogical and story content, as well as to the evaluation of user comprehension and progress.
