11
Video Editing Techniques

Editing for Multicamera and Single-Camera Productions

Editing is the process of selecting and arranging shots or sequences of shots into an appropriate order. Editing can be accomplished during the multicamera television production process, or it can be carried out in postproduction, after the program material has been captured.

In multicamera television production this process of shot selection takes place as the program is being produced. The signals from several television cameras and other video sources are simultaneously fed into the video switcher, and the pictures from each video source are displayed on the monitors in the control room. The director looks at the monitors and calls for the shots desired, and the technical director punches up each shot on the switcher as the director calls for it, instantly creating an edited sequence of shots that can be recorded or transmitted live to the audience. Shot selection in live news broadcasts, sports telecasts, and interview and discussion programs—all typical multicamera productions—is made in this manner by a director and a technical director working together with the camera crew to capture the appropriate sequence of shots.

In single-camera production, programs are recorded one shot at a time. Single-camera productions can be shot in the studio or on location. Many dramatic programs are shot single-camera style in controlled studio environments. ENG and EFP programs, by contrast, are typically shot in a variety of remote locations. The recorded footage may include on-location interviews with story subjects, stand-ups by the news reporter or program host, and footage of activities in the remote environment. For both dramatic and nondramatic footage that is shot in this manner, a video editor works in conjunction with the program’s director and/or producer to select the most appropriate shots and put them together into an effective program sequence.

Multicamera Production: Shooting and Editing

In Chapter 8 we discussed multicamera production from the perspective of the director. Particular attention was paid to issues of visualization, composition, and continuity. All of these elements are important in the live editing process as well.

Multicamera television production was developed and has survived as a viable production technique for over 60 years because it is an extremely efficient mode of production. In particular, it is efficient because it largely eliminates the time-consuming and costly postproduction editing stage that characterized the feature film industry, which preceded the development of television. Even today, feature films are shot predominantly in the single-camera mode, and they require a great deal of time and budgetary support to complete the postproduction phases.

Multicamera television production, by contrast, accomplishes the editing as the program is being produced. When the shooting is done, so is the editing, unless relatively minor adjustments to the program need to be made after the fact. Of course, if the program was broadcast live, then no further editing is possible.

Producers working on single-camera productions can learn a lot about editing—and about shooting to edit—by watching a multicamera production. Medium- and small-size studio productions typically employ three cameras. More complex productions may call for more cameras, but for our purposes the three-camera method provides a good illustration of how editing needs inform shooting techniques.

In single-camera film production the traditional shooting method is to record a master shot, a long shot that records all the essential action of a scene. After the scene is completed, the action is repeated a number of times from various camera angles. This provides the editor with a long shot of the scene as well as medium shots and close-ups of various elements of the scene, which can then be edited into the final program.

Multicamera video production recognizes the need to have a variety of shots of the scene taken from different angles but accomplishes this in real time, without the need for repeated takes. Consider the example of a television newscast with two newscasters sitting at a news desk. In a typical three-camera setup, one camera will be responsible for a wide shot of the scene, and the two other cameras will shoot a medium close-up (MCU) of each of the newscasters. Looking at the control room monitors, the director will most likely see an MCU of newscaster 1 on camera 1, a long shot of the two newscasters at the news desk on camera 2, and an MCU of newscaster 2 on camera 3. The director can call for each shot as needed, putting together a sequence of shots that establishes the set and the newscasters at the outset of the program and then cuts to appropriate MCUs as each news story is read. If the newscast features a weather reporter in front of a chroma key background, one of the three cameras can be swung around to shoot the action in another part of the studio.

The situation is similar for demonstration and interview programs. One camera can shoot an establishing shot of the scene; the other two cameras can shoot MCUs of the individuals who are involved in the interview or demonstration as well as appropriate over-the-shoulder shots and reaction shots. This can be done without repeating the action. (See Figure 11.1.)

Figure 11.1

Multicamera Editing Problems

Effective multicamera switching depends on correct placement of the cameras, making the right camera choice at the right time, and paying attention to the composition of individual shots to ensure that they will cut together well.

CAMERA PLACEMENT TO AVOID ACTION REVERSAL Correct placement of the cameras depends on the correct application of the 180-degree rule that we discussed in Chapter 8. Establish a principal vector line, as illustrated in Figure 8.5, and keep the cameras to one side of the line, within the 180-degree semicircle that circumscribes the action area. If the 180-degree rule is violated, the position of the character or characters in the frame will reverse from one shot to the next. This change in screen position is disruptive to the viewer and looks like what it is: an editing mistake. (See Figure 11.2.)

AVOID MATCHED CUTS A matched cut is a cut from one shot to another that is too similar in terms of angle of view and camera position. When you cut between two shots that are too closely matched in angle of view, the subject appears to jump from one position in the frame to a different position in the subsequent shot. (See Figure 11.3.) If the transition is a dissolve instead of a cut, it is equally disruptive: as the closely matched images overlap, they give the impression that the shots are momentarily out of focus.

Figure 11.3
Matched cut: subject jumps in frame when cutting from camera 2 to camera 3.

The general rule of thumb in cutting from one camera to another in the studio is to make sure that each successive shot presents new information to the viewer and does it in a way that is visually pleasing. A cut from an MCU to a CU of a person or object is always more effective than a cut from an MCU to another MCU of the same subject.

AVOID CUTTING BETWEEN MISMATCHED SOURCES In multicamera production it is imperative that all video sources have similar color and light values. Each camera needs to be white balanced correctly, and the exposure needs to be set so that each camera shot matches the others. Make sure that the video engineer has white balanced and shaded each of the cameras before the program begins so that when you cut from one camera to another, color and light values are consistent from one camera shot to the next.

Postproduction Editing

Video editing in postproduction is an essential part of the production process for programs that have been shot single-camera style in the studio or the field. Production personnel concentrate on recording the information they need for the program without worrying about the arrangement of the shots until after they have finished shooting the program. Material may be gathered at a number of locations over an extended period of time. The challenge of the editing process is to put the material together in a seamless fashion, creating the illusion of continuity of action, lighting, sound, and performance, as we discussed in Chapter 8.

Multicamera programs that have been recorded live to a media card or a hard disk drive may be edited in postproduction as well. Most often this is done to fix one kind of problem or another with the program material. The most typical problems that need to be fixed are time problems, performance problems, and technical problems.

FIXING TIME PROBLEMS IN POSTPRODUCTION Programs that are produced for broadcast distribution are produced to very specific time limits. Each of the individual program segments must run a specified length of time to allow for commercials and station breaks, and the overall length of the program must also be precisely timed. During the production of a multicamera program in the studio, slight errors in timing the individual segments may be made. In postproduction these errors can be corrected by adding or deleting program material from the offending segments.

FIXING PERFORMANCE PROBLEMS IN POSTPRODUCTION Multicamera programs such as situation comedies rely on precise timing, appropriate reaction shots, and convincing acting to achieve their comedic effect. In the live performance environment there is little room for error. If mistakes are made, producers have several options: they can stage retakes after the principal recording is done; they can review the recorded dress rehearsal performance to see whether the material was conveyed more effectively there; or, if each camera’s isolated (ISO) feed has been recorded in addition to the switched output of the video switcher, they can insert ISO shots into the master recording to replace less effective shots. All of these techniques are used to fix performance problems in multicamera shows recorded live.

FIXING TECHNICAL PROBLEMS IN POSTPRODUCTION Technical problems often creep into even the best-planned multicamera production. A shot may be taken on line before the camera is ready, resulting in an image that is out of focus or that contains an unwanted camera movement. Despite the best efforts of the video engineer to achieve consistent light and color values from each camera, problems with image quality may be apparent in one or another shot. Or there may be problems with the audio. Perhaps an audio cue was missed and music was not introduced at the proper time, or perhaps the microphone level was set too low or too high on a particular shot. Postproduction editing allows the production team one more opportunity to fix technical problems that may have crept into the original production.

Figure 11.4
Editing continuum: continuity editing at one end, dynamic editing at the other.

Continuity and Dynamic Editing

There are two general techniques, or styles, of editing. One is continuity editing; the other is dynamic or complexity editing. In truth, editing is seldom done solely with one technique or the other. In most cases it is a combination of the two. So it might be useful to think of these two editing styles as points at either end of a continuum. (See Figure 11.4.) Some programs are edited with a style that draws more heavily on the principles of continuity editing; others lean more heavily in the direction of dynamic editing. In any event, these two terms provide a useful place to begin talking about different types of editing.

Continuity Editing

The goal of continuity editing is to move the action along smoothly without any discontinuous jumps in time or place. Traditional television programs such as soap operas, situation comedies, and game shows regularly apply the principles of continuity editing. Many narrative film and television programs (e.g., dramas) do so as well, although they may be more likely to bend the rules a bit and move down the line in the direction of dynamic editing.

Four general principles are central to the continuity editing style.

ESTABLISH AND MAINTAIN SCREEN POSITION The use of the establishing shot to identify the location of the people in the shot in relation to their environment is an important part of continuity editing. The establishing shot is typically a long shot or extreme long shot that orients the viewer to the scene and its constituent parts (actors, set, etc.). (See Figure 11.5.) The establishing shot is always located near the beginning of a shot sequence, particularly whenever there is a scene change. It may be the first shot in the sequence, or it may come after several closer shots.

Once the scene has been established, it is important to present a closer view of the details of the scene. The cut-in is a cut to a close-up of some detail of the scene. (See Figure 11.5.) Cut-ins are important because the long shot, while useful in orienting the viewer to the scene, often does not let the viewer see important details. Remember, television is a close-up medium.

Figure 11.5

Once the viewer has been shown essential scene elements in close-up detail, it may be necessary to cut out again to a wider shot, particularly if action is about to take place. In cutting in to a tighter shot from a wide shot, objects and people should maintain the same relative position in the frame.

USE EYELINES TO ESTABLISH THE DIRECTION OF VIEW AND POSITION OF THE TARGET OBJECT An eyeline is simply a line created by your eyes when you look at a target object. If you look up at a lighting instrument on the studio lighting grid, the lighting instrument is the target object, and the eyeline is the imaginary line between your eyes and the lighting instrument. (See Figure 11.6.) Look down at your feet, and a similar eyeline is created between your eyes and their target.

Eyelines are formed between people when they talk and can be used to create continuity when the conversation is edited. The editing process is facilitated if the original material has been shot by using complementary angles. This is also known as reverse angle shooting. (See Figure 11.1.)

If you are shooting in the field and not in the studio, you don’t have the benefit of viewing all three of the shots on the overhead camera monitors. In the field you have to shoot each shot one at a time, and you have to be careful to apply the 180-degree rule and make sure that your shots are carefully and similarly composed. Figure 8.4 in Chapter 8 illustrates this technique. The first shot in the sequence is the establishing shot, which establishes the relationship of the characters to each other and to the setting. The next two shots are shot one at a time at complementary angles, framing each subject in an MCU. When the shots are cut together the shot sequence reveals the two people talking to each other.

MAINTAIN CONTINUITY IN THE DIRECTION OF ACTION An extremely important component of continuity editing is to maintain directional continuity. Characters or objects that are moving in one shot should continue to move in the same general direction in a subsequent shot. Directional continuity will be apparent in the raw footage if the videographer has paid attention to the 180-degree rule and the principal action axis when shooting the original field material. Mismatches in directional continuity are most apparent when strong horizontal movement in one direction is immediately followed by another movement in the opposite direction, as in the example in Figure 11.7. The video editor can correct this apparent mistake by inserting a shot with a neutral motion vector—one in which the action moves directly toward or away from the camera—in between the two shots with mismatched vectors. (See Figure 11.7.)

USE SHOT CONTENT TO MOTIVATE CUTS In continuity editing, each edit or cut is usually motivated. That is, there should be a reason for making an edit. The two principal motivators of cuts are action and dialog.

Figure 11.6

One of the cardinal rules of editing is to cut on the action. This technique is facilitated if the action has been shot from several different angles and the action has been repeated. For example, to piece together a sequence covering the beginning of a swim race using two cameras, use one camera to shoot an establishing shot of the swimmers lined up on the pool deck. Use the second camera to shoot the swimmers from a tighter angle. When the swimmers dive into the water at the sound of the starter’s gun, you will get two shots from different angles of view but with matching action (the dive into the pool). That action provides the logical point to make the edit from the wide shot to the tighter shot. (See Figure 11.8.)

In programs that are based on staged action, it is important to shoot the action from several different angles, repeating the action in each shot. For example, to show someone getting into a car, shoot a long shot of the subject approaching the car and opening the door. Then move the camera inside the car and reshoot the action as the subject approaches the door, opens it, and enters the car. The action of the door opening is repeated in both shots and provides a logical point at which to make the cut.

Much editing is motivated by what is said. In an interview a question demands an answer, and if you show the interviewer asking a question, you most likely will want to cut to the subject as he or she answers it. In a dramatic scene you may cut to another character to see and hear that person deliver a line of dialog. Or if one character says something particularly provocative, you may want to cut to a reaction shot of the person to whom the line was delivered. In each case the cut has been motivated by what was said.

OTHER CONTINUITY ISSUES The editor should consider several other continuity issues as well. These were discussed in greater detail in Chapter 8, so we will mention them only briefly here.

Lighting and Color Continuity Make sure light and color values are consistent from shot to shot, particularly within the same scene. Video levels and white balance can be adjusted shot by shot in postproduction to achieve a consistent look.

Sound Continuity Make sure microphone levels are consistent from shot to shot, and use equalization to balance sound quality. Background music or ambient sound from a location can help to unify shots by providing a consistent sound bed.

Appearance of the Subjects Pay attention to the physical appearance of the subject. You don’t want clothing or hairstyle to change radically from one shot to another unless there is a logical explanation for the change. Also pay attention to the physical placement of the subject in the frame, facial expressions, and gestures. Edits should flow smoothly with no disrupting distractions caused by these important details.

Dynamic Editing

Dynamic editing differs from continuity editing in two important ways: it tends to be a bit more complex in structure, and it frequently utilizes visual material to create an impact rather than simply to convey literal meaning. Dynamic editing, then, is more affective than continuity editing. This is not to say that continuity editing must be listless or boring or that dynamic editing cannot be used to convey a literal message. The differences between the two are often differences of degree rather than of substance. Three of the most common dynamic editing techniques are editing to maximize impact, manipulating the timeline, and editing rhythm.

EDITING TO MAXIMIZE IMPACT Dynamic editing attempts to maximize a scene’s impact rather than simply to link individual shots into an understandable sequence. Shot selection in dynamic editing frequently includes shots that exaggerate or intensify the event rather than simply reproducing it. Extremely tight shots or shots from peculiar angles are frequently incorporated into dynamic shot sequences to intensify a scene’s impact.

MANIPULATING THE TIMELINE Dynamic editing is frequently discontinuous in time. That is, rather than concentrating on one action as it moves forward in time (a technique that is typical of continuity editing), dynamic editing often employs parallel cutting: cutting between two actions that are occurring at the same time in different locations or between events that happen at different times. The dynamic editor might intercut frames of past or future events to create the effect of a flashback or a flashforward.

EDITING RHYTHM Continuity editing is usually motivated by the rhythm of the event (either the action of the participants or the dialog of the characters); dynamic editing is more likely to depend on an external factor for its motivation and consequent rhythm. Two common techniques are editing to music and timed cuts.

Editing to music involves editing together a series of related or unrelated images to some rhythmic or melodic element in a piece of music. In the most clichéd type of editing to music, the editing matches a regular rhythmic beat and does not deviate from it. Editing that uses various musical components (the melody or a strong musical crescendo, for example) to motivate the edits is more energetic and interesting.

The process of editing to music is simplified with the use of a nonlinear editing system. Many systems can display the audio waveform of the music track in the audio tracks of the timeline. (See Figure 11.9.) Shots can then be aligned with visible points in the waveform, such as strong beats.
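
The same idea can be approximated programmatically. Below is a minimal sketch, assuming the open-source librosa audio library (not part of any NLE), that estimates the beat positions of a music track; the resulting times are the kind of points an editor might snap cuts to. The file name is hypothetical.

```python
import librosa

# Load the music track (file name is hypothetical).
y, sr = librosa.load("music_track.wav")

# Estimate tempo and beat locations, then convert beat frames to seconds.
tempo, beat_frames = librosa.beat.beat_track(y=y, sr=sr)
beat_times = librosa.frames_to_time(beat_frames, sr=sr)

print("Estimated tempo (BPM):", tempo)
for t in beat_times[:8]:
    print(f"Candidate edit point at {t:.2f} s")
```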

A timed cut is one in which shot length is determined by time rather than content. You can edit together a sequence of shots that are each two seconds in length, or you can use shot length like a music measure to compose a sequence with a rhythm based on the length of the shots.

Editing Voice-Driven Pieces

Many projects that are shot single-camera style and are edited together in postproduction are interview-based, voice-driven pieces. News stories, documentaries, and magazine features are all examples of this kind of program material. In these productions the story is told principally through the voice track, which is supported by the other production elements the editor has at his or her disposal, including B-roll video, natural sound, music, graphics, and appropriate transitions and video effects. (B-roll video is described later in this section.)

Editing the Voice Tracks

Once all of the shots have been logged, the editor can begin to piece together the story he or she is working on. A good strategy is to highlight critical sound bites in the typed transcriptions of video interviews and to begin to fashion a script out of them.

Narration may need to be used to link together these sound bites. In the final program narration may be delivered by an on-screen narrator, or the narrator may be off-camera. When the narrator is heard but not seen on screen, this is called a voice-over (VO).

Interviews can be edited in a similar fashion. The person who has been interviewed can be shown on screen, in which case we refer to this as sound on tape (SOT). Or the sound can be used without showing the person in the interview setting, by cutting away to appropriate B-roll. This also is an example of voice-over. (The term SOT continues to be used today even though videotape is seldom used as the recording medium!)

In any case, once the sound bites and narration that are going to be used in the project have been identified, they should all be imported into the nonlinear editing system. Using the script as a guide, drag each of the voice elements, in sequence, into an appropriate audio track. You do not need to be too concerned at this point about editing everything precisely and cleanly; just lay it out in the timeline so that you can get a sense of how the story flows.

It is also a good idea to think about keeping each sound bite relatively short. Novice editors frequently put very long sound bites and narration tracks into the timeline. In most situations it is better to break up long sound bites into shorter ones (e.g., three 10-second bites instead of one 30-second bite). This helps to pick up the pace of the program. The shorter, individual bites can be linked with narration if necessary, or you can cut to someone else’s sound bite and then come back to your first subject.

Once you are satisfied with the general layout of the voice track, you can go back into the timeline and trim each clip precisely. Make sure you don’t cut your subject off at the beginning or end of the clip. Also, do your best to make your subject sound good. If there are a lot of “ums,” “ahs,” and pauses in the interview, edit them out. Of course, each time you do this, you will create a jump cut in the video. If there are only a few jump cuts, and depending on the type of program you are editing, you may choose to insert a fast dissolve at the jump cut edit points. This smooths out the jump cut while giving the viewer a clear indication that the shot has been edited. However, if you have a lot of jump cuts, you most likely will want to cut away to B-roll video footage.

Identifying the Interviewee

You should also think about how you will identify the on-camera interviewee to your audience. Two common techniques are to introduce the interviewee with VO narration that identifies the subject or to use a lower-third keyed title with the subject’s name and affiliation.

In short pieces it is almost always preferable to use the keyed title instead of the VO introduction, because the VO takes up valuable story time and the key does not. If the subject’s name is keyed, a couple of things need to be kept in mind. First, the key needs to be on screen long enough for the audience to read it easily, and the shot it is keyed over therefore needs to be long enough to cover the key. In general, keyed name identifiers need to be on screen five to eight seconds to be read comfortably by the viewer, so the interview shot will need to be at least that long, if not a few seconds longer, to allow some room at the head of the shot, before the key is introduced, and at the end of the shot, after the key is removed. In addition, the shot needs to be composed so that the key does not interfere with the background video. It is not good practice to insert a key over a close-up of the interviewee, because doing so puts the lower-third title directly over the subject’s mouth. This is distracting at best. It is better to use a loosely framed medium close-up of the subject as the background for a lower-third key. This puts the keyed title over the subject’s chest rather than over his or her face. (See Figure 11.10.)
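
For readers who like to see the arithmetic, here is a small sketch of the minimum shot length implied by the guideline above. The 6-second read time sits inside the five-to-eight-second range given in the text; the head and tail padding values are illustrative assumptions.

```python
# Minimum shot length needed to cover a lower-third key.
KEY_ON_SCREEN = 6.0   # seconds the key stays up (text suggests 5-8 s)
HEAD_PAD = 1.5        # assumed room before the key is introduced
TAIL_PAD = 1.5        # assumed room after the key is removed

min_shot_length = HEAD_PAD + KEY_ON_SCREEN + TAIL_PAD
print(f"The interview shot must run at least {min_shot_length:.1f} s")  # 9.0 s
```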

B-Roll Video

The terms A-roll and B-roll were inherited from the days when news stories for television were produced by using 16-mm film. In the early days of television there was no method of electronically recording the video signal, so field production was done by using film. It was not until the mid-1970s that portable ENG systems were developed and film was made obsolete as the field production recording medium for television.

Until that time, however, news stories for local and national television stations were produced by using 16-mm film as the recording medium. Interviews were recorded onto 16-mm film with a magnetic stripe (or mag stripe) running along one edge of the film. The picture was recorded in the central part of the strip of film, and the audio was recorded in the mag stripe at the edge of the film. When the news story was run on the newscast, two special film-to-video projectors, called telecine machines, were used to convert the 24-frame-per-second film projection standard to the 30-frame-per-second video standard. The interview footage, called the A-roll, was run in one projector, and additional edited footage of the visuals for the story, called the B-roll, was run in the other projector. The two film projectors appeared as separate sources on the video switcher, and the TD switched from one projector to the other to edit the A-roll and B-roll footage together in real time, on the air, as the news story was being broadcast live.
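
The chapter does not name the conversion method, but the standard technique for mapping 24-frame film onto 30-frame (60-field) interlaced video is 2:3 pulldown: alternate film frames are held for two and then three video fields. A minimal sketch of the cadence, offered as background rather than as a claim about the specific equipment described:

```python
def pulldown_fields(film_frames):
    """Map film frames onto interlaced video fields using the
    repeating 2-3 cadence (2:3 pulldown)."""
    cadence = [2, 3]  # fields contributed by alternating film frames
    fields = []
    for i, frame in enumerate(film_frames):
        fields.extend([frame] * cadence[i % 2])
    return fields

fields = pulldown_fields(["A", "B", "C", "D"])
print(fields)           # ['A','A','B','B','B','C','C','D','D','D']
print(len(fields) / 2)  # 5.0 -- four film frames become five video frames
```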

Today, the term A-roll has largely disappeared from use. But we still use the term B-roll to describe visual, cutaway video footage that is used to cover voice-over or narration.

Many field producers working on news, documentary, and magazine format projects prefer to shoot their interview material first and then shoot B-roll footage. The reason for this is simple. The function of B-roll footage is to provide visual support for the story. By listening to what is said in the interview, you can get a good idea of the B-roll footage that is needed. If you are interviewing a teenage girl and she says, “Every afternoon when I come home from school, I like to paint my toenails,” the alert field producer will ensure that a sequence of shots is recorded showing her coming home from school and then painting her toenails.

As an editor, you will have only the B-roll material that the field production crew has provided for you. Once again, the importance of good shot logging becomes apparent. With a complete transcription of the interviews and a detailed log of all the field footage that has been shot, you can begin to make connections between the interview sound bites and the available B-roll footage.

When using B-roll footage, try to think in terms of editing together meaningful shot sequences, rather than simply cutting away to a single shot and then cutting back to the on-camera interview. Apply the principles of continuity editing to your B-roll sequences so that each sequence tells a little story that complements what is being said in the interview.

Natural Sound and Music

Natural sound and music are two elements under the control of the editor that have the potential to make a significant impact on the effectiveness of the story that is being told.

Natural sound is actual ambient sound that is recorded on location along with the pictures. Any B-roll footage that is shot should always be shot with ambient sound recorded as well. The editor can then choose to include or eliminate the natural sound during the editing process.

Music and sound effects can be edited into the program as well. Sound effects are often used when natural sound is missing. If no sound was recorded as the car door was slammed shut, an appropriate sound effect can be used to replace the missing natural sound. Keep in mind that the process of using natural sound in an edited sequence is, ironically, anything but natural. By this we mean that natural sound is often used very selectively, to add emphasis to certain shots or parts of sequences. Although some editors insist on using natural sound whenever they cut to a B-roll shot or sequence, others may diminish the use of natural sound in favor of a more dominant music track. Either decision involves making an aesthetic choice about which technique works best in the context of the story that is being told.

Music is one of the most dynamic, emotional elements that can be introduced into an edited sequence. Music is often used to set or change the mood of a scene, to foreshadow something that is about to happen, and to establish the time or place in which an event takes place. It is also extremely important in determining the pace of a scene.

Sound Sequencing and Sound Layering

Given all of these sources and given that the editor may have several voices to work with as well, the editor’s sound-editing job is rather complex. In addition to determining the sequence of the sound segments, the editor must also determine the kinds of transitions that will be used between them. In editing voice, the most common transition is a straight cut. When one audio segment ends, the next one begins, leaving a natural pause between segments. In other kinds of audio sequences, segues or crossfades may be used.

A segue (pronounced “seg-way”) is a transition from one sound to another in which the first sound fades out completely before the second sound fades in. There is a slight space between the two sounds but no overlap. In a crossfade, as the first sound fades out, the second sound fades in before the first one fades out completely. This results in a slight overlap of the two sounds.
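
At the sample level the difference between the two transitions is easy to picture. The following sketch, assuming NumPy and mono clips represented as floating-point sample arrays, is illustrative only; the gap and overlap durations are arbitrary choices.

```python
import numpy as np

def segue(a, b, sr, gap=0.25):
    """First sound ends, a short silence, then the second sound begins."""
    silence = np.zeros(int(gap * sr))
    return np.concatenate([a, silence, b])

def crossfade(a, b, sr, overlap=1.0):
    """Second sound fades in while the first is still fading out."""
    n = int(overlap * sr)
    fade_out = a[-n:] * np.linspace(1.0, 0.0, n)
    fade_in = b[:n] * np.linspace(0.0, 1.0, n)
    return np.concatenate([a[:-n], fade_out + fade_in, b[n:]])

sr = 48000  # 48 kHz, a common sample rate for video audio
tone1 = np.sin(2 * np.pi * 440 * np.arange(sr * 3) / sr)  # 3 s test tones
tone2 = np.sin(2 * np.pi * 330 * np.arange(sr * 3) / sr)
print(len(segue(tone1, tone2, sr)) / sr)      # 6.25 s: slight gap, no overlap
print(len(crossfade(tone1, tone2, sr)) / sr)  # 5.0 s: 1 s of overlap
```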

In addition to choosing the sounds to be included in the program and putting them into an appropriate sequence, the editor needs to determine how to layer or mix them together. Sound layering involves determining which sounds should be heard in the foreground, in the background, or in between. This layering effect is achieved by manipulating the volume of various sound elements during the editing process.

The relative strength of each of the sounds that are layered together is usually determined by its importance in the scene or sequence. The editor must correctly mix the sounds so that their relative volume matches their importance. A voice-over should not be overwhelmed by natural sound or music that is supposed to be in the background. However, in a highly dramatic scene the music may well come to the foreground, overwhelming the natural sound and any other sound elements in the scene.
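
A sketch of that mixing decision, again assuming NumPy, with gain values that are purely illustrative; in practice the balance is set by ear, scene by scene.

```python
import numpy as np

def layer(voice, music, voice_gain=1.0, music_gain=0.2):
    """Mix two equal-length mono tracks with relative gains."""
    n = min(len(voice), len(music))
    mix = voice[:n] * voice_gain + music[:n] * music_gain
    return np.clip(mix, -1.0, 1.0)  # guard against digital clipping

sr = 48000
voice = np.sin(2 * np.pi * 220 * np.arange(sr * 2) / sr)  # stand-ins for real tracks
music = np.sin(2 * np.pi * 440 * np.arange(sr * 2) / sr)

bed_mix = layer(voice, music)             # VO dominant, music as a quiet bed
dramatic = layer(voice, music, 0.4, 1.0)  # music brought to the foreground
```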

Graphics, Transitions, and Effects

In postproduction the video editor has access to a full palette of video transitions and effects. These were described in detail in Chapter 6.

In addition, the editor has the ability to create or import graphic elements that help to tell the story. These may be as simple as lower-third keyed titles or as complex as animated composited graphics. The rules of unity, clarity, and style that are described in detail in Chapter 12 are important for the editor to consider as decisions about the graphic design elements of the project are made.

“Don’t Worry, We’ll Fix It in Post!”

Producers who work in the field know that their project will not be finished until it passes through the postproduction stage. This has given rise to a phrase that probably has been uttered at least once during the production phase of most programs when something goes wrong: “Don’t worry, we’ll fix it in post!” Although not every problem can be fixed in postproduction, many can be fixed, particularly given the image editing tools that most nonlinear editing systems now contain. A few of the most common postproduction fixes are as follows.

FIXING COLOR AND BRIGHTNESS PROBLEMS In studio production, a video engineer is in charge of camera shading and ensures that each of the cameras is producing an image with similar color and brightness values. The nature of field production requires a program to be recorded shot by shot, in different locations with different light conditions, over an extended period of time, and often with different cameras and crews.

Professional nonlinear editing systems contain sophisticated controls that allow the editor to correctly adjust brightness and color. This can be a slow process because each shot may have to be individually color corrected. However, even severe white balance problems can often be fixed with the software that is available today.
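
One simple automatic approach, offered here only to illustrate the principle (the color tools in professional editing systems are far more sophisticated), is the "gray world" method: scale each color channel so the frame averages out to neutral gray. A NumPy sketch:

```python
import numpy as np

def gray_world_balance(frame):
    """frame: H x W x 3 float RGB array in [0, 1]."""
    channel_means = frame.reshape(-1, 3).mean(axis=0)
    gains = channel_means.mean() / channel_means  # push each channel mean toward gray
    return np.clip(frame * gains, 0.0, 1.0)

# A warm-cast test frame: the red channel running hot relative to blue.
frame = np.random.rand(480, 640, 3) * np.array([1.0, 0.8, 0.6])
balanced = gray_world_balance(frame)
print(balanced.reshape(-1, 3).mean(axis=0))  # channel means now nearly equal
```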

FIXING EYELINE PROBLEMS Often in postproduction the editor will have a problem matching eyelines or motion vectors. Perhaps a person is looking to the left side of the screen when the sequence calls for the person to look to the right. Assuming that no written material is visible in the shot, it is a rather simple procedure to use the digital video editing tools to flip the shot with the offending eyeline or motion vector. (See Figure 11.11.) Of course, if there is writing in the shot, the writing will be backward in the flipped shot. This is a reason that many field producers ask their subjects not to wear hats or shirts with written material on them.
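
At the pixel level, the flip is simply a horizontal mirror of every frame. A minimal NumPy sketch:

```python
import numpy as np

def flip_horizontal(frame):
    """Mirror an H x W x 3 frame left-to-right. Any on-screen text
    will read backward, as the text warns."""
    return frame[:, ::-1, :]

frame = np.zeros((480, 640, 3), dtype=np.uint8)
frame[:, :320] = 255              # subject on the left half of the frame
flipped = flip_horizontal(frame)  # subject now on the right half
```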

Figure 11.11

REFRAMING A SHOT Novice producers sometimes make framing errors when shooting field material. One of the most common problems is to frame an interview subject too loosely when a tighter shot would be more appropriate. Shots that are too wide can be tightened up by zooming in on them using the zoom tool in the nonlinear editing software program. Images can also be repositioned in the frame. For example, if the subject is framed too loosely and is centered in the frame, you could zoom in and move the image slightly to the left or right to achieve a different framing effect. (See Figure 11.12.) However, remember that zooming in on the image while editing can affect the quality of the image; if you zoom in too far, the image may begin to pixelate and lose sharpness.
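
A sketch of that digital punch-in, assuming NumPy and nearest-neighbor enlargement (chosen only to keep the example dependency-free; real editing systems use better resampling). The zoom factor and framing values are illustrative.

```python
import numpy as np

def punch_in(frame, zoom=1.5, center_x=0.5):
    """Crop around a horizontal center point and enlarge back to full size."""
    h, w = frame.shape[:2]
    ch, cw = int(h / zoom), int(w / zoom)  # crop dimensions
    x0 = int(np.clip(center_x * w - cw / 2, 0, w - cw))
    y0 = (h - ch) // 2
    crop = frame[y0:y0 + ch, x0:x0 + cw]
    ys = np.arange(h) * ch // h            # nearest-neighbor row indices
    xs = np.arange(w) * cw // w            # nearest-neighbor column indices
    return crop[ys][:, xs]

frame = np.random.randint(0, 256, (1080, 1920, 3), dtype=np.uint8)
tighter = punch_in(frame, zoom=1.5, center_x=0.4)  # reframe slightly left
# At zoom 1.5 only 720 x 1280 source pixels remain -- hence the loss of sharpness.
```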

CHANGING THE DIRECTION AND SPEED OF A PAN, ZOOM, OR TILT Most nonlinear editing programs allow the editor to control the direction and speed at which each clip is played back. A pan, zoom, or tilt that is too fast can be brought under control by changing the playback speed of the clip. A clip that looks bad played back at normal speed (100%) might look much better at 70% or 80% of normal speed. Remember, of course, that when you slow down the playback speed of the shot, it takes longer to play it back, so you have actually lengthened the shot. It may be necessary to trim it back to its original length to fit it back into the timeline.
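
The underlying arithmetic is simple: the new duration is the original duration divided by the playback speed. A quick sketch with illustrative numbers:

```python
# Duration arithmetic behind a speed change: slowing a clip
# lengthens it in inverse proportion.
original_length = 4.0  # seconds
for speed in (1.0, 0.8, 0.7):
    new_length = original_length / speed
    print(f"{speed:.0%} speed -> {new_length:.2f} s in the timeline")
# 100% -> 4.00 s, 80% -> 5.00 s, 70% -> 5.71 s: the slowed clip
# must be trimmed to fit back into its original slot.
```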

Sometimes the direction of a tilt, pan, or zoom needs to be changed. Imagine a shot of the exterior of an office building that begins with a close-up of the entrance and then slowly zooms out and tilts up to the top of the building. As the last shot in a sequence, this might work fine, but if your script calls for an opening shot that is going to be followed by an interior shot of the lobby of the building, you would probably be better off if the shot started at the top of the building and then slowly tilted down and zoomed in, ending on the close-up of the entrance.

Fixing this in postproduction is relatively simple. Once the clip has been imported into the nonlinear editing system, the editor can manipulate the direction of the clip. It can be played back from beginning to end or from the end to the beginning. Reversing the direction will produce the shot that is needed for the sequence.

Storytelling

It is important to note that producers of television news pieces, television magazine features, and documentaries almost universally refer to the video pieces they produce as stories. It is also important to note that while dramatic programs are almost always produced with a full script that accompanies the production from preproduction through postproduction, news stories, magazine features, and documentaries are almost never fully scripted. There may be a story outline script that identifies the main points to be made in the piece, along with a list of individuals to be interviewed. Or the structure of the story may be conveyed from the producer to the editor in a brief conversation, with little or no written guidance provided at all. In these situations the editor has a significant amount of responsibility and creative freedom to structure the field material into a compelling story that will capture and hold the viewer’s attention. In approaching such material, the editor can be guided by three of the main conventional elements of any story: structure, characters, and conflict.

Figure 11.12

Story Structure

In their simplest form, most stories have three main structural points: a beginning, a middle, and an end. This concept of story structure dates back to the Greek philosopher Aristotle, who wrote in the Poetics:

Now a whole is that which has a beginning, middle, and end. A beginning is that which is not itself necessarily after anything else, and which has naturally something else after it; an end is that which is naturally after something itself either as its necessary or usual consequent, and with nothing else after it; and a middle, that which is by nature after one thing and also has another after it.

THE LEAD OR HOOK For the video editor the challenge is often trying to figure out how to begin a story and how to end it. Most editors try to find an interesting introduction to the story. Print journalists frequently talk about writing a lead to the story, a compelling first sentence that succinctly tells what the story is about. Video editors look for a similar way into a story and frequently use the term hook to describe the interesting beginning of a story that catches the viewer’s attention and makes the viewer want to watch the story to find out what happens. The hook may be a statement or question delivered in VO by the narrator, or it may be an interesting sound bite from one of the interviewees who are featured in the story. In either case the hook provides information in a way that provokes the viewer’s interest in the piece.

All stories have a beginning, but the narrative does not necessarily have to begin at the beginning of the story. That is, you can begin the story in the middle of things (in medias res is the Latin phrase that is used to describe this technique) and then go back to an earlier point in time to bring the viewer up to date with what is happening.

THE NARRATIVE IN THE MIDDLE Once the main story idea has been introduced, the middle or body of the story provides the important details of the story. This may be presented principally through narration or through the use of interviewees who function as the “characters” in the story.

In either case the editor of short-form video pieces such as news and magazine features should keep a couple of things in mind. Magazine features tend to be short (three to five minutes), and news stories are shorter still, often a minute and a half or less. To be told in such a short amount of time, the story must be relatively compact. Complex stories are hard to tell in a short amount of time. Simple stories are easier to tell, and for that reason most news and magazine features are kept relatively simple. As we discussed in the chapter on production planning (Chapter 7), you should be able to state the essence of the story in one simple sentence. If you can’t, you may have difficulty making your story conform to the strict time limits that television imposes.

A simple visual device to help you track the complexity and development of your story is the story wheel. (See Figure 11.13.) Each of the spokes of the wheel corresponds to one significant element of the story. Most short news and magazine feature stories have two or three spokes; any more than that and you will not be able to tell the story within the allotted time.

THE END Once the story has been told, it needs to be brought to some kind of reasonable conclusion. If the story began with a question, the end of the story might restate the question and answer it. If the story started with an assertion, the close might support or contradict that assertion, depending on the point of view that is presented in the story. And if the story has been structured around some kind of conflict, then the end of the story provides a resolution of the conflict or explains why there is no resolution if the conflict hasn’t been resolved. News producers refer to the ending statement in a story as the closer.

VISUAL ELEMENTS Story structure also can be thought of in visual terms. Stories often begin and end with significant attention to visual detail. An edited montage of shots might provide a high-energy introduction; and a beauty shot of a striking visual element that seems to summarize the story and provide a final coda (coda is the Italian word for “tail”) for it might conclude the story. (See Figure 11.14.) In old-style Hollywood Western films the clichéd ending typically shows the hero riding off into the sunset, combining beauty shot and coda into one shot.

Characters

Videographers speak of A-roll and B-roll, but I focus on shooting C-roll—character roll.

—Dr. Bob Arnot, NBC correspondent and independent documentary producer

Good stories always revolve around interesting characters. It is no different for news, magazine, and documentary television programs. Characters are important because they humanize a story—they present the issues in a way that viewers can empathize with.

It is important to try to let the people in the story tell the story. Whereas narration is useful in providing expository detail and transitions from one story element to another, the characters within the story provide a unique first-person narrative of the events or issues that the story is exploring. And although narration is effective in presenting information in a concise manner, characters in the story can present an emotional response that narration cannot.

Many television and video productions are cast in ways that bear great similarity to casting in dramatic programs. At a public screening of Surfing for Life, a video documentary broadcast nationally on PBS that chronicles the lives of nine geriatric surfers (now in their sixties, seventies, eighties, and nineties), San Francisco Bay area independent producer David Brown described how he interviewed over 200 people before deciding on the six that he would use in his documentary. This shows the value of the preinterview in identifying potential subjects. (Go to www.surfingforlife.com for more information about this program.)

Similarly, for news and magazine stories a producer may have the main story elements in mind, and then do research to find appropriate characters to tell the story. A story about new advances in the treatment of cancer in children cannot be effectively told without showing the impact of cancer on children. Families that have tragic or triumphant stories to tell about childhood cancer will have to be identified and interviewed, and the editor will have to find the most compelling sound bites to help tell the story.

Conflict and Change

The concepts of conflict and change are central to many news, magazine, and documentary productions. A conflict is a situation in which there is a clash between people, people and organizations, or points of view. Most compelling stories have some kind of conflict at their core. The task of the editor is to create interest and suspense around the central conflict, to show how the conflict is resolved (if there is a resolution), and to show how the individuals in the story have been affected or changed by this conflict.

An editor can add visual counterpoint to the conflict by juxtaposing statements that express opposing points of view in consecutive shots. This oppositional counterpoint can be enhanced if the footage has been shot with the interviewees on opposite sides of the frame. For example, all those on one side of the issue might be framed on the left side of the frame looking slightly to screen right; all those on the other side of the issue might be framed on screen right looking slightly to screen left. When the sound bites are edited together, the interviewees will literally be on different sides of the screen, representing their differences in position on the central issue.
