18

Audiology

Once the picture is locked, timecoded copies are screened with the sound editors and composer. In these spotting sessions the editor and director discuss the placement of sound effects and music with the supervising sound editor and the composer, respectively. Timecode is used as a reference to note the in and out points.

In the composer’s case he or she may have already seen a version of the film with a temp dub. Some composers refuse to listen to temp music, and understandably so, fearing that they will be unduly influenced. Others use it as a guideline. According to composer Mark Adler (Food, Inc., Bottle Shock, The Rat Pack),

The temp track should be a safety net and not a springboard… . If the temp becomes the jumping off point—the springboard—there’s a very real danger that the score will lack originality, freshness, and a unique sound and feel which is specific to the film.1

Either way, the spotting session is an opportunity to discuss where music cues will begin and end, as well as the emotional tone of the composition. In music spotting, a distinction is made between source and score. Source music emanates from objects in the scene, such as radios, iPods, stereos, and so on. Sometimes the composer is responsible for producing this, while at other times the source may come from a prerecorded song. Occasionally, these songs are determined by forces other than the director or editor. In such cases the studio or its music supervisor may have arranged for tunes as part of a marketing plan, as well as with hopes of enhancing the appeal of the film’s soundtrack. In a sense, source music comprises what the characters would hear, while the score is what only the audience hears.

Case Study

Years ago, on a film for 20th Century Fox, our editing crew was handed a recording by an unknown group and told that it had to be included in the movie, probably as source music. Since the director wasn’t very pleased with the music or the dictate to use it, the song was placed in the car radio so it could be turned off soon after the scene began. The song was by a group called Red Hot Chili Peppers. The rest is history.

Sound editors are artists who use audio to enhance the audience’s emotional experience of the film. Though not as apparent as picture, sound has a huge, often subliminal, effect on us. Veteran sound editor Don Hall (Young Frankenstein, The French Connection, Towering Inferno) suggests that image is 90% and sound 10% of what goes into a film. Yet emotionally it is a 50/50 split between the two. According to Hall, “You can use sound in place of imagery to tell the story.”2 The person listening to it fills in the meaning and context and even visualizes the event.

A beautifully edited film where the dialogue struggles to be heard, is out of sync, is contaminated with noise, or where sound effects are inappropriate or too loud for the image, can appear ugly or disjointed. Conversely, sound that fits the image, where the volume is neither too loud nor too soft, where the fidelity is clean and wide ranging, enhances the visuals. For these reasons filmmakers devote large amounts of time, research, and money to perfecting the sound that goes into their motion pictures.

Ever since the invention of talkies, as the early sound films were called, filmmakers have experimented with ways to deliver better-quality audio to their audiences. Evolving from monaural to two-track stereo to 5.1 and beyond, with systems like Dolby Atmos and IMAX Laser/12.1 placing speakers across a theater’s ceiling as well as along the surrounding walls and behind the screen, film sound has come to envelop its audience.

Tech Note

In a nonlinear editing timeline, the soundtrack order places the production/dialogue track at the top (on A1), the sound effects tracks underneath, and the music tracks at the bottom. Dialogue generally plays as mono. Sound effects can be stereo or mono, depending on the source and where it occurs on screen. Music and ambience play in stereo.

One of the first steps toward creating a fulfilling auditory experience begins in the picture editing room. Beginning editors wonder how much attention they should pay to sound when they see themselves as mainly involved in picture. But if they want their picture to look good, they must attend to the sound. This is show business, after all, and whoever views the movie, at whatever stage, expects to have a seamless and satisfying experience. In this regard the picture editor should strive to build the best sound template possible before relying on a dedicated team of sound editors to bring it to a level suitable for exhibition. That implies that dropouts, uneven volume, and distracting background ambience must be eliminated. These days picture editors frequently set the levels and the frequency, pitch, and placement of dialogue as well. Additionally, they add temporary sound effects and music to round out the emotional experience of the film.

In the days of analog sound, adding sound effects was more cumbersome than it is now. Picture editors, or their assistants, often had to drive to sound effects houses where they would preview previously recorded effects on large reels of magnetic tape, select the desired ones, and then have them transferred to magnetic film to be cut into a separate soundtrack accompanying the picture back in the editing room. Before the magnetic era, sound was recorded optically onto film. The waveform could actually be seen along the film’s edge where an optical reader decoded it. These tracks were stored in the studio libraries for use on future films.

Case Study

According to Don Hall, one of the sound editors on Young Frankenstein (1974), the film was shot on black-and-white film stock in order to enhance the historical tone of the movie. The director, Mel Brooks, also chose to use archival sound effects from the studio’s library to take advantage of the old-school sound quality of optically recorded tracks. The sound of the door knockers, heard upon arrival at Frankenstein’s castle, came from those tracks, with their accompanying hiss and distortion.3

Today temp sound has grown easier to achieve. Websites like YouTube, as well as internet libraries dedicated specifically to sound effects, offer numerous free and fee-based sound effects. These can easily be accessed, downloaded, and imported into an editing project. Another efficient procedure has evolved with the improvement in cellphone technology. These light, portable, and ubiquitous devices have made gathering sound effects (and additional dialogue, known as wild lines) even easier. This is particularly useful for unusual or highly specific effects that can’t be found on the internet in good enough quality or, in the case of wild lines, when an actor is out of town.

Tech Note

Today’s smartphones include, among their many apps, an audio recorder, sometimes designated as Voice Memo. Editors can tailor temp effects to a particular scene simply by recording them on their phones. A quiet room, the right prop, and a helpful assistant is all one needs. Once the sound is recorded, and the file is named and saved, it can be sent as an attachment to the editor’s e-mail account. The file is downloaded and then dragged onto the system’s hard drive and imported into a folder or bin. Once it is available as media, it can be cut into the timeline’s audio track, just like any other sound effect or music.

But which sounds warrant inclusion into the picture editor’s cut? Not every possible sound that could find its way into the final film should be recorded or included at this time. That will be the work of the sound editors. At this point the film editor’s concern rests with bringing life to prominent objects in the frame or those that contribute to a story point, such as a phone ring or a gunshot. Sound libraries bulge with a prodigious variety of phone rings and gunshots, so the editor can access a particular type of phone or a particular make of gun.

In many cases the literal effect will work, but sometimes the actual sound fails to supply the appropriate feeling. Sound editors have long known that the real sound may pale beside a more dramatic sound from another source. For instance, instead of using an actual gunshot, which may register as a mere pop, why not use the sound of a cannon and then bring the volume down a few notches? And when a character bites into an apple, the crunch created by teeth severing a celery stalk can masquerade as a better apple bite. Cracking walnuts can evoke the sound of someone cracking his knuckles.

Using the picture editor’s production and sound effects tracks as a template, sound editors fill out the sound landscape by finessing dialogue and incorporating sound design, ADR, hard effects, Foley, and ambiences.

For the most part there are few effects in the final movie that were recorded on the set. Except when the sound mixer is astute enough to record wild sound (nonsync sound) of unique ambiences or effects, most of the sounds are gathered and cut into the soundtrack during postproduction. The sounds of footsteps and other moves are rerecorded on a Foley stage. The Foley stage has pits with various surfaces, such as sand, leaves, steel, concrete, and so on. A Foley walker (Figure 18.1) watches a projected scene from the film and mimics the actor’s movements exactly. The sounds he creates are recorded and then cut into sync with the picture. Today Foley has reached beyond its original purpose to supply background effects for looped dialogue. Through Foley most of a scene’s effects, including wind and explosions, can be generated live. Some Foley artists will supply vocal sounds, such as efforts, grunts, and even a horse’s breathing.

Ambience

Another form of sound effect is ambience. Rather than a specific moment, like a kiss or body punch, ambience adds an unseen backdrop throughout a scene, allowing our minds to fill in the gaps. With the opportunity to reflect story, character, mood, and tone, ambience brings another emotional level to a scene, but in a subtle way. “Are there police helicopters or the sound of Rainbird sprinklers in a character’s neighborhood? What do their dogs sound like, what do their cars sound like? Are there babies crying?” asks Midge Costin, Hollywood sound editor (The Rock, Crimson Tide) and documentarian (Making Waves: The Art of Cinematic Sound).4

Modern sound editing binds the story together much the same way music does. “The most powerful part of any soundtrack is the music, but you can’t have music running all the way through. It will take away from the effect of the music,” points out Costin.5 Ambience, on the other hand, occurs in every scene, so the opportunity exists to reflect the story and fill in emotional gaps with ambience, such as a low drone, distant thunder, or a whirring wind found in horror films. Ambience needs to be strong enough that you feel it in your gut, yet not so obvious that you become aware of it working its way into your psyche. This subtlety is the forte of talented rerecording mixers.

In No Country for Old Men (2007), Llewelyn Moss (Josh Brolin) finds the satchel full of hundred dollar bills he’s been searching for beside a dead body. No music covers this scene, but as Moss opens the case to reveal the money, a subtle wind gust overtakes the soundtrack, evoking awe and subtly hinting at the grisly events to come. Costin points out another example:

In Master and Commander [2003], the kid wonders if he saw something, we just hear the sound of the waves, then he looks again, we have a sense that he saw something in his spyglass, but it’s very subtle; there’s a low frequency wind that nobody picks up on.6

According to Academy Award–winning sound editor Stephen Flick, Speed (1994) was one of the films that changed the realm of sound editing by changing the proscenium.7 In the early days of film sound, the cinematic proscenium resided in front of the audience and followed the traditional film-editing pattern. Each cut had its accompanying sound. If you saw a dog, you heard the dog. When you cut to a car, you heard the car. With Speed, the editors wrapped the sound around the cuts, no longer constrained by cut-for-cut audio. In the scene where Captain McMahon (Joe Morton) rides up on the platform beside the speeding bus the sound moves around the scene. It envelops the events transpiring inside the bus as Annie (Sandra Bullock) drives, outside the bus on McMahon, and around the entire action.

In constructing the soundscape for an off-screen world, Flick makes a distinction between a standard establishing shot, such as the apartment building exteriors seen in TV comedy shows, and an anchor shot. An anchor shot provides a rich off-screen sonic world by initially establishing a specific and detailed environment on screen. In an interior dialogue scene, this allows the editor to open with sounds that are native to the anchor shot and then slowly dip that ambience beneath the scene’s dialogue, bringing it back in toward the end of the scene. To imagine the sound design of an anchor shot, Flick gives the example of the Alfred Stieglitz photograph “The Terminal” (Figure 18.2). Here one sees a New York trolley terminal in 1892. The sound details are rich and varied based on the images—the bell of the horse-drawn trolley, the huffing of the horses, the clanking of the iron horseshoes against the steel rail, the sloshing of the snow, and so on. These are the sounds that give license to the off-screen ambience that the audience will hear and consequently process immediately, without question or confusion.

Microsurgery

For first you write a sentence,

And then you chop it small;

Then mix the bits, and sort them out

Just as they chance to fall:

The order of the phrases makes

No difference at all.

—Lewis Carroll

One of the hallmarks of Hollywood filmmaking is the superior quality of motion picture dialogue. The audience needs to hear every word in order for the dialogue and, ultimately, the movie to work.

Finely edited and mixed dialogue tracks, with smooth extensions and overlaps, well-tuned EQ (equalization), and noise and sibilance filtration (such as de-essers), create a strong, clear voice, whether it belongs to a Roman gladiator or a tiny mouse.

Midge Costin talks about her reverence for the original production track:

There is nothing that beats clean, good dialogue and production sound. Start with really good production sound and make it the best it can be. I love all production tracks. I love the footsteps, I love the door opens, I love everything… . Why production sound is so important—it’s the performance! The largest line item in the budget is the actors… . The character is the voice as much as anything else, their face, their body. It’s the voice.8

The head rerecording mixer is the dialogue mixer because what is said by the actors has priority.

Jerry Bruckheimer produces large-scale action films like Top Gun (1986), The Rock (1996), and Pirates of the Caribbean (2003), so one might expect he would give priority to sound effects. But he puts dialogue first, over sound effects and music. According to Costin, Bruckheimer’s ADR editor Jessica Gallavan observed that 90% of Bruckheimer’s notes dealt with dialogue issues.9 His point is that there is nothing worse than when someone leans to the person beside her and asks, “What did he say?” It pulls you out of the movie.

The dialogue editor doctors the tracks in order to keep the center track—the dialogue—running as if everything were recorded clean, with no clicks, no director’s voice, no footsteps that are too big, no lip smacks, no sibilance, no clipped words. Dialogue editors will do whatever they can to fix the original production track, altering the original take as little as possible. In some cases this means stealing needed consonants and vowels, or alternate performances, from other takes. When all else fails, a dialogue line may require ADR.

Automated dialogue replacement, or ADR, requires the actor to rerecord his lines in a soundproof booth. Since the sound will be clearer and cleaner than can be achieved on most sets, the editor must add a secondary track of ambience or roomtone that matches the scene’s environment in order to create a natural sound.

Roomtone

The set’s neutral ambience, usually recorded at the beginning or end of the shoot, is known as roomtone. It supplies the necessary audio fill for spots where the background sound would otherwise drop out, such as in MOS or ADR situations.

Case Study

ADR serves other purposes as well. In Mannequin 2 (1991), the director (Stewart Raffill) and I used ADR to replace several actors’ voices with the voices of other actors. In the story, the villain was protected by three muscle-bound thugs who were supposed to have voices like the Terminator, Arnold Schwarzenegger. Though they looked the part, their voices and accents weren’t strong enough or consistent enough to be convincing. So we hired several actors from a loop group who could replace their dialogue. It worked great, and the audience believed these were their real voices.

Another use for ADR is to supply needed information that was left out of the original script. Often the editor temps in these lines in response to questions raised during the test screenings. A new line of dialogue is placed over the actor’s back or at other points where the audience won’t see the actor’s lips.

While ADR may be a last resort, getting actors to record wild takes on the set, while they are still in character, can be useful. When actors come back months later to read ADR lines, they may have trouble remembering their characters.

ADR works when it is invisible. The ADR supervisor, who has priority over the dialogue editor, submits the lines that need to be replaced. To this end, the dialogue editor removes the offending production line, fills in the gap, and moves the new ADR line down onto the X-track, where it has its own place.

Doctor’s Note

What is known today as ADR used to be called looping. In the days of analog sound, in order to replace a line of dialogue, the sound editor spliced the dialogue into a loop—imagine a strip of film with the ends connected—that would play again and again as it ran over the playheads. Similarly, the recording medium, usually fullcoat magnetic sound film, was also cut into a loop that would record the actor’s new performance as he or she repeated the line as it played from the loop. Amusingly, in Mel Brooks’ Silent Movie (1976) the film’s only line of dialogue was delivered by the mime Marcel Marceau. It had to be looped, however, since the production track was so poorly recorded.

The production track falls under the auspices of the dialogue editor, regardless of whether it comprises spoken words or a sound effect. For instance, in Costin’s work on The Time Machine (2002), there were live horses on the set, which the microphone picked up. The clip-clop of the hooves supplied such excellent tracks that it was worth pulling them, cleaning them up, and cutting them into separate FX tracks (designated as P-FX, or “production effects”) rather than waiting for horse effects to be recorded elsewhere. This culling process leaves only dialogue on the production track (A1) but preserves high-quality production effects. The rerecording mixers have the option of pulling from the P-FX track if they or the director are not completely happy with a track delivered by the sound effects editor.

Generally, dialogue editors try to avoid Foley or ADR unless it is truly necessary. They prefer to work with the natural production sound. On features, however, a lot of sounds are foleyed due to the demands of foreign deliverables: the requirement to make M&E (music and effects) tracks that are 100% filled. M&E tracks allow foreign territories to dub dialogue in their own language while still maintaining the rich, original soundtrack of effects and music.

Tech Note

Part of the challenge with ADR lies in fitting the new dialogue readings into the actor’s lips. With Pro Tools sound editing software, an editor can perform microsurgery using the TCE (Time Compression/Expansion) trim tool to stretch or contract a word. The sound editor pulls the region out or shrinks it according to the sync needs.

Instead of the standard 48 kHz audio sampling rate, try recording ADR at twice the rate (96 kHz). That way sampling artifacts won’t become audible when lines are stretched with the TCE trim tool.

Another useful application is VocALign, which automatically aligns the ADR with the original production track.
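To see why the extra sampling headroom matters, here is a minimal NumPy sketch of a naive interpolation stretch. This is not how Pro Tools’ TCE algorithm works (TCE preserves pitch, while simple interpolation does not); it only illustrates how stretching spreads the recorded samples over a longer duration, so a 96 kHz take retains more original detail per output second than a 48 kHz one:

```python
import numpy as np

def stretch(samples: np.ndarray, factor: float) -> np.ndarray:
    """Naively lengthen audio by linear interpolation.

    A toy illustration only: unlike a real TCE tool, this also
    shifts pitch when played back at the original sample rate.
    """
    n_out = int(len(samples) * factor)
    # Fractional read positions in the original signal.
    positions = np.linspace(0, len(samples) - 1, n_out)
    return np.interp(positions, np.arange(len(samples)), samples)

# One second of a 440 Hz tone recorded at two sampling rates.
for rate in (48_000, 96_000):
    t = np.arange(rate) / rate
    tone = np.sin(2 * np.pi * 440 * t)
    stretched = stretch(tone, 1.25)  # fit a word into a longer lip movement
    # Each output second now draws on only rate / 1.25 original samples.
    print(rate, len(stretched))
```

At a 1.25× stretch, the 48 kHz take is rebuilt from only 38,400 original samples per output second, while the 96 kHz take still draws on 76,800, comfortably above the 48 kHz delivery rate.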

It is always a good idea to run sound on the set. There is rarely a reason to shoot MOS (Mit Out Sound), except perhaps for an aerial shot or occasional visual effects. If for nothing else, the production track can serve as a guide track to cut to for ADR and Foley. These days, an editor can sync ADR by the waveform rather than lips since the words form the same visual pattern regardless of the quality of the recording.
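The waveform-matching idea can be sketched in a few lines of NumPy. Real tools such as VocALign are far more sophisticated (they time-warp the line rather than apply a single offset), but the core principle is finding the lag that maximizes the cross-correlation between the guide track and the ADR take:

```python
import numpy as np

def find_offset(guide: np.ndarray, adr: np.ndarray) -> int:
    """Return the sample offset that best aligns `adr` with `guide`.

    Uses full cross-correlation; a positive result means the line
    starts that many samples into the guide track.
    """
    corr = np.correlate(guide, adr, mode="full")
    # Convert the index of peak correlation into a lag relative to zero.
    return int(np.argmax(corr)) - (len(adr) - 1)

# Toy example: a 'word' buried 300 samples into the guide track.
rng = np.random.default_rng(0)
word = rng.standard_normal(200)
guide = np.zeros(1000)
guide[300:500] = word
print(find_offset(guide, word))  # → 300
```

With the offset found, the editor (or software) can slide the ADR region into place on the timeline before any fine TCE trimming.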

MOS

A Hollywood term derived from a German-sounding phrase, MOS, like many film terms, was originally coined as an on-set joke. When émigré director Erich von Stroheim announced in his Austrian accent that he was going to shoot the next take “Mit out sound” (without sound), the set culture picked up the German word “mit,” meaning “with,” and repeated it. Eventually “Mit out sound” evolved into the abbreviation known today.

Checking the Pulse

Dialogue has its own rhythm. Editors alter it by adding or subtracting air or breaths, which contributes to the emotional effect of a scene. Breath gives us a sense of how someone feels. A character’s breath bears continuous significance, yet it is so subtle as to be easily ignored. The emotional rhythm of a performance can hinge on whether someone sighs, inhales, exhales, sobs, or holds her breath.

Background sound, also known as fill, is the essence of good dialogue editing. In the editor’s cut, different takes comprising the characters’ dialogue butt up against each other. Yet each take has its own background sound. The challenge comes with making the disparate takes blend together, helping them sound like they happened at the same time. This is accomplished by splitting the various characters onto their own separate tracks and moving them onto A1, A2, A3, and so on.

The next step is to find areas within the take where no dialogue occurs, culling out this fill and cutting it onto the head and tail of the dialogue line. In this way the individual lines can be faded from one to the other (Figure 18.3). The fill that is added to the end of the dialogue of Character A will extend under the first word spoken by Character B. The loudness or softness of the background sound will determine the length of the extension.
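The fill-and-extension idea can be sketched as follows (a NumPy illustration with made-up levels, not any particular editor’s implementation): each line carries its own background fill past the cut point, and the two fills are crossfaded so the backgrounds blend rather than jump:

```python
import numpy as np

def crossfade(tail: np.ndarray, head: np.ndarray, fade_len: int) -> np.ndarray:
    """Join two clips with a linear crossfade of `fade_len` samples.

    `tail` ends with Character A's fill; `head` begins with the fill
    pulled from Character B's own take, so their backgrounds blend.
    """
    fade_out = np.linspace(1.0, 0.0, fade_len)
    fade_in = 1.0 - fade_out
    overlap = tail[-fade_len:] * fade_out + head[:fade_len] * fade_in
    return np.concatenate([tail[:-fade_len], overlap, head[fade_len:]])

# Two takes whose background ambiences sit at different levels.
take_a = np.full(1000, 0.20)  # louder room tone under Character A
take_b = np.full(1000, 0.05)  # quieter tone under Character B
joined = crossfade(take_a, take_b, fade_len=400)
print(len(joined))  # → 1600
```

The louder the backgrounds, the longer the fade region needs to be for the transition to stay imperceptible, which is why noisy takes require longer fill extensions on both sides.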

Takes with loud backgrounds usually require longer extensions on both sides so they can fade in and out slowly and imperceptibly. The mistake that occurs, especially on indie films or with inexperienced editors, is attempting to mask the shifts in background sound by building an additional track of continuous ambience. This only muddies the sound. The main goal of dialogue editing is to get the sound as clean as possible.

Beginners and students sometimes try to short-cut this process by carving out the distracting background sound and replacing it with blank slug or clean fill between the lines of dialogue. If, for instance, the background sound is freeway traffic, the resulting impression is complete silence until the character opens his or her mouth to speak, and then traffic comes out!

Warning

Roomtone rarely matches. By the time the on-set production mixer runs roomtone in order to preserve some clean ambience for possible use later in the scene, the tone of the room has changed: there may be more traffic outside, the microphone will have changed position, more or fewer people will be standing in the room, and so on. Many editors rarely or never use roomtone, preferring to find ambience within the same production take where the actor’s dialogue line occurs. In the case of ADR, however, roomtone can add ambience to dialogue that has been recorded in a soundproof environment.

In recent decades the designation of sound designer has become a popular title, since many sound effects are highly engineered to achieve maximum impact, using sound as metaphor. In a sense, however, sound design has been around since the early days of sound movies, on films ranging from King Kong (1933) to Citizen Kane (1941). In the Joan Crawford film Sudden Fear (1952), the clever use of sound reinforces some of the thriller’s more chilling moments—the unnaturally loud ringing of an unseen phone as Crawford lies in wait for her con-artist husband, intent on shooting him, or the disturbing overmodulated ticking of a clock over the montage outlining her lethal plans.

With the advent of Dolby noise reduction, sound design became more extensive, since sound editing no longer involved merely trying to make bad optical soundtracks intelligible. Only in the seventies was sound design recognized with a separate title, often incorporating two jobs, those of supervising sound editor and rerecording mixer.

With Star Wars (1977), Ben Burtt further advanced the art of sound design. A one-time physics major, Burtt started recording and designing sound a year before the cameras rolled on George Lucas’s science-fiction epic. One day he drove up from Southern California to drop off his tracks at Lucas’s Northern California facility and ended up staying. His tracks included the voice of R2-D2 (Burtt’s voice processed through an ARP synthesizer), the coarse respiration of Darth Vader (Burtt recorded himself breathing through a SCUBA regulator), and the classic light saber hum.

The films of Francis Ford Coppola, also created away from the Hollywood establishment at his Zoetrope studio in the 1970s, flourished through a willingness to experiment within the medium, including sound. In Coppola’s movies, the experiences within the characters’ psyches are reflected in the soundtrack—the thumping of helicopter blades in the humid air of Vietnam as a tormented Martin Sheen struggles in the isolation of his Saigon hotel room in Apocalypse Now (1979); the deafening sounds of a distant subway train amplified to reflect the rising tension inside Al Pacino’s head as he advances on the victim of his first gangland killing in The Godfather (1972). These scenes, as well as car chases, pirate ship battles, or light saber duels, reveal the more blatant examples of sound design, but more subtle moments carry as much significance, such as the expansive sounds of a forest during the love scene in David Lean’s Ryan’s Daughter (1970). Even recent sci-fi films have taken more subtle and poetic approaches: Gravity (2013), Interstellar (2014), Ex Machina (2015), and Arrival (2016) all use a more realistic, less electronic approach to sound design.

Sound editor Sylvain Bellemare, in an interview for A Sound Effect online magazine, refers to the sound design of Arrival where linguist Dr. Louise Banks (Amy Adams) struggles to communicate with an alien race:

Even if we are in a sci-fi film surrounded by giant aliens, we follow a single woman, very close to her emotion… . The sound had to represent a mental vision of what is happening inside her mind… . We wanted to be closer to real sound, to have an organic sound that we could transform and make fit the image and fit the story.10

Doctor’s Note

As a film editor I am sometimes tasked with coming up with sounds that would reflect the tone and meaning of the story. With the encouragement to experiment in the films of Alan Rudolph, I designed an early temp dubbing system attached to the KEM flatbed, which was later imitated and distributed by the equipment rental house. With it I could record effects, such as a box of push pins tossed onto the linoleum floor to mimic pearls from a broken necklace, then cut them into the reel and mix them onto an external dubber.

On Made in Heaven (1987), there had been a protracted struggle to find the sound of heaven—not an obvious or easily available effect. The director had rejected a myriad of sounds, finding them too artificial and not organic enough. Most had been generated through electronic synthesizers. So I gave it a shot. One particularly engaging aspect of heaven—at least as viewed by these filmmakers—was that you could transport yourself anywhere in heaven simply by thinking of a destination. You disappeared and then reappeared somewhere else. It struck me that this sudden abandonment of the self that was then reunited at a pleasurable location elsewhere must be an ecstatic experience.

But what do ecstatic experiences sound like? A sigh. I gathered everyone I could—assistants, the production coordinator, the secretary, and a couple others—and asked them to sigh into a microphone. Then I doubled up the tracks, placed some reverb on them, and mixed them together. It sounded otherworldly. When it was played for Rudolph he was delighted. We had found the sound of heaven. Later, my lower fidelity template was rerecorded by the sound editing team using professional voice actors and high-quality microphones and then mixed by rerecording mixer Richard Portman (The Deer Hunter, Star Wars, Godfather).

As a side note, the grateful director and producer generously offered me a single card sound designer credit, but I chose to stick solely with an editing credit, since that was my focus.

Music Editing

Music has the ability to lift a good film to an even higher level, as in the rousing, orchestral scores of Pirates of the Caribbean (2003) and Gladiator (2000). Think of the classic scores of Raiders of the Lost Ark (1981), Star Wars (1977), or Schindler’s List (1993) and you’ll know the influence that a composer such as John Williams has on a film. Alfred Hitchcock often used Bernard Herrmann to compose suspenseful scores for his films. And Tim Burton’s magically surreal movies come alive with Danny Elfman’s music. Carter Burwell’s edgy score for Twilight (2008) added depth and direction to the film. And what about the eerie music that pervades horror films?

Music helps bring tension, majesty, romance, and catharsis to a scene. Yet, as seen in films like Bullitt (1968), sometimes it is best to let a scene play without music. In that case the sound effects supply the “music.”

A well-cut picture, enhanced by clear and vivid dialogue and effects tracks, finds its fulfillment with the addition of a strong, emotional score. This may begin with the introduction of a well-conceived temp music track to reinforce the feelings engendered in the editor’s and director’s cuts.

In some cases, especially on lower budget films, the film editor may serve as a music editor, temping in soundtracks to help support the edited scenes. In other cases, a dedicated music editor will perform these operations. Later, when the score has been written and performed, the music editor builds the final tracks against the picture, sometimes adding, sometimes shortening, sometimes rearranging the final music.

“Music is the unseen actor,” notes award-winning music editor Joanie Diener (American Beauty, Angels in the Outfield, Hemingway and Gellhorn).

A lot of times music is picking up something, some textual thing, that is missing from the performances. Or missing from what was shot or edited. I’ve often had directors say, “we wanted this to feel this way or that way, but it didn’t come out that way. What can you do to help us?”11

The temp score helps in previewing a film for the studio and test audiences.

In designing a temp score, music editors look for soundtracks from movies that portray similar emotions. “It’s like a discovery mission. I often hear music in my head, like a composer, but then I have to go out and find it pre-existing,” recalls Diener.12 Since American Beauty (1999) was being scored by Thomas Newman, music was borrowed from several Newman scores, such as The Shawshank Redemption (1994) and Unstrung Heroes (1995). Music editors maintain soundtrack libraries containing hundreds, sometimes thousands, of soundtracks from which they build the temp track.

The temp track can often influence the final score. It offers a chance to find out what kind of orchestration works and what doesn’t. Most composers do demos or mock-ups with sampled instruments. Sometimes they ask the music editor to work in the material, move it around, and mix it against the picture. He or she will output a QuickTime movie to send to the filmmaker. In the comedy sequel Ernest Saves Christmas (1988), the filmmakers imagined a more low-key synth and guitar score based on the previous film, Ernest Goes to Camp (1987), which had relied on a down-home, swamp-guitar-infused score. But this was a heartfelt Christmas movie. Instead of swamp guitar, Diener temped in a huge orchestral score, using James Horner’s and John Williams’ music. Disney had negotiated for a cheaper synth score, but after hearing the temp music they renegotiated the deal, adding more money to pay for a big orchestral score.

Another aspect of music editing involves songs and song placement. In musicals, an editor will sometimes use the backing tracks from the songs as temp score. That way the score will segue seamlessly into the song. A problem with temping in songs can arise because temp music costs nothing to license, so music editors are free to use well-known, expensive songs. Producers and directors invariably fall in love with songs the production can’t afford.

Studio record deals can drive song selection. Studios will sometimes require filmmakers to use songs from the company’s soundtrack label (see the Red Hot Chili Peppers Case Study at the beginning of the chapter). On musicals, the music editor comes in early to help with the prerecords of the songs and to prep all the playbacks—the music that will be performed to on the set.

After the final songs (in a musical) are placed into the cut, the music editor can make audio adjustments so the playback lyrics fit perfectly in the mouth of the performer. This can be perilous, since perfect lip sync does not guarantee that the surrounding body language will communicate what the song expresses. That is the case when the track is large and spirited but the actor barely mouthed the playback lyrics on set.

Figure 18.5 Detail of a ProTools music session from Miracles from Heaven (2016)

Note: Paramedics attempt to rescue the daughter of Christy and Kevin Beam (Jennifer Garner and Martin Henderson) from inside a big tree. The temp track is shown here. The music editor’s challenge was to find and edit something that was elegiac yet hopeful.

Image courtesy of Joanie Diener

Backing Track

The instrumental portion of a song.

Tech Note

Digital technology has introduced many new tools to help editors perfect the soundtracks they work with. These tools include pitch shift, speed variance, reverb, and track layering. The application Pitch-N-Time Pro allows music editors to condense or stretch a track so the song fits the performer’s lip movements.
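As a rough illustration of what such time-stretching involves, the stretch ratio is simply the target on-screen duration divided by the recorded duration. The durations below are hypothetical, not taken from any particular film:

```python
# Hypothetical durations: fit a 92.5-second prerecorded song into
# 88.0 seconds of on-screen lip movement.
recorded_len = 92.5   # seconds in the prerecorded track
on_screen_len = 88.0  # seconds the performance occupies in the cut

# A ratio below 1.0 means the track must be condensed;
# above 1.0 means it must be dragged out (stretched).
ratio = on_screen_len / recorded_len
print(f"time-stretch ratio: {ratio:.3f}")
```

A dedicated tool such as Pitch-N-Time Pro applies a ratio like this while holding pitch constant, which a simple speed change would not.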

Case Study

During the filming of the Alan Menken musical A Christmas Carol (2004) the director, Arthur Seidelman, suddenly called, “Cut,” in the middle of a song. He had decided that the actors needed to be over a bridge 30 seconds earlier than the music was allowing. Music editor Joanie Diener recalls: “I sat there with my headphones and ProTools on the set, taking 30 seconds out of this iconic Alan Menken song, and I’m thinking ‘if I don’t do this right they’re going to kill me, and we’re stuck with it because we’re shooting to it and they gave me, like, five minutes!’”13 In the end it all worked out.

Rolling the Cut

In order to better sync on-camera action to the music, an editor, in consultation with the music editor, will remove a few frames from the head of a shot, then add an equal number of frames to the tail, keeping the shot the same length. This is known as rolling the cut. The term originated with film, where frames were physically cut off from the beginning of a shot and other frames spliced onto the end. Now, with Avid and Premiere, the frames can be slid virtually while maintaining the original duration.
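The bookkeeping behind rolling the cut can be sketched in a few lines of Python; the in and out frame numbers here are hypothetical:

```python
def roll_cut(shot_in, shot_out, roll):
    """Roll a cut by `roll` frames: trim that many frames from the
    head of the shot and add the same number to the tail, so the
    shot slides within the footage while its duration stays fixed."""
    new_in = shot_in + roll    # frames removed from the head
    new_out = shot_out + roll  # frames added to the tail
    assert (new_out - new_in) == (shot_out - shot_in)  # same length
    return new_in, new_out

# Slide a shot 4 frames later so an on-camera action lands on a downbeat
print(roll_cut(100, 220, 4))  # (104, 224)
```

Because the sequence duration never changes, rolling the cut leaves every downstream edit, and the music already laid against the picture, exactly where it was.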

After the picture editor’s cut is locked, the music editor works with the composer to create spotting notes (Figure 18.4) and prep for the scoring session. Eventually she will build the final score against the picture (Figure 18.5). In some cases the director, editor, or producer might realize that it’s not sounding as expected—this cue is too short, this one is too long, the mood is wrong, or maybe it should have a different theme. At that point the music editor might end up recutting the score on the dub stage, even stealing cues from other parts of the film.

Rx

  •  Next time you need a sound effect try recording it on your smartphone and inputting it into your sequence.
  •  Check out the “Who’s got the con?” scene in Crimson Tide (1995) to hear some masterful sound editing of Denzel Washington’s and Gene Hackman’s overlapping performances—recorded on a single microphone and with no ADR. The trick in doctoring the overlapping dialogue was to keep the rhythm (the pattern of words) moving even when the listener couldn’t make sense of everything that was said. Whichever actor was dominant at a given moment, his sentence was the one designed to play clearly.

Notes

1. Mark Adler, interview with the author, 2010.
2. Don Hall, interview with the author, 2017.
3. Ibid.
4. Midge Costin, interview with the author, 2017.
5. Ibid.
6. Ibid.
7. Stephen Flick, interview with the author, 2017.
8. Ibid.
9. Ibid.
10. Jennifer Walden, “Creating the Poetic Sci-Fi Sound of Arrival,” ASoundEffect.com, November 18, 2016. www.asoundeffect.com/arrival-sound/
11. Joanie Diener, interview with the author, 2017.
12. Ibid.
13. Ibid.