2 Signal Flow

"And you may ask yourself, 'How do I work this?'"

— "ONCE IN A LIFETIME," TALKING HEADS, REMAIN IN LIGHT (SIRE, 1980)

Ever been lost in a big, old city? The roads seem to wander randomly. Few streets intersect at right angles. Individual streets change names, become one way, or simply dead end without warning. I have a recurring nightmare that, as a tourist in a city, I reach an intersection while driving down a one-way street and innocently encounter all three signs of doom at once: no left turn, no right turn, and do not enter. There is no flashing red light to warn me as I approach this dreaded intersection. Naturally there is no sign identifying the street I am on, nor the street I have reached. The cars, with me stuck in the middle, just line up, trapped by traffic. My blood pressure rises. My appointments expire. I wake up in a sweat, vowing to walk, not drive, and to always ask for directions.

2.1 Types of Sessions

Without some fundamental understanding of how a studio is connected, anyone can eventually find themselves at the audio equivalent of this intersection: feedback loops scream through the monitors, no fader seems capable of turning down the vocal, drums rattle away in the headphones but aren't getting to the multitrack — I could go on. Believe me, I could go on.

At the center of this mess is the mixing console, in the form of an independent piece of hardware or part of the digital audio workstation. In the hands of a qualified engineer, the mixing console manages the flow of all audio signals, combining them as desired, and offers the engineer the controls needed to get each bit of audio to its appropriate destination safely and smoothly. The untrained user can expect to get lost, encounter fender benders, and quite possibly be paralyzed by gridlock.

The ultimate function of the console is to control, manipulate, process, combine, and route all the various audio signals racing in and out of the different pieces of equipment in the studio or synth rack; it provides the appropriate signal path for the recording task at hand.

Consider mixdown. The signal flow goal of mixing is to combine several tracks of music that have been oh-so-carefully recorded on a multitrack into two tracks of music (for stereo productions, more for surround) that friends, radio stations, and the music-buying public can enjoy. They all have stereos, so we "convert" the multitrack recording into stereo: 24 or more tracks in, 2 tracks out. The mixer is the device that does this.

Clearly, there's a lot more to mixing than just combining the 24-plus tracks into a nice sounding 2-track mix. For example, one might wish to add reverb, equalization (EQ), compression, and a little turbo-auto-pan-flange-wah-distortion™ (patent pending). This book seeks to elevate the reader's ability to use these effects and more. Before tackling the technical and creative issues of all of these effects, one must first understand how the audio signals flow from one device to the next.

It is the mixing console's job to provide the signal flow structure that enables all these devices to be hooked up correctly. It ensures that all the appropriate signals get to their destinations without running into anything. A primary function of the console is revealed: the mixer must be able to hook up any audio output to any audio input (Figure 2.1).
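For readers who like to think in code, this any-output-to-any-input behavior can be modeled as a simple gain matrix. The Python sketch below is purely conceptual; the class and every name in it are invented for illustration, not taken from any real console or workstation.

```python
# A hypothetical routing matrix: any source can feed any destination.
import numpy as np

class RoutingMatrix:
    def __init__(self, sources, destinations):
        self.sources = sources                # e.g., mic preamps, track returns
        self.destinations = destinations      # e.g., multitrack busses, mix bus, cues
        # gains[d, s] = level of source s feeding destination d (0.0 = not connected)
        self.gains = np.zeros((len(destinations), len(sources)))

    def connect(self, source, destination, gain=1.0):
        s = self.sources.index(source)
        d = self.destinations.index(destination)
        self.gains[d, s] = gain

    def process(self, source_signals):
        # source_signals: (num_sources, num_samples) array of audio;
        # each destination receives a weighted sum of all sources.
        return self.gains @ source_signals

# Route a kick mic to multitrack track 1, and more quietly into the cue mix.
mixer = RoutingMatrix(["kick mic", "snare mic"], ["track 1", "track 2", "cue"])
mixer.connect("kick mic", "track 1")
mixer.connect("kick mic", "cue", gain=0.5)
```

Each destination is just a weighted sum of the sources feeding it, which is all a mix bus, a multitrack bus, or a cue send really is.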

In connecting any of these outputs to any of these inputs, the console is asked to make a nearly infinite number of options possible. Mixdown was alluded to briefly above, but engineers do more than mix. This signal routing device has to be able to configure the gear for recording a bunch of signals to the multitrack simultaneously, as occurs at a big band recording session. It should also be able to make the necessary signal flow adjustments required to permit an overdub on the multitrack, recording a lead vocal while listening to the drums, bass, guitars, and brass previously recorded to the multitrack. Additionally, one might need to record or broadcast live in stereo, surround, or both. Fortunately, all sessions fall into one of the following categories.

2.1.1 BASICS

If the music is to be created by live musicians, the multitrack recording project begins with the basics session. In the beginning, nothing is yet recorded onto multitrack. Any number of musicians are in the studio's rooms and isolation booths playing, and the engineer is charged with the task of recording the project's first tracks onto the multitrack recorder.

Figure 2.1 The mixer can connect any output to any input.

Most band-based productions go like this: The entire band plays the song together. The engineer records each musical instrument onto separate tracks. It is fully expected that the singer will want to redo the vocal as an overdub later (overdubs are discussed below). The same goes for the guitarist. Even though the plan is to replace these performances with new recordings later, the recordist still tracks everything. Everything performed is recorded, needed or not.

Sometimes the most desirable performance — the "keeper" take — is the one that happens during basics. Try as they might, in one overdub session after another, the performers may not be able to top it in terms of performance value. At basics, there is less pressure on these "scratch" performances. Scratch tracks are performed and recorded as a guide to the rest of the band. These performers just sing/play along so the band can keep track of practical matters (e.g., which verse they are on), accelerating and crescendoing in expressive ways. A more careful track will be recorded later — that day, in a few weeks, or possibly several months later. Such freedom often leads to creativity and chance taking, key components of a great musical performance. So it is a good idea to record the singer and guitarist during the basics session.

With the intent to capture many, if not most, tracks as overdubs later anyway, the audio mission of the basics session is often reduced to getting the most compelling drum and bass performance onto the multitrack. Sometimes even the bass part gets deferred into an overdub. So for basics, the engineer records the entire band playing all at once so that they can get the drummer's part tracked.

Check out the setup sheet (Figure 2.2) for a very simple basics session. It is just a trio — drums, electric bass, guitar, and vocals — and yet there are at least 15 microphones going to at least 10 tracks. ("At least" because it is easy and not unusual to place even more microphones on these same instruments; for example, one might create a more interesting guitar tone through the combination of several different kinds of microphones in different locations around the guitar amp.) If the studio has the capability, it is tempting to use even more tracks (e.g., record the electric bass using a Direct Inject (DI) box on a separate track from the bass cabinet microphone).

The console is in the center of all this, as shown in Figure 2.3. It routes all those microphone signals to the multitrack so they can be recorded. It routes them to the monitors so the audio can be heard. It routes those same signals to the headphones so the band members can hear each other, the producer, and the engineer. It also sends and receives audio to and from any number of signal processing effects: compressors, equalizers, reverbs, etc.

2.1.2 OVERDUBS

During an overdub session, a performer records a new track while listening to the tracks recorded so far. An electric guitar is overdubbed to the drums and bass recorded at an earlier basics session. A keyboard overdub adds another layer to the drums, bass, and guitar arrangement already recorded. An overdub, then, describes a session where additional material is recorded so that it fits musically with a previously recorded set of tracks. Either blank tracks are used for the new material, or unwanted tracks are erased to make room. Overdubbing refers to laying a new performance element onto an existing set of performances. While it often is the case, overdubbing does not require that anything be erased while recording; the new track does not have to go over an old track, and may instead be recorded onto its own fresh track.

Figure 2.2 Setup sheet for a simple basics session.

Figure 2.3 Signal flow for a basics session.

At the overdub session, there are often fewer musicians playing, fewer microphones in action, and possibly fewer band members around. It is often a much calmer experience. During basics there is the unspoken, but strongly implied, pressure that no one can mess up, or the whole take will have to be stopped and the song restarted from the top. The crowd in the studio is overwhelming — the whole band is there. The crowd in the control room is watching. The lights, meters, microphones, and cables surround the musicians, giving them that "in the lab, under a microscope" feeling. Performance anxiety often fills the studio of a basics session. Overdubs, on the other hand, are as uncomplicated as one singer, one microphone, a producer, and an engineer. Dim the lights. Relax. Do a few practice runs. Any musical mistakes at this point are known only to this intimate group; no one else will hear them. They will be erased. If the performer doesn't like what they did, they can just stop. No worries. Try again. No hurries. Ah, low blood pressure, relatively speaking.

During overdubs, the console routes the microphones to the multitrack recorder. The console is also used to create the rough mix of the live microphones with all of the tracks already recorded on the multitrack. It sends the mix to the control room monitors. Simultaneously, it creates a separate mix for the headphones; the recording artists generally require a headphone mix tailored to suit their needs as performers, one that is different from the mix needed by the engineer and producer in the control room. In addition, one never misses an opportunity to patch in a compressor and/or some other effects. Figure 2.4 lays out the console in overdub mode. The overdub session is likely easier on the performer and the engineer, but the console works every bit as hard as it did during the more complicated basics session.

Figure 2.4 Signal flow for an overdub session.

2.1.3 MIXDOWN

After basics and overdubs, any number of multitrack elements have been recorded. They are at last combined, with any effects desired, into a smaller number of tracks suitable for distribution to the consumer (stereo or surround) in the mixdown session.

At mixdown, the engineer and producer use their musical and technical abilities to the max, coaxing the most satisfying loudspeaker performance out of everything the band recorded. There is no limit to what might be attempted. There is no limit to the amount of gear that might be needed. On a big-budget pop mix, it is common for nearly every track (and there are at least 24, probably many more) to get equalized and compressed. Most tracks probably get a dose of reverb and/or some additional effects as well. A few hundred patch cables are used. Perhaps tens, probably hundreds of thousands of dollars' worth of outboard signal processing is used. Mixing automation is required (see Chapter 15), and an enormous console capable of handling so many tracks and effects is desired. During earlier recording and overdubbing sessions, the engineer might have thought, "This is sounding like a hit." It's not until mixdown that they'll really feel it. It's not until the gear-intense, track-by-track assembly of the tune that they'll think, "This sounds like a record!"

Figure 2.5 Signal flow for a mixdown session.

The mixing console accommodates this need for multiple tracks and countless effects, as shown in Figure 2.5.

2.1.4 LIVE TO TWO

For many recording sessions, one bypasses the multitrack recorder entirely, recording a live performance of any number of musicians straight to the two-track master machine, or sending it live to a stereo broadcast or the sound reinforcement loudspeakers. A live-to-two session is the rather intimidating combination of all elements of a basics and a mixdown session. Performance anxiety haunts the performers, the producer, and the engineer.

The console's role is actually quite straightforward (Figure 2.6): microphones in, stereo mix out. Of course the engineer may want to patch in any number of signal processors. Then, the resulting stereo feed goes to the control room monitors, the house loudspeakers, the headphones, the two-track master recorder, and/or the transmitter.

Figure 2.6 Signal flow for a live-to-two session.

2.2 Console Signal Flow

These four types of sessions define the full range of signal flow requirements of the most capable mixer. Yet even with the possibilities distilled into these key categories, the console demands to be approached with some organization. Broadly, the inexperienced engineer can expect to be frustrated by two inherent features of the device: complexity of flow and quantity of controls.

Complexity is built into the console because it is capable of providing the signal flow structure for truly any kind of recording session one might encounter. The push of any button on the console might radically change the signal flow configuration of the device. In this studio full of equipment, the button might change what's hooked up to what. A fader that used to control the snare microphone going to track 16 of the multitrack might instantly be switched into controlling the baritone sax level in the mix feeding the studio monitors. It gets messy fast.

The sheer quantity of controls on the work surface of the mixer is an inevitable headache because the console is capable of routing so many different kinds of outputs to so many different kinds of inputs. Twenty-four tracks used to be something of a norm for multitrack projects. Most contemporary productions exceed this. What about the number of microphones and signal processors? Well, let's just say that in the hands and ears of a great engineer, more can be better. The result is consoles that fill the room — or two or three large computer monitors — with knobs, faders, and switches. The control room starts to look like the cockpit of the space shuttle, with a mind-numbing collection of controls, lights, and meters. These two factors, complexity and quantity, conspire to make the console a confusing and intimidating device to use. It need not be.

2.2.1 CHANNEL PATH

In the end, a mixer is not doing anything especially tricky. The mixer just creates the signal flow necessary to get the outputs associated with today's session to the appropriate inputs. The console becomes confusing and intimidating when the signal routing flexibility of the console takes over and the engineer loses control over what the console is doing. It's frustrating to do an overdub when the console is in a live-to-two configuration. The darn thing won't permit the engineer to monitor what's on the multitrack recorder. If, on the other hand, the console is expecting to mixdown, but the session plans to record basics, one is likely to experience that helpless feeling of not being able to hear a single microphone that has been set up and plugged in. The band keeps playing, but the control room remains silent. It doesn't take too many of these experiences before console phobia sets in. A loss of confidence maturing into an outright fear of using certain consoles is a natural reaction. Through total knowledge of signal flow, this can be overcome.

The key to understanding the signal flow of all consoles is to break the multitrack recording process — whether mixing, overdubbing, or anything else — into two distinct signal flow stages. First is the channel path. Also called the record path, it is the part of the console used to get a microphone signal (or synth output) to the multitrack recorder. It usually has a microphone preamp at its input, and some numbered multitrack busses at its output. In between lies a fader and maybe some equalization, compression, echo sends, cue sends, and other handy features associated with capturing a great sound (Figure 2.7a).

2.2.2 MONITOR PATH

The second distinct audio path is the monitor path. Also called the mix path, it is the part of the console used to actually hear the sounds being recorded. It typically begins with the multitrack returns and ends at the mix bus. Along the way, the monitor path has a fader and possibly another collection of signal processing circuitry like equalization, compression, and more (Figure 2.7b).

Figure 2.7 The channel path and the monitor path.

Making a crisp, logical distinction between channel path and monitor path enables the engineer to make sense of the plethora of controls sitting in front of them on the console. Engineers new to the field should try to hang on to these two different signal paths conceptually as this will help them understand how the signal flow structure changes within the mixer when going from basics to overdubs to mixdown to live-to-two. Mentally divide the console real estate into channel sections and monitor sections so that it is always clear which fader is a channel fader and which is a monitor fader.
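As a purely conceptual aid (no real console or workstation is implemented this way), the two paths might be sketched in code, with bare gain stages standing in for the processing:

```python
# Hypothetical gain-stage models of the two paths; "processing" stands in
# for any EQ, compression, or other effects switched into the path.

def channel_path(mic_signal, preamp_gain, channel_fader, processing=None):
    """Record path: microphone in, multitrack bus out."""
    signal = mic_signal * preamp_gain      # microphone preamp
    if processing is not None:
        signal = processing(signal)        # optional EQ, compression, etc.
    return signal * channel_fader          # level sent to the multitrack

def monitor_path(track_return, monitor_fader, processing=None):
    """Mix path: multitrack return in, mix bus out."""
    signal = track_return
    if processing is not None:
        signal = processing(signal)
    return signal * monitor_fader          # level heard in the control room
```

The point of the sketch is the separation: the channel fader sets what is recorded, the monitor fader sets what is heard, and confusing the two is the root of most console trouble.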

2.2.3 SPLIT CONSOLE

Console manufacturers offer two channel/monitor layouts. One way to arrange the channel paths and monitor paths is to separate them physically from each other. Put all the channel paths on, for example, the left side of the mixer, and the monitor paths on the right, as in Figure 2.8(a). Working on this type of console is fairly straightforward. See the snare drum signal overloading the multitrack recorder? This is a recording problem, so one must reach for the record path. Head to the left side of the board and grab the channel fader governing the snare microphone. What if the levels to multitrack look good, but the guitar is drowning out the vocal? This is a monitoring problem. Reach over to the right side of the console and fix it with the monitor faders. Sitting in front of 48 faders is less confusing when one knows the 24 on the left are controlling microphone levels to multitrack (channel faders) and the 24 on the right are controlling mix levels to the loudspeakers (monitor faders). So it's not too confusing that there are two faders labeled "Lead Vocal": the one on the left is the microphone level to the multitrack recorder; the one on the right is the multitrack return to the monitor mix.

Figure 2.8 Console configurations.

The digital audio workstation is a split console that integrates outboard analog and digital components in the studio with digital processing within the computer. The same studio building blocks are needed: channel path to multitrack recorder to monitor path. The channel paths live partly outside and partly inside the digital audio workstation, using equipment outside the computer and the signal processing available within the computer for recording signals to the multitrack recorder. An analog front end consisting of a microphone preamplifier and any optional analog effects desired (compression and equalization are typical) feeds the analog-to-digital converters. Once digital, the audio signal may be further processed using the digital signal processing abilities of the digital audio workstation as the signal is recorded to multitrack.

After the multitrack, the signals are monitored on separate signal paths (monitor paths, naturally) within the digital audio workstation. The record path offered the chance to add any desired effects as the signal was recorded to the multitrack. The separate monitor paths give the engineer the chance to manipulate the signals after the multitrack, as it feeds the control room speakers, the artist′s headphones, etc.

Because it is built in software, the structure of the mixing console within a digital audio workstation is incredibly flexible, generally custom-tailored to each recording session. If the artist wishes to add a tambourine to one of the songs, the engineer clicks a few commands and adds an additional track to the multitrack plus all the associated mixer components. In hardware, the number of inputs is determined when the mixer is built and purchased. In software, one acquires an upper limit of inputs (the computer lacks the resources to accommodate more than some number of inputs) but can add or delete inputs freely as the production dictates, as long as the track count remains below that limit.
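A toy model of that software flexibility, assuming a hypothetical resource limit (real workstations budget track counts against available CPU and DSP, and the numbers here are invented):

```python
# A toy software mixer: tracks (and their mixer components) are created
# and destroyed on demand, up to an assumed resource limit.

class SoftwareMixer:
    def __init__(self, max_tracks=128):    # hypothetical upper limit
        self.max_tracks = max_tracks
        self.tracks = []

    def add_track(self, name):
        if len(self.tracks) >= self.max_tracks:
            raise RuntimeError("out of mixer resources")
        self.tracks.append(name)           # fader, pan, sends come with it

    def remove_track(self, name):
        self.tracks.remove(name)

session = SoftwareMixer()
session.add_track("tambourine")            # a few clicks, and the track exists
```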

2.2.4 IN-LINE CONSOLE

A clever, but often confusing, enhancement to the hardware-based console is the inline configuration. Here, the channel and monitor paths are no longer separated into different modules physically located on opposite ends of the mixer. In fact, they are combined into a single module (see Figure 2.8b).

Experience has shown that an engineer's focus, and therefore the signal processing, tends to be oriented toward either the channel path or the monitor path, but not both. During tracking, the engineer is dedicating ears, brains, heart, and equipment to the record path, trying to get the best possible sounds onto the multitrack. The monitoring part of the console is certainly being used. The music being recorded couldn't be heard otherwise. But the monitor section is creating a "rough mix," giving the engineer, producer, and musicians an honest aural image of what is being recorded. The real work is happening on the channel side of things. The monitor path accurately reports the results of that work. Adding elaborate signal processing on the monitor path only adds confusion at best, and misleading lies at worst. For example, adding a "smiley face" equalization curve — boosting the lows and the highs so that a graphic EQ would seem to smile (see Chapter 5) — on the monitor path of the vocal could hide the fact that a boxy, thin, and muffled signal is what's actually being recorded onto the multitrack.

It turns out that for all sessions — tracking, overdubbing, mixing, and live-to-two — engineers really only need signal processing in one path at a time: the channel or the monitor path. The basics session has a channel path orientation, as discussed above. Mixing and live-to-two sessions are almost entirely focused on the final stereo mix, so the engineer and the equipment become more monitor path centric.

This presents an opportunity to improve the console. If the normal course of a session rarely requires signal processing on both the monitor path and the channel path, then why not cut out half the signal processors? If half the equalizers, filters, compressors, aux sends, etc. are removed, the manufacturer can offer the console at a lower price, or spend the freed resources on a higher-quality version of the signal processors that remain, or a little bit of both. As an added bonus, the console gets a little smaller, and a lot of those knobs and switches disappear, reducing costs and confusion further still. This motivates the creation of the inline console.

On an inline console, the channel path and the monitor path are combined into a single module so that they can share some equipment. Switches lie next to most pieces of the console, letting the engineer decide, piece by piece, whether a given feature is needed in the channel path or the monitor path. A single equalizer, for example, can be switched into the record path during an overdub and then into the monitor path during mixdown. The same logic holds for compressors, expanders, aux sends, and any other signal processing in the console. Of course, some equipment is required for both the channel path and the monitor path, like faders and pan pots. So there is always a channel fader and a separate monitor fader. The inline console, then, is a clever collection of only the equipment needed, when it is needed, and where it is needed.

In-line Headaches

An unavoidable result of streamlining the console into an inline configuration is the following kind of confusion. A single module, which now consists of two distinct signal paths, might carry two very different audio signals within it. Consider a simple vocal overdub. A given module might easily have a vocal microphone on its channel fader but some other signal, like a guitar track, on its monitor fader. The vocal track is actually monitored on some other module, and there is no channel for the guitar because it was overdubbed in an earlier session.

What if the levels to tape look good, but the guitar is drowning out the vocal? This is a monitoring problem. The solution is to turn down the monitor fader for the guitar. But where is it? The inline console has monitor faders on both "sides" of the console. Unlike the split design, an inline console presents the engineer with the ability to both record and monitor signals on every module across the entire console. Each module has a monitor path. Therefore, each module might have a previously recorded track under the control of one of its faders. Each module also has a channel path. Therefore, each module might have a live microphone signal running through its channel fader too.

To use an inline console, the engineer must be able to answer the following question in a split second: Which of the perhaps 100 faders in front of me controls the guitar signal returning from the multitrack recorder? Know where the guitar's monitor path is at all times, and don't be bothered if the channel fader sharing that module has nothing to do with the guitar track. The monitor strip may say, "Guitar," but the engineer knows that the channel on the same module contains the vocal being recorded. It is essential to know how to turn down the guitar's monitor fader without fear of accidentally pulling down the level of the vocal going to the multitrack recorder.

One must maintain track sheets, setup sheets, and other session documentation. These pieces of paper can be as important as the tape/hard disk that stores the music. However, rather than just relying on these notes, it helps to maintain a mental inventory of where every microphone, track, and effects unit is patched into the mixer. Much to the frustration of the assistant engineer who needs to watch and document what's going on, and the producer who would like to figure out what's going on, many engineers don't even bother labeling the strip or any equipment for an overdub session or even a mix session. The entire session setup and track sheet is in their heads. Engineers who feel they have enough mental memory for this should try it. It helps one get their mind fully into the project. It forces the engineer to be as focused on the song as the musicians are. The musicians have lines, changes, solos, and lyrics to keep track of. The engineer can be expected to keep up with the microphones, reverbs, and tracks.

This comes with practice. When one knows the layout of the console this intimately, the overlapping of microphones and tracks that must occur on an inline console is not so confusing. Sure, the split console offers some geographic separation of microphone signals from tape signals, which makes it a little easier to remember what's where. Through practice, all engineers learn to keep up with all the details in a session anyway. The inline console becomes a perfectly comfortable place to work.

Getting Your Ducks in a Row

If an engineer dials in the perfect equalization and compression for the snare drum during a basics session but fails to notice that the processing is inserted into the monitor path instead of the channel path, the engineer is in for a surprise. When the band, the producer, and the engineer listen to the snare track at a future overdub session, they will find that the powerful snare was a monitoring creation only and was not preserved in the multitrack recording of the snare drum. It evaporated on the last playback of the last session. Hopefully, the engineer documented the settings of all signal processing equipment anyway, but it would have been more helpful to place the signal processing chain in front of the multitrack machine, not after. That is, these effects likely should have been channel path effects, not monitor path effects.

Through experience, engineers learn the best place for signal processing on any given session. Equalization, compression, reverb, and the feeds to the headphones — each has a logical choice for its source: the channel path or monitor path. It varies by type of session. Once an engineer has lived through a variety of sessions, these decisions become instinctive. The mission — and it will take some time to accumulate the necessary experience — is to know how to piece together channel paths, monitor paths, and any desired signal processing for any type of session. Then the signal flow flexibility of any mixer, split or inline, is no longer intimidating. By staying oriented to the channel portion of the signal and the monitor portion of the signal, one can use either type of console to accomplish the work of any session. The undistracted engineer can focus instead on helping make music.

What's That Switch Do?

Even your gluttonous author will admit that there is such a thing as too much. When excellent engineers with deep experience recording gorgeous tracks are invited to work for the first time in a large, world-class studio, and sit in front of a 96-channel, inline console for the first time, they will have trouble doing what they know how to do (recording the sweet tracks) while they are bothered by what they may not know how to do (use this enormous console with, gulp, more than 10,000 knobs and switches). Good news: That vast control surface is primarily just one smaller control group (a regular inline module) repeated over, and over, and over again. When an engineer learns how to use a single module — its channel path and its monitor path — they then know how to use the whole collection of 96 modules.

2.3 Outboard Signal Flow

The signal flow challenge of the recording studio reaches beyond the console. After microphone selection and placement refinements, audio engineers generally turn to signal-processing devices next, using some combination of filters, equalizers, compressors, gates, delays, reverbs, and multi-effects processors to improve or reshape the audio signal. This leads to a critical signal flow issue: How are effects devices and plug-ins incorporated into an already convoluted signal path through the console or workstation?

2.3.1 PARALLEL AND SERIAL PROCESSING

Philosophically, there are two approaches to adding effects to a mix. Consider first the use of reverb on a vocal track (discussed in detail in Chapter 11). The right dose of reverb might support a vocal track that was recorded in a highly absorptive room with a close microphone. It is not merely a matter of support, however. A touch of just the right kind of reverb (it is known in the studio world as "magic dust") can enable the vocal to soar into pop music heaven, creating a convincing emotional presence for a voice fighting its way out of a pair of loudspeakers. Quite a neat trick, really. The distinguishing characteristic of this type of signal processing is that it is added to the signal; it does not replace the signal.

This structure is illustrated in Figure 2.9. The dry (i.e., without reverb, or more generally, without any kind of effect) signal continues on its merry way through the console as if the reverb were never added. The reverb itself is a parallel signal path, beginning with some amount of the dry vocal, going through the reverb processor, and returning elsewhere on the console to be combined to taste with the vocal and the rest of the mix.

Figure 2.9 Parallel processing.
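A minimal code sketch of parallel processing may help, assuming audio as numpy arrays and a crude placeholder standing in for a real reverb (see Chapter 11 for the real thing). All names and levels are invented:

```python
# Parallel processing: the wet signal is ADDED to the untouched dry signal.
import numpy as np

def toy_reverb(signal):
    # Placeholder: one quiet 50 ms echo standing in for a real reverb tail.
    delayed = np.concatenate([np.zeros(2205), signal])[:len(signal)]
    return 0.5 * delayed

def parallel_process(dry, send_level, return_level):
    wet = toy_reverb(dry * send_level)     # the effect hears the send
    return dry + wet * return_level        # dry continues on its merry way

vocal = np.random.randn(44100)             # stand-in for one second of vocal
mix_feed = parallel_process(vocal, send_level=0.3, return_level=0.7)
```

Note that the dry vocal passes through unchanged; the wet signal is combined alongside it.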

Consider, in contrast, the use of equalization (detailed in Chapter 5). A typical application of equalization is to make a spectrally mediocre or problematic track beautiful. A dull acoustic guitar, courtesy of some boost around 10 or 12 kHz, is made to shimmer and sparkle. A shrill vocal gets a carefully placed roll-off somewhere between 3 and 7 kHz to become more listenable. This type of signal processing changes an undesirable sound into a new-and-improved version. In the opinion of the engineer who is turning the knobs, it "fixes" the sound. The engineer does not want to hear the problematic, unprocessed version anymore, just the improved, equalized one. Therefore, the processed sound replaces the old sound.

To do this, the signal processing is placed in series with the signal flow, as shown in Figure 2.10. Adding shimmer to a guitar is not so useful if the murky guitar sound is still in the mix too. And the point of equalizing the vocal track was to make the painful edginess of the sound go away. The equalizer is dropped in series with the signal flow, between the multitrack machine and the console, for example, so that only the processed sound is heard. For the equalizer, it is murky or shrill sound in, and gorgeous, hi-fidelity sound out. Equalizing, compressing, de-essing, wah-wah, distortion, and such are all typically done serially so that listeners hear the affected signal and none of the unaffected signal.

Figure 2.10 Serial processing.
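A matching sketch of serial processing, with a crude first-difference "brightener" standing in for a proper shelving equalizer (see Chapter 5); what matters is that only the processed signal reaches the mix:

```python
# Serial processing: the processed signal REPLACES the dry signal.
import numpy as np

def toy_brightener(signal, amount=0.5):
    # Crude high-frequency emphasis via a scaled first difference;
    # a stand-in for a real shelving EQ, not a usable one.
    return signal + amount * np.diff(signal, prepend=signal[0])

def serial_process(track):
    return toy_brightener(track)           # only the processed version is heard

guitar = np.random.randn(44100)            # stand-in for a dull acoustic guitar
mix_feed = serial_process(guitar)          # the murky original never reaches the mix
```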

Parallel processing, like reverb, adds to the sound. Serial processing, like EQ, replaces the sound.

2.3.2 EFFECTS SEND

Not surprisingly, these two flow structures — parallel and serial — require different signal flow approaches in the studio. For parallel processing, some amount of a given track is sent to an effects unit for processing. Enter the effects send. Also known as an echo send or aux send (short for auxiliary), it is a simple way to tap into a signal within the console and send some amount of that signal to some destination. Probably available on every module of the console or digital audio workstation, the effects send is really just another fader. Neither a channel fader nor a monitor fader, the aux send fader determines the level of the signal being sent to the signal processor. Reverb, delay, and such are typically done as parallel effects and therefore rely on aux sends (Figure 2.11).

There is more to the echo send than meets the eye, however. It is not just an "effects fader." An important benefit of having an effects send level knob on every module on the console is that, in theory, the engineer could send some amount of every component of the project to a single effects unit. To give a perfectly legitimate example, if the studio invested several thousand dollars on a single, super-high-quality, sounds-great-on-everything sort of reverb, then the recording engineer is probably going to want to use it on several tracks. Unless the studio has several very high-quality (i.e., very expensive) reverbs, it is not practical to waste it on just the snare, or just the piano, or just the vocal. The aux send makes it easy to share a single effects unit across a number of tracks. Turn up the aux send level on the piano track a little to add a small amount of this reverb to the piano. Turn up the aux send level on the vocal a lot to add a generous amount of the same reverb to the vocal. In fact, the aux send levels across the entire console can be used to create a separate mix of all the music being sent to an outboard device. It is a mix the engineer does not usually listen to; it is the mix the reverb "listens" to when generating its sound.

Figure 2.11 Effects send.
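In code, the effects send amounts to one extra level control per track, all summing to a bus the reverb listens to. A sketch with invented track names and send levels:

```python
# One aux send level per track, all feeding a single shared reverb.
import numpy as np

tracks = {
    "piano": np.random.randn(44100),
    "vocal": np.random.randn(44100),
    "snare": np.random.randn(44100),
}
aux_send = {"piano": 0.1, "vocal": 0.6, "snare": 0.0}

# The mix the reverb "listens" to: a little piano, a lot of vocal, no snare.
reverb_input = sum(aux_send[name] * tracks[name] for name in tracks)
```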

As if there already was not enough for the engineer to do during a session, it is informative to review the faders that are in action: The channel faders are controlling the levels of the various signals going to the multitrack, the monitor faders are controlling the levels of all the different tracks being heard in the control room, and the effects sends are controlling the specific levels of all the different components of music going to the reverb. Three different sets of faders have to be carefully adjusted to make musical sense for their own specific purposes.

So far, so good. Hang on, though, as there are two more subtleties to be explored. First, as one is rarely satisfied with just one kind of effect in a multitrack production, the engineer would probably like to employ a number of different signal processors within a single project. Each one of these effects devices might need its own aux send. That is, the engineer might have one device that has a lush and long reverb dialed in; another that is adding a rich, thick chorus effect; and perhaps a third unit that is generating an eighth note echo with two or three fading repetitions. The lead vocal might be sent in varying amounts to the reverb, chorus, and delay, while the piano gets just a touch of reverb, and the background vocals get a heavy dose of chorus and echo and a hint of reverb. The console must have more than one aux send to do this; in this particular case, three aux sends are required on each and every module of the console.

The solution, functionally, is that simple. More aux sends. It is an important feature to look for on consoles and digital audio workstations as the number of aux sends determines the number of different parallel effects devices one can use simultaneously during any session.

Beyond this ability to build up several different effects submixes, aux sends offer the engineer a second, very important advancement in signal flow capability: cue mixes. Generally sent to headphones in the studio or, in the case of live sound, fold-back monitors on the stage, an aux send is used to create a mix the musicians use to hear themselves and each other. As the requirements are the same as for an effects send (the ability to create a mix from all the channels and monitors on the console), the cue mix can rely on the same technology.

With this deeper understanding of signal flow, faders should be reviewed. Channel faders control any tracks being recorded to the multitrack, monitor faders build up the control room mix, aux send number one might control the mix feeding the headphones, aux send number two might control the levels going to a long hall reverb program, aux send number three might adjust the levels of any signals going to a thick chorus patch, and aux send number four feeds a few elements to a delay unit. Six different mixes carefully created and maintained throughout a single overdub session. The recording engineer must balance all of these mixes quickly and simultaneously. Oh, and by the way, it is not enough for the right signals to get to the right places. It all must make technical and musical sense. The levels to the multitrack need to be just right for the recording medium — tape or hard disk, analog or digital. The monitor mix needs to sound thrilling, no matter how elaborate the multitrack project has become. The headphone mix needs to sound inspiring — the recording artists have some creating to do, after all. The effects need to be appropriately balanced, as too much or not enough of any signal going to any effects unit may cause the mix to lose impact. This is some high-resolution multitasking. It is much more manageable if the console is a comfortable place to work. Experience through session work in combination with careful study of this chapter will lead to technical and creative success.

Prefader Send

With all these faders performing different functions on the console, it is helpful to revisit the monitor fader to see how it fits into the signal flow. Compare the monitor mix in the control room to the cue mix in the headphones. The singer might want a vocal-heavy mix (also known as "more of me") in their headphones as they sing, with extra vocal reverb for inspiration and no distracting guitar fills. No problem. Use the aux send dedicated to the headphones to create the mix the artist wants. The engineer and the producer in the control room, however, have different priorities. They don't want a vocal-heavy mix; they need to hear the vocal in an appropriate musical context with the other tracks. Moreover, extra reverb on the vocal would make it difficult to evaluate the vocal performance being recorded as it would perhaps mask any pitch, timing, or diction problems. Of course the guitarist on the couch in the rear of the control room (playing video games) definitely wants to hear those guitar fills. Clearly, the cue mix and the control room mix need to be two independent mixes. Using aux sends for the cue mix and monitor faders for the control room mix, the engineer creates two separate mixes.

Other activities occur in the control room during a simple vocal take. For example, the engineer might want to turn up the piano and pull down the guitar in order to experiment with some alternative approaches to the arrangement. Or perhaps the vocal pitch sounds uncertain. The problem may be the 12-string guitar, not the singer. So the 12-string is temporarily attenuated in the control room to enable the producer to evaluate the singer's pitch versus the piano. All these fader moves in the control room need to happen in a way that doesn't affect the mix in the headphones — an obvious distraction for the performer. The performers are in their own space, trying to meet a lot of technical and musical demands in a way that is artistically compelling. They should not be expected to do this while the engineer is riding faders here and there throughout the take.

The mix in the headphones needs to be totally independent of the mix in the control room, hence the pre/post switch, included in Figure 2.11. A useful feature of many aux sends is that they can grab the signal before (i.e., pre) or after (i.e., post) the fader. Clearly, it is desirable for the headphone mix to be sourced prefader so that it will play along independently, unchanged by any of these control room activities.

Postfader Send

The usefulness of a postfader send is revealed when one looks in some detail at the aux send's other primary function: the effects send. Consider a very simple two-track folk music mixdown: fader one controls the vocal track and fader two controls the guitar track (required by the folk standards bureau to be an acoustic guitar). The well-recorded tracks are made to sound even better by the oh-so-careful addition of some room ambience to support and enhance the vocal while a touch of plate reverb adds fullness and width to the guitar (see Chapter 11 for more on the many uses of reverb). After a few hours — more likely five minutes — of tweaking the mix, the record label representatives arrive and remind the mixer, "It's the vocal, stupid." Oops; the engineer is so in love with the rich, open, and sparkly acoustic guitar sound that the vocal track was a little neglected. The label's issue is that listeners will not be able to reliably make out all the words. It is pretty hard to sell a folk record when the vocals are unintelligible, so it has to be fixed. Not too tricky though. Just turn up the vocal.

Here is the rub: While pushing up the vocal fader will change the relative loudness of the vocal over the guitar and therefore make it easier to follow the lyrics, it also changes the relative balance of the vocal versus its own reverb. Turning up the vocal leaves its prefader reverb behind. The dry track rises out of the reverb, the vocal becomes too dry, the singer is left too exposed, and the larger-than-life magic combination of dry vocal plus heavenly reverb is lost. The quality of the mix is diminished.

The solution is the postfader effects send. If the source of the signal going to the reverb is after the fader (see Figure 2.11), then fader rides will also change levels of the aux send to the reverb. Turn up the vocal, and the vocal's send to the reverb rises with it. The all-important relative balance between dry and processed sound will be maintained.

Effects are generally sent postfader for this reason. The engineer is really making two different decisions: determining the amount of reverb desired for this vocal, and, separately, the level of the vocal appropriate for this mix. Flexibility in solving these two separate issues is maintained through the use of the postfader echo send.

The occasional exception will present itself, but generally cue mixes use prefader sends, while effects use postfader sends.
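A small sketch makes the pre/post distinction concrete. The numbers are invented; watch what moves, and what does not, when the fader is ridden:

```python
# Pre- versus postfader sends for one module, using invented levels.

def sends(signal, fader, cue_level, fx_level):
    prefader_cue = signal * cue_level          # tapped BEFORE the fader
    postfader_fx = signal * fader * fx_level   # tapped AFTER the fader
    mix_feed = signal * fader
    return mix_feed, prefader_cue, postfader_fx

for fader in (0.5, 0.8):                       # ride the vocal fader up
    mix, cue, fx = sends(1.0, fader, cue_level=0.7, fx_level=0.4)
    print(f"fader={fader}: mix={mix:.2f} cue={cue:.2f} fx={fx:.2f}")

# The cue (headphone) feed never changes; the effects feed follows the fader,
# preserving the balance between the vocal and its reverb.
```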

2.3.3 INSERT

While parallel processing of signals requires the thorough discussion of aux sends above, serial processing has a much more straightforward solution. All that is needed is a way to place the desired signal processor directly into the signal flow itself. One approach is to crawl around behind the gear, unplugging and replugging as needed to hook it all up. Of course, it is preferable to have a patch bay available. In either case, the engineer just plugs in the appropriate gear at the appropriate location. Want an equalizer on the snare recorded on track three? Then, track three out to equalizer in; equalizer out to monitor module three in. This was shown in Figure 2.10. Placing the effects in between the multitrack output and the console mix path input is a pretty typical signal flow approach for effects processors.

Adding additional processing requires only that the devices be daisy-chained together. Want to compress and EQ the snare on track three? Simple enough: Track three out to compressor in; compressor out to equalizer in; equalizer out to monitor module three in. Elaborate signal-processing chains are assembled in exactly this manner.
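Conceptually, a serial chain is just function composition: out of one box, into the next. A sketch with trivial placeholder processors standing in for the real devices:

```python
# Daisy-chaining as function composition, with placeholder processors.
compressor = lambda s: s * 0.8     # stand-in for a real compressor
equalizer = lambda s: s * 1.1      # stand-in for a real equalizer

def daisy_chain(signal, processors):
    for process in processors:
        signal = process(signal)   # out of one box, into the next
    return signal

# Track three out -> compressor in; compressor out -> equalizer in;
# equalizer out -> monitor module three in:
monitor_input = daisy_chain(1.0, [compressor, equalizer])
```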

If the engineer is hooking up a compressor and wants to use the console's built-in equalizer, it gets a little trickier: Track three out to compressor in; compressor out to monitor module three in. That's all fine and good if the engineer does not mind having the compressor compress before the equalizer equalizes. There is nothing universally wrong with that order of things, but sometimes one wants equalization before compression. Enter the insert, shown in Figure 2.12. The insert is a patch access point within the signal path of the console that lets the engineer, well, insert outboard processing right into the console.

It has an output destined for the input of the effects device, generally labeled insert send. The other half of the insert is an input where the processed signal comes back, called insert return. Using this pair of access points, outboard signal processing can be placed within the flow of the console, typically after any onboard equalizer. Using this insert or patching the processing in before it reaches the console are both frequently used serial-processing techniques.

Figure 2.12 Channel insert.
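A sketch of the insert idea, again with placeholder processing: the insert send/return pair opens the console's own path so an outboard device can be dropped in after the onboard equalizer:

```python
# A console module with an insert point placed after the onboard EQ.

def onboard_eq(signal):
    return signal * 1.1                        # placeholder for the console EQ

def module_with_insert(signal, outboard=None, fader=1.0):
    signal = onboard_eq(signal)                # console EQ first...
    if outboard is not None:                   # ...then out the insert send,
        signal = outboard(signal)              # through the outboard device,
    return signal * fader                      # and back via the insert return

compressor = lambda s: min(s, 0.7)             # placeholder outboard compressor
out = module_with_insert(0.9, outboard=compressor)   # EQ before compression
```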

2.4 FX Decision Making

Most music recordings are accumulated through the multitrack production process. The final musical arrangement that fills the loudspeakers when the music consumer plays the recording was built up piece by piece; many of the individual musical elements (a guitar, a vocal, a tambourine) were recorded one at a time, during different recording sessions. It is up to the composer, producer, musician, and engineer to combine any number of individual recorded tracks into an arrangement of words, melodies, harmonies, rhythms, spaces, textures, and sounds that make the desired musical statement. This one-at-a-time adding of each piece of the music occurs in overdub sessions. While listening to the music recorded so far, a new performance is recorded onto a separate audio track. Later, during the mixdown session, the various individual tracks are combined into the requisite number of discrete signals appropriate for the release format. The audio compact disc is a stereo format requiring two channels of audio (left and right). The DVD-video and DVD-audio formats currently permit up to six channels for surround-sound monitoring (typically left, center, right, left surround, right surround, and low frequency effects [LFE]). Studios with analog multitrack recorders often have 24 or maybe 48 tracks. Studios with digital multitrack recorders can handle track counts in excess of 96. In the end, some 24 to 96 (or more) source tracks are mixed down, with added effects, to a stereo pair or a six-track surround creation.
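As a rough numerical picture of what "mixing down" means, here is a toy stereo mixdown using a constant-power pan law; every level and pan position is invented for illustration:

```python
# Many mono tracks in, two channels out, via a constant-power pan law.
import numpy as np

def mixdown_stereo(tracks, levels, pans):
    """tracks: (n, samples); pans: 0.0 = hard left, 1.0 = hard right."""
    left = np.zeros(tracks.shape[1])
    right = np.zeros(tracks.shape[1])
    for track, level, pan in zip(tracks, levels, pans):
        theta = pan * np.pi / 2               # constant-power pan law
        left += track * level * np.cos(theta)
        right += track * level * np.sin(theta)
    return left, right

multitrack = np.random.randn(24, 44100)        # 24 recorded tracks, one second
left, right = mixdown_stereo(multitrack,
                             levels=np.full(24, 0.2),
                             pans=np.linspace(0.0, 1.0, 24))
```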

2.4.1 FX PROCRASTINATION

Often, it is not until mixdown that the actual musical role of each individual track becomes clear. During an overdub, the artist records a piano performance today, predicting it will sound good with the guitars and violins to be recorded in future sessions. At the time of each overdub, decisions are made about the expected amplitude of each track (e.g., lead vocal versus snare drum versus bass guitar), the appropriate location within the loudspeakers left to right and front to back, and the desired sound quality in terms of added effects. Eventually, during the mixdown session, these decisions are finalized as the audio tracks and signal processing are combined into the final, released recording.

This step-by-step, track-by-track approach to the making of recorded music empowers recording artists to create rich, complicated, musically compelling works of art that could not occur without a recording studio and the multitrack production approach to help create them. This music is not a single live performance that occurs all at once, in a single space. Popular recorded music is a special kind of synthesis built through multitrack production. The creative process expands in time to include not just the composition phase and the performance experience, but also this multitrack production process. This same step-by-step process desires, if not requires, that many, if not most, decisions about signal processing not be made at the time the track is recorded. Defining the subtle attributes of the various effects is best deferred until the mixdown session, when the greater context of the other recorded tracks can inform the decisions to be made on each individual track.

The recording engineer very often wishes, therefore, to record tracks without effects. The type and amount of effect needed on each track is anticipated, and possibly thoroughly documented. Temporary effects are set up during the overdub sessions so that they may be monitored but not recorded. The control room speakers play the tracks plus temporary effects. The artist's headphones play back the tracks with these first-draft effects, but the signal being recorded onto the multitrack possibly has little to no effects at all. Deferring effects decision making until mixdown can be a wise strategy.

2.4.2 FX ASSERTION

Hedging one's bets, signal-processing-wise, and deferring all critical decisions regarding effects to the mixdown session is not the only valid approach. Many engineers shun the "fix it in the mix" approach and push to track with any and all effects they feel they need. They record, track by track, overdub by overdub, to the multitrack recorder with all the effects required to make each track sound great in the final mix.

It takes a deep store of multitrack experience to be able to anticipate to a fine scale the exact type, degree, and quality of effects that each element of the multitrack arrangement will want in order to finally come together into an artistically valid whole by the end of the project. It takes confidence, experience, and maybe a bit of aggressiveness to venture into this production approach.

Processing to the multitrack might also come from necessity. If the studio has only a few effects devices or limited digital signal processing capability, then it is tempting indeed to make good use of them as often as possible, possibly at each and every overdub session, tracking the audio with the effect to the multitrack. The snare gets processed during that session. The guitar gets the same processors (likely used in different ways) at a different session. The vocal will also take advantage of the same signal processing devices at a future overdub session.

At mixdown, the engineer is not forced to decide how to spend the few effects available. Instead, they have already made multiple uses of these few machines. The faders are pushed up during mixdown, revealing not just the audio tracks, but the carefully tailored, radically altered, and variously processed audio tracks. If the engineer was able to intuit the appropriate effects for the individual tracks, the mix sounds every bit as sophisticated as one created at a studio that owns many more signal processors.

In the end, certain families of effects are better suited to the assertive approach, while others are best deferred until mixdown. Many types of serial effects, such as equalization and compression, are recorded to multitrack during the basics and overdub sessions in which they are first created. Engineers adjust effects parameters as part of the session, in their search for the right tone. These effects and their detailed settings become track-defining decisions very much on par with microphone selection and placement.

Other effects, generally parallel effects like reverb and delay, leave such a large footprint in the mix that they can't be fully developed until all overdubs are complete. The engineer needs to finesse these effects into the crowded mix, in and around the other tracks. This is best done at mixdown, when the full context of the effects and other multitrack sounds is known.

Critical, featured tracks — the lead vocal in most pop music, the featured solo instrument on most jazz projects — are certain to receive careful signal-processing attention. EQ and compression, which flatter the microphone selection and placement, may be all but required, come mixdown.

However, tracks that are this important to the overall sound of the production, and that are so very likely to receive at least fine-tuning effects at the mix session, should be spared the indignity of multiple generations of similar effects. Why EQ the vocal during the vocal overdub session when it is all but certain to receive equalization again at the mixdown session? Noise floors, distortions, phase shifts, and other signal-processing artifacts that may exist, however faint, will accumulate. Each time a key track passes through the device, these unwanted side effects become part of the audio signal.

Engineers may defer all effects on key tracks until mixdown. Monitor path effects might be used to help the engineer, producer, and artist hear how good the track will ultimately sound. Meantime, the channel path stays effects free. No effects are recorded to the multitrack.

2.4.3 FX UNDO

Most signal-processing effects that are recorded to the multitrack cannot be undone. Recording studios do not have a tool to remove delay effects. Typical studio effects cannot deconvolve reverb from a track. Recording effects during basics and overdubs means committing to them.

Readers already familiar with compression (see Chapter 6) and expansion (see Chapter 7) know that the effects are very much opposite processes. Theoretically, they should be capable of undoing each other and returning the track to its original, unprocessed state. However, as the detailed discussion of these effects in the chapters dedicated to them makes clear, there are many layers of subtlety to these effects. The signal-processing devices rarely respond with mathematical, linear rigidity when compressing or expanding. Attack times and release times especially react and adjust based on the level and frequency content of the program material being processed. There is, therefore, no expander that can perfectly uncompress the work of an LA-2A or an 1176. These effects, too, are considered permanent.

Equalization (see Chapter 5), which regularly offers the ability to cut or boost at a range of frequencies with a range of bandwidths, might seem to be the only effect that can be reversed if needed. While more forgiving, EQ can take its toll on signals being processed in irreversible ways. Noise, distortion, and phase shift often accompany all equalization effects. Boosting some spectral region during an overdub to multitrack that must be undone by a mirror image cut at mixdown forces the signal to be subjected to the noise, distortion, and phase shift side effects twice. It is quite possible these unwanted effects will undermine the quality of the audio to a point noticeable, not just by recording engineers, but also by casual listeners. Even with EQ, it is recommended that the only processes recorded to the multitrack are those that are known with high certainty to be correct for the final mix.

2.4.4 FX INNOVATION

The life-changing, career-advancing events that separate the great audio engineers from the merely good ones are not accidents. They are deliberate events. An engineer who wants to become one of those recordists who really knows the sonic difference between a tube EQ and a class-A, solid-state EQ; who wants to hear, really hear, the difference moving a microphone two inches can make; who wants to know how to hear compression, in all its subtlety; an engineer with these ambitions should plan into every project some time for discovery.

It is easy to be intimidated by those engineers who speak with such authority on all matters of fine audio detail. They say something seemingly profound, and the less-experienced engineers can only nod, listen, pray, and slip into that dreaded spiral of self-doubt. They should not lose confidence in themselves when Famous Fred casually listens to a tune and says, ″Nice! That′s gotta be the Spasmotic Research X-17 compressor on the lead vocal. We have a matched pair at my studio….″

Most of this stuff is not difficult to hear; one just needs the chance to hear it.

Practice

Jazz sax players know that in order to survive, they must spend time chopping in the woodshed. In isolation, they practice scales, they develop licks, they work through tunes they know in keys they do not, and they improvise, improvise, improvise. The time spent developing their chops may not pay back for many years, but it prepares them for the day when, for example, John Scofield, Herbie Hancock, Dave Holland, and Max Roach invite them to sit in on their little gig at The Blue Note, with Quincy Jones in attendance. Under high-pressure situations, conscious thought is rather difficult, so one needs to rely on instincts, instincts born and developed in the woodshed.

Recording engineers must also ″shed.″ One of the best ways for a new engineer to fine-tune their hearing and attach deep knowledge to specific pieces of equipment is to book studio time without client work: locked in the recording studio for at least four hours, with no agenda other than to master a piece of equipment, experiment with a microphone technique, explore the sonic possibilities of a single instrument by setting up the entire microphone collection on that instrument and recording each to a different track, practice mix automation moves, etc.

Engineers get better with time, as more and more session work deepens their knowledge of each piece of equipment. Those early in their audio careers can shorten that session-based learning curve by booking practice sessions of their own.

Restraint

To take one′s recording craft to the next level, an engineer needs to learn, grow, innovate, and take chances out of the woodshed and in actual, professional, high-profile recording sessions.

Top-shelf engineers working with multiplatinum bands do not possess a magic ability to, at any moment, create any sound. They do it well now because they have done it before.

The gorgeous sounds, the funky tones, the ethereal textures, and the immersive spaces were all likely born in other sessions in which a band and an engineer invested time and creative energy in search of something. The iconic studio moments documented in the touchstone recordings that all fans of audio collect often came from hours of deliberate exploration and tweaking.

To develop a reputation among artists as someone who really knows how to play the studio as a musical instrument, an engineer needs to pick their spots. Not every band is willing to partake in these hours-long audio experiments. Not every project has the budget. Not every session presents the appropriate moment.

About once every ten projects, the engineer might strike up a relationship with a band so tight that the engineer essentially becomes part of the band, part of the family. Not just good friends — best friends. And the band will invite the engineer to contribute, to play, to jam. They will make room for the engineer and the recording studio to make their mark.

On this rare project, an ambitious engineer has the chance to succeed with sonic results sure to get noticed.

But even these projects don′t give the engineer studio-noodling carte blanche. Think of the life of a typical album, if there is such a thing.

Preproduction. An engineer can only hope there is time for it, and that the band invites them.

Basics sessions are generally fast-paced and a little harried. Drums, bass, guitar, keys, vox, etc. Basics takes every ounce of ability and concentration an engineer has. Everybody. Every microphone. Every headphone. Every gobo. Every assistant. With so much happening at once, engineers generally must retreat to a zone of total confidence. If a recordist sets up 24 microphones for a basics session, at least 20 of them are in tried-and-true situations. There is very little room to experiment.

Overdubs. This is the engineer′s chance. Some overdubs are calm. Some overdubs move along slowly. One, not all, of the slower, calmer overdub sessions might present the engineer with the chance to dig deep into their creative soul and try out one of the more innovative recording techniques they have been dying to attempt for the last several months.

Practice. Sometimes the band has to practice during a recording session. If the band is in the studio rehearsing because they have brought in a guest percussionist who does not really know the tune yet, this might be the moment to try out some different microphone preamplifiers while they rehearse. If the band is exploring a promising new idea that has just come to them, the clever engineer becomes invisible and starts setting up wacky microphones in wackier places. If the bass player broke a string and needs five or ten minutes to get the instrument back in order, the innovative engineer patches in that compressor they just read about and gives it a test drive.

Mixdown. Some songs need only honest, faithful sonic stewardship. Other mixes hit a roadblock. Here the band wants, needs, and even expects the engineer to come up with the combination of edits, effects, and mix automation moves that lift the song over the roadblock. Great engineers know when to go for it, and when to stay out of the way; when they do go for it, if they have spent enough time in the shed, the band is very likely to like what they hear.

Engineers should not attempt this on every overdub and every mix, every night. These technical explorations for new sounds can be the bane of a good performance. If the vocalist lacks studio experience, if the guitarist lacks chops, if the accordion player lacks self-confidence, then the engineer′s first role is to keep the session running smoothly, the performance sounding good, and the musicians feeling comfortable.

Successful studio innovations and an engineer′s advanced understanding of all things audio come from a few well-chosen moments during those special sessions where the planets have aligned, not from wandering moments of distracted tweaking and fiddling. Learn to spot the sessions where this could happen. Build slack into the recording budget: more time for overdubs than the sessions should strictly need. Give each session enough room to make those engineering moments of innovation possible. What seems improvised to the band should come from premeditated moments of inspiration.
