28
Automation

It is as simple as this—commercial mixes are full of automation. This process, which professional mixing engineers perform naturally, is somewhat overlooked by the novice. Even before the invention of multitrack recorders, engineers used to ride the levels of different microphones during recording. With the introduction of multitrack recorders, this practice didn’t stop, but some of it was postponed to the later mixing stages. Before automation computers were integrated into consoles, the engineer, the assistant, the producer, and even the band members (as many hands as were needed) used to gather around the console to perform automation passes that were printed straight onto the final two-track. Each person knew exactly what needed to be moved, how, and when. For the more complex mixes, an “action score” was written. It was, practically, a performance, and the console was the instrument. If someone botched a move, the whole performance had to start all over again. Automated consoles were introduced around the late 1970s. Even today, many of them can only write the automation of channel faders, mutes, and solos, and studio engineers and their companions continue to perform live automation on analog desks.

Nowadays, audio sequencers let us write automation for virtually every control in the mix and, moreover, provide graphical editors in which automation can be corrected or even drawn from scratch. There is no need for 12 hands since we can automate different controls during different passes. Never in the history of mixing has writing and editing automation been as easy as it is today with audio sequencers. It is surprising that many DAW users fail to comprehend the benefits of this powerful facility and the potent effect automation can have on the mix.

We say that each song is a story. In the case of classical music, progressive rock, and many jazz pieces, the story might be an epic. In modern pop productions, it is often squeezed into around three minutes, and changes happen very quickly. There is a lot happening in each story—the music develops, the arrangement changes, different sections should have different impacts, and the importance of different instruments varies. It would be unfair to have so much action streaming through a static mix. It is our responsibility as mixing engineers to accommodate the dynamic movement of the music and the structural elements of the production. Beyond that, automation can always be used to create some interest or add some extra movement. The options are endless, and virtually any process or effect can be automated. It is possible to regard this late phase in the mix as playtime, where creativity and experimentation might replace practical needs. Finally, level automation, mostly on vocals, is sometimes done before a compressor as part of manual balancing.

Any list of possible automation examples would be partial—the options are truly endless. It should be stressed that the most common mix automation (and often the most practical one) involves levels, so fader rides should probably precede any other type of automation.

Here are just a few things we can automate. Some of them have been mentioned in this book already, while others can be heard on some commercial mixes:

  • Raise the level of a specific instrument during the chorus, then bring it down again during the verse. Likely candidates are vocals, kicks, snares, and guitars.
  • Mute some instruments early in the song, then introduce them later.
  • Make an instrument brighter or darker during specific sections.
  • Ride the level of overheads during crash hits.
  • Ride the level of overheads or any other instrument up and down in relation to the tempo.
  • Pan something to one place during one section, then pan it somewhere else during another.
  • Apply some interesting vocal effect momentarily.
  • Change the timbre of the kick during some sections.
  • Mute the double-tracked vocals at points.
  • Change the reverb of the snare between sections or just alternate it between hits.
  • Bring down the level of some instruments to clear some space for another instrument.
  • Introduce distortion on the bass during the chorus only.
  • Increase the compression on the drums as the song progresses.
  • Widen and narrow the stereo width of an instrument during various sections.

Automation engines

Automation engines work on the principle of storing the position of controls using automation events. An automation event typically includes which control has been automated and its position at a specific timestamp (very similar to the way MIDI systems handle MIDI control messages). Before any automation has been written, each control is free to move. Once even a single automation pass has been performed, the control position is often bound to the position interpolated between two automation events, or to the position of the latest automation event.
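To make the idea concrete, here is a minimal sketch in Python of how such an engine might represent and read back events. The names (AutomationEvent, control_position) are hypothetical and real engines differ in detail, but the principle is the same: timestamped values, interpolated between neighbouring events.

```python
# Illustrative sketch only: the names and structure are hypothetical,
# not the API of any particular DAW or automation engine.
from bisect import bisect_right
from dataclasses import dataclass

@dataclass
class AutomationEvent:
    time: float    # playback position, in seconds
    value: float   # control position, e.g. a fader level in dB
    control: str   # which control the event belongs to

def control_position(events, now):
    """Return the control position at playback time 'now'.

    'events' must be sorted by time. Before the first event the control
    holds the first value, after the last event it holds the last value,
    and in between it sits on the line between the surrounding events.
    """
    times = [e.time for e in events]
    i = bisect_right(times, now)
    if i == 0:
        return events[0].value
    if i == len(events):
        return events[-1].value
    prev, nxt = events[i - 1], events[i]
    frac = (now - prev.time) / (nxt.time - prev.time)
    return prev.value + frac * (nxt.value - prev.value)

# Example: a fader ride from -6 dB to 0 dB between 10 s and 12 s
ride = [AutomationEvent(10.0, -6.0, "fader"),
        AutomationEvent(12.0, 0.0, "fader")]
print(control_position(ride, 11.0))   # -3.0
```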

It is worth knowing that some automation systems store events against SMPTE time code, which is an argument for using 30 fps, as the higher frame rate gives higher-resolution automation. This is subject to the project not involving any visuals that might dictate a different frame rate.

The automation process

Performing vs. drawing

When performing automation, we move controls during playback. Audio sequencers (and some digital recorders) provide a graphical display of automation events and also let us draw them on screen. Sometimes, drawing automation is quicker. For example, if we want to mute a specific instrument for a minute, it takes less time to draw two mute events than to perform mute automation and wait for a minute. Drawing automation can also be beneficial when we want events to be quantized to the tempo grid. However, some argue that performing automation yields more musical results, as there is always an interaction between what we hear and what we do—we respond to the music rather than approximating the effect of drawn events (an effect we mostly only hear after the drawing is done). Depending on the situation, different methods may be more appropriate, but it is worth remembering that, between performing and drawing, the former is more likely to involve feel.

Figure 28.1 This Digital Performer screenshot shows all the automation events happening during the second break of “The Hustle.” Some of these events have been drawn (such as those in the top track), and others performed (such as those in the Lead Auto track).

Performing automation

Automation is said to be written rather than recorded. We do not have to press a record button in order to write automation, although sometimes we have to tell the system which control is to be automated and sometimes we have to assign (arm) a specific channel of interest to the automation engine (more common on digital desks).

When writing automation on an audio sequencer, we either use a control surface or the mouse to alter the position of controls during playback. Ideally, an automation system wants to know when a control is touched and when it is released—often automation is only written between these two events. Some control surfaces feature touch-sensitive controls; these are either faders or (less commonly) rotary knobs that, by way of varying capacitance, detect a finger touch. If a control is not touch-sensitive, the automation engine starts writing automation either with the first control movement or as soon as the control position matches the existing automation value. In most cases, such a control is considered released after a certain period has passed with no position changes (a period often called the touch timeout). Controls on a computer screen are regarded as touch-sensitive—a control is considered touched as soon as the mouse button is pressed, and released as soon as the mouse button is released.
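As a rough illustration of the touch-timeout idea, the sketch below (Python, hypothetical names, a simplified engine that merely polls control positions) infers touch and release for a control that is not touch-sensitive:

```python
# Illustrative sketch only: a simplified model, not any DAW's actual logic.
import time

class TouchTracker:
    """Infer touch and release for a control that is not touch-sensitive.

    The control counts as 'touched' from its first movement and as
    'released' once no movement has been seen for 'timeout' seconds
    (the touch timeout).
    """
    def __init__(self, timeout=1.0):
        self.timeout = timeout
        self.last_value = None
        self.last_move = None

    def update(self, value, now=None):
        now = time.monotonic() if now is None else now
        if self.last_value is not None and value != self.last_value:
            self.last_move = now          # movement: (re)start the timeout clock
        self.last_value = value
        return self.touched(now)

    def touched(self, now=None):
        now = time.monotonic() if now is None else now
        return self.last_move is not None and (now - self.last_move) < self.timeout

tracker = TouchTracker(timeout=1.0)
tracker.update(-6.0, now=0.0)    # baseline reading, not yet a touch
tracker.update(-5.5, now=0.2)    # movement detected: the control is 'touched'
print(tracker.touched(now=0.8))  # True  - still within the timeout
print(tracker.touched(now=1.5))  # False - timeout elapsed, control 'released'
```

A real engine would also begin writing when the control position matches the existing automation value, a condition this sketch omits.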

Automation modes

Automation engines may vary in their modes, features, and response to user action. We can, however, generalize about a few automation modes typical in many systems. The modes in this list are illustrated in Figure 28.2.

  • Off—automation data is neither read nor written.
  • Read—previously written automation is read, but new control changes are not written.
  • Touch—new automation data is written as long as a control is touched, otherwise previous automation is read.
  • Latch—automation is written from the moment a control is touched until playback stops.
  • Write—new automation is written for as long as playback is running, overriding previously written automation. To prevent unwanted automation overrides, automation engines often switch to a different mode after each pass in write mode.

Apart from when the playback stops, the writing of automation might also stop if the mode is set back to read or if the specific channel automation assignment is disarmed. Two things can happen when automation writing stops during playback: either the control jumps instantly to the previous automation position (which could generate clicks) or it slides to that position. Often the time it takes a control to slide between the two positions is called the match period.
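Tying the modes and the touch behaviour together, here is a simplified, hypothetical sketch of how an engine might decide, block by block, whether to write the control’s current position or read back existing automation:

```python
# Illustrative sketch only: real automation engines differ in modes and details.
from enum import Enum

class Mode(Enum):
    OFF = "off"
    READ = "read"
    TOUCH = "touch"
    LATCH = "latch"
    WRITE = "write"

def should_write(mode, touched, latched, playing):
    """Decide whether new automation is written for the current block.

    touched -- the control is currently held (or within its touch timeout)
    latched -- the control has been touched at least once during this pass
    playing -- playback is running
    """
    if not playing or mode in (Mode.OFF, Mode.READ):
        return False
    if mode == Mode.TOUCH:
        return touched        # write only while the control is held
    if mode == Mode.LATCH:
        return latched        # keep writing until playback stops
    return True               # WRITE: write for as long as playback runs

def should_read(mode, writing):
    """Existing automation is read whenever it is not being overwritten."""
    return mode != Mode.OFF and not writing
```

When should_write turns false during playback, a real engine would either snap the control back to the read value or glide it there over the match period.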

Figure 28.2 The five typical automation modes.

Another mode known as Trim or Relative mode usually applies to levels only (faders and sends). It is useful when we want to adjust the level of previously written automation. The idea is that, instead of writing the absolute level values, trim mode simply offsets the existing automation levels by the number of dB we move the fader by. For example, if the fader is brought down from –6 to –12 dB, all automation events during the writing pass would drop by 6 dB. Systems often provide the functionality to apply the relative change throughout the song.
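Since trim mode deals in offsets rather than absolute values, the arithmetic is simply a shift of every event written during the pass. A minimal sketch (hypothetical names, levels in dB):

```python
# Illustrative sketch only: hypothetical names, levels expressed in dB.
def trim_levels(events, offset_db):
    """Offset existing level automation by a relative amount in dB.

    Pulling the trim fader from -6 to -12 dB gives offset_db = -6.0,
    so every event covered by the pass drops by 6 dB.
    """
    return [(t, level + offset_db) for t, level in events]

chorus_ride = [(60.0, -6.0), (62.0, -3.0), (64.0, -6.0)]
print(trim_levels(chorus_ride, -6.0))
# [(60.0, -12.0), (62.0, -9.0), (64.0, -12.0)]
```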

Automation is often one of the last things we do in a mix. After writing automation for a specific control, any adjustment can be something of an effort, since we have to adjust the full automation information rather than just move the control position. This can be especially annoying if we want to alter the level of tracks after writing level automation. This is exactly the problem trim mode was designed to solve, but even trim mode can be cumbersome at times. An elegant solution to this problem was mentioned earlier—we could insert a gain plugin and use it to perform any level automation. The track’s fader would then be free for global level adjustments. Another solution is to send different instruments to an audio group and automate the level of the group instead of that of the original tracks.

Automation alternatives

Duplicates

In some situations, we might want to apply more than a few changes to a specific instrument. For example, during the chorus we might want to make the overheads louder, compress them more, narrow their stereo width, and alter their equalization. We can automate all the related controls, but there is a quicker method: most audio sequencers allow more audio tracks than any project requires. In scenarios where serious changes are needed, it sometimes pays to duplicate the track in question, trim or mute the respective sections on the two tracks (for example, having the choruses on the duplicate only, and removing them from the original track), and mix the duplicate differently, with all the involved changes (Figure 28.3).

Figure 28.3 Duplicates “automation.” This screenshot shows three bass tracks. The top track plays throughout most of the song and provides the main bass sound in the mix. During the break, a different sound was sought, involving a variation of tonal characteristic and level, and some additional processing. To achieve this, the bass was moved onto a new track with different processing and level (Bass Brk) for the length of the break only. The bottom track is a distorted layer that is only mixed with the main bass track during the outro.

Fades

Mix-fades are another often overlooked practice—instruments very often fade in or out rather than starting or stopping instantly. The risk of clicks exists with any mute automation or region-trimming, whereas fades are click-proof. Transitions between sections are gelled by fades, while both mute automation and region-trimming can come across as very unnatural. While crossfades are more of an editing affair, fade-ins and fade-outs are a powerful mixing tool. We can achieve fades using level automation, but doing so makes later level adjustments a longer procedure, and the need for a fade is often apparent very early in the mixing process. The fade tools provided by every audio sequencer are therefore usually the better choice for this task.
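The click-proofing that fades provide comes from ramping the gain over many samples instead of cutting it in a single step. Below is a small illustrative sketch of a linear fade-out on a mono buffer (Python with NumPy; the function name and the linear shape are assumptions, and real fade tools typically offer curved shapes as well):

```python
# Illustrative sketch only: a linear fade; real fade tools offer other shapes.
import numpy as np

def fade_out(audio, sample_rate, fade_seconds=0.5):
    """Ramp the last 'fade_seconds' of a mono float buffer down to silence.

    An abrupt cut can leave a discontinuity in the waveform (a click);
    ramping the gain over many samples removes it.
    """
    out = audio.copy()
    n = min(len(out), int(fade_seconds * sample_rate))
    if n > 0:
        out[-n:] *= np.linspace(1.0, 0.0, n)
    return out

# Example: fade out the last half-second of a two-second test tone
sr = 44100
tone = 0.5 * np.sin(2 * np.pi * 440 * np.arange(sr * 2) / sr)
faded = fade_out(tone, sr)
```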

Control surfaces

When Digidesign launched the Icon D-Control (Figure 28.4) back in 2004, eyebrows were raised as to whether there was a place for a pure control surface as big as a large-format console. But for many engineers who used large-format analog consoles, it was justified.

Figure 28.4 The Digidesign Icon D-Control. This product, which looks like a large-format console, is a pure control surface for Pro Tools. Products of this kind enable a mixing experience that was once reserved for users of large-format consoles, with all its technical and creative benefits.

Source: Courtesy of Digidesign. Photo: Bill Schwob.

The analog vs. digital debate goes beyond audio quality into the realm of human interaction. Analog consoles provide the highest level of accessibility—all the controls are laid out in front of you, and any sonic action you’d like to perform is within reach. In most cases, 2 or 3 seconds is all it takes to translate your sonic vision into sound. Then there are the layer-based digital desks, where usually a third or half of the mix might be readily accessible, although you might have to press a select button in order to equalize a specific track or navigate some on-screen menus in order to tweak a certain effect. At the bottom of the accessibility ladder comes a computer system, where the whole mix has to be channeled through a rather primitive device called a mouse. Compared to large-format desks, even a 21-inch screen is tiny. The more tracks a project involves, the slower the mixer navigation becomes. Plugin windows have to be opened and closed. Things can take time. Had it only been a matter of more or less time, perhaps control surfaces as large as the Icon would not exist. But the creative flow can easily be restrained while the brain is busy operating the computer. Audio sequencers provide a few features that shorten the time between our vision and its implementation. But there is no arguing that mouse-mixing will never be as fast as mixing on a large-format analog console or on a large control surface such as the Icon.

It goes beyond that, even. Most people find sliding faders and turning knobs much more natural than dragging a mouse on a screen, especially when it comes to automation. Then there is also the fact that a mouse is a serial interface—rarely can we change more than one control at a time, although in mixing this is frequently necessary. We might, for example, alter the ratio and threshold of a compressor simultaneously, we might fancy boosting the highs on one channel while attenuating them on another, or we might want to bring one fader down while bringing another up during an automation pass. There are many examples.

Figure 28.5 The Frontier Design Group AlphaTrack. Despite its compact size, a control surface such as this gives automation in particular and mixing in general a far more natural feel.

Control surfaces come in various forms and sizes: from the large Icon, to the moderate-size designs such as the eight-fader Euphonix MC Mix, to compact designs such as the AlphaTrack in Figure 28.5. Many of them can be cascaded, and the larger the work surface we are using, the more accessible our mix becomes and the faster we can realize our sonic vision. But even compact surfaces such as the AlphaTrack can make automation and other aspects of mixing a far more natural experience.
