Chapter 1
IN THIS CHAPTER
Getting to know the mix process
Managing levels
Using control surfaces and external mixers
Understanding the stereo field
Using reference music
Think about all the time it took for you to record all the tracks for your song. You spent countless hours setting up mics; getting good, hot (high, but not distorting) levels on your instruments; and making sure that each performance was as good as you could get it. You’d think, then, that most of your work would be done.
Well, on the one hand, it is — because you no longer have to set up and record each instrument. On the other hand, you still have to make all the parts that you recorded fit together. This process can take as long as it took you to record all the tracks in the first place. In fact, for many people, it takes even longer to mix the song than to record all its parts.
In this chapter, I introduce you to the process of mixing your music. You get a chance to see the basics of mixing using the Pro Tools Mix window. You also discover how to set up external mixing aids, such as MIDI control surfaces, digital mixers, and analog mixers. To top it all off, you discover how to reference your music to other people’s recordings as well as how to train your ears so that your mix “translates” to different types of playback systems.
The goal of mixing is to make sure that each instrument can be heard in the mix — the recorded whole that results from blending all your recorded parts — without covering up something else or sounding out of place. You can pull this off in several ways, as the rest of this chapter shows.
When you mix all the tracks in your session, they all end up at the mix bus, which is controlled by the Master fader. There the signals are summed (added together), resulting in a level (volume) that’s higher than that of the original tracks. One danger of mixing in-the-box (within Pro Tools) is that this level can get pretty high, and you might not notice unless you listen very carefully.
The advantage of keeping peak levels down is that you send better levels to the mix bus and reduce the chance of clipping. (Clipping, also called overs in digital recording, is the distortion that results when the summed signals end up too hot and overload the mix bus.)
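If numbers help you see why summing pushes levels up, here’s a quick Python sketch. This is purely illustrative — it isn’t Pro Tools code, and the “tracks” are just generated sine waves — but the arithmetic is the same: two tracks that each sit comfortably below full scale can sum to a level past 0 dBFS.

```python
# Illustrative sketch (not Pro Tools internals): summing two tracks
# that each peak near full scale can push the mix bus past 0 dBFS.
import math

def make_tone(freq_hz, peak, n=1000, sr=44100.0):
    """Generate a sine wave with the given peak level (1.0 = 0 dBFS)."""
    return [peak * math.sin(2 * math.pi * freq_hz * i / sr) for i in range(n)]

def mix(*tracks):
    """Sum tracks sample by sample, as a mix bus does."""
    return [sum(samples) for samples in zip(*tracks)]

def peak_dbfs(samples):
    """Peak level in dB relative to full scale (0 dBFS)."""
    return 20 * math.log10(max(abs(s) for s in samples))

a = make_tone(440, 0.8)   # each track alone is below full scale
b = make_tone(440, 0.8)
bus = mix(a, b)

print(round(peak_dbfs(a), 1))    # about -1.9 dBFS: fine on its own
print(round(peak_dbfs(bus), 1))  # about +4.1 dBFS: the summed mix clips
```

Two identical in-phase signals sum to twice the level (a 6 dB boost), so each track peaking near −2 dBFS lands the bus around +4 dBFS — past full scale, which is exactly the clipping the paragraph above warns about.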
Before I start to mix a song, I do a few things to prepare myself for the process. My goal before I mix is to get in the headspace of mixing. This often means taking a step back from the song and approaching it as a listener rather than as the musician who recorded each track. Start the mixing process by following these steps:
1. Determine the overall quality you want from the song.

At this point, I don’t mean quality in terms of, “Is it good or bad?” Rather, I define quality here as a musical style or a feeling. Do you want it to kick? Soothe? Scream? You probably don’t need to think about this too hard if you had a definite sound in mind when you started recording. In fact, most composers hear a song in their heads before they even start recording.

2. Listen to a song or two from a CD that has a sound or feel similar to the song you’re trying to mix.

Listen to the examples on your studio monitors if you can; try to get a sense of the tonal and textural quality of these songs. Listen to them at fairly low volume; be careful not to tire your ears. All you’re trying to do at this point is get your ears familiar with the sound you’re trying to produce in your music.

3. Set up a rough mix, using no EQ or effects, and listen through the song once.

For this listening session, don’t think like a producer; rather, try to put yourself in the mindset of the average listener. Listen to the various parts you recorded — does anything stick out as particularly good or bad? You’re not listening for production quality. You’re trying to determine which instruments, musical phrases, licks, melodies, or harmonies grab you as a listener.

4. Get a piece of paper and a pen to jot down ideas while you work.

When you listen through the song, take notes on where certain instruments should be in the mix. For example, you might want the licks played on lead guitar throughout the song to be muted during the first verse. Or maybe you decide that the third rhythm guitar part you recorded would be best put way to the right side of the mix, while the other two rhythm guitar parts might be closer to the center. Write down these ideas so you can try them later. Chances are that you’ll have a lot of ideas as you listen through the first few times.
Pro Tools comes with a powerful software mixer, and everything you might want to do can be done via your mouse and keyboard. Even so, many people — myself included — prefer to mix by using real faders, knobs, and buttons. You can do this in several ways: using a computer control surface (such as the Artist Mix or a Mackie MCU Pro), a MIDI controller, a digital mixer, or an analog mixer. These alternative types of mixers are covered in more detail in Book 2, Chapter 1; for now, I just want to cover the basic setups of these various options.
If you have a computer control surface such as the Artist Mix (see Figure 1-1), it integrates automatically with the Pro Tools software. With your system hooked up (the Artist Mix connects over Ethernet, using Avid’s EuCon protocol), all your faders, knobs, and buttons should work seamlessly with the software mixer. When you push a fader on the control surface, the corresponding track fader on the computer screen moves as well.
Check out the manual for the Artist Mix to get familiar with the operation of its control options. Or you can just fiddle around a bit; the controller is set up very intuitively.
Quite a few MIDI control surfaces on the market work well with Pro Tools. (The Mackie MCU Pro is one that immediately comes to mind.) If you want to go the MIDI route, I highly recommend doing an Internet search for “Pro Tools compatible control surface” and see what comes up. You’ll find a bunch of options and just as many opinions about each one.
Hooking up a MIDI control surface is the same as setting up any MIDI instrument, so go to Book 5, Chapter 1 for details on configuring your system.
After you’re hooked up and running, using a MIDI control surface is basically the same as using an Artist Mix: All the faders, knobs, and buttons on your control surface adjust parameters in the Pro Tools software. The manual for your MIDI control surface will spell out the function of each of the buttons. In the case of the Mackie Control, you can get a template that fits over the surface of the controller and lists the corresponding Pro Tools functions for easy reference.
If you want to use a digital mixer with your Pro Tools system, you need to make sure that you have the proper number of digital inputs and outputs in your audio interface to connect to your mixer.
For example, if you have a MOTU 828x, you have ten outputs (eight ADAT and two S/PDIF) that you can use to send your tracks from your computer to your mixer. (Just make sure that your mixer can accept ADAT and S/PDIF signals — Book 2, Chapter 1 has more on ADAT and S/PDIF — at the same time; otherwise you’re down to eight.) This means you can send no more than ten tracks of material to your mixer to mix.
In this example, if you have a session with more than ten tracks, your digital mixer becomes somewhat useless unless you want to mix in stages (ten tracks at a time) or split the job, mixing some tracks in the box (within Pro Tools) and the rest on your mixer.
To connect your digital mixer to your Pro Tools system, simply run the appropriate cables (ADAT, for instance) from the output of your Avid interface to the input of your digital mixer. Keep in mind that when you move a fader (or a button or a knob) on your mixer, the corresponding control you see onscreen in Pro Tools won’t be affected; unlike a control surface, a digital mixer processes the audio itself rather than remote-controlling the Pro Tools software.
As with a digital mixer, your ability to mix on an analog mixer is limited by how many outputs your interface has; in this case, it all depends on the number of analog outputs. For example, the MOTU 828x provides eight analog outputs, so eight is the maximum number of tracks you can mix with an analog mixer at any one time.
If you want to mix your session through an analog mixer, just connect each analog output from your audio interface to one of the inputs of your mixer. In the case of the MOTU 828x, you need eight TS cables running from outputs 1 through 8 to the corresponding inputs of your mixer. (Book 2, Chapter 1 has more on the various cables you meet in the recording world.) Again, if you have an audio interface that has only a couple of analog outputs, you won’t be able to mix anything other than simple songs (ones with few tracks) using an analog mixer.
When you’re at a live concert and you close your eyes, you can hear where each instrument is coming from on stage. You can hear that certain instruments are on the left side of the stage, others are on the right, and still others seem to come from the center. You can also generally discern whether an instrument is at the front or the back of the stage. Put all these sound-based impressions together, and you have a 3-D image made of sound — a stereo field.
What makes up the stereo field is the specific placement of sound sources from left to right and front to back. When you mix a song, you can set your instruments wherever you want them on the imaginary “stage” created by your listener’s speakers. You can do this with panning, which sets your instruments from left to right. You can also use effects (such as reverb and delay) to provide the illusion of distance, placing your instruments toward the front or back in your mix. (See Chapter 4 of this mini-book for more on effects.) When you mix your song, try to visualize where on stage each of your instruments might be placed.
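If you’re curious about what a pan knob actually does to the signal, here’s a sketch of one common approach: a constant-power pan law with a 3 dB dip at center. Treat the specifics as an assumption for illustration — Pro Tools lets you configure its own pan depth, so these aren’t necessarily the exact numbers it uses.

```python
# Illustrative constant-power pan law (-3 dB at center), not the exact
# curve Pro Tools uses (its pan depth is configurable).
import math

def constant_power_pan(sample, pan):
    """pan: -1.0 = hard left, 0.0 = center, +1.0 = hard right.
    Returns the (left, right) signal for a mono sample."""
    angle = (pan + 1) * math.pi / 4          # map [-1, 1] to [0, pi/2]
    return sample * math.cos(angle), sample * math.sin(angle)

l, r = constant_power_pan(1.0, 0.0)   # centered
print(round(l, 3), round(r, 3))       # 0.707 0.707 (about -3 dB per side)

l, r = constant_power_pan(1.0, -1.0)  # hard left
print(round(l, 3), round(r, 3))       # 1.0 0.0
```

The cosine/sine pairing keeps the total acoustic power (left² + right²) constant as you sweep the knob, which is why a sound doesn’t seem to get louder or quieter as you move it across the stereo field.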
Some people choose to set the panning and depth of their instruments to sound as natural as possible, and others use these settings to create otherworldly sounds. There is no right or wrong when panning and adding effects to simulate depth — just what works for your goals. Don’t be afraid to get creative and try unusual things.
You adjust each instrument’s position from left to right in a mix with the Panning control, located in the Pro Tools Mix window. (See Figure 1-2.) Panning for most songs is pretty straightforward, and I outline some settings in the following sections. Some mixing engineers like to keep their instruments toward the center of the mix; other engineers prefer spreading things way out with instruments on either end of the spectrum. There’s no absolute right or wrong way to pan instruments. In fact, no one says you have to leave any of your instruments in the same place throughout the entire song. Just make sure that your panning choices contribute to the overall effect of the music. (Check out Chapter 5 of this mini-book for how to automate your panning in Pro Tools.)
Lead vocals are usually panned directly in the center. This is mainly because the vocals are the center of attention and panning them left or right takes the focus away from them. Some people will pan the vocals off center if there is more than one lead vocal (as in a duet), but this can get cheesy real fast unless you’re very subtle about it. Of course, you’re the artist, and you may come up with a really cool effect by moving the vocal around.
Because backup vocals are often recorded in stereo, they are usually panned hard left and hard right. If you recorded only one track of backup vocals, you can duplicate the track and pan one copy to each side, just as you would with a stereo track. Then you can either nudge one copy forward or backward in time by a few milliseconds or adjust its pitch up or down slightly to differentiate the two tracks.
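The nudge half of that trick is easy to picture in code. This little Python sketch is illustrative only — in Pro Tools you’d simply drag the region or use a delay plug-in — but it shows what “nudging” amounts to: shifting the copied track later by a few milliseconds by padding silence onto its front.

```python
# Illustrative sketch of the "nudge" trick: duplicate a mono part and
# delay the copy a few milliseconds before panning the copies apart.
def delay_copy(samples, delay_ms, sr=44100):
    """Return a copy of the track shifted later by delay_ms milliseconds."""
    offset = int(sr * delay_ms / 1000)       # delay expressed in whole samples
    return [0.0] * offset + list(samples)    # pad the front with silence

vocal = [0.5, 0.4, 0.3]                      # stand-in for a recorded part
double = delay_copy(vocal, delay_ms=1)       # 1 ms = 44 samples at 44.1 kHz
print(len(double) - len(vocal))              # 44
```

A shift this small isn’t heard as an echo; panned opposite the original, it just makes the two copies read as separate performances.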
In addition to tracks panned to each side, some mixing engineers also have a third backup vocal track panned in the center to add more depth. Your choice to do this depends on how you recorded your backup vocals as well as how many tracks are available for them.
Lead guitar is often panned to the center, or just slightly off-center if the sound in the center of the stereo field is too cluttered. Rhythm guitar, on the other hand, is generally placed somewhere just off-center. Which side doesn’t matter, but it’s usually the opposite side from any other background instruments, such as an additional rhythm guitar, a synthesizer, an organ, or a piano.
Typically, bass guitar is panned in the center, but it’s not uncommon for mixing engineers to create a second track for the bass: panning one to the far left and the other to the far right. This gives the bass a sense of spaciousness and allows more room for bass guitar and kick drum in the mix.
As a general rule, I (and most other people) pan the drums so that they appear in the stereo field much as they would on stage. (This doesn’t mean that you have to, though.) Snare drums and kick drums are typically panned right up the center, with the tom-toms panned from slightly right to slightly left. Hi-hat cymbals often go just to the right of center; ride cymbals are just left of center; and crash cymbals sit from left to right, much like tom-toms.
Percussion instruments tend to be panned just off to the left or right of center. If I have a shaker or triangle part that plays throughout the song, for instance, I’ll pan it to the right an equal distance from center as the hi-hat is to the left. This way, you hear the hi-hat and percussion parts playing off one another in the mix.
These instruments are usually placed just off-center. If your song has rhythm guitar parts, the piano or organ usually goes to the other side. Synthesizers can be panned all over the place. In fact, synths are often actively panned throughout the song: That is, they move from place to place.
As you probably discovered when you were placing your mics to record an instrument, the quality of sound changes when you place a mic closer to — or farther away from — the instrument. The closer you place the mic, the less room ambience you pick up, which makes the instrument sound closer to you, or “in your face.” By contrast, the farther from the instrument you place your mic, the more room sound you hear: The instrument sounds farther away.
Think of standing in a large room and talking to someone to see (well, hear, actually) how this relationship works. When someone stands close to you and talks, you can hear him clearly. You hear very little of the reflections of his voice from around the room. As he moves farther away from you, though, the room’s reflections play an increasing role in the way that you hear him. By the time the other person is at the other side of the room, you hear not only his voice but also the room you’re in. In fact, if the room is large enough, the other person probably sounds as if he were a mile away from you, and all the reflections from his voice bouncing around the room may make it difficult to understand what he says.
You can easily simulate this effect by using your reverb or delay effects processors. In fact, this is often the purpose of reverb and delay in the mixing process. With them, you can effectively “place” your instruments almost anywhere that you want them, from front to back, in your mix.
The type of reverb or delay setting that you use has an effect on how close or far away a sound appears as well. For example, a longer reverb decay or delay sounds farther away than a shorter one.
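A bare-bones feedback delay makes the relationship concrete: the feedback amount controls how long the repeats ring out, and a longer tail reads as farther away. This is a toy sketch, not any particular plug-in’s algorithm.

```python
# Toy feedback delay: each repeat is the dry signal again, delayed and
# scaled by the feedback amount. Higher feedback = longer decay = the
# sound seems farther back in the mix.
def feedback_delay(dry, delay_samples, feedback, repeats=4):
    """Mix progressively quieter echoes of the dry signal into the output."""
    wet = list(dry) + [0.0] * delay_samples * repeats
    gain = feedback
    for n in range(1, repeats + 1):
        for i, s in enumerate(dry):
            wet[i + n * delay_samples] += s * gain
        gain *= feedback                    # each echo is quieter than the last
    return wet

echoes = feedback_delay([1.0], delay_samples=2, feedback=0.5)
print(echoes)  # [1.0, 0.0, 0.5, 0.0, 0.25, 0.0, 0.125, 0.0, 0.0625]
```

With feedback at 0.5, each repeat is half the level of the one before; raise the feedback toward 1.0 and the tail stretches out, which is the “longer decay sounds farther away” effect described above.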
In Chapter 4 of this mini-book, I go into detail about the various effects processors to help you understand how best to use them. I also present settings you can use to create natural-sounding reverb and delay on your tracks, as well as some unusual settings that you can use for special effects.
After you have a rough mix and get your EQ (described in Chapter 3 of this mini-book) and panning settings where you want them, your next step is to determine which parts of which tracks are used when — and sometimes whether a part or track is used at all. If you’re like most musician/producers, you try to get all the wonderful instrumental and vocal parts you recorded as loud as possible in the mix so that each brilliant note can be heard clearly all the time. After all, you didn’t go through all the time and effort to record all those great tracks just to hide them in the mix or (worse yet) mute them, right?
Well, I feel your pain. But when you get to the mixing point of a song, it’s time to take off your musician’s hat and put on the one that reads Producer. And a producer’s job is to sort through all the parts of a song, choose those that add to its effect, and dump those that are superfluous or just add clutter. Your goal is to assemble the tracks that tell the story you want to tell and that carry the greatest emotional impact for the listener.
One of the great joys when listening to music (for me, anyway) is hearing a song that carries me away and pulls me into the emotional journey that the songwriter had in mind. If the song is done well, I’m sucked right into the song; by the end, all I want to do is listen to it again.
What is it about certain songs that can draw you in and get you to feel the emotion of the performers? Well, aside from a good melody and some great performances, it’s how the arrangement builds throughout the song to create tension, release that tension, and build it up again. A good song builds intensity so that the listener feels pulled into the emotions of the song.
Generally, a song starts out quiet, becomes a little louder during the first chorus, and then drops down in level for the second verse (not as quiet as the first, though). The second chorus is often louder and fuller than the first chorus, and is often followed by a bridge section that is even fuller yet (or at least differs in arrangement from the second chorus). The loud bridge section might be followed by a third verse where the volume drops a little. Then a superheated chorus generally follows the last verse and keeps building intensity until the song ends.
When you’re crafting the mix for your song, you have two tools at your disposal to build and release intensity: dynamics and instrumental content (the arrangement).
Dynamics are simply how loud or soft something is — and whether the loudness is emotionally effective. Listen to a classic blues tune (or even some classical music), and you’ll hear sections where the song is almost deafeningly silent, and other sections where you think the band is actually going to step out of the speakers and into your room. This is an effective and powerful use of dynamics. The problem is that this seems to be a lost art, at least in popular music.
It used to be that a song could have very quiet parts and really loud ones. Unfortunately, a lot of CDs nowadays have only one level — loud. This often isn’t the fault of the musicians or even the band’s producer. Radio stations and record company bean counters have fueled this trend, betting that if a band’s music is as loud as (or louder than) other CDs on the market, it’ll attract more attention and sell more copies. (You can read more about this trend in Book 7, Chapter 1.) But consider this: Whether you can hear the music is one thing; whether it’s worth listening to is another.
Building intensity with the arrangement involves varying the amount of sound in each section. A verse with just lead vocal, drums, bass, and an instrument playing the basic chords of the song is going to have less intensity (not to mention volume) than a chorus awash with guitars, backup vocals, drums, percussion, organ, and so on. Most songs that build intensity effectively start with fewer instruments than they end with.
When you mix your song, think about how you can use the instruments to add to the emotional content of your lyrics. For example, if you have a guitar lick played at every break in the vocal line, think about using it less to leave space for lower levels at certain points in your song. If you do this, each lick will provide more impact for the listener and bring more to the song’s emotion.
To create a mix that sounds good, the most critical tools you need are your ears, because your ability to hear the music clearly and accurately is essential. To maximize this ability, you need a decent set of studio monitors and a good idea of how other people’s music sounds on your speakers. You also need to make sure that you don’t mix when your ears are tired. The following sections explore these areas.
One of the best ways to learn how to mix music is to listen to music that you like — and listen, in particular, for how it’s mixed. Put on a CD of something similar to your music (or music with sound that you like) and ask yourself the following questions:
Even if you’re not mixing one of your songs, just sit down once in a while and listen to music on your monitors to get used to listening to music critically. Also, the more well-made music you hear on your monitors, the easier it is to know when your music sounds good on those same speakers.
Unless you spent a lot of time and money getting your mixing room to sound world-class, you’ll have to compensate when you mix to get your music to sound good on other people’s systems. If your room or speakers enhance the bass, you’ll hear more low end than your tracks really contain and mix with too little of it, so the same tracks will sound thin on other people’s systems. On the other hand, if your system lacks bass, your mixes will end up boomy when you listen to them somewhere else.
Reference music can be any music that you like or that helps you to hear your music more clearly. For the most part, choose reference music that has a good balance between high and low frequencies and that sounds good to your ear. That said, some music is mixed really well, which can help you get to know your monitors and train your ears to hear the subtleties of a mix. I name a few examples in the following list. (Disclaimer: I try to cover a variety of music styles in this list, but I can’t cover them all without a list that’s pages long.)
If you’ve ever had a chance to mix a song, you might have found that you do a better mix early on in the process — and the longer you work on the song, the worse the mix gets. In most cases, this is because your ears get tired — and when they do, hearing accurately becomes harder. To tame ear fatigue, try the following:
One great thing about digital recording is that it costs you nothing to make several versions of a mix. All you need is a little (well, actually a lot of) hard-drive space. Because you can make as many variations on your song’s mix as your hard drive allows, you can really experiment, trying new effects settings or active panning in your song and seeing whether you like the results. You might end up with something exciting. At the very least, you end up learning more about your gear. That’s always a good thing.