CHAPTER 7

Webcast Audio Production

Every webcast should have broadcast quality audio. Good quality audio equipment is cheap, basic audio engineering is simple, and streaming audio codecs sound good at low bit rates. There’s no excuse for poor quality—period.

Still, to this day, many webcasts have sub-standard audio quality, which ruins the entire production. People will tolerate sub-optimal video quality if the audio is high quality—the reverse is not true.

To avoid this predicament, this chapter covers everything necessary to produce webcast audio to a high standard. Of course, we can’t teach you how to be an audio engineer in a single chapter, but if you follow the recommendations in this chapter, your audio should sound great. This chapter covers:

•    Audio Engineering Basics

•    Audio Equipment

•    Webcasting Audio Engineering Techniques

Audio Engineering Basics

Audio engineering is fairly simple. Sound waves are converted into electronic voltages, which can be digitized and stored on a hard drive. Figure 7-1 illustrates a digitized sound file displayed in an audio editing program.

image

Figure 7-1
A digitized audio file.

At its most basic level, audio engineering deals with two things: amplitude and frequency. Amplitude is the measure of the strength or volume of a signal, generically referred to as level or gain. Frequency is the number of peaks and troughs per second and determines the pitch of the sound. Piccolos generate high frequencies; bass drums generate low frequencies.
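If you're comfortable with a little code, both concepts are easy to see in a digitized signal. The following sketch uses Python and NumPy (tools chosen purely for illustration, not anything this book requires) to generate one second of a digitized tone and report its amplitude and frequency:

import numpy as np

sample_rate = 44100                          # CD-quality audio: 44,100 samples per second
t = np.arange(sample_rate) / sample_rate     # one second of sample times

amplitude = 0.5         # strength of the signal (its level)
frequency = 440.0       # peaks and troughs per second (its pitch, concert A)
tone = amplitude * np.sin(2 * np.pi * frequency * t)

print("Peak amplitude:", tone.max())                  # about 0.5
print("Samples per cycle:", sample_rate / frequency)  # about 100

Raising the amplitude makes the tone louder; raising the frequency makes it higher-pitched. Every audio file, no matter how complex, is just a much longer list of these samples.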

As mentioned briefly in Chapter 3, streaming media codecs use perceptual models to convert raw audio into a streaming media format. Codecs use signal level and frequency to determine what is most important about an audio signal, and thereby determine what can be discarded. Knowing a little about how codecs work enables us to produce our audio in such a way as to generate a high quality streaming media file.

Audio engineers strive to record noise-free signals with as much fidelity as possible. Not surprisingly, high quality signals produce the highest quality streaming audio files. Why?

•    Broadcast quality audio has a high signal-to-noise ratio, meaning program audio is loud compared to any unwanted noise. A codec therefore treats the noise as “unimportant” or “inaudible” and tends to discard it.

•    Broadcast quality audio has high fidelity, meaning a full frequency range and a natural dynamic range. A codec has a better chance of faithfully encoding a full-fidelity source file than a compromised original.

The first point above is all about level; the second is all about frequency. Our goal, therefore, is to optimize our recording process to get the most level and a full, well-balanced range of frequencies.
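To put a number on the first point, signal-to-noise ratio is measured in decibels. Here's a minimal sketch (Python and NumPy again, with made-up levels purely for illustration) that compares a program signal against a bed of low-level hiss:

import numpy as np

def rms(x):
    """Root-mean-square (average) level of a signal."""
    return np.sqrt(np.mean(np.square(x)))

def snr_db(signal, noise):
    """Signal-to-noise ratio in decibels."""
    return 20 * np.log10(rms(signal) / rms(noise))

rng = np.random.default_rng(0)
t = np.arange(44100) / 44100.0
program = 0.5 * np.sin(2 * np.pi * 440 * t)    # stand-in for program audio
hiss = 0.005 * rng.standard_normal(len(t))     # low-level equipment noise

print(f"SNR: {snr_db(program, hiss):.1f} dB")  # roughly 37 dB with these invented levels

The louder the program is relative to the noise, the higher the number, and the easier it is for a codec to spend its bits on the program rather than the hiss.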

Level

Level is the single most important concept to understand in audio engineering, for a number of reasons. First, all equipment has an optimal operating range. At the low end of this range, all audio equipment has internal noise, which is added to any signal that passes through. If the signal is low, the added noise is noticeable. If the signal is high, the added noise is not audible.

At the high end, audio equipment can only handle signals up to a certain level. If that level is exceeded, distortion occurs. This is audible as a disturbing crackle or buzz. To operate audio equipment optimally you want to use signal levels that are high enough so that no audible noise is being added, but not so loud as to distort.

In addition to internal audio equipment levels, audio engineers must be sensitive to the relative levels of different sound sources in an environment. This is particularly important with streaming media.

Our hearing is incredibly sophisticated. The combination of our ears, as highly sensitive listening devices, and our brains, which adaptively filter this information, enables us to listen selectively. We can pick out a bump in the middle of the night when we’re half asleep; we can eavesdrop on the next table in the middle of a crowded, noisy restaurant.

Codecs are nowhere near this sophisticated. Using the noisy restaurant example, the relative signal levels of the hustle and bustle of the restaurant may even exceed the level of the conversation upon which we’re eavesdropping. But we can still actively decipher the information that is coming in, and pick out the conversation. Codecs cannot do this. They can divide the audible frequency range into different slices, but cannot distinguish between things that fall into the same frequency range.

Therefore, it is important when producing streaming media to pay close attention to level, both in the equipment (known as the signal chain), and in the environment. The first step is to set up a gain structure for your webcast.

Setting Up a Gain Structure

Gain structure refers to how a signal is being amplified throughout a signal chain. Setting up a gain structure means making sure each and every piece of equipment is operating in its ideal range. To do this, start with the first piece of equipment, and work your way through the chain, checking the levels at each step.
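The arithmetic behind this is simple: gains expressed in decibels add as the signal moves through the chain. The sketch below walks a hypothetical microphone-level signal through a made-up chain (the stage names and figures are illustrative, not measurements from any particular mixer):

# Hypothetical stage gains in dB; illustrative figures only.
source_level_db = -50.0        # rough ballpark for a microphone-level signal
chain = [
    ("mic preamp (channel gain knob)", +45.0),
    ("channel fader at 'U' (unity)",     0.0),
    ("master fader at 'U' (unity)",      0.0),
    ("trim into the sound card",        -2.0),
]

level_db = source_level_db
for stage, gain_db in chain:
    level_db += gain_db        # decibel gains simply add along the chain
    print(f"after {stage:32s}: {level_db:+.1f} dB")

If any single stage boosts the signal too little (burying it in noise) or too much (clipping the next stage), every stage downstream inherits the problem, which is why you check the level at each step.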

Most audio equipment allows you to adjust both the input and output gain. Most audio equipment also provides some sort of meter to provide a visual indication of the level. Figure 7-2 shows some examples of audio meters.

image

Figure 7-2
Different kinds of audio level meters. Photographs provided courtesy of PreSonus Audio Electronics.

As a simple example, let’s take the case of a microphone plugged into a small mixer, which is then plugged into a sound card (see Figure 7-3). Starting at the mic and moving towards the sound card, the first thing to be set is the input level of the microphone.

image

Figure 7-3
A simple audio signal chain. Photographs provided courtesy of Shure Incorporated, LOUD Technologies, and Echo Digital Audio Corporation.

Plug the microphone into the first channel. At the top of the channel strip, most mixers have a knob marked “gain.” This adjusts the input gain for whatever is plugged into the channel. Gain knobs are generally rotary controls that start at the 7 o’clock position and rotate up to the 5 o’clock position, 5 o’clock being maximum gain. Start with this knob in the 1 o’clock position, which is roughly two thirds of maximum gain.

At the bottom of the channel strip will be a knob or a fader (a square knob that slides up and down) to adjust the level of this channel with respect to the other channels. Start with this at the 0 dB (or “U” for unity gain) position. You must also set the master fader for the mixer at the 0 dB (or “U”) position.

Speak into the microphone and check the level on the mixer’s meters. Make sure you’re speaking at a normal level, and at the same distance from the mic that the talent will use during the webcast. Adjust the gain knob at the top of the channel until your level on the meters is in the −6 to 0 dB range. Occasional peaks above 0 dB are allowed. If you plan on using multiple mics or additional inputs, set the levels for each input in the same manner.

The last step required is to adjust the input level to the sound card. One important note: input levels for digital equipment must never exceed 0 dB.

Inside the Industry


Analog vs. Digital Levels

There is one key difference between analog and digital equipment, and it has to do with how audio signals are measured, digitized, and stored. A full discussion would be incredibly tedious, as there are different methods and scales used to measure audio. What follows is an extremely simplified explanation.

Essentially, 0 dB is considered to be the maximum signal level, though analog audio equipment can operate at levels significantly higher. This additional range is known as headroom. The amount of headroom varies among different types of equipment. Suffice it to say that occasional peaks above 0 dB are fine on analog equipment.

However, 0 dB is an absolute maximum in the digital realm. Because of the way digital audio is stored, there is nothing above 0 dB. If an input signal exceeds 0 dB, the top of the waveform gets clipped off, and the result is known as square-wave distortion. Needless to say, it sounds horrible.

When setting levels in the digital realm, be far more conservative, with peaks in the −10 dB to −6 dB range. This leaves you plenty of headroom for unforeseen spikes in your audio signal, and keeps your signal distortion-free.
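A few lines of code make the difference concrete. In a digital system there is simply no way to represent a sample above full scale (0 dB), so the tops of the waveform get flattened. This sketch (Python and NumPy, illustrative numbers only) shows what happens to a signal that tries to peak above full scale:

import numpy as np

full_scale = 1.0                       # 0 dB (full scale) in a floating-point representation
t = np.arange(1000) / 44100.0

hot_signal = 1.5 * np.sin(2 * np.pi * 440 * t)          # tries to peak about 3.5 dB over full scale
stored = np.clip(hot_signal, -full_scale, full_scale)   # what the digital system actually keeps

def peak_db(x):
    return 20 * np.log10(np.max(np.abs(x)) / full_scale)

print(f"Intended peak: {peak_db(hot_signal):+.1f} dB")  # about +3.5 dB, which cannot be stored
print(f"Stored peak:   {peak_db(stored):+.1f} dB")      # 0.0 dB, with the waveform tops sliced flat

Those flattened tops are the square-wave distortion described above, and no amount of processing after the fact can restore them.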

Adjusting the input level for your sound card can be done in a number of ways. If you’re using a standard sound card, open the Windows Volume Control by clicking the speaker icon in your taskbar or selecting “Sounds and Audio Devices” from the Control Panel. This opens the Volume Control window (see Figure 7-4).

image

Figure 7-4
The Windows Volume Control window.

To adjust the input level, select Properties from the Options menu, and select the Recording radio button. This displays software faders for all your input options. You should be using the line input—microphone inputs on sound cards should not be used except in extreme cases, as they’re very low quality inputs.

Sound cards have optimal operating ranges just like every other piece of audio equipment. The fader for your line input should be somewhere between 1/2 and 2/3 of maximum. Check your input levels on the application you’re using. If you can’t achieve a satisfactory level with the software fader, adjust the output of your mixer until your level looks right.

ALERT


It’s best to set your input levels using the meters included with an audio editing program. The meters on encoding applications are notoriously inaccurate.
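If you want a second opinion on any meter, a few lines of code can report the peak level of a short test recording. The sketch below assumes a 16-bit WAV file named test_clip.wav exported from your audio editor (the filename and format are our assumptions, not anything prescribed by a particular tool):

import wave
import numpy as np

# Hypothetical file: record a short, representative test clip and export it as a 16-bit WAV.
with wave.open("test_clip.wav", "rb") as wf:
    assert wf.getsampwidth() == 2, "this sketch assumes 16-bit samples"
    raw = wf.readframes(wf.getnframes())

samples = np.frombuffer(raw, dtype=np.int16).astype(np.float64)
peak_db = 20 * np.log10(np.max(np.abs(samples)) / 32768.0)

print(f"Peak level: {peak_db:+.2f} dB")   # aim for roughly -10 to -6 dB on digital inputs

If the measured peak disagrees wildly with what your encoder’s meters show, trust the measurement and adjust your gain structure accordingly.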

Some sound cards include their own mixer. It may look slightly different than the Windows mixer, but the functionality should be the same. Professional sound cards do not allow you to adjust the input level—they assume the input will be at a standard broadcast level. In this case, the input level should exactly match the output level of your mixer. If not, adjust the output level of the mixer until the input level to your sound card is appropriate.

Once the gain structure is set, you’ve ensured that all the equipment is operating optimally, and the minimum amount of noise is being added to your signal. This simple step goes a long way towards ensuring high quality audio.

The best part about setting up a gain structure is that you only have to do it once. You can safely assume that people will always speak into microphones at approximately the same level, and that the relative balance between your inputs will remain roughly the same. You may need to make slight adjustments, but that can be done using the faders on your mixer. In fact, that’s exactly what mixers are for, and why they’re indispensable. The next section talks about the importance of equipment and how it can improve your audio quality.

Audio Equipment

Thanks in part to the explosion in home recording, audio equipment is relatively inexpensive. Gear that was once found only in recording studios is now widely available, and a broadcast quality audio setup can be put together for as little as $500. The equipment you’ll need depends on the scale of your webcast, but chances are you’ll need a few microphones, a mixing desk (also known as a mixer), a compressor, and some cables to connect it all together.

Microphones

A good microphone is very important because it is the first item in your signal chain. A mic with poor frequency response and dynamics will provide an inferior signal. No audio equipment on the planet can take a bad signal and make it sound good. Microphones come in a variety of shapes and sizes, each with a specific application in mind.

All mics fall into two basic categories: dynamic and condenser. Dynamic mics are rugged and resistant to handling noise, which makes them a perfect choice for webcasts. Condenser mics are more sensitive and have better frequency response, but are highly susceptible to handling noise. Condenser mics are excellent in controlled situations such as studio webcasts.

The next thing to consider when choosing a microphone is the directional response. Some mics are omnidirectional, meaning they pick up sound from all directions. Others, generally known as cardioid mics, are more sensitive to sound coming from one direction. For most webcast applications, choose a directional mic, and make sure it’s pointed directly at the talent.

 

Author’s Tip

When using lavaliere mics, it’s best to use omnidirectional models. Because they’re physically attached to the talent, they don’t pick up too much extraneous noise, and the omni pickup pattern keeps the level consistent when the talent turns their head from side to side.

Finally, there are different types of mics for different applications. Handheld mics can be held by the talent, or placed in a microphone stand and pointed at the talent. Lavaliere, or clip-on, mics are clipped onto an item of the talent’s clothing, such as a lapel or shirt (see Figure 7-5).

It’s always best to have a mixture of mics at your disposal. Some people are comfortable with handheld mics; others are hopeless. You should also consider investing in a wireless mic setup, as it reduces cable clutter and allows the talent far greater freedom of movement.

image

Figure 7-5
Common webcast microphones a) handheld (Shure SM58) and b) lavaliere (Audio-Technica AT803b). These photographs were provided courtesy of Shure Incorporated and Audio-Technica.

Mixing Desk

Mixing desks (or mixers) are important for two main reasons. First, if you have multiple sources, you need to be able to combine the signals and adjust the relative levels of each. Second, they have high quality mic inputs.

Microphone signals require enormous amounts of amplification. Poor quality mic inputs, such as those available on most sound cards, add noise during the amplification process. Using a mixing desk will get you far better sound quality, so be sure to plug your microphones into a mixer.

Another reason mixers are helpful is that they generally offer equalization (EQ) controls. Later in the chapter we’ll see that EQ can be used to improve your audio quality.

Mixers are available in a variety of sizes and price points. Buy one that suits your budget and has enough inputs and outputs for your webcast.

Signal Processing Equipment

There are a number of ways you can process your audio signal to improve the overall quality; these techniques are discussed in the next section. They can be applied using an audio editing program or external hardware, but during a webcast, signal processing must be done externally, because encoding requires all available computing resources.

 

Author’s Tip

Some specialized encoding solutions include audio signal processing. These solutions do not require external hardware—but then again, having a backup never hurts.

The two most common types of signal processing are compression and equalization (EQ). EQ will generally be built into your mixer, but for specialized applications you may want to invest in a dedicated EQ unit. Compression, as we’ll see in the next section, is a very useful tool to have at your disposal. You’ll almost certainly want to buy an external compressor (see Figure 7-6).

image

Figure 7-6
FMR Audio’s RNC (Really Nice Compressor) 1773. This photograph was provided courtesy of FMR Audio.

Monitoring

It’s hard to produce high quality audio if you can’t hear what you’re working with. If you’re working in a studio environment, invest in a good pair of studio monitors. You should have a cheap pair of PC speakers to test your audio quality, but do not use these for critical monitoring. If you’re working on location, buy a good set of headphones.

Additional Audio Tools

There are a number of other things that you’ll want to include in your webcasting kit. The most obvious is cables. Make sure you’ve got plenty of cables to connect all your gear together. Cables are constantly being handled and stepped on, and endure a lot of abuse; eventually they fail, so bring plenty of spares.

An assortment of adapters can save your life. While it’s always best to avoid using adapters (because they’re prime candidates for failure), you may not always have that luxury. You’ll probably also want to purchase an assortment of microphone stands and pop shields. Pop shields protect microphones from the popping distortion caused by plosive sounds, such as words starting with the letter ‘p.’ They come in a variety of sizes and shapes, and are most commonly seen as foam coverings on the end of handheld mics.

 

Author’s Tip

Use balanced cables whenever possible. Balanced cables, such as XLR microphone cables and TRS (Tip-Ring-Sleeve) cables, are far more resistant to noise, particularly over long cable runs.

Webcast Audio Engineering Techniques

Once you’ve set up your equipment and established your gain structure, you’re more or less ready to start encoding. However, there are a number of techniques that can make your webcast sound more professional. All of the techniques discussed below fall under the category of “good engineering practice.”

Get Rid of Ground Hum

One of the most common problems encountered in location audio production is ground hum. Ground hum is immediately recognizable as a low frequency humming noise in your audio signal. Essentially, it is a small amount of the wall power “leaking” into the signal chain. Since wall power oscillates at 60 Hz (50 Hz in Europe and many other parts of the world), it is well within our range of hearing. There are two basic steps you can take to avoid ground hum, and a technique to use if all else fails.

1.   Plug all your equipment into the same power source. If everything is using the same power source, in theory they all have the same ground potential, and therefore no current will leak.

2.   Use balanced cables. Long cable runs can pick up hum from nearby power cables. Balanced cables employ a simple trick (phase inversion) to improve their noise resistance; a short sketch of the trick appears after these steps. Using balanced cables should virtually eliminate this kind of hum.

3.   If all else fails, you can use an isolation transformer. Isolation transformers connect two pieces of audio equipment without a physical connection. If this sounds like magic, in a way it is—ask any audio engineer at the end of his rope trying to get rid of a ground hum.

Insert the isolation transformer into the signal chain with the proper cables. In webcasting applications, ground hum generally appears between the A/V production equipment and the encoding stations.
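For the curious, the phase inversion trick mentioned in step 2 can be sketched in a few lines (Python and NumPy, with invented levels). A balanced cable carries the signal twice, once inverted; hum induced along the run lands equally on both conductors, so re-inverting and summing at the far end cancels it:

import numpy as np

t = np.arange(44100) / 44100.0
signal = 0.5 * np.sin(2 * np.pi * 1000 * t)     # the program audio (a 1 kHz tone here)
hum = 0.1 * np.sin(2 * np.pi * 60 * t)          # 60 Hz hum induced along the cable run

# A balanced cable carries the signal on two conductors, "hot" and "cold",
# with the cold copy inverted. Induced hum lands equally on both.
hot = signal + hum
cold = -signal + hum

# The balanced input re-inverts the cold leg and sums, cancelling the common hum.
recovered = (hot - cold) / 2

print("Worst-case residual hum:", np.max(np.abs(recovered - signal)))   # essentially zero

An unbalanced cable has no inverted copy to compare against, which is why it passes whatever hum it picks up straight into your mix.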

Ambient Microphones

If you’re webcasting a musical event, ambient mics are a must. One of the most important aspects of a live broadcast is the ambience of the event. Our ears can judge the size of a venue by the sound of the audience; if you can’t hear the audience, your event will sound flat.

If you’re broadcasting from a smaller venue where loud music is being played, there’s another reason why you need ambient mics. In this case, much of the music may not be coming through the PA system. If you’re getting a feed from the soundman, you’ll hear mostly vocals, because the other instruments on stage are already loud enough on their own, or need only a small amount of reinforcement in the PA. Without ambient mics, the mix will sound strange.

It’s best to use a pair of ambient mics to get a full stereo effect. They should be placed coincident (in the same spot), in a “v” shape, at 90 degrees to each other, with the microphone capsules (where you speak into the mic) as close to each other as possible.

 

Author’s Tip

If you’re in a small venue and trying to capture what is coming off stage, the mics should be about twenty feet in front of the stage, facing the stage. If you’re at a large venue and trying to capture the audience, they should be placed at the lip of the stage, facing the audience.

Deciding how much ambience to use in your mix is an aesthetic judgment and largely dependent on the sound of the room. If the room sound is good, then you can use a lot of ambience. In fact you may want to base the mix on the ambient mics, and then mix in a bit of the PA feed to give the mix some clarity. If the room sound isn’t that great, it will probably make more sense to base the mix on the PA feed, and mix in a bit of ambience to enliven the mix.

Compression

Compression is an audio engineering technique in which the dynamic range of an audio signal is reduced (not to be confused with the data compression performed by codecs). It can be very helpful, particularly during webcasts, because it makes the audio level more consistent. A consistent audio level is good because there is less chance of a stray spike in the audio causing distortion. It also sounds far more professional.

Television and radio use compression liberally. In fact that’s one of the reasons to use compression—people are accustomed to hearing compressed audio. Another good reason to use compression is that it tends to boost the low frequencies, thereby making the audio sound warmer and fuller.

Audio compressors operate by attenuating the audio when it exceeds a certain threshold. Because the loud sections of the signal are attenuated, the overall gain of the signal should be boosted to maintain the original gain structure. This is what creates the consistency and fullness of sound.

Setting up a compressor is easy. Figure 7-6 shows the front panel of a compressor. Working from left to right, there are knobs to set the threshold, ratio, attack and release times, and overall gain:

•    Threshold: Determines where the compression kicks in

•    Ratio: Determines how much to attenuate the signal, with higher ratios meaning more compression

•    Attack and Release Times: Determine how quickly the attenuation is applied when the signal exceeds the threshold (attack), and how quickly it is removed once the signal falls back below it (release)

•    Gain: Used to restore the gain lost during the compression stage

Start with a mild compression setting. Use a threshold of –10 dB, a ratio of 4:1, an attack time of around 10 milliseconds, and a release time of half a second. Send a signal through your compressor, and watch the gain reduction meters to see how much the signal is being attenuated. Set the gain to compensate for this.
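If it helps to see what the threshold and ratio actually do to a level reading, here is a minimal sketch of the static compression curve using those starting settings (attack and release smoothing are deliberately left out to keep it short; this is an illustration, not a model of any particular compressor):

def compress_db(level_db, threshold_db=-10.0, ratio=4.0, makeup_db=0.0):
    """Static compression curve: attenuate whatever exceeds the threshold."""
    if level_db <= threshold_db:
        out_db = level_db                             # below threshold: leave it alone
    else:
        out_db = threshold_db + (level_db - threshold_db) / ratio
    return out_db + makeup_db                         # makeup gain restores the lost level

for level in (-30.0, -10.0, -2.0, 0.0):
    print(f"in {level:+6.1f} dB -> out {compress_db(level):+6.1f} dB")
# With a -10 dB threshold and a 4:1 ratio, a 0 dB peak comes out at -7.5 dB:
# the 10 dB above the threshold is squeezed down to 2.5 dB.

Quiet material passes through untouched, loud peaks are reined in, and the makeup gain brings the whole signal back up, which is exactly where the fuller, more consistent sound comes from.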

Compare your audio signal before and after compression (most compressors have a bypass switch for exactly this purpose). You should hear a fuller, more “present” sound. Compression tends to bring things forward in the mix, making them sound like they’re right in front of you. The more compression you add, the more noticeable the effect becomes.

ALERT


Speech is very compression-tolerant. It can be tempting to heavily compress spoken word content, but be careful. A little bit of compression goes a long way. You should always use light to medium compression to keep your levels under control and to make your presentation sound more professional. Add too much, however, and you’ll give your audience a headache.

It can be tempting to heavily compress your audio because of the presence and warmth it adds. Add too much, however, and the programming becomes fatiguing. Take drive-time radio, for example. The DJs scream and yell, the traffic report cuts in, advertisements blare, and each-and-every-word-is-as-loud-as-possible. Sure, it gets your attention, but after a while it drives you nuts.

Another example is television advertisements. They sound like they’re recorded louder than regular television programming, but in fact they’re not—they’re heavily compressed, which makes them seem extra loud.

Programming should have dynamics. Whether it’s a distance learning course or a music broadcast, the natural flow will include some loud and some soft sections. Using compression judiciously reduces the dynamic range, making it easier to control. Using too much compression eliminates the dynamic range, which sounds unnatural, and, as we’ve all experienced, annoying.

EQ

Equalization, or EQ, is the process of turning certain frequencies in your audio signal up or down. It can be additive, such as adding some low frequencies to “warm up” a sound, or corrective, such as removing harshness from your audio. Either way, the basic idea is to grab a knob and twist until it sounds better.

 

Author’s Tip

If you want to do more surgical EQ, you’ll have to buy an external EQ unit, such as a graphic or parametric equalizer. Both offer much finer control, not only over the frequency you’re adjusting, but also over how much of the surrounding frequency range is affected.

Most mixing desks come with built-in EQ, generally at fixed frequencies. Because the frequencies are fixed, these controls are somewhat limited, but they’re great for adding “just a touch” of EQ. For example, this type of EQ enables you to brighten up a sound by adding a little bit of treble, or warm up a signal by adding a little bass. Similarly, you can get rid of hiss by “rolling off” some high end, and clear things up by turning down the bass.

To EQ your audio, listen closely to the signal and ask yourself two questions. First, is anything missing? Does the signal need some low end to warm it up, or some high end for “sparkle”? If it’s hard to understand, it may need a midrange boost.

Next, is there anything that sounds wrong? For example, a muffled signal can be cleared up by turning down the low frequencies. Harsh audio can be tamed by turning down the midrange. Table 7-1 lists some useful frequencies and descriptions of what they sound like. You can use this table as a guide when using EQ.

Table 7-1
Useful EQ frequencies.

EQ Range        Contents
20–60 Hz        Extreme low bass. Most speakers cannot reproduce this.
60–250 Hz       The audible low end. Files with the right amount of low end sound warm; files without enough sound thin.
250 Hz–2 kHz    The low-midrange. Files with too much in the low-mids are hard to listen to and sound telephone-like.
2–4 kHz         The high-midrange, where most speech information resides. Cutting this range in your music and boosting around 3 kHz in your narration makes the narration more intelligible.
4–6 kHz         The presence range. Provides clarity in both voice and musical instruments. Boosting 5 kHz can make your music or voiceover (not both!) seem closer to the listener.
6–20 kHz        The very high frequencies. Boosting here adds “air,” but can also cause sibilance problems.
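Your mixer’s EQ knobs obviously don’t require any code, but the “roll off some high end to get rid of hiss” case can be sketched with a standard low-pass filter. The example below uses Python with NumPy and SciPy (our choice for illustration, not tools covered in this book) to attenuate energy above 6 kHz while leaving the speech range largely untouched:

import numpy as np
from scipy.signal import butter, lfilter

sample_rate = 44100
t = np.arange(sample_rate) / sample_rate
rng = np.random.default_rng(0)

voice = 0.5 * np.sin(2 * np.pi * 300 * t)          # stand-in for a speech signal
hiss = 0.05 * rng.standard_normal(sample_rate)     # broadband noise standing in for hiss
noisy = voice + hiss

# "Rolling off some high end": a gentle low-pass at 6 kHz attenuates the 6-20 kHz
# range where hiss lives, while leaving the speech range (roughly 250 Hz to 4 kHz) alone.
b, a = butter(2, 6000, btype="low", fs=sample_rate)
cleaned = lfilter(b, a, noisy)

print("Hiss level before filtering:", np.sqrt(np.mean(hiss ** 2)))
print("Hiss level after filtering: ", np.sqrt(np.mean(lfilter(b, a, hiss) ** 2)))

A graphic or parametric equalizer does the same kind of job, with much finer control over where the cut or boost lands and how wide it is.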

Much like compression, it can be tempting to use too much EQ. In most cases you should use mild EQ settings to enhance your audio. If you’re twisting any knobs fully clockwise or counter-clockwise, you’ve gone too far. Be sure to compare your EQ’d audio to the original; if there is too much bass or too much treble, you should be able to hear it.

Conclusion

With a little investment in decent audio equipment, and enough time to set up a proper gain structure, you’re most of the way towards creating high quality audio, and therefore broadcast quality streaming media. To get that extra bit of quality, use light compression and a bit of EQ to make your audio truly shine.

The next chapter delves into the wonderful world of video, and some techniques you can employ to make your streams look better.
