Chapter 1. Behavior of Sound

What is sound? The answer depends on who you ask. A physicist will tell you that sound is both a disturbance of molecules caused by vibrations transmitted through an elastic medium (such as air) and the interaction of these vibrations within an environment. This definition does not mean much to the psychologist, who thinks of sound as a human response.

To put the question another way: If a tree falls in a forest and there is no one to hear it, does the falling tree make a sound? The physicist would say yes because a falling tree causes vibrations, and sound is vibration. The psychologist would probably say no because, without a perceived sensation, there can be no human response; hence, there is no sound. In practical terms, both the physicist and the psychologist are right. Sound is a cause-and-effect phenomenon, and the psychological cannot really be untangled from the physical. Thus, in audio production, you need to understand both the objective and the subjective characteristics of sound. Not to do so is somewhat like arguing about the falling tree in the forest—it is an interesting, but unproductive, exercise.

The Sound Wave

Sound is produced by vibrations that set into motion longitudinal waves of compression and rarefaction propagated through molecular structures such as gases, liquids, and solids. The molecules closest to the vibrating object are set into motion first; they then pass on their energy to adjacent molecules, starting a reaction—a sound wave—much like the waves that result when a stone is dropped into a pool. The transfer of momentum from one displaced molecule to the next propagates the original vibrations longitudinally from the vibrating object to the hearer. What makes this reaction possible is air or, more precisely, a molecular medium with the property of elasticity. Elasticity is the property by which a displaced molecule tends to pull back to its original position after its initial momentum has caused it to displace nearby molecules.

As a vibrating object moves outward, it compresses molecules closer together, increasing pressure. Compression continues away from the object as the momentum of the disturbed molecules displaces the adjacent molecules, producing a crest in the sound wave. When a vibrating object moves inward, it pulls the molecules farther apart and thins them, creating a rarefaction. This rarefaction also travels away from the object in a manner similar to compression except that it decreases pressure, thereby producing a trough in the sound wave (see Figure 1-1). As the sound wave moves away from the vibrating object, the individual molecules do not advance with the wave; they vibrate at their average resting place until their motion stills or they are set in motion by another vibration. Inherent in each wave motion are the components that make up a sound wave: frequency, amplitude, velocity, wavelength, and phase (see Figures 1-1, 1-2, and 1-4).

Figure 1-1. Components of a sound wave. The vibrating object causes compression in sound waves when it moves outward (causing molecules to bump into one another). The vibrating object causes rarefaction when it moves inward (pulling the molecules away from one another).

Figure 1-2. Amplitude of sound. The number of molecules displaced by a vibration creates the amplitude, or loudness, of a sound. Because the number of molecules in the sound wave in (b) is greater than the number in the sound wave in (a), the amplitude of the sound wave in (b) is greater.

Table 1-3. Selected frequencies and their wavelengths.

Frequency (Hz)    Wavelength      Frequency (Hz)    Wavelength
20                56.5 feet       1,000             1.1 feet
31.5              35.8 feet       2,000             6.7 inches
63                17.9 feet       4,000             3.3 inches
125               9.0 feet        6,000             2.2 inches
250               4.5 feet        8,000             1.6 inches
440               2.5 feet        10,000            1.3 inches
500               2.2 feet        12,000            1.1 inches
880               1.2 feet        16,000            0.07 feet

Figure 1-4. Sound waves. (a) Phase is measured in degrees, and one cycle can be divided into 360 degrees. It begins at 0 degrees with 0 amplitude, then increases to a positive maximum at 90 degrees, decreases to 0 at 180 degrees, increases to a negative maximum at 270 degrees, and returns to 0 at 360 degrees. (b) Selected phase relationships of sound waves.

Frequency

When a vibration passes through one complete up-and-down motion, from compression through rarefaction, it has completed one cycle. The number of cycles a vibration completes in one second is expressed as its frequency. If a vibration completes 50 cycles per second (cps), its frequency is 50 hertz (Hz); if it completes 10,000 cps, its frequency is 10,000 Hz, or 10 kilohertz (kHz). Every vibration has a frequency, and humans with excellent hearing may be capable of hearing frequencies from 20 to 20,000 Hz. The limits of low- and high-frequency hearing for most humans, however, are about 35 to 16,000 Hz. Frequencies just below the low end of this range, called infrasonic, and those just above the high end of this range, called ultrasonic, are sensed more than heard, if they are perceived at all.

These limits change with natural aging, particularly in the higher frequencies. Generally, hearing acuity diminishes to about 15,000 Hz by age 40; to 12,000 Hz by age 50; and to 10,000 Hz or lower beyond age 50. With frequent exposure to loud sound, the audible frequency range can be adversely affected prematurely.

Psychologically, and in musical terms, we perceive frequency as pitch—the relative tonal highness or lowness of a sound. Pitch and the frequency spectrum are discussed in Chapter 3.

Amplitude

We noted that vibrations in objects stimulate molecules to move in pressure waves at certain rates of alternation (compression/rarefaction) and that this rate determines frequency. Vibrations not only affect the molecules’ rate of up-and-down movement but also determine the number of displaced molecules set in motion from equilibrium to a wave’s maximum height (crest) and depth (trough). This number depends on the intensity of a vibration: the more intense the vibration, the more molecules are displaced.

The greater the number of molecules displaced, the greater the height and the depth of the sound wave. The number of molecules in motion, and therefore the size of a sound wave, is called amplitude (see Figure 1-2). Our subjective impression of amplitude is a sound’s loudness or softness. Amplitude is measured in decibels.

The Decibel

The decibel (dB) is a dimensionless unit and, as such, has no specifically defined physical quantity. Rather, as a unit of measure, it is used to compare the ratio of two quantities, usually in relation to acoustic energy, such as sound pressure, or electric energy, such as power and voltage (see Chapter 8). In mathematical terms, it is 10 times the logarithm to the base 10 of the ratio between the powers of two signals: dB = 10 log (P1 / P0). P0 is usually a reference power value with which another power value, P1, is compared. The unit is abbreviated dB because it stands for one-tenth (deci) of a bel (named for Alexander Graham Bell). The bel was the amount a signal dropped in level over a 1-mile length of telephone wire. Because the amount of level loss was too large to work with as a single unit of measurement, it was divided into tenths for more practical application.
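A quick way to internalize the ratio nature of the decibel is to compute a few values directly. The short Python sketch below applies dB = 10 log (P1 / P0) to two reference power ratios; the helper name power_ratio_db is ours, chosen only for illustration.

```python
import math

def power_ratio_db(p1, p0):
    """Return the level difference in decibels between two power values."""
    return 10 * math.log10(p1 / p0)

# Doubling the power of a signal raises its level by about 3 dB.
print(power_ratio_db(2.0, 1.0))   # ~3.01 dB

# A power ratio of 10:1 corresponds to exactly 10 dB.
print(power_ratio_db(10.0, 1.0))  # 10.0 dB
```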

There are other acoustic measurements of human hearing based on the interactive relationship between frequency and amplitude. These are discussed in Chapter 3.

Velocity

Although frequency and amplitude are the most important physical components of a sound wave, another component—velocity, or the speed of a sound wave—should be mentioned. Velocity usually has little impact on pitch or loudness and is relatively constant in a controlled environment. Sound travels 1,130 feet per second at sea level when the temperature is 70°F (Fahrenheit). The denser the molecular structure, the greater the vibrational conductivity. Sound travels 4,800 feet per second in water. In solid materials such as wood and steel, sound travels 11,700 and 18,000 feet per second, respectively.

In air, sound velocity changes significantly in very high and very low temperatures, increasing as air warms and decreasing as it cools. For every 1°F change, the speed of sound changes 1.1 feet per second.
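The rule of thumb above (1,130 feet per second at 70°F, changing 1.1 feet per second per degree Fahrenheit) can be expressed as a one-line function. The sketch below is an approximation for air only, and the function name is our own.

```python
def speed_of_sound_fps(temp_f):
    """Approximate speed of sound in air (feet per second) at a given
    Fahrenheit temperature: 1,130 ft/s at 70 degrees F, changing by
    1.1 ft/s for every 1 degree F change."""
    return 1130 + 1.1 * (temp_f - 70)

print(speed_of_sound_fps(70))   # 1130.0 ft/s (reference condition)
print(speed_of_sound_fps(32))   # ~1088.2 ft/s on a freezing day
print(speed_of_sound_fps(100))  # ~1163.0 ft/s in very hot air
```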

Wavelength

Each frequency has a wavelength, which is determined by the distance a sound wave travels to complete one cycle of compression and rarefaction; that is, the length of one cycle is equal to the velocity of sound divided by the frequency of sound (λ = v/f) (see Figure 1-1). Therefore, frequency and wavelength change inversely with respect to each other. The lower a sound’s frequency, the longer its wavelength; the higher a sound’s frequency, the shorter its wavelength (see Table 1-3).
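The relationship λ = v/f can be checked against the values in Table 1-3. The sketch below assumes the 1,130-feet-per-second velocity given in the previous section; the function name is ours.

```python
# Wavelength (in feet) from frequency, using lambda = v / f with
# v = 1,130 ft/s (sea level, 70 degrees F).
def wavelength_feet(frequency_hz, velocity_fps=1130.0):
    return velocity_fps / frequency_hz

for f in (20, 250, 1000, 8000, 16000):
    wl = wavelength_feet(f)
    # Print in feet, and in inches for the short high-frequency wavelengths.
    print(f"{f:>6} Hz -> {wl:7.2f} feet ({wl * 12:6.2f} inches)")
```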

Acoustical Phase

Acoustical phase refers to the time relationship between two or more sound waves at a given point in their cycles.[1] Because sound waves are repetitive, they can be divided into regularly occurring intervals. These intervals are measured in degrees (see Figure 1-4).

If two identical waves begin their excursions at the same time, their degree intervals will coincide and the waves will be in phase. If two identical waves begin their excursions at different times, their degree intervals will not coincide and the waves will be out of phase.

Waves in phase reinforce each other, increasing amplitude (see Figure 1-5a); out of phase they weaken each other, decreasing amplitude. When two sound waves are exactly in phase (0-degree phase difference) and have the same frequency, shape, and peak amplitude, the resulting waveform will be twice the original peak amplitude. Two waves exactly out of phase (180-degree phase difference) with the same frequency, shape, and peak amplitude cancel each other (see Figure 1-5b); however, these two conditions rarely occur in the studio.

Figure 1-5. Sound waves in and out of phase. (a) In phase: Their amplitude is additive. Here the sound waves are exactly in phase—a condition that rarely occurs. It should be noted that decibels do not add linearly; as shown, the increase in amplitude here is 6 dB. (b) Out of phase: Their amplitude is subtractive. Sound waves of equal amplitude 180 degrees out of phase cancel each other. This situation also rarely occurs.
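The 6 dB increase and the complete cancellation described in the caption can be checked numerically. The following is a minimal sketch using NumPy (the sample rate, test frequency, and helper name are our choices, not from the text): it sums two identical 440 Hz waves once in phase and once 180 degrees out of phase and reports the change in peak amplitude.

```python
import numpy as np

fs = 48000                      # sample rate (Hz)
t = np.arange(fs) / fs          # one second of time
f = 440                         # test frequency (Hz)

a = np.sin(2 * np.pi * f * t)                 # reference wave
b_in = np.sin(2 * np.pi * f * t)              # identical wave, in phase
b_out = np.sin(2 * np.pi * f * t + np.pi)     # same wave shifted 180 degrees

def peak_db_change(summed, reference):
    """Change in peak amplitude of the summed wave, in dB re the reference."""
    return 20 * np.log10(np.max(np.abs(summed)) / np.max(np.abs(reference)))

print(peak_db_change(a + b_in, a))   # ~ +6.0 dB: the amplitude doubles
print(np.max(np.abs(a + b_out)))     # ~ 0.0: complete cancellation
```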

It is more likely that sound waves will begin their excursions at different times. If the waves are partially out of phase, there will be constructive interference, increasing amplitude, where compression and rarefaction occur at the same time, and destructive interference, decreasing amplitude, where compression and rarefaction occur at different times (see Figure 1-6).

Figure 1-6. Waves partially out of phase (a) increase amplitude at some points and (b) decrease it at others.

The ability to understand and perceive phase is of considerable importance in, among other things, microphone and loudspeaker placement, mixing, and spatial imaging. If not handled properly, phasing problems can seriously mar sound quality. Phase can also be used as a production tool to create different sonic effects.

Sound Envelope

Another factor that influences the timbre of a sound is its shape, or envelope, which refers to changes in loudness over time. A sound envelope has four stages: attack, initial decay, sustain, and release (ADSR). Attack is how a sound starts after a sound source has been set into vibration. Initial decay is the point at which the attack begins to lose amplitude. Sustain is the period during which the sound’s relative dynamics are maintained after its initial decay. Release refers to the time and the manner in which a sound diminishes to inaudibility (see Figure 1-7).

Figure 1-7. Sound envelope.
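For readers who work with synthesis or audio code, the four ADSR stages map directly onto a piecewise amplitude curve. The following is a toy sketch, not a model of any particular instrument; the stage durations and sustain level are arbitrary illustrative values.

```python
import numpy as np

def adsr_envelope(attack, decay, sustain_level, sustain_time, release, fs=48000):
    """Build a simple ADSR amplitude envelope (all times in seconds)."""
    a = np.linspace(0.0, 1.0, int(attack * fs))             # attack: rise to peak
    d = np.linspace(1.0, sustain_level, int(decay * fs))    # initial decay
    s = np.full(int(sustain_time * fs), sustain_level)      # sustain
    r = np.linspace(sustain_level, 0.0, int(release * fs))  # release to silence
    return np.concatenate([a, d, s, r])

# Shape a 440 Hz tone with a plucked-string-like envelope: fast attack,
# quick initial decay, short sustain, longer release.
env = adsr_envelope(attack=0.01, decay=0.05, sustain_level=0.4,
                    sustain_time=0.2, release=0.5)
t = np.arange(len(env)) / 48000
tone = np.sin(2 * np.pi * 440 * t) * env
```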

Two notes with the same frequency and loudness can produce different sounds with different envelopes. A bowed violin string, for example, has a more dynamic sound overall than a plucked violin string. If you take a piano recording and edit out the attacks of the notes, the piano will start to sound like an organ. Do the same with a French horn, and it sounds similar to a saxophone. Edit out the attacks of a trumpet, and it creates an oboe-like sound. The relative differences in frequency spectra and sound envelopes are shown for a piano, violin, flute, and white-noise sample in Figure 3-4. In the case of the piano and violin, the fundamental frequency is the same: middle C (261.63 Hz). The flute is played an octave above middle C at C5, or 523.25 Hz. By contrast, noise is unpitched, with no fundamental frequency; it comprises all frequencies of the spectrum at the same amplitude.

Direct, Early, and Reverberant Sound

When a sound is emitted in a room, its acoustic “life cycle” can be divided into three phases: direct sound, early reflections, and reverberant sound (see Figure 1-8).

Figure 1-8. Anatomy of reverberation in an enclosed space. At time zero (T0) the direct sound is heard. Between T0 and T1 is the initial time delay gap—the time between the arrival of the direct sound and the first reflection. At T2 and T3, more early reflections of the direct sound arrive as they reflect from nearby surfaces. These early reflections are sensed rather than distinctly heard. At T4 repetitions of the direct sound spread through the room, reflecting from several surfaces and arriving at the listener so close together that their repetitions are indistinguishable.

Direct sound reaches the listener first, before it interacts with any other surface. Depending on the distance from the sound source to the listener, the time, T0, is 20-200 milliseconds (ms). Direct waves provide information about a sound’s origin, size, and tonal quality.

The same sound reaching the listener a short time later, after it reflects from various surfaces, is indirect sound. Indirect sound is divided into early reflections, also known as early sound, and reverberant sound (see Figure 1-9). Early reflections reaching the ear within 30 ms of the direct sound are heard as part of the direct sound. Reverberant sound, or reverberation (reverb, for short), is the result of the early reflections becoming smaller and smaller and the time between them decreasing until they combine, making the reflections indistinguishable; they arrive outside of the ear’s integration time.

From Electronic Musician magazine, March 2002, p. 100. © 2002 Primedia Musician Magazine and Media, Inc. All rights reserved. Reprinted with permission of Prism Business Media, Inc. Copyright © 2005. All rights reserved.

Figure 1-9. Acoustic behavior of sound in an enclosed room. The direct-sound field is all the sound reaching the listener (or the microphone) directly from the sound source without having been reflected off of any of the room’s surfaces. The early and later reflections of the indirect sound are all the sound reaching the listener (or the microphone) after being reflected off of one or more of the room’s surfaces.

Early sound adds loudness and fullness to the initial sound and helps create our subjective impression of a room’s size. Reverb creates acoustical spaciousness and fills out the loudness and the body of a sound. It contains much of a sound’s total energy. Also, depending on the reverberation time, or decay time—the time it takes a sound to decrease 60 dB-SPL (sound-pressure level) after its steady-state sound level has stopped—reverb provides information about the absorption and the reflectivity of a room’s surfaces as well as about a listener’s distance from the sound source. The longer it takes a sound to decay, the larger and more hard-surfaced the room is perceived to be and the farther the listener is, or seems to be, from the sound source (see “Sound-Pressure Level” in Chapter 2).
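Because reverberation time is defined as the time needed for a 60 dB-SPL drop, a steady decay rate translates directly into a decay time. The snippet below is a simple linear-decay illustration, not a measurement procedure; the example decay rates are hypothetical.

```python
def reverb_time_seconds(decay_rate_db_per_s):
    """Reverberation (decay) time: how long a steady decay at the given
    rate takes to fall 60 dB-SPL after the source stops."""
    return 60.0 / decay_rate_db_per_s

print(reverb_time_seconds(30.0))   # 2.0 s: a fairly live, reverberant room
print(reverb_time_seconds(120.0))  # 0.5 s: a much drier room
```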

Reverberation and Echo

Reverberation and echo are often used synonymously—but incorrectly so. Reverberation is densely spaced reflections created by random, multiple, blended repetitions of a sound. The time between reflections is imperceptible. If a sound is delayed by 35 ms or more, the listener perceives echo—a distinct repeat of the direct sound.
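Using the 35 ms threshold together with the 1,130-feet-per-second velocity given earlier, a rough sketch can estimate how much longer a reflected path must be than the direct path before a reflection is heard as a distinct echo. The helper name and example distances below are ours, for illustration only.

```python
SPEED_OF_SOUND_FPS = 1130.0   # ft/s at sea level, 70 degrees F
ECHO_THRESHOLD_S = 0.035      # delays of about 35 ms or more are heard as echo

def is_echo(extra_path_feet):
    """True if a reflection whose path is longer than the direct path by
    extra_path_feet arrives late enough to be heard as a distinct echo."""
    delay = extra_path_feet / SPEED_OF_SOUND_FPS
    return delay >= ECHO_THRESHOLD_S

print(ECHO_THRESHOLD_S * SPEED_OF_SOUND_FPS)  # ~39.6 ft of extra path
print(is_echo(20))    # False: blends with the direct sound
print(is_echo(60))    # True: heard as a distinct repeat
```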

In large rooms, discrete echoes are sometimes perceived. In small rooms, these repetitions, called flutter echoes, are short and come in rapid succession. They usually result from reflections between two highly reflective, parallel surfaces. Because echoes usually inhibit sonic clarity, studios and concert halls are designed to eliminate them.

Main Points

  • A sound wave is a vibrational disturbance that involves mechanical motion of molecules transmitting energy from one place to another.

  • A sound wave is caused when an object vibrates and sets into motion the molecules nearest to it; the initial motion starts a chain reaction. This chain reaction creates pressure waves through the air, which are perceived as sound when they reach the ear and the brain.

  • The pressure wave compresses molecules as it moves outward, increasing pressure, and pulls the molecules farther apart as it moves inward, creating a rarefaction by decreasing pressure.

  • The components that make up a sound wave are frequency, amplitude, velocity, wavelength, and phase.

  • The number of times a sound wave vibrates per second determines its frequency. Humans can hear frequencies between roughly 20 and 20,000 hertz (Hz)—a range of 10 octaves.

  • The size of a sound wave determines its amplitude, or loudness. Loudness is measured in decibels.

  • The decibel (dB) is a dimensionless unit used to compare the ratio of two quantities, usually in relation to acoustic energy, such as sound-pressure level (SPL).

  • Velocity, the speed of a sound wave, is 1,130 feet per second at sea level at 70°F. Sound increases or decreases in velocity by 1.1 feet per second for each 1°F change.

  • Each frequency has a wavelength, determined by the distance a sound wave travels to complete one cycle of compression and rarefaction. The length of one cycle is equal to the velocity of sound divided by the frequency of sound. The lower a sound’s frequency, the longer its wavelength; the higher a sound’s frequency, the shorter its wavelength.

  • Acoustical phase refers to the time relationship between two or more sound waves at a given point in their cycles. If two waves begin their excursions at the same time, their degree intervals will coincide and the waves will be in phase, reinforcing each other and increasing amplitude. If two waves begin their excursions at different times, their degree intervals will not coincide and the waves will be out of phase, weakening each other and decreasing amplitude.

  • A sound’s envelope refers to its changes in loudness over time. It has four stages: attack, initial decay, sustain, and release (ADSR).

  • The acoustic “life cycle” of a sound can be divided into three phases: direct sound, early reflections, and reverberant sound.

  • Direct sound reaches the listener first before it interacts with any other surface. The same sound reaching the listener a short time later, after it reflects from various surfaces, is indirect sound. Indirect sound is divided into early reflections, also known as early sound, and reverberant sound.

  • Reverberation is densely spaced reflections created by random, multiple, blended repetitions of a sound. The time between reflections is imperceptible. If a sound is delayed by 35 ms or more, the listener perceives echo, a distinct repeat of the direct sound.

  • Reverberation time, or decay time, is the time it takes a sound to decrease 60 dB-SPL after its steady-state sound level has stopped.

  • In large rooms, discrete echoes are sometimes perceived. In small rooms, these repetitions—called flutter echoes—are short and come in rapid succession.



[1] Polarity is sometimes used synonymously with phase. It is not the same. Polarity refers to values of a signal voltage and is discussed in Chapter 7.
