CHAPTER   10

Video Recording
and Editing

Video Recording Technologies

Unless you work for a broadcast or cable television station or network that is engaged in the live transmission of programs, the program material that you produce will most likely be recorded. Recording allows us to save programs so that they can be distributed at a later time. Recording also allows us to acquire program material that will be edited into its final form in post-production.

With the advent of digital technologies, methods of video recording are changing. For most of the past fifty years, video recording has been dominated by the process of magnetic recording onto videotape. Today, that process is being augmented, and ultimately supplanted, by video recording to optical discs, magnetic computer hard drives, and memory cards.

Videotape Recording

In 1956 a small company called Ampex, located in Redwood City, California, introduced the first practical videotape recorder to provide an all-electronic storage and production medium for video programs. All videotape recording systems are based on a set of similar principles. The video and audio signals pass through one or more recording heads which spin against a moving videotape. A recording head is a small electromagnet. Videotape is recording tape that is coated with magnetic material. When the heads pass over the tape they encode the signal into the layer of magnetic particles on the tape. In playback, the process is reversed: The playback heads trace over the tape and pick up the signal that is embedded there. (See Figure 10.1.)

All modern VCRs use the helical scan method of recording the video signal. This name is used because the videotape is wrapped around the head drum inside the machine in the form of a helix. Sometimes this type of recording is called slant-track recording, because this describes the angle of the video information on the tape.


FIGURE 10.1
Videotape Recording

Videotape Formats

Since the introduction of videotape recording, approximately fifty different videotape formats have been introduced with varying degrees of success in the marketplace. You are probably familiar with the terms VHS and DV. These are different tape formats. A videotape’s format is described by a number of different factors:

1. Width of the tape. Over the years tape width has been reduced from 2 inches to much smaller sizes (giving rise to the term small format videotapes). Today’s tape formats are ¼ inch (6.35 mm), 8 mm (Hi-8), or ½ inch wide. (See Figure 10.2.)

2. Configuration of the tape. Is it open reel or cassette? All modern videotape recorders use video cassettes.

3. Tape speed. How fast does the tape run? In the DV tape formats, MiniDV tapes run at a slower speed than DVCAM tapes, which run at a slower speed than DVCPRO tapes. Higher tape speed generally translates into higher recording quality.

4. How the color information is processed. Is it composite, S-Video, or component? These terms will be described in more detail later in the chapter.

5. How the signal is recorded. Is it analog or digital?

6. How the various tracks of information are arranged on the tape. Four different sets of signals are typically recorded onto videotape: the video signal, the audio signal, the control track, and SMPTE time code and/or data tracks.


FIGURE 10.2
6.35-mm, 8-mm, and ½-inch Videocassettes

Location of Signals on Videotape

Two of the most widely used videotape recording formats are VHS and DV. VHS (the letters stand for video home system) is a ½-inch analog, consumer-quality tape format. DV (the letters stand for digital video) is a ¼-inch (6.35 mm) digital videotape format that is used in consumer and professional applications. There are three general variants of the format: DV (or Mini DV), DVCAM (manufactured by Sony), and DVCPRO (manufactured by Panasonic). To illustrate some of the major similarities and differences between various tape formats, we will briefly discuss the track layout for these two tape formats.

VHS TRACK LAYOUT As you can see in Figure 10.3, the largest area of the tape contains the video information. Each track of video information is laid down at an angle on the tape. The top edge of the tape contains two analog audio tracks, and the bottom edge of the tape contains the control track, a series of electronic pulses that are used during tape playback to align the video heads with the tracks of recorded information on the tape. Many older VCRs contained a tracking control, a small knob that manually adjusted the relationship of the heads to the video tracks until the best picture quality was achieved.

The relationship of the control track pulses to the video tracks is very precise. Note that there is one control track pulse for each track of video information on the tape, and as we discussed in Chapter 4, each video track contains one field of information, and two tracks of information constitute one video frame. (See Figure 10.3.)

SMPTE time code is a digital signal that encodes each frame of information with a unique time reference in hours:minutes:seconds:frames. Thus a SMPTE time code number of 00:10:12:22 indicates that the frame is 0 hours, 10 minutes, 12 seconds, and 22 frames into the tape. On VHS format tapes, time code can be recorded in two places: linear time code, or LTC (pronounced “litsee”), can be recorded in one of the two audio tracks, and vertical interval time code, or VITC (pronounced “vitsee”), can be recorded along with the video information. Time code is discussed in greater detail later in the chapter.
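
To make the time reference concrete, the short sketch below (a minimal illustration in Python; the function name and the fixed thirty-frames-per-second assumption are ours, not part of the SMPTE standard) converts a non-drop-frame time code value into an absolute frame count.

    # Minimal sketch: convert a non-drop-frame SMPTE-style time code string
    # into an absolute frame count, assuming thirty frames per second.
    def timecode_to_frames(timecode, fps=30):
        hours, minutes, seconds, frames = (int(part) for part in timecode.split(":"))
        return ((hours * 60 + minutes) * 60 + seconds) * fps + frames

    # The example from the text: 0 hours, 10 minutes, 12 seconds, 22 frames.
    print(timecode_to_frames("00:10:12:22"))   # 18382 frames into the tape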


FIGURE 10.3
VHS Track Pattern

DV/DVCAM/DVCPRO TRACK LAYOUT As is the case for all videotape formats, in the DV/DVCAM/DVCPRO formats, the video information occupies most of the central area of the tape. However, because the digital video signal is more complex than the signal in analog recording, each frame of video information is broken down into ten sections of approximately fifty-two lines each, which are then recorded into ten separate video tracks on the tape. (See Figure 10.4.) Digital audio can be recorded as two high-quality tracks or four lesser-quality tracks. There is a dedicated data track for insert and track information (ITI) that contains additional information about the video tracks (e.g., whether it is DV or DVCPRO) as well as the pilot tones that control DV and DVCAM playback. DVCPRO tapes contain an additional track for control information.


FIGURE 10.4
DV/DVCAM/ DVCPRO Track Pattern

Composite and Component Recording

The color video signal can be processed and recorded onto videotape in a number of different ways.

COMPOSITE RECORDING Analog recording systems such as VHS record a composite, NTSC-encoded video signal in which the luminance (Y) and chrominance (C) are mixed together. This generally produces a low-quality image when the signal is played back. The output on VCRs and DVD players labeled “Video Out” is a composite video output.

Y/C SIGNAL PROCESSING (S-VIDEO) Videotape systems such as Hi8 and S-VHS (Super VHS) record a composite video signal but process the luminance (Y) and chrominance (C) signals separately within the VCR to achieve better color purity and image detail than is possible in conventional composite recording systems. This is referred to as Y/C signal processing. When the video signal is fed out of the VCR from the S-Video output and is displayed on a video monitor equipped with an S-Video input, luminance and chrominance travel as two separate signals. This produces a better-quality image than a conventional composite recording.

COMPONENT RECORDING The highest-quality color recordings are made in systems that record the signal through the color difference process. This is often simply referred to as component recording.

As in the S-Video process, the luminance signal (Y) is processed and recorded separately from the color components. However, in component recording, the process is carried further, resulting in a recording composed of the luminance signal (Y) and two color difference signals: red minus luminance (R – Y) and blue minus luminance (B – Y). Because the composition of the luminance channel is based on a fixed formula (Y = 30% red, 59% green, and 11% blue), there is no need to create the green minus luminance (G – Y) component because it can be reconstructed from the other three signals.
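
The arithmetic is easy to see in a short sketch. The following Python fragment (an illustration only; the function names are ours) builds Y, R – Y, and B – Y from red, green, and blue values using the luminance formula above, and then shows that green can be recovered without ever recording a G – Y signal.

    # Minimal sketch of the color difference process: encode RGB into
    # Y, R-Y, and B-Y, then reconstruct green from the recorded signals.
    def encode_component(r, g, b):
        y = 0.30 * r + 0.59 * g + 0.11 * b      # luminance (Y)
        return y, r - y, b - y                   # Y, R-Y, B-Y

    def decode_component(y, r_minus_y, b_minus_y):
        r = r_minus_y + y
        b = b_minus_y + y
        g = (y - 0.30 * r - 0.11 * b) / 0.59     # green is reconstructed, not recorded
        return r, g, b

    # Round trip: green comes back from Y, R-Y, and B-Y alone.
    print(decode_component(*encode_component(1.0, 0.5, 0.2)))   # approximately (1.0, 0.5, 0.2)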

The R – Y and B – Y designations are used to describe the color components in analog recording systems such as the Betacam SP tape format. In digital systems (e.g., DV, DVCAM, DVCPRO) the color components are Cr and Cb. In both systems the luminance signal is Y.

Component VCRs and DVD players produce pictures with superior picture quality. The images are sharper and the colors are purer than those obtained in composite or Y/C recordings. (See CP-6.)

Analog Recording

Like audio signals, the video signal can be analog or digital. An analog signal is one in which the recorded signal varies continuously in relation to the sound or light that produced it. The electrical video signal that is produced through the analog process varies in direct proportion to the light entering the camera lens, and the video waveform that is produced is continuous.

THE VIDEO WAVEFORM AND WAVEFORM MONITOR Figure 10.5 illustrates one scanning line from an analog video waveform. As we discussed earlier, the color video signal is composed of two principal components: luminance (or brightness) and chrominance (or color). The video waveform monitor is used to check the luminance levels of the picture. The peak of the waveform represents the brightest part of the scene. This is called peak white. Peak white should not exceed 100% (reference white) on the waveform.


FIGURE 10.5   Analog Video Waveform

If the picture is too bright and the waveform exceeds peak white, the level can be adjusted by closing the camera iris slightly to reduce the amount of light hitting the image sensor or by turning down the gain, a control on the camera control unit that is used to amplify the signal.

Conversely, if the picture is too dark, it can be corrected either by opening up the lens iris slightly or by increasing the gain. Although increasing the gain will make the picture brighter, it can also have some negative effects. When you amplify an electronic signal, you also increase the noise inherent in the system. In video this visible picture noise, in which the picture takes on a grainy quality, is called snow.

Just as there is a maximum point for white, there is a minimum point for black. The black level, or pedestal, is always set at 7.5% on the waveform monitor. As you can see, the area of the waveform between 7.5% and 100% represents the range of brightness values of the picture.
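
The same limits can be expressed as a simple check. The sketch below is purely illustrative (the sample values and function are ours, and a real waveform monitor works on the analog signal itself, not on a list of numbers); it flags luminance readings that fall outside the 7.5 percent to 100 percent range described above.

    # Minimal sketch: flag luminance readings outside the legal range.
    def check_levels(samples, pedestal=7.5, peak_white=100.0):
        too_bright = [s for s in samples if s > peak_white]
        too_dark = [s for s in samples if s < pedestal]
        if too_bright:
            print(len(too_bright), "reading(s) exceed peak white: close the iris or reduce gain")
        if too_dark:
            print(len(too_dark), "reading(s) fall below the pedestal: open the iris or add gain")

    check_levels([5.0, 42.0, 88.5, 103.2])   # one reading too dark, one too bright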

Note also that the waveform contains the sync pulses that control the scanning of each line of video information as well as a color burst signal that controls synchronization of the red, blue, and green color components to ensure that the reproduced colors are accurate.

Although the waveform monitor is used to monitor brightness and the overall level of the signal, it does not tell much about the color signal, other than whether color is present (visible color burst) or absent (no color burst signal). The vectorscope is a monitoring device that shows which colors are present in the signal and how much of each one is present. (See Figure 10.6.)

THE VECTORSCOPE The vectorscope is used when color adjustments are made to cameras and VCRs. A standard pattern of color bars is used as a reference to make sure that all components of the color system are functioning properly. In multicamera shooting situations the vectorscope is used to help match the color quality of the cameras when they are white-balanced so that the color values they reproduce are the same for each camera. Figure 10.6 illustrates a typical color bar display and shows how the color bar display appears on a waveform monitor and a vectorscope. Also shown is a diagram of a vectorscope illustrating where the primary and complementary colors of the color bar are displayed. (See also CP-4 and CP-5.)


FIGURE 10.6   Color Bars, Waveform, and Vectorscope Displays

Digital Recording

In the digital recording process, the incoming analog signal (audio or video) is converted into a digital signal that is composed of a series of on/off pulses, or bits. There are three steps to the process: sampling, quantization, and compression.

SAMPLING Sampling rates for video vary greatly, depending, for example, on whether the signal is standard-definition television (SDTV, 13.5 MHz) or high-definition television (HDTV, approximately 74 MHz). That is equivalent to 13.5 million samples per second for SDTV and roughly 74 million samples per second for HDTV.

In addition to the overall sampling rate, the color sampling rate describes how often the luminance and chrominance elements of the analog video signal are sampled for conversion into packets of digital information. Three sampling rates are used in digital video: 4:2:2, 4:1:1, and 4:2:0. It is somewhat beyond the scope of this text to explain the technical differences between these different sampling schemes. Suffice it to say that 4:2:2 sampling provides the highest-quality signals and is therefore the standard for broadcast and HDTV; 4:1:1 sampling is used in the DV, DVCAM and DVCPRO formats.
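
One way to see what the sampling notation implies is to count chroma samples. The sketch below is an informal illustration (the per-block counts follow the usual interpretation of the notation); it compares the number of samples needed for a block of eight luminance samples under each scheme.

    # Minimal sketch: chroma samples (Cb, Cr) per 4 x 2 block of luminance samples.
    schemes = {
        "4:4:4": (8, 8),   # full chroma, shown only for comparison
        "4:2:2": (4, 4),   # half horizontal chroma resolution (broadcast, HDTV)
        "4:1:1": (2, 2),   # quarter horizontal chroma resolution (DV, DVCAM, DVCPRO)
        "4:2:0": (2, 2),   # half horizontal and half vertical chroma resolution
    }

    for name, (cb, cr) in schemes.items():
        total = 8 + cb + cr                  # eight luminance samples per block
        print(name, total, "samples per block,", round(100 * total / 24), "% of full 4:4:4 data")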

QUANTIZATION As the signal is sampled, it is converted into packets of digital information (quantization). However, the amount of data this process produces is immense. To make it easier to store and transmit the information, compression is used.

COMPRESSION As we discussed in Chapter 5, compression schemes may be lossy or lossless. Lossy compression reduces file size by permanently discarding data judged to be redundant or perceptually insignificant. Lossless compression shrinks the file without removing any data, so the original can be reconstructed exactly.

The amount of compression is typically expressed as a ratio. Higher numbers indicate greater compression, and lower numbers indicate less compression. For example, the 5:1 compression ratio that is used in the DV formats is higher than the approximately 3:1 compression ratio that is used in Digital-S. Signals that are recorded with less compression generally produce better pictures than those that are recorded with more compression.
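
A rough calculation shows how the ratio translates into a recorded data rate. The figures in this sketch are approximations assumed for illustration (720 × 480 active pixels, 8-bit 4:1:1 sampling, which averages 12 bits per pixel, and 29.97 frames per second); formats that use 4:2:2 sampling start from a higher uncompressed rate.

    # Minimal sketch: from an uncompressed rate and a compression ratio
    # to the approximate data rate that actually reaches the tape.
    uncompressed_bps = 720 * 480 * 12 * 29.97    # roughly 124 million bits per second

    compression_ratio = 5.0                      # the 5:1 ratio used by the DV formats
    recorded_mbps = uncompressed_bps / compression_ratio / 1_000_000
    print("About", round(recorded_mbps), "megabits per second on tape")   # about 25 Mbps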

TYPES OF COMPRESSION Several different compression methods are widely used in video systems:

JPEG (Joint Photographic Experts Group) compression was originally designed for use in digital still photography and has been adapted for use with video (motion JPEG).

MPEG-1 (Motion Picture Experts Group) compression is designed for use in CD-ROMs.

MPEG-2 is the standard for broadcast television and DVDs.

MPEG-4 is used in a range of applications including digital television and streaming media on the Internet.

MP3 (MPEG-1 Audio Layer 3) is widely used to compress audio files. Despite the popular association with the name MPEG-3, there is no separate MPEG-3 standard.

DIGITAL ADVANTAGES Although the digital signal is significantly more complex than the analog signal, it has several distinct advantages over analog recording. Digital signals can be recorded, replayed, and copied with greater accuracy and less noise than analog signals. This makes it possible to make clean copies across generations of tape. A copy that is made from a copy of the original should be indistinguishable from the original if all copies are made digitally. A third-generation copy of an analog tape, by contrast, will show noticeable degradation of picture quality. Digital signals are also preferable to analog ones because they can be processed by a computer and manipulated to create stunning visual effects.

Digital Videotape Formats

All of the new videotape formats introduced since 1993 record the video signal digitally. A variety of digital tape formats are marketed to professional and consumer users.

DIGITAL VIDEOCASSETTE (DV) DV is one of the newest and smallest digital tape formats. DV systems record up to four digital audio tracks and a digital component video signal on a small 6.35-mm (¼-inch) cassette. DV tapes come in three different sizes: the small MiniDV cassette and the medium and large cassettes that are designed for use in professional DV, DVCAM, and DVCPRO machines. (See Figure 10.7.) A medium-size cassette adapter is used to play back MiniDV cassettes in full-sized DVCPRO machines. (See Figure 10.8.)

Image quality and sound quality rival those of the professional analog Betacam SP format, which was the broadcast standard before the current generation of digital tape formats were introduced. There are five variants of the DV format:

1. DV (Digital Video) was originally targeted at the consumer market but found ready acceptance in the professional market because of its high quality and low cost. The first generation of DV equipment exclusively used a very small cassette (MiniDV), as does most DV equipment sold today. However, several manufacturers now offer a professional DV camcorder that can record to either a MiniDV cassette or a standard DV cassette.

2. DVCAM is manufactured by Sony and is targeted at industrial and professional users.

3. DVCPRO (D7) was developed by Panasonic and BTS/Philips and is targeted at the professional and broadcast market. It has found wide use in ENG applications.

4. DVCPRO 50 is a modified version of DVCPRO that features 4:2:2 sampling instead of the 4:1:1 rate that is characteristic of the other DV formats. DVCPRO 50 records twice as much data as does conventional DVCPRO, making it suitable for high-quality standard-definition digital recording.

5. DVCPRO-HD is another modified version of DVCPRO designed by Panasonic for both 1080i (interlaced scanning) and 720p (progressive scanning) high-definition as well as standard-definition digital recording. DVCPRO-HD records at twice the data rate of DVCPRO 50 (100 megabits per second versus 50 megabits per second), with a compression ratio of 6.7:1 versus 3.3:1 for DVCPRO 50 and 5:1 for DVCPRO.


FIGURE 10.7   Small (MiniDV), Medium, and Large DV Cassettes


FIGURE 10.8   MiniDV Cassette Adapter

HDV JVC, Sony, Canon, and other manufacturers have all been actively involved in developing relatively inexpensive (under $10,000) camcorders that record a high-definition signal onto MiniDV cassettes. The HDV format has a resolution of 1,280 pixels by 720 lines in the 16:9 screen format. MPEG-2 compression is used to compress the data for recording. The HDV format was developed in 2003, but it was not until 2005 that JVC and Sony introduced 3-chip camcorders that took advantage of it. (JVC had previously introduced a single-chip camcorder in 2003.) Sony’s Z1U HDV camcorder uses the 1080i (interlaced) 30 fps standard, while JVC’s GY-HD100U uses the 720p (progressive) standard, running at either 24 or 30 fps, non-interlaced. (See Figure 10.9.)


FIGURE 10.9   JVC HDV Camcorder

Panasonic has eschewed the use of the MiniDV format for high-definition recording, choosing instead to record the high-definition signal either on DVCPRO tape (DVCPRO-HD) or on its P2 memory cards. (These are discussed later in this chapter.)

DIGITAL 8 Introduced by Sony in 1999, Digital 8 records a digital component signal that is the same as DV except that it is recorded onto 8-mm tape instead of 6.35-mm tape. Analog Hi8 tapes can be played back in Digital 8 camcorders and VCRs. This format is targeted principally at the consumer market.

DIGITAL-S (D-9) Developed by JVC for the professional production market, Digital-S records a component digital video signal and four audio channels onto ½-inch metal particle tape. Some VCRs contain a preread feature that can play back and simultaneously record new information, making it possible to create multiple layers of images for special effects and transitions without the use of an editing system or video switcher.

DIGITAL BETACAM AND BETACAM SX Developed by Sony, Digital Betacam records an extremely high-quality component digital signal with four digital audio tracks onto ½-inch tape. Betacam SX is a lower cost digital component ½-inch format that is targeted at ENG users.

D-3, D-5, AND HD-D5 D-3 (composite) and D-5 (component) digital VCRs were developed by Panasonic. Like most of the other formats, they use a ½-inch cassette tape. HD-D5, a high-definition version of the D-5 format, is one of the principal formats used for recording HDTV in the 1080i or 720p variants.

D-VHS JVC’s D-VHS (digital VHS) is designed for off-air recording of digital television broadcasts. It uses ½-inch tape, and analog VHS and S-VHS can be played back in the D-VHS machine. The quality is slightly better than what can be achieved with a DVD.

Analog Videotape Formats

Analog videotape formats are fast disappearing from the marketplace. However, a few formats are still in use.

½-INCH VHS AND S-VHS VHS (video home system) emerged as the dominant format for home video recording in the early 1980s and, until the introduction of DVDs, also served as the dominant medium for the rental movie market. VHS is strictly a consumer format and is generally not suitable for professional production applications.

S-VHS (Super-VHS) was introduced by Victor Company of Japan (JVC) in 1987 and marketed to industrial and professional users as a low-cost alternative to Sony’s professional Betacam format. S-VHS systems produced improved resolution and color quality in relation to VHS systems, but they are vastly inferior to DV.

Hi8 Hi8 format equipment was introduced by Sony in 1989 to replace conventional 8-mm video equipment and found a niche in the consumer and prosumer markets because of its low cost, extreme portability, and high-quality first-generation recordings. Despite its past popularity, Hi8 is being displaced from the marketplace by the Digital 8 and DV digital tape formats.

BETACAM SP Betacam SP, developed by Sony, is a professional, broadcast-quality ½-inch analog component recording format. Betacam SP was the de facto standard for field recording in ENG and EFP in professional broadcast operations in the United States until the late 1990s, when it was supplanted by a variety of digital videotape formats. Sony stopped manufacturing Betacam SP equipment in November 2001, although many Betacam camcorders and VCRs remain in use today.

VCR Operational Controls

TAPE TRANSPORT CONTROLS The tape transport controls regulate the movement of the videotape within the VCR. (See Figure 10.10.) The following controls are found on most VCRs and portable camcorders:

Eject. This button is used to eject the tape from the machine.

Play. This is used to play back a tape at normal speed.

Stop. To stop the movement of the tape in any mode, press STOP.

Fast Forward. Pushing this button causes the tape to move forward at a fast rate of speed. Typically the tape disengages from the head drum, and no picture is visible. Use the fast forward button when you want to find a spot on the tape that is a considerable distance from where the tape is now positioned.

Rewind. This functions the same as fast forward, except that it moves the tape in the reverse direction.

Pause. When the tape is in the play mode, pressing the pause button stops the movement of the tape and displays a still frame video image. Do not leave the tape in the pause mode for more than a few minutes as this may damage the tape.

Forward Search and Reverse Search. These controls allow you to move the image forward or backward at high speed while the VCR is in the play mode. Since the tape remains in contact with the heads, the picture is visible as it moves at high speed. Don’t overuse this feature. If more than a few minutes of forward or reverse movement are necessary, use the fast-forward or rewind button.

Record. The record button is used to record new audio and video onto videotape. Some machines have one-touch recording, which puts the VCR into the record mode simply by pushing the record button. Other machines require you to push and hold the record button and then simultaneously press the play button to engage the recording function.

Audio Dub. This is used to record new audio information onto a tape without erasing the video information already recorded onto it.


FIGURE 10.10   Panasonic AJD250 DVCPRO VCR

METERS AND TAPE COUNTERS

Meters Audio levels are monitored with the use of the volume unit meter. On full-size VCRs these are typically found on the front panel of the machine; on camcorders they may be located on the side of the unit or be visible in the viewfinder display. Camcorder viewfinder displays also typically include a battery meter, which displays the amount of power remaining in the camcorder’s battery.

Tape Counters Three selections are commonly found for the tape counter: CTL, TC, or UB.

CTL stands for control track. Consumer quality equipment, not equipped with a SMPTE time code generator, uses the control track pulses on the tape to display the tape position in hours, minutes, seconds, and frames. Remember, however, that for the display to be accurate, the counter must be set to zero at the beginning of the tape. Since control track pulses do not contain data, anytime the Reset button is pushed the counter will reset to zero. Many professional VCRs contain a CTL counter as well as TC and UB counters.

TC stands for SMPTE time code. Professional VCRs and camcorders use SMPTE time code counters. These counters display the SMPTE time code value for the current position of the tape.

UB stands for user bits. This is a feature on some SMPTE time code enabled systems that allows the user to enter additional information about the shot or scene that is being recorded.

OTHER CONTROLS

Remote/Local Selector. Many VCRs can be operated not only from the tape transport controls on the front of the machine, but from a remote control device as well. This switch selects how the machine will be controlled.

Video Input Selector. The video input selector switch tells the VCR where the input video signal is coming from. Typical choices may include: line (composite video), Y/C (S-Video), Y, Pb, Pr (component video), DV, or IEEE 1394 (Firewire).

Audio Record Level. Audio potentiometers (pots) on the front of the VCR can be used to set the record level for incoming audio signals.

Record Inhibit. This switch is used to disable the VCR’s record circuitry so that you don’t accidentally record over the material already recorded onto the tape that is playing back in the VCR.

Audio Monitor Selector. This switch is used to select which audio channel(s) are routed to the audio monitor or headphones. Audio channels may be selected independently, or in groups (e.g., Channel 1 only, Channel 1 and 2, and so on).

Menu. Modern VCRs give the VCR operator control over many machine functions through the Menu control. One very common menu control is used to display the time code numbers in a window over the background video. By using the menu control, this feature can be turned on or off.

Tracking Control. On older analog VCRs, the tracking control is used in the playback mode to maximize the quality of the playback image by compensating for slight differences in the way different VCRs of the same format record and play back a signal. Modern, digital VCRs automatically provide for accurate tracking and there is no manual control.

Other Video Recording Devices

Although videotape has been the principal medium for recording the electronic video signal for almost fifty years, new technologies are evolving that will ultimately replace videotape as the recording medium of choice. A number of these new technologies are in use now.

COMPUTER HARD DISKS AND VIDEO SERVERS A hard disk is a magnetic storage system that is widely used in computers. Information is stored on a series of round magnetic platters within the computer. Video servers are computers that are equipped with hard disk storage systems that can record and play back video. In editing applications servers are often used to store video footage in a central location, which can then be accessed from individual editing workstations. In broadcast and cable stations video servers are increasingly used in place of VCRs to store commercials and programs. When the server is connected to the central computer that contains the day’s program log, commercials and programs can be replayed automatically.

Over the years several manufacturers have developed hard disk–based camcorder recording systems. Currently Ikegami markets a one-piece camcorder with a hard disk drive, and Ikegami and JVC both market dockable disk recorders that function with various professional cameras (see Figure 10.11). Several other manufacturers make portable hard disk systems that can be connected to a digital camcorder via its Firewire port.

When field shooting is complete, the disk drive is removed from the camcorder and connected directly to a compatible nonlinear editing system via its Firewire connection. This eliminates the need to capture (transfer the digital files) from a videotape. Video clips can be moved directly into the nonlinear editing system’s timeline, and editing can begin immediately.


Removable 20 Gigabyte Hard Disk Drive


Camcorder with Dockable Hard Disk Recorder


Video Can Be Recorded to DV Tape or the Hard Disk

FIGURE 10.11   JVC Hard Disk Drive Camcorder

Optical Discs CDs and DVDs are two examples of relatively inexpensive optical disc playback and (in some cases) recording systems. A laser within these systems is used to write (record) and read (play back) program information. Because no recording heads touch the discs, they are much more reliable than tape-based systems. They also have the advantage, similar to that of the hard disk systems described above, of allowing random access to information anywhere on the disc.

DVD Because of their limited storage capacity, CDs are not widely used to store video program information, but DVDs are. DVDs (digital versatile discs) have the capacity to record up to 17 gigabytes (GB) of data, though most DVD formats in the consumer market today hold 4.7 GB. This is equivalent to approximately two hours of high-quality video, making DVDs a good choice for the distribution of video program material and feature-length movies.
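
The "two hours" figure follows from simple arithmetic. The sketch below uses rough numbers only (real discs also carry audio, subtitles, and navigation overhead); it works out the average video data rate a single-layer 4.7-GB disc can sustain over a two-hour program.

    # Minimal sketch: the average data rate implied by 4.7 GB over two hours.
    capacity_bits = 4.7e9 * 8            # 4.7 GB expressed in bits
    running_time = 2 * 60 * 60           # two hours in seconds

    average_mbps = capacity_bits / running_time / 1e6
    print("Average data rate: about", round(average_mbps, 1), "megabits per second")   # about 5.2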

DVDCAM The first manufacturer to capitalize on DVD technology for use in a camcorder was Hitachi, which in 2001 introduced the DVDCAM, a digital camcorder that recorded full-motion MPEG-2 video onto a DVD-RAM disc. The system is capable of recording thirty minutes of high-quality (704 × 480) video on each side of the disc. Hitachi has been joined in the marketplace by Sony and Panasonic, which also target their mini-DVD camcorders at the consumer market. None of these camcorders is designed to meet professional production specifications.

Sony XDCAM Sony introduced its professional XDCAM in 2003 at the National Association of Broadcasters (NAB) annual convention in Las Vegas. XDCAM is a professional optical disc camcorder that is capable of recording in either the DVCAM format or MPEG-2. Each single-sided disc has a storage capacity of 23.3 GB, records eighty-five minutes of DVCAM video or forty-five minutes of MPEG-2, and costs about $30. (See Figure 10.12.)

MEMORY CARDS Memory cards use silicon chips instead of magnetic media to store data. Sometimes also called flash memory, they have the added advantage of retaining the data even when the power is turned off. Memory cards have become quite popular as the storage device of choice in digital still cameras: CompactFlash, SmartMedia, and Memory Stick are some examples.

Panasonic has adapted memory card technology for use in its P2 professional camcorders, which were introduced at the 2004 NAB convention. (See Figure 10.13.) Panasonic touts this system as “having no moving parts,” unlike hard disk and optical disc systems, in which motors rotate the disks at extremely high speeds.


XDCam Camcorder


Optical Disc

FIGURE 10.12   Sony XDCam Camcorder and Optical Recording Disc

The initial model P2 camcorder holds up to five memory cards with a capacity of 4 GB each; each card costs about $2,000 and stores up to eighteen minutes of DVCPRO or nine minutes of DVCPRO 50. Panasonic projects the development of cards storing up to 128 GB, which would hold 144 minutes of DVCPRO HD, 285 minutes of DVCPRO 50, or 576 minutes of DV/DVCPRO program material. But the success of this system will depend heavily on reducing the cost of the memory media.
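
Scaling the figures quoted above gives a feel for how card capacity maps to recording time. The sketch below simply extends the "4 GB holds about eighteen minutes of DVCPRO" figure proportionally; real capacity also depends on audio tracks and file overhead, and DVCPRO 50 halves the running time because it records at twice the data rate.

    # Minimal sketch: proportional scaling of the capacity figures in the text.
    minutes_per_gb_dvcpro = 18 / 4       # about 4.5 minutes of DVCPRO per gigabyte

    for card_gb in (4, 8, 128):
        print(card_gb, "GB card: about", round(card_gb * minutes_per_gb_dvcpro), "minutes of DVCPRO")
    # The projected 128 GB card works out to about 576 minutes, matching the figure above.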

With the exception of the videotape that is used in camcorders for field acquisition of program material, many production facilities today are tapeless, relying on disk-based digital nonlinear editors for video editing and video servers for storage. The development of professional hard disk, optical disc, and flash memory cards as camcorder storage media marks a significant moment in the transition to an entirely tapeless production environment.


P-2 Recorder


AG-HVX200 DVCPROHD P2 Camcorder


8 Gigabyte P2 Card


AJ-SPX800 DVCPRO50/DVCPRO/DV Switchable P2 Camcorder

FIGURE 10.13   Panasonic P-2 Recording Systems

Video Editing Technologies

Video editing occurs during the post-production phase. From an aesthetic standpoint editing is the process of arranging individual shots or sequences into an order that is appropriate for the program being produced. Editing is used to control the aural and visual elements of the program and to focus them so that they have an effective impact on the viewer. From a technical standpoint editing is often done to fix problems or to make a program longer or shorter to fit the time requirements of the show.

Two principal editing technologies are in use today: linear videotape editing and digital nonlinear editing.

Linear Videotape Editing

Linear videotape editing is a process of rerecording information into a desired sequence. Two videotape machines are used: the playback or source VCR, which contains the original videotape, and the record or editing VCR, which contains a blank videotape. Edits are made in a sequential fashion, starting at the beginning of the program and working to the end. Although linear videotape editing originated in the analog production era, many digital videotape formats can be edited in a linear fashion as well.

EDITING SYSTEM COMPONENTS A complete linear videotape editing system generally includes the following components:

1. Playback (source) VCR to play back the original, unedited videotape.

2. Record (editing) VCR capable of performing clean electronic edits on the video signal.

3. Monitors for the source and editing VCRs, to allow the editor to see and hear the audio and video from each source.

4. An edit control unit for controlling both VCRs so that edit points can be found and edits can be precisely made by incorporating an adequate preroll time that allows the machines to reach their edit points at the same time. This component is necessary because in videotape-based editing systems, an edit can be made only when the source and edit tapes are rolling at full playback and record speed.

5. Auxiliary audio components, which may include an audio mixer and CD or audiotape recorders to input additional audio sources such as sound effects and music.

6. A graphics generator to produce titles, credits, and other program graphics.


FIGURE 10.14
Panasonic Portable DVC Laptop Linear Editing System

A linear video editing system may be assembled out of free-standing individual components (VCRs, monitors, edit controller, etc.) or, as in the case of Panasonic’s professional DVC editing system, it may be manufactured as one integrated piece of equipment. (See Figure 10.14.)

EDIT CONTROL UNIT STANDARD CONTROLS Edit control units contain a set of standard controls that facilitate the editing process. These include the following:

VCR remote controls for standard VCR operating modes: play, stop, rewind, fast-forward, and pause.

Jog and shuttle control, which is a rotary knob that allows you to shuttle at various speeds forward or backward through the tape. The jog function allows the editor to advance or rewind the tape one frame at a time.

Edit mode controls, which select the two principal editing modes: assemble edit or insert edit. We will discuss this in greater detail later in this section.

Edit point controls, which allow the editor to set in and out points for the edits. Most edit control units use the three-point method of editing: an in point and an out point are set on the shot in the source VCR, and an in point is set on the editing VCR designating the destination of the material from the source VCR. There is no need to set an out point on the editing VCR because the edit ends when the source tape reaches its out point (a brief sketch of this calculation appears after this list).

Trim controls are used to add or subtract frames from an edit point.

The preview control allows the editor to see what the edit will look like without actually performing the edit.

The auto-edit control automatically prerolls both VCRs the designated amount of time (typically three, five, or seven seconds) and then performs the edit.

Other controls allow the editor to review the edit or go to the edit in and out points.
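
As promised above, here is a minimal sketch of the arithmetic behind the three-point method. The helper functions and the fixed thirty-frames-per-second assumption are ours; the point is simply that the fourth value, the out point on the editing VCR, is implied by the length of the source material.

    # Minimal sketch: the out point on the editing (record) VCR is implied
    # by the source in and out points plus the record in point.
    FPS = 30

    def to_frames(tc):
        h, m, s, f = (int(x) for x in tc.split(":"))
        return ((h * 60 + m) * 60 + s) * FPS + f

    def to_timecode(total):
        h, rem = divmod(total, 3600 * FPS)
        m, rem = divmod(rem, 60 * FPS)
        s, f = divmod(rem, FPS)
        return "%02d:%02d:%02d:%02d" % (h, m, s, f)

    source_in, source_out, record_in = "00:05:10:00", "00:05:14:15", "00:00:30:00"
    edit_length = to_frames(source_out) - to_frames(source_in)
    print(to_timecode(to_frames(record_in) + edit_length))   # 00:00:34:15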

CONTROL TRACK EDITING Edit control units may use the tape’s control track as a reference for the edit cues, or they may use SMPTE time code. The problem with control track editing is that all control track pulses look the same, and the edit control unit controls the tape by counting the pulses before the edit point to cue the tape and by counting the control track pulses between the edit points to perform the edit. For example, if the edit control unit is set to a five-second preroll, when the command to perform the edit is given, the edit controller prerolls (rewinds) the tape five seconds by counting 150 control track pulses. If there are any missing pulses, the edit point will be inaccurate. Similarly, if the machine does not stop and start precisely, that is, if there is a mechanical error in the rewind and stop functions, the edit point will also be inaccurate.

EDITING WITH SMPTE TIME CODE SMPTE time code systems allow for more precise editing because the edit reference point is keyed to a particular time code number. Because of its accuracy, SMPTE time code is widely used in videotape recording and editing. Entry and exit points for each shot can be logged in terms of their time code numbers, and a complete list of all of the shots that will compose the program can be compiled. (See Figure 10.15.)

SMPTE time code is recorded onto the source tapes as footage is acquired in the field. All professional camcorders have built-in SMPTE time code generators. Typically, the time code generator is set to the record-run mode, which produces time code only when the VCR is recording. This produces continuously ascending time code; the time code for each subsequent shot begins with the next frame number after the end of the previous shot. For example, if shot 1 ends at 00:00:22:12, shot 2 will begin at 00:00:22:13.

Time code numbers are generated in two different ways: drop-frame and non-drop-frame. In the non-drop-frame time code mode the numbers are generated at the rate of thirty frames per second. However, the video frame rate is actually 29.97 frames per second. To correct for this discrepancy, which is equivalent to 3.6 seconds or 108 frames per hour, drop-frame time code generators drop (do not generate) two frame numbers each minute of the hour, with the exception of the tenth, twentieth, thirtieth, fortieth, fiftieth, and sixtieth minutes of each hour. To look at this another way, two frame numbers are dropped during fifty-four of the sixty minutes in each hour (2 × 54 = 108).


Shot in and out points are noted in hours:minutes:seconds:frames.

FIGURE 10.15   SMPTE Shot Log.

It is important to note that drop-frame time code generators do not drop video frames out of the picture; they drop time code numbers. The picture is continuous, but there will be some missing time code numbers at each minute mark on the tape. If you look at the time code, you will see that it jumps—for example, from 00:00:59:29 to 00:01:00:02.
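
The renumbering is easy to verify with a short sketch. The function below is an illustration we have written, not a standard implementation; it converts an absolute frame count into the drop-frame label it would receive, using the rule described above of skipping frame numbers 00 and 01 at the start of every minute except each tenth minute.

    # Minimal sketch of drop-frame numbering: absolute frame count -> label.
    def drop_frame_label(frame_count):
        frames_per_10_min = 17982            # 10 min x 60 s x 30 fr, minus 9 x 2 dropped
        frames_per_min = 1798                # 60 s x 30 fr, minus 2 dropped
        tens, rem = divmod(frame_count, frames_per_10_min)
        skipped = 18 * tens                  # nine dropped minutes per ten-minute block
        if rem >= 2:
            skipped += 2 * ((rem - 2) // frames_per_min)
        n = frame_count + skipped            # renumber as if no labels had been dropped
        return "%02d:%02d:%02d:%02d" % (n // 108000, n // 1800 % 60, n // 30 % 60, n % 30)

    # The frames themselves are continuous, but the labels jump at the minute mark:
    print(drop_frame_label(1799))   # 00:00:59:29
    print(drop_frame_label(1800))   # 00:01:00:02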

DV camcorders with built-in time code generators typically generate time code only in the drop-frame mode.


FIGURE 10.16   Video Frame with Time Code Window

The time code information that is recorded onto the tape is digital data that is not visible in the picture. The time code numbers can be seen in the VCR’s counter, and on some VCRs the monitor output (as opposed to the video output) will generate a window with the time code numbers superimposed over the background video. (See Figure 10.16.)

A common step in the process of editing with time code involves making a window dub. A window dub is a copy of the original unedited videotape with the time code inserted over the picture. Window dubs are often made on VHS tapes, because VHS VCRs are cheap and readily accessible.

Once a window dub has been made, the tape can easily be logged by playing the tape back into a video monitor. Use the pause/still mode of the VCR to freeze the frame at the beginning and end of each shot, and write down the corresponding time code numbers and some descriptive material about the shot.

Of course, you could also log the tape from the original, using the VCR counter to display the time code numbers. But this method assumes that you have easy access to an appropriate playback VCR. If you shot your footage on DV, this may not be a problem. But what if you shot on digital Betacam? Digital Betacam VCRs are extremely expensive and may not be available for the amount of time it takes to log a tape.

Logging from a window dub rather than from your original field footage has another significant advantage: It eliminates the possibility of damaging the field tapes while you are logging them. DV format tapes in particular are very fragile, and logging involves a lot of pausing and tape shuttling—precisely the actions that are likely to damage a tape. It is much better practice to keep your field tapes safe and log from a window dub than to run the risk of damaging your originals during the logging process.

LINEAR VIDEOTAPE EDITING MODES There are two principal editing modes: assemble editing and insert editing.

ASSEMBLE MODE Assemble editing replaces all the signals on the edit master tape with new ones (audio, video, and control track) whenever an edit is made. In this mode it is impossible to edit the signals individually. For example, in the assemble mode you could not make a video-only edit or make an audio-only edit to add music. Whatever is on the source tape will be transferred to the edit master tape.

To finish a program that has been edited in the assemble mode, all edits must be made in sequential order, starting at the beginning of the program with the first edit, and working toward the end. (See Figure 10.17.) You cannot go back into the program and make a new edit in the assemble mode. If you try to do this, you will create a gap in the tape at the end of the edit, and when you play back your program, the picture will become unstable, and there will be a momentary loss of audio and video.


FIGURE 10.17
Assemble Editing

INSERT MODE In insert editing, audio and video tracks can be edited selectively. They can be edited independently or in combination. For example, you can edit video only; audio 1 only; audio 2 only; video and audio 1; video and audio 2; video, audio 1, and audio 2; or audio 1 and audio 2.

In addition, an insert edit does not erase or record control track on the edit master tape. Because control track is necessary for tapes to play back, insert editing requires that a tape with control track be prepared to edit onto. The typical way to approach this is to take a blank tape and record video black onto it. Video black is a black video signal. In studio production facilities video black can be sent out of the video switcher to a record VCR, or a black video generator can be connected to a VCR. If neither option is available, put a blank tape into your camcorder and close the iris and/or cap the lens. The camcorder will record a black picture.

Unlike assemble editing, insert editing does allow you to go back into an edited sequence and insert new video over an existing shot or to replace the audio tracks with new audio. The edit will be stable at the entry and exit points.

However, despite being called an insert edit, this edit mode does not allow you to insert a shot between two other shots. That is, you cannot pull a sequence apart, insert a new shot, and finish with a sequence that is longer than the one you started with. Perhaps it would be more useful to think of an insert edit as overlaying or overwriting new video (or audio) on top of existing audio or video material. (See Figure 10.18.) The old material is erased, and the new material is inserted into its place on the videotape.


FIGURE 10.18
Insert Editing

THE LINEAR VIDEOTAPE EDITING PROCESS The process of editing a program with a linear videotape editing system generally follows these steps:

1. Shoot footage in the field or in the studio. Record SMPTE time code as the footage is being shot.

2. Make a window dub of your source tapes.

3. Log the tapes.

4. Develop a preliminary edit decision list (EDL)—a list of all the shots you plan to use in your production with in and out points and a brief description of each shot.

5. Make a rough cut of the program. This is a preliminary version of the program.

6. Review the rough cut, and adjust your editing plan as necessary. If your production is being produced for a client, you will want the client to review the rough cut to give you feedback before you produce the final version of the program.

7. Complete the final edit, adding graphics and music to produce a compelling audiovisual program.

Digital Nonlinear Editing

Although videotape remains the recording medium of choice for field acquisition, the use of linear videotape editing systems has largely been supplanted by a new generation of digital nonlinear editing (NLE) systems. Nonlinear editing systems provide several advantages over linear tape-based systems. In particular, they allow for random access to footage, and they allow shots to be inserted in front of or in between others in the timeline.

RANDOM ACCESS Videotape editing systems are linear by nature: Shots on the source tapes are recorded in a linear sequence, and they can be accessed only by fast-forwarding or rewinding the source VCR. Edits are made on the editing VCR in sequential, linear fashion.

Nonlinear editing systems provide random access to source footage. Source footage is imported into the computer and stored on the computer’s hard disk drives. Shots can be accessed immediately in any order. In addition to providing random access to the video and audio information once it has been imported into the computer, nonlinear systems allow the user to assemble and change shot sequences easily through a cut-and-paste process similar to the way text is edited in a word-processing program.

INSERT AND OVERLAY EDITS In nonlinear editing systems even after a sequence of shots has been placed in the time line, a new shot can be inserted between them. (See Figure 10.19.) This is called an insert edit.

The finished shot sequence is equal to the total length of all of the clips. In addition to inserting shots between two other shots, nonlinear systems make it possible to insert a shot in front of another shot or shot sequence. The new shot becomes first in the sequence, and the other shots slide down in the time line behind it. Neither of these techniques is possible in linear videotape editing systems.

Nonlinear editing systems also allow the editor to overlay or overwrite new video or audio over existing shots or audio tracks. An overlay edit does not increase the length of a shot sequence and is similar to what has previously been defined as an insert edit performed on a linear tape-based editing system.

COMPONENTS OF A NONLINEAR VIDEO EDITING SYSTEM The principal components of a nonlinear video system (see Figure 10.20) include the following:

1. A computer equipped with a central processing unit that is fast enough to process video information.

2. Nonlinear editing software to perform video editing. Dozens of nonlinear editing software programs are available on the market, ranging from simple programs aimed at home users to sophisticated programs targeted to the professional production market.

3. A sufficient amount of storage capacity for the computer, usually in the form of hard disk arrays. Not only does editing require a lot of storage capacity, but also the system must be able to move the data around at a fast rate.

4. A VCR to send footage into the computer.

5. VCR/computer interfaces to allow the computer to control the playback and recording functions of the source VCR.

6. A video input/output board to move the video signal into and out of the computer, unless the computer has direct audio and video inputs or a FireWire connector for digital video.

7. High-quality audio and video monitors so that image and sound quality can be monitored during the editing process. It is always recommended to have a video monitor in addition to the computer monitor because image and color quality vary between the two.


FIGURE 10.19   Nonlinear Insert and Overlay Edits


FIGURE 10.20   Components of a Nonlinear Editing System

USER INTERFACE AND EDITING MODEL Each nonlinear editing system is built around its own proprietary user interface. The user interface describes the visual elements of the editing program that appear on the computer screen. These typically include the time line, bins for audio and video clips, and a preview and editing window, which displays the clips and program as they are edited. (See Figure 10.21.)


FIGURE 10.21   NLE User Interface

Time Line The time line is a graphical linear representation of the program. The various program tracks (video, audio, graphics) appear in a vertical stack on the screen. (See Figure 10.22.)

Clips and Bins Clips are digital files of the source material that is to be edited. Video footage, audio sources, and graphics are all stored as clips. Clips can usually be displayed as a list or by displaying the first frame of the clip as an icon in the bin window. (See Figure 10.23.)

Editing Window Clips are trimmed in an editing window. Entry and exit points can be selected by eye or ear, by time code numbers, or by selecting either an in or an out point and then specifying the length of the clip. Clips can also be easily trimmed or extended by clicking the computer’s mouse on the beginning or end of the clip in the time line and dragging the clip to extend or trim it.


FIGURE 10.22   NLE Time Line


FIGURE 10.23   NLE Clip Bin

Transitions Once individual clips have been arranged in the proper sequence in the time line, transitions such as dissolves, wipes, and digital video effects can be added between the clips. (See Figure 10.24.)

NONLINEAR EDITING PROCESS Once you have acquired your video footage in the field or studio, the process of nonlinear video editing involves the following four main steps.

Step 1: Log the Tapes Make a window dub of your source tapes and log them using the SMPTE time code numbers on the tape.


FIGURE 10.24   Avid Nonlinear Editing System Interface in Effects Mode

Step 2: Capture/Digitize Your Best Takes First, select the footage to be captured. Today, most videotape formats record a digital signal. The process of transferring the footage from tape into the computer is called capturing. In nonlinear editing systems the amount of storage is limited by the capacity of the hard drive system. Therefore you must make some preliminary decisions about which shots you plan to use in your final project before you begin to capture your material. If your source footage was recorded on an analog tape format, then as the footage is captured, it will be converted into digital files by the nonlinear editing system.

Next, select a video compression/quality rate. When video is converted from analog to digital form, it is compressed. Compression allows the computer to reduce the amount of information that is stored in digital form. Material that is stored at a high rate of compression (lower image quality) will take up less space on the hard drive than will material that has been digitized with a lower compression rate (higher image quality). Managing the available hard drive space by working with different compression rates is an important part of the editing process. (See Figure 10.25.)

For example, at the beginning of a project you might want to capture footage at a high-compression/low-quality rate to have access to a greater amount of the original footage during the initial stages of editing. After the editing decisions have been finalized, the unused clips can be deleted from the computer, freeing up hard drive space, and the clips that remain can be redigitized or recaptured at a higher quality setting so that the final version of the program will have the best quality the nonlinear system is capable of producing.
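
The storage trade-off is easy to quantify. The sketch below uses the per-frame figures quoted in Figure 10.25 (10 and 120 kb per frame, read here as kilobytes); the exact numbers vary from system to system, so treat this as an estimate of scale rather than a specification.

    # Minimal sketch: approximate storage needed for one hour of captured footage
    # at the draft and on-line quality settings shown in Figure 10.25.
    fps = 30
    minutes_of_footage = 60

    for label, kb_per_frame in (("draft (rough cut)", 10), ("on-line (final cut)", 120)):
        megabytes = minutes_of_footage * 60 * fps * kb_per_frame / 1024
        print(label + ":", round(megabytes), "MB for", minutes_of_footage, "minutes of footage")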


In this media settings window, note that the rough cut (draft) mode uses a low-quality setting of only 10 kb per frame, whereas the final cut (on-line) mode is set to a higher data rate of 120 kb per frame. The spheres identify the hard drive on which the video (V), audio (A), special effects (FX), and graphics (G) files will be saved.

FIGURE 10.25
Quality Rate

Next, digitize and capture shot by shot. Mark an in point and an out point for each shot using SMPTE time code numbers, or mark the video clips on the fly. Leave some extra footage, called handles, at the beginning and end of each shot. Many systems will do this automatically if you activate this feature in the appropriate menu. Handles are useful if you plan to add transition effects such as dissolves to your edit points. Because a dissolve actually ends after the out point of the first shot in the sequence and begins before the in point of the second shot, extra footage is needed to cover the effect; that is what handles provide.

Most nonlinear editing systems include a batch capture feature. Enter the in and out points for a number of shots along with descriptive material for those shots, and after all the data has been entered into the editing program, the system will automatically find and capture each of the clips. Import additional elements, such as music, sound effects, and digital picture files.

Remember that in most systems digitizing and capturing take place in real time. This can be a time-consuming part of the editing process, particularly when a large amount of footage is involved.

Step 3: Edit Your Captured Material Trim each clip to its actual in and out points, and use the time line to arrange shots in the proper sequence. With the computer’s mouse you can drag and drop shots into the time line, and you can easily move them elsewhere if you decide to change the sequence.

Add music and sound effects. Every program contains some mix of voice, natural sound, music, and/or sound effects. These elements can be edited and placed with the same ease as video elements.

Add graphics. Program titles, keyed titles, and other graphics can be created electronically within the editing program and incorporated into the project.

Add transitions between shots if they are not cuts. Most editing software programs allow you to choose from an extensive array of special effect transitions.

Render transitions, special effects, and graphics. Rendering is a process through which the computer changes transitions, special effects, and graphics from computer effects to video effects. This means that the effects need to be translated into video fields and frames. Depending on the complexity of the effects and transitions that are involved and the speed of your computer, this process can take some time to complete. Also, remember that in most systems if you move a rendered transition or effect, it may become unrendered, forcing you to go through the rendering process again. For this reason, it is best to leave effects and transitions unrendered until you are confident that the video and audio elements of your program are in their final form.

View the edited project in real time with high-quality audio and video monitoring to judge its effectiveness and acceptability. Don’t just view the program on the computer monitor. If you do not have an external video monitor as part of your editing system, output the project to videotape and watch it on a video monitor.

Make changes as necessary. Review the program and adjust your editing plan as necessary. If your production is being produced for a client, you will want the client to review the first cut to give you feedback before you produce the final version of the program. The real power of nonlinear editing is the ease with which changes can be made to an edited program. Take advantage of it!

Step 4: Print to Tape or DVD Output the final project from the computer to the VCR to make a videotape master, or if your system is equipped with a DVD burner, you may wish to save it on a DVD instead of tape.

SOME OTHER THINGS TO THINK ABOUT When nonlinear video editing was developed, many people saw it as the answer to many of the problems inherent in videotape editing. Indeed, random access to clips and the ability to easily change shots in the time line greatly improved the editing process. However, nonlinear editing is not without its own set of problems.

Computer crashes are a common problem. To minimize damage, save your work often, and make a backup copy at the end of the day.

Although the cost of hard drive storage has come down dramatically in recent years, storage may still be an issue, particularly in systems that are shared by multiple users. Maximize the storage capacity of your system to enable you to capture and store more source material. Also, remember not to fill hard drives to more than 75–80% of their maximum capacity. If the drives are too full, the system may not be able to move the files around as needed, and the system may crash.

File management is a big issue in nonlinear editing. Be sure to label everything clearly. If you have two hundred clips, all named “untitled,” you are going to waste a lot of time looking for the clips you need. Set up clearly labeled folders for your work, and do not mix material from one project with material from another.

Finally, budget your editing time. Ultimately, nonlinear editing may be better than linear editing because of its drag-and-drop editing and the ease with which edits can be made and/or changed. However, nonlinear editing is not necessarily faster than linear editing. Digitizing and capturing each clip take time that linear editing does not require. Also, because it is possible to try a variety of different editing decisions and transitions before committing to one, it is not unusual for projects that are edited on nonlinear systems to take more time to complete than projects that are edited on linear systems.
