1

A brief history

A definitive moment in the history of the moving image was the discovery, in 1824, by the Englishman Peter Roget that, if a series of drawings of an object were made, with each one showing the object in a slightly different position, and the drawings were then viewed in rapid succession, the illusion of a moving object was created. Roget had discovered what we now call ‘the persistence of vision’, the property whereby the brain retains the image of an object for a finite time even after the object has been removed from the field of vision. Experimentation showed that, for the average person, this time is about a quarter of a second; therefore, if the eye is presented with more than four images per second, the successive objects begin to merge into one another. As the rate increases, the perception of motion becomes progressively smoother until, at about 24 drawings per second, the illusion is complete.

It is intriguing to consider that, had the chance evolution of the human eye not included this property, then today we would be living in a world without cinemas, TV sets or computer screens and the evolution of our society would have taken an entirely different course!

Early Cinematography

With the discovery of the photographic process in the 1840s, experimentation with a series of still photographs, in place of Roget’s drawings, soon demonstrated the feasibility, in principle, of simulating real-life motion on film. During the next fifty years, various inventors experimented with a variety of ingenious contraptions as they strove to find a way of converting feasibility into practice. Among these, for example, was the technique of placing a series of posed photographs on a turning paddle wheel which was viewed through an eyehole - a technique which found a use in many ‘What the butler saw’ machines in amusement arcades.

Thomas Edison (Figure 1.1) is generally credited with inventing the machine on which the movie camera was later based. Edison’s invention - the Kinetoscope, patented in 1891 - used the sprocket system which was to become the standard means of transporting film through the movie camera’s gate.

Figure 1.1 Thomas Edison

After several years of development, the movie camera (Figure 1.2), and the projector which was based on the same sprocket and gate mechanism, were ready for production of the first commercial silent films in the United States in 1908 (Figure 1.3). The earliest film-making activities were based on the east coast, but by 1915 Hollywood on the west coast was already emerging as the film capital of the new industry and, in 1927, the Warner Brothers Hollywood studio released the first major talking film - The Jazz Singer, starring Al Jolson - to an enthusiastic public.

Figure 1.2 (a) Exploded view shows the basic mechanism of the movie camera based on Edison’s 1891 invention, (b) A scan of the home movie camera which I bought in Vermont in 1969 and which still uses the same basic mechanism

Figure 1.3 Scene from an early silent movie

The process of recording sound on magnetic tape (Figure 1.4) was as yet unknown and the early sound films used a process known as Vitaphone, in which speech and music were recorded on large discs which were then synchronized with the action on the screen, sadly displacing the many piano players who had made their living by providing an imaginative musical accompaniment to the silent films.

Figure 1.4 (a) An old 78 rpm disc, now consigned to the museum; (b) an early phonograph and (c) a tape deck

As film audiences grew, the movie makers turned their attention to methods of filming and projecting in colour and, by 1933, the Technicolor process had been perfected as a commercially viable three-colour system. Becky Sharp was the first film produced in Technicolor, in 1935, being quickly followed by others, including one of film’s all-time classics, The Wizard of Oz, just as war was breaking out in Europe.

While others developed the techniques which used real-life actors to portray the characters in a movie, Walt Disney pioneered the animated feature-length film, using a technique which derived directly from Roget’s work nearly a hundred years earlier. Using an early form of storyboarding and more than 400,000 hand-drawn images he produced Snow White and the Seven Dwarfs (Figure 1.5), based on Grimm’s fairy tale, in 1937. As we shall see later, computer animation owes much to the techniques developed by the Disney Studios over the last sixty years.

Figure 1.5 Snow White - a Disney classic

Television

The first experimental television took place in England in 1927, but it was not until the late 1940s that television began to challenge the popularity of the cinema. Unlike the movie camera, the TV camera builds up an image of a scene by sweeping a beam of electrons across the screen of the camera tube (Figure 1.6). The scanning of a single image is called a field, and the horizontal and vertical scan rates are normally synchronous with the local power-line frequency. Because the world is generally divided into 50 Hz and 60 Hz electrical power frequencies, television systems use either 50 field or 60 field image-scanning rates. The television broadcasting standards now in use include 525 lines, 60 fields in North America, Japan, and other US-oriented countries, and 625 lines, 50 fields in Europe, Australia, most of Asia, Africa, and a few South American countries.

Figure 1.6 (a) A TV camera, (b) The Sony camcorder – employing a CCD detection system – which I use for recording the usual family events

The advantage of scanning with an electron beam (Figure 1.7) is that the beam can be moved with great speed and can scan an entire picture in a small fraction of a second. The European PAL (Phase Alternating Line) system uses the 625 horizontal line standard, i.e. to build up a complete ‘frame’ the beam scans the image 625 times as it moves from the top to the bottom, and continues to do this at a frequency of 25 frames per second. The number of picture elements in each line is determined by the frequency channels allocated to television (about 330 elements per line). The result is an image that consists of about 206,000 individual elements for the entire frame.
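The arithmetic behind these figures is easily checked; a short Python sketch, using the approximate 330 elements per line quoted above:

```python
# Approximate picture-element count for one PAL frame,
# using the figures quoted in the text.
LINES_PER_FRAME = 625     # horizontal scan lines per PAL frame
ELEMENTS_PER_LINE = 330   # picture elements per line (set by channel bandwidth)

elements_per_frame = LINES_PER_FRAME * ELEMENTS_PER_LINE
print(elements_per_frame)   # 206250 -- about 206,000 elements per frame
```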

Figure 1.7 TV tube construction

The television signal is a complex electromagnetic wave of voltage or current variation composed of the following parts: (1) a series of fluctuations corresponding to the fluctuations in light intensity of the picture elements being scanned; (2) a series of synchronizing pulses that lock the receiver to the same scanning rate as the transmitter; (3) an additional series of so-called blanking pulses; and (4) a frequency-modulated (FM) signal carrying the sound that accompanies the image. The first three of these elements make up the video signal. Colour television is made possible by transmitting, in addition to the brightness, or luminance, signal required for reproducing the picture in black and white, a signal called the chrominance signal, which carries the colour information.

Whereas the luminance signal indicates the brightness of successive picture elements, the chrominance signal specifies the hue and saturation of the same elements. Both signals are obtained from suitable combinations of three video signals, which are delivered by the colour television camera, each corresponding to the intensity variations in the picture viewed separately through appropriate red, green, and blue filters. The combined luminance and chrominance signals are transmitted in the same manner as a simple luminance signal is transmitted by a monochrome television transmitter.

Like the picture projected on to the cinema screen, the picture which appears on the domestic television screen depends upon the persistence of vision. As the three colour video signals are recovered from the luminance and chrominance signals and generate the picture’s red, blue, and green components by exciting the phosphor coatings on the screen’s inner surface, what the eye sees is not three fast moving dots, varying in brightness, but a natural full-frame colour image.

To prevent the flicker which can occur at low refresh rates, ‘interlacing’ is used to send every alternate horizontal scan line 50 times a second. This means that every 1/50th of a second the screen is scanned with 312½ alternate lines (called a field); the intervening 312½ lines are then scanned, completing one frame every 1/25th of a second and reducing flicker without doubling the transmission rate or bandwidth.
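The interleaving of the two fields can be sketched as follows (integer line numbers are used for simplicity; in practice each PAL field contains 312½ lines):

```python
# Illustrative interlaced scan order: one field carries the
# odd-numbered lines, the next carries the even-numbered lines,
# and the two fields together make up one complete frame.
TOTAL_LINES = 625

field_odd = list(range(1, TOTAL_LINES + 1, 2))   # lines 1, 3, 5, ...
field_even = list(range(2, TOTAL_LINES + 1, 2))  # lines 2, 4, 6, ...

print(len(field_odd), len(field_even))  # 313 312 -- together, all 625 lines
# At 50 fields per second this yields 25 complete frames per second.
```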

Although computer monitors operate by the same principles as the television receiver, they are normally non-interlaced as the frame rate can be increased to sixty frames per second or more since increased bandwidth is not a problem in this case.

Emergence of the VCR and the Camcorder

Virtually all early television was live – with original programming being televised at precisely the time that it was produced – not by choice, but because no satisfactory recording method except photographic film was available. It was possible to make a record of a show – a film shot from the screen of a studio monitor, called a kinescope recording – but, because kinescope recordings were expensive and of poor quality, almost from the inception of television broadcasting electronics engineers searched for a way to record television images on magnetic tape.

The techniques which were developed to record images on videotape were similar to those used to record sound. The first magnetic video tape recorders (VTRs) (Figure 1.8) appeared in the early 1950s, with various alternative designs evolving through the 1960s and 1970s for use in the broadcasting industry. Recorders based on the VTR technology, but using cassettes (VCRs), were developed and cost-reduced until, by the early 1980s, the VCR became a spectacularly successful new consumer product.

Figure 1.8 Early top-loading VTR

Following the market success of the VCR, consumer electronics companies raced to miniaturise the VCR so that it could be packaged with a scaled-down version of the television camera opto-electronics to create, by the late 1980s, the camcorder. Intended to be portable, the first designs were barely luggable, with power supplies packaged separately from the camera and weighing even more than it did. Public interest was aroused, however, and fierce competition in this new market soon brought about dramatic reductions in camcorder sizes, with the use of new battery technology to integrate the power supply within the camera body and the adoption of a new 8 mm tape standard (Figure 1.9).

Figure 1.9 Camcorder

The imaging technology in amateur video cameras differs from that of studio cameras. The most recent camcorders use solid-state image sensors – usually an array of charge coupled devices (CCDs), which convert light energy into the electrical pulses that are recorded on videotape. The clarity and resolution of the videotaped pictures depend on the number of picture elements, or pixels, which the CCD array can create. The newest camcorders, made even smaller and lighter by the use of CCDs, can produce clear, detailed pictures, even in extremely dim light.

Since their introduction, both the ½-inch and the 8 mm formats used for VCRs and camcorders have been augmented by improved versions – Super-VHS and Hi8, respectively – which can handle greater bandwidths. The result is better picture definition and detail resolution approaching that of professional video recorders.

Editing

In 1902, using an early camera based on Thomas Edison’s invention, the Frenchman Georges Méliès discovered that, by stopping the camera in midshot and then rearranging objects in the scene before continuing to film, he could make objects ‘disappear’ on film. Also, by stopping and partly rewinding the film between shots, he was able to superimpose scenes or dissolve one scene into another. Méliès had inadvertently stumbled across the basic technique of film editing.

Editing in the Professional Domain

In the years which followed, the editing together of the hundreds of film clips, which comprise the average movie, into a smooth-flowing, rhythmic whole, became a specialized art – the art of the film editor whose job it was to supervise teams of specialists in the cutting and editing of sound tracks and film negatives. The editor’s job included synchronizing the film to the sound track and providing the film director with feedback on each day’s shooting by preparing daily ‘rushes’ for review before the next day’s filming began.

Many movies are now edited on videotape and then transferred back on to film after the editing is completed. After the bulk of the photographic work is finished, the process of post-production involves the assembly of the many edited film clips to create the finished movie.

In some cases the sound editor may re-record dialogue in a studio while viewing the action on a screen (known as automated dialogue replacement), also adding further sound effects to enhance the dramatic impact of selected scenes. In parallel, the editor supervises the inclusion of special visual effects and titles which are to be inserted into the final version. In the final step of the editing process, the separate sound tracks are mixed on to one master magnetic film containing separate dialogue, music, and sound effects tracks in synchronization with the movie film.

Television programme editing borrowed heavily from the experience of film, but even with this advantage, a look back at some early BBC footage soon reveals how far the process has matured since the early broadcasts from Alexandra Palace! In spite of the significant instant playback advantage which videotape enjoyed over film, much time was still wasted in the analogue video environment, simply shuttling video tape back and forth among decks during the editing process.

Editing between decks is a linear process in which edits are created in sequential order. Linear editing is the term used to describe the process of dubbing shots from one or more tape decks to another in the precise order needed for the finished production. Video and audio may be altered after the final production has been laid down, but only if the timings do not change.

As digital technology began to emerge, at first albeit at a premium price, it quickly found its way into the professional domain, where it offered the major advantage of random access to any video frame on a tape. Whereas inserting a new video segment into a tape using traditional editing tools required recomposition of the entire tape, which could take hours, with digital video techniques it took literally seconds.

Called non-linear editing, the digital process involves first entering all audio and video material into a digital storage medium of some sort, e.g. a hard drive or CD ROM. Once all the shots are entered they can be called up and arranged in any order the editor or director desires. Since the shots are not rerecorded but only viewed, changes may be made in any order and at any point in the production until the desired result is achieved. The control and flexibility offered by non-linear editing undoubtedly contributes to the creativity of the editing process. Special effects, for example, are traditionally among the most expensive items to add in the traditional video editing process. Digital video editing not only made adding such effects relatively easy, it also made entirely new kinds of effects possible.

Mirroring the changes which have taken place in the publishing industry, trends toward ‘digitisation’ have recently accelerated in both the film and television industries, with digital technology costs falling steadily, and performance appearing to increase unabated. Although the emphasis initially was on exploiting the cost and speed advantages of digital editing, both industries were keen to explore the new and unique editing possibilities offered by the digital environment.

With hardware supporting the overlay of computer graphics on video, television was soon into visual trickery, using, for example, chroma-keying to create the perfect illusion of a newsreader sitting in front of a video image seemingly projected on a screen, but with much better brightness and clarity than could ever be obtained through previous rear-projection techniques. Extension of this technique has led to the ability to create elaborate, high-tech, virtual backdrops for the staging of a range of TV shows, which would have been prohibitively expensive using traditional means.

Hollywood’s use of digitally animated special effects has been even more spectacular, with box office successes like Toy Story, Alien, The Abyss, Terminator and Jurassic Park.

Editing in the Private Domain

As the public demonstrated their interest in Hollywood’s growing output of movies, by flocking to see them in their millions, a much scaled down and simplified version of the movie camera was being developed which was to offer cinema-goers the chance to try their own hands at the skills of movie-making. I purchased my first and only cine-camera (Figure 1.2b) in Vermont in 1969, and proceeded to capture on film anything which moved – and a lot which didn’t!

The long winter evenings were spent, with the aid of a shiny new Atlas-Warner manually operated home movie editor (Figure 1.10), hand cranking my series of three-minute epics through the film gate, while viewing a dim, grainy image of each frame on the editor’s ground glass screen, before using the built-in splicer to connect film sections together and wind them on to 400 foot reels. Given that such splicing was about as sophisticated as editing got, on the family’s return to England, both camera and editor were consigned to the back of a cupboard, from which they only occasionally emerged to celebrate a new arrival to the family.

Figure 1.10 Super 8 film editor

Although the low cost Super 8 camera did allow many enthusiastic home users to enter the magic world of film making, market penetration was limited because of its relatively high cost per minute of developed film and its playback and other limitations. Editing was largely limited to basic insert editing – in which a sequence was physically cut out of a film and replaced by a different sequence – and assembly editing, which simply involved splicing different lengths of film to create a desired sequence. By comparison, the compact low cost camcorder has dramatically extended the possibilities for many budding video makers.

Overnight, the camcorder effectively rendered the Super 8 cinecamera obsolete. Ninety minutes of continuous recording on a low cost, reusable videotape and the ability to play back the results immediately on any convenient TV monitor made the camcorder vastly more appealing to the consumer.

Using a camcorder and VCR together, basic editing was also possible by copying just selected material in any sequence from the camcorder source tape to a VCR destination tape. For the enthusiastic amateur, simple animation effects could be added, by the crude method of setting up a scene, taking a quick shot, then adjusting the scene before the next shot and so on. Some VCRs also offered an audio dub feature which could be used to add sound retrospectively to recorded video.

As the preferred supplier of high end video editing equipment to the professional community, Sony Corporation have also responded to the interest of the consumer market in video editing, releasing products like the Edit Studio shown in Figure 1.11. With a camcorder, VCR and TV monitor attached to the Edit Studio, the user can edit the contents from various source tapes to a destination tape, as well as dubbing audio and adding simple titles, but the editor does not support more sophisticated special effects.

Figure 1.11 Sony’s Edit Studio

Desktop Video

Analogue Versus Digital Video

An analogue device is one which uses a continuously variable physical phenomenon – such as an expanding or contracting column of mercury – to represent another dynamic phenomenon – a rising or falling temperature. Representation of time by the moving hands of a clock or of the speed of a car by the moving needle on a speedometer are other common examples of analogue devices. A digital device, by contrast, is one which employs a limited number of discrete bits of information to approximate a continuously varying quantity.

Many analogue devices have been replaced by digital devices, because digital instruments can deal better with the problem of unwanted information, or noise. The recording of sound provides a good example; on a gramophone record, any imperfections on the recording surface will be picked up by the stylus and amplified, creating unwanted noise; in the digital equivalent – the compact disc – sounds are translated into binary code, and recorded on the disc as discrete pits. Any noise which does get encoded is easily recognized and eliminated during the retranslation process.

A digital recording can also be copied many times without degradation, while the quality of analogue copies deteriorates rapidly due to the cumulative build-up of noise. Using digital video also makes it easier and far less expensive to replace and update segments of video. Using traditional methods, an entirely new videotape had to be created every time any modification to the content was needed. A digital process does have one limitation, however, in that it cannot reproduce precisely every detail of a continuous phenomenon. An analogue device, although subject to noise propagation, will produce a more complete rendering of a continuous phenomenon. The accuracy with which a digital measurement tracks the property being measured depends on the sampling rate used, i.e. the number of times per second that the changing value of the property is measured; the higher the rate, the more accurately the property is reproduced.
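The dependence of accuracy on sampling rate can be illustrated with a small sketch: a hypothetical 50 Hz sine wave, standing in for a continuous analogue signal, is sampled at a low rate and at a high rate, and the higher rate clearly captures far more of the waveform’s shape:

```python
import math

def sample(freq_hz, rate_hz, duration_s=0.02):
    """Sample a sine wave of the given frequency at the given rate."""
    n = int(rate_hz * duration_s)
    return [math.sin(2 * math.pi * freq_hz * i / rate_hz) for i in range(n)]

coarse = sample(50, 200)   # only 4 samples per 50 Hz cycle -- a crude outline
fine = sample(50, 4000)    # 80 samples per cycle -- a much closer rendering
print(len(coarse), len(fine))  # 4 80
```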

The monitor on the Mac or Windows PC desktop is an analogue device, using continuously varying voltages to control the position and intensity of the electron beams which excite the screen phosphors. By contrast, a digital video signal assigns a number for each deflection or intensity level. The function of the video board, or equivalent circuitry included on the motherboard, found in every Mac or PC, is to convert the computer-created digital video signal to an analogue signal for output to a monitor.

Moving video, like film, presents to the viewer a sequence of individual images, or frames, projected on a screen. Also like film, moving video relies on the phenomenon of persistence of vision, typically projecting frames at a rate of 25 to 30 frames per second, to create the illusion of a moving image. Audio tracks may be synchronized with the video frames to provide a full audiovisual presentation.

The process of importing analogue video from a VCR or a camcorder to the desktop requires that the analogue signals be digitised by processing them through an additional video digitising board, sometimes called a frame grabber or video capture board, which is normally plugged into the computer motherboard. PAL and NTSC video signals are analogue in nature and must be digitised, or sampled, via such a board, before they can be recognised by a Mac or a PC. The process of digitising video is commonly called ‘capturing’. There is a growing number of video digitising boards on the market, differing widely in speed, quality, compression capability and, of course, price.

Digital recording of a video signal is very demanding in terms of disc storage, because the colour and brightness information for each pixel in every image frame must be stored. A full-screen image on a typical 14-inch computer monitor measures 640 pixels by 480 pixels. Thus, each full-screen frame of video contains 307 200 (640 × 480) pixels. Each pixel requires 24 bits, or 3 bytes, of storage (8 bits per RGB component) to describe its unique properties. Therefore, to store the information describing a full-screen, 307 200-pixel image results in a storage requirement of 921 600 bytes for each frame of digitized video. At a frame rate of 25 fps, storing just one second of digitized PAL video therefore requires 23 megabytes of storage space! Before today’s multi-gigabyte disc drives, such use of disc space to store digitized video was not feasible for the average computer user.
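The storage arithmetic above can be verified directly:

```python
# Uncompressed storage requirement for full-screen digitised PAL video.
WIDTH, HEIGHT = 640, 480   # pixels per full-screen frame
BYTES_PER_PIXEL = 3        # 8 bits for each of the R, G and B components
FRAME_RATE = 25            # PAL frames per second

bytes_per_frame = WIDTH * HEIGHT * BYTES_PER_PIXEL
bytes_per_second = bytes_per_frame * FRAME_RATE

print(bytes_per_frame)    # 921600 bytes per frame
print(bytes_per_second)   # 23040000 -- about 23 megabytes per second
```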

Playing back stored information of such a size also requires considerable processing power to provide an acceptable frame rate. Compromises in terms of frame rate, colour depth and image resolution have, until recently, been unavoidable. Fortunately, advances in data compression technology are reducing the need for such compromises.

Image Compression

Image compression reduces the amount of storage space required to store or transmit a digital image. Because the pixel values in an image are often similar to those of adjacent pixels, an image can be compressed by using these similarities, as it takes fewer bits to store the differences between pixels than to store the pixels themselves. Using a ‘lossless’ compression algorithm, which just removes redundant information, the image will reappear in exactly its original form when decompressed.

Lossless compression might be used, for example, to transmit medical X-ray images over a network, in order to reduce transmission time and cost, while ensuring no loss of clinically important detail when the image is decompressed at the receiving end.

When ‘lossy’ compression is used, some information is lost and, on decompression, some detail will be missing from the decompressed image. A lossy compression algorithm might be used to transmit video-conference images, where a high level of compression is needed because of the limited bandwidth of a standard telephone line and the detail in the received image is not critical.
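The idea of exploiting similarity between adjacent pixels can be sketched as a minimal lossless scheme: store the first pixel, then only the (typically small) difference to each neighbour. Real compression algorithms are far more sophisticated, but the round trip below shows why such a scheme loses nothing:

```python
def delta_encode(pixels):
    """Store the first pixel, then the difference to each neighbour."""
    return [pixels[0]] + [b - a for a, b in zip(pixels, pixels[1:])]

def delta_decode(deltas):
    """Reverse the encoding exactly -- no information is lost."""
    pixels = [deltas[0]]
    for d in deltas[1:]:
        pixels.append(pixels[-1] + d)
    return pixels

row = [100, 101, 101, 103, 104, 104]   # similar adjacent pixel values
encoded = delta_encode(row)
print(encoded)                        # [100, 1, 0, 2, 1, 0] -- small differences
assert delta_decode(encoded) == row   # decompression is exact
```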

In the case of digital video, the type and degree of compression selected depends on the hardware capabilities of the system being used and on the quality required for the output image. More information on this subject can be found in Chapter 3.

Technical Data

PAL video storage on a 1 Gb drive versus compression ratio

  Compression ratio    Video stored
  1:1                  00h00m33s
  5:1                  00h02m45s
  10:1                 00h05m30s
  50:1                 00h27m30s
  100:1                00h55m00s
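The playing times in the table scale linearly with the compression ratio; taking the table’s own 1:1 figure of 33 seconds per gigabyte as the baseline, the remaining entries can be reproduced with a few lines of Python:

```python
# Reproduce the storage table: playing time on a 1 Gb drive at a
# given compression ratio, scaled from the 1:1 baseline of 33 seconds.
BASELINE_SECONDS = 33   # uncompressed (1:1) playing time on a 1 Gb drive

for ratio in (1, 5, 10, 50, 100):
    seconds = BASELINE_SECONDS * ratio
    minutes, secs = divmod(seconds, 60)
    hours, minutes = divmod(minutes, 60)
    print(f"{ratio}:1  {hours:02d}h{minutes:02d}m{secs:02d}s")
```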

Video Signal Types

Video signals are normally transmitted in one of three different formats:

RGB

In RGB transmission, the video images are transmitted as three separate components – red, green and blue – on three separate cables. RGB does not specify any particular image size or frame rate; transmissions can range from standard interlaced PAL to anything from 200 × 300 up to 2048 × 2048 pixels, interlaced or non-interlaced, with frame rates from 60 to 120 Hz. RGB just signifies that the image is sent in three parts.

Composite Video

Video can also be sent in composite form which mixes all the colours together along with the sync signal, into one signal which can be carried by a single cable. The colours are encoded so that they can be reconstructed by the display device, but, because of the mixing, composite video gives a noticeably inferior picture to RGB (most consumer VCRs have composite video inputs and outputs).

S-video

A third standard, S-video, has recently emerged and is used in Hi-8 and Super-VHS VCRs and camcorders. It offers a compromise between RGB and composite video, and consists of separate chrominance (colour) and luminance (brightness) signals. Image quality is generally better than composite video, but not as good as RGB.

Image Resolution

The quality of a digital image can be described in terms of its image resolution – the number of picture elements, or pixels, which make up the image. This is demonstrated by comparison of the three printed images shown in Figure 1.12. The first image was scanned at a resolution of 300 dpi – i.e. as the scanner optics traversed the image 300 different points were sampled for each linear inch of travel. For the second and third images, only 150 and 75 points per inch were sampled. Comparison shows that detail is increasingly lost as the image resolution decreases.

Figure 1.12 Effect of pixel density on image quality
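The pixel counts involved at each of these scanning resolutions can be computed for a hypothetical 4 × 6 inch print (the print size is an assumption for illustration; the original’s dimensions are not given):

```python
# Pixels produced when scanning a hypothetical 4 x 6 inch print at
# the three resolutions compared in Figure 1.12. Halving the dpi
# quarters the pixel count, which is why detail falls away so quickly.
WIDTH_IN, HEIGHT_IN = 6, 4   # assumed print dimensions, inches

for dpi in (300, 150, 75):
    pixels = (WIDTH_IN * dpi) * (HEIGHT_IN * dpi)
    print(f"{dpi} dpi -> {pixels:,} pixels")
```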

Image resolution capability is an important parameter in the specification of a video camera’s performance. Like a scanner, a video camera translates the image information which it receives through the camera lens into an array of pixels, each pixel carrying colour and brightness information about a point on the original image. Projection of this pixel information on to a screen creates a simulation of the original image and the picture quality increases with the number of pixels used to recreate the image. A resolution of 640 pixels horizontally by 480 pixels vertically is sufficient to provide acceptable full screen desktop digital video on a typical 14 inch screen. A camera capable of proportionately higher resolution will be required to provide an image of the same quality on larger screens or, alternatively, the video image can be viewed in a 640 by 480 ‘window’ within the larger screen.

Recent Trends

Over the last 10 years, as the installed base of both business and private PCs has grown by leaps and bounds, users have come to relish the freedom afforded by their shiny new possessions. Indeed, in many companies, PC usage has become a war zone as Information Systems managers have fought to retain central control of DP operations, while users have devised ever more ingenious ways of using simple PC applications to do in minutes what used to take the DP room days to deliver!

Private users are also sharing in this new age of empowerment, as they learn the skills of desktop publishing, digital drawing and photo-editing, and accessing the vast sources of information available on the Internet. Many of these users are also owners of camcorders and/or have experienced some of the many ways in which camcorders are now being used in education and in the workplace, where applications like videoconferencing have linked the two technologies together.

Moving images are, of course, not new to the desktop. With the development of Apple’s QuickTime and Microsoft’s AVI (Audio Video Interleave) technologies, it has been possible for some time to view short video clips in a small desktop window. Figure 1.13 shows a clip of a QuickTime MOV file playing in a window, using Apple’s QuickTime Movie Player, while Figure 1.14 shows a clip of an AVI file playing in a window using Microsoft’s simple Media Player application. Movie Player and Media Player can also play clips saved in other formats such as CompCore’s MPG format or Autodesk’s FLI or FLC animation format. Using Microsoft’s Video for Windows and Apple’s QuickTime for Windows system utilities, video clips can be integrated into a range of Windows applications (see Chapter 9).

Figure 1.13 Apple’s QuickTime Movie Player

Figure 1.14 Microsoft’s Media Player

In addition, the younger generation of users in particular has become accustomed to the increasingly realistic-looking graphics of the newest games programs.

So why has the desktop video market not already taken off? The reason is not too hard to find. Since one frame of video generally takes approximately one megabyte (Mb) of storage, and video tape runs at 30 frames per second, one second of uncompressed video takes approximately 30 Mb of storage; the PC therefore has to be capable of writing and reading the video data at a sustained rate of 30 Mb per second. It is this hugely increased data processing demand – compared with the demands of, say, a DTP or graphic design application – which has created a major roadblock, placing interactive digital video out of the reach of most users.

Fortunately, two important technology trends have been converging to help create affordable solutions for digital video editing. First, the plummeting cost of hard disc storage has led to the situation that even an entry-level PC today boasts a hard disk of at least one gigabyte (1000 Mb) capacity, making the storage of large video files feasible. Second, compression technology has developed to the point where video compression hardware and software can compress video at ratios up to 200:1 while maintaining an acceptable level of quality. This means that video can not only take up much less storage space, but can also be recorded to, and played back from, standard computer hard disks, with average read/write rates in the 500 Kb to 1.5 Mb per second range. The combination of better compression technology and inexpensive mass storage devices has paved the way for low cost digital video editing systems which offer more versatility for a few hundred pounds than their traditional counterparts costing tens of thousands of pounds!
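From these figures one can estimate the minimum compression ratio needed before raw video fits within a hard disk's sustained transfer rate (taking the 30 Mb per second uncompressed rate quoted above):

```python
# Minimum compression ratio needed so that compressed video can be
# streamed within a disk's sustained read/write rate.
RAW_RATE_MB = 30.0   # uncompressed video data rate, Mb per second

for disk_rate_mb in (0.5, 1.5):   # the 500 Kb to 1.5 Mb per second range
    ratio = RAW_RATE_MB / disk_rate_mb
    print(f"{disk_rate_mb} Mb/s disk needs at least {ratio:.0f}:1 compression")
```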

Uses

So, if video on the desktop is really becoming technically possible, what can we actually do with it, apart, that is, from just marvelling at the technical wonder of it all?

In fact, potential applications range from simply cleaning up and adding titles and transitions to videocamera footage, to creating special effects for television, film, commercial presentations, teleconferences, web pages, graphic arts productions, video games and computer-based training packages. We shall look at practical examples later in the book, but at this point it is worth distinguishing between the different approaches to handling the above tasks.

Using the PC as an Intelligent Control Device

Original video footage can run into many hours of videotape and, even using a multi-gigabyte hard drive, it is not practical to digitise and store such a volume of material. Instead, a digitising board can be used to capture the first frame of each clip on each tape to be edited. Under the control of the PC, clips from the source videos can then be rearranged and/or trimmed as required, transitions can be inserted, titles can be overlaid and then the result can be printed to a blank destination tape for viewing and distribution.

On-line Video Editing

On-line video editing can involve the creation and manipulation of original digital video material on disk, the import and digitisation of analogue video, or a combination of both. Once the video is in digital format, a whole battery of digital special effects and techniques can be used by the editor to alter individual frames or whole video clips. If required, the result can be output to analogue video tape, CD-ROM or an Internet Web site.

Off-Line Video Editing

The quality possible with today’s on-line digital video technology is to broadcast-quality video what laser printing is to Linotronic printing, i.e. the quality of both is high but the difference is discernible. When a project requires the ultimate broadcast quality in the final result, preprocessing of the video can still be carried out on-line. Then, using a product like Adobe Premiere, an SMPTE timecode-compatible list, called an edit decision list (EDL), can be created at the end of the on-line process. The EDL records the edit decisions (which frames are to be included in the final tape and where each section should start and stop) and special effects in a simple text file which can be read by traditional EDL readers. The final high-quality broadcast tape can then be auto-assembled at a video post-production facility, using the master tapes and the EDL produced on the PC.
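Since an EDL is just a structured text file of timecoded edit decisions, its essence can be sketched in a few lines. The reel names, timecodes and field layout below are hypothetical illustrations in a loosely CMX-style arrangement, not a full EDL specification; real EDL files carry additional fields and stricter formatting.

```python
# A minimal sketch of writing edit decisions to a simple text file.
# Reel names, timecodes and layout are hypothetical; real EDL formats
# (e.g. those read by post-production facilities) are more elaborate.

def timecode(frames: int, fps: int = 25) -> str:
    """Convert a frame count to an HH:MM:SS:FF SMPTE-style timecode."""
    ff = frames % fps
    ss = (frames // fps) % 60
    mm = (frames // (fps * 60)) % 60
    hh = frames // (fps * 3600)
    return f"{hh:02d}:{mm:02d}:{ss:02d}:{ff:02d}"

# Each decision: (event no., source reel, source in/out, record in/out),
# all expressed as frame counts for simplicity.
decisions = [
    (1, "REEL01", 100, 350, 0, 250),
    (2, "REEL02", 40, 190, 250, 400),
]

lines = ["TITLE: MY PROJECT"]
for num, reel, s_in, s_out, r_in, r_out in decisions:
    lines.append(
        f"{num:03d}  {reel}  V  C  "
        f"{timecode(s_in)} {timecode(s_out)} "
        f"{timecode(r_in)} {timecode(r_out)}"
    )
print("\n".join(lines))
```

Each line says, in effect, “copy this stretch of the source reel to this position on the destination tape”, which is exactly the information the post-production facility needs to auto-assemble the final broadcast tape.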

Summary

Compared with the history of the still image, which dates back around fifteen thousand years to the cave drawings of prehistoric man, that of the moving image is very short. However, the development of commercial and artistic applications of this new and exciting form of communication has been very rapid; it took only eighty-four years from Peter Roget’s discovery of the persistence of vision in 1824 before the principle was adapted to enable the release of the first commercial silent film in 1908. ‘Talkies’ followed around twenty years later, roughly coinciding with the advent of television in 1927.

The introduction of colour significantly enhanced the appeal of both media and, on both sides of the Atlantic, technical development proceeded apace to respond to growing public interest. As film and television budgets grew, investment in improved hardware and techniques led to more sophisticated and higher quality productions. The development of magnetic tape for video and audio recording and playback progressed rapidly and it was widely adopted throughout the professional domain.

Meanwhile, the Japanese skill in miniaturising electronics provided the platform needed for the development of the domestic VCR and the low cost video camera, bringing video out of the professional studio and into the home. Editing, however, remained a complex, time-consuming and expensive linear process, requiring the use of sophisticated tape decks, until, as the use of desktop PCs took off, the first affordable desktop video editing applications appeared.

In the following chapters we shall look at examples of the new hardware and software which are now making desktop video a reality, at the growing range of video raw material which can be used in the creative processes, at the methods used to produce video on the desktop and at the alternative methods of publishing the results.

Milestones in the evolution of the moving image

1824

Peter Roget discovered the phenomenon of ‘persistence of vision’

1840s

The photographic process was developed

1877

Eadweard Muybridge used a battery of 24 cameras to record the cycle of motion of a running horse

1889

Hannibal Goodwin and George Eastman developed a form of movie film consisting of strips of high-speed emulsion mounted on strong celluloid

1891

Thomas Edison patented the Kinetoscope

1902

Using an early camera based on Thomas Edison’s invention, the Frenchman Georges Méliès discovered the basic technique of film editing

1908

The first commercial silent film was produced

1915

Hollywood became recognised as the centre of the new film industry

1915

Work began on the American Civil War film, The Birth of a Nation

1925

Charlie Chaplin starred in The Gold Rush

1926

The Warner Brothers studio introduced the first practicable sound films, using a process known as Vitaphone, recording musical and spoken passages on large discs

1927

Warner Brothers Hollywood studio released the first major talking film – The Jazz Singer – starring Al Jolson

1927

The first experimental television broadcast took place in England

1933

The Technicolor process was perfected as a commercially viable three-colour system

1935

Becky Sharp, the first film produced in Technicolor, was released, quickly followed by The Wizard of Oz, one of film’s all-time classics

1937

Walt Disney produced Snow White and the Seven Dwarfs

1950/55

The first magnetic video tape recorders (VTRs) appeared

1956

A transverse scanning system of video recording and playback, called Quadruplex, was developed

1960s

Videodisks were developed as a response to the high cost of magnetic tape

1976

Apple was formed by Steven Jobs and Stephen Wozniak to market the Apple I

1981

IBM introduced the IBM PC

1980/85

The VCR became a spectacularly successful new consumer product

1985/89

The VCR was miniaturised and packaged with a scaled down version of television camera opto-electronics to create the camcorder
