CHAPTER   13

Video on
the Web

Types of Video on the Web

One of the fastest-developing areas for the distribution of video content is the Internet, including the World Wide Web, where video has become extremely popular. This chapter explains the basic concepts of video on demand and streaming media, the steps in the process of preparing video and audio for streaming over the Internet, and the technology involved in this process.

If you have surfed the Web lately and watched any videos, most likely you have experienced video on demand or streaming video.

Video on Demand

Beginning in the 1990s, Web-based video was modeled on the concept of video on demand that had been developed by the cable television industry. At that time cable television companies attempted to provide subscribers with the option to select their preferred programs at any time of the day or night from a large digital database of program material. Subscribers would select a program, and the cable company would transmit it to the viewer’s home television. Today, video on demand is found on many cable and satellite television systems and in hotels, where guests can select from a relatively small menu of movie and other program titles.

Web-based video on demand systems require that video be downloaded onto a computer hard disk and then played back on the subscriber's personal computer. In other words, Web video on demand is mainly a download-and-play technology. Because video files were usually very large and typically downloaded over 28.8, 33.6, or 56 kbps dial-up modems, downloading took most users far too long. As a result, Web-based video on demand was avoided by most Internet content providers because it was too time consuming and inefficient.

Streaming Media

With the advent of faster computers and networks, increased adoption of broadband services such as DSL and cable modems, and the development of more efficient video and audio compression systems, a more convenient launching platform for streaming media on the Internet has developed.

Streaming media is a technology that is widely used today to deliver different types of media via the Internet. The term has come to exemplify the digital convergence of communication and information technologies, such as video, graphics, voice, text, and data. These elements come together online to provide a wide range of audiovisual experiences that make the content much easier to understand and much more fun to interact with.

Many people consider streaming media to be a new delivery medium comparable to broadcast television or cable television. It is revolutionizing the ways in which organizations communicate with their employees, their customers, and the public in general. Many news media organizations are streaming news clips from their regular broadcast newscasts. For example, Reuters, an international news agency, unveiled a service in March 2003 offering raw footage of the U.S. military campaign in Iraq. Film studios and producers regularly stream movie trailers for upcoming theatrical film releases. Major League Baseball, in open competition with the broadcast networks, started to stream live baseball games during the 2003 season. Yahoo broadcast the 2003 NCAA basketball tournament through its Platinum Internet service. The Harvard School of Public Health and Stanford University, among many other educational institutions, deliver on-demand and live streaming video of courses, commencements, services provided by their departments, and important events taking place on their campuses and provide links to research sites with streaming content. Table 13.1 shows some of the areas in which streaming media, particularly video, are being used.

These examples illustrate some of the most common uses of streaming video. Often, as in the case of news clips or sportscasts, streaming content is regular television programming that was originally produced for viewing on broadcast or cable television but now has been repurposed for delivery through the Internet. In other cases, content (programs) is produced with specific streaming (Web) values and characteristics in order to optimize, both aesthetically and technically, the experience for the Web viewer, thus improving the chance of achieving the program’s objectives.

What Is Streaming Media?

Streaming media is a technology that delivers dynamic content, such as video or audio files, over a local computer network or the Internet. It transmits files in small packets that “stream” for immediate, real-time viewing or listening. While the user watches or listens to the first packet, the next one is buffered or downloaded to the computer’s memory and then released for continuous viewing and/or listening. The stream (packet of information) is encoded at the sender’s end and decoded at the user’s end by using appropriate computer software.
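The buffer-and-play cycle can be sketched in a few lines of Python. This is a toy illustration with in-memory bytes, and the function name is ours; a real stream arrives over a network protocol, not from a local variable.

```python
def stream(media_bytes, packet_size=4):
    """Yield a media file in small packets, the way a streaming server does."""
    for i in range(0, len(media_bytes), packet_size):
        yield media_bytes[i:i + packet_size]

# While one packet plays, the next is buffered in the player's memory:
player_buffer = []
for packet in stream(b"tiny-clip-data!"):
    player_buffer.append(packet)

# Joining the buffered packets reconstructs the original media.
assert b"".join(player_buffer) == b"tiny-clip-data!"
```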

TABLE 13.1 Typical Uses of Streaming Media

Areas                Uses
Entertainment        Music, movie trailers, sports
Information          News, weather
Government           Training, military
Education            Distance learning, academic events, research
Corporate            Training, sales, advertising, videoconferencing
Community Services   Reach community members

Streaming media can be delivered from a streaming media server or from a web server at different speeds or data rates. Higher delivery speeds or data rates produce better-quality streaming media than do slower delivery speeds.

Types of Streaming:
Progressive, Real-Time, and Live

Progressive Streaming

Progressive download is very similar to the video on demand approach described earlier; however, the file does not have to be downloaded completely before viewing. In this approach, a media file to be delivered over a specific network’s bandwidth is placed on a regular Web server, also known as a standard HTTP server. HTTP is the abbreviation for “hypertext transfer protocol,” a computer standard that defines how messages are formatted and transmitted over the World Wide Web.

After the client computer accesses the file, the server starts sending it, and after a few seconds of downloading, the user can play the video or audio while the rest of the file is still being downloaded. What the user sees is the part of the file that has been downloaded up to that point. Progressive download is best suited for playing short movies over modem speeds of 28.8 kbps or 56 kbps.

Real-Time Streaming

The real-time streaming approach follows the traditional model of prerecorded television broadcast delivery of programming with the big difference being that the program can be accessed by multiple users at the same time and at different places in the file. (This differs from the broadcast model, in which everyone watches the program at the same time and all viewers are at the same place in the program.)

With real-time streaming, archived (previously recorded) material residing on a server is viewed while being downloaded (broadcast). In other words, the user sees the video in real time, as it is streamed.

The computer systems that are used for real-time and live streaming require special dedicated servers called streaming media servers that keep the bandwidth (capacity) of the network matched to that of the viewer. This approach supports random access to the streamed signal, which means that the user can move to any part of the file or video at any time during delivery. This mode of streaming offers anytime and anyplace delivery of archived (on demand) media to users anywhere in the world.

Live Streaming

Also known as Webcasting, live streaming follows the traditional model of live broadcast delivery. The live streaming technology allows for an event to be viewed in real time as it is transmitted live. Afterwards, it can be archived and then viewed or “rebroadcast” using the real-time mode of delivery.

The Process of Preparing Streaming Video

The process for preparing streaming video can be divided in two stages: (1) creating/acquiring and editing the media and (2) encoding, delivering to servers, and streaming the media. (See Figure 13.1.)


FIGURE 13.1   Complete Streaming Process

In stage 1, during preproduction, scripts or storyboards are finalized, and decisions on which streaming technologies to use are made. During production and postproduction the video and audio are recorded, captured to a video card as a digital signal, and edited.

In Stage 2, the signal is sent via the video card to a streaming encoder, which prepares it for distribution over a network. The signal is then delivered to a media server (streaming server) for distribution over the Web. The media server is connected to a web server, which acts as a bridge between the media server and the end users. End users log onto the web server through a regular web page with HTML links. HTML is the abbreviation for “hypertext markup language,” the standard computer protocol that is used to format web pages. When a user accesses a link by clicking on it, a reference pointing to the media server is generated, and the media is accessed.
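The hand-off from web page to media server can be sketched as follows. This is a simplified illustration: the URL, file names, and helper function are all hypothetical, and the metafile convention loosely follows RealMedia's .ram files, in which the web server hosts only a tiny text file pointing at the streaming server.

```python
def media_link(title, media_server_url, metafile_name):
    """Build the two pieces the web server hosts: a small metafile whose
    contents point the player at the media server, and the HTML link that
    the end user actually clicks."""
    metafile_contents = media_server_url + "\n"
    html_link = f'<a href="{metafile_name}">{title}</a>'
    return metafile_contents, html_link

meta, link = media_link("Watch the lecture",
                        "rtsp://media.example.edu/lecture.rm",  # hypothetical server
                        "lecture.ram")
```

Clicking the link retrieves only the small metafile from the web server; the player then opens the reference inside it, and that reference is what actually contacts the media server.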

Stage 1: Creating and
Encoding the Media

As you can see in Figure 13.2, the creating and encoding stage involves some of the already familiar steps seen in our production model in Chapter 3. Preproduction, production, and postproduction are all part of the process. However, as you will see in a moment, production values and postproduction procedures for streaming video operate on a different set of principles than those for traditional video. Furthermore, streaming technology works with different network protocols and tools that have to be taken into account during the very early preproduction stage. In other words, as in any video production, careful planning is required to accomplish a successful streaming session.


FIGURE 13.2   Stage 1 of the Streaming Process: Creating and Encoding

PREPRODUCTION   As we mentioned earlier in the book, planning is at the heart of any media production process, and the preproduction stage is critical in preparing your video for the Web. Besides the regular steps in the production model that was presented in Chapter 7, when preparing video content for the Web, you also need to consider the following:

1. What type of production do you want or need to do? Is it going to be a single talking head or are you going to use a large cast?

2. How is your intended target audience going to access your video? Through a phone line–based dial-up modem or a high-speed connection? Large-scale streaming requires high bandwidth capability. Bandwidth, the capacity of your delivery channel, is measured in bits per second (bps). Dial-up modems have speeds of 28.8, 33.6, or 56 kilobits (thousand bits) per second (kbps). Other channels have higher download speeds, ranging from around 384 kbps for DSL (digital subscriber line) to 1.544 megabits (million bits) per second (Mbps) for T1 lines. In general, the larger the bandwidth, the better the quality of your video on the Web. For example, if your message is going to be delivered through a 28.8-kbps dial-up modem, you should not plan a video with lots of motion and effects. If you plan to deliver over a 300-kbps channel, you will have more flexibility to enhance your video with effects, transitions, and animated graphics.

3. What kind of delivery platform and media players (QuickTime, RealVideo, Windows Media Player, or Flash) will be used to encode and deliver your video? How much interactivity do you wish your audience to experience?
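The bandwidth arithmetic in point 2 is easy to work out. The short Python sketch below (the function name is ours, and protocol overhead is ignored) shows why download-and-play was so painful over dial-up:

```python
def download_time_seconds(file_size_mb, bandwidth_kbps):
    """Time to move a file over a channel, ignoring protocol overhead."""
    file_size_kilobits = file_size_mb * 8 * 1000  # 1 MB = 8,000 kilobits (decimal)
    return file_size_kilobits / bandwidth_kbps

modem = download_time_seconds(5, 56)   # a 5 MB clip over a 56 kbps modem: ~714 s
dsl = download_time_seconds(5, 384)    # the same clip over 384 kbps DSL: ~104 s
```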

PRODUCTION

Shooting Video for the Web There are significant differences between shooting video for distribution via traditional broadcast or cable television and shooting video for the Web. Although in both cases high aesthetic standards are to be applied, Web technology such as compression codecs and bandwidth limitations can in many cases change your design approach to video shooting.

The first important thing to understand is that before getting to the Web, video has to be compressed (made smaller). Video compression works by eliminating redundant information. When compression is employed, each frame of information is analyzed to identify which information in the frame is repetitive and which is new. If there are a lot of changes between one frame and the next, it will take more bandwidth to transmit the signal. If there are few changes between one frame and the next, only the new information needs to be transmitted. So when we shoot video for the Web, we are concerned with keeping the scene as unchanged as possible.
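The frame-to-frame logic can be illustrated with a toy delta encoder in Python. The names are ours, and real codecs work on blocks of pixels and motion vectors rather than individual values, but the principle is the same: only what changed needs to be transmitted.

```python
def frame_delta(prev_frame, next_frame):
    """Return only the pixels that changed: (position, new_value) pairs."""
    return [(i, new) for i, (old, new) in enumerate(zip(prev_frame, next_frame))
            if old != new]

# A static background with one moving subject changes only a few pixels:
frame1 = [0, 0, 0, 7, 0, 0]
frame2 = [0, 0, 0, 0, 7, 0]
delta = frame_delta(frame1, frame2)  # only 2 of 6 values need to be transmitted
assert delta == [(3, 0), (4, 7)]
```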

1. Motion. Avoid unnecessary camera or subject movements. Pans, tilts, zooms, and other camera movements radically change the content of the shot from one frame to the next. Unfortunately, when video is compressed and distributed over the Web, these camera movements will mostly appear blurry on the screen. If you must pan or zoom, do it as slowly as possible. Try to keep your subject's motion to a minimum, and use a tripod whenever you can: shaky pictures are as hard on the compressor as deliberate camera movement.

2. Background. The background should be kept as static and uniform (a single color) as possible. Excessive motion in the background will cause the picture to be blurry. Remember, compression is about change, and the less change your compressor has to worry about, the better your video is going to look. A talking head against a dark blue background is a good option for streaming video.

3. Framing and focus. Video is, in general, a close-up medium. Video for the Web is even more of a close-up medium. Most likely, your video will be displayed in a small window on a computer screen and not as a full-screen image. Close-ups will help the viewer to appreciate the details of your production. Also, make sure that your images are in sharp focus.

4. Lighting and colors. Do not have talent wear white, black, or clothes with fine patterns (e.g., herringbone). Try to create good contrast between foreground and background. A well-lit and sharp scene will undoubtedly look much better than a poorly lit one. Although today's cameras allow you to shoot in very low light and your video may look acceptable when viewed on a regular television monitor, these images will degrade considerably by the time they are ready to be streamed.

5. Animation. If you are going to use animation in your video, it is best to use vector-based animation software instead of bit-mapped software; the latter takes up a larger chunk of bandwidth.

Audio for the Web As in regular video production, there is a tendency to concentrate on the picture elements and take the audio for granted, thus degrading the whole production because of poor audio. However, keep in mind that audio and video both take up bandwidth. So the more complex the audio portion of your program is, the less bandwidth you will have for the delivery of the video. Again, planning becomes essential to find the proper balance between audio and video. Here are some basic guidelines for dealing with the sound side of your production.

1. Use external microphones. High-quality external microphones will usually give you better sound quality than will the camera’s built-in microphone. The idea is to create the best audio signal possible, clear and clean, because it will be degraded somewhat when compressed.

2. Ambient sound. Natural background sound, also called ambient sound, is probably desired in most video productions shot on location. However, make sure the background sound your microphone is picking up is not simply background noise. And make sure that the background sound is not so loud that it competes for attention with the principal foreground sound in your program.

3. Audio levels. Especially when working with DV, make sure your levels are high but do not peak into the red, which can cause serious distortion of the sound. When converting analog audio into digital files, remember that the analog peak of 0 VU is equivalent to -12 dB in the digital domain. (See Chapter 5 for more discussion about audio levels.)
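The level relationships in point 3 can be expressed numerically. This Python sketch (function names ours) uses the standard decibel formula and the 0 VU = -12 dB alignment mentioned above; note that some facilities align 0 VU to other digital reference levels.

```python
import math

def amplitude_to_dbfs(amplitude, full_scale=1.0):
    """Decibels relative to digital full scale; 0 dBFS is the clipping point."""
    return 20 * math.log10(amplitude / full_scale)

def vu_to_dbfs(vu, alignment=-12):
    """Map an analog VU reading to dBFS, assuming 0 VU sits at -12 dBFS."""
    return vu + alignment

headroom = vu_to_dbfs(0)       # -12: analog peak leaves 12 dB of digital headroom
half = amplitude_to_dbfs(0.5)  # about -6: halving the amplitude costs about 6 dB
```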

POSTPRODUCTION Once your video and audio have been recorded, the next steps in preparing your media for distribution on the Web involve capturing, digitizing, and editing your media. You have already reviewed general principles of video editing in previous chapters. The following editing guidelines should help you to enhance the quality of your streaming video.

Capturing and Digitizing Media producers use these two terms to describe the process of putting media into digital form and bringing media into a computer hard drive. Capturing describes the process of transferring a digital video clip onto a hard drive. Digitizing occurs when the video card converts the analog media (audio or video) into digital form. For example, an analog VHS videotape can be captured by outputting the signal from a VHS player into the computer’s video card, converting the signal into digital form, and then storing it on the computer’s hard drive.

However, some media, such as DV, are already in digital form. Although some producers may refer to the process of transferring digital video through an IEEE 1394 connector (FireWire or i.LINK) as digitizing, in reality no digitizing process is involved. The media is already in digital form, and all that is happening is that the signal (video and audio) is being transferred from DV tape via FireWire to a computer hard drive or from a hard drive onto a DV digital tape.

Capturing or digitizing not only converts an analog signal into a digital one, but compresses the signal as well. (DV signals are compressed when they are recorded by the camcorder.) Files must be digital before they can be encoded into a streamable format. Later, when they are encoded, these files will be compressed again into RealVideo, Windows Media, or QuickTime files to make them streamable.

Video File Formats There are three main digital video file formats: QuickTime MOV files, Windows AVI files (to be replaced soon by the Advanced Streaming Format, or ASF), and open-standard MPEG files. Digitized video files are brought onto your hard drive in one of these video file formats.

Editing Editing is the process of constructing your completed video project out of individual shots, audio clips, titles, and animation. The guidelines that follow are specifically geared toward producing video for the Web. The first principle to adhere to is to make sure that your video and audio source material is recorded and captured at the highest quality level, because there will be some inevitable quality loss when it is compressed for the Web.

1. Avoid unnecessary transitions and special effects. Just as we avoid camera movement and subject motion when shooting for the Web, we must avoid some of the transitions and special effects that we normally use when editing for the television screen. Dissolves, wipes, and other effects look good on the television screen, but they require a lot of bandwidth to transmit via the Internet, and they may not look very good after they have been compressed.

2. Deinterlace. The NTSC television system produces an image by a process called interlaced scanning: Two fields are combined to produce a video frame. Conventional video monitors then display this interlaced signal. All video on the Web is going to be viewed on a computer monitor, which uses progressive scanning. If you produce a program on a regular NTSC video system (e.g., DV) and are planning to view it on a progressive scan display (computer monitor), you need to deinterlace the signal. Deinterlacing converts the interlaced fields of a video frame into a single frame that can be progressively scanned. In addition, deinterlacing gets rid of the jaggies (jagged edges in the picture) that appear in standard interlaced video frames when they are viewed on a computer monitor.

3. Titles. Titles are part of almost every video, but again, you should not give them the same treatment that you do in regular video production. For the Web, keep your titles simple, and avoid any motion. Use as large a font as is practical because it is likely that your video will be viewed in a smaller size than full frame on the computer screen.
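The deinterlacing step in guideline 2 can be sketched as follows. This is a minimal "discard one field and line-double" approach with names of our choosing; professional deinterlacers interpolate between fields instead of simply duplicating lines.

```python
def deinterlace(frame_lines):
    """Keep one field (the even-numbered scan lines) and duplicate each
    line to restore the frame's original height."""
    field = frame_lines[0::2]
    doubled = []
    for line in field:
        doubled.extend([line, line])
    return doubled

# "a" lines belong to one field, "b" lines to the other:
interlaced = ["a0", "b0", "a1", "b1"]
assert deinterlace(interlaced) == ["a0", "a0", "a1", "a1"]
```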

Stage 2: Encoding, Servers,
and Delivery

Figure 13.3 presents a model of the principal elements in streaming media over the Internet.

ENCODING AND COMPRESSION Encoding is a term that is used to describe the compression of files into specific formats. In the case of on-demand or streaming video files, encoding refers to the compression (reducing the file size) of those files into video files that can be viewed on demand or via streaming over a network. Keep in mind that any type of compression lowers the quality of the image.


FIGURE 13.3   Stage 2 of the Streaming Process: Servers and Delivery

Data transmission rate and file size are also affected by frame rate and window size. A regular NTSC video or television signal plays at a frame rate of 30 frames per second (fps) with a window size of 640 × 480 pixels. (DV is 720 × 480 pixels.) Most computers, unless they have special acceleration hardware or software, cannot play a file at that frame rate and window size. Typical for the Web are frame rates of 5 to 15 fps (in multiples of 5) and window sizes of 320 × 240 pixels (half screen), 240 × 180 (between half and one quarter of the screen), or 160 × 120 (one quarter of the screen). Of course, you can always make the window smaller. (See Figure 13.4.)


FIGURE 13.4   Window Sizes
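The effect of frame rate and window size on data rate is straightforward to compute for uncompressed video. In this sketch the function name is ours and 24 bits per pixel is assumed:

```python
def raw_data_rate_mbps(width, height, fps, bits_per_pixel=24):
    """Uncompressed video data rate in megabits per second."""
    return width * height * fps * bits_per_pixel / 1_000_000

full = raw_data_rate_mbps(640, 480, 30)  # ~221 Mbps before any compression
web = raw_data_rate_mbps(320, 240, 15)   # ~28 Mbps: 1/8 the raw data to compress
```

Even the smaller window still far exceeds a 56 kbps or 384 kbps channel; closing that gap is the codec's job.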

Once your video has been edited and finalized, you need to create files that are streamable. During the preproduction steps you made some decisions about your audience, what bandwidth you were going to use to deliver your video, and the type of media player that would be used to view it at the user’s end. Keep in mind that if you are going to deliver your video from a web server, you will need to create a separate file for each speed that you intend to stream to. If you intend to stream your video from a streaming media server, then you will need to create only one single file with the capability of being streamed at different speeds. This is where codecs come in.

Codec is an abbreviation for “compression/decompression.” A codec is a mathematical algorithm that is used to squeeze media files into a smaller format for streaming. Compression occurs during the encoding stage and decompression occurs when the file is viewed at the user’s computer. Codecs are also used to prepare files before they are recorded to CDs or DVDs.
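Run-length encoding, one of the simplest real compression schemes, shows the encode/decode pairing in miniature (names ours). It is lossless, unlike the lossy codecs used for streaming video, and vastly simpler, but the round trip is the same idea: compress at the sender's end, decompress at the viewer's.

```python
def rle_encode(samples):
    """Collapse runs of repeated values into (value, run_length) pairs."""
    encoded = []
    for value in samples:
        if encoded and encoded[-1][0] == value:
            encoded[-1] = (value, encoded[-1][1] + 1)
        else:
            encoded.append((value, 1))
    return encoded

def rle_decode(encoded):
    """Expand (value, run_length) pairs back into the original samples."""
    return [value for value, count in encoded for _ in range(count)]

scanline = [0, 0, 0, 0, 7, 7, 0, 0]
packed = rle_encode(scanline)          # [(0, 4), (7, 2), (0, 2)]: 3 pairs, not 8 values
assert rle_decode(packed) == scanline  # lossless round trip
```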

There are three widely used video on demand and streaming technologies: RealNetworks RealMedia, Windows Media, and QuickTime. However, Macromedia Flash Video has been making important inroads as a streaming technology since its introduction in 2002 and has become very popular. These technologies consist mainly of three components: the encoding software that compresses and converts the media files to a format that can be viewed by a player, the server software that delivers the stream to multiple users, and the media player that allows the end user to view (decompress) the media files.

TABLE 13.2 Streaming Technologies

                Developer       Encoder                   Server                 Player
RealMedia       RealNetworks    Helix Producer            Helix Server           RealOne Player
                                (RealVideo, RealAudio)
Windows Media   Microsoft       Windows Media Encoder     Windows Media          Windows Media
                                                          Server Series          Player
QuickTime       Apple           QuickTime Broadcaster,    QuickTime              QuickTime
                                Sorenson                  Streaming Server       Player
Flash Video     Macromedia      Flash Video Exporter      Flash Communication    Flash Player
                                                          Server

The developers of these technologies are continually improving the quality of the encoding and media player products and developing new ones that are making the area of video on the Web more and more attractive to both content providers and end users. Table 13.2 gives a description of the basic characteristics of these technologies. (If you need up-to-date information on these technologies, it is recommended that you visit their websites.) In addition to these streaming technologies, some attention should be paid to MPEG-4 and MPEG-4 Part 10 (also known as H.264), compression technologies that promise to standardize encoding and delivery for streaming and multimedia in general.

RealVideo RealVideo is a product of RealNetworks. The RealNetworks streaming technology has several components. The streaming media itself is called RealMedia, which can have two separate components: RealVideo for streaming video and RealAudio for streaming audio. RealVideo is a streaming media architecture that offers both HTTP and real-time (true) streaming. It is widely used for network delivery of multimedia, especially video and audio, and supports data rates ranging from dial-up modem speeds up to full-screen video.

Windows Media Windows Media is Microsoft's streaming architecture; it runs on Microsoft's own server protocol rather than on the standard Real-Time Streaming Protocol (RTSP). Windows Media uses Active Streaming Format (ASF) files and supports true streaming and limited HTTP streaming (progressive download).

QuickTime QuickTime is a multiplatform, multimedia architecture. It runs on Windows and Apple operating systems and supports both HTTP and RTSP. QuickTime is widely used to deliver media not only via the Internet, but also to optical storage media such as DVDs and CD-ROMs. Sorenson was the de facto QuickTime codec; the latest version, QuickTime 7, features the H.264 codec.

Macromedia Flash Video Flash is a plug-in—a program that is designed to add to or change the functionality or specific features of a larger program or system. The new component just “plugs in” to an existing program and enables it to do additional tasks. Developed by Macromedia, Flash has become the standard for interactive graphics and streaming animation software for the Web. Flash uses SWF files, which are very small. Flash has now added the capability of video streaming by incorporating FLV—files containing audio and video—within the original SWF file. Flash Video uses Flash Video Exporter to encode audio and video to the FLV file format. For live streaming, FLV files are delivered from a Flash Communication Server and can be viewed by using Flash Player.

MPEG-4 MPEG-4 is a standard developed by the Moving Picture Experts Group (MPEG), an international committee dedicated to standardizing video encoding algorithms. This group also developed the MPEG-1 and MPEG-2 standards that made possible interactive video on CD-ROM and DVD, as well as MPEG-1 Audio Layer III, more popularly known as MP3, which has become a standard for storing and sharing audio files. The group's objective in developing MPEG-4 was to create a compression standard that would enable the integration of content production, delivery, and access in the areas of interactive multimedia on the Web, interactive graphics applications, and digital television. More recently, MPEG and the International Telecommunication Union (ITU) joined efforts to develop the H.264, or MPEG-4 Part 10, standard. H.264 has a high level of compression efficiency: it can deliver video of the same quality, at half the data rate, as a DVD encoded with MPEG-2, the codec used in standard- and high-definition digital television and in DVDs. In addition, it has a broad range of applications—it can deliver high-quality video across a wide bandwidth range, from mobile phones to HD video.

SERVERS There are two primary ways to stream video over the Web. The first method is the HTTP approach, in which a regular web server provides the content to the end user. The second method is the streaming media server approach, in which the streaming content is supplied to the end user by a highly specialized server.

HTTP Streaming Server In this approach, the video or movie files are placed directly onto a web server. The media files are completely downloaded to the hard drive of the end user’s computer, where the files can be viewed at any time and at a higher data rate for better quality.

RTSP Streaming Server RTSP uses a dedicated streaming server that sends the media files as they are requested and does not require that the file be stored on the user’s computer as HTTP streaming does. Because RTSP streaming works without saving files to the receiving computer, it is possible to do live streaming, and it is also possible to send files at different data rates (speeds) simultaneously. With this approach, the server and end user can interact during the delivery process. Content that is delivered through this method can also be archived for future viewing.

LIVE STREAMING (WEBCAST) Live streaming, also known as webcasting, is the equivalent of a live television broadcast, except that in a webcast live audio and video are delivered over the Internet. The process for producing a live streaming event, like that for a live television broadcast, requires extensive and careful planning. Efficient preproduction is the key to success.

One important issue that a producer must think about when planning a webcast is what kind of computer hardware and Internet connectivity the target audience has. This is very different from a live televised event, in which the producer does not worry about the type of television set or how the audience is receiving the signal (e.g., over-the-air broadcast, cable, satellite) because it can be assumed that every television set can decode and display the television signal. For webcasters, by contrast, the type of technology the target audience has—bandwidth, computer, connection to the Internet—will determine many of the technical and aesthetic aspects of the streaming event.

Figure 13.5 illustrates one way in which a live event can be streamed and the different elements that come into play for this particular event. Notice that in this case the live signal is encoded onsite and then delivered through the Internet.


FIGURE 13.5   Live Streaming. A Live Event Is Captured and Encoded Onsite and Then Distributed via the Internet.

As the technology moves forward and broadband channels become more readily available to home computer users, we will likely see a large increase in the number of live streaming events. Furthermore, as the convergence between video and computers moves them closer to becoming an integrated medium, we will probably be able to receive both television broadcasts and webcasts through the same channel and view them on the same monitor.
