1.2 End-to-End 3D Visual Ecosystem

As past experience and the lessons learned from the development of visual systems show, the key driving force has always been enriching the user experience, the so-called quality of experience (QoE). The 3D visual system faces the same issue. Although a 3D visual system provides a dramatically new user experience compared with traditional 2D systems, QoE has to be considered at every stage of the communication pipeline during system design and optimization to ensure that the move from 2D to 3D is worthwhile. Many factors affect QoE, such as errors in multidimensional signal processing, lack of information, packet loss, and optical errors in the display. Improperly addressed, these issues result in visual artifacts (objective and subjective), visual discomfort, fatigue, and other effects that degrade the intended 3D viewing experience.

An end-to-end 3D visual communication pipeline consists of the content creation, 3D representation, data compression, transmission, decompression, post-processing, and 3D display stages, which also reflects the lifecycle of 3D video content in the system. We illustrate the whole pipeline and the corresponding major issues in Figure 1.2. In addition, we show the possible feedback from later stages to earlier stages that can improve 3D scene reconstruction.

Figure 1.2 End-to-end 3D visual ecosystem.

The first stage of the whole pipeline is content creation. The goal of the content creation stage is to produce 3D content from various data sources or data generation devices. There are three typical ways of data acquisition, which result in different types of data formats. The first is to use a traditional 2D video camera, which captures 2D images from which a 3D data representation can be derived in a later stage of the pipeline. The second is to use a depth video camera to measure the depth of each pixel corresponding to its counterpart color image. Registration of the depth and 2D color images may be needed if the sensors are not aligned. Note that in some depth cameras the spatial resolution is lower than that of a 2D color camera. The depth image can also be derived from a 2D image with 2D-to-3D conversion tools; often the obtained depth does not have satisfactory precision and thus causes QoE issues. The third is to use an N-view video camera, an array of 2D video cameras located at different positions around one scene and synchronized to capture video simultaneously, to generate N-view video. Modeling and creating the 3D scene with graphical tools is another approach; although it can be time consuming, it is now popular to combine graphical and video capturing and processing methods in 3D content creation.

In the next stage, the collected video/depth data are processed and transformed into 3D representation formats for different targeted applications. For example, the depth image source can be used in image-plus-depth rendering or processed for an N-view application. Since the amount of acquired/processed/transformed 3D scene data is rather large compared with single-view video data, there is a strong need to compress the 3D scene data. However, applying traditional 2D video coding schemes separately to each view or each data type is inefficient, as there exist representation/coding redundancies among neighboring views and different data types. Therefore, a dedicated compression format is needed at the compression stage to achieve better coding efficiency. In the content distribution stage, packet loss during data delivery plays an important role in the final QoE, especially for streaming services. Although certain error concealment algorithms adopted in the existing 2D decoding and post-processing stages may alleviate this problem, directly applying solutions developed for 2D video systems may not be sufficient. This is because 3D video coding introduces more coding dependencies, and thus error concealment is much more complex than in 2D systems. Besides, the inter-view alignment requirement in 3D video systems also introduces difficulties that do not exist in 2D scenarios. The occlusion issue is often handled at the post-processing stage, and packet loss makes the occlusion post-processing even more difficult. There are also application layer approaches to relieve the negative impact of packet loss, such as error-resilient coding and unequal error protection (UEP), and these technologies can be incorporated into the design of the 3D visual communication system to improve the final QoE. At the final stage of this 3D visual ecosystem, the decoded and processed 3D visual data are displayed on the targeted 3D display. Each type of 3D display has its unique artifact characteristics and encounters different QoE issues.

1.2.1 3D Modeling and Representation

3D scene modeling and representation is the bridging technology between the content creation, transmission, and display stages of a 3D visual system. The 3D scene modeling and representation approaches can be classified into three main categories: geometry-based modeling, image based modeling, and hybrid modeling. Geometry-based representation typically uses polygon meshes (surface-based modeling), 2D/3D points (point-based modeling), or voxels (volume-based modeling) to construct a 3D scene. The main advantage is that, once the geometry information is available, the 3D scene can be rendered from any viewpoint and view direction without limitation, which meets the requirement of a free-viewpoint 3D video system. The main disadvantage is the computational cost of rendering and storage, which depends on the scene complexity, that is, the total number of triangles used to describe the 3D world. In addition, geometry-based representation is generally an approximation of the 3D world. Although there are offline photorealistic rendering algorithms that generate views matching our perception of the real world, the existing algorithms using the graphics pipeline still cannot produce realistic views on the fly.

Image based modeling goes to the other extreme: it does not use any 3D geometry, but rather a set of images captured by a number of cameras with predesigned positions and settings. This approach tends to generate high-quality virtual view synthesis without the effort of 3D scene reconstruction. The computational complexity of image based representation is proportional to the number of pixels in the reference and output images, but in general not to the geometric complexity such as the triangle count. However, the synthesis ability of image based representation is limited in the range of view change, and the quality depends on the scene depth variation, the resolution of each view, and the number of views. The challenge for this approach is that a tremendous amount of image data needs to be stored, transferred, and processed in order to achieve a good-quality synthesized view; otherwise interpolation and occlusion artifacts will appear in the synthesized image due to the lack of source data.

The hybrid approach can leverage these two representation methods to find a compromise between the two extremes according to given constraints. By adding geometric information into image based representation, the disocclusion and resolution problems can be relieved. Similarly, adding image information captured from the real world into geometry-based representation can reduce the rendering cost and storage. As an example, using multiple images and corresponding depth maps to represent a 3D scene is a popular method (called depth image based representation), in which the depth maps are the geometric modeling component; this hybrid representation avoids the storage and processing of the many extra images the purely image based approach would need to achieve the same high-quality synthesized view. All these methods are described in detail in Chapters 2 and 4.
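To make the depth image based idea concrete, the following minimal sketch forward-warps a reference color image to a nearby virtual viewpoint using its depth map. It assumes a rectified setup with a purely horizontal camera shift, so each pixel's disparity is simply baseline times focal length divided by depth; the function and variable names are illustrative rather than taken from any particular system.

```python
import numpy as np

def warp_to_virtual_view(color, depth, baseline, focal_length):
    """Forward-warp a reference view to a horizontally shifted virtual view.

    color: (H, W, 3) reference image
    depth: (H, W) per-pixel depth in the same units as baseline
    Returns the synthesized view and a hole mask marking disoccluded pixels.
    """
    h, w = depth.shape
    virtual = np.zeros_like(color)
    filled = np.zeros((h, w), dtype=bool)

    # Horizontal disparity for a rectified, parallel-camera setup.
    disparity = np.round(baseline * focal_length / np.maximum(depth, 1e-6)).astype(int)

    # Visit pixels from far to near so that closer surfaces overwrite farther ones.
    order = np.argsort(-depth, axis=None)
    ys, xs = np.unravel_index(order, depth.shape)
    for y, x in zip(ys, xs):
        xv = x - disparity[y, x]          # shifted column in the virtual view
        if 0 <= xv < w:
            virtual[y, xv] = color[y, x]
            filled[y, xv] = True

    return virtual, ~filled               # unfilled pixels are disocclusions
```

The returned hole mask marks disocclusions, which a hole-filling or inpainting step at the post-processing stage would then address.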

1.2.2 3D Content Creation

Other than graphical modeling approaches, 3D content can be captured by various processes with different types of cameras. The stereo camera or depth camera simultaneously captures video and the associated per-pixel depth or disparity information. The multi-view camera captures multiple images simultaneously from various angles; a multi-view matching (or correspondence) process is then required to generate the disparity map for each pair of cameras, and the 3D structure can be estimated from these disparity maps. The most challenging scenario is to capture 3D content from a normal 2D (or monoscopic) camera, which lacks disparity or depth information, so a 2D-to-3D conversion algorithm has to be invoked to generate an estimated depth map and thus the left and right views. The depth map can be derived from various types of depth cues, such as the linear perspective of a 3D scene, the relationship between object surface structure and the rendered image brightness according to specific shading models, occlusion of objects, and so on. For complicated scenes, interactive 2D-to-3D conversion, or offline conversion, tends to be adopted; that is, human interaction is required at certain stages of the processing flow, such as object segmentation, object selection, object shape or depth adjustment, and object occlusion order specification. In Chapter 4, a few 2D-to-3D conversion systems are showcased to give details of the whole processing flow.
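As a rough illustration of how a single depth cue can seed automatic 2D-to-3D conversion, the sketch below builds a coarse depth map from the vertical-position (ground-plane) cue, assuming pixels lower in the frame are closer, and then derives left and right views by symmetric disparity shifting. Real converters fuse many cues and usually involve human interaction; the names and the `max_disparity` constant here are purely hypothetical.

```python
import numpy as np

def depth_from_vertical_position(height, width):
    """Coarse depth map from the ground-plane cue: rows near the bottom are closer."""
    rows = np.linspace(1.0, 0.0, height)          # 1 = far (top), 0 = near (bottom)
    return np.tile(rows[:, None], (1, width))

def stereo_from_depth(image, depth, max_disparity=16):
    """Shift pixels symmetrically to synthesize left and right views from one image."""
    h, w, _ = image.shape
    # Near pixels (small depth) receive large disparity, far pixels small disparity.
    disparity = ((1.0 - depth) * max_disparity / 2).astype(int)
    left = np.zeros_like(image)
    right = np.zeros_like(image)
    for y in range(h):
        for x in range(w):
            d = disparity[y, x]
            if x + d < w:
                left[y, x + d] = image[y, x]      # left view: content shifts right
            if x - d >= 0:
                right[y, x - d] = image[y, x]     # right view: content shifts left
    return left, right                            # holes left unfilled for brevity
```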

1.2.3 3D Video Compression

Owing to the huge amount of 3D video data, there is a strong need to develop efficient 3D video compression methods. 3D video compression technology has been developed for more than a decade and many formats have been proposed. Most 3D video compression formats are built on state-of-the-art video codecs, such as H.264. The compression technology is often a tradeoff between the acceptable level of computational complexity and the affordable communication bandwidth. In order to reuse the existing broadcast infrastructure originally designed for 2D video coding and transmission, almost all current 3D broadcasting solutions are based on a frame-compatible format via spatial subsampling; that is, the original left and right views are subsampled to half resolution and then embedded into a single video frame for compression and transmission over the infrastructure as with 2D video, and at the decoder side demultiplexing and interpolation are conducted to reconstruct the dual views. The subsampling and merging can be done in either (a) the side-by-side format, proposed by Sensio and RealD and adopted by Samsung, Panasonic, Sony, Toshiba, JVC, and DirecTV, (b) the over/under format, proposed by Comcast, or (c) the checkerboard format. A mixed-resolution approach has also been proposed, based on the binocular suppression theory, which shows that the same subjective perception quality can be achieved when one view has a reduced resolution. The mixed-resolution method first subsamples each view to a different resolution and then compresses each view independently.
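The frame-compatible idea can be expressed in a few lines. The sketch below packs a left/right pair into a single side-by-side frame by horizontal subsampling and reverses the process at the receiver; over/under packing is analogous along the vertical axis. This is a minimal sketch using naive decimation and nearest-neighbour interpolation; real systems apply proper anti-aliasing and interpolation filters.

```python
import numpy as np

def pack_side_by_side(left, right):
    """Horizontally subsample each view to half width and pack them into one frame."""
    half_left = left[:, ::2]            # naive decimation; production systems filter first
    half_right = right[:, ::2]
    return np.concatenate([half_left, half_right], axis=1)

def unpack_side_by_side(frame):
    """Split the packed frame and upsample each half back to full width."""
    w = frame.shape[1] // 2
    half_left, half_right = frame[:, :w], frame[:, w:]
    left = np.repeat(half_left, 2, axis=1)    # nearest-neighbour upsampling
    right = np.repeat(half_right, 2, axis=1)
    return left, right
```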

Undoubtedly, the frame-compatible format is very simple to implement without changing the existing video codec and underlying communication infrastructure. However, the correlation between the left and right views is not fully exploited, and the approach is mainly oriented to the two-view scenario rather than the multi-view 3D scenario. During the past decade, researchers have also investigated 3D compression from the coding perspective, and 3D video can be represented in the following formats: two-view stereo video, video-plus-depth (V+D), multi-view video coding (MVC), multi-view video-plus-depth (MVD), and layered depth video (LDV). The depth map is often encoded via an existing 2D color video codec, which is designed to optimize the coding efficiency of natural images. Note, however, that a depth map shows different characteristics from a natural color image, and researchers have proposed several methods to improve depth-based 3D video compression. Nowadays, free-viewpoint 3D attracts a lot of attention; such a system allows end users to change the view position and angle to enrich their immersive experience. Hybrid approaches combining geometry-based and image based representation are typically used to render the 3D scene for free-viewpoint TV. In Chapter 5, we discuss V+D, MVC, MVD, and LDV.

1.2.4 3D Content Delivery

Transmitting compressed 3D video bit streams over networks presents more challenges than conventional 2D video. From the video compression point of view, a state-of-the-art 3D video codec introduces more decoding dependencies to reduce the required bit rate by exploiting inter-view and synthesis prediction. Therefore, existing mono-view video transmission schemes cannot be applied directly to these advanced 3D formats. From the communication system perspective, the 3D video bit stream needs more bandwidth to carry more views than mono-view video. The evolution of cellular communications into 4G wireless networks results in significant improvements in bandwidth and reliability, and mobile users can benefit from the improved network infrastructure and error control to enjoy a 3D video experience. On the other hand, wireless transmission often suffers frequent packet/bit errors, and the highly decoding-dependent 3D video stream is vulnerable to those errors. It is therefore important to incorporate error correction and concealment techniques, as well as error-resilient source coding, to increase the robustness of transmitting 3D video bit streams over wireless environments. Since most 3D video formats are built on existing 2D video codecs, many techniques developed for 2D video systems can be extended or adapted to consider the properties of 3D video. One technique that offers numerous opportunities for this approach is unequal error protection. Depending on the relative importance of the bit stream segments, different portions of the 3DTV bit stream are protected with forward error control (FEC) codes of different strengths. Taking stereoscopic video streaming as an example, a UEP scheme can divide the stream into three layers of different importance: intra-coded left-view frames (the most important), left-view predictive coded frames, and right-view frames encoded from both intra-coded and predictive left-view frames (the least valuable). For error concealment, we can also draw on properties inherent to 3D video; taking the video-plus-depth format as an example, we can utilize the correlation between video and depth information to conceal errors. Multiple-description coding (MDC) is also a promising technology for 3D video transmission. The MDC framework encodes the video into several independent descriptions. When only one description is received, it can be decoded to obtain a lower-quality representation; when more than one description is received, they can be combined to obtain a representation of the source with better quality. The final quality depends on the number of descriptions successfully received. A simple way to apply MDC to 3D stereoscopic video is to associate one description with the right view and one with the left view. Another way of implementing MDC for 3D stereoscopic video and multi-view video consists of independently encoding one view and encoding the second view predictively with respect to the independently encoded view. This latter approach can also be considered a two-layer, base plus enhancement, encoding. The same methodology can be applied to V+D and MVD, where the enhancement layer is the depth information. Different strategies for advanced 3D video delivery over different content delivery paths will be discussed in Chapters 8 and 10, and several 3D applications will be dealt with in Chapter 9.
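To illustrate the UEP idea described above, the sketch below assigns progressively weaker forward error correction to the three layers of a stereoscopic stream, spending more parity on intra-coded left-view frames than on right-view frames. The overhead ratios and the packet classification are hypothetical, chosen only to show the shape of the mechanism.

```python
# Illustrative UEP profile for a stereoscopic stream (parity ratios are hypothetical).
FEC_PROFILE = {
    "left_intra":     0.50,   # strongest protection: 50% parity overhead
    "left_predicted": 0.25,
    "right_view":     0.10,   # least valuable layer gets the weakest code
}

def parity_bytes(payload_size, layer):
    """Number of FEC parity bytes to append for a packet of the given layer."""
    return int(payload_size * FEC_PROFILE[layer])

# Example: a 1000-byte packet carrying an intra-coded left-view frame.
print(parity_bytes(1000, "left_intra"))   # -> 500
```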

1.2.5 3D Display

For the human visual system (HVS) to perceive a 3D scene, the display system is designed to present sufficient depth information for each object so that the HVS can reconstruct its 3D position. The HVS recognizes object depth in the real 3D world through depth cues; therefore, the success of a 3D display depends on how well these depth cues are provided. In general, depending on how many viewpoints are provided, depth cues can be classified into monocular, binocular, and multi-ocular categories. The current 3DTV systems that consumers can buy in retail stores are all based on stereoscopic 3D technology with binocular depth cues. A stereoscopic display multiplexes the two views at the display side, and viewers wear special glasses to de-multiplex the signal into the left and right views. Several multiplexing/de-multiplexing approaches have been proposed and implemented in 3D displays, including wavelength division (color) multiplexing, polarization multiplexing, and time multiplexing.
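Wavelength division (color) multiplexing is the simplest of the three to sketch: a red/cyan anaglyph keeps the red channel of the left view and the green and blue channels of the right view in one frame, and the colored glasses perform the de-multiplexing optically. This is only a minimal illustration of color multiplexing; polarization and time multiplexed displays operate differently.

```python
import numpy as np

def red_cyan_anaglyph(left, right):
    """Multiplex a stereo pair by wavelength: red from the left view, cyan from the right."""
    out = np.empty_like(left)
    out[..., 0] = left[..., 0]       # red channel carries the left view
    out[..., 1:] = right[..., 1:]    # green and blue channels carry the right view
    return out
```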

For 3D systems that do not require glasses, called auto-stereoscopic displays (AS-D), the display uses optical elements such as parallax barriers (an occlusion-based approach) or lenticular lenses (a refraction-based approach) to guide the two-view images to the left and right eyes of the viewer and thus generate a realistic 3D sensation. In other words, the multiplexing and de-multiplexing process of the stereoscopic display is removed. Mobile 3DTV is an example of an AS-D product that has appeared in the market. N-view AS-D 3DTVs and PC/laptop monitors have been demonstrated for many years by Philips, Sharp, Samsung, LG, Alioscopy, and others; they exploit the stereopsis of 3D space for multiple viewers without the need for glasses. However, the visual quality of these solutions still has much room for improvement. To fully enrich the immersive visual experience, end users would want to interactively control the viewpoint, which leads to free-viewpoint 3DTV (FVT). In a typical FVT system, the viewer's head and gaze are tracked to determine the viewing position and direction and thus to calculate the images directed to the viewer's eyes. To render free-viewpoint video, the 3D scene needs to be synthesized and rendered from the source data in order to support seamless view generation during viewpoint changes.

To achieve full visual reality, the holographic 3D display is a type of device that reconstructs the optical wave field such that the reconstructed 3D light field can be seen as a physical representation of the original object. The difference between conventional photography and holography is that photography records only the amplitude information of an object, whereas holography attempts to record both the amplitude and the phase information. Since current image recording systems can only record amplitude information, holography needs a way to transform the phase information such that it can be recorded in an amplitude-based recording system. For more details on 3D displays and the theory behind them, readers can refer to Chapter 3.
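A short worked equation, using the standard off-axis recording picture rather than any specific display design, shows how the phase becomes recordable: the medium captures the intensity of the interference between the object wave O and a known reference wave R,

```latex
I(x,y) = \lvert O(x,y) + R(x,y) \rvert^{2}
       = \lvert O \rvert^{2} + \lvert R \rvert^{2} + O R^{*} + O^{*} R .
```

The cross terms encode the phase of O relative to R as a fringe (amplitude) pattern, and re-illuminating the developed hologram with R yields a term proportional to O, that is, the object wave with both its amplitude and phase.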

1.2.6 3D QoE

Although 3D video brings a brand new viewing experience, it does not necessarily increase the perceived quality if the 3D system is not carefully designed and evaluated. 3D quality of experience refers to how humans perceive the 3D visual information, including the traditional 2D color/texture information and the additional perception of depth and visual comfort factors. As the evaluation criteria for measuring the QoE of 3D systems are still at an early stage, QoE-optimized 3D visual communication remains an open research area. Current efforts to address 3D QoE consider the fidelity and comfort aspects: 3D fidelity evaluates the unique 3D artifacts generated and propagated through the whole 3D visual processing pipeline, and comfort refers to the visual fatigue and discomfort induced in viewers by the perceived 3D scene.

In general, stereoscopic artifacts can be categorized as structure, color, motion, and binocular artifacts. Structure artifacts are those that affect human perception of image structures such as boundaries and textures, and include tiling/blocking artifacts, aliasing, the staircase effect, ringing, blurring, false edges, mosaic patterns, jitter, flickering, and geometric distortion. The color category represents artifacts that affect color accuracy, with examples including mosquito noise, smearing, chromatic aberration, cross-color artifacts, color bleeding, and rainbow artifacts. The motion category includes artifacts that affect motion vision, such as motion blur and motion judder. Binocular artifacts affect the stereoscopic perception of the 3D world, for example, keystone distortion, the cardboard effect, depth plane curvature, shear distortion, the puppet theater effect, ghosting, perspective rivalry, crosstalk, depth bleeding, and depth ringing. Note that AS-D suffers more crosstalk than stereoscopic displays. Crosstalk is mainly caused by imperfect separation of the left- and right-view images and is perceived as ghosting. The magnitude of crosstalk is affected by two factors: the observer's viewing position relative to the display and the quality of the optical filter in the display. The extreme case of crosstalk is the pseudoscopic (reversed stereo) image, where the left eye sees the image representing the right view and the right eye sees the image representing the left view.
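A first-order way to reason about crosstalk is a linear leakage model in which a fraction of the unintended view reaches each eye; the sketch below applies this simplification, which also underlies simple crosstalk-cancellation schemes that pre-subtract the expected leakage. The leakage factor and its symmetry across the two eyes are assumptions made only for illustration.

```python
import numpy as np

def apply_crosstalk(left, right, leakage=0.05):
    """Simulate symmetric display crosstalk: each eye receives a fraction of the other view."""
    seen_left = (1.0 - leakage) * left.astype(float) + leakage * right.astype(float)
    seen_right = (1.0 - leakage) * right.astype(float) + leakage * left.astype(float)
    return seen_left, seen_right
```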

Free-viewpoint 3D systems have additional distinctive artifacts due to the need to synthesize new views from 3D scene representations. In a highly constrained environment, camera parameters can be calibrated precisely and, as a result, visual artifacts in view synthesis arise principally from an inexact geometric representation of the scene. In an unconstrained environment, where the lighting conditions and background are not fixed and the videos may have different resolutions and levels of motion blur, the ambiguity in the input data and the inaccuracies in calibration and matting cause the reconstructed view to deviate significantly from the true view of the scene.

Visual fatigue refers to a decrease in the performance of the human visual system, which can be measured objectively; its subjective counterpart, visual discomfort, is hard to quantify. These factors affect whether end users enjoy the entire 3D experience and are willing to purchase 3D consumer electronic devices. In Chapter 7, more details on QoE topics will be discussed.
