2D to 3D conversion, 7, 56, 85, 293
3D content capturing, 85
3D content creation, 85
3D display, 63
3D multi-view generation, 125–6
3D quality of experience, 9–10
3D scene modeling, 85
3D-DCT, 241
3DVC, 167
active 3D-glass display, 290
adaptive modulation and coding (AMC), 11, 191
additive white Gaussian noise (AWGN), 177
ad-hoc network, 193
Advanced Video Coding (AVC), 129
alpha matting, 56
amplify-and-forward (AF), 284
anaglyph, 65
angular disparity, 64
angular intra-prediction, 141
animation framework extension (AFX), 165–6
application layer, 174
arbitrary slice order (ASO), 140
arithmetic coding, 136
asymmetric motion partition (AMP), 141
asymmetric stereo video coding, 142–3
asymmetries in stereo camera rig, 216–17
autofocus processing, 91
automatic 2D-to-3D conversion (a3DC), 103–11
automatic repeat request (ARQ), 173
automultiscopic, 79
autostereoscopic display, 9, 71–8
autostereoscopy, 289
banding artifact, 218
base layer (BL), 323
basis function, 32
B-frame, 137
bilateral filter, 153
binarization process, 136
binary bin, 136
binary format for scenes (BIFS), 165
binary phase shift keying (BPSK), 176
binocular rivalry, 217
binocular suppression theory, 7, 142, 318
block matching (BM), 241
blur gradient, 248
blurring artifact, 218
broadband channel, 181
B-slice, 137
capture cues, 247
cave automatic virtual environment (CAVE), 71
center of use (CoU), 226
channel, 175
channel code rate, 175
channel delay spread, 181
channel encoder, 174
channel-induced distortion, 259
chromatic aberration, 217
circuit switching, 171
circular polarization, 69
coding block, 141
coherence bandwidth, 181
coherence time, 188
color bleeding artifact, 218
color cues, 64
color dispersion, 77
color matching function, 66
color rivalry, 68
color separation, 77
colorimetric asymmetry, 217
complementary decimation, 146
complexity model, 314
compression artifact, 218
context adaptive binary arithmetic coding (CABAC), 135–6
context adaptive variable length coding (CAVLC), 135–6
contrast masking, 229
contrast sensitivity, 229
conventional stereo video (CSV), 291
convergence point, 206
cooperative communications, 14, 284
coordinate transform, 207
corner cutting, 35
corner-table mesh, 26
corona artifact, 164
correlation, 318
crossed disparity, 206
crossed parallax, 207
curve fitting, 321
data-driven animation, 20
data link layer, 173
datagram congestion control protocol (DCCP), 329
DCT compression artifact, 218–19
deblocking filter, 130
deblocking process, 136
decode-and-forward (DF), 284
decoding drift, 324
de-correlation transform, 130
deficit round robin (DRR), 331
defocus cues, 63
depth camera, 88
depth cue, 205
depth from focus and defocus, 103–4
depth from geometric cues, 103
depth from planar model, 103
depth from shading, 104
depth-image-based rendering (DIBR), 148
depth image-based representation, 6, 18, 51–7
depth map bleeding, 219
depth map ringing, 219
depth offset, 156
depth-plane curvature, 212
depth ringing, 219
depth video camera, 5
descriptive quality, 223
diagonal scan, 141
DIBR-based error recovery, 324
diplopia, 206
discrete cosine transform (DCT), 133
disparity, 87
disparity compensation, 157
disparity correlation, 269
disparity estimation, 160
disparity map, 6
displaced frame difference (DFD), 317
displacement map, 33
distortion-quantization, 320
distributed source coding, 14
divergent parallax, 207
Doppler shift, 188
Doppler spectrum, 188
Doppler spread, 188
double flash, 70
downsampling, 281
drifting artifact, 139
dual LCD panel, 78
Digital Video Broadcast (DVB), 328
DVB-C, 329
DVB-H, 329
DVB-S, 329
DVB-T, 329
earliest deadline first (EDF), 331
earliest deadline first with deficit round robin (EDF-DRR), 331
edge, 24
edge collapse, 30
edge-preserving, 153
end-to-end 3D visual ecosystem, 3–5
end-to-end distortion, 265
enhancement layer (EL), 323
eNodeB, 195
entropy coding, 129–130, 135–6
error protection, 324
error resilience, 267
evolved packet core (EPC), 195, 200
evolved universal terrestrial radio access network (E-UTRAN), 195
Exp-Golomb coding, 135
external preference mapping, 224
extraordinary vertex, 36
extrinsic matrix, 162
face, 24
face-vertex mesh, 25
false contouring artifact, 218
false edge artifact, 218
fast binary arithmetic coding, 136
fast fading, 188
fatigue, 249
femtocell, 12
film-type patterned retarder (FPR), 69
FinePix Real 3D System, 291
finite impulse response (FIR), 136
flat-fading channel, 181
flexible macroblock ordering (FMO), 140
focus value (FV), 91
forward error control (FEC), 8, 324
frame-compatible, 7
frame-compatible stereo video streaming, 318
free viewpoint video, 166
free-viewpoint 3DTV (FVT), 9
frequency-selective channel, 181
full-reference (FR) metrics, 227
full-resolution frame-compatible, 146–8
full-resolution frame-compatible stereo video streaming, 318
Gaussian kernel, 153
generalized procrustes analysis (GPA), 224
geometric cues, 63
geometry-based modeling, 6, 85–6
geometry-based representation, 17, 22–43, 166
glass-less two-view systems, 289
global system for mobile communications (GSM), 194
Google Street View, 48
Gramian matrix, 67
graphics pipeline, 31
group of GOP (GGOP), 318
group of pictures (GOP), 137
H.264, 129
H.264/MPEG-2 multiview profile, 143
half-edge mesh, 27
half-resolution frame-compatible, 144–6
Hadamard transform, 134
head tracking, 79
header, 172
hierarchical B-frame, 138, 159
high efficiency video coding (HEVC), 129, 140–142
high-speed uplink packet access (HSUPA), 195
hill-climbing focusing, 94
hole-filling algorithm, 150
holographic 3DTV, 9
horizontal disparity, 64
Horopter circle, 206
Huffman coding, 135
human vision system based metrics, 228–32
human visual system (HVS), 9, 63, 205–6
IEEE 802.16–WiMAX, 202
I-frame, 137
image-based representation, 18, 43–51, 166
information fidelity criterion (IFC), 234
information theoretic QA, 234
instantaneous decoder refresh (IDR), 137
integer DCT transform, 134
integral image, 75
integral videography (IV) overlay, 305
intensity cues, 64
interactive 2D-to-3D conversion, 111
interference, 177
inter-frame prediction, 130
interocular distance, 205
inter-symbol interference (ISI), 183
inter-view bilateral error concealment (IBEC), 273
inter-view correlation, 269
inter-view prediction, 143
intra-frame prediction, 130–132
intrinsic matrix, 162
IP multicast, 321
IPv4, 174
IPv6, 174
IS-95, 194
I-slice, 137
joint multi-program rate control, 318
just-noticeable threshold (JNT), 143
Karhunen-Loeve transform (KLT), 133
key frame extraction, 116
key-framed animation, 20
keystone distortion, 211
large-scale propagation effect, 177
largest coding unit (LCU), 140
layered depth video (LDV), 7, 163–5
least squares, 321
lenticular lens, 9, 75–8, 80, 290
linear polarization, 69
logical link control, 173
log-normal fading, 178
long-term evolution (LTE), 11, 194
long term evolution-advanced (LTE-A), 194, 201
luma-based chroma prediction, 141
luminance masking, 229
macroblock (MB), 137
magnification factor, 215
M-ary phase shift keying (MPSK), 176
mean of absolute difference (MAD), 132–3, 316
mean square error (MSE), 227
medical visualization, 304
medium access control, 173
mesh compression and encoding, 29–31
mesh-based, 326
Mirage Table, 294
mismatch compensation, 158
mixed resolution stereo (MRS), 291
mixed-resolution video coding, 143
mode decision algorithm, 160
modulation, 175
monocular depth cue, 63–4, 205
mosquito noise, 219
motion compensation, 133
motion compensation mismatch, 219
motion confusion, 70
motion judder artifact, 219
motion modeling based QA, 235–6
motion parallax, 64, 79, 247–8
moving pictures quality metric (MPQM), 231
MPEG-2 test model 5 (TM5), 314
MPEG-4 Part 10, 129
MPEG-4 verification model (VM), 316
MPEG-A, 166
MPEG-C Part 3, 150
MPEG-FTV, 167
multi-band filter, 68
multicarrier modulation, 196
multi-hypothesis error concealment (MHEC), 274
multimedia application format (MAF), 166
multipath channel, 179
multiple description coding (MDC), 8, 269, 327
multiple description (MD) coding, 279
multiple texture technique, 48–50
multiple view coding (MVC), 156–160
multiple view image, 50
multiple-input and multiple-output (MIMO), 11
multiuser video communications, 13
multi-view camera, 88
multi-view correspondence, 87
multi-view video coding (MVC), 1, 7, 146
multi-view video plus depth (MVD), 7, 160–163
Nakagami fading, 190
narrowband channel, 181
negative parallax, 207
network abstraction layer (NAL), 278
network layer, 174
non-square quad-tree transform (NSQT), 141
non-uniform rational B-spline surface (NURBS), 32–4
no-reference (NR) metrics, 227
N-view video camera, 5
object classification, 120–121
object orientation, 119
object thickness, 119
octree, 41
open profiling quality (OPQ), 223–4
optical augmented display, 303
orthogonal frequency division multiplexing (OFDM), 11, 196
orthogonal projection, 65
packet, 172
packet switching, 172
Panum's fusional area, 206
parallax, 209
parallax barrier, 9, 71–5, 79–80, 289–290
parallel camera configuration, 207–8
parallel stereoscopic camera, 207
path loss exponent, 178
patterned retarder (PR), 69
peak signal-to-noise ratio (PSNR), 227
peer-to-peer (P2P), 13
perceptual distortion metric (PDM), 231
perceptual evaluation of video quality (PEVQ), 233
Percival's zone of comfort, 248
P-frame, 137
physical layer, 173
physically-based animation, 21
platelet, 154
plenoptic function, 43–6, 86, 166
point-based modeling, 6
point-based representation, 37–9
polarization multiplexing, 9, 69
positive parallax, 207
power delay profile, 180
prediction, 129
prediction path, 129
prediction unit (PU), 141
primary color system, 65
progressive mesh, 30
projection matrix, 162
pseudoscopic, 73
P-slice, 137
PSNR-HVS, 231
psychoperceptual, 222
quad-edge mesh, 27
quad-lateral filter, 154
quadratic model, 316
quadrature amplitude modulation (QAM), 176
quadrilateral, 24
quad-tree, 140
quad-tree decomposition, 155
quality assessment (QA), 220
quality of experience (QoE), 3, 220
quality of perception (QoP), 222
quality of service (QoS), 11
quantization parameter (QP), 134
quantization step size, 134
quaternary phase shift keying (QPSK), 176
radial distortion, 217
radiosity, 32
rate control, 313
rate distortion optimization (RDO), 316
rate-quantization, 320
Rayleigh fading, 189
ray-tracing, 32
real-time transport protocol (RTP), 329
reconstruction path, 129
reduced-reference (RR) metrics, 227
redundant slice, 140
reference picture buffer, 137
relay, 283
reproduction magnification factor, 215
residual, 129
residual quad-tree (RQT), 141
retinal disparity, 206
RGB, 130
ringing artifact, 218
Sarnoff JND vision model, 230
scalable MVC streaming, 323
scalable video coding (SVC), 146
scene geometric structure, 117
scheduling, 331
semantic object, 117
semantic rule, 122
shadow fading, 178
shape-from-silhouette, 86
shared motion vectors, 273
shutter glasses, 70
single carrier-frequency division multiple access (SC-FDMA), 200
single texture technique, 47–8
size magnification factor, 215
slanted multi-view display, 80
slice, 137
slice group, 140
slow fading, 188
small-scale propagation effect, 177
SNR asymmetry, 143
source encoder, 174
source encoding distortion, 259
spatial light modulator (SLM), 83
spatial monocular depth cue, 63–4
spatial multiplexing, 71
spatial redundancy, 130
spatial scalability, 146
spatially multiplexed systems, 290
spatial-temporal monocular depth cues, 64
spectral absorption function, 66
spectral density function, 65
splatting, 39
spot focus window, 92
staircase artifact, 218
stereo band limited contrast (SBLC), 236
Stereobrush, 293
stereo camera, 87
stereo camera rig, 216
stereo correspondence, 87
stereo matching, 87
stereoscopic artifact, 10
store-and-forward, 172
structural similarity (SSIM) index, 232–3
structure based metrics, 232–3
structure from motion (SfM), 89
subdivision surface representation, 34–7
subpixel, 77
subpixel motion estimation, 133
sum of squared difference (SSD), 317
surface-based modeling, 6
surface-based representation, 17, 23–37
sweet spot, 71
switching I (SI)-slice, 139
switching P (SP)-slice, 139, 324
synchronization, 328
temporal asymmetry, 143
temporal bilateral error concealment (TBEC), 273
temporal correlation, 269
temporal random access, 157
temporal redundancy, 132
temporal scalability, 146
temporally multiplexed systems, 290
Teo and Heeger model, 230
texture-based representation, 18, 43–51
texturing techniques, 27
tile, 141
tiling artifact, 218
time-of-flight, 88
toed-in camera configuration, 207–8
toed-in stereoscopic camera, 207
transform, 129
transform unit (TU), 141
transmission artifact, 219
transmission control protocol (TCP), 174
transmission-induced distortion, 259
tree-based, 326
triangle, 24
triple flash, 70
two-layer overlay, 327
two-view stereo video streaming, 318
uncrossed disparity, 206
uncrossed parallax, 207
unequal error protection (UEP), 5, 8, 275
uniform scattering environment, 188
universal mobile telecommunications system (UMTS), 194
user datagram protocol (UDP), 174, 329
user-centered, 222
user-centered quality of experience, 225–6
vector Huffman coding, 136
vergence angle, 64
vergence-accommodation coupling, 250
vertex, 24
vertex insertion, 35
vertex split, 30
vertex-vertex mesh, 25
video augmented display, 303
video plus depth (V+D), 1, 7, 148–56, 291
video quality metric (VQM), 233
video structure analysis, 116
video-on-demand, 326
Vieth-Müller circle, 206
view switching latency, 323
view synthesized artifact, 219–220
view-dependent texture, 49
viewing zone, 71
view-switching, 157
view-switching latency, 327
view-synthesis prediction (VSP), 162
virtual environment (VE), 302
virtual reality, 71
virtual reality modeling language (VRML), 165
visible difference predictor (VDP), 230
visual discomfort, 10
visual fatigue, 10
visual hull, 86
visual information fidelity (VIF), 234
visual signal-to-noise ratio (VSNR), 230
volume-based modeling, 6
volume-based representation, 17, 40–43
voxel, 40
voxelization, 41
wavefront parallel processing, 141–2
wavelength division multiplexing, 9, 65–8
wedgelet, 154
weighted prediction, 133
wideband-CDMA, 194
Wyner-Ziv coding, 14
WiMAX, 11
winged-edge mesh, 27
YCbCr, 130
Z-buffer-based 3D surface recovering, 98–100
zero parallax, 207
zigzag, 135
3D Visual Communications, First Edition. Guan-Ming Su, Yu-Chi Lai, Andres Kwasinski and Haohong Wang.
© 2013 John Wiley & Sons, Ltd. Published 2013 by John Wiley & Sons, Ltd.