Figure A.2. Vector-vector addition is shown in the two figures on the left. They are called the head-to-tail axiom and the parallelogram rule. The two rightmost figures show scalar-vector multiplication for a positive and a negative scalar, $a$ and $-a$, respectively.
u, is $||u|| = \sqrt{u_0^2 + u_1^2}$, which is basically the Pythagorean theorem. To create a vector of unit length, i.e., of length one, the vector has to be normalized. This can be done by dividing by the length of the vector: $q = \frac{1}{||p||} p$, where q is the normalized vector, which is also called a unit vector.
For $\mathbb{R}^2$ and $\mathbb{R}^3$, or two- and three-dimensional space, the dot product can also be expressed as below, which is equivalent to Expression A.8:

$$u \cdot v = ||u|| \, ||v|| \cos\phi \quad \text{(dot product)} \tag{A.15}$$
Here, φ (shown at the left in Figure A.3) is the smallest angle between u and v. Several conclusions can be drawn from the sign of the dot product, assuming that both vectors have non-zero length. First, $u \cdot v = 0 \Leftrightarrow u \perp v$, i.e., u and v are orthogonal (perpendicular) if their dot product is zero. Second, if $u \cdot v > 0$, then it can be seen that $0 \leq \phi < \pi/2$, and likewise if $u \cdot v < 0$ then $\pi/2 < \phi \leq \pi$.
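As a concrete illustration of Expression A.8 and Equation A.15 (this sketch is ours, not part of the book, and the function names are our own), the angle φ can be recovered from the dot product:

```python
import math

def dot(u, v):
    # Componentwise dot product, as in Expression A.8.
    return sum(a * b for a, b in zip(u, v))

def length(u):
    # Vector length ||u||; for two components this is the Pythagorean theorem.
    return math.sqrt(dot(u, u))

def normalize(u):
    # Divide a vector by its length to get a unit vector.
    l = length(u)
    return tuple(a / l for a in u)

u = (1.0, 0.0, 0.0)
v = (1.0, 1.0, 0.0)

print(normalize(v))  # (0.707..., 0.707..., 0.0), a unit vector

# Equation A.15 rearranged: cos(phi) = (u . v) / (||u|| ||v||).
cos_phi = dot(u, v) / (length(u) * length(v))
print(math.degrees(math.acos(cos_phi)))  # 45.0, the smallest angle between u and v

# The sign alone classifies the angle: positive means phi < pi/2,
# zero means the vectors are orthogonal, negative means phi > pi/2.
```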
Now we will go back to the study of bases for a while, and introduce a special kind of basis that is said to be orthonormal. For such a basis, consisting of the basis vectors $u_0, \ldots, u_{n-1}$, the following must hold:

$$u_i \cdot u_j = \begin{cases} 0, & i \neq j, \\ 1, & i = j. \end{cases} \tag{A.16}$$
This means that every basis vector must have a length of one, i.e., $||u_i|| = 1$, and also that each pair of basis vectors must be orthogonal, i.e., the angle between them must be $\pi/2$ radians ($90°$). In this book, we mostly
use two- and three-dimensional orthonormal bases. If the basis vectors are
mutually perpendicular, but not of unit length, then the basis is called
orthogonal. Orthonormal bases do not have to consist of simple vectors.
For example, in precomputed radiance transfer techniques the bases often
are either spherical harmonics or wavelets. In general, the vectors are
exchanged for functions, and the dot product is augmented to work on these functions. Once that is done, the concept of orthonormality applies as above. See Section 8.6.1 for more information about this.

Figure A.3. The left figure shows the notation and geometric situation for the dot product. In the rightmost figure, orthographic projection is shown. The vector u is orthogonally (perpendicularly) projected onto v to yield w.
Let $p = (p_0, \ldots, p_{n-1})$; then for an orthonormal basis it can also be shown that $p_i = p \cdot u_i$. This means that if you have a vector p and a basis (with the basis vectors $u_0, \ldots, u_{n-1}$), then you can easily get the
elements of that vector in that basis by taking the dot product between the
vector and each of the basis vectors. The most common basis is called the
standard basis, where the basis vectors are denoted $e_i$. The $i$th basis vector has zeroes everywhere except in position $i$, which holds a one. For three dimensions, this means $e_0 = (1, 0, 0)$, $e_1 = (0, 1, 0)$, and $e_2 = (0, 0, 1)$. We also denote these vectors $e_x$, $e_y$, and $e_z$, since they are what we normally call the x-, the y-, and the z-axes.
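As a small numerical check (our own sketch, with made-up values), the property $p_i = p \cdot u_i$ can be verified for a two-dimensional orthonormal basis:

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# An orthonormal basis for R^2: the standard basis rotated by 30 degrees.
c, s = math.cos(math.radians(30)), math.sin(math.radians(30))
u0 = (c, s)
u1 = (-s, c)

p = (2.0, 3.0)

# The coordinates of p in the basis (u0, u1) are just dot products.
p0, p1 = dot(p, u0), dot(p, u1)

# Reconstruct p as p0*u0 + p1*u1 to confirm the coordinates are right.
rebuilt = tuple(p0 * a + p1 * b for a, b in zip(u0, u1))
print(rebuilt)  # approximately (2.0, 3.0)
```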
A very useful property of the dot product is that it can be used to
project a vector orthogonally onto another vector. This orthogonal projec-
tion (vector), w, of a vector u onto a vector v is depicted on the right in
Figure A.3.
For arbitrary vectors u and v, w is determined by

$$w = \frac{u \cdot v}{||v||^2}\, v = \frac{u \cdot v}{v \cdot v}\, v = t v, \tag{A.17}$$
where t is a scalar. The reader is encouraged to verify that Expression A.17
is indeed correct, which is done by inspection and the use of Equation A.15.
The projection also gives us an orthogonal decomposition of u, which is divided into two parts, w and $(u - w)$. It can be shown that $w \perp (u - w)$, and of course $u = w + (u - w)$ holds. An additional observation is that if v is normalized, then the projection is $w = (u \cdot v) v$. This means that $||w|| = |u \cdot v|$, i.e., the length of w is the absolute value of the dot product between u and v.
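A minimal sketch (ours) of Equation A.17, with the orthogonal decomposition checked at the end:

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def project(u, v):
    # Orthogonal projection of u onto v (Equation A.17): w = ((u.v)/(v.v)) v.
    t = dot(u, v) / dot(v, v)
    return tuple(t * a for a in v)

u = (2.0, 2.0, 1.0)
v = (3.0, 0.0, 0.0)

w = project(u, v)
rest = tuple(a - b for a, b in zip(u, w))  # the (u - w) part

print(w)             # (2.0, 0.0, 0.0)
print(dot(w, rest))  # 0.0, so w is perpendicular to (u - w)
```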
Figure A.4. The geometry involved in the cross product.
Cross Product
The cross product, also called the vector product, and the previously in-
troduced dot product are two very important operations on vectors.
The cross product in $\mathbb{R}^3$ of two vectors, u and v, denoted by $w = u \times v$, is defined by a unique vector w with the following properties:

- $||w|| = ||u \times v|| = ||u|| \, ||v|| \sin\phi$, where φ is, again, the smallest angle between u and v. See Figure A.4.
- $w \perp u$ and $w \perp v$.
- u, v, w form a right-handed system.
From this definition, it is deduced that $u \times v = 0$ if and only if $u \parallel v$ (i.e., u and v are parallel), since then $\sin\phi = 0$. The cross product also
comes equipped with the following laws of calculation, among others:
$$\begin{aligned}
u \times v &= -v \times u && \text{(anti-commutativity)} \\
(a u + b v) \times w &= a(u \times w) + b(v \times w) && \text{(linearity)} \\
(u \times v) \cdot w &= (v \times w) \cdot u = (w \times u) \cdot v \\
&= -(v \times u) \cdot w = -(u \times w) \cdot v = -(w \times v) \cdot u && \text{(scalar triple product)} \\
u \times (v \times w) &= (u \cdot w) v - (u \cdot v) w && \text{(vector triple product)}
\end{aligned} \tag{A.18}$$
From these laws, it is obvious that the order of the operands is crucial
in getting correct results from the calculations.
For three-dimensional vectors, u and v, in an orthonormal basis, the
cross product is computed according to Equation A.19:
$$w = \begin{pmatrix} w_x \\ w_y \\ w_z \end{pmatrix} = u \times v = \begin{pmatrix} u_y v_z - u_z v_y \\ u_z v_x - u_x v_z \\ u_x v_y - u_y v_x \end{pmatrix}. \tag{A.19}$$
A method called Sarrus’s scheme, which is simple to remember, can be
used to derive this formula:
$$\begin{array}{cccccc}
e_x & e_y & e_z & e_x & e_y & e_z \\
u_x & u_y & u_z & u_x & u_y & u_z \\
v_x & v_y & v_z & v_x & v_y & v_z
\end{array} \tag{A.20}$$

Here, the three columns are written twice. Each diagonal sloping down to the right contributes a term with a plus sign, and each diagonal sloping down to the left a term with a minus sign.
To use the scheme, follow each diagonal, generate a term by multiplying the elements along it, and give the product the sign associated with that diagonal. The result is shown below and it is exactly the same formula as in Equation A.19, as expected:
$$u \times v = +e_x(u_y v_z) + e_y(u_z v_x) + e_z(u_x v_y) - e_x(u_z v_y) - e_y(u_x v_z) - e_z(u_y v_x).$$
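Equation A.19 translates directly into code. The sketch below (ours, not the book's) also checks two of the defining properties, perpendicularity and anti-commutativity:

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def cross(u, v):
    # Cross product in R^3, straight from Equation A.19.
    ux, uy, uz = u
    vx, vy, vz = v
    return (uy * vz - uz * vy,
            uz * vx - ux * vz,
            ux * vy - uy * vx)

u = (1.0, 0.0, 0.0)  # e_x
v = (0.0, 1.0, 0.0)  # e_y

w = cross(u, v)
print(w)                     # (0.0, 0.0, 1.0): e_x x e_y = e_z, a right-handed system
print(dot(w, u), dot(w, v))  # 0.0 0.0: w is perpendicular to both u and v
print(cross(v, u))           # (0.0, 0.0, -1.0): anti-commutativity
```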
A.3 Matrices
This section presents the definitions concerning matrices and some com-
mon, useful operations on them. Even though this presentation is (mostly)
for arbitrarily sized matrices, square matrices of the sizes 2 ×2, 3 ×3, and
4 × 4 will be used in the chapters of this book. Note that Chapter 4 deals
with transforms represented by matrices.
A.3.1 Definitions and Operations
A matrix, M, can be used as a tool for manipulating vectors and points. M is described by $p \times q$ scalars (complex numbers are an alternative, but not relevant here), $m_{ij}$, $0 \leq i \leq p-1$, $0 \leq j \leq q-1$, ordered in a rectangular fashion (with p rows and q columns) as shown in Equation A.22:
$$M = \begin{pmatrix}
m_{00} & m_{01} & \cdots & m_{0,q-1} \\
m_{10} & m_{11} & \cdots & m_{1,q-1} \\
\vdots & \vdots & \ddots & \vdots \\
m_{p-1,0} & m_{p-1,1} & \cdots & m_{p-1,q-1}
\end{pmatrix} = [m_{ij}]. \tag{A.22}$$
The notation $[m_{ij}]$ will be used in the equations below and is merely a shorter way of describing a matrix. There is a special matrix called the unit matrix, I, which is square and contains ones in the diagonal and zeros elsewhere. This is also called the identity matrix. Equation A.23 shows
its general appearance. This is the matrix-form counterpart of the scalar
number one:
$$I = \begin{pmatrix}
1 & 0 & 0 & \cdots & 0 & 0 \\
0 & 1 & 0 & \cdots & 0 & 0 \\
\vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\
0 & 0 & 0 & \cdots & 1 & 0 \\
0 & 0 & 0 & \cdots & 0 & 1
\end{pmatrix}. \tag{A.23}$$
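The identity matrix of Equation A.23 is easy to build programmatically; a minimal sketch (ours):

```python
def identity(n):
    # The n x n unit (identity) matrix: ones on the diagonal, zeros elsewhere.
    return [[1 if i == j else 0 for j in range(n)] for i in range(n)]

print(identity(3))  # [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
```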
Next, the most ordinary operations on matrices will be reviewed.
Matrix-Matrix Addition
Adding two matrices, say M and N, is possible only for equal-sized matrices
and is defined as

$$M + N = [m_{ij}] + [n_{ij}] = [m_{ij} + n_{ij}], \tag{A.24}$$
that is, componentwise addition, very similar to vector-vector addition.
The resulting matrix is of the same size as the operands. The following
operations are valid for matrix-matrix addition: i) $(L + M) + N = L + (M + N)$, ii) $M + N = N + M$, iii) $M + 0 = M$, iv) $M - M = 0$, which are all very easy to prove. Note that 0 is a matrix containing only zeroes.
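The componentwise definition in Equation A.24 maps directly onto nested lists; a minimal sketch (ours):

```python
def mat_add(M, N):
    # Componentwise matrix addition (Equation A.24): [m_ij] + [n_ij] = [m_ij + n_ij].
    return [[m + n for m, n in zip(rm, rn)] for rm, rn in zip(M, N)]

M = [[1, 2], [3, 4]]
N = [[5, 6], [7, 8]]
print(mat_add(M, N))  # [[6, 8], [10, 12]]
```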
Scalar-Matrix Multiplication
A scalar a and a matrix, M, can be multiplied to form a new matrix of the same size as M, which is computed by $T = aM = [a m_{ij}]$. T and M are of the same size, and these trivial rules apply: i) $0M = 0$, ii) $1M = M$, iii) $a(bM) = (ab)M$, iv) $a0 = 0$, v) $(a + b)M = aM + bM$, vi) $a(M + N) = aM + aN$.
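A short sketch of scalar-matrix multiplication (ours), which also spot-checks rule vi):

```python
def mat_add(M, N):
    # Componentwise addition, repeated here so the sketch is self-contained.
    return [[m + n for m, n in zip(rm, rn)] for rm, rn in zip(M, N)]

def scalar_mul(a, M):
    # T = aM = [a * m_ij]: scale every element of M by a.
    return [[a * m for m in row] for row in M]

M = [[1, 2], [3, 4]]
N = [[5, 6], [7, 8]]

# Rule vi): a(M + N) equals aM + aN.
print(scalar_mul(2, mat_add(M, N)) == mat_add(scalar_mul(2, M), scalar_mul(2, N)))  # True
```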
Transpose of a Matrix
$M^T$ is the notation for the transpose of $M = [m_{ij}]$, and the definition is $M^T = [m_{ji}]$, i.e., the columns become rows and the rows become columns. For the transpose operator, we have: i) $(aM)^T = aM^T$, ii) $(M + N)^T = M^T + N^T$, iii) $(M^T)^T = M$, iv) $(MN)^T = N^T M^T$.
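A sketch of the transpose (ours), spot-checking rule iv); the helper mat_mul is our own standard matrix product, used only for the check:

```python
def transpose(M):
    # M^T = [m_ji]: columns become rows and rows become columns.
    return [list(col) for col in zip(*M)]

def mat_mul(A, B):
    # Standard matrix product, used here only to verify rule iv).
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

M = [[1, 2], [3, 4]]
N = [[5, 6], [7, 8]]

# Rule iv): (MN)^T == N^T M^T.
print(transpose(mat_mul(M, N)) == mat_mul(transpose(N), transpose(M)))  # True
```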
Trace of a Matrix
The trace of a matrix, denoted tr(M), is simply the sum of the diagonal
elements of a square matrix, as shown below:
$$\mathrm{tr}(M) = \sum_{i=0}^{n-1} m_{ii}. \tag{A.25}$$
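In code, the trace is a one-liner (our sketch):

```python
def trace(M):
    # Sum of the diagonal elements of a square matrix (Equation A.25).
    return sum(M[i][i] for i in range(len(M)))

print(trace([[1, 2], [3, 4]]))  # 5
```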