Matrix-Matrix Multiplication
This operation, denoted $\mathbf{M}\mathbf{N}$ between $\mathbf{M}$ and $\mathbf{N}$, is defined only if $\mathbf{M}$ is of size $p \times q$ and $\mathbf{N}$ is of size $q \times r$, in which case the result, $\mathbf{T}$, becomes a $p \times r$ sized matrix. Mathematically, for these matrices the operation is as follows:

$$
\mathbf{T} = \mathbf{M}\mathbf{N} =
\begin{pmatrix}
m_{00} & \cdots & m_{0,q-1} \\
\vdots & \ddots & \vdots \\
m_{p-1,0} & \cdots & m_{p-1,q-1}
\end{pmatrix}
\begin{pmatrix}
n_{00} & \cdots & n_{0,r-1} \\
\vdots & \ddots & \vdots \\
n_{q-1,0} & \cdots & n_{q-1,r-1}
\end{pmatrix}
=
\begin{pmatrix}
\sum_{i=0}^{q-1} m_{0,i} n_{i,0} & \cdots & \sum_{i=0}^{q-1} m_{0,i} n_{i,r-1} \\
\vdots & \ddots & \vdots \\
\sum_{i=0}^{q-1} m_{p-1,i} n_{i,0} & \cdots & \sum_{i=0}^{q-1} m_{p-1,i} n_{i,r-1}
\end{pmatrix}. \tag{A.26}
$$
In other words, each row of $\mathbf{M}$ and column of $\mathbf{N}$ are combined using a dot product, and the result is placed in the corresponding row and column element. The elements of $\mathbf{T}$ are computed as $t_{ij} = \sum_{k=0}^{q-1} m_{i,k} n_{k,j}$, which can also be expressed as $t_{ij} = \mathbf{m}_{i,} \cdot \mathbf{n}_{,j}$, that is, using the dot product and the matrix-vector indexing from Section 1.2.1. Note also that an $n \times 1$ matrix, $\mathbf{S} = (s_{00}\ s_{10}\ \cdots\ s_{n-1,0})^T$, is basically an $n$-tuple vector. If seen as such, then matrix-vector multiplication, between $\mathbf{M}$ ($p \times q$) and $\mathbf{v}$ ($q$-tuple), can be derived from the definition of matrix-matrix multiplication. This is shown in Equation A.27, resulting in a new vector, $\mathbf{w}$:
$$
\mathbf{w} = \mathbf{M}\mathbf{v} =
\begin{pmatrix}
m_{00} & \cdots & m_{0,q-1} \\
\vdots & \ddots & \vdots \\
m_{p-1,0} & \cdots & m_{p-1,q-1}
\end{pmatrix}
\begin{pmatrix}
v_0 \\ \vdots \\ v_{q-1}
\end{pmatrix}
=
\begin{pmatrix}
\sum_{k=0}^{q-1} m_{0,k} v_k \\ \vdots \\ \sum_{k=0}^{q-1} m_{p-1,k} v_k
\end{pmatrix}
=
\begin{pmatrix}
\mathbf{m}_{0,} \cdot \mathbf{v} \\ \vdots \\ \mathbf{m}_{p-1,} \cdot \mathbf{v}
\end{pmatrix}
=
\begin{pmatrix}
w_0 \\ \vdots \\ w_{p-1}
\end{pmatrix}. \tag{A.27}
$$
These three rules hold for matrix-matrix multiplication: i) $(\mathbf{L}\mathbf{M})\mathbf{N} = \mathbf{L}(\mathbf{M}\mathbf{N})$, ii) $(\mathbf{L} + \mathbf{M})\mathbf{N} = \mathbf{L}\mathbf{N} + \mathbf{M}\mathbf{N}$, iii) $\mathbf{M}\mathbf{I} = \mathbf{I}\mathbf{M} = \mathbf{M}$. Even if the dimensions of the matrices are the same, $\mathbf{M}\mathbf{N} \neq \mathbf{N}\mathbf{M}$ in general. This means that some pairs of matrices do commute, but usually they do not.
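To make the computation concrete, a minimal C++ sketch of Equation A.26 follows; the Matrix alias, its row-major float layout, and the function name are assumptions made for this example, not notation used elsewhere in the text:

```cpp
#include <array>
#include <cstddef>

// Hypothetical fixed-size matrix type: a P x Q array of floats, row-major.
template <std::size_t P, std::size_t Q>
using Matrix = std::array<std::array<float, Q>, P>;

// Computes T = MN, where t_ij = sum_{k=0}^{q-1} m_ik * n_kj (Equation A.26):
// each element of T is the dot product of row i of M and column j of N.
template <std::size_t P, std::size_t Q, std::size_t R>
Matrix<P, R> Multiply(const Matrix<P, Q>& m, const Matrix<Q, R>& n) {
    Matrix<P, R> t{};  // value-initialized to all zeroes
    for (std::size_t i = 0; i < P; ++i)
        for (std::size_t j = 0; j < R; ++j)
            for (std::size_t k = 0; k < Q; ++k)
                t[i][j] += m[i][k] * n[k][j];
    return t;
}
```

Matrix-vector multiplication (Equation A.27) is the special case R = 1 of the same loop.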
Determinant of a Matrix
The determinant is defined only for square matrices, and in the general case, the definition is recursive or defined via permutations [742]. Here, the focus will be on determinants for $2 \times 2$ and $3 \times 3$ matrices, since those are the ones most often needed in computer graphics.
The determinant of $\mathbf{M}$, written $|\mathbf{M}|$, for these matrices appears in Equations A.28 and A.29:
$$
|\mathbf{M}| =
\begin{vmatrix}
m_{00} & m_{01} \\
m_{10} & m_{11}
\end{vmatrix}
= m_{00} m_{11} - m_{01} m_{10}, \tag{A.28}
$$
$$
|\mathbf{M}| =
\begin{vmatrix}
m_{00} & m_{01} & m_{02} \\
m_{10} & m_{11} & m_{12} \\
m_{20} & m_{21} & m_{22}
\end{vmatrix}
= m_{00} m_{11} m_{22} + m_{01} m_{12} m_{20} + m_{02} m_{10} m_{21} - m_{02} m_{11} m_{20} - m_{01} m_{10} m_{22} - m_{00} m_{12} m_{21}. \tag{A.29}
$$
In these two equations, a certain pattern can be distinguished: The positive terms are the elements multiplied in the diagonals from the top downwards to the right, and the negative terms are the elements in the diagonals from the top downwards to the left, where a diagonal continues on the opposite side if an edge is crossed. Note that if the top row in $\mathbf{M}$ is replaced by $\mathbf{e}_x\ \mathbf{e}_y\ \mathbf{e}_z$, the middle row by $u_x\ u_y\ u_z$, and the bottom row by $v_x\ v_y\ v_z$, the cross product $\mathbf{u} \times \mathbf{v}$ is obtained according to Sarrus's scheme (see Section A.2 on the cross product).
Another useful way to compute the determinant for $3 \times 3$ matrices is to use the dot and the cross product as in Equation A.30, which is reminiscent of the column vector indexing introduced in Section 1.2.1:

$$
|\mathbf{M}| = |\mathbf{m}_{,0}\ \mathbf{m}_{,1}\ \mathbf{m}_{,2}| = |\mathbf{m}_x\ \mathbf{m}_y\ \mathbf{m}_z| = (\mathbf{m}_x \times \mathbf{m}_y) \cdot \mathbf{m}_z. \tag{A.30}
$$
The following notation is also used for determinants:

$$
|\mathbf{M}| = \det(\mathbf{M}) = \det(\mathbf{m}_x, \mathbf{m}_y, \mathbf{m}_z). \tag{A.31}
$$
Observe that the scalar triple product from Equation A.18 can be applied to Equation A.30; that is, if the vectors are rotated, the determinant remains unchanged, but changing the places of two vectors will change the sign of the determinant.
If $n \times n$ is the size of $\mathbf{M}$, then the following apply to determinant calculations: i) $|\mathbf{M}^{-1}| = 1/|\mathbf{M}|$, ii) $|\mathbf{M}\mathbf{N}| = |\mathbf{M}||\mathbf{N}|$, iii) $|a\mathbf{M}| = a^n |\mathbf{M}|$, iv) $|\mathbf{M}^T| = |\mathbf{M}|$.
Also, if all elements of one row (or one column) are multiplied by a scalar, $a$, then $a|\mathbf{M}|$ is obtained, and if two rows (or columns) coincide (i.e., the cross product between them is zero), then $|\mathbf{M}| = 0$. The same result is obtained if any row or column is composed entirely of zeroes.
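As a concrete illustration of Equations A.28-A.30, a small C++ sketch follows; the function names and the plain two-dimensional float array layout are assumptions for this example:

```cpp
// 2x2 determinant (Equation A.28).
float Det2(const float m[2][2]) {
    return m[0][0] * m[1][1] - m[0][1] * m[1][0];
}

// 3x3 determinant via Sarrus's scheme (Equation A.29); the same value is
// obtained from (m_x cross m_y) dot m_z, as in Equation A.30.
float Det3(const float m[3][3]) {
    return m[0][0] * m[1][1] * m[2][2]
         + m[0][1] * m[1][2] * m[2][0]
         + m[0][2] * m[1][0] * m[2][1]
         - m[0][2] * m[1][1] * m[2][0]
         - m[0][1] * m[1][0] * m[2][2]
         - m[0][0] * m[1][2] * m[2][1];
}
```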
The orientation of a basis can be determined via determinants. A basis is said to form a right-handed system, also called a positively oriented basis, if its determinant is positive. The standard basis has this property, since $|\mathbf{e}_x\ \mathbf{e}_y\ \mathbf{e}_z| = (\mathbf{e}_x \times \mathbf{e}_y) \cdot \mathbf{e}_z = (0, 0, 1) \cdot \mathbf{e}_z = \mathbf{e}_z \cdot \mathbf{e}_z = 1 > 0$. If the determinant is negative, the basis is called negatively oriented or is said to be forming a left-handed system.
Some geometrical interpretations of the determinant are given in Section A.5.
Adjoints
The adjoint^6 is another form of a matrix that can sometimes be useful. In particular, it is a matrix that is useful for transforming surface normals, as discussed in Section 4.1.7. It is also a first step in computing the inverse of a matrix. We start by defining the subdeterminant (also called cofactor) $d^{\mathbf{M}}_{ij}$ of an $n \times n$ matrix $\mathbf{M}$ as the determinant that is obtained by deleting row $i$ and column $j$ and then taking the determinant of the resulting $(n-1) \times (n-1)$ matrix. An example of computing the subdeterminant $d^{\mathbf{M}}_{02}$ of a $3 \times 3$ matrix is shown in Equation A.32:
$$
d^{\mathbf{M}}_{02} =
\begin{vmatrix}
m_{10} & m_{11} \\
m_{20} & m_{21}
\end{vmatrix}. \tag{A.32}
$$
For a $3 \times 3$ matrix, the adjoint is then

$$
\mathrm{adj}(\mathbf{M}) =
\begin{pmatrix}
d_{00} & -d_{10} & d_{20} \\
-d_{01} & d_{11} & -d_{21} \\
d_{02} & -d_{12} & d_{22}
\end{pmatrix}, \tag{A.33}
$$
where we have left out the superscript $\mathbf{M}$ of the subdeterminants for clarity. Note the signs and the order in which the subdeterminants appear. If we want to compute the adjoint $\mathbf{A}$ of an arbitrary sized matrix $\mathbf{M}$, then the component at position $(i, j)$ is

$$
[a_{ij}] = \left[ (-1)^{(i+j)} d^{\mathbf{M}}_{ji} \right]. \tag{A.34}
$$
^6 Sometimes the adjoint has another definition than the one presented here, i.e., the adjoint of a matrix $\mathbf{M} = [m_{ij}]$ is denoted $\mathbf{M}^* = [\bar{m}_{ji}]$, where $\bar{m}_{ji}$ is the complex conjugate.
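As an illustration, here is a minimal C++ sketch of Equations A.33 and A.34 for the $3 \times 3$ case; the function name and the plain float[3][3] layout are assumptions made for this example only:

```cpp
// Adjoint of a 3x3 matrix (Equations A.33-A.34): the entry at (i, j) is
// the signed subdeterminant (-1)^(i+j) * d_ji; note the transposed indices.
void Adjoint3(const float m[3][3], float adj[3][3]) {
    for (int i = 0; i < 3; ++i) {
        for (int j = 0; j < 3; ++j) {
            // Cyclic row/column indices select the 2x2 subdeterminant that
            // remains after deleting row j and column i; taking the rows and
            // columns in this cyclic order also absorbs the (-1)^(i+j) sign.
            const int r0 = (j + 1) % 3, r1 = (j + 2) % 3;
            const int c0 = (i + 1) % 3, c1 = (i + 2) % 3;
            adj[i][j] = m[r0][c0] * m[r1][c1] - m[r0][c1] * m[r1][c0];
        }
    }
}
```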
Inverse of a Matrix
The multiplicative inverse of a matrix, $\mathbf{M}$, denoted $\mathbf{M}^{-1}$, which is dealt with here, exists only for square matrices with $|\mathbf{M}| \neq 0$. If all elements of the matrix under consideration are real scalars, then it suffices to show that $\mathbf{M}\mathbf{N} = \mathbf{I}$ and $\mathbf{N}\mathbf{M} = \mathbf{I}$, where then $\mathbf{N} = \mathbf{M}^{-1}$. The problem can also be stated thus: If $\mathbf{u} = \mathbf{M}\mathbf{v}$ and a matrix $\mathbf{N}$ exists such that $\mathbf{v} = \mathbf{N}\mathbf{u}$, then $\mathbf{N} = \mathbf{M}^{-1}$.
The inverse of a matrix can be computed either implicitly or explicitly. If the inverse is to be used several times, then it is more economical to compute $\mathbf{M}^{-1}$ explicitly, i.e., to get a representation of the inverse as an array of $n \times n$ real numbers. On the other hand, if only a linear system of the type $\mathbf{u} = \mathbf{M}\mathbf{v}$ needs to be solved (for $\mathbf{v}$), then an implicit method, like Cramer's rule, can be used. For a linear system of the type $\mathbf{M}\mathbf{v} = \mathbf{0}$, $|\mathbf{M}| = 0$ is a requirement if there is to be a nontrivial solution, $\mathbf{v} \neq \mathbf{0}$.
Using Cramer's rule to solve $\mathbf{u} = \mathbf{M}\mathbf{v}$ gives $\mathbf{v} = \mathbf{M}^{-1}\mathbf{u}$, but not $\mathbf{M}^{-1}$ explicitly. Equation A.35 shows the general solution for the elements of $\mathbf{v}$:
$$
v_i = \frac{d_i}{|\mathbf{M}|}, \qquad d_i = |\mathbf{m}_{,0}\ \mathbf{m}_{,1}\ \ldots\ \mathbf{m}_{,i-1}\ \mathbf{u}\ \mathbf{m}_{,i+1}\ \ldots\ \mathbf{m}_{,n-1}|. \tag{A.35}
$$
The terms $d_i$ are thus computed as $|\mathbf{M}|$, except that column $i$ is replaced by $\mathbf{u}$. For a $3 \times 3$ system, the solution obtained by Cramer's rule is presented below:
$$
\mathbf{v} =
\begin{pmatrix} v_x \\ v_y \\ v_z \end{pmatrix}
= \frac{1}{|\mathbf{M}|}
\begin{pmatrix}
\det(\mathbf{u}, \mathbf{m}_y, \mathbf{m}_z) \\
\det(\mathbf{m}_x, \mathbf{u}, \mathbf{m}_z) \\
\det(\mathbf{m}_x, \mathbf{m}_y, \mathbf{u})
\end{pmatrix}. \tag{A.36}
$$
Many terms in this equation can be factorized using the scalar triple
product rule and then reused for faster calculation. For example, this is
done in Section 16.8 when computing the intersection between a ray and a
triangle.
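A possible C++ sketch of Cramer's rule for the $3 \times 3$ case follows, reusing the Det3 sketch from the determinant discussion; the Vec3 struct and the function names are assumptions for this example:

```cpp
struct Vec3 { float x, y, z; };  // assumed minimal three-component vector

// The d_i of Equation A.35: the determinant of m with column c replaced by u.
static float DetWithColumn(const float m[3][3], int c, const Vec3& u) {
    const float uc[3] = { u.x, u.y, u.z };
    float t[3][3];
    for (int r = 0; r < 3; ++r)
        for (int col = 0; col < 3; ++col)
            t[r][col] = (col == c) ? uc[r] : m[r][col];
    return Det3(t);  // Det3 as sketched earlier in this section
}

// Solves Mv = u for v with Cramer's rule (Equation A.36): v_i = d_i / |M|.
bool CramerSolve(const float m[3][3], const Vec3& u, Vec3& v) {
    const float det = Det3(m);
    if (det == 0.0f) return false;  // |M| = 0: no unique solution
    v.x = DetWithColumn(m, 0, u) / det;
    v.y = DetWithColumn(m, 1, u) / det;
    v.z = DetWithColumn(m, 2, u) / det;
    return true;
}
```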
For a $2 \times 2$ matrix, $\mathbf{M}$, the explicit solution is given by Equation A.37, and as can be seen, it is very simple to implement, since $|\mathbf{M}| = m_{00} m_{11} - m_{01} m_{10}$:
$$
\mathbf{M}^{-1} = \frac{1}{|\mathbf{M}|}
\begin{pmatrix}
m_{11} & -m_{01} \\
-m_{10} & m_{00}
\end{pmatrix}. \tag{A.37}
$$
For the general case, the adjoint from the previous section can be used:

$$
\mathbf{M}^{-1} = \frac{1}{|\mathbf{M}|}\,\mathrm{adj}(\mathbf{M}). \tag{A.38}
$$
In fact, this is Cramer’s rule expressed differently to get the inverse
matrix explicitly.
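Combining the earlier Det3 and Adjoint3 sketches gives a possible implementation of Equation A.38 for the $3 \times 3$ case; as before, the function name and layout are assumptions:

```cpp
// Inverse via the adjoint (Equation A.38): M^{-1} = adj(M) / |M|.
// Det3 and Adjoint3 are the sketches from earlier in this section.
bool Invert3(const float m[3][3], float inv[3][3]) {
    const float det = Det3(m);
    if (det == 0.0f) return false;  // singular: no inverse exists
    Adjoint3(m, inv);
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            inv[i][j] /= det;
    return true;
}
```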
However, for sizes larger than $4 \times 4$ there are no simple formulae, and Cramer's rule also becomes infeasible. Gaussian elimination is then the method of preference, and it can be used to solve $\mathbf{u} = \mathbf{M}\mathbf{v} \Leftrightarrow \mathbf{v} = \mathbf{M}^{-1}\mathbf{u}$, that is, an implicit solution, as is the case for Cramer's rule. However, Gaussian elimination can also be used to compute the matrix $\mathbf{M}^{-1}$ explicitly. Consider the system in Equation A.39, where $\mathbf{u}$ and $\mathbf{v}$ are arbitrary vectors of the same dimension as $\mathbf{M}$ and $\mathbf{I}$ (the identity matrix):
$$
\mathbf{M}\mathbf{u} = \mathbf{I}\mathbf{v}. \tag{A.39}
$$
Performing Gaussian elimination on this system until $\mathbf{M}$ has been transformed into the identity matrix $\mathbf{I}$ means that the right side identity matrix has become the inverse $\mathbf{M}^{-1}$. Thus, $\mathbf{u}$ and $\mathbf{v}$ are, in fact, not of any particular use; they are merely means for expressing Equation A.39 in a mathematically sound way.
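A compact Gauss-Jordan sketch of this idea for a $3 \times 3$ matrix is shown below: it row-reduces the augmented matrix $[\mathbf{M}\,|\,\mathbf{I}]$ with partial pivoting until the left half is the identity. The function name and the fixed size are assumptions for illustration:

```cpp
#include <cmath>
#include <utility>

// Gauss-Jordan elimination on [M | I]: when the left half becomes the
// identity, the right half holds M^{-1} (written back into m in place).
bool GaussJordanInvert(float m[3][3]) {
    const int n = 3;
    float aug[3][6];  // augmented matrix [M | I]
    for (int i = 0; i < n; ++i)
        for (int j = 0; j < n; ++j) {
            aug[i][j] = m[i][j];
            aug[i][j + n] = (i == j) ? 1.0f : 0.0f;
        }
    for (int col = 0; col < n; ++col) {
        // Partial pivoting: move the largest remaining entry to the diagonal.
        int pivot = col;
        for (int r = col + 1; r < n; ++r)
            if (std::fabs(aug[r][col]) > std::fabs(aug[pivot][col])) pivot = r;
        if (aug[pivot][col] == 0.0f) return false;  // singular matrix
        if (pivot != col)
            for (int j = 0; j < 2 * n; ++j) std::swap(aug[col][j], aug[pivot][j]);
        const float scale = 1.0f / aug[col][col];
        for (int j = 0; j < 2 * n; ++j) aug[col][j] *= scale;
        // Eliminate this column from all other rows.
        for (int r = 0; r < n; ++r) {
            if (r == col) continue;
            const float f = aug[r][col];
            for (int j = 0; j < 2 * n; ++j) aug[r][j] -= f * aug[col][j];
        }
    }
    for (int i = 0; i < n; ++i)  // copy the right half back: it is M^{-1}
        for (int j = 0; j < n; ++j) m[i][j] = aug[i][j + n];
    return true;
}
```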
LU decomposition is another method that can be used to compute the inverse efficiently. However, a discussion of Gaussian elimination and LU decomposition is beyond the scope of this text. Virtually any book on linear algebra [741, 742] or on numerical methods, such as the one by Press et al. [1034], describes these methods.
Some important rules of computation for the inverse of matrices are: i) $(\mathbf{M}^{-1})^T = (\mathbf{M}^T)^{-1}$, ii) $(\mathbf{M}\mathbf{N})^{-1} = \mathbf{N}^{-1}\mathbf{M}^{-1}$.
Eigenvalue and Eigenvector Computation
The solution to the eigenvalue problem has a large range of uses. For example, one application area is the computation of tight bounding volumes (see Section 17.4). The problem is stated as follows:

$$
\mathbf{A}\mathbf{x} = \lambda \mathbf{x}, \tag{A.40}
$$

where $\lambda$ is a scalar.^7 The matrix $\mathbf{A}$ has to be square (say of size $n \times n$), and $\mathbf{x} \neq \mathbf{0}$; then, if $\mathbf{x}$ fulfills this equation, $\mathbf{x}$ is said to be an eigenvector of $\mathbf{A}$, and $\lambda$ is its corresponding eigenvalue. Rearranging the terms of Equation A.40 yields Equation A.41:

$$
(\lambda \mathbf{I} - \mathbf{A})\mathbf{x} = \mathbf{0}. \tag{A.41}
$$
This equation has a nontrivial solution if and only if $p_{\mathbf{A}}(\lambda) = \det(\lambda \mathbf{I} - \mathbf{A}) = 0$, where the function $p_{\mathbf{A}}(\lambda)$ is called the characteristic polynomial of $\mathbf{A}$. The eigenvalues, $\lambda_0, \ldots, \lambda_{n-1}$, are thus solutions of $p_{\mathbf{A}}(\lambda) = 0$. Focus for a while on a particular eigenvalue $\lambda_i$ of $\mathbf{A}$. Then $\mathbf{x}_i$ is its corresponding eigenvector if $(\lambda_i \mathbf{I} - \mathbf{A})\mathbf{x}_i = \mathbf{0}$, which means that once the eigenvalues have been found, the eigenvectors can be found via Gaussian elimination.
^7 We use $\lambda$ (even though this is not consistent with our notation) because that is what most texts use.
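For a $2 \times 2$ matrix the characteristic polynomial is the quadratic $\lambda^2 - \mathrm{trace}(\mathbf{A})\lambda + |\mathbf{A}|$, so the eigenvalues have a closed form. A minimal C++ sketch follows; the function name and layout are assumptions, and it reports failure for a complex eigenvalue pair:

```cpp
#include <cmath>

// Eigenvalues of a 2x2 matrix from its characteristic polynomial
// p_A(lambda) = det(lambda*I - A) = lambda^2 - trace(A)*lambda + det(A).
bool Eigenvalues2(const float a[2][2], float& l0, float& l1) {
    const float trace = a[0][0] + a[1][1];
    const float det = a[0][0] * a[1][1] - a[0][1] * a[1][0];
    const float disc = trace * trace - 4.0f * det;
    if (disc < 0.0f) return false;  // complex eigenvalue pair
    const float s = std::sqrt(disc);
    l0 = 0.5f * (trace + s);  // roots of the quadratic
    l1 = 0.5f * (trace - s);
    return true;
}
```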