Until now, we have studied linear transformations by examining their ranges and null spaces. In this section, we embark on one of the most useful approaches to the analysis of a linear transformation on a finite-dimensional vector space: the representation of a linear transformation by a matrix. In fact, we develop a one-to-one correspondence between matrices and linear transformations that allows us to utilize properties of one to study properties of the other.
We first need the concept of an ordered basis for a vector space.
Let V be a finite-dimensional vector space. An ordered basis for V is a basis for V endowed with a specific order; that is, an ordered basis for V is a finite sequence of linearly independent vectors in V that generates V.
In $F^3$, $\beta = \{e_1, e_2, e_3\}$ can be considered an ordered basis. Also $\gamma = \{e_2, e_1, e_3\}$ is an ordered basis, but $\beta \neq \gamma$ as ordered bases.
For the vector space $F^n$, we call $\{e_1, e_2, \ldots, e_n\}$ the standard ordered basis for $F^n$. Similarly, for the vector space $P_n(F)$, we call $\{1, x, \ldots, x^n\}$ the standard ordered basis for $P_n(F)$.
Now that we have the concept of ordered basis, we can identify abstract vectors in an n-dimensional vector space with n-tuples. This identification is provided through the use of coordinate vectors, as introduced next.
Let $\beta = \{u_1, u_2, \ldots, u_n\}$ be an ordered basis for a finite-dimensional vector space V. For $x \in V$, let $a_1, a_2, \ldots, a_n$ be the unique scalars such that
$$x = \sum_{i=1}^{n} a_i u_i.$$
We define the coordinate vector of x relative to $\beta$, denoted $[x]_\beta$, by
$$[x]_\beta = \begin{pmatrix} a_1 \\ a_2 \\ \vdots \\ a_n \end{pmatrix}.$$
Notice that $[u_i]_\beta = e_i$ in the preceding definition. It is left as an exercise to show that the correspondence $x \mapsto [x]_\beta$ provides us with a linear transformation from V to $F^n$. We study this transformation in more detail in Section 2.4.
Let $V = P_2(R)$, and let $\beta = \{1, x, x^2\}$ be the standard ordered basis for V. If $f(x) = 4 + 6x - 7x^2$, then
$$[f]_\beta = \begin{pmatrix} 4 \\ 6 \\ -7 \end{pmatrix}.$$
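Computationally, finding $[x]_\beta$ amounts to solving a linear system: stack the basis vectors as the columns of a matrix B, so that the coordinates $a$ satisfy $Ba = x$. A minimal sketch in Python with NumPy, using an illustrative non-standard basis of $R^3$ that is not from the text:

```python
import numpy as np

# [x]_beta is found by solving a linear system: with the basis vectors
# u_1, u_2, u_3 as the columns of B, the coordinates a satisfy B a = x.
# This basis is an illustrative (non-standard) ordered basis of R^3.
u1 = np.array([1.0, 0.0, 0.0])
u2 = np.array([1.0, 1.0, 0.0])
u3 = np.array([1.0, 1.0, 1.0])
B = np.column_stack([u1, u2, u3])

x = np.array([4.0, 6.0, -7.0])
coords = np.linalg.solve(B, x)     # the coordinate vector [x]_beta
print(coords)                      # [-2. 13. -7.]

# Each basis vector's own coordinate vector is the corresponding e_j.
for j, u in enumerate([u1, u2, u3]):
    assert np.allclose(np.linalg.solve(B, u), np.eye(3)[:, j])
```

For the standard ordered basis, B is the identity matrix and $[x]_\beta$ is just x itself.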
Let us now proceed with the promised matrix representation of a linear transformation. Suppose that V and W are finite-dimensional vector spaces with ordered bases $\beta = \{v_1, v_2, \ldots, v_n\}$ and $\gamma = \{w_1, w_2, \ldots, w_m\}$, respectively. Let $T\colon V \to W$ be linear. Then for each $j$, $1 \le j \le n$, there exist unique scalars $a_{ij} \in F$, $1 \le i \le m$, such that
$$T(v_j) = \sum_{i=1}^{m} a_{ij} w_i \quad \text{for } 1 \le j \le n.$$
Using the notation above, we call the $m \times n$ matrix A defined by $A_{ij} = a_{ij}$ the matrix representation of T in the ordered bases $\beta$ and $\gamma$ and write $A = [T]_\beta^\gamma$. If $V = W$ and $\beta = \gamma$, then we write $A = [T]_\beta$.
Notice that the jth column of A is simply $[T(v_j)]_\gamma$. Also observe that if $U\colon V \to W$ is a linear transformation such that $[U]_\beta^\gamma = [T]_\beta^\gamma$, then $U = T$ by the corollary to Theorem 2.6 (p. 73).
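The observation that the jth column of A is $[T(v_j)]_\gamma$ translates directly into a procedure: apply T to each domain basis vector and express the result in coordinates relative to the codomain basis. A sketch with NumPy; the helper name `matrix_rep` and the sample map T are ours, chosen for illustration:

```python
import numpy as np

def matrix_rep(T, beta, gamma):
    """Matrix of the linear map T relative to ordered bases beta (domain)
    and gamma (codomain), each given as a list of NumPy vectors.
    Column j is [T(v_j)]_gamma, obtained by solving a linear system.
    (Illustrative helper; the name is ours, not the text's.)"""
    G = np.column_stack(gamma)
    return np.column_stack([np.linalg.solve(G, T(v)) for v in beta])

# A sample map T: R^2 -> R^2, T(a1, a2) = (a1 + a2, a1 - a2),
# represented in the standard ordered basis of R^2.
T = lambda v: np.array([v[0] + v[1], v[0] - v[1]])
std = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
A = matrix_rep(T, std, std)
print(A)   # [[ 1.  1.]
           #  [ 1. -1.]]
```

Solving against the matrix of codomain basis vectors is exactly the coordinate computation from the previous definition, applied to each $T(v_j)$.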
We illustrate the computation of $[T]_\beta^\gamma$ in the next several examples.
Let $T\colon R^2 \to R^3$ be the linear transformation defined by
$$T(a_1, a_2) = (a_1 + 3a_2,\; 0,\; 2a_1 - 4a_2).$$
Let $\beta$ and $\gamma$ be the standard ordered bases for $R^2$ and $R^3$, respectively. Now
$$T(1, 0) = (1, 0, 2) = 1e_1 + 0e_2 + 2e_3$$
and
$$T(0, 1) = (3, 0, -4) = 3e_1 + 0e_2 - 4e_3.$$
Hence
$$[T]_\beta^\gamma = \begin{pmatrix} 1 & 3 \\ 0 & 0 \\ 2 & -4 \end{pmatrix}.$$
If we let $\gamma' = \{e_3, e_2, e_1\}$, then
$$[T]_\beta^{\gamma'} = \begin{pmatrix} 2 & -4 \\ 0 & 0 \\ 1 & 3 \end{pmatrix}.$$
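Reordering the codomain basis permutes the rows of the representation, as a quick numerical check confirms. Here T is the map of Example 3 (assumed form $T(a_1, a_2) = (a_1 + 3a_2, 0, 2a_1 - 4a_2)$), with the standard ordered basis and its reversal:

```python
import numpy as np

# Reordering the codomain basis permutes the rows of the representation.
# T is the map of Example 3 (assumed form); gamma is the standard ordered
# basis of R^3 and gammap lists the same vectors as {e3, e2, e1}.
T = lambda v: np.array([v[0] + 3*v[1], 0.0, 2*v[0] - 4*v[1]])

def rep(T, gamma):
    G = np.column_stack(gamma)
    return np.column_stack([np.linalg.solve(G, T(e)) for e in np.eye(2)])

gamma  = [np.eye(3)[:, j] for j in (0, 1, 2)]
gammap = [np.eye(3)[:, j] for j in (2, 1, 0)]

A  = rep(T, gamma)    # [[1, 3], [0, 0], [2, -4]]
Ap = rep(T, gammap)   # the same rows, bottom to top
assert np.allclose(A[::-1], Ap)
```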
Let $T\colon P_3(R) \to P_2(R)$ be the linear transformation defined by $T(f(x)) = f'(x)$. Let $\beta$ and $\gamma$ be the standard ordered bases for $P_3(R)$ and $P_2(R)$, respectively. Then
$$\begin{aligned}
T(1) &= 0 \cdot 1 + 0 \cdot x + 0 \cdot x^2 \\
T(x) &= 1 \cdot 1 + 0 \cdot x + 0 \cdot x^2 \\
T(x^2) &= 0 \cdot 1 + 2 \cdot x + 0 \cdot x^2 \\
T(x^3) &= 0 \cdot 1 + 0 \cdot x + 3 \cdot x^2.
\end{aligned}$$
So
$$[T]_\beta^\gamma = \begin{pmatrix} 0 & 1 & 0 & 0 \\ 0 & 0 & 2 & 0 \\ 0 & 0 & 0 & 3 \end{pmatrix}.$$
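The same bookkeeping can be checked numerically: represent a polynomial by its coefficient vector in the standard basis, and build the matrix of the differentiation map column by column. A sketch with NumPy:

```python
import numpy as np

# The differentiation map T: P_3(R) -> P_2(R), T(f) = f', in the standard
# ordered bases {1, x, x^2, x^3} and {1, x, x^2}.  A polynomial is stored
# as its coefficient (= coordinate) vector in the standard basis.
def deriv_coords(c):
    # d/dx of c0 + c1*x + c2*x^2 + c3*x^3 is c1 + 2*c2*x + 3*c3*x^2
    return np.array([k * c[k] for k in range(1, len(c))], dtype=float)

# Column j of [T] is the coordinate vector of T applied to the jth basis vector.
A = np.column_stack([deriv_coords(e) for e in np.eye(4)])
print(A)
# [[0. 1. 0. 0.]
#  [0. 0. 2. 0.]
#  [0. 0. 0. 3.]]

# Sanity check on f(x) = 2 - 3x + x^3: f'(x) = -3 + 3x^2.
assert np.allclose(A @ np.array([2.0, -3.0, 0.0, 1.0]), [-3.0, 0.0, 3.0])
```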
Note that when $T(x^j)$ is written as a linear combination of the vectors of $\gamma$, its coefficients give the entries of column $j + 1$ of $[T]_\beta^\gamma$.
Let V and W be finite-dimensional vector spaces with ordered bases $\beta = \{v_1, v_2, \ldots, v_n\}$ and $\gamma = \{w_1, w_2, \ldots, w_m\}$, respectively. Then
$$T_0(v_j) = 0 = 0w_1 + 0w_2 + \cdots + 0w_m \quad \text{for } 1 \le j \le n.$$
Hence $[T_0]_\beta^\gamma = O$, the $m \times n$ zero matrix. Also,
$$I_V(v_j) = v_j = 0v_1 + \cdots + 0v_{j-1} + 1v_j + 0v_{j+1} + \cdots + 0v_n.$$
Hence the jth column of $[I_V]_\beta$ is $e_j$; that is,
$$[I_V]_\beta = \begin{pmatrix} 1 & 0 & \cdots & 0 \\ 0 & 1 & \cdots & 0 \\ \vdots & \vdots & & \vdots \\ 0 & 0 & \cdots & 1 \end{pmatrix}.$$
The preceding matrix is called the identity matrix and is defined next, along with a very useful notation, the Kronecker delta.
We define the Kronecker delta $\delta_{ij}$ by $\delta_{ij} = 1$ if $i = j$ and $\delta_{ij} = 0$ if $i \neq j$. The $n \times n$ identity matrix $I_n$ is defined by $(I_n)_{ij} = \delta_{ij}$. When the context is clear, we sometimes omit the subscript n from $I_n$.
For example,
$$I_1 = (1), \qquad I_2 = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}, \qquad \text{and} \qquad I_3 = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}.$$
Thus, the matrix representation of a zero transformation is a zero matrix, and the matrix representation of an identity transformation is an identity matrix.
Now that we have defined a procedure for associating matrices with linear transformations, we show in Theorem 2.8 that this association “preserves” addition and scalar multiplication. To make this more explicit, we need some preliminary discussion about the addition and scalar multiplication of linear transformations.
Let $T, U\colon V \to W$ be arbitrary functions, where V and W are vector spaces over F, and let $a \in F$. We define $T + U\colon V \to W$ by $(T + U)(x) = T(x) + U(x)$ for all $x \in V$, and $aT\colon V \to W$ by $(aT)(x) = aT(x)$ for all $x \in V$.
Of course, these are just the usual definitions of addition and scalar multiplication of functions. We are fortunate, however, to have the result that both sums and scalar multiples of linear transformations are also linear.
Theorem 2.7. Let V and W be vector spaces over a field F, and let $T, U\colon V \to W$ be linear.
(a) For all $a \in F$, $aT + U$ is linear.
(b) Using the operations of addition and scalar multiplication in the preceding definition, the collection of all linear transformations from V to W is a vector space over F.
Proof.
(a) Let $x, y \in V$ and $c \in F$. Then
$$\begin{aligned}
(aT + U)(cx + y) &= aT(cx + y) + U(cx + y) \\
&= a[cT(x) + T(y)] + cU(x) + U(y) \\
&= acT(x) + cU(x) + aT(y) + U(y) \\
&= c(aT + U)(x) + (aT + U)(y).
\end{aligned}$$
So $aT + U$ is linear.
(b) Noting that $T_0$, the zero transformation, plays the role of the zero vector, it is easy to verify that the axioms of a vector space are satisfied, and hence that the collection of all linear transformations from V into W is a vector space over F.
Let V and W be vector spaces over F. We denote the vector space of all linear transformations from V into W by L(V, W). In the case that $V = W$, we write L(V) instead of L(V, V).
In Section 2.4, we see a complete identification of L(V, W) with the vector space $M_{m \times n}(F)$, where n and m are the dimensions of V and W, respectively. This identification is easily established by the use of the next theorem.
Theorem 2.8. Let V and W be finite-dimensional vector spaces with ordered bases $\beta$ and $\gamma$, respectively, and let $T, U\colon V \to W$ be linear transformations. Then
(a) $[T + U]_\beta^\gamma = [T]_\beta^\gamma + [U]_\beta^\gamma$ and
(b) $[aT]_\beta^\gamma = a[T]_\beta^\gamma$ for all scalars a.
Proof.
Let $\beta = \{v_1, v_2, \ldots, v_n\}$ and $\gamma = \{w_1, w_2, \ldots, w_m\}$. There exist unique scalars $a_{ij}$ and $b_{ij}$ ($1 \le i \le m$, $1 \le j \le n$) such that
$$T(v_j) = \sum_{i=1}^{m} a_{ij} w_i \quad \text{and} \quad U(v_j) = \sum_{i=1}^{m} b_{ij} w_i \quad \text{for } 1 \le j \le n.$$
Hence
$$(T + U)(v_j) = \sum_{i=1}^{m} (a_{ij} + b_{ij}) w_i.$$
Thus
$$([T + U]_\beta^\gamma)_{ij} = a_{ij} + b_{ij} = ([T]_\beta^\gamma + [U]_\beta^\gamma)_{ij}.$$
So (a) is proved, and the proof of (b) is similar.
Let $T\colon R^2 \to R^3$ and $U\colon R^2 \to R^3$ be the linear transformations respectively defined by
$$T(a_1, a_2) = (a_1 + 3a_2,\; 0,\; 2a_1 - 4a_2) \quad \text{and} \quad U(a_1, a_2) = (a_1 - a_2,\; 2a_1,\; 3a_1 + 2a_2).$$
Let $\beta$ and $\gamma$ be the standard ordered bases of $R^2$ and $R^3$, respectively. Then
$$[T]_\beta^\gamma = \begin{pmatrix} 1 & 3 \\ 0 & 0 \\ 2 & -4 \end{pmatrix}$$
(as computed in Example 3), and
$$[U]_\beta^\gamma = \begin{pmatrix} 1 & -1 \\ 2 & 0 \\ 3 & 2 \end{pmatrix}.$$
If we compute $T + U$ using the preceding definitions, we obtain
$$(T + U)(a_1, a_2) = (2a_1 + 2a_2,\; 2a_1,\; 5a_1 - 2a_2).$$
So
$$[T + U]_\beta^\gamma = \begin{pmatrix} 2 & 2 \\ 2 & 0 \\ 5 & -2 \end{pmatrix},$$
which is simply $[T]_\beta^\gamma + [U]_\beta^\gamma$, illustrating Theorem 2.8.
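Theorem 2.8 can also be checked numerically. The sketch below (NumPy, standard ordered bases, with the two maps in the assumed forms $T(a_1,a_2) = (a_1+3a_2, 0, 2a_1-4a_2)$ and $U(a_1,a_2) = (a_1-a_2, 2a_1, 3a_1+2a_2)$) recomputes each representation from scratch:

```python
import numpy as np

# Numerical check of Theorem 2.8 for two maps R^2 -> R^3 (assumed forms),
# in the standard ordered bases of R^2 and R^3.
T = lambda v: np.array([v[0] + 3*v[1], 0.0, 2*v[0] - 4*v[1]])
U = lambda v: np.array([v[0] - v[1], 2*v[0], 3*v[0] + 2*v[1]])

def rep(F):
    # In the standard bases, column j of [F] is just F(e_j).
    return np.column_stack([F(e) for e in np.eye(2)])

S = rep(lambda v: T(v) + U(v))                       # [T + U]
assert np.allclose(S, rep(T) + rep(U))               # part (a)
assert np.allclose(rep(lambda v: 5*T(v)), 5*rep(T))  # part (b), a = 5
print(S)
# [[ 2.  2.]
#  [ 2.  0.]
#  [ 5. -2.]]
```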
Label the following statements as true or false. Assume that V and W are finite-dimensional vector spaces with ordered bases $\beta$ and $\gamma$, respectively, and $T, U\colon V \to W$ are linear transformations.
(a) For any scalar $a$, $aT + U$ is a linear transformation from V to W.
(b) $[T]_\beta^\gamma = [U]_\beta^\gamma$ implies that $T = U$.
(c) If $m = \dim(V)$ and $n = \dim(W)$, then $[T]_\beta^\gamma$ is an $m \times n$ matrix.
(d) $[T + U]_\beta^\gamma = [T]_\beta^\gamma + [U]_\beta^\gamma$.
(e) L(V, W) is a vector space.
(f) L(V, W) = L(W, V).
Let $\beta$ and $\gamma$ be the standard ordered bases for $R^n$ and $R^m$, respectively. For each linear transformation $T\colon R^n \to R^m$, compute $[T]_\beta^\gamma$.
(a) $T\colon R^2 \to R^3$ defined by $T(a_1, a_2) = (2a_1 - a_2,\; 3a_1 + 4a_2,\; a_1)$.
(b) $T\colon R^3 \to R^2$ defined by $T(a_1, a_2, a_3) = (2a_1 + 3a_2 - a_3,\; a_1 + a_3)$.
(c) $T\colon R^3 \to R$ defined by $T(a_1, a_2, a_3) = 2a_1 + a_2 - 3a_3$.
(d) $T\colon R^3 \to R^3$ defined by
$$T(a_1, a_2, a_3) = (2a_2 + a_3,\; -a_1 + 4a_2 + 5a_3,\; a_1 + a_3).$$
(e) $T\colon R^n \to R^n$ defined by $T(a_1, a_2, \ldots, a_n) = (a_1, a_1, \ldots, a_1)$.
(f) $T\colon R^n \to R^n$ defined by $T(a_1, a_2, \ldots, a_n) = (a_n, a_{n-1}, \ldots, a_1)$.
(g) $T\colon R^n \to R$ defined by $T(a_1, a_2, \ldots, a_n) = a_1 + a_n$.
Let $T\colon R^2 \to R^3$ be defined by $T(a_1, a_2) = (a_1 - a_2,\; a_1,\; 2a_1 + a_2)$. Let $\beta$ be the standard ordered basis for $R^2$ and $\gamma = \{(1, 1, 0), (0, 1, 1), (2, 2, 3)\}$. Compute $[T]_\beta^\gamma$. If $\alpha = \{(1, 2), (2, 3)\}$, compute $[T]_\alpha^\gamma$.
Define
$$T\colon M_{2 \times 2}(R) \to P_2(R) \quad \text{by} \quad T\begin{pmatrix} a & b \\ c & d \end{pmatrix} = (a + b) + (2d)x + bx^2.$$
Let
$$\beta = \left\{ \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}, \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}, \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix}, \begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix} \right\} \quad \text{and} \quad \gamma = \{1, x, x^2\}.$$
Compute $[T]_\beta^\gamma$.
Let
$$\alpha = \left\{ \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}, \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}, \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix}, \begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix} \right\}, \quad \beta = \{1, x, x^2\},$$
and
$$\gamma = \{1\}.$$
(a) Define $T\colon M_{2 \times 2}(F) \to M_{2 \times 2}(F)$ by $T(A) = A^t$. Compute $[T]_\alpha$.
(b) Define
$$T\colon P_2(R) \to M_{2 \times 2}(R) \quad \text{by} \quad T(f(x)) = \begin{pmatrix} f'(0) & 2f(1) \\ 0 & f''(3) \end{pmatrix},$$
where ′ denotes differentiation. Compute $[T]_\beta^\alpha$.
(c) Define $T\colon M_{2 \times 2}(F) \to F$ by $T(A) = \operatorname{tr}(A)$. Compute $[T]_\alpha^\gamma$.
(d) Define $T\colon P_2(R) \to R$ by $T(f(x)) = f(2)$. Compute $[T]_\beta^\gamma$.
(e) If
$$A = \begin{pmatrix} 1 & -2 \\ 0 & 4 \end{pmatrix},$$
compute $[A]_\alpha$.
(f) If $f(x) = 3 - 6x + x^2$, compute $[f(x)]_\beta$.
(g) For $a \in F$, compute $[a]_\gamma$.
Complete the proof of part (b) of Theorem 2.7.
Prove part (b) of Theorem 2.8.
† Let V be an n-dimensional vector space with an ordered basis $\beta$. Define $T\colon V \to F^n$ by $T(x) = [x]_\beta$. Prove that T is linear.
Let V be the vector space of complex numbers over the field R. Define $T\colon V \to V$ by $T(z) = \bar{z}$, where $\bar{z}$ is the complex conjugate of z. Prove that T is linear, and compute $[T]_\beta$, where $\beta = \{1, i\}$. (Recall by Exercise 39 of Section 2.1 that T is not linear if V is regarded as a vector space over the field C.)
Let V be a vector space with the ordered basis $\beta = \{v_1, v_2, \ldots, v_n\}$. Define $v_0 = 0$. By Theorem 2.6 (p. 73), there exists a linear transformation $T\colon V \to V$ such that $T(v_j) = v_j + v_{j-1}$ for $j = 1, 2, \ldots, n$. Compute $[T]_\beta$.
Let V be an n-dimensional vector space, and let $T\colon V \to V$ be a linear transformation. Suppose that W is a T-invariant subspace of V (see the exercises of Section 2.1) having dimension k. Show that there is a basis $\beta$ for V such that $[T]_\beta$ has the form
$$\begin{pmatrix} A & B \\ O & C \end{pmatrix},$$
where A is a $k \times k$ matrix and O is the $(n - k) \times k$ zero matrix.
† Let $\beta = \{v_1, v_2, \ldots, v_n\}$ be a basis for a vector space V and $T\colon V \to V$ be a linear transformation. Prove that $[T]_\beta$ is upper triangular if and only if $T(v_j) \in \operatorname{span}(\{v_1, v_2, \ldots, v_j\})$ for $j = 1, 2, \ldots, n$. Visit goo.gl/
Let V be a finite-dimensional vector space and T be the projection on W along W′, where W and W′ are subspaces of V. (See the definition in the exercises of Section 2.1 on page 76.) Find an ordered basis $\beta$ for V such that $[T]_\beta$ is a diagonal matrix.
Let V and W be vector spaces, and let T and U be nonzero linear transformations from V into W. If $R(T) \cap R(U) = \{0\}$, prove that {T, U} is a linearly independent subset of L(V, W).
Let $V = P(R)$, and for $j \ge 1$ define $T_j(f(x)) = f^{(j)}(x)$, where $f^{(j)}(x)$ is the jth derivative of f(x). Prove that the set $\{T_1, T_2, \ldots, T_n\}$ is a linearly independent subset of L(V) for any positive integer n.
Let V and W be vector spaces, and let S be a subset of V. Define $S^0 = \{T \in L(V, W) : T(x) = 0 \text{ for all } x \in S\}$. Prove the following statements.
(a) $S^0$ is a subspace of L(V, W).
(b) If $S_1$ and $S_2$ are subsets of V and $S_1 \subseteq S_2$, then $S_2^0 \subseteq S_1^0$.
(c) If $V_1$ and $V_2$ are subspaces of V, then $(V_1 + V_2)^0 = V_1^0 \cap V_2^0$.
Let V and W be vector spaces such that $\dim(V) = \dim(W)$, and let $T\colon V \to W$ be linear. Show that there exist ordered bases $\beta$ and $\gamma$ for V and W, respectively, such that $[T]_\beta^\gamma$ is a diagonal matrix.