In this section, we consider a number of examples of linear transformations. Many of these transformations are studied in more detail in later sections. Recall that a function $T$ with domain $V$ and codomain $W$ is denoted by $T\colon V\to W$. (See Appendix B.)
Let $V$ and $W$ be vector spaces over the same field $F$. We call a function $T\colon V\to W$ a linear transformation from $V$ to $W$ if, for all $x, y \in V$ and $c \in F$, we have
(a) $T(x + y) = T(x) + T(y)$ and
(b) $T(cx) = cT(x)$.
If the underlying field $F$ is the field of rational numbers, then (a) implies (b) (see Exercise 38), but, in general, (a) and (b) are logically independent. See Exercises 39 and 40.
We often simply call $T$ linear. The reader should verify the following properties of a function $T\colon V\to W$. (See Exercise 7.)
1. If $T$ is linear, then $T(0) = 0$.
2. $T$ is linear if and only if $T(cx + y) = cT(x) + T(y)$ for all $x, y \in V$ and $c \in F$.
3. If $T$ is linear, then $T(x - y) = T(x) - T(y)$ for all $x, y \in V$.
4. $T$ is linear if and only if, for $x_1, x_2, \ldots, x_n \in V$ and $a_1, a_2, \ldots, a_n \in F$, we have
$$T\Big(\sum_{i=1}^{n} a_i x_i\Big) = \sum_{i=1}^{n} a_i T(x_i).$$
We generally use property 2 to prove that a given transformation is linear.
Example 1. Define $T\colon R^2 \to R^2$ by
$$T(a_1, a_2) = (2a_1 + a_2, a_1).$$
To show that $T$ is linear, let $c \in R$ and $x, y \in R^2$, where $x = (b_1, b_2)$ and $y = (d_1, d_2)$. Since
$$cx + y = (cb_1 + d_1, cb_2 + d_2),$$
we have
$$T(cx + y) = (2(cb_1 + d_1) + cb_2 + d_2, cb_1 + d_1).$$
Also
$$cT(x) + T(y) = c(2b_1 + b_2, b_1) + (2d_1 + d_2, d_1) = (2cb_1 + cb_2 + 2d_1 + d_2, cb_1 + d_1) = T(cx + y).$$
So $T$ is linear.
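The one-step test in property 2 is easy to check numerically. Below is a minimal sketch (assuming the map has the form $T(a_1, a_2) = (2a_1 + a_2, a_1)$ used here for Example 1; the function name `T` is ours) that verifies $T(cx + y) = cT(x) + T(y)$ on random inputs:

```python
import numpy as np

# Illustrative linear map (assumed form): T(a1, a2) = (2*a1 + a2, a1).
def T(v):
    a1, a2 = v
    return np.array([2 * a1 + a2, a1])

rng = np.random.default_rng(0)
x, y = rng.standard_normal(2), rng.standard_normal(2)
c = float(rng.standard_normal())

# Property 2: T is linear iff T(c*x + y) == c*T(x) + T(y).
assert np.allclose(T(c * x + y), c * T(x) + T(y))
```

Checking the single condition of property 2 is equivalent to checking (a) and (b) of the definition separately, which is why it is the preferred test.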
As we will see in Chapter 6, the applications of linear algebra to geometry are wide and varied. The main reason for this is that most of the important geometrical transformations are linear. Three particular transformations that we now consider are rotation, reflection, and projection. We leave the proofs of linearity to the reader.
Example 2. For any angle $\theta$, define $T_\theta\colon R^2 \to R^2$ by the rule: $T_\theta(a_1, a_2)$ is the vector obtained by rotating $(a_1, a_2)$ counterclockwise by $\theta$ if $(a_1, a_2) \ne (0, 0)$, and $T_\theta(0, 0) = (0, 0)$. Then $T_\theta$ is a linear transformation that is called the rotation by $\theta$.
We determine an explicit formula for $T_\theta$. Fix a nonzero vector $(a_1, a_2) \in R^2$. Let $\alpha$ be the angle that $(a_1, a_2)$ makes with the positive $x$-axis (see Figure 2.1(a)), and let $r = \sqrt{a_1^2 + a_2^2}$. Then $a_1 = r\cos\alpha$ and $a_2 = r\sin\alpha$. Also, $T_\theta(a_1, a_2)$ has length $r$ and makes an angle $\alpha + \theta$ with the positive $x$-axis. It follows that
$$T_\theta(a_1, a_2) = (r\cos(\alpha + \theta), r\sin(\alpha + \theta)) = (a_1\cos\theta - a_2\sin\theta, a_1\sin\theta + a_2\cos\theta).$$
Finally, observe that this same formula is valid for $(a_1, a_2) = (0, 0)$. It is now easy to show, as in Example 1, that $T_\theta$ is linear.
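The explicit rotation formula $T_\theta(a_1, a_2) = (a_1\cos\theta - a_2\sin\theta,\ a_1\sin\theta + a_2\cos\theta)$ can be sanity-checked numerically; the sketch below (the helper name `rotate` is ours) confirms that the formula preserves length, advances the angle by $\theta$, and passes the linearity test of property 2:

```python
import numpy as np

def rotate(theta, v):
    """Rotation by theta: (a1, a2) -> (a1 cos t - a2 sin t, a1 sin t + a2 cos t)."""
    a1, a2 = v
    return np.array([a1 * np.cos(theta) - a2 * np.sin(theta),
                     a1 * np.sin(theta) + a2 * np.cos(theta)])

v = np.array([3.0, 4.0])
theta = 0.7
w = rotate(theta, v)

# The image has the same length r and makes an angle alpha + theta with the x-axis.
assert np.isclose(np.linalg.norm(w), np.linalg.norm(v))
assert np.isclose(np.arctan2(w[1], w[0]) - np.arctan2(v[1], v[0]), theta)

# Linearity spot check (property 2).
x, y, c = np.array([1.0, -2.0]), np.array([0.5, 2.5]), 3.0
assert np.allclose(rotate(theta, c * x + y), c * rotate(theta, x) + rotate(theta, y))
```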
Example 3. Define $T\colon R^2 \to R^2$ by $T(a_1, a_2) = (a_1, -a_2)$. $T$ is called the reflection about the $x$-axis. (See Figure 2.1(b).)
Example 4. Define $T\colon R^2 \to R^2$ by $T(a_1, a_2) = (a_1, 0)$. $T$ is called the projection on the $x$-axis. (See Figure 2.1(c).)
We now look at some additional examples of linear transformations.
Example 5. Define $T\colon M_{m\times n}(F) \to M_{n\times m}(F)$ by $T(A) = A^t$, where $A^t$ is the transpose of $A$, defined in Section 1.3. Then $T$ is a linear transformation by Exercise 3 of Section 1.3.
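Linearity of the transpose map is easy to confirm numerically; a quick sketch checking $T(cA + B) = cT(A) + T(B)$ for arbitrary $2 \times 3$ matrices (the particular shapes and scalar are our choices):

```python
import numpy as np

rng = np.random.default_rng(1)
A, B = rng.standard_normal((2, 3)), rng.standard_normal((2, 3))
c = 2.5

# T(A) = A^t maps M_{2x3}(F) into M_{3x2}(F); check T(cA + B) = c T(A) + T(B).
assert (c * A + B).T.shape == (3, 2)
assert np.allclose((c * A + B).T, c * A.T + B.T)
```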
Example 6. Let $V$ denote the set of all real-valued functions defined on the real line that have derivatives of every order. It is easily shown that $V$ is a vector space over $R$. (See Exercise 16 of Section 1.3.)
Define $T\colon V \to V$ by $T(f) = f'$, the derivative of $f$. To show that $T$ is linear, let $g, h \in V$ and $a \in R$. Now
$$T(ag + h) = (ag + h)' = ag' + h' = aT(g) + T(h).$$
So by property 2 above, $T$ is linear.
Example 7. Let $V = C(R)$, the vector space of continuous real-valued functions on $R$. Let $a, b \in R$ with $a < b$. Define $T\colon V \to R$ by
$$T(f) = \int_a^b f(t)\,dt$$
for all $f \in V$. Then $T$ is a linear transformation because the definite integral of a linear combination of functions is the same as the linear combination of the definite integrals of the functions.
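A numerical sketch of this fact, using a trapezoid-rule approximation of the definite integral on $[a, b] = [0, 1]$ (the grid and the sample functions are our illustrative choices):

```python
import numpy as np

t = np.linspace(0.0, 1.0, 1001)      # grid on [a, b] = [0, 1]
f, g = np.sin(t), t ** 2
c = 3.0

def integral(h):
    # Trapezoid rule: a numerical stand-in for the definite integral over [0, 1].
    return float(np.sum(0.5 * (h[1:] + h[:-1]) * np.diff(t)))

# The integral of a linear combination equals the linear combination of the integrals.
assert np.isclose(integral(c * f + g), c * integral(f) + integral(g))
```

The trapezoid rule is itself a linear functional of the sampled values, so the identity holds here exactly up to floating-point rounding, mirroring the linearity of the exact integral.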
Two very important examples of linear transformations that appear frequently in the remainder of the book, and therefore deserve their own notation, are the identity and zero transformations.
For vector spaces $V$ and $W$ (over $F$), we define the identity transformation $I_V\colon V \to V$ by $I_V(x) = x$ for all $x \in V$ and the zero transformation $T_0\colon V \to W$ by $T_0(x) = 0$ for all $x \in V$. It is clear that both of these transformations are linear. We often write $I$ instead of $I_V$.
We now turn our attention to two very important sets associated with linear transformations: the range and null space. The determination of these sets allows us to examine more closely the intrinsic properties of a linear transformation.
Let $V$ and $W$ be vector spaces, and let $T\colon V \to W$ be linear. We define the null space (or kernel) $N(T)$ of $T$ to be the set of all vectors $x$ in $V$ such that $T(x) = 0$; that is, $N(T) = \{x \in V : T(x) = 0\}$.
We define the range (or image) $R(T)$ of $T$ to be the subset of $W$ consisting of all images (under $T$) of vectors in $V$; that is, $R(T) = \{T(x) : x \in V\}$.
Example 8. Let $V$ and $W$ be vector spaces, and let $I\colon V \to V$ and $T_0\colon V \to W$ be the identity and zero transformations, respectively. Then $N(I) = \{0\}$, $R(I) = V$, $N(T_0) = V$, and $R(T_0) = \{0\}$.
Example 9. Let $T\colon R^3 \to R^2$ be the linear transformation defined by
$$T(a_1, a_2, a_3) = (a_1 - a_2, 2a_3).$$
It is left as an exercise to verify that
$$N(T) = \{(a, a, 0) : a \in R\} \quad\text{and}\quad R(T) = R^2.$$
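Assuming the map here has the form $T(a_1, a_2, a_3) = (a_1 - a_2, 2a_3)$, the claimed null space and range can be checked numerically through its standard matrix:

```python
import numpy as np

# Standard matrix of the (assumed) map T(a1, a2, a3) = (a1 - a2, 2*a3).
A = np.array([[1.0, -1.0, 0.0],
              [0.0,  0.0, 2.0]])

# Every vector of the form (a, a, 0) lies in the null space ...
for a in (-2.0, 0.0, 5.0):
    assert np.allclose(A @ np.array([a, a, 0.0]), 0.0)

# ... and rank(A) = 2 = dim(R^2), so the range is all of R^2.
assert np.linalg.matrix_rank(A) == 2
```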
In Examples 8 and 9, we see that the range and null space of each of the linear transformations is a subspace. The next result shows that this is true in general.
Theorem 2.1. Let $V$ and $W$ be vector spaces and $T\colon V \to W$ be linear. Then $N(T)$ and $R(T)$ are subspaces of $V$ and $W$, respectively.
Proof.
To clarify the notation, we use the symbols $0_V$ and $0_W$ to denote the zero vectors of $V$ and $W$, respectively.
Since $T(0_V) = 0_W$, we have that $0_V \in N(T)$. Let $x, y \in N(T)$ and $c \in F$. Then $T(x + y) = T(x) + T(y) = 0_W + 0_W = 0_W$, and $T(cx) = cT(x) = c\,0_W = 0_W$. Hence $x + y \in N(T)$ and $cx \in N(T)$, so that $N(T)$ is a subspace of $V$.
Because $T(0_V) = 0_W$, we have that $0_W \in R(T)$. Now let $x, y \in R(T)$ and $c \in F$. Then there exist $v$ and $w$ in $V$ such that $T(v) = x$ and $T(w) = y$. So $T(v + w) = T(v) + T(w) = x + y$, and $T(cv) = cT(v) = cx$. Thus $x + y \in R(T)$ and $cx \in R(T)$, so $R(T)$ is a subspace of $W$.
The next theorem provides a method for finding a spanning set for the range of a linear transformation. With this accomplished, a basis for the range is easy to discover using the technique of Example 6 of Section 1.6.
Theorem 2.2. Let $V$ and $W$ be vector spaces, and let $T\colon V \to W$ be linear. If $\beta = \{v_1, v_2, \ldots, v_n\}$ is a basis for $V$, then
$$R(T) = \mathrm{span}(T(\beta)) = \mathrm{span}(\{T(v_1), T(v_2), \ldots, T(v_n)\}).$$
Proof.
Clearly $T(v_i) \in R(T)$ for each $i$. Because $R(T)$ is a subspace, $R(T)$ contains $\mathrm{span}(\{T(v_1), T(v_2), \ldots, T(v_n)\}) = \mathrm{span}(T(\beta))$ by Theorem 1.1 (p. 31).
Now suppose that $w \in R(T)$. Then $w = T(v)$ for some $v \in V$. Because $\beta$ is a basis for $V$, we have
$$v = \sum_{i=1}^{n} a_i v_i \quad\text{for some } a_1, a_2, \ldots, a_n \in F.$$
Since $T$ is linear, it follows that
$$w = T(v) = \sum_{i=1}^{n} a_i T(v_i) \in \mathrm{span}(T(\beta)).$$
So $R(T)$ is contained in $\mathrm{span}(T(\beta))$.
It should be noted that Theorem 2.2 is true if $\beta$ is infinite, that is, $R(T) = \mathrm{span}(\{T(v) : v \in \beta\})$. (See Exercise 34.)
The next example illustrates the usefulness of Theorem 2.2.
Example 10. Define the linear transformation $T\colon P_2(R) \to M_{2\times 2}(R)$ by
$$T(f(x)) = \begin{pmatrix} f(1) - f(2) & 0 \\ 0 & f(0) \end{pmatrix}.$$
Since $\beta = \{1, x, x^2\}$ is a basis for $P_2(R)$, we have
$$R(T) = \mathrm{span}(T(\beta)) = \mathrm{span}\left(\left\{\begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix}, \begin{pmatrix} -1 & 0 \\ 0 & 0 \end{pmatrix}, \begin{pmatrix} -3 & 0 \\ 0 & 0 \end{pmatrix}\right\}\right) = \mathrm{span}\left(\left\{\begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix}, \begin{pmatrix} -1 & 0 \\ 0 & 0 \end{pmatrix}\right\}\right).$$
Thus we have found a basis for $R(T)$, and so $\mathrm{rank}(T) = 2$.
Now suppose that we want to find a basis for N(T). Note that $f \in N(T)$ if and only if $T(f(x)) = O$, the $2 \times 2$ zero matrix. That is, $f \in N(T)$ if and only if
$$f(1) - f(2) = 0 \quad\text{and}\quad f(0) = 0.$$
Let $f(x) = a + bx + cx^2$. Then
$$f(1) - f(2) = (a + b + c) - (a + 2b + 4c) = -b - 3c,$$
and $f(0) = a$. Hence $f \in N(T)$ if and only if $a = 0$ and $b = -3c$; that is,
$$N(T) = \{c(-3x + x^2) : c \in R\}.$$
Therefore a basis for $N(T)$ is $\{-3x + x^2\}$.
Note that in this example
$$\mathrm{nullity}(T) + \mathrm{rank}(T) = 1 + 2 = 3 = \dim(P_2(R)).$$
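The computation in this example can be mirrored numerically by working in coordinates: writing $f(x) = a + bx + cx^2$ and recording the two nonzero entries of the image, which we take to be $f(1) - f(2) = -b - 3c$ and $f(0) = a$ (an assumption about the form of the transformation), as rows of a matrix acting on $(a, b, c)$:

```python
import numpy as np

# Coordinates: f(x) = a + b*x + c*x^2  ->  (a, b, c).
# Assumed nonzero entries of T(f): f(1) - f(2) = -b - 3c and f(0) = a.
M = np.array([[0.0, -1.0, -3.0],    # row for f(1) - f(2)
              [1.0,  0.0,  0.0]])   # row for f(0)

rank = np.linalg.matrix_rank(M)
nullity = 3 - rank                   # dim of domain minus rank
assert (rank, nullity) == (2, 1)

# The polynomial -3x + x^2, with coordinates (0, -3, 1), spans the null space.
assert np.allclose(M @ np.array([0.0, -3.0, 1.0]), 0.0)
```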
In Theorem 2.3, we see that a similar result is true in general.
As in Chapter 1, we measure the “size” of a subspace by its dimension. The null space and range are so important that we attach special names to their respective dimensions.
Let $V$ and $W$ be vector spaces, and let $T\colon V \to W$ be linear. If $N(T)$ and $R(T)$ are finite-dimensional, then we define the nullity of $T$, denoted $\mathrm{nullity}(T)$, and the rank of $T$, denoted $\mathrm{rank}(T)$, to be the dimensions of $N(T)$ and $R(T)$, respectively.
Reflecting on the action of a linear transformation, we see intuitively that the larger the nullity, the smaller the rank. In other words, the more vectors that are carried into 0, the smaller the range. The same heuristic reasoning tells us that the larger the rank, the smaller the nullity. This balance between rank and nullity is made precise in the next theorem, appropriately called the dimension theorem.
Theorem 2.3 (Dimension Theorem). Let $V$ and $W$ be vector spaces, and let $T\colon V \to W$ be linear. If $V$ is finite-dimensional, then
$$\mathrm{nullity}(T) + \mathrm{rank}(T) = \dim(V).$$
Proof.
Suppose that $\dim(V) = n$, $\dim(N(T)) = k$, and $\{v_1, v_2, \ldots, v_k\}$ is a basis for $N(T)$. By the corollary to Theorem 1.11 (p. 51), we may extend $\{v_1, v_2, \ldots, v_k\}$ to a basis $\beta = \{v_1, v_2, \ldots, v_n\}$ for $V$. We claim that $S = \{T(v_{k+1}), T(v_{k+2}), \ldots, T(v_n)\}$ is a basis for $R(T)$.
First we prove that $S$ generates $R(T)$. Using Theorem 2.2 and the fact that $T(v_i) = 0$ for $1 \le i \le k$, we have
$$R(T) = \mathrm{span}(\{T(v_1), T(v_2), \ldots, T(v_n)\}) = \mathrm{span}(\{T(v_{k+1}), T(v_{k+2}), \ldots, T(v_n)\}) = \mathrm{span}(S).$$
Now we prove that $S$ is linearly independent. Suppose that
$$\sum_{i=k+1}^{n} b_i T(v_i) = 0 \quad\text{for } b_{k+1}, b_{k+2}, \ldots, b_n \in F.$$
Using the fact that $T$ is linear, we have
$$T\Big(\sum_{i=k+1}^{n} b_i v_i\Big) = 0.$$
So
$$\sum_{i=k+1}^{n} b_i v_i \in N(T).$$
Hence there exist $c_1, c_2, \ldots, c_k \in F$ such that
$$\sum_{i=k+1}^{n} b_i v_i = \sum_{i=1}^{k} c_i v_i, \quad\text{or}\quad \sum_{i=1}^{k} (-c_i) v_i + \sum_{i=k+1}^{n} b_i v_i = 0.$$
Since $\beta$ is a basis for $V$, we have $b_i = 0$ for all $i$. Hence $S$ is linearly independent. Notice that this argument also shows that $T(v_{k+1}), T(v_{k+2}), \ldots, T(v_n)$ are distinct; therefore $\mathrm{rank}(T) = n - k$.
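The dimension theorem can be illustrated numerically: for a matrix $A$ representing $T$, the rank is the number of nonzero singular values, and the nullity is the number of columns minus that count. A sketch for a randomly generated $4 \times 6$ matrix (the sizes are our choices):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((4, 7)) @ rng.standard_normal((7, 6))  # 4 x 6, so T: R^6 -> R^4

rank = np.linalg.matrix_rank(A)

# nullity = dim(V) minus the number of (numerically) nonzero singular values.
s = np.linalg.svd(A, compute_uv=False)
nullity = A.shape[1] - int(np.count_nonzero(s > 1e-10))

# nullity(T) + rank(T) = dim(V)
assert nullity + rank == A.shape[1]
```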
If we apply the dimension theorem to the linear transformation $T$ in Example 9, we have that $\mathrm{nullity}(T) + \mathrm{rank}(T) = 1 + 2 = 3 = \dim(R^3)$.
The reader should review the concepts of “one-to-one” and “onto” presented in Appendix B. Interestingly, for a linear transformation, both of these concepts are intimately connected to the rank and nullity of the transformation. This is demonstrated in the next two theorems.
Theorem 2.4. Let $V$ and $W$ be vector spaces, and let $T\colon V \to W$ be linear. Then $T$ is one-to-one if and only if $N(T) = \{0\}$.
Proof.
Suppose that $T$ is one-to-one and $x \in N(T)$. Then $T(x) = 0 = T(0)$. Since $T$ is one-to-one, we have $x = 0$. Hence $N(T) = \{0\}$.
Now assume that $N(T) = \{0\}$, and suppose that $T(x) = T(y)$. Then $0 = T(x) - T(y) = T(x - y)$ by property 3 on page 65. Therefore $x - y \in N(T) = \{0\}$. So $x - y = 0$, or $x = y$. This means that $T$ is one-to-one.
The reader should observe that Theorem 2.4 allows us to conclude that the transformation defined in Example 9 is not one-to-one.
Surprisingly, the conditions of one-to-one and onto are equivalent in an important special case.
Theorem 2.5. Let $V$ and $W$ be finite-dimensional vector spaces of equal dimension, and let $T\colon V \to W$ be linear. Then the following are equivalent.
(a) $T$ is one-to-one.
(b) $T$ is onto.
(c) $\mathrm{rank}(T) = \dim(V)$.
Proof.
From the dimension theorem, we have
$$\mathrm{nullity}(T) + \mathrm{rank}(T) = \dim(V).$$
Now, with the use of Theorem 2.4, we have that $T$ is one-to-one if and only if $N(T) = \{0\}$, if and only if $\mathrm{nullity}(T) = 0$, if and only if $\mathrm{rank}(T) = \dim(V)$, and if and only if $\mathrm{rank}(T) = \dim(W)$, since $\dim(V) = \dim(W)$. By Theorem 1.11 (p. 50), this equality is equivalent to $R(T) = W$, the definition of $T$ being onto.
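In matrix terms, a map represented by a matrix is one-to-one when its rank equals the number of columns and onto when its rank equals the number of rows; a numerical sketch of Theorem 2.5 (the helper names are ours):

```python
import numpy as np

def is_one_to_one(A):
    return np.linalg.matrix_rank(A) == A.shape[1]   # nullity(T) = 0

def is_onto(A):
    return np.linalg.matrix_rank(A) == A.shape[0]   # rank(T) = dim(W)

# Equal dimensions: the two conditions stand or fall together.
A = np.array([[1.0, 2.0], [3.0, 4.0]])              # rank 2
assert is_one_to_one(A) and is_onto(A)
B = np.array([[1.0, 2.0], [2.0, 4.0]])              # rank 1
assert (not is_one_to_one(B)) and (not is_onto(B))

# Unequal dimensions: onto without one-to-one is possible.
C = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])    # T: R^3 -> R^2
assert is_onto(C) and not is_one_to_one(C)
```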
We note that if $V$ is not finite-dimensional and $T\colon V \to V$ is linear, then it does not follow that one-to-one and onto are equivalent. (See Exercises 15, 16, and 21.)
The linearity of T in Theorems 2.4 and 2.5 is essential, for it is easy to construct examples of functions from R into R that are not one-to-one, but are onto, and vice versa.
The next two examples make use of the preceding theorems in determining whether a given linear transformation is one-to-one or onto.
Example 11. Let $T\colon P_2(R) \to P_3(R)$ be the linear transformation defined by
$$T(f(x)) = 2f'(x) + \int_0^x 3f(t)\,dt.$$
Now
$$R(T) = \mathrm{span}(\{T(1), T(x), T(x^2)\}) = \mathrm{span}(\{3x,\ 2 + \tfrac{3}{2}x^2,\ 4x + x^3\}).$$
Since $\{3x, 2 + \tfrac{3}{2}x^2, 4x + x^3\}$ is linearly independent, $\mathrm{rank}(T) = 3$. Since $\dim(P_3(R)) = 4$, $T$ is not onto. From the dimension theorem, $\mathrm{nullity}(T) + 3 = 3$. So $\mathrm{nullity}(T) = 0$, and therefore, $N(T) = \{0\}$. We conclude from Theorem 2.4 that $T$ is one-to-one.
Example 12. Let $T\colon F^2 \to F^2$ be the linear transformation defined by
$$T(a_1, a_2) = (a_1 + a_2, a_1).$$
It is easy to see that $N(T) = \{0\}$; so $T$ is one-to-one. Hence Theorem 2.5 tells us that $T$ must be onto.
In Exercise 14, it is stated that if T is linear and one-to-one, then a subset S is linearly independent if and only if T(S) is linearly independent. Example 13 illustrates the use of this result.
Example 13. Let $T\colon P_2(R) \to R^3$ be the linear transformation defined by
$$T(a_0 + a_1 x + a_2 x^2) = (a_0, a_1, a_2).$$
Clearly $T$ is linear and one-to-one. Let $S = \{2 - x + 3x^2,\ x + x^2,\ 1 - 2x^2\}$. Then $S$ is linearly independent in $P_2(R)$ because
$$T(S) = \{(2, -1, 3), (0, 1, 1), (1, 0, -2)\}$$
is linearly independent in $R^3$.
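The coordinate map of polynomials to tuples turns the independence question into a matrix-rank computation; a sketch, assuming the illustrative sample polynomials $2 - x + 3x^2$, $x + x^2$, and $1 - 2x^2$:

```python
import numpy as np

# Coordinate map T(a0 + a1*x + a2*x^2) = (a0, a1, a2); rows are the images T(S).
images = np.array([[2.0, -1.0,  3.0],
                   [0.0,  1.0,  1.0],
                   [1.0,  0.0, -2.0]])

# T(S) has full rank, hence is linearly independent in R^3; since the coordinate
# map is linear and one-to-one, the polynomials themselves are independent.
assert np.linalg.matrix_rank(images) == 3
```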
In Example 13, we transferred a property from the vector space of polynomials to a property in the vector space of 3-tuples. This technique is exploited more fully later.
One of the most important properties of a linear transformation is that it is completely determined by its action on a basis. This result, which follows from the next theorem and corollary, is used frequently throughout the book.
Theorem 2.6. Let $V$ and $W$ be vector spaces over $F$, and suppose that $\{v_1, v_2, \ldots, v_n\}$ is a basis for $V$. For $w_1, w_2, \ldots, w_n$ in $W$, there exists exactly one linear transformation $T\colon V \to W$ such that $T(v_i) = w_i$ for $i = 1, 2, \ldots, n$.
Proof.
Let $x \in V$. Then
$$x = \sum_{i=1}^{n} a_i v_i,$$
where $a_1, a_2, \ldots, a_n$ are unique scalars. Define
$$T(x) = \sum_{i=1}^{n} a_i w_i.$$
(a) $T$ is linear: Suppose that $u, v \in V$ and $d \in F$. Then we may write
$$u = \sum_{i=1}^{n} b_i v_i \quad\text{and}\quad v = \sum_{i=1}^{n} c_i v_i$$
for some scalars $b_1, \ldots, b_n, c_1, \ldots, c_n$. Thus
$$du + v = \sum_{i=1}^{n} (d b_i + c_i) v_i.$$
So
$$T(du + v) = \sum_{i=1}^{n} (d b_i + c_i) w_i = d \sum_{i=1}^{n} b_i w_i + \sum_{i=1}^{n} c_i w_i = dT(u) + T(v).$$
(b) Clearly
$$T(v_i) = w_i \quad\text{for } i = 1, 2, \ldots, n.$$
(c) $T$ is unique: Suppose that $U\colon V \to W$ is linear and $U(v_i) = w_i$ for $i = 1, 2, \ldots, n$. Then for $x \in V$ with
$$x = \sum_{i=1}^{n} a_i v_i,$$
we have
$$U(x) = \sum_{i=1}^{n} a_i U(v_i) = \sum_{i=1}^{n} a_i w_i = T(x).$$
Hence $U = T$.
Corollary. Let $V$ and $W$ be vector spaces, and suppose that $V$ has a finite basis $\{v_1, v_2, \ldots, v_n\}$. If $U, T\colon V \to W$ are linear and $U(v_i) = T(v_i)$ for $i = 1, 2, \ldots, n$, then $U = T$.
Example 14. Let $T\colon R^2 \to R^2$ be the linear transformation defined by
$$T(a_1, a_2) = (2a_2 - a_1, 3a_1),$$
and suppose that $U\colon R^2 \to R^2$ is linear. If we know that $U(1, 2) = (3, 3)$ and $U(1, 1) = (1, 3)$, then $U = T$. This follows from the corollary and from the fact that $\{(1, 2), (1, 1)\}$ is a basis for $R^2$.
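Theorem 2.6 and its corollary can be illustrated numerically: prescribing the images of a basis determines a unique matrix. The sketch below uses the basis $\{(1,2),(1,1)\}$ with images $(3,3)$ and $(1,3)$ (values we assume for illustration) and recovers the corresponding matrix:

```python
import numpy as np

# A basis for R^2 and prescribed images (assumed illustrative values).
v1, v2 = np.array([1.0, 2.0]), np.array([1.0, 1.0])
w1, w2 = np.array([3.0, 3.0]), np.array([1.0, 3.0])

# Solve for the unique matrix U with U @ v_i = w_i: columns stacked, U = W V^{-1}.
V = np.column_stack([v1, v2])
W = np.column_stack([w1, w2])
U = W @ np.linalg.inv(V)

assert np.allclose(U @ v1, w1) and np.allclose(U @ v2, w2)
# U agrees with the map (a1, a2) -> (2*a2 - a1, 3*a1), whose matrix is [[-1, 2], [3, 0]].
assert np.allclose(U, np.array([[-1.0, 2.0], [3.0, 0.0]]))
```

Invertibility of `V` is exactly the statement that the prescribed vectors form a basis; with a non-basis the matrix `U` would either fail to exist or fail to be unique.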
Label the following statements as true or false. In each part, V and W are finite-dimensional vector spaces (over F), and T is a function from V to W.
(a) If T is linear, then T preserves sums and scalar products.
(b) If , then T is linear.
(c) $T$ is one-to-one if and only if the only vector $x$ such that $T(x) = 0$ is $x = 0$.
(d) If $T$ is linear, then $T(0_V) = 0_W$.
(e) If T is linear, then .
(f) If T is linear, then T carries linearly independent subsets of V onto linearly independent subsets of W.
(g) If T, are both linear and agree on a basis for V, then .
(h) Given and , there exists a linear transformation such that and .
For Exercises 2 through 6, prove that T is a linear transformation, and find bases for both N(T) and R(T). Then compute the nullity and rank of T, and verify the dimension theorem. Finally, use the appropriate theorems in this section to determine whether T is one-to-one or onto.
$T\colon R^3 \to R^2$ defined by $T(a_1, a_2, a_3) = (a_1 - a_2, 2a_3)$.
$T\colon R^2 \to R^3$ defined by $T(a_1, a_2) = (a_1 + a_2, 0, 2a_1 - a_2)$.
defined by
$T\colon P_2(R) \to P_3(R)$ defined by $T(f(x)) = x f(x) + f'(x)$.
$T\colon M_{n\times n}(F) \to F$ defined by $T(A) = \mathrm{tr}(A)$. Recall (Example 4, Section 1.3) that $\mathrm{tr}(A) = \sum_{i=1}^{n} A_{ii}$.
Prove properties 1, 2, 3, and 4 on page 65.
Prove that the transformations in Examples 2 and 3 are linear.
In this exercise, $T\colon R^2 \to R^2$ is a function. For each of the following parts, state why T is not linear.
(a)
(b)
(c)
(d)
(e)
Suppose that $T\colon R^2 \to R^2$ is linear, $T(1, 0) = (1, 4)$, and $T(1, 1) = (2, 5)$. What is $T(2, 3)$? Is $T$ one-to-one?
Prove that there exists a linear transformation $T\colon R^2 \to R^3$ such that $T(1, 1) = (1, 0, 2)$ and $T(2, 3) = (1, -1, 4)$. What is $T(8, 11)$?
Is there a linear transformation $T\colon R^3 \to R^2$ such that $T(1, 0, 3) = (1, 1)$ and $T(-2, 0, -6) = (2, 1)$?
Let $V$ and $W$ be vector spaces, let $T\colon V \to W$ be linear, and let $\{w_1, w_2, \ldots, w_k\}$ be a linearly independent set of $k$ vectors from $R(T)$. Prove that if $S = \{v_1, v_2, \ldots, v_k\}$ is chosen so that $T(v_i) = w_i$ for $i = 1, 2, \ldots, k$, then $S$ is linearly independent.
Let $V$ and $W$ be vector spaces and $T\colon V \to W$ be linear.
(a) Prove that T is one-to-one if and only if T carries linearly independent subsets of V onto linearly independent subsets of W.
(b) Suppose that T is one-to-one and that S is a subset of V. Prove that S is linearly independent if and only if T(S) is linearly independent.
(c) Suppose $\beta$ is a basis for $V$ and $T$ is one-to-one and onto. Prove that $T(\beta)$ is a basis for $W$.
Let $T\colon P(R) \to P(R)$ be defined by $T(f(x)) = f'(x)$. Recall that $T$ is linear. Prove that $T$ is onto, but not one-to-one.
Let $V$ and $W$ be finite-dimensional vector spaces and $T\colon V \to W$ be linear.
(a) Prove that if $\dim(V) < \dim(W)$, then $T$ cannot be onto.
(b) Prove that if $\dim(V) > \dim(W)$, then $T$ cannot be one-to-one.
Give an example of a linear transformation $T\colon R^2 \to R^2$ such that $N(T) = R(T)$.
Give an example of vector spaces V and W and distinct linear transformations T and U from V to W such that $N(T) = N(U)$ and $R(T) = R(U)$.
Let $V$ and $W$ be vector spaces with subspaces $V_1$ and $W_1$, respectively. If $T\colon V \to W$ is linear, prove that $T(V_1)$ is a subspace of $W$ and that $\{x \in V : T(x) \in W_1\}$ is a subspace of $V$.
Let $V$ be the vector space of sequences described in Example 5 of Section 1.2. Define the functions $T, U\colon V \to V$ by
$$T(a_1, a_2, \ldots) = (a_2, a_3, \ldots) \quad\text{and}\quad U(a_1, a_2, \ldots) = (0, a_1, a_2, \ldots).$$
$T$ and $U$ are called the left shift and right shift operators on $V$, respectively.
(a) Prove that T and U are linear.
(b) Prove that T is onto, but not one-to-one.
(c) Prove that U is one-to-one, but not onto.
Let $T\colon R^3 \to R$ be linear. Show that there exist scalars $a$, $b$, and $c$ such that $T(x, y, z) = ax + by + cz$ for all $(x, y, z) \in R^3$. Can you generalize this result for $T\colon F^n \to F$? State and prove an analogous result for $T\colon F^n \to F^m$.
Let $T\colon R^3 \to R$ be linear. Describe geometrically the possibilities for the null space of T. Hint: Use Exercise 22.
Let be linear, , and be nonempty. Prove that if , then . (See page 22 for the definition of the sum of subsets.)
The following definition is used in Exercises 25–28 and in Exercise 31.
Let $V$ be a vector space and $W_1$ and $W_2$ be subspaces of $V$ such that $V = W_1 \oplus W_2$. (Recall the definition of direct sum given on page 22.) The function $T\colon V \to V$ defined by $T(x) = x_1$, where $x = x_1 + x_2$ with $x_1 \in W_1$ and $x_2 \in W_2$, is called the projection of $V$ on $W_1$, or the projection on $W_1$ along $W_2$.
Let $T\colon R^2 \to R^2$. Include figures for each of the following parts.
(a) Find a formula for T(a, b), where T represents the projection on the y-axis along the x-axis.
(b) Find a formula for T(a, b), where T represents the projection on the y-axis along the line .
Let $T\colon R^3 \to R^3$.
(a) If $T(a, b, c) = (a, b, 0)$, show that T is the projection on the xy-plane along the z-axis.
(b) Find a formula for T(a, b, c), where T represents the projection on the z-axis along the xy-plane.
(c) If , show that T is the projection on the xy-plane along the line .
Using the notation in the definition above, assume that $T\colon V \to V$ is the projection on $W_1$ along $W_2$.
(a) Prove that T is linear and $W_1 = \{x \in V : T(x) = x\}$.
(b) Prove that $R(T) = W_1$ and $N(T) = W_2$.
(c) Describe T if $W_1 = V$.
(d) Describe T if $W_1$ is the zero subspace.
Suppose that W is a subspace of a finite-dimensional vector space V.
(a) Prove that there exists a subspace $W'$ of $V$ and a function $T\colon V \to V$ such that T is a projection on W along $W'$.
(b) Give an example of a subspace W of a vector space V such that there are two projections on W along two (distinct) subspaces.
The following definitions are used in Exercises 29–33.
Let $V$ be a vector space, and let $T\colon V \to V$ be linear. A subspace $W$ of $V$ is said to be $T$-invariant if $T(x) \in W$ for every $x \in W$, that is, $T(W) \subseteq W$. If $W$ is $T$-invariant, we define the restriction of $T$ on $W$ to be the function $T_W\colon W \to W$ defined by $T_W(x) = T(x)$ for all $x \in W$.
Exercises 29–33 assume that W is a subspace of a vector space V and that $T\colon V \to V$ is linear. Warning: Do not assume that W is T-invariant or that T is a projection unless explicitly stated.
Prove that the subspaces {0}, V, R(T), and N(T) are all T-invariant.
If W is T-invariant, prove that $T_W$ is linear.
Suppose that T is the projection on W along some subspace $W'$. Prove that W is T-invariant and that $T_W = I_W$.
Suppose that $V = R(T) \oplus W$ and W is T-invariant. See page 22 for the definition of direct sum.
(a) Prove that $W \subseteq N(T)$.
(b) Show that if V is finite-dimensional, then $W = N(T)$.
(c) Show by example that the conclusion of (b) is not necessarily true if V is not finite-dimensional.
Suppose that W is T-invariant. Prove that $N(T_W) = N(T) \cap W$ and $R(T_W) = T(W)$.
Prove Theorem 2.2 for the case that β is infinite, that is, $R(T) = \mathrm{span}(\{T(v) : v \in \beta\})$.
Prove the following generalization of Theorem 2.6: Let $V$ and $W$ be vector spaces over a common field, and let $\beta$ be a basis for $V$. Then for any function $f\colon \beta \to W$ there exists exactly one linear transformation $T\colon V \to W$ such that $T(x) = f(x)$ for all $x \in \beta$.
Exercises 36 and 37 require the definition of direct sum given on page 22.
Let $V$ be a finite-dimensional vector space and $T\colon V \to V$ be linear.
(a) Suppose that $V = R(T) + N(T)$. Prove that $V = R(T) \oplus N(T)$.
(b) Suppose that $R(T) \cap N(T) = \{0\}$. Prove that $V = R(T) \oplus N(T)$.
Be careful to say in each part where finite-dimensionality is used.
Let V and T be as defined in Exercise 21.
(a) Prove that $V = R(T) + N(T)$, but V is not a direct sum of these two spaces. Thus the result of Exercise 36(a) above cannot be proved without assuming that V is finite-dimensional.
(b) Find a linear operator $T_1$ on $V$ such that $R(T_1) \cap N(T_1) = \{0\}$ but V is not a direct sum of $R(T_1)$ and $N(T_1)$. Conclude that V being finite-dimensional is also essential in Exercise 36(b).
A function $T\colon V \to W$ between vector spaces $V$ and $W$ is called additive if $T(x + y) = T(x) + T(y)$ for all $x, y \in V$. Prove that if V and W are vector spaces over the field of rational numbers, then any additive function from V into W is a linear transformation.
Let $T\colon C \to C$ be the function defined by $T(z) = \bar{z}$, the complex conjugate of $z$. Prove that T is additive (as defined in Exercise 38) but not linear.
Prove that there is an additive function $T\colon R \to R$ (as defined in Exercise 38) that is not linear. Hint: Let $V$ be the set of real numbers regarded as a vector space over the field of rational numbers. By the corollary to Theorem 1.13 (p. 61), $V$ has a basis $\beta$. Let $x$ and $y$ be two distinct vectors in $\beta$, and define $f\colon \beta \to V$ by $f(x) = y$, $f(y) = x$, and $f(z) = z$ otherwise. By Exercise 35, there exists a linear transformation $T\colon V \to V$ such that $T(u) = f(u)$ for all $u \in \beta$. Then $T$ is additive, but $T(cx) \ne cT(x)$ for $c = y/x$.
Prove that Theorem 2.6 and its corollary are true when V is infinite-dimensional.
The following exercise requires familiarity with the definition of quotient space given in Exercise 31 of Section 1.3.
Let $V$ be a vector space and $W$ be a subspace of $V$. Define the mapping $\eta\colon V \to V/W$ by $\eta(v) = v + W$ for $v \in V$.
(a) Prove that $\eta$ is a linear transformation from $V$ onto $V/W$ and that $N(\eta) = W$.
(b) Suppose that V is finite-dimensional. Use (a) and the dimension theorem to derive a formula relating dim(V), dim(W), and dim(V/W).
(c) Read the proof of the dimension theorem. Compare the method of solving (b) with the method of deriving the same result as outlined in Exercise 35 of Section 1.6.