In this section, we rely heavily on Theorems 6.16 (p. 369) and 6.17 (p. 371) to develop an elegant representation of a normal (if F = C) or a self-adjoint (if F = R) operator T on a finite-dimensional inner product space. We prove that T can be written in the form λ1T1 + λ2T2 + ⋯ + λkTk, where λ1, λ2, …, λk are the distinct eigenvalues of T and T1, T2, …, Tk are orthogonal projections. We must first develop some results about these special projections.
We assume that the reader is familiar with the results about direct sums developed at the end of Section 5.2. The special case where V is a direct sum of two subspaces is considered in the exercises of Section 1.3.
Recall from the exercises of Section 2.1 that if V = W1 ⊕ W2, then a linear operator T on V is the projection on W1 along W2 if, whenever x = x1 + x2 with x1 ∈ W1 and x2 ∈ W2, we have T(x) = x1. In this case,

R(T) = W1 = {x ∈ V: T(x) = x}  and  N(T) = W2.

So V = R(T) ⊕ N(T). Thus a projection is completely determined by its range together with its null space.
Definition. Let V be an inner product space, and let T: V → V be a projection. We say that T is an orthogonal projection if R(T)⊥ = N(T) and N(T)⊥ = R(T).
Note that by Exercise 13(c) of Section 6.2, if V is finite-dimensional, we need only assume that one of the equalities in this definition holds. For example, if R(T)⊥ = N(T), then R(T) = R(T)⊥⊥ = N(T)⊥.
An orthogonal projection is not the same as an orthogonal operator. In Figure 6.5, T is an orthogonal projection, but T is clearly not an orthogonal operator because ||T(v)|| ≠ ||v|| for some nonzero v ∈ V.
Now assume that W is a finite-dimensional subspace of an inner product space V. In the notation of Theorem 6.6 (p. 347), we can define a function T: V → V by T(x) = u, where x = u + z with u ∈ W and z ∈ W⊥. It is easy to show that T is an orthogonal projection on W. We call T the orthogonal projection of V on W.
To understand the geometric difference between an arbitrary projection on W and the orthogonal projection on W, let V = R2 and let W be a one-dimensional subspace of V. An arbitrary projection on W maps each vector into W along some fixed complementary direction, whereas the orthogonal projection on W maps each vector v to the foot of the perpendicular from v to W, that is, to the vector in W closest to v (see Figure 6.5).
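This geometric contrast can be checked numerically. In the sketch below, W = span{(1, 1)} and the oblique projection along the y-axis are illustrative choices, not taken from the text; both maps are projections on W, but only the orthogonal one yields the closest vector of W.

```python
import numpy as np

# Orthogonal projection of R^2 on W = span{(1, 1)}:
# P = w w^T / (w^T w) sends v to the foot of the perpendicular from v to W.
w = np.array([1.0, 1.0])
P_orth = np.outer(w, w) / w.dot(w)

# An oblique projection on the same W, along span{(0, 1)}:
# it fixes W and kills (0, 1), so U(a1, a2) = (a1, a1).
P_obl = np.array([[1.0, 0.0],
                  [1.0, 0.0]])

v = np.array([3.0, 1.0])

# Both operators are projections on W (idempotent and fixing W pointwise) ...
assert np.allclose(P_orth @ P_orth, P_orth)
assert np.allclose(P_obl @ P_obl, P_obl)
assert np.allclose(P_orth @ w, w) and np.allclose(P_obl @ w, w)

# ... but only the orthogonal projection gives the closest point of W to v.
dist_orth = np.linalg.norm(v - P_orth @ v)
dist_obl = np.linalg.norm(v - P_obl @ v)
assert dist_orth <= dist_obl
```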
As an application to Fourier analysis, recall the inner product space H and the orthonormal set S in Example 9 of Section 6.1. Define a trigonometric polynomial of degree n to be a function g ∈ H of the form

g = Σ (from j = −n to n) ajfj, where fj(t) = e^{ijt},

where an ≠ 0 or a−n ≠ 0.
Let f ∈ H, let W denote the span of {fj: −n ≤ j ≤ n}, and let T be the orthogonal projection of H on W. Then

T(f) = Σ (from j = −n to n) ⟨f, fj⟩ fj,

and by the corollary to Theorem 6.6 (p. 347), T(f) is the trigonometric polynomial of degree at most n that is closest to f. The scalars ⟨f, fj⟩ are the Fourier coefficients of f relative to S.
For an application of this material to electronic music, visit goo.gl/
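A numerical sketch of this best-approximation property (the function f, the degree n, and the discretization of the inner product on H are illustrative assumptions): the projection with Fourier coefficients ⟨f, fj⟩ is closer to f than a perturbed trigonometric polynomial of the same degree.

```python
import numpy as np

# Discretize [0, 2*pi) and realize the inner product of H,
# <f, g> = (1/(2*pi)) * integral of f(t) * conj(g(t)) dt, as a Riemann sum.
N = 4096
t = np.linspace(0.0, 2 * np.pi, N, endpoint=False)
inner = lambda f, g: np.mean(f * np.conj(g))

f = t * (2 * np.pi - t)          # an illustrative function in H
n = 3                            # degree bound for the approximation
basis = [np.exp(1j * j * t) for j in range(-n, n + 1)]   # the f_j, |j| <= n

# Orthogonal projection of f on W = span{f_j : |j| <= n}:
# its coefficients are exactly the Fourier coefficients <f, f_j>.
proj = sum(inner(f, fj) * fj for fj in basis)

# Any other trigonometric polynomial of degree <= n lies farther from f.
other = proj + 0.1 * basis[0]
err_proj = np.sqrt(inner(f - proj, f - proj).real)
err_other = np.sqrt(inner(f - other, f - other).real)
assert err_proj < err_other
```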
An algebraic characterization of orthogonal projections follows in the next theorem.
Let V be an inner product space, and let T be a linear operator on V. Then T is an orthogonal projection if and only if T has an adjoint T* and T^2 = T = T*.
Suppose that T is an orthogonal projection. Since T^2 = T, we need only show that T* exists and T = T*. Now V = R(T) ⊕ N(T) and R(T)⊥ = N(T). Let x, y ∈ V. Write x = x1 + x2 and y = y1 + y2, where x1, y1 ∈ R(T) and x2, y2 ∈ N(T). Then

⟨x, T(y)⟩ = ⟨x1 + x2, y1⟩ = ⟨x1, y1⟩

and

⟨T(x), y⟩ = ⟨x1, y1 + y2⟩ = ⟨x1, y1⟩.

So ⟨x, T(y)⟩ = ⟨T(x), y⟩ for all x, y ∈ V; thus T* exists and T = T*.
Now suppose that T^2 = T = T*. Then T is a projection, and hence we must show that R(T) = N(T)⊥ and R(T)⊥ = N(T). Let x ∈ R(T) and y ∈ N(T). Then x = T(x) = T*(x), and so

⟨x, y⟩ = ⟨T*(x), y⟩ = ⟨x, T(y)⟩ = ⟨x, 0⟩ = 0.

Therefore x ∈ N(T)⊥, from which it follows that R(T) ⊆ N(T)⊥.

Let y ∈ N(T)⊥. We must show that y ∈ R(T), that is, T(y) = y. Note that T(y − T(y)) = T(y) − T^2(y) = T(y) − T(y) = 0, so y − T(y) ∈ N(T). Since y ∈ N(T)⊥, we have ⟨y − T(y), y⟩ = 0; and since T(y) ∈ R(T) ⊆ N(T)⊥, we also have ⟨y − T(y), T(y)⟩ = 0. Hence

⟨y − T(y), y − T(y)⟩ = ⟨y − T(y), y⟩ − ⟨y − T(y), T(y)⟩ = 0.

Thus y − T(y) = 0; that is, y = T(y) ∈ R(T). Hence N(T)⊥ ⊆ R(T), and so R(T) = N(T)⊥.

Using the preceding results, we have R(T)⊥ = N(T)⊥⊥ ⊇ N(T). Now suppose that x ∈ R(T)⊥. For any y ∈ V, we have ⟨T(x), y⟩ = ⟨x, T*(y)⟩ = ⟨x, T(y)⟩ = 0 because T(y) ∈ R(T). So T(x) = 0, and thus x ∈ N(T). Hence R(T)⊥ = N(T), and the proof is complete.
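The characterization T^2 = T = T* is easy to test numerically. In this sketch (the matrices are illustrative choices), the orthogonal projection of R^3 on span{(1, 0, 1)} satisfies both conditions, while an oblique projection is idempotent but not self-adjoint, so by the theorem it cannot be an orthogonal projection.

```python
import numpy as np

# Orthogonal projection of R^3 on W = span{(1, 0, 1)}: with the standard
# inner product, the adjoint of a real matrix is its transpose.
w = np.array([1.0, 0.0, 1.0])
P = np.outer(w, w) / w.dot(w)
assert np.allclose(P @ P, P)     # P^2 = P
assert np.allclose(P, P.T)       # P = P*

# An oblique projection on span{e1, e3} along span{(1, -1, 0)}:
# idempotent, but not symmetric, hence not an orthogonal projection.
Q = np.array([[1.0, 1.0, 0.0],
              [0.0, 0.0, 0.0],
              [0.0, 0.0, 1.0]])
assert np.allclose(Q @ Q, Q)
assert not np.allclose(Q, Q.T)
```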
Let V be a finite-dimensional inner product space, W be a subspace of V, and T be the orthogonal projection of V on W. We may choose an orthonormal basis β = {v1, v2, …, vn} for V such that {v1, v2, …, vk} is a basis for W. Since T(vi) = vi for 1 ≤ i ≤ k and T(vi) = 0 for k < i ≤ n, the matrix [T]β has the block form

[T]β = ( Ik  O )
       ( O   O ).

If U is any projection on W, we may choose a basis γ for V such that [U]γ has the form above; however, γ is not necessarily orthonormal.
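As a numerical sketch of this matrix form (with W = span{(1, 0, 1)} as an illustrative choice), conjugating the orthogonal projection by an orthonormal basis whose first vector spans W produces a diagonal matrix of ones and zeros.

```python
import numpy as np

# Orthogonal projection of R^3 on W = span{(1, 0, 1)}.
w = np.array([1.0, 0.0, 1.0]) / np.sqrt(2.0)
P = np.outer(w, w)

# Complete w to an orthonormal basis of R^3 using QR factorization;
# the columns of B form the basis, the first spanning W (up to sign).
M = np.column_stack([w, [0.0, 1.0, 0.0], [1.0, 0.0, 0.0]])
B, _ = np.linalg.qr(M)

# In this basis the matrix of the projection is diag(1, 0, 0).
assert np.allclose(B.T @ P @ B, np.diag([1.0, 0.0, 0.0]))
```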
We are now ready for the principal theorem of this section.
Suppose that T is a linear operator on a finite-dimensional inner product space V over F with the distinct eigenvalues λ1, λ2, …, λk. Assume that T is normal if F = C and that T is self-adjoint if F = R. For each i (1 ≤ i ≤ k), let Wi be the eigenspace of T corresponding to the eigenvalue λi, and let Ti be the orthogonal projection of V on Wi. Then the following statements are true.
(a) V=W1⊕W2⊕⋯⊕Wk.
(b) If W′i denotes the direct sum of the subspaces Wj for j ≠ i, then Wi⊥ = W′i.
(c) TiTj = δijTi for 1 ≤ i, j ≤ k.
(d) I=T1+T2+⋯+Tk.
(e) T=λ1T1+λ2T2+⋯+λkTk.
(a) By Theorems 6.16 (p. 369) and 6.17 (p. 371), T is diagonalizable; so

V = W1 ⊕ W2 ⊕ ⋯ ⊕ Wk

by Theorem 5.10 (p. 277).
(b) If x ∈ Wi and y ∈ Wj for some i ≠ j, then ⟨x, y⟩ = 0 by Theorem 6.15(d) (p. 368). It follows easily from this result that W′i ⊆ Wi⊥. From (a), we have

dim(W′i) = Σ (j ≠ i) dim(Wj) = dim(V) − dim(Wi).

On the other hand, we have dim(Wi⊥) = dim(V) − dim(Wi). Hence dim(W′i) = dim(Wi⊥), and we conclude that W′i = Wi⊥.
(c) The proof of (c) is left as an exercise.
(d) Since Ti is the orthogonal projection of V on Wi, it follows from (b) that N(Ti) = R(Ti)⊥ = Wi⊥ = W′i. Hence, for x ∈ V, write x = x1 + x2 + ⋯ + xk, where xi ∈ Wi. Then Ti(x) = xi for each i, and so

(T1 + T2 + ⋯ + Tk)(x) = x1 + x2 + ⋯ + xk = x.

Thus I = T1 + T2 + ⋯ + Tk.
(e) For x ∈ V, write x = x1 + x2 + ⋯ + xk, with xi ∈ Wi. Then

T(x) = T(x1) + T(x2) + ⋯ + T(xk) = λ1x1 + λ2x2 + ⋯ + λkxk
     = λ1T1(x) + λ2T2(x) + ⋯ + λkTk(x) = (λ1T1 + λ2T2 + ⋯ + λkTk)(x).

Hence T = λ1T1 + λ2T2 + ⋯ + λkTk.
The set {λ1, λ2, …, λk} of eigenvalues of T is called the spectrum of T, the sum I = T1 + T2 + ⋯ + Tk in (d) is called the resolution of the identity operator induced by T, and the sum T = λ1T1 + λ2T2 + ⋯ + λkTk in (e) is called the spectral decomposition of T. The spectral decomposition of T is unique up to the order of its eigenvalues.
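Parts (c), (d), and (e) of the spectral theorem can be verified numerically for a concrete self-adjoint operator; the matrix below is an illustrative choice, not taken from the text.

```python
import numpy as np

# A self-adjoint operator on R^3 with distinct eigenvalues 1, 3, 5.
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 2.0, 0.0],
              [0.0, 0.0, 5.0]])

evals, V = np.linalg.eigh(A)            # orthonormal eigenvector columns
distinct = np.unique(np.round(evals, 8))

# T_i = orthogonal projection of R^3 on the eigenspace W_i.
projs = []
for lam in distinct:
    cols = V[:, np.isclose(evals, lam)]
    projs.append(cols @ cols.T)

# (c) T_i T_j = delta_ij T_i
for i, Ti in enumerate(projs):
    for j, Tj in enumerate(projs):
        expected = Ti if i == j else np.zeros_like(Ti)
        assert np.allclose(Ti @ Tj, expected)

# (d) resolution of the identity and (e) spectral decomposition
assert np.allclose(sum(projs), np.eye(3))
assert np.allclose(sum(l * P for l, P in zip(distinct, projs)), A)
```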
With the preceding notation, let β be the union of orthonormal bases of the Wi's, and let mi = dim(Wi). (Thus mi is the multiplicity of λi.) Then [T]β has the form

( λ1Im1    O     ⋯     O   )
(   O    λ2Im2   ⋯     O   )
(   ⋮      ⋮            ⋮   )
(   O      O     ⋯   λkImk );

that is, [T]β is a diagonal matrix in which the diagonal entries are the eigenvalues λi, each repeated mi times.
We now list several interesting corollaries of the spectral theorem; many more results are found in the exercises. For what follows, we assume that T is a linear operator on a finite-dimensional inner product space V over F.
If F = C, then T is normal if and only if T* = g(T) for some polynomial g.
Suppose first that T is normal. Let T = λ1T1 + λ2T2 + ⋯ + λkTk be the spectral decomposition of T. Taking the adjoint of both sides and using the fact that each Ti is self-adjoint, we have T* = λ̄1T1 + λ̄2T2 + ⋯ + λ̄kTk. Using the Lagrange interpolation formula (see p. 52), we may choose a polynomial g such that g(λi) = λ̄i for 1 ≤ i ≤ k. Then

g(T) = g(λ1)T1 + g(λ2)T2 + ⋯ + g(λk)Tk = λ̄1T1 + λ̄2T2 + ⋯ + λ̄kTk = T*.
Conversely, if T* = g(T) for some polynomial g, then T commutes with T*, since T commutes with every polynomial in T. So T is normal.
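A sketch of the interpolation argument in Corollary 1, applied to a small normal matrix chosen for illustration: interpolating g(λi) = λ̄i at the eigenvalues of T makes g(T) = T*. Here the eigenvalues i and −i force g(z) = −z.

```python
import numpy as np

# An illustrative normal (but not self-adjoint) operator on C^2,
# with eigenvalues i and -i.
A = np.array([[0.0, -1.0],
              [1.0,  0.0]], dtype=complex)
assert np.allclose(A @ A.conj().T, A.conj().T @ A)   # A is normal

def poly_at_matrix(A, nodes, values):
    # Evaluate at A the Lagrange polynomial interpolating nodes -> values.
    n = A.shape[0]
    total = np.zeros_like(A)
    for i, (xi, yi) in enumerate(zip(nodes, values)):
        term = yi * np.eye(n, dtype=complex)
        for j, xj in enumerate(nodes):
            if j != i:
                term = term @ (A - xj * np.eye(n)) / (xi - xj)
        total = total + term
    return total

evals = np.array([1j, -1j])
gA = poly_at_matrix(A, evals, np.conj(evals))   # g with g(lam) = conj(lam)
assert np.allclose(gA, A.conj().T)              # g(T) = T*
```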
If F = C, then T is unitary if and only if T is normal and |λ| = 1 for every eigenvalue λ of T.
If T is unitary, then T is normal and every eigenvalue of T has absolute value 1 by Corollary 2 to Theorem 6.18 (p. 379).
Now let T = λ1T1 + λ2T2 + ⋯ + λkTk be the spectral decomposition of T, and suppose that T is normal and that every eigenvalue of T has absolute value 1. Then by (c) and the fact that each Ti is self-adjoint,

TT* = (λ1T1 + λ2T2 + ⋯ + λkTk)(λ̄1T1 + λ̄2T2 + ⋯ + λ̄kTk)
    = |λ1|^2 T1 + |λ2|^2 T2 + ⋯ + |λk|^2 Tk = T1 + T2 + ⋯ + Tk = I.
Hence T is unitary.
If F = C and T is normal, then T is self-adjoint if and only if every eigenvalue of T is real.
Suppose that T is normal and that its eigenvalues are real. Let T = λ1T1 + λ2T2 + ⋯ + λkTk be the spectral decomposition of T. Then

T* = λ̄1T1 + λ̄2T2 + ⋯ + λ̄kTk = λ1T1 + λ2T2 + ⋯ + λkTk = T.
Hence T is self-adjoint.
Now suppose that T is self-adjoint and hence normal. That its eigenvalues are real has been proved in the lemma to Theorem 6.17 (p. 371).
Let T be as in the spectral theorem with spectral decomposition T = λ1T1 + λ2T2 + ⋯ + λkTk. Then each Tj is a polynomial in T.
Choose a polynomial gj (1 ≤ j ≤ k) such that gj(λi) = δij. Then

gj(T) = gj(λ1)T1 + gj(λ2)T2 + ⋯ + gj(λk)Tk = δ1jT1 + δ2jT2 + ⋯ + δkjTk = Tj.
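Corollary 4 can also be sketched numerically (the matrix is an illustrative choice): for a self-adjoint operator with eigenvalues 1 and 3, the polynomial g1(t) = (t − 3)/(1 − 3) satisfies g1(1) = 1 and g1(3) = 0, so g1(T) reproduces the eigenprojection T1.

```python
import numpy as np

# Self-adjoint A with eigenvalues 1 and 3.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
evals, V = np.linalg.eigh(A)       # evals = [1, 3], orthonormal eigenvectors

# g_1 evaluated at A: (A - 3I)/(1 - 3), interpolating g_1(1)=1, g_1(3)=0.
T1 = (A - evals[1] * np.eye(2)) / (evals[0] - evals[1])

# T1 equals the orthogonal projection on the eigenspace for eigenvalue 1.
v1 = V[:, [0]]
assert np.allclose(T1, v1 @ v1.T)
```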
Label the following statements as true or false. Assume that the underlying inner product spaces are finite-dimensional.
(a) All projections are self-adjoint.
(b) An orthogonal projection is uniquely determined by its range.
(c) Every self-adjoint operator is a linear combination of orthogonal projections.
(d) If T is a projection on W, then T(x) is the vector in W that is closest to x.
(e) Every orthogonal projection is a unitary operator.
Let V = R2, W = span({(1, 2)}), and β be the standard ordered basis for V. Compute [T]β, where T is the orthogonal projection of V on W. Do the same for V = R3 and W = span({(1, 0, 1)}).
For each of the matrices A in Exercise 2 of Section 6.5:
(1) Verify that LA possesses a spectral decomposition.
(2) For each eigenvalue of LA, explicitly define the orthogonal projection on the corresponding eigenspace.
(3) Verify your results using the spectral theorem.
Let W be a finite-dimensional subspace of an inner product space V. Show that if T is the orthogonal projection of V on W, then I − T is the orthogonal projection of V on W⊥.
Let T be a linear operator on a finite-dimensional inner product space V.
(a) If T is an orthogonal projection, prove that ||T(x)|| ≤ ||x|| for all x ∈ V. Give an example of a projection for which this inequality does not hold.
(b) Suppose that T is a projection such that ||T(x)|| ≤ ||x|| for all x ∈ V. Prove that T is an orthogonal projection.
Let T be a normal operator on a finite-dimensional inner product space. Prove that if T is a projection, then T is also an orthogonal projection.
Let T be a normal operator on a finite-dimensional complex inner product space V. Use the spectral decomposition λ1T1 + λ2T2 + ⋯ + λkTk of T to prove the following results.
(a) If g is a polynomial, then

g(T) = g(λ1)T1 + g(λ2)T2 + ⋯ + g(λk)Tk.
(b) If T^n = T0 for some n, then T = T0.
(c) Let U be a linear operator on V. Then U commutes with T if and only if U commutes with each Ti (1 ≤ i ≤ k).
(d) There exists a normal operator U on V such that U^2 = T.
(e) T is invertible if and only if λi ≠ 0 for 1 ≤ i ≤ k.
(f) T is a projection if and only if every eigenvalue of T is 1 or 0.
(g) T = −T* if and only if every λi is an imaginary number.
Use Corollary 1 of the spectral theorem to show that if T is a normal operator on a complex finite-dimensional inner product space and U is a linear operator that commutes with T, then U commutes with T*.
Referring to Exercise 20 of Section 6.5, prove the following facts about a partial isometry U.
(a) U*U is an orthogonal projection on W.
(b) UU*U=U.
Simultaneous diagonalization. Let U and T be normal operators on a finite-dimensional complex inner product space V such that TU = UT. Prove that there exists an orthonormal basis for V consisting of vectors that are eigenvectors of both T and U.
Prove (c) of the spectral theorem. Visit goo.gl/