# Iteration Method

**Topics:** Matrix, Linear algebra, Spectral theorem

**Pages:** 25 (10,513 words)

**Published:** April 20, 2013

## 1. Eigenvalues and Eigenvectors

A scalar λ ∈ K is an eigenvalue of T if there is a nonzero v ∈ V such that Tv = λv; such a v is called an eigenvector of T corresponding to λ. Thus λ ∈ K is an eigenvalue of T if and only if ker(T − λI) ≠ {0}, and any nonzero element of this subspace is an eigenvector of T corresponding to λ. Here I denotes the identity mapping from V to itself. Equivalently, λ is an eigenvalue of T if and only if det(T − λI) = 0. Therefore the eigenvalues of T are precisely the roots in K of the monic polynomial det(xI − T). This polynomial is called the characteristic polynomial of T and is denoted by c_T(x). Since the degree of c_T(x) is n, the dimension of V, T cannot have more than n eigenvalues counted with multiplicities. If A ∈ K^(n×n), then A can be regarded as a linear mapping from K^n to itself, so the polynomial c_A(x) = det(xI_n − A) is the characteristic polynomial of the matrix A, and its roots in K are the eigenvalues of A.

A subspace W of V is T-invariant if T(W) ⊆ W. The zero subspace and the full space are trivial examples of T-invariant subspaces. For an eigenvalue λ of T, the subspace E(λ) = ker(T − λI) is T-invariant and is called the eigenspace corresponding to λ. The dimension of E(λ) is the geometric multiplicity of λ, and the multiplicity of λ as a root of the characteristic polynomial of T is the algebraic multiplicity of λ.
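The defining equation Tv = λv and the criterion det(A − λI) = 0 can be checked numerically. The following is a minimal illustration (not part of the original text) using numpy on a small symmetric matrix whose eigenvalues are easy to verify by hand:

```python
import numpy as np

# A small real symmetric matrix; its eigenvalues are 1 and 3
# (trace 4, determinant 3).
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# numpy returns the eigenvalues and the eigenvectors as columns.
eigvals, eigvecs = np.linalg.eig(A)

# Verify the defining equation A v = lambda v for each eigenpair.
for lam, v in zip(eigvals, eigvecs.T):
    assert np.allclose(A @ v, lam * v)

# Verify the determinant criterion: det(A - lambda I) = 0 at each eigenvalue.
for lam in eigvals:
    assert abs(np.linalg.det(A - lam * np.eye(2))) < 1e-9

print("eigen-decomposition verified")
```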

**1.1** (i). Let W be a T-invariant subspace of V, and let T′ and T″ be the linear mappings induced by T on W and V/W respectively. Then c_T(x) = c_{T′}(x) c_{T″}(x). (ii). Let V = W₁ ⊕ W₂, where W₁ and W₂ are T-invariant subspaces, and let T₁ and T₂ be the linear mappings induced by T on W₁ and W₂. Then c_T(x) = c_{T₁}(x) c_{T₂}(x).

**1.2** The geometric multiplicity of an eigenvalue λ cannot exceed its algebraic multiplicity.

Proof. Let the geometric multiplicity of λ be m. Then λ has m linearly independent eigenvectors u₁, …, u_m, that is, T u_i = λ u_i for i = 1, …, m. Extend this set to a basis of V: B = {u₁, …, u_m, u_{m+1}, …, u_n}. With respect to this basis,

$$[T]_B = \begin{pmatrix} \lambda I_m & C \\ 0 & A \end{pmatrix}$$

in block form. Therefore c_T(x) = (x − λ)^m c_A(x), and the algebraic multiplicity of λ is at least m.

Recall that L(V), the set of linear operators on V, is a vector space over K of dimension n². Thus, if T is a linear operator on V, then the n² + 1 operators I = T⁰, T, …, T^(n²) are linearly dependent; that is, there are scalars α₀, α₁, …, α_(n²), not all of which are zero, such that α₀I + α₁T + ⋯ + α_(n²)T^(n²) = 0 (in L(V)). Therefore, if f(x) = α₀ + α₁x + ⋯ + α_(n²)x^(n²), then f(T) = 0. This shows that S = {p(x) ∈ K[x] | p(T) = 0} is a nonzero ideal of K[x], hence principal. The monic polynomial which generates this ideal is called the minimal polynomial of T. We will denote the minimal polynomial of T by m_T(x).

Also, for a fixed vector y in V, the n + 1 vectors y, Ty, …, Tⁿy are linearly dependent, and so there are scalars β₀, …, β_n in K, not all of which are zero, such that β₀y + β₁Ty + ⋯ + β_nTⁿy = 0. Similarly, the ideal {p(x) ∈ K[x] | p(T)(y) = 0} of K[x] is nonzero and principal. The monic polynomial m_y^T(x) which generates this ideal is called the annihilator of y with respect to T. Clearly, m_y^T(x) divides the minimal polynomial m_T(x).

**1.3 Primary decomposition theorem.** Let the minimal polynomial of T in K[x] be m_T(x) = p₁(x)^(r₁) ⋯ p_k(x)^(r_k), where p₁(x), …, p_k(x) are distinct monic irreducible polynomials and r₁, …, r_k are positive integers. Then

V = ker p₁(T)^(r₁) ⊕ ⋯ ⊕ ker p_k(T)^(r_k),

a direct sum of T-invariant subspaces of V. If for each i, T_i is the linear operator induced by T on ker p_i(T)^(r_i), then the minimal polynomial of T_i is p_i(x)^(r_i).

Proof. For each i, let q_i(x) = p₁(x)^(r₁) ⋯ p_(i−1)(x)^(r_(i−1)) p_(i+1)(x)^(r_(i+1)) ⋯ p_k(x)^(r_k) = m_T(x)/p_i(x)^(r_i). Then gcd(q₁(x), …, q_k(x)) = 1, and so there are polynomials s₁(x), ...
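The relationship between the minimal and characteristic polynomials can be made concrete numerically. The sketch below (an illustration added here, not from the original text) shows two 2×2 cases: a scalar matrix, where the minimal polynomial has strictly smaller degree than the characteristic polynomial, and a matrix with distinct eigenvalues, where the two coincide:

```python
import numpy as np

# Case 1: A = 2I in dimension 2.
# c_A(x) = (x - 2)^2, but already (A - 2I) = 0, so m_A(x) = x - 2.
A = 2.0 * np.eye(2)
assert np.allclose(A - 2.0 * np.eye(2), 0)

# Case 2: B has distinct eigenvalues 2 and 3, so c_B(x) = (x - 2)(x - 3).
B = np.array([[2.0, 1.0],
              [0.0, 3.0]])

# c_B(B) = (B - 2I)(B - 3I) = 0 (consistent with f(T) = 0 for f in the
# ideal S generated by the minimal polynomial).
assert np.allclose((B - 2 * np.eye(2)) @ (B - 3 * np.eye(2)), 0)

# Neither linear factor alone annihilates B, so m_B(x) = c_B(x) here.
assert not np.allclose(B - 2 * np.eye(2), 0)
assert not np.allclose(B - 3 * np.eye(2), 0)

print("minimal vs characteristic polynomial checks passed")
```

For B, the factorization m_B(x) = (x − 2)(x − 3) also illustrates the primary decomposition theorem: ℝ² splits as ker(B − 2I) ⊕ ker(B − 3I), a direct sum of the two (here one-dimensional) invariant subspaces.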
