The Waterloo Mathematics Review
Iterative Methods for Computing Eigenvalues and Eigenvectors
Maysum Panju
University of Waterloo
Abstract: We examine some numerical iterative methods for computing the eigenvalues and eigenvectors of real matrices. The five methods examined here range from the simple power iteration method to the more complicated QR iteration method. The derivations, procedures, and advantages of each method are briefly discussed.
Eigenvalues and eigenvectors play an important part in the applications of linear algebra. The naive method of finding the eigenvalues of a matrix involves finding the roots of its characteristic polynomial. For matrices of industrial size, however, this method is not feasible, and the eigenvalues must be obtained by other means. Fortunately, there exist several other techniques for finding the eigenvalues and eigenvectors of a matrix, some of which fall under the realm of iterative methods. These methods work by repeatedly refining approximations to the eigenvectors or eigenvalues, and can be terminated whenever the approximations reach a suitable degree of accuracy. Iterative methods form the basis of much of modern-day eigenvalue computation.

In this paper, we outline five such iterative methods and summarize their derivations, procedures, and advantages. The methods to be examined are the power iteration method, the shifted inverse iteration method, the Rayleigh quotient method, the simultaneous iteration method, and the QR method. This paper is meant to be a survey of existing algorithms for the eigenvalue computation problem. Section 2 provides a brief review of the linear algebra background required to understand the concepts that are discussed. In Section 3, the iterative methods are each presented, in order of complexity, and studied in brief detail. Finally, in Section 4, we provide some concluding remarks and mention some of the additional algorithm refinements that are used in practice. For the purposes of this paper, we restrict our attention to real-valued, square matrices with a full set of real eigenvalues.
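To make the iterate-and-refine pattern concrete, here is a minimal sketch of the simplest of these methods, power iteration, in NumPy. The matrix, starting vector, and tolerance below are illustrative choices, not taken from the paper.

```python
import numpy as np

def power_iteration(A, tol=1e-10, max_iter=1000):
    """Approximate the dominant eigenpair of A by repeated multiplication."""
    x = np.ones(A.shape[0])
    lam = 0.0
    for _ in range(max_iter):
        y = A @ x
        x_new = y / np.linalg.norm(y)    # renormalize the current iterate
        lam_new = x_new @ A @ x_new      # Rayleigh quotient eigenvalue estimate
        if abs(lam_new - lam) < tol:     # terminate once sufficiently accurate
            return lam_new, x_new
        lam, x = lam_new, x_new
    return lam, x

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])              # illustrative symmetric matrix
lam, v = power_iteration(A)             # lam approximates (5 + sqrt(5)) / 2
```

Note how the stopping rule matches the description above: the loop runs until successive eigenvalue approximations agree to within a chosen tolerance, rather than for a fixed number of steps.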
2 Linear Algebra Review
We begin by reviewing some basic definitions from linear algebra. It is assumed that the reader is comfortable with the notions of matrix and vector multiplication.

Definition 2.1. Let A ∈ R^{n×n}. A nonzero vector x ∈ R^n is called an eigenvector of A with corresponding eigenvalue λ ∈ C if Ax = λx.
Note that the eigenvectors of a matrix are precisely the vectors in R^n whose direction is preserved when multiplied by the matrix. Although eigenvalues may not be real in general, we will focus on matrices whose eigenvalues are all real numbers. This is true in particular if the matrix is symmetric; some of the methods we detail below only work for symmetric matrices. It is often necessary to compute the eigenvalues of a matrix. The most immediate method for doing so involves finding the roots of the characteristic polynomial.
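Definition 2.1 can be checked numerically: a candidate eigenpair (λ, x) need only satisfy Ax = λx up to floating-point error. The matrix and vector in this NumPy sketch are illustrative, not from the paper.

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])    # illustrative matrix with eigenvalues 5 and 2
x = np.array([1.0, 1.0])      # candidate eigenvector

Ax = A @ x                    # A maps x to [5.0, 5.0], i.e. 5 * x
lam = Ax[0] / x[0]            # the scaling factor is the eigenvalue

# Direction is preserved: A x = lam x, so x is an eigenvector of A.
assert np.allclose(Ax, lam * x)
```

Here x is an eigenvector because multiplication by A merely scales it by λ = 5; a vector such as (1, 0), which A rotates off its original direction, would fail the same check.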
Definition 2.2. The characteristic polynomial of A, denoted P_A(x) for x ∈ R, is the degree n polynomial defined by P_A(x) = det(xI − A).

It is straightforward to see that the roots of the characteristic polynomial of a matrix are exactly the eigenvalues of the matrix, since the matrix λI − A is singular precisely when λ is an eigenvalue of A. It follows that the computation of eigenvalues can be reduced to finding the roots of polynomials. Unfortunately, solving polynomial equations is in general a difficult problem: by the Abel–Ruffini theorem, there is no closed formula for solving polynomial equations of degree 5 or higher. The only way to proceed is to employ numerical techniques to solve these equations.

We have just seen that eigenvalues may be found by solving polynomial equations. The converse is also true. Given any monic polynomial, f(z) = z^n + a_{n−1} z^{n−1} + · · · + a_1 z + a_0, we can construct its companion matrix, an n × n matrix whose characteristic polynomial is exactly f.
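One standard construction realizing this converse is the companion matrix, whose eigenvalues are exactly the roots of the given polynomial. A NumPy sketch, using an illustrative quadratic whose roots are known:

```python
import numpy as np

def companion(a):
    """Companion matrix of f(z) = z^n + a[n-1] z^(n-1) + ... + a[1] z + a[0].

    a lists the coefficients a_0, ..., a_{n-1} of the monic polynomial f;
    the characteristic polynomial of the returned matrix is f itself.
    """
    n = len(a)
    C = np.zeros((n, n))
    C[1:, :-1] = np.eye(n - 1)       # ones on the subdiagonal
    C[:, -1] = -np.asarray(a)        # last column holds -a_0, ..., -a_{n-1}
    return C

# f(z) = z^2 - 3z + 2 = (z - 1)(z - 2), so a_0 = 2, a_1 = -3
C = companion([2.0, -3.0])
roots = np.sort(np.linalg.eigvals(C).real)   # eigenvalues of C = roots of f
```

This construction is why root-finding and eigenvalue computation are equivalent in difficulty: any polynomial root-finding problem can be recast as an eigenvalue problem (and in practice, NumPy's own `roots` routine works this way).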