Universidad Autónoma de Querétaro
Facultad de Ingeniería

“Iterative Methods”
“Gauss and Gauss-Seidel”

Profesor: Nieves Fonseca Ricardo
Mentado Camacho Félix
Navarro Escamilla Erandy
Péloquin Blancas María José
Rubio Miranda Ana Luisa

Abstract

Many real-life problems lead to several simultaneous linear equations for which we must find a common solution, and there are several techniques for doing so. Instead of direct methods, which produce a solution to a set of linear equations after a finite number of steps, we can use algorithms whose accuracy depends on the number of times they are applied, known as iterative methods. For large systems they may be much faster than direct methods. We will expand on two important methods for finding numerical solutions to linear systems of equations, with an introduction to and a detailed explanation of each. Since each process is normally long, these methods are ideal for programming.

Keywords

Iterative, algorithm, linear equation, convergence.

Objective

Understand the concepts of iterative methods and convergence, as well as the difference between direct and iterative methods and the usefulness of each. Give a clear and understandable idea of the Gauss and Gauss-Seidel methods for solving systems of linear equations, and show how to apply them.

Investigation

Iterative method

An iterative method is one that computes progressively better approximations to the solution of a mathematical problem. The same improvement process is repeated on the approximate solution until a value sufficiently close to the real solution is obtained. Unlike a direct method, which must run to completion before it yields an answer, an iterative method can be stopped at the end of any iteration to obtain an approximation to the solution.

Advantages and disadvantages

A drawback of iterative methods is that they...
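The general idea above can be made concrete with a small sketch of the Gauss-Seidel scheme developed later in this work; the 2x2 example system, tolerance, and function name are our own illustrative choices, not taken from the text.

```python
def gauss_seidel(A, b, tol=1e-10, max_iter=1000):
    """Iteratively solve A x = b: each sweep recomputes x[i] from the
    newest available values and stops once an entire sweep changes x
    by less than tol."""
    n = len(A)
    x = [0.0] * n
    for _ in range(max_iter):
        biggest_change = 0.0
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            new_xi = (b[i] - s) / A[i][i]
            biggest_change = max(biggest_change, abs(new_xi - x[i]))
            x[i] = new_xi
        if biggest_change < tol:
            break
    return x

# Diagonally dominant example: 4x + y = 6, x + 3y = 7 has solution x = 1, y = 2.
print([round(v, 6) for v in gauss_seidel([[4, 1], [1, 3]], [6, 7])])  # [1.0, 2.0]
```

Each new component is used immediately within the same sweep, which is exactly what distinguishes Gauss-Seidel from schemes that update all components simultaneously.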

...Numerical Methods Questions
1 f(x) = x^3 – 2x – 5
a) Show that there is a root β of f(x) = 0 in the interval [2, 3]. The root β is to be estimated using the iterative formula
x_(n+1) = (2x_n + 5)^(1/3), x_0 = 2
b) Calculate the values of x1, x2, x3, and x4, giving your answers to 4 sig. fig.
c) Prove that, to 5 significant figures, β is 2.0946.
2 Use the iterative formula
x_(n+1) = cos(x_n)
with x_1 = 0.5 to find the limit of the sequence x1, x2, x3, …, correct to 2 decimal places.
3 Starting with x_0 = 1, use the iterative formula
x_(n+1) = ln(x_n + 5)
to find, to 2 decimal places, the values of x1, x2, x3, and x4.
4 The equation 2^x = x^3 has two roots. Show that they lie in the intervals [1, 2] and [9, 10].
5 f(x) = x^3 – 2 – 1/x, x ≠ 0.
(a) Show that the equation f(x) = 0 has a root between 1 and 2.
An approximation for this root is found using the iteration formula
x_(n+1) = (2 + 1/x_n)^(1/3), with x_0 = 1.5.
(b) By calculating the values of x1, x2, x3 and x4, find an approximation to this root, giving your answer to 3 decimal places.
(c) By considering the change of sign of f(x) in a suitable interval, verify that your answer to part (b) is correct to 3 decimal places.
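Exercises of this kind are easy to check by machine. Below is a minimal fixed-point iteration sketch, applied to x_(n+1) = (2x_n + 5)^(1/3), a standard rearrangement of question 1's equation x^3 − 2x − 5 = 0; the helper name and tolerance are our own choices.

```python
def fixed_point(g, x0, tol=1e-8, max_iter=100):
    """Repeat x_{n+1} = g(x_n) until two successive values agree within tol."""
    x = x0
    for _ in range(max_iter):
        x_new = g(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# x_{n+1} = (2x_n + 5)**(1/3) is one rearrangement of x^3 - 2x - 5 = 0.
root = fixed_point(lambda x: (2 * x + 5) ** (1 / 3), 2.0)
print(round(root, 4))  # 2.0946
```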
Numerical analysis is the study of algorithms that use numerical approximation (as opposed to general symbolic manipulations) for the problems...

...Subject Name: Numerical Methods
Subject Code: MA1251
Unit I
1) Write Descartes' rule of signs.
Sol:
1) An equation f(x) = 0 cannot have more positive roots than there are
changes of sign in the terms of the polynomial f(x).
2) An equation f(x) = 0 cannot have more negative roots than there are
changes of sign in the terms of the polynomial f(−x).
2) What is the order of convergence of the Newton-Raphson method if the multiplicity of the root is one?
Sol: The order of convergence of the N-R method is 2.
3) The Newton-Raphson method is also known as the method of ………..
Sol: Iteration (Newton's iteration method)
Derive Newton's algorithm to find the p-th root of a number N.
Sol: If x = N^(1/p), then x^p − N = 0 is the equation to be solved.
Let f(x) = x^p − N, so f′(x) = p·x^(p−1).
By the N-R rule, if x_r is the r-th iterate,
x_(r+1) = x_r − f(x_r)/f′(x_r)
        = x_r − (x_r^p − N)/(p·x_r^(p−1))
        = ((p − 1)·x_r^p + N)/(p·x_r^(p−1))
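The simplified iterate can be sketched directly in code; the function name, starting value, and stopping rule below are our own choices.

```python
def pth_root(N, p, x0=1.0, tol=1e-12, max_iter=200):
    """Newton-Raphson for f(x) = x**p - N, using the simplified iterate
    x_{r+1} = ((p - 1)*x_r**p + N) / (p * x_r**(p - 1))."""
    x = x0
    for _ in range(max_iter):
        x_new = ((p - 1) * x ** p + N) / (p * x ** (p - 1))
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

print(round(pth_root(32, 5), 9))  # 2.0, since 2**5 = 32
```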
4) When would we not use the N-R method?
Sol: If x1 is the exact root and x0 is its approximate value for the equation f(x) = 0, the correction applied at each step is f(x0)/f′(x0). If f′(x0) is small, the error f(x0)/f′(x0) will be large, and the computation of the root by this method will be a slow process or may even be impossible.
Hence the method should not be used in cases where the graph of the...

...Carl Gauss was a man who is known for making a great many breakthroughs across the wide variety of his work in both mathematics and physics. He is responsible for immeasurable contributions to the fields of number theory, analysis, differential geometry, geodesy, magnetism, astronomy, and optics, as well as many more. The concepts that he himself created have had an immense influence in many areas of the mathematical and scientific world.
Carl Gauss was born Johann Carl Friedrich Gauss, on the thirtieth of April, 1777, in Brunswick, Duchy of Brunswick (now Germany). Gauss was born into an impoverished family, raised as the only son of a bricklayer. Despite the hard living conditions, Gauss's brilliance shone through at a young age. At the age of only two years, the young Carl gradually learned from his parents how to pronounce the letters of the alphabet. Carl then set to teaching himself how to read by sounding out the combinations of the letters. Around the time that Carl was teaching himself to read aloud, he also taught himself the meanings of number symbols and learned to do arithmetical calculations.
When Carl Gauss reached the age of seven, he began elementary school. His potential for brilliance was recognized immediately. Gauss's teacher, Herr Büttner, had assigned the class a difficult problem of addition in which the students were to find the sum of the integers from one to one hundred. While...

...Gauss-Jordan Matrix Elimination
-This method can be used to solve systems of linear equations involving two or more variables. However, the system must first be changed to an augmented matrix.
-This method can also be used to find the inverse of a 2x2 matrix or of larger matrices: 3x3, 4x4, etc. Note: the matrix must be a square matrix in order to have an inverse.
An augmented matrix is used to solve a system of linear equations:

a1x + b1y + c1z = d1
a2x + b2y + c2z = d2
a3x + b3y + c3z = d3

System of Equations ⎯⎯→ Augmented Matrix ⎯⎯→

⎡ a1  b1  c1  d1 ⎤
⎢ a2  b2  c2  d2 ⎥
⎣ a3  b3  c3  d3 ⎦
-When given a system of equations, to write it in augmented matrix form, the coefficients of each variable must be taken and put in a matrix. For example, for the following system:

3x + 2y − z = 3
x − y + 2z = 4
2x + 3y − z = 3

Augmented Matrix ⎯⎯→

⎡ 3   2  −1  3 ⎤
⎢ 1  −1   2  4 ⎥
⎣ 2   3  −1  3 ⎦
The Math Center ■ Valle Verde ■ Tutorial Support Services ■ EPCC
-There are three different operations, known as elementary row operations, used when solving or reducing a matrix with the Gauss-Jordan elimination method:
1. Interchanging two rows.
2. Adding one row to another row, or multiplying one row by a constant first and then adding it to another.
3. Multiplying a row by any nonzero constant.
Identity Matrix-is the final result obtained when a matrix is reduced. This...
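The row operations above are enough to implement the method in software. The sketch below is our own minimal Python version (with partial pivoting added for numerical safety, which the text does not discuss), applied to the example system, whose solution is x = y = 1, z = 2.

```python
def gauss_jordan(aug):
    """Reduce an augmented matrix [A | b] to reduced row-echelon form
    in place and return the solution column (assumes a unique solution)."""
    n = len(aug)
    for col in range(n):
        # Row op 1: swap in the row with the largest pivot (partial pivoting).
        pivot_row = max(range(col, n), key=lambda r: abs(aug[r][col]))
        aug[col], aug[pivot_row] = aug[pivot_row], aug[col]
        # Row op 3: scale the pivot row so the pivot becomes 1.
        pivot = aug[col][col]
        aug[col] = [v / pivot for v in aug[col]]
        # Row op 2: subtract multiples of the pivot row from every other row.
        for r in range(n):
            if r != col:
                factor = aug[r][col]
                aug[r] = [v - factor * p for v, p in zip(aug[r], aug[col])]
    return [row[-1] for row in aug]

# The example system from the text.
solution = gauss_jordan([[3, 2, -1, 3],
                         [1, -1, 2, 4],
                         [2, 3, -1, 3]])
print([round(v, 10) for v in solution])  # [1.0, 1.0, 2.0]
```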

...alternative of an efficient implementation in hardware has not been analyzed thoroughly yet.
In this paper we show that there is still room for drastic efficiency improvements of current algorithms when targeting special-purpose hardware such as re-programmable logic.
In order to develop such a special-purpose hardware architecture, we analyzed the suitability of various algorithms while taking possible simplifications over GF(2) into account. Gaussian elimination turned out to be basically best suited for hardware implementation due to its simplicity. Moreover, the coefficient matrices of LSEs are not required to have any special properties, like being symmetric positive definite, as is the case for example for the conjugate gradient methods [12]. However, compared to software, a direct implementation of Gaussian elimination in hardware does not yield the desired advantage in efficiency. By slightly modifying the logic of the algorithm and parallelizing element and row operations, we were able to develop a highly efficient architecture for solving medium-sized LSEs and computing inverse matrices. In a similar way, an architecture for fast matrix multiplication can be obtained and can possibly be used in conjunction with the other architecture to solve large LSEs by means of Strassen's algorithm [16].
This paper is roughly structured as follows. We start with a brief discussion of previous work in this area. After reviewing standard Gaussian...
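A software illustration may help here (this is our own minimal sketch, not the authors' hardware architecture): over GF(2) every element is 0 or 1, so adding one row to another during elimination collapses to a single XOR of machine words, which is exactly the kind of simplification the passage refers to.

```python
def gf2_solve(rows, rhs, n):
    """Gaussian elimination over GF(2). Each equation is an int bitmask of
    n coefficient bits; adding one row to another is a single XOR. Returns
    one solution of A x = b as a bitmask (free variables set to 0), or
    None if the system is inconsistent."""
    # Append the right-hand side as bit n of each row.
    aug = [row | (bit << n) for row, bit in zip(rows, rhs)]
    pivots = []  # pivot column of each reduced row, in order
    for col in range(n):
        r0 = len(pivots)  # next free row slot
        pivot = next((i for i in range(r0, len(aug)) if aug[i] >> col & 1), None)
        if pivot is None:
            continue  # no pivot in this column: x[col] stays free
        aug[r0], aug[pivot] = aug[pivot], aug[r0]
        for i in range(len(aug)):
            if i != r0 and aug[i] >> col & 1:
                aug[i] ^= aug[r0]  # row addition over GF(2) is just XOR
        pivots.append(col)
    # A leftover row "0 = 1" means the system has no solution.
    if any(row == 1 << n for row in aug[len(pivots):]):
        return None
    x = 0
    for r, col in enumerate(pivots):
        x |= (aug[r] >> n & 1) << col
    return x

# x0 + x1 = 1 and x1 = 1 over GF(2) give x0 = 0, x1 = 1 (bitmask 0b10).
print(bin(gf2_solve([0b11, 0b10], [1, 1], 2)))  # 0b10
```

Because one XOR processes a whole row at once, the element-level parallelism the paper exploits in logic is already visible at word level in software.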

...Gauss-Markov Theorem
In the model y = Xβ + e, if the following two conditions on the random vector e are met:
1. E(e) = 0
2. Cov(e) = σ²I
the best (minimum variance) linear (linear functions of the y_i) unbiased estimator of β is given by the least squares estimator; that is, β̂ = (X'X)⁻¹X'y is the best linear unbiased estimator (BLUE) of β.
Proof:
Let C be any p × n constant matrix and let β* = Cy; β* is a general linear function of y, which we shall take as an estimator of β. We must specify the elements of C so that β* will be the best unbiased estimator of β. Let C = (X'X)⁻¹X' + B. Since (X'X)⁻¹X' is known, we must find B in order to be able to specify C. For unbiasedness, we have
E(β*) = E(Cy) = CXβ = [I + BX]β.
But, to be unbiased, E(β*) must equal β, and this implies BXβ = 0 for all β. Thus, unbiasedness specifies that BX = 0.
For the property of "best" we must find the matrix B that minimizes the variances of the β*_i, where β* = Cy, subject to the restriction BX = 0. To examine this, consider the covariance
Cov(β*) = C Cov(y) C' = σ²CC' = σ²[(X'X)⁻¹ + BB'], using BX = 0.
Let V = Cov(β*). Then V = σ²[(X'X)⁻¹ + BB']. The diagonal elements of V are the respective variances of the β*_i. To minimize each variance, we must, therefore, minimize each diagonal element of V. Since σ² and (X'X)⁻¹ are constants, we must find a matrix B such that each diagonal element of BB' is a minimum. But BB' is positive semidefinite; hence its diagonal elements satisfy (BB')_ii ≥ 0. Thus the diagonal elements of BB' will attain their minimum when b_ij = 0 for all i, j. But, if B = 0, then β* = (X'X)⁻¹X'y = β̂. Therefore, if...
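As a small numerical companion to the theorem (our own example, not part of the proof), the least squares estimator for a straight-line model can be computed from the normal equations b̂ = (X'X)⁻¹X'y, written out by hand for the two-parameter case:

```python
def ols_fit(xs, ys):
    """Least squares estimate for y = b0 + b1*x via the normal equations
    b_hat = (X'X)^(-1) X'y, expanded for the 2-parameter case."""
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    det = n * sxx - sx * sx          # determinant of X'X
    b1 = (n * sxy - sx * sy) / det
    b0 = (sy * sxx - sx * sxy) / det
    return b0, b1

# Exact data on the line y = 1 + 2x recovers the coefficients (1.0, 2.0).
print(ols_fit([0, 1, 2, 3], [1, 3, 5, 7]))  # (1.0, 2.0)
```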

...Carl Friedrich Gauss
Carl Friedrich Gauss was a German mathematician and scientist who
dominated the mathematical community during and after his lifetime. His
outstanding work includes the discovery of the method of least squares, the
discovery of non-Euclidean geometry, and important contributions to the theory
of numbers.
Born in Brunswick, Germany, on April 30, 1777, Johann Friedrich Carl
Gauss showed early and unmistakable signs of being an extraordinary youth. As a
child prodigy, he was self taught in the fields of reading and arithmetic.
Recognizing his talent, his youthful studies were accelerated by the Duke of
Brunswick in 1792 when he was provided with a stipend to allow him to pursue his
education.
In 1795, he continued his mathematical studies at the University of
Göttingen. In 1799, he obtained his doctorate in absentia from the University of
Helmstedt, for providing the first reasonably complete proof of what is now
called the fundamental theorem of algebra. He stated that: Any polynomial with
real coefficients can be factored into the product of real linear and/or real
quadratic factors.
At the age of 24, he published Disquisitiones arithmeticae, in which he
formulated systematic and widely influential concepts and methods of number
theory -- dealing with the relationships and properties of integers. This book
set the pattern for much future research and won Gauss...

...Greetings, my fellows! I am Gauss, Johann Carl Friedrich Gauss. I am a German mathematician and I contributed significantly to maths and physics. I was born on the 30th of April 1777 in Braunschweig, Germany. Unfortunately, my mother was not well educated, could not read or write, and could not record my date of birth. The only thing she remembered was that I was born on a Wednesday, eight days before the Feast of the Ascension, which occurs 40 days after Easter. I was christened and accepted in a church located near the school I went to as a child.
Not to brag, but I was a total genius! When I was 21, I made my first discovery, which was to make me famous: I had completed Disquisitiones Arithmeticae, though it was not published until 1801. My book was so great that it shaped the field of number theory all the way to this day. Don't ask me how I know this, because I can tell you I discovered this when I was in heaven.
I was so talented that the Duke of Braunschweig sent me to one of the most prestigious schools at the time, the Collegium Carolinum. I studied there from 1792 to 1795 and then transferred to the University of Göttingen, where I studied from 1795 to 1798. While I was still in university, I rediscovered several important theorems. But my breakthrough occurred in 1796, when I was able to prove that any regular polygon with a number of sides which is a Fermat prime can be constructed by using just a compass and straightedge. Amazing isn't...