# GPS Algorithm

Topics: Gradient descent, Newton's method in optimization, method of steepest descent. Published: December 11, 2012
Lab Report for Computer Assignment #2: GPS Algorithm

Introduction
In order to guide objects on the earth correctly, the GPS satellites need to locate them very accurately. The goal of this assignment is to use optimization methods to locate an object from given pseudo-range data. The signals sent from the satellites to the target are not accurate because of systematic errors caused by an inaccurate receiver clock and other random noise. The resulting inaccurate range is known as the pseudo-range, as opposed to the true range.

Since the pseudo-range data is given, we first build a pseudo-range equation that includes the true range, the clock bias error, and a noise term:

$$y = R(S) + b\,e + v, \qquad e = [1\ 1\ 1\ 1]^T,$$

where $y = [y_1, \dots, y_4]^T$ collects the pseudo-ranges and $R_l(S) = \lVert S - S_l \rVert$ is the true range to satellite $l$.

The random noise terms $v_l$ are i.i.d. with p.d.f. $N(0, \sigma^2)$. The clock bias error $b$ is caused by the inaccurate clock in the GPS receiver. Knowing the satellite locations $S_l$, $l = 1, \dots, 4$, and having $m$ pseudo-range measurements $y_l$ to each satellite, we can estimate the receiver location $S$ and the clock bias $b$ using the Gradient (Steepest) Descent and Gauss-Newton methods for solving this non-linear least-squares problem.
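The measurement model above can be sketched numerically. In the snippet below the satellite positions and noise draw are illustrative placeholders, not the assignment's actual data; only the true receiver state and the noise level $\sigma = 0.0004$ are taken from the report.

```python
import numpy as np

# Sketch of the pseudo-range model y = R(S) + b*e + v (units: ER).
rng = np.random.default_rng(0)

sats = np.array([            # hypothetical satellite locations, one per row
    [3.0, 0.0, 0.0],
    [0.0, 3.0, 0.0],
    [0.0, 0.0, 3.0],
    [2.0, 2.0, 2.0],
])
S_true = np.array([1.0, 0.0, 0.0])   # true receiver location (from the report)
b_true = 2.354788068e-3              # true clock bias (from the report)
sigma = 0.0004                       # one of the report's noise levels

R = np.linalg.norm(S_true - sats, axis=1)        # true ranges R_l(S)
y = R + b_true + sigma * rng.standard_normal(4)  # noisy pseudo-ranges
```

Each pseudo-range is the true geometric range plus the same clock bias $b$ (the receiver clock offsets every measurement identically) plus independent Gaussian noise.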

Procedure
1. Linearization
Linearize the pseudo-range equation

$$y = R(S) + b\,e + v.$$

Let $X = \begin{bmatrix} S \\ b \end{bmatrix}$, so that $y(X) = h(X) + v$ with $h(X) = R(S) + b\,e$.

The pseudo-range equation is non-linear since $R_l(S) = \lVert S - S_l \rVert$ is non-linear in $S$. To linearize a non-linear equation, a Taylor series expansion is used:

$$y(X + \Delta X) = y(X) + \frac{\partial y}{\partial X}(X)\,\Delta X + \text{h.o.t.};$$

$$\frac{\partial y}{\partial X}(X) = \frac{\partial h}{\partial X}(X) = \begin{bmatrix} r_1^T & 1 \\ \vdots & \vdots \\ r_4^T & 1 \end{bmatrix} = H(X), \qquad r_l^T = \frac{(S - S_l)^T}{R_l};$$

$$y(X + \Delta X) - y(X) = H(X)\,\Delta X + \text{h.o.t.};$$

$$\Delta y = H(X)\,\Delta X + \text{h.o.t.}$$
Since $r_l = (S - S_l)/R_l$ is a unit line-of-sight vector, the neglected higher-order terms of the expansion scale inversely with the true range $R_l$: as $R_l$ increases, their contribution shrinks. We can conclude that the larger the true range is, the better the linearized approximation will be.
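The Jacobian $H(X)$ derived above can be verified numerically: each row should be $[(S - S_l)^T / R_l,\ 1]$. The sketch below checks the analytic Jacobian against central finite differences; the satellite geometry is a hypothetical placeholder.

```python
import numpy as np

def h(X, sats):
    """Model h(X) = R(S) + b*e with state X = [S; b]."""
    S, b = X[:3], X[3]
    return np.linalg.norm(S - sats, axis=1) + b

def H(X, sats):
    """Analytic Jacobian: row l is [(S - S_l)^T / R_l, 1]."""
    S = X[:3]
    diff = S - sats
    R = np.linalg.norm(diff, axis=1)
    return np.hstack([diff / R[:, None], np.ones((len(sats), 1))])

# hypothetical satellite geometry for the check
sats = np.array([[3., 0., 0.], [0., 3., 0.], [0., 0., 3.], [2., 2., 2.]])
X = np.array([0.9331, 0.25, 0.258819, 0.0])  # the report's initial estimate

# central finite-difference Jacobian, column by column
eps = 1e-7
Hfd = np.column_stack([
    (h(X + eps * np.eye(4)[j], sats) - h(X - eps * np.eye(4)[j], sats)) / (2 * eps)
    for j in range(4)
])
```

The two Jacobians should agree to roughly $10^{-9}$, which confirms both the range-derivative rows and the column of ones contributed by the clock bias.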

2. Algorithm Development
In order to find the maximum-likelihood estimate $X_{\text{MLE}}$, where the object of interest is most likely located in space, we minimize the loss function over $X$:

$$\min_X\ l(X) = \tfrac{1}{2}\,\lVert y - h(X) \rVert^2.$$

To minimize the loss function $l(X)$, we follow the negative gradient of the loss, which is the direction of steepest descent:

$$\nabla l(X) = -H(X)^T\,(y - h(X)).$$
Both the Steepest Descent and Gauss-Newton updates follow from the generalized gradient descent iteration:

$$X_{k+1} = X_k + \Delta X_k = X_k - \alpha_k\,Q(X_k)\,\nabla l(X_k) = X_k + \alpha_k\,Q(X_k)\,H(X_k)^T\,(y - h(X_k)).$$

With $Q = I$, the Steepest Descent update simplifies to

$$X_{k+1} = X_k + \alpha_k\,H(X_k)^T\,(y - h(X_k)).$$

With $Q = (H(X_k)^T H(X_k))^{-1}$, the Gauss-Newton update simplifies to

$$X_{k+1} = X_k + \alpha_k\,(H(X_k)^T H(X_k))^{-1}\,H(X_k)^T\,(y - h(X_k)).$$

In the simulation, the step size is fixed at $\alpha_k = 0.3$. The iteration terminates when either the iteration count $k$ reaches 10,000 or the loss stops decreasing, i.e. $l(X_{k+1}) \ge l(X_k)$.
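Both updates can be sketched in one routine, since they differ only in the scaling matrix $Q$. The sketch below uses noise-free synthetic pseudo-ranges and a hypothetical satellite geometry; the step size, iteration cap, stopping rule, initial estimate, and true state follow the report.

```python
import numpy as np

def h(X, sats):
    """Model h(X) = R(S) + b*e with state X = [S; b]."""
    S, b = X[:3], X[3]
    return np.linalg.norm(S - sats, axis=1) + b

def H(X, sats):
    """Jacobian: row l is [(S - S_l)^T / R_l, 1]."""
    S = X[:3]
    diff = S - sats
    R = np.linalg.norm(diff, axis=1)
    return np.hstack([diff / R[:, None], np.ones((len(sats), 1))])

def solve(y, sats, X0, alpha=0.3, max_iter=10_000, gauss_newton=False):
    """Generalized iteration X_{k+1} = X_k + alpha * Q * H^T * (y - h(X_k))."""
    X = X0.copy()
    loss = 0.5 * np.sum((y - h(X, sats)) ** 2)
    for _ in range(max_iter):
        Hk = H(X, sats)
        r = y - h(X, sats)
        step = Hk.T @ r                       # Q = I: steepest descent
        if gauss_newton:                      # Q = (H^T H)^{-1}: Gauss-Newton
            step = np.linalg.solve(Hk.T @ Hk, step)
        X_new = X + alpha * step
        loss_new = 0.5 * np.sum((y - h(X_new, sats)) ** 2)
        if loss_new >= loss:                  # loss stopped decreasing
            break
        X, loss = X_new, loss_new
    return X

# hypothetical geometry; state values taken from the report
sats = np.array([[3., 0., 0.], [0., 3., 0.], [0., 0., 3.], [2., 2., 2.]])
X_true = np.array([1.0, 0.0, 0.0, 2.354788068e-3])
y = h(X_true, sats)                           # noise-free pseudo-ranges (sigma = 0)
X0 = np.array([0.9331, 0.25, 0.258819, 0.0])

X_gn = solve(y, sats, X0, gauss_newton=True)
```

With noise-free data the Gauss-Newton iterate contracts toward $X_{\text{true}}$ at a roughly linear rate set by the damping $\alpha_k = 0.3$; steepest descent converges much more slowly, consistent with the large iteration counts reported below.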

3. Simulation
Using three noise levels, σ = 0, σ = 0.0004, and σ = 0.004, several sets of pseudo-ranges are generated. For σ = 0, only one sample is generated. For σ = 0.0004, four data sets with 1, 4, 16, and 256 samples are generated, and likewise for σ = 0.004. Starting from the initial estimate $X_0 = (0.93310, 0.25000, 0.258819, 0.00000)^T$ ER, we apply the two algorithms to approach the true state $X = (1.000000000, 0.000000000, 0.000000000, 2.354788068\mathrm{e}{-3})^T$ ER.

a) Applying the Steepest Descent Algorithm

Step size $\alpha_k = 0.3$; 1 ER = 6,370,000 meters

**σ = 0**

| m | 1 |
|---|---|
| Estimated x | 1.0000 ER |
| Estimated y | 0.0000 ER |
| Estimated z | 0.0000 ER |
| Estimated b | 0.0024 ER |
| Position error | 1.929567e-04 meters |
| Clock bias error | 1.392852e-04 meters |
| # of iterations | 36009 |

**σ = 0.0004**

| m | 1 | 4 | 16 | 256 |
|---|---|---|---|---|
| Estimated x | 0.9992 ER | 1.0012 ER | 0.9998 ER | 1.0001 ER |
| Estimated y | 0.0003 ER | 0.0025 ER | 0.0000 ER | 0.0001 ER |
| Estimated z | 0.0004 ER | 0.0021 ER | 0.0000 ER | 0.0001 ER |
| Estimated b | -0.0006 ER | 0.0048 ER | 0.0022 ER | 0.0025 ER |
| Position error | 3.206096e+04 meters | 2.208520e+04 meters | 1.438194e+03 meters | 1.168879e+03 meters |
| Clock bias error | 1.896251e+04 meters | 1.533646e+04 meters | 8.068516e+02 … | |