# OLS is BLUE

**Topics:** Expected value, Gauss–Markov theorem, Estimator

**Pages:** 7 (445 words)

**Published:** September 17, 2014

Group Members:

Eliza Tan 01120120073

Praisya Lordrietta 01120120061

Wirhan Pandutama 0112012

UNIVERSITAS PELITA HARAPAN

LIPPO KARAWACI-TANGERANG

2014

## Gauss–Markov Theorem

The Gauss–Markov Theorem is stated for the following regression model and assumptions. The regression model is

$$y_i = \beta_1 + \beta_2 x_i + u_i, \qquad i = 1, \dots, n \tag{1}$$

together with either Assumptions (A) or Assumptions (B).

Assumptions (A): the $x_i$ are nonstochastic, $E(u_i) = 0$ for all $i$, $\mathrm{Var}(u_i) = \sigma^2$ for all $i$, and $\mathrm{Cov}(u_i, u_j) = 0$ for $i \neq j$.

Assumptions (B): the same conditions hold conditionally on the regressors: $E(u_i \mid X) = 0$, $\mathrm{Var}(u_i \mid X) = \sigma^2$, and $\mathrm{Cov}(u_i, u_j \mid X) = 0$ for $i \neq j$, where $X = (x_1, \dots, x_n)$.

If we use Assumptions (B), we need the law of iterated expectations in proving BLUE, and the result is then given conditionally on $X$. Let us use Assumptions (A). The Gauss–Markov Theorem is stated below: under Assumptions (A), the OLS estimators $\hat\beta_1$ and $\hat\beta_2$ are the Best Linear Unbiased Estimators (BLUE), that is:

1. Unbiased: $E(\hat\beta_1) = \beta_1$ and $E(\hat\beta_2) = \beta_2$.

2. Best: among all linear unbiased estimators of $\beta_1$ and $\beta_2$, the OLS estimators have the smallest variance.
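As a numerical sanity check (not part of the original argument), the following Python sketch simulates model (1) under Assumptions (A) and verifies that the OLS slope is unbiased on average; the parameter values and variable names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
beta1, beta2, sigma = 2.0, 0.5, 1.0    # true parameters (illustrative values)
x = np.linspace(0.0, 10.0, 50)         # nonstochastic regressor, per Assumptions (A)
w = (x - x.mean()) / ((x - x.mean()) ** 2).sum()   # OLS slope weights w_i

slopes = []
for _ in range(5000):
    u = rng.normal(0.0, sigma, size=x.size)   # E(u)=0, Var(u)=sigma^2, uncorrelated
    y = beta1 + beta2 * x + u                 # model (1)
    slopes.append(w @ y)                      # OLS slope = sum_i w_i y_i

print(round(float(np.mean(slopes)), 2))       # close to beta2 = 0.5
```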

Real data seldom satisfy Assumptions (A) or Assumptions (B). Accordingly, one might think that the Gauss–Markov theorem holds only in a never-never land. However, it is important to understand the Gauss–Markov theorem on two grounds:

1. We may treat the world of the Gauss–Markov theorem as equivalent to the world of perfect competition in microeconomic theory.

2. The mathematical exercises are good for your soul.

We shall prove the Gauss–Markov theorem using the simple regression model of equation (1). We could also prove it using the multiple regression model

$$y_i = \beta_1 + \beta_2 x_{2i} + \dots + \beta_K x_{Ki} + u_i \tag{2}$$

To do so, however, we need to use vector and matrix language (linear algebra). In fact, once you learn linear algebra, the proof of the Gauss–Markov theorem is far more straightforward than the proof for the simple regression model of (1).

## Proving the Gauss–Markov Theorem
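Before turning to the simple-regression proof, the matrix formulation just mentioned can be sketched numerically: stack a column of ones and the regressors into a design matrix $X$ and compute the OLS estimator $(X'X)^{-1}X'y$. The data-generating values below are illustrative assumptions, not from the original text.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
x2 = rng.uniform(0, 10, n)
x3 = rng.uniform(0, 5, n)
u = rng.normal(0, 1, n)
y = 1.0 + 2.0 * x2 - 0.5 * x3 + u          # multiple regression model (2)

X = np.column_stack([np.ones(n), x2, x3])  # design matrix with intercept column
beta_hat = np.linalg.solve(X.T @ X, X.T @ y)   # OLS: (X'X)^{-1} X'y
print(np.round(beta_hat, 1))               # near the true (1.0, 2.0, -0.5)
```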

### Proof that $\hat\beta_2$ is best

We need to re-express $\hat\beta_2$ first. The BLUE property only considers linear estimators of $\beta_2$. The linear estimators are defined by

$$\tilde\beta_2 = \sum_{i=1}^{n} c_i y_i,$$

where the $c_i$ are nonstochastic constants. Notice that if

$$w_i = \frac{x_i - \bar x}{\sum_{j=1}^{n} (x_j - \bar x)^2},$$

then

$$\hat\beta_2 = \sum_{i=1}^{n} w_i y_i,$$

so the OLS estimator is itself a linear estimator.

We have to make $\tilde\beta_2$ unbiased. To take the expectation of $\tilde\beta_2$, we first substitute equation (1):

$$\tilde\beta_2 = \sum_i c_i (\beta_1 + \beta_2 x_i + u_i) = \beta_1 \sum_i c_i + \beta_2 \sum_i c_i x_i + \sum_i c_i u_i,$$

so that

$$E(\tilde\beta_2) = \beta_1 \sum_i c_i + \beta_2 \sum_i c_i x_i,$$

since $E(u_i) = 0$ for all $i$. We see that

$$E(\tilde\beta_2) = \beta_2 \;\leftrightarrow\; \sum_i c_i = 0 \ \text{and} \ \sum_i c_i x_i = 1.$$

(↔ means "if and only if.")

We take the variance of $\tilde\beta_2$:

$$\mathrm{Var}(\tilde\beta_2) = E\big[(\tilde\beta_2 - E(\tilde\beta_2))^2\big] = E\Big[\Big(\sum_i c_i u_i\Big)^2\Big],$$

since $\tilde\beta_2 - E(\tilde\beta_2) = \sum_i c_i u_i$. Expanding the square,

$$\mathrm{Var}(\tilde\beta_2) = c_1^2 E(u_1^2) + \dots + c_n^2 E(u_n^2) = \sigma^2 \sum_i c_i^2,$$

since $E(u_i u_j) = 0$ for $i \neq j$ and $E(u_i^2) = \sigma^2$.

The variance of the OLS estimator $\hat\beta_2$ is given by

$$\mathrm{Var}(\hat\beta_2) = \sigma^2 \sum_i w_i^2 = \frac{\sigma^2}{\sum_j (x_j - \bar x)^2}.$$

Thus

$$\mathrm{Var}(\tilde\beta_2) \ge \mathrm{Var}(\hat\beta_2) \;\leftrightarrow\; \sum_i c_i^2 \ge \sum_i w_i^2.$$

Since each $c_i$ is an arbitrary nonstochastic constant, we can rewrite it as

$$c_i = w_i + d_i.$$

Earlier we saw that $\tilde\beta_2$ is unbiased if and only if $\sum_i c_i = 0$ and $\sum_i c_i x_i = 1$. So

$$\sum_i d_i = \sum_i c_i - \sum_i w_i = 0 - 0 = 0,$$

since $\sum_i w_i = 0$. But also

$$\sum_i d_i x_i = \sum_i c_i x_i - \sum_i w_i x_i = 1 - 1 = 0,$$

since $\sum_i w_i x_i = 1$. Hence $\sum_i d_i = 0$ and $\sum_i d_i x_i = 0$.

We square $c_i = w_i + d_i$ and sum with respect to $i = 1, \dots, n$:

$$\sum_i c_i^2 = \sum_i w_i^2 + \sum_i d_i^2 + 2\sum_i w_i d_i = \sum_i w_i^2 + \sum_i d_i^2,$$

since the product term is zero:

$$\sum_i w_i d_i = \frac{\sum_i d_i x_i - \bar x \sum_i d_i}{\sum_j (x_j - \bar x)^2} = 0.$$

Hence

$$\sum_i c_i^2 = \sum_i w_i^2 + \sum_i d_i^2 \ge \sum_i w_i^2,$$

so $\mathrm{Var}(\tilde\beta_2) \ge \mathrm{Var}(\hat\beta_2)$, with equality only when $d_i = 0$ for all $i$, and this concludes the proof.
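The variance inequality for the slope can be illustrated numerically: pick any nonzero deviations $d_i$ satisfying $\sum_i d_i = 0$ and $\sum_i d_i x_i = 0$, form $c_i = w_i + d_i$, and compare $\sigma^2 \sum_i c_i^2$ with $\sigma^2 \sum_i w_i^2$. The construction below (projecting an arbitrary vector off the span of the constant and $x$) is a hypothetical sketch, not from the original text.

```python
import numpy as np

x = np.linspace(0.0, 10.0, 50)         # nonstochastic regressor (illustrative)
sigma2 = 1.0
S = ((x - x.mean()) ** 2).sum()
w = (x - x.mean()) / S                 # OLS slope weights

# Build deviations d orthogonal to both the constant vector and x,
# so that c = w + d still satisfies the unbiasedness conditions.
z = np.sin(x)                          # arbitrary starting vector (illustrative)
A = np.column_stack([np.ones_like(x), x])
d = z - A @ np.linalg.lstsq(A, z, rcond=None)[0]   # remove components along 1 and x
c = w + d

assert abs(d.sum()) < 1e-8 and abs((d * x).sum()) < 1e-8
var_ols   = sigma2 * (w ** 2).sum()    # = sigma^2 / S
var_other = sigma2 * (c ** 2).sum()    # = var_ols + sigma^2 * sum(d^2)
print(var_other > var_ols)             # True: OLS has the smaller variance
```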

### Proof that $\hat\beta_1$ is best

The OLS intercept can be written as a linear estimator:

$$\hat\beta_1 = \bar y - \hat\beta_2 \bar x = \sum_{i=1}^{n} v_i y_i,$$

where

$$v_i = \frac{1}{n} - \bar x w_i.$$

We shall use the facts that

$$\sum_i v_i = 1 \quad \text{and} \quad \sum_i v_i x_i = 0,$$

which follow from $\sum_i w_i = 0$ and $\sum_i w_i x_i = 1$.

The variance of $\hat\beta_1$ is given by

$$\mathrm{Var}(\hat\beta_1) = \sigma^2 \sum_i v_i^2 = \sigma^2 \left( \frac{1}{n} + \frac{\bar x^2}{\sum_j (x_j - \bar x)^2} \right).$$

Let

$$\tilde\beta_1 = \sum_{i=1}^{n} a_i y_i$$

be a linear estimator of $\beta_1$. We need to find the condition that makes $\tilde\beta_1$ unbiased. Taking expectations, we have

$$E(\tilde\beta_1) = \beta_1 \sum_i a_i + \beta_2 \sum_i a_i x_i,$$

and thus

$$E(\tilde\beta_1) = \beta_1 \;\leftrightarrow\; \sum_i a_i = 1 \ \text{and} \ \sum_i a_i x_i = 0.$$

The variance of $\tilde\beta_1$ is

$$\mathrm{Var}(\tilde\beta_1) = \sigma^2 \sum_i a_i^2.$$

Let

$$g_i = a_i - v_i.$$

Then

$$\sum_i g_i = \sum_i a_i - \sum_i v_i = 1 - 1 = 0 \quad \text{and} \quad \sum_i g_i x_i = \sum_i a_i x_i - \sum_i v_i x_i = 0 - 0 = 0.$$

The cross term vanishes:

$$\sum_i v_i g_i = \frac{1}{n}\sum_i g_i - \bar x \sum_i w_i g_i = 0,$$

since $\sum_i w_i g_i = \big(\sum_i g_i x_i - \bar x \sum_i g_i\big) \big/ \sum_j (x_j - \bar x)^2 = 0$. So the variance of $\tilde\beta_1$ satisfies

$$\mathrm{Var}(\tilde\beta_1) = \sigma^2 \sum_i v_i^2 + \sigma^2 \sum_i g_i^2 + 2\sigma^2 \sum_i v_i g_i = \sigma^2 \sum_i v_i^2 + \sigma^2 \sum_i g_i^2 \ge \mathrm{Var}(\hat\beta_1),$$

with equality only when $g_i = 0$ for all $i$. This concludes the proof.
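The intercept weights $v_i$ can likewise be checked numerically: they sum to one, are orthogonal to $x$, and $\sigma^2 \sum_i v_i^2$ reproduces the usual intercept-variance formula. A short sketch, with an illustrative regressor:

```python
import numpy as np

x = np.linspace(0.0, 10.0, 50)         # nonstochastic regressor (illustrative)
sigma2 = 1.0
S = ((x - x.mean()) ** 2).sum()
w = (x - x.mean()) / S                 # OLS slope weights
v = 1.0 / x.size - x.mean() * w        # intercept weights: v_i = 1/n - xbar * w_i

assert abs(v.sum() - 1.0) < 1e-9       # sum_i v_i = 1
assert abs((v * x).sum()) < 1e-9       # sum_i v_i x_i = 0
var_formula = sigma2 * (1.0 / x.size + x.mean() ** 2 / S)
print(abs(sigma2 * (v ** 2).sum() - var_formula) < 1e-9)   # True
```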
