**Published:** February 23, 2013

# Lecture 12: ARCH and GARCH Models


## Models of Changing Volatility (12.2)

Let the innovation in an asset return, $\eta_{t+1}$, be defined as having mean zero conditional on time-$t$ information. We define $\sigma_t^2$ to be the time-$t$ conditional variance of $\eta_{t+1}$, i.e., the conditional expectation of $\eta_{t+1}^2$. We assume that, conditional on time-$t$ information, the innovation is normally distributed:

$$\eta_{t+1} \sim N(0, \sigma_t^2) \tag{1}$$

The unconditional variance of the innovation, $\sigma^2$, is just the unconditional expectation of $\sigma_t^2$:

$$\sigma^2 \equiv E[\eta_{t+1}^2] = E[\sigma_t^2] \tag{2}$$

The variability of $\sigma_t^2$ around its mean does not change the unconditional variance $\sigma^2$, but it does affect higher moments of the unconditional distribution of $\eta_{t+1}$. With time-varying $\sigma_t^2$, the unconditional distribution of $\eta_{t+1}$ has fatter tails than a normal distribution. To show this, define $\eta_{t+1}$ as follows:

$$\eta_{t+1} = \sigma_t \epsilon_{t+1} \tag{3}$$

where $\epsilon_{t+1}$ is IID with zero mean and unit variance. A useful measure of tail thickness is the fourth moment, or kurtosis: $K(y) = E[y^4]/E[y^2]^2$. It is well known that the kurtosis of a normal random variable is 3; hence $K(\epsilon_{t+1}) = E[(\epsilon_{t+1} - \mu)^4]/\sigma^4 = E[\epsilon_{t+1}^4] = 3$. For the innovations $\eta_{t+1}$ we obtain:

$$
\begin{align}
K(\eta_{t+1}) &= \frac{E[\sigma_t^4]\,E[\epsilon_{t+1}^4]}{(E[\sigma_t^2])^2} && \text{(independence)} \tag{4}\\
&= \frac{3\,E[\sigma_t^4]}{(E[\sigma_t^2])^2} \tag{5}\\
&\ge \frac{3\,(E[\sigma_t^2])^2}{(E[\sigma_t^2])^2} = 3 && \text{(Jensen's inequality)} \tag{6}
\end{align}
$$
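The fat-tail result can be checked numerically. The sketch below (my own illustration, not part of the lecture) draws $\eta_t = \sigma_t \epsilon_t$ with $\sigma_t$ alternating between a small and a large value, and compares the sample kurtosis to the normal benchmark of 3:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Time-varying volatility: sigma_t takes a small and a large value with equal probability
sigma = rng.choice([0.5, 2.0], size=n)
eps = rng.standard_normal(n)   # IID, zero mean, unit variance
eta = sigma * eps              # eta_t = sigma_t * eps_t, as in equation (3)

def kurtosis(y):
    """K(y) = E[y^4] / E[y^2]^2, which equals 3 for a normal random variable."""
    return np.mean(y**4) / np.mean(y**2)**2

# Theoretical value from (5): 3 E[sigma^4] / (E[sigma^2])^2 >= 3
k_theory = 3 * np.mean(sigma**4) / np.mean(sigma**2)**2
print(kurtosis(eta), k_theory)  # both well above 3
```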

The unconditional distribution is a mixture of normal distributions, some with small variances that concentrate mass around the mean and some with large variances that put mass in the tails of the distribution. The result says that the mixed distribution of $\eta_{t+1} = \sigma_t \epsilon_{t+1}$ has fatter tails than the normal. Recall that Jensen's inequality says: let $X$ be a random variable with mean $E[X]$ and let $g(\cdot)$ be a convex function; then $E[g(X)] \ge g(E[X])$. For example, note that $g(x) = x^2$ is convex; hence $E[X^2] \ge (E[X])^2$. This says that the variance of $X$, which is $E[X^2] - (E[X])^2$, is nonnegative.

To capture the serial correlation of volatility, Engle (1982) proposed the class of ARCH models. These models define the conditional variance as a distributed lag of past squared innovations:

$$\sigma_t^2 = \omega + \alpha(L)\eta_t^2 \tag{7}$$
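The Jensen's-inequality step used in (6) rests on $E[X^2] \ge (E[X])^2$ for any random variable; here is a toy numerical check (my own, with an arbitrary non-degenerate distribution):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.exponential(scale=2.0, size=100_000)  # any non-degenerate distribution works

e_g = np.mean(x**2)    # E[g(X)] with the convex function g(x) = x^2
g_e = np.mean(x)**2    # g(E[X])
print(e_g, g_e, e_g - g_e)  # the gap is the sample variance of x, nonnegative
```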

where $\alpha(L)$ is a polynomial in the lag operator. To ensure that $\sigma_t^2$ is positive, the coefficients $\omega$ and $\alpha(L)$ must be nonnegative. As a means of estimating a model with persistent movements in volatility without estimating many coefficients in $\alpha(L)$, the GARCH (Generalized Autoregressive Conditional Heteroskedasticity) model has been proposed:

$$\sigma_t^2 = \omega + \beta(L)\sigma_{t-1}^2 + \alpha(L)\eta_t^2 \tag{8}$$

As in the time-series ARMA(p, q) model, we have the GARCH(p, q) model, which is defined as

$$\sigma_t^2 = \omega + \sum_{i=1}^{p} \beta_i \sigma_{t-i}^2 + \sum_{j=1}^{q} \alpha_j \eta_{t-j+1}^2 \tag{9}$$
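Equation (9) translates directly into a one-step recursion. Below is a minimal sketch (the function name `garch_pq_variance` is my own, not from the lecture) that updates $\sigma_t^2$ from lagged variances and lagged squared innovations:

```python
import numpy as np

def garch_pq_variance(omega, beta, alpha, sigma2_lags, eta2_lags):
    """One step of equation (9):
    sigma_t^2 = omega + sum_i beta_i * sigma_{t-i}^2 + sum_j alpha_j * eta_{t-j+1}^2
    sigma2_lags[i-1] holds sigma_{t-i}^2; eta2_lags[j-1] holds eta_{t-j+1}^2.
    """
    return omega + np.dot(beta, sigma2_lags) + np.dot(alpha, eta2_lags)

# The GARCH(1,1) special case reduces to equation (10):
s2 = garch_pq_variance(0.1, [0.85], [0.1], sigma2_lags=[1.2], eta2_lags=[0.9])
print(s2)  # 0.1 + 0.85*1.2 + 0.1*0.9 = 1.21
```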


Empirically, in most cases only the GARCH(1,1) model is used, and it is written as:

$$
\begin{align}
\sigma_t^2 &= \omega + \beta\sigma_{t-1}^2 + \alpha\eta_t^2 \tag{10}\\
&= \omega + (\alpha + \beta)\sigma_{t-1}^2 + \alpha(\eta_t^2 - \sigma_{t-1}^2) \tag{11}\\
&= \omega + (\alpha + \beta)\sigma_{t-1}^2 + \alpha\sigma_{t-1}^2(\epsilon_t^2 - 1) \tag{12}
\end{align}
$$
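The three forms (10)–(12) are algebraically identical, which is easy to verify numerically. A quick check (the parameter values are my own illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(2)
omega, alpha, beta = 0.05, 0.1, 0.85  # illustrative values with alpha + beta < 1

sigma2_prev = 1.3                     # sigma_{t-1}^2
eps_t = rng.standard_normal()         # epsilon_t, IID with zero mean and unit variance
eta2_t = sigma2_prev * eps_t**2       # eta_t^2 = sigma_{t-1}^2 * epsilon_t^2

form10 = omega + beta * sigma2_prev + alpha * eta2_t
form11 = omega + (alpha + beta) * sigma2_prev + alpha * (eta2_t - sigma2_prev)
form12 = omega + (alpha + beta) * sigma2_prev + alpha * sigma2_prev * (eps_t**2 - 1)

print(form10, form11, form12)  # all three agree
```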

Here the term $(\eta_t^2 - \sigma_{t-1}^2)$ can be thought of as a shock to volatility. The coefficient $\alpha$ measures the extent to which a volatility shock carries into the next period, and $(\alpha + \beta)$ measures the autoregressive component. The volatility shock $(\eta_t^2 - \sigma_{t-1}^2)$ can be rewritten as $\sigma_{t-1}^2(\epsilon_t^2 - 1)$, which is a demeaned $\chi^2$ variable multiplied by past volatility $\sigma_{t-1}^2$.

Equation (10) can also be rewritten so that it has an ARMA(1,1) representation: by adding $\eta_{t+1}^2$ to both sides and moving $\sigma_t^2$ to the other side, we obtain:

$$\eta_{t+1}^2 = \omega + (\alpha + \beta)\eta_t^2 + (\eta_{t+1}^2 - \sigma_t^2) - \beta(\eta_t^2 - \sigma_{t-1}^2) \tag{13}$$
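Representation (13) follows mechanically from (10), and it too can be verified numerically (illustrative values of my own):

```python
import numpy as np

rng = np.random.default_rng(4)
omega, alpha, beta = 0.05, 0.1, 0.85    # illustrative GARCH(1,1) parameters

sigma2_tm1 = 1.3                        # sigma_{t-1}^2
eps_t, eps_tp1 = rng.standard_normal(2) # epsilon_t and epsilon_{t+1}

eta2_t = sigma2_tm1 * eps_t**2                          # eta_t^2
sigma2_t = omega + beta * sigma2_tm1 + alpha * eta2_t   # equation (10)
eta2_tp1 = sigma2_t * eps_tp1**2                        # eta_{t+1}^2

# Right-hand side of the ARMA(1,1) representation (13)
rhs = (omega + (alpha + beta) * eta2_t
       + (eta2_tp1 - sigma2_t) - beta * (eta2_t - sigma2_tm1))
print(eta2_tp1, rhs)  # the two sides agree
```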

Unlike a standard ARMA(1,1) model, however, the shocks $(\eta_{t+1}^2 - \sigma_t^2)$ are heteroskedastic rather than homoskedastic.

**Stationarity and Persistence**

The GARCH(1,1) model is stationary when $(\alpha + \beta) < 1$. In this case it is easy to construct forecasts. First, note from equation (2) that the unconditional variance of $\eta_{t+1}$ is $E[\sigma_t^2] = \omega/(1 - \alpha - \beta)$. Then, from equation (12) and the law of iterated expectations, ...
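The unconditional-variance formula $\omega/(1 - \alpha - \beta)$ can be checked by simulating a long stationary GARCH(1,1) path and comparing the sample variance of $\eta$ against it. A minimal sketch with illustrative parameters of my own:

```python
import numpy as np

rng = np.random.default_rng(3)
omega, alpha, beta = 0.05, 0.1, 0.85  # alpha + beta = 0.95 < 1, so stationary
n = 500_000

eta = np.empty(n)
sigma2 = omega / (1 - alpha - beta)   # start at the unconditional variance
for t in range(n):
    eta[t] = np.sqrt(sigma2) * rng.standard_normal()
    sigma2 = omega + beta * sigma2 + alpha * eta[t]**2   # equation (10)

uncond = omega / (1 - alpha - beta)   # = 1.0 for these parameters
print(np.var(eta), uncond)            # sample variance close to omega/(1-alpha-beta)
```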
