# Time Series


**Published:** December 7, 2012

## Chapter Three: Univariate Time Series Models

Univariate time series models © WISE

### 3.1 Preliminaries

We denote the univariate time series of interest as yt.

• yt is observed for t = 1, 2, . . . , T ;

• y0, y−1, . . . , y1−p are available;

• Ωt−1 denotes the history or information set at time t − 1.

Call such a sequence of random variables a time series.


#### Martingales

Let {yt} denote a sequence of random variables and let It = {yt, yt−1, . . .} denote a set of conditioning information, or information set, based on the past history of yt. The sequence {yt, It} is called a martingale if

• It−1 ⊂ It (It is a filtration)

• E [|yt|] < ∞

• E [yt|It−1] = yt−1 (martingale property)


#### Random walk model

The most common example of a martingale is the random walk model

yt = yt−1 + εt,   εt ∼ WN(0, σ²)

where y0 is a fixed initial value.

Letting It = {yt, . . . , y0} implies E [yt|It−1] = yt−1, since E [εt|It−1] = 0.
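A quick Monte Carlo sketch of the martingale property (the fixed value y_{t−1} = 2.5, σ = 1, and the sample size are arbitrary choices, not from the text): holding the history fixed, the average of yt across many fresh shocks should be close to yt−1.

```python
import random

random.seed(0)

def next_step(y_prev, sigma=1.0):
    # One step of the random walk y_t = y_{t-1} + eps_t, eps_t ~ N(0, sigma^2).
    return y_prev + random.gauss(0.0, sigma)

# Fix the history at y_{t-1} = 2.5 (arbitrary) and average y_t over many
# fresh shocks: the conditional mean E[y_t | I_{t-1}] should be close to y_{t-1}.
y_prev = 2.5
n = 200_000
cond_mean = sum(next_step(y_prev) for _ in range(n)) / n
print(cond_mean)  # close to 2.5
```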


#### Law of Iterated Expectations

Definition 1. Let It and Jt be information sets such that It ⊂ Jt (Jt is the larger information set). The Law of Iterated Expectations states that E [Y |It] = E [E [Y |Jt]|It].

Let {yt, It} be a martingale. Then

E [yt|It−2] = E [E [yt|It−1]|It−2] = E [yt−1|It−2] = yt−2.

Iterating this argument, it follows that

E [yt|It−k] = yt−k for any k ≥ 1.


#### Martingale difference sequence

Definition 2. Let {εt} be a sequence of random variables with an associated information set It. The sequence {εt, It} is called a martingale difference sequence (MDS) if

• It−1 ⊂ It

• E [εt|It−1] = 0 (MDS property)

If {yt, It} is a martingale, an MDS {εt, It} may be constructed by defining εt = yt − E [yt|It−1].
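To illustrate the construction, one can difference a simulated random walk: since E [yt|It−1] = yt−1, the resulting εt = yt − yt−1 should have mean zero and no serial correlation. A sketch with arbitrary parameters:

```python
import random

random.seed(1)

# Simulate a random walk (a martingale), then recover the MDS
# eps_t = y_t - E[y_t | I_{t-1}] = y_t - y_{t-1}.
T = 100_000
y = [0.0]
for _ in range(T):
    y.append(y[-1] + random.gauss(0.0, 1.0))

eps = [y[t] - y[t - 1] for t in range(1, T + 1)]

mean_eps = sum(eps) / T
# Sample autocovariance at lag 1: should be near zero for an MDS.
acov1 = sum(eps[t] * eps[t - 1] for t in range(1, T)) / T
print(mean_eps, acov1)
```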


#### Linear time series models

Wold’s decomposition theorem (c.f. Fuller (1996), pg. 96) states that any covariance stationary time series {yt} has a linear process, or infinite-order moving average, representation of the form

yt = μ + Σ_{k=0}^∞ ψk εt−k,   ψ0 = 1,   Σ_{k=0}^∞ ψk² < ∞,   εt ∼ WN(0, σ²)

where εt is a white noise (WN) process.


In the Wold form, it can be shown that

E [yt] = μ

γ0 = Var(yt) = σ² Σ_{k=0}^∞ ψk²

γj = Cov(yt, yt−j) = σ² Σ_{k=0}^∞ ψk ψk+j

ρj = γj / γ0 = (Σ_{k=0}^∞ ψk ψk+j) / (Σ_{k=0}^∞ ψk²)

The moving average weights in the Wold form are also called impulse responses, since

∂yt+s / ∂εt = ψs,   s = 1, 2, . . .

For a stationary and ergodic time series,

lim_{s→∞} ψs = 0,

and the long-run cumulative impulse response is finite:

Σ_{s=0}^∞ ψs < ∞.

A plot of ψs against s is called the impulse response function (IRF).
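For a concrete example (not from the text), a stationary AR(1), yt = φyt−1 + εt with |φ| < 1, has Wold weights ψs = φ^s, so the IRF decays geometrically and the cumulative response sums to 1/(1 − φ):

```python
# AR(1) Wold weights: psi_s = phi**s (phi = 0.8 is an arbitrary choice).
phi = 0.8
psi = [phi ** s for s in range(200)]  # truncation is harmless: the tail is tiny

print(psi[0], psi[1], psi[5])  # geometric decay toward zero
cum = sum(psi)                 # approximates 1/(1 - phi) = 5.0
print(cum)
```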


A very rich and practically useful class of stationary and ergodic processes is the autoregressive-moving average (ARMA) class of models made popular by Box and Jenkins (1976).

ARMA(p, q) models take the form of a pth-order stochastic difference equation

yt − μ = φ1(yt−1 − μ) + · · · + φp(yt−p − μ) + εt + θ1εt−1 + · · · + θq εt−q,   εt ∼ WN(0, σ²).
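The difference equation can be simulated directly. A minimal sketch, with arbitrary parameter values (μ = 1, φ = 0.5, θ = 0.3) chosen only for illustration:

```python
import random

random.seed(42)

# Simulate an ARMA(1,1) from the stochastic difference equation:
# y_t - mu = phi*(y_{t-1} - mu) + eps_t + theta*eps_{t-1}.
mu, phi, theta, sigma = 1.0, 0.5, 0.3, 1.0
T = 50_000

y, eps_prev, y_prev = [], 0.0, mu  # start at the unconditional mean
for _ in range(T):
    eps = random.gauss(0.0, sigma)
    y_t = mu + phi * (y_prev - mu) + eps + theta * eps_prev
    y.append(y_t)
    y_prev, eps_prev = y_t, eps

sample_mean = sum(y) / T
print(sample_mean)  # near mu for a stationary ARMA
```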


The ARMA(p, q) model may be compactly expressed using lag polynomials.

**Lag Operator Notation.**

The lag operator L is defined such that for any time series {yt}, Lyt = yt−1. It has the following properties: L2yt = L · Lyt = yt−2, L0 = 1 and L−1yt = yt+1.

The operator ∆ = 1 − L creates the first difference of a time series: ∆yt = (1 − L)yt = yt − yt−1.


Define φ(L) = 1 − φ1L − · · · − φpLp and θ(L) = 1 + θ1L + · · · + θq Lq; the ARMA model may then be expressed as

φ(L)(yt − μ) = θ(L)εt.
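One way to sanity-check the lag-polynomial form is to simulate an ARMA(1,1) path and verify numerically that φ(L)(yt − μ) and θ(L)εt coincide term by term (a sketch; all parameter values are arbitrary):

```python
import random

random.seed(7)

# Verify the identity phi(L)(y_t - mu) = theta(L) eps_t on a simulated
# ARMA(1,1) path, where phi(L) = 1 - phi1*L and theta(L) = 1 + theta1*L.
mu, phi1, theta1 = 1.0, 0.5, 0.3
T = 1_000

eps = [random.gauss(0.0, 1.0) for _ in range(T)]
y = [mu]  # start at the unconditional mean
for t in range(1, T):
    y.append(mu + phi1 * (y[t - 1] - mu) + eps[t] + theta1 * eps[t - 1])

# Left side: phi(L)(y_t - mu) = (y_t - mu) - phi1*(y_{t-1} - mu)
lhs = [(y[t] - mu) - phi1 * (y[t - 1] - mu) for t in range(1, T)]
# Right side: theta(L) eps_t = eps_t + theta1*eps_{t-1}
rhs = [eps[t] + theta1 * eps[t - 1] for t in range(1, T)]

max_gap = max(abs(a - b) for a, b in zip(lhs, rhs))
print(max_gap)  # essentially zero (floating-point identity)
```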
