# Time Series Analysis - ARIMA Models - Basic Definitions and Theorems about ARIMA Models

## V.I.1.a Basic Definitions and Theorems about ARIMA Models

First we define some important concepts. A stochastic process (i.e., a probabilistic process) is defined by a T-dimensional distribution function.


For a time series $X_1, X_2, \dots, X_T$ this marginal distribution function is

$$F_{X_1, \dots, X_T}(x_1, \dots, x_T) = P(X_1 \le x_1, X_2 \le x_2, \dots, X_T \le x_T) \qquad \text{(V.I.1-1)}$$

Before analyzing the structure of a time series model, one must make sure that the time series is stationary with respect to both the mean and the variance. First, we will assume statistical stationarity of all time series (later on, this restriction will be relaxed).

Statistical stationarity of a time series implies that the marginal probability distribution is time-independent, which means that:

- the expected values and variances are constant:

  $$E(X_t) = \mu < \infty, \qquad \operatorname{var}(X_t) = E(X_t - \mu)^2 = \sigma^2 < \infty \qquad (t = 1, 2, \dots, T) \qquad \text{(V.I.1-2)}$$

  where T is the number of observations in the time series;

- the autocovariances (and autocorrelations) are constant:

  $$\operatorname{cov}(X_t, X_{t+k}) = E\big[(X_t - \mu)(X_{t+k} - \mu)\big] = \gamma_k \qquad \text{for all } t \qquad \text{(V.I.1-3)}$$

  where k is an integer time-lag;

- the variable has a joint normal distribution $f(X_1, X_2, \dots, X_T)$ with a marginal normal distribution in each dimension:

  $$(X_1, X_2, \dots, X_T) \text{ jointly normal}, \qquad X_t \sim N(\mu, \sigma^2) \quad \text{for each } t \qquad \text{(V.I.1-4)}$$

If only this last (normality) condition is not met, the process is said to be weakly stationary.
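
To make these conditions concrete, here is a minimal Python sketch (the AR(1) example, the seed, and all names are illustrative additions, not from the text) that simulates a stationary series and informally checks that the mean and variance stay roughly constant across segments:

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_ar1(phi, sigma, T):
    """Simulate a stationary AR(1) process X_t = phi * X_{t-1} + a_t."""
    x = np.empty(T)
    # draw the initial value from the stationary distribution of the process
    x[0] = rng.normal(0.0, sigma / np.sqrt(1.0 - phi**2))
    for t in range(1, T):
        x[t] = phi * x[t - 1] + rng.normal(0.0, sigma)
    return x

x = simulate_ar1(phi=0.6, sigma=1.0, T=4000)

# informal stationarity check: mean and variance should be roughly
# constant across non-overlapping segments of the series
for segment in np.split(x, 4):
    print(f"mean = {segment.mean():6.3f}  variance = {segment.var():6.3f}")
```

A trending series, or one with changing variance, would fail this informal check, which is exactly what the stationarity requirement above rules out.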

Now it is possible to define white noise as a (statistically stationary) stochastic process defined by the marginal distribution function (V.I.1-1), where all $X_t$ are independent variables (hence with zero covariances) with a joint normal distribution $f(X_1, X_2, \dots, X_T)$, and with

$$E(X_t) = 0, \qquad \operatorname{var}(X_t) = \sigma^2 \qquad \text{for all } t \qquad \text{(V.I.1-5)}$$

It follows from this definition that the probability density function of any white noise process can be written as

$$f(X_1, X_2, \dots, X_T) = \prod_{t=1}^{T} f(X_t) = \left(2\pi\sigma^2\right)^{-T/2} \exp\!\left(-\frac{1}{2\sigma^2} \sum_{t=1}^{T} X_t^2\right) \qquad \text{(V.I.1-6)}$$
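
As a quick numerical illustration of the factorization in (V.I.1-6) as reconstructed above, the following sketch (illustrative names, numpy only) checks that the joint Gaussian log-density with covariance $\sigma^2 I$ equals the sum of the marginal log-densities:

```python
import numpy as np

rng = np.random.default_rng(0)
sigma, T = 1.5, 8
x = rng.normal(0.0, sigma, size=T)  # one realisation of Gaussian white noise

# joint log-density with covariance matrix sigma^2 * I, as in (V.I.1-6)
log_joint = -0.5 * T * np.log(2 * np.pi * sigma**2) - x @ x / (2 * sigma**2)

# sum of the marginal Gaussian log-densities
log_marginals = np.sum(-0.5 * np.log(2 * np.pi * sigma**2) - x**2 / (2 * sigma**2))

print(np.isclose(log_joint, log_marginals))  # True: the density factorises
```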

Define the autocovariance as

$$\gamma_k = \operatorname{cov}(X_t, X_{t+k}) = E\big[(X_t - \mu)(X_{t+k} - \mu)\big] \qquad \text{(V.I.1-7)}$$

or

$$\gamma_k = E(X_t X_{t+k}) - \mu^2 \qquad \text{(V.I.1-8)}$$

whereas the autocorrelation is defined as

$$\rho_k = \frac{\gamma_k}{\gamma_0} = \frac{\operatorname{cov}(X_t, X_{t+k})}{\operatorname{var}(X_t)} \qquad \text{(V.I.1-9)}$$

In practice, however, we only have the sample observations at our disposal. Therefore we use the sample autocorrelations

$$r_k = \frac{\sum_{t=1}^{T-k} (X_t - \bar{X})(X_{t+k} - \bar{X})}{\sum_{t=1}^{T} (X_t - \bar{X})^2} \qquad \text{(V.I.1-10)}$$

for any integer k.
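
A direct implementation of (V.I.1-10) might look as follows (a sketch; the function name is an illustrative choice):

```python
import numpy as np

def sample_autocorrelation(x, k):
    """Sample autocorrelation r_k, following (V.I.1-10)."""
    x = np.asarray(x, dtype=float)
    dev = x - x.mean()
    return np.sum(dev[:len(x) - k] * dev[k:]) / np.sum(dev**2)

# for white noise, every r_k should be close to zero
rng = np.random.default_rng(1)
x = rng.normal(size=500)
print([round(sample_autocorrelation(x, k), 3) for k in range(1, 6)])
```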

Note that the autocovariance matrix and the autocorrelation matrix associated with a stationary stochastic process,

$$\boldsymbol{\Gamma}_T = \begin{pmatrix} \gamma_0 & \gamma_1 & \cdots & \gamma_{T-1} \\ \gamma_1 & \gamma_0 & \cdots & \gamma_{T-2} \\ \vdots & \vdots & \ddots & \vdots \\ \gamma_{T-1} & \gamma_{T-2} & \cdots & \gamma_0 \end{pmatrix} \qquad \text{(V.I.1-11)}$$

$$\mathbf{P}_T = \begin{pmatrix} 1 & \rho_1 & \cdots & \rho_{T-1} \\ \rho_1 & 1 & \cdots & \rho_{T-2} \\ \vdots & \vdots & \ddots & \vdots \\ \rho_{T-1} & \rho_{T-2} & \cdots & 1 \end{pmatrix} = \frac{1}{\gamma_0}\,\boldsymbol{\Gamma}_T \qquad \text{(V.I.1-12)}$$

are always positive definite. This is easily shown, since any linear combination of the stochastic variables,

$$L = l_1 X_1 + l_2 X_2 + \dots + l_T X_T = \sum_{i=1}^{T} l_i X_i \qquad \text{(V.I.1-13)}$$

has a variance of

$$\operatorname{var}(L) = \sum_{i=1}^{T} \sum_{j=1}^{T} l_i\, l_j\, \gamma_{|i-j|} = \mathbf{l}'\, \boldsymbol{\Gamma}_T\, \mathbf{l} \qquad \text{(V.I.1-14)}$$

which is always positive.

This implies, for instance, for T = 3 that

$$\begin{vmatrix} 1 & \rho_1 \\ \rho_1 & 1 \end{vmatrix} > 0 \qquad \text{and} \qquad \begin{vmatrix} 1 & \rho_1 & \rho_2 \\ \rho_1 & 1 & \rho_1 \\ \rho_2 & \rho_1 & 1 \end{vmatrix} > 0 \qquad \text{(V.I.1-15)}$$

or

$$-1 < \rho_1 < 1, \qquad -1 < \rho_2 < 1, \qquad \rho_1^2 < \tfrac{1}{2}\,(1 + \rho_2) \qquad \text{(V.I.1-16)}$$
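
The admissibility constraint can be checked numerically: the following sketch (illustrative, based on the T = 3 conditions in (V.I.1-15) and (V.I.1-16) as reconstructed above) tests whether a candidate pair $(\rho_1, \rho_2)$ yields a positive definite autocorrelation matrix:

```python
import numpy as np

def is_admissible_t3(rho1, rho2):
    """Check positive definiteness of the T = 3 autocorrelation
    matrix (V.I.1-12) for a candidate pair (rho1, rho2)."""
    P = np.array([[1.0,  rho1, rho2],
                  [rho1, 1.0,  rho1],
                  [rho2, rho1, 1.0 ]])
    return bool(np.all(np.linalg.eigvalsh(P) > 0.0))

# rho1 = 0.9, rho2 = 0.5 violates rho1^2 < (1 + rho2)/2  (0.81 > 0.75):
print(is_admissible_t3(0.9, 0.5))  # False
print(is_admissible_t3(0.6, 0.5))  # True  (0.36 < 0.75)
```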

Bartlett proved that the variance of the sample autocorrelation of a stationary normal stochastic process can be formulated as

$$\operatorname{var}(r_k) \simeq \frac{1}{T} \sum_{i=-\infty}^{+\infty} \left( \rho_i^2 + \rho_{i+k}\,\rho_{i-k} - 4\,\rho_k\,\rho_i\,\rho_{i-k} + 2\,\rho_i^2\,\rho_k^2 \right) \qquad \text{(V.I.1-17)}$$

This expression can be shown to reduce to

$$\operatorname{var}(r_k) \simeq \frac{1}{T} \left[ \frac{(1 + \phi^2)(1 - \phi^{2k})}{1 - \phi^2} - 2k\,\phi^{2k} \right] \qquad \text{(V.I.1-18)}$$

if the autocorrelation coefficients decrease exponentially, i.e.

$$\rho_i = \phi^{|i|}, \qquad |\phi| < 1 \qquad \text{(V.I.1-19)}$$
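
Assuming the reduction (V.I.1-18) as reconstructed above, the implied standard deviations can be tabulated with a few lines (the function name is illustrative):

```python
import numpy as np

def bartlett_var_ar1(phi, k, T):
    """Approximate var(r_k) for exponentially decaying autocorrelations,
    evaluating (V.I.1-18) as reconstructed above."""
    return ((1 + phi**2) * (1 - phi**(2 * k)) / (1 - phi**2)
            - 2 * k * phi**(2 * k)) / T

phi, T = 0.7, 400
for k in (1, 2, 5, 10):
    print(f"k = {k:2d}:  sd(r_k) ~ {np.sqrt(bartlett_var_ar1(phi, k, T)):.3f}")
```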

If the autocorrelations are equal to zero for all lags i > q (with q a natural number), expression (V.I.1-17) can be shown to reduce to

$$\operatorname{var}(r_k) \simeq \frac{1}{T} \left( 1 + 2 \sum_{i=1}^{q} \rho_i^2 \right) \qquad (k > q) \qquad \text{(V.I.1-20)}$$

which is the so-called large-lag variance. Now it is possible to vary q from 1 up to any desired number of autocorrelations, replace the theoretical autocorrelations by their sample estimates, and compute the square root of (V.I.1-20) to obtain the standard deviation of the sample autocorrelation.
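
That recipe can be sketched directly in Python (names are illustrative; the formulas follow (V.I.1-10) and (V.I.1-20) as reconstructed above):

```python
import numpy as np

def sample_acf(x, max_lag):
    """Sample autocorrelations r_1 .. r_{max_lag}, as in (V.I.1-10)."""
    x = np.asarray(x, dtype=float)
    dev = x - x.mean()
    denom = np.sum(dev**2)
    return np.array([np.sum(dev[:len(x) - k] * dev[k:]) / denom
                     for k in range(1, max_lag + 1)])

def large_lag_se(r, T):
    """Standard error of r_k from (V.I.1-20), treating the sample
    autocorrelations at lags 1 .. k-1 as estimates of rho_1 .. rho_q."""
    return np.array([np.sqrt((1 + 2 * np.sum(r[:k]**2)) / T)
                     for k in range(len(r))])

rng = np.random.default_rng(2)
x = rng.normal(size=300)
r = sample_acf(x, 10)
print(np.round(large_lag_se(r, len(x)), 3))
```

For the first lag there are no preceding autocorrelations, so the expression collapses to $1/\sqrt{T}$, which anticipates the approximation given next.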

Note that the standard deviation of a single autocorrelation coefficient is almost always approximated by

$$\operatorname{sd}(r_k) \approx \frac{1}{\sqrt{T}} \qquad \text{(V.I.1-21)}$$

The covariances between autocorrelation coefficients were also derived by Bartlett:

$$\operatorname{cov}(r_k,\, r_{k+s}) \simeq \frac{1}{T} \sum_{i=-q}^{+q} \rho_i\, \rho_{i+s} \qquad (k > q,\; s \ne 0) \qquad \text{(V.I.1-22)}$$

which is a good indicator of the dependence between autocorrelation coefficients. Keep in mind, therefore, ...
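
Assuming the large-lag form of (V.I.1-22) reconstructed above, the covariance between two sample autocorrelations beyond lag q can be approximated as follows (an illustrative sketch):

```python
import numpy as np

def large_lag_cov(rho, s, T):
    """Approximate cov(r_k, r_{k+s}) via the large-lag form of
    (V.I.1-22), given nonzero autocorrelations rho_1 .. rho_q."""
    q = len(rho)
    # autocorrelation sequence rho_{-q} .. rho_q, with rho_0 = 1
    full = np.concatenate([rho[::-1], [1.0], rho])
    total = 0.0
    for i in range(-q, q + 1):
        j = i + s
        if -q <= j <= q:
            total += full[i + q] * full[j + q]
    return total / T

# an MA(1)-like pattern: rho_1 = 0.4, zero beyond lag 1
print(large_lag_cov(np.array([0.4]), s=1, T=200))  # 0.004
```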