V.I.1.a Basic Definitions and Theorems about ARIMA models
First we define some important concepts. A stochastic process (also called a probabilistic process) is defined by a T-dimensional distribution function.
F(x_1, x_2, \dots, x_T) = P(X_1 \le x_1, X_2 \le x_2, \dots, X_T \le x_T) \qquad \text{(V.I.1-1)}
Before analyzing the structure of a time series model, one must make sure that the time series is stationary with respect to both the mean and the variance. First, we will assume statistical stationarity of all time series (later on, this restriction will be relaxed).
Statistical stationarity of a time series implies that the marginal probability distribution is time-independent, which means that:
the expected values and variances are constant
E(X_t) = \mu \quad \text{and} \quad V(X_t) = \sigma^2 \qquad \text{for } t = 1, 2, \dots, T
where T is the number of observations in the time series;
the autocovariances (and autocorrelations) must be constant
\operatorname{Cov}(X_t, X_{t+k}) = \gamma_k \qquad \text{for all } t \text{ and } k
where k is an integer time-lag;
the variables have a joint normal distribution f(X_1, X_2, \dots, X_T) with a marginal normal distribution in each dimension
X_t \sim N(\mu, \sigma^2) \qquad \text{for each } t
If the first two conditions are met but only this last (normality) condition is not, the process is said to be weakly stationary.
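As an illustration of the first stationarity condition, a simulated stationary series can be split into segments, whose sample means and variances should all estimate the same constants (the variable names below are for illustration only):

```python
import numpy as np

rng = np.random.default_rng(0)
# i.i.d. normal draws: a trivially stationary series with mu = 2, sigma^2 = 1
x = rng.normal(loc=2.0, scale=1.0, size=10_000)

# Split the series into quarters; under stationarity each segment's
# sample mean and variance estimate the same constants mu and sigma^2.
segments = np.split(x, 4)
means = [s.mean() for s in segments]
variances = [s.var(ddof=1) for s in segments]

print(max(means) - min(means))          # small: the mean does not drift over time
print(max(variances) - min(variances))  # small: the variance does not drift either
```

For a trending or heteroskedastic series these segment statistics would diverge, which is why stationarity is checked before modelling.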
Now it is possible to define white noise as a statistically stationary stochastic process defined by a marginal distribution function (V.I.1-1), where all X_t are independent variables (with zero covariances), with a joint normal distribution f(X_1, X_2, \dots, X_T), and with
E(X_t) = \mu \quad \text{and} \quad V(X_t) = \sigma^2
It follows directly from this definition that for any white noise process the probability density function can be written as
f(X_1, X_2, \dots, X_T) = \prod_{t=1}^{T} f(X_t) = \left(2\pi\sigma^2\right)^{-T/2} \exp\!\left(-\frac{1}{2\sigma^2}\sum_{t=1}^{T}\left(X_t - \mu\right)^2\right)
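Because the X_t of a white noise process are independent, the joint density factorizes into a product of univariate normal densities. A minimal numerical check of this factorization (zero mean assumed for simplicity; function names are illustrative):

```python
import math

def normal_pdf(x, sigma):
    """Univariate N(0, sigma^2) density."""
    return math.exp(-x**2 / (2 * sigma**2)) / (sigma * math.sqrt(2 * math.pi))

def white_noise_joint_pdf(xs, sigma):
    """Joint density of T independent N(0, sigma^2) variables:
    (2*pi*sigma^2)^(-T/2) * exp(-sum(x_t^2) / (2*sigma^2))."""
    T = len(xs)
    quad = sum(x**2 for x in xs)
    return (2 * math.pi * sigma**2) ** (-T / 2) * math.exp(-quad / (2 * sigma**2))

xs = [0.3, -1.2, 0.8]
sigma = 1.5
product = math.prod(normal_pdf(x, sigma) for x in xs)

# The product of marginals and the closed-form joint density agree.
print(product, white_noise_joint_pdf(xs, sigma))
```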
Define the autocovariance as
\gamma_k = \operatorname{Cov}(X_t, X_{t+k}) = E\left[(X_t - \mu)(X_{t+k} - \mu)\right]
whereas the autocorrelation is defined as
\rho_k = \frac{\gamma_k}{\gamma_0}
In practice, however, we only have the sample observations at our disposal. Therefore we use the sample autocorrelations
r_k = \frac{\sum_{t=1}^{T-k}\left(X_t - \bar{X}\right)\left(X_{t+k} - \bar{X}\right)}{\sum_{t=1}^{T}\left(X_t - \bar{X}\right)^2}
for any integer k.
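A minimal sketch of computing the sample autocorrelations (the helper name `sample_acf` is hypothetical; the mean-corrected sums-of-products estimator is used):

```python
import numpy as np

def sample_acf(x, max_lag):
    """Sample autocorrelations r_k: mean-corrected lag-k cross products
    divided by the total sum of squared deviations."""
    x = np.asarray(x, dtype=float)
    T = len(x)
    d = x - x.mean()
    denom = np.sum(d * d)
    return np.array([np.sum(d[: T - k] * d[k:]) / denom
                     for k in range(max_lag + 1)])

rng = np.random.default_rng(42)
r = sample_acf(rng.normal(size=5000), max_lag=5)
print(r[0])  # 1 by construction (lag 0)
print(r[1:]) # all small for simulated white noise
```

For white noise, the sample autocorrelations at nonzero lags scatter around zero with a standard deviation of roughly 1/sqrt(T), which connects to the variance results below.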
Remark that the autocovariance matrix and the autocorrelation matrix associated with a stationary stochastic process are always positive definite. This is easily shown, since any nonzero linear combination of the stochastic variables
L = \sum_{t=1}^{T} \lambda_t X_t
has a variance of
V(L) = \sum_{i=1}^{T} \sum_{j=1}^{T} \lambda_i \lambda_j \gamma_{|i-j|}
which is always positive.
This implies, for instance for T = 3, that the autocorrelation matrix
\begin{pmatrix} 1 & \rho_1 & \rho_2 \\ \rho_1 & 1 & \rho_1 \\ \rho_2 & \rho_1 & 1 \end{pmatrix}
is positive definite, so that |\rho_1| < 1, |\rho_2| < 1, and \rho_1^2 < \tfrac{1}{2}(1 + \rho_2).
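The positive-definiteness constraint for T = 3 can be checked numerically via eigenvalues (the helper names below are hypothetical, and the admissible/inadmissible (rho_1, rho_2) pairs are just examples):

```python
import numpy as np

def corr_matrix_t3(rho1, rho2):
    """Autocorrelation matrix of a stationary process for T = 3."""
    return np.array([[1.0, rho1, rho2],
                     [rho1, 1.0, rho1],
                     [rho2, rho1, 1.0]])

def is_positive_definite(m):
    """A symmetric matrix is positive definite iff all eigenvalues are > 0."""
    return bool(np.all(np.linalg.eigvalsh(m) > 0))

# rho_1^2 = 0.25 < (1 + rho_2)/2 = 0.6 -> admissible
print(is_positive_definite(corr_matrix_t3(0.5, 0.2)))   # True
# rho_1^2 = 0.81 > (1 + rho_2)/2 = 0.25 -> not a valid autocorrelation pair
print(is_positive_definite(corr_matrix_t3(0.9, -0.5)))  # False
```

Not every pair of numbers in (-1, 1) can be the first two autocorrelations of a stationary process; the constraint above rules out the second example.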
Bartlett proved that the variance of the autocorrelation of a stationary normal stochastic process can be formulated as
V(r_k) \approx \frac{1}{T} \sum_{i=-\infty}^{+\infty} \left( \rho_i^2 + \rho_{i+k}\,\rho_{i-k} - 4\,\rho_k\,\rho_i\,\rho_{i-k} + 2\,\rho_i^2\,\rho_k^2 \right) \qquad \text{(V.I.1-17)}
This expression can be shown to reduce to
V(r_k) \approx \frac{1}{T} \left[ \frac{\left(1+\varphi^2\right)\left(1-\varphi^{2k}\right)}{1-\varphi^2} - 2k\,\varphi^{2k} \right]
if the autocorrelation coefficients decrease exponentially as
\rho_k = \varphi^{k} \qquad (|\varphi| < 1).
Since the autocorrelations \rho_i for i > q (q a natural number) are equal to zero, expression (V.I.1-17) can be shown to be reformulated as
V(r_k) \approx \frac{1}{T} \left( 1 + 2 \sum_{i=1}^{q} \rho_i^2 \right) \qquad \text{for } k > q \qquad \text{(V.I.1-20)}
which is the so-called large-lag variance. Now it is possible to vary q from 1 to any desired integer number of autocorrelations, replace the theoretical autocorrelations by their sample estimates, and compute the square root of (V.I.1-20) to find the standard deviation of the sample autocorrelation.
Note that the standard deviation of one autocorrelation coefficient is almost always approximated by
s(r_k) \approx \frac{1}{\sqrt{T}}.
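The large-lag standard error can be sketched as follows (the helper name is hypothetical; sample autocorrelations replace the theoretical ones, as described above):

```python
import numpy as np

def large_lag_se(acf, q, T):
    """Bartlett's large-lag standard error for r_k with k > q:
    sqrt((1 + 2 * sum_{i=1}^{q} r_i^2) / T).
    `acf` holds sample autocorrelations with acf[0] = r_0 = 1."""
    return float(np.sqrt((1 + 2 * np.sum(np.square(acf[1 : q + 1]))) / T))

T = 400
# Hypothetical sample ACF: r_1 = 0.5, r_2 = 0.3, negligible beyond lag 2.
r = np.array([1.0, 0.5, 0.3, 0.0, 0.0, 0.0])

# With q = 0 the formula collapses to 1/sqrt(T).
print(large_lag_se(r, q=0, T=T))  # 0.05 = 1/sqrt(400)
# With q = 2 the first two autocorrelations widen the standard error.
print(large_lag_se(r, q=2, T=T))
```

The q = 2 value exceeds 1/sqrt(T), illustrating how nonzero low-order autocorrelations widen the confidence band used to judge higher-order sample autocorrelations.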
The covariances between autocorrelation coefficients have also been deduced by Bartlett:
\operatorname{Cov}(r_k, r_{k+s}) \approx \frac{1}{T} \sum_{i=-\infty}^{+\infty} \rho_i\,\rho_{i+s}
which is a good indicator of dependencies between autocorrelations. Remember therefore ...