# Stochastic Processes

Economics 520
Lecture Note 9: Introduction to Stochastic Processes
This Version: October 5, 2013

These notes are based on S. Ross, Introduction to Probability Models, Academic Press, and J. Hamilton, Time Series Analysis, Princeton University Press.
Definition 1 A stochastic process {X(t), t ∈ T} is a collection of random variables: for each t ∈ T, X(t) is a random variable.
The set T is called an index set, and t ∈ T is often interpreted as time, so X(t) is the (random) state of a process at time t. Sometimes for simplicity we write X_t = X(t). Note that each X_t is a random variable, so really we should be writing X_t(ω): each X_t is a function from the sample space Ω to a subset of R. In applications we are often interested in modelling the evolution of some variable over time, so it is reasonable that the range of X_t is the same across time. In that case we call the range of X_t the state space.

If the index set T is countable, we call the stochastic process a discrete-time process. If the index set is an interval of the real line, we call the process a continuous-time process. Although t is often used to indicate time, it can be used in other ways as well. For example, when modelling spatial phenomena (e.g. geographical concentrations of pollution), we might use a two-dimensional index t corresponding to longitude and latitude.

Example 1: Consider flipping a coin repeatedly. The sample space Ω contains every possible infinite sequence of Hs and Ts. We could define the index set as T = {1, 2, 3, ...} and set X_1 = 1 if the first toss is heads and 0 otherwise, X_2 = 1 if the second toss is heads and 0 otherwise, and so on. This defines a stochastic process {X_t, t ∈ T}, where X_s is independent of X_r for s ≠ r. Next, we could define a new stochastic process {Y_t, t ∈ T}, where Y_t is the total number of heads up to time t:

    Y_t = Σ_{i=1}^t X_i.

Now there is a very distinct dependence between, say, Y_s and Y_{s+1}. We could also consider another stochastic process {Z_t, t ∈ T}, where

    Z_t = Y_t / t = (1/t) Σ_{i=1}^t X_i.
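The three processes X_t, Y_t, and Z_t are easy to simulate directly. A minimal sketch in Python (the function and variable names are ours, not from the notes):

```python
import random

def simulate(T, seed=None):
    """Simulate X_t (coin flips), Y_t (running count of heads),
    and Z_t (running average) for t = 1, ..., T."""
    rng = random.Random(seed)
    X, Y, Z = [], [], []
    heads = 0
    for t in range(1, T + 1):
        x = 1 if rng.random() < 0.5 else 0  # X_t: 1 for heads, 0 for tails
        heads += x                          # Y_t = X_1 + ... + X_t
        X.append(x)
        Y.append(heads)
        Z.append(heads / t)                 # Z_t = Y_t / t
    return X, Y, Z

X, Y, Z = simulate(10_000, seed=0)
print(Z[-1])  # close to 1/2 for large T
```

Running this for large T shows Z_t settling near 1/2, which previews the convergence idea discussed next.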
Since Z_t is the average of X_1, ..., X_t, we might think that Z_t would converge to 1/2 as t increases. We will come back to this idea in later lecture notes.

## Markov Chains
Suppose that {X_t, t = 1, 2, 3, ...} is a discrete-time stochastic process with a finite or countable state space. That is, X_t takes on a finite or countable number of possible values, and for simplicity let us say that the range is the nonnegative integers 0, 1, 2, 3, .... We can fully specify the joint probability distribution of these random variables by starting with the marginal probabilities

    Pr(X_1 = i) = f_{X_1}(i),    i = 0, 1, 2, ...

and then defining various conditional probabilities recursively:

    Pr(X_2 = j | X_1 = i),
    Pr(X_3 = j | X_2 = i, X_1 = k),
and so on. Suppose that the conditional probabilities have a simple form, depending only on the most recent past random variable:
    Pr(X_{t+1} = j | X_t = i, X_{t-1} = k_{t-1}, ..., X_1 = k_1) = Pr(X_{t+1} = j | X_t = i) = P_{ij},    for all t = 1, 2, 3, ...

We call such a stochastic process a Markov chain.
The numbers P_{ij} are the transition probabilities: P_{ij} is the probability of going from state i to state j. They must be nonnegative, P_{ij} ≥ 0 for all i and j, and each row must sum to one: Σ_{j=0}^∞ P_{ij} = 1 for each i. It's handy to collect these into a matrix:

    P = [ P_00  P_01  P_02  ⋯
          P_10  P_11  P_12  ⋯
           ⋮     ⋮     ⋮    ⋱ ]

Example 1 continued: For independent coin flips, the possible values are 0 and 1, and the transition matrix would be

    P = [ .5  .5
          .5  .5 ].

Next, consider Y_t, the cumulative count of heads. For any time t, Y_{t+1} is either equal to Y_t (with probability 1/2, if the next toss is tails) or to Y_t + 1 (with probability 1/2, if it is heads). Thus P_{ij} = .5 for j = i or j = i + 1, and P_{ij} = 0 otherwise.
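Given a transition matrix, a path of the chain can be simulated by repeatedly sampling the next state from the current state's row. A minimal sketch, using the coin-flip matrix above (function names are ours; `random.choices` is from the standard library):

```python
import random

def simulate_chain(P, x0, T, seed=None):
    """Simulate T steps of a Markov chain with transition matrix P,
    starting from state x0. States are 0, 1, ..., len(P) - 1."""
    rng = random.Random(seed)
    states = list(range(len(P)))
    path = [x0]
    for _ in range(T):
        i = path[-1]
        # Draw the next state from the conditional distribution P[i][.]
        j = rng.choices(states, weights=P[i])[0]
        path.append(j)
    return path

P = [[0.5, 0.5],
     [0.5, 0.5]]
path = simulate_chain(P, x0=0, T=20, seed=1)
print(path)
```

Note that only the current state enters the sampling step, which is exactly the Markov property: the rest of the history is irrelevant for the next draw.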

Example 2: The paper "Intrafirm Mobility and Sex...
