Steady-State Probabilities

The long-run behaviour of a Markov process is described by its steady-state probabilities. For an irreducible ergodic Markov chain it can be shown that

 

lim (n → ∞) pij(n) = πj      (i.e., independent of i)

where the πj satisfy the steady-state equations

πj = Σi πi pij   for every state j,   together with Σj πj = 1 and πj ≥ 0.

Here the πj's are called the steady-state probabilities of the Markov chain, because the probability of finding the process in a certain state, say j, after a large number of transitions tends to the value πj, independent of the initial probability distribution defined over the states.
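Numerically, the steady-state vector can be obtained by solving the balance equations together with the normalising condition. The following is a minimal sketch, assuming NumPy; the helper name steady_state is ours and not from the text:

```python
import numpy as np

def steady_state(P):
    """Steady-state probabilities of an irreducible ergodic Markov chain.

    Solves pi = pi P together with sum_j pi_j = 1 by replacing one of the
    (linearly dependent) balance equations with the normalising condition.
    """
    n = P.shape[0]
    A = P.T - np.eye(n)      # balance equations: (P^T - I) pi = 0
    A[-1, :] = 1.0           # replace the last equation by sum_j pi_j = 1
    b = np.zeros(n)
    b[-1] = 1.0
    return np.linalg.solve(A, b)
```

For a transition matrix P (rows summing to 1) given as a NumPy array, steady_state(P) returns the vector (π0, ..., πM).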

 

Also we have

µjj = 1/πj

where µjj is the expected (mean) recurrence time of state j.

 

Example 6. Find the mean recurrence time for each state of the Markov chain with transition probability matrix

          | 0.5   0.3   0.2 |
    P  =  | 0.2   0.4   0.4 |
          | 0.1   0.5   0.4 |

Solution. We have the steady-state equations

π0 = π0 (0.5) + π1 (0.2) + π2 (0.1)

π1 = π0 (0.3) + π1 (0.4) + π2 (0.5)

π2 = π0 (0.2) + π1 (0.4) + π2 (0.4)

 

Rearranging these equations and adding the normalising condition, we get

−0.5π0 + 0.2π1 + 0.1π2 = 0

0.3π0 − 0.6π1 + 0.5π2 = 0

0.2π0 + 0.4π1 − 0.6π2 = 0

π0 + π1 + π2 = 1

Solving these equations, we obtain

π0 = 0.2353

π1 = 0.4118

π2 = 0.3529

 

Hence the mean recurrence time for each state is given by

 

µ00 = 1/π0 = 4.2499

µ11 = 1/π1 = 2.4284

µ22 = 1/π2 = 2.8337
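As a cross-check on Example 6, here is a short sketch (assuming NumPy, with the small solve repeated inline so the snippet stands alone):

```python
import numpy as np

# Transition matrix of Example 6.
P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.4, 0.4],
              [0.1, 0.5, 0.4]])

# Balance equations (P^T - I) pi = 0, with the last one replaced by sum_j pi_j = 1.
A = P.T - np.eye(3)
A[-1, :] = 1.0
b = np.array([0.0, 0.0, 1.0])

pi = np.linalg.solve(A, b)   # approx [0.2353, 0.4118, 0.3529], i.e. [4/17, 7/17, 6/17]
mu = 1.0 / pi                # mean recurrence times mu_jj = 1 / pi_j
print(pi)                    # steady-state probabilities
print(mu)                    # approx [4.25, 2.4286, 2.8333]
```

The exact solution is π = (4/17, 7/17, 6/17), so the exact mean recurrence times are 17/4 = 4.25, 17/7 ≈ 2.4286 and 17/6 ≈ 2.8333; the figures 4.2499, 2.4284 and 2.8337 above differ only because the πj were rounded to four decimals before taking reciprocals.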