We now consider the long-run behaviour of a Markov process. For an irreducible ergodic Markov chain it can be shown that

lim_{n→∞} p_{ij}^{(n)} = π_j (i.e., *independent of i*),

where π_j satisfies the steady-state equations

π_j = Σ_i π_i p_{ij}, for every state *j*,

Σ_j π_j = 1.

Here the π_j's are called the steady-state probabilities of the Markov chain, because the probability of finding the process in a certain state, say *j*, after a large number of transitions tends to the value π_j, independent of the initial probability distribution defined over the states. The steady-state probabilities are related to the mean recurrence times by

π_j = 1/µ_{jj},

where µ_{jj} is the expected recurrence time of state *j*.
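As a quick illustration of this limit, here is a minimal sketch in plain Python (the two-state matrix `P` is a made-up example, not one from the text): it starts from a uniform distribution and repeatedly applies π ← πP until the distribution stops changing, which for an irreducible ergodic chain converges to the steady-state probabilities.

```python
def steady_state(P, tol=1e-12, max_iter=100_000):
    """Power iteration: apply pi <- pi P until the change is below tol."""
    n = len(P)
    pi = [1.0 / n] * n  # arbitrary starting distribution
    for _ in range(max_iter):
        new = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
        if max(abs(a - b) for a, b in zip(new, pi)) < tol:
            return new
        pi = new
    return pi

# Hypothetical two-state chain (each row sums to 1).
P = [[0.7, 0.3],
     [0.4, 0.6]]
pi = steady_state(P)
# pi now satisfies pi_j = sum_i pi_i p_ij and sum_j pi_j = 1;
# for this P the exact answer is (4/7, 3/7).
```

The same limit is reached from any starting distribution, which is exactly the "independent of the initial probability distribution" statement above.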


**Example **6. *Find the mean recurrence time for each state of the Markov chain with transition probability matrix*

P = | 0.5  0.3  0.2 |
    | 0.2  0.4  0.4 |
    | 0.1  0.5  0.4 |

**Solution. **We have the steady-state equations

π_0 = π_0 (0.5) + π_1 (0.2) + π_2 (0.1)

π_1 = π_0 (0.3) + π_1 (0.4) + π_2 (0.5)

π_2 = π_0 (0.2) + π_1 (0.4) + π_2 (0.4)

which on rearrangement give

- 0.5π_0 + 0.2π_1 + 0.1π_2 = 0

0.3π_0 - 0.6π_1 + 0.5π_2 = 0

0.2π_0 + 0.4π_1 - 0.6π_2 = 0

together with the normalizing condition

π_0 + π_1 + π_2 = 1

Solving these equations we obtain

π_0 = 4/17 ≈ 0.2353

π_1 = 7/17 ≈ 0.4118

π_2 = 6/17 ≈ 0.3529

Hence the mean recurrence time for each state is given by

µ_{00} = 1/π_0 = 17/4 = 4.25

µ_{11} = 1/π_1 = 17/7 ≈ 2.4286

µ_{22} = 1/π_2 = 17/6 ≈ 2.8333
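The worked numbers above can be checked in exact arithmetic. The following sketch (plain Python, using the standard-library `fractions` module) solves the rearranged steady-state equations from the solution — with the redundant third balance equation replaced by the normalizing condition — by Gauss–Jordan elimination, then takes reciprocals for the mean recurrence times:

```python
from fractions import Fraction as F

# Augmented system: -0.5*pi0 + 0.2*pi1 + 0.1*pi2 = 0
#                    0.3*pi0 - 0.6*pi1 + 0.5*pi2 = 0
#                        pi0 +     pi1 +     pi2 = 1
A = [[F(-5, 10), F(2, 10), F(1, 10), F(0)],
     [F(3, 10), F(-6, 10), F(5, 10), F(0)],
     [F(1), F(1), F(1), F(1)]]

# Gauss-Jordan elimination with exact rational arithmetic.
for col in range(3):
    piv = next(r for r in range(col, 3) if A[r][col] != 0)
    A[col], A[piv] = A[piv], A[col]          # bring a nonzero pivot up
    for r in range(3):
        if r != col and A[r][col] != 0:      # clear the column elsewhere
            f = A[r][col] / A[col][col]
            A[r] = [a - f * b for a, b in zip(A[r], A[col])]

pi = [A[i][3] / A[i][i] for i in range(3)]   # steady-state probabilities
mu = [1 / p for p in pi]                     # mean recurrence times
```

This gives π = (4/17, 7/17, 6/17) ≈ (0.2353, 0.4118, 0.3529) and µ = (17/4, 17/7, 17/6), agreeing with the values above up to rounding.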
