
Markov Chains


Let us begin with an introduction to Markov chains. A Markov chain is a system that undergoes transitions from one state to another, where the states form a countable set. It is a random process in which the next state depends only on the current state, not on the sequence of events that preceded it. This property is called the Markov property, and the chain is named after the mathematician Andrey Markov.
The probability of moving from state j to state k in n time steps is
Pjk(n) = Pr(Yn = k | Y0 = j)
This is the main formula for the transition from one state to another.
The single-step transition probability can be written as
Pjk = Pr(Y1 = k | Y0 = j)
A time-homogeneous Markov chain satisfies
Pjk(n) = Pr(Ym+n = k | Ym = j) for every m ≥ 0
so the n-step transition probabilities do not depend on the time m at which the chain is observed.
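As a quick illustration, the n-step probabilities of a time-homogeneous chain are just the entries of the n-th power of the one-step transition matrix. The sketch below shows this in Python for a made-up two-state matrix P.

```python
import numpy as np

# n-step transition probabilities via matrix powers; P is a made-up
# two-state transition matrix, not one taken from the text.
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])

n = 3
P_n = np.linalg.matrix_power(P, n)   # P_n[j, k] = Pr(Y_n = k | Y_0 = j)
print(P_n)
```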
If the Markov chain is time homogeneous (this means the process is described by a single, time-independent transition matrix P = [Pij]), then the vector π is known as a stationary distribution if
πj = ∑i∈S πi Pij for every state j in the state space S
in other words, if the chain's distribution at time t is π, its distribution at time t + 1 is again π.
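A minimal sketch of computing a stationary distribution numerically, assuming the same made-up matrix P as above: the conditions π = πP and ∑j πj = 1 form a linear system that can be solved directly.

```python
import numpy as np

# Solve pi = pi P together with sum(pi) = 1 as a least-squares problem.
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])
S = len(P)
A = np.vstack([P.T - np.eye(S), np.ones(S)])   # (P^T - I) pi = 0, sum(pi) = 1
b = np.append(np.zeros(S), 1.0)
pi = np.linalg.lstsq(A, b, rcond=None)[0]
print(pi)   # for this P: approximately [5/6, 1/6]
```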
A state k is accessible from a state j if a system started in state j has a non-zero probability of being in state k at some point. In short, k is accessible from j if there is an integer n ≥ 0 such that
Pr(Yn = k | Y0 = j) = Pjk(n) > 0
Since n = 0 is allowed in this equation, every state is defined to be accessible from itself.
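For a finite chain, accessibility can be checked mechanically: k is accessible from j exactly when some power of P (including the zeroth) has a positive (j, k) entry. A hedged sketch, assuming a finite state space and a made-up example matrix:

```python
import numpy as np

def accessible(P, j, k):
    # For S states, walks of length <= S - 1 suffice, and the identity
    # term in (I + P)^(S - 1) covers the n = 0 case (j accessible from j).
    S = P.shape[0]
    reach = np.linalg.matrix_power(np.eye(S) + P, S - 1)
    return reach[j, k] > 0

P = np.array([[0.5, 0.5, 0.0],
              [0.0, 1.0, 0.0],   # state 1 is absorbing
              [0.0, 0.5, 0.5]])
print(accessible(P, 0, 1), accessible(P, 1, 0))   # True False
```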
A discrete-time Markov chain may or may not be adequate for a given problem, which motivates the continuous-time version. Some points about it are discussed below.
· Y(t), t ≥ 0 is a continuous-time Markov chain if it is a stochastic process taking values in a countable set (like 1, 2, 3, …) with the Markov property that
P[Y(t + s) = k | Y(s) = j, Y(u) = y(u) for 0 ≤ u < s] = P[Y(t + s) = k | Y(s) = j]
· The Markov property for Y(t) implies the discrete-time Markov property for the sequence of states visited, so that
Yn is an embedded Markov chain with transition matrix P = [Pij].
· In short, a CTMC is a process that stays in state j for an exponentially distributed time Tj with rate vj, after which it moves to some other state k with probability Pjk (a simulation sketch follows this list).
· The continuous-time Markov process is the continuous-time version of the Markov property.
· Markov processes are used to describe physical processes in which a system evolves continuously in time. They are applied to ensembles of independent systems as well as to a single system, and the probabilities are used to count the expected number of members in each state.
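The simulation sketch promised above: a continuous-time chain that holds in state j for an Exp(vj) time and then jumps according to the matrix Pjk. The rates v and jump matrix P are made-up example values, not taken from the text.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_ctmc(v, P, j0, t_end):
    # Hold in state j for an Exp(v[j]) time, then jump with probabilities P[j].
    t, j, path = 0.0, j0, [(0.0, j0)]
    while True:
        t += rng.exponential(1.0 / v[j])       # holding time T_j ~ Exp(v_j)
        if t >= t_end:
            return path
        j = rng.choice(len(P), p=P[j])         # next state drawn from row j
        path.append((t, j))

v = np.array([1.0, 2.0])          # made-up holding rates
P = np.array([[0.0, 1.0],         # made-up jump-chain matrix
              [1.0, 0.0]])
print(simulate_ctmc(v, P, 0, 5.0))
```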

Continuous Time Markov Chains

In probability theory, a continuous-time Markov chain is a random process X(t), t ≥ 0. It satisfies the Markov property and is the continuous-time counterpart of the Markov chain. The Markov property here says that the conditional distribution of the process at a future time depends only on its state at the present time; in other words, given the present state, the future of the process is independent of its past. A process is called a continuous-time Markov chain only if it possesses this Markov property.

The Markov property of continuous-time Markov chains:
Pr(X(t) = k | X(m) = l, X(tn−1) = ln−1, … , X(t1) = l1) = Pr(X(t) = k | X(m) = l)
where t1 < t2 < … < tn−1 < m < t. The formula shows that the process at the current time depends only on the most recent time prior to it; the process after time m does not depend on the process before time m.
Since the Markov property alone does not pin down how the transition probabilities vary with time, unlike discrete-time Markov chains, for continuous-time Markov chains we consider only the time-homogeneous case:
Pr(X(t) = k | X(m) = l) = Pr(X(t − m) = k | X(0) = l)
By time homogeneity, whenever the process enters a state we may treat that moment as time zero (i.e. take the process to start in an initial state l at time t = 0). The time the process spends in its current state is called the holding time in that state. When the continuous-time Markov process leaves the current state, the following conditions must hold true (a numerical check of the memoryless property behind them follows this list):
1. The distribution of the remaining holding time must not depend on how long the process has already been in the state.
2. The set of states it can move to must remain the same.
3. The probability of the process going to each other state must remain the same.
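The check promised above: these conditions rest on the exponential distribution being memoryless, Pr(T > s + g | T > s) = Pr(T > g). A small Monte Carlo sketch with made-up values of the rate v and the times s and g:

```python
import numpy as np

rng = np.random.default_rng(0)
v, s, g = 1.5, 2.0, 1.0                       # made-up rate and times
T = rng.exponential(1.0 / v, size=1_000_000)  # samples of T ~ Exp(v)
cond = (T[T > s] > s + g).mean()              # estimate of Pr(T > s+g | T > s)
print(cond, np.exp(-v * g))                   # both close to e^(-v g)
```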

The properties of Continuous time Markov chains:
The main entity of a continuous-time Markov chain is the rate Cij, which acts like an alarm clock that is set when we enter state i: there is one alarm for each state j the process can move to, and the first alarm to ring tells us which state is entered next. The time on each alarm clock is exponentially distributed with rate Cij.
Vi = ∑j∈M Cij
where Cij = 0 if the process cannot go from state i to state j. When the process leaves state i, the probability that it goes to state j is Cij / Vi, which is denoted by Pij; these are the transition probabilities of the jump chain. The three quantities are related by
Vi = ∑j∈M Cij
Cij = Vi · Pij
Pij = Cij / Vi
Here we assume that Vi is less than infinity for all states of the chain, since if Vi were infinite the process would leave a state immediately after entering it and there would be no holding time. We also assume that Vi > 0 for all states; if Vi = 0, the process stays in the current state forever.
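A small sketch of these relations, assuming a made-up rate matrix C with zero diagonal: the holding rates Vi and the jump-chain matrix Pij fall out in two lines.

```python
import numpy as np

C = np.array([[0.0, 2.0, 1.0],   # made-up rates C[i, j], with C[i, i] = 0
              [0.5, 0.0, 0.5],
              [1.0, 3.0, 0.0]])

V = C.sum(axis=1)       # V_i = sum over j of C_ij
P = C / V[:, None]      # P_ij = C_ij / V_i
print(V)
print(P)
```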

Since the time spent in the current state is exponentially distributed with rate Vi, we can calculate the probability that no transition occurs in a given time interval:
Pr(Ti > g) = e^(−Vi g)
where Ti is the holding time and g is the length of the interval.
Expanding the exponential in the above formula we get
Pr(Ti > g) = e^(−Vi g)
= 1 − Vi g + (Vi g)^2 / 2! − (Vi g)^3 / 3! + …
= 1 − Vi g + o(g)
or equivalently
Pr(Ti ≤ g) = Vi g + o(g)
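A quick numerical check of the small-g expansion, with a made-up rate Vi: Pr(Ti ≤ g) = 1 − e^(−Vi g) should approach Vi g as g shrinks.

```python
import numpy as np

V_i = 2.0                          # made-up holding rate
for g in (0.1, 0.01, 0.001):
    exact = 1 - np.exp(-V_i * g)   # Pr(T_i <= g)
    print(g, exact, V_i * g)       # difference is o(g)
```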
Now if Tj is the holding time in state j, then Tj is exponentially distributed with rate Vj, and Ti and Tj are independent random variables.
Let us discuss the limiting probabilities of continuous-time Markov chains. We denote the limiting probabilities by
lim Cij(t) as t → ∞
where Cij(t) = Pr(X(t) = j | X(0) = i) is the transition probability function of the continuous-time Markov chain. Earlier we considered the stationary distribution ∆ of a continuous-time Markov chain; ∆ can be interpreted as the long-run proportion of time that the process spends in each state. The limiting probability equals ∆ for all i ∈ M: no matter which state i the process starts in, the probability of being in state j at time t approaches ∆j as t increases. So in continuous-time Markov chains the limiting probabilities and the stationary probabilities are the same. Moreover, for each pair of states either
Cij(t) = 0 for all t > 0, or
Cij(t) > 0 for all t > 0
This is called the Lévy dichotomy.
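A sketch of computing the limiting probabilities numerically, assuming a made-up rate matrix C: build the generator Q (off-diagonal rates Cij, diagonal −Vi) and solve πQ = 0 with ∑j πj = 1.

```python
import numpy as np

C = np.array([[0.0, 2.0],          # made-up transition rates
              [1.0, 0.0]])
Q = C - np.diag(C.sum(axis=1))     # generator: Q[i, i] = -V_i

A = np.vstack([Q.T, np.ones(len(Q))])   # pi Q = 0 plus normalisation
b = np.append(np.zeros(len(Q)), 1.0)
pi = np.linalg.lstsq(A, b, rcond=None)[0]
print(pi)   # limiting probabilities, here [1/3, 2/3]
```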
Examples of Continuous time Markov chains are:
1. Poisson process (only i → i + 1)
State space 0, 1, 2, …
Ti ~ exp(λ) so vi = λ, i = 0, 1, 2, …
Pij = 1, j = i + 1
= 0, j ≠ i + 1
2. Pure Birth Process:
Same Pij as in the Poisson process, but with vi = λi.
Example: Yule process (linear birth process): λi = iλ.
3. Birth and death process (only i → i + 1 or i → i – 1 )
State space 0, 1,…. or 0, 1,….N
Time till ‘birth’ i → i + 1 is Xi ~ exp(λi)
Time till next ‘death’ i → i – 1 is Yi ~ exp (μi)
Time till next transition is min (Xi, Yi) ~ exp(vi) with vi = λi + μi
Pi,i+1 = P(Xi < Yi) = λi / (λi + μi)
Pi,i-1 = P(Xi > Yi) = μi / (λi + μi)
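A hedged sketch of one transition of a birth-death chain built from these formulas; the rate arrays lam and mu are made-up, with mu[0] = 0 so state 0 can only step up.

```python
import numpy as np

rng = np.random.default_rng(0)

def bd_step(i, lam, mu):
    # One transition from state i: hold for Exp(v_i), then birth or death.
    v_i = lam[i] + mu[i]
    hold = rng.exponential(1.0 / v_i)       # time till next transition
    birth = rng.random() < lam[i] / v_i     # P(i -> i+1) = lam_i / (lam_i + mu_i)
    return (i + 1 if birth else i - 1), hold

lam = np.array([1.0, 1.0, 1.0])   # made-up birth rates
mu  = np.array([0.0, 0.5, 0.5])   # made-up death rates, mu[0] = 0
print(bd_step(1, lam, mu))
```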

Applications of Continuous time Markov Chains:
Continuous-time Markov chains are used to model physical processes, ensembles of identical and independent systems, and the probabilities of the states such systems occupy.
1. In chemical reactions they are used to track the state of an enzyme as it passes through the reaction from one state to another, since the enzyme molecules are independent of each other. The chain also describes the probability of finding a molecule in a given state.
2. In queueing theory, where jobs arrive according to a Poisson process with rate R and service times are exponentially distributed, the number of jobs in the system is a continuous-time Markov chain on the non-negative integers.
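A minimal sketch of this queueing example, assuming made-up arrival and service rates R and mu: the number of jobs n evolves as a birth-death chain, with departures possible only when the queue is non-empty.

```python
import numpy as np

rng = np.random.default_rng(0)
R, mu = 1.0, 1.5                  # made-up arrival and service rates
n, t, t_end = 0, 0.0, 100.0
while True:
    rate = R + (mu if n > 0 else 0.0)   # total event rate in state n
    t += rng.exponential(1.0 / rate)
    if t >= t_end:
        break                            # n is the state at time t_end
    if rng.random() < R / rate:
        n += 1                           # arrival
    else:
        n -= 1                           # departure
print("jobs in system at t =", t_end, ":", n)
```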

Classification of States of Markov Chains

Each state of a Markov chain falls into one of three classifications, and each state belongs to one and only one category, so these categories partition the states. The categorization is based on finding the communicating classes, and the states of the Markov chain are partitioned into these classes. The condition for two states to communicate is that it is possible to go from each state to the other: for example, two states x and y communicate with each other if and only if it is possible to go from x to y and from y to x.
This can be summed up as follows: when a collection of objects is partitioned, each object falls into one and only one class. This is exactly an equivalence relation, according to which two objects are equivalent if one falls into the same partition as the other. An equivalence relation has three properties:
1. For any object X, the relation (X, X) exists, since an object is always equivalent to itself.
2. If (X, Y) exists then (Y, X) exists as well, since if X is equivalent to Y then Y is certainly equivalent to X.
3. If (X, Y) and (Y, Z) exist then the relation (X, Z) exists too, since if X is equivalent to Y and Y is equivalent to Z, then X is equivalent to Z.
Notice that the relation of communicating is an equivalence relation. The first property says that every state belongs to the same communicating class as itself. The second shows that if X and Y communicate then it is possible to go from each state to the other. The last one shows that if X communicates with Y and Y communicates with Z, then it is possible to go from X to Z, since one can move from X to Y and then from Y to Z.
The conclusion is that X and Y belong to the same communicating class if and only if they communicate, that is, if it is possible to go from X to Y and from Y to X.
There are three classifications of states for finite Markov chains: transient, ergodic and periodic. Two states in the same communicating class always have the same classification, so to classify a class we first find the state of that communicating class that is easiest to analyse and classify that state. Always find the communicating classes before classifying the states. The classifications of states are:
A state is transient if it is possible to leave the state and never return.
A state is periodic if it can be returned to only at multiples of some positive integer greater than 1; that integer is the period of the state.
A state is ergodic if it is neither transient nor periodic. This is all about the classification of states.
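A sketch of finding the communicating classes of a finite chain, assuming a made-up matrix P: two states are in the same class exactly when each is reachable from the other.

```python
import numpy as np

def communicating_classes(P):
    # reach[i, j] is True when j is reachable from i in some number of steps.
    S = P.shape[0]
    reach = np.linalg.matrix_power(np.eye(S) + P, S - 1) > 0
    classes, unassigned = [], set(range(S))
    while unassigned:
        i = unassigned.pop()
        cls = {i} | {j for j in unassigned if reach[i, j] and reach[j, i]}
        unassigned -= cls
        classes.append(sorted(cls))
    return classes

P = np.array([[0.5, 0.5, 0.0],    # made-up example: {0, 1} communicate,
              [0.5, 0.5, 0.0],    # state 2 is a class by itself
              [0.2, 0.3, 0.5]])
print(communicating_classes(P))   # [[0, 1], [2]] up to ordering
```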

Steady State Behaviour

The topic of our discussion is steady state behaviour, and as the name suggests, a system in steady state has many properties that do not change with time: they are steady. To define it mathematically, if a system is not changing any of its properties with respect to time, then the rate of change of each such property with respect to time is zero. So, for instance, if P is a property of the system, then the partial derivative of that property is:
∂ P / ∂ t = 0
Note that steady state behaviour is seen in a system only after some time has passed since the system was started. The initial phase is generally called the transient state, and the period from start-up until the steady state is reached is known as the start-up or warm-up period.
Steady state behaviour has many applications and plays a significant role in fields such as economics and thermodynamics. Understand that when a system starts, it initially changes with time and shows transient behaviour, but once it begins to show steady state behaviour at some point, it continues to show that behaviour in the future without any changes.
Let us understand this with some examples. The tanks in our homes are filled with fluids such as water or milk; they are examples of the transient state because the volume of fluid in them changes with time. But consider the flow of a fluid through pipes or tubes, or the flow of electricity through a network: these can be cases of steady state behaviour, since the flow through the tube or network is constant.
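To tie this back to Markov chains, here is a small illustration with the made-up two-state matrix used earlier: starting from a fixed state, the distribution changes during the warm-up period and then settles, so that its change per step is effectively zero.

```python
import numpy as np

P = np.array([[0.9, 0.1],
              [0.5, 0.5]])
d = np.array([1.0, 0.0])           # start in state 0 (transient phase)
for step in range(1, 31):
    d_next = d @ P
    if step % 10 == 0:
        print(step, d_next, np.abs(d_next - d).max())   # change -> 0
    d = d_next
```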