Title: EE255/CPS226 Discrete Time Markov Chain (DTMC)
1. EE255/CPS226: Discrete Time Markov Chain (DTMC)
- Dept. of Electrical & Computer Engineering
- Duke University
- Email: bbm_at_ee.duke.edu, kst_at_ee.duke.edu
2. Discrete Time Markov Chain
- Markov process: the dynamic evolution is such that the future state depends only on the present state (the past is irrelevant).
- Markov chain: discrete state (or sample) space.
- DTMC: the time index is also discrete, i.e., the system is observed only at discrete instants of time.
- X0, X1, .., Xn, .. : observed states of a particular ensemble member (of the sample space) at discrete times t0, t1, .., tn, ..
- X0, X1, .., Xn, .. describe the states of a DTMC.
- Xn = j means the system state at time step n is j. Then, for a DTMC,
  P(Xn = in | X0 = i0, X1 = i1, ..., Xn-1 = in-1) = P(Xn = in | Xn-1 = in-1)
- pj(n) = P(Xn = j) (pmf), and
- pjk(m,n) = P(Xn = k | Xm = j) (transition probability function)
3. Transition Probability
- pjk(m,n) is the transition probability function of a DTMC.
- Homogeneous DTMC: pjk(m,n) = pjk(n-m), i.e., the transition probabilities exhibit the stationarity property. For such a DTMC,
- 1-step transition prob.: pjk = pjk(1) = P(Xn = k | Xn-1 = j),
- 0-step transition prob. is taken as pjk(0) = 1 if j = k, and 0 otherwise.
- The joint pmf is given by,
  P(X0 = i0, X1 = i1, ..., Xn = in)
  = P(X0 = i0, X1 = i1, ..., Xn-1 = in-1) . P(Xn = in | X0 = i0, X1 = i1, ..., Xn-1 = in-1)
  = P(X0 = i0, X1 = i1, ..., Xn-1 = in-1) . P(Xn = in | Xn-1 = in-1)   (due to the Markov property)
  = P(X0 = i0, X1 = i1, ..., Xn-1 = in-1) . pin-1,in
  = ...
  = pi0(0) pi0,i1(1) ... pin-1,in(1) = pi0(0) pi0,i1 ... pin-1,in
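A minimal numerical sketch of this factorization, assuming an illustrative 2-state homogeneous DTMC with transition matrix P and initial pmf p(0) (values chosen only for demonstration):

```python
import numpy as np

# Hypothetical 2-state homogeneous DTMC (illustrative values only)
P = np.array([[0.75, 0.25],   # 1-step transition probabilities p_ij
              [0.50, 0.50]])
p0 = np.array([1/3, 2/3])     # initial pmf p_i(0)

def path_probability(path):
    """Joint pmf P(X0=i0, ..., Xn=in) = p_i0(0) * p_i0,i1 * ... * p_in-1,in."""
    prob = p0[path[0]]
    for j, k in zip(path[:-1], path[1:]):
        prob *= P[j, k]
    return prob

print(path_probability([0, 1, 1, 0]))  # probability of observing the path 0 -> 1 -> 1 -> 0
```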
4. Transition Probability Matrix
- The initial prob. is pi0(0) = P(X0 = i0). In general,
- p0(0) = P(X0 = 0), ..., pk(0) = P(X0 = k), etc., or,
- p(0) = [p0(0), p1(0), ..., pk(0), ...]   (initial prob. vector)
- This allows us to define the transition prob. matrix P = [pjk], whose (j,k) entry is the 1-step transition probability pjk.
- Sum of the ith row elements: pi,0 + pi,1 + ... = 1
- Any such square matrix with non-negative entries whose row sums equal 1 is called a stochastic matrix.
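A small sketch of the stochastic-matrix property (non-negative entries, every row summing to 1), using an assumed example matrix:

```python
import numpy as np

def is_stochastic(P, tol=1e-9):
    """Check that P is a stochastic matrix: non-negative entries, each row summing to 1."""
    P = np.asarray(P, dtype=float)
    return bool(np.all(P >= 0) and np.allclose(P.sum(axis=1), 1.0, atol=tol))

P = np.array([[0.75, 0.25],
              [0.50, 0.50]])
print(is_stochastic(P))  # True
```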
5. State Transition Diagram
- pij describes the random evolution of the state value from i to j.
- Nodes are labeled i, j, etc., and an arc from i to j is labeled pij.
- The concept of a reward ri (cost or penalty) for each state i allows evaluation of various interesting performance measures.
- Example: 2-state DTMC for a cascade binary comm. channel. The signal values 0 and 1 form the state values.
[Figure: two nodes labeled i and j with an arc from i to j labeled pij]
6. Total Probability
- By the theorem of total probability, for m < n,
  pk(n) = P(Xn = k) = Σj P(Xn = k | Xm = j) P(Xm = j) = Σj pjk(m,n) pj(m)
7. n-Step Transition Probability
- For a DTMC, find the n-step transition probability pij(n) = P(Xm+n = j | Xm = i).
- The events "state reaches k from i" and "state reaches j from k" are independent due to the Markov property (i.e., no history).
- Invoking the total probability theorem gives the Chapman-Kolmogorov equation:
  pij(m+n) = Σk pik(m) pkj(n)
- Let P(n) be the n-step prob. transition matrix whose (i,j) entry is pij(n). Setting m = 1 and n = n-1 in the above equation,
  P(n) = P . P(n-1), and hence P(n) = P^n
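A brief sketch, with an assumed 2-state matrix, showing that the n-step matrix is obtained by matrix powers and that the Chapman-Kolmogorov relation P(m+n) = P(m) P(n) holds numerically:

```python
import numpy as np

# Illustrative 2-state transition matrix (assumed values)
P = np.array([[0.75, 0.25],
              [0.50, 0.50]])

# Chapman-Kolmogorov: p_ij(m+n) = sum_k p_ik(m) p_kj(n), i.e. P(m+n) = P(m) @ P(n)
P2 = P @ P                                # 2-step transition matrix
P5 = np.linalg.matrix_power(P, 5)
print(np.allclose(P5, np.linalg.matrix_power(P, 2) @ np.linalg.matrix_power(P, 3)))  # True
```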
8. Marginal pmf
- The marginal pmf follows from total probability: pj(n) = P(Xn = j) = Σi pi(0) pij(n).
- j, in general, can assume countably many values 0, 1, 2, ... Defining the vector p(n) = [p0(n), p1(n), ..., pj(n), ...],
- pj(n) for j = 0, 1, 2, ... can be written in vector form as p(n) = p(0) P(n),
- or, p(n) = p(0) P^n.
- P^n can easily be computed if the state space is finite. However, if the state space is countably infinite, it may not be possible to compute P^n (and p(n)) directly.
9. Marginal pmf Example
- For a 2-state DTMC described by its 1-step transition prob. matrix (with 0 < a, b < 1),
  P = [[1-a, a], [b, 1-b]]
- the n-step transition prob. matrix is given by,
  P^n = 1/(a+b) [[b, a], [b, a]] + (1-a-b)^n/(a+b) [[a, -a], [-b, b]]
- The proof follows easily by induction, i.e., assuming that the above holds for P^(n-1), then P^n = P . P^(n-1).
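A quick numerical check of this closed form, assuming the standard 2-state parameterization P = [[1-a, a], [b, 1-b]] (the slide's matrix images were lost, so this parameterization is an assumption consistent with the parameters a and b used on the next slide):

```python
import numpy as np

def closed_form_Pn(a, b, n):
    """Closed-form n-step matrix for the 2-state chain P = [[1-a, a], [b, 1-b]]."""
    A = np.array([[b, a], [b, a]]) / (a + b)
    B = np.array([[a, -a], [-b, b]]) / (a + b)
    return A + (1 - a - b) ** n * B

a, b, n = 0.25, 0.5, 4
P = np.array([[1 - a, a], [b, 1 - b]])
print(np.allclose(closed_form_Pn(a, b, n), np.linalg.matrix_power(P, n)))  # True
```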
10. Computing Marginal pmf
- Previous example of a cascade of digital comm. channels, each stage described by a 2-state DTMC. We want to find p(n) (with a = 0.25, b = 0.5).
- The (1,1) element of P^n for n = 2 and n = 3 follows from the closed form above.
- Assuming the initial pmf p(0) = [p0(0), p1(0)] = [1/3, 2/3] gives p(n) = p(0) P^n (see the sketch below).
- What happens to P^n as n becomes very large (n → ∞)?
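A numerical sketch of this computation under the same assumed parameterization, with a = 0.25, b = 0.5, and p(0) = [1/3, 2/3] as given above:

```python
import numpy as np

a, b = 0.25, 0.5                        # parameters from the example
P = np.array([[1 - a, a], [b, 1 - b]])  # assumed standard 2-state parameterization
p0 = np.array([1/3, 2/3])               # initial pmf from the slide

for n in (2, 3, 50):
    Pn = np.linalg.matrix_power(P, n)
    print(n, Pn[1, 1], p0 @ Pn)         # p_11(n) and the marginal pmf p(n)

# As n grows, every row of P^n approaches [b/(a+b), a/(a+b)] = [2/3, 1/3],
# so p(n) approaches that same vector regardless of p(0).
```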
11. DTMC State Classification
- From the previous example, as n goes to infinity, pij(n) becomes independent of n and i! Specifically, lim n→∞ pij(n) exists and depends only on j.
- Not all Markov chains exhibit this behavior.
- State classification may be based on the distinction that, asymptotically,
- some states may be visited infinitely many times, whereas some other states may be visited only a small number of times.
- Transient state: a state is transient iff there is a non-zero probability that the system will NOT return to this state.
- Define Xji to be the number of visits to state i, starting from state j. Then E[Xji] = Σn pji(n).
- For a transient state i, the expected visit count must be finite, which requires pji(n) → 0 as n → ∞. Eventually, the system will always leave state i.
12. DTMC State Classification (contd.)
- State i is said to be recurrent iff, starting from state i, the process eventually returns to state i with probability 1.
- For a recurrent state, the time to return is a relevant measure. Define fij(n) as the conditional prob. that the first visit to j from i occurs in exactly n steps.
- If j = i, then fii(n) denotes the prob. of returning to i for the first time in exactly n steps.
- Known result: pij(n) = Σ l=1..n fij(l) pjj(n-l).
- Let fii = Σ n≥1 fii(n), the probability of ever returning to i; state i is recurrent iff fii = 1.
- The mean recurrence time for state i is μi = Σ n≥1 n fii(n).
13. Recurrent State
- A recurrent state i is non-null (positive) recurrent if its mean recurrence time μi is finite, and null recurrent if μi is infinite.
- Let i be recurrent and pii(n) > 0 for some n > 0.
- For state i, define the period di as the GCD of all such positive n's that result in pii(n) > 0.
- If di = 1, the state is aperiodic; if di > 1, it is periodic.
- Absorbing state: state i is absorbing iff pii = 1.
- Communicating states: i and j are said to be communicating if there exist directed paths from i to j and from j to i.
- Closed set of states: a communicating set of states C forms a closed set if no state outside of C can be reached from any state in C.
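A small sketch, assuming a finite chain given by its transition matrix, that estimates the period di as the GCD of all step counts n (up to a chosen horizon) with pii(n) > 0:

```python
import numpy as np
from math import gcd

def period_of_state(P, i, max_n=200):
    """Period d_i = GCD of all n > 0 with p_ii(n) > 0 (scanned up to max_n steps)."""
    d = 0
    Pn = np.eye(len(P))
    for n in range(1, max_n + 1):
        Pn = Pn @ P                      # Pn now holds the n-step matrix P^n
        if Pn[i, i] > 1e-12:
            d = gcd(d, n)
    return d                             # 1 => aperiodic, >1 => periodic

P = np.array([[0.0, 1.0],                # deterministic 2-cycle: period 2
              [1.0, 0.0]])
print(period_of_state(P, 0))             # 2
```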
14. Irreducible Markov Chains
- The states of a Markov chain can be partitioned into disjoint subsets c1, c2, .., ck-1, ck, such that
- ci, i = 1, 2, .., k-1, are closed sets of recurrent non-null states, and
- ck is the set of transient states.
- If each ci contains only one state, then the ci's form a set of absorbing states.
- If k = 2 and ck is empty, then c1 forms an irreducible Markov chain.
- An irreducible Markov chain is one in which every state can be reached from every other state in a finite no. of steps, i.e., for all i, j ∈ I, pij(n) > 0 for some integer n > 0. Examples:
- The cascade of digital comm. channels is irreducible (see the sketch after the figure).
[Figure: 2-state transition diagram with states 0 and 1]
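A minimal reachability check of irreducibility for a finite chain, assuming only the zero/non-zero pattern of P matters:

```python
import numpy as np

def is_irreducible(P):
    """Check irreducibility: every state reachable from every other in a finite number of steps."""
    n = len(P)
    A = (np.asarray(P) > 0).astype(int)                      # adjacency pattern of P
    R = np.linalg.matrix_power(np.eye(n, dtype=int) + A, n - 1)  # reachability within n-1 steps
    return bool(np.all(R > 0))

P = np.array([[0.75, 0.25],
              [0.50, 0.50]])
print(is_irreducible(P))   # True: the 2-state channel chain is irreducible
```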
15. Irreducible Markov Chains (contd.)
- If one state is recurrent and aperiodic, then so are all the other states. The same result holds if one state is periodic or transient.
- For a finite, aperiodic, irreducible Markov chain, pij(n) becomes independent of i and n as n goes to infinity.
- All rows of P^n become identical in the limit.
16. Irreducible Markov Chains (contd.)
- Let vj = lim n→∞ pj(n). The law of total probability gives pj(n) = Σi pi(n-1) pij.
- Therefore, in the limit, the 1st eq. can be rewritten as vj = Σi vi pij.
- In matrix form, v = v P.
- v is a probability vector, therefore Σj vj = 1.
- Self-reading exercise (theorems on pp. 351).
- For an aperiodic, irreducible, finite-state DTMC, the limiting probabilities vj exist, are independent of the initial state, and are the unique solution of v = vP together with Σj vj = 1.
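A short sketch of solving v = vP together with Σj vj = 1 for a finite, aperiodic, irreducible DTMC (the example matrix is the assumed 2-state channel chain used earlier):

```python
import numpy as np

def steady_state(P):
    """Solve v = vP with sum(v) = 1 for a finite, aperiodic, irreducible DTMC."""
    n = len(P)
    # Stack the balance equations (P^T - I) v = 0 with the normalization sum(v) = 1
    A = np.vstack([P.T - np.eye(n), np.ones(n)])
    b = np.zeros(n + 1)
    b[-1] = 1.0
    v, *_ = np.linalg.lstsq(A, b, rcond=None)
    return v

P = np.array([[0.75, 0.25],
              [0.50, 0.50]])
print(steady_state(P))   # approximately [2/3, 1/3]
```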
17. Irreducible Markov Chain Example
- A typical computer program goes through a continuous cycle of compute and I/O phases.
- The resulting DTMC is irreducible with period 1 (aperiodic). Therefore, the steady-state probabilities exist and are obtained by solving v = vP with Σj vj = 1.
18. Sojourn Time
- If Xn = i, then Xn+1 = j should depend only on the current state i, and not on the time already spent in state i.
- Let Ti be the time spent in state i before moving to another state.
- The DTMC will remain in state i in the next step with prob. pii, and leave it with prob. 1 - pii.
- At the next step (n+1), toss a coin: H → Xn+1 = i, T → Xn+1 ≠ i.
- At each step, we perform a Bernoulli trial. Then Ti is geometrically distributed:
  P(Ti = k) = pii^(k-1) (1 - pii), k = 1, 2, ..., with mean E[Ti] = 1/(1 - pii).
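A small simulation sketch of the geometric sojourn time, assuming a self-loop probability pii (value chosen for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_sojourn(p_ii, n_samples=100_000):
    """Simulate the sojourn time in a state with self-loop probability p_ii."""
    # Each step is a Bernoulli trial: stay with prob p_ii, leave with prob 1 - p_ii,
    # so the sojourn time is geometric with mean 1 / (1 - p_ii).
    return rng.geometric(1 - p_ii, size=n_samples)

p_ii = 0.75
samples = sample_sojourn(p_ii)
print(samples.mean(), 1 / (1 - p_ii))   # both close to 4
```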
19. Markov Modulated Bernoulli Process
- Generalization of a Bernoulli process: the Bernoulli process parameter is controlled (modulated) by a DTMC.
- The simplest case is binary-state (on-off) modulation:
- On → Bernoulli param. c1; Off → c2 (or 0).
- The controlling process is an irreducible DTMC with steady-state probabilities v0 and v1.
- Reward assignment: r0 = c1 and r1 = c2. This gives the expected steady-state reward Σi ri vi = c1 v0 + c2 v1, as illustrated below.
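A sketch of the on-off MMBP reward computation, with assumed (illustrative) modulating-chain probabilities and Bernoulli parameters c1 and c2:

```python
import numpy as np

# Hypothetical on-off modulating DTMC (illustrative transition probabilities)
P = np.array([[0.9, 0.1],    # state 0 = On
              [0.4, 0.6]])   # state 1 = Off
c1, c2 = 0.8, 0.1            # assumed Bernoulli success parameters in each state

# Steady-state probabilities of the modulating chain (v = vP, sum(v) = 1)
A = np.vstack([P.T - np.eye(2), np.ones(2)])
v, *_ = np.linalg.lstsq(A, np.array([0.0, 0.0, 1.0]), rcond=None)

rewards = np.array([c1, c2])           # r0 = c1, r1 = c2
print(v, rewards @ v)                  # steady-state success probability of the MMBP
```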
20. Examples of Irreducible DTMCs
- Example 7.14: non-homogeneous DTMC for s/w reliability.
- Slotted ALOHA wireless multi-access protocol:
- Advantages:
- 2x more efficient than pure ALOHA.
- Automatically adapts to changes in station population.
- Disadvantages:
- Throughput maximum of 36.8% (theoretical limit).
- Requires queuing (buffering) for re-transmission.
- Requires slot synchronization.
21. Slotted ALOHA DTMC
- New and backlogged requests compete for the channel.
- Successful channel access occurs iff:
- Exactly one new req. and no backlogged req., or
- Exactly one backlogged req. and no new req.
- DTMC state = number of backlogged requests.
[Figure: slotted ALOHA model: n backlogged and m-n new-request stations contend for the shared channel]
22. Slotted ALOHA (contd.)
- In a particular state n, successful contention occurs with prob. rn.
- rn may be assigned as a reward for state n (a sketch of one possible rn follows).
23. Discrete-Time Birth-Death Processes
- A special type of DTMC in which P has a tri-diagonal form: from state i, only transitions to the neighboring states i-1 and i+1 (and back to i itself) have non-zero probability.
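A sketch that builds such a tri-diagonal P, assuming hypothetical per-state birth and death probabilities (the names births/deaths are illustrative, not the slides' notation):

```python
import numpy as np

def birth_death_P(births, deaths):
    """Build the tri-diagonal transition matrix of a discrete-time birth-death DTMC."""
    N = len(births)
    P = np.zeros((N, N))
    for i in range(N):
        b = births[i] if i < N - 1 else 0.0   # no birth out of the last state
        d = deaths[i] if i > 0 else 0.0       # no death out of state 0
        if i < N - 1:
            P[i, i + 1] = b                   # birth: i -> i+1
        if i > 0:
            P[i, i - 1] = d                   # death: i -> i-1
        P[i, i] = 1.0 - b - d                 # remain in state i
    return P

P = birth_death_P(births=[0.3, 0.3, 0.3, 0.0], deaths=[0.0, 0.2, 0.2, 0.2])
print(P.sum(axis=1))   # each row sums to 1
```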
24. DTMC Solution Steps
- Solving v = vP, together with Σj vj = 1, gives the steady-state probabilities.
25. Finite DTMCs with Absorbing States
- Example: a program having a set of interacting modules. Absorbing state = failure state (its absorption probability, ps5, gives the unreliability).
26. Finite DTMCs, Absorbing States (contd.)
- M contains useful information.
- Xij: rv denoting the random number of visits to state j, starting from state i.
- E[Xij] = mij (for i, j = 1, 2, .., n-1). We need to prove this statement.
- There are three distinct situations that can be enumerated.
- Let the rv Y denote the state at step 2 (the initial state being i):
- E[Xij | Y = n] = δij   (the first transition leads to the absorbing state sn)
- E[Xij | Y = k] = E[Xkj + δij] = E[Xkj] + δij
- That is,
  Xij = δij,        which occurs with prob. pin
  Xij = Xkj + δij,  which occurs with prob. pik, k = 1, 2, .., n-1
  (the δij term accounts for the i = j case)
[Figure: one-step transitions from state si to states sk, sj, .., sn]
27. Finite DTMCs, Absorbing States (contd.)
- Since P(Y = k) = pik, k = 1, 2, .., n, the total expectation rule gives
  E[Xij] = δij + Σ k=1..n-1 pik E[Xkj].
- Over all (i,j) values, we need to work with the matrix equation M = I + Q M, where Q is the sub-matrix of P restricted to the transient states, so that M = (I - Q)^-1.
- Therefore, the elements of the fundamental matrix M give the expected number of visits to state j (from i) before absorption.
- If the process starts in state 1, then m1j gives the average number of visits to state j (from the start state) before absorption.
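A minimal sketch of the fundamental-matrix computation, using an assumed small absorbing chain (three transient states and one absorbing state, values illustrative):

```python
import numpy as np

# Hypothetical absorbing DTMC: transient states 0..2, absorbing state 3 (illustrative values)
P = np.array([[0.0, 0.6, 0.3, 0.1],
              [0.2, 0.0, 0.6, 0.2],
              [0.1, 0.2, 0.0, 0.7],
              [0.0, 0.0, 0.0, 1.0]])

Q = P[:3, :3]                            # transitions among the transient states
M = np.linalg.inv(np.eye(3) - Q)         # fundamental matrix: M[i, j] = expected visits to j from i
print(M[0])                              # expected visits to each transient state, starting in state 0
```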
28. Performance Analysis, Absorbing States
- By assigning reward values to the different states, a variety of performance measures may be computed.
- Average time to execute a program:
- s1 is the start state; rj = (fractional) execution time per visit to sj.
- Vj = m1j is the average number of times statement block sj is executed.
- We need to calculate the total expected execution time, i.e., until the process gets absorbed into the stop state (s5): total expected execution time = Σj Vj rj = Σj m1j rj (see the sketch below).
- Software reliability: the jth reward Rj = reliability of sj. Then the overall program reliability is obtained by combining the module reliabilities Rj with the expected visit counts Vj.
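A sketch of the total expected execution time computation, assuming a hypothetical program-flow chain with transient blocks s1..s4, an absorbing stop state s5, and illustrative per-visit execution times rj:

```python
import numpy as np

# Hypothetical program-flow DTMC: transient statement blocks s1..s4, absorbing stop state s5
P = np.array([[0.0, 0.7, 0.3, 0.0, 0.0],
              [0.0, 0.0, 0.4, 0.6, 0.0],
              [0.0, 0.2, 0.0, 0.5, 0.3],
              [0.0, 0.0, 0.0, 0.0, 1.0],
              [0.0, 0.0, 0.0, 0.0, 1.0]])

r = np.array([2.0, 1.0, 3.0, 0.5])        # assumed execution time per visit to s1..s4

Q = P[:4, :4]                             # transient-to-transient part
M = np.linalg.inv(np.eye(4) - Q)          # fundamental matrix
V = M[0]                                  # expected visits V_j = m_1j, starting from s1
print(V @ r)                              # total expected execution time before absorption
```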