Title: Markov Chains and variate selection (SSP 5.5)
1. Lecture 9
- Markov Chains and variate selection (SSP 5.5)
2. Markov Chains
A sequence of states (integers, sets of coordinates, times, configurations, etc.) which evolves according to the Markov condition, i.e. the occurrence of a state depends, at most, on the state which immediately precedes it.
3. Examples of deterministic Markov chains
- (1) The sequence of integers xi_j = (a xi_{j-1} + c) mod m - the congruential generator. Note: compound generators and shift-register generators do not form Markov chains, since each new value depends on more than one preceding value.
- (2) The sequence of numbers generated by the nonlinear recurrence relation x_j = lambda x_{j-1}(1 - x_{j-1}), with 0 < x_0 < 1 - the logistic generator. (A sketch of both generators follows below.)
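As a concrete illustration (not from the lecture), a minimal Python sketch of both deterministic generators; the constants a, c, m and lam are illustrative choices, not the lecture's values.

```python
# Minimal sketch (not from the lecture) of the two deterministic Markov chains above.
# The constants a, c, m and lam are illustrative choices, not the lecture's values.

def congruential(seed, a=1103515245, c=12345, m=2**31, n=10):
    """Congruential generator: xi_j = (a*xi_{j-1} + c) mod m."""
    xi = seed
    out = []
    for _ in range(n):
        xi = (a * xi + c) % m
        out.append(xi)
    return out

def logistic(x0, lam=4.0, n=10):
    """Logistic generator: x_j = lam * x_{j-1} * (1 - x_{j-1}), with 0 < x0 < 1."""
    x = x0
    out = []
    for _ in range(n):
        x = lam * x * (1.0 - x)
        out.append(x)
    return out

# Each new value depends only on the value before it: the Markov condition.
print(congruential(seed=1))
print(logistic(x0=0.3))
```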
4. Examples of stochastic Markov chains
- (3) xi_j = xi_{j-1} ± 1, with equal probability - random walk on a one-dimensional grid
- (4) xi_j = (xi_{j-1} + 1) mod 6 or (xi_{j-1} + 5) mod 6, with equal probability - random walk on a circular grid
5. Q. When these sequences settle down, do they represent a sample of variates from any distribution?
- Case 1: we know these are from the uniform distribution on the integers 0 to m-1
- Case 2: it can be shown that they follow the invariant distribution of the logistic map
- Case 3: does not settle down to any distribution
- Case 4: gives us the uniform distribution on the integers 0 to 5 (see the simulation sketch below)
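As a quick check of the claim for case (4), a simulation sketch (not from the lecture): a long run of the circular random walk visits the six states with roughly equal frequency.

```python
import random

# Case (4): random walk on a circular grid of six states,
# moving to (state+1) mod 6 or (state+5) mod 6 with equal probability.
random.seed(0)
state = 0
counts = [0] * 6
n_steps = 100_000
for _ in range(n_steps):
    state = (state + random.choice((1, 5))) % 6
    counts[state] += 1

# Empirical frequencies: each should be close to 1/6 once the walk has settled down.
print([c / n_steps for c in counts])
```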
6. The evolution of a Markov chain is determined by a transition operator; for discrete states this is a matrix.
- If we denote the states by x_i, then W_ij is the probability that state x_j succeeds state x_i. Clearly:
- W_ij ≥ 0
- W_ij ≠ 0 for some j ≠ i
- Σ_j W_ij = 1 for every i
so W forms a stochastic matrix.
7. An example of a stochastic matrix W
8. A stochastic matrix will generate a chain; are the elements representative of some probability distribution? In this case the states x_0 to x_5 are distributed as follows.
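The lecture's example matrix and the resulting distribution were shown as images that did not survive transcription. The sketch below therefore uses a hypothetical 6-state stochastic matrix to illustrate the idea: repeated application of W drives any starting distribution towards a stationary one.

```python
import numpy as np

# Hypothetical 6-state stochastic matrix (NOT the lecture's example, which was an image):
# a lazy walk on a circle with some bias; every row sums to 1.
W = np.array([
    [0.6, 0.2, 0.0, 0.0, 0.0, 0.2],
    [0.4, 0.4, 0.2, 0.0, 0.0, 0.0],
    [0.0, 0.2, 0.6, 0.2, 0.0, 0.0],
    [0.0, 0.0, 0.2, 0.6, 0.2, 0.0],
    [0.0, 0.0, 0.0, 0.2, 0.6, 0.2],
    [0.4, 0.0, 0.0, 0.0, 0.2, 0.4],
])
assert np.allclose(W.sum(axis=1), 1.0)   # stochastic: every row is a probability vector

# Apply W repeatedly to an arbitrary starting distribution (all weight on state 0).
p = np.array([1.0, 0.0, 0.0, 0.0, 0.0, 0.0])
for _ in range(200):
    p = p @ W                            # one step of the chain
print(p)                                 # converges to the stationary distribution
```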
9. p_N(n) is the stationary distribution of the walk whose one-step transition matrix is W.
General problem: how can we tell whether a stochastic matrix W will give rise to a stationary distribution?
Our problem: how to find a transition matrix W which will generate variates from a specified probability distribution, p.
10. Conditions on W
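The body of this slide was not transcribed. For completeness, the standard conditions, which are presumably what the slide listed, are that W must be ergodic and that p must be invariant under W; detailed balance is the usual sufficient form:

```latex
% Standard conditions (assumed; the slide body was not transcribed) for a stochastic
% matrix W to generate variates from a stationary distribution p:
% 1. W must be ergodic: every state reachable from every other, with no periodic cycling.
% 2. p must be invariant under W (global balance):
\sum_i p(x_i)\, W_{ij} = p(x_j) \qquad \text{for all } j,
% for which a convenient sufficient condition is detailed balance:
p(x_i)\, W_{ij} = p(x_j)\, W_{ji} \qquad \text{for all } i, j.
```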
11. Problem: to find a transition matrix W which will generate variates from the discrete distribution p(x_i)
- Take W as the product AG of a targeting matrix G and an acceptance matrix A.
- To select the next variate after x_i, target a value x_j with probability G_ij and accept it with probability A_ij.
- The simplest case is to take for G any symmetric stochastic matrix linking the states, independent of p.
- The acceptance probability can then be a function only of Delta_ij = p(x_j) / p(x_i).
12. The Metropolis algorithm: A_ij = min(1, Delta_ij)
Less used is the Barker algorithm: A_ij = Delta_ij / (1 + Delta_ij)
(A sketch of the Metropolis scheme follows below.)
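A minimal Python sketch of this targeting-and-acceptance scheme for a discrete distribution; the target weights and the neighbour-targeting matrix G below are illustrative choices, not the lecture's example. Note that only the ratios Delta_ij enter, so the weights need not be normalised.

```python
import random

random.seed(1)

# Unnormalised target weights p(x_i) for six states (illustrative, not the lecture's example).
weights = [1.0, 2.0, 4.0, 8.0, 4.0, 2.0]
n_states = len(weights)

def target(i):
    """Symmetric targeting matrix G: propose one of the two neighbours on a circle."""
    return (i + random.choice((1, n_states - 1))) % n_states

def metropolis_step(i):
    j = target(i)
    delta = weights[j] / weights[i]        # Delta_ij = p(x_j) / p(x_i)
    if random.random() < min(1.0, delta):  # Metropolis acceptance A_ij = min(1, Delta_ij)
        return j                           # accept the targeted state
    return i                               # reject: the chain stays at x_i

# Run the chain and compare visit frequencies with the (normalised) target.
state = 0
counts = [0] * n_states
n_steps = 200_000
for _ in range(n_steps):
    state = metropolis_step(state)
    counts[state] += 1

total = sum(weights)
print([c / n_steps for c in counts])       # empirical frequencies
print([w / total for w in weights])        # target p(x_i); only ratios were ever used
```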
13. Return to our example
14. Note that, for example, state 4 will never follow state 1 (because of the targeting matrix). Thus, although the variates generated follow the distribution p, sequences of them are not a random sample from it.
The strings generated contain strong correlations.
Also, the string may retain some influence from the arbitrary starting state adopted; the generator must be equilibrated before the variates are used (see the sketch below).
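To make the two cautions concrete, a self-contained sketch (same illustrative target as above, assumed rather than taken from the lecture) that discards an equilibration stretch and then estimates the lag-1 autocorrelation of the string of variates.

```python
import random

random.seed(2)

# Illustrative check (not from the lecture): discard an equilibration ("burn-in") stretch,
# then estimate the lag-1 autocorrelation of the string of variates.
weights = [1.0, 2.0, 4.0, 8.0, 4.0, 2.0]   # same illustrative target as above
n = len(weights)

def step(i):
    j = (i + random.choice((1, n - 1))) % n               # symmetric neighbour targeting
    return j if random.random() < min(1.0, weights[j] / weights[i]) else i

burn_in, n_keep = 1_000, 50_000
state = 0
samples = []
for k in range(burn_in + n_keep):
    state = step(state)
    if k >= burn_in:                                      # equilibrate before keeping variates
        samples.append(state)

mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
lag1 = sum((samples[k] - mean) * (samples[k + 1] - mean)
           for k in range(len(samples) - 1)) / (len(samples) - 1)
print(lag1 / var)   # well above zero: neighbouring variates in the string are correlated
```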
15. So, why use the method?
It only needs the ratios of probabilities, Delta_ij, so normalisation of the probability distribution is not required.
16. Metropolis method for a continuous variable
To select from p(x)dx, make the replacements:
i, j → x, x';   G_ij → G(x → x');   Delta_ij → p(x') / p(x)
Example: target x' within a region ±delta of x with uniform probability, i.e. x' = x + (2 xi - 1) delta, where xi is a uniform variate on (0, 1). A short sketch follows below.
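A minimal sketch of the continuous-variable version with this uniform targeting step; the target density (an unnormalised Gaussian shape) and the step half-width delta are illustrative assumptions, not the lecture's example.

```python
import math
import random

random.seed(3)

def p_unnorm(x):
    """Unnormalised target density (illustrative assumption): a Gaussian shape."""
    return math.exp(-0.5 * x * x)

delta = 1.0                                  # half-width of the uniform targeting region
x = 0.0
samples = []
for _ in range(100_000):
    xi = random.random()                     # uniform variate on (0, 1)
    x_new = x + (2.0 * xi - 1.0) * delta     # target x' within +/- delta of x
    if random.random() < min(1.0, p_unnorm(x_new) / p_unnorm(x)):
        x = x_new                            # Metropolis acceptance min(1, p(x')/p(x))
    samples.append(x)

# For the Gaussian target the sample mean and variance should approach 0 and 1.
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
print(mean, var)
```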