Title: Uncertain Reasoning
Uncertain Reasoning
- Dempster-Shafer theory (cont.)
Reasoning on Markov trees
- Let us consider, instead of simply a domain Θ, a domain encoded as a product space Θ^M.
- We will abuse notation a little and use M both as a natural number and as the set {1, 2, ..., M}.
- Let N ⊆ M, and let v_N be a function that takes subsets of Θ^N to non-negative real numbers. We call v_N a valuation.
- Let V be the set of all valuations. Later we will constrain valuations to be mass distributions.
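As a concrete illustration (ours, not from the original slides), a valuation over binary variables can be encoded in Python as a dictionary from focal sets to non-negative reals, where a focal set is a frozenset of configurations and a configuration is a tuple of truth values, one per variable of N in sorted order:

```python
# One possible encoding of a valuation v_N over Theta^N, assuming every
# variable is binary (as in the robot example later in these slides).

# Example: a mass distribution over N = ('B', 'L').
# 0.8 of the mass supports "B and L have equal values"; 0.2 is uncommitted.
v_BL = {
    frozenset({(True, True), (False, False)}): 0.8,
    frozenset({(True, True), (True, False),
               (False, True), (False, False)}): 0.2,   # the full frame
}
```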
Reasoning on Markov trees
- Mass distributions will constitute a special class of valuations. We call any such special class the admissible valuations.
- Let S, T ⊆ M, v_S a valuation of subsets of Θ^S, and v_T a valuation of subsets of Θ^T. We define v_S ⊗ v_T as a valuation of subsets of Θ^(S ∪ T) as follows:
- If v_S is not admissible or v_T is not admissible, then v_S ⊗ v_T is not admissible.
- If both v_S and v_T are admissible, then v_S ⊗ v_T is not necessarily admissible. If v_S ⊗ v_T is admissible, then v_S and v_T are combinable.
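For mass distributions, the product ⊗ is Dempster's rule of combination. Below is a minimal sketch of it (our own; the helper names `extend` and `combine` and the binary frame are our choices): both operands are vacuously extended to the union domain, focal sets are intersected, and the mass that falls on the empty set is renormalized away. When all the product mass is conflicting, no mass distribution results, matching the remark above that two admissible valuations need not be combinable.

```python
from itertools import product

FRAME = (False, True)   # each variable is binary

def extend(focal, vars_from, vars_to):
    """Vacuously extend a focal set from Theta^vars_from to Theta^vars_to."""
    extra = [v for v in vars_to if v not in vars_from]
    pos = {v: i for i, v in enumerate(vars_from)}
    out = set()
    for cfg in focal:
        for extra_vals in product(FRAME, repeat=len(extra)):
            vals = dict(zip(extra, extra_vals))
            vals.update({v: cfg[pos[v]] for v in vars_from})
            out.add(tuple(vals[v] for v in vars_to))
    return frozenset(out)

def combine(m1, vars1, m2, vars2):
    """v1 (x) v2: extend both operands to S u T, intersect focal sets,
    and renormalize the mass that fell on the empty set (the conflict)."""
    union = sorted(set(vars1) | set(vars2))
    out, conflict = {}, 0.0
    for f1, w1 in m1.items():
        e1 = extend(f1, vars1, union)
        for f2, w2 in m2.items():
            inter = e1 & extend(f2, vars2, union)
            if inter:
                out[inter] = out.get(inter, 0.0) + w1 * w2
            else:
                conflict += w1 * w2
    if conflict >= 1.0:   # nothing survives: v1 and v2 are not combinable
        raise ValueError("total conflict: combination is not admissible")
    return {f: w / (1.0 - conflict) for f, w in out.items()}, union
```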
Reasoning on Markov trees
- Let S ⊆ T ⊆ M. A projection π(T→S) is a mapping from valuations of subsets of Θ^T to valuations of subsets of Θ^S that preserves admissibility: if v_T is admissible, then so is its projection; if v_T is not admissible, then neither is its projection.
- If the valuations are mass distributions, projection corresponds exactly to the notion of projection defined in previous lectures, and the product ⊗ corresponds to the combination rule for information fusion.
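A matching sketch of projection for mass distributions, again with hypothetical names: each focal set is marginalized onto the smaller variable list, and the masses of focal sets that collapse to the same projection are added up, so the result is again a mass distribution (admissibility is preserved).

```python
def project(m, vars_from, vars_to):
    """pi(vars_from -> vars_to): marginalize each focal set.

    vars_to must be a sub-list of vars_from; masses of focal sets that
    project onto the same set are summed.
    """
    keep = [vars_from.index(v) for v in vars_to]
    out = {}
    for focal, mass in m.items():
        proj = frozenset(tuple(cfg[i] for i in keep) for cfg in focal)
        out[proj] = out.get(proj, 0.0) + mass
    return out
```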
Reasoning on Markov trees
- Our goal is, given the valuations for product spaces of the form Θ^Ti, to obtain the product valuation (through information fusion, the so-called Dempster's rule of combination) and then project the resulting valuation to determine the valuation of an event of interest.
Reasoning on Markov trees
- These calculations are in general computationally intractable.
- As proved by Prakash Shenoy and Glenn Shafer, if the dependencies among events are structured as a Markov tree, the calculations can be done relatively efficiently, through an interleaving of projections and fusions.
Reasoning on Markov trees
- Let T be a twig, v_T be its valuation, and S be its neighbour in the Markov tree. S can be the neighbour of several twigs. T sends as a message to S the value π(T→T∩S)(v_T), i.e. its valuation projected onto the variables that T and S have in common.
Reasoning on Markov trees
- S, in turn, updates its valuation given the messages it receives, calculating v_S ⊗ π(Ti→Ti∩S)(v_Ti) over all twigs Ti to which S is a neighbour.
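Putting the two previous sketches together, the twig-to-neighbour step of the last two slides might look as follows (`send_message` and `absorb` are our own names; `project` and `combine` are the helpers sketched earlier):

```python
def send_message(v_T, T_vars, S_vars):
    """Message from twig T to its neighbour S: v_T projected onto T n S."""
    common = [v for v in T_vars if v in S_vars]
    return project(v_T, T_vars, common), common

def absorb(v_S, S_vars, twigs):
    """S fuses its own valuation with the messages of all its twigs."""
    for v_T, T_vars in twigs:
        msg, msg_vars = send_message(v_T, T_vars, S_vars)
        v_S, S_vars = combine(v_S, S_vars, msg, msg_vars)
    return v_S, S_vars
```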
Reasoning on Markov trees
- Example: we can use this framework for uncertain reasoning to continually update the belief state of an agent in the face of new evidence. The previous belief state is taken as an independent observation, and the incoming observations are combined with the previous ones to gradually update the belief state.
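A tiny illustration of this updating scheme, with made-up numbers and the `combine` helper from the earlier sketch: the previous belief about a single binary variable L is fused with a new, independent observation.

```python
# Prior belief and a new sensor reading, both valuations over ('L',):
prior = {frozenset({(True,)}): 0.6, frozenset({(True,), (False,)}): 0.4}
obs   = {frozenset({(True,)}): 0.5, frozenset({(True,), (False,)}): 0.5}

posterior, _ = combine(prior, ('L',), obs, ('L',))
# The mass on "L = true" rises from 0.6 to 0.8, the uncommitted mass
# shrinks to 0.2: agreeing observations gradually strengthen the belief.
```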
Reasoning on Markov trees
- Example: let us consider the following simple Markov tree, where the twigs are fed directly by the sensors of a robot. To simplify our example, each variable admits only two values (true and false).
Reasoning on Markov trees
[Figure: the example Markov tree; its hyperedges are {T, L, F} (the root), {A, T}, {L, F, B}, and the twig {L, S, B}.]

         {T, L, F}
         /       \
     {A, T}    {L, F, B}
                   |
               {L, S, B}
Reasoning on Markov trees
- Example
- We have a mass distribution for each hyperedge. Let us say that our interest is to know the belief distribution for the root hyperedge, i.e. the distribution over the possible values of the variables T, L and F: 000, 001, 010, 011, etc.
Reasoning on Markov trees
- Example
- We assume that we already have mass distributions
for each hyperedge. We now assume, however, that
new observations update the values in the twigs.
Reasoning on Markov trees
- Example
- In this case, the twigs generate the corresponding projections. For example, to go up from the twig LSB to the node LFB, we use the projection π(LSB→LB). This gives partial information about LFB, to be combined using Dempster's rule with the previous beliefs of LFB. The process can be repeated up to the root of the tree.
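To make the walk up the tree concrete, here is a sketch that runs these two steps end to end, with made-up mass values and the `project`/`combine`/`FRAME` helpers from the earlier sketches:

```python
from itertools import product
# project, combine, FRAME as defined in the earlier sketches

# Twig valuation over (B, L, S): after a sensor reading, 0.9 of the mass
# supports "B is true" (the numbers are invented for illustration).
v_LSB = {
    frozenset(c for c in product(FRAME, repeat=3) if c[0]): 0.9,  # B = true
    frozenset(product(FRAME, repeat=3)): 0.1,                     # full frame
}

# Node valuation over (B, F, L): prior belief 0.8 that "B implies F".
v_LFB = {
    frozenset(c for c in product(FRAME, repeat=3) if (not c[0]) or c[1]): 0.8,
    frozenset(product(FRAME, repeat=3)): 0.2,
}

# Step 1: the twig sends pi(LSB -> LB)(v_LSB) to its neighbour LFB,
# which fuses the message with its prior using Dempster's rule.
msg, msg_vars = project(v_LSB, ('B', 'L', 'S'), ('B', 'L'))
v_LFB, LFB_vars = combine(v_LFB, ('B', 'F', 'L'), msg, msg_vars)

# Step 2: repeat one level up, from LFB to the root TLF.
msg, msg_vars = project(v_LFB, LFB_vars, ('F', 'L'))
v_TLF = {frozenset(product(FRAME, repeat=3)): 1.0}   # vacuous root prior
v_TLF, TLF_vars = combine(v_TLF, ('F', 'L', 'T'), msg, msg_vars)

# The root now holds 0.72 mass (0.9 * 0.8) on the configurations with
# F = true and 0.28 on the full frame, i.e. Bel(F = true) = 0.72.
```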