1
Reasoning with Changing Knowledge
  • Bob McKay
  • School of Computer Science and Engineering
  • College of Engineering
  • Seoul National University
  • Partly based on
  • Russell & Norvig, 2nd Edn, Ch 10
  • M. Pagnucco, Introduction to Belief Revision,
    www.cse.unsw.edu.au/morri/LSS/LSS99/belief_revision.pdf

2
Outline
  • Belief Revision
  • Foundationalist Approaches
  • Reason Maintenance
  • Truth Maintenance
  • Assumption-Based Truth Maintenance
  • Coherentist Approaches
  • AGM Framework for Belief Revision

3
Belief Revision
  • Aims to characterise the ways in which a rational
    agent can update its beliefs on receiving new
    information
  • There are two main streams
  • Foundational
  • Demarcates a special set of beliefs (axioms)
    requiring no justification
  • Coherentist
  • All beliefs are equal; the aim is to find a
    maximal coherent subset
  • "For the foundationalist, every piece of
    knowledge stands at the apex of a pyramid that
    rests on stable and secure foundations whose
    stability and security does not derive from the
    upper stories or sections. For the coherentist, a
    body of knowledge is a free-floating raft, every
    plank of which helps directly or indirectly to
    keep all the others in place, and no plank of
    which would retain its status with no help from
    the others"
  • Sosa

4
Foundational Approaches Reason Maintenance
  • Abbott, Babbitt and Cabot are suspects in a
    murder case
  • Abbott has an alibi, the register of a
    respectable hotel in Albany
  • Babbitt also has an alibi, for his brother-in-law
    testified that Babbitt was visiting him in
    Brooklyn at the time
  • Cabot pleads alibi too, claiming to have been
    watching a ski meet in the Catskills, but we have
    only his word for that
  • So we believe
  • That Abbott did not commit the crime
  • That Babbitt did not commit the crime
  • That Abbott or Babbitt or Cabot did commit the
    crime
  • But then Cabot documents his alibi
  • By chance, a television camera photographed him
    at the ski meet
  • We have to accept a new belief
  • That Cabot did not commit the crime

5
Foundational Approaches Reason Maintenance
  • The detective in charge of the case decides that
    Abbott is a primary suspect
  • because he was a beneficiary
  • He remains a suspect until he establishes an
    alibi
  • In default logic, this could be represented as
  • Beneficiary(X) : ¬Alibi(X) / Suspect(X)
  • For the moment, Abbott is a suspect because we
    assume there is no alibi
  • as soon as an alibi is provided, Abbott should no
    longer be viewed as a suspect
  • But how can we handle this efficiently in a
    rule-based system such as CLIPS?
  • One possible way is to have rules reflecting each
    possible change in defaults
  • But this is impossibly complex
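As a sketch of the intended behaviour (in Python rather than CLIPS, with illustrative names and facts), the default can instead be checked at query time:

```python
# Illustrative sketch (Python, not CLIPS) of the default rule
# Beneficiary(X) : not Alibi(X) / Suspect(X).
# All names and facts here are made up for the example.

beneficiaries = {"Abbott"}
alibis = set()                      # no alibi established yet


def suspects():
    """X is a suspect if X is a beneficiary with no known alibi."""
    return {x for x in beneficiaries if x not in alibis}


assert suspects() == {"Abbott"}     # default applies: no alibi yet
alibis.add("Abbott")                # the hotel register turns up
assert suspects() == set()          # the default conclusion is withdrawn
```

The conclusion is never stored, so nothing needs to be retracted when an alibi arrives; the cost is recomputing the query every time, which is what reason maintenance avoids.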

6
Justification-based Truth Maintenance
  • JTMS (or TMS): Doyle
  • Another way is to keep a separate database of
    justifications
  • The system represents justifications in terms of
    an IN-list and an OUT-list for a proposition
  • Note that the form is independent of the
    particular domain
  • It is also independent of the underlying
    reasoning system
  • JTMS does not care where its justifications come
    from

7
Labelling Nodes
  • How does the JTMS decide which nodes are IN and
    which are OUT?
  • A node is labelled IN if it has a valid
    justification
  • A node is labelled OUT if it has no valid
    justification
  • Premisses
  • There is one exception: some nodes (called
    premisses) are to be treated as given
  • eg that Abbott is a beneficiary
  • To avoid special cases, this is handled by giving
    them a justification from the empty node, which
    is always labelled IN
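The labelling rule can be sketched as follows, assuming each justification is an (IN-list, OUT-list) pair and premisses get the empty justification; the relaxation loop below is only illustrative, not Doyle's incremental algorithm:

```python
# Each justification is (in_list, out_list): it is valid when every in_list
# node is IN and every out_list node is OUT. Premisses get the empty
# justification ((), ()), which is always valid.

justifications = {
    "Beneficiary(Abbott)": [((), ())],                 # premiss
    "Alibi(Abbott)":       [],                         # no support yet
    "Suspect(Abbott)":     [(("Beneficiary(Abbott)",),
                             ("Alibi(Abbott)",))],
}


def label(justs, max_iters=100):
    """Recompute the full IN set until stable (naive, not incremental)."""
    IN = set()
    for _ in range(max_iters):
        new = {n for n, js in justs.items()
               if any(all(p in IN for p in ins) and
                      all(q not in IN for q in outs)
                      for ins, outs in js)}
        if new == IN:
            return IN
        IN = new
    raise ValueError("no stable labelling (e.g. an odd negative loop)")


assert "Suspect(Abbott)" in label(justifications)      # no alibi: suspect
justifications["Alibi(Abbott)"] = [((), ())]           # register discovered
assert "Suspect(Abbott)" not in label(justifications)  # suspect withdrawn
```

Note that the domain terms appear only as node names: the labelling machinery itself is domain-independent, as the slide says.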

8
Propagating Justification
  • As the investigation progresses, the detective
    discovers that Abbott was registered at a hotel
    in Albany
  • This is far from the scene of the crime

9
Avoiding Circularity Well Foundedness
  • What about Cabot?
  • (Initially) the only support for his alibi is
    that Cabot is telling the truth
  • But the only support for Cabot telling the truth
    is that he was at the ski meet
  • A JTMS must disallow such ill-founded reasoning
  • If the support for a node consists of a chain of
    positive links back to itself, the node must be
    labelled OUT
  • Even worse is a circular chain with an odd number
    of negative links in that case, there can be no
    consistent labelling
  • A JTMS needs to be able to detect both these cases
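The well-foundedness check can be sketched as a simple chain walk, under the simplifying assumption that each node has a single list of positive supporters (an empty list marks a premiss); node names are illustrative:

```python
# support maps each node to the nodes appearing positively in its only
# justification; an empty list means the node is a premiss.
support = {
    "Alibi(Cabot)":    ["Truthful(Cabot)"],   # only his word for it
    "Truthful(Cabot)": ["Alibi(Cabot)"],      # supported only by the alibi
    "Alibi(Abbott)":   [],                    # premiss: the hotel register
}


def well_founded(node, seen=()):
    """A node is well-founded if its support chains bottom out in premisses."""
    if node in seen:
        return False                          # positive chain back to itself
    ants = support.get(node, [])
    if not ants:
        return True                           # premiss
    return all(well_founded(a, seen + (node,)) for a in ants)


assert well_founded("Alibi(Abbott)")
assert not well_founded("Alibi(Cabot)")       # must be labelled OUT
```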

10
Contradictions
  • The detective believes that there are no other
    suspects than Abbott, Babbitt and Cabot
  • This negative knowledge is recorded as a
    justification for a special node, contradiction,
    which is never allowed to become IN
  • At the moment, this isn't a problem
  • Suspect Abbott is OUT because of the
    justification above
  • Suspect Others is OUT because it has no
    justifications
  • Suspect Babbitt is OUT because of the following

11
Contradictions
  • Fortunately, Suspect Cabot is still IN, as above
  • But what happens when the television cameras pick
    Cabot out at the ski meet?
  • The CONTRADICTION node is (impermissibly) IN

12
Contradictions
  • The JTMS has no way of knowing that this
    particular node is a contradiction
  • this must be detected by the underlying reasoning
    system
  • But once the contradiction has been detected, the
    JTMS can trace back over the justifications to
    detect which of the underlying assumptions have
    caused the contradiction
  • the set of OUT nodes which, if they were IN,
    would remove the contradiction
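The trace-back can be sketched as a walk over the justification structure, collecting the OUT-list nodes met on the way; the node names are illustrative reconstructions of the example:

```python
# Trace back from the contradiction, collecting the OUT-list nodes met
# along the way: making any one of them IN would invalidate the support.
# Justifications are (in_list, out_list) pairs; all names are illustrative.

justifications = {
    "CONTRADICTION":      [(("NoSuspect(Abbott)", "NoSuspect(Babbitt)",
                             "NoSuspect(Cabot)", "NoSuspect(Others)"), ())],
    "NoSuspect(Abbott)":  [(("Alibi(Abbott)",),  ("Forged(Register)",))],
    "NoSuspect(Babbitt)": [(("Alibi(Babbitt)",), ("Lied(BrotherInLaw)",))],
    "NoSuspect(Cabot)":   [(("Alibi(Cabot)",),   ("Faked(Tape)",))],
    "NoSuspect(Others)":  [((), ("OtherSuspect",))],
}


def culprits(node, justs):
    """Collect the OUT nodes underlying node's support."""
    found, seen, stack = set(), set(), [node]
    while stack:
        n = stack.pop()
        if n in seen:
            continue
        seen.add(n)
        for ins, outs in justs.get(n, []):
            found.update(outs)
            stack.extend(ins)
    return found


assert culprits("CONTRADICTION", justifications) == {
    "Forged(Register)", "Lied(BrotherInLaw)", "Faked(Tape)", "OtherSuspect"}
```

The four candidates found are exactly the four options listed on the next slide.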

13
Contradictions
  • In our case, either
  • Abbott's register was forged
  • or Babbitt's brother-in-law lied
  • or Cabot's TV tape was faked
  • or there is another suspect
  • Having found the candidates, the contradiction
    may be removed by making one of them IN
  • But how to choose which? There are two possible
    approaches
  • Leave the choice to the underlying reasoning
    system, which created the dependencies in the
    first place
  • Apply simple heuristics in the JTMS

14
Contradictions
  • Once we have chosen the node, a justification
    must be supplied
  • The justification should be minimal
  • in the sense that it will be invalidated if the
    system later comes to believe any other
    justifications which would resolve the
    contradiction

15
Assumption-Based Truth Maintenance Systems
  • (Reiter & de Kleer)
  • You can think of a JTMS as doing a depth-first
    search of the space of truth assignments
    (contexts), in order to find a consistent one
  • From this perspective, an ATMS is simply doing a
    breadth-first (or parallel) search of the same
    space
  • The ATMS starts off, in effect, with a list of
    all possible contexts
  • As reasoning proceeds, it prunes this list,
    deleting contexts as contradictions are
    discovered

16
Assumption-Based Truth Maintenance Systems
  • As with JTMS, the ATMS sits on top of a separate
    reasoner, which
  • Creates the nodes corresponding to assertions
  • Provides justifications for each node
  • Informs the ATMS if any context is inconsistent
  • The ATMS' role is to
  • Propagate inconsistencies, ruling out other
    contexts
  • Label each node with contexts where it has a
    valid justification

17
Context Lattices
  • A context lattice looks somewhat like a
    generalisation hierarchy
  • The set {A1, A2} represents the context in which
    A1 and A2 are both true
  • The context lattice is very large - if there are
    n assumptions, the lattice has 2^n nodes - so we
    don't want to actually build it
  • The lattice provides a simple method for
    propagating inconsistency
  • If a node is inconsistent, then so is every node
    above it in the lattice
  • The lattice also provides a simple method for
    labelling nodes with the contexts in which they
    have a justification
  • For example, suppose a node's justification
    depends on assumption A1
  • Then the label {A1} implies that the node is
    justified not only in context {A1}, but also in
    any other context which contains A1
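The inconsistency-propagation rule amounts to a subset test, so the lattice never needs to be built; a minimal sketch:

```python
# A context is a set of assumptions; once a context is declared nogood,
# every superset context (everything above it in the lattice) is
# inconsistent too, with no need to build the lattice explicitly.

nogoods = [frozenset({"A7", "A8"})]


def consistent(context):
    return not any(bad <= set(context) for bad in nogoods)


assert not consistent({"A7", "A8"})           # the nogood itself
assert not consistent({"A7", "A8", "A4"})     # above it in the lattice
assert consistent({"A7", "A4"})               # incomparable: still allowed
```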

18
Context Lattice Example
  • Assume that the underlying reasoner generates a
    sequence of nodes and justifications as below

19
Example Continued
20
Example (continued)
  • We assume that the reasoner can label some
    contexts as contradictory (nogoods)
  • ie not permitted as contexts
  • For example, nogood {A7,A8}
  • The ATMS first computes the labels in the third
    column, one for each justification
  • If the justification is an assumption
  • the label is that assumption
  • If the justification is a rule
  • the label is the product of the labels for the
    antecedents of the rule
  • For node 8, this would give the label
  • {A7,A4,A6}, {A7,A4,A8}, {A7,A4,A7,A4,A6},
    {A7,A4,A7,A2,A6}, {A7,A8,A6}, etc
  • But we can simplify by
  • Eliminating duplicates
  • Eliminating contexts containing a nogood
    ({A7,A8})
  • Taking the lower bound of the contexts
  • ending up with just
  • {A7,A4,A6}
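The simplification can be sketched as follows; the two antecedent labels below are reconstructed guesses consistent with the raw contexts listed for node 8, and the helper names are ours:

```python
from itertools import product

# Label for a rule justification: cross the antecedent labels, union each
# combination into one context (which removes duplicate assumptions), drop
# contexts containing a nogood, and keep only the minimal contexts.

nogoods = [frozenset({"A7", "A8"})]


def combine(antecedent_labels):
    raw = {frozenset().union(*combo) for combo in product(*antecedent_labels)}
    ok = {c for c in raw if not any(bad <= c for bad in nogoods)}
    return {c for c in ok if not any(d < c for d in ok)}   # lower bounds only


label_8 = combine([
    {frozenset({"A7", "A4"}), frozenset({"A7", "A8"})},
    {frozenset({"A6"}), frozenset({"A8"}),
     frozenset({"A7", "A4", "A6"}), frozenset({"A7", "A2", "A6"})},
])
assert label_8 == {frozenset({"A7", "A4", "A6"})}   # the single context left
```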

21
Example (continued)
  • The labels for a node are the union of the labels
    for its justifications, again taking lower bounds
  • Updating
  • Suppose the reasoner now supplies additional
    information
  • namely that context {A2} is nogood
  • The labels for nodes 1, 2, 9 and 10 immediately
    disappear, as does one of the labels for node 12
  • The only remaining suspect node is Abbott, but
    node 12 still has a justification, corresponding
    to the case that A, B and C are not the only
    suspects
  • The ATMS can report this simplification, but an
    external source is still needed to resolve
    between them

22
ATMS as Explanation Generators
  • ATMS have been described above as systems for
    maintaining consistency
  • But there is another perspective
  • We can think of an ATMS as handling a database
    consisting of two sorts of knowledge
  • Common knowledge C (the nogoods above)
  • Assumptions A
  • An explanation for a proposition p is
  • a subset E of A such that (E ∪ C) ⊢ p
  • E is required to be minimal
  • in the sense that no proper subset of E has the
    same property
  • Minimal subsets are what an ATMS computes
  • So an ATMS can be viewed as a way of generating
    explanations

23
ATMS for Synthesis Planning
  • So far, we have assumed that an ATMS computes all
    minimal contexts (or explanations)
  • But in many contexts, only some minimal
    explanations are required
  • Consider the problem of designing logic circuits
    from basic components
  • We can describe the operation of each of the
    components as part of our common knowledge C
  • type(c, or-gate) ∧ value(input(i,c), 1) →
    value(output(c), 1)
  • Similarly, rules about connecting components can
    be axiomatised
  • connected(i,j) ∧ value(i,v) → value(j,v)
  • The goal would be a proposition describing the
    input-output relations required of the desired
    circuit
  • Then an explanation generated by the ATMS is a
    minimal set of assumptions which will guarantee
    the input-output relations - in other words, a
    circuit design

24
ATMS for Diagnosis
  • Diagnosis with an ATMS works in the reverse
    direction
  • The design of the circuit is held as common
    knowledge, while the intended operation of the
    components is treated as an assumption
  • (ie it is now an assumption that a particular AND
    gate actually operates as an AND gate - it might
    be faulty)
  • Some basic circuit knowledge is also treated as
    an assumption
  • assumption that there may be a short or open
    circuit
  • The proposition to be explained is the observed
    behaviour
  • Then an explanation generated by the ATMS is a
    minimal set of assumptions which will generate
    that observed behaviour
  • a diagnosis
  • this approach assumes complete knowledge of the
    underlying rules of operation of the system
  • applies to engine diagnosis, but not medical
    diagnosis

25
ATMS for Database Consistency
  • ATMS provide a mechanism for maintaining the
    consistency of deductive databases
  • Note that this is more than just a validity check
  • The ATMS treats the consistency rules of the
    database as common knowledge C
  • The assumptions A are the database tables
  • The proposition to be explained is the particular
    constraint which has been violated
  • An explanation is a minimal list of changes to
    the database which are required to restore
    consistency

26
Coherentist Approaches
  • Positive Coherence
  • The agent must possess reasons for maintaining a
    belief
  • Each belief must have positive support
  • Negative Coherence
  • The agent is justified in holding a belief so
    long as there is no reason to think otherwise
  • Innocent until proven guilty
  • Linear Coherence
  • The agent adopts a foundational view of reasons,
    except that if we trace a reason, the reasons for
    holding that reason, and so on, we never stop
  • Either we have an infinite sequence of reasons,
    or there is some circularity in the reason
    structure
  • Holistic Coherence
  • The agent is justified in holding a belief due to
    some relationship between the belief and all
    other beliefs held
  • Pollock, 1986

27
The AGM Approach
  • Alchourrón, Gärdenfors, Makinson 1985
  • An example of (and the best known) coherentist
    approach
  • Aims to axiomatise the behaviour of rational
    agents in reformulating beliefs
  • Where possible, belief systems should remain
    consistent
  • A belief system should contain all beliefs
    logically implied by beliefs in the system
  • When changing belief systems, loss of information
    should be minimised
  • Beliefs held in higher regard should be retained
    in preference to those held in lower regard
  • AGM systems specify axioms for three types of
    belief revision operators
  • Expansion
  • Contraction
  • Revision
  • The axioms assume a belief system (knowledge
    state) K, and a new sentence φ
  • There is a special (absurd) belief state K⊥ in
    which everything is believed

28
Expansion Operator
  • Expansion (+) is applied when a proposed new
    belief φ is consistent with K, creating a new
    belief system K+φ
  • Axioms
  • Closure
  • For any sentence φ and belief system K, K+φ is a
    belief system
  • Success
  • φ ∈ K+φ
  • Inclusion
  • K ⊆ K+φ
  • Vacuity
  • If φ ∈ K, then K+φ = K
  • Monotonicity
  • If H ⊆ K, then H+φ ⊆ K+φ
  • Minimality
  • K+φ is the smallest belief system satisfying the
    above, and containing both φ and K
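A toy model of expansion, under the simplifying assumption that beliefs are atoms and consequence is closure under a few illustrative Horn rules (a real AGM belief set is deductively closed under full logical consequence):

```python
# Belief sets modelled as sets of atoms closed under illustrative Horn rules.
RULES = [(("p",), "q")]                       # p -> q


def close(atoms, rules=RULES):
    """Deductive closure under the Horn rules (naive fixpoint)."""
    atoms = set(atoms)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if set(premises) <= atoms and conclusion not in atoms:
                atoms.add(conclusion)
                changed = True
    return atoms


def expand(K, phi):
    """K+phi: add phi and re-close."""
    return close(K | {phi})


K = close({"p"})                              # K = {p, q}
assert "r" in expand(K, "r")                  # Success
assert K <= expand(K, "r")                    # Inclusion
assert expand(K, "p") == K                    # Vacuity: p was already in K
```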

29
Contraction Operator
  • Contraction (-) is applied when the agent wishes
    to retract a belief φ, creating a new belief
    system K-φ
  • Axioms
  • Closure
  • For any sentence φ and belief system K, K-φ is a
    belief system
  • Inclusion
  • K-φ ⊆ K
  • Vacuity
  • If φ ∉ K, then K-φ = K
  • Success
  • Unless φ is necessarily true, φ ∉ K-φ
  • Recovery
  • If φ ∈ K, then K ⊆ (K-φ)+φ
  • Extensionality
  • If φ is logically equivalent to ψ, then K-φ = K-ψ
  • Intersection
  • (K-φ ∩ K-ψ) ⊆ K-(φ∧ψ)
  • Conjunction
  • If φ ∉ K-(φ∧ψ), then K-(φ∧ψ) ⊆ K-φ

30
Revision Operator
  • Revision (∗) is applied when belief φ is
    inconsistent with K, and the agent wishes to
    create a new belief system K∗φ incorporating φ
  • Axioms
  • Closure
  • For any sentence φ and belief system K, K∗φ is a
    belief system
  • Success
  • φ ∈ K∗φ
  • Inclusion
  • K∗φ ⊆ K+φ
  • Preservation
  • If ¬φ ∉ K, then K+φ ⊆ K∗φ
  • Vacuity
  • K∗φ = K⊥ if and only if ¬φ is a logical necessity
  • Extensionality
  • If φ is logically equivalent to ψ, then K∗φ = K∗ψ
  • Superexpansion
  • K∗(φ∧ψ) ⊆ (K∗φ)+ψ
  • Subexpansion
  • If ¬ψ ∉ K∗φ, then (K∗φ)+ψ ⊆ K∗(φ∧ψ)

31
Revision Operators
  • If + and - are AGM expansion and contraction
    operators, the operator
  • K∗φ = (K-¬φ)+φ
  • is an AGM revision operator (the Levi identity)

32
Epistemic Entrenchment
  • Remember our requirements for a reasonable update
    mechanism
  • Where possible, belief systems should remain
    consistent
  • A belief system should contain all beliefs
    logically implied by beliefs in the system
  • When changing belief systems, loss of information
    should be minimised
  • Beliefs held in higher regard should be retained
    in preference to those held in lower regard
  • So far, we haven't taken care of the fourth
    requirement
  • Axioms of epistemic entrenchment are aimed at
    this

33
Epistemic Entrenchment
  • Beliefs held in higher regard should be retained
    in preference to those held in lower regard
  • A preference order ≤ (or epistemic entrenchment)
    is a relationship over sentences which satisfies
  • Axioms
  • For any sentences φ, ψ and χ
  • Transitivity
  • If φ ≤ ψ and ψ ≤ χ, then φ ≤ χ
  • Dominance
  • If ψ is a logical consequence of φ, then φ ≤ ψ
  • Conjunctiveness
  • Either φ ≤ φ∧ψ or ψ ≤ φ∧ψ
  • Minimality
  • When K ≠ K⊥, φ ∉ K iff φ ≤ ψ for all possible ψ
  • Maximality
  • If ψ ≤ φ for all ψ, then φ is logically necessary
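Entrenchment then drives contraction via the standard Gärdenfors-Makinson condition (not stated on the slide): a belief ψ survives the contraction of φ exactly when it is held and, unless φ is a tautology, is strictly better entrenched than φ relative to the disjunction:

```latex
\psi \in K - \varphi
\iff
\psi \in K
\;\wedge\;
\bigl(\, \vdash \varphi
  \;\;\text{or}\;\;
  \varphi < \varphi \vee \psi \,\bigr)
```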

34
AGM Belief Revision
  • AGM belief revision then attempts to find +, -
    and ∗ operators which satisfy the AGM axioms
  • There are a number of different known ways to do
    this
  • AGM and similar systems have very strong
    theoretical underpinnings
  • Today, they remain primarily research approaches,
    not yet heavily used in real-world applications

35
Summary
  • Foundationalist Approaches
  • Reason Maintenance
  • Truth Maintenance
  • Assumption-Based Truth Maintenance
  • Coherentist Approaches
  • AGM Framework for Belief Revision