Reasoning with Conflicting Knowledge - PowerPoint PPT Presentation

1
Reasoning with Conflicting Knowledge
  • Bob McKay
  • School of Computer Science and Engineering
  • College of Engineering
  • Seoul National University
  • Partly based on
  • Russell & Norvig, Edn 2, Ch 10
  • M Pagnucco, Introduction to Belief Revision
    www.cse.unsw.edu.au/morri/LSS/LSS99/belief_revision.pdf

2
Outline
  • Non-Monotonic Logics
  • Modal non-monotonic logics
  • Default Logics
  • Plausible Defaults
  • Abduction
  • Minimalist Reasoning
  • The Closed World Assumption
  • Circumscription
  • Belief Revision
  • Foundationalist Approaches
  • Reason Maintenance
  • Truth Maintenance
  • Assumption-Based Truth Maintenance
  • Coherentist Approaches
  • AGM Framework for Belief Revision

3
Conflicting Knowledge
  • Classical logic handles conflicting knowledge
    poorly
  • It is a theorem of classical logic that
  • p ∧ ¬p ⊢ q
  • From an inconsistency, we can derive everything
  • Conflicting information occurs in most parts of
    ordinary life
  • there are very few propositions that we can be
    absolutely certain will never be falsified
  • Full inconsistency is difficult to deal with
  • if we are certain that both p and ¬p are true,
    there is clearly a problem
  • Most often, the situation is more like
  • in the absence of evidence to the contrary, p is
    a reasonable assumption
  • if ¬p were certainly derivable, we would happily
    withdraw p
  • Some extension of classical logic is required to
    deal with this situation
  • We want a rational approach to inconsistency

4
Non-Monotonic Logics
  • In traditional logic, deducibility is monotonic
  • As you add new axioms, the set of truths
    increases
  • if you add a new axiom to a theory, the set of
    theorems now derivable contains the set of
    theorems previously derivable
  • as you increase the axioms, you also increase the
    theorems
  • We have already encountered one form of
    non-monotonic logic
  • the default inheritance discussed in connection
    with semantic nets and frames
  • if the default habitat of kangaroos is
    grassland, and if Skippy is a kangaroo, one
    consequence is that the habitat of Skippy is
    grassland
  • If you later add the explicit fact that Skippy's
    habitat is backyards, then you must retract the
    conclusion that Skippy's habitat is grassland
  • In general, non-monotonic logics allow for
    decreasing truth
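The Skippy example can be sketched as a toy lookup in which explicit facts override class defaults; all names here (defaults, facts, habitat) are invented for illustration, not drawn from any library:

```python
# Toy default inheritance: explicit facts override class-level defaults.
defaults = {"kangaroo": {"habitat": "grassland"}}
facts = {}  # explicit, per-individual assertions

def habitat(individual, species):
    # An explicit fact takes precedence over the species default.
    explicit = facts.get(individual, {}).get("habitat")
    if explicit is not None:
        return explicit
    return defaults.get(species, {}).get("habitat")

print(habitat("skippy", "kangaroo"))        # grassland, by default
facts["skippy"] = {"habitat": "backyards"}  # new explicit fact arrives
print(habitat("skippy", "kangaroo"))        # backyards: default retracted
```

Adding the explicit fact makes the previously derivable conclusion disappear, which is exactly the non-monotonic behaviour described above.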

5
Modal Non-Monotonic Logic
  • Modal logics extend traditional logic by adding a
    'modal operator' to the logic
  • Typically, the new operator represents some form
    of knowledge of the system
  • Thus Kp represents the proposition
  • 'the system knows that p'
  • To be useful, such an operator must satisfy
  • K(p → q) → (Kp → Kq)
  • It is also reasonable to assume that if p is
    provable, then Kp
  • On the other hand, we would not want to assume in
    general that
  • p → Kp

6
Modal Non-Monotonic Logic
  • Within this range, there is a wide variety of
    possible modal logics, with minor differences in
    their axioms
  • Some of the possible axioms are
  • Kp → p
  • ∀X Kp → K(∀X p)
  • Kp → KKp
  • ¬Kp → K¬Kp
  • NB Depending on the text you use, you may instead
    come across the M operator, meaning 'it is
    believable that'. They are related by
  • Mp ≡ ¬K¬p

7
Modal Logic and Conflicting Knowledge
  • Modal non-monotonic logics handle conflicting
    knowledge
  • in the sense that Mp and M¬p are consistent
  • The various modal logics have been studied in
    traditional logic for some considerable time
  • the extension to non-monotonic reasoning is
    relatively recent
  • Read More
  • Modal Logics and Philosophy
  • http://plato.stanford.edu/entries/logic-modal/
  • A more technical introduction
  • http://cs.wwc.edu/aabyan/Logic/Modal.html
  • Wikipedia
  • http://en.wikipedia.org/wiki/Modal_logic
  • John McCarthy on modal logics
  • http://www-formal.stanford.edu/jmc/mcchay69/node22.html

8
Default Logics
  • Default logics add to traditional logic specific
    extra rules for inferring consequences
  • P : Q
  • -----
  •   R
  • for example
  • Bird(x) : Flies(x)
  • ------------------
  •   Flies(x)
  • interpreted as
  • If one believes P, and it is consistent that Q,
    then one can also believe R
  • The consistency of Q is taken to be failure to
    prove ¬Q
  • Default logics are non-monotonic
  • adding a new fact may make a previously
    consistent Q inconsistent
  • e.g. Penguin(x)
  • and therefore remove the ability to conclude R
  • e.g. Flies(x)
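A minimal sketch of applying such a rule, approximating "it is consistent that Q" by failing to find "not Q" in the knowledge base (the string encoding and function names are invented for illustration):

```python
# Apply the default  Bird(x) : Flies(x) / Flies(x)  to a toy knowledge base.
# "It is consistent that Q" is approximated by: "not Q" is not in the KB.
def apply_default(kb, prereq, justification, conclusion):
    if prereq in kb and ("not " + justification) not in kb:
        kb.add(conclusion)

kb = {"Bird(tweety)"}
apply_default(kb, "Bird(tweety)", "Flies(tweety)", "Flies(tweety)")
print("Flies(tweety)" in kb)  # True: nothing blocks the default

# Non-monotonicity: learning Penguin(tweety), and hence not Flies(tweety),
# blocks the default, so the old conclusion can no longer be drawn.
kb = {"Bird(tweety)", "not Flies(tweety)"}
apply_default(kb, "Bird(tweety)", "Flies(tweety)", "Flies(tweety)")
print("Flies(tweety)" in kb)  # False
```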

9
Default Logics Semantics
  • An extension of a default theory is formed by
    taking the underlying certain theory, and adding
    defaults to it while they are consistent
  • The process stops when no more defaults can be
    added without creating inconsistency
  • A theory may have a number of different
    extensions
  • For example, the Nixon diamond has two
    extensions
  • One in which Nixon is a pacifist
  • Another in which Nixon is not a pacifist
  • A theory may also have no extensions
  • Entailment
  • A credulous system accepts any conclusion which
    is true in some extension
  • A skeptical system accepts only conclusions true
    in all extensions
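The Nixon diamond's two extensions, and the credulous/skeptical distinction, can be sketched by trying the defaults in every application order (the encoding is invented for illustration):

```python
from itertools import permutations

# Toy extension computation for the Nixon diamond:
#   Quaker : Pacifist / Pacifist      Republican : not Pacifist / not Pacifist
facts = {"Quaker", "Republican"}
defaults = [("Quaker", "Pacifist"), ("Republican", "not Pacifist")]

def negate(p):
    return p[4:] if p.startswith("not ") else "not " + p

def extensions():
    exts = set()
    for order in permutations(defaults):  # try every application order
        ext = set(facts)
        for prereq, concl in order:
            if prereq in ext and negate(concl) not in ext:
                ext.add(concl)
        exts.add(frozenset(ext))
    return exts

exts = extensions()
credulous = set().union(*exts)                   # true in some extension
skeptical = set(frozenset.intersection(*exts))   # true in all extensions
print(len(exts))                # 2: a pacifist and a non-pacifist extension
print("Pacifist" in credulous)  # True
print("Pacifist" in skeptical)  # False
```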

10
Default Logics Semantic Variants
  • An extension of a default theory is formed by
    taking the underlying certain theory, and adding
    defaults to it while they are consistent
  • If we restrict the defaults which may be added,
    we get new default logics
  • Justified
  • the theory has to be consistent with the defaults
    added so far
  • Not just their conclusions
  • Constrained
  • All added defaults must be consistent with each
    other, and with any consequences
  • Cautious
  • No default that is inconsistent with any other is
    ever applied

11
Default Logics Efficiency
  • The efficiency of default logics directly relates
    to the cost of computing Q
  • i.e. of computing the consistency of Q
  • In general, this may be very expensive, but for
    particular restricted logics it may be feasible
  • A number of systems in the mid-late 1990s were
    based on Prolog technologies for closed worlds
    (see later)
  • Closed World: everything that can't be proven
    true is false
  • : ¬F
  • ----
  •   ¬F

12
Default Logics References
  • Wikipedia
  • http//en.wikipedia.org/wiki/Default_logic
  • Neat default logic simulator
  • http//www.kr.tuwien.ac.at/students/dls/english/
  • Stanford Encyclopedia
  • http//plato.stanford.edu/entries/logic-nonmonoton
    ic/

13
Modal vs Default Logics
  • Modal and Default Logics appear very similar
  • But consider the case where we have the axioms
  • A ∧ MB → B
  • ¬A ∧ MB → B
  • Applying standard logic, we can conclude
  • MB → B
  • It might seem equivalent to the default logic
  • A : B and ¬A : B
  • -----     -----
  •   B         B
  • but in the default logic we cannot reach a
    conclusion about B unless we know the status of A

14
Deduction and Abduction
  • Deduction
  • Given two axioms
  • ∀x measles(x) → spots(x)
  • measles(fred)
  • We can conclude
  • spots(fred)
  • This is a logical inference
  • Abduction
  • From
  • ∀x measles(x) → spots(x)
  • spots(fred)
  • We conclude
  • measles(fred)
  • This is a plausible inference
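Abduction as a search for plausible causes can be sketched over an invented rule base; every rule whose consequent matches the observation supplies a candidate (not certain) explanation:

```python
# Abduction sketch: given rules "cause implies effect", propose the causes
# that would explain an observed effect. Rules and strings are illustrative.
rules = [("measles(x)", "spots(x)"), ("allergy(x)", "spots(x)")]

def abduce(observation):
    # Every antecedent whose consequent matches the observation is a
    # candidate explanation: plausible, not logically entailed.
    return [cause for cause, effect in rules if effect == observation]

hypotheses = [h.replace("x", "fred") for h in abduce("spots(x)")]
print(hypotheses)  # ['measles(fred)', 'allergy(fred)']
```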

15
Non-monotonic reasoning and Abduction
  • Abduction gives us a powerful source of
    non-monotonic reasoning
  • Abduction is permitted so long as the conclusion
    is consistent with our other knowledge
  • ((∀x P(x) → Q(x)) ∧ Q(x) ∧ MP(x)) → P(x)
  • (Modal)
  • (∀x P(x) → Q(x)) ∧ Q(x) : P(x)
  • ------------------------------
  •   P(x)
  • Default Logic
  • A search for plausible causes

16
Default Inheritance in Default Logic
  • Recall the 'skippy' inheritance example
  • We can express it as an inference rule
  • kangaroo(x) : habitat(x,grasslands)
  • -----------------------------------
  •   habitat(x,grasslands)
  • If we have an axiom asserting that each animal
    has only one habitat
  • ∀x,y,z (habitat(x,y) ∧ habitat(x,z)) → (y = z)
  • then in the absence of other knowledge about a
    kangaroo 'leapy', the logic would conclude
  • habitat(leapy,grasslands)
  • but in the presence of the assertion
  • habitat(skippy,backyards)
  • the logic would not draw the equivalent
    conclusion about skippy

17
Default Inheritance in Modal Logic
  • We can also express the 'skippy' inheritance
    example in modal logic
  • (kangaroo(x) ∧ M(habitat(x,grasslands))) →
    habitat(x,grasslands)

18
Minimalist Reasoning and the Closed World
Assumption
  • Many rule-based expert systems incorporate a
    simple form of the Closed World Assumption
  • ¬P is assumed equivalent to a failure to prove P
  • In Default logic
  • : ¬F
  • ----
  •   ¬F
  • The general form of the CWA says that the only
    objects that satisfy a predicate P are those that
    must

19
Semantic Problems with the Closed World Assumption
  • This simple form has two types of problems
  • Semantic problems
  • The CWA applies equally to all predicates, and
    does not allow us to distinguish between
    predicates
  • In some situations (routes in an airline
    database, for example) it is appropriate to make
    the CWA assumption
  • Many government databases are of this kind
  • Often by definition
  • If you're not recorded as having a licence, you
    don't have one
  • In other cases (has_bought_russell__norvig, for
    example) it would not be reasonable to assume
    that the CWA applies
  • a system is unlikely to contain all the valid
    assertions of this type
  • Many commercial databases are of this kind

20
Syntactic Problems with the Closed World Assumption
  • Inconsistent theories
  • Given the knowledge base
  • A(joe) ∨ B(joe)
  • the CWA forces the conclusions
  • ¬A(joe)
  • ¬B(joe)
  • which is inconsistent

21
Syntactic Problems with the Closed World Assumption
  • Asymmetric conclusions
  • Given a knowledge base
  • single(john)
  • single(mary)
  • and the query
  • single(jane)?
  • The CWA results in the answer no
  • But given a knowledge base
  • married(john)
  • married(mary)
  • and the query
  • married(jane)?
  • The CWA still results in the answer no
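Both asymmetric answers fall out of a one-line sketch of CWA query answering (the knowledge bases are invented for illustration):

```python
# The CWA's asymmetry in miniature: any ground atom not in the knowledge
# base is answered "no", whichever way the predicate happens to be recorded.
def cwa_query(kb, atom):
    return atom in kb  # failure to prove is treated as falsity

singles = {"single(john)", "single(mary)"}
marrieds = {"married(john)", "married(mary)"}
print(cwa_query(singles, "single(jane)"))    # False: jane assumed not single
print(cwa_query(marrieds, "married(jane)"))  # False: jane assumed not married
# Both queries answer "no", although the two KBs record opposite information.
```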

22
Circumscription
  • In a way, circumscriptive theories are an attempt
    to answer the problems with the CWA
  • by restricting its application to particular
    predicates
  • In circumscription, the predicates of a theory T
    are divided into two parts
  • Some predicates express properties of the objects
    of the theory
  • Other predicates are intended to express that
    particular objects are abnormal in some way
  • a form of CWA applies to them
  • The theory is augmented with second-order axioms
    which effectively say (for each 'abnormal'
    predicate) that the only abnormal objects are
    those which are abnormal as a direct consequence
    of the theory

23
Default Reasoning and Circumscription
  • default reasoning can be expressed along the
    lines of
  • Birds that are not abnormal can fly
  • An ostrich is a bird
  • An ostrich cannot fly
  • under circumscription, the system can conclude
  • Ostriches are abnormal birds
  • And in fact, ostriches are the only abnormal
    birds
  • if we then add axioms
  • A penguin is a bird
  • A penguin cannot fly
  • the system will conclude
  • Penguins are abnormal birds
  • Ostriches are not the only abnormal birds

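The abnormality minimisation above can be sketched concretely; the bird names and sets are invented for illustration:

```python
# Circumscription sketch: make the abnormality predicate as small as the
# theory allows, so only birds forced to be abnormal are abnormal.
birds = {"ostrich", "penguin", "sparrow"}
cannot_fly = {"ostrich", "penguin"}  # axioms: these birds cannot fly

# "Birds that are not abnormal can fly": every non-flying bird must be
# abnormal, and circumscription adds that these are the ONLY abnormal birds.
abnormal = birds & cannot_fly
flies = birds - abnormal

print(sorted(abnormal))  # ['ostrich', 'penguin']
print(sorted(flies))     # ['sparrow']
```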
24
Belief Revision
  • Aims to characterise the ways in which a rational
    agent can update its beliefs on receiving new
    information
  • There are two main streams
  • Foundational
  • Demarcates a special set of beliefs (axioms)
    requiring no justification
  • Coherentist
  • All beliefs are equal; the aim is to find a
    maximal coherent subset
  • "For the foundationalist every piece of knowledge
    stands at the apex of a pyramid that rests on
    stable and secure foundations whose stability and
    security does not derive from the upper stories
    or sections. For the coherentist a body of
    knowledge is a free-floating raft, every plank of
    which helps directly or indirectly to keep all
    the others in place and no plank of which would
    retain its status with no help from the others"
  • (Sosa)

25
Foundational Approaches Reason Maintenance
  • Abbott, Babbitt and Cabot are suspects in a murder
    case
  • Abbott has an alibi, the register of a
    respectable hotel in Albany
  • Babbitt also has an alibi, for his brother-in-law
    testified that Babbitt was visiting him in
    Brooklyn at the time
  • Cabot pleads alibi too, claiming to have been
    watching a ski meet in the Catskills, but we have
    only his word for that
  • So we believe
  • That Abbott did not commit the crime
  • That Babbitt did not commit the crime
  • That Abbott or Babbitt or Cabot did commit the
    crime
  • But then Cabot documents his alibi
  • By chance, a television camera photographed him
    at the ski meet
  • We have to accept a new belief
  • That Cabot did not commit the crime

26
Foundational Approaches Reason Maintenance
  • The detective in charge of the case decides that
    Abbott is a primary suspect
  • because he was a beneficiary
  • He remains a suspect until he establishes an
    alibi
  • In default logic, this could be represented as
  • Beneficiary(X) : ¬Alibi(X)
  • --------------------------
  •   Suspect(X)
  • For the moment, Abbott is a suspect because we
    assume there is no alibi
  • as soon as an alibi is provided, Abbott should no
    longer be viewed as a suspect
  • But how can we handle this efficiently in a
    rule-based system such as CLIPS?
  • One possible way is to have rules reflecting each
    possible change in defaults
  • but this is impossibly complex

27
Justification-based Truth Maintenance
  • JTMS (or TMS), due to Doyle
  • Another way is to keep a separate database of
    justifications
  • The system represents justifications in terms of
    an IN-list and an OUT-list for a proposition
  • Note that the form is independent of the
    particular domain
  • It is also independent of the underlying
    reasoning system
  • JTMS does not care where its justifications come
    from

28
Labelling Nodes
  • How does the JTMS decide which nodes are IN and
    which are OUT?
  • A node is labelled IN if it has a valid
    justification
  • A node is labelled OUT if it has no valid
    justification
  • Premisses
  • There is one exception: some nodes (called
    premisses) are to be treated as given
  • e.g. that Abbott is a beneficiary
  • To avoid special cases, this is handled by giving
    them a justification from the empty node, which
    is always labelled IN
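The labelling rule can be sketched as a fixpoint computation over the Abbott example; the encoding is invented, and the sketch assumes well-founded support (no circular justifications):

```python
# JTMS labelling sketch: a node is IN iff some justification is valid,
# i.e. every node on its IN-list is IN and every node on its OUT-list is
# OUT. A premiss gets one justification with empty IN- and OUT-lists.
def label(justs):
    IN = set()
    while True:  # recompute to a fixpoint (assumes well-founded support)
        new = {node for node, js in justs.items()
               if any(all(i in IN for i in ins) and
                      all(o not in IN for o in outs)
                      for ins, outs in js)}
        if new == IN:
            return IN
        IN = new

justs = {
    "beneficiary(abbott)": [([], [])],  # premiss: justified unconditionally
    "suspect(abbott)": [(["beneficiary(abbott)"], ["alibi(abbott)"])],
    "alibi(abbott)": [],                # no justification yet
}
print("suspect(abbott)" in label(justs))  # True: no alibi, so a suspect
justs["alibi(abbott)"] = [([], [])]       # the hotel register turns up
print("suspect(abbott)" in label(justs))  # False: the default is withdrawn
```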

29
Propagating Justification
  • As the investigation progresses, the detective
    discovers that Abbott was registered at a hotel
    in Albany
  • This is far from the scene of the crime

30
Avoiding Circularity Well Foundedness
  • What about Cabot?
  • (Initially) the only support for his alibi is
    that Cabot is telling the truth
  • But the only support for Cabot telling the truth
    is that he was at the ski show
  • A JTMS must disallow such ill-founded reasoning
  • If the support for a node consists of a chain of
    positive links back to itself, the node must be
    labelled OUT
  • Even worse is a circular chain with an odd number
    of negative links in that case, there can be no
    consistent labelling
  • A JTMS needs to be able to detect both these cases

31
Contradictions
  • The detective believes that there are no other
    suspects than Abbott, Babbitt and Cabot
  • This negative knowledge is recorded as a
    justification for a special node, contradiction,
    which is never allowed to become IN
  • At the moment, this isn't a problem
  • Suspect Abbott is OUT because of the
    justification above
  • Suspect Others is OUT because it has no
    justifications
  • Suspect Babbitt is OUT because of the following

32
Contradictions
  • Fortunately, Suspect Cabot is still IN, as above
  • But what happens when the television cameras pick
    Cabot out at the ski meet?
  • The CONTRADICTION node is (impermissibly) IN

33
Contradictions
  • The JTMS has no way of knowing that this
    particular node is a contradiction
  • this must be detected by the underlying reasoning
    system
  • But once the contradiction has been detected, the
    JTMS can trace back over the justifications to
    detect which of the underlying assumptions have
    caused the contradiction
  • the set of OUT nodes which, if they were IN,
    would remove the contradiction

34
Contradictions
  • In our case, either
  • Abbott's register was forged
  • or Babbitt's brother-in-law lied
  • or Cabot's TV tape was faked
  • or there is another suspect
  • Having found the candidates, the contradiction
    may be removed by making one of them IN
  • But how to choose which? There are two possible
    approaches
  • Leave the choice to the underlying reasoning
    system, which created the dependencies in the
    first place
  • Apply simple heuristics in the JTMS

35
Contradictions
  • Once we have chosen the node, a justification
    must be supplied
  • The justification should be minimal
  • in the sense that it will be invalidated if the
    system later comes to believe any other
    justifications which would resolve the
    contradiction

36
Assumption-Based Truth Maintenance Systems
  • (Reiter & de Kleer)
  • You can think of a JTMS as doing a depth-first
    search of the space of truth assignments
    (contexts), in order to find a consistent one
  • From this perspective, an ATMS is simply doing a
    breadth-first (or parallel) search of the same
    space
  • The ATMS starts off, in effect, with a list of
    all possible contexts
  • As reasoning proceeds, it prunes this list,
    deleting contexts as contradictions are
    discovered

37
Assumption-Based Truth Maintenance Systems
  • As with JTMS, the ATMS sits on top of a separate
    reasoner, which
  • Creates the nodes corresponding to assertions
  • Provides justifications for each node
  • Informs the ATMS if any context is inconsistent
  • The ATMS' role is to
  • Propagate inconsistencies, ruling out other
    contexts
  • Label each node with contexts where it has a
    valid justification

38
Context Lattices
  • A context lattice looks somewhat like a
    generalisation hierarchy
  • The set {A1, A2} represents the context in which
    A1 and A2 are both true
  • The context lattice is very large - if there are
    n assumptions, the lattice has 2^n nodes - so we
    don't want to actually build it
  • The lattice provides a simple method for
    propagating inconsistency
  • If a node is inconsistent, then so is every node
    above it in the lattice
  • The lattice also provides a simple method for
    labelling nodes with the contexts in which it has
    a justification
  • For example, suppose a node's justification
    depends on assumption A1
  • Then the label {A1} implies that the node is
    justified not only in context {A1}, but also in
    any other context which contains A1
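Propagating a nogood upward through the lattice reduces, in a sketch, to a subset test, since every superset of an inconsistent context is itself inconsistent:

```python
# Nogood propagation in the context lattice: an inconsistent context
# poisons every context above it (every superset), so a single subset
# test stands in for walking the lattice upward.
def consistent(context, nogoods):
    return not any(ng <= context for ng in nogoods)

nogoods = [frozenset({"A7", "A8"})]
print(consistent(frozenset({"A7", "A4"}), nogoods))        # True
print(consistent(frozenset({"A6", "A7", "A8"}), nogoods))  # False
```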

39
Context Lattice Example
  • Assume that the underlying reasoner generates a
    sequence of nodes and justifications as below

40
Example Continued
41
Example (continued)
  • We assume that the reasoner can label some
    contexts as contradictory (nogoods)
  • i.e. not permitted as contexts
  • For example, nogood {A7, A8}
  • The ATMS first computes the labels in the third
    column, one for each justification
  • If the justification is an assumption
  • the label is that assumption
  • If the justification is a rule
  • the label is the product of the labels for the
    antecedents of the rule
  • For node 8, this would give the label
  • {A7,A4,A6}, {A7,A4,A8}, {A7,A4,A7,A4,A6},
    {A7,A4,A7,A2,A6}, {A7,A8,A6}, etc
  • But we can simplify by
  • Eliminating duplicates
  • Eliminating contexts containing a nogood
    ({A7,A8})
  • Taking the lower bound of the contexts
  • ending up with just
  • {A7,A4,A6}
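The three simplification steps can be sketched directly; the assumption names follow the example above, and the encoding is invented:

```python
# ATMS label simplification sketch: drop duplicate contexts, contexts
# containing a nogood, and contexts subsumed by a smaller one (keeping
# only the lower bounds).
def simplify(label, nogoods):
    envs = {frozenset(e) for e in label}  # duplicates collapse
    envs = {e for e in envs if not any(ng <= e for ng in nogoods)}
    return {e for e in envs if not any(f < e for f in envs)}  # minimal only

label = [{"A7", "A4", "A6"}, {"A7", "A4", "A8"}, {"A7", "A8", "A6"},
         {"A7", "A4", "A6", "A2"}]
nogoods = [frozenset({"A7", "A8"})]
result = simplify(label, nogoods)
print(sorted(sorted(e) for e in result))  # [['A4', 'A6', 'A7']]
```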

42
Example (continued)
  • The labels for a node are the union of the labels
    for its justifications, again taking lower bounds
  • Updating
  • Suppose the reasoner now supplies additional
    information
  • namely that context {A2} is nogood
  • The labels for nodes 1, 2, 9 and 10 immediately
    disappear, as does one of the labels for node 12
  • The only remaining suspect node is Abbott, but
    node 12 still has a justification, corresponding
    to the case that A, B and C are not the only
    suspects
  • The ATMS can report this simplification, but an
    external source is still needed to resolve
    between them

43
ATMS as Explanation Generators
  • ATMS have been described above as systems for
    maintaining consistency
  • But there is another perspective
  • We can think of an ATMS as handling a database
    consisting of two sorts of knowledge
  • Common knowledge C (the nogoods above)
  • Assumptions A
  • An explanation for a proposition p is
  • a subset E of A such that (E ∪ C) ⊢ p
  • E is required to be minimal
  • in the sense that no subset of E has the same
    property
  • Minimal subsets are what an ATMS computes
  • So an ATMS can be viewed as a way of generating
    explanations

44
ATMS for Synthesis Planning
  • So far, we have assumed that an ATMS computes all
    minimal contexts (or explanations)
  • But in many contexts, only some minimal
    explanations are required
  • Consider the problem of designing logic circuits
    from basic components
  • We can describe the operation of each of the
    components as part of our common knowledge C
  • type(c,or-gate) ∧ value(input(i,c),1) →
    value(output(c),1)
  • Similarly, rules about connecting components can
    be axiomatised
  • connected(i,j) ∧ value(i,v) → value(j,v)
  • The goal would be a proposition describing the
    input-output relations required of the desired
    circuit
  • Then an explanation generated by the ATMS is a
    minimal set of assumptions which will guarantee
    the input-output relations - in other words, a
    circuit design

45
ATMS for Diagnosis
  • Diagnosis with ATMS works in a reverse way
  • The design of the circuit is held as common
    knowledge, while the intended operation of the
    components is treated as an assumption
  • (i.e. it is now an assumption that a particular
    AND gate actually operates as an AND gate - it
    might be faulty)
  • Some basic circuit knowledge is also treated as
    an assumption
  • assumption that there may be a short or open
    circuit
  • The proposition to be explained is the observed
    behaviour
  • Then an explanation generated by the ATMS is a
    minimal set of assumptions which will generate
    that observed behaviour
  • a diagnosis
  • this approach assumes complete knowledge of the
    underlying rules of operation of the system
  • applies to engine diagnosis, but not medical
    diagnosis

46
ATMS for Database Consistency
  • ATMS provide a mechanism for maintaining the
    consistency of deductive databases
  • Note that this is more than just a validity check
  • The ATMS treats the consistency rules of the
    database as common knowledge C
  • The assumptions A are the database tables
  • The proposition to be explained is the particular
    constraint which has been violated
  • An explanation is a minimal list of changes to
    the database which are required to restore
    consistency

47
Coherentist Approaches
  • Positive Coherence
  • The agent must possess reasons for maintaining a
    belief
  • Each belief must have positive support
  • Negative Coherence
  • The agent is justified in holding a belief so
    long as there is no reason to think otherwise
  • Innocent until proven guilty
  • Linear Coherence
  • The agent adopts a foundational view of reasons,
    except that if we trace a reason, the reasons for
    holding that reason, and so on, we never stop
  • Either we have an infinite sequence of reasons,
    or there is some circularity in the reason
    structure
  • Holistic Coherence
  • The agent is justified in holding a belief due to
    some relationship between the belief and all
    other beliefs held
  • Pollock, 1986

48
The AGM Approach
  • Alchourrón, Gärdenfors, Makinson 1985
  • An example of (and the best known) coherentist
    approach
  • Aims to axiomatise the behaviour of rational
    agents in reformulating beliefs
  • Where possible, belief systems should remain
    consistent
  • A belief system should contain all beliefs
    logically implied by beliefs in the system
  • When changing belief systems, loss of information
    should be minimised
  • Beliefs held in higher regard should be retained
    in preference to those held in lower regard
  • AGM systems specify axioms for three types of
    belief revision operators
  • Expansion
  • Contraction
  • Revision
  • The axioms assume a belief system (knowledge
    state) K, and a new sentence φ
  • There is a special (absurd) belief state K⊥ in
    which everything is believed

49
Expansion Operator
  • Expansion (+) is applied when a proposed new
    belief φ is consistent with K, creating a new
    belief system K+φ
  • Axioms
  • Closure
  • For any sentence φ and belief system K, K+φ is a
    belief system
  • Success
  • φ ∈ K+φ
  • Inclusion
  • K ⊆ K+φ
  • Vacuity
  • If φ ∈ K, then K+φ = K
  • Monotonicity
  • If H ⊆ K, then H+φ ⊆ K+φ
  • Minimality
  • K+φ is the smallest belief system satisfying the
    above, and containing both φ and K
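Over a tiny invented Horn theory, expansion and several of its axioms can be checked concretely, modelling a belief system as a forward-chaining closure:

```python
# Expansion sketch: K+phi is the deductive closure of K united with {phi}.
# "Closure" here means forward chaining over a fixed, illustrative rule set.
rules = [({"p"}, "q"), ({"q"}, "r")]

def closure(beliefs):
    beliefs = set(beliefs)
    changed = True
    while changed:
        changed = False
        for body, head in rules:
            if body <= beliefs and head not in beliefs:
                beliefs.add(head)
                changed = True
    return beliefs

def expand(K, phi):
    return closure(K | {phi})

K = closure(set())
Kp = expand(K, "p")
print("p" in Kp and K <= Kp)  # True: Success and Inclusion hold
print(expand(Kp, "p") == Kp)  # True: Vacuity, since p is already in Kp
```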

50
Contraction Operator
  • Contraction (−) is applied when the agent wishes
    to retract a belief φ, creating a new belief
    system K−φ
  • Axioms
  • Closure
  • For any sentence φ and belief system K, K−φ is a
    belief system
  • Inclusion
  • K−φ ⊆ K
  • Vacuity
  • If φ ∉ K, then K−φ = K
  • Success
  • Unless φ is necessarily true, φ ∉ K−φ
  • Recovery
  • If φ ∈ K, then K ⊆ (K−φ)+φ
  • Extensionality
  • If φ is logically equivalent to ψ, then K−φ = K−ψ
  • Intersection
  • (K−φ ∩ K−ψ) ⊆ K−(φ∧ψ)
  • Conjunction
  • If φ ∉ K−(φ∧ψ) then K−(φ∧ψ) ⊆ K−φ

51
Revision Operator
  • Revision (*) is applied when belief φ is
    inconsistent with K, and the agent wishes to
    create a new belief system K*φ incorporating φ
  • Axioms
  • Closure
  • For any sentence φ and belief system K, K*φ is a
    belief system
  • Success
  • φ ∈ K*φ
  • Inclusion
  • K*φ ⊆ K+φ
  • Preservation
  • If ¬φ ∉ K, then K+φ ⊆ K*φ
  • Vacuity
  • K*φ = K⊥ if and only if ¬φ is a logical necessity
  • Extensionality
  • If φ is logically equivalent to ψ, then K*φ = K*ψ
  • Superexpansion
  • K*(φ∧ψ) ⊆ (K*φ)+ψ
  • Subexpansion
  • If ¬ψ ∉ K*φ then (K*φ)+ψ ⊆ K*(φ∧ψ)

52
Revision Operators
  • If + and − are AGM expansion and contraction
    operators, the operator defined by the Levi
    identity
  • K*φ = (K−¬φ)+φ
  • is an AGM revision operator
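A possible-worlds sketch of this construction, using full-meet contraction for simplicity (the two-atom world model and all names are invented for illustration):

```python
# Levi identity sketch: K*phi = (K - not-phi) + phi, over possible worlds.
# A belief state is a set of worlds considered possible; fewer worlds
# means stronger beliefs.
worlds = [frozenset(s) for s in ({"p", "q"}, {"p"}, {"q"}, set())]

def models(atom):
    return {w for w in worlds if atom in w}

def contract_not(K, atom):
    # Contract "not atom": if it isn't believed, do nothing (Vacuity);
    # otherwise admit every atom-world (full-meet contraction).
    return set(K) if K & models(atom) else K | models(atom)

def revise(K, atom):
    # Levi identity: first contract the negation, then expand.
    return contract_not(K, atom) & models(atom)

K = {w for w in worlds if "p" not in w}       # the agent believes not-p
print(all("p" in w for w in revise(K, "p")))  # True: Success holds
K2 = set(worlds)                              # an agnostic agent
print(revise(K2, "q") == K2 & models("q"))    # True: consistent case reduces
                                              # to plain expansion
```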

53
Epistemic Entrenchment
  • So far, we haven't taken care of the fourth
    requirement of a rational agent system, regarding
    belief preferences
  • A preference order ≤ (or epistemic entrenchment)
    is a relationship over sentences which satisfies
  • Axioms
  • For any sentences φ, ψ and χ
  • Transitivity
  • if φ ≤ ψ and ψ ≤ χ then φ ≤ χ
  • Dominance
  • if ψ is a logical consequence of φ then φ ≤ ψ
  • Conjunctiveness
  • Either φ ≤ φ∧ψ or ψ ≤ φ∧ψ
  • Minimality
  • When K ≠ K⊥, φ ∉ K iff φ ≤ ψ for all possible ψ
  • Maximality
  • If ψ ≤ φ for all ψ, then φ is logically necessary

54
AGM Belief Revision
  • AGM belief revision then attempts to find +, −
    and * operators which satisfy the AGM axioms
  • There are a number of different known ways to do
    this
  • AGM and similar systems have very strong
    theoretical underpinnings
  • Today, they remain primarily research approaches
    rather than tools in heavy real-world use

55
Summary
  • Non-Monotonic Logics
  • Modal non-monotonic logics
  • Default Logics
  • Plausible Defaults
  • Abduction
  • Minimalist Reasoning
  • The Closed World Assumption
  • Circumscription
  • Belief Revision
  • Foundationalist Approaches
  • Reason Maintenance
  • Truth Maintenance
  • Assumption-Based Truth Maintenance
  • Coherentist Approaches
  • AGM Framework for Belief Revision