Title: Action, Change and Evolution: from single agents to multi-agents
1. Action, Change and Evolution: from single agents to multi-agents
- Chitta Baral
- Professor, School of Computing, Informatics and Decision Systems Engineering
- Key faculty, Center for Evolutionary Medicine and Informatics
- Arizona State University
- Tempe, AZ 85287
2. Action, Change and Evolution: importance to KR&R
- Historical importance
- Applicability to various domains
- Various knowledge representation aspects
- Various kinds of reasoning
3. Heracleitos/Herakleitos/Heraclitus of Ephesus (c. 500 BC) - interpreted by Plato in Cratylus
- "No man ever steps in the same river twice, for it is not the same river and he is not the same man."
- "Panta rei kai ouden menei"
- All things are in motion and nothing at rest.
4. Alternate interpretation of what Heraclitus said
- Different waters flow in rivers staying the same.
- In other words, though the waters are always changing, the rivers stay the same.
- Indeed, it must be precisely because the waters are always changing that there are rivers at all, rather than lakes or ponds.
- The message is that rivers can stay the same over time even though, or indeed because, the waters change. The point, then, is not that everything is changing, but that the fact that some things change makes possible the continued existence of other things.
5. Free will and choosing one's destiny
6. Where does that line of thought lead us?
- Change is ubiquitous
- But one can shape the change in a desired way
- Some emerging KR issues
  - How to specify change
  - How to specify our desires/goals regarding the change
  - How to construct/verify ways to control the change
7. Action and Change is encountered often in Computing as well as other fields
- Robots and Agents
- Updates to a database
  - Becomes more interesting when updates trigger active rules
- Distributed Systems
- Computer programs
- Modeling cell behavior
  - Ligand coming in contact with a receptor
- Construction Engineering
8. Various KR aspects encountered
- Need for non-monotonicity
- Probabilistic reasoning
- Modal logics
- Open and closed domains
- Causality
- Hybrid reasoning
9. Various kinds of reasoning
- Prediction
- Plan verification / control verification
- Narratives
- Counterfactuals
- Causal reasoning
- Planning / control generation
- Explanation
- Diagnosis
- Hypothesis generation
10. Initial Key Issue: the Frame Problem
- Motivation: How to specify transitions between states of the world due to actions?
- A state transition table would be too space consuming!
- Assume by default that properties of the world normally do not change, and specify the exceptions of what changes.
- How to precisely state the above?
- Many finer issues!
- To be elaborated upon as we proceed further.
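The default-inertia idea on this slide can be sketched in a few lines: copy the previous state, and override only the fluents an action explicitly affects. This is an illustration, not from the talk; the fluent and action names are invented.

```python
# Minimal sketch of the inertia default: apply an action's explicit
# effects and carry every other fluent over unchanged.
# (Illustrative only; the domain below is invented.)

def apply_effects(state, effects):
    """state: dict fluent -> bool; effects: the fluents the action changes."""
    new_state = dict(state)      # by default, nothing changes (inertia)
    new_state.update(effects)    # ... except the explicitly stated effects
    return new_state

s0 = {"alive": True, "loaded": False}
s1 = apply_effects(s0, {"loaded": True})   # load
s2 = apply_effects(s1, {"alive": False})   # shoot
print(s2)  # {'alive': False, 'loaded': True}
```

With m fluents and n actions, only the changed fluents are listed per action; the remaining m-per-action frame conditions are implicit in the copy.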
11. Origin of the AI frame problem
- Leibniz, c. 1679: "everything is presumed to remain in the state in which it is"
- Newton, 1687 (Philosophiae Naturalis Principia Mathematica): An object will remain at rest, or continue to move at a constant velocity, unless a resultant force acts on it.
12. Early work in AI on action and change
- 1959: McCarthy (Programs with common sense)
- 1969: McCarthy and Hayes (Some philosophical problems from the standpoint of AI) - origin of the frame problem in AI
- 1971: Raphael (The frame problem in problem-solving systems) - defines the frame problem nicely
- 1972: Sandewall (An approach to the frame problem)
- 1972: Hewitt (PLANNER)
- 1973: Hayes (The frame problem and related problems in AI)
- 1977: Hayes (The logic of frames)
- 1978: Reiter (On reasoning by default)
13. Quotes from McCarthy & Hayes 1969
- "In the last section of part 3, in proving that one person could get into conversation with another, we were obliged to add the hypothesis that if a person has a telephone he still has it after looking up a number in the telephone book. If we had a number of actions to be performed in sequence we would have quite a number of conditions to write down that certain actions do not change the values of certain fluents. In fact with n actions and m fluents we might have to write down mn such conditions."
- "We see two ways out of this difficulty. The first is to introduce the notion of frame, like the state vector in McCarthy (1962). A number of fluents are declared as attached to the frame and the effect of an action is described by telling which fluents are changed, all others being presumed unchanged."
14. In summary
- Action and Change is an important topic in KR&R
- Its historical basis goes back to pre-Plato and Aristotle days
- In AI it goes back to the founding days of AI
- It has wide applicability
- It involves various kinds of KR aspects
- It involves various kinds of reasoning
15. Outline of the rest of the talk
- Highlights of some important results and turning points in describing the world and how actions change the world (physical as well as mental)
- Other aspects of action and change - here we will talk about mostly our work
  - Specifying Goals
  - Agent architecture
  - Applications
  - A future direction
- Interesting issues with multiple agents
16. The Yale Shooting Problem - Hanks & McDermott (AAAI 1986)
- Nonmonotonic formal systems have been proposed as an extension to classical first-order logic that will capture the process of human default reasoning or plausible inference through their inference mechanisms, just as modus ponens provides a model for deductive reasoning.
- We provide axioms for a simple problem in temporal reasoning which has long been identified as a case of default reasoning, thus presumably amenable to representation in nonmonotonic logic. Upon examining the resulting nonmonotonic theories, however, we find that the inferences permitted by the logics are not those we had intended when we wrote the axioms, and in fact are much weaker. This problem is shown to be independent of the logic used; nor does it depend on any particular temporal representation.
- Upon analyzing the failure we find that the nonmonotonic logics we considered are inherently incapable of representing this kind of default reasoning.
17. Reiter 1991: A simple solution (sometimes) to the frame problem
- Combines earlier proposals by Schubert (1990) and Pednault (1989) together with a suitable closure assumption.
- Intermediate point
  - Poss(a,s) ∧ preR(a,s) → R(do(a,s))
  - Poss(a,s) ∧ preR⁻(a,s) → ¬R(do(a,s))
- Successor state axiom
  - Poss(a,s) → [ R(do(a,s)) ↔ preR(a,s) ∨ (R(s) ∧ ¬preR⁻(a,s)) ]
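Read operationally, the successor state axiom says: R holds after a iff a made R true, or R already held and a did not make it false. A minimal sketch of that reading (the fluent/action names in the comments are invented):

```python
# Successor-state axiom, read operationally:
# R holds after doing a in s  iff  a makes R true, or R already held
# and a does not make R false. (Illustrative fluent/action names.)

def successor(holds_R, makes_true, makes_false):
    return makes_true or (holds_R and not makes_false)

# 'loaded' after 'load': the action makes it true
print(successor(False, True, False))   # True
# 'loaded' after 'wait': inertia keeps it
print(successor(True, False, False))   # True
# 'loaded' after 'shoot': the action makes it false
print(successor(True, False, True))    # False
```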
18. Lin & Shoham 1991: Provably correct theories of actions
- Argued that a useful way to tackle the frame problem is to consider a monotonic theory with explicit frame axioms first, and then to show that a succinct and provably equivalent representation using, for example, nonmonotonic logics, captures the frame axioms concisely
19. Sandewall: Features and Fluents
- 1991/1994 Book; IJCAI 1993; 1994 JLC: The range of applicability of some non-monotonic logics for strict inertia
- Proposed a systematic methodology to analyze a proposed theory in terms of its selection function
- When
  - Y is a scenario description (expressed using logical formulae),
  - Int(Y) is the set of intended models of Y,
  - S(Y) is the set of models of Y selected by the selection function S,
- Validation of S means showing
  - S(Y) = Int(Y) for an interesting and sufficiently large class of Y.
- Range of applicability is the set Z such that Y ∈ Z implies S(Y) = Int(Y)
20. The language A - 1992
- 1992: Gelfond & Lifschitz. Representing actions in extended logic programs. Journal of Logic Programming version in 1993.
- Syntax
  - Value proposition: F after A1; ...; Am (with "initially F" as a special case)
  - Effect proposition: A causes F if P1, ..., Pm
  - Domain Description: a set of propositions
- Semantics
  - Entailment between Domain Descriptions and Value Propositions
  - Entailment defined by models of domain descriptions
  - Models defined in terms of initial states and transitions between states due to actions
- Sound translation to logic programs
21. Kartha 93: Soundness and Completeness of three formalizations of actions
- Used A as the base language
- Proposed translations to
  - Pednault's scheme
  - Reiter's scheme
  - A circumscriptive scheme based on a method by Baker
- Proved the soundness and completeness of the translations.
22. 1990-91-92
- 1990: I first learned about the frame problem from Don Perlis
- 1991-92: Learned more about it from Michael Gelfond
23. Effect of actions executed in parallel - IJCAI 93, JLP 97 (with Gelfond)
- Initial frame problem
  - Succinctly specifying the state transition due to an action
- What if we allow actions to be executed in parallel?
- Do we explicitly specify the effects of each possible subset of actions executed in parallel?
  - Too many
- Do we just add their effects?
  - May not match reality
- { l_lift } causes spilled
- { r_lift } causes spilled
- { l_lift, r_lift } causes ¬spilled if ¬spilled
- { l_lift, r_lift } causes lifted
- initially ¬spilled, ¬lifted
- { paint } causes painted
24. Our Solution and similar work
- Inherit from subsets under normal circumstances, and
- use specified exceptions when necessary.
- High level language syntax and semantics
- Logic programming formulation
- Correctness theorem
- Similar work by Lin and Shoham in 1992.
25. Our Solution: Excerpts from the high level language semantics
- Execution of an action a in a state s causes a fluent literal f if
  - a immediately causes f (defined as: there is a proposition "a causes f if p1, ..., pn" such that p1, ..., pn hold in s), or
  - a inherits the effect f from its subsets in s (i.e., there is a b ⊆ a such that execution of b in s immediately causes f, and there is no c such that b ⊂ c ⊆ a and execution of c in s immediately causes ¬f).
- E(a, s) = { f : f is a fluent and execution of a in s causes f }
- E⁻(a, s) = { f : f is a fluent and execution of a in s causes ¬f }
- F(a, s) = (s ∪ E(a, s)) \ E⁻(a, s).
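The inheritance clause above can be sketched directly: a compound action inherits an effect from a subset unless some larger subset (up to the compound itself) causes the contrary effect. The soup-bowl rules are the talk's example; the encoding is my own.

```python
# Sketch of the inheritance semantics for parallel actions.
# "A causes f=v if pre": (frozenset of atomic actions, fluent, value, pre)
from itertools import combinations

props = [
    (frozenset({"l_lift"}), "spilled", True, {}),
    (frozenset({"r_lift"}), "spilled", True, {}),
    (frozenset({"l_lift", "r_lift"}), "spilled", False, {"spilled": False}),
    (frozenset({"l_lift", "r_lift"}), "lifted", True, {}),
]

def immediate(b, s):
    """Direct effects of executing exactly the set b in state s."""
    return {(f, v) for a, f, v, pre in props
            if a == b and all(s.get(p) == w for p, w in pre.items())}

def causes(a, s):
    caused = set(immediate(a, s))
    for r in range(1, len(a)):                       # proper subsets b of a
        for b in map(frozenset, combinations(a, r)):
            for (f, v) in immediate(b, s):
                # inherit unless some c with b < c <= a causes the contrary
                cancelled = any((f, not v) in immediate(c, s)
                                for k in range(r + 1, len(a) + 1)
                                for c in map(frozenset, combinations(a, k))
                                if b < c)
                if not cancelled:
                    caused.add((f, v))
    return caused

s0 = {"spilled": False, "lifted": False}
print(causes(frozenset({"l_lift", "r_lift"}), s0))
# the compound lift causes (lifted, True) and (spilled, False);
# the inherited (spilled, True) is cancelled
```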
26. Our Solution: Excerpts from the logic programming axiomatization
- Inertia
  - holds(F, res(A,S)) ← holds(F,S), not may_i_cause(A, ¬F, S), atomic(A), not undefined(A,S).
- Translating "a causes f if p1, ..., pn"
  - may_i_cause(a, f, S) ← not ¬h(p1,S), ..., not ¬h(pn,S).
  - cause(a, f, S) ← h(p1,S), ..., h(pn,S).
- Effect axioms
  - holds(F, res(A,S)) ← cause(A,F,S), not undefined(A,S).
  - undefined(A,S) ← may_i_cause(A, F, S), may_i_cause(A, ¬F, S).
- Inheritance axioms
  - holds(F, res(A,S)) ← subset(B,A), holds(F, res(B,S)), not noninh(F,A,S), not undefined(A,S).
  - cancels(X,Y,F,S) ← subset(X,Z), subseteq(Z,Y), cause(Z,F,S).
  - noninh(F,A,S) ← subseteq(U,A), may_i_cause(U, ¬F, S), not cancels(U,A,F,S).
  - undefined(A,S) ← noninh(F,A,S), noninh(¬F,A,S).
27. Effect of actions in presence of specifications relating fluents in the world
- Examples of state constraints
  - dead iff ¬alive.
  - at(X) ∧ at(Y) → X = Y.
- Winslett 1988: s′ ∈ F(a,s) if
  - s′ satisfies the direct effects (E) of the action plus the state constraints (C), and
  - there is no other state s″ that satisfies E and C and that is closer (defined using symmetric difference) to s than s′ is.
- But?
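Winslett's possible-models idea can be sketched as a search: among the states satisfying the direct effect and the constraints, keep those whose symmetric difference from the old state is minimal. A toy sketch with the dead/alive constraint from the slide (the encoding is mine):

```python
# Sketch of Winslett's possible-models approach: minimize the symmetric
# difference from the old state among constraint-satisfying states.
from itertools import product

fluents = ["alive", "dead"]

def constraint(s):               # state constraint: dead iff not alive
    return s["dead"] == (not s["alive"])

def all_states():
    for vals in product([False, True], repeat=len(fluents)):
        yield dict(zip(fluents, vals))

def diff(s, t):                  # symmetric difference as a set of fluents
    return {f for f in fluents if s[f] != t[f]}

def winslett(s, effect):         # effect: fluent values forced by the action
    candidates = [t for t in all_states()
                  if constraint(t) and all(t[f] == v for f, v in effect.items())]
    return [t for t in candidates
            if not any(diff(s, u) < diff(s, t) for u in candidates)]

s = {"alive": True, "dead": False}
print(winslett(s, {"alive": False}))  # [{'alive': False, 'dead': True}]
```

The "But?" on the slide foreshadows the next slide: minimizing change over a classical constraint cannot distinguish which fluent should give way, because the constraint carries no causal direction.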
28. Problems in using classical logic to express state constraints
- Lin's Suitcase example (Lin - IJCAI 95)
  - flip1 causes up1
  - flip2 causes up2
  - State Constraint: up1 ∧ up2 → open
  - initially up1, ¬up2, ¬open.
  - What happens if we do flip2?
  - But up1 ∧ up2 → open is equivalent to ¬open ∧ up2 → ¬up1
- Marrying and moving (me - IJCAI 95)
  - at(X) ∧ at(Y) → X = Y.
  - married_to(X) ∧ married_to(Y) → X = Y.
- Ramification vs. Qualification.
29. Causal connection between fluents
- We suggested in IJCAI 95 that a causal specification (in particular Marek and Truszczynski's Revision programs) be used to specify state constraints
  - out(at_B) ← in(at_A).  out(at_A) ← in(at_B).
  - ← in(married_to_A), in(married_to_B).
- Presented a way to translate it to logic programs.
- Thus a logic programming solution to the frame problem in presence of state constraints that can express causality and that distinguishes between ramification and qualification.
- We proved soundness and completeness theorems.
- McCain and Turner presented a conditional logic based solution in the same IJCAI (1995).
- Lin 1995: Embracing causality in specifying indirect effects of actions
- Thielscher 1996
- Used in the RCS-Advisor system developed at Texas Tech University.
30. Knowledge and Sensing
- Moore 1979, 1984
  - For any two possible worlds w1 and w2 such that w2 is the result of the execution of a in w1, the worlds that are compatible with what the agent knows in w2 are exactly the worlds that are the result of executing a in some world that is compatible with what the agent knows in w1.
  - Suppose sense_f is an action that the agent can perform to know if f is true or not. Then for any two worlds w1 and w2 such that w2 is the result of sense_f happening in w1, the worlds that are compatible with what the agent knows in w2 are exactly those worlds that are the result of sense_f happening in some world that is compatible with what the agent knows in w1 and in which f has the same truth value as in w2.
- Scherl & Levesque 1993
31. Knowledge and Sensing
- Effect Specifications
  - push_door causes open if ¬locked, ¬jammed
  - push_door causes jammed if locked
  - flip_lock causes locked if ¬locked
  - flip_lock causes ¬locked if locked
  - initially ¬jammed, ¬open
- Goal: To make open true
- P1: If ¬locked then push_door
      else flip_lock; push_door
- P2: sense_locked;
      If ¬locked then push_door
      else flip_lock; push_door
32. Formalizing sensing actions: a transition function based approach (with Son - AIJ 2001)
- [Figure: the sense_f action maps a knowledge state, i.e. a set of possible states {s1, s2, s3, s4}, to the subsets of states that agree with the sensed value of f]
33. Combining narratives with hypothetical reasoning: planning from the current situation
- With Gelfond & Provetti - JLP 1997: The language L
- Besides effect axioms of the type
  - a causes f if p1, ..., pn
- We have occurrence and precedence facts of the form
  - f at si
  - a occurs_at si
  - si precedes sj
34. An example
- rent causes has_car
- hit causes ¬has_car
- drive causes at_airport if has_car
- drive causes ¬at_home if has_car
- pack causes packed if at_home
- at_home at s0
- ¬at_airport at s0
- has_car at s0
- PLAN
- EXECUTE
  - s0 precedes s1
  - pack occurs_at s1
- OBSERVE
  - s1 precedes s2
  - ¬has_car at s2
- Needs to make a new PLAN from the CURRENT situation
35. From sensing and narratives to dynamic diagnosis: basic ideas (With McIlraith, Son - KR 2000)
- Diagnosis: Reiter defined diagnosis to be a fault assignment to the various components of the system that is consistent with (or explains) the observations; Thielscher extended it to dynamic diagnosis.
- Dynamic diagnosis using L and sensing
- Necessity of Diagnosis: when the observation is inconsistent with the assumption that all components were initially fine and no action that can break one of those components occurred. I.e., (SD \ SDab, OBS ∪ OK0) does not have a model
- Diagnostic model: M is a model of the narrative (SD, OBS ∪ OK0)
- Narratives
  - OBS: s0 < s1 < s2 < s3
  - ¬light_on at s0; light_on at s1; ¬light_on at s2; ¬light_on at s3
  - turn_on occurs_at s0; turn_off occurs_at s1; turn_on occurs_between s2, s3
  - OK0: ¬ab(bulb) at s0.
- Diagnostic plan: A conditional plan with sensing actions which, when executed, gives sufficient information to reach a unique diagnosis.
36. Golog - JLP 1997 (Levesque, Reiter, Lesperance, Lin, Scherl)
- A logic based language to program robots/agents
- Allows programs to reason about the state of the world and consider the effects of various possible courses of action before committing to a particular behavior
- I.e., it will unfold to an executable sequence of actions
- Based on theories of action and an extended version of the situation calculus
37. Features of Golog
- Primitive actions
- Test actions (fluent formulas to be tested in a situation)
- Sequence
- Non-deterministic choice of two actions
- Non-deterministic choice of action arguments
- Non-deterministic iteration (conditionals and while loops can be defined using it)
- Procedures
38. Lots of follow-up on Golog
- Work at Toronto
- Work at York
- Work at Aachen
- Etc.
39. Other aspects of action description languages
- Non-deterministic effects of actions
- Probabilistic effects of actions with causal relationships; counterfactual reasoning
- Defeasible specification of effects
- Presence of triggers
  - Characterizing active databases
- Actions with durations
- Hybrid effects of actions
- Thielscher's fluent calculus
- Event calculus
- Modular action descriptions
- Learning action models
40. Issues studied so far
- Mostly about describing how actions may change the world
41. Outline of the rest of the talk
- Highlights of some important results and turning points in describing the world and how actions change the world (physical as well as mental)
- Other aspects of action and change - mostly presenting our work
  - Specifying Goals and directives
  - Agent architecture
  - Applications
  - A future direction
- Interesting issues with multiple agents
42. Specifying goals and directives
43. What are maintenance goals?
- Always f, also written as □ f
  - Too strong for many kinds of maintainability (e.g., maintaining the room clean)
- Always Eventually f, also written as □ ◊ f
  - Weak in the sense that it does not give an estimate of when f will be made true.
  - May not be achievable in presence of continuous interference by belligerent agents.
- □ f  implies  □ ◊k f  implies  □ ◊ f
- □ ◊3 f is a shorthand for □ ( f ∨ ○f ∨ ○○f ∨ ○○○f )
- But if an external agent keeps interfering, how is one supposed to guarantee □ ◊3 f?
44. Definition of k-maintainability - AAAI 00
- Given
  - A system A = (S, A, Φ), where
    - S is the set of system states
    - A is the union of agent actions Aag and environmental actions Aenv
    - Φ : S × A → 2^S
  - A set of initial states S, a set of maintenance states E, a parameter k, and a function exo : S → 2^Aenv about exogenous action occurrence
- we say that a control K k-maintains S with respect to E, if
  - for each state s reachable from S via K and exo, and each sequence s = s0, s1, . . . , sr (r < k) that unfolds within k steps by executing K, we have
  - { s0, s1, . . . , sr } ∩ E ≠ ∅.
45. No 3-maintainable policy for S = {b} with respect to E = {h}
- [Figure: a transition graph over states b, c, d, e, f, g, h, with agent action a and exogenous transitions]
46. 3-maintainable policy for S = {b} with respect to E = {h}: Do a in b, c and d.
- [Figure: the same transition graph; executing a in b, c and d reaches h within 3 steps]
47. Finding k-maintainable policies (if they exist): an overview (joint work with T. Eiter) - ICAPS 04
- Encoding the problem in SAT, whose models, if any, encode the k-maintainable policies.
- This SAT encoding can be recast as a Horn logic program whose least model encodes the maximal control.
- (Maintainability is closely related to Dijkstra's self-stabilization in distributed systems.)
48. Motivational goal: Try your best to reach a state where p is true.
- [Figure: a transition graph over states s1..s5 with actions a1..a7; states are labeled with literals over p, q, r, s]
49. Try your best to reach p: Policy π1
- [Figure: the same transition graph with a candidate policy marked; states are labeled with p or ¬p]
50. LTL, CTL* and π-CTL*
- LTL: Next, Always, Eventually, Until
  - For plans that are action sequences
- CTL*: exists path, all paths
  - For plans that are action sequences
- π-CTL*: exists path following the policy under consideration, all paths following the policy under consideration. (ECAI 04)
  - For policies (mapping states to actions)
51. π-CTL* not powerful enough! (AAAI 06)
- In F2, doing a2 in s1 is trying your best, but not in F1.
- How to make that distinction while specifying our goal?
- π-CTL* is not able to make such a distinction.
- Consider the policy π where π(s1) = π(s2) = a2
- π is a "try your best" policy for F2 but not for F1.
- But all π-CTL* formulas have the same truth value with respect to both F2 and F1, given s1 and π.
- [Figure: two transition systems F1 and F2 over states s1, s2 with actions a1, a2 and p-labels; they differ only in an extra a1 option from s1 in F1]
52. Expressing "Try your best" in P-CTL* - AAAI 06
- P-CTL*: exists policy, and for all policies
- A representation of "Try your best" in P-CTL*
  - A: Strong policy - all paths eventually lead to the goal state.
  - B: Strong cyclic policy - in all paths, in all states, there is a path that eventually leads to the goal state
  - C: Weak policy - there exists a path that eventually leads to the goal state.
- P-CTL* goal
  - If there exists a strong policy then the agent should take that
  - Else if there exists a strong cyclic policy then the agent should take that
  - Else if there exists a weak policy then the agent should take that.
53. Non-monotonic goal specification - IJCAI 07, AAAI 08 and ongoing work
- Motivation
  - Initial goal: Please get a cup of coffee.
  - Weakening: In case the coffee machine is broken, a cup of tea would be fine.
  - Exception to Exception: Get a cup of tea only if the coffee machine cannot be easily fixed.
  - Revising: If bringing tea, make sure it is hot.
- Past work on non-monotonic temporal logics
  - Fujiwara and Honiden, 1991: A nonmonotonic temporal logic and its Kripke semantics.
  - Saeki 1987: Non-monotonic temporal logic and its application to formal specifications (in Japanese)
- Proposed a non-monotonic temporal logic in IJCAI 07
- Currently working to develop a better language.
- Started working on natural language semantics to go from discourses in English to a non-monotonic logical language.
54. Other results related to goal specification
- Complexity of planning with LTL and CTL goals - IJCAI 01.
- The approach to find k-maintainable policies also leads to novel algorithms for planning with respect to other temporal goals expressed in π-CTL* - AAAI 05.
- Diagnostic and repair goals (KR 00)
  - Specifies that a unique diagnosis is reached, with certain literals protected, certain literals restored, and certain literals fixed.
- Knowledge temporal goals (IJCAI 01)
55. Outline of the rest of the talk
- Highlights of some important results and turning points in describing the world and how actions change the world (physical as well as mental)
- Other aspects of action and change - our work
  - Specifying Goals
  - Agent architecture
  - Applications
  - A future direction
- Interesting issues with multiple agents
56. Some of our contributions to control architectures and control execution languages
57. My view of agent architecture
- Reactive, Deliberative and Hybrid
  - Fully reactive: sense-match-act cycle.
  - Completely deliberative: sense-plan/replan-act a bit.
  - Hybrid: Reactive at low levels; deliberative at high levels.
- Our view of hybrid architecture (ETAI 98, Agents 98)
  - Reactive for the most common, most critical, etc. cases.
  - Fully deliberative for rare cases.
  - Between reactive and deliberative for the rest.
58. Between deliberative and reactive
- (Condition, Reasoning program) pairs
- Different kinds of reasoning programs
  - Logic program based (Kowalski, Sadri, Pereira)
  - Agent programming language (VS et al.)
  - Planning using domain dependent knowledge
    - Temporal (Bacchus and Kabanza)
    - Partial Order, hierarchical (HTN), SHOP
  - Procedural (GOLOG, ConGolog)
- A combination of the above (ATAL 99, AAAI 04, ACM TOCL 06)
59. Our AAAI 96 robot: 3rd in the Office navigation contest
60. AAAI 96 and 97 robot contests - Agents 98
- AAAI 96: Robots were given a topological map and required to start from a director's office, find if conference room 1 was empty, and if not then find if conference room 2 was empty. If either was empty then inform prof1, prof2 and the director about a meeting in that room; otherwise inform the professors and the director that the meeting would be at the director's office; and finally return to the director's office.
- Do the above avoiding obstacles and without changing the availability status of the conference rooms.
- We were third with 285 out of a total of 295 points.
- AAAI 97: First place in the event "Tidy Up" of the home vacuum contest.
  - Goal was to maintain several areas in an office environment clean.
- For both we used our notion of correctness of reactive control and had proved the correctness of our control.
61. Some other contributions
- Correctness of reactive programs (ETAI 98)
- Automatic policy generation algorithms
  - For maintainability goals (ICAPS 04)
  - For specific types of goals in π-CTL* (AAAI 05)
62. Outline of the rest of the talk
- Highlights of some important results and turning points in describing the world and how actions change the world (physical as well as mental)
- Other aspects of action and change - our work
  - Specifying Goals
  - Agent architecture
  - Applications
  - A future direction
- Interesting issues with multiple agents
63. Some of our contributions to applications
- Robots; Active Databases; Workflows; Modeling cells; Question answering; CBioC
64. Mobile Robots
- Discussed our robot in the AAAI 96 and 97 contests.
- Took a break for a few years.
- A recent ONR MURI project involving Indiana University (lead: Matthias Scheutz), Notre Dame (Kathy M. Eberhard), Stanford (Stanley Peters) and ASU (myself, Rao Kambhampati, Pat Langley and Mike McBeath)
  - Effective Human Robot Interaction under Time Pressure through Natural Language Dialogue and Dynamic Autonomy
65. Active Databases and Workflows
- Formal characterization of active databases (LIDS 96, DOOD 97, CL 00)
- Formalizing and reasoning about the specification of workflows - CoopIS 2000
66. Reasoning about cell behavior
- BioSigNet-RR (ISMB 04, KR 04, AAAI 05)
  - Hypothetical Reasoning: side effects of drugs
  - Planning: therapy design
  - Explanation of observations: figuring out what is wrong
- BioSigNet-RRH (ECCB 05)
  - Hypothesis generation
67. Description of an NFkB signaling pathway
- Binding of TNF-a with TNFR1 leads to TRADD binding with one or more of TRAF2, FADD, RIP.
- TRADD binding with TRAF2 leads to over-expression of FLIP provided NIK is phosphorylated on the way.
- TRADD binding with RIP inhibits phosphorylation of NIK.
- TRADD binding with FADD in the absence of FLIP leads to cell death.
68. Syntax by example
- bind(TNF-a,TNFR1) causes trimerized(TNFR1)
- trimerized(TNFR1) triggers bind(TNFR1,TRADD)
69. General syntax to represent networks
- e causes f if f1, ..., fk
- g1, ..., gk causes g
- h1, ..., hm n_triggers e
- k1, ..., kl triggers e
- r1, ..., rl inhibits e
- e is an event (also referred to as an action) and the rest are fluents (properties of the cell)
- For metabolic interactions
  - e converts g1, ..., gk to f1, ..., fk if h1, ..., hm
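A minimal sketch of how causes/triggers rules can drive a simulation: events update fluents, and a triggered event fires in the next step once its triggering fluents hold and no inhibiting fluent does. The encoding below is my own (not BioSigNet's), using the two example rules from the previous slide with flattened names.

```python
# Trigger-style simulation sketch. "e causes f if pre" updates fluents;
# "pre triggers e" schedules e; "r inhibits e" blocks it.
# Encoding and dictionaries are illustrative, not the system's API.

causes   = {"bind_TNFa_TNFR1":  [("trimerized_TNFR1", set())],
            "bind_TNFR1_TRADD": [("bound_TNFR1_TRADD", set())]}
triggers = {"bind_TNFR1_TRADD": {"trimerized_TNFR1"}}  # event -> required fluents
inhibits = {"bind_TNFR1_TRADD": set()}                 # event -> blocking fluents

def step(fluents, events):
    new = set(fluents)
    for e in events:                                   # apply effects of fired events
        for f, pre in causes.get(e, []):
            if pre <= fluents:
                new.add(f)
    fired = {e for e, pre in triggers.items()          # schedule triggered events
             if pre <= new and not (inhibits.get(e, set()) & new)}
    return new, fired

fluents, events = set(), {"bind_TNFa_TNFR1"}
for _ in range(3):
    fluents, events = step(fluents, events)
print(sorted(fluents))  # ['bound_TNFR1_TRADD', 'trimerized_TNFR1']
```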
70. Semantics, queries and entailment
- Observation part of queries
  - f at t
  - a occurs_at t
- Given the network N and observations O
  - Predict if a temporal expression holds.
  - Explain a set of observations.
  - Plan to achieve a goal.
71. Prediction
- Given some initial conditions and observations, predict how the world would evolve, or predict the outcome of (hypothetical) interventions.
72. Prediction
- Binding of TNF-a with TNFR1 leads to TRADD binding with one or more of TRAF2, FADD, RIP.
- TRADD binding with TRAF2 leads to over-expression of FLIP provided NIK is phosphorylated on the way.
- TRADD binding with RIP inhibits phosphorylation of NIK.
- TRADD binding with FADD in the absence of FLIP leads to cell death.
- Initial Condition
  - bind(TNF-a,TNF-R1) occurs at t0
- Observation
  - TRADD's binding with TRAF2, FADD, RIP
- Query
  - predict eventually apoptosis
- Answer: Yes!
73. Explanation
- Given initial conditions and observations, explain why the final outcome does not match expectations.
74. Explanation
- Binding of TNF-a with TNFR1 leads to TRADD binding with one or more of TRAF2, FADD, RIP.
- TRADD binding with TRAF2 leads to over-expression of FLIP provided NIK is phosphorylated on the way.
- TRADD binding with RIP inhibits phosphorylation of NIK.
- TRADD binding with FADD in the absence of FLIP leads to cell death.
- Initial condition
  - bound(TNF-a,TNFR1) at t0
- Observation
  - bound(TRADD, TRAF2) at t1
- Query: Explain apoptosis
- One explanation
  - Binding of TRADD with RIP
  - Binding of TRADD with FADD
75. Other issues in reasoning about cell behavior
- Planning interventions
- Generating hypotheses
  - Our observations cannot be explained by our existing knowledge, OR the explanations given by our existing knowledge are invalidated by experiments?
  - Conclusion: Our knowledge needs to be augmented or revised! How?
  - Can we use a reasoning system to predict some hypotheses that one can verify through experimentation?
  - Automates the reasoning in the mind of a biologist; especially helpful when the background knowledge is humongous.
- Constructing pathways
- Studying drug-drug interactions
76. Outline of the rest of the talk
- Highlights of some important results and turning points in describing the world and how actions change the world (physical as well as mental)
- Other aspects of action and change - our work
  - Specifying Goals
  - Agent architecture
  - Applications
  - A future direction
- Interesting issues with multiple agents
77. Multi-agent action scenarios
78. Simple multi-agent actions
- Two agents need to lift a table
- Particular agents can do particular actions
- Different agents may be located in different places; depending on where the action is occurring, only the agents present there can execute the action
79. Multi-agent action scenarios: Reasoning about each other's knowledge (Muddy Children problem)
- Three children playing in the mud.
- Common Knowledge: They can see each other's foreheads but not their own
- Father says: "One of you has mud on your forehead"
- Father asks: "Do you know if you have mud on your forehead?"
- All answer: No
- Father again asks: "Do you know if you have mud on your forehead?"
- All answer: No
- Father again asks: "Do you know if you have mud on your forehead?"
- All answer: Yes
80. Muddy Children problem
- States are Kripke models
- Actions considered in the past: Announcement actions
- Actions of interest: Ask and faithfully answer
- AAMAS talk tomorrow by co-author Greg Gelfond.
81. A, B, C are in a room and have no clue if the gun is loaded; this is common knowledge
- On the left is a Kripke Model M
- S1 and S2 are two possible real worlds
- (S1, M) entails ¬Ka l, ¬Ka ¬l, ¬Kb l, ¬Kb ¬l, ¬Kc l, ¬Kc ¬l, Ka ¬Kb l, Ka ¬Kb ¬l, ...
- (S2, M) also entails the same
- [Figure: a two-world Kripke model with worlds s1 (l) and s2 (¬l) connected by edges labeled a, b, c, and reflexive a, b, c edges at each world]
82. A peeks and finds out l; B sees A peeking; C has no clue
- Ka l - A knows l
- ¬Kb l - B does not know l
- ¬Kb ¬l - B does not know ¬l
- Kb (Ka l or Ka ¬l) - B knows that A knows the value of l.
- ¬Kc l, ¬Kc ¬l - C does not know the value of l.
- Bc (¬Ka l and ¬Ka ¬l)
- Bc Bb (¬Ka l and ¬Ka ¬l) - C has no clue
- [Figure: the updated Kripke model; in the top copy the a-edge between the l and ¬l worlds is removed, and c-edges connect the top copy to an unchanged bottom copy of the original model]
83. A peeks and finds out l; B sees A peeking; C has no clue
- C has no clue: As far as C is concerned the old Kripke model is still the structure. Thus we make a copy of the old Kripke model (bottom).
- B sees A peeking: So the edge labeled a is removed in the top part.
- A and B know C has no clue: So c-edges are introduced between the top part and the bottom part, and c-edges are removed in the top part.
- [Figure: the resulting two-layer Kripke model]
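The update recipe on this slide can be executed on a tiny model: copy the old model as the bottom layer for the oblivious agent, refine the top layer for the sensing agent, and keep the partial observer's edges within the top layer. The data structures and names below are my own sketch, not the paper's formalism.

```python
# Kripke-model update sketch: A peeks at l, B sees A peeking, C is oblivious.

worlds = {"s1": {"l": True}, "s2": {"l": False}}
full = {(u, v) for u in worlds for v in worlds}
R = {"a": set(full), "b": set(full), "c": set(full)}   # total ignorance about l

def peek_update(worlds, R, sensed="l"):
    top = {("t", w): f for w, f in worlds.items()}     # the "A peeked" layer
    bot = {("b", w): f for w, f in worlds.items()}     # copy of the old model
    nR = {ag: set() for ag in R}
    for ag, edges in R.items():
        for (u, v) in edges:
            nR[ag].add((("b", u), ("b", v)))           # old model survives below
            if ag == "a":                              # A learns the sensed value
                if worlds[u][sensed] == worlds[v][sensed]:
                    nR[ag].add((("t", u), ("t", v)))
            elif ag == "b":                            # B knows A sensed, not the value
                nR[ag].add((("t", u), ("t", v)))
            else:                                      # C thinks nothing happened
                nR[ag].add((("t", u), ("b", v)))
    return {**top, **bot}, nR

def knows(ag, w, phi, worlds, R):
    """ag knows phi at world w: phi holds in all ag-accessible worlds."""
    return all(phi(worlds[v]) for (u, v) in R[ag] if u == w)

W2, R2 = peek_update(worlds, R)
real = ("t", "s1")                                     # actual world: l true, A peeked
print(knows("a", real, lambda f: f["l"], W2, R2))      # True  - A knows l
print(knows("b", real, lambda f: f["l"], W2, R2))      # False - B does not know l
print(knows("c", real, lambda f: f["l"], W2, R2))      # False - C does not know l
```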
84. Multi-agent scenarios: An action language
- Initially (we allow only restricted knowledge about the initial state)
  - initially φ
  - initially C φ
  - initially C(Ki φ ∨ Ki ¬φ)
- Actions and effects
  - executable a if φ
  - a causes φ if ψ
  - a determines f
  - a may_determine f
  - a announces φ
85. Multi-agent scenarios: An action language (cont.)
- Agent roles
  - agent observes a if φ
  - agent partially_observes a if φ
- An example
  - peek(X) determines l
  - X observes peek(X)
  - Y partially_observes peek(X) if looking(Y)
  - distract(X,Y) causes ¬looking(Y)
  - signal(X,Y) causes looking(Y)
- The plan signal(a,b); distract(c); peek(a) will result in a knowing the value of l, b knowing that a knows that value, and c having no clue.
86. Planning Scenarios
- A can do an action to distract C so that when he peeks, C has no clue.
- Similarly, A can do an action to make B attentive towards what A is doing.
- A can even do an action to confuse C
- In a battlefield, friendly agents need to
  - Share knowledge as needed, and
  - Work together to take steps so that foes have no clue, or to confuse or misinform them towards a strategic goal.
87. Conclusions
88. Our Conclusions
- Action, Change and Evolution are important issues that crop up at times in Computer Science.
- They are an important domain for KR&R
- Early focus on this had been on the frame problem: succinctly specifying what changes and what does not change due to actions
- Over the years we have worked on that aspect as well as other important aspects such as
  - Goal specification
  - Control specification and architecture
  - Various kinds of reasoning
  - Various applications
- We are facing some interesting challenges in the multi-agent domain; past work in Dynamic Epistemic Logic is helping us.
89. Research supported by
- Current support
- NSF
- IARPA
- ONR
- Past
- NSF
- NASA
- United Space Alliance
- ARDA/DTO
90. THANK YOU
- (Special thanks to all the collaborators and
colleagues, many of whom are here, who at
different times and in different ways motivated
us.)