Title: Preference Reasoning in Logic Programming
Preference Reasoning in Logic Programming
Pierangelo Dell'Acqua, Aida Vitória
Dept. of Science and Technology - ITN, Linköping University, Sweden
{pier, aidvi}_at_itn.liu.se
José Júlio Alferes, Luís Moniz Pereira
Centro de Inteligência Artificial - CENTRIA, Universidade Nova de Lisboa, Portugal
{jja, lmp}_at_di.fct.unl.pt
Outline
- Combining updates and preferences
- User preference information in query answering
- Preferring alternative explanations
- Preferring and updating in multi-agent systems
References
- [JELIA00] J. J. Alferes and L. M. Pereira. Updates plus Preferences. Proc. 7th European Conf. on Logics in Artificial Intelligence (JELIA00), LNAI 1919, 2000.
- [INAP01] P. Dell'Acqua and L. M. Pereira. Preferring and Updating in Logic-Based Agents. Selected Papers from the 14th Int. Conf. on Applications of Prolog (INAP01), LNAI 2543, 2003.
- [JELIA02] J. J. Alferes, P. Dell'Acqua and L. M. Pereira. A Compilation of Updates plus Preferences. Proc. 8th European Conf. on Logics in Artificial Intelligence (JELIA02), LNAI 2424, 2002.
- [FQAS02] P. Dell'Acqua, L. M. Pereira and A. Vitória. User Preference Information in Query Answering. 5th Int. Conf. on Flexible Query Answering Systems (FQAS02), LNAI 2522, 2002.
1. Update reasoning
- Updates model dynamically evolving worlds
- knowledge, whether complete or incomplete, can be updated to reflect changes in the world.
- new knowledge may contradict and override older knowledge.
- updates differ from revisions, which are about an incomplete static world model.
Preference reasoning
- Preferences are employed with incomplete knowledge, when several models are possible.
- Preferences act by choosing some of the possible models.
- This is achieved via a partial order among rules: rules will only fire if they are not defeated by more preferred rules.
- Our preference approach is based on the approach of [KR98]:
  G. Brewka and T. Eiter. Preferred Answer Sets for Extended Logic Programs. KR98, 1998.
Preferences and updates combined
- Despite their differences, preferences and updates display similarities.
- Both can be seen as wiping out rules:
  - in preferences, the less preferred rules, so as to remove models which are undesired.
  - in updates, the older rules, including so as to obtain models of otherwise inconsistent theories.
- This view helps put them together into a single uniform framework.
- In this framework, preferences can be updated.
LP framework
A: objective atom; not A: default atom

Formulae: generalized rule
  L0 ← L1, ..., Ln
where every Li is an objective or default atom.
- Let N be a set of constants containing a unique name for each generalized rule.

priority rule
  Z ← L1, ..., Ln
where Z is a literal r1 < r2 or not (r1 < r2), and r1 < r2 means that rule r1 is preferred to rule r2.

Def. (Prioritized logic program) Let P be a set of generalized rules and R a set of priority rules. Then (P, R) is a prioritized logic program.
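A minimal Python sketch may help fix the syntax. It is our own encoding, not from the papers: rules are ground, literals are strings with default atoms written "not a", and priority atoms are written like "r1<r2", so that priority rules are just rules over such atoms. The sample data anticipates the TV-programmes example below.

from dataclasses import dataclass

@dataclass(frozen=True)
class Rule:
    name: str         # unique rule name from N
    head: str         # objective or default atom, e.g. "f" or "not f";
                      # for priority rules an atom like "r1<r2" or "not r1<r2"
    body: tuple = ()  # literals L1, ..., Ln

# A prioritized logic program is a pair (P, R):
P1 = {Rule("r1", "f", ("not t", "not n")),
      Rule("r2", "t", ("not f", "not n")),
      Rule("r3", "n", ("not f", "not t"))}
R1 = {Rule("p1", "r1<r3"),
      Rule("p2", "r2<r3"),
      Rule("p3", "r2<r1", ("us",))}

# A dynamic prioritized program (next slide) is then a sequence of such
# pairs, one per state in S:  delta = [(P1, R1), (P2, R2), ...]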
Dynamic prioritized programs
- Let S = {1, ..., s, ...} be a set of states (natural numbers).
- Def. (Dynamic prioritized program) Let (Pi, Ri) be a prioritized logic program for every i ∈ S. Then {(Pi, Ri) : i ∈ S} is a dynamic prioritized program.
- The meaning of such a sequence results from updating (P1, R1) with the rules from (P2, R2), and so on, finally updating the result with the rules from (Pn, Rn).
Example
Suppose a scenario where Stefano watches programs on football, tennis, or the news. (1) In the initial situation, being a typical Italian, Stefano prefers both football and tennis to the news and, in case of international competitions, he prefers tennis over football.

P1:
  f ← not t, not n   (r1)
  t ← not f, not n   (r2)
  n ← not f, not t   (r3)

R1:
  r1 < r3
  r2 < r3
  r2 < r1 ← us
  x < y ← x < z, z < y

In this situation, Stefano has two alternative TV programmes, equally preferable: football and tennis.
(2) Next, suppose that a US-open tennis competition takes place:

P2:
  us   (r4)
R2: (empty)

Now, Stefano's favourite programme is tennis.
(3) Finally, suppose that Stefano's preferences change and he becomes interested in international news. Then, in case of breaking news, he prefers news over both football and tennis.

P3:
  bn   (r5)
R3:
  not (r1 < r3) ← bn
  not (r2 < r3) ← bn
  r3 < r1 ← bn
  r3 < r2 ← bn
Preferred stable models
Let P = {(Pi, Ri) : i ∈ S} be a dynamic prioritized program, Q = {Pi ∪ Ri : i ∈ S}, PR = ∪i (Pi ∪ Ri), and let M be an interpretation of P.

Def. (Default and rejected rules)
  Default(PR, M) = { not A : there is no rule (A ← L1, ..., Ln) in PR with M ⊨ L1, ..., Ln }
  Reject(s, M, Q) = { r ∈ Pi ∪ Ri : ∃ r' ∈ Pj ∪ Rj with head(r') = not head(r), i < j ≤ s and M ⊨ body(r') }
Def. (Unsupported and unpreferred rules)
  Unsup(PR, M) = { r ∈ PR : M ⊨ head(r) and M ⊭ body⁻(r) }
  Unpref(PR, M) is the least set including Unsup(PR, M) and every rule r such that there is a rule r' ∈ (PR − Unpref(PR, M)) with
    M ⊨ r' < r, M ⊨ body⁺(r'), and
    either not head(r') ∈ body⁻(r), or (not head(r) ∈ body⁻(r') and M ⊨ body(r)).
Def. (Preferred stable models)
Let s be a state, P = {(Pi, Ri) : i ∈ S} a dynamic prioritized program, and M a stable model of P.
M is a preferred stable model of P at state s iff
  M = least( (X − Unpref(X, M)) ∪ Default(PR, M) )
where
  PR = ∪(i ≤ s) (Pi ∪ Ri)
  Q = {Pi ∪ Ri : i ∈ S}
  X = PR − Reject(s, M, Q)
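These set operators translate almost directly into code. Below is a minimal brute-force Python sketch of Default, Reject, Unsup, Unpref and the preferred-stable-model test, under simplifying assumptions: ground rules in the string encoding sketched earlier, interpretations as sets of true atoms (including priority atoms such as "r1<r2"), and Unpref computed by a simple increasing iteration that approximates the least-set definition. All names are ours, not from the papers.

from dataclasses import dataclass

@dataclass(frozen=True)
class Rule:
    name: str
    head: str          # atom or "not atom"
    body: tuple = ()   # literals

def neg(l): return l[4:] if l.startswith("not ") else "not " + l
def holds(l, M): return (l[4:] not in M) if l.startswith("not ") else (l in M)
def holds_all(ls, M): return all(holds(l, M) for l in ls)
def body_pos(r): return [l for l in r.body if not l.startswith("not ")]
def body_neg(r): return [l for l in r.body if l.startswith("not ")]

def atoms_of(PR):
    return {l[4:] if l.startswith("not ") else l
            for r in PR for l in (r.head, *r.body)}

def default(PR, M):
    """Default(PR,M): 'not A' for every atom A having no rule A <- Body with M |= Body."""
    return {"not " + a for a in atoms_of(PR)
            if not any(r.head == a and holds_all(r.body, M) for r in PR)}

def reject(s, M, stages):
    """Reject(s,M,Q).  stages[i-1] = Pi U Ri; a rule of stage i is rejected by a
    conflicting rule of a later stage j <= s whose body holds in M."""
    return {r for i in range(1, s + 1) for r in stages[i - 1]
            if any(r2.head == neg(r.head) and holds_all(r2.body, M)
                   for j in range(i + 1, s + 1) for r2 in stages[j - 1])}

def unsup(PR, M):
    """Unsup(PR,M): head true in M, but the default part of the body false."""
    return {r for r in PR if holds(r.head, M) and not holds_all(body_neg(r), M)}

def unpref(PR, M):
    """Unpref(PR,M), computed here as an increasing iteration."""
    U = set(unsup(PR, M))
    changed = True
    while changed:
        changed = False
        for r in PR - U:
            if any(r2.name + "<" + r.name in M             # M |= r2 < r
                   and holds_all(body_pos(r2), M)          # M |= body+(r2)
                   and (neg(r2.head) in r.body             # r2 attacks r, or
                        or (neg(r.head) in r2.body and holds_all(r.body, M)))
                   for r2 in PR - U):
                U.add(r); changed = True
    return U

def least(rules, facts):
    """Least model, reading default literals as plain atoms supplied as 'facts'."""
    m, changed = set(facts), True
    while changed:
        changed = False
        for r in rules:
            if all(l in m for l in r.body) and r.head not in m:
                m.add(r.head); changed = True
    return m

def is_preferred_stable_model(M, stages, s):
    """Test M (a stable model, given as a set of true atoms) at state s."""
    PR = set().union(*stages[:s])
    X = PR - reject(s, M, stages)
    m = least(X - unpref(X, M), default(PR, M))
    return {a for a in m if not a.startswith("not ")} == M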
The τ(s,P) transformation

- Let s be a state and P = {(Pi, Ri) : i ∈ S} a dynamic prioritized program.
- In [JELIA02] we gave a transformation τ(s,P) that compiles dynamic prioritized programs into normal logic programs.
- The preference part of the transformation is modular, i.e. incremental with respect to the update part of the transformation.
- The size of the transformed program τ(s,P) is, in the worst case, quadratic in the size of the original dynamic prioritized program P.
- An implementation of the transformation is available at
  http://centria.di.fct.unl.pt/jja/updates
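The deck does not spell out the transformation itself (see [JELIA02] for the actual construction). Purely to illustrate what "compiling preferences into a normal program" can look like, here is a toy Python sketch, entirely our own and not the τ(s,P) of [JELIA02]: every named rule is guarded by a literal not defeated(r), and each priority pair contributes a clause deriving defeated for the less preferred rule. The real transformation additionally handles updates, rejection between states, and dynamically derived priorities.

from dataclasses import dataclass

@dataclass(frozen=True)
class Rule:
    name: str
    head: str
    body: tuple = ()

def compile_toy(rules, prefs):
    """prefs: pairs (winner, loser) of rule names, the winner being preferred.
    Emits the clauses of a normal logic program as text."""
    by_name = {r.name: r for r in rules}
    out = []
    for r in rules:
        guard = r.body + ("not defeated(%s)" % r.name,)
        out.append("%s <- %s." % (r.head, ", ".join(guard)))
    for winner, loser in prefs:
        w, l = by_name[winner], by_name[loser]
        if "not " + w.head in l.body:     # the preferred rule attacks the loser
            cond = [b for b in w.body if not b.startswith("not ")]
            cond.append("not defeated(%s)" % winner)  # only undefeated rules defeat
            out.append("defeated(%s) <- %s." % (loser, ", ".join(cond)))
    return out

# Stefano's rules with the static preferences r1 < r3 and r2 < r3:
rules = [Rule("r1", "f", ("not t", "not n")),
         Rule("r2", "t", ("not f", "not n")),
         Rule("r3", "n", ("not f", "not t"))]
for clause in compile_toy(rules, [("r1", "r3"), ("r2", "r3")]):
    print(clause)
# f <- not t, not n, not defeated(r1).
# ...
# defeated(r3) <- not defeated(r1).

In the compiled program no clause derives defeated(r1) or defeated(r2), so defeated(r3) holds, r3 is blocked, and the stable models contain f or t: the two equally preferable alternatives of the example.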
Thm. (Correctness of τ(s,P)) An interpretation M is a stable model of τ(s,P) iff M, restricted to the language of P, is a preferred stable model of P at state s.
2. User preference information in query answering
- Query answering systems are often difficult to use because they do not attempt to cooperate with their users.
- The use of additional information about the user can enhance the cooperative behaviour of query answering systems [FQAS02].
- Consider a system whose knowledge is formalized by a prioritized logic program (P, R).
- Extra level of flexibility: the user can provide preference information at query time:
    ?- (G, Pref)
- Given (P, R), the system has to derive G from P, taking into account the preferences in R updated by the preferences in Pref.
- Finally, it is desirable to make the background knowledge (P, R) of the system updatable, so that it can be modified to reflect changes in the world (including preferences).
- The ability to take user information into account makes the system able to target its answers to the user's goals and interests.
- Def. (Queries with preferences) Let G be a goal, Ω a prioritized logic program, and Δ = {(Pi, Ri) : i ∈ S} a dynamic prioritized program. Then ?- (G, Ω) is a query wrt Δ.
Joinability function

Let S' = S ∪ {max(S) + 1}.

Def. (Joinability at state s) Let s ∈ S' be a state, Δ = {(Pi, Ri) : i ∈ S} a dynamic prioritized program, and Ω = (PX, RX) a prioritized logic program. The joinability function ⊗s at state s is:

Δ ⊗s Ω = {(P'i, R'i) : i ∈ S'}, where
  (P'i, R'i) = (Pi, Ri)               if 1 ≤ i < s
  (P'i, R'i) = (Pi, Ri) ∪ (PX, RX)    if i = s
  (P'i, R'i) = (Pi−1, Ri−1)           if s < i ≤ max(S) + 1
Preferred conclusions

Def. (Preferred conclusions) Let s ∈ S' be a state and Δ = {(Pi, Ri) : i ∈ S} a dynamic prioritized program. The preferred conclusions of Δ with joinability function ⊗s are

  { (G, Ω) : G is included in every preferred stable model of Δ ⊗s Ω at state max(S') }
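Read literally, these two definitions amount to splicing the user's program into the sequence and querying the result. A small Python sketch, with our own data layout (a dynamic program as a list of (P, R) pairs of rule sets indexed from state 1, and preferred_models standing for any implementation of the preferred-stable-model semantics, such as the sketch given earlier):

def join(delta, omega, s):
    """delta ⊗s omega, following the case analysis above: the result ranges
    over S' = S ∪ {max(S)+1}; stages below s are kept, the user program
    (PX, RX) is merged at state s, the remaining stages are shifted up by one."""
    n = len(delta)                  # max(S)
    PX, RX = omega
    out = []
    for i in range(1, n + 2):       # states 1 .. max(S)+1
        if i < s:
            out.append(delta[i - 1])
        elif i == s:
            P, R = delta[i - 1] if i <= n else (set(), set())
            out.append((P | PX, R | RX))
        else:
            out.append(delta[i - 2])
    return out

def preferred_conclusions(goals, delta, omega, s, preferred_models):
    """Goals included in every preferred stable model of delta ⊗s omega at the
    last state; preferred_models(program, state) yields those models."""
    joined = join(delta, omega, s)
    models = list(preferred_models(joined, len(joined)))
    return {g for g in goals if models and all(g in m for m in models)}

With s = 1 the user program is merged at the first state and is thus overridable by everything later; with s = max(S) + 1 it lands last and overrides the system's own preferences. Both behaviours are discussed in the car-dealer example below.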
Example: car dealer
Consider the following program, which exemplifies the process of quoting prices for second-hand cars.

  price(Car,200) ← stock(Car,Col,T), not price(Car,250), not offer   (r1)
  price(Car,250) ← stock(Car,Col,T), not price(Car,200), not offer   (r2)
  prefer(orange) ← not prefer(black)   (r3)
  prefer(black) ← not prefer(orange)   (r4)
  stock(Car,Col,T) ← bought(Car,Col,Date), T = today - Date   (r5)
When the company buys a car, the information about the car must be added to the stock via an update:

  bought(fiat,orange,d1)

When the company sells a car, the company must remove the car from the stock:

  not bought(volvo,black,d2)
The selling strategy of the company can be formalized by adding the following priority rules to the program above (rules r1-r5):

  r2 < r1 ← stock(Car,Col,T), T < 10
  r1 < r2 ← stock(Car,Col,T), T ≥ 10, not prefer(Col)
  r2 < r1 ← stock(Car,Col,T), T ≥ 10, prefer(Col)
  r4 < r3
Suppose that the company adopts the policy of offering a special price for cars at certain times of the year:

  price(Car,100) ← stock(Car,Col,T), offer   (r6)
  not offer

Suppose an orange fiat bought on date d1 is in stock and offer does not hold. Independently of the joinability function used:

  ?- ( price(fiat,P), ({}, {}) )
  P = 250 if today - d1 < 10
  P = 200 if today - d1 ≥ 10
  ?- ( price(fiat,P), ({}, {not (r4 < r3), r3 < r4}) )
  P = 250

- For this query it is relevant which joinability function is used:
  - if we use ⊗1, then we do not get the intended answer, since the user preferences are overwritten by the default preferences of the company;
  - on the other hand, it is not so appropriate to use ⊗max(S'), since a customer could ask
      ?- ( price(fiat,P), ({offer}, {}) )
Selecting a joinability function

In some applications the user preferences in Ω must have priority over the preferences in Δ. In this case, the joinability function ⊗max(S') must be used. Example: a web-site application of a travel agency whose database Δ maintains information about holiday resorts and preferences among tourist locations. When a user asks a query ?- (G, Ω), the system must give priority to Ω. Some other applications need the joinability function ⊗1, to give priority to the preferences in Δ.
Open issues
- Detect inconsistent preference specifications.
- How to incorporate abduction in our framework: abductive preferences, leading to conditional answers that depend on accepting a preference.
- How to tackle the problem arising when several users query the system together.
3. Preferring abducibles
- The evaluation of alternative explanations is one of the central problems of abduction.
- An abductive problem of a reasonable size may have a combinatorial explosion of possible explanations to handle.
- It is important to generate only the explanations that are relevant.
- Some proposals involve a global criterion against which each explanation as a whole can be evaluated.
- A general drawback of those approaches is that global criteria are generally domain independent and computationally expensive.
- An alternative to global criteria is to allow the theory to contain rules encoding domain-specific information about the likelihood that a particular assumption is true.
- In our approach we can express preferences among abducibles to discard unwanted assumptions.
- Preferences over alternative abducibles can be coded into cycles over negation; preferring a rule will then break the cycle in favour of one abducible or another.
Example
- Consider a situation where an agent, Peter, drinks either tea or coffee (but not both). Suppose that Peter prefers coffee to tea when sleepy.
- This situation can be represented by a set Q of generalized rules with set of abducibles AQ = {tea, coffee}:

  Q:
    drink ← tea
    drink ← coffee
    coffee ◁ tea ← sleepy

- a ◁ b means that abducible a is preferred to abducible b.
- In our framework, Q can be coded into the following set P of generalized rules with set of abducibles AP = {abduce}:

  P:
    drink ← tea
    drink ← coffee
    coffee ← abduce, not tea, confirm(coffee)   (r1)
    tea ← abduce, not coffee, confirm(tea)   (r2)
    confirm(tea) ← expect(tea), not expect_not(tea)
    confirm(coffee) ← expect(coffee), not expect_not(coffee)
    expect(tea)
    expect(coffee)
    r1 < r2 ← sleepy, confirm(coffee)
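A brute-force check of this encoding (our own Python sketch, hard-coded to the example) makes the behaviour concrete: the even loop through negation between r1 and r2 yields exactly two stable models, one per abducible, and when sleepy holds the priority rule defeats r2, leaving only the coffee explanation.

from itertools import combinations

RULES = [("r1", "coffee", ("abduce", "not tea", "confirm(coffee)")),
         ("r2", "tea", ("abduce", "not coffee", "confirm(tea)")),
         ("d1", "drink", ("tea",)),
         ("d2", "drink", ("coffee",))]
FACTS = {"abduce", "confirm(tea)", "confirm(coffee)"}  # both drinks confirmed
UNKNOWN = ["coffee", "tea", "drink"]

def least_reduct(M):
    """Least model of the Gelfond-Lifschitz reduct of RULES + FACTS wrt M."""
    m, changed = set(FACTS), True
    while changed:
        changed = False
        for _, h, body in RULES:
            if h not in m and all((l[4:] not in M) if l.startswith("not ")
                                  else (l in m) for l in body):
                m.add(h); changed = True
    return m

def stable_models():
    for k in range(len(UNKNOWN) + 1):
        for extra in combinations(UNKNOWN, k):
            M = FACTS | set(extra)
            if least_reduct(M) == M:
                yield M

def preferred(models, sleepy):
    """When sleepy, r1 < r2 fires (confirm(coffee) holds) and r2 is unpreferred,
    since 'not coffee', the negation of r1's head, occurs in r2's body; the
    model obtained via r2, i.e. the one containing tea, is therefore dropped."""
    return [M for M in models if not (sleepy and "tea" in M)]

print(preferred(list(stable_models()), sleepy=True))
# only the explanation assuming coffee survives; with sleepy=False both remain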
- Having the notion of expectation allows one to express the preconditions for an expectation:
    expect(tea) ← have_tea
    expect(coffee) ← have_coffee
- By means of expect_not, one can express situations in which something is not expected:
    expect_not(coffee) ← blood_pressure_high
4. Preferring and updating in multi-agent systems
- In [INAP01] we proposed a logic-based approach to agents that can:
  - reason and react to other agents
  - prefer among possible choices
  - intend to reason and to act
  - update their own knowledge, reactions and goals
  - interact by updating the theory of another agent
  - decide whether to accept an update depending on the requesting agent
Updating agents
- Updating agent: a rational, reactive agent that can dynamically change its own knowledge and goals. It:
  - makes observations
  - reciprocally updates other agents with goals and rules
  - thinks a bit (rational)
  - selects and executes an action (reactive)
Preferring agents
- Preferring agent: an agent that is able to prefer beliefs and reactions when several alternatives are possible.
- Agents can express preferences about their own rules.
- Preferences are expressed via priority rules.
- Preferences can be updated, possibly on advice from others.
Agents' language
A: objective atoms;  not A: default atoms
j:C: projects;  i÷C: updates

Formulae:

generalized/priority rules
  A ← L1, ..., Ln
  not A ← L1, ..., Ln
where each Li is an atom, an update, or a negated update

integrity constraint
  false ← L1, ..., Ln, Z1, ..., Zm
where each Zj is a project

active rule
  L1, ..., Ln ⇒ Z
where Z is a project
Agents' knowledge states
- Knowledge states represent dynamically evolving states of agents' knowledge. They undergo change due to updates.
- Given the current knowledge state Ps, its successor knowledge state Ps+1 is produced as a result of the occurrence of a set of parallel updates.
- Update actions do not modify the current or any of the previous knowledge states. They only affect the successor state: the precondition of the action is evaluated in the current state, and the postcondition updates the successor state.
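A minimal Python sketch of this state-transition discipline, with our own names and simplified to atom-level updates (the framework itself updates rules and goals as well): the condition of an active rule is evaluated in the current state, while accepted updates build only the successor.

from dataclasses import dataclass

@dataclass(frozen=True)
class ActiveRule:
    condition: tuple   # literals, evaluated in the current state Ps
    project: tuple     # (recipient, formula): a proposed update to some agent

def holds(lit, state):
    return (lit[4:] not in state) if lit.startswith("not ") else (lit in state)

def step(state, active_rules, accepted_updates):
    """One transition Ps -> Ps+1.  Fired projects are sent out and, if accepted
    by their recipients, arrive there as updates in a later transition."""
    fired = [r.project for r in active_rules
             if all(holds(l, state) for l in r.condition)]
    successor = set(state)
    for u in accepted_updates:        # a set of parallel updates
        if u.startswith("not "):
            successor.discard(u[4:])  # 'not A' overrides an older A
        else:
            successor.add(u)
    return successor, fired

# e.g. Stan's reactive rule  wishGoOut, not money => stan:getMoney
stan = [ActiveRule(("wishGoOut", "not money"), ("stan", "getMoney"))]
s1, projects = step({"wishGoOut"}, stan, accepted_updates=set())
print(projects)   # [('stan', 'getMoney')]: an internal project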
Projects and updates
- A project j:C denotes the intention of some agent i of proposing to update the theory of agent j with C.
- An update i÷C denotes an update, proposed by agent i, of the current theory of some agent j with C.
- For example, Fred's project wilma:C, once received and accepted by Wilma, becomes the update Fred÷C of Wilma's theory.
Representation of conflicting information and preferences
- Preferences may resolve conflicting information.
- This example models a situation where an agent, Fabio, receives conflicting advice from two reliable authorities. Let (P, R) be the initial theory of Fabio, where:

  P:
    dont(A) ← fa(noA), not do(A)   (r1)
    do(A) ← ma(A), not dont(A)   (r2)
    false ← do(A), fa(noA)
    false ← dont(A), ma(A)
  R:
    r1 < r2 ← fr
    r2 < r1 ← mr

  (fa = father advises, ma = mother advises, fr = father responsibility, mr = mother responsibility)
- Suppose that Fabio wants to live alone, represented as LA.
- His mother advises him to do so, but his father advises him not to:
    U1 = { mother÷ma(LA), father÷fa(noLA) }
- Fabio accepts both updates, and is therefore still unable to choose either do(LA) or dont(LA); as a result, he does not perform any action whatsoever.
- Afterwards, Fabio's parents separate and the judge assigns responsibility over Fabio to the mother:
    U2 = { judge÷mr }
- Now the situation changes, since the second priority rule gives preference to the mother's wishes, and therefore Fabio can happily conclude do(LA): live alone.
Updating preferences
- Within the theory of an agent, both rules and preferences can be updated.
- The updating process is triggered by means of external or internal projects.
- Here, internal projects of an agent are used to update its own priority rules.
- Let the theory of Stan be characterized by:

  P:
    workLate ← not party   (r1)
    party ← not workLate   (r2)
    money ← workLate   (r3)
    r2 < r1   (partying is preferred to working late)

  R:
    beautifulWoman ⇒ stan:wishGoOut
    wishGoOut, not money ⇒ stan:getMoney
    wishGoOut, money ⇒ beautifulWoman:inviteOut
    getMoney ⇒ stan:(r1 < r2)
    getMoney ⇒ stan:(not r2 < r1)

  To get money, Stan must update his priority rules.