Meeting at the University Nancy 2, 11 Oct 2006

Transcript and Presenter's Notes
1
EPFL's AI Laboratory
  • Meeting at the University Nancy 2, 11 Oct 2006

2
EPFL in some numbers
  • École Polytechnique Fédérale de Lausanne
  • Founded in 1879 as part of the university
  • Since 1969, one of the two federally funded
    universities in Switzerland
  • In total 10,000 people
  • Annual budget from the Swiss government: 380M
  • >5000 bachelor and master students in 13 domains
  • >1000 doctoral students
  • >250 research faculty
  • >100 nationalities

3
Computing at EPFL
  • The School of Computer and Communication Sciences
    at EPFL is one of the major European centers of
    teaching and research in information technology
  • 34 professors
  • 200 Ph.D. students
  • >1000 bachelor and master students
  • >8M external research funding

4
The AI lab at EPFL
  • Director: Prof. Boi Faltings
  • Software agents
  • Case-based reasoning
  • Constraint-based reasoning
  • Recommender Systems
  • 3 Teaching Professors
  • Development of a research and teaching activity
    in the domain of Natural Language
  • Numerical constraint satisfaction problems
  • 2 Post-docs
  • In charge of the European projects CASCOM, DIP,
    and Knowledge Web
  • 12 PhD students
  • http://liawww.epfl.ch/

5
Artificial Intelligence
  • Definition 1
  • Software that mimics human behavior
  • Definition 2
  • Software that makes people more intelligent
  • In particular, for artificial domains:
    accounting, design, planning, coordination
  • Our vision of AI
  • Combine the expertise and concerns of many parties
    to solve problems that no individual could solve alone

6
Open Issues
  • The network leads to unbounded, dynamic problems →
    distributed problem-solving
  • Make agent incentives compatible to discourage
    manipulation
  • Model and reason with people's preferences

7
Preference Elicitation - P. Viappiani
  • Objective: build interactive tools that help
    users search for their most preferred item in a
    large collection of options (preference-based
    search)
  • Consider example-critiquing, a technique for
    enabling users to incrementally construct
    preference models by critiquing example options
    that are presented to them
  • Investigate techniques for generating automatic
    suggestions, considering the uncertainty of the
    preference model and heuristics

8
Preference Elicitation - V. Schickel
  • Objective: estimate missing preferences
    from an incomplete elicited user model and see
    how many preferences can be transferred
  • Study how an ontology can be used to model the
    user's preferences and e-catalog products
  • Investigate inference techniques to guess
    missing preferences
  • Build a more robust similarity metric for
    hierarchical ontologies

9
Reputation Mechanisms - R. Jurca
  • Objective: build reputation mechanisms for
    online environments where agents do not a priori
    trust each other
  • The reputation of an agent is obtained by
    aggregating feedback
  • While most previous results assume that
    agents report feedback honestly, we explicitly
    consider rational reporting incentives and
    guarantee that truth-telling is in the best
    interest of the reporters. Carefully designed
    payment schemes (agents get paid when reporting
    feedback) ensure that truth-telling is a Nash
    equilibrium: as long as other agents report
    honestly, no reporter can gain by lying.

10
Reputation Mechanisms - Q. Nguyen
  • Objective: find local search algorithms that
    achieve good performance while satisfying
    incentive compatibility for bounded-rational
    agents
  • Studying randomized algorithms and local search
    algorithms
  • Main contributions include a local search
    algorithm called the Random Subset Optimization
    algorithm and an incentive-compatible,
    budget-balanced protocol called the leave-one-out
    protocol for bounded-rational agents

11
Distributed Constraint Optimization - A. Petcu
  • Objective: develop Multiagent Constraint
    OPtimization (MCOP) for solving numerous
    practical problems, such as planning, distributed
    over many agents
  • Developing a mechanism to solve MCOP in a linear
    number of messages (DPOP)
  • Studying dynamic environments (problems can
    change over time), a self-stabilizing version
    of DPOP, and techniques that maintain privacy

12
Using an Ontological A-priori Score to Infer
Users' Preferences
  • Advisor: Prof. Boi Faltings, EPFL

13
Presentation Layout
  • Introduction
  • Introduce the problem and existing techniques
  • Transferring Users' Preferences
  • Introduce the assumptions behind our model
  • Explain the transfer of preferences
  • Validation of the model
  • Experiment on MovieLens
  • Conclusion
  • Remarks and Future Work

14
Problem Definition
  • Recommendation Problem (RP)
  • Recommend a set of items I to the user from the
    set of all items O, based on their preferences P
  • Use a Recommender System (RS) to find the best
    items
  • Examples
  • NotebookReview.com (O = notebooks, P = criteria
    (processor type, screen size))
  • Amazon.com (O = books, DVDs, …, P = grading)
  • Google (O = web documents, P = keywords)

15
Recommendation Systems
  • Three approaches to build a RS [1,2,3,4,5]
  • Case-Based Filtering: uses previous cases
  • e.g. Collaborative Filtering (cases = users'
    ratings)
  • Good performance, low cognitive requirements
  • Sparsity, latency, shilling attacks and the
    cold-start problem
  • Content-Based Filtering: uses item descriptions
  • e.g. Multi-Attribute Utility Theory
    (descriptions = attributes)
  • Matches the user's preferences, very good precision
  • Requires elicitation of weights and value functions
  • Rule-Based Filtering: uses associations between
    items
  • e.g. Data Mining (association rules)
  • Finds hidden relationships, good domain discovery
  • Expensive and time consuming
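The collaborative-filtering entry above (cases = users' ratings) can be made concrete with item-item adjusted-cosine similarity, the CF variant used in the MovieLens experiment later in the talk. This is a hedged sketch: the tiny ratings matrix is made up for illustration.

```python
# Item-item adjusted-cosine similarity: subtract each user's mean
# rating before taking the cosine, over users who rated both items.
# The ratings matrix below is a made-up illustrative example.

ratings = {  # user -> {item: rating}
    "u1": {"i": 4, "j": 5},
    "u2": {"i": 2, "j": 1},
    "u3": {"i": 5, "j": 4},
}

def adjusted_cosine(i, j):
    num = den_i = den_j = 0.0
    for r in ratings.values():
        if i in r and j in r:
            mean = sum(r.values()) / len(r)   # this user's mean rating
            num += (r[i] - mean) * (r[j] - mean)
            den_i += (r[i] - mean) ** 2
            den_j += (r[j] - mean) ** 2
    return num / ((den_i ** 0.5) * (den_j ** 0.5))

print(adjusted_cosine("i", "j"))  # -1.0: deviations are perfectly opposed
```

Subtracting each user's mean compensates for users who grade systematically high or low, which plain cosine similarity does not.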

16
Central Problem of RS
17
Presentation Layout
  • Introduction
  • Introduce the problem and existing techniques
  • Transferring Users' Preferences
  • Introduce the assumptions behind our model
  • Explain the transfer of preferences
  • Validation of the model
  • Experiment on MovieLens
  • Conclusion
  • Remarks and Future Work

18
Ontology
  • D1: An ontology Ω is a graph (DAG) where
  • nodes model concepts
  • instances being the items
  • edges represent the relations (features)
  • Sub-concepts are distinguished by certain
    features
  • Features are usually not made explicit

19
The Score of a Concept - S
  • The RP is viewed as predicting the score S assigned
    to a concept (group of items)
  • The score can be seen as a lower-bound function
    that models how much a user likes an item

20
A-priori Score - APS
  • The structure of the ontology contains
    information
  • Use APS(c) to capture the knowledge of concept c
  • If no information, assume S(c) is uniform on [0,1]
  • P(S(c) > x) = 1 - x
  • Concepts can have n descendants
  • Assumption A3 ⇒ P(S(c) > x) = (1 - x)^(n+1)
  • APS uses no user information
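The distribution above gives the a-priori score in closed form: integrating P(S(c) > x) = (1 - x)^(n+1) over [0,1] yields an expected score of 1/(n+2). A minimal sketch:

```python
def aps(n_descendants):
    # E[S(c)] = integral from 0 to 1 of (1 - x)**(n + 1) dx = 1 / (n + 2)
    # A leaf concept (n = 0) gets APS = 0.5; APS decreases as the
    # concept becomes more general (more descendants).
    return 1.0 / (n_descendants + 2)

print(aps(0))  # 0.5
print(aps(2))  # 0.25
```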

21
Inference Idea
Select the best Lowest Common Ancestor lca(SUV, bus) [AAAI06]
(figure: ontology fragment with Vehicle → Car, Bus and
Car → SUV, Utilities, Pickup; known scores S(SUV) = 0.8
and S(Pickup) = 0.6; unknown S(bus) = ?)
22
Upward Inference
A1: the score depends on the features of the item
(figure: going up k levels from SUV to vehicle)
  • Going up k levels → remove k known features
  • Removing features → S↑ or S↓ (here S′ ≤ S)
  • S(vehicle|SUV) = α(vehicle, SUV) · S(SUV)
  • α ∈ [0,1] is the ratio of liked features in common
  • How to compute α?
  • α = |features(vehicle)| / |features(SUV)|
  • Does not take into account the feature
    distribution
  • α = APS(vehicle) / APS(SUV)

23
Downward Inference
A2: features contribute independently to the score
(figure: going down l levels from vehicle to bus)
  • Going down l levels → adding l unknown features
  • Adding features → S↑ or S↓ (here S′ ≥ S)
  • Try S(bus|vehicle) = α · S(vehicle) with α ≥ 1?
  • Instead: S(bus|vehicle) = S(vehicle) + β(vehicle, bus)
  • β ∈ [0,1] is the score of the features in bus not
    present in vehicle
  • How to compute β?
  • β = APS(bus) - APS(vehicle)

24
Overall Inference
  • There exists a chain between the two concepts,
    but not a path
(figure: Vehicle → Car, Bus; Car → SUV)
  • As for Bayesian Networks, we assume independence
  • S(Bus|SUV) = α · S(SUV) + β
  • The score of a concept y knowing x is defined as
    S(y|x) = α(x, lca(x,y)) · S(x) + β(y, lca(x,y))
  • The score function is asymmetric
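The whole inference chain (APS from descendant counts, lca selection, then the transfer S(y|x) = α·S(x) + β) can be sketched on a toy version of the vehicle ontology. The parent links below are illustrative assumptions, not the actual catalog; APS uses the 1/(n+2) expectation implied by slide 20.

```python
# Toy tree ontology (child -> parent); an illustrative assumption.
PARENT = {"Car": "Vehicle", "Bus": "Vehicle",
          "SUV": "Car", "Pickup": "Car", "Utilities": "Car"}

def ancestors(c):
    chain = [c]
    while c in PARENT:
        c = PARENT[c]
        chain.append(c)
    return chain

def aps(c):
    # A-priori score: with n descendants, E[S(c)] = 1 / (n + 2).
    n = sum(1 for x in PARENT if c in ancestors(x)[1:])
    return 1.0 / (n + 2)

def lca(x, y):
    # Lowest common ancestor: first ancestor of x also above y.
    ys = set(ancestors(y))
    for a in ancestors(x):
        if a in ys:
            return a

def score_transfer(x, s_x, y):
    # S(y|x) = alpha(x, lca) * S(x) + beta(y, lca)
    z = lca(x, y)
    alpha = aps(z) / aps(x)   # upward: remove known features
    beta = aps(y) - aps(z)    # downward: add unknown features
    return alpha * s_x + beta

print(score_transfer("SUV", 0.8, "Bus"))  # approx. 0.586
```

Note the asymmetry claimed on the slide: score_transfer("Bus", s, "SUV") uses different α and β than the SUV-to-Bus direction.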

25
Presentation Layout
  • Introduction
  • Introduce the problem and existing techniques
  • Transferring Users' Preferences
  • Introduce the assumptions behind our model
  • Explain the transfer of preferences
  • Validation of the model
  • WordNet (built best similarity metric, see
    [IJCAI07])
  • Experiment on MovieLens
  • Conclusion
  • Remarks and Future Work

26
Validation Transfer - I
  • MovieLens database, used by the CF community
  • 100,000 ratings on 1682 movies by 943 users
  • MovieLens movies are modeled by 23 attributes
  • 19 themes, MPAA rating, duration, and release
    date
  • Extracted from IMDB.com
  • Built an ontology modeling the 22 attributes of a
    movie
  • Used definitions found in various online
    dictionaries

27
Validation Transfer - II
  • Experiment setup, for each of the 943 users:
  • Filtered out users with fewer than 65 ratings
  • Split each user's data into a learning set and a test set
  • Computed utility functions from the learning set
  • Frequency-count algorithm for only 10 attributes
  • Our inference approach for the other 12 attributes
  • Predicted the grade of 15 movies from the test
    set using:
  • Our approach HAPPL (LNAI 4198, WebKDD05)
  • Item-item based CF (using adjusted cosine)
  • Popularity ranking
  • Computed the accuracy of predictions for the Top 5
  • Used the Mean Absolute Error (MAE)
  • Back to step 3 with a bigger training set:
    5, 10, 20, …, 50
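The MAE used in the last step is simply the average absolute gap between predicted and actual grades; a minimal sketch with made-up ratings:

```python
def mae(predicted, actual):
    # MAE = (1/N) * sum of |p_i - a_i| over the N test items
    return sum(abs(p - a) for p, a in zip(predicted, actual)) / len(actual)

# Illustrative values, not from the experiment:
print(mae([4.0, 3.5, 2.0], [5.0, 3.0, 2.0]))  # 0.5
```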

28
Validation Transfer - III
29
Validation Transfer - IV
30
Conclusions
  • We have introduced the idea that an ontology can
    be used to transfer missing preferences
  • An ontology can be used to compute an a-priori score
  • Inference model has an asymmetric property
  • Outperforms CF without using other people's information
  • Requirements and conditions:
  • A2 - features contribute to the preference
    independently
  • Need an ontology modeling the whole domain
  • Next steps: try to learn the ontology
  • Preliminary results show that we still
    outperform CF
  • A learned ontology gives a more restricted search
    space

31
References - I
  • [1] Survey of Solving Multi-Attribute Decision
    Problems
  • Jiyong Zhang and Pearl Pu, EPFL Technical
    Report, 2004.
  • [2] Improving Case-Based Recommendation: A
    Collaborative Filtering Approach
  • Derry O'Sullivan, David Wilson, and Barry Smyth,
    Lecture Notes in Computer Science, 2002.
  • [3] An Improved Collaborative Filtering Approach
    for Predicting Cross-Category Purchases Based on
    Binary Market Data
  • Andreas Mild and Thomas Reutterer, Journal of
    Retailing and Consumer Services, Special Issue on
    Model Building in Retailing and Consumer Services,
    2002.
  • [4] Using Content-Based Filtering for
    Recommendation
  • Robin van Meteren and Maarten van Someren,
    ECML 2000 Workshop, 2000.
  • [5] Content-Based Filtering and Personalization
    Using Structured Metadata
  • A. Mufit Ferman, James H. Errico, Peter van
    Beek, and M. Ibrahim Sezan, JCDL'02, 2002.

32
References - II
  • [AAAI06] Inferring Users' Preferences Using
    Ontologies
  • Vincent Schickel and Boi Faltings, In Proc.
    AAAI'06, pp. 1413-1419, 2006.
  • [IJCAI07] OSS: A Semantic Similarity Function
    Based on Hierarchical Ontologies
  • Vincent Schickel and Boi Faltings, To appear in
    Proc. IJCAI'07.
  • [LNAI 4198] Overcoming Incomplete User Models in
    Recommendation Systems via an Ontology
  • Vincent Schickel and Boi Faltings, LNAI 4198,
    pp. 39-57, 2006.

Thank you! Slides: http://people.epfl.ch/vincent.schickel-zuber