Title: Intelligent Agents - User Agent Interaction and Learning


1
Intelligent Agents - User Agent Interaction and
Learning
  • Katia Sycara
  • The Robotics Institute
  • email: katia@cs.cmu.edu
  • www.cs.cmu.edu/softagents

2
Why do we need to worry about security?
  • We want to deploy our system in open networks
  • Agents come and go; agents interact with
    strangers
  • Can agents be trusted? Can their deployers be
    trusted?
  • Agents are expected to do more serious things,
    e.g.:
  • Getting information on your bank account
  • Carrying out sales transactions

3
Security issues for mobile agents
  • Origin authentication
  • Include public key certificate of the originator
    as part of the agent
  • In practice this is not enough; the entire agent
    must be signed and integrity-protected
  • Data Integrity
  • The body of the agent should be integrity
    protected, thus allowing for after-the-fact
    tampering detection

4
Security issues for mobile agents (cont)
  • Access/itinerary control
  • Restrict the number and identity of the servers to
    be visited; however, access control by naming
    severely restricts agents' freedom
  • In practice, it may be better to manage agent
    access control via attributes, e.g., CPU
    available, agent origins, willingness to pay
  • Agent privacy
  • It is difficult to keep an agent protected from
    tampering
  • Tradeoff between the security advantages of fixed
    itineraries and the flexibility of free roaming

5
Security issues for mobile agents (cont)
  • Privacy and integrity of gathered information
  • In stateless information gathering, the agent
    intermittently sends acquired information back to
    its user. Protecting the information is done by
    encrypting it (see the sketch below).
  • In stateful information gathering, the gathered
    information is kept by the agent as it visits
    servers and is delivered upon the agent's
    eventual return. A malicious execution
    environment could terminate the agent, thus
    preventing it from migrating further or
    returning.
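
As a minimal illustration of the stateless case, the sketch below encrypts a gathered result with a symmetric key before sending it back; using Python's cryptography package and Fernet is our own assumption, not something the slides prescribe.

```python
# Minimal sketch: encrypt gathered information before sending it home.
# The cryptography package and the message contents are illustrative only.
from cryptography.fernet import Fernet

# In practice the key would be agreed with the user before the agent leaves.
key = Fernet.generate_key()
cipher = Fernet(key)

gathered = b"price quote: vendor=acme, item=widget, price=42"
token = cipher.encrypt(gathered)           # ciphertext sent over the network
assert cipher.decrypt(token) == gathered   # the user recovers the plaintext
```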

6
Trusting your agents
  • Agents are delegated authority by their users
  • Agents' behavior may not be predictable
  • While this is true of many application programs,
    they usually have very well-defined domains of
    action and so can be thoroughly tested.
  • Agents have a potential for causing damage

7
Trusting your agents (cont)
  • Testing and training the agent in safe mode
    (proposing actions but not being allowed to
    execute them)
  • Limited delegation
  • Verification through an audit trail
  • Reputation
  • Receiving a request from a trusted agent/client
    makes the request more trustworthy
  • If an agent reports information from a reputable
    source, it is more trustworthy
  • Reputation servers can keep track of agent
    behaviors

8
Testing and validation of agents
  • Testing and training the agent in safe mode
  • --Testing is costly
  • --Incremental testing
  • --Agent should be configurable with realistic
    knowledge of its bounds
  • --Ability to log every action and the basis of
    the decisions it makes
  • Validation
  • Harder for agents, e.g. if agents are learning,
    they could have non-deterministic behavior
  • Should be done in isolated environment
  • In conventional systems, one knows a priori the
    boundaries of the parameter space; this is not so
    with agents
  • Establish a separate environment with historical
    data and replay the environment against the agents
    while monitoring the outcomes

9
Training your agent
  • By analogy to human training
  • Identify the goal
  • Identify the process
  • Explain the environment
  • Identify the limits of the trainee's authority
  • Verify the results

10
Assumptions we make
  • Neither agents nor their deployers are
    trustworthy (They can misbehave)
  • Agents may eavesdrop on communication between two
    other agents
  • An agent may masquerade as some other agent
  • Agents may not behave as expected (e.g., an agent
    may not want to pay for goods received).
  • There must be a few, selected entities that can
    be trusted.

11
Adding Security to Retsina (MAS in general)
  • Prevent misbehaviors from happening; have
    recovery mechanisms if they happen
  • Identify different security issues that MAS face
  • Propose solutions for these problems
  • Design and implement a security infrastructure
    for Retsina
  • Focus: application-independent issues
  • Communication security: yes
  • Fair exchange in electronic sales: no
  • Approach: standard security techniques used in
    distributed systems
  • Authentication
  • Access control
  • etc.

12
Security Threats in MAS
  • System-level threats: those that subvert
    inter-agent interactions, independently of the
    application a system is running
  • Untrustworthy ANSs and matchmakers
  • Untrustworthy application agents
  • Insecure communication channels
  • Application-level threats: those that subvert the
    security of applications. They may exist even if
    the underlying system of agents is secure
  • Service providers that do not implement
    appropriate access control policies
  • Untrustworthy application agents.

13
Our Solution (1)
  • To guarantee the integrity of naming and
    matchmaking services
  • Include access control
  • trusted ANSs and matchmakers!
  • Make agents uniquely identifiable, and give them
    unforgeable proofs of identity
  • Prevents spoofing
  • Make deployers of agents liable for the actions
    of their agents
  • Agents are given proofs of identity only when
    deployers allow their own identities to be linked
    with those of their agents.

14
Our Solution (2)
  • Protect communication channels
  • Add access control mechanisms (which usually rely
    on the delegator's IDs)
  • Make agents prove that they are delegates of whom
    they claim to be.

15
A Design of a Security Infrastructure for Retsina
  • Assumptions
  • Deployers have public key certificates binding
    their physical identities (SSN, company names,
    etc.) to their public keys
  • DCAs are assumed to exist (they lie outside our
    security infrastructure)
  • ANSs and matchmakers
  • are trusted entities
  • their public keys are publicly known.
  • The addresses of ANSs are publicly known.

16
Public Key Cryptography
  • Key pairs
  • Private key: a
  • Public key: A = pub(a)
  • Digital signatures: (m)a is message m signed
    with a
  • Signature verification
  • Use A to verify (m)a (see the sketch below)
  • Public key certificates
  • (Name, Public key)CA
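
A minimal sketch of the notation above, using Ed25519 keys from the Python cryptography package; the algorithm choice and the message are our own assumptions.

```python
# Sketch of key-pair generation, signing, and verification. Ed25519 is an
# arbitrary choice for brevity; the slides do not prescribe an algorithm.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

a = Ed25519PrivateKey.generate()   # private key a
A = a.public_key()                 # public key A = pub(a)

m = b"register AID-42 at ANS-1"    # hypothetical message
sig = a.sign(m)                    # (m)a: m signed with a

try:
    A.verify(sig, m)               # use A to verify (m)a
    print("signature valid")
except InvalidSignature:
    print("signature invalid")
```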

17
Giving Identities to Agents: Establishing
Liability (1)
  • 1. Choose an Agent ID AID
  • 2. Generate a public key pair a, pub(a)
  • 3. m1 = (certify, AID, pub(a), t)d
  • 4. Verify the validity of the request
  • 5. Generate m2 = ACA-signed certificate binding
    pub(a) to AID
  • 6. Create an entry (D's public key certificate,
    m1) in the certification DB
  • 8. Verify the signature in m2 with the ACA's public
    key (a sketch of this exchange follows below)
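
The sketch below walks through steps 2-6 and 8 under our own assumptions: Ed25519 keys, the ACA modeled as a locally held signing key, and messages encoded as simple byte strings rather than the actual Retsina format.

```python
# Hypothetical sketch of the certification exchange. D is the deployer, ACA
# the Agent Certification Authority; message encodings are illustrative only.
import time
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

d = Ed25519PrivateKey.generate()        # deployer D's private key
aca = Ed25519PrivateKey.generate()      # ACA's private key (trusted)

# Step 2: the agent's key pair a, pub(a)
a = Ed25519PrivateKey.generate()
pub_a = a.public_key().public_bytes(
    encoding=serialization.Encoding.Raw, format=serialization.PublicFormat.Raw
)

# Step 3: m1 = (certify, AID, pub(a), t) signed with d
aid, t = b"AID-42", str(int(time.time())).encode()
body1 = b"certify|" + aid + b"|" + pub_a + b"|" + t
m1 = (body1, d.sign(body1))

# Step 4: the ACA verifies the deployer's signature on the request
d.public_key().verify(m1[1], m1[0])

# Step 5: m2 = ACA-signed certificate binding pub(a) to AID
body2 = aid + b"|" + pub_a
m2 = (body2, aca.sign(body2))

# Step 8: anyone holding the ACA's public key can check the certificate
aca.public_key().verify(m2[1], m2[0])
```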

18
Giving Identities to Agents: Establishing
Liability (2)
  • Agents are given
  • a public key certificate, and
  • a matching private key.
  • The certification process
  • certification can be requested only by deployers
    who can prove their own identities; this makes the
    deployer aware of his or her liabilities

19
Revoking an Agent's Public Key
20
Registering at an ANS
21
Unregistering at an ANS
22
The Lookup Protocol
  • Agents are identified by their keys, and not
    their names!!

23
Matchmaker Protocols
  • Very similar to ANS protocols
  • Differences
  • A physical address may not be shared by more than
    one agent; capabilities may
  • Agents use ANSname.Agentname to register with the
    matchmaker
  • The lookup protocol
  • 1. CAP
  • 2. CAP, ANS-x1.AID1, CERT1, ..., ANS-xn.AIDn,
    CERTn, Tmm

24
Secure Communication Channels
  • SSL (Netscape's Secure Socket Layer protocol)
  • Why?
  • Keeps communication security transparent to the
    application (see the client sketch below)
  • Off-the-shelf, trustworthy technology (extensible
    too)
  • Implementation
  • Local effort at the Communicator
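
A minimal client-side sketch using Python's standard ssl module; the host, port, and message are placeholders, since the slides only name SSL as the transport.

```python
# Wrap an agent's TCP connection in TLS so communication security stays
# transparent to the application code. Host, port, and message are hypothetical.
import socket
import ssl

context = ssl.create_default_context()      # verifies the server's certificate

HOST, PORT = "agents.example.org", 8443     # placeholder communicator endpoint
with socket.create_connection((HOST, PORT)) as raw:
    with context.wrap_socket(raw, server_hostname=HOST) as tls:
        tls.sendall(b"(request :content lookup-agent)")
        reply = tls.recv(4096)              # encrypted on the wire, plaintext here
```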

25
Secure Delegation and Access Control
  • Knowing who the delegator is may be necessary or
    desirable
  • Original design: have the agent know the secret
    key of its deployer
  • Weaknesses
  • Agents should not know such important secrets
  • Sometimes they do not even need to know them
    (e.g., a PIN)
  • There should be a weaker, temporary solution (a
    delegation sketch follows below)
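
One weaker, temporary alternative, sketched below entirely under our own assumptions, is a short-lived delegation certificate: the deployer signs the agent's public key together with a scope and an expiry, so the agent never holds the deployer's private key.

```python
# Hypothetical delegation certificate: the deployer signs (agent public key,
# scope, expiry); services check the signature and expiry before acting.
# This is our illustration, not the mechanism specified in the slides.
import time
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

deployer = Ed25519PrivateKey.generate()
agent = Ed25519PrivateKey.generate()

agent_pub = agent.public_key().public_bytes(
    encoding=serialization.Encoding.Raw, format=serialization.PublicFormat.Raw
)
expiry = int(time.time()) + 3600                   # valid for one hour
claim = agent_pub + b"|scope=read-account-balance|" + str(expiry).encode()
delegation = (claim, deployer.sign(claim))         # handed to the agent

# A service verifies the deployer's signature and the expiry before acting.
deployer.public_key().verify(delegation[1], delegation[0])
assert int(delegation[0].rsplit(b"|", 1)[1]) > time.time()
```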

26
Some Interesting Pages
  • http://microsoft.com/security/tech/certificates/formats.asp
  • Some introductory material on standards for
    cryptographic objects
  • http://security.dstc.edu.au/projects/java/release3.html
  • Info on a real-world security package (you can
    download the code and play with it)
  • Communications of the ACM, June 1996, Volume 39,
    Number 6
  • An issue dedicated to EC, from which the article
    is extracted

27
Adaptability
  • Adaptability of an individual agent
  • Agent Communication
  • Agent Coordination
  • Agent Planning
  • Agent Scheduling
  • Agent Execution Monitoring
  • Learning from interactions (user or other
    agents)
  • Organizational adaptability (middle agents)
  • Performance adaptability (cloning)

_____________________
Sycara, "Levels of Adaptivity in Systems of
Coordinating Information Agents," Lecture Notes in
Artificial Intelligence 1435, Klusch and Weiss
(Eds.), July 1998 (CIA'98)
28
User Agent Interaction and Learning
  • Agent can be pre-programmed
  • efficient but could be inflexible
  • User explicitly delegates goal to an agent
    through end user programming
  • flexible but difficult
  • Agent learns from user interactions
  • metaphor of agent as apprentice.

29
User Agent Interaction and Learning (cont)
  • Instruction
  • a set of directions to the agent on how to carry
    out a task
  • Confirmation
  • a request for the user's approval before the agent
    carries out an instruction
  • Observation
  • a behavioral pattern learned by the agent
  • Suggestion
  • an agent recommendation to the user

30
User Agent Interaction and Learning (cont)
  • Instruction
  • user can either instruct the agent directly or
    accept the agent's offer of automation for an
    observed work pattern
  • When the agent reports an observation, the
    user can
  • accept the observation and have the agent create
    an instruction
  • decline the observation
  • edit the observation to fine-tune the instruction
  • postpone a decision till later

31
Learning Approaches
  • Machine learning plus rule-based or expert
    system
  • machine learning for knowledge acquisition and
    rule-based inference for knowledge maintenance
  • Case-based reasoning (a retrieval sketch follows
    below)
  • based on past experiences/cases that contain
  • the problem
  • attributes of the situation/context
  • the solution
  • indication of success or failure of the solution
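
A toy retrieval sketch for the case-based approach, with made-up case fields and a simple attribute-overlap similarity; none of this comes from the slides beyond the structure of a case.

```python
# Each case stores the problem attributes, the solution, and an outcome flag;
# a new problem retrieves the most similar past case. Fields are illustrative.
from dataclasses import dataclass

@dataclass
class Case:
    attributes: dict      # situation/context, e.g. {"sender": "boss", "topic": "meeting"}
    solution: str         # action taken, e.g. "schedule"
    succeeded: bool       # indication of success or failure

def similarity(a: dict, b: dict) -> float:
    """Fraction of attributes on which two situations agree."""
    keys = set(a) | set(b)
    return sum(a.get(k) == b.get(k) for k in keys) / len(keys)

def retrieve(case_base: list[Case], problem: dict) -> Case:
    return max(case_base, key=lambda c: similarity(c.attributes, problem))

case_base = [
    Case({"sender": "boss", "topic": "meeting"}, "schedule", True),
    Case({"sender": "list", "topic": "sale"}, "archive", True),
]
best = retrieve(case_base, {"sender": "boss", "topic": "review"})
print(best.solution)   # reuse (and possibly adapt) the retrieved solution
```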

32
Learning Approaches (cont)
  • Neural Networks
  • self-organizing learning approach
  • Various statistical clustering techniques
  • perform classification of concepts

33
Fact Interpretation for Agent-User Interaction
  • User behavior -- Agent interpretation rule
  • user declines agent offer to automate -- offer a
    less general observation
  • user repeatedly undoes agent action -- offer
    to turn off the existing rule
  • user sometimes undoes agent action -- offer
    to make the rule more specific to match the
    condition
  • agent interaction preference does not match user
    behavior -- offer to change the preference (a
    rule-table sketch follows below)
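
A toy sketch of that mapping, assuming a simple event log for the user's behavior; the event names and the "repeatedly" threshold are our own placeholders.

```python
# Map observed user behavior to the agent's interpretation rules above.
from collections import Counter

INTERPRETATION = {
    "declined_automation_offer": "offer a less general observation",
    "undid_agent_action_often": "offer to turn off the existing rule",
    "undid_agent_action_sometimes": "offer to make the rule more specific",
    "preference_mismatch": "offer to change the interaction preference",
}

def interpret(events: list[str]) -> list[str]:
    counts = Counter(events)
    suggestions = []
    if counts["declined_automation_offer"]:
        suggestions.append(INTERPRETATION["declined_automation_offer"])
    undo = counts["undid_agent_action"]
    if undo >= 3:                                   # "repeatedly" (placeholder threshold)
        suggestions.append(INTERPRETATION["undid_agent_action_often"])
    elif undo >= 1:                                 # "sometimes"
        suggestions.append(INTERPRETATION["undid_agent_action_sometimes"])
    if counts["preference_mismatch"]:
        suggestions.append(INTERPRETATION["preference_mismatch"])
    return suggestions

print(interpret(["undid_agent_action", "undid_agent_action", "preference_mismatch"]))
```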

34
Agent Apprentice Assistance
  • In-context tips
  • Coaching when the user needs help
  • Proactive Assistance
  • Shortcuts for a sequence of steps
  • Customized offer based on learned user
    preferences
  • Automation offer for repetitive user tasks
  • Automation suggestions based on what the user is
    not doing
  • Notification of significant events

35
Control Delegation of Actions
  • Confirm Once-- the agent displays a confirmation
    message only before the first time it carries out
    the instruction
  • Confirm Always-- the agent displays a
    confirmation message every time before it carries
    out an instruction
  • Don't Confirm -- the agent never confirms with
    the user before carrying out the instruction (a
    confirmation-policy sketch follows below)
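
A minimal sketch of the three confirmation policies, assuming a callback that asks the user; the class and method names are illustrative only.

```python
# Confirm Once / Confirm Always / Don't Confirm as an explicit policy.
from enum import Enum, auto

class Policy(Enum):
    CONFIRM_ONCE = auto()
    CONFIRM_ALWAYS = auto()
    DONT_CONFIRM = auto()

class Instruction:
    def __init__(self, action, policy: Policy, ask_user):
        self.action, self.policy, self.ask_user = action, policy, ask_user
        self.confirmed_once = False

    def execute(self):
        needs_confirmation = (
            self.policy is Policy.CONFIRM_ALWAYS
            or (self.policy is Policy.CONFIRM_ONCE and not self.confirmed_once)
        )
        if needs_confirmation and not self.ask_user(self.action):
            return                      # user declined; do nothing
        self.confirmed_once = True
        self.action()                   # carry out the instruction

# Example: file a message, confirming only the first time.
instr = Instruction(lambda: print("filed"), Policy.CONFIRM_ONCE, lambda a: True)
instr.execute()    # asks once, then files
instr.execute()    # files without asking
```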

36
Information Search
  • Ways to Find Information
  • Browsing: following hyperlinks that seem of
    interest
  • Searching: sending a query to a search engine
    such as Lycos
  • Categories: following existing categories such as
    Yahoo
  • Problems
  • Browsing takes a lot of time and effort; can
    search be made more efficient?
  • In searching, it is difficult to accurately
    express the user's intention
  • Search engines are not personalized

37
Web Site Personalization
  • Personalized newspaper: a publisher can deliver
    personalized news to each of its subscribers
  • Personalized store fronts: an on-line merchant
    can make customized recommendations based on
    user preferences and transaction history
  • Customized service providers: an on-line service
    provider can make personalized recommendations
    (video, ads, etc.) based on individual user
    preferences and the preferences of other users

38
Personalization Content
  • User Data
  • demographic information (e.g., age, gender)
  • dynamic user interests (e.g. music, travel)
  • user transaction history (e.g. seasonal
    purchases)
  • user behavior at a site (e.g. hyperlinks clicked)
  • User data is kept in a user database for use
    during recommendations
  • indexing provided through database tables
  • indexing through an engine such as Verity Search
    for web pages

39
Personalized Web site Operation
  • Agent collects new content from the web or other
    content databases
  • Users register their interests with the agent
    using profiles
  • Agent serves new information to users according
    to their preferences
  • Using machine learning, agent learns more about
    users from what they click and where they go
  • Agent suggests changes to the user and learns
    from feedback
  • After each successive visit, the agent gives new
    information and ideas matching user interests and
    preferences

40
Level of Personalization
  • Customization -- the user fills out a profile and
    the agent delivers content according to it; this
    approach requires the user to update the
    preference profile
  • Learning user interests -- the agent watches over
    the user's shoulder
  • Learning community behavior -- the agent compares
    user preferences to those of others with similar
    interests; this approach encourages user
    exploration

41
User Experience
  • Explicit vs. implicit ranking -- some collaborative
    filtering tools require that users explicitly
    rank an initial set of pages; this imposes an
    additional burden on the user
  • User identification -- user name and password; use
    of cookies during revisits

42
Collaborative Filtering
  • A collaborative filtering system makes
    recommendations based on the preferences of
    similar users (a nearest-neighbor sketch follows
    below)
  • People: Yenta, Referral Web
  • Products: Firefly, Tunes, Syskill & Webert
  • Readings: Wisewire, Phoaks
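
A toy user-based sketch: cosine similarity between rating vectors, then recommending items liked by the most similar user. The ratings are made-up illustration data, not drawn from any of the systems named above.

```python
# User-based collaborative filtering over a tiny, invented rating matrix.
import math

ratings = {                       # user -> {item: rating}
    "alice": {"jazz.html": 5, "travel.html": 4},
    "bob":   {"jazz.html": 5, "travel.html": 5, "golf.html": 2},
    "carol": {"golf.html": 5, "stocks.html": 4},
}

def cosine(u: dict, v: dict) -> float:
    common = set(u) & set(v)
    if not common:
        return 0.0
    dot = sum(u[i] * v[i] for i in common)
    return dot / (math.sqrt(sum(x * x for x in u.values()))
                  * math.sqrt(sum(x * x for x in v.values())))

def recommend(target: str, k: int = 1) -> list[str]:
    others = [(cosine(ratings[target], ratings[u]), u)
              for u in ratings if u != target]
    seen = set(ratings[target])
    picks = []
    for _, u in sorted(others, reverse=True)[:k]:
        picks += [i for i in ratings[u] if i not in seen]
    return picks

print(recommend("alice"))   # items liked by the most similar user
```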

43
Content vs. Collaboration
  • Content-based retrieval returns documents that
    are similar to a query (search) or a user profile
    (preference)
  • Collaborative recommendation retrieves documents
    liked by others with similar profiles

44
Problems in Collaborative Filtering
  • Incentives/startup
  • Need a critical mass of users/recommenders to
    make meaningful predictions
  • Need mechanisms to maintain participation
  • Reliability
  • Spoofing -- will content providers inflate their
    ratings?
  • Technical problems with clustering/similarity
    measures
  • Privacy
  • Once you share your profile who else may want it?

45
Functionality of WebMate
  • Learning the user's interests for information
    filtering
  • Multiple TF-IDF vector representation (a sketch
    follows after the citation below)
  • Incremental and adaptive learning
  • Compile personal newspaper
  • Support for efficiently finding information
  • Automatic refinement using Trigger Pairs
  • Relevance feedback

_____________________________
Chen, Sycara, "WebMate: A Personal Agent for
Browsing and Searching," Proceedings of the Second
International Conference on Autonomous Agents,
Minneapolis, MN, May 1998
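
A toy sketch of TF-IDF profiling and similarity matching of the kind a WebMate-style filter might use; scikit-learn is our own library choice and the pages are invented.

```python
# Build TF-IDF vectors from pages the user liked, then rank new candidates
# by cosine similarity against the profile. Data and library are illustrative.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

liked_pages = [
    "jazz concert tickets and saxophone reviews",
    "travel deals to lisbon and porto",
]
candidates = [
    "new saxophone album reviewed by critics",
    "quarterly earnings report for steel producers",
]

vectorizer = TfidfVectorizer()
profile = vectorizer.fit_transform(liked_pages)          # TF-IDF interest vectors
scores = cosine_similarity(vectorizer.transform(candidates), profile).max(axis=1)

# Rank candidate pages by their best match against any interest vector.
for page, score in sorted(zip(candidates, scores), key=lambda p: -p[1]):
    print(f"{score:.2f}  {page}")
```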
46
Learning to extract information
  • Maulsby, David, and Ian Witten (1997). "Teaching
    agents to learn: From user study to
    implementation." IEEE Computer, Nov., 36-43.
  • Problem: features of materials prepared for
    human viewing, such as webpages or printed text,
    are designed to make it easy for us to extract
    information but can be very difficult for a
    computer. Telling a computer how to extract
    information, rather than just which URL to open,
    can be a hard problem.

47
How We Extract Information
  • Repetitive information such as bibliographic
    records, on-line classifieds, or catalogs usually
    use some combination of
  • visual demarcations (such as ruled lines or
    tables)
  • visual distinctions (such as italics, color, or
    indentation)
  • punctuation (such as commas, colons, or periods)
  • precedence/succession relations among
    informational elements

48
But how do we tell a computer what these visual
cues are?
  • We don't want to write a parser
  • We don't want to write and debug a production
    system
  • We may not even know what the markup looks like
  • So how do we tell an agent how to do it (from
    our point of reference) so that it can do it (from
    its point of reference)?

49
Maulsby's Study
  • Problem: train a program to recognize and
    translate between bibliographic entries in a
    variety of formats
  • Approach
  • Wizard-of-Oz study to find out how people would
    like to instruct a computer in doing the task
  • Implement/test a system that takes instruction of
    this sort

50
Instructing as Machine Learning
  • Learns incrementally and adds exceptions as they
    are encountered
  • Uses demonstration (point/select) and verbal
    instruction such as "precedes"
  • Proposes guesses when classification is
    ambiguous
  • Learns Perl-style patterns for things such as
    phone numbers and uses punctuation and
    capitalization as referents in rules

51
CIMA/TURVEY Rules
  • Learns DNF rules: greedy, with exceptions tacked
    on
  • As a result of pointing, clicking, indicating
    precedes/follows, etc., CIMA can generate rules
    that do things such as recognize phone numbers
    with or without an area code, or
  • identify the surname and forename/initials of
    the first author of a paper, and similar feats (a
    pattern sketch follows below)
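
As a rough idea of what such a pattern amounts to, the sketch below uses a regular expression that recognizes US-style phone numbers with or without an area code; it is our own illustration, not CIMA's actual rule language.

```python
# A pattern-style rule: phone numbers with or without an area code.
import re

PHONE = re.compile(r"""
    (?:\(\d{3}\)\s*|\d{3}[-.\s])?   # optional area code, e.g. (412) or 412-
    \d{3}[-.\s]\d{4}                # local number, e.g. 268-8000
""", re.VERBOSE)

text = "Call (412) 268-8000 or 555-1234 after noon."
print(PHONE.findall(text))   # ['(412) 268-8000', '555-1234']
```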

52
Adaptive Negotiation (the Bazaar Model)
  • Aims at modeling multi-issue negotiation
    processes*
  • Combines the strategic modeling aspects of
    game-theoretic models and single-agent sequential
    decision-making models
  • Supports an open-world model
  • Addresses heterogeneous multi-agent learning,
    utilizing the iterative nature of sequential
    decision making and the explicit representation
    of beliefs about other agents (a Bayesian-update
    sketch follows below)
  • ______________________________
  • * D. Zeng and K. Sycara, "Bayesian Learning in
    Negotiation," International Journal of
    Human-Computer Studies (1998), 48, pp. 125-141.
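
A toy sketch of Bayesian updating of a belief about the opponent's reservation price, in the spirit of the Bazaar model; the prior, the likelihood shape, and the offers are our own illustrative choices, not the published formulation.

```python
# Maintain a discrete belief over the supplier's reservation price (0..100)
# and update it after each observed offer.
PRICES = range(0, 101)

def uniform_prior():
    return {p: 1.0 / 101 for p in PRICES}

def update(belief, observed_offer, margin=10):
    """Posterior over the supplier's reservation price after one offer,
    assuming offers tend to sit roughly `margin` units above it."""
    def likelihood(rp):
        return max(1e-6, 1.0 - abs(observed_offer - margin - rp) / 100.0)
    post = {rp: belief[rp] * likelihood(rp) for rp in belief}
    z = sum(post.values())
    return {rp: v / z for rp, v in post.items()}

belief = uniform_prior()
for offer in (90, 82, 75):                 # successive supplier proposals
    belief = update(belief, offer)

estimate = max(belief, key=belief.get)     # MAP estimate of the reservation price
print(estimate)
```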

53
Utility of Learning Experimental Design
  • The set of players N is comprised of one buyer
    and one supplier who make alternating proposals
  • For simplicity, the range of possible prices is
    from 0 to 100 units, and this is public
    information
  • The set of possible actions (prices proposed by
    either the buyer or the supplier) is A = {0,
    1, 2, ..., 100}
  • Reservation prices are private information
  • Each player's utility is linear in the final
    price (a number between 0 and 100) accepted by
    both players
  • Normalized Nash product as the joint utility (the
    optimal joint utility when full information is
    available is 0.25; a short check follows below)
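
A short check of the 0.25 figure under the stated setup, with reservation prices ignored for simplicity (our own simplification): utilities normalized to [0, 1] and linear in price, so the Nash product peaks at the midpoint price.

```python
# Nash product of two linear utilities over prices 0..100.
def joint_utility(price):
    u_buyer = (100 - price) / 100      # buyer prefers low prices
    u_supplier = price / 100           # supplier prefers high prices
    return u_buyer * u_supplier        # Nash product

best = max(range(101), key=joint_utility)
print(best, joint_utility(best))       # 50 0.25
```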

54
Average Performance of Three Experimental
Configurations in Bazaar
  • A non-learning agent makes decisions based solely
    on its own reservation price
  • A learning agent makes decisions based on both
    its own and the opponent's reservation price