Title: Modeling Multi-Dimensional Trust
1. Modeling Multi-Dimensional Trust
- Nishit Gujral, David DeAngelis, Karen Fullam,
- K. Suzanne Barber
- May 9th, 2006
2. Goal-Driven Behavior
- Autonomous agents can control their own behavior.
- Limited resources force an agent to cooperate
with a partner to achieve goals.
We propose a domain-independent (problem-independent)
mechanism for choosing the best partner agent.
3. Trust
- Interaction in uncertain, open environments introduces risk.
- Trustworthiness models of potential partner agents are necessary to:
  - Select the most suitable partner
  - Avoid risks and maximize reward
- One common method: analyze the number of positive and negative experiences with a solution provider (Jonker and Treur, 1999).
4. Single-Dimensional Trust
What if the problem changes?
What if no trustworthy provider is available?
[Diagram: a goal-holding agent chooses between potential partners S1 and S2, each modeled by a single trust value.]
5. Research Objective
How can an agent choose a partner to maximize its
goal achievement?
- Use multi-dimensional trust to model potential
partners according to several different metrics
6. Multi-Dimensional Trust
- General trust is built on previous experience, not situational trust (Marsh, 1994).
- Griffiths proposed MD-Trust for the task delegation problem (Griffiths, 2005).
- Here, goal requirements specify the trust dimension values.
- Trust is modeled as a composition of constraints defined in the context of the requestor's goal:
  - Quality, timeliness, availability
7. Multi-Dimensional Trust
What if the problem changes?
What if no fully trustworthy provider is
available?
[Diagram: a goal-holding agent chooses between potential partners S1 and S2, each modeled along multiple trust dimensions (Q = quality, T = timeliness).]
8. Why Multi-Dimensional Trust?
- Evaluate partners and information sources beyond the single-factor question, "Is source A trustworthy?"
- Trust based on a single metric (e.g., solution quality) is insufficient for accomplishing goals with multiple constraints (Maximilien and Singh, 2005).
- Instantly adapt to new goals without training new trust models.
- Goal requirements must match partner constraints.
9. Defining Multi-Dimensionality
- Quality (q_est): expected solution quality; maps solution quality to goal achievement.
- Timeliness (t_est): expected completion time; maps completion time to goal achievement.
- Cost (c_p,est): domain-specified cost; the expected solution price.
- Availability (a_est): estimated probability that the partner is available.
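As a minimal sketch (in Python, with illustrative field names taken from the symbols on this slide and slide 15), these four per-partner estimates can be kept in a small record:

```python
from dataclasses import dataclass

@dataclass
class PartnerModel:
    """Trust estimates for one potential partner (names are illustrative)."""
    q_est: float  # expected solution quality
    t_est: float  # expected completion time
    a_est: float  # estimated probability that the partner is available
    c_est: float  # expected solution price (fixed at 5 reward units here)
```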
10. Partner Selection Algorithm
- Choose the partner with the highest Estimated Goal Payoff.
- If the selected partner is unavailable, update its constraint model and choose the next most suitable partner.
- Else, interact, then update the models.

Estimated Goal Payoff = (Est. Reward Payoff) - (Failed Interaction Cost) - (Fixed Interaction Cost)
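Continuing the PartnerModel sketch above, one way to realize this loop; hedged: the availability weighting and the fallback revision are assumptions, not details taken from the paper:

```python
from typing import Callable, Iterable, Optional

def estimated_goal_payoff(m: PartnerModel,
                          reward_fn: Callable[[float, float], float],
                          failure_cost: float) -> float:
    # Est. reward payoff, discounted by the chance the partner is unavailable,
    # minus the failed-interaction cost and the fixed interaction cost.
    expected_reward = m.a_est * reward_fn(m.q_est, m.t_est)
    expected_failure = (1.0 - m.a_est) * failure_cost
    return expected_reward - expected_failure - m.c_est

def choose_partner(models: Iterable[PartnerModel],
                   reward_fn: Callable[[float, float], float],
                   failure_cost: float,
                   is_available: Callable[[PartnerModel], bool]
                   ) -> Optional[PartnerModel]:
    """Try partners in decreasing order of estimated payoff; when one refuses,
    revise its availability estimate and fall back to the next candidate."""
    ranked = sorted(models,
                    key=lambda m: estimated_goal_payoff(m, reward_fn, failure_cost),
                    reverse=True)
    for m in ranked:
        if is_available(m):
            return m      # interact with this partner, then update its models
        m.a_est *= 0.9    # placeholder revision; the paper recomputes a_est as
                          # the fraction of accepted invitations (slide 15)
    return None           # no partner accepted the invitation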
11. Experiment
Demonstrate the advantage of modeling partner
agents using multi-dimensional trust.
Actual Goal Payoff = (Actual Reward Payoff) - (Unavailability Failure Cost) - (Interaction Cost)
12. Assumptions
- Goal reward functions are domain-based and handed to an agent by the system.
- Goals are atomic, and a single partner can accomplish the goal.
- The goal-holding agent can choose only one partner agent.
- Goal-holding agents start off with an optimistic point of view.
13. Reward Functions
[Figure: the reward functions R1-R5 were presented graphically.]
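The slide showed the reward functions only as plots. As a purely hypothetical stand-in (not the paper's actual R1-R5), a reward function that scales with solution quality and decays with completion time might look like:

```python
def example_reward(quality: float, completion_time: float,
                   max_reward: float = 100.0, deadline: float = 10.0) -> float:
    """Hypothetical reward function: full reward for a perfect, instantaneous
    solution, decaying linearly to zero at the deadline and scaling with
    solution quality. All constants here are illustrative."""
    timeliness = max(0.0, 1.0 - completion_time / deadline)
    return max_reward * quality * timeliness
```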
14. Partner Behavior
- Timeliness (t_actual): Hi = 2, Med = 5, Lo = 10
- Quality (q_actual), with noise sigma = 0.1: Hi = .66, Lo = 0
- Availability (p_actual): Hi = .66
- 27 agents, full factorial design
- These values are hidden from the goal-holding agent.
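A sketch of the full factorial population in Python; the levels marked as placeholders below were not legible in the source and are assumptions:

```python
from dataclasses import dataclass
from itertools import product

@dataclass
class Partner:
    t_actual: float  # time needed to deliver a solution
    q_actual: float  # mean solution quality (noise sigma = 0.1 in the setup)
    p_actual: float  # probability of being available when invited

TIME_LEVELS = [2, 5, 10]             # Hi, Med, Lo (from the slide)
QUALITY_LEVELS = [0.66, 0.33, 0.0]   # Hi, Med (placeholder), Lo
AVAIL_LEVELS = [0.66, 0.33, 0.10]    # Hi, Med (placeholder), Lo (placeholder)

# Full factorial design over the three dimensions: 3 x 3 x 3 = 27 partners.
partners = [Partner(t, q, p)
            for t, q, p in product(TIME_LEVELS, QUALITY_LEVELS, AVAIL_LEVELS)]
assert len(partners) == 27
```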
15. Partner Constraint Models
- Quality (q_est): average quality rating over all previous successful interactions
- Timeliness (t_est): average amount of time needed to provide a solution in previous interactions
- Availability (a_est): percentage of times the partner has been available, based on all previous invitations to interact
- Cost (c_p,est): fixed at 5 reward units
- All models are initialized favorably to encourage exploration among potential partners (one possible realization is sketched below).
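One way these running estimates could be maintained; a sketch in which the specific optimistic initial values are assumptions:

```python
class ConstraintModel:
    """Running trust estimates for one partner, initialized optimistically
    so that untried partners look attractive and get explored."""

    def __init__(self, q0: float = 1.0, t0: float = 1.0, a0: float = 1.0):
        self.q_est, self.t_est, self.a_est = q0, t0, a0  # optimistic priors
        self.c_est = 5.0        # cost: fixed at 5 reward units
        self._successes = 0     # successful interactions observed
        self._invitations = 0   # invitations issued to this partner
        self._accepted = 0      # invitations the partner accepted

    def record_invitation(self, accepted: bool) -> None:
        # a_est: fraction of invitations on which the partner was available.
        self._invitations += 1
        self._accepted += int(accepted)
        self.a_est = self._accepted / self._invitations

    def record_interaction(self, quality: float, time_taken: float) -> None:
        # q_est and t_est: running averages over successful interactions.
        self._successes += 1
        n = self._successes
        self.q_est += (quality - self.q_est) / n
        self.t_est += (time_taken - self.t_est) / n
```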
16. Partner Selection Strategies
- Complete: consider all available metrics for multi-dimensional trust
- Quality: consider only the quality of the solution that a partner agent provides
- Timeliness: consider only the duration of time that a partner requires to deliver a solution
- Random: choose any partner with equal probability
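These strategies can be viewed as different scoring functions over the same constraint models; a sketch reusing estimated_goal_payoff from the selection sketch above, with the scoring details assumed rather than taken from the paper:

```python
import random

def make_strategies(reward_fn, failure_cost):
    """Map each strategy name from the slide to a scoring function over a
    partner's constraint model; the agent picks the highest-scoring partner."""
    return {
        "Complete":   lambda m: estimated_goal_payoff(m, reward_fn, failure_cost),
        "Quality":    lambda m: m.q_est,           # solution quality only
        "Timeliness": lambda m: -m.t_est,          # less time scores higher
        "Random":     lambda m: random.random(),   # uniform random choice
    }

def select(models, score):
    return max(models, key=score)
```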
17. Reward Function R1
[Figure: performance using reward function R1]
18. Reward Function R2
[Figure: performance using reward function R2]
19. Reward Function R3
[Figure: performance using reward function R3]
20. Reward Function R4
[Figure: performance using reward function R4]
21. Conclusions
- A goal-holding agent is capable of accurately modeling its potential partners.
  - Models become more reflective of the partners' behaviors over time.
- An agent is endowed with the ability to assess how much it should trust multiple facets of a partner's behavior.
- Experimental data verifies that multi-dimensional trust improves an agent's goal achievement when the goal is also multi-dimensional.
- Immediate goal changes can be accommodated without rebuilding trust models.
22. Future Work
- Introduce multiple goal-holding agents into the same resource-constrained environment.
- Explore coverage:
  - Multiple partners may be needed for satisfactory goal achievement (Barber and Park, 2004).
- Demonstrate the effectiveness of multi-dimensional trust in a more realistic scenario.
23. References
- Barber, K. S. and Park, J., "Agent Belief Autonomy in Open Multi-Agent Systems," Agents and Computational Autonomy, Lecture Notes in Computer Science, Springer-Verlag, pp. 7-16, 2004.
- Maximilien, E. M. and Singh, M. P., "Agent-Based Trust Model Involving Multiple Qualities," In Proceedings of the International Conference on Autonomous Agents and Multi-Agent Systems (AAMAS'05), pp. 519-526, The Netherlands, 2005.
- Marsh, S., Formalising Trust as a Computational Concept, PhD thesis, University of Stirling, 1994.
- Jonker, C. M. and Treur, J., "Formal Analysis of Models for the Dynamics of Trust Based on Experiences," In Proceedings of the 9th European Workshop on Modelling Autonomous Agents in a Multi-Agent World: Multi-Agent System Engineering (MAAMAW-99), pp. 221-231, 1999.
- Griffiths, N., "Task Delegation using Experience-Based Multi-Dimensional Trust," In Proceedings of the Fourth International Conference on Autonomous Agents and Multi-Agent Systems (AAMAS'05), pp. 489-496, 2005.
24. Experimental Setup Details
- Fixed interaction cost: 5 reward units
- 27 partner agents
- 1000 simulation cycles
- Runs are averaged over 1000 games
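For reference, the same setup as an illustrative configuration block (the constant names are assumptions):

```python
# Experimental constants from this slide (names are illustrative).
FIXED_INTERACTION_COST = 5   # reward units charged per interaction
NUM_PARTNERS = 27            # full factorial of partner behavior levels
SIM_CYCLES = 1000            # simulation cycles per game
NUM_GAMES = 1000             # games averaged for each reported run
```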
25. Reward Function R5
[Figure: performance using reward function R5]
The algorithm does not perform well with sinusoidal or periodic reward functions.
26. Drawbacks
- Immediate goal changes require relearning trust models.
- Multi-faceted goals are difficult to accommodate.
- What if no trusted partners are available?