Part IV: Liability. Chapter 16: Trust and Accountability

1
Part IV: Liability
Chapter 16: Trust and Accountability
  • Sebastian Ries

2
Agenda
  • Introduction: Trust and Accountability (1)
  • Trust (32)
  • Motivation (3)
  • Introduction of (Social) Trust for Computer
    Scientists (3)
  • Integration of Trust into Applications
    State-of-the-Art Classification (5)
  • Challenges (1)
  • Examples for Trust Models (20)
  • TidalTrust (9)
  • Subjective Logic (11)
  • Accountability (11)
  • Accountability in Computer Science (2)
  • Reputation Systems (3)
  • Concept (2)
  • Classification of Reputation Systems (2)
  • Example: eBay Feedback Forum (MISSING)
  • Micropayment Systems (6)

3
Trust And Accountability
  • Trust and Accountability
  • Both are well-known concepts of everyday social
    life
  • Both can be transferred to ubiquitous computing
  • Definitions (Merriam-Webster Online Dictionary)
  • Trust: assured reliance on the character,
    ability, strength, or truth of someone or
    something, and the dependence on something
    future or contingent
  • Accountability: the quality or state of being
    accountable; especially an obligation or
    willingness to accept responsibility or to
    account for one's actions

4
  • Trust

5
Trust
  • Motivation for Trust
  • Socially based paradigms will play a big role in
    pervasive computing environments (Bhargava et
    al., 2004)
  • Trust: a well-founded basis for an engagement in
    the presence of uncertainty and risk

Figure: Trust? Social contacts (real world) and
interactions between devices (virtual world)
6
Trust
  • Motivation for Trust in Ubiquitous Computing
  • Calm technology
  • User-centric approach
  • Characteristics of the UC environment
  • Unmanaged
  • Complex
  • Heterogeneous
  • Massively networked
  • ⇒ There is a need for
  • interaction between loosely coupled devices
  • a basis for risky engagements

7
Introduction of Trust
  • Trust
  • Is a well-known concept of everyday life
  • Allows simplification in complex situations
  • Is a basis for delegation and efficient rating of
    information
  • ⇒ Trust seems to be a promising approach
  • Trust is based on / influenced by
  • Direct experiences (e.g. from previous
    interactions)
  • Indirect experiences
  • Recommendations
  • Reputation
  • Risk
  • Context

8
Introduction of Trust
  • Properties of Trust
  • Subjective
  • Asymmetric
  • Context-dependent
  • Dynamic
  • Non-monotonic
  • Gradual
  • Not transitive
  • But there is the concept of recommendations
  • Categories of Trust (McKnight & Chervany, 1996)
  • Interpersonal Trust: between people or groups
  • Impersonal Trust: arises from a social or
    organizational situation
  • Dispositional Trust: general attitude towards
    the world

9
Introduction of Trust
  • Definition
  • Many different definitions (e.g., see also the
    introduction)
  • Example definition, which is adopted by some
    researchers:
  • "... trust (or, symmetrically, distrust) is a
    particular level of the subjective probability
    with which an agent will perform a particular
    action, both before he can monitor such action
    (or independently of his capacity ever to be
    able to monitor it) and in a context in which it
    affects his own action." (Gambetta, 2000)

10
Trust in Ubiquitous Computing
  • Goal
  • trust as a basis for risky engagements in
    dynamic, complex environments under uncertainty
    and risk
  • Design aspects when building trust-based systems
  • Trust modeling
  • Trust management
  • Decision making

11
Trust Modeling
  • Main task: representation of trust values and a
    computational model
  • Different approaches regarding the representation
    of trust
  • Dimension: one-dimensional, multi-dimensional
  • Domain: binary, discrete, or continuous values
  • Semantics: rating, ranking, probability, belief,
    and fuzzy concept
  • Different approaches regarding the computation of
    trust
  • Considering / not considering recommendations
  • Other aspects
  • Aging of evidence
  • Re-evaluation of trust values
  • Examples: see TidalTrust and Subjective Logic

12
Trust Management
  • Traditional definition by Matt Blaze
  • "Trust management (TMa) is a unified approach
    to specifying and interpreting security policies,
    credentials, and relationships that allows direct
    authorization of security-critical actions."
    (Blaze et al., 1998)
  • Drawbacks of traditional trust management
  • Trust establishment is not part of the model
  • Trust is passed on by credentials; issuing
    credentials is not part of the TMa
  • Trust is only treated implicitly
  • Trust is (often / sometimes) treated as monotonic
  • Missing evaluation of risk
  • Message: (Traditional) trust management is access
    control.

Example (from Grandison & Sloman, 2000):
PolicyMaker (Blaze et al., 1996). The following
policy specifies that any doctor who is not a
plastic surgeon should be trusted to give a
check-up: Policy ASSERTS
doctor_key WHERE filter that allows
check-up if the field is not plastic surgery
13
Trust Management
  • More recent trust management focuses on
  • Collection of evidences
  • Evaluation of risk
  • Including dynamic aspects and levels of trust
  • Message: Trust management has to manage trust.

Example: SULTAN (Grandison, 2003)
PolicyName: trust ( Tr, Te, As, L ) ← Cs
The semantic interpretation of a statement of the
form above is that Tr trusts/distrusts Te to
perform As at trust/distrust level L if
constraint(s) Cs is true.
14
Decision Making
  • Very important, since only with automated
    decision making can trust-aided ubiquitous
    computing become a calm technology.
  • Can be done
  • With and without user interaction
  • Based on binary decision criteria (e.g. users
    with a specific certificate are accepted as
    trustworthy)
  • Based on thresholds, depending on uncertainty,
    risk, ...

15
Challenges
  • General challenge for UC: dealing with context
  • Specific challenges
  • Dealing with uncertainty and risk
  • Accurate long-term behavior
  • Smoothness
  • Weighting towards current behavior
  • Attack resistance
  • Intuitive representation in user interfaces

16
Trust Model (1) - TidalTrust
  • General
  • Developed by J. Golbeck (Golbeck, 2005)
  • Targets semantic web / friend-of-a-friend
    networks
  • Static evaluation of trust (no re-evaluation)
  • Evaluation in the FilmTrust project:
    http://trust.mindswap.org/FilmTrust
  • Idea of the FilmTrust project
  • Everyone
  • Can join the network
  • Rate movies on a scale from 1 to 10
  • Rate friends (people they know who participate
    in the network) in the sense of: if the
    person were to have rented a movie to watch, how
    likely is it that you would want to see that film

17
TidalTrust (Trust Network)
  • Visualization of a trust network
  • Alice (A) has three trusted friends (T1, T2, T3)
    who rated the movie U
  • If Alice does not know the movie U, the
    TidalTrust algorithm is used to calculate a
    rating based on the information in the network.

18
TidalTrust (Simple Algorithm)
  • The formula for the calculation of recommended
    ratings is recursive
  • If a node has rated the movie m directly, it
    returns its rating for m
  • Else, the node asks its neighbors for
    recommendations
  • For a node s in a set of nodes S, the rating
    r_{sm} inferred by s for the movie m is defined as
    shown below
  • Where intermediate nodes are described by i,
    t_{si} describes the trust of s in i, and r_{im}
    is the rating of the movie m assigned by i.
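
With this notation, the rating can be written as the
trust-weighted average of the neighbors' ratings
(cf. Golbeck, 2005):

r_{sm} = \frac{\sum_{i \in \mathrm{neighbors}(s)} t_{si} \cdot r_{im}}{\sum_{i \in \mathrm{neighbors}(s)} t_{si}}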

19
TidalTrust (Simple Algorithm)
  • Example
  • Let's assume
  • A's trust in T1, T2, and T3 is t_{AT1} = 9,
    t_{AT2} = 8, t_{AT3} = 1.
  • The ratings of T1, T2, and T3 for the movie U are
    r_{T1U} = 8, r_{T2U} = 9, r_{T3U} = 2.
  • The rating of A about U is calculated as
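
Assuming the trust-weighted average above:

r_{AU} = \frac{9 \cdot 8 + 8 \cdot 9 + 1 \cdot 2}{9 + 8 + 1} = \frac{146}{18} \approx 8.1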

20
Problems with the simple algorithm
  • Conclusion
  • In this case the inferred rating is close to the
    rating of A's most trusted friends.
  • BUT: If the number of lowly trusted friends of A
    increases, they can heavily influence the
    calculated rating
  • Results by Golbeck indicate that
  • Most accurate results come from the highest
    trusted neighbors
  • Accuracy decreases with path length
  • BUT: the simple algorithm does not account for that
  • Optimizations
  • Define a minimum threshold for trust in
    recommenders (max)
  • Define a maximum threshold for path length
    (maxdepth)
  • Arguments against a static setup of max
  • Some nodes may have many neighbors rated 10
  • While there may be other nodes whose highest
    rating for a neighbor is only 6
  • Arguments against a static setup of maxdepth

21
TidalTrust (Advanced Algorithm)
  • Very similar to the simple algorithm
  • BUT: constraints on search depth and selection of
    recommenders
  • maxdepth: minimal depth needed to find at least
    one recommender for the movie m
  • max: max. threshold for t_{si} needed to find at
    least one recommender for the movie m
  • Advanced algorithm
  • If a node s has rated the movie m directly, it
    returns its rating r_{sm} for m
  • Else, the node asks its neighbors for
    recommendations
  • For a node s in a set of nodes S, the rating
    r_{sm} inferred by s for the movie m is defined as
    shown below
  • Where intermediate nodes are described by i,
    t_{si} describes the trust of s in i, and r_{im}
    is the rating of the movie m assigned by i.
  • start indicates the node initiating the request
    for movie m
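
Following (Golbeck, 2005), the restricted weighted
average only sums over neighbors i with
t_{si} ≥ max, within depth maxdepth:

r_{sm} = \frac{\sum_{i:\, t_{si} \ge max} t_{si} \cdot r_{im}}{\sum_{i:\, t_{si} \ge max} t_{si}}

A minimal Python sketch of this inference
(illustrative only: Golbeck's algorithm determines
max and maxdepth dynamically during a breadth-first
search, whereas here they are passed in as
parameters, and all identifiers are invented):

```python
# Illustrative sketch of TidalTrust-style inference (not Golbeck's
# reference implementation). Trust and ratings are nested dicts,
# e.g. trust["A"]["T1"] = 9 and ratings["T1"]["U"] = 8.

def infer_rating(trust, ratings, source, movie, threshold, maxdepth, depth=0):
    """Return the rating `source` infers for `movie`, or None if unreachable."""
    if movie in ratings.get(source, {}):          # direct rating available
        return ratings[source][movie]
    if depth >= maxdepth:                         # search depth exhausted
        return None
    num, den = 0.0, 0.0
    for neighbor, t in trust.get(source, {}).items():
        if t < threshold:                         # only highly trusted recommenders
            continue
        r = infer_rating(trust, ratings, neighbor, movie,
                         threshold, maxdepth, depth + 1)
        if r is not None:
            num += t * r                          # weight recommendation by trust
            den += t
    return num / den if den > 0 else None

# Example from the slides: A trusts T1, T2, T3 with 9, 8, 1.
trust = {"A": {"T1": 9, "T2": 8, "T3": 1}}
ratings = {"T1": {"U": 8}, "T2": {"U": 9}, "T3": {"U": 2}}
print(infer_rating(trust, ratings, "A", "U", threshold=9, maxdepth=1))  # -> 8.0
```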

22
TidalTrust (Advanced Algorithm)
  • Example 1
  • Let's assume
  • A's trust in T1, T2, and T3 is t_{AT1} = 9,
    t_{AT2} = 8, t_{AT3} = 1.
  • The ratings of T1, T2, and T3 for the movie U are
    r_{T1U} = 8, r_{T2U} = 9, r_{T3U} = 2.
  • ⇒ maxdepth = 1, max = 9
  • The rating of A about U is calculated as
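
With max = 9, only T1 qualifies as a recommender, so
(under the restricted weighted average above)
r_{AU} = (9 \cdot 8) / 9 = 8.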

23
TidalTrust (Advanced Algorithm)
  • Example 2
  • ⇒ parameters
  • max = 9
  • maxdepth = 2
  • The rating of A about U is calculated as

24
TidalTrust
  • Conclusion
  • The inferred rating is close to the ratings
    provided by the most trusted friends
  • The additional constraints in the advanced
    algorithm should improve the recommendations
  • Drawbacks of the model
  • It does not deal with uncertainty
  • It does not update trust values in recommenders
  • Decision making, i.e. whether a rating of 6 is
    good enough or not, is up to the user.

25
Trust Model (2) - Subjective Logic
  • Subjective Logic
  • Developed by Audun Jøsang (basic ideas 1997;
    Jøsang, 2001)
  • The concept of atomicity is left out here.
  • Basic ideas
  • Uncertainty as a main aspect of an opinion
    ω = (b, d, u)
  • Constraint: b + d + u = 1 (b: belief, d:
    disbelief, u: uncertainty)
  • Mathematical foundation based on Bayesian
    probability theory and belief theory
  • Defines operators for discounting
    (recommendation) and consensus
  • Many proposals for how the trust model can be
    used in applications

26
Example 2 Subjective Logic
  • Bayesian probability theory
  • Allows one to calculate the posterior probability
    of binary events based on a priori collected
    evidence.
  • Beta probability density function (pdf) of a
    probability variable p
  • Examples
  • beta(p | 1, 1) and beta(p | 8, 2)

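The beta pdf referred to above can be written as
(cf. Jøsang, 2001)

beta(p \mid \alpha, \beta) = \frac{\Gamma(\alpha + \beta)}{\Gamma(\alpha)\,\Gamma(\beta)}\; p^{\alpha - 1} (1 - p)^{\beta - 1}, \quad 0 \le p \le 1,

where \Gamma denotes the gamma function;
beta(p | 1, 1) is the uniform distribution, while
beta(p | 8, 2) has its mass concentrated near p = 0.8.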
27
Example 2 Subjective Logic
  • Define α and β as shown below,
  • where r corresponds to the number of positive
    pieces of collected evidence,
  • and s corresponds to the number of negative
    pieces of collected evidence.
  • The mean value of the distribution is defined as
    shown below

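Following (Jøsang, 2001), the parameters and the mean
are

\alpha = r + 1, \qquad \beta = s + 1, \qquad E(p) = \frac{\alpha}{\alpha + \beta} = \frac{r + 1}{r + s + 2}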
28
Example 2 Subjective Logic
  • Belief theory
  • An opinion is expressed by the triple (b,d,u)
  • b expresses the total belief of an observer that
    a particular state is true
  • d expresses the total disbelief of an observer
    that a particular state is true
  • In contrast to probability theory, in which
    P(A) + P(not A) = 1, a weaker relation holds
    (see below)
  • The triple b, d, and u is related as shown below
  • u expresses the uncertainty
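
The relations referred to above are (cf. Jøsang, 2001)

b + d \le 1 \qquad \text{and} \qquad b + d + u = 1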

29
Example 2 Subjective Logic
  • The mapping between Bayesian probability theory
    and belief theory is done by the following
    equations
  • Opinion in the belief model: (b, d, u)
  • Opinion in the Bayesian model: (r, s)
  • Mapping
  • Belief model → Bayesian model
  • Bayesian model → Belief model
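
Written out (cf. Jøsang, 2001), with r and s the
numbers of positive and negative evidence:

Bayesian → Belief: \quad b = \frac{r}{r + s + 2}, \quad d = \frac{s}{r + s + 2}, \quad u = \frac{2}{r + s + 2}

Belief → Bayesian (for u > 0): \quad r = \frac{2b}{u}, \quad s = \frac{2d}{u}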

30
Example 2 Subjective Logic
  • Operator for consensus
  • Let the opinion of A about x be denoted as ω_x^A
  • Let the opinion of B about x be denoted as ω_x^B
  • The opinion of an agent who has made the
    observations of A and B (assuming they are based
    on independent evidence) can be calculated as
    shown below
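
The consensus operator as given in (Jøsang, 2001),
with \kappa = u_x^A + u_x^B - u_x^A u_x^B (assumed
non-zero):

b_x^{A,B} = \frac{b_x^A u_x^B + b_x^B u_x^A}{\kappa}, \quad d_x^{A,B} = \frac{d_x^A u_x^B + d_x^B u_x^A}{\kappa}, \quad u_x^{A,B} = \frac{u_x^A u_x^B}{\kappa}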

31
Example 2 Subjective Logic
  • Operator for discounting
  • Let the opinion of A about the trustworthiness of
    B be denoted as ω_B^A
  • Let the opinion of B about x be denoted as ω_x^B
  • The opinion of A about x based on the
    recommendation of B can be calculated as shown
    below
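
The discounting operator as given in (Jøsang, 2001):

b_x^{A:B} = b_B^A \, b_x^B, \quad d_x^{A:B} = b_B^A \, d_x^B, \quad u_x^{A:B} = d_B^A + u_B^A + b_B^A \, u_x^B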

with
,
, and
32
Example 2 Subjective Logic
  • Justification for the discounting operator
  • The belief of A about x in the recommendation
    increases with the belief of A in B, and with the
    belief of B in x.
  • The disbelief of A about x in the recommendation
    increases with the disbelief of A in B, and with
    the disbelief of B in x.
  • The uncertainty in the recommendation increases
    with the uncertainty of A about B, and with the
    uncertainty of B about x (if A has assigned any
    belief to B)

33
Example 2 Subjective Logic
  • Example (based on the graph shown with the
    TidalTrust example)
  • Let's assume the opinions of A about the
    trustworthiness of T1, T2, and T3 are
  • i.e. two very trusted users, and one who is more
    or less unknown
  • The opinions of T1, T2, and T3 about U are
  • i.e. two rather good ratings, and one very bad one
  • The discounting of the recommendations of T1,
    T2, and T3 evaluates to
  • i.e. the first two opinions maintain high values
    for belief; since uncertainty dominates the
    opinion of A about T3, the last opinion has an
    even bigger uncertainty component

34
Example 2 Subjective Logic
  • Example (cont.)
  • The consensus between the first two opinions
    evaluates to
  • i.e. the consensus between the two opinions with
    dominating belief components. The belief in the
    resulting opinion increases, and the uncertainty
    decreases.
  • The consensus between this opinion and the last
    one
  • i.e. consensus between an opinion with dominating
    belief and one with dominating uncertainty. The
    opinion with high uncertainty has only little
    influence on the resulting opinion.
  • Note: Use the mapping to transform the belief
    representation into the Bayesian representation,
    apply the consensus operator, and transform back
    to the belief representation.
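
A small Python sketch of the discounting and
consensus operators; the opinion values below are
invented for illustration, and the operators are
applied directly, which is equivalent to the
mapping-based workflow described in the note:

```python
# Illustrative sketch of Josang's consensus and discounting operators on
# opinions (b, d, u) with b + d + u = 1; example values are invented.
from collections import namedtuple

Opinion = namedtuple("Opinion", "b d u")

def discount(w_ab, w_bx):
    """A's opinion about x derived from B's recommendation (discounting)."""
    return Opinion(b=w_ab.b * w_bx.b,
                   d=w_ab.b * w_bx.d,
                   u=w_ab.d + w_ab.u + w_ab.b * w_bx.u)

def consensus(w1, w2):
    """Combine two opinions based on independent evidence (consensus)."""
    k = w1.u + w2.u - w1.u * w2.u
    return Opinion(b=(w1.b * w2.u + w2.b * w1.u) / k,
                   d=(w1.d * w2.u + w2.d * w1.u) / k,
                   u=(w1.u * w2.u) / k)

# Invented example: A trusts T1 and T2 highly, T3 is mostly unknown.
a_t = [Opinion(0.8, 0.1, 0.1), Opinion(0.7, 0.1, 0.2), Opinion(0.1, 0.1, 0.8)]
t_u = [Opinion(0.8, 0.1, 0.1), Opinion(0.9, 0.0, 0.1), Opinion(0.0, 0.9, 0.1)]

recommended = [discount(at, tu) for at, tu in zip(a_t, t_u)]
combined = recommended[0]
for w in recommended[1:]:
    combined = consensus(combined, w)
print(combined)  # belief dominated by the two highly trusted recommenders
```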

35
Example 2 Subjective Logic
  • Conclusion
  • As we can see from the example, the model seems
    to be intuitive.
  • The model allows one to express one's uncertainty
    about the trustworthiness of someone.
  • The operators for discounting and consensus are
    justified separately.
  • The model allows easy integration of new evidence
    (re-evaluation)
  • Drawbacks
  • Complex calculation model
  • It may be difficult for users to set up opinions.

36
  • Accountability

37
Accountability
  • Goal
  • Accountability helps to protect the interests of
    the collective and the usage of its resources
  • Problem: Individual rationality vs. collective
    welfare
  • Freeriding (e.g. in P2P file-sharing) is
    individually rational behavior
  • BUT: Freeriding compromises the idea of
    file-sharing
  • How to enforce accountability (Dingledine et al.,
    2000)
  • Selecting favored users ⇒ reputation systems
  • Restricting access (making users pay) ⇒
    micropayment systems

38
Reputation Systems
  • Basic idea
  • Good reputation is desirable
  • Contribution to the collective welfare leads to a
    good reputation
  • Selection: Members of the community grant access
    to their resources only to members with a good
    reputation.
  • Reputation vs. Trust
  • Very similar idea, but
  • Trust: subjective trust value of entity A for any
    entity B
  • Reputation: only one system-wide reputation score
    for each entity
  • Trust can be built on reputation
  • Well-known examples
  • eBay Feedback Forum
  • Amazon review scheme

39
Reputation Systems
  • Classification of computational reputation models
    (Schlosser et al., 2006)
  • Accumulative Systems
  • calculate the reputation of an agent as the sum
    of all provided ratings
  • An example for this is the total score in the
    eBay feedback forum.
  • Average Systems
  • calculate the reputation as the average of
    ratings which an agent has received.
  • This corresponds, for instance, with the
    percentage of positive ratings in the eBay
    feedback forum.
  • Blurred Systems
  • calculate the reputation of an agent as the
    weighted sum of all ratings.
  • The weight depends on the age of a rating, i.e.,
    older ratings receive a lower weight.
  • OnlyLast Systems
  • determine the reputation of an agent as the most
    recent rating.
  • Although these systems seem to be very simple,
    simulations have shown that they provide a
    reasonable level of attack resistance (e.g.
    Tit-for-Tat)
  • EigenTrust Systems
  • calculate the reputation of an agent depending on
    the ratings, as well as on the reputation of the
    raters. An interesting property of these systems
    is that each agent calculates the reputation of
    the other agents locally based on its own rating
    and the weighted ratings of the surrounding
    agents.
  • If all agents adhere to the protocol, the locally
    stored reputation information of all agents will
    converge. Thus, all agents have the same
    reputation value for any agent.
  • Adaptive Systems
  • calculate the reputation of an agent depending on
    its current reputation. For example, a single
    positive rating has a higher impact on the
    reputation of an agent with a low reputation than
    on the reputation of an agent with a high
    reputation.
  • Beta Systems
  • calculate the reputation of an agent based on the
    beta probability density function over the
    numbers of positive and negative ratings (cf. the
    beta pdf introduced with Subjective Logic)
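
A tiny Python sketch contrasting a few of the scoring
rules above (accumulative, average, blurred,
only-last); the rating history and the decay factor
are invented for illustration:

```python
# Illustrative scoring rules for a sequence of ratings, oldest first.
# Ratings are +1 / -1 as in the eBay feedback forum; decay is invented.

def accumulative(ratings):
    return sum(ratings)                       # eBay-style total score

def average(ratings):
    return sum(ratings) / len(ratings)        # average of all ratings

def blurred(ratings, decay=0.9):
    # Older ratings get exponentially lower weights.
    n = len(ratings)
    return sum(r * decay ** (n - 1 - i) for i, r in enumerate(ratings))

def only_last(ratings):
    return ratings[-1]                        # most recent rating only

history = [+1, +1, -1, +1, +1, +1, -1]
print(accumulative(history), average(history),
      blurred(history), only_last(history))
```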

40
Reputation Systems
  • Further aspects
  • Incentives for building up a good reputation
  • Incentives to provide ratings
  • Location of the storage for the reputation values
  • Reputation of newcomers
  • Attack-resistance

41
Micropayments
  • Micropayments vs. Reputation
  • Reputation: rewards for contribution to the
    collective welfare
  • ⇒ allows selecting only users who contribute
    to the collective
  • Micropayments: make the users pay for any
    received resource/service
  • ⇒ prevents users from arbitrarily high (mis-)usage
    of a resource/service, or compensates over-usage
    by payments
  • Classification of micropayments (Dingledine et
    al., 2000)
  • Fungible: the micropayment has some intrinsic or
    redeemable value
  • Monetary
  • Very small payments with real money, e.g. one
    cent or less per payment
  • Need to be very efficient, i.e. low transaction
    costs
  • ⇒ Can be less secure than approaches such as
    eCash, which focus on the transfer of larger
    amounts of money
  • Non-Monetary
  • Introduce an artificial currency, i.e. some kind
    of scarce resource
  • Can be used to pay others
  • Non-Fungible: the micropayment does not have an
    intrinsic value
  • Typically users only show some Proof-of-Work
    (PoW), e.g., a solved computational problem,
    i.e., the users show that they were willing to do
    something before getting access to a service or
    resource
  • Can be used to counteract email spam (see chapter
    on Security for Ubiquitous Computing)
  • Two kinds of approaches

42
Micropayments
  • Classification and different approaches

43
Micropayments Example Payword
  • Payword is based on a PKI and collision-resistant
    one-way hash functions (Rivest & Shamir, 1997)
  • Set-Up
  • User U, Vendor V, Broker B
  • U calculates a set of n + 1 values w_0, ..., w_n,
    called paywords. The paywords are calculated in
    reverse order. The last payword w_n is chosen
    randomly, the others are calculated recursively
  • It holds w_{i-1} = h(w_i)
  • U sends w_0 to V (w_0 is not a payment)
  • Payment
  • The i-th micropayment is defined as the tuple
    (w_i, i)
  • Per requested micropayment, U sends the next
    tuple (w_i, i) to V
  • Charging
  • V sends w_0 and the tuple (w_l, l), the last
    payword it received, to B
  • B verifies that w_l is the l-th payword by
    checking h^l(w_l) = w_0
  • If every payword is worth 1 cent, B charges U's
    account with l cents and passes them to V
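
A minimal Python sketch of the payword chain and its
verification, using SHA-256 as a stand-in for the
hash function h; all identifiers are illustrative:

```python
# Illustrative payword chain: w_{i-1} = h(w_i), i = 1..n (see slide above).
import hashlib, os

def h(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def make_chain(n: int):
    """Return [w_0, ..., w_n]; w_n is random, the rest are hashed backwards."""
    chain = [os.urandom(32)]              # w_n
    for _ in range(n):
        chain.append(h(chain[-1]))        # w_{i-1} = h(w_i)
    chain.reverse()                       # now chain[i] == w_i
    return chain

def verify(w0: bytes, wl: bytes, l: int) -> bool:
    """Broker's check: hashing w_l exactly l times must yield w_0."""
    x = wl
    for _ in range(l):
        x = h(x)
    return x == w0

paywords = make_chain(100)                # user U commits to w_0
w0 = paywords[0]
l = 7                                     # index of the last payword V received
assert verify(w0, paywords[l], l)         # B charges U's account with l cents
```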

44
Micropayments Example P2PWNC
  • P2PWNC stands for Peer-to-Peer Wireless Network
    Confederation (Efstathiou & Polyzos, 2005)
  • Application: The participants in P2PWNC provide
    each other with access to their WLAN hotspots
  • P2PWNC uses a token-based micropayment system,
    supporting non-simultaneous n-way exchanges
  • Token issuing
  • If user B is allowed to access the hotspot of A,
    B issues a signed receipt, which states that B
    owes a favor to A, i.e., B was granted access by
    A, and passes it to A.

45
  • Token-based payment (2-way exchange)
  • If A wants to access B's hotspot at a later point
    in time, B will only grant access to A if A can
    show a receipt stating that B owes a favor to A
    (2-way exchange).
  • Token-based payment (3-way exchange)
  • To make the scheme more flexible
  • A can also present a chain of receipts, from
    which B can learn that she owes something to A by
    transitivity.
  • For example
  • A shows two receipts to B
  • One receipt which states that B owes something to
    C, and the other one stating that C owes
    something to A.
  • Thus, B can learn from these two receipts that
    she owes something to A (3-way exchange).

46
Micropayments - Summary
  • Summary
  • Basic idea: making users pay
  • Huge variety of approaches
  • Monetary approaches
  • Service provider knows exactly the value of the
    payment
  • Allows for short item identifiers or anonymity
  • Need for central brokers
  • Non-monetary approaches
  • Less risk when losing payments, or when paying
    for a bad service
  • Lower barrier of acceptance
  • Used e.g. for P2P file-sharing, WLAN connection
    sharing
  • Proof-of-Work
  • Huge gap between the capacities of high-end
    systems vs. low-end systems
  • This gap gets even bigger when UbiComp devices
    are taken into account
  • Hard to develop fair PoWs

47
Conclusion
  • Introduced the concepts of trust and
    accountability
  • Both concepts may help to
  • collaborate in UbiComp environments
  • allow UbiComp to become a calm technology
  • "Increasingly, the bottleneck in computing is
    not its disk capacity, processor speed, or
    communication bandwidth, but rather the limited
    resource of human attention" (Garlan et al.,
    2002)
  • Modeling trust is still in a pioneering phase
  • Need for attack-resistant models, and concepts
    allowing trust to be transferred between similar
    contexts
  • Approaches for achieving accountability can
    already be seen
  • Reputation systems, e.g. eBay feedback forum, ...
  • Micropayments, e.g. PoW against spam, ...
  • Note: It is a major issue not only to model
    concepts of everyday life, but to present them in
    a way which is appropriate for everyday usage!

48
Bibliography
  • Bhargava et al., 2004
  • Bhargava, B., Lilien, L., Rosenthal, A., &
    Winslett, M. (2004). Pervasive trust. IEEE
    Intelligent Systems, 19(5), 74-88.
  • Blaze et al., 1996
  • Blaze, M., Feigenbaum, J., & Lacy, J. (1996).
    Decentralized trust management. In Proc. of the
    1996 IEEE Symposium on Security and Privacy.
  • Blaze et al., 1998
  • Blaze, M., Feigenbaum, J., & Keromytis, A. D.
    (1998). KeyNote: Trust management for public-key
    infrastructures. In Security Protocols
    Workshop (pp. 59-63).
  • Dingledine et al., 2000
  • Dingledine, R., Freedman, M. J., & Molnar, D.
    (2000). Accountability measures for peer-to-peer
    systems. In A. Oram (Ed.), Peer-to-Peer:
    Harnessing the Power of Disruptive
    Technologies (pp. 271-340). O'Reilly.
  • Efstathiou & Polyzos, 2005
  • Efstathiou, E., & Polyzos, G. (2005). A
    self-managed scheme for free citywide Wi-Fi. In
    WOWMOM '05: Proceedings of the First
    International IEEE WoWMoM Workshop on Autonomic
    Communications and Computing (ACC '05)
    (pp. 502-506). Washington, DC, USA: IEEE
    Computer Society.
  • Jøsang, 2001
  • Jøsang, A. (2001). A logic for uncertain
    probabilities. International Journal of
    Uncertainty, Fuzziness and Knowledge-Based
    Systems, 9(3), 279-311.
  • Gambetta, 2000
  • Gambetta, D. (2000). Can we trust trust? In
    D. Gambetta (Ed.), Trust: Making and Breaking
    Cooperative Relations, electronic
    edition (pp. 213-237).
  • Garlan et al., 2002
  • Garlan, D., Siewiorek, D., Smailagic, A., &
    Steenkiste, P. (2002). Project Aura: Toward
    distraction-free pervasive computing. IEEE
    Pervasive Computing, 1(2), 22-31.
  • Golbeck, 2005
  • Golbeck, J. (2005). Computing and applying
    trust in web-based social networks. Unpublished
    doctoral dissertation, University of Maryland,
    College Park.
  • Grandison & Sloman, 2000