Computing Trust in Social Networks

Transcript and Presenter's Notes

Title: Computing Trust in Social Networks


1
Computing Trust in Social Networks
  • Jennifer Golbeck
  • College of Information Studies

2
Web-Based Social Networks (WBSNs)
  • Websites and interfaces that let people maintain
    browsable lists of friends
  • At last count:
  • 245 social networking websites
  • Over 850,000,000 accounts
  • Full list at http://trust.mindswap.org

3
Using WBSNs
  • Lots of users, spending lots of time creating
    public information about their preferences
  • We should be able to use that to build better
    applications
  • When I want a recommendation, who do I ask?
  • The people I trust

4
Applications of Trust
  • With direct knowledge of, or a recommendation
    about, how much to trust someone, that trust
    value can be used as a filter in many
    applications
  • Since social networks are so prominent on the
    web, they are a public, accessible data source
    for determining the quality of annotations and
    information

5
Research Areas
  • Inferring Trust Relationships
  • Using Trust in Applications

6
Inferring Trust
The Goal: Select two individuals - the source
(node A) and the sink (node C) - and recommend to
the source how much to trust the sink.
[Diagram: a chain A - B - C with known trust
values tAB and tBC, from which tAC is inferred.]
7
Methods
  • TidalTrust
  • Personalized trust inference algorithm
  • SUNNY
  • Bayes Network algorithm that computes trust
    inferences and a confidence interval on the
    inferred value.
  • Profile Based
  • Trust from similarity

8
(No Transcript)
9
Trust Algorithm
  • If the source does not know the sink, the source
    asks all of its friends how much to trust the
    sink, and computes a trust value as a weighted
    average of their answers (see the sketch below)
  • Neighbors repeat the process if they do not have
    a direct rating for the sink
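A minimal sketch of this weighted-average inference, assuming trust ratings on a 1-10 scale stored as a dict of dicts; the graph values are hypothetical, and the real TidalTrust algorithm additionally bounds path length and prunes weak paths, which this sketch omits.

```python
# Minimal sketch of the recursive weighted-average inference described above.
# Assumptions: trust ratings on a 1-10 scale in a dict-of-dicts "graph";
# real TidalTrust also limits path depth and prunes weak paths.

def infer_trust(graph, source, sink, visited=None):
    """Return an inferred trust value for sink from source, or None."""
    if visited is None:
        visited = set()
    visited = visited | {source}

    # Direct rating: no inference needed.
    if sink in graph.get(source, {}):
        return graph[source][sink]

    weighted_sum = 0.0
    weight_total = 0.0
    for neighbor, trust_in_neighbor in graph.get(source, {}).items():
        if neighbor in visited:
            continue
        neighbor_rating = infer_trust(graph, neighbor, sink, visited)
        if neighbor_rating is not None:
            weighted_sum += trust_in_neighbor * neighbor_rating
            weight_total += trust_in_neighbor
    return weighted_sum / weight_total if weight_total > 0 else None

# Example using the earlier A-B-C chain with hypothetical values:
# A trusts B at 8, B trusts C at 6, so A's inferred trust in C is 6.0.
graph = {"A": {"B": 8}, "B": {"C": 6}}
print(infer_trust(graph, "A", "C"))
```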

10
Accuracy
  • Comparison to other algorithms
  • Beth-Borcherding-Klein (BBK) 1994

11
Trust from Similarity
  • We know trust correlates with overall similarity
    (Ziegler and Golbeck, 2006)
  • Does trust capture more than just overall
    agreement?
  • Two Part Analysis
  • Controlled study to find profile similarity
    measures that relate to trust
  • Verification through application in a live system

12
Experimental Outline
  • Phase 1: Rate Movies - subjects rate the movies
    on the list
  • Ratings grouped as extreme (1, 2, 9, 10) or far
    from average (4 or more points different)
  • Create profiles of hypothetical users
  • A profile is a list of movies and the
    hypothetical user's ratings of them
  • Subjects rate how much they would trust the
    person represented by the profile
  • Vary the profile's ratings in a controlled way

13
Generating Profiles
  • Each profile contained exactly 10 movies, 4 from
    an experimental category and 6 from its
    complement
  • E.g. 4 movies with extreme ratings and 6 with
    non-extreme ratings
  • Control for average difference, standard
    deviation, etc. so we could see how differences
    on specific categories of films affected trust
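One way to read this control, as a sketch: every profile reuses the same multiset of rating differences, so the average difference and standard deviation stay fixed, and only the category the large differences land on varies. The offset values and the "first four movies = experimental category" split below are hypothetical.

```python
import statistics

# Sketch of the control: each profile reuses the same multiset of offsets,
# so mean difference and standard deviation are identical across profiles;
# only the category receiving the large offsets changes. Values are made up.
offsets = [4, 4, 3, 3, 1, 1, 1, 0, 0, 0]

large_on_experimental = offsets                    # big differences on m1-m4
large_on_complement   = offsets[4:] + offsets[:4]  # big differences on m7-m10

for variant in (large_on_experimental, large_on_complement):
    print(statistics.mean(variant), statistics.pstdev(variant))  # same twice
```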

14
Example Profile
  • Movies m1 through m10
  • User ratings r1...r10 for m1...m10
  • r1...r4 are extreme (1, 2, 9, or 10)
  • r5...r10 are not extreme
  • Profile ratings pi = ri ± Δi, where Δi is the
    controlled difference applied to rating ri
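A small illustration of such a profile, assuming the 1-10 rating scale; the ratings and offsets below are invented solely to show the structure pi = ri ± Δi.

```python
# Hypothetical example of one generated profile (1-10 rating scale assumed).
# r1..r4 are the subject's extreme ratings, r5..r10 are not extreme; each
# profile rating is the subject's rating shifted by a controlled offset.
# All numbers are invented for illustration.

subject_ratings = [1, 2, 9, 10, 5, 6, 4, 7, 5, 6]     # r1..r10
offsets         = [4, 3, -4, -3, 1, -1, 2, 0, -1, 1]  # controlled differences

profile_ratings = [
    min(10, max(1, r + d))          # clamp back onto the 1-10 scale
    for r, d in zip(subject_ratings, offsets)
]
print(profile_ratings)   # [5, 5, 5, 7, 6, 5, 6, 7, 4, 7]
```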

15
Results
  • Reconfirmed that trust strongly correlates with
    overall similarity
  • Agreement on extreme ratings
  • Largest single difference (r)
  • Subjects' propensity to trust

16
Extreme Ratings
  • When large differences (Δhigh) are applied to
    movies with extreme ratings, the trust ratings
    are significantly lower than when small
    differences (Δlow) are applied to those films
  • Statistically significant for all Δi

17
Maximum Difference (r)
  • Holding overall agreement and standard deviation
    constant, trust decreased as the single largest
    difference between the profile and the subject
    (r) increased.

18
Propensity to Trust
19
Validation
  • Gather all pairs of FilmTrust users who have a
    known trust relationship and share movies in
    common
  • 322 total user pairs
  • Develop a formula using the experimental
    parameters to estimate trust
  • Compute accuracy by comparing computed trust
    value with known value

20
In FilmTrust
  • Use weights (w1, w2, w3, w4, w5) = (7, 2, 1, 8, 2)
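The slides give the weights but not the exact functional form of the estimate, so the linear combination below, the feature names, and the scaling are all assumptions made purely to illustrate how such weights could be applied.

```python
# Hedged sketch: the slides supply only the weights (7, 2, 1, 8, 2); the
# feature names, their scaling to [0, 1], and this linear form are
# assumptions for illustration, not the formula used in FilmTrust.

def estimate_trust(overall_sim, extreme_sim, max_diff_term, propensity, bias,
                   weights=(7, 2, 1, 8, 2)):
    """Weighted average of five features, each assumed pre-scaled to [0, 1]."""
    features = (overall_sim, extreme_sim, max_diff_term, propensity, bias)
    return sum(w * f for w, f in zip(weights, features)) / sum(weights)

# Hypothetical user pair: quite similar overall, moderate propensity to trust.
print(estimate_trust(0.8, 0.7, 0.4, 0.6, 0.5))   # 0.66
```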

21
Effect of change
  • If a node changes its trust value for another,
    that will propagate through the inferred values
  • How far? What is the magnitude? Does the impact
    increase or decrease with distance?
  • How does this relate to the algorithm?
  • Joint work with Ugur Kuter

22
(No Transcript)
23
Algorithms Considered
  • EigenTrust
  • Global algorithm
  • Like PageRank, but with weighted edges (sketched
    after this list)
  • Advogato
  • Finds paths through the network
  • Global group trust metric that uses a set of
    authoritative nodes to decide how trustworthy a
    person is
  • TidalTrust
  • No minimum distance - search the entire network
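A compact sketch of the EigenTrust-style idea noted above ("like PageRank, but with weighted edges"), assuming a small row-normalized trust matrix, a uniform start vector, and no pre-trusted peers; the real EigenTrust algorithm includes pre-trusted nodes and a distributed computation.

```python
import numpy as np

# Sketch of a global, EigenTrust-style metric: repeatedly propagate trust
# through a row-normalized weighted adjacency matrix until it converges.
# Assumptions: small dense matrix, uniform start vector, no pre-trusted peers.

def global_trust(C, iterations=100, tol=1e-9):
    C = np.asarray(C, dtype=float)
    row_sums = C.sum(axis=1, keepdims=True)
    # Normalize each node's outgoing trust so it sums to 1 (all-zero rows stay zero).
    C = np.divide(C, row_sums, out=np.zeros_like(C), where=row_sums > 0)
    t = np.full(C.shape[0], 1.0 / C.shape[0])   # uniform initial trust vector
    for _ in range(iterations):
        t_next = C.T @ t                        # aggregate weighted opinions
        if np.abs(t_next - t).sum() < tol:
            break
        t = t_next
    return t

# Tiny hypothetical network: node 2 receives the most trust overall.
print(global_trust([[0, 1, 3],
                    [2, 0, 4],
                    [1, 5, 0]]))
```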

24
Initial ideas?
  • The further you get from the sink, the smaller
    the impact.
  • Changes by more central, highly connected nodes
    will create a bigger impact.

25
Network
26
Methodology
  • Pick a pair of nodes in the network
  • Set the trust value on that edge to 0
  • Infer trust values between all pairs
  • Set the trust value on that edge to 1
  • Infer trust values between all pairs
  • Compare the inferred values from the trust = 0
    case to the trust = 1 case
  • Repeat for every pair
  • Repeat for each algorithm
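A sketch of this experiment loop, assuming the network is a dict-of-dicts of direct trust values on a 0-1 scale and an infer_all_pairs(graph) helper (for example, built on the weighted-average sketch earlier) that returns inferred values keyed by (source, sink); both names are placeholders, not the original experimental code.

```python
from itertools import permutations

# Sketch of the perturbation experiment: for each ordered pair (a, b), set
# the direct trust a->b to 0 and then to 1, re-infer trust between all pairs
# in each case, and record which inferred values changed and by how much.
# Assumes a dict-of-dicts "graph" and an infer_all_pairs(graph) helper
# returning {(source, sink): value or None}; both are placeholders.

def perturbation_impact(graph, infer_all_pairs):
    impacts = {}
    # For brevity this iterates only over nodes that have outgoing edges.
    for a, b in permutations(graph, 2):
        original = graph[a].get(b)

        graph[a][b] = 0
        low = infer_all_pairs(graph)
        graph[a][b] = 1
        high = infer_all_pairs(graph)

        # Restore the original state of the perturbed edge.
        if original is None:
            del graph[a][b]
        else:
            graph[a][b] = original

        impacts[(a, b)] = {
            pair: abs(high[pair] - low[pair])
            for pair in high
            if low.get(pair) is not None
            and high[pair] is not None
            and high[pair] != low[pair]
        }
    return impacts
```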

27
Fraction of Nodes at a Given Distance Whose
Inferred Trust Value for the Sink Changed
28
(No Transcript)
29
Average Magnitude of Change at a Given Distance
30
Conclusions and Future Directions
31
Conclusions
  • Trust is an important relationship in social
    networks.
  • Social relationships are different from other
    common data used in CS research.
  • Trust can be computed in a variety of ways
  • The type of algorithm and behavior of users in
    the network impact the stability of trust
    inferences

32
Future Work - Computing with Trust
  • Major categories of trust inference: global vs.
    local, same scale vs. new scale
  • All have algorithms
  • Additional features (like confidence)
  • Hybrid approaches
  • Use trust assigned by users and similarity
  • Use multiple relationships for better certainty
    in certain domains (e.g. authority)

33
Future Work - Applications
  • What sort of applications can trust be used to
    support?
  • Recommender systems, email filtering, tagging,
    information ranking
  • Disaster response
  • Highlight relevant items among vast collections
    of data

34
  • Jennifer Golbeck
  • golbeck@cs.umd.edu
  • http://www.cs.umd.edu/golbeck