Title: Collaboration of Untrusting Peers with Changing Interests
1. Collaboration of Untrusting Peers with Changing Interests
- Baruch Awerbuch, Boaz Patt-Shamir, David Peleg, Mark Tuttle
- Review by Pinak Pujari
2. Introduction
- Reputation systems are an integral part of e-commerce applications.
- Marketplaces like eBay depend on reputation systems to improve customer confidence.
- More importantly, they limit the economic damage done by disreputable peers.
3. Introduction: eBay example
- In eBay, for instance, after every transaction the system invites each party to post its rating of the transaction on a public billboard that the system maintains.
- Consulting the billboard is a key step before making a transaction.
4. Introduction: Possibility of fraud?
- Scene 1: A group of sellers engages in phony transactions and rates these transactions highly, generating an appearance of reliability while ripping off other people.
- Scene 2: A single seller behaves responsibly long enough to entice an unsuspecting buyer into a single large transaction, and then vanishes.
- Reputation systems are valuable, but not infallible.
5. Model of the Reputation System
- n players (some honest, some dishonest).
- m objects (some good, some bad).
- A player probes an object to learn whether it is good or bad.
- The cost of a probe is 1 if the object is bad and 0 if the object is good.
- Goal: find a good object while incurring minimal cost.
6. Model of the Reputation System
- Players collaborate by posting the results of their probes on a public billboard, and by consulting the board when choosing an object to probe. (A code sketch of this setup follows below.)
- Assume that entries are write-once, and that the billboard is reliable.
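- To make the setup concrete, below is a minimal Python sketch of the probe-and-billboard model; all names (Billboard, probe, is_good) are illustrative assumptions, not taken from the paper.

import random

class Billboard:
    """Write-once, reliable public billboard mapping a player to the object it reports as good."""
    def __init__(self):
        self._posts = {}

    def post(self, player, obj):
        # Write-once: only the first post by a player is kept.
        self._posts.setdefault(player, obj)

    def recommendation(self, player):
        # Returns None if the player has posted nothing.
        return self._posts.get(player)

def probe(is_good, obj):
    """Probe an object: cost 1 if it is bad, 0 if it is good."""
    good = is_good[obj]
    return good, (0 if good else 1)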
7. So what is the problem?
- Problem definition: some of the players are dishonest, and can behave in an arbitrary fashion, including colluding and posting false reports on the billboard to entice honest players to probe bad objects.
8. Model of the Reputation System (contd.)
- The execution of the system proceeds as follows:
- A player reads the billboard, optionally probes an object, and writes to the billboard. (A randomized protocol chooses the object to probe based on the contents of the billboard.)
- Honest players are required to follow the protocol.
- Dishonest players, however, are allowed to behave in an arbitrary (or Byzantine) fashion, including posting incorrect information on the billboard.
9. Strategy
- Exploration rule: a player chooses an object uniformly at random and probes it.
- This might be a good idea if there are many good objects, or if there are many dishonest players posting inaccurate reports to the billboard.
- Exploitation rule: a player chooses another player at random and probes whichever object that player recommends (if any), thereby exploiting, or benefiting from, the effort of the other player.
- This might be a good idea if most of the players posting recommendations to the billboard are honest.
10. The Balanced Rule
- In most cases, a player will not know how many honest players or good objects are in the system, so the best option is to balance the two approaches.
- Flip a coin. If the result is heads, follow the Exploration rule; if the result is tails, follow the Exploitation rule. (A code sketch follows below.)
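- A minimal sketch of the Balanced rule, reusing the hypothetical Billboard and probe helpers above; the fallback to a random object when the chosen player has posted no recommendation is an assumption added for completeness.

def balanced_rule(player, players, objects, billboard, is_good, rng=random):
    """One step of the Balanced rule: explore or exploit with probability 1/2 each."""
    if rng.random() < 0.5:
        # Exploration rule: probe an object chosen uniformly at random.
        obj = rng.choice(objects)
    else:
        # Exploitation rule: probe whatever a random player recommends.
        other = rng.choice(players)
        obj = billboard.recommendation(other)
        if obj is None:
            obj = rng.choice(objects)   # assumed fallback: no recommendation posted
    good, cost = probe(is_good, obj)
    if good:
        billboard.post(player, obj)     # share the find on the billboard
    return obj, good, cost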
11. Models with Restricted Access
- Dynamic object model: objects can enter and leave the system over time.
- Partial access model: each player has access to a different subset of the objects.
12. Model of the Reputation System (contd.)
- The execution of an algorithm is uniquely determined by the algorithm, the coins flipped by the players while executing the protocol, and three external entities:
- The player schedule, which determines the order in which players take steps.
- The dishonest players.
- The adversary, which determines the behavior of the dishonest players.
13. Model of the Reputation System (contd.)
- What is the adversary?
- The adversary is a function from a sequence of coin flips to a sequence of objects for each dishonest player to probe, together with the results for the player to post on the billboard.
- The adversary is quite powerful, and may behave in an adaptive, Byzantine fashion.
14. Model of the Reputation System (contd.)
- What is an operating environment?
- An operating environment is a triple consisting of a player schedule, a set of dishonest players, and an adversary.
- The purpose of the operating environment is to factor out all of the nondeterministic choices made during an execution, leaving only the probabilistic choices to consider.
15. Models with Restricted Access
- Dynamic object model: objects can enter and leave the system over time.
- Partial access model: each player has access to a different subset of the objects.
16. The Dynamic Object Model
- Operating environment:
- The player schedule.
- The dishonest players.
- The adversary.
- The object schedule, which determines when objects enter and leave the system, and their values.
- m: an upper bound on the number of objects concurrently present in the system.
- β: a lower bound on the fraction of good objects at any time, for some 0 < β ≤ 1.
17. The Dynamic Object Model: Algorithm
- The algorithm is an immediate application of the Balanced rule.
- Algorithm DynAlg: if the player has already found a good object, probe it again; if not, apply the Balanced rule. (See the sketch below.)
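- A minimal sketch of DynAlg on top of the balanced_rule sketch above; the last_good dictionary that remembers each player's good object is an implementation assumption.

def dyn_alg(player, players, objects, billboard, is_good, last_good, rng=random):
    """One DynAlg step: re-probe a known good object, otherwise apply the Balanced rule."""
    obj = last_good.get(player)
    if obj is not None and obj in objects:   # remembered object is still in the system
        good, cost = probe(is_good, obj)
        if good:
            return obj, True, cost           # still good (cost 0): keep probing it
        del last_good[player]                # it went bad: forget it, retry next step
        return obj, False, cost
    obj, good, cost = balanced_rule(player, players, objects, billboard, is_good, rng)
    if good:
        last_good[player] = obj
    return obj, good, cost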
18. Analysis of Algorithm DynAlg
- Given a probe sequence s, switches(s) denotes the number of maximal blocks of consecutive probes to the same object in s.
- Given an operating environment E, let sE(DynAlg) be the random variable whose value is the probe sequence of the honest players generated by DynAlg under E.
- cost(s) denotes the cost of a probe sequence s; the benchmark is the cost of an optimal probe sequence.
19. Analysis of Algorithm DynAlg
- Theorem: for every operating environment E and every probe sequence s for the honest players, the expected cost of sE(DynAlg) is at most
  cost(s) + switches(s) · (2 − β) · (m + n ln n).
20. Proof
- Partition the sequence s into subsequences s = s1 s2 … sK such that for all 1 ≤ i < K:
- all probes in si are to the same object, and
- si and si+1 probe different objects.
- Similarly, partition the algorithm's probe sequence s′ = sE(DynAlg) into subsequences s′ = s′1 s′2 … s′K such that |s′i| = |si| for all 1 ≤ i ≤ K.
21. Proof
- Consider the difference cost(s′i) − cost(si).
- If the probes in si are to a bad object, then trivially cost(s′i) ≤ cost(si), since every probe in si costs 1 and |s′i| = |si|.
- To finish the proof, we show that if all probes in si are to a good object, then the expected cost of s′i is at most (2 − β) · (m + n ln n).
22. Proof
- An object is i-persistent if it is good and if it is present in the system throughout the duration of s′i.
- A probe is i-persistent if it probes an i-persistent object.
- Partition the sequence s′i into n + 1 subsequences s′i = Di0 Di1 Di2 … Din, where Dik consists of all probes in s′i that are preceded by i-persistent probes of exactly k distinct honest players.
23. Proof
- Clearly, cost(s′i) = Σ_{k=0..n} cost(Dik).
- The expected cost of a single fresh probe in Dik is at most 1 − β/2.
- Each fresh probe in Dik finds a persistent object with some probability pk.
- The probability that Dik contains exactly l fresh probes is (1 − pk)^(l−1) · pk.
- Therefore, the expected cost of Dik is at most (1 − β/2)/pk, as derived below.
24. Proof
- For k = 0, p0 ≥ 1/(2m).
- For k > 0, pk ≥ k/(2n).
- So, the expected cost of s′i is at most (2 − β) · (m + n Hn) ≈ (2 − β) · (m + n ln n), where Hn is the n-th harmonic number; the sum is worked out below.
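- Summing the per-subsequence bounds with these values of pk gives the reconstructed expression (the approximation uses Hn ≤ 1 + ln n):

\mathbb{E}[\mathrm{cost}(s'_i)]
  \le \sum_{k=0}^{n} \frac{1 - \beta/2}{p_k}
  \le \Bigl(1 - \tfrac{\beta}{2}\Bigr) \Bigl(2m + \sum_{k=1}^{n} \frac{2n}{k}\Bigr)
  = (2 - \beta)\,(m + n H_n) \approx (2 - \beta)\,(m + n \ln n)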
25. The Partial Access Model
- Here, each player is able to access only a subset of the objects.
- The main difficulty with this model, in contrast to the full access model (where each player can access any object), is that it is hard to measure the amount of collaboration a player can expect from other players in searching for a good object.
- The way to overcome this difficulty is to concentrate on the amount of collective work done by subsets of players.
26. The Partial Access Model
- Notation:
- Model the partial access to the objects with a bipartite graph G = (P, O, E):
- P is the set of players,
- O is the set of objects,
- and a player j can access an object i only if (j, i) belongs to E.
- For each player j, let obj(j) denote the set of objects accessible to j, and let deg(j) = |obj(j)|.
- For each honest player j, let best(j) denote the set of good objects accessible to j.
- Let N(j) be the set of all players (honest and dishonest) that are at distance 2 from a given player j, i.e., N(j) = { k ≠ j : obj(j) ∩ obj(k) ≠ ∅ }. (See the code sketch below.)
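- These definitions translate directly into code; a small sketch (names assumed, with G given as player set P, object set O, and edge set E):

def neighborhoods(P, O, E):
    """Compute obj(j) and N(j) for each player j in the bipartite graph G = (P, O, E)."""
    obj = {j: {i for (jj, i) in E if jj == j} for j in P}
    # N(j): players at distance 2 from j, i.e., sharing at least one accessible object.
    N = {j: {k for k in P if k != j and obj[j] & obj[k]} for j in P}
    return obj, N   # deg(j) is len(obj[j])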
27. The Partial Access Model: Algorithm
- The algorithm is the same as DynAlg from the dynamic model, except that the Balanced rule is adapted to the restricted access model.
- In the new rule, a player j flips a coin. If the result is heads, it probes an object selected uniformly at random from obj(j). (Exploration rule)
- If the result is tails, it selects a player k uniformly at random from N(j) and probes the object k recommends, if any; otherwise it probes an object selected uniformly at random from obj(j). (Exploitation rule)
- A code sketch of this rule follows below.
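- A sketch of the adapted rule, reusing the probe/Billboard helpers and the obj/N maps from the neighborhoods sketch; whether a recommendation outside obj(j) can be probed is not specified here, so this sketch assumes it cannot and falls back to exploration.

def restricted_balanced_rule(j, obj, N, billboard, is_good, rng=random):
    """Balanced rule under partial access: j explores within obj[j], and exploits
    recommendations only from players in N[j]."""
    target = None
    if rng.random() >= 0.5 and N[j]:
        k = rng.choice(sorted(N[j]))              # tails: pick a neighbor uniformly
        rec = billboard.recommendation(k)
        if rec is not None and rec in obj[j]:     # assumed: usable only if accessible
            target = rec
    if target is None:
        target = rng.choice(sorted(obj[j]))       # heads, or fallback: explore
    good, cost = probe(is_good, target)
    if good:
        billboard.post(j, target)
    return target, good, cost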
28. The Partial Access Model
- Theorem:
- Let Y be any set of honest players.
- Denote by X(Y) the set of objects that are good for every player in Y, i.e., X(Y) = ∩_{j in Y} best(j).
- If X(Y) is nonempty, then the total work of the players in Y is at most the sum of two terms, an exploration term and a recommendation term, interpreted on the next slide.
29. Interpretation
- Consider any set Y of players with a common interest X(Y) (meaning any object in X(Y) would satisfy any player in Y).
- From the point of view of a player, its load is divided among the members of Y: the total work done by the group working together is roughly the same as the work of an individual working alone.
- The first term in the bound is just an upper bound on the expected amount of work until a player finds an object in X(Y).
- The second term is an upper bound on the total number of recommendations (times a logarithmic factor) a player has to go through.
- This is pleasing, because it indicates that the number of probes is nearly the best one can hope for.
30. Collaboration across groups without common interest
- Consider sets of players who do not share a common interest. Of course, one can partition them into subsets, SIGs (special interest groups), where for each SIG there is at least one object that will satisfy all its members.
- The theorem guarantees that each SIG is nearly optimal, in the sense that the total work done by a SIG is not much more than the total work that must be done even if SIG members had perfect coordination (thus disregarding dishonest players).
- However, the collection of SIGs may be suboptimal, due to overlaps in the neighborhood sets (which contribute to the second term of the upper bound).
31. Collaboration across groups without common interest
- Does there always exist a good partition of the players into SIGs, so that the overall work (summed over all SIGs) is close to optimal?
- The answer is negative in the general case.
- Even if each good object would satisfy many honest players, the total amount of work, over all players, can be close to the worst case (the sum of the work necessary if each player were working alone).
32. Simulation
- The graph suggests that the algorithm works fairly well for values of p = 0.1 through p = 0.7.
- It suggests that a little sampling is necessary, and a few recommendations can really help a lot. (A toy simulation sketch follows below.)
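- For readers who want to poke at this claim, below is a toy harness under heavy assumptions: p is taken to be the exploration probability of the coin flip, dishonest players each post one fixed bad object, players apply only the coin-flip rule (no DynAlg memory), and all helper names come from the hypothetical sketches above; any numbers it produces are illustrative, not the paper's data.

def simulate(n=50, m=50, beta=0.2, dishonest_frac=0.2, p=0.5, steps=2000, seed=0):
    """Total probe cost incurred by honest players using a p-biased coin-flip rule."""
    rng = random.Random(seed)
    objects = list(range(m))
    is_good = {o: o < int(beta * m) for o in objects}  # a beta fraction is good
    players = list(range(n))
    cutoff = int(dishonest_frac * n)
    honest = players[cutoff:]
    billboard = Billboard()
    for d in players[:cutoff]:
        billboard.post(d, m - 1)                       # liars recommend a bad object
    total_cost = 0
    for _ in range(steps):
        j = rng.choice(honest)
        if rng.random() < p:                           # explore with probability p
            obj = rng.choice(objects)
        else:                                          # exploit a random player
            rec = billboard.recommendation(rng.choice(players))
            obj = rec if rec is not None else rng.choice(objects)
        good, cost = probe(is_good, obj)
        total_cost += cost
        if good:
            billboard.post(j, obj)
    return total_cost

# Sweep p as in the plotted experiment (shape only; values are illustrative).
for p in (0.1, 0.3, 0.5, 0.7, 0.9):
    print(p, simulate(p=p))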
33. Conclusion
- This paper shows that, in spite of asynchronous behavior, different interests, changes over time, and the Byzantine behavior of an unknown subset of peers, the honest peers miraculously succeed in collaborating, in the sense that honest peers relatively rarely repeat the mistakes of other honest peers.
- One interesting feature of the method is that it mostly avoids the issue of discovering who the faulty peers are.
- Future extensions?
- What can be gained by trying to discover the faulty peers?
- Another open question is tightening the bounds for the partial access case.