Chunkyspread: Multi-tree Unstructured Peer to Peer Multicast
1
Chunkyspread: Multi-tree Unstructured Peer to Peer Multicast
  • Vidhyashankar Venkataraman (Vidhya)
  • Paul Francis
  • (Cornell University)
  • John Calandrino
  • (University of North Carolina)

2
Introduction
  • Increased interest in P2P live streaming in
    recent past
  • Existing multicast approaches
  • Swarming style (tree-less)
  • Tree-based

3
Swarming-based protocols
  • Data-driven, tree-less multicast (swarming) is gaining popularity
  • Neighbors send notifications about data arrivals
  • Nodes pull data from neighbors
  • E.g., CoolStreaming, Chainsaw
  • (+) Simple, unstructured
  • (−) Latency-overhead tradeoff
  • Not yet known whether these protocols can achieve good
    control over heterogeneity (upload volume)

4
Tree-based solutions
  • (+) Low latency and low overhead
  • (−) Tree construction/repair considered complex
  • E.g., SplitStream (DHT, Pushdown, Anycast) [SRao]
  • (−) Tree repair takes time
  • Requires buffering, resulting in delays
  • Contribution: a multi-tree protocol that
  • Is simple and unstructured
  • Gives fine-grained control over load
  • Has low latencies and low overhead, and is robust to
    failures

[SRao] Sanjay Rao et al. The Impact of
Heterogeneous Bandwidth Constraints on DHT-Based
Multicast Protocols. IPTPS, February 2005.
5
Chunkyspread Basic Idea
  • Build a heterogeneity-aware unstructured neighbor
    graph
  • Tree building:
  • Sliced data stream, one tree per slice (as in
    SplitStream)
  • Simple and fast loop avoidance and detection
  • Parent/child relationships locally negotiated to
    optimize criteria of interest:
  • Load, latency, tit-for-tat, node-disjointness,
    etc.

6
Heterogeneity-aware neighbor graph
  • Neighbor graph built with simple random walks
  • Using Swaplinks, developed at Cornell [SWAP]
  • A node's degree in the graph is proportional to its
    desired transmit load
  • This is the notion of heterogeneity-awareness:
  • Higher-capacity nodes end up with more children
    in the multicast trees

[SWAP] V. Vishnumurthy and P. Francis. On
Heterogeneous Overlay Construction and Random
Node Selection in Unstructured P2P Networks. To
appear in INFOCOM, Barcelona, 2006.
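The degree-proportional graph described above can be sketched in Python. A configuration-model stub pairing stands in for Swaplinks' random-walk construction (the actual mechanism in the paper), and `degree_per_unit` is an assumed tuning knob:

```python
import random

def build_hetero_graph(capacities, degree_per_unit=2, seed=42):
    """Pair up edge 'stubs' so each node's degree is roughly
    proportional to its desired transmit load. A configuration-model
    sketch standing in for Swaplinks' random walks."""
    rng = random.Random(seed)
    stubs = []
    for node, cap in capacities.items():
        stubs.extend([node] * max(1, degree_per_unit * cap))
    rng.shuffle(stubs)
    graph = {node: set() for node in capacities}
    for a, b in zip(stubs[::2], stubs[1::2]):
        if a != b:  # drop self-loops; sets drop duplicate edges
            graph[a].add(b)
            graph[b].add(a)
    return graph

# higher-capacity nodes tend to get more neighbors, and hence
# more potential children in the multicast trees
g = build_hetero_graph({"n1": 1, "n2": 2, "n3": 4, "n4": 1})
```

The pairing preserves the key invariant: expected degree scales linearly with capacity, so tree fan-out can follow upload capacity.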
7
Sliced Data Stream
[Diagram: the source selects random neighbors using Swaplinks and sends one slice of the stream to each. Each selected neighbor acts as the slice source for one tree and multicasts that slice (Slice Sources 1-3 rooting the trees for Slices 1-3).]
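A minimal sketch of the slicing step above, assuming round-robin packet striping and using `random.choice` as a stand-in for Swaplinks-based neighbor selection:

```python
import random

def split_stream(packets, num_slices):
    """Round-robin the packet stream into num_slices slices;
    each slice is multicast over its own tree."""
    slices = {s: [] for s in range(num_slices)}
    for i, pkt in enumerate(packets):
        slices[i % num_slices].append(pkt)
    return slices

def assign_slice_sources(neighbors, num_slices, seed=7):
    """The source hands slice s to one randomly selected neighbor,
    which then acts as that slice's source (the root of its tree).
    random.choice stands in for Swaplinks' random selection."""
    rng = random.Random(seed)
    return {s: rng.choice(neighbors) for s in range(num_slices)}

print(split_stream(list(range(8)), 4))
# {0: [0, 4], 1: [1, 5], 2: [2, 6], 3: [3, 7]}
```

Round-robin striping is one simple way to slice; the deck does not specify the striping policy, only that each slice gets its own tree.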
8
Building trees
  • Initialized by flooding a control message
  • Nodes pick parents subject to capacity (load)
    constraints
  • This produces loop-free but poorly shaped trees
  • Trees are subsequently fine-tuned according to
    criteria of interest

9
Simple and fast loop avoidance/detection
  • Proposed by Whitaker and Wetherall [ICARUS]
  • All data packets carry a Bloom filter
  • Each node adds its mask to the filter
  • Small probability of false positives
  • Avoidance: nodes advertise per-slice Bloom filters to
    their neighbors
  • Detection: by the first packet that traverses the
    loop

[ICARUS] A. Whitaker and D. Wetherall. Forwarding
without Loops in Icarus. In OPENARCH, 2002.
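The Bloom-filter mechanism above can be sketched as follows; the filter size, hash count, and SHA-256-based node masks are illustrative choices, not Icarus's actual parameters:

```python
import hashlib

FILTER_BITS = 256  # per-packet Bloom filter size (assumed)
NUM_HASHES = 3     # bits per node mask (assumed)

def node_bits(node_id):
    """Bit positions forming this node's mask in the Bloom filter."""
    h = hashlib.sha256(node_id.encode()).digest()
    return [int.from_bytes(h[2 * i:2 * i + 2], "big") % FILTER_BITS
            for i in range(NUM_HASHES)]

def add_to_filter(bloom, node_id):
    """A forwarding node adds its mask to the packet's filter."""
    for b in node_bits(node_id):
        bloom |= 1 << b
    return bloom

def maybe_seen(bloom, node_id):
    """All of the node's bits set: it already forwarded this packet
    (a loop), or, with small probability, a false positive."""
    return all(bloom & (1 << b) for b in node_bits(node_id))

def ok_as_parent(advertised_bloom, my_id):
    """Loop avoidance: reject a candidate parent whose advertised
    per-slice filter already contains this node."""
    return not maybe_seen(advertised_bloom, my_id)

# Detection: the first packet that re-enters a node reveals the loop.
bloom = 0
for hop in ["A", "B", "C"]:
    bloom = add_to_filter(bloom, hop)
print(maybe_seen(bloom, "A"))  # True
```

This captures both uses from the slide: avoidance via the advertised per-slice filters, and detection by the first packet that traverses the loop.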
10
Parent/child selection based on load and latency
[Diagram: a node's load axis runs from 0 up to ML (Maximum Load); the region above ML must never be entered. Around TL (Target Load) lies a band from TL−δ to TL+δ: above TL+δ (higher load, more children) the node sheds children; within the band it switches children to improve latency; below TL−δ (lower load, fewer children) it adds children.]
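The load regions in the diagram translate directly into a per-node decision rule. A sketch, with δ passed as `delta` and all names illustrative:

```python
def load_action(load, target_load, max_load, delta):
    """Map a node's current load (children served) to the slide's
    load regions. delta widens the band in which latency, rather
    than load, drives child switches."""
    assert load <= max_load, "a node must never exceed ML"
    if load > target_load + delta:
        return "shed children"
    if load < target_load - delta:
        return "add children"
    return "switch children to improve latency"

print(load_action(10, target_load=8, max_load=12, delta=1))
# shed children
```

With δ = 0 the node purely tracks its target load; a larger δ trades load precision for latency optimization, matching the two evaluation cases later in the deck.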
11
Parent/Child Switch
[Diagram: an overloaded parent for slice k (load > satisfactory threshold) hands a child off to a potential parent (load < satisfactory threshold):]
  1) The child sends its parent information about potential parents A and B
  2) The parent gets such info from all of its children
  3) The parent chooses A and asks the child to switch
  4) The child requests A as its new parent
  5) A says yes if it is still underloaded
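The five-step negotiation can be sketched as a single function; the dict-based node shape and the least-loaded tie-break are assumptions for illustration, not the paper's message format:

```python
def try_switch(parent, child_id, candidates, threshold):
    """Sketch of the switch negotiation. Steps 1-2: the child reports
    candidate parents it knows of; step 3: the overloaded parent picks
    the least-loaded one and asks the child to switch; steps 4-5: the
    candidate accepts only if it is still under the threshold."""
    if parent["load"] <= threshold:
        return False  # parent is not overloaded; nothing to do
    options = [c for c in candidates if c["load"] < threshold]
    if not options:
        return False
    target = min(options, key=lambda c: c["load"])
    # step 5: the target re-checks its load before accepting
    if target["load"] >= threshold:
        return False
    parent["children"].discard(child_id)
    parent["load"] -= 1
    target["children"].add(child_id)
    target["load"] += 1
    return True

parent = {"load": 5, "children": {"child7"}}
cand_a = {"load": 1, "children": set()}
print(try_switch(parent, "child7", [cand_a], threshold=3))  # True
```

The re-check in step 5 matters because the candidate's load may have changed between the child's report and the actual request, which is why the slide says A accepts only "if still underloaded".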
12
Chunkyspread Evaluation
  • Discrete event-based simulator implemented in C
  • Run over transit-stub topologies with 5K routers
  • Heterogeneity: stream split into 16 slices
  • TL uniformly distributed between 4 and 28 slices
  • ML = 1.5·TL (enough capacity in the network)
  • Two cases:
  • No latency improvement (δ = 0)
  • With latency improvement:
  • δ = (2/16)·TL (i.e., 12.5% of TL)

13
Control over load
Flash crowd scenario: 2.5K nodes join a 7.5K-node network at 100 joins/sec
[Plot: distribution of (Load − TL)/TL, a snapshot of the system after nodes have finished fine-tuning their trees.]
  • Nodes stay within 20% of TL even with latency reduction (δ = 12.5% of TL)
  • Peak of 40 control messages per node per second with latency reduction; median of 10 messages per node per second during the join period
  • Trees are optimized 95 s after all nodes join, with latency reduction
14
Latency Improvement
Flash crowd scenario
[Plot: latency CDFs with and without latency improvement. The maximum latency gives the buffer capacity needed in the absence of node failures.]
  • 90th-percentile network stretch of 9; small buffer requirement
15
Burst Failure Recovery Time
Failure burst: 1K nodes fail in a 10K-node network at the same time instant
[Plot: CDF of disconnect duration with latency reduction, shown at various redundancy levels (0, 1, and 3 redundant slices). FEC codes improve recovery time.]
  • Neighbor failure timeout set at 4 seconds
  • Recovery time within a few seconds; buffering effects dominate over effects of latency
16
Conclusion
  • Chunkyspread is a simple multi-tree multicast
    protocol
  • A design alternative to swarming-style protocols
  • Achieves fine-grained control over load with good
    latencies
  • Suited to non-interactive live streaming
    applications
  • Future work: apples-to-apples comparisons with
    swarming protocols and SplitStream