Placement of Continuous Media in Wireless Peer-to-Peer Networks - PowerPoint PPT Presentation

About This Presentation
Title:

Placement of Continuous Media in Wireless Peer-to-Peer Networks

Description:

Placement of Continuous Media in Wireless Peer-to-Peer Networks. Shahram Ghandeharizadeh, Bhaskar Krishnamachari, Shanshan Song, IEEE Transactions on Multimedia, vol. 6 ... – PowerPoint PPT presentation


Transcript and Presenter's Notes

Title: Placement of Continuous Media in Wireless Peer-to-Peer Networks


1
Placement of Continuous Media in
Wireless Peer-to-Peer Networks
  • Shahram Ghandeharizadeh, Bhaskar Krishnamachari,
    Shanshan Song, IEEE Transactions on Multimedia,
    vol. 6, no. 2, April 2004. Presentation by Tony
    Sung, MC Lab, IE CUHK, 10th November 2003

2
Outline
  • The H2O Framework
  • A Novel Data Placement and Replication Strategy
  • Modeling (Topology) and Performance Analysis
  • Two Distributed Implementations
  • Conclusion, Discussion and Future Work

3
The H2O Framework
  • H2O: Home-to-Home Online
  • A number of devices connected through a wireless
    channel.
  • Complements existing wired infrastructure such as
    the Internet and provides data services to
    individual households.
  • Implementing VoD over H2O
  • A household may store its personal video library
    on an H2O cloud.
  • Each device might act in 3 possible roles:
  • producer of data
  • an active client that is displaying data
  • a router that delivers data from a producer to a
    consumer.

4
The H2O Framework
  • Media Retrieval
  • The 1st block of a clip must make multiple hops
    to arrive.
  • A portion of the video must be prefetched before
    playback starts to compensate for network
    bandwidth fluctuations.
  • How to Minimize the Startup Latency?

5
The H2O Framework
  • By Caching
  • One Extreme: Full Replication
  • Startup latency is minimized.
  • Even if bandwidth is not a limiting factor, the
    storage requirement is tremendous.
  • A Novel Approach: Stripe the video clip into
    blocks, and replicate blocks that have a later
    playback time less often in the system.
  • Startup latency is still minimized.

6
A Novel Data Placement Replication Strategy
  • Parameters
  • Video clip X is divided into z equal sized blocks
    bi of size Sb
  • Playback duration of a block: D = Sb / BDisplay
  • Playback time of bi: (i - 1)D
  • Let h be the time to transmit a block across one
    hop
  • bi can be placed at most Hi hops away under the
    constraint
  • Hi = ⌊((i - 1)D) / h⌋
  • Assumptions
  • CBR media, h is a fixed constant, BLink > BDisplay
  • b1 should be placed on all nodes in the network.
  • For bi with 1 < i ≤ z, it can be placed less
    often to save storage.
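The delay-tolerance constraint above can be sketched in code. This is an illustrative reading (with a floor to keep hop counts integral), using the example parameters that appear in the worst-case analysis later in the talk (Sb = 1 MB, BDisplay = 4 Mbps, h = 0.5 s):

```python
import math

def delay_tolerance(i, s_b_bits, b_display_bps, h_seconds):
    """Hop bound H_i for block b_i: the block may be stored up to
    H_i hops away and still arrive before its playback time (i - 1) * D."""
    d = s_b_bits / b_display_bps          # playback duration D of one block
    return math.floor((i - 1) * d / h_seconds)

# Example: 1 MB blocks (taken as 8e6 bits), 4 Mbps display rate, 0.5 s per hop
H = [delay_tolerance(i, 8e6, 4e6, 0.5) for i in range(1, 6)]
print(H)  # [0, 4, 8, 12, 16] -- H_1 = 0, so b_1 must be stored on every node
```

With these numbers D = 2 s, so each later block buys 4 extra hops of slack.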

7
A Novel Data Placement Replication Strategy
  • Core Replication and Placement Strategy
  • Divide the clip into z equal sized blocks bi,
    each Sb in size.
  • Place b1 on all nodes in the network.
  • For each bi with 1 < i ≤ z, compute its delay
    tolerance Hi.
  • Based on Hi, compute ri, the total number of
    replicas required for block bi. Note that ri and
    its computation are topology dependent, and ri
    decreases monotonically with i until it reaches 1.
  • Construct ri replicas of block bi and place each
    copy of the block in the network while ensuring
    that for all nodes there exists at least one copy
    of the block bi that is no more than Hi hops away.
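The coverage requirement in the last step can be illustrated with a greedy placement on a line of nodes. This is a sketch of the coverage property only, not the paper's placement procedure:

```python
def place_replicas(num_nodes, h_i):
    """Greedy placement on a line of nodes 0..num_nodes-1: pick replica
    positions so every node is within h_i hops of at least one copy."""
    if h_i == 0:
        return list(range(num_nodes))          # b_1: every node stores it
    positions, first_uncovered = [], 0
    while first_uncovered < num_nodes:
        # furthest position that still covers the first uncovered node
        p = min(first_uncovered + h_i, num_nodes - 1)
        positions.append(p)
        first_uncovered = p + h_i + 1
    return positions

replicas = place_replicas(10, 2)
print(replicas)  # [2, 7]: two copies cover all 10 nodes within 2 hops
assert all(min(abs(n - p) for p in replicas) <= 2 for n in range(10))
```

On a line, each copy covers at most 2·Hi + 1 nodes, so ⌈N / (2·Hi + 1)⌉ copies suffice.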

8
Modeling and Performance Analysis
  • Analysis of ri for 3 different topologies
  • Worst-Case Linear Topology
  • Grid Topology
  • Average Case Graph Topology
  • Performance is measured in percentage savings
    over full replication.

9
Modeling and Performance Analysis
  • Worst-Case Linear Topology
  • N devices organized in a linear fashion.
  • In the worst-case scenario, bi must be replicated
    ri = N - Hi times and takes N - ri - 1 hops to
    reach the destination.
  • If ri is non-positive, it is reset to 1. This is
    equivalent to stopping replication of those blocks
    whose index exceeds Ur = ((N - 1)h)/D + 1.
  • Giving total storage as ST = Sb × Σ ri (i = 1 to z)
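A minimal sketch of the worst-case linear analysis, assuming Hi = ⌊((i - 1)D)/h⌋ and the clamp of ri to 1:

```python
def linear_replicas(n, i, d, h):
    """Worst-case linear topology: r_i = N - H_i, reset to 1 if non-positive."""
    h_i = int((i - 1) * d // h)            # delay tolerance H_i
    return max(n - h_i, 1)

def total_storage(n, z, s_b, d, h):
    """Total storage S_T = S_b * sum of r_i over all z blocks."""
    return s_b * sum(linear_replicas(n, i, d, h) for i in range(1, z + 1))

# N = 1000, h = 0.5 s, D = 2 s (the slide's parameters)
print(linear_replicas(1000, 1, 2, 0.5))    # 1000: b_1 on every node
print(linear_replicas(1000, 251, 2, 0.5))  # 1: replication stops past U_r
```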
10
Modeling and Performance Analysis
  • Worst-Case Linear Topology
  • N = 1000
  • h = 0.5 s
  • BDisplay = 4 Mbps
  • Sb = 1 MB
  • D = 2 s

[Figure: Worst-case storage requirement]
11
Modeling and Performance Analysis
  • Grid Topology
  • N devices organized in a square grid of fixed
    area; each node neighbors only the 4 nodes in the
    cardinal directions.
  • No. of replicas ri
  • Expected total storage
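The slide's formula images are not in the transcript. As a hedged reconstruction, one copy of bi can serve the "diamond" of grid nodes within Manhattan distance Hi, which contains 2·Hi·(Hi + 1) + 1 nodes, so roughly N divided by that coverage gives the replica count:

```python
def grid_replicas(n, h_i):
    """Replica count for block b_i on a 4-neighbour grid of n nodes.
    Assumption (not the slide's exact formula): one copy covers the
    diamond of nodes within Manhattan distance h_i, i.e.
    2*h_i*(h_i + 1) + 1 nodes."""
    coverage = 2 * h_i * (h_i + 1) + 1
    return max(-(-n // coverage), 1)       # ceiling division, at least 1 copy

print(grid_replicas(100, 0))  # 100: b_1 on every node
print(grid_replicas(100, 1))  # 20: each copy covers 5 nodes
```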

12
Modeling and Performance Analysis
  • Grid Topology

[Figure: Effect of block size, shown for a 2-min clip and a 2-h clip. Percentage savings depend on h only, independent of N and SC. Decreasing the block size decreases storage but increases complexity.]
13
Modeling and Performance Analysis
  • Average Case Graph Topology
  • N devices scattered randomly in a fixed area A,
    with radio range R for each node.
  • For any given node, the expected number of nodes
    within Hi hops is bounded above and below by
    expressions involving ρ, where ρ is a
    density-dependent correction factor between 0 and
    1 (equal to 1 when the network is dense and nodes
    are distributed evenly).
  • Using the upper bound, the number of replicas ri
    and the expected total storage follow.
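A sketch of how the upper bound could yield ri: if one replica is expected to reach the ρ·N·π·(Hi·R)²/A nodes inside its Hi-hop radius, then N divided by that coverage bounds the replica count. The slide's exact expressions are images missing from the transcript, so this is an assumption-labelled reconstruction:

```python
import math

def graph_replicas(n, area, radio_range, h_i, rho=1.0):
    """Replica count for the average-case graph topology.
    Assumption: a replica reaches rho * n * pi * (h_i * R)^2 / area
    nodes (the upper bound; rho in (0, 1] is the density correction)."""
    if h_i == 0:
        return n                           # b_1 lives on every node
    covered = rho * n * math.pi * (h_i * radio_range) ** 2 / area
    return max(math.ceil(n / covered), 1)

# 250 devices in a 1000 m x 1000 m area, 100 m radio range
print(graph_replicas(250, 1e6, 100, 1))    # 32 copies for H_i = 1
```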
14
Modeling and Performance Analysis
  • Comparison

250 devices, 97% savings
more than 80% savings
15
Distributed Implementations
  • TIMER and ZONE
  • Both control placement of ri copies of each bi
    with the following objective:
  • Each node in the network is within Hi hops of at
    least one copy of bi
  • General Framework
  • The publishing node H2Op computes block size Sb,
    no. of blocks z, and required hop-bound Hi, using
    the previous expressions.
  • H2Op floods the network with a message containing
    this information, and each recipient H2Oj
    computes a binary array Aj that signifies which
    blocks to host. Each recipient then retrieves
    from H2Op a copy of those blocks.
  • TIMER and ZONE differ in how Aj is computed.

16
Distributed Implementations
  • TIMER
  • A distributed timer-suppress algorithm
  • When H2Oj receives the flooded query message, it
    performs z rounds of elections, one for each
    block
  • In each round, H2Oj determines whether to maintain
    a copy of block bi
  • Each node picks a random timer value from 1 to M
    and starts counting down.
  • When its timer reaches 0, and the node is not
    already suppressed, it
  • elects itself to store a copy of block bi
  • sends a suppress message to all nodes within Hi
    hops
  • At the end of a round, every node is either
    elected or suppressed, and every node is
    guaranteed to be within Hi hops of an elected node
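The round described above can be simulated as follows; the precomputed neighbor sets and the centralised firing order stand in for real message passing:

```python
import random

def timer_round(within_h, num_nodes, max_timer=100, seed=0):
    """One TIMER election round for a single block b_i.
    within_h[n] is the set of nodes within H_i hops of n (including n).
    Nodes fire in order of a random timer in 1..max_timer; a node that
    fires unsuppressed elects itself and suppresses its neighbourhood."""
    rng = random.Random(seed)
    timers = {n: rng.randint(1, max_timer) for n in range(num_nodes)}
    elected, suppressed = set(), set()
    for n in sorted(range(num_nodes), key=lambda k: timers[k]):
        if n not in suppressed:
            elected.add(n)
            suppressed |= within_h[n]      # suppress message to H_i-neighbours
    return elected

# 6 nodes in a line, H_i = 1
line = {n: {k for k in range(6) if abs(k - n) <= 1} for n in range(6)}
winners = timer_round(line, 6)
# every node ends the round within H_i hops of an elected node
assert all(any(w in line[n] for w in winners) for n in range(6))
```

Because an unsuppressed firing node always elects itself, the coverage guarantee holds for any timer draw.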

17
Distributed Implementations
  • ZONE
  • Assumes nodes have geopositioning info
  • Assume all nodes fit within an S x S area
  • For each bi, divide the area into si x si
    squares such that each square fits within a circle
    of radius HiR, where R is the radio range
  • It can be shown that si = αHiR√2, where α ≤ 1 is a
    correction factor that depends on node density.
  • A copy of bi is placed near the center of each
    square
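A small sketch of the zone geometry (the √2 comes from inscribing an si × si square in a circle of radius Hi·R, whose diagonal is 2·Hi·R):

```python
import math

def zone_side(h_i, radio_range, alpha=1.0):
    """Zone side s_i = alpha * H_i * R * sqrt(2): the largest square that
    fits in a circle of radius H_i * R, shrunk by density factor alpha <= 1."""
    return alpha * h_i * radio_range * math.sqrt(2)

def num_zones(s, h_i, radio_range, alpha=1.0):
    """How many s_i x s_i squares (one replica of b_i each) tile an S x S area."""
    per_axis = math.ceil(s / zone_side(h_i, radio_range, alpha))
    return per_axis * per_axis

# 1000 m x 1000 m area, R = 100 m, H_i = 2 -> zones of side ~283 m
print(num_zones(1000, 2, 100))  # 16 zones, hence 16 replicas of b_i
```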

18
Distributed Implementations
  • ZONE
  • z rounds of elections, one for each block
  • All nodes determine which zone they belong to,
    based on Hi
  • For each zone
  • Elect the node that is closest to the zone center
    by a distributed leader election protocol (such
    as FloodMax)
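The election outcome can be sketched centrally; FloodMax itself floods each candidate's value through the zone, but it converges on the same winner. The node ids and coordinates below are hypothetical:

```python
def elect_zone_leader(nodes, center):
    """Pick the node closest to the zone centre (ties broken by node id).
    A FloodMax-style protocol reaches the same result by flooding each
    node's (distance, id) pair and keeping the minimum seen so far."""
    cx, cy = center
    def key(item):
        node_id, (x, y) = item
        return ((x - cx) ** 2 + (y - cy) ** 2, node_id)
    return min(nodes.items(), key=key)[0]

# hypothetical node positions within one zone
nodes = {1: (0.0, 0.0), 2: (4.0, 4.0), 3: (6.0, 5.0)}
print(elect_zone_leader(nodes, (5.0, 5.0)))  # 3: squared distance 1 beats 2 and 50
```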

19
Distributed Implementations
  • Comparison

Both distributed algorithms require a few more
replicas per block than predicted analytically (a
border effect). Storage, however, is still several
orders of magnitude lower than with full
replication.
20
Distributed Implementations
  • Comparison

Block distribution across H2O devices is uniform
with TIMER. ZONE favors placing blocks with a
large Hi toward the center of a zone, but results
are not provided.
21
Conclusion
  • Explored a novel architecture in which
    collaborating H2O devices provide VoD with
    minimal startup latency.
  • A replication technique that replicates the first
    few blocks of a clip more frequently because they
    are needed more urgently.
  • Quantified the impact of different H2O topologies
    on the storage space requirement.
  • Proposed two distributed implementations of the
    placement and replication technique.

22
Discussion and Future Work
  • Assumption: fixed h
  • In wireless ad hoc networks, h is a function of
    the number of transmitting devices
  • Admission control and transmission scheduling
    should be added to address the variability of h
  • Could be extended to adjust data placement when a
    user requests a clip
  • Enable the H2O cloud to respond to user actions,
    such as removal of a device
  • Consider bandwidth constraints