Algorithmic Foundations of Ad-hoc Networks - PowerPoint PPT Presentation

About This Presentation
Title: Algorithmic Foundations of Ad-hoc Networks
Slides: 121
Provided by: aykut1
Transcript and Presenter's Notes

Title: Algorithmic Foundations of Ad-hoc Networks


1
Algorithmic Foundations of Ad-hoc Networks
  • Andrea W. Richa, Arizona State U.
  • Rajmohan Rajaraman, Northeastern U.
  • ETH Zurich, Summer Tutorial 2004

2
Thanks!
  • I would like to thank Roger Wattenhofer, ETH
    Zurich, and Rajmohan Rajaraman, Northeastern U.,
    for some of the slides / figures used in this
    presentation

3
What are ad-hoc networks?
  • Sometimes there is no infrastructure
  • remote areas, ad-hoc meetings, disaster areas
  • cost can also be an argument against an
    infrastructure
  • Wireless communication (why?)
  • Sometimes not every station can hear every other
    station
  • Data needs to be forwarded in a multihop manner

4
An ad-hoc network as a graph
  • A node is a mobile station
  • All nodes are equal (are they?)
  • Edge (u,v) in the graph iff node v can hear
    node u
  • These arcs can have weights that represent the
    signal strength
  • Directed vs. undirected graphs
  • Multi-hop

[Figure: a five-node ad-hoc network (N1, ..., N5) and its multi-hop graph representation]
5
An ad-hoc network as a graph
  • Close-by nodes have MAC issues such as
    hidden/exposed terminal problems
  • Optional: links are symmetric (undirected graph)
  • Optional: the graph is Euclidean, i.e., there is
    a link between two nodes iff the distance of the
    nodes is less than a certain distance threshold r

6
What is a Hop?
  • Broadcast within a certain range
  • Variable range depending on power control
    capabilities
  • Interference among contending transmissions
  • MAC layer contention resolution protocols, e.g.,
    IEEE 802.11, Bluetooth
  • Packet radio network model (PRN)
  • Model each hop as a broadcast hop and consider
    interference in analysis
  • Multihop network model
  • Assume an underlying MAC layer protocol
  • The network is a dynamic interconnection network
  • In practice, both views important

7
Mobile (and Multihop) Ad-Hoc Networks (MANET)
  • Nodes move
  • First and foremost issue: Routing
  • Mobility ⇒ dynamic network

8
Overview Part I
  • General overview
  • Models
  • Ad-hoc network models, mobility models
  • Routing paradigms
  • Some basic tricks to improve routing
  • Clustering
  • Dominating sets, connected dominating sets,
    clustering under mobility
  • Geometric Routing Algorithms

9
Overview Part II
  • MAC protocols
  • Power and Topology Control
  • connectivity, energy efficiency and interference
  • Algorithms for Sensor Networks
  • MAC protocols, synchronization, and query/stream
    processing

10
Literature
  • Wireless Networking work
  • - often heuristic in nature
  • - few provable bounds
  • - experimental evaluations in (realistic)
    settings
  • Distributed Computing work
  • - provable bounds
  • - often worst-case assumptions and general
    graphs
  • - often complicated algorithms
  • - assumptions not always applicable to
    wireless

11
Performance Measures
  • Time
  • Communication
  • Memory requirements
  • Adaptability
  • Energy consumption
  • Other QoS measures


path length (# of hops) ↔ number of messages (correlated)
12
What is different from wired networks?
  • Mobility: highly dynamic scenario
  • Distance no longer matters, number of hops does
  • Omnidirectional communication
  • Next generation unidirectional antennas?
  • Interference
  • Collisions
  • Energy considerations

13
Degrees of Mobility/Adaptability
  • Static
  • Limited mobility
  • a few nodes may fail, recover, or be moved
    (sensor networks)
  • tough example: throw a million nodes out of an
    airplane
  • Highly adaptive/mobile
  • tough example: a hundred airplanes/vehicles
    moving at high speed
  • impossible (?): a million mosquitoes with
    wireless links
  • Nomadic/viral model
  • disconnected network of highly mobile users
  • example: virus transmission in a population of
    bluetooth users

14
Unit-disk Graph Model
  • We are given a set V of nodes in the plane
    (points with coordinates).
  • The unit disk graph UDG(V) is defined as an
    undirected graph (with E being a set of
    undirected edges). There is an edge between two
    nodes u,v iff the Euclidean distance between u
    and v is at most 1.
  • Think of the unit distance as the maximum
    transmission range.
  • All nodes are assumed to have the same
    communication range
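The definition above translates directly into code. A minimal sketch (illustrative, not from the slides) using a brute-force pairwise distance check:

```python
import itertools
import math

def unit_disk_graph(points):
    """Build UDG(V): undirected edge {u, v} iff Euclidean distance <= 1."""
    edges = set()
    for (i, p), (j, q) in itertools.combinations(enumerate(points), 2):
        if math.dist(p, q) <= 1.0:
            edges.add((i, j))          # store each undirected edge once, i < j
    return edges

# Three nodes on a line: only adjacent pairs are within transmission range.
example = unit_disk_graph([(0, 0), (0.8, 0), (1.6, 0)])
```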

15
Fading Channel Model
  • Signal strength decreases quadratically with
    distance (assuming free space)
  • noise (i.e., signal being transmitted from
    other nearby processors) may reduce the strength
    of a signal
  • No sharp threshold for communication range (or
    interference range)
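As an illustrative aside (not on the slide), the quadratic free-space decay is the Friis transmission equation, where P_t is the transmit power, G_t and G_r the antenna gains, and λ the wavelength:

```latex
P_r(d) \;=\; P_t\, G_t G_r \left(\frac{\lambda}{4\pi d}\right)^{2} \;\propto\; \frac{P_t}{d^{2}}
```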

16
Variations of the Unit-disk Model
  • Quasi-unit disk graph model
  • Every node can receive a transmission from
    another node within distance a·r (for a parameter
    0 < a ≤ 1)
  • No node can receive a message from another node
    at distance greater than r
  • A node may or may not receive a message from
    another node at distance in (a·r, r]

[Figure: quasi-unit disk with inner radius a·r (guaranteed reception) and outer radius r]
17
Interference Models
  • Basic Model
  • Each node v has a communication range r and an
    interference range which is given by (1+c)r,
    where c is a positive constant
  • If node u is within distance r from node v, then
    v can receive a message from u
  • If node u is within distance d in (r, (1+c)r] from
    v, then if u sends a message it can interfere
    with other messages being received at node v, but
    v cannot hear the message from node u

[Figure: communication range r and interference range (1+c)r]
18
Mobility Models
  • Random Walk (including its many derivatives) A
    simple mobility model based on random directions
    and speeds.
  • Random Waypoint A model that includes pause
    times between changes in destination and speed.
  • Probabilistic Version of the Random Walk A model
    that utilizes a set of probabilities to determine
    the next position of an MN.
  • Random Direction A model that forces MNs to
    travel to the edge of the simulation area before
    changing direction and speed.
  • Boundless Simulation Area A model that converts
    a 2D rectangular simulation area into a
    torus-shaped simulation area.
  • Gauss-Markov A model that uses one tuning
    parameter to vary the degree of randomness in the
    mobility pattern.
  • City Section A simulation area that represents
    streets within a city.
  • All of the models above fail to capture some
    intrinsic characteristics of mobility (e.g., group
    movement). What is a good mobility model?

19
Routing Paradigms
  • We will spend the next few slides reviewing some
    of the classic routing algorithms that have
    been proposed for ad-hoc networks

20
Routing in (Mobile) Ad-hoc Networks
[Figure: a source node routing to a destination]
  • changing, arbitrary topology
  • nodes are not 100% reliable
  • may need routing tables to find path to
    destination
  • related problem: name resolution (finding
    closest item of a certain type)

21
Basic Routing Schemes
  • Proactive Routing
  • keep routing information current at all times
  • good for static networks
  • examples: distance vector (DV), link state (LS)
    algorithms, link reversal (e.g., TORA) algorithms
  • Reactive Routing
  • find a route to the destination only after a
    request comes in
  • good for more dynamic networks and low
    communication
  • examples: AODV, dynamic source routing (DSR)
  • Hybrid Schemes
  • keep some information current
  • example: Zone Routing Protocol (ZRP)

22
Reactive Routing: Flooding
  • What is Routing?
  • Routing is the act of moving information across
    an internetwork from a source to a destination.
    (CISCO)

23
Reactive Routing: Flooding
  • The simplest form of routing is flooding: a
    source s sends the message to all its neighbors;
    when a node other than the destination t receives
    the message for the first time, it re-sends it to
    all its neighbors.
  • simple
  • a node might see the same message more than
    once. (How often?)
  • what if the network is huge but the target t
    sits just next to the source s?
  • We need a smarter routing algorithm
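The flooding rule can be sketched as follows (illustrative, not from the slides; `adj` maps each node to its neighbor list, and the destination `t` does not re-send):

```python
from collections import deque

def flood(adj, s, t=None):
    """Flood from s; return (number of broadcasts, set of nodes reached)."""
    seen = {s}
    sends = 0
    q = deque([s])
    while q:
        u = q.popleft()
        if u == t:                 # the destination does not re-send
            continue
        sends += 1                 # u broadcasts once to all its neighbors
        for v in adj[u]:
            if v not in seen:      # first reception -> v will re-send later
                seen.add(v)
                q.append(v)
    return sends, seen
```

Even on this small scale the weakness is visible: every node transmits once, regardless of where t actually is.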

24
Reactive Routing: Other algorithms
  • Ad-Hoc On Demand Distance Vector (AODV)
    [Perkins-Royer 99]
  • Dynamic Source Routing (DSR) [Johnson-Maltz 96]
  • Temporally Ordered Routing Algorithm (TORA)
    [Park-Corson 97]
  • If the source does not know a path to the
    destination, it issues a discovery request
  • DSR caches the route to the destination
  • Easier to avoid routing loops

[Figure: route discovery from source to destination]
25
Proactive Routing 1: Link-State Routing Protocols
  • Link-state routing protocols are a preferred IGP
    method (within an autonomous system, think
    service provider) in the Internet
  • Idea: periodic notification of all nodes about
    the complete graph

[Figure: example graph with nodes s, a, b, c, t]
26
Proactive Routing 1: Link-State Routing Protocols
  • Routers then forward a message along (for
    example) the shortest path in the graph
  • message follows shortest path
  • every node needs to store the whole graph, even
    links that are not on any path
  • every node needs to send and receive messages
    that describe the whole graph regularly

27
Proactive Routing 2: Distance Vector Routing
Protocols
  • The predominant method for wired networks
  • Idea: each node stores a routing table that has
    an entry for each destination (destination,
    distance, neighbor); each node maintains the
    distance to every other node

[Figure: node s's distance-vector table]
Dest  Dir  Dst
a     a    1
b     b    1
c     b    2
t     b    2
28
Proactive Routing 2: Distance Vector
  • If a router notices a change in its neighborhood
    or receives an update message from a neighbor, it
    updates its routing table accordingly and sends
    an update to all its neighbors
  • message follows shortest path
  • only send updates when topology changes
  • most topology changes are irrelevant for a
    given source/destination pair
  • Single edge/node failure may require most
    nodes to change most of their entries
  • every node needs to store a big table
    (O(n log n) bits)
  • temporary loops
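The update rule can be sketched as a Bellman-Ford-style merge of a neighbor's announcement into the local table (illustrative, not from the slides; `table` maps each destination to `(next_hop, distance)`):

```python
def dv_update(table, neighbor, neighbor_table, link_cost):
    """Merge a neighbor's distance-vector announcement into our table.

    Returns True if anything changed, in which case a real protocol
    would re-announce our own vector to all neighbors.
    """
    changed = False
    for dest, (_, d) in neighbor_table.items():
        new = link_cost + d
        if dest not in table or new < table[dest][1]:
            table[dest] = (neighbor, new)   # route to dest via this neighbor
            changed = True
    return changed
```

Note this sketch omits split horizon and poisoned reverse, which real DV protocols need to limit the temporary loops mentioned above.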

29
Proactive Routing 2: Distance Vector
  • Single edge/node failure may require most nodes
    to change most of their entries

[Figure: two halves of the network connected by a single edge]
30
Discussion of Classic Routing Protocols
  • There is no optimal routing protocol; the
    choice of the routing protocol depends on the
    circumstances. Of particular importance is the
    mobility/data ratio.
  • On the other hand, designing a protocol whose
    complexity (number of messages, elapsed time) is
    proportional to the distance between source and
    destination nodes would be desirable

31
Trick 1: Radius Growth
  • Problem of flooding (and similarly other
    algorithms): The destination is two hops away but
    we flood the whole network
  • Idea: Flood with growing radius; use a time-to-live
    (TTL) tag that is decreased at every node; for
    the first flood initialize TTL with 1, then 2,
    then 3 (really?); when the destination is found,
    how do we stop?
  • Alternative idea: Flood very slowly (nodes wait
    some time before they forward the message); when
    the destination is found, a quick flood is
    initiated that stops the previous flood
  • Tradeoff: time vs. number of messages
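The TTL-based variant can be sketched as repeated TTL-bounded floods (illustrative, not from the slides; one broadcast is counted per forwarding node):

```python
def expanding_ring_search(adj, s, t, max_ttl):
    """Flood with TTL = 1, 2, ... until t is reached.

    Returns (ttl that succeeded, total broadcasts), or (None, total)
    if t is unreachable within max_ttl hops.
    """
    total_msgs = 0
    for ttl in range(1, max_ttl + 1):
        frontier, seen = {s}, {s}
        for _ in range(ttl):                 # expand one hop per round
            nxt = set()
            for u in frontier:
                total_msgs += 1              # u broadcasts once
                nxt |= {v for v in adj[u] if v not in seen}
            seen |= nxt
            frontier = nxt
        if t in seen:
            return ttl, total_msgs
    return None, total_msgs
```

The tradeoff from the slide is visible in the return value: a nearby destination costs very few messages, while a distant one pays for every failed smaller ring.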

32
Trick 2: Source Routing
  • Problem: nodes have to store routing information
    for others
  • Idea: The source node stores the whole path to
    the destination and attaches it to every
    message, so nodes on the path simply chop off
    themselves and send the message to the next node.
  • Dynamic Source Routing discovers a new path
    with flooding (the message stores its history; if
    it arrives at the destination it is sent back to
    the source along the same path)
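The chop-off step can be sketched as follows (illustrative, not from the slides; a packet is a `(path, payload)` pair whose path always starts with the current holder):

```python
def forward(packet):
    """One source-routing hop: the head of the stored path chops
    itself off and hands the packet to the next node on the path."""
    path, payload = packet
    me, rest = path[0], path[1:]
    if not rest:                              # path exhausted: we are t
        return ('delivered', me, payload)
    return ('send_to', rest[0], (rest, payload))

# The source s stores the whole route s-a-b-t in the packet header.
pkt = (['s', 'a', 'b', 't'], 'hello')
```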

33
Trick 2: Source Routing
  • Nodes only store the paths they need
  • Not efficient if mobility/data ratio is high
  • Asymmetric Links?

34
Trick 3: Asymmetric Links
  • Problem: The destination cannot send the newly
    found path to the source because at least one of
    the links used was unidirectional.

[Figure: example graph with nodes s, a, b, c, t and a unidirectional link]
35
Trick 3: Asymmetric Links
  • Idea: The destination needs to find the source by
    flooding again; the path is attached to the
    flooded message. The destination has information
    about the source (approximate distance, maybe
    even direction), which can be used.

36
Trick 4: Re-use/cache routes
  • This idea comes in many flavors
  • Clearly, a source s that has already found a route
    s-a-b-c-t does not need to flood again in order
    to find a route to node c.
  • Also, if node u receives a flooding message that
    searches for node v, and node u knows how to
    reach v, u might answer the flooding initiator
    directly.

37
Trick 5: Local search
  • Problem: When trying to forward a message on path
    s-a-u-c-t, node u recognizes that node c is not
    a neighbor anymore.
  • Idea: Instead of not delivering the message and
    sending a NAK to s, node u could try to search
    for t itself, maybe even by flooding.
  • Some algorithms hope that node t is still within
    the same distance as before, so they can do a
    flooding with TTL set to the original
    distance (plus one)

38
Trick 5: Local search
  • If u does not find t, maybe the predecessor of u
    (a) does?
  • One can construct examples where this works,
    but of course also examples where it does not
    work.

39
Trick 6: Hierarchy
  • Problem: Proactive algorithms especially do not
    scale with the number of nodes. Each node needs
    to store big tables
  • Idea: In the Internet there is a hierarchy of
    nodes, i.e., all nodes with the same IP prefix are
    in the same direction. One could do the same
    trick in ad-hoc networks
  • Well, if it happens that the ad-hoc nodes with
    similar addresses are located close together,
    hierarchical routing is a good idea.
  • There are not too many applications where this
    is the case. Nodes are mobile, after all.

40
Trick 7: Clustering
  • Idea: Group the ad-hoc nodes into clusters (if
    you want, hierarchically). One node is the head of
    the cluster. If a node in the cluster sends a
    message, it sends it to the head, which sends it to
    the head of the destination cluster, which sends
    it to the destination

41
Trick 7: Clustering
  • Simplifies operation for most nodes (those that
    are not cluster heads); this is particularly useful
    if the nodes are heterogeneous and the cluster
    heads are stronger than the others.
  • Brings the network density down to constant, if
    implemented properly (e.g., no two cluster heads
    can communicate directly with each other)
  • A level of indirection adds overhead.
  • There will be more contention at the cluster
    heads.

42
Trick 8: Implicit Acknowledgement
  • Problem: Node u only knows that neighbor node v
    has received a message if node v sends an
    acknowledgement.
  • Idea: If v is not the destination, v needs to
    forward the message to the next node w on the
    path. If links are symmetric (and they need to be
    in order to send acknowledgements anyway), node u
    will automatically hear the transmission from v
    to w (unless node u has interference with another
    message).

43
Trick 8: Implicit Acknowledgement
  • Can we set up the MAC layer such that
    interference is impossible?
  • Finally, a good trick!

44
Trick 9: Smarter updates
  • Sequence numbers for all routing updates
  • Avoids loops and inconsistencies
  • Assures in-order execution of all updates
  • Decrease of update frequency
  • Store the time between the first and the best
    announcement of a path

45
Trick 9: Smarter updates
  • Inhibit an update if it seems to be unstable
    (based on the stored time values)
  • Less traffic
  • Implemented in Destination-Sequenced Distance
    Vector (DSDV)

46
Trick 10: Use other distance metrics
  • Problem: The number of hops is fine for some
    applications, but for ad-hoc networks other
    metrics might be better, for example: energy,
    congestion, successful transmission probability,
    interference, etc.

47
Trick 10: Use other distance metrics
  • How do we compute interference in an online
    manner? Interference: a receiving node is also
    in the receiving area of another transmission.

48
Routing in Ad-Hoc Networks
  • 10 Tricks ⇒ 2^10 routing algorithms
  • In reality there are almost that many!
  • Q: How good are these routing algorithms?!? Any
    hard results?
  • A: Almost none! The method of choice is simulation
  • Perkins: "if you simulate three times, you get
    three different results"

49
Clustering
  • disjoint or overlapping
  • flat or hierarchical
  • internal and border nodes/edges

Flat Clustering
50
Hierarchical Clustering
Hierarchical Clustering
51
Routing by Clustering
Routing by One-Level Clustering [Baker-Ephremides
81]
  • Gateway nodes maintain routes within cluster
  • Routing among gateway nodes along a spanning tree
    or using DV/LS algorithms
  • Hierarchical clustering (e.g., Lauer 86,
    Ramanathan-Steenstrup 98)

52
Hierarchical Routing
  • The nodes organize themselves into a hierarchy
  • The hierarchy imposes a natural addressing scheme
  • Quasi-hierarchical routing: Each node maintains
  • the next hop node on a path to every other level-j
    cluster within its level-(j+1) ancestral cluster
  • Strict-hierarchical routing: Each node maintains
  • the next level-j cluster on a path to every other
    level-j cluster within its level-(j+1) ancestral
    cluster
  • boundary level-j clusters in its level-(j+1)
    clusters and their neighboring clusters

53
Example Strict-Hierarchical Routing
  • Each node maintains
  • Next hop node on a min-cost path to every other
    node in cluster
  • Cluster boundary node on a min-cost path to
    neighboring cluster
  • Next hop cluster on the min-cost path to any
    other cluster in supercluster
  • The cluster leader participates in computing this
    information and distributing it to nodes in its
    cluster

54
Space Requirements and Adaptability
  • Each node has entries
  • is the number of levels
  • is the maximum, over all j, of the number of
    level-j clusters in a level-(j1) cluster
  • If the clustering is regular, number of entries
    per node is
  • Restructuring the hierarchy
  • Cluster leaders split/merge clusters while
    maintaining size bounds (O(1) gap between upper
    and lower bounds)
  • Sometimes need to generate new addresses
  • Need location management (name-to-address map)

55
Space Requirements for Routing
  • Distance Vector: O(n log n) bits per node,
    O(n^2 log n) total
  • Routing via spanning tree: O(n log n) total, very
    non-optimal
  • Optimal (i.e., shortest path) routing requires
    Theta(n^2) bits total on almost all graphs
    [Buhrman-Hoepman-Vitanyi 00]
  • Almost optimal routing (with stretch < 3)
    requires Theta(n^2) on some graphs
    [Fraigniaud-Gavoille 95, Gavoille-Gengler 97,
    Gavoille-Perennes 96]
  • Tradeoff between stretch and space [Peleg-Upfal
    89]
  • upper bound: O(n^(1+1/k)) memory with stretch
    O(k)
  • lower bound: Theta(n^(1+1/(2k+4))) bits with
    stretch O(k)
  • about O(n^(3/2)) with stretch 5
    [Eilam-Gavoille-Peleg 00]
56
Proactive Routing: Link Reversal Routing
  • An interesting proactive routing protocol with
    low overhead.
  • Idea: For each destination, all communication
    links are directed such that following the
    arrows always brings you to the destination.
  • Example (with only one destination D)

57
Link Reversal Routing
  • Note that positive labels can be chosen such that
    higher labels point to lower labels (and the
    destination label is D = 0).

58
Link Reversal Routing: Mobility
  • Links may fail/disappear: if nodes still have
    outlinks ⇒ no problem!
  • New links may emerge: just insert them such that
    there are no loops (use the labels to figure that
    out)

59
Link Reversal Routing: Mobility
  • Only problem: a non-destination node becomes a
    sink ⇒ reverse all links!
  • Not shown in the example: if you reverse all
    links, then increase the label.
  • The recursive process can be quite tedious

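One round of full link reversal can be sketched as follows (illustrative, not from the slides; a link points from the higher label to the lower label, so a node with no lower-labeled neighbor is a sink):

```python
def full_reversal_step(height, adj, dest):
    """One synchronous round of full link reversal.

    Every non-destination node that is a sink (no neighbor with a
    strictly lower height) lifts its height above all its neighbors,
    which reverses all of its incident links at once.
    """
    new_height = dict(height)
    for u in adj:
        if u == dest or not adj[u]:
            continue
        if all(height[v] >= height[u] for v in adj[u]):      # u is a sink
            new_height[u] = max(height[v] for v in adj[u]) + 1
    return new_height
```

Iterating this step until no non-destination sink remains restores the "all arrows lead to D" invariant; the ring example on the next slide shows how many rounds that can take.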
60
Link Reversal Routing: Analysis
  • In a ring network with n nodes, the deletion of a
    single link (close to the sink) makes the
    algorithm reverse like crazy: indeed, a single
    link failure may start a reversal process that
    takes n rounds, in which n links reverse
    themselves n/2 times!
  • That's why some researchers proposed partial link
    reversal, where nodes only reverse links that
    were not reversed before.

61
Link Reversal Routing Analysis
  • However, it was shown by Busch et al. (SPAA03)
    that in the extreme case also partial link
    reversal is not efficient, it may in fact be even
    worse than regular link reversal.
  • Still, some popular protocols (TORA) are based on
    link reversal.

62
Hybrid Schemes
  • Zone Routing [Haas 97]
  • every node knows a zone of radius r around
    it
  • nodes at distance exactly r are called
    peripheral
  • bordercasting: sending a message to all
    peripheral nodes
  • global route search: bordercasting reduces the
    search space
  • the radius determines the trade-off
  • maintain up-to-date routes in the zone and cache
    routes to external nodes

63
Clustering
64
Overview
  • Motivation
  • Dominating Set
  • Connected Dominating Set
  • The Greedy Algorithm
  • The Tree Growing Algorithm
  • The Local Randomized Greedy Algorithm
  • The k-Local Algorithm
  • The Dominator! Algorithm

65
Discussion
  • Flooding is key component of (many) proposed
    algorithms, including most prominent ones (AODV,
    DSR)
  • At least flooding should be done efficiently
  • We have also briefly seen how clustering can be
    used for point-to-point routing

66
Finding a Destination by Flooding
67
Finding a Destination Efficiently
68
Backbone
  • Idea Some nodes become backbone nodes
    (gateways). Each node can access and be accessed
    by at least one backbone node.
  • Routing
  • If source is not agateway, transmitmessage to
    gateway

69
Backbone Contd.
  • The gateway acts as a proxy source and routes the
    message on the backbone to the gateway of the
    destination.
  • Transmission: gateway to destination.

70
(Connected) Dominating Set
  • A Dominating Set (DS) is a subset of nodes such
    that each node is either in DS or has a neighbor
    in DS.
  • A Connected Dominating Set (CDS) is a connected
    DS, that is, there is a path between any two
    nodes in CDS that does not use nodes that are not
    in CDS.
  • A CDS is a good choice for a backbone.
  • It might be favorable to have few nodes in the
    CDS. This is known as the Minimum CDS problem

71
Formal Problem Definition M(C)DS
  • Input: We are given an (arbitrary) undirected
    graph.
  • Output: Find a Minimum (Connected) Dominating
    Set, that is, a (C)DS with a minimum number of
    nodes.
  • Problems
  • M(C)DS is NP-hard
  • Find a (C)DS that is close to minimum
    (approximation)
  • The solution must be local (global solutions are
    impractical for mobile ad-hoc networks): the
    topology of the graph far away should not
    influence the decision of who belongs to the (C)DS

72
Greedy Algorithm for (C)DS
  • Idea: Greedily choose good nodes into the
    dominating set.
  • Black nodes are in the DS
  • Grey nodes are neighbors of nodes in the DS
  • White nodes are not yet dominated; initially all
    nodes are white.
  • Algorithm: Greedily choose a white or grey node
    that colors the most white nodes.
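The greedy rule can be sketched as follows (illustrative, not from the slides; `adj` maps each node to its neighbor list):

```python
def greedy_ds(adj):
    """Greedy dominating set: repeatedly pick the node that colors
    (newly dominates) the most still-white nodes."""
    white = set(adj)           # initially all nodes are white
    ds = []
    while white:
        # span of v = white nodes in v's closed neighborhood
        u = max(adj, key=lambda v: len(white & ({v} | set(adj[v]))))
        ds.append(u)                       # u turns black
        white -= {u} | set(adj[u])         # u and its neighbors leave white
    return ds
```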

73
Greedy Algorithm for Dominating Sets
  • One can show that this gives a log Δ
    approximation, if Δ is the maximum node degree of
    the graph.
  • One can also show that there is no polynomial
    algorithm with a better approximation factor
    unless NP can be solved in deterministic
    n^O(log log n) time.

74
CDS: The too-simple tree growing algorithm
  • Idea: start with the root, and then greedily
    choose a neighbor of the tree that dominates as
    many new nodes as possible
  • Black nodes are in the CDS
  • Grey nodes are neighbors of nodes in the CDS
  • White nodes are not yet dominated, initially all
    nodes are white.

75
CDS: The too-simple tree growing algorithm
  • Start: Choose a node of maximum degree, and make
    it the root of the CDS, that is, color it black
    (and its white neighbors grey).
  • Step: Choose a grey node with a maximum number of
    white neighbors and color it black (and its white
    neighbors grey).

76
Example of the too-simple tree growing algorithm
  • CDS = n/2 + 1, MCDS = 4

[Figure: a graph with hub nodes u and v]

  • Graph with 2n+2 nodes: tree growing CDS = n+2,
    minimum CDS = 4
  • tree growing start vs. minimum CDS

77
Tree Growing Algorithm
  • Idea: Don't scan one but two nodes!
  • Alternative step: Choose a grey node and its
    white neighbor node with a maximum sum of white
    neighbors and color both black (and their white
    neighbors grey).

78
Analysis of the tree growing algorithm
  • Theorem: The tree growing algorithm finds a
    connected dominating set of size
    |CDS| < 2(1+H(Δ)) · |DSOPT|.
  • DSOPT is a (not necessarily connected) minimum
    dominating set
  • Δ is the maximum node degree in the graph
  • H is the harmonic function with H(n) = Theta(log n)

79
Analysis of the tree growing algorithm
  • In other words, the connected dominating set of
    the tree growing algorithm is at most an O(log Δ)
    factor worse than an optimal minimum dominating
    set (which is NP-hard to compute).
  • With a lower bound argument (reduction to set
    cover) one can show that a better approximation
    factor is impossible, unless NP can be solved in
    n^O(log log n) time.

80
Proof Sketch
  • The proof is done with amortized analysis.
  • Let Su be the set of nodes dominated by u in
    DSOPT, or u itself. If a node is dominated by
    more than one node, we put it in one of the sets.
  • We charge the nodes in the graph for each node we
    color black. In particular we charge all the
    newly colored grey nodes. Since we color a node
    grey at most once, it is charged at most once.
  • We show that the total charge on the vertices in
    an Su is at most 2(1+H(Δ)), for any u.

81
Charge on Su
  • Initially, u0 = |Su| (all nodes of Su are white).
  • Whenever we color some nodes of Su, we call this
    a step.
  • The number of white nodes in Su after step i is
    ui.
  • After step k there are no more white nodes in Su.

82
Charge on Su
  • In the first step, u0 − u1 nodes are colored
    (grey or black). Each vertex gets a charge of
    at most 2/(u0 − u1).
  • After the first step, node u becomes eligible to
    be colored (as part of a pair with one of the
    grey nodes in Su). If u is not chosen in step i
    (with a potential to paint ui nodes grey), then
    we have found a better (pair of) node. That is,
    the charge to any of the new grey nodes of step i
    in Su is at most 2/ui.

83
Adding up the charges in Su
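The summation itself did not survive transcription; a plausible reconstruction, consistent with the charging scheme of the previous two slides and the standard telescoping bound on the harmonic sum:

```latex
\operatorname{charge}(S_u)
\;\le\; (u_0 - u_1)\cdot\frac{2}{u_0 - u_1}
\;+\; \sum_{i=2}^{k} (u_{i-1} - u_i)\cdot\frac{2}{u_{i-1}}
\;\le\; 2 + 2\sum_{j=1}^{u_1}\frac{1}{j}
\;=\; 2\bigl(1 + H(u_1)\bigr)
\;\le\; 2\bigl(1 + H(\Delta)\bigr)
```

Summing over all sets S_u, u in DSOPT, gives the claimed bound |CDS| < 2(1+H(Δ)) · |DSOPT|.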
84
Discussion of the tree growing algorithm
  • We have an extremely simple algorithm that is
    asymptotically optimal unless NP can be solved in
    n^O(log log n) time. And even the constants are
    small.
  • Are we happy?
  • Not really. How do we implement this algorithm in
    a real mobile network? How do we figure out where
    the best grey/white pair of nodes is? How slow is
    this algorithm in a distributed setting?
  • We need a fully distributed algorithm. Nodes
    should only consider local information.

85
Local Randomized Greedy
  • A local randomized greedy algorithm, LRG
    [Jia-Rajaraman-Suel 01]
  • Computes an O(log Δ) approximation of
    MDS in O(log n log Δ) time with high
    probability
  • Generalizes to the weighted case and multiple
    coverage

86
Local Randomized Greedy - LRG
  • Each round of LRG consists of these steps.
  • Rounded span calculation: Each node v calculates
    its span, the number of yet-uncovered nodes that
    v covers; it rounds up its span to the nearest
    power of a base b, e.g., b = 2.
  • Candidate selection: A node announces itself as
    a candidate if it has the maximum rounded span
    among all nodes within distance 2.
  • Support calculation: Each uncovered node u
    calculates its support number s(u), which is the
    number of candidates that cover u.
  • Dominator selection: Each candidate v selects
    itself as a dominator with probability 1/med(v),
    where med(v) is the median support of all the
    uncovered nodes that v covers.
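One LRG round might be sketched as follows (an illustrative, sequential rendering of the four distributed steps, not the authors' code; `rng` is injectable so the randomized selection can be made deterministic for testing):

```python
import math
import random

def lrg_round(adj, uncovered, rng=random.random):
    """One round of LRG (sketch): returns the nodes that select
    themselves as dominators in this round."""
    # 1. Rounded span: uncovered nodes in v's closed neighborhood,
    #    rounded up to the nearest power of 2.
    def span(v):
        s = len(uncovered & ({v} | set(adj[v])))
        return 0 if s == 0 else 2 ** math.ceil(math.log2(s))

    # Nodes within distance 2 of v (v, neighbors, neighbors-of-neighbors).
    def ball2(v):
        b = {v} | set(adj[v])
        for w in set(adj[v]):
            b |= set(adj[w])
        return b

    # 2. Candidate selection: maximum rounded span within distance 2.
    cand = {v for v in adj
            if span(v) > 0 and span(v) == max(span(w) for w in ball2(v))}

    # 3. Support: number of candidates covering each uncovered node.
    support = {u: sum(1 for v in cand if u == v or u in adj[v])
               for u in uncovered}

    # 4. Dominator selection with probability 1 / median support.
    doms = set()
    for v in cand:
        covered = sorted(support[u] for u in uncovered
                         if u == v or u in adj[v])
        med = covered[len(covered) // 2]
        if rng() < 1.0 / med:
            doms.add(v)
    return doms
```

Rounds repeat, removing newly covered nodes from `uncovered`, until every node is covered.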

87
Performance Characteristics of LRG
  • Terminates in O(log n log Δ) rounds whp
  • Approximation ratio is O(log Δ) in expectation
    and O(log n) whp
  • Running time is independent of the diameter and
    the approximation ratio is asymptotically optimal
  • Tradeoff between approximation ratio and
    running time
  • Terminates in rounds whp
    (the constant in the O-notation depends on 1/ε).
  • Approximation ratio is in
    expectation

88
The k-local Algorithm
Input: Local Graph → Fractional Dominating Set →
Dominating Set → Connected Dominating Set
  • Phase A: Distributed linear program; relatively
    high degree gives high value
  • Phase B: Probabilistic algorithm
  • Phase C: Connect DS by a tree of bridges

[Figure: example fractional values 0.1, 0.2, 0.5, 0.3, 0.8, ... on the nodes]
89
Result of the k-local Algorithm
  • Distributed Approximation
  • The approximation factor depends on the number of
    rounds k (the locality)
  • Distributed Compl. O(k ) rounds O(k ?)
    messages of size O(log ?).
  • If k log ?, then constant approximation on MDS
    running time O(log ?).
  • Kuhn-Wattenhofer, PODC03

Theorem EDS O(k ? log ? MDS)
2/k
2
2
90
Unit Disk Graph
  • We are given a set V of nodes in the plane
    (points with coordinates).
  • The unit disk graph UDG(V) is defined as an
    undirected graph (with E being a set of
    undirected edges). There is an edge between two
    nodes u,v iff the Euclidean distance between u
    and v is at most 1.
  • Think of the unit distance as the maximum
    transmission range.

91
Unit Disk Graph
  • We assume that the unit disk graph UDG is
    connected (that is, there is a path between each
    pair of nodes)
  • The unit disk graph has many edges.
  • Can we drop some edges in the UDG to reduce
    complexity and interference?

92
The Dominator! Algorithm
  • For the important special case of Euclidean Unit
    Disk Graphs there is a simple marking algorithm
    that does the job.
  • We make the simplifying assumption that MAC
    layer issues are resolved: two nodes u, v within
    transmission range 1 both receive all of each
    other's transmissions. There is no interference,
    that is, the transmissions are locally always
    completely ordered.

93
The Dominator! Algorithm
  • Initially no node is in the connected dominating
    set CDS.
  • If a node u has not yet received a DOMINATOR
    message from any other node, node u will transmit
    a DOMINATOR message
  • If node v receives a DOMINATOR message from node
    u, then node v is dominated by node u.

94
Example
  • This gives a dominating set. But it is not
    connected.

95
The Dominator! Algorithm Continued
  • 3. If a node w is dominated by two dominators u
    and v, and node w has not yet received a message
    "I am dominated by u and v", then node w
    transmits "I am dominated by u and v" and enters
    the CDS.
  • And since this is still not quite enough:
  • 4. If a neighboring pair of nodes w1 and w2 is
    dominated by dominators u and v, respectively,
    and they have not yet received a message "I am
    dominated by u and v" or "We are dominated by u
    and v", then nodes w1 and w2 both transmit "We
    are dominated by u and v" and enter the CDS.
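The first phase (the DOMINATOR marking of slides 93-94) can be sketched sequentially as follows (illustrative, not the original algorithm's code; the "MAC-resolved" total order of transmissions is passed in explicitly, and the CDS-connection rules 3-4 are omitted):

```python
def dominator_phase(order, adj):
    """Sequential sketch of the Dominator! marking phase.

    Nodes speak in the given (MAC-resolved) order; a node that has
    heard no DOMINATOR message yet announces itself and joins the
    dominating set, and all its neighbors hear the announcement.
    """
    dominators = set()
    dominated_by = {}                        # node -> its dominator
    for u in order:
        if u not in dominated_by:            # heard no DOMINATOR yet
            dominators.add(u)
            dominated_by[u] = u              # u dominates itself
            for v in adj[u]:                 # neighbors hear the message
                dominated_by.setdefault(v, u)
    return dominators, dominated_by
```

On a path this yields the "dominating but not connected" picture of slide 94, which is exactly what rules 3-4 then repair.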

96
Results
  • The Dominator! Algorithm produces a connected
    dominating set.
  • The algorithm is completely local
  • Each node only has to transmit one or two
    messages of constant size.
  • The connected dominating set is asymptotically
    optimal, that is, |CDS| = O(|MCDS|)

97
Results
  • If nodes in the CDS calculate the Gabriel Graph
    GG(UDG(CDS)), the CDS graph is also planar
  • The routes in GG(UDG(CDS)) are competitive.
  • But is the UDG Euclidean assumption realistic?
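As a sketch of the Gabriel graph filter mentioned above: an edge (u, v) is kept iff no third node lies strictly inside the disk whose diameter is the segment uv, which is equivalent to requiring |uw|² + |vw|² ≥ |uv|² for every other node w. The function name below is illustrative.

```python
import math

def gabriel_graph(points, edges):
    """Gabriel filter: keep edge (u, v) iff no third point lies strictly
    inside the disk whose diameter is the segment uv."""
    kept = set()
    for u, v in edges:
        d_uv = math.dist(points[u], points[v]) ** 2
        # w is strictly inside the diameter disk iff |uw|^2 + |vw|^2 < |uv|^2
        if all(math.dist(points[u], points[w]) ** 2 +
               math.dist(points[v], points[w]) ** 2 >= d_uv
               for w in range(len(points)) if w not in (u, v)):
            kept.add((u, v))
    return kept
```

For three points with one near the midpoint of the other two, the long edge is dropped and the two short edges survive, which is the pruning that yields planarity.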

98
Overview of (C)DS Algorithms
Algorithm    | Worst-Case Guarantees             | Local (Distributed) | General Graphs | CDS
-------------|-----------------------------------|---------------------|----------------|----
Greedy       | Yes, optimal unless NP in ...     | No                  | Yes            | No
Tree Growing | Yes, optimal unless NP in ...     | No                  | Yes            | Yes
LRG          | Yes, optimal unless NP in ...     | Yes                 | Yes            | ?
k-local      | Yes, but with add. approx. factor | Yes (k-local)       | Yes            | Yes
Dominator!   | Asymptotically optimal            | Yes                 | No             | Yes
99
Handling Mobility
  • Unit-disk graph model
  • We will present a constant approximation 1-hop
    clustering algorithm (i.e., an algorithm for
    finding a DS) which can handle mobility locally
    and in constant time per relevant change in
    topology
  • All the other algorithms seen handle the mobility
    of a node by fully recomputing the (C)DS, which
    is not desirable (global update)
  • For that, we will first introduce the notion
    of piercing sets

100
Minimum Piercing Set Problem
  • A set of objects
  • A set of points to pierce all objects (green)
  • Goal: find a minimum cardinality piercing set
    (red)
  • NP-hard

101
Mobile Piercing Set Problem (MPS)
  • The objects can move (mobility)
  • Goal: maintain a minimum piercing set while
    objects are moving, with minimum update cost.
  • Distributed algorithm
  • Constant approximation on min. piercing set
  • Only consider unit-disks (disks with diameter 1).
    (Why?)

102
Setup and Update Costs
  • Setup cost: the cost of computing an initial
    piercing set.
  • Update cost: charged per event.
  • We define two types of events
  • When there are redundant piercing points
  • When there are unpierced disks
  • When either happens, an update is mandatory.

103
Clustering in Ad-hoc Network
  • Clustering
  • Simplify the network
  • Scalability
  • 1-hop Clustering
  • Mobiles are in 1-hop range from clusterhead.

104
Ad-hoc Networks Model
  • Mobiles all have the same communication range
    (unit radius)
  • Unit-disk graph
  • Intersection graph of unit-diameter disks

105
Our Contribution Clustering Algorithm
  • M-Algorithms for MPS translate directly to 1-hop
    clustering algorithms
  • Formal analyses of popular clustering algorithms,
    showing that they both achieve the same
    approximation factor as the M-algorithm.
  • Lowest-ID algorithm: O(P) update cost
  • Least Clusterhead Change (LCC) algorithm: optimal
    update cost

106
Related Work
  • Piercing set (static)
  • A 2d-1-approximation, L∞ norm, O(dnnlog P)
    centralized algorithm Nielsen 96
  • A 4-approximation, L2 norm, 2D, sequential
    algorithm for the k-center problem Agarwal
    Procopiuc 98
  • Clustering
  • Geometric centers expected constant approx,
    update time O(log^3.6 n) Gao, Guibas,
    Hershberger, Zhang Zhu 01
  • Lowest ID Gerla Tsai 95
  • LCC Chiang, Wu, Liu Gerla 97

107
Simple Case 1D
  • The optimal solution for intervals: Sharir
    Welzl 98

108
General Case: L1 / L∞ norm
  • An example when d = 2, L∞ norm
  • No two piercing disks intersect

(Figure legend: piercing disk vs. normal disk)
109
Distributed Algorithm
  • Select piercing disks in a distributed,
    top-down fashion

110
Cascading Effect
  • Cascading effect: one event incurs global
    updates, and hence high costs

111
Handling Mobility
  • Sever the cascading effect
  • Start from an arbitrary unpierced disk and use 4
    corners
  • Main idea: find a set of points which is
    guaranteed to pierce any configuration of a
    neighborhood of a disk

112
Handling Mobility
  • 2D, L2 norm: 7-approximation
  • M-Algorithm: constant approximation factor and
    optimal update cost

113
M-Algorithm Setup
  • M-Setup: find an initial piercing set
  • Every unpierced disk tries to become a piercing
    disk and to pierce all of its neighbors
  • If two neighboring disks try to become piercing
    disks at the same time, the lowest-labeled disk
    wins
  • Setup cost: O(P)
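A minimal sequential simulation of the M-Setup rule above, assuming disk labels coincide with list indices so that the lowest label simply acts first (names illustrative):

```python
import math

def m_setup(centers):
    """Sequential simulation of M-Setup on unit-diameter disks.
    Two disks intersect iff their centers are within distance 1.
    Ties between neighbors go to the lowest label (here, the index)."""
    pierced_by = {}                        # disk -> piercing disk covering it
    piercing = []
    for u in range(len(centers)):          # lowest label tries first
        if u not in pierced_by:            # still unpierced: become piercing
            piercing.append(u)
            for v in range(len(centers)):
                if math.dist(centers[u], centers[v]) <= 1.0:
                    pierced_by.setdefault(v, u)
    return piercing, pierced_by
```

Note that no two piercing disks intersect: a disk intersecting an earlier piercing disk is already pierced when its turn comes.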

114
M-Algorithm Update
  • M-Update: O(1) cost per event
  • When two piercing disks meet, one of them is set
    back to being a normal disk
  • When a disk becomes unpierced, it invokes M-Setup
    locally

115
M-Algorithm Summary
  • Approximation factor
  • 2D, L2 norm: 7-approximation
  • 3D, L2 norm: 21-approximation
  • Setup cost
  • O(P)
  • Update cost
  • O(1) per event

116
Clustering Algorithm
  • 1-hop clustering algorithm
  • The mobile at the center of each piercing disk is
    a clusterhead
  • A 1-hop cluster is defined by a clusterhead and
    all disks (mobiles) pierced by it
  • No two clusterheads are neighbors
  • A k-approximation M-Algorithm for MPS gives a
    k-approximation on the minimum number of 1-hop
    clusters
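The MPS-to-clustering translation above amounts to grouping each mobile under the piercing point (clusterhead) that covers it; a small hypothetical helper, not from the slides:

```python
from collections import defaultdict

def clusters_from_piercing(pierced_by):
    """Build 1-hop clusters from a piercing assignment:
    pierced_by maps each mobile to the clusterhead covering it."""
    groups = defaultdict(set)
    for mobile, head in pierced_by.items():
        groups[head].add(mobile)
    return dict(groups)
```

Since every mobile is within distance 1 of its piercing point, each resulting cluster is indeed 1-hop.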

117
Lowest ID Algorithm
  • Lowest ID algorithm = M-Setup
  • 1-hop clustering algorithm
  • The lowest-ID mobile becomes the clusterhead
  • Constant approximation factor, the same as the
    M-algorithm
  • Setup cost
  • O(P)
  • Update cost
  • O(P) due to the cascading effect

118
LCC Algorithm
  • Least Clusterhead Change (LCC) = M-Update
  • Clusters change only when necessary
  • Two clusterheads meet
  • A mobile is out of its clusterhead's range
  • Constant approximation factor, the same as the
    M-algorithm
  • Setup cost: O(P)
  • Update cost: O(1) per event; since an update is
    required whenever an event occurs, this cost is
    optimal
