1
Mobility Inspired/Enabled Networks
4/19/2004 Richard Yang
2
Outline
  • Admin
  • Mobility

3
Admin.
  • Project checkpoint due tonight at 11:59pm
  • please send a progress report to David Goldenberg
  • at most three pages
  • Class this Wednesday is deferred to next week
  • we will announce results of incentive routing and
    localization on April 28
  • Project appointments
  • please sign up for time slots using the link on the class home page
  • Two networking talks
  • Tuesday, 10:30-11:30 am, AKW 200, on BGP
  • Wednesday, 2:30-3:30 pm, AKW 400, on Random Access Networks

4
Big Picture: Three Challenges
  • Wireless
  • Portability
  • Mobility

5
Controlled Mobility
  • Controlled mobility means moving in order to achieve some objective
  • Discussion: what are some examples of controlled mobility?

6
Controlled Mobility
  • There are many types of controlled mobility in
    the networking and mobile computing context,
    e.g.,
  • move the positions of the network nodes
  • either physical or logical
  • process/state migration to track target/user
    mobility, e.g., let your X Windows sessions
    follow you
  • path selection
  • use mobile agents
  • There are many examples of controlled mobility in the networking context
  • I will give just a few examples
  • a unifying conceptual framework would be great

7
Path Selection: Ants Foraging
  • some ants use pheromone (scent) to create trails
    to food
  • the probability that other ants will follow a
    trail is proportional to the density of
    pheromone
  • This is a type of reinforcement learning algorithm (a minimal sketch of pheromone-proportional path choice follows below)
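
A minimal sketch of the idea on this slide: a trail is chosen with probability proportional to its pheromone level, the chosen trail is reinforced, and all trails evaporate slightly. The deposit and evaporation constants are illustrative assumptions, not values from the slides.

```python
import random

def choose_trail(pheromone):
    """Pick a trail index with probability proportional to its pheromone density."""
    total = sum(pheromone)
    r, acc = random.uniform(0.0, total), 0.0
    for i, level in enumerate(pheromone):
        acc += level
        if r <= acc:
            return i
    return len(pheromone) - 1

def reinforce(pheromone, chosen, deposit=1.0, evaporation=0.05):
    """Evaporate every trail a little, then deposit pheromone on the chosen one."""
    for i in range(len(pheromone)):
        pheromone[i] *= (1.0 - evaporation)
    pheromone[chosen] += deposit

pheromone = [1.0, 1.0, 1.0]       # three candidate trails, initially equal
for _ in range(100):
    reinforce(pheromone, choose_trail(pheromone))
print(pheromone)                  # one trail typically dominates over time
```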

8
Inspiration: Load-Adaptive Routing
  • Suppose our objective is to discover low-latency paths through the network
  • the routing algorithms we have discussed so far are not load-adaptive
  • however, the latency of each link depends on its load
  • why?

9
A Distributed Algorithm for Computing
User-Optimal Routing
  • A Bellman-Ford-like algorithm combined with reinforcement learning
  • A probabilistic routing scheme
  • Each node maintains a forwarding table

P_{i,k,j} is the routing probability at node i of sending traffic destined for k via neighbor j
10
Protocol for Updating the Forwarding Table at
Node i for Destination k
L_{jk} is the path latency from j to k; l_{ij} is the link latency from i to j
11
Updating Forwarding Probability: The Slow Path
  • Update
  • where

12
Updating Delay Estimation: The Fast Path
  • Update
  • where the fast-path step size satisfies the same conditions as the slow-path step size, and

Discussion: why the above condition?
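
The update equations themselves appear only as figures on slides 11-12 and are not reproduced in this transcript. The sketch below is one plausible instantiation under stated assumptions, not the exact rules from the slides: the fast path keeps an exponentially averaged delay estimate L_{ik}, and the slow path shifts forwarding probability P_{i,k,j} toward neighbors whose reported path beats the current estimate, then renormalizes.

```python
import random

# Hedged sketch of a probabilistic, load-adaptive routing table at node i.
# P[k][j]: probability that node i forwards traffic for destination k via neighbor j.
# L[k]:    node i's current estimate of its delay to destination k.
# Step sizes and the exact update forms are illustrative assumptions.

class ProbabilisticRouter:
    def __init__(self, neighbors, destinations, eta=0.1, alpha=0.05):
        self.neighbors = list(neighbors)
        self.eta = eta      # fast-path step size (delay estimation)
        self.alpha = alpha  # slow-path step size (probability update)
        # start with uniform forwarding probabilities
        self.P = {k: {j: 1.0 / len(self.neighbors) for j in self.neighbors}
                  for k in destinations}
        self.L = {k: 0.0 for k in destinations}          # delay estimate to k
        self.link = {j: 1.0 for j in self.neighbors}     # measured link latency l_ij

    def pick_next_hop(self, k):
        """Sample a neighbor according to the forwarding probabilities for k."""
        r, acc = random.random(), 0.0
        for j, p in self.P[k].items():
            acc += p
            if r <= acc:
                return j
        return self.neighbors[-1]

    def fast_path_update(self, k, j, L_jk):
        """On feedback L_jk (neighbor j's delay to k), refresh the delay estimate."""
        sample = self.link[j] + L_jk
        self.L[k] += self.eta * (sample - self.L[k])

    def slow_path_update(self, k, j, L_jk):
        """Shift probability toward j when the path via j beats the current estimate."""
        advantage = self.L[k] - (self.link[j] + L_jk)
        self.P[k][j] = max(self.P[k][j] + self.alpha * advantage, 1e-6)
        total = sum(self.P[k].values())
        for n in self.P[k]:                 # renormalize to keep a distribution
            self.P[k][n] /= total

r = ProbabilisticRouter(neighbors=["n1", "n2"], destinations=["d"])
r.fast_path_update("d", "n1", L_jk=3.0)
r.slow_path_update("d", "n1", L_jk=3.0)
print(r.pick_next_hop("d"), r.P["d"])
```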
13
Other Comments on the Algorithm?
Why not set L_{ik}(t+1) to the weighted average?
14
Performance
15
Responsiveness of the Routing Algorithm
16
Extension
  • Application-specific reinforcement-based
    routing/forwarding
  • The Directed Diffusion paradigm
  • Elements
  • Naming
  • data is named using attribute-value pairs
  • Interests
  • a node requests data by sending interests for
    named data
  • Gradients
  • gradients are set up within the network to draw events, i.e., data matching the interest
  • Reinforcement
  • the sink reinforces particular neighbors to draw higher-quality (i.e., higher data rate) events

17
Naming
  • Content-based naming
  • Tasks are named by a list of <attribute, value> pairs
  • Task description specifies an interest for data matching the attributes (a minimal matching sketch follows after the request below)
  • Example: animal tracking

Request
Interest (task) description:
  Type = four-legged animal
  Interval = 20 ms
  Duration = 1 minute
  Location = [-100, -100, 200, 400]
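
A hedged sketch of content-based naming: an interest (task) is a set of attribute-value pairs, and a data event is matched against it attribute by attribute. The field names mirror the animal-tracking example above; the matching rule (exact match on type plus a rectangle test on location) is an illustrative assumption.

```python
# Interest as attribute-value pairs (field names mirror the example on the slide).
interest = {
    "type": "four-legged animal",
    "interval_ms": 20,                   # requested reporting interval
    "duration_s": 60,                    # how long the task stays active
    "rect": (-100, -100, 200, 400),      # region of interest (x1, y1, x2, y2)
}

def matches(interest, data):
    """Return True if a data event matches the interest's type and region."""
    x1, y1, x2, y2 = interest["rect"]
    x, y = data["location"]
    return data["type"] == interest["type"] and x1 <= x <= x2 and y1 <= y <= y2

event = {"type": "four-legged animal", "location": (10, 50), "intensity": 0.7}
print(matches(interest, event))          # True
```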
18
Interest
  • The sink periodically broadcasts interest
    messages to each of its neighbors
  • Every node maintains an interest cache
  • each item corresponds to a distinct interest
  • each entry in the cache has several fields
  • timestamp: last received matching interest
  • several gradients: data rate, duration, direction

19
Setting Up Gradient
[Figure: the sink sends an interest (query) toward the source; a gradient records who is interested (data rate, duration, direction).]
20
Data Propagation
  • When a node receives data
  • find a matching interest entry in its cache
  • examine the gradient list and send out data at the gradient-specified rate
  • a data cache keeps track of recently seen data items (loop prevention)
  • the data message is unicast individually to each relevant neighbor (a minimal sketch follows below)
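
A hedged sketch of this data-propagation step at an intermediate node: look up a matching interest in the interest cache, suppress duplicates via a data cache (loop prevention), and unicast the event to each neighbor in the gradient list with a non-zero rate. The structures and field names are illustrative assumptions, and matching is simplified to a type-only check.

```python
class DiffusionNode:
    def __init__(self):
        # interest cache: one entry per distinct interest, each with its gradient list
        self.interest_cache = []   # e.g. [{"type": ..., "gradients": [{"neighbor": ..., "rate": ...}]}]
        self.data_cache = set()    # recently seen (type, timestamp) pairs

    def on_data(self, event, send):
        entry = next((e for e in self.interest_cache
                      if e["type"] == event["type"]), None)   # simple type-only match
        if entry is None:
            return                                   # no matching interest: drop
        key = (event["type"], event["timestamp"])
        if key in self.data_cache:
            return                                   # already seen: drop to prevent loops
        self.data_cache.add(key)
        for g in entry["gradients"]:
            if g["rate"] > 0:                        # forward only along active gradients
                send(g["neighbor"], event)           # unicast to each relevant neighbor

node = DiffusionNode()
node.interest_cache.append({"type": "four-legged animal",
                            "gradients": [{"neighbor": "A", "rate": 1.0}]})
node.on_data({"type": "four-legged animal", "timestamp": 17},
             send=lambda nbr, ev: print("forward to", nbr, ev))
```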

21
Reinforcing the Best Path
[Figure: starting from low-rate events, each node reinforces the neighbor from which it first received the latest event (low delay); the reinforcement is an increased interest sent to that neighbor.]
22
Evaluation: Surveillance
  • Five sources are randomly selected within a 70m x
    70m corner in the field
  • Five sinks are randomly selected across the
    field
  • High data rate is 2 events/sec
  • Low data rate is 0.02 events/sec
  • Event size: 64 bytes
  • Interest size: 36 bytes

23
Average Dissipated Energy
[Figure: average dissipated energy (Joules/node/received event) vs. network size, up to 300 nodes; diffusion dissipates noticeably less energy per received event than flooding.]
24
Motivation
  • The previous diffusion approach assumes that
    information is always sent back to sinks
  • this may consume a lot of energy
  • what if it is enough that, at any time, just one node keeps track of the information?
  • Example: tracking a mobile target

25
Target Tracking
26
Information-Driven Diffusion
  • Detection model
  • z_i(t) = h(x(t), λ_i(t)), where x(t) is the parameter to be estimated, and λ_i(t) and z_i(t) are the characteristics and measurement of node i, respectively
  • Example: for sensors measuring sound amplitude,
    z_i = a / ||x_i − x||^(α/2) + w_i,
    where a is the target amplitude, α is the attenuation coefficient, and w_i is Gaussian noise
  • State (belief)
  • representation of the current a posteriori distribution of x given measurements z_1, ..., z_N:
    p(x | z_1, ..., z_N)

27
Node Selection
  • j₀ = arg max_{j ∈ A} ψ( p(x | {z_i : i ∈ U} ∪ {z_j}) )
  • A = {1, ..., N} − U is the set of nodes whose measurements have not yet been incorporated into the belief
  • ψ is an information utility function defined on the class of all probability distributions of x
  • intuitively, select the node j to query such that the information utility of the distribution updated with z_j is maximal (a grid-based sketch follows below)

[Figure: a sensor's measurement moves the current belief state to the next belief state.]
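
A hedged sketch of the belief and the idealized selection rule above, using a discretized 1-D belief over candidate target positions. The sensing model follows the amplitude example on the previous slide; the grid, noise level, the entropy-based choice of ψ, and hypothesizing z_j from the current belief mean (since z_j is not actually known yet, see slide 37) are all illustrative assumptions.

```python
import numpy as np

grid = np.linspace(0.0, 100.0, 501)            # candidate target positions x
belief = np.full(grid.shape, 1.0 / grid.size)  # current belief p(x | {z_i : i in U})

def predicted_measurement(x_sensor, x_target, a=10.0, alpha=2.0):
    """Noiseless amplitude reading of a sensor at x_sensor for a target at x_target."""
    return a / np.maximum(np.abs(x_sensor - x_target), 1e-3) ** (alpha / 2)

def updated_belief(belief, x_sensor, z, sigma=0.5):
    """Bayes update: p(x | ..., z_j) is proportional to p(x | ...) * p(z_j | x)."""
    likelihood = np.exp(-0.5 * ((z - predicted_measurement(x_sensor, grid)) / sigma) ** 2)
    post = belief * likelihood
    return post / post.sum()

def psi(p):
    """Information utility: negative entropy (higher = more concentrated belief)."""
    return float(np.sum(p * np.log(p + 1e-12)))

def select_node(belief, candidates):
    """Idealized rule: pick j in A maximizing psi of the updated belief, with z_j
    hypothesized from the current belief mean (an assumption)."""
    x_hat = float(np.sum(grid * belief))                     # current estimate of x
    scores = {j: psi(updated_belief(belief, x_j, predicted_measurement(x_j, x_hat)))
              for j, x_j in candidates.items()}
    return max(scores, key=scores.get)

print(select_node(belief, {1: 10.0, 2: 55.0, 3: 90.0}))      # not-yet-queried set A
```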
28
Outline
  • Admin
  • Mobility
  • diffusion
  • deployment and coverage

29
Deployment and Coverage
  • There are many formulations; here I first give one example
  • Given (uniform) initial node positions, assume events happen at different positions
  • known to all nodes (by flooding)
  • how to move the nodes to match the event distribution, i.e., the more likely an event is to happen at a place, the more likely a node is to be there?

[Figure: event positions vs. initial node positions.]
30
A Solution: Consider the 1-Dimensional Case
  • Each node keeps track of a histogram of event positions
  • Assume the (initial) position of a node is x_0
  • Partition the range into buckets
  • Map the node's old position to a new position
  • see the figure at right (a minimal sketch follows below)
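
A minimal sketch of one way to realize the mapping pictured on the slide, assuming it is the inverse-CDF ("histogram equalization") transform: a node whose initial position sits at fraction f of the range moves to the position below which fraction f of the observed events lie. The bucket count, range bounds, and interpolation are illustrative assumptions.

```python
def new_position(x0, events, lo=0.0, hi=100.0, buckets=10):
    """Map a node's initial position x0 in [lo, hi] to a new position whose rank
    under the event distribution equals x0's rank under the uniform distribution."""
    width = (hi - lo) / buckets
    hist = [0] * buckets
    for e in events:                                  # histogram of event positions
        hist[min(int((e - lo) / width), buckets - 1)] += 1
    total = float(sum(hist)) or 1.0
    target = (x0 - lo) / (hi - lo)                    # fraction of mass that should lie below
    acc = 0.0
    for b, count in enumerate(hist):
        frac = count / total
        if acc + frac >= target:
            inside = (target - acc) / frac if frac > 0 else 0.0   # interpolate inside bucket b
            return lo + (b + inside) * width
        acc += frac
    return hi

events = [70, 72, 75, 80, 81, 85, 90, 20]             # mostly clustered on the right
print([round(new_position(x, events), 1) for x in (10.0, 50.0, 90.0)])
```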

31
Coverage
  • Discussion: how would you define the deployment and coverage problem?

32
Coverage with Worst-Case Guarantee
  • Consider a coverage of region Ω with N nodes V = {v_1, v_2, ..., v_N}, where v_i is the position of node i
  • For any point p in the region Ω, define d(p, V) = min_i distance(v_i, p)
  • Define the quality of the coverage V as d(Ω, V) = max_p d(p, V)
  • A good coverage V is one which minimizes d(Ω, V): min_V d(Ω, V) (a sampling-based sketch follows below)
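
A hedged sketch of the worst-case coverage measure d(Ω, V): the largest nearest-node distance over the region, approximated here by sampling Ω on a square grid. The region bounds and grid resolution are illustrative assumptions.

```python
import math

def coverage_quality(nodes, lo=0.0, hi=100.0, steps=50):
    """Approximate d(Omega, V): the largest nearest-node distance over the region."""
    worst = 0.0
    for i in range(steps + 1):
        for j in range(steps + 1):
            px = lo + (hi - lo) * i / steps
            py = lo + (hi - lo) * j / steps
            nearest = min(math.hypot(px - x, py - y) for (x, y) in nodes)  # d(p, V)
            worst = max(worst, nearest)                                    # max over p
    return worst

print(coverage_quality([(25.0, 25.0), (75.0, 75.0)]))
```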

33
Solve the One Node Case
  • Where is the best position of the single node?

34
Mobility Rule
- Move toward the furthest vertex.
- If there is more than one such vertex, move along the minimum-norm vector in the convex hull of those vertices.
35
Backup Slides
36
More Extensions: Tracking Mobile Targets
  • Many examples, e.g.,
  • PlanSys SECURES Network
  • patented acoustic sensor network for gunshot
    detection
  • wireless nodes contain ultra-low power processing
    for automatic detection, discrimination, and
    localization of gunshots.
  • nodes operate for 12 months on a battery pack

SECURES node
http://www.plansys.com
37
Node Selection (in practice)
  • z_j is unknown before it is sent back
  • best average case:
    j₀ = arg max_{j ∈ A} E_{z_j}[ ψ( p(x | {z_i : i ∈ U} ∪ {z_j}) ) ]
  • maximizing the worst case:
    j₀ = arg max_{j ∈ A} min_{z_j} ψ( p(x | {z_i : i ∈ U} ∪ {z_j}) )
  • maximizing the best case:
    j₀ = arg max_{j ∈ A} max_{z_j} ψ( p(x | {z_i : i ∈ U} ∪ {z_j}) )
  • (a sampling-based sketch of all three criteria follows below)
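
A hedged sketch of the three criteria above, reusing grid, predicted_measurement, updated_belief, and psi from the node-selection sketch on slide 27. Since z_j is unknown before it is sent back, candidate measurements are hypothesized here by sampling target positions from the current belief; that sampling approximation is an illustrative assumption.

```python
import numpy as np

def candidate_utilities(belief, x_j, n_samples=30):
    xs = np.random.choice(grid, size=n_samples, p=belief)   # hypothesized target positions
    zs = predicted_measurement(x_j, xs)                      # corresponding noiseless readings
    return np.array([psi(updated_belief(belief, x_j, z)) for z in zs])

def select_node_in_practice(belief, candidates, mode="average"):
    scores = {}
    for j, x_j in candidates.items():
        u = candidate_utilities(belief, x_j)
        scores[j] = {"average": u.mean(),      # best average case: E_{z_j}[psi(...)]
                     "worst":   u.min(),       # maximize the worst case: min_{z_j} psi(...)
                     "best":    u.max()}[mode] # maximize the best case:  max_{z_j} psi(...)
    return max(scores, key=scores.get)
```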

38
Information Utility Measures
  • covariance-based:
    ψ(p_X) = −det(Σ), ψ(p_X) = −trace(Σ)
  • Fisher information matrix:
    ψ(p_X) = −det(F(x)), ψ(p_X) = −trace(F(x))
  • entropy of estimation uncertainty:
    ψ(p_X) = −H(P), ψ(p_X) = −h(p_X)

39
Information Utility Measures
  • volume of the high-probability region
  • Ω_δ = { x ∈ S : p(x) ≥ δ }, where δ is chosen so that P(Ω_δ) = β for a given β
  • ψ(p_X) = −vol(Ω_δ)
  • sensor-geometry-based measures
  • in cases where the utility is a function of sensor location only
  • ψ(p_X) = −(x_i − x_0)ᵀ Σ⁻¹ (x_i − x_0), where x_0 is the current estimate of the target location
  • also called the Mahalanobis distance
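
A hedged sketch of the covariance-based and entropy-based measures from the previous two slides, computed from a sample-based or grid-based belief. The sample-based approximation is an illustrative assumption.

```python
import numpy as np

def covariance_utilities(samples):
    """samples: (n, d) array of hypothesized target states drawn from p_X."""
    cov = np.atleast_2d(np.cov(samples, rowvar=False))
    return {"neg_det": -np.linalg.det(cov),      # psi(p_X) = -det(Sigma)
            "neg_trace": -np.trace(cov)}         # psi(p_X) = -trace(Sigma)

def entropy_utility(belief):
    """Discrete (grid) belief P: psi(p_X) = -H(P)."""
    p = np.asarray(belief)
    return float(np.sum(p * np.log(p + 1e-12)))

samples = np.random.normal(loc=[50.0, 50.0], scale=[5.0, 2.0], size=(500, 2))
print(covariance_utilities(samples))
print(entropy_utility(np.full(100, 0.01)))       # uniform belief over 100 cells
```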

40
Composite Objective Function
  • M_c(λ_l, λ_j, p(x | {z_i : i ∈ U}))
    = α · M_u(p(x | {z_i : i ∈ U}), λ_j) + (1 − α) · M_a(λ_l, λ_j)
  • M_u is the information utility measure
  • M_a is the communication cost measure
  • α ∈ [0, 1] balances their contributions
  • λ_l is the characteristics of the current sensor l
  • j₀ = arg max_{j ∈ A} M_c(λ_l, λ_j, p(x | {z_i : i ∈ U}))
41
Incremental Update of Belief
  • p(x | z_1, ..., z_N) = c · p(x | z_1, ..., z_{N−1}) · p(z_N | x)
  • z_N is the new measurement
  • p(x | z_1, ..., z_{N−1}) is the previous belief
  • p(x | z_1, ..., z_N) is the updated belief
  • c is a normalizing constant
  • for a linear system with Gaussian distributions, a Kalman filter is used (a minimal scalar update is sketched below)
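
A hedged sketch of the linear-Gaussian case mentioned in the last bullet: with a Gaussian belief N(mean, var) and a scalar measurement z = H·x + noise of variance R, the Kalman measurement update plays the role of p(x | z_1..z_N) = c · p(x | z_1..z_{N−1}) · p(z_N | x). The scalar model and the values of H and R are illustrative assumptions.

```python
def kalman_update(mean, var, z, H=1.0, R=0.25):
    """Fold one new measurement z into the Gaussian belief (mean, var)."""
    K = var * H / (H * var * H + R)       # Kalman gain
    new_mean = mean + K * (z - H * mean)  # correct the estimate with the innovation
    new_var = (1.0 - K * H) * var         # uncertainty shrinks after the update
    return new_mean, new_var

belief = (0.0, 4.0)                       # prior: mean 0, variance 4
for z in (1.2, 0.9, 1.1):
    belief = kalman_update(*belief, z)    # incremental, one measurement at a time
print(belief)                             # mean moves toward ~1.0, variance shrinks
```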

42
IDSQ Algorithm