Distributed Information and Signal Processing in Sensor Network

1
Distributed Information and Signal Processing in
Sensor Network
  • CS213
  • Winter 2003
  • Hanbiao Wang

2
Outline
  • Information-based sensor selection
  • Fundamental question about collaboration
  • Who should be in the team?
  • Two competing factors
  • Utility of a sensor's data
  • Cost of getting a sensor's data
  • Beamforming
  • Processing of synchronized data from multiple
    sensors

3
Collaborative Sensing
  • CS213
  • Winter 2003
  • Hanbiao Wang

4
A tracking scenario
5
Formulation of IDSQ
  • Notation
  • x: target state; zi: sensor data
  • p(x | z1, ..., zj-1): current belief state of the
    target based on data z1, ..., zj-1
  • p(x | z1, ..., zj-1, zj): new belief state of the
    target with additional data zj
  • IDSQ selects the zj that maximizes the weighted
    difference between utility gain and cost
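The selection rule above can be sketched in code. Everything here (the weighting parameter alpha, the utility and cost tables, the function names) is hypothetical and only illustrates the weighted utility-minus-cost trade-off:

```python
# Hypothetical sketch of the IDSQ selection rule: among candidate
# sensors, pick the one maximizing a weighted difference between
# predicted information utility and query cost. The utility and
# cost values below are made up for illustration.

def idsq_select(candidates, utility, cost, alpha=0.7):
    """Return the candidate j maximizing
    alpha * utility(j) - (1 - alpha) * cost(j)."""
    def objective(j):
        return alpha * utility(j) - (1.0 - alpha) * cost(j)
    return max(candidates, key=objective)

# Toy example: three sensors with invented utility/cost tables.
util = {1: 0.9, 2: 0.5, 3: 0.8}
cst = {1: 0.8, 2: 0.1, 3: 0.2}
best = idsq_select([1, 2, 3], util.get, cst.get, alpha=0.7)  # -> 3
```

Sensor 1 has the highest utility but also the highest cost; with alpha = 0.7 the trade-off favors sensor 3. Sliding alpha toward 1 makes the rule purely information-driven, toward 0 purely cost-driven.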

6
A special case
  • Sensors measure distance to target
  • A leader node requests data from a sensor to
    improve belief state about stationary target
    location.
  • Mahalanobis distance as the information metric,
    Euclidean distance as the energy cost
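A minimal sketch of the information metric for this special case, with a hand-inverted 2x2 covariance and invented positions. For an elongated Gaussian belief, a sensor lying along the long axis of the uncertainty ellipse has a smaller Mahalanobis distance to the belief mean, and a range reading from that direction shrinks the belief most:

```python
def mahalanobis2(p, mean, cov):
    """Squared Mahalanobis distance (p - mean)^T cov^-1 (p - mean)
    for a 2-D point and a 2x2 covariance matrix."""
    dx, dy = p[0] - mean[0], p[1] - mean[1]
    (a, b), (c, d) = cov
    det = a * d - b * c
    # Closed-form inverse of a 2x2 matrix, assuming det != 0.
    inv = ((d / det, -b / det), (-c / det, a / det))
    return (dx * (inv[0][0] * dx + inv[0][1] * dy)
            + dy * (inv[1][0] * dx + inv[1][1] * dy))

# Elongated Gaussian belief: large variance along x, small along y.
mean = (0.0, 0.0)
cov = ((9.0, 0.0), (0.0, 1.0))
# Two sensors at equal Euclidean distance 3 from the mean:
along_major = (3.0, 0.0)   # along the elongated axis
along_minor = (0.0, 3.0)   # across it
d_major = mahalanobis2(along_major, mean, cov)  # 9/9 = 1
d_minor = mahalanobis2(along_minor, mean, cov)  # 9/1 = 9
```

Both sensors cost the same to query (equal Euclidean distance), but the one along the major axis is much closer in the Mahalanobis sense, so it is the preferred choice.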

7
IDSQ example
8
Challenge of sensor selection
  • The selection decision has to be made without
    explicit knowledge of the measurements residing
    at the candidate sensors
  • The decision has to be based solely on sensor
    characteristics and the current belief state.

9
Sensor selection example
10
Information utility
  • From now on, let's focus on information utility
    and ignore communication cost.
  • Requirements of an information utility measure:
  • The bigger the utility, the more useful the data
    to the sensing application.
  • The utility of a piece of non-local data can be
    predicted before obtaining the data, based solely
    on the current belief state and sensor
    characteristics.

11
Information-theoretic measure: entropy
  • Entropy measures the randomness of a random
    variable, e.g., the target location
  • Entropy serves as an information utility measure
    of a belief state
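Entropy of a discretized belief can be computed directly. The two beliefs below are invented to show that a diffuse belief scores higher (more randomness left to remove) than a peaked one:

```python
import math

def entropy(belief):
    """Shannon entropy (in bits) of a discrete belief state,
    e.g. a probability mass over candidate target locations."""
    return -sum(p * math.log2(p) for p in belief if p > 0)

# A diffuse belief is more random (higher entropy) than a
# concentrated one, so entropy reduction can score sensor data.
diffuse = [0.25, 0.25, 0.25, 0.25]   # entropy = 2 bits
peaked = [0.97, 0.01, 0.01, 0.01]    # entropy well below 1 bit
```

The data whose incorporation is expected to lower entropy the most is the most useful to the tracking task.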

12
Back to the Mahalanobis distance measure
  • Entropy-based utility is difficult to compute in
    practice.
  • Mahalanobis distance for a special case:
  • Belief state's distribution is Gaussian or very
    elongated
  • Sensors are range sensors

13
Ideal belief update
  • p(x(t+1) | z(t+1)) ∝ p(zj(t+1) | x(t+1)) ·
    ∫ p(x(t+1) | x(t)) p(x(t) | z(t)) dx(t), where
  • p(x(t) | z(t)): current belief state given the
    history of measurements up to time t.
  • p(x(t+1) | z(t+1)): next belief state with
    additional data zj(t+1) from sensor j.
  • p(x(t+1) | x(t)): dynamic model, determines the
    next target state based on the current target
    state.
  • p(zj(t+1) | x(t+1)): measurement model, predicts
    the measurement at sensor j given the next target
    state.

14
Approximation of belief state
  • The belief state is represented by a probability
    distribution over a discrete grid of cells
  • The integral becomes a summation over cells
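A sketch of the discretized update for a stationary target (so the dynamic model drops out): the integral reduces to a per-cell product of prior and measurement likelihood, followed by normalization. The 1-D grid, sensor position, and Gaussian range model are all illustrative assumptions:

```python
import math

def grid_update(prior, likelihood):
    """posterior[k] ∝ likelihood[k] * prior[k] over grid cells."""
    post = [l * p for l, p in zip(likelihood, prior)]
    z = sum(post)                    # normalizing constant
    return [v / z for v in post]

def range_likelihood(cells, sensor_x, z, sigma=1.0):
    """Gaussian likelihood of a range reading z for each cell."""
    return [math.exp(-0.5 * ((abs(x - sensor_x) - z) / sigma) ** 2)
            for x in cells]

cells = [0.0, 1.0, 2.0, 3.0, 4.0]    # discretized target locations
prior = [0.2] * 5                    # uniform initial belief
lik = range_likelihood(cells, sensor_x=0.0, z=3.0)
posterior = grid_update(prior, lik)  # peaks at the cell at x = 3
```

After one range reading of 3 from a sensor at the origin, the belief concentrates around the cell at distance 3, as expected.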

15
Approximation of observation likelihood function
  • The measurement zj is unknown before querying, so
    the expected likelihood function is used as an
    approximation

16
Belief state and expected likelihood function
17
Expected posterior belief
  • An approximation of the true next belief state
  • Its information utility can be measured
  • Choose the sensor j that maximizes information
    utility
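The preceding slides can be combined into one selection loop: form the expected likelihood function from readings predicted by the current belief, compute the expected posterior, and pick the sensor whose expected posterior has the lowest entropy. The grid, sensor positions, and Gaussian range model below are invented for illustration:

```python
import math

cells = [0.0, 1.0, 2.0, 3.0, 4.0]      # discretized target locations

def entropy(b):
    """Shannon entropy (bits) of a discrete belief."""
    return -sum(p * math.log2(p) for p in b if p > 0)

def normalize(b):
    s = sum(b)
    return [v / s for v in b]

def lik(sensor_x, z, x, sigma=0.5):
    """Illustrative Gaussian range-sensor model p(z | x)."""
    return math.exp(-0.5 * ((abs(x - sensor_x) - z) / sigma) ** 2)

def expected_posterior(prior, sensor_x):
    """Expected likelihood g(x) = sum_k w_k p(z_k | x), where cell k
    predicts reading z_k = |x_k - sensor_x| with weight prior[k];
    the expected posterior is then g(x) * prior(x), normalized."""
    g = [sum(w * lik(sensor_x, abs(xk - sensor_x), x)
             for xk, w in zip(cells, prior))
         for x in cells]
    return normalize([gx * p for gx, p in zip(g, prior)])

def select_sensor(prior, sensors):
    """Choose the sensor minimizing expected-posterior entropy."""
    return min(sensors,
               key=lambda s: entropy(expected_posterior(prior, s)))

# Bimodal belief: target at cell 0 (weight 0.6) or cell 4 (0.4).
prior = [0.6, 0.0, 0.0, 0.0, 0.4]
# A sensor at 2.0 reads range 2 under both hypotheses, so it cannot
# separate them; a sensor at 0.0 would read 0 or 4 and can.
best = select_sensor(prior, [0.0, 2.0])   # -> 0.0
```

Note that all of this is computed from the current belief and sensor characteristics alone, before any data is actually requested, which is exactly the constraint from the sensor-selection challenge above.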

18
Localize a stationary target by selecting nearest
neighbor
19
Localize a stationary target by selecting
neighbor with min Mahalanobis distance
20
Tracking a moving target
21
Tracking error
22
Tracking error
23
Conclusion
  • The information-driven approach balances
    information gain against communication cost
  • Many remaining issues:
  • Efficient belief state representation
  • Practically feasible information utility
    measures