Title: Smart Sleeping Policies for Wireless Sensor Networks
1. Smart Sleeping Policies for Wireless Sensor Networks
- Venu Veeravalli
- ECE Department, Coordinated Science Lab
- University of Illinois at Urbana-Champaign
- http://www.ifp.uiuc.edu/vvv
- (with Jason Fuemmeler)
- IPAM Workshop on Mathematical Challenges and Opportunities in Sensor Networks, Jan 10, 2007
2. Saving Energy in Sensor Networks
- Efficient source coding
- Efficient Tx/Rx design
- Efficient processor design
- Power control
- Efficient routing
3. Active-Sleep Transition
- Paging channel to wake up sensors when needed
- But power for paging channel is usually not negligible compared to power consumed by active sensor
- Passive RF-ID technology?
4. Active-Sleep Transition
- Practical Assumption
  - Sensor that is asleep cannot be communicated with or woken up prematurely → sleep duration has to be chosen when sensor goes into sleep mode
  - Having sleeping sensors could result in communication/sensing performance degradation
- Design Problem
  - Find sleeping policies that optimize tradeoff between energy consumption and performance
5. Sleeping Policies
Duty Cycle Policy
- Sensor sleeps with deterministic or random (with predetermined statistics) duty cycle
- Synchronous or asynchronous across sensors
- Duty cycle chosen to provide desired tradeoff between energy and performance
- Simple to implement, generic (see the sketch below)
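As a concrete reference point, here is a minimal sketch of a duty-cycle schedule; the function name, the period/awake_fraction parameterization, and the random phase used for asynchronous operation are illustrative assumptions rather than details taken from the slides.

```python
import random

def duty_cycle_schedule(num_slots, period, awake_fraction, synchronous=True, rng=random):
    """Boolean awake/asleep schedule for one sensor (illustrative sketch).

    The sensor is awake for roughly awake_fraction of each period; with
    synchronous=False each sensor draws a random phase offset so that
    duty cycles are staggered across the network."""
    awake_slots = max(1, round(awake_fraction * period))
    offset = 0 if synchronous else rng.randrange(period)
    return [((t + offset) % period) < awake_slots for t in range(num_slots)]

# Example: a 10% duty cycle, asynchronous across sensors.
schedule = duty_cycle_schedule(num_slots=50, period=10, awake_fraction=0.1,
                               synchronous=False)
```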
6. Smart (Adaptive) Policies
- Use all available information about the state of the sensor system to set sleep time of sensor
- Application specific → system-theoretic approach required
- Potential energy savings over duty cycle policies
7. Tracking in Dense Sensor Network
- Sensor detects presence of object within close vicinity
- Sensors switch between active and sleep modes to save energy
- Sensors need to come awake in order to detect object
8. Design Problem
- Having sleeping sensors could result in tracking errors
- Design Problem
  - Find sleeping policies that optimize tradeoff between energy consumption and tracking error
9. General Problem Description
- Sensors distributed in two-dimensional field
- Sensor that is awake can generate an observation
- Object follows random (Markov) path whose statistics are assumed to be known
10. General Problem Description
- Central controller communicates with sensors that are awake
- Sensor that wakes up remains awake for one time unit, during which it (see the sketch below)
  - reports its observation to the central controller
  - receives new sleep time from central controller
  - sets its sleep timer to new sleep time and enters sleep mode
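A minimal sketch of one time unit of this interaction, assuming a discrete-time slotted model; controller_step, update_belief, and policy are hypothetical names standing in for whatever filter and sleeping policy are actually in use.

```python
def controller_step(belief, residual_sleep, observations, policy, update_belief):
    """One time unit of the assumed protocol: sensors whose timers have
    expired are awake, they report observations, the controller updates
    its belief about the object location, and each awake sensor receives
    a new sleep time before going back to sleep."""
    awake = [s for s, r in residual_sleep.items() if r == 0]
    belief = update_belief(belief, awake, observations)     # step 1: reports
    for s in awake:                                         # step 2: new sleep times
        residual_sleep[s] = policy(s, belief)
    for s in residual_sleep:                                # step 3: timers count down
        residual_sleep[s] = max(0, residual_sleep[s] - 1)
    return belief, residual_sleep

# Toy usage with a trivial filter and a fixed sleep time of 3 slots.
belief, sleep = controller_step(
    belief={0: 0.5, 1: 0.5},
    residual_sleep={"s1": 0, "s2": 2},
    observations={"s1": None},
    policy=lambda s, b: 3,
    update_belief=lambda b, awake, obs: b,
)
```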
11. Markov Decision Process
- Markov model for object movement with absorbing terminal state when object leaves system
- State consists of two parts (see the sketch below)
  - Position of object
  - Residual sleep times of sensors
- Control inputs
  - New sleep times
- Exogenous input
  - Markov object movement
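A sketch of this state/control/exogenous-input structure in code. The class and function names are illustrative, and the absorbing "object has left the system" state is assumed to be modeled as an extra absorbing row of the object's transition matrix.

```python
import random
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class SensorNetState:
    """MDP state as on the slide: the object's cell (one index is an
    absorbing 'object has left the system' state) plus the residual
    sleep time of every sensor."""
    object_cell: int
    residual_sleep: Dict[int, int]

def transition(state: SensorNetState,
               new_sleep_times: Dict[int, int],
               P: List[List[float]],
               rng=random) -> SensorNetState:
    """One step: apply the controls (new sleep times for the awake
    sensors), count every timer down, and move the object according to
    its Markov chain (the exogenous input)."""
    sleep = dict(state.residual_sleep)
    sleep.update(new_sleep_times)                        # controls
    sleep = {s: max(0, r - 1) for s, r in sleep.items()}
    row = P[state.object_cell]                           # exogenous movement
    next_cell = rng.choices(range(len(row)), weights=row)[0]
    return SensorNetState(next_cell, sleep)
```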
12. Partially Observable Markov Decision Process (POMDP)
- The state of the system is only partially observable at each time step (POMDP)
- Object position not known -- only have distribution for where the object might be
- Can reformulate MDP problem in terms of this distribution (sufficient statistic) and residual sleep times (see below)
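In symbols (notation assumed here, not taken from the slides), the sufficient statistic pairs the posterior distribution of the object position with the vector of residual sleep times:

```latex
\[
  b_k(x) \;=\; \Pr\!\left(x_k = x \mid \text{observations up to time } k\right),
  \qquad
  r_k \;=\; (r_{k,1},\ldots,r_{k,n}),
\]
\[
  \text{and the pair } (b_k, r_k) \text{ serves as the information state on which the new sleep times are chosen.}
\]
```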
13. Sensing Model and Cost Structure
- Sensing Model: each sensor that is awake provides a noisy observation related to object location
- Energy Cost: each sensor that is awake incurs a cost of c
- Tracking Cost: distance measure d(.,.) between actual and estimated object location (per-stage cost sketched below)
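One per-stage cost consistent with these three ingredients (the symbols here are assumed): c for every awake sensor plus the tracking distance between the true and estimated locations, with the design goal of trading off the expected sum of these stage costs.

```latex
\[
  g_k \;=\; c \cdot \bigl|\{\ell : \text{sensor } \ell \text{ awake at time } k\}\bigr|
        \;+\; d\!\left(x_k, \hat{x}_k\right).
\]
```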
14. Dynamic System Model
[Block diagram: sensor observations feed a nonlinear filter, which produces the posterior distribution of the object location]
15. Simple Sensing, Object Movement, Cost Model
- Sensors distributed in two-dimensional field
- Sensor that is awake detects object without error within its sensing range
- Sensing ranges cover field of interest without overlap
- Object follows Markov path from cell to cell
- Tracking cost of 1 per unit time that object not seen (simulation sketch below)
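A Monte-Carlo sketch of this simple model, specialized to a line of n cells with one sensor per cell (error-free detection in its own cell, unit random walk, tracking cost of 1 whenever the covering sensor sleeps). All names and parameter values are illustrative.

```python
import random

def simulate(policy, n=25, steps=200, energy_cost=0.1, seed=0):
    """Evaluate a sleeping policy on the simple model (illustrative sketch).

    policy(sensor, detected_cell) -> sleep time, where detected_cell is
    the object's cell if it was seen this slot and None otherwise."""
    rng = random.Random(seed)
    sleep = [0] * n                          # residual sleep times, 0 == awake
    obj = rng.randrange(n)                   # object's current cell
    tracking = energy = 0.0
    for _ in range(steps):
        awake = [i for i in range(n) if sleep[i] == 0]
        energy += energy_cost * len(awake)
        detected = obj if sleep[obj] == 0 else None
        if detected is None:
            tracking += 1                    # object in a cell whose sensor sleeps
        for i in awake:                      # awake sensors get new sleep times
            sleep[i] = max(1, policy(i, detected))
        sleep = [max(0, s - 1) for s in sleep]
        obj = min(n - 1, max(0, obj + rng.choice([-1, 1])))   # unit random walk
    return tracking / steps, energy / steps

# Example: a policy that ignores all information and sleeps 5 slots.
err_rate, energy_rate = simulate(lambda sensor, detected: 5)
```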
16. What Can Be Gained
[Plot: tracking errors per unit time (0 to 1) versus number of sensors awake per unit time (0 to n), comparing the Duty Cycle and Always Track policies]
17. Always Track Policy
[Figure: unit random walk movement of object]
18. Always Track Asymptotics
E[sensors awake per unit time] = O(log n)
E[sensors awake per unit time] = O(n^0.5)
19. Dynamic System Model
[Block diagram: sensor observations feed a nonlinear filter, which produces the posterior distribution of the object location]
20. Nonlinear Filter (Distribution Update)
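The equation from the original slide is not reproduced in the extracted text, so the following is the standard Bayesian prediction/update recursion for the posterior, written with assumed notation (Markov kernel P, observations y_{k+1} from the awake sensors):

```latex
\begin{align*}
  \tilde{b}_{k+1}(x) &= \sum_{x'} P(x \mid x')\, b_k(x')
      && \text{(prediction through the Markov prior)} \\[2pt]
  b_{k+1}(x) &= \frac{\Pr\!\left(y_{k+1} \mid x\right) \tilde{b}_{k+1}(x)}
                    {\sum_{x''} \Pr\!\left(y_{k+1} \mid x''\right) \tilde{b}_{k+1}(x'')}
      && \text{(update with the awake sensors' observations)}
\end{align*}
```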
21. Optimal Solution via DP
- Can write down dynamic programming (DP) equations to solve optimization problem and find Bellman equation (sketched below)
- However, state space grows exponentially with number of sensors
- DP solution is not tractable even for relatively small networks
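A sketch of what such a Bellman equation looks like over the information state (b, r), in assumed notation: u assigns new sleep times to the awake sensors, g is the expected per-stage cost, and the expectation is over the object movement and the resulting observations.

```latex
\[
  J(b, r) \;=\; \min_{u}\;
     \Bigl\{\, g(b, r, u) \;+\; \mathbb{E}\bigl[\, J(b', r') \,\big|\, b, r, u \,\bigr] \Bigr\}.
\]
```

The residual-sleep vector r alone already ranges over a set whose size grows exponentially with the number of sensors, which is what makes exact DP intractable.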
22. Separating the Problem
- Problem separates into a set of simpler problems (one for each sensor) if
  - Cost can be written as sum of costs under control of each sensor (always true)
  - Other sensors' actions do not affect state evolution in future (only true if we make additional unrealistic assumptions)
- We make unrealistic assumptions only to generate a policy, which can then be applied to the actual system (decomposition sketched below)
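A sketch of the decomposition being invoked, in assumed notation: the total cost always splits into per-sensor terms, and under the additional (admittedly unrealistic) assumption that sensor ℓ's term is unaffected by the other sensors' sleep times, the minimization decouples.

```latex
\[
  J(\pi) \;=\; \mathbb{E}\Bigl[\sum_{k}\sum_{\ell} g_\ell\bigl(s_k, u_{\ell,k}\bigr)\Bigr]
  \;=\; \sum_{\ell} J_\ell(\pi_\ell)
  \quad\Longrightarrow\quad
  \min_{\pi} J(\pi) \;=\; \sum_{\ell} \min_{\pi_\ell} J_\ell(\pi_\ell),
\]
```

so that one small subproblem can be solved per sensor.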
23. FCR Solution
- At the time a sensor is set to sleep, assume we will have no future observations of the object (after the sensor comes awake)
- Policy is to wake up at the first time that expected tracking cost exceeds expected energy cost (see the sketch below)
- Thus termed the First Cost Reduction (FCR) solution
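A sketch of one way to compute the FCR sleep time, under the interpretation that, with no future observations, the per-slot tracking cost a sensor could avert equals the probability that the object is in its cell (as in the simple error-free sensing model). The use of numpy, the function name, and the parameterization are assumptions.

```python
import numpy as np

def fcr_sleep_time(belief, P, sensor_cell, energy_cost, max_sleep=1000):
    """Propagate the belief forward with no observations and wake at the
    first time the expected tracking cost averted by being awake (here:
    the probability that the object is in this sensor's cell) exceeds
    the per-slot energy cost c."""
    b = np.asarray(belief, dtype=float)
    for t in range(1, max_sleep + 1):
        b = b @ P                            # prediction only, no observations
        if b[sensor_cell] > energy_cost:     # expected benefit exceeds cost
            return t
    return max_sleep
```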
24. QMDP Solution
- At the time a sensor is set to sleep, assume we will know the location of the object perfectly in the future (after the sensor comes awake)
- Can solve for policy with low complexity (see the sketch below)
- Assuming more information than is actually available yields a lower bound on optimal performance!
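In its generic form (notation assumed), a QMDP-style rule evaluates the cost-to-go Q_MDP(x, u) of the fully observable problem, i.e. as if the object position x will be known exactly from now on, and averages it over the current belief:

```latex
\[
  u^{\ast} \;=\; \arg\min_{u} \; \sum_{x} b_k(x)\, Q_{\mathrm{MDP}}(x, u).
\]
```

Because the policy is evaluated as if information were available that is not, the associated value lower-bounds the optimal cost.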
25. Line Network Results
26. Line Network Results
27. Line Network Results
28. Two-Dimensional Results
29. Offline Computation
- Can compute policies online, but this requires sufficient processing power and could introduce delays
- Policies need to be computed for each sensor location and each possible distribution for object location
- Storage requirements for offline computation may be immense for large networks
- Offline computation is feasible if we replace actual distribution with point mass distribution
- Storage required is n values per sensor
30. Point Mass Approximations
- Two options for placing point mass (see the sketch below)
  - Centroid of distribution
  - Nearest point to sensor on support of distribution
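A sketch of the two placement options; the function name, the (x, y) cell coordinates, and the Euclidean distance are illustrative assumptions. With the point mass in hand, the offline policy reduces to a lookup table of n sleep times per sensor.

```python
import numpy as np

def point_mass_cell(belief, sensor_xy, cell_xy, mode="nearest"):
    """Choose the single cell at which to place the point mass.

    belief    -- posterior probability of each cell
    sensor_xy -- (x, y) position of the sensor being scheduled
    cell_xy   -- array of (x, y) positions, one row per cell
    mode      -- 'centroid': cell closest to the mean of the belief;
                 'nearest' : point of the belief's support closest to the sensor
    """
    belief = np.asarray(belief, dtype=float)
    cell_xy = np.asarray(cell_xy, dtype=float)
    if mode == "centroid":
        mean = belief @ cell_xy
        return int(np.argmin(np.linalg.norm(cell_xy - mean, axis=1)))
    support = np.flatnonzero(belief > 0)
    dists = np.linalg.norm(cell_xy[support] - np.asarray(sensor_xy, dtype=float), axis=1)
    return int(support[np.argmin(dists)])

# The precomputed policy is then just sleep_table[sensor][cell] -> sleep time.
```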
31. Distributed Implementation
- Offline computation also allows for distributed implementation!
32. Partial Knowledge of Statistics
- Support of the distribution of object position can be updated using only the support of the conditional pdf of the Markov prior!
- Thus the nearest-point point mass approximation is robust to partial knowledge of the prior (support update sketched below)
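A sketch of the support-only update this refers to: the set of cells the object could occupy next is the union of the cells reachable in one step from the current support, so only the support of the transition kernel is needed, not its probabilities. The names below are illustrative.

```python
def propagate_support(support, reachable):
    """Support-only prediction step: reachable[x] is the set of cells
    with nonzero transition probability out of cell x."""
    nxt = set()
    for cell in support:
        nxt |= reachable[cell]
    return nxt

# Example: unit random walk on a line of 5 cells, object currently in cell 2.
reachable = {i: {max(0, i - 1), min(4, i + 1)} for i in range(5)}
print(propagate_support({2}, reachable))   # {1, 3}
```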
33. Point Mass Approximation Results
34. Point Mass Approximation Results
35. Conclusions
- Tradeoff between energy consumption and tracking errors can be considerably improved by using information about the location of the object
- Optimal solution to tradeoff problem is intractable, but good suboptimal solutions can be designed
- Methodology can be applied to designing smart sleeping for other sensing applications, e.g., process monitoring, change detection, etc.
- Methodology can also be applied to other control problems such as sensor selection
36. Future Work
- More realistic sensing model
- More realistic object movement models
- Object localization using cooperation among all awake sensors at each time step
- Joint optimization of sensor sleeping policies and nonlinear filtering for object tracking
- Partially known or unknown statistics for object movement
- Decentralized implementation
- Tracking multiple objects simultaneously