Title: Programming Sensor Networks
1. Programming Sensor Networks
David Gay, Intel Research Berkeley
2. Object tracking
- Sensors take magnetometer readings, locate object using centroid of readings
- Communicate using geographic routing to base station
- Robust against node and link failures
3. Environmental monitoring
- Gather temperature, humidity, light from a redwood tree
- Communicate using tree routing to base station
- 33 nodes, 44 days
4. Challenges and Requirements
- Expressivity
  - Many applications, many OS services, many hardware devices
- Real-time requirements
  - Some time-critical tasks (sensor acquisition and radio timing)
- Constant hardware evolution
- Reliability
  - Apps run for months/years without human intervention
- Extremely limited resources
  - Very low cost, size, and power consumption
- Reprogrammability
- Easy programming
  - Used by non-CS experts, e.g., scientists
5. Recurring Example
- Multi-hop data collection
  - Motes form a spanning tree rooted at a base station node
  - Motes periodically sample one or more sensors
  - Motes perform some local processing, then send sensor data to their parent in the tree
  - Parents either process received data (aggregation) or forward it as is
6. Programming the Hard Way
C compiler
Assembler
7. The Real Programmer's Scorecard
- Expressivity
- Real-time requirements
- Constant hardware evolution
- Reliability
- Extremely limited resources
- Reprogrammability
- Easy programming
8. Programming the Hard Way
C compiler
Assembler
9. 3 Practical Programming Systems
- nesC
  - A C dialect designed to address sensor network challenges
  - Used to implement TinyOS, Maté, TinySQL (TinyDB)
- Maté
  - An infrastructure for building virtual machines
  - Provides safe, efficient high-level programming environments
  - Several programming models: simple scripts, Scheme, TinySQL
- TinySQL
  - A sensor network query language
  - Simple, well-adapted to data-collection applications
10. nesC overview
- Component-based C dialect
  - All cross-component interaction via interfaces
  - Connections specified at compile-time
- Simple, non-blocking execution model (see the sketch below)
  - Tasks
    - Run-to-completion
    - Atomic with respect to each other
  - Interrupt handlers
    - Run-to-completion
- Whole program
  - Reduced expressivity
    - No dynamic allocation
    - No function pointers
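A minimal sketch of this execution model in use (not from the original slides; the SampleM module and its variable are hypothetical): an interrupt-context event does the bare minimum and posts a task, which later runs to completion, atomically with respect to other tasks.

  module SampleM {
    uses interface ADC;   // command getData() / event dataReady(int data), as on the following slides
  }
  implementation {
    int lastReading;

    // Task: scheduled with 'post', runs to completion, atomic w.r.t. other tasks
    task void processReading() {
      // longer, non-time-critical processing goes here
    }

    // Interrupt-context event: keep it short, defer the real work to a task
    async event void ADC.dataReady(int data) {
      atomic lastReading = data;
      post processReading();
    }
  }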
11. nesC component model

  interface Init { command bool init(); }

  module AppM {
    provides interface Init;
    uses interface ADC;
    uses interface Timer;
    uses interface Send;
  }
  implementation { ... }

A call to a command, such as bool ok = call Init.init(), is a function call to behaviour provided by some component (e.g., AppM).
12. nesC component model

  interface Init { command bool init(); }

  interface ADC {
    command void getData();
    event void dataReady(int data);
  }

  module AppM {
    provides interface Init;
    uses interface ADC;
    uses interface Timer;
    uses interface Send;
  }
  implementation { ... }

Interfaces are bi-directional. AppM can call ADC.getData and must implement event void ADC.dataReady(int data). The component to which ADC is connected will signal dataReady, resulting in a function call to ADC.dataReady in AppM.
13. nesC component model

  interface Init { command bool init(); }
  interface ADC { command void getData(); event void dataReady(int data); }
  interface Timer { command void setRate(int rate); event void fired(); }
  interface Send { command void send(TOS_Msg m); event void sendDone(); }

  module AppM {
    provides interface Init;
    uses interface ADC;
    uses interface Timer;
    uses interface Send;
  }
  implementation {
    TOS_Msg myMsg;
    int busy;

    event void ADC.dataReady(int data) {
      call Send.send(myMsg);
      busy = TRUE;
    }
    ...
  }
14. nesC component model

  configuration App { }
  implementation {
    components Main, AppM, TimerC, Photo, MessageQueue;

    Main.Init -> AppM.Init;
    AppM.Timer -> TimerC.Timer;
    Main.Init -> TimerC.Init;
  }

  module AppM {
    provides interface Init;
    uses interface ADC;
    uses interface Timer;
    uses interface Send;
  }
[Wiring diagram: Main, AppM, TimerC, Photo and MessageQueue components; Main's Init is wired to AppM and TimerC, AppM's Timer to TimerC, its ADC to Photo, and its Send to MessageQueue.]
15. Some Other Features
- Parameterised interfaces, generic interfaces
  - interface ADC[int id]: runtime dispatch between interfaces
  - interface Attribute<t>: type parameter to interface
- Generic components (allocated at compile-time)
  - Reusable components with arguments
    - generic module Queue(typedef t, int size) (sketched below)
  - Generic configurations can instantiate many components at once
- Distributed identifier allocation
  - unique("some string") returns a different number at each use with the same string, from a contiguous sequence starting at 0
  - uniqueCount("some string") returns the number of uses of unique("some string")
- Concurrency support
  - Atomic sections, compile-time data-race detection
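A hedged sketch of such a generic component: only the generic module Queue(typedef t, int size) signature comes from the slide; the Queue interface, its push/pop commands, and the QueueC name (renamed to avoid clashing with the interface) are assumptions for illustration.

  interface Queue<t> {
    command bool push(t x);
    command bool pop(t *x);
  }

  generic module QueueC(typedef t, int size) {
    provides interface Queue<t>;
  }
  implementation {
    t items[size];                   // storage allocated at compile-time, per instantiation
    int head = 0, tail = 0, count = 0;

    command bool Queue.push(t x) {
      if (count == size) return FALSE;
      items[tail] = x;
      tail = (tail + 1) % size;
      count++;
      return TRUE;
    }

    command bool Queue.pop(t *x) {
      if (count == 0) return FALSE;
      *x = items[head];
      head = (head + 1) % size;
      count--;
      return TRUE;
    }
  }

  // Instantiation in a configuration (compile-time allocation):
  //   components new QueueC(TOS_Msg, 8) as MsgQueue;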
16. nesC Scorecard
- Expressivity
  - Subsystems: radio stack, routing, timers, sensors, etc.
  - Applications: data collection, TinyDB, NEST final experiment, etc.
- Constant hardware evolution
  - René → Mica → Mica2 → MicaZ, Telos (x3)
  - Many other platforms at other institutions
- How was this achieved?
  - The component model helps a lot, especially when used following particular patterns
17. nesC Scorecard
- Expressivity
  - Subsystems: radio stack, routing, timers, sensors, etc.
  - Applications: data collection, TinyDB, NEST final experiment, etc.
- Constant hardware evolution
  - René → Mica → Mica2 → MicaZ, Telos (x3)
  - Many other platforms at other institutions
- How was this achieved?
  - The component model helps a lot, especially when used following particular patterns
    - Placeholder: allow easy, application-wide selection of a particular service implementation
    - Adapter: adapt an old component to a new interface
18. Placeholder: allow easy, application-wide selection of a particular service implementation
- Motivation
  - Services have multiple compatible implementations
    - e.g., routing: MintRoute, ReliableRoute; hardware independence layers
  - Used in several parts of the system and application
    - e.g., routing is used in network management and data collection
  - Most code should specify the abstract service, not a specific version
  - The application selects the implementation in one place
19. Placeholder: allow easy, application-wide selection of a particular service implementation

  configuration Router {
    provides interface Init;
    provides interface Route;
    uses interface Init as XInit;
    uses interface Route as XRoute;
  }
  implementation {
    Init = XInit;
    Route = XRoute;
  }

  configuration App { }
  implementation {
    components Router, MRoute;
    Router.XInit -> MRoute.Init;
    Router.XRoute -> MRoute.Route;
  }
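To illustrate the payoff (this wiring is a sketch, not from the slides): switching the whole application to the ReliableRoute implementation mentioned in the motivation changes only this one configuration; no code that uses Router is touched.

  configuration App { }
  implementation {
    components Router, ReliableRoute;
    Router.XInit -> ReliableRoute.Init;
    Router.XRoute -> ReliableRoute.Route;
  }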
20. Adapter: adapt an old component to a new interface
- Motivation
  - Functionality offered by a component with one interface needs to be accessed by another component via a different interface.

[Diagram: TinyDB accesses the Light component's ADC interface through AttrPhoto, which presents it as an Attribute interface.]
21. Adapter: adapt an old component to a new interface

  generic module AdaptAdcC(char name[], typedef t) {
    provides interface Attribute<t>;
    provides interface Init;
    uses interface ADC;
  }
  implementation {
    command void Init.init() {
      call Attribute.register(name);
    }
    command void Attribute.get() {
      call ADC.get();
    }
  }

  configuration AttrPhoto {
    provides interface Attribute<long>;
  }
  implementation {
    components Light, new AdaptAdcC("Photo", long);
    Attribute = AdaptAdcC;
    AdaptAdcC.ADC -> Light;
  }
22. nesC scorecard
- Expressivity
- Constant hardware evolution
- Real-time requirements (soft only)
  - Radio stack, especially earlier bit/byte radios
  - Time synchronisation
  - High-frequency sampling
  - Achieved through
    - Running a single application
    - Having full control over the OS (cf. Placeholder)
23. nesC scorecard
- Expressivity
- Constant hardware evolution
- Real-time requirements (soft only)
- Reliability
  - C is an unsafe language
  - Concurrency is tricky
  - Addressed through
    - A static programming style, e.g., the Service Instance pattern
    - Compile-time checks such as data-race detection
24. Service Instance: support multiple instances with efficient collaboration
- Motivation
  - Multiple users need an independent instance of a service
    - e.g., timers, file descriptors
  - Service instances need to coordinate, e.g., for efficiency
    - e.g., n timers sharing a single underlying hardware timer
25. Service Instance: support multiple instances with efficient collaboration

  module TimerP {
    provides interface Timer[int id];
    uses interface Clock;
  }
  implementation {
    timer_t timers[uniqueCount("Timer")];

    command Timer.start[int id]() { ... }
  }

  generic configuration TimerC() {
    provides interface Timer;
  }
  implementation {
    components TimerP;
    Timer = TimerP.Timer[unique("Timer")];
  }

  components Radio, new TimerC();
  Radio.Timer -> TimerC;
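As a usage note (the SensorM client and SensorAppC configuration below are hypothetical): each client simply instantiates its own TimerC. Every instantiation's unique("Timer") expands to a different index, so all the virtual timers share the single timers[] array in TimerP, and hence the one hardware Clock.

  configuration SensorAppC { }
  implementation {
    components SensorM, new TimerC() as SensorTimer;
    SensorM.Timer -> SensorTimer;   // independent Timer[id], backed by the shared TimerP
  }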
26. Race condition example

  module AppM { ... }
  implementation {
    bool busy;

    async event void Timer.fired() {
      // Avoid concurrent data collection attempts!
      if (!busy) {        // <-- concurrent state access
        busy = TRUE;      // <-- concurrent state access
        call ADC.getData();
      }
    }
  }
27. Data race detection
- Every concurrent state access is a potential race condition
- Concurrent state access
  - If object O is accessed in a function reachable from an interrupt entry point, then all accesses to O are potential race conditions
- All concurrent state accesses must occur in atomic statements
- Concurrent state access detection is straightforward
  - Call graph fully specified by configurations
  - Interrupt entry points are known
  - Data model is simple (variables only)
28. Data race fixed

  module AppM { ... }
  implementation {
    bool busy;

    async event void Timer.fired() {   // From interrupt
      // Avoid concurrent data collection attempts!
      bool localBusy;
      atomic {
        localBusy = busy;
        busy = TRUE;
      }
      if (!localBusy)
        call ADC.getData();
    }
  }
29. nesC scorecard
- Expressivity
- Constant hardware evolution
- Real-time requirements (soft only)
- Reliability
- Extremely limited resources
  - Complex applications exist: NEST FE, TinyScheme, TinyDB
  - Lifetimes of several months achieved
  - How?
    - Language features: resolve wiring at compile-time
    - Compiler features: inlining, dead-code elimination
    - And, of course, clever researchers and hackers
30. Resource Usage
- An interpreter specialised for data collection
  - Makes heavy use of components, patterns
  - Optimisation reduces power by 46%, code size by 44%
- A less component-intensive system (the radio) only gains 7% from optimisation

                inlining + dead-code   dead-code only   no optimisation
  power draw    5.1 mW                 9.5 mW           9.5 mW
  code size     47.1 kB                52.5 kB          83.8 kB
31. nesC scorecard
- Expressivity
- Constant hardware evolution
- Real-time requirements (soft only)
- Reliability
- Extremely limited resources
- Reprogrammability
  - Whole program only
  - Provided by a TinyOS service
32. nesC scorecard
- Expressivity
- Constant hardware evolution
- Real-time requirements (soft only)
- Reliability
- Extremely limited resources
- Reprogrammability
- Easy programming
  - Patterns help, but
    - Split-phase programming is painful
    - Distributed algorithms are hard
    - Little-to-no debugging support
33. 3 Practical Programming Systems
- nesC
  - A C dialect designed to address sensor network challenges
  - Used to implement TinyOS, Maté, TinySQL (TinyDB)
- Maté
  - An infrastructure for building virtual machines
  - Provides safe, efficient high-level programming environments
  - Several programming models: simple scripts, Scheme, TinySQL
- TinySQL
  - A sensor network query language
  - Simple, well-adapted to data-collection applications
34. The Maté Approach
- Goals
  - Support reprogrammability, nicer programming models
  - While preserving efficiency and increasing reliability
- Sensor networks are application specific
  - We don't need general programming: nodes in redwood trees don't need to locate snipers in Sonoma
- Solution: an application-specific virtual machine (ASVM)
  - Design an ASVM for an application domain, exposing its primitives, and providing the needed flexibility with limited resources.
35. The Maté Approach
- Reprogrammability
  - Transmit small bytecoded programs
- Reliability
  - The bytecodes perform runtime checks
- Efficiency
  - The ASVM exposes high-level operations suited to the application domain
- Support a wide range of programming models, applications
  - Decompose an ASVM into a
    - core, shared template
    - programming model and application domain extensions
36. ASVM Architecture
37. ASVM Template
- Threaded, stack-based architecture
  - One basic type: 16-bit signed integers
  - Additional data storage supplied by language-specific operations
- Scheduler
  - Executes runnable threads in a round-robin fashion
  - Invokes operations on behalf of handlers
- Concurrency Manager
  - Analyses handler code for shared resources (conservative, flow-insensitive, context-insensitive analysis)
  - Ensures race-free and deadlock-free execution
- Program Store
  - Stores and disseminates handler code
38. ASVM Extensions
- Handlers: events that trigger execution
  - One-to-one handler/thread mapping (but not required)
  - Examples: timer, route forwarding request, ASVM boot
- Operations: units of execution
  - Define the ASVM instruction set, and can extend an ASVM's set of supported types as well as storage
  - Two kinds: primitives and functions (a sketch of one follows below)
    - Primitives: language dependent (e.g., jump, read variable)
      - Can have embedded operands (e.g., push constant)
    - Functions: language independent (e.g., send())
      - Must be usable in any ASVM
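A minimal sketch of how an operation might be packaged as a nesC component. The Operation interface, the context_t layout and the OPAddM module are illustrative assumptions, not the actual Maté API; only the stack-based, 16-bit-integer execution model comes from the slides.

  // Hypothetical per-thread context: an operand stack of 16-bit signed integers
  typedef struct {
    int16_t stack[8];
    uint8_t sp;
  } context_t;

  interface Operation {
    // Execute one instruction on behalf of the given thread
    command void execute(context_t *ctx);
  }

  // A primitive: pop two values, push their sum
  module OPAddM {
    provides interface Operation;
  }
  implementation {
    command void Operation.execute(context_t *ctx) {
      int16_t b = ctx->stack[--ctx->sp];
      int16_t a = ctx->stack[--ctx->sp];
      ctx->stack[ctx->sp++] = a + b;
    }
  }

The scheduler would wire each opcode to the Operation interface of such a component; language-independent functions like send() would be packaged the same way on top of the underlying TinyOS services.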
39. Maté scorecard
- Expressivity
  - By being tailored to an application domain
40. Maté scorecard
- Expressivity
- Real-time requirements
  - Just say no!
41. Maté scorecard
- Expressivity
- Real-time requirements
  - Just say nesC!
42. Maté scorecard
- Expressivity
- Real-time requirements
- Constant hardware evolution
  - Presents a hardware-independent model
  - Relies on nesC to implement it
43. QueryVM: an ASVM
- Programming Model
  - Scheme program on all nodes
- Application domain
  - Multi-hop data collection
- Libraries
  - Sensors, routing, in-network aggregation
- QueryVM size
  - 65 kB code
  - 3.3 kB RAM
44. QueryVM Evaluation
- Simple: collect light from all nodes every 50s
  - 19 lines, 105 bytes
45. Simple query

  // SELECT id, parent, light INTERVAL 50
  mhop_set_update(100); settimer0(500); mhop_set_forwarding(1);

  any snoop() heard(snoop_msg());
  any intercept() heard(intercept_msg());
  any heard(msg) snoop_epoch(decode(msg, vector(2))[0]);

  any Timer0()
  {
    led(l_blink | l_green);
    if (id())
    {
      next_epoch();
      mhopsend(encode(vector(epoch(), id(), parent(), light())));
    }
  }
46. QueryVM Evaluation
- Simple: collect light from all nodes every 50s
  - 12 lines, 105 bytes
- Conditional: collect exponentially weighted moving average of temperature from some nodes
  - 32 lines, 167 bytes
- SpatialAvg: compute average temperature, in-network
  - 31 lines, 127 bytes
  - or, 117 lines if the averaging code is in Scheme
47. Maté scorecard
- Expressivity
- Real-time requirements
- Constant hardware evolution
- Reliability
  - Runtime checks
  - Reprogramming always possible
48. Maté scorecard
- Expressivity
- Real-time requirements
- Constant hardware evolution
- Reliability
- Extremely limited resources
  - VM expresses application logic in high-level operations
  - But don't try to write an FFT!
49. QueryVM Energy
- Ran queries on a 42-node network
  - Compared QueryVM to nesC implementations of the same queries
  - Fixed collection tree and packet size to control networking cost
  - QueryVM sometimes uses less energy than the nesC code
    - Due to vagaries of radio congestion (nesC nodes all transmitting in sync)
- Decompose QueryVM cost using a 2-node network
  - Run with no query = base cost (listening for messages)
  - Ran programs for eight hours, sampling power draw at 10 kHz
  - No measurable energy overhead!
    - There's not much work in the application code
50. Maté scorecard
- Expressivity
- Real-time requirements
- Constant hardware evolution
- Reliability
- Extremely limited resources
- Reprogramming
  - Enough said already
51. Maté scorecard
- Expressivity
- Real-time requirements
- Constant hardware evolution
- Reliability
- Extremely limited resources
- Reprogramming
- Easy programming
  - The good: simple threaded event-processing model
  - The bad: scripting languages don't make writing distributed algorithms any easier
52. 3 Practical Programming Systems
- nesC
  - A C dialect designed to address sensor network challenges
  - Used to implement TinyOS, Maté, TinySQL (TinyDB)
- Maté
  - An infrastructure for building virtual machines
  - Provides safe, efficient high-level programming environments
  - Several programming models: simple scripts, Scheme, TinySQL
- TinySQL
  - A sensor network query language
  - Simple, well-adapted to data-collection applications
53. TinySQL
- nesC, Scheme are both local languages
  - A lot of programming effort to deal with distribution, reliability
  - Example: distributed average computation
54. In-network averaging

  any maxdepth = 5;          // max routing tree depth supported
  any window = 2 * maxdepth;

  // The state of an average aggregate is an array containing:
  //   counts for 'window' epochs
  //   sums for 'window' epochs
  //   the number of the oldest epoch in the window
  any avg_make()
  {
    any Y = make_vector(1 + 2 * window);
    vector_fill!(Y, 0);
    return Y;
  }

  // The epoch changed. Ensure epoch + 1 is inside the window.
  any avg_newepoch(Y)
  {
    any start = Y[2 * window];
    if (epoch() + 1 >= start + window)
    {
      any shift = epoch() + 1 - (start + window - 1);
      for (i = shift; i < window; i++)
      {
        Y[i - shift] = Y[i];
        Y[i - shift + window] = Y[i + window];
      }
      // clear new values
      for (i = window - shift; i < window; i++)
      {
        Y[i] = 0;
        Y[i + window] = 0;
      }
    }
  }

  // Update the state for epoch 'when' with a count of 'n' and a sum of 'sum'
  any add_avg(Y, when, n, sum)
  {
    any start = Y[2 * window];
    if (when >= start && when < start + window)
    {
      Y[when - start] += n;              // update count
      Y[when - start + window] += sum;   // update sum
    }
  }

  // Add local result for current epoch
  any avg_update(Y, val) add_avg(Y, epoch(), 1, val);

  // avg_get returns a six-byte string containing encoded epoch, count, sum
  // (2 bytes each), where epoch is chosen so that our descendants' results
  // have reached us before we try and send them on
  any avg_get(Y)
  {
    any start = Y[2 * window];
    any when = epoch() - 2 * (maxdepth - 1 - depth());
    any tosend = vector(when, 0, 0);
    // encode the result for epoch 'when', but avoid problems if we don't
    // know it or if we're too deep in the tree
    if (depth() > maxdepth)
      tosend[0] = epoch() - 256;
    else if (when >= start)
    {
      tosend[1] = Y[when - start];            // count
      tosend[2] = Y[when - start + window];   // sum
    }
    return encode(tosend);
  }

  any avg_buffer() make_string(6);

  // Add some results from our descendants to our state
  any avg_intercept(Y, intercepted)
  {
    any decoded = decode(intercepted, vector(2, 2, 2));
    add_avg(Y, decoded[0], decoded[1], decoded[2]);
  }
55. TinySQL
- nesC, Scheme are both local languages
  - A lot of programming effort to deal with distribution, reliability
  - Example: distributed average computation
- TinySQL is a higher-level programming model
  - The sensor network is a database, sensors are attributes
  - Query it with SQL!
- Examples
  - SELECT id, parent, temp INTERVAL 50s
  - SELECT id, expdecay(humidity, 3) WHERE parent > 0 INTERVAL 50s
  - SELECT avg(temp) INTERVAL 50s
56. TinySQL: Two Implementations
- TinyDB: the original
  - A nesC program which interprets TinySQL queries
- TinySQLVM (aka QueryVM)
  - Maté VM running Scheme
  - TinySQL queries compiled to a Scheme application
- Comparable performance, flexibility
  - TinySQLVM uses 5-20% less energy than TinyDB on the three example queries
  - 4 months' operation on 2 AAs (15-node, plant-watering app)
  - TinyDB can run multiple queries
  - TinySQLVM allows user-written extensions to TinySQL
57. TinySQL scorecard
- Expressivity
  - Limited to a single kind of application
58. TinySQL scorecard
- Expressivity
- Real-time requirements
  - We won't go there
59. TinySQL scorecard
- Expressivity
- Real-time requirements
- Constant hardware evolution
  - Presents a hardware-independent programming model
60. TinySQL scorecard
- Expressivity
- Real-time requirements
- Constant hardware evolution
- Reliability
  - You can't express any nasty bugs
61. TinySQL scorecard
- Expressivity
- Real-time requirements
- Constant hardware evolution
- Reliability
- Extremely limited resources
  - Little computation required
  - Can perform high-level optimisations (see the many TinyDB papers)
62. TinySQL scorecard
- Expressivity
- Real-time requirements
- Constant hardware evolution
- Reliability
- Extremely limited resources
- Reprogramming
  - Is not a problem
63. TinySQL scorecard
- Expressivity
- Real-time requirements
- Constant hardware evolution
- Reliability
- Extremely limited resources
- Reprogramming
- Easy programming
  - Simple, declarative, whole-system programming model
64. Related Work: nesC
- Other component and module systems
  - Closest are Knit and Mesa
  - None have our desired features: component-based, compile-time wiring, bi-directional interfaces
- Lots of embedded, real-time programming languages
  - Giotto, Esterel, Lustre, Signal, E-FRP
  - Stronger guarantees, but not general-purpose systems languages
- Component architectures in operating systems
  - Click, Scout, x-kernel, Flux OSKit, THINK
  - Mostly based on dynamic dispatch, no whole-program optimization
65. Related Work: ASVMs
- JVM, KVM, CLR
  - Designed for devices with much greater resources (100 KB+ RAM)
- Java Card
  - An application-specific JVM for smart cards
  - Many of the same principles apply (constraints, CAP format)
- SPIN, Exokernel, Tensilica, etc.
  - Customizable boundaries for application specificity
- Impala/SOS
  - Support incremental binary updates; easy to crash a system
66. Scorecards

[Summary table comparing C, nesC, Maté, and TinySQL on: expressivity, real-time, hardware evolution, reliability, limited resources, reprogramming, and ease of use.]
67. And the Winner is...
- nesC
  - High performance, suitable for any part of an application
  - But harder to program; bugs may bring down the system
- Maté
  - Safe, simple programming environments
  - But limited performance: best for scripting, not radio stacks
- TinySQL
  - Easiest to use
  - But limited application domain
- Why not use all three?
  - Extend TinySQL with Maté (Scheme) code
  - Extend Maté with new functions, events written in nesC
68. Conclusion
- Sensor networks raise a number of programming challenges
  - Very limited resources
  - High reliability requirements
  - Event-driven, concurrent programming model
- nesC and Maté represent two different tradeoffs in this space
  - nesC: maximum performance and flexibility, some reliability through compile-time checks, harder programming
  - Maté: efficiency for a specific application domain, full reliability through runtime checks, simpler (scripting) to simple (TinySQL) programming
70. Service Instance: support multiple instances with efficient collaboration
- Consequences
  - Clients remain independent (configurability); the implementation can coordinate
  - Clients are guaranteed that the service is available (robustness)
  - The service is automatically sized to application needs (efficiency)
  - Static allocation may be wasteful if worst-case simultaneous users < different users
71. Placeholder: allow easy, application-wide selection of a particular service implementation
- Consequences
  - Defines global names for common services (replaceability)
  - Easy application-wide implementation selection
  - No runtime cost (efficiency)
72. Conditional query

  // SELECT id, expdecay(light, 3) WHERE (parent > 0) INTERVAL 50
  any expdecay_make(bits) vector(bits, 0);
  any expdecay_get(s, val) s[1] = s[1] - (s[1] >> s[0]) + (val >> s[0]);
  any s_op1 = expdecay_make(3);

  mhop_set_update(100); settimer0(500); mhop_set_forwarding(1);

  any snoop() heard(snoop_msg());
  any intercept() heard(intercept_msg());
  any heard(msg) snoop_epoch(decode(msg, vector(2))[0]);

  any Timer0()
  {
    led(l_blink | l_green);
    if (id())
    {
      next_epoch();
      if (parent() > 0)
        mhopsend(encode(vector(epoch(), id(), expdecay_get(s_op1, light()))));
    }
  }
73. Spatial query

  // SELECT avg(light) INTERVAL 50
  mhop_set_update(100); settimer0(500); mhop_set_forwarding(0);
  any s_op1 = avg_make();

  any snoop() snoop_epoch(decode(snoop_msg(), vector(2))[0]);
  any intercept()
  {
    vector fields = decode(intercept_msg(), vector(2, avg_buffer()));
    snoop_epoch(fields[0]);
    avg_intercept(s_op1, fields[1]);
  }
  any epochchange() avg_newepoch(s_op1);

  any Timer0()
  {
    led(l_blink | l_green);
    if (id())
    {
      next_epoch();
      avg_update(s_op1, light());
    }
    mhopsend(encode(vector(epoch(), avg_get(s_op1))));
  }