Programming Vast Networks of Tiny Devices
Transcript and Presenter's Notes

1
Programming Vast Networks of Tiny Devices
  • David Culler
  • University of California, Berkeley
  • Intel Research Berkeley

http://webs.cs.berkeley.edu
2
Programmable network fabric
  • Architectural approach
  • new code image pushed through the network as
    packets
  • assembled and verified in local flash
  • a second watchdog processor reprograms the main controller
  • Viral code approach (Phil Levis)
  • each node runs a tiny virtual machine interpreter
  • captures the high-level behavior of the application domain as individual instructions
  • packets are capsules: sequences of high-level instructions
  • capsules can forward capsules
  • Rich challenges
  • security
  • energy trade-offs
  • denial of service (DoS)

pushc 1    Light is sensor 1
sense      Push light reading
pushm      Push message buffer
clear      Clear message buffer
add        Append val to buffer
send       Send message using AHR
forw       Forward capsule
halt
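
To make the capsule idea concrete, here is a minimal sketch (in C, not Maté's actual implementation) of a stack-machine loop dispatching the instructions listed above; the platform hooks (read_sensor, msg_send_ahr, capsule_forward, etc.) are hypothetical placeholders.

/* Sketch of a capsule interpreter loop (hypothetical, not Mate's real code). */
#include <stdint.h>

enum { OP_PUSHC, OP_SENSE, OP_PUSHM, OP_CLEAR, OP_ADD, OP_SEND, OP_FORW, OP_HALT };

#define STACK_DEPTH 16
#define CAPSULE_LEN 24

typedef struct {
    uint8_t code[CAPSULE_LEN];   /* high-level instructions */
    uint8_t len;
} capsule_t;

/* Platform hooks -- assumed to exist on the node (placeholders here). */
extern uint16_t read_sensor(uint8_t id);        /* e.g. id 1 = light          */
extern void     msg_clear(void);
extern void     msg_append(uint16_t val);
extern void     msg_send_ahr(void);             /* send via ad-hoc routing    */
extern void     capsule_forward(const capsule_t *c);

void run_capsule(const capsule_t *c)
{
    uint16_t stack[STACK_DEPTH];
    uint8_t  sp = 0, pc = 0;

    while (pc < c->len) {
        uint8_t op = c->code[pc++];
        switch (op) {
        case OP_PUSHC: stack[sp++] = c->code[pc++];            break; /* push constant  */
        case OP_SENSE: stack[sp-1] = read_sensor(stack[sp-1]); break; /* read sensor    */
        case OP_PUSHM: /* select the message buffer */         break;
        case OP_CLEAR: msg_clear();                            break;
        case OP_ADD:   msg_append(stack[--sp]);                break; /* append value   */
        case OP_SEND:  msg_send_ahr();                         break;
        case OP_FORW:  capsule_forward(c);                     break; /* viral spread   */
        case OP_HALT:  return;
        }
    }
}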
3
Maté: A Tiny Virtual Machine
  • Communication-centric stack machine
  • 7286 bytes code, 603 bytes RAM
  • dynamically typed
  • Four context types
  • send, receive, clock, subroutine (×4)
  • each capsule holds at most 24 instructions
  • Fits in a single TinyOS AM packet
  • Installation is atomic
  • Self-propagating
  • Version information
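
A hedged sketch of what an atomically installable, self-describing capsule might look like; the 24-instruction limit and the context types come from the slide, while the exact field widths, the ~29-byte AM payload figure, and the version-comparison rule are assumptions for illustration.

/* Hypothetical capsule layout -- fits in one TinyOS AM payload (assumed ~29 bytes). */
#include <stdint.h>

enum capsule_type {          /* the four context types from the slide */
    CAPSULE_SEND,
    CAPSULE_RECEIVE,
    CAPSULE_CLOCK,
    CAPSULE_SUBROUTINE_0     /* ... up to 4 subroutine capsules */
};

typedef struct {
    uint8_t type;            /* which context this capsule installs into  */
    uint8_t version;         /* newer version overwrites, then propagates */
    uint8_t options;         /* e.g. self-forwarding flag                 */
    uint8_t code[24];        /* at most 24 instructions per capsule       */
} mate_capsule_t;            /* 27 bytes: one atomic unit of installation */

/* Install rule sketched from the slide: accept only strictly newer versions,
 * then re-broadcast so the capsule spreads virally through the network.    */
int maybe_install(mate_capsule_t *slot, const mate_capsule_t *incoming)
{
    if (incoming->version > slot->version) {   /* wrap-around ignored in this sketch */
        *slot = *incoming;                     /* whole-capsule swap = atomic install */
        return 1;                              /* caller re-broadcasts it             */
    }
    return 0;
}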

4
Case Study: GDI
  • Great Duck Island application
  • Simple sense-and-send loop
  • Runs every 8 seconds (low duty cycle)
  • 19 Maté instructions vs. 8K of binary code
  • Energy tradeoff: if you run the GDI application for
    less than 6 days, Maté saves energy (break-even
    sketch below)
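
The 6-day break-even can be sketched as a simple energy comparison: Maté makes installation cheap (one 19-instruction capsule instead of an 8K binary) but adds interpretation overhead on every 8-second run. All constants below are hypothetical placeholders, chosen only so the arithmetic lands near the slide's 6-day figure; only the structure of the calculation is meant.

/* Break-even sketch for the GDI case study (all constants are made-up placeholders). */
#include <stdio.h>

int main(void)
{
    /* Hypothetical one-time installation costs (arbitrary energy units). */
    double e_install_binary  = 8.0 * 1024;  /* push + flash an 8K binary image */
    double e_install_capsule = 19.0;        /* push 19 Mate instructions       */

    /* Hypothetical extra cost of interpreting the loop vs. native code,
     * paid once per 8-second sense-and-send cycle (value picked so the
     * break-even lands near the slide's 6-day figure).                   */
    double e_interp_per_run  = 0.125;
    double runs_per_day      = 24.0 * 3600.0 / 8.0;   /* one run every 8 s */

    /* Mate wins until its accumulated interpretation overhead exceeds the
     * energy saved by not reinstalling a full binary.                     */
    double breakeven_days = (e_install_binary - e_install_capsule)
                          / (e_interp_per_run * runs_per_day);

    printf("break-even after %.1f days\n", breakeven_days);
    return 0;
}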

5
Higher-level Programming?
  • Ideally, would specify the desired global
    behavior
  • Compilers would translate this into local
    operations
  • High-Performance Fortran (HPF) analog
  • a program is a sequence of parallel operations on
    large matrices
  • each matrix is spread over many processors of a
    parallel machine
  • the compiler translates from the global view to the
    local view
  • local operations are message passing
  • highly structured and regular
  • We need a much richer suite of operations on
    unstructured aggregates on irregular, changing
    networks

6
Sensor Databases: a start
  • Relational databases: rich behavior expressed as
    declarative queries over tables of data
  • select, join, count, sum, ...
  • user dictates what should be computed
  • query optimizer determines how
  • assumes data presented in complete, tabular form
  • First step: database operations over streams of
    data
  • incremental query processing
  • Big step: process the query in the sensor net
  • query processing as content-based routing?
  • energy savings, bandwidth, reliability

SELECT AVG(light) GROUP BY roomNo
7
Motivation: Sensor Nets and In-Network Query Processing
  • Many Sensor Network Applications are Data
    Oriented
  • Queries are a natural and efficient data-processing
    mechanism
  • Easy (unlike embedded C code)
  • Enable optimizations through abstraction
  • Aggregates are the common case
  • E.g. Which rooms are in use?
  • In-network processing is a must
  • Sensor networks are power- and bandwidth-constrained
  • Communication dominates power cost
  • Not subject to Moore's law!

8
SQL Primer
SELECT AVG(light) FROM sensors
WHERE sound < 100
GROUP BY roomNo
HAVING AVG(light) < 50
  • SQL is an established declarative language (we are
    not wedded to it)
  • Some extensions clearly necessary, e.g. for
    sample rates
  • We adopt a basic subset
  • sensors relation (table) has
  • One column for each reading-type, or attribute
  • One row for each externalized value
  • May represent an aggregation of several
    individual readings

SELECT agg_n(attr_n), attrs
FROM sensors
WHERE selPreds
GROUP BY attrs
HAVING havingPreds
EPOCH DURATION s
9
TinyDB Demo (Sam Madden)
Joe Hellerstein, Sam Madden, Wei Hong, Michael
Franklin
10
Tiny Aggregation (TAG) Approach
  • Push declarative queries into network
  • Impose a hierarchical routing tree onto the
    network
  • Divide time into epochs
  • Every epoch, sensors evaluate query over local
    sensor data and data from children
  • Aggregate local and child data
  • Each node transmits just once per epoch
  • Pipelined approach increases throughput
  • Depending on the aggregate function, various
    optimizations can be applied (e.g., hypothesis
    testing); a per-node sketch follows below
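
The per-node sketch referenced above, for SELECT COUNT(*): each epoch a node merges the partial counts heard from its children into its own contribution and transmits a single partial to its parent. The hook functions are illustrative, not TinyDB's API.

/* Per-epoch TAG behavior at one node for SELECT COUNT(*) (illustrative sketch). */
#include <stdint.h>

/* Assumed platform hooks -- placeholders for the radio interface. */
extern int  next_child_count(uint16_t *out);   /* returns 0 when no more child reports */
extern void radio_send_to_parent(uint16_t partial_count);

void on_epoch(void)
{
    uint16_t count = 1;                 /* this node counts itself            */

    uint16_t child;
    while (next_child_count(&child))    /* merge partials heard from children */
        count += child;

    radio_send_to_parent(count);        /* exactly one transmission per epoch */
}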
11
Aggregation Functions
  • Standard SQL supports the basic 5
  • MIN, MAX, SUM, AVERAGE, and COUNT
  • We support any function conforming to:

Agg_n = (f_merge, f_init, f_evaluate)
f_merge(<a1>, <a2>) -> <a12>
f_init(a0) -> <a0>
f_evaluate(<a1>) -> aggregate value
(<a> is a partial aggregate; f_merge is associative and commutative!)

Example: AVERAGE
AVG_merge(<S1, C1>, <S2, C2>) -> <S1 + S2, C1 + C2>
AVG_init(v) -> <v, 1>
AVG_evaluate(<S1, C1>) -> S1 / C1
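
In code, the (f_init, f_merge, f_evaluate) decomposition for AVERAGE might look like this sketch; the partial state is the <sum, count> pair above, and the names are illustrative.

/* AVERAGE expressed as the init / merge / evaluate triple (sketch). */
#include <stdint.h>

typedef struct { int32_t sum; uint16_t count; } avg_partial_t;

/* f_init: lift a single reading into a partial aggregate <v, 1>. */
static avg_partial_t avg_init(int16_t v)
{
    avg_partial_t p = { v, 1 };
    return p;
}

/* f_merge: combine two partials -- associative and commutative,
 * so children can be merged in any order and any grouping.       */
static avg_partial_t avg_merge(avg_partial_t a, avg_partial_t b)
{
    avg_partial_t p = { a.sum + b.sum, (uint16_t)(a.count + b.count) };
    return p;
}

/* f_evaluate: turn the partial <S, C> into the final answer S / C. */
static int32_t avg_evaluate(avg_partial_t p)
{
    return p.count ? p.sum / p.count : 0;
}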
12
Query Propagation
  • TAG is propagation agnostic
  • Any algorithm that can
  • deliver the query to all sensors
  • provide all sensors with one or more duplicate-free
    routes to some root
  • Simple flooding approach
  • query introduced at a root, rebroadcast by all
    sensors until it reaches the leaves
  • sensors pick a parent and level when they hear the
    query
  • reselect the parent after k silent epochs (see the
    sketch after the figure below)

[Tree diagram: the query floods down from root node 1 (P0, L1) to nodes 2 and 3 (P1, L2), node 4 (P2, L3), node 6 (P3, L3), and node 5 (P4, L4); each node records its parent P and level L.]
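
A sketch of the flooding and parent-selection rule described above: on first hearing the query a node adopts the sender as its parent, sets its level, and rebroadcasts; after k silent epochs it drops the parent so it can reselect. The constant k, message fields, and function names are all illustrative.

/* Sketch of query flooding and parent selection (illustrative, not TinyDB's code). */
#include <stdint.h>

#define K_SILENT_EPOCHS 5        /* "k" from the slide -- value chosen arbitrarily here */
#define NO_PARENT 0xFFFF

static uint16_t my_parent = NO_PARENT;
static uint8_t  my_level;
static uint8_t  silent_epochs;

extern void rebroadcast_query(uint8_t level);   /* assumed radio hook */

/* Called when the query (or a neighbor's rebroadcast of it) is heard. */
void on_query_heard(uint16_t sender, uint8_t sender_level)
{
    if (my_parent == NO_PARENT) {        /* first time we hear the query */
        my_parent = sender;
        my_level  = sender_level + 1;
        rebroadcast_query(my_level);     /* keep the flood moving toward the leaves */
    }
    if (sender == my_parent)
        silent_epochs = 0;               /* parent is still alive */
}

/* Called once per epoch. */
void on_epoch_tick(void)
{
    if (my_parent != NO_PARENT && ++silent_epochs >= K_SILENT_EPOCHS)
        my_parent = NO_PARENT;           /* drop the parent; reselect on the next query heard */
}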
13
Illustration: Pipelined Aggregation
SELECT COUNT(*) FROM sensors
Depth = d
14
Illustration: Pipelined Aggregation
SELECT COUNT(*) FROM sensors
Epoch 1

        Sensor
Epoch    1    2    3    4    5
  1      1    1    1    1    1
15
Illustration: Pipelined Aggregation
SELECT COUNT(*) FROM sensors
Epoch 2

        Sensor
Epoch    1    2    3    4    5
  1      1    1    1    1    1
  2      3    1    2    2    1
16
Illustration: Pipelined Aggregation
SELECT COUNT(*) FROM sensors
Epoch 3

        Sensor
Epoch    1    2    3    4    5
  1      1    1    1    1    1
  2      3    1    2    2    1
  3      4    1    3    2    1
17
Illustration: Pipelined Aggregation
SELECT COUNT(*) FROM sensors
Epoch 4

        Sensor
Epoch    1    2    3    4    5
  1      1    1    1    1    1
  2      3    1    2    2    1
  3      4    1    3    2    1
  4      5    1    3    2    1
18
Illustration: Pipelined Aggregation
SELECT COUNT(*) FROM sensors
Epoch 5

        Sensor
Epoch    1    2    3    4    5
  1      1    1    1    1    1
  2      3    1    2    2    1
  3      4    1    3    2    1
  4      5    1    3    2    1
  5      5    1    3    2    1
19
Discussion
  • Result is a stream of values
  • Ideal for monitoring scenarios
  • One communication / node / epoch
  • Symmetric power consumption, even at root
  • New value on every epoch
  • After d-1 epochs, the aggregation is complete
  • Given a single loss, network will recover after
    at most d-1 epochs
  • With time synchronization, nodes can sleep
    between epochs, except during small communication
    window
  • Note: values from different epochs get combined
  • Can be fixed via a small cache of past values at
    each node (sketched below)
  • Cache size: at most one reading per child x depth
    of tree
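
One way to picture the cache fix mentioned above: keep the latest partial heard from each child, tagged with the epoch it belongs to, and combine only values from the same epoch. The sizes and field names below are illustrative, not TAG's actual data structures.

/* Sketch of a per-child cache of past partials, aligned by epoch (illustrative). */
#include <stdint.h>

#define MAX_CHILDREN 8
#define MAX_DEPTH    8           /* at most one reading per child x tree depth */

typedef struct {
    uint16_t child_id;
    uint16_t partial[MAX_DEPTH]; /* partial[e % MAX_DEPTH] = value for epoch e */
    uint16_t epoch[MAX_DEPTH];   /* which epoch each slot actually holds       */
} child_cache_t;

static child_cache_t cache[MAX_CHILDREN];

/* Record a child's report under the epoch it was computed for. */
void cache_put(child_cache_t *c, uint16_t epoch, uint16_t value)
{
    uint16_t slot = epoch % MAX_DEPTH;
    c->epoch[slot]   = epoch;
    c->partial[slot] = value;
}

/* Combine only values that belong to the epoch being finalized (COUNT example). */
uint16_t combine_epoch(uint16_t epoch)
{
    uint16_t count = 1;                        /* local contribution */
    uint16_t slot  = epoch % MAX_DEPTH;
    for (int i = 0; i < MAX_CHILDREN; i++) {
        if (cache[i].epoch[slot] == epoch)
            count += cache[i].partial[slot];
    }
    return count;
}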

20
Testbench: MatLab Integration
  • Positioned mica array for controlled studies
  • in situ programming
  • Localization (RF, TOF)
  • Distributed Algorithms
  • Distributed Control
  • Auto Calibration
  • Out-of-band squid instrumentation network
  • Integrated with MatLab
  • packets -> MatLab events
  • data processing
  • filtering, control

21
Acoustic Time-of-Flight Ranging
no calibration: 76% error
  • Sounder/Tone Detect Pair
  • Emit Sounder pulse and RF message
  • Receiver uses message to arm Tone Detector
  • Key Challenges
  • Noisy Environment
  • Calibration
  • On-mote Noise Filter
  • Calibration is fundamental in the many-cheap-devices
    regime
  • variations in tone frequency and amplitude, and in
    detector sensitivity
  • Collect many pairs
  • 4-parameter model for each pair
  • T(A->B, x) = O_A + O_B + (L_A + L_B) x
  • O_A, L_A in message; O_B, L_B local

joint calibration: 10.1% error
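
Given the 4-parameter model above, the receiver B can invert the timing equation to estimate range once A's parameters arrive in the RF message. A small sketch, with hypothetical calibration constants used only to exercise the formula (O = per-mote offset, L = per-distance gain, as on the slide).

/* Range estimate from the 4-parameter TOF calibration model (sketch). */
#include <stdio.h>

typedef struct { double O; double L; } calib_t;   /* per-mote offset and gain */

/* Model from the slide: T(A->B, x) = O_A + O_B + (L_A + L_B) * x */
double predicted_time(calib_t a, calib_t b, double x)
{
    return a.O + b.O + (a.L + b.L) * x;
}

/* Invert the model: O_A, L_A arrive in the RF message, O_B, L_B are local. */
double estimate_range(calib_t from_msg, calib_t local, double measured_time)
{
    return (measured_time - from_msg.O - local.O) / (from_msg.L + local.L);
}

int main(void)
{
    /* Hypothetical calibration constants, just to exercise the formula. */
    calib_t a = { 0.8, 3.0 }, b = { 1.1, 2.9 };
    double t = predicted_time(a, b, 5.0);          /* simulate a ranging event at distance 5 */
    printf("estimated range: %.2f\n", estimate_range(a, b, t));
    return 0;
}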