1
Lecture 2: Software Platforms
  • Anish Arora
  • CIS788.11J
  • Introduction to Wireless Sensor Networks
  • Lecture uses slides from tutorials prepared by
    authors of these platforms

2
Outline
  • Discussion includes not only operating systems
    but also programming methodology
  • Some environments focus more on one than the
    other
  • Focus here is on node-centric platforms
  • (versus distributed-system-centric platforms)
  • Platforms
  • TinyOS (applies to XSMs); slides from Culler
    et al.
  • EmStar (applies to XSSs); slides from UCLA
  • SOS
  • Contiki
  • Virtual machines (Maté)
  • TinyCLR

3
References
  • nesC
  • The Emergence of Networking Abstractions and
    Techniques in TinyOS
  • EmStar: An Environment for Developing Wireless
    Embedded Systems Software
  • TinyOS webpage
  • EmStar webpage

4
Traditional Systems
  • Well established layers of abstractions
  • Strict boundaries
  • Ample resources
  • Independent applications at endpoints communicate
    point-to-point through routers
  • Well attended (a mature, heavily studied space)

[Figure: traditional layered architecture. User-level applications sit
above system abstractions (threads, address space, files, drivers) and
a network stack (transport, network, data link, physical layer);
endpoints communicate through routers.]
5
Sensor Network Systems
  • Highly constrained resources
  • processing, storage, bandwidth, power, limited
    hardware parallelism, relatively simple
    interconnect
  • Applications spread over many small nodes
  • self-organizing collectives
  • highly integrated with changing environment and
    network
  • diversity in design and usage
  • Concurrency intensive in bursts
  • streams of sensor data and network traffic
  • Robust
  • inaccessible, critical operation
  • Unclear where the boundaries belong
  • => Need a framework for
  • resource-constrained concurrency
  • defining boundaries
  • application-specific processing
  • allowing abstractions to emerge

6
Choice of Programming Primitives
  • Traditional approaches
  • command processing loop (wait request, act,
    respond)
  • monolithic event processing
  • full POSIX thread/socket regime
  • Alternative
  • provide framework for concurrency and modularity
  • never poll, never block
  • interleaving flows, events (see the sketch below)
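
A minimal sketch of this split-phase, never-block style in plain C
(all names are illustrative, not a real platform API): the request
returns an ack/nack immediately, and the result arrives later through
a callback that stands in for the hardware event.

/* Split-phase sensor read: request now, result via callback later. */
#include <stdio.h>
#include <stdint.h>

typedef void (*data_ready_cb)(uint16_t value);
static data_ready_cb pending_cb;   /* one outstanding request at a time */

/* Phase 1: request a sample; returns 0 (nack) if one is already pending. */
static int sensor_get_data(data_ready_cb cb) {
  if (pending_cb) return 0;
  pending_cb = cb;
  return 1;
}

/* Phase 2: on real hardware this would run from the ADC interrupt. */
static void sensor_irq(uint16_t raw) {
  data_ready_cb cb = pending_cb;
  pending_cb = 0;
  if (cb) cb(raw);                 /* signal the completion event */
}

static void on_data(uint16_t value) { printf("sample = %u\n", (unsigned)value); }

int main(void) {
  if (sensor_get_data(on_data))    /* never blocks: ack/nack at the boundary */
    sensor_irq(512);               /* simulate the hardware completing */
  return 0;
}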

7
TinyOS
  • Microthreaded OS (lightweight thread support) and
    efficient network interfaces
  • Two level scheduling structure
  • Long running tasks that can be interrupted by
    hardware events
  • Small, tightly integrated design that allows
    crossover of software components into hardware

8
Tiny OS Concepts
  • Scheduler + Graph of Components
  • constrained two-level scheduling model: threads +
    events
  • Component
  • Commands
  • Event Handlers
  • Frame (storage)
  • Tasks (concurrency)
  • Constrained Storage Model
  • frame per component, shared stack, no heap
  • Very lean multithreading
  • Efficient Layering

[Figure: a Messaging Component with internal state and an internal
thread. Commands accepted from above: send_msg(addr, type, data),
power(mode), init. Commands issued to the layer below: TX_packet(buf),
Power(mode), init. Events handled from below: RX_packet_done(buffer),
TX_packet_done(success).]
9
Application Graph of Components
[Figure: example of ad hoc, multi-hop routing of photo sensor
readings. Component stack from application down to hardware:
application level (sensor appln, route map, router), Active Messages,
packet level (Serial Packet, Radio Packet), byte level (UART, Radio
byte, ADC for Temp and Photo, clock), bit level (RFM). The SW/HW
boundary cuts through the byte level. Footprint: 3450 B code,
226 B data.]
Graph of cooperating state machines on shared stack
10
TOS Execution Model
  • commands request action
  • ack/nack at every boundary
  • call command or post task
  • events notify occurrence
  • HW interrupt at lowest level
  • may signal events
  • call commands
  • post tasks
  • tasks provide logical concurrency
  • preempted by events

[Figure: component stack annotated with execution style: application
component (data processing), active message (message-event driven),
packet (event-driven packet-pump with CRC), byte (event-driven
byte-pump with encode/decode), bit (event-driven bit-pump).]
11
Event-Driven Sensor Access Pattern
command result_t StdControl.start() {
  return call Timer.start(TIMER_REPEAT, 200);
}
event result_t Timer.fired() {
  return call sensor.getData();
}
event result_t sensor.dataReady(uint16_t data) {
  display(data);
  return SUCCESS;
}

[Figure: SENSE application wiring Timer, Photo, and LED components.]
  • clock event handler initiates data collection
  • sensor signals data ready event
  • data event handler calls output command
  • device sleeps or handles other activity while
    waiting
  • conservative send/ack at component boundary

12
TinyOS Commands and Events
... status = call CmdName(args); ...          // caller invokes a command
command CmdName(args) { ... return status; }  // callee implements it

event EvtName(args) { ... return status; }    // handler implements an event
... status = signal EvtName(args); ...        // signaler raises it
13
TinyOS Execution Contexts
  • Events generated by interrupts preempt tasks
  • Tasks do not preempt tasks
  • Both essentially process state transitions

14
Tasks
  • provide concurrency internal to a component
  • longer running operations
  • are preempted by events
  • able to perform operations beyond event context
  • may call commands
  • may signal events
  • not preempted by tasks

... post TskName(); ...
task void TskName() { ... }
15
Typical Application Use of Tasks
  • event driven data acquisition
  • schedule task to do computational portion

event result_t sensor.dataReady(uint16_t data) {
  putdata(data);
  post processData();
  return SUCCESS;
}
task void processData() {
  int16_t i, sum = 0;
  for (i = 0; i < maxdata; i++)
    sum += (rdata[i] >> 7);
  display(sum >> shiftdata);
}
  • 128 Hz sampling rate
  • simple FIR filter
  • dynamic software tuning for centering the
    magnetometer signal (1208 bytes)
  • digital control of analog, not DSP
  • ADC (196 bytes)

16
Task Scheduling
  • Currently a simple FIFO scheduler
  • Bounded number of pending tasks
  • When idle, shuts down node except clock
  • Uses a non-blocking task queue data structure
  • Simple event-driven structure + control over
    complete application/system graph
  • instead of complex task priorities and IPC
    (see the sketch below)
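
A sketch of such a scheduler in plain C (illustrative names; on a real
MCU the queue update would run with interrupts disabled): post() is
non-blocking and bounded, and the loop sleeps when the queue drains.

/* Bounded FIFO task queue with run-to-completion dispatch. */
#include <stdint.h>

#define MAX_TASKS 8
typedef void (*task_t)(void);

static task_t task_queue[MAX_TASKS];
static volatile uint8_t head, count;

static void sleep_until_interrupt(void) {
  /* platform-specific idle, e.g. the MCU sleep instruction; stubbed here */
}

/* Callable from task or interrupt context; refuses rather than blocks. */
int post(task_t t) {
  if (count == MAX_TASKS) return 0;      /* bounded number of pending tasks */
  task_queue[(head + count) % MAX_TASKS] = t;
  count++;
  return 1;
}

void scheduler_loop(void) {
  for (;;) {
    if (count == 0) {
      sleep_until_interrupt();           /* idle: everything but clock off */
    } else {
      task_t t = task_queue[head];
      head = (head + 1) % MAX_TASKS;
      count--;
      t();  /* runs to completion; only hardware events may preempt it */
    }
  }
}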

17
Maintaining Scheduling Agility
  • Need logical concurrency at many levels of the
    graph
  • While meeting hard timing constraints
  • sample the radio in every bit window
  • Retain event-driven structure throughout
    application
  • Tasks extend processing outside event window
  • All operations are non-blocking

18
The Complete Application
[Figure: full component graph for SenseToRfm. Application layer:
SenseToRfm over generic comm (IntToRfm, AMStandard); packet layer:
RadioCRCPacket, UARTnoCRCPacket, noCRCPacket; sensing path: photo,
phototemp, Timer; byte layer: MicaHighSpeedRadioM, SecDedEncode,
RandomLFSR, SPIByteFIFO; hardware at the bit level: ADC, UART, ClockC,
SlavePin.]
19
Programming Syntax
  • TinyOS 2.0 is written in an extension of C,
    called nesC
  • Applications are too
  • just additional components composed with OS
    components
  • Provides syntax for TinyOS concurrency and
    storage model
  • commands, events, tasks
  • local frame variables
  • Compositional support
  • separation of definition and linkage
  • robustness through narrow interfaces and reuse
  • interpositioning
  • Whole system analysis and optimization

20
Components
  • A component specifies a set of interfaces by
    which it is connected to other components
  • provides a set of interfaces to others
  • uses a set of interfaces provided by others
  • Interfaces are bidirectional
  • include commands and events
  • Interface methods are the external namespace of
    the component

Timer Component

provides {
  interface StdControl;
  interface Timer;
}
uses interface Clock;
21
Component Interface
  • logically related set of commands and events

StdControl.nc:
interface StdControl {
  command result_t init();
  command result_t start();
  command result_t stop();
}

Clock.nc:
interface Clock {
  command result_t setRate(char interval, char scale);
  event result_t fire();
}
22
Component Types
  • Configurations
  • link together components to compose a new
    component
  • configurations can be nested
  • the complete main application is always a
    configuration
  • Modules
  • provide code that implements one or more
    interfaces and internal behavior

23
Example of Top Level Configuration
configuration SenseToRfm {
  // this module does not provide any interface
}
implementation {
  components Main, SenseToInt, IntToRfm, ClockC, Photo as Sensor;

  Main.StdControl -> SenseToInt;
  Main.StdControl -> IntToRfm;
  SenseToInt.Clock -> ClockC;
  SenseToInt.ADC -> Sensor;
  SenseToInt.ADCControl -> Sensor;
  SenseToInt.IntOutput -> IntToRfm;
}
24
Nested Configuration
includes IntMsg;
configuration IntToRfm {
  provides {
    interface IntOutput;
    interface StdControl;
  }
}
implementation {
  components IntToRfmM, GenericComm as Comm;

  IntOutput = IntToRfmM;
  StdControl = IntToRfmM;
  IntToRfmM.Send -> Comm.SendMsg[AM_INTMSG];
  IntToRfmM.SubControl -> Comm;
}
25
IntToRfm Module
includes IntMsg;
module IntToRfmM {
  uses {
    interface StdControl as SubControl;
    interface SendMsg as Send;
  }
  provides {
    interface IntOutput;
    interface StdControl;
  }
}
implementation {
  bool pending;
  struct TOS_Msg data;

  command result_t StdControl.init() {
    pending = FALSE;
    return call SubControl.init();
  }
  command result_t StdControl.start() { return call SubControl.start(); }
  command result_t StdControl.stop()  { return call SubControl.stop(); }
  command result_t IntOutput.output(uint16_t value) {
    ...
    if (call Send.send(TOS_BCAST_ADDR, sizeof(IntMsg), &data))
      return SUCCESS;
    ...
  }
  event result_t Send.sendDone(TOS_MsgPtr msg, result_t success) {
    ...
  }
}
26
Atomicity Support in nesC
  • Split phase operations require care to deal with
    pending operations
  • Race conditions may occur when shared state is
    accessed by preemptible executions, e.g. when an
    event accesses shared state, or when a task
    updates state (preemptible by an event which then
    uses that state)
  • nesC supports an atomic block (see the sketch
    below)
  • implemented by turning off interrupts
  • for efficiency, no calls are allowed in the block
  • access to shared variables outside an atomic
    block is not allowed
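
A sketch in plain C of the interrupt-disable implementation, with the
platform hooks stubbed out (on AVR they would save and clear, then
restore, the SREG interrupt flag); nesC emits comparable code for its
atomic blocks.

#include <stdint.h>

static volatile uint8_t shared_state;

/* Hypothetical platform hooks for interrupt masking. */
static inline uint8_t irq_save(void)  { /* e.g. s = SREG; cli(); */ return 0; }
static inline void irq_restore(uint8_t f) { /* e.g. SREG = f; */ (void)f; }

void update_shared(uint8_t v) {
  uint8_t flags = irq_save();   /* begin atomic: no event can preempt */
  shared_state = v;             /* short, call-free critical section */
  irq_restore(flags);           /* end atomic: prior mask restored */
}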

27
Supporting HW Evolution
  • Distribution broken into
  • apps: top-level applications
  • tos
  • lib: shared application components
  • system: hardware-independent system components
  • platform: hardware-dependent system components;
    includes HPLs and hardware.h
  • interfaces
  • tools: development support tools
  • contrib
  • beta
  • Components designed so HW and SW look the same
  • example: temp component
  • may abstract a particular channel of the ADC on
    the microcontroller
  • may be a SW I2C protocol to a sensor board with a
    digital sensor or ADC
  • HW/SW boundary can move up and down with minimal
    changes (see the sketch below)
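
A sketch in plain C of how one temp interface can hide either backing
(all names are illustrative): clients call temp_sample() and never see
whether the data comes from an ADC channel or a software I2C read.

typedef struct {
  int (*get_data)(void);   /* split-phase request; result signaled later */
} temp_if;

static int adc_get_data(void) {
  /* would start a conversion on the microcontroller's temp ADC channel */
  return 1;
}

static int i2c_get_data(void) {
  /* would run a software I2C read of a digital sensor on a sensor board */
  return 1;
}

static const temp_if temp_adc = { adc_get_data };
static const temp_if temp_i2c = { i2c_get_data };   /* alternative backing */

#define TEMP_IMPL temp_adc   /* chosen per platform; clients are unchanged */

int temp_sample(void) { return TEMP_IMPL.get_data(); }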

28
Example Radio Byte Operation
  • Pipelines transmission: transmits byte while
    encoding next byte (see the sketch below)
  • Trades 1 byte of buffering for an easy deadline
  • Encoding task must complete before byte
    transmission completes
  • Separates high-level latencies from low-level
    real-time requirements
  • Decode must complete before next byte arrives


[Figure: timing diagram. An encode task prepares Byte 2, 3, 4 while
bit transmission shifts out Byte 1, 2, 3 over the RFM radio; each
encode completes before the previous byte finishes transmitting.]
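
The same pipeline can be sketched in plain C (illustrative names; the
XOR is a stand-in for the real SEC-DED coder): an encode task fills a
spare buffer while the previous byte shifts out, and the transmit-done
event swaps buffers.

#include <stdint.h>

static uint8_t tx_buf;        /* byte currently being transmitted */
static uint8_t next_buf;      /* encoded byte waiting its turn */
static uint8_t next_ready;

static uint8_t encode(uint8_t raw) { return raw ^ 0x55; }  /* stand-in coder */

/* Task: runs between events, preparing byte N+1 while byte N shifts out. */
void encode_task(uint8_t raw) {
  next_buf = encode(raw);
  next_ready = 1;             /* must happen before tx_byte_done() fires */
}

/* Event: radio finished shifting out tx_buf (would run from an interrupt). */
void tx_byte_done(void) {
  if (next_ready) {           /* deadline met: 1 byte of buffering suffices */
    tx_buf = next_buf;
    next_ready = 0;
    /* start transmitting tx_buf and post encode_task for the next byte */
  }
}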
29
Dynamics of Events and Threads
bit event -> end of byte -> end of packet -> end of msg send
thread posted to start send of next message
bit event filtered at byte layer
radio takes clock events to detect recv
30
Sending a Message
  • Refuses to accept the command if the buffer is
    still full or the network refuses to accept the
    send command
  • User component provides structured msg storage

31
Send done Event
event result_t IntOutput.sendDone(TOS_MsgPtr msg, result_t success) {
  if (pending && msg == &data) {
    pending = FALSE;
    signal IntOutput.outputComplete(success);
  }
  return SUCCESS;
}
  • Send done event fans out to all potential senders
  • Originator determined by match
  • free buffer on success, retry or fail on failure
  • Others use the event to schedule pending
    communication

32
Receive Event
event TOS_MsgPtr ReceiveIntMsg.receive(TOS_MsgPtr m) {
  IntMsg *message = (IntMsg *)m->data;
  call IntOutput.output(message->val);
  return m;
}
  • Active message automatically dispatched to
    associated handler
  • knows format, no run-time parsing
  • performs action on message event
  • Must return free buffer to the system
  • typically the incoming buffer if processing
    complete

33
Tiny Active Messages
  • Sending
  • declare buffer storage in a frame
  • request transmission
  • name a handler
  • handle completion signal
  • Receiving
  • declare a handler
  • firing a handler is automatic
  • Buffer management
  • strict ownership exchange (see the sketch below)
  • tx: send done event => reuse
  • rx: must return a buffer
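
A sketch in plain C of that ownership exchange on the receive path,
assuming a hypothetical two-buffer pool (message fields omitted): the
handler keeps the full buffer and returns a free one, so buffers are
swapped, never shared.

#include <stdint.h>

#define MSG_BYTES 36
typedef struct { uint8_t data[MSG_BYTES]; } msg_t;

static msg_t pool[2];
static msg_t *spare = &pool[1];   /* free buffer currently owned by the app */

/* Called by the stack with a filled buffer; must hand back a free one. */
msg_t *receive(msg_t *full) {
  msg_t *give_back = spare;   /* ownership of the spare passes to the stack */
  spare = full;               /* the app keeps the full buffer to process */
  /* a task would now process 'spare'; when done it becomes spare again */
  return give_back;
}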

34
Tasks in Low-level Operation
  • transmit packet
  • send command schedules task to calculate CRC
  • task initiates byte-level data pump
  • events keep the pump flowing
  • receive packet
  • receive event schedules task to check CRC
  • task signals packet ready if OK
  • byte-level tx/rx
  • task scheduled to encode/decode each complete
    byte
  • must take less time than the byte data transfer

35
TinyOS tools
  • TOSSIM: a simulator for TinyOS programs
  • ListenRaw, SerialForwarder: Java tools to receive
    raw packets on a PC from a base node
  • Oscilloscope: Java tool to visualize (sensor)
    data in real time
  • Memory usage: breaks down memory usage per
    component (in contrib)
  • Peacekeeper: detects RAM corruption due to stack
    overflows (in lib)
  • Stopwatch: tool to measure execution time of a
    code block by timestamping at entry and exit (in
    OSU CVS server)
  • Makedoc and graphviz: generate and visualize the
    component hierarchy
  • Surge, Deluge, SNMS, TinyDB

36
Scalable Simulation Environment
  • target platform: TOSSIM
  • whole application compiled for the host's native
    instruction set
  • event-driven execution mapped into event-driven
    simulator machinery
  • storage model mapped to thousands of virtual
    nodes
  • radio model and environmental model plugged in
  • bit-level fidelity
  • sockets = basestation
  • complete application, including GUI

37
Simulation Scaling
38
TinyOS Limitations
  • Static allocation allows for compile-time
    analysis, but can make programming harder
  • No support for heterogeneity
  • Support for other platforms (e.g. Stargate)
  • Support for high data rate apps (e.g. acoustic
    beamforming)
  • Interoperability with other software frameworks
    and languages
  • Limited visibility
  • Debugging
  • Intra-node fault tolerance
  • Robustness solved in the details of
    implementation
  • nesC offers only some types of checking

39
Em*
  • Software environment for sensor networks built
    from Linux-class devices
  • Claimed features
  • Simulation and emulation tools
  • Modular, but not strictly layered architecture
  • Robust, autonomous, remote operation
  • Fault tolerance within node and between nodes
  • Reactivity to dynamics in environment and task
  • High visibility into system: interactive access
    to all services

40
Contrasting EmStar and TinyOS
  • Similar design choices
  • programming framework
  • Component-based design
  • Wiring together modules into an application
  • event-driven
  • reactive to sudden sensor events or triggers
  • robustness
  • Nodes/system components can fail
  • Differences
  • hardware platform-dependent constraints
  • EmStar: develop without optimization
  • TinyOS: develop under severe resource constraints
  • operating system and language choices
  • EmStar: easy-to-use C language, tightly coupled
    to Linux (devfs)

41
Em* Transparently Trades off Scale vs. Reality
  • Em* code runs transparently at many degrees of
    reality: high-visibility debugging before
    low-visibility deployment

42
Em* Modularity
  • Dependency DAG
  • Each module (service)
  • Manages a resource and resolves contention
  • Has a well-defined interface
  • Has a well-scoped task
  • Encapsulates mechanism
  • Exposes control of policy
  • Minimizes work done by client library
  • Application has same structure as services

43
Em* Robustness
  • Fault isolation via multiple processes
  • Active process management (EmRun)
  • Auto-reconnect built into libraries
  • Crashproofing prevents cascading failure
  • Soft-state design style
  • Services periodically refresh clients
  • Avoids diff protocols

[Figure: EmRun supervising a process tree: scheduling over path_plan
and depth map, alongside motor_x, motor_y, and camera services.]
44
Em* Reactivity
  • Event-driven software structure
  • React to asynchronous notification
  • e.g. reaction to change in neighbor list
  • Notification through the layers
  • Events percolate up
  • Domain-specific filtering at every level
  • e.g.
  • neighbor list: membership hysteresis (see the
    sketch below)
  • time synchronization: linear fit and outlier
    rejection
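
A sketch in plain C of the neighbor-list case, with made-up
thresholds: membership changes (and thus upward events) happen only
when link quality crosses distinct add/drop bounds, so fluctuation
inside the band is filtered out.

#include <stdint.h>

#define ADD_THRESH  80   /* enter the neighbor list above this quality */
#define DROP_THRESH 40   /* leave the neighbor list below this quality */

typedef struct { uint8_t member; } neighbor_t;

/* Returns 1 iff membership changed, i.e. an event should percolate up. */
int neighbor_update(neighbor_t *n, uint8_t quality) {
  if (!n->member && quality > ADD_THRESH)  { n->member = 1; return 1; }
  if (n->member  && quality < DROP_THRESH) { n->member = 0; return 1; }
  return 0;   /* inside the hysteresis band: filtered, no notification */
}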

[Figure: notifications from motor_y pass through a notify filter in
path_plan before reaching scheduling.]
45
EmStar Components
  • Tools
  • EmRun
  • EmProxy/EmView
  • Standard IPC
  • FUSD
  • Device patterns
  • Common Services
  • NeighborDiscovery
  • TimeSync
  • Routing

46
EmView/EmProxy Visualization
47
EmSim/EmCee
  • Em* supports a variety of types of simulation and
    emulation, from simulated radio channel and
    sensors to emulated radio and sensor channels
    (ceiling array)
  • In all cases, the code is identical
  • Multiple emulated nodes run in their own spaces,
    on the same physical machine

48
EmRun Manages Services
  • Designed to start, stop, and monitor services
  • EmRun config file specifies service dependencies
  • Starting and stopping the system
  • Starts up services in correct order
  • Can detect and restart unresponsive services
  • Respawns services that die
  • Notifies services before shutdown, enabling
    graceful shutdown and persistent state
  • Error/Debug Logging
  • Per-process logging to in-memory ring buffers
  • Configurable log levels at run time

49
IPC: FUSD
  • Inter-module IPC: FUSD
  • Creates device file interfaces
  • Text/Binary on same file
  • Standard interface
  • Language independent
  • No client library required

[Figure: FUSD device-file IPC crosses the user/kernel boundary: system
calls on a device file are relayed up to a user-space server.]
50
Device Patterns
  • FUSD can support virtually any semantics
  • What happens when client calls read()?
  • But many interfaces fall into certain patterns
  • Device Patterns
  • encapsulate specific semantics
  • take the form of a library
  • objects, with method calls and callback functions
  • priority: ease of use (see the sketch below)
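
A sketch in plain C of what a status-device pattern object might look
like; this is illustrative only, not the actual EmStar/FUSD library
API.

#include <stddef.h>

typedef int (*render_cb)(char *buf, size_t len);  /* fill buf, return bytes */

typedef struct {
  const char *path;    /* device file to create, e.g. "/dev/status/temp0" */
  render_cb   render;  /* invoked whenever a client read()s the file */
} status_dev;

static int temp_render(char *buf, size_t len) {
  /* a real server would format its current state into buf here */
  (void)buf; (void)len;
  return 0;
}

static status_dev temp_dev = { "/dev/status/temp0", temp_render };
/* A pattern library would register temp_dev so that each client read()
 * triggers temp_dev.render(); the registration call is omitted because
 * the real API is not shown in these slides. */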

51
Status Device
  • Designed to report current state
  • no queuing: clients are not guaranteed to see
    every intermediate state
  • Supports multiple clients
  • Interactive and programmatic interfaces
  • ASCII output via cat
  • binary output to programs (see the sketch below)
  • Supports client notification
  • notification via select()
  • Client configurable
  • client can write a command string
  • server parses it to enable per-client behavior
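
A sketch of a programmatic client using only standard POSIX calls (the
device path is hypothetical): block in select() until the status
device signals a change, then read the current snapshot.

#include <stdio.h>
#include <unistd.h>
#include <fcntl.h>
#include <sys/select.h>

int main(void) {
  char buf[256];
  int fd = open("/dev/status/neighbors", O_RDONLY);  /* hypothetical path */
  if (fd < 0) { perror("open"); return 1; }
  for (;;) {
    fd_set rd;
    FD_ZERO(&rd);
    FD_SET(fd, &rd);
    if (select(fd + 1, &rd, NULL, NULL, NULL) < 0)   /* notification */
      break;
    ssize_t n = read(fd, buf, sizeof buf - 1);       /* current state */
    if (n <= 0) break;
    buf[n] = '\0';
    printf("state: %s", buf);
  }
  close(fd);
  return 0;
}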

52
Packet Device
  • Designed for message streams
  • Supports multiple clients
  • Supports queuing
  • Round-robin service of output queues
  • Delivery of messages to all/specific clients
  • Client-configurable
  • Input and output queue lengths
  • Input filters
  • Optional loopback of outputs to other clients
    (for snooping)

53
Device Files vs Regular Files
  • Regular files
  • Require locking semantics to prevent race
    conditions between readers and writers
  • Support status semantics but not queuing
  • No support for notification, polling only
  • Device files
  • Leverage kernel for serialization no locking
    needed
  • Arbitrary control of semantics
  • queuing, text/binary, per-client configuration
  • Immediate action, like a function call
  • system call on device triggers immediate response
    from service, rather than setting a request and
    waiting for the service to poll

54
Interacting With Em*
  • Text/Binary on same device file
  • Text mode enables interaction from shell and
    scripts
  • Binary mode enables easy programmatic access to
    data as C structures, etc.
  • EmStar device patterns support multiple
    concurrent clients
  • IPC channels used internally can be viewed
    concurrently for debugging
  • Live state can be viewed in the shell (echocat
    w) or using emview