Title: Wireless Mesh with Mobility
1. Wireless Mesh with Mobility
- Thomas F. La Porta (tlp_at_cse.psu.edu), Guohong Cao (gcao_at_cse.psu.edu)
- The Pennsylvania State University
- Students: Hosam Rowaihy, Mike Lin, Tim Bolbrock, Qinghua Li
- Wireless Mesh with Mobility
- Executive Summary
- Schedule
- Centralized
- Distributed
- Status
2. Wireless Mesh with Mobility: Executive Summary
- Network example 1: large retail back-room
- central server acts as database
- mobile readers (automated and with personnel) keep data fresh and respond in real time
- generalizes to large warehouses with WiFi
- Network example 2: makeshift large warehouse
- no central server; uses a distributed cache
- multi-hop communication optimized for the inventory system
- Problems
- scheduling robot movement to meet delay constraints
- locating inventory with no central controller
- Benefits to vendors
- faster customer response: inventory aggressively updated
- less expensive infrastructure: mobile readers cover large areas
3. Schedule
- Milestones
- Q1: Querying algorithms for multiple robots defined, centralized cache implemented
- Q2: Mobile mesh implemented, CacheData implemented
- Q3: Querying algorithms implemented, simulation results, CachePath implemented, caching policies
- Q4: Measurements
- Cost Share
- CISCO: consulting
- Vocollect: equipment and consulting
- Accipiter: engineering and consulting
- Platform
- Custom (small) robot
- Gumstix Linux processors
- RFID readers from Vocollect
4. Centralized Architecture
- Query algorithms
- Naïve
- Return to center
- Area of Responsibility
- Flexible Grid
5. Area of Responsibility
[Figure: heavily loaded reader with a small area of responsibility; resting circle shown]
- Areas of responsibility
- Change dynamically according to queries served (weighted moving average)
- If no reader covers a crate, the closest reader serves it (see the sketch below)
- Resting circle
- Mobile reader can reach any location within its area of responsibility in less than t seconds
- Other basic scheme: return to center
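The dynamic-AR behavior can be sketched as follows. The Reader class, the weighted-moving-average load update, and the resize rule are illustrative assumptions, not the deck's exact algorithm; the fallback to the closest reader matches the slide.

```python
import math

class Reader:
    """Mobile RFID reader with a rest point and an area of responsibility (AR)."""
    def __init__(self, x, y, ar_radius):
        self.x, self.y = x, y            # rest point / current position
        self.ar_radius = ar_radius       # AR radius around the rest point
        self.load = 0.0                  # weighted moving average of queries served

    def note_query(self, alpha=0.2):
        # Weighted moving average of recent load (assumed exponential form).
        self.load = (1 - alpha) * self.load + alpha

    def dist(self, cx, cy):
        return math.hypot(self.x - cx, self.y - cy)

def resize_areas(readers, base_radius):
    """Shrink the AR of heavily loaded readers, grow lightly loaded ones (assumed rule)."""
    total = sum(r.load for r in readers) or 1.0
    mean = total / len(readers)
    for r in readers:
        scale = mean / r.load if r.load > 0 else 1.5
        r.ar_radius = base_radius * min(max(scale, 0.5), 1.5)

def assign_query(readers, cx, cy):
    """Pick a reader whose AR covers the crate; otherwise the closest reader serves it."""
    covering = [r for r in readers if r.dist(cx, cy) <= r.ar_radius]
    chosen = min(covering or readers, key=lambda r: r.dist(cx, cy))
    chosen.note_query()
    return chosen
```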
6. Area of Responsibility
- Scenario: a query arrives for a tag located outside all areas of responsibility
- Mobile RFID reader 1 calculates that it should move
- Mobile RFID reader 1 moves
- New AR is calculated
7. Rest point
- Readers must reside on or within the circumference of the rest circle
- Center will reposition based on movement
8. Flexible Grid
- Area of responsibility center remains constant
- Circumference changes based on movement (sketched below)
- Leads to stable data distribution
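A minimal sketch of the flexible grid rule as read from this slide: AR centers stay fixed on a grid while only the radius adapts to reader movement, which keeps the query-to-reader mapping stable. The function names and the specific radius rule are assumptions.

```python
import math

def grid_centers(width, height, rows, cols):
    """Fixed AR centers laid out on a rows x cols grid over the warehouse."""
    dx, dy = width / cols, height / rows
    return [(dx * (c + 0.5), dy * (r + 0.5)) for r in range(rows) for c in range(cols)]

def flexible_radius(center, reader_pos, base_radius):
    """The circumference grows when the reader has moved away from its fixed grid center."""
    cx, cy = center
    rx, ry = reader_pos
    return max(base_radius, math.hypot(cx - rx, cy - ry))

# Example: the reader of cell 0 has wandered 400 ft from its center, so its AR expands.
centers = grid_centers(1000, 1000, 4, 4)
print(flexible_radius(centers[0], (centers[0][0] + 400, centers[0][1]), base_radius=250))
```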
9. Centralized Architecture Evaluation
- Consider both skewed and uniform queries
- Skewed queries are distributed using a burstiness algorithm to model temporal locality of queries and the Zipf distribution to model popular items (see the sketch below)
- 1,000,000 sq. ft. warehouse with 10,000 uniformly distributed RFID tags
- 1,000 queries to 4 and 16 mobile readers
- Skewed and uniform results are similar
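A minimal sketch of generating a Zipf-skewed query workload over the tag population; the skew parameter and numpy usage are illustrative assumptions, and the burstiness model for temporal locality is noted but not implemented.

```python
import numpy as np

rng = np.random.default_rng(0)
NUM_TAGS = 10_000          # RFID tags in the simulated warehouse
NUM_QUERIES = 1_000        # queries issued to the mobile readers

# Zipf popularity: tag at rank k is queried with probability ~ 1 / k^s (s is assumed).
s = 1.2
ranks = np.arange(1, NUM_TAGS + 1)
popularity = 1.0 / ranks**s
popularity /= popularity.sum()

# Draw the queried tag IDs; a burstiness model would additionally repeat
# recently queried tags, which is omitted here for brevity.
queried_tags = rng.choice(NUM_TAGS, size=NUM_QUERIES, p=popularity)
```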
10. Centralized Algorithm Delay Results
[Plots: delay for 4 robots and 16 robots]
- Naïve solution is the best
11. Centralized Algorithm Distance Results
- Naïve results are the best
12. Distributed Architecture
[Figure: warehouse of crates with mobile readers. Readers: 1. receive queries, 2. locate server, 3. return answer, 4. local cache]
- Multi-hop network may become disconnected due to mobility
- Algorithm updates required
- Connected readers run the algorithm and search for others while moving (see the connectivity sketch below)
- Results returned to query point (similar process)
- Implications
- Pre-positioning may help maintain connectivity
- Limiting movement may help maintain connectivity
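A minimal sketch of checking which readers are mutually reachable over the multi-hop mesh, assuming the 300 ft. transmission range from the evaluation setup and hypothetical reader positions; a partition test like this is one way to tell which readers a query can reach.

```python
import math
from collections import deque

TX_RANGE_FT = 300.0  # wireless transmission range from the evaluation setup

def reachable(positions, source):
    """Return the set of reader indices connected to `source` over multi-hop links."""
    def linked(i, j):
        (x1, y1), (x2, y2) = positions[i], positions[j]
        return math.hypot(x1 - x2, y1 - y2) <= TX_RANGE_FT

    seen, frontier = {source}, deque([source])
    while frontier:
        i = frontier.popleft()
        for j in range(len(positions)):
            if j not in seen and linked(i, j):
                seen.add(j)
                frontier.append(j)
    return seen

# Example: reader 2 is partitioned from readers 0 and 1.
positions = [(0, 0), (250, 0), (900, 0)]
print(reachable(positions, 0))   # {0, 1}
```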
13. Distributed Architecture Example and Analysis
[Analysis: total delay T, for the centralized case and for a fully connected network]
- Definitions: Alg = rest point (RP) or Naïve (N); Net = multi-hop (MH) or centralized (C); Type = non-reader (nr) or reader (r)
14. Distributed Architecture vs. Centralized
- More realistic case: network has some partitions
- Naïve algorithm
- AR algorithms
(we may or may not pick the optimal reader)
(based on empirical data)
15. Multihop Evaluation
- 1,000,000 sq. ft. warehouse with 10,000 RFID tags
- Skewed and uniform queries (results are similar)
- Queries now originate from query sources on the edge of the warehouse
- Wireless transmission range of 300 ft.
16. Multi-hop Results
- Flexible grid performs the best; Naïve is one of the worst
17. Multi-hop Results
- Flexible grid outperforms the other algorithms by a significant margin
18. Analysis
- Flexible grid outperforms the other algorithms by a wide margin
- Performance can be characterized by looking at the secondary distance travelled
- Secondary distance is the total distance travelled to respond to a query by readers that were not the first reader to receive the query
19. Secondary Distance
- Flexible grid has a very small secondary distance compared to other algorithms
20. Analysis
- The forced structure of the flexible grid algorithm reduces the secondary distance
- d_f: distance saved by forwarding a query
- d_f for Flexible Grid is much higher relative to the overall distance travelled
21. Analysis
- Although all algorithms begin on a grid, only the flexible grid algorithm retains the structure, which increases the efficiency of forwarding queries and reduces the average distance the reader must travel
22. Discussion
- Centralized scheme will always be the best
- Always choose optimal reader
- No extra movement
- BUT not always feasible
- Flexible Grid scheme is best in a disconnected network
- Keeps the network more connected
23. Caching
- Cache Path: keep a record of how to reach data (see the sketch below)
- This is done in all mobile robots
- Used to determine the nearest robot
- Cache Data: keep copies of data that have been gathered or forwarded
- Will greatly reduce query time
- Improvement depends on
- Cache hit/miss ratio
- Cache time-out
- Important factors
- How much information is learned
- Shortest path is not always the best for learning
- Moving more robots may be better
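A minimal sketch of the CachePath idea: each robot remembers which neighbor leads toward a data item so a query can be routed to the nearest robot believed to hold it. The PathEntry structure, identifiers, and hop-count tie-breaking are illustrative assumptions, not the implemented protocol.

```python
from dataclasses import dataclass

@dataclass
class PathEntry:
    """One CachePath record: how to reach the robot that holds a data item."""
    item_id: str      # RFID tag / data item identifier
    next_hop: str     # neighboring robot to forward the query to
    holder: str       # robot believed to hold the data
    hops: int         # estimated distance in hops

class CachePath:
    def __init__(self):
        self.table = {}   # item_id -> PathEntry

    def learn(self, item_id, next_hop, holder, hops):
        # Keep the shortest known path toward each item.
        best = self.table.get(item_id)
        if best is None or hops < best.hops:
            self.table[item_id] = PathEntry(item_id, next_hop, holder, hops)

    def route(self, item_id):
        """Return the next hop toward the nearest robot known to hold the item, if any."""
        entry = self.table.get(item_id)
        return entry.next_hop if entry else None
```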
24. Centralized Cache Policy
- Basic: time-to-live (TTL)
- Data is considered useful if it has been refreshed within time T
- Advanced: item-specific time-to-live (sketched below)
- Hot items have a lower T
- Inventory changes more frequently
- Currently set by the manager
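A minimal sketch of the item-specific TTL policy, assuming hypothetical field names; the 3600 s cold-item TTL comes from the results slides, while the hot-item POP_TTL value here is purely illustrative (the deck sweeps it on the x-axis).

```python
import time

# Assumed TTLs: cold items keep the basic 3600 s TTL, hot items expire sooner.
COLD_TTL_S = 3600
POP_TTL_S = 600          # illustrative value for hot items

class CentralCache:
    def __init__(self, hot_items):
        self.hot_items = set(hot_items)
        self.entries = {}                 # item_id -> (value, refresh_timestamp)

    def refresh(self, item_id, value):
        self.entries[item_id] = (value, time.time())

    def lookup(self, item_id):
        """Return the cached value if it was refreshed within the item's TTL, else None."""
        if item_id not in self.entries:
            return None
        value, ts = self.entries[item_id]
        ttl = POP_TTL_S if item_id in self.hot_items else COLD_TTL_S
        return value if time.time() - ts <= ttl else None
```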
25. Centralized Basic Simulation Results (Naïve Mobility)
- Query latency reduced from the non-caching case by up to 25% with a 3600-second TTL
[Plots: random queries, 4 robots and 16 robots]
26. Centralized Advanced Policy Simulation Results (Naïve Mobility)
- Query latency reduced from the non-caching case by up to 35% when hotspots are present
- Hot items have a lower TTL (POP_TTL on the x-axis), but are queried more, resulting in updated data and cache hits
- Cold items have a long TTL, so they also experience cache hits
[Plot: skewed queries, cold item TTL of 3600 seconds]
27. Distributed Cache Policies
- Active vs. passive caching (see the sketch below)
- Passive: cache only what is queried (typically a single data item)
- Active: cache everything read between the starting point and the query point
- Asymmetric caching
- Use different paths for traveling to and from the query point
- Trade-off: learning more (with active caching) vs. traveling a longer distance
- Cache exchange vs. queried item
- Queried item: only the item queried is cached in other readers
- Cache exchange: readers that come in contact exchange all data
- Trade-off: overhead of transfer vs. information learned
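A minimal sketch contrasting passive and active caching on a reader's trip to a query point; the path representation and helper function are illustrative assumptions.

```python
def serve_query(reader_cache, path_tags, queried_tag, active=True):
    """
    Simulate a reader traveling along `path_tags` (tags it passes) to answer
    a query for `queried_tag`.

    Passive: only the queried item ends up in the cache.
    Active:  every tag read along the way is cached as well.
    """
    for tag_id, tag_data in path_tags:
        if active:
            reader_cache[tag_id] = tag_data      # cache everything read en route
        if tag_id == queried_tag:
            reader_cache[tag_id] = tag_data      # the queried item is always cached
            return tag_data
    return None                                   # tag not found along this path

# Example: the active reader learns three items per trip, the passive one learns one.
path = [("tag7", "crate A"), ("tag3", "crate B"), ("tag9", "crate C")]
active_cache, passive_cache = {}, {}
serve_query(active_cache, path, "tag9", active=True)
serve_query(passive_cache, path, "tag9", active=False)
print(len(active_cache), len(passive_cache))      # 3 1
```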
28. Cache Performance Analysis: Warehouse Model
29. Cache Performance Analysis
- In general, response time is [equation], where f(T) is the probability of a cache hit, T is the cache TTL, and N is the number of tags
- λ is the rate at which data enters the cache
- For centralized: [equation]
- For distributed: [equation]
30. Analytical Results
[Plots: single robot moving; all robots moving]
- Shows impact of learning rate: more robots and higher speed → lower latency
31. Multihop Cache Simulation Results
- Flexible Grid algorithm used
- No concurrent queries (worst case)
- Only a single robot moves at any instant
- Reduces query latency by up to 25%
[Plots: 4 robots and 16 robots, random queries]
32. Multihop Cache Simulation Results
- Flexible Grid algorithm used
- Skewed queries
- No concurrent movement (worst case)
- Reduces query latency by up to 35%
[Plot: 16 robots, skewed queries]
33. Multihop Cache Simulation Results
- Concurrent queries allowed
- Multiple robots move at once
- More information being learned per unit time
- Most realistic case
- Reduces query latency by up to 70% over the case with only a single query at a time
[Plot: 16 robots, skewed queries]
34. Comparison with Goals
- Scale to networks with 10s of robots and 1,000s of nodes
- Simulations cover up to 16 robots and 10,000 tags
- Show response times in the 15-second range
- Extend RFID network lifetimes over an active tag hierarchy by a factor of 2
- No active tags used, so RFID components have no lifetime constraints
- Reduce search times by a factor of 2 over a pure RFID solution
- Pure RFID is equivalent to the single-robot case (person with a reader)
- We show greater than a factor of 2 reduction when we go from 4 to 16 robots without caching
- We show an additional factor of 3 reduction with CacheData and concurrent queries
35. Testbed
[Photos: RFID tags and robots]
36. Measurements
37. Status
- Centralized Architecture
- Architecture defined
- Querying algorithms in place and simulated
- Integration with robots and centralized cache complete
- Distributed Architecture
- Architecture defined
- Mesh formation algorithms designed
- Simulation complete
- Integration with robots and distributed cache complete
- Caching
- CachePath in system
- CacheData analyzed and implemented in simulator
- Simulation complete
- Porting to robots complete
- Robots
- Design and implementation complete
- RFID equipment from Vocollect integrated
- Integration with Querying and Caching complete