Title: Inferring High-Level Behavior from Low-Level Sensors
1. Inferring High-Level Behavior from Low-Level Sensors
- Donald J. Patterson, Lin Liao, Dieter Fox, Henry Kautz
- Published in UBICOMP 2003
- ICS 280
2. Main References
- Voronoi Tracking: Location Estimation Using Sparse and Noisy Sensor Data (Liao L., Fox D., Hightower J., Kautz H., Schulz D.) In International Conference on Intelligent Robots and Systems (IROS) 2003
- Inferring High-Level Behavior from Low-Level Sensors (Patterson D., Liao L., Fox D., Kautz H.) In UBICOMP 2003
- Learning and Inferring Transportation Routines (Liao L., Fox D., Kautz H.) In AAAI 2004
3. Outline
- Motivation
- Problem Definition
- Modeling and Inference
- Dynamic Bayesian Networks
- Particle Filtering
- Learning
- Results
- Conclusions
4. Motivation
- ACTIVITY COMPASS: software which indirectly monitors your activity and offers proactive advice to aid in successfully accomplishing inferred plans.
- Healthcare Monitoring
- Automated Planning
- Context-Aware Computing Support
5. Research Goal
- To bridge the gap between sensor data and symbolic reasoning.
- To allow sensor data to help interpret symbolic knowledge.
- To allow symbolic knowledge to aid sensor interpretation.
6. Executive Summary
- GPS data collection
  - 3 months of one user's daily life
- Inference Engine
  - Infers location and transportation mode on-line, in real time
- Learning
  - Transportation patterns
- Results
  - Better predictions
  - Conceptual understanding of routines
7. Outline
- Motivation
- Problem Definition
- Modeling and Inference
- Dynamic Bayesian Networks
- Particle Filtering
- Learning
- Results
- Conclusions
8. Tracking on a Graph
- Tracking a person's location and mode of transportation using street maps and GPS sensor data.
- Formally, the world is modeled as a graph G = (V, E), where
  - V is a set of vertices (intersections)
  - E is a set of directed edges (roads/footpaths)
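This graph model can be sketched in code as follows (an illustrative sketch only; the class, edge IDs, and lengths are my own invention, not the authors' implementation):

```python
# Directed street graph G = (V, E): vertices are intersections,
# edges are road/footpath segments between them.
from collections import defaultdict

class StreetGraph:
    def __init__(self):
        self.out_edges = defaultdict(list)  # vertex -> outgoing edge ids
        self.edge_ends = {}                 # edge id -> (from_vertex, to_vertex, length_m)

    def add_edge(self, eid, u, v, length_m):
        self.out_edges[u].append(eid)
        self.edge_ends[eid] = (u, v, length_m)

    def successors(self, eid):
        """Edges that can be entered after traversing edge eid."""
        _, v, _ = self.edge_ends[eid]
        return self.out_edges[v]

g = StreetGraph()
g.add_edge("e1", "A", "B", 120.0)
g.add_edge("e2", "B", "C", 80.0)
assert g.successors("e1") == ["e2"]
```

Directed edges matter here because a one-way street, or a bus route, is traversable in only one direction.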
9. Example
10. Outline
- Motivation
- Problem Definition
- Modeling and Inference
- Dynamic Bayesian Networks
- Particle Filtering
- Learning
- Results
- Conclusions
11. State Space
- Location: L = (L_s, L_p)
  - L_s: which street the user is on
  - L_p: position on that street
- Velocity: V
- GPS Offset Error: O = (O_x, O_y)
- Transportation Mode: M ∈ {BUS, CAR, FOOT}
- Full state: X = (L_s, L_p, V, O_x, O_y, M)
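The state vector on this slide can be written as a simple record (a sketch; the field names and units are my own choices):

```python
from dataclasses import dataclass

@dataclass
class State:
    """The full state X = (L_s, L_p, V, O_x, O_y, M) from the slide."""
    edge: str          # L_s: which street (graph edge) the user is on
    position: float    # L_p: position along that edge, meters
    velocity: float    # V: speed, m/s
    offset_x: float    # O_x: GPS offset error (east), meters
    offset_y: float    # O_y: GPS offset error (north), meters
    mode: str          # M in {"BUS", "CAR", "FOOT"}

x = State(edge="e1", position=35.0, velocity=1.4,
          offset_x=2.1, offset_y=-0.7, mode="FOOT")
assert x.mode in {"BUS", "CAR", "FOOT"}
```

Note that location is expressed relative to the graph (edge plus offset along it), not as raw latitude/longitude; that is what lets inference run on the street network.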
12. GPS as a Sensor
- GPS is not a trivial location sensor to use
- GPS has inherent inaccuracies
  - Atmospherics
  - Satellite geometry
  - Multi-path propagation errors
  - Signal blockages
- Using the data is even harder
  - Resolution: 15 m
  - Coordinate mismatches
13. Dynamic Bayesian Networks
- Extension of a Markov model
- Statistical model which handles
  - Sensor error
  - Enormous but structured state spaces
- Probabilistic
- Temporal
- A single framework to manage all levels of abstraction
14. Model (I)
15. Model (II)
16. Model (III)
17. Dependencies
18. Inference
We want to compute the posterior density over the current state given all GPS observations so far.
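In standard Bayes-filter notation (my rendering; the slide showed the equation as an image), the target posterior and its recursive update are:

```latex
p(x_t \mid z_{1:t}) \;\propto\; p(z_t \mid x_t)
  \int p(x_t \mid x_{t-1})\, p(x_{t-1} \mid z_{1:t-1})\, dx_{t-1}
```

The integral is intractable over this hybrid discrete/continuous state space, which is why the next slides turn to particle filtering.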
19. Inference
- Particle Filtering
  - A technique for solving DBNs
  - Approximate solutions
  - Stochastic / Monte Carlo
- In our case, a particle represents an instantiation of the random variables describing
  - the transportation mode m_t
  - the location l_t (actually the edge e_t)
  - the velocity v_t
20. Particle Filtering
- Step 1 (SAMPLING)
  - Draw n samples x_{t-1} from the previous set S_{t-1} and generate n new samples x_t according to the dynamics p(x_t | x_{t-1}) (i.e. the motion model)
- Step 2 (IMPORTANCE SAMPLING)
  - Assign each sample x_t an importance weight according to the likelihood of the observation z_t: w_t ∝ p(z_t | x_t)
- Step 3 (RE-SAMPLING)
  - Draw samples with replacement according to the distribution defined by the importance weights w_t
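The three steps can be sketched as a generic particle-filter cycle (illustrative only; the toy 1-D motion and sensor models below are stand-ins for the graph-based models in the talk):

```python
import math
import random

random.seed(0)  # reproducible toy run

def particle_filter_step(particles, z, motion_model, likelihood, rng=random):
    """One sampling / importance-weighting / re-sampling cycle.

    particles: list of states; z: current observation.
    motion_model(x) samples from p(x_t | x_{t-1});
    likelihood(z, x) evaluates p(z_t | x_t), used as the importance weight.
    """
    # Step 1 (sampling): propagate each particle through the dynamics.
    proposed = [motion_model(x) for x in particles]
    # Step 2 (importance sampling): weight by observation likelihood.
    weights = [likelihood(z, x) for x in proposed]
    total = sum(weights)
    weights = [w / total for w in weights]
    # Step 3 (re-sampling): draw with replacement by weight.
    return rng.choices(proposed, weights=weights, k=len(proposed))

# Toy example: state is a 1-D position, observation is a noisy position.
motion = lambda x: x + random.gauss(1.0, 0.5)             # drift ~1 unit/step
lik = lambda z, x: math.exp(-0.5 * ((z - x) / 2.0) ** 2)  # Gaussian sensor
pts = [random.gauss(0.0, 5.0) for _ in range(500)]        # diffuse prior
for z in [1.0, 2.0, 3.0]:
    pts = particle_filter_step(pts, z, motion, lik)
mean = sum(pts) / len(pts)  # posterior mean estimate, near the last observation
```

In the talk's setting a particle is the tuple (e_t, position, v_t, m_t) and the motion model is the graph-constrained one on the next slide.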
21. Motion Model p(x_t | x_{t-1})
- Advancing particles along the graph G
- Sample transportation mode m_t from the distribution p(m_t | m_{t-1}, e_{t-1})
- Sample velocity v_t from the density p(v_t | m_t), a mixture of Gaussian densities (see picture)
- Sample the location using the current velocity
  - Draw at random the traveled distance d (from a Gaussian density centered at v_t). If the distance implies an edge transition, then we select the next edge e_t with probability p(e_t | e_{t-1}, m_{t-1}). Otherwise we stay on the same edge: e_t = e_{t-1}
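One motion-model sampling step can be sketched as follows. All the tables and parameters here (mode-transition matrix, per-mode speeds, edge lengths) are invented for illustration, and a single Gaussian stands in for the slide's mixture of Gaussians:

```python
import random

# Invented example parameters.
MODE_TRANS = {"FOOT": {"FOOT": 0.9, "BUS": 0.1},       # p(m_t | m_{t-1}, e_{t-1})
              "BUS":  {"BUS": 0.95, "FOOT": 0.05}}
SPEED = {"FOOT": (1.4, 0.3), "BUS": (8.0, 2.0)}        # (mean, std) m/s per mode
EDGE_LEN = {"e1": 100.0, "e2": 100.0}
EDGE_TRANS = {("e1", "FOOT"): {"e2": 1.0}, ("e1", "BUS"): {"e2": 1.0},
              ("e2", "FOOT"): {"e1": 1.0}, ("e2", "BUS"): {"e1": 1.0}}

def sample_motion(edge, pos, mode, dt=1.0, rng=random):
    # 1. Sample mode m_t from p(m_t | m_{t-1}, e_{t-1}).
    dist = MODE_TRANS[mode]
    m_t = rng.choices(list(dist), weights=list(dist.values()))[0]
    # 2. Sample velocity v_t from p(v_t | m_t).
    mu, sigma = SPEED[m_t]
    v_t = max(0.0, rng.gauss(mu, sigma))
    # 3. Sample the traveled distance d from a Gaussian centered at v_t * dt
    #    and advance along the graph.
    d = max(0.0, rng.gauss(v_t * dt, 1.0))
    pos += d
    if pos > EDGE_LEN[edge]:                           # edge transition
        pos -= EDGE_LEN[edge]
        nxt = EDGE_TRANS[(edge, mode)]                 # p(e_t | e_{t-1}, m_{t-1})
        edge = rng.choices(list(nxt), weights=list(nxt.values()))[0]
    return edge, pos, m_t, v_t

e, p, m, v = sample_motion("e1", 95.0, "FOOT")
```

Note that the edge transition is conditioned on the *previous* mode m_{t-1}, as on the slide: whether you can turn onto a footpath depends on how you were moving when you reached the intersection.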
22. Animation
Play short video clip
23. Outline
- Motivation
- Problem Definition
- Modeling and Inference
- Dynamic Bayesian Networks
- Particle Filtering
- Learning
- Results
- Conclusions
24. Learning
- We want to learn from history the components of the motion model:
- p(e_t | e_{t-1}, m_{t-1}) is the transition probability on the graph, conditioned on the mode of transportation just prior to transitioning to the new edge
- p(m_t | m_{t-1}, e_{t-1}) is the transportation mode transition probability. It depends on the previous mode m_{t-1} and the location of the person, described by the edge e_{t-1}
- Use a Monte Carlo version of the EM algorithm
25. Learning
- At each iteration the algorithm performs both a forward and a backward (in time) particle filtering step.
- During each forward and backward filtering step, the algorithm counts the number of particles transiting between the different edges and modes.
- To obtain probabilities for the different transitions, the counts of the forward and backward passes are normalized and then multiplied at the corresponding time slices.
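The "normalize, then multiply at matching time slices" step can be sketched numerically (toy counts; in the real algorithm these come from the two filtering passes at one time slice):

```python
# Combine forward and backward particle counts at one time slice into
# a probability distribution: normalize each pass, multiply pointwise,
# then renormalize the product.
def combine_counts(fwd_counts, bwd_counts):
    """fwd_counts/bwd_counts: dict mapping a state (e.g. an edge) to the
    number of particles there in the forward / backward pass."""
    f_tot = sum(fwd_counts.values())
    b_tot = sum(bwd_counts.values())
    scores = {s: (fwd_counts.get(s, 0) / f_tot) * (bwd_counts.get(s, 0) / b_tot)
              for s in set(fwd_counts) | set(bwd_counts)}
    z = sum(scores.values())
    return {s: v / z for s, v in scores.items()}

probs = combine_counts({"e1": 80, "e2": 20}, {"e1": 50, "e2": 50})
# Here the backward pass is uninformative (50/50), so the forward
# proportions survive: probs["e1"] == 0.8.
```

Multiplying the two passes is what makes the estimate a smoothed one: it uses evidence from both before and after each time slice, as in standard forward-backward smoothing.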
26. Implementation Details (I)
- α_t(e_t, m_t)
  - the number of particles on edge e_t and in mode m_t at time t in the forward pass of particle filtering
- β_t(e_t, m_t)
  - the number of particles on edge e_t and in mode m_t at time t in the backward pass of particle filtering
- ξ_{t-1}(e_t, e_{t-1}, m_{t-1})
  - the probability of transiting from edge e_{t-1} to e_t at time t-1 while in mode m_{t-1}
- ψ_{t-1}(m_t, m_{t-1}, e_{t-1})
  - the probability of transiting from mode m_{t-1} to m_t on edge e_{t-1} at time t-1
27. Implementation Details (II)
After we have the edge- and mode-transition probabilities for all t from 2 to T, we can update the parameters as follows:
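The slide's update equations (7)-(8) were shown as images. Written out, a plausible reconstruction, consistent with standard EM parameter updates, is the following, where ξ_{t-1}(e_t, e_{t-1}, m_{t-1}) and ψ_{t-1}(m_t, m_{t-1}, e_{t-1}) denote the per-slice edge- and mode-transition probabilities defined on the previous slide (the original Greek symbols were lost in extraction; ξ and ψ are my labels):

```latex
\hat p(e_t \mid e_{t-1}, m_{t-1}) =
  \frac{\sum_{t=2}^{T} \xi_{t-1}(e_t, e_{t-1}, m_{t-1})}
       {\sum_{t=2}^{T} \sum_{e'} \xi_{t-1}(e', e_{t-1}, m_{t-1})}
\qquad
\hat p(m_t \mid m_{t-1}, e_{t-1}) =
  \frac{\sum_{t=2}^{T} \psi_{t-1}(m_t, m_{t-1}, e_{t-1})}
       {\sum_{t=2}^{T} \sum_{m'} \psi_{t-1}(m', m_{t-1}, e_{t-1})}
```

That is, each conditional probability is the fraction of expected transitions out of a given (edge, mode) context that land on the given successor, summed over the whole trace.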
28. Implementation Details (III)
29. E-step
- Generate n uniformly distributed samples
- Perform forward particle filtering
  - Sampling: generate n new samples from the existing ones using the current parameter estimates p(e_t | e_{t-1}, m_{t-1}) and p(m_t | m_{t-1}, e_{t-1}).
  - Re-weight each sample, re-sample, count and save α_t(e_t, m_t).
  - Move to the next time slice (t ← t+1).
- Perform backward particle filtering
  - Sampling: generate n new samples from the existing ones using the backward parameter estimates p(e_{t-1} | e_t, m_t) and p(m_{t-1} | m_t, e_t).
  - Re-weight each sample, re-sample, count and save β_t(e_t, m_t).
  - Move to the previous time slice (t ← t-1).
30. M-step
- Compute ξ_{t-1}(e_t, e_{t-1}, m_{t-1}) and ψ_{t-1}(m_t, m_{t-1}, e_{t-1}) using (5) and (6), then normalize.
- Update p(e_t | e_{t-1}, m_{t-1}) and p(m_t | m_{t-1}, e_{t-1}) using (7) and (8).
LOOP: Repeat the E-step and M-step with the updated parameters until the model converges.
31. Outline
- Motivation
- Problem Definition
- Modeling and Inference
- Dynamic Bayesian Networks
- Particle Filtering
- Learning
- Results
- Conclusions
32. Dataset
- Single user
- 3 months of daily life
- Collected GPS position and velocity data
  - at 2- and 10-second sample intervals
- Evaluation data
  - 29 trips, 12 hours of logs
  - All continuous outdoor data
  - Divided chronologically into 3 cross-validation groups
33. Goals
- Mode Estimation and Prediction
  - Learning a motion model able to estimate and predict the mode of transportation at any given instant.
- Location Prediction
  - Learning a motion model able to predict the location of the person into the future.
34. Results: Mode Estimation
35. Results: Mode Prediction
- Evaluate the ability to predict transitions between transportation modes.
- The table shows the accuracy in predicting a qualitative change in transportation mode within 60 seconds of the actual transition (e.g. correctly predicting that the person gets off the bus).
- PRECISION: percentage of the time when the algorithm predicts a transition that will actually occur.
- RECALL: percentage of real transitions that were correctly predicted.
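These two metrics can be computed from lists of predicted and actual transition times (a sketch with made-up event times; the 60-second window matches the slide):

```python
def precision_recall(predicted, actual, tolerance=60):
    """predicted/actual: lists of mode-transition times, in seconds.
    A predicted transition counts as correct if a real transition
    occurs within `tolerance` seconds of it, and vice versa."""
    hits_pred = sum(any(abs(p - a) <= tolerance for a in actual) for p in predicted)
    hits_real = sum(any(abs(a - p) <= tolerance for p in predicted) for a in actual)
    precision = hits_pred / len(predicted) if predicted else 0.0
    recall = hits_real / len(actual) if actual else 0.0
    return precision, recall

# Predictions at t=100, 400, 900 s; real transitions at t=110, 905 s.
p, r = precision_recall([100, 400, 900], [110, 905])
# The t=400 prediction has no real transition nearby, so precision is 2/3;
# both real transitions were predicted, so recall is 1.0.
```

Reporting both matters: a model that predicts a transition every second would get perfect recall but terrible precision.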
36. Results: Mode Prediction
37. Results: Location Prediction
38. Results: Location Prediction
39. Conclusions
- We developed a single integrated framework to reason about transportation plans
  - Probabilistic
  - Successfully manages systematic GPS error
- We integrate sensor data with high-level concepts such as bus stops
- We developed an unsupervised learning technique which greatly improves performance
- Our results show high predictive accuracy and interesting conceptual conclusions
40. Possible Future Work
- Craig's cookie framework may provide the low-level sensor information.
- Try to formalize Craig's problem in the context of dynamic probabilistic systems.