Title: Probabilistic Localization And Mapping for Mobile Robotics
1. Probabilistic Localization And Mapping for Mobile Robotics
- Ranjith Unnikrishnan
- Marc Zinck
- Monday December 3, 2001
2. Outline
- Probability in robotics
- Localization
- Mapping
- Monte-Carlo Localization
3. Probability in Robotics
- Pays tribute to inherent uncertainty: know your own ignorance
- Robustness
- Scalability
- No need for a perfect world model
- Relieves programmers of the need to anticipate every contingency
4. Limitations of Probability
- Computationally inefficient: entire probability densities must be considered
- Approximation needed to represent continuous probability distributions
5. Probabilistic Localization
(Simmons/Koenig 95; Kaelbling et al. 96; Burgard et al. 96)
6. Probabilistic Action Model
[Figure: the action model relates the previous state s_{t-1} and action a_{t-1} to the new state s_t]
- Continuous probability density Bel(s_t) after moving 40 m (left figure) and 80 m (right figure). Darker areas have higher probability.
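To make the action model concrete, here is a minimal Python sketch (not from the slides) of drawing a successor pose from p(s_t | a_{t-1}, s_{t-1}); the Gaussian noise form and its parameters are illustrative assumptions:

```python
import numpy as np

def sample_action_model(pose, distance, turn, trans_noise=0.05, rot_noise=0.02):
    """Draw one successor pose from p(s_t | a_{t-1}, s_{t-1}).

    pose: (x, y, theta); (distance, turn): the commanded action a_{t-1}.
    Noise grows with distance traveled, so Bel(s_t) is more spread out
    after 80 m than after 40 m, as in the figures above.
    """
    x, y, theta = pose
    d = distance + np.random.normal(0.0, trans_noise * distance)
    h = theta + turn + np.random.normal(0.0, rot_noise * distance)
    return (x + d * np.cos(h), y + d * np.sin(h), h)
```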
7. Probabilistic Sensor Model
[Figure: probabilistic sensor model for laser range finders]
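A hedged sketch of one common form of such a model: for each beam, a Gaussian around the range predicted from the map, mixed with a uniform term for random readings. The mixture weights and spread below are illustrative assumptions, not values from the slides:

```python
import numpy as np

def beam_likelihood(measured, expected, sigma=0.2, z_hit=0.9, z_rand=0.1, max_range=10.0):
    """Approximate p(o_t | s_t, m) for a single laser beam.

    measured: the observed range; expected: the range obtained by
    ray-casting from pose s_t into map m.
    """
    # Gaussian "hit" component centered on the map-predicted range.
    hit = np.exp(-0.5 * ((measured - expected) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))
    # Uniform "random measurement" component over the sensor's range.
    rand = 1.0 / max_range
    return z_hit * hit + z_rand * rand
```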
8. Bayes Rule
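In the state/data notation used later in the deck (s for state, d for sensor data):

$$ P(s \mid d) \;=\; \frac{P(d \mid s)\, P(s)}{P(d)} $$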
9. Law of Total Probability
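Conditioning on the state gives the denominator of Bayes rule:

$$ P(d) \;=\; \sum_{s} P(d \mid s)\, P(s) $$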
10. Markov Assumption
- The future is independent of the past given the current state
- Assume a static world
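Formally, in the deck's notation (a standard statement of the assumption):

$$ p(s_t \mid s_{t-1}, a_{t-1}, s_{t-2}, a_{t-2}, \ldots) = p(s_t \mid s_{t-1}, a_{t-1}), \qquad p(o_t \mid s_t, s_{t-1}, \ldots) = p(o_t \mid s_t) $$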
11. Probabilistic Model
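A hedged reconstruction of what this slide summarizes: the two distributions from slides 6 and 7, plus the belief that localization tracks:

$$ \underbrace{p(s_t \mid a_{t-1}, s_{t-1}, m)}_{\text{action model}}, \qquad \underbrace{p(o_t \mid s_t, m)}_{\text{sensor model}}, \qquad Bel(s_t) = p(s_t \mid o_1, a_1, \ldots, a_{t-1}, o_t) $$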
12. Derivation: Markov Localization
(Kalman 60; Rabiner 85)
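Applying Bayes rule, the law of total probability, and the Markov assumption to Bel(s_t) yields the standard recursive update, which the MCL steps later in the deck implement; here eta is a normalizing constant:

$$ Bel(s_t) \;=\; \eta\; p(o_t \mid s_t, m) \int p(s_t \mid a_{t-1}, s_{t-1}, m)\; Bel(s_{t-1})\; \mathrm{d}s_{t-1} $$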
13. Example: Markov Localization
(Burgard et al. 96; Fox 99)
14. Summary of Localization
- Localization is "the most fundamental problem to providing a mobile robot with autonomous capabilities" (I. J. Cox)
15. Where am I?
- Building a map with an accurate set of sensors: easy!
- Localization with an accurate map: simple!
- Fact: you start off with noisy sensors
16. A similar problem: Line fitting
- Goal: group a set of points into two best-fit line segments
17. Chicken-and-egg problem
- If I knew which line each point belonged to, I could compute the best-fit lines.
18. Chicken-and-egg problem
- If I knew what the two best-fit lines were, I could find out which line each point belonged to.
19. Expectation-Maximization (EM) Algorithm
- Initialize: make a random guess for the lines
- Repeat:
  - Find the line closest to each point and group the points into two sets (Expectation step)
  - Find the best-fit lines to the two sets (Maximization step)
- Iterate until convergence
- The algorithm is guaranteed to converge to a local optimum (see the sketch after this list)
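A minimal Python sketch of the procedure under simplifying assumptions (hard assignments, least-squares fits in y = ax + b form, random initialization); the function names are illustrative:

```python
import numpy as np

def fit_line(pts):
    """Least-squares fit of y = a*x + b to an (N, 2) array of points."""
    a, b = np.polyfit(pts[:, 0], pts[:, 1], 1)
    return a, b

def dist_to_line(pts, a, b):
    """Perpendicular distance from each point to the line y = a*x + b."""
    return np.abs(a * pts[:, 0] - pts[:, 1] + b) / np.sqrt(a * a + 1.0)

def em_two_lines(pts, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    lines = [(rng.normal(), rng.normal()) for _ in range(2)]  # random initial guess
    assign = None
    for _ in range(iters):
        # E-step: assign each point to its closest line.
        d = np.stack([dist_to_line(pts, a, b) for a, b in lines])
        new_assign = d.argmin(axis=0)
        if assign is not None and np.array_equal(new_assign, assign):
            break  # assignments stopped changing: a local optimum
        assign = new_assign
        # M-step: refit each line to the points assigned to it.
        lines = [fit_line(pts[assign == k]) if np.any(assign == k) else lines[k]
                 for k in range(2)]
    return lines, assign
```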
20-24. Example
[Figure sequence: successive EM iterations on the line-fitting problem, ending with "Converged!"]
25. Probabilistic Mapping: Maximum Likelihood Estimation
- E-step: use the current best map and the data to compute belief probabilities
- M-step: compute the most likely map based on the probabilities computed in the E-step
- Alternate steps to get better map and localization estimates
- Convergence is guaranteed, as before
26. The E-Step

$$ P(s_t \mid d, m) \;\propto\; \underbrace{P(s_t \mid o_1, a_1, \ldots, o_t, m)}_{\text{forward pass}} \;\cdot\; \underbrace{P(s_t \mid a_t, \ldots, o_T, m)}_{\text{backward pass}} $$
27. The M-Step
- Updates the occupancy grid: compute $P(m_{xy} = l \mid d)$ for each grid cell $\langle x, y \rangle$ and occupancy value $l$ (see the toy sketch below)
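As an illustration only, a toy stand-in for the per-cell computation: belief-weighted log-odds voting. The log-odds increments and the reduction to binary occupied/free evidence are assumptions for this sketch, not the paper's derivation:

```python
import numpy as np

def cell_occupancy(log_odds, observations, weights, l_occ=0.85, l_free=-0.4):
    """Toy M-step update for one occupancy-grid cell <x, y>.

    observations: +1 / -1 per scan that saw the cell occupied / free.
    weights: the E-step belief that the pose producing each scan is correct.
    Returns an estimate of P(m_xy = occupied | d).
    """
    for obs, w in zip(observations, weights):
        log_odds += w * (l_occ if obs > 0 else l_free)
    return 1.0 - 1.0 / (1.0 + np.exp(log_odds))  # logistic of accumulated log-odds
```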
28. Probabilistic Mapping
- Addresses the Simultaneous Localization and Mapping (SLAM) problem
- Robust
- Hacks for easing the computational and processing burden:
  - Caching
  - Selective computation
  - Selective memorization
29. Results
30. Localization flashback
- Several algorithms with vastly different properties:
  - The Kalman filter
  - Topological Markov Localization
  - Grid-based Markov Localization
  - Sampling-based methods
31. Localization flashback
- The Kalman filter
  - Concise, closed-form equations
  - Robust and accurate for position tracking
  - Does not handle non-Gaussian or non-linear motion and measurement models
  - Restricted, sub-optimal extensions with varying success
- Topological Markov Localization
- Grid-based Markov Localization
- Sampling-based methods
32. Localization flashback
- The Kalman filter
- Topological Markov Localization
  - Feature-based localization
  - Bayesian Landmark Learning (BaLL)
  - Very coarse resolution
  - Low accuracy
- Grid-based Markov Localization
- Sampling-based methods
33. Localization flashback
- The Kalman filter
- Topological Markov Localization
- Grid-based Markov Localization
  - Fine resolution by discretizing the state space
  - Very robust
  - A priori commitment to precision
  - Very high computational burden, with effects on accuracy
- Sampling-based methods
34. What is the right representation?
???
35. Sampling the Action Model
[Figure: sample sets propagated from the start pose]
- Sampling-based model of position belief
36. Sampling-based Methods
- Invented in the 70s!
- Rediscovered independently in the target-tracking, statistics, and computer vision literature:
  - Bootstrap filter
  - Monte-Carlo filter
  - Condensation algorithm
37. Monte-Carlo Localization
- Represent the probability density Bel(s_t) by a set of randomly drawn samples
- From the samples, we can always approximately reconstruct the density (e.g. a histogram)
- Reason: the discrete distribution defined by the samples approximates the desired one
- Goal: recursively compute, at each time instance t, the set of samples S_t drawn from Bel(s_t)
38. Algorithm: Prediction phase
- 1. Draw a random sample $s_{t-1}$ from the current belief $Bel(s_{t-1})$
39. Algorithm: Update phase I
- 2. For this $s_{t-1}$, guess a successor pose $s_t$ according to the distribution $p(s_t \mid a_{t-1}, s_{t-1}, m)$, forming the predicted sample set
40. Algorithm: Update phase II
- 3. Weight each predicted sample by the importance factor $m_t = p(o_t \mid s_t, m)$
41. Algorithm: Resampling
- 4. Draw each element $s_t^{(j)}$ of the weighted set with probability proportional to its weight $m^{(j)}$ to form the new set $S_t$, whose samples approximate $Bel(s_t)$
42. Algorithm
- 5. Normalize the importance factors and repeat from step (2); a sketch of the full loop follows
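Putting steps 1-5 together, a minimal sketch of one MCL iteration; `motion_model` and `sensor_model` are hypothetical user-supplied callables standing in for $p(s_t \mid a_{t-1}, s_{t-1}, m)$ and $p(o_t \mid s_t, m)$ (e.g. the sketches from slides 6 and 7):

```python
import numpy as np

def mcl_step(particles, action, observation, motion_model, sensor_model):
    """One iteration of Monte-Carlo Localization over a list of pose samples."""
    # Steps 1-2 (prediction): each sample from Bel(s_{t-1}) is pushed
    # through the action model to guess a successor pose s_t.
    predicted = [motion_model(p, action) for p in particles]
    # Step 3: weight each predicted sample by the importance factor
    # m_t = p(o_t | s_t, m).
    weights = np.array([sensor_model(observation, p) for p in predicted])
    # Step 5: normalize the importance factors (done here because the
    # resampling draw below needs a proper distribution).
    weights = weights / max(weights.sum(), 1e-300)
    # Step 4 (resampling): draw samples with probability proportional
    # to their weights; the new set approximates Bel(s_t).
    idx = np.random.choice(len(predicted), size=len(predicted), p=weights)
    return [predicted[i] for i in idx]
```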
43. Justification
- The prediction phase produces an empirical predictive density (stratified sampling) that approximates the real one
- The update phase produces an empirical posterior density (importance sampling) by weighting more likely states
- The entire procedure is called Sampling / Importance Resampling (SIR)
44. Animation
45. Monte-Carlo Localization (MCL)
- Solves the global localization and kidnapped-robot problems
- Multi-modal (unlike the Kalman filter)
- Drastic reduction in memory requirements
- More accurate than grid-based Markov localization with a fixed cell size
- Easy to implement
- Fast
46. References
- AAAI Tutorial on Probabilistic Robotics (Sebastian Thrun)
- Probabilistic Algorithms in Robotics (Thrun)
- Robust Monte Carlo Localization for Mobile Robots (Thrun, Fox, Burgard, Dellaert)
- Monte Carlo Localization for Mobile Robots (Dellaert, Fox, Burgard, Thrun)
47. Mapping with MCL
48. Performance comparison
[Figure: Monte Carlo localization vs. grid-based Markov localization]
49. Cons
- Insufficient points in key areas because of the small sample set size
- Counter-intuitively, performance degrades with better sensors
- Hack: injecting artificial noise
- Mixture-MCL: MCL with a mixture proposal distribution