Title: Probability in Robotics
1. Probability in Robotics
2. Trends in Robotics Research
- Classical Robotics (mid-70s)
- exact models
- no sensing necessary
- Hybrids (since 90s)
- model-based at higher levels
- reactive at lower levels
- Probabilistic Robotics (since mid-90s)
- seamless integration of models and sensing
- inaccurate models, inaccurate sensors
3. Advantages of Probabilistic Paradigm
- Can accommodate inaccurate models
- Can accommodate imperfect sensors
- Robust in real-world applications
- Best known approach to many hard robotics problems
- Pays tribute to inherent uncertainty
- Know your own ignorance
- Scalability
- No need for perfect world model
- Relieves programmers
4. Limitations of Probability
- Computationally inefficient
- Consider entire probability densities
- Approximation
- Representing continuous probability
distributions.
5. Uncertainty Representation
6. Five Sources of Uncertainty
- Approximate Computation
- Environment Dynamics
- Random Action Effects
- Inaccurate Models
- Sensor Limitations
7. Nature of Sensor Data
Odometry Data
8. Sensor Inaccuracy
Environmental Uncertainty
9. How Do We Solve Localization Uncertainty?
- Represent beliefs as a probability density
- Markov assumption
- Pose distribution at time t is conditioned on
- pose distribution at time t-1
- movement at time t-1
- sensor readings at time t
- Discretize the density by sampling
(the resulting recursive belief update is sketched below)
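In probabilistic terms, this is the standard Bayes-filter (Markov localization) recursion, with Bel(s_t) the pose belief, a_{t-1} the movement, and z_t the sensor reading:

\[ Bel(s_t) = \eta \, p(z_t \mid s_t) \int p(s_t \mid s_{t-1}, a_{t-1}) \, Bel(s_{t-1}) \, ds_{t-1} \]

where \( \eta \) is a normalizing constant.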
10. Probabilistic Action Model
At every time step t: UPDATE each sample's new location based on the movement, then RESAMPLE the pose distribution based on the sensor readings (a minimal sample-based sketch follows below).
- Continuous probability density Bel(st) after moving 40 m (left figure) and 80 m (right figure). Darker areas have higher probability.
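A minimal sketch of this sample-based update/resample loop in Python; the 1-D motion model, the noise parameters, and the measurement_likelihood function are illustrative assumptions, not taken from the slides:

import numpy as np

def predict(samples, movement, motion_noise=0.1):
    """UPDATE step: move every pose sample and add motion noise."""
    return samples + movement + np.random.normal(0.0, motion_noise, size=samples.shape)

def measurement_likelihood(samples, z, sensor_noise=0.5):
    """Assumed sensor model: likelihood of reading z given each sample (1-D range)."""
    return np.exp(-0.5 * ((z - samples) / sensor_noise) ** 2)

def resample(samples, weights):
    """RESAMPLE step: draw samples in proportion to their weights."""
    weights = weights / weights.sum()
    idx = np.random.choice(len(samples), size=len(samples), p=weights)
    return samples[idx]

# One time step of the sample-based belief update
samples = np.random.uniform(0.0, 10.0, size=1000)   # initial Bel(s) as samples
samples = predict(samples, movement=0.4)             # apply movement a(t-1)
weights = measurement_likelihood(samples, z=4.2)      # sensor reading z(t)
samples = resample(samples, weights)                  # samples of Bel(s(t))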
11. Localization
12. Global Localization
- Localization without knowledge of the start location
13. Probabilistic Robotics: Basic Idea
- Key idea: explicit representation of uncertainty using probability theory
- Perception = state estimation
- Action = utility optimization
14. Advantages and Pitfalls
- Can accommodate inaccurate models
- Can accommodate imperfect sensors
- Robust in real-world applications
- Best known approach to many hard robotics problems
- Computationally demanding
- False assumptions
- Approximate
15. Axioms of Probability Theory
- Pr(A) denotes the probability that proposition A is true.
- 0 ≤ Pr(A) ≤ 1
- Pr(True) = 1, Pr(False) = 0
- Pr(A ∨ B) = Pr(A) + Pr(B) - Pr(A ∧ B)
16. A Closer Look at Axiom 3
17. Using the Axioms
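The derivation usually shown at this point follows directly from the axioms above:

\[ \Pr(A \lor \neg A) = \Pr(A) + \Pr(\neg A) - \Pr(A \land \neg A) \;\Rightarrow\; 1 = \Pr(A) + \Pr(\neg A) - 0 \;\Rightarrow\; \Pr(\neg A) = 1 - \Pr(A) \]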
18. Discrete Random Variables
- X denotes a random variable.
- X can take on a finite number of values in {x1, x2, ..., xn}.
- P(X = xi), or P(xi), is the probability that the random variable X takes on value xi.
- P(·) is called the probability mass function.
- E.g. (an illustrative example is given below)
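As an illustrative example (not necessarily the one on the original slide): for a fair six-sided die, P(X = i) = 1/6 for i = 1, ..., 6, and the probabilities sum to 1.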
19. Continuous Random Variables
- X takes on values in the continuum.
- p(X = x), or p(x), is a probability density function.
- E.g. [figure: a density curve p(x) plotted over x]
20. Joint and Conditional Probability
- P(X = x and Y = y) = P(x, y)
- If X and Y are independent, then P(x, y) = P(x) P(y)
- P(x | y) is the probability of x given y:
  P(x | y) = P(x, y) / P(y)
  P(x, y) = P(x | y) P(y)
- If X and Y are independent, then P(x | y) = P(x)
21. Law of Total Probability
Discrete and continuous cases (the standard formulas are given below):
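\[ P(x) = \sum_y P(x \mid y)\, P(y) \qquad \text{(discrete case)} \]
\[ p(x) = \int p(x \mid y)\, p(y)\, dy \qquad \text{(continuous case)} \]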
22. Thomas Bayes (1702-1761)
- Clergyman and mathematician who first used
probability inductively and established a
mathematical basis for probability inference
23. Bayes Formula
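The formula itself (standard Bayes rule):

\[ P(x \mid y) = \frac{P(y \mid x)\, P(x)}{P(y)} = \frac{\text{likelihood} \cdot \text{prior}}{\text{evidence}} \]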
24. Normalization
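The normalization trick typically shown here: the denominator P(y) does not depend on x, so it can be folded into a normalizer η:

\[ P(x \mid y) = \eta\, P(y \mid x)\, P(x), \qquad \eta = P(y)^{-1} = \Big(\sum_{x'} P(y \mid x')\, P(x')\Big)^{-1} \]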
25. Conditioning
- Total probability
- Bayes rule and background knowledge
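With an extra conditioning variable z (background knowledge), the same identities read:

\[ P(x \mid y) = \int P(x \mid y, z)\, P(z \mid y)\, dz \]
\[ P(x \mid y, z) = \frac{P(y \mid x, z)\, P(x \mid z)}{P(y \mid z)} \]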
26. Simple Example of State Estimation
- Suppose a robot obtains a measurement z
- What is P(open | z)?
27. Causal vs. Diagnostic Reasoning
- P(open | z) is diagnostic.
- P(z | open) is causal.
- Often causal knowledge is easier to obtain.
- Bayes rule allows us to use causal knowledge.
28. Example
- P(z | open) = 0.6, P(z | ¬open) = 0.3
- P(open) = P(¬open) = 0.5
- z raises the probability that the door is open (worked out below).
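Plugging the numbers into Bayes rule with normalization:

\[ P(open \mid z) = \frac{0.6 \cdot 0.5}{0.6 \cdot 0.5 + 0.3 \cdot 0.5} = \frac{0.30}{0.45} = \frac{2}{3} \approx 0.67 \]

so the measurement raises the belief that the door is open from 0.5 to about 0.67.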
29. Combining Evidence
- Suppose our robot obtains another observation z2.
- How can we integrate this new information?
- More generally, how can we estimate P(x | z1, ..., zn)?
30. Recursive Bayesian Updating
Markov assumption: zn is independent of z1, ..., zn-1 if we know x.
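Under this assumption the update can be applied recursively, one measurement at a time:

\[ P(x \mid z_1, \ldots, z_n) = \eta\, P(z_n \mid x)\, P(x \mid z_1, \ldots, z_{n-1}) \]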
31. Example: Second Measurement
- P(z2 | open) = 0.5, P(z2 | ¬open) = 0.6
- P(open | z1) = 2/3
- z2 lowers the probability that the door is open (worked out below).
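Applying the recursive update with these numbers:

\[ P(open \mid z_1, z_2) = \frac{0.5 \cdot \tfrac{2}{3}}{0.5 \cdot \tfrac{2}{3} + 0.6 \cdot \tfrac{1}{3}} = \frac{5}{8} = 0.625 \]

so the belief drops from about 0.67 to 0.625.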
32. Actions
- Often the world is dynamic, since
- actions carried out by the robot,
- actions carried out by other agents,
- or just time passing by
- change the world.
- How can we incorporate such actions?
33. Typical Actions
- The robot turns its wheels to move
- The robot uses its manipulator to grasp an object
- Actions are never carried out with absolute certainty.
- In contrast to measurements, actions generally increase the uncertainty.
34. Modeling Actions
- To incorporate the outcome of an action u into the current belief, we use the conditional pdf
- P(x' | u, x)
- This term specifies the probability that executing u changes the state from x to x'.
35. Example: Closing the Door
36. State Transitions
- P(x' | u, x) for u = "close door"
- If the door is open, the action "close door" succeeds in 90% of all cases.
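A plausible transition table consistent with this statement (the 90% success rate is from the slide; the assumption that an already closed door stays closed is a common convention, not stated here):

P(closed | close door, open)   = 0.9
P(open   | close door, open)   = 0.1
P(closed | close door, closed) = 1.0
P(open   | close door, closed) = 0.0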
37. Integrating the Outcome of Actions
Continuous and discrete cases (the standard formulas are given below):
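\[ P(x') = \int P(x' \mid u, x)\, P(x)\, dx \qquad \text{(continuous case)} \]
\[ P(x') = \sum_{x} P(x' \mid u, x)\, P(x) \qquad \text{(discrete case)} \]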
38. Example: The Resulting Belief
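Continuing the door example under the transition table sketched above and with the belief P(open) = 5/8, P(closed) = 3/8 from the second measurement (this particular combination is an assumption about what the original slide shows):

P(closed | u) = 0.9 · (5/8) + 1.0 · (3/8) = 15/16
P(open   | u) = 0.1 · (5/8) + 0.0 · (3/8) = 1/16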
39. Robot Environment Interaction
State transition probability
Measurement probability
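In the usual notation, with state x_t, control u_t, and measurement z_t:

\[ p(x_t \mid u_t, x_{t-1}) \qquad \text{(state transition probability)} \]
\[ p(z_t \mid x_t) \qquad \text{(measurement probability)} \]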
40. How Does All of This Relate to Sensors and Navigation?
Sensor fusion
41. Basic Statistics: Statistical Representation
Stochastic variable
Travel time, X = 5 hours ± 1 hour. X can have many different values.
Continuous: the variable can have any value within the bounds.
Discrete: the variable can have specific (discrete) values.
42. Basic Statistics: Statistical Representation
Stochastic variable
Another way of describing the stochastic variable, i.e. by another form of bounds: a probability distribution.
In 68% of cases:  x11 < X < x12
In 95% of cases:  x21 < X < x22
In 99% of cases:  x31 < X < x32
In 100% of cases: -∞ < X < ∞
The value to expect is the mean value → expected value.
How much X varies from its expected value → variance.
43. Expected Value and Variance
The standard deviation σX is the square root of the variance.
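The definitions referenced on this slide:

\[ E[X] = \mu_X = \sum_i x_i\, p(x_i) \quad\text{or}\quad \int x\, p(x)\, dx \]
\[ \mathrm{Var}(X) = \sigma_X^2 = E\big[(X - \mu_X)^2\big] \]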
44. Gaussian (Normal) Distribution
By far the most used probability distribution, because of its nice statistical and mathematical properties.
Normal distribution: 68.3%, 95%, 99%, etc.
What does it mean if a specification says that a sensor measures a distance (in mm) and has an error that is normally distributed with zero mean and σ = 100 mm?
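For reference, the density itself, and the interpretation of the question above (a fact about the normal distribution, not a claim about any specific sensor):

\[ p(x) = \frac{1}{\sigma\sqrt{2\pi}} \exp\!\Big(-\frac{(x-\mu)^2}{2\sigma^2}\Big) \]

With zero mean and σ = 100 mm, about 68.3% of the measurements fall within ±100 mm of the true distance, roughly 95% within ±200 mm, and more than 99% within ±300 mm.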
45. Estimate of the Expected Value and the Variance from Observations
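The usual sample estimators meant here, from N observations x_1, ..., x_N:

\[ \hat{\mu}_X = \frac{1}{N}\sum_{i=1}^{N} x_i, \qquad \hat{\sigma}_X^2 = \frac{1}{N-1}\sum_{i=1}^{N} (x_i - \hat{\mu}_X)^2 \]

(the 1/(N-1) form is the unbiased variant; 1/N is also common).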
46. Linear Combinations (1)
X1 ~ N(m1, σ1)
X2 ~ N(m2, σ2)
Y = X1 + X2 ~ N(m1 + m2, sqrt(σ1² + σ2²))
Since a linear combination of Gaussian variables is another Gaussian variable, Y remains Gaussian if the stochastic variables are combined linearly!
47. Linear Combinations (2)
We measure a distance with a device that has normally distributed errors.
Do we gain something by making many measurements and using the average value instead?
What will the expected value of Y be? What will the variance (and standard deviation) of Y be? If you are using a sensor that gives a large error, how would you best use it? (See the answer sketched below.)
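Answering these questions with the linear-combination rule, for the mean of N independent, identically distributed measurements X_i ~ N(m, σ):

\[ Y = \frac{1}{N}\sum_{i=1}^{N} X_i, \qquad E[Y] = m, \qquad \mathrm{Var}(Y) = \frac{\sigma^2}{N}, \qquad \sigma_Y = \frac{\sigma}{\sqrt{N}} \]

So a noisy sensor is best used by averaging repeated measurements: the standard deviation shrinks by a factor of \( \sqrt{N} \).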
48. Linear Combinations (3)
With Δd and Δa uncorrelated → V[Δd, Δa] = 0 (the covariance is zero)
di is the mean value and Δd ~ N(0, σd)
ai is the mean value and Δa ~ N(0, σa)
49. Linear Combinations (4)
D: The total distance is calculated as before, as it is simply the sum of all d's.
The expected value and the variance become (see below):
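Filling in the missing formulas with the linear-combination rule, writing the i-th measured increment as d_i + Δd_i with independent Δd_i ~ N(0, σ_d):

\[ D = \sum_{i=1}^{N} (d_i + \Delta d_i), \qquad E[D] = \sum_{i=1}^{N} d_i, \qquad \mathrm{Var}(D) = N\,\sigma_d^2 \]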
50. Linear Combinations (5)
θ: The heading angle is calculated as before, as it is simply the sum of all Δθ's, i.e. the sum of all changes in heading.
The expected value and the variance become (see below):
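Again filling in the missing formulas under the same assumptions, writing the i-th measured heading change as a_i with additive error Δa_i ~ N(0, σ_a):

\[ \theta = \sum_{i=1}^{N} (a_i + \Delta a_i), \qquad E[\theta] = \sum_{i=1}^{N} a_i, \qquad \mathrm{Var}(\theta) = N\,\sigma_a^2 \]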
What if we want to predict X and Y from our measured d's and heading changes?
51. Non-linear Combinations (1)
X(N) is the previous value of X plus the latest movement (in the X direction).
The estimate of X(N) becomes non-linear (the equation is written out below), as it contains the term d(N)·cos(θ(N-1)); for X(N) to become Gaussian distributed, the equation must be replaced with a linear approximation around the current estimate. To do this we can use a first-order Taylor expansion. By this approximation we also assume that the error is rather small!
With perfectly known θ(N-1) and Δθ(N-1) the equation would have been linear!
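The equation and its linearization that this slide refers to, written out (a standard dead-reckoning sketch consistent with the text, not copied from the original figure):

\[ X(N) = X(N-1) + d_N \cos\theta(N-1) \]

First-order Taylor expansion around the current estimates \( \hat{X}(N-1), \hat{d}_N, \hat{\theta}(N-1) \):

\[ X(N) \approx \hat{X}(N-1) + \hat{d}_N\cos\hat{\theta} + \Delta X(N-1) + \cos\hat{\theta}\,\Delta d_N - \hat{d}_N\sin\hat{\theta}\,\Delta\theta \]

where the Δ terms are the (small, zero-mean) errors.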
52. Non-linear Combinations (2)
Use a first-order Taylor expansion and linearize X(N) around the current estimate.
This equation is linear, as all error terms are multiplied by constants, and we can calculate the expected value and the variance as we did before.
53. Non-linear Combinations (3)
The variance becomes (calculated exactly as before).
Two really important things should be noticed: first, the linearization only affects the calculation of the variance; and second (which is even more important), the resulting expression is just the partial derivatives of the update equation with respect to our uncertain parameters, squared and multiplied by their variances - this is the law of error propagation spelled out on the next slide.
54. Non-linear Combinations (4)
This result is very useful: it gives an easy way of calculating the variance, known as the law of error propagation.
The partial derivatives become (see below):
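The general law of error propagation for a function \( y = f(x_1, \ldots, x_n) \) of uncorrelated uncertain variables, and the partial derivatives for the update \( f(X, d, \theta) = X + d\cos\theta \) used above:

\[ \sigma_y^2 \approx \sum_{i=1}^{n} \Big(\frac{\partial f}{\partial x_i}\Big)^2 \sigma_{x_i}^2 \]
\[ \frac{\partial f}{\partial X} = 1, \qquad \frac{\partial f}{\partial d} = \cos\theta, \qquad \frac{\partial f}{\partial \theta} = -d\sin\theta \]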
55. Non-linear Combinations (5)
The plot shows the variance of X for time steps 1, ..., 20; as can be noticed, the variance (or standard deviation) is constantly increasing.
(σd = 1/10, σΔθ = 5/360)
56-58. The Error Propagation Law
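In matrix form (the form typically presented under this title), for \( Y = f(X) \) with input covariance \( C_X \) and Jacobian \( F_X = \partial f / \partial X \):

\[ C_Y \approx F_X\, C_X\, F_X^{T} \]

which reduces to the scalar sum-of-squared-partials rule above when the variables are uncorrelated.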
59. Multidimensional Gaussian Distributions, MGD (1)
The Gaussian distribution can easily be extended to several dimensions by replacing the variance (σ²) by a covariance matrix (Σ) and the scalars (x and μX) by column vectors.
The covariance matrix (CVM) describes (consists of):
1) the variances of the individual dimensions → diagonal elements
2) the covariances between the different dimensions → off-diagonal elements
It is symmetric and positive definite.
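For reference, the multivariate density with mean vector \( \mu_X \) and covariance matrix \( \Sigma \) in n dimensions:

\[ p(x) = \frac{1}{(2\pi)^{n/2} |\Sigma|^{1/2}} \exp\!\Big(-\tfrac{1}{2}(x-\mu_X)^T \Sigma^{-1} (x-\mu_X)\Big) \]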
61. MGD (2)
The square roots of the eigenvalues give the standard deviations along the principal axes; the eigenvectors give the rotation of the uncertainty ellipses.
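Equivalently, with the eigendecomposition of the covariance matrix (a standard fact, added here for clarity):

\[ \Sigma = R\, \mathrm{diag}(\lambda_1, \lambda_2)\, R^{T} \]

the 1-σ uncertainty ellipse has semi-axes \( \sqrt{\lambda_1}, \sqrt{\lambda_2} \) oriented along the columns of the rotation matrix R.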
62. MGD (3)
The covariance between two stochastic variables is calculated as follows, for the general, discrete, and continuous cases (see below).
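\[ \mathrm{Cov}(X, Y) = E\big[(X - \mu_X)(Y - \mu_Y)\big] \]
\[ \mathrm{Cov}(X, Y) = \sum_i \sum_j (x_i - \mu_X)(y_j - \mu_Y)\, p(x_i, y_j) \qquad \text{(discrete)} \]
\[ \mathrm{Cov}(X, Y) = \iint (x - \mu_X)(y - \mu_Y)\, p(x, y)\, dx\, dy \qquad \text{(continuous)} \]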
63. MGD (4) - Non-linear Combinations
The state variables (x, y, θ) at time k+1 become (see the sketch below):
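A standard odometry-style state update consistent with this slide (an assumption about the exact form, since the original equations are in a figure), with distance increment d(k) and heading change Δθ(k):

\[ x(k+1) = x(k) + d(k)\cos\theta(k) \]
\[ y(k+1) = y(k) + d(k)\sin\theta(k) \]
\[ \theta(k+1) = \theta(k) + \Delta\theta(k) \]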
64. MGD (5) - Non-linear Combinations
We know that to calculate the variance (or covariance) at time step k+1 we must linearize Z(k+1), e.g. by a Taylor expansion - but we also know that this is done by the law of error propagation, which for matrices becomes (see below), where ∇fX and ∇fU are the Jacobian matrices (with respect to our uncertain state and input variables) of the state transition function.
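The matrix form of the error propagation law referred to here:

\[ C_Z(k+1) \approx \nabla f_X\, C_Z(k)\, \nabla f_X^{T} + \nabla f_U\, C_U\, \nabla f_U^{T} \]

A minimal NumPy sketch of this covariance prediction for the odometry model above; the Jacobians follow from differentiating that model, and the noise values are placeholders, not taken from the slides:

import numpy as np

def propagate_covariance(C, x, u, C_u):
    """One covariance prediction step: C(k+1) = Fx C Fx^T + Fu C_u Fu^T."""
    theta = x[2]
    d, dtheta = u
    # Jacobian w.r.t. the state (x, y, theta)
    Fx = np.array([[1.0, 0.0, -d * np.sin(theta)],
                   [0.0, 1.0,  d * np.cos(theta)],
                   [0.0, 0.0,  1.0]])
    # Jacobian w.r.t. the inputs (d, dtheta)
    Fu = np.array([[np.cos(theta), 0.0],
                   [np.sin(theta), 0.0],
                   [0.0,           1.0]])
    return Fx @ C @ Fx.T + Fu @ C_u @ Fu.T

# Example: assumed uncertainties for one step of 1 m straight ahead
C = np.diag([0.01, 0.01, 0.001])                 # current pose covariance (assumed)
C_u = np.diag([0.1**2, (5 * np.pi / 180)**2])    # input noise (assumed)
C_next = propagate_covariance(C, x=np.array([0.0, 0.0, 0.0]), u=(1.0, 0.0), C_u=C_u)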
65. MGD (6) - Non-linear Combinations
The uncertainty ellipses for X and Y (for time steps 1 .. 20) are shown in the figure.
66. Circular Error Problem
If we have a map: we can localize!
If we can localize: we can make a map!
NOT THAT SIMPLE!
67. Expectation-Maximization (EM) Algorithm
- Initialize: make a random guess for the lines
- Repeat:
- Find the line closest to each point and group the points into two sets (Expectation step)
- Find the best-fit lines to the two sets (Maximization step)
- Iterate until convergence
- The algorithm is guaranteed to converge to some local optimum (a sketch follows below)
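A minimal Python sketch of this two-line fitting loop (hard assignments and least-squares line fits; the data generation at the end is purely illustrative):

import numpy as np

def fit_line(points):
    """Least-squares fit y = a*x + b to an (N, 2) array of points."""
    x, y = points[:, 0], points[:, 1]
    a, b = np.polyfit(x, y, deg=1)
    return a, b

def point_line_distance(points, line):
    """Vertical residual of each point from the line y = a*x + b."""
    a, b = line
    return np.abs(points[:, 1] - (a * points[:, 0] + b))

def em_two_lines(points, iters=20, seed=0):
    rng = np.random.default_rng(seed)
    lines = [(rng.normal(), rng.normal()), (rng.normal(), rng.normal())]  # random initial guess
    for _ in range(iters):
        # E-step: assign each point to the closest line
        d = np.stack([point_line_distance(points, l) for l in lines])
        assign = np.argmin(d, axis=0)
        # M-step: refit each line to its assigned points
        for k in range(2):
            if np.sum(assign == k) >= 2:
                lines[k] = fit_line(points[assign == k])
    return lines

# Illustrative data: two noisy lines
rng = np.random.default_rng(1)
x = rng.uniform(0, 10, 100)
pts = np.vstack([np.column_stack([x[:50], 2 * x[:50] + rng.normal(0, 0.3, 50)]),
                 np.column_stack([x[50:], -x[50:] + 5 + rng.normal(0, 0.3, 50)])])
print(em_two_lines(pts))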
68-72. Example (successive iterations of the EM line fitting)
Converged!
73. Probabilistic Mapping: Maximum Likelihood Estimation
- E-step: use the current best map and the data to find the belief probabilities
- M-step: compute the most likely map based on the probabilities computed in the E-step
- Alternate the steps to get better map and localization estimates
- Convergence is guaranteed as before
74. The E-Step
- P(st | d, m) = P(st | o1, a1, ..., ot, m) · P(st | at, ..., oT, m)
75. The M-Step
- Updates the occupancy grid
- P(mxy = l | d)
76. Probabilistic Mapping
- Addresses the Simultaneous Localization and Mapping (SLAM) problem
- Robust
- Hacks for easing the computational and processing burden:
- Caching
- Selective computation
- Selective memorization
80. Markov Assumption
The future is independent of the past given the current state.
Assume a static world.
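Formally, for state x_t with measurements z and controls u (the standard statement of this assumption):

\[ p(x_t \mid x_{0:t-1}, z_{1:t-1}, u_{1:t}) = p(x_t \mid x_{t-1}, u_t), \qquad p(z_t \mid x_{0:t}, z_{1:t-1}, u_{1:t}) = p(z_t \mid x_t) \]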
81. Probabilistic Model
82. Derivation: Markov Localization
83. Mobile Robot Localization
- Proprioceptive sensors (encoders, IMU): odometry, dead reckoning
- Exteroceptive sensors (laser, camera): global and local correlation
- Scan matching
- Correlate range measurements to estimate displacement
- Can improve (or even replace) odometry - Roumeliotis TAI-14
- Previous work: vision community and Lu & Milios '97
84. Weighted Approach
- Explicit models of uncertainty and noise sources for each scan point
- Sensor noise and errors
- Range noise
- Angular uncertainty
- Bias
- Point correspondence uncertainty
- Combined uncertainties
- Improvement vs. the unweighted method
- More accurate displacement estimate
- More realistic covariance estimate
- Increased robustness to initial conditions
- Improved convergence
85. Weighted Formulation
Goal: estimate the displacement (pij, φij)
Measured range data from poses i and j, with sensor noise, bias, and true range components
Error between the k-th scan point pair (rotation by φij):
- Correspondence error
- Bias error
- Noise error
86. Covariance of Error Estimate
Covariance of the error between the k-th scan point pair, at pose i
- Sensor bias: neglected for now
87. Correspondence Error cijk
- Estimate bounds of cijk from the geometry of the boundary and the robot poses
- Max error
- Assume a uniform distribution
88. Finding Incidence Angles aik and ajk
Hough Transform:
- Fits lines to range data
- Local incidence angle estimated from the line tangent and scan angle
- Common technique in the vision community (Duda & Hart '72)
- Can be extended to fit simple curves
89. Maximum Likelihood Estimation
Likelihood of obtaining the errors eijk given the displacement
Non-linear optimization problem:
- Position displacement estimate obtained in closed form
- Orientation estimate found using 1-D numerical optimization, or series-expansion approximation methods
90. Experimental Results
Weighted vs. unweighted matching of two poses: 512 trials with different initial displacements, within ±15 degrees of the actual angular displacement and ±150 mm of the actual spatial displacement.
[Figure: initial displacements, unweighted estimates, weighted estimates]
- Increased robustness to inaccurate initial displacement guesses
- Fewer iterations for convergence
91. Unweighted vs. Weighted (figure)
92. Eight-Step, 22 Meter Path
- Displacement estimate errors at the end of the path:
- Odometry: 950 mm
- Unweighted: 490 mm
- Weighted: 120 mm
- More accurate covariance estimate
- Improved knowledge of the measurement uncertainty
- Better fusion with other sensors
93. Uncertainty From Sensor Noise