Title: Dr. Jizhong Xiao
1. Probabilistic Robotics
Advanced Mobile Robotics
- Dr. Jizhong Xiao
- Department of Electrical Engineering
- City College of New York
- jxiao_at_ccny.cuny.edu
2. Robot Navigation
Fundamental problems to provide a mobile robot with autonomous capabilities:
- Where am I going? (Mission Planning)
- What's the best way there? (Path Planning)
- Where have I been? How to create an environmental map with imperfect sensors? (Mapping)
- Where am I? How can a robot tell where it is on a map? (Localization)
- What if you're lost and don't have a map? (Robot SLAM)
3. Representation of the Environment
- Continuous Metric: x, y, θ
- Discrete Metric: metric grid
- Discrete Topological: topological graph
4. Localization: Where am I?
- Odometry, Dead Reckoning
- Localization based on external sensors, beacons, or landmarks
- Probabilistic Map-Based Localization
5. Localization Methods
- Mathematical Background: Bayes Filters
- Markov Localization
  - Central idea: represent the robot's belief by a probability distribution over possible positions, and use Bayes rule and convolution to update the belief whenever the robot senses or moves
  - Markov Assumption: past and future data are independent if one knows the current state
- Kalman Filtering
  - Central idea: pose the localization problem as a sensor fusion problem
  - Assumption: Gaussian distribution function
- Particle Filtering
  - Central idea: sample-based, nonparametric filter (Monte Carlo method)
- SLAM (simultaneous localization and mapping)
- Multi-robot localization
6. Markov Localization
- Applies probability theory to robot localization.
- Markov localization uses an explicit, discrete representation of the probability over all positions in the state space.
- This is usually done by representing the environment by a grid or a topological graph with a finite number of possible states (positions).
- During each update, the probability for each state (element) of the entire space is updated.
7. Markov Localization Example
- Assume the robot position is one-dimensional.
The robot is placed somewhere in the environment, but it is not told its location.
The robot queries its sensors and finds out it is next to a door.
8. Markov Localization Example
The robot moves one meter forward. To account for inherent noise in robot motion, the new belief is smoother.
The robot queries its sensors and again finds itself next to a door.
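The sense-move-sense sequence above can be sketched as a discrete 1D grid filter. The grid size, door positions, sensor likelihoods, and motion-noise values below are illustrative assumptions, not taken from the slides:

```python
# 1D grid-based Markov localization sketch (all numbers are assumptions).
n = 10
belief = [1.0 / n] * n        # uniform prior: the robot could be anywhere
doors = {2, 3, 7}             # hypothetical door locations on the grid

def sense(belief, hit=0.6, miss=0.2):
    # Measurement update: weight each cell by P(z = "door" | x), then normalize.
    b = [p * (hit if i in doors else miss) for i, p in enumerate(belief)]
    s = sum(b)
    return [p / s for p in b]

def move(belief):
    # Prediction: shift belief one cell right, convolved with motion noise,
    # so the belief becomes smoother (as on the slide).
    n = len(belief)
    return [0.8 * belief[(i - 1) % n] +    # moved exactly one cell
            0.1 * belief[i] +              # undershoot: stayed in place
            0.1 * belief[(i - 2) % n]      # overshoot: moved two cells
            for i in range(n)]

belief = sense(belief)   # robot sees a door: belief peaks at all door cells
belief = move(belief)    # robot moves one cell: belief shifts and smooths
belief = sense(belief)   # sees a door again: the door one cell past a door wins
```

With doors at cells 2, 3, and 7, only cell 3 is one step past another door, so after the second measurement the belief concentrates there.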
9. Probabilistic Robotics
- Falls in between model-based and behavior-based techniques
- There are models and sensor measurements, but they are assumed to be incomplete and insufficient for control
- Statistics provides the mathematical glue to integrate models and sensor measurements
- Basic Mathematics
  - Probabilities
  - Bayes rule
  - Bayes filters
10. The next slides are provided by the authors of the book "Probabilistic Robotics"; you can download them from the website
- http://www.probabilistic-robotics.org/
11. Probabilistic Robotics
Mathematical Background: Probabilities, Bayes rule, Bayes filters
12. Probabilistic Robotics
- Key idea: explicit representation of uncertainty using the calculus of probability theory
- Perception = state estimation
- Action = utility optimization
13. Axioms of Probability Theory
- Pr(A) denotes the probability that proposition A is true.
- Axiom 1: 0 ≤ Pr(A) ≤ 1
- Axiom 2: Pr(True) = 1, Pr(False) = 0
- Axiom 3: Pr(A ∨ B) = Pr(A) + Pr(B) − Pr(A ∧ B)
14. A Closer Look at Axiom 3
15. Using the Axioms
Pr(A ∨ ¬A) = Pr(A) + Pr(¬A) − Pr(A ∧ ¬A)
Pr(True) = Pr(A) + Pr(¬A) − Pr(False)
1 = Pr(A) + Pr(¬A) − 0
Pr(¬A) = 1 − Pr(A)
16. Discrete Random Variables
- X denotes a random variable.
- X can take on a countable number of values in {x1, x2, …, xn}.
- P(X = xi), or P(xi), is the probability that the random variable X takes on value xi.
- P(·) is called the probability mass function.
- The probabilities sum to one: Σi P(xi) = 1.
17. Continuous Random Variables
- X takes on values in the continuum.
- p(X = x), or p(x), is a probability density function:
  Pr(x ∈ (a, b)) = ∫ab p(x) dx
(figure: an example density p(x) plotted over x)
18. Joint and Conditional Probability
- P(X = x and Y = y) = P(x, y)
- If X and Y are independent, then P(x, y) = P(x) P(y)
- P(x | y) is the probability of x given y:
  P(x | y) = P(x, y) / P(y)
  P(x, y) = P(x | y) P(y)
- If X and Y are independent, then P(x | y) = P(x)
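A quick numeric sanity check of these identities; the joint table below is made up for illustration:

```python
# Toy joint distribution P(x, y) over x in {0, 1}, y in {0, 1} (made-up numbers).
P = {(0, 0): 0.1, (0, 1): 0.3, (1, 0): 0.2, (1, 1): 0.4}

def marginal_y(y):
    # P(y) = sum over x of P(x, y)
    return sum(p for (xi, yi), p in P.items() if yi == y)

def cond_x_given_y(x, y):
    # P(x | y) = P(x, y) / P(y)
    return P[(x, y)] / marginal_y(y)

# Product rule recovers the joint: P(x, y) = P(x | y) P(y)
assert abs(cond_x_given_y(1, 1) * marginal_y(1) - P[(1, 1)]) < 1e-12
```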
19. Law of Total Probability, Marginals
Discrete case: Σx P(x) = 1;  P(x) = Σy P(x, y);  P(x) = Σy P(x | y) P(y)
Continuous case: ∫ p(x) dx = 1;  p(x) = ∫ p(x, y) dy;  p(x) = ∫ p(x | y) p(y) dy
20. Bayes Formula
P(x | y) = P(y | x) P(x) / P(y)
If y is a new sensor reading:
- P(x): prior probability distribution
- P(x | y): posterior probability distribution
- P(y | x): generative model, characteristics of the sensor
- P(y): does not depend on x (a normalizing constant)
21. Normalization
P(x | y) = η P(y | x) P(x), with η = 1 / P(y) = 1 / Σx' P(y | x') P(x')
Algorithm:
- For all x: aux(x) = P(y | x) P(x)
- η = 1 / Σx aux(x)
- For all x: P(x | y) = η aux(x)
22. Conditioning
Total probability:
P(x) = ∫ P(x, z) dz = ∫ P(x | z) P(z) dz
23. Bayes Rule with Background Knowledge
P(x | y, z) = P(y | x, z) P(x | z) / P(y | z)
24. Conditioning
P(x | y) = ∫ P(x | y, z) P(z | y) dz
25. Conditional Independence
P(x, y | z) = P(x | z) P(y | z)
Equivalently: P(x | z) = P(x | y, z) and P(y | z) = P(y | x, z)
26. Simple Example of State Estimation
- Suppose a robot obtains measurement z.
- What is P(open | z)?
27. Causal vs. Diagnostic Reasoning
- P(open | z) is diagnostic.
- P(z | open) is causal.
- Often causal knowledge is easier to obtain.
- Bayes rule allows us to use causal knowledge:
  P(open | z) = P(z | open) P(open) / P(z)
28. Example
- P(z | open) = 0.6, P(z | ¬open) = 0.3
- P(open) = P(¬open) = 0.5
- P(open | z) = 0.6 · 0.5 / (0.6 · 0.5 + 0.3 · 0.5) = 0.3 / 0.45 = 2/3
- z raises the probability that the door is open.
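Plugging the slide's numbers into Bayes rule:

```python
# Door example: one measurement z, numbers from the slide.
p_z_open, p_z_not_open = 0.6, 0.3   # causal sensor model P(z | open), P(z | ¬open)
p_open = p_not_open = 0.5           # uniform prior

# Total probability gives the evidence term P(z).
p_z = p_z_open * p_open + p_z_not_open * p_not_open

# Bayes rule: P(open | z) = P(z | open) P(open) / P(z)
p_open_z = p_z_open * p_open / p_z
print(p_open_z)   # 2/3: z raises the probability that the door is open
```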
29. Combining Evidence
- Suppose our robot obtains another observation z2.
- How can we integrate this new information?
- More generally, how can we estimate P(x | z1, …, zn)?
30. Recursive Bayesian Updating
P(x | z1, …, zn) = P(zn | x, z1, …, zn−1) P(x | z1, …, zn−1) / P(zn | z1, …, zn−1)
Markov assumption: zn is independent of z1, …, zn−1 if we know x, so
P(x | z1, …, zn) = η P(zn | x) P(x | z1, …, zn−1)
31. Example: Second Measurement
- P(z2 | open) = 0.5, P(z2 | ¬open) = 0.6
- P(open | z1) = 2/3
- P(open | z1, z2) = (0.5 · 2/3) / (0.5 · 2/3 + 0.6 · 1/3) = 5/8 = 0.625
- z2 lowers the probability that the door is open.
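The recursive update with the slide's numbers, treating the previous posterior as the new prior:

```python
# Second measurement z2, starting from the posterior after z1.
p_z2_open, p_z2_not_open = 0.5, 0.6   # sensor model for z2
prior_open = 2 / 3                     # P(open | z1) from the previous example

# Recursive Bayesian update: η P(z2 | x) P(x | z1)
num = p_z2_open * prior_open
p_open_z1z2 = num / (num + p_z2_not_open * (1 - prior_open))
print(p_open_z1z2)   # 0.625 = 5/8: z2 lowers the probability the door is open
```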
32. Actions
- Often the world is dynamic, since
  - actions carried out by the robot,
  - actions carried out by other agents,
  - or just time passing by
  change the world.
- How can we incorporate such actions?
33. Typical Actions
- The robot turns its wheels to move
- The robot uses its manipulator to grasp an object
- Plants grow over time
- Actions are never carried out with absolute certainty.
- In contrast to measurements, actions generally increase the uncertainty.
34. Modeling Actions
- To incorporate the outcome of an action u into the current belief, we use the conditional pdf
  P(x | u, x')
- This term specifies the probability that executing u changes the state from x' to x.
35. Example: Closing the door
36. State Transitions
- P(x | u, x') for u = "close door":
- If the door is open, the action "close door" succeeds in 90% of all cases (with probability 0.1 the door stays open); a closed door stays closed.
37. Integrating the Outcome of Actions
Continuous case: P(x | u) = ∫ P(x | u, x') P(x') dx'
Discrete case: P(x | u) = Σx' P(x | u, x') P(x')
38. Example: The Resulting Belief
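A worked sketch of the resulting belief, assuming (for illustration) the prior Bel(open) = 5/8 from the two-measurement example, the 0.9 success probability of "close door" from the state-transition slide, and that a closed door stays closed:

```python
# Prediction step for u = "close door" (discrete case of the integral above).
bel_open, bel_closed = 5 / 8, 3 / 8    # assumed prior belief before the action

# Transition model P(x | u, x'): closing succeeds with prob 0.9 if the door
# is open; a closed door stays closed (assumption).
new_closed = 1.0 * bel_closed + 0.9 * bel_open   # 15/16
new_open   = 0.0 * bel_closed + 0.1 * bel_open   # 1/16
```

As expected, the action concentrates the belief on "closed" without making it certain, since the action itself is uncertain.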
39. Bayes Filters: Framework
- Given:
  - Stream of observations z and action data u
  - Sensor model P(z | x)
  - Action model P(x | u, x')
  - Prior probability of the system state P(x)
- Wanted:
  - Estimate of the state X of a dynamical system
  - The posterior of the state is also called the belief:
    Bel(xt) = P(xt | u1, z1, …, ut, zt)
40. Markov Assumption
Measurement probability: P(zt | x0:t, z1:t−1, u1:t) = P(zt | xt)
State transition probability: P(xt | x0:t−1, z1:t−1, u1:t) = P(xt | xt−1, ut)
- Markov Assumption:
  - past and future data are independent if one knows the current state
- Underlying Assumptions:
  - Static world, independent noise
  - Perfect model, no approximation errors
41. Bayes Filters
Notation: z = observation, u = action, x = state
Bel(xt) = P(xt | u1, z1, …, ut, zt) = η P(zt | xt) ∫ P(xt | ut, xt−1) Bel(xt−1) dxt−1
42. Bayes Filter Algorithm
- Algorithm Bayes_filter(Bel(x), d):
  - η = 0
  - If d is a perceptual data item z then
    - For all x do
      - Bel'(x) = P(z | x) Bel(x)
      - η = η + Bel'(x)
    - For all x do
      - Bel'(x) = η⁻¹ Bel'(x)
  - Else if d is an action data item u then
    - For all x do
      - Bel'(x) = Σx' P(x | u, x') Bel(x')
  - Return Bel'(x)
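The pseudocode maps directly onto a small discrete implementation; the two-state door world and its models below are illustrative assumptions:

```python
# Discrete Bayes filter over states {"open", "closed"} (illustrative models).
STATES = ("open", "closed")

P_Z = {("z", "open"): 0.6, ("z", "closed"): 0.3}   # sensor model P(z | x)
P_T = {  # action model P(x | u, x') for u = "close door"
    ("closed", "close", "open"): 0.9, ("open", "close", "open"): 0.1,
    ("closed", "close", "closed"): 1.0, ("open", "close", "closed"): 0.0,
}

def bayes_filter(bel, d, kind):
    if kind == "percept":                       # measurement update
        new = {x: P_Z[(d, x)] * bel[x] for x in STATES}
        eta = sum(new.values())                 # normalizer η
        return {x: new[x] / eta for x in STATES}
    else:                                       # action (prediction) update
        return {x: sum(P_T[(x, d, xp)] * bel[xp] for xp in STATES)
                for x in STATES}

bel = {"open": 0.5, "closed": 0.5}
bel = bayes_filter(bel, "z", "percept")         # door measurement: open -> 2/3
bel = bayes_filter(bel, "close", "action")      # close action: open -> 1/15
```

Each call consumes one data item d, exactly as in the pseudocode: perceptual items multiply in the likelihood and renormalize, action items push the belief through the transition model.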
43. Bayes Filters are Familiar!
- Kalman filters
- Particle filters
- Hidden Markov models
- Dynamic Bayesian networks
- Partially Observable Markov Decision Processes (POMDPs)
44. Summary
- Bayes rule allows us to compute probabilities that are hard to assess otherwise.
- Under the Markov assumption, recursive Bayesian updating can be used to efficiently combine evidence.
- Bayes filters are a probabilistic tool for estimating the state of dynamic systems.
45. Thank You
Homework 3: Exercises 2 and 3 on pp. 36-37 of the textbook
Next class: March 3, 2008