Title: Kalman/Particle Filters Tutorial
1. Kalman/Particle Filters Tutorial
- Haris Baltzakis, November 2004
2. Problem Statement
- Examples:
- A mobile robot moving within its environment
- A vision-based system tracking cars on a highway
- Common characteristics:
- A state that changes dynamically
- The state cannot be observed directly
- Uncertainty, due to noise in the state, in the way the state changes, and in the observations
3. A Dynamic System
- Most commonly available:
- Initial state
- Observations
- System (motion) model
- Measurement (observation) model
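The four ingredients above can be sketched as a minimal simulation. This is an illustrative 1-D system of my own construction, not a model from the slides: the hidden state evolves through a noisy motion model, and we only ever see noisy observations of it.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative 1-D linear dynamic system (all names/values are assumptions):
# the state x is hidden; only the noisy observations z are available.
A = 1.0          # state transition
C = 1.0          # measurement matrix
q, r = 0.1, 0.5  # process / measurement noise standard deviations

x = 0.0          # initial (hidden) state
states, observations = [], []
for _ in range(5):
    x = A * x + rng.normal(0.0, q)   # system (motion) model
    z = C * x + rng.normal(0.0, r)   # measurement (observation) model
    states.append(x)
    observations.append(z)
```

A filter's job is to recover `states` given only `observations` and the two models.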
4. Filters
- Compute the hidden state from the observations
- "Filter": terminology from signal processing
- Can be considered as a data processing algorithm (a computer algorithm)
- Classification: discrete time vs. continuous time
- Sensor fusion
- Robustness to noise
- Each filter is designed to be optimal in some sense
5. Example: Navigating Robot with Odometry Input
- Motion model according to odometry or INS
- Observation model according to sensor measurements
- Localization -> inference task
- Mapping -> learning task
6Bayesian Estimation
Bayesian estimation Attempt to construct the
posterior distribution of the state given all
measurements.
Inference task (localization)Compute the
probability that the system is at state z at time
t given all observations up to time t Note
state only depends on previous state (first order
markov assumption)
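Under the first-order Markov assumption, this posterior takes the standard recursive Bayes-filter form (η is a normalizing constant):

```latex
Bel(x_t) \;=\; p(x_t \mid z_{1:t})
\;=\; \eta \; p(z_t \mid x_t) \int p(x_t \mid x_{t-1}) \, Bel(x_{t-1}) \, dx_{t-1}
```

The integral is the prediction step (propagate the previous belief through the motion model); the multiplication by p(z_t | x_t) is the update step.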
7. Recursive Bayes Filter
- Bayes filter
- Two steps: prediction step and update step
- Advantages over batch processing: online computation, faster, less memory, easy adaptation
- Example: two states A, B
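The two-state example can be worked through in a few lines. The transition and observation probabilities below are illustrative assumptions, not values from the slides; the prediction/update structure is the point.

```python
# Minimal two-state (A, B) recursive Bayes filter.
# Transition and observation probabilities are illustrative assumptions.
P_trans = {('A', 'A'): 0.7, ('A', 'B'): 0.3,
           ('B', 'A'): 0.4, ('B', 'B'): 0.6}   # p(x_t | x_{t-1})
P_obs = {('a', 'A'): 0.9, ('a', 'B'): 0.2,     # p(z_t | x_t)
         ('b', 'A'): 0.1, ('b', 'B'): 0.8}

def bayes_step(belief, z):
    # Prediction step: propagate the belief through the motion model.
    pred = {x: sum(P_trans[(xp, x)] * belief[xp] for xp in 'AB') for x in 'AB'}
    # Update step: weight by the observation likelihood, then normalize.
    upd = {x: P_obs[(z, x)] * pred[x] for x in 'AB'}
    eta = sum(upd.values())
    return {x: p / eta for x, p in upd.items()}

belief = {'A': 0.5, 'B': 0.5}      # uniform prior
for z in ['a', 'a', 'b']:          # online: process one observation at a time
    belief = bayes_step(belief, z)
```

Only the current belief is kept between steps, which is exactly the online/low-memory advantage over batch processing.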
8. Recursive Bayes Filter: Implementations
- How is the prior distribution represented? How is the posterior distribution calculated?
- Continuous representation:
- Gaussian distributions: Kalman filters (Kalman '60)
- Discrete representation:
- HMM: solve numerically
- Grid: (dynamic) grid-based approaches (e.g. Markov localization, Burgard '98)
- Samples: particle filters (e.g. Monte Carlo localization, Fox '99)
9. Example: State Representations for Robot Localization
- Grid-based approaches (Markov localization)
- Particle filters (Monte Carlo localization)
- Kalman tracking
10. Example: Grid-Based Localization
- Initialize the grid (uniformly or according to prior knowledge)
- At each time step, for each grid cell:
- Use the observation model to compute the likelihood of the cell
- Use the motion model and the previous probabilities to compute the predicted probability
- Normalize
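The loop above can be sketched on a tiny 1-D world. The door/wall map, the sensor probabilities, and the deterministic cyclic motion are all simplifying assumptions for illustration.

```python
import numpy as np

# 1-D grid-based localization sketch (map and sensor model are assumptions).
world = np.array([1, 0, 0, 1, 0])   # landmark map: 1 = door, 0 = wall
belief = np.full(5, 1.0 / 5)        # initialize the grid uniformly

def sense(belief, z, p_hit=0.8, p_miss=0.2):
    # Observation model: weight each cell by how well it explains z, normalize.
    w = np.where(world == z, p_hit, p_miss)
    post = w * belief
    return post / post.sum()

def move(belief, step=1):
    # Motion model: deterministic shift on a cyclic grid (kept noise-free
    # for brevity; a real filter would convolve with a motion kernel).
    return np.roll(belief, step)

belief = sense(belief, z=1)   # robot sees a door
belief = move(belief)         # robot moves one cell to the right
belief = sense(belief, z=0)   # robot sees a wall
```

After these steps the belief concentrates on the cells consistent with the door-then-wall sequence.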
11Kalman Filters - Equations
A State transition matrix (n x n) C Measurement
matrix (m x n) w Process noise (? Rn), v
Measurement noise(? Rm)
Process dynamics (motion model)
measurements (observation model)
Where
12. Kalman Filters: Update
Predict
Compute Gain
Compute Innovation
Update
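One full predict/update cycle can be written directly from the four steps above. The 1-D constant-position model and the noise covariances below are illustrative assumptions.

```python
import numpy as np

# One predict/update cycle of a linear Kalman filter.
# 1-D constant-position model; A, C, Q, R are illustrative assumptions.
A = np.array([[1.0]])    # state transition matrix
C = np.array([[1.0]])    # measurement matrix
Q = np.array([[0.01]])   # process noise covariance
R = np.array([[0.25]])   # measurement noise covariance

def kalman_step(x, P, z):
    # Predict
    x_pred = A @ x
    P_pred = A @ P @ A.T + Q
    # Compute innovation (measurement residual) and its covariance
    y = z - C @ x_pred
    S = C @ P_pred @ C.T + R
    # Compute gain
    K = P_pred @ C.T @ np.linalg.inv(S)
    # Update
    x_new = x_pred + K @ y
    P_new = (np.eye(x.shape[0]) - K @ C) @ P_pred
    return x_new, P_new

x, P = np.array([[0.0]]), np.array([[1.0]])   # prior mean and covariance
x, P = kalman_step(x, P, z=np.array([[1.2]]))
```

The updated mean lands between the prediction and the measurement, and the covariance shrinks, which is the characteristic behaviour of the update step.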
13. Kalman Filter: Example
14. Kalman Filter: Example
Predict
15. Kalman Filter: Example
Predict
16. Kalman Filter: Example
Predict
Compute Innovation
Compute Gain
17. Kalman Filter: Example
Predict
Compute Innovation
Compute Gain
Update
18. Kalman Filter: Example
Predict
19Non-Linear Case
- Kalman Filter assumes that system and
measurement processes are linear - Extended Kalman Filter -gt linearized Case
20. Example: Localization with the EKF
- Initialize the state: a Gaussian distribution centered according to prior knowledge, with large variance
- At each time step:
- Use the previous state and the motion model to predict the new state (the mean of the Gaussian changes; the variance grows)
- Compare the observations with what you expected to see from the predicted state; compute the Kalman innovation/gain
- Use the Kalman gain to update the prediction
21. Extended Kalman Filter
Project the state estimate forward (prediction step)
Predict the measurements
Compute the Kalman innovation
Compute the Kalman gain
Update the initial prediction
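The EKF follows the same five steps, with the models linearized through their Jacobians at the current estimate. The scalar functions f and h below are made-up examples, not the slides' robot model; they only show where the Jacobians F = f'(x) and H = h'(x) enter.

```python
import numpy as np

# Scalar EKF sketch for x_k = f(x_{k-1}) + w, z_k = h(x_k) + v.
# f, h, Q, R are illustrative assumptions, not the slides' robot model.
f = lambda x: x + 0.1 * np.sin(x)    # non-linear motion model
h = lambda x: x ** 2                 # non-linear observation model
F = lambda x: 1 + 0.1 * np.cos(x)    # Jacobian of f
H = lambda x: 2 * x                  # Jacobian of h
Q, R = 0.01, 0.1

def ekf_step(x, P, z):
    # Project the state estimate forward (prediction step)
    x_pred = f(x)
    P_pred = F(x) * P * F(x) + Q
    # Predict the measurement and compute the Kalman innovation
    y = z - h(x_pred)
    S = H(x_pred) * P_pred * H(x_pred) + R
    # Compute the Kalman gain
    K = P_pred * H(x_pred) / S
    # Update the initial prediction
    return x_pred + K * y, (1 - K * H(x_pred)) * P_pred

x, P = ekf_step(1.0, 0.5, z=1.5)
```

Note that the Jacobians are evaluated at the estimate, so the EKF is only as good as the local linearization.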
22. EKF Example: Motion Model for a Mobile Robot
- Synchro-drive robot
- Model range, drift, and turn errors
23Particle Filters
- Often models are non-linear and noise in non
gausian. - Use particles to represent the distribution
- Survival of the fittest
Motion model
Proposal distribution
Observation model (weight)
24Particle Filters SIS-R algorithm
- Initialize particles randomly (Uniformly or
according to prior knowledge) - At each time step
- For each particle
- Use motion model to predict new pose (sample from
transition priors) - Use observation model to assign a weight to each
particle (posterior/proposal) - Create A new set of equally weighted particles by
sampling the distribution of the weighted
particles produced in the previous step.
Sequential importance sampling
SelectionRe-sampling
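The three steps map directly onto code. The 1-D world, the unit-step motion, and the Gaussian observation likelihood below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# SIS-R particle filter sketch for a 1-D robot (models are assumptions):
# the robot moves ~1 unit per step and gets noisy position readings z.
N = 500
particles = rng.uniform(0.0, 10.0, N)   # initialize uniformly

for z in [4.0, 5.1, 6.0]:               # noisy position readings
    # Predict: sample each particle from the transition prior (motion model)
    particles = particles + 1.0 + rng.normal(0.0, 0.2, N)
    # Weight: observation model as an (unnormalized) Gaussian likelihood
    weights = np.exp(-0.5 * ((z - particles) / 0.5) ** 2)
    weights /= weights.sum()
    # Resample: draw a new equally weighted set from the weighted particles
    particles = rng.choice(particles, size=N, p=weights)
```

Resampling is the "survival of the fittest" step: particles that explain the observation well are duplicated, poorly matching ones die out.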
25. Particle Filters: Example 1
26. Particle Filters: Example 1
Use the motion model to predict a new pose (move each particle by sampling from the transition prior)
27. Particle Filters: Example 1
Use the measurement model to compute the weights (weight = observation probability)
28. Particle Filters: Example 1
Resample
29. Particle Filters: Example 2
Initialize particles uniformly
30.–34. Particle Filters: Example 2 (figures only)
35. Continuous State Approaches
Pros:
- Perform very accurately if the inputs are precise (in the linear case, performance is optimal with respect to any criterion)
- Computational efficiency
Cons:
- Require that the initial state is known
- Inability to recover from catastrophic failures
- Inability to track multiple hypotheses about the state (Gaussians have only one mode)
36. Discrete State Approaches
Pros:
- Ability (to some degree) to operate even when the initial pose is unknown (start from a uniform distribution)
- Ability to deal with noisy measurements
- Ability to represent ambiguities (multi-modal distributions)
Cons:
- Computation time scales heavily with the number of possible states (dimensionality of the grid, number of samples, size of the map)
- Accuracy is limited by the size of the grid cells / the number of particles and the sampling method
- The required number of particles is unknown
37. Thanks!
Thanks for your attention!