1
CS 154 today
Particle Filters and MCL

Hw 3 (due Thursday, 2/28): 1) project update  2) Set
Hw 4: 1) project demo  2) MCL
Evidence/occupancy grids; Bayes Rules

This hw's papers:
  "Robots, After All" -- H. Moravec (The Economist, 1/06)
  "Robot Evidence Grids" -- M. C. Martin and H. Moravec
  "Robot: Mere Machine to Transcendent Mind" -- H. Moravec
2
CS 154 Topic Outline
  • Low-level robotics
  • architecture
  • motors/actuators
  • sensors
  • cameras-as-sensors
  • visual control via motion
  • Spatial Reasoning
  • reasoning with uncertainty
  • filtering and state estimation
  • localization
  • mapping
  • landmarks and vision
  • Vision
  • complex feature extraction
  • 3d reconstruction
  • Spatial Planning
  • configuration space
  • kinematics, dynamics
  • path planning

4 wks: Low-level robotics -- "What am I?" (based on sensor readings)
5 wks: Spatial Reasoning -- "Where am I?"
2 wks: Vision -- "Is seeing believing?"
3 wks: Spatial Planning -- "How do I get there?"
3
Combining evidence
Update step: multiply the previous odds by a precomputed weight.
evidence = log( odds )
add vs. multiply
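A minimal sketch of this update for one grid cell, with made-up weights: working in log-odds ("evidence") turns the per-reading multiplication of odds into addition.

    import math

    def log_odds(p):
        # probability -> log-odds ("evidence")
        return math.log(p / (1.0 - p))

    cell = log_odds(0.5)                # uniform prior: evidence 0
    for w in (0.9, 0.8, 0.3):           # precomputed per-reading weights (illustrative values)
        cell += log_odds(w)             # adding evidence == multiplying odds

    p = math.exp(cell) / (1.0 + math.exp(cell))   # back to a probability
    print(p)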
4
Interest Operator
Started with the Cart: small regions of pixels with high intensity variance were chosen.
Areas with high color variance are now used.
3 cameras indicated by different colors...
Professional web-writeup for ARPA grant reporting:
http://www.frc.ri.cmu.edu/hpm/project.archive/robot.papers/2002/ARPA.MARS/Report.0202.html
5
The key to successful projects!
visualization bandwidth
status:
  three images
  individual and composite match curves
  histogram of nearby match scores (used in adding grid evidence)
  range scale
  locations of the three cameras
  range probability curves vs. image noise model (they use .5)
where doesn't it work?
finding correspondence between feature points in multiple images
7
Learning the sensor model via color
The sensor model maps range readings to occupancy...
Two types of voxel errors:
  • Voxels considered present, but actually are not there (not part of any object) -- these will be colored by the object(s) in the background
  • Voxels considered absent, but actually are there (a part of some object) -- these will contribute their color to various background pixels
8
Learning the sensor model
Learning a sensor model by minimizing color
variance...
9
Results
more than just a point cloud
A small fraction (90,000 cells) of the dense environmental representation (4,000,000 cells) available!
A planned path in 3d (A)
10
Results
11
Results
A lab environment
voxel movie... tdgridLabColorLarsMovie.gif
12
Time to go public... (2004)
13
Time to go public... (2005)
14
Time to go public... (2007-8)
2006?
15
Technical details
Inviting high-level thinking about the approach.
Disadvantages?
16
Hans Moravec and the Antarctic Rover
A Kurzweilesque graph
17
Spatial Representations
the evolution of evidence grids
Individual Points
2d maps
3d maps
18
Robot Localization
Where am I?
Rhino's home, Bonn
19
Robot Localization
Where am I?
?
robot tracking -- the local problem
robot kidnapping -- the global problem
20
What's the problem?
only local data! (even perfect data)
robot tracking -- the local problem
robot kidnapping -- the global problem
21
What's the problem?
direct map-matching can be overwhelming
robot tracking -- the local problem
robot kidnapping -- the global problem
22
Monte Carlo Localization
Key idea: keep track of a probability distribution for where the robot might be in the known map.
Where's this?
(figure: the distribution evolves from the initial (uniform) distribution through intermediate stages to the final distribution; particles colored black - blue - red - cyan)
But how?
26
Deriving MCL
  • Sebastian Thrun
  • Wolfram Burgard
  • Dieter Fox

Bag o' tricks
  • Bayes rule:
      p( A | B ) = p( B | A ) p( A ) / p( B )
  • Definition of conditional probability:
      p( A ∧ B ) = p( A | B ) p( B )
  • Definition of marginal probability:
      p( A ) = Σ over all B of p( A ∧ B )
      p( A ) = Σ over all B of p( A | B ) p( B )
What are these saying?
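A quick numeric sanity check of these three identities, on a 2x2 joint distribution whose values I made up (they just need to sum to 1):

    # joint distribution p(A, B) over A in {0, 1}, B in {0, 1}
    joint = {(0, 0): 0.1, (0, 1): 0.2, (1, 0): 0.3, (1, 1): 0.4}

    p_A1 = sum(p for (a, b), p in joint.items() if a == 1)   # marginal p(A=1) = 0.7
    p_B1 = sum(p for (a, b), p in joint.items() if b == 1)   # marginal p(B=1) = 0.6
    p_A1_given_B1 = joint[(1, 1)] / p_B1                     # conditional p(A=1 | B=1)
    p_B1_given_A1 = joint[(1, 1)] / p_A1                     # conditional p(B=1 | A=1)

    # Bayes rule: p(A|B) == p(B|A) p(A) / p(B)
    assert abs(p_A1_given_B1 - p_B1_given_A1 * p_A1 / p_B1) < 1e-12
    # conditional probability: p(A ∧ B) == p(A|B) p(B)
    assert abs(joint[(1, 1)] - p_A1_given_B1 * p_B1) < 1e-12
    # marginal probability: p(A) == Σ over B of p(A|B) p(B)
    p_A1_marg = sum((joint[(1, b)] / pb) * pb for b, pb in [(0, 1 - p_B1), (1, p_B1)])
    assert abs(p_A1 - p_A1_marg) < 1e-12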
29
Setting up the problem
The robot does (or can be modeled to) alternate between
  • sensing -- getting range observations o1, o2, o3, ..., ot-1, ot  (local maps -- whence?)
  • acting -- driving around (or ferrying?) a1, a2, a3, ..., at-1

We want to know rt -- the position of the robot at time t
  • but we'll settle for p( rt ) -- a probability distribution for rt

What kind of thing is p( rt ) ?
We do know (or will know) m -- the map of the environment
p( o | r, m ) -- the sensor model
p( rnew | rold, a, m ) -- the motion model: the accuracy of performing action a
31
Sensor Model
map m and location r
p( o | r, m ) -- sensor model
p( rnew | rold, a, m ) -- action model
(figure: two potential observations o, with p( o | r, m ) = .95 and p( o | r, m ) = .05)
probabilistic kinematics -- encoder uncertainty
  • red lines indicate commanded action
  • the cloud indicates the likelihood of various final states
(see 17/dr.mov for motivation)
32
Probabilistic Kinematics
We may know where our robot is supposed to be, but in reality it might be somewhere else.
Key question:
(figure: from the starting position, wheel velocities VL(t) and VR(t) take the robot toward a supposed final pose in the x-y plane, but there are lots of possibilities for the actual final pose)
What should we do?
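One standard answer, sketched below with Gaussian noise parameters of my own choosing: treat the commanded motion as the mean of a distribution and sample possible actual poses from it.

    import math, random

    def sample_motion(x, y, theta, dist, dtheta, sd_dist=0.05, sd_theta=0.05):
        # noisy versions of the commanded translation and rotation
        d = random.gauss(dist, sd_dist * abs(dist))
        t = random.gauss(dtheta, sd_theta)
        theta_new = theta + t
        return (x + d * math.cos(theta_new), y + d * math.sin(theta_new), theta_new)

    # a cloud of 1000 possible outcomes of "turn 90 degrees, drive 2 m"
    cloud = [sample_motion(0.0, 0.0, 0.0, 2.0, math.pi / 2) for _ in range(1000)]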
34
Robot models: how-to
p( o | r, m ) -- sensor model
p( rnew | rold, a, m ) -- action model
(0) Model the physics of the sensors/actuators (with error estimates) -- theoretical modeling
(1) Measure lots of sensing/action results and create a model from them -- empirical modeling
  • take N measurements, find the mean (μ) and st. dev. (σ), and then use a Gaussian model
  • or some other easily-manipulated (probability?) model...

      p( x ) = 0 if |x-μ| > σ          p( x ) = 0 if |x-μ| > σ
               1 otherwise                      1 - |x-μ|/σ otherwise
35
MODEL the error in order to reason about it!
Running around in squares
  • Create a program that will run your robot in a square (2m to a side), pausing after each side before turning and proceeding.
  • For 10 runs, collect both the odometric estimates of where the robot thinks it is and where the robot actually is after each side.
  • You should end up with two sets of 30 angle measurements and 40 length measurements: one set from odometry and one from ground truth.
  • Find the mean and the standard deviation of the differences between odometry and ground truth for the angles and for the lengths: this is the robot's motion uncertainty model.
(figure: the square's four sides, labeled 1-4, with start and end at the same corner)
This provides a probabilistic kinematic model.
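The analysis step of this homework is just a mean and standard deviation over paired differences; a minimal sketch follows, with made-up sample numbers standing in for the collected data.

    import statistics

    # one odometry/ground-truth pair per measured turn (degrees); values invented
    odometry = [90.0, 89.2, 91.5, 90.4, 88.8, 90.9]
    truth    = [88.0, 90.1, 92.0, 89.5, 89.7, 90.2]

    errors = [o - t for o, t in zip(odometry, truth)]
    mu, sigma = statistics.mean(errors), statistics.stdev(errors)
    print(f"angle error model: mean = {mu:.2f} deg, st. dev. = {sigma:.2f} deg")
    # repeat with the length measurements for the translation error model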
36
Robot models: how-to
The-paper-deadline-is-today! modeling
(2) Make something up...
37
Monte Carlo Localization
Start by assuming p( r0 ) is the uniform distribution: take K samples of r0 and weight each with a probability of 1/K. (dimensionality?!)
This is the Particle Filter representation of a probability distribution.
Get the current sensor observation, o1.
For each sample point r0, multiply the importance factor by p( o1 | r0, m ).
Normalize (make sure the importance factors add to 1).
You now have an approximation of p( r1 | o1, ..., m ) -- and the distribution is no longer uniform. (How did this change?)
Create new samples by dividing up large clumps: each point spawns new ones in proportion to its importance factor.
The robot moves, a1: for each sample r1, move it according to the model p( r2 | a1, r1, m ). (Where do the purple ones go?)
Increase all the indices by 1 and keep going!
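A minimal sketch of this loop. The map, the sensor model, and the motion model below are stand-ins I invented; only the weight/normalize/resample/move structure follows the algorithm above.

    import math, random

    K = 1000
    particles = [(random.uniform(0, 10), random.uniform(0, 10)) for _ in range(K)]
    weights = [1.0 / K] * K

    def sensor_likelihood(o, r):
        # placeholder p(o | r, m): pretend the sensor reads the x coordinate
        return math.exp(-(o - r[0]) ** 2)

    def move(r, a):
        # placeholder motion model p(r_new | r_old, a, m) with Gaussian noise
        return (r[0] + random.gauss(a[0], 0.1), r[1] + random.gauss(a[1], 0.1))

    for o, a in [(3.0, (1.0, 0.0)), (4.1, (1.0, 0.0))]:
        # weight each particle by the sensor model, then normalize
        weights = [w * sensor_likelihood(o, r) for w, r in zip(weights, particles)]
        total = sum(weights)
        weights = [w / total for w in weights]
        # resample: particles spawn copies in proportion to their importance
        particles = random.choices(particles, weights=weights, k=K)
        weights = [1.0 / K] * K
        # move every particle through the (noisy) motion model
        particles = [move(r, a) for r in particles]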
43
MCL in action
Monte Carlo Localization -- refers to the resampling of the distribution each time a new observation is integrated
Rhino
Minerva
45
MC Hammer
seeking out nails
Stanislaw Ulam (not MC Hammer), perhaps playing solitaire?
1946, Manhattan Project
condensation algorithm
statistical sampling
M. Nelson's favorite approach... ACM success!
46
Problem 6: Find the area of a polygon, given its vertices.

HMC Hammer (Matt Beaumont, Dan Halperin, Jonah Cohen)
Problem   Solved   Runs   Time
1         -        2      4:57:38
2         yes      2      2:55:41
3         yes      2      3:37:58
4         -        0      0:00:00
5         yes      2      1:20:42
6         yes      4      4:41:57
7         yes      1      2:32:44

Per-Problem Statistics (contest-wide)
Problem   Solved   Submitted
1         3        60
2         11       72
3         3        12
4         0        0
5         19       58
6         7        10
7         10       64

63 teams total, Fall '04 ACM Contest
47
Deriving the MCL algorithm
How do we find p( rt ), given all the information available...?

p( rt | o1, a1, ..., ot-1, at-1, ot, m )
  = p( ot | o1, a1, ..., ot-1, at-1, rt, m ) p( rt | o1, ..., at-1, m ) / p( ot | o1, ..., at-1, m )
  = λ p( ot | o1, a1, ..., ot-1, at-1, rt, m ) p( rt | o1, ..., at-1, m )
  = λ p( ot | rt, m ) p( rt | o1, ..., at-1, m )
  = λ p( ot | rt, m ) Σ over all rt-1 of p( rt | o1, ..., at-1, rt-1, m ) p( rt-1 | o1, ..., at-1, m )
  = λ p( ot | rt, m ) Σ over all rt-1 of p( rt | at-1, rt-1, m ) p( rt-1 | o1, ..., at-1, m )
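For readers who prefer it typeset, here is the final recursion in LaTeX, with \lambda the normalizer 1/p( ot | o1, ..., at-1, m ):

    \[
    p(r_t \mid o_{1..t}, a_{1..t-1}, m)
      = \lambda \; p(o_t \mid r_t, m)
        \sum_{r_{t-1}} p(r_t \mid a_{t-1}, r_{t-1}, m)\;
                       p(r_{t-1} \mid o_{1..t-1}, a_{1..t-1}, m)
    \]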
51
Quiz
What is the reasoning behind the six steps of MCL's derivation? What are the final three terms really saying?
Goal: to find p( rt ) given all the information available...

1.  p( rt | o1, a1, ..., ot-1, at-1, ot, m )
2.  = p( ot | o1, a1, ..., ot-1, at-1, rt, m ) p( rt | o1, ..., at-1, m ) / p( ot | o1, ..., at-1, m )    [Bayes Rule]
3.  = λ p( ot | o1, a1, ..., ot-1, at-1, rt, m ) p( rt | o1, ..., at-1, m )    [no r in the denominator, so it is a constant λ]
4.  = λ p( ot | rt, m ) p( rt | o1, ..., at-1, m )    [Markov assumption]
5.  = λ p( ot | rt, m ) Σ over all rt-1 of p( rt | o1, ..., at-1, rt-1, m ) p( rt-1 | o1, ..., at-1, m )    [marginal probabilities]
6.  = λ p( ot | rt, m ) Σ over all rt-1 of p( rt | at-1, rt-1, m ) p( rt-1 | o1, ..., at-1, m )    [conditional independence]

What are these terms? In step 6: p( ot | rt, m ) is this sensor reading; p( rt | at-1, rt-1, m ) gives the possible results of the last motion; p( rt-1 | o1, ..., at-1, m ) is the previous distribution.
52
MCL in action
Color-coded propagation of uncertainty...
sonar
laser rangefinder
53
MCL in action
using sonar
54
MCL in action
using a laser rangefinder
57
Taking a step back
Plusses:
  It has worked well in practice!
  Simple algorithm
  Well-motivated via probabilistic reasoning
  Naturally fuses data from very disparate sensors!
  Doesn't require control of the robot: passive localization
  It's an any-time algorithm
Drawbacks:
  Any-time may not be enough!
  Empty distributions
  Doesn't use the robot control available: active localization
58
How would you encourage robust recovery from a localization failure (robotnapping)?
At the Smithsonian
59
MCL as a friend to soccer players
vs. a source of woe...
Aibo soccer league
60
To guide this process...
We'd like to answer the questions:
Which direction should we go to reduce uncertainty in the robot's location?
Which direction should we go to reduce uncertainty in a map?
-- for map-matching or for exploration
(figure: an uncertainty-seeking path and the resulting map -- Hans Moravec)
61
For those who localize yourselves here on Thursday: guiding the MCL algorithm!
62
CS 154
Answering robotics' big questions
A unified view of robotics
Reminders: Hw 4 (vision/RC) due on Monday. Hw 5 due 2/28. Hw 6 due 3/11.
64
CS 154
Answering robotics' big questions
  • how to map an environment with imperfect sensors -- mapping
  • how a robot can tell where it is on a map -- localization
  • what if you're lost and don't have a map? -- both!
A unified view of robotics
"Unifying AI Robotics: Layers of Abstraction over Two Channels" -- R. Crabbe, USNA, 2005
Another layer of robotic capabilities
65
Layered capabilities
Examples from each layer
BEAM
Industry
Behavior-based
Spacey
Fictional
66
Theory, anyone?
Ideally, a local range map will uniquely identify a location.
Do all locations have distinct visibility polygons? Is there any ambiguity here?
67
Getting a good map to match
Ideally, a local range map will uniquely identify a location:
all locations have distinct visibility polygons (if robot pose is considered)
(figure: an environment with ambiguous local readings)
  • Self-similarity in the environment
  • Sensor uncertainty in the local map
will conspire against this.
Where to go?
71
Where to go?
We'd like to know a set of movements that will localize the robot uniquely.
Even better: to minimize the travel distance required to do so!
Even with a flawless map and perfect sensing, minimizing localization effort is NP-complete.
Reduction with the Abstract Decision Tree problem:
  • a set of objects X = { x1, x2, x3, ..., xm }
  • a set of binary tests T = { T1, T2, ..., Tn }
  • a desired cost h
Is there a decision tree of height h that identifies all of the elements of X using tests from T?
(figure: a decision tree of height h with root test Ti over { x1, ..., xm }, and tests Tj, Tk below separating { x1, x2 } from { x3, ..., xm })
How do we encode this problem in an environment?
73
Where to go?
Ideally, we'd like to know where to go next to localize the robot uniquely.
Even better -- minimize the travel distance required.
Even with a flawless map and perfect sensing, minimizing localization effort is NP-complete.
Each step represents an object: we need to identify which step we're on!
Each step contains hallways encoding the results of the various tests...
if we could localize optimally...
perfect sensing...?
74
Have a great weekend!
75
Questions
  • Are there problems with MCL? Solutions?
  • How do we get the map in the first place?
  • Which direction should we go to reduce uncertainty in the robot's location?
  • Which direction should we go to reduce uncertainty in a map? -- for map-matching or for exploration
(figure: an uncertainty-seeking path and the resulting map -- Hans Moravec)
79
Information
( measured in bits )
Information content of an event = log2( 1 / probability of that event )
if the event has probability 1...
if the event has probability 0...
amount of surprise in the event...
event probability = (number of ways the event can happen) / (number of possible things that can happen)
intuition?
84
Information
( measured in bits )
Learning one bit of a binary number:
  set of possibilities: 000 001 010 011 100 101 110 111
  cuts the space of possibilities in 1/2: 000 001 010 011
Learning two bits cuts the space of possibilities in 1/4: 010
reduction in uncertainty
Information content of an event = log2( old possibilities / new possibilities ) = log2( 1 / probability of that event )
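A minimal sketch of these two equivalent formulas on the binary-number example above:

    import math

    def info_bits(old_count, new_count):
        return math.log2(old_count / new_count)

    print(info_bits(8, 4))              # learning one bit: 8 -> 4 possibilities = 1.0 bit
    print(math.log2(1 / (4 / 8)))       # the same event via 1/probability = 1.0 bit
    print(info_bits(8, 1))              # learning the whole 3-bit number: 3.0 bits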
86
Information
( measured in bits )
What if the distribution is not uniform?
Either outcome from a 0.5 / 0.5 distribution provides 1 bit of information.
What about a 0.75 / 0.25 distribution? a 0.999 / 0.001 distribution?
Does log2( old possibilities / new possibilities ) still make sense?
87
Claude Shannon
founder of information theory
Master's thesis (MIT): "A symbolic analysis of relay and switching circuits"
At Bell Labs ('48): "A mathematical theory of communication"
An N-character message drawn from a distribution with entropy H can be compressed into NH bits, but no fewer.
Shannon's Source Coding Theorem
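A minimal sketch of the entropy H this theorem refers to, on a made-up three-symbol distribution:

    import math

    def entropy(probs):
        # H = sum of p * log2(1/p) over the distribution
        return sum(p * math.log2(1 / p) for p in probs if p > 0)

    H = entropy([0.5, 0.25, 0.25])      # 1.5 bits per character
    N = 1000
    print(f"a {N}-character message compresses to about {N * H:.0f} bits")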
88
Mapping and localization ?!
Goal: reduce the uncertainty in the current map...
  • the same goal arises in autonomous exploration
Rhino
U. of Bonn, Germany
where is the map the least certain?
89
Map Matching
Key idea: keep track of a probability distribution for where the robot might be in the known map
  + accounts for uncertainty in sensors and map
  + the newly learned map improves over time (and so, presumably, does the accuracy of localization)
  - large search space (without a good initial estimate)
  - need to consider both translations and rotations...
  - cell-based maps will not divide space identically
  - would like the robot's positional certainty to improve over time
  - doesn't explicitly represent what we want: possible robot locations
robot fine-tuning: map-matching OK
robot kidnapping: map-matching too cumbersome
90
Next Time(s)
Where jotto and robotics meet...
- AND -
The relationship between maps and cake
Localizing and mapping at the same time...
Expanding the repertoire of probabilistic approaches to handle sensor/environmental/motor uncertainty...
92
Where we are
(abstraction level: high at the top, low at the bottom)
Motion Planning: Given a known world and an agreeable mechanism, how do I get there from here?
Localization: Given sensors and a map, where am I?
Vision: If my sensors are eyes, how does that help?
Mapping: Given sensors, how do I create a useful map?
Bug Algorithms: Given an unknowable world but a known goal and local sensing, how can I get there from here?
Kinematics: If I move this motor somehow, what happens in other coordinate systems?
Control (PID): What voltage should I set over time?
93
Another use of map matching
The sonar model depends dramatically on the environment;
rather than hire Roman Kuc to develop another one...
-- we'd like to learn an appropriate sensor model
94
Learning the Sensor Model
The sonar model depends dramatically on the environment
-- we'd like to learn an appropriate sensor model
three maps created with different sonar models
96
Localization via mapping
What is a reasonable matching process among evidence grids?
Three 2x2 evidence grids (probabilities, not odds):

  A = [ 0.5  0.9 ]    B = [ 0.5  1 ]    C = [ 0.9  0.5 ]
      [ 0.1  0.1 ]        [ 0    0 ]        [ 0.3  0.3 ]
97
Map matching
raw match score: the probability that the maps are identical
P = Π over all cells i of ( Ai Bi + Āi B̄i ), where Āi = 1 - Ai
(using the grids A, B, C above)

match scores:
        A        B        C
A       -      .3645    .1089
B     .3645      -      .1225
C     .1089    .1225      -

What if the map is large?
98
Map matching
Match score = log2( 1 / prob. of being the same ) = - Σ over all cells i of log2( Ai Bi + Āi B̄i )
(using the grids A, B, C above)

match scores:
        A       B       C
A       -     1.46    3.20
B     1.46      -     3.03
C     3.20    3.03      -

score = measure of surprise
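A minimal sketch of both match scores, using the same 2x2 grids A, B, C; it reproduces the .3645/.1089/.1225 probabilities and the 1.46/3.20/3.03 surprise scores above.

    import math

    A = [0.5, 0.9, 0.1, 0.1]
    B = [0.5, 1.0, 0.0, 0.0]
    C = [0.9, 0.5, 0.3, 0.3]

    def raw_match(X, Y):
        # probability the grids are identical: each cell agrees if both
        # occupied (Xi*Yi) or both empty ((1-Xi)*(1-Yi))
        p = 1.0
        for x, y in zip(X, Y):
            p *= x * y + (1 - x) * (1 - y)
        return p

    def surprise(X, Y):
        # log2(1 / raw match score): sums instead of multiplies, so it
        # stays well-behaved even when the map is large
        return -sum(math.log2(x * y + (1 - x) * (1 - y)) for x, y in zip(X, Y))

    print(raw_match(A, B), surprise(A, B))   # 0.3645, ~1.46
    print(raw_match(A, C), surprise(A, C))   # 0.1089, ~3.20
    print(raw_match(B, C), surprise(B, C))   # 0.1225, ~3.03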
101
Map Matching ?
Weighing the evidence for evidence-grid-matching localization:
  + accounts for uncertainty in sensors and map
  + the newly learned map improves over time
  - large search space (if there's no good initial estimate)
  - need to consider both translations and rotations...
  - cell-based maps will not divide space identically
  - would like the robot's positional certainty to improve over time
  - doesn't explicitly represent what we want: possible robot locations
Robot Localization:
  local problem (robot fine-tuning): map-matching OK
  global problem (robot kidnapping): map-matching too cumbersome
102
Dervish
winner of the '94 AAAI office navigation contest
contrast Polly
unusual sensor coverage (with sonars)
103
Dervish
real feature
detected feature
probabilistic reasoning about the environment
winner of the AAAI office navigation contest
104
Conditional Prob. & Bayes Rule
Bag o' tricks
- Probability of two events:
    p( o ∧ S ) = p( o | S ) p( S )
    p( S ∧ o ) = p( S | o ) p( o )
- Bayes rule switches the conditioning event:
    p( o | S ) = p( S | o ) p( o ) / p( S )
- Independence of events:
    p( S2 ∧ S1 ) = p( S2 ) p( S1 )
- Conditional independence:
    p( S2 ∧ S1 | o ) = p( S2 | o ) p( S1 | o )
- So, what can we say about odds( o | S2 ∧ S1 ) ?
    odds( o | S2 ∧ S1 ) = [ p( S2 | o ) / p( S2 | ¬o ) ] × odds( o | S1 )
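A quick numeric check of that last identity, with toy numbers of my own, assuming S1 and S2 are conditionally independent given o and given ¬o:

    p_o = 0.3                        # prior p(o)
    p_s1_o, p_s1_no = 0.8, 0.4       # p(S1|o), p(S1|¬o)
    p_s2_o, p_s2_no = 0.9, 0.5       # p(S2|o), p(S2|¬o)

    odds_o_s1 = (p_s1_o * p_o) / (p_s1_no * (1 - p_o))                    # odds(o|S1)
    lhs = (p_s1_o * p_s2_o * p_o) / (p_s1_no * p_s2_no * (1 - p_o))       # odds(o|S1 ∧ S2)
    rhs = (p_s2_o / p_s2_no) * odds_o_s1                                  # the update rule
    print(lhs, rhs)                  # equal: ~1.5429 both ways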
105
Hans Moravec's code

    odds = ((1.0-pdeti)/(1.0-pfali)) * (po2i/(1.0-po2i));

odds( o | S2 ∧ S1 ) = [ p( S2 | o ) / p( S2 | ¬o ) ] × [ p( o | S1 ) / p( ¬o | S1 ) ]
                        the sensor model                  the previous odds

Also seen in this code -- an in-place Shell sort:

    int g, i, j;
    for ( g = N/2; g > 0; g /= 2 )
      for ( i = g; i < N; i++ )
        for ( j = i-g; j >= 0 && A[j] > A[j+g]; j -= g )
          { int t = A[j]; A[j] = A[j+g]; A[j+g] = t; }  /* swap the out-of-order pair */
106
Fall '04 ACM Contest -- 63 teams total

HMC Hammer (Matt Beaumont, Dan Halperin, Jonah Cohen)
Problem   Solved   Runs   Time
1         -        2      4:57:38
2         yes      2      2:55:41
3         yes      2      3:37:58
4         -        0      0:00:00
5         yes      2      1:20:42
6         yes      4      4:41:57
7         yes      1      2:32:44

HMC 42 (Brian Bentow, Jeff Hellrung, Tim Carnes)
Problem   Solved   Runs   Time
1         -        0      0:00:00
2         yes      2      2:31:41
3         -        0      0:00:00
4         -        0      0:00:00
5         yes      1      2:42:42
6         yes      1      3:02:57
7         -        0      0:00:00

HMC Escher (Brian Rice, Steven Sloss, Greg Minton)
Problem   Solved   Runs   Time
1         -        1      3:49:00
2         -        2      4:27:41
3         -        0      0:00:00
4         -        0      0:00:00
5         yes      1      2:08:42
6         -        0      0:00:00
7         -        8      4:42:00

HMC Monte Carlo (Cal Pierog, Mac Mason, Chris Erickson)
Problem   Solved   Runs   Time
1         -        0      0:00:00
2         -        2      4:56:41
3         -        0      0:00:00
4         -        0      0:00:00
5         yes      2      2:15:42
6         -        0      0:00:00
7         -        0      0:00:00

Per-Problem Statistics (contest-wide)
Problem   Solved   Submitted
1         3        60
2         11       72
3         3        12
4         0        0
5         19       58
6         7        10
7         10       64