Manipulation Under Uncertainty or: Executing Planned Grasps Robustly

1
Manipulation Under Uncertainty
(or: Executing Planned Grasps Robustly)
  • Kaijen Hsiao
  • Tomás Lozano-Pérez
  • Leslie Kaelbling
  • Computer Science and
  • Artificial Intelligence Lab, MIT
  • NEMS 2008

2
Manipulation Planning
  • If you know all shapes and positions exactly, you
    can generate a trajectory that will work

3
Even Small Uncertainty Can Kill
4
Moderate Uncertainty
(not groping blindly)
  • Initial conditions (ultimately from vision):
    • Object shape is roughly known (contacted
      vertices should be within 1 cm of their actual
      positions)
    • Object is on a table and its pose (x, y, rotation)
      is roughly known (center-of-mass std: 5 cm, 30 deg)
  • Online sensing:
    • robot proprioception
    • tactile sensors on the fingers/hand
  • Planned/demonstrated trajectories (that would
    work under zero uncertainty) are given

5
Model uncertainty explicitly
  • Belief state: a probability distribution over
    positions of the object relative to the robot
  • Use online sensing to update the belief state
    throughout manipulation (state estimation, SE)
  • Select manipulation actions based on the belief
    state (a policy)

[Diagram: the environment provides sensing to a state estimator (SE), which maintains the belief; the controller's policy maps the belief to an action applied back to the environment.]
6
State Estimation
  • Transition model: how robot actions affect the
    state
    • Do we move the object during grasp execution?
      (currently, any contact spreads out the belief
      state somewhat)
  • Observation model: P(sensor input | state)
    • How consistent are various object positions with
      the current sensory input (robot pose and touch)?
  • Bayes rule combines the two to update the belief

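The Bayes-rule update above can be sketched for a discretized belief. This toy version tracks only the object's x-position (the talk's belief is over (x, y, theta)); the blur kernel, the Gaussian contact-likelihood, and all noise values are illustrative assumptions, not the authors' exact models.

```python
import math

# Discretized 1-D belief over the object's x-position, as a stand-in for
# the talk's (x, y, theta) belief; noise values are assumptions.
xs = [i * 0.001 - 0.2 for i in range(401)]   # candidate positions (m)
belief = [1.0 / len(xs)] * len(xs)           # roughly-known prior (uniform here)

def transition_update(belief):
    """Transition model: contact may nudge the object, so spread the
    belief out slightly (a simple 3-tap blur)."""
    spread = [0.0] * len(belief)
    for i, b in enumerate(belief):
        for off, w in ((-1, 0.25), (0, 0.5), (1, 0.25)):
            j = i + off
            if 0 <= j < len(belief):
                spread[j] += w * b
    total = sum(spread)
    return [s / total for s in spread]

def observation_update(belief, contact_x, obs_std=0.01):
    """Bayes rule: weight each candidate position by how consistent it is
    with feeling contact at contact_x, then renormalize."""
    posterior = [b * math.exp(-0.5 * ((x - contact_x) / obs_std) ** 2)
                 for b, x in zip(belief, xs)]
    total = sum(posterior)
    return [p / total for p in posterior]

belief = transition_update(belief)
belief = observation_update(belief, contact_x=0.03)
mode = xs[max(range(len(xs)), key=lambda i: belief[i])]   # most likely position
```

After one contact at 0.03 m, the posterior mode moves to the grid point nearest the contact, which is exactly the "updated belief" step shown on the following slides.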
7
Control: three approaches
  • Formulate as a POMDP and solve for the optimal policy
    • Continuous, multi-dimensional state, action, and
      observation spaces
    • → Wildly intractable
  • Find the most likely state, plan a trajectory, execute
    • Bad if the rest of the execution is open-loop
    • Maybe good if replanning is continuous, but too
      slow at execution time
    • Will not select actions to gain information
  • Our approach: define new robust primitives, use the
    information state to select a plan, then execute

8
Robust Motion Primitive
  • Move-until(goal, condition)
  • Repeat until the belief-state condition is satisfied:
    • Assume the object is in its most likely location
    • Guarded move to the object-relative goal
    • If contact is made:
      • Undo the last motion
      • Update the belief state
  • Termination conditions:
    • Claims success: the robot believes, with high
      probability, that it is near the object-relative
      goal
    • Claims failure: some number of attempts have not
      achieved the belief condition

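A toy 1-D version of the move-until loop might look like the following; the simulated world, the Gaussian belief, the 3 mm tactile noise, and the thresholds are all illustrative assumptions rather than the authors' implementation.

```python
import random

# Toy 1-D move-until: repeatedly touch an object whose position is only
# roughly known, undoing each move and fusing the contact into a Gaussian
# belief, until the belief condition (low variance) holds.
random.seed(1)
TRUE_OBJECT_X = 0.10                      # where the object actually is (m)
belief_mean, belief_var = 0.05, 0.05**2   # rough initial estimate

def guarded_move_forward():
    """Guarded move: sweep forward until the tactile sensor fires, and
    report the (noisy) contact position; 3 mm noise is an assumption."""
    return TRUE_OBJECT_X + random.gauss(0.0, 0.003)

def move_until(var_goal=0.002**2, max_attempts=10):
    """Repeat guarded moves until the belief is tight enough (claims
    success) or the attempt budget runs out (claims failure)."""
    global belief_mean, belief_var
    for _ in range(max_attempts):
        obs = guarded_move_forward()      # contact made; now undo the
        obs_var = 0.003**2                # motion and fuse the measurement
        k = belief_var / (belief_var + obs_var)   # Kalman-style gain
        belief_mean += k * (obs - belief_mean)
        belief_var *= (1 - k)
        if belief_var < var_goal:
            return True                   # claims success: belief is tight
    return False                          # claims failure

ok = move_until()
```

Each contact shrinks the belief variance, so a few touches suffice before the primitive claims success and the grasp can proceed relative to the refined estimate.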
9
Robust primitive
[Figure: the most likely robot-relative object position vs. where the object actually is]
10
Initial belief state (X, Y, theta)
11
Summed over theta (easier to visualize)
12
Tried to move down; the finger hit a corner
13
Probability of observation given location: P(observation | location)
14
Updated belief
15
Re-centered around mean
16
Trying again, with new belief
Back up
Try again
17
Executing a trajectory
  • Given a sequence of waypoints in a trajectory
  • Attempt to execute each one robustly using
    move-until
  • So now we can try to close the gripper on the
    box

18
Final state and observation
Observation probabilities
Grasp
19
Updated belief state: Success!
Goal variance: < 1 cm in x, 15 cm in y, 6 deg in theta
20
What if Y coord of grasp matters?
21
Need explicit information gathering
22
Use variance of belief to select trajectory
If this is your start belief, just run grasp
trajectory
23
The Approach
[Diagram: trajectories (grasp, poke, ...) and the current belief feed a strategy selector, which outputs a policy; a command generator turns relative motions and the most likely state into robot commands for the world; sensor observations from the world drive the belief update.]
24
Strategy Selector
  • Planner to automatically pick good strategies
    based on starting uncertainties and goals
  • Simulate all particles forward through the selected
    robot movements, including tipping probabilities
    (tipping = failure)
  • Group the particles into qualitatively similar outcomes
  • Use forward search to select trajectories and
    info-gathering actions
  • Currently uses hand-written conditions on the belief
    state

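The simulate-and-select idea can be sketched with particles. The strategy names, tolerances, and costs below are hypothetical stand-ins for full trajectory simulation (which, per the slide, would also model tipping failures).

```python
import random

# Sketch of the strategy selector: sample object-pose particles from the
# start belief, "simulate" each candidate strategy on every particle, and
# pick the cheapest strategy that succeeds often enough.
random.seed(2)
particles = [random.gauss(0.0, 0.05) for _ in range(500)]   # x-offset belief

# Hypothetical strategies: each succeeds if the true offset lies within
# its tolerance (a stand-in for forward-simulating the trajectory).
strategies = {
    "grasp_directly":   lambda off: abs(off) < 0.01,   # needs a tight belief
    "info_grasp_first": lambda off: abs(off) < 0.12,   # localizes, then grasps
}
costs = {"grasp_directly": 1, "info_grasp_first": 2}   # info-grasp is slower

def success_rate(name):
    """Simulate all particles forward; fraction of successful outcomes."""
    return sum(strategies[name](p) for p in particles) / len(particles)

def select_strategy(threshold=0.9):
    """Prefer the cheapest strategy whose simulated success rate clears
    the threshold; otherwise fall back to the most reliable one."""
    for name in sorted(strategies, key=costs.get):
        if success_rate(name) >= threshold:
            return name
    return max(strategies, key=success_rate)

choice = select_strategy()
```

With 5 cm of start uncertainty, the direct grasp rarely succeeds in simulation, so the selector chooses the info-gathering grasp first, matching the slide's motivation for explicit information gathering.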
25
Grasping a Brita Pitcher
Target grasp: put one finger through the handle
and grasp
26
Belief-Based Controller w/2 Info-Grasps
27
Brita Results
Increasing uncertainty
28
Related Work
  • Grasp planning without regard to uncertainty (can
    be used as input to this research) (Lozano-Pérez
    et al., 1992; Saxena et al., 2008)
  • Finding a fixed trajectory that is likely to
    succeed under uncertainty (Alterovitz et al.,
    2007; Burns and Brock, 2007; Melchior and Simmons,
    2007; Prentice and Roy, 2007)
  • Visual servoing (tons of work)
  • Using tactile sensors to precisely locate the object
    before grasping (Petrovskaya et al., 2006)
  • Regrasping to find stable grasp positions (Platt,
    Fagg, and Grupen, 2002)
  • POMDPs for grasping (Hsiao et al., 2007)

29
Current Work
  • Real robot results (7-DOF Barrett Arm/Hand and
    Willow Garage PR2)
  • Automatic strategy selection

30
Key Ideas
  • Belief-based strategy
  • Maintain a belief state (updated based on actions
    and observations)
  • Express your actions relative to the current best
    state estimate
  • Choose strategies based on higher-order
    properties of your belief state (variance,
    bimodality, etc.)

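The last key idea, choosing strategies from higher-order belief properties, can be sketched on a particle belief. The split-at-the-mean clustering and the 1.8-sigma gap threshold below are illustrative assumptions, not the authors' test.

```python
import random
import statistics

# Summarize a particle belief by its mean, variance, and a crude
# bimodality check (two well-separated clusters).
random.seed(3)

def belief_properties(particles):
    mean = statistics.fmean(particles)
    var = statistics.pvariance(particles, mean)
    # Split the particles at the mean; a large gap between the two halves'
    # means (relative to the overall spread) suggests two separated modes.
    left = [p for p in particles if p < mean]
    right = [p for p in particles if p >= mean]
    gap = statistics.fmean(right) - statistics.fmean(left)
    bimodal = gap > 1.8 * var ** 0.5
    return mean, var, bimodal

# A tight unimodal belief: safe to just run the grasp trajectory.
unimodal = [random.gauss(0.0, 0.02) for _ in range(2000)]
# A bimodal belief (e.g. cup right-side-up vs. upside-down): calls for an
# information-gathering action first.
two_modes = ([random.gauss(-0.1, 0.005) for _ in range(500)] +
             [random.gauss(+0.1, 0.005) for _ in range(500)])
```

A strategy selector can then branch on these summaries: low variance means the grasp trajectory alone suffices, high variance or bimodality triggers an info-grasp, as in the Brita pitcher and cup examples.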
31
Acknowledgements
  • This material is based upon work supported by the
    National Science Foundation under Grant No.
    0712012. Any opinions, findings, and conclusions
    or recommendations expressed in this material are
    those of the author(s) and do not necessarily
    reflect the views of the National Science
    Foundation.

32
The End.
36
Box Results
  • Goal: 1 cm in x, 1 cm in y, 6 degrees in theta
  • Object uncertainty: standard deviations of
    5 cm in x, 5 cm in y, 30 degrees in theta
  • Mean-state controller with info-grasp:
    120/122 successes (98.4%)

37
Cup Results
Goal: 1 cm x, 1 cm y

Uncertainty (std)   Met goal
1 cm, 30 deg        150/152 (98.7%)
3 cm, 30 deg        62/66 (93.9%)
5 cm, 30 deg        36/40 (90.0%)
38
Reactive Controller Pseudocode
  • Forward actions are relative to the current mean
    belief state, and backward actions retrace
    previous actions. All actions are
    move-until-contact except retracing.

  Go to the first keyframe
  While action count < max action count:
    If you are at a keyframe:
      If you are at the goal according to your
      most likely/mean state:
        If your belief variance is not within
        your goal limits:
          Regrasp
        Else:
          Stop (check if you are successful)
      Else:
        Go to the next keyframe
    Else:
      If you were trying to go forward a keyframe
      and hit something unexpected:
        (optionally close and open your fingers
        to gather more contact information)
        Do state estimation
        Try to go to the same keyframe you were
        trying to reach (based on the new belief state)
      If you were trying to go to the same
      keyframe and hit something unexpected:
        Do state estimation
39
Uses of forward search
  • Will this trajectory get me there with high
    probability if I just use a most likely state
    controller?
  • Will I just bounce around forever with high
    likelihood?
  • Will I knock over the object if it's not where I
    think it is?
  • Which info-gathering action gets me the most
    information?
  • Does it give me enough information that I can get
    there with a most likely state controller?
  • What action/strategy/trajectory should I take for
    a given belief state to succeed most often?

40
Simple examples of needing search
  • Pick up a box without tilting it (can use the
    variance at the end to decide to back up and do an
    info-grasp)
  • Pick up the Brita pitcher by the handle (contacts
    don't tell you enough to get there under high
    uncertainty; need to do an info-grasp of the handle)
  • Put a cup on a prong: need to grasp by the side or
    by the lip to get it on the prong
    • Let's say you can't reach the side grasp
    • If right-side-up: grasp by the lip to get it on
      the prong
    • If upside-down: grasp the whole bottom to get it
      on the prong
    • Doing the right-side-up grasp will tell you
      whether it's upside-down or not
    • Doing the upside-down grasp won't tell you either
      way (until you try to put it on the prong)
    • Forward search will tell you to try the
      right-side-up grasp first
    • Or you can just tag the MLS with strategies and
      bias towards right-side-up
  • Easy-to-knock-over objects (plan info-gathering
    actions that don't knock them over)