SIMBICON: Simple Biped Locomotion Control
1
SIMBICON Simple Biped Locomotion Control
  • KangKang Yin, Kevin Loken, Michiel van de Panne

2
Outline
  • Motivation
  • Introduction
  • Related works
  • Balance control strategy
  • Mocap-based controllers
  • Feedback error learning
  • Results
  • Conclusions

3
Motivation
  • Simple kinematic models
  • Cannot predict what happens after an unexpected
    step.
  • Cannot respond to being pushed.
  • Physically simulated bipeds
  • Recover balance realistically from unexpected
    steps and pushes.

Preview
4
Introduction
  • Give animated characters good awareness of
    balance for unexpected changes in environment.

5
Related Works
  • Data-driven kinematic algorithms
  • Mukai and Kuriyama 2005 Kwon and Shin 2005
    Shin and Oh 2006 Heck and Gleicher 2006
  • Resequencing and interpolating motion outcomes
  • Trajectory optimization
  • Fang and Pollard 2004 Liu et al. 2005
    Sulejmanpasic and Popovic 2005 Chai and Hodgins
    2007
  • Modeled using optimization criteria incl.
    satisfying user-specified constraints

6
Related Works (cont.)
  • Controllers
  • Faloutsos et al. 2001 Zordan and Hodgins 2002
  • Physics-based character simulation actually
    models how humans walk.

7
Related Works (cont.)
  • Biped locomotion control
  • Zero moment point (ZMP) techniques
  • Vukobratovic 1969 Honda ASIMO
  • Passive dynamics walking
  • McGeer 1990 Kuo 1999 Ruina et al. 2001
  • Central pattern generators
  • Taga et al. 1991
  • Reinforcement learning
  • Tedrake et al. 2004 Morimoto et al. 2004
  • Hopping controllers
  • Raibert 1986 Raibert and Hodgins 1991

8
Balance Control Strategy
  • Step 1: develop a cyclical base motion (from an
    FSM or from motion capture)
9
Finite State Machine
state 0 → state 1 after 0.3 s; state 1 → state 2
after swing foot contact
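The two transition rules above can be sketched as a tiny step function. This is an illustrative reconstruction, not code from the paper; the state numbering and the 0.3 s threshold come from the slide, everything else is assumed.

```python
def next_state(state, time_in_state, swing_foot_contact):
    """Advance the walking FSM one step.

    Two kinds of transitions from the slide:
      - timed: state 0 -> state 1 after 0.3 s
      - event-driven: state 1 -> state 2 on swing-foot contact
    """
    if state == 0 and time_in_state >= 0.3:
        return 1  # timed transition
    if state == 1 and swing_foot_contact:
        return 2  # contact-driven transition
    return state  # otherwise stay in the current state
```

The symmetric transitions for the other half of the gait cycle would follow the same pattern.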
10
Finite State Machine
  • Drive each joint to its desired local angle.
  • Proportional-derivative (PD) control

Proportional term responds to changes; derivative
term damps.
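A minimal sketch of the PD rule driving each joint toward its desired local angle. The gain values match the 2D-model settings given later in the slides; the function name is illustrative.

```python
def pd_torque(theta_d, theta, theta_dot, kp=300.0, kd=30.0):
    """Proportional-derivative joint torque.

    kp * (theta_d - theta): pulls the joint toward its target
    angle (responds to changes); -kd * theta_dot damps the motion.
    Units: kp in Nm/rad, kd in Nms/rad.
    """
    return kp * (theta_d - theta) - kd * theta_dot
```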
11
Balance Control Strategy
  • Step 2: control torso and swing hip w.r.t. the
    world frame

12
Balance Control Strategy
  • Step 3: COM feedback

[Figure: swing-foot placement feedback — COM velocity v
(vs. target v0) matters, and COM position offset d matters
(d < 0 vs. d > 0).]
13
Balance Control Strategy
  • Step 3: COM feedback
  • Final motion = base controller + continuous
    feedback.
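The continuous feedback in SIMBICON modifies the swing hip's target angle using the COM position offset d and COM velocity v, with per-state gains cd and cv (the parameters tuned in the FSM-design slide). A minimal sketch:

```python
def balanced_target(theta_d0, d, v, c_d, c_v):
    """SIMBICON balance feedback for the swing-hip target angle:

        theta_d = theta_d0 + c_d * d + c_v * v

    theta_d0: target angle from the base motion
    d:        COM position offset from the stance ankle
    v:        COM velocity
    c_d, c_v: per-state feedback gains
    """
    return theta_d0 + c_d * d + c_v * v
```

With d = v = 0 the base motion plays back unchanged; a forward COM offset or velocity swings the leg further forward to catch the body.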
14
Moving to 3D
  • Apply the same control ideas to both sagittal and
    coronal planes

15
FSM Design
  • 4 states; per-state parameters Δt, cd, cv
    tuned via a GUI
16
Mocap-based Controllers
  • Import motions into a dynamic setting.
  • Process
  • Compute an average gait cycle.
  • Base controller tracks the average cycle.
  • Apply the same balance strategies.

17
Mocap-based Controllers
  • Compute an average gait cycle.
  • Apply Fourier analysis to a right-hip angle.
  • Extract the period T of the walking cycle.
  • Filter the original data to a smooth periodic
    motion of period T.
  • Determine the phase F from T.

In place of the FSM, the phase F drives the
transitions: states 1 and 3 are ignored, and
F = 0.5 / 1 triggers state 0 → 3 / state 3 → 0.
The time tm of largest right-hip flexion marks foot
contact (the transition between states).
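One way to recover the walking period T from the right-hip angle via Fourier analysis is to take the dominant non-DC frequency of the signal. This is a hypothetical sketch, not the paper's implementation; `walking_period`, `hip_angle`, and `dt` are illustrative names.

```python
import numpy as np

def walking_period(hip_angle, dt):
    """Return the period T (seconds) of the dominant cycle
    in a sampled hip-angle signal with timestep dt."""
    # Magnitude spectrum of the mean-removed signal
    spectrum = np.abs(np.fft.rfft(hip_angle - np.mean(hip_angle)))
    freqs = np.fft.rfftfreq(len(hip_angle), d=dt)
    k = np.argmax(spectrum[1:]) + 1  # skip the DC bin
    return 1.0 / freqs[k]
```

For a synthetic hip trace with a 1.2 s cycle sampled at 100 Hz, this recovers T ≈ 1.2 s; the phase F then advances linearly from 0 to 1 over each period.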
18
Mocap-based Controllers
  • Base controller tracks the average cycle.
  • The averaged cycle serves as the target
    trajectory.
  • Joint angles track it using PD controllers.

19
Mocap-based Controllers
  • Apply the same balance strategies.
  • This time, the PD controller tracks the
    averaged mocap targets.

20
Mocap-based Controllers
  • Precision of imitation
  • Mismatch of physical parameters
  • Tracking rather than anticipating

21
Stiff Response Problems
  • Stiff response to perturbations / oscillations
    of the torso pitch angle
  • Always reacting to the movement of the hip rather
    than anticipating.
  • ⇒ Need to reduce feedback torques.
  • See the phenomenon in the video.

22
Learning Feed-forward Control
Kawato 1990
23
Feedback Error Learning
  • As an inverse model
  • Take advantage of being cyclic.
  • Learn cyclic motion as a function of phase.

24
Implementation of FEL
  • Divide the phase F uniformly into N bins.
  • Look up the current phase bin.
  • Blend feed-forward and feedback torques for
    each bin.
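The bullets above can be sketched as follows. This is an illustrative reconstruction of the slide's scheme, assuming a simple averaging update: each phase bin stores a feed-forward torque that is slowly adapted from the feedback torque, so over many cycles the feedback torque (and hence the effective gain) shrinks. The learning rate `alpha` and bin count are assumptions, not values from the paper.

```python
import numpy as np

class FeedForwardLearner:
    """Feedback error learning over N phase bins (Kawato-style)."""

    def __init__(self, n_bins=20, alpha=0.1):
        self.tau_ff = np.zeros(n_bins)  # learned feed-forward torques
        self.n_bins = n_bins
        self.alpha = alpha              # learning rate (assumed)

    def torque(self, phi, tau_fb):
        """Blend the stored feed-forward torque with the current
        feedback torque; the feedback torque doubles as the
        learning signal for this phase bin."""
        b = min(int(phi * self.n_bins), self.n_bins - 1)
        self.tau_ff[b] += self.alpha * tau_fb  # learn from feedback error
        return self.tau_ff[b] + tau_fb
```

As the feed-forward table converges, the feedback component of the torque approaches zero, which is what permits the lower tracking gains mentioned in the conclusions.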

25
Simulation Details 2D Model
  • Newton-Euler dynamics formulation
  • 70 kg trunk, 10 kg legs
  • Penalty-force ground model
  • Friction coefficient µ = 0.65
  • Joint limits, torque limits
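A penalty-force ground model treats contact as a stiff one-sided spring-damper at each contact point. A minimal sketch for the normal force; the gains `k` and `c` are illustrative assumptions, not the paper's values.

```python
def ground_force(y, y_dot, k=1e5, c=1e3):
    """Penalty-method normal force for a contact point at height y.

    While the point penetrates the ground (y < 0), a stiff spring
    resists penetration and a damper resists velocity; the force
    is clamped at zero so contact can only push, never pull.
    """
    if y >= 0.0:
        return 0.0
    f = -k * y - c * y_dot
    return max(f, 0.0)
```

The tangential friction force would then be limited to µ times this normal force, using the µ = 0.65 coefficient above.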

26
Simulation Details 3D Model
  • Open Dynamics Engine
  • 28 internal DOF
  • Friction coefficient µ = 0.8
  • Joint limits, torque limits

27
Results
  • See the video.

28
Results
  • Settings

2D model: 7-link planar biped, simulated using
Newton-Euler dynamics; PD gains kp = 300 Nm/rad,
kd = 30 Nms/rad.
3D model: same as Laszlo et al. 1996, simulated
using the Open Dynamics Engine (ODE); PD gains
kp = 300 Nm/rad, kd = 0.1 kp.
29
Conclusions
  • Simple, robust feedback mechanism
  • Feedforward strategy that allows for lower,
    more natural tracking gains
  • Real-time, balance-aware physics-based
    characters, with style from mocap