Title: SIMBICON: Simple Biped Locomotion Control
SIMBICON: Simple Biped Locomotion Control
- KangKang Yin, Kevin Loken, Michiel van de Panne
Outline
- Motivation
- Introduction
- Related works
- Balance control strategy
- Mocap-based controllers
- Feedback error learning
- Results
- Conclusions
Motivation
- Simple kinematic models
  - Cannot predict what happens after an unexpected step.
  - Cannot respond to being pushed.
- Physically simulated bipeds
  - Recover balance realistically from unexpected steps and pushes.
Preview
Introduction
- Give animated characters a good awareness of balance so they can handle unexpected changes in the environment.
Related Works
- Data-driven kinematic algorithms
  - Mukai and Kuriyama 2005; Kwon and Shin 2005; Shin and Oh 2006; Heck and Gleicher 2006
  - Resequencing and interpolating captured motions
- Trajectory optimization
  - Fang and Pollard 2004; Liu et al. 2005; Sulejmanpasic and Popovic 2005; Chai and Hodgins 2007
  - Motion modeled using optimization criteria, including user-specified constraints
Related Works (cont.)
- Controllers
  - Faloutsos et al. 2001; Zordan and Hodgins 2002
  - Physics-based character simulation actually models how humans walk.
Related Works (cont.)
- Biped locomotion control
  - Zero moment point (ZMP) techniques
    - Vukobratovic 1969; Honda ASIMO
  - Passive dynamic walking
    - McGeer 1990; Kuo 1999; Ruina et al. 2001
  - Central pattern generators
    - Taga et al. 1991
  - Reinforcement learning
    - Tedrake et al. 2004; Morimoto et al. 2004
  - Hopping controllers
    - Raibert 1986; Raibert and Hodgins 1991
Balance Control Strategy
- Step 1: develop a cyclic base motion
  - Finite state machine (FSM)
  - Motion capture
Finite State Machine
- State 0 → State 1 after 0.3 s
- State 1 → State 2 on swing-foot contact
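A minimal sketch of stepping such a walking FSM, assuming a four-state cycle in which the remaining states mirror the two transitions above by left/right symmetry; the function and parameter names are illustrative, not from the paper.

```python
# Sketch of the four-state walking FSM (state layout assumed by symmetry):
# states 0 and 2 end after a fixed duration, states 1 and 3 end on swing-foot contact.
def next_state(state: int, time_in_state: float, swing_foot_contact: bool,
               duration: float = 0.3) -> int:
    """Advance the FSM by one check; returns the (possibly unchanged) state index."""
    if state in (0, 2) and time_in_state >= duration:
        return (state + 1) % 4          # timed transition
    if state in (1, 3) and swing_foot_contact:
        return (state + 1) % 4          # contact-triggered transition
    return state
```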
Finite State Machine (cont.)
- Drive each joint to its desired local angle.
- Proportional-derivative (PD) control: τ = kp·(θd − θ) − kd·θ̇
  - The proportional term responds to changes; the derivative term damps.
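A minimal sketch of the PD law above for one joint; the default gains echo the values listed later in the results slide, and the function name is an assumption.

```python
# Sketch of per-joint PD control: tau = kp*(theta_d - theta) - kd*theta_dot.
def pd_torque(theta: float, theta_dot: float, theta_target: float,
              kp: float = 300.0, kd: float = 30.0) -> float:
    """Proportional term pulls the joint toward its target; derivative term damps."""
    return kp * (theta_target - theta) - kd * theta_dot
```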
Balance Control Strategy
- Step 2: control the torso and swing hip w.r.t. the world frame
Balance Control Strategy
- Figure: same pose, different COM velocities (v vs. v = 0) lead to different outcomes → COM velocity matters.
- Figure: same pose and velocity, COM position d < 0 vs. d > 0 lead to different outcomes → COM position matters.
Balance Control Strategy
- Combine the base controller with continuous feedback: θd = θd0 + cd·d + cv·v
  - θd0: target angle from the base controller (FSM or mocap cycle)
  - cd·d + cv·v: continuous balance feedback, with d the horizontal distance from the stance ankle to the COM and v the COM velocity
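A minimal sketch of that feedback applied to a target angle; variable names are assumptions, with d and v as defined above.

```python
# Sketch of the continuous balance feedback added to the torso / swing-hip targets.
def balanced_target(theta_d0: float, d: float, v: float,
                    cd: float, cv: float) -> float:
    """theta_d = theta_d0 + cd*d + cv*v
    theta_d0: base-controller target; d: stance-ankle-to-COM offset; v: COM velocity."""
    return theta_d0 + cd * d + cv * v
```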
Moving to 3D
- Apply the same control ideas in both the sagittal and coronal planes.
FSM Design
- 4 states, designed with a GUI
- Per-state parameters: Δt (state duration), cd, cv
Mocap-based Controllers
- Import motions into a dynamic setting.
- Process
  - Compute an average gait cycle.
  - Base controller tracks the average cycle.
  - Apply the same balance strategies.
Mocap-based Controllers
- Compute an average gait cycle.
  - Apply Fourier analysis to a right-hip angle trajectory.
  - Extract the period T of the walking cycle.
  - Filter the original data into a smooth periodic motion of period T.
  - Determine the phase φ from T.
- In place of the FSM timing, the phase is anchored to the motion:
  - φ = 0 at the time tm of largest right-hip flexion.
  - Foot contacts mark the transitions between states.
  - States 1 and 3 are ignored; at φ = 0.5 and 1, state 0 → 3 and state 3 → 0.
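A minimal sketch of turning simulation time into this phase, given the period T from the Fourier analysis and the reference time tm of largest right-hip flexion; the wrap-around convention is an assumption.

```python
# Sketch: map simulation time to gait phase in [0, 1), with phase 0 at the
# reference time t_m of largest right-hip flexion (convention assumed).
def gait_phase(t: float, t_m: float, T: float) -> float:
    """Phase wraps every period T of the averaged walking cycle."""
    return ((t - t_m) % T) / T
```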
Mocap-based Controllers
- Base controller tracks the average cycle.
  - The averaged cycle serves as the target trajectory.
  - Joint angles track it using PD controllers (see the sketch below).
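A minimal sketch of sampling the averaged cycle at the current phase to obtain a PD target; uniform sampling and linear interpolation are assumptions for illustration.

```python
# Sketch: look up one joint's target angle in the averaged gait cycle at a given phase.
def target_angle(avg_cycle: list[float], phase: float) -> float:
    """avg_cycle holds angles sampled uniformly over one period; phase in [0, 1)."""
    n = len(avg_cycle)
    pos = phase * n
    i = int(pos) % n
    frac = pos - int(pos)
    # linear interpolation between neighbouring samples, wrapping at the cycle end
    return (1.0 - frac) * avg_cycle[i] + frac * avg_cycle[(i + 1) % n]
```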
Mocap-based Controllers
- Apply the same balance strategies.
- This time, the PD controller tracks the mocap-derived targets, with the cd·d + cv·v feedback added to the torso and swing-hip targets.
Mocap-based Controllers
- Precision of imitation is limited by:
  - Mismatch of physical parameters
  - Tracking rather than anticipating
Stiff Response Problems
- Stiff response to perturbations / oscillations of the torso pitch angle
- Always reacting to the movement of the hip rather than anticipating it
- → Need to reduce feedback torques
- See the phenomenon in the video.
Learning Feed-forward Control
- Kawato 1990
Feedback Error Learning
- Acts as an inverse model.
- Takes advantage of the motion being cyclic.
- Learn the cyclic motion's feed-forward torques as a function of phase.
Implementation of FEL
- Divide the phase φ uniformly into N bins.
- Determine the current phase bin.
- Blend feed-forward and feedback torques for each bin (see the sketch after this list).
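A minimal sketch of binned feedback error learning for a single joint; the bin count, learning rate, and update rule are assumptions for illustration, not the paper's exact scheme.

```python
# Sketch of feedback error learning over N phase bins: the learned feed-forward
# torque gradually absorbs the feedback torque, so feedback gains can stay low.
class FeedForwardTorques:
    def __init__(self, n_bins: int = 30, rate: float = 0.1):
        self.n_bins = n_bins
        self.rate = rate                      # how quickly feedback is absorbed (assumed)
        self.tau_ff = [0.0] * n_bins          # learned feed-forward torque per bin

    def torque(self, phase: float, tau_fb: float) -> float:
        """Blend the learned feed-forward torque for this phase bin with the
        current feedback torque, and nudge the bin toward the feedback error."""
        b = min(int(phase * self.n_bins), self.n_bins - 1)
        self.tau_ff[b] += self.rate * tau_fb  # feedback error becomes feed-forward
        return self.tau_ff[b] + tau_fb
```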
Simulation Details: 2D Model
- Newton-Euler dynamics formulation
- 70 kg trunk, 10 kg legs
- Penalty-force ground model (see the sketch after this list)
- Friction coefficient µ = 0.65
- Joint limits, torque limits
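A minimal sketch of a penalty-force ground model with a Coulomb friction cap; only µ = 0.65 comes from the slides, while the spring-damper constants and function shape are assumptions.

```python
# Sketch of a 2D penalty-force ground contact (stiffness/damping values assumed).
def ground_force(y: float, y_dot: float, x_dot: float,
                 k: float = 1e5, c: float = 1e3, mu: float = 0.65):
    """Return (fx, fy) for a contact point at height y with velocity (x_dot, y_dot)."""
    if y >= 0.0:
        return 0.0, 0.0                       # above ground: no contact force
    fy = -k * y - c * y_dot                   # spring-damper penalty normal force
    fy = max(fy, 0.0)                         # the ground can only push
    fx = -c * x_dot                           # tangential damping opposes sliding
    fx = max(-mu * fy, min(mu * fy, fx))      # clamp to the Coulomb friction cone
    return fx, fy
```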
Simulation Details: 3D Model
- Open Dynamics Engine (ODE)
- 28 internal DOF
- Friction coefficient µ = 0.8
- Joint limits, torque limits
Results
Results
- 3D model: same as Laszlo et al. 1996, simulated using the Open Dynamics Engine (ODE); PD gains kp = 300 Nm/rad, kd = 0.1·kp
- 2D model: 7-link planar biped, simulated using Newton-Euler dynamics; PD gains kp = 300 Nm/rad, kd = 30 Nms/rad
Conclusions
- Simple, robust feedback mechanism
- Feed-forward strategy that allows for more natural, low tracking gains
- Real-time, balance-aware physics-based characters, with style from mocap