Transcript and Presenter's Notes

Title: The Robota Dolls


1
MACHINE LEARNING Continuous Time-Delay NN
Limit-Cycles, Stability and Convergence


2
Recurrent Neural Networks
  • So far, we have considered only feed-forward
    neural networks (apart from Hebbian learning).
  • Most biological networks have recurrent
    connections.
  • This change in the direction of the flow of
    information is interesting, as it allows the network
  • To keep a memory of the activation of the neurons
  • To propagate information across the output
    neurons

3
Neuron models
From abstract to realistic:
  • Binary neurons (discrete time)
  • Real-number neurons (discrete time)
  • Real-number neurons (continuous time)
4
Dynamical Systems and NN
Dynamical systems are at the core of the control
systems underlying skillful motion in many
vertebrates.
Central Pattern Generators: pure cyclic patterns
underlying basic locomotion


5
Dynamical Systems and NN
Dynamical systems are at the core of the control
systems underlying skillful motion in many
vertebrates.
Adaptive controllers: dynamical modulation of the CPG


6
Dynamical Systems

17
Dynamical Systems: Applications
  • Model of human three-dimensional reaching
    movements
  • To find a generic representation of motions that
    allows both robust visual recognition and
    flexible regeneration of motion.

18
Dynamical Systems: Applications
Dynamical System Modulation
Adaptation to sudden target displacement
Different initial conditions
19
Dynamical Systems: Applications
Adaptation to sudden target displacement
Different initial conditions
20
Dynamical Systems: Applications
Online adaptation to changes in the context
Adaptation to different contexts
21
Neuron models
From abstract to realistic:
  • Binary neurons (discrete time)
  • Real-number neurons (discrete time)
  • Real-number neurons (continuous time)
22
Leaky integrator neuron model
Idea: add a state variable m_j (the membrane
potential) that is controlled by a differential
equation
23
Leaky integrator neuron model
Idea: add a state variable m_j (the membrane
potential) that is controlled by a differential
equation
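The differential equation itself is not reproduced in the transcript. One common leaky-integrator (CTRNN) formulation, written here as an assumption that is consistent with the parameter names tau, D, b and S used on the following slides, is:

$$\tau_j \,\frac{dm_j}{dt} = -m_j + \sum_i w_{ji}\, x_i + S_j, \qquad x_i = \sigma\!\big(D\,(m_i + b_i)\big), \qquad \sigma(u) = \frac{1}{1 + e^{-u}}$$

where tau_j is the time constant of neuron j, w_ji the weight of the connection from neuron i to neuron j, S_j an external input, b_j a bias and D the gain of the sigmoid.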
24
Leaky integrator neuron model
  • This type of neuron model is used in
  • Recurrent neural networks for time series
    analysis (e.g. echo-state networks)
  • Neural oscillators
  • Several CPG models
  • Associative memories, e.g. the continuous time
    version of the Hopfield model

25
Behavior of a single neuron
The behavior of a single leaky-integrator neuron
without a self-connection is governed by a linear
differential equation that can be solved
analytically. Here S is a constant input.
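Under the formulation assumed above, the single neuron without a self-connection obeys, for a constant input S,

$$\tau \,\frac{dm}{dt} = -m + S \quad\Longrightarrow\quad m(t) = S + \big(m_0 - S\big)\, e^{-t/\tau},$$

so the membrane potential relaxes exponentially from m_0 towards S with time constant tau (with the parameters of the next slide, m(t) = 3(1 - e^{-t/0.2})).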
26
Behavior of a single neuron
tau = 0.2, D = 1.0, m0 = 0.0, S = 3.0, b = 0.0
27
Behavior of a single neuron
The behavior of a single leaky-integrator neuron
with a self-connection gives a nonlinear
differential equation (because of the nonlinear
self-connection term) that cannot be solved
analytically.
28
Behavior of a single neuron: numerical simulation
tau = 0.2, D = 1, w11 = -0.5, b = 0.0, S = 3.0
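Because this nonlinear equation has no closed-form solution, it is integrated numerically. A minimal forward-Euler sketch in Python, assuming the leaky-integrator formulation given earlier (the exact equation on the slide is not shown):

import numpy as np
import matplotlib.pyplot as plt

# Assumed model: tau*dm/dt = -m + w11*x + S, with x = sigmoid(D*(m + b))
tau, D, w11, b, S = 0.2, 1.0, -0.5, 0.0, 3.0
dt, T = 0.001, 2.0                      # Euler step and total simulated time

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

m = 0.0                                 # initial membrane potential m0
ts, ms = [], []
for step in range(int(T / dt)):
    x = sigmoid(D * (m + b))            # neuron output (firing rate)
    m += dt / tau * (-m + w11 * x + S)  # forward Euler update
    ts.append(step * dt)
    ms.append(m)

plt.plot(ts, ms)
plt.xlabel("time [s]")
plt.ylabel("m(t)")
plt.show()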
29
Fixed points with inhibitory self-connection
Finding the (stable or unstable) fixed points
w11 = -20, S = 30
tau = 0.2, D = 1, w11 = -20, b = 0.0
30
Fixed points with inhibitory self-connection
tau = 0.2, D = 1, w11 = -20, b = 0.0
w11 = -20, S = 30
31
Fixed points with excitatory self-connection
w11 = 20, S = -10
Finding the (stable or unstable) fixed points
tau = 0.2, D = 1, w11 = 20, b = 0.0
32
Fixed points with excitatory self-connection
Finding the (stable or unstable) fixed points
w11 = 20, S = -10
tau = 0.2, D = 1, w11 = 20, b = 0.0
This neuron will converge to one of the three
fixed points depending on the initial conditions.
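At a fixed point dm/dt = 0, i.e. m* = w11*sigma(D*(m* + b)) + S under the formulation assumed earlier. A sketch that locates the fixed points numerically by scanning for sign changes and refining with bisection (the helper find_fixed_points is hypothetical, not from the slides):

import numpy as np

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

def residual(m, w11, S, D=1.0, b=0.0):
    # Zero of this function <=> dm/dt = 0 for the assumed model
    return -m + w11 * sigmoid(D * (m + b)) + S

def find_fixed_points(w11, S, lo=-50.0, hi=50.0, n=5001):
    ms = np.linspace(lo, hi, n)
    r = residual(ms, w11, S)
    roots = []
    for i in np.where(r[:-1] * r[1:] < 0)[0]:   # brackets with a sign change
        a, c = ms[i], ms[i + 1]
        for _ in range(60):                     # refine by bisection
            mid = 0.5 * (a + c)
            if residual(a, w11, S) * residual(mid, w11, S) <= 0:
                c = mid
            else:
                a = mid
        roots.append(0.5 * (a + c))
    return roots

print(find_fixed_points(w11=-20, S=30))   # inhibitory self-connection: one fixed point
print(find_fixed_points(w11=20, S=-10))   # excitatory self-connection: three fixed points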
33
Stable fixed point
34
Stable and unstable fixed points
35
Bifurcation
Stable regime: w11 = -20
By changing the value of w11, the neuron's
stability properties change. The system has
undergone a bifurcation.
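Using the hypothetical find_fixed_points helper sketched above, the bifurcation can be observed directly as the number of fixed points changes when w11 is varied (the external input is held fixed at S = -10 here purely as an example):

for w11 in (-20, 0, 5, 20):
    print(w11, len(find_fixed_points(w11=w11, S=-10)))
# the count jumps from 1 to 3 once w11 exceeds a critical value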
36
Can we create a two-neuron oscillator?
Yes, but it is not easy.
37
Does this network oscillate?
No
38
Does this network oscillate?
No
39
Two-neuron oscillator


Yes, with
tau1 = tau2 = 0.1, D = 1, b1 = -2.75, b2 = -1.75,
w11 = 4.5, w12 = -1, w21 = 1, w22 = 4.5
See Beer (1995), Adaptive Behavior, Vol. 3, No. 4
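A simulation sketch of this two-neuron circuit with the parameters above, assuming the standard CTRNN form tau_i dy_i/dt = -y_i + sum_j w_ij sigma(y_j + b_j) used by Beer (1995); the index convention w_ij = weight from neuron j to neuron i is an assumption:

import numpy as np
import matplotlib.pyplot as plt

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

# Parameters from the slide (Beer 1995); W[i, j] is assumed to be the
# weight from neuron j to neuron i.
tau = np.array([0.1, 0.1])
b   = np.array([-2.75, -1.75])
W   = np.array([[4.5, -1.0],
                [1.0,  4.5]])

dt, T = 0.001, 10.0
y = np.array([0.1, 0.2])            # arbitrary initial membrane potentials
trace = []
for _ in range(int(T / dt)):
    x = sigmoid(y + b)              # neuron outputs
    y = y + dt / tau * (-y + W @ x) # forward Euler step of the CTRNN
    trace.append(x.copy())

trace = np.array(trace)
plt.plot(trace[:, 0], trace[:, 1])  # phase plot of the limit cycle
plt.xlabel("x1")
plt.ylabel("x2")
plt.show()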
40
Two-neuron oscillator
41
Two-neuron oscillator
Phase plot
42
Two-neuron network: possible behaviors
See Beer (1995), Adaptive Behavior, Vol. 3, No. 4
43
Conclusion: even very simple leaky-integrator
neural networks can exhibit rich dynamics
44
Half-center oscillators
Brown in 1914 suggested that rhythms could be
generated centrally (as opposed to peripherally)
by systems of coupled neurons with reciprocal
inhibition
Brown understood that mechanisms for producing
the transitions between activity in the two
halves of the circuit were required
45
Four-neuron oscillator
46
Four-neuron oscillator
D = 1; tau = (0.02, 0.02, 0.1, 0.1); b = (3.0, 3.0, -3.0, -3.0);
w(1,2) = w(1,4) = w(2,1) = w(2,3) = -5;
w(1,3) = w(2,4) = 5; w(3,1) = w(4,2) = -5
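For completeness, the connectivity listed above can be collected into a weight matrix and reused with the same Euler loop as the two-neuron sketch. The convention w(i,j) = weight from neuron j onto neuron i, and the zero value of all unspecified connections, are assumptions:

import numpy as np

# Parameters from the slide; unspecified connections are assumed to be zero.
D   = 1.0
tau = np.array([0.02, 0.02, 0.1, 0.1])
b   = np.array([3.0, 3.0, -3.0, -3.0])

W = np.zeros((4, 4))        # W[i, j]: weight from neuron j to neuron i (0-based)
W[0, 1] = W[0, 3] = W[1, 0] = W[1, 2] = -5.0   # w(1,2), w(1,4), w(2,1), w(2,3)
W[0, 2] = W[1, 3] = 5.0                        # w(1,3), w(2,4)
W[2, 0] = W[3, 1] = -5.0                       # w(3,1), w(4,2)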
47
Four-neuron oscillator
48
Modulation of a four-neuron oscillator
49
Modulation of a four-neuron oscillator
50
Applications of a four-neuron oscillator
Each neuron's activation function is governed by:
51
Applications of a four-neuron oscillator
Transition from a walking to a trotting and then a
galloping gait, following an increase of the tonic
input from 1 to 1.4 and 1.6, respectively.
52
Applications of a four-neuron oscillator
Simple circuit to implement a sitting and lying
down behavior by sequential inhibition of the legs
53
Applications of a four-neuron oscillator
54
How to design leaky-integrator neural networks?
  • Recurrent back-propagation algorithm
  • with the use of an energy function, cf. Hopfield
  • Genetic algorithms
  • Linear regression (echo state network)
  • Use guidance from dynamical systems theory

55
Examples of leaky-integrator neural networks
56
Application of leaky-integrator neural networks
Modeling Human Data
Time-Delay NN acting as associative memory for
storing sequences of activation
Coupled Oscillators for basic cyclic motion and
reflexes
Muscle Model
57
Application of leaky-integrator neural networks
Modeling Human Data
Human Data
Simulated Data
Muscle Model
58
Schematic setup of Echo State Network
59
Schematic setup of ESN (II)
60
How do we train Wout?
  • It is a supervised learning algorithm. The
    training dataset consists of the input time series
    and the desired output time series.
61
Simply do a linear regression
  • Linear regression on the (high-dimensional) space
    of the inputs AND internal states.
  • Geometrical illustration with a 3-unit network
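A minimal sketch of the readout training: reservoir states (and, optionally, the inputs) are collected row by row while the network is driven by the training input, and W_out is then obtained by a single (possibly ridge-regularised) least-squares fit. Variable names and shapes are illustrative assumptions:

import numpy as np

def train_readout(states, inputs, targets, ridge=1e-6):
    """Least-squares fit of W_out so that [state; input] @ W_out ~= target.

    states:  (T, N) reservoir activations collected while driving the ESN
    inputs:  (T, K) the input time series
    targets: (T, M) the desired output time series
    """
    X = np.hstack([states, inputs])          # regression features
    # Ridge-regularised normal equations: W = (X'X + r I)^-1 X' Y
    W_out = np.linalg.solve(X.T @ X + ridge * np.eye(X.shape[1]),
                            X.T @ targets)
    return W_out                             # shape (N + K, M)

# usage sketch: y_pred = np.hstack([states, inputs]) @ W_out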

62
Data acquisition
63
Network inputs and outputs
64
Neuron models
From abstract to realistic:
  • Binary neurons (discrete time)
  • Real-number neurons (discrete time)
65
BACKPROPAGATION
A two-layer Feed-Forward Neural Network
(diagram: inputs -> input neurons -> hidden neurons -> output neurons -> outputs)
The desired output of the hidden nodes is unknown.
Thus, the error must be back-propagated from the
output neurons to the hidden neurons.
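A compact sketch of one backpropagation step for such a two-layer network, assuming sigmoid hidden units, a linear output layer and a squared-error loss (the generic textbook update, not the specific notation of the slides):

import numpy as np

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

def backprop_step(x, t, W1, W2, lr=0.1):
    """One gradient-descent update on a single (input x, target t) pair."""
    h = sigmoid(W1 @ x)           # hidden activations
    y = W2 @ h                    # linear output
    delta_out = y - t             # error at the output neurons
    # Back-propagate the error to the hidden layer through W2
    delta_hid = (W2.T @ delta_out) * h * (1.0 - h)
    W2 -= lr * np.outer(delta_out, h)
    W1 -= lr * np.outer(delta_hid, x)
    return W1, W2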
66
BPRNN
Backpropagation has also been generalized to
allow learning in recurrent neural networks
(Elman and Jordan types of RNN), e.g. for
learning time series.
67
Recurrent Neural Networks
Recurrent neural network

JORDAN NETWORK
(diagram: input units and context units feed the hidden units, which feed the output layer)
68
Recurrent Neural Networks
Recurrent neural network

ELMAN NETWORK
(diagram: input units and context units feed the hidden units, which feed the output layer)
The context units store the content of the hidden
units
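A sketch of one time step of an Elman network, showing how the context units simply copy the previous hidden activations back into the hidden layer's input. The function and weight names are illustrative assumptions:

import numpy as np

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

def elman_step(x, context, W_in, W_ctx, W_out):
    """One forward pass: the hidden layer sees the current input and the previous hidden state."""
    hidden = sigmoid(W_in @ x + W_ctx @ context)
    output = W_out @ hidden
    new_context = hidden.copy()   # context units store the hidden activations
    return output, new_context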
69
Recurrent Neural Networks

ASSOCIATE SEQUENCES OF SENSORI-MOTOR PERCEPTIONS
(diagram: robot perceptions feed the input and context units; the output layer produces the robot actions)
70
Recurrent Neural Networks: Robotics Applications

ASSOCIATE SEQUENCES OF SENSORI-MOTOR
PERCEPTIONS Generalization
71
Recurrent Neural Networks: Robotics Applications

ASSOCIATE SEQUENCES OF SENSORI-MOTOR
PERCEPTIONS Generalization
72
Recurrent Neural Networks: Robotics Applications

ASSOCIATE SEQUENCES OF SENSORI-MOTOR
PERCEPTIONS Generalization
Ito, Noda, Hashino & Tani, "Dynamic and
interactive generation of object handling
behaviors by a small humanoid robot using a
dynamic neural network model," Neural Networks,
April 2006
73
Recurrent Neural Networks: Robotics Applications

74
Recurrent Neural Networks: Robotics Applications

75
Recurrent Neural Networks: Robotics Applications

ASSOCIATE SEQUENCES OF SENSORI-MOTOR
PERCEPTIONS Generalization
76
Recurrent Neural Networks: Robotics Applications

77
Neuron models
From abstract to realistic:
  • Binary neurons (discrete time)
  • Real-number neurons (discrete time)
  • Real-number neurons (continuous time)
  • Spiking neurons (integrate and fire)
78
Rate coding versus spike coding
Important question: is information in the brain
encoded in the rates of spikes or in the timing of
individual spikes? Answer: probably both! Rates
encode the information sent to muscles. Visual
processing can be done very quickly (about 150 ms),
with just a few spikes (Thorpe S., Fize D., and
Marlot C., 1996, Nature).
79
Rate coding versus spike coding
Spike coding
Rate coding
Time
80
Integrate-and-fire neuron
Integrate-and-fire neurons are like leaky-integrator
models, but produce spikes when the membrane
potential exceeds a threshold. They combine leaky
integration and reset. See Spiking Neuron Models:
Single Neurons, Populations, Plasticity, Gerstner
and Kistler, Cambridge University Press, 2002.
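A minimal leaky integrate-and-fire sketch combining leaky integration with a threshold-and-reset rule; the threshold and reset values are illustrative assumptions:

import numpy as np

def simulate_lif(I, tau=0.02, v_thresh=1.0, v_reset=0.0, dt=0.001):
    """Leaky integrate-and-fire: integrate the input I(t), spike and reset at threshold."""
    v = v_reset
    spikes, vs = [], []
    for t, i_t in enumerate(I):
        v += dt / tau * (-v + i_t)    # leaky integration of the input current
        if v >= v_thresh:             # threshold crossed: emit a spike...
            spikes.append(t * dt)
            v = v_reset               # ...and reset the membrane potential
        vs.append(v)
    return np.array(vs), spikes

# usage sketch: vs, spike_times = simulate_lif(np.full(2000, 1.5))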
81
Neuron models
From abstract to realistic:
  • Binary neurons (discrete time)
  • Real-number neurons (discrete time)
  • Real-number neurons (continuous time)
  • Spiking neurons (integrate and fire)
  • Biophysical models
82
Hodgkin and Huxley neuron model
A very influential model of the spiking properties of
a neuron, based on ionic currents. The details are
outside the scope of this course.
83
Hodgkin and Huxley neuron model
A very influential model of the spiking properties of
a neuron, based on ionic currents.
84
Hodgkin and Huxley neuron model
REFERENCES
Original paper: A. L. Hodgkin and A. F. Huxley, "A
quantitative description of membrane current and its
application to conduction and excitation in nerve,"
J. Physiol., 1952 August 28; 117(4): 500-544.
http://www.pubmedcentral.nih.gov/picrender.fcgi?artid=1392413&blobtype=pdf
Recent update: Blaise Agüera y Arcas, Adrienne L.
Fairhall, William Bialek, "Computation in a Single
Neuron: Hodgkin and Huxley Revisited," Neural
Computation, Vol. 15, No. 8, 1715-1749, 2003.
http://www.mitpressjournals.org/doi/pdfplus/10.1162/08997660360675017
85
FURTHER READING I
  • Ito, Noda, Hashino & Tani, "Dynamic and
    interactive generation of object handling behaviors
    by a small humanoid robot using a dynamic neural
    network model," Neural Networks, April 2006.
    http://www.bdc.brain.riken.go.jp/tani/papers/NN2006.pdf
  • H. Jaeger, "The echo state approach to analysing
    and training recurrent neural networks," GMD Report
    148, German National Research Institute for
    Computer Science, 2001.
    ftp://borneo.gmd.de/pub/indy/publications_herbert/EchoStatesTechRep.pdf
  • B. Mathayomchan and R. D. Beer, "Center-Crossing
    Recurrent Neural Networks for the Evolution of
    Rhythmic Behavior," Neural Comput., September 1,
    2002; 14(9): 2043-2051.
    http://www.mitpressjournals.org/doi/pdf/10.1162/089976602320263999
  • R. D. Beer, "Parameter space structure of
    continuous-time recurrent neural networks," Neural
    Comput., December 1, 2006; 18(12): 3009-3051.
    http://www.mitpressjournals.org/doi/pdf/10.1162/neco.2006.18.12.3009
  • Pham, Q.C., and Slotine, J.J.E., "Stable Concurrent
    Synchronization in Dynamic System Networks," Neural
    Networks, 20(1), 2007.
    http://web.mit.edu/nsl/www/preprints/Polyrhythms05.pdf
  • Billard, A. and Ijspeert, A.J. (2000), "Biologically
    inspired neural controllers for motor control in a
    quadruped robot," in Proceedings of the
    International Joint Conference on Neural Networks,
    Como (Italy), July.
    http://lasa.epfl.ch/publications/uploadedFiles/AB_Ijspeert_IJCINN2000.pdf
  • Billard, A. and Mataric, M. (2001), "Learning human
    arm movements by imitation: Evaluation of a
    biologically-inspired connectionist architecture,"
    Robotics & Autonomous Systems, 941, 1-16.
    http://lasa.epfl.ch/publications/uploadedFiles/AB_Mataric_RAS2001.pdf

86
FURTHER READING II
  • Herbert Jaeger and Harald Haas, "Harnessing
    Nonlinearity: Predicting Chaotic Systems and Saving
    Energy in Wireless Communication," Science, Vol.
    304, No. 5667, pp. 78-80, 2004.
    http://www.sciencemag.org/cgi/reprint/304/5667/78.pdf
  • S. Psujek, J. Ames, and R. D. Beer, "Connection and
    coordination: the interplay between architecture
    and dynamics in evolved model pattern generators,"
    Neural Comput., March 1, 2006; 18(3): 729-747.
    http://www.mitpressjournals.org/doi/pdf/10.1162/neco.2006.18.3.729