1. Machine Learning: Continuous Time-Delay Neural Networks
Limit Cycles, Stability and Convergence
2. Recurrent Neural Networks
- So far we have considered only feed-forward neural networks (apart from Hebbian learning).
- Most biological networks have recurrent connections.
- This change in the direction of information flow is interesting, as it allows:
  - keeping a memory of a neuron's activation
  - propagating information across output neurons
3. Neuron Models (from abstract to realistic)
- Binary neurons, discrete time
- Real-valued neurons, discrete time
- Real-valued neurons, continuous time
4. Dynamical Systems and Neural Networks
Dynamical systems are at the core of the control systems underlying skillful motion in many vertebrates.
Central Pattern Generators: purely cyclic patterns underlying basic locomotion.
5. Dynamical Systems and Neural Networks
Adaptive controllers: dynamical modulation of a CPG.
6-16. Dynamical Systems
(Figure slides; the graphical content is not preserved in this text version.)
17. Dynamical Systems: Applications
- Model of human three-dimensional reaching movements.
- Goal: find a generic representation of motions that allows both robust visual recognition and flexible regeneration of motion.
18. Dynamical Systems: Applications
Dynamical system modulation:
- Adaptation to sudden target displacement
- Different initial conditions
19. Dynamical Systems: Applications
- Adaptation to sudden target displacement
- Different initial conditions
20. Dynamical Systems: Applications
- Online adaptation to changes in the context
- Adaptation to different contexts
21. Neuron Models (from abstract to realistic)
- Binary neurons, discrete time
- Real-valued neurons, discrete time
- Real-valued neurons, continuous time
22. Leaky-Integrator Neuron Model
Idea: add a state variable m_j (the membrane potential) that is governed by a differential equation. In the standard formulation (Beer, 1995):

tau_j dm_j/dt = -m_j + sum_i w_ij x_i + S_j,    x_j = 1 / (1 + exp(-D (m_j + b_j)))

where tau_j is the time constant, S_j an external input, b_j a bias, D the gain of the sigmoid, and w_ij the weight from neuron i to neuron j.
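The dynamics above can be simulated with a simple Euler scheme. A minimal sketch (the function names and default gain/bias values are illustrative, not from the slides):

```python
import numpy as np

def sigmoid(m, D=1.0, b=0.0):
    # Firing rate x_j as a logistic function of the membrane potential m_j.
    return 1.0 / (1.0 + np.exp(-D * (m + b)))

def euler_step(m, W, S, tau, dt, D=1.0, b=0.0):
    """One Euler step of tau_j dm_j/dt = -m_j + sum_i W[j, i] x_i + S_j."""
    x = sigmoid(m, D, b)
    dm = (-m + W @ x + S) / tau
    return m + dt * dm
```

With W = 0 a neuron simply relaxes toward its input S with time constant tau, which is the behavior analyzed on the next slides.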
24. Leaky-Integrator Neuron Model
This type of neuron model is used in:
- recurrent neural networks for time-series analysis (e.g. echo state networks)
- neural oscillators
- several CPG models
- associative memories, e.g. the continuous-time version of the Hopfield model
25. Behavior of a Single Neuron
The behavior of a single leaky-integrator neuron without a self-connection is governed by a linear differential equation that can be solved analytically. With a constant input S:

tau dm/dt = -m + S    =>    m(t) = S + (m0 - S) exp(-t / tau)
26. Behavior of a Single Neuron
Parameters: tau = 0.2, D = 1.0, m0 = 0.0, S = 3.0, b = 0.0
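Using the slide's parameters, the closed-form solution can be checked against a numerical simulation. A minimal sketch (the Euler step size is an arbitrary choice):

```python
import math

tau, m0, S = 0.2, 0.0, 3.0        # parameters from the slide (D = 1, b = 0)
dt, T = 1e-4, 1.0

# Euler integration of tau * dm/dt = -m + S
m = m0
for _ in range(int(T / dt)):
    m += dt * (-m + S) / tau

# Closed-form solution of the same linear ODE
m_exact = S + (m0 - S) * math.exp(-T / tau)
print(abs(m - m_exact))            # only a small discretization error remains
```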
27. Behavior of a Single Neuron
A single leaky-integrator neuron with a self-connection gives a nonlinear differential equation that cannot be solved analytically:

tau dm/dt = -m + w11 / (1 + exp(-D (m + b))) + S

The sigmoid is the nonlinear term.
28. Behavior of a Single Neuron: Numerical Simulation
Parameters: tau = 0.2, D = 1, w11 = -0.5, b = 0.0, S = 3.0
29. Fixed Points with an Inhibitory Self-Connection
Finding the (stable or unstable) fixed points: set dm/dt = 0, i.e. solve m = w11 / (1 + exp(-D (m + b))) + S.
Parameters: tau = 0.2, D = 1, w11 = -20, b = 0.0, S = 30
30. Fixed Points with an Inhibitory Self-Connection
Parameters: tau = 0.2, D = 1, w11 = -20, b = 0.0, S = 30
31. Fixed Points with an Excitatory Self-Connection
Finding the (stable or unstable) fixed points.
Parameters: tau = 0.2, D = 1, w11 = 20, b = 0.0, S = -10
32. Fixed Points with an Excitatory Self-Connection
Finding the (stable or unstable) fixed points.
Parameters: tau = 0.2, D = 1, w11 = 20, b = 0.0, S = -10
This neuron has three fixed points and will converge to one of the two stable ones, depending on the initial conditions.
33. Stable Fixed Point
34. Stable and Unstable Fixed Points
35. Bifurcation
By changing the value of w11, the stability properties of the neuron change: the system has undergone a bifurcation. (For w11 = -20 the single fixed point is stable.)
36. Can We Create a Two-Neuron Oscillator?
Yes, but it is not easy.
37Does this network oscillate?
-
2
1
-
No
38. Does This Network Oscillate?
No.
39. Two-Neuron Oscillator
Yes, with:
tau1 = tau2 = 0.1, D = 1
b1 = -2.75, b2 = -1.75
w11 = 4.5, w12 = -1, w21 = 1, w22 = 4.5
See Beer (1995), Adaptive Behavior, Vol. 3, No. 4.
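This oscillator can be simulated directly with Euler integration. A sketch using the parameters above; the weight-index convention (w_ij as the weight from neuron i to neuron j) and the initial condition are assumptions:

```python
import numpy as np

# Euler simulation of the two-neuron oscillator with the slide's parameters.
# Assumed convention: w_ij is the weight from neuron i to neuron j, so the
# row for neuron 1 holds (w11, w21) and the row for neuron 2 holds (w12, w22).
tau = np.array([0.1, 0.1])
b = np.array([-2.75, -1.75])
W = np.array([[4.5, 1.0],     # into neuron 1: w11 = 4.5, w21 = 1
              [-1.0, 4.5]])   # into neuron 2: w12 = -1, w22 = 4.5

def sigmoid(m):
    return 1.0 / (1.0 + np.exp(-m))   # D = 1

dt, T = 1e-3, 10.0
m = np.array([0.0, 0.0])              # arbitrary initial condition
x1_trace = []
for _ in range(int(T / dt)):
    x = sigmoid(m + b)
    m = m + dt * (-m + W @ x) / tau
    x1_trace.append(x[0])

# After an initial transient the outputs settle onto a limit cycle.
late = np.array(x1_trace[-3000:])     # last 3 seconds
print(late.max() - late.min())        # sustained oscillation amplitude
```

The only equilibrium sits at the center-crossing point (m1, m2) = (2.75, 1.75), where linearization gives complex eigenvalues with positive real part, so trajectories spiral out onto the limit cycle.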
40. Two-Neuron Oscillator
41. Two-Neuron Oscillator: Phase Plot
42. Two-Neuron Network: Possible Behaviors
See Beer (1995), Adaptive Behavior, Vol. 3, No. 4.
43. Conclusion: even very simple leaky-integrator neural networks can exhibit rich dynamics.
44. Half-Center Oscillators
Brown (1914) suggested that rhythms could be generated centrally (as opposed to peripherally) by systems of coupled neurons with reciprocal inhibition.
Brown understood that mechanisms for producing the transitions between activity in the two halves of the circuit were required.
45. Four-Neuron Oscillator
46. Four-Neuron Oscillator
Parameters:
D = 1
tau = (0.02, 0.02, 0.1, 0.1)
b = (3.0, 3.0, -3.0, -3.0)
w(1,2) = w(1,4) = w(2,1) = w(2,3) = -5
w(1,3) = w(2,4) = 5
w(3,1) = w(4,2) = -5
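A hedged sketch of this four-neuron circuit in simulation. The reading of w(i,j) as "weight from neuron i to neuron j" and the initial condition are assumptions (with the opposite convention, transpose W):

```python
import numpy as np

# Euler simulation of the four-neuron CPG with the slide's parameters.
tau = np.array([0.02, 0.02, 0.1, 0.1])
b = np.array([3.0, 3.0, -3.0, -3.0])
W = np.zeros((4, 4))
# W[j-1, i-1] = w(i, j): weight from neuron i into neuron j (assumed)
W[1, 0] = W[3, 0] = -5.0   # w(1,2) = w(1,4) = -5
W[0, 1] = W[2, 1] = -5.0   # w(2,1) = w(2,3) = -5
W[2, 0] = 5.0              # w(1,3) = 5
W[3, 1] = 5.0              # w(2,4) = 5
W[0, 2] = -5.0             # w(3,1) = -5
W[1, 3] = -5.0             # w(4,2) = -5

def sigmoid(m):
    return 1.0 / (1.0 + np.exp(-m))   # D = 1

dt = 1e-4                              # small step: fastest tau is 0.02
m = np.array([0.1, -0.1, 0.0, 0.0])    # slight asymmetry between the halves
for _ in range(int(5.0 / dt)):
    x = sigmoid(m + b)
    m = m + dt * (-m + W @ x) / tau

print(np.round(x, 3))                  # firing rates after 5 s of simulation
```

Structurally this is a half-center circuit: neurons 1 and 2 inhibit each other, each excites a slow neuron (3 or 4) whose delayed inhibitory feedback forces the transitions between the two halves.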
47. Four-Neuron Oscillator
48. Modulation of a Four-Neuron Oscillator
49. Modulation of a Four-Neuron Oscillator
50. Applications of a Four-Neuron Oscillator
Each neuron's activation is governed by the leaky-integrator differential equation.
51. Applications of a Four-Neuron Oscillator
Transition from a walking to a trotting and then a galloping gait, following an increase of the tonic input from 1 to 1.4 and 1.6 respectively.
52. Applications of a Four-Neuron Oscillator
A simple circuit implementing sitting and lying-down behavior by sequential inhibition of the legs.
53. Applications of a Four-Neuron Oscillator
54. How to Design Leaky-Integrator Neural Networks?
- Recurrent back-propagation algorithm
- With the use of an energy function (cf. Hopfield)
- Genetic algorithms
- Linear regression (echo state networks)
- Guidance from dynamical systems theory
55. Examples of Leaky-Integrator Neural Networks
56. Application of Leaky-Integrator Neural Networks: Modeling Human Data
- Time-delay NN acting as associative memory for storing sequences of activation
- Coupled oscillators for basic cyclic motion and reflexes
- Muscle model
57. Application of Leaky-Integrator Neural Networks: Modeling Human Data
(Comparison of human data and simulated data; muscle model.)
58. Schematic Setup of an Echo State Network
59. Schematic Setup of an ESN (II)
60. How Do We Train Wout?
It is a supervised learning algorithm. The training dataset consists of:
- the input time series
- the desired output time series
61. Simply Do a Linear Regression
- Linear regression on the (high-dimensional) space of the inputs AND the internal states.
- Geometrical illustration with a 3-unit network.
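A minimal sketch of this training procedure: a fixed random reservoir, with only the readout Wout obtained by least squares on the inputs and internal states. All sizes, scalings and the toy task are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny echo state network: the reservoir weights stay fixed and random;
# only the linear readout is trained.
# Toy task: reproduce a phase-shifted copy of a sine-wave input.
n_in, n_res, T = 1, 50, 1000
Win = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius < 1

u = np.sin(0.1 * np.arange(T))[:, None]          # input time series
d = np.sin(0.1 * np.arange(T) + 0.5)             # desired output time series

x = np.zeros(n_res)
feats = []
for t in range(T):
    x = np.tanh(Win @ u[t] + W @ x)              # reservoir update
    feats.append(np.concatenate([u[t], x]))      # inputs AND internal states
X = np.array(feats)[100:]                        # drop the initial transient
y = d[100:]

Wout, *_ = np.linalg.lstsq(X, y, rcond=None)     # linear regression
rmse = np.sqrt(np.mean((X @ Wout - y) ** 2))
print(rmse)
```

Scaling the reservoir's spectral radius below 1 is one common way to encourage the echo state property (fading memory of past inputs); in practice a ridge-regularized regression is often used instead of plain least squares.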
62. Data Acquisition
63. Network Inputs and Outputs
64. Neuron Models (from abstract to realistic)
- Binary neurons, discrete time
- Real-valued neurons, discrete time
65. BACKPROPAGATION
A two-layer feed-forward neural network: inputs feed the input neurons, then the hidden neurons, and the output neurons produce the outputs.
The output of the hidden nodes is unknown; thus the error must be back-propagated from the output neurons to the hidden neurons.
66. Backpropagation in Recurrent Neural Networks
Backpropagation has also been generalized to allow learning in recurrent neural networks (Elman- and Jordan-type RNNs), enabling the learning of time series.
67. Recurrent Neural Networks: the Jordan Network
Layers: input units, hidden units, output layer, plus context units.
In a Jordan network the context units store a copy of the output layer's previous activations.
68. Recurrent Neural Networks: the Elman Network
Layers: input units, hidden units, output layer, plus context units.
The context units store the content of the hidden units.
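One forward step of an Elman network can be sketched as follows (weights random and untrained; the layer sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

# One forward step of an Elman network. The context units are simply a copy
# of the previous hidden-unit activations, fed back as additional inputs.
n_in, n_hid, n_out = 3, 5, 2                 # illustrative sizes
W_ih = rng.standard_normal((n_hid, n_in))    # input   -> hidden
W_ch = rng.standard_normal((n_hid, n_hid))   # context -> hidden
W_ho = rng.standard_normal((n_out, n_hid))   # hidden  -> output

def step(u, context):
    hidden = np.tanh(W_ih @ u + W_ch @ context)
    output = W_ho @ hidden
    return output, hidden                    # hidden becomes the next context

context = np.zeros(n_hid)
for u in rng.standard_normal((4, n_in)):     # a short input sequence
    y, context = step(u, context)
```

Because the context is a copy of the previous hidden state, the output at each time step depends on the whole input history, which is what allows these networks to learn sequences.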
69. Recurrent Neural Networks
ASSOCIATE SEQUENCES OF SENSORI-MOTOR PERCEPTIONS: robot perceptions enter through the input and context units; robot actions are produced by the output layer.
70. Recurrent Neural Networks: Robotics Applications
Associating sequences of sensori-motor perceptions; generalization.
71. Recurrent Neural Networks: Robotics Applications
Associating sequences of sensori-motor perceptions; generalization.
72. Recurrent Neural Networks: Robotics Applications
Ito, Noda, Hashino and Tani, "Dynamic and interactive generation of object handling behaviors by a small humanoid robot using a dynamic neural network model," Neural Networks, April 2006.
73. Recurrent Neural Networks: Robotics Applications
74. Recurrent Neural Networks: Robotics Applications
75. Recurrent Neural Networks: Robotics Applications
Associating sequences of sensori-motor perceptions; generalization.
76. Recurrent Neural Networks: Robotics Applications
77. Neuron Models (from abstract to realistic)
- Binary neurons, discrete time
- Real-valued neurons, discrete time
- Real-valued neurons, continuous time
- Spiking neurons (integrate-and-fire)
78. Rate Coding versus Spike Coding
An important question: is information in the brain encoded in the rates of spikes or in the timing of individual spikes? Answer: probably both!
- Rates encode information sent to muscles.
- Visual processing can be done very quickly (around 150 ms), with just a few spikes (Thorpe, Fize and Marlot, 1996, Nature).
79. Rate Coding versus Spike Coding
(Figure: spike trains illustrating spike coding vs. rate coding over time.)
80. Integrate-and-Fire Neuron
Integrate-and-fire neurons are like leaky-integrator models, but produce a spike when the membrane potential exceeds a threshold: the model combines leaky integration and reset. See Gerstner and Kistler, Spiking Neuron Models: Single Neurons, Populations, Plasticity, Cambridge University Press, 2002.
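The leaky-integration-plus-reset idea can be sketched in a few lines (all parameter values are illustrative assumptions):

```python
# Leaky integrate-and-fire: leaky integration of the input, plus a spike and
# a reset whenever the membrane potential crosses the threshold.
tau, v_rest, v_thresh, v_reset = 0.02, 0.0, 1.0, 0.0
dt, T, I = 1e-4, 0.5, 1.5      # constant suprathreshold input current

v = v_rest
spikes = []                    # spike times in seconds
for step in range(int(T / dt)):
    v += dt * (-(v - v_rest) + I) / tau
    if v >= v_thresh:          # threshold crossed: emit a spike and reset
        spikes.append(step * dt)
        v = v_reset
print(len(spikes))
```

With a constant suprathreshold input the neuron fires regularly; the inter-spike interval follows from the leaky-integrator solution as tau * ln(I / (I - v_thresh)), about 22 ms here.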
81. Neuron Models (from abstract to realistic)
- Binary neurons, discrete time
- Real-valued neurons, discrete time
- Real-valued neurons, continuous time
- Spiking neurons (integrate-and-fire)
- Biophysical models
82. Hodgkin and Huxley Neuron Model
A very influential model of the spiking behavior of a neuron, based on ionic currents. The details are beyond the scope of this course.
83. Hodgkin and Huxley Neuron Model
A very influential model of the spiking behavior of a neuron, based on ionic currents.
84. Hodgkin and Huxley Neuron Model: References
- Original paper: A. L. Hodgkin and A. F. Huxley, "A quantitative description of membrane current and its application to conduction and excitation in nerve," J. Physiol., 117(4):500-544, August 1952. http://www.pubmedcentral.nih.gov/picrender.fcgi?artid=1392413&blobtype=pdf
- Recent update: B. Agüera y Arcas, A. L. Fairhall and W. Bialek, "Computation in a Single Neuron: Hodgkin and Huxley Revisited," Neural Computation, 15(8):1715-1749, 2003. http://www.mitpressjournals.org/doi/pdfplus/10.1162/08997660360675017
85. FURTHER READING I
- Ito, Noda, Hashino and Tani, "Dynamic and interactive generation of object handling behaviors by a small humanoid robot using a dynamic neural network model," Neural Networks, April 2006. http://www.bdc.brain.riken.go.jp/tani/papers/NN2006.pdf
- H. Jaeger, "The echo state approach to analysing and training recurrent neural networks" (GMD Report 148, German National Research Institute for Computer Science, 2001). ftp://borneo.gmd.de/pub/indy/publications_herbert/EchoStatesTechRep.pdf
- B. Mathayomchan and R. D. Beer, "Center-Crossing Recurrent Neural Networks for the Evolution of Rhythmic Behavior," Neural Computation, 14(9):2043-2051, September 2002. http://www.mitpressjournals.org/doi/pdf/10.1162/089976602320263999
- R. D. Beer, "Parameter space structure of continuous-time recurrent neural networks," Neural Computation, 18(12):3009-3051, December 2006. http://www.mitpressjournals.org/doi/pdf/10.1162/neco.2006.18.12.3009
- Q. C. Pham and J. J. E. Slotine, "Stable Concurrent Synchronization in Dynamic System Networks," Neural Networks, 20(1), 2007. http://web.mit.edu/nsl/www/preprints/Polyrhythms05.pdf
- A. Billard and A. J. Ijspeert, "Biologically inspired neural controllers for motor control in a quadruped robot," Proceedings of the International Joint Conference on Neural Networks, Como, Italy, July 2000. http://lasa.epfl.ch/publications/uploadedFiles/AB_Ijspeert_IJCINN2000.pdf
- A. Billard and M. Mataric, "Learning human arm movements by imitation: Evaluation of a biologically-inspired connectionist architecture," Robotics and Autonomous Systems, 941, 1-16, 2001. http://lasa.epfl.ch/publications/uploadedFiles/AB_Mataric_RAS2001.pdf
86. FURTHER READING II
- H. Jaeger and H. Haas, "Harnessing Nonlinearity: Predicting Chaotic Systems and Saving Energy in Wireless Communication," Science, Vol. 304, No. 5667, pp. 78-80. http://www.sciencemag.org/cgi/reprint/304/5667/78.pdf
- S. Psujek, J. Ames and R. D. Beer, "Connection and coordination: the interplay between architecture and dynamics in evolved model pattern generators," Neural Computation, 18(3):729-747, March 2006. http://www.mitpressjournals.org/doi/pdf/10.1162/neco.2006.18.3.729