Title: Architecture and Equilibria

Transcript and Presenter's Notes
1
Architecture and Equilibria
Chapter 6
2
Chapter 6 Architecture and Equilibria
6.1 Neural Network as a Stochastic Gradient System
We classify neural network models by their synaptic
connection topologies and by how learning modifies
those connection topologies:
  • synaptic connection topologies
  • how learning modifies the connection topologies
3
Chapter 6 Architecture and Equilibria
6.1 Neural Network as a Stochastic Gradient System
  • Note: the taxonomy boundaries are fuzzy
    because the defining terms are fuzzy.

4
Chapter 6 Architecture and Equilibria
6.1 Neural Network as a Stochastic Gradient System
  • Three stochastic gradient systems represent the
    three main categories:
  • Backpropagation (BP)
  • Adaptive vector quantization (AVQ)
  • Random adaptive bidirectional associative memory
    (RABAM)

5
Chapter 6 Architecture and Equilibria
6.2 Global Equilibria: Convergence and Stability
A neural network's synapses and neurons define three
dynamical systems:
  • the synaptic dynamical system
  • the neuronal dynamical system
  • the joint synaptic-neuronal dynamical system
Historically, neural engineers have studied the first
two systems independently: they study learning in
feedforward neural networks and neural stability in
nonadaptive feedback neural networks. RABAM and ART
networks depend on the joint equilibration of the
synaptic and neuronal dynamical systems.
6
Chapter 6 Architecture and Equilibria
6.2 Global Equilibria: Convergence and Stability
Equilibrium is steady state (for fixed-point
attractors). Convergence is synaptic equilibrium.
Stability is neuronal equilibrium. We denote steady
state in the neuronal field F_X analogously; both
notions take other forms in the presence of noise
(see the notation sketch below).
Stability-convergence dilemma: neurons fluctuate
faster than synapses fluctuate, so convergence
undermines stability.
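The notation behind these definitions did not survive the transcript; the following is a sketch in the standard notation of the cited Kosko chapter, where M is the synaptic matrix, F_X the neuronal field, and N a zero-mean noise process (the noisy forms are a reconstruction, not verbatim from the slide):

\dot{\mathbf{M}} = \mathbf{0} \quad \text{(convergence: synaptic steady state)}
\dot{F}_X = \mathbf{0} \quad \text{(stability: neuronal steady state)}
% With noise, steady state holds only on average (stochastic equilibrium):
\dot{\mathbf{M}} = \mathbf{N}, \qquad \dot{F}_X = \mathbf{N}, \qquad E[\mathbf{N}] = \mathbf{0}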
7
Chapter 6 Architecture and Equilibria
6.3 Synaptic Convergence to Centroids: AVQ Algorithms
Competitive learning adaptively quantizes the input
pattern space and characterizes the continuous
distribution of patterns.
We shall prove that competitive AVQ synaptic vectors
converge exponentially quickly to pattern-class
centroids, and that they vibrate about the centroids
in a Brownian motion.
8
Chapter 6 Architecture and Equilibria
6.3 Synaptic Convergence to Centroids: AVQ Algorithms
Competitive AVQ Stochastic Differential Equations
The random indicator function
Supervised learning algorithms depend explicitly on
the indicator functions. Unsupervised learning
algorithms do not require this pattern-class
information.
Centroid
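The defining equations for the indicator function and the centroid were dropped from the transcript; a sketch of their standard form from the cited Kosko chapter, where D_j is the jth decision class and p(x) the pattern density:

I_{D_j}(\mathbf{x}) =
  \begin{cases}
    1, & \mathbf{x} \in D_j \\
    0, & \mathbf{x} \notin D_j
  \end{cases}
\qquad
\bar{\mathbf{x}}_j =
  \frac{\int_{D_j} \mathbf{x}\, p(\mathbf{x})\, d\mathbf{x}}
       {\int_{D_j} p(\mathbf{x})\, d\mathbf{x}}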
9
Chapter 6 Architecture and Equilibria
6.3 Synaptic Convergence to Centroids: AVQ Algorithms
The stochastic unsupervised competitive learning
law (6-10).
We assume the signal approximation (6-11).
The equilibrium and convergence results depend on
approximation (6-11), so (6-10) reduces to the linear
competitive learning law (6-12).
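Equations (6-10) through (6-12) themselves did not survive the transcript; the following is a reconstruction from the cited Kosko chapter, assuming signal-competitive learning with an additive noise term n_{ij}:

% (6-10) stochastic unsupervised competitive learning law
\dot{m}_{ij} = S_j(y_j)\,[\,S_i(x_i) - m_{ij}\,] + n_{ij}
% (6-11) linear signal approximation on the input field
S_i(x_i) \approx x_i
% (6-12) linear competitive learning law
\dot{\mathbf{m}}_j = S_j(y_j)\,[\,\mathbf{x} - \mathbf{m}_j\,] + \mathbf{n}_j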
10
Chapter 6 Architecture and Equilibria
6.3 Synaptic Convergence to Centroids: AVQ Algorithms
Competitive AVQ Algorithms
1. Initialize the synaptic vectors.
2. For a random sample x(t), find the closest
   ("winning") synaptic vector.
3. Update the winning synaptic vector by the UCL, SCL,
   or DCL learning algorithm (see the sketch after
   this list).
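A minimal, runnable Python sketch of these three steps using the UCL update; the synthetic data, the cluster count, and the learning-coefficient schedule c_t = c0 / (1 + t) are illustrative assumptions, not taken from the slides:

import numpy as np

def avq_ucl(samples, num_classes, c0=0.1, seed=0):
    """Competitive AVQ with the unsupervised competitive learning (UCL) update."""
    rng = np.random.default_rng(seed)
    samples = np.asarray(samples, dtype=float)
    # 1. Initialize the synaptic vectors, here to randomly chosen samples.
    m = samples[rng.choice(len(samples), size=num_classes, replace=False)].copy()
    for t, x in enumerate(samples):
        c_t = c0 / (1.0 + t)  # slowly decreasing learning coefficient
        # 2. Find the closest ("winning") synaptic vector for the sample x(t).
        j = int(np.argmin(np.linalg.norm(m - x, axis=1)))
        # 3. Update only the winning synaptic vector (UCL); losers stay unchanged.
        m[j] += c_t * (x - m[j])
    return m

# Usage: the synaptic vectors drift toward the centroids of three synthetic clusters.
rng = np.random.default_rng(1)
centers = np.array([[0.0, 0.0], [5.0, 5.0], [0.0, 5.0]])
data = np.vstack([c + rng.normal(0.0, 0.5, size=(500, 2)) for c in centers])
rng.shuffle(data)
print(avq_ucl(data, num_classes=3))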
11
Chapter 6 Architecture and Equilibria
6.3 Synaptic Convergence to Centroids: AVQ Algorithms
Unsupervised Competitive Learning (UCL)
UCL defines a slowly decreasing sequence of learning
coefficients.
Supervised Competitive Learning (SCL)
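The UCL and SCL update equations were dropped from the transcript; a reconstruction in the discrete form used in the cited chapter (the coefficient conditions on c_t are the standard assumptions):

% UCL: only the winning vector m_j moves, with slowly decreasing coefficients c_t
\mathbf{m}_j(t+1) = \mathbf{m}_j(t) + c_t\,[\,\mathbf{x}(t) - \mathbf{m}_j(t)\,],
\qquad \mathbf{m}_i(t+1) = \mathbf{m}_i(t) \ \text{ for } i \neq j,
\qquad \sum_t c_t = \infty,\ \ \sum_t c_t^2 < \infty
% SCL: the reinforcement r_j(x) rewards correct wins and punishes incorrect ones
\mathbf{m}_j(t+1) = \mathbf{m}_j(t) + c_t\, r_j(\mathbf{x})\,[\,\mathbf{x}(t) - \mathbf{m}_j(t)\,],
\qquad r_j(\mathbf{x}) = \begin{cases} 1, & \mathbf{x} \in D_j \\ -1, & \mathbf{x} \notin D_j \end{cases}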
12
Chapter 6 Architecture and Equilibria
6.3 Synaptic Convergence to Centroids: AVQ Algorithms
Differential Competitive Learning (DCL)
The update is scaled by the time change of the jth
neuron's competitive signal. In practice we use only
the sign of that change (6-20).
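A reconstruction of the DCL update implied here, assuming the discrete form from the cited chapter, where Delta S_j denotes the change in the jth competitive signal:

\mathbf{m}_j(t+1) = \mathbf{m}_j(t) + c_t\,\Delta S_j\big(y_j(t)\big)\,[\,\mathbf{x}(t) - \mathbf{m}_j(t)\,]
% In practice only the sign of the signal change is used:
\Delta S_j\big(y_j(t)\big) \approx \operatorname{sgn}\!\big[\,S_j\big(y_j(t+1)\big) - S_j\big(y_j(t)\big)\,\big]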
Stochastic Equilibrium and Convergence
Competitive synaptic vectors converge to
decision-class centroids. They may converge to
local maxima.
13
Chapter 6 Architecture and Equilibria
6.3 Synaptic Convergence to Centroids: AVQ Algorithms
AVQ centroid theorem: if a competitive AVQ system
converges, it converges to the centroid of the
sampled decision class.
Proof. Suppose the jth neuron in F_Y wins the
activity competition. Suppose the jth synaptic
vector m_j codes for decision class D_j. Suppose the
synaptic vector has reached equilibrium.
14
Chapter 6 Architecture and Equilibria
6.3 Synaptic Convergence to Centroids: AVQ Algorithms
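This slide originally carried the body of the centroid proof as equations that did not survive the transcript; a reconstruction of the standard argument, assuming the linear competitive law (6-12) with zero-mean noise:

% At equilibrium the expected update is zero:
\mathbf{0} = E[\dot{\mathbf{m}}_j]
           = \int_{D_j} (\mathbf{x} - \mathbf{m}_j)\, p(\mathbf{x})\, d\mathbf{x}
% Solving for the equilibrium synaptic vector gives the class centroid:
\mathbf{m}_j = \frac{\int_{D_j} \mathbf{x}\, p(\mathbf{x})\, d\mathbf{x}}
                    {\int_{D_j} p(\mathbf{x})\, d\mathbf{x}} = \bar{\mathbf{x}}_j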
15
Chapter 6 Architecture and Equilibria
6.3 Synaptic Convergence to Centroids: AVQ Algorithms
  • Arguments
  • The spatial and temporal integrals are
    approximately equal.
  • The AVQ centroid theorem assumes that convergence
    occurs.
  • The AVQ convergence theorem ensures exponential
    convergence.

16
Chapter 6 Architecture and Equilibria
6.4 AVQ Convergence Theorem
AVQ Convergence Theorem: competitive synaptic
vectors converge exponentially quickly to
pattern-class centroids.
Proof. Consider the random quadratic form L.
The pattern vectors x do not change in time.
(The argument remains valid if the pattern vectors
change slowly relative to synaptic changes.)
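The quadratic form itself was dropped from the transcript; a reconstruction from the cited chapter, summing over the n input components and the p competing neurons:

L = \frac{1}{2} \sum_{i=1}^{n} \sum_{j=1}^{p} I_{D_j}(\mathbf{x})\,\big(x_i - m_{ij}\big)^{2}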
17
Chapter 6 Architecture and Equilibria
6.4 AVQ Convergence Theorem
The average E[L] serves as a Lyapunov function for
the stochastic competitive dynamical system. Assume
the noise process is zero-mean and independent of
the signal process.
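Stated explicitly, the two assumptions amount to the following (a sketch in the chapter's notation, with n_{ij} the synaptic noise term of (6-12)):

E[n_{ij}] = 0, \qquad
E\big[\, n_{ij}\,(x_i - m_{ij}) \,\big] = E[n_{ij}]\; E[x_i - m_{ij}] = 0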
18
Chapter 6 Architecture and Equilibria
6.4 AVQ Convergence Theorem
So, on average by the learning law (6-12), the
Lyapunov derivative is strictly negative if and only
if some synaptic vector moves along its trajectory.
So the competitive AVQ system is asymptotically
stable and in general converges exponentially
quickly to a local equilibrium.
Suppose the average derivative vanishes. Then every
synaptic vector has reached equilibrium and is
constant.
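The equations this slide refers to were dropped; the following sketch reconstructs the standard calculation, assuming the quadratic form L above, the averaged law (6-12), and the competitive-win approximation S_j(y_j) = I_{D_j}(x):

\dot{L} = \sum_{i}\sum_{j} \frac{\partial L}{\partial m_{ij}}\,\dot{m}_{ij}
        = -\sum_{i}\sum_{j} I_{D_j}(\mathbf{x})\,(x_i - m_{ij})\,\dot{m}_{ij}
% Averaging with zero-mean noise gives the nonpositive Lyapunov derivative:
E[\dot{L}] = -\sum_{j} \int_{D_j} \sum_{i} (x_i - m_{ij})^{2}\, p(\mathbf{x})\, d\mathbf{x} \;\le\; 0

with equality if and only if every synaptic vector has stopped moving.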
19
Chapter 6 Architecture and Equilibria
6.4 AVQ Convergence Theorem
Since p(x) is a nonnegative weight function, the
weighted integral of the learning differences must
itself equal zero at equilibrium. So the equilibrium
synaptic vectors equal the centroids. Q.E.D.
20
Chapter 6 Architecture and Equilibria
6.4 AVQ Convergence Theorem
  • Argument
  • Consider the total mean-squared error of vector
    quantization for the decision-class partition
    (see the sketch after this list).
  • The AVQ convergence theorem implies that the class
    centroids, and asymptotically the competitive
    synaptic vectors, minimize this total
    mean-squared error.

By the gradient relationship between the learning
law (6-12) and this error measure, the synaptic
vectors perform stochastic gradient descent on the
mean-squared-error surface in the pattern-plus-error
space. In this sense competitive learning reduces to
stochastic gradient descent.
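A sketch of the mean-squared-error quantity referred to above, in the chapter's notation (the number of decision classes k is an assumed symbol):

\mathrm{MSE} = \sum_{j=1}^{k} \int_{D_j} \big\| \mathbf{x} - \mathbf{m}_j \big\|^{2}\, p(\mathbf{x})\, d\mathbf{x}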
21
Chapter 6 Architecture and Equilibria
6.5 Global Stability of Feedback Neural Networks
  • Global stability is a jointly neuronal-synaptic
    steady state.
  • Global stability theorems are powerful but
    limited.
  • Their power:
  • their dimension independence
  • their nonlinear generality
  • their exponentially fast convergence to fixed
    points.
  • Their limitation:
  • they do not tell us where the equilibria occur in
    the state space.

22
Chapter 6 Architecture and Equilibria
6.5 Global Stability of Feedback Neural Networks
Stability-Convergence Dilemma
The stability-convergence dilemma arises from the
asymmetry in neuronal and synaptic fluctuation
rates. Neurons change faster than synapses change:
neurons fluctuate at the millisecond level, while
synapses fluctuate at the second or even minute
level. The fast-changing neurons must balance the
slow-changing synapses.
23
Chapter 6 Architecture and Equilibria
6.5 Global Stability of Feedback Neural Networks
Stability-Convergence Dilemma
1. Asymmetry: neurons in F_X and F_Y fluctuate faster
   than the synapses in M.
2. Stability (pattern formation).
3. Learning.
4. Undoing.
The ABAM theorem offers a general solution to the
stability-convergence dilemma.
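The symbols behind items 2 through 4 did not survive the transcript; a reconstruction of how the dilemma is usually written, where F_X and F_Y are the neuronal fields and M the synaptic matrix (a paraphrase, not the slide's exact equations):

% 2. Stability (pattern formation):
\dot{F}_X = \mathbf{0} \quad \text{and} \quad \dot{F}_Y = \mathbf{0}
% 3. Learning: only changing neuronal signals drive synaptic change,
\dot{F}_X \neq \mathbf{0} \ \text{ or } \ \dot{F}_Y \neq \mathbf{0} \;\Longrightarrow\; \dot{\mathbf{M}} \neq \mathbf{0}
% 4. Undoing: but synaptic change in turn perturbs the neurons,
\dot{\mathbf{M}} \neq \mathbf{0} \;\Longrightarrow\; \dot{F}_X \neq \mathbf{0} \ \text{ or } \ \dot{F}_Y \neq \mathbf{0}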
24
Chapter 6 Architecture and Equilibria
6.6 The ABAM Theorem
The ABAM Theorem (proved with a bounded Lyapunov
function).
The Hebbian ABAM and competitive ABAM models are
globally stable.
Hebbian ABAM model: equations (6-33) through (6-35).
Competitive ABAM model: replace (6-35) with (6-36).
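The model equations (6-33) through (6-36) did not survive the transcript; a reconstruction of the signal-Hebbian ABAM from the cited Kosko chapter, with a_i, a_j the amplification functions, b_i, b_j the feedback terms, and S_i, S_j the signal functions (a sketch, not the slide's verbatim equations):

% (6-33)-(6-34) neuronal dynamics
\dot{x}_i = -a_i(x_i)\Big[\, b_i(x_i) - \sum_{j} S_j(y_j)\, m_{ij} \,\Big]
\dot{y}_j = -a_j(y_j)\Big[\, b_j(y_j) - \sum_{i} S_i(x_i)\, m_{ij} \,\Big]
% (6-35) signal-Hebbian synaptic law
\dot{m}_{ij} = -m_{ij} + S_i(x_i)\, S_j(y_j)
% (6-36) competitive synaptic law (replaces 6-35 in the competitive ABAM)
\dot{m}_{ij} = S_j(y_j)\,\big[\, S_i(x_i) - m_{ij} \,\big]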
25
Chapter 6 Architecture and Equilibria
6.6 The ABAM Theorem
If the positivity assumptions hold, then the models
are asymptotically stable, and the squared
activation and synaptic velocities decrease
exponentially quickly to their equilibrium values.
Proof. The proof uses the bounded Lyapunov
function L.
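The positivity assumptions and the Lyapunov function (6-37) were dropped from the transcript; a reconstruction from the cited chapter (a sketch; the integration variables are assumed symbols):

% Positivity assumptions
a_i(x_i) > 0, \quad a_j(y_j) > 0, \quad S_i'(x_i) > 0, \quad S_j'(y_j) > 0
% (6-37) bounded Lyapunov function of the ABAM system
L = -\sum_{i}\sum_{j} S_i(x_i)\, S_j(y_j)\, m_{ij}
    + \sum_{i} \int_{0}^{x_i} S_i'(u)\, b_i(u)\, du
    + \sum_{j} \int_{0}^{y_j} S_j'(v)\, b_j(v)\, dv
    + \frac{1}{2}\sum_{i}\sum_{j} m_{ij}^{2}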
26
Chapter 6 Architecture and Equilibria
6.6 The ABAM Theorem
Differentiate the Lyapunov function (6-37) along
trajectories.
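The calculation on this slide was lost; a sketch of the standard result of differentiating L and substituting the Hebbian ABAM equations (a reconstruction, with the chapter's usual grouping of terms):

\dot{L} = -\sum_{i} \frac{S_i'(x_i)}{a_i(x_i)}\,\dot{x}_i^{\,2}
          \;-\; \sum_{j} \frac{S_j'(y_j)}{a_j(y_j)}\,\dot{y}_j^{\,2}
          \;-\; \sum_{i}\sum_{j} \dot{m}_{ij}^{\,2} \;\le\; 0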
27
Chapter 6 Architecture and Equilibria
6.6 The ABAM Theorem
To prove global stability for the competitive
learning law (6-36), we substitute it into the same
Lyapunov derivative.
We then prove the stronger asymptotic stability of
the ABAM models under the positivity assumptions.
28
Chapter 6 Architecture and Equilibria
6.6 The ABAM Theorem
Along trajectories, L strictly decreases for any
nonzero change in any neuronal activation or any
synapse, so trajectories end in equilibrium points.
Indeed (6-43) implies that the squared velocities
decrease exponentially quickly, because of the
strict negativity of (6-43) and, to rule out
pathologies, the second-order assumption of a
nondegenerate Hessian matrix. Q.E.D.
29
Chapter 6 Architecture and Equilibria
6.7 Structural Stability of Unsupervised Learning and RABAM
  • Is unsupervised learning structurally stable?
  • Structural stability is insensitivity to small
    perturbations.
  • Structural stability ignores many small
    perturbations.
  • Such perturbations preserve qualitative
    properties.
  • Basins of attraction maintain their basic shape.

30
Chapter 6 Architecture and Equilibria
6.7 Structural Stability of Unsupervised Learning and RABAM
Random Adaptive Bidirectional Associative Memories (RABAM)
Brownian diffusions perturb the RABAM model.
The differential equations in (6-33) through (6-35)
now become stochastic differential equations, with
random processes as solutions.
The diffusion signal-Hebbian RABAM model:
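The diffusion equations (6-46) through (6-48) were lost in the transcript; a reconstruction of the diffusion signal-Hebbian RABAM from the cited chapter, where the B terms denote independent Brownian-motion (diffusion) processes (a sketch, not verbatim):

dx_i = -a_i(x_i)\Big[\, b_i(x_i) - \sum_{j} S_j(y_j)\, m_{ij} \,\Big]\, dt + dB_i
dy_j = -a_j(y_j)\Big[\, b_j(y_j) - \sum_{i} S_i(x_i)\, m_{ij} \,\Big]\, dt + dB_j
dm_{ij} = \big[\, -m_{ij} + S_i(x_i)\, S_j(y_j) \,\big]\, dt + dB_{ij}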
31
Chapter 6 Architecture and Equilibria
6.7 Structural Stability of Unsupervised Learning and RABAM
With the stochastic competitive law (sketched below):
  • the same result holds if the signal function is
    sufficiently steep.
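A reconstruction of the competitive synaptic diffusion implied here (an assumption following the pattern of (6-36) and (6-48)):

dm_{ij} = S_j(y_j)\,\big[\, S_i(x_i) - m_{ij} \,\big]\, dt + dB_{ij}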

32
Chapter 6 Architecture and Equilibria
6.7 Structural Stability of Unsupervised Learning and RABAM
With additive noise (independent zero-mean Gaussian
white-noise processes), we obtain the signal-Hebbian
noise RABAM model:
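The noise RABAM equations (6-50) through (6-54) were dropped; a reconstruction from the cited chapter, with n_i, n_j, n_{ij} independent zero-mean Gaussian white-noise processes of finite variance (a sketch):

\dot{x}_i = -a_i(x_i)\Big[\, b_i(x_i) - \sum_{j} S_j(y_j)\, m_{ij} \,\Big] + n_i
\dot{y}_j = -a_j(y_j)\Big[\, b_j(y_j) - \sum_{i} S_i(x_i)\, m_{ij} \,\Big] + n_j
\dot{m}_{ij} = -m_{ij} + S_i(x_i)\, S_j(y_j) + n_{ij}
E[n_i] = E[n_j] = E[n_{ij}] = 0, \qquad
\mathrm{Var}(n_i),\ \mathrm{Var}(n_j),\ \mathrm{Var}(n_{ij}) < \infty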
33
Chapter 6 Architecture and Equilibria
6.7 Structural Stability of Unsupervised Learning and RABAM
RABAM Theorem. The RABAM model (6-46)-(6-48), or
(6-50)-(6-54), is globally stable. If the signal
functions are strictly increasing and the
amplification functions a_i and a_j are strictly
positive, the RABAM model is asymptotically stable.
Proof. The ABAM Lyapunov function L in (6-37) now
defines a random process. At each time t, L(t) is
a random variable. The expected ABAM Lyapunov
function E[L] is a Lyapunov function for the
RABAM system.
34
Chapter 6 Architecture and Equilibria
6.7 Structural Stability of Unsupervised Learning and RABAM
35
Chapter 6 Architecture and Equilibria
6.7 Structural Stability of Unsupervised Learning and RABAM
[Reference]
1. Bart Kosko, Neural Networks and Fuzzy Systems,
Chapter 6, pp. 221-261, University of Southern
California.