Title: Hebbian Learning in Multilayer Neural Networks
1. Hebbian Learning in Multilayer Neural Networks
- Rick Strom
- Advisor: Dr. Abbott
2. Goals
- Build a feedforward and recurrent NN class using Hebbian learning rules
- Discover means to regulate Hebbian learning in ANNs to give the highest probability of finding a meaningful weighting
- Devise a network visualizer to assist in finding those rules
- Discover the class of problems for which Hebbian-trained networks are successful
3. Preliminaries
- Hebb's Rule (a minimal sketch follows this list)
  - "Neurons that fire together, wire together"
  - A local rule: each weight update depends only on the activity of the two neurons that weight connects
- All weight-adjustment rules are Hebbian in nature
  - Only in the sense that they involve incremental adjustment of weights
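For reference, a minimal NumPy sketch of the plain Hebbian update for one layer. The slides do not specify an implementation; the weight matrix W, presynaptic vector x, postsynaptic vector y, and learning rate alpha are illustrative names, and the outer-product form is an assumption.

```python
import numpy as np

def hebbian_update(W, x, y, alpha=0.01):
    """Plain Hebb's rule in outer-product form: each weight w_ij
    grows in proportion to the co-activation of presynaptic x_j
    and postsynaptic y_i. Local: the update for w_ij touches only
    its own two endpoint activations."""
    return W + alpha * np.outer(y, x)
```

Left unchecked, this update only ever grows co-active weights, which is why the next slide is about bounding and normalizing.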
4. Overcoming Problems with Hebb's Rule
- Bounding weights
- Normalizing weights
- Oja's Rule (sketched in code below)
  - ΔW = αY(X − YW)
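A minimal sketch of Oja's rule for a single linear neuron, following the ΔW = αY(X − YW) form above (variable names are illustrative):

```python
import numpy as np

def oja_update(w, x, alpha=0.01):
    """Oja's rule: the Hebbian term alpha*y*x plus the decay
    term -alpha*y^2*w, which keeps ||w|| bounded (near 1 at
    convergence) without explicit clipping or renormalization."""
    y = w @ x                      # linear neuron output
    return w + alpha * y * (x - y * w)
```

The forgetting term −αY²W is what makes the rule self-normalizing, addressing the unbounded growth of plain Hebb without giving up locality.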
5-6. Network Visualization (figure slides)
7. Example: XOR
All topologies trained in < 1000 rounds (a structural sketch follows the list):
- 2-4-8-4-2-1
- 2-90-1
- 2-5-1
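The slides give only topologies and round counts, not the training loop. Below is a structural sketch of how such a layered net could be driven by layer-local, Oja-style updates; the tanh activation, the learning rate, and the update form are all assumptions, and the hardness and noise regulation discussed later would sit on top of this.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_net(sizes):
    """Random weights for a layered net, e.g. sizes=(2, 5, 1)."""
    return [rng.uniform(-1.0, 1.0, (n_out, n_in))
            for n_in, n_out in zip(sizes, sizes[1:])]

def forward(weights, x):
    """Forward pass, recording every layer's activation."""
    acts = [x]
    for W in weights:
        acts.append(np.tanh(W @ acts[-1]))
    return acts

def hebbian_round(weights, x, alpha=0.05):
    """One layer-local, Oja-style update per weight matrix."""
    acts = forward(weights, x)
    for W, pre, post in zip(weights, acts, acts[1:]):
        W += alpha * (np.outer(post, pre)
                      - (post ** 2)[:, None] * W)  # Oja decay term
    return acts[-1]

net = make_net((2, 5, 1))            # the 2-5-1 topology above
for _ in range(1000):                # "< 1000 rounds"
    x = rng.integers(0, 2, 2).astype(float)
    hebbian_round(net, x)
```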
8. Example: XOR
- Combined network trained in < 400 rounds
- A 3-10-4 network is shown here responding to (0,1,0) as input. Input 3 and outputs 2-4 are irrelevant to this problem; the algorithm either isolates inputs 1-2 or minimizes the effect of input 3 on output 1.
9. Hebb's Rule Variants
10. h: Network Hardness
- The network requires less learning as it (one hypothetical form of h is sketched after this list):
  - Gets older
  - Performs better
  - Sees repetitive data
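The slide names the factors but not the formula. One hypothetical way a hardness h in [0, 1) could combine them into a decaying effective learning rate; every name and constant here is an assumption, not taken from the talk.

```python
def hardness(age, accuracy, repetition, k=0.001):
    """Hypothetical hardness h in [0, 1): grows as the network
    gets older (age, in rounds), performs better (accuracy in
    [0, 1]), and sees repetitive data (repetition in [0, 1])."""
    return 1.0 - 1.0 / (1.0 + k * age * (1.0 + accuracy + repetition))

def regulated_alpha(alpha, age, accuracy, repetition):
    """Effective learning rate shrinks toward 0 as h -> 1."""
    return alpha * (1.0 - hardness(age, accuracy, repetition))
```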
11. Curved Regulated Learning
12. Noisy Weighting at Zero
13. Noisy Weighting at Extrema
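Slides 12-13 are figures; below is a sketch of one plausible mechanism behind the two titles, assuming weights bounded in [-1, 1]. The thresholds and noise scale are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def jolt_stuck_weights(W, bound=1.0, eps=0.05, sigma=0.02):
    """Add Gaussian noise only where a weight is pinned near
    zero ('noisy weighting at zero') or near +/-bound ('noisy
    weighting at extrema'), then re-clip to the bounds."""
    stuck = (np.abs(W) < eps) | (np.abs(W) > bound - eps)
    return np.clip(W + stuck * rng.normal(0.0, sigma, W.shape),
                   -bound, bound)
```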
14. Feedforward Results: XOR
- No effect (random): 0% success rate
- Hardness regulated: 71% success rate
- Hardness regulated with noise at extrema: 89% success rate
  - Remaining failures a result of random weighting
15. Feedforward Results: XOR
- Accuracy graphed against time:
  - Without noise
  - With noise at extrema
16-20. Unsupervised Results: Pattern Recognition (figure slides)
21. Numenta HTM
22. Future Work