Title: A Theory of Cerebral Cortex
1. A Theory of Cerebral Cortex
- (or, How Your Brain Works)
- Andrew Smith (CSE)
2. Outline
- Questions
- Preliminaries
- Feature Attractor Networks
- Antecedent Support Networks
- Attractive properties of the theory / Conclusions
3. Questions (to be answered!)
- What is cortical knowledge and how is it stored?
- How is it used to carry out thinking?
- How is it integrated with sensory input and
motor output?
4. Preliminaries
- Thinking is a symbolic process.
- Thinking relies only on classical mechanics.
(Unlike the Penrose/Hameroff model.)
- Thinking is not a mathematically grounded reasoning process; rather, it is confabulation!
5. Feature Attractor Neuronal Networks
Each Feature Attractor Network Implements One Column of Tokens
[Figure: the cerebral cortex comprises about 120,000 cortical regions (human cortical surface area ≈ 240,000 mm²); each feature attractor network pairs one cortical region with a paired thalamic region via bidirectional connections.]
An object (sensory, abstract, etc.) or action (movement process, thought process, etc.) is represented by a collection of feature attractor networks, each expressing a single token (node) from its lexicon.
Each region encompasses a cortical surface area of roughly 2 mm² and possesses a total of about 200,000 neurons.
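To make the token representation above concrete, here is a minimal sketch in Python; the region names and token ids are hypothetical, chosen only for illustration:

    # An object is represented by a collection of regions, each expressing
    # one token from its fixed lexicon. Names and ids here are hypothetical.
    apple = {
        "visual_color_region": 17,    # this region's token for "red"
        "visual_shape_region": 402,   # "round"
        "olfactory_region": 88,       # "sweet smell"
    }
    # Thought processes operate on such collections of expressed tokens.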
6. Feature Attractor Networks
- Each network has a lexicon of random (!) tokens, sparsely encoded: each token has hundreds of neurons active at a time, out of about 50,000. This lexicon is fixed very early in life and never changes.
- The function of the network is to change the pattern of activation within a particular region so that it expresses the token in its lexicon closest to the original pattern of activation. (They act as vector quantizers; a minimal sketch follows this list.)
- The Feature Attractor Networks are extremely robust to noise and partial tokens.
  - A region can start out with 10% of a particular token and, within one iteration, express the complete token.
  - A region can start out expressing many (hundreds of) partial tokens and, within one iteration, express just the one token that was most complete. (More on this later.)
- Now we have 120,000 powerful pattern recognizers; let's wire them up.
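As a concrete illustration, here is a minimal, scaled-down sketch in Python/NumPy of a feature attractor acting as a vector quantizer; all sizes are illustrative, much smaller than the talk's numbers:

    import numpy as np

    rng = np.random.default_rng(0)
    # Scaled-down illustrative sizes; the talk cites ~50,000 neurons per
    # region with a few hundred active per token.
    N_NEURONS, N_TOKENS, ACTIVE = 5_000, 400, 50

    # Fixed lexicon: each token is a random, sparse set of active neurons.
    lexicon = np.zeros((N_TOKENS, N_NEURONS), dtype=np.int32)
    for t in range(N_TOKENS):
        lexicon[t, rng.choice(N_NEURONS, ACTIVE, replace=False)] = 1

    def attract(pattern):
        """One 'iteration': snap an arbitrary activation pattern to the
        lexicon token sharing the most active neurons (vector quantization)."""
        overlaps = lexicon @ pattern      # shared active neurons per token
        return lexicon[np.argmax(overlaps)]

    # A region starting with only ~10% of token 42's neurons active ...
    partial = lexicon[42] * (rng.random(N_NEURONS) < 0.10)
    # ... expresses the complete token after one iteration:
    print((attract(partial) == lexicon[42]).all())   # expected: True

The same argmax-over-overlaps step also explains the second robustness property: given many partial tokens at once, the most complete one wins.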
7. Antecedent Support Networks (ASNs)
- The role of the ASN is to do the thinking. (A toy sketch follows this list.)
  - If several active tokens have strong links to an inactive token, the ASN will activate that token (e.g., smoke + heat -> fire).
  - Learning occurs when the ASN increases the link weight between two tokens.
- Short-term memory: which tokens are currently active.
- Long-term memory: the link strengths between tokens.
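A toy sketch of one antecedent-support step in Python; the global token ids, link matrix, weights, and threshold are assumptions for illustration, not the talk's specification:

    import numpy as np

    # W[i, j] is the link strength from antecedent token i to token j.
    N_TOKENS = 100
    W = np.zeros((N_TOKENS, N_TOKENS))

    SMOKE, HEAT, FIRE = 0, 1, 2   # hypothetical token ids

    def learn(antecedents, consequent, dw=1.0):
        """Learning: increase link weights from co-active antecedents."""
        for a in antecedents:
            W[a, consequent] += dw

    def think(active, threshold=1.5):
        """One thought step: activate any token whose summed antecedent
        support from the currently active set crosses the threshold."""
        support = W[sorted(active)].sum(axis=0)
        evoked = {j for j in range(N_TOKENS) if support[j] >= threshold}
        return active | evoked

    # Experience repeatedly pairs smoke and heat with fire ...
    learn([SMOKE, HEAT], FIRE)
    learn([SMOKE, HEAT], FIRE)

    # Short-term memory is the active set; long-term memory is W.
    print(FIRE in think({SMOKE, HEAT}))   # expected: True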
8. Antecedent Support Neuronal Network Implementation
Randomness to the rescue!
Axons from the neurons of token i send their collaterals randomly to millions of neurons in the local area. Of these, a few thousand transponder neurons just happen to receive sufficient input from i to become active. Of those, a few hundred just happen to send axons to neurons belonging to token j in the target region, activating (part of) token j. (A scaled-down sketch follows below.)
The wiring of transponder neurons (pyramidal
neurons) is also fixed at a very early age.
[Figure: a source region of cerebral cortex expressing token i projects through transponder neurons to a target region expressing token j; the transponder-to-target synapses are the ones that are strengthened.]
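A scaled-down sketch of the transponder relay in Python/NumPy; the sizes, connection sparsities, and firing thresholds are all illustrative assumptions:

    import numpy as np

    rng = np.random.default_rng(1)
    N_SRC, N_TRANS, N_TGT = 500, 2_000, 500   # illustrative sizes

    # Random wiring, fixed very early in life:
    C1 = (rng.random((N_SRC, N_TRANS)) < 0.02).astype(np.int32)  # collaterals
    C2 = (rng.random((N_TRANS, N_TGT)) < 0.02).astype(np.int32)  # to target
    W2 = np.zeros((N_TRANS, N_TGT))   # learned strengths on the C2 synapses

    token_i = (rng.random(N_SRC) < 0.1).astype(np.int32)  # source token
    token_j = (rng.random(N_TGT) < 0.1).astype(np.int32)  # target token

    # Transponders that "just happen" to collect enough input from token i:
    trans = ((token_i @ C1) >= 2).astype(np.int32)

    # Learning strengthens synapses from active transponders onto token j:
    W2 += np.outer(trans, token_j)

    # Later, expressing token i drives (part of) token j in the target
    # region, which that region's feature attractor can then complete:
    drive = trans @ (C2 * W2)
    print(drive[token_j == 1].sum() > drive[token_j == 0].sum())  # expected: True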
9. Input / Output
- Input: sensory neurons connect to token neurons (layers III and IV), just like transponder neurons.
- Output: motor neurons can receive their inputs from the token neurons, just like transponder neurons.
10. Attractive features (no pun intended)
- The Hecht-Nielsen model shows
  - how neurons can grow randomly and become organized.
  - that a large range of synaptic weights is not necessary.
  - how you can get a song stuck in your head. (You're unable to reset regions of your cortex; one bar evokes the next.)
  - a model that can be viewed as implementing Paul Churchland's semantic maps from the last lecture of CogSci 200. (IMHO)
- A simulation of this has solved the classic cocktail-party problem.
11. Conclusions
- All knowledge comes from creating associations between experiences. (Aristotle)
- Within 12 to 36 months, this theory will revolutionize Artificial Intelligence. (Hecht-Nielsen, as of last week)