Title: Bioinspired Computing Lecture 14
1 Bioinspired Computing: Lecture 14
Alternative Neural Networks
Netta Cohen
2 Last time / Today

Last time: Attractor neural nets
- Biologically inspired associative memories
- Moving away from biologically realistic models
- Unsupervised learning
- Working examples and applications
- Pros, cons and open questions

Today: Other neural nets
- SOM (competitive) nets
- Neuroscience applications
- GasNets
- Robotic control
3 Spatial Codes

Natural neural nets often code similar things close together; the auditory and visual cortex provide examples.

Another example: touch receptors in the human body. "Almost every region of the body is represented by a corresponding region in both the primary motor cortex and the somatic sensory cortex" (Geschwind 1979, p. 106). "The finger tips of humans have the highest density of receptors: about 2500 per square cm!" (Kandel and Jessell 1991, p. 374). This representation is often dubbed the homunculus (the "little man" in the brain).

Picture from http://www.dubinweb.com/brain/3.html
4 Kohonen Nets

In a Kohonen net, a number of input neurons feed a single lattice of neurons. The output pattern is produced across the lattice surface.

Large volumes of data are compressed using spatial/topological relationships within the training set. Thus the lattice becomes an efficient distributed representation of the input.
5 Kohonen Nets

Also known as self-organising maps (SOMs).

Important features:
- Self-organisation of a distributed representation of inputs. This is a form of unsupervised learning.
- The underlying learning principle is competition among nodes, known as winner-takes-all: only winners get to learn; losers decay. The competition is enforced by the network architecture: each node has a self-excitatory connection and inhibits all its neighbours.
- Spatial patterns are formed by imposing the learning rule throughout the local neighbourhood of the winner.
6 Training Self-Organising Maps

A simple training algorithm might look like this (a minimal code sketch follows below):
- Randomly initialise the network input weights
- Normalise all inputs so they are size-independent
- Define a local neighbourhood and a learning rate
- For each item in the training set:
  - Find the lattice node most excited by the input
  - Alter the input weights for this node and those nearby so that they more closely resemble the input vector; i.e., at each node the weight update rule is Δw = r(x − w), where r is the learning rate, x the input vector and w the node's weight vector
- Reduce the learning rate and the neighbourhood size
- Return to the training-set loop (another pass through the training set)
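A minimal sketch of this training loop in Python/NumPy. The lattice size, learning-rate schedule and the Gaussian neighbourhood function are illustrative choices, not values from the lecture.

import numpy as np

def train_som(data, rows=10, cols=10, epochs=20, rate=0.5, radius=3.0):
    n_features = data.shape[1]
    # Randomly initialise the input weights of each lattice node
    w = np.random.rand(rows, cols, n_features)
    # Normalise all inputs so they are size-independent
    data = data / np.linalg.norm(data, axis=1, keepdims=True)
    for _ in range(epochs):
        for x in data:
            # Find the lattice node most excited by the input
            # (closest weight vector = best-matching unit)
            d = np.linalg.norm(w - x, axis=2)
            bmu = np.unravel_index(np.argmin(d), d.shape)
            # Move the winner and its lattice neighbours towards the input:
            # delta_w = rate * (x - w), scaled by a Gaussian of the
            # node's distance from the winner in the lattice
            rr, cc = np.meshgrid(np.arange(rows), np.arange(cols), indexing='ij')
            dist = np.sqrt((rr - bmu[0])**2 + (cc - bmu[1])**2)
            influence = np.exp(-dist**2 / (2 * radius**2))
            w += rate * influence[..., None] * (x - w)
        # Reduce the learning rate and the neighbourhood size
        rate *= 0.9
        radius *= 0.9
    return w

For example, train_som(np.random.rand(200, 3)) organises a 10x10 lattice over 3-dimensional inputs.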
7 Training Self-Organising Maps (cont.)
Gradually the net self-organises into a map of
the inputs, clustering the input data by
recruiting areas of the net for related inputs or
features in the inputs. The size of the
neighbourhood roughly corresponds to the
resolution of the mapped features.
8 How Does It Work?

Imagine a 2D training set with clusters of data points. The nodes in the lattice are initially randomly sensitive. Gradually, they will migrate towards the input data. Nodes that are neighbours in the lattice will tend to become sensitive to similar inputs.

Effective resource allocation: dense parts of the input space recruit more nodes than sparse areas.

Another example: the travelling salesman problem.
Applet from http://www.patol.com/java/TSP/index.html
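The idea behind such TSP demos can be sketched as an "elastic ring" SOM: a 1-D ring of nodes is repeatedly pulled towards randomly chosen cities, and visiting the cities in ring order yields an approximate tour. All parameter values below are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(1)
cities = rng.random((20, 2))            # 20 cities in the unit square
n_nodes = 60
ring = rng.random((n_nodes, 2))         # 1-D ring of SOM nodes
rate, radius = 0.8, 10.0
for _ in range(200):
    for c in cities[rng.permutation(len(cities))]:
        # Winner: the ring node closest to this city
        winner = np.argmin(np.linalg.norm(ring - c, axis=1))
        idx = np.arange(n_nodes)
        # Neighbourhood distance measured along the ring (wrap-around)
        d = np.minimum(np.abs(idx - winner), n_nodes - np.abs(idx - winner))
        influence = np.exp(-d**2 / (2 * radius**2))
        ring += rate * influence[:, None] * (c - ring)
    rate *= 0.99                        # shrink rate and neighbourhood
    radius *= 0.98
# Visiting cities in order of their nearest ring node approximates a tour
tour = np.argsort([np.argmin(np.linalg.norm(ring - c, axis=1)) for c in cities])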
9 How does the brain perform classification?

- One area of the cortex (the inferior temporal cortex, or IT) has been linked with two important functions:
  - object recognition
  - object classification
- These tasks seem to be shape/colour specific but independent of object size, position, relative motion or speed, brightness or texture.
- Indeed, category-specific impairments have been linked to IT injuries.
10 How does the brain perform classification? (cont.)

Questions:
- How do IT neurons encode objects/categories? e.g.,
  - local versus distributed representations/coding
  - temporal versus rate coding at the neuronal level
- Can we recruit ANNs to answer such questions?
- Can ANNs perform classification as well, given similar data?

Recently, Elizabeth Thomas and colleagues recorded the activity of IT neurons while monkeys performed an image-classification task, and used a Kohonen net to analyse the data.
11 The experiment

Monkeys were trained to distinguish between a training set of pictures of trees and various other objects. The monkeys were considered trained when they reached a 95% success rate. Trained monkeys were then shown new images of trees and other objects. As they classified the objects, the activity of IT neurons in their brains was recorded. All in all, 226 neurons were recorded on various occasions and over many different images. The data collected were the mean firing rate of each neuron in response to each image. 25% of neurons responded only to one category, but 75% were not category-specific. All neurons were image-specific.

Problem: Not all neurons were recorded for all images, and no images were tested across all neurons. In fact, when a table of neuronal responses for each image was created, it was more than 80% empty.

E. Thomas et al., J. Cog. Neurosci. (2001)
12 Experimental Results

Question: Given the partial data, is there sufficient information to classify images as trees or non-trees?
Answer: A 2-node Kohonen net trained on the table of neuronal responses was able to classify new images with an 84% success rate.

Question: Are categories encoded by category-specific neurons?
Answer: Delete the category-specific neurons' responses from the table. The success rate of the Kohonen net was degraded, but only minimally. A control set with random data deletions yielded similar results.

Conclusion: Category-specific neurons are not important for categorisation!

E. Thomas et al., J. Cog. Neurosci. (2001)
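A toy version of this analysis, for illustration only: a two-node competitive net is trained on a sparse table of firing rates (randomly generated here, standing in for the recorded IT responses), and each image is then assigned to the node its response vector excites most. This sketches the general technique, not the authors' actual code or data.

import numpy as np

rng = np.random.default_rng(0)
n_images, n_neurons = 60, 226
rates = rng.random((n_images, n_neurons))            # mean firing rates
observed = rng.random((n_images, n_neurons)) < 0.2   # ~80% of the table empty
X = np.where(observed, rates, 0.0)                   # missing entries filled with 0

w = rng.random((2, n_neurons))                       # one weight vector per node
rate = 0.3
for _ in range(50):
    for x in X:
        # Winner-takes-all: only the closest node learns
        winner = np.argmin(np.linalg.norm(w - x, axis=1))
        w[winner] += rate * (x - w[winner])
    rate *= 0.95

# Each image is labelled by the node that wins on its response vector
labels = [int(np.argmin(np.linalg.norm(w - x, axis=1))) for x in X]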
13 Experimental Results (cont.)

Question: Which neurons are important, if any?
Answer: An examination of the weights that contribute most to the output in the Kohonen net revealed that a small subset of neurons (<50) that are not category-specific, yet respond with different intensities to different categories, are crucial for correct classification.

Conclusions: The IT employs a distributed representation to encode categories of different images. The redundancy in this encoding allows for graceful degradation, so that even with 80% of the data missing and many neurons deleted, sufficient information is present for classification purposes. The fact that only rate information was used suggests that temporal information is less important here.

E. Thomas et al., J. Cog. Neurosci. (2001)
14 Attractor networks: Two examples

- Jets and Sharks network
  - Weights set by hand
  - Demonstrates recall
  - Generalisation
  - Prototypes
  - Graceful degradation
  - Robustness
- Hopfield networks
  - Training algorithm: Hebbian learning (a minimal sketch follows below)
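A minimal sketch of Hebbian training for a Hopfield net: the weight matrix is the sum of outer products of the stored ±1 patterns, with self-connections removed; recall iterates a threshold update until the state settles into an attractor. The 1/n scaling and synchronous updates are conventional choices, not details from the slides.

import numpy as np

def hopfield_train(patterns):
    # patterns: array of shape (n_patterns, n), entries +1/-1
    n = patterns.shape[1]
    w = sum(np.outer(p, p) for p in patterns) / n   # Hebbian outer products
    np.fill_diagonal(w, 0.0)                        # no self-connections
    return w

def recall(w, x, steps=20):
    # Synchronous threshold updates; x is a (possibly corrupted) +/-1 vector
    for _ in range(steps):
        x = np.where(w @ x >= 0, 1, -1)
    return x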
16 [Figure: network diagram with excitatory (+1) and inhibitory (−1) connections]
17 Dynamics

- o: the output of a node
  - if act > 0: o = act
  - if act < 0: o = 0
- act (a_u): the activation of a node
  - if i > 0: Δa_u = (max − a_u)·i − decay·(a_u − rest)
  - if i < 0: Δa_u = (a_u − min)·i − decay·(a_u − rest)
- i: the input to a node
  - i_u = 0.1 · Σ_i w_ui · o_i + 0.4 · ext_u
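A sketch of one update step of these dynamics in Python (the interactive-activation rule driving the Jets and Sharks net). The max, min and rest values are taken from the next slide; the decay constant is an assumed value.

import numpy as np

MAX, MIN, REST, DECAY = 1.0, -0.2, -0.1, 0.1   # DECAY is an assumption

def step(a, w, ext):
    # Output: o = act if act > 0, else 0
    o = np.maximum(a, 0.0)
    # Input to each node: i_u = 0.1 * sum_i w_ui * o_i + 0.4 * ext_u
    i = 0.1 * (w @ o) + 0.4 * ext
    # Activation change depends on the sign of the input
    da = np.where(i > 0, (MAX - a) * i, (a - MIN) * i) - DECAY * (a - REST)
    return np.clip(a + da, MIN, MAX)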
18 Jets and Sharks

- Units
- Weights (excitatory: +1, inhibitory: −1)
- Activation range: [−0.2, 1.0]
- Resting activation: −0.1
- Dynamics
20 [Figure: network diagram with excitatory (+1) and inhibitory (−1) connections]
21 Activate Art
22 Jets and Sharks

Name   Gang    Age  Education  Marital  Occupation
Art    Jets    40s  J.H.       Sing.    Pusher
Al     Jets    30s  J.H.       Mar.     Burglar
Clyde  Jets    40s  J.H.       Sing.    Bookie
Mike   Jets    30s  J.H.       Sing.    Bookie
Phil   Sharks  30s  Col.       Mar.     Pusher
Don    Sharks  30s  Col.       Mar.     Burglar
Dave   Sharks  30s  H.S.       Div.     Pusher
23 Properties

- Retrieving a name from other properties
  - Content-addressable memory (see the sketch after this list)
- Categorisation and prototype formation
  - Activating "Sharks" will activate the person units of Shark members
  - Phil is the quintessential Shark:
    - 30s
    - Pusher (wins out in the end!)
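Content-addressable retrieval can be demonstrated with the step function sketched under "Dynamics": clamp external input on some property units, iterate until the network settles, and read off the most active person unit. The tiny five-unit network below is hypothetical, standing in for the full hand-wired Jets and Sharks net.

import numpy as np

MAX, MIN, REST, DECAY = 1.0, -0.2, -0.1, 0.1   # DECAY is an assumption

def step(a, w, ext):
    o = np.maximum(a, 0.0)
    i = 0.1 * (w @ o) + 0.4 * ext
    da = np.where(i > 0, (MAX - a) * i, (a - MIN) * i) - DECAY * (a - REST)
    return np.clip(a + da, MIN, MAX)

# Hypothetical toy network: units 0-1 are person units, 2-4 property units
labels = ["Art", "Al", "Jets", "40s", "Sing."]
w = np.zeros((5, 5))
for person, props in [(0, [2, 3, 4]),      # Art: Jets, 40s, Sing.
                      (1, [2])]:           # Al: Jets
    for p in props:
        w[person, p] = w[p, person] = 1.0  # excitatory weights +1
w[0, 1] = w[1, 0] = -1.0                   # rival persons inhibit each other

a = np.full(5, REST)                       # start at resting activation
ext = np.array([0.0, 0.0, 0.0, 1.0, 1.0])  # clamp "40s" and "Sing."
for _ in range(100):
    a = step(a, w, ext)
print(labels[int(np.argmax(a[:2]))])       # recalls "Art"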
24 Activate Shark
25 Properties

- Can activate "20s" and "Pusher" and find the persons who match best
- Robust
  - Graceful degradation
  - Noise
- Weights set by hand
26 From Biology to ANNs and Back

- Neuroscience and studies of animal behaviour have led to new ideas for artificial learning, communication, cooperation and competition. Simplistic cartoon models of these mechanisms can lead to new paradigms and impressive technologies.
- Dynamic neural nets are helping us understand real-time adaptation and problem-solving under changing conditions.
- Hopfield nets shed new insight on mechanisms of association and the benefits of unsupervised learning.
- Thomas' work helps unravel coding structures in the cortex.
27 Next time

Reading:
- Elizabeth Thomas et al. (2001). Encoding of categories by noncategory-specific neurons in the inferior temporal cortex. J. Cog. Neurosci. 13: 190-200.
- Phil Husbands, Tom Smith, Nick Jakobi & Michael O'Shea (1998). Better living through chemistry: Evolving GasNets for robot control. Connection Science, 10: 185-210.
- Ezequiel Di Paolo (2003). Organismically-inspired robotics: Homeostatic adaptation and natural teleology beyond the closed sensorimotor loop. In K. Murase & T. Asakura (Eds.), Dynamical Systems Approach to Embodiment and Sociality, Advanced Knowledge International, Adelaide, pp. 19-42.
- Ezequiel Di Paolo (2000). Homeostatic adaptation to inversion of the visual field and other sensorimotor disruptions. SAB2000, MIT Press.