Title: Biologically Inspired Intelligent Systems
1 Biologically Inspired Intelligent Systems
- Lecture 13 and 14
- Roger S. Gaborski
2 Thomas DeMarse: UF SCIENTIST'S "BRAIN IN A DISH" ACTS AS AUTOPILOT, LIVING COMPUTER
- Has grown a living "brain" that can fly a simulated plane
- A collection of 25,000 living neurons, or nerve cells, taken from a rat's brain and cultured inside a glass dish
- Unique real-time window into the brain at the cellular level
- http://www.napa.ufl.edu/2004news/braindish.htm
3 - How is information represented and processed in the brain?
- How do we go about building a model?
4 - It is reasonable to assume that the brain has evolved into a very efficient information processing system: whatever method the brain uses to represent and process information, it must be extremely efficient
5 Visual Information Processing
- Connectionist models (neural networks) are a reasonable approach to modeling the human visual system; two approaches:
- Neurally inspired: the architecture can be quite different from true biological systems. The main concern is whether the resulting system solves a problem
- Biologically reasonable: helps to understand true biological systems
6 Modeling of Biological Neural Networks
Figure: hierarchy of modeling levels, from molecules and ion channels, through neurons and their signals, to populations of neurons (spiking neuron models) and behavior, each level paired with a computational model.
Source: Laboratory of Computational Neuroscience (LCN), Swiss Federal Institute of Technology Lausanne (EPFL), CH-1015 Lausanne
7 Biological Feasibility Constraints
- Visual Information Processing Speed
- Activity of Neurons
- Anatomical Organization
8 The Neural Code
- An activated neuron fires a series of action potentials
- A cubic millimeter of cortex contains 20,000 to 100,000 neurons firing at a rate of a few spikes/second
- These neurons connect to another 800,000,000 neurons. If sufficiently excited, they fire an action potential
- Is the exact timing of individual spikes random, with the information carried in an average rate of action potentials (over some window, e.g., 20 ms)?
9 How is Information Represented in Neural Networks?
- Integrate and Fire (IF) model
- Neuron integrates inputs (information) over time
- A threshold is reached
- Neuron fires a spike (action potential) down its axon
- Neuron resets after firing the spike
- Enters a refractory period where it cannot fire
- Starts integrating information again
- Cycle repeats (a minimal code sketch follows)
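A minimal MATLAB sketch of this integrate-and-fire cycle. The input drive, threshold, and refractory period below are arbitrary illustrative values, not numbers from the lecture:

  dt = 1; T = 200;                  % time step and duration (ms)
  theta = 1.0;                      % firing threshold (arbitrary)
  h = 0; refrac = 0;                % integrated input, refractory counter
  inp = 0.02 * ones(1, T);          % constant input drive (arbitrary)
  spikes = zeros(1, T);
  for t = 1:T
      if refrac > 0
          refrac = refrac - 1;      % refractory period: cannot fire
      else
          h = h + inp(t) * dt;      % integrate inputs over time
          if h >= theta             % threshold reached
              spikes(t) = 1;        % fire a spike down the axon
              h = 0;                % reset after firing
              refrac = 5;           % enter refractory period (5 ms)
          end
      end
  end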
10 Sigma Nodes (neurons)
- McCulloch and Pitts (1943)
- Simple binary unit
- Generates an output when the summed input reaches a threshold (neuron action potential)
- Rate model
- low output → low firing rate
- high output → high firing rate
- What about spike timing?
- Fast responses cannot be captured by rate models
REF: Remaining material from Fundamentals of Computational Neuroscience, Trappenberg
11 Recall: Minimal Neuron Model
Figure: a node receives real-valued inputs rin1, rin2, ..., rini, ..., rinn through weights w1, w2, ..., wi, ..., wn; the weighted inputs are summed (Σ) and passed through g to produce the output rout, which projects to a subpopulation of neurons.
Real-valued inputs relate to rate values of other neural groups
12 Functionality of Sigma Node
- Input value rini of each channel is multiplied by the weight value wi for this channel
- hi = wi rini
- Each node sums up all the weighted inputs
- h = Σi hi
- The output, rout, is calculated as a function g of the net input
- rout = g(h)
- rout = g(Σi wi rini), summed over all i
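As a small illustration, the entire sigma-node computation is one weighted sum followed by g. A MATLAB sketch with made-up weights and a sigmoid chosen for g (any of the activation functions below would do):

  rin = [0.2; 0.8; 0.5];             % input rates from other neural groups
  w   = [0.5; 1.0; -0.3];            % channel weights (arbitrary example values)
  h   = sum(w .* rin);               % h = sum_i w_i * rin_i
  g    = @(x) 1 ./ (1 + exp(-x));    % one possible activation function
  rout = g(h)                        % rout = g(sum_i w_i * rin_i)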
13 Common Activation Functions
- g is the activation function
- g is also called the transfer function because it transforms the input to the output using a specific function
- g can take on a wide range of mathematical forms: linear, step, threshold, sigmoid, radial basis, etc.
14 Activation Functions (figure: plots of the common activation functions)
15 Activation Function Observations
- Linear function directly relates the output of the node to the sum of its inputs; can be easily treated analytically
- Step function is non-linear, but can be approximated by piecewise linear functions; produces a binary response (two possible responses)
- Threshold linear is similar to linear, but is thresholded from the bottom (and biological neurons have no sense of negative values)
- A simple realization is a combination of a linear function and a step function
- Radial basis function is symmetric around a base value. It is non-monotonic
16 Sigmoid Function
- Bounds both the minimal and maximal response of the neuron
- Interpolates smoothly between the extremes
- Most common response function used in modeling
17 Generalized Sigmoid Function
Using parameters to change slope and offset:
gsig(x) = 1 / (1 + exp(-β(x - x0)))
where the parameter β increases the steepness of the curve and x0 shifts the curve along the x axis.
NOTE: There are many possible activation functions, but it is interesting to note that the information processing capabilities of networks do not depend critically on the shape of the function
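A quick MATLAB sketch of the generalized sigmoid, plotting a few arbitrary choices of β and x0 to show the slope and offset effects:

  gsig = @(x, beta, x0) 1 ./ (1 + exp(-beta * (x - x0)));
  x = -5:0.1:5;
  plot(x, gsig(x, 1, 0), x, gsig(x, 4, 0), x, gsig(x, 1, 2));
  legend('\beta = 1, x_0 = 0', '\beta = 4 (steeper)', 'x_0 = 2 (shifted right)');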
18 Continuous Dynamics of a Sigma Node
- Assume rout = g(Σi wi rini) takes a finite amount of time, τ
- τ is our basic time scale
- The continuous node only integrates a fraction of the input in each smaller time step Δt
- Setting the fraction equal to Δt/τ assures the node still integrates the same amount as the single step defined by rout = g(Σi wi rini)
- Requires the node to remember its value before the new input value is added
- BUT if the node never forgets, then the net input will continue to grow without bound as long as input is applied
- → leaky integrator: loses some of its current state over time
19 Leaky Integrator
- Introduce a forgetting factor
- Ensures stability of the node
- Set the forgetting factor to act on the same time scale as the integration of new input:
h(t + Δt) = (1 - Δt/τ) h(t) + (Δt/τ) Σi wi rini
If we make one big time step of Δt = τ we get the original equation:
h(t + τ) = (1 - τ/τ) h(t) + (τ/τ) Σi wi rini = Σi wi rini
For this time step the node has forgotten its previous state and just sums up all the weighted inputs
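A minimal sketch of this discrete leaky-integrator update in MATLAB; the time constant and input value are arbitrary illustrative choices:

  tau = 20; dt = 1;                    % time constant and time step (ms)
  net = 0.3;                           % the weighted input sum, sum_i w_i rin_i
  h = 0; steps = 100; hh = zeros(1, steps);
  for t = 1:steps
      h = (1 - dt/tau) * h + (dt/tau) * net;   % leaky update
      hh(t) = h;
  end
  % h converges toward net = 0.3; with dt = tau the update reduces to
  % h = net in a single step, i.e., the node forgets its previous state.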
20 Time Dependent Equation for Continuous Nodes
- Still a discrete node since we are using discrete time steps Δt in terms of τ
- Recall from calculus: make the time steps smaller and smaller. Rewrite the previous equation
- h(t + Δt) = (1 - Δt/τ) h(t) + (Δt/τ) Σi wi rini
- as
- τ (h(t + Δt) - h(t)) / Δt = -h(t) + Σi wi rini
21 Time Dependent Equation for Continuous Nodes - 2
- Perform the limit Δt → 0; the differences become differentials and the time-dependent equation for the continuous node becomes
- τ dh(t)/dt = -h(t) + Σi wi rini
- This assumes slowly varying inputs
22 Leaky Integrator Characteristics
- Leaky integrator dynamics are common in computational neuroscience
- If the external inputs are zero, the equation becomes a simple homogeneous first order differential equation
- The solution is the exponential function
- h(t) = h(0) exp(-t/τ) + hrest
23 What does it mean?
- Activation of a node without external input decays exponentially on a time scale τ towards the resting activation hrest
- An input Σi wi rini changes the behavior. A positive input slows down the activation decay
24 Leaky Integrator: Coincidence Detector
- First consider a perfect integrator that integrates the spikes of two presynaptic spike trains
25 Perfect Integrator
- Membrane potential accumulates synaptic currents triggered by presynaptic spikes → counts the number of spikes since the last reset of the membrane potential
Figure: the perfect integrator fires after the membrane potential exceeds threshold
26 Leaky Integrator: Coincidence Detector
- Now consider the leaky integrator
- Membrane potential decays after each spike → needs two coincident spikes to exceed threshold
Figure: the leaky integrator fires after the membrane potential exceeds threshold, which here requires two near-simultaneous spikes
27 Observations
- Both neurons fire after membrane potential
exceeds the threshold, but for very different
reasons - With the perfect integrator the exact timing of
the spikes is not important - With the leaky integrator model the neuron fires
because of the occurrence of two simultaneous
presynaptic spikes - The window of coincidence of the spikes will
depend on the time constant of the leaky
integrator
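The difference is easy to see in a small simulation. A hedged MATLAB sketch (the time constant, threshold, and spike times are arbitrary choices): one lone presynaptic spike, then a near-coincident pair, drive both integrators.

  dt = 1; T = 60;                      % 1 ms steps, 60 ms of simulation
  tau = 5; theta = 1.5;                % leak time constant and threshold
  pre = zeros(1, T);
  pre([10 30 31]) = 1.0;               % one lone spike, then a coincident pair
  hp = 0; hl = 0;                      % perfect and leaky membrane potentials
  for t = 1:T
      hp = hp + pre(t);                % perfect integrator: no decay
      hl = (1 - dt/tau) * hl + pre(t); % leaky integrator: decays between spikes
      if hp >= theta, fprintf('perfect integrator fires at %d ms\n', t); hp = 0; end
      if hl >= theta, fprintf('leaky integrator fires at %d ms\n', t);   hl = 0; end
  end
  % The perfect integrator fires at 30 ms simply because the spike count
  % reaches threshold; the leaky one fires at 31 ms only because two
  % spikes arrived within its coincidence window.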
28 Potential Methods of Information Representation
- How is information generated, extracted and used?
- Single spikes or temporal patterns of spikes
across populations of neurons - Mean spike firing rate
- Neuron firing latency relative to a given
reference event - Repetition of specific firing sequences or more
complex patterns - Phase differences during periods of oscillatory
firing
29 A few interesting numbers: How fast do neurons fire?
- Generally, the interval between spikes at the maximum spiking rate is about 5 ms
- How fast do biological systems respond to changes in the environment?
- A few hundred milliseconds to a thousand ms
- Results in a maximum of 100 computational steps between input and output (approximately)
33 Methods to determine visual processing time
- How long does it take to name an object (Henderson, 87)?
- Depends: apple 600 ms, xylophone 1200 ms
- Includes verbal response time: finding the correct word, motor actions to speak the word
34 Methods to determine visual processing time - 2
- What is the minimum presentation time for an object, followed by a complex scene?
- 100-120 ms, but processing may continue after the image of the object is removed (information starts to flow up the visual stream and continues to be processed even after the stimulus is removed)
35 Methods to determine visual processing time - 3
- Identify a particular object in a rapid serial visual presentation of several images
- Subjects pressed a button when a picture contained a particular object (Potter, 75)
- 60% of objects were identified with presentation times of 113 ms per image
- ISSUE: the subject is looking for a particular object; maybe the visual system is primed to look for this object
36 Methods to determine visual processing time - 4
- Recognize a pair of familiar images when they are presented serially to the subject
- 80% accuracy when presented for 100 ms each
- Drops to 60% when the presentation time is reduced to 60 ms
37 Initial conclusions
- When subjects had no advance knowledge of the type of image to be identified, they could accurately process an image in approximately 100 ms
- Assumes common objects, well known people and places
- It has been estimated that we can name 50,000 to 60,000 objects
38 Visual response properties of individual neurons in different brain areas
- → substantial visual processing tasks
- Neurons in the lateral hypothalamus become active within 150 ms when an animal is looking for food (Rolls, 76)
- Maddison (83) found neurons in the orbitofrontal cortex active within 120 ms in visual responses to food
- A population of neurons in part of the temporal lobe responds to faces within 120 ms
39 Pathways
- Recall the "What" pathway, the ventral stream:
- Starting at the primary visual cortex V1, to
- V2, V4, and finally to the inferotemporal cortex (IT)
- BUT: How is the information actually coded?
40 Models
- A model must be able to support the fact that information travels from the retina to the visual cortex in about 50 ms, and takes about 150 ms to activate a high level representation
- Mean spike firing rate has been a popular model
- Excited neurons increase the frequency of their spike firing
- BUT it takes about 50 ms just to measure the firing rate of a single neuron
43 Firing Rates
v(t) = (number of spikes in Δt) / Δt
Figure: firing rate in one spike train, measured over a window Δt.
Temporal average of a single spike train (calculate the average over many trials)
44 Observations
- Individual spike trains are noisy
- Averaging over repetitions of experiments is mathematically valid
- Results correlate with behavioral responses
- The window must be smaller than the typical response time (otherwise we are averaging beyond the response)
- But the brain cannot average several trials to form a response
46 Population Averages
- Would it make sense for the brain to rely on a single spike train?
- Reliance on a single spike train would make the brain vulnerable to damage or reorganization
- A subpopulation or pool of neurons with statistically similar response properties would allow for temporal averages, resulting in a more robust situation (this could be why single neuron recordings correlate with measurements of behavioral findings)
47 Population Rate
Figure: spikes from a local pool of similar neurons are counted within a window Δt; the pool average is defined over a smaller window than a single-train average.
48 Average Population Activity
A(t) = lim_{ΔT→0} (1/ΔT) · (number of spikes in the population during ΔT) / N, for a population of size N
Summing over the neurons in a subpopulation allows for smaller windows (as opposed to a single spike train with a large window)
50 Observations
- Can we verify the population approach?
- ISSUES:
- We need to record many thousands of neurons simultaneously
- Beyond our current experimental ability
- Brain imaging averages over too many neurons
51 Summary: Rate Code / Mean Firing Rate
- Average over time (temporal average)
- Average over repeats of an experiment
- Average over a population of neurons
52 Phase with Respect to Background Oscillation
Information is encoded in the phase of a pulse with respect to the background oscillation.
Such oscillations of a global variable are common in the hippocampus and olfactory areas.
Figure: spikes riding on an oscillation of a global variable.
53 - It would be computationally convenient if each arriving pulse had a label identifying which preprocess it came from.
- Synchrony: maybe all the action potentials from one preprocess fire at the same time?
54 Synchrony - Correlation
Information is conveyed by the spikes' relationship with each other:
- Belonging together
- Special event
- Reference with each other
Ritz, R. and Sejnowski, T. J. (1997). Synchronous oscillatory activity in sensory systems: new vistas on mechanisms.
55 Spiking Neuron Model
- Proposed by Thorpe, VanRullen and others
- The neural code is based on the idea that information is distributed across a population of neurons
- Information is represented by the relative firing times in a single wave of action potentials
56 Assignment for Tuesday-Thursday: Spike timing
- Surfing a spike wave down the ventral stream, Rufin VanRullen, Simon J. Thorpe
- http://www.spikenet-technology.com/download/Surfing.pdf
- Spike times make sense, Rufin VanRullen, Rudy Guyonneau and Simon J. Thorpe
- http://www.klab.caltech.edu/rufin/OriginalPapers/VanRullenTiNS05.pdf
57 How can we get a mathematical foundation for information concepts? Information Theory Basics
- Claude Shannon (1948) studied the transmission of messages over a general communication channel
- message xi, one of several possibilities from a set X
- converted to a signal si = f(xi) that can be transmitted over a communication channel
- Receiver converts the received signal ri back to a message yi = g(ri), one of a set Y of possible output messages that can be interpreted at the destination
58 Information Theory Basics
- Shannon's theory includes a noise input, n, to the channel, due to physical noise in the channel or faulty message conversion
- In a noiseless channel, ri = si, which can be converted to the original message when the receiver applies the inverse of the encoding done by the transmitter:
- g = f^-1
- noise can be either additive, r = s + n
- or multiplicative, ri = si · n, where n is a random variable
59 Communication Channel
Figure (block diagram): Information source → (message x) → Transmitter → (signal s = f(x)) → channel with noise n → (received signal r) → Receiver → (received message y = g(r)) → Destination
60 Information Gain
- What is the amount of information you gain by receiving a message? It depends on your expectations
- If I tell you this class meets on Tuesdays and Thursdays, then you gain no information
- If you think the final is on a Monday or a Saturday and I tell you Monday, you gain some information
- If you think it is on either Monday, Wednesday or Saturday, you gain even more information when I tell you Wednesday
61 Information Gain - 2
- Information depends on the set of possible messages and their frequency of occurrence
- Information is a function of the probability pi = P(yi) of a message
62 Information Gain - 3
- Independent information should be additive: if I tell you the final is at 8am, you gain additional information over what I told you before
- We define the information gain when receiving the message yi as
- I(yi) = -log2(pi); the minus sign makes information positive because probabilities are less than 1
- log2 defines the unit of information as the bit
63 Information Gain - 4
- If we have only two possible messages, 0 or 1, both equally likely, P(0) = P(1) = ½
- So, receiving either message results in
- I = -log2(0.5) = 1 bit of information
65 Information Gain - 4
- If we have a new code based on playing cards:
- King of diamonds = 0
- Queen of diamonds = 1
- King of spades = 2
- Three different states: 0, 1, 2
- How much information do we gain when we receive a Queen of diamonds?
- Each message is equally likely, P(message) = 1/3
- I = -log2(1/3) = 1.585 bits
66 Information Gain - 5
- What if I tell you the final is on Saturday at 6am with 90% probability (out of 4 equally likely days)?
67 Information Gain - 5
- There is still some uncertainty we need to deal with
- Need to consider the probability of an event before the message was sent → the a priori or prior probability Pprior(x)
- And the probability of an event after taking the information into account, the posterior probability, Pposterior(x)
68 Information Gain - 6
- The information gain is defined by
- I(x) = -Σx Pposterior(x) log2( Pprior(x) / Pposterior(x) )
When we know the precise answer after the message was received, Pposterior(x) = 1
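A sketch of this formula applied to the slide-66 scenario (four equally likely days, then a 90%-reliable "Saturday" message); the posterior values are illustrative assumptions, not from the slides:

  Pprior = [0.25 0.25 0.25 0.25];            % four equally likely days
  Ppost  = [0.90 0.1/3 0.1/3 0.1/3];         % belief after the 90% message
  I = -sum(Ppost .* log2(Pprior ./ Ppost))   % about 1.37 bits
  % With a fully certain message, Ppost = [1 0 0 0], the sum reduces
  % (taking 0*log2(0) = 0) to -log2(0.25) = 2 bits, i.e., I = -log2(Pprior).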
69 Not all messages equally likely?
- If all the messages are not equally likely, then the information gained is different for different messages
- P(m0) = 1/4: I(m0) = -log2(1/4) = 2 bits
- P(m1) = 3/4: I(m1) = -log2(3/4) = 0.415 bits
- The average information in the message set is
- S(x) = -Σi pi log2(pi), the ENTROPY of the information source
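A two-line MATLAB check of these numbers:

  p = [1/4 3/4];            % message probabilities
  I = -log2(p)              % per-message gains: [2.0  0.415] bits
  S = -sum(p .* log2(p))    % entropy of the source: 0.8113 bits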
70 Entropy
- Entropy is a quantity of the information set and is not defined for an individual message
- Entropy of the set of possible received messages: S(y) = 1/4 × 2.0 + 3/4 × 0.415 = 0.8113 bits
- Entropy is a measure of the average information gain we can expect from a signal in a communication channel
- Individual messages may have more or less information gain
71 Entropy
The entropy of a message set with N possible messages with equal probabilities is
S(x) = -Σ_{i=1}^{N} (1/N) log2(1/N) = log2(N)
The entropy is hence equivalent to the logarithm of the number of possible states for equally likely states
72 Entropy for Continuous Distributions
X = set, x = member of the set, p(x) = probability distribution function:
S(X) = -∫ p(x) log2 p(x) dx
Consider a Gaussian distribution:
p(x) = (1 / (sqrt(2π) σ)) exp(-(x - μ)² / (2σ²))
S(x) = ½ log2(2π e σ²)
NOTE: depends on the variance, NOT the mean, of the Gaussian
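A numeric sanity check of the Gaussian entropy formula (σ is arbitrary; the finite integration range simply avoids evaluating 0 · log 0 in the far tails):

  sigma = 2; mu = 0;
  p = @(x) exp(-(x - mu).^2 / (2*sigma^2)) / (sqrt(2*pi) * sigma);
  Snumeric = integral(@(x) -p(x) .* log2(p(x)), -40, 40)
  Sformula = 0.5 * log2(2*pi*exp(1)*sigma^2)   % same value; mu never appears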
73 Noise
- We have assumed g = f^-1, receivers that invert the coding of the transmitter, and noise-free transmission
- Information transmission in neural systems must take noise into account
- What can a received signal y tell us about the event x?
- We would like to reconstruct the event (say, sensory input to the neural system from the firing pattern of neurons)
74 Channel Capacity
- The amount of information that can be transmitted over a noisy channel is limited by the ratio of signal to noise:
- I < ½ log2( 1 + ⟨x²⟩/⟨n²⟩ )
- where ⟨x²⟩ is the variance of the input signal and ⟨n²⟩ is the effective variance of the channel noise
75 Channel Capacity Observation
- The information on a given channel with a given noise can be increased if we increase the variability of the input signals
- Recall, spike trains have high variability
- Therefore, they are well suited to transmit information over noisy channels
76 Information in Spike Trains
- Calculate the maximum entropy of a spike train with temporal coding
- Introduce time bins small enough so that only one spike can occupy a bin, Δt
- The spike train is then a binary string, with 1s for spikes and 0s otherwise
77 Information in Spike Trains
0 0 1 0 0 0 0 1 1 0 0 0 0 0
- The firing rate r of a spike train fixes the number of 1s and 0s in a spike train of length T
- We can calculate the number of possible spike trains with the fixed number of spikes
- The logarithm to base 2 of this number is the entropy of the spike train
78 Information in Spike Trains
This can be calculated to be
S = -(T / (Δt ln 2)) [ rΔt ln(rΔt) + (1 - rΔt) ln(1 - rΔt) ]
For time bins much smaller than the average interspike interval (rΔt << 1):
S ≈ T r log2(e / (rΔt))
N = T r is the number of spikes in the spike train, so the average entropy is given by
S/N = log2(e / (rΔt))
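Plugging in sample numbers (the rate and bin size are arbitrary illustrative choices):

  r = 40; dt = 0.001; T = 1;            % 40 Hz, 1 ms bins, 1 s of spike train
  p = r * dt;                           % probability that a bin holds a spike
  S = -(T/(dt*log(2))) * (p*log(p) + (1-p)*log(1-p))   % exact: ~242 bits
  Sapprox   = T * r * log2(exp(1)/p)    % small-bin approximation: ~244 bits
  SperSpike = Sapprox / (T * r)         % log2(e/(r*dt)) ~ 6.1 bits per spike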
79 Gaussian Tuning Curves
Figure: firing rate as a function of feature value (direction) for neurons tuned to values x and y. Note: Gaussian curves replaced by triangles for ease of drawing.
80 Gaussian Tuning Curves
Figure: the same plot showing only the neuron tuned to y. Note: Gaussian curves replaced by triangles for ease of drawing.
81 Direct Link to Visual Saliency
- Consider the idea that a neuron's firing rate is a function of the strength of its input
- The more it is activated, the more it fires
- The time at which the neuron reaches its threshold is a decreasing function of its activation
- The more a neuron is activated, the sooner it fires
82 - The strength of a neuron's inputs drives:
- Firing rate of the neuron
- Latency of firing
83 Population of Neurons
- Assume population is stimulated with a particular
input intensity pattern (stimulus) - If we know the exact firing rate for each neuron
of the population we can describe the input
pattern (this would require a finite amount of
time to measure the firing rate)
84 Population of Neurons
- Again, assume the population is stimulated with a particular input intensity pattern (stimulus)
- If we know the time of emission of the first spike (latency) of each neuron, we have the same information, but much faster
- Or, if we knew the exact order of firing of each neuron in the population: first to fire is the most active neuron, etc. (Rank Order Coding)
85 But How Much Information Can We Code with Rank Order Coding?
86 A Great Deal of Information!
- Consider only two neurons; either neuron 1 or neuron 2 fires first:
- 1 then 2, or 2 then 1
- With three neurons:
- 1,2,3 or 1,3,2 or 2,1,3 or 2,3,1 or 3,1,2 or 3,2,1, which is 3! = 6
- With 8 neurons, 8! = 40,320
- With 12 neurons, 12! = 479,001,600
- With 10,000 neurons, 10000! (see the sketch below)
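In bits, a complete firing order over n neurons can carry log2(n!) bits. A quick MATLAB check (gammaln avoids overflow for large n):

  bits = @(n) gammaln(n + 1) / log(2);   % log2(n!) via the log-gamma function
  bits(8)       % log2(40320)      ~ 15.3 bits
  bits(12)      % log2(479001600)  ~ 28.8 bits
  bits(10000)   % log2(10000!)     ~ 1.2e5 bits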
87 Figure: a-h represent a population of neurons; f fires first because its input contrast is greatest.
Output order: f, g, e, d, h, b, c, a
89 Order
- The first spikes in a wave of action potentials correspond to the most active neurons (IF model)
- What does this mean in terms of saliency?
- The first spikes represent the most salient information, say the area of highest contrast in a contrast image
91 What about Mean Firing Rate?
- Doesn't the neuron with the highest mean firing rate also represent the most salient information?
- What is the problem with this interpretation?
- Before we can determine which neuron has the highest firing rate, all neurons must fire enough times to get an estimate of the mean firing rate of all the neurons in the population
- With rank order firing we know as soon as the first neuron fires
92 Order of Firing vs. Exact Latency of Firing
- Exact latency of firing has the disadvantage that it is more complicated to implement
- Order of firing is simpler, but less precise. It cannot make precise judgments about exact input intensity values.
- BUT it is supported by psychophysical experiments
- Humans are better at reporting relative comparisons of stimulus features (luminance, hue, contrast, etc.) than at reporting exact values
93-95 Which is Lighter?
Figures: three pairs of patches (LEFT vs. RIGHT) demonstrating relative luminance judgments.
96 Level of activation will determine when the neuron will fire
97 Each map represents a different scale
Figure: data after only 5% of ganglion cells have fired. White dots represent center-ON and black dots represent center-OFF cells.
98 Image reconstructed from 5% of ganglion cells
99 Original vs. Reconstructed (figures)
100 Details
- The first spike received represents the highest contrast → receives the maximum weight
- Following spikes receive progressively lower weights
- Exactly how the weights are reduced can be based on statistics of natural images
101 Percent of Spikes (figure)
102 Neurons with Temporal Sensitivity
- Neurons in the visual cortex must be sensitive to the temporal information of the spike stream
- Order is important!
- The neurons must respond selectively to a particular sequence of activation of their afferents, but not to the activation of the same afferents in a different order
103 - The target neuron should give maximum weight to the first inputs it receives
- Later spikes should have progressively less influence on the neuron's activity
- With a particular threshold, a neuron can be made selective to a particular firing order of its inputs
104 Shunting Inhibition
Each time one of the inputs (A-E) fires, the shunting inhibition increases.
Figure: inputs A-E converge on the target neuron; the strength of each connection is indicated by the thickness of the connection.
105 Spiking Neuron
Coefficient matrix: a1 a2 a3 a4 a5 a6 a7 a8 a9 a10 a11 a12
Filter weights:     .7 .7 .7 .7 .9 .7 .8 .9 .8 .8  .9  .8
Order:               3  3  3  3  1  3  2  1  2  2   1   2
Organize the weights by order: who should fire first → the largest filter coefficients.
Ideal firing order: a5 a8 a11 a7 a9 a10 a12 a1 a2 a3 a4 a6
106 Spiking Neuron - 2
Figure: inputs a1 (.7), a2 (.7), a3 (.7), ..., a10 (.8), a11 (.9), a12 (.8) converging on the spiking neuron.
107 Spiking Neuron - 3
Act = Σj mod^j · w(i, o(j)), for mod = 0.8
Sort the image data. Consider an ideal image patch (a perfect match to the spiking filter):
order: a5 a8 a11 a7 a9 a10 a12 a1 a2 a3 a4 a6 (largest to smallest pixel values)
o = 5 8 11 7 9 10 12 1 2 3 4 6
Spiking filter weights for all image patches:
w(i) = .7 .7 .7 .7 .9 .7 .8 .9 .8 .8 .9 .8
108 Spiking Neuron - 4: Ideal image patch
t = 1: Act(i,1) = .8^1 × .9 (mod^1 · w(i,5); from the order of the image data, pixel 5 was the largest)
t = 2: Act(i,2) = .8^1 × .9 + .8^2 × .9 (from w(i,8))
t = 3: Act(i,3) = .8^1 × .9 + .8^2 × .9 + .8^3 × .9 (from w(i,11))
t = 4: Act(i,4) = .8^1 × .9 + .8^2 × .9 + .8^3 × .9 + .8^4 × .8, etc.
Act(i,4) = 2.0845
Continue for all 12 pixels in the image patch. Act will be maximum for an image patch whose gray level values match the order of the spiking neuron.
o = 5 8 11 7 9 10 12 1 2 3 4 6
w(i) = .7 .7 .7 .7 .9 .7 .8 .9 .8 .8 .9 .8
109 Spiking Neuron - 5: Non-ideal image patch
Act = Σj mod^j · w(i, o(j)), for mod = 0.8
Sort the image data. Consider a non-ideal image patch (not a match to the spiking filter):
order: a3 a5 a7 a1 a2 a4 a6 a10 a11 a8 a9 a12 (largest to smallest pixel values)
o = 3 5 7 1 2 4 6 10 11 8 9 12
Spiking filter weights for all image patches:
w(i) = .7 .7 .7 .7 .9 .7 .8 .9 .8 .8 .9 .8
110 Spiking Neuron - 6: Non-ideal image patch
t = 1: Act(i,1) = .8^1 × .7 (mod^1 · w(i,3); from the order of the image data, pixel 3 was the largest)
t = 2: Act(i,2) = .8^1 × .7 + .8^2 × .9 (from w(i,5))
t = 3: Act(i,3) = .8^1 × .7 + .8^2 × .9 + .8^3 × .8 (from w(i,7))
t = 4: Act(i,4) = .8^1 × .7 + .8^2 × .9 + .8^3 × .8 + .8^4 × .7, etc.
Act(i,4) = 1.8323
o = 3 5 7 1 2 4 6 10 11 8 9 12 (a function of the image patch)
w(i) = .7 .7 .7 .7 .9 .7 .8 .9 .8 .8 .9 .8 (fixed for the filter)
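The two worked examples are easy to reproduce. A MATLAB sketch (modf stands in for the slides' "mod" parameter, renamed to avoid shadowing MATLAB's built-in mod):

  w    = [.7 .7 .7 .7 .9 .7 .8 .9 .8 .8 .9 .8];   % fixed spiking filter weights
  modf = 0.8;                                     % rank modulation factor
  rankAct = @(o) sum(modf.^(1:numel(o)) .* w(o)); % Act = sum_j modf^j * w(o(j))
  oIdeal    = [5 8 11 7 9 10 12 1 2 3 4 6];       % ideal patch firing order
  oNonideal = [3 5 7 1 2 4 6 10 11 8 9 12];       % non-ideal patch firing order
  rankAct(oIdeal(1:4))      % 2.0845, as on slide 108
  rankAct(oNonideal(1:4))   % 1.8323, as on slide 110
  rankAct(oIdeal)           % full 12-pixel activation: the maximum for this filter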
111 Spiking Gabor Filter Example
How do we start? → First, set up the filter weights:
1. Create a Gabor filter, 0 degrees, 15x15 pixels
2. Convert to a vector (from a matrix) → GaborVector0_15
3. Rank order the coefficients
112 Gabor Filter - 2
[Y,I] = sort(X) also returns an index matrix I. If X is a vector, then Y = X(I).

  [Yfilter, Iorder] = sort(GaborVector0_15);
  % Really we want to sort on absolute value first (sorting signed values
  % keeps the negative weights in the middle of the list), then use the
  % actual positive and negative values in the calculations.
  % sort sorts in ascending order; reverse to descending:
  IorderDescending = Iorder(end:-1:1);
  w = GaborVector0_15;            % w is the 225-element vector, simply renamed
  q = w(IorderDescending(1))      % q is the largest filter weight
113 Gabor Filter - 3
  q = w(IorderDescending(1:12))   % q(1) is the largest filter weight
q = 0.9453 0.8820 0.8820 0.7164 0.7164 0.6229 0.5812 0.5812 0.5102 0.5065 0.5065 0.4760
114 Gabor Filter - 4
If the image patch matches the spiking filter exactly: Act = 2.9901
If the patch is at 5 degrees: Act5 = 2.9761
If the patch is at 10 degrees: Act10 = 2.9452
If the patch is at 15 degrees: Act15 = 2.8921
If the patch is at 45 degrees: Act45 = 2.2720
If the patch is at 90 degrees: Act90 = 1.7913
115 Image Reconstruction
- Similar to Wavelet or Fourier reconstruction
- Fourier: a signal can be decomposed into a series of sinusoids. Each sinusoid has a coefficient associated with it.
- The original signal can be reconstructed by multiplying each sinusoid by its coefficient and summing all of the scaled sinusoids
116 Image Reconstruction
- In the spiking neuron model we only know the relative order of the individual neurons, not the exact coefficients
- Need to estimate the coefficients
- Use scaled original Gabor filters in the reconstruction
117 Homework Due Thursday
- Most of the details were discussed in class; the main points are:
- Use the tree image (on the webpage) as your test case
- Implement the spiking neuron rate code ideas that were discussed in class and in the Rufin VanRullen papers. There are other papers available, including papers on Thorpe's page. Research issues you don't understand
- Duplicate the results shown in Figure 4 of Rate Coding Versus Temporal Order Coding (handed out in class, also available on VanRullen's webpage)
118 Homework
- Implement Spiking Neuron Gabor filters
- 5x5, 11x11, 23x23, 47x47 Center-On/Surround-Off and Center-Off/Surround-On
- Create output images with only (initialize with a gray image matrix of value 128; the final image ranges 0-255):
- the first 10% of neurons that fire
- the first 25% of neurons that fire
- Estimate the mean contrast reconstruction value using the information in Figure 3
- ONLY model with the spiking neurons we discussed in class, not the other models discussed in the paper
119 - Email your code and processed images to my gmail account by Tuesday
- The images should be in jpg format so they can be easily opened in my browser
120 Neuronal Basis of Spatial Memory Formation
- http://buster.maze.ucl.ac.uk/RESEARCH.html