SENSORY INPUT
About This Presentation
Title: SENSORY INPUT
Description: Leo Sugrue. SENSORY INPUT. DECISION MECHANISMS. ADAPTIVE BEHAVIOR. low level sensory analyzers ... monkey use to match'? Theory: Can we build a model that replicates ...
Provided by: leosu
Slides: 85
Transcript and Presenter's Notes

3
SENSORY INPUT
low level sensory analyzers
DECISION MECHANISMS
motor output structures
ADAPTIVE BEHAVIOR
5
SENSORY INPUT
low level sensory analyzers
representation of stimulus/action value
DECISION MECHANISMS
motor output structures
REWARD HISTORY
ADAPTIVE BEHAVIOR
6
How do we measure value?
Herrnstein RJ, 1961
7
The Matching Law
Choice Fraction
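As a concrete illustration of Herrnstein's matching law, a minimal sketch: the fraction of choices an animal allocates to an option approximately equals the fraction of total rewards earned from that option. The reward counts below are hypothetical.

```python
# Matching law: choice fraction approximately equals reward fraction.
def matching_choice_fraction(rewards_a, rewards_b):
    """Predicted fraction of choices allocated to option A."""
    return rewards_a / (rewards_a + rewards_b)

# Hypothetical counts: option A yielded 30 rewards, option B yielded 10,
# so matching predicts 75% of choices go to A.
print(matching_choice_fraction(30, 10))  # 0.75
```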
8
Behavior: What computation does the monkey use to match?
Theory: Can we build a model that replicates the monkey's behavior on the matching task? How can we validate the performance of the model? Why is a model useful?
Physiology: What are the neural circuits and signal transformations within the brain that implement the computation?
9
An eye movement matching task
10
Dynamic Matching Behavior
11
Dynamic Matching Behavior
12
Dynamic Matching Behavior
13
Relation Between Reward and Choice is Local
14
How do they do this?
What local mechanism underlies the monkey's choices in this game?
To estimate this mechanism we need a modeling
framework.
15
Linear-Nonlinear-Poisson (LNP) Models of choice
behavior
Strategy estimation is straightforward
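The three stages can be sketched as follows; the kernel and nonlinearity are left as arguments because their forms are estimated on later slides.

```python
import random

# Skeleton of a Linear-Nonlinear-Poisson (LNP) model of choice: a linear
# filter over reward history (L), a static nonlinearity mapping the
# filtered value to a choice probability (N), and a stochastic choice
# stage (P). The specific kernel and nonlinearity are fit to behavior.
def lnp_choice(reward_history, kernel, nonlinearity, rng=random.random):
    drive = sum(k * r for k, r in zip(kernel, reward_history))  # L stage
    p = nonlinearity(drive)                                     # N stage
    return 1 if rng() < p else 0                                # stochastic choice
```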
16
Estimating the form of the linear stage
How do animals weigh past rewards in determining
current choice?
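A minimal sketch of one candidate weighting, assuming the two-exponential form (short τ1 and long τ2 components) described later in the talk; the amplitudes and time constants are illustrative, not the fitted values.

```python
import math

# Sketch of the linear stage: past rewards are weighted by a decaying
# kernel and summed into a value signal. Two-exponential form assumed;
# tau1/tau2/w1/w2 are illustrative placeholders, not fitted parameters.
def reward_kernel(n_trials, tau1=2.0, tau2=20.0, w1=0.4, w2=0.6):
    # lag 0 = most recent trial
    return [w1 * math.exp(-lag / tau1) + w2 * math.exp(-lag / tau2)
            for lag in range(n_trials)]

def linear_value(reward_history, kernel):
    # reward_history[0] is the most recent trial: 1 = rewarded, 0 = not
    return sum(k * r for k, r in zip(kernel, reward_history))
```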
20
Estimating the form of the nonlinear stage
How is differential value mapped onto the animal's instantaneous probability of choice?
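A common choice for such a mapping, used here as an illustrative stand-in for the curves fitted to each monkey's data, is a sigmoid of differential value:

```python
import math

# Sketch of the nonlinear stage: differential value (value of red minus
# value of green) is mapped to the instantaneous probability of choosing
# red through a sigmoid. The slope parameter is illustrative.
def p_choose_red(differential_value, slope=1.0):
    return 1.0 / (1.0 + math.exp(-differential_value / slope))

print(p_choose_red(0.0))  # 0.5 -- equal values, indifferent
```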
21
[Figure: fitted nonlinearity for Monkey F and Monkey G, plotting Probability of Choice (red) against Differential Value (rewards)]
22
Our LNP Model of Choice Behavior
  • Model Validation
  • Can the model predict the monkey's next choice?
  • Can the model generate behavior on its own?
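Both validation questions come down to comparing model output with actual choices. A minimal sketch of the predictive test, with made-up numbers:

```python
# Sketch of predictive validation: score the model by the fraction of
# trials on which the option it assigns p > 0.5 matches the monkey's
# actual choice. The probabilities and choices here are hypothetical.
def predictive_accuracy(p_red, choices):
    """p_red: model probability of choosing red on each trial.
    choices: 1 if red was actually chosen, else 0."""
    hits = sum((p > 0.5) == (c == 1) for p, c in zip(p_red, choices))
    return hits / len(choices)

print(predictive_accuracy([0.9, 0.2, 0.6, 0.4], [1, 0, 0, 0]))  # 0.75
```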

23
Can the model predict the monkey's next choice?
24
Predicting the next choice: single experiment
25
Predicting the next choice: all experiments
26
Can the model generate behavior on its own?
27
Model-generated behavior: single experiment
28
Distribution of stay durations summarizes
behavior across all experiments
Stay Duration (trials)
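The stay-duration summary can be computed from any choice sequence, real or model-generated, which is what makes it a useful point of comparison. A minimal sketch:

```python
# Stay durations -- lengths of runs of consecutive choices of the same
# target -- summarize a choice sequence compactly, so real and
# model-generated behavior can be compared via their histograms.
def stay_durations(choices):
    runs, length = [], 1
    for prev, cur in zip(choices, choices[1:]):
        if cur == prev:
            length += 1
        else:
            runs.append(length)
            length = 1
    runs.append(length)
    return runs

print(stay_durations(['R', 'R', 'G', 'G', 'G', 'R']))  # [2, 3, 1]
```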
29
Model-generated behavior: all experiments
Stay Duration (trials)
31
OK, now that you have a reasonable model, what can you do with it?
  • Explore second order behavioral questions
  • Explore neural correlates of valuation

33
Choice of Model Input
reward history
0000010100001
Surely not getting a reward also has some influence on the monkey's behavior?
35
Can we build a better model by taking unrewarded
choices into account?
  • Systematically vary the value of d
  • Estimate new L and N stages for the model
  • Test each new model's ability to
  • a) predict choice and
  • b) generate behavior
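The sweep above can be sketched as follows; the fitting and scoring steps are beyond this sketch, so only the history encoding and the loop over d are shown, and the d values are illustrative.

```python
# Sketch of the hybrid-model sweep: encode the trial history so that
# rewarded choices contribute +1 and unrewarded choices contribute a
# value d, then refit and score the model for each d (fitting and
# scoring omitted). `history` and the d values are hypothetical.
def encode_history(outcomes, d):
    """outcomes: 1 for a rewarded choice, 0 for an unrewarded one."""
    return [1.0 if o == 1 else d for o in outcomes]

history = [0, 0, 1, 0, 1, 1, 0]
inputs_by_d = {d: encode_history(history, d) for d in [0.0, -0.25, -0.5, -1.0]}
# Next steps (not shown): estimate new L and N stages on each encoding,
# then measure predictive and generative performance as a function of d.
print(inputs_by_d[-1.0])  # [-1.0, -1.0, 1.0, -1.0, 1.0, 1.0, -1.0]
```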

36
Unrewarded choices: the value of nothin'
[Figure: Predictive Performance and Generative Performance, each plotted against Value of Unrewarded Choices (d)]
37
Unrewarded choices: the value of nothin'
[Figure: Predictive Performance and Generative Performance (Stay Duration Histogram Overlap, %), each plotted against Value of Unrewarded Choices (d)]
38
Choice of Model Input
Contrary to our intuition, inclusion of information about unrewarded choices does not improve model performance.
39
Optimality of Parameters
40
Weighting of past rewards
Is there an optimal weighting function to
maximize the rewards a player can harvest in
this game?
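This question can be probed by simulation. Below is a minimal sketch of that kind of experiment under loudly stated assumptions: the baiting probabilities, the leaky-integrator matching policy, the exploration floor, and tau > 1 are all illustrative, not the task's actual parameters.

```python
import random

# Minimal simulation of a concurrent "baiting" task: each target is
# baited with some probability per trial, and a baited reward persists
# until collected. The player follows a local matching rule over leaky
# reward averages; sweeping tau asks how the integration time constant
# affects harvest rate. All parameters here are illustrative.
random.seed(0)

def simulate(tau, n_trials=5000, p_bait=(0.2, 0.05)):
    baited = [False, False]
    value = [0.0, 0.0]          # leaky average of rewards from each target
    decay = 1.0 - 1.0 / tau     # per-trial decay (assumes tau > 1)
    rewards = 0
    for _ in range(n_trials):
        for i in (0, 1):
            if random.random() < p_bait[i]:
                baited[i] = True
        total = value[0] + value[1]
        p0 = value[0] / total if total > 0 else 0.5   # local matching rule
        p0 = min(0.95, max(0.05, p0))  # keep a little exploration
        choice = 0 if random.random() < p0 else 1
        r = 1 if baited[choice] else 0
        baited[choice] = False
        rewards += r
        value[choice] = decay * value[choice] + r
        value[1 - choice] *= decay
    return rewards / n_trials   # harvest rate, rewards per trial

# Harvest rate can then be compared across tau values to look for an optimum.
```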
54
Weighting of past rewards
  • The tuning of the τ2 (long) component of the L-stage affects foraging efficiency. Monkeys have found this optimum.
  • The tuning of σ, the nonlinear function relating value to p(choice), affects foraging efficiency. The monkeys have found this optimum also.
  • The τ1 (short) component of the L-stage does not affect foraging efficiency. Why do monkeys overweight recent rewards?

58
The differential model is a better predictor of
monkey choice
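The two decision variables being compared can be formed from the same per-target values; a minimal sketch, with hypothetical values:

```python
# Two ways to combine per-target values into a decision variable:
# fractional value (a ratio) and differential value (a difference).
# The slides report that the differential form predicts choice better.
def fractional_value(v_red, v_green):
    return v_red / (v_red + v_green)

def differential_value(v_red, v_green):
    return v_red - v_green

print(fractional_value(3.0, 1.0))    # 0.75
print(differential_value(3.0, 1.0))  # 2.0
```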
59
  • Monkeys match; best-fitting LNP model
  • Model predicts and generates choices
  • Unrewarded choices have no effect
  • Monkeys find optimal τ2 and σ; τ1 not critical
  • Differential value predicts choices better than fractional value

60
?
61
Best LNP model
62
Come tomorrow!!!
64
Aside: what would Bayes do?
1) maintain beliefs over baiting probabilities
2) be greedy or use dynamic programming
Animals Don't Do This.
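To make the slide's Bayesian alternative concrete, a sketch of step 1: maintaining a Beta posterior over one target's baiting probability, updated from observed outcomes. (The slide's point stands: animals don't do this.)

```python
# Conjugate Beta update for a Bernoulli baiting probability: a rewarded
# choice increments alpha, an unrewarded one increments beta. The
# outcome sequence below is hypothetical.
def beta_update(alpha, beta, rewarded):
    return (alpha + 1, beta) if rewarded else (alpha, beta + 1)

alpha, beta = 1, 1             # uniform prior over the baiting probability
for outcome in [1, 0, 0, 1, 0]:
    alpha, beta = beta_update(alpha, beta, outcome)
print(alpha / (alpha + beta))  # posterior mean = 3/7, about 0.43
```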
65
Firing rates in LIP are related to target value
on a trial-by-trial basis
LIP
gm020b
http://brainmap.wustl.edu/vanessen.html
66
The differential model also accounts for more
variance in LIP firing rates
67
What I've told you
  • How we control/measure value: the matching law
  • An experimental task based on that principle: a dynamic foraging task
  • A simple model of value-based choice: our Linear-Nonlinear-Poisson model
  • How we validate that model: predictive and generative validation
  • How we use the model to explore behavior: hybrid models, optimality of reward weights
  • How we use the model to explore value-related signals in the brain: neural firing in area LIP correlates with differential value on a trial-by-trial basis

69
Foraging Efficiency Varies as a Function of τ2
70
Foraging Efficiency Does Not Vary as a Function of τ1
71
What do animals do?
Animals match.
Matching is a probabilistic policy.
Matching is almost optimal within the set of probabilistic policies.
73
The change-over delay
74
Greg Corrado
84
How do we implement the change-over delay?
Only one live target at a time.