Transcript and Presenter's Notes

Title: Kognitive Architekturen (Cognitive Architectures)


1
Introduction to ACT-R 5.0
Tutorial, 24th Annual Conference of the Cognitive Science Society
ACT-R Home Page: http://act.psy.cmu.edu

Christian Lebiere
Human Computer Interaction Institute
Carnegie Mellon University, Pittsburgh, PA 15213
cl@cmu.edu
2
Tutorial Overview
1. Introduction
2. Symbolic ACT-R
   Declarative Representation: Chunks
   Procedural Representation: Productions
   ACT-R 5.0 Buffers
   A Complete Model for Sentence Memory
3. Chunk Activation in ACT-R
   Activation Calculations
   Spreading Activation: The Fan Effect
   Partial Matching: Cognitive Arithmetic
   Noise: Paper Rocks Scissors
   Base-Level Learning: Paired Associate
4. Production Utility in ACT-R
   Principles and Building Sticks Example
5. Production Compilation
   Principles and Successes
6. Predicting fMRI BOLD Response
   Principles and Algebra Example
3
Motivations for a Cognitive Architecture
1. Philosophy: Provide a unified understanding of the mind.
2. Psychology: Account for experimental data.
3. Education: Provide cognitive models for intelligent tutoring systems and other learning environments.
4. Human Computer Interaction: Evaluate artifacts and help in their design.
5. Computer Generated Forces: Provide cognitive agents to inhabit training environments and games.
6. Neuroscience: Provide a framework for interpreting data from brain imaging.
4
Approach: Integrated Cognitive Models
  • Cognitive model = computational process that thinks/acts like a person
  • Integrated cognitive models

5
Study 1: Dialing Times
  • Total time to complete dialing
[Graph: model predictions vs. human data]
6
Study 1: Lateral Deviation
  • Deviation from lane center (RMSE)
[Graph: model predictions vs. human data]
7
These Goals for Cognitive Architectures Require
1. Integration, not just of different aspects of higher-level cognition but of cognition, perception, and action.
2. Systems that run in real time.
3. Robust behavior in the face of error, the unexpected, and the unknown.
4. Parameter-free predictions of behavior.
5. Learning.
8
History of the ACT Framework

Predecessor: HAM (Anderson & Bower, 1973)

Theory versions:
  ACT-E (Anderson, 1976)
  ACT (Anderson, 1978)
  ACT-R (Anderson, 1993)
  ACT-R 4.0 (Anderson & Lebiere, 1998)
  ACT-R 5.0 (Anderson & Lebiere, 2001)

Implementations:
  GRAPES (Sauers & Farrell, 1982)
  PUPS (Anderson & Thompson, 1989)
  ACT-R 2.0 (Lebiere & Kushmerick, 1993)
  ACT-R 3.0 (Lebiere, 1995)
  ACT-R 4.0 (Lebiere, 1998)
  ACT-R/PM (Byrne, 1998)
  ACT-R 5.0 (Lebiere, 2001)
  Windows Environment (Bothell, 2001)
  Macintosh Environment (Fincham, 2001)
9
100 Published Models in ACT-R 1997-2002

I. Perception & Attention
  1. Psychophysical Judgements
  2. Visual Search
  3. Eye Movements
  4. Psychological Refractory Period
  5. Task Switching
  6. Subitizing
  7. Stroop
  8. Driving Behavior
  9. Situational Awareness
  10. Graphical User Interfaces

II. Learning & Memory
  1. List Memory
  2. Fan Effect
  3. Implicit Learning
  4. Skill Acquisition
  5. Cognitive Arithmetic
  6. Category Learning
  7. Learning by Exploration and Demonstration
  8. Updating Memory & Prospective Memory
  9. Causal Learning

III. Problem Solving & Decision Making
  1. Tower of Hanoi
  2. Choice & Strategy Selection
  3. Mathematical Problem Solving
  4. Spatial Reasoning
  5. Dynamic Systems
  6. Use and Design of Artifacts
  7. Game Playing
  8. Insight and Scientific Discovery

IV. Language Processing
  1. Parsing
  2. Analogy & Metaphor
  3. Learning
  4. Sentence Memory

V. Other
  1. Cognitive Development
  2. Individual Differences
  3. Emotion
  4. Cognitive Workload
  5. Computer Generated Forces
  6. fMRI
  7. Communication, Negotiation, Group Decision Making

Visit the http://act.psy.cmu.edu/papers/ACT-R_Models.htm link.
10
ACT-R 5.0

[Architecture diagram]
  Intentional Module (not identified) → Goal Buffer (DLPFC)
  Declarative Module (Temporal/Hippocampus) → Retrieval Buffer (VLPFC)
  Visual Module (Occipital/etc.) → Visual Buffer (Parietal)
  Manual Module (Motor/Cerebellum) → Manual Buffer (Motor)
  Productions (Basal Ganglia): Matching (Striatum), Selection (Pallidum), Execution (Thalamus)
  Environment
11
ACT-R Knowledge Representation
[Diagram: chunks held in the goal buffer, visual buffer, and retrieval buffer]
12
ACT-R Assumption Space
13
Chunks: Example

(CHUNK-TYPE NAME SLOT1 SLOT2 ... SLOTN)

(FACT34
   isa      ADDITION-FACT
   ADDEND1  THREE
   ADDEND2  FOUR
   SUM      SEVEN)
14
Chunks: Example

(CLEAR-ALL)
(CHUNK-TYPE addition-fact addend1 addend2 sum)
(CHUNK-TYPE integer value)
(ADD-DM
   (fact34 isa addition-fact addend1 three addend2 four sum seven)
   (three  isa integer value 3)
   (four   isa integer value 4)
   (seven  isa integer value 7))
15
Chunks: Example
[Network diagram: FACT34 (isa ADDITION-FACT) with ADDEND1 → THREE, ADDEND2 → FOUR, SUM → SEVEN; THREE, FOUR, and SEVEN are INTEGER chunks with VALUE 3, 4, and 7.]
16
Chunks: Exercise I

Fact: The cat sits on the mat.

(Add-DM
   (fact007 isa proposition  agent cat007  action sits_on  object mat))

[Network diagram: fact007 (isa proposition) with agent → cat007, action → sits_on, object → mat.]
17
Chunks: Exercise II

Fact: The black cat with 5 legs sits on the mat.

[Network diagram: fact007 (isa proposition) with agent → cat007, action → sits_on, object → mat; cat007 (isa cat) with legs → 5, color → black. A possible encoding is sketched below.]
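One way to encode this fact (a sketch only; the slide leaves the encoding as an exercise, and the chunk-type definitions below are assumptions consistent with the diagram labels):

(Chunk-Type proposition agent action object)
(Chunk-Type cat legs color)
(Add-DM
   (fact007 isa proposition  agent cat007  action sits_on  object mat)
   (cat007  isa cat  legs 5  color black))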
18
Chunks: Exercise III

Fact: The rich young professor buys a beautiful and expensive city house.

Chunks:

(Chunk-Type proposition agent action object)
(Chunk-Type prof money-status age)
(Chunk-Type house kind price status)
(Add-DM
   (fact008 isa proposition  agent prof08  action buys  object obj1001)
   (prof08  isa prof  money-status rich  age young)
   (obj1001 isa house  kind city-house  price expensive  status beautiful))

[Network diagram: fact008 (isa proposition) with agent → prof08, action → buys, object → obj1001; prof08 (isa prof) with money-status → rich, age → young; obj1001 (isa house) with kind → city-house, price → expensive, status → beautiful.]
19
A Production is
1. The greatest idea in cognitive science.
2. The least appreciated construct in cognitive science.
3. A 50 millisecond step of cognition.
4. The source of the serial bottleneck in an otherwise parallel system.
5. A condition-action data structure with variables.
6. A formal specification of the flow of information from cortex to basal ganglia and back again.
20
Productions

Key properties: modularity, abstraction, goal/buffer factoring, conditional asymmetry

Structure of productions:

(p name
   Specification of Buffer Tests              ; condition part
==>                                           ; delimiter
   Specification of Buffer Transformations    ; action part
)
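As a concrete instance of this structure, here is one of the productions used later in this tutorial's sentence-memory model:

(P skip-the
   =goal>  ISA comprehend-sentence  word "the"
==>
   =goal>  word nil)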
21
ACT-R 5.0 Buffers
1. Goal Buffer (=goal, +goal)
   - represents where one is in the task
   - preserves information across production cycles
2. Retrieval Buffer (=retrieval, +retrieval)
   - holds information retrieved from declarative memory
   - seat of activation computations
3. Visual Buffers
   - location (=visual-location, +visual-location)
   - visual objects (=visual, +visual)
   - an attention switch corresponds to a buffer transformation
4. Auditory Buffers (=aural, +aural)
   - analogous to visual
5. Manual Buffers (=manual, +manual)
   - elaborate theory of manual movement including feature preparation, Fitts's law, and device properties
6. Vocal Buffers (=vocal, +vocal)
   - analogous to manual buffers but less well developed
22
Model for Anderson (1974)

Participants read a story consisting of active and passive sentences. They are then asked to verify either active or passive sentences. All foils are subject-object reversals. The predictions of the ACT-R model are almost parameter-free.

DATA (sec)
Studied/Tested   Active-Active  Active-Passive  Passive-Active  Passive-Passive
Targets          2.25           2.80            2.30            2.75
Foils            2.55           2.95            2.55            2.95

PREDICTIONS (sec)
Targets          2.36           2.86            2.36            2.86
Foils            2.51           3.01            2.51            3.01

CORRELATION: 0.978    MEAN DEVIATION: 0.072
23
250 msec in the Life of ACT-R: Reading the Word "The"

Identifying left-most location
  Time 63.900 Find-Next-Word Selected
  Time 63.950 Find-Next-Word Fired
  Time 63.950 Module VISION running command FIND-LOCATION
Attending to word
  Time 63.950 Attend-Next-Word Selected
  Time 64.000 Attend-Next-Word Fired
  Time 64.000 Module VISION running command MOVE-ATTENTION
  Time 64.050 Module VISION running command FOCUS-ON
Encoding word
  Time 64.050 Read-Word Selected
  Time 64.100 Read-Word Fired
  Time 64.100 Failure Retrieved
Skipping "The"
  Time 64.100 Skip-The Selected
  Time 64.150 Skip-The Fired
24
Attending to a Word in Two Productions

(P find-next-word
   =goal>             ISA comprehend-sentence  word nil
==>
   +visual-location>  ISA visual-location  screen-x lowest  attended nil
   =goal>             word looking)

(P attend-next-word
   =goal>             ISA comprehend-sentence  word looking
   =visual-location>  ISA visual-location
==>
   =goal>             word attending
   +visual>           ISA visual-object  screen-pos =visual-location)

Annotations (from the slide):
  find-next-word: no word is currently being processed, so find the left-most unattended location and update the state.
  attend-next-word: the model is looking for a word and a visual location has been identified, so attend to the object in that location and update the state.
25
Processing "The" in Two Productions

(P read-word
   =goal>       ISA comprehend-sentence  word attending
   =visual>     ISA text  value =word  status nil
==>
   =goal>       word =word
   +retrieval>  ISA meaning  word =word)

(P skip-the
   =goal>       ISA comprehend-sentence  word "the"
==>
   =goal>       word nil)

Annotations (from the slide):
  read-word: attending to a word and the word has been identified, so hold the word in the goal buffer and retrieve the word's meaning.
  skip-the: the word is "the", so set the goal to process the next word.
26
Processing "missionary" in 450 msec

Identifying left-most unattended location
  Time 64.150 Find-Next-Word Selected
  Time 64.200 Find-Next-Word Fired
  Time 64.200 Module VISION running command FIND-LOCATION
Attending to word
  Time 64.200 Attend-Next-Word Selected
  Time 64.250 Attend-Next-Word Fired
  Time 64.250 Module VISION running command MOVE-ATTENTION
  Time 64.300 Module VISION running command FOCUS-ON
Encoding word
  Time 64.300 Read-Word Selected
  Time 64.350 Read-Word Fired
  Time 64.550 Missionary Retrieved
Processing the first noun
  Time 64.550 Process-First-Noun Selected
  Time 64.600 Process-First-Noun Fired
27
Processing the Word "missionary"

Missionary 0.000   isa MEANING   word "missionary"

(P process-first-noun
   =goal>       ISA comprehend-sentence  agent nil  action nil  word =y
   =retrieval>  ISA meaning  word =y
==>
   =goal>       agent =retrieval  word nil)

Annotation (from the slide): neither agent nor action has been assigned and the word's meaning has been retrieved, so assign the meaning to the agent slot and set the goal to process the next word.
28
Three More Words in the Life of ACT-R: 950 msec

Processing "was"
  Time 64.600 Find-Next-Word Selected
  Time 64.650 Find-Next-Word Fired
  Time 64.650 Module VISION running command FIND-LOCATION
  Time 64.650 Attend-Next-Word Selected
  Time 64.700 Attend-Next-Word Fired
  Time 64.700 Module VISION running command MOVE-ATTENTION
  Time 64.750 Module VISION running command FOCUS-ON
  Time 64.750 Read-Word Selected
  Time 64.800 Read-Word Fired
  Time 64.800 Failure Retrieved
  Time 64.800 Skip-Was Selected
  Time 64.850 Skip-Was Fired

Processing "feared"
  Time 64.850 Find-Next-Word Selected
  Time 64.900 Find-Next-Word Fired
  Time 64.900 Module VISION running command FIND-LOCATION
  Time 64.900 Attend-Next-Word Selected
  Time 64.950 Attend-Next-Word Fired
  Time 64.950 Module VISION running command MOVE-ATTENTION
  Time 65.000 Module VISION running command FOCUS-ON
  Time 65.000 Read-Word Selected
  Time 65.050 Read-Word Fired
  Time 65.250 Fear Retrieved
  Time 65.250 Process-Verb Selected
  Time 65.300 Process-Verb Fired

Processing "by"
  Time 65.300 Find-Next-Word Selected
  Time 65.350 Find-Next-Word Fired
  Time 65.350 Module VISION running command FIND-LOCATION
  Time 65.350 Attend-Next-Word Selected
  Time 65.400 Attend-Next-Word Fired
  Time 65.400 Module VISION running command MOVE-ATTENTION
  Time 65.450 Module VISION running command FOCUS-ON
  Time 65.450 Read-Word Selected
  Time 65.500 Read-Word Fired
  Time 65.500 Failure Retrieved
  Time 65.500 Skip-By Selected
  Time 65.550 Skip-By Fired
29
Reinterpreting the Passive

(P skip-by
   =goal>  ISA comprehend-sentence  word "by"  agent =per
==>
   =goal>  word nil  object =per  agent nil)
30
Two More Words in the Life of ACT-R: 700 msec

Processing "the"
  Time 65.550 Find-Next-Word Selected
  Time 65.600 Find-Next-Word Fired
  Time 65.600 Module VISION running command FIND-LOCATION
  Time 65.600 Attend-Next-Word Selected
  Time 65.650 Attend-Next-Word Fired
  Time 65.650 Module VISION running command MOVE-ATTENTION
  Time 65.700 Module VISION running command FOCUS-ON
  Time 65.700 Read-Word Selected
  Time 65.750 Read-Word Fired
  Time 65.750 Failure Retrieved
  Time 65.750 Skip-The Selected
  Time 65.800 Skip-The Fired

Processing "cannibal"
  Time 65.800 Find-Next-Word Selected
  Time 65.850 Find-Next-Word Fired
  Time 65.850 Module VISION running command FIND-LOCATION
  Time 65.850 Attend-Next-Word Selected
  Time 65.900 Attend-Next-Word Fired
  Time 65.900 Module VISION running command MOVE-ATTENTION
  Time 65.950 Module VISION running command FOCUS-ON
  Time 65.950 Read-Word Selected
  Time 66.000 Read-Word Fired
  Time 66.200 Cannibal Retrieved
  Time 66.200 Process-Last-Word-Agent Selected
  Time 66.250 Process-Last-Word-Agent Fired
31
Retrieving a Memory: 250 msec

  Time 66.250 Retrieve-Answer Selected
  Time 66.300 Retrieve-Answer Fired
  Time 66.500 Goal123032 Retrieved

(P retrieve-answer
   =goal>       ISA comprehend-sentence  agent =agent  action =verb  object =object  purpose test
==>
   =goal>       purpose retrieve-test
   +retrieval>  ISA comprehend-sentence  action =verb  purpose study)

Annotations (from the slide): sentence processing is complete, so update the state and retrieve a studied sentence involving the verb.
32
Generating a Response: 410 msec

  Time 66.500 Answer-No Selected
  Time 66.700 Answer-No Fired
  Time 66.700 Module MOTOR running command PRESS-KEY
  Time 66.850 Module MOTOR running command PREPARATION-COMPLETE
  Time 66.910 Device running command OUTPUT-KEY

(P answer-no
   =goal>       ISA comprehend-sentence  agent =agent  action =verb  object =object  purpose retrieve-test
   =retrieval>  ISA comprehend-sentence  - agent =agent  action =verb  - object =object  purpose study
==>
   =goal>       purpose done
   +manual>     ISA press-key  key "d")

Annotations (from the slide): ready to test; the retrieved sentence does not match the agent or the object, so update the state and indicate "no".
33
Subsymbolic Level

The subsymbolic level reflects an analytic characterization of connectionist computations. These computations have been implemented in ACT-RN (Lebiere & Anderson, 1993), but ACT-RN is not a practical modeling system.

1. Production utilities are responsible for determining which productions get selected when there is a conflict.
2. Production utilities have been considerably simplified in ACT-R 5.0 over ACT-R 4.0.
3. Chunk activations are responsible for determining which chunks (if any) get retrieved and how long it takes to retrieve them.
4. Chunk activations have been simplified in ACT-R 5.0, and a major step has been taken towards the goal of parameter-free predictions by fixing a number of the parameters.

As with the symbolic level, the subsymbolic level is not static but changes in the light of experience. Subsymbolic learning allows the system to adapt to the statistical structure of the environment.
34
Activation

[Diagram: a production whose condition tests the goal (=goal> isa write, relation sum, arg1 Three, arg2 Four) and whose action requests a retrieval (+retrieval> isa addition-fact, addend1 Three, addend2 Four). Chunk i (the addition fact with sum Seven) receives base-level activation B_i, spreading activation S_ji weighted by the source activations of the goal elements Three and Four, and partial-matching contributions through the similarities Sim_kl.]
35
Chunk Activation

   A_i = B_i + Σ_j W_j · S_ji + Σ_k MP · Sim_kl + ε

   (activation = base activation + source activation + mismatch penalty × similarity value + noise)
Activation makes chunks available to the degree that past experiences indicate that they will be useful at the particular moment:
  Base-level: general past usefulness
  Associative activation: relevance to the general context
  Mismatch penalty: relevance to the specific match required
  Noise: stochasticity, useful to avoid getting stuck in local minima
36
Activation, Latency and Probability
  • Retrieval time for a chunk is a negative
    exponential function of its activation
  • Probability of retrieval of a chunk follows the
    Boltzmann (softmax) distribution
  • The chunk with the highest activation is
    retrieved, provided that it reaches the retrieval
    threshold τ
  • For purposes of latency and probability, the
    threshold can be considered as a virtual chunk
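In equation form (the standard ACT-R 5.0 latency and retrieval-probability equations, stated here for reference; they are not reproduced in this transcript), with F the latency factor, τ the retrieval threshold, and s the activation noise parameter:

   Time_i = F · e^{-A_i}
   P(retrieve chunk i) = 1 / (1 + e^{-(A_i - τ)/s})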

37
Base-Level Activation

   A_i = B_i + …

The base-level activation B_i of chunk C_i reflects a context-independent estimate of how likely C_i is to match a production, i.e. B_i is an estimate of the log odds that C_i will be used. Two factors determine B_i:
  the frequency with which C_i is used
  the recency with which C_i was used
38
Source Activation

   Σ_j W_j · S_ji

The source activations W_j reflect the amount of attention given to the elements (i.e. the fillers) of the current goal. ACT-R assumes a fixed capacity for source activation:

   W = Σ_j W_j

reflects an individual-difference parameter.
39
Associative Strengths

   Σ_j W_j · S_ji

The association strength S_ji between chunks C_j and C_i is a measure of how often C_i was needed (retrieved) when C_j was an element of the goal, i.e. S_ji estimates the log likelihood ratio of C_j being a source of activation if C_i was retrieved.
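In practice, ACT-R approximates these strengths from the fan of each source chunk. The following equation is the standard ACT-R approximation and is stated here for reference (it is not reproduced in this transcript):

   S_ji = S − ln(fan_j)

where fan_j is the number of chunks associated with C_j and S is the maximum associative strength. This relationship is what produces the fan effect shown on the next slide.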
40
Application: Fan Effect
41
Partial Matching
  • The mismatch penalty is a measure of the amount of control over memory retrieval: MP = 0 gives free association; a very large MP means perfect matching; intermediate values allow some mismatching in the search for a memory match.
  • Similarity values are defined between the desired value k specified by the production and the actual value l present in the retrieved chunk. This provides generalization properties similar to those in neural networks; the similarity value is essentially equivalent to the dot product between distributed representations.

42
Application: Cognitive Arithmetic
43
Noise
  • Noise provides the essential stochasticity of human behavior
  • Noise also provides a powerful way of exploring the world
  • Activation noise is composed of two components:
    - a permanent noise accounting for encoding variability
    - a transient noise for moment-to-moment variation
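Both components are conventionally drawn from logistic distributions. This parameterization is the standard ACT-R one and is added here for reference rather than taken from the slide: a logistic noise with scale parameter s has variance σ² = (π²/3)·s², and the permanent and transient components each have their own s.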

44
Application: Paper Rocks Scissors
(Lebiere & West, 1999)
  • Too little noise makes the system too
    deterministic.
  • Too much noise makes the system too random.
  • This is not limited to game-playing situations!

45
Base-Level Learning
Based on the Rational Analysis of the Environment (Schooler & Anderson, 1997)

Base-level activation reflects the log odds that a chunk will be needed. In the environment, the odds that a fact will be needed decay as a power function of how long it has been since it was last used. The effects of multiple uses sum in determining the odds of being used.
Base-Level Learning Equation:

   B_i = ln( n / (1 − d) ) − d · ln(L)

where n is the number of presentations of chunk i and L is the time since the chunk was created. Note: the decay parameter d has been set to 0.5 in most ACT-R models.
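As a worked illustration (with made-up numbers, not from the slide): a chunk presented n = 5 times over a lifetime of L = 100 seconds, with d = 0.5, has

   B_i = ln(5 / 0.5) − 0.5 · ln(100) = ln(10) − 0.5 · ln(100) ≈ 2.303 − 2.303 = 0

so additional presentations raise B_i while the passage of time lowers it.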
46
Paired Associate: Study

  Time 5.000 Find Selected
  Time 5.050 Module VISION running command FIND-LOCATION
  Time 5.050 Find Fired
  Time 5.050 Attend Selected
  Time 5.100 Module VISION running command MOVE-ATTENTION
  Time 5.100 Attend Fired
  Time 5.150 Module VISION running command FOCUS-ON
  Time 5.150 Associate Selected
  Time 5.200 Associate Fired

(p associate
   =goal>    isa goal  arg1 =stimulus  step attending  state study
   =visual>  isa text  value =response  status nil
==>
   =goal>    isa goal  arg2 =response  step done
   +goal>    isa goal  state test  step waiting)

Annotations (from the slide): attending to the word during study; the visual buffer holds the response, so store the response in the goal with the stimulus and prepare for the next trial.
47
Paired Associate: Successful Recall

  Time 10.000 Find Selected
  Time 10.050 Module VISION running command FIND-LOCATION
  Time 10.050 Find Fired
  Time 10.050 Attend Selected
  Time 10.100 Module VISION running command MOVE-ATTENTION
  Time 10.100 Attend Fired
  Time 10.150 Module VISION running command FOCUS-ON
  Time 10.150 Read-Stimulus Selected
  Time 10.200 Read-Stimulus Fired
  Time 10.462 Goal Retrieved
  Time 10.462 Recall Selected
  Time 10.512 Module MOTOR running command PRESS-KEY
  Time 10.512 Recall Fired
  Time 10.762 Module MOTOR running command PREPARATION-COMPLETE
  Time 10.912 Device running command OUTPUT-KEY
48
Paired Associate: Successful Recall (cont.)

(p read-stimulus
   =goal>       isa goal  step attending  state test
   =visual>     isa text  value =val
==>
   +retrieval>  isa goal  relation associate  arg1 =val
   =goal>       isa goal  relation associate  arg1 =val  step testing)

(p recall
   =goal>       isa goal  relation associate  arg1 =val  step testing
   =retrieval>  isa goal  relation associate  arg1 =val  arg2 =ans
==>
   +manual>     isa press-key  key =ans
   =goal>       step waiting)
49
Paired Associate: Example

Data:
Trial  Accuracy  Latency
1      .000      0.000
2      .526      2.156
3      .667      1.967
4      .798      1.762
5      .887      1.680
6      .924      1.552
7      .958      1.467
8      .954      1.402

Predictions:
Trial  Accuracy  Latency
1      .000      0.000
2      .515      2.102
3      .570      1.730
4      .740      1.623
5      .850      1.584
6      .865      1.508
7      .895      1.552
8      .930      1.462

? (collect-data 10)    ; Note: simulated runs show random fluctuation.
ACCURACY (0.0 0.515 0.570 0.740 0.850 0.865 0.895 0.930)
CORRELATION: 0.996   MEAN DEVIATION: 0.053
LATENCY (0 2.102 1.730 1.623 1.589 1.508 1.552 1.462)
CORRELATION: 0.988   MEAN DEVIATION: 0.112
NIL
50
Production Utility

P is the expected probability of success, G is the value of the goal, and C is the expected cost.

t reflects the noise in evaluation and is like temperature in the Boltzmann equation.

a is prior successes, m is experienced successes, b is prior failures, and n is experienced failures.
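The equations these parameters refer to are not reproduced in the transcript; the standard ACT-R 5.0 forms implied by the slide's labels are (stated here for reference):

   U_p = P · G − C
   P = (a + m) / (a + m + b + n)
   Prob(select production p) = e^{U_p / t} / Σ_j e^{U_j / t}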
51
Building Sticks Task (Lovett)
52
Lovett & Anderson, 1996
[Figure: data from Lovett & Anderson, 1996, for conditions with success proportions of 2/3 and 5/6.]
53
Building Sticks Demo

Web Address: ACT-R Home Page → Published ACT-R Models → Atomic Components of Thought, Chapter 4 → Building Sticks Model
54
(No Transcript)
55
Decay of Experience
Note: such temporal weighting is critical in the real world.
56
Production Compilation: The Basic Idea

(p read-stimulus
   =goal>       isa goal  step attending  state test
   =visual>     isa text  value =val
==>
   +retrieval>  isa goal  relation associate  arg1 =val
   =goal>       relation associate  arg1 =val  step testing)

(p recall
   =goal>       isa goal  relation associate  arg1 =val  step testing
   =retrieval>  isa goal  relation associate  arg1 =val  arg2 =ans
==>
   +manual>     isa press-key  key =ans
   =goal>       step waiting)

; The two productions above compile into one specialized production in which the
; retrieval of the associate ("vanilla" - "7") has been built in:

(p recall-vanilla
   =goal>       isa goal  step attending  state test
   =visual>     isa text  value "vanilla"
==>
   +manual>     isa press-key  key "7"
   =goal>       relation associate  arg1 "vanilla"  step waiting)
57
Production Compilation: The Principles

1. Perceptual-Motor Buffers: avoid compositions that would result in jamming when one tries to build two operations on the same buffer into the same production.
2. Retrieval Buffer: except for failure tests, proceduralize out the retrieval and build more specific productions.
3. Goal Buffers: complex rules describe the merging.
4. Safe Productions: a compiled production will not produce any result that the original productions did not produce.
5. Parameter Setting:
   Successes = P · initial-experience
   Failures = (1 − P) · initial-experience
   Efforts = (Successes + Failures) · (C + cost-penalty)
58
Production Compilation: The Successes

1. Taatgen: learning of inflection (English past tense and German plural). Shows that production compilation can come up with generalizations.
2. Taatgen: learning of an air-traffic control task. Shows that production compilation can deal with a complex perceptual-motor skill.
3. Anderson: learning of the productions for performing a paired-associate task from instructions. Solves the mystery of where the productions for doing an experiment come from.
4. Anderson: learning to perform an anti-air warfare coordinator task from instructions. Shows the same as 2 and 3.
5. Anderson: learning in the fan effect that produces the interaction between fan and practice. Justifies a major simplification in the parameterization of productions: no strength separate from utility.

Note: all of these examples involve all forms of learning in ACT-R occurring simultaneously: acquiring new chunks, acquiring new productions, activation learning, and utility learning.
59
Predicting fMRI BOLD Response from Buffer Activity

Example: activity of the retrieval buffer during equation solving predicts activity in left dorsolateral prefrontal cortex. In the prediction equation (shown on the slide), D_i is the duration of the i-th retrieval and t_i is the time at which that retrieval is initiated.
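The transcript does not reproduce the equation itself. A form commonly used for ACT-R BOLD predictions (an assumption here, stated for reference) convolves the buffer's demand function with a gamma-shaped hemodynamic response:

   B(t) = M · Σ_i ∫ from t_i to t_i + D_i of h(t − x) dx,   with   h(t) = (t / s)^a · e^{−t / s}

where M is a magnitude scale and a, s shape the hemodynamic response.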
60
21-Second Structure of an fMRI Trial

[Trial structure: Load (a = 18, b = 6, c = 5) → Equation (c·x + 3 = a) → Blank Period, with 1.5-second scans throughout.]
61
Solving 5x + 3 = 18

  Time 3.000 Find-Right-Term Selected
  Time 3.050 Find-Right-Term Fired
  Time 3.050 Module VISION running command FIND-
  Time 3.050 Attend-Next-Term-Equation Selected
  Time 3.100 Attend-Next-Term-Equation Fired
  Time 3.100 Module VISION running command MOVE-
  Time 3.150 Module VISION running command FOCUS-ON
  Time 3.150 Encode Selected
  Time 3.200 Encode Fired
  Time 3.281 18 Retrieved
  Time 3.281 Process-Value-Integer Selected
  Time 3.331 Process-Value-Integer Fired
  Time 3.331 Module VISION running command FIND-
  Time 3.331 Attend-Next-Term-Equation Selected
  Time 3.381 Attend-Next-Term-Equation Fired
  Time 3.381 Module VISION running command MOVE-
  Time 3.431 Module VISION running command FOCUS-ON
  Time 3.431 Encode Selected
  Time 3.481 Encode Fired
  Time 3.562 3 Retrieved
  Time 3.562 Process-Op1-Integer Selected
  Time 3.612 Process-Op1-Integer Fired
  Time 3.612 Module VISION running command FIND-
  Time 3.612 Attend-Next-Term-Equation Selected
  Time 3.662 Attend-Next-Term-Equation Fired
  Time 3.662 Module VISION running command MOVE-
  Time 3.712 Module VISION running command FOCUS-ON
  Time 3.712 Encode Selected
  Time 3.762 Encode Fired
  Time 4.362 Inverse-of- Retrieved
  Time 4.362 Process-Operator Selected
62
Solving 5x + 3 = 18 (cont.)

  Time 4.412 Process-Operator Fired
  Time 5.012 F318 Retrieved
  Time 5.012 Finish-Operation1 Selected
  Time 5.062 Finish-Operation1 Fired
  Time 5.062 Module VISION running command FIND-
  Time 5.062 Attend-Next-Term-Equation Selected
  Time 5.112 Attend-Next-Term-Equation Fired
  Time 5.112 Module VISION running command MOVE-
  Time 5.162 Module VISION running command FOCUS-ON
  Time 5.162 Encode Selected
  Time 5.212 Encode Fired
  Time 5.293 5 Retrieved
  Time 5.293 Process-Op2-Integer Selected
  Time 5.343 Process-Op2-Integer Fired
  Time 5.943 F315 Retrieved
  Time 5.943 Finish-Operation2 Selected
  Time 5.993 Finish-Operation2 Fired
  Time 5.993 Retrieve-Key Selected
  Time 6.043 Retrieve-Key Fired
  Time 6.124 3 Retrieved
  Time 6.124 Generate-Answer Selected
  Time 6.174 Generate-Answer Fired
  Time 6.174 Module MOTOR running command PRESS-KEY
  Time 6.424 Module MOTOR running command PREPARATION-
  Time 6.574 Device running command OUTPUT-KEY ("3" 3.574)
63
Left Dorsolateral Prefrontal Cortex
64
[Plot: percent activation change as a function of scan number (1.5 sec per scan) for the 5x + 3 = 18 and c·x + 3 = a conditions.]