Title: Electrophysiological evidence for prelinguistic infants' word recognition in continuous speech
1. Electrophysiological evidence for prelinguistic infants' word recognition in continuous speech
- Valesca Kooijman, Peter Hagoort, Anne Cutler
- Cognitive Brain Research 24 (2005) 109–116
- Jill Warker
- Psych 593, 9/15/2005
2. Recognizing Words in Speech
3. The Trials and Tribulations of Word Segmentation
- Why is it so difficult?
- Speech is extremely fast (20 phonemes/sec or faster)
- Phonemes don't always sound the same
5. The Trials and Tribulations of Word Segmentation
- Why is it so difficult?
- Speech is extremely fast (20 phonemes/sec or faster)
- Phonemes don't always sound the same
  - in isolation
  - in context
- No pauses between words
  - Sounds and words run together
  - "I scream" or "ice cream"
6. So... how do we do it?
- Adults can make use of meaning and vocabulary
- But how do infants, who don't yet have a lexicon, do it?
- At what age do we begin to segment words from continuous speech streams?
7. When do we begin to segment?
- Infants begin to show segmentation abilities around 7.5 months
- Become less sensitive to non-native language sounds
- Start to have the beginnings of phonetic categories
(Jusczyk, 1999)
8. Headturn Preference Procedure
9. Infant Behavioral Studies
- Prefer continuous-stream passages containing target words (familiarized with targets in isolation)
- Prefer target words in isolation (familiarized with targets embedded in passages)
- Respond to more than acoustic qualities (familiarized with "tup", heard "cup" in a passage, but did not prefer the "cup" passage)
- Possible beginnings of a lexicon (familiarized on targets but tested 24 hours later)
10. Sources of Information
- Stress patterns, rhythmic cues
  - prosodic cues like pitch
- Distributional properties and statistical regularities (Saffran et al., 1996; see the sketch after this list)
- Phonotactics
  - "fat cat", not "fa tcat": /tk/ can't begin a syllable
- Allophonic cues
  - "night rates" vs. "nitrates"
(Jusczyk, 1999)
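The statistical-regularities cue can be made concrete with a short computation. Below is a minimal Python sketch of the transitional-probability statistic Saffran et al. (1996) proposed infants track; it is not the study's method, and the three-syllable "words", stream length, and function name are illustrative assumptions.

```python
import random
from collections import Counter

def transitional_probabilities(syllables):
    """P(next | current) for each adjacent syllable pair in the stream.

    Transitional probability tends to be high between syllables inside a
    word and lower across word boundaries, giving a purely statistical
    cue to where words begin and end.
    """
    pair_counts = Counter(zip(syllables, syllables[1:]))
    first_counts = Counter(syllables[:-1])
    return {pair: n / first_counts[pair[0]] for pair, n in pair_counts.items()}

# Hypothetical stream: 60 randomly ordered three-syllable "words".
random.seed(0)
words = [["bi", "da", "ku"], ["go", "la", "bu"], ["tu", "pi", "ro"]]
stream = [syl for word in random.choices(words, k=60) for syl in word]

for (a, b), tp in sorted(transitional_probabilities(stream).items()):
    print(f"TP({a} -> {b}) = {tp:.2f}")  # ~1.0 within words, ~0.33 at boundaries
```

On such a stream, within-word pairs come out near 1.0 while cross-boundary pairs hover around 0.33, which is exactly the contrast a statistical learner could exploit to place word boundaries.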
11. Going beyond HPP
- How quickly do infants segment?
- The proposed answer: ERPs
  - can look more closely at the time course
  - look at the time needed to segment and recognize a word after hearing it in the stream
  - look at whether word recognition is done on whole words or whether recognition begins before the word is over
12. Procedure
- 28 ten-month-olds
- 20 experimental blocks
- No fewer than 9 blocks
- Familiarized with a two-syllable word in isolation
  - e.g., "python"
- Test phase: four sentences with the familiar word and four without (a minimal block sketch follows)
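As a concrete picture of this design, here is a minimal Python sketch of one block. The four-with/four-without test split is from the slide; the number of familiarization tokens, the carrier-frame wording, and the novel word are assumptions, not the paper's stimuli.

```python
import random

def make_block(familiar_word, novel_word, frames, n_fam_tokens=10, seed=0):
    """One block: a two-syllable word presented in isolation, then a test
    phase of eight sentences, four containing the familiarized word and
    four containing a novel word. n_fam_tokens and the carrier frames are
    assumptions; only the 4/4 test split comes from the slide.
    """
    rng = random.Random(seed)
    test = ([f.format(word=familiar_word) for f in frames[:4]] +
            [f.format(word=novel_word) for f in frames[4:]])
    rng.shuffle(test)
    return {"familiarization": [familiar_word] * n_fam_tokens, "test": test}

# Hypothetical carrier frames, not the paper's sentences.
frames = [f"Carrier sentence {i} containing {{word}} somewhere." for i in range(8)]
print(make_block("python", "collar", frames)["test"])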
13. Results: ERPs collected during familiarization
14. Positivity diminishes with familiarization
- significant effects over left and right frontal quadrants
- familiarity effect starts early in the word (around 160 ms)
15. Results: ERPs collected during test
- over the left hemisphere, familiar words elicited a negative deflection from 350–500 ms
- response to familiarity begins around 340–370 ms
(a sketch of this window analysis follows)
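To show how such a window effect is typically quantified, here is a minimal Python sketch comparing mean amplitude in the 350–500 ms window for simulated familiar vs. unfamiliar epochs. The data, trial counts, and effect size are invented for illustration; this is not the paper's analysis pipeline.

```python
import numpy as np

def mean_amplitude(epochs, times, window=(0.35, 0.50)):
    """Mean amplitude in a latency window.

    epochs: (n_trials, n_samples) array, time-locked to word onset.
    times:  (n_samples,) array of sample times in seconds.
    The 0.35-0.50 s default matches the reported 350-500 ms window.
    """
    mask = (times >= window[0]) & (times <= window[1])
    return epochs[:, mask].mean()

# Simulated single-electrode data: 60 trials x 600 samples, -0.2 to 1.0 s.
times = np.linspace(-0.2, 1.0, 600)
rng = np.random.default_rng(1)
familiar = rng.normal(0.0, 5.0, size=(60, 600))
unfamiliar = rng.normal(0.0, 5.0, size=(60, 600))
familiar[:, (times >= 0.35) & (times <= 0.50)] -= 2.0  # inject a negativity

diff = mean_amplitude(familiar, times) - mean_amplitude(unfamiliar, times)
print(f"familiar minus unfamiliar, 350-500 ms: {diff:.2f} (arbitrary units)")
```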
16. Results Recap
- Infants showed a recognition effect (reduced positivity) to a word within half a second during the continuous stream
- About 180 ms longer to recognize a word in the stream than in isolation
  - harder to determine onset
- Difference in the scalp distribution of the effects
  - suggests that different processes may be at work during recognition and during segmentation
17. Their Conclusions
- Infants start the recognition response before a word is over
- Begin segmentation and recognition processes by the end of the first syllable
- NOT using whole-word templates
  - May be accessing memory representations from familiarization and matching them to the initial portion of the word in the stream
- Able to generalize across sound tokens
  - Further evidence that they are forming phonetic categories
18. Discussion
- Can the hemispheric differences in the results really be attributed to different processes?
- How much do you think segmentation contributes to the formation of the lexicon?
- Given the right hemisphere's role in activation of broad semantic meaning, do adults show more activity in the right hemisphere during word recognition?
- Are ERPs sensitive enough to provide a way to investigate at what point in time the different segmentation cues come into play, as well as how influential they are?
19. Mismatch Negativity (MMN) paradigm
- Passive oddball paradigm: an unexpected stimulus change results in a negative-going increase in amplitude (a minimal sequence sketch follows)
- Typically used to study auditory tones, phonemes, and syllables in isolation
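For concreteness, here is a minimal Python sketch of how a passive-oddball stimulus sequence might be generated. The 15% deviant rate and the no-consecutive-deviants constraint are common design conventions assumed here, not values given on the slide.

```python
import random

def oddball_sequence(n_trials=500, p_deviant=0.15, seed=0):
    """Generate a passive-oddball trial sequence: mostly repeated 'standard'
    stimuli with occasional 'deviant' ones, never two deviants in a row.
    The unexpected deviants are what elicit the MMN.
    """
    rng = random.Random(seed)
    seq = []
    for _ in range(n_trials):
        if seq and seq[-1] == "deviant":
            seq.append("standard")  # enforce the no-back-to-back constraint
        else:
            seq.append("deviant" if rng.random() < p_deviant else "standard")
    return seq

seq = oddball_sequence()
print(seq[:10], f"... {seq.count('deviant')} deviants in {len(seq)} trials")
```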