Transcript and Presenter's Notes

Title: PSY 369: Psycholinguistics


1
PSY 369 Psycholinguistics
  • Language Production & Comprehension
  • Conversation & Dialog

2
Dialog is the key
  • Why so little research on dialog?
  • Most linguistic theories were developed to
    account for sentences in de-contextualized
    isolation
  • Dialog doesn't fit the competence/performance
    distinction well
  • Hard to do experimentally
  • Conversations are interactive and largely
    unplanned
  • Pickering and Garrod (2004)
  • Proposed that processing theories of language
    comprehension and production may be flawed
    because of a focus on monologues

3
Processing models of dialog
  • Pickering and Garrod (2004)
  • Interactive alignment model
  • Alignment of situation models is central to
    successful dialogue
  • Alignment at other levels is achieved via priming
  • Alignment at one level can lead to alignment at
    another
  • Model assumes parity of representations for
    production and comprehension

4
Assumptions of the model
  • 1. Alignment of situation models comes about via
    an automatic, resource-free priming mechanism
  • 2. Representational parity between comprehension
    and production
  • 3. Alignment at one level leads to alignment at
    other (interconnected) levels (see the toy sketch
    below)
  • 4. There is no need for explicit
    perspective-taking in routine language processing
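
As a toy illustration of assumptions 1 and 3, the short Python sketch below has two simulated interlocutors whose word and structure choices converge purely through priming: comprehending an utterance boosts the activation of the word and structure it used, and a repeated word adds an extra boost to the accompanying structure. This is not Pickering and Garrod's implemented model; the two levels, the activation numbers, and the update rule are all illustrative assumptions.

```python
import random

random.seed(1)

WORDS = ["couch", "sofa"]            # alternative names for the same object
STRUCTURES = ["active", "passive"]   # alternative structures for the same message
PRIME = 0.5          # activation added to whatever was just comprehended
LEXICAL_BOOST = 0.5  # extra syntactic boost when the word is repeated too
DECAY = 0.9          # all activations decay a little on every update

def new_speaker():
    # equal resting activation for every alternative at both levels
    return {"word": {w: 1.0 for w in WORDS},
            "structure": {s: 1.0 for s in STRUCTURES}}

def choose(acts):
    # pick the most active alternative, breaking ties at random
    return max(acts, key=lambda alt: (acts[alt], random.random()))

def produce(speaker):
    return choose(speaker["word"]), choose(speaker["structure"])

def comprehend(listener, word, structure, prev_word):
    # assumption 1: priming is just an automatic activation update -
    # decay everything, then boost what was just heard
    for level, heard in (("word", word), ("structure", structure)):
        for alt in listener[level]:
            listener[level][alt] *= DECAY
        listener[level][heard] += PRIME
    # assumption 3: alignment at the lexical level strengthens alignment
    # at the syntactic level (cf. the "lexical boost" on later slides)
    if word == prev_word:
        listener["structure"][structure] += LEXICAL_BOOST

a, b = new_speaker(), new_speaker()
pair = [("A", a, b), ("B", b, a)]
prev_word = None
for turn in range(6):
    name, talker, hearer = pair[turn % 2]
    word, structure = produce(talker)
    comprehend(hearer, word, structure, prev_word)
    comprehend(talker, word, structure, prev_word)  # speakers also prime themselves
    print(f"Turn {turn + 1} ({name}): {word}, {structure}")
    prev_word = word
```

Run it and, from the second turn on, both speakers keep producing whatever was said first; the alignment falls out of the priming updates alone, with no explicit negotiation or perspective-taking.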

5
Assumptions of the model
  • 1. Alignment of situation models comes about via
    an automatic, resource-free priming mechanism

Garrod & Anderson (1987) The maze game
  • Pairs played a co-operative computer game
  • Move position markers through a maze of boxes
    connected by paths
  • Each player can only see his/her own start, goal
    and current positions
  • Some paths blocked by gates (obstacles) which are
    opened by switches
  • Gates and switches distributed differently for
    each player
  • Players must help their partner to move to switch
    positions, to change the configuration of the maze
    (see the sketch below)
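
For concreteness, the sketch below models the game state just described, assuming a simple graph-of-boxes representation. It is not Garrod and Anderson's original software, and the class and field names (PlayerView, MazeGame, and so on) are made up for the illustration: the paths are shared, but each player's start, goal, gates and switches live in a private view.

```python
from dataclasses import dataclass, field

Box = tuple       # a box named by grid position, e.g. (2, 1) = two along, one up
Path = frozenset  # an undirected connection between two boxes

@dataclass
class PlayerView:
    """What one player can see: only their own positions and their own obstacles."""
    position: Box
    goal: Box
    gates: set      # paths currently blocked for this player
    switches: set   # boxes which, when occupied, change the maze configuration

@dataclass
class MazeGame:
    paths: set                                   # the shared box-to-box connections
    players: dict = field(default_factory=dict)  # player name -> private PlayerView

    def can_move(self, who: str, dest: Box) -> bool:
        view = self.players[who]
        edge = Path({view.position, dest})
        return edge in self.paths and edge not in view.gates

    def move(self, who: str, dest: Box) -> None:
        if not self.can_move(who, dest):
            raise ValueError("no open path to that box")
        self.players[who].position = dest
        # Reaching a switch box would reconfigure the gates; which gates open
        # for whom is deliberately left out of this sketch.

# Two players share the grid of paths but have different private gates/switches:
paths = {Path({(0, 0), (1, 0)}), Path({(1, 0), (1, 1)}), Path({(1, 1), (2, 1)})}
game = MazeGame(paths=paths, players={
    "A": PlayerView(position=(0, 0), goal=(2, 1),
                    gates={Path({(1, 0), (1, 1)})}, switches={(1, 1)}),
    "B": PlayerView(position=(1, 1), goal=(0, 0),
                    gates=set(), switches={(1, 0)}),
})
print(game.can_move("A", (1, 0)))  # True: connected and not gated for A
game.move("A", (1, 0))
print(game.can_move("A", (1, 1)))  # False: connected, but blocked by A's gate
```

Because each player's obstacles are private, neither player can plan a route alone, which is what forces the location descriptions seen in the transcripts on the following slides.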

6
Assumptions of the model
  • 1. Alignment of situation models comes about via
    an automatic, resource-free priming mechanism

Garrod & Anderson (1987) The maze game
1-----B .... Tell me where you are?
2-----A Ehm Oh God (laughs)
3-----B (laughs)
4-----A Right two along from the bottom one up
5-----B Two along from the bottom, which side?
6-----A The left going from left to right in the second box.
7-----B You're in the second box.
8-----A One up (1 sec.) I take it we've got identical mazes?
9-----B Yeah well right, starting from the left, you're one along
10----A Uh-huh
11----B and one up?
12----A Yeah, and I'm trying to get to ...
7
Assumptions of the model
  • 1. Alignment of situation models comes about via
    an automatic, resource-free priming mechanism

Garrod & Anderson (1987) The maze game
41----B You are starting from the left, you're one along, one up? (2 sec.)
42----A Two along I'm not in the first box, I'm in the second box
43----B You're two along
44----A Two up (1 sec.) counting the if you take the first box as being one up
45----B (2 sec.) Uh-huh
46----A Well I'm two along, two up (1.5 sec.)
47----B Two up?
48----A Yeah (1 sec.) so I can move down one
49----B Yeah I see where you are
8
Assumptions of the model
  • 1. Alignment of situation models comes about via
    an automatic, resource-free priming mechanism

Garrod & Anderson (1987) The maze game
  • Path descriptions (36.8%)
  • See the bottom right, go two along and two up
  • Co-ordinate descriptions (23.4%)
  • I'm at C4
  • Line descriptions (22.5%)
  • I'm one up on the diagonal from bottom left to
    top right
  • Figural descriptions (17.3%)
  • See the rectangle at the bottom right, I'm in the
    top left corner of that

9
Assumptions of the model
  • 1. Alignment of situation models comes about via
    an automatic, resource-free priming mechanism

Garrod & Anderson (1987) The maze game
  • Pairs converge on different ways of describing
    spatial locations
  • Entrainment on a particular conceptualization of
    the maze
  • But little explicit negotiation
  • Entrainment increases over the course of a game
  • Description schemes as local languages
  • Rules for mapping particular expressions onto
    interpretations with respect to a common
    discourse model
  • Once the meaning of a particular expression is
    fixed, players try to avoid an ambiguous use of
    that expression

10
Assumptions of the model
  • 1. Alignment of situation models comes about via
    an automatic, resource-free priming mechanism

Garrod & Anderson (1987) The maze game
  • Entrainment emerges from a simple heuristic
  • Formulate your output using the same rules of
    interpretation as those needed to understand the
    most recent input (see the sketch below)
  • Representations used to comprehend an utterance
    are recycled during subsequent production
  • Leads to local consistency
  • Helps to establish a mutually satisfactory
    description scheme with least collaborative effort
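
That heuristic can be sketched in a few lines, reusing the four description-scheme labels from the earlier slide; the starting preferences and data layout are invented for the illustration, and this is not Garrod and Anderson's procedure. Each simulated player simply produces with whatever scheme it most recently had to interpret, and falls back on its own preference only when there is no recent input.

```python
# The four description types from the earlier slide; the starting
# preferences below are made up for the illustration.
SCHEMES = ("path", "co-ordinate", "line", "figural")

def describe(speaker):
    # produce with the interpretation rules used for the most recent input,
    # falling back on the speaker's own initial preference at the start
    return speaker["last_comprehended"] or speaker["preference"]

def comprehend(listener, scheme):
    # the representations used to understand the utterance stay available
    # and get recycled in the listener's next production
    listener["last_comprehended"] = scheme

a = {"preference": "figural", "last_comprehended": None}
b = {"preference": "co-ordinate", "last_comprehended": None}

history = []
for speaker, listener in [(a, b), (b, a)] * 3:
    scheme = describe(speaker)
    comprehend(listener, scheme)
    history.append(scheme)
print(history)  # after the first exchange both players stick with A's scheme
```

After the first exchange both players keep producing the first speaker's scheme, which is the local consistency described above.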

11
Assumptions of the model
  • 2. Representational parity between comprehension
    and production
  • Parity important for interactive alignment
  • We don't go around repeating other people's
    utterances!
  • Comprehension-to-production priming (Branigan,
    Pickering, Cleland, 2000)
  • Priming from sentences which were only heard
  • Suggests that representations are shared across
    modalities
  • Equivalent to production-to-production effects?
  • E.g. Bock (1986), syntactic priming in language
    production tasks

12
Assumptions of the model
  • 3. Alignment at one level leads to alignment at
    other (interconnected) levels

Bigger priming effect when the prime noun is
semantically related to the noun in the target
  • Cleland Pickering (2003)
  • Semantic boost
  • Primes were either prenominally (the red sheep) or
    postnominally (the sheep that is red) modified NPs
  • Same (sheep to sheep), semantically related (goat
    to sheep), unrelated (knife to sheep)
  • Branigan, Pickering, Cleland (2000)
  • Lexical boost: a similar effect with the same verb

13
Assumptions of the model
  • 4. There is no need for explicit
    perspective-taking in routine language processing
  • If communication is successful, interlocutors'
    situation models come to overlap
  • Implicit common ground
  • Overlap may be small to begin with
  • But via alignment, it increases over the course
    of a conversation
  • What looks like audience design is simply a
    by-product of good alignment
  • Full common ground only consulted when there are
    sufficient processing resources available

14
Summary
  • "People use language for doing things with each
    other, and their use of language is itself a
    joint action." Clark (1996, p. 387)
  • Conversation is structured
  • But, that structure depends on more than one
    individual
  • Models of language use (production and
    comprehension) need to be developed within this
    perspective
  • Interactive Alignment model is a new theory
    attempting to do just this

15
Review for Exam 4
  • Chapters 13, 14, 15 (read 16 for interest, but I
    won't test on it)
  • Same format as the last 3 exams
  • General topics
  • Language Production
  • Conversation & dialog
  • I have fixed the link to the review sheet

16
Review for Exam 4
  • Language production
  • Paradox: form over meaning is preserved
  • Speech errors - observational & experimental
  • Tip-of-the-tongue
  • Lexical bias
  • Grammaticality constraint
  • Models of speech production
  • Levelt's model
  • Dell's model
  • Lexical bias effect, mixed errors