The Chinese Room - PowerPoint PPT Presentation

Transcript and Presenter's Notes



1
The Chinese Room
2
Philosophical vs. Empirical Questions
  • The kind of questions Turing considers concern
    what a machine could, in principle, do.
  • Could it simulate human conversation so well that
    a normal human would mistake it for a person?
  • Could it simulate altruistic behavior?
  • Could it commit suicide, i.e., self-destruct?
  • Philosophy cannot answer these empirical
    questions!
  • These are questions for science, in particular
    computer science.
  • Our question is: If a machine could do such
    things, would it count as thinking? If a machine
    could perfectly simulate human behavior, should we
    understand it as a thinking being?

3
Searle Against Strong AI
  • "Could a machine think? On the argument advanced
    here only a machine could think, and only very
    special kinds of machines, namely brains and
    machines with internal causal powers equivalent
    to those of brains. And that is why strong AI has
    little to tell us about thinking, since it is not
    about machines but about programs, and no program
    by itself is sufficient for thinking."
  • Simulation and Duplication
  • Strong AI: "The computer is not merely a tool in
    the study of the mind; rather, the appropriately
    programmed computer really is a mind."
  • Computers may simulate human psychology, in the
    way that they may simulate weather or model
    economic systems, but they don't themselves have
    cognitive states.
  • The Turing Test is not a test for intelligence.
  • Even if a program could pass the most stringent
    Turing Test, that wouldn't be sufficient for its
    having mental states.

4
Searle isn't arguing against Physicalism!
  • "Could a machine think? My own view is that
    only a machine could think, and indeed only very
    special kinds of machines, namely brains and
    machines that had the same causal powers as
    brains."
  • The problem isn't that the machines in question
    are physical (rather than spiritual) but
    precisely that they are not physical, since they
    are abstract.
  • "Strong AI only makes sense given the dualistic
    assumption that, where the mind is concerned, the
    brain doesn't matter. In strong AI (and in
    functionalism as well) what matters are programs,
    and programs are independent of their realization
    in machines...This form of dualism...insists that
    what is specifically mental about the mind has no
    intrinsic connection with the actual properties
    of the brain."
  • Searle is arguing against functionalism (and, a
    fortiori, behaviorism).
  • The Chinese Room thought-experiment is intended
    to show that passing the Turing Test isn't
    sufficient for understanding.

5
Searle isn't concerned with "feely" mental states
  • Some mental states are "feely": they have
    intrinsic qualitative character, a phenomenology,
    a "what it is like" to be in them.
  • If you don't have them, you don't know what
    they're like.
  • Locke's studious blind man thought red was like
    the sound of a trumpet.
  • Other mental states are not "feely" in this sense:
  • Believing that 2 + 2 = 4 isn't like anything.
  • Referring to Obama when one utters "Obama is
    President of the United States" rather than just
    making language-like noises.
  • Speaking, rather than just making noises, as in
    "Mind the gap" or "You have
    entered...one...two...three...four...five."
  • Searle argues that machines can't have even these
    non-"feely" mental states.

6
Searle passes the Turing Test
but Searle doesn't understand Chinese!
7
Symbol-manipulation isn't understanding
  • "From the external point of view...the answers to
    the Chinese questions and the English questions
    are equally good...but in the Chinese case, unlike
    the English case, I produce the answers by
    manipulating uninterpreted formal symbols...For
    the purposes of the Chinese, I am simply an
    instantiation of the computer program."
  •                        / P ⊃ [Q ⊃ (P • Q)]
  • 1. P                     ACP
  • 2. Q                     ACP
  • 3. P • Q                 1, 2, Conj
  • 4. Q ⊃ (P • Q)           2-3, CP
  • 5. P ⊃ [Q ⊃ (P • Q)]     1-4, CP

This is an example of manipulating uninterpreted
formal symbols. Do you understand anything?
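The point can be made concrete with a toy program (illustrative only; the phrase table and replies below are invented): it produces fluent-looking Chinese answers by pure string lookup, with no representation of meaning anywhere in the system.

```python
# A toy "Chinese Room": answers come from matching input strings against
# a rule book. The program manipulates uninterpreted symbols only; it has
# no access to what any of them mean.
RULE_BOOK = {
    "你好吗?": "我很好。",          # the program never "knows" this asks "How are you?"
    "你叫什么名字?": "我叫小明。",
    "今天天气好吗?": "今天天气很好。",
}

def answer(question: str) -> str:
    """Look the question up in the rule book: shapes in, shapes out."""
    return RULE_BOOK.get(question, "对不起。")  # default reply for unknown shapes

print(answer("你好吗?"))  # prints 我很好。
```

To a Chinese speaker the exchange looks competent; inside, there is only symbol-matching — which is exactly Searle's picture of the man in the room.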
8
Intentionality
  • Searle's complaint is that mere rule-governed
    symbol-manipulation cannot generate
    intentionality.
  • Intentionality is the power of minds to be about,
    to represent, or to stand for, things, properties,
    and states of affairs.
  • Reference is intentional in this sense: I think
    (and talk) about things.
  • Perceptions, beliefs, desires, intentions, and
    many other propositional attitudes are mental
    states with intentionality: they have content.
  • Intentionality is directedness, understood by
    Brentano as the criterion for the mental.
  • Only mental states have intrinsic
    intentionality; other things have it only in a
    derivative sense, to the extent that they are
    directed by intelligent beings.

9
Intrinsic and Derived Intentionality
  • Guns don't kill...people do.
  • The directedness of a gun to a target is derived:
    people aim and direct them to their targets.
  • Words don't refer...people do.
  • Computers, Searle argues, don't have intrinsic
    intentionality; our ascriptions of psychological
    states and intentional actions to them are merely
    metaphorical.
  • "Formal symbol manipulations by themselves
    don't have any intentionality; they are quite
    meaningless; they aren't even symbol
    manipulations, since the symbols don't symbolize
    anything...Such intentionality as computers appear
    to have is solely in the minds of those who
    program them and those who use them, those who
    send in the input and those who interpret the
    output."

10
The Mystery of Intentionality
  • Problem: granting that a variety of inanimate
    objects we use don't have intrinsic
    intentionality, what does, and why?
  • Can a car believe that the fuel/air mixture is
    too rich and adjust accordingly?
  • My (1969) Triumph Spitfire had a manual choke.
  • My first Toyota Corolla had a carburetor with an
    automatic choke that opened and closed on
    relatively simple mechanical principles.
  • My new Toyota Corolla has software I don't
    understand that does this stuff.
  • Question: Is there some level of complexity in
    the program at which we get intrinsic
    intentionality?
  • Searle's Answer: Looking in the program is
    looking in the wrong place.

11
Intentionality: The Right Stuff
  • "Precisely that feature of AI that seemed so
    appealing (the distinction between the program and
    the realization) proves fatal to the claim that
    simulation could be duplication...the equation,
    'mind is to brain as program is to hardware,'
    breaks down...The program is purely
    formal...Mental states and events are literally a
    product of the operation of the brain, but the
    program is not in that way a product of the
    computer."
  • Searle will argue that no matter how complex the
    software, no matter what inputs and outputs it
    negotiates, it cannot be ascribed mental states
    in any literal sense, and
  • Neither can the hardware that runs the program,
    since it lacks the causal powers of human (and
    other) brains that produce intentional states.
  • We may ascribe mental states to them
    metaphorically, in the way we say the car (which
    needs an alignment) wants to veer left or the jam
    you just cooked is trying to gel.

12
Argument from Vacuous Opposition
  • "If strong AI is to be a branch of psychology,
    then it must be able to distinguish those systems
    that are genuinely mental from those that are
    not...The study of the mind starts with such facts
    as that humans have beliefs, while thermostats,
    telephones, and adding machines don't. If you get
    a theory that denies this point you have produced
    a counterexample to the theory and the theory is
    false...What we wanted to know is what
    distinguishes the mind from thermostats and
    livers."
  • Xs, in our ordinary way of thinking, have P.
  • Zs, in our ordinary way of thinking, don't have P.
  • If, in order to argue that Ys have P, we have to
    redefine "having P" in such a way that Zs count
    as having P, ascribing P to Ys is
    uninteresting.
  • Compare the Gaia Hypothesis, "Everybody's
    beautiful in their own way," or Lake Wobegon,
    where "all the children are above average."

13
Objections to Searle's Chinese Room Argument
  • Searle considers three kinds of objections to his
    Chinese Room Argument.
  • Even if in the original thought experiment Searle
    wouldn't count as understanding Chinese, a more
    complicated system that was a machine in the
    requisite sense would understand Chinese:
  • Systems Reply: add the room, rule book,
    scratchpads, etc.
  • Robot Reply: add a more elaborate input device
    and output.
  • The Brain Simulator Reply: complicate the system
    so that it mimics the pattern of brain activity
    characteristic of understanding Chinese.
  • The Combination Reply: all of the above.
  • The Other Minds Reply.
  • The Many Mansions Reply: We could duplicate the
    causal processes of the brain as well as the
    formal features of brain activity patterns.

14
Objections to the Chinese Room Argument
  1. The Systems Reply: "While it is true that the
    individual person who is locked in the room does
    not understand the story, the fact is that he is
    merely part of a whole system, and the system
    does understand the story. The person has a large
    ledger in front of him in which are written the
    rules, he has a lot of scratch paper and pencils
    for doing calculations, he has 'data banks' of
    sets of Chinese symbols. Now, understanding is
    not being ascribed to the mere individual; rather
    it is being ascribed to this whole system of
    which he is a part."
  2. The Robot Reply: "Suppose we wrote a different
    kind of program from Schank's program. Suppose we
    put a computer inside a robot, and this computer
    would not just take in formal symbols as input
    and give out formal symbols as output, but rather
    would actually operate the robot in such a way
    that the robot does something very much like
    perceiving, walking, moving about, hammering
    nails, eating, drinking -- anything you like. The
    robot would, for example, have a television camera
    attached to it that enabled it to 'see,' it would
    have arms and legs that enabled it to 'act,' and
    all of this would be controlled by its computer
    'brain.' Such a robot would, unlike Schank's
    computer, have genuine understanding and other
    mental states."

15
Objections to the Chinese Room Argument
  1. The Brain Simulator Reply: "Suppose we design a
    program that doesn't represent information that
    we have about the world, such as the information
    in Schank's scripts, but simulates the actual
    sequence of neuron firings at the synapses of the
    brain of a native Chinese speaker when he
    understands stories in Chinese and gives answers
    to them. The machine takes in Chinese stories and
    questions about them as input, it simulates the
    formal structure of actual Chinese brains in
    processing these stories, and it gives out
    Chinese answers as outputs...Now surely in such a
    case we would have to say that the machine
    understood the stories; and if we refuse to say
    that, wouldn't we also have to deny that native
    Chinese speakers understood the stories? At the
    level of the synapses, what would or could be
    different about the program of the computer and
    the program of the Chinese brain?"
  2. The Combination Reply: "While each of the
    previous three replies might not be completely
    convincing by itself as a refutation of the
    Chinese room counterexample, if you take all
    three together they are collectively much more
    convincing and even decisive. Imagine a robot
    with a brain-shaped computer lodged in its
    cranial cavity, imagine the computer programmed
    with all the synapses of a human brain, imagine
    the whole behavior of the robot is
    indistinguishable from human behavior, and now
    think of the whole thing as a unified system and
    not just as a computer with inputs and outputs.
    Surely in such a case we would have to ascribe
    intentionality to the system."

16
Objections to the Chinese Room Argument
  1. The Other Minds Reply: "How do you know that
    other people understand Chinese or anything else?
    Only by their behavior. Now the computer can pass
    the behavioral tests as well as they can (in
    principle), so if you are going to attribute
    cognition to other people you must in principle
    also attribute it to computers." Remember, Turing
    argued along these lines too.
  2. The Many Mansions Reply: "Your whole argument
    presupposes that AI is only about analogue and
    digital computers. But that just happens to be
    the present state of technology. Whatever these
    causal processes are that you say are essential
    for intentionality (assuming you are right),
    eventually we will be able to build devices that
    have these causal processes, and that will be
    artificial intelligence. So your arguments are in
    no way directed at the ability of artificial
    intelligence to produce and explain cognition."

17
The Systems Reply
  • "While it is true that the individual person who
    is locked in the room does not understand the
    story, the fact is that he is merely part of a
    whole system, and the system does understand the
    story."
  • Searle says: Let the individual internalize
    (memorize) the rules in the ledger and the data
    banks of Chinese symbols, and do all the
    calculations in his head. He understands nothing
    of the Chinese, and a fortiori neither does the
    system.
  • You've memorized the rules for constructing WFFs,
    the 18 Rules of Inference, and the rules for
    Conditional and Indirect Proof in Hurley's
    Concise Introduction to Logic.
  • Now you can do all those formal derivations in
    the Propositional Calculus without looking, and
    get an A on your logic exam!
  • Do you understand what those symbols mean?

18
The Systems Reply
  • Could there be a subsystem of the man in the
    room that understands Chinese?
  • Searle: "The only motivation for saying there
    must be a subsystem in me that understands
    Chinese is that I have a program and I can pass
    the Turing test; I can fool native Chinese
    speakers. But precisely one of the points at
    issue is the adequacy of the Turing test."
  • Whichever way you cut it, you can't crank
    semantics (meaning) out of syntax and
    symbol-manipulation.
  • Whether it's the man in the room, the room (with
    rulebook, scratchpad, etc.), or some subsystem of
    the man in the room, if all that's going on is
    symbol-pushing, there's no understanding.

19
The Robot Reply
  • "Suppose we put a computer inside a robot, and
    this computer would not just take in formal
    symbols as input and give out formal symbols as
    output, but would rather actually operate the
    robot in such a way that the robot does something
    very much like perceiving, walking, moving about,
    etc."
  • Searle: "The addition of such perceptual and
    motor capacities adds nothing by way of
    understanding, in particular, or intentionality,
    in general...Suppose, unknown to me, some of the
    Chinese symbols come from a television camera
    attached to the robot and other Chinese symbols
    that I am giving out serve to make the motors
    inside the robot move the robot's legs or
    arms...I don't understand anything...All I do is
    follow formal instructions about manipulating
    formal symbols."

20
The Brain Simulator Reply
  • "Suppose we design a program that doesn't
    represent information that we have about the
    world...but simulates the actual sequence of
    neuron firings at the synapses of the brain of a
    native Chinese speaker...At the level of the
    synapses, what would or could be different about
    the program of the computer and the program of
    the Chinese brain?"
  • Searle: "The problem with the brain simulator is
    that it is simulating the wrong things about the
    brain. As long as it simulates only the formal
    structure of the sequence of neuron firings at
    the synapses, it won't have simulated what
    matters about the brain, namely its causal
    properties, its ability to produce intentional
    states. And...the formal properties are not
    sufficient for the causal properties."

21
Block's Chinese Nation Thought Experiment
  • Suppose that the whole nation of China was
    reordered to simulate the workings of a single
    brain (that is, to act as a mind according to
    functionalism). Each Chinese person acts as (say)
    a neuron, and communicates by special two-way
    radio in the corresponding way to the other
    people. The current mental state of China Brain
    is displayed on satellites that may be seen from
    anywhere in China. China Brain would then be
    connected via radio to a body, one that provides
    the sensory inputs and behavioral outputs of
    China Brain.

Thus China Brain possesses all the elements of a
functional description of mind: sensory inputs,
behavioral outputs, and internal mental states
causally connected to other mental states. If the
nation of China can be made to act in this way,
then, according to functionalism, this system
would have a mind.
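Functionalism's claim here is about organization, not medium: what makes something a "neuron" in the functional sense is its causal role, which neurons, citizens with two-way radios, or code could all occupy. A minimal sketch of one such role (the weights and threshold are invented for illustration):

```python
# A single threshold unit: its functional role is fixed by its
# input-output behavior, not by what physically realizes it. A neuron,
# a person with a two-way radio, or this function could each play it.
def unit(inputs, weights, threshold=1.0):
    """Fire (return 1) iff the weighted sum of inputs reaches threshold."""
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# Two incoming signals of equal weight realize an AND-like role:
print(unit([1, 1], [0.6, 0.6]))  # 1: both neighbors signal, so the unit fires
print(unit([1, 0], [0.6, 0.6]))  # 0: one signal is not enough
```

On the functionalist picture Block targets, a vast network of such units wired like a brain would have mental states whether each unit is a cell or a citizen; Block's thought experiment, like Searle's, is meant to make that conclusion look implausible.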
22
The Hive Mind
  • Individual bees aren't too bright, but the swarm
    behaves intelligently.
  • Is there a hive mind?

23
The Combination Reply
  • "While each of the previous three replies might
    not be completely convincing by itself as a
    refutation of the Chinese room counterexample, if
    you take all three together they are collectively
    much more convincing and even decisive."
  • Searle: "Suppose we knew that the robot's
    behavior was entirely accounted for by the fact
    that a man inside it was receiving uninterpreted
    formal symbols from the robot's sensory receptors
    and sending out uninterpreted formal symbols to
    its motor mechanisms, and the man was doing this
    symbol manipulation in accordance with a bunch of
    rules. Furthermore, suppose the man knows none of
    these facts about the robot; all he knows is
    which operations to perform on which meaningless
    symbols. In such a case we would regard the robot
    as an ingenious mechanical dummy. The hypothesis
    that the dummy has a mind would now be
    unwarranted and unnecessary, for there is now no
    longer any reason to ascribe intentionality to
    the robot or to the system of which it is a
    part."

24
The Combination Reply
  • Compare to our reasons for ascribing intelligence
    to animals:
  • "Given the coherence of the animal's behavior and
    the assumption of the same causal stuff
    underlying it, we assume both that the animal
    must have mental states underlying its behavior,
    and that the mental states must be produced by
    mechanisms made out of the stuff that is like our
    stuff. We would certainly make similar
    assumptions about the robot unless we had some
    reason not to, but as soon as we knew that the
    behavior was the result of a formal program, and
    that the actual causal properties of the physical
    substance were irrelevant, we would abandon the
    assumption of intentionality."
  • But some questions here:
  • Why should the right stuff matter?
  • What sort of stuff is the right stuff?
  • And why?

25
The Right Stuff: A Conjecture
  • Compare to the water/H2O case.
  • Until recently in human history we didn't know
    what the chemical composition of water was; we
    didn't know that it was H2O.
  • But we assumed that what made this stuff water
    was something about the stuff of which it was
    composed: its hidden internal structure.
  • Once we discover what that internal structure is,
    we refuse to recognize other stuff that has the
    same superficial characteristics as water.
  • Similarly, we don't know what
    thinking/understanding/intentionality is
    intrinsically, in terms of its internal workings,
    but regard that internal structure (whatever it
    is) as what it is to think/understand.
  • So, when we discover that something that
    superficially behaves like a thinking being
    doesn't have the appropriate internal
    organization, we deny that it
    thinks/understands/exhibits intentionality.

26
The Other Minds Reply
  • "How do you know that other people understand
    Chinese or anything else? Only by their behavior.
    Now the computer can pass the behavioral tests as
    well as they can (in principle), so if you are
    going to attribute cognition to other people you
    must in principle also attribute it to
    computers."
  • Searle: "This discussion is not about how I
    know that other people have cognitive states, but
    rather what it is that I am attributing to them
    when I attribute cognitive states to them."
  • Compare to Turing's remarks about solipsism.
  • Searle notes that the issue isn't an epistemic
    question of how we can know whether some other
    being is the subject of psychological states, but
    what it is to have psychological states.

27
The Many Mansions Reply
  • "Your whole argument presupposes that AI is only
    about analogue and digital computers. But that
    just happens to be the present state of
    technology. Whatever these causal processes are
    that you say are essential for intentionality
    (assuming you are right), eventually we will be
    able to build devices that have these causal
    processes, and that will be artificial
    intelligence."
  • "I really have no objection to this reply save to
    say that it in effect trivializes the project of
    strong AI by redefining it as whatever
    artificially produces and explains cognition...I
    see no reason in principle why we couldn't give a
    machine the capacity to understand English or
    Chinese. But I do see very strong arguments for
    saying that we could not give such a thing to a
    machine where the operation of the machine is
    defined solely in terms of computational
    processes over formally defined elements...The
    main point of the present argument is that no
    purely formal model will ever be sufficient by
    itself for intentionality, because the formal
    properties are not by themselves constitutive of
    intentionality." [emphasis added] Note: Searle
    isn't a dualist!

28
And now...some questions
  • What is the thing that thinks? How interesting
    is Searle's thesis?
  • "I do see very strong arguments for saying that
    we could not give such a thing [understanding a
    language] to a machine where the operation of the
    machine is defined solely in terms of
    computational processes over formally defined
    elements."
  • What is defined in terms of such computational
    processes?
  • The program as an abstract machine (at bottom, a
    Turing Machine)?
  • The hardware (or wetware) that runs the program?
  • Would we run into the same difficulties in
    describing how humans operate if we identified
    the thing that thinks with the programs they
    instantiate?
  • Arguably minds don't think, and neither do
    brains; people do.
  • And computer hardware may have causal powers
    comparable to human wetware.

29
Thought Experiments
  • Searle relies on a thought-experiment to elicit
    our intuitions, elaborated in response to
    objections; how much does this show?
  • We may be guilty of species chauvinism; vide
    Turing on ignoring the appearance of the machine
    or its capacity to appreciate strawberries and
    cream.
  • The sequence of elaborations on the original
    thought experiment may be misleading: suppose we
    started with the robot, or the brain-simulator?
  • With the development of more sophisticated
    computers our intuitions about what it is to
    think might change. Compare, e.g.:
  • "Stravinsky (or Wagner, or Berlioz) isn't music,
    but just a lot of noise."
  • "Whales are fish."
  • "Marriage is (necessarily) between a man and a
    woman."

30
Theory of Mind?
  • What theories of mind does Searle reject?
  • Behaviorism: passing the Turing Test won't do.
  • Functionalism: at least machine functionalism.
  • Cartesian dualism: Searle repeatedly remarks that
    the brain is a machine.
  • To what theories of mind do Searle's arguments
    suggest he's sympathetic?
  • The Identity Theory?
  • Intentionality remains a mystery, and it's not
    clear what Searle's positive thesis, if any,
    comes to.

31
Liberalism and Species Chauvinism
  • Human adults are the paradigm case of beings with
    psychological states: how similar, and in what
    respects similar, does something else have to be
    in order to count as the subject of psychological
    states?
  • Does the right stuff matter? If so, why?
  • "Could a machine think? The answer is, obviously,
    yes...Assuming it is possible to produce
    artificially a machine with a nervous system,
    neurons with axons and dendrites, and all the
    rest of it, sufficiently like ours."
  • Why axons and dendrites? What about Martians,
    etc.?
  • Does the right organization matter? If so, at
    what level of abstraction?
  • Searle's argument clearly tells against
    behaviorism, the view that internal organization
    doesn't count.
  • And it's meant to tell against functionalism.

32
Is Intentionality what matters?
  • Or is it consciousness?