Title: Dialogue Systems: Simulations or Interfaces?
1Dialogue Systems: Simulations or Interfaces?
- Staffan Larsson
- Göteborg University
- Sweden
2Introduction
3Basic question
- What is the goal of formal dialogue research?
- Formal dialogue research
- formal research on the semantics and pragmatics
of dialogue
4Two possible answers
- Engineering view: the purpose of formal dialogue research is
- interface engineering (services and technologies)
- to enable building better human-computer interfaces
- Simulation view: the ultimate goal of formal dialogue research is
- a complete formal and computational (implementable) theory of human language use and understanding
5The convergence assumption
- There is an extensive if not complete overlap
between the simulation of human language use and
the engineering of conversational interfaces.
6Aim of this presentation
- Review an argument against the possibility of human-level natural language understanding in computers (simulation view)
- Explicitly apply this argument to formal dialogue research, arguing that the convergence assumption is dubious
- Draw out the consequences of this for formal dialogue research
7Formal dialogue research and GOFAI
8The Turing test
- Can a machine think? Turing offers an operational definition of the ability to think
- Turing's imitation game
- Test person A has a dialogue (via a text terminal) with B
- A's goal is to decide whether B is a human or a machine
- If B is a machine and manages to deceive A into believing that B is human, B should be regarded as able to think
9The Turing test and the Simulation view
- The Turing Test can be seen as the ultimate test of a simulation of human language use
- The ability to think is operationalised as the ability to carry out a natural language dialogue in a way that is indistinguishable from that of a human
- The goal of formal dialogue research coincides with the goal of AI (as originally perceived)
10GOFAI
- Artificial Intelligence
- Goal: simulate human/intelligent behaviour/thinking
- Weak AI: machines can be made to act as if they were intelligent
- Until the mid-80s, the dominating paradigm of AI was the idea that thinking is, essentially, symbol manipulation
- The physical symbol system hypothesis
- All intelligent behaviour can be captured by a system that reasons logically from a set of facts and rules that describe the domain (see the sketch below)
- This is sometimes referred to as GOFAI
- (Good Old Fashioned AI)
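To make the "facts and rules" picture concrete, here is a minimal sketch (not from the original slides) of symbolic reasoning in the GOFAI style: a naive forward-chaining rule engine. The toy travel domain, the ?-variable pattern syntax and all function names are illustrative assumptions.

```python
# Minimal forward-chaining inference over explicit facts and rules, to
# illustrate the "physical symbol system" picture. The toy travel domain,
# the ?-variable pattern syntax and all names here are illustrative only.

facts = {("train", "paris", "lyon"), ("train", "lyon", "nice")}

# Each rule: (premise patterns, conclusion pattern); "?x" marks a variable.
rules = [
    ([("train", "?a", "?b")], ("reachable", "?a", "?b")),
    ([("reachable", "?a", "?b"), ("train", "?b", "?c")], ("reachable", "?a", "?c")),
]

def match(pattern, fact, bindings):
    """Unify one pattern with one ground fact under the current bindings."""
    if len(pattern) != len(fact):
        return None
    b = dict(bindings)
    for p, f in zip(pattern, fact):
        if p.startswith("?"):
            if b.get(p, f) != f:
                return None
            b[p] = f
        elif p != f:
            return None
    return b

def all_matches(premises, facts, bindings):
    """Enumerate every binding environment that satisfies all premises."""
    if not premises:
        yield bindings
        return
    for fact in facts:
        b = match(premises[0], fact, bindings)
        if b is not None:
            yield from all_matches(premises[1:], facts, b)

def forward_chain(facts, rules):
    """Apply all rules repeatedly until no new facts are derived (a fixpoint)."""
    derived = set(facts)
    while True:
        new = set()
        for premises, conclusion in rules:
            for b in all_matches(premises, derived, {}):
                fact = tuple(b.get(t, t) for t in conclusion)
                if fact not in derived:
                    new.add(fact)
        if not new:
            return derived
        derived |= new

print(forward_chain(facts, rules))
# derives ("reachable", "paris", "lyon"), ("reachable", "lyon", "nice")
# and ("reachable", "paris", "nice")
```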
11Dialogue systems and GOFAI
- Since around the mid-80s, GOFAI has been abandoned by many (but not all) AI researchers
- Instead, the focus has shifted to NEFAI (New-Fangled AI)
- connectionism,
- embodied interactive automata,
- reinforcement learning,
- probabilistic methods, etc.
- However, a large part of current dialogue systems research is based on the GOFAI paradigm
- Information States, for example (see the sketch below)
- Formal pragmatics is often used as a basis for the implementation of dialogue managers in GOFAI-style approaches
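As a rough illustration of the Information State idea just mentioned, here is a hedged sketch of an information-state-update style dialogue manager: an explicit state plus update rules with preconditions and effects. The state fields, rule names and toy dialogue are simplifications invented here; they are not the actual TrindiKit/GoDiS formulation.

```python
# A toy information-state-update (ISU) dialogue manager. The state fields,
# the two update rules and the example dialogue are invented simplifications,
# not the actual TrindiKit/GoDiS rule set.
from dataclasses import dataclass, field

@dataclass
class InfoState:
    agenda: list = field(default_factory=list)       # system's planned moves, e.g. ("ask", q)
    qud: list = field(default_factory=list)          # questions under discussion
    commitments: list = field(default_factory=list)  # grounded (question, answer) pairs
    latest_move: tuple = None                        # (speaker, move_type, content)

def integrate_answer(s: InfoState) -> bool:
    """If the latest move answers the topmost QUD, ground it as a commitment."""
    if s.latest_move and s.latest_move[1] == "answer" and s.qud:
        s.commitments.append((s.qud.pop(0), s.latest_move[2]))
        s.latest_move = None
        return True
    return False

def select_ask(s: InfoState) -> bool:
    """If an 'ask' is first on the agenda, raise that question as the QUD."""
    if s.agenda and s.agenda[0][0] == "ask":
        _, question = s.agenda.pop(0)
        s.qud.insert(0, question)
        print("SYSTEM:", question)
        return True
    return False

UPDATE_RULES = [integrate_answer, select_ask]

def update(s: InfoState):
    """Keep applying update rules until none of them fires."""
    while any(rule(s) for rule in UPDATE_RULES):
        pass

# Toy run: the system raises a question, the user answers it.
s = InfoState(agenda=[("ask", "Where do you want to go?")])
update(s)                                   # SYSTEM: Where do you want to go?
s.latest_move = ("user", "answer", "Paris")
update(s)
print(s.commitments)                        # [('Where do you want to go?', 'Paris')]
```

Real ISU managers differ in the structure of the information state and in how rules are selected, but the control loop of "apply applicable update rules to a declarative state" is the shared idea.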
12Formal semantics and GOFAI
- GOFAI and formal semantics deal, to a large extent, with similar problems and use similar methods
- Formal symbolic representations of meaning
- Natural Language Understanding as symbol manipulation
- (Even though many early GOFAI researchers appear oblivious to the existence of formal semantics of natural language in the style of Montague, Kamp etc.)
- Formal semantics was perhaps not originally intended to be implemented, and not as part of AI
- Still, formal semantics shares with GOFAI the assumption that natural language meaning can be captured in formal symbol manipulation systems
13Why GOFAI?
- Why GOFAI in formal semantics and pragmatics?
- It seems to be the most workable method for the complex problems of natural language dialogue
- Natural language dialogue appears to be useful for improving on current human-computer interfaces
- But is GOFAI-based research also a step on the way towards human-level natural language understanding in computers, i.e. simulation?
14Phenomenological arguments against GOFAI
15Some problems in AI
- Frame problem
- updating the world model
- knowing which aspects of the world are relevant for a certain action
- Computational complexity in real-time, resource-bounded applications
- Planning for conjunctive goals
- Plan recognition
- Undecidability of general FOL reasoning
- not to mention modal logic
- Endowing a computer with the common sense of a 4-year-old
- AI is still very far from this
16- Humans don't have problems with these things
- Is it possible that all these problems have a common cause?
- They all seem to be related to formal representations and symbol manipulation
17Background and language understanding
- Dreyfus, Winograd, Weizenbaum
- Human behaviour is based on our everyday commonsense background understanding
- allows us to experience what is currently relevant, and to deal with things and people
- crucial to understanding language
- involves utterance situation, activity, institution, cultural setting, ...
18- Dreyfus argues that the background has the form of dispositions, or informal know-how
- Normally, one simply knows what to do
- a form of skill rather than propositional knowing-that
- To achieve GOFAI,
- this know-how, along with the interests, feelings, motivations, social interests, and bodily capacities that go to make up a human being,...
- ... would have to be conveyed to the computer as knowledge in the form of a huge and complex belief system
19CYC (Lenat) and natural language
- An attempt to formalise common sense
- The kind of knowledge we need to understand NL
- using general categories that make no reference to specific uses of the knowledge
- Lenat's ambitions
- it's premature to try to give a computer the skills and feelings required for actually coping with things and people
- Lenat is satisfied if CYC can understand books and articles and answer questions about them
20The background cannot be formalised
- There is no reason to think that humans represent and manipulate the background explicitly, or that this is possible even in principle
- ...understanding requires giving the computer a background of common sense that adult humans have in virtue of having bodies, interacting skilfully with the material world, and being trained into a culture
- Why does it appear plausible that the background could be formalised as knowing-that?
- Breakdowns
- Skill acquisition
21Skills and formal rules
- When things go wrong, when we fail, there is a breakdown
- In such situations, we need to reflect and reason, and may have to learn and apply formal rules
- but it is a mistake to
- read these rules back into the normal situation, and
- appeal to such rules for a causal explanation of skilful behaviour
22Dreyfus' account of skill acquisition
- 1. Beginner/student: rule-based processing
- learning and applying rules for manipulating context-free elements
- There is thus a grain of truth in GOFAI
- 2. Understanding the domain: seeing meaningful aspects, rather than context-free features
- 3. Setting goals and looking at the current situation in terms of what is relevant
- 4. Seeing a situation as having a certain significance toward a certain outcome
- 5. Expert: the ability to instantaneously select correct responses (dispositions)
23- There is no reason to suppose that the beginner's features and rules (or any features and rules) play any role in expert performance
- That we once followed a rule in tying our shoelaces does not mean we are still following the same rule unconsciously
- "Since we needed training wheels when learning how to ride a bike, we must now be using invisible training wheels."
- Human language use and cognition involves symbol manipulation, but is not based on it
24Recap
- Language understanding requires access to human background understanding
- This background cannot be formalised
- Since GOFAI works with formal representations, GOFAI systems will never be able to understand language as humans do
25Simulation and NEFAI
26What about NEFAI?
- This argument only applies to GOFAI!
- A lot of modern AI is not GOFAI
- New-Fangled AI (NEFAI)
- interactionist AI (Brooks, Chapman, Agre)
- embodied AI (COG)
- connectionism / neural networks
- reinforcement learning
- So maybe human language use and understanding could be simulated if we give up GOFAI and take up NEFAI?
- Note that very few have tried this in the area of dialogue
- Simply augmenting a GOFAI system with statistics is not enough
27Progress?
- Although NEFAI is more promising than GOFAI...
- ... most current learning techniques rely on the prior availability of explicitly represented knowledge: the training data must be interpreted and arranged by humans
- in the case of learning the background, this means that the background has to be represented before it can be used for training
- But as we have seen, Dreyfus argues that the commonsense background cannot be captured in explicit representations
28- Russell &amp; Norvig, in Artificial Intelligence: A Modern Approach (1999)
- In a discussion of Dreyfus' argument
- "In our view, this is a good reason for a serious redesign of current models of neural processing.... There has been some progress in this direction."
- But no such research is cited
- So R &amp; N admit that this is a real problem. In fact, it is still the exact same problem that Dreyfus pointed out originally
- There is still nothing to indicate that Dreyfus is wrong when arguing against the possibility of getting computers to learn commonsense background knowledge
29- But let's assume for the moment that the current shortcomings of NEFAI could be overcome...
- that learning mechanisms can be implemented that learn in the same way humans do
- and that these systems can be given an appropriate initial structure
- and that all this can be done without providing predigested facts that rely on human interpretation
30Some factors influencing human language use
- Embodiment
- having a human body, being born and raised by humans
- Being trained into a culture
- by interacting with other humans
- Social responsibility
- entering into social commitments with other people
31What is needed to achieve simulation?
- So, perhaps we can do real AI, provided we can
build robot infants that are raised by parents
and socialised into society by human beings who
treat them as equals
- This probably requires people to actually think that these AI systems are human
- These systems will have the same ethical status as humans
- If we manage to do it, is there any reason to assume that they would be more useful to us than ordinary (biological) humans?
- They are no more likely to take our orders...
32- It appears that the research methods required for
simulation are rather different from those
required for interface design
- The convergence assumption appears very dubious
33Formal dialogue research and dialogue systems
design
34Consequences of the argument for the engineering
view
- If we accept the argument that the background is
not formalisable and that computers (at least as
we know them) cannot simulate human language
understanding...
- ...what follows with respect to the relations between
- (1) Formal semantics and pragmatics of dialogue
- (2) Non-formal theories of human language use
- (3) Dialogue systems design as interface engineering
- Both (1) and (2) are still relevant to (3)
35Winograd on language and computers
- Even though computers cannot understand language in the way humans can...
- ...computers are nevertheless useful tools in areas of human activity where formal representation and manipulation is crucial
- e.g. word processing
- In addition, many practical AI-style applications do not require human-level understanding of language
- e.g. programming a VCR, getting timetable information
- In such cases, it is possible to develop useful systems that have a limited repertoire of linguistic interaction
- This involves the creation of a systematic domain
36Systematic domains
- A systematic domain is a set of formal representations that can be used in a computer system
- Embodies the researcher's interpretation of the situation in which the system will function
- Created on the basis of regularities in conversational behaviour ("domains of recurrence")
37so...
- For certain regular and orderly activities and language phenomena...
- ... it is possible to create formal representations which capture them well enough to build useful tools
- Formal dialogue research can be regarded as the creation of systematic domains in the pragmatics and semantics of dialogue
38Formal semantics and pragmatics of dialogue as
systematic domains
- Formal theories of language use should be regarded as
- the result of a creative process of constructing formal representations (systematic domains)
- based on observed regularities in language use
- These theories can be used in dialogue systems to enable new forms of human-machine interaction
39Formal pragmatics
- Pragmatic domains include e.g.
- turn-taking, feedback and grounding, referent resolution, topic management
- Winograd gives dialogue game structure as a prime example of a systematic domain
- Analysed along the lines of dialogue games encoded in finite automata (see the sketch below)
- The ISU (information state update) approach is a variation of this, intended to capture the same regularities in a (possibly) more flexible way
- It is likely that useful formal descriptions can be created for many aspects of dialogue structure
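A hedged sketch of what "dialogue games encoded in finite automata" can look like in practice; the states, moves and transitions below are an invented question-answer game, not Winograd's example.

```python
# A dialogue game as a finite automaton: legal moves are transitions between
# states. The particular states and moves are an invented question-answer
# game, used only to illustrate the encoding.

TRANSITIONS = {
    # (state, dialogue move) -> next state
    ("start", "greet"): "greeted",
    ("greeted", "ask"): "awaiting_answer",
    ("awaiting_answer", "clarify"): "awaiting_answer",
    ("awaiting_answer", "answer"): "answered",
    ("answered", "thank"): "closed",
}

def play(moves, state="start"):
    """Follow the automaton; move sequences the game does not allow are rejected."""
    for move in moves:
        if (state, move) not in TRANSITIONS:
            raise ValueError(f"move {move!r} is not allowed in state {state!r}")
        state = TRANSITIONS[(state, move)]
    return state

print(play(["greet", "ask", "clarify", "answer", "thank"]))   # -> "closed"
```

The ISU approach mentioned above replaces the single state atom with a structured information state and the transition table with update rules, which is what makes it (possibly) more flexible.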
40Formal semantics
- Not a focus of Winograd's formal analysis,
- presumably because Winograd believes that language understanding is not amenable to formal analysis
- However, even if one accepts arguments such as those above...
- ... it seems plausible that the idea of systematic domains also applies to semantics
- That is, for certain semantically regular task domains it is indeed possible to create a formal semantics
- e.g. in the form of a formal ontology and formal representations of utterance contents
- This formal semantics will embody the researcher's interpretation of the domain
41Relevant issues related to semantic domains
- How to determine whether (and to what extent) a task domain is amenable to formal semantic description
- How to decide, for a given task domain, what level of sophistication is required of a formal semantic framework in order for it to be useful in that domain
- In some domains, simple feature-value frames may be sufficient (see the sketch below), while others may require something along the lines of situation semantics, providing treatments of intensional contexts etc.
- Fine-grainedness and expressivity of the formal semantic representation required for a domain or group of domains
- e.g. database search, device programming, collaborative planning, ...
- Creation of application-specific ontologies
- How to extract application ontologies from available data about the domain, e.g. transcripts of dialogues
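To illustrate the "simple feature-value frames" option mentioned in the list above, here is a hedged sketch of utterance content for a narrow timetable-information domain as a partial frame that is filled incrementally. The attribute set and class name are invented for illustration.

```python
# Utterance content as a feature-value frame for a narrow task domain
# (timetable information). The attributes and helper methods are invented
# for illustration; real systems differ in detail.
from dataclasses import dataclass
from typing import Optional

ATTRIBUTES = ("departure", "destination", "day", "time")

@dataclass
class TimetableQuery:
    departure: Optional[str] = None
    destination: Optional[str] = None
    day: Optional[str] = None
    time: Optional[str] = None

    def unify(self, other: "TimetableQuery") -> "TimetableQuery":
        """Merge two partial frames; clashing values signal a misunderstanding."""
        merged = TimetableQuery()
        for attr in ATTRIBUTES:
            a, b = getattr(self, attr), getattr(other, attr)
            if a is not None and b is not None and a != b:
                raise ValueError(f"clash on {attr}: {a} vs {b}")
            setattr(merged, attr, a if a is not None else b)
        return merged

    def missing(self):
        """Attributes the system still has to ask about."""
        return [a for a in ATTRIBUTES if getattr(self, a) is None]

# "I want to go to Lyon", then "from Paris, on Friday"
q = TimetableQuery(destination="Lyon")
q = q.unify(TimetableQuery(departure="Paris", day="Friday"))
print(q.missing())   # ['time']
```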
42but...
- Even though some aspects of language use may indeed be susceptible to formal description
- This does not mean that human language use actually relies on such formal descriptions represented in the brain or elsewhere
- So implementations based on such formalisations are not simulations of human language use and cognition
43Limits of formalisation
- Formalisation will only be useful in areas of language use which are sufficiently regular to allow the creation of systematic domains
- So, repeated failures to formally capture some aspect of human language may be due to the limits of formal theory when it comes to human language use, rather than to some aspect of the theory that just needs a little more tweaking
44Non-formalisable language phenomena
- For other activities and phenomena, it may not be possible to come up with formal descriptions that can be implemented
- e.g. human language understanding in general, since it requires a background which cannot be formalised
- also perhaps aspects of implicit communication, conversational style, politeness in general, creative analogy, creative metaphor, some implicatures
- This does not mean that they are inaccessible to science
- They can be described non-formally and understood by other humans
- Their general abstract features may be formalisable
45Usefulness of non-formal theory
- Non-formal theories of human language use are still useful for dialogue systems design
- Dialogue systems will need to be designed on the basis of theories of human language
- They will, after all, interact with humans
- It may also be useful to have human-like systems (cf. Cassell)
- This does not require that implementations of these theories be (even partial) simulations of human language use and cognition
- Also, observations of human-human dialogue can of course be a source of inspiration for dialogue systems design
46Conclusions
47- In important ways, the simulation view and the engineering view are different projects requiring different research methods
- For the simulation project, the usefulness of systems based on formal representations is questionable
- Instead, formal dialogue research can be regarded as the creation of systematic domains that can be used in the engineering of flexible human-computer interfaces
- In addition, non-formal theory of human language use can be useful in dialogue systems design
48- If interface engineering is liberated from concerns related to simulation...
- ...it can instead be focused on the creation of new forms of human-computer (and computer-mediated) communication...
- ... adapting to and exploring the respective limitations and strengths of humans and computers
49fin
50Other views of what FDR is
51A variant of the simulation view
- The goal of formal dialogue research is a
complete computational theory of language and
cognition for machines
- cf. Luc Steels
- Robots evolving communication
- Not intended to describe human language use
- although some aspects may be similar
- Arguably interesting in its own right
52- One may even be able to implement computational models that capture some abstract aspects of human language use and understanding
- that are not based on symbol manipulation, but involve subsymbolic computation
- For example, the evolution of shared language use in robots (Steels et al.)
- However, such formal models and simulations will never be complete simulations of human language understanding (to the extent required by the Turing test)
- unless the machines they run on are human in all aspects relevant to language, i.e. physical, biological, psychological, and social
53A variant of the simulation view
- The goal of formal dialogue research is a complete computational theory of language and cognition in general
- either such a theory subsumes a theory of human language
- and is thus as difficult or more difficult
- or it does not
- and is thus coherent with the idea that only some aspects of language are formalisable,
- although it remains to be shown that those aspects are the ones that are essential for language
54Applied science?
- Formal dialogue research as applied science
- cf. medicine
- theories of interface design
- theories of (linguistic) human-computer interaction
- LHCI
55The role of human-human communication in LHCI
- What aspects of natural dialogue are
- formalisable
- implementable
- useful in HCI
56Scientific status of formal descriptions
- Formal descriptions may have some scientific value as theories of human language use and cognition
- However, they are
- not useful as a basis for simulation of human language use and cognition, since this is not based on explicit rules and representations (except for novices and breakdowns)
- often radical simplifications (as are many other scientific theories)
- limited in scope, describing special cases only
- Even if the creation of a systematic domain is possible for some linguistic phenomena, this does not mean that human language use is based on formal representations
57Formal dialogue research vs. Dialogue systems
research
- Both share the assumption that human language use
and meaning can be captured in formal symbol
manipulation systems
- Human language use and meaning rely on the background
- The background cannot be formalised
58Language use vs. cognition
- The Turing test tests only behaviour; cognition is a black box
- So what's the justification for talking about cognition?
- Turing's test was intended as an operational definition of thinking, i.e. cognition
- Possible underlying intuition:
- There is no way of passing the Turing test for a system with a style of cognition which is very different from human cognition
- Turing assumed that human cognition was based on symbol manipulation
59Domain-specific simulation?
- In a regular domain, can a program based on a formalisation of these regularities be regarded as a simulation of human performance in that domain?
- Even if there are regularities that can be captured to a useful extent in rules, this does not mean that humans use such rules
- unless they are complete novices who have been taught the rules explicitly but have not yet had time to progress through the learning hierarchy
60General vs. domain-specific intelligence
- Weizenbaum: there is no such thing as general intelligence
- intelligence is always relative to a domain (math, music, playing cards, cooking, ...)
- Therefore, the question of whether computers can be intelligent is meaningless
- one must ask this question for individual domains
61The Feigenbaum test
- Replace the general Turing test with a similar test in limited domains? (proposed by Feigenbaum)
- This certainly seems more manageable, especially in systematic domains
- On the other hand, it could be argued that it is exactly in the non-systematic domains that the most interesting and unique aspects of human being are to be found
- So this test is very different from the original Turing test
62More on skills vs. rules
63Everyday skills vs. rules
- Dreyfus suggests testing the assumption that the background can be formalised
- by looking at the phenomenology of everyday know-how
- Heidegger, Merleau-Ponty, Pierre Bourdieu
- What counts as facts depends on our skills, e.g. gift-giving (Bourdieu):
- "If it is not to constitute an insult, the counter-gift must be deferred and different, because the immediate return of an exactly identical object clearly amounts to a refusal...."
- "It is all a question of style, which means in this case timing and choice of occasion..."
- "...the same act - giving, giving in return, offering one's services, etc. - can have completely different meanings at different times."
64Everyday skills vs. rules
- Having acquired the necessary social skill,
- one does not need to recognize the situation as appropriate for gift-giving and decide rationally what gift to give
- one simply responds in the appropriate circumstances by giving an appropriate gift
- Humans can
- skilfully cope with changing events and motivations
- project understanding onto new situations
- understand social innovations
- one can do something that has not so far counted as appropriate...
- ...and have it recognized in retrospect as having been just the right thing to do
65The B.A.B. objection - background
66The argument from infant development (Weizenbaum)
- (Based on writings by the child psychologist Erik Erikson)
- The essence of human being depends crucially on the fact that humans are born of a mother, are raised by a mother and father, and have a human body
- Every organism is socialized by dealing with the problems that confront it (Weizenbaum)
- For humans, these problems include breaking the symbiosis with the mother after the infant period
- This is fundamental to the human constitution; it lays the ground for all future dealings with other people
- Men and machines have radically different constitutions and origins
- Humans are born of a mother and father
- Machines are built by humans
- OK, so we need to give AI systems a human or human-like body, and let human parents raise them
67The argument from language as social commitment
(Winograd)
- The essence of human communication is commitment, an essentially social and moral attitude
- Speech acts work by imposing commitments on speaker and hearer
- If one cannot be held (morally) responsible for one's actions, one cannot enter into commitments
- Computers are not human
- so they cannot be held morally responsible
- therefore, they cannot enter into commitments
- Therefore, machines can never be made to truly and fully understand language
- OK, so we need to treat these AI computers exactly as humans, and hold them morally responsible
68The argument from human being/Dasein
(Heidegger, Dreyfus)
- Heidegger's project in Being and Time
- Develop an ontology for describing human being
- What it's like to be human
- This can, according to Heidegger, only be understood from the inside
- Heidegger's text is not intended to be understandable by anyone who is not a human
- Such an explanation is not possible, according to Heidegger: human being cannot be understood from scratch
- Yet it is exactly such an explanation that is the goal of AI
- According to Heidegger/Dreyfus, AI is impossible because (among other things)
- Infants are, strictly speaking, not yet fully human: they must first be socialised into a society and a social world
- Only humans so socialized can fully understand other humans
- Since cultures are different, humans socialized into one culture may have problems understanding humans from another culture
- Machines are not socialised, they are programmed by humans
- OK, so we need to socialise AI systems into society!
69Arguments related to evolution
70The humans are animals argument
- What reason do we have to think that non-conscious thinking operates by formal reasoning?
- Humans have evolved from animals, so presumably some non-formal thinking is still part of the human mind
- It is hard to tell a priori how much
71The argument from the role of emotions
- Classical AI deals first with rationality
- Possibly, we might want to add emotions as an additional layer of complexity
- However, it seems plausible to assume that emotions are more basic than rationality (Damasio, The Feeling of What Happens)
- Animals have emotions but not abstract rational reasoning
- The human infant is emotional but not rational
- So machines should be made emotional before they are made rational
- unfortunately, no one has a clue how to make machines emotional
72The argument from brain matter and evolution
- Weak AI assumes that physical-level simulation is unnecessary for intelligence
- However, evolution has a reputation for finding and exploiting available shortcuts
- it works by patching onto previous mechanisms
- If there are any unique properties of biological brain matter that offer some possible improvement to cognition, it is likely that they have been exploited
- If so, it is not clear whether these properties can be emulated by silicon-based computers
73The argument from giving a damn
- Humans care; machines don't give a damn (Haugeland)
- Caring (about surviving, for example) comes from instincts (drives) which animals, but not machines, have
- Caring about things is intimately related to the evolution of living organisms
- Having a biological body
- So, can evolution be simulated?
- Winograd argues that the only simulation that would do the job would need to be as complex as real evolution
- So in 3.5 billion years, we can have AI!
74More on CYC
75Problems with formalising commonsense background
- How is everyday knowledge organized so that one can make inferences from it?
- Ontological engineering: finding the primitive elements in which the ontology bottoms out
- How can skills or know-how be represented as knowing-that?
- How can relevant knowledge be brought to bear in particular situations?
76CYC (Lenat) and natural language
- Formalise common sense
- The kind of knowledge we need to understand NL
- using general categories that make no reference to specific uses of the knowledge (context-free)
- Lenat's ambitions
- it's premature to try to give a computer the skills and feelings required for actually coping with things and people
- Lenat is satisfied if CYC can understand books and articles and answer questions about them
77CYC vs. NL
- Example (Lenat)
- "Mary saw a dog in the window. She wanted it."
- Dreyfus:
- this sentence seems to appeal to
- our ability to imagine how we would feel in the situation
- know-how for getting around in the world (e.g. getting closer to something on the other side of a barrier)
- rather than requiring us to consult facts about dogs and windows and normal human reactions
- So the feelings and coping skills that were excluded to simplify the problem return
- We shouldn't be surprised: this is the presupposition behind the Turing Test, that understanding human language cannot be isolated from other human capabilities
78CYC vs. NL
- How can relevant knowledge be brought to bear in particular situations?
- categorize the situation
- search through all facts, following rules to find the facts possibly relevant in this situation
- deduce which facts are actually relevant
- How to deal with complexity?
- Lenat: add meta-knowledge
- Dreyfus:
- meta-knowledge just makes things worse: more meaningless facts
- CYC is based on an untested traditional assumption that people store context-free facts and use meta-rules to cut down the search space
79Analogy and metaphor
- ... pervade language (example from Lenat)
- "Texaco lost a major ruling in its legal battle with Pennzoil. The Supreme Court dismantled Texaco's protection against having to post a crippling 12 billion appeals bond, pushing Texaco to the brink of a Chapter 11 filing" (Wall Street Journal)
- The example drives home the point that,
- far from overinflating the need for real-world knowledge in language understanding,
- the usual arguments about disambiguation barely scratch the surface
81Analogy and metaphor
- Dealing with metaphors is a non-representational mental capacity (Searle)
- "Sally is a block of ice" could not be analyzed by listing the features that Sally and ice have in common
- Metaphors function by association
- We have to learn from vast experience how to respond to thousands of typical cases
- Mention approaches to metaphor, e.g. abduction ("the Boston office called") - isn't this a solution? Dead vs. creative metaphors
82Neural nets
83- Helge!
- But people also interpret things differently (though not wildly differently)
- Not many researchers believe in a tabula rasa
- Evolutionary algorithms, but so far not combined with learning
- NNs can learn without prior strong symbolisation of the learning data, but perhaps not very complex stuff like dialogue?
- Data can be either discrete or continuous
- How does this relate to predigestion? Is selection of data (e.g. dividing into frequency ranges) predigestion?
- Main obstacles now: puny numbers of neurons, little knowledge of the interaction between evolved initial structure and learning
84- neural nets can learn some things without prior conceptualisation (but some discretisation is necessary, e.g. representation in the weak sense)
- strong and weak senses of representation
- Other problems with connectionism
- Current neural networks are much less complex than brains
- but maybe this will change
- Even if we had a working neural network, we would not understand how it works
- the scientific goal of AI would thus still not have been reached
85Learning &amp; generalisation
- Take in dialogue systems based solely on statistics (superhal?)
- Mention hybrid vs. totally non-symbolic systems
- Learning depends on the ability to generalise
- Good generalisation cannot be achieved without a good deal of background knowledge
- Example: trees/hidden tanks
- A network must share our commonsense understanding of the world if it is to share our sense of appropriate generalisation
86- Some counter-counter-arguments
- The more the computer knows, the longer it will
take to find the right information
- The more a human knows, the easier it is to retrieve relevant information
87Non-symbolic approaches to AI and dialogue
88Interactionist AI
- No need for a representation of the world
- instead, look to the world as we experience it
- Behaviour can be purposive without the agent having a goal or purpose in mind
- In many situations, it is obvious what needs to be done
- Once you've done that, the next thing is likely to be obvious too
- Complex series of actions result, without the need for complex decisions or planning
- However, interactionist AI does not address the problem of informal background familiarity
- programmers have to predigest the domain and decide what is relevant
- systems lack the ability to discriminate relevant distinctions in the skill domain...
- ... and to learn new distinctions from experience
89Connectionism
- Apparently does not require being given a theory of a domain in order to behave intelligently (see the sketch below)
- Finding a theory = finding invariant features in terms of which situations can be mapped onto responses
- Starting with random weights, will neural nets trained on the same data pick out the same invariants?
- No: it appears the tabula rasa assumption (random initial weights) is wrong
- Little research on how (possibly evolved) initial structure interacts with learning
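For contrast with the rule-based sketches earlier, a minimal connectionist example (invented here): a single perceptron that starts from random weights and learns a classification purely from labelled examples, with no hand-coded rules. Whether such nets pick out the same "invariants" across runs is exactly the question raised above.

```python
# A single perceptron trained from random initial weights on labelled examples,
# with no hand-coded rules. The toy task (logical OR) is illustrative only.
import random

def train_perceptron(samples, epochs=100, lr=0.1):
    """samples: list of ((x1, x2), label) with label 0 or 1."""
    w = [random.uniform(-1, 1) for _ in range(2)]
    b = random.uniform(-1, 1)
    for _ in range(epochs):
        for (x1, x2), label in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = label - out
            # Adjust weights in the direction that reduces the error.
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]   # logical OR
w, b = train_perceptron(data)
print([1 if w[0] * x1 + w[1] * x2 + b > 0 else 0 for (x1, x2), _ in data])
# converges to [0, 1, 1, 1]
```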
90Learning &amp; generalisation
- Learning depends on the ability to generalise
- Good generalisation cannot be achieved without a good deal of background knowledge
- Example: trees/hidden tanks
- A network must share our commonsense understanding of the world if it is to share our sense of appropriate generalisation
91Reinforcement learning
- Idea: learn from interacting with the world (see the sketch below)
- Feed back a reinforcement signal measuring the immediate cost or benefit of an action
- Enables unsupervised learning
- (The target representation in humans is neural networks)
- Dreyfus: to build human intelligence, we need to improve this method
- assigning fairly accurate actions to novel situations
- a reinforcement-learning device must exhibit global sensitivity by encountering situations under a perspective and actively seeking relevant input
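A minimal sketch of the reinforcement-learning loop described above: the agent acts, receives a scalar reinforcement signal, and adjusts its action values. This is plain tabular Q-learning on an invented two-state toy task, not any of the dialogue-learning systems Dreyfus is commenting on.

```python
# Tabular Q-learning on an invented two-state toy task, illustrating learning
# driven only by a scalar reinforcement signal.
import random
from collections import defaultdict

ACTIONS = ["left", "right"]

def step(state, action):
    """Toy environment: 'right' in state 1 is rewarded; everything else is not."""
    if state == 1 and action == "right":
        return 0, 1.0                # next state, reward
    return (state + 1) % 2, 0.0

def q_learning(steps=5000, alpha=0.1, gamma=0.9, epsilon=0.1):
    q = defaultdict(float)           # (state, action) -> estimated value
    state = 0
    for _ in range(steps):
        # Mostly act greedily, sometimes explore.
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        next_state, reward = step(state, action)
        best_next = max(q[(next_state, a)] for a in ACTIONS)
        # The reward is the only feedback the learner ever receives.
        q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
        state = next_state
    return q

q = q_learning()
print(max(ACTIONS, key=lambda a: q[(1, a)]))   # typically 'right'
```

Dreyfus' point above is precisely that this signal-driven update only works once a programmer has already decided what counts as a state, an action and a reward.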
92AI and Turing Test
93Prehistory of AI
- Plato / Socrates
- All knowledge must be statable in explicit definitions which anyone could apply (cf. the definition of an algorithm)
- Descartes again
- Understanding and thinking is forming and using symbolic representations: the rational manipulation of symbols by means of rules
- Thinking as computation
- Leibniz
- a universal calculus for representing reasoning
- goal: to put an end to all conflicts, which are caused by misunderstandings of inexact, informal language
- Kant
- clear distinction between the external world (noumena) and inner life (phenomena, representations)
94Artificial Intelligence
- Goal
- simulate human/intelligent behaviour/thinking
- Weak AI
- Machines can be made to act as if they were intelligent
- Strong AI
- Agents that act intelligently have real, conscious minds
- (It is possible to believe in strong AI but not in weak AI)
95Some arguments against weak AI (Turing 1950)
- Ada Lovelace's objection
- computers can only do what we tell them to
- The argument from disability
- claims (usually unsupported) of the form "a machine can never do X"
- The mathematical objection
- based on Gödel's incompleteness theorem
- The argument from informality of behaviour
- (Searle's Chinese Room
- the argument concerns strong AI
- it purports to show that producing intelligent behaviour is not a sufficient condition for being a mind)
96The Turing test and dialogue
- The Turing Test can be seen as the ultimate test of a simulation of human language use
- The ability to think is operationalised as the ability to carry out a natural language dialogue in a way that is indistinguishable from that of a human
- The machine in question is assumed to be a Turing machine, i.e. a general symbol manipulation device, i.e. a computer
97The Turing test and the Simulation view
- According to the simulation view, the goal of formal dialogue research is to reproduce, in a machine, the human ability to use and understand language
- Thus, the Turing test can be regarded as a potential method of evaluating theories of human language use and understanding
- The goal of formal dialogue research coincides with the goal of AI (as originally perceived)
98Misc slides
99- Non-formal theories of those aspects of language use which resist formalisation can be used as a basis for the design of aspects of dialogue systems that do not need to be modelled by the system itself
- For example, it is likely that any speech synthesizer voice has certain emotional or other cognitive connotations
- it might sound silly, angry, etc.
- It is extremely difficult, if not impossible, to design a completely neutral voice
- However, if we have some idea of how different voices are perceived by humans, we can use this (informal) knowledge to provide a dialogue system application with an appropriate voice for that application
100Dreyfus' account of skill acquisition
- 5 stages
- 1. Beginner/student: rule-based processing
- learning and applying rules for manipulating context-free elements
- There is thus a grain of truth in GOFAI
- 2. Understanding the domain: seeing meaningful aspects, rather than context-free features
- 3. Setting goals and looking at the current situation in terms of what is relevant
- 4. Seeing a situation as having a certain significance toward a certain outcome
- 5. Expert: the ability to instantaneously select correct responses (dispositions)
- (Note: this is how adults typically learn. Infants, on the other hand...
- learn by imitation
- pick up on a style that pervades their society)
102- The question is still open exactly how far it is possible to go in the formal description of phenomena related to language use
- The only way to find out is by trial and error (i.e., research)
- In this, I suggest one might be well advised to keep in mind the following points...
103- From observations of some domain or aspect of language use...
- ... the researcher creates a formal representation...
- ... and implements it in a dialogue system
- When this dialogue system is used and interacts with humans,
- a new domain of interaction comes into being
- depending on the design of the system, different conversational patterns will arise
- Domains of language use that may be susceptible to formalisation (i.e. the creation of systematic domains) can be roughly divided into pragmatic and semantic domains
104Usefulness of formal semantics and pragmatics
- Systems based on formal representations provide great potential for improving human-computer interaction
105(An aside: human reinforcement)
106Overview
- Introduction
- Formal dialogue research and GOFAI
- Phenomenological arguments against GOFAI
- Simulation and NEFAI
- Formal dialogue research and dialogue systems design
- Conclusions
107- Currently, the programmer must supply the machine with a rule formulating what to feed back as reinforcement
- What is the reinforcement signal for humans?
- Survival?
- Pleasure vs. pain?
- This requires having needs, desires, emotions
- Which in turn may depend on the abilities and vulnerabilities of a biological body
108- Dreyfus, Haugeland and others trace the idea back to Plato; it pervades most of Western philosophy
- This shifts the burden of proof onto GOFAI