Title: Do we need robot morality?
1. Do we need robot morality?
2. WHAT IS INTELLIGENCE?
- Pragmatic definition of intelligence: an intelligent system is a system with the ability to act appropriately (or make an appropriate choice or decision) in an uncertain environment.
- An appropriate action (or choice) is one that maximizes the probability of successfully achieving the mission goals (or the purpose of the system); this is formalized in the sketch below.
- Intelligence need not be at the human level.
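The second bullet can be stated compactly as a decision rule (a sketch; the symbols a, A, e, and G below are illustrative and do not appear in the original slides):

a^{*} = \arg\max_{a \in A} P(G \mid a, e)

where A is the set of available actions, e is the (uncertain) state of the environment, and G is the event that the mission goals (or the purpose of the system) are achieved.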
3. Human-Robot Interaction
- Interaction
- Intelligence
- Morality
- Consciousness?
4.
- Robot morality is a relatively new research area which is becoming very popular because of military and assistive robotics.
5. WHY ROBOT MORALITY?
- These robots live in human environments and can harm humans physically.
- Robots are becoming technically extremely sophisticated.
- The emerging robot is a machine with sensors, processors, and effectors, able to perceive the environment, have situational awareness, make appropriate decisions, and act upon the environment.
- Various sensors: active and passive optical and ladar vision, acoustic, ultrasonic, RF, microwave, touch, etc.
- Various effectors: propellers, wheels, tracks, legs, hybrids.
- Military unmanned vehicles are robots: space, air, ground, water.
6. Ethical concerns: robot behavior
- How do we want our intelligent systems to behave?
- How can we ensure they do so?
- Asimov's Three Laws of Robotics (a toy rule-checking sketch follows this list):
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey orders given to it by human beings except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
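As a toy illustration of how such prioritized laws might be checked in software (a minimal sketch in Python; every name below is a hypothetical illustration, not an implementation from the slides), an action filter could reject any proposed action that violates the First Law before the later laws are even considered:

# Heavily simplified "Three Laws" permission filter (illustrative only).
# It decides whether a single proposed action is permitted; it does not
# capture the obligations to obey (Law 2) or to self-preserve (Law 3).
from dataclasses import dataclass
from typing import List

@dataclass
class Action:
    harms_human: bool          # executing it would injure a human (Law 1)
    prevents_human_harm: bool  # it would save a human from harm (Law 1, inaction clause)
    ordered_by_human: bool     # a human ordered it (Law 2)
    endangers_robot: bool      # it risks the robot's existence (Law 3)

def permitted(action: Action, alternatives: List[Action]) -> bool:
    # Law 1: a robot may not injure a human being...
    if action.harms_human:
        return False
    # ...or, through inaction, allow a human being to come to harm:
    # if some harmless alternative would prevent the harm, an action that
    # fails to prevent it is ruled out.
    harmless_rescue_exists = any(
        a.prevents_human_harm and not a.harms_human for a in alternatives)
    if harmless_rescue_exists and not action.prevents_human_harm:
        return False
    # Laws 2 and 3 express priorities among the remaining permitted actions
    # (obey orders first, then preserve yourself); a planner would rank the
    # survivors by (obeys_order, preserves_self) rather than reject them here.
    return True

# Example: a harmful order is refused even though a human gave it.
assert not permitted(Action(True, False, True, False), [])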
7. Ethical concerns: human behavior
- Is it morally justified to create intelligent systems with these constraints?
- As a secondary question, would it be possible to do so?
- Should intelligent systems have free will? Can we prevent them from having free will?
- Will intelligent systems have consciousness? (Strong AI)
- If they do, will it drive them insane to be constrained by artificial ethics placed on them by humans?
- If intelligent systems develop their own ethics and morality, will we like what they come up with?
8. Department of Defense (DOD): PATH TOWARD AUTONOMY
9. A POTPOURRI OF MILITARY ROBOTS
- Many taxonomies have been used for robotic air,
ground, and water vehicles based on size,
endurance, mission, user, C3 link, propulsion,
mobility, altitude, level of autonomy, etc., etc.
10. All autonomous future military robots will need morality; household and assistive robots will as well.
11. WHICH TECHNOLOGIES ARE RELATED TO ROBOT MORALITY?
- Various control system architectures (see the sketch after this list):
- deliberative,
- reactive,
- hybrid
- Various command, control, and communications systems:
- cable,
- fiber optic,
- RF,
- laser,
- acoustic
- Various human/machine interfaces:
- displays,
- telepresence,
- virtual reality
- Various theories of intelligence and autonomy:
- Evolutionary
- Probabilistic
- Learning
- Developmental
Can we build morality without intelligence?
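To connect the architecture question to morality, here is a minimal sketch (in Python; every function and name is a hypothetical illustration, not from the slides) of a hybrid control loop in which a fast reactive layer can preempt the slower deliberative planner, and an ethics filter vets whichever action is about to be executed:

# Hypothetical hybrid (deliberative + reactive) control step with an ethical veto.
from typing import Callable, Dict, List, Optional

Percept = Dict[str, bool]
Action = str

def hybrid_step(percept: Percept,
                deliberate: Callable[[Percept], List[Action]],    # slow planner: ranked proposals
                react: Callable[[Percept], Optional[Action]],     # fast reflexes: may preempt
                ethically_ok: Callable[[Action, Percept], bool],  # morality filter
                ) -> Action:
    # 1. The reactive layer handles urgent situations first (e.g. imminent collision).
    reflex = react(percept)
    if reflex is not None and ethically_ok(reflex, percept):
        return reflex
    # 2. Otherwise take the best deliberative proposal that passes the ethics filter.
    for proposal in deliberate(percept):
        if ethically_ok(proposal, percept):
            return proposal
    # 3. If nothing passes, fall back to a conservative default.
    return "stop_and_ask_operator"

# Example wiring with trivial stand-ins:
chosen = hybrid_step(
    {"obstacle_close": False, "human_nearby": True},
    deliberate=lambda p: ["push_through_crowd", "wait"],
    react=lambda p: "emergency_brake" if p["obstacle_close"] else None,
    ethically_ok=lambda a, p: not (a == "push_through_crowd" and p["human_nearby"]),
)
# chosen == "wait": the planner's first choice is vetoed because a human is nearby.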
12. The Tokyo University of Science's Saya
Morality for non-military robots that deal directly with humans.
13. Robots that look human
- "Robots that look human tend to be a big hit with young children and the elderly," Hiroshi Kobayashi, Tokyo University of Science professor and Saya's developer, said yesterday.
- "Children even start crying when they are scolded."
14. Human-Robot Interaction with human-like humanoid robots
- "Simply turning our grandparents over to teams of robots abrogates our society's responsibility to each other, and encourages a loss of touch with reality for this already mentally and physically challenged population," Kobayashi said.
15. Can robots replace humans?
- Noel Sharkey, robotics expert and professor at
the University of Sheffield, believes robots can
serve as an educational aid in inspiring interest
in science, but they can't replace humans.
16. Robot to help people? (http://news.xinhuanet.com/english/2009-03/12/content_10995694.htm)
- Kobayashi says Saya is just meant to help people and warns against getting hopes up too high for its possibilities.
- "The robot has no intelligence. It has no ability to learn. It has no identity," he said. "It is just a tool."
17.
18. Receptionist
19. MechaDroyd Typ C3 (Business Design, Japan)
- What kind of morality do we expect from:
- a robot for the disabled?
- a receptionist robot?
- a robot housemaid?
- a robot guide?
20. Human-Robot Interaction: robots for the elderly in Japan
21. Jobs for robots (http://uk.reuters.com/article/idUKT27506220080408)
- TOKYO (Reuters) - Robots could fill the jobs of 3.5 million people in graying Japan by 2025, a think tank says, helping to avert worker shortages as the country's population shrinks.
22. Robots to fill jobs in Japan
- Japan faces a 16 percent slide in the size of its
workforce by 2030 while the number of elderly
will mushroom, the government estimates, raising
worries about who will do the work in a country
unused to, and unwilling to contemplate,
large-scale immigration.
23. HR-Interaction in Japan: robots to fill jobs in Japan
- The think tank, the Machine Industry Memorial Foundation, says robots could help fill the gaps, ranging from micro-sized capsules that detect lesions to high-tech vacuum cleaners.
24. HR-Interaction in Japan: robots to fill jobs in Japan
- Rather than each robot replacing one person, the
foundation said in a report that robots could
make time for people to focus on more important
things.
25. What is more important than work?
- What kind of more important things?
- This is an ethical question.
26. Using robots that monitor the health of older people in Japan
- Japan could save 2.1 trillion yen ($21 billion) in elderly insurance payments in 2025 by using robots that monitor the health of older people, so they don't have to rely on human nursing care, the foundation said in its report.
27. Plans for robot nursing in Japan
- What are the consequences of relying on robot nursing?
- This is an ethical question.
28. Assistive Robots
- Caregivers would save more than an hour a day if robots:
- helped look after children,
- helped older people,
- did some housework,
- read books out loud,
- helped bathe the elderly.
29. How will children and the elderly respond?
- How will children and the elderly react to robots taking care of them?
- This is an ethical question.
30. Seniors in Japan
- "Seniors are pushing back their retirement until they are 65 years old, day care centers are being built so that more women can work during the day, and there is a move to increase the quota of foreign laborers. But none of these can beat the shrinking workforce," said Takao Kobayashi, who worked on the study.
31. HR-Interaction in Japan: seniors in Japan
- "Robots are important because they could help in some ways to alleviate such shortage of the labor force."
32. HR-Interaction in Japan: seniors in Japan
- How far will they alleviate such a shortage of the labor force?
- And with what consequences?
- This is an ethical question.
33. HR-Interaction in Japan: seniors in Japan
- Kobayashi said changes were still needed for robots to make a big impact on the workforce.
- "There's the expensive price tag, the functions of the robots still need to improve, and then there are the mindsets of people," he said.
- "People need to have the will to use the robots."
34. HR-Interaction in Japan: seniors in Japan
- The mindsets of people: this is THE ethical question!
35.
36. First robots in entertainment
- Neologism derived from the Czech noun "robota", meaning "labor".
- Contrary to popular opinion, the word was not originated by (but was first popularized by) Karel Capek, the author of R.U.R.
- It was originated by Josef Capek, Karel's older brother (a painter and writer).
- "Robot" first appeared in Karel Capek's play R.U.R., published in 1920.
- Some claim that "robot" was first used in Josef Capek's short story Opilec (The Drunkard), published in the collection Lelio in 1917, but the word used in Opilec is "automat".
- Robots revolt against their human masters: a cautionary lesson, now as then.
37. WHAT IS A ROBOT?
- Many taxonomies (a small data-structure sketch follows this list)
- Control taxonomy
- Pre-programmed (automatons)
- Remotely-controlled (telerobots)
- Supervised autonomous
- Autonomous
- Operational medium taxonomy
- Space
- Air
- Ground
- Sea
- Hybrid
- Functional taxonomy
- Military
- Industrial
- Household
- Commercial
- Etc.
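Purely for illustration (a sketch in Python; none of these class or member names come from the slides), the three taxonomy axes above can be treated as independent attributes of a robot description:

# Hypothetical encoding of the control, medium, and functional taxonomies.
from dataclasses import dataclass
from enum import Enum, auto

class Control(Enum):
    PREPROGRAMMED = auto()   # automatons
    REMOTE = auto()          # telerobots
    SUPERVISED = auto()      # supervised autonomous
    AUTONOMOUS = auto()

class Medium(Enum):
    SPACE = auto()
    AIR = auto()
    GROUND = auto()
    SEA = auto()
    HYBRID = auto()

class Function(Enum):
    MILITARY = auto()
    INDUSTRIAL = auto()
    HOUSEHOLD = auto()
    COMMERCIAL = auto()
    OTHER = auto()

@dataclass
class RobotClass:
    control: Control
    medium: Medium
    function: Function

# Example: a supervised autonomous military ground vehicle.
ugv = RobotClass(Control.SUPERVISED, Medium.GROUND, Function.MILITARY)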
38. Entertainment (http://www.thepartypups.co/)
39. Sony Aibo
40. Football
41. RoboCup
42. Love robots in Japan (http://jankcl.wordpress.com/2007/08/12/lovecom-18/)
43. EMA (Eternal Maiden Actualization) in Japan (http://www.fun-on.com/technology_robot_girlfriend.php)
What kind of intelligence and morality would you expect from an ideal robot for entertainment?
44. Why Ethics of Robots?
45. Why Ethics of Robots?
- Robots behave according to rules we program.
- We are responsible for their behavior.
- But as they are autonomous, they can decide what to do or not to do in a specific situation.
- This is the human/robot moral dilemma.
46. Ethics of Robots: West and East
- Roughly speaking:
- Europe: Deontology (autonomy, human dignity, privacy, anthropocentrism); scepticism with regard to robots.
- USA (and the Anglo-Saxon tradition): Utilitarian ethics; will robots make us happier?
- Eastern tradition (Buddhism): robots as one more partner in the global interaction of things.
47. Ethics of Robots: West and East
- Morality and ethics:
- Ethics as critical reflection on (or problematization of) morality.
- Ethics is the science of morals, as robotics is the science of robots.
48. Concrete moral traditions
- Different ontic, or concrete historical, moral traditions, for instance:
- In Japan:
- Seken (traditional Japanese morality),
- Shakai (imported Western morality),
- Ikai (old animistic tradition)
- In the Far West:
- Ethics of the Good (Plato, Aristotle),
- Christian Ethics,
- Utilitarian Ethics,
- Deontological Ethics (Kant)
49. Ethics of Robots: ontological dimensions
- Ontological dimension: Being or (Buddhist) Nothingness as the space of open possibilities that allows us to criticize ontic moralities.
- Always related to basic moods (like sadness, happiness, astonishment, ...) through which the uniqueness of the world and of human existence is experienced (differently in different cultures).
50. ASIMO's evolution (http://www.rob.cs.tu-bs.de/teaching/courses/seminar/Laufen_Mensch_vs_Roboter/)
51. ASIMO's evolution (http://www.rob.cs.tu-bs.de/teaching/courses/seminar/Laufen_Mensch_vs_Roboter/)
If the robot looks like a human, do we have different expectations?
Would you kill a robot car?
Would you kill a robot insect that would react with squeaky noises and escape in panic?
Would you kill a robot biped that would react by begging you to save its life?
52. Why Ethics of Robots?
53. Why Ethics of Robots?
- Ethics is thinking about human rules of good/bad behavior:
- towards each other,
- towards non-human living beings,
- towards the environment,
- towards artificial products,
- towards other societies or nations,
- towards God or gods (depending on the culture).
54. AA versus AC versus AE versus AI?
- Artificial Agency (AA)
- Artificial Consciousness (AC)
- Artificial Ethics (AE)
- Artificial Intelligence (AI)
- Our interaction with them
- And our ethical relation to them
55.
56. Artificial X
- One kind of definition-schema:
- Creating machines which perform in ways which require X when humans perform in those ways
- (or which justify the attribution of X?)
- Outward performance versus psychological reality within?
X = Intelligence, Life, Morality, etc.
57. Artificial Consciousness
- Artificial Consciousness (AC)
- → creating machines which perform in ways which require consciousness when humans perform in those ways (?)
- Where is the psychological reality of consciousness in this?
- → functional versus phenomenal consciousness?
58. Shallow and deep AC research
- Shallow AC: developing functional replications of consciousness in artificial agents,
- without any claim to inherent psychological reality.
- Deep AC: developing psychologically real (phenomenal) consciousness.
59. Continuum or divide?
- Continuum or divide? (discrete or analog?)
- Is deep AC realizable using current computationally-based technologies (or does it require biological replications)?
- Will it require quantum computing or biology-like computing?
- Thin versus thick phenomenality
- (See S. Torrance, "Two Concepts of Machine Phenomenality", to be submitted, JCS)
60. Real versus simulated AC: an ethically significant boundary?
- Psychologically real versus just simulated artificial consciousness
- → this appears to mark an ethically significant boundary
- (perhaps unlike the comparable boundary in AI?)
- Not to deny that debates like the Chinese Room have aroused strong passions over many years
- Working in the area of AC (unlike working in AI?) puts special ethical responsibilities on the shoulders of researchers
61. Techno-ethics
- This takes us into the area of techno-ethics:
- Reflection on the ethical responsibilities of those who are involved in technological R&D
- (including the technologies of artificial agents (AI, robotics, MC, etc.))
- Broadly, techno-ethics can be defined as:
- Reflection on how we, as developers and users of technologies, ought to use such technologies to best meet our existing ethical ends, within existing ethical frameworks
- Much of the ethics of artificial agent research comes under the general techno-ethics umbrella
62. From techno-ethics to artificial ethics
- What's special about artificial agent research is that the artificial agents so produced may count (in various senses) as ethical agents in their own right.
- This may involve a revision of our existing ethical conceptions in various ways,
- particularly when we are engaged in research in (progressively deeper) artificial consciousness.
- Bearing this in mind, we need to distinguish between techno-ethics and artificial ethics
- (the latter may overlap with the former).
Artificial ethics: what ethics we will put into future robots.
Techno-ethics: our responsibility for our creations.
63.
64. Towards artificial ethics (AE)
- A key puzzle in AE:
- Perhaps ethical reality (or real ethical status) goes together with psychological reality?
Can a robot be ethical if it is not psychologically similar to you?
65. Shallow and deep AE
- Shallow AE:
- Developing ways in which the artificial agents we produce can conform to, or simulate, the ethical constraints we believe desirable
- (perhaps a sub-field of techno-ethics?)
- Deep AE:
- Creating beings with inherent ethical status?
- Rights of robots, rights of human owners of robots?
- Responsibilities of robots, responsibilities of humans towards robots?
- The boundaries between shallow and deep AE may be perceived as fuzzy,
- and may be intrinsically fuzzy.
You do not want your robot to hurt humans (or other robots?).
66. Proliferation of new technologies in the world
- A reason for taking this issue seriously:
- AA, AC, etc. as potential mass-technologies
- Tendency for successful technologies to proliferate across the globe
- What if AC becomes a widely adopted technology?
- This should raise questions both
- of a techno-ethical kind
- and of a kind specific to AE.
- Everybody would like to have a robot slave.
- Every educated/rich Roman had a slave.
- Every professor in the 19th century had a maid.
67. Instrumentality
- Instrumental versus intrinsic stance:
- Normally we take our technologies as our tools or instruments.
- Instrumental/intrinsic division in relation to the psychological reality of consciousness?
- As we progress towards deep AC there could be a blurring of the boundaries between the two
- (already seen in a small way with the emerging caring attitudes of humans towards people-friendly robots).
- This is one illustration of the move from conventional techno-ethics to artificial ethics.
Instrumental: the robot is just a device.
Intrinsic: if an old lady has a robot that she loves, her children cannot just throw the old robot into the garbage can.
68. Artificial Ethics (AE)
- AE could be defined as:
- The activity of creating systems which perform in ways which imply (or confer) the possession of ethical status when humans perform in those ways. (?)
- The emphasis on performance could be questioned.
- What is the relation between AE and Artificial Consciousness (AC)?
- What is ethical (moral) status?
69.
- Two key elements of the moral status of a robot
70.
- Can the robot harm the community?
- Can the community harm the robot?
(community = the totality of moral agents)
71. X is a member of the community (X: one moral agent, within the totality of moral agents)
72. Two key elements of X's moral status (in the eyes of Y)
- (a) X's being the recipient or target of moral concern by Y (moral consumption): Y → X
- (b) X's being the source of moral concern towards Y (moral production): X → Y
73. Ethical status in the absence of consciousness
- Trying to refine our conception of the relation between AC and AE:
- What difference does consciousness make to artificial agency?
- In order to shed light on this question we need to investigate
- the putative ethical status of artificial agents (AAs) when (psychologically real) consciousness is acknowledged to be ABSENT.
A retired general has a superintelligent robot that does not look like a human and is not psychologically humanoid. Can he dismantle the robot to pieces for fun? Can he shoot at it, since he paid for it?
74. Our ethical interaction with non-conscious artificial agents
- Could non-conscious artificial agents have genuine moral status:
- (a) as moral consumers?
- (having moral claims on us)
- (b) as moral producers?
- (having moral responsibilities towards us (and themselves))
Should the robot that kills a human be killed?
(The dog or horse that kills a human is ordered by the law to be killed.)
75. A Strong View of AE
- Psychologically real consciousness is necessary for AAs to be considered BOTH
- (a) as genuine moral consumers
- AND
- (b) as genuine moral producers,
- AND there are strong constraints on what counts as psychologically real consciousness.
- So, on the strong view, non-conscious AAs will have no real ethical status.
The MIT strong-AI researchers will now be in trouble; explain why.
76.
- One way to weaken the strong view:
- by accepting weaker criteria for what counts as psychologically real consciousness,
- e.g. by saying "Of course you need consciousness for ethical status, but soon robots, etc., will be conscious in a psychologically real sense."
77. A weaker view of AE
- Psychologically real consciousness is NOT necessary for an Artificial Agent (AA) to be considered
- (a) as a genuine moral producer
- (i.e. as having genuine moral responsibilities)
- But it may be necessary for an AA to be considered
- (b) as a genuine moral consumer
- (i.e. as having genuine moral claims on the moral community)
(The strong and weaker views are contrasted formally just below.)
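The contrast between the strong view (slide 75) and this weaker view can be put compactly in predicate form (a sketch; the predicate names C, MC, and MP, for "has psychologically real consciousness", "is a genuine moral consumer", and "is a genuine moral producer", are illustrative and not from the original):

\text{Strong view: } \forall x\,[(MC(x) \rightarrow C(x)) \wedge (MP(x) \rightarrow C(x))]

\text{Weaker view: } \forall x\,[MC(x) \rightarrow C(x)], \text{ while } MP(x) \text{ need not imply } C(x)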
78. A version of the weaker view
- A version of the weaker view is to be found in:
- Floridi, L. and Sanders, J. (2004). "On the Morality of Artificial Agents", Minds and Machines, 14(3): 349-379.
- Floridi and Sanders: some (quite weak kinds of) artificial agents may be considered as having a genuine kind of moral accountability,
- even if not moral responsibility in a full-blooded sense
- (i.e. this kind of moral status may attach to such agents quite independently of their status as conscious agents).
79. Examining the strong view
- See Steve Torrance, "Ethics and Consciousness in Artificial Agents", Artificial Intelligence and Society.
- Being a fully morally responsible agent requires:
- empathetic intelligence or rationality,
- moral emotions or sensibilities.
- These seem to require the presence of psychologically real consciousness.
- BUT...
80. Shallow artificial ethics: a paradox
- Paradox:
- Even if not conscious, we will expect artificial agents to behave responsibly,
- → to perform outwardly to ethical standards of conduct.
- This creates an urgent and very challenging programme of research for now:
- → developing appropriate shallow ethical simulations.
- How can you make a robot responsible for its actions if it has no real morality?
- If it has real morality, you cannot kill it.
81. Who is responsible: the robot or the designer?
- Locus of responsibility:
- Where would the locus of responsibility of such systems lie?
- For example, when they break down, give wrong advice, etc.?
- On the current consensus: with designers and operators rather than with the AA itself.
- If only with human designers/users, then such moral AAs don't seem to have genuine moral status even as moral producers?
- BUT:
- Is Alan responsible if his robot insults the US President during a visit?
- Is the robot responsible?
- Is PSU responsible?
- Perkowski?
82. Moral implications of the increasing cognitive superiority of AAs
- We'll communicate with artificial agents (AAs) in richer and subtler ways.
- We may look to AAs for moral advice and support.
- We may defer to their normative decisions,
- e.g. when a multiplicity of factors requires cognitive powers superior to humans'.
- → Automated moral pilot systems?
Whom do we blame for the bad behavior of children?
Busy professional parents will rely on a robot to give moral advice to their children.
Roman children often loved their Greek slave teachers more than their parents.
What if the child loves the robot more than Mommy?
83. Non-conscious AAs as moral producers
- None of these properties seem to require consciousness.
- → So the strong view seems to be in doubt?
- → Perhaps non-conscious AAs can be genuine moral producers.
- The question: when can we trust a moral judgment given by a machine?
- → See the answer in Blay Whitby, "Computing Machinery and Morality", submitted, AI and Society.
(Compare: killing a slave or low-class person in the past.)
84.
- So non-conscious artificial agents perhaps could be genuine moral producers,
- at least in limited sorts of ways.
85.
- In contrast, in the paper "Ethics and Consciousness in Artificial Agents" the author believes:
- Having the capacity for genuinely morally responsible judgment and action requires a kind of empathic rationality,
- and it is difficult to see how such empathic rationality could exist in a being which didn't have psychologically real consciousness.
86.
- In any case, it will be a hard and complex job to ensure that the robots designed for morality will simulate moral production in an ethically acceptable way.
87. Non-conscious AAs as moral consumers
88. Non-conscious AAs as moral consumers
- What about non-conscious AAs as moral consumers?
- (i.e. as candidates for our moral concern)
- Our moral responsibility for a robot?
- Could it ever be rational for us to consider ourselves as having genuine moral obligations towards non-conscious AAs?
89. Consciousness and moral consumption
- At first sight, being a true moral consumer seems to require being able to consciously experience pain, distress, need, satisfaction, joy, sorrow, etc.
- i.e. psychologically real consciousness.
- Otherwise, why waste resources?
- Can we dispose of robots at our will when convenient?
90. Example of our responsibility for a robot: the case of property ownership
- AAs may come to have interests which we may be legally (and morally?) obliged to respect.
- Andrew Martin is a robot in Bicentennial Man.
- Andrew acquires (through the courts) legal entitlement to own property in his own person.
91. Bicentennial Man
- In Bicentennial Man, a household android is acquired by the Martin family and christened Andrew.
- His decorative products, exquisitely crafted from driftwood, become highly prized collectors' items.
92. Bicentennial Man (cont.)
93. Bicentennial Man (cont.)
- Andrew, arguably, has legal rights to his property.
- It would be morally wrong for us not to respect them (e.g. to steal from him).
- His right to maintain his property (and our obligation not to infringe that right) does not depend on our attributing consciousness to him.
94. Bicentennial Man (cont.)
- A case of robot moral (not just legal) rights?
- Andrew, arguably, has moral, not just legal, rights to his property.
- Would it not be morally wrong for us not to respect his legal rights?
- (Morally wrong, e.g., to steal from him?)
95. Does it matter if he is non-conscious? (Bicentennial Man, cont.)
- Arguably, Andrew's moral rights to maintain his property (and our moral obligation not to infringe those rights) do not depend on our attributing consciousness to him.
96. Bicentennial Man (cont.)
- On the legal status of artificial agents, see David Calverley, "Imagining a Non-Biological Machine as a Legal Person", submitted, Artificial Intelligence and Society.
- For further related discussion of Asimov's Bicentennial Man, see Susan Leigh Anderson, "Asimov's Three Laws of Robotics and Machine Metaethics".
97. Super-Intelligent Robots?
98. Can developing super-intelligent robots affect the whole of human civilization and the fate of the Universe?
99. Hugo de Garis
The question is not whether we will design intelligent robots; the question is whether we should design gods who will supersede our intelligence and consciousness. Artilects, artilect wars?
100. TECHNOLOGY FORECASTING
- First-order impacts: linear extrapolation (faster, better, cheaper).
- Second- and third-order impacts: non-linear, more difficult to forecast.
- Analogy: the automobile in 1909.
- Faster, better, cheaper than the horse and buggy (but initially does not completely surpass the previous technology).
- Then industrial changes: rise of the automotive industry, the oil industry, road and bridge construction, etc.
Even though they have no intelligence and consciousness, new technologies such as cars, TV, or computers have affected our lives morally and intellectually.
101. Influence of cars on our lives!
- Then cars affected social changes
- clothing,
- rise of suburbs,
- family structure (teenage drivers, dating),
- increasing wealth
- and personal mobility
- Then cars affected geopolitical changes
- oil cartels,
- foreign policy,
- religious and tribal conflict,
- wars,
- environmental degradation
- and global warming
102. Conclusions
- We need to distinguish between shallow and deep AC and AE.
- We need to distinguish techno-ethics from artificial ethics (especially strong AE).
- There seems to be a link between an artificial agent's status as a conscious being and its status as an ethical being.
- A strong view of AC says that genuine ethical status in artificial agents (both as ethical consumers and as ethical producers) requires psychologically real consciousness in such agents.
103. Conclusions, continued
- Questions can be raised about the strong view (automated ethical advisors; property ownership).
- There are many important ways in which a kind of (shallow) ethics has to be developed for present-day and future non-conscious agents.
- But in an ultimate, deep sense, perhaps AC and AE go together closely
- (see the paper "Ethics and Consciousness in Artificial Agents" for a defense of the strong view in a much more robust form, as the "organic view").
105. Sources of slides
- Robert Finkelstein
- Steve Torrance, Middlesex University, UK
- Rafael Capurro (http://www.capurro.de/home-jp.html)
- Steinbeis Transfer Institut Information Ethics (STI-IE) (http://sti-ie.de)
- Cybernics, University of Tsukuba, Japan (http://www.cybernics.tsukuba.ac.jp/index.html)
- September 30, 2009
106.
- This is an expanded version of a talk given at a conference of the ETHICBOTS project in Naples, Oct 17-18, 2006.
- See S. Torrance, "The Ethical Status of Artificial Agents With and Without Consciousness" (extended abstract), in G. Tamburrini and E. Datteri (eds), Ethics of Human Interaction with Robotic, Bionic and AI Systems: Concepts and Policies, Napoli: Istituto Italiano per gli Studi Filosofici, 2006.
- See also S. Torrance, "Ethics and Consciousness in Artificial Agents", submitted to Artificial Intelligence and Society.