Title: Introduction to Artificial Intelligence
1 Introduction to Artificial Intelligence
- Lectured by
- Yen-Hsien Lee
- Department of Management Information Systems
- College of Management
- National Chiayi University
- September 24, 2008
2 The Brief History of AI
- 1956: Birth of AI
- The Dartmouth workshop held by McCarthy
- 1952-1969: Big jump of AI?
- General Problem Solver, imitating human problem-solving protocols (Newell and Simon)
- Geometry Theorem Prover, proving tricky theorems (Gelernter)
- Checkers program, learning to play checkers at an amateur level (Samuel)
- Lisp, the AI programming language (McCarthy)
- Advice Taker, the first complete AI system, designed to use general knowledge of the world to search for solutions to problems (McCarthy)
- SAINT, solving closed-form calculus integration problems (Slagle)
3 The Brief History of AI (Cont'd)
- 1952-1969: Big jump of AI? (Cont'd)
- ANALOGY, solving geometric analogy problems appearing in IQ tests (Evans)
- STUDENT, solving algebra story problems (Bobrow)
- Blocks world (Minsky)
- Neural networks (McCulloch and Pitts)
- Adaline, a learning network that enhanced Hebb's learning methods (Widrow and Hoff)
- Perceptron convergence theorem for learning networks (Rosenblatt)
4 The Brief History of AI (Cont'd)
- 1966-1973: Failures of AI?
- Simple syntactic manipulations on natural language
- Intractability of many of the problems
- Fundamental limitations on the basic structures used to generate intelligent behavior
5 The Brief History of AI (Cont'd)
- 1969-1979: Resurgence of AI
- The emergence of knowledge-based systems
- From general-purpose to specific-domain
- From common sense to expertise
- From elementary reasoning to knowledge-based reasoning
- The separation of the knowledge (in the form of rules) from the reasoning component
- DENDRAL, solving the problem of inferring molecular structure from the information provided by a mass spectrometer (Buchanan et al.)
- MYCIN, diagnosing blood infections (able to handle uncertainty)
- SHRDLU, understanding natural language (designed for a specific domain)
- Various knowledge representation and reasoning methods
6 The Brief History of AI (Cont'd)
- 1980-present: AI becomes an industry
- The first commercial expert system, R1, configuring orders for new computer systems (McDermott, DEC)
- Fifth Generation Project (Japan)
- Microelectronics and Computer Technology Corporation (U.S.)
- 1986-present: The return of neural networks
- Reinvention of the back-propagation learning algorithm
7 The Brief History of AI (Cont'd)
- 1987-present: AI becomes a science
- Hypotheses must be subjected to rigorous empirical experiments, and the results must be analyzed statistically for their importance (Cohen, 1995)
- Hidden Markov Models (HMMs) dominating the speech recognition area
- Bayesian networks, based on probability and decision theory, dominating AI research on uncertain reasoning and expert systems
8 The Brief History of AI (Cont'd)
- 1995-present: The emergence of intelligent agents
- Starting to look at the whole-agent problem again
- An agent has to possess attributes such as operating under autonomous control, perceiving its environment, persisting over a prolonged time period, adapting to change, and being capable of taking on another's goals
- SOAR, a complete agent architecture; the situated movement aims to understand the workings of agents embedded in real environments with continuous sensory inputs
- Search engines, recommender systems, website construction systems
- The consequences of the agent perspective:
- The reorganization of subfields of AI research
- The integration of other fields related to AI research, such as control theory and economics
9 What Can AI Do Now?
- Autonomous planning and scheduling
- Game playing
- Autonomous control
- Diagnosis
- Logistics planning
- Robotics
- Language understanding and problem solving
- etc.
11 So, What is AI?
12 What is AI?
- Some definitions of AI are given on the following slides.
13 Acting Humanly: The Turing Test Approach
- Turing (1950) proposed the Turing Test, based on indistinguishability from undeniably intelligent entities: human beings.
- The computer passes the test if a human interrogator, after posing some written questions, cannot tell whether the written responses come from a person or not.
- Suggested major components of AI: knowledge representation, automated reasoning, natural language processing, machine learning (plus computer vision and robotics)
14 Thinking Humanly: The Cognitive Modeling Approach
- Once we have a sufficiently precise theory of the mind, it becomes possible to express the theory as a computer program.
- 1960s "cognitive revolution": information-processing psychology
- Requires scientific theories of the internal activities of the brain
- How to validate them?
- 1) Predicting and testing the behavior of human subjects (top-down), or
- 2) Direct identification from neurological data (bottom-up)
15 Thinking Rationally: The Laws of Thought Approach
- Aristotle: what are correct arguments/thought processes?
- His syllogisms provide patterns for argument structures that always yield correct conclusions when given correct premises.
- The laws of thought are supposed to govern the operation of the mind.
- Problems:
- It is not easy to take informal knowledge and state it in formal terms, particularly knowledge involving uncertainty.
- There is a difference between being able to solve a problem in principle and doing so in practice.
16 Acting Rationally: The Rational Agent Approach
- Rational behavior: doing the right thing
- The right thing: that which is expected to maximize goal achievement, given the available information
- Rational behavior does not necessarily involve thinking (e.g., the blinking reflex), but thinking should be in the service of it.
- An agent is an entity that perceives and acts.
- What is a rational agent?
17 Acting Rationally: The Rational Agent Approach (Cont'd)
- A rational agent is one that acts so as to achieve the best outcome or, when there is uncertainty, the best expected outcome.
- For any given class of environments and tasks, we seek the agent with the best performance.
- Caveat: computational limitations make perfect rationality unachievable.
- Advantages of studying rational agent design:
- It is more general than the laws-of-thought approach, since correct inference is only one of several ways of achieving rationality.
- The standard of rationality is more clearly defined and more completely general than standards based on human behavior and thought.
18 Agents
- An agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators.
- Human agent: eyes, ears, and other organs for sensors; hands, legs, mouth, and other body parts for actuators
- Robotic agent: cameras and infrared range finders for sensors; various motors for actuators
19 Rational Agents
- An agent should strive to "do the right thing", based on what it can perceive and the actions it can perform.
- But what does it mean to do the right thing?
- The right action is the one that causes the agent to be most successful.
- But what does it mean for the agent to succeed?
- The agent generates a sequence of actions according to the percepts it receives. If the sequence is desirable, then we say the agent performed well.
20 Rational Agents (Cont'd)
- An agent needs a performance measure, an objective criterion for the success of the agent's behavior.
- E.g., the performance measure of a vacuum-cleaner agent could be the amount of dirt cleaned up, the amount of time taken, the amount of electricity consumed, the amount of noise generated, etc.
- But can the above criteria really measure a vacuum-cleaner agent's performance?
- Should it be according to what you actually want in the environment, or according to how you think the agent should behave?
21 Rational Agents (Cont'd)
- What is rational depends on four things:
- The performance measure
- The agent's prior knowledge of the environment
- The actions that the agent can perform
- The agent's percept sequence to date
- Definition of a rational agent:
- For each possible percept sequence, a rational agent should select an action that is expected to maximize its performance measure, given the evidence provided by the percept sequence and whatever built-in knowledge the agent has.
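The definition above can be read as an argmax over the available actions. The sketch below is a toy illustration only: the percepts, actions, and the lookup table standing in for "built-in knowledge" are all invented.

```python
# Toy sketch of the rational-agent definition: select the action that is
# expected to maximize the performance measure, given the percept sequence
# and built-in knowledge. All names and values here are hypothetical.

def expected_performance(action, percept_sequence, knowledge):
    # Toy model: built-in knowledge scores each (last percept, action) pair.
    last_percept = percept_sequence[-1]
    return knowledge.get((last_percept, action), 0.0)

def rational_action(percept_sequence, actions, knowledge):
    # The argmax in the definition of rationality.
    return max(actions,
               key=lambda a: expected_performance(a, percept_sequence, knowledge))

# Example: in a dirty square, cleaning scores higher than moving.
knowledge = {("Dirty", "Suck"): 10.0, ("Dirty", "Right"): 0.0,
             ("Clean", "Right"): 1.0, ("Clean", "Suck"): -1.0}
print(rational_action(["Clean", "Dirty"], ["Suck", "Right"], knowledge))  # Suck
```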
22 PEAS
- Designing a rational (intelligent) agent starts with specifying its setting: Performance measure, Environment, Actuators, Sensors (PEAS).
- Consider the design (PEAS) of an automated taxi driver and that of an interactive English tutor:
- Performance measure
- Environment
- Actuators
- Sensors
23 PEAS (Cont'd)
- PEAS of an automated taxi driver
- Performance measure: safe, fast, legal, comfortable trip, maximize profits
- Environment: roads, other traffic, pedestrians, customers
- Actuators: steering wheel, accelerator, brake, signal, horn
- Sensors: cameras, sonar, speedometer, GPS, odometer, engine sensors, keyboard
- PEAS of an interactive English tutor
- Performance measure: maximize student's score on test
- Environment: set of students
- Actuators: screen display (exercises, suggestions, corrections)
- Sensors: keyboard
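A PEAS description is just a structured record of four lists. A minimal Python sketch, filled in with the taxi-driver values from this slide (the class and field names are our own):

```python
from dataclasses import dataclass

# A PEAS description as a simple record; field values are the taxi-driver
# example from the slide above.
@dataclass
class PEAS:
    performance_measure: list
    environment: list
    actuators: list
    sensors: list

taxi = PEAS(
    performance_measure=["safe", "fast", "legal", "comfortable trip",
                         "maximize profits"],
    environment=["roads", "other traffic", "pedestrians", "customers"],
    actuators=["steering wheel", "accelerator", "brake", "signal", "horn"],
    sensors=["cameras", "sonar", "speedometer", "GPS", "odometer",
             "engine sensors", "keyboard"],
)
print(taxi.performance_measure[0])  # safe
```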
24 Environment Types
- Fully observable (vs. partially observable)
- An agent's sensors give it access to the complete state of the environment at each point in time.
- Deterministic (vs. stochastic)
- The next state of the environment is completely determined by the current state and the action executed by the agent. (If the environment is deterministic except for the actions of other agents, then the environment is strategic.)
- Episodic (vs. sequential)
- The agent's experience is divided into atomic "episodes" (each episode consists of the agent perceiving and then performing a single action), and the choice of action in each episode depends only on the episode itself.
25 Environment Types (Cont'd)
- Static (vs. dynamic)
- The environment is unchanged while the agent is deliberating. (The environment is semidynamic if the environment itself does not change with the passage of time but the agent's performance score does.)
- Discrete (vs. continuous)
- A limited (finite) number of distinct, clearly defined percepts and actions.
- Single agent (vs. multiagent)
- An agent operating by itself in an environment.
26 Inside Work of an Agent
- The job of AI is to design the agent program that implements the agent function mapping percepts to actions.
- In order of increasing complexity, we outline four basic kinds of agent program:
- Simple reflex agents
- Model-based reflex agents
- Goal-based agents
- Utility-based agents
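Before the four specialized designs, the agent function itself can be implemented literally, if impractically, by a table-driven agent program that looks up the whole percept sequence in a table. The percept and action names below are invented for illustration.

```python
# Table-driven agent program: implements the agent function (percept
# sequence -> action) with an explicit lookup table. Impractical for real
# environments (the table grows with every possible percept history), but
# it shows exactly what the agent function is.

def make_table_driven_agent(table):
    percepts = []  # percept sequence to date, kept as internal state

    def program(percept):
        percepts.append(percept)
        return table.get(tuple(percepts), "NoOp")

    return program

table = {("Dirty",): "Suck", ("Clean",): "Right", ("Clean", "Dirty"): "Suck"}
agent = make_table_driven_agent(table)
print(agent("Clean"))  # Right
print(agent("Dirty"))  # Suck  (the sequence is now ("Clean", "Dirty"))
```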
27 Simple Reflex Agents
28 Simple Reflex Agents (Cont'd)
- The agent selects actions on the basis of the current percept, ignoring the rest of the percept history.
- In such an agent, each condition corresponds to an action; such a pairing is called a condition-action rule.
- Problems:
- Can only operate in a fully observable environment
- Incomplete condition-action rules
29 Model-based Reflex Agents
30 Model-based Reflex Agents (Cont'd)
- The agent maintains some sort of internal state that depends on the percept history and thereby reflects at least some of the unobserved aspects of the current state.
- Updating the internal state information requires two kinds of knowledge:
- Information about how the world evolves independently of the agent
- Information about how the agent's own actions affect the world
- Knowledge about how the world works is called a model of the world.
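The same vacuum world can illustrate internal state: the agent remembers what it has learned about the square it cannot currently see. The world model here (two squares, Suck always cleans) is a toy assumption.

```python
# Model-based reflex agent: internal state, updated from the percept
# history and a model of the world, lets the agent act on unobserved
# aspects of the environment (here, the status of the other square).

def make_model_based_vacuum_agent():
    state = {"A": "Unknown", "B": "Unknown"}  # internal model of both squares

    def program(percept):
        location, status = percept
        state[location] = status          # percepts update the model
        if status == "Dirty":
            state[location] = "Clean"     # model: Suck cleans the square
            return "Suck"
        other = "B" if location == "A" else "A"
        if state[other] == "Clean":
            return "NoOp"                 # model says everything is clean
        return "Right" if location == "A" else "Left"

    return program

agent = make_model_based_vacuum_agent()
print(agent(("A", "Dirty")))  # Suck
print(agent(("A", "Clean")))  # Right (status of B still unknown)
print(agent(("B", "Clean")))  # NoOp  (both squares now known clean)
```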
31 Goal-based Agents
32 Goal-based Agents (Cont'd)
- Knowing about the current state of the environment is not always enough to decide what to do.
- The agent needs some sort of goal information that describes situations that are desirable, e.g., the passenger's destination.
- A goal-based agent can combine its goal with information about the results of possible actions to choose actions that achieve the goal.
- Although the goal-based agent is less efficient, it is more flexible, because the knowledge that supports its decisions is represented explicitly and can be modified.
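The combination of goal plus action-outcome knowledge can be sketched in a few lines; the transition table below is a made-up one-step map, standing in for the search or planning a real goal-based agent would do.

```python
# Goal-based action selection: combine the goal (a desired state) with
# knowledge of what each action leads to, and pick an action whose
# predicted result satisfies the goal. Toy one-step version.

def goal_based_action(state, goal, transitions):
    # transitions[(state, action)] -> resulting state
    for (s, action), result in transitions.items():
        if s == state and result == goal:
            return action
    return None  # no single action achieves the goal from this state

transitions = {
    ("home", "drive"): "airport",
    ("home", "walk"): "park",
    ("park", "walk"): "home",
}
print(goal_based_action("home", "airport", transitions))  # drive
```

Changing the goal (or the transition knowledge) changes the behavior without rewriting any rules, which is the flexibility the slide describes.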
33 Utility-based Agents
34 Utility-based Agents (Cont'd)
- Goals alone are not really enough to generate high-quality behavior in most environments.
- For example, there are many action sequences that will get the taxi to its destination, but some are quicker, safer, or more reliable.
- Goals just provide a binary distinction between happy and unhappy states.
- A more general performance measure should allow a comparison of different world states according to how happy (what utility) they would give the agent if they could be achieved.
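The taxi example can be sketched directly: both routes satisfy the goal, but a utility function ranks the outcome states. The routes, times, risks, and the particular utility formula are all invented for illustration.

```python
# Utility-based action selection: instead of a binary goal test, score
# outcome states with a utility function and pick the action whose
# predicted outcome scores highest.

def utility_based_action(actions, outcome_of, utility):
    return max(actions, key=lambda a: utility(outcome_of[a]))

# Both routes reach the destination (the goal is satisfied either way);
# utility trades off travel time against risk.
outcome_of = {
    "highway":  {"arrived": True, "minutes": 20, "risk": 0.3},
    "backroad": {"arrived": True, "minutes": 35, "risk": 0.1},
}

def utility(state):
    if not state["arrived"]:
        return 0.0
    return 100 - state["minutes"] - 50 * state["risk"]

print(utility_based_action(["highway", "backroad"], outcome_of, utility))  # highway
```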
35 Learning Agents
36 Learning Agents (Cont'd)
- Learning allows the agent to operate in initially unknown environments and to become more competent than its initial knowledge alone might allow.
- Four conceptual components of a learning agent:
- Learning element: uses feedback from the critic on how the agent is doing and determines how the performance element should be modified to do better
- Performance element: responsible for selecting external actions
- Critic: tells the learning element how well the agent is doing with respect to a fixed performance standard
- Problem generator: responsible for suggesting actions that will lead to new and informative experiences
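The four components above can be wired together in a small skeleton. Everything here is a toy stand-in: the task (learning which action each percept should map to), the percept/action names, and the fixed performance standard are all assumptions, not a real architecture.

```python
# Skeleton of a learning agent with its four conceptual components.

class LearningAgent:
    def __init__(self, actions):
        self.actions = actions
        self.rules = {}  # condition-action rules used by the performance element

    def performance_element(self, percept):
        # Responsible for selecting external actions.
        return self.rules.get(percept, self.actions[0])

    def critic(self, action, correct_action):
        # Scores behavior against a fixed performance standard.
        return 1.0 if action == correct_action else 0.0

    def learning_element(self, percept, action, feedback):
        # Uses the critic's feedback to modify the performance element.
        if feedback > 0:
            self.rules[percept] = action

    def problem_generator(self, step):
        # Suggests exploratory actions that yield informative experiences;
        # here it simply cycles through every action.
        return self.actions[step % len(self.actions)]

agent = LearningAgent(["Suck", "Right", "Left"])
standard = {"Dirty": "Suck", "CleanA": "Right"}  # fixed performance standard
for percept, correct in standard.items():
    for step in range(len(agent.actions)):
        action = agent.problem_generator(step)             # explore
        feedback = agent.critic(action, correct)           # evaluate
        agent.learning_element(percept, action, feedback)  # improve
print(agent.performance_element("Dirty"))  # Suck
```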