Title: CS451/CS551/EE565 ARTIFICIAL INTELLIGENCE
1 CS451/CS551/EE565 ARTIFICIAL INTELLIGENCE
- Intelligent Agents
- 8-30-2006
- Prof. Janice T. Searleman
- jets_at_clarkson.edu, jetsza
2 Outline
- Intelligent Agents (IAs)
- Environment types
- IA Behavior
- Rationality
- IA Structure
- PEAS (Performance measure, Environment, Actuators, Sensors)
- IA Types
- HW1 due Friday, 9/01/06, in class
- be prepared for class discussion
- Announcements: Activity Fair tonight, 7-8, Cheel
3 What is an (Intelligent) Agent?
- An agent is anything that can be viewed as
perceiving its environment through sensors and
acting upon that environment through actuators
- Agents include humans, robots, softbots, thermostats, etc.
- An agent can perceive its own actions, but not always their effects
4 Intelligent Agents and AI
- Example: the human mind as a network of thousands or millions of agents working in parallel. To produce real artificial intelligence, this school holds, we should build computer systems that also contain many agents and systems for arbitrating among the agents' competing results.
- Distributed decision-making and control
- Challenges:
- Action selection: what action to choose next
- Conflict resolution
5 Agent Types
- We can split agent research into two main strands:
- Distributed Artificial Intelligence (DAI) / Multi-Agent Systems (MAS) (1980-1990)
- A much broader notion of "agent" (1990s-present): interface, reactive, mobile, and information agents
6 Rational Agents
[Diagram: the agent receives percepts from the environment through sensors and acts on the environment through effectors; the question mark inside the agent box asks: how do we design the mapping from percepts to actions?]
7 Rational Agents
- Rational behavior: an agent should strive to do the right thing, based on what it can perceive and the actions it can perform.
- What is the right thing?
- The right action is the one that will cause the agent to be most successful
- This doesn't necessarily involve thinking (e.g., the blinking reflex), but thinking should be in the service of rational action
- The performance measure should be objective
- The performance measure specifies what is wanted in the environment rather than how the agent should behave
8 Rationality
- What is rational at a given time depends on four things:
- Performance measure
- Prior environment knowledge
- Actions the agent can perform
- Percept sequence to date (sensors)
- Defn: A rational agent chooses whichever action maximizes the expected value of the performance measure, given the percept sequence to date and prior environment knowledge (formalized in the sketch below).
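In symbols (a hedged formalization; the notation is mine, not from the slides), the chosen action is:

    a^{*} = \arg\max_{a \in \text{Actions}} \mathbb{E}\left[\, \text{performance measure} \mid \text{percept sequence to date},\ \text{prior knowledge},\ a \,\right]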
9 The Right Thing = The Rational Action
- Rational Action: the action that maximizes the expected value of the performance measure given the percept sequence to date
- Rational = Best?
- Rational = Optimal?
- Rational = Omniscient?
- Rational = Clairvoyant?
- Rational = Successful?
10 The Right Thing = The Rational Action
- Rational Action: the action that maximizes the expected value of the performance measure given the percept sequence to date
- Rational = Best
- At least, to the best of its current knowledge
- Rational ≠ Optimal (Perfect)
- Rationality maximizes expected performance, while perfection maximizes actual performance
- Rational ≠ Omniscient
- An omniscient agent knows the actual outcome of its actions
- Rational ≠ Clairvoyant
- Rational ≠ Successful
11 Rationality
- The proposed definition requires:
- Information gathering/exploration
- to maximize future rewards
- Learning from percepts
- extending prior knowledge
- Agent autonomy
- An agent is autonomous if its behavior is determined by its own experience (with the ability to learn and adapt)
12 Environments
- To design a rational agent, we must specify its task environment.
- PEAS: a way to describe the task environment
- Performance
- Environment
- Actuators
- Sensors
13 Agents and environments
- The agent function maps from percept histories to actions
- f: P* → A
- The agent program runs on the physical architecture to produce f (see the sketch below)
- agent = architecture + program
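A minimal Python sketch of this split (the names run and agent_program are illustrative, not from the slides): the agent function is the abstract mapping from percept histories to actions, while the agent program is the concrete code that produces one action per percept.

    # Names here (run, agent_program) are illustrative, not from the slides.

    def run(agent_program, percept_stream):
        """Feed percepts to an agent program one at a time and collect its actions.

        The overall mapping from percept histories to actions that this traces out
        is the agent function f: P* -> A; agent_program is the code that produces it.
        """
        return [agent_program(percept) for percept in percept_stream]

    # A trivial agent program: act by echoing the percept.
    actions = run(lambda percept: f"saw {percept}", ["ping", "pong"])
    print(actions)  # ['saw ping', 'saw pong']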
14 Example: Fully automated taxi
- PEAS description of the environment (recorded as a data structure in the sketch below):
- Performance
- Safety, destination, profits, legality, comfort, ...
- Environment
- Streets/freeways, other traffic, pedestrians, weather, ...
- Actuators
- Steering, accelerator, brake, horn, speaker/display, ...
- Sensors
- Video, sonar, speedometer, engine sensors, keyboard, GPS, ...
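Since a PEAS description is just structured data, one way to record it in Python (a hedged sketch; the class and field names are mine, the taxi values are the ones listed above):

    from dataclasses import dataclass
    from typing import List

    @dataclass
    class PEAS:
        """A PEAS task-environment description (Performance, Environment, Actuators, Sensors)."""
        performance: List[str]
        environment: List[str]
        actuators: List[str]
        sensors: List[str]

    automated_taxi = PEAS(
        performance=["safety", "destination", "profits", "legality", "comfort"],
        environment=["streets/freeways", "other traffic", "pedestrians", "weather"],
        actuators=["steering", "accelerator", "brake", "horn", "speaker/display"],
        sensors=["video", "sonar", "speedometer", "engine sensors", "keyboard", "GPS"],
    )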
15 Environment types
Fully vs. partially observable: an environment is fully observable when the sensors can detect all aspects that are relevant to the choice of action.
17 Environment types
Deterministic vs. stochastic: if the next environment state is completely determined by the current state and the executed action, then the environment is deterministic.
19 Environment types
Episodic vs. sequential: in an episodic environment the agent's experience can be divided into atomic steps in which the agent perceives and then performs a single action. The choice of action depends only on the episode itself.
21 Environment types
Static vs. dynamic: if the environment can change while the agent is choosing an action, the environment is dynamic. It is semi-dynamic if the agent's performance score changes even when the environment remains the same.
23 Environment types
Discrete vs. continuous: this distinction can be applied to the state of the environment, the way time is handled, and the percepts/actions of the agent.
25 Environment types
Single vs. multi-agent: does the environment contain other agents who are also maximizing some performance measure that depends on the current agent's actions?
27 Environment types
- The simplest environment is:
- fully observable, deterministic, episodic, static, discrete, and single-agent.
- Most real situations are:
- partially observable, stochastic, sequential, dynamic, continuous, and multi-agent (as in the classification sketch below).
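The six dimensions can be captured as boolean flags. A sketch (the class and names are illustrative), classifying the automated taxi, which has the "most real situations" profile listed above:

    from dataclasses import dataclass

    @dataclass
    class EnvironmentProperties:
        """Flags for the six environment dimensions discussed on the preceding slides."""
        fully_observable: bool
        deterministic: bool
        episodic: bool
        static: bool
        discrete: bool
        single_agent: bool

    # Taxi driving, like most real situations: every flag is False.
    taxi_environment = EnvironmentProperties(
        fully_observable=False, deterministic=False, episodic=False,
        static=False, discrete=False, single_agent=False,
    )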
28 Agent types
- How does the inside of the agent work?
- Agent = architecture + program
- All agents have the same skeleton:
- Input: current percepts
- Output: action
- Program: manipulates input to produce output
- Note the difference with the agent function.
29 Agent types
- function TABLE-DRIVEN-AGENT(percept) returns an action
- static: percepts, a sequence, initially empty
- table, a table of actions, indexed by percept sequences
- append percept to the end of percepts
- action ← LOOKUP(percepts, table)
- return action
This approach is doomed to failure: the table would need an entry for every possible percept sequence (Python sketch below).
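For concreteness, a runnable Python version of the table-driven program (a sketch under the assumption that the table is a dict keyed by the full percept sequence, so LOOKUP becomes a dict lookup):

    def make_table_driven_agent(table):
        """Sketch of TABLE-DRIVEN-AGENT: `table` maps tuples of percepts to actions."""
        percepts = []                          # the percept sequence seen so far

        def program(percept):
            percepts.append(percept)           # append percept to the end of percepts
            return table.get(tuple(percepts))  # action <- LOOKUP(percepts, table)

        return program

    # Hypothetical usage: percepts are plain strings, keys are percept sequences.
    agent = make_table_driven_agent({("Dirty",): "Suck", ("Dirty", "Clean"): "Right"})
    print(agent("Dirty"), agent("Clean"))  # Suck Right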
30 Agent types
- Four basic kinds of agent programs will be discussed:
- Simple reflex agents
- Model-based reflex agents
- Goal-based agents
- Utility-based agents
- All these can be turned into learning agents.
31 The vacuum-cleaner world
- function REFLEX-VACUUM-AGENT([location, status]) returns an action
- if status = Dirty then return Suck
- else if location = A then return Right
- else if location = B then return Left
- Reduction from 4^T to 4 entries (Python sketch below)
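The vacuum-world reflex agent translates almost line for line into Python (a sketch; percepts are (location, status) pairs as in the pseudocode):

    def reflex_vacuum_agent(percept):
        """REFLEX-VACUUM-AGENT: percept is a (location, status) pair."""
        location, status = percept
        if status == "Dirty":
            return "Suck"
        elif location == "A":
            return "Right"
        elif location == "B":
            return "Left"

    print(reflex_vacuum_agent(("A", "Clean")))  # Right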
32 Agent types: simple reflex
- function SIMPLE-REFLEX-AGENT(percept) returns an action
- static: rules, a set of condition-action rules
- state ← INTERPRET-INPUT(percept)
- rule ← RULE-MATCH(state, rules)
- action ← RULE-ACTION[rule]
- return action
- Will only work if the environment is fully observable; otherwise infinite loops may occur (a runnable sketch follows).
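A runnable Python sketch of the general pattern, with INTERPRET-INPUT and the rule set passed in explicitly since the slide leaves them abstract (names are illustrative):

    def make_simple_reflex_agent(rules, interpret_input):
        """SIMPLE-REFLEX-AGENT sketch: `rules` is a list of (condition, action) pairs,
        where each condition is a predicate over the interpreted state."""
        def program(percept):
            state = interpret_input(percept)   # state <- INTERPRET-INPUT(percept)
            for condition, action in rules:    # rule <- RULE-MATCH(state, rules)
                if condition(state):
                    return action              # action <- RULE-ACTION[rule]
            return None                        # no matching rule

        return program

    # Rebuilding the vacuum agent: here the percept itself already is the state.
    vacuum_rules = [
        (lambda s: s[1] == "Dirty", "Suck"),
        (lambda s: s[0] == "A", "Right"),
        (lambda s: s[0] == "B", "Left"),
    ]
    vacuum_agent = make_simple_reflex_agent(vacuum_rules, interpret_input=lambda p: p)
    print(vacuum_agent(("B", "Clean")))  # Left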
33 Summary
- Intelligent Agents
- Anything that can be viewed as perceiving its environment through sensors and acting upon that environment through its effectors to maximize progress towards its goals.
- PAGE (Percepts, Actions, Goals, Environment)
- Described as a perception (sequence) to action mapping: f: P* → A
- Using a look-up table, closed form, etc.
- Rational Action: the action that maximizes the expected value of the performance measure given the percept sequence to date