Title: Artificial Intelligence
Artificial Intelligence
Overview
- Agent and Environment
- Rationality
- World Description (PEAS)
- Task Environment Types
Agent and Environment
Rational Agent
- Agent
  - An entity that perceives and acts, e.g. a human, a robot, a thermostat, a smoke detector, etc.
  - The agent function maps percept sequences to actions: f : P* → A
  - The agent function is implemented by an agent program, running on the agent architecture
- Rational Agent
  - For any given set of environments and actions, we seek the agent (or class of agents) with the best performance
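As a hedged illustration (not from the slides), the split between the agent function f : P* → A and the agent program that implements it can be sketched in Python; all names below are hypothetical.

from typing import List

Percept = str   # e.g. "A,Dirty"
Action = str    # e.g. "Suck"

def agent_function(percept_sequence: List[Percept]) -> Action:
    """The abstract mapping f : P* -> A from the whole percept history to an action."""
    latest = percept_sequence[-1] if percept_sequence else ""
    return "NoOp" if latest == "" else "Act"   # placeholder policy

percept_history: List[Percept] = []

def agent_program(percept: Percept) -> Action:
    """The agent program: called once per step by the architecture with the new percept."""
    percept_history.append(percept)
    return agent_function(percept_history)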
The Vacuum-Cleaner World
- Environment: squares A and B
- Percepts: location and status, e.g., [A, Dirty]
- Actions: Left, Right, Suck, NoOp
Vacuum-Cleaner Agent
- Mapping percepts to actions

function Vacuum-Cleaner-Agent([L, S]) returns an action
  if S = Dirty then return Suck
  else if L = A then return Right
  else if L = B then return Left
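A minimal runnable sketch of this reflex agent in Python; the function name and string labels are this editor's choices, not the slide's:

def vacuum_cleaner_agent(location: str, status: str) -> str:
    """Reflex agent for the two-square vacuum world."""
    if status == "Dirty":
        return "Suck"
    if location == "A":
        return "Right"
    if location == "B":
        return "Left"
    return "NoOp"

print(vacuum_cleaner_agent("A", "Dirty"))   # Suck
print(vacuum_cleaner_agent("A", "Clean"))   # Right
print(vacuum_cleaner_agent("B", "Clean"))   # Left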
Rationality
Rational Agent
- An agent is rational if it always does the right thing
  - The most successful agent
- Right and wrong are decided by the designer of the agent
  - This needs a performance measure: the criteria that determine how successful an agent is
  - Imposed by an authority and measured in the long run
- The performance measure should be objective
  - e.g. the amount of dirt cleaned within a certain time
  - e.g. how clean the floor is at each time step
- The performance measure should be designed according to what is wanted in the environment, rather than how the agent should behave.
Rational Agent
- Rationality depends on four things:
  - Performance measure
  - Prior knowledge of the environment
  - Actions
  - Percept sequence to date
- Definition
  - For each possible percept sequence, a rational agent should select an action that is expected to maximize its performance measure, given the evidence provided by the percept sequence and whatever built-in knowledge the agent has.
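One compact, informal way to write this definition (the symbols are this editor's shorthand, not the slide's: P_{1:t} is the percept sequence to date, K the built-in knowledge, U the performance measure):

a^* = \arg\max_{a \in A} \mathbb{E}\left[ U \mid P_{1:t},\, K,\, a \right]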
Performance Measure
- 100 points for each piece of dirt vacuumed up
- Minus 1 point for each action taken
- Minus 1000 points for dumping the dirt in your neighbor's backyard
A rational agent maximizes the points, given the percept sequence.
Rational does not mean omniscient, clairvoyant, or perfect.
Rationality requires exploration, learning, and autonomy.
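To make the arithmetic concrete, a small Python sketch scoring one run under this example point scheme (the function and argument names are illustrative):

def episode_score(dirt_vacuumed: int, actions_taken: int, dirt_dumped: int) -> int:
    """+100 per piece of dirt vacuumed, -1 per action taken, -1000 per dirt dump."""
    return 100 * dirt_vacuumed - actions_taken - 1000 * dirt_dumped

print(episode_score(dirt_vacuumed=2, actions_taken=8, dirt_dumped=0))   # 192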
Environment: World Description
Task Environment
- In order to design a rational agent, we must specify its task environment
  - PEAS description of the environment
- To design a rational taxi agent (structured in the sketch after this list):
  - Performance measure: safety, destination, profits, legality, comfort, ...
  - Environment: US streets/freeways, traffic, pedestrians, weather, ...
  - Actuators: steering, accelerator, brake, horn, speaker/display, ...
  - Sensors: video, accelerometers, gauges, engine sensors, keyboard, GPS, ...
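A hedged Python sketch structuring this PEAS description as data; the class and field names are this editor's invention, not the slide's notation:

from dataclasses import dataclass, field
from typing import List

@dataclass
class PEAS:
    """PEAS description of a task environment."""
    performance_measure: List[str] = field(default_factory=list)
    environment: List[str] = field(default_factory=list)
    actuators: List[str] = field(default_factory=list)
    sensors: List[str] = field(default_factory=list)

taxi_peas = PEAS(
    performance_measure=["safety", "destination", "profits", "legality", "comfort"],
    environment=["US streets/freeways", "traffic", "pedestrians", "weather"],
    actuators=["steering", "accelerator", "brake", "horn", "speaker/display"],
    sensors=["video", "accelerometers", "gauges", "engine sensors", "keyboard", "GPS"],
)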
Task Environment Types
Fully vs. partially observable: an environment is fully observable when the sensors can detect all aspects that are relevant to the choice of action.
Deterministic vs. stochastic: if the next environment state is completely determined by the current state and the executed action, then the environment is deterministic.
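In symbols (this editor's notation, not the slide's): a deterministic environment has a transition function f such that

s_{t+1} = f(s_t, a_t)

where s_t is the current state and a_t the executed action; in a stochastic environment the next state is instead drawn from a distribution P(s_{t+1} | s_t, a_t).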
Episodic vs. sequential: in an episodic environment the agent's experience can be divided into atomic episodes in which the agent perceives and then performs a single action. The choice of action depends only on the episode itself.
Static vs. dynamic: if the environment can change while the agent is choosing an action, the environment is dynamic. It is semi-dynamic if the agent's performance score changes with time even though the environment itself remains the same.
Discrete vs. continuous: this distinction can be applied to the state of the environment, to the way time is handled, and to the percepts and actions of the agent.
Single-agent vs. multi-agent: does the environment contain other agents that are also maximizing some performance measure that depends on the current agent's actions?
- The simplest environment is fully observable, deterministic, episodic, static, discrete, and single-agent.
- Most real situations are partially observable, stochastic, sequential, dynamic, continuous, and multi-agent.
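A small hedged sketch recording these two extremes as data; the class and field names are illustrative only:

from dataclasses import dataclass

@dataclass
class TaskEnvironmentType:
    observable: str    # "fully" or "partially"
    dynamics: str      # "deterministic" or "stochastic"
    episodes: str      # "episodic" or "sequential"
    change: str        # "static" or "dynamic"
    granularity: str   # "discrete" or "continuous"
    agents: str        # "single-agent" or "multi-agent"

simplest = TaskEnvironmentType("fully", "deterministic", "episodic",
                               "static", "discrete", "single-agent")
typical_real_world = TaskEnvironmentType("partially", "stochastic", "sequential",
                                         "dynamic", "continuous", "multi-agent")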
Environment Classes
- Environment class
  - The agent must work in different environments
  - A chess program should play against a wide collection of humans and other programs
  - Designing for a particular opponent can exploit specific weaknesses, but is not good for general play
  - The performance of an agent is averaged over the environment class (see the sketch after this list)
  - The agent is not allowed to consult the environment program.
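A minimal hedged sketch of averaging an agent's performance over an environment class; every name here is a hypothetical placeholder:

from typing import Callable, Iterable

def average_performance(run_episode: Callable[[object], float],
                        environment_class: Iterable[object]) -> float:
    """Run the agent once in each environment of the class and average the scores."""
    scores = [run_episode(env) for env in environment_class]
    return sum(scores) / len(scores)

# Usage idea: average_performance(lambda opponent: play_chess(my_agent, opponent), chess_opponents)
# where play_chess, my_agent, and chess_opponents are hypothetical.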
Summary
- Agent and Environment
- Rationality
- World Description
- Environment Types