Title: Intelligent Agents and Search Problems
Intelligent Agents and Search Problems
Outline
- Intelligent Agents
- Agents and environments
- Rationality
- PEAS (Performance measure, Environment, Actuators, Sensors)
- Environment types
- Agent types
- Search Problems
Agents
- An agent is anything that can be viewed as
perceiving its environment through sensors and
acting upon that environment through actuators
- Human agent: eyes, ears, and other organs for sensors; hands, legs, mouth, and other body parts for actuators
- Robotic agent: cameras and infrared range finders for sensors; various motors for actuators
Agents and environments
- The agent function maps from percept histories to
actions
- f : P* → A
- The agent program runs on the physical
architecture to produce f
- agent = architecture + program
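As a minimal sketch of how an agent program might realize the agent function f as a lookup from the percept history to an action (the function and table names below are illustrative, not a prescribed API):

```python
def make_table_driven_agent(table):
    """Sketch: an agent program backed by an explicit percept-history table."""
    percepts = []                          # the percept history seen so far

    def program(percept):
        percepts.append(percept)
        # f : P* -> A, realized here as a (partial) lookup table
        return table.get(tuple(percepts), "NoOp")

    return program
```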
Vacuum-cleaner world
- Percepts: location and contents, e.g., [A, Dirty]
- Actions: Left, Right, Suck, NoOp
A vacuum-cleaner agent
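The original slide tabulates the agent function as a figure; as a stand-in, here is a minimal sketch of a reflex vacuum-cleaner agent program for the two-square world (squares A and B follow the percepts above; the code is illustrative):

```python
def reflex_vacuum_agent(percept):
    """Reflex agent for the two-square vacuum world.

    percept is a (location, status) pair, e.g. ("A", "Dirty").
    """
    location, status = percept
    if status == "Dirty":
        return "Suck"
    elif location == "A":
        return "Right"
    else:
        return "Left"

# Example: reflex_vacuum_agent(("A", "Dirty")) returns "Suck"
```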
Rational agents
- An agent should strive to "do the right thing", based on what it can perceive and the actions it can perform. The right action is the one that will cause the agent to be most successful.
- Performance measure: an objective criterion for success of an agent's behavior
- E.g., the performance measure of a vacuum-cleaner agent could be the amount of dirt cleaned up, amount of time taken, amount of electricity consumed, amount of noise generated, etc.
Rational agents
- Rational agent: for each possible percept sequence, a rational agent should select an action that is expected to maximize its performance measure, given the evidence provided by the percept sequence and whatever built-in knowledge the agent has.
PEAS
- PEAS: Performance measure, Environment, Actuators, Sensors
- Must first specify the setting for intelligent agent design
- Consider, e.g., the task of designing an
automated taxi driver
- Performance measure
- Environment
- Actuators
- Sensors
PEAS
- Must first specify the setting for intelligent agent design
- Consider, e.g., the task of designing an automated taxi driver
- Performance measure: Safe, fast, legal, comfortable trip, maximize profits
- Environment: Roads, other traffic, pedestrians, customers
- Actuators: Steering wheel, accelerator, brake, signal, horn
- Sensors: Cameras, sonar, speedometer, GPS, odometer, engine sensors, keyboard
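To make a PEAS description concrete in code form, here is a small illustrative sketch (the class and field names are my own, not a standard API), filled in with the taxi example above:

```python
from dataclasses import dataclass, field

@dataclass
class PEAS:
    """Illustrative record for a PEAS task-environment description."""
    performance_measure: list = field(default_factory=list)
    environment: list = field(default_factory=list)
    actuators: list = field(default_factory=list)
    sensors: list = field(default_factory=list)

automated_taxi = PEAS(
    performance_measure=["safe", "fast", "legal", "comfortable trip", "maximize profits"],
    environment=["roads", "other traffic", "pedestrians", "customers"],
    actuators=["steering wheel", "accelerator", "brake", "signal", "horn"],
    sensors=["cameras", "sonar", "speedometer", "GPS", "odometer", "engine sensors", "keyboard"],
)
```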
PEAS
- Agent: Medical diagnosis system
- Performance measure: Healthy patient, minimize costs, lawsuits
- Environment: Patient, hospital, staff
- Actuators: Screen display (questions, tests, diagnoses, treatments, referrals)
- Sensors: Keyboard (entry of symptoms, findings, patient's answers)
PEAS
- Agent: Part-picking robot
- Performance measure: Percentage of parts in correct bins
- Environment: Conveyor belt with parts, bins
- Actuators: Jointed arm and hand
- Sensors: Camera, joint angle sensors
PEAS
- Agent: Interactive English tutor
- Performance measure: Maximize student's score on test
- Environment: Set of students
- Actuators: Screen display (exercises, suggestions, corrections)
- Sensors: Keyboard
Environment types
- Fully observable (vs. partially observable): An agent's sensors give it access to the complete state of the environment at each point in time.
- Deterministic (vs. stochastic): The next state of the environment is completely determined by the current state and the action executed by the agent. (If the environment is deterministic except for the actions of other agents, then the environment is strategic.)
- Episodic (vs. sequential): The agent's experience is divided into atomic "episodes" (each episode consists of the agent perceiving and then performing a single action), and the choice of action in each episode depends only on the episode itself.
Environment types
- Static (vs. dynamic): The environment is unchanged while an agent is deliberating. (The environment is semidynamic if the environment itself does not change with the passage of time but the agent's performance score does.)
- Discrete (vs. continuous): A limited number of distinct, clearly defined percepts and actions.
- Single agent (vs. multiagent): An agent operating by itself in an environment.
Environment types

                    Chess with a clock   Chess without a clock   Taxi driving
Fully observable    Yes                  Yes                     No
Deterministic       Strategic            Strategic               No
Episodic            No                   No                      No
Static              Semi                 Yes                     No
Discrete            Yes                  Yes                     No
Single agent        No                   No                      No
- The environment type largely determines the agent
design
- The real world is (of course) partially
observable, stochastic, sequential, dynamic,
continuous, multi-agent
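As a lightweight, purely illustrative way to carry these classifications in code and let them drive design decisions (the names below are my own):

```python
# Illustrative encoding of the environment-type table above.
ENVIRONMENTS = {
    "chess with a clock":    {"fully_observable": True,  "deterministic": "strategic",
                              "episodic": False, "static": "semi", "discrete": True, "single_agent": False},
    "chess without a clock": {"fully_observable": True,  "deterministic": "strategic",
                              "episodic": False, "static": True,  "discrete": True, "single_agent": False},
    "taxi driving":          {"fully_observable": False, "deterministic": False,
                              "episodic": False, "static": False, "discrete": False, "single_agent": False},
}

def needs_internal_state(env):
    """A partially observable environment generally calls for an agent that keeps internal state."""
    return not env["fully_observable"]
```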
Agent types
- Four basic types, in order of increasing generality:
- Simple reflex agents
- Agents that keep track of the world
- Goal-based agents
- Utility-based agents
Simple reflex agents
Agents that keep track of the world (agents with internal states)
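The original slide shows this architecture as a diagram; as a stand-in, a minimal sketch of a reflex agent that maintains an internal state (the update function and rule table are illustrative placeholders):

```python
def make_model_based_agent(update_state, rules):
    """Sketch of a reflex agent with internal state.

    update_state(state, last_action, percept) -> new state estimate
    rules maps a state description to an action.
    """
    state = None
    last_action = None

    def program(percept):
        nonlocal state, last_action
        state = update_state(state, last_action, percept)  # fold the new percept into the model
        last_action = rules.get(state, "NoOp")              # condition-action lookup
        return last_action

    return program
```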
Goal-based agents
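The goal-based architecture is likewise shown as a diagram on the original slide; a rough sketch of the idea, assuming some search routine that returns a plan (a list of actions), with all names illustrative:

```python
def make_goal_based_agent(update_state, goal_test, search):
    """Sketch of a goal-based agent: it plans a sequence of actions that reaches a goal state.

    search(state, goal_test) is assumed to return a list of actions (a plan) or None.
    """
    state = None
    plan = []

    def program(percept):
        nonlocal state, plan
        state = update_state(state, percept)
        if not plan:                        # no plan left: search for one that satisfies the goal
            plan = search(state, goal_test) or ["NoOp"]
        return plan.pop(0)                  # execute the plan one action at a time

    return program
```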
Utility-based agents
Learning agents
Outline
- Intelligent Agents
- Agents and environments
- Rationality
- PEAS (Performance measure, Environment, Actuators, Sensors)
- Environment types
- Agent types
- Search Problems
Search and AI
- Search methods are ubiquitous in AI systems. They are often the backbone of both core and peripheral modules.
- An autonomous robot uses search methods
- to decide which actions to take and which sensing operations to perform,
- to quickly anticipate collisions,
- to plan trajectories,
- to interpret large numerical datasets provided by sensors into compact symbolic representations,
- to diagnose why something did not happen as expected,
- etc.
- Many searches may occur concurrently and
sequentially
Applications
- Search plays a key role in many applications, e.g.:
- Route finding: airline travel, networks
- Package/mail distribution
- Pipe routing, VLSI routing
- Comparison and classification of protein folds
- Pharmaceutical drug design
- Design of protein-like molecules
- Video games
Example: the 8-Puzzle
State: any arrangement of 8 numbered tiles and an empty tile on a 3x3 board
8-Puzzle Successor Function
The successor function is knowledge about the 8-puzzle game, but it does not tell us which outcome to use, nor to which state of the board to apply it.
Search is about the exploration of alternatives
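As an illustration (not taken from the slides themselves), a successor function for the 8-puzzle can be written by sliding the blank; states are 9-tuples read row by row, with 0 for the empty tile:

```python
def successors(state):
    """Successor function for the 8-puzzle.

    state is a tuple of 9 entries, row by row, with 0 marking the empty tile.
    Returns a list of (move, new_state) pairs, where move is the direction
    the blank slides.
    """
    moves = []
    blank = state.index(0)
    row, col = divmod(blank, 3)
    candidates = {"Up": (row - 1, col), "Down": (row + 1, col),
                  "Left": (row, col - 1), "Right": (row, col + 1)}
    for move, (r, c) in candidates.items():
        if 0 <= r < 3 and 0 <= c < 3:
            swap = 3 * r + c
            new_state = list(state)
            new_state[blank], new_state[swap] = new_state[swap], new_state[blank]
            moves.append((move, tuple(new_state)))
    return moves

# Example: successors((1, 2, 3, 4, 0, 5, 6, 7, 8)) yields four successor states.
```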
- Across history, puzzles and games requiring the exploration of alternatives have been considered a challenge for human intelligence
- Chess originated in Persia and India about 4000 years ago
- Checkers appears in 3600-year-old Egyptian paintings
- Go originated in China over 3000 years ago
So, it's not surprising that AI uses games to design and test algorithms
15-Puzzle
- Introduced in 1878 by Sam Loyd, who dubbed himself "America's greatest puzzle-expert"
15-Puzzle
- Sam Loyd offered $1,000 of his own money to the first person who would solve the following problem:
- But no one ever won the prize!
Stating a Problem as a Search Problem
- State space S
- Successor function: x ∈ S → SUCCESSORS(x) ∈ 2^S
- Initial state s0
- Goal test: x ∈ S → GOAL?(x) = T or F
- Arc cost
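These ingredients map directly onto a small interface; here is an illustrative sketch (the class and field names are my own, not a prescribed API):

```python
class SearchProblem:
    """Illustrative bundle of the ingredients listed above."""

    def __init__(self, initial_state, successors, goal_test, arc_cost=lambda s, a, s2: 1):
        self.initial_state = initial_state   # s0
        self.successors = successors         # x -> collection of (action, successor) pairs
        self.goal_test = goal_test           # x -> True / False
        self.arc_cost = arc_cost             # cost attached to each arc (uniform by default)

# Example (8-puzzle, using the successor function sketched earlier):
# SearchProblem((1, 2, 3, 4, 0, 5, 6, 7, 8), successors,
#               goal_test=lambda s: s == (1, 2, 3, 4, 5, 6, 7, 8, 0))
```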
State Graph
- Each state is represented by a distinct node
- An arc (or edge) connects a node s to a node s' if s' ∈ SUCCESSORS(s)
- The state graph may contain more than one connected component
Solution to the Search Problem
- A solution is a path connecting the initial node I to a goal node G (any one)
Solution to the Search Problem
- A solution is a path connecting the initial node I to a goal node G (any one)
- The cost of a path is the sum of the arc costs along this path
- An optimal solution is a solution path of minimum cost
- There might be no solution!
How big is the state space of the (n^2 - 1)-puzzle?
- 8-puzzle: 9! = 362,880 states
- 15-puzzle: 16! ≈ 2.09 x 10^13 states
- 24-puzzle: 25! ≈ 10^25 states
- But only half of these states are reachable from any given state (but you may not know that in advance)
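These counts come straight from counting permutations of the n^2 cells; a quick, illustrative check:

```python
import math

# The (n^2 - 1)-puzzle has one tile arrangement per permutation of the n^2 cells.
for name, n in [("8-puzzle", 3), ("15-puzzle", 4), ("24-puzzle", 5)]:
    states = math.factorial(n * n)
    print(f"{name}: {n * n}! = {states:,} states ({states // 2:,} reachable from a given state)")
```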
Permutation Inversions
- Let the goal be the configuration shown in the figure
- A tile j appears after a tile i if either j appears on the same row as i, to the right of i, or on another row below the row of i.
- For all i = 1, 2, ..., 15, let ni be the number of tiles j < i that appear after tile i (permutation inversions)
- N = n2 + n3 + ... + n15 + row number of the empty tile
For the state in the figure: n2 = 0, n3 = 0, n4 = 0, n5 = 0, n6 = 0, n7 = 1, n8 = 1, n9 = 1, n10 = 4, n11 = 0, n12 = 0, n13 = 0, n14 = 0, n15 = 0
⇒ N = 7 + 4 = 11
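A direct transcription of this definition into code (illustrative; states are 16-tuples read row by row, with 0 for the empty square):

```python
def inversion_parity_number(state):
    """N = (number of permutation inversions) + (row of the empty square, counted from 1).

    state is a tuple of 16 entries, row by row, with 0 marking the empty square.
    """
    tiles = [t for t in state if t != 0]            # tile sequence in reading order
    inversions = sum(1 for a in range(len(tiles))
                       for b in range(a + 1, len(tiles))
                       if tiles[b] < tiles[a])      # a tile j < i appearing after tile i
    empty_row = state.index(0) // 4 + 1
    return inversions + empty_row
```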
- Proposition: (N mod 2) is invariant under any legal move of the empty tile
- Proof:
- Any horizontal move of the empty tile leaves N unchanged
- A vertical move of the empty tile changes N by an even increment (±1 ±1 ±1 ±1)
Example (figure): N(s') = N(s) + 3 + 1
- Proposition: (N mod 2) is invariant under any legal move of the empty tile
- ⇒ For a goal state g to be reachable from a state s, a necessary condition is that N(g) and N(s) have the same parity
- It can be shown that this is also a sufficient condition
- ⇒ The state graph consists of two connected components of equal size
[Figure: two 15-puzzle states, with N = 4 for the first and N = 5 for the second]
- So, the second state is not reachable from the first, and Sam Loyd took no risk with his money...
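Putting the pieces together (illustrative, and assuming the two states on this slide are the standard goal and Sam Loyd's version with tiles 14 and 15 swapped), the parity test confirms that the two boards lie in different components; it reuses inversion_parity_number from the sketch above:

```python
def same_component(s, g):
    """Necessary and sufficient condition for g to be reachable from s: N(s) and N(g) have equal parity."""
    return inversion_parity_number(s) % 2 == inversion_parity_number(g) % 2

goal = (1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 0)
loyd = (1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 15, 14, 0)   # 14 and 15 swapped

print(inversion_parity_number(goal))   # 4
print(inversion_parity_number(loyd))   # 5
print(same_component(goal, loyd))      # False: Sam Loyd's money was safe
```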
What is the Actual State Space?
- a) The set of all states? E.g., a set of 16! states for the 15-puzzle
- b) The set of all states reachable from a given initial state? E.g., a set of 16!/2 states for the 15-puzzle
- In general, the answer is a), because one does not know in advance which states are reachable
- But a fast test determining whether a state is reachable from another is very useful, as search techniques are often inefficient when a problem has no solution
Searching the State Space
- It is often not feasible (or too expensive) to
build a complete representation of the state
graph
8-, 15-, 24-Puzzles
8-puzzle: 362,880 states
15-puzzle: 2.09 x 10^13 states
24-puzzle: 10^25 states
At 100 million states/sec
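The point of the 100 million states/sec figure is how long exhaustive enumeration would take; a rough, illustrative back-of-the-envelope calculation:

```python
RATE = 1e8  # states examined per second (the slide's 100 million states/sec)

for name, states in [("8-puzzle", 362_880), ("15-puzzle", 2.09e13), ("24-puzzle", 1e25)]:
    print(f"{name}: {states / RATE:.3g} seconds to enumerate every state")
# Roughly: 8-puzzle in milliseconds, 15-puzzle in a couple of days,
# 24-puzzle in about 10**17 seconds (billions of years).
```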
Searching the State Space
- Often it is not feasible (or too expensive) to build a complete representation of the state graph
- A problem solver must construct a solution by exploring a small portion of the graph
Searching the State Space
Search tree (figure, built up incrementally over several slides)
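The figures grow a search tree node by node; a compact, illustrative sketch of that process as breadth-first tree search over a problem object like the SearchProblem sketch above:

```python
from collections import deque

def breadth_first_search(problem):
    """Grow the search tree in breadth-first order; return a list of actions to a goal, or None.

    problem is assumed to expose initial_state, successors(state), and goal_test(state),
    as in the SearchProblem sketch above.
    """
    frontier = deque([(problem.initial_state, [])])   # (state, actions taken so far)
    visited = {problem.initial_state}                 # avoid revisiting explored states
    while frontier:
        state, path = frontier.popleft()
        if problem.goal_test(state):
            return path
        for action, succ in problem.successors(state):
            if succ not in visited:
                visited.add(succ)
                frontier.append((succ, path + [action]))
    return None   # the goal is unreachable from the initial state
```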
Simple Problem-Solving-Agent Algorithm
- I ← sense/read initial state
- GOAL? ← select/read goal test
- Succ ← select/read successor function
- solution ← search(I, GOAL?, Succ)
- perform(solution)
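A hedged, direct translation of this pseudocode into Python, wired to the sketches above (SearchProblem and the search routine are the earlier illustrative sketches; the function names mirror the pseudocode and are placeholders, not a fixed API):

```python
def simple_problem_solving_agent(sense_initial_state, goal_test, successors, search, perform):
    """Run the five steps of the slide's algorithm with caller-supplied components."""
    initial = sense_initial_state()                   # I <- sense/read initial state
    problem = SearchProblem(initial, successors, goal_test)
    solution = search(problem)                        # solution <- search(I, GOAL?, Succ)
    if solution is not None:
        perform(solution)                             # execute the plan
    return solution
```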