Title: Problem Solving Using Search
1 Problem Solving Using Search
- Shyh-Kang Jeng
- Department of Electrical Engineering / Graduate Institute of Communication Engineering
- National Taiwan University
2 References
- J. P. Bigus and J. Bigus, Constructing Intelligent Agents with Java, 2nd ed., Wiley Computer Publishing, 2001
- S. Russell and P. Norvig, Artificial Intelligence: A Modern Approach, Englewood Cliffs, NJ: Prentice Hall, 1995
3 Intelligent Agents (1)
- A computational system which
- Is long-lived
- Has goals, sensors, and effectors
- Decides autonomously which actions to take in the
current situation to maximize progress toward its
(time-varying) goals
4 Intelligent Agents (2)
(Figure: the agent receives percepts from the environment through its sensors, decides what to do, and acts on the environment through its effectors)
5 Goal-Based Agents
(Figure: goal-based agent architecture; sensors update the state and environment model, a decision maker uses the goals to choose an action sequence, and effectors carry the actions out in the environment)
6 Problem-Solving Agent
- A kind of goal-based agent
- Given the current environment situation, decides what to do by finding sequences of actions that lead to desirable states
- Goal formulation
- Problem formulation
- Search
- Solution
7 Problem Definition
- A problem is a collection of information that the agent will use to decide what to do
- Elements of a problem
- Initial state
- Set of possible actions available (operators)
- Goal test
- Path cost
- State space
- Path
8 Route Finding Problem
9 Search Tree for a Problem
10 Breadth-First Search Concept
11 Depth-First Search Concept
12 Search Strategies
- Brute-force, uninformed, or blind search
- Heuristic, informed, or directed search
- Optimal search
- Complete search
- Complexity
- Time
- Space
13 General Search Algorithm
- Initialize the search tree using the initial state of the problem
- Loop
- If there are no candidates for expansion, return failure
- Choose a leaf node for expansion according to the strategy
- If the node contains a goal state, return the corresponding solution
- Else expand the node and add the resulting nodes to the search tree
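To make the loop concrete, here is a minimal Java sketch (Java being the language of the Bigus and Bigus reference). The adjacency map of state names and the method names are assumptions for illustration, not an API from the slides; the "strategy" is simply the order in which the fringe hands states back, and, as on the slide, repeated states are not handled yet.

```java
import java.util.*;

// A minimal sketch of the general search loop over an assumed adjacency map of
// state names. The strategy lives entirely in the fringe: an ArrayDeque gives
// first-in-first-out (breadth-first) behaviour, a PriorityQueue gives best-first.
public class GeneralSearch {
    public static boolean search(Map<String, List<String>> graph,
                                 String initial, String goal,
                                 Queue<String> fringe) {
        fringe.add(initial);                              // initialize with the initial state
        while (!fringe.isEmpty()) {                       // no candidates for expansion -> failure
            String state = fringe.remove();               // choose a leaf node according to strategy
            if (state.equals(goal)) return true;          // goal test -> solution
            for (String child : graph.getOrDefault(state, List.of()))
                fringe.add(child);                        // expand and add the resulting nodes
        }
        return false;
    }
}
```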
14 Problem-Solving Performance
- Does it find a solution at all?
- Is it a good solution?
- Search cost
- Time
- Memory
- Total cost of the search
- Path cost
- Search cost
15 Searching over a Graph
16 Breadth-First Search Algorithm
- Create a queue and add the first node to it
- Loop
- If the queue is empty, quit
- Remove the first node from the queue
- If the node contains the goal state, then exit with the node as the solution
- For each child of the current node, add the new state to the back of the queue
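A minimal Java sketch of this algorithm, assuming the map is given as an adjacency list of city names (the actual city map from the demo is not reproduced here). Unlike the trace on the next slide, which follows the slide's version literally and therefore contains repeated cities, this sketch already skips states it has seen (the refinement of slide 18) so that the path can be reconstructed safely from the parent links.

```java
import java.util.*;

// Breadth-first search over an assumed adjacency map of city names; returns the
// path from start to goal, or null if no solution is found.
public class BreadthFirstSearch {
    public static List<String> bfs(Map<String, List<String>> graph,
                                   String start, String goal) {
        Deque<String> queue = new ArrayDeque<>();      // the FIFO queue of the algorithm above
        Map<String, String> parent = new HashMap<>();  // remembers how each state was first reached
        queue.add(start);
        parent.put(start, null);
        while (!queue.isEmpty()) {
            String state = queue.removeFirst();        // remove the first node from the queue
            if (state.equals(goal))                    // goal test
                return pathTo(goal, parent);
            for (String child : graph.getOrDefault(state, List.of()))
                if (!parent.containsKey(child)) {      // skip states already seen (slide 18)
                    parent.put(child, state);
                    queue.addLast(child);              // new states go to the BACK of the queue
                }
        }
        return null;                                   // queue empty: quit without a solution
    }

    private static List<String> pathTo(String goal, Map<String, String> parent) {
        List<String> path = new ArrayList<>();
        for (String s = goal; s != null; s = parent.get(s)) path.add(s);
        Collections.reverse(path);
        return path;
    }
}
```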
17 Trace of Breadth-First Search
- Rochester
- Sioux Falls, Minneapolis, LaCrosse, Dubuque
- Minneapolis, LaCrosse, Dubuque, Fargo, Rochester
- LaCrosse, Dubuque, Fargo, Rochester, St. Cloud, Wausau, Duluth, LaCrosse, Rochester
- Dubuque, Fargo, Rochester, St. Cloud, Wausau, Duluth, LaCrosse, Rochester, Minneapolis, Green Bay, Madison, Dubuque, Rochester
- Fargo, Rochester, St. Cloud, Wausau, Duluth, LaCrosse, Rochester, Minneapolis, Green Bay, Madison, Dubuque, Rochester, Rochester, LaCrosse, Rockford
18 Avoiding Repeated States
- Use a flag in the node data structure to avoid expanding an already-tested node (see the sketch below)
- This cuts down both the time and space complexity
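A small sketch of the flag idea, assuming the graph nodes are represented as objects; the class and field names are illustrative only. A search loop would test and set the flag before expanding a node; keeping a separate Set of visited states (as in the breadth-first sketch above) is an equivalent alternative.

```java
import java.util.*;

// Each graph node carries a boolean that is set the first time the node is
// expanded; the search loop skips any node whose flag is already set.
class CityNode {
    final String name;
    final List<CityNode> neighbors = new ArrayList<>();
    boolean expanded = false;            // the flag that prevents re-expansion

    CityNode(String name) { this.name = name; }
}
```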
19 Features of Breadth-First Search
- All the nodes at depth d in the search tree are expanded before the nodes at depth d+1
- If there is a solution, breadth-first search is guaranteed to find it
- If there are several solutions, breadth-first search will always find the shallowest goal state first
- Is complete, and optimal provided that the path cost is a nondecreasing function of the depth of the node
20 Time and Memory Requirements
- Branching factor b
- Maximum number of nodes expanded before finding a solution with path length d: 1 + b + b^2 + b^3 + ... + b^d, i.e., O(b^d)
- The space complexity is the same as the time complexity, because all the leaf nodes of the tree must be maintained in memory at the same time
- If b = 10, and processing one node takes 1 ms and 100 bytes, we need about 18 minutes and 111 megabytes for depth 6 (b^d = 10^6), and about 3500 years and 11111 terabytes for depth 14 (b^d = 10^14)
21 Depth-First Search Algorithm
- Create a queue and add the first node to it
- Loop
- If the queue is empty, quit
- Remove the first node from the queue
- If the node contains the goal state, then exit with the node as the solution
- For each child of the current node, add the new state to the front of the queue
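The same loop in Java with the single change this slide calls for: children are pushed onto the front of the deque, so it behaves as a stack. The visited set is an addition not on the slide; without it the search could circle forever on a cyclic map such as the city graph.

```java
import java.util.*;

// Depth-first search: identical loop, but new states go to the FRONT of the queue.
public class DepthFirstSearch {
    public static boolean dfs(Map<String, List<String>> graph,
                              String start, String goal) {
        Deque<String> queue = new ArrayDeque<>();
        Set<String> expanded = new HashSet<>();
        queue.addFirst(start);
        while (!queue.isEmpty()) {
            String state = queue.removeFirst();        // remove the first node from the queue
            if (state.equals(goal)) return true;       // goal test
            if (!expanded.add(state)) continue;        // already expanded this state
            for (String child : graph.getOrDefault(state, List.of()))
                queue.addFirst(child);                 // new states go to the FRONT of the queue
        }
        return false;
    }
}
```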
22 Trace of Depth-First Search
- Rochester
- Dubuque, LaCrosse, Minneapolis, Sioux Falls
- Rockford, LaCrosse, Rochester, LaCrosse, Minneapolis, Sioux Falls
- Chicago, Madison, Dubuque, LaCrosse, Rochester, LaCrosse, Minneapolis, Sioux Falls
- Milwaukee, Rockford, Madison, Dubuque, LaCrosse, Rochester, LaCrosse, Minneapolis, Sioux Falls
- Green Bay, Madison, Chicago, Rockford, Madison, Dubuque, LaCrosse, Rochester, LaCrosse, Minneapolis, Sioux Falls
23 Features of Depth-First Search (1)
- Expands the node at the deepest level
- Only when the search hits a dead end does it go back and expand nodes at shallower levels
- Stores only a single path from the root to a leaf, along with the remaining unexpanded sibling nodes for each node on the path
- For branching factor b and maximum depth m, requires storage of only bm nodes
24 Features of Depth-First Search (2)
- For b = 10 and m = 12, needs only 12 kilobytes of memory
- Time complexity O(b^m)
- Could be faster if many solutions exist
- Can get stuck going down the wrong path
- Neither complete nor optimal
25 Depth-Limited Search
- Imposes a cutoff on the maximum depth of a path
- Guaranteed to find a solution if one exists and the depth limit is not too small
- Not guaranteed to find the shortest solution first
- Time complexity O(b^l), where l is the depth limit
- Space complexity O(bl)
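Depth-limited search is often written recursively; here is a minimal sketch under the same assumed adjacency-map representation. The recursion refuses to expand nodes below the limit, which is exactly why a solution deeper than the limit is missed.

```java
import java.util.*;

// Recursive depth-limited search: plain depth-first search with a depth cutoff.
public class DepthLimitedSearch {
    public static boolean dls(Map<String, List<String>> graph,
                              String state, String goal, int limit) {
        if (state.equals(goal)) return true;           // goal test
        if (limit == 0) return false;                  // cutoff reached: do not expand further
        for (String child : graph.getOrDefault(state, List.of()))
            if (dls(graph, child, goal, limit - 1))    // search each subtree one level deeper
                return true;
        return false;                                  // no solution within the limit
    }
}
```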
26 Informed Search
- Motivated by the fact that we have limited time and space in which to find answers to complex problems, so we are willing to accept a good solution
- Applies heuristics, or rules of thumb, while searching the tree to estimate how likely it is that following one path or another will lead to a solution
- Uses objective functions, called evaluation functions, to gauge the value of a particular node in the search tree and to estimate the value of following any of the paths from that node
27 Best-First Search Algorithm
- Create a queue and add the first node to it
- Loop
- If the queue is empty, quit
- Remove the first node from the queue
- If the node contains the goal state, then exit with the node as the solution
- For each child of the current node
- Insert the child into the queue according to the evaluation function
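A sketch of the loop with the queue replaced by a priority queue ordered by a caller-supplied evaluation function, which is the only difference from the earlier algorithms. A comparator over states is enough for greedy search (order by h(n)); evaluation functions that depend on the path cost g(n), as in uniform-cost and A* search, need queue entries that also carry the accumulated cost, as in the A* sketch later.

```java
import java.util.*;

// Best-first search: the same loop, but the fringe is a priority queue ordered
// by an evaluation function supplied by the caller.
public class BestFirstSearch {
    public static boolean search(Map<String, List<String>> graph,
                                 String start, String goal,
                                 Comparator<String> evaluation) {
        PriorityQueue<String> queue = new PriorityQueue<>(evaluation);
        Set<String> expanded = new HashSet<>();
        queue.add(start);
        while (!queue.isEmpty()) {
            String state = queue.remove();             // best node according to the evaluation function
            if (state.equals(goal)) return true;       // goal test
            if (!expanded.add(state)) continue;        // skip states already expanded
            for (String child : graph.getOrDefault(state, List.of()))
                queue.add(child);                      // inserted in evaluation-function order
        }
        return false;
    }
}
```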
28 Greedy Search
- A best-first search strategy using a heuristic function as the evaluation function
- A heuristic function h(n) estimates the cost from the state at node n to the goal state
- h(n) = 0 if n is the goal state
- Example: the straight-line distance to the goal city in the graph search problem
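As one concrete possibility, a straight-line-distance heuristic could be built from a table of city coordinates; the coordinate map below is a hypothetical placeholder, not data from the slides. Any h(n) defined this way is 0 at the goal and does not exceed the actual road distance.

```java
import java.util.Map;
import java.util.function.ToDoubleFunction;

// Sketch of a straight-line-distance heuristic toward a fixed goal city.
public class StraightLineHeuristic {
    public static ToDoubleFunction<String> toward(String goal,
                                                  Map<String, double[]> coordinates) {
        double[] g = coordinates.get(goal);
        return city -> {
            double[] c = coordinates.get(city);
            double dx = c[0] - g[0], dy = c[1] - g[1];
            return Math.sqrt(dx * dx + dy * dy);       // Euclidean (straight-line) distance
        };
    }
}
```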
29 SearchApplet Demo
30 Uniform-Cost Search
- Always expands the lowest-cost node on the fringe
- The cost of a path must never decrease as we go along the path, i.e., g(successor(n)) ≥ g(n)
- Breadth-first search is a special case, with g(n) = depth(n)
- Complete and optimal
- Time and space complexity O(b^d)
31 A* Search
- A best-first search strategy using the evaluation function f(n) = g(n) + h(n), where g(n) is the cost of the path so far and h(n) is the estimated cost to the goal
- Combines greedy search and uniform-cost search
- h(n) should be an admissible heuristic, one that never overestimates the cost to reach the goal, e.g., the straight-line distance in the city-traveling problem
- Pathmax equation: f(n') = max( f(n), g(n') + h(n') ), where node n is the parent of node n'
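A minimal A* sketch: each queue entry carries the path cost g accumulated so far, and the priority queue is ordered by f = g + h. The step-cost and heuristic functions, and the use of city-name strings as states, are assumptions for illustration; the pathmax adjustment from the last bullet is omitted to keep the sketch short.

```java
import java.util.*;
import java.util.function.ToDoubleBiFunction;
import java.util.function.ToDoubleFunction;

// A* search over an assumed adjacency map; returns the cost of the path found.
public class AStarSearch {
    private record Entry(String state, double g) {}

    public static double search(Map<String, List<String>> graph,
                                String start, String goal,
                                ToDoubleBiFunction<String, String> stepCost,
                                ToDoubleFunction<String> h) {
        PriorityQueue<Entry> queue = new PriorityQueue<>(
                Comparator.comparingDouble((Entry e) -> e.g() + h.applyAsDouble(e.state())));
        Map<String, Double> bestG = new HashMap<>();   // cheapest g found so far per state
        queue.add(new Entry(start, 0.0));
        while (!queue.isEmpty()) {
            Entry current = queue.remove();            // lowest f-cost node on the fringe
            if (current.state().equals(goal))
                return current.g();                    // with an admissible h this cost is optimal
            if (bestG.getOrDefault(current.state(), Double.POSITIVE_INFINITY) <= current.g())
                continue;                              // a cheaper path to this state was already expanded
            bestG.put(current.state(), current.g());
            for (String child : graph.getOrDefault(current.state(), List.of()))
                queue.add(new Entry(child,
                        current.g() + stepCost.applyAsDouble(current.state(), child)));
        }
        return Double.POSITIVE_INFINITY;               // goal unreachable
    }
}
```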
32 Optimality of A* Search
33 Features of A* Search
- The first solution found must be the optimal one, because nodes in all subsequent contours have higher f-cost
- Is complete
- Is optimally efficient for any given heuristic function
34 Proof of the Optimality of A*
- Assume that G2 is a suboptimal goal state, i.e., g(G2) > f*, where f* is the cost of the optimal solution
- Node n is currently a leaf node on an optimal path to the optimal goal state G
- Suppose A* nevertheless selects G2 for expansion before n; then
- f* ≥ f(n), since h never overestimates the remaining cost
- f(n) ≥ f(G2), since G2 was chosen from the fringe before n
- f* ≥ f(G2) = g(G2), since h(G2) = 0
- f* ≥ g(G2), a contradiction
(Figure: search tree rooted at Start, with the unexpanded leaf n on the optimal path to G and the suboptimal goal G2 on another branch)
35 Iterative Improvement Algorithms
- Need not maintain a search tree
- Start with a complete configuration and make modifications to improve its quality
- Example: the eight-queens problem
- Landscape concept
36 Hill-Climbing Search
- current ← initial state
- Loop
- next ← a highest-valued successor of current
- if value[next] < value[current] then return current
- current ← next
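A generic Java sketch of the pseudocode above; the successor generator and the value function are supplied by the caller and are assumptions for illustration (for the eight-queens problem they would be single-queen moves and, say, the negated number of attacking pairs). Unlike the pseudocode, this version also stops when the best successor merely ties the current value, so it does not wander on a plateau.

```java
import java.util.List;
import java.util.function.Function;
import java.util.function.ToDoubleFunction;

// Hill climbing: repeatedly move to the best successor until none is strictly better.
public class HillClimbing {
    public static <S> S climb(S initial,
                              Function<S, List<S>> successors,
                              ToDoubleFunction<S> value) {
        S current = initial;
        while (true) {
            S next = current;
            for (S candidate : successors.apply(current))             // pick a highest-valued successor
                if (value.applyAsDouble(candidate) > value.applyAsDouble(next))
                    next = candidate;
            if (value.applyAsDouble(next) <= value.applyAsDouble(current))
                return current;                                       // no uphill move: local maximum
            current = next;
        }
    }
}
```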
37 Drawbacks of Hill-Climbing
- May halt at a local maximum
- Plateaux may result in a random walk
- Ridges may cause oscillation
- Random-restart hill-climbing may help
38 Simulated Annealing
- current ← initial state
- Loop for t = 1 to infinity
- T ← schedule[t]
- if T = 0 then return current
- next ← a randomly selected successor of current
- ΔE ← value[next] − value[current]
- if ΔE > 0 then current ← next
- else current ← next only with probability exp(ΔE/T)
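A sketch of the same loop in Java; the cooling schedule, successor generator and value function are all passed in by the caller, and their names, like the T ≤ 0 stopping test, are assumptions for illustration.

```java
import java.util.List;
import java.util.Random;
import java.util.function.Function;
import java.util.function.IntToDoubleFunction;
import java.util.function.ToDoubleFunction;

// Simulated annealing: always accept uphill moves, accept downhill moves with
// probability exp(deltaE / T), where T falls according to the schedule.
public class SimulatedAnnealing {
    public static <S> S anneal(S initial,
                               Function<S, List<S>> successors,
                               ToDoubleFunction<S> value,
                               IntToDoubleFunction schedule,
                               Random random) {
        S current = initial;
        for (int t = 1; ; t++) {
            double temperature = schedule.applyAsDouble(t);           // T = schedule[t]
            if (temperature <= 0) return current;                     // frozen: stop
            List<S> options = successors.apply(current);
            S next = options.get(random.nextInt(options.size()));     // a random successor of current
            double deltaE = value.applyAsDouble(next) - value.applyAsDouble(current);
            if (deltaE > 0 || random.nextDouble() < Math.exp(deltaE / temperature))
                current = next;                                       // always uphill, sometimes downhill
        }
    }
}
```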
39 Two-Person Games
- A game can be formally defined as a kind of search problem with an initial state, a set of operators, a terminal test, and a utility function
- A search tree may be constructed, with a large number of states
- States at depth d and depth d+1 are for different players (two plies)
- Uncertainty is introduced
40 A Partial Tree for Tic-Tac-Toe
41 A Two-Ply Game Tree
(Figure: a two-ply game tree; the MAX root has minimax value 3, and its three MIN successors have values 3, 2, and 2, each the minimum of the leaf utilities below it)
42 MiniMax Algorithm
- 1. For each operator:
- value[operator] ← MiniMaxValue of the state obtained by applying the operator
- 2. Return the operator with the highest value
43 MiniMaxValue Function
- if the state is terminal then
- return the corresponding utility function value
- else if MAX is to move in the state then
- return the highest MiniMaxValue of the successors of the state
- else
- return the lowest MiniMaxValue of the successors of the state
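A generic recursive sketch of MiniMaxValue; the game-specific pieces (terminal test, utility function, successor generator) are parameters, and the names are assumptions for illustration. The top-level MiniMax algorithm of the previous slide would call this on each successor of the current state and pick the operator with the highest value.

```java
import java.util.List;
import java.util.function.Function;
import java.util.function.Predicate;
import java.util.function.ToDoubleFunction;

// MiniMaxValue: MAX takes the highest successor value, MIN takes the lowest.
public class MiniMax {
    public static <S> double value(S state, boolean maxToMove,
                                   Predicate<S> terminal,
                                   ToDoubleFunction<S> utility,
                                   Function<S, List<S>> successors) {
        if (terminal.test(state))
            return utility.applyAsDouble(state);            // utility of a terminal state
        double best = maxToMove ? Double.NEGATIVE_INFINITY : Double.POSITIVE_INFINITY;
        for (S s : successors.apply(state)) {
            double v = value(s, !maxToMove, terminal, utility, successors);
            best = maxToMove ? Math.max(best, v)            // MAX: highest successor value
                             : Math.min(best, v);           // MIN: lowest successor value
        }
        return best;
    }
}
```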
44 Evaluation Functions
- Return an estimate of the expected utility of the game from a given position
- Must agree with the utility function on terminal states
- Must not take too long to compute
- Should accurately reflect the actual chance of winning
45 Alpha-Beta Pruning
- When applied to a standard minimax tree, it returns the same move as minimax would, but prunes away branches that cannot possibly influence the final decision
- Alpha is the value of the best choice we have found so far at any choice point along the path for MAX
- Beta is the value of the best choice we have found so far at any choice point along the path for MIN
46 Example of Alpha-Beta Pruning
(Figure: the two-ply game tree of slide 41 searched with alpha-beta; the root value is still 3, but the leaves that cannot influence the decision are pruned and never evaluated)
47 MaxValue Function
- if the state is at the cutoff depth
- val ← Eval(state)
- α ← Max(α, val)
- return val
- for each state s in the successors of the state
- α ← Max(α, MinValue of state s)
- if α ≥ β then return β
- return α
48 MinValue Function
- if the state is at the cutoff depth
- val ← Eval(state)
- β ← Min(β, val)
- return val
- for each state s in the successors of the state
- β ← Min(β, MaxValue of state s)
- if β ≤ α then return α
- return β
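The two functions above, written as one generic Java sketch; the cutoff test and evaluation function are supplied by the caller, and the names are illustrative. A top-level call would be maxValue(root, Double.NEGATIVE_INFINITY, Double.POSITIVE_INFINITY, ...), and the ≥ / ≤ tests are where branches get pruned.

```java
import java.util.List;
import java.util.function.Function;
import java.util.function.Predicate;
import java.util.function.ToDoubleFunction;

// Alpha-beta search: alpha is MAX's best option so far, beta is MIN's.
public class AlphaBeta {
    public static <S> double maxValue(S state, double alpha, double beta,
                                      Predicate<S> cutoff, ToDoubleFunction<S> eval,
                                      Function<S, List<S>> successors) {
        if (cutoff.test(state)) return eval.applyAsDouble(state);
        for (S s : successors.apply(state)) {
            alpha = Math.max(alpha, minValue(s, alpha, beta, cutoff, eval, successors));
            if (alpha >= beta) return beta;                 // MIN already has a better option: prune
        }
        return alpha;
    }

    public static <S> double minValue(S state, double alpha, double beta,
                                      Predicate<S> cutoff, ToDoubleFunction<S> eval,
                                      Function<S, List<S>> successors) {
        if (cutoff.test(state)) return eval.applyAsDouble(state);
        for (S s : successors.apply(state)) {
            beta = Math.min(beta, maxValue(s, alpha, beta, cutoff, eval, successors));
            if (beta <= alpha) return alpha;                // MAX already has a better option: prune
        }
        return beta;
    }
}
```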