Title: Searching
1. Searching
- Craig A. Struble, Ph.D.
- Department of Mathematics, Statistics, and Computer Science
2. Overview
- Problem-solving agents
- Example Problems
- Searching
- Measuring performance
- Search strategies
- Partial Information
3. Problem Solving Agents
(Diagram: architecture of a problem-solving agent. Sensors deliver percepts from the environment; the agent's state, its model of how the world evolves, and its model of what its actions do determine what the world is like now and what it will be like if it does action A; combined with its goals, this determines what action it should do now, which the actuators carry out.)
4. Problem Solving
- World is made up of discrete states
- A state is the set of characteristics of the world at a given point in time
- Actions change the world from one state to another
(Diagram: state S1 transitions to states S2, S3, and S4 via the actions Forward, Left, and Right.)
5. Problem Solving Steps
- Goal formulation
  - Determine the state or set of states that will satisfy the agent's goal
- Problem formulation
  - Determine actions and states to consider
- Search
  - Consider possible sequences of actions from the current state
  - Return a solution, a sequence of actions that reaches a goal state
  - In general, we would like the best sequence of actions w.r.t. the agent's performance measure
6. Problem Solving Agent Program
Figure 3.1 page 61
7. Example, Route Finding
- Consider a system like MapQuest
  - Provides instructions for driving from one location to another
- Agent perspective
  - Agent simulates driving and identifies the best route given a performance measure (time, distance, interstates, etc.)
8. Example, Route Finding
9. Example, Route Finding
10. Example, Route Finding
- What are the states of the world?
  - The location of the agent, i.e., the Romanian cities
- What goal does the agent formulate?
  - To be in Bucharest
- What is the performance measure?
  - Shortest distance
11. Example, Route Finding
- What is the problem formulation?
  - States to consider are those between Arad and Bucharest
  - Actions: go to a specific city
    - Must be adjacent to the city the agent is located in
- Search
  - Try all possible paths between Arad and Bucharest
  - Select the shortest path
12. Formal Definition of Problems
- An initial state that the agent starts in
- A description of possible actions for the agent from a given state
  - A successor function, which maps a state to a set of (action, state) pairs
- A goal test, which determines if a state is a goal state
- A path cost, which assigns a numeric cost to a given path
- An optimal solution is one that has the least path cost amongst all solutions
13. Formal Definition for Example
- Initial state
  - In(Arad)
- Successor function
  - Maps a state to adjacent cities
  - <Go(Sibiu), In(Sibiu)>, <Go(Timisoara), In(Timisoara)>, <Go(Zerind), In(Zerind)>
- Goal test
  - state is In(Bucharest)
- Path cost
  - Sum of distances along the driving path
14. Modeling Problems in Java
public interface Problem {
    /** The initial problem state. */
    public State getInitialState();

    /** A function that returns a mapping of actions to states, given a state. */
    public Map successor(State state);

    /** Return whether or not the state is a goal state. */
    public boolean goalState(State state);

    /** Determine the cost of a path. */
    public Number pathCost(SearchNode node);
}
15. When is search a good idea?
- Static
  - World doesn't change as the agent thinks
- Observable
  - Assumes the initial state is known
- Discrete
  - Must enumerate courses of action
- Deterministic
  - Must be able to predict the results of actions
16. More Problems
17. Problem Formulation
- State
  - Location of each tile
  - Could be represented with an ordered list: (7, 2, 4, 5, 0, 6, 8, 3, 1)
  - Keep track of the blank's location (position numbers index into the list)
18. Problem Formulation
- Initial state
  - Any state can be the initial state
  - Any permutation of 0 through 8
- Goal test
  - State is (1, 2, 3, 4, 5, 6, 7, 8, 0)
- Path cost
  - Each step costs 1, so the path cost is the total number of steps
19. Problem Formulation
- Successor function
  - View the blank as movable, so successors are any states reachable by moving the blank left, right, up, or down
(7, 2, 4, 5, 0, 6, 8, 3, 1)
  L -> (7, 2, 4, 0, 5, 6, 8, 3, 1)
  R -> (7, 2, 4, 5, 6, 0, 8, 3, 1)
  U -> (7, 0, 4, 5, 2, 6, 8, 3, 1)
  D -> (7, 2, 4, 5, 3, 6, 8, 0, 1)
20. Successor Function
b <- position of blank
successors <- {}  // empty set
if ((b - 1) mod 3 != 2)
    create new state by swapping state[b] and state[b-1]
    add new state to successors
if ((b + 1) mod 3 != 0)
    create new state by swapping state[b] and state[b+1]
    add new state to successors
if (b - 3 >= 0)
    create new state by swapping state[b] and state[b-3]
    add new state to successors
if (b + 3 < 9)
    create new state by swapping state[b] and state[b+3]
    add new state to successors
return successors
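The pseudocode above can be rendered as a runnable Java method. The class and method names below are illustrative (not from the slides); the boundary tests use `b % 3` checks, which are equivalent to the `(b - 1) mod 3` tests above but avoid Java's negative remainder at the left edge.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class EightPuzzle {
    /** Return all states reachable by sliding the blank (0) one step. */
    public static List<int[]> successors(int[] state) {
        List<int[]> result = new ArrayList<>();
        int b = indexOfBlank(state);
        if (b % 3 != 0) result.add(swap(state, b, b - 1)); // move blank left
        if (b % 3 != 2) result.add(swap(state, b, b + 1)); // move blank right
        if (b - 3 >= 0) result.add(swap(state, b, b - 3)); // move blank up
        if (b + 3 < 9)  result.add(swap(state, b, b + 3)); // move blank down
        return result;
    }

    private static int indexOfBlank(int[] state) {
        for (int i = 0; i < state.length; i++)
            if (state[i] == 0) return i;
        throw new IllegalArgumentException("no blank tile");
    }

    /** Return a copy of state with positions i and j exchanged. */
    private static int[] swap(int[] state, int i, int j) {
        int[] copy = state.clone();
        int tmp = copy[i]; copy[i] = copy[j]; copy[j] = tmp;
        return copy;
    }

    public static void main(String[] args) {
        int[] s = {7, 2, 4, 5, 0, 6, 8, 3, 1}; // blank in the center
        for (int[] succ : successors(s))
            System.out.println(Arrays.toString(succ));
    }
}
```

With the blank in the center, all four moves apply and the four successor lists from the previous slide are produced.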
21. More Problems
- The phasor measurement unit (PMU) problem
  - Consider power system graphs (PSGs)
    - Buses (nodes)
    - Lines (edges)
  - PMUs observe characteristics of a PSG for monitoring
  - Place the fewest number of PMUs on a PSG
22. How a PMU Observes a PSG
- Any bus with a PMU is observed
- Any line incident to a bus containing a PMU is observed
- Any bus incident to an observed line is observed
- Any line between two observed buses is observed
- If all of the lines incident to an observed bus are observed, save one, then that remaining line is also observed
- These all follow from physical laws
23. Problem Formulation
- State
  - Locations of PMUs and which parts of the PSG are observed
- Initial state
  - No PMUs placed and nothing observed
- Successor function
  - States obtained by placing a PMU on a node without a PMU
- Goal test
  - The entire PSG is observed
- Path cost
  - The number of PMUs placed
24. Searching
- Problem solving involves searching through the state space
- The state space is organized with a state tree
  - Start with the initial state, then apply the successor function repeatedly
- More generally, we have a search graph, as states can be reached by different paths
  - 8-puzzle: consider moving the blank left and then right
25. Search Nodes
- Nodes are a data structure with 5 components:
  - State
  - Parent-Node
  - Action (e.g., right)
  - Depth (e.g., 6)
  - Path-Cost (e.g., 6)
26. Search Nodes
public class SearchNode {
    private SearchNode parent;
    private State state;
    private Action action;
    private int depth;
    private Number cost;

    // Include any necessary constructors, accessors, setters,
    // and other methods needed by the node class.
}
27. Tree Search
Goal test?
28. Tree Search
- A search strategy determines the order in which nodes are expanded
Figure 3.8 page 71
29. Organizing the Fringe
- The fringe is implemented using a queue
  - Actually a priority queue
- Operations
  - MakeQueue(element, ...) // create queue containing elements
  - Empty?(queue) // return true if queue is empty
  - First(queue) // return first element, but leave it queued
  - RemoveFirst(queue) // return first element, dequeue
  - Insert(element, queue) // insert in ordered location in queue
  - InsertAll(elements, queue) // insert multiple elements
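These operations map directly onto Java's `java.util.PriorityQueue`: `Insert` is `add`, `RemoveFirst` is `poll`, and `Empty?` is `isEmpty`. A small sketch (names illustrative, not from the slides) showing that the comparator alone determines which node comes off the fringe first:

```java
import java.util.Comparator;
import java.util.PriorityQueue;

public class FringeDemo {
    /** Insert all nodes, then remove the first one by priority.
     *  Ordering by path cost gives uniform-cost search; ordering by depth
     *  ascending behaves like breadth-first, descending like depth-first. */
    public static String firstByCost(String[] names, double[] costs) {
        // The fringe holds indices; the comparator orders them by cost.
        PriorityQueue<Integer> fringe =
            new PriorityQueue<>(Comparator.comparingDouble(i -> costs[i]));
        for (int i = 0; i < names.length; i++) fringe.add(i); // Insert
        return names[fringe.poll()];                          // RemoveFirst
    }

    public static void main(String[] args) {
        // Zerind has the least path cost, so it is removed first.
        System.out.println(firstByCost(
            new String[]{"Sibiu", "Timisoara", "Zerind"},
            new double[]{140, 118, 75}));
    }
}
```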
30. Tree Search
Figure 3.9 page 72
31. Tree Search in Java
public class TreeSearch {
    private PriorityQueue fringe;
    private Problem problem;

    /** Construct a tree search object for the specified problem. */
    public TreeSearch(Problem problem, PriorityQueue fringe) {
        this.problem = problem;
        this.fringe = fringe;

        /* Create the root of the search tree */
        SearchNode root = new SearchNode(null,                      // parent
                                         problem.getInitialState(), // state
                                         null,                      // action
                                         0,                         // depth
                                         null);                     // cost
        fringe.add(root);
    }
32Tree Search in Java
/ Carry out the search, returning a
collection of actions to perform. /
public Collection execute() while
(!fringe.isEmpty()) SearchNode node
fringe.removeFirst() if
(problem.goalState(node)) return
solution(node)
expand(node) return null //
throw exception instead?
33Tree Search in Java
/ Expand the current search node.
/ public void expand(SearchNode node)
Map successors successors
problem.successor(node.getState()) for
(Iterator iter successors.keySet().iterator()
iter.hasNext() )
Action act (Action) iter.next()
State state (State) successors.get(act)
SearchNode newNode new Node(node, state,
act,
node.getDepth() 1)
problem.pathCost(newNode)
fringe.add(newNode)
34Tree Search in Java
/ Return a sequence of actions for
the agent to take. / public Collection
solution(SearchNode node) LinkedList
actions new LinkedList() while (node
! null) actions.addFirst(node.getAc
tion()) node node.getParent()
return actions
35. Measuring Problem Solving Performance
- Completeness
  - Is the algorithm guaranteed to find a solution?
- Optimality
  - Does the strategy find an optimal solution?
- Time complexity
  - How long does it take to find a solution?
- Space complexity
  - How much memory is needed to perform the search?
36. Complexity and Big-Oh
- Read Appendix A
- T(n) is the complexity of an algorithm
- O(f(n)) is defined by
  - T(n) is O(f(n)) if T(n) <= k*f(n) for some k, for all n > n0
- What this means is that the growth in time (or space) complexity is bounded above by some constant times f(n)
37. Complexity and Big-Oh
- Example
  - If T(n) = 2n + 2, then T(n) is O(n) using k = 3 and n0 = 2
- Use O() to say something about the long-term behavior of algorithms
  - O(n) is always better than O(n^2) as n approaches infinity
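The example bound can be checked mechanically. A tiny sketch (class name illustrative) verifying that T(n) = 2n + 2 <= 3n holds for every tested n >= 2:

```java
public class BigOhDemo {
    /** Witness the claim: T(n) = 2n + 2 is O(n) with k = 3 and n0 = 2. */
    public static boolean boundHolds(int n) {
        return 2 * n + 2 <= 3 * n; // algebraically equivalent to n >= 2
    }

    public static void main(String[] args) {
        for (int n = 2; n <= 1000; n++)
            if (!boundHolds(n)) throw new AssertionError("bound fails at n=" + n);
        System.out.println("2n + 2 <= 3n for all tested n >= 2");
    }
}
```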
38. Complexity and Search
- What is n for search strategies?
  - Usually a measure of the search tree/graph size
- Search tree size is measured by
  - b, the branching factor: the maximum number of successors of any node
  - d, the depth of the shallowest goal node
  - m, the maximum length of any path
39. Complexity
- Time complexity is in terms of the number of nodes expanded
- Space complexity is in terms of the maximum number of nodes stored in memory
40. Cost
- Search cost, which is generally time complexity but may also include space complexity
- Total cost, which is the search cost plus the path cost of the solution
41. Search Strategy
- An uninformed strategy is given no information other than the problem definition
  - The algorithms are given no insight into the structure of the problem
  - All non-goal states are equal
- Informed strategies take advantage of additional a priori knowledge of the problem
  - Identify more promising non-goal states
  - Heuristics, pruning, ...
42. Uninformed Search Strategies
- Breadth-first search
- Uniform-cost search
- Depth-first search
- Depth-limited search
- Iterative deepening search
- Bidirectional search
43. Breadth First Search
- Fringe is organized as a FIFO queue
(Diagram: breadth-first expansion of a search tree, one level at a time.)
44. Breadth First Search
- Complete?
  - Yes
- Optimal?
  - Finds the shallowest goal node, which is optimal only if step costs are all the same
- Time complexity?
  - O(b^(d+1))
- Space complexity?
  - O(b^(d+1))
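A FIFO fringe can be sketched concretely in Java; the method name and the toy graph below are illustrative, not from the slides. The search returns the depth of the shallowest goal, tracking seen states so nothing is enqueued twice:

```java
import java.util.ArrayDeque;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Queue;

public class BfsDemo {
    /** Breadth-first search with a FIFO fringe: expands shallowest nodes
     *  first. Returns the depth of the first goal found, or -1. */
    public static int bfsDepth(Map<String, List<String>> succ,
                               String start, String goal) {
        Queue<String> fringe = new ArrayDeque<>(); // FIFO queue
        Map<String, Integer> depth = new HashMap<>();
        fringe.add(start);
        depth.put(start, 0);
        while (!fringe.isEmpty()) {
            String node = fringe.remove();
            if (node.equals(goal)) return depth.get(node);
            for (String child : succ.getOrDefault(node, List.of())) {
                if (!depth.containsKey(child)) { // skip repeated states
                    depth.put(child, depth.get(node) + 1);
                    fringe.add(child);
                }
            }
        }
        return -1; // goal unreachable
    }

    public static void main(String[] args) {
        Map<String, List<String>> succ = Map.of(
            "A", List.of("B", "C"),
            "B", List.of("D"),
            "C", List.of("D", "G"),
            "D", List.of("G"));
        System.out.println(bfsDepth(succ, "A", "G")); // shallowest goal: depth 2
    }
}
```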
45. Uniform Cost Search
- Order the fringe by increasing path cost
(Diagram: expanding from Arad; e.g., Zerind is reached with path cost 75.)
46. Uniform Cost Search
- Complete?
  - Yes, if each step cost is at least e > 0
- Optimal?
  - Yes, as above
- Time complexity?
  - O(b^c), where C is the cost of the optimal solution and c = ceil(C/e)
  - c could be larger than d!
- Space complexity?
  - O(b^c)
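As a sketch, uniform-cost search is just a priority queue ordered by path cost g(n). The class and method names are illustrative, and the road distances in `main` are assumed from the textbook's Romania map (two of them, Zerind 75 and Sibiu 140, appear on the surrounding slides):

```java
import java.util.Comparator;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.PriorityQueue;
import java.util.Set;

public class UcsDemo {
    /** A partial path: the frontier city and the path cost g to reach it. */
    static class Path {
        final String city; final double cost;
        Path(String city, double cost) { this.city = city; this.cost = cost; }
    }

    /** Uniform-cost search: the fringe is ordered by increasing path cost. */
    public static double cheapestCost(Map<String, Map<String, Double>> roads,
                                      String start, String goal) {
        PriorityQueue<Path> fringe =
            new PriorityQueue<>(Comparator.comparingDouble(p -> p.cost));
        Set<String> closed = new HashSet<>();
        fringe.add(new Path(start, 0.0));
        while (!fringe.isEmpty()) {
            Path p = fringe.poll();
            if (p.city.equals(goal)) return p.cost; // first removal is cheapest
            if (!closed.add(p.city)) continue;      // already expanded: skip
            for (Map.Entry<String, Double> e :
                     roads.getOrDefault(p.city, Map.of()).entrySet())
                fringe.add(new Path(e.getKey(), p.cost + e.getValue()));
        }
        return Double.POSITIVE_INFINITY; // goal unreachable
    }

    public static void main(String[] args) {
        // Distances assumed from the textbook's Romania map.
        Map<String, Map<String, Double>> roads = new HashMap<>();
        roads.put("Arad", Map.of("Zerind", 75.0, "Sibiu", 140.0, "Timisoara", 118.0));
        roads.put("Sibiu", Map.of("Fagaras", 99.0, "RimnicuVilcea", 80.0));
        roads.put("RimnicuVilcea", Map.of("Pitesti", 97.0));
        roads.put("Fagaras", Map.of("Bucharest", 211.0));
        roads.put("Pitesti", Map.of("Bucharest", 101.0));
        System.out.println(cheapestCost(roads, "Arad", "Bucharest"));
    }
}
```

Note that the route through Rimnicu Vilcea and Pitesti (140 + 80 + 97 + 101) beats the shallower route through Fagaras (140 + 99 + 211), illustrating why c can exceed d.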
47. Depth First Search
- Fringe organized in descending order of depth (i.e., deepest node first)
(Diagram: expanding from Arad; e.g., Sibiu is reached at distance 140.)
48. Depth First Search
- Complete?
  - No
- Optimal?
  - No
- Time complexity?
  - O(b^m)
- Space complexity?
  - O(bm)
49. Depth First Search
- Can reduce memory using backtracking
  - Store only one successor at a time
  - Go back if the path fails and generate the next successor
  - Feasible if actions can be easily undone
  - Good for problems with large state descriptions
50. Depth-limited search
- Organize as depth first, but limit the depth searched in the tree to level l
  - Depth-first search is the case where l is infinity
- Complete?
  - Only if l >= d
- Optimal?
  - Only if l = d
- Time?
  - O(b^l)
- Space?
  - O(bl)
51. Iterative Deepening Search
- Execute depth-limited search for l = 1, 2, 3, ...
- Complete?
  - Yes
- Optimal?
  - Yes, if step costs are the same
- Time Complexity?
52. Time Complexity of IDS
- Derived on the board
- N(IDS) = (d)b + (d-1)b^2 + ... + (1)b^d
- N(BFS) = b + b^2 + ... + b^d + (b^(d+1) - b)
- Time complexity
  - O(b^d)
- Space complexity
  - O(bd)
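Iterative deepening can be sketched as a loop over depth-limited searches; the names and the depth cap below are illustrative, not from the slides:

```java
import java.util.List;
import java.util.Map;

public class IdsDemo {
    /** Depth-limited DFS: returns the goal depth, or -1 within this limit. */
    static int dls(Map<String, List<String>> succ, String node,
                   String goal, int limit, int depth) {
        if (node.equals(goal)) return depth;
        if (depth == limit) return -1; // cut off: do not go deeper
        for (String child : succ.getOrDefault(node, List.of())) {
            int found = dls(succ, child, goal, limit, depth + 1);
            if (found >= 0) return found;
        }
        return -1;
    }

    /** Iterative deepening: run depth-limited search for l = 0, 1, 2, ...
     *  Finds the shallowest goal while storing only the current path. */
    public static int ids(Map<String, List<String>> succ,
                          String start, String goal, int maxDepth) {
        for (int limit = 0; limit <= maxDepth; limit++) {
            int found = dls(succ, start, goal, limit, 0);
            if (found >= 0) return found;
        }
        return -1;
    }

    public static void main(String[] args) {
        Map<String, List<String>> succ = Map.of(
            "A", List.of("B", "C"),
            "B", List.of("D"),
            "C", List.of("G"));
        System.out.println(ids(succ, "A", "G", 10)); // goal found at depth 2
    }
}
```

The shallow levels are re-generated on every iteration, which is exactly where the N(IDS) sum above comes from: the root is generated d times, the next level d-1 times, and so on.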
53. Bidirectional Search
- Search both forward from the initial state and backward from the goal
- Need to have a predecessor function
- Complete?
  - Yes, if BFS is used
- Optimal?
  - Yes, with identical step costs and BFS used
- Time?
  - O(b^(d/2))
- Space?
  - O(b^(d/2))
54. Repeated States
- Repeated states can turn a linear problem into an exponential one!
55. Graph Search
- States are open (not visited) or closed (visited)
56. Graph Search
- Uses more memory
  - Proportional to the size of the state space
  - DFS and IDS are no longer linear space
- Implement the closed list with a hash table
- Could lose optimality
  - Could arrive at a state by a more expensive path first
  - Particularly a problem with IDS
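In Java the closed list is essentially a `HashSet` of expanded states. This sketch (names illustrative) terminates on a cyclic state space where plain tree search would loop forever:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

public class GraphSearchDemo {
    /** Graph search: the closed set (a hash table) records expanded states,
     *  so a cyclic state space cannot trap the search in an infinite loop. */
    public static Set<String> reachable(Map<String, List<String>> succ,
                                        String start) {
        Set<String> closed = new HashSet<>();
        Deque<String> fringe = new ArrayDeque<>(); // LIFO: depth-first order
        fringe.push(start);
        while (!fringe.isEmpty()) {
            String node = fringe.pop();
            if (!closed.add(node)) continue; // already expanded: skip
            for (String child : succ.getOrDefault(node, List.of()))
                fringe.push(child);
        }
        return closed;
    }

    public static void main(String[] args) {
        // A <-> B cycle plus a branch to C; tree search would revisit A and B
        // endlessly, but the closed set stops after each state is seen once.
        Map<String, List<String>> succ = Map.of(
            "A", List.of("B"),
            "B", List.of("A", "C"));
        System.out.println(reachable(succ, "A")); // A, B, and C, in some order
    }
}
```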
57. Partial Information
- Sensorless problems
  - Agent doesn't know the initial state
  - Each action could lead to several possible successor states
- Contingency problems
  - Partially observable environments
  - Nondeterministic actions
- Exploration
  - States and actions are unknown
58. Sensorless Problems
- The initial state of the agent is completely unknown
  - One of several possible states
- Agent knows the available actions, but can't see their results
- Can we still design an agent that performs rationally?
59. Senseless Monkey
60. Exercise (in class)
- Draw the complete state diagram for the senseless monkey. Ignore the sleep action.
61. Answer
(Diagram: the state diagram for the senseless monkey, with transitions labeled by the actions U, D, and E.)
62. Sensorless Problems
- A set of states in the sensorless problem is a goal state if every state in the set is a goal state
- In general, if the original state space has S states, the sensorless state space has 2^S possible states (the power set). Not all may be reachable, though.
63. Contingency Problems
- Information is gained from sensors after acting
- Example: lawn environment. The mower doesn't know the locations of all obstacles.
- Formulate action sequences like
  - [Counter, if (Obstacle) then Counter else Forward, ...]
- Handled with planning (Chapter 12)
64. Summary
- Represent problems in search spaces
  - Nodes represent physical states
  - Edges represent actions taken
- Problems have 4 components
  - Initial state
  - Description of actions (successor function)
  - Goal test
  - Path cost (e.g., via a step cost per action)
65. Summary
- Search strategies are defined by the order in which fringe nodes are expanded
  - Priority queue
- In general, iterative deepening is the best approach
  - However, one should study the characteristics of each problem to select the best strategy
- Sensorless problems can be solved by mapping to sets of physical states