1
CS 2710, ISSP 2610
Foundations of Artificial Intelligence
  • Solving Problems by Searching
  • Chapter 3

2
Framework
  • AI is concerned with the creation of artifacts
    that
  • Do the right thing
  • Given their circumstances and what they know

3
Ideal Rational Agents
  • An ideal rational agent should take whatever
    action is expected to maximize its performance
    measure, on the basis of its percept sequence and
    whatever built-in knowledge it has
  • Key points
  • Performance measure
  • Actions
  • Percept sequence
  • Built-in knowledge

4
Rational agents
How to design this?
[Diagram: the agent receives percepts from the
environment through its sensors and acts on the
environment through its effectors; the '?' inside
the agent is the agent program to be designed]
5
Goal-based Agents
  • Agents that take actions in the pursuit of a goal
    or goals.

6
Goal-based Agents
  • What should a goal-based agent do when none of
    the actions it can currently perform results in a
    goal state?
  • Choose an action that appears to lead to a state
    that is closer to a goal than the current one is.

7
Problem Solving as Search
  • One way to address these issues is to view
    goal attainment as problem solving, and to view
    problem solving as search through a state space.
  • In chess, e.g., a state is a board configuration

8
Problem Solving
  • A problem is characterized as
  • An initial state
  • A set of actions
  • A goal test
  • A cost function

9
Problem Solving
  • A problem is characterized as
  • An initial state
  • A set of actions
  • successors: state → set of states
  • A goal test
  • goalp: state → true or false
  • A cost function
  • edgecost: edge between states → cost
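These three functions can be sketched concretely. The tiny route-finding graph below is a made-up illustration; only the names successors, goalp, and edgecost follow the slide's interface:

```python
# A toy route-finding problem expressed through the interface above.
# The graph and its costs are illustrative, not from the slides.
GRAPH = {
    "A": {"B": 1, "C": 4},
    "B": {"C": 2, "D": 5},
    "C": {"D": 1},
    "D": {},
}

initial_state = "A"

def successors(state):
    # successors: state -> set of states
    return set(GRAPH[state])

def goalp(state):
    # goalp: state -> true or false
    return state == "D"

def edgecost(s1, s2):
    # edgecost: edge between states -> cost
    return GRAPH[s1][s2]
```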

10
Example Problems
  • Toy problems (but sometimes useful)
  • Illustrate or exercise various problem-solving
    methods
  • Concise, exact description
  • Can be used to compare performance
  • Examples: 8-puzzle, 8-queens problem,
    Cryptarithmetic, Vacuum world, Missionaries and
    cannibals, simple route finding
  • Real-world problems
  • More difficult
  • No single, agreed-upon description
  • Examples: Route finding, Touring and traveling
    salesperson problems, VLSI layout, Robot
    navigation, Assembly sequencing

11
Toy Problems: The vacuum world
  • The vacuum world
  • The world has only two locations
  • Each location may or may not contain dirt
  • The agent may be in one location or the other
  • 8 possible world states
  • Three possible actions: Left, Right, Suck
  • Goal: clean up all the dirt

[Diagram: the eight vacuum-world states, numbered 1-8]
12
Toy Problems: The vacuum world
  • States: one of the 8 states given earlier
  • Operators: move left, move right, suck
  • Goal test: no dirt left in any square
  • Path cost: each action costs one
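This formulation can be written out directly; a minimal sketch, assuming a state is represented as (agent location, dirt in left square, dirt in right square):

```python
# Vacuum-world formulation: state = (loc, d0, d1), where loc is
# 0 (left) or 1 (right) and d0/d1 say whether each square is dirty.
def vacuum_successors(state):
    loc, d0, d1 = state
    return {
        "Left":  (0, d0, d1),
        "Right": (1, d0, d1),
        "Suck":  (loc,
                  False if loc == 0 else d0,
                  False if loc == 1 else d1),
    }

def vacuum_goalp(state):
    _, d0, d1 = state
    return not d0 and not d1     # no dirt left in any square

def vacuum_cost(path):
    return len(path)             # each action costs one
```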

13
Toy Problems: Missionaries and cannibals
  • Missionaries and cannibals
  • Three missionaries and three cannibals want to
    cross a river
  • There is a boat that can hold two people
  • Cross the river, but make sure that the
    missionaries are not outnumbered by the cannibals
    on either bank
  • Needs a lot of abstraction
  • Crocodiles in the river, the weather and so on
  • Only the endpoints of the crossing are important
  • Only two types of people

14
Toy Problems: Missionaries and cannibals
  • Problem formulation
  • States: ordered sequence of three numbers
    representing the number of missionaries,
    cannibals, and boats on the bank of the river from
    which they started. The start state is (3, 3, 1)
  • Operators: take two missionaries, two cannibals,
    or one of each across in the boat
  • Goal test: reached state (0, 0, 0)
  • Path cost: number of crossings
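A sketch of this formulation in code. The slide lists the two-person boat loads; single-person loads are added here on the assumption that a boat holding two may also carry one:

```python
# Missionaries and cannibals: state = (m, c, b) = missionaries,
# cannibals, and boats on the starting bank. Start state is (3, 3, 1).
LOADS = [(2, 0), (0, 2), (1, 1), (1, 0), (0, 1)]   # possible boat loads

def mc_valid(m, c):
    # Missionaries must not be outnumbered on either bank.
    safe = lambda mm, cc: mm == 0 or mm >= cc
    return 0 <= m <= 3 and 0 <= c <= 3 and safe(m, c) and safe(3 - m, 3 - c)

def mc_successors(state):
    m, c, b = state
    sign = -1 if b == 1 else 1     # boat leaves or returns to start bank
    result = set()
    for dm, dc in LOADS:
        nm, nc = m + sign * dm, c + sign * dc
        if mc_valid(nm, nc):
            result.add((nm, nc, 1 - b))
    return result

def mc_goalp(state):
    return state == (0, 0, 0)      # everyone safely across

# path cost: number of crossings = length of the action sequence
```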

15
Real-world problems
  • Route finding
  • Specified locations and transitions along links
    between them
  • Applications: routing in computer networks,
    automated travel advisory systems, airline travel
    planning systems
  • Touring and traveling salesperson problems
  • Visit every city on the map at least once and
    end in Bucharest
  • Needs information about the visited cities
  • Goal: find the shortest tour that visits all
    cities
  • NP-hard, but a lot of effort has been spent on
    improving the capabilities of TSP algorithms
  • Applications: planning movements of automatic
    circuit board drills

16
Real-world problems
  • VLSI layout
  • Place cells on a chip so that they do not overlap
    and so that there is room for connecting wires to
    be placed between the cells
  • Robot navigation
  • Generalization of the route finding problem
  • No discrete set of routes
  • Robot can move in a continuous space
  • Infinite set of possible actions and states
  • Additional problem: errors in sensor readings and
    motor controls
  • Assembly sequencing
  • Automatic assembly of complex objects
  • The problem is to find an order in which to
    assemble the parts of some object
  • Wrong order: some of the previous work needs to
    be undone

17
What is a Solution?
  • A sequence of actions that, when performed, will
    transform the initial state into a goal state
    (e.g., the sequence of actions that gets the
    missionaries safely across the river)
  • Or sometimes just the goal state (e.g., infer
    molecular structure from mass spectrographic
    data)

18
Our Current Framework
  • Backtracking state-space search
  • Others
  • Constraint-based search
  • Optimization search
  • Adversarial search

19
Initial Assumptions
  • The agent knows its current state
  • Only the actions of the agent will change the
    world
  • The effects of the agent's actions are known and
    deterministic
  • All of these are defeasible: likely to be wrong
    in real settings.

20
Another Assumption
  • Searching/problem-solving and acting are distinct
    activities
  • First you search for a solution (in your head)
    then you execute it

21
Generalized Search
  • Start by adding the initial state to a list,
    called the fringe
  • Loop:
  • If there are no states left, then fail
  • Otherwise remove a node, cur, from the fringe
  • If it's a goal state, return it
  • Otherwise expand it and add the resulting nodes
    to the fringe

Expand a node: generate its successors
22
Evaluation Criteria
  • Completeness
  • Does it find a solution when one exists?
  • Time
  • The number of nodes generated during search

23
Search Criteria
  • Space
  • Maximum number of nodes in memory at one time
  • Optimality
  • Does it always find a least-cost solution?
  • Cost considered: sum of the edge costs of the
    path to the goal (the g-value)

24
Time and Space Complexity Measured in Terms of
  • b: maximum branching factor of the search tree
  • d: depth of the shallowest goal
  • m: maximum depth of any path in the state space
    (may be infinite)

25
Blind Search Strategies
  • Let's define and evaluate the following
    algorithms:
  • Breadth-first search
  • Uniform-cost search
  • Depth-first search
  • (On the board)
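As a standalone sketch (not the board development), the three strategies are the same generalized-search loop with different queuing functions; here nodes are assumed to be plain (state, path-cost) pairs:

```python
# Breadth-first: new nodes join the back of the fringe (FIFO).
def bfs_qfun(new, fringe):
    return fringe + new

# Depth-first: new nodes go on the front of the fringe (LIFO).
def dfs_qfun(new, fringe):
    return new + fringe

# Uniform-cost: keep the fringe ordered by path cost (g-value),
# where each node is a (state, g) pair.
def ucs_qfun(new, fringe):
    return sorted(fringe + new, key=lambda node: node[1])
```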

26
Generalized Search
  • Start by adding the initial state to a list,
    called the fringe
  • Loop:
  • If there are no states left, then fail
  • Otherwise remove a node, cur, from the fringe
  • If it's a goal state, return it
  • Otherwise expand it and add the resulting nodes
    to the fringe

Expand a node: generate its successors
27
Implementation issues
  • Nodes versus states
  • Searching a tree versus a general graph (checking
    for duplicate states)

28
def treesearch(qfun, fringe):
    # qfun: queuing function
    # fringe: a list containing the initial node
    while len(fringe) > 0:
        cur = fringe[0]          # remove the first node
        fringe = fringe[1:]
        if goalp(cur):
            return cur
        # expand cur and queue its successors
        fringe = qfun(makeNodes(successors(cur)), fringe)
    return None
29
def graphsearch(qfun, fringe):
    expanded = {}                # best node seen for each state
    while len(fringe) > 0:
        cur = fringe[0]
        fringe = fringe[1:]
        if goalp(cur):
            return cur
        # re-expand a state only if cur reaches it more cheaply
        if not (cur.state in expanded and
                expanded[cur.state].gval < cur.gval):
            expanded[cur.state] = cur
            fringe = qfun(makeNodes(successors(cur)), fringe)
    return None
30
def iterativeDeepening(start):
    result = None
    depthlim = 1
    startnode = Node(start)
    while not result:
        result = depthLimSearch([startnode], depthlim)
        depthlim = depthlim + 1
    return result

def depthLimSearch(fringe, depthlim):
    while len(fringe) > 0:
        cur = fringe[0]
        fringe = fringe[1:]
        if goalp(cur):
            return cur
        if cur.depth < depthlim:
            # depth-first: push successors onto the front
            fringe = makeNodes(successors(cur)) + fringe
    return None
31
Bidirectional search
  • Search forward from the initial state and
    backward from the goal
  • Stop when the two searches meet in the middle
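The idea above can be sketched as a layer-by-layer bidirectional breadth-first search; a minimal illustration, assuming an undirected graph (so predecessors equal successors) given as an adjacency dict:

```python
def bidirectional_search(graph, start, goal):
    # Grow one frontier forward from start and one backward from goal,
    # one layer at a time; stop when the two searches meet.
    if start == goal:
        return start
    front, back = {start}, {goal}          # current frontiers
    seen_f, seen_b = {start}, {goal}       # everything reached so far
    while front and back:
        # advance the forward frontier by one layer
        front = {n for s in front for n in graph[s]} - seen_f
        if front & seen_b:
            return (front & seen_b).pop()  # meeting point
        seen_f |= front
        # advance the backward frontier by one layer
        back = {n for s in back for n in graph[s]} - seen_b
        if back & seen_f:
            return (back & seen_f).pop()
        seen_b |= back
    return None                            # frontiers never met
```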

[Diagram: two search frontiers, one growing from
Start and one from Goal, meeting in the middle]
32
Bidirectional Search: Go from Timisoara to Bucharest
[Map: the Romania road network (Oradea, Zerind, Arad,
Timisoara, Lugoj, Mehadia, Dobreta, Craiova, Sibiu,
Rimnicu Vilcea, Fagaras, Pitesti, Bucharest, Giurgiu,
Urziceni, Hirsova, Eforie, Vaslui, Iasi, Neamt)]
33
Bidirectional search
  • Bidirectional search merits:
  • Big difference for problems with branching factor
    b in both directions
  • A solution of length d will be found in
    O(2b^(d/2)) = O(b^(d/2)) steps
  • For b = 10 and d = 6, only 2,222 nodes are needed
    instead of 1,111,111 for breadth-first search
  • Bidirectional search issues:
  • Predecessors of a node need to be generated
  • Difficult when operators are not reversible
  • For each node, check if it appeared in the other
    search
  • Needs a hash table of size O(b^(d/2))
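The numbers on this slide can be checked with a few lines of arithmetic: a complete search tree of branching factor b and depth k has 1 + b + ... + b^k nodes, and bidirectional search grows two trees of depth d/2:

```python
b, d = 10, 6

def tree_nodes(b, k):
    # nodes in a complete search tree of branching factor b, depth k
    return sum(b ** i for i in range(k + 1))

bfs_nodes = tree_nodes(b, d)             # breadth-first to depth d
bidir_nodes = 2 * tree_nodes(b, d // 2)  # two trees of depth d/2
print(bfs_nodes, bidir_nodes)            # 1111111 2222
```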