CS G120 Artificial Intelligence - PowerPoint PPT Presentation

Transcript and Presenter's Notes

Title: CS G120 Artificial Intelligence


1
CS G120 Artificial Intelligence
  • Prof. C. Hafner
  • Class Notes Feb 19, 2009

2
A question about unification
  • Unify Likes(y, y) with Likes(x, John)
  • Can {x/John, y/John} and {x/John, y/x} both be
    MGUs?
  • First step → {y/x}
  • Second step: unify-var(y, John, θ)
  • Adding x/John to θ yields what?
  • Push(pair, theta) → {x/John, y/x}
  • Compose(pair, theta): add pair to theta, and for
    each element e that is a rhs of an element in θ,
    replace e with subst(pair, e) → {x/John,
    y/John}
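The compose step above can be sketched in Python. This is a minimal illustration, assuming substitutions are dicts mapping variable names to term strings; `compose` and `subst_term` are illustrative names, not the course's required API.

```python
# Substitutions are dicts mapping variable names to term strings.
# `compose` and `subst_term` are illustrative names (an assumption,
# not the course API).

def subst_term(pair, term):
    """Apply a single binding (var, value) to a term string."""
    var, value = pair
    return value if term == var else term

def compose(pair, theta):
    """Add `pair` to theta, then rewrite every right-hand side of
    theta so no binding still mentions the newly bound variable."""
    var, value = pair
    composed = {v: subst_term(pair, t) for v, t in theta.items()}
    composed[var] = value
    return composed

# Unifying Likes(y, y) with Likes(x, John):
theta = {'y': 'x'}                      # first step: y/x
theta = compose(('x', 'John'), theta)   # second step adds x/John
print(theta)                            # {'y': 'John', 'x': 'John'}
```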

3
The unification algorithm
And(Op(x), Op(y))
4
The unification algorithm
Using compose
5
Assignment 4b. Now Due Feb 23, 10am
  • In one file:
  • standardize(FOLexp) returns a FOLexp object with
    the variables replaced by new variables with
    unique names.
  • unify(FOLexp1, FOLexp2, θ)
  • returns None or a substitution, which is a list
    of variable/term pairs (2-element lists)
  • Note: if using the Compose method you must also
    implement
  • subst(theta, FOLexp) and apply it correctly.
  • These programs must be accompanied by GOOD
    test/demo datasets, and programs testStandardize()
    and testUnify() that run them.
  • These programs must be implemented using
    recursion for full credit.

6
FOLexp interface

members:
  kind -- 'var', 'const', or 'compound'
  name -- (for var or const)
  op   -- (for compound)
  args -- (for compound) a list of FOLexps

methods:
  isVar, isConst, isCompound -- returns True or False
  firstArg -- returns an FOLexp or None
  restArgs -- returns a list of FOLexp or None
  rprint   -- pretty-prints the FOLexp with indentation

FOLexp(stringExp) -- the constructor requires an
argument. The argument is a list of strings
representing a tokenized logical expression. Any
string beginning with a lower-case letter is a
variable, and all other symbols must begin with an
upper-case letter (following your textbook).

Here are some examples of input strings:
  x              -- a variable
  John           -- a constant
  Likes(x, John) -- a compound
7
def testFOLexp(filename):
    f = open(filename)
    for aLine in f.readlines():
        exp = FOLexp(convert(aLine))
        exp.rprint("")

def convert(sExp):
    # returns a list of strings representing the logical expr.
    # add blanks for tokenizing
    sExp = sExp.replace('(', ' ( ')
    sExp = sExp.replace(')', ' ) ')
    sExp = sExp.replace(',', ' , ')
    # now convert string to a list of token strings
    return sExp.split()

Contents of test file:
    x
    John
    Likes(x, John)
    Likes(Mother(Father(John)), Brother(Sally, x))
8
class FOLexp:
    kind = 'unknown'
    name = ''
    op = ''
    args = []

    def __init__(self, e):
        # recursive function to build the structure
        # e is a list of strings that needs to be parsed into a FOLexp
        if len(e) == 1:
            self.name = e[0]
            if self.name[0].islower():
                self.kind = 'var'
            else:
                self.kind = 'const'
        # that was the easy case, now we have to find the arguments
        # and use slices to recursively build the component objects
9
        else:
            self.kind = 'compound'
            self.op = e[0]
            position = 2   # get ready to parse the argument list
            while position < len(e) - 1:   # skip the final ')'
                # here we parse the argument list:
                # get end pt of next arg
                endpos = position + self.getNext(e[position:])
                self.args.append(FOLexp(e[position:endpos]))
                position = endpos + 1
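The parser above relies on a getNext method that is not shown on these slides. One plausible implementation, sketched here as a standalone function (an assumption about the intended behavior, not the course's code), scans for the end of the next argument at the current nesting depth:

```python
# Sketch of the getNext helper the slide's parser assumes: given a
# token list starting at an argument, return how many tokens that
# argument spans, stopping at a top-level ',' or the parent's ')'.

def get_next(tokens):
    depth = 0
    for i, tok in enumerate(tokens):
        if tok == '(':
            depth += 1
        elif tok == ')':
            if depth == 0:
                return i      # closing ')' of the enclosing compound
            depth -= 1
        elif tok == ',' and depth == 0:
            return i          # argument ends before this comma
    return len(tokens)

print(get_next(['x', ',', 'John', ')']))                      # 1
print(get_next(['Mother', '(', 'John', ')', ',', 'x', ')']))  # 4
```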
10
Uninformed search strategies (cont.)
  • Uninformed search strategies use only the
    information available in the problem definition
  • Breadth-first search
  • Uniform-cost search
  • Depth-first search
  • Depth-limited search
  • Iterative deepening search

11
Explore the problems space by generating and
searching a tree
  • Root start state
  • Each node represents a state, children are all
    the next-states

12
Depth-limited search
  • depth-first search with depth limit l,
  • i.e., nodes at depth l have no successors
  • Recursive implementation
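The recursive implementation mentioned above is not reproduced in this transcript. A minimal sketch, in the spirit of the textbook's version that distinguishes a 'cutoff' result from outright failure (the integer-doubling example below is made up for illustration):

```python
# Sketch of recursive depth-limited search. States, successors, and
# the goal test are supplied as plain functions; all names here are
# illustrative, not the course's required interface.

def depth_limited_search(state, goal_test, successors, limit):
    """Return a path to a goal, 'cutoff', or None (failure)."""
    if goal_test(state):
        return [state]
    if limit == 0:
        return 'cutoff'
    cutoff_occurred = False
    for child in successors(state):
        result = depth_limited_search(child, goal_test, successors,
                                      limit - 1)
        if result == 'cutoff':
            cutoff_occurred = True
        elif result is not None:
            return [state] + result
    return 'cutoff' if cutoff_occurred else None

# Tiny example: search the integers by doubling / adding one.
succ = lambda n: [2 * n, n + 1]
print(depth_limited_search(1, lambda n: n == 5, succ, 3))  # [1, 2, 4, 5]
```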

13
Iterative deepening search
14
Iterative deepening search, l = 0
15
Iterative deepening search, l = 1
16
Iterative deepening search, l = 2
17
Iterative deepening search, l = 3
18
Iterative deepening search
  • Number of nodes generated in a depth-limited
    search to depth d with branching factor b:
  • NDLS = b^0 + b^1 + b^2 + … + b^(d-2) + b^(d-1) + b^d
  • Number of nodes generated in an iterative
    deepening search to depth d with branching factor
    b:
  • NIDS = (d+1)b^0 + d·b^1 + (d-1)b^2 + … + 3b^(d-2)
    + 2b^(d-1) + 1·b^d
  • For b = 10, d = 5:
  • NDLS = 1 + 10 + 100 + 1,000 + 10,000 + 100,000
    = 111,111
  • NIDS = 6 + 50 + 400 + 3,000 + 20,000 + 100,000
    = 123,456
  • Overhead = (123,456 - 111,111)/111,111 = 11%
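The figures above can be checked directly; this snippet recomputes NDLS, NIDS, and the overhead for b = 10, d = 5.

```python
# Node counts for depth-limited vs. iterative deepening search.
b, d = 10, 5
n_dls = sum(b**i for i in range(d + 1))                # b^0 + ... + b^d
n_ids = sum((d + 1 - i) * b**i for i in range(d + 1))  # (d+1)b^0 + ... + 1*b^d
print(n_dls)                                 # 111111
print(n_ids)                                 # 123456
print(round(100 * (n_ids - n_dls) / n_dls))  # 11 (percent overhead)
```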

19
Properties of iterative deepening search
  • Complete? Yes
  • Time? (d+1)b^0 + d·b^1 + (d-1)b^2 + … + b^d = O(b^d)
  • Space? O(bd)
  • Optimal? Yes, if step cost = 1

20
Summary of algorithms
21
Repeated states
  • Failure to detect repeated states can turn a
    linear problem into an exponential one!

22
Graph search
23
Informed search algorithms
  • Chapter 4

24
Outline
  • Heuristics
  • Best-first search
  • Greedy best-first search
  • A* search

A search strategy is defined by picking the order
of node expansion
25
Best-first search
  • Idea: use an evaluation function f(n) for each
    node
  • estimate of "desirability"
  • Expand most desirable unexpanded node
  • Implementation:
  • Order the nodes in fringe in decreasing order of
    desirability
  • Special cases:
  • greedy best-first search
  • A* search

26
Romania with step costs in km
27
Greedy best-first search
  • Evaluation function f(n) = h(n) (heuristic)
  • = estimate of cost from n to goal
  • e.g., hSLD(n) = straight-line distance from n to
    Bucharest
  • Greedy best-first search expands the node that
    appears to be closest to goal

28
Greedy best-first search example
29
Greedy best-first search example
30
Greedy best-first search example
31
Greedy best-first search example
32
Properties of greedy best-first search
  • Complete? No, can get stuck in loops, e.g., Iasi
    → Neamt → Iasi → Neamt → …
  • Time? O(b^m), but a good heuristic can give
    dramatic improvement
  • Space? O(b^m) -- keeps all nodes in memory
  • Optimal? No

33
A* search
  • Idea: avoid expanding paths that are already
    expensive
  • Evaluation function f(n) = g(n) + h(n)
  • g(n) = cost so far to reach n
  • h(n) = estimated cost from n to goal
  • f(n) = estimated total cost of path through n to
    goal
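The f(n) = g(n) + h(n) strategy can be sketched with a priority queue ordered on f. The graph and heuristic values below are toy data, not the Romania map, and `astar` is an illustrative name.

```python
# Minimal A* sketch over an explicit graph, with f(n) = g(n) + h(n).
import heapq

def astar(start, goal, neighbors, h):
    """neighbors(n) yields (next_state, step_cost) pairs."""
    fringe = [(h(start), 0, start, [start])]   # (f, g, state, path)
    best_g = {start: 0}
    while fringe:
        f, g, state, path = heapq.heappop(fringe)
        if state == goal:
            return path, g
        for nxt, cost in neighbors(state):
            g2 = g + cost
            if g2 < best_g.get(nxt, float('inf')):
                best_g[nxt] = g2
                heapq.heappush(fringe, (g2 + h(nxt), g2, nxt, path + [nxt]))
    return None, float('inf')

# Toy graph: edge costs chosen so the cheap path goes A -> B -> C -> D.
graph = {'A': [('B', 1), ('C', 4)], 'B': [('C', 1), ('D', 5)],
         'C': [('D', 2)], 'D': []}
h = {'A': 3, 'B': 2, 'C': 2, 'D': 0}   # admissible for this graph
path, cost = astar('A', 'D', lambda s: graph[s], lambda s: h[s])
print(path, cost)   # ['A', 'B', 'C', 'D'] 4
```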

34
A* search example
35
A* search example
36
A* search example
37
A* search example
38
A* search example
39
A* search example
40
Admissible heuristics
  • A heuristic h(n) is admissible if for every node
    n,
  • h(n) ≤ h*(n), where h*(n) is the true cost to
    reach the goal state from n.
  • An admissible heuristic never overestimates the
    cost to reach the goal, i.e., it is optimistic
  • Example: hSLD(n) (never overestimates the actual
    road distance)
  • Theorem: If h(n) is admissible, A* using
    TREE-SEARCH is optimal

41
Optimality of A* (proof)
  • Suppose some suboptimal goal G2 has been
    generated and is in the fringe. Let n be an
    unexpanded node in the fringe such that n is on a
    shortest path to an optimal goal G.
  • f(G2) = g(G2) > g(G) = f(G), since h = 0 at any
    goal and G2 is suboptimal
  • h(n) ≤ h*(n) since h is admissible
  • g(n) + h(n) ≤ g(n) + h*(n) = f(G), since n is on
    an optimal path to G
  • f(n) ≤ f(G)
  • Hence f(G2) > f(n), and A* will never select G2
    for expansion

42
Consistent heuristics
  • A heuristic is consistent if for every node n and
    every successor n' of n generated by any action
    a,
  • h(n) ≤ c(n,a,n') + h(n')
  • If h is consistent, we have
  • f(n') = g(n') + h(n')
  •       = g(n) + c(n,a,n') + h(n')
  •       ≥ g(n) + h(n)
  •       = f(n)
  • i.e., f(n) is non-decreasing along any path.
  • Theorem: If h(n) is consistent, A* using
    GRAPH-SEARCH is optimal

43
Properties of A*
  • Complete? Yes (unless there are infinitely many
    nodes with f ≤ f(G))
  • Time? Exponential
  • Space? Keeps all nodes in memory
  • Optimal? Yes

44
Admissible heuristics
  • E.g., for the 8-puzzle:
  • h1(n) = number of misplaced tiles
  • h2(n) = total Manhattan distance
  • (i.e., no. of squares from desired location of
    each tile)
  • h1(S) = ?
  • h2(S) = ?

45
Admissible heuristics
  • E.g., for the 8-puzzle:
  • h1(n) = number of misplaced tiles
  • h2(n) = total Manhattan distance
  • (i.e., no. of squares from desired location of
    each tile)
  • h1(S) = 8
  • h2(S) = 3+1+2+2+2+3+3+2 = 18
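Both heuristics can be computed directly. The start state S is not reproduced in this transcript; the sketch below assumes the textbook's usual example configuration (blank encoded as 0), which gives the slide's values of 8 and 18.

```python
# 8-puzzle heuristics. `start` is an assumed example state (the
# slide's figure is not in this transcript); 0 denotes the blank.
start = (7, 2, 4,
         5, 0, 6,
         8, 3, 1)
goal = (0, 1, 2,
        3, 4, 5,
        6, 7, 8)

def h1(state):
    """Number of misplaced tiles (blank excluded)."""
    return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

def h2(state):
    """Total Manhattan distance of tiles from their goal squares."""
    total = 0
    for i, tile in enumerate(state):
        if tile == 0:
            continue
        j = goal.index(tile)   # goal position of this tile
        total += abs(i // 3 - j // 3) + abs(i % 3 - j % 3)
    return total

print(h1(start), h2(start))   # 8 18
```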