Inference for Propositional Logic and Review

1
Inference for Propositional Logic and Review
  • Reading: Ch. 8
  • New TA: Vijay Anand Nagarajan
  • Office Hours: Fridays 5-7pm in TA Room

3
Inference Properties
  • Inference method A is sound (or truth-preserving)
    if it only derives entailed sentences
  • Inference method A is complete if it can derive
    any sentence that is entailed
  • A proof is a record of the progress of a sound
    inference algorithm.

4
Other Types of Inference
  • Model Checking
  • Forward chaining with modus ponens
  • Backward chaining with modus ponens

5
Model Checking
  • Enumerate all possible worlds
  • Restrict to possible worlds in which the KB is
    true
  • Check whether the goal is true in those worlds or
    not
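The three steps above can be sketched in Python. This is a minimal illustration (not from the slides), assuming the KB and the goal sentence are encoded as predicates over a "world" dictionary of truth values:

```python
from itertools import product

def entails(kb, goal, symbols):
    """Check KB |= goal by enumerating every possible world
    (truth assignment) and testing goal in the worlds where KB holds."""
    for values in product([True, False], repeat=len(symbols)):
        world = dict(zip(symbols, values))
        if kb(world) and not goal(world):
            return False  # found a KB-world where goal fails: no entailment
    return True

# Hypothetical KB: P, and P -> Q.  Does it entail Q?  Does it entail R?
kb = lambda w: w["P"] and (not w["P"] or w["Q"])
print(entails(kb, lambda w: w["Q"], ["P", "Q"]))       # True
print(entails(kb, lambda w: w["R"], ["P", "Q", "R"]))  # False (R unconstrained)
```

The loop is exactly the O(2^n) enumeration the next slide warns about: one iteration per possible world.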

6
Wumpus Reasoning
  • Percepts: nothing in [1,1], breeze in [2,1]
  • Assume the agent has moved to [2,1]
  • Goal: where are the pits?
  • Construct the models of the KB based on the rules
    of the world
  • Use entailment to determine knowledge about pits

8
Constructing the KB
10
Properties of Model Checking
  • Sound because it directly implements entailment
  • Complete because it works for any KB and sentence
    α to prove, and always terminates
  • Problem: there can be far too many worlds to
    check
  • O(2^n) when the KB and α have n variables in total

11
Inference as Search
  • State: current set of sentences
  • Operator: sound inference rules to derive new
    entailed sentences from a set of sentences
  • Can be goal directed if there is a particular
    goal sentence we have in mind
  • Can also try to enumerate every entailed sentence

20
Example
21
Complexity
  • N propositions, M rules
  • Every possible fact can be established with at
    most N linear passes over the database
  • Complexity: O(NM)
  • Forward chaining with Modus Ponens is complete
    for Horn logic
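The forward-chaining procedure for Horn logic can be sketched in a few lines. This is my own illustration (rules and facts are made up), using the standard premise-counting bookkeeping:

```python
from collections import deque

def fc_entails(rules, facts, query):
    """Forward chaining with Modus Ponens over Horn clauses.
    rules: list of (premise_set, conclusion); facts: known atoms."""
    count = {i: len(prem) for i, (prem, _) in enumerate(rules)}
    inferred = set()
    agenda = deque(facts)
    while agenda:
        p = agenda.popleft()
        if p == query:
            return True
        if p in inferred:
            continue
        inferred.add(p)
        for i, (prem, concl) in enumerate(rules):
            if p in prem:
                count[i] -= 1          # one fewer premise outstanding
                if count[i] == 0:
                    agenda.append(concl)  # rule fires: conclude
    return False

# Hypothetical Horn rules: A & B -> C, C -> D
rules = [({"A", "B"}, "C"), ({"C"}, "D")]
print(fc_entails(rules, ["A", "B"], "D"))  # True
```

Each rule fires at most once, which is what gives the O(NM) bound stated above.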

23
Example
26
Review for Midterm
  • Concepts you should know
  • Search algorithms
  • Depth-first, breadth-first, iterative deepening,
    A*, greedy, hill-climbing, beam
  • Constraint propagation
  • Game playing
  • Propositional logic and inference

27
Midterm format
  • Multiple choice
  • Short answer questions
  • Problem solving
  • Essay
  • An example midterm will be posted under links

28
Concepts
  • Any words in yellow or light blue or pink on
    slides

29
Uninformed Search
  • Depth-first
  • Breadth-first
  • Iterative Deepening

30
Formulating Problems as Search
  • Given an initial state and a goal, find the
    sequence of actions leading through a sequence of
    states to the final goal state.
  • Terms:
  • Successor function: given a state, returns
    ⟨action, successor⟩ pairs
  • State space: the set of all states reachable from
    the initial state
  • Path: a sequence of states connected by actions
  • Goal test: is a given state the goal state?
  • Path cost: function assigning a numeric cost to
    each path
  • Solution: a path from initial state to goal state

31
A different view of the same problem for
visualization: Breadth-first
  • OPEN = start node; CLOSED = empty
  • While OPEN is not empty do
  • Remove leftmost state from OPEN, call it X
  • If X = goal state, return success
  • Put X on CLOSED
  • SUCCESSORS = Successor function(X)
  • Remove any successors already on OPEN or CLOSED
  • Put remaining successors on right end of OPEN
  • End while

32
A different view of the same problem for
visualization: Depth-first
  • OPEN = start node; CLOSED = empty
  • While OPEN is not empty do
  • Remove leftmost state from OPEN, call it X
  • If X = goal state, return success
  • Put X on CLOSED
  • SUCCESSORS = Successor function(X)
  • Remove any successors already on OPEN or CLOSED
  • Put remaining successors on left end of OPEN
  • End while
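The two OPEN/CLOSED loops differ only in which end of OPEN receives the successors. A runnable sketch of both (my own illustration, on a toy graph, not from the slides):

```python
from collections import deque

def graph_search(start, goal, successors, strategy="bfs"):
    """OPEN/CLOSED search as on the slides: BFS appends successors to the
    right end of OPEN, DFS to the left; both remove from the left."""
    OPEN, CLOSED = deque([start]), set()
    while OPEN:
        x = OPEN.popleft()
        if x == goal:
            return True
        CLOSED.add(x)
        new = [s for s in successors(x) if s not in CLOSED and s not in OPEN]
        if strategy == "bfs":
            OPEN.extend(new)               # right end: breadth-first
        else:
            OPEN.extendleft(reversed(new)) # left end: depth-first
    return False

# Toy graph: 1 -> {2, 3}, 2 -> {4}, 3 -> {4}
graph = {1: [2, 3], 2: [4], 3: [4], 4: []}
print(graph_search(1, 4, lambda n: graph.get(n, [])))  # True
```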

33
Can we combine benefits of both?
  • Depth-limited search
  • Select some depth limit and explore the problem
    using DFS down to that limit
  • How do we select the limit?
  • Iterative deepening
  • DFS with depth 1
  • DFS with depth 2, and so on, up to depth d
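A minimal sketch of the depth-limited / iterative-deepening idea above. This assumes an acyclic successor function (a graph with cycles would also need a CLOSED list per depth-limited pass):

```python
def depth_limited(node, goal, successors, limit):
    """DFS that refuses to expand below the given depth limit."""
    if node == goal:
        return True
    if limit == 0:
        return False
    return any(depth_limited(s, goal, successors, limit - 1)
               for s in successors(node))

def iterative_deepening(start, goal, successors, max_depth=50):
    """Run depth-limited DFS with limits 1, 2, ... up to max_depth."""
    return any(depth_limited(start, goal, successors, d)
               for d in range(1, max_depth + 1))

# Toy tree: 1 -> {2, 3}, 2 -> {4}
tree = {1: [2, 3], 2: [4], 3: [], 4: []}
print(iterative_deepening(1, 4, lambda n: tree.get(n, [])))  # True
```

Shallow levels are re-expanded on every pass, but since the deepest level dominates the node count, the total work stays O(b^d).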

34
Complexity Analysis
  • Completeness: is the algorithm guaranteed to find
    a solution when there is one?
  • Optimality: does the strategy find the optimal
    solution?
  • Time: how long does it take to find a solution?
  • Space: how much memory is needed to perform the
    search?
  • Is this notion of completeness the same as
    completeness in logic?

35
Cost variables
  • Time: number of nodes generated
  • Space: maximum number of nodes stored in memory
  • Branching factor b:
  • Maximum number of successors of any node
  • Depth d:
  • Depth of the shallowest goal node
  • Path length m:
  • Maximum length of any path in the state space

36
Complexity: BFS vs. DFS
  • DFS
  • Not optimal
  • Time: 1 + b + b^2 + b^3 + ... + b^m = O(b^m)
  • Space: O(bm), i.e. linear
  • Complete in finite spaces; fails in infinite-depth
    spaces
  • BFS
  • Optimal
  • Time: 1 + b + b^2 + b^3 + ... + b^d + (b^(d+1) - b)
    = O(b^(d+1))
  • Space: O(b^(d+1)); space is the big problem
  • Complete if b is finite

37
Informed Search
  • Best-first
  • A*
  • Greedy
  • Hill climbing
  • Variants
  • Randomness, simulated annealing, local beam
    search
  • Online search will not be on the midterm

38
Greedy Search
  • OPEN = start node; CLOSED = empty
  • While OPEN is not empty do
  • Remove leftmost state from OPEN, call it X
  • If X = goal state, return success
  • Put X on CLOSED
  • SUCCESSORS = Successor function(X)
  • Remove any successors already on OPEN or CLOSED
  • Compute the heuristic function for each node
  • Put remaining successors on either end of OPEN
  • Sort nodes on OPEN by value of heuristic function
  • End while
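A heap-based sketch of the greedy loop above (my own illustration): instead of re-sorting OPEN on every pass, it keeps OPEN as a priority queue ordered by h(n), which is equivalent but cheaper.

```python
import heapq

def greedy_search(start, goal, successors, h):
    """Greedy best-first: always expand the OPEN node with the lowest
    heuristic value h(n), ignoring path cost entirely."""
    OPEN = [(h(start), start)]
    CLOSED = set()
    while OPEN:
        _, x = heapq.heappop(OPEN)      # leftmost = lowest h
        if x == goal:
            return True
        if x in CLOSED:
            continue                    # skip stale duplicates
        CLOSED.add(x)
        for s in successors(x):
            if s not in CLOSED:
                heapq.heappush(OPEN, (h(s), s))
    return False

# Toy example: walk the integer line 0..10 toward 7.
succ = lambda n: [m for m in (n - 1, n + 1) if 0 <= m <= 10]
print(greedy_search(0, 7, succ, h=lambda n: abs(7 - n)))  # True
```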

39
A* Search
  • Try to expand the node that is on the least-cost
    path to the goal
  • Evaluation function f(n):
  • f(n) = g(n) + h(n)
  • h(n) is the heuristic function: estimated cost
    from the node to the goal
  • g(n) is the cost from the initial state to the node
  • f(n) is the estimated cost of the cheapest solution
    that passes through n
  • If h(n) is an underestimate of the true cost to
    the goal:
  • A* is complete
  • A* is optimal
  • A* is optimally efficient: no other algorithm
    using h(n) is guaranteed to expand fewer states
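A compact A* sketch of the definitions above. The slides give only the shape of the algorithm, so the `best_g` bookkeeping here is one common way (an assumption, not the slides' method) of handling states re-found via cheaper paths:

```python
import heapq

def astar(start, goal, successors, h):
    """A*: expand the node with lowest f(n) = g(n) + h(n).
    successors(n) yields (neighbor, step_cost) pairs."""
    OPEN = [(h(start), 0, start)]          # entries are (f, g, node)
    best_g = {start: 0}
    while OPEN:
        f, g, x = heapq.heappop(OPEN)
        if x == goal:
            return g                        # cost of the path found
        if g > best_g.get(x, float("inf")):
            continue                        # stale entry, skip
        for s, c in successors(x):
            g2 = g + c
            if g2 < best_g.get(s, float("inf")):
                best_g[s] = g2
                heapq.heappush(OPEN, (g2 + h(s), g2, s))
    return None

# Toy graph: S-A-G costs 1+5, S-B-G costs 4+1; h = 0 (trivially admissible).
graph = {"S": [("A", 1), ("B", 4)], "A": [("G", 5)], "B": [("G", 1)], "G": []}
print(astar("S", "G", lambda n: graph[n], h=lambda n: 0))  # 5
```

With h = 0 everywhere, A* degrades to uniform-cost search; a tighter admissible h would expand fewer nodes while returning the same cost.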

40
Admissible heuristics
  • A heuristic that never overestimates the cost to
    the goal
  • h1 and h2 are admissible heuristics
  • Consistency: the estimated cost of reaching the
    goal from n is no greater than the step cost of
    getting to n' plus the estimated cost to the goal
    from n'
  • h(n) <= c(n, a, n') + h(n')
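The consistency inequality can be checked mechanically over every edge of a graph. A small illustration with made-up edge costs and heuristic values:

```python
def is_consistent(graph, h):
    """Check h(n) <= c(n, a, n') + h(n') for every edge (n, n').
    A consistent (monotone) heuristic is also admissible."""
    return all(h[n] <= c + h[s]
               for n, edges in graph.items()
               for s, c in edges)

# Hypothetical graph: S --2--> A --3--> G
graph = {"S": [("A", 2)], "A": [("G", 3)], "G": []}
print(is_consistent(graph, {"S": 4, "A": 3, "G": 0}))  # True
print(is_consistent(graph, {"S": 6, "A": 3, "G": 0}))  # False (6 > 2 + 3)
```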

41
Local Search Algorithms
  • Operate using a single current state
  • Move only to neighbors of the state
  • Paths followed by search are not retained
  • Iterative improvement
  • Keep a single current state and try to improve it
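Iterative improvement as steepest-ascent hill climbing, on a made-up one-dimensional objective (my own illustration):

```python
def hill_climb(state, neighbors, value):
    """Steepest-ascent hill climbing: repeatedly move to the best
    neighbor; stop when no neighbor improves on the current state."""
    while True:
        best = max(neighbors(state), key=value)
        if value(best) <= value(state):
            return state              # local maximum reached
        state = best

# Toy objective with a single peak at x = 3 on the integers 0..10.
f = lambda x: -(x - 3) ** 2
nbrs = lambda x: [n for n in (x - 1, x + 1) if 0 <= n <= 10]
print(hill_climb(0, nbrs, f))  # 3
```

Only the current state is kept; the path taken is never stored, which is exactly the memory advantage (and the local-maximum weakness) noted on the next slides.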

42
Steepest Ascent
44
Problems for hill climbing
  • When higher heuristic values are better, we seek
    maxima (objective functions); when lower values
    are better, we seek minima (cost functions)
  • Local maxima: a local maximum is a peak that is
    higher than each of its neighboring states, but
    lower than the global maximum
  • Ridges: a sequence of local maxima
  • Plateaux: an area of the state-space landscape
    where the evaluation function is flat

45
Some solutions
  • Stochastic hill-climbing
  • Choose at random from among the uphill moves
  • First-choice hill climbing
  • Generates successors randomly until one is
    generated that is better than the current state
  • Random-restart hill climbing
  • Keep restarting from randomly generated initial
    states, stopping when goal is found
  • Simulated annealing
  • Generate a random move. Accept if improvement.
    Otherwise accept with continually decreasing
    probability.
  • Local beam search
  • Keep track of k states rather than just 1
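The simulated-annealing bullet above translates almost directly into code. The temperature schedule and constants below are arbitrary choices for illustration, not values from the slides:

```python
import math
import random

def simulated_annealing(state, neighbors, value, t0=10.0, cooling=0.95,
                        steps=1000):
    """Generate a random move; accept it if it improves, otherwise
    accept with probability exp(delta / T), where T decays each step."""
    t = t0
    for _ in range(steps):
        nxt = random.choice(neighbors(state))
        delta = value(nxt) - value(state)
        if delta > 0 or random.random() < math.exp(delta / t):
            state = nxt
        t = max(t * cooling, 1e-9)   # cool, but avoid division by zero
    return state

# Same toy landscape as before: single peak at x = 3 on 0..10.
random.seed(0)
f = lambda x: -(x - 3) ** 2
nbrs = lambda x: [n for n in (x - 1, x + 1) if 0 <= n <= 10]
print(simulated_annealing(0, nbrs, f))
```

Early on (high T) the walk is nearly random; as T falls, the loop behaves more and more like greedy hill climbing.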

46
Constraint Propagation
47
CSP algorithm
  • Depth-first search is often used
  • Initial state: the empty assignment; all
    variables are unassigned
  • Successor fn: assign a value to any variable,
    provided there are no conflicts with constraints
  • All CSP search algorithms generate successors by
    considering possible assignments for only a
    single variable at each node in the search tree
  • Goal test: the current assignment is complete
  • Path cost: a constant cost for every step
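The CSP formulation above maps to a short backtracking routine: one variable per tree node, only conflict-free values tried, goal test = complete assignment. The map-colouring variables and constraints below are a made-up example:

```python
def backtrack(assignment, variables, domains, consistent):
    """Depth-first backtracking search for a CSP."""
    if len(assignment) == len(variables):
        return assignment                       # goal test: complete
    var = next(v for v in variables if v not in assignment)
    for val in domains[var]:
        if consistent(var, val, assignment):    # no constraint conflicts
            result = backtrack({**assignment, var: val},
                               variables, domains, consistent)
            if result is not None:
                return result
    return None                                 # dead end: backtrack

# Hypothetical 3-variable map colouring; all three regions are adjacent.
neighbors = {"WA": ["NT", "Q"], "NT": ["WA", "Q"], "Q": ["WA", "NT"]}
ok = lambda v, val, a: all(a.get(u) != val for u in neighbors[v])
sol = backtrack({}, ["WA", "NT", "Q"],
                {v: ["r", "g", "b"] for v in neighbors}, ok)
print(sol)  # {'WA': 'r', 'NT': 'g', 'Q': 'b'}
```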

48
Local search
  • Complete-state formulation
  • Every state is a complete assignment that might or
    might not satisfy the constraints
  • Hill-climbing methods are appropriate

50
General purpose methods for efficient
implementation
  • Which variable should be assigned next?
  • In what order should its values be tried?
  • Can we detect inevitable failure early?
  • Can we take advantage of problem structure?

51
Order
  • Choose the most constrained variable first
  • The variable with the fewest remaining values
  • Minimum Remaining Values (MRV) heuristic
  • What if there is more than one?
  • Tie breaker: most constraining variable
  • Choose the variable with the most constraints on
    remaining variables
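The MRV heuristic is a one-line selection over the unassigned variables; a toy illustration with hypothetical domains:

```python
def mrv(variables, assignment, domains):
    """Minimum Remaining Values: pick the unassigned variable with the
    fewest legal values left (the most constrained variable)."""
    unassigned = [v for v in variables if v not in assignment]
    return min(unassigned, key=lambda v: len(domains[v]))

# B has only one legal value left, so MRV chooses it first.
domains = {"A": ["r", "g", "b"], "B": ["r"], "C": ["r", "g"]}
print(mrv(["A", "B", "C"], {}, domains))  # B
```

A most-constraining-variable tie breaker would additionally count constraints on the remaining variables; that refinement is omitted here for brevity.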

52
Order on value choice
  • Given a variable, choose the least constraining
    value
  • The value that rules out the fewest values in the
    remaining variables

53
Forward Checking
  • Keep track of remaining legal values for
    unassigned variables
  • Terminate search when any variable has no legal
    values
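Forward checking as described above, sketched over hypothetical domains (binary inequality constraints between neighboring variables are assumed for the example):

```python
def forward_check(var, value, domains, neighbors):
    """After assigning var = value, prune that value from each
    neighbor's domain; report failure as soon as any neighbor is left
    with no legal value."""
    new = dict(domains)
    for n in neighbors[var]:
        new[n] = [x for x in domains[n] if x != value]
        if not new[n]:
            return None          # dead end detected early
    return new

# Assign WA = r; NT loses r but keeps g, so search can continue.
neighbors = {"WA": ["NT"], "NT": ["WA"]}
doms = {"WA": ["r"], "NT": ["r", "g"]}
print(forward_check("WA", "r", doms, neighbors))  # {'WA': ['r'], 'NT': ['g']}
```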

54
Game Playing
  • Minimax
  • Alpha-beta pruning
  • Evaluation function (what is the difference
    between a cost function, a utility function, a
    heuristic function, an evaluation function?)
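Minimax with alpha-beta pruning can be sketched generically; the tiny two-ply game tree below is invented for illustration, with leaves carrying their evaluation directly:

```python
def alphabeta(node, depth, alpha, beta, maximizing, children, evaluate):
    """Minimax with alpha-beta pruning: stop exploring a branch as soon
    as it cannot influence the value backed up to the root."""
    kids = children(node)
    if depth == 0 or not kids:
        return evaluate(node)
    if maximizing:
        best = float("-inf")
        for c in kids:
            best = max(best, alphabeta(c, depth - 1, alpha, beta, False,
                                       children, evaluate))
            alpha = max(alpha, best)
            if alpha >= beta:
                break            # beta cutoff: MIN above won't allow this
        return best
    best = float("inf")
    for c in kids:
        best = min(best, alphabeta(c, depth - 1, alpha, beta, True,
                                   children, evaluate))
        beta = min(beta, best)
        if alpha >= beta:
            break                # alpha cutoff: MAX above won't allow this
    return best

# MAX at the root, MIN at L and R; R's second leaf (9) is pruned.
tree = {"root": ["L", "R"], "L": [3, 5], "R": [2, 9]}
kids = lambda n: tree.get(n, []) if isinstance(n, str) else []
print(alphabeta("root", 2, float("-inf"), float("inf"), True, kids,
                lambda n: n))  # 3
```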

55
Logic
  • Model checking
  • Forward and backward chaining
  • And-or trees
  • Resolution theorem proving

56
Knowledge Representation
  • Semantic nets
  • Frames
  • KL-one
  • Ontological commitments
  • Inheritance
  • Properties you want to represent: generics vs.
    instances