Title: Limitation of Algorithm Power


1
Limitation of Algorithm Power
"Keep on the lookout for novel ideas that others
have used successfully. Your idea has to be
original only in its adaptation to the problem
you're working on." - Thomas Edison (1847-1931)
Topic 12
ITS033 Programming Algorithms
Asst. Prof. Dr. Bunyarit Uyyanonvara, IT Program,
Image and Vision Computing Lab, School of Information,
Computer and Communication Technology (ICT),
Sirindhorn International Institute of Technology (SIIT), Thammasat University
http://www.siit.tu.ac.th/bunyarit
bunyarit@siit.tu.ac.th, 02 5013505 ext. 2005
2
ITS033
Midterm
  • Topic 01 - Problems & Algorithmic Problem Solving
  • Topic 02 - Algorithm Representation & Efficiency Analysis
  • Topic 03 - State Space of a Problem
  • Topic 04 - Brute Force Algorithm
  • Topic 05 - Divide and Conquer
  • Topic 06 - Decrease and Conquer
  • Topic 07 - Dynamic Programming
  • Topic 08 - Transform and Conquer
  • Topic 09 - Graph Algorithms
  • Topic 10 - Minimum Spanning Tree
  • Topic 11 - Shortest Path Problem
  • Topic 12 - Coping with the Limitations of Algorithm Power
  • http://www.siit.tu.ac.th/bunyarit/its033.php
  • http://www.vcharkarn.com/vlesson/7

3
Overview
  • P, NP, and NP-complete problems
  • Coping with the limitations of algorithm power
  • Backtracking
  • Branch and bound
  • Approximation Algorithm

4
P, NP, NP-Complete
Topic 12.1
ITS033 Programming Algorithms
Asst. Prof. Dr. Bunyarit Uyyanonvara, IT Program,
Image and Vision Computing Lab, School of Information,
Computer and Communication Technology (ICT),
Sirindhorn International Institute of Technology (SIIT), Thammasat University
http://www.siit.tu.ac.th/bunyarit
bunyarit@siit.tu.ac.th, 02 5013505 ext. 2005
5
Problems
  • Some problems cannot be solved by any algorithm
  • Other problems can be solved algorithmically, but
    not in polynomial time
  • This topic deals with the question of
    intractability: which problems can and cannot be
    solved in polynomial time.
  • This well-developed area of theoretical computer
    science is called computational complexity.

6
Problems
  • In the study of the computational complexity of
    problems, the first concern of both computer
    scientists and computing professionals is whether
    a given problem can be solved in polynomial time
    by some algorithm.

7
Polynomial time
  • Definition:
  • We say that an algorithm solves a problem in
    polynomial time if its worst-case time efficiency
    belongs to O(p(n)), where p(n) is a polynomial of
    the problem's input size n.
  • Problems that can be solved in polynomial time
    are called tractable,
  • and ones that cannot be solved in polynomial
    time are called intractable.

8
Intractability
  • We cannot solve arbitrary instances of
    intractable problems in a reasonable amount of
    time unless such instances are very small.
  • Although there might be a huge difference
    between the running times in O(p(n)) for
    polynomials of drastically different degrees,
    there are very few useful algorithms with a
    polynomial degree higher than three.

9
P and NP Problems
  • Most problems discussed in this subject can be
    solved in polynomial time by some algorithm.
  • We can think of the problems that can be solved
    in polynomial time as a set, which is called P.

10
Class P
  • Definition: Class P is the class of decision
    problems that can be solved in polynomial time by
    (deterministic) algorithms.
  • This class of problems is called polynomial.
  • Examples (one is sketched below):
  • searching
  • element uniqueness
  • graph connectivity
  • graph acyclicity

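A minimal Python sketch (an added illustration, not from the slides) of one of these Class P examples: element uniqueness solved in polynomial time by sorting and comparing adjacent elements.

```python
# Element uniqueness in O(n log n): sort, then compare neighbours.
def all_elements_unique(items):
    """Return True if no two elements of `items` are equal."""
    ordered = sorted(items)                     # O(n log n) sort
    return all(ordered[i] != ordered[i + 1]     # O(n) adjacent comparisons
               for i in range(len(ordered) - 1))

print(all_elements_unique([3, 1, 4, 1, 5]))  # False (1 appears twice)
print(all_elements_unique([3, 1, 4, 5]))     # True
```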
11
Class NP
  • NP (Nondeterministic Polynomial) is the class of
    decision problems whose proposed solutions can be
    verified in polynomial time, i.e., problems
    solvable by a nondeterministic polynomial
    algorithm.

12
nondeterministic polynomial algorithm
  • A nondeterministic polynomial algorithm is an
    abstract two-stage procedure that:
  • generates a random string purported to solve the
    problem,
  • checks whether this solution is correct in
    polynomial time.
  • By definition, it solves the problem if it is
    capable of generating and verifying a solution on
    one of its tries (see the sketch below).

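A minimal Python sketch (an added illustration, not from the slides) of this two-stage generate-and-verify idea, using the partition problem: stage 1 guesses a labelling of the integers into two subsets, stage 2 verifies in polynomial time that the two subsets have equal sums.

```python
import random

def verify_partition(numbers, labels):
    """Polynomial-time verification: do the two labelled subsets have equal sums?"""
    left = sum(x for x, side in zip(numbers, labels) if side == 0)
    right = sum(x for x, side in zip(numbers, labels) if side == 1)
    return left == right

def guess_and_verify_partition(numbers, tries=10_000):
    """Simulate the nondeterministic algorithm by random guessing."""
    for _ in range(tries):
        guess = [random.randint(0, 1) for _ in numbers]   # stage 1: generate
        if verify_partition(numbers, guess):              # stage 2: verify
            return guess
    return None   # no certificate found among the attempted guesses

print(guess_and_verify_partition([1, 5, 11, 5]))  # e.g. [1, 1, 0, 1]: {11} vs {1, 5, 5}
```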
13
What problems are in NP?
  • Hamiltonian circuit existence
  • Traveling salesman problem
  • Knapsack problem
  • Partition problem: Is it possible to partition a
    set of n integers into two disjoint subsets with
    the same sum?

14
P ⊆ NP
  • All the problems in P can also be solved in this
    manner (but no guessing is necessary), so we
    have:
  • P ⊆ NP
  • However, we still have a big question: P = NP?

15
NP-Complete
  • A decision problem is NP-complete if it is in NP
    and every other problem in NP can be reduced to
    it in polynomial time. Examples:
  • Boolean satisfiability problem (SAT)
  • N-puzzle
  • Knapsack problem
  • Hamiltonian cycle problem
  • Traveling salesman problem
  • Subgraph isomorphism problem
  • Subset sum problem
  • Clique problem
  • Vertex cover problem
  • Independent set problem
  • Graph coloring problem

16
P, NP, NP-Complete

17
P = NP? Dilemma Revisited
  • P = NP would imply that every problem in NP,
    including all NP-complete problems, could be
    solved in polynomial time.
  • If a polynomial-time algorithm for just one
    NP-complete problem is discovered, then every
    problem in NP can be solved in polynomial time,
    i.e., P = NP.
  • Most but not all researchers believe that P ≠ NP,
    i.e., P is a proper subset of NP.

18
Coping with the Limitations of Algorithm Power
Topic 12.2
ITS033 Programming Algorithms
Asst. Prof. Dr. Bunyarit Uyyanonvara, IT Program,
Image and Vision Computing Lab, School of Information,
Computer and Communication Technology (ICT),
Sirindhorn International Institute of Technology (SIIT), Thammasat University
http://www.siit.tu.ac.th/bunyarit
bunyarit@siit.tu.ac.th, 02 5013505 ext. 2005
19
Solving NP-complete problems
  • At present, all known algorithms for NP-complete
    problems require time that is superpolynomial in
    the input size, and it is unknown whether there
    are any faster algorithms.
  • The following techniques can be applied to solve
    computational problems in general, and they often
    give rise to substantially faster algorithms:
  • Approximation: Instead of searching for an
    optimal solution, search for an "almost" optimal
    one.
  • Randomization: Use randomness to get a faster
    average running time, and allow the algorithm to
    fail with some small probability.
  • Restriction: By restricting the structure of the
    input (e.g., to planar graphs), faster algorithms
    are usually possible.
  • Parameterization: Often there are fast algorithms
    if certain parameters of the input are fixed.
  • Heuristic: An algorithm that works "reasonably
    well" in many cases, but for which there is no
    proof that it is both always fast and always
    produces a good result. Metaheuristic approaches
    are often used.

20
Tackling Difficult Combinatorial Problems
  • There are two principal approaches to tackling
    difficult combinatorial problems (NP-hard
    problems):
  • Use a strategy that guarantees solving the
    problem exactly but doesn't guarantee finding a
    solution in polynomial time.
  • Use an approximation algorithm that can find an
    approximate (sub-optimal) solution in polynomial
    time

21
Exact Solution Strategies
  • exhaustive search (brute force)
  • useful only for small instances
  • dynamic programming
  • applicable to some problems (e.g., the knapsack
    problem)
  • backtracking
  • eliminates some unnecessary cases from
    consideration
  • yields solutions in reasonable time for many
    instances but worst case is still exponential
  • branch-and-bound
  • further refines the backtracking idea for
    optimization problems

22
Backtracking
  • The principal idea is to construct solutions one
    component at a time and evaluate such partially
    constructed candidates as follows.
  • If a partially constructed solution can be
    developed further without violating the problem's
    constraints, it is done by taking the first
    remaining legitimate option for the next
    component. If there is no legitimate option for
    the next component, no alternatives for any
    remaining component need to be considered.
  • In this case, the algorithm backtracks to replace
    the last component of the partially constructed
    solution with its next option.

23
Backtracking
  • This kind of processing is often implemented by
    constructing a tree of choices being made, called
    the state-space tree.
  • Its root represents an initial state before the
    search for a solution begins.
  • The nodes of the first level in the tree
    represent the choices made for the first
    component of a solution,
  • the nodes of the second level represent the
    choices for the second component, and so on.
  • A node in a state-space tree is said to be
    promising if it corresponds to a partially
    constructed solution that may still lead to a
    complete solution; otherwise, it is called
    nonpromising.

24
Backtracking
  • Leaves represent either nonpromising dead ends or
    complete solutions found by the algorithm.
  • If the current node turns out to be nonpromising,
    the algorithm backtracks to the node's parent to
    consider the next possible option for its last
    component
  • if there is no such option, it backtracks one
    more level up the tree, and so on.

25
General Remark
26
Backtracking
  • Construct the state-space tree:
  • nodes: partial solutions
  • edges: choices in extending partial solutions
  • Explore the state-space tree using depth-first
    search
  • Prune nonpromising nodes:
  • DFS stops exploring subtrees rooted at nodes that
    cannot lead to a solution and backtracks to such
    a node's parent to continue the search (a generic
    skeleton is sketched below)

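A generic backtracking skeleton in Python (an added illustration, not code from the slides). The hooks `candidates`, `is_promising`, and `is_complete` are hypothetical problem-specific functions supplied by the caller; the routine performs a depth-first search of the state-space tree and prunes nonpromising nodes.

```python
# Depth-first exploration of the state-space tree with pruning.
def backtrack(partial, candidates, is_promising, is_complete, solutions):
    if is_complete(partial):                 # leaf: a complete solution
        solutions.append(list(partial))
        return
    for choice in candidates(partial):       # children of the current node
        partial.append(choice)               # extend the partial solution
        if is_promising(partial):            # explore only promising nodes
            backtrack(partial, candidates, is_promising, is_complete, solutions)
        partial.pop()                        # backtrack: undo the last component
```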
27
Example n-Queens Problem
  • The problem is to place n queens on an n-by-n
    chessboard so that no two queens attack each
    other by being in the same row or in the same
    column or on the same diagonal.

28
N-Queens Problem
  • We start with the empty board and then place
    queen 1 in the first possible position of its
    row, which is in column 1 of row 1.
  • Then we place queen 2, after trying
    unsuccessfully columns 1 and 2, in the first
    acceptable position for it, which is square
    (2,3), the square in row 2 and column 3. This
    proves to be a dead end because there is no
    acceptable position for queen 3. So, the
    algorithm backtracks and puts queen 2 in the next
    possible position at (2,4).
  • Then queen 3 is placed at (3,2), which proves to
    be a dead end because there is no acceptable
    position for queen 4. The algorithm then
    backtracks all the way to queen 1 and moves it to
    (1,2). Queen 2 then goes to (2,4), queen 3 to
    (3,1), and queen 4 to (4,3), which is a solution
    to the problem (a sketch follows below).

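A backtracking sketch of the n-queens problem in Python (an added illustration, not code from the slides). Rows are filled one at a time, so only column and diagonal conflicts need to be checked; a row with no safe column is a nonpromising node and triggers backtracking.

```python
# queens[r] = c means the queen of row r sits in column c (0-indexed).
def solve_n_queens(n, queens=None):
    queens = [] if queens is None else queens
    row = len(queens)
    if row == n:                            # all queens placed: a solution
        return list(queens)
    for col in range(n):                    # try every column of this row
        safe = all(col != c and abs(col - c) != row - r
                   for r, c in enumerate(queens))
        if safe:                            # promising: extend and recurse
            queens.append(col)
            result = solve_n_queens(n, queens)
            if result is not None:
                return result
            queens.pop()                    # dead end: backtrack
    return None                             # nonpromising node

print(solve_n_queens(4))  # [1, 3, 0, 2], i.e. queens at (1,2), (2,4), (3,1), (4,3)
```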
29
State-space tree of solving the four-queens
problem by backtracking.
An x denotes an unsuccessful attempt to place a
queen in the indicated column. The numbers above
the nodes indicate the order in which the nodes
are generated.
30
Hamiltonian Circuit Problem
  • We make vertex a the root of the state-space
    tree. The first component of our future solution,
    if it exists, is a first intermediate vertex of a
    Hamiltonian cycle to be constructed.
  • Using the alphabet order to break the three-way
    tie among the vertices adjacent to a, we select
    vertex b.
  • From b, the algorithm proceeds to c, then to d,
    then to e, and finally to f , which proves to be
    a dead end.

31
Hamiltonian Circuit Problem
  • So the algorithm backtracks from f to e, then to
    d, and then to c, which provides the first
    alternative for the algorithm to pursue. Going
    from c to e eventually proves useless, and the
    algorithm has to backtrack from e to c and then
    to b.
  • From there, it goes to the vertices f, e, c, and
    d, from which it can legitimately return to a,
    yielding the Hamiltonian circuit a, b, f, e, c,
    d, a. If we wanted to find another Hamiltonian
    circuit, we could continue this process by
    backtracking from the leaf of the solution found
    (a sketch follows below).

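A backtracking sketch for the Hamiltonian circuit problem in Python (an added illustration). The adjacency lists below describe a small graph chosen to be consistent with the walkthrough above (the figure itself is not in the transcript), with ties broken in alphabetical order.

```python
def hamiltonian_circuit(graph, start):
    """Return a Hamiltonian circuit starting and ending at `start`, or None."""
    def extend(path):
        if len(path) == len(graph):                  # all vertices used
            return path + [start] if start in graph[path[-1]] else None
        for v in sorted(graph[path[-1]]):            # alphabetical tie-breaking
            if v not in path:                        # promising extension
                result = extend(path + [v])
                if result is not None:
                    return result
        return None                                  # dead end: backtrack

    return extend([start])

graph = {                                            # assumed 6-vertex graph
    'a': {'b', 'c', 'd'}, 'b': {'a', 'c', 'f'}, 'c': {'a', 'b', 'd', 'e'},
    'd': {'a', 'c', 'e'}, 'e': {'c', 'd', 'f'}, 'f': {'b', 'e'},
}
print(hamiltonian_circuit(graph, 'a'))   # ['a', 'b', 'f', 'e', 'c', 'd', 'a']
```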
32
(Figure: the graph and the state-space tree for the Hamiltonian circuit example)
33
Backtracking
  • It is typically applied to difficult
    combinatorial problems for which no efficient
    algorithms for finding exact solutions are known
    (and quite possibly none exist).
  • Unlike the exhaustive search approach, which is
    doomed to be extremely slow for all instances of
    a problem, backtracking at least holds out hope
    of solving some instances of nontrivial size in
    an acceptable amount of time. This is especially
    true for optimization problems.
  • Even if backtracking does not eliminate any
    elements of a problem's state space and ends up
    generating all its elements, it provides a
    specific technique for doing so, which can be of
    value in its own right.

34
Branch-and-Bound
  • Branch and bound (BB) is a general algorithm for
    finding optimal solutions of various optimization
    problems, especially in discrete and
    combinatorial optimization.
  • It consists of a systematic enumeration of all
    candidate solutions, where large subsets of
    fruitless candidates are discarded by using
    upper and lower estimated bounds of the quantity
    being optimized.

35
Branch-and-Bound
  • In the standard terminology of optimization
    problems, a feasible solution is a point in the
    problem's search space that satisfies all the
    problem's constraints.
  • An optimal solution is a feasible solution with
    the best value of the objective function.

36
Branch-and-Bound
  • There are three reasons for terminating a search
    path at the current node in a state-space tree of
    a branch-and-bound algorithm:
  • The value of the node's bound is not better than
    the value of the best solution seen so far.
  • The node represents no feasible solutions because
    the constraints of the problem are already
    violated.
  • The subset of feasible solutions represented by
    the node consists of a single point; in this case
    we compare the value of the objective function
    for this feasible solution with that of the best
    solution seen so far and update the latter with
    the former if the new solution is better.

37
Branch-and-Bound
  • An enhancement of backtracking (a sketch follows
    below)
  • Applicable to optimization problems
  • For each node (partial solution) of a state-space
    tree, computes a bound on the value of the
    objective function for all descendants of the
    node (extensions of the partial solution)
  • Uses the bound for:
  • ruling out certain nodes as nonpromising, to
    prune the tree, if a node's bound is not better
    than the best solution seen so far
  • guiding the search through the state space

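A branch-and-bound sketch for the 0/1 knapsack problem in Python (an added illustration, not code from the slides). The bound at each node is the usual fractional-relaxation upper bound on the value obtainable from the remaining items; a node is terminated when its bound is not better than the best feasible value seen so far, matching the termination reasons above. The instance in the final line is hypothetical.

```python
def knapsack_branch_and_bound(weights, values, capacity):
    # Order items by value-to-weight ratio so the bound is easy to compute.
    order = sorted(range(len(weights)),
                   key=lambda i: values[i] / weights[i], reverse=True)
    w = [weights[i] for i in order]
    v = [values[i] for i in order]
    best = [0]                                   # best feasible value so far

    def bound(i, room, value):
        """Optimistic value: fill the remaining room with fractional items."""
        for j in range(i, len(w)):
            if w[j] <= room:
                room -= w[j]
                value += v[j]
            else:
                return value + v[j] * room / w[j]
        return value

    def branch(i, room, value):
        if value > best[0]:
            best[0] = value                      # better feasible solution found
        if i == len(w) or bound(i, room, value) <= best[0]:
            return                               # prune: bound not better than best
        if w[i] <= room:
            branch(i + 1, room - w[i], value + v[i])   # branch: take item i
        branch(i + 1, room, value)                     # branch: skip item i

    branch(0, capacity, 0)
    return best[0]

print(knapsack_branch_and_bound([7, 3, 4, 5], [42, 12, 40, 25], 10))  # 65
```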
38
Approximation Algorithm
Topic 12.3
ITS033 Programming Algorithms
Asst. Prof. Dr. Bunyarit Uyyanonvara, IT Program,
Image and Vision Computing Lab, School of Information,
Computer and Communication Technology (ICT),
Sirindhorn International Institute of Technology (SIIT), Thammasat University
http://www.siit.tu.ac.th/bunyarit
bunyarit@siit.tu.ac.th, 02 5013505 ext. 2005
39
Randomized Algorithms
  • A randomized algorithm aims to reduce both
    programming time and computational cost by
    approximating the result of a calculation using
    randomness.

40
Randomized Algorithms Area Calculation Problem
  • Calculate the area of an irregular shape (in red)
    inside a box of size 20 m x 24 m
  • (3, 3) B
  • (20, 5) R
  • (4, 15) R
  • (6, 10) B

A box of size 20 m x 24 m
41
Randomized Algorithms
  • Randomization is used to generate point
    coordinates uniformly at random within the box.
  • The numbers of hits and misses are counted.
  • A scaling calculation then gives the area;
  • in this case it is:
  • Red Area ≈ 20 x 24 x (red points / all points)
    (a sketch follows below)

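A Monte Carlo sketch of this area estimation in Python (an added illustration). The irregular red shape is replaced here by a hypothetical test shape, a disk of radius 6 m centred at (10, 12), so the estimate can be checked against the exact area of about 113.1 m^2.

```python
import random

def estimate_area(inside, width=20.0, height=24.0, samples=100_000):
    """Estimate the area of the region where inside(x, y) is True."""
    hits = sum(inside(random.uniform(0, width), random.uniform(0, height))
               for _ in range(samples))
    return width * height * hits / samples   # Red Area ~ box area * red / all

in_disk = lambda x, y: (x - 10) ** 2 + (y - 12) ** 2 <= 6 ** 2
print(estimate_area(in_disk))   # close to 113.1 on most runs
```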
42
Approximation Approach
  • Apply a fast (i.e., a polynomial-time)
    approximation algorithm to get a solution that
    is not necessarily optimal but hopefully close to
    it

43
Approximation Algorithms for Knapsack Problem
  • Greedy algorithm for the discrete knapsack
    problem (a sketch follows below):
  • Step 1: Compute the value-to-weight ratios
    ri = vi/wi, i = 1, ..., n, for the items given.
  • Step 2: Sort the items in nonincreasing order of
    the ratios computed in Step 1. (Ties can be
    broken arbitrarily.)
  • Step 3: Repeat the following operation until no
    item is left in the sorted list: if the current
    item on the list fits into the knapsack, place it
    in the knapsack; otherwise, proceed to the next
    item.

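A Python sketch of this greedy approximation (an added illustration; the capacity-10 instance in the last two lines is hypothetical).

```python
def greedy_knapsack(weights, values, capacity):
    # Steps 1-2: sort item indices by value-to-weight ratio, nonincreasing.
    order = sorted(range(len(weights)),
                   key=lambda i: values[i] / weights[i], reverse=True)
    taken, total_w, total_v = [], 0, 0
    # Step 3: scan the sorted list, taking every item that still fits.
    for i in order:
        if total_w + weights[i] <= capacity:
            taken.append(i)
            total_w += weights[i]
            total_v += values[i]
    return taken, total_v

items_w, items_v = [7, 3, 4, 5], [42, 12, 40, 25]
print(greedy_knapsack(items_w, items_v, 10))   # ([2, 3], 65): the items of weight 4 and 5
```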
44
Approximation Algorithms for Knapsack Problem
Example
  • Let us consider the instance of the knapsack
    problem with the knapsack's capacity equal to 10
    and the item information shown in the table.

45
Approximation Algorithms for Knapsack Problem
Example
  • Computing the value-to-weight ratios and sorting
    the items in nonincreasing order of these
    efficiency ratios yields the adjacent table.
  • The greedy algorithm will select the first item
    of weight 4, skip the next item of weight 7,
    select the next item of weight 5, and skip the
    last item of weight 3.
  • The solution obtained happens to be optimal for
    this instance.

46
Approximation Algorithms for Traveling Salesman
Problem
  • Nearest-neighbor algorithm: The following simple
    greedy algorithm is based on the nearest-neighbor
    heuristic, the idea of always going to the
    nearest unvisited city next (a sketch follows
    below).
  • Step 1: Choose an arbitrary city as the start.
  • Step 2: Repeat the following operation until all
    the cities have been visited: go to the unvisited
    city nearest the one visited last (ties can be
    broken arbitrarily).
  • Step 3: Return to the starting city.

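A Python sketch of the nearest-neighbor heuristic (an added illustration). The 4-city distance matrix is an assumed instance chosen to be consistent with the worked example on the next slide, with cities a, b, c, d indexed 0 to 3.

```python
def nearest_neighbor_tour(dist, start=0):
    n = len(dist)
    tour, visited = [start], {start}
    while len(tour) < n:                        # Step 2: repeat until all cities visited
        last = tour[-1]
        nxt = min((c for c in range(n) if c not in visited),
                  key=lambda c: dist[last][c])  # nearest unvisited city
        tour.append(nxt)
        visited.add(nxt)
    tour.append(start)                          # Step 3: return to the start
    length = sum(dist[tour[i]][tour[i + 1]] for i in range(n))
    return tour, length

dist = [[0, 1, 3, 6],    # a   (assumed edge lengths)
        [1, 0, 2, 3],    # b
        [3, 2, 0, 1],    # c
        [6, 3, 1, 0]]    # d
print(nearest_neighbor_tour(dist))   # ([0, 1, 2, 3, 0], 10): a - b - c - d - a
```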
47
Approximation Algorithms for Traveling Salesman
Problem Example
  • With a as the starting vertex, the
    nearest-neighbor algorithm yields the tour
    (Hamiltonian circuit)
  • sa : a - b - c - d - a of length 10.
  • The optimal solution, as can be easily checked by
    exhaustive search, is the tour
  • s* : a - b - d - c - a of length 8.
  • Thus, the accuracy ratio is
  • r(sa) = f(sa) / f(s*) = 10 / 8 = 1.25

48
Twice-Around-the-Tree Algorithm
  • Stage 1: Construct a minimum spanning tree of the
    graph (e.g., by Prim's or Kruskal's algorithm).
  • Stage 2: Starting at an arbitrary vertex, create
    a path that goes twice around the tree and
    returns to the same vertex.
  • Stage 3: Create a tour from the circuit
    constructed in Stage 2 by making shortcuts to
    avoid visiting intermediate vertices more than
    once.
  • Note: RA = ∞ for general instances, but this
    algorithm tends to produce better tours than the
    nearest-neighbor algorithm (a sketch follows
    below).

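A Python sketch of the twice-around-the-tree algorithm (an added illustration, reusing the assumed 4-city instance from the nearest-neighbor sketch). Stage 1 builds a minimum spanning tree with a simple Prim's algorithm; the walk of Stage 2 and the shortcuts of Stage 3 are obtained together by taking a preorder traversal of the tree.

```python
def twice_around_the_tree(dist, start=0):
    n = len(dist)
    # Stage 1: Prim's algorithm for a minimum spanning tree.
    in_tree, parent = {start}, {}
    while len(in_tree) < n:
        u, v = min(((u, v) for u in in_tree
                    for v in range(n) if v not in in_tree),
                   key=lambda e: dist[e[0]][e[1]])
        parent[v] = u
        in_tree.add(v)
    children = {u: [] for u in range(n)}
    for v, u in parent.items():
        children[u].append(v)
    # Stages 2-3: a preorder walk of the tree visits each vertex once,
    # which is the twice-around-the-tree walk with shortcuts applied.
    tour = []
    def preorder(u):
        tour.append(u)
        for v in children[u]:
            preorder(v)
    preorder(start)
    tour.append(start)
    return tour, sum(dist[tour[i]][tour[i + 1]] for i in range(n))

dist = [[0, 1, 3, 6],
        [1, 0, 2, 3],
        [3, 2, 0, 1],
        [6, 3, 1, 0]]
print(twice_around_the_tree(dist))   # ([0, 1, 2, 3, 0], 10) for this instance
```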
49
Example
Walk: a - b - c - b - d - e - d - b - a
Tour: a - b - c - d - e - a
50
Empirical Data for Euclidean Instances
51
ITS033
Midterm
  • Topic 01 - Problems & Algorithmic Problem Solving
  • Topic 02 - Algorithm Representation & Efficiency Analysis
  • Topic 03 - State Space of a Problem
  • Topic 04 - Brute Force Algorithm
  • Topic 05 - Divide and Conquer
  • Topic 06 - Decrease and Conquer
  • Topic 07 - Dynamic Programming
  • Topic 08 - Transform and Conquer
  • Topic 09 - Graph Algorithms
  • Topic 10 - Minimum Spanning Tree
  • Topic 11 - Shortest Path Problem
  • Topic 12 - Coping with the Limitations of Algorithm Power
  • http://www.siit.tu.ac.th/bunyarit/its033.php
  • http://www.vcharkarn.com/vlesson/7

52
End of Chapter 12
  • Thank You!