Search Algorithms - PowerPoint PPT Presentation

About This Presentation
Title:

Search Algorithms

Description:

The objective of a DOP is to find a feasible solution xopt, such that f(xopt) ≤ f(x) for all x ∈ S. ... DOP Examples. 0/1 integer linear programming problem: ... – PowerPoint PPT presentation

Slides: 29
Provided by: Pao3

Transcript and Presenter's Notes



1
Search Algorithms
  • Intro & Examples
  • search space graphs, (admissible) heuristics,
    examples
  • Sequential Search Algorithms
  • DFS and variants: backtracking, branch & bound,
    IDA*
  • BFS, A* algorithm
  • Parallel Depth-First Search
  • issues: load balancing, work splitting,
    termination detection
  • DF branch & bound, IDA*
  • Parallel Best-First Search
  • in trees
  • in graphs

2
Discrete Optimization Problems
  • Formal definition: a tuple (S, f)
  • S: the set of feasible states
  • f: a cost function from S into the real numbers
  • The objective of a DOP is to find a feasible
    solution xopt such that
  • f(xopt) ≤ f(x) for all x ∈ S.
  • A number of diverse problems, such as VLSI
    layout, robot motion planning, test pattern
    generation, and facility location, can be
    formulated as DOPs.

3
DOP Examples
  • 0/1 integer linear programming problem:
  • Given an m x n matrix A, vectors b and c, find a
    vector x such that
  • x contains only 0s and 1s
  • Ax ≥ b
  • f(x) = x^T c is maximized
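As an illustration (not from the slides), the 0/1 ILP definition can be made concrete with a brute-force Python sketch; the matrix A, vectors b and c, and the function name are invented for this example, and the enumeration is exponential in n, so this is only viable for tiny instances:

```python
# Brute-force 0/1 ILP: maximize f(x) = x^T c subject to Ax >= b,
# with every component of x in {0, 1}.
from itertools import product

def solve_01_ilp(A, b, c):
    n = len(c)
    best_x, best_f = None, None
    for x in product((0, 1), repeat=n):
        # feasibility check: each row of Ax must meet its bound in b
        if all(sum(A[i][j] * x[j] for j in range(n)) >= b[i]
               for i in range(len(A))):
            f = sum(c[j] * x[j] for j in range(n))
            if best_f is None or f > best_f:
                best_x, best_f = x, f
    return best_x, best_f

A = [[1, 1, 0],
     [0, 1, 1]]
b = [1, 1]
c = [2, -1, 3]
x, f = solve_01_ilp(A, b, c)   # x = (1, 0, 1), f = 5
```

The exhaustive enumeration over 2^n vectors is exactly the search space that the algorithms in the rest of the deck try to explore intelligently.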

8-puzzle problem: Given an initial configuration,
find the shortest sequence of moves that leads to
the final configuration.
4
DOP and Graph Search
  • Many DOPs can be formulated as finding a minimum
    cost path in a graph
  • nodes correspond to states
  • initial node, terminal (goal) and
    non-terminal nodes
  • edges correspond to possible state transitions
  • with associated costs (or the cost can be
    attached directly to the nodes)
  • called a state-space graph
  • Typically exponentially large and given
    implicitly
  • by the transition rules, not explicitly as a
    graph

5
State-Space Graph Example
6
Exploring the State-Space Graph
  • Exponentially large, better do it in a smart way!
  • use a heuristic estimate of the cost to reach a
    goal state
  • l(x) = g(x) + h(x)
  • don't spend time exploring bad/unpromising
    states
  • Admissible heuristic
  • always an underestimate of the cost to reach a
    solution
  • used in algorithms that guarantee finding the
    best solution
  • Example
  • Manhattan distance for the 8-puzzle
  • Σ |i−k| + |j−l| over all numbers, where (k,l) and
    (i,j) are the current and desired positions
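The Manhattan-distance heuristic above can be sketched in Python; the row-major tuple encoding of the 3x3 board and the function name are assumptions of this example, as is the common convention that the blank (0) is skipped:

```python
# Manhattan distance for the 8-puzzle: sum over all tiles of
# |i-k| + |j-l|, where (k,l) is the tile's current position and
# (i,j) its desired position.
def manhattan(state, goal):
    # state, goal: tuples of 9 entries, row-major 3x3 boards; 0 = blank
    dist = 0
    for idx, tile in enumerate(state):
        if tile == 0:
            continue                         # blank tile is not counted
        k, l = divmod(idx, 3)                # current position (k, l)
        i, j = divmod(goal.index(tile), 3)   # desired position (i, j)
        dist += abs(i - k) + abs(j - l)
    return dist

goal = (1, 2, 3, 4, 5, 6, 7, 8, 0)
start = (1, 2, 3, 4, 5, 6, 0, 7, 8)   # tiles 7 and 8 one step off
```

Here manhattan(start, goal) is 2, and since each move slides one tile one square, the heuristic never overestimates, i.e. it is admissible.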

7
Why Parallel DOP?
  • DOPs are generally NP-hard problems. Does
    parallelism really help much?
  • for many problems, the average-case runtime is
    polynomial
  • often, we can find suboptimal solutions in
    polynomial time
  • many problems have smaller state spaces but
    require real-time solutions
  • for some other problems, an improvement in
    objective function is highly desirable,
    irrespective of time (VLSI design)

9
Sequential Search Strategies - DFS
  • Depth-First Search strategies
  • simple/ordered backtracking
  • exhaustive search, stops on the first terminal
    (non-optimal)
  • ordered: uses a heuristic for order selection
  • depth-first branch & bound
  • partial solutions inferior to the currently best
    one are discarded (with admissible heuristics
    finds the optimum)
  • iterative deepening A* (IDA*) - why?
  • tree expanded to a certain depth (or cost, using
    an adm. heur.)
  • if no solution found, increase the depth (cost)
  • Suitable for trees, memory cost linear in the
    depth.
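A minimal Python sketch of depth-first branch & bound on an abstract search tree; the callbacks `children` and `is_goal`, the heuristic `h`, and the toy edge set are all hypothetical names introduced for this example:

```python
# Depth-first branch & bound: prune any partial solution whose
# lower bound g + h(state) is not better than the best cost found
# so far. With an admissible h, the optimum is never pruned away.
import math

def dfbb(state, g, h, children, is_goal, best=None):
    if best is None:
        best = [math.inf]            # best[0] = cost of best solution so far
    if g + h(state) >= best[0]:
        return best[0]               # bound: discard inferior partial solution
    if is_goal(state):
        best[0] = g                  # found a cheaper complete solution
        return best[0]
    for child, cost in children(state):
        dfbb(child, g + cost, h, children, is_goal, best)
    return best[0]

# toy state space: S -> A -> G costs 6, S -> B -> G costs 5
edges = {'S': [('A', 1), ('B', 4)], 'A': [('G', 5)], 'B': [('G', 1)]}
best_cost = dfbb('S', 0, lambda s: 0,
                 lambda s: edges.get(s, []), lambda s: s == 'G')
```

On this toy tree the first goal found costs 6, and the bound then prunes nothing on the second branch, which improves the answer to 5; only the best cost is kept, so memory stays linear in the depth, as the slide notes.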

10
DFS Example
States resulting from the first three steps of
depth-first search applied to an instance of the
8-puzzle.
11
Sequential Search Strategies - BFS
  • Best-First Search strategies
  • use a heuristic to guide the search
  • OPEN/CLOSED lists
  • A* algorithm
  • sorts the OPEN list using the heuristic
    l(x) = g(x) + h(x)
  • with an admissible heuristic finds the optimum
  • Memory cost proportional to the number of nodes
    visited
  • Equally suitable for graphs
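The A* loop can be sketched with a heap-based OPEN list and a CLOSED set; the toy graph and heuristic table below are invented for this example:

```python
# A* on an explicit graph: OPEN is a priority queue ordered by
# l(x) = g(x) + h(x); CLOSED records already-expanded nodes.
# With an admissible h, the first goal popped is optimal.
import heapq

def a_star(graph, h, start, goal):
    open_list = [(h(start), 0, start)]   # entries are (l, g, node)
    closed = set()
    while open_list:
        l, g, node = heapq.heappop(open_list)
        if node == goal:
            return g                     # optimal path cost
        if node in closed:
            continue
        closed.add(node)
        for succ, cost in graph.get(node, []):
            if succ not in closed:
                heapq.heappush(open_list,
                               (g + cost + h(succ), g + cost, succ))
    return None                          # goal unreachable

graph = {'S': [('A', 1), ('B', 2)], 'A': [('G', 4)], 'B': [('G', 2)]}
h = {'S': 3, 'A': 4, 'B': 2, 'G': 0}.get   # admissible for this graph
cost = a_star(graph, h, 'S', 'G')          # S -> B -> G, cost 4
```

Note that every node ever pushed stays in OPEN or CLOSED, which is exactly the "memory proportional to the number of nodes visited" point above.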

12
State-Space Tree vs Graph
Tree: don't need to remember visited states.
Graph: unfold it as a tree (but it might bloat
exponentially), or remember visited states
(costly).
14
Parallel Depth-First Search
  • How to find parallelism?
  • different processes explore different parts of
    the search space
  • but the search space can be highly irregular,
    its structure/sizes unknown in advance
  • needs dynamic load balancing

15
(No Transcript)
16
Parallel DFS - Work Splitting
  • Invoked when asked for work.
  • Idea: split the workload (stored in the stack)
    evenly in half
  • Problem: we do not know the sizes of the
    underlying trees
  • Cutoff depth: depth beyond which the trees are
    likely small; don't send those nodes, it wastes
    communication
  • Strategies (which nodes to send):
  • 1. nodes near the bottom of the stack (i.e. near
    the root)
  • 2. nodes near the cutoff depth
  • 3. half of the nodes between the bottom of the
    stack and the cutoff depth
  • 1 and 3 are good for uniform search spaces, 2 is
    good with a good heuristic,
  • 1 has better communication costs (sending fewer
    nodes)
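Strategy 3 (give away half of the nodes between the bottom of the stack and the cutoff depth) can be sketched as follows; representing stack entries as (node, depth) pairs and the function name are assumptions of this sketch:

```python
# Split a DFS stack for a work request: alternate entries below the
# cutoff depth go to the requester, everything else stays local.
def split_stack(stack, cutoff_depth):
    keep, give = [], []
    for i, (node, depth) in enumerate(stack):
        # never give away nodes beyond the cutoff: their subtrees
        # are likely tiny and not worth the communication cost
        if depth < cutoff_depth and i % 2 == 0:
            give.append((node, depth))
        else:
            keep.append((node, depth))
    return keep, give

stack = [('n0', 0), ('n1', 1), ('n2', 2), ('n3', 3), ('n4', 4)]
keep, give = split_stack(stack, cutoff_depth=3)
```

Here the donor keeps [('n1', 1), ('n3', 3), ('n4', 4)] and sends [('n0', 0), ('n2', 2)]; taking every other eligible entry roughly halves the unexplored work without knowing the subtree sizes.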

17
Parallel DFS - Load Balancing
  • Asynchronous (local) Round Robin
  • split work with a target process, chosen in RR
    fashion
  • can be really bad in the worst case, too many
    requests
  • Global Round Robin
  • target given by a global variable
  • needs to be locked - a bottleneck
  • Random polling
  • choose the target randomly
  • performs well in practice
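Random polling amounts to picking a donor uniformly at random among the other processes, with no shared state to lock; a small sketch (function name assumed):

```python
# Random polling: when a process runs out of work, it asks a
# uniformly random *other* process for a share of its stack.
import random

def choose_target(my_rank, num_procs, rng=random):
    # draw from the num_procs - 1 other ranks, skipping my_rank
    t = rng.randrange(num_procs - 1)
    return t if t < my_rank else t + 1
```

Because no global variable is involved, there is no bottleneck as in Global Round Robin, and the uniform choice keeps requests spread out, which is why it performs well in practice.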

18
Parallel Branch & Bound and IDA*
  • Parallel Branch & Bound
  • in order to bound, each processor needs to know
    the cost of the currently best solution
  • when a better solution is found, broadcast its
    cost to everybody
  • does not hurt too much
  • the cost is typically a number/small structure
  • new better solutions are not found very often
  • Parallel IDA*
  • synchronize the iterations, a master chooses the
    next cost bound

20
Parallel Best-First Search
  • The OPEN list is the crucial data structure
  • how to access/implement it?
  • centralized
  • distributed
  • BFS is quite often used for graphs
  • how to check for duplicated states?

21
(No Transcript)
22
Parallel BFS - Centralized OPEN list
  • A centralized OPEN list allows parallel expansion
    of p nodes
  • Problems
  • might not find the best solution
  • don't terminate on the first solution found; need
    to finish processing vertices with lower cost
  • contention for the centralized list

23
Parallel BFS - Distributed OPEN lists
  • Each process uses/maintains a local OPEN list
  • Problem
  • might have nodes that are very far from the best
    globally, performing a lot of useless work
  • Solution
  • periodic communication to ensure a population of
    good nodes is evenly spread out among the local
    OPEN lists
  • trade-off between communication and unnecessary
    work (having stale nodes)
  • Communication strategies (with whom to exchange
    best nodes)
  • random (OK, do as often as possible)
  • neighbours in a ring (not good, slow spreading
    of good nodes)
  • centralized blackboard (for shared memory
    machines)

24
Parallel BFS in Graphs
  • In graphs, whenever a node is generated, we have
    to check whether it already is in the OPEN/CLOSED
    lists
  • Problem: who maintains the information about this
    node?
  • Solution
  • use a hash function to assign an owner process to
    each potential node
  • whenever a node is generated, it is sent to its
    owner
  • the owner checks whether it is new and processes
    it
  • a good hash function should ensure good load
    balancing
  • but there is still a lot of communication (almost
    every generated node has to be sent elsewhere)
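The hash-based ownership can be sketched as follows; the function name is an assumption, and in a real system the generated node would then be sent to its owner (e.g. via MPI), which is omitted here:

```python
# Map each search state to an owning process. The owner holds the
# only OPEN/CLOSED bookkeeping for that state, so duplicate checks
# are local to one process.
import hashlib

def owner(state, num_procs):
    # a stable, well-mixed hash spreads states evenly across
    # processes, which is what gives the load balancing above
    digest = hashlib.sha256(repr(state).encode()).hexdigest()
    return int(digest, 16) % num_procs
```

The same state always hashes to the same owner, so a duplicate generated on any process is detected; the downside is visible too, since a state generated on rank r belongs to rank owner(state) with probability about 1/p of matching, hence almost every node crosses the network.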

26
What we did & why - Summary
  • Parallel Architectures, Models, Networks &
    Embeddings
  • understand the setting and its implications
  • Design Strategies & Techniques
  • how to find parallelism, how to minimize
    overhead
  • partitioning, load balancing, pipelining,
    minimizing communication overhead
  • Analysis of Parallel Algorithms
  • tools to evaluate/choose the proper parallel
    approach
  • MPI, threads, OpenMP
  • some hands-on parallel programming experience
  • Parallel Algorithms
  • linear algebra, sorting, graph algorithms,
    search space exploration
  • real-life examples of how to design and analyze
    parallel algorithms

27
What you should be able to do
  • Find/create parallelism
  • there are usually surprisingly many ways, even
    in problems which look inherently sequential
  • Evaluate which parallelization strategy is likely
    to give the best results
  • consider the communication/synchronization
    requirements, the level of parallelism, how easy
    it is to load balance, how it scales
  • should be able to estimate speedup, efficiency,
    scalability
  • Program the chosen approach
  • at least in MPI

28
Done
Parallel Computing
Finito
  • is fun
  • if you like algorithms and complexity analysis
    (Ha!)
  • has a future - hardware
  • multithreaded CPUs
  • multi-CPU chips
  • clusters
  • parallel coprocessors
  • has a future - software
  • with the available hardware, there will be a need
    for software
  • the techniques (esp. mitigating communication
    overheads) are applicable to
  • cache-friendly algorithms
  • external storage algorithms
  • distributed systems

KAPUT!