Title: Local Search and Optimization, Presented by Collin Kanaley
1 Local Search and Optimization
Presented by Collin Kanaley
2 Local Search Algorithms and Optimization Problems
3 Local Search Algorithms
- -Local search algorithms are useful when the path to the goal does not matter; for example, in the eight-queens problem, what matters is the configuration of the queens, not the order in which they are added to the board.
- -This class of problems includes many important applications such as integrated-circuit design, factory-floor layout, job-shop scheduling, automatic programming, telecommunications network optimization, vehicle routing, and portfolio management.
4
- -Local search algorithms operate using a single current state (rather than multiple paths)
- -Paths followed are typically not retained
- -Local search algorithms have two key advantages
- 1. They use very little memory
- 2. They can often find reasonable solutions in large or infinite state spaces for which systematic algorithms are not suitable
- -Local search algorithms are also useful for solving pure optimization problems, which aim to find the best state according to an objective function
5 State Space Landscape
- -The state space landscape is useful for understanding local search: a landscape has both location (defined by the state) and elevation (defined by the value of the heuristic cost function or objective function)
6 State Space Landscape (continued)
- -If elevation corresponds to cost, then the aim is to find the lowest valley, called a global minimum
- -If elevation corresponds to an objective function, then the aim is to find the highest peak, called a global maximum
7 State Space Landscape (continued)
- In a state space landscape:
- -A complete local search algorithm always finds a goal if one exists
- -An optimal algorithm always finds a global maximum/minimum
8 Hill-Climbing Search
- -The hill-climbing search is a loop that continually moves uphill in the direction of increasing value
- -It terminates when it reaches a peak where no neighbour has a higher value
- -It does not maintain a search tree
- -This algorithm does not look beyond the immediate neighbours of the current state
- -Hill climbing is sometimes called greedy local search because it grabs a good neighbour state without thinking ahead about where to go next
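A minimal Python sketch of this greedy loop, assuming hypothetical neighbors(state) and value(state) helpers supplied by the problem (neither is defined on the slides):

```python
def hill_climbing(start, neighbors, value):
    """Steepest-ascent hill climbing: keep moving to the best neighbour."""
    current = start
    while True:
        candidates = neighbors(current)          # list of neighbouring states
        if not candidates:
            return current
        best = max(candidates, key=value)
        if value(best) <= value(current):
            return current                       # peak reached: no neighbour is higher
        current = best
```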
9
- -While hill-climbing searches often perform quite well, they also often get stuck due to:
- 1. Local maxima: a peak that is higher than each of its neighbouring states, but lower than the global maximum. Hill-climbing algorithms that reach the vicinity of a local maximum will be drawn upwards towards the peak, but then be stuck with nowhere else to go.
- 2. Ridges: a sequence of local maxima
- 3. Plateaux: an area of the state space landscape where the evaluation function is flat
10 Hill-climbing variations
- -Stochastic hill climbing chooses at random from among the uphill moves; the probability of selection can vary with the steepness of the uphill move
- -First-choice hill climbing implements stochastic hill climbing by generating successors randomly until one is generated that is better than the current state. This is good when a state has many (e.g., thousands of) successors
- -Random-restart hill climbing conducts a series of hill-climbing searches from randomly generated initial states, stopping when a goal is found
- -Simulated annealing combines hill climbing with a random walk; this yields both efficiency and completeness (a sketch follows below)
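A minimal Python sketch of simulated annealing, assuming hypothetical random_neighbor(state) and value(state) helpers and a simple geometric cooling schedule (the slides do not specify a schedule):

```python
import math
import random

def simulated_annealing(start, random_neighbor, value, t0=1.0, cooling=0.995, t_min=1e-4):
    """Hill climbing plus a random walk: worse moves are sometimes accepted."""
    current, t = start, t0
    while t > t_min:
        nxt = random_neighbor(current)
        delta = value(nxt) - value(current)
        # Always accept an improvement; accept a worse move with probability e^(delta/t).
        if delta > 0 or random.random() < math.exp(delta / t):
            current = nxt
        t *= cooling                 # gradually lower the "temperature"
    return current
```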
11 Local Beam Search
- -Local beam search: this algorithm keeps track of multiple states as opposed to just one. It begins with k randomly generated states. At each step, all the successors of all k states are generated; if any is a goal, the algorithm halts, otherwise it selects the k best successors from the complete list and repeats. This differs from a random-restart search in that useful information is passed among the k parallel search threads. Therefore, unfruitful searches are quickly abandoned and resources are moved to where the most progress is being made
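A minimal Python sketch of the k-state loop, assuming hypothetical random_state(), successors(state), value(state), and is_goal(state) helpers:

```python
def local_beam_search(k, random_state, successors, value, is_goal, max_steps=1000):
    """Keep the k best states drawn from the pooled successors at each step."""
    states = [random_state() for _ in range(k)]
    for _ in range(max_steps):
        pool = [s for st in states for s in successors(st)]
        for s in pool:
            if is_goal(s):
                return s
        if not pool:
            break
        # Select the k best successors from the complete list and repeat.
        states = sorted(pool, key=value, reverse=True)[:k]
    return max(states, key=value)
```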
12 Stochastic beam search
- -Stochastic beam search: this variant of local beam search chooses k successors at random, as opposed to choosing the k best from the pool of candidate successors, with the probability of choosing a given successor being an increasing function of its value.
- -This helps alleviate the problem local beam search algorithms can have when they become too concentrated in a small region of the state space
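A minimal Python sketch of the value-weighted selection step, assuming a hypothetical value(state) function that returns non-negative scores:

```python
import random

def choose_k_stochastic(candidates, value, k):
    """Pick k successors at random, with probability increasing in their value."""
    weights = [value(s) for s in candidates]     # assumes non-negative values
    return random.choices(candidates, weights=weights, k=k)
```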
13 Genetic Algorithms
- -A genetic algorithm is a variant of stochastic
beam search in which successor states are
generated by combining two parent states, rather
than by just modifying a single state.
14 Genetic Algorithms (continued)
- -Genetic algorithms begin with a set of k randomly generated states, called the population
- -Each state, or individual, is represented as a string over a finite alphabet (most commonly a string of 0s and 1s)
- -Each state is evaluated by the fitness function, which rates better states more highly
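A minimal Python sketch of one way such a genetic algorithm can be organised, assuming bit-string individuals and hypothetical fitness(individual) and mutate(individual) helpers; the single crossover point, mutation rate, and generation count are illustrative choices, not taken from the slides:

```python
import random

def genetic_algorithm(population, fitness, mutate, generations=100, mutation_rate=0.1):
    """Breed a population of bit strings by fitness-weighted selection and crossover."""
    for _ in range(generations):
        weights = [fitness(p) for p in population]
        new_population = []
        for _ in range(len(population)):
            # Choose two parents with probability proportional to fitness.
            x, y = random.choices(population, weights=weights, k=2)
            cut = random.randrange(1, len(x))        # single crossover point
            child = x[:cut] + y[cut:]
            if random.random() < mutation_rate:
                child = mutate(child)
            new_population.append(child)
        population = new_population
    return max(population, key=fitness)
```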
15 Local Search in Continuous Spaces
16 Local Search in Continuous Spaces
- None of the previously described algorithms can handle continuous state spaces. Introduced here are some local search techniques for finding optimal solutions in continuous spaces. Most problems that deal with the real world involve such spaces.
17
- Problems with local maxima, ridges, and plateaux are just as prevalent in continuous state spaces as they are in discrete ones.
18
- -One way to avoid the difficulties of continuous spaces is to simply discretize the neighbourhood of each state, i.e., turn the continuous model into a discrete one
- -The gradient of the landscape can be used to find a maximum
- -When the objective function is not available in differentiable form at all, an empirical gradient can be determined by evaluating the response to small increments and decrements in each coordinate. (Empirical gradient search is the same as steepest-ascent hill climbing in a discretized version of the state space; a sketch follows below.)
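A minimal Python sketch of estimating an empirical gradient by central differences, assuming f takes a list of coordinates; the step size eps is an illustrative choice:

```python
def empirical_gradient(f, x, eps=1e-4):
    """Estimate the gradient of f at x by probing small steps in each coordinate."""
    grad = []
    for i in range(len(x)):
        up = list(x); up[i] += eps
        down = list(x); down[i] -= eps
        grad.append((f(up) - f(down)) / (2 * eps))   # central difference in coordinate i
    return grad
```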
19 Newton-Raphson method
- -For many problems, the Newton-Raphson method is the most effective algorithm. It is a general technique for finding roots of functions, i.e., solving equations of the form g(x) = 0
20
- -The Newton-Raphson method works by computing a new estimate for the root x according to Newton's formula
- x ← x − g(x)/g′(x)
- -To optimize f, x must be found so that the gradient is zero. Thus g(x) in Newton's formula becomes ∇f(x), and the update can be written in matrix-vector form as
- x ← x − H_f⁻¹(x) ∇f(x)
- where H_f(x) is the Hessian matrix of second derivatives
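A minimal Python/NumPy sketch of iterating this update, assuming hypothetical grad_f(x) and hessian_f(x) callables that return the gradient vector and Hessian matrix:

```python
import numpy as np

def newton_optimize(grad_f, hessian_f, x0, tol=1e-8, max_iter=50):
    """Find a point where the gradient is zero via Newton-Raphson updates."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad_f(x)
        if np.linalg.norm(g) < tol:
            break
        # x <- x - H_f^{-1}(x) grad_f(x), computed by solving a linear system
        # rather than explicitly inverting the Hessian.
        x = x - np.linalg.solve(hessian_f(x), g)
    return x
```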
21 Constrained optimization
- -An optimization problem is constrained if solutions must satisfy some hard constraints on the values of each variable.
- -The difficulty of constrained optimization problems depends upon the nature of the constraints and the objective function
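As a small illustration (not from the slides), here is a sketch of a constrained problem solved with SciPy's general-purpose minimize; the quadratic objective, bounds, and inequality constraint are all made up for the example:

```python
from scipy.optimize import minimize

# Minimize (x - 1)^2 + (y - 2)^2 subject to x + y <= 2 and x, y >= 0.
result = minimize(
    lambda v: (v[0] - 1) ** 2 + (v[1] - 2) ** 2,
    x0=[0.0, 0.0],
    bounds=[(0, None), (0, None)],                                     # x >= 0, y >= 0
    constraints=[{"type": "ineq", "fun": lambda v: 2 - v[0] - v[1]}],  # x + y <= 2
)
print(result.x)   # optimum lies on the constraint boundary x + y = 2
```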
22 Sources