Title: Numerical Optimization
1 Numerical Optimization
- General Framework (a minimal sketch follows at the end of this section)
  - objective function f(x_1,...,x_n) to be minimized or maximized
  - constraints g_i(x_1,...,x_n) ≤ 0 or = 0 (i = 1,...,m)
  - x_i > 0, i = 1,...,n (optional)
- Approaches
  - Classical: differentiate the function and find points with a gradient of 0.
    - Problem: f has to be differentiable.
    - Does not cope with constraints.
    - Equation systems have to be solved that are frequently nasty (iterative algorithms such as Newton-Raphson's method can be used).
  - Lagrange multipliers are employed to cope with constraints.
  - If g_1,...,g_m and f are linear, linear programming can be used.
  - If at least one function is non-linear, general analytical solutions no longer exist and iteration algorithms have to be used.
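To make the framework concrete, here is a minimal sketch in Python, assuming SciPy is available; the objective f, the constraint g, and the starting point are illustrative placeholders, not part of the lecture material:

```python
# A minimal sketch of the general framework: minimize f subject to
# g(x) <= 0 and x_i > 0.  SciPy's SLSQP solver expects inequality
# constraints in the form fun(x) >= 0, hence the sign flip below.
# f and g here are illustrative placeholders.
from scipy.optimize import minimize

def f(x):                      # objective f(x1, x2) to be minimized
    return (x[0] - 1) ** 2 + (x[1] - 2) ** 2

def g(x):                      # constraint g(x1, x2) <= 0
    return x[0] + x[1] - 2

result = minimize(
    f,
    x0=[0.5, 0.5],
    method="SLSQP",
    constraints=[{"type": "ineq", "fun": lambda x: -g(x)}],  # g(x) <= 0
    bounds=[(1e-9, None), (1e-9, None)],                     # x_i > 0
)
print(result.x)  # constrained minimizer
```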
2 Popular Numerical Methods
- Newton-Raphson's method to solve f(x) = 0 (a minimal sketch follows this list):
  - f(x) is approximated by its tangent at the point (x_n, f(x_n)), and x_{n+1} is taken as the abscissa of the point of intersection of the tangent with the x-axis; that is, x_{n+1} is determined from f'(x_n)·(x_{n+1} − x_n) + f(x_n) = 0.
  - x_{n+1} = x_n + h_n with h_n = −f(x_n)/f'(x_n).
  - The iterations are broken off when |h_n| is less than the largest tolerable error.
- The Simplex Method is used to optimize a linear function with a set of linear constraints (linear programming). Quadratic programming [31] optimizes a quadratic function with linear constraints.
- Other iteration methods (similar to Newton's method) rely on
  - x_{v+1} = x_v + λ_v·d_v,
  - where d_v is a direction and λ_v denotes the jump (step length) performed in that direction.
  - They use quadratic/linear approximations of the optimization problem, and solve the optimization problem in the approximated space.
- Other popular optimization methods: the penalty trajectory method [220], the sequential quadratic penalty function method, and the SOLVER method [80].
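The sketch below illustrates the Newton-Raphson update described above; the test function, derivative, tolerance, and iteration cap are illustrative choices:

```python
# A minimal sketch of Newton-Raphson iteration for f(x) = 0, following
# the update x_{n+1} = x_n + h_n with h_n = -f(x_n)/f'(x_n).
def newton_raphson(f, f_prime, x0, tol=1e-10, max_iter=100):
    x = x0
    for _ in range(max_iter):
        h = -f(x) / f_prime(x)       # h_n = -f(x_n) / f'(x_n)
        x = x + h                    # x_{n+1} = x_n + h_n
        if abs(h) < tol:             # break off once the step is below
            return x                 # the largest tolerable error
    raise RuntimeError("no convergence within max_iter iterations")

# Example: the positive root of f(x) = x^2 - 2, i.e. sqrt(2).
print(newton_raphson(lambda x: x * x - 2, lambda x: 2 * x, x0=1.0))
```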
3 Numerical Optimization with GAs
- Coding alternatives include
  - binary coding
  - Gray codes (see the sketch after this list)
  - real-valued GAs
- Usually lower and upper bounds for the variables have to be provided as part of the optimization problem. Typical operators include
  - standard mutation and crossover
  - non-uniform and boundary mutation
  - arithmetical, simple, and heuristic crossover
- Constraints are a major challenge for function optimization. Ideas to cope with the problem include
  - elimination of equations through variable reduction.
  - treating the values in a solution as dynamic: they are no longer independent of each other; rather, their contents are constrained by the contents of the other variables of the solution. In some cases a bound for possible changes can be computed (e.g., for convex search spaces (GENOCOP)).
  - penalty functions.
  - repair algorithms (GENETIC2).
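As a concrete look at two of the items above, the following sketch shows Gray coding, where adjacent integers differ in exactly one bit, and boundary mutation, which resets one gene to one of its bounds; the bit width, bounds, and gene index are illustrative choices:

```python
# A minimal sketch of Gray coding and boundary mutation for GAs.
import random

def to_gray(n: int) -> int:
    # Adjacent integers differ in exactly one bit in Gray code,
    # which smooths the effect of single-bit mutations.
    return n ^ (n >> 1)

def from_gray(g: int) -> int:
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n

def boundary_mutation(x, lower, upper, i):
    # Boundary mutation: gene i jumps to one of its bounds at random.
    x = list(x)
    x[i] = random.choice((lower[i], upper[i]))
    return x

for v in range(8):                                # 3-bit demonstration
    print(v, format(to_gray(v), "03b"), from_gray(to_gray(v)))
print(boundary_mutation([0.3, 0.7], [0.0, 0.0], [1.0, 1.0], i=1))
```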
4 Penalty Function Approach
- Problem: f(x_1,...,x_n) has to be maximized
  - with constraints g_i(x_1,...,x_n) ≤ 0 or = 0 (i = 1,...,m).
- Define a new function f'(x_1,...,x_n) = f(x_1,...,x_n) − Σ_{i=1..m} w_i·h_i(x_1,...,x_n) (a minimal sketch follows the remarks below) with
  - for constraints g_i(x_1,...,x_n) = 0: h_i(x_1,...,x_n) = |g_i(x_1,...,x_n)|
  - for constraints g_i(x_1,...,x_n) ≤ 0: h_i(x_1,...,x_n) = IF g_i(x_1,...,x_n) ≤ 0 THEN 0 ELSE g_i(x_1,...,x_n)
- Remarks: Penalty Function Approach
  - It needs a lot of fine-tuning; especially the selection of the weights w_i is very critical for the performance of the optimizer.
  - Frequently, the GA gets deceived into only exploring the space of illegal solutions, especially if penalties are too low. On the other hand, situations of premature convergence can arise when the GA terminates in a local optimum that is surrounded by illegal solutions, so that the GA cannot escape it, because the penalty for traversing illegal solutions is too high.
  - A special approach called the sequential quadratic penalty function method [9,39] has gained significant popularity.
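A minimal sketch of the penalty transformation defined above, in Python; the objective, the single constraint, and the weight are illustrative placeholders, and picking the weight well is exactly the fine-tuning problem just noted:

```python
# Maximize f'(x) = f(x) - sum_i w_i * h_i(x), where h_i measures the
# violation of constraint i (|g| for equalities, max(0, g) for
# inequalities g <= 0).
def penalized(f, eq_constraints, ineq_constraints, weights):
    def f_prime(x):
        penalty, k = 0.0, 0
        for g in eq_constraints:             # g(x) = 0: h = |g(x)|
            penalty += weights[k] * abs(g(x))
            k += 1
        for g in ineq_constraints:           # g(x) <= 0: h = max(0, g(x))
            penalty += weights[k] * max(0.0, g(x))
            k += 1
        return f(x) - penalty
    return f_prime

# Example: maximize f(x, y) = x + y subject to x^2 + y^2 - 1 <= 0.
fp = penalized(lambda x: x[0] + x[1], [],
               [lambda x: x[0] ** 2 + x[1] ** 2 - 1], weights=[10.0])
print(fp([0.7, 0.7]), fp([1.0, 1.0]))  # feasible vs. penalized point
```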
5 Sequential Quadratic Penalty Function Method
- Idea: instead of optimizing the constrained function f(x), optimize
  - F(x,r) = f(x) + (1/(2r))·(h_1(x)² + ... + h_m(x)²)
- It has been shown by Fiacco et al. [189] that the solutions of optimizing the constrained function f and the solutions of optimizing F are identical for r → 0. However, it turned out to be difficult to minimize F in the limit with Newton's method (see Murray [220]). More recently, Broyden and Attila [39,40] found a more efficient method. GENOCOP II, which is discussed in our textbook, employs this method.
6 Basic Loop of the SQPF Method
- 1) Differentiate F(x,r), yielding ∇F(x,r).
- 2) Choose a starting vector x_0 and a starting value r_0 > 0.
- 3) r := r_0; x := x_0
  - REPEAT
    - Solve ∇F(x,r) = G(x) = 0, starting from the current vector x, yielding vector x_1
    - x := x_1
    - Decrease r by division through γ > 1
  - UNTIL r is sufficiently close to 0
  - RETURN(x)
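A minimal sketch of this loop in Python, assuming SciPy; numerically minimizing F(x,r) stands in for solving ∇F(x,r) = 0, and f, the constraint functions, γ, and the stopping threshold are illustrative choices:

```python
# Minimize F(x, r) = f(x) + (1/(2r)) * sum_i h_i(x)^2 for a decreasing
# sequence of r values, warm-starting each solve at the previous x.
import numpy as np
from scipy.optimize import minimize

def sqpf(f, hs, x0, r0=1.0, gamma=10.0, r_min=1e-6):
    x, r = np.asarray(x0, dtype=float), r0
    while r > r_min:
        F = lambda y, r=r: f(y) + (1.0 / (2.0 * r)) * sum(h(y) ** 2 for h in hs)
        x = minimize(F, x).x        # stands in for solving grad F = 0
        r /= gamma                  # decrease r by division through gamma > 1
    return x

# Example: minimize f(x, y) = x + y subject to x^2 + y^2 - 1 = 0.
sol = sqpf(lambda x: x[0] + x[1],
           [lambda x: x[0] ** 2 + x[1] ** 2 - 1.0],
           x0=[1.0, 0.0])
print(sol)  # approaches (-1/sqrt(2), -1/sqrt(2))
```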
7 Various Numerical Crossover Operators
Let p1 = (x_1, y_1) and p2 = (x_2, y_2). Crossover operators crossover(p1, p2) include (a sketch of all three operators follows the example below):
- simple crossover: (x_1, a·y_2 + (1−a)·y_1) and (x_2, a·y_1 + (1−a)·y_2), where a ∈ [0,1] is chosen maximal such that the offspring stay feasible
- whole arithmetical crossover: a·p1 + (1−a)·p2 with a ∈ [0,1]
- heuristic crossover (Wright [312]): p1 + (p1 − p2)·a with a ∈ [0,1], if f(p1) > f(p2)
Example: let p1 = (1,2) and p2 = (5,1) be points in the convex 2D space x² + y² ≤ 28, and f(p1) > f(p2).
[Figure: the disc x² + y² ≤ 28 with parents p1 = (1,2) and p2 = (5,1), simple-crossover offspring p_sc1 = (5,1.7) and p_sc2 = (1,1), and heuristic-crossover offspring p_hc = (0, 2.25) for a = 0.25 and p_hc = (−3,3) for a = 1.0.]
- Simple crossover yields (1,1) and (5, √3) (since 25 + 3 = 28).
- Arithmetical crossover yields all points along the line between p1 and p2.
- Heuristic crossover yields all points along the line between p1 and p_hc = (−3,3).
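The following sketch implements the three operators for real-valued vectors and reproduces the example above; the bisection used to find the largest workable a in simple crossover assumes the search space is convex, and the function names are ours:

```python
# A minimal sketch of the three crossover operators on real vectors.
import numpy as np

def simple_crossover(p1, p2, k, feasible, steps=50):
    # Exchange the tails after position k, blending them with the
    # largest a in [0, 1] that keeps the offspring feasible.
    def child(head, tail_a, tail_b, a):
        return np.concatenate([head, a * tail_b + (1.0 - a) * tail_a])
    def max_a(head, tail_a, tail_b):
        if feasible(child(head, tail_a, tail_b, 1.0)):
            return 1.0
        lo, hi = 0.0, 1.0                   # convexity: feasible a's form
        for _ in range(steps):              # an interval starting at 0
            mid = 0.5 * (lo + hi)
            if feasible(child(head, tail_a, tail_b, mid)):
                lo = mid
            else:
                hi = mid
        return lo
    a1 = max_a(p1[:k], p1[k:], p2[k:])
    a2 = max_a(p2[:k], p2[k:], p1[k:])
    return child(p1[:k], p1[k:], p2[k:], a1), child(p2[:k], p2[k:], p1[k:], a2)

def arithmetical_crossover(p1, p2, a):
    return a * p1 + (1.0 - a) * p2          # a in [0, 1]

def heuristic_crossover(p1, p2, a):
    # Assumes f(p1) > f(p2); in a constrained space one would retry
    # with a new a until the offspring is feasible.
    return p1 + a * (p1 - p2)               # a in [0, 1]

# The example above: p1 = (1, 2), p2 = (5, 1) in the disc x^2 + y^2 <= 28.
feasible = lambda p: p[0] ** 2 + p[1] ** 2 <= 28.0
p1, p2 = np.array([1.0, 2.0]), np.array([5.0, 1.0])
print(simple_crossover(p1, p2, 1, feasible))   # ~(1, 1) and (5, sqrt(3))
print(heuristic_crossover(p1, p2, 1.0))        # (-3, 3)
```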
8 Another Example (Crossover Operators)
Let p1 = (0,0,0) and p2 = (1,1,1) in an unconstrained search space.
- Arithmetical crossover produces (a,a,a) with a ∈ [0,1].
- Simple crossover produces (0,0,1), (0,1,1), (1,0,0), and (1,1,0).
- Heuristic crossover produces (a,a,a) with a ∈ [1,2] if f((1,1,1)) > f((0,0,0)), and (a,a,a) with a ∈ [−1,0] if f((1,1,1)) < f((0,0,0)).
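Reusing the operators sketched in the previous section (the space here is unconstrained, so the feasibility test always succeeds):

```python
# The three-dimensional example above, reusing numpy and the operator
# sketch from the previous section.
p1, p2 = np.array([0.0, 0.0, 0.0]), np.array([1.0, 1.0, 1.0])
always_feasible = lambda p: True
print(simple_crossover(p2, p1, 2, always_feasible))  # (1,1,0) and (0,0,1)
print(arithmetical_crossover(p1, p2, 0.25))          # (0.75, 0.75, 0.75)
print(heuristic_crossover(p2, p1, 0.5))              # (1.5, 1.5, 1.5), f(p2) > f(p1)
```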
9 Problems of Optimization with Constraints
[Figure: a search space divided into legal and illegal regions, with solutions S scattered across both; one marked S is the optimal solution.]
10 A Harder Optimization Problem
[Figure: a harder problem in which several disconnected legal regions are separated by illegal regions.]
11 A Friendly Convex Search Space
[Figure: a convex legal region surrounded by illegal regions; the line through p1, p, and p2 meets the border at the points pu and pl.]
Convexity: (1) p1 and p2 in S ⇒ all points between p1 and p2 are in S; (2) p in S ⇒ exactly two border points pu and pl can be found along a given line through p.
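A minimal sketch of how property (2) can be exploited, assuming Python with NumPy: bisection locates the two border points of a convex region along a line through a feasible point. The region, point, and direction below are illustrative.

```python
# In a convex feasible region, a line through an interior point p meets
# the border in exactly two points, p_u and p_l.
import numpy as np

def border_point(feasible, p, d, steps=60):
    # Walk outward from p along direction d until infeasible, then
    # bisect; convexity guarantees a single feasible interval.
    t_lo, t_hi = 0.0, 1.0
    while feasible(p + t_hi * d):
        t_lo, t_hi = t_hi, 2.0 * t_hi       # bracket the border
    for _ in range(steps):
        mid = 0.5 * (t_lo + t_hi)
        if feasible(p + mid * d):
            t_lo = mid
        else:
            t_hi = mid
    return p + t_lo * d

# Example: the disc x^2 + y^2 <= 28 with a vertical line through p.
feasible = lambda q: q[0] ** 2 + q[1] ** 2 <= 28.0
p, d = np.array([1.0, 2.0]), np.array([0.0, 1.0])
p_u = border_point(feasible, p, d)          # upper border point
p_l = border_point(feasible, p, -d)         # lower border point
print(p_u, p_l)                             # (1, sqrt(27)) and (1, -sqrt(27))
```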