Title: MAE 552
MAE 552 Heuristic Optimization Lecture 2
January 25, 2002
The optimization problem is then: find values of the variables that minimize or maximize the objective function while satisfying the constraints. The standard form of the constrained optimization problem can be written as:

Minimize:    F(x)                                          objective function
Subject to:  g_j(x) ≤ 0,   j = 1, ..., m                   inequality constraints
             h_k(x) = 0,   k = 1, ..., l                   equality constraints
             x_i^lower ≤ x_i ≤ x_i^upper,   i = 1, ..., n  side constraints

where x = (x_1, x_2, x_3, x_4, x_5, ..., x_n) are the design variables.
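As an illustration only (not part of the original lecture), here is a minimal sketch of this standard form in code, assuming Python with NumPy/SciPy and a made-up two-variable objective and constraints:

```python
# Hypothetical example of the standard constrained form:
#   minimize F(x) subject to g_j(x) <= 0, h_k(x) = 0, and side constraints.
import numpy as np
from scipy.optimize import minimize

def F(x):                      # objective function (assumed for illustration)
    return (x[0] - 1.0)**2 + (x[1] - 2.0)**2

def g(x):                      # inequality constraint written as g(x) <= 0
    return x[0] + x[1] - 3.0

def h(x):                      # equality constraint h(x) = 0
    return x[0] - 0.5 * x[1]

constraints = [
    # SciPy uses the convention c(x) >= 0, so g(x) <= 0 becomes -g(x) >= 0.
    {"type": "ineq", "fun": lambda x: -g(x)},
    {"type": "eq",   "fun": h},
]
bounds = [(0.0, 10.0), (0.0, 10.0)]   # side constraints x_lower <= x_i <= x_upper

result = minimize(F, x0=np.array([0.5, 0.5]), bounds=bounds, constraints=constraints)
print(result.x, result.fun)
```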
Conditions for Optimality
- Unconstrained Problems
- ∇F(x) = 0: the gradient of F(x) must vanish at the optimum.
- The Hessian matrix must be positive definite (i.e. all eigenvalues positive) at the optimum point.
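A minimal numerical sketch of these two checks (assuming Python/NumPy and a simple quadratic F chosen purely for illustration):

```python
import numpy as np

def grad_F(x):                 # gradient of F(x) = x1^2 + 2*x2^2 (illustrative choice)
    return np.array([2.0 * x[0], 4.0 * x[1]])

def hess_F(x):                 # Hessian of the same F (constant for a quadratic)
    return np.array([[2.0, 0.0],
                     [0.0, 4.0]])

x_star = np.array([0.0, 0.0])  # candidate optimum

gradient_vanishes = np.allclose(grad_F(x_star), 0.0)
eigenvalues = np.linalg.eigvalsh(hess_F(x_star))
positive_definite = np.all(eigenvalues > 0)   # all positive eigenvalues

print(gradient_vanishes, eigenvalues, positive_definite)
```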
Conditions for Optimality
- Unconstrained Problems
- A positive definite Hessian at the minimum ensures only that a local minimum has been found.
- The minimum is the global minimum only if it can be shown that the Hessian is positive definite for all possible values of x. This would imply a convex design space.
- Very hard to prove in practice!
Conditions for Optimality
- Constrained Problems
- Kuhn-Tucker Conditions:
- 1. x is feasible
- 2. λ_j g_j(x) = 0,   j = 1, ..., m
- 3. ∇F(x) + Σ_{j=1}^{m} λ_j ∇g_j(x) + Σ_{k=1}^{l} λ_{m+k} ∇h_k(x) = 0,   with λ_j ≥ 0 and λ_{m+k} unrestricted in sign
- These conditions only guarantee that x is a local optimum.
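A rough numerical sketch of checking these conditions at a candidate point (assuming Python/NumPy and an illustrative one-constraint problem; solving for the multiplier by least squares is one of several reasonable choices, not the lecture's method):

```python
import numpy as np

# Illustrative problem: minimize F(x) = x1^2 + x2^2 subject to g(x) = 1 - x1 - x2 <= 0.
def grad_F(x):
    return np.array([2.0 * x[0], 2.0 * x[1]])

def g(x):
    return 1.0 - x[0] - x[1]

def grad_g(x):
    return np.array([-1.0, -1.0])

x = np.array([0.5, 0.5])                     # candidate optimum (on the constraint)

feasible = g(x) <= 1e-9                      # condition 1: feasibility
# Condition 3: grad F + lambda * grad g = 0  -> solve for lambda by least squares.
lam = np.linalg.lstsq(grad_g(x).reshape(-1, 1), -grad_F(x), rcond=None)[0][0]
stationary = np.allclose(grad_F(x) + lam * grad_g(x), 0.0)
complementary = abs(lam * g(x)) < 1e-9       # condition 2: lambda_j * g_j = 0
nonnegative = lam >= 0                       # inequality multipliers must be >= 0

print(feasible, complementary, stationary, nonnegative, lam)
```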
Conditions for Optimality
- Constrained Problems
- In addition to the Kuhn-Tucker conditions, two other conditions must be satisfied to guarantee a global optimum:
- The Hessian must be positive definite for all x.
- The constraints must be convex.
- A constraint is convex if the line segment connecting any two points in the feasible region lies entirely within the feasible region of the design space (see the sketch below).
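A minimal sketch of this segment test (assuming Python/NumPy and an illustrative constraint g(x) ≤ 0; sampling points along the segment is only a numerical spot check, not a proof of convexity):

```python
import numpy as np

def g(x):                                    # illustrative convex constraint: g(x) <= 0
    return x[0]**2 + x[1]**2 - 4.0

def segment_stays_feasible(a, b, n_samples=101):
    """Check that points on the segment between two feasible points a and b
    remain feasible (a necessary condition for a convex feasible region)."""
    for t in np.linspace(0.0, 1.0, n_samples):
        point = (1.0 - t) * a + t * b        # convex combination of a and b
        if g(point) > 0.0:
            return False
    return True

a = np.array([1.0, 0.0])                     # two feasible points (g <= 0 at both)
b = np.array([0.0, 1.5])
print(segment_stays_feasible(a, b))          # True for this convex region
```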
Determining the Complexity of Problems
- Why are some problems difficult to solve?
- The number of possible solutions in the search space is so large as to forbid an exhaustive search for the best answer.
- The problem is so complex that just to facilitate any answer at all requires that we simplify the model such that any result is essentially useless.
- The evaluation function (objective function) that describes the quality of the proposed solution is noisy or time-varying, thereby requiring a series of solutions to be found.
- The person solving the problem is inadequately prepared or imagines some psychological barrier that prevents them from discovering a solution.
1. The Size of the Search Space
- Example 1: Traveling Salesman Problem (TSP)
- A salesman must visit every city in a territory exactly once and then return home, covering the shortest distance.
- Given the cost of traveling between each pair of cities, how would the salesman plan his trip to minimize the distance traveled?
[Map of five example cities: Seattle, NY, Orlando, LA, Dallas]
1. The Size of the Search Space
What is the size of the search space for the TSP? Each tour can be described as a permutation of the cities:
Seattle-NY-Orlando-Dallas-LA
LA-NY-Seattle-Dallas-Orlando
etc.
1. The Size of the Search Space
Number of permutations of n cities = n!
There are 2n ways to represent each tour (n possible starting cities, and each tour can be traversed in two directions).
Total number of unique tours = n!/(2n) = (n-1)!/2
Size of search space S = (n-1)!/2
So for our example n = 5 and S = 12, which could easily be solved by hand.
S grows factorially, however:
n = 10: S = 181,440 possible solutions
n = 20: S ≈ 6 × 10^16 possible solutions
1. The Size of the Search Space
n = 50: S = 49!/2 ≈ 3 × 10^62 possible solutions
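These counts are easy to reproduce; a minimal Python sketch evaluating the (n-1)!/2 formula above:

```python
import math

def num_unique_tours(n):
    """Number of distinct TSP tours over n cities: (n-1)!/2."""
    return math.factorial(n - 1) // 2

for n in (5, 10, 20, 50):
    print(f"n = {n:2d}: S = {num_unique_tours(n):,}")
```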
1. The Size of the Search Space
Example 2: Nonlinear Programming Problem (NLP)
1. The Size of the Search Space
What is the size of the search space for the NLP? If we assume a machine precision of 6 decimal places, then each variable can take on 10,000,000 possible values. The total number of possible solutions is then 10^(7n).
For n = 2 there are 10^14 = 100,000,000,000,000 possible solutions.
For n = 50 there are 10^350 possible solutions!
It is impossible to enumerate all these solutions, even with the most powerful computers.
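To make the point concrete, here is a small sketch that estimates how long exhaustive enumeration would take; the 10^9 evaluations-per-second rate is an assumed, optimistic figure, not from the lecture:

```python
# Rough enumeration-time estimate for a discretized NLP search space.
EVALS_PER_SECOND = 1e9          # assumed (optimistic) evaluation rate
SECONDS_PER_YEAR = 3.15e7

for n in (2, 5, 10):
    num_solutions = 10.0 ** (7 * n)              # (10^7 values per variable)^n
    years = num_solutions / EVALS_PER_SECOND / SECONDS_PER_YEAR
    print(f"n = {n:2d}: {num_solutions:.1e} solutions, ~{years:.1e} years to enumerate")
```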
2. Modeling the Problem
Whenever we solve a problem, we are actually only finding a solution to a MODEL of the problem (for example, a Finite Element Model, a CFD Model, etc.).
- 2 Steps to Problem Solving:
- Creating a Model for the Problem
- Using that Model to Generate a Solution
- Problem → Model → Solution
- The solution is only a solution in terms of the model used.
2. Modeling the Problem
Example: Methods of Performing Nonlinear Aerodynamic Drag Predictions

Solution Method                         Time for 1 solution
Linear Theory Solution                  2 seconds
Wing-Fuselage Euler Solution            16 minutes
Wing-Fuselage Navier-Stokes Solution    2 hours

There is a tradeoff between the fidelity of the model and the ability to solve it.
2. Modeling the Problem
When faced with a complex problem, we have two choices:
- 1. Simplify the model and try for a more exact solution with a traditional optimizer.
- 2. Keep the model as is and possibly only find an approximate solution, or use a nontraditional optimization method to try to find an exact solution.
- This can be written:
- 1. Problem → Model_a → Solution_p(Model_a)   (approximate model, precise solution)
- 2. Problem → Model_p → Solution_a(Model_p)   (precise model, approximate solution)
- It is generally better to use strategy 2, because with strategy 1 there is no guarantee that the solution to an approximate model will be useful.
3. System Changes over Time
Real-world systems change over time.
- Example: In the TSP, the time to travel between cities could change as a result of many factors:
- Road conditions
- Traffic patterns
- Accidents
- Suppose there are two possibilities that are equally likely for a trip from New York to Buffalo:
- Everything goes fine and it takes 6 hours
- You get delayed by one of the factors above and it takes 7 hours
3. System Changes over Time
- How can this information be put into the model?
- Trip takes 6 hours 50% of the time
- Trip takes 7 hours 50% of the time
[Figure: NY to Buffalo trip with unknown travel time]
3. System Changes over Time
- How can this information be put into the model?
- Simple approach: use 6.5 hours for the trip.
- Problem: it never takes exactly 6.5 hours to make the trip, so the solution found will be for the WRONG problem.
- Alternative: repeatedly simulate the system, with each case occurring 50% of the time (see the sketch below).
- Problem: it could be expensive to run repeated simulations.
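A minimal Monte Carlo sketch of the second approach (Python; the 6/7-hour outcomes and 50/50 probabilities come from the lecture, everything else is illustrative):

```python
import random

def simulate_trip():
    """One random realization of the NY-to-Buffalo trip: 6 or 7 hours, 50/50."""
    return 6.0 if random.random() < 0.5 else 7.0

# Evaluate a candidate route many times and average, instead of plugging in 6.5 hours.
n_runs = 10_000
average_time = sum(simulate_trip() for _ in range(n_runs)) / n_runs
print(f"Average over {n_runs} simulations: {average_time:.3f} hours")
```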
4. Constraints
- Real-world problems do not allow you to choose from the entire search space.
4. Constraints
- Effect of Constraints
- 1. Good Effect: remove part of the search space from consideration.
- Inequality constraints cut the design space into feasible and infeasible regions.
g: x1 + x2 ≤ 3
4. Constraints
- Effect of Constraints
- 1. Good Effect: remove part of the search space from consideration.
- Equality constraints reduce the design space to a line or a plane.
g: x1 + x2 = 3
[Figure: the x1-x2 plane with the equality constraint line; the regions on both sides of the line are infeasible]
4. Constraints
- Effect of Constraints
- 1. Good Effect: remove part of the search space from consideration.
- If an algorithm can be designed so that it only considers the feasible part of the design space, the number of potential solutions can be reduced.
4. Constraints
- 2. Bad Effect: we need an algorithm that finds new solutions that are an improvement over previous solutions AND maintains feasibility.
- Often the optimal solution lies directly along one or more constraints. It is difficult to move along a constraint without corrupting the solution.
- One option is to design an algorithm that locates a feasible design and then never corrupts it while searching for a design that better satisfies the objective function (a simple sketch follows below).
- Very difficult to do!
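One crude way to realize this idea is rejection-based random search: start from a feasible point and accept only moves that stay feasible and improve the objective. This is an illustrative sketch (Python) under assumed objective and constraint functions, not the algorithm the lecture has in mind:

```python
import random

def F(x):                                    # illustrative objective to minimize
    return (x[0] - 2.0)**2 + (x[1] - 2.0)**2

def feasible(x):                             # illustrative constraint: x1 + x2 - 3 <= 0
    return x[0] + x[1] - 3.0 <= 0.0

x = [0.0, 0.0]                               # known feasible starting design
for _ in range(20_000):
    # Propose a small random perturbation of the current design.
    candidate = [x[0] + random.uniform(-0.05, 0.05),
                 x[1] + random.uniform(-0.05, 0.05)]
    # Accept only moves that keep feasibility AND improve the objective.
    if feasible(candidate) and F(candidate) < F(x):
        x = candidate

print(x, F(x))   # should approach the constrained optimum near (1.5, 1.5)
```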
Complexity Theory
- The complexity of decision and optimization problems is classified according to the relationship between solution time and input size.
- The simplest way to measure the running time of a program is to determine the overall number of instructions executed by the algorithm before halting.
- For a problem with input size n, determine the cost (time) of applying the algorithm to the worst-case instance of the problem.
- This provides an upper bound on the execution time.
Complexity Theory
- The goal is to express the execution time of the algorithm in terms of the input size:
- Time = F(number of inputs)
- The standard O-notation is used to describe the execution cost of an algorithm with input size n:
- Saying an algorithm is O(f(n)) means its execution time grows no faster than order f(n) as n increases asymptotically.
Asymptotic Analysis
- Let t(x) be the running time of algorithm A on input x. The worst-case running time of A is given by t(n) = max{ t(x) : x such that |x| ≤ n }
- Upper bound: A has complexity O(f(n)) if t(n) is O(f(n)) (that is, we ignore constants)
- Lower bound: A has complexity Ω(f(n)) if t(n) is Ω(f(n))
Complexity Theory
Examples:

Execution Time               Bounded By
b·n^2                        O(n^2)
b·n^2 + log(n) + n           O(n^2)
2^n + b·n^2 + log(n) + n     O(2^n)
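A tiny numerical illustration of why only the dominant term matters (Python; b = 3 is an arbitrary constant chosen here):

```python
import math

b = 3.0                                     # arbitrary constant; O-notation ignores it

for n in (10, 100, 1000, 10_000):
    f = b * n**2 + math.log(n) + n          # second example above
    ratio = f / n**2                        # approaches the constant b as n grows
    print(f"n = {n:6d}: f(n) = {f:.3e}, f(n)/n^2 = {ratio:.4f}")
```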
Input Size
- Size of input = number of bits needed to represent the specific input
- We assume the existence of an encoding scheme which is used to describe any problem instance
- For any pair of natural encoding schemes and for any instance x, the lengths of the resulting strings are polynomially related
Complexity Classes
- For any function f(n), TIME(f(n)) is the set of decision problems which can be solved with time complexity O(f(n))
- P = the union of TIME(n^k) for all k
- EXPTIME = the union of TIME(2^(n^k)) for all k
- P is contained in EXPTIME
- It is possible to prove (by diagonalization) that EXPTIME is not contained in P
Examples
- SATISFYING TRUTH ASSIGNMENT: given a logical formula F and a truth assignment f, does f satisfy F?
- SATISFYING TRUTH ASSIGNMENT is in P
- SATISFIABILITY (simply, SAT): given a logical formula F, is F satisfiable?
- SAT is in EXPTIME (a brute-force approach is sketched below).
- Open problem: is SAT in P?
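An illustrative contrast between the two problems (Python; representing the formula as a list of clauses is an assumption made here, not the lecture's notation): checking a given assignment takes polynomial time, while the naive satisfiability search enumerates all 2^n assignments.

```python
from itertools import product

# A CNF formula as a list of clauses; each literal is (variable_index, is_positive).
# Example: (x0 OR NOT x1) AND (x1 OR x2)
formula = [[(0, True), (1, False)], [(1, True), (2, True)]]

def satisfies(assignment, formula):
    """Polynomial-time check: does this truth assignment satisfy the formula?"""
    return all(any(assignment[var] == positive for var, positive in clause)
               for clause in formula)

def brute_force_sat(formula, n_vars):
    """Exponential-time search: try all 2^n assignments."""
    for bits in product([False, True], repeat=n_vars):
        if satisfies(bits, formula):
            return bits
    return None

print(satisfies((True, False, True), formula))   # verification: fast
print(brute_force_sat(formula, 3))               # search: 2^n candidates in the worst case
```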
Complexity Classes: Class NPO
- Optimization problems such that:
- An instance of the problem is recognizable in polynomial time
- Feasible solutions are recognizable in polynomial time
- The objective function is computable in polynomial time
Class PO
- NPO problems solvable in polynomial time
- Examples: Linear Programming, (convex) Quadratic Programming.
Class NP-hard Problems
- An optimization problem P is NP-hard if it is at least as hard to solve as any problem in NP.
- Solvable only in exponential time (no polynomial-time algorithms are known).
- Examples:
- TRAVELING SALESMAN
- MAXIMUM QUADRATIC PROGRAMMING
- MAXIMUM KNAPSACK
- and many more