Optimization of thermal processes
Lecture 8
Maciej Marek
Czestochowa University of Technology, Institute of Thermal Machinery
Overview of the lecture
- Indirect search (descent) methods
- Steepest descent (Cauchy) method
  - The concept of the method
  - Elementary example
  - Practical example: optimal design of a three-stage compressor
  - Possible problems with the steepest descent method
- Conjugate directions methods
- Davidon-Fletcher-Powell method
General types of iterative methods
- Direct search methods (discussed in the previous lecture) - only the value of the objective function is required (the Gauss-Seidel and Powell's methods are good examples)
- Indirect (descent) methods - methods of this type require not only the value of the objective function, but also the values of its derivatives; thus, we need both f(X) and the gradient ∇f(X)
Indirect search (descent) methods
2D case. The gradient is
∇f = [∂f/∂x1, ∂f/∂x2]^T
The gradient points in the direction of steepest ascent, so -∇f is the steepest descent direction.
(Figure: descent directions)
Indirect search (descent) methods
The gradient is always perpendicular to the isocontours of the objective function.
The step length may be constant (this is the idea of the simple gradient method) or may be found with some one-dimensional optimization technique.
(Figure: gradient and descent direction on the isocontours)
All descent methods make use of the gradient vector, but in some of them the gradient is only one of the components needed to find the search direction.
Calculation of the gradient vector
- To find the gradient we need to calculate the partial derivatives of the objective function. But this may lead to certain problems:
  - the function is differentiable, but the calculation of the components of the gradient is either impractical or impossible
  - the partial derivatives can be calculated, but it requires a lot of computational time
  - the gradient is not defined at all points
- In the first (or the second) case we can use e.g. the forward finite difference formula
  ∂f/∂xi ≈ [f(X + Δxi ei) - f(X)] / Δxi
  where ei is the unit vector along the i-th coordinate.
Calculation of the gradient vector
- The scalar quantity (grid step) Δxi cannot be:
  - too large - the truncation error of the finite difference formula may be large
  - too small - the numerical round-off error may be unacceptable
- We can use the central finite difference scheme
  ∂f/∂xi ≈ [f(X + Δxi ei) - f(X - Δxi ei)] / (2Δxi)
  which is more accurate, but requires an additional function evaluation (see the code sketch below).
- In the third case (when the gradient is not defined), we usually have to resort to direct search methods.
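A minimal sketch of both finite-difference approximations in Python (NumPy is assumed; the names grad_forward and grad_central are illustrative, not from the lecture):

```python
import numpy as np

def grad_forward(f, x, dx=1e-6):
    """Forward-difference approximation of the gradient of f at x."""
    x = np.asarray(x, dtype=float)
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = dx
        g[i] = (f(x + e) - f(x)) / dx          # one extra evaluation per component
    return g

def grad_central(f, x, dx=1e-6):
    """Central-difference approximation: more accurate, two evaluations per component."""
    x = np.asarray(x, dtype=float)
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = dx
        g[i] = (f(x + e) - f(x - e)) / (2.0 * dx)
    return g
```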
Steepest descent (Cauchy) method
1. Start with an arbitrary initial point X1. Set the iteration number as i = 1.
2. Find the search direction Si as Si = -∇f(Xi).
3. Determine the optimal step length λi* in the direction Si and set Xi+1 = Xi + λi* Si.
4. Test the new point Xi+1 for optimality. If Xi+1 is optimum, stop the process. Otherwise, go to step 5.
5. Set the new iteration number i = i + 1 and go to step 2.
(A minimal code sketch of the algorithm follows below.)
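A minimal sketch of the method in Python, using SciPy's one-dimensional minimizer for the line search (the quadratic at the bottom is only an illustrative test function, not the lecture example):

```python
import numpy as np
from scipy.optimize import minimize_scalar

def steepest_descent(f, grad, x1, tol=1e-6, max_iter=200):
    """Cauchy's steepest descent with a 1-D line search for the step length."""
    x = np.asarray(x1, dtype=float)
    for i in range(max_iter):
        s = -grad(x)                        # step 2: search direction
        if np.linalg.norm(s) < tol:         # step 4: optimality test (gradient ~ 0)
            break
        lam = minimize_scalar(lambda t: f(x + t * s)).x   # step 3: optimal step length
        x = x + lam * s
    return x

# usage on a simple quadratic
f = lambda x: x[0]**2 + 10.0 * x[1]**2
g = lambda x: np.array([2.0 * x[0], 20.0 * x[1]])
print(steepest_descent(f, g, [5.0, 1.0]))   # converges towards [0, 0]
```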
Steepest descent method (example)
Minimize
starting from the point
Steepest descent method (example)
Now, we must minimize f(X1 + λ S1) with respect to λ to find the step length. Using the necessary condition df/dλ = 0, we obtain the optimal step length λ1*.
Is it optimum? Let's calculate the gradient at this point.
Steepest descent method (example, continued)
When to stop? (convergence criteria)
- When the change in the function value in two consecutive iterations is small: |f(Xi+1) - f(Xi)| ≤ ε1
- When the partial derivatives (components of the gradient) of f are small: |∂f/∂xi| ≤ ε2
- When the change in the design vector in two consecutive iterations is small: ||Xi+1 - Xi|| ≤ ε3
(Figure: function value vs. iterations)
Near the optimum point the gradient should not differ much from zero.
(A code sketch combining these criteria follows below.)
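A minimal sketch of a combined stopping test based on the three criteria above (the single tolerance eps is an assumption; in practice each criterion may use its own threshold):

```python
import numpy as np

def converged(f_old, f_new, x_old, x_new, g_new, eps=1e-6):
    """True if any of the three stopping criteria holds (x_*, g_new are NumPy arrays)."""
    small_df   = abs(f_new - f_old) <= eps               # change in function value
    small_grad = np.all(np.abs(g_new) <= eps)             # gradient components near zero
    small_dx   = np.linalg.norm(x_new - x_old) <= eps     # change in the design vector
    return small_df or small_grad or small_dx
```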
Optimum design of a three-stage compressor (steepest descent method)
Objective: find the values of the interstage pressures that minimize the work input. The heat exchangers reduce the temperature of the gas to T after compression.
Let's use the steepest descent method.
Optimum design of a three-stage compressor (steepest descent method)
It's convenient to use the following objective function:
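The formula itself is not preserved in this transcript. As a sketch only, assuming polytropic compression with exponent n and ideal intercooling back to the inlet temperature T1, the total specific work of the three stages as a function of the two interstage pressures might be coded as follows (all symbols and numerical values here are assumptions, not taken from the lecture):

```python
import numpy as np

# assumed data (illustrative values only)
p1, p4 = 1.0, 64.0      # inlet and delivery pressure
n = 1.4                 # polytropic exponent
R, T1 = 287.0, 300.0    # gas constant [J/(kg K)] and intercooler outlet temperature [K]
c = (n - 1.0) / n

def work_input(x):
    """Total specific work of the three stages; x = [p2, p3] are the interstage pressures."""
    p2, p3 = x
    return (n / (n - 1.0)) * R * T1 * (
        (p2 / p1)**c + (p3 / p2)**c + (p4 / p3)**c - 3.0
    )
```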
Optimum design of a three-stage compressor (steepest descent method)
Using these values we calculate the first search direction:
Optimum design of a three-stage compressor (steepest descent method)
This means that we are looking for the minimum of
the single-variable function
Optimum design of a three-stage compressor (steepest descent method)
The value of the objective function at this point
is
and so on...
Steepest descent method (possible difficulty)
Long narrow valley
- If the minimum lies in a long narrow valley, the steepest descent method may converge rather slowly
- The problem stems from the fact that the gradient is only a local property of the objective function
- A more clever choice of search directions is possible. It is based on the concept of conjugate directions.
Conjugate directions
A set of n vectors (directions) S1, S2, ..., Sn is said to be conjugate (more accurately, A-conjugate) if
Si^T A Sj = 0 for all i ≠ j, i, j = 1, 2, ..., n
Remark: Powell's method is an example of a conjugate directions method.
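A small sketch of how the definition can be checked numerically (the function name are_conjugate and the test data are illustrative assumptions):

```python
import numpy as np

def are_conjugate(S, A, tol=1e-10):
    """Check that the columns of S are mutually A-conjugate: Si^T A Sj = 0 for i != j."""
    G = S.T @ A @ S                       # Gram-like matrix in the A inner product
    off_diag = G - np.diag(np.diag(G))
    return np.all(np.abs(off_diag) < tol)

# usage: for a diagonal A the coordinate directions are A-conjugate (and perpendicular)
A = np.diag([1.0, 5.0])
S = np.eye(2)
print(are_conjugate(S, A))   # True
```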
Conjugate directions (example)
For instance, suppose we have
Convergence after two iterations.
So, conjugate directions are in this case simply perpendicular.
Conjugate directions (quadratic convergence)
- Thus, for quadratic functions a conjugate directions method converges after at most n steps, where n is the number of design variables
- That is really fast, but what about other functions?
- Fortunately, a general nonlinear function can be approximated reasonably well by a quadratic function near its minimum (see the Taylor expansion)
- A conjugate directions method is therefore expected to speed up the convergence even for general nonlinear objective functions (a numerical illustration follows below)
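A minimal numerical illustration of the n-step property for a quadratic in two variables: two A-conjugate directions are built by Gram-Schmidt in the A inner product, and exact line searches along them reach the minimizer in exactly two steps (the matrix A and vector b are arbitrary illustrative data):

```python
import numpy as np

# quadratic f(x) = 0.5 x^T A x - b^T x  (illustrative data, not from the lecture)
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])

# build two A-conjugate directions by Gram-Schmidt in the A inner product
d1 = np.array([1.0, 0.0])
e2 = np.array([0.0, 1.0])
d2 = e2 - (e2 @ A @ d1) / (d1 @ A @ d1) * d1

x = np.zeros(2)
for d in (d1, d2):
    # exact step length along d that minimizes the quadratic f(x + lam*d)
    lam = (b - A @ x) @ d / (d @ A @ d)
    x = x + lam * d

print(x, np.linalg.solve(A, b))   # identical: minimum reached in n = 2 steps
```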
Davidon-Fletcher-Powell method
1. Start with an initial point X1 and an n x n positive definite symmetric matrix B1 that approximates the inverse of the Hessian matrix of f. Usually, B1 is taken as the identity matrix I. Set the iteration number as i = 1.
2. Compute the gradient of the function, ∇fi, at the point Xi, and set the search direction Si = -Bi ∇fi.
3. Find the optimal step length λi* in the direction Si and set Xi+1 = Xi + λi* Si.
4. Test the new point Xi+1 for optimality. If Xi+1 is optimal, terminate the iterative process. Otherwise, go to step 5.
Davidon-Fletcher-Powell method
5. Update the matrix Bi as
   Bi+1 = Bi + (di di^T)/(di^T qi) - (Bi qi qi^T Bi)/(qi^T Bi qi)
   where di = Xi+1 - Xi = λi* Si and qi = ∇fi+1 - ∇fi.
6. Set the new iteration number as i = i + 1 and go to step 2.
(A code sketch of the whole method follows below.)
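A minimal Python sketch of the method using the update formula above (the line search again uses SciPy's one-dimensional minimizer; the Rosenbrock test function at the bottom is illustrative, not the lecture example):

```python
import numpy as np
from scipy.optimize import minimize_scalar

def dfp(f, grad, x1, tol=1e-6, max_iter=100):
    """Davidon-Fletcher-Powell quasi-Newton method; B approximates the inverse Hessian."""
    x = np.asarray(x1, dtype=float)
    B = np.eye(x.size)                       # step 1: B1 = I
    g = grad(x)
    for i in range(max_iter):
        if np.linalg.norm(g) < tol:          # step 4: optimality test
            break
        s = -B @ g                           # step 2: search direction
        lam = minimize_scalar(lambda t: f(x + t * s)).x   # step 3: line search
        x_new = x + lam * s
        g_new = grad(x_new)
        d = x_new - x                        # change in the design vector
        q = g_new - g                        # change in the gradient
        # step 5: DFP update of the inverse-Hessian approximation
        B = B + np.outer(d, d) / (d @ q) - (B @ np.outer(q, q) @ B) / (q @ B @ q)
        x, g = x_new, g_new
    return x

# usage on Rosenbrock's function (illustrative test case)
f = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
g = lambda x: np.array([-2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0]**2),
                        200 * (x[1] - x[0]**2)])
print(dfp(f, g, [-1.2, 1.0]))   # converges towards [1, 1]
```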
Thank you for your attention