1
Optimization of thermal processes
Lecture 8
Maciej Marek Czestochowa University of
Technology Institute of Thermal Machinery
2
Overview of the lecture
  • Indirect search (descent) methods
  • Steepest descent (Cauchy) method
  • The concept of the method
  • Elementary example
  • Practical example: optimal design of a three-stage
    compressor
  • Possible problems with steepest descent method
  • Conjugate directions methods
  • Davidon-Fletcher-Powell method

3
General types of iterative methods
  • Direct search methods (discussed in the previous
    lecture)
  • only the value of the objective function is
    required (Gauss-Seidel and Powell's methods are
    good examples)
  • Indirect (descent) methods
  • methods of this type require not only the value
    of the objective function but also values of its
    derivatives
  • thus, we need both the function value $f(\mathbf{X})$
    and its gradient $\nabla f(\mathbf{X})$
4
Indirect search (descent) methods
2D case. The gradient is
$$\nabla f = \left[\frac{\partial f}{\partial x_1},\ \frac{\partial f}{\partial x_2}\right]^{T}$$
The gradient points in the direction of steepest
ascent, so $-\nabla f$ is the steepest descent direction.
(Figure: descent directions.)
5
Indirect search (descent) methods
The gradient is always perpendicular to the
isocontours of the objective function.
The step length may be constant (this is the idea
of the simple gradient method) or may be found with
some one-dimensional optimization technique.
(Figure: gradient and descent direction on an isocontour.)
All descent methods make use of the gradient
vector, but in some of them the gradient is only
one of the components needed to find the
search direction.
6
Calculation of the gradient vector
  • To find the gradient we need to calculate the
    partial derivatives of the objective function.
    But this may lead to certain problems:
  • when the function is differentiable, but the
    calculation of the components of the gradient is
    either impractical or impossible
  • although the partial derivatives can be
    calculated, it requires a lot of computational
    time
  • when the gradient is not defined at all points
  • In the first (or the second) case we can use e.g.
    the forward finite difference formula
    $$\frac{\partial f}{\partial x_i}\bigg|_{\mathbf{X}} \approx \frac{f(\mathbf{X} + \Delta x_i \mathbf{e}_i) - f(\mathbf{X})}{\Delta x_i}$$
    where $\mathbf{e}_i$ is the unit vector along the $i$-th coordinate.

7
Calculation of the gradient vector
  • The scalar quantity (grid step) $\Delta x_i$ cannot be
  • too large: the truncation error of the finite
    difference formula may be large
  • too small: numerical round-off error may be
    unacceptable
  • We can use the central finite difference scheme
    $$\frac{\partial f}{\partial x_i}\bigg|_{\mathbf{X}} \approx \frac{f(\mathbf{X} + \Delta x_i \mathbf{e}_i) - f(\mathbf{X} - \Delta x_i \mathbf{e}_i)}{2\,\Delta x_i}$$
    which is more accurate, but requires
    an additional function evaluation.
  • In the third case (when the gradient is not defined),
    we usually have to resort to direct search
    methods.
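As an illustration of both schemes, here is a minimal numerical-gradient sketch in Python; the function name numerical_gradient, its defaults, and the test function are my own choices, not from the lecture.

```python
import numpy as np

def numerical_gradient(f, x, dx=1e-6, central=True):
    """Approximate the gradient of f at x by finite differences.

    central=True uses the central scheme (more accurate, but one extra
    function evaluation per component); otherwise the forward scheme."""
    x = np.asarray(x, dtype=float)
    grad = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = dx                      # perturb only the i-th coordinate
        if central:
            grad[i] = (f(x + e) - f(x - e)) / (2.0 * dx)
        else:
            grad[i] = (f(x + e) - f(x)) / dx
    return grad

# quick check on f(x1, x2) = x1**2 + 3*x2 (illustrative function)
print(numerical_gradient(lambda x: x[0]**2 + 3*x[1], [2.0, 1.0]))  # ~[4, 3]
```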

8
Steepest descent (Cauchy) method
  1. Start with an arbitrary initial point $\mathbf{X}_1$. Set the
    iteration number as $i = 1$.
  2. Find the search direction $\mathbf{S}_i$ as $\mathbf{S}_i = -\nabla f(\mathbf{X}_i)$.
  3. Determine the optimal step length $\lambda_i^*$ in the
    direction $\mathbf{S}_i$ and set $\mathbf{X}_{i+1} = \mathbf{X}_i + \lambda_i^* \mathbf{S}_i$.
  4. Test the new point for optimality. If $\mathbf{X}_{i+1}$
    is optimum, stop the process. Otherwise, go
    to step 5.
  5. Set the new iteration number $i = i + 1$ and go to step
    2. (A minimal code sketch of this procedure follows below.)
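A minimal Python sketch of the steepest descent loop; the function name steepest_descent, the use of scipy's minimize_scalar for the one-dimensional search, and the quadratic test function are my own choices, not from the lecture.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def steepest_descent(f, grad, x1, tol=1e-6, max_iter=200):
    """Cauchy's steepest descent with a numerical line search."""
    x = np.asarray(x1, dtype=float)
    for i in range(max_iter):
        s = -grad(x)                      # step 2: search direction
        if np.linalg.norm(s) < tol:       # step 4: gradient close to zero
            break
        # step 3: optimal step length along s (one-dimensional minimization)
        lam = minimize_scalar(lambda t: f(x + t * s)).x
        x = x + lam * s                    # steps 3 and 5: move and iterate
    return x

# Example: a simple quadratic bowl (illustrative test function, not from the lecture)
f = lambda x: x[0]**2 + 2 * x[1]**2
grad = lambda x: np.array([2 * x[0], 4 * x[1]])
print(steepest_descent(f, grad, [3.0, -2.0]))   # should approach [0, 0]
```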

9
Steepest descent method (example)
Minimize
starting from the point
10
Steepest descent method (example)
Now, we must optimize the resulting single-variable function of $\lambda$
to find the step length. Using the necessary
condition $\mathrm{d}f/\mathrm{d}\lambda = 0$ we obtain the optimal step.
Is it optimum? Let's calculate the gradient at
this point
11
Steepest descent method (example)
12
Steepest descent method (example)
13
When to stop? (convergence criteria)
  1. When the change in function value in two
    consecutive iterations is small
  2. When the partial derivatives (components of the
    gradient) of f are small
  3. When the change in the design vector in two
    consecutive iterations is small

(Plot: gradient vs. iterations.)
Near the optimum point the gradient should not
differ much from zero.
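The exact tolerance notation from the slides is not preserved in this transcript; typical forms of these three criteria (with tolerances $\varepsilon_1$, $\varepsilon_2$, $\varepsilon_3$ introduced here only for illustration) are

$$\left|\frac{f(\mathbf{X}_{i+1}) - f(\mathbf{X}_i)}{f(\mathbf{X}_i)}\right| \le \varepsilon_1,
\qquad
\left|\frac{\partial f}{\partial x_j}(\mathbf{X}_{i+1})\right| \le \varepsilon_2 \ \ (j = 1,\dots,n),
\qquad
\left\lVert \mathbf{X}_{i+1} - \mathbf{X}_i \right\rVert \le \varepsilon_3 .$$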
14
Optimum design of a three-stage compressor (steepest
descent method)
Objective: find the values of the interstage pressures
to minimize the work input. The heat exchangers
reduce the temperature of the gas to T after
compression.
Let's use the steepest descent method.
15
Optimum design of a three-stage compressor (steepest
descent method)
It's convenient to use the following
objective function
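The slide's own expression is not preserved in this transcript; a common form of the work-input objective for three-stage compression of an ideal gas with intercooling back to T between stages (with polytropic exponent n, gas constant R, fixed inlet and outlet pressures p1 and p4, and interstage pressures p2, p3 as design variables; these symbols are my own labels) is

$$W(p_2, p_3) = \frac{nRT}{n-1}\left[\left(\frac{p_2}{p_1}\right)^{\frac{n-1}{n}} + \left(\frac{p_3}{p_2}\right)^{\frac{n-1}{n}} + \left(\frac{p_4}{p_3}\right)^{\frac{n-1}{n}} - 3\right].$$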
16
Optimum design of a three-stage compressor (steepest
descent method)
Using these values we calculate the first search
direction
17
Optimum design of a three-stage compressor (steepest
descent method)
This means that we are looking for the minimum of
the single-variable function
18
Optimum design of a three-stage compressor (steepest
descent method)
The value of the objective function at this point
is
and so on...
19
Steepest descent method (possible difficulty)
Long narrow valley
  • If the minimum lies in a long narrow valley, the
    steepest descent method may converge rather
    slowly
  • The problem stems from the fact that the gradient is
    only a local property of the objective function
  • A more clever choice of search directions is
    possible. It is based on the concept of conjugate
    directions.

20
Conjugate directions
A set of $n$ vectors (directions) $\mathbf{S}_1, \mathbf{S}_2, \dots, \mathbf{S}_n$ is
said to be conjugate (more accurately,
$\mathbf{A}$-conjugate) if
$$\mathbf{S}_i^{T}\mathbf{A}\,\mathbf{S}_j = 0 \quad \text{for all } i \neq j$$
where $\mathbf{A}$ is a symmetric matrix.
Remark: Powell's method is an example of a
conjugate directions method.
21
Conjugate directions (example)
For instance, suppose we have
Convergence after two iterations.
So, conjugate directions are in this case simply
perpendicular.
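The specific matrix from this slide is not preserved here; as an illustrative check (with an assumed matrix A of my own choosing), A-conjugacy can be verified numerically, and for A equal to the identity it reduces to ordinary perpendicularity:

```python
import numpy as np

# Illustrative check of A-conjugacy; this matrix A is an assumption,
# not the one used on the slide.
A = np.array([[2.0, 0.0],
              [0.0, 8.0]])
s1 = np.array([1.0, 0.0])
s2 = np.array([0.0, 1.0])

print(s1 @ A @ s2)          # 0.0 -> s1 and s2 are A-conjugate
print(s1 @ np.eye(2) @ s2)  # 0.0 -> for A = I, conjugacy is just perpendicularity
```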
22
Conjugate directions (quadratic convergence)
  • Thus, for quadratic functions a conjugate
    directions method converges after n steps (at
    most), where n is the number of design variables
  • That is really fast, but what about other
    functions?
  • Fortunately, a general nonlinear function can be
    approximated reasonably well by a quadratic
    function near its minimum (see the Taylor
    expansion)
  • Conjugate directions methods are therefore expected
    to speed up the convergence even for general
    nonlinear objective functions (a minimal code
    sketch of the quadratic case follows below)
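A minimal Python sketch of the quadratic case, using the linear conjugate gradient method as a concrete conjugate-directions scheme; the function name and the test matrix are my own choices, not from the lecture.

```python
import numpy as np

def conjugate_directions_quadratic(A, b, x0):
    """Minimize f(x) = 0.5 x^T A x - b^T x along A-conjugate directions
    (linear conjugate gradients). For an n x n SPD matrix A the exact
    minimum is reached in at most n steps (in exact arithmetic)."""
    x = np.asarray(x0, dtype=float)
    r = A @ x - b                            # gradient of f at x
    s = -r                                   # first direction: steepest descent
    for _ in range(len(b)):
        if np.linalg.norm(r) < 1e-12:
            break
        lam = -(r @ s) / (s @ (A @ s))       # optimal step length along s
        x = x + lam * s
        r_new = A @ x - b
        beta = (r_new @ r_new) / (r @ r)     # Fletcher-Reeves style coefficient
        s = -r_new + beta * s                # next direction is A-conjugate to s
        r = r_new
    return x

# Example: 2-variable quadratic; the minimum is reached after at most 2 steps
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
print(conjugate_directions_quadratic(A, b, np.zeros(2)))  # ~ A^{-1} b
```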

23
Davidon-Fletcher-Powell method
  1. Start with an initial point $\mathbf{X}_1$ and an $n \times n$
    positive definite symmetric matrix $\mathbf{B}_1$ to
    approximate the inverse of the Hessian matrix of
    $f$. Usually, $\mathbf{B}_1$ is taken as the identity matrix
    $\mathbf{I}$. Set the iteration number as $i = 1$.
  2. Compute the gradient of the function, $\nabla f_i$, at
    point $\mathbf{X}_i$, and set $\mathbf{S}_i = -\mathbf{B}_i \nabla f_i$.
  3. Find the optimal step length $\lambda_i^*$ in the
    direction $\mathbf{S}_i$ and set $\mathbf{X}_{i+1} = \mathbf{X}_i + \lambda_i^* \mathbf{S}_i$.
  4. Test the new point $\mathbf{X}_{i+1}$ for optimality. If it is
    optimal, terminate the iterative process.
    Otherwise, go to step 5.

24
Davidon-Fletcher-Powell method
  5. Update the matrix $\mathbf{B}_i$ as
    $$\mathbf{B}_{i+1} = \mathbf{B}_i + \frac{\Delta\mathbf{X}_i\,\Delta\mathbf{X}_i^{T}}{\Delta\mathbf{X}_i^{T}\,\mathbf{Q}_i} - \frac{\mathbf{B}_i \mathbf{Q}_i \mathbf{Q}_i^{T} \mathbf{B}_i}{\mathbf{Q}_i^{T} \mathbf{B}_i \mathbf{Q}_i}$$
  6. Set the new iteration number as $i = i + 1$ and go to
    step 2.

where $\Delta\mathbf{X}_i = \mathbf{X}_{i+1} - \mathbf{X}_i = \lambda_i^* \mathbf{S}_i$ and $\mathbf{Q}_i = \nabla f(\mathbf{X}_{i+1}) - \nabla f(\mathbf{X}_i)$.
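A minimal Python sketch of the DFP iteration, assuming the rank-2 update above; the function name dfp, the use of scipy's minimize_scalar for the line search, and the Rosenbrock test function are my own choices, not from the lecture.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def dfp(f, grad, x1, tol=1e-6, max_iter=100):
    """Davidon-Fletcher-Powell quasi-Newton method (minimal sketch).

    B approximates the inverse Hessian and is updated with the DFP
    rank-2 formula after every step."""
    x = np.asarray(x1, dtype=float)
    B = np.eye(x.size)                    # step 1: B_1 = identity matrix
    g = grad(x)
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:       # step 4: optimality test
            break
        s = -B @ g                        # step 2: search direction
        lam = minimize_scalar(lambda t: f(x + t * s)).x   # step 3: line search
        dx = lam * s
        x_new = x + dx
        g_new = grad(x_new)
        q = g_new - g                     # change of the gradient
        # step 5: DFP update of the inverse-Hessian approximation
        B = (B + np.outer(dx, dx) / (dx @ q)
               - (B @ np.outer(q, q) @ B) / (q @ B @ q))
        x, g = x_new, g_new               # step 6: next iteration
    return x

# Example: Rosenbrock function (a classic quasi-Newton test, not from the lecture)
f = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
grad = lambda x: np.array([-2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0]**2),
                           200 * (x[1] - x[0]**2)])
print(dfp(f, grad, [-1.2, 1.0]))          # should approach [1, 1]
```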
25
Thank you for your attention