Chapter 2 Single Variable Optimization - PowerPoint PPT Presentation


1
Chapter 2 Single Variable Optimization
  • Shi-Shang Jang
  • National Tsing-Hua University
  • Chemical Engineering

2
Contents
  • Introduction
  • Examples
  • Methods of Regional Elimination
  • Methods of Polynomial Approximation
  • Methods that requires derivatives
  • Conclusion

3
1. Introduction
  • A single variable problem is such that
  • Min f(x), x ∈ [a, b]
  • Given f continuous, the optimality criterion is
    such that
  • df/dx = 0 at x = x*
  • where x* is either a local minimum, a local
    maximum, or a saddle point (a stationary point)

4
Property of a single variable function
(i) a discrete function
(ii) a discontinuous function
(iii) a continuous function
5
2. Example
  • In a chemical plant, the costs of pipes, their
    fittings, and pumping are important investment
    costs. Consider the design of a pipeline L feet
    long that should carry fluid at the rate of Q
    gpm. The selection of the economic pipe diameter
    D (in.) is based on minimizing the annual cost of
    pipe, pump, and pumping. Suppose the annual cost
    of a pipeline with a standard carbon steel pipe
    and a motor-driven centrifugal pump can be
    expressed as
  • f(D) = 0.45L + 0.245LD^1.5 + 3.25 hp^0.5 + 61.6 hp^0.925 + 102,
    where hp = 4.4e-8 LQ^3/D^5 + 1.92e-9 LQ^2.68/D^4.68

Formulate the appropriate single-variable
optimization problem for designing a pipe of
length 1000 ft with a fluid rate of 20 gpm.
The diameter of the pipe should be between 0.25
and 6 in.
6
The MATLAB Code for Process Design Example
  d = linspace(0.25,6);
  l = 1000; q = 20;
  for i = 1:100
    hp = 4.4e-8*(l*q^3)/d(i)^5 + 1.92e-9*(l*q^2.68)/(d(i)^4.68);
    f(i) = 0.45*l + 0.245*l*d(i)^1.5 + 3.25*hp^0.5 + 61.6*hp^0.925 + 102;
  end

(Plot: annual cost versus pipe diameter)
7
2. Example- Continued
  • Carry out a single-variable search of
    f(x) = 3x^2 + 12/x - 5

df/dx = 6x - 12/x^2 = 0
8
Solution Methods (1) Analytical Approach
  • Problem
  • Algorithm
  • Step 1 Set df/dx = 0 and solve for all stationary
    points.
  • Step 2 Select all stationary points x1, x2, ..., xN
    in [a, b].
  • Step 3 Compare the function values f(x1),
    f(x2), ..., f(xN), together with the endpoint values
    f(a), f(b), and find the global minimum.
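As an illustration, the three steps can be carried out on the function of the previous slide, f(x) = 3x^2 + 12/x - 5. The sketch below is in Python rather than the course's MATLAB, and the interval [0.5, 3] is an assumption made for this example:

```python
# Analytical approach, illustrated on f(x) = 3x^2 + 12/x - 5.
# The interval [0.5, 3] is an assumed bound for this sketch.
def f(x):
    return 3 * x**2 + 12.0 / x - 5

# Step 1: df/dx = 6x - 12/x^2 = 0  =>  x^3 = 2  =>  x = 2^(1/3)
stationary = [2.0 ** (1.0 / 3.0)]

# Step 2: keep the stationary points inside [a, b]; on a bounded
# interval the endpoints are also candidates.
a, b = 0.5, 3.0
candidates = [x for x in stationary if a <= x <= b] + [a, b]

# Step 3: compare function values and pick the global minimum.
x_min = min(candidates, key=f)
print(x_min, f(x_min))
```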

9
Example Polynomial Problem
  • Example
  • In the interval [-2, 4].

and f(3) = 37, f(-1) = 5 ⇒ 3 is the optimum point.

(Plot: f(x) versus x)
10
Example- Inventory Problem
  • Example Inventory Control (Economic Order
    Quantity)
  • Inventory quantity ordered each time: Q units
  • Set-up cost or ordering cost: K
  • Acquisition cost: C per unit
  • Storage cost per unit: h per year
  • Demand (constant): λ units per unit time, which implies
    an ordering period T = Q/λ
  • Question What is the optimum ordering amount Q?

11
Solution
Objective function Total cost per
year f(Q) = (cost per cycle) × (cycles per year) = Kλ/Q + Cλ + hQ/2
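The EOQ model can be sketched numerically. The Python fragment below is an illustration only; all parameter values are assumptions, not from the slides:

```python
import math

# EOQ sketch in the slide's notation. All numeric values are assumed.
K = 100.0    # set-up / ordering cost per order
C = 5.0      # acquisition cost per unit
h = 2.0      # storage cost per unit per year
lam = 400.0  # constant demand rate, units per year

def annual_cost(Q):
    # f(Q) = ordering cost + acquisition cost + average storage cost
    return K * lam / Q + C * lam + h * Q / 2.0

# Setting df/dQ = -K*lam/Q^2 + h/2 = 0 gives the classical EOQ formula.
Q_star = math.sqrt(2.0 * K * lam / h)
print(Q_star, annual_cost(Q_star))   # Q* = 200 for these numbers
```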
12
Problems
  • The cost function must be explicitly expressed
  • The derivative of the cost function must also be
    explicitly written down
  • The derivative equation must be explicitly solved
  • In many cases the derivative equation must be solved
    numerically, e.g. by Newton's method; in such cases it
    is more convenient to solve the optimization problem
    itself numerically.

13
Problems - Continued

(Plot: f(x) on [a, b], marking the location of the global minimum)
14
The Importance of One-dimensional Problem - The
Iterative Optimization Procedure
  • Optimization is basically performed iteratively. We
    take an initial point x0 and a direction s, and then
    perform the line search
  • x1 = x0 + λ*s, where λ* solves Min over λ of f(x0 + λs)
  • Here λ* is the optimal step length for the objective
    function and satisfies all the constraints. Then
    we start from x1, find another direction s,
    and perform a new line search, until the optimum
    is reached.

15
The Importance of One-dimensional Problem - The
Iterative Optimization Procedure - Continued
Consider an objective function Min f(X) = x1^2 + x2^2,
with an initial point X0 = (-4, -1) and a direction
(1, 0). What is the optimum in this direction,
i.e. X1 = X0 + λ(1, 0)? This is a one-dimensional
search for λ.
16
The Importance of One-dimensional Problem - The
Iterative Optimization Procedure - Continued
  • The problem can be converted into
  • Min over λ: g(λ) = f(X0 + λ(1, 0)) = (-4 + λ)^2 + (-1)^2
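The reduction to one dimension can be checked numerically. A minimal Python sketch (the scan grid is an assumption):

```python
# Line search along s = (1, 0) from X0 = (-4, -1) for f(X) = x1^2 + x2^2.
def f(x1, x2):
    return x1**2 + x2**2

def g(lam):
    # g(lam) = f(X0 + lam*s) = (-4 + lam)^2 + (-1)^2
    return f(-4.0 + lam, -1.0)

# A coarse scan over an assumed grid already locates the minimizer.
lams = [0.5 * i for i in range(17)]   # 0.0, 0.5, ..., 8.0
lam_best = min(lams, key=g)
print(lam_best, g(lam_best))   # lam = 4.0, g = 1.0
```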

17
Solution Methods (2) Numerical Approaches (i)
Numerical Solution to Optimality Condition
  • Example Determine the minimum of
  • f(x) = (10x^3 + 3x^2 + x + 5)^2
  • The optimality criterion leads to
  • 2(10x^3 + 3x^2 + x + 5)(30x^2 + 6x + 1) = 0
  • Problem What is the root of the above equation?

18
Newton's Method
  • Consider an equation to be solved
  • f(x) = 0
  • Step 1 Give an initial point x0
  • Step 2 xn+1 = xn - f(xn)/f'(xn)
  • Step 3 Is f(xn+1) small enough? If not, go back to
    step 2
  • Step 4 Stop
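The steps above can be sketched in Python on the stationarity equation of the surrounding example, g(x) = 2(10x^3 + 3x^2 + x + 5)(30x^2 + 6x + 1) = 0, starting from x0 = 10 as in the MATLAB code that follows:

```python
# Newton's method on g(x) = 0, where g is the derivative of
# f(x) = (10x^3 + 3x^2 + x + 5)^2 from the earlier example.
def p(x):   return 10*x**3 + 3*x**2 + x + 5
def dp(x):  return 30*x**2 + 6*x + 1       # p'(x)
def ddp(x): return 60*x + 6                # p''(x)

def g(x):   return 2.0 * p(x) * dp(x)                # g = f'
def dg(x):  return 2.0 * (dp(x)**2 + p(x) * ddp(x))  # g' by product rule

x = 10.0                       # Step 1: initial point
for _ in range(100):
    if abs(g(x)) < 1e-5:       # Step 3: stopping test
        break
    x = x - g(x) / dg(x)       # Step 2: Newton update
print(x)   # converges near -0.8599, matching the slide's result
```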

19
Newton's Method (example) - MATLAB code
  x0=10; fx=100; iter=0; ff=[]; xx=[];
  while abs(fx) > 1.e-5
    fx = 2*(10*x0^3+3*x0^2+x0+5)*(30*x0^2+6*x0+1);
    ff = [ff fx];
    fxp = 2*((30*x0^2+6*x0+1)*(30*x0^2+6*x0+1) + (10*x0^3+3*x0^2+x0+5)*(60*x0+6));
    x0 = x0 - fx/fxp;
    xx = [xx x0];
    iter = iter + 1;
  end

  fx = -8.4281e-006
  iter = 43
  x0 = -0.8599

20
A Numerical Differentiation Approach
  • Problem Find df/dx at a point xk
  • Approach
  • Define a step Δxk
  • Find f(xk)
  • Find f(xk + Δxk)
  • Approximate df/dx ≈ (f(xk + Δxk) - f(xk)) / Δxk
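A Python sketch of this forward-difference formula, applied at an assumed point xk = 1 with step Δxk = 1e-3, on the example function used above:

```python
# Forward difference: df/dx ≈ (f(xk + dxk) - f(xk)) / dxk.
def f(x):
    return (10*x**3 + 3*x**2 + x + 5) ** 2   # example function from slide 17

def fwd_diff(f, xk, dxk=1e-3):
    return (f(xk + dxk) - f(xk)) / dxk

# Analytical derivative 2*p(x)*p'(x) at x = 1 equals 2*19*37 = 1406.
exact = 2 * (10 + 3 + 1 + 5) * (30 + 6 + 1)
approx = fwd_diff(f, 1.0)
print(exact, approx)   # the approximation agrees to within about 0.2%
```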

21
A Numerical Differentiation Approach - MATLAB code
  x0=10; fx=100; iter=0; ff=[]; xx=[]; dx=0.001;
  while abs(fx) > 1.e-5
    fx = 2*(10*x0^3+3*x0^2+x0+5)*(30*x0^2+6*x0+1);
    ff = [ff fx];
    xp = x0 + dx;
    ffp = 2*(10*xp^3+3*xp^2+xp+5)*(30*xp^2+6*xp+1);
    fxp = (ffp-fx)/dx;
    x0 = x0 - fx/fxp;
    xx = [xx x0];
    iter = iter + 1;
  end

  fx = -2.4021e-006
  iter = 25
  x0 = -0.8599

22
Remarks (Numerical Solution to the Optimality
Condition)
  • It may be difficult to formulate the optimality condition
  • The resulting equation may be difficult to solve
    (multiple solutions, complex-number solutions)
  • The derivative may be very difficult to evaluate
    numerically
  • Function calls are not saved in most cases
  • New frontier Can we simply work with the objective
    function instead of its derivative?

23
Solution Methods (2) Numerical Approaches (ii)
Regional Elimination Methods
  • Theorem Suppose f is unimodal on the interval
    a ≤ x ≤ b, with a minimum at x* (not necessarily a
    stationary point). Let x1, x2 ∈ [a, b] such that
    a < x1 < x2 < b. Then
  • If f(x1) > f(x2) ⇒ x* ∈ [x1, b]
  • If f(x1) < f(x2) ⇒ x* ∈ [a, x2]

24
Two-Phase Approach
  • Phase I. Bounding Phase An initial coarse search
    that will bound or bracket the optimum
  • Phase II. Interval Refinement Phase A finite
    sequence of interval reductions or refinements to
    reduce the initial search interval to the desired
    accuracy.

25
Phase II - Interval Refinement Phase - Interval
Halving
  • Algorithm
  • Step 1 Let xm = (a+b)/2, L = b-a; find f(xm)
  • Step 2 Set x1 = a + L/4, x2 = b - L/4.
  • Step 3 Find f(x1); if f(x1) < f(xm), then b ← xm, go
    to step 1.
  • If f(x1) > f(xm), continue
  • Step 4 Find f(x2)
  • If f(x2) < f(xm), then a ← xm, go to step 1.
  • If f(x2) > f(xm), then a ← x1, b ← x2, go to step 1.
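A Python sketch of these interval-halving steps, applied to the unimodal function f(x) = (10x^3 + 3x^2 + x + 5)^2 on [-3, 3] used in the example that follows:

```python
# Interval halving on a unimodal f over [a, b].
def f(x):
    return (10*x**3 + 3*x**2 + x + 5) ** 2

a, b = -3.0, 3.0
xm = (a + b) / 2.0           # Step 1
fm = f(xm)
while (b - a) > 1e-8:
    L = b - a
    x1, x2 = a + L / 4.0, b - L / 4.0   # Step 2
    if f(x1) < fm:                       # Step 3: keep [a, xm]
        b, xm, fm = xm, x1, f(x1)
    elif f(x2) < fm:                     # Step 4: keep [xm, b]
        a, xm, fm = xm, x2, f(x2)
    else:                                # middle point stays best: keep [x1, x2]
        a, b = x1, x2
print(xm)   # converges near the minimizer x* ≈ -0.8599
```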

26
Interval Halving
27
Example y = (10x^3 + 3x^2 + x + 5)^2
  global fun_call
  a=-3; b=3; l=b-a; xm=(a+b)/2; x1=(a+xm)/2; x2=(xm+b)/2;
  fm=inter_hal_obj(xm);
  iter=1; fun_call=0;
  while l > 1.e-8
    f1=inter_hal_obj(x1);
    if f1 < fm
      b=xm; fm=f1;
    else
      f2=inter_hal_obj(x2);
      if f2 < fm
        a=xm; fm=f2;
      else
        a=x1;
        b=x2;
      end
    end
    xm=(a+b)/2; x1=(a+xm)/2; x2=(xm+b)/2;
    fm=inter_hal_obj(xm);
    l=b-a;
    iter=iter+1;
  end
28
Remarks
  • At each stage of the algorithm, exactly half of
    the search interval is deleted.
  • At most two function evaluations are necessary at
    each iteration.
  • After n iterations, the initial search interval L0
    will be reduced to (1/2)^n L0.
  • According to Kiefer, the three-point search is the
    most efficient among all equal-interval searches.

29
Phase II - Interval Refinement Phase - The Golden
Search (Non-equal Interval Search)
  • Assume the total length of the search region is 1,
    and two experiments are placed at τ and 1-τ.
    Comparing these two experiments allows us to delete
    the section between one end point and the nearer
    trial point. One trial point then becomes the new
    end point, and the other becomes a comparison point
    for the next stage.
  • Problem We want the surviving trial point to serve
    as a trial point of the new, shorter interval, i.e.
    τ^2 = 1 - τ. Solving gives τ = 0.61803.

30
The Algorithm - Golden-Search Method
  • Step 1 Set L = b - a, τ = 0.61803, x2 = a + τL,
    x1 = a + (1-τ)L
  • Step 2 Find f(x1), f(x2) and compare
  • (i) If f(x1) > f(x2) ⇒ a ← x1, x1 ← x2; go to step 3.
  • (ii) If f(x1) < f(x2) ⇒ b ← x2, x2 ← x1; go to step 3.
  • Step 3 Set L = b - a; set x2 ← a + τL if (i) is true,
    or x1 ← a + (1-τ)L if (ii) is true. Go to step 2.
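A Python sketch of these golden-search steps (the test function and interval are taken from the earlier interval-halving example); note that only one new function evaluation is made per iteration:

```python
# Golden-section search on a unimodal f over [a, b].
def f(x):
    return (10*x**3 + 3*x**2 + x + 5) ** 2

tau = 0.61803
a, b = -3.0, 3.0
L = b - a
x1, x2 = a + (1 - tau) * L, a + tau * L
f1, f2 = f(x1), f(x2)
while L > 1e-6:
    if f1 > f2:                    # case (i): minimum lies in [x1, b]
        a, x1, f1 = x1, x2, f2
        L = b - a
        x2 = a + tau * L
        f2 = f(x2)                 # the single new evaluation
    else:                          # case (ii): minimum lies in [a, x2]
        b, x2, f2 = x2, x1, f1
        L = b - a
        x1 = a + (1 - tau) * L
        f1 = f(x1)                 # the single new evaluation
print((a + b) / 2.0)   # converges near the minimizer x* ≈ -0.8599
```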

31
MATLAB CODING - The Golden Search
  function al_opt = goldsec(op2_func,tol,x0,d)
  b=1; a=0; l=b-a;
  tau=0.61803; x2=a+tau*l; x1=a+(1-tau)*l;
  while l > tol
    xx1=x0+x1*d; xx2=x0+x2*d;
    y1=feval(op2_func,xx1); y2=feval(op2_func,xx2);
    if (y1 > y2) a=x1; x1=x2; l=b-a; x2=a+tau*l;
    else
      b=x2; x2=x1; l=b-a; x1=a+(1-tau)*l;
    end
  end
  al_opt=b;

32
Example - The Piping Problem
  x0=0.25; d=6-0.25; tol=1.e-6;
  al_opt=goldsec('obj_piping',tol,x0,d)
  D=x0+al_opt*d

  function y=obj_piping(D)
  % D in inches; L = 1000 ft; Q = 20 gpm
  L=1000; Q=20;
  hp=4.4e-8*(L*Q^3)/(D^5)+1.92e-9*(L*Q^2.68)/(D^4.68);
  y=0.45*L+0.245*L*D^1.5+3.25*hp^0.5+61.6*hp^0.925+102;

  al_opt = 0.1000
  D = 0.8250
  fun_call = 29

33
Remarks
  • At each stage, only one function evaluation (or
    one experiment) is needed.
  • The length of the search interval is narrowed by a
    factor of τ at each iteration, i.e. L(n) = τ^n L(0).
  • A variable-transformation technique may be useful
    for this algorithm, i.e. scale the initial interval
    length to 1.

34
Remarks - Continued
  • Define the fractional reduction FR(N) = L(N)/L(0),
    where N is the number of experiments, or function
    evaluations.
  • Let E = FR(N). The number of evaluations needed to
    reach accuracy E:

Method   E=0.1   E=0.05   E=0.01   E=0.001
I.H.       7       9        14       20
G.S.       6       8        11       16
35
Solution Methods (2) Numerical Approaches
(iii) Polynomial Approximation Methods - Powell's
Method
  • Powell's method approximates the objective
    function by a quadratic such as
    f(x) = ax^2 + bx + c; it can then be shown that the
    optimum is located at x = -b/(2a).
  • Given the above form, we need three
    experiments (function calls) to fit a quadratic.
    Let the three experiments (function calls) be
    f(x1), f(x2), f(x3), and let's rewrite the quadratic
    equation in this notation.

36
Powell's Method - Continued
  • The parameters in the previous slide can be found
    from the three experiments:
    a1 = (f(x2) - f(x1)) / (x2 - x1)
    a2 = (1/(x3 - x2)) [ (f(x3) - f(x1))/(x3 - x1) - (f(x2) - f(x1))/(x2 - x1) ]
  • For the optimum point of the quadratic fit, it can
    be derived from the above three experiments that
    x̄ = (x1 + x2)/2 - a1/(2 a2)
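One quadratic-fit step can be checked numerically. Below is a Python sketch using the a1/a2 and x̄ formulas exactly as they appear in the MATLAB code that follows; the three trial points are assumptions made for illustration:

```python
# One quadratic-approximation step of Powell's method.
def f(x):
    return (10*x**3 + 3*x**2 + x + 5) ** 2   # test function from earlier slides

x1, x2, x3 = -1.5, -1.0, -0.5                # assumed trial points
f1, f2, f3 = f(x1), f(x2), f(x3)

a1 = (f2 - f1) / (x2 - x1)
a2 = ((f3 - f1) / (x3 - x1) - (f2 - f1) / (x2 - x1)) / (x3 - x2)
x_bar = (x1 + x2) / 2.0 - a1 / (2.0 * a2)    # minimum of the fitted quadratic
print(x_bar, f(x_bar))   # x_bar ≈ -0.7564; f improves on all three trial points
```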
37
Algorithm (Powell's Method)
  • Step 1 Given x0, Δx, x1 = x0 + Δx, and tolerances ε, δ
  • Step 2 Evaluate f(x0), f(x1)
  • If f(x1) > f(x0), then x2 = x0 - Δx
  • If f(x1) < f(x0), then x2 = x0 + 2Δx
  • Step 3 Find f(x2)
  • Step 4 Find Fmin = min(f(x0), f(x1), f(x2)) and
    xmin = the point among x0, x1, x2 such that
    f(xmin) = Fmin, i = 0, 1, 2.
  • Step 5 Compute a1, a2
  • Step 6 Compute x̄; find f(x̄).
  • Step 7 Check whether (i) |Fmin - f(x̄)| < ε or
    (ii) |xmin - x̄| < δ
    Yes, stop.
    No, continue
  • Step 8 Set x2 ← x̄, x1 ← xmin, x0 ← whichever of the
    old x0, x1, x2 is not xmin
  • Go to step 4.

38
Powell's Method - MATLAB code
  function alopt = one_dim_pw(xx,s,op2_func)
  dela=0.005;
  alp0=0.01;
  alpha(1)=alp0; alpha(2)=alpha(1)+dela;
  al=alpha(1); x1=xx+al*s;
  y(1)=feval(op2_func,x1);
  al=alpha(2); x2=xx+s*alpha(2);
  y(2)=feval(op2_func,x2);
  if (y(2) > y(1)) alpha(3)=alpha(1)-dela;
  else alpha(3)=alpha(1)+2*dela;
  end
  eps=100;
  delta=100;
  while eps > 0.001 & delta > 0.001
    x3=xx+s*alpha(3);
    y(3)=feval(op2_func,x3);
    fmin=min(y);

39
Powell's Method - MATLAB code - Continued
    if (fmin==y(1)) almin=alpha(1); i=1;
    else if (fmin==y(2)) almin=alpha(2); i=2;
      else almin=alpha(3); i=3;
      end
    end
    a0=y(1); a1=(y(2)-y(1))/(alpha(2)-alpha(1));
    a2=1/(alpha(3)-alpha(2))*((y(3)-y(1))/(alpha(3)-alpha(1))-(y(2)-y(1))/(alpha(2)-alpha(1)));
    alopt=(alpha(2)+alpha(1))/2-a1/(2*a2);
    xxopt=xx+alopt*s;
    yopt=feval(op2_func,xxopt);
    eps=abs(fmin-yopt);
    delta=abs(alopt-almin);
    for j=1:3
      if (j ~= i) alpha(1)=alpha(j);
      end
    end
    alpha(3)=alopt; alpha(2)=almin;
    x1=xx+s*alpha(1); x2=xx+s*alpha(2);
    y(1)=feval(op2_func,x1); y(2)=feval(op2_func,x2);
  end

40
Example - Piping Design
  global fun_call
  x0=0.25; x_end=6; l=x_end-x0;
  al_opt=one_dim_pw(x0,l,'obj_piping')
  D=x0+al_opt*l
  fun_call

  al_opt = 0.1000
  D = 0.8250
  fun_call = 61

41
Comparison - Interval Halving (tol = 1.e-6)
  l = 6.8545e-007
  b = 0.8250
  a = 0.8250
  iter = 24
  fun_call = 63