Transcript and Presenter's Notes

Title: Ordinary differential equations


1
Ordinary differential equations
  • KE-42.4520
  • Process modeling methods and tools
  • Kari I. Keskinen
  • 3.3.2008

2
Introduction
  • Differential equations are used to describe
    phenomena that are in movement or change with
    time.
  • The pre-examination material dealt mostly with
    methods for solving differential equations
    analytically.
  • The classification of differential equations is
    important in order to find a suitable solution
    strategy.
  • This lecture deals mainly with ordinary
    differential equations.

3
Plug flow reactor
  • Material balance

4
Plug flow reactor
  • Let us inspect a volume ΔV, where the reaction
    rate can be assumed constant.

(Sketch: volume element ΔV between axial positions y and y + Δy, with component flow Fj(y + Δy) leaving the element.)
5
Plug flow reactor
  • Reaction rate is constant in ΔV

Steady state assumption, i.e. no accumulation
6
Plug flow reactor
  • For the volume ?V we can write the material
    balance for component j.

A = cross-sectional area
7
Plug flow reactor
8
Plug flow reactor
  • Separation of variables

Integral form of the dimensioning equation.
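
(The balance equations on these slides appeared as images. A standard reconstruction of the steady-state balance over ΔV and the resulting design equation is sketched below; the symbols Fj for the molar flow of component j, rj for its rate of formation per volume, and X for conversion are the usual ones and not from the slides.)

  Fj|_y − Fj|_{y+Δy} + rj A Δy = 0
  Δy → 0:   dFj/dV = rj,   with V = A·y
  integral (dimensioning) form, for a single reactant A:   V = FA0 ∫_0^X dX / (−rA)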
9
Classification of differential equations
  • A differential equation is an ordinary
    differential equation, ODE, when there are
    derivatives only with respect to one variable.
    Often this is a change with time.
  • When the same equation contains derivatives with
    respect to more than one variable, it is called a
    partial differential equation, PDE. Often this is
    with respect to several spatial coordinates
    (x, y, z) and time (t).

10
ODEs 1
  • Ordinary differential equations can be classified
    into two groups
  • initial value problems: the variable value at
    t = 0 must be known, and for higher-order
    differential equations also the values of the
    derivatives up to one less than the order of the
    highest derivative must be known at t = 0
  • boundary value problems: the variable values at
    two boundaries must be known.

11
ODEs 2
  • Initial value ODE.
  • Let x(t) and y(t) denote the number of rabbits
    and foxes at time t, respectively. The evolution
    of the rabbit and fox populations can then be
    described as a function of time

12
ODEs 3
  • The example shows that a set of ODEs can be
    included in the model. These are still ODEs, as
    each has a derivative with respect to only one
    variable.
  • Generally, a set of ODEs can be expressed in the
    form of a vector equation
  • where y is a vector and f is a vector-valued
    function.
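
(The vector equation itself was an image; in the usual notation it reads:)

  dy/dt = f(t, y),   y(t_0) = y_0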

13
ODEs 4
  • Boundary value ODE.
  • We seek the solution of a set of n ODEs
  • where the vectors y and f contain n components. In
    addition the solution is constrained at two
    distinct points (a and b) of the interval of
    interest
  • Here the boundary condition is given by
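
(A common way of writing the problem, reconstructing the equations that were images, is:)

  dy/dt = f(t, y),   t ∈ [a, b],   with boundary condition  g(y(a), y(b)) = 0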

14
ODEs 5
  • Solving a boundary value ODE is much more
    difficult than solving an initial value ODE.
  • The order of the highest derivative determines
    the order of the ODE.
  • One classification of ODEs is
  • linear ODEs
  • non-linear ODEs
  • Normally, the solution of linear systems is much
    easier than that of non-linear ODEs.

15
ODEs 6
  • Linear ODE
  • The coefficients ai do not depend on the variable
    y. f(x) is a forcing function.
  • Nonlinear ODE
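
(The equations were images. A general linear ODE has the form below; a nonlinear ODE violates this form, for example through terms such as y², y·y′ or sin(y). The coefficient symbols an(x), …, a0(x) are the customary notation, not from the slide.)

  a_n(x) y^(n) + a_{n−1}(x) y^(n−1) + … + a_1(x) y′ + a_0(x) y = f(x)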

16
ODEs 7
  • The latter is a function that is homogeneous of
    degree n, satisfying for every value of λ
  • and can be transformed into a separable form (by a
    substitution)
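
(Reconstructing the missing relations: a function homogeneous of degree n satisfies the identity below, and the customary substitution is y = v·x; this notation is an assumption, not taken from the slide.)

  f(λx, λy) = λ^n f(x, y)   for every λ
  substitution y = v·x  →  an equation separable in v and x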

17
ODEs 8
  • Higher order ODEs can be treated as follows
  • This is a second-order ODE. Let us denote y = y1
    for the original variable and introduce a new
    variable y2 = y1′. Then we obtain a set of first
    order ODEs
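
(For a second-order equation y″ = f(t, y, y′), the resulting first-order set, shown as an image in the original, is:)

  y1′ = y2
  y2′ = f(t, y1, y2)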

18
ODEs 9
  • By solving the original ODE as a set of first
    order ODEs one obtains, in addition to the
    approximation y1 for the solution curve y(t), the
    approximation y2 for the derivative y′(t).
  • Generally a differential equation of order n

19
ODEs 10
  • can be transformed into a set of n coupled first
    order differential equations
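
(The transformed set, reconstructing the image, is:)

  y1′ = y2,   y2′ = y3,   …,   yn′ = f(t, y1, y2, …, yn)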

This is the RHS (right-hand side) referred to later.
20
Solution methods
  • Some solution methods were reviewed in the
    pre-examination material
  • integrating-factor method for first order ODEs
  • manipulate non-linear equations into linear form
  • no general methods are available for non-linear
    systems

21
Solution methods 2
  • One can try to use mathematical software to find a
    solution to ODE problems (Mathematica, Maxima,
    Maple etc.). Often these don't help, as there is
    no analytical solution available.
  • Other methods are series solutions and the use of
    Laplace transforms. A few slides on these follow.
  • The main way to tackle ODEs and systems of them
    is to use numerical methods.

22
Series solutions
The pre-examination material showed that all
linear differential equations with constant
coefficients, like the one shown,
admit complementary solutions of the form
exp(λt), where the characteristic roots λ in
this case satisfy the characteristic equation.
The exponential function exp() can be expressed
as a series.
23
Series solutions 2
Based on this, the solution of the above
differential equation could have been tried as
a power series. Now we know the solution, but in
the general case when an analytical solution is
not possible, e.g. when non-constant coefficients
arise, this method proves useful.
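
(As a sketch of the idea: substituting a trial power series, whose coefficients an are to be determined, into a constant-coefficient equation such as y″ + p y′ + q y = 0 gives a recursion for the coefficients. The symbols p, q and an are illustrative.)

  y(t) = Σ_{n=0}^{∞} a_n t^n
  →  (n+2)(n+1) a_{n+2} + p (n+1) a_{n+1} + q a_n = 0,   n = 0, 1, 2, …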
24
Series solutions 3
The use of series has been developed into the
method of Frobenius (see e.g. Rice & Do, chapter
3). Some special forms of differential equations
often arise in engineering, and these have been
named after persons: Bessel, Laguerre,
Legendre, Chebyshev. The solutions of these
equations can be expressed using special
functions known as Bessel functions,
Laguerre polynomials, Legendre polynomials and
Chebyshev polynomials. These are found in
handbooks in tabulated form.
25
Laplace transforms
  • Using complex variable theory, the following
    integral transform has been defined
  • s is a complex variable defined by s = σ + iω,
    where σ and ω are real variables and
    i = √(−1).
  • Using this transform means that we map our problem,
    for example from the time variable t, to an abstract
    Laplace space where we have the Laplace variable
    (often denoted s or p).
  • Then the equations are solved.
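
(The integral transform referred to, shown as an image in the original, is the standard Laplace transform:)

  F(s) = L{f(t)} = ∫_0^∞ e^(−s t) f(t) dt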

26
Laplace transforms 2
  • Once the solution is available as a function of
    the Laplace variable, an inverse Laplace
    transform is carried out.
  • The solution is normally such that a ready-made
    tabulated inverse Laplace transform is not
    available. Then the solution must be manipulated,
    for example into a partial-fraction expansion. In
    order to do that, quite often a high-degree
    polynomial must be solved for its roots.

27
Laplace transforms 3
  • The elegant point of the Laplace transform method
    is that the initial or boundary values are taken
    into account at the very beginning of the solution
    method (and not at the end, when a particular
    solution is extracted from the general solution).

28
Pore diffusion
Mathematical model
  • Molar balance of A over the shell Δr (volume ΔV)
  • in − out + generated = accumulation
  • in: WAr · 4πr² at r
  • out: WAr · 4πr² at r + Δr

(Sketch: spherical catalyst particle of radius R with
surface concentration CAs; shell between radii r and
r + Δr; rm is the mean radius between r and r + Δr.)
29
Pore diffusion
  • ΔV is approximated

Δr → 0
30
Pore diffusion
  • reaction A → B
  • → equimolar counterdiffusion (in addition, constant
    T and p)
  • as noted earlier
  • The equation also applies to dilute mixtures.
  • By substituting

31
Pore diffusion
  • Units

32
Pore diffusion
  • assumption

Differential equation: diffusion + reaction
33
Pore diffusion
  • Boundary conditions
  • CA finite when r = 0
  • CA = CAs when r = R

34
Pore diffusion
  • solved with the help of dimensionless variables
  • new boundary conditions
  • CA = CAs at r = R  →  ψ = 1 at λ = 1

CA finite at r = 0  →
ψ finite at λ = 0
35
Pore diffusion
  • Differential equation in terms of the
    dimensionless variables
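
(The equation itself was an image. Assuming the standard dimensionless variables ψ = CA/CAs and λ = r/R, the diffusion-reaction equation for an nth-order reaction in a spherical particle takes the usual form:)

  d²ψ/dλ² + (2/λ) dψ/dλ = φn² ψ^n,    φn² = kn R² CAs^(n−1) / De   (φn = Thiele modulus)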

36
Pore diffusion
37
Pore diffusion
38
Pore diffusion
  • When the Thiele modulus is large, diffusion
    usually limits the overall reaction rate;
    when φn is small, the surface reaction determines
    the rate.
  • Example solution
  • A → B

first order (surface reaction limiting, high T)
k1 [m³/(m² s)] = [m/s]
39
Pore diffusion
  • Differential equation

ψ = 1 at λ = 1
Boundary conditions
ψ finite at λ = 0
Solved with the help of a new variable
y = ψ λ   →   ψ = (1/λ) y
40
Pore diffusion + reaction
By substituting
41
Pore diffusion + reaction

42
Hyperbolic functions
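
(The definitions on this slide are the standard ones:)

  sinh x = (e^x − e^(−x))/2,   cosh x = (e^x + e^(−x))/2,   tanh x = sinh x / cosh x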
43
Pore diffusion + reaction
  • Boundary conditions
  • λ = 0: ψ finite
  • cosh(φ1 λ) → 1

sinh(φ1 λ) → 0
1/λ → ∞, yet ψ must remain finite
→ A1 = 0
44
Pore diffusion + reaction
  • Second condition
  • ψ = 1 at λ = 1

→ by substituting A1 and B1
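
(With A1 = 0 and B1 fixed by the condition ψ = 1 at λ = 1, the resulting concentration profile, as given in standard texts for a first-order reaction in a spherical particle, is:)

  ψ = CA/CAs = (1/λ) · sinh(φ1 λ) / sinh(φ1)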
45
Pore diffusion + reaction
  • Small values of φ1 → the surface reaction
    controls, i.e. the concentration of A is high in
    the pores (before A reacts)

46
Non-isothermal porous catalyst particle
  • For a non-isothermal porous catalyst particle
    the following equation has been derived.


47
Non-isothermal porous catalyst particle
  • From the analogy of mass and heat transfer →
    correspondingly from the energy balance

Boundary conditions
dT/dr = 0 at r = 0;  T = Ts at r = R
  • → To be solved together numerically, because
    kn = k0 e^(−E/RT)

48
Non-isothermal porous catalyst particle
  • The differential equations can be combined →
    an equation between temperature and concentration

kt = effective heat transfer coefficient
49
Non-isothermal porous catalyst particle
  • if kt ≠ f(T) and ΔH ≠ f(T)
  • First integration

Boundary conditions
at r = 0: dCA/dr = 0 and dT/dr = 0
→ A1 = 0
50
Non-isothermal porous catalyst particle
at r = R: CA = CAs and T = Ts
De CAs − (kt/ΔH) Ts = A2
A2 = De CAs − (kt/ΔH) Ts
Independent of the kinetic expression!
51
Non-isothermal porous catalyst particle,
temperature gradient
  • The maximum ΔT is obtained when all A has been
    converted, i.e. CA = 0

We can evaluate whether the ΔT inside the particle is
important.
For an exothermic reaction in the non-isothermal case,
η can be > 1, because the external temperature
is lower than the temperature inside the particle.
→ The reaction rate inside the particle is higher than
on the particle surface.
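
(The combined relation and the maximum temperature rise, reconstructed in their standard Prater form using the symbols of the previous slides, are:)

  T − Ts = (−ΔH) De (CAs − CA) / kt
  ΔTmax = T(CA = 0) − Ts = (−ΔH) De CAs / kt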
52
Non-isothermal porous catalyst particle,
temperature gradient
  • Define two parameters β and γ

kt = thermal conductivity
Possibility of several stationary states.
No multiple stationary states when
4(1 + β) > β γ
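
(The parameter definitions were shown as an image; the usual definitions consistent with this criterion are the Prater number β and the Arrhenius number γ, assumed here:)

  β = (−ΔH) De CAs / (kt Ts),    γ = E / (R Ts)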
53
Internal effectiveness as a function of Thiele
modulus
54
Example 1
  • Classify
  • Linear / Nonlinear
  • Homogeneous / Inhomogeneous
  • Order
  • Degree
  • Forcing function

55
Example 1
  • Classify
  • Linear / Nonlinear (if n ≠ 1)
  • Homogeneous / Inhomogeneous
  • Order: 2
  • Degree: 1
  • Forcing function? 0

56
Example 2
  • Classify
  • Linear / Nonlinear
  • Homogeneous / Inhomogeneous
  • Order
  • Degree
  • Forcing function?

57
Example 2
  • Classify
  • Linear / Nonlinear
  • Homogeneous / Inhomogeneous
  • Order: 1
  • Degree: 2
  • Forcing function? x − 1

58
Example 3
  • Classify
  • Linear / Nonlinear
  • Homogeneous / Inhomogeneous
  • Order
  • Degree
  • Forcing function?

59
Example 3
  • Classify
  • This is actually a differential-algebraic
    equation (DAE) system, as there is an algebraic
    equation in addition to a differential equation.

60
Why numerical solutions?
  • Sometimes an analytical solution can be found by
    using mathematical solution methods such as
  • series solution methods, like the method of Frobenius
  • special equation types of Bessel, Laguerre,
    Chebyshev, Legendre
  • Laplace transforms
  • symbolic mathematics programs
  • Often, however, evaluating a numerical value from an
    analytical solution is itself numerically painful.

61
Stability of ODEs
  • The second order differential equation
  • has the solution y(t) = e^(−t). If we change the
    first initial value by ε,
    then we have new initial values
  • By solving the problem with the new initial
    values we obtain the solution
  • The RHS goes to infinity as time goes to
    infinity, no matter how small the positive ε is.
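
(The equation itself was an image. A standard example consistent with the description, and an assumption here, is y″ = y with y(0) = 1, y′(0) = −1, whose solution is e^(−t); perturbing the first initial value to y(0) = 1 + ε gives)

  y(t) = (1 + ε/2) e^(−t) + (ε/2) e^t

where the second term grows without bound for any ε > 0.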

62
Stability of ODEs 2
  • The previous example is clearly unstable, as an
    extremely small deviation in the starting values
    causes divergence of the solution. The solution
    explodes.
  • This is important in numerical computation, as
    accuracy is always limited and rounding errors
    tend to accumulate and cause the same phenomenon
    as erroneous starting values.

63
Stability of ODEs 3
  • The stability of a set of ODEs can be evaluated
    using the Jacobian matrix, J, of the right-hand
    side functions.
  • The Jacobian is evaluated along the solution, and if
    all the real parts of its eigenvalues are negative,
    the set of ODEs is stable.
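
(For a system y′ = f(t, y), the Jacobian referred to is)

  J_ij = ∂f_i/∂y_j,   and stability requires Re(λ_i(J)) < 0 for all eigenvalues λ_i.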

64
Solutions by Euler method
Model
Initial condition
Closure: first-order reaction
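
(The model equations were images. Consistent with the later example slides (c0 = 1 and varying k), the test problem is assumed here to be first-order decay:)

  dc/dt = −k c,   c(0) = c0

Its separable form gives the analytical solution c(t) = c0 e^(−k t).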
65
Separable form, analytical solution possible
66
Numerical solution: Euler's method
Explicit Euler: the RHS f is calculated at t_n
Implicit Euler: the RHS f is calculated at t_{n+1}
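
(Written out, with Δt = t_{n+1} − t_n:)

  explicit:  c_{n+1} = c_n + Δt · f(t_n, c_n)
  implicit:  c_{n+1} = c_n + Δt · f(t_{n+1}, c_{n+1})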
67
Numerical solution: Euler's method
For the test problem
Explicit Euler
Implicit Euler
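
(For dc/dt = −k c the two schemes reduce to the updates below, as the following example slides illustrate for various k·Δt:)

  explicit:  c_{n+1} = (1 − k Δt) c_n        (stable only for k Δt < 2; oscillates for 1 < k Δt < 2)
  implicit:  c_{n+1} = c_n / (1 + k Δt)      (stable for any Δt > 0)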
68
Numerical solution: Euler's method
Implicit Euler
For this linear test problem we have an explicit
update formula.
Usually implicit methods require the solution of a
nonlinear set of equations at each time step.
For example, a pth-order reaction:
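
(A minimal sketch of both schemes for the assumed test problem dc/dt = −k·c^p; for p ≠ 1 the implicit update is found here with a few Newton iterations. Function and parameter names are illustrative, not from the slides.)

def explicit_euler(c0, k, p, dt, nsteps):
    # c_{n+1} = c_n + dt * (-k * c_n**p)
    c = [c0]
    for _ in range(nsteps):
        c.append(c[-1] + dt * (-k * c[-1] ** p))
    return c

def implicit_euler(c0, k, p, dt, nsteps, newton_iters=20):
    # solve g(x) = x - c_n + dt*k*x**p = 0 for x = c_{n+1} by Newton's method
    c = [c0]
    for _ in range(nsteps):
        x = c[-1]                      # initial guess: previous value
        for _ in range(newton_iters):
            g = x - c[-1] + dt * k * x ** p
            dg = 1.0 + dt * k * p * x ** (p - 1)
            x -= g / dg
        c.append(x)
    return c

# test problem from the slides: c0 = 1, first order (p = 1)
print(explicit_euler(1.0, 2.5, 1, 1.0, 5))   # k*dt = 2.5 > 2: explicit Euler diverges
print(implicit_euler(1.0, 2.5, 1, 1.0, 5))   # implicit Euler decays monotonically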
69
Example
  • c0 = 1
  • k = 0.1
  • Δt = 1

70
Example
  • c0 = 1
  • k = 0.5
  • Δt = 1

71
Example
  • c0 = 1
  • k = 1
  • Δt = 1

72
Example
  • c0 = 1
  • k = 1.5
  • Δt = 1

73
Example
  • c0 = 1
  • k = 2
  • Δt = 1

74
Example
  • c0 = 1
  • k = 2.5
  • Δt = 1

75
Shorter time steps
  • c0 = 1
  • k = 2.5
  • Δt = 0.2

76
Other options
  • Calculate the RHS

at the average of t_n and t_{n+1}
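
(This is the trapezoidal / Crank-Nicolson rule; for the test problem it reads:)

  c_{n+1} = c_n + (Δt/2) [ f(t_n, c_n) + f(t_{n+1}, c_{n+1}) ]
  →  c_{n+1} = c_n · (1 − k Δt/2) / (1 + k Δt/2)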
77
Crank-Nicolson method
  • c0 = 1
  • k = 0.5
  • Δt = 1

78
Crank-Nicolson method
  • c0 = 1
  • k = 2.5
  • Δt = 1

Stable, but oscillates!
79
Solving initial value ODEs
  • The methods are
  • Runge-Kutta
  • Richardson extrapolation, especially
    Bulirsch-Stoer method
  • Predictor-corrector methods, also known as
    multistep methods
  • Runge-Kutta methods propagate a solution over an
    interval by combining the information from
    several Euler-style steps, and then using the
    information obtained to match a Taylor series
    expansion up to some higher order.

80
Solving initial value ODEs 2
  • Richardson extrapolation uses the idea of
    extrapolating a computed result to the value that
    would have been obtained if the step size had
    been very much smaller than it actually was. In
    particular, extrapolation to zero step size is
    the desired goal. The first practical ODE
    integrator that implemented this idea was
    developed by Bulirsch and Stoer. Thus the
    commonly used name Bulirsch-Stoer methods.

81
Solving initial value ODEs 3
  • Predictor-corrector methods or multistep methods
    store the solution along the way, and use those
    results to extrapolate the solution one step
    advanced. They then correct the extrapolation
    using derivative information at the new point.
    These are best for smooth functions.

82
Runge-Kutta
  • As the Euler method (advancing the solution from
    x_n to x_{n+1} = x_n + h) is not very accurate, an
    improvement of it is to take a trial step to the
    midpoint of the interval.
  • Then the values of both x and y at that midpoint
    are used to compute the real step across the whole
    interval.
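
(The midpoint, i.e. second-order Runge-Kutta, formulas referred to are:)

  k_1 = h f(x_n, y_n)
  k_2 = h f(x_n + h/2, y_n + k_1/2)
  y_{n+1} = y_n + k_2 + O(h³)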

83
Runge-Kutta 2
  • This is the second-order Runge-Kutta method. It can
    be extended to the fourth-order Runge-Kutta method
    as follows
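
(The classical fourth-order Runge-Kutta formulas are:)

  k_1 = h f(x_n, y_n)
  k_2 = h f(x_n + h/2, y_n + k_1/2)
  k_3 = h f(x_n + h/2, y_n + k_2/2)
  k_4 = h f(x_n + h, y_n + k_3)
  y_{n+1} = y_n + k_1/6 + k_2/3 + k_3/3 + k_4/6 + O(h⁵)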

84
Runge-Kutta 3
  • The 4th-order Runge-Kutta requires four
    evaluations of the RHS, while second-order
    Runge-Kutta requires fewer RHS evaluations.
  • Provided that 4th-order Runge-Kutta can use a
    considerably longer step size than second-order
    Runge-Kutta, it is superior in performance.
  • Still, a higher-order method does not always
    guarantee higher accuracy.
  • Using an adaptive step size in 4th-order
    Runge-Kutta improves the method. There are
    several variants of this class.

85
Richardson Extrapolation
  • These methods are good for smooth functions. The
    Bulirsch-Stoer method is believed to be the best
    known way to obtain high-accuracy solutions of
    ordinary differential equations with minimal
    computational effort. Three caveats
  • For nonsmooth problems, use Runge-Kutta.
  • If there are singular points inside the integration
    interval, use adaptive step-size Runge-Kutta or
    special methods.
  • If the function is smooth and the RHS expensive
    to evaluate, use predictor-corrector methods.

86
Bulirsch-Stoer method
  • A modified midpoint method advances a vector of
    dependent variables y(x) from a point x to a point
    x + H by a sequence of n substeps of size h.
  • h = H/n
  • The number of RHS evaluations required is n + 1.
    The formulas for the method are
  • Here the z's are intermediate approximations that
    march along in steps of h, while y_n is the final
    approximation to y(x + H).
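
(The modified midpoint formulas, as given for example in Numerical Recipes, are:)

  z_0 = y(x)
  z_1 = z_0 + h f(x, z_0)
  z_{m+1} = z_{m−1} + 2h f(x + m h, z_m),   m = 1, 2, …, n−1
  y_n = ½ [ z_n + z_{n−1} + h f(x + H, z_n) ]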

87
Bulirsch-Stoer method 2
  • Bulirsch-Stoer attempts to cross the interval H
    using the modified midpoint method with
    increasing values of n, the number of substeps.
    They used the sequence
  • n = 2, 4, 6, 8, 12, 16, 24, 32, 48, 64, 96, …
  • n_j = 2 n_{j−2}
  • After each successive n is tried, a polynomial
    extrapolation is performed. If the error is not
    satisfactory, a higher n is attempted.
  • If we go too far in increasing n, then the
    interval H is reduced. There is also a limit for
    the reduction of H.

88
Stiff sets of equations
  • If we have more than one first-order ODE, the
    possibility of a stiff set of equations arises.
  • Normally stiffness occurs when there are very
    different time scales involved in the system: one
    phenomenon is extremely fast and requires a very
    small time step, while some other phenomena are
    very slow and determine the total time required
    for the solution interval.

89
Stiff sets of equations 2
  • An initial value problem is stiff on the interval
    [a, b] if for all t it holds that
  • Re(λ_i) < 0, i = 1, 2, …, s,
  • where λ_i are the eigenvalues of the Jacobian
    matrix J determined at the point (t, y(t)). The term
    S(t) is called the stiffness coefficient at point t.
  • The definition means that a stable set of ODEs
    is stiff when the difference between the largest
    and smallest real parts of the eigenvalues of the
    Jacobian is large.
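
(The stiffness coefficient itself appeared as an image; it is commonly defined as the ratio below, which is assumed here:)

  S(t) = max_i |Re λ_i(t)| / min_i |Re λ_i(t)|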

90
Stiff sets of equations 3
  • Consider
  • With boundary conditions
  • A solution is found

91
Stiff sets of equations 4
  • The part e^(−1000x) requires a very small step size
    in the solution algorithm.
  • The part e^(−1000x) is negligible compared with the
    first part of the solution as soon as one is away
    from the origin.
  • The accuracy requirement of the solution over a
    reasonably large interval would allow a much longer
    step based on the first term than on the term
    e^(−1000x).

92
Stiff sets of equations 5
  • In stiff problems the step size is largely
    determined by terms that are negligible and fast
    disappearing from the point of view of the whole
    problem.
  • Such systems are often met when chemical reaction
    kinetics are modeled and some of the components
    (intermediates) react rapidly while the overall
    reactions are slow.

93
Stiff sets of equations 6
  • Explicit (or forward) Euler type methods would
    diverge.
  • Implicit methods have to be used, like backward
    Euler.
  • Methods used for stiff systems include the
    Rosenbrock and semi-implicit Euler methods. The best
    methods for stiff problems are backward multistep
    methods.

94
Multistep, Multivalue, and Predictor-Corrector
Methods
  • To advance the solution of y′ = f(x, y) from x_n to
    x_{n+1} we have the integral relation shown below.
  • In a single-step method (like Runge-Kutta or
    Bulirsch-Stoer) the value y_{n+1} at x_{n+1} depends
    only on y_n.
  • In multistep methods f(x, y) is approximated by a
    polynomial passing through several points x_n,
    x_{n−1}, …, and possibly also through x_{n+1}.
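
(The relation referred to in the first bullet above is:)

  y_{n+1} = y_n + ∫_{x_n}^{x_{n+1}} f(x, y) dx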

95
Multistep, Multivalue, and Predictor-Corrector
Methods 2
  • The result of evaluating the previous integral at
    x = x_{n+1} is then of the form shown below.
  • Here y′_n denotes f(x_n, y_n) and so on.
  • If β_0 = 0 the method is explicit, otherwise it is
    implicit.
  • The methods have been developed to include a
    predictor step and a corrector step →
    predictor-corrector methods.
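
(The general form referred to, with coefficients β_0, β_1, …, is:)

  y_{n+1} = y_n + h ( β_0 y′_{n+1} + β_1 y′_n + β_2 y′_{n−1} + … )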

96
Multistep, Multivalue, and Predictor-Corrector
Methods 3
  • Adams-Bashforth-Moulton method
  • Adams-Bashforth predictor
  • Adams-Moulton corrector
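
(The formulas were images; the commonly used fourth-order pair, with f_n = f(x_n, y_n), is:)

  predictor:  y_{n+1} = y_n + (h/24) (55 f_n − 59 f_{n−1} + 37 f_{n−2} − 9 f_{n−3})
  corrector:  y_{n+1} = y_n + (h/24) (9 f_{n+1} + 19 f_n − 5 f_{n−1} + f_{n−2})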

97
Boundary value problems
  • Boundary value ODE.
  • We seek the solution of a set of n ODEs
  • where the vectors y and f contain n components. In
    addition the solution is constrained at two
    distinct points (a and b) of the interval of
    interest
  • Here the boundary condition is given by

98
Two-point Boundary Value Problems
  • The boundary value ODE problem is basically
    solved by two techniques
  • Shooting method
  • Relaxation method (also called the difference method)
  • As the solution must satisfy conditions at two
    points, one at the beginning of the interval and
    one at the end, this is not as straightforward as
    the initial value problem.
  • The boundary value problem solution requires
    iteration.

99
The Shooting Method
  • In the shooting method, using the following
    example problem,
  • the equation is transformed into the form
  • where the initial value y(a) = α is fixed.
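
(The example problem and its transformation were images. In the usual notation, assumed here, the boundary value problem (I) and the associated initial value problem (II) are:)

  (I)   y″ = f(x, y, y′),   y(a) = α,   y(b) = β
  (II)  y″ = f(x, y, y′),   y(a) = α,   y′(a) = η   (η = the shooting angle)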

100
The Shooting Method 2
  • The solution of the latter equation at point b is
    a function of the shooting angle η, i.e. of the
    guessed slope y′(a), and we denote it by g(η).
  • The equation to be solved is now g(η) = β.
  • In practice the shooting angle is first guessed
    and problem II is solved. This yields at the point
    t = b the approximation g(η_0).
  • If it is close to y(b) = β we have a solution;
    otherwise a new shooting angle is chosen and the
    process continues. After two approximations the
    secant method can be used.
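
(A minimal sketch of the shooting method with secant iteration, using a hypothetical test problem y″ = −y, y(0) = 0, y(π/2) = 1, whose exact shooting slope is y′(0) = 1; the function names and the test problem are illustrative, not from the slides.)

import numpy as np

def rk4_step(f, x, y, h):
    # one classical fourth-order Runge-Kutta step
    k1 = f(x, y)
    k2 = f(x + h / 2, y + h / 2 * k1)
    k3 = f(x + h / 2, y + h / 2 * k2)
    k4 = f(x + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def terminal_value(f, a, b, y0, n=200):
    # integrate the initial value problem from a to b and return y(b)
    h = (b - a) / n
    x, y = a, np.asarray(y0, dtype=float)
    for _ in range(n):
        y = rk4_step(f, x, y, h)
        x += h
    return y[0]

# hypothetical test BVP: y'' = -y, y(0) = 0, y(pi/2) = 1
f = lambda x, y: np.array([y[1], -y[0]])
a, b, alpha, beta = 0.0, np.pi / 2, 0.0, 1.0

def g(eta):
    # y(b) as a function of the shooting angle eta = y'(a)
    return terminal_value(f, a, b, [alpha, eta])

# secant iteration on g(eta) - beta = 0
eta0, eta1 = 0.0, 1.5
for _ in range(20):
    g0, g1 = g(eta0) - beta, g(eta1) - beta
    if g1 == g0:
        break
    eta0, eta1 = eta1, eta1 - g1 * (eta1 - eta0) / (g1 - g0)
    if abs(eta1 - eta0) < 1e-10:
        break
print("shooting angle y'(a) =", eta1)   # should be close to 1.0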

101
Relaxation Methods
  • In the relaxation method the interval [a, b] is
    divided into m equal parts, so that h = (b − a)/m,
    t_i = a + i·h, i = 0, …, m.
  • Denote the approximation of the accurate solution
    y(t_i) by y_i.
  • Then the derivatives are replaced with the
    approximations shown below.
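
(The standard central-difference approximations are:)

  y′(t_i) ≈ (y_{i+1} − y_{i−1}) / (2h)
  y″(t_i) ≈ (y_{i+1} − 2 y_i + y_{i−1}) / h²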

102
Relaxation Methods 2
  • This leads to a new set of difference equations
  • These are easily expressed and manipulated in
    matrix form.
  • For different types of boundary value problems
    special methods have been developed.
  • Difference methods are used extensively in
    solving sets of partial differential equations,
    where boundary problems are typical.

103
Selection of methods and some recommended
computer programs
  • The number of routines for solving ODEs is large.
    There are many routines in the IMSL and NAG
    packages.
  • The best programs can analyze whether the problem
    is stiff, and based on the analysis they are able
    to change the solution method, the step size or the
    order of the method.
  • Some programs are also able to recognize slight
    discontinuities and can cope with them.

104
Selection of methods and some recommended
computer programs
  • Stiff problems
  • Only so-called backward-type programs are
    suitable for stiff problems.
  • There are no program packages of implicit
    Runge-Kutta methods available.
  • The recommended programs are LSODA and LSODAR. They
    are also applicable to non-stiff problems and can
    find discontinuities.
  • The DSTIFF program also works with this class of
    problems.

105
Selection of methods and some recommended
computer programs
  • Non-stiff problems, where the determination of the
    derivative f(x, y) is cheap
  • Runge-Kutta works well. There are Runge-Kutta
    solvers of different orders. Some programs:
    RKSUITE, DEGRK, XRK.
  • In Matlab there are also the Runge-Kutta solvers
    ode23 and ode45.

106
Selection of methods and some recommended
computer programs
  • Non-stiff problems, where the determination of the
    derivative f(x, y) is expensive
  • All predictor-corrector methods are useful,
    e.g. DVDQ, STEP and VOAS.
  • Programs based on the Adams-Moulton technique are
    suitable for this class of problems; examples are
    DIFSUB, GEAR, EPISODE and LSODA.

107
Selection of methods and some recommended
computer programs
  • High accuracy is required
  • The extrapolation methods are excellent, provided
    that the solution curves are smooth enough (no
    discontinuities in the derivatives).
  • Extrapolation methods can also be used for stiff
    problems, but then they are expensive, as the number
    of derivative evaluations is high.
  • Extrapolation methods are not suitable if the
    solution is to be found at many points very close
    to each other.
  • A recommended program is METAN1.

108
Selection of methods and some recommended
computer programs
  • Higher order problems
  • The Taylor series methods are suitable, because
    there is then no need to transform the system into
    a set of first-order ODEs.
  • Taylor series methods generate the higher
    derivatives using recursive algorithms.
  • One of the best-known programs in this class is
    ATSMCC.

109
Selection of methods and some recommended
computer programs
  • Most of the program codes discussed earlier are
    available in netlib
  • http://www.netlib.org/
  • Many mathematical software packages have their own
    selection of ODE solvers, but these are rarely
    suitable for all cases.