Week 4. Function approximation - Transcript and Presenter's Notes
1
Week 4. Function approximation
  • This week: acceleration of series, Chebyshev
  • Summary of last week: function approximation
  • Quadratic and cubic roots
  • Derivatives
  • Choose epsilon wisely
  • With 1 extra function call: lose half of the significant digits
  • Approximation schemes for smooth functions
  • Roots
  • Polynomial deflation, inflation
  • Suppose x is about 10 and the roots are of order < 10: each term in the product has an accuracy of about 10 times machine accuracy, but the last term in the series has an accuracy of about 10^10 times machine accuracy!

2
Summary week 3
  • Root finding
  • Try to bracket the root
  • Bisection -> sure to proceed. Every step the root is localized twice as well -> linear convergence
  • Secant / false position method: superlinear convergence
  • Van Wijngaarden-Dekker-Brent
  • Inverse quadratic interpolation: write x as a polynomial in y
  • Newton-Raphson: quadratic convergence
  • Every step the number of significant digits doubles!
  • Exercise
  • Not many results were handed in; we will discuss it next week

3
Laguerre's method
  • general method for finding real and complex roots
  • extremely stable.
  • Motivation

4
Laguerre's method
  • drastic assumption: the sought root is located at a distance a = x - x0 from the trial value, while all other roots are at an equal, larger distance b = x - x_i (see the step formula below)
  • a may be complex
  • the next trial value is x - a
  • NR's laguer() does this for you.
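For reference, the Laguerre step that follows from this assumption, for a polynomial P of degree n (the iteration implemented in laguer()), is

    G = P'(x)/P(x),    H = G^2 - P''(x)/P(x)
    a = n / ( G ± sqrt( (n-1)*(n*H - G^2) ) )

where the sign is chosen to give the denominator of larger magnitude, and the next trial value is x - a.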

5
Eigenvalue methods
  • Eigenvalues of a matrix A are the roots of the characteristic polynomial P(x) = det(A - xI).
  • Root finding is not the efficient way to find these eigenvalues; we will see that later
  • Alternatively, it is possible to construct an (m+1) x (m+1) matrix that has as eigenvalues the roots of an m-th order polynomial P(x).
  • The roots can then be found with methods from
    chapter 11.

6
Power series
  • analytic function: power series expansion
  • update terms (such as fac) incrementally
  • do not use p[i] = c[i]*pow(x,i), or p[4] = c[4]*x*x*x*x
  • one can calculate the sum and its derivative at once (a full C version follows below):
  • p = c[n]
  • dp = 0
  • i = n
  • while (i > 0)
  •   i = i - 1
  •   dp = dp*x + p
  •   p = p*x + c[i]
  • the convergence of a power series (up to the first pole in the complex plane) is generally known
  • convergence can be slow (e.g. ...)
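A minimal, self-contained C version of the loop above (illustrative; the coefficient array c[0..n] and the function name are assumptions):

    /* Evaluate p(x) = c[0] + c[1]*x + ... + c[n]*x^n and its derivative
       p'(x) in a single pass (Horner's scheme). */
    double poly_and_deriv(const double c[], int n, double x, double *dp)
    {
        double p = c[n];
        *dp = 0.0;
        for (int i = n - 1; i >= 0; i--) {
            *dp = (*dp) * x + p;   /* derivative accumulates the previous p */
            p   = p * x + c[i];    /* value */
        }
        return p;
    }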

7
Accelerating convergence
  • geometric series: Aitken's delta^2-process (formula below)
  • S_n: partial sum, up to term n
  • S'_n: new series, in which the partial sums are re-used and added with different weights
  • also possible: use n-p and n+p instead of n-1 and n+1
  • example: for sum (1/2)^n the correct result is already obtained for S'_1
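For reference, Aitken's delta^2-process builds each accelerated partial sum from three consecutive ones,

    S'_n = S_{n+1} - (S_{n+1} - S_n)^2 / (S_{n+1} - 2*S_n + S_{n-1}),

and it is exact for a purely geometric series such as sum (1/2)^n, which is why S'_1 already gives the correct result there.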

8
accelerating convergence
  • Alternating series: Euler's transformation
  • forward difference operator (widely used)
  • Euler's transformation converges more rapidly (see below).
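For reference, with the forward difference operator Δu_0 = u_1 - u_0, Δ^2 u_0 = u_2 - 2*u_1 + u_0, ..., Euler's transformation of an alternating series reads

    sum_{s=0..inf} (-1)^s u_s  =  sum_{s=0..inf} (-1)^s / 2^(s+1) * Δ^s u_0,

so the transformed series is built from the differences of the leading terms.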

9
Euler's transformation
  • Euler's transformation
  • sometimes even OK for non-convergent series!
  • typically used for asymptotic series
  • Van Wijngaarden: eulsum in NR
  • a numerical algorithm for Euler's transformation
  • it decides by itself whether to add more terms before the Euler summation or to update the Euler terms
  • Euler converges rapidly! Sometimes one converts a series with only single-sign terms into an alternating series, just to be able to use Euler's transformation afterwards
  • (also the van Wijngaarden method in NR)

10
Converting series
  • replace the sum (see below)
  • this replaces the sum over v by a double sum: a sum over w, where each w_r itself contains a sum over v
  • w_r converges rapidly, since its index grows so fast
  • can only be used if arbitrary terms v_r can be calculated easily
  • Euler's transform is a special case of a general power-series transformation
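For reference, the conversion referred to here turns a series of same-sign terms into an alternating one:

    sum_{r=1..inf} v_r = sum_{r=1..inf} (-1)^(r-1) w_r,    with  w_r = v_r + 2*v_{2r} + 4*v_{4r} + ... = sum_{i=0..inf} 2^i * v_{(2^i) r}.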

11
  • Usual Euler transformation
  • Note: these series techniques are frequently used in Fourier analysis (later this course), and also for Chebyshev approximation (next week).

12
Continued fractions
  • a, b may be functions of x, usually linear or at most quadratic
  • e.g. tan(x) = x/(1 - x^2/(3 - x^2/(5 - x^2/(7 - ...))))
  • usually this converges much more rapidly than the power series, and over a much larger region of the complex plane
  • How far to evaluate? A scheme is needed to evaluate the fraction from left to right, instead of starting with the n-th term and hoping it has converged enough (procedure from Wallis, 1655!)
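For reference, the general continued fraction is written as

    f(x) = b_0 + a_1/(b_1 + a_2/(b_2 + a_3/(b_3 + ...))),

so the tan(x) example corresponds to b_0 = 0, a_1 = x, b_1 = 1 and a_j = -x^2, b_j = 2j-1 for j >= 2.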

13
Continued fractions
  • Wallis (Arithmetica Infinitorum), 1655 (recurrence below)
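For reference, Wallis' recurrence evaluates the fraction from left to right via the numerators A_j and denominators B_j of the successive approximants f_j = A_j/B_j:

    A_j = b_j*A_{j-1} + a_j*A_{j-2},    B_j = b_j*B_{j-1} + a_j*B_{j-2},
    A_{-1} = 1,  B_{-1} = 0,  A_0 = b_0,  B_0 = 1,

with f_j converging to the value of the fraction as j increases.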

14
continued fractions
  • Frequently an A_j or B_j becomes very small or very large, causing machine inaccuracies (under/overflow).
  • One can rescale all results, e.g. by dividing the last four quantities A_j, A_{j-1}, B_j and B_{j-1} all by B_j.
  • Steed's method uses the ratios D_j = B_{j-1}/B_j.
  • calculate D_j and f_j recursively (see below)
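For reference, Steed's recurrence then reads

    D_j = 1 / (b_j + a_j*D_{j-1}),
    Δf_j = f_j - f_{j-1} = (b_j*D_j - 1) * Δf_{j-1},
    f_j = f_{j-1} + Δf_j,

starting from f_0 = b_0, D_1 = 1/b_1 and Δf_1 = a_1/b_1. It gets into trouble when a denominator b_j + a_j*D_{j-1} comes close to zero, which motivates the Lentz method on the next slide.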

15
continued fractions
  • Best method: Lentz. No rescaling is needed, because both ratios C_j = A_j/A_{j-1} and D_j = B_{j-1}/B_j are used (sketch below)
  • if C_j or the denominator in D_j becomes very small, replace it by a tiny number (e.g. 10^-30); f_{j+1} will then still be calculated correctly.
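A minimal C sketch of the modified Lentz algorithm, applied to the tan(x) fraction of slide 12 (function name, iteration limit and tolerances are illustrative; this is not the NR routine itself):

    #include <math.h>
    #include <stdio.h>

    double tan_cf(double x)
    {
        const double tiny = 1.0e-30;    /* replaces (near-)zero terms */
        const double eps  = 1.0e-15;    /* relative accuracy goal     */
        double f = tiny;                /* b0 = 0, so start from tiny */
        double C = f, D = 0.0;
        for (int j = 1; j <= 100; j++) {
            double a = (j == 1) ? x : -x * x;   /* a1 = x, aj = -x^2 for j > 1 */
            double b = 2.0 * j - 1.0;           /* bj = 1, 3, 5, ...           */
            D = b + a * D;  if (D == 0.0) D = tiny;
            C = b + a / C;  if (C == 0.0) C = tiny;
            D = 1.0 / D;
            double delta = C * D;
            f *= delta;
            if (fabs(delta - 1.0) < eps) break; /* converged */
        }
        return f;
    }

    int main(void)
    {
        printf("%.15f  %.15f\n", tan_cf(1.0), tan(1.0));  /* should agree */
        return 0;
    }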

16
Clenshaw's recurrence formula
  • recurrence relations, e.g. Legendre polynomials, Bessel functions, etc.

17
recurrence
  • recurrence relations, e.g. Legendre polynomials, Bessel functions, etc.
  • Stability!
  • 2 solutions, f_n and g_n
  • maybe f_n is the wanted one; relative to it, g_n can be comparable, exponentially damped, or exponentially growing (e.g. Bessel functions at growing n)
  • f_n/g_n -> 0 as n -> infinity: f_n is the minimal solution
  • g_n is a dominant solution
  • the minimal solution is unique, a dominant one is not
  • How to test:
  • start with the seeds (0, 1) and (1, 0)
  • evolve 20 terms and look at the difference
  • (stability is a property of the recurrence relation, not of the function)

18
stability
  • Alternatively, replace the recurrence relation by a linear one with constant coefficients and inspect the roots of its characteristic equation.

19
Recurrence
  • how to proceed if the recurrence is unstable in the wanted direction?
  • start in the other direction with arbitrary seeds; the result is the correct solution times an unknown normalization constant. E.g. in the case of Bessel functions:
  • start at a sufficiently large order n (e.g. ...) for 10 significant digits, and set the two starting values to arbitrary seeds such as 0 and 1
  • recur down to J_1, J_0 and normalize to the separately calculated value of J_0(x)
  • (or normalize e.g. with 1 = J_0(x) + 2J_2(x) + 2J_4(x) + 2J_6(x) + ... + 2J_n(x); a sketch follows below)
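A minimal C sketch of this downward recurrence with the normalization identity above (the seed order NSTART and the function name are illustrative; production code such as NR's bessj also rescales on the way down to avoid overflow):

    #include <stdio.h>

    #define NSTART 50   /* seed order; choose well above the wanted n (x of order unity) */

    double besselj_down(int n, double x)
    {
        double jp1 = 0.0, jj = 1.0;       /* arbitrary seeds: J_{NSTART+1} = 0, J_{NSTART} = 1 */
        double jn_val = (n == NSTART) ? jj : 0.0;
        double sum = (NSTART % 2 == 0) ? 2.0 * jj : 0.0;

        for (int k = NSTART; k >= 1; k--) {
            double jm1 = (2.0 * k / x) * jj - jp1;   /* J_{k-1} = (2k/x) J_k - J_{k+1} */
            jp1 = jj;
            jj  = jm1;
            if (k - 1 == n) jn_val = jj;             /* unnormalized J_n */
            if (k - 1 > 0 && (k - 1) % 2 == 0) sum += 2.0 * jj;
        }
        sum += jj;                        /* sum = J_0 + 2*J_2 + 2*J_4 + ... (unnormalized) */
        return jn_val / sum;              /* normalize so that the sum equals 1 */
    }

    int main(void)
    {
        printf("J_3(1.0) ~ %.12f\n", besselj_down(3, 1.0));
        return 0;
    }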

20
Clenshaw recurrence formula
  • f(x) is a sum of coefficients times functions that obey a recurrence relation
  • solve for the c_k in terms of auxiliary quantities y_k and substitute (general formula below)
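For reference, the resulting general Clenshaw recurrence: if the functions obey F_{n+1}(x) = alpha(n,x)*F_n(x) + beta(n,x)*F_{n-1}(x) and f(x) = sum_{k=0..N} c_k F_k(x), then

    y_{N+2} = y_{N+1} = 0,
    y_k = alpha(k,x)*y_{k+1} + beta(k+1,x)*y_{k+2} + c_k      (k = N, N-1, ..., 1),
    f(x) = beta(1,x)*F_0(x)*y_2 + F_1(x)*y_1 + F_0(x)*c_0.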

21
Clenshaw Recurrence
  • the terms sum to zero up to the y_2 term
  • only the surviving terms remain (the last line of the formula above)
  • make one pass through the y_k
  • apply the above formula
  • almost always stable; it fails only when F_k is small for large k and c_k is small for small k, so that there is delicate cancellation and the first terms in the above equation cancel each other. In that case use an upward recurrence instead (see book). This situation can be detected.

22
Chebyshev Approximation
  • Chebyshev polynomials (definition and recurrence below)
  • T_n has n zeros, located at cos( pi*(k+1/2)/n )
  • T_n has n+1 extrema, located at cos( pi*k/n )
  • All maxima equal +1, all minima equal -1
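For reference, the definition and recurrence behind these properties:

    T_n(x) = cos( n*arccos x )  on [-1,1],
    T_0(x) = 1,  T_1(x) = x,  T_{n+1}(x) = 2x*T_n(x) - T_{n-1}(x),
    zeros:    x_k = cos( pi*(k+1/2)/n ),  k = 0, ..., n-1,
    extrema:  x_k = cos( pi*k/n ),        k = 0, ..., n.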

23
(No Transcript)
24
Chebyshev
  • orthogonal with weight 1/sqrt(1-x^2) (see below)
  • a discrete orthogonality relation holds over the m zeros x_k of T_m
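For reference, the two orthogonality relations referred to here (x_k are the m zeros of T_m, and i, j < m):

    integral_{-1}^{1} T_i(x)*T_j(x) / sqrt(1-x^2) dx = 0 (i != j),  pi/2 (i = j != 0),  pi (i = j = 0),

    sum_{k=1..m} T_i(x_k)*T_j(x_k) = 0 (i != j),  m/2 (i = j != 0),  m (i = j = 0).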

25
Chebyshev
  • Due to these nice properties:
  • for f(x) arbitrary on [-1,1], derive N coefficients c_j (formula below)
  • this relation for f(x) is EXACT at the N zeros x_k of T_N
  • maybe not the optimal polynomial of that degree, but it can be truncated to a much lower order m << N in a way that still yields a very accurate approximation.
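For reference, with x_k = cos( pi*(k-1/2)/N ), k = 1..N, the zeros of T_N, the coefficients and the approximation are (NR convention with the halved c_0 term):

    c_j = (2/N) * sum_{k=1..N} f(x_k)*T_j(x_k) = (2/N) * sum_{k=1..N} f( cos(pi*(k-1/2)/N) ) * cos( pi*j*(k-1/2)/N ),

    f(x) ~ [ sum_{k=0..N-1} c_k*T_k(x) ] - c_0/2,    exact at the N points x_k.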

26
Chebyshev truncation
  • the difference between f and the truncated sum is no larger than the sum of all neglected c_k
  • typically the c_k decrease rapidly (exponentially)
  • the error term c_m*T_m is a rapidly oscillating function, with T_m bounded between +1 and -1
  • the truncated Chebyshev expansion is therefore very close to the minimax polynomial: the polynomial of degree m that has the smallest maximum deviation from the function over all x.
  • for a general interval, use a change of variables, e.g. (see below)
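For reference, the standard change of variables mapping x in [a,b] onto y in [-1,1] is

    y = ( x - (b+a)/2 ) / ( (b-a)/2 ),

after which the Chebyshev machinery is applied to f as a function of y.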

27
Chebyshev approximation
  • calculate the Chebyshev coefficients to order N (N function calls, N^2 cosines)
  • store these coefficients
  • How to calculate f(x)?
  • Clenshaw recurrence (sketch below)
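A minimal C sketch of both steps on [-1,1] (function names are assumptions; the convention, with the halved c_0 term, is meant to match that of NR's chebft/chebev):

    #include <math.h>

    #ifndef M_PI
    #define M_PI 3.14159265358979323846
    #endif

    /* Fit: c[j] = (2/N) * sum_k f(x_k) T_j(x_k) at the N zeros x_k of T_N. */
    void cheb_fit(double (*f)(double), double c[], int N)
    {
        double fx[N];                                  /* C99 variable-length array */
        for (int k = 0; k < N; k++)
            fx[k] = f(cos(M_PI * (k + 0.5) / N));      /* f at the zeros of T_N */
        for (int j = 0; j < N; j++) {
            double s = 0.0;
            for (int k = 0; k < N; k++)
                s += fx[k] * cos(M_PI * j * (k + 0.5) / N);   /* T_j(x_k) */
            c[j] = 2.0 * s / N;
        }
    }

    /* Evaluate sum_{j=0}^{m-1} c[j] T_j(x) - c[0]/2 with Clenshaw's recurrence. */
    double cheb_eval(const double c[], int m, double x)
    {
        double d = 0.0, dd = 0.0;
        for (int j = m - 1; j >= 1; j--) {
            double sv = d;
            d  = 2.0 * x * d - dd + c[j];
            dd = sv;
        }
        return x * d - dd + 0.5 * c[0];
    }

Fitting with a generous N and then evaluating with m << N implements the truncation idea of the previous slides.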

28
Chebyshev
  • Even function: all odd coefficients are zero
  • better to use T_{2n}(x) = T_n(2x^2-1), which follows from cos(2x) = 2cos^2(x) - 1
  • call chebev with the argument x replaced by 2x^2-1
  • Odd function: calculate f(x)/x; this will give accurate results close to x = 0
  • alternatively, again use y = 2x^2-1, but now (since c_0 is not used) the last line of the code of chebev needs to be changed:
  • f(x) = x*((2y-1)*d1 - d2 + c0)

29
Chebyshev polynomials
  • If you have the Chebyshev coefficients of f, the coefficients for the derivative and the integral of f are given by simple recurrences (below)
  • integral: the constant C_0 is arbitrary (integration constant)
  • derivative: no information on the term of order m+1 (the downward recurrence is started with zeros)
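For reference, with f ~ [ sum_{k=0..m-1} c_k*T_k ] - c_0/2, the relations referred to here are (the output coefficients follow the same halved-c_0 convention):

    integral:    C_i = ( c_{i-1} - c_{i+1} ) / (2i),   i >= 1,   C_0 arbitrary,
    derivative:  c'_{i-1} = c'_{i+1} + 2*i*c_i,        run downward for i = m-1, ..., 1, starting from c'_m = c'_{m+1} = 0.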

30
Chebyshev example from book