Spectral Differencing with a Twist - PowerPoint PPT Presentation

Transcript and Presenter's Notes



1

Spectral Differencing with a Twist
Johann Radon Institute for Computational and
Applied Mathematics, Linz, Austria, 15 March 2004
3

Spectral Differencing with a Twist
Richard Baltensperger Université Fribourg
Manfred Trummer Pacific Institute for
the Mathematical Sciences Simon Fraser
University
Johann Radon Institute for Computational and
Applied Mathematics, Linz, Austria, 15 March 2004
trummer@sfu.ca
www.math.sfu.ca/mrt
4

Pacific Institute for the Mathematical
Sciences www.pims.math.ca
5

Simon Fraser University
6
Banff International Research Station
  • PIMS–MSRI collaboration (with MITACS)
  • 5-day workshops
  • 2-day workshops
  • Focused Research
  • Research In Teams
  • www.pims.math.ca/birs

7
Round-off Errors
  • In 1991, a Patriot missile battery in Dhahran,
    Saudi Arabia, failed to intercept an incoming
    Iraqi missile, resulting in 28 fatalities. The
    failure was traced back to the accumulation of
    round-off error in the computer arithmetic.

8
Introduction
  • Spectral differentiation:
  • Approximate the function f(x) by a global interpolant
  • Differentiate the global interpolant exactly
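The recipe above can be sketched in a few lines of NumPy; the choice of Chebyshev points and the use of numpy's `Chebyshev` class are illustrative assumptions, not the speakers' code:

```python
import numpy as np

# Sketch: interpolate f at Chebyshev points (an assumed choice),
# then differentiate the interpolant exactly.
N = 12
x = np.cos(np.pi * np.arange(N + 1) / N)   # Chebyshev points of the 2nd kind
f = np.exp(x)

# Degree-N interpolating polynomial through the N+1 samples,
# then its exact derivative, evaluated back at the nodes.
p = np.polynomial.chebyshev.Chebyshev.fit(x, f, N)
g = p.deriv()(x)

print(np.max(np.abs(g - np.exp(x))))       # spectrally small for smooth f
```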

9
Features of spectral methods
  • Exponential (spectral) accuracy: converges
    faster than any power of 1/N
  • Fairly coarse discretizations give good results
  • - Full instead of sparse matrices
  • - Tighter stability restrictions
  • - Less robust; difficulties with irregular
    domains

10
Types of Spectral Methods
  • Spectral-Galerkin work in transformed space,
    that is, with the coefficients in the expansion.
  • Example: the heat equation u_t = u_xx

11
Types of Spectral Methods
  • Spectral collocation (pseudospectral): work in
    physical space. Choose collocation points x_k,
    k = 0, ..., N, and approximate the function of
    interest by its values at those collocation
    points.
  • Computations rely on interpolation.
  • Issues: aliasing, ease of computation,
    nonlinearities

12
Interpolation
  • Periodic functions, equidistant points: Fourier
  • Polynomial interpolation
  • Interpolation by Rational functions
  • Chebyshev points
  • Legendre points
  • Hermite, Laguerre

13
Differentiation Matrix D
  • Discrete data set f_k, k = 0, ..., N
  • Interpolate between collocation points x_k:
    p(x_k) = f_k
  • Differentiate p(x)
  • Evaluate p'(x_k) = g_k
  • All operations are linear: g = Df
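Taken literally, the steps above define D column by column: column j holds the derivative of the interpolant through the unit data e_j. A minimal sketch (the node choice and numpy's `Chebyshev` class are assumptions, not the talk's implementation):

```python
import numpy as np

# Column j of D = derivative values of the cardinal interpolant
# through the unit data e_j, evaluated at the nodes.
N = 10
x = np.cos(np.pi * np.arange(N + 1) / N)
D = np.zeros((N + 1, N + 1))
for j in range(N + 1):
    e = np.zeros(N + 1)
    e[j] = 1.0                      # cardinal data: p(x_k) = delta_jk
    p = np.polynomial.chebyshev.Chebyshev.fit(x, e, N)
    D[:, j] = p.deriv()(x)          # g = D f is then linear in f

f = x ** 3                           # a polynomial, differentiated exactly
print(np.max(np.abs(D @ f - 3 * x ** 2)))
```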

14
Software
  • Funaro: FORTRAN code, various polynomial
    spectral methods
  • Don-Solomonoff, Don-Costa: PSEUDOPAK, FORTRAN
    code, more engineering oriented, includes
    filters, etc.
  • Weideman-Reddy: based on differentiation
    matrices, written in MATLAB (fast MATLAB
    programming)

15
Polynomial Interpolation
  • Lagrange form: although admired for its
    mathematical beauty and elegance, it is not
    useful in practice:
  • Expensive
  • Difficult to update

16
Barycentric formula, version 1
p(x) = ℓ(x) Σ_{k=0..N} [w_k / (x - x_k)] f_k,  where ℓ(x) = Π_{j=0..N} (x - x_j)
17
Barycentric formula, version 2
p(x) = [Σ_{k=0..N} w_k f_k / (x - x_k)] / [Σ_{k=0..N} w_k / (x - x_k)]
  • Set-up: O(N²)
  • Evaluation: O(N)
  • Update (add a point): O(N)
  • New f_k values: no extra work!
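A minimal Python sketch of version 2; the helper name `bary_eval` and the test function are mine, and the weights are the Chebyshev (2nd-kind) ones that appear on the following slides:

```python
import numpy as np

# Barycentric formula, version 2: O(N) per evaluation point.
def bary_eval(x_nodes, w, f, x):
    """Evaluate the barycentric interpolant at a scalar x."""
    d = x - x_nodes
    if np.any(d == 0):                 # x coincides with a node
        return f[np.argmin(np.abs(d))]
    q = w / d
    return np.dot(q, f) / np.sum(q)

# Chebyshev points of the 2nd kind with weights (-1)^k, halved at the ends.
N = 10
x_nodes = np.cos(np.pi * np.arange(N + 1) / N)
w = (-1.0) ** np.arange(N + 1)
w[0] *= 0.5
w[-1] *= 0.5

f = np.sin(x_nodes)
print(abs(bary_eval(x_nodes, w, f, 0.3) - np.sin(0.3)))
```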

18
Barycentric formula weights w_k
w_k = 1 / Π_{j≠k} (x_k - x_j),  k = 0, ..., N
19
Barycentric formula weights wk
  • Equidistant points: w_k = (-1)^k (N choose k)
  • Chebyshev points (1st kind): w_k = (-1)^k sin((2k+1)π/(2N+2))
  • Chebyshev points (2nd kind): w_k = (-1)^k δ_k, with δ_0 = δ_N = 1/2 and δ_k = 1 otherwise

20
Barycentric formula weights wk
  • The weights can all be multiplied by the same
    constant
  • The formula interpolates for any weights:
    rational interpolation!
  • The relative size of the weights indicates
    ill-conditioning

21
Computation of the Differentiation Matrix
  • Entirely based upon interpolation.

22
Barycentric Formula
  • Barycentric differentiation (Schneider/Werner):
    D_kj = (w_j / w_k) / (x_k - x_j) for j ≠ k,
    D_kk = -Σ_{j≠k} D_kj
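The Schneider-Werner entries transcribe directly into Python; this is an illustrative sketch (the name `sw_derivative` is mine), applied with Chebyshev 2nd-kind points and their barycentric weights:

```python
import numpy as np

# Schneider-Werner: p'(x_k) = sum_{j != k} (w_j/w_k) (f_j - f_k) / (x_k - x_j)
def sw_derivative(x, w, f):
    n = len(x)
    g = np.zeros(n)
    for k in range(n):
        j = np.arange(n) != k                     # all indices except k
        g[k] = np.sum((w[j] / w[k]) * (f[j] - f[k]) / (x[k] - x[j]))
    return g

N = 16
x = np.cos(np.pi * np.arange(N + 1) / N)
w = (-1.0) ** np.arange(N + 1)
w[0] *= 0.5
w[-1] *= 0.5

print(np.max(np.abs(sw_derivative(x, w, np.exp(x)) - np.exp(x))))
```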

23
Chebyshev Differentiation
x_k = cos(kπ/N), k = 0, ..., N
  • Differentiation matrix:
    D_kj = (c_k / c_j) (-1)^{j+k} / (x_k - x_j), j ≠ k,
    with c_0 = c_N = 2 and c_k = 1 otherwise
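The matrix can be assembled as in Trefethen's well-known cheb.m; the following is a NumPy sketch under that assumption, not the speakers' code (the diagonal already uses the negative sum trick that the later slides discuss):

```python
import numpy as np

# Chebyshev differentiation matrix, cheb.m-style (a sketch).
def cheb(N):
    if N == 0:
        return np.zeros((1, 1)), np.array([1.0])
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.ones(N + 1)
    c[0] = c[-1] = 2.0
    c *= (-1.0) ** np.arange(N + 1)
    X = np.tile(x, (N + 1, 1)).T
    dX = X - X.T                              # x_k - x_j
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))
    D -= np.diag(D.sum(axis=1))               # negative sum trick diagonal
    return D, x

D, x = cheb(16)
print(np.max(np.abs(D @ np.exp(x) - np.exp(x))))  # small for smooth f
```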

24
Chebyshev Matrix has Numerical Behavioural Problems
  • Trefethen-Trummer, 1987
  • Rothman, 1991
  • Breuer-Everson, 1992
  • Don-Solomonoff, 1995/1997
  • Bayliss-Class-Matkowsky, 1995
  • Tang-Trummer, 1996
  • Baltensperger-Berrut, 1996

25
Chebyshev Matrix and Errors
[Plots: the matrix entries, the absolute errors, and the relative errors]
26
Round-off error analysis
Each computed node x_k carries a relative error of the order of machine
epsilon, and so does each computed difference x_k - x_j; for nearby nodes
this difference suffers cancellation, so a small absolute error becomes a
large relative error in the corresponding matrix entries.
27
Roundoff Error Analysis
  • With a good computation we expect an error in
    D_01 of O(N²ε), where ε is the machine epsilon
  • Most elements of D are O(1)
  • Some are O(N), and a few are O(N²)
  • We must be careful to see whether absolute or
    relative errors enter the computation

28
Remedies
  • Preconditioning: add ax + b to f to create a
    function which is zero at the boundary
  • Compute D in higher precision
  • Use trigonometric identities
  • Use symmetry: the flipping trick
  • NST: the negative sum trick

29
More ways to compute Df
If we only want Df but not the matrix D (e.g., in
time stepping or iterative methods), we can compute
Df for any f via:
  • FFT based approach
  • Schneider-Werner formula
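For the FFT route, a Python port of the standard chebfft algorithm (cf. Trefethen's Spectral Methods in MATLAB) gives the flavour; this is a sketch under that assumption, not the authors' implementation:

```python
import numpy as np

# Chebyshev derivative of the values v at x_k = cos(k*pi/N) via the FFT.
def chebfft(v):
    N = len(v) - 1
    if N == 0:
        return np.zeros(1)
    x = np.cos(np.pi * np.arange(N + 1) / N)
    ii = np.arange(N)
    V = np.concatenate([v, v[N - 1:0:-1]])    # extend to an even periodic grid
    U = np.real(np.fft.fft(V))
    k = np.concatenate([ii, [0], np.arange(1 - N, 0)])
    W = np.real(np.fft.ifft(1j * k * U))      # derivative in theta
    w = np.zeros(N + 1)
    w[1:N] = -W[1:N] / np.sqrt(1 - x[1:N] ** 2)   # chain rule x = cos(theta)
    w[0] = np.sum(ii ** 2 * U[:N]) / N + 0.5 * N * U[N]
    w[N] = (np.sum((-1.0) ** (ii + 1) * ii ** 2 * U[:N]) / N
            + 0.5 * (-1.0) ** (N + 1) * N * U[N])
    return w

N = 16
x = np.cos(np.pi * np.arange(N + 1) / N)
err = np.max(np.abs(chebfft(np.exp(x)) - np.exp(x)))
print(err)
```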

30
Chebyshev Differentiation Matrix
x_k = cos(kπ/N)
  • Original Formulas

Cancellation!
31
Trigonometric Identities
x_k - x_j = cos(kπ/N) - cos(jπ/N) = 2 sin((j+k)π/(2N)) sin((j-k)π/(2N))
32
Flipping Trick
sin(π - x) is not computed as accurately as sin(x).
Use the symmetry
D_kj = -D_{N-k,N-j}
and flip the upper half of D into the lower half,
or the upper-left triangle into the lower-right
triangle.
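A NumPy sketch of the flip, using the symmetry D_kj = -D_{N-k,N-j}; the matrix construction below is the textbook formula, and the demo only checks that the reflected half agrees with the directly computed one:

```python
import numpy as np

# Build the Chebyshev matrix directly (textbook formula, NST diagonal).
N = 8
x = np.cos(np.pi * np.arange(N + 1) / N)
c = np.ones(N + 1)
c[0] = c[-1] = 2.0
c *= (-1.0) ** np.arange(N + 1)
X = np.tile(x, (N + 1, 1)).T
D = np.outer(c, 1.0 / c) / (X - X.T + np.eye(N + 1))
D -= np.diag(D.sum(axis=1))

# Flipping trick: overwrite the lower rows with the negated, reversed
# upper rows, i.e. Dflip[N-k, j] = -D[k, N-j].
Dflip = D.copy()
for k in range((N + 1) // 2):
    Dflip[N - k, :] = -D[k, ::-1]

print(np.max(np.abs(Dflip - D)))   # symmetry holds up to rounding
```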
33
NST: Negative Sum Trick
Spectral differentiation is exact for constant
functions, so every row of D sums to zero; set
D_kk = -Σ_{j≠k} D_kj.
Arrange the order of summation to sum the smaller
elements first; this requires sorting.
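The trick can be sketched as follows; `nst_diagonal` is an illustrative name, and the test matrix is built from the textbook Chebyshev formula:

```python
import numpy as np

# NST with sorted summation: set each diagonal entry to minus the sum of
# the off-diagonal entries in its row, accumulating small magnitudes first.
def nst_diagonal(D):
    Dn = D.copy()
    for k in range(D.shape[0]):
        row = np.delete(D[k], k)
        row = row[np.argsort(np.abs(row))]   # smaller elements first
        Dn[k, k] = -np.sum(row)
    return Dn

# Assumed test matrix: Chebyshev matrix from the textbook formula.
N = 8
x = np.cos(np.pi * np.arange(N + 1) / N)
c = np.ones(N + 1)
c[0] = c[-1] = 2.0
c *= (-1.0) ** np.arange(N + 1)
D = np.outer(c, 1.0 / c) / (x[:, None] - x[None, :] + np.eye(N + 1))
D = nst_diagonal(D)

print(np.max(np.abs(D @ np.ones(N + 1))))   # rows now sum to ~0
```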
34
Numerical Example
35
Numerical Example
36
Observations
  • The original formula for D is very inaccurate
  • Trig/Flip + NST (Weideman-Reddy) provides a good
    improvement
  • FFT is not as good as expected, in particular
    when N is not a power of 2
  • NST applied to the original D gives the best
    matrix
  • There are even more accurate ways to compute Df

37
Machine Dependency
  • Results can vary substantially from machine to
    machine, and may depend on software.
  • Intel/PC: FFT performs better
  • SGI
  • SUN
  • DEC Alpha

38
Understanding the Negative Sum Trick
39
Understanding the Negative Sum Trick
[Plots: error contributions from errors in f and from errors in D]
40
Understanding the NST (2)
41
Understanding the NST (3)
The inaccurate matrix elements are multiplied by
very small numbers, leading to O(N²ε) errors:
optimal accuracy
42
Understanding the Negative sum trick
NST is an (inaccurate) implementation of the
Schneider-Werner formula
Schneider-Werner vs. Negative Sum Trick
43
Understanding the Negative sum trick
  • Why do we obtain superior results when applying
    the NST to the original (inaccurate) formula?
  • Accuracy of Finite Difference Quotients

44
Finite Difference Quotients
  • For monomials, a cancellation of the
    cancellation errors takes place; e.g., for
    f(x) = x² the errors committed in f_j - f_k and
    in x_j - x_k cancel in the quotient
  • Typically f_j - f_k is less accurate than
    x_j - x_k, so computing x_j - x_k more
    accurately does not help!

45
Finite Difference Quotients
46
Finite Difference Quotients
47
Fast Schneider-Werner
  • Cost of Df is 2N²; SW costs 3N²
  • Can implement Df with a fast SW method

The size of each corner block is N^(1/2); cost is
2N² + O(N)
48
Polynomial Differentiation
  • For example, Legendre, Hermite, Laguerre
  • Fewer tricks available, but Negative Sum Trick
    still provides improvements
  • Ordering the summation may become even more
    important

49
Higher Order Derivatives
  • Best not to compute D^(2) as D·D, etc.
  • Formulas by Welfert (implemented in
    Weideman-Reddy)
  • The negative sum trick again yields improvements
  • Higher-order differentiation matrices are badly
    conditioned, so gaining a little more accuracy is
    more important than for first order

50
Using D to solve problems
  • In many applications the first and last
    row/column of D is removed because of boundary
    conditions
  • f → Df appears to be most sensitive to how D is
    computed (the forward problem: differentiation)
  • We have observed improvements in solving BVPs

51
Solving a Singular BVP
52
Results
53
Close
  • Demystified some of the less intuitive behaviour
    of differentiation matrices
  • Get more accuracy for the same cost
  • Study the effects of using the various
    differentiation matrices in applications
  • The forward problem is more sensitive than the
    inverse problem
  • Df: time-stepping, iterative methods

54
To think about
  • Is double precision enough as we become able to
    solve bigger problems?
  • The irony of spectral methods: exponential
    convergence, yet round-off error is the limiting
    factor
  • Accuracy requirements limit us to N of moderate
    size, so the FFT is not much faster than the
    matrix-based approach

55
And now for the twist.