Title: Spectral Differencing with a Twist
1. Spectral Differencing with a Twist
Johann Radon Institute for Computational and Applied Mathematics, Linz, Austria, 15 March 2004
3. Spectral Differencing with a Twist
Richard Baltensperger, Université Fribourg
Manfred Trummer, Pacific Institute for the Mathematical Sciences, Simon Fraser University
trummer_at_sfu.ca
www.math.sfu.ca/mrt
4. Pacific Institute for the Mathematical Sciences
www.pims.math.ca
5. Simon Fraser University
6. Banff International Research Station
- PIMS-MSRI collaboration (with MITACS)
- 5-day workshops
- 2-day workshops
- Focused Research
- Research in Teams
- www.pims.math.ca/birs
7. Round-off Errors
- In 1991, a Patriot missile battery in Dhahran, Saudi Arabia, failed to intercept an Iraqi missile, resulting in 28 fatalities. The problem was traced back to the accumulation of round-off error in the computer arithmetic.
8. Introduction
- Spectral Differentiation:
- Approximate the function f(x) by a global interpolant
- Differentiate the global interpolant exactly
9. Features of spectral methods
- Exponential (spectral) accuracy: converges faster than any power of 1/N
- Fairly coarse discretizations give good results
- Full instead of sparse matrices
- Tighter stability restrictions
- Less robust, difficulties with irregular domains
10. Types of Spectral Methods
- Spectral-Galerkin: work in transformed space, that is, with the coefficients in the expansion.
- Example: u_t = u_xx
11. Types of Spectral Methods
- Spectral Collocation (Pseudospectral): work in physical space. Choose collocation points x_k, k = 0,...,N, and approximate the function of interest by its values at those collocation points.
- Computations rely on interpolation.
- Issues: aliasing, ease of computation, nonlinearities
12. Interpolation
- Periodic function, equidistant points: FOURIER
- Polynomial interpolation
- Interpolation by rational functions
- Chebyshev points
- Legendre points
- Hermite, Laguerre
13. Differentiation Matrix D
- Discrete data set f_k, k = 0,...,N
- Interpolate between collocation points x_k:
- p(x_k) = f_k
- Differentiate p(x)
- Evaluate p'(x_k) = g_k
- All operations are linear: g = Df
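As a concrete illustration of g = Df (my own sketch, not the authors' code; `cheb_diff_matrix` is a hypothetical name), the following builds the Chebyshev differentiation matrix from the textbook explicit formulas and applies it to f(x) = x²:

```python
import math

def cheb_diff_matrix(N):
    """Chebyshev differentiation matrix on x_k = cos(k*pi/N), k = 0..N,
    using the standard explicit entry formulas."""
    x = [math.cos(k * math.pi / N) for k in range(N + 1)]
    c = [2.0 if k in (0, N) else 1.0 for k in range(N + 1)]
    D = [[0.0] * (N + 1) for _ in range(N + 1)]
    for j in range(N + 1):
        for k in range(N + 1):
            if j != k:
                D[j][k] = (c[j] / c[k]) * (-1.0) ** (j + k) / (x[j] - x[k])
    # analytic diagonal entries
    D[0][0] = (2.0 * N * N + 1.0) / 6.0
    D[N][N] = -(2.0 * N * N + 1.0) / 6.0
    for j in range(1, N):
        D[j][j] = -x[j] / (2.0 * (1.0 - x[j] ** 2))
    return x, D

# g = D f: differentiate f(x) = x^2; since deg f <= N the derivative 2x
# is reproduced up to round-off
N = 8
x, D = cheb_diff_matrix(N)
f = [xi ** 2 for xi in x]
g = [sum(D[j][k] * f[k] for k in range(N + 1)) for j in range(N + 1)]
max_err = max(abs(g[j] - 2.0 * x[j]) for j in range(N + 1))
```

Because spectral differentiation is exact for polynomials of degree ≤ N, `max_err` here measures only round-off.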
14. Software
- Funaro: FORTRAN code, various polynomial spectral methods
- Don-Solomonoff, Don-Costa: PSEUDOPAK, FORTRAN code, more engineering oriented, includes filters, etc.
- Weideman-Reddy: based on differentiation matrices, written in MATLAB (fast MATLAB programming)
15. Polynomial Interpolation
- Lagrange form: although admired for its mathematical beauty and elegance, it is not useful in practice:
- Expensive
- Difficult to update
16. Barycentric formula, version 1
17. Barycentric formula, version 2
- Set-up: O(N²)
- Evaluation: O(N)
- Update (add a point): O(N)
- New f_k values: no extra work!
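The two barycentric forms named above are classical: version 1 is p(x) = ℓ(x) Σ_k w_k f_k/(x − x_k) with ℓ(x) = ∏_j (x − x_j), and version 2 is p(x) = [Σ_k w_k f_k/(x − x_k)] / [Σ_k w_k/(x − x_k)]. A minimal sketch of version-2 evaluation on Chebyshev points of the 2nd kind (helper names are mine, not from the talk):

```python
import math

def bary_eval(x, w, f, t):
    """Evaluate the version-2 barycentric interpolant at t: O(N) per point."""
    num = den = 0.0
    for xk, wk, fk in zip(x, w, f):
        if t == xk:          # t hits a node: return the data value directly
            return fk
        r = wk / (t - xk)
        num += r * fk
        den += r
    return num / den

# Chebyshev points of the 2nd kind with their (rescaled) weights
# w_k = (-1)^k * delta_k, delta_k = 1/2 at the endpoints and 1 inside
N = 20
x = [math.cos(k * math.pi / N) for k in range(N + 1)]
w = [(0.5 if k in (0, N) else 1.0) * (-1.0) ** k for k in range(N + 1)]
f = [math.cos(xk) for xk in x]
val = bary_eval(x, w, f, 0.3)   # interpolate cos(x) at t = 0.3
err = abs(val - math.cos(0.3))
```

Note that swapping in new data values f_k costs nothing extra: the weights are unchanged.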
18. Barycentric formula weights w_k
w_k = 1 / ∏_{j≠k} (x_k − x_j)
19. Barycentric formula weights w_k
- Equidistant points
- Chebyshev points (1st kind)
- Chebyshev points (2nd kind)
20. Barycentric formula weights w_k
- Weights can be multiplied by the same constant
- This function interpolates for any weights! Rational interpolation!
- Relative size of weights indicates ill-conditioning
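A quick numerical check of the scale-invariance claim (my own sketch): compute the raw weights w_k = 1/∏_{j≠k}(x_k − x_j) on Chebyshev points of the 2nd kind and verify that, after dividing out the common constant, they reduce to the simple pattern (−1)^k δ_k:

```python
import math

N = 8
x = [math.cos(k * math.pi / N) for k in range(N + 1)]

# raw barycentric weights from the definition: O(N^2) set-up
w_raw = []
for k in range(N + 1):
    p = 1.0
    for j in range(N + 1):
        if j != k:
            p *= (x[k] - x[j])
    w_raw.append(1.0 / p)

# the common constant drops out of the barycentric formula, so only
# ratios matter; compare against the pattern (-1)^k * delta_k
w_simple = [(0.5 if k in (0, N) else 1.0) * (-1.0) ** k for k in range(N + 1)]
ratio_err = max(abs(w_raw[k] / w_raw[1] - w_simple[k] / w_simple[1])
                for k in range(N + 1))
```

This is why the tabulated weight formulas for equidistant and Chebyshev points can drop any constant factor.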
21. Computation of the Differentiation Matrix
- Entirely based upon interpolation.
22. Barycentric Formula
- Barycentric differentiation (Schneider/Werner): D_jk = (w_k/w_j) / (x_j − x_k) for j ≠ k
23. Chebyshev Differentiation
x_k = cos(kπ/N)
24. Chebyshev Matrix has Numerical Behavioural Problems
- Trefethen-Trummer, 1987
- Rothman, 1991
- Breuer-Everson, 1992
- Don-Solomonoff, 1995/1997
- Bayliss-Class-Matkowsky, 1995
- Tang-Trummer, 1996
- Baltensperger-Berrut, 1996
25. Chebyshev Matrix and Errors
(Figure: the matrix, its absolute errors, and its relative errors.)
26. Round-off error analysis
The computed node x_k carries a relative error of order ε (machine precision), and so does any quantity evaluated from it. The difference x_j − x_k, however, suffers cancellation: its absolute error remains of order ε, so its relative error grows like ε/|x_j − x_k|, and this error enters the entries of D.
27. Round-off Error Analysis
- With good computation we expect an error in D_01 of O(N²ε)
- Most elements in D are O(1)
- Some are of O(N), and a few are O(N²)
- We must be careful to see whether absolute or relative errors enter the computation
28. Remedies
- Preconditioning: add ax + b to f to create a function which is zero at the boundary
- Compute D in higher precision
- Use trigonometric identities
- Use symmetry: the Flipping Trick
- NST: the Negative Sum Trick
29. More ways to compute Df
If we only want Df but not the matrix D (e.g., time stepping, iterative methods), we can compute Df for any f via:
- an FFT-based approach
- the Schneider-Werner formula
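The Schneider-Werner route computes (Df)_j = Σ_{k≠j} (w_k/w_j)(f_k − f_j)/(x_j − x_k) directly from the barycentric weights, never forming D. A minimal sketch (my own, with hypothetical names), differentiating sin on Chebyshev points of the 2nd kind:

```python
import math

def df_schneider_werner(x, w, f):
    """(Df)_j = sum_{k != j} (w_k/w_j) (f_k - f_j)/(x_j - x_k);
    O(N^2) work, no matrix stored."""
    n = len(x)
    g = [0.0] * n
    for j in range(n):
        s = 0.0
        for k in range(n):
            if k != j:
                s += (w[k] / w[j]) * (f[k] - f[j]) / (x[j] - x[k])
        g[j] = s
    return g

# differentiate sin(x) on Chebyshev points of the 2nd kind
N = 24
x = [math.cos(k * math.pi / N) for k in range(N + 1)]
w = [(0.5 if k in (0, N) else 1.0) * (-1.0) ** k for k in range(N + 1)]
f = [math.sin(xk) for xk in x]
g = df_schneider_werner(x, w, f)
err = max(abs(g[k] - math.cos(x[k])) for k in range(N + 1))
```

Note the differences f_k − f_j appear explicitly: the formula is built from finite difference quotients, which matters for its round-off behaviour (slide 44).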
30. Chebyshev Differentiation Matrix
x_k = cos(kπ/N)
Cancellation in the differences x_j − x_k!
31. Trigonometric Identities
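The identity in question is presumably the standard one, cos(jπ/N) − cos(kπ/N) = 2 sin((j+k)π/(2N)) sin((k−j)π/(2N)), which replaces the subtraction of two nearly equal cosines by a product of sines. A small sketch verifying it numerically (my own illustration):

```python
import math

N = 64
direct = {}
via_identity = {}
for j in range(N + 1):
    for k in range(N + 1):
        if j != k:
            # naive difference of cosines: cancellation when j is close to k
            direct[(j, k)] = math.cos(j * math.pi / N) - math.cos(k * math.pi / N)
            # product form: no subtraction of nearly equal quantities
            via_identity[(j, k)] = (2.0 * math.sin((j + k) * math.pi / (2.0 * N))
                                        * math.sin((k - j) * math.pi / (2.0 * N)))

# the two forms agree to round-off
max_rel = max(abs(direct[p] - via_identity[p]) / abs(via_identity[p])
              for p in direct)
```

The product form commits only small relative errors, while the direct form commits an absolute error of order ε that becomes a large relative error for neighbouring nodes.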
32. Flipping Trick
sin(π − x) is not computed as accurately as sin(x).
Use x_{N−k} = −x_k, and flip the upper half of D into the lower half, or the upper-left triangle into the lower-right triangle.
33. NST: Negative Sum Trick
Spectral differentiation is exact for constant functions, so each row of D must sum to zero: set D_jj = −Σ_{k≠j} D_jk.
Arrange the order of summation to sum the smaller elements first; this requires sorting.
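The two ways of filling the diagonal can be compared directly. A minimal sketch (my own, with hypothetical names; summation order is not optimized here): build the Chebyshev matrix with either the analytic diagonal or the negative-sum diagonal, and measure the error in Df for f(x) = eˣ:

```python
import math

def cheb_D(N, negative_sum=False):
    """Chebyshev differentiation matrix; diagonal either from the analytic
    formulas or from the negative sum trick D_jj = -sum_{k != j} D_jk."""
    x = [math.cos(k * math.pi / N) for k in range(N + 1)]
    c = [2.0 if k in (0, N) else 1.0 for k in range(N + 1)]
    D = [[0.0] * (N + 1) for _ in range(N + 1)]
    for j in range(N + 1):
        for k in range(N + 1):
            if j != k:
                D[j][k] = (c[j] / c[k]) * (-1.0) ** (j + k) / (x[j] - x[k])
    if negative_sum:
        for j in range(N + 1):
            D[j][j] = -sum(D[j][k] for k in range(N + 1) if k != j)
    else:
        D[0][0] = (2.0 * N * N + 1.0) / 6.0
        D[N][N] = -(2.0 * N * N + 1.0) / 6.0
        for j in range(1, N):
            D[j][j] = -x[j] / (2.0 * (1.0 - x[j] ** 2))
    return x, D

def max_err(N, negative_sum):
    x, D = cheb_D(N, negative_sum)
    f = [math.exp(xk) for xk in x]
    g = [sum(D[j][k] * f[k] for k in range(N + 1)) for j in range(N + 1)]
    return max(abs(g[j] - math.exp(x[j])) for j in range(N + 1))

err_plain = max_err(32, negative_sum=False)
err_nst = max_err(32, negative_sum=True)
```

The gap between the two variants widens as N grows; summing the smaller off-diagonal elements first squeezes out a further improvement.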
34. Numerical Example
35. Numerical Example
36. Observations
- The original formula for D is very inaccurate
- Trig/Flip + NST (Weideman-Reddy) provides a good improvement
- The FFT is not as good as expected, in particular when N is not a power of 2
- The NST applied to the original D gives the best matrix
- There are even more accurate ways to compute Df
37. Machine Dependency
- Results can vary substantially from machine to machine, and may depend on software.
- Intel/PC: FFT performs better
- SGI
- SUN
- DEC Alpha
38. Understanding the Negative Sum Trick
39. Understanding the Negative Sum Trick
(Figure: error in f vs. error in D.)
40. Understanding the NST (2)
41. Understanding the NST (3)
The inaccurate matrix elements are multiplied by very small numbers, leading to O(N²ε) errors -- optimal accuracy.
42. Understanding the Negative Sum Trick
The NST is an (inaccurate) implementation of the Schneider-Werner formula.
(Side by side: Schneider-Werner formula vs. Negative Sum Trick.)
43. Understanding the Negative Sum Trick
- Why do we obtain superior results when applying the NST to the original (inaccurate) formula?
- Accuracy of finite difference quotients
44. Finite Difference Quotients
- For monomials, a cancellation of the cancellation errors takes place, e.g., (x_j² − x_k²)/(x_j − x_k) = x_j + x_k.
- Typically f_j − f_k is less accurate than x_j − x_k, so computing x_j − x_k more accurately does not help!
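A tiny check of this behaviour (my own illustration): for f(x) = x², the quotient (f_j − f_k)/(x_j − x_k) equals x_j + x_k in exact arithmetic, and the floating-point quotient stays close to it even when the nodes are nearly coincident and both numerator and denominator suffer massive cancellation:

```python
# f(x) = x^2: (f_j - f_k)/(x_j - x_k) = x_j + x_k exactly in real arithmetic
x_k = 1.0 / 3.0
x_j = x_k + 1e-8                # nodes only ~1e-8 apart

quotient = (x_j ** 2 - x_k ** 2) / (x_j - x_k)
exact = x_j + x_k
rel_err = abs(quotient - exact) / exact
```

Despite roughly eight decimal digits cancelling in both differences, the quotient remains accurate to many digits: the rounding errors committed in the numerator and denominator are strongly correlated.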
45. Finite Difference Quotients
46. Finite Difference Quotients
47. Fast Schneider-Werner
- Cost of Df is 2N²; SW costs 3N²
- Can implement Df with a fast SW method: the size of each corner block is √N, for a cost of 2N² + O(N)
48. Polynomial Differentiation
- For example, Legendre, Hermite, Laguerre
- Fewer tricks available, but the Negative Sum Trick still provides improvements
- Ordering the summation may become even more important
49. Higher Order Derivatives
- Best not to compute D^(2) as D², etc.
- Formulas by Welfert (implemented in Weideman-Reddy)
- The Negative Sum Trick again shows improvements
- Higher order differentiation matrices are badly conditioned, so gaining a little more accuracy is more important than for first order
50. Using D to solve problems
- In many applications the first and last rows/columns of D are removed because of boundary conditions
- f → Df appears to be most sensitive to how D is computed (the forward problem: differentiation)
- Have observed improvements in solving BVPs
51. Solving a Singular BVP
52. Results
53. Close
- Demystified some of the less intuitive behaviour of differentiation matrices
- Get more accuracy for the same cost
- Study the effects of using the various differentiation matrices in applications
- The forward problem is more sensitive than the inverse problem
- Df: time-stepping, iterative methods
54. To think about
- Is double precision enough as we become able to solve bigger problems?
- The irony of spectral methods: exponential convergence, yet round-off error is the limiting factor
- Accuracy requirements limit us to N of moderate size -- the FFT is not so much faster than the matrix-based approach
55. And now for the twist.