Revision - PowerPoint PPT Presentation
Provided by: laiwa
Slides: 17

Transcript and Presenter's Notes
1
Revision
2
Part I Errors
  • Floating-point number representations
  • Round-off and chopping errors
  • Overflow and underflow
  • Absolute vs. Relative Errors
  • Error propagation through arithmetic operations
  • Subtractive Cancellation

3
Q1
  • Which of these decimal numbers cannot be
    represented exactly using the IEEE 754 standard
    for double precision floating point numbers (1
    sign bit, 11 bits exponent and 52 bits mantissa)?
  • (A) 1234567890
  • (B) 0.1
  • (C) 0.1 x 10^2
  • (D) The value of 1/1024
  • (E) -123.25
  • Answer: (B) 0.1
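The answer can be checked empirically, since Python's float is an IEEE 754 double. A minimal sketch (the helper name is my own): a decimal value is exactly representable if converting it to a float and back to an exact fraction recovers the same rational number.

```python
from fractions import Fraction

def exactly_representable(value):
    """True if `value` (a Fraction) is stored exactly as an IEEE 754 double."""
    return Fraction(float(value)) == value

print(exactly_representable(Fraction(1234567890)))  # True: integer below 2**53
print(exactly_representable(Fraction(1, 10)))       # False: 0.1 repeats in binary
print(exactly_representable(Fraction(10)))          # True: 0.1 x 10^2 = 10
print(exactly_representable(Fraction(1, 1024)))     # True: exact power of two
print(exactly_representable(Fraction(-493, 4)))     # True: -123.25 = -123 - 1/4
```

Only 0.1 fails: its binary expansion 0.000110011... repeats forever, so it cannot fit in 52 mantissa bits.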

4
Q2
  • What are the general approaches we can take to
    minimize errors in the calculated results?
  • Avoid adding a huge number to a small number
  • Avoid subtracting numbers that are close
  • Minimize the number of arithmetic operations
    involved
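The first two pitfalls can be demonstrated in a few lines (variable names are illustrative): a double carries only about 16 significant decimal digits, so a small addend is absorbed by a huge one, and subtracting nearly equal numbers cancels the leading digits.

```python
import math

# Adding a huge number to a small one: the small addend is lost.
big, small = 1.0e16, 1.0
print((big + small) - big)  # 0.0, not 1.0

# Subtractive cancellation: for tiny x, cos(x) rounds to 1.0,
# so 1 - cos(x) computes as exactly zero.
x = 1e-8
naive = 1.0 - math.cos(x)              # catastrophic cancellation
stable = 2.0 * math.sin(x / 2.0) ** 2  # algebraically identical, but stable
print(naive, stable)                   # 0.0 vs. ~5e-17
```

The rewritten form avoids the subtraction entirely, which is the usual cure for cancellation.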

5
Part I Truncation Errors
  • Truncation errors
  • Taylor's Series Approximation
  • Derive Taylor's series
  • Understand the characteristics of Taylor's Series
    approximation
  • Estimate truncation errors using the remainder
    term
  • Estimating truncation errors by
  • Alternating Series, Geometric Series, Integration
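As an illustrative sketch (the function name is assumed), the alternating-series estimate says the truncation error is bounded by the magnitude of the first omitted term, e.g. for the Maclaurin series of cos(x):

```python
import math

def cos_series(x, n_terms):
    """Partial sum of the Maclaurin series of cos(x), plus the
    alternating-series error bound (first omitted term)."""
    total = 0.0
    for k in range(n_terms):
        total += (-1) ** k * x ** (2 * k) / math.factorial(2 * k)
    bound = x ** (2 * n_terms) / math.factorial(2 * n_terms)
    return total, bound

approx, bound = cos_series(0.5, 4)
print(approx, bound, abs(approx - math.cos(0.5)))
```

With four terms at x = 0.5 the bound is already below 10^-7, and the true error is smaller still.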

6
Q3
  • Let g1(x) be the Taylor series approximation of
    f(x) at p/4.
  • Let g2(x) be the Taylor series approximation of
    f(x) at 0.
  • Suppose we want to approximate f(0.5) with error
    less than 10-6. Which Taylor series approximation
    is more efficient (involves less arithmetic
    operations)?
  • g1(x) is better than g2(x) for all f(x)
  • g2(x) is better than g1(x) for all f(x)
  • Depends on what f(x) is.
  • Answer C g1(x) is better than g2(x) for most
    f(x). A counter example would be f(x) sin(x)
    or f(x) cos(x) in which the alternative terms
    are zeroes.

7
Q4
  • Estimate the truncation errors for both series if
    we only include the first 10 terms of the series.
  • Answer: For S1, infinity; for S2, 1/11.

8
Part II Roots Finding
  • Closed or Bracketing methods
  • How to select initial interval?
  • How to select the sub-interval for subsequent
    iterations?
  • Bisection vs. False-Position methods in terms
    of performance
  • Closed vs. Open methods
  • Performance
  • Convergence
  • Multiple roots
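A minimal bisection sketch (illustrative names) showing both the bracketing requirement on the initial interval and the sub-interval selection rule:

```python
def bisection(f, a, b, tol=1e-10):
    """Bisection: halve the bracketing interval, keeping the
    half whose endpoints still change sign."""
    fa, fb = f(a), f(b)
    assert fa * fb < 0, "initial interval must bracket a root"
    while (b - a) / 2 > tol:
        m = (a + b) / 2
        fm = f(m)
        if fa * fm <= 0:   # root lies in [a, m]
            b = m
        else:              # root lies in [m, b]
            a, fa = m, fm
    return (a + b) / 2

# e.g. the positive root of x**2 - 2
print(bisection(lambda x: x * x - 2, 1.0, 2.0))  # ~1.41421356
```

Each iteration halves the interval regardless of f, which is why bisection converges reliably but only linearly.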

9
Part II Roots Finding
  • Fixed point iteration
  • Convergence Analysis
  • Newton Raphson method
  • Deriving the updating formula
  • Selecting initial point
  • Pitfalls
  • Convergence Rate (Single/multiple roots)
  • Secant vs. Newton Raphson
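The Newton-Raphson updating formula x_{k+1} = x_k - f(x_k)/f'(x_k) can be sketched as follows (names illustrative; the secant method replaces f' with a finite-difference estimate from the last two iterates):

```python
import math

def newton_raphson(f, dfdx, x0, tol=1e-12, max_iter=50):
    """Newton-Raphson: follow the tangent line to its x-intercept."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / dfdx(x)   # fails if f'(x) is zero: a known pitfall
        x -= step
        if abs(step) < tol:
            break
    return x

# root of cos(x) - x, starting near x0 = 1
root = newton_raphson(lambda x: math.cos(x) - x,
                      lambda x: -math.sin(x) - 1.0, 1.0)
print(root)  # ~0.7390851332
```

Near a simple root the convergence is quadratic; at a multiple root it degrades to linear, which motivates the modified Newton's methods below.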

10
Part II Roots Finding
  • Convergence Rate
  • Definition
  • Estimation
  • e.g. Suppose the approximated errors in each
    iteration are
  • 0.1, -0.05, 0.001, 10^-4, 10^-8, 10^-16, 10^-32,
    0, 0, 0, ...
  • What is the convergence rate of the corresponding
    method?
  • Modified Newton's methods for multiple roots
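For errors behaving like |e_{k+1}| ≈ C|e_k|^p, the order p can be estimated from three successive errors as p ≈ ln(e_{k+1}/e_k) / ln(e_k/e_{k-1}). A sketch with an assumed helper name, applied to the tail of the sequence above:

```python
import math

def order_estimate(e_prev, e_cur, e_next):
    """Estimate the order of convergence p from three successive
    error magnitudes, assuming |e_{k+1}| ~ C * |e_k|**p."""
    return math.log(e_next / e_cur) / math.log(e_cur / e_prev)

errors = [1e-4, 1e-8, 1e-16, 1e-32]
for trio in zip(errors, errors[1:], errors[2:]):
    print(order_estimate(*trio))  # 2.0 each time: quadratic convergence
```

The exponents double each step (-4, -8, -16, -32), the signature of quadratic convergence.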

11
Part III Systems of Equations
  • Gauss Elimination
  • Forward Elimination and its complexity
  • Back substitution and its complexity
  • Effect of pivoting and scaling
  • LU Decomposition Algorithm
  • LU Decomposition vs. Gauss Elimination
  • Similarity
  • Advantages (Disadvantages?)
  • Complexity
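A Doolittle-style LU decomposition sketch (no pivoting, illustrative names): it performs the same row operations as forward elimination but records each elimination factor in L, which is the similarity between the two methods.

```python
def lu_decompose(A):
    """Doolittle LU decomposition without pivoting (assumes no zero
    pivots): returns L (unit diagonal) and U with A = L @ U."""
    n = len(A)
    L = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    U = [row[:] for row in A]
    for k in range(n - 1):
        for i in range(k + 1, n):
            L[i][k] = U[i][k] / U[k][k]   # elimination factor, stored
            for j in range(k, n):
                U[i][j] -= L[i][k] * U[k][j]
    return L, U

L, U = lu_decompose([[4.0, 3.0], [6.0, 3.0]])
print(L)  # [[1.0, 0.0], [1.5, 1.0]]
print(U)  # [[4.0, 3.0], [0.0, -1.5]]
```

The O(n^3) factorization is done once; each new right-hand side then costs only two O(n^2) triangular solves, the main advantage over re-running Gauss elimination.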

12
Part III Systems of Equations
  • Error Analysis
  • Ill-conditioned systems
  • What is an ill-conditioned function?
  • Iterative Methods (Gauss-Seidel, Jacobi)
  • What are they good for?
  • Convergence criteria
  • Special forms of matrices (Optional)
  • Tri-diagonal, symmetric, etc.
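A Gauss-Seidel sketch (illustrative names; assumes nonzero diagonal entries). Unlike Jacobi, it uses each updated component immediately within the same sweep; convergence is guaranteed e.g. for strictly diagonally dominant systems.

```python
def gauss_seidel(A, b, x0=None, iters=50):
    """Gauss-Seidel: solve the i-th equation for x_i using the
    newest available values of the other components."""
    n = len(A)
    x = list(x0) if x0 else [0.0] * n
    for _ in range(iters):
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (b[i] - s) / A[i][i]
    return x

# Diagonally dominant system with exact solution (1, 2)
print(gauss_seidel([[4.0, 1.0], [2.0, 5.0]], [6.0, 12.0]))
```

Iterative methods like this shine on large sparse systems, where a full factorization would be too costly.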

13
Part IV Optimization
  • Classification of Optimization Problems
  • Single or multiple variables
  • Linear or non-linear
  • Constrained or unconstrained
  • 1-D unconstrained optimization
  • Golden-Section Search
  • What is so special about this method?
  • Quadratic Interpolation
  • How fast does it converge?
  • Newton's Method
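What is special about golden-section search is that each iteration reuses one of the two interior function evaluations, so shrinking the interval by the golden ratio costs only one new evaluation. A sketch with illustrative names:

```python
import math

GOLDEN = (math.sqrt(5) - 1) / 2  # ~0.618

def golden_section_min(f, a, b, tol=1e-8):
    """Golden-section search for a minimum of a unimodal f on [a, b]."""
    c = b - GOLDEN * (b - a)
    d = a + GOLDEN * (b - a)
    fc, fd = f(c), f(d)
    while b - a > tol:
        if fc < fd:                  # minimum lies in [a, d]
            b, d, fd = d, c, fc      # old c becomes the new d: reused!
            c = b - GOLDEN * (b - a)
            fc = f(c)
        else:                        # minimum lies in [c, b]
            a, c, fc = c, d, fd      # old d becomes the new c: reused!
            d = a + GOLDEN * (b - a)
            fd = f(d)
    return (a + b) / 2

# minimum of (x - 2)**2 on [0, 5]
print(golden_section_min(lambda x: (x - 2) ** 2, 0.0, 5.0))  # ~2.0
```

Like bisection for roots, it is robust but only linearly convergent; quadratic interpolation and Newton's method trade robustness for speed.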

14
Part IV Optimization
  • Multi-Dimensional Unconstrained Optimization
  • Non-gradient or direct methods
  • Gradient methods
  • What is the gradient?
  • What is the Hessian matrix?
  • How to check if a point is a local maximum,
    minimum, or saddle point?
  • Linear Programming
  • Simplex Method
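For two variables, the Hessian check above reduces to the familiar second-derivative test. A sketch with assumed names, taking the second partials at a stationary point:

```python
def classify(fxx, fxy, fyy):
    """Second-derivative test at a stationary point of f(x, y):
    det(H) > 0 and fxx > 0 -> local minimum,
    det(H) > 0 and fxx < 0 -> local maximum,
    det(H) < 0             -> saddle point."""
    det = fxx * fyy - fxy * fxy  # determinant of the 2x2 Hessian
    if det < 0:
        return "saddle point"
    if det > 0:
        return "local minimum" if fxx > 0 else "local maximum"
    return "inconclusive"

# f(x, y) = x**2 - y**2 at (0, 0): H = [[2, 0], [0, -2]]
print(classify(2.0, 0.0, -2.0))  # saddle point
# f(x, y) = x**2 + y**2 at (0, 0): H = [[2, 0], [0, 2]]
print(classify(2.0, 0.0, 2.0))   # local minimum
```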

15
Part V Curve Fitting
  • Linear Least-Square Regression
  • Why is it called "Linear"?
  • When to use Regression and when to use
    interpolation?
  • How to construct polynomials?
  • Newton form
  • Lagrange form
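A least-squares straight-line fit via the normal equations, as a sketch (the function name is my own). "Linear" refers to linearity in the coefficients a0, a1, not in x:

```python
def linear_regression(xs, ys):
    """Fit y = a0 + a1*x by least squares using the normal equations."""
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    a1 = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    a0 = (sy - a1 * sx) / n
    return a0, a1

# Points that lie exactly on y = 1 + 2x are recovered exactly
print(linear_regression([0.0, 1.0, 2.0, 3.0], [1.0, 3.0, 5.0, 7.0]))
```

Regression passes a smooth trend through noisy data; interpolation (Newton or Lagrange form) forces the curve through every point, which is the key criterion for choosing between them.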

16
Part V Curve Fitting
  • What is spline interpolation?
  • Spline interpolation vs. polynomial interpolation
  • Linear vs. Quadratic vs. Cubic Splines
  • What are the conditions used to determine the
    spline functions?
  • Does the order of the data points matter in
  • Regression
  • Polynomial interpolation
  • Spline interpolation
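The simplest case, a linear spline, can be sketched as follows (names illustrative). Note the data points must be sorted by x before evaluation, which bears on the question of whether point order matters:

```python
import bisect

def linear_spline(xs, ys, x):
    """Piecewise-linear spline: connect neighbouring data points
    with straight segments. xs must be sorted ascending."""
    i = bisect.bisect_right(xs, x) - 1
    i = max(0, min(i, len(xs) - 2))  # clamp to a valid segment
    t = (x - xs[i]) / (xs[i + 1] - xs[i])
    return ys[i] + t * (ys[i + 1] - ys[i])

xs, ys = [0.0, 1.0, 2.0], [0.0, 1.0, 0.0]
print(linear_spline(xs, ys, 0.5))  # 0.5
print(linear_spline(xs, ys, 1.5))  # 0.5
```

Quadratic and cubic splines add continuity conditions on the first (and, for cubics, second) derivatives at the interior knots; those conditions are what determine the spline coefficients.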