Lecture 9 System of Equations - PowerPoint PPT Presentation

1
Lecture 9 System of Equations
  • February 13, 2001
  • CVEN 302

2
Lecture Goals
  • Gauss-Jordan
  • Tridiagonal Solver
  • Iterative Techniques
  • Jacobi method
  • Gauss-Seidel method
  • Relaxation technique

3
Scaling
  • Scaling is the operation of adjusting the
    coefficients of a set of equations so that they
    are all of the same magnitude.
  • A set of equations may involve relationships
    between quantities measured in widely different
    units (N vs. kN, sec vs. hrs, etc.). This may
    result in some equations having very large
    numbers and others very small ones. Pivoting may
    then put numbers on the diagonal that are not
    large in comparison to other rows, creating the
    very round-off errors that pivoting was supposed
    to avoid.

4
Scaling
  • What happens with the following example?
  • 3X1 + 2X2 + 100X3 = 105
  • -X1 + 3X2 + 100X3 = 102
  • X1 + 2X2 - 1X3 = 2

5
Scaling
  • The best way to handle the problem is to
    normalize the equations.
  • 0.03X1 + 0.02X2 + 1.00X3 = 1.05
  • -0.01X1 + 0.03X2 + 1.00X3 = 1.02
  • 0.50X1 + 1.00X2 - 0.50X3 = 1.00
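The normalization above can be sketched in a few lines: each row of the augmented system is divided by its largest-magnitude coefficient. (A minimal Python sketch, not from the lecture; the slides do not give code for this step.)

```python
# Normalize each row of the augmented system [coefficients..., rhs] by the
# largest-magnitude coefficient in that row, so all rows are of comparable size.
def scale_rows(aug):
    scaled = []
    for row in aug:
        s = max(abs(c) for c in row[:-1])   # largest coefficient magnitude
        scaled.append([c / s for c in row])
    return scaled

# The system from the previous slide.
aug = [[3.0, 2.0, 100.0, 105.0],
       [-1.0, 3.0, 100.0, 102.0],
       [1.0, 2.0, -1.0, 2.0]]
for row in scale_rows(aug):
    print(row)
```

Row 1 and row 2 are divided by 100, row 3 by 2, reproducing the normalized coefficients shown above.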

6
Gauss-Jordan Method
  • The Gauss-Jordan Method is similar to Gaussian
    Elimination.
  • The method requires almost 50% more operations.

7
Gauss-Jordan Method
The Gauss-Jordan method changes the matrix into
the identity matrix.
8
Gauss-Jordan Method
  • There is only one phase in the solving technique
  • Elimination --- use row operations to convert the
    matrix into an identity matrix.
  • The new b vector is then the solution for the x
    values.

9
Gauss-Jordan Algorithm
  • Ax = b
  • Augment the n x n coefficient matrix with the
    vector of right-hand sides to form an n x (n+1)
    matrix.
  • Interchange rows if necessary so that a11 has the
    largest magnitude of any coefficient in the first
    column.
  • Create zeros in the 2nd through nth rows of the
    first column by subtracting ai1 / a11 times the
    first row from the ith row.

10
Gauss-Jordan Algorithm (continued)
  • Repeat (2) and (3) for the second through the nth
    rows, putting the largest-magnitude coefficient on
    the diagonal by interchanging rows (consider only
    rows j to n), and then subtract aij / ajj times
    the jth row from the ith row so as to create
    zeros in all positions of the jth column; the
    diagonal becomes all ones.
  • The solution can then be read off directly:
    xi = ai,n+1
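The steps above can be sketched as a short routine that reduces the augmented matrix [A | b] to [I | x]. (A minimal Python sketch of the algorithm on these slides, demonstrated on Example 1 from the lecture; the course's own demos are not shown in the transcript.)

```python
# Gauss-Jordan elimination with partial pivoting: reduce [A | b] to [I | x].
def gauss_jordan(A, b):
    n = len(A)
    # Augment the n x n matrix with the right-hand side: n x (n+1).
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for j in range(n):
        # Partial pivoting: put the largest-magnitude coefficient of
        # column j (rows j..n) on the diagonal.
        p = max(range(j, n), key=lambda i: abs(M[i][j]))
        M[j], M[p] = M[p], M[j]
        # Scale the pivot row so the diagonal entry becomes 1.
        piv = M[j][j]
        M[j] = [v / piv for v in M[j]]
        # Create zeros in every other position of column j.
        for i in range(n):
            if i != j:
                f = M[i][j]
                M[i] = [vi - f * vj for vi, vj in zip(M[i], M[j])]
    # The last column now holds the solution: xi = ai,n+1.
    return [M[i][n] for i in range(n)]

# Example 1 from the slides: X1 + 3X2 = 5, 2X1 + 4X2 = 6
print(gauss_jordan([[1.0, 3.0], [2.0, 4.0]], [5.0, 6.0]))
```

Unlike Gaussian Elimination, there is no back-substitution phase; the extra row operations above the diagonal account for the additional operation count.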

11
Example 1
  • X1 + 3X2 = 5
  • 2X1 + 4X2 = 6

12
Example 2
  • -3X1 + 2X2 - X3 = -1
  • 6X1 - 6X2 + 7X3 = -7
  • 3X1 - 4X2 + 4X3 = -6

13
Band Solver
  • Large matrices tend to be banded, which means
    that the matrix has a band of non-zero
    coefficients and zeroes on the outside of the
    matrix.
  • The simplest of the methods is the Thomas Method,
    which is used for a tridiagonal matrix.

14
Advantages of Band Solvers
  • The method reduces the number of operations and
    stores the matrix in a smaller amount of memory.
  • The band solver is faster and is useful for
    large-scale matrices.

15
Thomas Method
  • The method takes advantage of the bandedness of
    the matrix.
  • The technique uses a two-phase process.
  • The first phase obtains the modified coefficients
    from a forward sweep.
  • The second phase solves for the x values by back
    substitution.

16
Thomas Method
  • The first phase starts with the first row of
    coefficients and scales the a and r coefficients
    as it sweeps forward.
  • The second phase solves for the x values using
    the updated a and r coefficients.

17
Thomas Method
  • The program for the method is given as
    demoThomas(a,d,b,r)
  • The algorithm is from the textbook, where a, d,
    b, and r are vectors taken from the bands of the
    matrix.
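The two phases can be sketched as follows. (A minimal Python sketch of the Thomas algorithm, not the textbook's demoThomas code; the vector convention — a the sub-diagonal, d the diagonal, b the super-diagonal, r the right-hand side — is an assumption, since the slides do not spell it out.)

```python
# Thomas algorithm for a tridiagonal system.
# a: sub-diagonal (a[0] unused), d: diagonal, b: super-diagonal (length n-1),
# r: right-hand side.  All assumptions about the layout; see lead-in.
def thomas(a, d, b, r):
    n = len(d)
    d, r = d[:], r[:]                  # work on copies
    # Phase 1: forward sweep eliminates the sub-diagonal,
    # scaling the d and r coefficients.
    for i in range(1, n):
        m = a[i] / d[i - 1]
        d[i] -= m * b[i - 1]
        r[i] -= m * r[i - 1]
    # Phase 2: back substitution solves for the x values.
    x = [0.0] * n
    x[n - 1] = r[n - 1] / d[n - 1]
    for i in range(n - 2, -1, -1):
        x[i] = (r[i] - b[i] * x[i + 1]) / d[i]
    return x

# Tridiagonal system [[2,1,0],[1,2,1],[0,1,2]] x = [3,4,3]
print(thomas([0.0, 1.0, 1.0], [2.0, 2.0, 2.0], [1.0, 1.0], [3.0, 4.0, 3.0]))
```

Only the three bands are stored, so the work and memory are O(n) rather than the O(n³) work and O(n²) storage of a full solver.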

18
Band Solver
  • The technique can be applied to higher order
    banded matrices.
  • The quintdiagonal matrix has a similar algorithm.

19
Quintdiagonal Method
  • The first phase starts with the first row of
    coefficients and scales the a and r coefficients
    as it sweeps forward.
  • The second phase solves for the x values using
    the updated a and r coefficients.

20
Correction: Quintdiagonal Method
  • There is a correction to the program: the bi term
    changes during the sweep, which can cause
    problems.
  • The b term must be modified before being used in
    the updates of the a, c, and r terms.

21
Iterative Techniques
  • Gaussian Elimination is a direct method for
    solving simultaneous linear algebraic equations,
    but problems can arise from round-off errors and
    zeros on the diagonal.
  • One means of obtaining an approximate solution to
    the equations is to start from an educated guess
    and iterate.

22
Iterative Methods
  • We will look at three iterative methods
  • Jacobi Method
  • Gauss-Seidel Method
  • Successive Over-Relaxation (SOR)

23
Convergence Restrictions
  • There are two conditions for the iterative method
    to converge.
  • It is necessary that one coefficient in each
    equation dominates the others.
  • The sufficient condition is that the diagonal is
    dominant.
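The sufficient condition can be tested directly: each diagonal entry's magnitude must be at least the sum of the other magnitudes in its row, strictly greater for at least one row. (A small Python check, not from the slides.)

```python
# Test the sufficient convergence condition: diagonal dominance.
def is_diagonally_dominant(A):
    strict = False
    for i, row in enumerate(A):
        off = sum(abs(v) for j, v in enumerate(row) if j != i)
        if abs(row[i]) < off:       # diagonal fails to dominate this row
            return False
        if abs(row[i]) > off:       # strictly dominant in at least one row
            strict = True
    return strict

print(is_diagonally_dominant([[4.0, 1.0], [1.0, 3.0]]))   # dominant
print(is_diagonally_dominant([[1.0, 3.0], [2.0, 4.0]]))   # not dominant
```

Note that scaling or reordering the equations, as discussed earlier, can sometimes turn a non-dominant system into a dominant one.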

24
Jacobi Iteration
  • If the diagonal is dominant, the matrix can be
    rewritten in the following form
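The equation image on this slide did not survive the transcript; for Ax = b with a dominant diagonal, solving each equation for its own diagonal unknown gives the standard Jacobi form (a reconstruction, not the original slide):

```latex
x_i^{(k+1)} = \frac{1}{a_{ii}} \Bigl( b_i - \sum_{j \neq i} a_{ij}\, x_j^{(k)} \Bigr),
\qquad i = 1, \dots, n
```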

25
Jacobi Iteration
  • The technique can be rewritten in a shorthand
    fashion, where D is the diagonal, A is the
    matrix without the diagonal and c is the
    right-hand side of the equations.
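The shorthand equation is also missing from the transcript; in the slide's notation (D the diagonal, A the matrix without the diagonal, c the right-hand side), the standard matrix form of the update is (a reconstruction):

```latex
\mathbf{x}^{(k+1)} = D^{-1} \left( \mathbf{c} - A\, \mathbf{x}^{(k)} \right)
```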

26
Jacobi Iteration
  • The technique solves for the entire set of x
    values at each iteration.
  • The method does not update the values until an
    iteration is completed.
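The point above — no value is updated until the sweep is finished — can be seen in a short sketch. (A minimal Python sketch of Jacobi iteration, with a hypothetical diagonally dominant example; not from the lecture.)

```python
# Jacobi iteration: recompute every x from the PREVIOUS iteration's values,
# and replace the whole vector only after the sweep is complete.
def jacobi(A, b, x0, iterations):
    n = len(A)
    x = x0[:]
    for _ in range(iterations):
        x_new = [0.0] * n
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x_new[i] = (b[i] - s) / A[i][i]
        x = x_new          # update only after the full iteration
    return x

# Diagonally dominant system: 4X1 + X2 = 6, X1 + 3X2 = 7
print(jacobi([[4.0, 1.0], [1.0, 3.0]], [6.0, 7.0], [0.0, 0.0], 50))
```

Gauss-Seidel differs only in that it writes each new value into x immediately, so later equations in the same sweep already use it.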

27
Summary
  • Scaling of the problem will help with
    convergence.
  • The Gauss-Jordan method is more computationally
    intensive and does not improve the round-off
    errors. However, it is useful for finding matrix
    inverses.
  • Banded matrix solvers are faster and use less
    memory.
  • Two iterative techniques were presented.