1
Lecture on Numerical Analysis
Dr.-Ing. Michael Dumbser
01 / 12 / 2008
2
Linear Algebra
A standard task in linear algebra is finding the
solution of a system of linear equations of the
form A X = B.
The solution to this task is given by the
so-called Gauß algorithm. Define a new matrix C
as the augmented matrix C = (A | B).
Now the matrix C is modified by a sequence of
operations on its rows to transform its left part
into the unit matrix. The admissible
row operations are
(1) addition/subtraction of multiples of rows,
(2) multiplication/division of a row by a constant factor,
(3) exchange of rows.
When the left part of the matrix C has been
transformed into the unit matrix, the right part
contains the solution X of the equation system.
3
The Gauß-Algorithm I
In detail, the Gauß algorithm for the N x N
matrix A and the N x M matrix B proceeds as
follows.
Part 1 is a forward loop, which transforms the
matrix A into a normalized upper triangular form.
Loop over all rows; the loop index i runs from 1 to N:
(1) Check the coefficient C(i,i) on the diagonal of row i.
(2) IF it is zero, look for the next row j with row index
j > i which has a nonzero element in column i.
IF there is no such row, then the matrix is singular: EXIT.
ELSE exchange rows i and j in the matrix C.
(3) Divide the whole row i by C(i,i).
Its first non-zero entry is now equal to 1.
(4) Subtract a suitable multiple of row i from
all rows j > i in order to eliminate all entries
in column i of those rows j.
end loop
4
The Gauß-Algorithm II
The Gauß algorithm continues with Part 2, a very
simple backward loop, which transforms the matrix
A into the unit matrix.
Loop over all rows; the loop index i runs from N to 1:
Subtract a suitable multiple of row i
from all rows j < i in order to eliminate all
entries in column i of those rows j.
end loop
All operations have been performed on the
augmented matrix C. The type of row operations is
completely determined by the left part of matrix
C, i.e. by the original matrix A. The right part,
i.e. matrix B, is transformed with the same
operations in a passive way. At the end of the
Gauß algorithm, in the right part of the matrix C,
i.e. where B was located initially, we find the
solution X of the equation system. To compute
the inverse of a matrix using the Gauß algorithm,
the right-hand side B must simply be set to
the N x N unit matrix, since A X = I implies X = A⁻¹.
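The two loops above can be sketched compactly. The course uses MATLAB, but a runnable illustration is given here in Python/NumPy; the function name gauss and the in-place row operations are this sketch's own choices:

```python
import numpy as np

def gauss(A, B):
    """Gauss-Jordan elimination on the augmented matrix C = (A | B).

    Returns the N x M matrix X solving A @ X = B, or raises if A is singular.
    """
    A = np.asarray(A, dtype=float)
    B = np.asarray(B, dtype=float)
    n = A.shape[0]
    C = np.hstack([A, B.reshape(n, -1)])     # augmented matrix C = (A | B)
    # Part 1: forward loop -> normalized upper triangular form
    for i in range(n):
        if C[i, i] == 0.0:                   # step (2): diagonal entry is zero
            rows = np.nonzero(C[i + 1:, i])[0]
            if rows.size == 0:
                raise ValueError("matrix is singular")
            j = i + 1 + rows[0]
            C[[i, j]] = C[[j, i]]            # exchange rows i and j
        C[i] /= C[i, i]                      # step (3): normalize row i
        for j in range(i + 1, n):            # step (4): eliminate below the diagonal
            C[j] -= C[j, i] * C[i]
    # Part 2: backward loop -> unit matrix on the left
    for i in range(n - 1, -1, -1):
        for j in range(i):
            C[j] -= C[j, i] * C[i]
    return C[:, n:]                          # the right part now holds X
```

Passing B = eye(N) as right-hand side returns the inverse of A, exactly as described above.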
5
Exercise 1
Write a MATLAB function gauss.m that accepts a
general N x N matrix A and a general N x M matrix
B as input arguments and that returns the N x M
matrix X that solves the matrix equation (1).
Use the Gauß algorithm to solve the equation
system (2). Use the Gauß algorithm to compute
the inverse of a random 5 x 5 matrix A
generated with the MATLAB command rand as
follows: A = rand(5). Hint:
to generate a 5 x 5 unit matrix, use the following
MATLAB command: B = eye(5).
6
A Special Case of Gauß Elimination: The
Thomas Algorithm
In the case where the matrix A is tridiagonal,
i.e. with nonzero entries only on the lower, main
and upper diagonals, there is a very fast and
efficient special case of the Gauß algorithm,
called the Thomas algorithm. Since it is a
variation of the Gauß algorithm, the Thomas
algorithm is a direct method to solve general
linear tridiagonal systems. Like the original
Gauß algorithm, it proceeds in two stages, one
forward elimination and one back-substitution.
7
The Thomas Algorithm
Part I: Forward elimination.
Part II: Back substitution.
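The two stages can be sketched as follows, here in Python/NumPy rather than MATLAB; the convention that a[0] and c[-1] are unused padding (so all four vectors have equal length, as in the exercise below) is an assumption of this sketch:

```python
import numpy as np

def thomas(a, b, c, d):
    """Thomas algorithm for a tridiagonal system A x = d.

    a: lower diagonal (a[0] unused), b: main diagonal,
    c: upper diagonal (c[-1] unused), d: right-hand side.
    """
    a, b, c, d = (np.array(v, dtype=float) for v in (a, b, c, d))
    n = len(d)
    # Part I: forward elimination -- remove the lower diagonal
    for i in range(1, n):
        w = a[i] / b[i - 1]
        b[i] -= w * c[i - 1]
        d[i] -= w * d[i - 1]
    # Part II: back substitution
    x = np.empty(n)
    x[-1] = d[-1] / b[-1]
    for i in range(n - 2, -1, -1):
        x[i] = (d[i] - c[i] * x[i + 1]) / b[i]
    return x
```

Each loop touches one row only once, so the cost is O(N) instead of the O(N³) of full Gauß elimination.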
8
Exercise 2
Write a MATLAB function Thomas.m that realizes
the Thomas algorithm and that accepts as input
four vectors of equal length a, b, c and d, where
a contains the lower diagonal elements, b
contains the diagonal elements, c contains the
upper diagonal elements and the vector d contains
the right-hand side of the linear equation
system Ax = d. The output of Thomas.m is the
solution vector x that satisfies the
above-mentioned system. To test your code,
proceed in the following steps:
(1) Generate the vectors a, b, c and d randomly using the MATLAB command rand.
(2) Assemble the full matrix A using the vectors a, b and c.
(3) Solve the system Ax = d directly using MATLAB.
(4) Solve the system Ax = d using the function call x = thomas(a,b,c,d).
9
The Conjugate Gradient Method
For large matrices A, the number of operations of
the Gauß algorithm grows with N³. For large
but sparse matrices (many entries are zero), it
may be more efficient to use an iterative scheme
for the solution of the system. For symmetric,
positive definite matrices A, a very efficient
algorithm can be constructed: the so-called
conjugate gradient method, which we have already
seen for nonlinear, multi-variate optimization.
We start with some definitions:
(1) A matrix A is symmetric positive definite if
A = A^T and x^T A x > 0 for all x ≠ 0.
(2) Two vectors u and v are called conjugate with
respect to the matrix A if u^T A v = 0.
(3) If we knew N conjugate directions p_j (j = 1..N),
we could write the solution x of Ax = b as
x = Σ_j α_j p_j.
The coefficients α_j are given by
α_j = (p_j^T b) / (p_j^T A p_j).
(4) The term r = b - Ax is called the residual.
10
The Conjugate Gradient Method
The idea of the CG method is to minimize the
functional g(x) = ½ x^T A x - x^T b.
Its gradient is ∇g(x) = Ax - b = -r.
Using the steepest descent method in direction p,
we obtain the minimum of g starting from x_k as
x_{k+1} = x_k + α p with α = (r^T p) / (p^T A p).
11
The Conjugate Gradient Method
A preliminary version of the conjugate gradient
method now reads as follows. From definition (3)
we know that the algorithm will terminate with
the exact solution after at most N iterations.
DO k = 0 ... N-1
  1D minimization along direction p_k.
  Move point x into the minimum along direction p_k.
  Compute the new residual (= negative gradient
  = steepest descent direction).
  Do not move in the steepest descent direction,
  but compute a new conjugate direction p_{k+1} that
  takes into account all the previous search
  directions. As in nonlinear minimization,
  this guarantees faster convergence.
ENDDO
12
The Conjugate Gradient Method
After some algebra, the final conjugate gradient
method is given as follows:
DO k = 0 ... N-1
ENDDO
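The update formulas of the final iteration did not survive this transcript; the following Python/NumPy sketch shows one standard form of the CG iteration (the zero starting vector and the variable names alpha and beta are this sketch's choices):

```python
import numpy as np

def cg(A, b, tol=1e-10):
    """Conjugate gradient method for a symmetric positive definite matrix A.

    Terminates after at most N iterations (exact in exact arithmetic)
    or as soon as the residual norm drops below tol.
    """
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    n = len(b)
    x = np.zeros(n)
    r = b - A @ x              # initial residual = steepest descent direction
    p = r.copy()               # first search direction
    for k in range(n):
        rr = r @ r
        if np.sqrt(rr) < tol:
            break
        Ap = A @ p
        alpha = rr / (p @ Ap)  # 1D minimization along p_k
        x += alpha * p         # move to the minimum along p_k
        r -= alpha * Ap        # new residual
        beta = (r @ r) / rr    # coefficient for the next conjugate direction
        p = r + beta * p       # new conjugate direction p_{k+1}
    return x
```

Note that only matrix-vector products with A are needed, which is why the method is attractive for large sparse matrices.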
13
The GMRES Method of Saad and Schultz (1986)
The GMRES method of Saad and Schultz minimizes
the residual norm ||b - Ax||.
for j = 1..N
  (Arnoldi step: compute the coefficients h(i,j), i = 1..j)
  for i = 1..j-1
    (apply the previously computed Givens rotations)
  end
  if( g(j+1) > tol ) (see next page)
14
The GMRES Method of Saad and Schultz (1986)
...continuation of the previous page
for j = 1..N
  if( g(j+1) > tol )
    (continue the iteration)
  else
    (back-substitute to obtain the solution)
    for i = j-1 ... 1
    end
  end
end
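A compact Python/NumPy sketch of the method: it builds the same Arnoldi recurrence as the pseudocode above, but solves the small residual-minimization problem with a least-squares call instead of the Givens rotations used on the slides, and starts from x0 = 0 (both are choices of this sketch):

```python
import numpy as np

def gmres(A, b, tol=1e-10):
    """GMRES (Saad & Schultz 1986): minimize ||b - A x|| over the
    Krylov space span{b, Ab, A^2 b, ...}."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    n = len(b)
    Q = np.zeros((n, n + 1))          # orthonormal Krylov basis (Arnoldi)
    H = np.zeros((n + 1, n))          # upper Hessenberg matrix
    beta = np.linalg.norm(b)
    Q[:, 0] = b / beta
    for j in range(n):
        v = A @ Q[:, j]
        for i in range(j + 1):        # Arnoldi: h(i,j) = q_i . (A q_j)
            H[i, j] = Q[:, i] @ v
            v -= H[i, j] * Q[:, i]
        H[j + 1, j] = np.linalg.norm(v)
        # minimize || beta*e1 - H y || (lstsq instead of Givens rotations)
        e1 = np.zeros(j + 2)
        e1[0] = beta
        y, *_ = np.linalg.lstsq(H[:j + 2, :j + 1], e1, rcond=None)
        res = np.linalg.norm(H[:j + 2, :j + 1] @ y - e1)
        if res < tol or H[j + 1, j] < tol:   # converged or happy breakdown
            break
        Q[:, j + 1] = v / H[j + 1, j]
    return Q[:, :len(y)] @ y          # x = Q_j y
```

Unlike CG, GMRES does not require A to be symmetric or positive definite.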
15
Exercise 3
Write a MATLAB function CG.m that solves the
system Ax = b. The function should check if A
satisfies the necessary symmetry property for the
CG method. Solve the system for the given matrix
A and right-hand side b.
16
The Linear Least-Squares Method
In the case where we have more equations than
unknowns in the linear system Ax = b,
we call the equation system an overdetermined
system. Typical applications for such
overdetermined systems can be found in data
analysis (linear regression), where so-called
best-fit curves have to be computed, e.g. from
observations or experimental data. In this case,
the matrix is no longer an N x N square matrix,
but an M x N rectangular matrix with M > N. In
general, such systems do not have a solution in
the original sense of the equation. However,
following ideas of Adrien-Marie Legendre and
Carl Friedrich Gauß, we can try to find the best
possible solution of the system by minimizing the
following goal function:
g(x) = (Ax - b)^T (Ax - b) = ||Ax - b||².
17
The Linear Least-Squares Method
The minimum of g(x) can be computed analytically
using the differential criterion for a minimum,
∇g(x) = 0.
The least-squares solution of the overdetermined
equation system can therefore be found by
solving the so-called normal equation
A^T A x = A^T b.
The matrix (A^T A)^(-1) A^T is called the pseudo-inverse
of the M x N matrix A.
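Solving the normal equation takes only a few lines; a Python/NumPy sketch (for ill-conditioned A, an orthogonalization-based solver such as np.linalg.lstsq is numerically preferable to forming A^T A explicitly):

```python
import numpy as np

def lstsq_normal(A, b):
    """Least-squares solution of the overdetermined system A x = b
    (A is M x N with M > N) via the normal equation A^T A x = A^T b."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    return np.linalg.solve(A.T @ A, A.T @ b)
```

For full-rank A this reproduces the pseudo-inverse solution x = (A^T A)^(-1) A^T b.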
18
The Linear Least-Squares Method
19
Exercise 4
Write a MATLAB script LSQ.m that computes the
coefficients of the parabola for an oblique throw
from position (x0, y0) with horizontal and
vertical speed (vx, vy). The gravitational
constant is g = 9.81. The measurements are
available in the form of points (xi, yi) that are
generated from the physical parabola of the throw
plus a random (positive and negative)
perturbation of amplitude A.
Use the linear least-squares method to solve the
problem, solving the normal equation of the
problem directly. Plot the measured data as well
as the parabola obtained from the least-squares
method in the same figure. Compare the exact
coefficients of the parabola obtained from the
physics of an oblique throw with the coefficients
obtained by the least-squares method as a
function of the number of points in the
measurements.
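One possible setup for this exercise, sketched in Python/NumPy rather than MATLAB and with the plotting step omitted; the throw parameters and the noise amplitude are made-up example values:

```python
import numpy as np

# Oblique throw: y = y0 + (vy/vx)*(x - x0) - g/(2*vx^2)*(x - x0)^2.
g = 9.81
x0, y0, vx, vy = 0.0, 0.0, 10.0, 15.0       # example throw parameters (assumed)
amp = 0.05                                   # perturbation amplitude A (assumed)

rng = np.random.default_rng(1)
xi = np.linspace(x0, x0 + 2 * vx * vy / g, 50)        # up to the landing point
yi = (y0 + (vy / vx) * (xi - x0) - g / (2 * vx**2) * (xi - x0)**2
      + amp * (2 * rng.random(xi.size) - 1))          # +/- random perturbation

# Overdetermined system M c = yi for the parabola y = c0 + c1*x + c2*x^2,
# solved directly via the normal equation M^T M c = M^T yi.
M = np.column_stack([np.ones_like(xi), xi, xi**2])
c = np.linalg.solve(M.T @ M, M.T @ yi)
# Exact coefficients from the physics (with x0 = 0): [y0, vy/vx, -g/(2*vx^2)]
```

Increasing the number of sample points drives the fitted coefficients c toward the exact physical ones, which is the comparison the exercise asks for.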
20
Carl Friedrich Gauß
German mathematician, astronomer and physicist
30/04/1777 - 23/02/1855
  • Significant contributions to
  • Linear algebra (Gauß algorithm, Least Squares
    Method)
  • Statistics (Gaussian normal probability
    distribution)
  • Potential theory and differential calculus (Gauß
    divergence theorem)
  • Numerical analysis (Gaussian quadrature rules)

Already during his life, he was called the
prince of mathematics. He produced more than
250 scientific contributions to his research
fields.