Title: MECH2700 Engineering Analysis I
Slide 1: MECH2700 Engineering Analysis I
- Condition number
- Jacobi method
(Sections 2.5 and 2.6 of Schilling and Harris)
Slide 2: Ax = b - Learning objectives
- Know how to formulate linear systems of equations using vector and matrix notation.
- Know how to compute and interpret the 1, 2, and infinity norms of matrices and vectors.
- Know how to compute and interpret the condition number of a matrix.
- Be able to write computer programs that iteratively solve Ax = b problems using
  - the Jacobi method
  - the Gauss-Seidel method
- Know the types of problems for which these iterative methods are applicable, and why.
- Understand the conditions under which these iterative methods converge, and why.
Slide 3: Ideas from previous lecture
- Vector norms are measures of the size or magnitude of a vector:
  - 1-norm: sum of the absolute values of the elements
  - 2-norm: square root of the sum of the squares of the elements
  - infinity-norm: largest absolute value among the elements
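For reference, the standard definitions behind these descriptions are

$$\|x\|_1 = \sum_{i=1}^{n} |x_i|, \qquad
\|x\|_2 = \left( \sum_{i=1}^{n} x_i^2 \right)^{1/2}, \qquad
\|x\|_\infty = \max_{1 \le i \le n} |x_i|.$$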
Slide 4: Recap - Matrix norms
- The norm of a matrix represents the maximum scaling effect of that matrix.
- As with vectors, there are various norms we can use:
  - 1-norm
  - 2-norm
  - infinity-norm
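The standard induced-norm formulas for an n-by-n matrix A, consistent with the vector norms above, are

$$\|A\|_1 = \max_{1 \le j \le n} \sum_{i=1}^{n} |a_{ij}| \ \text{(maximum absolute column sum)}, \qquad
\|A\|_\infty = \max_{1 \le i \le n} \sum_{j=1}^{n} |a_{ij}| \ \text{(maximum absolute row sum)},$$

and the 2-norm $\|A\|_2$ is the largest singular value of $A$.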
Slide 5: Ideas from previous lecture
- 1-norm
- infinity-norm
Slide 6: Recap - Beam support
Slide 7: Condition number
Rather than working with absolute errors, it is preferable to work with relative errors.
The condition number gives the magnification factor for relative errors in the solution x given relative errors in b.
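The standard statement of this magnification property, consistent with the norms above, is

$$\mathrm{cond}(A) = \|A\| \, \|A^{-1}\|, \qquad
\frac{\|\delta x\|}{\|x\|} \le \mathrm{cond}(A)\,\frac{\|\delta b\|}{\|b\|},$$

where $Ax = b$ and $A(x + \delta x) = b + \delta b$.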
Slide 8: Example
Consider:
Note that the scaling factor is measured in the specified norm. Here we are using the 1-norm.
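A minimal MATLAB sketch of such a check; the matrix A below is a stand-in, not the example from the slide:

    A = [1 2; 3 4];                      % stand-in matrix, not the slide's example
    k1 = norm(A, 1) * norm(inv(A), 1);   % 1-norm condition number from the definition
    k1_builtin = cond(A, 1);             % same quantity via the built-in cond()
    fprintf('cond_1(A) = %f\n', k1_builtin)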
Slide 9: Example - beam supports
When a is small relative to l, errors are scaled significantly.
(plot against the ratio a/l)
Slide 10: Effects of ill-conditioning
- A simple rule of thumb: for a system with condition number k, one can expect a loss of about log10(k) decimal places in the accuracy of the solution.
- If k = 5000, log10(k) ≈ 3.7, so expect to lose three to four decimal places of accuracy.
- If k = 50,000, log10(k) ≈ 4.7, so expect to lose four to five decimal places of accuracy.
See page 68 of Schilling and Harris
Slide 11: Hilbert matrix
The Hilbert matrix arises in the solution of polynomial curve-fitting problems.
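The Hilbert matrix has the standard definition

$$H_{ij} = \frac{1}{i + j - 1}, \qquad i, j = 1, \dots, n.$$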
Slide 12: Hilbert matrix - a poorly conditioned matrix
    function k = hilbertcond(n)
    for i = 1:n
        H = hilb(i);                      % built-in MATLAB Hilbert matrix
        k(i) = cond(H);                   % 2-norm condition number
        fprintf('kH(%d) = %f\n', i, k(i))
    end
    kH(1) = 1.000000
    kH(2) = 19.281470
    kH(3) = 524.056778
    kH(4) = 15513.738739
    kH(5) = 476607.250243
    kH(6) = 14951058.641724
    kH(7) = 475367356.277700
    kH(8) = 15257575253.665625
(plot of the condition number k against matrix size)
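To connect this with the rule of thumb on slide 10, a small illustration (not from the slides) that solves H x = b with a known exact solution and reports the error; the loop bounds and variable names are illustrative choices:

    for n = 4:2:12
        H = hilb(n);
        x_exact = ones(n, 1);            % chosen exact solution
        b = H * x_exact;                 % right-hand side consistent with x_exact
        x = H \ b;                       % solve with MATLAB's backslash
        fprintf('n = %2d   cond = %9.2e   max error = %9.2e\n', ...
                n, cond(H), max(abs(x - x_exact)));
    end

The maximum error grows with cond(H), roughly as the rule of thumb predicts.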
Slide 13: Weighted chain
Problem: determine the shape of a weighted chain under gravity.
Slide 14: Jacobi solution of weighted chain
Simplifying assumptions:
- links of unit length
- unit tension in each link
- net horizontal forces are small
Slide 15: Jacobi solution of weighted chain
Resolve vertical forces.
Slide 16: Weighted chain
Consider a 5-link chain.
Let bj = 1.0 and x0 = x5 = 0.
Slide 18: Weighted chain
At each node the force balance gives one linear equation.
For a large number of links, the number of equations is large.
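A plausible reconstruction of the node equation (not taken verbatim from the slide), assuming unit tension, unit link length, and x_j measured as the sag at node j:

$$2x_j - x_{j-1} - x_{j+1} = b_j, \qquad j = 1, \dots, 4, \qquad x_0 = x_5 = 0,$$

which for the 5-link chain is a tridiagonal system Ax = b with 2 on the diagonal and -1 on the sub- and super-diagonals.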
Slide 19: Iterative solutions of Ax = b
- We've already seen some examples of iterative solutions, e.g. Newton's method for root finding.
- Iterative solutions have the characteristic that they converge to a solution from an initial guess.
- Iterative solutions are efficient.
- Iterative solutions of Ax = b are virtually the only way of solving large problems, e.g. A of order 10^5 x 10^5.
Slide 20: Carl Gustav Jacob Jacobi (1804-1851)
Jacobi made important contributions to partial differential equations of the first order and applied them to the differential equations of dynamical systems.
Slide 21: Jacobi algorithm - derivation
- Diagonal elements
- Off-diagonal elements
Slide 22: Jacobi algorithm - derivation
Slide 23: Jacobi algorithm - derivation
Form the iterative scheme.
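The standard derivation these slides appear to follow writes A as the sum of its diagonal part D and its off-diagonal part R:

$$A = D + R, \qquad Ax = b \;\Rightarrow\; Dx = b - Rx,$$

$$x^{(k+1)} = D^{-1}\!\left(b - R\,x^{(k)}\right), \qquad \text{or element-wise} \qquad
x_i^{(k+1)} = \frac{1}{a_{ii}}\left(b_i - \sum_{j \ne i} a_{ij}\,x_j^{(k)}\right).$$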
Slide 24: Jacobi algorithm
Solve Ax = b.
Step 5: Choose x^(0) and iterate using the scheme above.
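A minimal MATLAB sketch of the scheme above (not the slides' listing); the function name, tolerance, and iteration cap are illustrative choices:

    function x = jacobi(A, b, x0, tol, max_iter)
        % Jacobi iteration: x^(k+1) = D^-1 (b - R x^(k))
        D = diag(diag(A));               % diagonal part of A
        R = A - D;                       % off-diagonal part of A
        x = x0;
        for k = 1:max_iter
            x_new = D \ (b - R*x);
            if norm(x_new - x, inf) < tol    % stop when the update is small
                x = x_new;
                return
            end
            x = x_new;
        end
    end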
Slide 25: 5-link chain
- Diagonal component of A
- Off-diagonal component of A
Slide 26: 5-link chain
Iterative scheme.
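Under the reconstruction of the chain equations given after slide 18 (the sign convention there is an assumption), the Jacobi update for this system reduces to x_j^(k+1) = (b_j + x_{j-1}^(k) + x_{j+1}^(k)) / 2. A hypothetical set-up using the jacobi() sketch above:

    A = [ 2 -1  0  0;
         -1  2 -1  0;
          0 -1  2 -1;
          0  0 -1  2];                  % 4 interior nodes of the 5-link chain
    b = ones(4, 1);                     % b_j = 1.0 at every interior node
    x = jacobi(A, b, zeros(4, 1), 1.0e-6, 200);
    disp(x)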
Slide 27: Weighted chain
Slide 28: 5-link chain
Slide 29: 5-link chain
Slide 30: Weighted chain
Converges after about 50 iterations.
Slide 31: 5-link chain
Slide 32: Approximating a continuum - 50-link chain
Slide 33: Next lecture
- Using matrix norms to determine rates of convergence.
- Gauss-Seidel algorithm - faster convergence.
- Successive over-relaxation (SOR) - optimally fast convergence.