1
Direct Methods for Sparse Linear Systems
Lecture 4, Alessandra Nardi
Thanks to Prof. Jacob White, Suvranu De, Deepak Ramaswamy, Michal Rewienski, and Karen Veroy
2
Last lecture review
  • Solution of systems of linear equations
  • Existence and uniqueness review
  • Gaussian elimination (GE) basics
  • LU factorization
  • Pivoting

3
Outline
  • Error mechanisms
  • Sparse matrices
  • Why they are nice
  • How we store them
  • How we can exploit and preserve sparsity

4
Error Mechanisms
  • Round-off error
  • Pivoting helps
  • Ill-conditioning (matrix almost singular)
  • Bad luck: a property of the matrix itself
  • Pivoting does not help
  • Numerical stability of the method

5
Ill-Conditioning: Norms
  • Norms are useful to discuss error in numerical problems
  • A norm assigns a length to a vector: ||x|| >= 0 (with ||x|| = 0 only for x = 0), ||ax|| = |a| ||x||, and ||x + y|| <= ||x|| + ||y||

6
Ill-Conditioning: Vector Norms
  • L2 (Euclidean) norm: ||x||_2 = (sum_i x_i^2)^(1/2), whose unit ball is the unit circle
  • L1 norm: ||x||_1 = sum_i |x_i|
  • L-infinity norm: ||x||_inf = max_i |x_i|, whose unit ball is the unit square
7
Ill-Conditioning: Matrix Norms
The induced norm of A is the maximum magnification of x by A:
  ||A|| = max over x != 0 of ||Ax|| / ||x||
  • ||A||_1 = max abs column sum
  • ||A||_inf = max abs row sum
  • ||A||_2 = (largest eigenvalue of A^T A)^(1/2)
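As a quick check of the induced-norm formulas above, a short NumPy sketch (the 2x2 example matrix is mine, not from the slides):

```python
import numpy as np

# Hypothetical example matrix, chosen only to illustrate the formulas.
A = np.array([[1.0, -2.0],
              [3.0,  4.0]])

# Induced matrix norms, computed directly from the definitions above.
norm_1 = np.abs(A).sum(axis=0).max()                  # max abs column sum
norm_inf = np.abs(A).sum(axis=1).max()                # max abs row sum
norm_2 = np.sqrt(np.linalg.eigvalsh(A.T @ A).max())   # sqrt of largest eigenvalue of A^T A

# NumPy's built-in norms agree with the hand computations.
assert np.isclose(norm_1, np.linalg.norm(A, 1))
assert np.isclose(norm_inf, np.linalg.norm(A, np.inf))
assert np.isclose(norm_2, np.linalg.norm(A, 2))
```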
8
Ill-Conditioning: Matrix Norms
  • More properties of the matrix norm: ||Ax|| <= ||A|| ||x|| and ||AB|| <= ||A|| ||B||
  • Condition number: k(A) = ||A|| ||A^-1||
  • It can be shown that k(A) >= 1
  • Large k(A) means the matrix is almost singular (ill-conditioned)

9
Ill-Conditioning: Perturbation Analysis
What happens if we perturb b? For Mx = b with M(x + dx) = b + db:
  ||dx|| / ||x|| <= k(M) ||db|| / ||b||
=> large k(M) is bad: small relative errors in b can be amplified by up to k(M) in the solution
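The bound above can be seen numerically. A sketch with a nearly singular 2x2 matrix (the values are illustrative, not from the slides):

```python
import numpy as np

# A nearly singular (ill-conditioned) matrix: its columns are almost aligned.
M = np.array([[1.0, 1.0],
              [1.0, 1.0001]])
b = np.array([2.0, 2.0001])

x = np.linalg.solve(M, b)     # exact solution is [1, 1]
kappa = np.linalg.cond(M)     # condition number k(M), about 4e4 here

# A tiny relative perturbation of b...
db = np.array([1e-6, -1e-6])
x_pert = np.linalg.solve(M, b + db)

rel_in = np.linalg.norm(db) / np.linalg.norm(b)
rel_out = np.linalg.norm(x_pert - x) / np.linalg.norm(x)

# ...is amplified in x, but never by more than k(M).
assert rel_out <= kappa * rel_in * 1.001
print(f"k(M) = {kappa:.1e}, input error = {rel_in:.1e}, output error = {rel_out:.1e}")
```

With these values the input error of order 1e-6 grows to an output error of order 1e-2, close to the k(M) bound.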
10
Ill-Conditioning: Perturbation Analysis
What happens if we perturb M? For (M + dM)(x + dx) = b:
  ||dx|| / ||x + dx|| <= k(M) ||dM|| / ||M||
=> large k(M) is bad
Bottom line: if the matrix is ill-conditioned, round-off puts you in trouble
11
Ill-Conditioning: Perturbation Analysis
A geometric approach is more intuitive: solving Mx = b expresses b as a combination of the columns of M. When those column vectors are nearly aligned, it is hard to decide how much of one column versus how much of the other makes up b, so the coefficients (the solution x) are poorly determined.
12
Numerical Stability
  • Rounding errors may accumulate and propagate unstably in a bad algorithm
  • It can be proven that for Gaussian elimination the accumulated error is bounded

13
Summary of Error Mechanisms for GE
  • Rounding: due to finite machine precision we have an error in the solution even if the algorithm is perfect
  • Pivoting helps to reduce it
  • Matrix conditioning
  • If the matrix is well-conditioned, complete pivoting solves any round-off problem
  • If the matrix is bad (almost singular), there is nothing to be done
  • Numerical stability
  • How rounding errors accumulate
  • GE is stable

14
LU: Computational Complexity
  • Computational complexity: O(n^3), where M is n x n
  • We cannot afford this complexity
  • Exploit the natural sparsity that occurs in circuit equations
  • Sparsity: many zero elements
  • A matrix is "sparse" when it is advantageous to exploit its sparsity
  • Exploiting sparsity: O(n^1.1) to O(n^1.5)

15
LU: Goals of exploiting sparsity
  • (1) Avoid storing zero entries
  • Memory usage reduction
  • Decomposition is faster since you do not need to access them (but the data structure is more complicated)
  • (2) Avoid trivial operations
  • Multiplication by zero
  • Addition with zero
  • (3) Avoid losing sparsity

16
Sparse Matrices: Resistor Line
Tridiagonal case (figure: a line of m resistors, whose nodal matrix is tridiagonal)
17
GE Algorithm: Tridiagonal Example
For i = 1 to n-1                (for each row)
    For j = i+1 to n            (for each target row below the source)
        For k = i+1 to n        (for each row element beyond the pivot)
            ...
For a tridiagonal matrix, only the row immediately below the pivot and the single element beyond it are non-zero, so each inner loop does O(1) work:
Order N operations!
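The tridiagonal O(n) factor-and-solve can be sketched in Python as the classic Thomas algorithm, a specialization of GE; the function name and example are mine, not from the lecture:

```python
def solve_tridiagonal(a, b, c, d):
    """Solve a tridiagonal system in O(n) (Thomas algorithm).

    a: sub-diagonal (length n, a[0] unused)
    b: diagonal (length n)
    c: super-diagonal (length n, c[-1] unused)
    d: right-hand side (length n)
    Assumes no pivoting is needed (e.g. a diagonally dominant matrix).
    """
    n = len(b)
    cp = [0.0] * n   # modified super-diagonal
    dp = [0.0] * n   # modified right-hand side
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    # Forward elimination: each row has a single sub-diagonal entry to remove.
    for i in range(1, n):
        denom = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / denom if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / denom
    # Back substitution: each row couples to only one unknown above.
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# Example: the -1 / 2 / -1 resistor-line matrix with a unit right-hand side.
n = 5
x = solve_tridiagonal([-1.0] * n, [2.0] * n, [-1.0] * n, [1.0] * n)
```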
18
Sparse Matrices: Fill-in Example 1
(figure: non-zero pattern of a nodal matrix; the matrix is symmetric and diagonally dominant)
19
Sparse Matrices: Fill-in Example 1
(figure: matrix non-zero structure, and the same matrix after one LU step; X = non-zero. The elimination step creates new non-zeros, called fill-ins.)
20
Sparse Matrices: Fill-in Example 2
Fill-ins propagate: fill-ins from step 1 result in fill-ins in step 2
(figure: non-zero pattern across two elimination steps; X = non-zero)
21
Sparse Matrices: Fill-in Reordering
Node reordering can reduce fill-in:
  - It preserves properties (symmetry, diagonal dominance)
  - It is equivalent to swapping rows and columns
22
Exploiting and maintaining sparsity
  • Criteria for exploiting sparsity:
  • Minimum number of operations
  • Minimum number of fill-ins
  • Pivoting to maintain sparsity is an NP-complete problem => heuristics are used
  • Markowitz; Berry; Hsieh and Ghausi; Nakhla, Singhal and Vlach
  • Choice: Markowitz
  • 5 more fill-ins, but
  • Faster
  • Pivoting for accuracy may conflict with pivoting for sparsity

23
Sparse Matrices: Fill-in Reordering
Where can fill-in occur?
(figure: partially factored matrix showing the already-factored block, the multipliers below the pivot, and the possible fill-in locations in the remaining submatrix)
24
Sparse Matrices: Fill-in Reordering
Markowitz reordering (diagonal pivoting): at each step choose the diagonal pivot that minimizes the Markowitz product
  (non-zeros in pivot row - 1) x (non-zeros in pivot column - 1),
which bounds the fill-ins that the step can create.
A greedy algorithm (but close to optimal)!
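A minimal sketch of computing Markowitz products from a sparsity pattern (the function name and the arrow-pattern example are illustrative, not from the slides):

```python
def markowitz_products(rows):
    """Markowitz product of each diagonal pivot candidate.

    rows[i] is the set of column indices holding non-zeros in row i.
    product(i) = (non-zeros in row i - 1) * (non-zeros in column i - 1),
    an upper bound on the fill-ins created by pivoting on entry (i, i)."""
    n = len(rows)
    col_counts = [sum(1 for r in rows if i in r) for i in range(n)]
    return [(len(rows[i]) - 1) * (col_counts[i] - 1) for i in range(n)]

# Arrow pattern: first row and column full, rest diagonal.
rows = [{0, 1, 2, 3}, {0, 1}, {0, 2}, {0, 3}]
print(markowitz_products(rows))  # -> [9, 1, 1, 1]
```

Pivoting on entry (0, 0) first could create up to 9 fill-ins; the greedy rule picks one of the cheap pivots instead, keeping the arrow pattern sparse.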
25
Sparse Matrices: Fill-in Reordering
Why only try diagonals?
  • Corresponds to node reordering in the nodal formulation
    (figure: a 3-node circuit with its nodes renumbered)
  • Reduces search cost
  • Preserves matrix properties:
    - Diagonal dominance
    - Symmetry

26
Sparse Matrices: Fill-in Reordering
Pattern of a filled-in matrix
(figure: non-zero pattern with very sparse leading blocks and a dense trailing block)
27
Sparse Matrices: Fill-in Reordering
(figure: non-zero pattern of an unfactored random matrix)
28
Sparse Matrices: Fill-in Reordering
(figure: non-zero pattern of the factored random matrix, showing the added fill-ins)
29
Sparse Matrices: Data Structure
  • There are several ways of storing a sparse matrix in a compact form
  • Trade-off:
  • Storage amount
  • Cost of data access and update procedures
  • An efficient data structure: the linked list

30
Sparse Matrices: Data Structure 1
Orthogonal linked list: each non-zero entry carries a link to the next non-zero in its row and a link to the next non-zero in its column
31
Sparse Matrices: Data Structure 2
Vector of row pointers with arrays of data in each row. Entry i of the pointer vector leads to two parallel arrays holding row i's matrix entries and their column indices:

  Row 1 -> values:  Val 11, Val 12, ..., Val 1K
           columns: Col 11, Col 12, ..., Col 1K
  Row 2 -> values:  Val 21, Val 22, ..., Val 2L
           columns: Col 21, Col 22, ..., Col 2L
  ...
  Row N -> values:  Val N1, Val N2, ..., Val Nj
           columns: Col N1, Col N2, ..., Col Nj
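A minimal Python sketch of this row-pointer storage (class and method names are illustrative; in library form this is essentially the compressed sparse row format):

```python
class SparseRows:
    """Each row stored as parallel (column index, value) arrays,
    reached through a per-row pointer list."""

    def __init__(self, n):
        self.n = n
        self.cols = [[] for _ in range(n)]  # column indices per row
        self.vals = [[] for _ in range(n)]  # matching values per row

    def add(self, i, j, v):
        self.cols[i].append(j)
        self.vals[i].append(v)

    def matvec(self, x):
        # y = A x, touching only the stored non-zeros.
        y = [0.0] * self.n
        for i in range(self.n):
            for j, v in zip(self.cols[i], self.vals[i]):
                y[i] += v * x[j]
        return y

# Tridiagonal 3x3 example: [[2, -1, 0], [-1, 2, -1], [0, -1, 2]].
A = SparseRows(3)
for i in range(3):
    A.add(i, i, 2.0)
    if i > 0:
        A.add(i, i - 1, -1.0)
    if i < 2:
        A.add(i, i + 1, -1.0)

print(A.matvec([1.0, 1.0, 1.0]))  # -> [1.0, 0.0, 1.0]
```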
32
Sparse Matrices Data Structure: Problem of Misses
Eliminating source row i from target row j:
(figure: non-zero patterns of row i and row j)
We must read all the row j entries to find the three (in this example) that match row i. Every miss is an unneeded memory reference (expensive!). We could have more misses than operations!
33
Sparse Matrices Data Structure: Scattering for Miss Avoidance
  1) Read all the elements in row j and scatter them into a dense vector of length n
  2) Access only the needed elements using array indexing!
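The scatter step can be sketched as follows (function and variable names are mine; the slides give no code):

```python
def eliminate_row(n, row_i, row_j, multiplier):
    """Subtract multiplier * row_i from row_j, where each row is a dict
    {column index: value} holding only the non-zeros.

    Scattering row_j into a dense length-n work vector makes every update
    of a row_i entry a direct array index, not a search through row_j."""
    work = [0.0] * n
    for col, val in row_j.items():        # scatter: one pass over row_j
        work[col] = val
    for col, val in row_i.items():        # update: direct indexing, no misses
        work[col] -= multiplier * val
    # Gather the result back into sparse form, keeping any new fill-ins.
    return {col: work[col] for col in set(row_i) | set(row_j) if work[col] != 0.0}

row_i = {0: 2.0, 2: -1.0, 5: 4.0}
row_j = {0: 1.0, 3: 2.0, 5: 1.0}
print(eliminate_row(6, row_i, row_j, 0.5))
```

Note that column 2 appears in the result even though row j had no entry there: that is a fill-in, created at direct-index cost.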
34
Sparse Matrices: Graph Approach
Structurally symmetric matrices and graphs
(figure: a 5x5 non-zero pattern and its corresponding graph)
  • One node per matrix row
  • One edge per off-diagonal pair

35
Sparse Matrices Graph Approach: Markowitz Products
With diagonal pivoting on a structurally symmetric matrix, the Markowitz product of a node equals (node degree)^2
(figure: graph with each node labeled by its Markowitz product)
36
Sparse Matrices Graph Approach: Factorization
One step of LU factorization
(figure: 5x5 non-zero pattern and graph, before and after eliminating the pivot node)
  • Delete the node associated with the pivot row
  • Tie together the graph edges: the eliminated node's neighbors become mutually connected, and each new edge is a fill-in
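The two bullets above can be sketched on an adjacency-set representation (a generic sketch, not code from the lecture):

```python
def eliminate_node(adj, v):
    """One elimination step on the graph of a structurally symmetric matrix.

    adj maps each node to the set of its neighbors. Removing pivot v ties
    its neighbors together; each newly created edge is a fill-in."""
    neighbors = adj.pop(v)          # delete the pivot node
    for u in neighbors:
        adj[u].discard(v)
    fill_ins = []
    for u in neighbors:             # connect all neighbor pairs
        for w in neighbors:
            if u < w and w not in adj[u]:
                adj[u].add(w)
                adj[w].add(u)
                fill_ins.append((u, w))
    return fill_ins

# Star graph: node 1 connected to 2, 3, 4 and nothing else.
adj = {1: {2, 3, 4}, 2: {1}, 3: {1}, 4: {1}}
print(eliminate_node(adj, 1))  # worst case: every neighbor pair becomes a fill-in
```

Eliminating the hub of the star creates all three possible neighbor edges, matching its Markowitz product intuition; eliminating a leaf first would create none.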

37
Sparse Matrices Graph Approach: Example
(figure: example graph with the Markowitz products, i.e. node degrees, marked)
38
Sparse Matrices Graph Approach: Example
Swap node 2 with node 1
(figure: the graph after the swap)
39
Summary
  • Gaussian elimination error mechanisms
  • Ill-conditioning
  • Numerical stability
  • Gaussian elimination for sparse matrices
  • Improved computational cost: factors in O(N^1.5) operations (dense is O(N^3))
  • Example: tridiagonal matrix factorization in O(N)
  • Data structures
  • Markowitz reordering to minimize fill-ins
  • Graph-based approach