Title: CS 267 Applications of Parallel Computers: Dense Linear Algebra

1. CS 267 Applications of Parallel Computers: Dense Linear Algebra
- James Demmel
- http://www.cs.berkeley.edu/~demmel/cs267_221001.ppt
2. Outline
- Motivation for Dense Linear Algebra (as opposed to sparse)
  - Benchmarks
- Review Gaussian Elimination (GE) for solving Ax = b
- Optimizing GE for caches on sequential machines
  - using matrix-matrix multiplication (BLAS)
- LAPACK library overview and performance
- Data layouts on parallel machines
- Parallel matrix-matrix multiplication review
- Parallel Gaussian Elimination
- ScaLAPACK library overview
- Eigenvalue problems
- Open Problems
3. Motivation
- 3 basic linear algebra problems:
  - Linear equations: solve Ax = b for x
  - Least squares: find x that minimizes Σ rᵢ² where r = Ax - b
  - Eigenvalues: find λ and x where Ax = λx
  - Lots of variations depending on the structure of A (e.g. symmetry)
- Why dense A, as opposed to sparse A?
  - Aren't most large matrices sparse?
  - Dense algorithms are easier to understand
  - Some applications yield large dense matrices
    - Ax = b: computational electromagnetics
    - Ax = λx: quantum chemistry
  - Benchmarking
    - How fast is your computer? = How fast can you solve dense Ax = b?
  - Large sparse matrix algorithms often yield smaller (but still large) dense problems
4. Computational Electromagnetics: Solve Ax = b
- Developed during the 1980s, driven by defense applications
- Determine the RCS (radar cross section) of an airplane
- Reduce the signature of a plane (stealth technology)
- Other applications: antenna design, medical equipment
- Two fundamental numerical approaches:
  - MOM, methods of moments (frequency domain)
    - Large dense matrices
  - Finite differences (time domain)
    - Even larger sparse matrices
5. Computational Electromagnetics
- Discretize the surface into triangular facets using standard modeling tools
- Amplitudes of the currents on the surface are the unknowns
- The integral equation is discretized into a set of linear equations

[image: NW Univ. Comp. Electromagnetics Laboratory, http://nueml.ece.nwu.edu/]
6. Computational Electromagnetics (MOM)

After discretization the integral equation has the form A x = b, where A is the (dense) impedance matrix, x is the unknown vector of amplitudes, and b is the excitation vector. (See Cwik, Patterson, and Scott, "Electromagnetic Scattering on the Intel Touchstone Delta," IEEE Supercomputing 92, pp. 538-542.)
7. Results for Parallel Implementation on Intel Delta

Task                                   Time (hours)
Fill (compute n² matrix entries)       9.20  (embarrassingly parallel but slow)
Factor (Gaussian Elimination, O(n³))   8.25  (good parallelism with right algorithm)
Solve (O(n²))                          2.17  (reasonable parallelism with right algorithm)
Field Calc. (O(n))                     0.12  (embarrassingly parallel and fast)

The problem solved was for a matrix of size 48,672. The Factor step ran at 2.6 Gflops, the world record in 1991.
8. Current Records for Solving Dense Systems
(www.netlib.org, click on Performance DataBase Server)

Year    System Size   Machine                            Procs   Gflops (Peak)
1950's  O(100)
1995    128,600       Intel Paragon                      6768    281    (338)
1996    215,000       Intel ASCI Red                     7264    1068   (1453)
1998    148,000       Cray T3E                           1488    1127   (1786)
1998    235,000       Intel ASCI Red (200 MHz Ppro)      9152    1338   (1830)
1999    374,000       SGI ASCI Blue                      5040    1608   (2520)
2000    362,000       Intel ASCI Red (333 MHz Xeon)      9632    2380   (3207)
2001    518,000       IBM ASCI White (375 MHz Power 3)   8000    7226   (12000)

Source: Alan Edelman, http://www-math.mit.edu/~edelman/records.html
LINPACK Benchmark: http://www.netlib.org/performance/html/PDSreports.html
9. Current Records for Solving Small Dense Systems
(www.netlib.org, click on Performance DataBase Server)

                                       Megaflops
Machine                              n=100   n=1000   Peak
Fujitsu VPP 5000 (1 proc, 300 MHz)   1156    8784     9600
Cray T90 (32 proc, 450 MHz)                  29360    57600
Cray T90 (4 proc, 450 MHz)           1129    5735     7200
IBM Power 4 (1 proc, 1300 MHz)       1074    2394     5200
Dell Itanium (4 proc, 800 MHz)               7358     12800
Dell Itanium (2 proc, 800 MHz)               4504     6400
Dell Itanium (1 proc, 800 MHz)       600     2382     3200

Source: LINPACK Benchmark, http://www.netlib.org/performance/html/PDSreports.html
10. Computational Chemistry: Ax = λx
- Seek energy levels of a molecule, crystal, etc.
  - Solve Schroedinger's Equation for energy levels = eigenvalues
  - Discretize to get Ax = λBx, solve for eigenvalues λ and eigenvectors x
  - A and B large, symmetric or Hermitian matrices (B positive definite)
  - May want some or all eigenvalues and/or eigenvectors
- MP-Quest (Sandia NL)
  - Si and sapphire crystals of up to 3072 atoms
  - Local Density Approximation to Schroedinger Equation
  - A and B up to n = 40000, complex Hermitian
  - Need all eigenvalues and eigenvectors
  - Need to iterate up to 20 times (for self-consistency)
- Implemented on Intel ASCI Red
  - 9200 Pentium Pro 200 processors (4600 duals, a CLUMP)
  - Overall application ran at 605 Gflops (out of 1800 Gflops peak)
  - Eigensolver ran at 684 Gflops
  - www.cs.berkeley.edu/~stanley/gbell/index.html
  - Runner-up for Gordon Bell Prize at Supercomputing 98
- Same problem arises in astrophysics...
11. Review of Gaussian Elimination (GE) for solving Ax = b
- Add multiples of each row to later rows to make A upper triangular
- Solve the resulting triangular system Ux = c by substitution

... for each column i, zero it out below the diagonal by adding multiples of row i to later rows
for i = 1 to n-1
    ... for each row j below row i
    for j = i+1 to n
        ... add a multiple of row i to row j
        for k = i to n
            A(j,k) = A(j,k) - (A(j,i)/A(i,i)) * A(i,k)
12. Refine GE Algorithm (1)
- Initial version:

for i = 1 to n-1
    for j = i+1 to n
        for k = i to n
            A(j,k) = A(j,k) - (A(j,i)/A(i,i)) * A(i,k)

- Remove computation of the constant A(j,i)/A(i,i) from the inner loop:

for i = 1 to n-1
    for j = i+1 to n
        m = A(j,i)/A(i,i)
        for k = i to n
            A(j,k) = A(j,k) - m * A(i,k)
13. Refine GE Algorithm (2)
- Last version:

for i = 1 to n-1
    for j = i+1 to n
        m = A(j,i)/A(i,i)
        for k = i to n
            A(j,k) = A(j,k) - m * A(i,k)

- Don't compute what we already know: zeros below the diagonal in column i

for i = 1 to n-1
    for j = i+1 to n
        m = A(j,i)/A(i,i)
        for k = i+1 to n
            A(j,k) = A(j,k) - m * A(i,k)
14. Refine GE Algorithm (3)
- Last version:

for i = 1 to n-1
    for j = i+1 to n
        m = A(j,i)/A(i,i)
        for k = i+1 to n
            A(j,k) = A(j,k) - m * A(i,k)

- Store the multipliers m below the diagonal in the zeroed entries, for later use:

for i = 1 to n-1
    for j = i+1 to n
        A(j,i) = A(j,i)/A(i,i)
        for k = i+1 to n
            A(j,k) = A(j,k) - A(j,i) * A(i,k)
15. Refine GE Algorithm (4)
- Last version:

for i = 1 to n-1
    for j = i+1 to n
        A(j,i) = A(j,i)/A(i,i)
        for k = i+1 to n
            A(j,k) = A(j,k) - A(j,i) * A(i,k)

- Split the j loop: first compute all the multipliers in column i, then apply all the updates:

for i = 1 to n-1
    for j = i+1 to n
        A(j,i) = A(j,i)/A(i,i)
    for j = i+1 to n
        for k = i+1 to n
            A(j,k) = A(j,k) - A(j,i) * A(i,k)
16. Refine GE Algorithm (5)
- Last version:

for i = 1 to n-1
    for j = i+1 to n
        A(j,i) = A(j,i)/A(i,i)
    for j = i+1 to n
        for k = i+1 to n
            A(j,k) = A(j,k) - A(j,i) * A(i,k)

- Express using matrix operations (BLAS):

for i = 1 to n-1
    A(i+1:n, i) = A(i+1:n, i) * (1 / A(i,i))
    A(i+1:n, i+1:n) = A(i+1:n, i+1:n) - A(i+1:n, i) * A(i, i+1:n)
17. What GE really computes
- Call the strictly lower triangular matrix of multipliers M, and let L = I + M
- Call the upper triangle of the final matrix U
- Lemma (LU Factorization): if the above algorithm terminates (does not divide by zero), then A = L*U
- Solving Ax = b using GE:
  - Factorize A = L*U using GE (cost = 2/3 n³ flops)
  - Solve Ly = b for y, using substitution (cost = n² flops)
  - Solve Ux = y for x, using substitution (cost = n² flops)
- Thus Ax = (L*U)x = L(Ux) = Ly = b as desired

for i = 1 to n-1
    A(i+1:n, i) = A(i+1:n, i) / A(i,i)
    A(i+1:n, i+1:n) = A(i+1:n, i+1:n) - A(i+1:n, i) * A(i, i+1:n)
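For concreteness, here is a minimal NumPy sketch of the algorithm above: the unblocked, unpivoted factorization plus the two triangular solves. The function names (lu_no_pivot, solve_with_lu) are illustrative, not LAPACK's.

    import numpy as np

    def lu_no_pivot(A):
        # Unblocked GE: overwrite A with L (below diagonal) and U (upper triangle).
        A = A.astype(float).copy()
        n = A.shape[0]
        for i in range(n - 1):
            A[i+1:, i] /= A[i, i]                               # multipliers, stored in zeroed entries
            A[i+1:, i+1:] -= np.outer(A[i+1:, i], A[i, i+1:])   # rank-1 update of trailing matrix
        return A

    def solve_with_lu(LU, b):
        # Solve Ax = b given the packed L\U factors: Ly = b, then Ux = y.
        n = LU.shape[0]
        y = b.astype(float).copy()
        for i in range(n):                  # forward substitution (L has unit diagonal)
            y[i] -= LU[i, :i] @ y[:i]
        x = y.copy()
        for i in range(n - 1, -1, -1):      # back substitution
            x[i] = (x[i] - LU[i, i+1:] @ x[i+1:]) / LU[i, i]
        return x

    A = np.array([[4., 3.], [6., 3.]])
    b = np.array([10., 12.])
    x = solve_with_lu(lu_no_pivot(A), b)
    print(np.allclose(A @ x, b))  # True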
18. Problems with basic GE algorithm
- What if some A(i,i) is zero? Or very small?
  - Result may not exist, or be unstable, so we need to pivot
- Current computation is all BLAS 1 or BLAS 2, but we know that BLAS 3 (matrix multiply) is fastest (earlier lectures)

for i = 1 to n-1
    A(i+1:n, i) = A(i+1:n, i) / A(i,i)    ... BLAS 1 (scale a vector)
    A(i+1:n, i+1:n) = A(i+1:n, i+1:n) - A(i+1:n, i) * A(i, i+1:n)    ... BLAS 2 (rank-1 update)

[Figure: Mflop rates vs. matrix size for Peak, BLAS 3, BLAS 2, BLAS 1]
19. Pivoting in Gaussian Elimination

- A = [ 0 1 ] fails completely, even though A is easy
      [ 1 0 ]

- Illustrate problems in 3-decimal-digit arithmetic:

  A = [ 1e-4 1 ]  and  b = [ 1 ],  correct answer to 3 places is x = [ 1 ]
      [ 1    1 ]           [ 2 ]                                     [ 1 ]

- Result of LU decomposition is

  L = [ 1          0 ]  =  [ 1   0 ]    ... no roundoff error yet
      [ fl(1/1e-4) 1 ]     [ 1e4 1 ]

  U = [ 1e-4 1             ]  =  [ 1e-4 1    ]    ... error in 4th decimal place
      [ 0    fl(1 - 1e4*1) ]     [ 0    -1e4 ]

- Check if A = L*U:

  L*U = [ 1e-4 1 ]    ... (2,2) entry entirely wrong
        [ 1    0 ]
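Three-decimal arithmetic is awkward to reproduce directly, but the same failure shows up in IEEE double precision if the pivot is small enough. A minimal sketch (the 1e-20 pivot is chosen purely for illustration):

    import numpy as np

    def lu_no_pivot(A):
        # Same unpivoted GE as above: returns the packed L\U factors.
        A = A.copy()
        n = A.shape[0]
        for i in range(n - 1):
            A[i+1:, i] /= A[i, i]
            A[i+1:, i+1:] -= np.outer(A[i+1:, i], A[i, i+1:])
        return A

    A = np.array([[1e-20, 1.0],
                  [1.0,   1.0]])
    LU = lu_no_pivot(A)
    L = np.tril(LU, -1) + np.eye(2)
    U = np.triu(LU)
    print(L @ U)                           # [[1e-20, 1.], [1., 0.]]: the (2,2) entry of A is lost
    print(np.linalg.solve(A, [1.0, 2.0])) # LAPACK, which pivots, gets x close to [1, 1]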
20. Gaussian Elimination with Partial Pivoting (GEPP)
- Partial pivoting: swap rows so that each multiplier satisfies |L(i,j)| = |A(j,i)/A(i,i)| <= 1

for i = 1 to n-1
    find and record k where |A(k,i)| = max_{i <= j <= n} |A(j,i)|
        ... i.e. largest entry in rest of column i
    if |A(k,i)| = 0
        exit with a warning that A is singular, or nearly so
    elseif k != i
        swap rows i and k of A
    end if
    A(i+1:n, i) = A(i+1:n, i) / A(i,i)    ... each quotient lies in [-1, 1]
    A(i+1:n, i+1:n) = A(i+1:n, i+1:n) - A(i+1:n, i) * A(i, i+1:n)

- Lemma: this algorithm computes A = P*L*U, where P is a permutation matrix
- Since each entry satisfies |L(i,j)| <= 1, this algorithm is considered numerically stable
- For details see the LAPACK code at www.netlib.org/lapack/single/sgetf2.f
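A runnable NumPy sketch of GEPP as written above (a teaching version, not the actual sgetf2.f):

    import numpy as np

    def gepp(A):
        # Factor P*A = L*U; returns the permutation vector and the packed L\U factors.
        A = A.astype(float).copy()
        n = A.shape[0]
        piv = np.arange(n)                       # records the row permutation P
        for i in range(n - 1):
            k = i + np.argmax(np.abs(A[i:, i]))  # largest entry in rest of column i
            if A[k, i] == 0.0:
                raise ValueError("A is singular, or nearly so")
            if k != i:
                A[[i, k]] = A[[k, i]]            # swap rows i and k of A
                piv[[i, k]] = piv[[k, i]]
            A[i+1:, i] /= A[i, i]                # each quotient lies in [-1, 1]
            A[i+1:, i+1:] -= np.outer(A[i+1:, i], A[i, i+1:])
        return piv, A

    A = np.array([[1e-20, 1.0], [1.0, 1.0]])
    piv, LU = gepp(A)
    L = np.tril(LU, -1) + np.eye(2)
    U = np.triu(LU)
    print(np.allclose(A[piv], L @ U))  # True: the earlier bad example is now stable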
21. Converting BLAS2 to BLAS3 in GEPP
- Blocking
  - Used to optimize matrix multiplication
  - Harder here because of the data dependencies in GEPP
- Delayed updates
  - Save updates to the trailing matrix from several consecutive BLAS2 updates
  - Apply many saved updates simultaneously in one BLAS3 operation
- The same idea works for much of dense linear algebra
  - Open questions remain
- Need to choose a block size b
  - The algorithm will save and apply b updates
  - b must be small enough that the active submatrix consisting of b columns of A fits in cache
  - b must be large enough to make BLAS3 fast
22. Blocked GEPP (www.netlib.org/lapack/single/sgetrf.f)

for ib = 1 to n-1 step b    ... Process matrix b columns at a time
    end = ib + b - 1        ... Point to end of block of b columns
    apply BLAS2 version of GEPP to get A(ib:n, ib:end) = P' * L' * U'
    ... let LL denote the strict lower triangular part of A(ib:end, ib:end) + I
    A(ib:end, end+1:n) = LL⁻¹ * A(ib:end, end+1:n)    ... update next b rows of U
    A(end+1:n, end+1:n) = A(end+1:n, end+1:n) - A(end+1:n, ib:end) * A(ib:end, end+1:n)
        ... apply delayed updates with a single matrix multiply with inner dimension b

(For a correctness proof, see the on-line notes.)
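The same blocked loop as a NumPy sketch (an illustration of the structure, not the actual sgetrf.f; np.linalg.solve stands in for the triangular solve with LL, which a real code would do with a BLAS3 triangular solve):

    import numpy as np

    def blocked_gepp(A, b=64):
        # Right-looking blocked LU with partial pivoting: returns piv and packed L\U.
        A = A.astype(float).copy()
        n = A.shape[0]
        piv = np.arange(n)
        for ib in range(0, n, b):
            end = min(ib + b, n)                     # one past the block's last column
            for i in range(ib, end):                 # BLAS2 GEPP on the panel A(ib:n, ib:end)
                k = i + np.argmax(np.abs(A[i:, i]))
                if k != i:
                    A[[i, k]] = A[[k, i]]            # swap full rows, so L stays consistent
                    piv[[i, k]] = piv[[k, i]]
                A[i+1:, i] /= A[i, i]
                A[i+1:, i+1:end] -= np.outer(A[i+1:, i], A[i, i+1:end])
            if end < n:
                LL = np.tril(A[ib:end, ib:end], -1) + np.eye(end - ib)
                A[ib:end, end:] = np.linalg.solve(LL, A[ib:end, end:])   # next b rows of U
                A[end:, end:] -= A[end:, ib:end] @ A[ib:end, end:]       # delayed BLAS3 update
        return piv, A

    A = np.random.rand(200, 200)
    piv, LU = blocked_gepp(A, b=32)
    L, U = np.tril(LU, -1) + np.eye(200), np.triu(LU)
    print(np.allclose(A[piv], L @ U))  # True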
23. Efficiency of Blocked GEPP
24. Overview of LAPACK
- Standard library for dense/banded linear algebra
  - Linear systems: Ax = b
  - Least squares problems: min_x ||Ax - b||₂
  - Eigenvalue problems: Ax = λx, Ax = λBx
  - Singular value decomposition (SVD): A = U*Σ*Vᵀ
- Algorithms reorganized to use BLAS3 as much as possible
- Basis of math libraries on many computers, Matlab 6
- Many algorithmic innovations remain
  - Projects available
  - Automatic optimization
  - Quadtree matrix data structures for locality
  - New eigenvalue algorithms
25. Performance of LAPACK (n=1000)
26. Performance of LAPACK (n=100)
27. Recursive Algorithms
- Still uses delayed updates, but organized differently
  - (formulas on board)
- Can exploit recursive data layouts
  - 3x speedups on least squares for tall, thin matrices
- Theoretically optimal memory hierarchy performance
- See references at:
  - http://lawra.uni-c.dk/lawra/index.html
  - http://www.cs.berkeley.edu/~yelick/cs267f01/lectures/Lect14.html
  - http://www.cs.umu.se/research/parallel/recursion/recursive-qr/
28. Recursive Algorithms: Limits
- Two kinds of dense matrix decompositions
- One-sided
  - Sequence of simple operations applied on the left of the matrix
  - Gaussian Elimination: A = L*U or A = P*L*U
  - Symmetric Gaussian Elimination: A = L*D*Lᵀ
  - Cholesky: A = L*Lᵀ
  - QR decomposition for least squares: A = Q*R
  - Can be nearly 100% BLAS 3
  - Susceptible to recursive algorithms
- Two-sided
  - Sequence of simple operations applied on both sides, alternating
  - Eigenvalue algorithms, SVD
  - At least 25% BLAS 2
  - Seem impervious to the recursive approach
  - Some recent progress on SVD (25% vs 50% BLAS2)
29. Parallelizing Gaussian Elimination
- Recall the parallelization steps from an earlier lecture:
  - Decomposition: identify enough parallel work, but not too much
  - Assignment: load balance work among threads
  - Orchestration: communication and synchronization
  - Mapping: which processors execute which threads
- Decomposition
  - In the BLAS 2 algorithm, nearly every flop in the inner loop can be done in parallel, so with n² processors we need 3n parallel steps
  - This is too fine-grained; prefer calls to local matmuls instead
  - Need to discuss parallel matrix multiplication
- Assignment
  - Which processors are responsible for which submatrices?

for i = 1 to n-1
    A(i+1:n, i) = A(i+1:n, i) / A(i,i)    ... BLAS 1 (scale a vector)
    A(i+1:n, i+1:n) = A(i+1:n, i+1:n) - A(i+1:n, i) * A(i, i+1:n)    ... BLAS 2 (rank-1 update)
30. Different Data Layouts for Parallel GE (on 4 procs)

[Figure: candidate layouts, with these annotations:]
- 1D column blocked: bad load balance, P0 idle after the first n/4 steps
- 1D column cyclic: load balanced, but can't easily use BLAS2 or BLAS3
- 1D column block cyclic: can trade load balance and BLAS2/3 performance by choosing b, but factorization of the block column is a bottleneck
- block skewed: complicated addressing
- 2D row and column block cyclic: the winner!
31. PDGEMM = PBLAS routine for matrix multiply

Observations:
- For fixed N, as P increases, Mflops increases, but with less than 100% efficiency
- For fixed P, as N increases, Mflops (efficiency) rises

DGEMM = BLAS routine for matrix multiply
Maximum speed for PDGEMM = #Procs * speed of DGEMM

Observations (same as above):
- Efficiency always at least 48%
- For fixed N, as P increases, efficiency drops
- For fixed P, as N increases, efficiency increases
32. Review: BLAS 3 (Blocked) GEPP

for ib = 1 to n-1 step b    ... Process matrix b columns at a time
    end = ib + b - 1        ... Point to end of block of b columns
    apply BLAS2 version of GEPP to get A(ib:n, ib:end) = P' * L' * U'
    ... let LL denote the strict lower triangular part of A(ib:end, ib:end) + I
    A(ib:end, end+1:n) = LL⁻¹ * A(ib:end, end+1:n)    ... update next b rows of U
    A(end+1:n, end+1:n) = A(end+1:n, end+1:n) - A(end+1:n, ib:end) * A(ib:end, end+1:n)
        ... apply delayed updates with a single matrix multiply with inner dimension b (BLAS 3)
33. Review: Row and Column Block Cyclic Layout
- Processors and matrix blocks are distributed in a 2D array
- pcol-fold parallelism in any column, and calls to the BLAS2 and BLAS3 on matrices of size brow-by-bcol
- Serial bottleneck is eased
- Need not be symmetric in rows and columns
34. Distributed GE with a 2D Block Cyclic Layout
- The block size b in the algorithm and the block sizes brow and bcol in the layout satisfy b = brow = bcol
- Shaded regions indicate busy processors or communication performed
- Unnecessary to have a barrier between each step of the algorithm; e.g., steps 9, 10, and 11 can be pipelined
35. Distributed GE with a 2D Block Cyclic Layout
36. Matrix multiply of green = green - blue * pink
37. PDGESV = ScaLAPACK parallel LU routine

Since it can run no faster than its inner loop (PDGEMM), we measure:
Efficiency = Speed(PDGESV) / Speed(PDGEMM)

Observations:
- Efficiency well above 50% for large enough problems
- For fixed N, as P increases, efficiency decreases (just as for PDGEMM)
- For fixed P, as N increases, efficiency increases (just as for PDGEMM)
- From the bottom table, the cost of solving Ax = b is about half that of matrix multiply for large enough matrices. From the flop counts we would expect it to be (2n³) / ((2/3)n³) = 3 times faster, but communication makes it a little slower.
38. (No transcript: figure slide)
39. (No transcript: figure slide)
40. [Figure] Scales well, nearly full machine speed.
41. [Figure] Old version, pre-1998 Gordon Bell Prize. Still have ideas to accelerate it: project available! Old algorithm, plan to abandon.
42. The Holy Grail of eigensolvers for symmetric matrices: to be propagated throughout LAPACK and ScaLAPACK.
43. [Figure] Have good ideas to speed this up: project available! Hardest of all to parallelize.
44. "Out-of-core" means the matrix lives on disk; it is too big for main memory. Much harder to hide the latency of disk. QR is much easier than LU because no pivoting is needed for QR.
45. A small software project ...
46. Extra Slides
47. Parallelizing Gaussian Elimination
- Recall the parallelization steps from Lecture 3:
  - Decomposition: identify enough parallel work, but not too much
  - Assignment: load balance work among threads
  - Orchestration: communication and synchronization
  - Mapping: which processors execute which threads
- Decomposition
  - In the BLAS 2 algorithm, nearly every flop in the inner loop can be done in parallel, so with n² processors we need 3n parallel steps
  - This is too fine-grained; prefer calls to local matmuls instead

for i = 1 to n-1
    A(i+1:n, i) = A(i+1:n, i) / A(i,i)    ... BLAS 1 (scale a vector)
    A(i+1:n, i+1:n) = A(i+1:n, i+1:n) - A(i+1:n, i) * A(i, i+1:n)    ... BLAS 2 (rank-1 update)
48. Assignment of parallel work in GE
- Think of assigning submatrices to threads, where each thread is responsible for updating the submatrix it owns
  - The "owner computes" rule is natural because of locality
- What should the submatrices look like to achieve load balance?
49. Different Data Layouts for Parallel GE (on 4 procs)

[Figure: candidate layouts, with these annotations:]
- 1D column blocked: bad load balance, P0 idle after the first n/4 steps
- 1D column cyclic: load balanced, but can't easily use BLAS2 or BLAS3
- 1D column block cyclic: can trade load balance and BLAS2/3 performance by choosing b, but factorization of the block column is a bottleneck
- block skewed: complicated addressing
- 2D row and column block cyclic: the winner!
50. Computational Electromagnetics (MOM)

The main steps in the solution process are:
- Fill: computing the matrix elements of A
- Factor: factoring the dense matrix A
- Solve: solving for one or more excitations b
- Field Calc: computing the fields scattered from the object
51. Analysis of MOM for Parallel Implementation

Task          Work    Parallelism        Parallel Speed
Fill          O(n²)   embarrassing       low
Factor        O(n³)   moderately diff.   very high
Solve         O(n²)   moderately diff.   high
Field Calc.   O(n)    embarrassing       high
52. BLAS 3 (Blocked) GEPP, using Delayed Updates

for ib = 1 to n-1 step b    ... Process matrix b columns at a time
    end = ib + b - 1        ... Point to end of block of b columns
    apply BLAS2 version of GEPP to get A(ib:n, ib:end) = P' * L' * U'
    ... let LL denote the strict lower triangular part of A(ib:end, ib:end) + I
    A(ib:end, end+1:n) = LL⁻¹ * A(ib:end, end+1:n)    ... update next b rows of U
    A(end+1:n, end+1:n) = A(end+1:n, end+1:n) - A(end+1:n, ib:end) * A(ib:end, end+1:n)
        ... apply delayed updates with a single matrix multiply with inner dimension b (BLAS 3)
53. BLAS2 version of Gaussian Elimination with Partial Pivoting (GEPP)

for i = 1 to n-1
    find and record k where |A(k,i)| = max_{i <= j <= n} |A(j,i)|
        ... i.e. largest entry in rest of column i
    if |A(k,i)| = 0
        exit with a warning that A is singular, or nearly so
    elseif k != i
        swap rows i and k of A
    end if
    A(i+1:n, i) = A(i+1:n, i) / A(i,i)    ... BLAS 1 (each quotient lies in [-1, 1])
    A(i+1:n, i+1:n) = A(i+1:n, i+1:n) - A(i+1:n, i) * A(i, i+1:n)
        ... BLAS 2, most work in this line
54. How to proceed
- Consider basic parallel matrix multiplication algorithms on simple layouts
- Performance modeling to choose the best one
  - Time (message) = latency + #words * time-per-word = α + n*β
- Briefly discuss block-cyclic layout
- PBLAS = Parallel BLAS
55. Parallel Matrix Multiply
- Computing C = C + A*B
- Using basic algorithm: 2n³ flops
- Variables are:
  - Data layout
  - Topology of machine
  - Scheduling communication
- Use of performance models for algorithm design
56. 1D Layout
- Assume matrices are n x n and n is divisible by p
- A(i) refers to the n by n/p block column that processor i owns (similarly for B(i) and C(i))
- B(i,j) is the n/p by n/p sublock of B(i)
  - in rows j*n/p through (j+1)*n/p
- Algorithm uses the formula
  C(i) = C(i) + A*B(i) = C(i) + Σ_j A(j)*B(j,i)
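A quick NumPy check of this identity (0-based block indices; col and sub are ad-hoc helpers, not part of any library):

    import numpy as np

    n, p = 8, 4                    # n divisible by p
    w = n // p                     # width of each block column
    A, B = np.random.rand(n, n), np.random.rand(n, n)

    def col(M, i):                 # M(i): the n-by-n/p block column owned by processor i
        return M[:, i*w:(i+1)*w]

    def sub(M, j, i):              # M(j,i): rows j*n/p .. (j+1)*n/p - 1 of block column i
        return M[j*w:(j+1)*w, i*w:(i+1)*w]

    i = 2
    lhs = A @ col(B, i)                                     # A * B(i)
    rhs = sum(col(A, j) @ sub(B, j, i) for j in range(p))   # Σ_j A(j) * B(j,i)
    print(np.allclose(lhs, rhs))  # True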
57. Matrix Multiply: 1D Layout on Bus or Ring
- Algorithm uses the formula
  C(i) = C(i) + A*B(i) = C(i) + Σ_j A(j)*B(j,i)
- First consider a bus-connected machine without broadcast: only one pair of processors can communicate at a time (ethernet)
- Second consider a machine with processors on a ring: all processors may communicate with nearest neighbors simultaneously
58. Naïve MatMul for 1D layout on Bus without Broadcast

Naïve algorithm:

C(myproc) = C(myproc) + A(myproc) * B(myproc, myproc)
for i = 0 to p-1
    for j = 0 to p-1 except i
        if (myproc == i) send A(i) to processor j
        if (myproc == j)
            receive A(i) from processor i
            C(myproc) = C(myproc) + A(i) * B(i, myproc)
        barrier

Cost of inner loop:
- computation: 2*n*(n/p)² = 2n³/p²
- communication: α + β*n²/p
59. Naïve MatMul (continued)

Cost of inner loop:
- computation: 2*n*(n/p)² = 2n³/p²
- communication: α + β*n²/p    ... approximately

- Only 1 pair of processors (i and j) is active on any iteration, and of those, only one is doing computation
  => the algorithm is almost entirely serial
- Running time = (p*(p-1) + 1)*computation + p*(p-1)*communication
  ≈ 2n³ + p²*α + p*n²*β
- This is worse than the serial time, and grows with p
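Plugging illustrative machine parameters into this model makes the conclusion visible; tflop, alpha, and beta below are invented numbers, not measurements:

    def naive_bus_time(n, p, tflop=1e-9, alpha=1e-5, beta=1e-8):
        # Approximate model from above: 2n^3 * tflop + p^2 * alpha + p * n^2 * beta
        return 2 * n**3 * tflop + p**2 * alpha + p * n**2 * beta

    def serial_time(n, tflop=1e-9):
        return 2 * n**3 * tflop

    n = 1000
    for p in (2, 4, 8, 16, 32):
        print(p, naive_bus_time(n, p) / serial_time(n))  # always > 1, and grows with p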
60. Better MatMul for 1D layout on a Processor Ring
- Proc i can communicate with Proc(i-1) and Proc(i+1) simultaneously, for all i

Copy A(myproc) into Tmp
C(myproc) = C(myproc) + Tmp * B(myproc, myproc)
for j = 1 to p-1
    Send Tmp to processor myproc+1 mod p
    Receive Tmp from processor myproc-1 mod p
    C(myproc) = C(myproc) + Tmp * B((myproc - j) mod p, myproc)

- Same idea as for gravity in the simple sharks and fish algorithm
- Time of inner loop = 2*(α + β*n²/p) + 2*n*(n/p)²
- Total Time = 2*n*(n/p)² + (p-1) * Time of inner loop
             ≈ 2n³/p + 2pα + 2βn²
- Optimal for 1D layout on Ring or Bus, even with broadcast:
  - Perfect speedup for arithmetic
  - A(myproc) must move to each other processor; costs at least (p-1) * cost of sending n*(n/p) words
- Parallel Efficiency = 2n³ / (p * Total Time) = 1/(1 + α*p²/(2n³) + β*p/(2n)) = 1/(1 + O(p/n))
- Grows to 1 as n/p increases (or α and β shrink)
61. MatMul with 2D Layout
- Consider processors in a 2D grid (physical or logical)
- Processors can communicate with 4 nearest neighbors
  - Broadcast along rows and columns

  p(0,0)  p(0,1)  p(0,2)
  p(1,0)  p(1,1)  p(1,2)
  p(2,0)  p(2,1)  p(2,2)
62. Cannon's Algorithm

C(i,j) = C(i,j) + Σ_k A(i,k) * B(k,j)
... assume s = sqrt(p) is an integer

forall i = 0 to s-1    ... "skew" A
    left-circular-shift row i of A by i
    ... so that A(i,j) is overwritten by A(i, (j+i) mod s)
forall i = 0 to s-1    ... "skew" B
    up-circular-shift column i of B by i
    ... so that B(i,j) is overwritten by B((i+j) mod s, j)
for k = 0 to s-1    ... sequential
    forall i = 0 to s-1 and j = 0 to s-1    ... all processors in parallel
        C(i,j) = C(i,j) + A(i,j) * B(i,j)
        left-circular-shift each row of A by 1
        up-circular-shift each column of B by 1
63. Communication in Cannon

C(1,2) = A(1,0)*B(0,2) + A(1,1)*B(1,2) + A(1,2)*B(2,2)
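An analogous NumPy simulation of Cannon's algorithm on an s-by-s grid of blocks (list reindexing plays the role of the circular shifts):

    import numpy as np

    def cannon(A, B, s):
        # Simulate Cannon's algorithm with an s-by-s grid of (n/s)-square blocks.
        n = A.shape[0]; w = n // s
        blk = lambda M, i, j: M[i*w:(i+1)*w, j*w:(j+1)*w]
        # Skewed local copies: A(i,j) <- A(i,(j+i) mod s), B(i,j) <- B((i+j) mod s, j)
        Ab = [[blk(A, i, (j + i) % s).copy() for j in range(s)] for i in range(s)]
        Bb = [[blk(B, (i + j) % s, j).copy() for j in range(s)] for i in range(s)]
        Cb = [[np.zeros((w, w)) for _ in range(s)] for _ in range(s)]
        for _ in range(s):
            for i in range(s):
                for j in range(s):
                    Cb[i][j] += Ab[i][j] @ Bb[i][j]   # all processors multiply in parallel
            Ab = [[Ab[i][(j + 1) % s] for j in range(s)] for i in range(s)]  # rows of A left by 1
            Bb = [[Bb[(i + 1) % s][j] for j in range(s)] for i in range(s)]  # cols of B up by 1
        return np.block(Cb)

    n, s = 9, 3
    A, B = np.random.rand(n, n), np.random.rand(n, n)
    print(np.allclose(cannon(A, B, s), A @ B))  # True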
64. Cost of Cannon's Algorithm

forall i = 0 to s-1    ... recall s = sqrt(p)
    left-circular-shift row i of A by i    ... cost = s*(α + β*n²/p)
forall i = 0 to s-1
    up-circular-shift column i of B by i    ... cost = s*(α + β*n²/p)
for k = 0 to s-1
    forall i = 0 to s-1 and j = 0 to s-1
        C(i,j) = C(i,j) + A(i,j) * B(i,j)    ... cost = 2*(n/s)³ = 2*n³/p^(3/2)
        left-circular-shift each row of A by 1    ... cost = α + β*n²/p
        up-circular-shift each column of B by 1    ... cost = α + β*n²/p

- Total Time = 2n³/p + 4s*α + 4β*n²/s
- Parallel Efficiency = 2n³ / (p * Total Time)
                      = 1/(1 + α * 2(s/n)³ + β * 2(s/n))
                      = 1/(1 + O(sqrt(p)/n))
- Grows to 1 as n/s = n/sqrt(p) = sqrt(data per processor) grows
- Better than the 1D layout, which had Efficiency = 1/(1 + O(p/n))
65. Drawbacks to Cannon
- Hard to generalize for:
  - p not a perfect square
  - A and B not square
  - dimensions of A, B not perfectly divisible by s = sqrt(p)
  - A and B not "aligned" in the way they are stored on processors
  - block-cyclic layouts
- Memory hog (extra copies of local matrices)
66. SUMMA = Scalable Universal Matrix Multiply Algorithm
- Slightly less efficient, but simpler and easier to generalize
- Presentation from van de Geijn and Watts
  - www.netlib.org/lapack/lawns/lawn96.ps
- Similar ideas appeared many times
- Used in practice in PBLAS = Parallel BLAS
  - www.netlib.org/lapack/lawns/lawn100.ps
67. SUMMA

[Figure: processor grid, with A(I,k) in processor row I, B(k,J) in processor column J, updating C(I,J)]

- I, J represent all rows, columns owned by a processor
- k is a single row or column (or a block of b rows or columns)
- C(I,J) = C(I,J) + Σ_k A(I,k) * B(k,J)
- Assume a pr by pc processor grid (pr = pc = 4 above)

for k = 0 to n-1    ... or n/b-1, where b is the block size
                    ... = # cols in A(I,k) and # rows in B(k,J)
    for all I = 1 to pr in parallel
        owner of A(I,k) broadcasts it to whole processor row
    for all J = 1 to pc in parallel
        owner of B(k,J) broadcasts it to whole processor column
    Receive A(I,k) into Acol
    Receive B(k,J) into Brow
    C(myproc, myproc) = C(myproc, myproc) + Acol * Brow
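The heart of SUMMA in NumPy (a serial sketch: on a real pr-by-pc grid each processor would receive the broadcast panels and update only its local block of C):

    import numpy as np

    def summa(A, B, b):
        # C is built from rank-b updates: C += A(:, k:k+b) @ B(k:k+b, :).
        n = A.shape[0]
        C = np.zeros((n, n))
        for k in range(0, n, b):
            Acol = A[:, k:k+b]     # broadcast along each processor row
            Brow = B[k:k+b, :]     # broadcast along each processor column
            C += Acol @ Brow       # each proc: its piece of Acol times its piece of Brow
        return C

    n = 8
    A, B = np.random.rand(n, n), np.random.rand(n, n)
    print(np.allclose(summa(A, B, b=2), A @ B))  # True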
68. SUMMA performance

for k = 0 to n/b-1
    for all I = 1 to s    ... (s = sqrt(p))
        owner of A(I,k) broadcasts it to whole processor row
            ... time = log s * (α + β*b*n/s), using a tree
    for all J = 1 to s
        owner of B(k,J) broadcasts it to whole processor column
            ... time = log s * (α + β*b*n/s), using a tree
    Receive A(I,k) into Acol
    Receive B(k,J) into Brow
    C(myproc, myproc) = C(myproc, myproc) + Acol * Brow
        ... time = 2*(n/s)²*b

- Total time = 2n³/p + α * log p * n/b + β * log p * n²/s
- Parallel Efficiency = 1/(1 + α * log p * p/(2b*n²) + β * log p * s/(2n))
- Same β term as Cannon, except for the log p factor
  - log p grows slowly, so this is ok
- Latency (α) term can be larger, depending on b
  - When b = 1, get α * log p * n
  - As b grows to n/s, the term shrinks to α * log p * s (log p times Cannon)
- Temporary storage grows like 2*b*n/s
- Can change b to trade off latency cost against memory
69. Summary of Parallel Matrix Multiply Algorithms
- 1D Layout
  - Bus without broadcast: slower than serial
  - Nearest-neighbor communication on a ring (or bus with broadcast): Efficiency = 1/(1 + O(p/n))
- 2D Layout
  - Cannon
    - Efficiency = 1/(1 + O(sqrt(p)/n))
    - Hard to generalize for general p, n, block cyclic, alignment
  - SUMMA
    - Efficiency = 1/(1 + O(log p * p/(b*n²) + log p * sqrt(p)/n))
    - Very general
    - b small => less memory, lower efficiency
    - b large => more memory, higher efficiency
  - Gustavson et al.
    - Efficiency = 1/(1 + O(p^(1/3)/n)) ??