Title: Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication (SpMV)
1Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication (SpMV)
- James Demmel
- www.cs.berkeley.edu/demmel/cs267_Spr10
2Berkeley Benchmarking and OPtimization (BeBOP)
- Prof. Katherine Yelick
- Current members
- Kaushik Datta, Ozan Demirlioglu, Mark Hoemmen, Shoaib Kamil, Rajesh Nishtala, Vasily Volkov, Sam Williams
- Previous members
- Hormozd Gahvari, Eun-Jin Im, Ankit Jain, Rich Vuduc, many undergrads
- Many results here are from current and previous students
- bebop.cs.berkeley.edu
3Outline
- Motivation for Automatic Performance Tuning
- Results for sparse matrix kernels
- OSKI Optimized Sparse Kernel Interface
- Tuning Higher Level Algorithms
- Future Work, Class Projects
- Future Related Lecture
- Sam Williams on tuning SpMV for multicore and other emerging architectures
- BeBOP (Berkeley Benchmarking and Optimization Group)
- Meets weekly, T 1-2, in 380 Soda
4Motivation for Automatic Performance Tuning
- Writing high performance software is hard
- Make programming easier while getting high speed
- Ideal: program in your favorite high-level language (Matlab, Python, PETSc) and get a high fraction of peak performance
- Reality: the best algorithm (and its implementation) can depend strongly on the problem, computer architecture, compiler, ...
- The best choice can depend on knowing a lot of applied mathematics and computer science
- How much of this can we teach?
- How much of this can we automate?
5Examples of Automatic Performance Tuning (1)
- Dense BLAS
- Sequential
- PHiPAC (UCB), then ATLAS (UTK) (used in Matlab)
- math-atlas.sourceforge.net/
- Internal vendor tools
- Fast Fourier Transform (FFT) variations
- Sequential and Parallel
- FFTW (MIT)
- www.fftw.org
- Digital Signal Processing
- SPIRAL www.spiral.net (CMU)
- Communication Collectives (UCB, UTK)
- Rose (LLNL), Bernoulli (Cornell), Telescoping Languages (Rice), ...
- More projects, conferences, government reports, ...
6Examples of Automatic Performance Tuning (2)
- What do dense BLAS, FFTs, signal processing, and MPI reductions have in common?
- Can do the tuning off-line, once per architecture and algorithm
- Can take as much time as necessary (hours, a week)
- At run-time, algorithm choice may depend on only a few parameters
- Matrix dimension, size of FFT, etc.
7Tuning Register Tile Sizes (Dense Matrix Multiply)
333 MHz Sun Ultra 2i: a 2-D slice of the 3-D tile-size space, with implementations color-coded by performance in Mflop/s. The machine has 16 registers, but the 2-by-3 tile size is fastest.
Needle in a haystack.
8Example Select a Matmul Implementation
9Example Support Vector Classification
10Machine Learning in Automatic Performance Tuning
- References
- "Statistical Models for Empirical Search-Based Performance Tuning," Richard Vuduc, J. Demmel, and Jeff A. Bilmes, International Journal of High Performance Computing Applications, 18(1), pp. 65-94, February 2004.
- "Predicting and Optimizing System Utilization and Performance via Statistical Machine Learning," Archana Ganapathi, Computer Science PhD Thesis, University of California, Berkeley, UCB//EECS-2009-181.
11Examples of Automatic Performance Tuning (3)
- What do dense BLAS, FFTs, signal processing, and MPI reductions have in common?
- Can do the tuning off-line, once per architecture and algorithm
- Can take as much time as necessary (hours, a week)
- At run-time, algorithm choice may depend on only a few parameters
- Matrix dimension, size of FFT, etc.
- Can't always do off-line tuning
- Algorithm and implementation may strongly depend on data known only at run-time
- Ex: the sparse matrix nonzero pattern determines both the best data structure and the implementation of sparse-matrix-vector-multiplication (SpMV)
- Part of the search for the best algorithm must be done (very quickly!) at run-time
12Source: Accelerator Cavity Design Problem (Ko, via Husbands)
13Linear Programming Matrix
14A Sparse Matrix You Encounter Every Day
15SpMV with Compressed Sparse Row (CSR) Storage
Matrix-vector multiply kernel: y(i) = y(i) + A(i,j)*x(j)

    for each row i
        for k = ptr[i] to ptr[i+1] - 1 do
            y[i] = y[i] + val[k] * x[ind[k]]
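As a concrete illustration of the kernel above, here is a minimal C sketch of CSR SpMV; the array names (ptr, ind, val) follow the slide, while the function name is ours.

    /* Sketch of SpMV y = y + A*x for an n-by-n matrix A in CSR format.
     * ptr[i]..ptr[i+1]-1 index the nonzeros of row i; ind[k] is the column
     * of val[k]. Illustrative only. */
    void spmv_csr(int n, const int *ptr, const int *ind, const double *val,
                  const double *x, double *y)
    {
        for (int i = 0; i < n; i++) {
            double yi = y[i];
            for (int k = ptr[i]; k < ptr[i+1]; k++)
                yi += val[k] * x[ind[k]];
            y[i] = yi;
        }
    }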
16Example: The Difficulty of Tuning
- n = 21200
- nnz = 1.5 M
- kernel: SpMV
- Source: NASA structural analysis problem
17Example: The Difficulty of Tuning
- n = 21200
- nnz = 1.5 M
- kernel: SpMV
- Source: NASA structural analysis problem
18Taking advantage of block structure in SpMV
- Bottleneck is the time to get the matrix from memory
- Only 2 flops for each nonzero in the matrix
- Don't store each nonzero with an index; instead store each nonzero r-by-c block with one index
- Storage drops by up to 2x if r*c >> 1 (all indices 32-bit quantities)
- Time to fetch the matrix from memory decreases
- Change both data structure and algorithm (see the sketch below)
- Need to pick r and c
- Need to change the algorithm accordingly
- In the example, is r = c = 8 the best choice?
- It minimizes storage, so it looks like a good idea
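A hedged C sketch of what the blocked data structure and algorithm change look like, using a fixed 2x2 block size (block CSR); the array names (brow_ptr, bind, bval) and the fixed block size are ours for illustration, not OSKI's.

    /* Sketch of register-blocked SpMV with 2x2 blocks (BCSR). brow_ptr[I]..
     * brow_ptr[I+1]-1 index the blocks of block row I; bind[K] is the block's
     * starting column; bval stores each 2x2 block contiguously (row-major).
     * Assumes the number of rows is a multiple of 2. */
    void spmv_bcsr_2x2(int nb_rows, const int *brow_ptr, const int *bind,
                       const double *bval, const double *x, double *y)
    {
        for (int I = 0; I < nb_rows; I++) {
            double y0 = 0.0, y1 = 0.0;            /* two rows of this block row */
            for (int K = brow_ptr[I]; K < brow_ptr[I+1]; K++) {
                const double *b = &bval[4*K];
                double x0 = x[bind[K]], x1 = x[bind[K] + 1];
                y0 += b[0]*x0 + b[1]*x1;          /* unrolled 2x2 multiply */
                y1 += b[2]*x0 + b[3]*x1;
            }
            y[2*I]     += y0;
            y[2*I + 1] += y1;
        }
    }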
19Speedups on Itanium 2: The Need for Search
[Figure: speedup profiles; performance in Mflop/s]
20Register Profile: Itanium 2
[Figure: register-blocking profile; color scale from 190 to 1190 Mflop/s]
21SpMV Performance (Matrix 2): Generation 1
[Figure: panels for Power3 - 13, Power4 - 14, Itanium 2 - 31, Itanium 1 - 7; performance labels from 100 Mflop/s to 1.1 Gflop/s]
22Register Profiles: IBM and Intel IA-64
[Figure: panels for Power3 - 17, Power4 - 16, Itanium 2 - 33, Itanium 1 - 8; performance labels from 107 Mflop/s to 1.2 Gflop/s]
23SpMV Performance (Matrix 2): Generation 2
[Figure: panels for Ultra 2i - 9, Ultra 3 - 5, Pentium III-M - 15, Pentium III - 19; performance labels from 35 to 120 Mflop/s]
24Register Profiles: Sun and Intel x86
[Figure: panels for Ultra 2i - 11, Ultra 3 - 5, Pentium III-M - 15, Pentium III - 21; performance labels from 35 to 122 Mflop/s]
25Another example of tuning challenges
- More complicated non-zero structure in general
- N = 16614
- NNZ = 1.1 M
26Zoom in to top corner
- More complicated non-zero structure in general
- N = 16614
- NNZ = 1.1 M
27 3x3 blocks look natural, but ...
- More complicated non-zero structure in general
- Example: 3x3 blocking
- Logical grid of 3x3 cells
- But this would lead to lots of fill-in
28Extra Work Can Improve Efficiency!
- More complicated non-zero structure in general
- Example: 3x3 blocking
- Logical grid of 3x3 cells
- Fill in explicit zeros
- Unroll 3x3 block multiplies
- Fill ratio = 1.5
- On Pentium III: 1.5x speedup!
- Actual Mflop rate is 1.5^2 = 2.25x higher
29Automatic Register Block Size Selection
- Selecting the r x c block size
- Off-line benchmark
- Precompute Mflops(r,c) using a dense matrix A, for each r x c
- Once per machine/architecture
- Run-time search
- Sample A to estimate Fill(r,c) for each r x c
- Run-time heuristic model
- Choose r, c to minimize estimated time, proportional to Fill(r,c) / Mflops(r,c), as in the sketch below
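A minimal C sketch of this heuristic, assuming the off-line table mflops[r][c] and the run-time estimates fill[r][c] are already available; the 8x8 block-size limit and all names are our assumptions for illustration.

    #define MAX_B 8   /* consider block sizes 1..8 in each dimension */

    /* mflops[r][c]: one-time off-line dense-matrix benchmark for r-by-c BCSR.
     * fill[r][c]:   estimated at run time by sampling A.
     * Estimated time per nonzero ~ fill/mflops; pick the minimizer. */
    void choose_block_size(const double mflops[MAX_B+1][MAX_B+1],
                           const double fill[MAX_B+1][MAX_B+1],
                           int *r_best, int *c_best)
    {
        double best = 1e300;
        *r_best = *c_best = 1;
        for (int r = 1; r <= MAX_B; r++)
            for (int c = 1; c <= MAX_B; c++) {
                double t = fill[r][c] / mflops[r][c];  /* relative time estimate */
                if (t < best) { best = t; *r_best = r; *c_best = c; }
            }
    }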
30Accurate and Efficient Adaptive Fill Estimation
- Idea: sample the matrix (see the sketch below)
- Fraction of matrix to sample: s in [0, 1]
- Cost: O(s * nnz)
- Control cost by controlling s
- Search happens at run-time, so the constant matters!
- Control s automatically by computing statistical confidence intervals
- Idea: monitor the variance
- Cost of tuning
- Lower bound: converting the matrix costs 5 to 40 unblocked SpMVs
- Heuristic: 1 to 11 SpMVs
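A simplified C sketch of the sampling idea, with a fixed sampling fraction s (the adaptive confidence-interval control described above is omitted); all names are illustrative.

    #include <stdlib.h>

    /* Estimate Fill(r,c) for an n-by-n CSR matrix by examining a random
     * fraction s of its block rows and counting the distinct r-by-c blocks
     * they touch. Fill = (blocks * r * c) / nnz_seen; cost is O(s * nnz). */
    double estimate_fill(int n, const int *ptr, const int *ind,
                         int r, int c, double s)
    {
        int nbcols = (n + c - 1) / c;
        int *last_tag = calloc(nbcols, sizeof(int));   /* 0 = never seen */
        long blocks = 0, nnz_seen = 0;
        for (int I = 0; I < (n + r - 1) / r; I++) {
            if ((double)rand() / RAND_MAX >= s) continue;  /* skip ~(1-s) of block rows */
            for (int i = I*r; i < I*r + r && i < n; i++)
                for (int k = ptr[i]; k < ptr[i+1]; k++) {
                    int bj = ind[k] / c;
                    nnz_seen++;
                    if (last_tag[bj] != I + 1) { last_tag[bj] = I + 1; blocks++; }
                }
        }
        free(last_tag);
        return nnz_seen ? (double)(blocks * r * c) / nnz_seen : 1.0;
    }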
31Accuracy of the Tuning Heuristics (1/4)
See p. 375 of Vuduc's thesis for the matrices.
NOTE: "fair" flops used (ops on explicit zeros not counted as work).
32Accuracy of the Tuning Heuristics (2/4)
33Accuracy of the Tuning Heuristics (3/4)
DGEMV
34Upper Bounds on Performance for blocked SpMV
- P = (flops) / (time)
- Flops = 2 * nnz(A)
- Lower bound on time: two main assumptions (the resulting bound is written out below)
- 1. Count memory ops only (streaming)
- 2. Count only compulsory and capacity misses; ignore conflicts
- Account for line sizes
- Account for matrix size and nnz
- Charge the minimum access latency αi at cache level Li, αmem at memory
- e.g., from Saavedra-Barrera and PMaC MAPS benchmarks
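Written out (a sketch of the model just described, in our own notation rather than a formula from the slide): with H_i the number of hits charged at cache level L_i and M the number of last-level misses charged to memory,

\[
  T_{\mathrm{lower}} \;=\; \sum_{i} \alpha_i\,H_i \;+\; \alpha_{\mathrm{mem}}\,M,
  \qquad
  P_{\mathrm{upper}} \;=\; \frac{2\,\mathrm{nnz}(A)}{T_{\mathrm{lower}}},
\]

where the hit and miss counts are derived from the matrix dimensions, nnz, fill ratio, and cache line sizes, counting compulsory and capacity misses only.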
35Example: L2 Misses on Itanium 2
Misses measured using PAPI [Browne '00]
36Example: Bounds on Itanium 2
37Example: Bounds on Itanium 2
38Example: Bounds on Itanium 2
39Summary of Other Performance Optimizations
- Optimizations for SpMV
- Register blocking (RB): up to 4x over CSR
- Variable block splitting: 2.1x over CSR, 1.8x over RB
- Diagonals: 2x over CSR
- Reordering to create dense structure + splitting: 2x over CSR
- Symmetry: 2.8x over CSR, 2.6x over RB
- Cache blocking: 2.8x over CSR
- Multiple vectors (SpMM): 7x over CSR
- And combinations ...
- Sparse triangular solve
- Hybrid sparse/dense data structure: 1.8x over CSR
- Higher-level kernels
- A*A^T*x, A^T*A*x: 4x over CSR, 1.8x over RB
- A^2*x: 2x over CSR, 1.5x over RB
- [Ax, A^2x, A^3x, ..., A^kx]
40Example: Sparse Triangular Factor
- Raefsky4 (structural problem), SuperLU + colmmd
- N = 19779, nnz = 12.6 M
41Cache Optimizations for A*A^T*x
- Cache-level: interleave multiplication by A and A^T
- Only fetch A from memory once (see the sketch below)
- Register-level: treat a_i^T as an r-by-c block row, or a diagonal row
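A hedged C sketch of the cache-level interleaving, written here for the A^T*A*x orientation with A stored by rows in CSR; the same identity, y = sum_i a_i * (a_i^T x), gives A*A^T*x when A is stored by columns. Register blocking of each row is omitted, and all names are illustrative.

    /* y = A^T * (A*x) for an m-by-n CSR matrix A: each row a_i is read from
     * memory once and used twice (dot product, then axpy). */
    void ata_x_interleaved(int m, int n, const int *ptr, const int *ind,
                           const double *val, const double *x, double *y)
    {
        for (int j = 0; j < n; j++) y[j] = 0.0;
        for (int i = 0; i < m; i++) {
            double t = 0.0;
            for (int k = ptr[i]; k < ptr[i+1]; k++)   /* t = a_i^T x */
                t += val[k] * x[ind[k]];
            for (int k = ptr[i]; k < ptr[i+1]; k++)   /* y += t * a_i */
                y[ind[k]] += t * val[k];
        }
    }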
42Example: Combining Optimizations (1/2)
- Register blocking, symmetry, multiple (k) vectors
- Three low-level tuning parameters: r, c, v
[Figure: diagram of Y = A*X annotated with parameters k, v, r, c]
43Example: Combining Optimizations (2/2)
- Register blocking, symmetry, and multiple vectors [Ben Lee @ UCB]
- Symmetric, blocked, 1 vector
- Up to 2.6x over nonsymmetric, blocked, 1 vector
- Symmetric, blocked, k vectors
- Up to 2.1x over nonsymmetric, blocked, k vectors
- Up to 7.3x over nonsymmetric, nonblocked, 1 vector
- Symmetric storage: up to 64.7% savings
44Potential Impact on Applications: Omega3P
- Application: accelerator cavity design [Ko]
- Relevant optimization techniques
- Symmetric storage
- Register blocking
- Reordering, to create more dense blocks
- Reverse Cuthill-McKee ordering to reduce bandwidth
- Do breadth-first search, number nodes in reverse order visited
- Traveling Salesman Problem-based ordering to create blocks
- Nodes = columns of A
- Weights(u, v) = number of nonzeros u and v have in common
- Tour = ordering of columns
- Choose the maximum-weight tour
- See [Pinar & Heath '97]
- 2.1x speedup on Power4 (caveat: SpMV not the bottleneck)
45Source: Accelerator Cavity Design Problem (Ko, via Husbands)
46Post-RCM Reordering
47 100x100 Submatrix Along Diagonal
48Microscopic Effect of RCM Reordering
Before: green + red; After: green + blue
49Microscopic Effect of Combined RCM+TSP Reordering
Before: green + red; After: green + blue
50(Omega3P)
51Optimized Sparse Kernel Interface - OSKI
- Provides sparse kernels automatically tuned for the user's matrix and machine
- BLAS-style functionality: SpMV (A*x and A^T*y), triangular solve (TrSV)
- Hides the complexity of run-time tuning
- Includes new, faster locality-aware kernels: A^T*A*x, A^k*x
- Faster than standard implementations
- Up to 4x faster matvec, 1.8x trisolve, 4x A^T*A*x
- For advanced users and solver library writers
- Available as a stand-alone library (OSKI 1.0.1h, 6/07)
- Available as a PETSc extension (OSKI-PETSc .1d, 3/06)
- bebop.cs.berkeley.edu/oski
52How the OSKI Tunes (Overview)
[Diagram]
Library install-time (offline): 1. Build for target architecture; 2. Benchmark -> generated code variants + benchmark data.
Application run-time: inputs are the matrix, a workload from program monitoring, and history; 1. Evaluate heuristic models; 2. Select data structure and code -> returns to the user a matrix handle for kernel calls.
Extensibility: advanced users may write and dynamically add code variants and heuristic models to the system.
53How the OSKI Tunes (Overview)
- At library build/install-time
- Pre-generate and compile code variants into dynamic libraries
- Collect benchmark data
- Measures and records the speed of possible sparse data structure and code variants on the target architecture
- Installation process uses standard, portable GNU AutoTools
- At run-time
- Library tunes using heuristic models
- Models analyze the user's matrix and the benchmark data to choose an optimized data structure and code
- Non-trivial tuning cost: up to 40 mat-vecs
- Library limits the time it spends tuning based on the estimated workload
- Provided by the user or inferred by the library
- User may reduce cost by saving tuning results for the application on future runs with the same or a similar matrix
54Optimizations in OSKI, so far
- Fully automatic heuristics for
- Sparse matrix-vector multiply
- Register-level blocking
- Register-level blocking + symmetry + multiple vectors
- Cache-level blocking
- Sparse triangular solve with register-level blocking and the switch-to-dense optimization
- Sparse A^T*A*x with register-level blocking
- User may select other optimizations manually
- Diagonal storage optimizations, reordering, splitting; tiled matrix powers kernel (A^k*x)
- All available in dynamic libraries
- Accessible via a high-level embedded script language
- Plug-in extensibility
- Very advanced users may write their own heuristics, create new data structures/code variants, and dynamically add them to the system
55How to Call OSKI Basic Usage
- May gradually migrate existing apps
- Step 1: Wrap existing data structures
- Step 2: Make BLAS-like kernel calls

    int *ptr, *ind;  double *val;   /* Matrix, in CSR format */
    double *x, *y;                  /* Let x and y be two dense vectors */

    /* Compute y = beta*y + alpha*A*x, 500 times */
    for (i = 0; i < 500; i++)
        my_matmult(ptr, ind, val, alpha, x, beta, y);
56How to Call OSKI Basic Usage
- May gradually migrate existing apps
- Step 1: Wrap existing data structures
- Step 2: Make BLAS-like kernel calls

    int *ptr, *ind;  double *val;   /* Matrix, in CSR format */
    double *x, *y;                  /* Let x and y be two dense vectors */

    /* Step 1: Create OSKI wrappers around this data */
    oski_matrix_t A_tunable = oski_CreateMatCSR(ptr, ind, val, num_rows,
        num_cols, SHARE_INPUTMAT, ...);
    oski_vecview_t x_view = oski_CreateVecView(x, num_cols, UNIT_STRIDE);
    oski_vecview_t y_view = oski_CreateVecView(y, num_rows, UNIT_STRIDE);

    /* Compute y = beta*y + alpha*A*x, 500 times */
    for (i = 0; i < 500; i++)
        my_matmult(ptr, ind, val, alpha, x, beta, y);
57How to Call OSKI Basic Usage
- May gradually migrate existing apps
- Step 1: Wrap existing data structures
- Step 2: Make BLAS-like kernel calls

    int *ptr, *ind;  double *val;   /* Matrix, in CSR format */
    double *x, *y;                  /* Let x and y be two dense vectors */

    /* Step 1: Create OSKI wrappers around this data */
    oski_matrix_t A_tunable = oski_CreateMatCSR(ptr, ind, val, num_rows,
        num_cols, SHARE_INPUTMAT, ...);
    oski_vecview_t x_view = oski_CreateVecView(x, num_cols, UNIT_STRIDE);
    oski_vecview_t y_view = oski_CreateVecView(y, num_rows, UNIT_STRIDE);

    /* Compute y = beta*y + alpha*A*x, 500 times */
    for (i = 0; i < 500; i++)
        oski_MatMult(A_tunable, OP_NORMAL, alpha, x_view, beta, y_view);  /* Step 2 */
58How to Call OSKI Tune with Explicit Hints
- User calls the tune routine
- May provide explicit tuning hints (OPTIONAL)

    oski_matrix_t A_tunable = oski_CreateMatCSR(...);
    /* ... */

    /* Tell OSKI we will call SpMV 500 times (workload hint) */
    oski_SetHintMatMult(A_tunable, OP_NORMAL, alpha, x_view, beta, y_view, 500);
    /* Tell OSKI we think the matrix has 8x8 blocks (structural hint) */
    oski_SetHint(A_tunable, HINT_SINGLE_BLOCKSIZE, 8, 8);

    oski_TuneMat(A_tunable);   /* Ask OSKI to tune */

    for (i = 0; i < 500; i++)
        oski_MatMult(A_tunable, OP_NORMAL, alpha, x_view, beta, y_view);
59How the User Calls OSKI Implicit Tuning
- Ask the library to infer the workload
- Library profiles all kernel calls
- May periodically re-tune

    oski_matrix_t A_tunable = oski_CreateMatCSR(...);
    /* ... */

    for (i = 0; i < 500; i++) {
        oski_MatMult(A_tunable, OP_NORMAL, alpha, x_view, beta, y_view);
        oski_TuneMat(A_tunable);   /* Ask OSKI to tune */
    }
60Quick-and-dirty Parallelism: OSKI-PETSc
- Extend PETSc's distributed-memory SpMV (MATMPIAIJ)
- PETSc
- Each process stores diag (all-local) and off-diag submatrices
- OSKI-PETSc
- Add OSKI wrappers
- Each submatrix tuned independently
[Figure: matrix rows partitioned across processes p0-p3]
61OSKI-PETSc Proof-of-Concept Results
- Matrix 1: accelerator cavity design (R. Lee @ SLAC)
- N = 1 M, 40 M non-zeros
- 2x2 dense block substructure
- Symmetric
- Matrix 2: linear programming (Italian Railways)
- Short-and-fat: 4k x 1M, 11M non-zeros
- Highly unstructured
- Big speedup from cache-blocking; no native PETSc format
- Evaluation machine: Xeon cluster
- Peak 4.8 Gflop/s per node
62Accelerator Cavity Matrix
63OSKI-PETSc Performance Accel. Cavity
64Linear Programming Matrix
65OSKI-PETSc Performance LP Matrix
66Tuning Higher Level Algorithms than SpMV
- We almost always do many SpMVs, not just one
- Krylov Subspace Methods (KSMs) for Ax = b, Ax = λx
- Conjugate Gradients, GMRES, Lanczos, ...
- Do a sequence of k SpMVs to get vectors x1, ..., xk
- Find the best solution x as a linear combination of x1, ..., xk
- Main cost is the k SpMVs
- Since communication usually dominates, can we do better?
- Goal: make communication cost independent of k
- Parallel case: O(log P) messages, not O(k log P) - optimal
- Same bandwidth as before
- Sequential case: O(1) messages and bandwidth, not O(k) - optimal
- Achievable when the matrix is partitionable with low surface-to-volume ratio
67Locally Dependent Entries for [x, Ax], A tridiagonal, 2 processors
[Figure: Proc 1 / Proc 2] Can be computed without communication
68Locally Dependent Entries for [x, Ax, A^2x], A tridiagonal, 2 processors
[Figure: Proc 1 / Proc 2] Can be computed without communication
69Locally Dependent Entries for [x, Ax, ..., A^3x], A tridiagonal, 2 processors
[Figure: Proc 1 / Proc 2] Can be computed without communication
70Locally Dependent Entries for [x, Ax, ..., A^4x], A tridiagonal, 2 processors
[Figure: Proc 1 / Proc 2] Can be computed without communication
71Locally Dependent Entries for [x, Ax, ..., A^8x], A tridiagonal, 2 processors
[Figure: Proc 1 / Proc 2] Can be computed without communication; k = 8-fold reuse of A (see the sketch below)
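A hedged C sketch of the "locally dependent" part of this computation for tridiagonal A: one processor owns entries lo..hi of each vector and, as in the figures, can fill in a shrinking triangle of entries using only data it already owns, with no communication and k-fold reuse of the three diagonals. The array names, global-index addressing, and the assumption that the missing boundary entries are obtained as in the next slides are all ours.

    /* V[j][i] holds (A^j x)(i); V[0] = x is owned on [lo, hi].
     * dl, d, du are the sub-, main and super-diagonals of A. */
    void powers_tridiag_local(int lo, int hi, int k,
                              const double *dl, const double *d, const double *du,
                              double **V)
    {
        for (int j = 1; j <= k; j++)
            for (int i = lo + j; i <= hi - j; i++)   /* computable without communication */
                V[j][i] = dl[i]*V[j-1][i-1] + d[i]*V[j-1][i] + du[i]*V[j-1][i+1];
    }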
72Remotely Dependent Entries for [x, Ax, ..., A^8x], A tridiagonal, 2 processors
[Figure: Proc 1 / Proc 2]
One message to get the data needed to compute the remotely dependent entries, not k = 8 messages. Minimizes the number of messages (latency cost). Price: redundant work proportional to the surface/volume ratio.
73Fewer Remotely Dependent Entries for [x, Ax, ..., A^8x], A tridiagonal, 2 processors
[Figure: Proc 1 / Proc 2]
Reduce redundant work by half
74Remotely Dependent Entries for [x, Ax, A^2x, A^3x], 2D Laplacian
75Remotely Dependent Entries for [x, Ax, A^2x, A^3x], A irregular, multiple processors
76Sequential [x, Ax, ..., A^4x], with memory hierarchy
One read of the matrix from slow memory, not k = 4 reads. Minimizes words moved (bandwidth cost). No redundant work.
77Performance Results
- Measured multicore (Clovertown) speedups up to 6.4x
- Measured/modeled sequential out-of-core speedup up to 3x
- Modeled parallel Petascale speedup up to 6.9x
- Modeled parallel Grid speedup up to 22x
- Sequential speedup is due to bandwidth; works for many problem sizes
- Parallel speedup is due to latency; works for smaller problems on many processors
78Speedups on Intel Clovertown (8 core)
79Avoiding Communication in Iterative Linear Algebra
- k steps of a typical iterative solver for sparse Ax = b or Ax = λx
- Does k SpMVs with a starting vector
- Finds the best solution among all linear combinations of these k+1 vectors
- Many such Krylov Subspace Methods
- Conjugate Gradients, GMRES, Lanczos, Arnoldi, ...
- Goal: minimize communication in Krylov Subspace Methods
- Assume the matrix is well-partitioned, with modest surface-to-volume ratio
- Parallel implementation
- Conventional: O(k log p) messages, because of k calls to SpMV
- New: O(log p) messages - optimal
- Serial implementation
- Conventional: O(k) moves of data from slow to fast memory
- New: O(1) moves of data - optimal
- Lots of speedup possible (modeled and measured)
- Price: some redundant computation
- Much prior work
- See bebop.cs.berkeley.edu
- CG: [van Rosendale '83], [Chronopoulos and Gear '89]
- GMRES: [Walker '88], [Joubert and Carey '92], [Bai et al. '94]
80Minimizing Communication of GMRES to solve Ax = b
- GMRES: find x in span{b, Ab, ..., A^kb} minimizing ||Ax - b||_2
- Cost of k steps of standard GMRES vs. new GMRES

    Standard GMRES:
        for i = 1 to k
            w = A * v(i-1)
            MGS(w, v(0), ..., v(i-1))
            update v(i), H
        endfor
        solve LSQ problem with H
    Sequential: words_moved = O(k*nnz) from SpMV + O(k^2*n) from MGS
    Parallel: messages = O(k) from SpMV + O(k^2 log p) from MGS

    Communication-avoiding GMRES:
        W = [v, Av, A^2v, ..., A^kv]
        [Q, R] = TSQR(W)   - "Tall Skinny QR"
        Build H from R, solve LSQ problem
    Sequential: words_moved = O(nnz) from SpMV + O(k*n) from TSQR
    Parallel: messages = O(1) from computing W + O(log p) from TSQR

- Oops: W is from the power method, precision lost!
81Monomial basis [Ax, ..., A^kx] fails to converge
A different polynomial basis does converge
82Speedups of GMRES on 8-core Intel Clovertown
Requires co-tuning kernels [MHDY09]
83Extensions
- Other Krylov methods
- Arnoldi, CG, Lanczos, ...
- Preconditioning
- Solve MAx = Mb, where the preconditioning matrix M is chosen to make the system easier
- M approximates A^-1 somehow, but cheaply, to accelerate convergence
- Cheap as long as contributions from distant parts of the system can be compressed
- Sparsity
- Low rank
- No implementations yet (class projects!)
84Design Space for [x, Ax, ..., A^kx] (1/3)
- Mathematical operation
- How many vectors to keep
- All: Krylov Subspace Methods
- Keep last vector A^kx only (Jacobi, Gauss-Seidel)
- Improving conditioning of the basis
- W = [x, p1(A)x, p2(A)x, ..., pk(A)x]
- pi(A) = degree-i polynomial chosen to reduce cond(W)
- Preconditioning (Ay = b -> MAy = Mb)
- [x, Ax, MAx, AMAx, MAMAx, ..., (MA)^kx]
85Design Space for [x, Ax, ..., A^kx] (2/3)
- Representation of sparse A
- Zero pattern may be explicit or implicit
- Nonzero entries may be explicit or implicit
- Implicit -> save memory and communication (a stencil example follows below)

                      | Explicit pattern       | Implicit pattern
    Explicit nonzeros | General sparse matrix  | Image segmentation
    Implicit nonzeros | Laplacian(graph),      | Stencil matrix,
                      | Multigrid (AMR)        | e.g. tridiag(-1,2,-1)

- Representation of dense preconditioners M
- Low-rank off-diagonal blocks (semiseparable)
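For the "implicit pattern, implicit nonzeros" corner of the table above, a small C sketch of applying the stencil matrix tridiag(-1, 2, -1) with no stored matrix at all, which is why that representation saves memory traffic and communication; the function name is ours.

    /* y = A*x for A = tridiag(-1, 2, -1): no index or value arrays are read. */
    void stencil_tridiag_apply(int n, const double *x, double *y)
    {
        for (int i = 0; i < n; i++) {
            double left  = (i > 0)     ? x[i-1] : 0.0;
            double right = (i < n - 1) ? x[i+1] : 0.0;
            y[i] = -left + 2.0 * x[i] - right;
        }
    }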
86Design Space for [x, Ax, ..., A^kx] (3/3)
- Parallel implementation
- From simple indexing, with redundant flops proportional to the surface/volume ratio
- To complicated indexing, with fewer redundant flops
- Sequential implementation
- Depends on whether the vectors fit in fast memory
- Reordering rows, columns of A
- Important in both parallel and sequential cases
- Can be reduced to a pair of Traveling Salesman Problems
- Plus all the optimizations for one SpMV!
87Summary
- Communication-Avoiding Linear Algebra (CALA)
- Lots of related work
- Some going back to 1960s
- Reports discuss this comprehensively, not here
- Our contributions
- Several new algorithms, improvements on old ones
- Preconditioning
- Unifying parallel and sequential approaches to avoiding communication
- The time for these algorithms has come, because of growing communication costs
- Why avoid communication just for linear algebra motifs?
88Possible Class Projects
- Come to BeBOP meetings (T 1-2:30, 380 Soda)
- Experiment with SpMV on GPU
- Which optimizations are most effective?
- Try to speed up particular matrices of interest
- Data mining
- Experiment with the new [x, Ax, ..., A^kx] kernel
- GPU, multicore, distributed memory
- On matrices of interest
- Bottom solver in multigrid / AMR (Chombo)
- Experiment with solvers using this kernel
- New Krylov subspace methods, preconditioning
- Experiment with new frameworks (SPF)
- See proposals for details
89Extra Slides
90Optimizing Communication Complexity of Sparse Solvers
- Need to modify high-level algorithms to use the new kernel
- Example: GMRES for Ax = b where A = 2D Laplacian
- x lives on an n-by-n mesh
- Partitioned on a p^(1/2)-by-p^(1/2) processor grid
- A has a 5-point stencil (Laplacian)
- (Ax)(i,j) = linear_combination(x(i,j), x(i,j±1), x(i±1,j))
- Ex: 18-by-18 mesh on a 3-by-3 processor grid
91Minimizing Communication
- What is the cost (flops, words, messages) of s steps of standard GMRES?

    GMRES, ver. 1:
        for i = 1 to s
            w = A * v(i-1)
            MGS(w, v(0), ..., v(i-1))
            update v(i), H
        endfor
        solve LSQ problem with H

[Figure: n/p^(1/2)-by-n/p^(1/2) local mesh block per processor]
- Cost(A * v) = s * (9n^2/p flops, 4n/p^(1/2) words, 4 messages)
- Cost(MGS = Modified Gram-Schmidt) = s^2/2 * (4n^2/p flops, log p words, log p messages)
- Total cost ~ Cost(A * v) + Cost(MGS)
- Can we reduce the latency?
92Minimizing Communication
- Cost(GMRES, ver. 1) = Cost(A*v) + Cost(MGS)
  = (9sn^2/p, 4sn/p^(1/2), 4s) + (2s^2n^2/p, s^2 log p / 2, s^2 log p / 2)
- How much latency cost from A*v can you avoid? Almost all
93Minimizing Communication
- Cost(GMRES, ver. 2) = Cost(W) + Cost(MGS)
  = (9sn^2/p, 4sn/p^(1/2), 8) + (2s^2n^2/p, s^2 log p / 2, s^2 log p / 2)
- How much latency cost from MGS can you avoid? Almost all
94Minimizing Communication
- Cost(GMRES, ver. 2) = Cost(W) + Cost(MGS)
  = (9sn^2/p, 4sn/p^(1/2), 8) + (2s^2n^2/p, s^2 log p / 2, s^2 log p / 2)
- How much latency cost from MGS can you avoid? Almost all

    GMRES, ver. 3:
        W = [v, Av, A^2v, ..., A^sv]
        [Q, R] = TSQR(W)   - "Tall Skinny QR" (see Lecture 11)
        Build H from R, solve LSQ problem
95Minimizing Communication
- Cost(GMRES, ver. 2) = Cost(W) + Cost(MGS)
  = (9sn^2/p, 4sn/p^(1/2), 8) + (2s^2n^2/p, s^2 log p / 2, s^2 log p / 2)
- How much latency cost from MGS can you avoid? Almost all

    GMRES, ver. 3:
        W = [v, Av, A^2v, ..., A^sv]
        [Q, R] = TSQR(W)   - "Tall Skinny QR" (see Lecture 11)
        Build H from R, solve LSQ problem
96(No Transcript)
97Minimizing Communication
- Cost(GMRES, ver. 3) = Cost(W) + Cost(TSQR)
  = (9sn^2/p, 4sn/p^(1/2), 8) + (2s^2n^2/p, s^2 log p / 2, log p)
- Latency cost is independent of s, just log p - optimal
- Oops: W is from the power method, so precision is lost. What to do?
- Use a different polynomial basis
- Not the monomial basis W = [v, Av, A^2v, ...]; instead
- Newton basis: W_N = [v, (A - θ1 I)v, (A - θ2 I)(A - θ1 I)v, ...], or
- Chebyshev basis: W_C = [v, T1(v), T2(v), ...]
98(No Transcript)
99Tuning Higher Level Algorithms
- So far we have tuned a single sparse matrix kernel
- y = A^T*A*x, motivated by a higher-level algorithm (SVD)
- What can we do by extending tuning to a higher level?
- Consider Krylov subspace methods for Ax = b, Ax = λx
- Conjugate Gradients (CG), GMRES, Lanczos, ...
- Inner loop does y = Ax, dot products, saxpys, scalar ops
- Inner loop costs at least O(1) messages
- k iterations cost at least O(k) messages
- Our goal: show how to do k iterations with O(1) messages
- Possible payoff: make Krylov subspace methods much faster on machines with slow networks
- Memory bandwidth improvements too (not discussed)
- Obstacles: numerical stability, preconditioning, ...
100Krylov Subspace Methods for Solving Ax = b
- Compute a basis for a subspace V by doing y = Ax k times
- Find the best solution in that Krylov subspace V
- Given starting vector x1, V is spanned by x2 = Ax1, x3 = Ax2, ..., xk = Ax(k-1)
- GMRES finds an orthogonal basis of V by Gram-Schmidt, so it actually does a different set of SpMVs than in the last bullet
101Example Standard GMRES
- r = b - A*x1, β = length(r), v1 = r / β       ... length(r) = sqrt(Σ ri^2)
- for m = 1 to k do
-   w = A*vm                                    ... at least O(1) messages
-   for i = 1 to m do                           ... Gram-Schmidt
-     h(i,m) = dotproduct(vi, w)                ... at least O(1) messages, or log(p)
-     w = w - h(i,m)*vi
-   end for
-   h(m+1,m) = length(w)                        ... at least O(1) messages, or log(p)
-   v(m+1) = w / h(m+1,m)
- end for
- find y minimizing length(H_k*y - β*e1)        ... small, local problem
- new x = x1 + V_k*y, where V_k = [v1, v2, ..., vk]

O(k^2), or O(k^2 log p), messages altogether
102Example: Computing [Ax, A^2x, A^3x, ..., A^kx] for A tridiagonal
A different basis for the same Krylov subspace. What can Proc 1 compute without communication?
[Figure: Proc 1 / Proc 2 ownership of x(1:30), (Ax)(1:30), (A^2x)(1:30), ..., (A^8x)(1:30)]
103Example: Computing [Ax, A^2x, A^3x, ..., A^kx] for A tridiagonal
Computing the missing entries with 1 communication and redundant work
[Figure: Proc 1 / Proc 2 ownership of x(1:30), (Ax)(1:30), (A^2x)(1:30), ..., (A^8x)(1:30)]
104Example: Computing [Ax, A^2x, A^3x, ..., A^kx] for A tridiagonal
Saving half the redundant work
[Figure: Proc 1 / Proc 2 ownership of x(1:30), (Ax)(1:30), (A^2x)(1:30), ..., (A^8x)(1:30)]
105Example: Computing [Ax, A^2x, A^3x, ..., A^kx] for a Laplacian
A = 5-point Laplacian in 2D; communicated points for k = 3 shown
106Latency-Avoiding GMRES (1)
- r = b - A*x1, β = length(r), v1 = r / β        ... O(log p) messages
- W_(k+1) = [v1, A*v1, A^2*v1, ..., A^k*v1]      ... O(1) messages
- [Q, R] = qr(W_(k+1))                           ... QR decomposition, O(log p) messages
- H_k = R(:, 2:k+1) * (R(1:k, 1:k))^-1           ... small, local problem
- find y minimizing length(H_k*y - β*e1)         ... small, local problem
- new x = x1 + Q_k*y                             ... local problem

O(log p) messages altogether - independent of k
107Latency-Avoiding GMRES (2)
- [Q, R] = qr(W_(k+1))                           ... QR decomposition, O(log p) messages
- Easy, but not so stable, way to do it:
- X(myproc) = W_(k+1)^T(myproc) * W_(k+1)(myproc)
-                                                ... local computation
- Y = sum_reduction(X(myproc))                   ... O(log p) messages
-                                                ... Y = W_(k+1)^T * W_(k+1)
- R = (cholesky(Y))^T                            ... small, local computation
- Q(myproc) = W_(k+1)(myproc) * R^-1             ... local computation
108Numerical Example (1)
Diagonal matrix with n = 1000, A_ii from 1 down to 10^-5. Instability as k grows, after many iterations.
109Numerical Example (2)
Partial remedy: restarting periodically (every 120 iterations). Other remedies: higher precision, a different basis than [v, Av, ..., A^kv].
110Operation Counts for [Ax, A^2x, A^3x, ..., A^kx] on p processors

    Problem (per-proc cost)        Standard                                    Optimized
    1D mesh (tridiagonal)
      messages                     2k                                          2
      words sent                   2k                                          2k
      flops                        5kn/p                                       5kn/p + 5k^2
      memory                       (k+4)n/p                                    (k+4)n/p + 8k
    3D mesh (27-point stencil)
      messages                     26k                                         26
      words sent                   6kn^2p^(-2/3) + 12knp^(-1/3) + O(k)         6kn^2p^(-2/3) + 12k^2np^(-1/3) + O(k^3)
      flops                        53kn^3/p                                    53kn^3/p + O(k^2n^2p^(-2/3))
      memory                       (k+28)n^3/p + 6n^2p^(-2/3) + O(np^(-1/3))   (k+28)n^3/p + 168kn^2p^(-2/3) + O(k^2np^(-1/3))
111Summary and Future Work
- Dense
- LAPACK
- ScaLAPACK
- Communication primitives
- Sparse
- Kernels, Stencils
- Higher level algorithms
- All of the above on new architectures
- Vector, SMPs, multicore, Cell,
- High level support for tuning
- Specification language
- Integration into compilers
112Extra Slides
113A Sparse Matrix You Encounter Every Day
Who am I?
I am a Big Repository Of useful And useless Facts alike. Who am I? (Hint: Not your e-mail inbox.)
114What about the Google Matrix?
- Google approach
- Approx. once a month: rank all pages using the connectivity structure
- Find the dominant eigenvector of a matrix
- At query time, return the list of pages ordered by rank
- Matrix A = αG + (1-α)(1/n)uu^T
- Markov model: surfer follows a link with probability α, jumps to a random page with probability 1-α
- G is the n x n connectivity matrix, n ~ billions
- g_ij is non-zero if page i links to page j
- Normalized so each column sums to 1
- Very sparse: about 7-8 non-zeros per row (power-law distribution)
- u is a vector of all 1 values
- Steady-state probability x_i of landing on page i is the solution to x = Ax
- Approximate x by the power method: x ~ A^k x0 (see the sketch below)
- In practice, k ~ 25
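A hedged C sketch of the power method just described, applying A = alpha*G + (1-alpha)*(1/n)*u*u^T implicitly: one CSR SpMV with G plus a rank-one correction using sum(x). This illustrates the slide's formulation, not Google's actual code; names are ours.

    #include <stdlib.h>

    void pagerank_power(int n, const int *ptr, const int *ind, const double *val,
                        double alpha, int iters, double *x)
    {
        double *t = malloc(n * sizeof(double));
        for (int it = 0; it < iters; it++) {        /* in practice ~25 iterations */
            double sum = 0.0;                       /* u^T x, with u = all ones */
            for (int i = 0; i < n; i++) sum += x[i];
            for (int i = 0; i < n; i++) {
                double gx = 0.0;                    /* (G*x)(i) via CSR SpMV */
                for (int k = ptr[i]; k < ptr[i+1]; k++)
                    gx += val[k] * x[ind[k]];
                t[i] = alpha * gx + (1.0 - alpha) * sum / n;
            }
            for (int i = 0; i < n; i++) x[i] = t[i];
        }
        free(t);
    }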
115Current Work
- Public software release
- Impact on library designs: Sparse BLAS, Trilinos, PETSc, ...
- Integration in large-scale applications
- DOE: accelerator design; plasma physics
- Geophysical simulation based on Block Lanczos (A^T*A*X; LBL)
- Systematic heuristics for data structure selection?
- Evaluation of emerging architectures
- Revisiting vector micros
- Other sparse kernels
- Matrix triple products, A^k*x
- Parallelism
- Sparse benchmarks (with UTK) [Gahvari, Hoemmen]
- Automatic tuning of MPI collective ops [Nishtala, et al.]
116Summary of High-Level Themes
- Kernel-centric optimization
- Vs. basic block, trace, path optimization, for
instance - Aggressive use of domain-specific knowledge
- Performance bounds modeling
- Evaluating software quality
- Architectural characterizations and consequences
- Empirical search
- Hybrid on-line/run-time models
- Statistical performance models
- Exploit information from sampling, measuring
117Related Work
- My bibliography 337 entries so far
- Sample area 1 Code generation
- Generative generic programming
- Sparse compilers
- Domain-specific generators
- Sample area 2 Empirical search-based tuning
- Kernel-centric
- linear algebra, signal processing, sorting, MPI,
- Compiler-centric
- profiling FDO, iterative compilation,
superoptimizers, self-tuning compilers,
continuous program optimization
118Future Directions (A Bag of Flaky Ideas)
- Composable code generators and search spaces
- New application domains
- PageRank: multilevel block algorithms for topic-sensitive search?
- New kernels: cryptokernels
- Rich mathematical structure germane to performance; lots of hardware
- New tuning environments
- Parallel, Grid, whole systems
- Statistical models of application performance
- Statistical learning of concise parametric models from traces, for architectural evaluation
- Compiler/automatic derivation of parametric models
119Possible Future Work
- Different Architectures
- New FP instruction sets (SSE2)
- SMP / multicore platforms
- Vector architectures
- Different Kernels
- Higher Level Algorithms
- Parallel versions of kernels, with optimized communication
- Block algorithms (e.g., Lanczos)
- XBLAS (extra precision)
- Produce Benchmarks
- Augment HPCC Benchmark
- Make it possible to combine optimizations easily for any kernel
- Related tuning activities (LAPACK & ScaLAPACK)
120Review of Tuning by Illustration
121Splitting for Variable Blocks and Diagonals
- Decompose A = A1 + A2 + ... + At
- Detect canonical structures (sampling)
- Split
- Tune each Ai
- Improve performance and save storage
- New data structures
- Unaligned block CSR
- Relax alignment in rows and/or columns
- Row-segmented diagonals
122Example: Variable Block Row (Matrix 12)
2.1x over CSR, 1.8x over RB
123Example: Row-Segmented Diagonals
2x over CSR
124Mixed Diagonal and Block Structure
125Summary
- Automated block size selection
- Empirical modeling and search
- Register blocking for SpMV, triangular solve, A^T*A*x
- Not fully automated
- Given a matrix, select splittings and transformations
- Lots of combinatorial problems
- TSP reordering to create dense blocks [Pinar '97; Moon et al. '04]
126Extra Slides
127A Sparse Matrix You Encounter Every Day
Who am I?
I am a Big Repository Of useful And useless Facts alike. Who am I? (Hint: Not your e-mail inbox.)
128Problem Context
- Sparse kernels abound
- Models of buildings, cars, bridges, economies, ...
- Google PageRank algorithm
- Historical trends
- Sparse matrix-vector multiply (SpMV): 10% of peak
- 2x faster with hand-tuning
- Tuning becoming more difficult over time
- Promise of automatic tuning: PHiPAC/ATLAS, FFTW, ...
- Challenges to high performance
- Not dense linear algebra!
- Complex data structures: indirect, irregular memory access
- Performance depends strongly on run-time inputs
129Key Questions, Ideas, Conclusions
- How to tune basic sparse kernels automatically?
- Empirical modeling and search
- Up to 4x speedups for SpMV
- 1.8x for triangular solve
- 4x for A^T*A*x; 2x for A^2*x
- 7x for multiple vectors
- What are the fundamental limits on performance?
- Kernel-, machine-, and matrix-specific upper bounds
- Achieve 75% or more of the bound for SpMV, limiting the benefit of low-level tuning
- Consequences for architecture?
- General techniques for empirical search-based tuning?
- Statistical models of performance
130Road Map
- Sparse matrix-vector multiply (SpMV) in a
nutshell - Historical trends and the need for search
- Automatic tuning techniques
- Upper bounds on performance
- Statistical models of performance
131Compressed Sparse Row (CSR) Storage
Matrix-vector multiply kernel: y(i) = y(i) + A(i,j)*x(j)

    for each row i
        for k = ptr[i] to ptr[i+1] - 1 do
            y[i] = y[i] + val[k] * x[ind[k]]
132Road Map
- Sparse matrix-vector multiply (SpMV) in a
nutshell - Historical trends and the need for search
- Automatic tuning techniques
- Upper bounds on performance
- Statistical models of performance
133Historical Trends in SpMV Performance
- The Data
- Uniprocessor SpMV performance since 1987
- Untuned and Tuned implementations
- Cache-based superscalar micros; some vector machines
- LINPACK
134SpMV Historical Trends Mflop/s
135Example: The Difficulty of Tuning
- n = 21216
- nnz = 1.5 M
- kernel: SpMV
- Source: NASA structural analysis problem
136Still More Surprises
- More complicated non-zero structure in general
137Still More Surprises
- More complicated non-zero structure in general
- Example 3x3 blocking
- Logical grid of 3x3 cells
138Historical Trends: Mixed News
- Observations
- Good news: Moore's-law-like behavior
- Bad news: untuned is 10% of peak or less, and worsening
- Good news: tuned is roughly 2x better today, and improving
- Bad news: tuning is complex
- (Not really news: SpMV is not LINPACK)
- Questions
- Application: automatic tuning?
- Architect: what machines are good for SpMV?
139Road Map
- Sparse matrix-vector multiply (SpMV) in a
nutshell - Historical trends and the need for search
- Automatic tuning techniques
- SpMV [SC'02; IJHPCA '04b]
- Sparse triangular solve (SpTS) [ICS/POHLL '02]
- A^T*A*x [ICCS/WoPLA '03]
- Upper bounds on performance
- Statistical models of performance
140SPARSITY Framework for Tuning SpMV
- SPARSITY: automatic tuning for SpMV [Im & Yelick '99]
- General approach
- Identify and generate the implementation space
- Search the space using empirical models and experiments
- Prototype library and heuristic for choosing the register block size
- Also cache-level blocking, multiple vectors
- What's new?
- New block size selection heuristic
- Within 10% of optimal; replaces the previous version
- Expanded implementation space
- Variable block splitting, diagonals, combinations
- New kernels: sparse triangular solve, A^T*A*x, A^r*x
141Automatic Register Block Size Selection
- Selecting the r x c block size
- Off-line benchmark: characterize the machine
- Precompute Mflops(r,c) using a dense matrix, for each r x c
- Once per machine/architecture
- Run-time search: characterize the matrix
- Sample A to estimate Fill(r,c) for each r x c
- Run-time heuristic model
- Choose r, c to maximize Mflops(r,c) / Fill(r,c)
- Run-time costs
- Up to 40 SpMVs (empirical worst case)
142Accuracy of the Tuning Heuristics (1/4)
DGEMV
NOTE: "fair" flops used (ops on explicit zeros not counted as work)
143Accuracy of the Tuning Heuristics (2/4)
DGEMV
144Accuracy of the Tuning Heuristics (3/4)
DGEMV
145Accuracy of the Tuning Heuristics (4/4)
DGEMV
146Road Map
- Sparse matrix-vector multiply (SpMV) in a
nutshell - Historical trends and the need for search
- Automatic tuning techniques
- Upper bounds on performance
- [SC'02]
- Statistical models of performance
147Motivation for Upper Bounds Model
- Questions
- Speedups are good, but what is the speed limit?
- Independent of instruction scheduling, selection
- What machines are good for SpMV?
148Upper Bounds on Performance: Blocked SpMV
- P = (flops) / (time)
- Flops = 2 * nnz(A)
- Lower bound on time: two main assumptions
- 1. Count memory ops only (streaming)
- 2. Count only compulsory and capacity misses; ignore conflicts
- Account for line sizes
- Account for matrix size and nnz
- Charge the minimum access latency αi at cache level Li, αmem at memory
- e.g., from Saavedra-Barrera and PMaC MAPS benchmarks
149Example: Bounds on Itanium 2
150Example: Bounds on Itanium 2
151Example: Bounds on Itanium 2
152Fraction of Upper Bound Across Platforms
153Achieved Performance and Machine Balance
- Machine balance [Callahan '88; McCalpin '95]
- Balance = Peak Flop Rate / Bandwidth (flops / double)
- Ideal balance for mat-vec: 2 flops / double
- For SpMV, even less (see the worked example below)
- SpMV ~ streaming
- 1 / (avg load time to stream 1 array) ~ (bandwidth)
- Sustained balance = peak flops / model bandwidth
155Where Does the Time Go?
- Most time assigned to memory
- Caches disappear when line sizes are equal
- Strictly increasing line sizes
156Execution Time Breakdown Matrix 40
157Speedups with Increasing Line Size
158Summary: Performance Upper Bounds
- What is the best we can do for SpMV?
- Limits to low-level tuning of blocked implementations
- Refinements?
- What machines are good for SpMV?
- Partial answer: balance characterization
- Architectural consequences?
- Example: strictly increasing line sizes
159Road Map
- Sparse matrix-vector multiply (SpMV) in a
nutshell - Historical trends and the need for search
- Automatic tuning techniques
- Upper bounds on performance
- Tuning other sparse kernels
- Statistical models of performance
- [FDO '00; IJHPCA '04a]
160Statistical Models for Automatic Tuning
- Idea 1: a statistical criterion for stopping a search
- A general search model
- Generate an implementation
- Measure performance
- Repeat
- Stop when the probability of being within ε of optimal falls below a threshold
- Can estimate the distribution on-line
- Idea 2: statistical performance models
- Problem: choose 1 among m implementations at run-time
- Sample performance off-line, build a statistical model
161Example Select a Matmul Implementation
162Example Support Vector Classification
163Road Map
- Sparse matrix-vector multiply (SpMV) in a
nutshell - Historical trends and the need for search
- Automatic tuning techniques
- Upper bounds on performance
- Tuning other sparse kernels
- Statistical models of performance
- Summary and Future Work
164Summary of High-Level Themes
- Kernel-centric optimization
- Vs. basic block, trace, path optimization, for
instance - Aggressive use of domain-specific knowledge
- Performance bounds modeling
- Evaluating software quality
- Architectural characterizations and consequences
- Empirical search
- Hybrid on-line/run-time models
- Statistical performance models
- Exploit information from sampling, measuring
165Related Work
- My bibliography 337 entries so far
- Sample area 1 Code generation
- Generative generic programming
- Sparse compilers
- Domain-specific generators
- Sample area 2 Empirical search-based tuning
- Kernel-centric
- linear algebra, signal processing, sorting, MPI,
- Compiler-centric
- profiling FDO, iterative compilation,
superoptimizers, self-tuning compilers,
continuous program optimization
166Future Directions (A Bag of Flaky Ideas)
- Composable code generators and search spaces
- New application domains
- PageRank: multilevel block algorithms for topic-sensitive search?
- New kernels: cryptokernels
- Rich mathematical structure germane to performance; lots of hardware
- New tuning environments
- Parallel, Grid, whole systems
- Statistical models of application performance
- Statistical learning of concise parametric models from traces, for architectural evaluation
- Compiler/automatic derivation of parametric models
167Acknowledgements
- Super-advisors: Jim and Kathy
- Undergraduate R.A.s: Attila, Ben, Jen, Jin, Michael, Rajesh, Shoaib, Sriram, Tuyet-Linh
- See pages xvi-xvii of the dissertation.
168TSP-based Reordering: Before
[Pinar '97; Moon et al. '04]
169TSP-based Reordering: After
[Pinar '97; Moon et al. '04] Up to 2x speedups over CSR
170Example: L2 Misses on Itanium 2
Misses measured using PAPI [Browne '00]
171Example Distribution of Blocked Non-Zeros
172Register Profile: Itanium 2
[Figure: register-blocking profile; color scale from 190 to 1190 Mflop/s]
173Register Profiles: Sun and Intel x86
[Figure: panels for Ultra 2i - 11, Ultra 3 - 5, Pentium III-M - 15, Pentium III - 21; performance labels from 35 to 122 Mflop/s]
174Register Profiles: IBM and Intel IA-64
[Figure: panels for Power3 - 17, Power4 - 16, Itanium 2 - 33, Itanium 1 - 8; performance labels from 107 Mflop/s to 1.2 Gflop/s]
175Accurate and Efficient Adaptive Fill Estimation
- Idea: sample the matrix
- Fraction of matrix to sample: s in [0, 1]
- Cost: O(s * nnz)
- Control cost by controlling s
- Search happens at run-time, so the constant matters!
- Control s automatically by computing statistical confidence intervals
- Idea: monitor the variance
- Cost of tuning
- Lower bound: converting the matrix costs 5 to 40 unblocked SpMVs
- Heuristic: 1 to 11 SpMVs
176Sparse/Dense Partitioning for SpTS
- Partition L into sparse (L1, L2) and dense (LD) parts
- Perform SpTS in three steps (see the sketch below)
- SPARSITY optimizations for (1)-(2); DTRSV for (3)
- Tuning parameters: block size, size of the dense triangle
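A hedged C sketch of the three-step solve for L = [L1 0; L2 LD], with x overwriting b; it assumes L1 and L2 are in CSR with each row's diagonal stored last, and uses the standard BLAS dtrsv for the dense triangle. Names are illustrative, not the library's.

    #include <cblas.h>

    /* (1) sparse solve L1*x1 = b1, (2) b2 -= L2*x1 (an SpMV),
     * (3) dense solve LD*x2 = b2 via dtrsv. LD is n2-by-n2, row-major. */
    void spts_sparse_dense(int n1, int n2,
                           const int *p1, const int *i1, const double *v1,  /* L1 */
                           const int *p2, const int *i2, const double *v2,  /* L2 */
                           const double *LD, double *x /* in: b, out: x */)
    {
        for (int i = 0; i < n1; i++) {               /* (1) forward substitution */
            double s = x[i];
            for (int k = p1[i]; k < p1[i+1] - 1; k++)    /* off-diagonals */
                s -= v1[k] * x[i1[k]];
            x[i] = s / v1[p1[i+1] - 1];                  /* diagonal stored last */
        }
        for (int i = 0; i < n2; i++)                 /* (2) b2 -= L2 * x1 */
            for (int k = p2[i]; k < p2[i+1]; k++)
                x[n1 + i] -= v2[k] * x[i2[k]];
        cblas_dtrsv(CblasRowMajor, CblasLower, CblasNoTrans, CblasNonUnit,
                    n2, LD, n2, x + n1, 1);          /* (3) dense triangle */
    }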
177SpTS Performance: Power3
178(No Transcript)
179Summary of SpTS and A*A^T*x Results
- SpTS: similar to SpMV
- 1.8x speedups; limited benefit from low-level tuning
- A*A^T*x, A^T*A*x
- Cache interleaving only: up to 1.6x speedups
- Register + cache: up to 4x speedups
- 1.8x speedup over register only
- Similar heuristic, same accuracy (within 10% of optimal)
- Further from upper bounds: 60-80%
- Opportunity for better low-level tuning, a la PHiPAC/ATLAS
- Matrix triple products? A^k*x?
- Preliminary work
180Register Blocking Speedup
181Register Blocking Performance
182Register Blocking Fraction of Peak
183Example Confiden