Title: 2005 SIAM Conference on Computational Science and Engineering
HPC Panel: Scientific Computing - Should the Tail Wag the Dog?
Horst Simon, Associate Laboratory Director for Computing Sciences
Lawrence Berkeley National Laboratory
February 14, 2005
How to Answer Questions
FAIR: Facts, Anecdotes, Illustrations, References ("What's in it for me?")
Answer format used on the following slides:
Straight Answer (SA)
Bold Claim (BC)
Solid Evidence (SE)
Value / Impact (VI)
Positive Close (PC)
Q1) It is often said that scientific computing is too small a market to lead high-performance computing hardware development. Is there a sustainable (i.e., profitable) future for large-scale HPC architectures, or are there only capability demonstrations ahead? Are there ways that the CSE community can or should help move the HPC hardware industry?
SA: Yes, yes.
BC: Left by itself, industry will only build cheap and useless systems.
SE:
- the TOP500 is all clusters
- the most useful systems today have been built with user input (BG/L, Red Storm, Earth Simulator)
VI: Otherwise you are condemned to program clusters forever.
PC: Science-Driven Architectures
Q2) What will the future look like: clusters, vector clusters, MPPs, special-purpose processors, flat memory or hierarchical memory, etc.? Are there emerging trends that might tip the balance in favor of one architectural solution over another?
SA: Yes, all of the above; no.
BC: HPC is in a boom that will let a thousand flowers bloom.
SE:
- at least half a dozen new vendors with great ideas
- cluster vendors try to differentiate themselves
- the entry barrier to the market is low
VI: the bad will drive out the good
PC: now is the time to engage with vendors and communicate requirements
Q2a) CSE applications: Are vendors developing machines that CSE researchers can use effectively?
SA: Not by themselves.
Q2b) HPC architecture developers: Are CSE researchers working to meet the challenges of developing applications that fully exploit new architectures?
SA: Yes.
BC: they are working their butts off to get a few percent more performance
SE:
- SIAM is running two conferences with 400-500 attendees
- funding agencies spend tens to hundreds of millions of dollars
VI: but maybe this is good, because we have jobs and funding forever
PC: now is the time to engage with vendors and communicate requirements
Q3) Currently: Performance = Peak Flops? In some sense, for rating HPC machines, we appear to be getting EXACTLY what we paid for on the LINPACK benchmarks. Have we simplified the rating of machines to an absurd level? Future: How should scientists and hardware companies rate new machines?
SA: Yes.
BC: after many years this is still an unsolved research problem
SE: good progress has been made
- HPC Challenge (Dongarra and Luszczek, UT)
- DOE SciDAC PERC and Snavely (SDSC)
- APEX-Map (Strohmaier and Shan, LBNL)
VI: you get what you measure (illustrated in the sketch below)
PC: scientific opportunity
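To make "you get what you measure" concrete, here is a minimal C sketch (not from the talk; the array size and the FLOPS_PER_ELEM constant are arbitrary assumptions) that times a memory-bound STREAM-style triad against an artificially compute-bound loop on the same machine. The two reported flop rates typically differ by a large factor, which is why a single peak-flops or LINPACK-style number says little about how a machine will behave on real applications.

```c
/* Sketch: the same machine, rated two ways.
 * Compile (POSIX): cc -O2 rate.c -o rate   (add -lrt on older glibc) */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N (1 << 22)          /* 4M doubles per array (~32 MB), larger than cache */
#define FLOPS_PER_ELEM 64    /* artificial compute intensity for the second kernel */

static double now(void) {
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + 1e-9 * ts.tv_nsec;
}

int main(void) {
    double *a = malloc(N * sizeof *a), *b = malloc(N * sizeof *b), *c = malloc(N * sizeof *c);
    if (!a || !b || !c) return 1;
    for (long i = 0; i < N; i++) { a[i] = 0.0; b[i] = 2.0; c[i] = 0.5; }

    /* Memory-bound kernel: STREAM-style triad, 2 flops per 24 bytes of traffic. */
    double t0 = now();
    for (long i = 0; i < N; i++) a[i] = b[i] + 3.0 * c[i];
    double t_triad = now() - t0;

    /* Compute-bound kernel: many dependent flops per element, little memory traffic. */
    t0 = now();
    for (long i = 0; i < N; i++) {
        double x = b[i];
        for (int k = 0; k < FLOPS_PER_ELEM / 2; k++) x = x * 1.0000001 + 0.5;
        a[i] = x;
    }
    double t_dense = now() - t0;

    printf("triad (memory-bound):  %8.1f Mflop/s\n", 2.0 * N / t_triad / 1e6);
    printf("dense (compute-bound): %8.1f Mflop/s\n",
           (double)FLOPS_PER_ELEM * N / t_dense / 1e6);
    /* Rank machines by the second number and you reward peak flops; rank them by
     * the first and you reward memory bandwidth: you get what you measure. */
    free(a); free(b); free(c);
    return 0;
}
```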
Q5) Is distributed-memory message passing (e.g. MPI) all we will ever need to program our CSE applications?
SA: by heavens, NO
BC: MPI is the Fortran of the 2000s
SE:
- 1983: message-passing programming model
- 1994: MPI standard defined
- 2005: ossification; we use MPI for a 64K-processor BG/L system
- hope: CAF and UPC (contrasted in the sketch below)
VI: otherwise you are condemned to program clusters forever using MPI (clusters -> MPI applications -> more clusters)
PC: another exciting research opportunity
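As a minimal sketch of the programming model in question (not from the talk), the C/MPI fragment below shifts one value around a ring of processes: every transfer is an explicit, matched send/receive pair, the two-sided style that dates to the 1983-era message-passing model and was standardized by MPI in 1994. The closing comment hints at how the same transfer looks in the one-sided PGAS style of UPC or Co-Array Fortran.

```c
/* Sketch: two-sided message passing with MPI.
 * Build/run (assuming an MPI installation): mpicc ring.c -o ring && mpirun -np 4 ./ring */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int right = (rank + 1) % size;          /* neighbor to send to   */
    int left  = (rank - 1 + size) % size;   /* neighbor to recv from */
    double mine = (double)rank, theirs = -1.0;

    /* Shift one value around the ring; the combined Sendrecv avoids the deadlock
     * that a naive ordering of blocking Send/Recv calls could cause on a cycle. */
    MPI_Sendrecv(&mine,   1, MPI_DOUBLE, right, 0,
                 &theirs, 1, MPI_DOUBLE, left,  0,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);

    printf("rank %d received %.0f from rank %d\n", rank, theirs, left);

    /* In a PGAS language such as UPC or Co-Array Fortran the same data movement
     * is a one-sided assignment into a remote image/partition, with no matching
     * receive call on the other side. */
    MPI_Finalize();
    return 0;
}
```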
Q5) Is the CSE community, especially the part that makes use of HPC, perceived to be making significant contributions to society at large? If not, what does the CSE and HPC community need to do to improve this?
SA: no
BC: we are particularly inept at public relations
SE: compare us to the high-energy physicists
VI: otherwise you are condemned to program clusters forever using MPI
PC: CSE has many significant accomplishments; we all need to learn how to communicate them