Title: Task 2.3 GridBench
Task 2.3 GridBench
- 1st Year Targets
- Background
- Prototype
- Problems and Issues
- What's Next
1st Year Targets
- Investigate benchmarking the grid
- Existing benchmarking efforts and tools
- Consolidate our thoughts into a Requirements Document.
- Specify the GridBench architecture in a Design Document.
- Prototype development
- Deployment testing
Background
Background (design)
Current Status
- Month 12 prototype in good shape
- Code compiles and installs
- RPM is close to done (thanks to Christos!)
- Updated Design Document
- Updated use case
- A better understanding of our platform and its potential.
Choosing a prototype: Why HPL?
- It's targeted at measuring the performance of a single site.
- Simple input and output requirements.
- HPL is well known in the HPC community.
- Configuring and building HPL on our testbed was supposed to be easy.
- HPL was chosen as our champion benchmark, which would give us experience in deploying benchmarks on the grid.
Month 12 Prototype
- Implementation of a prototype benchmark that runs on a local cluster.
- Benchmark gb_site_hpl will run on a cluster that supports MPI.
- Based on the High Performance Linpack benchmark.
- Alteration of HPL to accept XML input and produce XML output.
- Current CVS version builds and runs on the testbed (requires minor changes, including a revision of the input XML schema).
Month 12 Prototype
- Input of the benchmark is an XML document that specifies a number of parameters, such as:
  - Problem size
  - Algorithm
  - Number of processes and data partitioning
  - Other tuning parameters
- Output of the benchmark is an estimate of floating-point operations per second (FLOPS); a hypothetical example is sketched below.
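The output encoding itself is not reproduced in these slides. Purely as a sketch of the idea, a result document might report the measurement along the following lines; every element name and value here is hypothetical rather than the actual gb_site_hpl output:

<benchmarkResult>
  <benchmarkName>gb_site_hpl</benchmarkName>
  <!-- Hypothetical measured value; HPL itself reports Gflop/s for the solve -->
  <performance unit="GFLOPS">12.3</performance>
</benchmarkResult>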
XML schema for Input
<?xml version="1.0" encoding="UTF-8"?>
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"
           elementFormDefault="qualified">
  <xs:element name="P">
    <xs:simpleType>
      <xs:restriction base="xs:byte">
        <xs:enumeration value="1"/>
        <xs:enumeration value="2"/>
        <xs:enumeration value="4"/>
      </xs:restriction>
    </xs:simpleType>
  </xs:element>
  <xs:element name="Q">
    <xs:simpleType>
      <xs:restriction base="xs:byte">
        <xs:enumeration value="13"/>
        <xs:enumeration value="6"/>
        <xs:enumeration value="8"/>
      </xs:restriction>
    </xs:simpleType>
  </xs:element>
  <xs:complexType name="bcastsType" mixed="true">
    <xs:choice minOccurs="0" maxOccurs="unbounded">
      <xs:element name="bcasts" type="bcastsType"/>
      ...
    </xs:choice>
    <xs:attribute name="num" type="xs:byte"/>
  </xs:complexType>
  <xs:complexType name="benchmarkType">
    <xs:sequence>
      <xs:element ref="benchmarkName"/>
      <xs:element ref="description"/>
      ...
      <xs:element name="parameters" type="parametersType"/>
    </xs:sequence>
  </xs:complexType>
  <xs:element name="benchmarkName" type="xs:string"/>
  <xs:element name="blockSizes" type="xs:integer"/>
  <xs:complexType name="depthsType" mixed="true">
    <xs:choice minOccurs="0" maxOccurs="unbounded">
      <xs:element name="depths" type="depthsType"/>
    </xs:choice>
    <xs:attribute name="num" type="xs:byte"/>
  </xs:complexType>
  <xs:element name="description" type="xs:string"/>
  <xs:element name="equilibration" type="xs:boolean"/>
  <xs:element name="executableName" type="xs:string"/>
  <xs:element name="gridbenchmark">
    <xs:complexType>
      <xs:sequence>
        <xs:element name="benchmark" type="benchmarkType" ...
Month 12 Prototype
- Dependencies (for gb_site_hpl only!)
  - BLAS (provided by cernlib)
  - MPI
- Currently these are dependencies for building only (the executable is statically linked).
Implementation / Integration problems and issues
- Measurement depends on the efficiency of the underlying BLAS library.
  - BLAS may not be optimized for the specific CPU (i386 vs. P4, etc.); may use statically linked optimized routines.
- Moving target with respect to how things would run: user-space installation, CE installation, UI installation, ...
- Lack of real support for MPI in job submission.
  - Currently staging is partially manual.
- Building on our platform was not as easy as expected.
What's next?
- 1) Specifying and implementing more complex (workflow-type) benchmarks
  - GGF Research Group (NGB)
  - Tools for specifying them: JDL-DAGMan (EDG), TRIANA (GRIDLAB)
- Evolution of the Design Document
What's next?
- 2) Extraction, Collection, and Archival of benchmark results
  - Integration with monitoring (through a G-PM API)
  - R-GMA (collection and archival)
  - MDS
- 3) Interpretation
  - Models
  - Tools
  - Metrics
Related Activities
- Investigation of Benchmarks
- Functional Simulator for Processor Design
- Looking at real apps for deployment on the testbed and their use as application benchmarks
- Exploitation of other IST projects' results and tools
ROADMAP
- A detailed document with specifications for each of the GridBench suite benchmarks outlined in the Design Document (an Addendum?).
- Employment of monitoring architectures for benchmark data collection (i.e., R-GMA and G-PM, based on API availability).
- Specification of benchmark result encoding (XML) and archival; a sketch follows below.
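The result encoding is still to be specified. As a hedged sketch only, an archival-ready record would presumably extend the raw measurement with the metadata a collection service such as R-GMA would need to index it; all names and values below are hypothetical:

<benchmarkResult>
  <benchmarkName>gb_site_hpl</benchmarkName>
  <!-- Hypothetical site identifier and run timestamp, needed for archival -->
  <site>cluster.example.org</site>
  <timestamp>2003-01-15T12:00:00Z</timestamp>
  <performance unit="GFLOPS">12.3</performance>
</benchmarkResult>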