Title: The LIS Beowulf Cluster -- A Quick Update
The LIS Beowulf Cluster -- A Quick Update
Yudong Tian
The Cluster Staff Alumni
- Nikkia Anderson
- Joe Eastman
- Jim Geiger
- Paul Houser
- Aedan Jenkins
- Sujay Kumar
- Meg Larko
- Steve Lidard
- Luther Lighty
- Uttam Majumder
- Kevin Miller
- Susan Olden
- Christa Peters-Lidard
- Yudong Tian
The LIS Beowulf Cluster
- General information
- System
- --- Hardware
- --- Software
- Performance
- --- Network IO
- --- Computing power
- Summary
1. LIS Beowulf Cluster -- General info
- Built in Aug. 2002, after
- --- several months of parts procurement
- --- a few weeks of assembly
- --- located in the basement of Bldg 33
- Challenges met
- --- Shipping & handling
- --- Hardware QC
- --- BIOS debug/update
- --- Cooling
- --- Power
- --- Compilers, etc., etc.
- We did it!
1. LIS Beowulf Cluster -- General info
- 8 IO/storage nodes, dual AMD CPUs
- 192 compute nodes, single AMD CPU
- 112 GB total memory
- 22 TB total disk storage
- 10 x 1000 Mbps Ethernet links
- 192 x 100 Mbps Ethernet links
- ~$100K price tag -- very cost-effective
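A quick consistency check on the aggregates above (the per-node memory split is an assumption; the slide only lists totals):
    CPUs:   8 IO/storage nodes x 2 + 192 compute nodes x 1 = 208 CPUs
    Memory: 8 x 2 GB + 192 x 0.5 GB = 16 GB + 96 GB = 112 GB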
Photos: Mainboard, Switch, RAID, Front view
2. System info -- Network architecture
2. System info -- Software
- LIS Code
- MPICH / LAM-MPI (see the sketch after this list)
- Absoft Fortran Compiler
- Cluster Services Management -- GDS, LAS
- Cluster Networking -- MRTG, DNS, SSH
- Hardware monitoring -- C-RPM
- Linux
http://lis2.sci.gsfc.nasa.gov:9090/
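As an illustration of how a job exercises the MPICH / LAM-MPI layer listed above, here is a minimal sketch in C; it is not part of the LIS code itself, and mpicc/mpirun are just the generic wrappers those MPI distributions ship with:

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, size, len;
        char name[MPI_MAX_PROCESSOR_NAME];

        MPI_Init(&argc, &argv);                /* start the MPI runtime */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this process's rank */
        MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total number of processes */
        MPI_Get_processor_name(name, &len);    /* which cluster node we landed on */

        printf("rank %d of %d on %s\n", rank, size, name);

        MPI_Finalize();
        return 0;
    }

Compiled with mpicc hello.c -o hello and launched across the compute nodes with something like mpirun -np 192 ./hello (the process count is illustrative).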
3. Low-level Performance -- Network Bandwidth
3. Low-level Performance -- Network Latency
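The bandwidth and latency figures on these two slides are the kind of numbers a two-process MPI ping-pong test yields; a minimal sketch follows (an illustration only, not the benchmark actually used for the slides; run it with two ranks placed on different nodes so the traffic crosses the Ethernet links):

    #include <mpi.h>
    #include <stdio.h>
    #include <stdlib.h>

    #define REPS 1000

    int main(int argc, char **argv)
    {
        int rank, i;
        int nbytes = (argc > 1) ? atoi(argv[1]) : (1 << 20);  /* message size, default 1 MB */
        char *buf = malloc(nbytes);
        MPI_Status st;
        double t0, t1;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        MPI_Barrier(MPI_COMM_WORLD);
        t0 = MPI_Wtime();
        for (i = 0; i < REPS; i++) {
            if (rank == 0) {            /* rank 0 sends, then waits for the echo */
                MPI_Send(buf, nbytes, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
                MPI_Recv(buf, nbytes, MPI_CHAR, 1, 0, MPI_COMM_WORLD, &st);
            } else if (rank == 1) {     /* rank 1 echoes every message back */
                MPI_Recv(buf, nbytes, MPI_CHAR, 0, 0, MPI_COMM_WORLD, &st);
                MPI_Send(buf, nbytes, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
            }
        }
        t1 = MPI_Wtime();

        if (rank == 0) {
            double rtt = (t1 - t0) / REPS;   /* average round-trip time */
            printf("half round-trip latency: %g us\n", rtt / 2.0 * 1e6);
            printf("bandwidth: %g MB/s\n", 2.0 * nbytes / rtt / 1e6);
        }

        free(buf);
        MPI_Finalize();
        return 0;
    }

Small messages make the half round-trip time approximate the link latency; large messages make the throughput figure approach the usable link bandwidth.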
3. Low-level Performance -- FLOPS
- 4 nodes: 3.8 GFLOPS
- 100 nodes: 34 GFLOPS
- 200 nodes: TBD
- SGI Origin 2000 (128 x 250 MHz): 39.4 GFLOPS
- TOP500 #1 (Earth Simulator): 35.9 TFLOPS
- TOP500 #2 (HP): 7.7 TFLOPS
- TOP500 #500 (HP): 196 GFLOPS
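Dividing the measured aggregates by node count gives a rough per-node figure:
    4 nodes:   3.8 GFLOPS / 4   ~= 0.95 GFLOPS per node
    100 nodes: 34 GFLOPS / 100  = 0.34 GFLOPS per node
The drop in per-node throughput at larger node counts is the usual sub-linear scaling; attributing it to the 100 Mbps compute-node links is an inference, not something shown on this slide.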
4. Summary
- We built the cluster on schedule and on budget
- The cluster is running
- We are ready