Title: OSC in Summary
1. OSC in Summary
Innovations in computing, networking, and education:
- Providing a base infrastructure for Ohio higher education
- Collaborating with state and national partners
- Potential for growth in key areas
- Taking advantage of our infrastructure
- Wider collaboration
- Strong support from national/international partners
2. High Performance Computing Systems
- CRAY SV1ex
- SUN SunFire 6800
- IA32 Cluster
- Itanium 2 Cluster (includes SGI Altix 3000)
- Mass Storage System
- Interface Laboratory
3. OSC CRAY SV1ex
The CRAY SV1ex computer system at the Ohio Supercomputer Center (OSC) is a powerful parallel vector processor (PVP) supercomputer for large simulations, featuring 32 vector processors (500 MHz) and 64 Gbytes of physical RAM: 32 Gbytes of SSDI and 32 Gbytes of user space. Serial, autotasked, and parallel jobs are scheduled on the 32 single-streaming processors (SSPs). Four of the SSPs can be combined into one multistreaming processor (MSP). Multistreaming automatically divides loop iterations among the four CPUs, giving speedup factors of up to four on loops to which this technique can be applied.
Specifications are as follows:
- Processor technology: 500 MHz vector processor
- Number of processors: 32 single-streaming processors (SSPs); 4 SSPs can be combined into 1 multi-streaming processor (MSP) for programs compiled with the Ostream option
- Vector pipes: 2 per processor
- Peak performance: approximately 64.0 Gflops (8.0 Gflops per MSP processor, 2.0 Gflops per SSP processor)
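Multistreaming and vectorization, as described above, apply to loops whose iterations are independent. The short C sketch below shows the kind of loop this targets; the array names, sizes, and DAXPY-style operation are illustrative choices, not details from the slides, and the Cray-specific compile option is only alluded to in the comment.

/* Illustrative C sketch (assumed example, not OSC code): a DAXPY-style
 * loop with independent, unit-stride iterations.  A vectorizing compiler
 * can issue it on the SV1ex vector pipes, and multistreaming (enabled by
 * the streaming compile option mentioned above) can split the iterations
 * across the four SSPs of an MSP for up to ~4x additional speedup. */
#include <stdio.h>

#define N 1000000

static double x[N], y[N];

int main(void)
{
    const double a = 2.5;

    /* No loop-carried dependence: each y[i] depends only on x[i] and y[i]. */
    for (long i = 0; i < N; i++)
        y[i] = a * x[i] + y[i];

    printf("y[0] = %f\n", y[0]);
    return 0;
}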
4. OSC CRAY SV1ex (continued)
Specifications (continued):
- Memory size: 8192 Mwords (64 Gbytes): 4096 Mwords (32 Gbytes) of SSDI and 4096 Mwords (32 Gbytes) of user memory
- Maximum memory bandwidth: approximately 25 Gbytes/sec
- I/O: 500 Gbytes of attached disks
- Communication interfaces: TCP/IP network to graphics devices and OARnet; HIPPI interface; FDDI interface
5. OSC SUN SunFire 6800
The SunFire 6800 SMP server is a shared-memory multiprocessor system capable of serial and parallel operation. It consists of twenty-four 900 MHz UltraSPARC III microprocessor chips and 48 GB of memory. Each UltraSPARC III chip has a peak performance of 1.8 GFLOPS. This is the latest generation of SPARC chip developed by Sun, and it is backward compatible with all previous generations. Specifications are as follows:
- Operating system: Solaris 5.8
- 48 gigabytes of main memory
- 900 MHz UltraSPARC III processors
- Superscalar: 4 instructions per clock cycle
- 64-bit or 32-bit precision supported in hardware
- Peak performance: 1.8 GFLOPS
- 64 KB on-chip data cache (4-way associative)
- 8 MB off-chip data cache (4-way associative)
- 500 GB of direct disk storage
- Simultaneously operating functional units:
  - Instruction issue
  - Integer and floating-point calculation (pipelined)
  - Data cache control and memory control
  - System interface
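As a shared-memory SMP, the SunFire 6800 presents its 48 GB of memory to all 24 processors, so a parallel program can simply run threads over one address space. The following minimal OpenMP sketch in C illustrates that model; the array size, loop, and reduction are illustrative assumptions, not OSC code.

/* Minimal shared-memory (OpenMP) sketch, assuming an OpenMP-capable
 * compiler (e.g. cc or gcc with its OpenMP flag).  All threads share
 * one address space, as on the SunFire 6800 SMP described above. */
#include <omp.h>
#include <stdio.h>

#define N 1000000

static double a[N];

int main(void)
{
    double sum = 0.0;

    /* Iterations are divided among the threads; "reduction" merges the
     * per-thread partial sums when the loop finishes. */
    #pragma omp parallel for reduction(+:sum)
    for (long i = 0; i < N; i++) {
        a[i] = (double)i;
        sum += a[i];
    }

    printf("max threads: %d, sum = %.0f\n", omp_get_max_threads(), sum);
    return 0;
}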
6. OSC IA32 Cluster
The OSC IA32 Cluster is a distributed/shared
memory hybrid system constructed from commodity
PC components running the Linux operating system.
Specifications are as follows:
- 128 compute nodes for serial jobs, configured with:
  - Four gigabytes of RAM
  - Two 2.4 GHz Intel P4 Xeon processors
  - One 80 gigabyte ATA100 hard drive
  - One 100Base-T Ethernet interface and one Gigabit Ethernet interface
- 128 compute nodes for parallel jobs (a message-passing sketch follows this list), configured with:
  - Four gigabytes of RAM
  - Two 2.4 GHz Intel P4 Xeon processors
  - One 80 gigabyte ATA100 hard drive
  - One Infiniband interface
  - One 100Base-T Ethernet interface and one Gigabit Ethernet interface
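Each compute node above has its own memory, so parallel jobs typically coordinate by message passing over the Infiniband or Gigabit Ethernet interconnect. The following minimal MPI sketch in C illustrates that model; it assumes an MPI library is installed on the cluster and is a generic example, not OSC-specific code.

/* Minimal distributed-memory (MPI) sketch: each process owns its own
 * memory, as on the cluster's compute nodes, and partial results are
 * combined with explicit messages over the interconnect. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size;
    long local = 0, total = 0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Each rank computes a partial result from its own share of the work... */
    for (long i = rank; i < 1000; i += size)
        local += i;

    /* ...and the partial results are summed across all processes. */
    MPI_Reduce(&local, &total, 1, MPI_LONG, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("sum over %d processes: %ld\n", size, total);

    MPI_Finalize();
    return 0;
}

Such a program would normally be built with an MPI compiler wrapper (for example mpicc) and launched with one process per processor, though the exact commands used on the OSC cluster are not given in these slides.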
7. OSC IA32 Cluster (continued)
- 16 I/O nodes supporting a 7.83 terabyte PVFS parallel filesystem (accessed in parallel, as in the MPI-IO sketch after this list), configured with:
  - One gigabyte of RAM
  - Two 933 MHz Intel Pentium III processors, each with 256 kB of secondary cache
  - One 3ware 7500-8 ATA100 RAID controller with eight 80 gigabyte hard drives in RAID 5
  - One Myrinet 2000 interface
  - One Gigabit Ethernet interface
  - One 100Base-T Ethernet interface
- 1 front-end node configured with:
  - Four gigabytes of RAM
  - Two 1.533 GHz AMD Athlon MP processors, each with 256 kB of secondary cache
  - One 80 gigabyte ATA100 hard drive
  - One Gigabit Ethernet interface
  - One 100Base-T Ethernet interface
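The PVFS filesystem served by the I/O nodes above is built for parallel access, where many processes read or write disjoint pieces of a single file at once. The sketch below shows that access pattern using standard MPI-IO calls; the path /pvfs/demo.dat, the block size, and the write pattern are made-up illustrations, not details from the slides.

/* Illustrative MPI-IO sketch: each process writes its own disjoint block
 * of one shared file, the access pattern a parallel filesystem such as
 * PVFS is designed for.  The path "/pvfs/demo.dat" is a made-up example. */
#include <mpi.h>

#define COUNT 1024

int main(int argc, char **argv)
{
    int rank;
    int buf[COUNT];
    MPI_File fh;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    for (int i = 0; i < COUNT; i++)
        buf[i] = rank;   /* fill this rank's block with its own id */

    MPI_File_open(MPI_COMM_WORLD, "/pvfs/demo.dat",
                  MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);

    /* Each rank writes at its own offset, so the writes do not overlap. */
    MPI_File_write_at(fh, (MPI_Offset)rank * (MPI_Offset)(COUNT * sizeof(int)),
                      buf, COUNT, MPI_INT, MPI_STATUS_IGNORE);

    MPI_File_close(&fh);
    MPI_Finalize();
    return 0;
}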
8. OSC Itanium 2 Cluster (includes SGI Altix 3000)
The OSC Itanium 2 Cluster is a distributed/shared-memory hybrid of commodity systems based on the new Intel Itanium 2 processor architecture. The cluster is built using HP zx6000 workstations and an SGI Altix 3000. Specifications are as follows:
- One 32-processor SGI Altix 3000 for SMP and large-memory applications, configured with:
  - 64 gigabytes of memory
  - 32 1.3 GHz Intel Itanium 2 processors
  - 4 Gigabit Ethernet interfaces
  - 2 Gigabit Fibre Channel interfaces
  - Approximately 400 GB of temporary disk (/tmp)
- 128 compute nodes for parallel jobs, configured with:
  - Four gigabytes of RAM
  - Two 900 MHz Intel Itanium 2 processors with 1.5 MB of tertiary cache
  - One 80 gigabyte ultra-wide SCSI hard drive
  - One Myrinet 2000 interface card
  - One Gigabit Ethernet interface
  - One 100Base-T Ethernet interface
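Because this system pairs a 32-processor shared-memory Altix with dual-processor distributed nodes, applications often combine both models: MPI between nodes and OpenMP threads within a node. The following hybrid sketch in C shows that structure; it is a generic pattern under the assumption that both MPI and OpenMP are available, not code from the slides.

/* Hybrid sketch: MPI across nodes, OpenMP threads inside each node,
 * matching the distributed/shared-memory mix described above. */
#include <mpi.h>
#include <omp.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int provided, rank;
    double local = 0.0, total = 0.0;

    /* Ask for an MPI library that tolerates threads, with only the
     * main thread making MPI calls. */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Threads on one node share memory while accumulating "local"... */
    #pragma omp parallel for reduction(+:local)
    for (int i = 0; i < 1000000; i++)
        local += 1.0;

    /* ...and MPI combines the per-node results across the cluster. */
    MPI_Reduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("total = %.0f with %d threads per process\n",
               total, omp_get_max_threads());

    MPI_Finalize();
    return 0;
}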
9. OSC Itanium 2 Cluster (includes SGI Altix 3000) (continued)
- 20 compute nodes for serial jobs, configured with:
  - Twelve gigabytes of RAM
  - Two 900 MHz Intel Itanium 2 processors with 1.5 MB of tertiary cache
  - One 80 gigabyte ultra-wide SCSI hard drive
  - One Gigabit Ethernet interface
- 1 front-end node configured with:
  - Twelve gigabytes of RAM
  - Two 900 MHz Intel Itanium 2 processors with 1.5 MB of tertiary cache
  - One 80 gigabyte ultra-wide SCSI hard drive
  - Two Gigabit Ethernet interfaces
  - One 100Base-T Ethernet interface
10. High Performance Computing Systems
11. Utilization by Platform
12. Usage by FOS
13. Monthly Project Usage by Number of Platforms
14. Yearly Computational Usage by Projects
15. Storage (files and bytes)
16. Compute Jobs
17. OSC Storage Management
18. Future Mass Storage
- 50 TB of Performance Storage
  - Home directories, project storage space, and long-term frequently accessed files
- 420 TB of Performance/Capacity Storage
  - Active disk cache: compute jobs that require directly connected storage
  - Parallel file systems and scratch space
  - Large temporary holding area
- 128 TB tape library
  - Backups and long-term "offline" storage
IBM's Storage Tank technology, combined with TFN connections, will allow large data sets to be moved throughout the state with increased redundancy and seamless delivery.
19. Phase 3 Third Frontier Network Map
20. Future Directions
- Cluster Expansion with Commodity Processors
- Parallel / Vector, Large SMP Capability Enhancements
- Adaptable Computing and Networking (FPGA)
- Increased and Improved Data Management
- Data Grids / Shared Data Resources
21. Summary
- High performance computing is rapidly expanding the frontiers of education and research, and OSU is at the forefront of those changes
- Nationally, high performance computing is becoming a critical resource in numerous areas including homeland security, medicine, financial analysis, engineering, and the arts
- High performance computing and networking are crucial tools in the growing demand for collaborative research and education
- More information: http://www.osc.edu