The Richmond Supercomputing Cluster
What is it? The Supercomputing Cluster at the University of Richmond is used to perform computationally intensive simulations and data analysis. It consists of one master node and 52 slave nodes, 49 of which are dual 1.4 GHz Athlon-class machines and 3 of which are 2.0 GHz Athlon-class machines.
History The cluster project at Richmond began in
the spring of 1999. Professors Mike Vineyard and
Jerry Gilfoyle realized the need for an increase
in computing power to support their research in
nuclear physics at the Thomas Jefferson National
Accelerator Facility (JLab). Using funds from
their Department of Energy (DOE) research grant
and other monies from the University, they put
together a prototype computing cluster in the
summer of 1999. That prototype (shown below) is
no longer operational. The current research cluster was assembled by LinuxLabs in the winter of 2001-2002 and arrived in February 2002. After a commissioning phase, the cluster became fully operational in the fall of 2002. The funds for the project came from a successful grant application by Dr. Gilfoyle and Dr. Vineyard to the Major Research Instrumentation Program of the National Science Foundation for $175,000. The current configuration consists of 52 remote nodes, each a 1.4 GHz Athlon with 512 MBytes of RAM, and one master node. Each node has a single 18-GByte disk. The cluster is also supported by 3 TBytes of space in three RAID arrays served by another 1.4 GHz Athlon acting as a fileserver.
Details on Using the Supercomputer The Supercomputing Cluster runs on the Linux operating system and uses the Beowulf system for managing batch jobs. Programs are written in Fortran, Perl, or C and are submitted to the master node. The master node then distributes the commands to the slave nodes and collects the results.
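The poster does not name the message-passing library used to farm work out to the slave nodes; the sketch below is only an illustration, assuming MPI (a common choice on Beowulf-class clusters), of a minimal master/worker program in C in which rank 0 plays the role of the master node and the remaining ranks play the slave nodes.

/* Minimal master/worker sketch in C using MPI (an assumption --
 * the poster does not say which message-passing library is used). */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank, size;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (rank == 0) {
        /* "Master node": hand one placeholder task to each worker... */
        for (int worker = 1; worker < size; worker++) {
            int task = worker;
            MPI_Send(&task, 1, MPI_INT, worker, 0, MPI_COMM_WORLD);
        }
        /* ...then collect the results. */
        for (int worker = 1; worker < size; worker++) {
            int result;
            MPI_Recv(&result, 1, MPI_INT, worker, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            printf("master: result %d from node %d\n", result, worker);
        }
    } else {
        /* "Slave node": receive a task, do some stand-in work, reply. */
        int task, result;
        MPI_Recv(&task, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        result = task * task;
        MPI_Send(&result, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
    }

    MPI_Finalize();
    return 0;
}

On a Beowulf-style cluster, a program like this would typically be compiled with an MPI wrapper such as mpicc and launched across the nodes from the master by the batch system.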
What are we currently working on? Out-of-Plane Measurements of the Structure Functions of the Deuteron. We now know that particles called quarks
are the basis for the atoms, molecules, and
atomic nuclei that form us and our world.
Nevertheless, how these quarks actually combine
to form that matter is still a mystery. The
deuteron is an essential testing ground for any
theory of the nuclear force because it is the
simplest nucleus in nature. In this project we
use the unique capabilities of the CLAS detector
at Jefferson Lab to make some of the first
measurements of little-known electric and
magnetic properties of the deuteron. In the past,
scattering experiments like those done at JLab
were confined to reactions where the debris from
the collision was in the same plane (usually
horizontal) as the incoming and outgoing
projectile (an electron in this case). With CLAS
we can now measure particles that are scattered
out of that plane and are sensitive to effects
that have often been ignored up to now. These
measurements will open a new window into the
atomic nucleus. For more, see the "Nuclear Physics at UR" poster.
Research cluster prototype consisting of 12 nodes.
Working with SpiderWulf. Student Sherwyn Granderson (left) monitors the status of the
cluster while Rusty Burrell and Kuri Gill (right)
review some results from recent data analysis.
Why we need it... Although initial analysis of
data taken in CLAS is done at Jefferson Lab, we
analyze this data more deeply at the University
of Richmond. Performing this second pass analysis
requires us to be able to store full data sets
from a particular running period, so that we
don't have to repeatedly move data on and off the
system. The set we are currently working on is
about 1 Terabyte (1,000 Gigabytes), and past data
sets have been even larger. The computing cluster
has a maximum capacity of 4.4 Terabytes. In
addition to disk space, considerable computing
power is necessary to perform not only data
analysis but also precise simulations of CLAS in
order to produce publication-quality measurements
and results. For example, running a typical
simulation on our prototype cluster developed in
1999 (see History) would take about 39 days.
This is far too long to expect the analysis to be
complete in a reasonable amount of time. On our
current cluster, this same job can be run over a
weekend or even overnight.
The Power of SpiderWulf Often in the world of
computing (and specifically supercomputing), a
measure of FLOPS (floating point operations per
second) is used to compare computer performances.
In order to see where our cluster stands, we
measured our machine and compared it to the top
supercomputers in the world since 1993. The
results are shown below. The data points
are GigaFLOPS per processor, as many of these
supercomputers contain hundreds or thousands of
processors. The red points represent the top
supercomputer of each year, and the blue point
represents where our cluster fell in the year
that it was purchased. For further comparison, we measured the FLOPS of a typical desktop PC and compared it to our measurement of the cluster.
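The poster does not describe how the FLOPS numbers were obtained; standard benchmarks such as LINPACK are typical for this kind of comparison. Purely as an illustration of the idea, the C sketch below times a long loop of multiply-add operations and reports a rough FLOPS estimate.

/* Rough FLOPS micro-benchmark sketch in C. This is only an
 * illustration; the poster does not specify how SpiderWulf's
 * FLOPS were actually measured. */
#include <stdio.h>
#include <time.h>

int main(void)
{
    const long n = 100000000L;              /* loop iterations */
    double x = 0.0;
    const double y = 1.000000001;
    clock_t start = clock();

    for (long i = 0; i < n; i++)
        x = x * y + 1.0;                    /* one multiply and one add */

    double seconds = (double)(clock() - start) / CLOCKS_PER_SEC;
    double flops = 2.0 * (double)n / seconds;   /* 2 floating-point ops per pass */

    printf("result %g (printed so the loop is not optimized away)\n", x);
    printf("roughly %.1f MFLOPS\n", flops / 1.0e6);
    return 0;
}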
Undergraduate Education Students who choose to
participate in an undergraduate research project
develop a deeper understanding not only of nuclear physics, but also of how to solve a daunting computational challenge and develop a new application. One vital aspect of studying
nuclear physics is scientific computing. Along
with pure physics instruction, students are given
access to the supercomputer and learn how to use
it. They learn about new operating systems and
programming languages, as well as the networking
structure and usage of the supercomputing
cluster.
University of Richmond
SpiderWulf, UR's supercomputing cluster, 2002-present
ENIAC, the world's first electronic digital computer, 1945