Beowulf Cluster - PowerPoint PPT Presentation


1
Beowulf Cluster
  • Jon Green
  • Jay Hutchinson
  • Scott Hussey
  • Mentor: Hongchi Shi

2
Goals and Objectives
  • Design and build a working Beowulf cluster
  • Test its capabilities
  • Prove its validity as a computing environment for
    solving real-world problems

3
What is a Beowulf Cluster?
  • A loosely defined concept
  • No fixed hardware requirements
  • No fixed size thresholds
  • Generally describes a group of networked computing
    nodes that work together on a common task

4
Background: The Name and History
  • In 1994, Thomas Sterling and Don Becker,
    working at Goddard Space Flight Center, built a
    cluster consisting of sixteen 486-based PCs connected
    by channel-bonded Ethernet. The name Beowulf was
    derived from the Old English epic poem.

5
Aspects of Designing a Beowulf Cluster
  • Low latency communication network
  • Low cost
  • Diminishing returns
  • Application suitability
  • Hardware specialization

The Gargleblaster
6
GIVE US AN A!
7
Background: Cost Effectiveness
  • Historically, Beowulf clusters have used
  • Open-source Unix distributions
  • e.g. Linux
  • Low-cost off-the-shelf computers
  • e.g. PCs
  • Low-cost network components
  • e.g. 10/100 Mbit Ethernet

8
Costs Associated with Supercomputing
  • Standard supercomputer: $10,000/GFLOPS
  • e.g. U.S. Dept. of Energy ASCI Project
  • Beowulf supercomputer: less than 1/10 the cost
  • KLAT2 cost $650/GFLOPS
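The price gap on this slide can be made concrete with a quick back-of-the-envelope calculation (a sketch using only the two per-GFLOPS figures quoted above):

```python
# Rough cost-per-GFLOPS comparison using the figures from this slide
# (circa-2000 numbers; KLAT2 was widely cited at about $650/GFLOPS).
traditional = 10_000   # $/GFLOPS, conventional supercomputer
klat2 = 650            # $/GFLOPS, KLAT2 Beowulf cluster

ratio = traditional / klat2
print(f"KLAT2 is roughly {ratio:.0f}x cheaper per GFLOPS")
```

Roughly a 15x difference, consistent with the "less than 1/10 the cost" claim.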

9
Our Hardware
  • 10 SGI Indy client nodes
  • R4600 133 MHz MIPS processor
  • 96 MB RAM
  • 10 Mbit Ethernet
  • 1 Intel-architecture gateway
  • Allows for multiple network interfaces
  • Gives external networks access to the cluster

10
Our Hardware
  • Ethernet switch
  • Improves network performance

11
Our Software
  • Linux-MIPS distribution
  • Kernel 2.2.x
  • PVM (Parallel Virtual Machine)
  • GCC (GNU Compiler Collection)
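PVM programs typically follow a master/worker pattern: a master task spawns workers across the nodes (via `pvm_spawn`), sends them work, and gathers results. As a minimal illustrative sketch of that pattern, here is a standard-library Python analogue (the cluster's real code would use PVM's C API and message passing, not local processes; the function names here are invented for illustration):

```python
from multiprocessing import Pool

def worker(chunk):
    # Toy work unit: each "node" squares its chunk of the input,
    # standing in for a PVM worker receiving a message and replying.
    return [x * x for x in chunk]

def master(data, n_nodes=4):
    # Split the data across the "nodes", as a PVM master would
    # before sending each worker its share.
    chunks = [data[i::n_nodes] for i in range(n_nodes)]
    with Pool(n_nodes) as pool:
        results = pool.map(worker, chunks)  # gather, like a pvm_recv loop
    return sorted(x for part in results for x in part)

if __name__ == "__main__":
    print(master(list(range(10))))
```

The same decompose/distribute/gather shape carries over directly to the PVM version; only the transport (local processes vs. messages over the cluster network) changes.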

12
Constraints
  • Money
  • Beowulf philosophy is low cost
  • We are college students
  • Familiarity with operating environment
  • IRIX is not a standard Unix
  • Linux is a common Beowulf environment
  • The Indy does not support Ethernet faster than 10 Mbit

13
Alternate Solution 1
  • Dr. Shi's NT machines
  • Disadvantages
  • Cost of NT
  • Small number of nodes
  • Cannot be dedicated to the cluster

14
Alternate Solution 2
  • Personal Machines
  • Disadvantages
  • Not homogeneous
  • Inconvenient

15
Alternate Solution 3
  • Dr. Pal's O2s
  • Disadvantages
  • Cost of IRIX
  • Distaste for IRIX
  • Cannot be dedicated to the cluster

16
Testing Methods
  • Network latency
  • CPU Performance
  • Scalability of design (e.g. 4 nodes vs. 8 nodes)
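The network-latency test above amounts to timing many small round trips and averaging. A minimal sketch of such a probe, using TCP over localhost for self-containment (a real run would connect client and server sockets on two different cluster nodes):

```python
import socket
import threading
import time

def echo_server(srv):
    # Accept one connection and echo everything back until it closes.
    conn, _ = srv.accept()
    with conn:
        while data := conn.recv(64):
            conn.sendall(data)

def measure_latency(n_pings=100):
    srv = socket.socket()
    srv.bind(("127.0.0.1", 0))       # localhost stand-in for a remote node
    srv.listen(1)
    threading.Thread(target=echo_server, args=(srv,), daemon=True).start()

    cli = socket.socket()
    cli.connect(srv.getsockname())
    start = time.perf_counter()
    for _ in range(n_pings):
        cli.sendall(b"ping")
        cli.recv(64)                 # block until the echo returns
    elapsed = time.perf_counter() - start
    cli.close()
    return elapsed / n_pings         # mean round-trip time in seconds

if __name__ == "__main__":
    print(f"mean RTT: {measure_latency() * 1e6:.1f} us")
```

Because each send waits for its echo before the next one, the average cleanly isolates round-trip latency rather than throughput.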

17
Testing Methods
  • Parallel applications
  • AI
  • NAS Parallel Benchmarks (NASA)
  • Texture mapping onto elevation maps

18
Schedule
  • End of semester
  • Fully configured OS on nodes
  • Network infrastructure complete
  • End of Feb.
  • Finish developing parallel applications
  • Finish developing testing tools
  • End of March
  • Have all data from tests gathered and compiled

19
Questions