1
MPI on Argo-new
  • Venkatram Vishwanath
  • venkat@evl.uic.edu
  • Electronic Visualization Laboratory
  • University of Illinois at Chicago

2
ARGO-NEW Cluster Configuration
  • 64-node heterogeneous cluster
  • 16 dual-core Opterons
  • argo1-1 to argo4-4
  • 16 single-processor Xeons
  • argo5-1 to argo8-4
  • 32 dual-core dual-processor Xeons
  • argo9-1 to argo16-4
  • Gigabit Ethernet connectivity between nodes
  • PBS batch scheduling system

3
Logging in to ARGO-new
  • Access Argo-new via ssh from a machine at UIC
  • Your account name is your UIC username
  • e.g. ssh vvishw1@argo-new.cc.uic.edu
  • Remote access from home is possible: log in through bert, ernie, or icarus first

4
Setting up the Environment Variables
  • MPICH2 (from Argonne National Laboratory) and the PGI (Portland Group) compilers are installed on Argo-new.
  • This talk focuses on MPICH2.
  • MPICH2-related environment settings (csh/tcsh):
  • setenv MPI_PATH /usr/common/mpich2-1.0.1
  • setenv LD_LIBRARY_PATH $MPI_PATH/lib:$LD_LIBRARY_PATH
  • setenv PATH $MPI_PATH/bin:$MPI_PATH/include:$PATH
  • In bash, the equivalents are:
  • export MPI_PATH=/usr/common/mpich2-1.0.1
  • export LD_LIBRARY_PATH=$MPI_PATH/lib:$LD_LIBRARY_PATH
  • export PATH=$MPI_PATH/bin:$MPI_PATH/include:$PATH

5
Configure the .mpd.conf file
  • Create a .mpd.conf file in your home directory
  • Add a single line: secretword=<your_secret_word>
  • vvishw1@argo-new$ cat .mpd.conf
  • secretword=sjdkfhsdkjf
  • Set the permissions of .mpd.conf to 600 (a shell sketch follows below)
  • MPI will NOT work if the permissions of .mpd.conf are, for example, 755
  • vvishw1@argo-new$ ls -al .mpd.conf
  • -rw------- 1 vvishw1 student 23 Feb 13 17:15 .mpd.conf
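In practice, both steps can be done with two shell commands (a minimal sketch; the secret word below is a placeholder, choose your own):

echo "secretword=<your_secret_word>" > ~/.mpd.conf   # the single required line
chmod 600 ~/.mpd.conf                                # owner read/write only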

6
A Typical MPICH2 Session on Argo-new
  • Log in to Argo-new
  • Write and compile your MPI program
  • Set up the MPD ring
  • Run the MPI Program using the PBS scheduler
  • Bring down the MPD ring
  • Logout

7
Compile your MPI Program
  • MPICH2 provides wrapper scripts for C and C++
  • Use mpicc in place of gcc and mpicxx in place of g++

#include <stdio.h>
#include "mpi.h"
#include <unistd.h>
#include <string.h> /* for memset */

int main(int argc, char *argv[])
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    char buf[1024];
    memset(buf, '\0', 1024);
    gethostname(buf, 1024);
    printf("Hello from %s, I'm rank %d. Size is %d\n", buf, rank, size);
    MPI_Finalize();
    return 0;
}
  • Use the following flags:
  • -I $MPI_PATH/include
  • -L $MPI_PATH/lib -lmpich
  • mpicc -o Hellow testHostname.c -I $MPI_PATH/include -L $MPI_PATH/lib -lmpich

8
Set up the MPD ring
  • The MPD ring needs to be set up on the nodes where the MPI program will run.
  • Launch the MPD daemons (a sample mpd.hosts follows below):
  • rsh <a_host_in_hostfile> "/usr/common/mpich2-1.0.1/bin/mpdboot -n <total_hosts_in_file> -f <path_to_hostfile> -r rsh -v"
  • e.g. rsh argo1-1 "/usr/common/mpich2-1.0.1/bin/mpdboot -n 4 -f $HOME/mpd.hosts -r rsh -v"
  • Check the status of the MPD daemons using mpdtrace:
  • rsh argo1-1 "/usr/common/mpich2-1.0.1/bin/mpdtrace"
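For reference, a minimal $HOME/mpd.hosts for the 4-node example above would simply list the hosts, one per line (a sketch; assumed to follow the same host-per-line format as the machinefiles on slide 12):

argo1-1
argo1-2
argo1-3
argo1-4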

9
Run the MPI Program using the PBS scheduler
  • Submit a job to the scheduler using qsub:
  • qsub -l nodes=<list_of_nodes> <job_script_file>
  • A typical job script runs mpiexec (a full script sketch follows below):
  • mpiexec -machinefile <complete_machinefile_path> -np <number_of_nodes> <complete_path_to_exec>
  • mpiexec -machinefile /home/homes51/vvishw1/my_machinefile -np 4 /home/homes51/vvishw1/workspace/hellow
  • qsub -l nodes=argo1-1+argo1-2+argo1-3+argo1-4 my_script.sh
  • qsub returns a status message giving you the job id:
  • 33112.argo-new.cc.uic.edu
  • Here 33112 is your job id
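A complete my_script.sh for the example above might look like this (a sketch: the #!/bin/sh header and the fully qualified mpiexec path are assumptions, not from the original slide):

#!/bin/sh
# Launch 4 MPI processes on the nodes listed in the machinefile.
/usr/common/mpich2-1.0.1/bin/mpiexec \
    -machinefile /home/homes51/vvishw1/my_machinefile \
    -np 4 /home/homes51/vvishw1/workspace/hellow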

10
A few things to remember
  • The list of nodes in qsub should match the nodes in the machinefile
  • The list of nodes in qsub should be a subset of the nodes used in the MPD ring
  • Restrict the number of jobs you submit at a time to 3

11
Argo-new stats online
  • Argo-new usage statistics are available online at
  • http://tigger.uic.edu/htbin/argo_acct.pl

12
Typical Machine Files
  • vvishw1@argo-new$ cat my_machinefile
  • argo1-1
  • argo1-2
  • argo1-3
  • argo1-4
  • To simulate 8 logical nodes on 4 physical nodes (usage sketch below):
  • vvishw1@argo-new$ cat my_machinefile
  • argo1-1:2
  • argo1-2:2
  • argo1-3:2
  • argo1-4:2
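With the second machinefile, a run can then request 8 processes (a sketch, assuming the MPD ring spans argo1-1 through argo1-4 as set up earlier):

mpiexec -machinefile /home/homes51/vvishw1/my_machinefile -np 8 /home/homes51/vvishw1/workspace/hellow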

13
Other Useful PBS commands
  • qstat <job_id>
  • - status of the job
  • qstat -f <job_id>
  • - detailed job information
  • qstat -u <username>
  • - information on all jobs submitted by that user
  • qdel <job_id>
  • - delete a submitted job
  • qnodes
  • - status of all the argo-new nodes
  • Use qnodes as a hint for choosing the nodes in your MPD ring.

14
The output of your MPI program
  • Standard output and standard error are both redirected to files:
  • <script_name>.e<job_id> and <script_name>.o<job_id>
  • e.g.
  • stderr in my_script.e3208
  • stdout in my_script.o3208
  • The .e file should usually be empty.

Slide courtesy Paul Sexton
15
Stop the MPD environment
  • The MPD ring needs to be brought down using mpdallexit:
  • rsh <host_in_config_file> "/usr/common/mpich2-1.0.1/bin/mpdallexit"
  • e.g. rsh argo1-1 "/usr/common/mpich2-1.0.1/bin/mpdallexit"

16
Miscellaneous Topics
  • Simulating topologies
  • MPI_Cart_create will help create 2D grids, hypercubes, etc. (see the sketch after this list)
  • Measuring time
  • MPI_Wtime
  • Ideally, performance results should be statistically significant
  • Computation time is the maximum over all nodes
  • Use MPE for in-depth performance analysis of MPI programs.
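A minimal sketch tying these together (illustrative only, not from the original slides; it assumes the MPICH2 setup used earlier): MPI_Dims_create picks a balanced 2D shape, MPI_Cart_create builds the grid, and MPI_Wtime with MPI_Reduce(MPI_MAX) implements the "maximum over all nodes" timing rule.

#include <stdio.h>
#include "mpi.h"

int main(int argc, char *argv[])
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Let MPI pick a balanced 2D shape, then build the grid topology. */
    int dims[2] = {0, 0}, periods[2] = {0, 0}, coords[2];
    MPI_Dims_create(size, 2, dims);
    MPI_Comm grid;
    MPI_Cart_create(MPI_COMM_WORLD, 2, dims, periods, 1, &grid);
    MPI_Comm_rank(grid, &rank);        /* ranks may be reordered in the grid */
    MPI_Cart_coords(grid, rank, 2, coords);
    printf("rank %d sits at grid position (%d, %d)\n", rank, coords[0], coords[1]);

    /* Time a computation; report the maximum across all nodes. */
    double t0 = MPI_Wtime();
    /* ... computation would go here ... */
    double local = MPI_Wtime() - t0, maxtime;
    MPI_Reduce(&local, &maxtime, 1, MPI_DOUBLE, MPI_MAX, 0, grid);
    if (rank == 0)
        printf("computation time (max over all nodes): %f s\n", maxtime);

    MPI_Finalize();
    return 0;
}

MPI_Wtime returns wall-clock seconds as a double, so the difference can be printed directly.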