Title: CS 584
1. CS 584
2. Message Passing
- Based on the multiprocessor model
- Set of independent processors
- Connected via some communication network
- All communication between processes is done via a message sent from one to the other
3. MPI
- Message Passing Interface
- A computation is made up of
  - one or more processes
  - communicating by calling library routines
- MIMD programming model
  - SPMD is most common
4. MPI
- Processes use point-to-point communication operations
- Collective communication operations are also available
- Communication can be modularized by the use of communicators
  - MPI_COMM_WORLD is the base communicator
  - Communicators are used to identify subsets of processors
5. MPI
- Complex, but most problems can be solved using the 6 basic functions:
  - MPI_Init
  - MPI_Finalize
  - MPI_Comm_size
  - MPI_Comm_rank
  - MPI_Send
  - MPI_Recv
6. MPI Basics
- Almost all calls require a communicator handle as an argument
  - e.g. MPI_COMM_WORLD
- MPI_Init and MPI_Finalize
  - don't require a communicator handle
  - used to begin and end an MPI program
  - MUST be called to begin and end
7. MPI Basics
- MPI_Comm_size
  - determines the number of processors in the communicator group
- MPI_Comm_rank
  - determines the integer identifier assigned to the current process
  - zero-based
8. MPI Basics

#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int iproc, nproc;
    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &nproc);
    MPI_Comm_rank(MPI_COMM_WORLD, &iproc);
    printf("I am processor %d of %d\n", iproc, nproc);
    MPI_Finalize();
    return 0;
}
9. MPI Communication
- MPI_Send
  - Sends an array of a given type
  - Requires a destination node, size, and type
- MPI_Recv
  - Receives an array of a given type
  - Same requirements as MPI_Send
  - Extra parameter: an MPI_Status variable (see the sketch after this list)
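A minimal sketch of the send/receive pairing above, assuming two processes; the buffer name, element count, and tag are illustrative, not from the slides:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int rank;
    double buf[10] = {0};      /* illustrative buffer */
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    if (rank == 0) {
        /* destination rank 1, tag 0 */
        MPI_Send(buf, 10, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        /* source rank 0, tag 0; status records the actual source and tag */
        MPI_Recv(buf, 10, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD, &status);
        printf("rank 1 received from rank %d\n", status.MPI_SOURCE);
    }
    MPI_Finalize();
    return 0;
}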
11. MPI Basics
- Made for both FORTRAN and C
- Standards for C
  - MPI_ prefix on all calls
  - First letter of the function name is capitalized
  - Returns MPI_SUCCESS or an error code
  - MPI_Status structure
  - MPI data types for each C type
  - OUT parameters are passed using the & (address-of) operator
12. Using MPI
- Based on rsh
  - requires a .rhosts file (not on the SP)
  - format: hostname login
- Path to compiler
  - MPI_HOME=/users/faculty/snell/mpich
  - MPI_CC=$MPI_HOME/bin/mpicc
  - use mpcc on the SP
13. Using MPI
- Write the program
- Compile using mpicc or mpcc
- Write a process file (Linux cluster)
  - each line: host nprocs full_path_to_prog
  - nprocs is 0 on the first line and 1 on all others
- Run the program (Linux cluster)
  - prog -p4pg process_file args
  - mpirun -np nprocs -machinefile machines prog (an example process file follows this list)
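As an illustrative sketch, and assuming the hostnames and paths below (they are made up, not from the slides), a process file for three processes might look like:

node0 0 /home/me/myprog
node1 1 /home/me/myprog
node2 1 /home/me/myprog

It would then be run either directly as myprog -p4pg process_file args, or as mpirun -np 3 -machinefile machines myprog.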
14. Example
- HINT benchmark
- Found at /users/faculty/snell/CS584/HINT or /gfaculty/snell/Hint
15. Example

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <math.h>
#define MAXSIZE 1000

int main(int argc, char *argv[])
{
    int myid, numprocs;
    int data[MAXSIZE], i, x, low, high, myresult = 0, result;
    char fn[255];
    FILE *fp;

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &numprocs);
    MPI_Comm_rank(MPI_COMM_WORLD, &myid);
    if (myid == 0) {
        /* Open input file and initialize data */
        strcpy(fn, getenv("HOME"));
        strcat(fn, "/MPI/rand_data.txt");
        if ((fp = fopen(fn, "r")) == NULL) {
            printf("Can't open the input file %s\n\n", fn);
            exit(1);
        }
        for (i = 0; i < MAXSIZE; i++)
            fscanf(fp, "%d", &data[i]);
    }
    /* broadcast data */
    MPI_Bcast(data, MAXSIZE, MPI_INT, 0, MPI_COMM_WORLD);
    /* Add my portion of data */
    x = MAXSIZE / numprocs;
    low = myid * x;
    high = low + x;
    for (i = low; i < high; i++)
        myresult += data[i];
    printf("I got %d from %d\n", myresult, myid);
    /* Compute global sum */
    MPI_Reduce(&myresult, &result, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);
    if (myid == 0)
        printf("The sum is %d.\n", result);
    MPI_Finalize();
    return 0;
}
16. MPI
- Message passing programs are non-deterministic because of concurrency
  - Consider 2 processes sending messages to a third
- MPI does guarantee that 2 messages sent from a single process to another will arrive in order
- It is the programmer's responsibility to ensure computation determinism
17. MPI Determinism
- MPI
  - A process may specify the source of the message
  - A process may specify the type (tag) of the message
- Non-determinism
  - MPI_ANY_SOURCE or MPI_ANY_TAG
18. Example

for (n = 0; n < nproc/2; n++) {
    MPI_Send(buff, BSIZE, MPI_FLOAT, rnbor, 1, MPI_COMM_WORLD);
    MPI_Recv(buff, BSIZE, MPI_FLOAT, MPI_ANY_SOURCE, 1, MPI_COMM_WORLD, &status);
    /* Process the data */
}
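As a hedged variant of the loop above (the same surrounding declarations are assumed), naming a specific source instead of MPI_ANY_SOURCE removes the non-determinism discussed on the previous slide:

for (n = 0; n < nproc/2; n++) {
    MPI_Send(buff, BSIZE, MPI_FLOAT, rnbor, 1, MPI_COMM_WORLD);
    /* Receiving from a named neighbor (rnbor is assumed here) makes the
       match deterministic; MPI_ANY_SOURCE would accept any sender */
    MPI_Recv(buff, BSIZE, MPI_FLOAT, rnbor, 1, MPI_COMM_WORLD, &status);
    /* Process the data */
}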
19. Global Operations
- Coordinated communication involving multiple processes
- Can be implemented by the programmer using sends and receives
- For convenience, MPI provides a suite of collective communication functions
20. Collective Communication
- Barrier
  - Synchronize all processes
- Broadcast
  - Send data from one process to all others
- Gather
  - Gather data from all processes to one process
- Scatter
  - Distribute data from one process to all processes (a scatter/gather sketch follows this list)
- Reduction
  - Global sums, products, etc.
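A minimal sketch of scatter and gather; the element counts and names are illustrative assumptions, and the data is assumed to divide evenly among processes:

#include <mpi.h>
#include <stdlib.h>

#define N 4   /* elements per process; illustrative */

int main(int argc, char *argv[])
{
    int rank, nproc, i;
    int *full = NULL;
    int part[N];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nproc);

    if (rank == 0) {
        /* Root owns the full array */
        full = malloc(nproc * N * sizeof(int));
        for (i = 0; i < nproc * N; i++) full[i] = i;
    }
    /* Root hands each process its N-element slice */
    MPI_Scatter(full, N, MPI_INT, part, N, MPI_INT, 0, MPI_COMM_WORLD);
    for (i = 0; i < N; i++) part[i] *= 2;   /* local work */
    /* Root collects the modified slices back in rank order */
    MPI_Gather(part, N, MPI_INT, full, N, MPI_INT, 0, MPI_COMM_WORLD);
    if (rank == 0) free(full);
    MPI_Finalize();
    return 0;
}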
22. MPI_Reduce

MPI_Reduce(inbuf, outbuf, count, type, op, root, comm)
24. MPI_Allreduce

MPI_Allreduce(inbuf, outbuf, count, type, op, comm)

Note: unlike MPI_Reduce, MPI_Allreduce takes no root argument; the result is returned to all processes.
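A hedged illustration of the difference (the variable names are assumptions): MPI_Reduce leaves the result only on the root process, while MPI_Allreduce returns it to every process:

int localmax, globalmax;
/* ... compute localmax ... */
/* Result lands only on rank 0 */
MPI_Reduce(&localmax, &globalmax, 1, MPI_INT, MPI_MAX, 0, MPI_COMM_WORLD);
/* Every rank receives the result; note there is no root argument */
MPI_Allreduce(&localmax, &globalmax, 1, MPI_INT, MPI_MAX, MPI_COMM_WORLD);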
25. (Diagram: steps of a typical data-parallel computation)
- Distribute problem size
- Distribute input data
- Exchange boundary values
- Find max error
- Collect results
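A hedged skeleton of that structure (the function, variable names, and convergence loop are illustrative assumptions), using MPI_Allreduce to find the global maximum error on each iteration:

#include <mpi.h>

/* Skeleton only: data distribution, the boundary exchange with
   MPI_Send/MPI_Recv, and the local update are elided */
void iterate(double *local, int n)
{
    double localerr, maxerr = 1.0;
    while (maxerr > 1e-6) {
        /* ... exchange boundary values with neighbors ... */
        localerr = 0.0;   /* placeholder: set by the real local update */
        /* Find max error across all processes */
        MPI_Allreduce(&localerr, &maxerr, 1, MPI_DOUBLE, MPI_MAX,
                      MPI_COMM_WORLD);
    }
    /* ... gather results to the root process ... */
}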
26. Other MPI Features
- Asynchronous communication
  - MPI_Isend (see the sketch after this list)
  - MPI_Wait and MPI_Test
  - MPI_Probe and MPI_Get_count
- Modularity
  - Communicator creation routines
- Derived datatypes
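A minimal sketch of the asynchronous send (the destination, count, and tag are assumptions): MPI_Isend returns immediately with a request handle, and MPI_Wait blocks until the operation completes:

MPI_Request req;
MPI_Status status;
double buf[100];
int dest = 1;   /* illustrative destination rank */

/* Start the send without blocking */
MPI_Isend(buf, 100, MPI_DOUBLE, dest, 0, MPI_COMM_WORLD, &req);
/* ... overlap computation here; buf must not be modified yet ... */
MPI_Wait(&req, &status);   /* blocks until the send has completed */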