Title: MPI Program Structure
Topics
- This chapter introduces the basic structure of an MPI program. After sketching this structure using a generic pseudo-code, specific program elements are described in detail for C. These include:
  - Header files
  - MPI naming conventions
  - MPI routines and return values
  - MPI handles
  - MPI datatypes
  - Initializing and terminating MPI
  - Communicators
  - Getting communicator information: rank and size
A Generic MPI Program
- All MPI programs have the following general structure:
  - include MPI header file
  - variable declarations
  - initialize the MPI environment
  - ...do computation and MPI communication calls...
  - close MPI communications
- The MPI header file contains MPI-specific definitions and function prototypes.
- Then, following the variable declarations, each process calls an MPI routine that initializes the message-passing environment. All calls to MPI communication routines must come after this initialization.
- Finally, before the program ends, each process must call a routine that terminates MPI. No MPI routines may be called after the termination routine is called. Note that if any process does not reach this point during execution, the program will appear to hang.
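The same structure, rendered as a minimal C sketch (the routines shown here are described in the remainder of this chapter; the computation itself is omitted):

    #include <mpi.h>    /* MPI header file */

    int main(int argc, char *argv[])
    {
        /* variable declarations */
        int ierr;

        /* initialize the MPI environment */
        ierr = MPI_Init(&argc, &argv);

        /* ...do computation and MPI communication calls... */

        /* close MPI communications */
        ierr = MPI_Finalize();
        return 0;
    }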
MPI Header Files
- MPI header files contain the prototypes for MPI functions/subroutines, as well as definitions of macros, special constants, and data types used by MPI. An appropriate "include" statement must appear in any source file that contains MPI function calls or constants:

    #include <mpi.h>
MPI Naming Conventions
- The names of all MPI entities (routines, constants, types, etc.) begin with MPI_ to avoid conflicts.
- C function names have a mixed case:
  - MPI_Xxxxx(parameter, ... )
  - Example: MPI_Init(&argc, &argv)
- The names of MPI constants are all upper case in both C and Fortran, for example:
  - MPI_COMM_WORLD, MPI_REAL, ...
- In C, specially defined types correspond to many MPI entities. (In Fortran these are all integers.) Type names follow the C function naming convention above; for example:
  - MPI_Comm is the type corresponding to an MPI "communicator".
MPI Routines and Return Values
- MPI routines are implemented as functions in C. Generally, an error code is returned, enabling you to test for the successful operation of the routine.
- In C, MPI functions return an int, which indicates the exit status of the call:

    int ierr;
    ...
    ierr = MPI_Init(&argc, &argv);
    ...
- The error code returned is MPI_SUCCESS if the routine ran successfully (that is, the integer returned is equal to the pre-defined integer constant MPI_SUCCESS). Thus, you can test for successful operation with:

    if (ierr == MPI_SUCCESS) {
        ...routine ran correctly...
    }

- If an error occurred, then the integer returned has an implementation-dependent value indicating the specific error.
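A minimal sketch of this check in context; MPI_Abort is the standard routine for shutting down a run after a failure (note that under the default error handler, most implementations abort on error before a code is even returned):

    ierr = MPI_Init(&argc, &argv);
    if (ierr != MPI_SUCCESS) {
        /* error code is implementation-dependent; terminate all processes */
        MPI_Abort(MPI_COMM_WORLD, ierr);
    }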
MPI Handles
- MPI defines and maintains its own internal data structures related to communication, etc. You reference these data structures through handles. Handles are returned by various MPI calls and may be used as arguments in other MPI calls.
- In C, handles are pointers to specially defined datatypes (created via the C typedef mechanism). Arrays are indexed starting at 0.
- Examples:
  - MPI_SUCCESS - an integer, used to test error codes.
  - MPI_COMM_WORLD - in C, an object of type MPI_Comm (a "communicator"); it represents a pre-defined communicator consisting of all processors.
- Handles may be copied using the standard assignment operation.
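For example, a sketch of declaring a handle and copying one by assignment (my_comm is an illustrative variable name):

    MPI_Comm my_comm;          /* declare a communicator handle */
    my_comm = MPI_COMM_WORLD;  /* handles may be copied by assignment */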
MPI Datatypes
- MPI provides its own reference data types corresponding to the various elementary data types in C.
- MPI allows automatic translation between representations in a heterogeneous environment.
- As a general rule, the MPI datatype given in a receive must match the MPI datatype specified in the send (see the sketch after this list).
- In addition, MPI allows you to define arbitrary data types built from the basic types.
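As an illustration of the matching rule, a sketch of a message passed between ranks 0 and 1 (MPI_Send and MPI_Recv themselves are covered later; myrank is assumed to hold the caller's rank):

    int value = 42;
    MPI_Status status;
    if (myrank == 0)
        MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);           /* send one MPI_INT */
    else if (myrank == 1)
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &status);  /* receive must also specify MPI_INT */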
Basic MPI Datatypes (C)
    MPI Datatype         C Type
    MPI_CHAR             signed char
    MPI_SHORT            signed short int
    MPI_INT              signed int
    MPI_LONG             signed long int
    MPI_UNSIGNED_CHAR    unsigned char
    MPI_UNSIGNED_SHORT   unsigned short int
    MPI_UNSIGNED         unsigned int
    MPI_UNSIGNED_LONG    unsigned long int
    MPI_FLOAT            float
    MPI_DOUBLE           double
    MPI_LONG_DOUBLE      long double
    MPI_BYTE             (none)
    MPI_PACKED           (none)
Special MPI Datatypes (C)
- In C, MPI provides several special datatypes (structures). Examples include:
  - MPI_Comm - a communicator
  - MPI_Status - a structure containing several pieces of status information for MPI calls
  - MPI_Datatype - an MPI datatype
- These are used in variable declarations, for example:

    MPI_Comm some_comm;

  declares a variable called some_comm, which is of type MPI_Comm (i.e., a communicator).
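A minimal sketch declaring each of these (the variable names are illustrative):

    MPI_Comm some_comm;    /* a communicator */
    MPI_Status status;     /* status information from an MPI call */
    MPI_Datatype dtype;    /* an MPI datatype */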
Initializing MPI
- The first MPI routine called in any MPI program must be the initialization routine MPI_INIT. This routine establishes the MPI environment, returning an error code if there is a problem.

    int ierr;
    ...
    ierr = MPI_Init(&argc, &argv);

- Note that the arguments to MPI_Init are the addresses of argc and argv, the variables that contain the command-line arguments for the program.
Communicators
- A communicator is a handle representing a group of processors that can communicate with one another.
- The communicator name is required as an argument to all point-to-point and collective operations.
- The communicator specified in the send and receive calls must agree for communication to take place.
- Processors can communicate only if they share a communicator.
- There can be many communicators, and a given processor can be a member of a number of different communicators. Within each communicator, processors are numbered consecutively (starting at 0). This identifying number is known as the rank of the processor in that communicator.
- The rank is also used to specify the source and destination in send and receive calls.
- If a processor belongs to more than one communicator, its rank in each can (and usually will) be different!
- MPI automatically provides a basic communicator
called MPI_COMM_WORLD. It is the communicator
consisting of all processors. Using
MPI_COMM_WORLD, every processor can communicate
with every other processor. You can define
additional communicators consisting of subsets of
the available processors.
[Figure: the communicator MPI_COMM_WORLD containing two sub-communicators, Comm1 and Comm2]
Getting Communicator Information: Rank
- A processor can determine its rank in a communicator with a call to MPI_COMM_RANK.
  - Remember: ranks are consecutive and start at 0.
  - A given processor may have different ranks in the various communicators to which it belongs.

    int MPI_Comm_rank(MPI_Comm comm, int *rank);

- The argument comm is a variable of type MPI_Comm, a communicator. For example, you could use MPI_COMM_WORLD here. Alternatively, you could pass the name of another communicator you have defined elsewhere. Such a variable would be declared as:

    MPI_Comm some_comm;

- Note that the second argument is the address of the integer variable rank.
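For example, a minimal sketch:

    int myrank;
    MPI_Comm_rank(MPI_COMM_WORLD, &myrank);  /* note the address of myrank */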
Getting Communicator Information: Size
- A processor can also determine the size, or number of processors, of any communicator to which it belongs with a call to MPI_COMM_SIZE.

    int MPI_Comm_size(MPI_Comm comm, int *size);

- The argument comm is of type MPI_Comm, a communicator.
- Note that the second argument is the address of the integer variable size.
- For the communicators in the figure above:
  - MPI_Comm_size(MPI_COMM_WORLD, &size)  ->  size = 7
  - MPI_Comm_size(Comm1, &size1)          ->  size1 = 4
  - MPI_Comm_size(Comm2, &size2)          ->  size2 = 3
Terminating MPI
- The last MPI routine called should be MPI_FINALIZE, which:
  - cleans up all MPI data structures, cancels operations that never completed, etc.
  - must be called by all processes; if any one process does not reach this statement, the program will appear to hang.
- Once MPI_FINALIZE has been called, no other MPI routines (including MPI_INIT) may be called.

    int err;
    ...
    err = MPI_Finalize();
Sample Program: Hello World!
- In this modified version of the "Hello World" program, each processor prints its rank as well as the total number of processors in the communicator MPI_COMM_WORLD.
- Notes:
  - Makes use of the pre-defined communicator MPI_COMM_WORLD.
  - Not testing for error status of routines!
    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char *argv[])
    {
        int myrank, size;

        MPI_Init(&argc, &argv);                  /* Initialize MPI */
        MPI_Comm_rank(MPI_COMM_WORLD, &myrank);  /* Get my rank */
        MPI_Comm_size(MPI_COMM_WORLD, &size);    /* Get the total number of processors */
        printf("Processor %d of %d Hello World!\n", myrank, size);
        MPI_Finalize();                          /* Terminate MPI */
        return 0;
    }
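With a typical MPI implementation, this program can be compiled with the implementation's compiler wrapper (commonly mpicc) and launched on four processors with a command such as mpirun -np 4; the exact command names vary between implementations.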
Sample Program: Output
- Running this code on four processors will produce a result like:

    Processor 2 of 4 Hello World!
    Processor 1 of 4 Hello World!
    Processor 3 of 4 Hello World!
    Processor 0 of 4 Hello World!

- Each processor executes the same code, including probing for its rank and size and printing the string.
- The order of the printed lines is essentially random!
  - There is no intrinsic synchronization of operations on different processors.
  - Each time the code is run, the order of the output lines may change.