Message Passing and MPI (CS433, Spring 2001)

1
Message Passing and MPI
CS433, Spring 2001
  • Laxmikant Kale

2
Message Passing
(Diagram: PE0 calls send and PE1 calls receive; the message data is copied from PE0's buffer into PE1's buffer.)
3
Basic Message Passing
  • We will describe a hypothetical message passing
    system,
  • with just a few calls that define the model
  • Later, we will look at real message passing
    models (e.g. MPI), with a more complex set of calls
  • Basic calls
  • send(int proc, int tag, int size, char *buf)
  • recv(int proc, int tag, int size, char *buf)
  • Recv may return the actual number of bytes
    received in some systems
  • tag and proc may be wildcarded in a recv, as in the sketch after this list
  • recv(ANY, ANY, 1000, buf)
  • broadcast
  • Other global operations (reductions)
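
A minimal sketch of how these hypothetical calls might be combined; send, recv, and ANY are the calls listed above, while doWork(), the tag value 7, and the casts are illustrative assumptions:

/* a worker computes a result and sends it to processor 0 with tag 7 */
double result = doWork();
send(0, 7, sizeof(double), (char *) &result);

/* processor 0 accepts a result from any worker, with any tag */
double incoming;
recv(ANY, ANY, sizeof(double), (char *) &incoming);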

4
Pi with message passing
int count = 0, c;
main() {
  Seed s = makeSeed(myProcessor);
  for (i = 0; i < 100000/P; i++) {
    x = random(s);  y = random(s);
    if (x*x + y*y < 1.0) count++;   /* point landed inside the unit circle */
  }
  send(0, 1, 4, &count);
5
Pi with message passing (continued)
  if (myProcessorNum() == 0) {
    for (i = 0; i < maxProcessors(); i++) {
      recv(i, 1, 4, &c);
      count += c;
    }
    printf("pi = %f\n", 4.0*count/100000);
  }
}  /* end function main */
6
Collective calls
  • Message passing is often, but not always, used
    for SPMD style of programming
  • SPMD: Single Program, Multiple Data
  • All processors execute essentially the same
    program, and same steps, but not in lockstep
  • All communication is almost in lockstep
  • Collective calls
  • global reductions (such as max or sum)
  • syncBroadcast (often just called broadcast)
  • syncBroadcast(whoAmI, dataSize, dataBuffer)
  • whoAmI: sender or receiver (a usage sketch follows this list)
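
A minimal sketch of how these collective calls might look in the hypothetical model above; syncBroadcast and a global reduction are the operations named on this slide, while the SENDER/RECEIVER constants and the globalSum() interface are illustrative assumptions:

double tolerance;
if (myProcessorNum() == 0) {
  /* the root supplies the data and plays the sender role */
  tolerance = 1.0e-6;
  syncBroadcast(SENDER, sizeof(double), (char *) &tolerance);
} else {
  /* every other processor plays the receiver role and gets a copy */
  syncBroadcast(RECEIVER, sizeof(double), (char *) &tolerance);
}

/* a global reduction: every processor contributes its localCount,
   and the sum is computed across all of them */
int total = globalSum(localCount);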

7
Standardization of message passing
  • Historically
  • nxlib (On Intel hypercubes)
  • ncube variants
  • PVM
  • Everyone had their own variants
  • MPI standard
  • Vendors, ISVs, and academics got together
  • with the intent of standardizing current practice
  • Ended up with a large standard
  • Popular, due to vendor support
  • Support for
  • communicators: avoiding tag conflicts, ...
  • Data types
  • ...

8
Basic MPI calls
  • MPI_Init(&argc, &argv) and MPI_Finalize()
  • MPI_Comm_rank(MPI_COMM_WORLD, &my_rank)
  • my_rank is an int (what is my processor's serial number?)
  • MPI_Comm_size(MPI_COMM_WORLD, &P)
  • P is an int: the total number of processors (processes)
  • MPI_Send(m, size, MPI_CHAR, dest, tag, MPI_COMM_WORLD)
  • MPI_Recv(m, size, MPI_CHAR, source, tag, MPI_COMM_WORLD, &status)
  • m is a char*; size, tag, dest, and source are ints; status is an MPI_Status
  • These 6 calls suffice to write many parallel
    programs!

9
A Simple MPI program

#include <stdio.h>
#include <string.h>
#include "mpi.h"
#define MPIW MPI_COMM_WORLD

main(int argc, char **argv)
{
  int me, P;
  char buf[10] = "hello";
  MPI_Status status;
  MPI_Init(&argc, &argv);
  MPI_Comm_rank(MPIW, &me);
  MPI_Comm_size(MPIW, &P);
  if (me != 0)
    MPI_Recv(buf, strlen(buf)+1, MPI_CHAR, me-1, 5, MPIW, &status);
  printf("%s from process %d\n", buf, me);
  if (me < P-1)
    MPI_Send(buf, strlen(buf)+1, MPI_CHAR, me+1, 5, MPIW);
  MPI_Finalize();
}
10
Review Basic MPI calls
  • MPI_Init(&argc, &argv) and MPI_Finalize()
  • MPI_Comm_rank(MPI_COMM_WORLD, &my_rank)
  • my_rank is an int (what is my processor's serial number?)
  • MPI_Comm_size(MPI_COMM_WORLD, &P)
  • P is an int: the total number of processors (processes)
  • MPI_Send(m, size, MPI_CHAR, dest, tag, MPI_COMM_WORLD)
  • MPI_Recv(m, size, MPI_CHAR, source, tag, MPI_COMM_WORLD, &status)
  • m is a char*; size, tag, dest, and source are ints; status is an MPI_Status
  • These 6 calls suffice to write many parallel
    programs!

So, what are MPI_CHAR and MPI_COMM_WORLD? Does MPI support other data types? Yes. Other worlds? Well, other communicators, which are partitions of this one (a small sketch follows).
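
A minimal sketch of both answers, assuming MPI_Init has already been called; MPI_INT, MPI_Comm_split, and MPI_Comm_rank are standard MPI, while the variables (values, dest, tag, my_rank, my_half_rank) and the even/odd grouping are illustrative assumptions:

/* other data types: send four ints instead of chars */
int values[4] = {1, 2, 3, 4};
MPI_Send(values, 4, MPI_INT, dest, tag, MPI_COMM_WORLD);

/* other communicators: split MPI_COMM_WORLD into an even-rank
   group and an odd-rank group; ranks are renumbered within each */
MPI_Comm half;
int my_half_rank;
MPI_Comm_split(MPI_COMM_WORLD, my_rank % 2, my_rank, &half);
MPI_Comm_rank(half, &my_half_rank);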
11
MPI collective communications
  • Reductions and Broadcasts
  • MPI_Bcast(msg, size, datatype, root,
    communicator)
  • Note: all processes must call this, including the root
  • It is an implicit send by the root and a recv by the others (see the sketch after this list)
  • MPI_Reduce(data, result, size, type, op, root,
    comm)
  • data and result are pointers; op specifies the operation (sum, max, min, ...)
  • size is the count of data items (not bytes)
  • MPI_Barrier(Comm)
  • MPI_Gather
  • collects data from every process in one place
  • MPI_Scatter
  • the reverse of gather: distributes data from one place to every process
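
A minimal sketch of MPI_Bcast and MPI_Reduce, assuming MPI_Init has already been called and each process has computed a local_count as in the earlier Pi example (n, local_count, and my_rank are illustrative names):

int n = 100000;
/* root 0 supplies n; after the call every process has the same value */
MPI_Bcast(&n, 1, MPI_INT, 0, MPI_COMM_WORLD);

/* sum the per-process counts; only root 0 receives the total */
int total = 0;
MPI_Reduce(&local_count, &total, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);
if (my_rank == 0)
  printf("pi is approximately %f\n", 4.0 * total / n);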