Title: Operating Systems
1. Operating Systems
- Session 2
- Processes and Threads
- Parallel Computing
- Multitasking, Multiprocessing, Multithreading
- MPI
Juan Carlos Martinez
2. Parallel Computing
- A task is broken down into sub-tasks, performed by separate workers or processes.
- Processes interact by exchanging information.
- What do we basically need?
- The ability to start the tasks.
- A way for them to communicate.
3. Parallel Computing
- Why do we need it?
- Speedup!!!
- Alternatives
- Simple Local Multithreaded Applications
- Java Library for threads
- Clusters
- MPI
- Grid
- Globus Toolkit 4, Grid Superscalar
4. Parallel Computing
- Pros and Cons
- Pros: Better performance (speedup)
- Blocking (one task can block while others continue), taking advantage of multiple CPUs, prioritization
- Cons: More complex code
- Concurrency problems
- Deadlocks, integrity
5. Multitasking, Multiprocessing and Multithreading
- Multitasking
- The ability of an OS to switch among tasks quickly, to give the appearance of simultaneous execution of those tasks.
- E.g. Windows XP
- Multiprocessing vs Multithreading
- First: what's the difference between a process and a thread?
6. Multitasking, Multiprocessing and Multithreading
- Multiprocessing vs Multithreading
- Differences between Processes and Threads
- The basic difference is that fork() creates a new process (a child process), while with threads no new process is created (everything stays in one process).
- The relation is one-to-many (one process, many threads).
- With threads, data can be "shared" with other threads; with fork() it cannot.
- Shared memory space vs individual memory space
- Now, which one should be faster, and why? (See the sketch below.)
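The contrast is easy to see in code. A minimal sketch (my own, not from the slides), assuming a POSIX system compiled with gcc -pthread:

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>
#include <pthread.h>

int counter = 0;                /* one global variable */

void *thread_body(void *arg)    /* runs in the SAME address space */
{
    counter++;                  /* the parent sees this: memory is shared */
    return NULL;
}

int main(void)
{
    pid_t pid = fork();         /* child gets a COPY of the address space */
    if (pid == 0) {
        counter++;              /* modifies only the child's private copy */
        exit(0);
    }
    wait(NULL);
    printf("after fork:   counter = %d\n", counter);   /* prints 0 */

    pthread_t t;
    pthread_create(&t, NULL, thread_body, NULL);
    pthread_join(t, NULL);
    printf("after thread: counter = %d\n", counter);   /* prints 1 */
    return 0;
}

No new process is created by pthread_create(), so the thread's increment lands in the parent's memory, while the forked child's increment dies with its private copy. This also suggests the answer to the speed question: threads are cheaper to create and switch, since no address space has to be duplicated.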
7. Multitasking, Multiprocessing and Multithreading
- Multiprocessing
- A multi-processing application is one that has multiple processes running on different processors of the same or different computers (even across OSs).
- Process -> own memory space -> more memory resources
- The advantage of using processes instead of threads is that there is very little synchronization overhead between processes; in video software, for example, this can lead to faster renders than multithreaded renders.
- Another benefit: an error in one process does not affect other processes.
- Contrast this with multi-threading, in which an error in one thread can bring down all the threads in the process.
- Further, individual processes may be run as different users and have different permissions.
8. Multitasking, Multiprocessing and Multithreading
- Multithreading
- Multi-threading refers to an application with multiple threads running within a process.
- A thread is a stream of instructions within a process. Each thread has its own instruction pointer, set of registers and stack memory. The virtual address space is process-specific, i.e. common to all threads within a process. So data on the heap can be readily accessed by all threads, for good or ill (see the sketch below).
- Switching and synchronization costs are lower. The shared address space (noted above) means data sharing requires no extra work.
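A minimal pthreads sketch of that point (my own, assuming gcc -pthread): the heap cell is visible to every thread, while each thread's local variable lives on its own private stack:

#include <stdio.h>
#include <stdlib.h>
#include <pthread.h>

int *heap_value;                     /* on the heap: one copy, seen by all threads */

void *worker(void *arg)
{
    int local = (int)(long)arg;      /* on THIS thread's private stack */
    *heap_value += local;            /* unsynchronized: shared "for good or ill" */
    return NULL;
}

int main(void)
{
    heap_value = malloc(sizeof *heap_value);
    *heap_value = 0;
    pthread_t t[2];
    for (long i = 0; i < 2; i++)
        pthread_create(&t[i], NULL, worker, (void *)(i + 1));
    for (int i = 0; i < 2; i++)
        pthread_join(t[i], NULL);
    printf("heap_value = %d\n", *heap_value);   /* 3, if the data race stays benign */
    free(heap_value);
    return 0;
}

The unguarded += is exactly the "ill" side: real code would protect it with a pthread_mutex_t, which is the synchronization cost the slide mentions.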
9. MPI
- A message-passing library specification
- Message-passing model
- Not a compiler specification (i.e. not a language)
- Not a specific product
- Designed for parallel computers, clusters, and heterogeneous networks
10. MPI
- Development began in early 1992
- Open process / broad participation
- IBM, Intel, TMC, Meiko, Cray, Convex, nCUBE
- PVM, p4, Express, Linda, ...
- Laboratories, universities, government
- Final version of the draft in May 1994
- Public and vendor implementations are now widely available
11. MPI
- Point-to-Point Communication
- A message is sent from a sender to a receiver
- There are several variations on how the sending of a message can interact with the program (see the sketch below)
- Synchronous
- A synchronous communication does not complete until the message has been received
- A fax or registered mail
- Asynchronous
- An asynchronous communication completes as soon as the message is on the way
- A postcard or email
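A minimal point-to-point sketch (my own, not from the slides): rank 0 sends one integer to rank 1. MPI_Ssend is the synchronous variant, which does not complete until the matching receive has started; plain MPI_Send may complete as soon as the message is buffered and on its way.

#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank, value;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    if (rank == 0) {
        value = 42;
        /* synchronous send: blocks until rank 1 has started receiving */
        MPI_Ssend(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("rank 1 received %d\n", value);
    }
    MPI_Finalize();
    return 0;
}

Run with at least two processes, e.g. mpirun -np 2 ./a.out.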
12. MPI
- Blocking and Non-blocking
- Blocking operations only return when the operation has been completed
- A printer
- Non-blocking operations return right away and allow the program to do other work (see the sketch below)
- TV capture cards (can record one channel while still watching another)
- Collective Communications
- Point-to-point communications involve pairs of processes.
- Many message-passing systems provide operations which allow larger numbers of processes to participate
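A sketch of the non-blocking calls (my own, not from the slides): MPI_Isend returns immediately, the program overlaps other work with the transfer, and MPI_Wait marks the point where the buffer may safely be reused:

#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank, out = 7, in = 0;
    MPI_Request req;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    if (rank == 0) {
        MPI_Isend(&out, 1, MPI_INT, 1, 0, MPI_COMM_WORLD, &req);
        /* ... do other useful work here while the message is in flight ... */
        MPI_Wait(&req, MPI_STATUS_IGNORE);  /* 'out' must not be reused before this */
    } else if (rank == 1) {
        MPI_Irecv(&in, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &req);
        MPI_Wait(&req, MPI_STATUS_IGNORE);
        printf("rank 1 got %d\n", in);
    }
    MPI_Finalize();
    return 0;
}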
13. MPI
- Types of Collective Transfers
- Barrier
- Synchronizes processors
- No data is exchanged, but the barrier blocks until all processes have called the barrier routine
- Broadcast (sometimes multicast)
- A broadcast is a one-to-many communication
- One processor sends one message to several destinations
- Reduction
- Often useful in a many-to-one communication (all three appear in the sketch below)
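All three in one short sketch (my own, not from the slides): the root broadcasts a value, every rank contributes to a sum that is reduced back onto the root, and a barrier synchronizes everyone before the result is printed:

#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank, size, data = 0, sum = 0;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (rank == 0) data = 100;                        /* only the root has the value */
    MPI_Bcast(&data, 1, MPI_INT, 0, MPI_COMM_WORLD);  /* one-to-many: now everyone does */

    int mine = rank + data;                           /* each rank's contribution */
    MPI_Reduce(&mine, &sum, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD); /* many-to-one */

    MPI_Barrier(MPI_COMM_WORLD);                      /* everyone blocks here until all arrive */
    if (rank == 0) printf("sum = %d\n", sum);
    MPI_Finalize();
    return 0;
}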
14. MPI
- What's in a Message?
- An MPI message is an array of elements of a particular MPI datatype.
- All MPI messages are typed
- The type of the contents must be specified in both the send and the receive
15. Basic MPI Data Types
MPI Datatype         C Type
MPI_CHAR             signed char
MPI_SHORT            signed short int
MPI_INT              signed int
MPI_LONG             signed long int
MPI_UNSIGNED_CHAR    unsigned char
MPI_UNSIGNED_SHORT   unsigned short int
16. Basic MPI Data Types
MPI Datatype         C Type
MPI_UNSIGNED         unsigned int
MPI_UNSIGNED_LONG    unsigned long int
MPI_FLOAT            float
MPI_DOUBLE           double
MPI_LONG_DOUBLE      long double
MPI_BYTE             (none)
MPI_PACKED           (none)
17. General MPI Program Structure
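In code form, the skeleton every MPI program follows (a sketch of what this slide outlines):

#include <mpi.h>
/* serial code: declarations, non-parallel setup */
int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);      /* initialize the MPI environment */
    /* ... parallel region: communication and computation ... */
    MPI_Finalize();              /* terminate the MPI environment  */
    return 0;                    /* serial code may follow         */
}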
18. Sample Program: Hello World!
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int myrank, size;

    /* Initialize MPI */
    MPI_Init(&argc, &argv);

    /* Get my rank */
    MPI_Comm_rank(MPI_COMM_WORLD, &myrank);

    /* Get the total number of processors */
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    printf("Processor %d of %d: Hello World!\n", myrank, size);

    MPI_Finalize(); /* Terminate MPI */
    return 0;
}
19. MPI
- Finally, a little more complex example
- You've got 4 computers in a cluster; name them A, B, C and D.
- Your application should perform the following operations (At denotes the transpose of A):
- V = At x B
- W = A x Bt
- X = V x Bt
- Y = W x At
- Z = X + Y
- Ideas or suggestions for this? (One possible decomposition is sketched below.)
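One possible decomposition (a sketch, not necessarily the intended answer): V and W depend only on the inputs A and B, so two machines can compute them concurrently in a first stage; X needs V and B, and Y needs W and A, so a second pair of machines can compute them concurrently once V and W have been shipped over; finally a single node gathers X and Y and combines them into Z. In MPI terms, each stage boundary is a point-to-point send of an intermediate matrix, and the last step is a gather or reduction onto one rank.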
20. Reminders
- Don't forget to choose your course project.
- Have a good weekend!