205CSC316 High Performance Computing

Transcript and Presenter's Notes

1
  • PROGRAMMING HIGH PERFORMANCE COMPUTERS
  • OBJECTIVES
  • IN THIS SECTION WE WILL ...
  • INTRODUCE THE NEED FOR ABSTRACTION
  • COMPARE AND CONTRAST SHARED MEMORY AND MESSAGE PASSING PROGRAMMING MODELS
  • OVERVIEW THE DATA PARALLEL PROGRAMMING MODEL
  • REVIEW AND ANALYSE THE MPI MESSAGE PASSING LIBRARY FOR SOFTWARE DEVELOPMENT

2
ABSTRACT MODEL
  • SEPARATION OF CONCERNS
  • DISTINGUISH USER CONCERNS FROM THOSE OF THE SYSTEM
  • DEVELOP AN ABSTRACT PROGRAMMING MODEL
  • COMMON PROGRAMMING MODEL ACROSS A RANGE OF PARALLEL SYSTEMS
3
PROGRAMMING MODELS
SHARED-MEMORY PROGRAMMING MODEL
  • processes share a common address space, and data are shared by a process directly referencing that address space
  • no explicit action is required for data to be shared
  • process synchronisation is explicit
  • the programmer must identify when and what data are being shared and must synchronise the processes using special synchronisation constructs
  • these ensure the proper ordering of accesses to shared variables by the different processes (see the sketch below)
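
A minimal sketch of these points, assuming POSIX threads in C (the thread count, the shared counter and all names are illustrative, not part of the course material): every thread references the same address space, so nothing special is needed to share the counter, but the mutex supplies the explicit synchronisation that orders the competing updates.

/* Shared-memory sketch: `counter` is visible to every thread through the
 * common address space; the mutex is the explicit synchronisation construct
 * that orders accesses to it. */
#include <pthread.h>
#include <stdio.h>

#define NTHREADS 4

static long counter = 0;                          /* shared data */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);                /* explicit synchronisation ... */
        counter++;                                /* ... protects the shared update */
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void)
{
    pthread_t t[NTHREADS];
    for (int i = 0; i < NTHREADS; i++)
        pthread_create(&t[i], NULL, worker, NULL);
    for (int i = 0; i < NTHREADS; i++)
        pthread_join(t[i], NULL);
    printf("counter = %ld\n", counter);           /* 400000 when properly synchronised */
    return 0;
}

Without the lock/unlock pair the increments would race, which is exactly the ordering problem the model leaves to the programmer.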
4
PROGRAMMING MODELS
SHARED-MEMORY PROGRAMMING MODEL
  • CONCURRENT EXECUTION EXPRESSED THROUGH PROCESSES
  • PROCESS INTERACTION OCCURS IN TWO SITUATIONS
  • WHEN PROCESSES WISH TO COMPETE -> MUTUAL EXCLUSION
  • WHEN PROCESSES CO-OPERATE -> CONDITIONAL SYNCHRONISATION
5
PROGRAMMING MODELS
SHARED-MEMORY PROGRAMMING MODEL
MONITORS
  • MONITOR CONTAINS SHARED DATA PLUS ASSOCIATED OPERATIONS
  • SHARED DATA IS ACCESSED BY CALLING A PROCEDURE OF THE MONITOR
  • ONLY ONE PROCESS CAN ENTER A MONITOR AT ANY TIME
  • DATA TYPE - CONDITION (QUEUE)
  • OPERATORS - WAIT/SIGNAL
6
PROGRAMMING MODELS
SHARED-MEMORY PROGRAMMING MODEL
MONITORS
  • A PROCESS CAN JOIN A QUEUE IF A CERTAIN CONDITION IS NOT TRUE -> WAIT OPERATION
  • IF A SUBSEQUENT PROCESS MAKES THE CONDITION TRUE IT CAN SIGNAL THE QUEUE, RELEASING A DELAYED PROCESS
  • EXAMPLE: SHARING OF A SINGLE RESOURCE AMONG N COMPETING PROCESSES (see the sketch below)
  • BEST SUITED TO A SHARED-MEMORY MACHINE
  • AXIOMATIC PROOF CAN BE APPLIED
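
A rough C sketch of these monitor operations on a shared-memory machine, assuming POSIX threads (the names resource_acquire and resource_release are illustrative): the mutex gives the one-process-inside-the-monitor property, the condition variable plays the role of the condition queue, and pthread_cond_wait / pthread_cond_signal stand in for WAIT / SIGNAL. Classic Hoare monitors hand control directly to the signalled process, whereas POSIX uses signal-and-continue semantics, hence the while loop that re-checks the condition after waking.

/* Monitor-style sharing of a single resource among N competing threads. */
#include <pthread.h>
#include <stdbool.h>

static pthread_mutex_t monitor_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  free_queue   = PTHREAD_COND_INITIALIZER;  /* condition (queue) */
static bool busy = false;                                        /* shared data */

void resource_acquire(void)                /* a "procedure of the monitor" */
{
    pthread_mutex_lock(&monitor_lock);     /* enter the monitor */
    while (busy)                           /* condition not true ... */
        pthread_cond_wait(&free_queue, &monitor_lock);   /* ... -> WAIT */
    busy = true;
    pthread_mutex_unlock(&monitor_lock);   /* leave the monitor */
}

void resource_release(void)                /* a "procedure of the monitor" */
{
    pthread_mutex_lock(&monitor_lock);
    busy = false;                          /* make the condition true ... */
    pthread_cond_signal(&free_queue);      /* ... -> SIGNAL releases a delayed thread */
    pthread_mutex_unlock(&monitor_lock);
}

static void *user(void *arg)
{
    (void)arg;
    resource_acquire();
    /* exclusive use of the single resource would go here */
    resource_release();
    return NULL;
}

int main(void)
{
    pthread_t t[5];                        /* N = 5 competing "processes" */
    for (int i = 0; i < 5; i++) pthread_create(&t[i], NULL, user, NULL);
    for (int i = 0; i < 5; i++) pthread_join(t[i], NULL);
    return 0;
}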
7
PROGRAMMING MODELS
MESSAGE-PASSING PROGRAMMING MODEL
  • processes have their own private address spaces and share data via explicit messages
  • the source process explicitly sends a message and the target process explicitly receives it
  • synchronisation is implicit in the act of sending and receiving messages (see the sketch below)
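
A minimal sketch of the model using the MPI library reviewed later in this module (C bindings; the value and the two-process layout are illustrative): data stay in each process's private address space until an explicit MPI_Send is matched by an explicit MPI_Recv, and the blocking receive supplies the implicit synchronisation. Run with at least two processes, e.g. via mpirun.

/* Explicit message passing between two processes with private memories. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, value;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        value = 42;                        /* private to process 0 ... */
        MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);    /* ... until explicitly sent */
    } else if (rank == 1) {
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("process 1 received %d\n", value);   /* cannot complete before the send */
    }

    MPI_Finalize();
    return 0;
}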
8
PROGRAMMING MODELS
MESSAGE-PASSING PROGRAMMING MODEL
  • DIJKSTRA (1975): A GUARDED COMMAND -> A GUARD FOLLOWED BY A LIST OF STATEMENTS
  • GIVEN A CHOICE OF SEVERAL TRUE GUARDS, PICK ONE AT RANDOM -> NON-DETERMINISM (see the sketch below)
  • HOARE (1978): SYNCHRONISATION USING MESSAGE PASSING BY INPUT/OUTPUT COMMANDS
  • SYMMETRICAL RELATIONSHIP BETWEEN PROCESSES -> CSP
  • EXAMPLES: ADA (PARIS), CSP -> OCCAM (UK)
  • THEORETICALLY PROVEN MODEL
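
Dijkstra's guarded commands and Hoare's CSP alternatives are language constructs rather than library calls, but a loosely analogous effect can be sketched with MPI (illustrative only): a receive on MPI_ANY_SOURCE accepts a message from whichever sender happens to be ready, so the order of acceptance is not determined by the program text, giving the same flavour of non-determinism.

/* Non-deterministic choice among ready senders, sketched with MPI_ANY_SOURCE. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size, value;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (rank == 0) {
        for (int i = 1; i < size; i++) {
            /* any worker with a pending message may be selected here */
            MPI_Recv(&value, 1, MPI_INT, MPI_ANY_SOURCE, 0, MPI_COMM_WORLD, &status);
            printf("accepted %d from process %d\n", value, status.MPI_SOURCE);
        }
    } else {
        value = rank * rank;
        MPI_Send(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
    }

    MPI_Finalize();
    return 0;
}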
9
PROGRAMMING MODELS
DATA PARALLEL PROGRAMMING MODEL
  • single-program, multiple-data (SPMD) structure
  • the same basic code executes against partitioned data, with computation phases alternating with communication phases (see the sketch below)
  • programs are written using sequential FORTRAN to specify the computations on the data (using either iterative constructs or vector operations)
  • data mapping directives specify how large arrays should be distributed across processes
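
A minimal SPMD sketch of this structure, written in C with MPI purely for consistency with the earlier sketches (the slides assume sequential FORTRAN; N and the block distribution are illustrative): every process runs the same program, a global index space is block-distributed, each process computes on its own partition, and a communication phase combines the partial results.

/* SPMD structure: identical code, partitioned data, compute then communicate. */
#include <mpi.h>
#include <stdio.h>

#define N 1000000                          /* illustrative global problem size */

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* data mapping: one block of the global index space per process */
    int chunk = N / size;
    int lo = rank * chunk;
    int hi = (rank == size - 1) ? N : lo + chunk;

    /* computation phase: same code, different partition of the data */
    double local_sum = 0.0, global_sum;
    for (int i = lo; i < hi; i++)
        local_sum += (double)i;

    /* communication phase: combine the partial results */
    MPI_Allreduce(&local_sum, &global_sum, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);

    if (rank == 0)
        printf("global sum = %.0f\n", global_sum);

    MPI_Finalize();
    return 0;
}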
10
PROGRAMMING MODELS
DATA PARALLEL PROGRAMMING MODEL
  • a pre-processor or compiler then translates the source code into an equivalent SPMD program with message-passing calls (message-passing architecture) or with proper synchronisation (shared-memory architecture)
  • computation is distributed to the parallel processes to match the specified data distributions
  • frees the user from ... explicitly distributing global arrays onto local arrays, and inserting the required communication calls or the required synchronisations (a hand-written example of such an SPMD translation is sketched below)
  • compatible with regular FORTRAN, so code development can occur on workstations and porting code is easier
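
As an illustration of the kind of SPMD program such a pre-processor might emit (hand-written here, not the output of any particular compiler; the local block size and the 3-point stencil are illustrative), the sketch below splits a global 1-D array into per-process local arrays with halo cells and inserts the neighbour communication that the user would otherwise have to write by hand.

/* Hand-written stand-in for compiler-generated SPMD code: global array
 * distributed into local blocks, halo exchange inserted before the local
 * computation phase. */
#include <mpi.h>
#include <string.h>

#define LOCAL 1000                         /* illustrative local block size */

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int left  = (rank == 0)        ? MPI_PROC_NULL : rank - 1;
    int right = (rank == size - 1) ? MPI_PROC_NULL : rank + 1;

    /* local array: elements 1..LOCAL plus halo cells 0 and LOCAL+1 */
    double a[LOCAL + 2], b[LOCAL + 2];
    for (int i = 0; i <= LOCAL + 1; i++) a[i] = rank;

    /* inserted communication: exchange boundary cells with the neighbours */
    MPI_Sendrecv(&a[LOCAL], 1, MPI_DOUBLE, right, 0,
                 &a[0],     1, MPI_DOUBLE, left,  0,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    MPI_Sendrecv(&a[1],         1, MPI_DOUBLE, left,  1,
                 &a[LOCAL + 1], 1, MPI_DOUBLE, right, 1,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);

    /* local computation on the distributed data: a 3-point average */
    for (int i = 1; i <= LOCAL; i++)
        b[i] = (a[i - 1] + a[i] + a[i + 1]) / 3.0;

    memcpy(&a[1], &b[1], LOCAL * sizeof(double));   /* results back into the local block */

    MPI_Finalize();
    return 0;
}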