Introduction to Parallel Processing (PowerPoint Presentation Transcript)


1
Introduction to Parallel Processing
  • Chapter No. 3

2
Program
  • Programmer's perspective
  • an ordered set of instructions.
  • O.S. perspective
  • an executable file stored in secondary memory,
    typically on a disk.

3
Program
4
Process
  • From the O.S. view, a process relates to execution.
  • Process creation involves four major steps:
  • setting up the process description,
  • allocating an address space,
  • loading the program into the allocated address
    space, and
  • passing the process description to the scheduler.

5
Process Description
  • The O.S. describes a process by means of a
    description table known as the process control
    block (PCB); a sketch follows below.
  • The PCB contains all the information related to
    the whole life cycle of a process, such as process
    identification, owner, process status, a
    description of the allocated address space, and so
    on.
  • Different operating systems use different names
    for the process description table, e.g. the
    process table in UNIX, the Process Information
    Block in OS/2, and the Task Control Block in IBM
    mainframe operating systems.
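A minimal sketch, in C, of what such a description table might
contain. The field names and types are illustrative assumptions
for exposition only, not the layout used by any particular
operating system.

/* Illustrative sketch of a process control block (PCB).
 * Field names and types are assumptions; real operating
 * systems keep far more state than shown here. */
#include <stddef.h>

typedef enum { PROC_READY, PROC_RUNNING, PROC_BLOCKED } proc_state_t;

typedef struct pcb {
    int          pid;        /* process identification                */
    int          owner_uid;  /* owner of the process                  */
    proc_state_t state;      /* current process status                */
    void        *addr_base;  /* start of the allocated address space  */
    size_t       addr_size;  /* size of the allocated address space   */
    struct pcb  *next;       /* link used in scheduler queues of PCBs */
} pcb_t;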

6
Allocating an Address Space
  • Allocation of an address space to a process can
    be handled in two ways:
  • sharing the address space among the created
    processes (shared memory), or
  • allocating a separate address space to each
    process (per-process address spaces).

7
Loading the Program Into Allocated Address Space
  • The executable program file is loaded into the
    allocated address space; this is done using the
    memory management scheme adopted by the O.S.

8
Passing the Process Description to the Scheduler
  • The created process is passed to the scheduler,
    which allocates the processor among the competing
    processes.
  • This is managed by setting up and manipulating
    queues of PCBs.

9
The concept of process
10
Process Creation Sub-models
  • The simpler model, used by IBM/PCP and IBM/360
    DOS, did not allow a process to be broken down
    further into sub-processes.
  • Process Spawning
  • a process may create a new process, called a
    child process.
  • A child process can be created by the programmer
    using standard process creation mechanisms (see
    the fork() sketch below).
  • Spawned processes form a process hierarchy known
    as a process tree.
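A minimal sketch of process spawning using the POSIX fork()
call; a UNIX-like system is assumed here, since the slides do
not prescribe a specific creation mechanism.

/* Spawning a child process with fork() on a UNIX-like system.
 * Parent and child each continue from the return of fork(),
 * forming one parent-child edge of the process tree. */
#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();                         /* create a child process */
    if (pid == 0) {
        printf("child:  pid=%d\n", getpid());   /* spawned child          */
    } else if (pid > 0) {
        waitpid(pid, NULL, 0);                  /* parent waits for child */
        printf("parent: pid=%d\n", getpid());
    } else {
        perror("fork");                         /* process creation failed */
        return 1;
    }
    return 0;
}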

11
Process Spawning (Independent Processes)
(Diagram: a process tree containing processes A, B, C, D, and E)
12
Process Scheduling Sub-models
  • Process Model
  • scheduling is performed on a per-process basis,
    i.e. the smallest unit of work to be scheduled is
    a process.
  • Process-thread Model
  • in this model smaller units of work, known as
    threads, are scheduled as entities.

13
Thread
  • A thread, like a process, is a sequence of
    instructions.
  • Threads are smaller chunks of code (lightweight).
  • Threads are created within, and belong to, a
    process, and they share the resources of that
    process.
  • For parallel thread processing, scheduling is
    performed on a per-thread basis (see the sketch
    below).
  • Finer grain, so less overhead when switching from
    thread to thread.
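A minimal sketch of threads created within one process, using
POSIX threads as an assumed API; the slides name OS/2, Windows
NT, and SunOS 5.0 rather than a specific threading interface.

/* Two threads created inside one process; they share the
 * process's address space and are scheduled per-thread. */
#include <pthread.h>
#include <stdio.h>

static int shared_counter = 0;      /* resource shared by all threads */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg) {
    pthread_mutex_lock(&lock);      /* protect the shared resource */
    shared_counter++;
    pthread_mutex_unlock(&lock);
    printf("thread %ld done\n", (long)arg);
    return NULL;
}

int main(void) {
    pthread_t t[2];
    for (long i = 0; i < 2; i++)
        pthread_create(&t[i], NULL, worker, (void *)i);
    for (int i = 0; i < 2; i++)
        pthread_join(t[i], NULL);   /* wait for both threads to finish */
    printf("counter = %d\n", shared_counter);
    return 0;
}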

14
Thread
  • Most of the operating systems released in the
    1980s and 1990s are based on the process-thread
    model, such as OS/2, Windows NT, and SunOS 5.0.
  • Threads have a life cycle similar to that of
    processes and are managed in largely the same way.

15
Single-thread Process or Multi-thread (Dependent) Process
(Diagram: threads within a single-thread process and a
multi-thread process)
16
The Concept of Concurrent Execution (N-client 1-server)
  • Concurrent execution is the temporal behavior of
    the N-client 1-server model, where only one client
    is served at any given moment.

(Diagram: clients, simultaneous in nature, are served one at a
time by a single server over time t, so the service itself is
sequential in nature)
17
The Concept of Concurrent Execution (N-client 1-server)
(Diagram: preemption rules for the single server, shown over
time t: non-pre-emptive service versus pre-emptive service,
where pre-emptive service may be time-shared or prioritized by
client priority)
18
Parallel Execution
  • N-client N-server model.
  • Synchronous: each server starts its service at
    the same moment, in lock step.
  • Asynchronous: the servers do not work in concert.

(Diagram: N clients served by N servers, shown both
synchronously (lock step) and asynchronously)
19
Concurrent and Parallel Programming Languages
  • Sequential languages
  • languages that do not support the N-client model
    (C, Pascal, Fortran, PROLOG, LISP).
  • Concurrent languages
  • employ constructs to implement the N-client
    1-server model by specifying concurrent threads
    and processes, but lack language constructs to
    describe the N-server model (Ada, Concurrent
    Pascal, Modula-2, Concurrent PROLOG).

20
Concurrent and Parallel Programming Languages
  • Data-parallel languages
  • introduce special data structures that are
    processed in parallel, element by element (High
    Performance Fortran, DAP Fortran, DAP PROLOG,
    Connection Machine LISP).
  • Parallel languages
  • extend the N-client model of concurrent languages
    with processor allocation constructs that enable
    use of the N-server model (Occam-2, 3L Parallel C,
    Strand-88).

21
Types and levels of parallelism
  • Available and utilized parallelism
  • available: present in the program or in the
    problem solution.
  • utilized: actually exploited during execution.
  • Types of available parallelism
  • functional
  • arises from the logic of the problem solution.
  • data
  • arises from data structures that allow parallel
    operations on their elements, such as vectors or
    matrices, in the problem solution.
  • Gives rise to parallel execution of the
    data-parallel part of the computation.

22
Levels of Available Functional Parallelism
  • Parallelism at the instruction level
    (fine-grained)
  • particular instructions of a program are executed
    in parallel.
  • Parallelism at the loop level (middle-grained)
  • consecutive loop iterations are candidates for
    parallel execution.
  • May be restricted by data dependencies between
    subsequent loop iterations, called recurrences
    (see the sketch after this list).
  • Parallelism at the procedure level
    (middle-grained)
  • parallel execution of procedures.
  • Parallelism at the program level (coarse-grained)
  • multiple independent users perform executions in
    parallel.
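A small C sketch contrasting a loop whose iterations are
independent (a candidate for loop-level parallelism) with a loop
restricted by a recurrence; the arrays and constants here are
illustrative assumptions.

/* Loop-level parallelism: independent iterations vs. a recurrence. */
#define N 1000

void independent(double a[N], const double b[N], const double c[N]) {
    /* Each iteration touches only its own elements, so the
     * iterations could run in parallel. */
    for (int i = 0; i < N; i++)
        a[i] = b[i] + c[i];
}

void recurrence(double a[N]) {
    /* Iteration i needs the result of iteration i-1, so the loop
     * carries a data dependence (a recurrence) and cannot simply
     * be executed in parallel. */
    for (int i = 1; i < N; i++)
        a[i] = a[i - 1] * 0.5 + 1.0;
}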

23
Available and Utilized Levels of Functional
Parallelism
Available levels          Utilized levels
User (program) level      User level          (2)
Procedure level           Process level       (2)
Loop level                Thread level        (1)
Instruction level         Instruction level   (1)

1. Exploited by the architecture
2. Exploited by means of the operating system
24
Utilization of Functional Parallelism
  • Available parallelism can be utilized by
  • the architecture
  • instruction-level functional parallel
    architectures, i.e. instruction-level parallel
    architectures (ILP architectures).
  • compilers
  • parallel optimizing compilers.
  • the operating system
  • multitasking.

25
Concurrent Execution Models
  • Multithreading
  • concurrent execution at the thread level.
  • Multiple threads are generated for each process,
    and these threads are executed concurrently on a
    single processor under the control of one O.S.
  • Multitasking
  • process-level concurrent execution.
  • Multiprogramming
  • user-level concurrent execution.
  • Timesharing
  • user-level concurrent execution.

26
Concurrent Execution Models
User level      Multiprogramming, time-sharing
Process level   Multitasking
Thread level    Multithreading
27
Utilization of Data Parallelism
  • Using a data-parallel architecture
  • permits parallel or pipelined operations on data
    elements (see the sketch below).
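A small sketch of an element-by-element operation, written as an
OpenMP-annotated C loop purely for illustration; the slides name
data-parallel languages such as High Performance Fortran, not
OpenMP, so the annotation here is an assumption of convenience.

/* Element-by-element vector addition: the same operation is
 * applied to every element, which a data-parallel architecture
 * can perform in parallel or in a pipelined fashion. */
#include <omp.h>

void vadd(int n, const float *x, const float *y, float *z) {
    #pragma omp parallel for    /* distribute elements across threads */
    for (int i = 0; i < n; i++)
        z[i] = x[i] + y[i];
}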

28
Classification of Parallel Architectures
  • Flynn's classification
  • SISD (Single Instruction, Single Data)
  • SIMD (Single Instruction, Multiple Data)
  • MISD (Multiple Instruction, Single Data)
  • MIMD (Multiple Instruction, Multiple Data)

29
Basic Parallel Techniques
  • Pipelining (time)
  • A number of functional units are employed in
    sequence to perform a single computation.
  • Each functional unit represents a certain stage
    of the computation.
  • A pipeline allows overlapped execution of
    instructions (see the sketch below).
  • It increases overall processor throughput.
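A minimal C sketch of the overlap that the car-wash example
below illustrates: in each cycle every stage works on a
different item, so once the pipeline is full one item completes
per cycle. The stage names are taken from the car-wash figures;
the loop itself is only an illustrative simulation.

/* Simulating a 4-stage pipeline: in each cycle, every stage
 * works on a different item, so after the pipeline fills, one
 * item completes per cycle (overlapped execution). */
#include <stdio.h>

int main(void) {
    const char *stage[] = { "soap", "rinse", "wax", "dry" };
    const int n_stages = 4, n_items = 6;

    for (int cycle = 0; cycle < n_items + n_stages - 1; cycle++) {
        printf("cycle %d:", cycle);
        for (int s = 0; s < n_stages; s++) {
            int item = cycle - s;          /* item currently in stage s */
            if (item >= 0 && item < n_items)
                printf("  %s(car %d)", stage[s], item);
            else
                printf("  %s(idle)", stage[s]);
        }
        printf("\n");
    }
    return 0;
}

Each printed line corresponds to one snapshot of the car-wash
figures that follow: stages idle while the pipeline fills and
drains, and are all busy in between.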

30
Car Wash Example
(Diagram: cars moving through the wash stages; the Ford enters
as the Chevy leaves)
31
Car Wash Example
(Diagram: with the pipeline full, the Ford, Volvo, Saturn, and
Toyota occupy different wash stages simultaneously while the
Chevy exits)
32
Replication
  • Replication (space)
  • a number of functional units perform multiple
    computations simultaneously
  • more processors
  • more memory
  • more I/O
  • more computers
  • e.g. array processors.