1
Chapter 4.1 Message Passing Communication
  • Prepared by
  • Karthik V Puttaparthi
  • kputtaparthi1@student.gsu.edu

2
OUTLINE
  • Interprocess Communication
  • Message Passing Communication
  • Basic Communication Primitives
  • Message Design Issues
  • Synchronization and Buffering
  • References

3
INTERPROCESS COMMUNICATION
  • Processes executing concurrently in the operating
    system may be either independent or cooperating
    processes.
  • Reasons for providing an environment that allows
    process cooperation:
  • 1) Information Sharing
  • Several users may be interested in the
    same piece of information.
  • 2) Computational Speedup
  • A process can be divided into subtasks
    to run faster; speedup can be achieved if the
    computer has multiple processing elements.
  • 3) Modularity
  • Dividing the system functions into
    separate processes or threads.
  • 4) Convenience
  • Even an individual user may work on
    many tasks at the same time.

4
COMMUNICATION MODELS
  • Cooperating processes require an IPC
    mechanism that allows them to exchange data and
    information. Communication can take place either
    through shared-memory or message-passing
    mechanisms.
  • Shared Memory
  • 1) Processes can exchange information by
    reading and writing data to the shared region.
  • 2) Faster than message passing, since within a
    computer it can be done at memory speeds.
  • 3) System calls are needed only to establish
    the shared memory regions.
  • Message Passing
  • Mechanism that allows processes to communicate
    and synchronize their actions without sharing
    the same address space; particularly useful in
    distributed environments.
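  • A minimal sketch of the shared-memory model
    described above, assuming an anonymous POSIX mmap
    region shared between a forked parent and child;
    the region size and message text are chosen only
    for illustration:

/* Shared-memory IPC sketch: the child writes into a region created with
 * mmap(MAP_SHARED | MAP_ANONYMOUS); the parent reads it after waitpid(),
 * so no further synchronization is needed for this one-shot exchange. */
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    /* Create a memory region visible to both parent and child. */
    char *shared = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                        MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    if (shared == MAP_FAILED) { perror("mmap"); return 1; }

    pid_t pid = fork();
    if (pid == 0) {                      /* child: writes at memory speed */
        strcpy(shared, "hello from child");
        return 0;
    }
    waitpid(pid, NULL, 0);               /* parent: waits, then reads     */
    printf("parent read: %s\n", shared);
    munmap(shared, 4096);
    return 0;
}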

5
Message Passing Communication
  • Messages are collections of data objects and their
    structures.
  • Messages have a header containing system-dependent
    control information and a message body that can be
    of fixed or variable size.
  • When a process interacts with another, two
    requirements have to be satisfied:
    synchronization and communication.
  • Fixed Length
  • Easy to implement
  • Minimizes processing and storage overhead.
  • Variable Length
  • Requires dynamic memory allocation, so
  • fragmentation could occur.
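  • As an illustration of the fixed-header /
    variable-body layout described above, here is a
    small sketch; the struct and field names are my
    own, not from the slides:

/* A message with a fixed-size header (sender, type, length) and a
 * variable-length body, using a C99 flexible array member. `length`
 * tells the receiver how many bytes of body follow the header. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct message {
    int    sender;   /* system-dependent control information */
    int    type;
    size_t length;   /* size of the variable-length body in bytes */
    char   body[];   /* flexible array member: body follows header */
};

int main(void) {
    const char *payload = "variable-length payload";
    struct message *m = malloc(sizeof *m + strlen(payload) + 1);
    m->sender = 1;
    m->type   = 7;
    m->length = strlen(payload) + 1;
    memcpy(m->body, payload, m->length);
    printf("from %d, type %d, %zu bytes: %s\n",
           m->sender, m->type, m->length, m->body);
    free(m);
    return 0;
}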

6
Basic Communication Primitives
  • Two generic message passing primitives for
    sending and receiving messages.
  • send (destination, message)
  • receive (source, message)
  • The source or destination may be a process name,
    a link, a mailbox, or a port (a sketch of the two
    primitives follows at the end of this slide).
  • Addressing - Direct and Indirect
  • 1) Direct Send/ Receive communication primitives
  • Communication entities can be addressed by
    process names (global process identifiers)
  • Global Process Identifier can be made unique
    by concatenating the network host address with
    the locally generated process id. This scheme
    implies that only one direct logical
    communication path exists between any pair of
    sending and receiving processes.
  • Symmetric addressing: both processes have to
    explicitly name each other in the communication
    primitives.
  • Asymmetric addressing: only the sender needs to
    name the recipient.
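  • A minimal sketch of the two generic primitives,
    with hypothetical helpers send_msg()/recv_msg()
    layered over an ordinary pipe; the pipe stands in
    for the single direct link between one sender and
    one receiver, whereas a real system would address
    by a global process identifier:

/* Direct send/receive sketch: the child (sender) and parent (receiver)
 * communicate over one pipe, i.e. one logical communication path. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>

static void send_msg(int dest_fd, const char *msg) {
    write(dest_fd, msg, strlen(msg) + 1);    /* deposit into the link */
}

static void recv_msg(int src_fd, char *buf, size_t n) {
    read(src_fd, buf, n);                    /* take from the link    */
}

int main(void) {
    int link[2];
    pipe(link);              /* link[0] = read end, link[1] = write end */

    if (fork() == 0) {               /* child acts as the sender    */
        close(link[0]);
        send_msg(link[1], "ping");
        return 0;
    }
    close(link[1]);                  /* parent acts as the receiver */
    char buf[16];
    recv_msg(link[0], buf, sizeof buf);
    printf("received: %s\n", buf);
    return 0;
}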


7
  • 2) Indirect Send/ Receive communication
    primitives
  • Messages are not sent directly from the sender
    to the receiver, but to a shared data structure.
  • Multiple clients might request services from one
    of multiple servers; mailboxes are used for this.
  • A mailbox is an abstraction of a finite-size FIFO
    queue maintained by the kernel (see the sketch
    below).
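  • A sketch of indirect communication through a
    kernel-maintained mailbox, here a POSIX message
    queue; the queue name "/demo_mbox" and sizes are
    chosen only for this example (link with -lrt on
    Linux):

/* Mailbox sketch: sender and receiver never name each other; both name
 * the mailbox, a finite-size FIFO queue maintained by the kernel. */
#include <fcntl.h>
#include <mqueue.h>
#include <stdio.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    struct mq_attr attr = { .mq_maxmsg = 8, .mq_msgsize = 64 };  /* finite FIFO */
    mqd_t mbox = mq_open("/demo_mbox", O_CREAT | O_RDWR, 0600, &attr);
    if (mbox == (mqd_t)-1) { perror("mq_open"); return 1; }

    if (fork() == 0) {                        /* client: deposits a request */
        mq_send(mbox, "request", strlen("request") + 1, 0);
        return 0;
    }
    char buf[64];
    mq_receive(mbox, buf, sizeof buf, NULL);  /* server: picks it up        */
    printf("mailbox delivered: %s\n", buf);
    wait(NULL);
    mq_close(mbox);
    mq_unlink("/demo_mbox");
    return 0;
}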

8
Synchronization and Buffering
  • There are three typical combinations (a sketch of
    the second follows below):
  • 1) Blocking Send, Blocking Receive
  • Both receiver and sender are blocked
    until the message is delivered. (provides tight
    synchronization between processes)
  • 2) Non-Blocking Send, Blocking Receive
  • The sender can continue execution after
    sending a message; the receiver is blocked until
    a message arrives. (the most useful combination)
  • 3) Non-Blocking Send, Non-Blocking Receive
  • Neither party waits.
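  • A sketch of combination 2 above (non-blocking
    send, blocking receive) using a POSIX message
    queue; the name "/demo_sync" and the one-slot
    queue are chosen only for this example (link with
    -lrt on Linux):

/* The sender's descriptor is opened with O_NONBLOCK, so a send to a full
 * queue fails immediately with EAGAIN instead of waiting; the receiver's
 * descriptor is blocking, so a receive waits until a message arrives. */
#include <errno.h>
#include <fcntl.h>
#include <mqueue.h>
#include <stdio.h>
#include <string.h>

int main(void) {
    struct mq_attr attr = { .mq_maxmsg = 1, .mq_msgsize = 32 };
    mqd_t tx = mq_open("/demo_sync", O_CREAT | O_WRONLY | O_NONBLOCK, 0600, &attr);
    mqd_t rx = mq_open("/demo_sync", O_RDONLY);     /* blocking receive side  */

    mq_send(tx, "first", strlen("first") + 1, 0);   /* succeeds: queue empty  */
    if (mq_send(tx, "second", strlen("second") + 1, 0) == -1 && errno == EAGAIN)
        puts("non-blocking send: queue full, returned immediately");

    char buf[32];
    mq_receive(rx, buf, sizeof buf, NULL);          /* blocks until a message */
    printf("blocking receive got: %s\n", buf);

    mq_close(tx);
    mq_close(rx);
    mq_unlink("/demo_sync");
    return 0;
}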

9
Message Synchronization Stages
  • Diagram: sender -> source kernel -> network ->
    destination kernel -> receiver; stages 1-4 carry
    the message/request toward the receiver, stages
    5-8 carry the reply/ack back to the sender.
  • Message passing depends on synchronization at
    several points.
  • When sending a message to a remote destination,
    the message is passed to the sender's kernel,
    which transmits it to the communication network.
  • Non-blocking Send
  • The sender process
    is released after the message has been composed
    and copied into the sender's kernel.

10
Message Design Issues
  • Synchronization
    • Blocking vs. non-blocking
  • Addressing
    • Direct
    • Indirect
  • Message transmission
    • By value
    • By reference
  • Format
    • Content
    • Length: fixed or variable
  • Queuing discipline
    • FIFO
    • Priority

11
The Producer-Consumer Problem
  • The producer-consumer problem illustrates the
    need for synchronization in systems where many
    processes share a resource. In the problem, two
    processes share a fixed-size buffer. One process
    produces information and puts it in the buffer,
    while the other process consumes information from
    the buffer. These processes do not take turns
    accessing the buffer; they both work
    concurrently. Herein lies the problem: what
    happens if the producer tries to put an item into
    a full buffer? What happens if the consumer tries
    to take an item from an empty buffer? (Sketches
    follow on the next two slides.)

12
Producer
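  • The code from the original producer slide is not
    in this transcript; below is a minimal sketch,
    assuming the fixed-size buffer is realised as a
    POSIX message queue named "/pc_buffer" (my choice,
    not from the slides). A send to a full queue
    blocks, which answers the full-buffer question
    above (link with -lrt on Linux):

/* Producer sketch: deposits items into the bounded buffer (message queue).
 * mq_send() blocks when the queue already holds mq_maxmsg items. */
#include <fcntl.h>
#include <mqueue.h>
#include <stdio.h>
#include <string.h>

int main(void) {
    struct mq_attr attr = { .mq_maxmsg = 10, .mq_msgsize = 32 };  /* fixed size */
    mqd_t buffer = mq_open("/pc_buffer", O_CREAT | O_WRONLY, 0600, &attr);
    if (buffer == (mqd_t)-1) { perror("mq_open"); return 1; }

    for (int i = 0; i < 100; i++) {
        char item[32];
        snprintf(item, sizeof item, "item %d", i);
        mq_send(buffer, item, strlen(item) + 1, 0);  /* blocks if buffer is full */
    }
    mq_close(buffer);
    return 0;
}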
13
Consumer
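  • Matching consumer sketch for the same assumed
    queue "/pc_buffer"; a receive on an empty queue
    blocks, which answers the empty-buffer question,
    so producer and consumer need no explicit locks:

/* Consumer sketch: removes items from the bounded buffer (message queue).
 * mq_receive() blocks while the queue is empty. */
#include <fcntl.h>
#include <mqueue.h>
#include <stdio.h>

int main(void) {
    struct mq_attr attr = { .mq_maxmsg = 10, .mq_msgsize = 32 };
    mqd_t buffer = mq_open("/pc_buffer", O_CREAT | O_RDONLY, 0600, &attr);
    if (buffer == (mqd_t)-1) { perror("mq_open"); return 1; }

    for (int i = 0; i < 100; i++) {
        char item[32];
        mq_receive(buffer, item, sizeof item, NULL);  /* blocks if buffer is empty */
        printf("consumed %s\n", item);
    }
    mq_close(buffer);
    mq_unlink("/pc_buffer");
    return 0;
}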
14
Pipe and Socket APIs
  • It is more convenient to the users and to the
    system if communication is achieved through a
    well-defined set of standard APIs.
  • Pipe
  • Pipes are implemented as a finite-size, FIFO
    byte-stream buffer maintained by the kernel.
  • Used by two communicating processes, a pipe serves
    as a unidirectional communication link: one
    process writes data into the tail end of the pipe
    while another process reads from the head end of
    the pipe.
  • A pipe is created by a system call that returns
    two file descriptors, one for reading and another
    for writing.
  • The pipe concept can be extended to include
    messages.
  • For unrelated processes, there is a need to
    uniquely identify a pipe, since pipe descriptors
    cannot be shared; hence the concept of named
    pipes.
  • With a unique path name, named pipes can be shared
    among disjoint processes across different machines
    with a common file system (see the sketch below).
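  • A minimal sketch of the named-pipe idea: mkfifo()
    gives the pipe a path name ("/tmp/demo_fifo" here,
    chosen for the example) so that even unrelated
    processes can open it by name; fork() is used only
    to keep the example in one file, and each branch
    could just as well be a separate program:

/* Named-pipe (FIFO) sketch: writer and reader open the pipe by path name
 * rather than by inherited descriptors. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/stat.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    const char *path = "/tmp/demo_fifo";
    mkfifo(path, 0600);                      /* create the named pipe          */

    if (fork() == 0) {                       /* writer: opens the FIFO by name */
        int fd = open(path, O_WRONLY);       /* blocks until a reader opens    */
        write(fd, "through the fifo", 17);
        close(fd);
        return 0;
    }
    int fd = open(path, O_RDONLY);           /* reader: opens the FIFO by name */
    char buf[32] = {0};
    read(fd, buf, sizeof buf - 1);
    printf("read: %s\n", buf);
    close(fd);
    wait(NULL);
    unlink(path);                            /* remove the FIFO when done      */
    return 0;
}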

15
SOCKETS
  • A socket is a communication endpoint of a
    communication link managed by the transport
    services.
  • It is not feasible to name a communication
    channel across different domains.
  • A communication channel can be visualized
    as a pair of communication endpoints.
  • Sockets have become the most popular
    message-passing API (a minimal sketch follows at
    the end of this slide).
  • The most recent version of Windows Sockets,
    developed by the WinSock Standard Group of 32
    companies (including Microsoft), also includes
    SSL (Secure Sockets Layer) in the specification.
  • The goal of SSL is to provide
  • Privacy in socket communication, by using
    symmetric cryptographic data encryption.
  • Integrity of socket data, by using a message
    integrity check.
  • Authenticity of servers and clients, by using
    asymmetric public-key cryptography.
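  • A minimal sketch of sockets as communication
    endpoints, using a local socketpair() between
    parent and child; a network example would replace
    this with socket()/bind()/listen()/accept() on one
    side and connect() on the other, and SSL would be
    layered on top by a separate library (not shown):

/* Socket sketch: socketpair() returns the two endpoints of one
 * communication channel; each process keeps one endpoint. */
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    int ends[2];                                   /* the two endpoints      */
    socketpair(AF_UNIX, SOCK_STREAM, 0, ends);

    if (fork() == 0) {                             /* child: one endpoint    */
        close(ends[0]);
        const char *msg = "hello over a socket";
        send(ends[1], msg, strlen(msg) + 1, 0);
        close(ends[1]);
        return 0;
    }
    close(ends[1]);                                /* parent: other endpoint */
    char buf[64] = {0};
    recv(ends[0], buf, sizeof buf - 1, 0);
    printf("received: %s\n", buf);
    close(ends[0]);
    wait(NULL);
    return 0;
}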

16
References
  • A. Silberschatz, P. Galvin, and G. Gagne. Operating
    System Concepts. 2002.
  • S. Ajmani. "Automatic Software Upgrades for
    Distributed Systems." Ph.D. dissertation, MIT,
    Sep. 2004.
  • Message passing information from The University
    of Edinburgh.
  • E. Lusk. "MPI-2 standards beyond the
    message-passing model." In Proceedings of the
    Third Working Conference on Massively Parallel
    Programming Models, 12-14 Nov. 1997, pp. 43-49.
    DOI 10.1109/MPPM.1997.715960.
  • A. N. Bessani, M. Correia, J. S. Fraga, and L. C.
    Lung. "Sharing memory between Byzantine processes
    using policy-enforced tuple spaces." In Proceedings
    of the 26th International Conference on Distributed
    Computing Systems, July 2006.
  • S.-Y. Park, J. Lee, and S. Hariri. "A multithreaded
    message-passing system for high performance
    distributed computing applications." In Proceedings
    of the 18th International Conference on Distributed
    Computing Systems, 26-29 May 1998, pp. 258-265.
    DOI 10.1109/ICDCS.1998.679521.
  • J. J. Dongarra, S. W. Otto, M. Snir, and D. Walker.
    "A message passing standard for MPP and
    workstations." CACM, 39(7), 1996, pp. 84-90.
  • N. Alon, M. Merritt, O. Reingold, G. Taubenfeld,
    and R. Wright. "Tight bounds for shared memory
    systems accessed by Byzantine processes."
    Distributed Computing, 18(2):99-109, 2005.

17
References
  • G. C. Fox. "Lessons for massively parallel
    applications on message passing computers." In
    Compcon Spring '92, Thirty-Seventh IEEE Computer
    Society International Conference, Digest of
    Papers, 24-28 Feb. 1992, pp. 103-114. DOI
    10.1109/CMPCON.1992.186695.
  • A. Clematis and O. Tavani. "An analysis of message
    passing systems for distributed memory computers."
    In Proceedings of the Euromicro Workshop on
    Parallel and Distributed Processing, 27-29 Jan.
    1993, pp. 299-306. DOI 10.1109/EMPDP.1993.336388.

18
Thank You!