1
Outline
  • Distributed shared memory continued

2
Distributed Shared Memory
  • Distributed computing is mainly based on the
    message-passing model
  • Client/server model
  • Remote procedure calls
  • Distributed shared memory (DSM) is a resource
    management component that implements the shared
    memory model in distributed systems, where there
    is no physically shared memory

3
Memory Hierarchies
4
Virtual memory
5
Page Table Mapping
6
Distributed Shared Memory cont.
  • This is a further extension of virtual memory
    management on a single computer
  • When a process accesses data in the shared
    address space, a mapping manager maps the shared
    memory address to physical memory, which can be
    local or remote

7
Distributed Shared Memory cont.
8
Distributed Shared Memory cont.
A mapping manager is a layer of software that maps
addresses in the shared user address space to
physical memory addresses
9
Distributed Shared Memory cont.
  • With DSM, application programs can access data in
    the shared space the same way they access data in
    traditional virtual memory
  • The mapping manager can move data among local
    main memory, the local disk, and other nodes (a
    sketch follows below)
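A minimal sketch, in Python, of what such a mapping manager might do: translate a shared address into a page and serve it from local main memory, the local backing store, or a remote node. The page size, the page directory, and the helper names (local_store, remote_fetch) are assumptions made for illustration, not details taken from the slides.

PAGE_SIZE = 4096

class MappingManager:
    """Maps shared addresses to pages held locally, on local disk, or on a remote node."""

    def __init__(self, node_id, page_directory, local_store, remote_fetch):
        self.node_id = node_id                # this node's identifier
        self.page_directory = page_directory  # page number -> node that backs the page
        self.local_store = local_store        # page number -> bytes on the local disk
        self.remote_fetch = remote_fetch      # callable(owner, page) -> page bytes
        self.resident = {}                    # pages currently in local main memory

    def read_byte(self, shared_addr):
        page, offset = divmod(shared_addr, PAGE_SIZE)
        if page not in self.resident:         # page fault on the shared address
            owner = self.page_directory[page]
            if owner == self.node_id:         # page is backed by the local disk
                self.resident[page] = self.local_store[page]
            else:                             # page must be fetched from another node
                self.resident[page] = self.remote_fetch(owner, page)
        return self.resident[page][offset]

# Example: page 0 is backed locally, page 1 lives on node 2.
mm = MappingManager(
    node_id=1,
    page_directory={0: 1, 1: 2},
    local_store={0: bytes(PAGE_SIZE)},
    remote_fetch=lambda owner, page: bytes(PAGE_SIZE),  # stand-in for a network request
)
print(mm.read_byte(5), mm.read_byte(PAGE_SIZE + 7))     # 0 0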

10
Distributed Shared Memory cont.
  • Advantages of DSM
  • It is easier to design and develop algorithms
    with DSM than with message-passing models
  • DSM allows complex structures to be passed by
    reference, simplifying distributed application
    development
  • DSM can cut down the overhead of communication by
    exploiting locality in programs
  • DSM can overcome some of the architectural
    limitations of shared memory machines

11
Distributed Shared Memory cont.
  • Central implementation issues in DSM
  • How to keep track of the location of remote data
  • How to overcome the communication delays and high
    overhead associated with communication protocols
  • How to improve the system performance

12
The Central-Server Algorithm
  • A central server maintains all the shared data
  • It serves read requests from other nodes (clients)
    by returning the requested data items to them
  • It updates the data on write requests from clients
    (a sketch follows below)
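A minimal sketch of the central-server algorithm as just described. A single server object stands in for the server node, and direct method calls stand in for request messages; the request format is invented for illustration.

class CentralServer:
    def __init__(self):
        self.data = {}                     # all shared data lives on the server

    def handle(self, request):
        if request["op"] == "read":
            return {"value": self.data.get(request["key"])}
        elif request["op"] == "write":
            self.data[request["key"]] = request["value"]
            return {"ack": True}

class Client:
    def __init__(self, server):
        self.server = server               # stands in for sending a message to the server

    def read(self, key):
        return self.server.handle({"op": "read", "key": key})["value"]

    def write(self, key, value):
        self.server.handle({"op": "write", "key": key, "value": value})

server = CentralServer()
a, b = Client(server), Client(server)
a.write("x", 42)
print(b.read("x"))   # 42: every access goes through the central server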

13
The Central-Server Algorithm cont.
14
The Migration Algorithm
  • In contrast to the central-server algorithm, the
    migration algorithm ships the data to the location
    of the data access request (see the sketch below)
  • Subsequent accesses can then be performed locally
  • Thrashing can be a problem
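A rough sketch of the migration idea, assuming a block-level granularity and a shared directory that records which node currently holds each block; both are illustrative choices, not details from the slides.

class Node:
    def __init__(self, name, directory):
        self.name = name
        self.directory = directory   # block id -> node currently holding the block
        self.blocks = {}             # blocks resident on this node

    def access(self, block_id):
        holder = self.directory[block_id]
        if holder is not self:                       # block fault: migrate the block here
            self.blocks[block_id] = holder.blocks.pop(block_id)
            self.directory[block_id] = self
        return self.blocks[block_id]                 # now a purely local access

directory = {}
n1, n2 = Node("n1", directory), Node("n2", directory)
n1.blocks["b0"] = bytearray(b"hello")
directory["b0"] = n1

n2.access("b0")                # the block migrates from n1 to n2
print(directory["b0"].name)    # n2; repeated accesses on n2 now stay local

If n1 and n2 keep alternating accesses to b0, the block migrates back and forth on every access, which is the thrashing problem noted above.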

15
The Migration Algorithm cont.
16
The Migration Algorithm cont.
17
The Read-Replication Algorithm
  • The read-replication algorithm allows multiple
    nodes to have read access, or one node to have
    read-write access (a sketch follows below)
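A sketch of the multiple-readers/single-writer rule this algorithm enforces. The dictionaries and the returned set of invalidated nodes are illustrative stand-ins for the real replica tables and invalidation messages.

class ReadReplicationDSM:
    def __init__(self):
        self.value = {}        # block id -> current value
        self.copies = {}       # block id -> nodes currently holding a replica

    def read(self, node, block):
        self.copies.setdefault(block, set()).add(node)   # grant a read replica
        return self.value.get(block)

    def write(self, node, block, value):
        invalidated = self.copies.get(block, set()) - {node}
        # in a real system, an invalidation message goes to each node in `invalidated`
        self.copies[block] = {node}                      # the sole read-write copy
        self.value[block] = value
        return invalidated

dsm = ReadReplicationDSM()
dsm.write("n1", "b0", 1)
dsm.read("n2", "b0")
dsm.read("n3", "b0")
print(dsm.write("n2", "b0", 2))   # {'n1', 'n3'} (order may vary): those copies are invalidated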

18
The Read-Replication Algorithm cont.
19
The Read-Replication Algorithm cont.
20
The Full-Replication Algorithm
  • This is a further extension of the
    read-replication algorithm
  • It allows multiple nodes to have both read and
    write access to shared data blocks
  • Keeping the replicated data consistent becomes
    the central issue (see the sketch below)
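One common way to handle the consistency issue under full replication is to put all writes through a global sequencer and have every replica apply them in sequence order. This is a standard technique, sketched here with illustrative classes rather than anything prescribed by the slides.

class Sequencer:
    def __init__(self):
        self.next_seq = 0

    def order(self, write):
        seq = self.next_seq
        self.next_seq += 1
        return seq, write            # (sequence number, write) is broadcast to all nodes

class Replica:
    def __init__(self):
        self.data = {}
        self.applied = 0             # next sequence number this replica expects

    def apply(self, seq, write):
        assert seq == self.applied, "writes must be applied in global order"
        key, value = write
        self.data[key] = value
        self.applied += 1

sequencer = Sequencer()
replicas = [Replica(), Replica(), Replica()]   # every node holds a full read-write copy
for write in [("x", 1), ("y", 2), ("x", 3)]:
    seq, w = sequencer.order(write)            # any node may issue a write
    for r in replicas:                         # broadcast the ordered write to all replicas
        r.apply(seq, w)
print(all(r.data == {"x": 3, "y": 2} for r in replicas))  # True: replicas agree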

21
The Full-Replication Algorithm cont.
22
Memory Coherence
  • Memory consistency models
  • Many consistency models have been proposed
  • These models differ in
  • Restrictiveness
  • Implementation complexity
  • Ease of programming
  • Performance

23
Memory Coherence cont.
  • Strict consistency
  • Any read on a data item x returns a value
    corresponding to the result of the most recent
    write on x
  • Sequential consistency
  • The result of any execution is the same as if the
    operations by all processes on the data store
    were executed in some sequential order and the
    operations of each individual process appear in
    this sequence in the order specified by its
    program

24
Memory Coherence cont.
25
Memory Coherence cont.
Process P1        Process P2        Process P3
x = 1             y = 1             z = 1
print(y, z)       print(x, z)       print(x, y)
26
Memory Coherence cont.
Four possible interleavings of the three processes
and the output each one prints:
(a) x = 1; print(y, z); y = 1; print(x, z); z = 1; print(x, y)    Prints: 001011
(b) x = 1; y = 1; print(x, z); print(y, z); z = 1; print(x, y)    Prints: 101011
(c) y = 1; z = 1; print(x, y); print(x, z); x = 1; print(y, z)    Prints: 010111
(d) y = 1; x = 1; z = 1; print(x, z); print(y, z); print(x, y)    Prints: 111111
27
Memory Coherence cont.
  • General consistency
  • All copies of a memory location eventually
    contain the same data once all the writes issued
    by every process have completed
  • Causal consistency
  • Writes that are causally related must be seen by
    all processes in the same order; concurrent
    writes may be seen in different orders on
    different machines
  • Processor consistency is related to causal
    consistency but with one more relaxation

28
Memory Coherence cont.
  • Weak consistency
  • Accesses to synchronization variables are
    sequentially consistent
  • Before a synchronization access, all previous
    regular data accesses must be completed
  • Before a regular data access, all previous
    synchronization accesses must be completed

29
Memory Coherence cont.
30
Memory Coherence cont.
  • Release consistency
  • Synchronization operations are broken down into
    acquire and release operations
  • Before a read or write operation on shared data
    is performed, all previous acquires done by the
    process must have completed successfully
  • Before a release is allowed, all previous reads
    and writes done by the process must have been
    completed (a sketch of the pattern follows below)
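A small sketch of the acquire/release pattern that release consistency assumes, written with Python threads. The lock only illustrates where the acquire and release operations sit; in a DSM, the release is the point at which the node's pending writes to shared data must be propagated or invalidated before it completes.

import threading

lock = threading.Lock()      # stands in for a DSM synchronization variable
shared = {"counter": 0}

def worker():
    for _ in range(1000):
        lock.acquire()               # acquire: earlier releases must have taken effect
        shared["counter"] += 1       # ordinary accesses to shared data
        lock.release()               # release: this process's reads/writes are complete

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(shared["counter"])     # 4000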

31
Memory Coherence cont.
  • Coherence protocols
  • A protocol to keep replicas coherent
  • Write-invalidate protocol
  • A write to shared data causes the invalidation
    of all copies except the one where the write
    occurs
  • Write-update protocol
  • A write to shared data causes all copies of
    that data to be updated (both protocols are
    sketched below)
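The two protocols, sketched side by side. `copies` maps a data item to the nodes caching it; the returned message lists are illustrative stand-ins for real invalidation and update messages.

def write_invalidate(copies, values, item, writer, new_value):
    msgs = [("invalidate", node, item) for node in copies[item] if node != writer]
    copies[item] = {writer}          # only the writer keeps a valid copy
    values[item] = new_value
    return msgs

def write_update(copies, values, item, writer, new_value):
    msgs = [("update", node, item, new_value)
            for node in copies[item] if node != writer]
    values[item] = new_value         # every cached copy is refreshed in place
    return msgs

copies = {"x": {"n1", "n2", "n3"}}
values = {"x": 0}
print(write_invalidate(copies, values, "x", "n1", 7))  # invalidate messages to n2 and n3
print(copies["x"])                                     # {'n1'}: only the writer's copy survives

copies2 = {"x": {"n1", "n2", "n3"}}
values2 = {"x": 0}
print(write_update(copies2, values2, "x", "n1", 7))    # update messages carry the new value
print(copies2["x"])                                    # all three copies remain valid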

32
Memory Coherence cont.
33
Memory Coherence cont.
  • Type-specific memory coherence
  • Exploiting application-specific semantic
    information
  • The system uses different coherence mechanisms
    for different classes of objects
  • Write-once objects
  • Write-many objects
  • Read-mostly objects

34
Design Issues
  • Granularity
  • The size of the shared memory unit
  • Advantages of large unit sizes
  • A page size that is a multiple of the size
    provided by the underlying memory management
    system allows for the integration of DSM and the
    memory management system
  • Better utilization of locality of reference
  • Disadvantages of large unit sizes
  • A greater chance of contention
  • False sharing (illustrated on the next slide)

35
False Sharing
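The figure for this slide is not reproduced here; the sketch below illustrates the effect instead. Two variables that are never logically shared but happen to sit on the same page force the whole page to migrate on every alternating write. The 4 KB page size and the addresses are assumptions for the illustration.

PAGE_SIZE = 4096
ADDR_A = 100            # variable used only by node 1
ADDR_B = 200            # variable used only by node 2; falls on the same page as ADDR_A

def page_of(addr):
    return addr // PAGE_SIZE

transfers = 0
owner = "n1"
for writer, addr in [("n1", ADDR_A), ("n2", ADDR_B)] * 3:
    if owner != writer and page_of(addr) == page_of(ADDR_A):
        transfers += 1          # the whole page migrates to the writing node
        owner = writer
print(transfers)                # 5 transfers for 6 writes: the page ping-pongs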
36
Design Issues cont.
  • Page replacement
  • Traditional methods such as LRU cannot be used
    directly
  • Page access modes must be taken into
    consideration
  • For example, private pages may be replaced before
    shared pages (a sketch follows below)
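A sketch of such a mode-aware victim choice, assuming pages are ranked first by access mode (private before read-only shared before writable shared) and then by recency. This particular ranking is an illustrative choice, not a policy from the slides.

MODE_RANK = {"private": 0, "read_only_shared": 1, "writable_shared": 2}

def choose_victim(pages):
    """pages: list of dicts with 'mode' and 'last_used' (smaller = older)."""
    return min(pages, key=lambda p: (MODE_RANK[p["mode"]], p["last_used"]))

resident = [
    {"id": 1, "mode": "writable_shared",  "last_used": 1},
    {"id": 2, "mode": "private",          "last_used": 9},
    {"id": 3, "mode": "read_only_shared", "last_used": 2},
]
print(choose_victim(resident)["id"])   # 2: the private page is evicted first, despite being recent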

37
Case Studies
  • IVY
  • Integrated Shared Virtual Memory at Yale
  • The granularity is a page
  • The address space is divided into a shared
    virtual memory address space and a private space
  • It supports strict consistency using a
    write-invalidation protocol

38
IVY cont.
  • The coherence protocol
  • Write fault handling (sketched below)
  • The owner is identified first
  • Then the owner sends the page and its copy-set to
    the requesting processor
  • The faulting processor sends out invalidation
    messages to all the processors contained in the
    copy-set
  • Read fault handling
  • The owner is identified first
  • The owner sends the page and adds the requesting
    site to the copy-set of the page
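A sketch of the write-fault sequence just listed, with flat dictionaries standing in for the manager's ownership and copy-set tables; the page and node names are made up for illustration.

owner_of = {"p0": "n1"}                    # page -> current owner
copy_set = {"p0": {"n2", "n3"}}            # page -> nodes holding read copies
access = {("n1", "p0"): "read", ("n2", "p0"): "read", ("n3", "p0"): "read"}

def write_fault(node, page):
    owner = owner_of[page]                 # 1. identify the owner
    holders = copy_set[page] | {owner}     # 2. owner ships the page and its copy-set
    for other in holders - {node}:
        access[(other, page)] = "nil"      # 3. invalidate every other copy
    copy_set[page] = set()
    owner_of[page] = node                  # the faulting node becomes the owner
    access[(node, page)] = "write"

write_fault("n2", "p0")
print(owner_of["p0"], access[("n1", "p0")], access[("n3", "p0")])  # n2 nil nil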

39
IVY cont.
  • Coherence protocol
  • The centralized manager scheme
  • Fixed distributed manager scheme
  • Dynamic distributed manager scheme

40
IVY cont.
41
IVY cont.
  • Double fault
  • A double fault occurs if a page that is not
    available locally is first read and then written
  • The page is transferred once due to the read
    fault
  • Then the same page is transferred again due to
    the write fault
  • Memory allocation
  • Process synchronization

42
Case Studies cont.
  • Mirage
  • Thrashing control
  • Clouds
  • The RA kernel
  • Segmentation with paging (the size of a segment
    is a multiple of the physical page size)
  • Distributed shared memory controller
  • Four modes: read-only, read-write, weak-read, and
    none

43
Summary
  • Distributed shared memory tries to provide an
    easy-to-use interface for distributed
    applications
  • Distributed programs access data in the shared
    address space the same way they access data in
    local memory
  • However, performance is a critical issue
  • Replication is used to reduce the high cost of
    communication
  • Memory coherence, however, is difficult and
    expensive to achieve