Distributed Shared Memory

1
Distributed Shared Memory
  • Shu Zhang

2
Introduction
  • Interprocess communication
  • A distributed system has no physically shared
    memory
  • DSM is a software layer, built on message
    passing, that provides a shared-memory
    abstraction

3
Architecture
[Figure: n nodes, each with one or more CPUs, a local memory, and a
memory-mapping manager, connected by a communication network. The
distributed shared memory exists only virtually, layered over the
nodes' local memories.]
4
Consistency Issue
  • Replication of shared data
  • Memory coherence problem arises when multiple
    copies of the same data exist
  • Similar to the conventional cache consistency
    problem in multicache schemes for shared-memory
    multiprocessors

5
Consistency Models
  • Strict Consistency Model
  • Sequential Consistency Model
  • Causal Consistency Model
  • Pipelined RAM (PRAM) Consistency Model
  • Processor Consistency Model
  • Weak Consistency Model
  • Release Consistency Model

6
Strict Consistency Model
  • The value returned by a read operation on a
    memory address is always the same as the value
    written by the most recent write operation on
    that address
  • Requires an absolute global time, which is
    impossible to implement in a distributed system

7
Sequential Consistency Model
  • All processes see the same order of all memory
    access operations on the shared memory (a litmus
    test follows below)
  • Provides one-copy (single-copy) semantics
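
A classic litmus test makes this concrete. The short Python sketch
below (an illustration, not from the slides) enumerates every
interleaving of P1 (x = 1; r1 = y) and P2 (y = 1; r2 = x) that
respects both program orders, and shows that the outcome
r1 == r2 == 0 can never occur under sequential consistency:

    from itertools import permutations

    # P1 runs ops 0 and 1; P2 runs ops 2 and 3. Initially x == y == 0.
    ops = ["x=1", "r1=y", "y=1", "r2=x"]

    results = set()
    for order in permutations(range(4)):
        # keep only interleavings that preserve each program order
        if order.index(0) < order.index(1) and order.index(2) < order.index(3):
            mem = {"x": 0, "y": 0}
            r1 = r2 = None
            for i in order:
                if ops[i] == "x=1":
                    mem["x"] = 1
                elif ops[i] == "y=1":
                    mem["y"] = 1
                elif ops[i] == "r1=y":
                    r1 = mem["y"]
                else:
                    r2 = mem["x"]
            results.add((r1, r2))

    # (0, 0) is absent: forbidden under sequential consistency
    print(sorted(results))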

8
Sequential Consistency Model
  • Strategies depend on how the owner of a block of
    shared data is handled
  • Nonreplicated, nonmigrating
  • Nonreplicated, migrating
  • Easy, since only one copy of the block exists
  • Replicated, migrating
  • Write-invalidate
  • Write-update
  • Replicated, nonmigrating
  • Write-update

9
Write-Invalidate
  • When a node requests a write operation
  • If it has no local copy, it requests and
    replicates a valid copy
  • It sends an invalidate message to all other nodes
    that hold a copy of the data
  • The node then proceeds with the write operation
    and further read/write operations
  • When one of the other nodes later requests a
    read/write operation, it fetches the block from
    the node with a valid copy

10
Write-Invalidate Implementation
  • Status tag on each copy read-only or writable
  • Read request
  • With a local valid copy, read locally
  • With no local valid copy, obtain a copy from a
    node currently holding a valid one. If that
    copy's status tag is writable, change it to
    read-only. Read locally until the next
    invalidate message arrives.

11
Write-Invalidate Implementation
  • Write request
  • With a local valid and writable copy, write
    locally
  • With no local valid and writable copy, request
    and replicate a valid copy, set its status tag to
    writable, send out an invalidate message, and
    then write locally (a sketch of the whole
    protocol follows below)
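
A minimal single-node sketch of the protocol from slides 9-11,
assuming blocks hold single values; fetch_copy and
broadcast_invalidate are hypothetical stand-ins for the network
operations, not part of any real DSM library:

    # Status tags for a locally cached block.
    READ_ONLY, WRITABLE, INVALID = "read-only", "writable", "invalid"

    def fetch_copy(block):
        """Stand-in: obtain a valid copy from a node currently holding
        one. In the real protocol a writable supplier also demotes its
        own tag to read-only when it hands out the copy."""
        return 0

    def broadcast_invalidate(block):
        """Stand-in: tell every other node holding `block` to drop it."""

    class DsmNode:
        def __init__(self):
            self.blocks = {}  # block id -> (status tag, data)

        def read(self, block):
            tag, data = self.blocks.get(block, (INVALID, None))
            if tag == INVALID:
                data = fetch_copy(block)  # no valid local copy: replicate
                self.blocks[block] = (READ_ONLY, data)
            return data  # read locally until the next invalidate arrives

        def write(self, block, value):
            tag, _ = self.blocks.get(block, (INVALID, None))
            if tag != WRITABLE:
                if tag == INVALID:
                    fetch_copy(block)        # replicate a valid copy first
                broadcast_invalidate(block)  # other copies become invalid
            self.blocks[block] = (WRITABLE, value)  # then write locally

        def on_invalidate(self, block):
            self.blocks[block] = (INVALID, None)  # drop the stale copy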

12
Write-Update
  • When a node requests a write operation
  • If it has no local copy, it requests and
    replicates a copy
  • It performs the write operation on the local copy
  • It sends the address of the modified memory
    location and the new value to update all the
    other copies
  • The write operation succeeds only after all the
    copies are updated successfully

13
Write-Update
  • Sequential consistency can be achieved by using a
    global sequencer to totally order the write
    operations of all the nodes (a sketch follows
    below)
  • The intended modification of each write operation
    is first sent to the sequencer
  • The global sequencer assigns a sequence number to
    it
  • The modification is then multicast, with its
    sequence number, to all the nodes
  • Each node applies updates in sequence-number order
  • Expensive
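
A sketch of the sequencer scheme in Python, with multicast delivery
modeled as direct calls and blocks simplified to single values; the
class and method names are illustrative, not from the slides:

    import itertools

    class Sequencer:
        """Assigns globally increasing sequence numbers to writes and
        'multicasts' them; delivery is modeled as direct calls."""
        def __init__(self, nodes):
            self.counter = itertools.count()
            self.nodes = nodes

        def submit(self, addr, value):
            seq = next(self.counter)  # total order over all writes
            for node in self.nodes:   # multicast (seq, addr, value)
                node.on_update(seq, addr, value)

    class DsmNode:
        def __init__(self):
            self.memory = {}
            self.next_seq = 0
            self.pending = {}  # updates that arrived out of order

        def on_update(self, seq, addr, value):
            self.pending[seq] = (addr, value)
            while self.next_seq in self.pending:  # apply strictly in order
                a, v = self.pending.pop(self.next_seq)
                self.memory[a] = v
                self.next_seq += 1

    nodes = [DsmNode() for _ in range(3)]
    sequencer = Sequencer(nodes)
    sequencer.submit("x", 1)
    sequencer.submit("y", 2)
    print(nodes[0].memory)  # {'x': 1, 'y': 2}, identical on every node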

14
Data Locating
  • Broadcasting
  • Centralized-server algorithm
  • Fixed distributed-server algorithm
  • Dynamic distributed-server algorithm

15
Broadcasting
  • Each node has an owned-block table each entry
    contains a list of the nodes that currently have
    a valid copy of the corresponding block
  • Read operation
  • Broadcast the read request
  • The owner responds by adding the requesting node
    to its block table entry and sending back a copy
    of the block

16
Broadcasting
  • Write operation (write-invalidate)
  • Broadcast the write request
  • The owner of the block relinquishes ownership,
    sends the block together with the list of nodes
    that hold copies, and reinitializes its own list
  • The requesting node adds an entry for the newly
    owned block and sends out invalidate messages to
    the nodes in the list (sketched below)
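
A sketch of the owner-side bookkeeping for this scheme, with the
broadcast and messages modeled as direct calls; only the true owner
answers a broadcast, and the names here are illustrative:

    def send_invalidate(node_name, block):
        """Stand-in for the network invalidate message."""

    class Node:
        def __init__(self, name):
            self.name = name
            # owned block -> (data, list of node names holding a copy)
            self.owned = {}

        def on_read_request(self, block, requester):
            data, holders = self.owned[block]
            holders.append(requester)  # remember who now has a copy
            return data                # send a copy back to the requester

        def on_write_request(self, block):
            # relinquish ownership; the requester becomes the new owner
            return self.owned.pop(block)

        def take_ownership(self, block, data, old_holders):
            self.owned[block] = (data, [self.name])  # new owned-block entry
            for holder in old_holders:
                send_invalidate(holder, block)  # invalidate stale copies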

17
Centralized-Server Algorithm
  • A centralized server maintains a block table
    containing the location information for every
    block
  • For a non-local block of data, a node sends its
    request to the centralized server
  • The server extracts the location information from
    the block table and forwards it to the requesting
    node. If the block migrates, the server updates
    the block-table entry (see the sketch below)
  • Drawbacks the server is a bottleneck and reduces
    parallelism
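
A sketch of the server's block table, with requests modeled as direct
calls; in a real system they would be messages to one well-known node:

    class CentralServer:
        def __init__(self):
            self.block_table = {}  # block id -> node currently holding it

        def locate(self, block, requester, migrate=False):
            holder = self.block_table[block]
            if migrate:
                # the block will move, so record the requester as holder
                self.block_table[block] = requester
            return holder  # requester then fetches the block from holder

    server = CentralServer()
    server.block_table["blk7"] = "node2"
    print(server.locate("blk7", requester="node5", migrate=True))  # node2
    print(server.block_table["blk7"])  # node5: entry reflects the move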

18
Fixed Distributed-Server Algorithm
  • Distributes the role of the server in the
    centralized-server scheme
  • A block manager runs on each of several nodes
  • Each manages a subset of the shared data blocks
  • A mapping function maps data blocks to block
    managers and their corresponding nodes (a sketch
    follows below)
  • Each block manager handles requests in the same
    way as the centralized-server algorithm
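
The mapping function can be anything deterministic that every node
computes identically; a simple modulo split over an assumed number of
managers is one common sketch:

    NUM_MANAGERS = 4  # assumption: four nodes each run a block manager

    def manager_for(block_id: int) -> int:
        """Map a block to the manager (and node) responsible for it."""
        return block_id % NUM_MANAGERS

    # A request for block b goes straight to manager_for(b), which keeps
    # a block table for its subset and answers like the central server.
    print(manager_for(17))  # manager 1 handles block 17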

19
Dynamic Distributed-Server Algorithm
  • Each node has a block table
  • The block table contains probable-owner
    information for every block in the shared-memory
    space
  • A request first consults the local block table
    for the probable-owner information
  • The request is then sent to the node recorded as
    the probable owner

20
Dynamic Distributed-server Algorithm
  • If that node is the true owner, it transfers the
    block to the requesting node
  • Otherwise, it looks up its own block table for
    the probable owner, forwards the request to that
    node, and updates its probable-owner entry to the
    requesting node
  • Repeat until the true owner is found (a sketch
    follows below)
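
A sketch of the probable-owner chase, modeled in-process. Note the
hint update here points every visited node at the found owner (a
path-compression variant); some implementations instead point hints
at the requesting node:

    class Node:
        def __init__(self, name):
            self.name = name
            self.probable_owner = {}  # block -> node believed to own it
            self.owned = set()        # blocks this node truly owns

    def locate(requester, block):
        """Follow probable-owner hints until the true owner is found."""
        path = [requester]
        node = requester.probable_owner[block]
        while block not in node.owned:  # not the owner: keep forwarding
            path.append(node)
            node = node.probable_owner[block]
        for visited in path:            # shorten future chases
            visited.probable_owner[block] = node
        return node

    # owner chain a -> b -> c, where c truly owns the block
    a, b, c = Node("a"), Node("b"), Node("c")
    a.probable_owner["blk"], b.probable_owner["blk"] = b, c
    c.owned.add("blk")
    print(locate(a, "blk").name)  # "c"; a's and b's hints now point at c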

21
Weak Consistency
  • Sequential consistency has been criticized for
    imposing unnecessary restrictions on some
    applications
  • When a process reads and writes inside a critical
    section, the memory system has no notion of the
    critical section and propagates every write to
    all memories

22
Weak Consistency
  • Weak consistency lets the system treat a set of
    operations as a unit rather than propagating each
    one individually
  • The individual operations inside a critical
    section are not relevant to other processes
  • Only the result of the critical section matters

23
Weak Consistency Requirements
  • Uses critical sections with synchronization
    variables (a sketch follows below)
  • Accesses to synchronization variables are
    sequentially consistent
  • No access to a synchronization variable is
    allowed until all previous writes have completed
    everywhere
  • All previous accesses to synchronization
    variables must be completed before access to a
    nonsynchronization variable is allowed
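
From the programmer's view the contract looks like the sketch below;
sync(), read(), and write() are hypothetical DSM primitives shown with
trivial local stand-ins, not a real API:

    memory = {"acct_a": 100, "acct_b": 0}

    def sync(var):
        """Stand-in for an access to synchronization variable `var`: in
        a real DSM it is sequentially consistent and blocks until all
        previously issued writes have completed everywhere."""

    def read(addr):
        return memory[addr]

    def write(addr, value):
        # between two sync() calls, writes may reach other nodes in any
        # order, or not at all yet
        memory[addr] = value

    def transfer(amount):
        sync("S")  # entry: every earlier write is complete everywhere
        write("acct_a", read("acct_a") - amount)
        write("acct_b", read("acct_b") + amount)
        sync("S")  # exit: both writes complete before others proceed

    transfer(25)
    print(memory)  # {'acct_a': 75, 'acct_b': 25}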