SYNCHRONIZATION IN DISTRIBUTED SYSTEMS


Transcript and Presenter's Notes

Title: SYNCHRONIZATION IN DISTRIBUTED SYSTEMS


1
Chapter 3
  • SYNCHRONIZATION IN DISTRIBUTED SYSTEMS
  • (pp. 134-168)

2
3.2 Mutual Exclusion
  • Mutual exclusion ensures that while one process
    is using a shared data structure, no other process
    will use it at the same time.

3
  • A centralized algorithm: the coordinator lets
    only one process at a time into each critical
    region. Three messages per use of a critical
    region: request, grant, release. (A sketch of the
    coordinator's logic follows below.)
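A minimal Python sketch of the coordinator's logic described above; the
Coordinator class and its method names are illustrative assumptions, not
from the slides.

from collections import deque

class Coordinator:
    """Centralized mutual exclusion: grant the critical region to one
    process at a time and queue everyone else."""

    def __init__(self):
        self.holder = None        # process currently inside the critical region
        self.waiting = deque()    # queued, not-yet-granted requests

    def request(self, pid):
        # REQUEST: grant immediately if the region is free, else queue.
        if self.holder is None:
            self.holder = pid
            return "GRANT"
        self.waiting.append(pid)
        return None               # no reply; the caller blocks until granted

    def release(self, pid):
        # RELEASE: hand the region to the oldest queued request, if any.
        assert pid == self.holder
        self.holder = self.waiting.popleft() if self.waiting else None
        return self.holder        # next process to receive GRANT (or None)

coord = Coordinator()
print(coord.request(1))   # GRANT          (Fig. 3-8a)
print(coord.request(2))   # None: queued   (Fig. 3-8b)
print(coord.release(1))   # 2              (Fig. 3-8c)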

4
  • Can the process distinguish a dead coordinator
    from permission denied?

5
Fig. 3-8. (a) Process 1 asks the coordinator for
permission to enter a critical region. (b)
Process 2 then asks permission to enter the same
critical region. (c) When process 1 exits the
critical region, it tells the coordinator, which
then replies to 2.
6
  • Ricart and Agrawala's distributed
    algorithm (1981)

Fig. 3-9. (a) Two processes want to enter the same
critical region at the same moment. (b) Process 0
has the lowest timestamp, so it wins. (c) When
process 0 is done, it sends an OK also, so 2 can
now enter the critical region.
7
  • The number of messages required per entry is
    2(n-1), where n is the total number of processes
    in the system. (A sketch of the receiver's rule
    in this algorithm follows below.)
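A sketch of the receiver's decision rule in the Ricart-Agrawala algorithm;
the state names and the on_request helper are assumptions for illustration,
and message transport is omitted.

def on_request(my_state, my_ts, my_pid, req_ts, req_pid):
    # Receiver side: reply OK now, or DEFER the reply until we leave the
    # critical region. Ties on timestamps are broken by process number.
    if my_state == "RELEASED":     # neither using nor wanting the region
        return "OK"
    if my_state == "HELD":         # currently inside the critical region
        return "DEFER"
    # Both want the region: the request with the lower (timestamp, pid) wins.
    return "OK" if (req_ts, req_pid) < (my_ts, my_pid) else "DEFER"

# Fig. 3-9: processes 0 and 2 both want the region; 0 has the lower timestamp.
print(on_request("WANTED", my_ts=12, my_pid=2, req_ts=8,  req_pid=0))  # OK
print(on_request("WANTED", my_ts=8,  my_pid=0, req_ts=12, req_pid=2))  # DEFER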

8
  • Some problems with the distributed algorithm:
  • What if a destination process is dead?
  • What if the group membership changes?
  • Each process is a potential communication
    bottleneck.

9
  • Improvement: the number of grant messages
    required is a simple majority rather than all of
    them.
  • A token ring algorithm: only the process
    that holds the token may enter the critical
    region. (A sketch follows below.)
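A sketch of token-ring mutual exclusion; token passing over a real network
is replaced by a Python generator, and wants_cr is a hypothetical predicate
supplied for illustration.

import itertools

def token_ring(ring, wants_cr):
    # The token circulates along the logical ring; only its current holder
    # may enter the critical region, and then it passes the token on.
    for pid in itertools.cycle(ring):
        if wants_cr(pid):
            print(f"process {pid} enters and leaves the critical region")
        yield pid

# Logical ring of Fig. 3-10(b); processes 7 and 3 want the critical region.
token = token_ring([2, 4, 9, 7, 1, 6, 5, 8, 0, 3], wants_cr=lambda p: p in {7, 3})
for _ in range(10):      # one full circulation of the token
    next(token)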

10
Fig. 3-10. (a) An unordered group of processes
on a network. (b) A logical ring constructed in
software.
11
  • Some problems with the token ring algorithm:
  • A lost token is hard to detect, because the
    holding time is unbounded.
  • Every process must maintain the current ring
    configuration.

12
  • A comparison of the three algorithms

Fig. 3-11. A comparison of three mutual exclusion
algorithms.
13
3.3 Election Algorithm
  • The goal of an election algorithm is to ensure
    that when an election starts, it concludes with
    all processes agreeing on who the new coordinator
    is to be.

14
  • The bully algorithm: a
    process, P, holds an election as follows.
  • P sends an ELECTION message to every other
    process with a higher number.

15
  • If no one responds, P wins the election and
    becomes coordinator.
  • If one of the higher-ups answers, it takes over;
    P's job is done. (A sketch follows below.)
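A sketch of the bully algorithm; real implementations detect a silent
(crashed) process with timeouts, which are modeled here by an explicit
alive set. The function name is an invented one.

def bully_election(initiator, processes, alive):
    # ELECTION messages go only to higher-numbered processes; crashed ones
    # never answer. If nobody answers, the initiator wins; otherwise a
    # responder takes over and holds its own election.
    responders = [p for p in processes if p > initiator and p in alive]
    if not responders:
        return initiator
    return max(bully_election(p, processes, alive) for p in responders)

# Fig. 3-12: process 7 has crashed, 4 holds an election, 6 eventually wins.
print(bully_election(4, range(8), alive={0, 1, 2, 3, 4, 5, 6}))   # -> 6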

16
Fig. 3-12. The bully election algorithm (a)
Process 4 holds an election. (b) Processes 5 and 6
respond, telling 4 to stop. (c) Process 6 wins
and tells everyone.
17
  • A ring algorithm
  • When any process notices that the coordinator is
    not functioning, it builds an ELECTION message
    containing its own process number and sends it to
    its successor. At each step, the sender adds its
    own process number to the list in the message.

18
  • Eventually, the message gets back to the process
    that started it all. The message type is then
    changed to COORDINATOR and it is circulated once
    again, to inform every process who the new
    coordinator is (the list member with the highest
    process number). (A sketch follows below.)
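A sketch of the ring election; the circulating ELECTION message is simply a
Python list, and dead processes are skipped. The function name is invented.

def ring_election(initiator, ring, alive):
    election = []                          # the circulating ELECTION message
    i = ring.index(initiator)
    while True:
        pid = ring[i % len(ring)]
        if pid in alive:
            if election and pid == initiator:
                break                      # the message has come all the way around
            election.append(pid)           # each process adds its own number
        i += 1                             # forward to the next (live) successor
    # The COORDINATOR message would now announce the highest number on the list.
    return max(election), election

# Fig. 3-13: process 7 has crashed; suppose 5 notices and starts the election.
print(ring_election(5, ring=list(range(8)), alive={0, 1, 2, 3, 4, 5, 6}))
# -> (6, [5, 6, 0, 1, 2, 3, 4])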

19
Fig. 3-13. Election algorithm using a ring.
20
3.4 Atomic Transactions
  • An atomic transaction is a much higher-level
    abstraction; a semaphore is a low-level
    synchronization primitive that leaves mutual
    exclusion, critical-region management, deadlock
    prevention, and crash recovery to the programmer.

21
  • The concept of a transaction in computer systems
    predates disks and online databases; an early
    form, updating a master tape, is shown in figure
    3-14.

22
Fig. 3-14. Updating a master tape is fault
tolerant.
23
  • Stable storage usually can be implemented with a
    pair of ordinary disks.

24
Fig. 3-15. (a) Stable storage. (b) Crash after
drive 1 is updated. (c) Bad spot.
25
  • Transaction primitives
  • BEGIN-TRANSACTION: the commands that follow form
    a transaction.
  • END-TRANSACTION: terminate the transaction and
    try to commit.

26
  • ABORT-TRANSACTION: kill the transaction and
    restore the old values.
  • READ: read data from a file (or other object).
  • WRITE: write data to a file (or other object).
    (A usage sketch follows below.)
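A sketch of how the primitives compose, mapped onto a Python context
manager; the Transaction class and TransactionAborted exception are
invented names for illustration.

class TransactionAborted(Exception):
    pass

class Transaction:
    """Entering the with-block is BEGIN-TRANSACTION, leaving it normally is
    END-TRANSACTION (commit), and raising TransactionAborted is
    ABORT-TRANSACTION (all tentative writes are discarded)."""

    def __init__(self, store):
        self.store = store           # the committed data
        self.tentative = {}          # private, uncommitted writes

    def read(self, key):             # READ: tentative value if present, else committed one
        return self.tentative.get(key, self.store.get(key))

    def write(self, key, value):     # WRITE: only touches the private copy
        self.tentative[key] = value

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc, tb):
        if exc_type is None:
            self.store.update(self.tentative)   # commit: make the writes visible
        return exc_type is TransactionAborted   # abort: drop writes, swallow the exception

accounts = {"A": 100, "B": 0}
with Transaction(accounts) as t:                # transfer 50 from A to B, atomically
    t.write("A", t.read("A") - 50)
    t.write("B", t.read("B") + 50)
print(accounts)                                 # {'A': 50, 'B': 50}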

27
  • Properties of transactions
  • Atomic: to the outside world, the transaction
    happens indivisibly.
  • Consistent: the transaction does not violate
    system invariants.

28
  • Isolated: concurrent transactions do not
    interfere with each other.
  • Durable: once a transaction commits, the changes
    are permanent.

29
Fig. 3-17. (a)-(c) Three transactions. (d)
Possible schedules.
30
  • When any transaction or subtransaction starts, it
    is conceptually given a private copy of all the
    objects in the entire system for it to manipulate
    as it wishes.

31
  • If it aborts, its private universe just vanishes,
    as if it had never existed. If it commits, its
    private universe replaces its parent's.

32
  • Two methods are commonly used to implement
    transactions: the private workspace and the
    writeahead log.

33
  • The
    first method means actually giving a process a
    private workspace at the instant it begins a
    transaction; if the transaction commits, the
    private workspace is moved into the parent's
    workspace automatically, as shown in figure 3-18
    (c).

34


  • In this figure, two optimizations are used to
    reduce the cost of copying everything: reads go
    through a pointer to the parent's workspace, and
    writes use a private index and shadow blocks.
    (A sketch follows below.)
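A sketch of the private-workspace method with the two optimizations above;
the Workspace class and its fields are invented names, and "disk blocks"
are just list entries.

class Workspace:
    """Reads fall through to the parent's index; the first write copies the
    index (not the data) and allocates a private shadow block."""

    def __init__(self, parent_index, blocks):
        self.parent_index = parent_index    # index of the committed file
        self.blocks = blocks                # shared pool of disk blocks
        self.index = None                   # private index, created lazily on first write

    def read(self, i):
        index = self.index if self.index is not None else self.parent_index
        return self.blocks[index[i]]

    def write(self, i, data):
        if self.index is None:
            self.index = list(self.parent_index)     # copy the index, not the blocks
        self.blocks.append(data)                      # shadow block with the new contents
        if i == len(self.index):
            self.index.append(len(self.blocks) - 1)   # appending a new block
        else:
            self.index[i] = len(self.blocks) - 1      # modifying an existing block

    def commit(self):
        # Atomically replace the parent's index with the private one.
        if self.index is not None:
            self.parent_index[:] = self.index

blocks = ["b0", "b1", "b2"]                 # a three-block file (Fig. 3-18a)
file_index = [0, 1, 2]
ws = Workspace(file_index, blocks)
ws.write(0, "b0'")                          # modify block 0     (Fig. 3-18b)
ws.write(3, "b3")                           # append block 3
ws.commit()                                 # Fig. 3-18c: private index replaces the original
print(file_index, blocks)                   # [3, 1, 2, 4] ['b0', 'b1', 'b2', "b0'", 'b3']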

35
Fig. 3-18 (a) The file index and disk blocks for
a three-block file. (b) The situation after a
transaction has modified block 0 and appended
block 3. (c) After committing.
36
  • The second method means that files are actually
    modified in place, but before any block is
changed, a record is written to the writeahead
    log on stable storage.

37
  • Only
    after the log has been successfully written is
    the change made to the file. The log can then
    also be used for recovering from crashes.
    (A sketch follows below.)
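A sketch of writeahead logging; the file name "txn.log" and the JSON record
format are assumptions for illustration, and the "database" is a plain dict.

import json, os

def write_update(db, log_path, key, new_value):
    # The log record describing the change is forced to stable storage
    # *before* the in-place update is made, so a crash between the two
    # steps can be undone (or redone) from the log.
    record = {"key": key, "old": db.get(key), "new": new_value}
    with open(log_path, "a") as log:
        log.write(json.dumps(record) + "\n")
        log.flush()
        os.fsync(log.fileno())        # log hits stable storage first
    db[key] = new_value               # only now is the file modified in place

def rollback(db, log_path):
    # Undo an aborted transaction by replaying the log backwards.
    with open(log_path) as log:
        records = [json.loads(line) for line in log]
    for record in reversed(records):
        db[record["key"]] = record["old"]

db = {"x": 0, "y": 0}
write_update(db, "txn.log", "x", 1)   # Fig. 3-19 style: log first, then apply
write_update(db, "txn.log", "y", 2)
rollback(db, "txn.log")               # abort: x and y return to 0
print(db)                             # {'x': 0, 'y': 0}
os.remove("txn.log")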

38
Fig. 3-19 (a) A transaction. (b)-(d) The log
before each statement is executed.
39
  • Two-phase commit protocol: a protocol for
    achieving atomic commit in a distributed system.
    (A sketch follows below.)
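A sketch of the two-phase commit protocol of Fig. 3-20; participants are
modeled as callables and "message passing" is just a function call, so the
names below are illustrative only.

def two_phase_commit(coordinator_log, participants):
    # Phase 1: the coordinator writes "prepare" to its log and collects votes.
    coordinator_log.append("prepare")
    votes = [p("prepare") for p in participants]

    # Phase 2: commit only if every participant voted yes; otherwise abort.
    decision = "commit" if all(votes) else "abort"
    coordinator_log.append(decision)
    for p in participants:
        p(decision)                 # participants carry out the decision
    return decision

def make_participant(name, will_commit=True):
    def participant(message):
        print(f"{name} received {message}")
        return will_commit if message == "prepare" else None
    return participant

log = []
print(two_phase_commit(log, [make_participant("A"), make_participant("B")]))         # commit
print(two_phase_commit(log, [make_participant("A"), make_participant("C", False)]))  # abort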

40
Fig. 3-20. The two-phase commit protocol when it
succeeds.
41
  • When multiple transactions are executing
    simultaneously in different processes (on
    different processors), some mechanism is needed
    to keep them out of each other's way.

42

  • There are three such mechanisms: locking,
    optimistic concurrency control, and timestamps.

43
  • Two-phase locking: the process first acquires all
    the locks it needs (the growing phase), then
    releases them (the shrinking phase). To avoid
    deadlocks, it acquires the locks in some canonical
    order, preventing hold-and-wait cycles. (A sketch
    follows below.)
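A sketch of (conservative) two-phase locking using threading.Lock; the lock
names and the sorted canonical order are illustrative choices.

import threading

locks = {name: threading.Lock() for name in ("x", "y")}

def two_phase(items, work):
    # Growing phase: acquire every needed lock in canonical (sorted) order.
    # Shrinking phase: release them all only after the work is done; no lock
    # is acquired after the first release.
    acquired = []
    try:
        for name in sorted(items):
            locks[name].acquire()
            acquired.append(name)
        return work()                    # all reads/writes happen here
    finally:
        for name in reversed(acquired):
            locks[name].release()

print(two_phase({"y", "x"}, lambda: "updated x and y under both locks"))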

44
Fig. 3-21. Two-phase locking.
45
  • Optimistic concurrency control: just go ahead and
    do whatever you want to, without paying attention
    to what anybody else is doing. It fits best with
    implementations based on private workspaces.

46
  • But
    under conditions of heavy load, the probability
    of failure may go up substantially, making it a
    poor choice.

47
  • Timestamps: assign each transaction a timestamp
    at the moment it does BEGIN-TRANSACTION. When a
    process tries to access a file, the file's read
    and write timestamps must be lower (older) than
    the current transaction's timestamp. (A sketch
    follows below.)
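A sketch of the timestamp check described above; the File class and
Conflict exception are invented names, and integer timestamps stand in for
values from a global clock.

class Conflict(Exception):
    pass

class File:
    """Each file remembers the timestamps of the last transactions that read
    and wrote it; an access by an *older* transaction than those recorded is
    a conflict, and that transaction must abort and retry."""

    def __init__(self):
        self.read_ts = 0
        self.write_ts = 0
        self.value = None

    def read(self, ts):
        if ts < self.write_ts:                 # a later transaction already wrote
            raise Conflict("abort: file written by a later transaction")
        self.read_ts = max(self.read_ts, ts)
        return self.value

    def write(self, ts, value):
        if ts < self.read_ts or ts < self.write_ts:
            raise Conflict("abort: file used by a later transaction")
        self.write_ts = ts
        self.value = value

f = File()
f.write(ts=10, value="alpha")     # transaction with timestamp 10 writes
print(f.read(ts=12))              # a later transaction reads: fine
try:
    f.write(ts=11, value="beta")  # older than the last reader (12): conflict
except Conflict as e:
    print(e)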

48
Fig. 3-22. Concurrency control using timestamps.
49
3.5 Deadlocks in Distributed Systems
  • Two kinds of distributed deadlocks: communication
    deadlocks and resource deadlocks. But the
    circular-wait condition is unlikely to occur as a
    result of communication alone (e.g., in the
    client-server model).

50
  • Four strategies are commonly used to handle
    deadlocks: ostrich, detection, prevention, and
    avoidance. In practice, only the detection and
    prevention algorithms are usable in distributed
    systems.

51
  • Centralized deadlock detection: a central
    coordinator maintains the resource graph for the
    whole system. When the coordinator detects a
    cycle, it kills off one process to break the
    deadlock.

52
Fig. 3-23. (a) Initial resource graph for machine
0. (b) Initial resource graph for machine 1. (c)
The coordinator's view of the world. (d) The
situation after the delayed message.
53
  • Three methods can be used to keep the resource
    graph up to date: report every add/delete of an
    edge, send updates periodically, or let the
    coordinator ask for information when it needs it.

54
  • In
    figure 3-23, B releases R and asks for T. If the
    ask message arrives before the release message,
    the coordinator sees a false deadlock. One
    possible solution is to use timestamps.

55
  • The Chandy-Misra-Haas algorithm (1983) is invoked
    when a process has to wait for some resource. A
    special probe message is generated and sent to
    the process holding the needed resource.

56
  • The
    message consists of three numbers: the process
    that just blocked, the process sending the
    message, and the process to whom it is being
    sent. (A sketch follows below.)
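A sketch of probe propagation in the Chandy-Misra-Haas algorithm; here the
wait-for relation is given directly as a dictionary instead of being
discovered by message passing, and the function name is invented.

def chandy_misra_haas(blocked, waits_for):
    # A probe is (blocked, sender, receiver). It is forwarded along wait-for
    # edges; if a probe comes back to the process that originated it, there
    # is a cycle and the system is deadlocked.
    probes = [(blocked, blocked, r) for r in waits_for.get(blocked, [])]
    seen = set()
    while probes:
        initiator, sender, receiver = probes.pop()
        if receiver == initiator:
            return True                         # the probe came all the way around
        if receiver in seen:
            continue                            # don't forward the same probe twice
        seen.add(receiver)
        # The receiver updates the second and third fields and forwards the
        # probe to every process it is waiting for.
        probes += [(initiator, receiver, nxt) for nxt in waits_for.get(receiver, [])]
    return False

# Fig. 3-24 style example: 0 waits for 1, 1 waits for 2, 2 waits for 0 again.
print(chandy_misra_haas(0, {0: [1], 1: [2], 2: [0]}))   # True: deadlock
print(chandy_misra_haas(0, {0: [1], 1: [2], 2: []}))    # False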

57
Fig. 3-24. The Chandy-Misra-Haas distributed
deadlock detection algorithm.
58
  • If the message goes all the way around and comes
    back to the original sender, a cycle exists and
    the system is deadlocked. Two ways can be used to
    break the deadlock.

59
  • One
    way is to have the process that initiated the
    probe commit suicide. The other way is to have
    each process add its identity to the end of the
    probe message, so that the process with the
    highest number on the list is killed.

60
  • Distributed deadlock prevention
  • Order all the resources and require processes to
    acquire them in strictly increasing order.

61
  • In a distributed system with global time and
    atomic transactions, two other practical
    algorithms are possible: wait-die and wound-wait.
    In the former case, if a young process wants a
    resource held by an old one, the young process is
    killed. But it will start up again and be killed
    again.

62

  • In the wound-wait case, if an old process wants
    a resource held by a young one, the old process
    preempts the young one. The young one probably
    starts up again immediately and tries to acquire
    the resource, forcing it to wait. (See Fig. 3-26
    and the sketch below.)
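A sketch of the two decision rules; resolve is an invented helper, and a
smaller timestamp means an older transaction.

def resolve(requester_ts, holder_ts, scheme):
    # Old transactions always get priority; only the young one is ever killed.
    requester_is_older = requester_ts < holder_ts
    if scheme == "wait-die":
        # Old requester waits for the young holder; a young requester dies.
        return "requester waits" if requester_is_older else "requester dies"
    if scheme == "wound-wait":
        # Old requester wounds (preempts) the young holder; a young requester waits.
        return "holder is preempted" if requester_is_older else "requester waits"
    raise ValueError(scheme)

print(resolve(requester_ts=5, holder_ts=9, scheme="wait-die"))    # requester waits
print(resolve(requester_ts=9, holder_ts=5, scheme="wait-die"))    # requester dies
print(resolve(requester_ts=5, holder_ts=9, scheme="wound-wait"))  # holder is preempted
print(resolve(requester_ts=9, holder_ts=5, scheme="wound-wait"))  # requester waits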
