Title: Week 8: Mutual Exclusion
1. Week 8: Mutual Exclusion
2. Why Mutual Exclusion?
- Bank database: think of two simultaneous deposits of 10,000 into your bank account, each from one ATM.
  - Both ATMs concurrently read the initial amount of 1,000 from the bank server.
  - Both ATMs add 10,000 to this amount (locally at the ATM).
  - Both write the final amount to the server.
- What's wrong? Each ATM writes 11,000, so the final balance is 11,000 instead of the correct 21,000; one deposit is lost.
- The ATMs need mutually exclusive access to your account entry at the server.
3. Mutual Exclusion
- Mutual exclusion is required to prevent interference and ensure consistency when accessing shared resources.
- Solutions
  - Semaphores, mutexes, etc. in local operating systems
  - Message-passing-based protocols in distributed systems
- Structure of use:
  - enter() the critical section
  - AccessResource() in the critical section
  - exit() the critical section
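The enter/access/exit structure maps directly onto a local mutex. A minimal Python sketch (the names `deposit` and `shared` are illustrative, not from the slides):

```python
# Sketch of the enter() / AccessResource() / exit() pattern with a local mutex.
import threading

lock = threading.Lock()
shared = {"balance": 0}

def deposit(amount):
    lock.acquire()                # enter() the critical section
    shared["balance"] += amount   # AccessResource()
    lock.release()                # exit() the critical section

threads = [threading.Thread(target=deposit, args=(100,)) for _ in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(shared["balance"])  # 1000: no increments are lost
```

Without the lock, the read-modify-write on `shared["balance"]` could interleave and lose updates, exactly as in the ATM example above.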
4. Distributed Mutual Exclusion
- Distributed mutual exclusion requirements
  - Safety: at most one process may execute in the CS at any time.
  - Liveness: every request for the CS is eventually granted.
  - Ordering/fairness (desirable): requests are granted in FIFO order.
- What are the three requirements in a traditional OS?
  - Mutual exclusion
  - Progress
  - Bounded waiting
5. Review: Semaphores
- Used to synchronize access of multiple threads to common data structures.
- Semaphore S = 1
- Allows two operations:
  - wait(S) (or P(S)):
    while (1) {        // each execution of the while loop is atomic
      if (S > 0) {
        S--;
        break;
      }
    }
  - signal(S) (or V(S)):
    S++;
- Each while-loop iteration, and the S++ in signal, are atomic operations.
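The atomicity assumed by the slide's pseudocode can be sketched in Python by guarding the test-and-decrement with a lock (the class name `SpinSemaphore` is ours; real code would use `threading.Semaphore`, which blocks instead of spinning):

```python
# Sketch of wait(S)/signal(S) as on the slide; a Lock makes each loop
# iteration (and each increment) atomic. Busy-waiting is inefficient but
# mirrors the pseudocode.
import threading
import time

class SpinSemaphore:
    def __init__(self, value=1):
        self.S = value
        self._atomic = threading.Lock()   # makes each iteration atomic

    def wait(self):                       # P(S)
        while True:
            with self._atomic:
                if self.S > 0:
                    self.S -= 1
                    break
            time.sleep(0)                 # yield so the holder can run

    def signal(self):                     # V(S)
        with self._atomic:
            self.S += 1

s = SpinSemaphore(1)
s.wait()    # S: 1 -> 0, caller enters
s.signal()  # S: 0 -> 1, caller exits
```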
6. How Are Semaphores Used?
One use: mutual exclusion (bank ATM example)

semaphore S = 1;        // shared by the ATMs

ATM1:
  wait(S);              // enter
  // critical section
  obtain bank amount;
  add in deposit;
  update bank amount;
  signal(S);            // exit

ATM2:
  extern semaphore S;
  wait(S);              // enter
  // critical section
  obtain bank amount;
  add in deposit;
  update bank amount;
  signal(S);            // exit
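The ATM example runs correctly once the read-modify-write is inside the critical section. A Python sketch of the two concurrent deposits (variable names are illustrative):

```python
# Two concurrent 10,000 deposits on a 1,000 balance, protected by a
# semaphore with initial value 1, as in the slide's pseudocode.
import threading

S = threading.Semaphore(1)
account = {"amount": 1000}

def atm_deposit(deposit):
    S.acquire()                    # wait(S): enter
    amount = account["amount"]     # obtain bank amount
    amount += deposit              # add in deposit
    account["amount"] = amount     # update bank amount
    S.release()                    # signal(S): exit

t1 = threading.Thread(target=atm_deposit, args=(10_000,))
t2 = threading.Thread(target=atm_deposit, args=(10_000,))
t1.start(); t2.start()
t1.join(); t2.join()
print(account["amount"])  # 21000, not the lost-update result 11000
```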
7. Distributed Mutual Exclusion: Performance Evaluation Criteria
- Bandwidth: the total number of messages sent in each entry and exit operation combined.
- Client delay: the delay incurred by a process at each entry and exit operation (when no other process is waiting).
- Synchronization delay: the time interval between one process exiting the critical section and the next process entering it (when there is one process waiting).
- These translate into throughput: the rate at which processes can access the critical section.
8. Assumptions
- For all the algorithms studied, we make the following assumptions:
  - Each pair of processes is connected by reliable channels (such as TCP). Messages are eventually delivered to the recipient's input buffer.
  - Processes do not fail.
9. Centralized Control of Mutual Exclusion
- A central coordinator
  - Is appointed or elected.
  - Grants permission to enter the CS and keeps a queue of requests to enter the CS.
  - Ensures only one process at a time can access the CS.
  - Handles different CSs separately.
- Operations (coordinator = server)
  - To enter a CS: send a request to the server and wait for the token.
  - On exiting the CS: send a message to the server to release the token.
  - Upon receipt of a request, if no other process has the token, the server replies with the token; otherwise, the server queues the request.
  - Upon receipt of a release message, the server removes the oldest entry in the queue (if any) and replies with the token.
- Features
  - Safety, liveness, and ordering are guaranteed.
  - It takes 3 messages per combined entry/exit operation.
  - Client delay: one round-trip time (request + grant).
  - Synchronization delay: one round-trip time (release + grant).
  - The coordinator becomes a performance bottleneck and a single point of failure.
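The coordinator's bookkeeping is just a token flag plus a FIFO queue. An in-process sketch (message passing is modeled as method calls; a real system would use sockets or RPC):

```python
# Sketch of the central coordinator: grant the token if free, else queue
# the request; on release, hand the token to the oldest waiter.
from collections import deque

class Coordinator:
    def __init__(self):
        self.token_free = True
        self.queue = deque()

    def request(self, pid):
        """Return True if the token is granted now, else queue the request."""
        if self.token_free:
            self.token_free = False
            return True
        self.queue.append(pid)
        return False

    def release(self):
        """Return the pid granted next, or None if no one is waiting."""
        if self.queue:
            return self.queue.popleft()   # oldest request gets the token
        self.token_free = True
        return None

c = Coordinator()
assert c.request("p1") is True    # p1 gets the token immediately
assert c.request("p2") is False   # p2 is queued
assert c.release() == "p2"        # on p1's release, p2 is granted
assert c.release() is None        # queue empty: token returns to the server
```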
10. Token Ring Approach
- Processes are organized in a logical ring: pi has a communication channel to p((i+1) mod N).
- Operations
  - Only the process holding the token can enter the CS.
  - To exit the CS, the process sends the token on to its neighbor.
  - If a process does not require entry to the CS when it receives the token, it forwards the token to the next neighbor.
- Features
  - Safety and liveness are guaranteed, but ordering is not.
  - Bandwidth: 1 message per exit.
  - Client delay: 0 to N message transmissions.
  - Synchronization delay between one process's exit from the CS and the next process's entry: between 1 and N-1 message transmissions.
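A single-threaded simulation of one revolution of the ring (the set `wants_cs` is an illustrative scenario, not part of the algorithm):

```python
# Token ring simulation: the token circulates p0 -> p1 -> ... -> p(N-1) -> p0;
# a process enters the CS only while holding the token, then forwards it.
N = 5
wants_cs = {2, 4}          # processes that currently want the CS
entered = []

holder = 0                 # p0 holds the token initially
for _ in range(N):         # one full revolution of the ring
    if holder in wants_cs:
        entered.append(holder)    # enter CS, exit, then pass the token on
        wants_cs.discard(holder)
    holder = (holder + 1) % N     # forward token to neighbor p((i+1) mod N)

print(entered)  # [2, 4]: ring order, not request order (no FIFO ordering)
```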
11. Token Ring Illustration
[Diagram: processes P1, P2, P3, P4, ..., Pn arranged in a ring, marking the previous, current, and next holders of the token.]
12. Timestamp Approach: Ricart-Agrawala
- Processes requiring entry to the CS multicast a request, and can enter only when all other processes have replied to the message.
- Messages requesting entry are of the form <T, pi>, where T is the sender's timestamp (from a Lamport clock) and pi is the sender's identity.
- To enter the CS:
  - Set state to WANTED.
  - Multicast a request to all processes (including the local timestamp).
  - Wait until all processes reply.
  - Change state to HELD and enter the CS.
- On receipt of a request <Ti, pi> at pj:
  - if (state = HELD) or (state = WANTED and (Tj, pj) < (Ti, pi)), enqueue the request;
  - else reply to pi.
- On exiting the CS:
  - Change state to RELEASED and reply to any queued requests.
13. Ricart and Agrawala's Algorithm

On initialization
  state := RELEASED;
To enter the critical section
  state := WANTED;
  Multicast request to all processes;   // request processing deferred here
  T := request's timestamp;
  Wait until (number of replies received = (N - 1));
  state := HELD;
On receipt of a request <Ti, pi> at pj (i != j)
  if (state = HELD or (state = WANTED and (T, pj) < (Ti, pi)))
  then queue request from pi without replying;
  else reply immediately to pi;
  end if
To exit the critical section
  state := RELEASED;
  reply to any queued requests;
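The heart of the algorithm is the receive rule, which either defers or grants a reply by comparing (timestamp, pid) pairs lexicographically. A sketch of just that rule (the function and argument names are ours):

```python
# Sketch of the Ricart-Agrawala receive rule at pj: defer the reply if pj
# holds the CS, or wants it with a smaller (timestamp, pid) pair; the pid
# breaks timestamp ties, so priorities are totally ordered.
RELEASED, WANTED, HELD = "RELEASED", "WANTED", "HELD"

def on_request(state, my_ts, my_pid, req_ts, req_pid, queue, reply):
    if state == HELD or (state == WANTED and (my_ts, my_pid) < (req_ts, req_pid)):
        queue.append(req_pid)    # defer: queue the request without replying
    else:
        reply(req_pid)           # reply immediately

replies, queued = [], []
# pj (pid 2) wants the CS with timestamp 5; pi's request <7, p1> arrives:
on_request(WANTED, 5, 2, 7, 1, queued, replies.append)
assert queued == [1]             # deferred: pj's own request has priority
# a request <3, p1> with a smaller timestamp arrives, so pj replies at once:
on_request(WANTED, 5, 2, 3, 1, queued, replies.append)
assert replies == [1]
```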
14. Multicast Synchronization
15. Timestamp Approach: Ricart-Agrawala
- Features
  - Safety, liveness, and (causal) ordering are guaranteed.
  - It takes 2(N-1) messages per entry operation (N-1 multicast requests + N-1 replies), or N messages if the underlying network supports multicast, and up to N-1 messages per exit operation in the worst case.
  - Client delay: one round-trip time.
  - Synchronization delay: one message transmission time.
16. Maekawa's Algorithm
- Multicasts messages to a (voting) subset of nodes.
- Each process pi is associated with a voting set Vi.
  - Each process belongs to its own voting set.
  - Each voting set is of size K.
  - Each process belongs to M other voting sets.
  - The intersection of any two voting sets is non-empty.
- To access a resource, pi requests permission from all other processes within its own voting set Vi.
- Guarantees safety, but not liveness (may deadlock).
- Maekawa showed that K = M ≈ √N works best.
  - One way of achieving this is to put the nodes in a √N-by-√N matrix and take the union of the row and column containing pi as its voting set.
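The grid construction can be written down directly. A sketch for N a perfect square (the helper name `voting_sets` is ours):

```python
# Sketch of the sqrt(N)-by-sqrt(N) grid construction of Maekawa voting sets:
# Vi is the union of pi's row and column, so any two voting sets share at
# least one process (a row of one crosses a column of the other).
import math

def voting_sets(N):
    k = math.isqrt(N)                    # assumes N is a perfect square
    sets = []
    for i in range(N):
        r, c = divmod(i, k)
        row = {r * k + j for j in range(k)}
        col = {j * k + c for j in range(k)}
        sets.append(row | col)
    return sets

V = voting_sets(9)
assert all(i in V[i] for i in range(9))                       # pi is in Vi
assert all(V[i] & V[j] for i in range(9) for j in range(9))   # sets intersect
print(sorted(V[4]))  # [1, 3, 4, 5, 7]: row {3,4,5} union column {1,4,7}
```

Each set has size 2√N - 1 (about 2√N), which is where the message counts on the analysis slide come from.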
17. Example of Deadlock
[Diagram: three processes p1, p2, p3 whose requests form a cycle, each waiting on a vote already granted to another, so no process can enter the CS.]
18. Maekawa's Algorithm, Part 1

On initialization
  state := RELEASED;
  voted := FALSE;
For pi to enter the critical section
  state := WANTED;
  Multicast request to all processes in Vi - {pi};
  Wait until (number of replies received = (K - 1));
  state := HELD;
On receipt of a request from pi at pj (i != j)
  if (state = HELD or voted = TRUE)
  then queue request from pi without replying;
  else send reply to pi;
       voted := TRUE;
  end if

Continues on the next slide
19. Maekawa's Algorithm, Part 2

For pi to exit the critical section
  state := RELEASED;
  Multicast release to all processes in Vi - {pi};
On receipt of a release from pi at pj (i != j)
  if (queue of requests is non-empty)
  then remove head of queue (a request from pk, say);
       send reply to pk;
       voted := TRUE;
  else voted := FALSE;
  end if
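Each voter grants at most one vote at a time and passes a released vote to the oldest queued requester. A sketch of that state machine (class and method names are ours; for brevity it tracks only `voted` and the queue, since a process holding the CS has necessarily received its own vote):

```python
# Sketch of the per-voter logic from the two slides above: one outstanding
# vote at a time; a release passes the vote to the oldest queued request.
from collections import deque

class Voter:
    def __init__(self):
        self.voted = False
        self.queue = deque()

    def on_request(self, pid):
        """Return pid if a reply (vote) is sent now, else None (queued)."""
        if self.voted:
            self.queue.append(pid)   # queue without replying
            return None
        self.voted = True
        return pid

    def on_release(self):
        """Return the pid voted for next, or None if the queue is empty."""
        if self.queue:
            return self.queue.popleft()   # voted stays TRUE
        self.voted = False
        return None

v = Voter()
assert v.on_request("p1") == "p1"   # vote granted immediately
assert v.on_request("p2") is None   # queued: this voter has already voted
assert v.on_release() == "p2"       # vote passes to the oldest queued request
assert v.on_release() is None       # queue empty: voted becomes FALSE
```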
20. Maekawa's Algorithm: Analysis
- 2√N messages per entry, √N messages per exit.
- Better than Ricart and Agrawala's (2(N-1) and N-1 messages).
- Client delay: one round-trip time.
- Synchronization delay: one round-trip time.
21. ISIS Algorithm for Total Ordering
[Diagram: a sender multicasts a message to P1, P2, and P3 (1: Message); each recipient replies with a proposed sequence number (2: Proposed Seq); the sender re-multicasts the maximum as the agreed sequence number (3: Agreed Seq).]
22. ISIS Algorithm for Total Ordering
- The multicast sender multicasts the message to everyone.
- Recipients add the received message to a special queue called the priority queue, tag the message undeliverable, and reply to the sender with a priority that is basically a sequence number (that is, 1 more than the latest sequence number heard so far), suffixed with the recipient's process ID. The priority queue is always sorted by priority.
- The sender collects all responses from the recipients, calculates their maximum, and re-multicasts the message with this as its new and correct priority.
- On receipt of this information, recipients mark the message as deliverable, reorder the priority queue, and deliver the set of lowest-priority messages that are marked as deliverable.
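The recipient side maps naturally onto a binary heap keyed by (sequence number, process ID). A sketch (class and method names are ours; the sender-side max computation is assumed to happen elsewhere):

```python
# Sketch of an ISIS recipient: propose 1 + highest sequence number heard,
# tie-broken by process ID; on the agreed priority, reorder the queue and
# deliver deliverable messages from the head.
import heapq

class IsisRecipient:
    def __init__(self, pid):
        self.pid = pid
        self.max_seq = 0
        self.heap = []                    # entries: [priority, msg, deliverable]

    def on_multicast(self, msg):
        """Queue msg as undeliverable and return the proposed priority."""
        self.max_seq += 1
        proposed = (self.max_seq, self.pid)
        heapq.heappush(self.heap, [proposed, msg, False])
        return proposed

    def on_agreed(self, msg, final):
        """Apply the agreed (max) priority; deliver from the head of the queue."""
        self.max_seq = max(self.max_seq, final[0])
        for entry in self.heap:
            if entry[1] == msg:
                entry[0], entry[2] = final, True
        heapq.heapify(self.heap)          # reorder under the new priority
        delivered = []
        while self.heap and self.heap[0][2]:   # head deliverable?
            delivered.append(heapq.heappop(self.heap)[1])
        return delivered

r = IsisRecipient(pid=1)
r.on_multicast("a")                       # proposed (1, 1)
r.on_multicast("b")                       # proposed (2, 1)
# "b" gets agreed priority (5, 2), but undeliverable "a" blocks the head:
assert r.on_agreed("b", (5, 2)) == []
# once "a" is agreed at (2, 3), both deliver, in priority order:
assert r.on_agreed("a", (2, 3)) == ["a", "b"]
```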
23. Proof of Total Order
- Suppose message m1 is at the head of the priority queue and has been marked deliverable, and let m2 be another message on the same queue. Then:
  - finalpriority(m2) ≥ proposedpriority(m2) > finalpriority(m1)
- So m2's final priority exceeds m1's at every process, and m1 is delivered before m2 everywhere.
24. Summary
- Mutual exclusion
  - Semaphores review
  - Token ring
  - Ricart and Agrawala's timestamp algorithm
  - Maekawa's algorithm
25. Optional Material: Raymond's Algorithm
26. Raymond's Token-based Approach
27. Raymond's Token-based Approach
- Processes are organized as an unrooted n-ary tree.
- Each process has a variable HOLDER, which indicates the location of the privilege relative to the node itself.
- Each process keeps a REQUEST_Q that holds the names of neighbors (or itself) that have sent a REQUEST but have not yet been sent the privilege in reply.
- To enter the CS:
  - Enqueue self.
  - If a request has not been sent to HOLDER, send a request.
- Upon receipt of a REQUEST message from neighbor x:
  - If x is not in the queue, enqueue x.
  - If self is the HOLDER and still in the CS, do nothing further.
  - If self is the HOLDER but has exited the CS, dequeue the oldest requester from REQUEST_Q, set it to be the new HOLDER, and send PRIVILEGE to the new HOLDER.
28. Raymond's Token-based Approach
- Upon receipt of a PRIVILEGE message:
  - Dequeue REQUEST_Q and set the oldest requester to be HOLDER.
  - If HOLDER = self, hold the PRIVILEGE and enter the CS.
  - If HOLDER = some other process, send PRIVILEGE to HOLDER. In addition, if the (remaining) REQUEST_Q is non-empty, send REQUEST to HOLDER as well.
- On exiting the CS:
  - If REQUEST_Q is empty, continue to hold PRIVILEGE.
  - If REQUEST_Q is non-empty, dequeue REQUEST_Q, set the oldest requester to HOLDER, and send PRIVILEGE to HOLDER. In addition, if the (remaining) REQUEST_Q is non-empty, send REQUEST to HOLDER as well.
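A node's bookkeeping is a HOLDER pointer, a request queue, and an "already asked" flag. A single-node sketch of the request side (class and names are ours; message sending is modeled as returned (message, destination) tuples, and the PRIVILEGE-handling side is omitted for brevity):

```python
# Sketch of Raymond request bookkeeping at one node: HOLDER points toward
# the privilege; REQUEST_Q holds neighbors (or self) awaiting it; at most
# one REQUEST is outstanding toward HOLDER at a time.
from collections import deque

class RaymondNode:
    def __init__(self, name, holder):
        self.name = name
        self.holder = holder          # neighbor in the direction of the token
        self.request_q = deque()
        self.asked = False            # REQUEST already sent to holder?

    def want_cs(self):
        """Local process wants the CS: enqueue self, maybe send REQUEST."""
        self.request_q.append(self.name)
        return self._maybe_request()

    def on_request(self, x):
        """REQUEST received from neighbor x."""
        if x not in self.request_q:
            self.request_q.append(x)
        return self._maybe_request()

    def _maybe_request(self):
        if self.holder != self.name and not self.asked:
            self.asked = True
            return ("REQUEST", self.holder)
        return None                   # holder is self, or already asked

n = RaymondNode("p2", holder="p1")    # the privilege lies toward p1
assert n.want_cs() == ("REQUEST", "p1")
assert n.on_request("p3") is None     # a REQUEST is already outstanding
assert list(n.request_q) == ["p2", "p3"]
```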
29. Example: Raymond's Token-based Approach
[Diagram with numbered steps:]
2. Request by P8
3. Request by P2
4. Request by P6
5. P1 passes T to P3
6. P3 passes T to P8
7. P8 passes T to P3, then to P6
8. Request by P7
9. P6 passes T to P3, then to P1