Title: Chapter 6: Process Synchronization
1Chapter 6 Process Synchronization
2Module 6 Process Synchronization
- Background
- The Critical-Section Problem
- Synchronization Hardware
- Semaphores
- Classical Problems of Synchronization
- Monitors
- Java Synchronization
- Solaris Synchronization
- Windows XP Synchronization
- Linux Synchronization
- Pthreads Synchronization
- Atomic Transactions
- Log-based Recovery
- Checkpoints
- Concurrent Transactions
- Serializability
- Locking Protocols
3Background
- Concurrent access to shared data may result in data inconsistency
- Maintaining data consistency requires mechanisms to ensure the orderly execution of cooperating processes
- Concurrency is probably THE most important and THE most difficult concept to grasp in the course
4What is Concurrency?
- Programs do not execute in a linear fashion without interruption, as we often assume
- Many processes are running at the same time
- The processes compete for the same resources
- Context switches between processes can occur at any time
- Must find a way to coordinate all processes' access to and use of shareable resources (e.g., I/O, memory, files)
5Concurrency Problems
- These problems occur because today most computer systems are Concurrent Systems
- They contain a great many processes
- All processes are potentially executing concurrently (in parallel)
- Concurrent execution DOES NOT mean
- Processes are running at exactly the same time (although they could be, with multiprocessors)
- Concurrent execution DOES mean
- Processes are running during the same time period
- AND trying to access the same shared resources
6Concurrency Problems (cont.)
- When processes are running concurrently
- They often can be interrelated
- Interrelationships and dependencies are complex
- Data/resource sharing
- Making decisions
- Data/resource dependencies
- Context switches, waits, and job mix are all unpredictable
- Problem for both preemptive and non-preemptive scheduling
- Must ensure
- Synchronization, communication, accuracy, efficiency
- Will look at examples, such as
- Producer-consumer problem
- Critical section problem
- Historical attempts at solutions
7Producer-Consumer Problem (a)
- Quite common: e.g., printers, compilers, other I/O, etc.
- Introduced in Chapter 3
- A producer process produces data that is to be consumed by a consumer process
- To allow concurrency, create a pool of 'n' buffers, which are filled and emptied by the producer and consumer processes
- Producer can fill one buffer while consumer empties another
- In order to make that possible, both producer and consumer must know which buffers are full and which are empty
8Producer-Consumer Problem (b)
- Two versions of the problem
- Unbounded buffer version
- Unlimited number of buffers (does not happen often in the real world)
- Producer can always produce new items
- Consumer may have to wait for a new item
- Bounded buffer version
- Fixed number of buffers
- Most common, since we rarely have unlimited resources
- Also more challenging for the programmer
- Producer may have to wait if all buffers are full
- Consumer may have to wait if all buffers are empty
- Will look at several potential solutions
9Bounded-Buffer Solution 1
// SHARED DATA
static final int BUFFER_SIZE = 10;
Object buffer[BUFFER_SIZE];
int in = 0;    // Index of where to insert next item
int out = 0;   // Index of next item to consume

// PRODUCER
void producer() {
  item nextProduced;
  do {
    nextProduced = produceItem();
    while (((in + 1) % BUFFER_SIZE) == out)
      ;                                // Wait if buffer full
    buffer[in] = nextProduced;
    in = (in + 1) % BUFFER_SIZE;
  } while (TRUE);
}

// CONSUMER
void consumer() {
  item nextConsumed;
  do {
    while (in == out)
      ;                                // Wait if buffer empty
    nextConsumed = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    consumeItem(nextConsumed);
  } while (TRUE);
}
10Bounded Buffer Solution 1 (b)
- Solution 1 is correct, BUT it has two problems
- Can only fill a maximum of n-1 buffers
- Load and store must be ATOMIC
- ATOMIC means completed as a unit
- All or nothing
- Work either is completed or it is as if it were never started
- Can fix the first problem on the list above (so all buffers are used), but that introduces a concurrency problem
- Add a variable count (number of full buffers)
- Initialized to 0
- Incremented whenever an item is added to the buffer
- Decremented whenever an item is removed from the buffer
11Bounded-Buffer Solution 2
// SHARED DATA
static final int BUFFER_SIZE = 10;
Object buffer[BUFFER_SIZE];
int in = 0;     // Index of where to insert next item
int out = 0;    // Index of next item to consume
int count = 0;  // <-- ADDED -- Number of full buffers

// PRODUCER
void producer() {
  item nextProduced;
  do {
    nextProduced = produceItem();
    while (count == BUFFER_SIZE)
      ;                                // <-- CHANGED: wait if full
    buffer[in] = nextProduced;
    in = (in + 1) % BUFFER_SIZE;
    ++count;                           // <-- ADDED
  } while (TRUE);
}

// CONSUMER
void consumer() {
  item nextConsumed;
  do {
    while (count == 0)
      ;                                // <-- CHANGED: wait if empty
    nextConsumed = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    --count;                           // <-- ADDED
    consumeItem(nextConsumed);
  } while (TRUE);
}
12Potential Concurrency Problem
- THE PROBLEM IS
- In order for the code to work correctly, the statements
- ++count
- --count
- must be performed atomically
- An atomic operation is an operation that completes in its entirety without interruption; i.e., it can never be interrupted (see the sketch below)
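- As a hedged illustration (not from the original slides): in Java, the java.util.concurrent.atomic package provides counters whose increment and decrement are guaranteed to be atomic, which is exactly the property ++count and --count lack here.

import java.util.concurrent.atomic.AtomicInteger;

// Minimal sketch of an atomically updatable counter (class name is illustrative).
public class AtomicCounter {
    private final AtomicInteger count = new AtomicInteger(0);

    public void itemAdded()   { count.incrementAndGet(); }  // atomic ++count
    public void itemRemoved() { count.decrementAndGet(); }  // atomic --count
    public int  current()     { return count.get(); }
}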
13Potential Concurrency Problem -2-
- The statement ++count is likely implemented as multiple lines of machine code, such as the following
- register1 = counter             MOV counter, R1
- register1 = register1 + 1       INC R1
- counter = register1             MOV R1, counter
- The statement --count might be implemented as
- register2 = counter             MOV counter, R2
- register2 = register2 - 1       DEC R2
- counter = register2             MOV R2, counter
- In this particular example it makes no difference to the atomicity whether pre- or post-decrement is used
- The atomicity has to do with changing the value of the variable itself
14Potential Concurrency Problem -3-
- If both the producer and consumer attempt to update the buffer concurrently, the assembly language statements may get interleaved
- That interleaving is NOT deterministic
- It depends upon how the producer and consumer processes are scheduled by the scheduler, i.e., where context switches occur
- REMEMBER that whenever a context switch occurs between producer and consumer, the process state (including the contents of registers) is saved/restored
15Potential Concurrency Problem -4-
- Assume counter is initially 5. The following is one way in which the statements could be interleaved because of context switches (task switches occur between the interleaved producer and consumer steps)
- producer: register1 = counter         (register1 = 5)
- producer: register1 = register1 + 1   (register1 = 6)
- consumer: register2 = counter         (register2 = 5)
- consumer: register2 = register2 - 1   (register2 = 4)
- producer: counter = register1         (counter = 6)
- consumer: counter = register2         (counter = 4)
- The value of count may be either 4 or 6, BUT the correct value should be 5
- The problem is even WORSE on multiprocessors, since a single ASM instruction is actually several machine cycles and operations, and there are pipelines, reordering, speculative execution, etc. to consider
- And note the problem also occurs on compares (i.e., conditionals)
16Race Condition
- Race condition: the situation where several processes access and manipulate shared data concurrently. The final value of the shared data depends upon which process just happens to finish last
- Non-deterministic (given a set of conditions, one cannot accurately predict the outcome every time)
- To prevent race conditions, concurrent processes must be synchronized (a small demo sketch follows below)
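- A small, hypothetical Java demo of the race just described (class and thread names are illustrative, not from the slides): two threads each increment a plain shared int, and the final total is usually less than 200,000 because ++ is a read-modify-write sequence, not an atomic step.

// RaceDemo: run it a few times -- the printed result varies from run to run.
public class RaceDemo {
    static int counter = 0;   // shared, unsynchronized

    public static void main(String[] args) throws InterruptedException {
        Runnable work = () -> {
            for (int i = 0; i < 100_000; i++) counter++;  // read-modify-write race
        };
        Thread t1 = new Thread(work), t2 = new Thread(work);
        t1.start(); t2.start();
        t1.join();  t2.join();
        System.out.println("counter = " + counter);       // non-deterministic result
    }
}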
17The Critical-Section Problem
- n processes all competing to use some shared data
- Each process has a code segment, called the critical section, in which the shared data is accessed
- E.g., read or write a database record
- Problem: ensure that when one process is executing in its critical section, no other process is allowed to execute in its critical section for that piece of shared data
- Critical section is a concept, not a name or type of method
- A piece of code may have one, many, or no critical sections
- Critical sections in different pieces of code / objects / locations may be different (i.e., contain different code / instructions), even if they access the same shared data
- Code can have multiple critical sections, for different shared data
18The Critical-Section Problem (cont.)
- General structure for handling a critical section
- do {
-   beginning section
-   entry section
-   critical section
-   exit section
-   remainder section
- } while (1)
- The above structure is only a general template
- The beginning or remainder sections may be missing, there may be several critical sections together, etc.
- Potential problems for reads as well as writes, so must protect both kinds of access
- Especially consider compares and compound conditional checks (in assembler instructions)
19Solution to Critical-Section Problem
- Any solution to the Critical Section Problem must satisfy three criteria
- Mutual Exclusion: if process Pi is executing in its critical section, then no other processes can be executing in their critical sections
- Only one process can be executing in its critical section at a time
- Progress: if no process is executing in its critical section and there exist some processes that wish to enter their critical sections, then the selection of the process that will enter its critical section next cannot be postponed indefinitely
- If no other processes are waiting, a waiting process should be allowed to enter its critical section
- Bounded Waiting: a bound must exist on the number of times that other processes are allowed to enter their critical sections after a process has made a request to enter its critical section and before that request is granted
- A process can't be made to wait forever and starve, while other processes that arrived later are allowed to proceed ahead of it
20Initial Attempts to Solve Problem
- For simplicity, consider only two processes, P0 and P1
- General structure of process Pi (other process Pj)
- do {
-   beginning section
-   entry section
-   critical section
-   exit section
-   remainder section
- } while (1)
- Processes may share some common variables to synchronize their actions (not global, but shared)
21Algorithm 1
- Shared variables
-   int turn;   // Can be initialized to either 0 or 1
-               // When turn == i, Pi can enter its critical section
- Process Pi
- do {
-   // beginning section
-   while (turn != i) ;   // Entry section: wait until it's my turn
-   critical section
-   turn = j;             // Exit section: now it's the other process's turn
-   // remainder section
- } while (1)
22Algorithm 1 Two Processes
- Shared variables
-   int turn;   // Can be initialized to either 0 or 1
-               // When turn == i, Pi can enter its critical section
- Process Pi                          Process Pj
- do {                                do {
-   // beginning section                // beginning section
-   while (turn != i) ;  // Entry       while (turn != j) ;  // Entry
-   critical section                    critical section
-   turn = j;            // Exit        turn = i;            // Exit
-   // remainder section                // remainder section
- } while (1)                         } while (1)
- PROBLEM: the processes must alternate, so
- Satisfies mutual exclusion, but not progress
23Algorithm 2
- To improve, can record more about the state of each process
- Shared variables
-   boolean iwantin[2] = {false, false};
-   // When iwantin[i] == true, Pi wishes to use its critical section
- Process Pi
- do {
-   // beginning section
-   iwantin[i] = true;            // Entry: I want to enter CS
-   while (iwantin[j] == true) ;  // Wait for other process
-   critical section
-   iwantin[i] = false;           // Exit: I'm done in CS
-   // remainder section
- } while (1)
24Algorithm 2
- To improve, can record more about the state of each process
- Shared variables
-   boolean iwantin[2] = {false, false};
-   // When iwantin[i] == true, Pi wishes to use its critical section
- Process Pi                               Process Pj
- do {                                     do {
-   // beginning section                     // beginning section
-   iwantin[i] = true;           // Entry    iwantin[j] = true;           // Entry
-   while (iwantin[j] == true) ;             while (iwantin[i] == true) ;
-   critical section                         critical section
-   iwantin[i] = false;          // Exit     iwantin[j] = false;          // Exit
-   // remainder section                     // remainder section
- } while (1)                              } while (1)
- PROBLEM: satisfies mutual exclusion, but not progress, for a different reason
- Both flags can be TRUE at the same time: DEADLOCK
25Algorithm 2b
- NOTE: switching the order of the statements does not help
- Shared variables
-   boolean iwantin[2] = {false, false};
-   // When iwantin[i] == true, Pi wishes to use its critical section
- Process Pi
- do {
-   // beginning section
-   while (iwantin[j] == true) ;  // Entry: wait if other process in CS
-   iwantin[i] = true;            // I want to enter CS
-   critical section
-   iwantin[i] = false;           // Exit: I'm done in CS
-   // remainder section
- } while (1)
26Algorithm 2b
- NOTE: switching the order of the statements does not help
- Shared variables
-   boolean iwantin[2] = {false, false};
-   // When iwantin[i] == true, Pi wishes to use its critical section
- Process Pi                                 Process Pj
- do {                                       do {
-   // beginning section                       // beginning section
-   while (iwantin[j] == true) ;  // Entry     while (iwantin[i] == true) ;  // Entry
-   iwantin[i] = true;                         iwantin[j] = true;
-   critical section                           critical section
-   iwantin[i] = false;           // Exit      iwantin[j] = false;           // Exit
-   // remainder section                       // remainder section
- } while (1)                                } while (1)
- PROBLEM: now we don't deadlock, but BOTH processes can be in the critical section at the same time
27Algorithm 3
- Combine the shared variables of algorithms 1 and 2
- Have both the iwantin array and the turn variable
- Initialize the iwantin array to false and turn to 0
- Process Pi
- do {
-   // beginning section
-   iwantin[i] = true;         // Entry: I want in CS
-   turn = j;                  // It's the other process's turn
-   while (iwantin[j] == true  // While the other process wants in
-          && turn == j) ;     //   and it is also the other process's turn
-   critical section
-   iwantin[i] = false;        // Exit: I'm done in CS
-   // remainder section
- } while (1)
28Algorithm 3
- Combine the shared variables of algorithms 1 and 2
- Have both the flag (iwantin) array and the turn variable
- Initialize the flags to false and turn to 0
- Process Pi                                   Process Pj
- do {                                         do {
-   // beginning section                         // beginning section
-   iwantin[i] = true;       // Entry            iwantin[j] = true;       // Entry
-   turn = j;                                    turn = i;
-   while (iwantin[j] == true                    while (iwantin[i] == true
-          && turn == j) ;                              && turn == i) ;
-   critical section                             critical section
-   iwantin[i] = false;      // Exit             iwantin[j] = false;      // Exit
-   // remainder section                         // remainder section
- } while (1)                                  } while (1)
- Meets all three requirements: solves the critical-section problem, but only for two processes (a Java sketch of this algorithm follows below)
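- A minimal Java sketch of Algorithm 3 (Peterson's algorithm) for two threads, assuming i is 0 or 1; the class and method names are assumptions for illustration. volatile supplies the visibility and ordering guarantees that the slide's pseudocode silently takes for granted.

// Sketch of Peterson's algorithm for two threads (i = 0 or 1).
class Peterson {
    private volatile boolean want0 = false, want1 = false;  // iwantin[0], iwantin[1]
    private volatile int turn = 0;

    void enter(int i) {
        int j = 1 - i;
        if (i == 0) want0 = true; else want1 = true;    // Entry: I want in CS
        turn = j;                                       // give the other thread priority
        while ((j == 0 ? want0 : want1) && turn == j)   // wait while the other wants in
            ;                                           //   and it is its turn
    }

    void exit(int i) {
        if (i == 0) want0 = false; else want1 = false;  // Exit: I'm done in CS
    }
}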
29Bakery Algorithm
Solves the Critical Section Problem for n processes
- When a process wants to enter its critical section, it asks for and receives a number
- The holder of the smallest number enters the critical section
- Like taking a ticket while waiting to be served (at a bakery)
- The numbering scheme always generates numbers in increasing order of enumeration, i.e., 1, 2, 3, 3, 3, 3, 4, 5, ..., although it can sometimes generate duplicate numbers
- If processes Pi and Pj receive the same number, the process with the smallest process ID is served first
- Process IDs are unique and in an enumerated order
30Bakery Algorithm
- Notation -- lexicographical order on (ticket #, process ID), i.e., must check the ticket before checking the process ID
- (a,b) < (c,d) if a < c, or if a == c and b < d; or, in code: if ((a < c) || (a == c && b < d))
- max(a0, ..., an-1) is a number k such that k >= ai for i = 0, ..., n-1
- Shared data
-   boolean choosing[n];
-   int number[n];
- Data structures are initialized to false and 0, respectively
31Bakery Algorithm
- do {
-   // beginning section
-   choosing[i] = true;
-   number[i] = max(number[0], number[1], ..., number[n-1]) + 1;
-   choosing[i] = false;
-   for (j = 0; j < n; j++) {
-     while (choosing[j]) ;
-     while ((number[j] != 0) && ((number[j], j) < (number[i], i))) ;
-   }
-   critical section
-   number[i] = 0;
-   // remainder section
- } while (1)
This can be rather complex and slow to use in actual practice, since we have to iterate through the entire list of processes several times
32Hardware Synchronization Solutions
- Software solutions to the critical section problem are often complex
- If it were possible to disable interrupts (disable the task dispatcher) while checking or modifying a variable (to prevent context switches), concurrency would be much simpler (and this can be done on a uniprocessor)
- SOLUTION: hardware provides a special instruction that will allow testing and modifying a word atomically (in one cycle)
- Will call this special instruction TestAndSet (see next page)
- Could also solve with an instruction that would Swap the contents of two words atomically, something like this: swap(a,b)
-   MOV A,Rx    These three lines
-   MOV B,A     must all execute
-   MOV Rx,B    atomically (as a unit)
- Will show how this works later
- Use to implement LOCKS
33Synchronization Hardware
- Hardware implements a TestAndSet instruction that provides a function like the following and executes atomically
- boolean TestAndSet(boolean target) {
-   boolean oldValue = target;
-   target = true;
-   return oldValue;
- } // Preceding method must be atomic
- The function checks the state of a boolean variable
- Gets the old state (saves it away temporarily)
- Unconditionally sets the current state to TRUE
- And returns the old state to the caller
34Mutual Exclusion with Test-and-Set
- Shared data: boolean lock = false;
- Process Pi
- do {
-   // beginning section
-   while (TestAndSet(lock)) ;   // get the lock
-   critical section
-   lock = false;                // drop the lock
-   // remainder section
- }
- This does the following
- Tests the lock to see if it is held (true) and tries to set it to true (get the lock)
- If the lock was already held (true), then we have not changed its value
- If the lock was not held (false), then we will have set it to true and can proceed into the CS, since the returned value will have been false
- (A Java sketch using an atomic test-and-set follows below)
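- As a hedged Java sketch (an assumption, not from the slides): AtomicBoolean.getAndSet(true) behaves like the TestAndSet instruction above, so a simple spinlock can be built from it. Like the pseudocode, it gives mutual exclusion but not bounded waiting.

import java.util.concurrent.atomic.AtomicBoolean;

// SpinLock: getAndSet atomically returns the old value and sets the flag.
class SpinLock {
    private final AtomicBoolean lock = new AtomicBoolean(false);

    void acquire() {
        while (lock.getAndSet(true)) { /* spin until the lock was free */ }
    }

    void release() {
        lock.set(false);   // drop the lock
    }
}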
35Test-and-Set With Two Processes
- Shared data: boolean lock = false;
- Process Pi                            Process Pj
- do {                                  do {
-   // beginning section                  // beginning section
-   while (TestAndSet(lock)) ;            while (TestAndSet(lock)) ;
-   critical section                      critical section
-   lock = false;                         lock = false;
-   // remainder section                  // remainder section
- }                                     }
- This does satisfy the mutual exclusion requirement -- HOWEVER, it does not satisfy the bounded waiting requirement!
36Synchronization Hardware - Swap
- Could swap() be used in the preceding example to provide mutual exclusion and also satisfy the bounded wait requirement?
- swap() as a function would have to execute atomically
- void Swap(boolean a, boolean b) {
-   boolean temp = a;
-   a = b;
-   b = temp;
- } // Method must execute atomically
37Mutual Exclusion with Swap
- Shared data (initialized to false): boolean lock;
- Process Pi
- do {
-   // beginning section
-   key = true;
-   while (key == true)    // get the lock
-     Swap(lock, key);
-   critical section
-   lock = false;          // drop the lock
-   // remainder section
- } while (1)
- This does the following
- Gets a key to put in the lock: sets its value to true, so it can get the lock and proceed through the while statement the first time
- Swaps the values of lock and key: lock is now true, key is the former value of lock
- Executes the while test: if the former value of lock was false, no other process was in the CS, so this process can proceed, with lock now set to true
38Mutual Exclusion with Swap
- Shared data (initialized to false): boolean lock;
- Process Pi                           Process Pj
- do {                                 do {
-   // beginning section                 // beginning section
-   key = true;                          key = true;
-   while (key == true)                  while (key == true)
-     Swap(lock, key);                     Swap(lock, key);
-   critical section                     critical section
-   lock = false;                        lock = false;
-   // remainder section                 // remainder section
- } while (1)                          } while (1)
- Like TestAndSet, this also satisfies the mutual exclusion requirement, but it also fails to satisfy bounded waiting
39Test and Set Solution With Bounded Wait
// Local data                        // Shared data
int i, j;        // indices          boolean waiting[n];  // n processes
boolean key;     // proc's key       boolean lock;        // 1 shared lock
-------------------------
do {
  // BEGINNING SECTION
  waiting[i] = true;                         // i = process ID/index
  key = true;                                // ENTRY: PROVIDES SYNCHRONIZATION
                                             //   I am waiting and have a key
  while (waiting[i] == true && key == true)  // While waiting and have key
    key = TestAndSet(lock);                  //   check if lock held
  waiting[i] = false;                        // Not waiting any more

  CRITICAL SECTION

  j = (i + 1) % n;                           // EXIT: PROVIDES BOUNDED WAIT
  while ((j != i) && (waiting[j] == false))  // Go through list until we find a
    j = (j + 1) % n;                         //   waiter or get all the way around
  if (j == i)                                // If there were no waiters
    lock = false;                            //   drop the lock
  else                                       // Otherwise let the first waiter
    waiting[j] = false;                      //   found go into its critical sect.

  // REMAINDER SECTION
} while (1);
/////////////// THIS IS MESSY AND TIME CONSUMING /////////////////
40Semaphores Concept
- The previous solutions are difficult to generalize to complex problems
- They also require busy waiting
- To overcome this, introduce the concept of a SEMAPHORE
- Integer variable / counter (counts down)
- Must be initialized
- Initialize to 1 if exclusive lock
- Or initialize to the number of sharers if shared lock
41Semaphore How to Use
- Semaphore S: integer variable
- Can only be accessed via two atomic operations
- wait(S) {       // Enter CS (get lock)
-   while (S <= 0)
-     ;           // no-op
-   --S;
- }
- signal(S) {     // Exit CS (drop lock)
-   ++S;
- }
- Hardware support is required so these will be atomic
- HW / OS may also limit the number, type, etc.
42Critical Section of 'n' Processes
- Shared data
-   semaphore mutex;   // initially mutex = 1
-                      // This is an exclusive lock
-                      // Provides mutual exclusion
-                      // Only 1 process allowed in CS at a time
- Process Pi
- do {
-   // beginning section
-   wait(mutex);       // Wait if necessary (ENTRY)
-   critical section
-   signal(mutex);     // Signal that we are done (EXIT)
-   // remainder section
- } while (1)
- Can be used to synchronize 'n' processes in the critical section problem
- A mutex is a counting semaphore, initialized to 1, used to provide mutual exclusion to shared data in a critical section
- Some systems support mutexes as built-in objects / data types (a Java sketch using java.util.concurrent.Semaphore follows below)
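- A minimal Java sketch of the same pattern, using java.util.concurrent.Semaphore as the mutex (the class and field names are illustrative assumptions):

import java.util.concurrent.Semaphore;

// A counting semaphore initialized to 1, used exactly like wait(mutex)/signal(mutex).
class SharedCounter {
    private final Semaphore mutex = new Semaphore(1);  // exclusive lock
    private int value = 0;                             // shared data

    void increment() throws InterruptedException {
        mutex.acquire();        // wait(mutex)  -- ENTRY
        try {
            value++;            // critical section
        } finally {
            mutex.release();    // signal(mutex) -- EXIT
        }
    }
}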
43Shortcomings of Simple Semaphores
- A simple implementation of semaphores results in a BUSY WAIT
- Spins around the WHILE, using up CPU cycles (often called a spinlock)
- wait(S) {        // Ask to enter CS
-   while (S <= 0)
-     ;            // Processes wait here
-   --S;           // Here get the lock
- }
- Can be useful in multiprocessor environments if waits are short, since it does not require a context switch, but can often hurt performance (especially in uniprocessor systems)
- SOLUTION: modify the definition of the semaphore and of the wait() and signal() operations
44Revised Semaphore Implementation
- Define a semaphore as a record (or object)
- Object semaphore {
-   int value;
-   queue of processes;   // <-- ADD THIS LINE
- }
- Also, assume two additional operations / functions exist
- block(): suspends the process that invokes it
- Sleeps indefinitely; is removed from the CPU ready queue
- wakeup(P): resumes the execution of a blocked process P
- Wakes up the sleeping process by returning it to the ready queue (or to another place/queue where it can compete for a spot in the ready queue)
45Revised Operation Implementation
- Semaphore operations are now defined as
- wait(S) {
-   S.value--;
-   if (S.value < 0) {
-     S.queue.enqueue(me);    // enqueue this process to S.queue
-     block();                // and put it to sleep (on another Q)
-   }
- }
- signal(S) {
-   S.value++;
-   if (S.value <= 0) {
-     P = S.queue.dequeue();  // dequeue a process P from S.queue
-     wakeup(P);              // and wake it up: put it in the Ready Q
-   }
- }
- It is critical that the entire signal() and wait() operations are executed atomically
- (A Java sketch of a blocking semaphore built this way follows below)
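- As a hedged sketch (an assumption, not from the slides): a blocking counting semaphore can be built on Java's intrinsic monitor, where wait() plays the role of block() and notify() the role of wakeup(P). The JVM's wait set stands in for the explicit queue, and in this variant the value never goes negative.

// SimpleSemaphore: blocking wait/signal without busy waiting.
class SimpleSemaphore {
    private int value;

    SimpleSemaphore(int initial) { value = initial; }

    synchronized void semWait() throws InterruptedException {
        while (value <= 0)      // no permit: block instead of spinning
            wait();             // release the monitor and sleep in the wait set
        value--;
    }

    synchronized void semSignal() {
        value++;
        notify();               // wake one blocked process, if any
    }
}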
46System Semaphore Implementation
- Uniprocessor
- Inhibit interrupts around the wait / signal code
- Multiprocessor
- Special hardware is the best solution
- If no special hardware, use one of the correct critical-section solutions, where the critical section is the wait and signal code
- This will minimize, but not eliminate, the busy wait
47Semaphore as a General Synchronization Tool
Mutex is a semaphore initialized to 1

PROCESS 0          PROCESS 1          PROCESS 2
do {               do {               do {
  wait(mutex);       wait(mutex);       wait(mutex);
  CS                 CS                 CS
  signal(mutex);     signal(mutex);     signal(mutex);
  non-CS             non-CS             non-CS
} while(1);        } while(1);        } while(1);

- P1 makes the 1st request to enter its CS
- mutex-- (mutex now 0)
- CS #1 entered
- P0 and P2, in that order, want to enter their CS
- mutex-- (mutex now -1)
- P0 enqueued on mutex's queue of blocked processes
- mutex-- (mutex now -2)
- P2 enqueued on mutex's queue of blocked processes
- P1 leaves its CS
- mutex++ (mutex now -1)
- P0 dequeued and put on the ready queue
- P0 enters its CS
48Two Types of Semaphores
- So far, we have been talking about counting semaphores
- Counting semaphore: the integer value can range over an unrestricted domain
- Can be used for serialization (mutexes, etc.)
- Quite useful when you need a counter that can be updated atomically
- Can also be used if there is a maximum or minimum number of shared data objects, or sharers of objects
- Binary semaphore: the integer value can range only between 0 and 1
- Can be simpler to implement in both HW and SW
- Can implement a counting semaphore S with two binary semaphores and an integer counter
49Implementing S with Binary Semaphores
- Semaphore S implemented with two binary semaphores and a counter (all internal to the Semaphore object)
- Data structures
-   binary-semaphore S1, S2;
-   int C;   // Counter
- Initialization
-   S1 = 1;
-   S2 = 0;
-   C = initial value of semaphore S
50Implementing S With Binary Semaphores
- wait() operation
-   wait(BinS1);      // Serialize access to counter (get counter lock)
-   C--;              // Decrement semaphore's counter
-   if (C < 0) {      // If another process already active
-     signal(BinS1);  //   Drop, so other processes are not blocked when we go to sleep
-     wait(BinS2);    //   Put current process to sleep (on S2's queue)
-   }
-   signal(BinS1);    // Drop counter lock, either from the first
-                     //   line of this method, or from the first line
-                     //   of the signal() method (when it wakes up a process)
- signal() operation
-   wait(BinS1);      // Serialize access to counter (get counter lock)
-   C++;              // Increment counter
-   if (C <= 0)       // If any sleeping processes
-     signal(BinS2);  //   Wake up one sleeping process
-   else              // Else, if no sleeping processes
-     signal(BinS1);  //   Drop counter lock (if not waking up any process)
51Problems Deadlock and Starvation
- Deadlock may occur if there are multiple semaphores
- Deadlock: two or more processes are waiting indefinitely for an event that can be caused by only one of the waiting processes
- Let S1 and S2 be two semaphores initialized to 1
-   P0              P1           // NOTE: ORDER IS IMPORTANT
-   wait(S1);       wait(S2);    // P1 has its waits in the opposite order from
-   wait(S2);       wait(S1);    //   what is coded in P0 -- can DEADLOCK
-   CS              CS
-   signal(S2);     signal(S1);  // Order here doesn't matter
-   signal(S1);     signal(S2);
- It doesn't help to implement a semaphore's list of waiting processes as a STACK instead of a QUEUE -- AND that can result in starvation
- Starvation: indefinite blocking. A process may never be removed from the semaphore queue in which it is suspended
- Background for the Classical Concurrency Problems
- (A sketch of the usual remedy -- acquiring the semaphores in one global order -- follows below)
52Classical Problems of Synchronization
- Bounded-Buffer Problem
- Have a producer and a consumer process
- Fixed number of buffers
- Consumer must wait if all buffers are empty
- Producer must wait if all buffers are full
- Readers and Writers Problem
- Multiple reader and writer processes access a shared data item
- Readers have read-only access to the item
- Writers have read/write access to the item
- OK for multiple readers to access the same item at the same time
- When a writer has access, no other process may have access
- Several forms
- Dining-Philosophers Problem
- To solve these, use semaphores as mutexes and atomic counters
53Bounded Buffer with Semaphores
SHARED DATA:
  Object item;
  item buffer[numItemsInBuffer];
  semaphore csLock = 1,        // Exclusive lock to access data in buffers
            fullBuffers = 0,   // Atomic counter: number of full buffers
            emptyBuffers = n;  // Atomic counter: number of empty buffers

producer() {
  do {
    produce an item            // Item is available to be put into a buffer
    wait(emptyBuffers);        // Wait for an empty buffer, get one, dec. counter
    wait(csLock);              // Get the exclusive lock for Critical Section
    add item to buffer         // Fill a buffer with data -- CRITICAL SECTION
    signal(csLock);            // Drop exclusive Critical Section lock
    signal(fullBuffers);       // Increment counter of full buffers; wake up waiter
  } while(1);
}

consumer() {
  do {
    wait(fullBuffers);         // Wait for a full buffer, get one, dec. counter
    wait(csLock);              // Get exclusive lock for Critical Section
    remove item from buffer    // Get data, empty a buffer -- CRITICAL SECTION
    signal(csLock);            // Drop exclusive Critical Section lock
    signal(emptyBuffers);      // Increment number of empty buffers; wake waiter
    consume an item            // Data no longer exists
  } while(1);
}
(A Java version using java.util.concurrent.Semaphore follows below.)
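- The same structure expressed in Java with java.util.concurrent.Semaphore; the semaphore names mirror the pseudocode, while the class name and generic type are assumptions for illustration.

import java.util.concurrent.Semaphore;

// Bounded buffer: emptyBuffers/fullBuffers count slots, csLock guards the array.
class BoundedBuffer<T> {
    private final Object[] buffer;
    private int in = 0, out = 0;
    private final Semaphore csLock = new Semaphore(1);       // exclusive buffer lock
    private final Semaphore fullBuffers = new Semaphore(0);  // counts full buffers
    private final Semaphore emptyBuffers;                    // counts empty buffers

    BoundedBuffer(int n) {
        buffer = new Object[n];
        emptyBuffers = new Semaphore(n);
    }

    void put(T item) throws InterruptedException {
        emptyBuffers.acquire();            // wait(emptyBuffers)
        csLock.acquire();                  // wait(csLock)
        buffer[in] = item;                 // CRITICAL SECTION
        in = (in + 1) % buffer.length;
        csLock.release();                  // signal(csLock)
        fullBuffers.release();             // signal(fullBuffers)
    }

    @SuppressWarnings("unchecked")
    T take() throws InterruptedException {
        fullBuffers.acquire();             // wait(fullBuffers)
        csLock.acquire();                  // wait(csLock)
        T item = (T) buffer[out];          // CRITICAL SECTION
        out = (out + 1) % buffer.length;
        csLock.release();                  // signal(csLock)
        emptyBuffers.release();            // signal(emptyBuffers)
        return item;
    }
}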
54Bounded-Buffer Problem Scenarios
- NOTE: this is related to using semaphores as atomic counters
- Suppose the number of buffers is 3, the producer is active, and the consumer is inactive
- empty and full change as follows
-   empty: 3  2  1  0  -1   // Producer placed on 'empty' queue of blocked processes
-   full:  0  1  2  3
- Suppose the number of buffers is 3, the consumer is active, and the producer is inactive
-   empty: 3                // Consumer immediately placed on
-   full:  0  -1            //   'full' queue of blocked processes
- Try examples with both active
55Readers-Writers Problem
- Scenario
- Multiple reader and writer processes access a shared data item
- Readers have read-only access to the item
- Writers have read/write access to the item
- OK for multiple readers to access the same item at the same time
- They are only reading, so no conflict
- Implies a shared lock
- When one writer has access, no other process may have access
- If either another reader or a writer is accessing, potential conflict
- Implies an exclusive lock
- Comes in several forms
- First Readers-Writers Problem
- Second Readers-Writers Problem
56Readers-Writers Problem 2 Forms
- First Readers-Writers Problem -- simplest
- No reader will be kept waiting unless a writer has already obtained permission to use the shared object
- A reader does not have to wait just because a writer is waiting (unless the writer is waiting for another writer to finish writing)
- Writers may starve
- Second Readers-Writers Problem
- When a writer is ready, it must perform its write as quickly as possible
- If a writer is waiting, no more readers can start to read until the writer is done writing
- Readers may starve
571st Readers-Writers Problem Solved
- SHARED DATA
-   semaphore wtrLock = 1,       // Ensures mutual exclusion of writers, used by readers
-             rdrCountLock = 1;  // Exclusive lock for updating the reader count
-   int rdrcount = 0;            // Number of readers
- writer() {                        reader() {
-   wait(wtrLock);                    wait(rdrCountLock);    // Get reader count lock
-   ...                               rdrcount++;            // Update count of readers
-   perform write                     if (rdrcount == 1)     // If first reader,
-   ...                                 wait(wtrLock);       //   lock out writers
-   signal(wtrLock);                  signal(rdrCountLock);  // Drop reader count lock
- }                                   ...
-                                     perform read
-                                     ...
-                                     wait(rdrCountLock);    // Get reader count lock
-                                     --rdrcount;            // Update count of readers
-                                     if (rdrcount == 0)     // If last reader, then
-                                       signal(wtrLock);     //   let a writer proceed
-                                     signal(rdrCountLock);  // Drop reader count lock
-                                   }
- (A Java alternative using a built-in read-write lock follows below.)
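- For comparison (not a transcription of the semaphore solution above): java.util.concurrent ships a ready-made readers-writers lock; which variant of the problem it approximates depends on its fairness setting.

import java.util.concurrent.locks.ReentrantReadWriteLock;

// SharedItem: many concurrent readers, exclusive writers.
class SharedItem<T> {
    private final ReentrantReadWriteLock rw = new ReentrantReadWriteLock();
    private T value;

    T read() {
        rw.readLock().lock();        // many readers may hold this at once
        try { return value; }
        finally { rw.readLock().unlock(); }
    }

    void write(T newValue) {
        rw.writeLock().lock();       // exclusive: no readers or writers
        try { value = newValue; }
        finally { rw.writeLock().unlock(); }
    }
}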
581st Readers-Writers Problem (cont.)
- If a writer is in its CS and 'n' readers are waiting, then 1 reader is queued on wtrLock and the other n-1 are queued on rdrCountLock
- When a writer executes signal(wtrLock), either the waiting readers or a single waiting writer may resume execution; it depends on the contents of the wtrLock queue. Once the first reader gets in, it blocks out any other writers that try to get in, until the last reader leaves
- Example: suppose a writer has the shared item and four readers have attempted to access it. The first reader is queued on wtrLock (wtrLock goes 1, 0, -1) and the other readers are queued on rdrCountLock (1, 0, -1, -2, -3). When the writer exits its CS and executes signal(wtrLock), the first reader is activated from the wtrLock queue; as it and each subsequent reader releases rdrCountLock, the remaining readers are activated in turn (rdrCountLock climbs back to 0) and rdrcount rises 0, 1, 2, 3, 4, until all the readers are active in their CS
59Dining-Philosophers Problem
- If we represent the chopsticks by semaphores, then
- do {
-   wait(chopstick[i]);
-   wait(chopstick[(i+1) % 5]);
-   eat
-   signal(chopstick[i]);
-   signal(chopstick[(i+1) % 5]);
-   think
- } while(1)
- BUT: potential deadlock -- see the text for other ideas (one remedy is sketched below)
- http://www.ssw.uni-linz.ac.at/General/Staff/DB/Private/DiningPhilosophers/index.html
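- One common remedy, sketched here in Java as an assumption rather than a verbatim solution from the text: break the circular wait by having every philosopher pick up the lower-numbered chopstick first.

import java.util.concurrent.Semaphore;

// DiningTable: resource ordering prevents the circular wait (and thus deadlock).
class DiningTable {
    private final Semaphore[] chopstick = new Semaphore[5];

    DiningTable() {
        for (int i = 0; i < 5; i++) chopstick[i] = new Semaphore(1);
    }

    void philosopher(int i) throws InterruptedException {
        int first  = Math.min(i, (i + 1) % 5);   // lower-numbered chopstick first
        int second = Math.max(i, (i + 1) % 5);
        while (true) {
            chopstick[first].acquire();
            chopstick[second].acquire();
            // eat
            chopstick[second].release();
            chopstick[first].release();
            // think
        }
    }
}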
60Problems with Semaphore Usage
- If semaphores are not used correctly, they cause problems
- If used in the wrong order, they do not protect the critical section
- If wait() is used twice, deadlock will occur
- If programmers just forget them, or do not know how to use them, then we have critical section violations or deadlocks
- This is because semaphores rely on convention, on programmers following rules, to work correctly
- It would be better if the code/system could enforce the rules, rather than relying on programmers' memory or good intentions
- Look at a synchronization construct that provides this
- Monitor
61What is a Monitor ?
- In today's terminology: a class / object
- Encapsulates shared data
- Data: private data members
- Encapsulates access control to that data
- Serialization hidden inside class methods
- Code that is hidden inside the methods controls the order of client access to shared data
- Safe, even if a context switch occurs
- Client only has access to public methods
- No dependency on the client coding semaphores, etc. correctly, since the only way to access the data is through the monitor's methods
- Can do similar things with procedural languages (and some of us did, before OO languages existed), but it is more challenging
62How Does a Monitor Work ?
- If a process wishes access to shared data, it
- Asks the monitor for access (invokes a public method)
- The monitor queues the request and guarantees no two processes are operating in their critical section at the same time
- E.g., put the semaphores around the CS inside the monitor
- Have compiler support for special monitor constructs
- The monitor performs the operation on the requestor's behalf
- When one request is complete, the monitor returns control and moves on to the next requestor
- If a requestor just wishes to look at shared data, the monitor can return a copy (in fact, this would be the usual behaviour) -- BUT
- The monitor must be careful the data is not also being updated at the same time (as in the readers/writers problem)
- The requestor must be careful how the data will be used -- realize it is a copy
- This means the monitor controls concurrency
- Do not have to rely on each user to do it correctly
- In some OOLs (e.g., Java and some implementations of C) a class method can handle concurrency somewhat like a monitor (but sometimes it may not be guaranteed or has other shortcomings) -- a small Java sketch follows below
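- A minimal monitor-style sketch in Java (class name is an assumption): the shared data is private and every access goes through synchronized methods, so callers cannot bypass the serialization.

// MonitorCounter: data and its access control are encapsulated together.
class MonitorCounter {
    private int count = 0;                            // encapsulated shared data

    public synchronized void increment() { count++; }

    public synchronized int get() { return count; }   // returns a copy (an int value)
}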
63Schematic View of a Monitor
NOTE: everything inside the box in the figure is the monitor
64Monitor With Condition Variables
This allows multiple processes to be active within the monitor concurrently, if they are active for different reasons (conditions). A separate queue exists within the monitor for each condition.
65Java Synchronization
- Bounded Buffer solution using synchronized, wait(), notify() statements
- Multiple Notifications
- Block Synchronization
- Java Semaphores
- Java Monitors
66synchronized Statement
- Every object has a lock or monitor associated with it
- Calling a synchronized method requires owning the lock
- If a calling thread does not own the lock (another thread already owns it), the calling thread is placed in the wait set for the object's lock
- Note: this is a set, NOT a queue
- The lock is released when a thread exits the synchronized method
- Then one of the waiting threads (in the wait set) is allowed to get the lock and proceed to execute the synchronized method
- It is NOT deterministic which of the waiting threads will be awakened and receive the lock
- i.e., any thread in the set may be allowed to run next
67Entry Set
68A synchronized Method
- public synchronized void myMethod(Object parms) {
-   // critical section (sort of) goes here
-   // The JVM guarantees that only one thread can be in the method
-   // at any one time, so it guarantees mutual exclusion
-   // HOWEVER
-   // THIS TECHNIQUE IS NOT GUARANTEED TO AVOID STARVATION!
-   // Since the waiters are part of a SET, not a QUEUE, it is not
-   // deterministic which waiter will be awakened when the lock
-   // becomes available.
-   // SO this is not an adequate substitute for classic semaphores
- }
69The wait() Method
- The wait() method is invoked by a thread that already owns the object's lock/monitor -- see the documentation for Object.wait() for more details
- NOTE: this wait() is more like block() than like a semaphore S's wait() and signal()
- When a thread calls wait(), the following occurs
- the thread releases the object lock
- the thread state is set to blocked
- the thread is placed in the wait set
- This is a separate set of processes from those waiting to get the object's lock initially (in the entry set)
- i.e., there can be TWO sets of waiting processes associated with the object (the entry set and the wait set)
- When a thread is removed from the wait set, it does not immediately get the object lock (which it gave up when it was put into the wait set)
- Instead, it goes back into the entry set and must compete for the object lock with the other processes already in the entry set
- Threads are put in the wait set explicitly; they are put in the entry set implicitly
- Cannot immediately get the object lock from the wait set; must go through the entry set
70Entry and Wait Sets
71The notify() Method
- When a thread calls notify(), the following occurs
- the JVM selects an arbitrary thread T from the wait set
- the JVM moves thread T to the entry set
- the JVM sets thread T to Runnable
- Thread T can now compete for the object's lock/monitor again
- (NOTE: it is still waiting, in the entry set)
72Multiple Notifications
- notify() selects an arbitrary thread from the wait set
- NOTE: this may not be the thread that you want to be selected
- Java does not allow you to specify the thread to be selected
- notifyAll() removes ALL threads from the wait set and places them in the entry set. This allows the threads to decide among themselves who should proceed next
- notifyAll() is a conservative strategy that works best when multiple threads may be in the wait set
- But you still cannot control which thread will be selected from the entry set and allowed to run
- And you still cannot be guaranteed to avoid starvation
- (A bounded-buffer sketch using synchronized, wait(), and notifyAll() follows below.)
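- A sketch of the bounded buffer using synchronized, wait() and notifyAll(), in the spirit of the outline on slide 65; the details (class name, generics) are assumptions, not copied from the original slides.

// SyncBoundedBuffer: the object's monitor replaces the explicit semaphores.
class SyncBoundedBuffer<T> {
    private final Object[] buffer;
    private int in = 0, out = 0, count = 0;

    SyncBoundedBuffer(int size) { buffer = new Object[size]; }

    public synchronized void put(T item) throws InterruptedException {
        while (count == buffer.length)
            wait();                        // buffer full: join the wait set
        buffer[in] = item;
        in = (in + 1) % buffer.length;
        count++;
        notifyAll();                       // wake any waiting consumers
    }

    @SuppressWarnings("unchecked")
    public synchronized T take() throws InterruptedException {
        while (count == 0)
            wait();                        // buffer empty: join the wait set
        T item = (T) buffer[out];
        out = (out + 1) % buffer.length;
        count--;
        notifyAll();                       // wake any waiting producers
        return item;
    }
}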
73Block Synchronization
- Blocks of code, rather than entire methods, may be declared as synchronized
- The scope of the lock is the time between when the lock is acquired and released
- This yields a lock scope that is typically smaller than a synchronized method
- But it still does not guarantee fairness / avoidance of starvation
- And there still is only one lock / monitor per object
- Not per method or block
74Block Synchronization (cont)
- Object mutexLock = new Object();
- . . .
- public void someMethod() {
-   nonCriticalSection();
-   synchronized (mutexLock) {   // Put before opening bracket
-     criticalSection();
-   }  // When we go out of scope, the code is no longer synchronized
-   nonCriticalSection();
- }
75Java Semaphores
- Java does not provide a semaphore in JDKs up through 1.4, but a basic semaphore can be constructed using Java synchronization mechanisms
- You will see one in Project 2
- The Java Semaphore from Java SE 5 onward is similar to, but can also be used differently from, classic semaphores
- It comes in both fair and unfair forms
- Keywords are acquire() and release()
- Initialize and manipulate the counter value through permits
- Many additional functions/methods/inspectors, etc., some quite different from what would be expected in classic semaphores
- But still relatively easy to use as classic semaphores
- Use fairness = true on the constructor, i.e.,
-   Semaphore S = new Semaphore(1, true);
- (A usage sketch follows below.)
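- A small usage sketch (the resource pool and its size are assumptions for illustration): a fair JDK Semaphore initialized to more than one permit acts as the "shared lock" case from slide 40, limiting how many threads may use a pool of identical resources at once.

import java.util.concurrent.Semaphore;

// ResourcePool: at most RESOURCES threads hold a permit at any time, FIFO order.
class ResourcePool {
    private static final int RESOURCES = 3;
    private final Semaphore available = new Semaphore(RESOURCES, true); // fair

    void useResource() throws InterruptedException {
        available.acquire();      // wait: take one of the permits
        try {
            // ... use one resource ...
        } finally {
            available.release();  // signal: give the permit back
        }
    }
}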
76Synchronization Examples
- Solaris
- Windows XP
- Linux
- Pthreads
77Solaris Synchronization
- Implements a variety of locks to support multitasking, multithreading (including real-time threads), and multiprocessing
- Uses adaptive mutexes for efficiency when protecting data from short code segments
- Uses condition variables and readers-writers locks when longer sections of code need access to data
- Uses turnstiles to order the list of threads waiting to acquire either an adaptive mutex or a reader-writer lock
78Windows XP Synchronization
- Uses interrupt masks to protect access to global resources on uniprocessor systems
- Uses spinlocks on multiprocessor systems
- Also provides dispatcher objects, which may act as either mutexes or semaphores
- Dispatcher objects may also provide events
- An event acts much like a condition variable
- Linux
- Prior to kernel version 2.6, disables interrupts to implement short critical sections (on uniprocessor systems)
- Version 2.6 and later: fully preemptive
- Linux provides
- semaphores
- spin locks
80Pthreads Synchronization
- Pthreads API is OS-independent
- It provides
- mutex locks
- condition variables
- Non-portable extensions include
- read-write locks
- spin locks
81Atomic Transactions
- Assures that an operation happens as a single logical unit of work, in its entirety, or not at all
- Related to (but not restricted to) the field of database systems
- The challenge is assuring atomicity despite computer system failures
- Transaction: a collection of instructions or operations that performs a single logical function
- Concerned with changes to stable storage, e.g., disk
- A transaction is a series of read and write operations
- Terminated by a commit (transaction successful) or abort (transaction failed) operation
- An aborted transaction must be rolled back to undo any changes it may have performed (perhaps using a log: log-based recovery)
- More details on atomic transactions when we discuss commitment control
82Atomic Transactions
- System Model
- Log-based Recovery
- Checkpoints
- Concurrent Atomic Transactions
83System Model
- Assures that operations happen as a single logical unit of work, in their entirety, or not at all
- Related to the field of database systems
- The challenge is assuring atomicity despite computer system failures
- Transaction: a collection of instructions or operations that performs a single logical function
- Here we are concerned with changes to stable storage (disk)
- A transaction is a series of read and write operations
- Terminated by a commit (transaction successful) or abort (transaction failed) operation
- An aborted transaction must be rolled back to undo any changes it performed
84Types of Storage Media
- Volatile storage: information stored here does not survive system crashes
- Example: main memory, cache
- Nonvolatile storage: information usually survives crashes
- Example: disk and tape
- Stable storage: information never lost
- Not actually possible, so approximated via replication or RAID to devices with independent failure modes
- Goal is to assure transaction atomicity where failures cause loss of information on volatile storage
85Log-Based Recovery
- Record to stable storage information about all modifications made by a transaction
- Most common is write-ahead logging
- Log is on stable storage; each log record describes a single transaction write operation, including
- Transaction name
- Data item name
- Old value
- New value
- <Ti starts> is written to the log when transaction Ti starts
- <Ti commits> is written when Ti commits
- A log entry must reach stable storage before the operation on the data occurs
86Log-Based Recovery Algorithm
- Using the log, the system can handle any volatile memory errors
- Undo(Ti) restores the values of all data updated by Ti
- Redo(Ti) sets the values of all data in transaction Ti to the new values
- Undo(Ti) and redo(Ti) must be idempotent
- Multiple executions must have the same result as one execution
- If the system fails, restore the state of all updated data via the log
- If the log contains <Ti starts> without <Ti commits>, undo(Ti)
- If the log contains <Ti starts> and <Ti commits>, redo(Ti)
- (A small sketch of this decision rule follows below.)
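- A toy Java sketch of the decision rule just stated (the record format and class name are assumptions, not from the slides): transactions with <Ti starts> but no <Ti commits> are undone; those with both are redone.

import java.util.*;

// RecoveryDecision: scan the log after a crash and classify each transaction.
class RecoveryDecision {
    static Map<String, String> classify(List<String> log) {
        Set<String> started = new HashSet<>();
        Set<String> committed = new HashSet<>();
        for (String rec : log) {
            // records are assumed to look like "<T1 starts>" / "<T1 commits>"
            if (rec.endsWith("starts>"))  started.add(rec.substring(1, rec.indexOf(' ')));
            if (rec.endsWith("commits>")) committed.add(rec.substring(1, rec.indexOf(' ')));
        }
        Map<String, String> action = new HashMap<>();
        for (String t : started)
            action.put(t, committed.contains(t) ? "redo" : "undo");
        return action;
    }
}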
87Checkpoints
- The log could become long, and recovery could take a long time
- Checkpoints shorten the log and the recovery time
- Checkpoint scheme
- Output all log records currently in volatile storage to stable storage
- Output all modified data from volatile to stable storage
- Output a log record <checkpoint> to the log on stable storage
- Now recovery only includes Ti such that Ti started executing before the most recent checkpoint, and all transactions after Ti. All other transactions are already on stable storage
88Concurrent T