Title: Chapter 6: Process Synchronization
1 Chapter 6: Process Synchronization
2 Chapter 6: Process Synchronization
- Background
- The Critical-Section Problem
- Peterson's Solution
- Synchronization Hardware
- Semaphores
- Classic Problems of Synchronization
- Monitors
- Synchronization Examples
3 Background
- Concurrent access to shared data may result in data inconsistency.
- Maintaining data consistency requires mechanisms to ensure the orderly execution of cooperating processes.
- Suppose that we wanted to provide a solution to the producer-consumer problem that fills all the buffers. We can do so by having an integer count that keeps track of the number of full buffers. Initially, count is set to 0. It is incremented by the producer after it produces a new buffer and is decremented by the consumer after it consumes a buffer.
4 Producer

    while (true) {
        /* produce an item and put it in nextProduced */
        while (count == BUFFER_SIZE)
            ;   // do nothing
        buffer[in] = nextProduced;
        in = (in + 1) % BUFFER_SIZE;
        count++;
    }
5 Consumer

    while (true) {
        while (count == 0)
            ;   // do nothing
        nextConsumed = buffer[out];
        out = (out + 1) % BUFFER_SIZE;
        count--;
        /* consume the item in nextConsumed */
    }
6 Race Condition
- count++ could be implemented as

    register1 = count
    register1 = register1 + 1
    count = register1

- count-- could be implemented as

    register2 = count
    register2 = register2 - 1
    count = register2

- Consider this execution interleaving with count = 5 initially:

    S0: producer executes register1 = count           {register1 = 5}
    S1: producer executes register1 = register1 + 1   {register1 = 6}
    S2: consumer executes register2 = count           {register2 = 5}
    S3: consumer executes register2 = register2 - 1   {register2 = 4}
    S4: producer executes count = register1           {count = 6}
    S5: consumer executes count = register2           {count = 4}
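- The lost update traced in S0-S5 is easy to reproduce. Below is a minimal sketch, assuming POSIX threads (pthreads, not part of the slides): two threads update an unprotected shared counter, and the final value is almost never the expected 0.

    /* Race-condition demo: compile with -pthread and run a few times. */
    #include <pthread.h>
    #include <stdio.h>

    #define ITERATIONS 1000000

    static volatile long count = 0;   /* shared and unprotected on purpose */

    static void *producer(void *arg) {
        for (long i = 0; i < ITERATIONS; i++)
            count++;                  /* load, add, store: not atomic */
        return NULL;
    }

    static void *consumer(void *arg) {
        for (long i = 0; i < ITERATIONS; i++)
            count--;
        return NULL;
    }

    int main(void) {
        pthread_t p, c;
        pthread_create(&p, NULL, producer, NULL);
        pthread_create(&c, NULL, consumer, NULL);
        pthread_join(p, NULL);
        pthread_join(c, NULL);
        printf("final count = %ld (expected 0)\n", count);
        return 0;
    }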
7 Solution
- Any time multiple processes execute code that modifies shared data, access to such code must be serialized. Only one process at a time should be allowed to execute such code, and the process should execute the code to completion without interruption.
- This is called mutual exclusion. Enforcing mutual exclusion is a key requirement (and challenge) in the use of concurrent processes.
8 The Critical-Section Problem
- Code that is executed by a process for the purpose of accessing and modifying shared data is called a critical section.
- Only one process at a time may be allowed to enter its critical section.
- In other words, mutual exclusion must be enforced at the entry to a critical section.
- The critical-section problem involves finding a protocol that allows processes to cooperate in the required manner.
9 Solution to Critical-Section Problem
- 1. Mutual Exclusion - If process Pi is executing in its critical section, then no other process can be executing in its critical section.
- 2. Progress - If no process is executing in its critical section and there exist some processes that wish to enter their critical sections, then the selection of the process that will enter its critical section next cannot be postponed indefinitely.
- 3. Bounded Waiting - A bound must exist on the number of times that other processes are allowed to enter their critical sections after a process has made a request to enter its critical section and before that request is granted.
- Assume that each process executes at a nonzero speed.
- No assumption is made concerning the relative speed of the N processes.
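- For reference, the general structure of a process participating in the critical-section problem, as the textbook sketches it, brackets the critical section with an entry section and an exit section:

    do {
        entry section
            critical section
        exit section
            remainder section
    } while (TRUE);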
10 Evolution of Solutions to the Critical-Section Problem (and the Implementation of Mutual Exclusion)
- Software-only implementations appeared first.
- Several unsuccessful attempts were tried.
- One successful implementation for two processes is Peterson's Algorithm.
- All software-only implementations require a busy wait.
11 Algorithm for Process Pi

    while (true) {
        flag[i] = TRUE;
        turn = j;
        while (flag[j] && turn == j)
            ;   // busy wait
        // CRITICAL SECTION
        flag[i] = FALSE;
        // REMAINDER SECTION
    }
12 Peterson's Solution
- Two-process solution.
- Assume that the machine-level instructions that load or store shared memory data are atomic; that is, they cannot be interrupted.
- The two processes share two variables:

    int turn;
    Boolean flag[2];

- The variable turn indicates whose turn it is to enter the critical section.
- The flag array is used to indicate if a process is ready to enter the critical section. flag[i] = true implies that process Pi is ready!
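- A minimal compilable sketch of Peterson's algorithm is given below, assuming C11 atomics and POSIX threads (an illustration, not part of the slides); sequentially consistent atomic loads and stores stand in for the slide's assumption that shared loads and stores are atomic and not reordered.

    #include <stdatomic.h>
    #include <stdbool.h>
    #include <pthread.h>
    #include <stdio.h>

    static atomic_bool flag[2];       /* flag[i]: Pi wants to enter */
    static atomic_int  turn;
    static long shared_counter = 0;   /* protected by the algorithm */

    static void enter(int i) {
        int j = 1 - i;
        atomic_store(&flag[i], true); /* I am ready */
        atomic_store(&turn, j);       /* but you go first */
        while (atomic_load(&flag[j]) && atomic_load(&turn) == j)
            ;                         /* busy wait */
    }

    static void leave(int i) {
        atomic_store(&flag[i], false);
    }

    static void *worker(void *arg) {
        int i = *(int *)arg;
        for (int k = 0; k < 100000; k++) {
            enter(i);
            shared_counter++;         /* critical section */
            leave(i);
        }
        return NULL;
    }

    int main(void) {
        pthread_t t0, t1;
        int id0 = 0, id1 = 1;
        pthread_create(&t0, NULL, worker, &id0);
        pthread_create(&t1, NULL, worker, &id1);
        pthread_join(t0, NULL);
        pthread_join(t1, NULL);
        printf("counter = %ld (expected 200000)\n", shared_counter);
        return 0;
    }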
13 Evolution (continued)
- Once a successful software implementation was
demonstrated, computer designers considered the
assertion that hardware and software are
logically equivalent. They implemented
synchronization hardware.
14 Synchronization Hardware
- Many systems provide hardware support for critical-section code.
- Uniprocessors could disable interrupts.
- Currently running code would execute without preemption.
- Generally too inefficient on multiprocessor systems.
- Operating systems using this approach are not broadly scalable.
- Modern machines provide special atomic hardware instructions.
- Atomic = non-interruptible.
- Either test a memory word and set its value (the TestAndSet instruction),
- Or swap the contents of two memory words (the Swap instruction).
15 TestAndSet Instruction
- Definition:

    boolean TestAndSet (boolean *target)
    {
        boolean rv = *target;
        *target = TRUE;
        return rv;
    }
16 Solution using TestAndSet
- Shared boolean variable lock, initialized to FALSE.
- Solution:

    while (true) {
        while (TestAndSet(&lock))
            ;   // do nothing
        // critical section
        lock = FALSE;
        // remainder section
    }

- Still requires a busy wait.
- Good for more than two processes.
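- For a concrete counterpart, C11 exposes the same test-and-set idea through atomic_flag; the sketch below (an assumption, not from the slides) builds this slide's spin lock on atomic_flag_test_and_set().

    #include <stdatomic.h>

    static atomic_flag lock = ATOMIC_FLAG_INIT;

    void acquire(void) {
        /* busy wait until the previous value was clear (FALSE) */
        while (atomic_flag_test_and_set(&lock))
            ;
    }

    void release(void) {
        atomic_flag_clear(&lock);   /* lock = FALSE */
    }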
17 Swap Instruction
- Definition:

    void Swap (boolean *a, boolean *b)
    {
        boolean temp = *a;
        *a = *b;
        *b = temp;
    }
18 Solution using Swap
- Shared Boolean variable lock, initialized to FALSE. Each process has a local Boolean variable key.
- Solution:

    while (true) {
        key = TRUE;
        while (key == TRUE)
            Swap(&lock, &key);
        // critical section
        lock = FALSE;
        // remainder section
    }
19 Evolution (continued)
- The final and most elegant solution is the semaphore (developed by Edsger Dijkstra).
- Use of the semaphore does not require a busy wait.
- Good for any number of processes.
20 Semaphore
- A semaphore may be viewed as an abstract data type (ADT) having both a scalar value and an associated queue of waiting processes.
- Basic operations (not including initializing the scalar value) are Wait and Signal, originally called P and V.
- For semaphore S:
- Wait(S) can be defined logically as

    if S > 0 then
        S = S - 1
    else
        wait in Queue S

- Signal(S) can be defined logically as

    if any task currently waits in Queue S then
        awaken the first task in the queue
    else
        S = S + 1

- Both of the above operations are atomic.
- The textbook defines the semaphore operations logically as

    wait (S) {
        while (S <= 0)
            ;   // no-op
        S--;
    }

    signal (S) {
        S++;
    }
21 Semaphore (continued)
- May be used for enforcing mutual exclusion and for signaling among different processes.
- For enforcing mutual exclusion at the entry to a critical section:
- Semaphore mutex has an initial value of 1.
- For two processes t1 and t2 accessing the same data:

    t1                        t2
    ...                       ...
    wait(mutex)               wait(mutex)
    <critical section>        <critical section>
    signal(mutex)             signal(mutex)
    ...                       ...
22 Semaphores (continued)
- For signaling between two processes t1 and t2:
- Semaphore sem has an initial value of 0.
- t2 waits for a signal from t1:

    t1                                t2
    ...                               ...
    <generate data needed by t2>      wait(sem)
    signal(sem)                       <use data generated by t1>
    ...                               ...
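- A minimal sketch of this signaling pattern, assuming POSIX semaphores and threads (sem_wait/sem_post correspond to wait/signal; the thread names t1 and t2 follow the slide):

    #include <semaphore.h>
    #include <pthread.h>
    #include <stdio.h>

    static sem_t sem;
    static int data;

    static void *t1(void *arg) {
        data = 42;          /* generate data needed by t2 */
        sem_post(&sem);     /* signal(sem) */
        return NULL;
    }

    static void *t2(void *arg) {
        sem_wait(&sem);     /* wait(sem): blocks until t1 signals */
        printf("t2 uses data generated by t1: %d\n", data);
        return NULL;
    }

    int main(void) {
        pthread_t a, b;
        sem_init(&sem, 0, 0);           /* initial value 0 */
        pthread_create(&b, NULL, t2, NULL);
        pthread_create(&a, NULL, t1, NULL);
        pthread_join(a, NULL);
        pthread_join(b, NULL);
        sem_destroy(&sem);
        return 0;
    }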
23 Semaphore as General Synchronization Tool
- Counting semaphore - the integer value can range over an unrestricted domain.
- Binary semaphore - the integer value can range only between 0 and 1; can be simpler to implement.
- Also known as mutex locks.
24 Semaphore Implementation with no Busy Waiting
- With each semaphore there is an associated waiting queue. Each entry in a waiting queue has two data items:
- value (of type integer)
- pointer to the next record in the list
- Two operations:
- block - place the process invoking the operation on the appropriate waiting queue.
- wakeup - remove one of the processes in the waiting queue and place it in the ready queue.
25 Semaphore Implementation with no Busy Waiting (Cont.)
- Implementation of wait:

    wait (S) {
        value--;
        if (value < 0) {
            add this process to the waiting queue
            block();
        }
    }

- Implementation of signal:

    signal (S) {
        value++;
        if (value <= 0) {
            remove a process P from the waiting queue
            wakeup(P);
        }
    }
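- As an illustration only (not the textbook's code), a blocking semaphore can be sketched with a pthread mutex and condition variable, where pthread_cond_wait() plays the role of block() and pthread_cond_signal() the role of wakeup(). Note that in this counting style the value never goes negative, unlike the version above, which uses a negative value to count the waiters.

    #include <pthread.h>

    typedef struct {
        int value;
        pthread_mutex_t lock;
        pthread_cond_t  cond;
    } semaphore;

    void sem_init_simple(semaphore *s, int value) {
        s->value = value;
        pthread_mutex_init(&s->lock, NULL);
        pthread_cond_init(&s->cond, NULL);
    }

    void sem_wait_simple(semaphore *s) {
        pthread_mutex_lock(&s->lock);
        while (s->value <= 0)                 /* no busy wait: sleep instead */
            pthread_cond_wait(&s->cond, &s->lock);
        s->value--;
        pthread_mutex_unlock(&s->lock);
    }

    void sem_signal_simple(semaphore *s) {
        pthread_mutex_lock(&s->lock);
        s->value++;
        pthread_cond_signal(&s->cond);        /* wake one waiter, if any */
        pthread_mutex_unlock(&s->lock);
    }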
26 Deadlock and Starvation
- Deadlock - two or more processes are waiting indefinitely for an event that can be caused by only one of the waiting processes.
- Let S and Q be two semaphores initialized to 1:

    P0                    P1
    wait (S);             wait (Q);
    wait (Q);             wait (S);
      .                     .
      .                     .
      .                     .
    signal (S);           signal (Q);
    signal (Q);           signal (S);

- Starvation - indefinite blocking. A process may never be removed from the semaphore queue in which it is suspended.
27 Recall: The Paradigm of Interprocess Communication and Synchronization - The Producer-Consumer Problem
- The producer process produces information that is consumed by a consumer process. A buffer is used to hold data between the two processes.
- The producer and consumer must be synchronized; that is, a producer must wait if it attempts to put data into a full buffer, whereas a consumer must wait if it attempts to extract data from an empty buffer.
- This represents the basis for interprocess communication and can take two forms:
- Message passing by way of a separate mailbox or message queue (discussed previously)
- The operating system usually provides this structure and the corresponding functions SEND and RECEIVE.
- A producer SENDs to the mailbox, while the consumer RECEIVEs from the mailbox.
- Message passing by way of a shared-memory buffer
- Usually implemented directly with semaphores.
- The bounded-buffer producer-consumer problem assumes that there is a fixed buffer size.
- The consumer must wait if the buffer is empty, and the producer must wait if the buffer is full.
28 Message Passing by way of a Mailbox or Message Queue
- Convenient for the programmer, because the level of abstraction is higher (via SEND and RECEIVE).
- Typically exhibits higher overhead, because data has to be moved more (sender process to mailbox and mailbox to receiver process).
29 Message Passing by way of a Shared-Memory Buffer
- Lower level of abstraction, requiring the use of semaphores.
- More effort for the programmer.
- Greater risk of mistakes in the use of semaphores.
- Better performance due to less movement of data.
30 Message Passing by way of a Shared-Memory Buffer (continued)
- Two possible design approaches:
- A traditional bounded buffer, in which both sender and receiver processes can access the shared buffer if it is not completely full and not completely empty, OR
- Sender and receiver processes separated by double buffers.
- While one buffer is being filled by the sender, the other buffer is being emptied by the receiver.
- Once one buffer is filled and the other emptied, the sender and receiver swap buffers and continue.
- Not presented in the textbook.
31 Bounded Buffer
- N buffers, each can hold one item.
- Semaphore mutex initialized to the value 1.
- Semaphore full initialized to the value 0.
- Semaphore empty initialized to the value N.
32 Bounded Buffer (Cont.)
- The structure of the producer process:

    while (true) {
        // produce an item
        wait (empty);
        wait (mutex);
        // add the item to the buffer
        signal (mutex);
        signal (full);
    }
33 Bounded Buffer (Cont.)
- The structure of the consumer process:

    while (true) {
        wait (full);
        wait (mutex);
        // remove an item from the buffer
        signal (mutex);
        signal (empty);
        // consume the removed item
    }
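- Putting slides 31-33 together, here is a minimal runnable sketch assuming POSIX semaphores and threads; the buffer size N, the item values, and the iteration counts are illustrative choices, not from the slides.

    #include <semaphore.h>
    #include <pthread.h>
    #include <stdio.h>

    #define N 5

    static int buffer[N];
    static int in = 0, out = 0;
    static sem_t empty, full, mutex;   /* empty = N, full = 0, mutex = 1 */

    static void *producer(void *arg) {
        for (int item = 0; item < 20; item++) {
            sem_wait(&empty);          /* wait for a free slot */
            sem_wait(&mutex);
            buffer[in] = item;         /* add the item to the buffer */
            in = (in + 1) % N;
            sem_post(&mutex);
            sem_post(&full);           /* one more full slot */
        }
        return NULL;
    }

    static void *consumer(void *arg) {
        for (int k = 0; k < 20; k++) {
            sem_wait(&full);           /* wait for a full slot */
            sem_wait(&mutex);
            int item = buffer[out];    /* remove an item from the buffer */
            out = (out + 1) % N;
            sem_post(&mutex);
            sem_post(&empty);          /* one more free slot */
            printf("consumed %d\n", item);
        }
        return NULL;
    }

    int main(void) {
        pthread_t p, c;
        sem_init(&mutex, 0, 1);
        sem_init(&full,  0, 0);
        sem_init(&empty, 0, N);
        pthread_create(&p, NULL, producer, NULL);
        pthread_create(&c, NULL, consumer, NULL);
        pthread_join(p, NULL);
        pthread_join(c, NULL);
        return 0;
    }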
34 Other Examples of Synchronization
- Readers/Writers
- Applicable to systems in which there are two kinds of transactions - readers and writers.
- Reader transactions want to enter a database and complete an inquiry only; nothing is modified.
- Writer transactions want to enter a database and modify data (such as a record).
- One example of such a system might be an airline reservation system.
- Dining Philosophers
- A hypothetical situation in which an odd number of philosophers sit around a table with plates of spaghetti or rice and alternate between eating and thinking.
- On each side of each plate is one utensil (fork or chopstick).
- Given that eating requires two utensils, the philosophers must devise a scheme in which they share access to the utensils, but in such a manner that all are assured of eating in finite time (not starving).
35 Readers-Writers Problem
- A data set is shared among a number of concurrent processes.
- Readers only read the data set; they do not perform any updates.
- Writers can both read and write.
- Problem: allow multiple readers to read at the same time, while only one single writer can access the shared data at a time.
- Shared data:
- Data set
- Semaphore mutex initialized to 1.
- Semaphore wrt initialized to 1.
- Integer readcount initialized to 0.
36 Readers-Writers Problem (Cont.)
- The structure of a writer process:

    while (true) {
        wait (wrt);
        // writing is performed
        signal (wrt);
    }
37 Readers-Writers Problem (Cont.)
- The structure of a reader process:

    while (true) {
        wait (mutex);
        readcount++;
        if (readcount == 1)
            wait (wrt);
        signal (mutex);

        // reading is performed

        wait (mutex);
        readcount--;
        if (readcount == 0)
            signal (wrt);
        signal (mutex);
    }
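- The reader and writer entry/exit protocols of slides 36-37 translate directly to POSIX semaphores. The sketch below is illustrative; the function names (rw_init, reader_enter, and so on) are hypothetical, not from the slides.

    #include <semaphore.h>

    static sem_t mutex, wrt;       /* both initialized to 1 */
    static int readcount = 0;

    void rw_init(void) {
        sem_init(&mutex, 0, 1);
        sem_init(&wrt, 0, 1);
    }

    void writer_enter(void) { sem_wait(&wrt); }   /* exclusive access */
    void writer_exit(void)  { sem_post(&wrt); }

    void reader_enter(void) {
        sem_wait(&mutex);
        readcount++;
        if (readcount == 1)        /* first reader locks out writers */
            sem_wait(&wrt);
        sem_post(&mutex);
    }

    void reader_exit(void) {
        sem_wait(&mutex);
        readcount--;
        if (readcount == 0)        /* last reader lets writers in */
            sem_post(&wrt);
        sem_post(&mutex);
    }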
38 Dining-Philosophers Problem
- Shared data:
- Bowl of rice (data set)
- Semaphore chopstick[5] initialized to 1
39 Dining-Philosophers Problem (Cont.)
- The structure of philosopher i:

    while (true) {
        wait ( chopstick[i] );
        wait ( chopstick[(i + 1) % 5] );
        // eat
        signal ( chopstick[i] );
        signal ( chopstick[(i + 1) % 5] );
        // think
    }
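- Note that the structure above can deadlock if every philosopher picks up the left chopstick at the same time. One common remedy, sketched below with POSIX semaphores (this variant is an illustration, not the slide's code), is to make odd- and even-numbered philosophers acquire their chopsticks in opposite orders so a cycle of waiting processes cannot form.

    #include <semaphore.h>

    static sem_t chopstick[5];      /* each initialized to 1 */

    void philosopher(int i) {
        int left = i, right = (i + 1) % 5;
        while (1) {
            if (i % 2 == 1) {                  /* odd: left first */
                sem_wait(&chopstick[left]);
                sem_wait(&chopstick[right]);
            } else {                           /* even: right first */
                sem_wait(&chopstick[right]);
                sem_wait(&chopstick[left]);
            }
            /* eat */
            sem_post(&chopstick[left]);
            sem_post(&chopstick[right]);
            /* think */
        }
    }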
40 Problems with Semaphores
- Incorrect use of semaphore operations:
- signal (mutex) ... wait (mutex)
- wait (mutex) ... wait (mutex)
- Omitting wait (mutex) or signal (mutex) (or both)
41 Monitors
- A high-level abstraction that provides a convenient and effective mechanism for process synchronization.
- Only one process may be active within the monitor at a time.

    monitor monitor-name
    {
        // shared variable declarations

        procedure P1 (...) { .... }
        ...
        procedure Pn (...) { .... }

        Initialization code (...) { .... }
    }
42 Schematic View of a Monitor
43 Condition Variables
- condition x, y;
- Two operations on a condition variable:
- x.wait() - a process that invokes the operation is suspended.
- x.signal() - resumes one of the processes (if any) that invoked x.wait().
44 Monitor with Condition Variables
45 Monitor Implementation Using Semaphores
- Variables:

    semaphore mutex;      // (initially 1)
    semaphore next;       // (initially 0)
    int next-count = 0;

- Each procedure F will be replaced by:

    wait(mutex);
        ...
        body of F
        ...
    if (next-count > 0)
        signal(next);
    else
        signal(mutex);

- Mutual exclusion within a monitor is ensured.
46 Monitor Implementation
- For each condition variable x, we have:

    semaphore x-sem;      // (initially 0)
    int x-count = 0;

- The operation x.wait can be implemented as:

    x-count++;
    if (next-count > 0)
        signal(next);
    else
        signal(mutex);
    wait(x-sem);
    x-count--;
47 Monitor Implementation
- The operation x.signal can be implemented as:

    if (x-count > 0) {
        next-count++;
        signal(x-sem);
        wait(next);
        next-count--;
    }
48 End of Chapter 6