1
Background
  • Concurrent access to shared data can lead to
    inconsistencies
  • Maintaining data consistency among cooperating
    processes is critical
  • What is wrong with the code to the right?

(Slide figure: Producer and Consumer code shown side by side)
2
Race Condition
A condition where the outcome depends on
execution order
  • count++ and count-- may not each compile to a
    single machine instruction
  • count++ compiles to:
      register1 = count
      register1 = register1 + 1
      count = register1
  • count-- compiles to:
      register2 = count
      register2 = register2 - 1
      count = register2
  • Suppose P = producer, C = consumer, and count == 5
  • P: register1 = count          // register1 == 5
  • P: register1 = register1 + 1  // register1 == 6
  • C: register2 = count          // register2 == 5
  • C: register2 = register2 - 1  // register2 == 4
  • P: count = register1          // count == 6
  • C: count = register2          // count == 4
  • Questions
  • What is the final value for count?
  • What other possibilities?
  • What should be the correct value?
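A minimal runnable sketch (not from the slides) that exhibits this race: two
threads repeatedly increment and decrement a shared count with no
synchronization, so the final value varies from run to run.

  public class RaceDemo {
      static int count = 0;   // shared, unsynchronized

      public static void main(String[] args) throws InterruptedException {
          Thread producer = new Thread(() -> {
              for (int i = 0; i < 100000; i++) count++;   // not atomic
          });
          Thread consumer = new Thread(() -> {
              for (int i = 0; i < 100000; i++) count--;   // not atomic
          });
          producer.start(); consumer.start();
          producer.join(); consumer.join();
          System.out.println(count);   // often nonzero; the expected value is 0
      }
  }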

3
Critical Sections
A block of instructions that must be protected
from concurrent execution
  • We cannot assume
  • Process execution speed, although we can assume
    they all execute at some non-zero speed
  • Which process executes next
  • Process execution order

Race condition: concurrent access to shared data
where the outcome depends on the order of execution
4
Critical-Section Solutions
We need a protocol that guarantees the following
  • 1. Mutual Exclusion - There is a mechanism that
    limits the number of processes (normally one) that
    can execute in a critical section at any time
  • 2. Progress - If no process is executing in its
    critical section, processes waiting to enter
    cannot wait indefinitely
  • 3. Bounded Waiting - There is a bound on the number
    of times other processes can enter their critical
    sections after a process has requested entry and
    before that request is granted

No assumptions can be made regarding process
speed or scheduling order
5
Peterson's Two Process Solution
Spinning, or 'busy waiting', means a process loops
rather than blocking
  • Assume atomic instructions
  • Processes share
  • int turn
  • boolean flag[2]
  • turn indicates whose turn it is
  • flag indicates whether a process is ready to enter
  • flag[i] == true implies that process Pi is ready!

Does Peterson satisfy mutual exclusion, progress,
bounded wait?
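A minimal sketch (not from the slides) of Peterson's solution for two
processes, using the shared turn and flag[] variables above; i is this
process's index and j = 1 - i is the other's. The class and method names are
assumptions made for illustration.

  class Peterson {
      private volatile int turn;
      private final boolean[] flag = new boolean[2];   // flag[i]: Pi wants to enter

      public void enter(int i) {
          int j = 1 - i;
          flag[i] = true;      // I am ready
          turn = j;            // but let the other process go first if it is also ready
          while (flag[j] && turn == j) { /* spin (busy wait) */ }
      }

      public void exit(int i) {
          flag[i] = false;     // no longer in the critical section
      }
  }

On a real JVM this sketch also needs per-element memory visibility (e.g., an
AtomicIntegerArray instead of boolean[]) to be strictly correct; it is shown
only to illustrate the protocol.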
6
Hardware Solutions
  • Hardware atomic instructions
  • test and set
  • swap
  • increment and test if zero
  • Disable interrupts
  • Could involve loss of data
  • Won't work on multiprocessors without inefficient
    message passing

Atomic operation: one that cannot be interrupted.
A pending interrupt will not be serviced until the
instruction completes
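A brief sketch (not from the slides) of how a hardware test-and-set maps to
Java: java.util.concurrent.atomic.AtomicBoolean.getAndSet() is backed by an
atomic hardware instruction and can implement a spin lock. The SpinLock class
name is an assumption.

  import java.util.concurrent.atomic.AtomicBoolean;

  class SpinLock {
      private final AtomicBoolean locked = new AtomicBoolean(false);

      public void lock() {
          // getAndSet is atomic: it returns the old value and stores true
          while (locked.getAndSet(true)) { Thread.yield(); }   // spin until we saw false
      }

      public void unlock() {
          locked.set(false);
      }
  }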
7
Simulate a Hardware Solution
  public class HardwareData {
      private boolean value = false;

      public HardwareData(boolean value) { this.value = value; }

      public void release(boolean value) { this.value = value; }

      public synchronized boolean lock() { return value; }

      public synchronized boolean getAndSet(boolean value) {
          boolean old = this.value;
          this.value = value;
          return old;
      }

      public synchronized void swap(HardwareData other) {
          boolean old = value;
          value = other.value;
          other.value = old;
      }
  }

Usage:
  HardwareData hdware = new HardwareData(false);    // Threads share
  HardwareData hdware2 = new HardwareData(true);

  // Either: spin on an atomic get-and-set
  while (hdware.getAndSet(true)) Thread.yield();
  // Or: repeatedly swap until our local copy sees the lock free
  do { hdware.swap(hdware2); } while (hdware2.lock() == true);   // Wait - Spin

  criticalSection();
  hdware.release(false);
  someOtherCode();
8
Operating System Synchronization
  • Uniprocessors Disable interrupts
  • Currently running code would execute without
    preemption
  • The executing code must be extremely quick
  • Not scalable to multiprocessor systems, where
    each processor has its own interrupt vector
  • Multiprocessor - atomic hardware instructions
  • Spin locks are used; the protected code must be
    extremely quick
  • Locks that require significant processing use
    other mechanisms (e.g., preemptible semaphores)

9
Semaphores
  • Synchronization tool that uses blocking rather
    than spinning
  • A semaphore is an abstract data type
  • Contains an integer variable
  • Atomic acquire method (P, proberen)
  • Atomic release method (V, verhogen)
  • Includes an implied queue (or some randomly
    accessed data structure) of blocked processes

  acquire() {
      value--;
      if (value < 0) { add P to the list; block(); }
  }

  release() {
      value++;
      if (value <= 0) { remove a process P from the list; wakeup(P); }
  }

10
Semaphore for General Synchronization
  • Counting semaphore
  • the integer value can have any range
  • can count available resources
  • Binary semaphore
  • integer value is 0 or 1
  • also known as a mutex lock
  • Semaphores are error prone
  • acquire instead of release (or vice versa)
  • forget to acquire or release
  • Usage:
      Semaphore sem = new Semaphore(n);
      sem.acquire();
      // critical section code
      sem.release();

Advantages and disadvantages?
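A short sketch (class and method names are assumptions) of a counting
semaphore guarding a pool of N resources with java.util.concurrent.Semaphore;
the try/finally guards against the "forget to release" error mentioned above.

  import java.util.concurrent.Semaphore;

  public class ConnectionPool {
      private static final int N = 3;                     // number of available resources
      private final Semaphore slots = new Semaphore(N);   // counting semaphore

      public void useConnection() throws InterruptedException {
          slots.acquire();            // blocks when all N resources are in use
          try {
              // ... use one of the N connections ...
          } finally {
              slots.release();        // always give the resource back
          }
      }
  }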
11
Java Semaphores (Java 1.5)
  public class Worker implements Runnable {
      private Semaphore sem;

      public Worker(Semaphore sem) { this.sem = sem; }

      public void run() {
          while (true) {
              sem.acquire();      // InterruptedException handling omitted for brevity
              doSomething();      // critical section
              sem.release();
              doMore();
          }
      }
  }

  public class Factory {
      public static void main(String[] args) {
          Semaphore sem = new Semaphore(1);
          Thread[] bees = new Thread[5];
          for (int i = 0; i < 5; i++) bees[i] = new Thread(new Worker(sem));
          for (int i = 0; i < 5; i++) bees[i].start();
      }
  }

12
Semaphore Implementation
  • It is possible that a semaphore needs to block
    while holding a mutex
  • Issue a wait() call to release the mutex
  • Another process must wake up the waiting thread
    (notify() or notifyAll()). Otherwise the thread
    will never execute.
  • Good design will minimize the time spent in
    critical sections
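A minimal sketch (not from the slides) of the acquire/release behavior
implemented as a Java monitor: acquire() blocks with wait() while no permits
remain, and release() wakes a waiting thread with notify(). The class name is
an assumption.

  public class SimpleSemaphore {
      private int value;

      public SimpleSemaphore(int value) { this.value = value; }

      public synchronized void acquire() throws InterruptedException {
          while (value <= 0) wait();   // block while no permits remain
          value--;
      }

      public synchronized void release() {
          value++;
          notify();                    // wake one waiting thread, if any
      }
  }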

13
Deadlock and Starvation
  • Deadlock: two or more processes wait indefinitely
    for resources held by each other, so no wait can
    ever be satisfied
  • Let S and Q be two semaphores initialized to 1

        P0                      P1
        S.acquire();            Q.acquire();
        Q.acquire();            S.acquire();
        // Critical Section     // Critical Section
        S.release();            Q.release();
        Q.release();            S.release();
  • Starvation (indefinite blocking): a process may
    never be removed from the semaphore queue, e.g.
  • the blocked list is serviced in LIFO order
  • process priorities prevent a process from exiting
    a semaphore queue
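A minimal runnable sketch (not from the slides) of the S/Q pattern above
using java.util.concurrent.Semaphore; the sleep calls are added only to make
the deadlock occur reliably.

  import java.util.concurrent.Semaphore;

  public class DeadlockDemo {
      public static void main(String[] args) {
          Semaphore s = new Semaphore(1), q = new Semaphore(1);

          Thread p0 = new Thread(() -> {
              try {
                  s.acquire();            // P0 holds S ...
                  Thread.sleep(10);       // give P1 time to grab Q
                  q.acquire();            // ... and waits for Q: deadlock
                  q.release(); s.release();
              } catch (InterruptedException e) { }
          });
          Thread p1 = new Thread(() -> {
              try {
                  q.acquire();            // P1 holds Q ...
                  Thread.sleep(10);
                  s.acquire();            // ... and waits for S: deadlock
                  s.release(); q.release();
              } catch (InterruptedException e) { }
          });
          p0.start(); p1.start();
      }
  }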

14
The Bounded Buffer problem
  • One or more producers add elements to a buffer
  • One or more consumers extract produced elements
    from the buffer for service
  • There are N buffer slots (hence a bounded buffer)
  • Solution with semaphores
  • Three semaphores, mutex, full, and empty
  • Initialize mutex to 1, full to 0 and empty to N
  • The full semaphore counts up, and the empty
    semaphore counts down

15
The Bounded Buffer Solution
  public class BoundedBufferExample {
      public static void main(String[] args) {
          BoundedBuffer buffer = new BoundedBuffer();
          new Producer(buffer).start();
          new Consumer(buffer).start();
      }
  }

  public class Producer extends Thread {
      private BoundedBuffer buf;
      public Producer(BoundedBuffer buf) { this.buf = buf; }
      public void run() {
          while (true) { sleep(100); buf.insert(new Date()); }
      }
  }

  public class Consumer extends Thread {
      private BoundedBuffer buf;
      public Consumer(BoundedBuffer buf) { this.buf = buf; }
      public void run() {
          while (true) { sleep(100); System.out.println((Date) buf.remove()); }
      }
  }

16
Bounded Buffer Semaphore Solution
  public class BoundedBuffer {
      private static final int SIZE = 5;
      private Object[] buffer;
      private int in, out;
      private Semaphore mutex, empty, full;

      public BoundedBuffer() {
          in = out = 0;
          buffer = new Object[SIZE];
          mutex = new Semaphore(1);      // binary semaphore protecting the buffer
          empty = new Semaphore(SIZE);   // counts empty slots
          full  = new Semaphore(0);      // counts filled slots
      }

      public void insert(Object item) {
          empty.acquire();
          mutex.acquire();
          buffer[in] = item;
          in = (in + 1) % SIZE;
          mutex.release();
          full.release();
      }

      public Object remove() {
          full.acquire();
          mutex.acquire();
          Object item = buffer[out];
          out = (out + 1) % SIZE;
          mutex.release();
          empty.release();
          return item;
      }
  }
Demonstrates both binary and counting semaphores
17
The Readers-Writers Problem
  • Data is shared among concurrent processes
  • Readers only read the data set; they never update it
  • Writers may both read and write
  • Problem
  • Multiple readers can read concurrently
  • Only one writer can access the shared data at a time
  • Shared Data
  • The data set
  • Semaphore mutex initialized to 1
  • Semaphore db initialized to 1 // acquired by the
    first reader, released by the last
  • Integer readerCount initialized to 0, incremented
    by each arriving reader

Note The best solution is to disallow more
readers while a writer waits. Otherwise,
starvation is possible.
18
Reader Writers with Semaphores
  class DataBase {
      private int readers;
      private Semaphore mutex, db;

      public DataBase() {
          readers = 0;
          mutex = new Semaphore(1);
          db = new Semaphore(1);
      }

      public void acquireRead() {            // first reader acquires db
          mutex.acquire();
          if (++readers == 1) db.acquire();
          mutex.release();
      }

      public void releaseRead() {            // last reader releases db
          mutex.acquire();
          if (--readers == 0) db.release();
          mutex.release();
      }

      public void acquireWrite() { db.acquire(); }
      public void releaseWrite() { db.release(); }
  }

How would we disallow more readers after a write
request?
19
Reader Writer User Classes
  class Reader extends Thread {
      private DataBase db;
      public Reader(DataBase db) { this.db = db; }
      public void run() {
          while (true) {
              sleep(500);
              db.acquireRead(); doRead(); db.releaseRead();
          }
      }
  }

  class Writer extends Thread {
      private DataBase db;
      public Writer(DataBase db) { this.db = db; }
      public void run() {
          while (true) {
              sleep(500);
              db.acquireWrite(); doWrite(); db.releaseWrite();
          }
      }
  }
  • Hints for the lab project
  • make an array of DataBase objects
  • add throws InterruptedException to the methods
  • Randomly choose the sleep length

20
Condition Variables
An object with wait and signal capability
  • Condition x
  • Two operations on a condition
  • x.wait()
  • Blocks the process invoking this operation
  • Expects a subsequent signal call by another
    process
  • x.signal()
  • Wakes up a process blocked because of wait()
  • If no processes are blocked, the signal is ignored
  • Which process wakes up? Answer: indeterminate

Note wait and signal normally execute after some
condition occurs
21
Monitors
A high-level abstraction integrated into the
syntax of the language. Only one process may be
active within the monitor at a time.
(Slide figures: monitor without condition variables;
monitor with condition variables)
22
Generic Monitor Syntax
23
Java Monitors
  • Every object has a single monitor lock
  • A call to a synchronized method
  • Lock available: the caller acquires the lock
  • Lock not available: the caller blocks and waits in
    the entry set
  • Non-synchronized methods ignore the lock
  • Lock released when a synchronized method returns
  • The entry queue algorithm varies (normally FCFS)
  • Recursive locking occurs if a method with the
    lock calls another synchronized method in the
    object. This is legal.
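A small sketch (class and method names are assumptions) of the recursive
locking the last bullet describes: a synchronized method calling another
synchronized method on the same object reacquires the lock it already holds,
which Java permits.

  public class Counter {
      private int count = 0;

      public synchronized void add(int n) {
          count += n;
      }

      // Holds the object's monitor lock, then calls another
      // synchronized method on the same object: legal (reentrant).
      public synchronized void addTwice(int n) {
          add(n);
          add(n);
      }
  }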

24
Java Synchronization
  • wait(): block the thread and move it to the wait
    set
  • notify()
  • An arbitrary thread from the wait set moves to
    the entry set
  • A thread not waiting for that condition reissues
    its wait()
  • notifyAll()
  • All threads in the wait set move to the entry
    set
  • Threads not waiting for that condition call wait()
    again
  • Notes
  • notify() and notifyAll() are ignored if the wait
    set is empty
  • wait() and notify() provide a single condition
    variable per Java monitor
  • Java 1.5 adds additional condition-variable
    support
  • Calls to wait() and notify() are legal only when
    the lock is owned

25
Block Synchronization
Acquiring an object's lock from outside a
synchronized method
  • Example
      Object lock = new Object();
      synchronized (lock) {
          // some critical code
          lock.wait();
          // more critical code
          lock.notifyAll();
      }
  • Question: what's wrong with
      synchronized (new Object()) { /* code here */ }
  • Scope
  • the time between acquire and release
  • synchronizing blocks of code within a method can
    reduce the scope
  • How to do it
  • Instantiate a lock object
  • Use the synchronized keyword around a block
  • Use wait() and notify() calls as needed

26
New Java Concurrency Features
  • Problem in old versions of Java
  • notify(), notifyAll(), and wait() provide only a
    single condition variable per object
  • This is not sufficient for every application
  • Java 1.5 solution: condition variables
      Lock key = new ReentrantLock();
      Condition condVar = key.newCondition();
  • Once created, a thread can use the await() and
    signal() methods.
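A minimal sketch (class, field, and method names are assumptions) showing the
Lock/Condition API the slide introduces: one thread awaits a condition while
another signals it.

  import java.util.concurrent.locks.Condition;
  import java.util.concurrent.locks.Lock;
  import java.util.concurrent.locks.ReentrantLock;

  public class Gate {
      private final Lock key = new ReentrantLock();
      private final Condition condVar = key.newCondition();
      private boolean open = false;

      public void waitForOpen() throws InterruptedException {
          key.lock();
          try {
              while (!open) condVar.await();   // releases the lock and blocks
          } finally { key.unlock(); }
      }

      public void openGate() {
          key.lock();
          try {
              open = true;
              condVar.signal();                // wakes one awaiting thread
          } finally { key.unlock(); }
      }
  }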

27
Dining-Philosophers Problem
  • Philosopher i loop:
      while (true) {
          sleep((int)(Math.random() * TIME));
          chopStick[i].acquire();
          chopStick[(i + 1) % 5].acquire();
          eat();
          chopStick[i].release();
          chopStick[(i + 1) % 5].release();
          think();
      }
  • Shared data
  • Bowl of rice (the data set)
  • Semaphore chopStick[5], each element initialized to 1

28
Dining Philosophers (Condition variables)
  class DiningPhilosophers {
      enum State { THINK, HUNGRY, EAT }
      Condition[] self = new Condition[5];
      State[] state = new State[5];
      Lock lock = new ReentrantLock();   // a field, so all methods share the same lock

      public DiningPhilosophers() {
          for (int i = 0; i < 5; i++) {
              state[i] = State.THINK; self[i] = lock.newCondition();
          }
      }

      public void takeForks(int i) throws InterruptedException {
          lock.lock();
          try {
              state[i] = State.HUNGRY; test(i);
              while (state[i] != State.EAT) self[i].await();
          } finally { lock.unlock(); }
      }

      public void returnForks(int i) {
          lock.lock();
          try {
              state[i] = State.THINK;
              test((i + 1) % 5); test((i + 4) % 5);
          } finally { lock.unlock(); }
      }

      private void test(int i) {
          if (state[(i + 4) % 5] != State.EAT && state[i] == State.HUNGRY
                  && state[(i + 1) % 5] != State.EAT) {
              state[i] = State.EAT; self[i].signal();
          }
      }
  }

29
Dining Philosophers with Java Monitor
  void run() {
      String me = Thread.currentThread().getName();
      int stick = Integer.parseInt(me);
      while (true) {
          chopStick.pickUp(stick); eat();
          chopStick.putDown(stick); think();
      }
  }

  class Chopsticks {
      boolean[] stick = new boolean[5];

      public Chopsticks() {
          for (int i = 0; i < 5; i++) stick[i] = false;
      }

      public synchronized void pickUp(int i) throws InterruptedException {
          while (stick[i]) wait();
          stick[i] = true;
          while (stick[(i + 1) % 5]) wait();
          stick[(i + 1) % 5] = true;
      }

      public synchronized void putDown(int i) {
          stick[i] = false;
          stick[(i + 1) % 5] = false;
          notifyAll();
      }
  }

Note All philosopher threads must share the
Chopsticks object.
30
Bounded Buffer - Java
Synchronized insert() and remove() methods
  public synchronized void insert(Object item) {
      while (count == SIZE) Thread.yield();
      buffer[in] = item; in = (in + 1) % SIZE; ++count;
  }

  public synchronized Object remove() {
      while (count == 0) Thread.yield();
      Object item = buffer[out];
      out = (out + 1) % SIZE; --count;
      return item;
  }
  • What bugs are here?

31
Java Bounded Buffer with Monitors
  public class BoundedBuffer {
      private int count, in, out;
      private Object[] buf;

      public BoundedBuffer(int size) {
          count = in = out = 0;
          buf = new Object[size];
      }

      public synchronized void insert(Object item)
              throws InterruptedException {
          while (count == buf.length) wait();
          buf[in] = item; in = (in + 1) % buf.length;
          ++count; notifyAll();
      }

      public synchronized Object remove()
              throws InterruptedException {
          while (count == 0) wait();
          Object item = buf[out]; out = (out + 1) % buf.length;
          --count; notifyAll();
          return item;
      }
  }

32
Java Readers-Writers with Monitors
  public class Database {
      private int readers;
      private boolean writers;

      public Database() { readers = 0; writers = false; }

      public synchronized void acquireRead()
              throws InterruptedException {
          while (writers) wait();
          ++readers;
      }

      public synchronized void releaseRead() {
          if (--readers == 0) notify();
      }

      public synchronized void acquireWrite()
              throws InterruptedException {
          while (readers > 0 || writers) wait();
          writers = true;
      }

      public synchronized void releaseWrite() {
          writers = false; notifyAll();
      }
  }

33
Solaris Synchronization
  • A variety of locks are available
  • Adaptive mutexes
  • Spin while waiting for lock if the holding task
    is executing
  • Block if the holding task is blocked
  • Note: always block on a single-processor machine,
    because only one task can actually execute
  • Condition variables and readers-writers locks
  • Threads block if locks are not available
  • Turnstiles
  • Queues on which threads block (FCFS) while waiting
    for locks to be awarded

34
Windows XP and Linux
  • Kernel locks
  • Single processor
  • Windows: interrupt masks allow high-priority
    interrupts
  • Linux: disables interrupts for short critical
    sections
  • Multiprocessor: spin locks
  • User locks
  • Windows: dispatcher objects handle the callback
    mechanism
  • Linux: semaphores and spin locks
  • P-threads: OS-independent API
  • Standard mutex locks and condition variables
  • Non-universal extensions: reader-writer and spin
    locks

Note: A Windows dispatcher object is a low-level
synchronization mechanism that Windows uses to
control user locks, semaphores, callback events,
timers, and inter-process communication.
35
Transactions
A collection of operations performed as an atomic
unit
  • Requirements
  • Perform in its entirety, or not at all
  • Assure atomicity
  • Stable storage: operate across failures
    (typically RAID, a Redundant Array of Inexpensive
    Disks)
  • Transactions consist of
  • A series of read and write operations
  • A commit operation that completes the transaction
  • An abort operation that cancels (rolls back) any
    changes performed

Stable storage: a group of disks that mirror
every operation
36
Write-ahead logging
  • The log
  • Records transaction name, item name, old and new
    values
  • Written to stable storage before the data
    operations themselves
  • Operations
  • <Ti starts>: written when transaction Ti starts
  • <Ti commits>: written when Ti commits
  • Recovery methods (undo(Ti) and redo(Ti))
  • restore or set the values of all data affected by Ti
  • must be idempotent (same result if repeated)
  • The system uses the log to restore state after
    failures
  • log has <Ti starts> without <Ti commits>: undo(Ti)
  • log has <Ti starts> and <Ti commits>: redo(Ti)
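An illustrative (hypothetical) log for a single data item X, in the record
format above; the transaction names and values are made up. T0 committed
before the crash, so recovery redoes it; T1 never committed, so recovery
undoes it.

  <T0 starts>
  <T0, X, 950, 1000>      (item X: old value 950, new value 1000)
  <T0 commits>
  <T1 starts>
  <T1, X, 1000, 1100>
  ... crash before <T1 commits>: recovery performs redo(T0) and undo(T1)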

37
Checkpoints
  • Purpose: shorten long recoveries
  • Checkpoint scheme
  • Periodically flush the log and modified data to
    stable storage
  • Output a <checkpoint> record to the log
  • Recovery
  • Only consider transactions Ti that started before
    the most recent checkpoint but had not committed,
    plus any transactions that started after Ti
  • All other transactions are already on stable
    storage

38
Serial Schedule
A serial schedule is a possible atomic order of
execution
Note: N transactions Ti imply N! serial schedules
  • Transactions require multiple reads and writes
  • If T0 and T1 are transactions,
    {T0,T1} and {T1,T0} are the serial schedules
  • If T0, T1, and T2 are transactions,
    {T0,T1,T2}, {T0,T2,T1}, {T1,T0,T2}, {T1,T2,T0},
    {T2,T0,T1}, and {T2,T1,T0} are the serial schedules

Concurrency-control algorithms: those that ensure
an executed schedule is equivalent to some serial
schedule. Naïve approach: a single mutex
controlling all transactions. However, this is
inefficient because transactions seldom access
the same data.
39
Non-serial Schedule
  • A non-serial schedule allows reads and writes
    from one transaction to occur concurrently with
    those from another
  • Two operations conflict if they access the same
    data item and at least one of them is a write
  • If a schedule with overlapping reads/writes is
    equivalent to some serial schedule, it is
    conflict serializable

(The slide's example schedule is conflict serializable.)
40
Pessimistic Locking Protocol
  • Transactions acquire reader/writer locks in
    advance; if a lock is already held, the
    transaction must block
  • Two-phase locking protocol
  • Each transaction issues lock and unlock requests
    in two phases
  • During the growing phase, a transaction may
    obtain locks but cannot release any
  • During the shrinking phase, a transaction may
    release locks but cannot obtain any
  • Problems
  • Deadlock can occur
  • Locks are typically held for too long

Note Reader/writer locks are called
shared/exclusive locks in transaction processing.
They apply to particular data items.
41
Optimistic Locking Protocol
  • Goal: not to prevent conflicts, but to detect them
  • Mechanism: numerically timestamp each transaction
    with an increasing transaction number; timestamps
    are recorded on items when reading or writing
  • A conflict occurs when
  • we read an item that was written by a transaction
    with a later (future) timestamp
  • we are about to write an item already read or
    written by a transaction with a later timestamp
  • Action: roll back, get another timestamp, try
    again
  • Advantage: minimizes lock time and is deadlock
    free
  • Disadvantage: many rollbacks if there is a lot of
    contention

42
Time Stamp Implementation
  • Each data item has two timestamps: that of the
    largest (latest) successful write and that of the
    largest successful read
  • Reading a data item
  • If Ti's timestamp < the item's write timestamp, we
    cannot read a value written in the future; a
    rollback is necessary
  • If Ti's timestamp >= the item's write timestamp,
    perform the read and update the item's read
    timestamp
  • Writing a data item
  • If Ti's timestamp < the item's read timestamp, we
    cannot overwrite an item read in the future; a
    rollback is necessary
  • If Ti's timestamp < the item's write timestamp, we
    cannot overwrite a record written in the future; a
    rollback is necessary
  • Otherwise, the write is successful
  • Implementation issues: timestamp assignment must
    be atomic, and each read and write must be atomic
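A compact sketch (all class, field, and method names are assumptions, not
from the slides) of these checks in Java; a false return means the
transaction must roll back and retry with a new timestamp.

  class DataItem {
      long readTS = 0;    // largest timestamp of a successful read
      long writeTS = 0;   // largest timestamp of a successful write
  }

  class TimestampOrdering {
      // Returns false when the transaction with timestamp ts must roll back.
      boolean read(long ts, DataItem d) {
          if (ts < d.writeTS) return false;         // value was written in ts's future
          d.readTS = Math.max(d.readTS, ts);        // record the successful read
          return true;
      }

      boolean write(long ts, DataItem d, Object newValue) {
          if (ts < d.readTS) return false;          // a later transaction already read it
          if (ts < d.writeTS) return false;         // a later transaction already wrote it
          d.writeTS = ts;                           // perform the write, record timestamp
          // ... store newValue ...
          return true;
      }
  }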

43
Schedule Possible Under Timestamp Protocol