Chapter 6.1: Process Synchronization Part 1 - PowerPoint PPT Presentation

1
Chapter 6.1 Process Synchronization, Part 1
2
Process Synchronization
  • Lecture 6.1
  • Background
  • The Critical-Section Problem
  • Peterson's Solution
  • Synchronization Hardware
  • Semaphores
  • Lecture 6.2
  • Classic Problems of Synchronization
  • Monitors
  • Synchronization Examples
  • Atomic Transactions


3
Background
  • While we will talk primarily about processes in
    this chapter, what we say can and does apply to
    threads (or tasks in Linux).
  • Here's another definition, from Stallings
    (another source): Concurrent: pertaining to
    processes or threads that take place within a
    common interval of time, during which they may
    have to alternately share common resources.
  • Concurrent processes can exist in the system at
    the same time.
  • Concurrent threads may execute completely
    independently of each other or they can execute
    in cooperation.
  • Asynchronous Execution: processes that operate
    independently of each other but must occasionally
    communicate and synchronize to perform
    cooperative tasks are said to execute
    asynchronously.
  • Such processes simply must be synchronized
    properly with regard to sharing data.
  • If they are not, very undesirable, and
    unpredictable, results may occur.
  • Important notes:
  • Communication and synchronization are necessary
    for asynchronously executing threads and
    processes.

4
Background - more
  • So we will address processes that can affect or
    be affected by other processes.
  • Sharing may be
  • direct - as in sharing a logical address space or
    data, or
  • indirect - as when processes share data through
    files or messages. We discussed the former in
    Threads.
  • But the problem is that concurrent access to
    shared data may result in data inconsistency.
  • This is very serious!!
  • So, what do we do to avoid major consistency
    problems?
  • We will talk about mechanisms (both hardware and
    software) that ensure the orderly execution of
    cooperating processes so that data consistency
    is maintained.
  • The sharing of global resources is filled with
    peril.

5
Background - more
  • If two processes both make use of the same global
    variable and both perform reads and writes on
    that variable, then the order in which the
    various reads and writes occur is critical.
  • The same is true for I/O channels. This is
    further complicated because repeating a sequence
    of I/O requests is necessarily inconclusive:
    there is no way to determine the exact relative
    speeds of execution.
  • To really illustrate the problem in a
    single-processor system, let's consider the
    following example:

6
Sample from Stallings
    void echo() {
        chin = getchar();
        chout = chin;
        putchar(chout);
    }
  • A very simple routine that echoes input from the
    keyboard to the monitor. (Go through the code.)
  • Now consider that this code is shared and is thus
    global to more than one application.
  • It is a commonly used routine and sharing
    something like this makes sense and saves space.
  • But consider the following sequence:
  • Process P1 invokes echo() and is interrupted
    right after the getchar() stores a value in chin.
    Say, 'x'.
  • Process P2 is activated and invokes echo(), which
    runs to conclusion, inputting and then displaying
    a single character, say 'y', on the screen.
  • Process P1 resumes, but the 'x' was overwritten
    with the 'y', which is written onto the screen.
    'x' is lost.
  • Furthermore, 'y' is printed again by P1.
  • This is clearly not what we want, but the example
    does serve to show the hazard of shared code.

7
Stallings - author of Operating Systems - more
  • Suppose that only one process at a time may be in
    that procedure. Then:
  • Process P1 invokes echo() and is interrupted
    right after it inputs an 'x'.
  • 'x', then, is stored in chin.
  • Process P2 is activated and invokes echo().
  • But because P1 is still inside echo(), even
    though it is currently suspended (in a wait
    queue), P2 is blocked from entering the
    procedure and is suspended, awaiting the
    availability of echo().
  • Later, P1 is resumed and completes its execution
    of echo(). 'x' is displayed.
  • When P1 exits echo(), the block on P2 is removed,
    and P2 can be rescheduled and the echo()
    procedure properly invoked.
  • Clearly the message is that, in order to protect
    shared variables / data / etc., we must control
    access to the code that accesses those shared
    variables.
  • Imposing a discipline so that only one process at
    a time can enter echo() ensures that the
    described error will not occur.
  • This is what we will discuss this chapter.

8
How about Multiple Processors?
  • For multiple processors, same problems and same
    solutions exist for sharing resources.
  • Suppose, first, we have no mechanism for
    protecting shared resources in a multiprocessor
    environment.
  • Process P1 and P2 are both executing on separate
    processors. Both processes invoke echo(). Here's
    what happens:

        Process P1          Process P2
        chin = getchar()    -
        -                   chin = getchar()
        chout = chin        chout = chin
        putchar(chout)      -
        -                   putchar(chout)

  • The result is that the character input to P1 is
    lost before being displayed, and the character
    input to P2 is displayed by both P1 and P2.
    (Note: this appears to be the same problem as on
    single-processor systems.)
  • We could also enforce the discipline (as we did
    for single processors) that only one process at a
    time may be executing echo(); in addition to the
    blocking and its extra overhead, the results are
    the same as in the single-processor case.
  • We have two processes executing simultaneously,
    both trying to access the same global variable.
  • The solution is the same: control access to the
    shared resource.

9
Race
  • We refer to a situation where several processes
    have access to the same shared resource, can
    access it concurrently, and where the outcome
    depends on the order of execution as a race
    condition.
  • To guard against this, we need to ensure that
    only one process at a time can be manipulating a
    shared resource.
  • Access must be synchronized and coordinated.
  • Another definition of a Race Condition: a
    situation in which multiple processes access
    and manipulate shared data, with the outcome
    dependent upon the relative timing of the
    processes. (Stallings, Operating Systems:
    Internals and Design Principles)

10
Here Are Some Key Terms - Stallings
  • Critical Section: a section of code within a
    process that requires access to shared resources
    and that may not be executed while another
    process is in the corresponding section of code.
  • Deadlock (Chapter 7): a situation in which two or
    more processes are unable to proceed because each
    is waiting for one of the others to do something.
  • Mutual Exclusion: the requirement that when one
    process is in a critical section that accesses
    shared resources, no other process may be in a
    critical section that accesses any of those
    shared resources.
  • Race Condition: a situation in which multiple
    threads or processes read and write a shared data
    item and the final result depends on the relative
    timing of their execution.
  • Starvation: a situation in which a runnable
    process is overlooked indefinitely by the
    scheduler; although it is able to proceed, it is
    never chosen.

11
The Critical Section Problem
  • Essentially we have a system of n processes,
    each of which has some code called a critical
    section, within which the process may be
    accessing common variables, updating status
    tables, accessing a file that other processes
    also need access to, etc.
  • The important thing to recognize is that when one
    process is executing code in its critical
    section, no other process can be allowed to
    execute its own critical section.
  • So, we must control access. Simply put, a process
    desiring to enter its critical section must
    actually request permission.
  • An Entry Section of code in a process is where a
    process requests to enter its critical section.
  • An Exit Section (surprise!) is some code where
    the process releases its hold on the critical
    section.
  • Clearly, the actual execution of critical code
    (the critical section) lies in between the Entry
    Section and the Exit Section.
  • Code following the Exit Section is called the
    remainder section.

12
Solution to Critical-Section Problem
  • 1. Mutual Exclusion - If process Pi is executing
    in its critical section, then no other processes
    can be executing in their critical sections
  • 2. Progress - If no process is executing in its
    critical section and there exist some processes
    that wish to enter their critical section, then
    the selection of the processes that will enter
    the critical section next cannot be postponed
    indefinitely
  • 3. Bounded Waiting - A bound must exist on the
    number of times that other processes are allowed
    to enter their critical sections after a process
    has made a request to enter its critical section
    and before that request is granted
  • Consider the following code:

13
Figure 6.1 Critical Section
    do {
        // entry section (includes the request to
        // enter the critical section)

        // critical section (executes here if
        // permission is granted)

        // exit section

        // remainder section
    } while (TRUE);
  • Note this is a do...while.
  • So the code will execute its entry section code
    at least once before the loop condition is
    tested.
14
More
  • We must realize that at any given point in time,
    there are many kernel-mode processes active in
    the operating system.
  • (Recall kernel mode: a privileged mode of
    execution reserved for the kernel of the
    operating system. Typically kernel mode allows
    access to regions of main memory that are
    unavailable to processes executing in a less
    privileged mode and also enables the execution of
    certain machine instructions that are restricted
    to kernel mode. Kernel mode is also referred to
    as system mode or privileged mode.)
  • So, when a lot of kernel code is being executed,
    there are strong possibilities of encountering
    race conditions.
  • Example: a kernel data structure maintains a
    list of all open files in the system.
  • This list must be accessed and updated whenever a
    file is opened or closed.
  • If two processes were to open files
    simultaneously, the separate updates could result
    in a race condition.
  • Example: other kernel data structures prone to
    race conditions include memory-allocation tables,
    process lists, structures for interrupt handling,
    and more.
  • Kernel developers must ensure there are no races
    here!!!

15
Still more
  • Complicating these issues, we note there are two
    kinds of kernels that impact the execution of
    critical sections. We may have:
  • 1. preemptive kernels, which allow a process
    running in kernel mode to be preempted, or
  • 2. non-preemptive kernels, which do not allow a
    process running in kernel mode to be preempted.
  • In a non-preemptive kernel, a kernel-mode process
    will run until it exits the kernel code, blocks,
    or voluntarily yields control of the CPU.
  • We will get back to preemptive kernels and
    non-preemptive kernels ahead.
  • (It is important to note that preemptive kernels
    are really tricky to design for SMP architectures
    because two kernel-mode processes may be running
    simultaneously on two different processors.
  • Preemptive kernels are better for real-time
    processing, so that a real-time process can
    preempt another kernel process to provide
    real-time response.
  • Also, kernel-mode processes are likely to run for
    only short times, so this will often not cause
    too much degradation of service.)

16
Lastly,
  • Review Peterson's Solution.
  • I'm going to skip going through the details of
    this solution.
  • It is interesting and points out the difficulties
    in solving the critical section problem.
  • However the problem is solved, we need to show
    that:
  • 1. mutual exclusion is preserved,
  • 2. the progress requirement is satisfied, and
  • 3. the bounded waiting requirement is met.
  • We will now look at synchronization hardware and
    synchronization software to address mutual
    exclusion, progress, and the bounded wait issue.

17
Synchronization
  • Many systems provide hardware support for
    critical section code
  • What we really need is a lock.
  • And this is the basic approach
  • Critical regions are protected by locks and a
    process must acquire a lock before entering a
    critical section and release it when it leaves
    the critical section.
  • Both hardware and software approaches to protect
    critical sections are based on locks.
  • But locks can be quite sophisticated as your
    book states.
  • Of course, implementing locks in hardware can
    make programming easier, and execution will
    certainly be quicker.

18
Synchronization of Hardware Using Locks
  • In uniprocessors the problem could be made rather
    simple: merely disable interrupts while a kernel
    process is in its critical section.
  • No problems if we have non-preemptive kernels.
  • This is the approach that non-preemptive kernels
    take.
  • Currently running code would execute without
    preemption.
  • But disabling interrupts is far too inefficient /
    impractical on multiprocessor systems.
  • The instruction to disable interrupts must be
    communicated to all processors (there can be
    several), and that takes time.
  • This delays kernel processes from entering their
    critical sections.

19
Synchronization of Hardware Using Locks
  • Modern machines provide special atomic hardware
    instructions, which means that an instruction
    itself cannot be interrupted while it is
    executing.
  • The instruction can do more than one thing, such
    as test a variable and set it as part of the
    execution of a single instruction, or swap the
    contents of two memory words, all done within a
    single instruction.
  • We call this atomic execution. Again, it is the
    execution of a single instruction atomically!
  • Let's abstract the concept of these types of
    instructions.
  • We consider two such instructions:
  • TestAndSet() and
  • Swap()
  • Both provide for mutual exclusion, but unless we
    implement them carefully, they might not provide
    for bounded waiting.

20
Test And Set Instruction
  • Definition: Here's the Test and Set instruction.
  • It is executed atomically.
  • Consider the code:

    boolean TestAndSet(boolean *target) {
        boolean rv = *target;  // rv set to the value pointed to by target
        *target = TRUE;        // *target set to TRUE
        return rv;             // value passed to TestAndSet() returned via rv
    }

  • We will see how this is used (implemented) on the
    next slide.
  • But you can see that a pointer to a boolean is
    passed to TestAndSet(); a boolean variable rv is
    set to the pointed-to value; the dereferenced
    value of the actual parameter is set to TRUE; and
    we return the boolean value of rv, which will be
    whatever was passed to TestAndSet().
  • Note: TestAndSet() returns the boolean value
    passed to it, but it sets the global variable to
    TRUE.

21
Mutual Exclusion via TestAndSet()
  • But we implement the mutual exclusion shown by
    TestAndSet() by using a global boolean lock as
    the parameter; it is initially set to FALSE.
  • Consider:

    do {
        while (TestAndSet(&lock))
            ;  // do nothing -- spin
        // If lock is FALSE (see above), the value of the predicate is
        // FALSE (but remember, lock itself was set to TRUE), and we drop
        // into the critical section.
        // If TestAndSet() returns TRUE (which it would if another process
        // were already executing its critical section), do nothing: spin.

        // critical section

        lock = FALSE;  // reset the global variable on exit

        // remainder section
    } while (TRUE);

  • Remember, TestAndSet() is atomic; the routine
    above includes TestAndSet(), but the overall
    execution is NOT atomic (only the TestAndSet()
    part).
  • Let's look at another hardware instruction, one
    that uses two variables: a lock and a key.

22
End of Chapter 6, Part 1