Title: Processes and Threads
1. Processes and Threads
- 2.1 Processes
- 2.2 Threads
- 2.3 Interprocess communication
- 2.4 Classical IPC problems
- 2.5 Scheduling
2. Processes: The Process Model
- Multiprogramming of four programs
- Conceptual model of four independent, sequential processes
- Only one program active at any instant
3. Process Creation
- Principal events that cause process creation:
- System initialization
- Execution of a process-creation system call (a minimal fork/exec sketch follows this list)
- User request to create a new process
- Initiation of a batch job
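A minimal sketch of the system-call route on a POSIX system: the parent clones itself with fork() and the child replaces its memory image with execlp(). The program launched here ("ls -l") and the error handling are illustrative choices, not part of the slides.

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        pid_t pid = fork();               /* create a child process */
        if (pid < 0) {
            perror("fork");               /* creation failed */
            exit(EXIT_FAILURE);
        } else if (pid == 0) {
            /* child: replace its memory image with a new program */
            execlp("ls", "ls", "-l", (char *)NULL);
            perror("execlp");             /* only reached if exec fails */
            exit(EXIT_FAILURE);
        } else {
            /* parent: wait for the child to terminate */
            int status;
            waitpid(pid, &status, 0);
            printf("child %d exited\n", (int)pid);
        }
        return 0;
    }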
4. Process Termination
- Conditions that terminate processes:
- Normal exit (voluntary)
- Error exit (voluntary)
- Fatal error (involuntary)
- Killed by another process (involuntary)
5. Process Hierarchies
- Parent creates a child process; child processes can create their own children
- Forms a hierarchy
- UNIX calls this a "process group"
- Windows has no concept of process hierarchy
- all processes are created equal
6. Process States (1)
- Possible process states
- running
- blocked
- ready
- Transitions between states shown
7. Process States (2)
- Lowest layer of process-structured OS
- handles interrupts, scheduling
- Above that layer are sequential processes
8. Implementation of Processes (1)
- Fields of a process table entry
9. Implementation of Processes (2)
- Skeleton of what lowest level of OS does when an
interrupt occurs
10. Threads: The Thread Model (1)
- (a) Three processes each with one thread
- (b) One process with three threads
11. The Thread Model (2)
- Items shared by all threads in a process
- Items private to each thread
12. The Thread Model (3)
- Each thread has its own stack
13. Thread Usage (1)
- A word processor with three threads
14. Thread Usage (2)
- A multithreaded Web server
15. Thread Usage (3)
- Rough outline of code for the previous slide (a runnable sketch follows this list)
- (a) Dispatcher thread
- (b) Worker thread
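The figure itself is not reproduced here; the following is a hedged, self-contained sketch of the dispatcher/worker structure using POSIX threads and semaphores. Integer "requests", the single-slot mailbox, and the -1 exit sentinel are simplifications standing in for real network requests and a request queue.

    #include <pthread.h>
    #include <semaphore.h>
    #include <stdio.h>

    #define N_WORKERS  3
    #define N_REQUESTS 10

    /* A tiny single-slot "mailbox" standing in for the request queue. */
    static int mailbox;
    static sem_t slot_free;   /* 1 when the mailbox is empty  */
    static sem_t slot_full;   /* 1 when the mailbox holds work */

    /* Worker: repeatedly take a request from the mailbox and "process" it. */
    static void *worker_thread(void *arg) {
        long id = (long)arg;
        for (;;) {
            sem_wait(&slot_full);          /* wait for work to be handed off */
            int request = mailbox;
            sem_post(&slot_free);
            if (request < 0)               /* sentinel: no more work */
                return NULL;
            printf("worker %ld handling request %d\n", id, request);
        }
    }

    int main(void) {
        pthread_t workers[N_WORKERS];
        sem_init(&slot_free, 0, 1);
        sem_init(&slot_full, 0, 0);
        for (long i = 0; i < N_WORKERS; i++)
            pthread_create(&workers[i], NULL, worker_thread, (void *)i);

        /* Dispatcher: accept "requests" and hand each one to a worker. */
        for (int r = 0; r < N_REQUESTS + N_WORKERS; r++) {
            sem_wait(&slot_free);
            mailbox = (r < N_REQUESTS) ? r : -1;   /* -1 tells a worker to exit */
            sem_post(&slot_full);
        }
        for (int i = 0; i < N_WORKERS; i++)
            pthread_join(workers[i], NULL);
        return 0;
    }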
16. Thread Usage (4)
- Three ways to construct a server
17. Implementing Threads in User Space
- A user-level threads package
18. Implementing Threads in the Kernel
- A threads package managed by the kernel
19. Hybrid Implementations
- Multiplexing user-level threads onto kernel-level threads
20. Scheduler Activations
- Goal: mimic the functionality of kernel threads
- while gaining the performance of user-space threads
- Avoids unnecessary user/kernel transitions
- Kernel assigns virtual processors to each process
- lets the runtime system allocate threads to processors
- Problem: fundamental reliance on the kernel (lower layer) calling procedures in user space (higher layer)
21. Pop-Up Threads
- Creation of a new thread when a message arrives
- (a) before message arrives
- (b) after message arrives
22. Making Single-Threaded Code Multithreaded (1)
- Conflicts between threads over the use of a
global variable
23. Making Single-Threaded Code Multithreaded (2)
- Threads can have private global variables
24. Interprocess Communication: Race Conditions
- Two processes want to access shared memory at the same time
25. Critical Regions (1)
- Four conditions to provide mutual exclusion:
- No two processes simultaneously in their critical regions
- No assumptions made about speeds or numbers of CPUs
- No process running outside its critical region may block another process
- No process must wait forever to enter its critical region
26. Critical Regions (2)
- Mutual exclusion using critical regions
27. Algorithm 1
- Shared variables:
- int turn; initially turn = 0
- turn == i means Pi can enter its critical section
- Process Pi:
- do {
-   while (turn != i) ;   /* busy wait */
-   critical section
-   turn = j;
-   remainder section
- } while (1);
28. Mutual Exclusion with Busy Waiting (1)
- Proposed solution to critical region problem
- (a) Process 0. (b) Process 1.
29. Mutual Exclusion with Busy Waiting (2)
- Peterson's solution for achieving mutual exclusion (a sketch follows)
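A sketch of Peterson's algorithm for two processes, in the usual enter_region/leave_region form. Caveat: on modern hardware this only works as written if the shared variables are accessed with sequentially consistent atomics or memory barriers; the plain C below keeps the textbook form.

    #define N 2                      /* number of processes              */

    static int turn;                 /* whose turn is it?                */
    static int interested[N];        /* all values initially 0 (FALSE)   */

    /* process is 0 or 1 */
    void enter_region(int process) {
        int other = 1 - process;     /* the other process                */
        interested[process] = 1;     /* show that you are interested     */
        turn = process;              /* set flag                         */
        while (turn == process && interested[other] == 1)
            ;                        /* busy wait until it is safe       */
    }

    void leave_region(int process) {
        interested[process] = 0;     /* indicate departure from critical region */
    }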
30. Synchronization Hardware
- Test and modify the content of a word atomically:
- boolean TestAndSet(boolean *target) {
-   boolean rv = *target;
-   *target = true;
-   return rv;
- }
31. Mutual Exclusion with Test-and-Set
- Shared data: boolean lock = false;
- Process Pi:
- do {
-   while (TestAndSet(&lock)) ;   /* busy wait */
-   critical section
-   lock = false;
-   remainder section
- } while (1);
32. Mutual Exclusion with Busy Waiting (3)
- Entering and leaving a critical region using the TSL instruction
33. Synchronization Hardware
- Atomically swap two variables:
- void Swap(boolean *a, boolean *b) {
-   boolean temp = *a;
-   *a = *b;
-   *b = temp;
- }
34. Mutual Exclusion with Swap
- Shared data (initialized to false): boolean lock;
- Process Pi (with local boolean key):
- do {
-   key = true;
-   while (key == true)
-     Swap(&lock, &key);
-   critical section
-   lock = false;
-   remainder section
- } while (1);
35. Semaphores
- Synchronization tool that does not require busy waiting
- Semaphore S: integer variable
- can only be accessed via two indivisible (atomic) operations:
- wait/down(S): while (S <= 0) do no-op; S--;
- signal/up(S): S++;
36. Critical Section of n Processes
- Shared data:
- semaphore mutex;   // initially mutex = 1
- Process Pi:
- do {
-   wait(mutex);
-   critical section
-   signal(mutex);
-   remainder section
- } while (1);
- (a compilable POSIX version of this pattern follows)
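A concrete version of the same pattern, assuming POSIX semaphores (semaphore.h) and pthreads; the shared counter and the loop count are placeholders for the real critical-section work.

    #include <pthread.h>
    #include <semaphore.h>
    #include <stdio.h>

    static sem_t mutex;          /* semaphore used as a lock, initially 1   */
    static int shared_counter;   /* shared resource protected by mutex      */

    static void *proc(void *arg) {
        for (int i = 0; i < 100000; i++) {
            sem_wait(&mutex);        /* wait/down(mutex)  */
            shared_counter++;        /* critical section  */
            sem_post(&mutex);        /* signal/up(mutex)  */
            /* remainder section */
        }
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        sem_init(&mutex, 0, 1);      /* initially mutex = 1 */
        pthread_create(&t1, NULL, proc, NULL);
        pthread_create(&t2, NULL, proc, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("counter = %d\n", shared_counter);  /* always 200000 */
        return 0;
    }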
37. Semaphore Implementation
- Define a semaphore as a record:
- typedef struct {
-   int value;
-   struct process *L;   /* list of waiting processes */
- } semaphore;
- Assume two simple operations:
- block() suspends the process that invokes it
- wakeup(P) resumes the execution of a blocked process P
38. Implementation
- Semaphore operations now defined as:
- wait/down(S):
-   S.value--;
-   if (S.value < 0) {
-     add this process to S.L;
-     block();
-   }
- signal/up(S):
-   S.value++;
-   if (S.value <= 0) {
-     remove a process P from S.L;
-     wakeup(P);
-   }
39. Mutexes
- Implementation of mutex_lock and mutex_unlock (a C sketch of the TSL idea follows)
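The slide's figure shows mutex_lock and mutex_unlock built on the TSL instruction in assembly; the following is only a rough C rendering of that idea, using a GCC/Clang atomic builtin in place of TSL and sched_yield() in place of a user-level thread package's thread_yield.

    #include <sched.h>       /* sched_yield() */
    #include <stdbool.h>

    static bool mutex;       /* false = unlocked, true = locked */

    /* Atomically read the old value of mutex and set it to true.
     * If the old value was false we acquired the lock; otherwise give up
     * the CPU instead of spinning, which is what distinguishes a mutex
     * in a user-level threads package from pure busy waiting. */
    void mutex_lock(void) {
        while (__atomic_test_and_set(&mutex, __ATOMIC_ACQUIRE))
            sched_yield();            /* lock taken: let another thread run */
    }

    void mutex_unlock(void) {
        __atomic_clear(&mutex, __ATOMIC_RELEASE);   /* store false */
    }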
40. Semaphore as a General Synchronization Tool
- Execute B in Pj only after A has executed in Pi
- Use a semaphore flag initialized to 0
- Code:
- Pi:  A; signal/up(flag);
- Pj:  wait/down(flag); B;
41. Deadlock and Starvation
- Deadlock: two or more processes are waiting indefinitely for an event that can be caused by only one of the waiting processes
- Let S and Q be two semaphores initialized to 1
- P0:  wait(S); wait(Q); ... signal(S); signal(Q);
- P1:  wait(Q); wait(S); ... signal(Q); signal(S);
- Starvation: indefinite blocking. A process may never be removed from the semaphore queue in which it is suspended
42. Two Types of Semaphores
- Counting semaphore: integer value can range over an unrestricted domain
- Binary semaphore: integer value can range only between 0 and 1; can be simpler to implement
- A counting semaphore S can be implemented using binary semaphores
43. Implementing S as a Binary Semaphore
- Data structures:
- binary-semaphore S1, S2;
- int C;
- Initialization:
- S1 = 1
- S2 = 0
- C = initial value of semaphore S
44. Implementing S
- wait operation:
-   wait(S1);
-   C--;
-   if (C < 0) {
-     signal(S1);
-     wait(S2);
-   }
-   signal(S1);
- signal operation:
-   wait(S1);
-   C++;
-   if (C <= 0)
-     signal(S2);
-   else
-     signal(S1);
45. Sleep and Wakeup
- Producer-consumer problem with fatal race
condition
46. Semaphores
- The producer-consumer problem using semaphores (part 1)
47. Semaphores
- The producer-consumer problem using semaphores (part 2; a compilable sketch follows)
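The figure itself is not reproduced; below is a compilable sketch with the same structure, using POSIX semaphores named mutex, empty, and full. The buffer size (8) and the number of items (32) are arbitrary choices for illustration.

    #include <pthread.h>
    #include <semaphore.h>
    #include <stdio.h>

    #define N 8                       /* number of slots in the buffer */

    static int buffer[N];
    static int in, out;               /* next slot to fill / to empty       */
    static sem_t mutex;               /* protects the buffer, initially 1   */
    static sem_t empty;               /* counts empty slots, initially N    */
    static sem_t full;                /* counts full slots, initially 0     */

    static void *producer(void *arg) {
        for (int item = 0; item < 32; item++) {
            sem_wait(&empty);         /* down(empty): wait for a free slot    */
            sem_wait(&mutex);         /* down(mutex): enter critical region   */
            buffer[in] = item;
            in = (in + 1) % N;
            sem_post(&mutex);         /* up(mutex)                            */
            sem_post(&full);          /* up(full): one more full slot         */
        }
        return NULL;
    }

    static void *consumer(void *arg) {
        for (int i = 0; i < 32; i++) {
            sem_wait(&full);          /* down(full): wait for an item         */
            sem_wait(&mutex);
            int item = buffer[out];
            out = (out + 1) % N;
            sem_post(&mutex);
            sem_post(&empty);
            printf("consumed %d\n", item);
        }
        return NULL;
    }

    int main(void) {
        pthread_t p, c;
        sem_init(&mutex, 0, 1);
        sem_init(&empty, 0, N);
        sem_init(&full, 0, 0);
        pthread_create(&p, NULL, producer, NULL);
        pthread_create(&c, NULL, consumer, NULL);
        pthread_join(p, NULL);
        pthread_join(c, NULL);
        return 0;
    }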
48. Monitors (1)
49. Monitors (2)
- Outline of producer-consumer problem with monitors
- only one monitor procedure active at one time
- buffer has N slots
50. Monitors (3)
- Solution to producer-consumer problem in Java (part 1)
51. Monitors (4)
- Solution to producer-consumer problem in Java (part 2; an analogous C sketch with condition variables follows)
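The slides give the solution in Java; as a rough C analogue, a pthread mutex plus two condition variables plays the role of the monitor lock and its condition variables. The names insert/remove_item and the buffer size are illustrative, not taken from the slides.

    #include <pthread.h>

    #define N 8                                 /* buffer slots */

    static int buffer[N], count, in, out;
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;  /* "one procedure active at a time" */
    static pthread_cond_t not_full  = PTHREAD_COND_INITIALIZER;
    static pthread_cond_t not_empty = PTHREAD_COND_INITIALIZER;

    /* monitor procedure: insert an item, waiting while the buffer is full */
    void insert(int item) {
        pthread_mutex_lock(&lock);
        while (count == N)
            pthread_cond_wait(&not_full, &lock);    /* releases lock while waiting */
        buffer[in] = item;
        in = (in + 1) % N;
        count++;
        pthread_cond_signal(&not_empty);
        pthread_mutex_unlock(&lock);
    }

    /* monitor procedure: remove an item, waiting while the buffer is empty */
    int remove_item(void) {
        pthread_mutex_lock(&lock);
        while (count == 0)
            pthread_cond_wait(&not_empty, &lock);
        int item = buffer[out];
        out = (out + 1) % N;
        count--;
        pthread_cond_signal(&not_full);
        pthread_mutex_unlock(&lock);
        return item;
    }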
52. Monitors
53. Message Passing
- The producer-consumer problem with N messages
54. Barriers
- Use of a barrier (a pthread_barrier sketch follows this list)
- processes approaching a barrier
- all processes but one blocked at the barrier
- last process arrives; all are let through
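A small example of the same idea with POSIX pthread_barrier_t (available on Linux and most POSIX systems, though not everywhere): each thread blocks in pthread_barrier_wait until all N_THREADS have arrived, then all proceed to the next phase.

    #include <pthread.h>
    #include <stdio.h>

    #define N_THREADS 4

    static pthread_barrier_t barrier;

    static void *phase_worker(void *arg) {
        long id = (long)arg;
        printf("thread %ld finished phase 1\n", id);
        pthread_barrier_wait(&barrier);       /* block until all N_THREADS arrive */
        printf("thread %ld starting phase 2\n", id);
        return NULL;
    }

    int main(void) {
        pthread_t t[N_THREADS];
        pthread_barrier_init(&barrier, NULL, N_THREADS);
        for (long i = 0; i < N_THREADS; i++)
            pthread_create(&t[i], NULL, phase_worker, (void *)i);
        for (int i = 0; i < N_THREADS; i++)
            pthread_join(t[i], NULL);
        pthread_barrier_destroy(&barrier);
        return 0;
    }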
55. Dining Philosophers (1)
- Philosophers alternately eat and think
- Eating needs 2 forks
- Pick up one fork at a time
- How to prevent deadlock?
56. Dining Philosophers (2)
- A nonsolution to the dining philosophers problem
57. Dining Philosophers (3)
- Solution to the dining philosophers problem (part 1)
58. Dining Philosophers (4)
- Solution to the dining philosophers problem (part 2; a deadlock-free sketch follows)
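The slides' own solution uses a per-philosopher state array with semaphores; the sketch below instead shows a different but standard way to avoid deadlock, resource ordering: each philosopher always picks up the lower-numbered fork first, so a circular wait cannot form. The three eating rounds are arbitrary.

    #include <pthread.h>
    #include <stdio.h>

    #define N 5                                  /* philosophers / forks */

    static pthread_mutex_t fork_lock[N];

    /* Break the circular wait by always taking the lower-numbered fork first. */
    static void *philosopher(void *arg) {
        long i = (long)arg;
        int left = i, right = (i + 1) % N;
        int first  = left < right ? left  : right;
        int second = left < right ? right : left;
        for (int round = 0; round < 3; round++) {
            /* think */
            pthread_mutex_lock(&fork_lock[first]);
            pthread_mutex_lock(&fork_lock[second]);
            printf("philosopher %ld eating\n", i);   /* eat with both forks */
            pthread_mutex_unlock(&fork_lock[second]);
            pthread_mutex_unlock(&fork_lock[first]);
        }
        return NULL;
    }

    int main(void) {
        pthread_t t[N];
        for (int i = 0; i < N; i++) pthread_mutex_init(&fork_lock[i], NULL);
        for (long i = 0; i < N; i++) pthread_create(&t[i], NULL, philosopher, (void *)i);
        for (int i = 0; i < N; i++) pthread_join(t[i], NULL);
        return 0;
    }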
59. The Readers and Writers Problem
- A solution to the readers and writers problem (a semaphore-based sketch follows)
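A sketch of the classic readers-preference solution with two semaphores: the first reader locks writers out of the database and the last reader lets them back in. This variant can starve writers. The function names are illustrative.

    #include <semaphore.h>

    static sem_t mutex;        /* protects read_count, initially 1              */
    static sem_t db;           /* exclusive access to the database, initially 1 */
    static int read_count;     /* number of readers currently inside            */

    void rw_init(void) {
        sem_init(&mutex, 0, 1);
        sem_init(&db, 0, 1);
    }

    void reader_enter(void) {
        sem_wait(&mutex);
        if (++read_count == 1)      /* first reader locks out writers */
            sem_wait(&db);
        sem_post(&mutex);
    }

    void reader_exit(void) {
        sem_wait(&mutex);
        if (--read_count == 0)      /* last reader lets writers in */
            sem_post(&db);
        sem_post(&mutex);
    }

    void writer_enter(void) { sem_wait(&db); }   /* writers need the database exclusively */
    void writer_exit(void)  { sem_post(&db); }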
60. The Sleeping Barber Problem (1)
61. The Sleeping Barber Problem (2)
- Solution to the sleeping barber problem
62. Scheduling: Introduction to Scheduling (1)
- Bursts of CPU usage alternate with periods of I/O wait
- a CPU-bound process
- an I/O-bound process
63. Introduction to Scheduling (2)
- Scheduling Algorithm Goals
64. Histogram of CPU-burst Times
65. CPU Scheduler
- Selects from among the processes in memory that are ready to execute, and allocates the CPU to one of them
- CPU scheduling decisions may take place when a process:
- 1. Switches from running to waiting state
- 2. Switches from running to ready state
- 3. Switches from waiting to ready state
- 4. Terminates
- Scheduling under 1 and 4 is nonpreemptive
- All other scheduling is preemptive
66. Dispatcher
- The dispatcher module gives control of the CPU to the process selected by the short-term scheduler; this involves:
- switching context
- switching to user mode
- jumping to the proper location in the user program to restart that program
- Dispatch latency: the time it takes for the dispatcher to stop one process and start another running
67. Scheduling Criteria
- CPU utilization: keep the CPU as busy as possible
- Throughput: number of processes that complete their execution per time unit
- Turnaround time: amount of time to execute a particular process
- Waiting time: amount of time a process has been waiting in the ready queue
- Response time: amount of time from when a request was submitted until the first response is produced, not output (for time-sharing environments)
68. Optimization Criteria
- Max CPU utilization
- Max throughput
- Min turnaround time
- Min waiting time
- Min response time
69. First-Come, First-Served (FCFS) Scheduling
- Process Burst Time
- P1 24
- P2 3
- P3 3
- Suppose that the processes arrive in the order P1, P2, P3. The Gantt chart for the schedule is:
- | P1 (0-24) | P2 (24-27) | P3 (27-30) |
- Waiting times: P1 = 0, P2 = 24, P3 = 27
- Average waiting time: (0 + 24 + 27)/3 = 17
70. FCFS Scheduling (Cont.)
- Suppose that the processes arrive in the order P2, P3, P1
- The Gantt chart for the schedule is:
- | P2 (0-3) | P3 (3-6) | P1 (6-30) |
- Waiting times: P1 = 6, P2 = 0, P3 = 3
- Average waiting time: (6 + 0 + 3)/3 = 3
- Much better than the previous case
- Convoy effect: short process stuck behind a long process
71. Shortest-Job-First (SJF) Scheduling
- Associate with each process the length of its next CPU burst. Use these lengths to schedule the process with the shortest time
- Two schemes:
- nonpreemptive: once the CPU is given to the process, it cannot be preempted until it completes its CPU burst
- preemptive: if a new process arrives with a CPU burst length less than the remaining time of the currently executing process, preempt. This scheme is known as Shortest-Remaining-Time-First (SRTF)
- SJF is optimal: gives the minimum average waiting time for a given set of processes
72. Example of Non-Preemptive SJF
- Process Arrival Time Burst Time
- P1 0.0 7
- P2 2.0 4
- P3 4.0 1
- P4 5.0 4
- SJF (non-preemptive) Gantt chart: | P1 (0-7) | P3 (7-8) | P2 (8-12) | P4 (12-16) |
- Average waiting time: (0 + 6 + 3 + 7)/4 = 4
73. Example of Preemptive SJF
- Process Arrival Time Burst Time
- P1 0.0 7
- P2 2.0 4
- P3 4.0 1
- P4 5.0 4
- SJF (preemptive) Gantt chart: | P1 (0-2) | P2 (2-4) | P3 (4-5) | P2 (5-7) | P4 (7-11) | P1 (11-16) |
- Average waiting time: (9 + 1 + 0 + 2)/4 = 3
74. Determining Length of Next CPU Burst
- Can only estimate the length.
- Can be done by using the length of previous CPU
bursts, using exponential averaging.
75. Prediction of the Length of the Next CPU Burst
76. Examples of Exponential Averaging
- The estimate is τ(n+1) = α·t(n) + (1 - α)·τ(n), where t(n) is the length of the n-th CPU burst and τ(n) is the previous estimate
- α = 0:
- τ(n+1) = τ(n)
- recent history does not count
- α = 1:
- τ(n+1) = t(n)
- only the actual last CPU burst counts
- If we expand the formula, we get:
- τ(n+1) = α·t(n) + (1 - α)·α·t(n-1) + ... + (1 - α)^j·α·t(n-j) + ... + (1 - α)^(n+1)·τ(0)
- Since both α and (1 - α) are less than or equal to 1, each successive term has less weight than its predecessor (a small C helper follows)
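A one-line C helper that computes the estimate above; the sample values in the comment (α = 0.5, initial guess 10 ms, observed bursts 6, 4, 6, 13 ms) are made up for illustration.

    /* Exponential averaging: tau_next = alpha * t_n + (1 - alpha) * tau_n.
     * t_n is the length of the burst that just finished, tau_n the old estimate. */
    double predict_next_burst(double alpha, double t_n, double tau_n) {
        return alpha * t_n + (1.0 - alpha) * tau_n;
    }

    /* Example: alpha = 0.5, initial guess 10 ms, observed bursts 6, 4, 6, 13 ms
     * give successive estimates 8, 6, 6, 9.5 ms. */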
77. Priority Scheduling
- A priority number (integer) is associated with each process
- The CPU is allocated to the process with the highest priority (smallest integer ≡ highest priority)
- Preemptive
- nonpreemptive
- SJF is priority scheduling where the priority is the predicted next CPU burst time
- Problem: starvation; low-priority processes may never execute
- Solution: aging; as time progresses, increase the priority of the process
78. Scheduling in Interactive Systems (1)
- Round Robin Scheduling
- list of runnable processes
- list of runnable processes after B uses up its
quantum
79. Round Robin (RR)
- Each process gets a small unit of CPU time (time quantum), usually 10-100 milliseconds. After this time has elapsed, the process is preempted and added to the end of the ready queue
- If there are n processes in the ready queue and the time quantum is q, then each process gets 1/n of the CPU time in chunks of at most q time units at once. No process waits more than (n-1)q time units
- Performance:
- q large: RR degenerates to FIFO
- q small: q must still be large with respect to the context-switch time, otherwise the overhead is too high
80. Example of RR with Time Quantum = 20
- Process Burst Time
- P1 53
- P2 17
- P3 68
- P4 24
- The Gantt chart is:
- | P1 (0-20) | P2 (20-37) | P3 (37-57) | P4 (57-77) | P1 (77-97) | P3 (97-117) | P4 (117-121) | P1 (121-134) | P3 (134-154) | P3 (154-162) |
- Typically, higher average turnaround than SJF, but better response
81. Time Quantum and Context Switch Time
82. Turnaround Time Varies With The Time Quantum
83. Multilevel Queue
- Ready queue is partitioned into separate queues: foreground (interactive) and background (batch)
- Each queue has its own scheduling algorithm: foreground RR, background FCFS
- Scheduling must also be done between the queues:
- Fixed-priority scheduling (i.e., serve all from foreground, then from background); possibility of starvation
- Time slice: each queue gets a certain amount of CPU time which it can schedule among its processes, e.g., 80% to foreground in RR, 20% to background in FCFS
84. Scheduling in Interactive Systems (2)
- A scheduling algorithm with four priority classes
85. Multilevel Queue Scheduling
86. Multilevel Feedback Queue
- A process can move between the various queues; aging can be implemented this way
- A multilevel-feedback-queue scheduler is defined by the following parameters:
- number of queues
- scheduling algorithm for each queue
- method used to determine when to upgrade a process
- method used to determine when to demote a process
- method used to determine which queue a process will enter when that process needs service
87. Example of Multilevel Feedback Queue
- Three queues:
- Q0: time quantum 8 milliseconds
- Q1: time quantum 16 milliseconds
- Q2: FCFS
- Scheduling:
- A new job enters queue Q0, which is served FCFS. When it gains the CPU, the job receives 8 milliseconds. If it does not finish in 8 milliseconds, the job is moved to queue Q1
- At Q1 the job is again served FCFS and receives 16 additional milliseconds. If it still does not complete, it is preempted and moved to queue Q2
88. Multilevel Feedback Queues
89. Lottery Scheduling
90. Multiple-Processor Scheduling
- CPU scheduling is more complex when multiple CPUs are available
- Homogeneous processors within a multiprocessor
- Load sharing
- Asymmetric multiprocessing: only one processor accesses the system data structures, alleviating the need for data sharing
91. Real-Time Scheduling
- Hard real-time systems: required to complete a critical task within a guaranteed amount of time
- Soft real-time computing: requires that critical processes receive priority over less fortunate ones
92. Dispatch Latency
93. Algorithm Evaluation
- Deterministic modeling: takes a particular predetermined workload and defines the performance of each algorithm for that workload
- Queueing models
- Implementation
94. Evaluation of CPU Schedulers by Simulation
95. Scheduling in Batch Systems (1)
- An example of shortest job first scheduling
96. Scheduling in Batch Systems (2)
97. Scheduling in Real-Time Systems
- Schedulable real-time system
- Given:
- m periodic events
- event i occurs within period Pi and requires Ci seconds
- Then the load can only be handled if Σ(i = 1..m) Ci/Pi ≤ 1 (a small C check follows)
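A small C check of that condition; the array names and the sample periods/CPU times in the comment are illustrative.

    #include <stdbool.h>

    /* Returns true if m periodic events, where event i needs C[i] seconds of CPU
     * every P[i] seconds, are schedulable: the sum of C[i]/P[i] must not exceed 1. */
    bool schedulable(int m, const double C[], const double P[]) {
        double load = 0.0;
        for (int i = 0; i < m; i++)
            load += C[i] / P[i];
        return load <= 1.0;
    }

    /* Example: periods of 100, 200, 500 ms with CPU needs of 50, 30, 100 ms
     * give a load of 0.5 + 0.15 + 0.2 = 0.85, so the event set is schedulable. */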
98. Policy versus Mechanism
- Separate what is allowed to be done from how it is done
- a process knows which of its child threads are important and need priority
- Scheduling algorithm parameterized
- mechanism in the kernel
- Parameters filled in by user processes
- policy set by user process
99. Thread Scheduling (1)
- Possible scheduling of user-level threads
- 50-msec process quantum
- threads run 5 msec/CPU burst
100. Thread Scheduling (2)
- Possible scheduling of kernel-level threads
- 50-msec process quantum
- threads run 5 msec/CPU burst