Title: Uniprocessor Scheduling
1 Uniprocessor Scheduling
2 Levels of Scheduling
- Long-term (high level): a new job is added to the pool of available processes.
- Primarily required for batch jobs, which may be held on disk in a queue.
- Medium-term (intermediate level): one or more processes are swapped in.
- Short-term (low level): decides in what order runnable processes (or threads) will be assigned to the CPU.
3 Scheduling Levels: State Transitions
4 Short-Term Scheduling
- The focus in this chapter is short-term scheduling, or processor scheduling.
- Events that invoke the scheduler:
- interrupts (clock, I/O)
- system calls
- signals
- In short, any event that could cause suspension or preemption of the current process.
5 CPU-I/O Bursts
- In a uniprogramming system, a process will alternate between CPU bursts and I/O bursts.
- CPU burst: use the processor (execute)
- I/O burst: wait for completion of some event
- Process run time = Σ (all CPU bursts)
- Process wait time = Σ (all I/O bursts)
- A process is I/O bound if it has many short CPU bursts, and CPU bound if it has fewer, but longer, CPU bursts.
- Multiprogramming was devised to take advantage of I/O bursts.
6 Preemptive Scheduling
- Scheduling is non-preemptive if new processes are dispatched only when a process leaves the Run state voluntarily:
- the process terminates, or moves to the Blocked state
- In preemptive scheduling, processes can be preempted, or removed from the Run state before the end of the current CPU burst.
- Preemption improves response time and predictability in an interactive system.
7 Preemptive Scheduling
- Events which may lead to preemption:
- a new process (or thread) is created
- an interrupt enables a process to move from the Blocked to the Ready state
- a clock interrupt
- Preemption requires hardware assistance in the form of a timer which can be set to interrupt periodically.
- Active multiprogramming uses preemption; passive multiprogramming is non-preemptive.
8 Disadvantages
- If a process is preempted while updating shared resources or kernel data structures, the data may become inconsistent if no synchronization is provided.
- Preemptive algorithms increase the number of process switches, which adds to system overhead.
9 Short-term Scheduling Criteria
- User-oriented criteria:
- response time (for interactive processes): measured from the time a request is made until the time results begin to appear
- turnaround time (for batch processes): the total time in the system - time spent waiting plus actual execution time
- deadlines
- predictability: system load shouldn't affect cost or response time
10 Short-term Scheduling Criteria
- System-oriented criteria include:
- throughput: the number of processes completed per unit of time, sometimes stated as the amount of work performed per unit of time
- processor utilization: percentage of time the processor is busy (with useful work)
- System-oriented criteria aren't very important in single-user systems; response time is the key factor.
11 Other System Criteria
- Priorities:
- intrinsic process priority
- task deadlines
- resource requirements, process execution characteristics (e.g., I/O bound processes, amount of CPU time received recently)
- Fairness: in the absence of priorities, treat all processes about the same. When there are priorities, treat all processes of the same priority about the same.
- Ensure that all processes make progress.
- Fairness in most environments means no starvation.
- Balance use of system resources.
12 Characterization of Scheduling Policies
- The selection function determines which ready process gets to run next.
- The decision mode is non-preemptive or preemptive.
- Other notation:
- w = total time in system so far (waiting + executing)
- e = time spent in execution so far
- s = total service time required by the process, including e
- Tq = turnaround time, the total time the process spends in the system (waiting plus executing)
- Tq/Ts = normalized turnaround time (Ts is the service time)
13 Sample Data Set for Examples

Process   Arrival Time   Service Time
A/P1      0              3
B/P2      2              6
C/P3      4              4
D/P4      6              5
E/P5      8              2

Service time = total CPU time needed, or the length of the next CPU burst. Long jobs have a high service time (long is relative).
14 First Come First Served (FCFS)
- Selection function: the oldest process in the ready queue (max[w]). Hence, FCFS.
- Decision mode: nonpreemptive
- a process runs until it blocks itself
- while it waits, another process can run
(A short trace over the sample data set is sketched below.)
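To make the mechanism concrete, here is a minimal Python sketch (not any particular system's implementation) that runs the sample data set in arrival order and reports each job's finish time, turnaround time Tq, and normalized turnaround Tq/Ts:

```python
# Minimal FCFS sketch: run arrivals in arrival order, one whole CPU burst each.
# Uses the sample data set (process, arrival time, service time) from slide 13.
jobs = [("A", 0, 3), ("B", 2, 6), ("C", 4, 4), ("D", 6, 5), ("E", 8, 2)]

clock = 0
for name, arrival, service in sorted(jobs, key=lambda j: j[1]):  # oldest arrival first
    start = max(clock, arrival)          # CPU may sit idle until the job arrives
    finish = start + service             # nonpreemptive: run the entire burst
    tq = finish - arrival                # turnaround time Tq
    print(f"{name}: finish={finish}, Tq={tq}, Tq/Ts={tq / service:.2f}")
    clock = finish
```

With this data set the short, late-arriving job E finishes with Tq/Ts = 6.0, the kind of poor normalized turnaround for short jobs noted on the next slide.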
15 Characteristics of FCFS
- Wide variation in wait times, sensitive to process arrival order.
- Favors long (CPU-bound) processes over short (I/O-bound) ones.
- Consequently, not effective in an interactive environment - normalized turnaround time for short jobs can be terrible.
- But easy to implement, and no danger of starvation.
16 Round-Robin (RR)
- Selection function: same as FCFS
- Decision mode: preemptive
- a process runs until it blocks or until its time slice - typically from 10 to 100 ms - has expired
- a timer is set to interrupt at the end of the time slice; the running process is put at the end of the ready queue
- FCFS and RR are implemented with a FIFO queue
(A short simulation sketch of the RR mechanism follows.)
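As a rough illustration, the following Python sketch simulates RR over the same sample data set with an assumed quantum of 4 time units; I/O blocking is not modeled, and newly arrived jobs are assumed to queue ahead of the process whose quantum just expired.

```python
from collections import deque

# Minimal round-robin sketch: FIFO ready queue, fixed quantum, preempt on expiry.
jobs = [("A", 0, 3), ("B", 2, 6), ("C", 4, 4), ("D", 6, 5), ("E", 8, 2)]
quantum = 4

remaining = {name: service for name, _, service in jobs}
arrivals = sorted(jobs, key=lambda j: j[1])
ready = deque()
clock = 0
while arrivals or ready:
    # Admit everything that has arrived by now to the tail of the ready queue.
    while arrivals and arrivals[0][1] <= clock:
        ready.append(arrivals.pop(0)[0])
    if not ready:                       # CPU idle: jump to the next arrival
        clock = arrivals[0][1]
        continue
    name = ready.popleft()
    run = min(quantum, remaining[name])
    clock += run
    remaining[name] -= run
    # Jobs that arrived while this one ran queue ahead of the preempted job.
    while arrivals and arrivals[0][1] <= clock:
        ready.append(arrivals.pop(0)[0])
    if remaining[name] > 0:
        ready.append(name)              # quantum expired: back to the tail
    else:
        print(f"{name} completes at t={clock}")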
17 Characteristics of Round Robin
- RR is designed to give better service to short processes.
- It's appropriate for interactive or transaction-processing environments.
- Less variance in wait times than FCFS.
- Biased against I/O-bound jobs, since they may not use the entire time slice before blocking, and thus get less total CPU time.
18 RR Performance
- The main issue in RR performance is the choice of quantum size (quantum = time slice).
- It should be significantly larger than the process switch time (for efficiency).
- It should be slightly longer than the typical interaction (to move processes through the queue quickly).
- If the quantum is too long, performance approaches that of FCFS.
19 Time Quantum for Round Robin
In the figures above, we see the difference in completion time for a process when the quantum is (a) slightly larger than the interaction time and (b) slightly smaller than that time.
20 Virtual Round Robin
- Designed to avoid the bias against I/O-bound jobs that is found in basic RR.
- Suppose a process blocks for I/O after executing for p time units of its quantum of q units.
- When the process is released from the I/O block, it is put onto an auxiliary queue instead of the normal ready queue.
- Processes are dispatched first from the auxiliary queue, with a quantum q' = q - p (sketched in code below).
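The VRR dispatch preference can be sketched in a few lines; the queue contents, process names, and the quantum value below are purely illustrative assumptions.

```python
from collections import deque

# Virtual Round Robin dispatch sketch: the auxiliary queue holds processes that
# blocked for I/O before using their full quantum; they run first, with the
# leftover quantum q - p.
q = 4                                    # base time quantum
ready = deque(["B", "D"])                # normal ready queue (illustrative)
aux = deque([("C", 1)])                  # (process, p): C blocked after running 1 unit

def dispatch():
    if aux:                              # auxiliary queue has priority
        name, p = aux.popleft()
        return name, q - p               # reduced quantum q' = q - p
    name = ready.popleft()
    return name, q                       # full quantum from the normal queue

print(dispatch())                        # -> ('C', 3)
```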
21 Queuing for Virtual Round Robin (VRR)
22 Shortest Process Next (SPN)
- Selection function: the process with the shortest expected CPU burst time (min[s]).
- Decision mode: nonpreemptive
- Requires estimated CPU burst times.
- But note that a short process may still get stuck behind a long process, since there's no preemption (see the selection sketch below).
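Here is a minimal sketch of the SPN selection function, min[s]; the burst "estimates" are simply the service times from the sample data set, since real estimates would come from measurement or prediction.

```python
# SPN selection sketch: choose the smallest expected service time among arrived processes.
jobs = [("A", 0, 3), ("B", 2, 6), ("C", 4, 4), ("D", 6, 5), ("E", 8, 2)]

def spn_pick(ready_names, estimates):
    """Return the ready process with the shortest expected CPU burst."""
    return min(ready_names, key=lambda n: estimates[n])

estimates = {name: service for name, _, service in jobs}
# At t=9 (A and B finished), C, D and E have all arrived:
print(spn_pick(["C", "D", "E"], estimates))   # -> 'E' (shortest burst, 2 units)
```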
23 SPN Characteristics
- Optimal among nonpreemptive algorithms
- Minimizes average wait time, maximizes throughput
- Long jobs may be starved
- Difficult to estimate burst times
- Lack of preemption is not suitable in a time-sharing environment - a long job can still monopolize the CPU once it is dispatched.
24 Shortest Remaining Time (SRT)
- A preemptive version of SPN
- The scheduler will preempt the current process when a shorter job arrives in the ready state (a preemption check is sketched below):
- new jobs
- jobs returning from the blocked state with reduced service time
- Still depends on having time estimates and records of elapsed service times (i.e., extra overhead)
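A sketch of the SRT preemption test, assuming the scheduler keeps a running total of elapsed service time so remaining times can be compared; the worked values in the comments come from the sample data set.

```python
# SRT preemption check sketch: on every arrival (or return from blocking),
# compare the newcomer's remaining time with what is left of the running burst.
def should_preempt(running_remaining, newcomer_remaining):
    """Preempt if the newly ready process needs less CPU than the running one has left."""
    return newcomer_remaining < running_remaining

# With the sample data set: B starts at t=3 (after A) and has 5 units left when
# C arrives at t=4 needing 4 units, so C preempts B.
print(should_preempt(5, 4))   # -> True
print(should_preempt(5, 6))   # -> False (a longer job does not preempt)
```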
25 Analysis of SRT
- Somewhat more overhead than SPN
- Better throughput
- Favors short jobs even more than SPN does
26 Highest Response Ratio Next (HRRN)
- Decision mode: non-preemptive
- Selection criterion: the largest Response Ratio, RR = (w + s) / s, where
- s = expected service time
- w = time spent waiting for the processor (notice that this is not the same w mentioned earlier, which includes service time to date)
- Goal: minimize average normalized turnaround time
- Avoids starvation while favoring short or old jobs.
- Still requires estimated service times.
27 Highest Response Ratio Next (HRRN)
- Choose the next process with the greatest ratio:
  (time spent waiting + expected service time) / expected service time
28 Sample Data Set for Examples

Process   Arrival Time   Service Time
A         0              3
B         2              6
C         4              4
D         6              5
E         8              2

B completes at time 9. C arrives at t=4, so w=5 and (w+s)/s = 9/4. D arrives at t=6, so w=3 and RR = 8/5. E arrives at t=8, so RR = 3/2. Schedule process C. C completes at t=13; RR for D is now (7+5)/5 = 12/5, and RR for E is (5+2)/2 = 7/2. Schedule process E next. (The sketch below recomputes the ratios at t=9.)
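The arithmetic above is easy to recheck mechanically; this short sketch recomputes the response ratios at t = 9 (the moment B completes) from the sample data. The helper name response_ratio is illustrative, not from any library.

```python
from fractions import Fraction

# HRRN sketch: response ratios at t = 9, when B completes.
# w = time spent waiting, s = expected service time.
jobs = {"C": (4, 4), "D": (6, 5), "E": (8, 2)}   # name: (arrival, service)

def response_ratio(now, arrival, service):
    w = now - arrival                    # waiting time so far
    return Fraction(w + service, service)

for name, (arrival, service) in jobs.items():
    print(name, response_ratio(9, arrival, service))
# -> C 9/4, D 8/5, E 3/2, so C is scheduled next, matching the trace above.
```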
29 Other Scheduling Algorithms
- Deadline scheduling: schedule the job with the closest deadline (may be used in real-time environments)
- Fair-share scheduling: process groups get a percentage of total CPU time; an individual process gets a portion of its group's time.
- Priority queues: a prioritized set of FCFS queues. Schedule the first process on the highest-priority non-empty queue.
30 Scheduling Considerations
- In the absence of any knowledge about execution time, deadlines, or priorities, how can an operating system schedule processes fairly?
- One approach is to use knowledge about time spent executing so far.
- A process that has accumulated significant recent CPU time could be penalized, on the basis that it must be a long job.
31 Multilevel Feedback Scheduling (MFS)
- Preemptive scheduling with dynamic priorities (a queue-handling sketch follows below).
- Several FIFO ready queues with decreasing priorities: Pr(RQ0) > Pr(RQ1) > ... > Pr(RQn)
- New processes are placed at the tail end of RQ0.
- Dispatch from the head of RQ0; when its quantum expires, preempt the process and place it at the end of RQ1.
- In general, if a process is preempted after having been on RQ(i-1), it moves to RQi.
- Processes in RQn (the lowest queue) execute round robin.
- The dispatcher chooses a process for execution from the highest-priority non-empty queue.
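A minimal sketch of the queue handling described above, assuming three queues for illustration; a real dispatcher would also handle blocking, timers, and per-queue quanta.

```python
from collections import deque

# Multilevel feedback sketch: new processes enter RQ0; a process whose quantum
# expires on RQi is demoted to RQ(i+1); the lowest queue is plain round robin.
NUM_QUEUES = 3
queues = [deque() for _ in range(NUM_QUEUES)]

def admit(name):
    queues[0].append(name)               # new process: tail of RQ0

def pick():
    """Dispatch from the highest-priority non-empty queue."""
    for level, q in enumerate(queues):
        if q:
            return level, q.popleft()
    return None

def quantum_expired(level, name):
    """Preempted after a full quantum: demote one level (bottom queue wraps to itself)."""
    queues[min(level + 1, NUM_QUEUES - 1)].append(name)

admit("P1"); admit("P2")
level, name = pick()                     # -> RQ0 dispatches P1
quantum_expired(level, name)             # P1 drops to RQ1
print(pick())                            # -> (0, 'P2'): RQ0 still outranks RQ1
```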
32 Feedback Queues
- A running process either completes or moves to a lower queue; processes that block may be considered new when they return to the Ready state, in which case they return to the highest-level queue.
33 MFS Characteristics
- New processes are favored over old processes.
- Short (I/O-bound) processes will complete relatively quickly, since they stay in higher-priority queues.
- Long (CPU-bound) jobs will drift downward.
- Starvation is likely for long jobs. Remedies:
- give longer quanta to lower-level queues
- age the process: allow it to move back up to a higher-priority queue
34 Time Quantum for Multilevel Feedback Scheduling
- Two examples: one with a constant quantum value, one with an increasing quantum for each queue.
- Example: in the second figure, the time quantum of RQi is 2^(i-1).
- It may still be necessary to use aging to guarantee timely completion for long processes.
36 A Traditional UNIX Scheduler
- Targeted toward multi-user, time-shared environments
- Goal: good response time for interactive users, but no starvation for long background jobs.
- Scheduling algorithm: multilevel feedback
- round robin within each priority queue
- 1-second preemption
- priority based on process type and execution history
- priority decreases in proportion to recent processor utilization, and increases as the process sits in a queue.
37 Traditional UNIX Scheduling
- User processes start at priority 0 and go up; kernel-level processes have negative priorities - the smallest negative number is the highest priority.
- The amount of CPU time a process has received since the last priority computation is used to lower the priority of processes that have been running.
- Older CPU usage figures decay, to raise the priority of processes that have been idle.
38 UNIX Priority Computation
Priority = (base priority) + (recent CPU usage)/2 + (nice factor)

P_j(i) = Base_j + CPU_j(i-1)/2 + nice_j
CPU_j(i) = CPU_j(i-1)/2

where
- P_j(i) is the priority of process j at the start of interval i
- Base_j is the base priority of process j
- CPU_j(i) is the weighted-average CPU utilization by process j through interval i; process execution time (if any) is added into the figure
- nice_j is a user-controllable adjustment factor
(A short numeric sketch of the decay follows.)
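The decay and priority computation can be illustrated with a few made-up numbers; this sketch follows the formulas on this slide (with no new execution time added each interval), not any particular UNIX source tree.

```python
# Traditional UNIX priority sketch, following the formulas above:
#   CPU_j(i) = CPU_j(i-1) / 2  (decayed each interval, plus any new execution time)
#   P_j(i)   = Base_j + CPU_j(i-1) / 2 + nice_j
# Base, nice and usage values below are illustrative only.
def next_interval(cpu_prev, executed, base, nice):
    cpu_now = cpu_prev / 2 + executed        # decay old usage, add this interval's time
    priority = base + cpu_prev / 2 + nice    # larger number = lower priority
    return cpu_now, priority

cpu, base, nice = 60.0, 0, 0
for interval in range(1, 5):
    cpu, prio = next_interval(cpu, executed=0, base=base, nice=nice)
    print(f"interval {interval}: CPU usage {cpu:.1f}, priority {prio:.1f}")
# Usage halves each interval the process sits idle, so its priority number falls
# and the process drifts back toward the front of the line.
```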
39 Summary
- Traditional UNIX scheduling had a built-in aging factor: as a process waits, its CPU usage figures decay. At every time interval, the figure is reduced by one-half.
- Eventually, if a process doesn't execute, CPU_j(i-1)/2 will be effectively 0, and the process's priority will thus increase.