Module 6: CPU Scheduling

Transcript and Presenter's Notes

1
Module 6: CPU Scheduling
  • Basic Concepts
  • Scheduling Criteria
  • Scheduling Algorithms
  • Multiple-Processor Scheduling
  • Real-Time Scheduling
  • Algorithm Evaluation

2
Basic Concepts
  • Maximum CPU utilization obtained with
    multiprogramming
  • CPU-I/O Burst Cycle: process execution consists
    of a cycle of CPU execution and I/O wait.
  • CPU burst distribution

3
Operating System - Main Goals
  • Interleave the execution of a number of
    processes to maximize processor utilization while
    providing reasonable response time
  • Overhead of a context switch:
  • Switch to user mode
  • Restart the user program at the right location
  • Update resource use and assignment
  • Observation: processes tend to run in alternating
    bursts of CPU and I/O use...

4
Process Scheduling
  • Quality criteria for a scheduling algorithm:
  • Fairness: each process gets a fair share of
    the CPU
  • Efficiency: keep the CPU busy
  • Response time: minimize, for interactive users
  • Turnaround: for batch users (total time of a
    batch job)
  • Throughput: maximal number of processed jobs per
    unit time
  • Waiting time: minimize the average over processes

5
Scheduling quality measures
  • Throughput: number of processes completed per unit
    time
  • Turnaround time: interval of (real) time from
    process submission to completion
  • Waiting time: sum of the time intervals the process
    spends in the ready queue
  • CPU utilization: the fraction of the total (elapsed)
    time that the CPU is computing some process,
    between 0% and 100%

6
Scheduling - Introduction to Scheduling
  • Bursts of CPU usage alternate with periods of I/O
    wait
  • a CPU-bound process
  • an I/O-bound process

7
Alternating Sequence of CPU And I/O Bursts
8
Introduction to Scheduling
  • Scheduling Algorithm Goals

9
When are Scheduling Decisions Made?
  • 1. Process switches from Running to Waiting
    (I/O)
  • 2. Process switches from Running to Ready (clock
    interrupt, ...)
  • 3. Process switches from Waiting to Ready
    (I/O completion, ...)
  • 4. Process terminates
  • Types of Scheduling
  • Preemptive scheduling: processes can be
    suspended by the scheduler
  • Non-preemptive: only cases 1 and 4

10
Utilization vs. Turnaround time
  • 5 interactive jobs i1-i5 and one batch job b
  • Interactive jobs: 10% CPU, 20% disk I/O, 70%
    terminal I/O; total time for each job 10 sec.
  • Batch job: 90% CPU, 10% disk I/O; total time 50
    sec.
  • Cannot run all in parallel!
  • i1..i5 in parallel - disk I/O is 100% utilized
  • b and one i in parallel - CPU is 100% utilized

11
Utilization vs. Turnaround time (II)
  • Two possible schedules:
  • 1. First i1-i5, then b:
  • UT = (10x50 + 50x90)/60 = 83%
  • TA = (10x5 + 60x1)/6 = 18 sec.
  • 2. b and each of the i's in parallel:
  • UT = (50x100)/50 = 100%
  • TA = (10+20+30+40+50+50)/6 = 33 sec.

12
CPU Scheduler
  • Selects from among the processes in memory that
    are ready to execute, and allocates the CPU to
    one of them.
  • CPU scheduling decisions may take place when a
    process
  • 1. Switches from running to waiting state.
  • 2. Switches from running to ready state.
  • 3. Switches from waiting to ready.
  • 4. Terminates.
  • Scheduling under 1 and 4 is nonpreemptive.
  • All other scheduling is preemptive.

13
Dispatcher
  • Dispatcher module gives control of the CPU to the
    process selected by the short-term scheduler;
    this involves:
  • switching context
  • switching to user mode
  • jumping to the proper location in the user
    program to restart that program
  • Dispatch latency: the time it takes for the
    dispatcher to stop one process and start another
    running.

14
Scheduling Criteria
  • CPU utilization: keep the CPU as busy as
    possible
  • Throughput: number of processes that complete their
    execution per time unit
  • Turnaround time: amount of time to execute a
    particular process
  • Waiting time: amount of time a process has been
    waiting in the ready queue
  • Response time: amount of time it takes from when
    a request was submitted until the first response
    is produced, not output (for time-sharing
    environment)

15
Optimization Criteria
  • Max CPU utilization
  • Max throughput
  • Min turnaround time
  • Min waiting time
  • Min response time

16
First-Come, First-Served (FCFS) Scheduling
  • Example:
    Process  Burst Time
    P1       24
    P2       3
    P3       3
  • Suppose that the processes arrive in the order
    P1, P2, P3. The Gantt chart for the schedule
    is:
  • | P1 (0-24) | P2 (24-27) | P3 (27-30) |
  • Waiting time for P1 = 0, P2 = 24, P3 = 27
  • Average waiting time: (0 + 24 + 27)/3 = 17

17
FCFS Scheduling (Cont.)
  • Suppose that the processes arrive in the order
    P2, P3, P1.
  • The Gantt chart for the schedule is:
  • Waiting time for P1 = 6, P2 = 0, P3 = 3
  • Average waiting time: (6 + 0 + 3)/3 = 3
  • Much better than the previous case.
  • Convoy effect: short processes stuck behind a long
    process
  • | P2 (0-3) | P3 (3-6) | P1 (6-30) |
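
A minimal sketch of the FCFS waiting-time arithmetic on these two slides, assuming all processes arrive at time 0; the class and method names are illustrative and not part of the original slides:

    public class FcfsDemo {
        // Waiting time of each process = total burst time of everything queued ahead of it.
        static double averageWait(int[] burstsInArrivalOrder) {
            int elapsed = 0, totalWait = 0;
            for (int burst : burstsInArrivalOrder) {
                totalWait += elapsed;   // this process waits for all earlier arrivals
                elapsed += burst;       // then occupies the CPU for its full burst
            }
            return (double) totalWait / burstsInArrivalOrder.length;
        }

        public static void main(String[] args) {
            System.out.println(averageWait(new int[] {24, 3, 3})); // order P1,P2,P3 -> 17.0
            System.out.println(averageWait(new int[] {3, 3, 24})); // order P2,P3,P1 -> 3.0
        }
    }
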
18
Shortest-Job-First (SJF) Scheduling
  • Associate with each process the length of its
    next CPU burst. Use these lengths to schedule
    the process with the shortest time.
  • Two schemes:
  • Nonpreemptive: once the CPU is given to the process
    it cannot be preempted until it completes its CPU
    burst.
  • Preemptive: if a new process arrives with a CPU
    burst length less than the remaining time of the
    currently executing process, preempt. This scheme is
    known as Shortest-Remaining-Time-First (SRTF).
  • SJF is optimal: it gives the minimum average waiting
    time for a given set of processes.

19
Example of Non-Preemptive SJF
  • Process  Arrival Time  Burst Time
    P1       0.0           7
    P2       2.0           4
    P3       4.0           1
    P4       5.0           4
  • SJF (non-preemptive); the Gantt chart is:
  • Average waiting time = (0 + 6 + 3 + 7)/4 = 4

  • | P1 (0-7) | P3 (7-8) | P2 (8-12) | P4 (12-16) |
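
A small non-preemptive SJF sketch that reproduces the average waiting time of 4 above; the arrival and burst arrays mirror the slide, while the class and method names are illustrative:

    public class SjfDemo {
        // Non-preemptive SJF: whenever the CPU is free, run the shortest job
        // that has already arrived; report the average waiting time.
        static double averageWait(double[] arrival, int[] burst) {
            int n = arrival.length;
            boolean[] done = new boolean[n];
            double time = 0, totalWait = 0;
            for (int scheduled = 0; scheduled < n; scheduled++) {
                int pick = -1;
                for (int i = 0; i < n; i++) {
                    if (!done[i] && arrival[i] <= time
                            && (pick == -1 || burst[i] < burst[pick])) {
                        pick = i;
                    }
                }
                if (pick == -1) {           // CPU idle: jump ahead to the next arrival
                    time = Double.MAX_VALUE;
                    for (int i = 0; i < n; i++)
                        if (!done[i]) time = Math.min(time, arrival[i]);
                    scheduled--;            // retry the selection at the new time
                    continue;
                }
                totalWait += time - arrival[pick];
                time += burst[pick];
                done[pick] = true;
            }
            return totalWait / n;
        }

        public static void main(String[] args) {
            // P1..P4 from the slide: arrivals 0,2,4,5 and bursts 7,4,1,4 -> prints 4.0
            System.out.println(averageWait(new double[] {0, 2, 4, 5}, new int[] {7, 4, 1, 4}));
        }
    }
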
20
Example of Preemptive SJF
  • Process  Arrival Time  Burst Time
    P1       0.0           7
    P2       2.0           4
    P3       4.0           1
    P4       5.0           4
  • SJF (preemptive, i.e. SRTF); the Gantt chart is:
  • Average waiting time = (9 + 1 + 0 + 2)/4 = 3

  • | P1 (0-2) | P2 (2-4) | P3 (4-5) | P2 (5-7) | P4 (7-11) | P1 (11-16) |
21
Determining Length of Next CPU Burst
  • Can only estimate the length.
  • Can be done by using the length of previous CPU
    bursts, using exponential averaging.

22
Determining length of next CPU burst
  • Can be done by using the lengths of previous CPU
    bursts
  • t_n = actual length of the n-th CPU burst
  • τ_(n+1) = predicted value for the next CPU burst
  • For 0 ≤ α ≤ 1, define
  • τ_(n+1) = α t_n + (1 - α) τ_n
  • Since both α and (1 - α) are less than or equal to
    1, each successive term has less weight than its
    predecessor

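A minimal sketch of the exponential-averaging predictor τ_(n+1) = α·t_n + (1 - α)·τ_n above; the initial guess of 10, α = 0.5, and the sample burst sequence are illustrative assumptions:

    public class BurstPredictor {
        // tau(n+1) = alpha * t_n + (1 - alpha) * tau(n)
        private final double alpha;
        private double prediction;           // tau(n), the current estimate

        BurstPredictor(double alpha, double initialGuess) {
            this.alpha = alpha;
            this.prediction = initialGuess;
        }

        double observe(double actualBurst) { // feed t_n, get tau(n+1)
            prediction = alpha * actualBurst + (1 - alpha) * prediction;
            return prediction;
        }

        public static void main(String[] args) {
            BurstPredictor p = new BurstPredictor(0.5, 10.0); // assumed alpha and initial guess
            for (double t : new double[] {6, 4, 6, 4, 13, 13, 13}) {
                System.out.printf("actual=%.0f  next prediction=%.2f%n", t, p.observe(t));
            }
        }
    }
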
23
Priority Scheduling
  • A priority number (integer) is associated with
    each process
  • The CPU is allocated to the process with the
    highest priority (smallest integer = highest
    priority).
  • Preemptive or
  • nonpreemptive
  • SJF is priority scheduling where the priority is
    the predicted next CPU burst time.
  • Problem: Starvation - low-priority processes may
    never execute.
  • Solution: Aging - as time progresses, increase
    the priority of the process (sketched below).

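A toy sketch of aging, assuming a smaller number means higher priority, that waiting processes gain one priority level per tick, and that a dispatched process falls back to its base priority; all of these policy details and the process names are assumptions for illustration:

    import java.util.*;

    public class AgingDemo {
        static final class Proc {
            final String name;
            final int basePriority;
            int priority;                      // smaller number = higher priority
            Proc(String name, int p) { this.name = name; this.basePriority = p; this.priority = p; }
        }

        public static void main(String[] args) {
            List<Proc> ready = List.of(new Proc("A", 1), new Proc("B", 5), new Proc("C", 9));
            for (int tick = 0; tick < 10; tick++) {
                Proc next = Collections.min(ready, Comparator.comparingInt(p -> p.priority));
                System.out.println("tick " + tick + ": dispatch " + next.name);
                next.priority = next.basePriority;                  // running process returns to its base
                for (Proc p : ready) if (p != next) p.priority--;   // waiting processes age upward
            }
            // Even the low-priority process C is eventually dispatched, so it does not starve.
        }
    }
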
24
Round Robin (RR)
  • Each process gets a small unit of CPU time (time
    quantum), usually 10-100 milliseconds. After
    this time has elapsed, the process is preempted
    and added to the end of the ready queue.
  • If there are n processes in the ready queue and
    the time quantum is q, then each process gets 1/n
    of the CPU time in chunks of at most q time units
    at once. No process waits more than (n-1)q time
    units.
  • Performance:
  • q large ⇒ behaves like FIFO
  • q small ⇒ q must be large with respect to the
    context-switch time, otherwise the overhead is too
    high.

25
Example RR with Time Quantum 20
  • Process  Burst Time
    P1       53
    P2       17
    P3       68
    P4       24
  • The Gantt chart is
  • Typically, higher average turnaround than SJF,
    but better response.

  • | P1 (0-20) | P2 (20-37) | P3 (37-57) | P4 (57-77) | P1 (77-97) | P3 (97-117)
    | P4 (117-121) | P1 (121-134) | P3 (134-154) | P3 (154-162) |
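
A minimal round-robin sketch that reproduces the quantum-20 schedule above; the class name and queue representation are illustrative:

    import java.util.*;

    public class RoundRobinDemo {
        // Round robin with a fixed quantum; prints the resulting Gantt sequence.
        public static void main(String[] args) {
            int quantum = 20;
            String[] name = {"P1", "P2", "P3", "P4"};
            int[] remaining = {53, 17, 68, 24};

            Deque<Integer> queue = new ArrayDeque<>(List.of(0, 1, 2, 3)); // all arrive at time 0
            int time = 0;
            while (!queue.isEmpty()) {
                int p = queue.pollFirst();
                int slice = Math.min(quantum, remaining[p]);
                System.out.println(name[p] + ": " + time + " - " + (time + slice));
                time += slice;
                remaining[p] -= slice;
                if (remaining[p] > 0) queue.addLast(p);   // unfinished: back to the tail
            }
        }
    }
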
26
How a Smaller Time Quantum Increases Context
Switches
27
Turnaround Time Varies With The Time Quantum
28
Context switch overhead - example
  • Assume each 32-bit register takes 2x100
    nanoseconds (10^-9 seconds) to save
  • For a machine with 32 registers + 8 status registers,
    the saving time is (32+8) x 2 x 100 x 10^-9 =
    8 x 10^-6 s = 8 microseconds
  • For time slices of 0.1 msec, the scheduler has an
    overhead of 16% (2 x 8 microseconds per
    100-microsecond slice) for context switching
  • The real overhead is much higher: the scheduler
    computes, the scheduler saves its own context,
    process memory may have to be saved too (?), and some
    CPUs have several sets of registers

29
Multilevel Queue
  • The ready queue is partitioned into separate
    queues: foreground (interactive) and background
    (batch)
  • Each queue has its own scheduling algorithm:
    foreground - RR; background - FCFS
  • Scheduling must also be done between the queues.
  • Fixed priority scheduling; i.e., serve all from
    foreground, then from background. Possibility of
    starvation.
  • Time slice: each queue gets a certain amount of
    CPU time which it can schedule amongst its
    processes; e.g., 80% to foreground in RR,
  • 20% to background in FCFS

30
Multilevel Queue Scheduling
31
Multilevel Feedback Queue
  • A process can move between the various queues
    aging can be implemented this way.
  • Multilevel-feedback-queue scheduler defined by
    the following parameters
  • number of queues
  • scheduling algorithms for each queue
  • method used to determine when to upgrade a
    process
  • method used to determine when to demote a process
  • method used to determine which queue a process
    will enter when that process needs service

32
Multilevel Feedback Queues
33
Example of Multilevel Feedback Queue
  • Three queues:
  • Q0 - time quantum 8 milliseconds
  • Q1 - time quantum 16 milliseconds
  • Q2 - FCFS
  • Scheduling:
  • A new job enters queue Q0, which is served FCFS.
    When it gains the CPU, the job receives 8 milliseconds.
    If it does not finish in 8 milliseconds, the job is
    moved to queue Q1.
  • At Q1 the job is again served FCFS and receives 16
    additional milliseconds. If it still does not
    complete, it is preempted and moved to queue Q2.

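A simplified sketch of the three-queue example above; since no new jobs arrive in this toy run, higher queues are simply drained before lower ones, and the job names and burst lengths are illustrative assumptions:

    import java.util.*;

    public class MlfqDemo {
        // Three-queue feedback example: Q0 quantum 8, Q1 quantum 16, Q2 FCFS (run to completion).
        public static void main(String[] args) {
            int[] quanta = {8, 16, Integer.MAX_VALUE};
            Map<String, Integer> remaining = new LinkedHashMap<>();
            remaining.put("A", 5);     // illustrative burst lengths
            remaining.put("B", 20);
            remaining.put("C", 40);

            List<Deque<String>> queues = List.of(
                    new ArrayDeque<String>(), new ArrayDeque<String>(), new ArrayDeque<String>());
            queues.get(0).addAll(remaining.keySet());   // every new job starts in Q0

            int time = 0;
            for (int level = 0; level < queues.size(); level++) {
                Deque<String> q = queues.get(level);
                while (!q.isEmpty()) {
                    String job = q.pollFirst();
                    int slice = Math.min(quanta[level], remaining.get(job));
                    System.out.printf("Q%d runs %s: %d - %d%n", level, job, time, time + slice);
                    time += slice;
                    remaining.put(job, remaining.get(job) - slice);
                    if (remaining.get(job) > 0) queues.get(level + 1).addLast(job); // demote
                }
            }
        }
    }
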
34
Multiple-Processor Scheduling
  • CPU scheduling more complex when multiple CPUs
    are available.
  • Homogeneous processors within a multiprocessor.
  • Load sharing
  • Symmetric multiprocessing (SMP): each processor
    makes its own scheduling decisions.
  • Asymmetric multiprocessing: only one processor
    accesses the system data structures, alleviating
    the need for data sharing.

35
Real-Time Scheduling
  • Hard real-time systems: required to complete a
    critical task within a guaranteed amount of time.
  • Soft real-time computing: requires that critical
    processes receive priority over less fortunate
    ones.

36
Scheduler Activations
  • Goal: mimic the functionality of kernel threads
  • while gaining the performance of user-space threads
  • Avoids unnecessary user/kernel transitions
  • The kernel assigns virtual processors to each process
  • and lets the runtime system allocate threads to
    processors
  • Problem: fundamental reliance on the kernel
    (lower layer)
  • calling procedures in user space (higher
    layer)

37
Scheduling in Batch Systems (1)
  • An example of shortest job first scheduling

38
Scheduling in Batch Systems (2)
  • Three level scheduling

39
Scheduling in Interactive Systems (1)
  • Round Robin Scheduling
  • list of runnable processes
  • list of runnable processes after B uses up its
    quantum

40
Scheduling in Interactive Systems (2)
  • A scheduling algorithm with four priority classes

41
Scheduling in Real-Time Systems
  • Schedulable real-time system
  • Given:
  • m periodic events
  • event i occurs within period Pi and requires Ci
    seconds
  • Then the load can only be handled if
  • Σ (Ci / Pi) ≤ 1, summing over i = 1..m

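A minimal sketch of that utilization test; the method name and the periods and compute times in main are illustrative, not from the slides:

    public class SchedulabilityCheck {
        // A periodic workload passes this simple utilization test
        // only if the sum of Ci/Pi over all events is at most 1.
        static boolean schedulable(double[] period, double[] compute) {
            double load = 0;
            for (int i = 0; i < period.length; i++) load += compute[i] / period[i];
            return load <= 1.0;
        }

        public static void main(String[] args) {
            // Illustrative numbers: periods 100, 200, 500 ms needing 50, 30, 100 ms of CPU.
            System.out.println(schedulable(new double[] {100, 200, 500},
                                           new double[] {50, 30, 100}));  // load 0.85 -> true
        }
    }
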
42
Policy versus Mechanism
  • Separate what is allowed to be done from how it
    is done
  • e.g., a process knows which of its child threads are
    important and need priority
  • The scheduling algorithm is parameterized:
  • the mechanism is in the kernel
  • The parameters are filled in by user processes:
  • the policy is set by the user process

43
An Example
44
First-Come-First-Served (FCFS)
  • Each process joins the Ready queue
  • When the current process ceases to execute, the
    oldest process in the Ready queue is selected

45
Shortest Remaining Time - SRTF
  • Preemptive version of shortest process next
    policy
  • Must estimate processing time

46
Highest Response Ratio Next (HRRN)
  • Choose next process with the highest ratio

  • ratio = (time spent waiting + expected service time) / expected service time
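
A small sketch of picking the process with the highest response ratio defined above; the class name and the waiting/service values are illustrative:

    public class HrrnDemo {
        // Response ratio = (time spent waiting + expected service time) / expected service time.
        static int pickNext(double[] waiting, double[] service) {
            int best = 0;
            double bestRatio = -1;
            for (int i = 0; i < waiting.length; i++) {
                double ratio = (waiting[i] + service[i]) / service[i];
                if (ratio > bestRatio) { bestRatio = ratio; best = i; }
            }
            return best;
        }

        public static void main(String[] args) {
            // Illustrative queue: a longer job that has waited a while beats a fresh short one.
            double[] waiting = {12, 0};
            double[] service = {6, 3};
            System.out.println("run process " + pickNext(waiting, service)); // ratios 3.0 vs 1.0 -> 0
        }
    }
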
47
Feedback Scheduling
  • Penalize jobs that have been running longer
  • We don't know the remaining time a process needs to
    execute

48
Guaranteed Scheduling
  • To achieve a guaranteed 1/n of CPU time (for n
    processes/users logged on):
  • Monitor the total amount of CPU time per process
    and the total logged-on time
  • Calculate the ratio of allocated CPU time to the
    amount of CPU time each process is entitled to
  • Run the process with the lowest ratio
  • Switch to another process when the ratio of the
    running process has passed its goal ratio
  • Also called Fair-Share Scheduling

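A minimal sketch of the ratio test described above, assuming each of n processes is entitled to elapsed/n seconds of CPU; the class name and the numbers in main are illustrative:

    public class GuaranteedDemo {
        // Entitled CPU per process = elapsed time / n; run whoever has consumed the
        // smallest fraction of its entitlement so far.
        static int pickNext(double elapsed, double[] cpuUsed) {
            double entitlement = elapsed / cpuUsed.length;
            int best = 0;
            for (int i = 1; i < cpuUsed.length; i++) {
                if (cpuUsed[i] / entitlement < cpuUsed[best] / entitlement) best = i;
            }
            return best;
        }

        public static void main(String[] args) {
            // Illustrative: after 30 s with 3 processes, each is entitled to 10 s of CPU.
            System.out.println("run process " + pickNext(30, new double[] {12, 6, 9})); // -> 1
        }
    }
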
49
Priority (Feedback) Scheduling
  • Select the runnable process with the highest priority
  • To prevent high-priority processes from running
    indefinitely, change priorities dynamically:
  • decrease priorities at each clock tick
  • increase priorities of I/O-bound processes:
  • priority = 1/f (f is the fraction of the
    time quantum used)
  • group priorities: each priority group uses round-
    robin scheduling
  • Main drawback - starvation

50
Dynamic Priority Groups - CTSS
  • Assign different time quanta to the different
    priority classes
  • Highest priority class: 1 quantum, 2nd class: 2
    quanta, 3rd class: 4 quanta, etc.
  • Move processes that used up their time to lower
    classes (i.e. classify running processes)
  • Net result - longer runs for CPU-bound
    processes, higher priority for I/O-bound
    processes
  • e.g., a job needing 100 time quanta runs in only 7
    switches (runs of 1, 2, 4, 8, 16, 32, 64 quanta)
  • The Unix (low-level) scheduler ...

51
Parameters defining multi-level scheduling
  • Number of queues and scheduling policy in each
    queue (FCFS, Priority?)
  • When to demote a process to a lower queue
  • Which queue to promote a process to
  • Elapsed time since process received CPU (aging)
  • Expected CPU burst
  • Memory required
  • Process priority

52
Two-Level Scheduling
  • Assume processes can be either in memory or
    swapped out (to disk)
  • Schedule memory-resident processes by a low-level
    scheduler as before
  • Swap in/out by a high-level scheduler using:
  • Allocated cpu time to process
  • Process size (takes less space in memory)
  • Process priority

53
Scheduling in Unix
  • Two-level scheduling:
  • The low-level scheduler uses multiple queues to
    select the next process, out of the processes in
    memory, to get a time quantum.
  • The high-level scheduler moves processes from memory
    to disk and back, to give all processes their
    share of CPU time
  • The low-level scheduler keeps queues for each
    priority
  • User-mode processes have positive priorities
  • Kernel-mode processes have negative priorities
    (which are higher)

54
Low-level Scheduling Algorithm
  • Pick a process from the highest (non-empty) priority
    queue
  • Run it for 1 quantum (usually 1 s), or until it
    blocks
  • Increment the CPU usage count every clock tick
  • Every second, recalculate priorities:
  • Divide CPU usage by 2
  • New priority = base + CPU usage/2
  • (base is usually 0, but can be niced to less...)
  • Use round robin within each queue (separately)

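A small sketch of the once-per-second recalculation described above; the tick count per second and the base (nice) value of 0 are illustrative assumptions:

    public class UnixPriorityDemo {
        // Once per second: cpuUsage = cpuUsage / 2; priority = base + cpuUsage / 2.
        // Lower numbers mean higher priority, so a process that has been using the
        // CPU heavily sinks, and its priority recovers as the usage count decays.
        public static void main(String[] args) {
            int base = 0;            // assumed nice value of 0
            int cpuUsage = 0;
            for (int second = 1; second <= 5; second++) {
                cpuUsage += 60;      // illustrative: 60 clock ticks of CPU used this second
                cpuUsage /= 2;       // decay at the recalculation
                int priority = base + cpuUsage / 2;
                System.out.println("second " + second + ": usage=" + cpuUsage
                        + " priority=" + priority);
            }
        }
    }
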
55
Priority Calculation in Unix
  • Pj(i) = priority of process j at the beginning of
    interval i
  • Basej = base priority of process j
  • Uj(i) = processor utilization of process j in
    interval i
  • GUk(i) = total processor utilization of all
    processes in group k during interval i
  • CPUj(i) = exponentially weighted average
    processor utilization by process j
    through interval i
  • GCPUk(i) = exponentially weighted average total
    processor utilization of group k through
    interval i
  • Wk = weighting assigned to group k, with the
    constraint that 0 ≤ Wk ≤ 1
  • The fair-share recurrences are then:
  • CPUj(i) = CPUj(i-1)/2 + Uj(i-1)/2
  • GCPUk(i) = GCPUk(i-1)/2 + GUk(i-1)/2
  • Pj(i) = Basej + CPUj(i-1)/2 + GCPUk(i-1)/(4 x Wk)

56
Unix priority queues
57
Low-level Scheduling Algorithm - i/o
  • Blocked processes are removed from the queue, but when
    the blocking event completes, they are placed in a
    high-priority queue
  • The negative priorities are meant for (returning)
    blocked processes
  • Negative priorities are hardwired in the system;
    for example, -5 for disk I/O is meant to enable a
    reading/writing process to go back to the disk
    without waiting for other processes
  • Interactive processes get good service; CPU-bound
    processes get whatever service is left...

58
Scheduling (2)
  • Windows 2000 supports 32 priorities for threads

59
Thread Scheduling
  • Local Scheduling: how the threads library
    decides which thread to put onto an available
    LWP.
  • Global Scheduling: how the kernel decides which
    kernel thread to run next.

60
Solaris 2 Scheduling
61
Java Thread Scheduling
  • JVM Uses a Preemptive, Priority-Based Scheduling
    Algorithm.
  • FIFO Queue is Used if There Are Multiple Threads
    With the Same Priority.

62
Java Thread Scheduling (cont)
  • JVM Schedules a Thread to Run When
  • The Currently Running Thread Exits the Runnable
    State.
  • A Higher Priority Thread Enters the Runnable
    State
  • Note: the JVM Does Not Specify Whether Threads
    Are Time-Sliced or Not.

63
Time-Slicing
  • Since the JVM Doesn't Ensure Time-Slicing, the
    yield() Method May Be Used:
  • while (true) {
  •     // perform CPU-intensive task
  •     . . .
  •     Thread.yield();
  • }
  • This Yields Control to Another Thread of Equal
    Priority.

64
Thread Priorities
  • Thread Priorities:
    Priority                Comment
    Thread.MIN_PRIORITY     Minimum Thread Priority
    Thread.MAX_PRIORITY     Maximum Thread Priority
    Thread.NORM_PRIORITY    Default Thread Priority
  • Priorities May Be Set Using the setPriority() method:
  • setPriority(Thread.NORM_PRIORITY + 2);
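
A small runnable example combining the setPriority() and yield() calls from the last two slides; Thread.NORM_PRIORITY, setPriority(), and yield() are the real java.lang.Thread API, while the worker logic and thread names are illustrative:

    public class PriorityExample {
        public static void main(String[] args) throws InterruptedException {
            Runnable worker = () -> {
                for (int i = 0; i < 3; i++) {
                    System.out.println(Thread.currentThread().getName()
                            + " (priority " + Thread.currentThread().getPriority() + ")");
                    Thread.yield();   // hint: let another runnable thread of equal priority run
                }
            };
            Thread high = new Thread(worker, "high");
            Thread normal = new Thread(worker, "normal");
            high.setPriority(Thread.NORM_PRIORITY + 2);   // as on the slide
            normal.setPriority(Thread.NORM_PRIORITY);
            high.start();
            normal.start();
            high.join();
            normal.join();
        }
    }
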