Chapter 5: CPU Scheduling

Transcript and Presenter's Notes
1
Chapter 5: CPU Scheduling
  • Always want to have CPU working
  • Usually many processes in ready queue
  • Ready to run on CPU
  • Consider a single CPU here
  • Need strategies for
  • Allocating CPU time
  • Selecting next process to run
  • Deciding what happens after a process completes a
    system call, or completes I/O
  • Short-term scheduling
  • Must not take much CPU time

2
CPU Burst / I/O Burst Alternation (Fig. 5.1)
What happens when the I/O burst occurs? What is
the process state at that time?
3
When does/can scheduling occur?
  • When a process switches from running to waiting
    state
  • E.g., when it does an I/O request
  • When a process terminates
  • When a process switches from running to ready
    state
  • E.g., when a timer interrupt occurs
  • When a process switches from waiting state to
    ready state
  • E.g., completion of I/O

Cases 1 & 2: non-preemptive scheduling. Cases 3 &
4: preemptive scheduling (which also includes
cases 1 & 2).
4
Scheduling Performance Criteria
  • CPU utilization
  • Percentage of time that CPU is busy (and not
    idle), over some period of time
  • Throughput
  • Number of jobs completed per unit time
  • Turnaround time
  • Time interval from submission of a process until
    completion of the process
  • Waiting time
  • Sum of the time periods spent in the ready queue
  • Response time
  • Time from submission until the first response
    (output) is produced
  • Approximate by time from submission until first
    access to the CPU

5
Scheduling Algorithms
  • First-Come, First-Served (FCFS)
  • Complete the jobs in order of arrival
  • Shortest Job First (SJF)
  • Complete the job with shortest next CPU burst
  • Priority (PRI)
  • Processes have a priority
  • Allocate CPU to process with highest priority
  • Round-Robin (RR)
  • Each process gets a small unit of time on CPU
    (time quantum or time slice)

6
FCFS First-Come First-Served
  • Ready queue data structure: a FIFO queue
  • Example:
  • Draw Gantt chart
  • Compute the average waiting time for processes
    with the following next CPU burst times and ready
    queue order:
  • P1: 20
  • P2: 12
  • P3: 8
  • P4: 16
  • P5: 4
  • Waiting time: sum of the time periods spent in
    the ready queue

7
Solution: Gantt Chart Method
FCFS: First-Come, First-Served
  • Waiting times:
  • P1: 0
  • P2: 20
  • P3: 32
  • P4: 40
  • P5: 56
  • Average wait time: 148/5 = 29.6
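
(The waiting times above can be checked with a short simulation.
Below is a minimal Python sketch, not part of the original slides;
the function name fcfs_waits and the process ordering are illustrative.)

    def fcfs_waits(bursts):
        """Waiting time of each process under FCFS: a process waits
        for the sum of the bursts of everything ahead of it."""
        waits, elapsed = [], 0
        for burst in bursts:
            waits.append(elapsed)    # time spent in the ready queue
            elapsed += burst         # process then runs to completion
        return waits

    bursts = [20, 12, 8, 16, 4]      # P1..P5 in ready-queue order
    waits = fcfs_waits(bursts)
    print(waits)                     # [0, 20, 32, 40, 56]
    print(sum(waits) / len(waits))   # 29.6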

8
FIFO
  • Disadvantage: long waiting times

9
SJF Shortest Job First
  • The job with the shortest next CPU burst time is
    selected
  • Example (from before)
  • CPU job burst times and ready queue order:
  • P1: 20
  • P2: 12
  • P3: 8
  • P4: 16
  • P5: 4
  • Draw Gantt chart and compute the average waiting
    time given SJF CPU scheduling

10
SJF Solution
  • Waiting times (how long did process wait before
    being scheduled on the CPU?)
  • P1: 40
  • P2: 12
  • P3: 4
  • P4: 24
  • P5: 0
  • Average wait time: 16

(Recall FIFO scheduling had average wait time of
29.6)
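
(As a cross-check, here is a small Python sketch, again illustrative
rather than from the slides: it sorts the same bursts by length and
reuses the FCFS-style accumulation of waiting times.)

    def sjf_waits(bursts):
        """Non-preemptive SJF with all jobs ready at time 0:
        run the jobs in order of increasing burst length."""
        order = sorted(range(len(bursts)), key=lambda i: bursts[i])
        waits, elapsed = [0] * len(bursts), 0
        for i in order:
            waits[i] = elapsed       # wait until all shorter jobs finish
            elapsed += bursts[i]
        return waits

    bursts = [20, 12, 8, 16, 4]      # P1..P5
    waits = sjf_waits(bursts)
    print(waits)                     # [40, 12, 4, 24, 0]
    print(sum(waits) / len(waits))   # 16.0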
11
SJF
  • Provably shortest average wait time
  • BUT: what do we need to actually implement this?

12
SJF
  • Requires future knowledge
  • But, may estimate or predict
  • How to estimate or predict the future?
  • Use history to predict duration of next CPU burst
  • E.g., base it on the duration of the last CPU burst
    and a number summarizing the durations of prior CPU
    bursts
  • τ_{n+1} = α · t_n + (1 − α) · τ_n
  • Where:
  • t_n is the actual duration of the nth CPU burst of
    the process,
  • α is a constant indicating how much to base the
    estimate on the last CPU burst, and
  • τ_n is the estimated duration of the nth CPU burst

13
Example Estimate
  • Say α = 0.5
  • τ_0 = 10 (last estimate)
  • Current (measured) CPU burst: t_0 = 6
  • What is the estimate of the next CPU burst?
  • τ_1 = 0.5 × 6 + 0.5 × 10 = 8
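
(A few lines of Python make the exponential-average update concrete;
this is a sketch whose variable names simply mirror the formula above.)

    def next_estimate(alpha, measured, prev_estimate):
        """Exponential averaging: tau_{n+1} = alpha*t_n + (1-alpha)*tau_n."""
        return alpha * measured + (1 - alpha) * prev_estimate

    print(next_estimate(0.5, 6, 10))   # 8.0, as in the example above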

14
Time Complexity of SJF with estimation method?
  • Need to keep ready queue ordered
  • Process with the shortest estimated next CPU
    burst must be kept at the beginning of the
    queue
  • If the queue is kept sorted, each insertion takes
    O(n) time (linear scan of a list) or O(lg n) time
    (e.g., with a heap), where n is the queue length
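
(For example, Python's heapq keeps the process with the smallest
estimated burst at the front with O(lg n) insertion; in this sketch,
tuples (estimate, pid) stand in for real process control blocks.)

    import heapq

    ready = []                               # min-heap ordered by estimated burst
    for pid, est in [(1, 20), (2, 12), (3, 8), (4, 16), (5, 4)]:
        heapq.heappush(ready, (est, pid))    # O(lg n) insertion

    est, pid = heapq.heappop(ready)          # shortest estimated burst first
    print(pid, est)                          # 5 4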

15
Priority Scheduling
  • Have to decide on a numbering scheme
  • 0 can be highest or lowest
  • Priorities can be
  • Internal
  • Set according to O/S factors (e.g., memory
    requirements)
  • External
  • Set as a user policy, e.g., user importance
  • Static
  • Fixed for the duration of the process
  • Dynamic
  • Changing during processing
  • E.g., as a function of amount of CPU usage, or
    length of time waiting (a solution to indefinite
    blocking or starvation)

16
Starvation Problem
  • Priority scheduling algorithms suffer from
    starvation (or indefinite blocking)
  • In a heavily loaded system, a steady stream of
    higher-priority processes can result in a low
    priority process never receiving CPU time
  • I.e., it can starve for CPU time
  • One solution: aging
  • Gradually increasing the priority of a process
    that waits for a long time
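
(One way to picture aging: periodically boost the priority of every
waiting process so that even a low-priority process eventually reaches
the top. The sketch below is an invented illustration, not from the
slides, and assumes a smaller number means a higher priority.)

    def age(ready_queue, boost=1):
        """Raise the priority of every waiting process a little
        (smaller number = higher priority)."""
        for proc in ready_queue:
            proc["priority"] = max(0, proc["priority"] - boost)

    ready_queue = [{"pid": 1, "priority": 3}, {"pid": 2, "priority": 120}]
    for _ in range(200):          # run the aging scan periodically
        age(ready_queue)
    print(ready_queue)            # even PID 2 eventually reaches priority 0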

17
Which Scheduling Algorithms Can be Preemptive?
  • FCFS (First-come, First-Served)
  • Non-preemptive
  • SJF (Shortest Job First)
  • Can be either
  • Choice when a new job arrives
  • Can preempt or not
  • Priority
  • Can be either
  • Choice when a process's priority changes or when
    a higher-priority process arrives

18
RR (Round Robin) Scheduling
  • Method
  • Give each process a unit of time (time slice,
    quantum) of execution on CPU
  • Then move to next process in ready queue
  • Continue until all processes completed
  • Necessarily preemptive
  • Requires use of timer interrupt
  • Example
  • CPU job burst times, in queue order:
  • P1: 20
  • P2: 12
  • P3: 8
  • P4: 16
  • P5: 4
  • Draw Gantt chart, and compute average wait time
  • Time quantum of 4
  • Assume 0 context switch time

19
Solution
  • Waiting times:
  • P1: 60 − 20 = 40
  • P2: 44 − 12 = 32
  • P3: 32 − 8 = 24
  • P4: 56 − 16 = 40
  • P5: 20 − 4 = 16
  • Average wait time: 152/5 = 30.4
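
(These numbers can be reproduced with a small simulation. The Python
sketch below is illustrative rather than from the slides; it also takes
an optional context-switch cost, which the example on slide 25 uses.)

    from collections import deque

    def rr_waits(bursts, quantum, ctx_switch=0):
        """Round-robin waiting times (completion time minus burst time).
        A context switch is charged after each quantum expiry or
        completion, except when nothing is left to run."""
        queue = deque(enumerate(bursts))         # (index, remaining burst)
        finish = [0] * len(bursts)
        clock = 0
        while queue:
            i, rem = queue.popleft()
            run = min(quantum, rem)
            clock += run
            rem -= run
            if rem > 0:
                queue.append((i, rem))           # back to the tail of the queue
            else:
                finish[i] = clock
            if queue:
                clock += ctx_switch              # pay the switch if work remains
        return [f - b for f, b in zip(finish, bursts)]

    waits = rr_waits([20, 12, 8, 16, 4], quantum=4)
    print(waits)                                 # [40, 32, 24, 40, 16]
    print(sum(waits) / len(waits))               # 30.4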

20
Other Performance Criteria
  • Response time
  • Estimated by the time from job submission to first
    CPU dispatch
  • Assume all jobs submitted at the same time, in the
    order given
  • Turnaround time
  • Time interval from submission of a process until
    completion of the process
  • Assume all jobs submitted at same time

21
Response Time Calculations
22
Turnaround Time Calculations
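
(The transcript does not include the worked numbers for these two
slides. As a rough sketch, assuming the round-robin example of slide 18
with quantum 4, all jobs submitted at time 0 and a zero-cost context
switch, the first-dispatch and completion times read off the Gantt
chart of slide 19 give the following.)

    # Round robin, quantum 4, bursts P1..P5 = 20, 12, 8, 16, 4, all at t=0.
    first_dispatch = [0, 4, 8, 12, 16]     # response-time estimate per process
    completion     = [60, 44, 32, 56, 20]  # from the Gantt chart on slide 19

    response   = first_dispatch            # submission time is 0 for all
    turnaround = completion                # completion minus 0
    print(sum(response) / 5)               # 8.0  average response time
    print(sum(turnaround) / 5)             # 42.4 average turnaround time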
23
Performance Characteristics of Scheduling
Algorithms
  • Different algorithms will have different
    performance characteristics
  • RR (Round Robin)
  • Average waiting time often high
  • Good average response time
  • Important for interactive or timesharing systems
  • SJF
  • Best average waiting time
  • Some overhead w.r.t. estimating CPU burst lengths
    and keeping the ready queue ordered

24
Context Switching Issues
  • This analysis has not taken context switch
    duration into account
  • In general, the context switch will take time
  • Just like the CPU burst of a process takes time
  • Response time, wait time, etc. will be affected by
    context switch time
  • RR (Round Robin) quantum duration
  • To lower response times, want smaller time
    quantum
  • But, smaller time quantum increases system
    overhead
  • To increase throughput, want a large quantum
    (compared to context switch time)
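
(A rough rule of thumb, not from the slides: if every quantum of useful
work is followed by one context switch, the fraction of CPU time lost
to switching is about cs / (q + cs).)

    def switch_overhead(quantum, ctx_switch):
        """Fraction of CPU time spent on context switches if every
        quantum of work is followed by one context switch."""
        return ctx_switch / (quantum + ctx_switch)

    print(switch_overhead(4, 1))    # 0.2   -> 20% overhead with q=4, cs=1
    print(switch_overhead(40, 1))   # ~0.024 -> larger quantum, less overhead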

25
Example
  • Calculate average wait time for RR (round robin)
    scheduling, for
  • Processes
  • P1: 24
  • P2: 4
  • P3: 4
  • Assume the above order in the ready queue, with P1
    at the head
  • Quantum = 4; context switch time = 1

26
Solution: Average Wait Time With Context Switch Time
  • P1: 0 + 11 + 4 = 15
  • P2: 5
  • P3: 10
  • Average: 10

(This is also a reason to dynamically vary the
time quantum. E.g., Mach O/S.)
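
(The same numbers fall out of the round-robin sketch after slide 19
when called as rr_waits([24, 4, 4], quantum=4, ctx_switch=1), or from
walking the timeline directly; a minimal standalone check of the
arithmetic, assuming the schedule described in the slides:)

    # Timeline with quantum 4 and a 1-unit context switch:
    # P1: 0-4, cs, P2: 5-9, cs, P3: 10-14, cs, then P1 alone,
    # paying one switch after each of its remaining quanta but the last.
    p1_wait = 0 + 11 + 4          # initial wait + wait for P2/P3 + later switches
    p2_wait = 5                   # waits for P1's quantum plus one switch
    p3_wait = 10                  # waits for P1, switch, P2, switch
    print((p1_wait + p2_wait + p3_wait) / 3)   # 10.0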
27
Multi-level Ready Queues
  • Multiple ready queues
  • For different types of processes (e.g., system,
    user)
  • For different priority processes (e.g., Mach)
  • Each queue can
  • Have a different scheduling algorithm
  • Receive a different amount of CPU time
  • Have movement of processes to another queue
    (feedback)
  • E.g., if a process uses too much CPU time, put it
    in a lower-priority queue
  • If a process is getting too little CPU time, put
    it in a higher-priority queue
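
(A toy sketch of the feedback idea, with an invented structure rather
than the exact scheme of Fig. 5.7: three queues of decreasing priority,
demoting a process that uses its full quantum and promoting one that
has waited too long.)

    from collections import deque

    queues = [deque(), deque(), deque()]       # index 0 = highest priority

    def pick_next():
        """Dispatch from the highest-priority non-empty queue."""
        for q in queues:
            if q:
                return q.popleft()
        return None

    def demote(proc, level):
        """Used its whole quantum: move to a lower-priority queue."""
        queues[min(level + 1, len(queues) - 1)].append(proc)

    def promote(proc, level):
        """Waited too long: move to a higher-priority queue."""
        queues[max(level - 1, 0)].append(proc)

    queues[0].append("P1")
    demote("P2", 0)                            # P2 burned its quantum earlier
    print(pick_next(), pick_next())            # P1 P2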

28
Figure 5.6 Multilevel Queue Scheduling
29
Multi-level Feedback Queues (Fig. 5.7)
30
Multiprocessor Scheduling
  • Load sharing
  • Scheduling problem becomes more complex
  • Types of ready queues
  • Local: dispatch to a specific processor
  • Global: dispatch to any processor (load
    sharing)
  • Processor/process relationship
  • Run on only a specific processor (e.g., if it
    must use a device on that processor's private
    bus)
  • Run on any processor
  • Symmetric: each processor does its own scheduling
  • Master/slave
  • Master processor dispatches processes to slaves

31
Symmetric Multiprocessing Synchronization Issues
  • Involves synchronization of access to global
    ready queue
  • E.g., a given job must be executed by only one
    processor at a time
  • Processors: CPU1, CPU2, CPU3, ...
  • When a processor (e.g., CPU1) accesses the ready
    queue
  • All other processors (CPU2, CPU3, ...) that attempt
    to access the ready queue must wait (they are
    denied access)
  • The accessing processor (e.g., CPU1) removes a
    process from the ready queue and dispatches the
    process on itself
  • Just before dispatch, that processor makes the ready
    queue available again for use by the other CPUs
    (CPU2, CPU3, ...)
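
(In code, the "one processor at a time" rule is just a lock around the
ready queue. A minimal Python sketch, with threads standing in for CPUs
and all names illustrative:)

    import threading
    from collections import deque

    ready_queue = deque(["P1", "P2", "P3"])
    queue_lock = threading.Lock()

    def dispatch(cpu_name):
        """Each 'CPU' must hold the lock while it removes a process;
        the lock is released just before the process starts running."""
        with queue_lock:                        # other CPUs wait here
            if not ready_queue:
                return None
            proc = ready_queue.popleft()
        print(f"{cpu_name} dispatches {proc}")  # runs outside the lock
        return proc

    cpus = [threading.Thread(target=dispatch, args=(f"CPU{i}",))
            for i in (1, 2, 3)]
    for t in cpus: t.start()
    for t in cpus: t.join()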

32
Pre-emptive Scheduling Data Structures
  • With pre-emptive CPU scheduling, a new process
    can run when an interrupt occurs
  • Consider the following situation
  • a) Process A is in the middle of updating data
    structures (in files or RAM) and, due to an
    interrupt, is put into the ready queue
  • b) CPU schedules process B to run
  • c) Process B accesses the data structures that were
    being modified by Process A
  • Problem: the data structure update was not completed
    by Process A, so the data structures are in an
    inconsistent state
  • Need mechanisms to support cooperative data
    access
  • When one thread is updating data structures, we
    need to ensure that it completes the updates
    before other threads access the data structure
  • These mechanisms need to work even with
    pre-emptive scheduling!
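
(The usual mechanism is mutual exclusion: the update runs inside a
critical section, so even if the updating thread is preempted, other
threads block until the update is complete. A bare-bones Python sketch,
illustrative rather than from the slides:)

    import threading

    data = {"balance": 100}
    data_lock = threading.Lock()

    def update(delta):
        """Multi-step update kept atomic: a preempted thread still
        holds the lock, so others cannot see a half-done state."""
        with data_lock:
            old = data["balance"]
            # ... preemption here is harmless: the lock is still held ...
            data["balance"] = old + delta

    threads = [threading.Thread(target=update, args=(10,)) for _ in range(5)]
    for t in threads: t.start()
    for t in threads: t.join()
    print(data["balance"])   # always 150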

33
Preemptive vs. Non-Preemptive Kernels (Chapter 6
p. 238-229 p. 256-257)
  • Kernels can be preemptive or non-preemptive
  • E.g., when a process is running a system call, a
    hardware timer may or may not be able to change
    the running process
  • Prior to the Linux 2.6 kernel, Linux was
    non-preemptive
  • Linux 2.6 (stable kernel version was 2.6.28.7 as
    of 3/6/09; http://www.kernel.org) is preemptive,
    but uses spinlocks (SMP), and on uniprocessors
    will disable kernel preemption in some cases