1
Scheduling
2
Model of Process Execution
[Diagram: a New Process enters the Ready List; the Scheduler dispatches a ready job to the CPU; the Running job either finishes (Done), returns to the Ready List on preemption or a voluntary yield, or issues a resource Request and waits as Blocked until the Resource Manager Allocates the resource and moves it back to Ready.]
3
Recall Resource Manager
[Diagram: a Process Requests a resource from the Resource Manager; if no unit of the Resource Pool is free, it waits with the Blocked Processes until a Release lets the manager Allocate a unit to it.]
4
Scheduler as CPU Resource Manager
[Diagram: the Scheduler acts as the resource manager for the CPU; ready-to-run processes wait in the Ready List, a Dispatch allocates units of time on the time-multiplexed CPU, and a Release (yield or preemption) returns the CPU to the Scheduler.]
5
Scheduler Components
[Diagram: scheduler components: the enqueuer places the Process Descriptor of a process arriving from other states onto the Ready List; the dispatcher, via the context switcher, selects a descriptor and loads it onto the CPU as the Running Process.]
6
Process Context
[Diagram: the process context is the CPU state the process uses: general registers R1..Rn, status registers, the PC and IR in the control unit, and the left/right operand and result paths through the ALU functional unit.]
7
Context Switch
[Diagram: on a context switch, the CPU register contents of the outgoing process are saved into its Process Descriptor, and the saved context of the incoming process is loaded from its Process Descriptor into the CPU.]
8
Invoking the Scheduler
  • Need a mechanism to call the scheduler
  • Voluntary call
  • Process blocks itself
  • Calls the scheduler
  • Involuntary call
  • External force (interrupt) blocks the process
  • Calls the scheduler

9
Voluntary CPU Sharing
yield(pi.pc, pj.pc) {
    memory[pi.pc] = PC;
    PC = memory[pj.pc];
}
  • pi can be automatically determined from the
    processor status registers

yield(*, pj.pc) {
    memory[pi.pc] = PC;
    PC = memory[pj.pc];
}
10
More on Yield
  • pi and pj can resume one another's execution

yield(*, pj.pc);
. . .
yield(*, pi.pc);
. . .
yield(*, pj.pc);
. . .
  • Suppose pj is the scheduler

// pi yields to the scheduler
yield(*, pj.pc);
// scheduler chooses pk
yield(*, pk.pc);
// pk yields to the scheduler
yield(*, pj.pc);
// scheduler chooses ...
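The same idea can be sketched in ordinary user code. Below is a minimal, hedged illustration (not the book's kernel-level yield) of two routines voluntarily passing the CPU back and forth using the POSIX ucontext API; the routine names, stack sizes, and iteration counts are arbitrary choices for the example.

#include <stdio.h>
#include <ucontext.h>

static ucontext_t ctx_main, ctx_a, ctx_b;

/* Each routine runs briefly, then voluntarily yields to the other,
   just as yield(*, pj.pc) saves the current PC and loads pj's. */
static void proc_a(void) {
    for (int i = 0; i < 3; i++) {
        printf("A runs, then yields\n");
        swapcontext(&ctx_a, &ctx_b);      /* save A's context, resume B */
    }
    swapcontext(&ctx_a, &ctx_main);       /* give control back to main */
}

static void proc_b(void) {
    for (int i = 0; i < 3; i++) {
        printf("B runs, then yields\n");
        swapcontext(&ctx_b, &ctx_a);      /* save B's context, resume A */
    }
}

int main(void) {
    static char stack_a[16384], stack_b[16384];

    getcontext(&ctx_a);
    ctx_a.uc_stack.ss_sp = stack_a;
    ctx_a.uc_stack.ss_size = sizeof stack_a;
    ctx_a.uc_link = &ctx_main;
    makecontext(&ctx_a, proc_a, 0);

    getcontext(&ctx_b);
    ctx_b.uc_stack.ss_sp = stack_b;
    ctx_b.uc_stack.ss_size = sizeof stack_b;
    ctx_b.uc_link = &ctx_main;
    makecontext(&ctx_b, proc_b, 0);

    swapcontext(&ctx_main, &ctx_a);       /* dispatch A first */
    return 0;
}

This is essentially how user-level thread packages do cooperative scheduling; the kernel's preemptive version replaces the explicit swapcontext calls with a timer-driven context switch.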
11
Voluntary Sharing
  • Every process periodically yields to the
    scheduler
  • Relies on correct process behavior
  • Malicious
  • Accidental
  • Need a mechanism to override the running process

12
Involuntary CPU Sharing
  • Interval timer
  • Device to produce a periodic interrupt
  • Programmable period

IntervalTimer() {
    InterruptCount--;
    if (InterruptCount < 0) {
        InterruptRequest = TRUE;
        InterruptCount = K;
    }
}

SetInterval(programmableValue) {
    K = programmableValue;
    InterruptCount = K;
}
13
Involuntary CPU Sharing (cont)
  • Interval timer device handler
  • Keeps an in-memory clock up-to-date (see Chap 4
    lab exercise)
  • Invokes the scheduler

IntervalTimerHandler() {
    Time++;                      // update the clock
    TimeToSchedule--;
    if (TimeToSchedule < 0) {
        <invoke scheduler>;
        TimeToSchedule = TimeSlice;
    }
}
14
Contemporary Scheduling
  • Involuntary CPU sharing -- timer interrupts
  • Time quantum determined by interval timer --
    usually fixed size for every process using the
    system
  • Sometimes called the time slice length

15
Choosing a Process to Run
  • Mechanism never changes
  • Strategy: the policy the dispatcher uses to select a
    process from the ready list
  • Different policies for different requirements

16
Policy Considerations
  • Policy can control/influence
  • CPU utilization
  • Average time a process waits for service
  • Average amount of time to complete a job
  • Could strive for any of
  • Equitability
  • Favor very short or long jobs
  • Meet priority requirements
  • Meet deadlines

17
Optimal Scheduling
  • Suppose the scheduler knows each process pi's
    service time, t(pi) -- or it can estimate each
    t(pi)
  • Policy can optimize on any criteria, e.g.,
  • CPU utilization
  • Waiting time
  • Deadline
  • To find an optimal schedule
  • Have a finite, fixed number of pi
  • Know t(pi) for each pi
  • Enumerate all schedules, then choose the best

18
However ...
  • The t(pi) are almost certainly just estimates
  • General algorithm to choose an optimal schedule is
    O(n²)
  • Other processes may arrive while these processes
    are being serviced
  • Usually, the optimal schedule is only a theoretical
    benchmark; scheduling policies try to
    approximate it

19
Model of Process Execution
[Diagram: the model of process execution from slide 2, repeated.]
20
Talking About Scheduling ...
  • Let P = {pi | 0 ≤ i < n} = the set of processes
  • Let S(pi) ∈ {running, ready, blocked}
  • Let t(pi) = the time process pi needs to be in the
    running state (the service time)
  • Let W(pi) = the time pi spends in the ready state before
    its first transition to running (the wait time)
  • Let TTRnd(pi) = the time from when pi first enters the ready
    state until it last exits the ready state (the turnaround time)
  • Batch throughput rate = inverse of the average TTRnd
  • Timesharing response time = W(pi)

21
Simplified Model
[Diagram: the process-execution model again; the analysis below considers only the Ready List, Scheduler, and CPU.]
  • Simplified, but still provides analysis results
  • Easy to analyze performance
  • No issue of voluntary/involuntary sharing

22
Estimating CPU Utilization
[Diagram: New Processes arrive at the Ready List, the Scheduler dispatches them to the CPU, and finished processes leave the system (Done).]
Let λ = the average rate at which processes are placed on the Ready List (the arrival rate)
Let μ = the average service rate, so 1/μ = the average t(pi)
λ processes per second enter the system; each pi uses 1/μ units of CPU time
23
Estimating CPU Utilization
[Diagram and the definitions of λ and μ repeated from the previous slide.]
Let ρ = the fraction of the time that the CPU is expected to be busy:
ρ = (pi arriving per unit time) × (average time each spends on the CPU)
  = λ × (1/μ) = λ/μ
  • Notice we must have λ < μ (i.e., ρ < 1)
  • What if ρ approaches 1?
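As a purely illustrative worked example (numbers not from the slides): if processes arrive at λ = 10 per second and the average service time is 1/μ = 0.08 seconds (so μ = 12.5 per second), then ρ = λ/μ = 10/12.5 = 0.8, i.e., the CPU is expected to be busy 80% of the time. As λ approaches μ, ρ approaches 1 and the ready list, and therefore the waiting time, grows without bound.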

24
Nonpreemptive Schedulers
[Diagram: New Processes and blocked-or-preempted processes both enter the Ready List; the Scheduler dispatches them to the CPU; completed processes exit (Done).]
  • Try to use the simplified scheduling model
  • Only consider running and ready states
  • Ignores time in blocked state
  • New process created when it enters ready state
  • Process is destroyed when it enters blocked
    state
  • Really just looking at small phases of a process

25
First-Come-First-Served
  i   t(pi)
  0    350
  1    125
  2    475
  3    250
  4     75

[Gantt chart: p0 runs from 0 to 350]
TTRnd(p0) = t(p0) = 350
W(p0) = 0
26
First-Come-First-Served
(service times as in the table above)
[Gantt chart: p0 0–350, p1 350–475]
TTRnd(p0) = t(p0) = 350
TTRnd(p1) = t(p1) + TTRnd(p0) = 125 + 350 = 475
W(p0) = 0
W(p1) = TTRnd(p0) = 350
27
First-Come-First-Served
(service times as in the table above)
[Gantt chart: p0 0–350, p1 350–475, p2 475–950]
TTRnd(p0) = t(p0) = 350
TTRnd(p1) = t(p1) + TTRnd(p0) = 125 + 350 = 475
TTRnd(p2) = t(p2) + TTRnd(p1) = 475 + 475 = 950
W(p0) = 0
W(p1) = TTRnd(p0) = 350
W(p2) = TTRnd(p1) = 475
28
First-Come-First-Served
(service times as in the table above)
[Gantt chart: p0 0–350, p1 350–475, p2 475–950, p3 950–1200]
TTRnd(p0) = t(p0) = 350
TTRnd(p1) = t(p1) + TTRnd(p0) = 125 + 350 = 475
TTRnd(p2) = t(p2) + TTRnd(p1) = 475 + 475 = 950
TTRnd(p3) = t(p3) + TTRnd(p2) = 250 + 950 = 1200
W(p0) = 0
W(p1) = TTRnd(p0) = 350
W(p2) = TTRnd(p1) = 475
W(p3) = TTRnd(p2) = 950
29
First-Come-First-Served
(service times as in the table above)
[Gantt chart: p0 0–350, p1 350–475, p2 475–950, p3 950–1200, p4 1200–1275]
TTRnd(p0) = t(p0) = 350
TTRnd(p1) = t(p1) + TTRnd(p0) = 125 + 350 = 475
TTRnd(p2) = t(p2) + TTRnd(p1) = 475 + 475 = 950
TTRnd(p3) = t(p3) + TTRnd(p2) = 250 + 950 = 1200
TTRnd(p4) = t(p4) + TTRnd(p3) = 75 + 1200 = 1275
W(p0) = 0
W(p1) = TTRnd(p0) = 350
W(p2) = TTRnd(p1) = 475
W(p3) = TTRnd(p2) = 950
W(p4) = TTRnd(p3) = 1200
30
FCFS Average Wait Time
(service times as in the table above)
  • Easy to implement
  • Ignores service time, etc.
  • Not a great performer

[Gantt chart: p0 0–350, p1 350–475, p2 475–950, p3 950–1200, p4 1200–1275]
TTRnd(p0) = t(p0) = 350
TTRnd(p1) = t(p1) + TTRnd(p0) = 125 + 350 = 475
TTRnd(p2) = t(p2) + TTRnd(p1) = 475 + 475 = 950
TTRnd(p3) = t(p3) + TTRnd(p2) = 250 + 950 = 1200
TTRnd(p4) = t(p4) + TTRnd(p3) = 75 + 1200 = 1275
W(p0) = 0, W(p1) = 350, W(p2) = 475, W(p3) = 950, W(p4) = 1200
Wavg = (0 + 350 + 475 + 950 + 1200)/5 = 2975/5 = 595
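As a hedged illustration (not part of the original slides), the FCFS numbers above can be reproduced by a few lines of C; the array of service times is the example's, everything else is an arbitrary choice:

#include <stdio.h>

int main(void) {
    int t[] = {350, 125, 475, 250, 75};   /* service times from the example */
    int n = 5, clock = 0, wait_sum = 0;

    for (int i = 0; i < n; i++) {         /* FCFS: run in arrival order */
        int w = clock;                    /* W(pi): time waited before running */
        clock += t[i];                    /* pi runs to completion */
        printf("p%d: W=%d TTRnd=%d\n", i, w, clock);
        wait_sum += w;
    }
    printf("Wavg = %g\n", wait_sum / (double)n);   /* 595 for this data */
    return 0;
}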
31
Predicting Wait Time in FCFS
  • In FCFS, when a process arrives, all processes already in
    the ready list will be serviced before this job
  • Let μ be the service rate
  • Let L be the ready list length
  • Wavg(p) = L(1/μ) + 0.5(1/μ) = L/μ + 1/(2μ)
    (the extra 0.5(1/μ) is the expected remaining time of the
    job currently on the CPU, which on average is half finished)
  • Compare the predicted wait with the actual values in the earlier
    examples
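For a rough check against the FCFS example (a hypothetical scenario, not one from the slides): the mean service time there is 1/μ = 1275/5 = 255, so a job arriving to find L = 2 jobs in the ready list would be predicted to wait about 2(255) + 0.5(255) ≈ 638 time units.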

32
Shortest Job Next
(service times as in the table above)
[Gantt chart: p4 runs 0–75]
TTRnd(p4) = t(p4) = 75
W(p4) = 0
33
Shortest Job Next
(service times as in the table above)
[Gantt chart: p4 0–75, p1 75–200]
TTRnd(p1) = t(p1) + t(p4) = 125 + 75 = 200
TTRnd(p4) = t(p4) = 75
W(p1) = 75
W(p4) = 0
34
Shortest Job Next
(service times as in the table above)
[Gantt chart: p4 0–75, p1 75–200, p3 200–450]
TTRnd(p1) = t(p1) + t(p4) = 125 + 75 = 200
TTRnd(p3) = t(p3) + t(p1) + t(p4) = 250 + 125 + 75 = 450
TTRnd(p4) = t(p4) = 75
W(p1) = 75, W(p3) = 200, W(p4) = 0
35
Shortest Job Next
(service times as in the table above)
[Gantt chart: p4 0–75, p1 75–200, p3 200–450, p0 450–800]
TTRnd(p0) = t(p0) + t(p3) + t(p1) + t(p4) = 350 + 250 + 125 + 75 = 800
TTRnd(p1) = t(p1) + t(p4) = 125 + 75 = 200
TTRnd(p3) = t(p3) + t(p1) + t(p4) = 250 + 125 + 75 = 450
TTRnd(p4) = t(p4) = 75
W(p0) = 450, W(p1) = 75, W(p3) = 200, W(p4) = 0
36
Shortest Job Next
(service times as in the table above)
[Gantt chart: p4 0–75, p1 75–200, p3 200–450, p0 450–800, p2 800–1275]
TTRnd(p0) = t(p0) + t(p3) + t(p1) + t(p4) = 350 + 250 + 125 + 75 = 800
TTRnd(p1) = t(p1) + t(p4) = 125 + 75 = 200
TTRnd(p2) = t(p2) + t(p0) + t(p3) + t(p1) + t(p4) = 475 + 350 + 250 + 125 + 75 = 1275
TTRnd(p3) = t(p3) + t(p1) + t(p4) = 250 + 125 + 75 = 450
TTRnd(p4) = t(p4) = 75
W(p0) = 450, W(p1) = 75, W(p2) = 800, W(p3) = 200, W(p4) = 0
37
Shortest Job Next
(service times as in the table above)
  • Minimizes wait time
  • May starve large jobs
  • Must know service times

[Gantt chart: p4 0–75, p1 75–200, p3 200–450, p0 450–800, p2 800–1275]
TTRnd(p0) = t(p0) + t(p3) + t(p1) + t(p4) = 350 + 250 + 125 + 75 = 800
TTRnd(p1) = t(p1) + t(p4) = 125 + 75 = 200
TTRnd(p2) = t(p2) + t(p0) + t(p3) + t(p1) + t(p4) = 475 + 350 + 250 + 125 + 75 = 1275
TTRnd(p3) = t(p3) + t(p1) + t(p4) = 250 + 125 + 75 = 450
TTRnd(p4) = t(p4) = 75
W(p0) = 450, W(p1) = 75, W(p2) = 800, W(p3) = 200, W(p4) = 0
Wavg = (450 + 75 + 800 + 200 + 0)/5 = 1525/5 = 305
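A hedged sketch of the same calculation for SJN (again, not from the slides): sort the jobs by service time, then sweep through them exactly as in the FCFS sketch above.

#include <stdio.h>
#include <stdlib.h>

static int t[] = {350, 125, 475, 250, 75};   /* service times from the example */

static int by_service_time(const void *a, const void *b) {
    return t[*(const int *)a] - t[*(const int *)b];
}

int main(void) {
    int order[] = {0, 1, 2, 3, 4};
    int n = 5, clock = 0, wait_sum = 0;

    qsort(order, n, sizeof order[0], by_service_time);  /* shortest job first */
    for (int k = 0; k < n; k++) {
        int i = order[k];
        printf("p%d: W=%d TTRnd=%d\n", i, clock, clock + t[i]);
        wait_sum += clock;
        clock += t[i];
    }
    printf("Wavg = %g\n", wait_sum / (double)n);        /* 305 for this data */
    return 0;
}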
38
Priority Scheduling
  i   t(pi)   Pri
  0    350     5
  1    125     2
  2    475     3
  3    250     1
  4     75     4
  • Reflects importance of external use
  • May cause starvation
  • Can address starvation with aging

[Gantt chart: p3 0–250, p1 250–375, p2 375–850, p4 850–925, p0 925–1275]
TTRnd(p0) = t(p0) + t(p4) + t(p2) + t(p1) + t(p3) = 350 + 75 + 475 + 125 + 250 = 1275
TTRnd(p1) = t(p1) + t(p3) = 125 + 250 = 375
TTRnd(p2) = t(p2) + t(p1) + t(p3) = 475 + 125 + 250 = 850
TTRnd(p3) = t(p3) = 250
TTRnd(p4) = t(p4) + t(p2) + t(p1) + t(p3) = 75 + 475 + 125 + 250 = 925
W(p0) = 925, W(p1) = 250, W(p2) = 375, W(p3) = 0, W(p4) = 850
Wavg = (925 + 250 + 375 + 0 + 850)/5 = 2400/5 = 480
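To illustrate the aging idea mentioned above, here is a hedged sketch (one possible aging rule assumed for illustration, not the book's algorithm): each time a process is passed over, its effective priority improves by one, so a low-priority job cannot wait forever. With the example's data the aging never changes the order, so the output matches the schedule above.

#include <stdio.h>

#define N 5

int main(void) {
    int t[N]    = {350, 125, 475, 250, 75};  /* service times */
    int pri[N]  = {5, 2, 3, 1, 4};           /* smaller value runs sooner */
    int age[N]  = {0};
    int done[N] = {0};
    int clock = 0;

    for (int run = 0; run < N; run++) {
        int best = -1;
        for (int i = 0; i < N; i++) {        /* pick best effective priority */
            if (done[i]) continue;
            if (best < 0 || pri[i] - age[i] < pri[best] - age[best])
                best = i;
        }
        for (int i = 0; i < N; i++)          /* age every process passed over */
            if (!done[i] && i != best) age[i]++;
        printf("p%d: W=%d TTRnd=%d\n", best, clock, clock + t[best]);
        clock += t[best];
        done[best] = 1;
    }
    return 0;
}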
39
Deadline Scheduling
  i   t(pi)   Deadline
  0    350      575
  1    125      550
  2    475     1050
  3    250     (none)
  4     75      200
  • Allocates service by deadline
  • May not be feasible

[Gantt charts: three alternative schedules over 0–1275, each meeting every deadline (p4 by 200, p1 by 550, p0 by 575, p2 by 1050); p3 has no deadline.]
40
Preemptive Schedulers
[Diagram: the process-execution model with preemption; the running process can be forced off the CPU and back onto the Ready List.]
  • The highest-priority process is guaranteed to be
    running at all times
  • Or at least at the beginning of a time slice
  • Dominant form of contemporary scheduling
  • But complex to build and analyze

41
Round Robin (TQ=50)
(service times as in the table above)
[Gantt chart: p0 runs its first quantum, 0–50]
W(p0) = 0
42
Round Robin (TQ=50)
(service times as in the table above)
[Gantt chart: p0 0–50, p1 50–100]
W(p0) = 0, W(p1) = 50
43
Round Robin (TQ=50)
(service times as in the table above)
[Gantt chart: p0 0–50, p1 50–100, p2 100–150]
W(p0) = 0, W(p1) = 50, W(p2) = 100
44
Round Robin (TQ=50)
(service times as in the table above)
[Gantt chart: p0 0–50, p1 50–100, p2 100–150, p3 150–200]
W(p0) = 0, W(p1) = 50, W(p2) = 100, W(p3) = 150
45
Round Robin (TQ=50)
(service times as in the table above)
[Gantt chart: p0 0–50, p1 50–100, p2 100–150, p3 150–200, p4 200–250]
W(p0) = 0, W(p1) = 50, W(p2) = 100, W(p3) = 150, W(p4) = 200
46
Round Robin (TQ=50)
(service times as in the table above)
[Gantt chart: after one full round (0–250), p0 begins its second quantum at 250]
W(p0) = 0, W(p1) = 50, W(p2) = 100, W(p3) = 150, W(p4) = 200
47
Round Robin (TQ=50)
(service times as in the table above)
[Gantt chart: two full rounds; p4 needs only 25 units of its second quantum and finishes at 475]
TTRnd(p4) = 475
W(p0) = 0, W(p1) = 50, W(p2) = 100, W(p3) = 150, W(p4) = 200
48
Round Robin (TQ=50)
(service times as in the table above)
[Gantt chart: p1 finishes 25 units into its third quantum, at 550]
TTRnd(p1) = 550, TTRnd(p4) = 475
W(p0) = 0, W(p1) = 50, W(p2) = 100, W(p3) = 150, W(p4) = 200
49
Round Robin (TQ=50)
(service times as in the table above)
[Gantt chart: the remaining processes p0, p2, p3 continue to alternate; p3 finishes at 950]
TTRnd(p1) = 550, TTRnd(p3) = 950, TTRnd(p4) = 475
W(p0) = 0, W(p1) = 50, W(p2) = 100, W(p3) = 150, W(p4) = 200
50
Round Robin (TQ=50)
(service times as in the table above)
[Gantt chart: p0 and p2 continue to alternate; p0 finishes at 1100]
TTRnd(p0) = 1100, TTRnd(p1) = 550, TTRnd(p3) = 950, TTRnd(p4) = 475
W(p0) = 0, W(p1) = 50, W(p2) = 100, W(p3) = 150, W(p4) = 200
51
Round Robin (TQ=50)
(service times as in the table above)
[Gantt chart: p2 runs alone after 1100 and finishes at 1275]
TTRnd(p0) = 1100, TTRnd(p1) = 550, TTRnd(p2) = 1275, TTRnd(p3) = 950, TTRnd(p4) = 475
W(p0) = 0, W(p1) = 50, W(p2) = 100, W(p3) = 150, W(p4) = 200
52
Round Robin (TQ=50)
  • Equitable
  • Most widely-used
  • Fits naturally with interval timer

(service times as in the table above)
[Gantt chart: the complete schedule, 0–1275, cycling through p0, p1, p2, p3, p4 in 50-unit quanta]
TTRnd(p0) = 1100, TTRnd(p1) = 550, TTRnd(p2) = 1275, TTRnd(p3) = 950, TTRnd(p4) = 475
W(p0) = 0, W(p1) = 50, W(p2) = 100, W(p3) = 150, W(p4) = 200
TTRnd_avg = (1100 + 550 + 1275 + 950 + 475)/5 = 4350/5 = 870
Wavg = (0 + 50 + 100 + 150 + 200)/5 = 500/5 = 100
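As with FCFS and SJN, the round-robin numbers can be checked with a hedged simulation sketch (not from the slides); it assumes all five processes are ready at time 0 and are visited in index order, as in the example.

#include <stdio.h>

#define N  5
#define TQ 50

int main(void) {
    int t[N] = {350, 125, 475, 250, 75};   /* remaining service time */
    int w[N] = {0}, ttrnd[N] = {0}, started[N] = {0};
    int clock = 0, left = N;

    while (left > 0) {
        for (int i = 0; i < N; i++) {          /* one pass over the ready list */
            if (t[i] == 0) continue;
            if (!started[i]) { w[i] = clock; started[i] = 1; }
            int run = t[i] < TQ ? t[i] : TQ;   /* run one quantum, or less */
            clock += run;
            t[i] -= run;
            if (t[i] == 0) { ttrnd[i] = clock; left--; }
        }
    }
    for (int i = 0; i < N; i++)
        printf("p%d: W=%d TTRnd=%d\n", i, w[i], ttrnd[i]);
    return 0;
}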
53
RR with Overhead = 10 (TQ=50)
  • Overhead must be considered

(service times as in the table above)
With 10 time units of context-switch overhead charged at each dispatch after the first, the schedule stretches from 1275 to 1535 time units, and every wait and turnaround time grows accordingly.
[Gantt chart: the same round-robin order as before, with a 10-unit overhead slot before each quantum after the first]
TTRnd(p0) = 1320, TTRnd(p1) = 660, TTRnd(p2) = 1535, TTRnd(p3) = 1140, TTRnd(p4) = 565
W(p0) = 0, W(p1) = 60, W(p2) = 120, W(p3) = 180, W(p4) = 240
TTRnd_avg = (1320 + 660 + 1535 + 1140 + 565)/5 = 5220/5 = 1044
Wavg = (0 + 60 + 120 + 180 + 240)/5 = 600/5 = 120
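To reproduce this in the round-robin sketch above, one could (as an assumption about how the overhead is charged) add a constant cost such as clock += 10; before every dispatch except the very first, which yields the 1535-unit completion time shown here.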
54
Multi-Level Queues
[Diagram: four ready lists, Ready List0 (highest priority) through Ready List3 (lowest), all feed the Scheduler and CPU; preempted or yielding processes return to a ready list, and finished processes leave (Done).]
  • All processes at level i run before any process at
    level j > i
  • At a level, use another policy, e.g. RR (a selection
    sketch follows below)

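A hedged sketch of the selection rule (the level count and the FIFO-within-level choice are assumptions for illustration): dispatch() always serves the highest-priority non-empty ready list.

#include <stdio.h>

#define LEVELS 4
#define MAXQ   64

/* Multi-level ready lists: q[level] holds the pids waiting at that level. */
static int q[LEVELS][MAXQ];
static int count[LEVELS];

static void make_ready(int level, int pid) {
    q[level][count[level]++] = pid;
}

static int dispatch(void) {
    for (int lvl = 0; lvl < LEVELS; lvl++) {       /* highest level first */
        if (count[lvl] == 0) continue;
        int pid = q[lvl][0];                       /* take the front entry */
        for (int i = 1; i < count[lvl]; i++)       /* shift the rest up (FIFO) */
            q[lvl][i - 1] = q[lvl][i];
        count[lvl]--;
        return pid;
    }
    return -1;                                     /* nothing is ready */
}

int main(void) {
    make_ready(2, 10);      /* user-level process */
    make_ready(0, 3);       /* system-level process */
    make_ready(2, 11);
    for (int pid; (pid = dispatch()) != -1; )
        printf("dispatch pid %d\n", pid);          /* prints 3, 10, 11 */
    return 0;
}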
55
Contemporary Scheduling
  • Involuntary CPU sharing -- timer interrupts
  • Time quantum determined by interval timer --
    usually fixed for every process using the system
  • Sometimes called the time slice length
  • Priority-based process (job) selection
  • Select the highest priority process
  • Priority reflects policy
  • With preemption
  • Usually a variant of Multi-Level Queues

56
BSD 4.4 Scheduling
  • Involuntary CPU Sharing
  • Preemptive algorithms
  • 32 Multi-Level Queues
  • Queues 0-7 are reserved for system functions
  • Queues 8-31 are for user space functions
  • nice influences (but does not dictate) queue level

57
Windows NT/2K Scheduling
  • Involuntary CPU Sharing across threads
  • Preemptive algorithms
  • 32 Multi-Level Queues
  • Highest 16 levels are real-time
  • Next lower 15 are for system/user threads
  • Range determined by process base priority
  • Lowest level is for the idle thread