CPU Scheduling - PowerPoint PPT Presentation
Provided by: siue

Transcript and Presenter's Notes

Title: CPU Scheduling
1
CPU Scheduling
  • Stephen Blythe

2
FCFS Scheduling
  • When CPU is idle, which job from queue runs next?
  • Simplest idea: first come, first served (FCFS)
  • easy to implement
  • can penalize short jobs
  • can benefit long jobs

3
Measuring Schedulers
  • Why do we say FCFS hurts short jobs?
  • Measured by turnaround time (TAT)
  • TAT = completion time - arrival time
  • so, the longer you are in the system, the larger
    TAT is
  • could be increased by large CPU time or wait time
  • job 1 arrives at time 0ns, needs 150ns of CPU
    time
  • job 2 arrives at time 50ns, needs 50ns of CPU
    time
  • Under FCFS, both have a TAT of 150ns.
  • Most people would say that job 2 did worse. Why?

4
Measuring Schedulers
  • Instead, use normalized TAT = TAT / (CPU required)
  • So, job 1's normalized TAT = 150ns/150ns = 1.0
  • And job 2's normalized TAT = 150ns/50ns = 3.0
  • Every job would like its TAT to be close to 1.0
  • Why?
  • Can normalized TAT be less than 1.0?
  • How do other scheduling methods do?
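The slide's numbers can be reproduced with a short simulation. This is a sketch, not from the presentation; `fcfs_metrics` is an invented name:

```python
def fcfs_metrics(jobs):
    """Simulate FCFS. jobs = [(arrival, cpu_time), ...] in arrival order.
    Returns a list of (TAT, normalized TAT) per job."""
    time = 0
    results = []
    for arrival, cpu in jobs:
        time = max(time, arrival)   # CPU may sit idle until the job arrives
        time += cpu                 # non-preemptive: run to completion
        tat = time - arrival
        results.append((tat, tat / cpu))
    return results

# The running example: job 1 (arrives 0, needs 150ns), job 2 (arrives 50, needs 50ns)
print(fcfs_metrics([(0, 150), (50, 50)]))  # [(150, 1.0), (150, 3.0)]
```

Both jobs get the same TAT, but normalization exposes how much worse job 2 fared.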

5
SPN Scheduling
  • Want most jobs to have normalized TAT close to
    1.0
  • short jobs that wait will have high normalized
    TAT
  • thus, always choose shortest job next
  • longer jobs will wait, but the denominator in
    their normalized TAT is large (since they are
    long jobs)
  • can be mathematically shown to minimize average
    TAT.
  • Problems
  • long jobs may starve if short jobs keep arriving.
  • how do we know what the shortest job will be?
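A minimal SPN sketch (jobs given as name → (arrival, CPU time); `spn_schedule` is an invented name) shows short jobs jumping ahead of longer ones waiting in the queue:

```python
def spn_schedule(jobs):
    """Non-preemptive Shortest Process Next.
    jobs: dict name -> (arrival, cpu_time). Returns completion order."""
    remaining = dict(jobs)
    time, order = 0, []
    while remaining:
        ready = {n: j for n, j in remaining.items() if j[0] <= time}
        if not ready:                     # idle until the next arrival
            time = min(j[0] for j in remaining.values())
            continue
        name = min(ready, key=lambda n: ready[n][1])  # shortest CPU time next
        time += ready[name][1]            # run the chosen job to completion
        order.append(name)
        del remaining[name]
    return order

# A long job first, two short ones arriving while it runs:
print(spn_schedule({"A": (0, 100), "B": (10, 20), "C": (20, 10)}))  # ['A', 'C', 'B']
```

Note C overtakes B in the queue; if short jobs kept arriving, B could wait indefinitely.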

6
SPN Scheduling ...
  • SPN needs a predictor for next job length
  • Estimate based on prior run times (Ti) of this
    job
  • simple average: S(n+1) = (1/n) Σ(i=1 to n) T(i)
  • can be calculated incrementally as
    S(n+1) = (1/n)·T(n) + ((n-1)/n)·S(n)
  • weighted average: S(n+1) = a·T(n) + (1-a)·S(n),
    with 0 < a < 1
  • a close to 1 weights recent runs more heavily
  • a close to 0 almost ignores recent runs
  • Still fails to handle our example.

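The weighted (exponential) average above takes only a few lines; `predict_next` and the initial guess S(1) = 10 are assumptions for illustration:

```python
def predict_next(prior_runs, a=0.5, s1=10.0):
    """Exponentially weighted burst prediction:
    S(n+1) = a*T(n) + (1-a)*S(n), with S(1) = s1 as the initial guess."""
    s = s1
    for t in prior_runs:
        s = a * t + (1 - a) * s
    return s

# a = 1 trusts only the most recent run; a = 0 never updates the guess
print(predict_next([6, 4, 8], a=1.0))  # 8.0
print(predict_next([6, 4, 8], a=0.0))  # 10.0
print(predict_next([6, 4, 8], a=0.5))  # 7.0
```

Whatever the value of a, the predictor only sees the past, so it still cannot foresee a job's next burst in our example.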
7
SRT Scheduling
  • SPN is a non-preemptive scheduler
  • once on CPU, job runs to completion
  • even if a shorter job shows up in queue
  • Shortest Remaining Time (SRT) is a preemptive
    SPN
  • when a job enters the queue
  • if its run time is less than the running job's
    remaining time, pull the running job off the CPU
  • put this new job onto the CPU
  • when a job completes, find shortest job in queue
    to run next.
  • in our example, normalized TATs are 1.33 and 1.00
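A nanosecond-at-a-time SRT simulation (a sketch; `srt_normalized_tats` is an invented name) reproduces the 1.33 and 1.00 figures for the running example:

```python
def srt_normalized_tats(jobs):
    """Preemptive Shortest Remaining Time, simulated 1 ns at a time.
    jobs: list of (arrival, cpu_time). Returns normalized TAT per job."""
    n = len(jobs)
    remaining = [cpu for _, cpu in jobs]
    done = [None] * n
    t = 0
    while any(r > 0 for r in remaining):
        ready = [i for i in range(n) if jobs[i][0] <= t and remaining[i] > 0]
        if ready:
            i = min(ready, key=lambda i: remaining[i])  # least remaining time wins
            remaining[i] -= 1
            if remaining[i] == 0:
                done[i] = t + 1
        t += 1
    return [round((done[i] - jobs[i][0]) / jobs[i][1], 2) for i in range(n)]

print(srt_normalized_tats([(0, 150), (50, 50)]))  # [1.33, 1.0]
```

Job 2 preempts job 1 at t=50 (50ns left vs. 100ns remaining), finishes at t=100, and job 1 then runs to completion at t=200.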

8
RR Scheduling
  • However, SRT can starve longer jobs.
  • Instead, add a time slice at regular time
    intervals
  • take the running job and move it to back of queue
  • first job in queue now gets to use CPU
  • All jobs get to use the CPU in a Round Robin manner
  • Changing jobs requires time and work (context
    switch)
  • How long should the time slice (a.k.a. quantum) be?
  • too short, and we'll be context switching all the
    time
  • too long, and short jobs suffer
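A sketch of RR with all jobs present at time 0 (`round_robin` is a hypothetical helper; it counts dispatches as a stand-in for context-switch cost) shows the quantum trade-off:

```python
from collections import deque

def round_robin(bursts, quantum):
    """Round Robin over jobs all present at time 0.
    bursts: name -> CPU time needed. Returns (completion order, dispatches)."""
    queue = deque((name, t) for name, t in bursts.items())
    dispatches, order = 0, []
    while queue:
        name, left = queue.popleft()
        if left > quantum:
            queue.append((name, left - quantum))  # used its slice: back of queue
        else:
            order.append(name)                    # finishes within this slice
        dispatches += 1
    return order, dispatches

# Same jobs, different quanta: short quantum costs many more dispatches
print(round_robin({"A": 9, "B": 3, "C": 5}, quantum=2))  # (['B', 'C', 'A'], 10)
print(round_robin({"A": 9, "B": 3, "C": 5}, quantum=8))  # (['B', 'C', 'A'], 4)
```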

9
FB Scheduling
  • RR still has long jobs competing with short jobs.
  • Instead, have jobs move down queues upon
    completing a time slice (i.e. feedback
    scheduling)

10
FB Scheduling ...
  • Now long jobs could get stuck in a lower queue
  • long jobs could starve!
  • Leads to two versions of FB scheduling
  • every queue has the same time slice length or
  • time slices grow exponentially with lower queues
  • e.g. first queue has a quantum of 1, second has
    2, third has 4, ...
  • now, long jobs get longer time slices ...
  • ... but less frequently than short jobs
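The second (exponential-quanta) version can be sketched as follows; `feedback` and the three-level setup are assumptions for illustration:

```python
from collections import deque

def feedback(bursts, levels=3):
    """Multilevel feedback: a job that uses up its slice drops one queue level.
    Queue i has quantum 2**i (1, 2, 4, ...); the bottom queue is round-robin."""
    queues = [deque() for _ in range(levels)]
    for name, t in bursts.items():
        queues[0].append((name, t))               # everyone starts at the top
    order = []
    while any(queues):
        lvl = next(i for i, q in enumerate(queues) if q)  # highest non-empty queue
        name, left = queues[lvl].popleft()
        q = 2 ** lvl
        if left > q:
            nxt = min(lvl + 1, levels - 1)        # demote (or stay at the bottom)
            queues[nxt].append((name, left - q))
        else:
            order.append(name)
    return order

print(feedback({"A": 10, "B": 1, "C": 3}))  # ['B', 'C', 'A']
```

The long job A sinks to the bottom queue, where its slices are longer but come less often.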

11
HRRN Scheduling
  • Highest Response Ratio Next
  • non-preemptive
  • picks the job with the highest ratio of wait time
    to service time next
  • i.e. job with largest (w+s)/s next
  • w = total time waiting
  • s = total time being serviced (executing)
  • thus, it keeps final (w+s)/s values low
  • these values are just normalized TAT !!!
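The selection rule is a one-liner; `hrrn_pick` is an invented name, and the jobs shown are made up to illustrate how waiting raises a long job's ratio:

```python
def hrrn_pick(now, jobs):
    """Pick the ready job with the Highest Response Ratio (w + s) / s,
    where w = time waited so far and s = required service time.
    jobs: name -> (arrival, service). Called when the CPU frees up."""
    def ratio(name):
        arrival, s = jobs[name]
        w = now - arrival
        return (w + s) / s
    return max(jobs, key=ratio)

jobs = {"old_long": (0, 100), "new_short": (95, 10)}
print(hrrn_pick(100, jobs))   # old_long: ratio 2.0 beats 1.5
print(hrrn_pick(200, jobs))   # new_short: by now its ratio 11.5 beats 3.0
```

A job's ratio grows as it waits, so long jobs cannot starve: eventually their ratio tops every newcomer's.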

12
Examples
13
Example Results
14
Multiprocessor Scheduling
  • So far, only one CPU has been considered
  • Many modern machines have more than one CPU
  • home.cs.siue.edu has 4 CPUs!
  • How do we schedule in such cases?
  • per CPU algorithm turns out to be less important
  • Why?
  • Four basic multiprocessor scheduling ideas
  • Load Sharing
  • Gang Scheduling
  • Dedicated Processor Assignment
  • Dynamic Scheduling

15
Load Sharing
  • goal: ensure no idle processors while jobs are on
    the queue
  • no centralized scheduler - each CPU schedules
    itself
  • shared single queue needs protection
  • Should use simple scheduler
  • FCFS
  • smallest number of unscheduled threads first
  • non-preemptive
  • preemptive
  • re-scheduled threads not likely to get same CPU
  • no guarantee of simultaneous thread execution

16
Gang Scheduling
  • goal: have a set of threads execute
    simultaneously
  • allows fast communication within the set
  • scheduling impacts many CPUs simultaneously
  • Problems
  • requires an allocation of processors to a gang
    (set)
  • may leave idle processors
  • may not be enough left to service a gang
  • not good for overall system TAT

17
Dedicated Processors
  • goal: help gangs as much as possible
  • once a thread gets a CPU, keep it until
    completion
  • leave idle during I/O !!!
  • really good for massive parallelism
  • completely avoids context switches
  • Problems
  • absolutely no multiprogramming!
  • idle CPUs

18
Dynamic Scheduling
  • goal: let gang CPU allocation vary with time
  • When a process makes a request for a new CPU
  • if there's an idle CPU, grant it
  • else if this is a new process, grant request
  • take CPU from a process with more than one
  • else add this request to a needs queue
  • When a process releases a CPU
  • choose a NEW job from the needs queue
  • if none, give to any job from the needs queue
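The request side of this policy might be sketched as follows (all names here, including `request_cpu` and the needs list, are assumptions, and the release side is omitted):

```python
def request_cpu(proc, idle_cpus, alloc, needs):
    """Dynamic scheduling: handle one request for an extra CPU.
    idle_cpus: count of free CPUs; alloc: proc -> CPUs held; needs: wait list.
    Returns the updated idle count."""
    if idle_cpus > 0:
        alloc[proc] = alloc.get(proc, 0) + 1       # idle CPU available: grant it
        return idle_cpus - 1
    if proc not in alloc and alloc:                # new process: take a CPU from
        donor = max(alloc, key=alloc.get)          # a process holding several
        if alloc[donor] > 1:
            alloc[donor] -= 1
            alloc[proc] = 1
            return idle_cpus
    needs.append(proc)                             # otherwise wait in needs queue
    return idle_cpus

alloc, needs = {"P1": 3, "P2": 1}, []
request_cpu("P3", 0, alloc, needs)   # new process: steals one of P1's CPUs
print(alloc)                         # {'P1': 2, 'P2': 1, 'P3': 1}
request_cpu("P2", 0, alloc, needs)   # existing process, none idle: queued
print(needs)                         # ['P2']
```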

19
Real Time Scheduling
  • Processes that must meet deadlines
  • hard real time: failure to meet a deadline is
    catastrophic
  • soft real time: it still makes sense to schedule
    after the deadline
  • aperiodic tasks: each invocation specifies
    deadline(s)
  • periodic tasks: task repeats at a regular
    interval
  • Real time schedulers fall into four categories
  • static table driven
  • static priority driven
  • dynamic planning based methods
  • dynamic best effort based methods

20
Real Time Schedulers
  • Fixed priority scheduling
  • each RT process has a static (fixed) priority
  • keep queue sorted by priority
  • preemption can occur
  • Earliest deadline scheduling
  • choose process with nearest deadline
  • could use either completion or initiation
    deadlines
  • Both methods work well in static environments

21
RT Scheduling Theory
  • Suppose we have n RT processes
  • Process i can be characterized as follows
  • Ci = execution time
  • Ti = period (time between arrivals)
  • Note that Ci ≤ Ti, so Ci/Ti ≤ 1.
  • Ci/Ti is basically the utilization due to process
    i.
  • In order to have a successful schedule
  • Σ Ci/Ti ≤ 1.0
  • Real schedulers give a tighter constraint than
    1.0

22
Rate Monotonic Scheduling
  • An RT scheduler that gives process i priority
    1/Ti
  • scheduler selects the highest priority next
  • why? the process with the shortest period needs
    deadline attention soonest
  • Interestingly, RMS is guaranteed to work when
  • Σ Ci/Ti ≤ n(2^(1/n) - 1)
  • BUT, failure is NOT guaranteed when
  • Σ Ci/Ti > n(2^(1/n) - 1)
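The bound is easy to evaluate; `rms_schedulable` is a hypothetical helper, and note that failing this sufficient test does not prove the task set is unschedulable:

```python
def rms_schedulable(tasks):
    """Sufficient test for Rate Monotonic Scheduling:
    sum(Ci/Ti) <= n*(2**(1/n) - 1) guarantees all deadlines are met.
    tasks: list of (Ci, Ti). Exceeding the bound proves nothing either way."""
    n = len(tasks)
    u = sum(c / t for c, t in tasks)          # total utilization
    bound = n * (2 ** (1 / n) - 1)            # RMS guarantee threshold
    return u <= bound, u, bound

ok, u, bound = rms_schedulable([(20, 100), (30, 150), (50, 300)])
print(ok, round(u, 3), round(bound, 3))  # True 0.567 0.78
```

As n grows, the bound falls toward ln 2 ≈ 0.693, so RMS can only guarantee roughly 69% CPU utilization in the limit.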

23
The SVR4 Scheduler
  • Unix, System 5, Release 4
  • Keeps an array of 160 queues
  • queues 0-59 time shared user processes
  • queues 60-99 kernel processes
  • queues 100-159 real time processes
  • bitmap of used queues aids efficiency

[diagram: the array of 160 run queues, indexed 0 through 159]
24
The SVR4 Scheduler ...
  • Basic idea is to schedule highest priority job
    next
  • When a job leaves the CPU
  • its priority is increased if it left to do I/O
  • its priority is decreased if it left due to a
    timeout
  • jobs cannot leave their class, though.
  • the kernel can only be preempted at preemption
    points
  • these points are explicitly marked in kernel code
  • when points are hit, real time jobs could take
    over

25
The Linux Scheduler
  • keeps two arrays of 140 runqueues
  • active array - jobs that have not used up time
    slice
  • expired array - jobs that have used up time
    slice
  • also keeps a bitmap for each array.
  • as long as there are active processes, choose one
  • when a job uses up its time slice
  • recalculate its priority
  • place in appropriate expired queue
  • if no more active jobs, swap expired and active
    arrays
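A toy sketch of the two-array structure (class and method names are invented; the real kernel uses per-array bitmaps rather than a linear scan):

```python
from collections import deque

class O1RunQueue:
    """Sketch of the Linux 2.6 O(1) scheduler's per-CPU structure:
    140 priority queues in an 'active' and an 'expired' array.
    Lower priority number = more important."""
    NPRIO = 140

    def __init__(self):
        self.active = [deque() for _ in range(self.NPRIO)]
        self.expired = [deque() for _ in range(self.NPRIO)]

    def enqueue(self, task, prio):
        self.active[prio].append(task)

    def slice_expired(self, task, prio):
        self.expired[prio].append(task)       # waits until the arrays swap

    def pick_next(self):
        for prio, q in enumerate(self.active):   # bitmap scan in the real kernel
            if q:
                return q.popleft(), prio
        # no active tasks left: swap the arrays and try again
        self.active, self.expired = self.expired, self.active
        for prio, q in enumerate(self.active):
            if q:
                return q.popleft(), prio
        return None, None

rq = O1RunQueue()
rq.enqueue("rt_task", 10)
rq.enqueue("shell", 120)
print(rq.pick_next())  # ('rt_task', 10)
```

Expired jobs cannot run again until every active job has used its slice, which bounds how long any job can be overtaken.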

26
The Linux Scheduler ...
  • Real time support
  • priorities 0 ... 99 are for real time
  • priorities 100 ... 139 are for other processes
  • chooses lowest priority value first.
  • real time jobs always have priority
  • Multiprocessor support
  • uses n M/M/1 queues, with load balancing
  • finds busiest queue and migrates to other
    queue(s)
  • done whenever a CPU finds an empty queue
  • or every 200ms when CPU busy
  • or every 1ms when CPU is idle

27
The Windows 2000 Scheduler
  • Each process is assigned a priority
  • priorities 0 ... 15 are for regular processes
  • priorities 16 ... 31 are for real time
    processes
  • chooses highest priority value first.
  • real time jobs always have priority
  • timeouts lower the priority of a process
  • blocking (I/O calls) increase the priority of a
    process
  • threads of a process may initially vary by 2
    priorities
  • Multiprocessor support
  • one queue - highest priority threads always on
    CPUs
  • thread can have processor affinity (causes static
    wait)