Transcript and Presenter's Notes

Title: Thursday, June 15, 2006
1
Thursday, June 15, 2006
  • Confucius says He who play in root, eventually
    kill tree.

2
  • Use telnet 203.128.0.236 instead of
    telnet chand.lums.edu.pk from outside LUMS

3
  • Another example

4
FCFS
  • Simplest algorithm; easy to implement
  • When a running process blocks, it is placed at
    the end of the queue like a newly arrived process
  • Non-preemptive
  • Does not emphasize throughput; long processes
    are allowed to monopolize the CPU.

5
FCFS
  • Suffers from the convoy effect
  • Penalizes short processes following long ones
  • Average waiting time varies widely if process CPU
    burst times vary greatly (see the sketch below)
  • Not suitable for time-sharing systems
  • Tends to favor CPU-bound over I/O-bound processes
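A minimal sketch of how FCFS turns arrival order into waiting time; the burst values are hypothetical, chosen only to show the convoy effect (one long burst queued ahead of short ones).

    # FCFS: processes run in arrival order; each one waits for the sum of
    # the bursts queued ahead of it (all assumed to arrive at t = 0).
    def fcfs_waiting_times(bursts):
        waits, elapsed = [], 0
        for burst in bursts:
            waits.append(elapsed)   # time spent waiting before first run
            elapsed += burst        # non-preemptive: runs to completion
        return waits

    # Hypothetical bursts: one long job arriving first delays every short one.
    waits = fcfs_waiting_times([24, 3, 3])
    print(waits, sum(waits) / len(waits))   # [0, 24, 27] average 17.0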

6
SRTN
  • Starvation is possible
  • Throughput vs. turnaround time tradeoff
  • Introduces context switching
  • Assumes burst sizes are known in advance and all
    are available (see the sketch below)
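A minimal sketch of SRTN (preemptive shortest-remaining-time-next); the workload is hypothetical, chosen so that short jobs arrive while a long one is running.

    # SRTN: at every time unit, run the ready process with the shortest
    # remaining burst; assumes every burst length is known in advance.
    def srtn(procs):                       # procs: {name: (arrival, burst)}
        remaining = {n: b for n, (a, b) in procs.items()}
        finish, t = {}, 0
        while remaining:
            ready = [n for n in remaining if procs[n][0] <= t]
            if not ready:                  # CPU idle until the next arrival
                t += 1
                continue
            n = min(ready, key=lambda p: remaining[p])
            remaining[n] -= 1              # run the chosen process one tick
            t += 1
            if remaining[n] == 0:
                finish[n] = t
                del remaining[n]
        return finish

    # Hypothetical workload: short jobs arrive while a long one is running.
    print(srtn({"P1": (0, 8), "P2": (1, 4), "P3": (2, 2)}))
    # {'P3': 4, 'P2': 7, 'P1': 14}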

7
Priority Scheduling
  • A priority number (an integer) is associated with
    each process
  • The CPU is allocated to the process with the
    highest priority (here, the smallest integer means
    the highest priority; the convention may differ on
    different systems)
  • Can be preemptive or non-preemptive
  • SJF is priority scheduling where the priority is
    the predicted next CPU burst time.

8
Example
Process   Burst Time   Priority   Arrival Time
P1        10           3          0
P2        1            1          1
P3        2            3          2
P4        1            4          3
P5        5            2          4
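The slide does not say whether this example is preemptive or non-preemptive; the sketch below assumes the non-preemptive variant, using the table above.

    # Non-preemptive priority scheduling over the table above
    # (smaller number = higher priority; the non-preemptive variant is an
    #  assumption, since the slide leaves that choice open).
    procs = {  # name: (burst, priority, arrival)
        "P1": (10, 3, 0), "P2": (1, 1, 1), "P3": (2, 3, 2),
        "P4": (1, 4, 3), "P5": (5, 2, 4),
    }

    t, done, order = 0, set(), []
    while len(done) < len(procs):
        ready = [n for n, (b, p, a) in procs.items()
                 if a <= t and n not in done]
        if not ready:
            t += 1
            continue
        # pick the highest priority; ties broken by arrival time
        n = min(ready, key=lambda n: (procs[n][1], procs[n][2]))
        order.append((n, t, t + procs[n][0]))
        t += procs[n][0]
        done.add(n)

    print(order)  # P1 runs 0-10, then P2 10-11, P5 11-16, P3 16-18, P4 18-19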
9
Priority Scheduling
  • Problem: starvation; low-priority processes may
    never execute
  • Solution: aging; as time progresses, increase
    the priority of waiting processes (see the sketch below)
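A minimal sketch of aging, assuming a hypothetical rule (after every fixed interval of waiting, a process moves one step toward the highest priority); the interval is illustrative, not from the slides.

    # Aging: every AGING_INTERVAL ticks of waiting, a process moves one step
    # toward the highest priority (smaller number = higher priority).
    AGING_INTERVAL = 10   # hypothetical interval, purely illustrative

    def age(priorities, ticks_waited):
        # priorities: {name: priority}; ticks_waited: {name: ticks waiting}
        for name, waited in ticks_waited.items():
            if waited and waited % AGING_INTERVAL == 0:
                priorities[name] = max(0, priorities[name] - 1)
        return priorities

    print(age({"P_low": 7}, {"P_low": 30}))   # {'P_low': 6}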

10
Round Robin (RR)
  • Each process gets a small unit of CPU time (time
    quantum), usually 10-100 milliseconds. After
    this time has elapsed, the process is preempted
    and added to the end of the ready queue.
  • If there are n processes in the ready queue and
    the time quantum is q, then each process gets 1/n
    of the CPU time in chunks of at most q time units
    at once. No process waits more than (n-1)q time
    units (e.g., with n = 5 and q = 20 ms, at most 80 ms).

11
Round Robin (RR)
  • Performance depends on the quantum q
  • q large: RR behaves like FIFO (FCFS)
  • q small: more responsive, but q must stay large
    with respect to the context-switch time, otherwise
    the overhead is too high.

12
Example: RR with Time Quantum = 20
Process   Burst Time
P1        53
P2        17
P3        68
P4        24
  • Typically, RR gives higher average turnaround time
    than SJF, but better response time.

13
The Gantt chart is:

  | P1 | P2 | P3 | P4 | P1 | P3 | P4  | P1  | P3  | P3  |
  0    20   37   57   77   97   117   121   134   154   162
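A minimal sketch of the round-robin dispatch that reproduces the boundaries above, assuming all four processes are ready at time 0 and queued in the order P1..P4.

    from collections import deque

    # Round robin with quantum 20 over the example above.
    def round_robin(procs, quantum=20):
        queue = deque(procs.items())       # (name, remaining burst)
        t, slices = 0, []
        while queue:
            name, remaining = queue.popleft()
            run = min(quantum, remaining)
            slices.append((name, t, t + run))
            t += run
            if remaining > run:            # unfinished: back of the queue
                queue.append((name, remaining - run))
        return slices

    print(round_robin({"P1": 53, "P2": 17, "P3": 68, "P4": 24}))
    # boundaries: 0, 20, 37, 57, 77, 97, 117, 121, 134, 154, 162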
14
How a Smaller Time Quantum Increases Context
Switches
15
Multilevel Queue
  • The ready queue is partitioned into separate queues:
    foreground (interactive) and background (batch)
  • Each queue has its own scheduling algorithm
  • foreground: RR
  • background: FCFS

16
Multilevel Queue
  • Scheduling must also be done between the queues
  • Fixed-priority scheduling (i.e., serve all from the
    foreground, then from the background). Possibility of
    starvation.
  • Time slicing: each queue gets a certain share of
    CPU time which it can schedule amongst its
    processes, e.g., 80% to the foreground in RR and
    20% to the background in FCFS (see the sketch below)
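A minimal sketch of the time-sliced split between the two queues; the 100 ms frame and the queue contents are hypothetical, only the 80/20 split comes from the slide.

    # Time-sliced multilevel queue: out of every 100 ms frame (hypothetical
    # frame length), the foreground queue gets 80 ms, scheduled RR inside
    # the queue, and the background queue gets 20 ms, scheduled FCFS.
    SHARES_MS = {"foreground": 80, "background": 20}   # per 100 ms frame

    def plan_frame(queues):
        # queues: {"foreground": [names], "background": [names]}
        plan = []
        for level, ms in SHARES_MS.items():
            if queues[level]:
                plan.append((level, ms, list(queues[level])))
        return plan

    print(plan_frame({"foreground": ["F1", "F2"], "background": ["B1"]}))
    # [('foreground', 80, ['F1', 'F2']), ('background', 20, ['B1'])]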

17
Multilevel Queue Scheduling
18
Multilevel Feedback Queue
  • A process can move between the various queues;
    aging can be implemented this way

19
Multilevel Feedback Queue
  • A multilevel-feedback-queue scheduler is defined by
    the following parameters:
  • number of queues
  • scheduling algorithm for each queue
  • method used to determine when to upgrade a
    process
  • method used to determine when to demote a process
  • method used to determine which queue a process
    will enter when it needs service (see the sketch below)
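These parameters map directly onto a configuration structure; a minimal sketch with hypothetical field names and example values.

    from dataclasses import dataclass
    from typing import Callable, List

    # Hypothetical configuration record: each field mirrors one parameter
    # from the list above.
    @dataclass
    class MLFQConfig:
        num_queues: int                     # number of queues
        algorithms: List[str]               # scheduling algorithm per queue
        upgrade: Callable[[str], bool]      # when to upgrade a process
        demote: Callable[[str], bool]       # when to demote a process
        entry_queue: int                    # queue a new process enters

    config = MLFQConfig(
        num_queues=3,
        algorithms=["RR q=8", "RR q=16", "FCFS"],
        upgrade=lambda pid: False,          # e.g., never upgrade
        demote=lambda pid: True,            # e.g., demote on quantum expiry
        entry_queue=0,
    )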

20
Example of Multilevel Feedback Queue
  • Three queues:
  • Q0: RR with time quantum 8 milliseconds
  • Q1: RR with time quantum 16 milliseconds
  • Q2: FCFS
  • Scheduling:
  • A new job enters queue Q0, which is served FCFS.
    When it gains the CPU, the job receives 8 milliseconds.
    If it does not finish in 8 milliseconds, the job is
    moved to queue Q1.
  • At Q1 the job is again served FCFS and receives 16
    additional milliseconds. If it still does not
    complete, it is preempted and moved to queue Q2
    (see the sketch below).
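A minimal sketch of how one job's total CPU demand is split across the three queues under these rules (demotion only; no competing jobs are assumed, so the job never waits).

    # Split one job's CPU demand across Q0 (RR, 8 ms), Q1 (RR, 16 ms) and
    # Q2 (FCFS), assuming it never has to wait for other jobs.
    def split_across_queues(burst_ms, quanta=(8, 16)):
        allocation, remaining = [], burst_ms
        for q, quantum in enumerate(quanta):
            used = min(quantum, remaining)
            allocation.append((f"Q{q}", used))
            remaining -= used
            if remaining == 0:
                return allocation
        allocation.append(("Q2", remaining))   # the rest runs FCFS in Q2
        return allocation

    print(split_across_queues(30))   # [('Q0', 8), ('Q1', 16), ('Q2', 6)]
    print(split_across_queues(5))    # [('Q0', 5)]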

21
Multilevel Feedback Queues
22
Multilevel Feedback Queue Example
Process   Arrival Time   Burst Time
P1        0              17
P2        12             25
P3        28             8
P4        36             32
P5        46             18
23
Multilevel Feedback Queue Example
  • Multilevel feedback queue scheduling with three
    queues: Q1, Q2, Q3.
  • The scheduler first executes processes in Q1,
    which is given a time quantum of 8 ms. If a
    process does not finish within this time, it is
    moved to the tail of Q2.
  • The scheduler executes processes in Q2 only if Q1
    is empty. The process at the head of Q2 is given
    a quantum of 16 ms. If it does not complete, it is
    preempted and put in Q3.
  • Processes in Q3 are run on an FCFS basis, and only
    when Q1 and Q2 are empty.
  • A process arriving in Q1 will preempt a process
    running in Q2, and a process arriving in Q2 will
    preempt a process running in Q3.

24
THREAD SCHEDULING
User-level threads with a 50 ms process quantum and
threads that run 5 ms per CPU burst
25
User-level threads with a 50 ms process quantum and
threads that run 5 ms per CPU burst
26
Kernel-level threads with a 50 ms process quantum
and threads that run 5 ms per CPU burst
27
Kernel-level threads with a 50 ms process quantum
and threads that run 5 ms per CPU burst
28
Threads
  • Goal for threads: allow each thread to use blocking
    calls, but prevent a blocked thread from affecting
    other threads.
  • Threads in user space conflict with this goal.
  • One compelling reason for threads in user space:
    they work with existing operating systems.

29
Threads
  • System calls can be made non-blocking
  • select system call
  • checking code in a jacket / wrapper around the call
    (see the sketch below)
  • Requires changes to the system call library
  • Inelegant solution
  • Conflicts with our goal
  • Changing the semantics of calls means changing
    existing user programs
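A minimal sketch of the jacket/wrapper idea built on the select system call: test whether a read would block before issuing it, so a user-level thread library can switch to another thread instead of letting the whole process block. The wrapper name is hypothetical.

    import os
    import select

    # Hypothetical "jacket" around read(): poll with select() first, so a
    # user-level thread library can run another thread instead of blocking
    # the whole process on a descriptor with no data available.
    def read_jacket(fd, nbytes):
        readable, _, _ = select.select([fd], [], [], 0)   # poll, timeout 0
        if not readable:
            return None    # would block: the thread library should switch
        return os.read(fd, nbytes)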

30
We want to combine the advantages of user threads
with those of kernel threads: good performance and
flexibility, but without having to make special
non-blocking system calls or check for conditions
in advance.
31
Scheduler Activations
  • Many-to-many model: user threads are multiplexed
    onto kernel threads.
  • Main idea:
  • Avoid unnecessary transitions between user and
    kernel space
  • If a thread is waiting locally for another one,
    there is no need to involve the kernel
  • Some number of virtual processors is assigned to
    each process by the kernel (an LWP data structure
    between user and kernel threads)

32
Scheduler Activations
  • Some number of virtual processors is assigned to
    each process by the kernel (an LWP data structure
    between user and kernel threads)
  • LWPs can be requested or released by each process
  • The user process can schedule user threads onto the
    available virtual processors.

33
Scheduler Activations
When the kernel sees that a thread has blocked, it
informs the process's run-time system of this
occurrence by starting it at a well-known address
(an upcall). Now the process can reschedule its
threads. When the data for the blocked thread becomes
available, the kernel makes another upcall, and the
process decides whether to run the previously blocked
thread or put it in the ready queue.
34
Scheduler Activations
  • CPU-bound process: maybe one LWP
  • I/O-bound process: multiple LWPs
  • One LWP for each concurrent blocking system call

35
Thread Scheduling
  • Many-to-many model: the thread library schedules
    user-level threads onto available LWPs
    (process-contention scope, PCS)
  • The decision is among threads of the same process
  • The kernel decides which kernel thread to schedule
    onto a CPU (system-contention scope, SCS)
  • One-to-one model systems use only SCS
  • Windows, Linux, Solaris 9

36
Scheduling in Unix (other versions are also possible)
  • Designed to provide good response to interactive
    processes
  • Uses multiple queues
  • Each queue is associated with a range of
    non-overlapping priority values

37
Scheduling in Unix (other versions are also possible)
  • Processes executing in user mode have positive
    priority values
  • Processes executing in kernel mode (doing system
    calls) have negative priority values
  • Negative values have the highest priority and large
    positive values the lowest

38
Scheduling in Unix
  • Only processes that are in memory and ready to
    run are kept on the queues
  • The scheduler searches the queues starting at the
    highest priority
  • The first process on that queue is chosen and
    started. It runs for one time quantum (say 100 ms)
    or until it blocks.
  • If the process uses up its quantum, it is placed
    back on its queue
  • Processes within the same priority range share the
    CPU in RR (see the sketch below)
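A minimal sketch of this selection rule; the priority values, queue contents, and process names are hypothetical, only the "highest priority first, round robin within a level" behaviour comes from the slides.

    from collections import deque

    QUANTUM_MS = 100   # "say 100 ms", as on the slide

    # Hypothetical priority levels: negative values are kernel-mode
    # (higher priority), positive values are user-mode (lower priority).
    queues = {
        -2: deque(["disk_io_done"]),
         0: deque(["editor"]),
         3: deque(["batch_job"]),
    }

    def pick_next():
        # Scan from the highest priority (most negative) downwards and take
        # the first process on the first non-empty queue; processes within
        # one level therefore share the CPU round robin.
        for level in sorted(queues):
            if queues[level]:
                return level, queues[level].popleft()
        return None, None   # nothing ready to run

    print(pick_next(), QUANTUM_MS)   # (-2, 'disk_io_done') 100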