1
Chapter 13: Scheduling
2
Background
  • There are many ways to schedule a set of tasks
    (processes) on a processor
  • The output from the program should be identical
    for all such schedules, but the timing behavior
    may differ considerably
  • A scheduling scheme provides two features
  • An algorithm for ordering the use of system
    resources (in particular the CPU)
  • A means of predicting the worst-case behavior of
    the system when the scheduling algorithm is
    applied

3
Process model
  • The application consists of a fixed set of
    processes (N)
  • All processes are periodic, with known periods
    (T)
  • The processes are completely independent of each
    other
  • All system overheads, context-switching times
    and so on, are ignored
  • All processes have deadlines (D) equal to their
    periods
  • All processes have fixed and known worst-case
    execution times (C)
  • The processor utilization (U) of each process is
    defined as U = C/T

4
The cyclic executive approach
  • We lay out a fixed major cycle
  • The major cycle consists of a number of minor
    cycles each of fixed duration
  • No actual processes exist at run-time
  • Shared data do not need to be protected since
    concurrent data accesses are not possible
  • All process periods must be multiples of the
    minor cycle

5
The cyclic executive approach - example
  • Process A, period T = 25, computation time C = 10
  • Process B, period T = 25, computation time C = 8
  • Process C, period T = 50, computation time C = 5
  • Process D, period T = 50, computation time C = 4
  • Process E, period T = 100, computation time C = 2

6
The cyclic executive approach - example
  Loop
    Wait_for_interrupt
    Procedure_for_A
    Procedure_for_B
    Procedure_for_C
    Wait_for_interrupt
    Procedure_for_A
    Procedure_for_B
    Procedure_for_D
    Procedure_for_E
    Wait_for_interrupt
    Procedure_for_A
    Procedure_for_B
    Procedure_for_C
    Wait_for_interrupt
    Procedure_for_A
    Procedure_for_B
    Procedure_for_D
  End loop
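
A minimal sketch (not part of the original slides) of how the minor and major cycle lengths fall out of the example periods: the minor cycle must divide every period, so the greatest common divisor of the periods is a natural choice, and the whole schedule repeats after their least common multiple. The task dictionary and helper below are hypothetical.

    # Hypothetical sketch: derive minor and major cycle lengths for the example.
    from functools import reduce
    from math import gcd

    # (period T, worst-case execution time C) for processes A..E from the slide
    tasks = {"A": (25, 10), "B": (25, 8), "C": (50, 5), "D": (50, 4), "E": (100, 2)}
    periods = [t for t, _ in tasks.values()]

    def lcm(a, b):
        return a * b // gcd(a, b)

    minor_cycle = reduce(gcd, periods)   # every period must be a multiple of this
    major_cycle = reduce(lcm, periods)   # the whole schedule repeats after this

    print(minor_cycle, major_cycle)      # 25 100

With a minor cycle of 25 and a major cycle of 100, the loop above contains four minor cycles, one per Wait_for_interrupt, and the busiest minor cycle (10 + 8 + 4 + 2 = 24) fits within 25.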

7
Process based scheduling approaches
  • Fixed-Priority Scheduling: each process has a
    fixed, static priority which is computed before
    the system is started.
  • Earliest Deadline First: processes are executed in
    the order determined by their absolute deadlines;
    the priorities are thus dynamic.
  • Value-Based Scheduling: this strategy is used in
    overloaded systems, where one needs to adapt the
    priorities depending on the load.

8
Preemption and non-preemption
  • Preemptive scheduling means that the runnable
    (non-blocked) process with the highest priority will
    always execute.
  • Non-preemption means that a running process will
    continue to execute until it has completed or it
    becomes blocked.
  • Some systems allow a low-priority process to
    continue to execute for a bounded time (not
    necessarily to completion). These schemes are
    known as deferred preemption or cooperative
    dispatching.
  • Most real-time systems use preemptive scheduling.

9
Rate monotonic priority assignment
  • The behavior of a set of processes (tasks) under
    fixed-priority preemptive scheduling is determined
    by the way we assign priorities to processes.
  • The rate monotonic priority assignment scheme
    assigns priorities based on process periods (or
    deadlines): processes with short periods get high
    priority (see the sketch below).
  • The rate monotonic priority assignment scheme is
    optimal in the sense that if some task misses its
    deadline using this scheme, then there is no other
    fixed-priority assignment under which all tasks
    meet all of their deadlines.
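
A minimal sketch of the assignment rule (not from the slides), assuming tasks are given as (name, period) pairs and that a larger number means a higher priority:

    def rate_monotonic_priorities(tasks):
        # Shorter period -> higher priority; the longest-period task gets priority 1.
        ordered = sorted(tasks, key=lambda t: t[1], reverse=True)
        return {name: prio for prio, (name, _period) in enumerate(ordered, start=1)}

    print(rate_monotonic_priorities([("A", 25), ("B", 50), ("C", 100)]))
    # {'C': 1, 'B': 2, 'A': 3} -- A has the shortest period, hence the highest priority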

10
Utilization-based schedulability tests
  • Consider a set of N processes.
  • If the total utilization, Sum(U_i, i = 1..N), is
    less than or equal to N(2^(1/N) - 1), then the
    process set is always schedulable under preemptive
    fixed-priority scheduling with rate monotonic
    priority assignment, i.e. all processes will meet
    all their deadlines in all periods (see the sketch
    below).
  • For large N, the bound asymptotically approaches
    ln 2, about 69.3%: every process set with a
    utilization below 69.3% will always meet all
    deadlines using fixed-priority scheduling and rate
    monotonic priority assignment.
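
A sketch of this sufficient test, assuming each process is given as a (C, T) pair; the function name and the task set in the usage line are hypothetical. Note that failing the test is inconclusive, since the bound is sufficient but not necessary.

    def rm_utilization_test(tasks):
        # tasks: list of (C, T) pairs; returns (passes, utilization, bound)
        n = len(tasks)
        u = sum(c / t for c, t in tasks)
        bound = n * (2 ** (1 / n) - 1)   # tends to ln 2 (about 0.693) as n grows
        return u <= bound, u, bound

    ok, u, bound = rm_utilization_test([(10, 50), (8, 40), (5, 100)])
    print(ok, round(u, 3), round(bound, 3))   # True 0.45 0.78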

11
Critical instant
  • The worst-case scenario (the critical instant)
    occurs when all processes are released at the
    same time.

12
Example, Sum(U) = 0.82
13
Example, Sum(U) = 0.775
14
Example, Sum(U) = 1.0
15
Schedulability overview
  [Figure: utilization versus number of processes. Task sets with
  utilization above 1 are unschedulable, those between the
  N(2^(1/N) - 1) bound and 1 are sometimes schedulable, and those
  below the bound are always schedulable.]
16
Utilization-based schedulability test for
Earliest-Deadline-First
  • Consider a set of N processes.
  • If the total utilization, Sum(U_i, i = 1..N), is
    less than or equal to 1, then the process set is
    always schedulable using earliest-deadline-first
    (EDF) scheduling (see the sketch below).
  • N.B. EDF is a scheduling algorithm that uses
    dynamic priorities.
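
The corresponding check for EDF, in the same sketch style and with the same hypothetical (C, T) input as above:

    def edf_utilization_test(tasks):
        # For independent periodic tasks with D = T under EDF, this
        # utilization condition is both necessary and sufficient.
        return sum(c / t for c, t in tasks) <= 1.0

    print(edf_utilization_test([(10, 50), (8, 40), (5, 100)]))   # True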

17
Example, Sum(U) = 0.82
18
Fixed Priority vs. Earliest-Deadline-First
  • EDF can schedule more process sets
  • FPS (fixed-priority scheduling) is easier to implement
  • It is easier to include processes with no
    deadlines in FPS
  • It is possible to include other priority aspects
    in FPS
  • The behavior of FPS is more predictable during
    overload situations

19
Response time analysis for FPS
  • We will now look at an exact schedulability test.
  • A process i is schedulable if R_i <= T_i, where R_i
    is the worst-case response time of process i.
  • R_i = C_i + I_i, where I_i is the interference from
    processes with higher priority.
  • The interference from a higher-priority process j
    on process i is Ceiling(R_i/T_j) * C_j.
  • R_i = C_i + Sum(Ceiling(R_i/T_j) * C_j, over all
    processes j with higher priority than process i).
  • This recurrence can be solved numerically by
    fixed-point iteration (see the sketch below).
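
A sketch of the fixed-point iteration, under the assumptions that tasks are listed from highest to lowest priority as (C, T) pairs and that D = T; the function name and the task set in the usage line are hypothetical.

    from math import ceil

    def response_times(tasks):
        # tasks: (C, T) pairs ordered from highest to lowest priority
        results = []
        for i, (c_i, t_i) in enumerate(tasks):
            r = c_i
            while True:
                # interference from all higher-priority tasks at the current estimate
                r_next = c_i + sum(ceil(r / t_j) * c_j for c_j, t_j in tasks[:i])
                if r_next == r or r_next > t_i:   # fixed point, or deadline exceeded
                    r = r_next
                    break
                r = r_next
            results.append(r)
        return results

    print(response_times([(10, 25), (8, 25), (5, 50)]))   # [10, 18, 23], all within T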

20
Example
21
Sporadic and aperiodic processes
  • For sporadic and aperiodic processes we interpret
    T as the minimum inter-arrival interval.
  • For sporadic tasks we usually have a deadline D
    that is shorter than the minimum inter-arrival
    time.

22
Hard and soft processes
  • Many systems consist of a mix of hard and soft
    processes.
  • Rule 1: all processes should be schedulable
    using average execution times and average
    inter-arrival rates.
  • Rule 2: all hard processes should be schedulable
    using worst-case execution times and arrival
    rates.

23
D < T
  • In this case we use deadline monotonic
    scheduling, i.e. processes are given priorities
    based on their deadlines.
  • It is the same as rate monotonic scheduling when
    T = D.
  • Deadline monotonic scheduling is optimal.

24
Priority inversion
  • When processes interact we may get priority
    inversion
  • Priority inversion means that a high priority
    process is waiting for a low priority process to
    complete a certain part of its execution

25
Priority inversion example with two shared
resources Q and V
26
Priority inheritance
  • The effects of priority inversion can be limited
    by a technique called priority inheritance
  • Process priorities are no longer static
  • If a process A is suspended waiting for a process
    B to undertake some computation, then the
    priority of B becomes the maximum priority of A
    and B while A is blocked waiting for B.
  • With this scheme the priority of a process is the
    maximum of its own priority and the priority of
    the processes depending on it.

27
Priority ceiling protocols
  • The standard priority inheritance protocol gives
    a bound on the number of blocks a high-priority
    process can encounter, but the blocking effect
    can still be intolerably high
  • The priority ceiling protocols minimize the
    number of such blocking periods
  • There are two different priority ceiling
    protocols
  • The original ceiling protocol
  • The immediate ceiling protocol

28
The original priority ceiling protocol
  • Each process has a static default priority
  • Each resource has a static priority value, which
    is the maximum priority of the processes using
    the resource
  • A process has a dynamic priority that is the
    maximum of its own priority value and any
    priority it inherits from blocking higher-priority
    processes
  • A process can only lock a resource if its dynamic
    priority is higher than the priority of any
    currently locked resource (excluding any
    resource that it may have locked itself)

29
The immediate priority ceiling protocol
  • Each process has a static default priority
  • Each resource has a static priority value, which
    is the maximum priority of the processes using
    the resource
  • A process has a dynamic priority that is the
    maximum of its own priority value and the
    priority of any resources it has locked
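
A small sketch (not from the slides) of the resulting dynamic-priority rule, assuming a larger number means a higher priority and that 'ceilings' maps each resource to the maximum static priority of the processes that use it; the resources Q and V and their ceiling values are hypothetical.

    def dynamic_priority(static_priority, locked_resources, ceilings):
        # Immediate ceiling protocol: run at the highest ceiling among held
        # resources, or at the process's own static priority if that is higher.
        return max([static_priority] + [ceilings[r] for r in locked_resources])

    ceilings = {"Q": 5, "V": 3}                   # hypothetical resource ceilings
    print(dynamic_priority(2, [], ceilings))      # 2: no resources held
    print(dynamic_priority(2, ["Q"], ceilings))   # 5: raised to Q's ceiling while holding it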