Title: Distributed Process Scheduling
1. Distributed Process Scheduling
2. Outline
- A System Performance Model
- Static Process Scheduling
- Dynamic Load Sharing and Balancing
- Distributed Process Implementation
- Real-Time Scheduling
3. Overview
- Before execution, processes need to be scheduled and allocated resources
- Objective: enhance the overall system performance metric
  - Process completion time and processor utilization
  - In distributed systems: location and performance transparency
- In distributed systems
  - Local scheduling (on each node) + global scheduling
  - Communication overhead
  - Effect of the underlying architecture
  - Dynamic behavior of the system
4. System Performance Model
5. Process Interaction Models
- Precedence process model: Directed Acyclic Graph (DAG)
  - Represents precedence relationships between processes
  - Minimize the total completion time of the task (computation + communication)
- Communication process model
  - Represents the need for communication between processes
  - Optimize the total cost of communication and computation
- Disjoint process model
  - Processes can run independently and complete in finite time
  - Maximize utilization of processors and minimize turnaround time of processes
6. Process Models
Partition 4 processes onto two nodes
7. System Performance Model
Attempt to minimize the total completion time (makespan) of a set of interacting processes
8. System Performance Model (Cont.)
- Related parameters
  - OSPT: optimal sequential processing time, the best time that can be achieved on a single processor using the best sequential algorithm
  - CPT: concurrent processing time, the actual time achieved on an n-processor system with the concurrent algorithm and a specific scheduling method being considered
  - OCPT_ideal: optimal concurrent processing time on an ideal system, the best time that can be achieved with the concurrent algorithm being considered on an ideal n-processor system (no inter-communication overhead), scheduled by an optimal scheduling policy
  - Si: the ideal speedup obtained by using a multiple-processor system over the best sequential time
  - Sd: the degradation of the system due to the actual implementation compared to an ideal system
9. System Performance Model (Cont.)
[Figure: Gantt charts of processes P1-P4 mapped onto nodes; Pi is the computation time of the concurrent algorithm on node i; RP ≥ 1; the ideal finishing time is OCPT_ideal]
10. System Performance Model (Cont.)
(The smaller, the better)
11. System Performance Model (Cont.)
- RP: relative processing (algorithm)
  - How much speedup is lost due to the substitution of the best sequential algorithm by an algorithm better adapted for concurrent implementation, but which may have a greater total processing need
  - Loss of parallelism due to algorithm conversion
  - Increase in total computation requirement
- Sd
  - Degradation of parallelism due to algorithm implementation
- RC: relative concurrency (algorithm)
  - How far from optimal the usage of the n processors is
  - RC = 1 → best use of the processors
  - Theoretic loss of parallelism
- ρ: loss of parallelism when implemented on a real machine (system architecture + scheduling)
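The decomposition above can be checked numerically. A minimal Python sketch, assuming the usual textbook relations (RP = ΣPi/OSPT, RC = ΣPi/(n·OCPT_ideal), Si = OSPT/OCPT_ideal, ρ = (CPT − OCPT_ideal)/OCPT_ideal, Sd = 1/(1+ρ)); the numbers in the example are made up for illustration:

```python
def performance_metrics(ospt, cpt, ocpt_ideal, p, n):
    """Speedup decomposition of a concurrent program (assumed definitions).
    p is the list of per-node computation times Pi."""
    total = sum(p)                            # total computation, sum of Pi
    rp = total / ospt                         # relative processing, RP >= 1
    rc = total / (n * ocpt_ideal)             # relative concurrency, RC <= 1
    si = ospt / ocpt_ideal                    # ideal speedup
    rho = (cpt - ocpt_ideal) / ocpt_ideal     # efficiency loss
    sd = 1.0 / (1.0 + rho)                    # degradation, Sd = OCPT_ideal / CPT
    s = ospt / cpt                            # actual speedup, S = Si * Sd
    return dict(RP=rp, RC=rc, Si=si, rho=rho, Sd=sd, S=s)

# Hypothetical run: 4 nodes, 4 time units each, OSPT = 12, OCPT_ideal = 4, CPT = 6
m = performance_metrics(ospt=12, cpt=6, ocpt_ideal=4, p=[4, 4, 4, 4], n=4)
assert abs(m["S"] - m["Si"] * m["Sd"]) < 1e-9   # S = Si * Sd holds
```

The identity S = Si × Sd separates the speedup lost to the algorithm and scheduler (Si) from the loss due to the real machine (Sd).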
12. Efficiency Loss ρ
Impact factors: scheduling, system, and communication
13. Efficiency Loss ρ (Cont.)
14. Workload Distribution
- Performance can be further improved by workload distribution
- Load sharing: static workload distribution
  - Dispatch processes to idle processors statically upon arrival
  - Corresponds to the processor-pool model
- Load balancing: dynamic workload distribution
  - Migrate processes dynamically from heavily loaded processors to lightly loaded processors
  - Corresponds to the migration workstation model
- Model by queueing theory: X/Y/c
  - Process arrival time distribution X, service time distribution Y, number of servers c
  - λ: arrival rate; μ: service rate; the migration rate depends on channel bandwidth, migration protocol, and the context and state information of the process being transferred
15. Processor-Pool and Workstation Queueing Models
Static Load Sharing
Dynamic Load Balancing
M for Markovian distribution
16. Comparison of Performance for Workload Sharing
(Communication overhead)
(Negligible communication overhead)
As the communication overhead → 0, the two M/M/1 queues with migration approach the performance of M/M/2
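The gap between the two models can be quantified with standard queueing formulas. A sketch comparing the mean response time of an isolated M/M/1 queue against an M/M/2 pool at the same per-server load (negligible migration overhead assumed; function names are my own):

```python
def t_mm1(lam, mu):
    # Mean response time of an M/M/1 queue (requires lam < mu)
    return 1.0 / (mu - lam)

def t_mm2(lam, mu):
    # Mean response time of an M/M/2 queue fed with total arrival rate 2*lam,
    # i.e., the same per-server load rho = lam/mu as the M/M/1 case.
    rho = lam / mu
    return 1.0 / (mu * (1.0 - rho * rho))

lam, mu = 0.8, 1.0
# Pooling two servers always beats two isolated queues at the same load.
assert t_mm2(lam, mu) < t_mm1(lam, mu)
```

At 80% load the pooled system responds in 1/0.36 ≈ 2.78 time units versus 5.0 for the isolated queue, which is why load balancing pays off most under heavy load.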
17. Static Process Scheduling
18. Static Process Scheduling
- Static process scheduling: deterministic scheduling policy
  - Schedule a set of partially ordered tasks on a non-preemptive multiprocessor system of identical processors to minimize the overall finishing time (makespan)
- Optimizing makespan is NP-complete
  - Need approximate or heuristic algorithms
  - Attempt to balance and overlap computation and communication
- The mapping of processes to processors is determined before execution
  - Once a process starts, it stays at the processor until completion
  - Needs prior knowledge of process behavior (execution time, precedence relationships, communication patterns), which may be available from the compiler
- The scheduling decision is centralized and non-adaptive
19. Precedence Process and Communication System Models
Communication overhead for A(P1) and E(P3): 4 message units × 2 per message = 8
[Figure: nodes show execution time, edges show the number of messages to communicate; the communication system model gives the overhead for one message]
20. Precedence Process Model
- Precedence process model: NP-complete
- A program is represented by a DAG (Figure 5.5 (a))
  - Node: task with a known execution time
  - Edge: weight showing message units to be transferred
- Communication system model: Figure 5.5 (b)
- Scheduling strategies
  - List Scheduling (LS): no processor remains idle if there are tasks available that it could process (assumes no communication overhead)
  - Extended List Scheduling (ELS): LS first, then add the communication overhead
  - Earliest Task First (ETF): the earliest schedulable task is scheduled first
- Critical path: the longest execution path
  - Lower bound on the makespan
  - Try to map all tasks on a critical path onto a single processor
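Plain list scheduling can be sketched in a few lines. This is a minimal LS simulation that ignores communication overhead, as LS does; the 4-task DAG in the example is hypothetical, not Figure 5.5:

```python
def list_schedule(tasks, deps, n_proc):
    """Greedy list scheduling (LS): no processor idles while a task is ready.
    tasks: {name: execution_time}; deps: {name: set of predecessor names}.
    Communication overhead is ignored. Returns the makespan."""
    done_at = {}                      # task -> finish time
    free_at = [0.0] * n_proc          # processor -> time it becomes free
    scheduled = set()
    while len(done_at) < len(tasks):
        # tasks whose predecessors have all finished
        ready = [t for t in tasks
                 if t not in scheduled and deps[t] <= set(done_at)]
        # ETF-like tie-break: pick the ready task that can start earliest
        t = min(ready,
                key=lambda t: max((done_at[p] for p in deps[t]), default=0.0))
        i = min(range(n_proc), key=lambda i: free_at[i])   # earliest-free CPU
        start = max(free_at[i],
                    max((done_at[p] for p in deps[t]), default=0.0))
        done_at[t] = start + tasks[t]
        free_at[i] = done_at[t]
        scheduled.add(t)
    return max(done_at.values())

# Hypothetical DAG: C depends on A; D depends on A and B
tasks = {"A": 2, "B": 3, "C": 2, "D": 1}
deps = {"A": set(), "B": set(), "C": {"A"}, "D": {"A", "B"}}
assert list_schedule(tasks, deps, 2) == 4
```

ELS would be obtained by adding edge communication costs to each task's earliest start time when predecessor and successor land on different processors.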
21. Makespan Calculation for LS, ELS, and ETF
22. Communication Process Model
- Communication process model
  - Maximize resource utilization and minimize inter-process communication
- Undirected graph G(V,E)
  - V: processes
  - E: weight = amount of interaction between processes
- Cost equation
  - e: process execution cost (cost to run process j on processor i)
  - C: communication cost (C = 0 if i = j)
- Again, NP-complete
- Two heuristic algorithms are shown on page 162
23. Stone's Two-Processor Model to Achieve Minimum Total Execution and Communication Cost
- Example: Figure 5.7 (don't consider execution cost)
- Partition the graph by drawing a line cutting through some edges
  - Results in two disjoint graphs, one for each processor
  - The set of removed edges is the cut set
  - Cost of the cut set: the sum of the weights of its edges, i.e., the total inter-process communication cost between processors
  - Of course, the cost of the cut set is 0 if all processes are assigned to the same node
  - Computation constraints (no more than k per node, distribute evenly)
- Example: Figure 5.8 (consider execution cost)
  - Maximum flow and minimum cut in a commodity-flow network
  - Find the maximum flow from source to destination
24. Computation Cost and Communication Graphs
25. Minimum-Cost Cut
Only the cuts that separate A and B are feasible
26. Discussion: Static Process Scheduling
- Once a process is assigned to a processor, it remains there until its execution has completed
- Needs prior knowledge of execution time and communication behavior
  - Not realistic
27. Dynamic Load Sharing and Balancing; Code Migration
28. Code Migration
- Moving programs between machines, with the intention of having those programs execute at the target
- What is migrated?
  - Code segment
  - Resource segment: files, printers, devices, links, ...
  - Execution segment: PCB, private data, stack, ...
- Goal: utilization (system) and fairness (process)
- Based on the disjoint process model in Figure 5.1c
  - Ignores the effect of interdependency among processes, since we don't know how a process interacts with others
29. Motivation
- Load sharing
  - Move processes from heavily loaded to lightly loaded systems
  - Load can be balanced to improve overall performance
- Communication performance
  - Processes that interact intensively can be moved to the same node
  - Move the process to where the data reside when the data are large
- Availability
  - The machine a process is running on will be down
- Utilizing special capabilities
  - Take advantage of unique hardware or software capabilities
30. Utilization and Fairness
- Utilization: avoid having idle processors as much as possible
- Static load sharing and load balancing
  - Use a controller process to monitor the queue lengths of all processors
  - An arriving process makes a request to the controller for assignment
  - The controller schedules the process to the processor with the shortest queue
  - A processor must inform the controller when a process completes
- Dynamic load redistribution (process/code migration)
  - Migrate a process from a longer queue to a shorter one dynamically
- Fairness from the user side
  - Give priority to a user's process if the user has a lesser share of computational resources
  - Schedule when a process completes
31. Models for Code Migration
32. Mobility and Cloning
- Mobility
  - Weak mobility: making the code portable
    - Transfer only the code segment + initialization data
    - Always starts from the initial state
    - Example: Java applet
  - Strong mobility
    - Transfer the code segment + execution segment
    - A process stops at A, moves to B, and resumes where it left off
- Cloning
  - Yields an exact copy of the original process, but now running on a different machine
  - The cloned process executes in parallel with the original process
33. Sender-Initiated Algorithm
- Activated by a sender process that wishes to off-load some of its computation
- Three decisions
  - Transfer policy: when does a node become a sender?
    - The queue length exceeds a threshold when a new process arrives
  - Selection policy: how does the sender choose a process for transfer?
    - The newly arrived process (no preemption needed)
  - Location policy: which node should be the target receiver?
    - Prior knowledge? Random? Probe?
- Performs well in a lightly loaded system
- Chain or ping-pong effect in a heavily loaded system
34. Flow Chart of a Sender-Initiated Algorithm
SQ: sender's queue length; ST: sender's queue threshold; PL: probe limit; RQ: polled receiver's queue length
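The flow chart's decision logic can be sketched as follows. SQ, ST, PL, and RQ follow the legend above; the receiver threshold RT, the random-probe order, and all function names are assumptions of this sketch, not part of the original flow chart:

```python
import random

def sender_decision(sq, st, peers_rq, pl, rt, seed=0):
    """Sender-initiated sketch: when a new process arrives and the sender's
    queue SQ would exceed threshold ST, probe up to PL random peers and
    transfer to the first one whose queue RQ is below the (assumed) receiver
    threshold RT. Returns the chosen node, or None to keep the process local.
    peers_rq: {node_name: current queue length RQ}."""
    if sq + 1 <= st:                  # transfer policy: threshold not exceeded
        return None
    rng = random.Random(seed)         # seeded only to keep the sketch repeatable
    probes = rng.sample(list(peers_rq), min(pl, len(peers_rq)))
    for node in probes:               # location policy: limited random probing
        if peers_rq[node] < rt:
            return node               # selection policy: ship the new process
    return None                       # all probes failed: execute locally

# Only n2 is below the receiver threshold, so it is chosen
assert sender_decision(3, 2, {"n1": 5, "n2": 0, "n3": 4}, pl=3, rt=2) == "n2"
```

The probe limit PL is what prevents the ping-pong effect from degenerating into a probe storm when every node is busy.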
35. Receiver-Initiated Algorithm
- Activated by a receiver process that has low utilization
- Three decisions
  - Transfer policy: the queue length falls below a threshold upon the departure of a process
  - Location policy: probe for a heavily loaded sender
  - Selection policy: requires preemption (the process has started running)
    - The benefit of load sharing should be > the communication and migration overhead
- More stable, and performs better than sender-initiated (Why?)
- Combined sender- and receiver-initiated algorithm
  - Static/symmetric: heavily loaded → sender; lightly loaded → receiver
  - Adaptive: based on global estimated system load information
    - Disable the sender-initiated part when the global system load is high
    - Disable the receiver-initiated part when the global system load is low
36. Negotiation of Migration
- Charlotte: Finkel, R., "The process migration mechanism of Charlotte," Newsletter of the IEEE Computer Society Technical Committee on Operating Systems, Winter 1989.
- Migration policy is the responsibility of the Starter utility
- The Starter utility is also responsible for long-term scheduling and memory allocation
- Each Starter utility controls a cluster of machines
- The decision to migrate must be reached jointly by two Starter processes (one on the source and one on the destination)
37. D May Reject
38. Eviction
- A system evicts a process that has been migrated to it
- In the workstation-server model
  - If a workstation is idle, a process may have been migrated to it
  - Once the workstation is active again, it may be necessary to evict the migrated processes to provide adequate response time
39. What is Migrated?
- Must destroy the process on the source system and create it on the target system
- What is migrated?
  - Address space
  - Execution state (PCB): easy in homogeneous systems
  - Links between processes for passing messages and signals
  - Resources
42. Migration of Address Space (I)
- Eager (all): transfer the entire address space
  - No trace of the process is left behind
  - If the address space is large and the process does not need most of it, this approach may be unnecessarily expensive
- Pre-copy: the process continues to execute on the source node while the address space is copied
  - Pages modified on the source during the pre-copy operation have to be copied a second time
  - Reduces the time that a process is frozen and cannot execute during migration
43. Migration of Address Space (II)
- Eager (dirty): transfer only the portion of the address space that is in main memory and has been modified
  - Any additional blocks of the virtual address space are transferred on demand
  - The source machine is involved throughout the life of the process
- Copy-on-reference: pages are only brought over on reference
  - A variation of eager (dirty)
  - Has the lowest initial cost of process migration
- Flushing: pages are cleared from main memory by flushing dirty pages to disk
  - Relieves the source of holding any pages of the migrated process in main memory
44. Migration of Messages and Signals (Links)
- Link redirection and message forwarding
  - Link identifiers are recorded in a link table maintained by the kernel
    - Pointers to the communication endpoints of the peer processes
    - Host network address + port number
- Link redirection and message forwarding stages (example)
  - The link tables of those processes that have an outgoing link to the migrating process must be updated so that communication links remain intact after the migration
    - i.e., the processes that will send messages to the migrating process
45. Link Redirection and Message Forwarding
- Request a link-update from the communicating processes
- Messages arriving before the link update are buffered and forwarded by the source kernel
- Messages arriving after the link update but before resumption at the remote host are buffered by the destination kernel
- Messages may be delayed (sent before the link update but arriving after it), so the source kernel has to continue forwarding messages to the already-migrated process
  - Grace period: continue forwarding; after the period, messages are lost (the application needs to deal with message loss)
- Cascading forwarding
46. Link Redirection and Message Forwarding Stages
47. Migration and Local Resources (I)
- Process-to-resource binding
  - By identifier: requires precisely the referenced resource
    - URL, FTP server, local communication endpoint
  - By value: only the value of a resource is needed
    - Another resource with the same value is OK
    - Standard libraries in C or Java, memory
  - By type: need only a resource of a specific type
    - References to local devices, such as monitors and printers
- Code migration needs to change the references to resources, but cannot affect the kind of process-to-resource binding
- How to change a reference depends on whether the resource can be moved along with the code to the target machine
  - Resource-to-machine binding
48. Migration and Local Resources (II)
- Resource-to-machine binding
  - Unattached resource: can move between machines
    - Files associated only with the migrated program
  - Fastened resource: movable, but at high cost
    - Local databases, complete web sites
  - Fixed resource: bound to a specific machine and cannot be moved
    - Local devices, local communication endpoints, memory
- Actions to be taken with respect to references to local resources when migrating code to another machine
  - GR: establish a global system-wide reference (URL, NFS, DSM)
  - MV: move the resource
  - CP: copy the value of the resource
  - RB: rebind the process to a locally available resource
49. Migration and Local Resources
(Table: action per combination of resource-to-machine binding and process-to-resource binding)
- Actions to be taken with respect to references to local resources when migrating code to another machine
- GR may be expensive (e.g., NFS for multimedia files) or not easy (communication endpoint)
50. Migration in Heterogeneous Systems (I)
- Migration in heterogeneous systems
  - Requires the code segment to be executable on each platform
    - Perhaps after recompiling the original source
  - How to represent the execution segment on each platform?
    - Weak mobility: no need to worry
    - Strong mobility: the PCB, registers, and so on are different
51. Migration in Heterogeneous Systems (II)
- One feasible approach
  - Code migration is restricted to specific points
    - Can take place only when a new subroutine is called
  - The runtime system maintains a machine-independent program stack: the migration stack
    - Updated when a subroutine is called or when execution returns from a subroutine
    - The migration stack is passed to the new machine upon migration
- Works when
  - The compiler generates code to update the migration stack
  - The compiler generates machine-independent labels for call/return
  - The runtime system cooperates
53. Distributed Process Implementation (5.4)
54. Logical Model of Local and Remote Processes
- Request messages
- A daemon is one kind of stub process
- Similar to the RPC model
55. Application Scenarios
- Remote service
  - Request messages are interpreted as requests for a known service at the remote site
- Remote execution
  - Request messages contain a program to be executed at the remote site
  - Once a remote operation starts at a remote site, it remains there until completion of the execution
- Process migration
  - Request messages represent a process being migrated to the remote site for continuing execution
56. Remote Services
- Resource sharing
- Request messages can be generated at three levels
  - As remote procedure calls at the language level
    - Like NFS
  - As remote commands at the operating system level
    - rcp
    - rsh host -l user ls
  - As interpretive messages at the application level
    - get/put of ftp
- Constrained only to services supported at the remote host
- Primary implementation issues: I/O redirection and security
57. Remote Execution / Dynamic Task Placement
- Request messages contain a program to be executed at the remote site
- Spawn a process at a remote host
  - Remote hosts are used to off-load computation
- Implementation issues
  - Load-sharing algorithm: process servers (like STARTER) or sender-/receiver-initiated algorithms
  - Location independence
  - System heterogeneity: code compatibility and data transfer format
  - Protection and security
58. Process Migration
- Link redirection and message forwarding
- State and context transfer
59. Real-Time Scheduling
60. Real-Time Systems
- Real-time system
  - Correctness of the system depends not only on the logical result of the computation, but also on the time at which the results are produced
  - Tasks or processes attempt to control or react to events that take place in the outside world
  - These events occur in real time, and the process must be able to keep up with them
- Types
  - Hard real-time: must meet its deadline, otherwise it causes undesirable damage or a fatal error to the system
  - Soft real-time: it makes sense to schedule and complete the task even if it has passed its deadline
  - Firm real-time: tasks missing their deadlines are discarded
61. Real-Time System Applications
- Control of laboratory experiments
- Process control plants
- Robotics
- Air traffic control
- Telecommunications
- Military command and control systems
- Multimedia processing system
- Virtual Reality System
62. Definition of Real-Time Tasks
- S: earliest possible start time; C: worst-case execution time; D: deadline
- V: real-time task set
- Periodic real-time task set: T = period
  - The start time of a new instance of a job is the deadline of the last instance
63. Real-Time Scheduling
- Schedule the tasks for execution in such a way that all tasks meet their deadlines
- Consider uniprocessor scheduling only
- Discuss hard real-time scheduling only
- A schedule is an assignment of the CPU to the real-time tasks such that at most one task is assigned the CPU at any given moment
64. Schedule
A schedule is a set of execution intervals (s, f, t), where s = start time of the interval, f = finish time of the interval, and t = the task executing during the interval.
A schedule is feasible if every task τk receives at least Ck seconds of CPU execution in the schedule.
Note: a task may execute partly in many intervals.
65. Schedule Example
- V = {t1(0,8,13), t2(3,5,10), t3(4,7,20)}
  - A = {(0,3,t1), (3,8,t2), (8,13,t1), (13,20,t3)} is a feasible schedule
  - For t1: (3-0) + (13-8) = 3 + 5 = 8
- V = {t1(1,8,12), t2(3,5,10), t3(4,7,14)}
  - No feasible schedule
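The feasibility definition above can be checked mechanically. A minimal sketch (the names and data layout are my own) that verifies the first example set:

```python
def feasible(tasks, schedule):
    """Check feasibility: the intervals must not overlap (uniprocessor),
    each interval for task k must lie inside [S_k, D_k], and task k must
    receive at least C_k seconds in total.
    tasks: {name: (S, C, D)}; schedule: list of (start, finish, name)."""
    ivs = sorted(schedule)
    for (s1, f1, _), (s2, _, _) in zip(ivs, ivs[1:]):
        if s2 < f1:                       # two tasks on the CPU at once
            return False
    got = {k: 0.0 for k in tasks}
    for s, f, name in schedule:
        S, C, D = tasks[name]
        if s < S or f > D:                # execution must lie within [S, D]
            return False
        got[name] += f - s
    return all(got[k] >= tasks[k][1] for k in tasks)

tasks = {"t1": (0, 8, 13), "t2": (3, 5, 10), "t3": (4, 7, 20)}
A = [(0, 3, "t1"), (3, 8, "t2"), (8, 13, "t1"), (13, 20, "t3")]
assert feasible(tasks, A)                 # the feasible schedule from the slide
```

Note that t1 accumulates its 8 units across two separate intervals, which is exactly the "partly in many intervals" remark above.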
66. Scheduling of a Real-Time Process (I)
Not appropriate
67. Scheduling of a Real-Time Process (II)
Not appropriate
68. Scheduling of a Real-Time Process (III)
Appropriate
Timer interrupts occur every X time units; scheduling decisions are made when timer interrupts occur
69. Scheduling of a Real-Time Process (IV)
Better
70. Real-Time Scheduling
- Static table-driven
  - Suitable for periodic tasks / earliest-deadline-first scheduling
  - Static analysis of a feasible dispatching schedule
- Static priority-driven preemptive → rate monotonic algorithm
  - Static analysis to determine priorities
  - A traditional priority-driven scheduler is used
- Dynamic planning-based (must re-evaluate priorities on the fly)
  - Create a schedule containing the previously scheduled tasks and the new arrival → if all tasks meet their constraints, the new one is accepted
- Dynamic best effort
  - No feasibility analysis is performed
  - Assign a priority to the new arrival → then apply earliest deadline first
  - The system tries to meet all deadlines and aborts any started process whose deadline is missed
71. Rate Monotonic Scheduling
- Assumptions
  - Tasks are periodic
  - Tasks do not communicate with each other
  - Tasks are scheduled according to priority, and task priorities are fixed (static priority scheduling)
- Note
  - A task set may have a feasible schedule, but not by using any static priority schedule
- Feasible static priority assignment
- Rate Monotonic Scheduling (RMS)
  - Assigns priorities to tasks on the basis of their periods
  - The highest-priority task is the one with the shortest period
  - If Th < Tl, then PRh > PRl
72. Periodic Real-Time Task Set (T = period)
The start time of a new instance of a job is the deadline of the last instance
73. How Can We Know a Priority-Based Schedule is Feasible?
- Examine the execution of the tasks at the critical instant of the lowest-priority task
  - If the lowest-priority task can meet its deadline starting from its critical instant, all tasks can always meet their deadlines
- Critical instant
  - The critical instant of τi occurs when τi and all higher-priority tasks are scheduled simultaneously
  - If τi can meet its deadline when it is scheduled at a critical instant, τi can always meet its deadline
74. Preemptive!!!
No matter whether τh is released earlier or later, τh cannot get more time to run in (t, t + Dl)
75. Critical Instant of J3
[Timeline figure, t = 1..6: V = {(1,2), (1,3), (1,6)}; J1 arrives at 0, 2, 4, 6; J2 arrives at 0, 3, 6, 9; J3 arrives at 0, 6, 12, 18]
76. Release J1 Earlier
[Timeline figure, t = 1..6: V = {(1,2), (1,3), (1,6)}; J1 released at -0.5, 1.5, 3.5, 5.5; J2 at 0, 3, 6, 9; J3 at 0, 6, 12, 18]
77. Release J1 Later
[Timeline figure, t = 1..6: V = {(1,2), (1,3), (1,6)}; J1 released at 2, 4, 6, ...; J2 at 0, 3, 6, 9; J3 at 0, 6, 12, 18]
78. RMS is Optimal Among Static Priority Assignments
- If a set of periodic tasks has a feasible static priority assignment, then RMS is a feasible static priority assignment (proof: page 177)
- Hint: if there is a non-RMS feasible static priority assignment
  - List the tasks in decreasing order of priority
  - Because the assignment is non-RMS, there must be τi and τi+1 such that Ti > Ti+1
  - Prove: exchange τi and τi+1, and the schedule is still feasible
  - Repeat the priority exchange
79. Priority Exchange
- PRi > PRi+1 but Ti > Ti+1
- Swap the priorities of τi and τi+1
- With H the higher-priority interference: H + Ci ≤ H + Ci + Ci+1 ≤ Ti+1 ≤ Ti, so the swapped schedule still fits
80. Time Analysis of RMS
Suppose we have n tasks, each with a fixed period and execution time. For it to be possible to meet all deadlines, the following must hold:
C1/T1 + C2/T2 + ... + Cn/Tn ≤ 1
(the sum of processor utilizations cannot exceed 1)
For RMS, the bound is lower:
C1/T1 + C2/T2 + ... + Cn/Tn ≤ n(2^(1/n) - 1)
As n becomes larger, this bound approaches ln 2 ≈ 0.693 (sufficient condition)
Example:
Task P1: C1 = 20, T1 = 100, U1 = 0.2
Task P2: C2 = 40, T2 = 150, U2 = 0.267
Task P3: C3 = 100, T3 = 350, U3 = 0.286
The total CPU utilization is 0.753, and the bound is 0.779 → schedulable!
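The utilization test above is easy to automate. A minimal sketch of the Liu and Layland sufficient condition (only C and T come from the slide; the function name and return shape are my own):

```python
def rms_schedulable(tasks):
    """Sufficient RM test: total utilization <= n(2^(1/n) - 1).
    tasks: list of (C, T) pairs. Passing guarantees RM feasibility;
    failing is inconclusive, since the condition is only sufficient."""
    n = len(tasks)
    u = sum(c / t for c, t in tasks)
    bound = n * (2 ** (1 / n) - 1)
    return u, bound, u <= bound

u, bound, ok = rms_schedulable([(20, 100), (40, 150), (100, 350)])
assert ok          # 0.752... <= 0.779..., so the example set is schedulable
```

Note that the exact total utilization is 0.7524; the slide's 0.753 comes from summing the individually rounded utilizations.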
81. V = {(1,2), (1,3), (1,6)} Has a Feasible RM Priority Assignment at 100% Load
[Timeline figure, t = 1..6: J1 arrives at 0, 2, 4, 6; J2 arrives at 0, 3, 6, 9; J3 arrives at 0, 6, 12, 18]
82. Time Analysis of RMS (Cont.)
- Necessary condition of RMS
  - List the tasks in decreasing order of priority and simulate the execution of the tasks at the critical instant
- ri: response time of τi
- Solve the recurrence equation (Eq. 5.1): r_i^(k+1) = C_i + Σ_{j<i} ⌈r_i^(k) / T_j⌉ × C_j
- If on some iteration r_i^(k) > Ti, there is no feasible priority assignment
83. [Figure: iterative computation of the response time r5]
84. Time Analysis of RMS (Cont.)
- Consider the task set {(1, 4), (2, 5), (1.5, 6)}
- Compute the response time of each task under RM scheduling. Does the task set have a feasible static priority assignment?
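The recurrence of Eq. 5.1 answers this directly. A Python sketch (the RM ordering step and the None convention for a missed deadline are my own); iterating by hand gives r1 = 1 and r2 = 3, while the third task's response time grows past T3 = 6, so by RMS optimality the set has no feasible static priority assignment:

```python
import math

def response_times(tasks):
    """Iterate Eq. 5.1: r_i = C_i + sum_{j<i} ceil(r_i / T_j) * C_j,
    with tasks sorted into RM order (shortest period = highest priority).
    Returns each task's response time, or None if it exceeds the period
    (deadline D = T assumed)."""
    tasks = sorted(tasks, key=lambda ct: ct[1])        # RM priority order
    result = []
    for i, (c, t) in enumerate(tasks):
        r = c
        while True:
            r_next = c + sum(math.ceil(r / tj) * cj for cj, tj in tasks[:i])
            if r_next > t:
                result.append(None)                    # misses its deadline
                break
            if r_next == r:
                result.append(r)                       # fixed point reached
                break
            r = r_next
    return result

print(response_times([(1, 4), (2, 5), (1.5, 6)]))      # → [1, 3, None]
```

Unlike the utilization bound, this response-time test is exact for the RM priority order, so the None here is conclusive.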
85. Deadline Monotonic Scheduling
- Assumptions
  - Tasks are periodic
  - Ji requested at time t must complete by time t + Di
  - Tasks do not communicate with each other
  - Static priority scheduling
- Deadline Monotonic Scheduling (DMS)
  - Assigns priorities to tasks on the basis of their deadlines
  - The highest-priority task is the one with the shortest deadline
  - If Dh < Dl, then PRh > PRl
- DMS is optimal
- Time analysis of DMS: similar to Eq. 5.1
  - The task set is feasible if ri ≤ Di for i = 1, ..., n
86. Earliest Deadline First
- Assumptions
  - Tasks may be periodic or aperiodic
  - Dynamic priority scheduling
    - Priorities of real-time tasks vary during the system's execution
    - Priorities are re-evaluated when important events occur, such as task arrivals, task completions, and task synchronization
- Earliest Deadline First (EDF)
  - Scheduling the task with the earliest deadline first minimizes the fraction of tasks that miss their deadlines
  - τk(i): the i-th instance of job Jk; dk(i): the real-time deadline of τk(i)
  - If dh(i) < dl(j), then PRh(i) > PRl(j)
87. Earliest Deadline First (Cont.)
- EDF is optimal
  - If Di = Ti for a set of periodic tasks, then the task set has a feasible EDF priority assignment as long as the load L ≤ 1
  - But remember: EDF is dynamic
- EDF is applicable to scheduling aperiodic real-time tasks
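With Di = Ti and utilization exactly 1, EDF still meets every deadline. A unit-step simulation sketch (integer C and T assumed; the data structures are my own) over the task set used earlier:

```python
def edf_misses(tasks, horizon):
    """Unit-step EDF simulation for periodic tasks with integer C and T,
    deadline = period. Counts deadline misses over [0, horizon)."""
    jobs, misses = [], 0          # each job: [absolute_deadline, remaining_work]
    for now in range(horizon):
        for c, t in tasks:
            if now % t == 0:
                jobs.append([now + t, c])      # release a new instance
        jobs.sort()                            # earliest absolute deadline first
        if jobs:
            jobs[0][1] -= 1                    # run the EDF choice one time unit
            if jobs[0][1] == 0:
                jobs.pop(0)
        # any job whose deadline passed with work left is a miss
        misses += sum(1 for d, w in jobs if d <= now + 1)
        jobs = [j for j in jobs if j[0] > now + 1]
    return misses

# V = {(1,2), (1,3), (1,6)} is 100% load, yet EDF meets every deadline
assert edf_misses([(1, 2), (1, 3), (1, 6)], 12) == 0
```

The same set under any static priority order cannot use the processor this fully, which is the practical payoff of dynamic priorities.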
88. Two Tasks
89. Making a Scheduling Decision Every 10 ms
90. Real-Time Synchronization
- Synchronization among real-time tasks can generally be described as acquiring and releasing semaphores
- Blocking due to synchronization can cause subtle timing problems
- Priority inversion
  - A lower-priority task (τ2) executes at the expense of a higher-priority task (τ1): we don't want this
  - τ3 should have a higher priority than τ2 at this time point
- Chain blocking
  - E.g., τ1 locks semaphore R; τ2 locks R and then S; τ3 locks S. τ1 might be blocked by τ3 even though τ1 does not lock S
91. Priority: τ1 > τ2 > τ3
92. Real-Time Synchronization (Cont.)
- Goal
  - Interact with the scheduler to reduce priority inversion to a small and predictable level
- Approaches
  - Dynamically adjust the priorities of real-time tasks
  - Selectively grant access to semaphores
93. Priority Inheritance Protocol (PIP)
- A task is assigned its RM priority when it is requested
- The CPU is assigned to the highest-priority ready process
- Before a task can enter a critical section, it must first acquire a lock on the semaphore that guards the critical section
- If task τh is blocked through semaphore S, which is held by a lower-priority task τl, then τh is removed from the ready list and PRl is set to PRh (τl inherits the priority of τh)
- Priority inheritance is transitive: if τ2 blocks τ1 and τ3 blocks τ2, both τ3 and τ2 inherit PR1
- When τl releases semaphore S, the highest-priority process blocked through S is put on the ready queue. Task τl releases any priority it inherited through S and resumes a lower priority
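The transitive inheritance rule can be sketched as a small computation over the wait-for relation. The dictionary representation and names here are my own; the example reproduces the chain above, where τ2 blocks τ1 and τ3 blocks τ2, so both inherit PR1:

```python
def effective_priority(task, base, waits_on, holder):
    """Transitive priority inheritance: a task's effective priority is the
    max of its base priority and the effective priorities of every task
    blocked on a semaphore it holds. Assumes the wait-for graph is acyclic
    (i.e., no deadlock), which guarantees the recursion terminates.
    base: {task: priority}; waits_on: {task: semaphore}; holder: {sem: task}."""
    pr = base[task]
    for t, s in waits_on.items():
        if holder.get(s) == task:            # t is blocked by `task` through s
            pr = max(pr, effective_priority(t, base, waits_on, holder))
    return pr

# tau1 (highest, PR 3) waits on R held by tau2; tau2 waits on S held by tau3
base = {"tau1": 3, "tau2": 2, "tau3": 1}
waits_on = {"tau1": "R", "tau2": "S"}
holder = {"R": "tau2", "S": "tau3"}
assert effective_priority("tau2", base, waits_on, holder) == 3   # inherits PR1
assert effective_priority("tau3", base, waits_on, holder) == 3   # transitive
```

A real kernel would update these effective priorities incrementally on each lock and unlock rather than recomputing them, but the fixed point is the same.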
94. τh is Directly Blocked by τl
- τm is push-through blocked by τl, due to priority inheritance
95. Time Analysis of PIP
- A low-priority task τl can block τh only while τl is in a critical section (CS)
- Assume properly nested critical sections
- Only look at the longest single CS during which τl can block τh
  - Bh(l): the set of all critical sections of τl that can block τh
  - Bh(l) is empty if PRl > PRh
  - ceiling(S): the priority of the highest-priority task that can be blocked by S
  - A CS zl(j) of τl blocks τh if zl(j) is protected by S and ceiling(S) ≥ PRh
  - Eh(l): the maximum execution time of any critical section in Bh(l)
- τh will be blocked by at most one CS of τl
- Bh: the maximum time that task τh will be blocked
96. (Example)
- τm may be blocked by S of τl, because ceiling(S) > PRm
- τm will not be blocked by R, because ceiling(R) < PRm
97. (Example)
- τm may be blocked by S, because ceiling(S) > PRm
98. Blocking Example
[Figure: τl executes Lock S ... Unlock S while a higher-priority task attempts its own Lock S ... Unlock S]
99. Time Analysis of PIP (Cont.)
- Bh(l): the set of all critical sections of τl that can block τh
- Eh(l): the maximum execution time of any critical section in Bh(l)
- Eh(S): the longest execution of a critical section protected by semaphore S by a task with priority lower than PRh
- ceiling(S): the priority of the highest-priority task that can be blocked by S
- Bh: the maximum time that task τh will be blocked
[Sufficient and necessary bounds on Bh: equations not recovered]
100. Priority Ceiling Protocol (I)
- Guarantees that when τh is requested, at most one low-priority task holds a lock on a semaphore that can block τh
- A task can acquire a lock on semaphore S only if no other task holds a lock on a semaphore R such that a higher-priority task τh could be blocked through both S and R
101. Priority Ceiling Protocol (II)
- Rules
  - Each semaphore S has an associated priority ceiling, ceiling(S)
  - When task τi attempts to set a lock on semaphore S, the lock is granted only if PRi is larger than ceiling(R) for every semaphore R locked by a different task. Otherwise τi is blocked.
  - If task τh is blocked through semaphore S, which is held by task τl, then τl inherits the priority PRh. Priority inheritance is transitive.
  - When τi releases S, τi releases any priority it had inherited through S. The highest-priority task that was blocked on S is put on the ready queue.