Title: Implementing Processes, Threads, and Resources
Lecture 16: Threads and Scheduling
2. Announcements
- Homework Set 2 due Friday at 11 am (extension)
- Program Assignment 1 due Thursday Feb. 10 at 11 am
- Read chapters 6 and 7
3. Program 1 Threads (addenda)
- Draw the picture of user-space threads versus kernel-space threads
  - user-space threads yield voluntarily to switch between threads
  - because it's user space, the CPU doesn't know about these threads; your program just looks like a single-threaded process to the CPU
  - inside that process, use a library to create and delete threads, wait on a thread, and yield a thread; this is what you're building
- Advantage: can implement threads on any OS, and it's faster (no trap to the kernel, no context switch)
- Disadvantage: only voluntary scheduling, no preemption; blocking I/O on one user thread blocks all threads
4. Program 1 Threads (addenda)
- Each process keeps a thread table
  - analogous to the process table of PCBs kept by the OS kernel for each process
- Key question: how do we switch between threads?
  - need to save thread state and change the PC
- PA 1 does it like this:
  - the scheduler is a global user thread, while your threads a and b are user threads, but local (hence on the stack)
  - save and restore the stack pointer and frame pointer
6. What is a Process?
- A process is a program actively executing from main memory
  - has a Program Counter (PC) and execution state associated with it
  - CPU registers keep state
  - the OS keeps process state in memory
  - it's alive!
- Has an address space associated with it
  - a limited set of (virtual) addresses that can be accessed by the executing code
[Figure: main memory holding the Program P1 binary with its data, heap, and stack]
7. How is a Process Structured in Memory?
- Run-time memory image
  - essentially code, data, stack, and heap
  - code and data are loaded from the executable file
  - the stack grows downward, the heap grows upward
[Figure: run-time memory from address 0 up to the max address: read-only .init/.text/.rodata, read/write .data/.bss, heap growing up, unallocated space, user stack growing down]
8. Multiple Processes
The OS keeps a PCB for each process, holding:
- process state, e.g. ready, running, or waiting
- accounting info, e.g. process ID
- Program Counter
- CPU registers
- CPU-scheduling info, e.g. priority
- memory-management info, e.g. base and limit registers, page tables
- I/O status info, e.g. list of open files
[Figure: main memory holding the OS, the PCBs for P1 and P2, and each process's data, heap, and stack]
9. Multiple Processes
[Figure: CPU execution: the Program Counter (PC) and ALU operate on one process's image, while main memory holds the OS, the PCBs for P1 and P2, and each process's data, heap, and stack]
10. Context Switching
- Each time a process is switched out, its context must be saved, e.g. in the PCB
- Each time a process is switched in, its context is restored
- This usually requires copying of registers
[Figure: the process manager and interrupt handler switch the CPU among processes P1, P2, ..., Pn in executable memory]
11. Threads
- A thread is a logical flow of execution that runs within the context of a process
  - has its own program counter (PC), register state, and stack
  - shares the memory address space with other threads in the same process
  - shares the same code, data, and resources (e.g. open files)
12. Threads
- Why would you want multithreaded processes?
  - reduced context-switch overhead (in Solaris, context switching between processes is about 5x slower than switching between threads)
  - shared resources → less memory consumption → more threads can be supported, especially for a scalable system, e.g. a Web server that must handle thousands of connections
  - inter-thread communication is easier and faster than inter-process communication
- A thread is also called a lightweight process
13. Threads
- Process P1 is multithreaded
- Process P2 is single-threaded
- The OS is multiprogrammed
- If there is preemptive timeslicing, the system is multitasked
[Figure: main memory with P1's address space (shared data and heap, one stack per thread) and single-threaded P2 (its own data, heap, and stack)]
14. Processes and Threads
[Figure: a process is a state plus a map onto an address space containing the program, static data, and resources; threads are flows of execution within that address space]
15. Thread-Safe/Reentrant Code
- If two threads share and execute the same code, then the code needs to be thread-safe
  - use of global variables is not thread-safe
  - use of static variables is not thread-safe
  - use of local variables is thread-safe
- Need to govern access to persistent data like global/static variables with locking and synchronization mechanisms
- Reentrant is a special case of thread-safe
  - reentrant code does not have any references to global variables
  - thread-safe code protects and synchronizes access to global variables
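A minimal C sketch of the three cases above. The function names and the counter example are invented for illustration; only the classification (static state = not thread-safe, caller-supplied state = reentrant, locked global = thread-safe) comes from the slide:

```c
#include <assert.h>
#include <pthread.h>

/* NOT thread-safe: hidden static state is shared by every caller. */
int next_id_unsafe(void) {
    static int id = 0;
    return ++id;
}

/* Reentrant: all state is supplied by the caller; no globals at all. */
int next_id_reentrant(int *state) {
    return ++*state;
}

/* Thread-safe (but not reentrant): global state protected by a lock. */
static int shared_id = 0;
static pthread_mutex_t id_lock = PTHREAD_MUTEX_INITIALIZER;

int next_id_locked(void) {
    pthread_mutex_lock(&id_lock);
    int id = ++shared_id;
    pthread_mutex_unlock(&id_lock);
    return id;
}
```

A real-world instance of the same split is strtok (static state, not thread-safe) versus strtok_r (caller passes the state pointer, reentrant).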
16. User-Space and Kernel Threads
- pthreads is a POSIX user-space threading API
  - provides an interface to create and delete threads in the same process
  - threads synchronize with each other via this package; no need to involve the OS
  - implementations of the pthreads API differ underneath the API
- Kernel threads are supported by the OS
  - the kernel must be involved in switching threads
  - the mapping of user-level threads to kernel threads is usually one-to-one
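A small pthreads sketch of the create/join interface mentioned above. pthread_create and pthread_join are the standard POSIX calls; the work-splitting scheme (four threads each summing a slice of 1..1000 into their own slot) is invented for illustration:

```c
#include <assert.h>
#include <pthread.h>

#define NTHREADS 4

static long partial[NTHREADS];

/* Each thread sums one 250-number slice of 1..1000. */
static void *worker(void *arg) {
    long idx = (long)arg;
    long lo = idx * 250 + 1, hi = (idx + 1) * 250;
    long sum = 0;
    for (long v = lo; v <= hi; v++)
        sum += v;
    partial[idx] = sum;   /* no data race: each thread owns one slot */
    return NULL;
}

long parallel_sum(void) {
    pthread_t tid[NTHREADS];
    for (long i = 0; i < NTHREADS; i++)
        pthread_create(&tid[i], NULL, worker, (void *)i);
    long total = 0;
    for (long i = 0; i < NTHREADS; i++) {
        pthread_join(tid[i], NULL);   /* wait for thread i to finish */
        total += partial[i];
    }
    return total;
}
```

Whether these threads map to kernel threads one-to-one is up to the implementation, which is exactly the "differ underneath the API" point above. Compiling may require -lpthread on older toolchains.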
17. Model of Process Execution
[Figure: new processes enter the Ready List; the Scheduler dispatches a job to the CPU (Running); preemption or a voluntary yield returns it to the Ready List; a Request moves it to Blocked until the Resource Manager allocates the resource; finished jobs leave as Done]
18. The Scheduler
[Figure: process descriptors arrive from other states; the Enqueuer places them on the Ready List; the Context Switcher saves the running process's context; the Dispatcher selects the next process descriptor and starts it on the CPU]
19. Invoking the Scheduler
- Need a mechanism to call the scheduler
- Voluntary call
- Process blocks itself
- Calls the scheduler
- Involuntary call
- External force (interrupt) blocks the process
- Calls the scheduler
20. Voluntary CPU Sharing

yield(pi.pc, pj.pc) {
    memory[pi.pc] = PC;
    PC = memory[pj.pc];
}

- pi can be automatically determined from the processor status registers, so the caller need not name itself:

yield(*, pj.pc) {
    memory[pi.pc] = PC;
    PC = memory[pj.pc];
}
21. More on Yield
- pi and pj can resume one another's execution:

    yield(*, pj.pc);
    . . .
    yield(*, pi.pc);
    . . .
    yield(*, pj.pc);

- Suppose pj is the scheduler:

    yield(*, pj.pc);   // p_i yields to the scheduler
    yield(*, pk.pc);   // the scheduler chooses p_k
    yield(*, pj.pc);   // p_k yields to the scheduler
                       // the scheduler chooses ...
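One concrete way to implement this save-the-PC/stack-pointer switch in user space is the POSIX ucontext API (available on Linux; deprecated on some other systems). This is a sketch with a single worker thread and a scheduler loop, not the PA 1 library itself; run_cooperative, thread_body, and the step counting are invented for illustration:

```c
#include <assert.h>
#include <ucontext.h>

static ucontext_t sched_ctx, thr_ctx;
static int steps;

/* "Thread" body: do a unit of work, then yield back to the scheduler. */
static void thread_body(void) {
    for (int i = 0; i < 3; i++) {
        steps++;
        swapcontext(&thr_ctx, &sched_ctx);  /* yield(*, scheduler.pc) */
    }
}

static char stack[64 * 1024];   /* the user thread's private stack */

int run_cooperative(void) {
    getcontext(&thr_ctx);
    thr_ctx.uc_stack.ss_sp = stack;
    thr_ctx.uc_stack.ss_size = sizeof stack;
    thr_ctx.uc_link = &sched_ctx;           /* where to go if it returns */
    makecontext(&thr_ctx, thread_body, 0);

    steps = 0;
    /* Scheduler loop: dispatch the thread until it has done 3 steps.
       (A real scheduler would pick among many threads here.) */
    while (steps < 3)
        swapcontext(&sched_ctx, &thr_ctx);  /* yield(*, thread.pc) */
    return steps;
}
```

swapcontext does exactly what the slide's yield pseudocode describes: it stores the current registers (including PC and stack pointer) in one context and loads them from another, all without entering the kernel's scheduler.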
22. Voluntary Sharing
- Every process periodically yields to the scheduler
- Relies on correct process behavior
  - a process can fail to yield: an infinite loop, either intentional (while(1)) or due to a logical error (while(!DONE))
    - malicious
    - accidental
  - a process can yield too soon: unfairness for the "nice" processes that give up the CPU while others do not
  - a process can fail to yield in time: another process urgently needs the CPU to read incoming data flowing into a bounded buffer, but doesn't get the CPU in time to prevent the buffer from overflowing and dropping information
- Need a mechanism to override the running process
23. Involuntary CPU Sharing
- Interval timer
  - device to produce a periodic interrupt
  - programmable period

IntervalTimer() {
    InterruptCount--;
    if (InterruptCount <= 0) {
        InterruptRequest = TRUE;
        InterruptCount = K;
    }
}

SetInterval(programmableValue) {
    K = programmableValue;
    InterruptCount = K;
}
24. Involuntary CPU Sharing (cont.)
- Interval timer device handler
  - keeps an in-memory clock up-to-date (see the Chap 4 lab exercise)
  - invokes the scheduler

IntervalTimerHandler() {
    Time++;               // update the clock
    TimeToSchedule--;
    if (TimeToSchedule <= 0) {
        <invoke scheduler>;
        TimeToSchedule = TimeSlice;
    }
}
25. Contemporary Scheduling
- Involuntary CPU sharing: timer interrupts
- Time quantum determined by the interval timer; usually a fixed size for every process using the system
- Sometimes called the time-slice length
26. Choosing a Process to Run
- Mechanism never changes
- Strategy: the policy the dispatcher uses to select a process from the ready list
- Different policies for different requirements
27. Policy Considerations
- Policy can control/influence
- CPU utilization
- Average time a process waits for service
- Average amount of time to complete a job
- Could strive for any of
- Equitability
- Favor very short or long jobs
- Meet priority requirements
- Meet deadlines
28. Optimal Scheduling
- Suppose the scheduler knows each process pi's service time, t(pi) -- or it can estimate each t(pi)
- Policy can optimize on any criterion, e.g.,
  - CPU utilization
  - waiting time
  - deadline
- To find an optimal schedule:
  - have a finite, fixed number of pi
  - know t(pi) for each pi
  - enumerate all schedules, then choose the best
29. However ...
- The t(pi) are almost certainly just estimates
- The general algorithm to choose an optimal schedule is O(n²)
- Other processes may arrive while these processes are being serviced
- Usually, the optimal schedule is only a theoretical benchmark; scheduling policies try to approximate an optimal schedule
30. Model of Process Execution
[Figure: same diagram as slide 17: Ready List, Scheduler, CPU, Done, with preemption/voluntary yield back to the Ready List and the Request/Blocked/Resource Manager path]
31. Talking About Scheduling ...
- Let P = {pi | 0 ≤ i < n} = the set of processes
- Let S(pi) ∈ {running, ready, blocked}
- Let t(pi) = time the process needs to be in the running state (the service time)
- Let W(pi) = time pi is in the ready state before its first transition to running (wait time)
- Let TTRnd(pi) = time from pi first entering ready to its last exit from ready (turnaround time)
- Batch throughput rate = inverse of the average TTRnd
- Timesharing response time = W(pi)
32. Simplified Model
[Figure: the same Ready List / Scheduler / CPU / Done diagram, with the Blocked state and Resource Manager path removed]
- Simplified, but still provides analysis results
- Easy to analyze performance
- No issue of voluntary/involuntary sharing
33. Estimating CPU Utilization
[Figure: new processes enter the Ready List at λ pi per second; the Scheduler feeds the CPU; each pi uses 1/μ units of the CPU; jobs leave the system as Done]
- Let λ = the average rate at which processes are placed in the Ready List (the arrival rate)
- Let μ = the average service rate, so 1/μ = the average t(pi)
34. Estimating CPU Utilization
- Let λ = the average rate at which processes are placed in the Ready List (the arrival rate)
- Let μ = the average service rate, so 1/μ = the average t(pi)
- Let ρ = the fraction of the time that the CPU is expected to be busy:
  ρ = (pi arriving per unit time) × (average time each spends on the CPU) = λ × (1/μ) = λ/μ
- Notice we must have λ < μ (i.e., ρ < 1)
- What if ρ approaches 1?
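A quick numeric check of ρ = λ/μ, with example numbers invented here: if processes arrive at λ = 2 per second and the average service time is 1/μ = 0.4 s (so μ = 2.5 per second), the CPU is expected to be busy ρ = 2/2.5 = 0.8 of the time:

```c
#include <assert.h>

/* rho = lambda/mu: the expected fraction of time the CPU is busy. */
double utilization(double lambda, double mu) {
    return lambda / mu;
}
```

When λ reaches μ, ρ reaches 1 and the ready list grows without bound; that is the point of the "what if ρ approaches 1?" question above.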
35. Nonpreemptive Schedulers
[Figure: blocked or preempted processes re-enter the Ready List; the Scheduler feeds the CPU; jobs leave as Done]
- Try to use the simplified scheduling model
  - only consider the running and ready states
  - ignores time in the blocked state
  - a "new" process is created when it enters the ready state
  - a process is "destroyed" when it enters the blocked state
  - really just looking at small phases of a process
36. First-Come-First-Served

    i    t(pi)
    0     350
    1     125
    2     475
    3     250
    4      75

[Gantt: p0 runs from 0 to 350]
TTRnd(p0) = t(p0) = 350
W(p0) = 0
37. First-Come-First-Served (same service times)
[Gantt: p0 0-350, p1 350-475]
TTRnd(p0) = t(p0) = 350
TTRnd(p1) = t(p1) + TTRnd(p0) = 125 + 350 = 475
W(p0) = 0; W(p1) = TTRnd(p0) = 350
38. First-Come-First-Served (same service times)
[Gantt: p0 0-350, p1 350-475, p2 475-950]
TTRnd(p0) = t(p0) = 350
TTRnd(p1) = t(p1) + TTRnd(p0) = 125 + 350 = 475
TTRnd(p2) = t(p2) + TTRnd(p1) = 475 + 475 = 950
W(p0) = 0; W(p1) = TTRnd(p0) = 350; W(p2) = TTRnd(p1) = 475
39. First-Come-First-Served (same service times)
[Gantt: p0 0-350, p1 350-475, p2 475-950, p3 950-1200]
TTRnd(p0) = 350; TTRnd(p1) = 125 + 350 = 475; TTRnd(p2) = 475 + 475 = 950
TTRnd(p3) = t(p3) + TTRnd(p2) = 250 + 950 = 1200
W(p0) = 0; W(p1) = 350; W(p2) = 475; W(p3) = TTRnd(p2) = 950
40. First-Come-First-Served (same service times)
[Gantt: p0 0-350, p1 350-475, p2 475-950, p3 950-1200, p4 1200-1275]
TTRnd(p0) = 350; TTRnd(p1) = 125 + 350 = 475; TTRnd(p2) = 475 + 475 = 950
TTRnd(p3) = 250 + 950 = 1200; TTRnd(p4) = t(p4) + TTRnd(p3) = 75 + 1200 = 1275
W(p0) = 0; W(p1) = 350; W(p2) = 475; W(p3) = 950; W(p4) = TTRnd(p3) = 1200
41. FCFS Average Wait Time
- Easy to implement
- Ignores service time, etc.
- Not a great performer
[Gantt: p0 0-350, p1 350-475, p2 475-950, p3 950-1200, p4 1200-1275]
TTRnd(p0) = 350; TTRnd(p1) = 475; TTRnd(p2) = 950; TTRnd(p3) = 1200; TTRnd(p4) = 1275
W(p0) = 0; W(p1) = 350; W(p2) = 475; W(p3) = 950; W(p4) = 1200
Wavg = (0 + 350 + 475 + 950 + 1200)/5 = 2975/5 = 595
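The FCFS arithmetic above is just a running total; a short C sketch (function name invented) reproduces the slide's numbers from the service times 350, 125, 475, 250, 75:

```c
#include <assert.h>

#define N 5

/* FCFS in arrival order p0..p4: fills W[] (wait before first run)
   and TTRnd[] (completion time); returns the average wait. */
double fcfs(const int t[N], int W[N], int TTRnd[N]) {
    int clock = 0, total_wait = 0;
    for (int i = 0; i < N; i++) {
        W[i] = clock;        /* waits for everything already queued */
        clock += t[i];
        TTRnd[i] = clock;    /* finishes at the running total */
        total_wait += W[i];
    }
    return (double)total_wait / N;
}
```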
42. Predicting Wait Time in FCFS
- In FCFS, when a process arrives, everything already in the ready list will be processed before this job
- Let μ be the service rate and L be the ready-list length
- Wavg(p) = L × (1/μ) + 0.5 × (1/μ) = L/μ + 1/(2μ)
  (the 0.5 term is the expected remaining half of the job currently in service)
- Compare the predicted wait with the actual wait in the earlier examples
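Plugging in the earlier FCFS example: the average service time is 1/μ = (350 + 125 + 475 + 250 + 75)/5 = 255, so a job arriving behind L = 4 queued jobs predicts a wait of 4 × 255 + 0.5 × 255 = 1147.5. A one-line sketch of the formula (function name invented):

```c
#include <assert.h>

/* W_avg(p) = L*(1/mu) + 0.5*(1/mu): L whole jobs ahead of us, plus,
   on average, half of the job currently in service. */
double predicted_wait(int L, double avg_service_time) {
    return L * avg_service_time + 0.5 * avg_service_time;
}
```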
43. Shortest Job Next

    i    t(pi)
    0     350
    1     125
    2     475
    3     250
    4      75

[Gantt: p4 runs from 0 to 75]
TTRnd(p4) = t(p4) = 75
W(p4) = 0
44. Shortest Job Next (same service times)
[Gantt: p4 0-75, p1 75-200]
TTRnd(p1) = t(p1) + t(p4) = 125 + 75 = 200; TTRnd(p4) = 75
W(p1) = 75; W(p4) = 0
45. Shortest Job Next (same service times)
[Gantt: p4 0-75, p1 75-200, p3 200-450]
TTRnd(p1) = 200; TTRnd(p3) = t(p3) + t(p1) + t(p4) = 250 + 125 + 75 = 450; TTRnd(p4) = 75
W(p1) = 75; W(p3) = 200; W(p4) = 0
46. Shortest Job Next (same service times)
[Gantt: p4 0-75, p1 75-200, p3 200-450, p0 450-800]
TTRnd(p0) = t(p0) + t(p3) + t(p1) + t(p4) = 350 + 250 + 125 + 75 = 800
TTRnd(p1) = 200; TTRnd(p3) = 450; TTRnd(p4) = 75
W(p0) = 450; W(p1) = 75; W(p3) = 200; W(p4) = 0
47. Shortest Job Next (same service times)
[Gantt: p4 0-75, p1 75-200, p3 200-450, p0 450-800, p2 800-1275]
TTRnd(p0) = 800; TTRnd(p1) = 200
TTRnd(p2) = t(p2) + t(p0) + t(p3) + t(p1) + t(p4) = 475 + 350 + 250 + 125 + 75 = 1275
TTRnd(p3) = 450; TTRnd(p4) = 75
W(p0) = 450; W(p1) = 75; W(p2) = 800; W(p3) = 200; W(p4) = 0
48. Shortest Job Next
- Minimizes wait time
- May starve large jobs
- Must know the service times
[Gantt: p4 0-75, p1 75-200, p3 200-450, p0 450-800, p2 800-1275]
TTRnd(p0) = 800; TTRnd(p1) = 200; TTRnd(p2) = 1275; TTRnd(p3) = 450; TTRnd(p4) = 75
W(p0) = 450; W(p1) = 75; W(p2) = 800; W(p3) = 200; W(p4) = 0
Wavg = (450 + 75 + 800 + 200 + 0)/5 = 1525/5 = 305
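SJN is FCFS after sorting by service time. A sketch (function name and the index-sorting approach are invented) reproducing the slide's waits:

```c
#include <assert.h>

#define N 5

/* Sort process indices by service time (shortest first), then run
   them back-to-back; fills W[] and returns the average wait. */
double sjn_avg_wait(const int t[N], int W[N]) {
    int order[N];
    for (int i = 0; i < N; i++) order[i] = i;
    /* selection sort of indices: fine for N = 5 */
    for (int i = 0; i < N; i++)
        for (int j = i + 1; j < N; j++)
            if (t[order[j]] < t[order[i]]) {
                int tmp = order[i]; order[i] = order[j]; order[j] = tmp;
            }
    int clock = 0, total = 0;
    for (int i = 0; i < N; i++) {
        W[order[i]] = clock;       /* waits for all shorter jobs */
        clock += t[order[i]];
        total += W[order[i]];
    }
    return (double)total / N;
}
```

Compare: 305 average wait here versus 595 under FCFS for the same jobs.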
49. Priority Scheduling

    i    t(pi)   Pri
    0     350     5
    1     125     2
    2     475     3
    3     250     1
    4      75     4

(lower Pri number = higher priority)
- Reflects the importance of external use
- May cause starvation
- Can address starvation with aging
[Gantt: p3 0-250, p1 250-375, p2 375-850, p4 850-925, p0 925-1275]
TTRnd(p0) = t(p0) + t(p4) + t(p2) + t(p1) + t(p3) = 350 + 75 + 475 + 125 + 250 = 1275
TTRnd(p1) = t(p1) + t(p3) = 125 + 250 = 375
TTRnd(p2) = t(p2) + t(p1) + t(p3) = 475 + 125 + 250 = 850
TTRnd(p3) = t(p3) = 250
TTRnd(p4) = t(p4) + t(p2) + t(p1) + t(p3) = 75 + 475 + 125 + 250 = 925
W(p0) = 925; W(p1) = 250; W(p2) = 375; W(p3) = 0; W(p4) = 850
Wavg = (925 + 250 + 375 + 0 + 850)/5 = 2400/5 = 480
50. Deadline Scheduling

    i    t(pi)   Deadline
    0     350      575
    1     125      550
    2     475     1050
    3     250     (none)
    4      75      200

- Allocates service by deadline
- May not be feasible
[Gantt: three alternative schedules of p0..p4, each meeting the deadlines at 200, 550, 575, and 1050 within the total time of 1275]
51. Preemptive Schedulers
[Figure: preemption or a voluntary yield returns the running process to the Ready List; the Scheduler feeds the CPU; jobs leave as Done]
- The highest-priority process is guaranteed to be running at all times
  - or at least at the beginning of a time slice
- Dominant form of contemporary scheduling
- But complex to build and analyze
52. Round Robin (TQ = 50)

    i    t(pi)
    0     350
    1     125
    2     475
    3     250
    4      75

[Gantt: p0 runs 0-50]
W(p0) = 0
53. Round Robin (TQ = 50, same service times)
[Gantt: p0 0-50, p1 50-100]
W(p0) = 0; W(p1) = 50
54. Round Robin (TQ = 50, same service times)
[Gantt: p0 0-50, p1 50-100, p2 100-150]
W(p0) = 0; W(p1) = 50; W(p2) = 100
55. Round Robin (TQ = 50, same service times)
[Gantt: p0 0-50, p1 50-100, p2 100-150, p3 150-200]
W(p0) = 0; W(p1) = 50; W(p2) = 100; W(p3) = 150
56. Round Robin (TQ = 50, same service times)
[Gantt: p0 0-50, p1 50-100, p2 100-150, p3 150-200, p4 200-250]
W(p0) = 0; W(p1) = 50; W(p2) = 100; W(p3) = 150; W(p4) = 200
57. Round Robin (TQ = 50, same service times)
[Gantt: first full round p0 p1 p2 p3 p4 over 0-250, then p0 runs its second quantum 250-300]
W(p0) = 0; W(p1) = 50; W(p2) = 100; W(p3) = 150; W(p4) = 200
58. Round Robin (TQ = 50, same service times)
[Gantt: the rotation continues; p4 finishes its 75 units at 475]
TTRnd(p4) = 475
W(p0) = 0; W(p1) = 50; W(p2) = 100; W(p3) = 150; W(p4) = 200
59. Round Robin (TQ = 50, same service times)
[Gantt: p1 finishes its 125 units at 550]
TTRnd(p1) = 550; TTRnd(p4) = 475
W(p0) = 0; W(p1) = 50; W(p2) = 100; W(p3) = 150; W(p4) = 200
60. Round Robin (TQ = 50, same service times)
[Gantt: p0, p2, and p3 keep alternating; p3 finishes its 250 units at 950]
TTRnd(p1) = 550; TTRnd(p3) = 950; TTRnd(p4) = 475
W(p0) = 0; W(p1) = 50; W(p2) = 100; W(p3) = 150; W(p4) = 200
61. Round Robin (TQ = 50, same service times)
[Gantt: p0 and p2 alternate; p0 finishes its 350 units at 1100]
TTRnd(p0) = 1100; TTRnd(p1) = 550; TTRnd(p3) = 950; TTRnd(p4) = 475
W(p0) = 0; W(p1) = 50; W(p2) = 100; W(p3) = 150; W(p4) = 200
62. Round Robin (TQ = 50, same service times)
[Gantt: p2 runs alone from 1100 and finishes its 475 units at 1275]
TTRnd(p0) = 1100; TTRnd(p1) = 550; TTRnd(p2) = 1275; TTRnd(p3) = 950; TTRnd(p4) = 475
W(p0) = 0; W(p1) = 50; W(p2) = 100; W(p3) = 150; W(p4) = 200
63. Round Robin (TQ = 50)
- Equitable
- Most widely used
- Fits naturally with an interval timer
[Gantt: full schedule, 50-unit quanta rotating among the remaining processes, 0-1275]
TTRnd(p0) = 1100; TTRnd(p1) = 550; TTRnd(p2) = 1275; TTRnd(p3) = 950; TTRnd(p4) = 475
W(p0) = 0; W(p1) = 50; W(p2) = 100; W(p3) = 150; W(p4) = 200
TTRnd_avg = (1100 + 550 + 1275 + 950 + 475)/5 = 4350/5 = 870
Wavg = (0 + 50 + 100 + 150 + 200)/5 = 500/5 = 100
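The whole round-robin schedule above can be reproduced by a short simulation (function name invented; no context-switch overhead, as on these slides):

```c
#include <assert.h>

#define N 5

/* Round robin over p0..p4, all arriving at time 0.
   W[i] = time before the first run; TTRnd[i] = completion time. */
void round_robin(const int t[N], int quantum, int W[N], int TTRnd[N]) {
    int rem[N];
    int started[N] = {0};
    for (int i = 0; i < N; i++) rem[i] = t[i];
    int clock = 0, left = N;
    while (left > 0) {
        for (int i = 0; i < N; i++) {       /* one rotation */
            if (rem[i] == 0) continue;      /* already finished */
            if (!started[i]) { W[i] = clock; started[i] = 1; }
            int run = rem[i] < quantum ? rem[i] : quantum;
            clock += run;
            rem[i] -= run;
            if (rem[i] == 0) { TTRnd[i] = clock; left--; }
        }
    }
}
```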
64. RR with Overhead = 10 (TQ = 50)
- Overhead must be considered
[Gantt: each 50-unit quantum is followed by a 10-unit context-switch overhead, stretching the schedule to 1535]
TTRnd(p0) = 1320; TTRnd(p1) = 660; TTRnd(p2) = 1535; TTRnd(p3) = 1140; TTRnd(p4) = 565
W(p0) = 0; W(p1) = 60; W(p2) = 120; W(p3) = 180; W(p4) = 240
TTRnd_avg = (1320 + 660 + 1535 + 1140 + 565)/5 = 5220/5 = 1044
Wavg = (0 + 60 + 120 + 180 + 240)/5 = 600/5 = 120
65. Multi-Level Queues
[Figure: new processes enter Ready List0; the Scheduler serves Ready List0 through Ready List3 in order; preemption or a voluntary yield returns a process to its ready list]
- All processes at level i run before any process at level j > i
- At a level, use another policy, e.g. RR
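The "level i before level j > i" rule is a scan from the highest-priority non-empty list. A sketch (types, capacities, and the array-shift dequeue are invented for illustration; within a level this is FCFS, but RR would work the same way):

```c
#include <assert.h>

#define LEVELS 4
#define QCAP 8

typedef struct {
    int pids[QCAP];
    int count;
} ReadyList;

/* Returns the pid to dispatch, or -1 if every list is empty. */
int pick_next(ReadyList rl[LEVELS]) {
    for (int level = 0; level < LEVELS; level++)   /* level i before j > i */
        if (rl[level].count > 0) {
            int pid = rl[level].pids[0];
            /* dequeue by shifting (a real scheduler would use a list) */
            for (int k = 1; k < rl[level].count; k++)
                rl[level].pids[k - 1] = rl[level].pids[k];
            rl[level].count--;
            return pid;
        }
    return -1;
}
```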
66. Contemporary Scheduling
- Involuntary CPU sharing -- timer interrupts
- Time quantum determined by the interval timer -- usually fixed for every process using the system
- Sometimes called the time-slice length
- Priority-based process (job) selection
  - select the highest-priority process
  - priority reflects policy
- With preemption
- Usually a variant of Multi-Level Queues
67. BSD 4.4 Scheduling
- Involuntary CPU sharing
- Preemptive algorithms
- 32 multi-level queues
  - queues 0-7 are reserved for system functions
  - queues 8-31 are for user-space functions
  - nice influences (but does not dictate) the queue level
68. Windows NT/2K Scheduling
- Involuntary CPU sharing across threads
- Preemptive algorithms
- 32 multi-level queues
  - the highest 16 levels are real-time
  - the next lower 15 are for system/user threads; the range is determined by the process base priority
  - the lowest level is for the idle thread
69. Bank Teller Simulation
[Figure: customer arrivals feed tellers T1, T2, ..., Tn: a model of the tellers at the bank]
70. Simulation Kernel Loop

simulated_time = 0;
while (true) {
    event = select_next_event();
    if (event->time > simulated_time)
        simulated_time = event->time;
    evaluate(event->function, ...);
}
71. Simulation Kernel Loop (2)

void runKernel(int quitTime) {
    Event *thisEvent;
    // Stop by running to the elapsed time, or by causing quit to execute
    if (quitTime <= 0)
        quitTime = 9999999;
    simTime = 0;
    while (simTime < quitTime) {
        // Get the next event
        if (eventList == NIL)
            break;                        // No more events to process
        thisEvent = eventList;
        eventList = thisEvent->next;
        simTime = thisEvent->getTime();   // Set the time
        // Execute this event
        thisEvent->fire();
        delete(thisEvent);
    }
}
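The loop above is a sketch; a self-contained C rendering of the same kernel, with a time-sorted event list, looks like this (Event, postEvent, and the step counter are invented stand-ins for the slide's Event class):

```c
#include <assert.h>
#include <stdlib.h>

typedef struct Event {
    int time;
    void (*fire)(struct Event *);
    struct Event *next;
} Event;

static Event *eventList = NULL;   /* kept sorted by time */
static int simTime = 0;
static int fired = 0;

static void countFire(Event *e) { (void)e; fired++; }

/* Insert an event, keeping the list ordered by time. */
void postEvent(int time) {
    Event *e = malloc(sizeof *e);
    e->time = time;
    e->fire = countFire;
    Event **p = &eventList;
    while (*p && (*p)->time <= time)
        p = &(*p)->next;
    e->next = *p;
    *p = e;
}

void runKernel(int quitTime) {
    if (quitTime <= 0)
        quitTime = 9999999;
    simTime = 0;
    while (simTime < quitTime && eventList != NULL) {
        Event *thisEvent = eventList;     /* next event in time order */
        eventList = thisEvent->next;
        simTime = thisEvent->time;        /* jump the clock forward */
        thisEvent->fire(thisEvent);
        free(thisEvent);
    }
}
```

Note that simulated time jumps from event to event; nothing advances the clock between them, which is what makes discrete-event simulation cheap.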
73. Simple State Diagram
[Figure: Start → Ready; Schedule moves Ready → Running; Request moves Running → Blocked; Allocate moves Blocked → Ready; Done exits from Running]
74. UNIX State Transition Diagram
[Figure: Start → Runnable; Schedule moves Runnable → Running; an I/O Request moves Running → Sleeping (or Uninterruptible Sleep), and I/O Complete returns it to Runnable; Running can also enter Traced or Stopped and Resume back to Runnable; Done moves Running → Zombie until the parent waits on it]
75. Windows NT Thread States
[Figure: CreateThread → Initialized; Reinitialize/Activate → Ready; Select → Standby; Dispatch → Running; Preempt returns Running → Ready; Wait → Waiting, and Wait Complete → Ready or Transition; Exit → Terminated]
76. Resources
- Resource: anything that a process can request and then be blocked on because that thing is not available.
- R = {Rj | 0 ≤ j < m} = resource types
- C = {cj ≥ 0 | ∀ Rj ∈ R (0 ≤ j < m)} = units of Rj available
- Reusable resource: after a unit of the resource has been allocated, it must ultimately be released back to the system, e.g. the CPU, primary memory, disk space. The maximum value of cj is the number of units of that resource.
- Consumable resource: there is no need to release a resource after it has been acquired, e.g. a message or input data. Notice that cj is unbounded.
77. Process Hierarchies
- Parent-child relationship may be significant: the parent controls its children's execution
[Figure: state diagram with Ready-Active, Ready-Suspended, Blocked-Active, and Blocked-Suspended states; Suspend and Activate move processes between the active and suspended variants, while Start, Schedule, Yield, Request, Allocate, and Done work as before]
78. UNIX Organization
[Figure: processes and libraries sit above the system-call interface; the monolithic kernel contains process description, deadlock handling, protection, synchronization, and the file, memory, and device managers; underneath are the CPU, memory, devices, and other hardware]
79. Windows NT Organization
[Figure: user-mode processes with threads (T) and libraries call into subsystems and the I/O subsystem; below them sit the NT Executive, the NT Kernel, and the Hardware Abstraction Layer, running on the processor(s), main memory, and devices]