Processes and Threads
1
(No Transcript)
2
Scheduling
  • Bursts of CPU usage alternate with periods of I/O
    wait
  • a CPU-bound process
  • an I/O-bound process

3
Scheduling Goals
4
Scheduling in Interactive Systems (1)
  • Round Robin Scheduling
  • list of runnable processes
  • list of runnable processes after B uses up its
    quantum
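
A minimal sketch of the move-to-tail step that these two lists illustrate (the process ids and the number of simulated quanta are made-up values, not from the slides):

    #include <stdio.h>

    /* Round robin: the head of the runnable list runs for one quantum,
       then moves to the tail and the next process becomes the head. */
    int main(void) {
        int runnable[] = {1, 2, 3, 4};          /* e.g. B could be process 2 */
        int n = sizeof runnable / sizeof runnable[0];
        for (int t = 0; t < 3; t++) {           /* simulate three quanta */
            int running = runnable[0];
            for (int i = 0; i < n - 1; i++)     /* quantum used up: shift list */
                runnable[i] = runnable[i + 1];
            runnable[n - 1] = running;          /* old head goes to the tail */
            printf("ran %d, next up is %d\n", running, runnable[0]);
        }
        return 0;
    }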

5
Scheduling in Interactive Systems (2)
  • A scheduling algorithm with four priority classes

6
Scheduling in Real-Time Systems
  • Schedulable real-time system
  • Given
  • m periodic events
  • event i occurs with period Pi and requires Ci
    seconds of CPU time
  • Then the load can only be handled if
    Σ (Ci / Pi) ≤ 1  (sum over i = 1 .. m)
    (see the sketch below)
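
A minimal sketch of this admission test (the periods and CPU times are illustrative values, not from the slides):

    #include <stdbool.h>
    #include <stdio.h>

    /* Event i occurs with period p[i] and needs c[i] time units of CPU
       each time; the set is schedulable only if the total utilization
       sum(c[i] / p[i]) does not exceed 1. */
    static bool schedulable(int m, const double p[], const double c[]) {
        double load = 0.0;
        for (int i = 0; i < m; i++)
            load += c[i] / p[i];
        return load <= 1.0;
    }

    int main(void) {
        double p[] = {100.0, 200.0, 500.0};   /* periods (msec), illustrative */
        double c[] = { 50.0,  30.0, 100.0};   /* CPU per event (msec)         */
        printf("schedulable: %s\n", schedulable(3, p, c) ? "yes" : "no");
        return 0;
    }

Here the utilization is 0.5 + 0.15 + 0.2 = 0.85, so these three events fit on one CPU.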

7
Policy versus Mechanism
  • Separate what is allowed to be done with how it
    is done
  • a process knows which of its child threads are
    important and need priority
  • Scheduling algorithm parameterized
  • mechanism in the kernel
  • Parameters filled in by user processes
  • policy set by user process

8
Thread Scheduling (1)
  • Possible scheduling of user-level threads
  • 50-msec process quantum
  • threads run 5 msec/CPU burst

9
Thread Scheduling (2)
  • Possible scheduling of kernel-level threads
  • 50-msec process quantum
  • threads run 5 msec/CPU burst

10
Problems with Scheduling
  • Priority systems are ad hoc at best
  • highest priority always wins
  • Fair share implemented by adjusting priorities
    with a feedback loop
  • complex mechanism
  • Priority inversion: high-priority jobs can be
    blocked behind low-priority jobs
  • Schedulers are complex and difficult to control
  • What we need:
  • proportional sharing
  • dynamic flexibility
  • simplicity

11
Tickets in Lottery Scheduling
  • Priority determined by the number of tickets each
    process has
  • Scheduler picks winning ticket randomly, gives
    owner the resource
  • Tickets can be used for a wide variety of
    different resources (uniform) and are machine
    independent (abstract)

12
Performance Characteristics
  • If a client has probability p of winning, then its
    expected number of wins over n lotteries is np
    (see the worked example below)
  • Variance of the binomial distribution: np(1 - p)
  • Accuracy improves with √n
  • need frequent lotteries
  • Big picture: answers are mostly accurate, but
    short-term inaccuracies are possible
  • see Stride scheduling below
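
As a hedged numeric illustration: a client holding 25% of the tickets (p = 0.25) that competes in n = 100 lotteries expects np = 25 wins, with standard deviation √(np(1 − p)) = √18.75 ≈ 4.3. The relative error therefore shrinks roughly as 1/√n, which is why frequent lotteries are needed for short-term accuracy.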

13
Ticket Inflation
  • Make up your own tickets (print your own money)
  • Only works among mutually trusting clients
  • Presumably works best if inflation is temporary
  • Allows clients to adjust their priority
    dynamically with zero communication

14
Ticket Transfer
  • Basic idea: if you are blocked on someone else,
    give them your tickets (see the sketch below)
  • Example: a client-server RPC
  • Server has no tickets of its own
  • clients give the server all of their tickets during
    an RPC
  • server's priority is the sum of the priorities of
    all of its active clients
  • server can use lottery scheduling to give
    preferential service to high-priority clients
  • Very elegant solution to a long-standing problem
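
A minimal sketch of the transfer during an RPC (the struct and function names are invented for illustration):

    /* Ticket transfer: a client blocked on an RPC lends all of its
       tickets to the server, which then competes on the client's behalf. */
    struct task {
        int tickets;
    };

    /* Called when the client blocks on the server; returns the amount
       lent so it can be handed back when the call completes. */
    int rpc_begin(struct task *client, struct task *server) {
        int lent = client->tickets;
        server->tickets += lent;
        client->tickets = 0;
        return lent;
    }

    /* Called when the RPC returns: the server gives the tickets back. */
    void rpc_end(struct task *client, struct task *server, int lent) {
        server->tickets -= lent;
        client->tickets += lent;
    }

With several concurrent callers, the server's effective ticket count (and hence its priority) is the sum over all active clients, which is the behavior described above.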

15
Trust Boundaries
  • A group contains mutually trusting clients
  • A unique currency is used inside a group
  • simplifies mini-lotteries (e.g., for a mutex) inside
    a group
  • supports fine-grain allocation decisions
  • Exchange rate is needed between groups
  • effect of inflation can be localized to a group

16
Compensation tickets
  • What happens if a thread is I/O bound and blocks
    before its quantum expires?
  • the thread gets less than its share of the
    processor.
  • Basic idea: if you complete fraction f of the
    quantum, your tickets are inflated by 1/f until
    the next time you win (see the sketch below)
  • example: if B on average uses 1/5 of a quantum,
    its tickets will be inflated 5x, so it will win 5
    times as often and get its correct share overall
  • What if B alternates between 1/5 and whole
    quanta?
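
A minimal sketch of the compensation rule (the field and function names are invented for illustration):

    /* Compensation tickets: a client that used only fraction f of its
       quantum has its tickets inflated by 1/f until the next time it
       wins, so I/O-bound clients still get their overall CPU share. */
    struct client {
        int    base_tickets;     /* tickets the client actually owns     */
        double compensation;     /* multiplier; 1.0 when none is active  */
    };

    /* Called when the client yields or blocks after using 'used' out of
       'quantum' time units. */
    void charge_quantum(struct client *c, double used, double quantum) {
        double f = used / quantum;
        c->compensation = (f > 0.0 && f < 1.0) ? 1.0 / f : 1.0;
    }

    /* Tickets entered into the next lottery on this client's behalf. */
    double effective_tickets(const struct client *c) {
        return c->base_tickets * c->compensation;
    }

    /* Winning clears the compensation until the client blocks early again. */
    void reset_compensation(struct client *c) {
        c->compensation = 1.0;
    }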

17
Implementation
  • Frequent lotteries mean that lotteries must be
    efficient
  • a fast random number generator
  • fast selection of ticket based on random number
  • Ticket selection
  • straightforward algorithm: O(n) (see the sketch
    below)
  • tree-based implementation: O(log n)
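
A hedged sketch of the straightforward O(n) selection (the RNG call is a stand-in for a fast generator, and the ticket array is illustrative):

    #include <stdlib.h>

    /* Pick the lottery winner: draw a number in [0, total_tickets) and
       walk the per-client ticket counts until the running sum passes the
       draw.  This is O(n) in the number of clients; a tree over partial
       sums brings selection down to O(log n). */
    int pick_winner(const int tickets[], int n, int total_tickets) {
        int draw = rand() % total_tickets;   /* stand-in for a fast RNG */
        int sum = 0;
        for (int i = 0; i < n; i++) {
            sum += tickets[i];
            if (draw < sum)
                return i;                    /* index of the winning client */
        }
        return n - 1;   /* only reached if total_tickets overstates the sum */
    }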

18
Implementation: Ticket Object
19
Currency Graph
20
Problems
  • Not as fair as we'd like
  • mutex comes out 1.8:1 instead of 2:1
  • possible starvation
  • multimedia apps come out 1.92:1.50:1 instead of
    3:2:1
  • possible jitter
  • Every queue is an implicit scheduling decision...
  • Every spinlock ignores priority...
  • Can we force it to be unfair? Is there a way to
    use compensation tickets to get more time, e.g.,
    quit early to get compensation tickets and then
    run for the full time next time?
  • What about kernel cycles? If a process uses a lot
    of cycles indirectly, such as through the
    ethernet driver, does it get higher priority
    implicitly? (probably)

21
Stride Scheduling
  • Basic idea: make a deterministic version to
    reduce short-term variability
  • Mark time virtually using passes as the unit
  • A process has a stride, which is the number of
    passes between executions. Strides are inversely
    proportional to the number of tickets, so high
    priority jobs have low strides and thus run
    often.
  • Very regular: a job with priority p will run
    every 1/p passes

22
Stride Scheduling (cont'd)
  • Algorithm (roughly): always pick the job with the
    lowest pass number, then update its pass number by
    adding its stride (see the sketch below)
  • Similar mechanism to compensation tickets: if a
    job uses only fraction f of its quantum, update its
    pass number by f × stride instead of the full stride
  • Overall result: it is far more accurate than
    lottery scheduling, and the error can be bounded
    absolutely instead of probabilistically
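
A minimal sketch of the pass/stride bookkeeping (the constant and field names are illustrative, not the paper's):

    #define STRIDE1 (1 << 20)   /* large constant: stride = STRIDE1 / tickets */

    struct job {
        int           tickets;
        unsigned long stride;   /* inversely proportional to tickets        */
        unsigned long pass;     /* virtual time at which the job runs next  */
    };

    /* Pick the runnable job with the smallest pass value. */
    struct job *pick_next(struct job jobs[], int n) {
        struct job *best = &jobs[0];
        for (int i = 1; i < n; i++)
            if (jobs[i].pass < best->pass)
                best = &jobs[i];
        return best;
    }

    /* Charge the job for the quantum it just used: a full quantum
       advances pass by one stride, a fractional quantum f by f * stride. */
    void charge(struct job *j, double f) {
        j->pass += (unsigned long)(f * j->stride);
    }

High-ticket jobs get small strides, so their pass values advance slowly and they are picked more often; because the choice is deterministic, short-term error stays bounded instead of fluctuating as in the lottery.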

23
Stride Scheduling Example
24
Parallelism Vehicles
  • Process
  • source of overhead: address space
  • kernel threads
  • kernel supports multiple threads per address space
  • no problem integrating with the kernel
  • too heavyweight for parallel programs
  • 10 (user thread) < performance < 10 (process)
  • user-level threads
  • fast, but
  • the kernel knows nothing about threads

25
Scheduler Activation
  • Goal
  • performance at the level of user-level threads
  • tight integration with the kernel
  • Scheduler Activations allow
  • a user-level thread package that schedules parallel
    threads
  • kernel-level threads that integrate well with the
    system
  • two-way interaction between the thread package and
    the kernel
  • scheduler activation (s_a):
  • a kernel-level thread that also runs in user space
  • needs two stacks

26
Program Start
27
Thread Creation/Deletion
  • when more processors are needed,
  • ask the kernel for more processors (syscall)
  • the kernel allocates m CPUs (m < n)
  • the kernel creates m s_a's
  • each s_a upcalls using its allocated CPU
  • then, the user-level scheduler starts to schedule
    its threads
  • when there is an idle processor,
  • release it to the kernel

28
Upcall Points
29
I/O request/completion
(Figure: at times T1–T2 an s_a blocks on an I/O request
and the kernel adds a processor; at times T3–T4 the s_a
is unblocked on I/O completion. The panels show the
user-level runtime system, the kernel, and the
processors.)
30
Thread Blocking (Time T2)
  • The thread manager in user space already has the
    thread's stack and TCB
  • At blocking time, the register and PC values are
    saved by the kernel
  • The kernel notifies the thread manager that the
    thread is blocked, passing the saved register values
  • uses a new scheduler activation
  • uses the CPU that was being used by the blocked
    thread
  • The thread manager saves the register values in the
    TCB and marks the thread as blocked

31
Processor Reallocation (A → B)
  • The kernel sends an interrupt to CPU-a and stops
    the running thread (say Ti)
  • The kernel upcalls to B using the CPU-a with a
    new scheduler activation
  • notify B that a new CPU is allocated
  • The thread manager of B schedules a thread on
    CPU-a
  • The kernel then takes another CPU (CPU-b) away
    from A
  • suppose Tj was running on that CPU
  • The kernel upcalls to A notifying that two
    threads, Ti and Tj, have been preempted
  • The thread manager of A schedules a thread on
    CPU-b

32
System Calls
  • User-level programs notify the kernel of events
    such as
  • when more processors are needed
  • when processors are idle
  • System Calls (errata Table III)

33
Critical Sections
  • What if a thread holding a lock is preempted (or
    blocked)?
  • this problem is not intrinsic to scheduler
    activations
  • waste of CPU by waiting threads
  • deadlock
  • a thread holding a lock on the ready queue is
    preempted, and then a scheduler activation upcall
    tries to access the ready queue

34
Critical Sections (2)
  • Solution
  • on an upcall, the routine checks whether the
    preempted thread was running in a critical section
  • if so, schedule the preempted thread on another
    CPU by preempting a thread that is not in a
    critical section
  • the thread releases the CPU as soon as it exits
    the critical section
  • this scheme requires two versions of each critical
    section (see the sketch below)
  • normal version
  • preempted/resumed version
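
A hedged sketch of the check performed on the upcall path (the helpers are stubs invented for illustration; the real thread manager would resume the thread with its saved register state):

    #include <stdio.h>

    struct thread {
        int         in_critical_section;  /* set while the thread holds a lock */
        const char *name;
    };

    /* Stubs standing in for the user-level thread manager. */
    static void resume_until_lock_release(struct thread *t) {
        printf("run %s until it leaves its critical section\n", t->name);
    }
    static void enqueue_ready(struct thread *t) {
        printf("put %s back on the ready list\n", t->name);
    }

    /* Called from the preemption upcall before the upcall itself touches
       shared state such as the ready queue. */
    void handle_preempted(struct thread *t) {
        if (t->in_critical_section)
            resume_until_lock_release(t);   /* avoids deadlock and spinning */
        else
            enqueue_ready(t);
    }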