1
Scheduler Activation
2
Parallelism Vehicles
  • Process
  • its separate address space is a major source of overhead
  • kernel threads
  • the kernel supports multiple threads per address space
  • no problem integrating with the kernel
  • but too heavyweight for parallel programs
  • cost is roughly 10× that of a user-level thread, yet well below that of a process
  • user-level threads
  • fast, but
  • the kernel knows nothing about them

3
User-Level Thread Package
  • managed by runtime library routines linked into each application program
  • requires no kernel intervention (see the sketch below)
  • efficient
  • cost < 10× the cost of a procedure call
  • flexible: can be customized to the needs of the language or the user
  • each process is viewed as a virtual processor
  • these virtual processors are multiplexed across real processors
  • may result in poor performance or incorrect behavior (e.g., deadlock caused by the absence of progress)
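
The "no kernel intervention" bullet can be made concrete with a tiny example: switching to a user-level thread is just a library operation. The sketch below uses the POSIX ucontext API as a stand-in; the actual thread package is not named in the slides, so this is only an analogy:

    #include <stdio.h>
    #include <ucontext.h>

    static ucontext_t main_ctx, thr_ctx;   /* contexts act as "virtual processors" */
    static char thr_stack[64 * 1024];      /* stack allocated in user space        */

    static void thread_body(void) {
        puts("user-level thread running");
        /* returning resumes main_ctx via uc_link; the switch is done by the
           runtime library, not by the kernel scheduler */
    }

    int main(void) {
        getcontext(&thr_ctx);                  /* initialize the new context   */
        thr_ctx.uc_stack.ss_sp   = thr_stack;
        thr_ctx.uc_stack.ss_size = sizeof thr_stack;
        thr_ctx.uc_link          = &main_ctx; /* where to resume on return    */
        makecontext(&thr_ctx, thread_body, 0);
        swapcontext(&main_ctx, &thr_ctx);      /* library-level "scheduling"   */
        puts("back in the library scheduler");
        return 0;
    }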

4
User-Level Thread Packages Built on Kernel-Level Threads
  • baseline: allocate as many kernel threads as there are processors allocated to the job
  • when a user thread blocks, its processor is wasted
  • if more kernel threads are allocated than processors, time slicing is needed
  • problems with busy-wait synchronization
  • scheduling of idle user threads
  • summary
  • the number of kernel threads a job needs changes
  • the number of processors allocated changes
  • with the scheduling between jobs
  • with the degree of parallelism

5
Scheduler Activation
  • Goal
  • performance at the level of user-level threads
  • tight integration with the kernel
  • Scheduler activations allow
  • a user-level thread package that schedules parallel threads
  • kernel-level threads that integrate well with the system
  • two-way interaction between the thread package and the kernel
  • scheduler activation (s_a):
  • a kernel-level thread that also runs in user space
  • needs two stacks, one kernel and one user (see the struct sketch below)
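
The "two stacks" point can be pictured with a small struct. This is only a conceptual sketch; the field names are assumptions made for illustration, not the kernel's actual data structure:

    #include <stdint.h>

    struct machine_state {                    /* registers saved on block/preempt   */
        uintptr_t pc;
        uintptr_t sp;
        uintptr_t regs[16];
    };

    struct scheduler_activation {
        int                  id;              /* activation number used in upcalls  */
        int                  cpu;             /* physical processor it runs on      */
        void                *kernel_stack;    /* used while executing in the kernel */
        void                *user_stack;      /* used while running upcall/user code */
        struct machine_state saved_state;     /* state handed up on block/preempt   */
    };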

6
Program Start
  [Figure: program start, showing the kernel, the CPUs, and the user-level thread package]
  1. the kernel creates an s_a (running on its kernel stack)
  2. the kernel assigns it to a CPU
  3. the kernel upcalls into the thread package using the s_a
  4. the s_a, now on its user stack, executes scheduler code inside the package, which initializes the s_a with a thread to run (a sketch of this handler follows)
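
A sketch of the scheduler code run in step 4; the kernel jumps here on the activation's user stack. The names (run_queue, struct tcb, resume) are assumptions made for illustration:

    #include <stddef.h>

    struct tcb { void (*resume)(struct tcb *); struct tcb *next; };
    static struct tcb *run_queue;             /* user-level ready queue */

    /* entry point the kernel upcalls into on the new activation */
    void upcall_add_this_processor(int cpu) {
        (void)cpu;
        struct tcb *t = run_queue;            /* pick a runnable user thread */
        if (t != NULL) {
            run_queue = t->next;
            t->resume(t);                     /* step 4: run it on this s_a  */
        }
        /* otherwise the package can tell the kernel the processor is idle
           (slides 7 and 12) */
    }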
7
Thread Creation/Deletion
  • when more processors are needed
  • ask the kernel for more processors (a system call)
  • the kernel allocates m CPUs (m ≤ n, where n is the number requested)
  • the kernel creates m s_a's
  • each s_a upcalls into the thread package using the CPU allocated to it
  • then the user-level scheduler starts to schedule its threads
  • when there is an idle processor
  • release it to the kernel (see the sketch after this list)
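
A minimal sketch of this request/release policy. The queries and downcalls below (runnable_thread_count, sys_add_more_processors, and so on) are placeholders for the calls slide 12 refers to, not real system-call names:

    extern int  runnable_thread_count(void);     /* assumed package query   */
    extern int  allocated_cpu_count(void);       /* assumed package query   */
    extern void sys_add_more_processors(int n);  /* placeholder downcall    */
    extern void sys_processor_is_idle(void);     /* placeholder downcall    */

    void maybe_request_processors(void) {
        int want = runnable_thread_count() - allocated_cpu_count();
        if (want > 0)
            sys_add_more_processors(want);       /* kernel may grant m <= n */
    }

    void on_cpu_idle(void) {
        sys_processor_is_idle();                 /* give the CPU back       */
    }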

8
Upcall Points
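
The body of this slide did not survive the transcript. Based on the events described on slides 6 through 11, the kernel-to-user upcall interface can be sketched roughly as follows; the names and signatures are illustrative, not the paper's exact table:

    /* events the kernel vectors into the thread package */
    struct cpu_state;   /* saved registers and PC of a blocked/preempted thread */

    void upcall_add_this_processor(int cpu);
    void upcall_processor_preempted(int cpu, int activation,
                                    const struct cpu_state *state);
    void upcall_activation_blocked(int activation,
                                   const struct cpu_state *state);
    void upcall_activation_unblocked(int activation,
                                     const struct cpu_state *state);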
9
I/O request/completion
  [Figure: I/O request and completion over times T1 to T4; the user-level runtime system sits above the kernel and its processors; the kernel adds a processor, notifies the runtime system when an s_a blocks (at T2), and notifies it again when the s_a unblocks (at T4)]
10
Thread Blocking (Time T2)
  • The thread manager in user space already has the thread's stack and TCB
  • At blocking time, the register and PC values are saved by the kernel
  • The kernel notifies the thread manager that the thread is blocked, passing the saved register values
  • it uses a new scheduler activation
  • on the CPU that was being used by the blocked thread
  • The thread manager saves the register values in the TCB and marks it as blocked (see the sketch below)
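
A sketch of how the thread manager might handle that notification; struct tcb and tcb_of are assumed names inside the thread package:

    #include <string.h>

    struct cpu_state { unsigned long pc, sp, regs[16]; }; /* saved by the kernel */
    enum thread_status { RUNNABLE, BLOCKED };
    struct tcb { enum thread_status status; struct cpu_state saved; };

    extern struct tcb *tcb_of(int activation);   /* assumed lookup in the package */

    /* invoked by the upcall, on a new s_a, on the CPU the blocked thread held */
    void upcall_activation_blocked(int activation, const struct cpu_state *state) {
        struct tcb *t = tcb_of(activation);
        memcpy(&t->saved, state, sizeof *state); /* keep registers/PC in the TCB */
        t->status = BLOCKED;
        /* the user-level scheduler can now run another thread on this CPU */
    }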

11
Processor Reallocation from A to B
  • The kernel sends an interrupt to CPU-a, which was being used by A
  • this stops the running thread (say Ti)
  • The kernel upcalls to B on CPU-a with a new scheduler activation
  • notifying B that a new CPU has been allocated
  • the thread manager of B schedules a thread on CPU-a
  • The kernel takes another CPU, CPU-b, away from A
  • suppose Tj was running on that CPU
  • The kernel upcalls to A, notifying it that two threads, Ti and Tj, have been preempted
  • The thread manager of A schedules a thread on CPU-b (see the sketch below)
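
A sketch of the last two steps: a single upcall on CPU-b tells A's thread manager that Ti and Tj were preempted, and the manager requeues them and schedules something else. ready_enqueue, pick_next, and run_on_this_cpu are assumed helpers:

    struct cpu_state { unsigned long pc, sp, regs[16]; };
    struct tcb;

    extern void        ready_enqueue(struct tcb *t, const struct cpu_state *s);
    extern struct tcb *pick_next(void);
    extern void        run_on_this_cpu(struct tcb *t);   /* does not return */

    void upcall_two_threads_preempted(struct tcb *ti, const struct cpu_state *si,
                                      struct tcb *tj, const struct cpu_state *sj)
    {
        ready_enqueue(ti, si);        /* both preempted threads become runnable  */
        ready_enqueue(tj, sj);
        run_on_this_cpu(pick_next()); /* A's manager schedules a thread on CPU-b */
    }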

12
System Calls
  • User-level programs notify the kernel of events such as
  • when more processors are needed
  • when processors are idle
  • System calls (errata: Table III)

13
Critical Sections
  • What if a thread holding a lock is preempted (or blocked)?
  • this problem is not intrinsic to scheduler activations
  • CPU is wasted by waiting threads
  • deadlock
  • e.g., a thread holding the ready-queue lock is preempted, and a scheduler activation then upcalls and needs to access the ready queue
  • Solution
  • on an upcall, the routine checks whether the preempted thread was running in a critical section (see the sketch after this list)
  • if so, it schedules the preempted thread on another CPU, preempting a thread that is not in a critical section
  • the resumed thread releases the CPU as soon as it exits the critical section
  • this scheme requires two kinds of critical-section code
  • a normal version
  • a preempted/resumed version
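
A sketch of the recovery check described above. in_critical_section and resume_until_section_exit are assumed helpers; the latter stands for running the preempted/resumed copy of the section, which gives up the CPU when the section ends:

    #include <stdbool.h>

    struct tcb;
    extern bool in_critical_section(const struct tcb *t);   /* assumed check */
    extern void resume_until_section_exit(struct tcb *t);   /* runs the copy */
    extern void ready_enqueue(struct tcb *t);

    /* called from an upcall for each preempted (or unblocked) user thread */
    void recover_preempted(struct tcb *t) {
        if (in_critical_section(t))
            resume_until_section_exit(t);  /* finish the section first, avoiding
                                              deadlock on e.g. the ready-queue lock */
        ready_enqueue(t);                  /* now safe to schedule it normally */
    }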

14
Continuation
15
Execution Model
  • Defines the behavior of a thread when it blocks while executing in the kernel
  • Process model
  • the Unix model
  • each thread keeps its own kernel stack even while it is blocked
  • increases the working set size of the kernel
  • increases cache/TLB misses
  • Interrupt model
  • there is only one stack in the kernel, used by the actively running thread
  • a blocking thread is responsible for storing its own context
  • Architectural bias
  • CISC favors the process model, while RISC is fairly unbiased

16
Continuation
  • a continuation is a function that will be executed when a thread is awakened
  • it is responsible for restoring the saved state
  • the blocking thread should store its state information, if needed
  • in a scratch area allocated to the thread
  • usage
  • a thread calls a blocking procedure with a continuation
  • the blocking procedure
  • if the argument (i.e., the continuation) is present
  • sets the return address to point to the continuation
  • puts the thread on the blocked queue
  • schedules another thread
  • if the argument is missing, it treats the thread as in the process model (see the sketch after this list)
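
A sketch of such a blocking procedure; the types and helpers (struct tcb, block_enqueue, schedule_another) are assumptions made for illustration:

    #include <stddef.h>

    struct tcb {
        void (*continuation)(struct tcb *);  /* runs when the thread is awakened      */
        long  scratch[8];                    /* small per-thread area for saved state */
    };

    extern void block_enqueue(struct tcb *t);
    extern void block_keeping_stack(struct tcb *t);   /* process-model fallback */
    extern void schedule_another(void);               /* does not return here   */

    void thread_block(struct tcb *self, void (*cont)(struct tcb *)) {
        if (cont != NULL) {
            self->continuation = cont;   /* the "return address" is the continuation */
            block_enqueue(self);         /* any needed state was saved by the caller */
            schedule_another();          /* this stack is no longer needed           */
        } else {
            block_keeping_stack(self);   /* treat like the process model             */
        }
    }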

17
Code Transformation with a Continuation
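
The code example on this slide did not survive the transcript. Below is a small, hypothetical reconstruction of the kind of transformation it likely showed, reusing the thread_block sketch from the previous slide: the process-model version keeps its stack across the block, while the continuation version saves its one live variable and blocks with a continuation.

    struct tcb { void (*continuation)(struct tcb *); long scratch[8]; };

    extern struct tcb *current;                 /* the running thread's TCB */
    extern void thread_block(struct tcb *self, void (*cont)(struct tcb *));
    extern long read_request(void);             /* assumed helpers          */
    extern void handle_request(long req);

    /* process model: the stack keeps 'req' alive while the thread sleeps */
    void serve_process_model(void) {
        long req = read_request();
        thread_block(current, NULL);
        handle_request(req);
    }

    /* continuation model: save 'req' in the scratch area, then block */
    static void serve_continued(struct tcb *self) {
        handle_request(self->scratch[0]);       /* restore the saved state */
    }

    void serve_continuation_model(void) {
        current->scratch[0] = read_request();
        thread_block(current, serve_continued); /* stack can be discarded  */
    }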
19
Benefits of continuation
  • reduced stack usage
  • state is saved in the continuation
  • stack handoff
  • reduces the working set size of the kernel
  • efficient control transfer
  • continuation recognition
  • a client examines the server's area where continuations are saved
  • if one is present, it performs a stack handoff to the server thread instead of sending a message (see the sketch below)
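
A sketch of the continuation-recognition idea, with assumed names (stack_handoff, send_message); it only illustrates the control flow, not a real IPC path:

    #include <stddef.h>

    struct message;
    struct tcb { void (*continuation)(struct tcb *); };

    extern void send_message(struct tcb *server, struct message *m);
    extern void stack_handoff(struct tcb *from, struct tcb *to); /* switch without queuing */

    void deliver(struct tcb *client, struct tcb *server, struct message *m) {
        if (server->continuation != NULL) {  /* server blocked with a known continuation */
            stack_handoff(client, server);   /* efficient control transfer               */
            server->continuation(server);    /* run the recognized continuation directly */
        } else {
            send_message(server, m);         /* fall back to normal message passing      */
        }
    }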