OS Structures Exokernel - PowerPoint PPT Presentation

1
OS Structures - Exokernel
  • Original slides by Kishore Ramachandran, adapted
    by Anda Iamnitchi

2
Exokernel
3
Exokernel approach
  • Separate protection from management
  • Export hardware resources securely
  • Secure binding
  • Visible resource revocation
  • Abort protocol

4
Exokernel design
5
Exokernel tasks
  • Track ownership
  • Guard all resources through bind points
  • Revoke access to resources

6
Design principle
  • Expose hardware (securely)
  • Expose allocation
  • Expose names
  • Expose revocation

7
Secure binding
  • Decouples authorization from use
  • Allows kernel to protect resources without
    understanding their semantics
  • Example: TLB entry
  • Virtual-to-physical mapping performed in the
    library (above the exokernel)
  • Binding loaded into the kernel, used multiple
    times
  • Example: packet filter
  • Predicates loaded into the kernel
  • Checked on each packet arrival
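The packet-filter binding above can be sketched as a predicate that a library OS downloads into the kernel and that the kernel evaluates on every arriving packet. This is a minimal sketch in C; the function type, names, and header offsets are assumptions for illustration, not the actual Exokernel filter interface.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* A downloaded filter is just a predicate over raw packet bytes.
   The kernel runs each installed predicate on packet arrival to
   decide which library OS the packet belongs to. */
typedef int (*pkt_filter_t)(const uint8_t *pkt, size_t len);

/* Example predicate: accept UDP datagrams addressed to port 7777,
   assuming a minimal 20-byte IPv4 header with no options. */
int my_udp_filter(const uint8_t *pkt, size_t len)
{
    if (len < 20 + 8)
        return 0;                      /* too short for IP + UDP   */
    if ((pkt[0] >> 4) != 4)
        return 0;                      /* not IPv4                 */
    if (pkt[9] != 17)
        return 0;                      /* IP protocol is not UDP   */
    uint16_t dport = (uint16_t)((pkt[22] << 8) | pkt[23]);
    return dport == 7777;              /* our port?                */
}
```

Because the predicate runs inside the kernel, demultiplexing needs no upcall for non-matching packets; authenticity is preserved by letting only trusted library OSes download such code.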

8
Implementing secure bindings
  • Hardware mechanisms
  • Capability for physical pages of a file
  • Frame buffer regions (SGI)
  • Software caching
  • Exokernel keeps a large software TLB overlaying
    the hardware TLB
  • Downloading code into kernel
  • Avoid expensive boundary crossings

9
Examples of secure binding
  • Physical memory allocation (hardware supported
    binding)
  • Library allocates physical page
  • Exokernel records the allocator and the
    permissions, and returns a capability (an
    encrypted cipher)
  • Every access to this page by the library requires
    this capability
  • Page fault
  • Kernel fields it
  • Kicks it up to the library
  • Library allocates a page, gets an encrypted
    capability
  • Library calls the kernel to enter a particular
    translation into the TLB
  • by presenting the capability
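The capability scheme above can be sketched as follows. The slide only says the capability is an encrypted value, so the mixing function, field layout, and names here are assumptions; a real kernel would use a proper keyed MAC.

```c
#include <assert.h>
#include <stdint.h>

/* Kernel-private secret; a real kernel would pick this randomly
   at boot. */
static const uint64_t KERNEL_KEY = 0x9e3779b97f4a7c15ULL;

/* A capability binds (physical page, owner, permissions) under
   the kernel key.  The mix below stands in for encryption. */
uint64_t cap_make(uint32_t page, uint32_t owner, uint32_t perms)
{
    uint64_t x = ((uint64_t)page << 40) ^ ((uint64_t)owner << 16)
               ^ perms ^ KERNEL_KEY;
    x *= 0xff51afd7ed558ccdULL;        /* diffuse the bits         */
    x ^= x >> 33;
    return x;
}

/* On each access the kernel just recomputes and compares, so
   checking a secure binding is cheap: no table lookup needed. */
int cap_check(uint64_t cap, uint32_t page, uint32_t owner,
              uint32_t perms)
{
    return cap == cap_make(page, owner, perms);
}
```

This shows the point of secure binding: the kernel can validate access without understanding what the library is using the page for.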

10
  • Download code into kernel to establish secure
    binding
  • Packet filter for demultiplexing network packets
  • How to ensure authenticity?
  • Only trusted servers (library OS) can download
    code into the kernel
  • Other use of downloaded code
  • Execute code on behalf of an app that is not
    currently scheduled
  • E.g. application handler for garbage collection
    could be installed in the kernel

11
Visible resource revocation
  • Most resources are visibly revoked
  • E.g. processor, physical page
  • Library can then perform necessary action before
    relinquishing the resource
  • E.g. needed state saving for a processor
  • E.g. update of page table

12
Abort protocol
  • Repossession exception passed to the library OS
  • Repossession vector
  • Gives info to the library OS as to what was
    repossessed so that corrective action can be
    taken
  • Library OS can seed the vector to enable
    exokernel to autosave (e.g. disk blocks to which
    a physical page being repossessed should be
    written to)
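The repossession machinery can be sketched with two small structures: the exception record passed up to the library OS, and the vector entries the library seeds in advance. All names and fields here are illustrative; the slides give no concrete layouts.

```c
#include <assert.h>
#include <stdint.h>

/* What the exokernel reports when it repossesses a resource, so
   the library OS can take corrective action. */
struct repossess_info {
    enum { RES_PHYS_PAGE, RES_CPU_SLICE } kind;
    uint32_t id;              /* e.g. the physical page number     */
};

/* One seeded entry: if this page is repossessed before the
   library can react, the exokernel autosaves it to this disk
   block on the library's behalf. */
struct repossess_vector_entry {
    uint32_t page;            /* physical page being guarded       */
    uint32_t disk_block;      /* autosave destination              */
};
```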

13
Aegis: an exokernel
14
Aegis processor time slice
  • Linear vector of time slots
  • Round robin
  • An application can mark its position in the
    vector for scheduling
  • Timer interrupt
  • Beginning and end of time slices
  • Control transferred to library specified handler
    for actual saving/restoring
  • Time to save/restore is bounded
  • Penalty? Loss of a time slice next time!
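The slot vector and its round-robin scan might look like this; the vector size, the use of 0 for a free slot, and the function names are assumptions for the sketch.

```c
#include <assert.h>
#include <stddef.h>

#define NSLOTS 32

/* Linear vector of time slots: slot[i] holds the id of the
   environment that marked position i, or 0 if the slot is free. */
int slot[NSLOTS];
size_t cur;

/* Called on the timer interrupt at the end of a slice: advance
   round-robin to the next marked slot.  Returns the environment
   to run next, or 0 if nothing is marked. */
int next_env(void)
{
    for (size_t i = 0; i < NSLOTS; i++) {
        cur = (cur + 1) % NSLOTS;
        if (slot[cur] != 0)
            return slot[cur];
    }
    return 0;
}
```

An environment that wants two adjacent quanta can mark neighboring positions in the vector, which is the control the "mark its position" bullet refers to.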

15
Aegis processor environments
  • Exception context
  • Program generated
  • Interrupt context
  • External, e.g. timer
  • Protected entry context
  • Cross domain calls
  • Addressing context
  • Guaranteed mappings implemented by software TLB
    mimicking the library OS page table

16
Aegis performance
17
Aegis - Address translation
  • On TLB miss
  • Kernel installs the hardware TLB entry from the
    software TLB for guaranteed mappings
  • Otherwise application handler called
  • Application establishes mapping
  • TLB entry with associated capability presented to
    the kernel
  • Kernel installs and resumes execution of the
    application
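The miss path above can be sketched as: consult the kernel's software TLB first (which holds the guaranteed mappings), and only on a miss there call up to the application's handler. Names, sizes, and the direct-mapped layout are assumptions; the capability presented along with the TLB entry is omitted.

```c
#include <assert.h>
#include <stdint.h>

#define STLB_SIZE 1024

struct mapping { uint32_t vpn, pfn, valid; };

/* Kernel's software TLB, overlaying the hardware TLB. */
struct mapping stlb[STLB_SIZE];

/* On a hardware TLB miss: fast path through the software TLB,
   slow path through the library OS handler, whose result is
   cached before resuming the application. */
uint32_t tlb_miss(uint32_t vpn, uint32_t (*app_handler)(uint32_t))
{
    struct mapping *m = &stlb[vpn % STLB_SIZE];
    if (m->valid && m->vpn == vpn)
        return m->pfn;                 /* guaranteed mapping        */
    uint32_t pfn = app_handler(vpn);   /* library establishes it    */
    m->vpn = vpn; m->pfn = pfn; m->valid = 1;
    return pfn;
}

/* Demo handler standing in for the library OS page table:
   pretend virtual page v maps to physical page v + 100. */
uint32_t demo_handler(uint32_t vpn) { return vpn + 100; }
```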

18
ExOS library OS
  • IPC abstraction
  • VM
  • Remote communication using ASH (application
    specific safe handlers)
  • Takeaway
  • significant performance improvement possible
    compared to a monolithic implementation

19
On Micro-kernel Construction
20
Key points of the paper
  • Microkernel should provide minimal abstractions
  • Address space, threads, IPC
  • Abstractions machine-independent, but
    implementation hardware-dependent for performance
  • Myths about inefficiency of micro-kernel stem
    from inefficient implementation and NOT from
    microkernel approach

21
What abstractions?
  • Determining criterion
  • Functionality not performance
  • Hardware and microkernel should be trusted but
    applications are not
  • Hardware provides page-based virtual memory
  • Kernel builds on this to provide protection for
    services above and outside the microkernel
  • Principles of independence and integrity
  • Subsystems independent of one another
  • Integrity of channels between subsystems
    protected from other subsystems

22
Microkernel Concepts
  • Hardware provides address space
  • mapping from virtual page to a physical page
  • implemented by page tables and TLB
  • Microkernel concept of address spaces
  • Hides the hardware address spaces and provides an
    abstraction that supports
  • Grant
  • Map
  • Flush
  • These primitives allow building a hierarchy of
    protected address spaces
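A toy model of the three primitives, with address spaces reduced to arrays of physical-page numbers. This is only a sketch: a real L4 kernel tracks how each mapping was derived in a mapping database, whereas flush here revokes by value, and all names are invented.

```c
#include <assert.h>
#include <stdint.h>

#define NSPACES 4
#define NPAGES  64
#define NIL     0

/* space[a][v] = physical page mapped at virtual page v in
   address space a (0 = nothing mapped). */
uint32_t space[NSPACES][NPAGES];

/* map: the mapper keeps the page and the receiver gets it too. */
void map(int from, uint32_t vf, int to, uint32_t vt)
{
    space[to][vt] = space[from][vf];
}

/* grant: the granter gives the page away and loses it. */
void grant(int from, uint32_t vf, int to, uint32_t vt)
{
    space[to][vt] = space[from][vf];
    space[from][vf] = NIL;
}

/* flush: the mapper revokes this page from the other spaces. */
void flush(int owner, uint32_t v)
{
    uint32_t p = space[owner][v];
    for (int a = 0; a < NSPACES; a++)
        if (a != owner)
            for (uint32_t i = 0; i < NPAGES; i++)
                if (space[a][i] == p)
                    space[a][i] = NIL;
}
```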

23
Address spaces
(Diagram: address spaces A1, A2, A3 with per-page mappings,
illustrating map, which copies a mapping into another space;
grant, which transfers a mapping and removes it from the granter;
and flush, which revokes mappings previously handed out.)
24
  • Power and flexibility of address spaces
  • Initial memory manager for address space A0
    appears by magic and encompasses the physical
    memory
  • Allow creation of stackable memory managers (all
    outside the kernel)
  • Pagers can be part of a memory manager or outside
    the memory manager
  • All address space changes (map, grant, flush)
    orchestrated via kernel for protection
  • Device driver can be implemented as a special
    memory manager outside the kernel as well

25
(Diagram: memory managers M0, M1, M2, each with its own address
space and page table, stacked via map/grant on top of the
microkernel and the processor.)
26
Threads and IPC
  • A thread executes in an address space
  • PC, SP, processor registers, and state info (such
    as address space)
  • IPC is cross address space communication
  • Supported by the microkernel
  • Classic method is message passing between threads
    via the kernel
  • Sender sends info; receiver decides if it wants
    to receive it, and if so where
  • Address space operations such as map, grant,
    flush need IPC
  • Higher level communication (e.g. RPC) built on
    top of basic IPC
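The classic kernel-mediated message passing the slide describes might look like this in miniature; the struct layout and the willing/refuse convention are invented for the sketch.

```c
#include <assert.h>
#include <string.h>

/* Receiver-side state the kernel consults: whether the thread
   has posted a receive, and where the message should land. */
struct thread {
    int    willing;           /* receiver has posted a receive     */
    void  *recv_buf;          /* ...and where the message goes     */
    size_t recv_cap;          /* buffer capacity                   */
};

/* Kernel path for a send: the receiver decides if it wants the
   message (willing) and where (recv_buf); one copy via kernel.
   Returns 0 on delivery, -1 if the receiver refuses. */
int ipc_send(struct thread *dst, const void *msg, size_t len)
{
    if (!dst->willing || len > dst->recv_cap)
        return -1;
    memcpy(dst->recv_buf, msg, len);
    dst->willing = 0;         /* rendezvous consumed               */
    return 0;
}
```

Higher-level protocols such as RPC are then just conventions layered on this send/receive pair.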

27
  • Interrupts?
  • Each hardware device is a thread from the
    kernel's perspective
  • Interrupt is a null message from a hardware
    thread to the software thread
  • Kernel transforms hardware interrupt into a
    message
  • Does not know or care about the semantics of the
    interrupt
  • Device specific interrupt handling outside the
    kernel
  • Clearing hardware state (if privileged) is then
    carried out by the kernel upon the driver
    thread's next IPC
  • TLB handler?
  • In theory software TLB handler can be outside the
    microkernel
  • In practice first level TLB handler inside the
    microkernel or in hardware
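The interrupt-to-message transformation can be sketched as the kernel posting a payload-free flag on the driver thread's channel; the per-IRQ mailboxes and function names are assumptions for the sketch.

```c
#include <assert.h>

#define MAX_IRQ 16

/* One pending null message per IRQ line; the kernel neither
   knows nor cares what the interrupt means. */
int irq_pending[MAX_IRQ];

/* Kernel entry on a hardware interrupt: transform it into a
   null message for the driver thread, nothing more. */
void kernel_irq_entry(int irq)
{
    if (irq >= 0 && irq < MAX_IRQ)
        irq_pending[irq] = 1;
}

/* Driver thread, at user level: receive the null message and do
   all device-specific handling outside the kernel.  Returns 1 if
   an interrupt had arrived since the last receive. */
int driver_receive(int irq)
{
    int was = irq_pending[irq];
    irq_pending[irq] = 0;
    return was;
}
```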

28
Unique IDs
  • Kernel provides unique IDs over space and time for
  • Threads
  • IPC channels

29
Breaking some performance myths
  • Kernel user switches
  • Address space switches
  • Thread switches and IPC
  • Memory effects
  • Base system
  • 486 (50 MHz), 20 ns cycle time

30
Kernel-user switches
  • Machine instruction for entering and exiting
  • 107 cycles
  • Mach measures 900 cycles for kernel-user switch
  • Why?
  • Empirical proof
  • L3 kernel: 123 cycles (accounting for some TLB,
    cache misses)
  • Where did the remaining 800 cycles go in Mach?
  • Kernel overhead (construction of the kernel, and
    inherent in the approach)

31
Address space switches
  • TLBs
  • Instruction and data caches
  • Usually physically tagged in modern processors,
    so a TLB flush does not affect them
  • Address space switch
  • Complete reload of the Pentium TLB: 864 cycles

32
  • Do we need a TLB flush always?
  • Implementation issue of protection domains
  • Liedtke suggests similar approach in the
    microkernel in an architecture-specific manner
  • PowerPC: use segment registers => no flush
  • Pentium or 486: share the linear hardware address
    space among several user address spaces => no
    flush
  • There are some caveats in terms of the size of a
    user space and how many can be packed into a 2^32
    global space

33
  • Conclusions
  • Address space switching among medium or small
    protection domains can ALWAYS be made efficient
    by careful construction of the microkernel
  • Large address spaces switches are going to be
    expensive ALWAYS due to cache effects and TLB
    effects, so switching cost is not the most
    critical issue

34
Thread switches and IPC
35
Segment switch (instead of AS switch) makes cross
domain calls cheap
36
Memory effects
37
Capacity-induced MCPI
38
Portability vs. Performance
  • Microkernel on top of abstract hardware while
    portable
  • Cannot exploit hardware features
  • Cannot take precautions to avoid performance
    problems specific to an arch
  • Incurs performance penalty due to abstract layer

39
Examples of non-portability
  • Same processor family
  • Use address space switch implementation
  • TLB flush method preferable for 486
  • Segment register switch preferable for Pentium
  • => 50% change of the microkernel!
  • IPC implementation
  • Details of the cache layout (associativity)
    requires different handling of IPC buffers in 486
    and Pentium
  • Incompatible processors
  • Exokernel on R4000 (tagged TLB) vs. 486 (untagged
    TLB)
  • => Microkernels are inherently non-portable

40
Summary
  • Minimal set of abstractions in microkernel
  • Microkernels are processor specific (at least in
    implementation) and non-portable
  • Right abstractions and processor-specific
    implementation leads to efficient
    processor-independent abstractions at higher
    layers