Operating System Support for Virtual Machines (PPT transcript)

Transcript and Presenter's Notes
1
Operating System Support for Virtual Machines
  • Samuel King, George Dunlap, Peter Chen
  • University of Michigan

Presented by Ashish Gupta
2
Overview
  • Motivation
  • Classification of VMs
  • Advantages of Type II VMs
  • UMLinux and how it exploits Linux capabilities
  • How UMLinux works
  • The three bottlenecks and their solutions
  • Performance results
  • Conclusion: modifying the host OS helps!

3
Two classifications for VMs
1. By higher-level interface: how closely the interface the VM exposes matches the underlying hardware
  • Same as the hardware: VM/370, VMware, VAX VMM Security Kernel
  • Slightly different: VMware with Guest Tools, UMLinux, SimOS, Xen, Denali
  • Very different: u-kernels, JVM
4
Two classifications for VMs
2. By underlying platform
  • Type I (runs directly on hardware): VM/370, VMware ESX, Disco, Denali, Xen
  • Type II (runs on a host OS): SimOS, UMLinux
  • In between (hosted): VMware Workstation, VirtualPC
5
UMLinux
  • Higher-level interface is slightly different from the hardware
  • Guest OS needs to be modified:
  • Simple device drivers added
  • Emulation of certain instructions (iret and in/out)
  • Kernel re-linked to a different address
  • About 17,000 lines of changes
  • ptrace-based virtualization (see the sketch below):
  • Intercepts guest system calls
  • Tracks guest kernel/user transitions
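A minimal sketch of ptrace-based system-call interception, the mechanism the slide names. This is illustrative only: UMLinux's actual machinery is far more involved, and ./guest-kernel is a hypothetical binary standing in for the guest machine process.

```c
#include <stdio.h>
#include <sys/ptrace.h>
#include <sys/types.h>
#include <sys/user.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t guest = fork();
    if (guest == 0) {
        /* Guest machine process: ask to be traced, then run the guest. */
        ptrace(PTRACE_TRACEME, 0, NULL, NULL);
        execl("./guest-kernel", "guest-kernel", (char *)NULL); /* hypothetical */
        _exit(1);
    }
    int status;
    waitpid(guest, &status, 0);              /* child stops at the exec */
    while (!WIFEXITED(status)) {
        /* Run the guest until the next system-call entry or exit. */
        ptrace(PTRACE_SYSCALL, guest, NULL, NULL);
        waitpid(guest, &status, 0);
        if (WIFSTOPPED(status)) {
            struct user_regs_struct regs;
            ptrace(PTRACE_GETREGS, guest, NULL, &regs);
            /* A VMM would redirect the call to the guest kernel here
               instead of letting the host OS service it. (x86-64: the
               syscall number is in orig_rax; stops on entry and exit.) */
            fprintf(stderr, "guest syscall %lld\n", (long long)regs.orig_rax);
        }
    }
    return 0;
}
```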

6
Advantage of Type II VMs
Virtual hardware is built from host abstractions:
  • Virtual CPU: the guest machine process
  • Virtual I/O devices: host files and devices
  • Virtual interrupts: host signals
  • Virtual MMU: mmap and munmap
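A minimal sketch of the virtual-MMU row of this mapping, under these assumptions: the guest's "physical" memory is backed by a host file (the file name, sizes, and addresses here are hypothetical), and guest page-table updates become mmap/munmap calls.

```c
#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>

#define GUEST_PHYS_SIZE (64UL << 20)   /* 64 MB of guest "physical" memory */

int main(void) {
    /* Hypothetical backing file playing the role of guest physical RAM. */
    int fd = open("guest-phys.img", O_RDWR | O_CREAT, 0600);
    ftruncate(fd, GUEST_PHYS_SIZE);

    /* "Map in" one guest page: a guest virtual page is bound to an offset
       in the file that acts as a guest physical page. */
    void *gva = mmap((void *)0x40000000, 4096, PROT_READ | PROT_WRITE,
                     MAP_SHARED | MAP_FIXED, fd, 0 /* guest phys page 0 */);

    /* "Unmapping" a page models clearing a guest page-table entry. */
    munmap(gva, 4096);
    close(fd);
    return 0;
}
```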
7
The problem
8
Compiling the Linux Kernel
(Benchmark chart; the host OS modifications described later total 510 lines.)
10
Optimization One: System Calls
12
Lots of context switches between the VMM and the guest machine process
13
Use the VMM as a kernel module: a modification to the host OS, too
16
Optimization Two: Memory Protection
17
Frequent switching between Guest Kernel and Guest
application
18
Guest Kernel to Guest User
19
Guest User to Guest Kernel
Both transitions go through mmap, munmap, and mprotect: very expensive
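A minimal sketch of the unoptimized transition cost this slide describes, assuming a hypothetical layout in which the guest kernel occupies a fixed range at the top of the guest machine process's address space.

```c
#include <sys/mman.h>

/* Hypothetical layout: guest kernel memory inside the guest process. */
#define GUEST_KERNEL_BASE ((void *)0x70000000UL)
#define GUEST_KERNEL_SIZE (0x10000000UL)

/* Entering guest kernel mode: make guest-kernel memory accessible. */
static void enter_guest_kernel(void) {
    mprotect(GUEST_KERNEL_BASE, GUEST_KERNEL_SIZE, PROT_READ | PROT_WRITE);
}

/* Returning to guest user mode: hide guest-kernel memory so the guest
   application cannot touch it. One system call per transition. */
static void enter_guest_user(void) {
    mprotect(GUEST_KERNEL_BASE, GUEST_KERNEL_SIZE, PROT_NONE);
}

int main(void) {
    /* Reserve the hypothetical guest-kernel range so mprotect has a target. */
    mmap(GUEST_KERNEL_BASE, GUEST_KERNEL_SIZE, PROT_NONE,
         MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED, -1, 0);
    enter_guest_kernel();   /* a guest system call or interrupt arrives */
    enter_guest_user();     /* the guest kernel returns to the application */
    return 0;
}
```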
20
Host Linux Memory Management
  • x86 paging provides built-in protection for memory pages
  • Linux uses page tables for translation and protection
  • Segments are used only to switch between privilege levels
  • The supervisor bit disallows ring 3 from accessing certain pages

The idea: segment-bounds features are relatively unused
21
Solution: change the segment bounds on each mode switch, so that in guest user mode the guest kernel's memory lies outside the accessible segment and no mprotect calls are needed
24
Optimization Three: Context Switching
25
  • The problem with context switching: have to remap the user process's virtual memory to the virtual "physical" memory
  • Generates a large number of mmaps: costly
  • The solution: allow one process to maintain multiple address spaces
  • Each address space gets a different set of page tables
  • A new system call, switchguest, invoked whenever the guest OS context-switches (see the sketch below)
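A user-space model of the switchguest idea: keep one page-table root per guest address space and install a different root on a context switch instead of unmapping and remapping everything. All names and the fixed limit are illustrative, not the paper's actual host-kernel code.

```c
#include <stdio.h>

#define MAX_ASPACES 64                 /* hypothetical fixed limit */

struct guest_aspace {
    unsigned long pgd_phys;            /* physical address of page-table root */
};

static struct guest_aspace spaces[MAX_ASPACES];
static int current_aspace;

/* Stand-in for installing a page-table root (e.g. writing CR3 on x86). */
static void load_page_table_root(unsigned long pgd_phys) {
    printf("install page-table root %#lx\n", pgd_phys);
}

/* Model of the new system call: switch to a prebuilt address space. */
int switchguest(int aspace_id) {
    if (aspace_id < 0 || aspace_id >= MAX_ASPACES)
        return -1;                     /* would be -EINVAL in a real kernel */
    current_aspace = aspace_id;
    load_page_table_root(spaces[aspace_id].pgd_phys);
    return 0;
}

int main(void) {
    return switchguest(1);             /* guest OS switches to process 1 */
}
```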

26
Multiple Page Table Sets
(Diagram: guest process a and the guest OS each have their own page-table set in the host operating system; the page-table pointer selects the active one.)
28
Conclusion
  • A Type II VMM CAN be as fast as a Type I, by modifying the host OS
  • Is the title of the paper justified?

29
Virtualizing I/O Devices on VMware Workstation's Hosted VMM
  • Jeremy Sugerman, Ganesh Venkitachalam and
    Beng-Hong Lim
  • VMware, Inc.

30
Introduction
  • VM definition from IBM: "a virtual machine is a fully protected and isolated copy of the underlying physical machine's hardware"
  • The choice for a hosted architecture:
  • Relies upon the host OS for device support
  • Primary advantages:
  • Copes with the diversity of hardware
  • Compatible with pre-existing PC software
  • Near-native performance for CPU-intensive workloads

32
The major tradeoff
  • I/O performance degradation
  • I/O emulation is done in the host world
  • Switching between the host world and the VMM world is expensive

33
How I/O works
(Diagram: the VMApp is the application portion and the VMDriver the privileged portion in the host world; the VMM runs in its own world and virtualizes the CPU. I/O requests and hardware interrupts cross between the two worlds through the VMDriver.)
34
I/O Virtualization
  • The VMM intercepts all I/O operations, usually privileged IN and OUT instructions
  • Emulated either in the VMM or in the VMApp
  • Host OS drivers understand the semantics of port I/O; the VMM doesn't
  • Physical hardware I/O must be handled in the host OS
  • Lots of overhead from world switching
  • Which devices are affected?
  • Those where the CPU saturates before the I/O device does

35
The goal of this paper
(Charts: move the bottleneck from the CPU back to the I/O device, so I/O-bound workloads are limited by I/O rather than by virtualization overhead on the CPU.)
36
The Network Card
  • The virtual NIC appears as a full-fledged PCI Ethernet controller, with its own MAC address
  • Connectivity is implemented by a VMNet driver loaded into the host OS
  • The virtual NIC is a combination of code in the VMM and the VMApp
  • Virtual I/O ports and virtual IRQs

38
Sending a Packet
(Diagram: the transmit path crosses from the VMM world into the host world.)
39
Receiving a Packet
(Diagram: the receive path crosses between the host world and the VMM world.)
40
Experimental Setup
Nettest throughput tests
41
Time profiling
  • Extra work:
  • Switching worlds for every I/O instruction is the most expensive
  • An I/O interrupt for every packet sent and received
  • VMM, host, and guest interrupt handlers are all run!
  • Packet transmission goes through two device drivers
  • A packet copy on transmit

42
Optimization One
  • Primary aim: reduce world switches
  • Idea: only a third of the I/O instructions trigger packet transmission; emulate the rest in the VMM
  • The Lance NIC's address-port I/O has memory semantics
  • I/O becomes a MOV!
  • Strips away several layers of virtualization (see the sketch below)
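A sketch of why memory semantics matter, using a hypothetical pair of emulation handlers: reading the Lance register-address port just returns the last value written, so the VMM can emulate IN/OUT to it with a plain load/store and skip the world switch entirely.

```c
#include <stdint.h>

static uint16_t lance_rap;   /* virtual register-address port: pure state */

/* OUT to the address port: just remember which register is selected. */
void emulate_out_rap(uint16_t value) { lance_rap = value; }

/* IN from the address port: return the last value written. No device
   side effects, so no need to reach the host world at all. */
uint16_t emulate_in_rap(void) { return lance_rap; }

int main(void) {
    emulate_out_rap(0);                     /* guest selects CSR0 */
    return emulate_in_rap() == 0 ? 0 : 1;   /* read-back sees the same value */
}
```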

43
Optimization Two
  • Very high interrupt rate for data transfer
  • When does a world switch occur?
  • A packet is to be transmitted
  • A real interrupt occurs, e.g. a timer interrupt
  • The idea: piggyback the packet interrupts on the real interrupts
  • Queue the packets in a ring buffer
  • Transmit all buffered packets on the next switch (see the sketch below)
  • Works well for I/O-intensive workloads
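A sketch of the queue-and-flush idea with illustrative names (not VMware's code): outgoing packets accumulate in a ring, and the whole ring is drained on the next world switch that happens anyway.

```c
#include <stdio.h>

#define RING_SIZE 256

struct pkt { const void *data; unsigned len; };
static struct pkt ring[RING_SIZE];
static unsigned head, tail;

/* Called by the virtual NIC instead of forcing a world switch per packet. */
int queue_tx(const void *data, unsigned len) {
    if ((head + 1) % RING_SIZE == tail)
        return -1;                       /* ring full: caller must flush now */
    ring[head] = (struct pkt){ data, len };
    head = (head + 1) % RING_SIZE;
    return 0;
}

/* Called once we are in the host world for some other reason. */
void flush_tx(void (*send)(const void *, unsigned)) {
    while (tail != head) {
        send(ring[tail].data, ring[tail].len);
        tail = (tail + 1) % RING_SIZE;
    }
}

static void demo_send(const void *data, unsigned len) {
    (void)data;
    printf("send %u bytes\n", len);
}

int main(void) {
    queue_tx("hello", 5);
    queue_tx("world", 5);
    flush_tx(demo_send);   /* pretend a timer interrupt switched worlds */
    return 0;
}
```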

44
(Timeline diagram: queued packet transmits are piggybacked on real interrupts.)
45
Optimization Three
  • Reduce host system calls for packet sends and receives
  • Idea: instead of select, use a shared bit vector to indicate packet availability
  • Eliminates the costly select() (see the sketch below)
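A sketch of the shared-bit-vector idea with illustrative names (the real mechanism lives in memory shared between the VMApp and the VMNet driver): the driver sets a bit when a packet is pending, and the VM side checks the bit with a plain memory read instead of a select() system call.

```c
#include <stdatomic.h>
#include <stdbool.h>

static atomic_uint pending;            /* word shared by driver and VMApp */
#define RX_PENDING_BIT 0x1u

/* Host-side driver: a packet arrived for the VM; set the bit. */
void vmnet_mark_rx_pending(void) {
    atomic_fetch_or(&pending, RX_PENDING_BIT);
}

/* VM side: test-and-clear the bit; a cheap check, no system call. */
bool vm_has_rx_pending(void) {
    return atomic_fetch_and(&pending, ~RX_PENDING_BIT) & RX_PENDING_BIT;
}

int main(void) {
    vmnet_mark_rx_pending();                /* driver: packet arrived */
    return vm_has_rx_pending() ? 0 : 1;     /* VMM notices without select() */
}
```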

46
Summary of three optimizations
(Chart: nettest throughput for Native, VM/733 MHz, Optimized VM/733 MHz, and Version 2.0; annotation: the guest OS idles.)
47
Summary of three optimizations
(Chart: nettest throughput for Native, VM/350 MHz, Optimized VM/350 MHz, and Version 2.0.)

48
Most effective optimization?
  • Emulating IN and OUT to the Lance I/O ports directly in the VMM
  • Why?
  • Eliminates lots of world switches
  • The I/O instruction is changed to a MOV

49
Further avenues for optimization?
  • Modify the guest OS
  • Substitute expensive-to-virtualize instructions, e.g. MMU instructions (example?)
  • Import some OS functionality into the VMM
  • Tradeoff: gives up the ability to use off-the-shelf OSes
  • An idealized virtual NIC (example?)
  • Only one I/O per packet transmit instead of 12!
  • Cost: custom device drivers for every OS
  • VMware Server version

50
Further avenues for optimization?
  • Modify the host OS (example?)
  • Change the Linux networking stack: poor buffer management
  • Cost: requires cooperation from OS vendors
  • Direct control of hardware: VMware ESX
  • Fundamental limitations of the hosted architecture
  • Idea: let the VMM drive I/O directly, with no world switching
  • Cost: ?