1
Inuit: a System for Waste Free Computing
  • Stefan Savage
  • Michael Frederick, Diwaker Gupta, Marvin McNett,
    Alex Snoeren, Amin Vahdat, Geoff Voelker,
    Michael Vrable, Ken Yocum
  • University of California, San Diego
  • MSR Cambridge Colloquium, 10/27/2005

2
First, a warning
> From: Stefan Savage [mailto:savage@cs.ucsd.edu]
> Sent: 03 August 2005 09:37
> To: Rebecca Isaacs
> Cc: jon.crowcroft@cl.cam.ac.uk; mvrable@cs.ucsd.edu
> Subject: RE: Invitation - Cambridge Systems Colloquium 27 Oct
>
> If you'd like to get a talk out of me I can certainly do
> one, but I'm not sure the talks I have on tap will be a
> perfect fit for the topics below (unless you're amenable
> to a wildly gesticulating rant)

> From: Rebecca Isaacs [mailto:risaacs@microsoft.com]
> Sent: Friday, September 16, 2005 11:53 AM
> To: Stefan Savage
> Cc: jon.crowcroft@cl.cam.ac.uk
> Subject: RE: Invitation - Cambridge Systems Colloquium 27 Oct
>
> Hi Stefan,
> Actually wild gesticulations could be quite entertaining...
  • Thus, this talk is wild speculation
  • I am unconstrained by
  • thinking my ideas through
  • evaluating if they might be practical
  • or, any real experience with the problem domain

3
American Culture
  • The US has developed, by far, the most
    consumption-oriented, wasteful culture in the
    history of mankind
  • 400 million tons of waste/year (almost 25% of the
    world's)
  • 100 quads of energy/year (almost 30% of world
    usage)

4
Inuit Culture
  • Indigenous people of North America and Eastern
    Russia
  • Subsistence hunters -> Bowhead Whale
  • Nothing is wasted, for to waste would be
    disrespectful to the whale ("Abvibich lglaunifat
    Niginmun")
  • Meat -> protein
  • Jaw bones -> sled runners
  • Ribs -> fences and arrow points
  • Shoulder blades -> housing structures
  • Baleen -> rope and baskets
  • Lungs/Liver -> drums
  • Blubber -> oil lamps

5
Modern computing is very American
  • Most of the time, computers do nothing of value
  • void cpu_idle (void) (200 Trillion BTUs?)
  • Yet, we keep buying more (250M last year!)
  • And, we throw out the old ones (150M last year)
  • Bottom line: computers are treated as disposable
    items to be acquired to excess, lightly used, and
    then discarded

6
A Case Study: Systems Computing at UCSD
  • 100 servers
  • 2-5 Trillion Instructions per Second
  • 150GB memory
  • 22kW power consumption (peak)
  • 100kBTU/hr heat load (peak)
  • We have 30 students sharing these machines
  • The waste paradox:
  • Students insist that resources are scarce and we
    need to buy more machines
  • They can't get machines when they need them, and
  • They can't run the large-scale experiments they
    want
  • Yet... resource utilization is << 5% (a
    back-of-the-envelope sketch follows this list)
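
A rough calculation makes the paradox concrete. This is only a sketch: the 22 kW draw and the 5% utilization bound come from the figures above, while the idle-power fraction is an assumed illustrative value, not a measurement.

# Back-of-the-envelope: energy drawn vs. energy attributable to useful work.
# PEAK_KW and UTILIZATION come from the slide; IDLE_POWER_FRACTION is an
# assumption (a 2005-era server still draws much of its peak power when idle).
PEAK_KW = 22.0
UTILIZATION = 0.05          # generous upper bound ("<< 5%")
IDLE_POWER_FRACTION = 0.6   # assumed illustrative value

avg_kw = PEAK_KW * (IDLE_POWER_FRACTION + (1 - IDLE_POWER_FRACTION) * UTILIZATION)
drawn_kwh = avg_kw * 24 * 365                   # energy drawn per year
useful_kwh = PEAK_KW * UTILIZATION * 24 * 365   # energy doing actual work

print(f"~{drawn_kwh:,.0f} kWh/year drawn, ~{useful_kwh:,.0f} kWh/year useful")
print(f"=> only about {useful_kwh / drawn_kwh:.0%} of the energy does work")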

7
Why is this?
  • We all like the abstraction of a machine as a
    unit of computing allocation
  • This is deep in the DNA of the PC computing model
  • Provides isolation (software, performance)
  • Our direct implementation of this abstraction is
    where we get into trouble
  • High preemption cost -> long-term allocation
  • "Get off my machines, you @-head"
  • Poor resource sharing
  • Your disk storage is not unique [Bolosky et al.]
  • Your memory is not unique
  • Your computing is not unique

8
The Inuit system
  • Unit of allocation is the machine cluster (can be
    1 machine); a hypothetical request sketch follows
    this list
  • Implementation via virtual machines (modified
    Xen)
  • Key idea: Try to eliminate all unintended
    resource redundancy
  • Extreme multiplexing
  • Storage sharing
  • System continuations
  • Maximize cluster-level parallelism
  • Aggressive migration
  • Place to maximize resource utilization (sharing
    AND use)
  • Cluster activations
  • Allow cluster applications to adapt to changes in
    resource availability
  • Potemkin, Xenoservers, Parallax, The Collective,
    GMS, Single Instance Store, Pastiche, Rx,
    Transparent Result Caching, and judicious amounts
    of grad student effort
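
To make the allocation abstraction concrete, here is a minimal sketch of what a cluster-as-the-unit-of-allocation request could look like. The names (ResourceSpec, VirtualClusterRequest) and fields are hypothetical, not the actual Inuit interface; the point is that the user asks for n VMs with hard/soft requirements and the system is free to multiplex, share, and migrate the physical resources behind them.

from dataclasses import dataclass, field

@dataclass
class ResourceSpec:
    """Per-VM resource request; hard limits must be honored, soft ones may be violated."""
    cpu_cores: float
    memory_mb: int
    hard: bool = False   # hard => never sacrificed to maximize global utilization

@dataclass
class VirtualClusterRequest:
    """Hypothetical request object: the cluster (possibly of size 1) is the unit of allocation."""
    name: str
    vm_count: int
    base_image: str      # similar images => more CoW sharing opportunities
    spec: ResourceSpec = field(default_factory=lambda: ResourceSpec(0.5, 256))

# Example: a 16-VM experiment cluster built from one shared base image.
req = VirtualClusterRequest(name="netsim-experiment", vm_count=16,
                            base_image="debian-3.1-xen",
                            spec=ResourceSpec(cpu_cores=0.5, memory_mb=256, hard=False))
print(req)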

9
Extreme multiplexing
  • Share storage between similar VMs
  • CoW memory/disk (à la Potemkin)
  • Discover newly shared data (à la ESX Server); a
    content-hashing sketch follows this list
  • Throw away idle storage that is remotely
    available
  • Allows 10s or 100s of VMs per physical machine
  • Many VMs -> likely that a VM is available when
    the CPU goes idle
  • System continuations: share execution traces
    across similar VMs
  • Pre-cache state changes from common (input, CFG)
    blocks
  • Roll-forward VMs to cached result state (opposite
    of ReVirt)
  • Note: leverage controlled non-determinism (Rx) ->
    plausibility is sufficient
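
The "discover newly shared data" step can be illustrated with a minimal content-hashing sketch in the spirit of ESX-style page sharing. This is hypothetical illustration code, not Potemkin's or ESX Server's implementation: pages whose contents match across VMs become candidates to be backed by a single copy-on-write page.

import hashlib
from collections import defaultdict
from typing import Dict, List, Tuple

PAGE_SIZE = 4096

def find_shareable_pages(vm_memory: Dict[str, bytes]) -> Dict[bytes, List[Tuple[str, int]]]:
    """Group (vm_id, page_index) pairs by a hash of each page's contents.

    Any bucket with more than one entry is a candidate to be backed by a single
    CoW page. A real implementation would confirm with a full byte-for-byte
    compare before sharing, since hashes can collide.
    """
    buckets: Dict[bytes, List[Tuple[str, int]]] = defaultdict(list)
    for vm_id, mem in vm_memory.items():
        for off in range(0, len(mem), PAGE_SIZE):
            digest = hashlib.sha1(mem[off:off + PAGE_SIZE]).digest()
            buckets[digest].append((vm_id, off // PAGE_SIZE))
    return {h: users for h, users in buckets.items() if len(users) > 1}

# Two VMs booted from the same image share most of their pages (here, zero pages).
vm_a = bytes(PAGE_SIZE) * 3 + b"A" * PAGE_SIZE
vm_b = bytes(PAGE_SIZE) * 3 + b"B" * PAGE_SIZE
shared = find_shareable_pages({"vmA": vm_a, "vmB": vm_b})
print(f"{len(shared)} distinct page content(s) shareable across VMs")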

10
Aggressive migration
  • Cluster similar VMs on the same physical machines
    (a greedy placement sketch follows this list)
  • To maximize cheap sharing opportunities
  • Migrate VM/storage to maximize global utilization
  • Balance CPU/memory load
  • E.g., large-working-set VMs can disrupt
    multiplexing
  • Migrate VM away, or migrate memory away (depends
    on workload)
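
A minimal placement sketch (hypothetical and greedy, ignoring migration cost and CPU load) illustrating the idea: co-locate VMs derived from the same base image so CoW sharing stays cheap, while respecting each host's memory capacity.

def place_vms(vms, hosts, capacity_mb):
    """Greedy placement: prefer a host that already runs a VM with the same base image.

    vms:   list of (vm_id, base_image, mem_mb)
    hosts: list of host ids
    Returns {host: [vm_id, ...]}. A real placer would also weigh CPU load,
    working-set size, and the cost of migrating running VMs.
    """
    placement = {h: [] for h in hosts}
    used = {h: 0 for h in hosts}
    images_on = {h: set() for h in hosts}

    for vm_id, image, mem in sorted(vms, key=lambda v: -v[2]):
        # Candidates with room, preferring hosts that already hold this image.
        candidates = [h for h in hosts if used[h] + mem <= capacity_mb]
        if not candidates:
            raise RuntimeError(f"no host has room for {vm_id}")
        best = min(candidates, key=lambda h: (image not in images_on[h], used[h]))
        placement[best].append(vm_id)
        used[best] += mem
        images_on[best].add(image)
    return placement

vms = [("vm1", "debian", 256), ("vm2", "debian", 256), ("vm3", "fedora", 512)]
print(place_vms(vms, ["hostA", "hostB"], capacity_mb=1024))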

11
Cluster activations
  • For those who need multiple simultaneous machines
    (virtual clusters)
  • Gang scheduling of VC VMs
  • Can specify hard/soft resource requirements
  • Violate utilization maximization for hard
    requirements
  • Application notification on soft resource
    violation (an upcall sketch follows this list)
  • Provide feedback about resources backing VMs
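
The notification path can be sketched as an upcall interface in the spirit of scheduler activations. This is a hypothetical API (ClusterActivation, notify, and adapt are invented names, not the real Inuit interface): the virtual cluster registers a callback, and the resource manager invokes it with the resources currently backing each VM whenever a soft requirement is violated, so the application can adapt, e.g. by shrinking its degree of parallelism.

from typing import Callable, Dict

# Resources actually backing each VM at the moment of the upcall,
# e.g. {"vm1": {"cpu_cores": 0.25, "memory_mb": 128}, ...}
Snapshot = Dict[str, Dict[str, float]]

class ClusterActivation:
    """Hypothetical upcall channel between the resource manager and a virtual cluster."""

    def __init__(self, on_soft_violation: Callable[[Snapshot], None]):
        self.on_soft_violation = on_soft_violation

    def notify(self, backing: Snapshot) -> None:
        # Called by the resource manager when a soft requirement can't be met.
        self.on_soft_violation(backing)

def adapt(backing: Snapshot) -> None:
    # Application-side policy: drop workers on VMs that lost most of their CPU share.
    starved = [vm for vm, r in backing.items() if r["cpu_cores"] < 0.5]
    print(f"soft violation: scaling down workers on {starved}")

activation = ClusterActivation(on_soft_violation=adapt)
activation.notify({"vm1": {"cpu_cores": 0.2, "memory_mb": 128},
                   "vm2": {"cpu_cores": 1.0, "memory_mb": 512}})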

12
Where does this make sense?
  • Obvious places
  • UCSD
  • Datacenters
  • The Grid
  • Application servers
  • Less obvious
  • Desktops
  • Huh?

13
Inuit on the desktop?
  • Two opportunities
  • VMs make Networks of Workstations palatable
  • Idle CPU/Memory up for grabs
  • Manageability
  • Encapsulating desktop in VM makes deployment easy
  • Checkpoint/restore of whole system image
  • Cheap migration of the whole workstation (Internet
    Suspend/Resume, The Collective)

14
Where are we now?
  • At the social engineering stage
  • Get everyone to eat the dog food and fix problems
    until it's at least as good as what we had
  • Almost all of our servers now run Xen by default
  • Students forced to use Xen as their only
    environment
  • Lots of support infrastructure
  • Address allocation/routing, DNS (IP addresses and
    DNS names correspond to VMs, not PMs)
  • Control interface (which images go onto which
    VMs, VM auth, etc.)
  • Half-done with virtual reboot/tty interface
  • Work to improve Potemkin stability/overhead
  • The rest is a hand wave

15
Questions
  • ?