1
PlanetLab Tutorial
  • Presented by
  • Manfred Georg and
  • Christoph Jechlitschek

2
What is PlanetLab?
  • An open platform for developing, deploying, and
    accessing planetary-scale services
  • A network of computers distributed around the
    world.
  • A huge distributed system.

3
Coverage
Current distribution of 595 nodes over 282 sites
4
Terminology
  • Site. A site is a physical location where
    PlanetLab nodes are located.
  • Node. A node is a dedicated server that runs
    components of PlanetLab services.
  • Slice. A slice is a set of allocated resources
    distributed across PlanetLab.
  • Sliver. A sliver is the set of allocated resources
    on a single PlanetLab node (see the sketch below).
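To make the relationships concrete, here is a minimal sketch in Python (our illustration, not part of PlanetLab itself): a site hosts nodes, a slice spans many nodes, and its per-node allocation is a sliver.

    from dataclasses import dataclass, field

    @dataclass
    class Node:
        """A dedicated server at a site running PlanetLab components."""
        hostname: str

    @dataclass
    class Site:
        """A physical location hosting one or more nodes."""
        name: str
        nodes: list[Node] = field(default_factory=list)

    @dataclass
    class Sliver:
        """The resources allocated to one slice on a single node."""
        node: Node
        cpu_share: float
        disk_mb: int

    @dataclass
    class Slice:
        """A named set of slivers distributed across PlanetLab."""
        name: str
        slivers: list[Sliver] = field(default_factory=list)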

5
Why is it needed?
  • The Internet is ossified: it is hard to make
    corrections or improvements.
  • It is difficult to take measurements on the
    Internet without vantage points.
  • It is complicated to test applications with real
    world traffic or on large topologies.
  • It is difficult to offer a distributed service
    from just one site.

6
What is it used for?
  • Realistic test bed
  • Measurement platform
  • Internet in a Slice
  • Deployment platform
  • Service platform

7
Short history
  • March 2002: first meeting; Intel agrees to
    seed-fund 100 machines
  • June-October 2002: deployment of the first 100
    machines at 42 locations
  • June 2003: intent to create an academic/industrial
    consortium to oversee the further expansion and
    development of PlanetLab
  • October 2003: Version 1.0 released

8
Short History
  • September 2003: NSF funds PlanetLab with $4.5M
  • January 2004: Princeton, Berkeley, and Washington
    formally create the PlanetLab Consortium, with
    Intel and HP as commercial members.
  • January 2004: Version 2.0 released
  • July 2004: Version 3.0 released

9
Growth
  • October 2002: 100 nodes
  • September 2003: 200 nodes
  • December 2003: 300 nodes
  • July 2004: 400 nodes
  • December 2004: 500 nodes
  • Today: 595 nodes

10
How to join
  • Users get accounts from their institution.
  • Institutions join by
    • signing a membership agreement
    • providing at least two computers to PlanetLab
  • Cost: free for academic institutions; industrial
    pricing starts at $25k

11
Hardware requirements
  • For existing nodes
    • 1.0 GHz IA32 (PIII-class) processor
    • 512 MByte RAM
    • 50 GByte HDD
    • CD-ROM
    • Floppy drive
    • Fast Ethernet interface

12
Hardware requirements
  • For new nodes
    • 1.5 GHz IA32 (PIII-class) processor
    • 1 GByte RAM
    • 160 GByte HDD
    • CD-ROM
    • Floppy drive
    • Fast Ethernet interface

13
Connectivity
  • Static IP address
  • DNS entries, forward and reverse (see the check below)
  • Outside of firewall
  • Not NATed
  • As few other traffic restrictions as possible
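A quick way to verify the DNS requirement is to confirm that forward and reverse lookups agree. A minimal sketch in Python; the node name in the usage comment is hypothetical:

    import socket

    def dns_is_consistent(hostname: str) -> bool:
        """Check that forward and reverse DNS entries match."""
        ip = socket.gethostbyname(hostname)            # forward lookup
        reverse_name, _, _ = socket.gethostbyaddr(ip)  # reverse lookup
        return reverse_name.lower() == hostname.lower()

    # print(dns_is_consistent("planetlab1.example.edu"))  # hypothetical node

Note that socket.gethostbyaddr raises socket.herror when no reverse entry exists, which is itself a sign the node does not meet the requirement.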

14
Virtualization Levels
  • Hardware virtualization (e.g., VMware)
    • doesn't scale well
    • we don't need multi-OS functionality
  • HW/SW co-virtualization (e.g., Xen, Denali)
    • not yet mature
    • requires OS tweaks
  • Virtualization at the system call interface
    (e.g., Jail, Vservers)
    • reasonable compromise
    • isolation not as good as hardware virtualization
  • Unix processes
    • isolation is problematic
  • Java Virtual Machine
    • too high-level

15
Vservers
  • Virtualizes at the system call interface
    • each Vserver runs in its own security context
    • private UID/GID name space
    • limited superuser capabilities (e.g., no
      CAP_NET_RAW)
    • uses chroot for file system isolation
      (illustrated below)
    • scales to 1000s of Vservers per node (29 MB each)
  • Isolation
    • kernel schedulers (processor and link bandwidth)
    • separate address spaces
  • Node Manager
    • privileged security context
    • interface for creating virtual machines
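The chroot-based file system isolation can be illustrated with a short Python sketch. This shows the underlying mechanism only, not the actual Vserver implementation; it must run as root, and the path and IDs are hypothetical:

    import os

    def enter_security_context(new_root: str, uid: int, gid: int) -> None:
        """Confine this process to a subtree and drop root privileges,
        the kind of isolation a Vserver security context provides."""
        os.chroot(new_root)  # new_root now appears as "/" to this process
        os.chdir("/")        # drop any handle to the old root
        os.setgid(gid)       # shed group privileges first...
        os.setuid(uid)       # ...then user privileges (irreversible)

    # enter_security_context("/vservers/my_slice", uid=1000, gid=1000)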

16
Architecture
[Diagram: Service 3, Service 4, etc. each run in their own
Vserver; a combined isolation and application interface sits
on Linux, which provides resource isolation, VNET, sockets,
and instrumentation on top of the hardware.]
17
Future Architecture(?)
[Diagram: Service 3 and Service 4 run on guest OSes such as
XP and BSD; the application interface is separated from the
isolation interface, and an isolation kernel (e.g., Denali,
Xenoserver) runs directly on the hardware.]
18
VNET
  • Virtualizes the network interface
  • Isolates traffic
  • Tracks connections
  • Same API as sockets
  • Provides raw sockets (see the sketch below)
    • non-privileged
    • discards malformed or spoofed packets
  • Provides connection reservation
  • Enforces rate caps
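Because VNET keeps the standard sockets API, a slice opens a raw socket with the usual call. A minimal Python sketch:

    import socket

    # On stock Linux this call requires CAP_NET_RAW (i.e., root); inside
    # a PlanetLab sliver, VNET exposes the same API without privilege and
    # drops malformed or spoofed packets before they leave the node.
    s = socket.socket(socket.AF_INET, socket.SOCK_RAW, socket.IPPROTO_ICMP)
    s.settimeout(2.0)
    # ... craft an ICMP echo request, sendto() it, recv() the reply ...
    s.close()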

19
Proper
  • Privileged Operations service
  • Runs on every node
  • Provides slices with a way to perform privileged
    operations, e.g.
    • reading files in other slices
    • opening true raw sockets

20
PlanetFlow
  • Logs every outbound IP flow on every node
    • accesses ulogd via Proper
    • retrieves packet headers, timestamps, context IDs
      (batched)
    • used to audit traffic
  • Aggregated and archived at PLC
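The kind of audit PlanetFlow enables can be sketched in a few lines of Python. The record layout below is hypothetical, purely for illustration; PlanetFlow's actual schema is not reproduced here:

    from collections import defaultdict

    # Hypothetical flow records: (timestamp, context_id, src, dst, bytes).
    # The context id ties a flow back to the slice that generated it.
    records = [
        (1120000000, 42, "10.0.0.1", "198.51.100.7", 1500),
        (1120000003, 42, "10.0.0.1", "198.51.100.7", 2800),
    ]

    # Roll up bytes per (context id, destination) to attribute traffic.
    totals = defaultdict(int)
    for ts, ctx, src, dst, nbytes in records:
        totals[(ctx, dst)] += nbytes

    for (ctx, dst), nbytes in sorted(totals.items()):
        print(f"context {ctx} -> {dst}: {nbytes} bytes")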

21
Principals
  • Node Owners
    • host one or more nodes (retain ultimate control)
    • select an MA and approve one or more SAs
  • Service Providers (Developers)
    • implement and deploy network services
    • responsible for their services' behavior
  • Management Authority (MA)
    • installs and maintains software on nodes
    • creates VMs and monitors their behavior
  • Slice Authority (SA)
    • registers service providers
    • creates slices and binds them to the responsible
      provider

22
Architectural Elements
23
PlanetLab Central
[Diagram: PLC, acting as slice authority (SA), talks to the
node manager (NM) on each node; the NM runs on the VMM and
manages that node's VMs.]
24
Slice Creation
[Diagram: to create a slice, PLC (SA) directs the NM on each
node to instantiate a VM; the resulting set of VMs across
nodes makes up the slice.]
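Slice creation is driven through PLC's XML-RPC interface. The sketch below follows the public PLCAPI method names, but treat the details (URL, fields, credentials) as approximate rather than definitive; the account and node names are hypothetical:

    import xmlrpc.client

    plc = xmlrpc.client.ServerProxy("https://www.planet-lab.org/PLCAPI/")

    auth = {"AuthMethod": "password",
            "Username": "researcher@example.edu",  # hypothetical account
            "AuthString": "secret"}

    # Ask the slice authority to register a slice, then bind it to nodes;
    # the NM on each node later instantiates the corresponding VM (sliver).
    slice_id = plc.AddSlice(auth, {"name": "example_myslice",
                                   "description": "tutorial demo"})
    plc.AddSliceToNodes(auth, slice_id, ["planetlab1.example.edu"])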
25
Boot process
(Actors: the node, the PLC boot server, and the boot manager.)
  1. Boot from BootCD (Linux loaded)
  2. Hardware initialized
  3. Read network config from floppy
  4. Contact PLC (MA)
  5. Send boot manager
  6. Execute boot manager
  7. Node key read into memory from floppy
  8. Invoke Boot API
  9. Verify node key, send current node state
  10. If state is "install", run installer
  11. Update node state via Boot API
  12. Verify node key, change state to "boot"
  13. Chain-boot node (no restart)
  14. Node booted
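Steps 8-13 amount to a small state machine inside the boot manager. A simplified, runnable Python sketch (the API names are illustrative stand-ins for the real Boot API):

    def run_installer():
        print("installing node software...")

    def chain_boot():
        print("chain-booting installed kernel (no restart)")

    class FakeBootAPI:
        """Stand-in for the PLC Boot API; real calls go over HTTPS."""
        def get_node_state(self, node_key):
            return "install"            # pretend this is a fresh node
        def update_node_state(self, node_key, state):
            print(f"node state -> {state}")

    def boot_manager(plc, node_key):
        """Steps 8-13 of the slide, simplified."""
        state = plc.get_node_state(node_key)  # steps 8-9: PLC verifies key
        if state == "install":
            run_installer()                   # step 10
            plc.update_node_state(node_key, "boot")  # steps 11-12
        chain_boot()                          # step 13: no hardware restart

    boot_manager(FakeBootAPI(), node_key="key-from-floppy")  # step 7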
26
Software environment
  • Based on Fedora Core 2
  • Stripped down: only a few packages, no compilers,
    no man pages
  • Install needed packages with rpm, yum, apt-get,
    scripts, Stork, etc.
  • Limited superuser

27
How to log in
  • Only via ssh with a private key
  • Use the group name instead of a user name
  • ssh -i ~/.ssh/identity group@planetlab-host

28
Permanent services
  • Many long-running services are already deployed
  • Serve PL and non-PL users
  • e.g., CoDeeN, CoDeploy, SWORD, ScriptRoute,
    OpenDHT, Sirius, ...
  • Demonstrate the success of PlanetLab

29
Summary
  • PlanetLab is an open, global network test-bed for
    pioneering novel planetary-scale services.
  • A model for introducing innovations into the
    Internet through the use of overlay networks.
  • A collaborative effort involving hundreds of
    academic and corporate researchers from around
    the world.

30
You Me
31
This Afternoon
  • Tutorial continues at 2:00 pm
  • Same room
  • Please bring your laptop