Next-Generation Network Research Facilities

Transcript and Presenter's Notes
1
Next-Generation Network Research Facilities
  • Jennifer Rexford
  • Princeton University
  • http://www.cs.princeton.edu/~jrex

2
Outline
  • Networking research challenges
  • Security, economic incentives, management,
    layer-2 technologies
  • Importance of building and deploying
  • Bridging the gap between simulation/testbeds and
    real deployment
  • Global Environment for Network Innovations (GENI)
  • Major NSF initiative to support experimental
    networking research
  • Key ideas: virtualization, programmability, and
    user opt-in
  • GENI backbone design
  • Programmable routers, flexible optics, and
    connection to Internet
  • Example experiments highlighting the capabilities
  • Virtual Network Infrastructure (VINI)
  • Initial experimental network facility in NLR and
    Abilene
  • Conclusions

3
Is the Internet broken?
  • It is great at what it does.
  • Everyone should be proud of this.
  • All sorts of things can be built on top of it.
  • But
  • Security is weak and not getting better.
  • Availability continues to be a challenge.
  • It is hard to manage and getting harder.
  • It does not handle mobility well.
  • A long list, once you start

4
Challenges Facing the Internet
  • Security and robustness
  • Naming and identity
  • Availability
  • Economic incentives
  • Difficulty of providing end-to-end services
  • Commoditization of the Internet infrastructure
  • Network management
  • No framework in the original Internet design
  • Tuning, troubleshooting, accountability, …
  • Interacting with underlying network technologies
  • Advanced optics: dynamic capacity allocation
  • Wireless: mobility, dynamic impairments
  • Sensors: small embedded devices at large scale

5
FIND: Future Internet Design
  • NSF research initiative
  • What are the requirements for a global network
    10-15 years out?
  • How would we re-conceive the network, if we could
    design it from scratch?
  • Conceive the future, by letting go of the present
  • This is not change for the sake of change
  • Rather, it is a chance to free our minds
  • Figuring out where to go, and then how to get
    there
  • Perhaps a header format is not the defining piece
    of a new architecture
  • Definition and placement of functionality
  • Not just data plane, but also control and
    management
  • And division between end hosts and the network

6
The Importance of Building
  • Systems-oriented computer science research needs
    to build and try out its ideas to be effective
  • Paper designs are just idle speculation
  • Simulation is only occasionally a substitute
  • We need
  • Real implementation
  • Real experience
  • Real network conditions
  • Real users
  • To live in the future

7
Need for Experimental Facility
Goal: Seamless conception-to-deployment process
[Diagram: cycle from simulation/emulation (models, code) through experiment at scale (measurements) to deployment and analysis (results)]
8
Existing Tools
  • Simulators
  • ns
  • Emulators
  • Emulab
  • WAIL
  • Wireless testbeds
  • ORBIT
  • Emulab
  • Wide-area testbeds
  • PlanetLab
  • RON
  • X-bone
  • DETER

9
Today's Tools Have Limitations
  • Simulation based on simple models
  • Topologies, administrative policies, workloads,
    failures
  • Emulation (and in-lab tests) is similarly
    limited
  • Only as good as the models
  • Traditional testbeds are targeted
  • Not cost-effective to test every good idea
  • Often of limited reach
  • Often with limited programmability
  • Testbed dilemma
  • Production network: real users, but hard to make
    changes
  • Research testbed: easy to make changes, but no
    users

10
Bridging the Chasm
[Diagram: maturity vs. time, progressing from foundational research through simulation and research prototypes and small-scale testbeds, via a global experimental facility, to a deployed future Internet]
11
Goals for the Experimental Facility
  • Broader impact
  • Positive influence on the design of the future
    Internet
  • Network that is more secure, reliable, efficient,
    manageable, usable
  • Intellectual progress
  • Network science
  • Experimentally answer questions about complex
    systems
  • Better understanding of dynamics, stability,
    evolvability, etc.
  • Network architecture
  • Evaluate and compare alternative architectural
    structures
  • Reconcile the contradictory goals an architecture
    must meet
  • Network engineering
  • Evaluate engineering trade-offs in a controlled,
    realistic setting
  • Test theories of how different elements might be
    designed

12
GENI
  • Experimental facility
  • MREFC proposal to build a large-scale facility
  • Jointly from NSF's CS directorate and the
    research community
  • We are currently at the Conceptual Design stage
  • Will eventually require Congressional approval
  • Global Environment for Network Innovations
  • Prototyping new architectures
  • Realistic evaluation
  • Controlled evaluation
  • Shared facility
  • Connecting to real users
  • Enabling new services

See http://www.geni.net
13
Three Key Ideas in GENI
  • Virtualization
  • Multiple architectures on a shared facility
  • Amortizes the cost of building the facility
  • Enables long-running experiments and services
  • Programmable
  • Enable prototyping and evaluation of new
    architectures
  • Enable a revisiting of today's layers
  • Opt-in on a per-user / per-application basis
  • Attract real users
  • Demand drives deployment / adoption
  • Connect to the Internet
  • To reach users, and to connect to existing
    services

14
Slices
15
Slices
16
User Opt-in
[Diagram: a client opts in by directing traffic through a proxy]
17
Realizing the Ideas
  • Slices embedded in a substrate of resources
  • Physical network substrate
  • Expandable collection of building block
    components
  • Nodes / links / subnets
  • Software management framework
  • Knits building blocks together into a coherent
    facility
  • Embeds slices in the physical substrate
  • Builds on ideas in past systems
  • PlanetLab, Emulab, ORBIT, X-Bone, …

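The "slices embedded in a substrate" idea above can be sketched in a few lines of Python. This is a hypothetical illustration, not GENI's actual management framework: a substrate tracks per-node capacity and greedily places each slice's virtual nodes, rolling back if the slice does not fit.

```python
# Hypothetical sketch of slice embedding (illustration only, not
# GENI's real software): map a slice's virtual nodes onto substrate
# nodes with limited capacity.

class Substrate:
    def __init__(self, capacity):
        # capacity: substrate node name -> number of slivers it can host
        self.free = dict(capacity)
        self.embeddings = {}  # slice name -> {virtual node -> substrate node}

    def embed(self, slice_name, virtual_nodes):
        """Greedily place each virtual node on a substrate node with space."""
        mapping = {}
        for vnode in virtual_nodes:
            host = next((n for n, c in self.free.items() if c > 0), None)
            if host is None:
                # Roll back partial placement: the slice does not fit.
                for placed in mapping.values():
                    self.free[placed] += 1
                return None
            self.free[host] -= 1
            mapping[vnode] = host
        self.embeddings[slice_name] = mapping
        return mapping

substrate = Substrate({"pop-nyc": 2, "pop-chi": 1})
print(substrate.embed("overlay-exp", ["vr1", "vr2", "vr3"]))
```

A real embedder would also map virtual links and respect bandwidth and locality constraints; the greedy placement here only captures the core idea of slices consuming shared substrate resources.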
18
National Fiber Facility
19
Programmable Routers
20
Clusters at Edge Sites
21
Wireless Subnets
22
ISP Peers
[Diagram: peering with two ISPs]
23
Closer Look
[Diagram: backbone wavelengths and switches connecting customizable routers, edge sites, wireless subnets, a sensor network, and the Internet]
24
GENI Substrate Summary
  • Node components
  • Edge devices
  • Customizable routers
  • Optical switches
  • Bandwidth
  • National fiber facility
  • Tail circuits
  • Wireless subnets
  • Urban 802.11
  • Wide-area 3G/WiMax
  • Cognitive radio
  • Sensor net
  • Emulation

25
GENI Management Core
Management Services
  • name space for users, slices, components
  • set of interfaces (plug in new components)
  • support for federation (plug in new partners)

[Diagram: the GMC sits between these management services and the substrate components]
26
Hardware Components
[Diagram: the substrate hardware components]
27
Virtualization Software
[Diagram: virtualization software layered on each piece of substrate hardware]
28
Component Manager
[Diagram: a component manager (CM) running above the virtualization software on each substrate node]
29
GENI Management Core (GMC)
[Diagram: the GMC's slice manager, resource controller, and auditing archive exercise node control over, and gather sensor data from, the component manager on each node]
30
Federation
[Diagram: multiple federated GMCs]
31
User Front-End(s)
[Diagram: a GUI front-end providing management services (provisioning, file naming, information plane) across multiple federated GMCs]
32
Virtualization in GENI
  • Multiple levels possible
  • Different level required by different experiments
  • Different level depending on the technology
  • Example base cases:
  • Virtual server (socket interface / overlay
    tunnels)
  • Virtual router (virtual line card / static
    circuits)
  • Virtual switch (virtual control interface /
    dynamic circuits)
  • Virtual AP (virtual MAC / fixed spectrum
    allocation)
  • Specialization
  • The ability to install software in your own
    virtual-

33
Distributed Services in GENI
  • Goals
  • Complete the GENI management story
  • Lower the barrier-to-entry for researchers
    (students)
  • Example focus areas
  • Provisioning (slice embedder)
  • Security
  • Information plane
  • Resource allocation
  • Files and naming
  • Topology discovery
  • Development tools
  • Interfacing with the Internet, and IP

34
GENI Security
  • Limits placed on a slice's reach
  • Restricted to slice and GENI components
  • Restricted to GENI sites
  • Allowed to compose with other slices
  • Allowed to interoperate with legacy Internet
  • Limits on resources consumed by slices
  • Cycles, bandwidth, disk, memory
  • Rate of particular packet types, unique addrs per
    second
  • Mistakes (and abuse) will still happen
  • Auditing will be essential
  • Network activity → slice → responsible user(s)

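The "network activity → slice → responsible user(s)" chain above can be sketched as two lookups. All names, addresses, and table layouts here are invented for illustration, not GENI's real interfaces.

```python
# Hypothetical auditing sketch: attribute observed network activity to
# a slice, and the slice to its responsible users.

slice_of_flow = {("10.0.1.5", 5000): "esm-experiment"}   # (src addr, port) -> slice
users_of_slice = {"esm-experiment": ["alice", "bob"]}    # slice -> responsible users

def attribute(src_addr, src_port):
    """Map a suspicious flow back to the users who own the slice."""
    slice_name = slice_of_flow.get((src_addr, src_port))
    if slice_name is None:
        return None
    return slice_name, users_of_slice.get(slice_name, [])

print(attribute("10.0.1.5", 5000))
```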
35
Success Scenarios
  • Change the research process
  • Sound foundation for future network architectures
  • Experimental evaluation, rather than paper
    designs
  • Create new services
  • Demonstrate new services at scale
  • Attract real users
  • Aid the evolution of the Internet
  • Demonstrate ideas that ultimately see real
    deployment
  • Provide architectural clarity for evolutionary
    path
  • Lead to a future global network
  • Purist: converge on a single new architecture
  • Pluralist: virtualization supporting many
    architectures

36
Working Groups to Flesh Out Design
  • Research (Dave Clark and Scott Shenker)
  • Usage policy / requirements / instrumentation
  • Architecture (Larry Peterson and John Wroclawski)
  • Define core modules and interfaces
  • Backbone (Jen Rexford and Dan Blumenthal)
  • Fiber facility / routers and switches / tail
    circuits / peering
  • Wireless (Dipankar Raychaudhuri and Deborah
    Estrin)
  • RF technologies / deployment
  • Services (Tom Anderson, Reiter)
  • Edge sites / infrastructure and underlay services
  • Education
  • Training / outreach / course development

37
GENI Backbone Requirements
  • Programmability
  • Flexible routing, forwarding, addressing, circuit
    set-up, …
  • Isolation
  • Dedicated bandwidth, circuits, CPU, memory, disk
  • Realism
  • User traffic, upstream connections, propagation
    delays, equipment failure modes, …
  • Control
  • Inject failures, create circuits, exchange
    routing messages
  • Performance
  • High-speed packet forwarding and low delays
  • Security
  • Preventing attacks on the Internet, and on GENI
    itself

38
A Researcher's View of the GENI Backbone
  • Virtual network topology
  • Nodes and links in a particular topology
  • Resources and capabilities per node/link
  • Embedded in the GENI backbone
  • Virtual router and virtual switch
  • Abstraction of a router and switch per node
  • To evaluate new architectures (routing,
    switching, addressing, framing, grooming,
    layering, …)
  • GENI backbone capabilities evolve over time
  • To realize the abstractions at finer detail
  • To scale to a larger number of experiments

39
Creating a Virtual Topology
Some links created by cutting through other nodes
Some links and nodes unused
40
GENI Backbone
41
GENI Backbone Node Components
  • Phase 0: General-purpose blade server
  • Single node with collection of assignable
    resources
  • Virtual Router may be assigned a VM, a blade, or
    multiple blades
  • Phase 1: Adding higher-performance components
  • Assignable Network Processor blades and FPGA
    blades
  • NPs also used for I/O for better control of I/O
    bandwidth
  • Phase 2: Adding reconfigurable cross-connect
  • Enable experiments with configurable transport
    layer
  • Provide true circuits between backbone virtual
    routers
  • Phase 3: Adding dynamic optical switch
  • Dynamic optical switch with programmable groomer
    and framer, and reconfigurable add/drop
    multiplexers

42
GENI Backbone Node Components
  • Phase 0: General-purpose blade server
  • Node with collection of assignable resources
  • Virtual Router may be assigned a virtual machine,
    blade, or multiple blades

43
GENI Backbone Node Components
  • Phase 1: Adding higher-performance components
  • Assignable Network Processor blades and FPGA
    blades
  • NPs also used for I/O for better control of
    bandwidth
  • ATCA chassis and blades

44
GENI Backbone Node Components
  • Phase 2: Reconfigurable cross-connect
  • Enable experiments with configurable transport
    layer
  • Provide true circuits between backbone virtual
    routers
  • Cut-through traffic circumvents the router

[Diagram: 1 GE and 10GE VLAN links, control plane, wavelength-tunable transponders/combiner, WDM fiber]
45
GENI Backbone Node Components
  • Phase 3: Adding dynamic optical switch
  • Dynamic optical switch with programmable groomer
    and framer, and reconfigurable add/drop
    multiplexers
  • Malleable bandwidth
  • Arbitrary framing

[Diagram: 1 GE and 10GE VLAN links, control plane, wavelength-tunable transponders]
46
GENI Backbone Software
  • Component manager and virtualization layer
  • Abstraction of virtual router and virtual switch
  • Setting scheduling parameters for subdividing
    resources
  • Multiplexers for resources hard to share
  • Single BGP session with the outside world
  • Single interface to an element-management system
  • Exchanging traffic with the outside world
  • Routing and forwarding software to evaluate and
    extend
  • VPN servers and NATs at the GENI/Internet
    boundary
  • Libraries to support experimentation
  • Specifying, controlling, and measuring
    experiments
  • Auditing and accounting to detect misbehavior

47
Feasibility
  • Industrial trends and standards
  • Advanced Telecom Computing Architecture (ATCA)
  • Network processors and FPGAs
  • SONET cross connects and ROADMs
  • Open-source networking software
  • Routing protocols, packet forwarding, network
    address translation, diverting traffic to an
    overlay
  • Existing infrastructure
  • PlanetLab nodes, software, and experiences
  • National Lambda Rail and Abilene backbones

48
Example Experiment: End-System Multicast
  • End-System Multicast
  • On-demand, live streaming of audio/video to many
    clients
  • Intermediate nodes forming a multicast tree
  • Ways GENI could support ESM research
  • Backbone nodes participating in the multicast
    tree
  • New network architectures running under ESM
  • Live native multicast support and QoS guarantees
  • Pre-recorded burst transfer, push, and
    network-storage

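As a rough sketch of the ESM idea above, the following Python walks a hand-built multicast tree (node names invented for the example), with each intermediate node relaying the stream to its children:

```python
# Illustrative sketch of end-system multicast: intermediate nodes form
# a tree and each node forwards a stream packet to its children.

tree = {
    "source": ["node-a", "node-b"],
    "node-a": ["client-1", "client-2"],
    "node-b": ["client-3"],
}

def deliver(root, packet, received=None):
    """Walk the multicast tree, recording which nodes receive the packet."""
    if received is None:
        received = []
    received.append(root)
    for child in tree.get(root, []):
        deliver(child, packet, received)
    return received

print(deliver("source", "frame-0"))
# Every node in the tree receives the packet exactly once.
```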
49
Example Experiment: Routing Control Platform
  • Routing Control Platform (RCP)
  • Refactoring of control and management planes
  • Computes forwarding tables in separate servers
  • Ways GENI can support RCP research
  • Providing direct control over the data plane
  • BGP sessions with the commercial Internet
  • Controlled experiments with node/link failures

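The RCP refactoring can be illustrated with a small sketch (not the actual RCP code): a logically central server runs a shortest-path computation over the full topology and derives each router's next-hop table, which would then be pushed into that router's data plane. The topology and names are invented.

```python
# Sketch of the RCP idea: a central server computes each router's
# forwarding table from the full topology, instead of every router
# running its own path computation.

import heapq

def shortest_next_hops(topology, router):
    """Dijkstra from `router`; return destination -> next hop."""
    dist = {router: 0}
    next_hop = {}
    pq = [(0, router, None)]  # (distance, node, first hop on the path)
    while pq:
        d, node, first = heapq.heappop(pq)
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry
        if first is not None:
            next_hop[node] = first
        for neigh, w in topology.get(node, {}).items():
            nd = d + w
            if nd < dist.get(neigh, float("inf")):
                dist[neigh] = nd
                heapq.heappush(pq, (nd, neigh, neigh if first is None else first))
    return next_hop

topo = {"a": {"b": 1, "c": 5}, "b": {"a": 1, "c": 1}, "c": {"a": 5, "b": 1}}
print(shortest_next_hops(topo, "a"))
```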
[Diagram: an RCP server inside GENI maintaining BGP sessions with upstream ISPs]
50
Example Experiment: Valiant Load Balancing
  • Valiant Load Balancing
  • Full mesh of circuits between routers
  • Direct traffic through intermediate node
  • Ways GENI can support VLB
  • Virtual circuits with dedicated bandwidth
  • Experimentation with routing
  • Measurement of the effects of higher delay vs.
    higher throughput on users
  • Explore impact on buffer sizing in routers

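A minimal sketch of Valiant load balancing, with invented router names: each flow is first sent to a randomly chosen intermediate router in the full mesh, then on to its destination, spreading load regardless of the traffic matrix.

```python
# Sketch of Valiant load balancing over a full mesh (illustration
# only): route each flow via a random intermediate router.

import random

routers = ["r1", "r2", "r3", "r4"]

def vlb_path(src, dst, rng=random):
    """Two-hop path via a random intermediate node (possibly src or dst)."""
    mid = rng.choice(routers)
    path = [src, mid, dst]
    # Collapse repeats when the chosen intermediate is an endpoint.
    return [hop for i, hop in enumerate(path) if i == 0 or hop != path[i - 1]]

print(vlb_path("r1", "r3"))
```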
51
Example Experiment: TCP Switching
  • TCP switching
  • TCP SYN packet triggers circuit set-up
  • Effective traffic management and quality of
    service
  • Ways GENI can support TCP flow switching
  • Programmable routers act as edge routers
  • Trigger circuit set-up and tear-down
  • Buffer data packets during circuit set-up
  • Measure overheads and performance

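The buffering behavior above can be sketched as a small state machine. The names are hypothetical and circuit set-up is modeled as an explicit callback rather than real optical signaling:

```python
# Illustrative TCP-switching edge state machine: a SYN triggers
# circuit set-up, packets are buffered until the circuit is up,
# then flushed onto it in order.

class TcpSwitchPort:
    def __init__(self):
        self.circuit_up = False
        self.buffer = []
        self.sent = []

    def receive(self, packet):
        if self.circuit_up:
            self.sent.append(packet)    # circuit ready: forward directly
        else:
            self.buffer.append(packet)  # SYN and data wait for the circuit

    def circuit_setup_done(self):
        """Called when the circuit finishes coming up."""
        self.circuit_up = True
        self.sent.extend(self.buffer)   # flush in arrival order
        self.buffer.clear()

port = TcpSwitchPort()
port.receive("SYN")       # triggers circuit set-up (signaling not modeled)
port.receive("data-1")    # buffered while the circuit comes up
port.circuit_setup_done()
port.receive("data-2")    # forwarded directly on the circuit
print(port.sent)
```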
52
VINI: A Step Toward the GENI Backbone
  • Virtual Network Infrastructure (VINI)
  • Multiple network experiments in parallel
  • Connections to end users and upstream providers
  • Supporting Internet protocols and new designs
  • VINI as an initial experimental platform
  • Support researchers doing network experiments
  • Explore software challenges of building GENI
    backbone
  • GENI will have a much wider scope
  • Programmable hardware routers
  • Flexible control of the optical components
  • Wireless and sensor networks at the edge

53
Network Infrastructure
  • Network topology
  • Points of Presence
  • Link bandwidth
  • Upstream connectivity
  • Two backbones
  • Abilene (Internet2)
  • National Lambda Rail

54
Building Virtual Networks
  • Physical nodes
  • Initially, high-end computers
  • Later, network processors and FPGAs
  • Virtual routers (a la PlanetLab)
  • Multiple virtual servers on a single node
  • Separate shares of resources (e.g., CPU,
    bandwidth)
  • Extensions for resource guarantees and priority

55
Building Virtual Links
  • Creating the illusion of interfaces
  • Create a tunnel for each link in the topology
  • Assign IP addresses to the end-points of tunnels
  • Match tunnels with one-hop links in the real
    topology

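The address-assignment step above can be sketched with Python's standard ipaddress module: each link in the virtual topology gets its own /30 subnet, and its two usable addresses become the tunnel end-points. The base prefix and site names are arbitrary choices for the example, not VINI's real addressing plan.

```python
# Sketch of virtual-link construction: one tunnel per topology link,
# with a /30 address pair assigned to the two tunnel end-points.

import ipaddress

def assign_tunnel_addresses(links, base="10.8.0.0/16"):
    """links: list of (node_a, node_b); returns link -> (addr_a, addr_b)."""
    subnets = ipaddress.ip_network(base).subnets(new_prefix=30)
    plan = {}
    for link, subnet in zip(links, subnets):
        a, b = list(subnet.hosts())[:2]  # the two usable host addresses
        plan[link] = (str(a), str(b))
    return plan

topology = [("princeton", "nyc"), ("nyc", "chicago")]
print(assign_tunnel_addresses(topology))
```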
56
Building Multiple Virtual Topologies
  • Separate topology per experiment
  • Routers are virtual servers
  • Links are a subset of possible tunnels
  • Creating a customized environment
  • Running User Mode Linux (UML) in a virtual server
  • Configuring UML to see multiple interfaces
  • Enables running the routing software as is

[Diagram: multiple virtual routers (R) sharing one operating system]
57
Overcoming Efficiency Challenges
  • Packet forwarding must be fast
  • But, we are doing packet forwarding in software
  • And we don't want the extra overhead of UML
  • Solution: separate packet forwarding
  • Routing protocols running within UML
  • Packet forwarding running outside of UML

[Diagram: XORP routing software running inside UML; Click forwarding software running outside]
58
Carrying Real User Traffic
  • Users opt in to VINI
  • User runs VPN client
  • Connects to VINI node
  • External Internet hosts
  • VINI connects to Internet
  • Apply NAT at boundary

[Diagram: a client connects to a VINI node via OpenVPN; XORP in UML exchanges routing-protocol messages, Click forwards packets over UDP tunnels between nodes, and network address translation applies at the boundary to an external Internet server]
59
Example VINI Experiment
  • Configure VINI just like Abilene
  • VINI node per PoP
  • VINI link per inter-PoP link
  • Routing configuration as real routers
  • Network event
  • Inject link failure between two PoPs
  • in the midst of an ongoing file transfer
  • Measuring routing convergence
  • Packet monitoring of the data transfer
  • Active probes of round-trip time and loss
  • Detailed view of effects on data traffic

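A rough sketch of the convergence measurement, using invented probe data: given timestamped active-probe results around an injected failure, estimate how long traffic was disrupted before probes succeed again.

```python
# Illustrative convergence measurement from active-probe results.
# probes: list of (timestamp, delivered?) pairs; times are sample data.

def convergence_time(probes, failure_time):
    """Seconds from the failure until the first success after the last loss."""
    losses = [t for t, ok in probes if not ok and t >= failure_time]
    if not losses:
        return 0.0  # no probe was lost: routing converged immediately
    last_loss = max(losses)
    recovered = min(t for t, ok in probes if ok and t > last_loss)
    return recovered - failure_time

probes = [(0.0, True), (1.0, True), (2.0, False), (3.0, False), (4.0, True)]
print(convergence_time(probes, failure_time=1.5))
```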
60
VINI Current Status
  • Initial Abilene deployment
  • Eleven sites
  • Nodes running XORP and Click on UML
  • Upcoming deployment
  • Six sites in National Lambda Rail
  • with direct BGP sessions with CRS-1 routers
  • Dedicated 1 Gbps bandwidth between Abilene sites
  • In the works
  • Upstream connectivity via a commercial ISP in NYC
  • Speaking interdomain routing with the Internet

Initial write-up: http://www.cs.princeton.edu/~jrex/papers/vini.pdf
61
Conclusions
  • Future Internet poses many research challenges
  • Security, network management, economics, layer-2, …
  • Research community should rise to the challenge
  • Conceive of future network architectures
  • Prototype and evaluate architectures in realistic
    settings
  • Global Environment for Network Innovations (GENI)
  • Facility for evaluating new network architectures
  • Virtualization, programmability, and user opt-in
  • GENI backbone design
  • Fiber facility, tail circuits, and upstream
    connectivity
  • Programmable routers and a dynamic optical switch
  • VINI prototype
  • Concrete step along the way to the GENI backbone