1
  • Paul Grun
  • System Fabric Works
  • pgrun@systemfabricworks.com
  • 4/7/08

2
  • Channel I/O: an embarrassment of riches
  • Winning in the low latency space, marking time in
    the commercial sector
  • Never bet against Ethernet
  • Moving our customers forward
  • Conclusion

3
It all started with VIA
  • VIA, the Virtual Interface Architecture, gave us
    the concept of stack bypass
  • This gave us the makings of a low latency
    interconnect.
  • Perfectly suited for, among other things,
    clusters.

4
From VIA sprang InfiniBand
(after a few false starts)
  • An efficient transport
  • Network and phy layers
  • Virtual lanes
  • A management infrastructure
  • Standard methods for accessing these services

In short, the ability to efficiently conduct
multiple traffic flows over a single wire.
Interesting…the basics for a unified fabric
5
Equally interesting is this
  • Beyond a low latency unified fabric, we also
    gained the ability to directly connect virtual
    address spaces located in disjoint physical
    address spaces: RDMA (sketched below)
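
To make the idea concrete, here is a minimal sketch of a one-sided RDMA
write using the libibverbs verbs API. It assumes an already-connected
queue pair and that the peer's buffer address and rkey were exchanged
out of band; the function name and buffer contents are illustrative.

/* Sketch: a one-sided RDMA write with libibverbs (link with -libverbs).
 * Assumes `qp` is already connected and the peer registered its buffer
 * with IBV_ACCESS_REMOTE_WRITE, sending us its address and rkey. */
#include <infiniband/verbs.h>
#include <stdint.h>

int rdma_write_example(struct ibv_pd *pd, struct ibv_qp *qp,
                       uint64_t remote_addr, uint32_t rkey)
{
    static char buf[4096] = "written into a remote virtual address space";

    /* Register local memory so the adapter can DMA straight from it. */
    struct ibv_mr *mr = ibv_reg_mr(pd, buf, sizeof(buf),
                                   IBV_ACCESS_LOCAL_WRITE);
    if (!mr)
        return -1;

    struct ibv_sge sge = {
        .addr   = (uintptr_t)buf,
        .length = sizeof(buf),
        .lkey   = mr->lkey,
    };

    /* One-sided operation: the data lands in the peer's virtual
     * address space without involving the remote CPU at all. */
    struct ibv_send_wr wr = {
        .opcode     = IBV_WR_RDMA_WRITE,
        .sg_list    = &sge,
        .num_sge    = 1,
        .send_flags = IBV_SEND_SIGNALED,
        .wr.rdma    = { .remote_addr = remote_addr, .rkey = rkey },
    };
    struct ibv_send_wr *bad_wr;

    return ibv_post_send(qp, &wr, &bad_wr); /* completion shows up on the CQ */
}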

6
Tailor-made for virtualization
[Diagram: applications running inside VMs, with
platform resources shared through a VMM]
Server virtualization: platform resources are
shared among the containers within the platform.

Datacenter virtualization: applications connected
to pools of virtualized resources.
Better yet…more efficient use of resources
improves the green footprint dramatically.
7
  • High bandwidth, no packet dropping…all we would
    need would be a storage protocol and we'd have
    the perfect fabric.
  • Enter SRP, followed a little later by iSER.
  • Voila, IB is an ideal storage interconnect!

8
  • Eventually, we arrived at the crux of the issue.
  • What really matters is the way an application
    communicates with other applications and storage.
    This is the notion of Channel I/O.
  • OFA emerged as the keeper and developer of the
    Channel Interface

9
Channel I/O is about creating pipes
[Diagram: two applications joined by a pipe at the
channel interface, with transport and network
layers beneath]
It's very much an application-centric view: what
matters is how the application communicates.
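
As one illustration, a minimal sketch of the active side of opening
such a pipe with the librdmacm connection manager; the peer hostname
and port are hypothetical, and error handling and the listening side
are elided.

/* Sketch: the active side of opening a channel ("pipe") with librdmacm
 * (link with -lrdmacm -libverbs). Peer name and port are hypothetical. */
#include <rdma/rdma_cma.h>
#include <netdb.h>
#include <stdio.h>
#include <stdlib.h>

static void wait_event(struct rdma_event_channel *ec,
                       enum rdma_cm_event_type expected)
{
    struct rdma_cm_event *ev;
    if (rdma_get_cm_event(ec, &ev) || ev->event != expected) {
        fprintf(stderr, "unexpected CM event\n");
        exit(1);
    }
    rdma_ack_cm_event(ev);
}

int main(void)
{
    struct rdma_event_channel *ec = rdma_create_event_channel();
    struct rdma_cm_id *id;
    rdma_create_id(ec, &id, NULL, RDMA_PS_TCP);

    /* Resolve the peer; the same code runs over IB or iWARP wires. */
    struct addrinfo *res;
    getaddrinfo("peer.example.com", "7471", NULL, &res);
    rdma_resolve_addr(id, NULL, res->ai_addr, 2000 /* ms */);
    freeaddrinfo(res);
    wait_event(ec, RDMA_CM_EVENT_ADDR_RESOLVED);
    rdma_resolve_route(id, 2000);
    wait_event(ec, RDMA_CM_EVENT_ROUTE_RESOLVED);

    /* A minimal queue pair: the application's end of the pipe. */
    struct ibv_qp_init_attr attr = {
        .cap = { .max_send_wr = 16, .max_recv_wr = 16,
                 .max_send_sge = 1, .max_recv_sge = 1 },
        .qp_type = IBV_QPT_RC,
    };
    rdma_create_qp(id, NULL, &attr); /* NULL: the device's default pd */

    struct rdma_conn_param param = { .responder_resources = 1,
                                     .initiator_depth = 1 };
    rdma_connect(id, &param);
    wait_event(ec, RDMA_CM_EVENT_ESTABLISHED);

    /* The pipe is up: posts to the QP now bypass the kernel stack. */
    return 0;
}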
10
  • As the industry began to focus on Channel I/O,
    the underlying wire seemed to become
    progressively less important.
  • Someone noticed that it made sense to define RDMA
    over TCP/IP/Ethernet; iWARP emerged and became a
    member of the family of Channel I/O protocols.
    Now we're really cooking with gas.

11
This is looking pretty good!
  • So far, we've got
  • The world's lowest latency standards-based
    interconnect
  • An incredible foundation for virtualization
  • both server and data center virtualization
  • a native storage solution based on standard SCSI
    classes
  • wire independence
  • a unified fabric
  • An open Channel Interface for both Linux and
    Windows platforms
  • It works for legacy apps, and for those plucky
    native apps
  • And it's cheap to boot! (no pun intended)
  • I mean, is this nirvana, or what???

12
  • With this embarrassment of riches, we can address
    some of the world's most vexing computer science
    problems
  • The world's fastest/cheapest supercomputers
  • The world's lowest price/perf clusters
  • The world's most flexible/agile datacenters
  • Highly energy efficient data centers

The world is our oyster, right?
13
So how come we're not all rich?
14
The box score so far looks like this:
[Table: adoption scorecard by market segment]
  • Naturally most of the gold is in that last bucket

15
  • It's interesting that both supercomputing and
    HPC/clustering are primarily single fabric
    environments
  • With a few exceptions, these installations are
    purpose-built from the ground up, using a single
    fabric
  • But what about the commercial spaces?

16
  • Commercial datacenters, OTOH
  • tend to be built on top of a combination of
    fabrics,
  • tend to include huge application investments,
  • tend to involve huge amounts of existing
    infrastructure
  • tend to rely on high volume, commodity OTS
    hardware

These are some pretty large rocks to roll uphill
17
  • (Of course, the commercial space isn't
    monolithic…clearly, there are environments where
    the calculus of channel I/O produces a positive
    ROI.)

18
  • What about virtualization? Isn't that a key
    driver for the commercial space?
  • Server virtualization, which doesn't depend on
    channel I/O, is doing very nicely at driving up
    utilization, and helping reduce the physical
    space requirements.
  • Great progress is being made here, with or
    without channel I/O.

19
  • How about the green datacenter?
  • There is some promise here, but compared to the
    cost of migrating a massive investment to a
    greener approach, the pain will have to get
    a lot higher before channel I/O grabs a foothold.

…yes, there are certainly spots where the pain
threshold is indeed a lot higher.
20
  • Unified fabrics? Doesn't OpEx/TCO conquer all?
  • Well…

21
Nobody ever lost a bet on Ethernet
  • Ethernet continues to chug along
  • The Ethernet community is thinking about a
    lossless wire
  • Congestion management
  • reducing the burden on the transport, thus
    mitigating the traditional Achilles' heel of TCP/IP
  • Virtual lanes to support multiple streams on a
    single fabric
  • This is starting to sound like a credible
    converged fabric

22
  • …a converged fabric which is aimed squarely at
    the commercial space, i.e., those environments
    which can easily harvest the TCO benefits of a
    converged fabric.

23
[Diagram: Ethernet's converged-fabric stack, from
the application down to the phy]
It's a completely pragmatic, wire-centric
approach, intended to make the wire sufficiently
capable that it will support multiple streams of
traffic. A single-minded focus on creating a
unified wire.
24
  • Nobody believes that Ethernet will ever go away.
  • So we have a few choices.
  • Cede the converged fabric, commercial
    multi-protocol space to Ethernet's emerging
    enhancements.
  • Battle it out for ownership of the converged
    fabric. And probably lose.
  • Look for niche enclaves in the enterprise where
    IB dominates - the toehold strategy.
  • Drive channel I/O in above the wire.

How?
25
  • 1. Don't depend on the emergence of a pure IB
    environment; it won't happen (except in HPC).
    Instead…
  • Leverage the heavy lifting Ethernet is proposing
    to do by reducing the impedance mismatch between
    networks wherever possible. How about an IB
    transport on Ethernet?

[Diagram: an IB packet (LRH | GRH | BTH | ETH |
payload) shown alongside an Ethernet/TCP packet
(MAC | IPv6 | TCP | payload)]
(not as simple as that of course, but maybe worth
a look)
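
For a rough feel of the comparison, a back-of-the-envelope sketch using
the standard on-the-wire header sizes; the GRH deliberately shares the
IPv6 header layout, which is what makes a swap thinkable, and the
MAC+GRH+BTH pairing shown is purely a hypothetical, not a proposed wire
format.

/* Illustrative header-overhead arithmetic for the diagram above.
 * Sizes are the standard per-header byte counts; the MAC+GRH+BTH
 * combination is a hypothetical, not a defined wire format. */
#include <stdio.h>

int main(void)
{
    const int lrh = 8, grh = 40, bth = 12;   /* native IB framing */
    const int mac = 14, ipv6 = 40, tcp = 20; /* untagged Ethernet, no options */

    printf("native IB:    LRH+GRH+BTH  = %d bytes\n", lrh + grh + bth);
    printf("TCP/IPv6/Eth: MAC+IPv6+TCP = %d bytes\n", mac + ipv6 + tcp);
    printf("IB transport on Ethernet (hypothetical): MAC+GRH+BTH = %d bytes\n",
           mac + grh + bth);
    return 0;
}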
26
  • 2. Enable IB in every toehold that exists in the
    data center

[Diagram: a storage stack (filesystem, SCSI, FC-4)
carried over an IB transport, switch, and phy]
Make sure an application can use the channel I/O
interface to access system resources, regardless
of the protocol. FCoIB, anyone?
27
  • 3. Reduce impediments to accessing the features
    of the IB wire
  • IB already has the features now being proposed by
    the Ethernet community, in spades. Let's reduce
    the impediments to accessing those features.
  • Is there any good reason for continuing to
    support different bit rates/encoding schemes?
    (see the sketch below)
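
To put numbers on the encoding mismatch: 8b/10b (used on IB links of
this era) and 64b/66b (used by 10 Gigabit Ethernet) deliver quite
different payload rates from the same signaling rate. A small
illustrative calculation:

/* Illustrative: payload bandwidth delivered by the two encodings at
 * the same nominal signaling rate (e.g., a 10 Gb/s 4x SDR IB link). */
#include <stdio.h>

int main(void)
{
    const double signaling_gbps = 10.0;
    const double eff_8b10b  = 8.0 / 10.0;  /* 80% efficient  */
    const double eff_64b66b = 64.0 / 66.0; /* ~97% efficient */

    printf("8b/10b : %.2f Gb/s of data\n", signaling_gbps * eff_8b10b);
    printf("64b/66b: %.2f Gb/s of data\n", signaling_gbps * eff_64b66b);
    return 0;
}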

28
  • Apply industry leadership to make sure our
    customers aren't swamped in a quagmire of I/O
    protocols, wire protocols, and wires.
  • Remember that a key attraction of a converged
    fabric is its simplicity.

29
We need to ensure that the promise of channel I/O
can be delivered, no matter what the wire. We
need to lower the pain threshold of adopting
RDMA-based networks (aka channel I/O).
30
Conclusions
  • Channel I/O is a simple and powerful concept
  • The trend toward convergence in the
    multi-fabric space is accelerating; Ethernet is
    quietly but aggressively attacking this space.
  • Channel I/O is in danger of being relegated to
    the supercomputer and HPC niches.
  • Expanding RDMA networks into the multi-fabric
    space requires that we keep moving channel I/O
    forward

31
A paid advertisement
  • System Fabric Works is dedicated to ensuring that
    its customers derive maximum competitive
    advantage from their information technology
    investment.
  • Our focus is on accelerating adoption of advanced
    data center networks (Fabric Computing) as a
    foundation technology.
  • We provide
  • strategic and architectural consulting services
    directly to end user clients,
  • design and software engineering services to our
    vendor and integrator partners,
  • custom integrated computing, networking and
    storage products.

32
  • Abstract
  • Channel I/O delivers an embarrassment of riches
    in terms of the features it provides.
  • Despite that, significant inroads are being made
    primarily in the supercomputer, scientific and
    commercial HPC spaces…spaces characterized as
    being purpose-built on top of a single primary
    fabric.
  • The commercial space is characterized as being
    built on top of a combination of fabrics (as
    opposed to the single-fabric character of the HPC
    space).
  • This space, where most of the gold is, remains
    more-or-less elusive.
  • There are a few emerging exceptions.
  • Why is this?
  • The values delivered by channel I/O do not yet
    outweigh the costs of transitioning.
  • Meanwhile, Ethernet is quietly addressing those
    markets by providing a path toward a converged
    fabric.
  • Call to action
  • Lower the hurdles preventing end users from
    harvesting the values of channel I/O
  • Provide leadership in helping select customers
    through the I/O quagmire