1
OptIPuter Middleware Software Implementation Plans
  • Nut Taesombut
  • Department of Computer Science and Engineering
  • University of California San Diego
  • optiputer@csag.ucsd.edu
  • 4th OptIPuter Backplane Architecture Workshop
  • January 17, 2006

2
Objectives
  • Tame Configurable Optical Network for Applications
  • Define application abstractions: model of use
  • Define software architecture: internal, external, infrastructure
  • Deliver the Communication Capabilities of Lambdas
  • High-performance transports
  • Novel communication capabilities: multi-endpoint, photonic multicast
  • Expose and exploit optical network control
  • Enable and Demonstrate Visualization and Data-Intensive Applications
  • Interactive, novel communication
  • Direct access to distributed display and storage
  • Aggregate wide-area distributed storage efficiently

3
OptIPuter System Software Architecture
[Diagram: OptIPuter system software architecture, with components including Distributed Applications/Web Services, Visualization, Telescience, SAGE, JuxtaView, Data Services, Vol-a-Tile, LambdaRAM, and PIN/PDC]
4
Distributed Virtual Computer (DVC): Grid Resources + Private Network
  • Tame Configurable Optical Network for Applications
  • Leverage the Grid resource acquisition and use model
  • Integrate and optimize network configuration and resource selection
  • Provide a Simple Performance Abstraction and Environment for Applications (sketched below)
  • Access resources in a SAN-like fashion (private connection, deterministic performance)
  • Utilize standard and custom protocols (e.g., GTP, UDT, RBUDP) in a uniform way
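To make the abstraction concrete, here is a minimal, hypothetical sketch of what a DVC-style request could look like from an application's point of view. The module, class, and field names below are illustrative only and are not the actual DVC API; the point is that the application states its resource and connectivity needs once, and selection plus optical configuration happen behind the abstraction.

```python
# Hypothetical sketch of the DVC usage model: an application asks for Grid
# resources plus private, deterministic-performance connectivity in a single
# request. Names are illustrative, not the real DVC interface.
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class ResourceSpec:
    kind: str                   # e.g. "compute-cluster", "storage", "display"
    site: str                   # e.g. "UCSD", "UIC", "UvA"
    count: int = 1

@dataclass
class ConnectionSpec:
    endpoints: Tuple[str, str]  # pair of sites to join with a private lambda
    bandwidth_gbps: float       # requested dedicated capacity
    transport: str = "GTP"      # GTP, UDT, RBUDP, ... accessed uniformly

@dataclass
class DVCRequest:
    resources: List[ResourceSpec] = field(default_factory=list)
    connections: List[ConnectionSpec] = field(default_factory=list)

def request_dvc(req: DVCRequest) -> dict:
    """Stand-in for the DVC service: the real middleware would perform
    coordinated Grid resource selection and lambda configuration here."""
    return {
        "resources": [f"{r.count}x {r.kind}@{r.site}" for r in req.resources],
        "private_links": [
            f"{c.endpoints[0]}<->{c.endpoints[1]} "
            f"{c.bandwidth_gbps} Gbps via {c.transport}"
            for c in req.connections
        ],
    }

if __name__ == "__main__":
    req = DVCRequest(
        resources=[ResourceSpec("compute-cluster", "UCSD", 16),
                   ResourceSpec("storage", "UIC")],
        connections=[ConnectionSpec(("UCSD", "UIC"), bandwidth_gbps=10.0)],
    )
    print(request_dvc(req))
```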

5
OptIPuter High-Performance Transport Protocols
  • Bridge the Gap between High-Speed Link Technologies and the Growing Demands of Advanced Applications
  • TCP has well-documented performance problems on long-haul networks
  • Pursue complementary avenues of investigation
  • Efficient congestion/flow management, fairness among flows
  • High-speed group communication (multipoint-to-point, multipoint-to-multipoint)

6
Unified Communication Framework
[Diagram: Distributed Applications/Web Services on top of the DVC Communication Service/API (socket-like API, file transfer API, DVC configuration API; name resolution and group communication management), backed by the DVC Core Service and a DVC Descriptor (communication configuration), with pluggable transports: λ-stream, GTP, UDT, XCP, RBUDP, CEP]
  • Provide a Unified Framework for Access to a Range of Transport Protocols/Mechanisms
  • Single implementation required for the application
  • Adaptable to diverse resource conditions
  • Configurable transport protocol use (see the sketch below)
  • Choose the best performer for your environment
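A rough illustration of the "single implementation, configurable transport" idea: the application writes to one socket-like interface, and the concrete protocol (GTP, UDT, XCP, RBUDP, CEP) is selected by configuration. All names and classes here are hypothetical placeholders, not the real DVC communication API.

```python
# Hypothetical sketch of a socket-like API with a pluggable transport, as in
# the framework above. The transport classes are placeholders; real GTP/UDT/
# RBUDP/CEP implementations would sit behind the same interface.
from abc import ABC, abstractmethod

class Transport(ABC):
    @abstractmethod
    def connect(self, host: str, port: int) -> None: ...
    @abstractmethod
    def send(self, data: bytes) -> int: ...

class GTPTransport(Transport):
    def connect(self, host, port):
        print(f"GTP: receiver-driven connection to {host}:{port}")
    def send(self, data):
        return len(data)            # placeholder for a rate-allocated send

class UDTTransport(Transport):
    def connect(self, host, port):
        print(f"UDT: UDP-based reliable connection to {host}:{port}")
    def send(self, data):
        return len(data)

TRANSPORTS = {"GTP": GTPTransport, "UDT": UDTTransport}

class DVCSocket:
    """Socket-like facade: application code stays the same regardless of
    which transport the configuration selects for the current environment."""
    def __init__(self, transport_name: str):
        self._t = TRANSPORTS[transport_name]()
    def connect(self, host, port):
        self._t.connect(host, port)
    def send(self, data):
        return self._t.send(data)

if __name__ == "__main__":
    # The "best performer" for this environment comes from configuration;
    # the hostname is a made-up example.
    sock = DVCSocket(transport_name="GTP")
    sock.connect("storage.uic.example", 5000)
    sock.send(b"block of visualization data")
```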

7
OptIPuter System Software Year 2005
  • OptIPuter System Software Integrated and Delivered as Rocks Rolls
  • Bootable CD and Roll CDs for building an entire cluster
  • Rolls are optical extensions to the basic cluster configuration
  • OptIPuter System SW Roll (Gold Roll) v.1, July 2005
  • OptIPuter Software Architecture
  • DVC Middleware v1.0
  • Basic security, namespace and job management
  • Coordinated resource and network selection
  • Automated resource and network configuration (via PDC, Globus)
  • Uniform access to novel communication protocols (GTP, CEP)
  • Advanced Transport Protocols
  • GTP v0.95: receiver-based rate allocation and adaptation
  • CEP v1.1: multi-endpoint file transfer, greedy-based flow scheduling
  • Optical Network Configuration
  • PDC v2.0: intra-domain lambda reservation and allocation

More information on Rocks: http://www.rocksclusters.org/
8
OptIPuter 5-layer Demonstrations at iGrid 2005
  • Scientific Applications (Neuroscience,
    Geoscience)
  • Visualization
  • Distributed Virtual Computer (DVC)
  • Novel Transport Protocol (GTP, lambda-stream)
  • Optical Network Configuration (PIN/PDC)

9
OptIPuter Software Stack
10
Cross-Team Integration and Demonstrations
  • TeraBIT Juggling, 2-layer Demo: SC2004, Nov 2004
  • DVC middleware, high-speed transport (GTP)
  • Move data between OptIPuter network endpoints
  • Endpoints across UCSD, UvA, UIC, Pittsburgh
  • Achieved 17.8 Gbps, a TeraBIT in less than one minute
  • 3-layer Demo: AHM2005, Jan 2005
  • Visualization (JuxtaView/LambdaRAM), DVC middleware, high-speed transport
  • Remote data visualization (visualization at NCMIR, storage at UIC and UvA)
  • Use DVC to establish visualization environments
  • Automated Grid resource selection and binding
  • Achieved 2.6 Gbps on 7 streams
  • 5-layer Demos: iGrid 2005, Sep 2005
  • Applications, visualization, DVC middleware, high-speed transports (GTP), optical network configuration (PIN/PDC)
  • Demo 1: Collaborative Data Visualization with Earth Sciences
  • Demo 2: Real-time Brain Data Acquisition, Assembly and Analysis

11
Collaborative Data Visualization with Earth Sciences
[Network diagram: Amsterdam, Chicago and San Diego sites connected through the Seattle, Chicago and UCSD OptEx switches over 10 GE and 20 GE links]
  • Scientific Collaboration with Parallel Interactive 3D Visualization
  • Geographically distributed storage (San Diego, Chicago, Amsterdam)
  • Multi-gigabyte datasets (25 GB)
  • Visualization centers at Calit2 and SIO/UCSD
  • Demonstrating OptIPuter Technologies
  • Dynamic Grid resource and network configuration
  • Real-time data acquisition with GTP; achieved 16.3 Gbps peak rate (81.5% link utilization)
  • From UvA, UIC and JSOE to Calit2

12
Using DVC
  • User Specifies Resource and Network Connectivity Requirements
  • DVC Service Selects a Matching Resource Configuration (a selection sketch follows this list)
  • DVC Service Allocates Grid Resources and Private Networks
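A toy sketch of the middle step, resource matching. The inventory, the requirement format, and the first-fit rule are invented for illustration; the actual DVC service also co-optimizes network configuration together with the Grid Resource Manager and PIN/PDC, as shown in the diagram below.

```python
# Toy sketch of the "select a matching resource configuration" step.
# Inventory contents and the first-fit rule are illustrative only.

INVENTORY = [
    {"site": "UIC",    "nodes": 32, "connected_optex": "Chicago OptEx"},
    {"site": "SDSC",   "nodes": 64, "connected_optex": "San Diego OptEx"},
    {"site": "Calit2", "nodes": 16, "connected_optex": "San Diego OptEx"},
]

def select_configuration(required_nodes: int, required_optex: str):
    """First-fit match of a resource requirement against the inventory."""
    for cluster in INVENTORY:
        if (cluster["nodes"] >= required_nodes
                and cluster["connected_optex"] == required_optex):
            return cluster
    return None

if __name__ == "__main__":
    choice = select_configuration(required_nodes=16,
                                  required_optex="San Diego OptEx")
    print(choice)   # expected: the SDSC entry
```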

[Diagram: the DVC Service coordinating the Grid Resource Manager and the PIN/PDC Service from a Resource Spec; clusters at UIC, UvA, SDSC, CSE, NCMIR, Calit2 and JSOE connected through OXCs at the Chicago, Seattle and San Diego OptEx sites]
13
Multi-scale Correlated Microscopy Experiment
Reduction of information across scales and imaging technologies
Simultaneous visualization of multi-scale biological specimens and high-resolution video-conferencing
Source: David Lee, NCMIR, http://www.c5d.org/Projects/iGrid/igrid05.html
14
Real-time Brain Data Acquisition, Visualization and Collaborative Analysis
[Network diagram: an HV electron microscope (KDDI/Osaka, Japan), storage, UIC (Chicago), and a light microscope with visualization clusters at NCMIR/UCSD and Calit2/UCSD (San Diego), linked through the Seattle, Chicago and UCSD OptEx switches over 10 GE and 20 GE links]
  • Transparent Operation of a Scientific Experiment
  • Visualization of multi-scale sample specimens and live videos
  • Globally distributed visualization, storage and network resources
  • Real-time data acquisition and visualization
  • Demonstrating OptIPuter Technologies
  • Visualization: SAGE, TeraVision, JuxtaView, Vol-a-Tile, LambdaRAM
  • Dynamic resource and network configuration: DVC, PIN/PDC
  • High-speed transports: LambdaStream

Source: David Lee
15
Snapshot of SAGE on a 100-Megapixel Tiled Display at iGrid 2005
JuxtaView displays a dataset stored remotely at UIC/Chicago; the dataset is interactively streamed using LambdaRAM.
A real-time video of a light microscope from NCMIR is streamed using TeraVision.
An HDTV camera feed over TeraVision shows a conference room at NCMIR.
A pre-recorded video stream from Osaka, Japan shows an electron microscope experiment.
A dataset acquired using an electron microscope.
Vol-a-Tile renders a volume dataset stored locally.
Source: Raj Singh
16
Plan for OptIPuter Year 4
17
OptIPuter System Software Year 2006
  • DVC Middleware v1.2
  • Improved resource selection algorithms
  • High-speed communication with UDT and λ-stream
  • Web service interface (WS-RF)
  • GTP v1.3
  • Capability management at both senders and receivers
  • Improved CPU efficiency and scalability
  • CEP v2.0
  • Improved performance and stability
  • Integration of advanced protocols (e.g., GTP)
  • UDT v3.0 (aka Composable-UDT)
  • Support for multiple high-speed congestion control algorithms
  • Congestion-controlled unreliable messaging
  • λ-Stream
  • Support for multipoint-to-multipoint communication

18
OptIPuter Software Summit Rolls
  • OptIPuter Software Summit, Jan 2006
  • Integration of end-to-end OptIPuter software and intensive testing
  • OptIPuter Software Summit Rolls
  • OptIPuter System Software Roll v.2
  • DVC Middleware v1.0.1
  • DVC 1.0 integration with PIN
  • Advanced Transport Protocols
  • GTP v0.95, CEP v1.1, UDT v2.0
  • Optical Network Configuration
  • PDC v2.0, PIN v0.3
  • OptIPuter Visualization Roll (OptiViz) v.1
  • Visualization Toolkit
  • Ethereon, JuxtaView, Vol-a-Tile
  • Data Toolkit
  • LambdaRAM

19
Plan for GT4.0 Integration
20
Exposing OptIPuter Capabilities as Web Services
  • The OptIPuter Software System is a Service-Oriented Architecture
  • Capabilities are presented as web services that can be instantiated and accessed
  • Distributed Virtual Computers (DVCs), Real-Time DVCs
  • Network Protocol Configuration/Management
  • Optical Lightpath Configuration
  • Well-defined interfaces (e.g., WSRF)
  • Services are Linked and Accessed by Clients through the DVC Session Manager (see the sketch below)
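The sketch below shows only the shape of this interaction: a client asks a session-manager service to instantiate a capability and gets back an endpoint reference it can invoke later. The class, method, and URN formats are hypothetical and do not reflect the actual WSRF interface.

```python
# Hypothetical shape of "capabilities exposed as web services": a client talks
# to the DVC Session Manager, which links the underlying services (DVC core,
# lightpath configuration, protocol configuration). Names are illustrative.
import uuid

class SessionManager:
    """Stand-in for the DVC Session Manager web service."""
    CAPABILITIES = {"dvc", "realtime-dvc", "lightpath-config", "protocol-config"}

    def __init__(self):
        self._sessions = {}

    def create(self, capability: str, spec: dict) -> str:
        """Instantiate a capability and return an endpoint reference (EPR)."""
        if capability not in self.CAPABILITIES:
            raise ValueError(f"unknown capability: {capability}")
        epr = f"urn:optiputer:session:{uuid.uuid4()}"   # made-up endpoint ref
        self._sessions[epr] = {"capability": capability, "spec": spec}
        return epr

    def invoke(self, epr: str, operation: str, **args) -> dict:
        """Dispatch an operation to the service behind the EPR."""
        session = self._sessions[epr]
        return {"epr": epr, "operation": operation,
                "capability": session["capability"], "args": args}

if __name__ == "__main__":
    mgr = SessionManager()
    epr = mgr.create("dvc", spec={"sites": ["UCSD", "UIC"], "bandwidth_gbps": 10})
    print(mgr.invoke(epr, "submit_task", command="render_volume"))
```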

[Diagram: a Web Service Client reaches the DVC Session Manager via WS-RF; the Session Manager coordinates the DVC Core, Real-Time DVC, Communication Configuration, Optical Network Configuration, and the Grid Resource Manager through GT4/GRAM]
21
Example of Use
  • User requests resources and connectivity via a WS client (to the DVC Session Manager, over WS-RF)
  • The DVC service (DVC Core) finds a matching resource configuration
  • Grid resources and the network are allocated using WS interfaces (OGSA, WS-RF): DVC Grid resource binding, DVC network binding, and the configurable optical network; the DVC is created
  • User obtains the DVC reference (a WS endpoint reference) and submits computation/communication tasks through it (an end-to-end sketch follows)
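Putting the four steps together, a hedged end-to-end sketch. All names, structures, and identifiers below are hypothetical stand-ins for the real WS calls; the sketch only mirrors the sequence of the bullets above.

```python
# Hypothetical end-to-end flow mirroring the four steps above:
# 1. the client submits resource/connectivity requirements,
# 2. the DVC service finds a matching configuration,
# 3. Grid resources and the network are bound and the DVC is created,
# 4. the client submits tasks through the returned endpoint reference.
def run_example():
    # Step 1: requirements, as the WS client would express them.
    requirements = {
        "resources": [{"kind": "viz-cluster", "site": "Calit2"},
                      {"kind": "storage", "site": "UIC"}],
        "connectivity": [{"between": ("Calit2", "UIC"), "gbps": 10}],
    }

    # Step 2: matching (placeholder for the DVC core's selection logic).
    configuration = {"clusters": ["Calit2 viz", "UIC storage"],
                     "lightpath": "Calit2<->Chicago OptEx<->UIC, 10 Gbps"}

    # Step 3: binding via WS interfaces (placeholder for OGSA/WS-RF calls).
    dvc_endpoint = "urn:optiputer:dvc:example-0001"

    # Step 4: submit work through the DVC reference.
    task = {"dvc": dvc_endpoint, "run": "JuxtaView", "dataset": "remote-25GB"}
    return requirements, configuration, task

if __name__ == "__main__":
    for part in run_example():
        print(part)
```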

22
Questions?