Transcript and Presenter's Notes

Title: THE OFFICE OF NONPROLIFERATION


1
Ultra-High Bandwidth Forum
Presented by Brookhaven National Laboratory and NYSERNet
January 27, 2004
Scott Bradley, 631.344.5745, bradley@bnl.gov
2
Purpose
  • To create a dialog between BNL, Long Island academic and research institutions, and NYSERNet regarding future ultra-high bandwidth requirements, and NYSERNet's planned buildout of the fiber infrastructure required to deliver it

3
Institutions Represented Here Today
  • BNL
  • NYSERNet
  • SUNY Stony Brook
  • NYIT
  • Cold Spring Harbor Laboratory
  • Long Island Jewish Medical System

4
Background
  • In order to remain at the forefront of science, BNL must be connected to ultra-high bandwidth scientific networks (e.g. National Lambda Rail, UltraLight)
  • BNL is at a geographic disadvantage, given eastern Long Island's lack of dark fiber infrastructure

5
Dark Fiber Topology of Eastern Long Island
6
Agenda
  • Welcome/Introductions (Scott Bradley, ITD)
  • Presentation of BNL near- and mid-term HEP/NP requirements (Dr. Bruce Gibbard, Manager of BNL RHIC/US ATLAS Computing Facilities)
  • NYSERNet brief (Dr. Tim Lance, President,
    NYSERNet)
  • Open Discussion
  • Wrap-up/Next Steps

7
HEP/NP WAN Needs at BNL
  • Dr. Bruce G. Gibbard
  • BNL
  • 27 January, 2004

8
The Driver: Evolution of Our Science
  • Over the past 2-3 decades, many aspects of basic science, led by High Energy and Nuclear Physics, have evolved toward very large and costly projects requiring huge, internationally distributed and funded collaborations
  • The appearance of large distributed collaborations has put a premium on the use of effective wide-area information technology services:
  • First e-mail, the Web, etc. for wide-area communication
  • Now the Grid for wide-area integration and delivery of the computing capabilities (CPU, storage, etc.) available at collaborating institutions
  • There is therefore a need for rapid growth in the performance and reliability of the WANs that connect collaborating institutions, as the infrastructure supporting these services

9
BNL Perspective
  • BNL has primary responsibility for two projects/programs which are prototypical of this brand of very large, internationally distributed collaboration with distributed computing resources:
  • The Relativistic Heavy Ion Collider (RHIC), for which it is the host institution
  • US participation in the ATLAS experiment at CERN's Large Hadron Collider (LHC), for which it is the lead US institution for both the construction project and the computing facilities (US Tier 1 Center)
  • For each project, BNL is responsible for:
  • Directly supplying at BNL a major computing facility for the storage, production processing and analysis of data
  • Marshaling and integrating additional computing resources from a large number of institutions distributed around the world into a single coherent and effective virtual computing facility via the Grid and its underlying WAN infrastructure

10
ATLAS Distributed Computing Model
(Diagram: the tiered ATLAS distributed computing model)
  • CERN / outside resource ratio roughly 1:2; Tier 0 : (sum of Tier 1s) : (sum of Tier 2s) roughly 1:1:1
  • ATLAS experiment online system: about 1 PByte/sec off the detector, 100-400 MBytes/sec into the Tier 0 center
  • Tier 0/1 at CERN: 5M SI2K, >1 PB disk, tape robot, Castor
  • 2.5 Gbits/sec links to the Tier 1 centers (HPSS): BNL (500k SI2K, 1 PB), IN2P3 Center, INFN Center, RAL Center
  • 2.5 Gbps onward to Tier 2 centers, 100-1000 Mbits/sec to Tier 3 institutes, Tier 4 workstations with a physics data cache
  • Roles: Tier 0 DAQ, reconstruction, archive; Tier 1 reconstruction, simulation and analysis; Tier 2 analysis, simulation; Tier 3 interactive analysis (a minimal code sketch of this hierarchy follows)
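Read as a data structure, the diagram above is a tree of sites, each with a quoted compute capacity, storage capacity, and uplink rate. A minimal Python sketch of that structure, using only the figures quoted on the slide (the class and field names, and the generic Tier 2 entry, are illustrative and not part of the presentation):

# Minimal sketch of the tiered model above; only the capacities and link
# rates quoted on the slide are used, everything else is illustrative.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Tier:
    name: str
    cpu_si2k: float              # compute capacity in SI2K (0 if not quoted)
    disk_tb: float               # disk capacity in TB (0 if not quoted)
    uplink_gbps: float           # WAN link toward the parent tier
    children: List["Tier"] = field(default_factory=list)

tier2 = Tier("Generic Tier 2 center", cpu_si2k=0, disk_tb=0, uplink_gbps=2.5)
bnl_tier1 = Tier("BNL Tier 1", cpu_si2k=500e3, disk_tb=1000, uplink_gbps=2.5,
                 children=[tier2])
cern_tier0 = Tier("CERN Tier 0/1", cpu_si2k=5e6, disk_tb=1000, uplink_gbps=0,
                  children=[bnl_tier1])

def total_cpu(tier: Tier) -> float:
    """Sum the SI2K capacity of a tier and everything below it."""
    return tier.cpu_si2k + sum(total_cpu(c) for c in tier.children)

print(total_cpu(cern_tier0))     # 5.5e6 SI2K for the branch sketched here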
11
Processing Model
  • Types of data:
  • RAW: raw data from the detector
  • ESD: Event Summary Data (reconstructed from RAW)
  • AOD: Analysis Object Data (physics objects derived from ESD)
  • DPD: Derived Physics Data (highly distilled from ESD and AOD)
  • Data held and produced by tier (sketched in code below):
  • Tier 0: RAW, ESD
  • Tier 1: RAW, ESD, AOD, DPD
  • Tier 2: ESD, AOD, DPD
  • Other: DPD
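An illustrative Python sketch of the derivation chain and tier placement listed above (the dictionaries restate the slide; the helper function is an assumption added for illustration):

# RAW -> ESD -> AOD -> DPD derivation chain and tier placement, as listed above.
DERIVED_FROM = {
    "RAW": (),                # straight off the detector
    "ESD": ("RAW",),          # Event Summary Data, output of reconstruction
    "AOD": ("ESD",),          # Analysis Object Data
    "DPD": ("ESD", "AOD"),    # Derived Physics Data, the most distilled form
}

HELD_AT = {
    "Tier 0": {"RAW", "ESD"},
    "Tier 1": {"RAW", "ESD", "AOD", "DPD"},
    "Tier 2": {"ESD", "AOD", "DPD"},
    "Other":  {"DPD"},
}

def can_derive_locally(tier: str, product: str) -> bool:
    """True if the tier already holds every input the product is derived from."""
    return set(DERIVED_FROM[product]) <= HELD_AT[tier]

print(can_derive_locally("Tier 2", "DPD"))   # True: ESD and AOD are on site
print(can_derive_locally("Tier 2", "ESD"))   # False: RAW is not kept at Tier 2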

12
RHIC Computing Facility
  • In the RHIC case, BNL is the Tier 0/1 facility
  • Supply computing infrastructure for the RHIC experiments
  • Including code development, repository and distribution
  • Supply the complete range of RHIC data handling/processing:
  • Raw data recording and repository for all data
  • Production processing (reconstruction) of all data
  • Programmatic event selection and distilled data set production
  • Support chaotic high-level analysis by individuals
  • Limited Monte Carlo generation
  • Remote sites contribute more as one moves down the list

13
US ATLAS Computing Facility
  • High-Level Mission
  • Supply MOU-agreed capacities to the ATLAS distributed virtual computing facility
  • Guarantee the computing capabilities and capacities required for effective participation by U.S. physicists in the ATLAS physics program
  • Functions
  • Serve as the primary U.S. data repository for ATLAS
  • Perform programmatic event selection and distilled data production
  • Support chaotic high-level analysis by individuals
  • Generate Monte Carlo data
  • Supply technical support for smaller US computing resource centers

14
A U.S. ATLAS Physics Analysis Center at BNL
  • Motivation
  • Position the U.S. to ensure active participation in ATLAS physics analysis
  • Builds on the existing Tier 1 ATLAS Computing Center, CORE software leadership at BNL, and theorists who are already working closely with experimentalists
  • This BNL Center will become a place where U.S. physicists come with their students and post-docs
  • Scope and Timing
  • Hire 1 key physicist per year, starting in 2003, to add to the excellent existing staff and cover all aspects of ATLAS physics analysis: tracking, calorimetry, muons, trigger, simulation, etc.
  • Expect the total staff, including migration from D0, to reach 25 by 2007
  • The first hire arrived in August 2003
  • The plan is to have a few of the members in residence at CERN for 1-2 years on a rotating basis
  • Will place additional demand on, and be critically dependent on, BNL WAN connectivity

15
RHIC and ATLAS Capacities at BNL
16
Mass Storage
17
Linux Processor Farms
18
Online (Disk) Storage
19
Raw Data Recording Rates at RHIC
20
Current RHIC Run
  • Au-Au this run rather than d-Au last year, thus inherently higher data rates
  • Accelerator performing much better than the best seen before, only 3 weeks into the run
  • Experiments have updated DAQ capabilities
  • Currently seeing a peak raw data recording rate of 1 TByte/hr
  • With expected improvements in luminosity and duty factor, 1-1.5 PByte of raw data is likely during the current run, compared to 200-300 TBytes for the last run: roughly a 5x increase (rough arithmetic sketched below)
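A quick back-of-the-envelope restatement of those figures as a sustained rate and a growth range (illustrative arithmetic only; the input numbers are the ones quoted above):

# Back-of-the-envelope check of the figures quoted above.
TB = 1e12    # bytes (decimal convention)
PB = 1e15

# 1 TByte/hr peak raw data recording rate, expressed as a sustained line rate.
peak_gbps = (1 * TB / 3600) * 8 / 1e9
print(f"1 TByte/hr is roughly {peak_gbps:.1f} Gb/s sustained")   # ~2.2 Gb/s

# 200-300 TBytes last run vs. 1-1.5 PByte expected this run.
low, high = (1.0 * PB) / (300 * TB), (1.5 * PB) / (200 * TB)
print(f"growth factor between {low:.1f}x and {high:.1f}x")       # ~3.3x to 7.5x, ~5x nominal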

21
Data Flow Analysis by V. Lindenstruth
22
HEP/NP WAN Requirements at BNL
23
Qualitative Issues As Well
  • Need to share effectively among a number of very different requirements; need differentiated services (the three classes are sketched in code after this slide):
  • Long-term programmatic bulk transfers (CERN -> BNL, BNL -> LBNL, etc.): background activity?
  • Short-term programmatic bulk transfers (BNL -> Tier 2s, peer Tier 1s, etc.): scheduled activity?
  • High-priority chaotic transfers (support for interactive analysis, calibration metadata requests, etc.): priority-driven preemptive activity?
  • Predictability is required to schedule the use of network-dependent resources
  • Greatly increased reliability is needed because of the interdependency of the distributed components of virtual facilities
  • The WAN is now the backplane of a global computer (or the LAN of a global facility)
  • Failure implies major disruption of a huge, widely distributed resource
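A minimal sketch of the three transfer classes above as a simple priority queue. The class names, priority values, and example transfers are assumptions added for illustration; the presentation does not specify any particular scheduling mechanism.

# Illustrative priority queue over the three service classes named above.
import heapq
from dataclasses import dataclass, field

PRIORITY = {
    "chaotic": 0,      # interactive analysis, calibration metadata: preemptive
    "scheduled": 1,    # short-term programmatic bulk, e.g. BNL -> Tier 2 pushes
    "background": 2,   # long-term programmatic bulk, e.g. CERN -> BNL
}

@dataclass(order=True)
class Transfer:
    priority: int
    size_gb: float = field(compare=False)
    description: str = field(compare=False)

queue: list = []
heapq.heappush(queue, Transfer(PRIORITY["background"], 5000, "CERN -> BNL raw data"))
heapq.heappush(queue, Transfer(PRIORITY["scheduled"], 800, "BNL -> Tier 2 AOD push"))
heapq.heappush(queue, Transfer(PRIORITY["chaotic"], 2, "calibration metadata request"))

while queue:
    t = heapq.heappop(queue)                    # chaotic first, background last
    print(f"serve: {t.description} ({t.size_gb} GB)")

In a real differentiated-service setup the chaotic class would preempt in-flight bulk transfers rather than merely jump the queue; this sketch only illustrates the ordering of the three classes.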

24
Issue for BNL is Site Connectivity
  • Progress on infrastructure seems good:
  • Abilene / National Lambda Rail
  • ESnet backbone
  • StarLight CERN link
  • Work on network technology seems to be occurring:
  • Logistical network buffering
  • Studies of revisions to TCP/IP
  • Strategies to deliver differentiated services
  • Unlike Fermilab, SLAC, etc., BNL is 100% dependent on ESnet for connectivity to the major network peering points
  • No dark fiber right-of-way to a major POP of the major networks

25
  • NYSERNet Presentation
  • Dr. Timothy Lance
  • President, NYSERNet

26
Introduction
NYSERNet Mission
NYSERNet advances network technologies and
applications that enable collaboration and
promote technology transfer for research and
education, expanding these to government,
industry, and the broader community.
27
Presenter
  • Dr. Timothy Lance
  • NYSERNet President and Board Chair since 1998
  • Member of the Board of Directors since 1992
  • Doctorate in Mathematics from Princeton University
  • Chair of the Department of Mathematics and Statistics at the University at Albany
  • Mathematical research in topology and in complex analysis
  • Networking efforts on issues such as intellectual property, downstreaming, publication, modeling and mathematical problems of next-generation networks, and the interface between researchers and the increasingly complex network on which they depend
  • Primary duty as Board Chair is the articulation and delivery of a Board-informed vision of advanced network services

28
NYSERNet
NYSERNet Initiatives
  • New York City Dark Fiber Network (NYCDFN)
  • The NYSERNet Statewide Initiative Network
  • New England Research and Education Network (NEREN)
  • National Lambda Rail (NLR)

29
NYSERNet Infrastructure
New York City Dark Fiber Network (NYCDFN)
  • Multi-Phase All-Optical Network Deployment
  • Purpose-Built
  • Dedicated Dark Fiber Strands for Participating Institutions
  • Institution-Selected Optronics/Electronics
  • Hub Loop
  • MAN Loop
  • Twenty-Year IRUs
  • Private Fiber Loop
  • 32 Avenue of the Americas
  • 60 Hudson Street
  • 111 8th Avenue

30
NYSERNet Infrastructure
NYSERNet's Colocation Facility
  • NYSERNet Leased and Managed
  • Vendor-Neutral Colocation Site
  • Carrier-Grade Environment
  • Aggregation Point for National and International R&E Networks
  • Aggregation Point for Commercial Network Services
  • Access to 2 Additional Colos through NYSERNet Agreements

31
NYSERNet
NYSERNet's Colocation Facility: International Network Resources
  • Manhattan Landing (ManLan)
  • GÉANT
  • CANet
  • HEANet
  • SINet
  • QatarFN

32
(No Transcript)
33
NYSERNet
NYSERNet's Colocation Facility: Domestic Network Resources
  • Abilene
  • Pre-positioned for additional NSF-sponsored programs
  • Commodity ISP(s)
  • Other services being developed

34
Extensible Terascale Facilities
35
NYSERNet Infrastructure
The Statewide Initiative
Current Network Infrastructure
Next Generation Planning
36
Current R&E Network (statewide map)
  • Map legend: POP (Point of Presence), Gateways, Connected Institution, Connection In-Progress
  • POPs shown: Syracuse, Troy
  • Gateways/external networks shown: Abilene, CAnet, Man Lan, QWest, Broadwing
  • Institutions shown: Rensselaer Polytechnic Institute; City University of New York (CUNY); University at Buffalo; Syracuse University; University of Rochester; New School University; Rochester Institute of Technology; Albert Einstein College of Medicine, Yeshiva University; SUNY Geneseo; University at Albany; Pace University; Cornell University; Columbia University; New York University; Binghamton University; Weill Medical College of Cornell University; Marist College; The Rockefeller University; American Museum of Natural History; IBM Watson Research Center; Mount Sinai School of Medicine; Stony Brook University; Hofstra University
37
NYSERNet
The Statewide Initiative Objectives
  • Scalable Infrastructure to Support High
    Performance Applications
  • Leverage Manhattan Resources
  • Provide Statewide Access to Major National
    Research Networks
  • Provide alternate path from New York City/New
    England to National Resources

38
Potential Network Expansion (next-generation map)
  • Map legend: POP (Point of Presence), Gateways, Next Generation Backbone, Potential Circuit/FOC/Lambda Paths
  • POPs shown: Syracuse, Troy
  • Gateways/external networks shown: Internet 2, NEREN, NLR, ETF, CAnet, Man Lan, Commodity ISP
39
NYSERNet Outreach
New England Research and Education Network (NEREN)
NYSERNet is working with New England institutions to create a dedicated, private network linking the New England states and New York State. The resulting infrastructure will further capitalize on NYSERNet's colocation facility in New York City in order to facilitate educational consortia and bridge research collaborations throughout the Northeast.
40
(No Transcript)
41
NYSERNet
Cold Spring Harbor Laboratory

42
Next Steps
  • Obtain a price point for a 10 Gb/s connection between BNL and 32 Avenue of the Americas
  • Dark fiber vs. managed service
  • Bundle the total Long Island requirement for buying power?
  • NYSERNet to engage KeySpan and Lightpath on our behalf to provide dark fiber/managed lambdas to eastern Long Island for connectivity to NLR

43
Next Steps (contd)
  • Create a business model, and exert political influence, to demonstrate the benefit to providers of investing in ultra-high bandwidth networks for Long Island:
  • Massive educational network requirement on Long Island
  • Much business to be had
  • Politically important to provide to the population
  • Leverage K-12 needs (18 months out?)

44
Next Steps (contd)
  • BNL to create a mailing list of today's participants to continue and further the dialog
  • Sister institutions to provide bandwidth requirement projections to NYSERNet, similar to the BNL HEP projections shown today

45
QUESTIONS? COMMENTS?