Networks for HENP and ICFA SCIC - PowerPoint PPT Presentation

Transcript and Presenter's Notes

1
  • Networks for HENP and ICFA SCIC

Harvey B. Newman, California Institute of Technology
APAN High Energy Physics Workshop, January 21, 2003
2
Next Generation Networks for Experiments: Goals
and Needs
Large data samples explored and analyzed by
thousands of globally dispersed scientists, in
hundreds of teams
  • Providing rapid access to event samples, subsets
    and analyzed physics results from massive data
    stores: from Petabytes by 2002, ~100 Petabytes by
    2007, to ~1 Exabyte by 2012
  • Providing analyzed results with rapid turnaround,
    by coordinating and managing the large but
    LIMITED computing, data handling and NETWORK
    resources effectively
  • Enabling rapid access to the data and the
    collaboration, across an ensemble of networks of
    varying capability
  • Advanced integrated applications, such as Data
    Grids, rely on seamless operation of our LANs
    and WANs, with reliable, monitored, quantifiable
    high performance

3
Four LHC Experiments: The
Petabyte to Exabyte Challenge
  • ATLAS, CMS, ALICE, LHCb: Higgs and New
    Particles; Quark-Gluon Plasma; CP Violation

Data stored: ~40 Petabytes/Year and UP;
CPU: 0.30 Petaflops and UP;
0.1 Exabyte (2007) to 1 Exabyte (2012?)
(1 EB = 10^18 Bytes) for the LHC
Experiments
4
LHC Data Grid Hierarchy
[Diagram: the tiered LHC computing model. CERN/Outside resource ratio
~1:2; Tier0 : (sum of Tier1s) : (sum of Tier2s) ~ 1:1:1. The Online
System takes ~1 PByte/sec from the experiment and sends 100-400
MBytes/sec to the Tier 0+1 center at CERN (~700k SI95, ~1 PB disk,
tape robot, HPSS). Tier 1 centers (FNAL, IN2P3, INFN, RAL) link to
CERN and to Tier 2 centers at ~2.5 Gbps; Tier 3 institute servers
(~0.25 TIPS, with physics data caches) connect onward at 0.1 to 10
Gbps to Tier 4 workstations.]
Tens of Petabytes by 2007-8. An Exabyte within ~5
years later.
5
ICFA and Global Networks for HENP
  • National and International Networks, with
    sufficient (rapidly increasing) capacity and
    capability, are essential for:
  • The daily conduct of collaborative work in both
    experiment and theory
  • Detector development & construction on a global
    scale; data analysis involving physicists from
    all world regions
  • The formation of worldwide collaborations
  • The conception, design and implementation of
    next generation facilities as global networks
  • Collaborations on this scale would never have
    been attempted if they could not rely on
    excellent networks

6
ICFA and International Networking
  • ICFA Statement on Communications in Int'l
    HEP Collaborations of October 17, 1996. See
    http://www.fnal.gov/directorate/icfa/icfa_communicaes.html
  • ICFA urges that all countries and institutions
    wishing to participate even more effectively and
    fully in international HEP Collaborations should:
  • Review their operating methods to ensure they
    are fully adapted to remote participation
  • Strive to provide the necessary communications
    facilities and adequate international bandwidth

7
ICFA Network Task Force 1998: Bandwidth
Requirements Projection (Mbps)
100-1000 X Bandwidth Increase Foreseen for
1998-2005. See the ICFA-NTF Requirements
Report: http://l3www.cern.ch/newman/icfareq98.html
8
ICFA Standing Committee on Interregional
Connectivity (SCIC)
  • Created by ICFA in July 1998 in Vancouver,
    following the ICFA-NTF
  • CHARGE:
  • Make recommendations to ICFA concerning the
    connectivity between the Americas, Asia and
    Europe (and network requirements of HENP)
  • As part of the process of developing these
    recommendations, the committee should:
  • Monitor traffic
  • Keep track of technology developments
  • Periodically review forecasts of future
    bandwidth needs, and
  • Provide early warning of potential problems
  • Create subcommittees when necessary to meet the
    charge
  • The chair of the committee should report to ICFA
    once per year, at its joint meeting with
    laboratory directors (Feb. 2003)
  • Representatives: Major labs, ECFA, ACFA, NA
    Users, S. America

9
ICFA-SCIC Core Membership
  • Representatives from major HEP laboratories:
    W. von Rueden (CERN), Volker Guelzow (DESY),
    Vicky White (FNAL), Yukio Karita (KEK),
    Richard Mount (SLAC)
  • User Representatives: Richard Hughes-Jones
    (UK), Harvey Newman (USA), Dean Karlen (Canada)
  • For Russia: Slava Ilyin (MSU)
  • ECFA representatives: Denis Linglin (IN2P3,
    Lyon), Federico Ruggieri (INFN Frascati)
  • ACFA representatives: Rongsheng Xu (IHEP
    Beijing); H. Park, D. Son (Kyungpook Nat'l
    University)
  • For South America: Sergio F. Novaes
    (University of Sao Paulo)

10
SCIC Sub-Committees
  • Web Page: http://cern.ch/ICFA-SCIC/
  • Monitoring: Les Cottrell
    (http://www.slac.stanford.edu/xorg/icfa/scic-netmon),
    with Richard Hughes-Jones (Manchester), Sergio
    Novaes (Sao Paulo), Sergei Berezhnev (RUHEP),
    Fukuko Yuasa (KEK), Daniel Davids (CERN), Sylvain
    Ravot (Caltech), Shawn McKee (Michigan)
  • Advanced Technologies: Richard Hughes-Jones,
    with Vladimir Korenkov (JINR, Dubna), Olivier
    Martin (CERN), Harvey Newman
  • The Digital Divide: Alberto Santoro (Rio, Brazil),
    with Slava Ilyin, Yukio Karita, David O. Williams;
    also Dongchul Son (Korea), Hafeez Hoorani
    (Pakistan), Sunanda Banerjee (India), Vicky
    White (FNAL)
  • Key Requirements: Harvey Newman;
    also Charlie Young (SLAC)

11
Transatlantic Net WG (HN, L. Price):
Bandwidth Requirements

BW Requirements Increasing Faster Than
Moore's Law. See http://gate.hep.anl.gov/lprice/TAN
12
History: One large Research Site
Much of the Traffic: SLAC → IN2P3/RAL/INFN, via
ESnet, France, Abilene and CERN
Current Traffic ~400 Mbps; ESnet
Limitation. Projections: 0.5 to 24 Tbps by ~2012
13
Tier0-Tier1 Link Requirements: Estimate for
the Hoffmann Report 2001
  • Tier1 ↔ Tier0 Data Flow for Analysis: 0.5 - 1.0
    Gbps
  • Tier2 ↔ Tier0 Data Flow for Analysis: 0.2 - 0.5
    Gbps
  • Interactive Collaborative Sessions (30 Peak):
    0.1 - 0.3 Gbps
  • Remote Interactive Sessions (30 Flows Peak): 0.1
    - 0.2 Gbps
  • Individual (Tier3 or Tier4) data transfers:
    0.8 Gbps (Limit to 10 Flows of 5 Mbytes/sec
    each)
  • TOTAL Per Tier0 - Tier1 Link: 1.7 - 2.8 Gbps
    (the component sums are checked in the sketch
    after this list)
  • NOTE:
  • Adopted by the LHC Experiments; given in the
    upcoming Hoffmann Steering Committee Report as
    1.5 - 3 Gbps per experiment
  • Corresponds to 10 Gbps Baseline BW Installed on
    the US-CERN Link
  • The Hoffmann Panel also discussed the effects of
    higher bandwidths
  • For example, all-optical 10 Gbps Ethernet across
    WANs
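A quick cross-check of the TOTAL above, as a minimal Python sketch (the component figures are copied verbatim from the list; the dictionary labels are ours):

```python
# Sum the per-component ranges quoted on this slide.
components_gbps = {
    "Tier1-Tier0 analysis flow":          (0.5, 1.0),
    "Tier2-Tier0 analysis flow":          (0.2, 0.5),
    "interactive collaborative sessions": (0.1, 0.3),
    "remote interactive sessions":        (0.1, 0.2),
    "individual Tier3/Tier4 transfers":   (0.8, 0.8),
}
low = sum(lo for lo, hi in components_gbps.values())
high = sum(hi for lo, hi in components_gbps.values())
print(f"Total per Tier0-Tier1 link: {low:.1f} - {high:.1f} Gbps")
# -> Total per Tier0-Tier1 link: 1.7 - 2.8 Gbps
```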

14
Tier0-Tier1 BW Requirements: Estimate for
the Hoffmann Report 2001
  • Does Not Include the more recent ATLAS Data
    Estimates:
  • 270 Hz at 10^33 instead of 100 Hz
  • 400 Hz at 10^34 instead of 100 Hz
  • 2 MB/Event instead of 1 MB/Event
  • Does Not Allow Fast Download to Tier3/4 of
    Small Object Collections
  • Example: Download 10^7 Events of AODs (10^4 Bytes
    each) → 100 GBytes. At 5 MBytes/sec per person
    (as above), that's ~6 Hours! (See the sketch
    after this list.)
  • This is still a rough, bottoms-up, static, and
    hence Conservative Model.
  • A Dynamic distributed DB or Grid system with
    Caching, Co-scheduling, and Pre-Emptive data
    movement may well require greater bandwidth
  • Does Not Include Virtual Data operations:
    Derived Data Copies; Data-description overheads
  • Further MONARC Computing Model Studies are Needed
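The ~6-hour figure follows directly from the stated sizes and the 5 MBytes/sec per-person rate; a minimal sketch of the arithmetic:

```python
# Download time for 10^7 AOD events of 10^4 bytes each at 5 MB/s.
events     = 10**7
event_size = 10**4            # bytes per event
rate       = 5 * 10**6        # bytes/sec (5 MBytes/sec per person)

total_bytes = events * event_size              # 1e11 bytes = 100 GBytes
hours = total_bytes / rate / 3600
print(f"{total_bytes / 1e9:.0f} GBytes at 5 MB/s ~ {hours:.1f} hours")
# -> 100 GBytes at 5 MB/s ~ 5.6 hours (the slide rounds to ~6)
```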

15
ICFA SCIC Meetings and Topics
  • Focus on the Digital Divide This Year
  • Identification of problem areas; work on ways to
    improve
  • Network Status and Upgrade Plans in Each Country
  • Performance (Throughput) Evolution in Each
    Country, and Transatlantic
  • Performance Monitoring: World-Overview (Les
    Cottrell, IEPM Project)
  • Specific Technical Topics (Examples):
    Bulk transfer, New Protocols; Collaborative
    Systems, VOIP
  • Preparation of Reports to ICFA (Lab Directors'
    Meetings)
  • Last Report: World Network Status and Outlook -
    Feb. 2002
  • Next Report: Digital Divide, Monitoring,
    Advanced Technologies; Requirements Evolution -
    Feb. 2003
  • Seven Meetings in 2002; at KEK on December 13.

16
Network Progress in 2002 and Issues for Major
Experiments
  • Backbones & major links advancing rapidly to the
    10 Gbps range
  • Gbps end-to-end throughput data flows have
    been tested; will be in production soon (in 12
    to 18 Months)
  • Transition to Multi-wavelengths within 1-3 yrs. in
    the most favored regions
  • Network advances are changing the view of the
    networks' roles
  • Likely to have a profound impact on the
    experiments' Computing Models, and bandwidth
    requirements
  • More dynamic view: GByte to TByte data
    transactions; dynamic path provisioning
  • Net R&D Driven by Advanced integrated
    applications, such as Data Grids, that rely on
    seamless LAN and WAN operation
  • With reliable, quantifiable (monitored), high
    performance
  • All of the above will further open the Digital
    Divide chasm. We need to take action

17
ICFA SCIC: R&E Backbone and International Link
Progress
  • GEANT Pan-European Backbone
    (http://www.dante.net/geant)
  • Now interconnects >31 countries; many trunks at
    2.5 and 10 Gbps
  • UK: SuperJANET Core at 10 Gbps
  • 2.5 Gbps NY-London, with 622 Mbps to ESnet and
    Abilene
  • France (IN2P3): 2.5 Gbps RENATER backbone from
    October 2002
  • Lyon-CERN Link Upgraded to 1 Gbps Ethernet
  • Proposal for dark fiber to CERN by end 2003
  • SuperSINET (Japan): 10 Gbps IP and 10 Gbps
    Wavelength Core
  • Tokyo to NY Links: 2 X 2.5 Gbps started; Peering
    with ESnet by Feb.
  • CA*net4 (Canada): Interconnect customer-owned
    dark fiber nets across Canada at 10 Gbps,
    started July 2002
  • Lambda-Grids by ~2004-5
  • G-WiN (Germany): 2.5 Gbps Core; Connect to US at
    2 X 2.5 Gbps; Support for SILK Project: Satellite
    links to FSU Republics
  • Russia: 155 Mbps Links to Moscow (Typ. 30-45 Mbps
    for Science)
  • Moscow-Starlight Link to 155 Mbps (US NSF +
    Russia Support)
  • Moscow-GEANT and Moscow-Stockholm Links: 155 Mbps

18
R&E Backbone and Int'l Link Progress
  • Abilene (Internet2): Upgrade from 2.5 to 10 Gbps
    in 2002
  • Encourage high throughput use for targeted
    applications: FAST
  • ESnet: Upgrade to 10 Gbps As Soon as Possible
  • US-CERN:
  • To 622 Mbps in August; Move to STARLIGHT
  • 2.5G Research Triangle from 8/02:
    STARLIGHT-CERN-NL; to 10G in 2003. 10 Gbps
    SNV-Starlight Link Loan from Level(3)
  • SLAC + IN2P3 (BaBar):
  • Typically ~400 Mbps throughput on the US-CERN and
    Renater links
  • 600 Mbps Throughput is the BaBar Target for Early
    2003 (with ESnet and Upgrade)
  • FNAL: ESnet Link Upgraded to 622 Mbps
  • Plans for dark fiber to STARLIGHT proceeding
  • NY-Amsterdam Donation from Tyco, September 2002,
    Arranged by IEEAF: 622 Mbps + 10 Gbps Research
    Wavelength
  • US National Light Rail Proceeding; Startup
    Expected this Year

19
(No Transcript)
20
2.5 → 10 Gbps Backbone
> 200 Primary Participants; All 50 States, D.C.
and Puerto Rico; 75 Partner Corporations and
Non-Profits; 23 State Research and Education Nets;
15 GigaPoPs Support 70% of Members
21

2003: OC192 and OC48 Links Coming Into
Service. Need to Consider Links to US HENP Labs
22
National R&E Network Example:
Germany - DFN Transatlantic Connectivity 2002
  • 2 X OC48: NY-Hamburg and NY-Frankfurt
  • Direct Peering to Abilene (US) and CANARIE
    (Canada)
  • UCAID said to be adding another 2 OC48s in a
    Proposed Global Terabit Research Network (GTRN)
  • Virtual SILK Highway Project (from 11/01): NATO
    ($2.5 M) and Partners ($1.1 M)
  • Satellite Links to South Caucasus and
    Central Asia (8 Countries)
  • In 2001-2 (pre-SILK): BW 64-512 kbps
  • Proposed VSAT to get 10-50 X BW for the same cost
  • See www.silkproject.org
  • Partners: CISCO, DESY, GEANT, UNDP, US
    State Dept., World Bank, UC London, Univ.
    Groningen

23
National Research Networks in Japan
  • SuperSINET
  • Started operation January 4, 2002
  • Support for 5 important areas: HEP, Genetics,
    Nano-Technology, Space/Astronomy, GRIDs
  • Provides 10 λs (wavelengths):
  • 10 Gbps IP connection
  • Direct intersite GbE links
  • 9 Universities Connected
  • January 2003: Two TransPacific 2.5 Gbps
    Wavelengths (to NY); Japan-US-CERN Grid
    Testbed Soon

[Map: SuperSINET topology. WDM paths and IP routers link sites
including NIFS, NIG, Nagoya U, Osaka U, Kyoto U (ICR), NII (Hitot.),
U Tokyo, ISAS, IMS and NAO, via hubs at Nagoya, Osaka and Tokyo,
with IP connectivity to the Internet.]
24
SuperSINET: Updated Map, October 2002


25
APAN Links in Southeast Asia, January 15, 2003


26
National Light Rail Footprint
  • NLR
  • Buildout Started November 2002
  • Initially 4 x 10 Gbps Wavelengths
  • To 40 x 10 Gbps Waves in the Future

NREN Backbones reached 2.5-10 Gbps in 2002 in
Europe, Japan and the US. The US transition is now to
optical, dark fiber, multi-wavelength R&E networks.
27
Progress: Max. Sustained TCP Throughput on
Transatlantic and US Links
  • 8-9/01: 105 Mbps in 30 Streams SLAC-IN2P3; 102
    Mbps in 1 Stream CIT-CERN
  • 11/5/01: 125 Mbps in One Stream (modified
    kernel) CIT-CERN
  • 1/09/02: 190 Mbps for One stream shared on 2 x
    155 Mbps links
  • 3/11/02: 120 Mbps Disk-to-Disk with One Stream
    on a 155 Mbps link (Chicago-CERN)
  • 5/20/02: 450-600 Mbps SLAC-Manchester on OC12
    with ~100 Streams
  • 6/1/02: 290 Mbps Chicago-CERN One Stream on
    OC12 (mod. kernel)
  • 9/02: 850, 1350, 1900 Mbps Chicago-CERN with
    1, 2, 3 GbE Streams on an OC48 Link
  • 11-12/02 (FAST): 940 Mbps in 1 Stream
    SNV-CERN; 9.4 Gbps in 10 Flows SNV-Chicago

Also see http://www-iepm.slac.stanford.edu/monitoring/bulk/
and the Internet2 E2E Initiative:
http://www.internet2.edu/e2e
28
FAST (Caltech): A Scalable, Fair Protocol for
Next-Generation Networks: from 0.1 To 100 Gbps
(SC2002, 11/02)
Highlights of FAST TCP:
  • Standard Packet Size
  • 940 Mbps single flow/GE card:
    9.4 petabit-m/sec; 1.9 times the LSR
    (Internet2 Land Speed Record)
  • 9.4 Gbps with 10 flows:
    37.0 petabit-m/sec; 6.9 times the LSR
  • 22 TB in 6 hours in 10 flows
  • Implementation:
  • Sender-side (only) mods
  • Delay (RTT) based
  • Stabilized Vegas
[Chart: throughput traces for Sunnyvale-Geneva, Baltimore-Geneva and
Baltimore-Sunnyvale paths, comparing SC2002 runs (1, 2 and 10 flows)
against prior Internet2 LSR marks (29.3.00 multiple, 9.4.02 1 flow,
22.8.02 IPv6).]
URL: netlab.caltech.edu/FAST
Next: 10GbE; 1 GB/sec disk to disk
C. Jin, D. Wei, S. Low + FAST Team + Partners
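Since the slide notes that FAST is delay (RTT) based with sender-side-only modifications, here is a minimal Python sketch of a FAST-style periodic window update. The update rule follows the published FAST TCP form; the parameter values and the fixed-RTT toy loop are illustrative assumptions, not the SC2002 configuration:

```python
# Illustrative FAST-style window update (delay-based, sender-side).
# w: congestion window in packets; alpha: target packets queued in net.
def fast_window_update(w, base_rtt, rtt, alpha=200.0, gamma=0.5):
    # Equilibrium target: scale w by baseRTT/RTT, plus alpha packets.
    target = (base_rtt / rtt) * w + alpha
    # Move a fraction gamma toward the target, never more than doubling.
    return min(2.0 * w, (1.0 - gamma) * w + gamma * target)

# Toy run with a fixed RTT (in reality RTT rises as w fills queues):
w = 100.0
for _ in range(500):
    w = fast_window_update(w, base_rtt=0.100, rtt=0.102)
# Approaches alpha / (1 - baseRTT/RTT) = 200 / (1 - 100/102) ~ 10200
print(f"window after 500 updates: {w:.0f} packets")
```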
29
HENP Major Links: Bandwidth Roadmap (Scenario)
in Gbps
Continuing the Trend: ~1000 Times Bandwidth
Growth Per Decade. We are Rapidly Learning to Use
and Share Multi-Gbps Networks
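A one-line check of what that trend implies year over year (the 1000x-per-decade figure is from the slide):

```python
# 1000x per decade is roughly a doubling of bandwidth every year.
annual_factor = 1000 ** (1 / 10)
print(f"Implied annual growth: {annual_factor:.2f}x")  # ~2.00x
```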
30
HENP Lambda Grids: Fibers for Physics
  • Problem: Extract Small Data Subsets of 1 to 100
    Terabytes from 1 to 1000 Petabyte Data Stores
  • Survivability of the HENP Global Grid System,
    with hundreds of such transactions per day
    (circa 2007), requires that each transaction be
    completed in a relatively short time.
  • Example: Take 800 secs to complete the
    transaction. Then (checked in the sketch after
    this list):

    Transaction Size (TB)    Net Throughput (Gbps)
      1                        10
      10                       100
      100                      1000 (Capacity of Fiber Today)

  • Summary: Providing Switching of 10 Gbps
    wavelengths within ~3-5 years, and Terabit
    Switching within 5-8 years, would enable
    Petascale Grids with Terabyte transactions, as
    required to fully realize the discovery potential
    of major HENP programs, as well as other
    data-intensive fields.
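The throughput column is just transaction size over the 800-second target; a minimal sketch:

```python
# Throughput needed to move a transaction of a given size in 800 s.
def required_gbps(terabytes, seconds=800):
    bits = terabytes * 1e12 * 8        # TB -> bits
    return bits / seconds / 1e9        # -> Gbps

for tb in (1, 10, 100):
    print(f"{tb:>4} TB in 800 s -> {required_gbps(tb):,.0f} Gbps")
# ->    1 TB in 800 s -> 10 Gbps
#      10 TB in 800 s -> 100 Gbps
#     100 TB in 800 s -> 1,000 Gbps
```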

31
IEPM: PingER Deployment
  • Measurements from:
  • 34 monitors in 14 countries
  • Over 790 remote hosts; 3600 monitor-remote site
    pairs
  • Recently added 23 Sites in 17 Countries, due to
    the ICTP Collaboration
  • Reports on RTT, loss, reachability, jitter,
    reorders, duplicates
  • Measurements go back to Jan-95
  • 79 Countries Monitored:
  • Contain > 80% of the world population
  • 99% of online users of the Internet
  • Mainly A&R sites

[World map showing PingER monitoring sites and remote sites]
32
History: Loss Quality
(Cottrell)
  • Fewer sites have very poor to dreadful
    performance
  • More have good performance (< 1% Loss)

33
History - Throughput Quality
Improvements from US
80% annual improvement: Factor of ~100 over 8 yrs
Bandwidth of TCP < MSS / (RTT * Sqrt(Loss))  (1)
Progress, but the Digital Divide is Maintained
(1) "The Macroscopic Behavior of the TCP Congestion
Avoidance Algorithm", Mathis, Semke, Mahdavi,
Ott, Computer Communication Review 27(3), July
1997
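To make bound (1) concrete, a small sketch with assumed path parameters (the MSS, RTT and loss values are illustrative, not from the slide):

```python
# Mathis et al. bound: TCP bandwidth < MSS / (RTT * sqrt(loss)).
from math import sqrt

def mathis_bw_mbps(mss_bytes=1460, rtt_s=0.120, loss=1e-4):
    return mss_bytes * 8 / (rtt_s * sqrt(loss)) / 1e6

print(f"{mathis_bw_mbps():.1f} Mbps")           # ~9.7 Mbps at 1e-4 loss
print(f"{mathis_bw_mbps(loss=1e-6):.1f} Mbps")  # ~97.3 Mbps at 1e-6 loss
# Cutting loss 100x buys 10x bandwidth: hence the focus on clean paths.
```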
34
NREN Core Network Size (Mbps-km):
http://www.terena.nl/compendium/2002
[Chart, logarithmic scale from 100 to 100M Mbps-km, classifying NREN
core networks: Leading (Nl, ~10M-100M), Advanced (Fi, Cz, Hu, Es,
~1M-10M), In Transition (Ch, It, Pl, Gr, Ir, ~100k-1M), and Lagging
(Ro ~1k-10k, Ukr ~100-1k).]
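Our reading of the Mbps-km metric is capacity times distance, summed over the core links; a toy sketch with invented links to show the unit:

```python
# "Core network size" as bandwidth x distance summed over core links.
links = [
    (10_000, 300),   # a 10 Gbps link spanning 300 km
    (2_500, 500),    # a 2.5 Gbps link spanning 500 km
    (155, 1_200),    # a 155 Mbps link spanning 1200 km
]
size = sum(mbps * km for mbps, km in links)
print(f"Core network size: {size:,} Mbps-km")  # 4,436,000 Mbps-km
```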
35
Work on the Digital Divide: Several Perspectives
  • Identify & Help Solve Technical Problems:
    Nat'l, Regional, Last 10/1/0.1 km
  • Inter-Regional Proposals (Example: Brazil)
  • US NSF Proposal (10/2002); possible EU LIS
    Proposal
  • Work on Policies and/or Pricing: pk, in, br, cn,
    SE Europe, ...
  • E.g. RoEduNet (2-6 to 34 Mbps); Pricing not so
    different from the US-CERN price in 2002, for a
    few Gbps
  • Find Ways to work with vendors, NRENs, and/or
    Gov'ts
  • Use Model Cases: Installation of new advanced
    fiber infrastructures; Convince Neighboring
    Countries
  • Poland (to 5k km Fiber); Slovakia; Ireland
  • Exploit One-off Solutions: E.g. extend the SILK
    Project (DESY/FSU satellite links) to a SE
    European site
  • Work with other organizations: Terena, Internet2,
    AMPATH, IEEAF, UN, etc. to help with technical
    and/or political solutions

36
Digital Divide Committee
37
Gigabit Ethernet Backbone; 100 Mbps Link to GEANT
38
GEANT: 155 Mbps
Romania: 155 Mbps to GEANT and Bucharest; Inter-City
Links of 2-6 Mbps, to 34 Mbps in 2003;
Annual Cost > 1 MEuro
39
Digital Divide WG Activities
  • Questionnaire Distributed to the HENP Lab
    Directors and the Major Collaboration
    Managements
  • Plan on a Project to Build a HENP World Network
    Map, Updated and Maintained on a Web Site,
    Backed by a Database:
  • Systematize and Track Needs and Status
  • Information: Link Bandwidths, Utilization,
    Quality, Pricing, Local Infrastructure, Last
    Mile Problems, Vendors, etc.
  • Identify Urgent Cases; Focus on Opportunities to
    Help
  • First ICFA SCIC Workshop: Focus on the Digital
    Divide
  • Target Date: February 2004 in Rio de Janeiro
    (LISHEP)
  • Organization Meeting: July 2003
  • Plan: Statement at the WSIS, Geneva (December
    2003)
  • Install and Leave Behind a Good Network
  • Then 1 (to 2) Workshops Per Year, at Sites that
    Need Help

40
We Must Close the Digital Divide
  • Goal: To Make Scientists from All World
    Regions Full Partners in the Process of
    Search and Discovery
  • What ICFA and the HENP Community Can Do:
  • Help identify and highlight specific needs (to
    Work On):
  • Policy problems; Last Mile problems; etc.
  • Spread the message: ICFA SCIC is there to help;
    Coordinate with AMPATH, IEEAF, APAN, Terena,
    Internet2, etc.
  • Encourage Joint programs: such as DESY's Silk
    project; Japanese links to SE Asia and China;
    AMPATH to So. America
  • NSF LIS Proposals: US and EU to South America
  • Make direct contacts, arrange discussions with
    gov't officials
  • ICFA SCIC is prepared to participate
  • Help Start, or Get Support for, Workshops on
    Networks (and Grids)
  • Discuss & Create opportunities
  • Encourage, help form funded programs
  • Help form Regional support & training groups
    (requires funding)

41
IEEAF: Cultivate and promote practical solutions to
delivering scalable, universally available and
equitable access to suitable bandwidth and
necessary network resources in support of
research and education collaborations.
Groningen Carrier Hotel, March 2002
http://www.ieeaf.org
42
CA-Tokyo by ~1/03; NY-AMS 9/02 (Research)
43
Global Medical Research Exchange
Initiative: Bio-Medicine and Health
Sciences
Global Quilt Initiative: GMRE Initiative - 001
Propose a Global Research and Education Network for
Physics
44
Networks, Grids and HENP
  • The current generation of 2.5-10 Gbps network
    backbones arrived in the last 15 Months in the
    US, Europe and Japan
  • Major transoceanic links also at 2.5 - 10 Gbps
    in 2003
  • Capability Increased ~4 Times, i.e. 2-3 Times
    Moore's Law
  • Reliable high End-to-end Performance of network
    applications (large file transfers; Grids) is
    required. Achieving this requires:
  • End-to-end monitoring; a coherent approach
  • Getting high performance (TCP) toolkits in
    users' hands
  • Digital Divide: Network improvements are
    especially needed in SE Europe, So. America, SE
    Asia, and Africa
  • Key Examples: India, Pakistan, China; Brazil;
    Romania
  • Removing Regional, Last Mile Bottlenecks and
    Compromises in Network Quality is now On the
    critical path, in all world regions
  • Work in Concert with APAN, Internet2, Terena,
    AMPATH; DataTAG, the Grid projects and the
    Global Grid Forum