ICFA Standing Committee on Interregional Connectivity (SCIC)
Transcript and Presenter's Notes
1
  • ICFA Standing Committee on
    Interregional Connectivity (SCIC)

Harvey B. Newman, California Institute of Technology
ICFA Meeting, Vancouver, February 10, 2005
2
ICFA Standing Committee on Interregional
Connectivity (SCIC)
  • Created in July 1998 in Vancouver, Following
    the ICFA-NTF
  • CHARGE
  • Make recommendations to ICFA concerning the
    connectivity between the Americas, Asia and
    Europe
  • As part of the process of developing these
    recommendations, the committee should
  • Monitor traffic on the world's networks
  • Keep track of technology developments
  • Periodically review forecasts of future
    bandwidth needs, and
  • Provide early warning of potential problems
  • Create subcommittees as needed to meet the charge
  • Representatives: Major labs, ECFA, ACFA, North
    and South American Users
  • The chair of the committee reports to ICFA once
    per year, at its joint meeting with laboratory
    directors (Today)

3
SCIC in 2004: http://cern.ch/icfa-scic
  • Strong Focus on the Digital Divide Continues
  • Progress in Monitoring; Funding Remains an
    Ongoing Issue
  • Intensive Work in the Field: Presentations and
    Demos at > 70 Meetings
  • E.g., Internet2, TERENA, AMPATH, APAN, GLORIAD,
    1st Digital Divide and HEPGrid Workshop (Rio,
    2/04), 3rd Int'l Grid Workshop in Daegu (August
    26-28, 2004), CHEP2004, SC2004, Chinese-American
    Net Symposium, etc.
  • 2nd ICFA Digital Divide and Grid Workshop in
    Daegu (May 2005)
  • HENP increasingly visible to governments and
    heads of state
  • Through Network advances (records), Grid
    developments, Work on the Digital Divide, and
    issues of Global Collaboration
  • Also through the World Summit on the Information
    Society Process. Next Step is WSIS II in TUNIS
    November 2005
  • A Striking Picture Continues to Emerge:
    Remarkable Progress in Some Regions, and a
    Deepening Digital Divide Among Nations

4
SCIC in 2004-2005: http://cern.ch/icfa-scic
  • Three 2005 Reports, Presented to ICFA Today
  • Main Report: Networking for HENP (H. Newman
    et al.)
  • Includes Updates on the Digital Divide and
    World Network Status; Brief updates on Monitoring
    and Advanced Technologies
  • 18 Appendices: A World Network Overview, with
    Status and Plans for the Next Few Years of Nat'l
    and Regional Networks, and Optical Network
    Initiatives
  • Monitoring Working Group Report (L. Cottrell)
  • Digital Divide Workshop Status Report (D. Son)
  • Also See:
  • SCIC 2003 Digital Divide Report (A. Santoro et
    al.)
  • SCIC 2004 Digital Divide in Russia Report
    (V. Ilyin)
  • TERENA (www.terena.nl) 2004 Compendium

5
SCIC Main Conclusion for 2004, Setting the Tone
for 2005
  • The disparity among regions in HENP could
    increase even more sharply, as we learn to use
    advanced networks effectively, and we develop
    dynamic Grid systems in the most favored
    regions
  • We must therefore take action, and work to
    Close the Digital Divide
  • To make Physicists from All World Regions Full
    Partners in Their Experiments and in the Process
    of Discovery
  • This is essential for the health of our global
    experimental collaborations, our plans for future
    projects, and our field.

6
Long Term Trends in Network Traffic Volumes:
300-1000X per Decade
ESnet Accepted Traffic 1990-2004: Exponential
Growth Since '92; the Annual Growth Rate Increased
from 1.7X to 2.0X Per Year in the Last 5 Years
(L. Cottrell, W. Johnston)
[Chart: ESnet and SLAC traffic growth in steps,
10 Gbit/s scale]
  • SLAC Traffic ~400 Mbps; Growth in Steps (ESnet
    Limit): ~10X/4 Years
  • July 2005: 2x10 Gbps links, one for production
    and one for research
  • Projected 2 Terabits/s by 2014
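The headline "300-1000X per decade" can be checked directly against the quoted annual growth factors (a minimal sketch; the 1.7X and 2.0X annual rates are the figures from this slide):

```python
# Compounding the measured annual growth factors over ten years
# reproduces the decade-scale range quoted in the title.
def decade_factor(annual_factor: float) -> float:
    """Total growth over 10 years at a constant annual growth factor."""
    return annual_factor ** 10

print(f"1.7X/year -> {decade_factor(1.7):.0f}X per decade")   # ~202X
print(f"2.0X/year -> {decade_factor(2.0):.0f}X per decade")   # 1024X
```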

7
Internet 2 Land Speed Record (LSR)
  • Product of transfer speed and distance using
    standard Internet (TCP/IP) protocols.
  • Single Stream: 7.5 Gbps X 16 kkm with Linux,
    July 2004
  • IPv4 Multi-stream record with FAST TCP: 6.86 Gbps
    X 27 kkm, Nov 2004
  • IPv6 record: 5.11 Gbps between Geneva and
    Starlight, Jan. 2005
  • Concentrating now on reliable Terabyte-scale file
    transfers
  • Disk-to-disk Marks: 536 Mbytes/sec (Windows);
    300 Mbytes/sec (Linux)
  • Note System Issues: PCI-X Bus, Network Interface,
    Disk I/O Controllers, CPU

[Chart: Internet2 LSRs (HEP records in blue),
Throughput in Petabit-m/sec; Nov. 2004 record:
7.2 Gbps X 20.7 kkm, listed in Guinness World
Records: http://www.guinnessworldrecords.com/]
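The LSR figure of merit (transfer speed times distance) for the marks above works out as follows (a sketch; the rates and distances are those quoted on this slide):

```python
# Internet2 Land Speed Record metric: throughput times terrestrial
# distance, conventionally quoted in petabit-meters per second.
def lsr_petabit_m_per_s(gbps: float, kkm: float) -> float:
    bits_per_s = gbps * 1e9
    meters = kkm * 1e6  # 1 kkm = 1000 km = 1e6 m
    return bits_per_s * meters / 1e15

print(f"Single stream, 7/2004: {lsr_petabit_m_per_s(7.5, 16):.1f} Pb-m/s")   # 120.0
print(f"Multi-stream, 11/2004: {lsr_petabit_m_per_s(6.86, 27):.1f} Pb-m/s")  # 185.2
```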
8
SC2004 Bandwidth Record by HEP: High Speed
TeraByte Transfers for Physics
  • 80 10GbE switch ports
  • 50 10GbE Network Adapters
  • Aggregate Rate of 101 Gbps
  • 2.95 Gbps sustained to Brazil

9
CERN-FNAL (pre)Production Traffic
  • Data Challenge
  • 3 days at 4.75 Gbps (11/2004)
  • Disk to disk
  • Standard Packet Size
  • LSR tuning

[Chart: data challenge traffic]
FAST flows don't affect production traffic; this
is not the case with Reno TCP.
10
We are Following the HENP Bandwidth Roadmap for
Major Links (in Gbps)
Continuing Trend: ~1000 Times Bandwidth Growth
Per Decade, Keeping Pace with Network BW Usage
(ESnet, SURFnet, etc.)
11
Refined LHC Data Grid Hierarchy
DISUN
Network Roadmaps -> Richly Structured, Dynamic
Grid System
12
NRENs Annual Incoming and Outgoing External
Traffic 2003 (PetaBytes)
[Chart: incoming vs. outgoing traffic per NREN]
Traffic generated by the LHC experiments and
transferred between CERN and the Tier1 and Tier2
centers will represent a significant load
increase on the NREN backbones in Europe.
13
PingER World View from SLAC
C. Asia, Russia, SE Europe, L. America, M. East,
China: 4-5 yrs behind; India, Africa: 7-8 yrs
behind
S.E. Europe, Russia: Catching up. Latin Am.,
China: Keeping up. India, Mid-East, Africa:
Falling Behind
FNAL and SLAC Support, but Funding Is an Issue for
Operational Manpower. For quality data, need
constant vigilance (hosts disappear, security
blocks pings, remote host lists need updating ...)
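The "years behind" figures rest on the assumption that network performance improves exponentially: a region whose measured throughput is worse by some factor lags by log(factor)/log(annual rate) years. A minimal sketch (the 1.8X/year improvement rate and the example ratios are illustrative assumptions, not figures from the report):

```python
import math

# Under exponential improvement at a constant annual factor, a region whose
# throughput is worse by `ratio` is log(ratio)/log(annual_factor) years behind.
def years_behind(ratio: float, annual_factor: float = 1.8) -> float:
    return math.log(ratio) / math.log(annual_factor)

print(f"10X worse  -> {years_behind(10):.1f} years behind")    # ~3.9 years
print(f"100X worse -> {years_behind(100):.1f} years behind")   # ~7.8 years
```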
14
SCIC Recommendation: Close the Digital Divide in
End-to-End Performance
  • Reliable high end-to-end performance of networked
    applications such as large file transfers and
    Data Grids requires:
  • End-to-end monitoring extending to all regions
    serving our community. We need coherent monitoring
    that allows physicists to extract clear,
    unambiguous and inclusive information (IEPM,
    MonALISA)
  • Upgrading campus infrastructures. These are still
    not designed to support Gbps data transfers at most
    HEP centers.
  • Removing local, last mile, and nat'l and int'l
    bottlenecks end-to-end, whether technical or
    political in origin. While nat'l and int'l
    backbones have reached 10 Gbps speeds in many
    countries, the bandwidths across borders, the
    countryside or the city may be much less.
  • Then: Close the Wizard Gap (Historically
    100-300X) by deploying toolkits with best
    protocols/practices/monitors widely
  • This problem is very widespread in our community,
    with examples stretching from China to South
    America to the Northeastern U.S. Root causes
    vary from lack of local infrastructure to
    unfavorable pricing

15
Bandwidth achievable by a network wizard and a
typical user
SC2004 HEP demo: 101 Gbps
1 Stream: 7.5 Gbps
16
National Lambda Rail (NLR)
Transition beginning now to optical,
multi-wavelength, community-owned or leased dark
fiber networks for R&E
  • NLR
  • Northern Route LA-JAX: Sept 2004
  • Southern Route: Coming Up Now
  • Initially 4 10G Waves; up to 40 Waves
  • HEP, Internet2 HOPI, UltraScience Net, Cisco
    Research Initiatives; DISUN, UltraLight, and
    GridNet Projects
  • 25 U.S. State Optical Fiber Initiatives (CA, FL,
    IL, ...)

17
ESnet Beyond FY07 (W. Johnston)
  • High Quality, High Bandwidth, Production Internet
    Service (10G)
  • Bandwidth Needed for Next Generation Science
    40-60 Gbps on Major Paths to Labs

  • Hubs at strategic locations: SEA, SNV, SDG, ELP,
    ALB, DEN, CHI, NYC, DC, ATL; MANs; high-speed
    cross connects with Internet2/Abilene

[Map: Qwest and NLR ESnet hubs; major DOE Office of
Science sites; major international links to CERN,
Europe, Japan, and AsiaPac; production IP ESnet
core; high-impact science core (2.5-10 Gb/s,
rising to 30-40 Gb/s); future phases; lab-supplied
links]
18
GEANT2 Hybrid Architecture
  • Global Connectivity
  • 10 Gbps + 3x2.5 Gbps to North America
  • 2.5 Gbps to Japan
  • 622 Mbps to South America
  • 45 Mbps to Mediterranean countries
  • 155 Mbps to South Africa
  • Will be Improved in GEANT2
  • Cooperation of Many NRENs
  • Implementation on dark fiber (IRU asset) with
    Transmission and Switching Equipment
  • Layer 1 and 2 switching: the Light Path
  • Point-to-Point Wavelength Services

19
JGN2: Japan Gigabit Network (4/04 - 3/08); 20 Gbps
Backbone, 6 Optical Cross-Connects
  • JGN2
  • Connection services at the optical level.
  • 1GE and 10GE connection services
  • Optical testbed services

20
The Global Lambda Integrated Facility for
Research and Education (GLIF)
  • Virtual organization supports persistent
    data-intensive scientific research and middleware
    development on LambdaGrids
  • Grid applications ride on dynamically
    configured networks based on optical wavelengths.
  • Architecting an international LambdaGrid
    infrastructure

21
GLORIAD Topology: Current, and Plans for Years 1-5
Moscow
Novosibirsk
Seattle
Khabarovsk
Amsterdam
Chicago
Beijing
Pusan
NYC
Hong Kong
22
Closing the Digital Divide: R&E Networks in/to
Latin America
PacificWave
AtlanticWave
  • AmPath: 45 Mbps Links to US (2002-5)
  • RedCLARA (EU): Connects 18 Latin Am. NRENs plus
    Cuba; 622 Mbps Link to Europe
  • CHEPREO (NSF, from 2004): 622 Mbps Sao Paulo -
    Miami
  • WHREN/LILA (NSF, from 2005)
  • 0.6 to 2.5 G Ring by 2006
  • Connections to Pacific Wave, then
    Atlantic Wave

[Map: 622 Mbps link to GEANT; Sao Paulo (ANSP)]
23
RNP2 and GIGA in Brazil
  • RNP, the Brazilian Nat'l Research Network,
    connects regional networks in all 26 states
  • 2004: Backbone upgraded on major links to 155
    Mbps; 622 Mbps Rio - Sao Paulo; 2.5 G to 10 Sites
    in 2005
  • 622 Mbps Int'l links to GEANT (RedCLARA) and US
    (CHEPREO)
  • Extension to Northern regions of Brazil
    underway, to be completed by 2008 (over 4000 km)

  • The GIGA Exp. Network (ANSP): 700 km dark fiber,
    7 cities, 20 institutes
  • Gigabit Ethernet links to the Rio Tier-2 and
    Sao Paulo Tier-3

24
HEPGRID (CMS) in Brazil
  • HEPGRID-CMS/BRAZIL is a project to build a Grid
    that
  • At the Regional Level will include CBPF, UFRJ,
    UFRGS, UFBA, UERJ and UNESP
  • At the International Level will be integrated with
    the CMS Grid based at CERN; focal points include
    Grid3/OSG and bilateral projects with the Caltech
    Group

25
Latin America: Science Areas Interested in
Improving Connectivity (by Country)
Networks and Grids: The Potential to Spark a New
Era of Scientific Research in the Region
26
Asia Pacific Academic Network Connectivity
APAN Status 7/2004
Connectivity to the US from JP, KO, AU is Advancing
Rapidly. Progress within the Region, and to Europe,
is Much Slower
Better North/South Linkages within Asia: a JP-SG
link at 155 Mbps in 2005 is proposed to NSF by
CIREN; a JP-TH link upgrade, 2 Mbps -> 45 Mbps in
2004, is being studied. CIREN is studying an
extension to India
27
APAN China Consortium
  • Established in 1999. The China Education and
    Research Network (CERNET) and the China Science
    and Technology Network (CSTNET) are the main
    advanced networks.
  • CERNET
  • 2000: Own dark fiber crossing 30 major cities,
    30,000 kilometers
  • 2003: 1300 universities and institutes, over 15
    million users
  • CERNET2: Next Generation R&E Net
  • Backbone connects 15-20 Giga-POPs at 2.5G-10 Gbps
    (I2-like)
  • Connects to 200 Universities and 100 Research
    Institutes at 1 Gbps-10 Gbps
  • Native IPv6 and Lambda Networking
  • From 6 to 78 Million Internet Users in China from
    January to July 2004

[Map: CERNET and CSTNET, 2.5 Gbps interconnection]
28
Connectivity to Africa
  • Digital Divide: Lack of Infrastructure,
    especially in the interior
  • Internet Access: More than an order of magnitude
    lower than the corresponding percentages in
    Europe (32%) and N. America (68%)
  • EUMEDCONNECT (EU-North Africa)
  • GEANT: 155 Mbps to South Africa
  • New Submarine cables expected

29
Digital Divide Illustrated by Network
Infrastructures: TERENA NREN Core Capacity,
Current and In Two Years
Core capacity goes up in Large Steps: 10 to 20
Gbps; 2.5 to 10 Gbps; 0.6-1 to 2.5 Gbps
SE Europe, Mediterranean, FSU, Middle East: Less
Progress, Based on Older Technologies (Below 0.15
to 1.0 Gbps). The Digital Divide Will Not Be Closed
Source: TERENA
30
Romania: Inter-City Links (2 to 6 Mbps in 2002)
Upgraded to 155 Mbps in 2003-2004; GEANT-Bucharest
Link Improved from 155 to 622 Mbps

RoEduNet, January 2005
Plan: 3-4 Centers at 2.5 Gbps; Dark Fiber
InterCity Backbone
Compare Pakistan: 56 universities share 155 Mbps
internationally
31
Pakistan Educational Research Network (PERN)
  • 56 universities, 34 Mbps domestic links, 155 Mbps
    total Int'l bandwidth
  • Services: digital library; video-conferencing
    for lectures and tutorials
  • Project to set up an educational grid
    infrastructure initiated

32
Highest Bandwidth Link in Each NREN's
Infrastructure: EU and EFTA Countries, Dark Fiber
[Chart: log scale, 0.01G to 10.0G]
  • Owning (or leasing) dark fiber is an interesting
    option for an NREN, depending on the national
    situation.
  • NRENs that own dark fiber can decide for
    themselves which technology and what speeds to
    use on it

Source: TERENA
33
Core Network Bandwidth Increase for Years
2001-2004 and 2004-2006
  • Countries With No Increase Already Had 1-2.5G
    Backbone in 2001
  • These are all going to 10G backbones by 2006-7
  • Countries Showing the Largest Increase Are
  • PIONIER (Poland) from 155 Mbps to 10 Gbps
    capacity (64X)
  • SANET (Slovakia) from 4 Mbps to 1 Gbps (250X).

Source: TERENA
34
  • 1660 km of Dark Fiber CWDM Links, 1 to 4 Gbps
    (GbE)
  • August 2002: Dark Fiber Link to Austria
  • April 2003: Dark Fiber Link to the Czech
    Republic
  • 2004: Dark Fiber Link to Poland
  • Planning 10 Gbps Backbone

> 250X Growth, 2002-2005
35
HEPGRID and Digital Divide Workshop UERJ, Rio de
Janeiro, Feb. 16-20 2004
Theme: Global Collaborations, Grids and Their
Relationship to the Digital Divide. For the past
three years the SCIC has focused on understanding
and seeking the means of reducing or eliminating
the Digital Divide, and proposed to ICFA that
these issues, as they affect our field of High
Energy Physics, be brought to our community for
discussion. This led to ICFA's approval, in July
2003, of the Digital Divide and HEP Grid
Workshop. More Information:
http://www.lishep.uerj.br
  • Tutorials
  • C++
  • Grid Technologies
  • Grid-Enabled Analysis
  • Networks
  • Collaborative Systems

Sessions and Tutorials Available (w/Video) on the
Web
SPONSORS: CLAF, CNPQ, FAPERJ, UERJ
36
HEPGRID and Digital Divide Workshop UERJ, Rio
Feb. 16-20 2004 Summary
See http://www.lishep.uerj.br/WorkshopSchedule.html
  • Worldviews of Networks and Grid Projects and the
    Relationship to the Digital Divide: Newman,
    Santoro, Gagliardi, Avery, Cottrell
  • View from Major Experiments and Labs (D0, ATLAS,
    CMS, LHCb, ALICE, Fermilab): Blazey, Jones, Foa,
    Nakada, Carminati
  • Progress in Acad. Research Networks in Europe,
    Asia Pacific, Latin America: Williams, Tapus,
    Karita, Son, Ibarra
  • Major Network Initiatives (I2, GLIF, NLR, AARNet,
    CLARA): Preston, de Laat, Silvester, McLaughlin,
    Stanton, Olson
  • View from/for Developing Countries: Willers,
    Ali, Davalos, De Paula, Canessa, Sciutto,
    Alvarez
  • Grids and the Digital Divide in Russia: Ilyin,
    Soldatov
  • Other Fields, ITER (Fusion) and MammoGrid:
    Velikhov, Amendolia
  • Round Tables: Network Readiness, Digital Divide
    in Latin America
  • Special Talks on HEP in Brazil (Lederman), and a
    Review of the CERN Role of Science in the
    Information Society Event (Hoffmann)
  • Participation of Latin American NRENs, and Gov't
    Officials
  • 152 Participants, 18 Countries: BR 107, US 18, CH
    8, AG 3, FR 3, ...

37
Findings and Recommendations on the Digital Divide
in Europe (Williams)
  • The Digital Divide Exists
  • The depth of the digital divide varies very
    greatly among countries
  • There are four countries in Eastern Europe with
    a high overall standard of R&E Networking (Pl,
    Cz, Hu, Sk). Reasons include:
  • Good support for research networking at
    governmental level
  • Access to dark fibre where/when necessary
  • History of participation in joint European
    projects
  • Access to Dark Fibre is Vital
  • It enables NRENs in eastern European countries to
    upgrade the backbone and access links 100-fold
    without spending much more on the
    infrastructure
  • This is now the main step which could be taken
    to close the digital divide.
  • In most eastern European countries fibre is
    already laid
  • Getting fiber in countries with a liberalized
    telecom market is not difficult
  • Examples in countries with telecom monopolies
    exist

ICFA DD Workshop, Rio 2/2004
Removing the digital divide is a moving target.
Internet use has only just started. Technological
progress will move the goalposts a lot in the
next 2-3-5 years.
38
International ICFA Workshop on HEP Networking,
Grids, and Digital Divide Issues for Global
e-Science: Status Report
  • Dates: May 23-27, 2005; Venue: Hotel Interburgo,
    Daegu, Korea
  • by
  • Dongchul Son
  • Center for High Energy Physics
  • Harvey Newman
  • California Institute of Technology
  • ICFA, Vancouver, Canada, February 11, 2005

39
Actions taken
  • 2004 August
  • Presented at ICFA in Beijing, Date and Venue
    fixed. Approved. (22nd)
  • Collocation of IHEPCCC suggested by the IHEPCCC
    Chair (GW)
  • End of Aug: Discussion on the scheme of the
    Workshop during the HEP Grid Workshop in Daegu
    (HN, PA, FG, DS)
  • September
  • Announcement in conf. databases: CERN (CDS),
    DESY, FNAL (HEPIC), SLAC (SPIRES)
  • Invitation letter drafted by the chairs with
    Paul Avery and Alberto Santoro
  • Local Organizing Committee (draft)
  • 30 Sept-1 Oct
  • Presented at SCIC and IHEPCCC at CERN
  • Consulted on International Advisory Committee
    (IAC), nominations
  • Collocation of IHEPCCC approved
  • October
  • Discussion on IAB, prelim. draft of program,
    invitation letters to IAC
  • December
  • Finalized Invitation letters sent to nominated
    IAB members
  • 2005 January
  • Established provisional web page
  • Program Draft 1.0 sent to IAC for Comments and
    Suggestions by IAC

40
Actions to be taken
  • 2005 Feb
  • Update databases
  • Posters preparation (Now in progress)
  • Hotel negotiation/Contact Sponsors/Tour
    Arrangements
  • Finalizing program by end of February
  • Selection of Speakers and Chairs
  • Invitation to suggested panelists
  • March
  • Invitations of speakers besides the IAB and
    potential participants
  • Updating the program, Distribution of Posters and
    1st bulletins to be sent
  • Arrangement of Performance, reception and
    banquets, tours
  • April
  • 2nd bulletin
  • May
  • 3rd bulletin
  • Site preparation, video conferencing, meeting
    places to be arranged
  • Workshop

41
International Advisory Committee
Arshad Ali (NIIT, Pakistan)
Heidi Alvarez (Florida Int. Univ., USA)
Bill St. Arnaud (CANARIE, Canada)
Paul Avery (Univ. of Florida, USA)
Angelina Bacala (MSU-IIT, Philippines)
Natasha Bulashova (GLORIAD, USA)
Hesheng Chen (IHEP, China)
Peter Clarke (London, UK)
Greg Cole (GLORIAD, USA)
Les Cottrell (SLAC, USA)
Ed Fantegrossi (GEO)
Tom de Fanti (Translight)
Lorenzo Foa (Pisa, Italy)
David Foster (CERN)
Ian Foster (ANL and U. Chicago, USA)
Fabrizio Gagliardi (EGEE, CERN)
Atul Gurtu (TIFR, India)
Abdeslam Hoummada (Casablanca, Morocco)
Julio Ibarra (Florida Int. Univ., USA)
Viacheslav Ilyin (Moscow, Russia)
Michael Jensen (Africa)
Yukio Karita (KEK, Japan)
Matthias Kasemann (DESY, Germany)
Sachio Komamiya (Tokyo, Japan)
Luis Lopez (ANSP, Sao Paulo, Brazil)
Jysoo Lee (KISTI, Korea)
Iosif LeGrand (Caltech, USA)
Simon C. Lin (ASCC, Taiwan)
Vera Luth (IUPAP-C11, Stanford, USA)
Mirco Mazzucato (Padova, Italy)
George McLaughlin (AARNet, Australia)
Richard Mount (SLAC, USA)
Harvey Newman (Co-chair, Caltech, USA)
Sergio Novaes (Sao Paulo, Brazil)
Pier Oddone (Berkeley/FNAL, USA)
Riazuddin (NCP, QAU, Pakistan)
Don Riley (Maryland, USA)
Wolfgang Von Rueden (CERN)
Alberto Santoro (UERJ, Brazil)
Bo-Hyun Seo (KISDI, Korea)
Randall Sobie (Victoria, Canada)
Dongchul Son (Co-chair, CHEP, Korea)
Michael Stanton (RNP, Brazil)
Geoffrey Taylor (Melbourne, Australia)
Michal Turala (Krakow, Poland)
Albrecht Wagner (DESY, Germany)
Peter Watkins (Birmingham, UK)
Vicky White (Fermilab, USA)
Guy Wormser (Orsay, France)
42
Program (Draft)
  • HEP Grids (3 sessions)
  • Advanced Networking (3
    sessions)
  • Digital Divide Perspectives (3
    sessions)
  • Network Technology (1
    session)
  • Computing Technology (1
    session)
  • Grid Analysis Environment
    (3 sessions)
  • E-Global Science (2
    sessions)
  • HEP Computing Coordination (3 sessions)
  • Applications of Data Grids and Ubiquitous
    Computing (2 sessions)
  • Panel Discussions
  • SCIC and IHEPCCC Meetings

All Sessions Available Live Via VRVS
43
ICFA Report (2/2005) Update: Main Trends
Continue, Some Accelerate
  • Current generation 2.5-10 Gbps network backbones
    and major Int'l links arrived in 2001-4 in the US,
    Europe and Japan; Now Korea and China
  • Growth of 4 to Hundreds of Times: Much Faster
    than Moore's Law
  • Proliferation of 10G links across the Atlantic
    Now; Will Begin Use of Multiple 10G Links (e.g.
    US-CERN) Along Major Paths by Fall 2005
  • Tech. Progress Driving Equipment Costs Lower
    (e.g. 1 and 10 GbE)
  • Some regions (US, Canada, Europe) moving to dark
    fiber
  • Grid-based Data Distribution and Analysis demands
    end-to-end high performance and management
  • Emergence of the Hybrid Network Model: GLIF,
    UltraLight
  • Ability to fully use long 10G paths with TCP
    continues to advance: 7.5 Gbps X 16 kkm (July
    2004); 101 Gbps at SC2004
  • The rapid rate of progress is confined mostly to
    the US, Europe, Japan and Korea, as well as the
    major Transoceanic routes
  • This threatens to Open the Digital Divide Wider,
    Unless we take Action as a Community
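Fully using a long 10G path with a single TCP stream (the 7.5 Gbps X 16 kkm mark above) requires a window of at least the bandwidth-delay product. A quick check (a sketch; the ~2e8 m/s fiber propagation speed and the round-trip distance are rough assumptions):

```python
# Bandwidth-delay product: the in-flight data needed to keep a long
# "fat pipe" full. Light travels roughly 2e8 m/s in fiber, so a
# 16,000 km path has a round-trip time near 160 ms.
def bdp_bytes(gbps: float, rtt_s: float) -> float:
    return gbps * 1e9 * rtt_s / 8

rtt = 2 * 16_000e3 / 2e8  # seconds, out and back over 16,000 km
print(f"RTT ~ {rtt * 1e3:.0f} ms; window ~ {bdp_bytes(7.5, rtt) / 1e6:.0f} MB")
# ~160 ms and ~150 MB: far beyond default OS TCP buffer sizes, which is
# why advanced TCP stacks and careful tuning are needed on such paths.
```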

44
We Need to Work on the Digital Divide from
Several Perspectives
  • Workshops and Tutorials/Training Sessions
  • For Example: ICFA DD Workshops, Rio 2/04, Daegu
    5/05; HONET (Pakistan), December 2004
  • Work on Policies and/or Pricing: pk, in, br, cn,
    SE Europe, ...
  • Encourage Access to wavelength services, or dark
    fiber
  • Use Model Cases: e.g. Poland, Slovakia, Czech
    Republic
  • Share Information: Comparative BW Prices in
    different markets
  • Encourage, and Work on, Inter-Regional Projects
  • Latin America: CHEPREO/WHREN (US-Brazil), GIGA,
    RedCLARA
  • GLORIAD: Russia-China-Korea-US Optical Ring
  • Help with Modernizing the Infrastructure
  • Design, Commissioning, Development
  • Provide Tools for Effective Use: Monitoring,
    Collaboration Systems, Advanced TCP Stacks, Grid
    System Software
  • Raise General Awareness of Problems, and
    Approaches to Solutions

45
SCIC Work in 2005
  • Continue Digital Divide Focus: More In-Depth
    Information
  • In Europe, with CERN, GEANT, TERENA
  • In Asia, with APAN, KNU and KEK
  • In the US, with Internet2, ESnet, NLR
  • In South America, with AMPATH, CHEPREO, RNP, et
    al.
  • In Africa, with Jensen, Matthews and ICTP Trieste
  • Strengthen IEPM Monitoring Work
  • Set Up HENP Networks Web Site (Get Support
    and/or Funding)
  • Continue Work on Specific Improvements, Case by
    Case
  • Brazil and Latin America, with RNP, ANSP
  • Russia, China, Korea, with GLORIAD
  • Pakistan with PERN. India (?)
  • Southeast Europe, e.g. Romania
  • Follow the World Summit on the Information
    Society Process
  • Watch Requirements: the Lambda Grid and Grid
    Analysis Revolutions; the Digital Divide is a
    Moving Target
  • Encourage Creation of a New Culture of
    Collaboration, e.g. in the context of the LHC
    Computing Models

46
  • ICFA Standing Committee on
    Interregional Connectivity (SCIC)

Additional Slides Follow
47
ICFA and Global Networks for Collaborative
Science
  • Given the worldwide spread and data-intensive
    challenges in our field:
  • National and International Networks, with
    sufficient (rapidly increasing) capacity and
    seamless end-to-end capability, are essential for
  • The daily conduct of collaborative work in both
    experiment and theory
  • Experiment development and construction on a
    global scale
  • Grid systems supporting analysis involving
    physicists in all world regions
  • The conception, design and implementation of
    next generation facilities as global networks
  • Collaborations on this scale would never have
    been attempted, if they could not rely on
    excellent networks

48
SCIC in 2004-5: A Period of Intensive Activity
  • http://cern.ch/ICFA-SCIC/
  • Monitoring: Les Cottrell
    (http://www.slac.stanford.edu/xorg/icfa/scic-netmon),
    with Richard Hughes-Jones (Manchester), Sergio
    Novaes (Sao Paulo), Sergei Berezhnev (RUHEP),
    Fukuko Yuasa (KEK), Daniel Davids (CERN), Sylvain
    Ravot (Caltech), Shawn McKee (Michigan)
  • Advanced Technologies: Richard Hughes-Jones, with
    Olivier Martin (CERN), Vladimir Korenkov (JINR,
    Dubna), Harvey Newman
  • The Digital Divide: Alberto Santoro (UERJ,
    Brazil)
  • With V. Ilyin (MSU), Y. Karita (KEK), D.O.
    Williams (CERN), D. Son (Korea), H. Hoorani, A.
    Ali, S. Zaidi (Pakistan), S. Banerjee (India),
    V. White (FNAL), J. Ibarra, Heidi Alvarez
    (AMPATH / CHEPREO / WHREN)
  • Key Requirements: Harvey Newman et al.

49
Internet Growth in the World At Large
Amsterdam Internet Exchange Point Example
[Chart: 5-minute max ~40.1 Gbps; average ~20 Gbps]
Some Annual Growth Spurts, Typically in
Summer-Fall; Acceleration this January
The Rate of HENP Network Usage Growth (~100% Per
Year) is Similar to the World at Large
50
Coverage
  • Now monitoring 673 sites in 114 countries;
    samples 99% of the population connected to the
    Internet.
  • In the last 9 months added:
  • Several sites in Russia (thanks GLORIAD)
  • Many hosts in Africa (5 -> 36, now in 27 out of 54
    countries)
  • Monitoring sites in Pakistan and Brazil (Sao
    Paulo and Rio)
  • Working to install a monitoring host in Bangalore,
    India

51
Collaborations/funding
  • Good news
  • Active collaboration with NIIT Pakistan to
    develop network monitoring, including PingER
  • Travel funded by the US State Department for 1
    year
  • FNAL and SLAC continue support for PingER
    management and coordination
  • Bad news
  • DoE funding for PingER terminated
  • Proposal to the EC 6th Framework with ICTP, ICT
    Cambridge UK, et al. rejected
  • Proposal to IDRC/Canada in February; no word yet
  • Hard to get funding for operational needs
  • For quality data, need constant vigilance (hosts
    disappear, security blocks pings, remote host
    lists need updating ...)

52
Next-generation hybrid packet- and
circuit-switched dynamic network infrastructure
  • Motivation
  • Increasing demand for deterministic paths
  • Demand for more dynamic requirements on bandwidth
    and topology
  • 40 Gb/s or 100 Gb/s in the near future? Or
    N x 10 Gb/s waves
  • The R&E community plans to move to the N X 10G
    bandwidth range, in networks with global reach,
    within 5 years
  • Customer/community-owned (or leased) networks
  • New infrastructure
  • GEANT2, JGN2, NLR, ESnet (2007)
  • New Technologies
  • Long haul DWDM equipment, GMPLS (control plane),
    photonic switches
  • New standards: G.709, GFP, VCAT, LCAS, WAN-PHY
  • 10 Gigabit Ethernet on end-systems

53
Latin America RedCLARA Network (2004-2006 EU
Project)
  • NRENs in 18 LA countries forming a regional
    network for collaboration traffic, plus Cuba
  • Initial backbone ring bandwidth 155 Mbps
  • Spur links at 10 to 45 Mbps
  • Initial connection to Europe (Rio-Madrid) at 622
    Mbps
  • Tijuana (Mexico) PoP soon connected to US
    through dark fibre link (CUDI-CENIC)
  • Significant contribution from European Commission
    and Dante through ALICE project

Until 2004, Only Latin American Countries with
NRENs were Argentina, Brazil, Chile, Mexico,
Venezuela
54
Evolving Quantitative Science Requirements for
Networks (DOE High Perf. Network Workshop)
See http://www.doecollaboratory.org/meetings/hpnpw/
55
Data Intensive Science University Network
(DISUN)
  • Distributed U.S. CMS Tier2C center
  • Caltech, UCSD, Florida, Wisconsin, and FNAL
  • U.S. Sites linked by 10G waves on National
    Lambda Rail (NLR)
  • Supports massive data storage and data movement
  • Synergies: UltraLight Project and Open Science
    Grid (OSG)

56
UltraLight Advanced Network Services for Data
Intensive HEP Applications
  • Extend and augment existing grid computing
    infrastructures (currently focused on
    CPU/storage) to include the network as an
    integral component
  • A next-generation hybrid packet- and
    circuit-switched dynamic network infrastructure
  • Partners: Caltech, UF, FIU, UMich, SLAC, FNAL;
    GLORIAD (China, Korea, Russia)
  • Strong support from Cisco, CENIC, NLR, FLR

57
CERN Optical Exchange Point
  • Glimmerglass Photonic Switch with MonALISA
    Services
  • Interconnects OpenLab, Major CERN Computing
    Facilities, SURFNet/GLIF and

58
GEANT Pan-European Backbone (33 Countries),
October 2004
Note: 10 Gbps Connections to Poland, Czech
Republic, and Hungary
Planning Underway for GEANT2 (GN2), a
Multi-Lambda Backbone, to Start In 2005
59
Congestion for Some Countries of the Former
Soviet Union and Southeast Europe
[Chart: congestion status for the Campus LAN,
Access Network, NREN Backbone, and External
Connections]
More Congestion in External Connections, and also
in Some NREN Backbones and Access Networks
60
GEANT Global Connectivity
  • 10 Gbps + 3x2.5 Gbps to North America
  • 2.5 Gbps to Japan
  • 622 Mbps to South America
  • 45 Mbps to Mediterranean countries
  • 155 Mbps to South Africa

61
Congestion status for EU and EFTA
Source: TERENA
[Chart: congestion status (Little / Some / Serious)
for the Campus LAN, Access Network, NREN Backbone,
and External Connections]
  • Congestion mainly at
  • The Campus local area network (LAN)
  • The Access Networks to the Backbone

62
GARR in Italy
  • Current network in Italy has been in place since
    late 1999
  • The network topology is based on 2.5 Gbps
    circuits
  • A 10 Gbps link to GEANT was deployed in 2004
  • Next generation network will deploy
    point-to-point lambda connections
  • A pilot lambda-based network (GARR-G) has been
    in operation since 2001, based on 2.5 Gbps
    wavelengths

63
R&D Networking in France: Renater3
  • 2.5 Gbps WDM backbone links
  • Transitioning from ATM topology
  • 10 Gbps link to GEANT
  • Dual IPv4/IPv6 stack on the backbone
  • Standard PoP connection is 1Gbps
  • Extensions (DOM-TOM Overseas Departments and
    Territories) to Martinique, Guadeloupe, French
    Guyana, Reunion, Mayotte, New Caledonia and
    Tahiti

Current topology will be in place until mid 2005
64
UK SuperJanet4 and SuperJanet5
  • 10 Gbps DWDM Backbone
  • 2.5 Gbps Connections to Regional Networks
  • 2.5 Gbps Connection to GEANT
  • 10 Gbps Connections to Amsterdam and StarLight
    (UKLight)
  • SuperJanet5 is in the procurement phase
  • Dark Fiber Leasing and Lighting is Considered an
    Option
  • SJ5 Rollout Starts at the End of 2005

65
SURFnet6 in the Netherlands: 3000 km of Owned
Dark Fiber
40M Euro Project; Scheduled Start Mid-2005;
Supports Hybrid Grids
66
Germany 2004-2005: GWIN to XWIN
  • GWIN Connects 550 Universities, Labs, and Other
    Institutions

XWIN Q4/05 (Dark Fiber Option)
GWIN 3/2004
67
Dark Fiber in Eastern Europe: Poland's PIONIER
Network
2650 km of Fiber Connecting 16 MANs; 5200 km and
21 MANs Planned by 2005
  • 10 GbE Backbone
  • An Intermediate Step to a True Lambda Network
    Supporting
  • Computational Grids
  • Digital Libraries
  • Interactive TV
  • Add'l Fibers for e-Regional Initiatives

68
Two Years Ago, 4 Mbps Was the Highest-Bandwidth
Link in Slovakia
69
Czech Republic CESNet2
  • First DWDM line started at the end of 2004:
    10 Gbps between Prague and Brno; 4 Gbps to
    Slovakia (SANET)
  • 2.5 Gbps link to GEANT
  • 10 Gbps link to Netherlands for research traffic
  • Update (Feb 3, 2005): 2 x 10 Gbps link to the
    Czech peering center NIX.CZ

70
SuperSINET Updated Map Oct. 2003

SuperSINET 10 Gbps
Int'l Circuit 5-10 Gbps

Domestic Circuit 30-100 Mbps
  • SuperSINET
  • 10 Gbps IP; Tagged VPNs
  • Additional 1 GbE Inter-University
    Wave For HEP
  • 4 x 2.5 Gb to NY; 10 GbE Peerings with ESNet,
    Abilene and GEANT

71
APAN-KR KREONET/KREONet2 II
  • 2.5-5 Gbps links interconnecting 10 regional
    centers.
  • Typical user sites are connected at 1 Gbps.

72
CA*net 4: Canadian Research and Education Network
  • Three 10 Gbps Lambdas Halifax to Vancouver
  • Lightpaths on a permanent or as-needed basis
  • Operates as set of independent parallel IP
    networks, for specific disciplines or
    applications
  • UCLP (User Controlled LightPath) software
    creates optical VPNs; fully compliant with the
    Open Grid Services Architecture (OGSA)

Pioneered Lambda-Based R&E Networks
73
Abilene / NLR (HOPI) Map
The Hybrid Packet and Optical Testbed (HOPI):
Investigation of issues in combining the optical,
lambda-based connectivity model with the
traditional packet-switched model
  • Increasing demand for deterministic paths
  • Demand for more dynamic bandwidth and topology
  • 40 Gb/s or 100 Gb/s in the near future? Or
    N x 10 Gb/s waves?

74
Pacific Wave (2004): Distributed Exchange and
Peering Fabric
  • Pass IP traffic directly with other major
    national and intl networks
  • Reduce costs associated with IP traffic that
    would otherwise transit commercial carrier
    circuits
  • Increase efficiency by directing traffic as
    directly as possible to the target
    network/organization
  • Similar initiative on the US East Coast: A-Wave
    (w/ Extension to Latin America)

75
AFRICA Key Trends
M. Jensen and P. Hamilton, Infrastructure Report,
March 2004
  • Growth in traffic and lack of infrastructure
    → Predominance of Satellite, but these
    satellites are heavily subscribed
  • Slow roll out of downstream bandwidth limiting
    markets No regional fiber in East, Central or
    Interior regions.
  • Int'l Traffic: Only 1% of traffic on links is
    for Internet connections; most Internet traffic
    (for 80% of countries) goes via satellite
  • Flourishing grey market for Internet and VOIP
    traffic using VSAT dishes
  • Many regional fiber projects in planning phase
    (some languished in the past)
  • Int'l fiber Project: SAT-3/WASC/SAFE Cable from
    South Africa to Portugal along the West Coast
    of Africa
  • Supplied by Alcatel to Worldwide Consortium of 35
    Carriers
  • 40 Gbps by Mid-2003, Heavily Subscribed;
    Ultimate Capacity 120 Gbps
  • Extension to Interior Mostly by Satellite:
    < 1 Mbps to 100 Mbps typical

Note: World Conference on Physics and Sustainable
Development, 10/31-11/2/05 in Durban, South
Africa; Part of World Year of Physics 2005.
Sponsors: UNESCO, ICTP, IUPAP, APS, SAIP
76
77
78
Internet in China (J.P. Wu, APAN, July 2004)
  • Internet users in China from 6.8 Million to 78
    Million within 6 months
  • IP Addresses: 32M (1 A + 233 B + 146 C)
  • Backbone: 2.5-10G DWDM + Router
  • International links: 20G
  • Exchange Points: > 30G (Beijing, Shanghai,
    Guangzhou)
  • Last Miles: Ethernet, WLAN, ADSL, CTV, CDMA,
    ISDN, GPRS, Dial-up
  • Need IPv6

 
79
AFRICA Nectar Net Initiative
  • Growing need to connect academic researchers,
    medical researchers and practitioners to many
    sites in Africa
  • Examples
  • CDC and NIH: Global AIDS Project, Dept. of
    Parasitic Diseases, Nat'l Library of Medicine
    (Ghana, Nigeria)
  • Gates $50M HIV/AIDS Center in Botswana;
    Project Coord. at Harvard
  • Africa Monsoon AMMA Project, Dakar Site cf. East
    US Hurricanes
  • US Geological Survey Global Spatial Data
    Infrastructure
  • Distance Learning Emory-Ibadan (Nigeria)
    Research Channel Content
  • But Africa is Hard: 11M Sq. Miles, 800M People,
    54 Countries
  • Little Telecommunications Infrastructure
  • Approach: Use SAT-3/WASC Cable (to Portugal),
    GEANT Across Europe, the Amsterdam-NY Link
    Across the Atlantic, then Peer with R&E
    Networks such as Abilene in NYC
  • Cable Landings in 8 West African Countries and
    South Africa
  • Pragmatic approach to reach end points VSAT
    satellite, ADSL, microwave, etc.

W. Matthews, Georgia Tech
80
LHCNet, Lambda Triangle and US Connections (FNAL,
SLAC, BNL; I2, ESNet, NLR, TG)
StarLight (Chicago)
  • To OC192 (10 Gbps) September 2004
  • Lambda Triangle: StarLight-SURFNet-CERN
  • Peer with Abilene, NLR, TeraGrid at 10 Gbps
  • Caltech-to-NLR (LA) Dedicated Waves (Cisco
    Donation): First Univ. Direct Connection at
    2 x 10G
  • R&D Partnerships: UltraLight, UltraScience
    Net, OSG, WANInLab, Cisco

MAN LAN 9/2005 Plan
81
AARnet SXTransport Project in 2004
  • Connect Major Australian Universities to 10 Gbps
    Backbone
  • Two 10 Gbps Research Links to the US (x 64 in
    BW!)
  • AARNet/USLIC Collaboration on Net R&D Planned

82

GLORIAD: Global Ring Network for Advanced
Applications Development
  • Moscow-Chicago-Beijing OC3 since January 2004
  • OC3 circuit Moscow-Beijing (July 2004)
    completed the ring; IP traffic since August
  • Korea (KISTI) joined US, Russia, China as full
    partner in August
  • Plans developing for Central Asian extension
    (w/Kyrgyz Government)
  • Rapid traffic growth with heaviest US use from
    DOE (FermiLab), NASA, NOAA, NIH and Universities
    (UMD, IU, UCB, UNC, UMN, PSU, Harvard, Stanford,
    250 Others)
  • Goal Multi-Lambda Ring by 2008-9

Many TBytes/month now transferred via GLORIAD to
US, Russia, China
  • China-Chicago 2.5G by April 2005
  • Korea-Chicago 10G by July 2005
  • US-Russia 622 Mbps Soon; 10G by 2006

83
NetherLight
  • Network services optimized for high-performance
    applications.
  • Multiple Gigabit Ethernet switching facility;
    will ultimately become a pure wavelength
    switching facility for wavelength circuits
  • Major hub in GLIF (the Global Lambda Integrated
    Facility for Research and Education)

84
Progress in Loss Performance (Cottrell)
Fraction of the World's Population With Different
Levels of Packet Loss
Loss Rate: < 0.1% | 0.1 to 1% | 1 to 2.5% |
2.5 to 5% | 5 to 12% | > 12%
(bars shown for 2001 and 12/2003)
  • In 2001, < 20% of the world's population had
    Good or Acceptable loss performance
  • BUT by December 2003 it had improved to 77%

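The loss bins above can be read as a small classifier; a minimal sketch (the slide names only "Good" and "Acceptable" — the other labels follow common PingER-style usage and are my assumption):

```python
def loss_category(loss_pct: float) -> str:
    """Bin a measured packet-loss percentage into the slide's categories."""
    bins = [
        (1.0, "Good"),        # < 1% (covers the < 0.1% sub-bin too)
        (2.5, "Acceptable"),  # 1 to 2.5%
        (5.0, "Poor"),        # 2.5 to 5%
        (12.0, "Very Poor"),  # 5 to 12%
    ]
    for upper, label in bins:
        if loss_pct < upper:
            return label
    return "Bad"              # > 12%
```

With this binning, the slide's statement reads: the share of the world's population in the Good or Acceptable bins rose from under 20% in 2001 to 77% by December 2003.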
85
Derived Throughput (kbps) Between Monitoring
Countries and Remote Regions
Monitoring Country
Remote Region
Good: > 1000 kbps | Acceptable: 500 to 1000 kbps |
Poor: 200 to 500 kbps | Very Poor: < 200 kbps
Intra-Continental Europe (Including Russia and
the Baltics) and Intra-US: Much Improved.
Inter-Regional Connectivity: Still Poor to Very
Poor. Latin America, Most of Asia, and Africa:
Still Poor or Very Poor; Far Behind
86
APAN Recommendations (at the July 2004 Meeting
in Cairns, Australia)
  • New issues demand attention
  • Application measurement, particularly end-to-end
    network performance measurement is increasingly
    critical (deterministic networking)
  • Security must now be a consideration for every
    application and every network.
  • Central Issues for APAN this decade
  • Stronger linkages between applications and
    infrastructure - neither can exist independently
  • Stronger application and infrastructure linkages
    among APAN members.
  • Continuing focus on APAN as an organization that
    represents infrastructure interests in Asia
  • Closer connection between APAN (the
    infrastructure and applications organization)
    and regional political organizations (e.g.
    APEC, ASEAN)

87
SURFnet6 Light Path Provisioning implementation
  • Over 3000 km of managed dark fiber
  • Light Path Provisioning (Bandwidth on Demand)
  • Extension to GLIF

88
Progress in Network RD
  • HEP is a leading community
  • DataTAG (CERN), LambdaStation (FNAL),
    UltraLight (Caltech)
  • High speed data transfers
  • Internet2 Land Speed Records
  • IPv6 record: 5.11 Gbps between Geneva and
    StarLight (Jan. 2005)
  • IPv4 Single Stream: 7.5 Gbps x 16 kkm with
    Linux, Achieved July 2004
  • IPv4 Multi-stream with FAST TCP: 6.86 Gbps x
    27 kkm (Nov 2004)
  • SC2004 Bandwidth Record by HEP
  • Aggregate Rate of 101 Gbps
  • 2.95 Gbps sustained to Brazil
  • CERN-FNAL Data Challenge
  • (pre)Production Traffic
  • 3 days at 4.75 Gbps
  • Disk-to-disk Marks: 536 MBytes/sec with
    Windows; 300 MBytes/sec with Linux

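The land speed record figures above are throughput × distance products; a small helper (the function name and the petabit-meters-per-second unit are my choices) makes the marks directly comparable:

```python
def petabit_meters_per_sec(gbps: float, km: float) -> float:
    """Throughput-distance product, the figure of merit for the LSR marks."""
    return gbps * 1e9 * km * 1e3 / 1e15

ipv4_single = petabit_meters_per_sec(7.5, 16_000)   # July 2004 single-stream mark
fast_multi = petabit_meters_per_sec(6.86, 27_000)   # Nov 2004 FAST TCP mark
```

By this metric the Nov 2004 multi-stream mark (about 185 Pb·m/s) tops the July 2004 single-stream one (120 Pb·m/s) despite its lower raw rate, because of the much longer 27,000 km path.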
89
Tier0-Tier1 Local Area Network Planning
  • New 2.4 Tb/s backbone to interconnect
  • LHC experiments (CERN Tier0)
  • general purpose network
  • CERN Tier1
  • T0-T1 WAN (regional Tier1s)
  • Based on 10GE technology
  • Layer 3 interconnections
  • No central switch(es)
  • Redundancy via multiple 10GE paths (OSPF)

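The "redundancy via multiple 10GE paths (OSPF)" bullet refers to equal-cost multipath routing; a hedged sketch of the per-flow path selection idea (path names and hash inputs are illustrative, not CERN's actual design):

```python
import zlib

# Equal-cost paths as OSPF would see them; names are illustrative.
PATHS = ["10GE-path-A", "10GE-path-B", "10GE-path-C"]

def pick_path(src_ip: str, dst_ip: str, paths=PATHS) -> str:
    """Hash the flow identifier so every packet of a flow takes the same
    path (preserving packet order), while flows spread across all paths."""
    key = f"{src_ip}->{dst_ip}".encode()
    return paths[zlib.crc32(key) % len(paths)]

# If one 10GE path fails, OSPF withdraws it and flows re-hash over the rest:
surviving = [p for p in PATHS if p != "10GE-path-B"]
```

The design choice here is the one the slide implies: no central switch, so resilience comes from routing over parallel links rather than from redundant chassis hardware.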
90
11 LHC Tier1 Sites (as of 2005)
  • CNAF (Italy)
  • PIC (Spain)
  • CCIN2P3 (France)
  • RAL (UK)
  • FZK (Germany)
  • NIKHEF (Netherlands)
  • SNIC (Sweden)
  • BNL (U.S.)
  • FNAL (U.S.)
  • TRIUMF (Canada)
  • ASCC (Taiwan)
  • Planned
  • Korea (Kyungpook)
  • Brazil (UERJ)
  • perhaps others

91
T0-T1 network at CERN (LAN)
[Diagram — link legend: T0-T1 WAN (multiple
10GE), 10GE, External network (GbE).
Raw LHC data flows from the 4 LHC experimental
areas; the GPN and CERN Tier1 attach to the core.
Aggregation stages: 10GE → 88 x GbE (x3),
10GE → 32 x GbE, 10GE → n x 10GE.
~2000 Tape and Disk Servers; ~6000 CPU Servers]
92
Tier0 network (LHC experimental areas)
[Diagram: each of the 4 LHC Experiments connects
via a low-speed (management) link and high-speed
redundant 10GE (data) links to the T0-T1 LAN,
with paths to the T0-T1 WAN, CERN Tier1, the LHC
Experiment Control Network, the DAQ and the GPN]
93
T0-T1 WAN progress
  • A lot of progress has been made
  • 10 Gb/s equipment is commonly available (although
    not yet cheap) STM-64 (10GE WAN PHY), 10GE LAN
  • 10 Gb/s capacity (SDH, wavelength, WDM over dark
    fibre) is affordable
  • long-distance, high-speed TCP is feasible,
    although with special Linux tuning

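The "special Linux tuning" for long-distance, high-speed TCP is mostly about sizing socket buffers to the bandwidth-delay product; a minimal sketch (the 150 ms RTT is an assumed example, not from the source):

```python
def bdp_bytes(bandwidth_bps: float, rtt_s: float) -> float:
    """Bandwidth-delay product: bytes that must be in flight to fill the pipe."""
    return bandwidth_bps * rtt_s / 8.0

# Assumed example: a 10 Gb/s transatlantic path at ~150 ms RTT needs a
# ~187 MB TCP window -- far beyond stock kernel defaults, which is why
# sysctls such as net.core.rmem_max / net.ipv4.tcp_rmem get raised.
window = bdp_bytes(10e9, 0.150)
```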
94
A possible T0-T1 WAN network
[Diagram: Tier1 sites connect to the LHC LAN via
10GE or STM-64 links; Tier2 sites via 10GE or
multiple GbE; multiple 10GE to the External
network; a Data mover (spool) feeds the LHC LAN]
95
International ICFA Workshop on HEP Networking,
Grids and Digital Divide Issues for Global
e-Science
  • Workshop Goals
  • Review the current status, progress and barriers
    to effective use of major national, continental
    and transoceanic networks used by HEP
  • Review progress, strengthen opportunities for
    collaboration, and explore the means to deal with
    key issues in Grid computing and Grid-enabled
    data analysis, for high energy physics and other
    fields of data intensive science, now and in the
    future
  • Exchange information and ideas, and formulate
    plans to develop solutions to specific problems
    related to the Digital Divide in various regions,
    with a focus on Asia Pacific, as well as Latin
    America, Russia and Africa
  • Continue to advance a broad program of work on
    reducing or eliminating the Digital Divide, and
    ensuring global collaboration, as related to all
    of the above aspects.

96
LOC Web Pages
  • Kihyeon CHO (CHEP)
  • Daehee HAN (CHEP)
  • Dae Young KIM (ANF)
  • Donghee KIM (CHEP)
  • Dong Kyun KIM (KISTI)
  • Kihwan KWON (CHEP)
  • Jaehwa LEE (KOREN, KT)
  • Young Do OH (CHEP)
  • Dongchul SON (Chair, CHEP)
  • Jun-Suhk SUH (CHEP)
  • Jeong Eun YE (CHEP)

 
 
97
Some Personal Comments on the Digital Divide
(D.O. Williams, LISHEP2004)
  • I think that the digital divide issue is actually
    very important for the future stability of the
    world
  • I think that it will be very difficult to fix
  • Some of it is finding the right technologies for
    different areas
  • But a lot is about the structure of society
  • Reliable electrical power
  • Government transparency
  • Support for education and scientific research
  • and while the developed countries can give
    advice and try to help, the real directions
    can only be determined in the developing world

98
Work to Close the Digital Divide: Help Bring the
Needed Networks to All Regions
  • ICFA Members should work vigorously towards
    this goal: Locally, Nationally and
    Internationally
  • Why ?
  • Physicists from all world regions have the Right
    to be full partners
  • It is the basis of our global community, and
    our largest projects
  • Involvement of students, and outreach to the
    community is vital to our field. In modern
    times, this is founded on networks.
  • How ? We are the prototypical global
    community
  • Developments by HENP of Grids, state-of-the-art
    networks and systems for collaborative work on a
    worldwide scale represent a unique
    opportunity, for science and society
  • Work with SCIC and other cognizant
    organizations
  • And If We Don't?
  • We fail as the first global community in
    science

99
World Map/Website
  • Project to Build HENP World Network Map Updated
    and Maintained on a Web Site, Backed by Database
  • Systematize and Track Needs and Status
  • Share Information On
  • Links: Bandwidths, Pricing, Vendors,
    Technologies
  • Problems: Overloading (& Where), Quality,
    Peering, etc.
  • Requirements Are They Being Met ?
  • Identify Urgent Cases Focus on Opportunities to
    Help
  • Funding Did Not Materialize in 2003-4;
    Continue to Seek Help (Manpower) and Funds

100
VRVS on Windows
VRVS Meeting in 8 Time Zones
[Map sites: KEK (JP), Caltech (US), RAL (UK),
Brazil, CERN (CH), AMPATH (US), Pakistan,
SLAC (US), Canada]
49k hosts worldwide; Users in 105 Countries;
2x Growth/Year