1
U.S. ATLAS Computing Facilities Overview
  • US ATLAS Computing Advisory Panel Meeting
  • October 25-27, 2000
  • Bruce G. Gibbard, BNL

2
Facilities Presentations
  • Overview (B. Gibbard)
  • Computing requirements
  • Network considerations
  • Overall plan rationale
  • Details of Tier 1 (R. Baker)
  • Tier 1 configuration
  • Cost and schedule analysis
  • Details of Tier 2s and Grid (R. Gardner)
  • Tier 2 configuration
  • Grid plan
  • Cost and schedule analysis

3
US ATLAS Computing Facilities
  • Facilities procured, installed and operated
  • to meet U.S. MOU obligations to ATLAS
  • Direct IT capacity (Monte Carlo, for example)
  • Support for detector construction, testing,
    and calibration
  • Support for software development and testing
  • to enable effective participation by US
    physicists in the ATLAS physics program!
  • Direct access to and analysis of physics data
    sets
  • Simulation, re-reconstruction, and reorganization
    of data as required to complete such analysis

4
Setting the Scale
  • Significant Uncertainty in Defining Scale of
    ATLAS Computing Needs
  • Accelerator and detector far from completed
  • Five years of algorithm software development
  • Five years of computer technology evolution
  • For US ATLAS
  • Start from ATLAS Estimate of Requirements Model
    for Contributions
  • Adjust for US ATLAS perspective (experience,
    priorities and facilities model)

5
US ATLAS Perspective
  • US ATLAS facilities must be adequate to meet all
    reasonable U.S. ATLAS computing needs
    (Specifically, the U.S. role in ATLAS should not
    be constrained by a computing shortfall; it
    should be enhanced by computing strength)
  • There needs to be significant capacity beyond
    that formally committed to International ATLAS
    which can be allocated at the discretion of U.S.
    ATLAS among its analysis efforts

6
ATLAS Estimate (1)
  • New Estimate Made As Part of Hoffmann LHC
    Computing Review
  • Current draft: ATLAS Computing Resources
    Requirements, V1.5, by Alois Putzer
  • Assumptions for LHC / ATLAS detector performance

7
ATLAS Estimate (2)
  • Assumptions for ATLAS data

8
Architecture
  • Hierarchy of Grid-Connected Distributed Computing
    Resources (sketched below)
  • Primary ATLAS Computing Centre at CERN
  • Tier 0 and Tier 1
  • Remote Tier 1 Computing Centers
  • Remote Tier 2 Computing Centers
  • Institutional Computing Facilities
  • Individual Desk Top Systems
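An illustrative sketch (Python, not from the slides) of this hierarchy as a simple top-to-bottom structure:

  # Illustrative only: the tiered hierarchy named on this slide.
  TIERS = [
      "Tier 0 / Tier 1 at CERN (primary ATLAS computing centre)",
      "Remote Tier 1 computing centers",
      "Remote Tier 2 computing centers",
      "Institutional computing facilities",
      "Individual desktop systems",
  ]
  for depth, tier in enumerate(TIERS):
      print("  " * depth + tier)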

9
Functional definitions
  • Tier 0 Center at CERN
  • Storage of primary raw and ESD data
  • Reconstruction of raw data
  • Re-processing of raw data
  • Tier 1 Centers (At CERN and in some major
    countries)
  • Simulation and reconstruction of simulated data
  • Selection and redefinition of AOD
  • User Analysis
  • Tier 2 Centers
  • Less clearly defined; a reduced-scale (and
    reduced-functionality?) Tier 1

10
Required Tier 1 Capacities
  • Individual Tier 1 Center Capacities in 2006
  • CPU 209 kSI95 (SPECint95)
  • Disk 365 TBytes
  • Tape 1800 TBytes
  • Expect 6 Such Remote Centers
  • USA, FR, UK, IT, 2 to be determined
  • Capacity Ramp-up Profile
  • (Perhaps too much too early, as Rich Baker will
    discuss)

11
US ATLAS Facilities
  • Requirements Related Considerations
  • Analysis is the dominant Tier 1 activity
  • Experience shows analysis will be compute
    capacity limited
  • US scale is larger than that of other Tier 1
    countries by authors, by institutions, and by
    core fraction (×1.7, ×3.1, ×1.8), and so will
    require more analysis capacity than a single
    canonical Tier 1
  • The US effort is itself more distributed, and
    more isolated from CERN, than that of other
    current Tier 1 countries
  • US Tier 1 must be augmented by additional
    capacity (particularly for analysis)
  • Appropriate US facilities level is 2 × the
    canonical Tier 1 (see the arithmetic sketch
    below)
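A minimal worked-arithmetic sketch (Python, illustrative only) of what the 2 × target implies, using the 2006 per-center figures from the Required Tier 1 Capacities slide:

  # 2006 capacities of one canonical Tier 1 (slide 10).
  tier1 = {"cpu_kSI95": 209, "disk_TB": 365, "tape_TB": 1800}

  # US facilities target: twice the canonical Tier 1.
  us_target = {k: 2 * v for k, v in tier1.items()}
  print(us_target)  # {'cpu_kSI95': 418, 'disk_TB': 730, 'tape_TB': 3600}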

12
US ATLAS Facilities Plan
  • US ATLAS will have a Tier 1 Center, as defined by
    ATLAS, at Brookhaven
  • The Tier 1 will be augmented by 5 Tier 2 Centers
    of comparable aggregate capacity (per-site share
    sketched after this list)
  • This model will
  • exploit high performance US regional networks
  • leverage existing resources at Tier 2 sites
  • establish architectural support for the inclusion
    of institutional resources at other (non-Tier 2)
    sites
  • focus on analysis, both increasing capacity and
    encouraging local autonomy and creativity within
    the analysis effort
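Assuming the 5 Tier 2s split one canonical Tier 1's capacity evenly (an even split is an assumption here, for illustration only), the nominal per-site share works out to:

  # Nominal even split of one Tier 1's CPU and disk across 5 Tier 2s;
  # tape is omitted since Tier 2s focus on CPU and cache disk.
  tier1 = {"cpu_kSI95": 209, "disk_TB": 365}
  per_tier2 = {k: round(v / 5, 1) for k, v in tier1.items()}
  print(per_tier2)  # {'cpu_kSI95': 41.8, 'disk_TB': 73.0}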

13
Schematic of Model
14
Facilities WBS
15
Tier 1 (WBS 2.3.1)
  • The US ATLAS Tier 1 is now operational with
    significant capacities at BNL, in coordination
    with the RHIC Computing Facility (RCF)
  • There is broad commonality in requirements
    between ATLAS and RHIC, and long-term synergy
    with RCF is expected
  • Personnel and cost projections for US ATLAS
    facilities are heavily based on recent experience
    at RCF (see Rich Baker's talk)

16
Tier 1 (continued)
  • Full Tier 1 Functionality Includes ...
  • Dedicated High Bandwidth Connectivity to CERN
  • Primary Site for Storage/Serving
  • Cache/Replicate CERN and other data needed by US
    ATLAS
  • Computation
  • Primary Site for any US Re-reconstruction
    (perhaps the only site)
  • Major Site for Simulation and Analysis
  • Regional support plus catchall for those without
    a region
  • Repository of Technical Expertise and Support
  • Hardware, OSs, utilities, and other standard
    elements of U.S. ATLAS
  • Network, AFS, Grid, and other infrastructure
    elements of the WAN model

17
Tier 2 (WBS 2.3.2.7 - 2.3.2.11)
  • The standard Tier 2 configuration will focus on
    the CPU and cache disk required for analysis
  • Some Tier 2s will be custom-configured to
    leverage particularly strong institutional
    resources of value to ATLAS (the current
    assumption is that there will be 2 HSM-capable
    sites)
  • Initial Tier 2 selections (2 sites) will be based
    on their ability to contribute rapidly and
    effectively to the development and testing of
    this Grid computing architecture

18
Tier 2 (continued)
  • Developmental Phase
  • 1st Tier 2 will be selected and receive initial
    funding this year, FY 01
  • 2nd Tier 2 will receive initial funding in FY 02
  • These 2 developmental Tier 2 sites will
    participate in 2003 ATLAS MDC 2
  • Deployment Phase
  • Remaining 3 Tier 2s will be selected on the
    basis of a single proposal process in 2003 and
    will receive initial funding in 2004
  • All Tier 2s will be fully operational for 2006
  • See Rob Gardner's talk

19
Network
  • Tier 1 Connectivity to CERN and to Tier 2s is
    Critical to Facilities Model
  • Must have adequate bandwidth
  • Must eventually offer guaranteed and allocable
    bandwidth (dedicated and differentiated services)
  • Should grow with need; OC12 to CERN should be
    practical by 2005 (see the sketch after this
    list)
  • While the network is an integral part of the US
    ATLAS plan, its funding is not part of the US
    ATLAS budget
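For scale, a back-of-the-envelope ceiling (Python, illustrative only) on what a fully utilized OC12 link could move in a day:

  # OC12 line rate is 622.08 Mbit/s; real throughput would be lower
  # (protocol overhead, sharing, outages).
  OC12_BITS_PER_S = 622.08e6
  SECONDS_PER_DAY = 86_400
  tb_per_day = OC12_BITS_PER_S / 8 * SECONDS_PER_DAY / 1e12
  print(f"{tb_per_day:.1f} TB/day")  # ~6.7 TB/day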

20
WAN Configurations
21
US ATLAS Capacities
22
Risks and Contingency
  • Risk Factors
  • Requirements may change
  • Price/performance projections may be too
    optimistic
  • Tier 2 funding remains less than certain
  • Grid projects are complex and may be less
    successful than hoped
  • Have Developed Somewhat Conservative but Realistic
    Plans and Now Expect to Build Facilities to Cost
  • Contingency takes the form of reduction in scale
    (design is highly scalable)
  • Option to trade one type of capacity for another
    is retained until very late (80% of procured
    capacity occurs in '06)