ATLAS Computing Model

1
ATLAS Computing Model
  • LCG Resources Group
  • CERN, 26th March 2004
  • Roger Jones, ICB Chair

2
Inputs
  • ATLAS internal
  • FORTRAN/ZEBRA timings and event sizes
  • Current OO/C++ timings and sizes
  • Relevant expert inputs
  • External
  • LEP experience and analysis patterns
  • D0 computing model
  • Beginning to look at CDF computing model
  • Data Challenges
  • Test the computing model
  • Refine the code and timings
  • Efficiencies are not factored in (movable feast)

3
  • Nominal year: 10^7 s
  • Accelerator efficiency: 50%
4
Processing times
  • Reconstruction
  • Time/event for Reconstruction now: 60 kSI2k sec
  • We think we can reasonably recover a factor 4 at
    least
  • factor 2 from running only one default algorithm
  • factor 2 from optimization
  • Therefore now we take as reference 15 kSI2k
    sec/event
  • Simulation
  • Time/event for Simulation now: 400 kSI2k sec
  • This is the worst case right now.
  • For resource calculations, let us assume
  • factor 2 from optimization (work already in
    progress)
  • factor 2 on average from the mixture of different
    physics processes (and rapidity ranges)
  • Therefore now we take as reference 100 kSI2k
    sec/event
  • Number of simulated events needed: 10^8
    events/year (??)
  • Note: D0 simulate 10% of the real data rate, as
    assumed here
  • Generate samples about 3-6 times the size of
    their streamed AOD samples
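
As a rough cross-check of these per-event figures, the sketch below converts them into the steady-state capacity needed over the nominal 10^7 s year from slide 3. The raw-data event count (2 x 10^9/year) is an assumption made here for illustration and is not quoted on the slide.

    # Back-of-envelope capacity from the per-event timings on this slide.
    # ASSUMPTION: 2e9 raw events/year; the slide quotes only per-event costs
    # and the 1e8 simulated events/year.
    NOMINAL_YEAR_S = 1.0e7                  # nominal year, slide 3

    reco_ksi2k_s_per_event = 60.0 / 4       # factor 4 recovered -> 15 kSI2k.s/event
    sim_ksi2k_s_per_event = 400.0 / 4       # 2x optimisation, 2x physics mix -> 100 kSI2k.s/event

    raw_events_per_year = 2.0e9             # ASSUMPTION, not on the slide
    sim_events_per_year = 1.0e8             # slide 4

    reco_capacity = reco_ksi2k_s_per_event * raw_events_per_year / NOMINAL_YEAR_S
    sim_capacity = sim_ksi2k_s_per_event * sim_events_per_year / NOMINAL_YEAR_S

    print(f"first-pass reconstruction: {reco_capacity:.0f} kSI2k")  # ~3000 kSI2k
    print(f"simulation:                {sim_capacity:.0f} kSI2k")   # ~1000 kSI2k

Under that assumption the first-pass reconstruction load is about 3 MSI2k; shared across 6 Tier-1s for a reprocessing pass it is of the same order as the 508 kSI2k per external Tier-1 quoted on slide 11.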

5
Operation of Tier-0
  • The Tier-0 facility at CERN will have to
  • Hold a copy of all raw data to tape
  • Copy in real time all raw data to Tier-1s
    (second copy useful also for later reprocessing)
  • Keep calibration data on disk
  • Run first-pass calibration/alignment and
    reconstruction
  • Distribute ESDs to external Tier-1s
  • (1/3 to each one of 6 Tier-1s)
  • Currently under discussion:
  • Whether to archive simulated data (probably not)
  • Sharing of facilities between HLT and Tier-0
  • Reprocessing capacity versus architecture
  • Distribution of and access to conditions data

6
Operation of Tier-1s and Tier-2s
  • We envisage at least 6 Tier-1s for ATLAS. Each
    one will:
  • Keep on disk 1/3 of the ESDs and a full copy of
    the AODs and TAGs
  • Keep on tape 1/6 of Raw Data
  • Keep on disk 1/3 of currently simulated ESDs and
    on tape 1/6 of previous versions
  • Provide facilities for physics group controlled
    ESD analysis
  • Calibration and/or reprocessing of real data
    (once per year)
  • We estimate 4 Tier-2s (various sizes) for each
    Tier-1. Each one will:
  • Keep on disk a full copy of the TAGs and roughly
    one full AOD copy per four Tier-2s
  • Keep on disk a small selected sample of ESDs
  • Provide facilities (CPU and disk space) for user
    analysis and user simulation (25 users/Tier-2)
  • Run central simulation
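
The per-site fractions above fix the number of complete replicas of each data product across the Grid. A minimal sketch of that arithmetic, assuming exactly 6 Tier-1s and 4 Tier-2s per Tier-1 as on this slide:

    # Global replica counts implied by the per-site fractions on this slide.
    N_T1 = 6                               # Tier-1 centres
    N_T2 = 4 * N_T1                        # 4 Tier-2s per Tier-1 -> 24 Tier-2s

    esd_disk_copies = N_T1 * (1.0 / 3.0)   # each Tier-1 keeps 1/3 of the ESDs on disk
    raw_tape_copies = N_T1 * (1.0 / 6.0)   # each Tier-1 keeps 1/6 of the raw data on tape
    aod_disk_copies = N_T2 * (1.0 / 4.0)   # one full AOD copy per four Tier-2s

    print(esd_disk_copies)   # 2.0: two complete ESD copies outside CERN
    print(raw_tape_copies)   # 1.0: one complete raw copy, beyond the Tier-0 tape copy
    print(aod_disk_copies)   # 6.0: six complete AOD copies spread over the Tier-2s

The TAGs are simply kept in full at every Tier-1 and Tier-2.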

7
Analysis model (1)
  • Analysis model broken into two components
  • Central production of tuples and TAG collections
    from ESD
  • User analysis of group tuples/streams, new
    selections, etc., and total user simulation
    matching the official production
  • Central analysis
  • Assume 20 analysis WGs
  • Each analysis channel may require several
    iterations of event selections, starting with all
    TAGs, then with less AOD, much less ESD and very
    few RAW events
  • Physics working groups will report on the
    analysis model later this year
  • Experience from the DC2 exercise will provide
    useful information
  • 100 passes over ESD in total, producing tuples
  • Retain current and previous tuple on disk
  • Estimate data reduction to 10% of the full AOD,
    so 720 GB/group/annum
  • 0.5 kSI2k per event (estimate), quasi real time
    ⇒ 9 MSI2k
  • This is 3 times the first pass reconstruction load
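
The "3 times the first-pass reconstruction load" statement can be checked against the earlier sketch. With the same assumed 2 x 10^9 events/year (again an assumption, not a number from the slide), 100 passes at 0.5 kSI2k per event come out near the quoted 9 MSI2k:

    # Order-of-magnitude check of the central working-group analysis load.
    # ASSUMPTION: 2e9 events/year, as in the earlier sketch.
    events_per_year = 2.0e9
    nominal_year_s = 1.0e7

    passes_over_esd = 100          # total passes per year, all groups combined
    analysis_ksi2k_s = 0.5         # per event and pass (estimate on this slide)

    central_analysis = passes_over_esd * analysis_ksi2k_s * events_per_year / nominal_year_s
    first_pass_reco = 15.0 * events_per_year / nominal_year_s   # 15 kSI2k.s/event

    print(central_analysis / 1000.0)            # ~10 MSI2k, same order as the quoted 9 MSI2k
    print(central_analysis / first_pass_reco)   # roughly 3x the first-pass reconstruction load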

8
Analysis model (2)
  • User analysis
  • Each user (600 total) assumed to
  • Perform 1/N of the MC non-central simulation load
  • Perform analysis of WG samples and AOD
  • Analyse 10% of the WG statistics (1% of the
    total), 25 passes/year
  • Retain current and previous tuple on disk
    ⇒ 2 × 0.9 TB
  • 0.5 kSI2k per event, full year to process
    ⇒ 4.2 kSI2k
  • Perform private simulation, 1/600th of the total
    MC sample
  • 0.6 kSI2k required, 435 GB on disk and tape
  • Total requirement: 4.7 kSI2k and 1.5/1.5 TB
    disk/tape
  • Assume this is all done on T2s
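
Scaling the per-user total up with the user counts given on slides 6 and 9 reproduces the user-analysis terms that appear in the site totals later in the talk; a minimal check:

    # Per-user requirement (this slide) scaled to the quoted site totals.
    ksi2k_per_user = 4.7
    disk_tb_per_user = 1.5

    users_per_t2 = 25      # slide 6: 25 users per Tier-2
    users_at_cern = 100    # slide 9: 100 ATLAS users at the CERN facility

    print(users_per_t2 * ksi2k_per_user)     # ~118 kSI2k, the "Users" line of a regular Tier-2
    print(users_at_cern * ksi2k_per_user)    # ~470 kSI2k, close to the 472 kSI2k CERN figure
    print(users_per_t2 * disk_tb_per_user)   # ~38 TB of user disk at a regular Tier-2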

9
CERN Tier-2/1
  • In this model, CERN non-Tier-0 capacity is
    primarily for the large user community
  • Tier-0 and a large Tier-2?
  • Could do some reprocessing
  • but would just use the staging pool and Tier-0
    tape, with no extra copies
  • in any case, the external Tier-1s can cope
  • Do not assume simulation role
  • 100 ATLAS users assumed, 4× a normal Tier-2

10
CERN T1/2 in this Model
472 kSI2k required
11
External Tier 1
Processing for physics groups: 1500 kSI2k
Reconstruction: 508 kSI2k
12
A Regular T2
Simulation: 17 kSI2k
Reconstruction: 2 kSI2k
Users: 118 kSI2k
Total: 137 kSI2k
13
Summary (1)
Without Efficiencies
14
Summary (2)
Efficiencies (added during meeting):
Scheduled CPU: 85%
Chaotic CPU: 60%
Disk: 70%
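
The slide lists only the efficiency factors themselves; a common convention (assumed here, not stated on the slide) is to divide each raw requirement by the corresponding efficiency to obtain the capacity that must actually be installed. Applied to the regular Tier-2 figures from slide 12:

    # ASSUMPTION: installed capacity = raw requirement / efficiency.
    EFF_SCHEDULED_CPU = 0.85   # simulation and reconstruction (scheduled work)
    EFF_CHAOTIC_CPU = 0.60     # user analysis (chaotic work)
    # Disk requirements would be divided by 0.70 in the same way.

    # Regular Tier-2 raw requirements from slide 12, in kSI2k
    simulation, reconstruction, users = 17.0, 2.0, 118.0

    installed_cpu = (simulation + reconstruction) / EFF_SCHEDULED_CPU + users / EFF_CHAOTIC_CPU
    print(round(installed_cpu))   # ~219 kSI2k installed, versus the 137 kSI2k raw total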