Title: Slide 1
1 View of the ATLAS detector (under construction)
150 million sensors deliver data 40 million times per second
CERN, June 2007
2 (No Transcript)
3 ATLAS distributed data management software, Don Quijote 2 (DQ2)
4 The ATLAS full trigger rate is 780 MB/s, shared among 10 external Tier-1 sites, amounting to around 8 PetaBytes per year.
'Tier-0 exercise' of the ATLAS Distributed Data Management project, starting June 2007.
6 August 2007: the first PetaByte of simulated data copied to Tier-1s worldwide:
ASGC in Taiwan, BNL in the USA, CNAF in Italy, FZK in Germany, CC-IN2P3 in France, NDGF in Scandinavia, PIC in Spain, RAL in the UK, SARA in the Netherlands and TRIUMF in Canada.
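As a rough cross-check of these figures, the yearly volume follows directly from the sustained rate. The sketch below assumes a nominal data-taking year of about 10^7 seconds, a common LHC planning figure that is not stated on the slide.

```python
# Rough cross-check of the slide's figures (assumption: a nominal
# data-taking year of ~1e7 seconds, a common LHC planning figure
# not stated on the slide; decimal units, 1 PB = 1e9 MB).
RATE_MB_PER_S = 780        # full ATLAS trigger rate shared among the Tier-1s
SECONDS_PER_YEAR = 1.0e7   # assumed effective data-taking time per year

volume_pb = RATE_MB_PER_S * SECONDS_PER_YEAR / 1.0e9
print(f"{volume_pb:.1f} PB per year")   # ~7.8 PB, consistent with "around 8 PB"
```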
5 Computing Model: central operations
- Tier-0
- - Copy RAW data to Castor tape for archival
- - Copy RAW data to Tier-1s for storage and reprocessing
- - Run first-pass calibration/alignment (within 24 hrs)
- - Run first-pass reconstruction (within 48 hrs)
- - Distribute reconstruction output (ESDs, AODs, TAGs) to Tier-1s
- - Keep current versions of ESDs and AODs on disk for analysis
- Tier-1s
- - Store and take care of a fraction of the RAW data
- - Run slow calibration/alignment procedures
- - Rerun reconstruction with better calib/align and/or algorithms
- - Distribute reconstruction output to Tier-2s
- Tier-2s
- - Run simulation
- - Run calibration/alignment procedures
- - Keep current versions of AODs on disk for analysis
- - Run user analysis jobs
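Purely as an illustration, the division of responsibilities above can be restated as a simple mapping; the layout below is this document's own sketch, not an ATLAS data structure.

```python
# Illustrative summary of the responsibilities listed above; the
# dictionary layout is this document's own sketch, not ATLAS software.
COMPUTING_MODEL = {
    "Tier-0":  ["archive RAW to Castor tape",
                "copy RAW to Tier-1s",
                "first-pass calibration/alignment (within 24 h)",
                "first-pass reconstruction (within 48 h)",
                "distribute ESDs/AODs/TAGs to Tier-1s"],
    "Tier-1s": ["store a fraction of the RAW data",
                "slow calibration/alignment",
                "re-reconstruction with improved calib/align",
                "distribute output to Tier-2s"],
    "Tier-2s": ["simulation",
                "calibration/alignment",
                "keep current AODs on disk",
                "user analysis jobs"],
}

for tier, tasks in COMPUTING_MODEL.items():
    print(f"{tier}: " + "; ".join(tasks))
```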
6 Computing Model and Resources
- The ATLAS Computing Model is still the same as in the Computing TDR (June 2005) and basically the same as in the Computing Model document (Dec. 2004) submitted for the LHCC review in January 2005
- The sum of 30-35 Tier-2s will provide 40% of the total ATLAS computing and disk storage capacity
- - CPUs for full simulation productions and user analysis jobs
- - On average 12 for central simulation and analysis jobs
- - Disk for AODs, samples of ESDs and RAW data, and most importantly for selected event samples for physics analysis
- We do not ask Tier-2s to run any particular service for ATLAS in addition to providing the Grid infrastructure (CE, SE, etc.)
- - All data management services (catalogues and transfers) are run from Tier-1s
- Some larger Tier-2s may choose to run their own services, instead of depending on a Tier-1
- - In this case, they should contact us directly
- Depending on local expertise, some Tier-2s will specialise in one particular task
- - Such as calibrating a very complex detector that needs special access to particular datasets
7 ATLAS Analysis Work Model
- Job preparation
- Medium-scale testing
- Large-scale running
Local system (shell): Prepare JobOptions → Run Athena (interactive or batch) → Get Output
Analysis jobs must run where the input data files are, as transferring data files from other sites may take longer than actually running the job.
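To make the "prepare JobOptions → run Athena → get output" cycle concrete, here is a minimal sketch of what a JobOptions fragment might look like. The algorithm, package and input file names are hypothetical, and the configuration idioms follow typical Athena job options of that era rather than any specific release.

```python
# MyAnalysis_jobOptions.py -- illustrative sketch only; the algorithm,
# package and input file names below are hypothetical.
from AthenaCommon.AppMgr import theApp, ServiceMgr
from AthenaCommon.AlgSequence import AlgSequence

# Read an AOD (POOL format) that is already present at the local site,
# in line with "analysis jobs must run where the input data files are".
import AthenaPoolCnvSvc.ReadAthenaPool
ServiceMgr.EventSelector.InputCollections = ["AOD.pool.root"]  # hypothetical file

# Schedule a (hypothetical) user analysis algorithm.
topSequence = AlgSequence()
from MyAnalysis.MyAnalysisConf import MyAnalysisAlg  # hypothetical package
topSequence += MyAnalysisAlg("MyAnalysis")

theApp.EvtMax = 100  # small event count for the medium-scale testing step
```

Such a file would be run interactively with something like `athena.py MyAnalysis_jobOptions.py` for job preparation and medium-scale testing, and submitted essentially unchanged through the Grid tools for the large-scale step.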
8 ... but who contributes what?
9 Annex 6.4: Resources pledged
10 Annex 3.3: Tier-2 Services
The following services shall be provided by each of the Tier2 Centers in respect of the LHC Experiments that they serve:
i. provision of managed disk storage providing permanent and/or temporary data storage for files and databases
ii. provision of access to the stored data by other centers of the WLCG and by named AFs as defined in paragraph 1.4 of this MoU
iii. operation of an end-user analysis facility
iv. provision of other services, e.g. simulation, according to agreed Experiment requirements
11 v. ensure network bandwidth and services for data exchange with Tier1 Centres, as part of an overall plan agreed between the Experiments and the Tier1 Centres concerned.
All storage and computational services shall be grid-enabled according to standards agreed between the LHC Experiments and the regional centres.
The following parameters define the minimum levels of service. They will be reviewed by the operational boards of the WLCG Collaboration.
12 AUSTRIAN GRID: Grid Computing Infrastructure Initiative for Austria
Business Plan (Phase 2)
Jens Volkert, Bruno Buchberger (Universität Linz), Dietmar Kuhn (Universität Innsbruck)
March 2007
13 Austrian Grid II
Supported project:                          5.4 M
Contribution by groups from other sources:  5.1 M
Total:                                     10.5 M
Structure: Research Center, Development Center, Service Center
19 Work Packages: 1 Administration, 10 Basic Research, 8 Integrated Applications
14 PAK and representatives in the PMB: D. Kranzlmüller (VR G. Kotsis), W. Schreiner, Th. Fahringer
15 Austrian Grid Phase II (2007-2009)
C-MoU still not signed by Austria, but there is light at the end of the tunnel.
Proposal for a national federated Tier-2 (ATLAS + CMS) in Vienna: accepted 2008.
Launching project! Expected to be sustainable after 2010.
16 Personnel: 70 person-years in total, 15 for the Service Center, 4.5 for the federated Tier-2
5 FTE for the Service Center, 1.5 FTE for the federated Tier-2 (Innsbruck)
Vienna is expected to use presently vacant positions (estimate 1.5 FTE, too)
34 k/FTE/yr (51 k/yr for the Tier-2), i.e.:
  Hardware    1.053 k
  Personnel     153 k
  Total       1.206 k
Formalities:
  Fördervertrag (funding contract) signed Jan 08 in the Ministry
  Konsortialvertrag (consortium agreement) to be signed March 6 (?)
  C-MoU to be signed soon
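The personnel figures above are internally consistent if one assumes '.' is used as a thousands separator (so 1.053 k reads as 1,053 k) and a three-year project span; a quick check:

```python
# Quick consistency check of the figures above (assumptions: '.' as a
# thousands separator, a three-year span 2007-2009, amounts in k).
FTE_TIER2 = 1.5           # federated Tier-2 staffing
COST_PER_FTE_YEAR = 34    # k per FTE per year
YEARS = 3                 # 2007-2009

per_year = FTE_TIER2 * COST_PER_FTE_YEAR      # 51 k/yr, as quoted
personnel_total = per_year * YEARS            # 153 k

hardware_total = 1053                         # "Hardware 1.053 k"
print(personnel_total, hardware_total + personnel_total)  # 153.0 1206.0 ("Total 1.206 k")
```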
17 (No Transcript)
18 This graph shows a snapshot of data throughput at peak operation to the Tier-1 centres. Each bar represents the average throughput over 10 minutes, and the colours represent the 10 Tier-1 centres. We can see that the average throughput is fairly stable at around 600 MB/s, the equivalent of around 1 CD of data being shipped out of CERN every second. The rates we currently observe correspond on average to around 1 PetaByte per month, close to the final data rates needed for data taking.
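As a sanity check of the quoted numbers (assuming a nominal CD capacity of about 700 MB, a 30-day month, and decimal units with 1 PB = 1e9 MB):

```python
# Sanity check of the caption's rates (assumptions: CD capacity ~700 MB,
# a 30-day month, decimal units with 1 PB = 1e9 MB).
peak_rate_mb_s = 600
cd_size_mb = 700
print(peak_rate_mb_s / cd_size_mb)            # ~0.86 -> "around 1 CD per second"

seconds_per_month = 30 * 24 * 3600
pb_per_month_at_peak = peak_rate_mb_s * seconds_per_month / 1e9
print(f"{pb_per_month_at_peak:.1f} PB/month at a sustained 600 MB/s")  # ~1.6

# "around 1 PB per month" therefore corresponds to a month-long average
# of roughly 1e9 / seconds_per_month ~ 390 MB/s, below the 600 MB/s peak.
```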