Title: LHCb Italy, 2001-2003 Computing Plan
1. LHCb Italy, 2001-2003 Computing Plan
- Umberto Marconi and Domenico Galli
- INFN Sezione di Bologna
- Roma, 2001 May 22
2. Present Status
- LHCb Italy is finalizing the build-up of a Linux PC farm, consisting of 20 CPUs and a 1 TB NAS disk.
- Farm basic features:
  - Diskless dual-processor (800 MHz PIII) motherboards
  - Network boot (Intel PXE)
  - Rack mounted
  - Array of 14 80 GB EIDE disks configured in RAID-5
- Technical details are available in our LHCb note "Hardware architecture of the LHCb Italian Tier-1 prototype".
- LHCb software and production tools are installed.
- We received the funds from INFN at the end of February 2001.
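As a sanity check on the figures above, the following sketch relates the disk array to the quoted NAS capacity (disk and CPU counts are taken from this slide; the parity accounting is standard RAID-5 behaviour):

```python
# Sanity check of the farm storage figures quoted above.
N_DISKS = 14   # EIDE disks in the array (from the slide)
DISK_GB = 80   # GB per disk (from the slide)

# RAID-5 devotes one disk's worth of space to parity,
# so the usable capacity is (n - 1) disks.
raw_gb = N_DISKS * DISK_GB
usable_gb = (N_DISKS - 1) * DISK_GB

print(raw_gb, usable_gb)  # 1120 1040 -> roughly the 1 TB NAS quoted
```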
3. Present Status: Comments
- The farm is intended only for Monte Carlo data production; it is not an analysis tool.
- The short-term analysis model (2001) foresees that the analysis jobs are processed at CERN.
- DSTs produced by the regional centres will be shipped over the network to CERN, where they will be stored on tape and made available for analysis.
- Raw data produced in Italy will be stored on tape using CNAF's StorageTek robot (180 Eagle cassettes, 20 GB/cassette).
- A drawback is the present DST event size of 2 MB/event, which is unsuitable for large-scale analysis.
4. Intermezzo
- LHCb Computing Meeting
  - Bologna, June 13-15, 2001
  - F. Harris, U. Marconi
- Some topics:
  - LHCb analysis model
  - Integrating GRID middleware into the LHCb software
  - LHCb Italy plan for the INFN national centre
  - Computing IMoU
5. LHCb Trigger Fundamentals

Level   Input rate   Output rate   Suppression factor
L0      16 MHz       1 MHz         16
L1      1 MHz        40 kHz        25
L2/3    40 kHz       200 Hz        200

Suppression factors applied by the LHCb triggers to minimum-bias events.

Events     Light (%)   Charm (%)   Beauty (%)
Produced   90          10          0.6
After L0   86          13          1
After L1   79          15          6
After L2   44          23          33
After L3   0           0           100

Preliminary estimate of the event quark content at the various trigger levels.
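The suppression factors in the first table are simply the input/output rate ratios; a quick sketch to verify them and the overall rejection:

```python
# Each level's suppression factor is its input rate over its output rate
# (rates taken from the table above).
levels = {
    "L0":   (16e6, 1e6),    # 16 MHz -> 1 MHz
    "L1":   (1e6,  40e3),   # 1 MHz  -> 40 kHz
    "L2/3": (40e3, 200.0),  # 40 kHz -> 200 Hz
}

overall = 1.0
for name, (rate_in, rate_out) in levels.items():
    factor = rate_in / rate_out
    overall *= factor
    print(f"{name}: {factor:.0f}")

print(f"overall: {overall:.0f}")  # 16 * 25 * 200 = 80000
```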
6. L0/L1 Trigger Correlation
- Minimum-bias events passing L0 are more likely to pass L1 as well.

Figure: L1 retention efficiency versus L1 probability cut. Red line: minimum-bias events. Blue line: minimum-bias events passing the L0 trigger.
7. L0/L1 Trigger Correlation: Hit Distribution in the VELO (Vertex Locator)

Figures: hit distribution in the VELO for minimum-bias events, and for minimum-bias events passing the L0 trigger.
8. Trigger Correlation and L0/L1 Design: L1 becomes SL1 (Super L1)

Channel             ε(L1) TP   ε(L1|L0)   ε(L1|L0), highest pT
B → π+ π-           45         30         60
B → J/ψ(μ+ μ-) Ks   85         35         90 (μμ), 80 (ee)

- Events passing L0 produce an average number of hits in the VELO (Vertex Locator) largely exceeding that of unfiltered minimum-bias events.
- The L1 efficiency on the fundamental channels drops with respect to the Technical Proposal (TP).
- The L1 trigger improves its efficiency when using the pT of high-impact-parameter tracks.
- L0 will transmit the highest-pT candidates to L1.
9. Plan for 2001
- Increase the farm CPU power by adding 30 more CPUs (~50 Mlire).
- The data requirements for the trigger TDRs and the computing needs have been documented in a report to the referees, available at http://www.bo.infn.it/lhcb/calcolo/LHCbdocuments.html
- A CPU power of 2300 SI95 has been estimated therein.
- Contribute to the L0 and L1 trigger TDRs (expected by the end of 2001).
10. Major Objectives within 2002
- Production of Monte Carlo events for the L2/3 trigger study (TDR due by the beginning of 2003).
- Test of the OO software functionality (report to the collaboration by July 2002).
- First Data Challenge (July 2002):
  - Production of large data samples (10^6 events) in a short time (1 week), on top of the running analysis activities.
- Computing TDR by December 2002.
11. MC Background Data for the L2/3 Trigger
- The L2/3 trigger bandwidth has to be shared among several (~10) physics channels.
- An L2/3 total suppression of ~200 implies a minimum-bias suppression of ~2000 in each physics channel.
- A 10% uncertainty on the minimum-bias suppression factor requires N ≈ 10^5 events entering L2/3.
- These events first have to pass L0 and L1, so that N0 ≈ 8 × 10^7 events must be tracked through the whole detector, digitized and filtered.
- This can be achieved within 6 months by sharing the load among CERN, Lyon, Liverpool, Rutherford and INFN.
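The event counts above follow from simple Poisson counting. The sketch below assumes the 10% figure is the relative statistical uncertainty on the counted per-channel suppression factor (an assumption; the slide does not spell this out):

```python
# Poisson counting sketch for the sample-size estimate above.
# Assumption: a 10% relative statistical uncertainty on the counted
# suppression factor requires 1/sqrt(k) <= 0.10 surviving events.
CHANNELS = 10            # physics channels sharing the L2/3 bandwidth
SUPP_L23 = 200           # overall L2/3 minimum-bias suppression
L0_SUPP, L1_SUPP = 16, 25

supp_per_channel = SUPP_L23 * CHANNELS        # ~2000 per channel
k = int(round((1 / 0.10) ** 2))               # >= 100 surviving events
n_into_l23 = k * supp_per_channel             # events entering L2/3
n_generated = n_into_l23 * L0_SUPP * L1_SUPP  # events to generate

print(n_into_l23)   # 200000, of order the N ~ 10^5 quoted
print(n_generated)  # 80000000, i.e. the N0 ~ 8e7 quoted
```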
12. Background for L2/3
- 10^5 events entering L2/3 require:
  - Storage size ≈ 10^5 events × 2 MB/event = 2 × 10^5 MB = 200 GB
  - Computing time per event ≈ 1 minute/event
  - Total computing time ≈ 16 × 25 × 10^5 minutes
  - Total computing time per CPU ≈ 16 × 25 × 10^5 / 50 minutes = 8 × 10^5 minutes ≈ 10^4 hours ≈ 500 days
  - Total computing time per CPU per centre ≈ 500/5 = 100 days
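The arithmetic above can be replayed step by step (1 minute/event, 50 CPUs and 5 centres as on the slide):

```python
# Replay of the computing-time arithmetic above.
MIN_PER_EVENT = 1.0        # minutes of CPU per generated event
N_INTO_L23 = 1e5           # events entering L2/3
L0_SUPP, L1_SUPP = 16, 25  # events to generate = 16 * 25 * 1e5
CPUS = 50
CENTRES = 5

total_min = L0_SUPP * L1_SUPP * N_INTO_L23 * MIN_PER_EVENT  # 4e7 min
per_cpu_min = total_min / CPUS                              # 8e5 min
per_cpu_days = per_cpu_min / 60 / 24                        # ~556 days

print(round(per_cpu_min), round(per_cpu_days), round(per_cpu_days / CENTRES))
# 800000 556 111 -- matching the ~8e5 min, ~500 days and ~100 days/centre above
```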
13. Plan for 2002 (towards the Italian Tier-1 prototype)
- Promote Bologna to an analysis centre:
  - Adding disk (2 TB), best suited for chaotic access.
  - Maintaining the CPU capacity acquired during 2001 (assuming that the complete power of 50 CPUs will be operating).
  - Moving part of the CPUs to serve user analysis job submission.
- Testing and developing tools to integrate the national centres into a distributed/integrated analysis centre:
  - Makes it possible to run analysis jobs on an entire data set distributed among several centres.
14. Plan for 2003
- By this time we will need computing power mainly for Data Challenge 2.
- The INFN national centre will allow us to perform such technical tests.
- Otherwise, a huge amount of computing resources would be requested for tests, without any motivation from physics production.
15. Physics Studies
- The present interest of the Italian groups focuses on:
  - LHCb's performance in searching for a light Higgs (mH ≈ 120 GeV) (Bologna-Milano)
  - Searching for b → s l+ l- dilepton events (Roma)
  - Studying Bs → μ+ μ- and D0 → μ+ μ- (Roma)
  - Studying the b → s γ process, in the case Bd → K* γ (Bologna)
  - Studying Bd → J/ψ(→ e+ e-) Ks (Bologna)
- Development in detector software:
  - L0 electromagnetic calorimeter trigger (Bologna)
  - Muon identification (μ-Italy)
  - Tagging kaon identification (Milano)
  - Implementation of the RICH in GEANT4 (Milano)