Title: High Level Triggering
1. High Level Triggering
2. High Level Triggering (HLT)
- Introduction to triggering and HLT systems
- Why do we Trigger
- Why do we use Multi-Level Triggering
- Brief description of typical 3 level trigger
- Case study of the ATLAS HLT (some comparisons with other experiments)
- Summary
3. Why do we Trigger, and why Multi-Level?
- Over the years experiments have focussed on rarer processes
  - Need large statistics of these rare events
- DAQ systems (and off-line analysis capability) are under increasing strain, limiting the useful event statistics
- The aim of the trigger is to record just the events of interest
  - i.e. a trigger system which selects the events we wish to study
- Originally, the detector was only read out if the trigger was satisfied
  - Larger detectors and slow serial read-out -> large dead-time
  - Also increasingly difficult to select the interesting events
- So multi-level triggers and parallel read-out were introduced
  - At each level, apply increasingly complex algorithms to obtain better event selection / background rejection
- These have
  - Led to a major reduction in dead-time, which was the major issue
  - Managed the growth in data rates; this remains the major issue
4. Summary of ATLAS Data Flow Rates
- From detectors: > 10^14 Bytes/sec
- After Level-1 accept: ~10^11 Bytes/sec
- Into event builder: ~10^9 Bytes/sec
- Onto permanent storage: ~10^8 Bytes/sec -> ~10^15 Bytes/year
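The slide's rate reductions can be checked with a few lines of arithmetic. This is a sketch using the ATLAS numbers quoted above; the 10^7 seconds of beam per year is a common rule of thumb, not a figure from the slide.

```python
# Data-rate reductions between trigger stages (ATLAS figures from the slide).
rates = {
    "detector": 1e14,  # Bytes/s off the detectors
    "level1":   1e11,  # Bytes/s after a Level-1 accept
    "builder":  1e9,   # Bytes/s into the event builder
    "storage":  1e8,   # Bytes/s onto permanent storage
}

stages = list(rates)
for a, b in zip(stages, stages[1:]):
    # Reduction factor achieved between successive stages.
    print(f"{a} -> {b}: reduction x{rates[a] / rates[b]:.0f}")

# Assuming ~10^7 s of data-taking per year (a standard rule of thumb):
seconds_per_year = 1e7
print(f"storage/year ~ {rates['storage'] * seconds_per_year:.0e} Bytes")
```

The product 10^8 Bytes/s x 10^7 s reproduces the ~10^15 Bytes/year quoted on the slide.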
5. The evolution of DAQ systems
6. TDAQ Comparisons
7. Level 1
- Time: a few microseconds
- Hardware based
- Uses fast detectors and fast algorithms
- Reduced granularity and precision
  - calorimeter energy sums
  - tracking by masks
- During the Level-1 decision time, event data is stored in the front-end electronics
  - at the LHC a pipeline is used, as the interval between collisions is shorter than the Level-1 decision time
- For details of Level-1 see Dave Newbold's lecture 2 weeks ago
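The pipeline requirement above follows directly from the two time scales involved. A minimal sketch, using the 25 ns bunch spacing and the ~2.5 microsecond Level-1 latency quoted elsewhere in these slides:

```python
# Why Level-1 needs a pipeline: new bunch crossings arrive every 25 ns,
# but the Level-1 decision takes ~2.5 us, so each front-end channel must
# buffer the data of every crossing until its decision arrives.
bunch_spacing_ns = 25     # time between bunch crossings at the LHC
lvl1_latency_us = 2.5     # Level-1 decision latency (ATLAS)

# Minimum pipeline depth = latency / bunch spacing.
depth = int(lvl1_latency_us * 1000 / bunch_spacing_ns)
print(depth)  # each pipeline must hold ~100 crossings
```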
8. Level 2
- Previously
  - a few milliseconds (1-100)
  - Dedicated microprocessors, adjustable algorithms
    - 3-D, fine-grain calorimetry
    - tracking, matching
    - topology
  - Different sub-detectors handled in parallel
  - Primitives from each detector may be combined in a global trigger processor or passed to the next level
- 2009
  - a few milliseconds (10-100)
  - Processor farm of Linux PCs
  - Partial events received via a high-speed network
  - Specialised algorithms
  - Each event allocated to a single processor, with a large farm of processors to handle the rate
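The "large farm of processors" can be sized with a back-of-the-envelope queueing estimate (Little's law: busy processors = input rate x mean processing time). This is an illustrative sketch using the 75 kHz Level-1 output rate and 40 ms Level-2 budget quoted later in these slides, not an ATLAS sizing document.

```python
# Rough farm-sizing estimate for a Level-2 processor farm.
# Little's law: the average number of events "in flight" (and hence
# the number of concurrently busy processing slots) is rate x time.
lvl1_accept_rate_hz = 75_000   # Level-1 output rate (ATLAS figure)
lvl2_mean_time_s = 0.040       # Level-2 average time budget (ATLAS figure)

busy_slots = lvl1_accept_rate_hz * lvl2_mean_time_s
print(busy_slots)  # ~3000 concurrently busy Level-2 processing slots
```

Only the average matters here, which is why (as noted on a later slide) a small fraction of events may take much longer than the budget.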
9. Level 3
- milliseconds to seconds
- processor farm
  - Previously microprocessors/emulators
  - Now standard server PCs
- full or partial event reconstruction
  - after event building (collection of all data from all detectors)
- Each event allocated to a single processor, with a large farm of processors to handle the rate
10. Summary of Introduction
- For many physics analyses, the aim is to obtain as high statistics as possible for a given process
- We cannot afford to handle or store all of the data a detector can produce!
- The Trigger
  - selects the most interesting events from the myriad of events seen
  - i.e. makes better use of the limited output bandwidth
  - throws away less interesting events
  - keeps all of the good events (or as many as possible)
- Must get it right
  - any good events thrown away are lost for ever!
- The High Level Trigger allows
  - More complex selection algorithms
  - Use of all detectors, with full granularity and full precision data
11. Case study of the ATLAS HLT system
- Concentrate on issues relevant for ATLAS (CMS has very similar issues), but try to address some more general points
12. Starting points for any HLT system
- The physics programme of the experiment
  - what are you trying to measure?
- The accelerator parameters
  - what rates and structures?
- Detector and trigger performance
  - what data is available?
  - what trigger resources do we have to use it?
13. Physics at the LHC
Interesting events are buried in a sea of soft interactions:
- B physics
- High-energy QCD jet production
- top physics
- Higgs production
14. The LHC and ATLAS/CMS
- The LHC has
  - design luminosity 10^34 cm^-2 s^-1 (in 2009 from ~10^30 to 10^32?)
  - bunch separation 25 ns (bunch length ~1 ns)
- This results in
  - ~23 interactions / bunch crossing
  - ~80 charged particles (mainly soft pions) / interaction
  - ~2000 charged particles / bunch crossing
- Total interaction rate ~10^9 s^-1
  - b-physics fraction ~10^-3 -> ~10^6 s^-1
  - top-physics fraction ~10^-8 -> ~10 s^-1
  - Higgs fraction ~10^-11 -> ~10^-2 s^-1
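The per-process rates above are just the total interaction rate multiplied by each process fraction; a quick sketch of the slide's arithmetic:

```python
# Production rate of each process = total interaction rate x its fraction
# of the cross-section (figures from the slide).
total_rate_hz = 1e9
fractions = {"b-physics": 1e-3, "top": 1e-8, "Higgs": 1e-11}

for process, f in fractions.items():
    print(f"{process}: {total_rate_hz * f:.0e} /s")
```

The ~10^-2 Hz Higgs rate against a 10^9 Hz interaction rate is what makes the trigger selection so demanding.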
15. Physics programme
- Higgs signal extraction is important, but very difficult
- There is lots of other interesting physics
  - B physics and CP violation
  - quarks, gluons and QCD
  - top quarks
  - SUSY
  - new physics
- The programme will evolve with luminosity, HLT capacity and understanding of the detector
  - low luminosity (2009-2010)
    - high-pT programme (Higgs etc.)
    - b-physics programme (CP measurements)
  - high luminosity (2011?)
    - high-pT programme (Higgs etc.)
    - searches for new physics
16. Trigger strategy at the LHC
- To avoid being overwhelmed, use signatures with small backgrounds
  - Leptons
  - High-mass resonances
  - Heavy quarks
- The trigger selection looks for events with
  - Isolated leptons and photons
  - tau-, central- and forward-jets
  - high ET
  - missing ET
17. Example Physics signatures
18. ARCHITECTURE
- Trigger / DAQ: 40 MHz input, ~1 PB/s (equivalent)
- Three logical levels, hierarchical data-flow
  - LVL1 - Fastest: only Calo and Mu, hardwired; on-detector electronics pipelines; ~2.5 us
  - LVL2 - Local: LVL1 refinement, track association; event fragments buffered in parallel; ~40 ms
  - LVL3 - Full event: offline-type analysis; full event in processor farm; ~4 sec
19. Selected (inclusive) signatures
20. Trigger design: Level-1
- Level-1
  - sets the context for the HLT
  - reduces the trigger rate to 75 kHz
- Uses limited detector data
  - Fast detectors (Calo, Muon)
  - Reduced granularity
- Triggers on inclusive signatures
  - muons
  - em/tau/jet calo clusters; missing and sum ET
- Hardware trigger
  - Programmable thresholds
  - CTP selection based on multiplicities and thresholds
21. Level-1 Selection
- The Level-1 trigger is
  - an OR of a large number of inclusive signals
  - set to match the current physics priorities and beam conditions
- The precision of cuts at Level-1 is generally limited
- Adjust the overall Level-1 accept rate (and the relative frequency of different triggers) by
  - Adjusting thresholds
  - Pre-scaling higher-rate triggers (e.g. only accept every 10th trigger of a particular type)
- Pre-scaling can also be used to include a low rate of calibration events
- The menu can be changed at the start of a run
- Pre-scale factors may change during the course of a run
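Pre-scaling is simple enough to show in a few lines. A minimal sketch of a pre-scale counter (the class name and interface are illustrative, not ATLAS trigger code):

```python
# A pre-scaler with factor N accepts only every N-th trigger of its type,
# reducing that trigger's rate by a factor N.
class Prescaler:
    def __init__(self, factor: int):
        self.factor = factor
        self.count = 0

    def accept(self) -> bool:
        self.count += 1
        if self.count == self.factor:
            self.count = 0
            return True   # every factor-th trigger is kept
        return False      # the rest are discarded

ps = Prescaler(10)
accepted = sum(ps.accept() for _ in range(1000))
print(accepted)  # 100: a prescale of 10 keeps 1 in 10 triggers
```

The same mechanism, run in pass-through mode, provides the unbiased samples used later for efficiency monitoring.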
22. Example Level-1 Menu for 2x10^33
23. Trigger design: Level-2
- Level-2 reduces the trigger rate to ~2 kHz
- Note: CMS does not have a physically separate Level-2 trigger, but the HLT processors include a first stage of Level-2 algorithms
- The Level-2 trigger has a short time budget
  - ATLAS: 40 ms average
  - Note: for Level-1 the time budget is a hard limit for every event; for the High Level Trigger it is the average that matters, so it is OK for a small fraction of events to take much longer than this average
- Full detector data is available, but to minimise the resources needed
  - Limit the data accessed
  - Only unpack detector data when it is needed
  - Use information from Level-1 to guide the process
  - Analysis proceeds in steps, with the possibility to reject the event after each step
  - Use custom algorithms
24. Regions of Interest
- The Level-1 selection is dominated by local signatures (i.e. within a Region of Interest - RoI)
  - Based on coarse-granularity data from calo and mu only
- Typically there are 1-2 RoIs/event
- ATLAS uses RoIs to reduce the network bandwidth and processing power required
25. Trigger design: Level-2 - contd
- Processing scheme
  - extract features from sub-detector data in each RoI
  - combine features from one RoI into an object
  - combine objects to test the event topology
- Precision of Level-2 cuts
  - The emphasis is on very fast algorithms with reasonable accuracy
  - Do not include many of the corrections which may be applied off-line
  - Calibrations and alignment available for the trigger are not as precise as those available off-line
26. ARCHITECTURE
- FE Pipelines: ~2.5 us
- HLT
27. CMS Event Building
- CMS performs Event Building after Level-1
- This simplifies the architecture, but places much higher demands on the technology
  - Network traffic ~100 GB/s
  - Uses Myrinet instead of GbE for the EB network
  - Plans a number of independent slices, with a barrel shifter to switch to a new slice at each event
- Time will tell which philosophy is better
28. Example: Two-electron trigger
- LVL1 triggers on two isolated e/m clusters with pT > 20 GeV (possible signature: Z -> ee)
- HLT Strategy
  - Validate step-by-step
  - Check intermediate signatures
  - Reject as early as possible
- The sequential/modular approach facilitates early rejection
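The step-wise validation above can be sketched as a chain of hypothesis tests: cheap checks run first, and the event is rejected at the first failing step so that expensive steps only run on surviving events. Step names and thresholds below are illustrative, not the ATLAS algorithm names.

```python
# Sequential HLT-style chain: run each test in order and stop at the
# first failure ("reject as early as possible").
def run_chain(event, steps):
    for name, test in steps:
        if not test(event):
            return f"rejected at {name}"
    return "accepted"

# Illustrative two-electron chain (thresholds are made up for the example).
steps = [
    ("calo clusters", lambda e: e["clusters"] >= 2),
    ("cluster Et",    lambda e: min(e["cluster_et"]) > 20.0),  # GeV
    ("track match",   lambda e: e["matched_tracks"] >= 2),
]

event = {"clusters": 2, "cluster_et": [25.0, 22.0], "matched_tracks": 1}
print(run_chain(event, steps))  # rejected at track match
```

Ordering the steps from cheapest to most expensive is what makes the modular approach pay off in CPU time.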
29. Trigger design: Event Filter / Level-3
- The Event Filter reduces the trigger rate to ~200 Hz
- Event Filter budget: ~4 sec average
- Full event data is available, but to minimise the resources needed
  - Only unpack detector data when it is needed
  - Use information from Level-2 to guide the process
  - Analysis proceeds in steps, with the possibility to reject the event after each step
  - Use optimised off-line algorithms
30. Electron slice at the EF
- TrigCaloRec (wrapper of CaloRec)
- EFCaloHypo
- EF tracking (wrapper of newTracking)
- EFTrackHypo
- TrigEgammaRec (wrapper of EgammaRec): matches electromagnetic clusters with tracks and builds egamma objects
- EFEgammaHypo
31. Trigger design: HLT strategy
- Level 2
  - confirm Level 1; some inclusive, some semi-inclusive, some simple topology triggers; vertex reconstruction (e.g. two-particle mass cuts to select Zs)
- Level 3
  - confirm Level 2; more refined topology selection; near off-line code
32. Example HLT Menu for 2x10^33
33. Example B-physics Menu for 10^33
- LVL1
  - MU6, rate ~24 kHz (note: there are large uncertainties in the cross-section)
    - in case of larger rates use MU8 -> ~1/2 x rate
  - 2MU6
- LVL2
  - Run muFast in the LVL1 RoI -> ~9 kHz
  - Run ID reconstruction in the muFast RoI -> mu6 (combined muon ID) -> ~5 kHz
  - Run TrigDiMuon seeded by the mu6 RoI (or MU6)
  - Make exclusive and semi-inclusive selections using loose cuts: B(mumu), B(mumu)X, J/psi(mumu)
  - Run IDSCAN in the Jet RoI, make selection for Ds(PhiPi)
- EF
  - Redo muon reconstruction in the LVL2 (LVL1) RoI
  - Redo track reconstruction in the Jet RoI
  - Selections for B(mumu), B(mumuK), B(mumuPhi), Bs -> DsPhiPi etc.
34. Matching problem
35. Matching problem (cont.)
- Ideally
  - off-line algorithms select a phase space which shrink-wraps the physics channel
  - trigger algorithms shrink-wrap the off-line selection
- In practice, this doesn't happen
  - need to match the off-line algorithm selection
- For this reason many trigger studies quote the trigger efficiency wrt events which pass the off-line selection
- BUT off-line can change the algorithm, re-process and recalibrate at a later stage
- So, make sure the on-line algorithm selection is well known, controlled and monitored
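The "efficiency wrt off-line" quoted above is just the fraction of off-line-selected events that also fire the trigger. A minimal sketch with illustrative event records:

```python
# Trigger efficiency measured with respect to the off-line selection:
# of the events passing the off-line cuts, what fraction also passed
# the trigger?
def trigger_efficiency(events):
    offline = [e for e in events if e["passes_offline"]]
    fired = [e for e in offline if e["passes_trigger"]]
    return len(fired) / len(offline) if offline else 0.0

# Illustrative sample: 90 events pass both, 10 pass off-line only,
# 5 pass the trigger only (these do not enter the denominator).
events = (
    [{"passes_offline": True,  "passes_trigger": True}] * 90
  + [{"passes_offline": True,  "passes_trigger": False}] * 10
  + [{"passes_offline": False, "passes_trigger": True}] * 5
)
print(trigger_efficiency(events))  # 0.9
```

Note the caveat from the slide: if the off-line selection is later reprocessed or recalibrated, this number changes even though the on-line selection did not.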
36. Selection and rejection
- As the selection criteria are tightened
  - background rejection improves
  - BUT event selection efficiency decreases
37. Selection and rejection
- Example of an ATLAS Event Filter (i.e. Level-3) study of the effectiveness of various discriminants used to select 25 GeV electrons from a background of dijets
38. Other issues for the Trigger
- Efficiency and Monitoring
  - In general need a high trigger efficiency
  - For many analyses also need a well-known efficiency
  - Monitor the efficiency by various means
    - Overlapping triggers
    - Pre-scaled samples of triggers in tagging mode (pass-through)
- To assist with the overall normalisation, ATLAS divides each run into periods of a few minutes called luminosity blocks
  - During each block the beam luminosity should be constant; any blocks with a known problem can also be excluded
- Final detector calibration and alignment constants are not available immediately
  - keep them as up-to-date as possible, and allow for the lower precision of the trigger cuts when defining trigger menus and in subsequent analyses
- Code used in the trigger needs to be very robust: low memory leaks, low crash rate, fast
39. Other issues for the Trigger - contd
- Beam conditions and HLT resources will evolve over several years (for both ATLAS and CMS)
  - In 2009 the luminosity is low, but the HLT capacity will also be < 50% of the full system
- For details of the current ideas on ATLAS menu evolution see
  - https://twiki.cern.ch/twiki/bin/view/Atlas/TriggerPhysicsMenu
  - Gives details of the menus for startup (including single-beam running), 10^31, 10^32, 10^33
  - The description of the current menu is very long!
  - Trigger Workshop next week in Beatenberg
    - One aim: to reduce the menus to something more manageable for early running
- Corresponding information for CMS is at
  - https://twiki.cern.ch/twiki/bin/view/CMS/TriggerMenuDevelopment
40. Summary
- High-level triggers allow complex selection procedures to be applied as the data is taken
  - Thus they allow large samples of rare events to be recorded
- The trigger stages, in the ATLAS example
  - Level 1 uses inclusive signatures (muons; em/tau/jet; missing and sum ET)
  - Level 2 refines the Level 1 selection, adds simple topology triggers, vertex reconstruction, etc.
  - Level 3 refines Level 2, adds more refined topology selection
- Trigger menus need to be defined taking into account
  - Physics priorities, beam conditions, HLT resources
  - Include items for monitoring trigger efficiency and calibration
  - Try to match the trigger cuts to the off-line selection
- The trigger efficiency should be as high as possible and well monitored
- Must get it right: events thrown away are lost for ever!
- Triggering is closely linked to physics analyses, so enjoy!
41. Additional Foils
42. ATLAS HLT Hardware
- Each rack of HLT (XPU) processors contains
  - ~30 HLT PCs (very similar to Tier-0/1 compute nodes)
  - 2 Gigabit Ethernet switches
  - a dedicated Local File Server
- The final system will contain ~2300 PCs
43. [Photo: HLT racks in SDX1, 2nd floor, rows 3 and 2 - LFS nodes, UPS for the CFS, XPUs, CFS nodes]
44. Naming Convention
- First Level Trigger (LVL1) signatures are in capitals (e.g. MU6); HLT signatures are in lower case (e.g. mu6)
- New in 13.0.30
  - the threshold is the cut value applied
  - previously it was the 95% efficiency point
- For more details see https://twiki.cern.ch/twiki/bin/view/Atlas/TriggerPhysicsMenu
45. What is a minimum bias event?
- An event accepted with the only requirement being activity in the detector, with a minimal pT threshold (~100 MeV); zero-bias events have no requirements at all
  - e.g. scintillators at L1; (> 40 SCT space-points or > 900 Pixel clusters) at L2
- A minimum bias event is most likely to be either
  - a low-pT (soft) non-diffractive event
  - a soft single-diffractive event
  - a soft double-diffractive event
  - (some people do not include the diffractive events in the definition!)
- It is characterised by
  - having no high-pT objects (jets, leptons, photons)
  - being isotropic
    - low-pT tracks at all phi in a tracking detector
    - uniform energy deposits in the calorimeter as a function of rapidity
- These events occur in ~99.999% of collisions. So if any given crossing has two interactions, and one of them has been triggered due to a high-pT component, then the likelihood is that the accompanying event will be a dull minimum bias event.
46. L1 Rates, 10^31, 14.4.0
- Removing overlaps between single and multi EM gives 18 kHz
- Total estimated L1 rate with all overlaps removed is 12 kHz
47. L2 Rates, 10^31, 14.4.0
- "X -> anything" includes JE, TE, and anything with MET except taus; Bphys includes Bjet
- Manually prescaled off: pass-through triggers mu4_tile, mu4_mu6
- Total estimated L2 rate with all overlaps removed is 840 Hz
48. EF Rates, 10^31, 14.4.0
- 91 Hz of the total is in prescaled triggers; 51 Hz of the prescaled triggers is unique rate
- Total estimated EF rate with all overlaps removed is 250 Hz
49. L1 Rates, 10^32, 14.4.0
- Total estimated L1 rate with all overlaps removed is 46 kHz
50. L2 Rates, 10^32, 14.4.0
- Total estimated L2 rate with all overlaps removed is 1700 Hz (too high!)
51. EF Rates, 10^32, 14.4.0
- Total estimated EF rate with all overlaps removed is 390 Hz (fixing L2 will likely come close to fixing EF as well)