Title: Status of Online Neural Networks
1. Status of Online Neural Networks
- Bruce Denby
- Université de Versailles and Laboratoire des Instruments et Systèmes, Paris, France
- Rapporteur's Presentation
- ACAT2000, Fermilab, 16-20 October 2000
2. OUTLINE OF THE PRESENTATION
- The current situation
- Developments foreseen
- Neural net hardware
- Conclusions
3. Acknowledgements
- Most of my transparencies were borrowed from the talks of:
- Sotirios Vlachos
- Erez Etzion
- Jean-Christophe Prévotet
- Christian Kiesling
- Bertrand Granado
4. The Current Situation
Neural network triggers are being used to produce physics. Examples:
1) DIRAC Experiment at the CERN PS
2) H1 Experiment at HERA
6. The DIRAC Experiment
- 34 GeV p on target
- Measure lifetime of pionium
- Hodoscope input to NN
7.
- The network is trained to select low-Q events
8.
- Net architecture: 55-2-1
- Note that the multiply/accumulate and sigmoid evaluation are done using look-up table memories.
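The look-up-table idea can be sketched in a few lines of Python. This is an illustrative toy under stated assumptions, not the DIRAC implementation: the 55-2-1 shape comes from the slide, but the weights, the input bit pattern, and the table resolution are invented here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative 55-2-1 net; these weights are random placeholders, whereas
# the real trigger's weights came from training to select low-Q events.
W1 = rng.normal(size=(2, 55))
W2 = rng.normal(size=(1, 2))

# Look-up table replacing the sigmoid: one memory read instead of exp().
N, LO, HI = 1024, -16.0, 16.0
LUT = 1.0 / (1.0 + np.exp(-np.linspace(LO, HI, N)))

def sigmoid_lut(s):
    """Quantize the accumulated sum into a table address and read it out."""
    idx = np.clip(((s - LO) / (HI - LO) * (N - 1)).astype(int), 0, N - 1)
    return LUT[idx]

def forward(x):
    h = sigmoid_lut(W1 @ x)        # multiply/accumulate, then table look-up
    return sigmoid_lut(W2 @ h)[0]  # single output neuron

x = rng.integers(0, 2, size=55).astype(float)  # toy hodoscope hit pattern
out = forward(x)
print(out)  # a value in (0, 1) used as the trigger decision variable
```

In the actual hardware even the multiply/accumulate is done with look-up table memories, so a trigger decision costs only a handful of memory-access times.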
9.
- i.e., it works.
10. The H1 Neural Network Trigger Project
Christian Kiesling, Max-Planck-Institut für Physik, München, Germany
11. The H1 Experiment at HERA
- Study:
  - the structure of the nucleon
  - the fundamental interactions of quarks and gluons: Quantum Chromodynamics (QCD), electroweak interference
- Search:
  - for physics beyond the Standard Model

Physics analysis: measurements of the structure functions F2, FL, F3, F2D; jet measurements (strong coupling constant); charm/beauty production (gluon content of the proton); diffractive vector meson production (gluon structure); search for instanton effects (QCD exotics).

Hardware (MPI): Liquid Argon Calorimeter (forward barrel section); LAr front-end electronics; LAr trigger (L1); Neural Network Trigger (L2).
12. The H1 Trigger Scheme
L1 trigger: OR of individual subdetector triggers, such as MWPC, CJC, LAr, SpaCal, muon system, ...
L2 systems have access to information from all subdetectors (information prepared by subtrigger processors).
13. Detector Information at Level 2
14. Architecture of the H1 Neural Network Trigger
Three-layer feed-forward neural net:
- Inputs (from detector)
- One hidden layer
- Output (only one neuron)
Central problem: inputs for the neural nets
- Data selection: discriminate physics from background
- Data transformation
15. Organization and Processing of Data from L1
Subdetector information arrives in consecutive time slices t_i (frames, or bunch crossings BC); t_max = 32 BCs at present. 1 BC = 96 ns, i.e. a ~10 MHz transfer rate.
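The numbers on this slide fix the L2 input bandwidth; a quick back-of-the-envelope check (all values taken from the slide, nothing assumed):

```python
BC_NS = 96          # one HERA bunch crossing = 96 ns
T_MAX_BC = 32       # time slices (frames) collected per event at present

window_us = T_MAX_BC * BC_NS / 1000.0   # total span of the collected frames
rate_mhz = 1000.0 / BC_NS               # one transfer per bunch crossing

print(window_us)    # 3.072 -> about 3 µs of history per L2 decision
print(rate_mhz)     # 10.41... -> the ~10 MHz transfer rate
```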
16. The Neural Trigger System: Modular and Expandable
17. The Complete System
Total of 1024 processors; integrated computing power over 20 Giga MAC/sec.
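The quoted figure is consistent with the per-processor clock given on the hardware slide later in this talk (20 MHz for the CNAPS processing elements), assuming one MAC per processor per cycle:

```python
n_processors = 1024   # total PEs in the complete system
clock_hz = 20e6       # CNAPS PE clock; one MAC per cycle assumed

giga_mac = n_processors * clock_hz / 1e9
print(giga_mac)       # 20.48 -> "over 20 Giga MAC/sec"
```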
18. The Neural Network Trigger in Operation
19. Some Physics: Elastic Photoproduction of J/ψ Mesons
C. Adloff et al., Phys. Lett. B483 (2000) 23
20. Photoproduction of J/ψ Mesons with Proton Dissociation
21. Developments Foreseen
22. H1 New Network Preprocessing: The DDB II
Why a new preprocessor?
The Neural Network Trigger has been successfully in operation since Summer 1996, with promising physics results, but NOW we need to prepare for higher selectivity (luminosity upgrade HERA 2000: factor 5 more physics @ constant logging rate).
New goal: separate interesting physics from uninteresting physics.
23. Intelligent Preprocessing for Neural Networks
Jean-Christophe Prévotet, MPI München / Laboratoire des Instruments et Systèmes (Paris VI)
24. New Preprocessing: The DDB2
Principle:
- Intelligent preprocessing: extract physical values for the neural net (momentum, energy, particle type)
- Combination of information from different subdetectors (in the (θ, φ) plane)
- Executed in 4 steps:
  1. Clustering: find regions of interest within a given detector layer
  2. Ordering: sorting of objects by parameter
  3. Matching: combination of clusters belonging to the same object
  4. Post Processing: generates variables for the neural network
25. Description of a DDB2 Board
[Block diagram: per-subdetector Clustering blocks (BT/TT, MWPC, CJC, FTT, Muon, SpaCal), each with its own memory, feed Matching, Ordering, and Post Processing stages; parameters are stored over the L2 bus, and the resulting workable data are given to the NN.]
26. Hardware Specifications
- Time: 8 µs (Clustering, Matching, Ordering, Post Processing)
- Re-configurable hardware: independent of data format changes
- Organization: 5 DDB2 boards connected to 5 CNAPS; each board works on the same data but parameterized differently
27. Hardware Resources
- Time: 8 µs
- FPGA: low cost (prototype board), speed; Xilinx Virtex family (XCV200, XCV400)
- Parallel processing: pipeline steps
- Data format: LUTs, lots of small memories

Type      N gates   RAMs   SelectRAM bits
XCV200    236K      14     75K
XCV400    468K      20     153K

Algorithm         Number   Type
Clustering        6 to 8   XCV200
Matching          2        XCV400
Ordering          1 to ?   XCV200
Post processing   1 to ?   XCV200
28. How Does Physics Profit from the DDB II?
Test reaction: J/ψ photoproduction
29. Momentum Reconstruction and Triggering in the ATLAS Detector
- Fermilab, October 2000
- Erez Etzion (1), Gideon Dror (2), David Horn (1), Halina Abramowicz (1)
- 1. Tel-Aviv University, Tel Aviv, Israel
- 2. The Academic College of Tel-Aviv-Yaffo, Tel Aviv, Israel
30. ATLAS
[Detector schematic: S.C. solenoid, hadron calorimeter, S.C. air-core toroids, inner detector, EM calorimeters, muon detectors.]
31. Low-Pt / High-Pt Trigger
Complicated magnetic field map → difficult problem.
32. Network Architecture
- Input: parameters of the straight track of the muon (preprocessing: LMS)
- Sigmoid hidden layers
- Linear output
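The architecture described above can be sketched as a toy in Python: sigmoid hidden layers feeding a single linear output neuron, so the net can regress an unbounded, signed quantity whose sign gives the muon charge. The layer sizes, weights, and input are invented here; only the layer types follow the slide.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy net: sigmoid hidden layers, one *linear* output neuron.
# Layer sizes and weights are illustrative placeholders.
sizes = [6, 10, 10, 1]   # input: straight-track parameters (after LMS fit)
Ws = [rng.normal(scale=0.5, size=(m, n)) for n, m in zip(sizes, sizes[1:])]

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x):
    for W in Ws[:-1]:
        x = sigmoid(W @ x)           # bounded hidden activations
    return float((Ws[-1] @ x)[0])    # linear output, e.g. an estimate of q/pT

track = rng.normal(size=6)           # stand-in for fitted track parameters
estimate = forward(track)
charge = 1 if estimate > 0 else -1   # sign of the regressed value
```

The linear output is the point of the design: a sigmoid output would saturate and could not represent a signed momentum-like quantity directly.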
33. Testing Network Performance
- Training set: 2500 events, in one octant. Test set of 1829 events.
- Distribution of network errors: approximately Gaussian, compatible with the stochasticity of the data.
- Charge is discrete: 95.8% correct sign.
34. Summary Discussion
- The network can successfully estimate the charge and transverse momentum of the muon.
- Classification (triggering) is most efficient with a specially trained network.
- The data is intrinsically stochastic, giving rise to approximately Gaussian errors.
- The simplicity of the network enables very fast hardware realization (see presentation at this workshop).
35. Neural Network Hardware
- Off-the-shelf neural net hardware is scarce
- Many standard products no longer exist
- What should we do in HEP?
36.
Analog devices:
- ETANN, 1991 (Electrically Trainable Artificial Neural Network, by Intel): 64x64x4 in 5 µs
- NeuroClassifier, 1994 (by P. Masa, Univ. Twente, NL): 70x6x1 in 20 ns

Digital devices:
- CNAPS, 1993 (Adaptive Solutions, Oregon): 64 PEs @ 20 MHz, 8/16 bits
- MA16, 1994 (Siemens, Germany): 16 PEs @ 50 MHz, 16/16 bits
- TOTEM, 1994 (Trento, Italy): 32 PEs @ 30 MHz, 16/8 bits
- SAND1, 1995 (KfK, Germany): 4 PEs @ 50 MHz, 16/16 bits

Recent development:
- Maharadja, 1999 (Paris, France): details at this conference (see talk of B. Granado, AI, Sess. I)

Back to analog (?):
- Silicon Brain (Irvine Sensors Inc.): 3D analog FPGA array
37.
- One interesting solution: use memories to evaluate NNs
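For a net with few enough input bits, "use memories" can be taken literally: precompute the trained net's output for every possible input word and store the answers, so each trigger decision is a single memory read. A toy sketch, where the 8-bit input width and the threshold function stand in for a real trained network:

```python
import itertools

def trained_net(bits):
    """Stand-in for a trained network; here it fires when at least
    5 of the 8 input bits are set."""
    return int(sum(bits) >= 5)

N_IN = 8  # 2**8 = 256 addresses: a trivially small memory

# Burn the net into a memory: address = input word, content = net output.
# itertools.product counts up in binary with the first bit most significant,
# so the tuple at index k is exactly the bit pattern of address k.
LUT = [trained_net(bits) for bits in itertools.product((0, 1), repeat=N_IN)]

def trigger(word):
    """One memory read per decision, regardless of network complexity."""
    return LUT[word]

print(trigger(0b11111000))  # 1: five bits set
print(trigger(0b00001111))  # 0: only four bits set
```

The catch, and presumably the slide's point, is scaling: the memory size doubles with every additional input bit, so this works only for modest input widths.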
38.
- Another solution: can we use a fast general-purpose NN processor implemented in FPGAs?
45.
- FPGA clock speeds of 100 MHz will be available soon, implying execution times of a few 100 ns.
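The claimed latency follows from the clock period alone; the pipeline depth below is an assumption for a small layered net, not a number from the talk:

```python
clock_hz = 100e6                 # anticipated FPGA clock
cycle_ns = 1e9 / clock_hz        # 10 ns per cycle at 100 MHz
pipeline_cycles = 30             # assumed depth for a small layered net

latency_ns = pipeline_cycles * cycle_ns
print(latency_ns)  # 300.0 -> "a few 100 ns"
```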
46. Conclusions
- Fast preprocessing is a concern; FPGAs are one way to go
- The H1 NN trigger upgrade is in the works
- Neural net triggers exist and they work
- There is activity in LHC experiments: ATLAS muon proposal (this workshop), CMS (electron trigger, Varela et al.)
- Finding NN hardware is a problem; memory or FPGA implementations may be the answer
- See also "Neural Networks in High Energy Physics: A Ten Year Perspective", B. Denby, Comp. Phys. Comm. 119 (August 1, 1999), p. 219.