Transcript and Presenter's Notes

Title: CBM FEE/DAQ/Trigger


1
CBM FEE/DAQ/Trigger
Status and Overview
  • Walter F.J. Müller, GSI
  • CBM Collaboration Meeting, October 6-8, 2004

2
CBM Requirements Profile
  • D and J/ψ signals drive the rate capability
    requirements
  • D signal drives FEE and DAQ/Trigger requirements
  • Problem similar to B detection, see BTeV, LHCb
  • Adopted approach
  • displaced vertex trigger in the first level, as
    in BTeV
  • Additional problem
  • DC beam → interactions at random times
  • ⇒ time stamps with ns precision needed
  • ⇒ explicit event association needed
  • Current design for FEE and DAQ/Trigger
  • Self-triggered FEE: all hits shipped with time
    stamp
  • Data-push architecture: L1 trigger is
    throughput-limited but not latency-limited

Substantial R&D needed
Quite different from the usual LHC-style
electronics
3
Conventional FEE-DAQ-Trigger Layout
[Block diagram] Detector → FEE (in the cave) → DAQ (in the shack) → Archive
  • Specially instrumented detectors send trigger primitives over
    dedicated connections to an L0 trigger running at f_bunch
  • L1 and L2 triggers run on specialized trigger hardware; L1 accepts
    go back to the FEE; L1 trigger latency is limited
  • Cave-to-shack links have limited capacity; DAQ bandwidth is modest;
    from L2 onward, standard hardware
4
The way out ... use Data Push Architecture
[Block diagram] Detector → self-triggered FEE (in the cave) → high-bandwidth links → DAQ (in the shack)
  • Self-triggered front-end with autonomous hit detection, running at f_clock
  • No dedicated trigger connectivity; all detectors can contribute to L1
  • Large buffer depth available; the system is throughput-limited and
    not latency-limited
  • Modular design: few multi-purpose rather than many special-purpose
    modules (FPGA and CPU mix)
5
Typical Self-Triggered Front-End
Use a sampling ADC on each detector channel, running
with an appropriate clock
  • Average 10 MHz interaction rate
  • Not periodic as in a collider
  • On average 100 ns event spacing

[Waveform plot] Sampled pulse train (amplitude vs. time in clock ticks)
with a threshold at 50; two hits are found, (a = 126, t = 5.6) and
(a = 114, t = 22.2). Time is determined to a fraction of the sampling
period.
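A minimal Python sketch of this hit detection; the threshold value,
estimator choices, and linear-interpolation scheme are illustrative
assumptions, not from the talk:

    def find_hits(samples, threshold=50):
        """Return (amplitude, time) hit pairs from one channel's samples.

        time is in units of the sampling period, resolved to a fraction
        of a tick by linear interpolation (cf. a=126 t=5.6 in the plot).
        """
        hits = []
        for i in range(1, len(samples)):
            below, above = samples[i - 1], samples[i]
            if below < threshold <= above:
                frac = (threshold - below) / (above - below)  # sub-tick time
                j = i
                while j < len(samples) and samples[j] >= threshold:
                    j += 1                    # span of samples above threshold
                hits.append((max(samples[i:j]), i - 1 + frac))
        return hits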
6
Front-End for Data Push Architecture
  • Each channel detects autonomously all hits
  • An absolute time stamp, precise to a fraction of
    the sampling period, is associated with each hit
  • All hits are shipped to the next layer (usually
    concentrators)
  • Association of hits with events done later using
    time correlation (a minimal sketch follows below)
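A minimal sketch of this time-correlation step, assuming hits arrive as
(channel, amplitude, time_ns) tuples; the 2 ns gap cut is an illustrative
assumption (the talk only gives ns-precision stamps and ~100 ns average
event spacing):

    def associate_events(hits, max_gap_ns=2.0):
        # Sort by time stamp, then start a new event candidate wherever
        # consecutive hits are separated by more than the gap cut.
        events, current = [], []
        for hit in sorted(hits, key=lambda h: h[2]):
            if current and hit[2] - current[-1][2] > max_gap_ns:
                events.append(current)
                current = []
            current.append(hit)
        if current:
            events.append(current)
        return events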

7
Typical FEE Parameters for CBM
  • Based on
  • interaction rate 10^7/sec
  • 1% occupancy for central collisions
  • minimum bias events have ¼ of central
    multiplicity
  • typically 3 detector channels fire per particle
    hit
  • a hit can be expressed in 5 bytes
  • one gets as 'typical' values (cross-checked in the sketch after this list)
  • channel hit rate 100 kHz
  • channel data rate 500 kB/sec or 5 Mbps
  • BUT
  • in inner (outer) regions rates tend to be
    generally higher (lower)
  • ECAL has 7% cell occupancy → 700 kHz channel
    hit rate
  • START (diamond micro strip) some 10 MHz channel
    hit rate
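A back-of-envelope check of the 'typical' values, under my reading of how
the factors combine (an assumption): the minimum-bias factor (1/4) and the
channels-per-particle factor (3) roughly cancel, so the 1% central
occupancy is applied directly to the interaction rate.

    interaction_rate = 1e7     # interactions / s
    occupancy        = 0.01    # effective channel occupancy per interaction
    bytes_per_hit    = 5

    hit_rate  = interaction_rate * occupancy   # 1e5 /s   -> 100 kHz
    byte_rate = hit_rate * bytes_per_hit       # 5e5 B/s  -> 500 kB/s
    bit_rate  = byte_rate * 8 / 1e6            # 4 Mbps raw; ~5 Mbps once
                                               # link coding overhead is added

The same scaling gives the ECAL line: 7% occupancy yields 700 kHz and
3.5 MB/s per channel.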

8
Typical FEE Parameters for CBM II
For a multi-channel FEE one gets:

channels                   1     16     32     64    128   Unit
total hit rate (typ.)    0.1    1.6    3.2    6.4   12.8   MHz
total data rate (typ.)     5     80    160    320    640   Mbps
total hit rate (ECAL)    0.7   11.0   22.0   44.0   88.0   MHz
total data rate (ECAL)    35    560   1120   2240   4480   Mbps

(arrows in the slide mark the plausible channels/chip for the Si Strip
detector and for most other sub-systems)
9
Sub-System Specifics
  • Si-Strip
  • few ns time resolution
  • high channel density (50 µm pitch, strip length
    from a few to >100 mm)
  • material budget essential
  • radiation-hard electronics needed (2 MRad/year at
    z ≈ 40 cm)
  • RICH
  • current assumption: use PMTs as photon detector
  • nominal signal amplitude a few × 10^5 e; dark rate
    ≈ 3 kHz/PMT
  • dynamic range modest, time resolution few ns

10
Sub-System Specifics
  • TRD
  • thin Xe absorber (typ. 6 mm), no drift time
  • pad size and shape change drastically with R
    (square → strip)
  • type of chamber (wire or a 'fast' type) TBD
  • RPC
  • most likely differential detector and
    differential input stage
  • time resolution of electronics should be 25 ps or
    better
  • ECAL
  • current assumption: use PMTs as photon detector
  • nominal signal amplitude to be determined
  • dynamic range large (12 bit), ns time resolution
    helpful

11
Typical Front-End Electronics Chain
[Block diagram] PreAmp → Anti-Aliasing Filter → ADC → preFilter → digital Filter → Hit Finder → Backend Driver
  • PreAmp inputs: Si Strip, pads, GEMs, PMTs, APDs
  • ADC: sample rate 10-100 MHz, dynamic range 8 to >12 bit
  • digital Filter: 'shaping', 1/t tail cancellation, baseline restorer
  • Hit Finder: hit parameter estimators for amplitude and time
  • Backend Driver: clustering, buffering, link protocol
All potentially in one mixed-signal chip (a filter sketch follows below).
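The shaping / tail-cancellation / baseline-restorer stage can be
illustrated with two first-order filters. This is a hedged sketch with
illustrative coefficients, not the actual chip design; a single pole-zero
stage only approximates a true 1/t tail:

    def pole_zero(x, zero=0.95, pole=0.60):
        # First-order IIR, H(z) = (1 - zero*z^-1) / (1 - pole*z^-1):
        # cancels a long exponential tail and substitutes a shorter one.
        y, y_prev, x_prev = [], 0.0, 0.0
        for s in x:
            y_now = s - zero * x_prev + pole * y_prev
            y.append(y_now)
            x_prev, y_prev = s, y_now
        return y

    def baseline_restore(x, weight=0.001):
        # Subtract a slowly tracking baseline (exponential average).
        out, baseline = [], 0.0
        for s in x:
            baseline += weight * (s - baseline)
            out.append(s - baseline)
        return out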
12
Towards a Multi-Purpose FEE Chain
[Block diagram] The same chain (PreAmp → ADC → preFilter → digital Filter → Hit Finder) with programmable elements:
  • several selectable input stages → pos/neg signals, capacitance match
  • programmable sampling rate
  • dual-range conversion as an option
  • programmable hit finder and estimator
(A sketch of such a per-sub-system configuration follows below.)
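One reading of this slide is that the stages are fixed in silicon while a
per-sub-system configuration selects among them. The field names and
values below are assumptions for illustration only:

    from dataclasses import dataclass

    @dataclass
    class FeeConfig:
        input_stage: str        # selectable stage: "pos" or "neg" signals
        sampling_rate_mhz: int  # programmable sampling rate
        dual_range: bool        # optional dual-range conversion
        hit_threshold: int      # programmable hit finder setting

    # hypothetical settings for two sub-systems
    trd_cfg  = FeeConfig("neg", 20, False, 50)
    ecal_cfg = FeeConfig("pos", 40, True, 30)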
13
FEE Summary
Slide from February 2004 Collaboration Meeting
  • About half of the experiment cost is in FEE
  • Significant amount of development needed for Data
    Push

FEE Task Force
  • Mission
  • Collect requirement profiles for all sub-system
    variants (in progress)
  • Identify common building blocks (chip or block
    level) (in progress)
  • Formulate overall concept for FEE (in progress)
  • Search for collaboration partners outside CBM (in progress)
  • Prepare EU DC proposal by end 2004 (retarget...)
  • Start R&D on critical building blocks (in 2005)
14
First CBM FEE ASIC Meeting 9/26/2004
Follow link
15
Conclusion of 1st FEE ASIC Meeting
  • A multi-purpose FEE chain could handle
  • RICH
  • TRD
  • ECAL
  • the RPC is a dedicated development, but can share
    building blocks (ADC, communication) and the
    technology base
  • the Si-Strip is a dedicated development (channel
    density, radiation hardness), might share
    building blocks, and can share the technology base

16
TP and Workgroup Structure I
  • Sections of FEE chapter of TP
  • Overall Architecture
  • Communication and Time/Clock Architecture
  • Clock/Time distribution (FETA) (R. Tielert / V.
    Lindenstruth)
  • Link protocol (FELA) (U. Bruening / V.
    Lindenstruth)
  • OASE (R. Tielert)
  • CNet (U. Bruening / U. Kebschull)
  • DCS interface (V. Lindenstruth / U. Bruening)
  • RICH/TRD/ECAL solution
  • PreAmp (W. Mueller / R. Tielert)
  • ADC (R. Tielert / L. Musa)
  • filtering (W. Mueller / V. Lindenstruth)
  • feature extraction (V. Lindenstruth / L. Musa /
    R. Tielert)
  • backend (FELA)

17
TP and Workgroup Structure II
  • Sections of FEE chapter of TP (cont.)
  • RPC/START solution
  • PreAmp/Discriminator (M. Ciobanu / E. Badura)
  • TAC variant (H. Flemming / E. Badura)
  • DLL variant (H. Flemming / E. Badura)
  • Si Strip solution
  • (E. Atkin / A. Voronin)
  • MAPS interfacing (w/ IReS)
  • Infrastructure
  • power distribution (V. Lindenstruth / R. Tielert)
  • DCS (U. Kebschull / V. Lindenstruth)
  • Configuration Mgmt, Fault Tolerance (U. Kebschull)

18
FEE → DAQ/Trigger
  • Much of my time is over by now ...
  • ... and dinner is waiting outside
  • let's face DAQ/Trigger issues anyway

19
The FutureDAQ EU Project (I3HP-JRA1)
Follow link
2nd Workshop, September 9, 2004, in Darmstadt
Follow link
1st Workshop, March 25-26, 2004, in Munich
Follow link
20
Regular local (Da/Hd/Ma) DAQ/Trigger Meetings
Follow link
21
Now let's look at the whole System
  • The following does not explore the full design
    space
  • It looks into one scenario, driven by the Ansatz
  • do (almost) all processing after the build
    stage
  • Other scenarios are possible and should be
    investigated, putting more emphasis on
  • do all processing as early as possible
  • transfer data only when necessary
  • All specific 'implementations' are just examples
    for illustration
  • All stated numbers are still rough estimates

22
A Network-oriented System View
[Block diagram] Front-end Electronics → Concentrator Networks → Build Network (BNet) → Processor Networks → Compute Resources → High Level Network (HNet) → to high level computing
23
... with some numbers
[Same diagram, annotated] 50,000 FEE chips → CNet → BNet (1 TB/sec bandwidth, 1000 links) → PNet → 100 sub-farms, 100 nodes per sub-farm → HNet (output bandwidth 10 GB/sec) → to high level computing
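The quoted numbers are mutually consistent; a quick bookkeeping sketch
(my arithmetic, assuming bandwidth spreads evenly):

    fee_chips  = 50_000
    bnet_bw    = 1e12        # 1 TB/s aggregate BNet bandwidth
    bnet_links = 1_000
    nodes      = 100 * 100   # 100 sub-farms x 100 nodes each
    output_bw  = 10e9        # 10 GB/s to high-level computing

    per_link       = bnet_bw / bnet_links    # 1 GB/s per BNet link
    per_node       = bnet_bw / nodes         # 100 MB/s into each node
    chips_per_link = fee_chips / bnet_links  # ~50 FEE chips per link
    reduction      = bnet_bw / output_bw     # 100x reduction before HNet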
24
... with some questions
[Same diagram, annotated with open questions]
  • What is the FEE link protocol?
  • Where is the time distribution?
  • Interfacing of CNet to BNet?
  • Interfacing of BNet to PNet?
  • Interfacing of HNet to PNet?
25
... and now decoupling the Networks
[Diagram: the networks redrawn as decoupled layers, with BNet and HNet leading to high level computing]
26
Add Time Distribution → 5 Networks
[Diagram: TNet added; the TNet interface connects to BNet or a PNet; HNet to high level computing]
27
Network Characteristics
[Same diagram, annotated]
  • TNet, BNet: data push; datagrams; errors marked but not recovered
  • HNet: request/response and data push; transactions; errors recovered
(A sketch of the two error policies follows below.)
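A toy sketch of the two error policies, with assumed names: on the
data-push side a corrupted datagram is only flagged and passed on, never
resent; on the transaction side errors are recovered by retry:

    def push_forward(datagram, crc_ok):
        # data push: mark the error, never stall the stream for a resend
        datagram["error"] = not crc_ok
        return datagram

    def transact(send, max_retries=3):
        # request/response: recover errors by retransmission
        for _ in range(max_retries):
            response = send()       # send() returns None on error
            if response is not None:
                return response
        raise IOError("transaction failed after retries")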
28
Where is the Cave Wall?
Different options to be evaluated
[Same diagram: TNet, BNet, HNet, to high level computing]
29
FEE Interface
  • Proposal (V. Lindenstruth)
  • handle all three interfaces with one physical
    serial link
  • use an optical interface already at the FEE chip
    level
  • no noise from data lines
  • low chip pin count
  • this requires
  • SerDes cores on FEE chips
  • low-cost fiber couplings
  • this implies
  • clock/time is distributed on the uplinks of the
    CNet

[Diagram] 3 logical interfaces at the FEE, carried over OASE (new territory):
  • Hit Data (out only; might be bidirectional for cluster RoI transfers)
  • Clock and Time (in only)
  • Control (bidirectional; also test/JTAG)
(A framing sketch for the shared link follows below.)
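A toy sketch of how three logical interfaces could share one physical
serial link, using an assumed tag byte per frame; the real link protocol
is an open design item here, so framing and tag values are purely
illustrative:

    TAG_HIT, TAG_TIME, TAG_CTRL = 0, 1, 2

    def frame(tag, payload: bytes) -> bytes:
        # 1 tag byte + 1 length byte + payload (a real link would add
        # proper line coding, e.g. 8b/10b symbols, and CRCs)
        return bytes([tag, len(payload)]) + payload

    def deframe(stream: bytes):
        # split a byte stream back into (tag, payload) frames
        i = 0
        while i < len(stream):
            tag, length = stream[i], stream[i + 1]
            yield tag, stream[i + 2 : i + 2 + length]
            i += 2 + length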
30
Focus on PNet
[Same diagram, highlighting PNet: TNet, BNet, HNet, to high level computing]
31
A Sub-farm Scenario
[Diagram] Fabric board with two 8-port switches; node boards connect from BNet and to HNet via 10 Gbps links
  • 10 node boards (different types)
  • double star fabric
  • 10 Gbps links
  • 40 GB/sec bandwidth
  • almost feasible today!!
32
A Sub-farm Scenario II
[Photos] MultiGig RT connectors, 6.4 Gbps/pin; this is VME VITA 41.x (VXS); 8-port, 32-lane PCIe switch with 160 Gbps bandwidth (PLX PEX8532); 10+2 slot VXS backplane (ELMA 101VXSD712)
  • Core pieces are already here today
  • can only improve...
  • Missing: FPGA and SoC with PCIe
  • wait and see how the market moves
  • Essential: power consumption
  • 2.5 Gbps SerDes now 100-200 mW
  • 160 Gbps switch now 6.5 W (130 nm)
  • PLX PEX8532 or BCM5675
  • in 2010 see what 45 nm will do
33
Slide from CHEP'04 by Dave McQueeney, IBM CTO US Federal
34
Network Summary
  • 5 different networks with very different
    characteristics
  • CNet
  • medium distance, short messages, special
    requirements
  • connects custom components (FEE ASICs)
  • custom; FEE interfaces and CNet will be
    co-developed; depends on how clock/time
    distribution is done
  • TNet
  • broadcast time (and tags), special requirements
  • custom; potentially built with CNet components
  • BNet
  • naturally large messages, Rack-2-Rack
  • probably uncritical; several COTS options open
  • PNet
  • short distance, most efficient if already
    'built-in'
  • connects standard components (FPGAs, SoCs)
  • look at emerging technologies (PCIe, ASI); stay
    open for changes and surprises; cost efficiency
    is key here!!
  • HNet
  • general purpose, to rest of world
  • Ethernet: whatever the implementation is, it will
    be called Ethernet...
35
From Hardware to Software
  • Key question is whether we can achieve at least
    1:1000 trigger rejection for the D and J/ψ
    triggers with an affordable hardware setup
    (a rough consistency check follows this list).
  • Needed
  • Fast algorithms (see J. Gläß and I. Kisel talks)
  • Realistic trigger performance simulations
    (starting in the new framework; still to do: full
    tracking, MAPS pileup, efficiencies/noise)
  • Detector-trigger co-development (still on 0th
    iteration....)
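Cross-checking the 1:1000 goal against the earlier bandwidth numbers
(my arithmetic, not from the talk):

    event_rate = 1e7     # interactions / s
    input_bw   = 1e12    # 1 TB/s into the build network
    rejection  = 1000    # keep 1 event in 1000

    event_size  = input_bw / event_rate     # ~100 kB per event
    accept_rate = event_rate / rejection    # ~10 kHz accepted
    archive_bw  = accept_rate * event_size  # ~1 GB/s, well inside the
                                            # 10 GB/s HNet output budget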
36
Timelines
  • CBM Technical proposal due 15.1.2005
  • this is 101 days from now
  • this is 73 working days from now
  • if you still believe in weekends and a xmas
    break...
  • One part is FEE/DAQ/Trigger architecture
    proposal
  • with implementation plan (R&D steps,
    milestones, ...)
  • with risk assessment
  • with cost estimate
  • Let's have dinner....