HAWC Trigger/DAQ: A sketch

Transcript and Presenter's Notes
1
HAWC Trigger/DAQ: A sketch
  • Dan Edmunds, Philippe Laurens,
  • Jim Linnemann
  • MSU
  • Ground-based Gamma Ray Astronomy Workshop, UCLA
  • Oct 21, 2005

2
A Strawman Model of TDAQ
  • Lay out a simple model to think about
  • Assess feasibility; identify issues
  • Try to minimize number of meshes, boxes, layers
  • Start with only required inputs and outputs
  • Slow Control paths later
  • Download, monitoring, calibration
  • Look at some basic parameters

3
Key Trigger/DAQ Parameters
  • Occupancy = rate × sensitive time
  • = rate / max rate
  • Careful: 25 kHz × 1 µs = 2.5%, which is significant
  • Can we afford to trigger on all proton showers?
  • Mean hits µ = channels × occupancy
  • Data rate = bytes/hit × µ × rate(transfer) (worked numbers sketched after this list)
  • Assuming zero suppression
  • 75 × 75 = 5625 tubes on 4 m centers: 300 × 300 m
  • perhaps another layer
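
A minimal back-of-envelope sketch of these relations, using the slides' numbers (25 kHz singles rate, 1 µs sensitive time, 5625 tubes, 0.15 MHz trigger rate); the bytes-per-hit value is an assumed placeholder, not from the slides:

```python
# Back-of-envelope Trigger/DAQ parameters (values from the slides
# where given; bytes_per_hit is an assumed placeholder).
singles_rate = 25e3        # Hz, per-tube singles rate
sensitive_time = 1e-6      # s, sensitive (coincidence) time
channels = 75 * 75         # 5625 tubes on 4 m centers (300 x 300 m)
trigger_rate = 0.15e6      # Hz, L1 working figure
bytes_per_hit = 4          # assumed; the slides do not fix this number

occupancy = singles_rate * sensitive_time             # = rate / max rate
mean_hits = channels * occupancy                      # mean noise hits per event
data_rate = bytes_per_hit * mean_hits * trigger_rate  # bytes/s to transfer

print(f"occupancy = {occupancy:.1%}")            # 2.5% -- significant
print(f"mean hits = {mean_hits:.0f}")            # ~141 noise hits/event
print(f"data rate = {data_rate/1e6:.0f} MB/s")   # from noise hits alone
```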

4
Front End and Patch Trigger
[Block diagram: each PMT signal feeds a discriminator + latch, whose hits latch a 1 GHz counter into a 4B-wide circular buffer (DPM?), and a 100 MHz FADC writing a 2B-wide circular buffer (DPM?); the discriminator outputs also fan in to the Patch Trigger and the L1 Trigger.]
5
Triggering
  • Patch level: a time input per triggering tube
  • won't drive the cost
  • Fixed width from tube (~40 ns for 3×3 patch)
  • Fanout into several majority-logic options (sketched after this list)
  • N ≥ 3, 4, 5, 6? Both layers?
  • Or perhaps analog sum of tube discriminator outputs
  • Defer multiplicity cut on patch to L1; allows global multiplicity
  • Enables risetime trigger
  • L1 overall: 625 × n inputs
  • Goal: get down to within a factor of 2 of real showers
  • 0.15 MHz working figure (Too high?)
  • outputs of majority logics from each patch
  • Count patches: simplest (pretrigger?)
  • Locality in space and time may be needed
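
A minimal sketch of the patch-level majority logic under discussion, assuming fixed-width discriminator pulses and a count-over-threshold per patch; the 40 ns width and n ≥ 3 come from the slide, while the event representation is invented for illustration:

```python
# Patch majority-logic trigger sketch: a patch fires if at least n
# tube discriminator pulses overlap at some instant.
# Hit times in ns; pulse width ~40 ns as suggested for a 3x3 patch.
PULSE_WIDTH_NS = 40.0

def patch_majority(hit_times_ns, n=3):
    """Return True if >= n discriminator pulses overlap at some time."""
    edges = []
    for t in hit_times_ns:
        edges.append((t, +1))                   # pulse rises
        edges.append((t + PULSE_WIDTH_NS, -1))  # pulse falls
    level = 0
    for _, step in sorted(edges):               # sweep in time order
        level += step
        if level >= n:
            return True
    return False

# Example: 4 tubes of a patch hit within 25 ns -> majority of 3 fires.
print(patch_majority([100.0, 110.0, 118.0, 125.0], n=3))  # True
print(patch_majority([100.0, 200.0, 300.0], n=3))         # False
```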

6
[Architecture diagram, pull version: 625 patches (each with an SBC) send T(patch > n) to the L1 Trigger; L1 returns Taccept through an SBC Supervisor; the patches feed a 625 × 100 switch over 0.1 Gigabit Ethernet (full duplex) into a 100-node Farm on Gigabit Ethernet (full duplex). Pull architecture.]
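
A minimal sketch of the pull idea in this diagram, assuming the destination (farm node or supervisor) requests a trigger's data from every patch SBC and event-builds the replies; all message shapes and the in-memory "network" here are invented for illustration:

```python
# Pull event-building sketch: the destination asks patches for data.
from collections import defaultdict

N_PATCHES = 625

class PatchSBC:
    """Holds hits keyed by trigger time; serves readout requests."""
    def __init__(self):
        self.buffer = defaultdict(list)   # trigger_time -> hit list
    def request(self, trigger_time):
        # find, compact, ship data (the "SBC response" of slide 11)
        return self.buffer.pop(trigger_time, [])

def pull_event(patches, trigger_time):
    """Farm node pulls one event: one request per patch, then build."""
    fragments = {i: p.request(trigger_time) for i, p in enumerate(patches)}
    return {i: f for i, f in fragments.items() if f}  # zero-suppressed

patches = [PatchSBC() for _ in range(N_PATCHES)]
patches[17].buffer[12345] = [("tube3", 12345.4, 8.1)]  # (id, t_ns, PEs)
event = pull_event(patches, trigger_time=12345)
print(event)  # {17: [('tube3', 12345.4, 8.1)]}
```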
7
Switch and Farm
  • Farm: guess 100 nodes (scale up Milagro)
  • $2K/node → $200K
  • CPU may be marginal; assume same cycles on more data
  • 12 GHz/node from Moore's-law uniprocessor scaling: unlikely
  • can we use multicore efficiently?
  • need L2/3 trigger algorithm first?
  • Incremental processing
  • Cisco 6513 router: 625 in + 100 out = 725 ports
  • $30K + $9K per 48-channel blade (2004 $)
  • 12 blades/chassis (need 15, so 2 chassis)
  • Interconnect two chassis, or use fewer patches or concentrators
  • Around $250K or a bit less

8
(No Transcript)
9
Target Budget: $1K/channel → $10M
  • SBC in a 3×3 patch: $2K/18 ≈ $110/channel
  • Farm and switch: $0.5M/10K = $50/channel
  • If we guess $1M for the trigger: $100/channel
  • So not implausible, but still to cost (tallied in the sketch after this list):
  • HV
  • LV, crate(?)
  • ADC/TDC/memory
  • Design: scale $0.3M (2-3 FTE-years) → $30/channel
  • Cables
  • Calibration
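
A minimal tally of the per-channel guesses above, assuming 10K channels (two layers of 5625, rounded); the itemized numbers are the slide's, and the unpriced items are left at zero as placeholders:

```python
# Per-channel budget tally from the slide's guesses (2005 dollars).
# Items without a number on the slide are placeholders at 0.
channels = 10_000                     # ~2 layers x 5625 tubes, rounded
per_channel = {
    "SBC (one per 3x3 patch)": 2_000 / 18,       # ~$110/channel
    "farm + switch":           0.5e6 / channels,  # $50/channel
    "trigger (guess $1M)":     1e6 / channels,    # $100/channel
    "design (~$0.3M NRE)":     0.3e6 / channels,  # $30/channel
    "HV / LV / crate":         0,   # not costed on the slide
    "ADC/TDC/memory":          0,   # not costed on the slide
    "cables, calibration":     0,   # not costed on the slide
}
total = sum(per_channel.values())
print(f"known items: ${total:.0f}/channel of the $1000/channel target")
# ~$291/channel, leaving ~$709/channel for the unpriced items
```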

10
Some Trigger Issues (RD)
  • L1 trigger < 0.15 MHz with good gamma efficiency?
  • Complexity of inputs into L1 trigger?
  • Single pulse when > patch multiplicity threshold?
  • Indication of PH? Or several multiplicity thresholds? Or analog sum proportional to tubes on? Or digital signals? Serial?
  • 300 m cables to trigger?
  • A fixed-time algorithm? Probably NO!
  • Decisions in strict time order? Probably no.
  • Simplest data extraction
  • BUT shower planes sweep through in different time order
  • deadtime 20%? (1 µs traversal time / 5 µs/trigger; estimated after this list) What time resolution in requests? Affects occupancy; implicit zenith-angle cut
  • Ignore any results from trigger clustering and take all hits in 1 µs?
  • Requiring serial order could forbid a farm in L1/2
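
A minimal sketch of the 20% deadtime estimate, assuming a simple fraction-busy model in which each trigger blocks the system for the 1 µs shower traversal out of a 5 µs mean trigger period (the model is an assumption; the slide only states the ratio):

```python
# Deadtime estimate: fraction of time busy with a 1 us traversal
# at a 0.2 MHz trigger rate (5 us mean period).
traversal_time = 1e-6     # s, shower sweep-through
trigger_rate = 0.2e6      # Hz -> 5 us per trigger

busy_fraction = trigger_rate * traversal_time
print(f"deadtime ~ {busy_fraction:.0%}")   # ~20%
```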

11
Some DAQ Issues (RD)
  • What is the design trigger rate? Headroom? 0.1-0.15 MHz → only 10-7 µs/event!
  • CPU load on SBC
  • Data request handling: find, compact, ship data (SBC response)
  • Events in blocks to reduce overheads? 7-10 µs/event vs. TCP at 15-20 µs! (break-even blocking sketched after this list)
  • Choice of protocol through switch (software overhead)
  • Manpower costs of non-TCP
  • Push or pull architecture? ATLAS experience
  • does the destination ask for data?
  • L1, or Farm supervisor, broadcast/multicast to patches
  • telling them which data to send
  • Switch: total throughput check, width, buffer memory/port
  • 100BaseT limited to 100 m
  • Need repeater(s), switch layers, or fiber? Or 2nd story?
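
A minimal sketch of why blocking matters, using the slide's numbers: at 10-7 µs/event, a 15-20 µs per-message TCP overhead exceeds the event period unless several events share one transfer; the break-even blocking factor below is simple division, nothing more:

```python
# Blocking-factor sketch: how many events per network transfer are
# needed so per-message overhead fits the per-event time budget.
import math

def min_block(event_period_us, overhead_us):
    """Smallest events/transfer so overhead/event <= event period."""
    return math.ceil(overhead_us / event_period_us)

for rate_mhz in (0.10, 0.15):
    period = 1.0 / rate_mhz        # us per event
    for tcp in (15.0, 20.0):
        print(f"{rate_mhz} MHz, {tcp} us TCP overhead: "
              f"block >= {min_block(period, tcp)} events/transfer")
# 0.1 MHz -> >= 2 events/transfer; 0.15 MHz -> >= 3 events/transfer
```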

12
More DAQ Issues (RD)
  • How to throttle trigger rate by declaring deadtime
  • And how to monitor deadtime
  • Request specific patches? Regions of Interest
  • Incompatible with blocking of event requests, and broadcasting
  • If data volume is sustainable, look at only ROIs in the farm?
  • Read out the L1 trigger data to help
  • LARGE data volumes: raw is quite daunting; even reconstructed is pretty big
  • Consequence of low trigger rejection
  • Think of HAWC as a telescope?
  • Primary output is images, not events?
  • Skymaps every 15 min? Sliced by E and gamma purity? <1 Mbit/s, ~4 GB/day
  • How much reconstructed data and raw data needed?
  • validation, calibration, algorithm development, reprocessing
  • Also need online short-burst analysis (including triggered searches)
  • 1/6 TB of raw data (@ 1/6 GB/s) for 15-minute lookback: ramdisk??? (sizes checked after this list)
  • 6 GB of reconstructed data @ 6 MB/s for 15-minute lookback: maybe
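
A minimal check of the 15-minute lookback sizes, assuming the sustained rates stated on the slide (1/6 GB/s raw, 6 MB/s reconstructed):

```python
# 15-minute lookback buffer sizes at the slide's sustained rates.
lookback_s = 15 * 60                 # 15 minutes

raw_rate = (1 / 6) * 1e9             # ~1/6 GB/s of raw data
reco_rate = 6e6                      # 6 MB/s reconstructed

print(f"raw lookback : {raw_rate  * lookback_s / 1e12:.2f} TB (ramdisk???)")
print(f"reco lookback: {reco_rate * lookback_s / 1e9:.1f} GB (maybe)")
# ~0.15 TB raw (the slide's '1/6 TB'), ~5.4 GB reconstructed ('6 GB')
```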

13
Plausible, but hard
  • Raw Rate
  • 10 MHz muons
  • 1 MHz sensitive times (shower sweep-through pipeline)
  • Doubt an L3 farm at 1 MHz is plausible
  • L1 trigger
  • 0.1 to 0.2 MHz (showers) if we can kill the muons (not trivial): curtains?
  • Overlapping sensitive times at 0.1 MHz (overlap probability sketched after this list)
  • Problems for reconstruction and triggering
  • Must extract multiple events from 1 sensitive time
  • Reconstruction may need patch lists/patch times
  • Do a first-pass reconstruction coarsely
  • To ignore out-of-time data (for this trigger) in linear time (ROI too?)
  • Real overlapping showers need to go into MC
  • Proton shower rate is pushing technology
  • Study ATLAS: its L1 rate is also 0.1 MHz
  • Beware: they have lots more resources
  • Getting below 0.1 MHz at L1 requires
  • hadron rejection at trigger level
  • Or raising the threshold to control rate: physics impact!
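
A minimal Poisson estimate of the overlap problem, assuming triggers arrive randomly at 0.1 MHz and each opens a 1 µs sensitive time (both numbers from the slides); the chance that another shower lands in a given event's window is 1 − exp(−rate × window):

```python
# Poisson overlap estimate: chance another trigger falls inside a
# given event's 1 us sensitive time at a 0.1 MHz trigger rate.
import math

rate = 0.1e6       # Hz, L1 trigger rate
window = 1e-6      # s, sensitive time per event

p_overlap = 1 - math.exp(-rate * window)
print(f"P(>=1 overlapping event) ~ {p_overlap:.1%}")   # ~10%
```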

14
(No Transcript)
15
(No Transcript)
16
Front End
  • Imagine for now located at the patch station
  • 3 × 3 × 2 tubes, ~10 m signal cable
  • Split into time and PH paths
  • Discriminator for time (adjustable threshold; possibly multiple thresholds?)
  • Record 4B-wide 1 GHz counter into circular buffer (sketched after this list)
  • Is 1 ns least count OK?
  • Split/fanout to n-fold majority logic for triggering
  • 100 MHz FADC for PH
  • 2B-wide circular buffer
  • Must be able to read out whole trace (slowly) for debugging
  • Independent of, but strongly resembling, IceCube electronics
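
A minimal model of the time path, assuming a free-running 1 GHz counter latched on each discriminator edge into a fixed-depth circular buffer; the depth and the readout interface are illustrative, not from the slides:

```python
# Circular-buffer sketch for the time path: a 4-byte 1 GHz counter
# value is latched per discriminator hit. Depth is illustrative.
class TimeBuffer:
    def __init__(self, depth=1024):
        self.depth = depth
        self.slots = [0] * depth
        self.head = 0
    def latch(self, counter_ns):
        """Store a 32-bit counter latch, overwriting the oldest."""
        self.slots[self.head] = counter_ns & 0xFFFFFFFF  # 4B wide
        self.head = (self.head + 1) % self.depth
    def readout(self, t0_ns, t1_ns):
        """Pull hits in [t0, t1) for a trigger's sensitive time."""
        return [t for t in self.slots if t0_ns <= t < t1_ns]

buf = TimeBuffer()
for t in (1000, 1350, 2_000_000):
    buf.latch(t)
print(buf.readout(900, 1500))   # [1000, 1350]
```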

17
[Architecture diagram, push version: the same 625 patches → 625 × 100 switch → 100-node Farm layout as the pull diagram, with T(patch > n) into L1 and Taccept to the SBC Supervisor, but L1 broadcasts (T, Destination) so the patches push their data to the assigned farm node. Does timing need a separate net? Push architecture.]
18
Some MC-driven RD
  • 1 ns TDC least count OK?
  • L1/L2 trigger algorithms (select single showers)
  • Effect of shower pileup on L1 shower rate
  • Overlapping smaller (> 0.1 MHz) showers?
  • How steep is the spectrum?
  • What is the best patch size for
  • Triggering (local plane fit possible??)
  • Readout/reconstruction granularity??
  • How does reconstruction cope with overlapped events?
  • Can we ignore non-triggered patches?
  • Or do a useful pre-fit on patch times (plane-fit sketch after this list)
  • Would reduce effective sensitive time/occupancy
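
A minimal sketch of a pre-fit on patch times: fit a shower-front plane t = t0 + a·x + b·y to (x, y, t) triples by least squares, as a stand-in for the "local plane fit" question above; the use of numpy.linalg.lstsq and the toy data are my choices, not the slides':

```python
# Least-squares shower-front plane fit to patch hit times:
# model t = t0 + a*x + b*y; the slopes (a, b) encode the direction.
import numpy as np

def fit_plane(x, y, t):
    """Return (t0, a, b) minimizing |t - (t0 + a x + b y)|^2."""
    A = np.column_stack([np.ones_like(x), x, y])
    (t0, a, b), *_ = np.linalg.lstsq(A, t, rcond=None)
    return t0, a, b

# Toy patch centers (m) and times (ns) for a tilted shower front.
x = np.array([0.0, 12.0, 24.0, 0.0, 12.0, 24.0])
y = np.array([0.0, 0.0, 0.0, 12.0, 12.0, 12.0])
t = 5.0 + 0.3 * x + 0.1 * y + np.random.normal(0, 0.5, x.size)

t0, a, b = fit_plane(x, y, t)
print(f"t0={t0:.2f} ns, dt/dx={a:.2f} ns/m, dt/dy={b:.2f} ns/m")
# Residuals flag out-of-time patches; slopes give a zenith-angle cut.
```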

19
Big choices
  • 2 layers? Not critical for triggering; big money
  • Front end in tube base, or out of water?
  • Ease of commissioning and servicing vs. noise considerations
  • Similar consequences if a seal fails
  • different connector(s), cable complexity
  • L1 trigger complexity (inputs, processing) and target L1 rate!!
  • What local processing before L1 triggering?
  • How many tubes in a patch? How many patches?
  • L1 trigger in fixed time?
  • L1 processing of special L1 data
  • Hope: no data movement
  • Readout directly to farm, or layered?
  • Interacts with patches, farm size, switch width (Np × Nf)
  • And choice of protocol for event building
  • Blocking factor of events, accept rates, protocol
  • SBC CPU load and latency, and FE memory depth in time
  • Where are the L1 trigger and the Farm (cable lengths)? Beside the pond? 2nd story?

20
Some Front End Issues (RD)
  • Number of TDC thresholds
  • FADC-to-PE algorithm(s): when?
  • Upon readout request? TDC hit? Patch trigger?
  • Data extraction from circular buffer
  • Restrict readout to triggered patches?
  • Single or multiple (T, PE) readouts per request
  • First hit only: vulnerable to noise, prepulsing
  • Guess 1% efficiency loss or more at 25 kHz singles rate (estimated after this list)
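
A minimal estimate of that loss, assuming first-hit-only readout and Poisson noise at the 25 kHz singles rate: the chance a noise hit lands in the lookback window ahead of the true hit and steals the first-hit slot is roughly rate × window; the 400 ns window below is an assumed placeholder, as the slides don't give one:

```python
# First-hit noise-capture estimate: probability that a 25 kHz noise
# singles hit precedes the true hit inside the readout window.
import math

singles_rate = 25e3     # Hz, per-tube noise singles
window = 400e-9         # s, lookback window ahead of the true hit
                        # (assumed; not specified on the slides)

p_loss = 1 - math.exp(-singles_rate * window)
print(f"efficiency loss per tube ~ {p_loss:.1%}")   # ~1%
```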

21
(No Transcript)