Data Acquisition Backbone Core (DABC)
1
Data Acquisition Backbone Core
J.Adamczewski, H.G.Essel, N.Kurz, S.Linev
GSI, Experiment Electronics, Data Processing group
  • Motivation
  • Data-flow engine, control
  • Event building network
  • Performance
  • To do

Work supported by EU FP6 project JRA1 FutureDAQ
(RII3-CT-2004-506078)
2
CBM data acquisition (overview diagram, W.F.J.Müller, 2004)
3
CBM DAQ features summary
  • Self-triggered, time-stamped data channels
  • Complex trigger algorithms → full data transport up to the filter farm
  • FPGA-controlled data flows
  • Event building at the full data rate of 1 TB/s
  • Event builder network (BNet): 1000 nodes, high-speed interconnections
  • Linux may run on most DAQ nodes (even on FPGAs)

4
Use case example: front-end components test
Setup (schematic): front-end boards (FE: sampling ADCs, clock distribution) connect through a 2.5 Gbit/s bi-directional optical link (data and clock) to an Active Buffer Board (ABB, a PCI Express card) in a PC.
  • The goal
  • Detector tests
  • FEE tests
  • Data flow tests

A.Kugel, G.Marcus, W.Gao, Mannheim University, Informatics V
5
Use case example: middle-size setup
Setup (schematic): front-end boards (FE: sampling ADCs, clock distribution) connect through 625 Mb/s bi-directional optical links (data and clock), ×8, to Data Combiner boards (DCB, which also distribute the clock to the FE); 2.5 Gb/s data links, ×4, connect the DCBs to Active Buffer Boards (ABB, PCI Express cards) hosted in 8-20 dual/quad PCs; MBS nodes and an InfiniBand switch complete the setup. Scales up to 10k channels, 160 CPUs.
  • The goal
  • Investigate critical technology
  • Detector tests
  • Replace existing DAQ

Abbreviations: FE = front-end board, DC/DCB = Data Combiner board, ABB = Active Buffer Board, GE = Gigabit Ethernet, IB = InfiniBand, MBS = Multi Branch System
6
Driving forces and motivation for DABC
  • Requirements
  • connect (nearly) any front-ends
  • handle triggered or self-triggered front-ends
  • process time-stamped data streams
  • build events over fast networks
  • provide data flow control (to front-ends)
  • provide interfaces to plug in application code
  • connect MBS readout or collector nodes
  • be controllable by several control frameworks
  • 2004 → EU FP6 project JRA1 FutureDAQ
  • 2004 → CBM FutureDAQ for FAIR
  • 2005 → FOPI DAQ upgrade (skipped)
  • 2007 → NUSTAR DAQ

Intermediate demonstrator
1996 → MBS: 50 installations at GSI, 50 external (http://daq.gsi.de)
  • Detector tests
  • FE equipment tests
  • Data transport
  • Time distribution
  • Switched event building
  • Software evaluation
  • MBS event builder
  • General purpose DAQ

Data Acquisition Backbone Core
RII3-CT-2004-506078
7
DABC key components
  • Data-flow engine
  • memory and buffer management
  • thread and event management
  • data processing modules with ports, parameters, timers
  • transport and device classes, file I/O
  • back pressure mechanism
  • Slow control and configuration
  • component setup on each node
  • parameter monitoring/changing
  • command execution
  • state machine logic
  • user interface

8
Data flow engine
A module processes the data of one or several data streams. Data streams are propagated through ports, which are connected by transports.
Diagram: two DABC modules, each with an internal process and ports; a port of one module is connected to a port of the other by a local transport containing a queue.
9
Data flow engine
A module processes the data of one or several data streams. Data streams are propagated through ports, which are connected by transports and devices (a minimal code sketch of this module/port model follows below).
Diagram: two DABC modules on different nodes; their ports are connected through net transports with queues, each handled by a device.
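The module/port/queue model of the two preceding slides can be summarized in a few lines of code. The following C++ sketch is illustrative only: it is not the DABC API, and all class and method names (Queue, Port, Module, ProcessInput) are assumptions made for this sketch.

// Minimal sketch of the module/port/transport model (NOT the DABC API;
// all names below are illustrative assumptions).
#include <cstddef>
#include <deque>
#include <memory>
#include <vector>

struct Buffer { std::vector<unsigned char> data; };   // one pooled data buffer
using BufferPtr = std::shared_ptr<Buffer>;

// A local transport is a bounded queue between an output and an input port.
class Queue {
public:
    explicit Queue(std::size_t capacity) : capacity_(capacity) {}
    bool Push(BufferPtr b) {                 // returns false when full (back pressure)
        if (items_.size() >= capacity_) return false;
        items_.push_back(std::move(b));
        return true;
    }
    BufferPtr Pop() {
        if (items_.empty()) return nullptr;
        BufferPtr b = std::move(items_.front());
        items_.pop_front();
        return b;
    }
private:
    std::size_t capacity_;
    std::deque<BufferPtr> items_;
};

// A port is one end of such a queue; a module only ever talks to its ports.
struct Port { std::shared_ptr<Queue> queue; };

// A module processes data arriving on its input port and forwards the
// result through its output port.
struct Module {
    Port input;
    Port output;
    void ProcessInput() {                    // called when data is ready
        while (BufferPtr b = input.queue->Pop()) {
            // ... process buffer contents here ...
            output.queue->Push(std::move(b)); // forward to the next module
        }
    }
};

int main() {
    auto in  = std::make_shared<Queue>(4);   // feeds the module
    auto out = std::make_shared<Queue>(4);   // local transport to the next module
    Module m{Port{in}, Port{out}};
    in->Push(std::make_shared<Buffer>());    // inject one (empty) buffer
    m.ProcessInput();                        // the buffer ends up in 'out'
    return 0;
}

A network transport works the same way from the module's point of view: the port stays unchanged, only the queue is filled and drained by a device (socket or InfiniBand) instead of a neighbouring module.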
10
Memory management
  • Main features
  • All memory used by modules for transport is organized in memory pools
  • A memory pool consists of one or several blocks of memory, divided into equal pieces (buffers)
  • Each buffer in a memory pool can be referenced once for writing and any number of times for reading
  • Several references may be combined into a gather list
  • Use for transport
  • Only data from memory pools can be transported via ports
  • Each module port is associated with exactly one memory pool
  • Transport between two modules in the same application is done via pointers
  • Zero-copy network transport where supported (InfiniBand)
  • Support of gather lists for all kinds of transports (a pool sketch follows after this list)
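A hedged sketch of the pool idea above: one contiguous block split into equal-size buffers that are handed out for writing and released again, plus a gather-list type combining several buffer references. The names (MemoryPool, TakeBuffer, ReleaseBuffer, GatherList) are assumptions for illustration, not the DABC interfaces.

// Sketch of a memory pool: one memory block divided into equal pieces
// (buffers). Illustrative only, not the DABC implementation.
#include <cstddef>
#include <utility>
#include <vector>

class MemoryPool {
public:
    MemoryPool(std::size_t num_buffers, std::size_t buffer_size)
        : block_(num_buffers * buffer_size), buffer_size_(buffer_size) {
        for (std::size_t i = 0; i < num_buffers; ++i)
            free_.push_back(block_.data() + i * buffer_size);
    }
    // Take one buffer for writing; nullptr means the pool is exhausted,
    // which is where back pressure has to start.
    unsigned char* TakeBuffer() {
        if (free_.empty()) return nullptr;
        unsigned char* b = free_.back();
        free_.pop_back();
        return b;
    }
    // Return a buffer once the last reader has released its reference.
    void ReleaseBuffer(unsigned char* b) { free_.push_back(b); }
    std::size_t BufferSize() const { return buffer_size_; }
private:
    std::vector<unsigned char> block_;    // one contiguous block of memory
    std::size_t buffer_size_;
    std::vector<unsigned char*> free_;    // buffers not currently referenced
};

// Several read-only references into pool buffers can be combined into a
// gather list and handed to a transport as one logical packet.
struct GatherList {
    std::vector<std::pair<const unsigned char*, std::size_t>> segments;
};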

11
Threads and event management
A device, e.g. a socket device, controls several ports (transports). Once a queue buffer is filled, the transport signals the Event manager, which in turn calls the processInput function of the associated module.
Diagram: a device thread runs the transport with its queue and raises a data-ready event; the Event manager dispatches it to processInput of the associated DABC module (Module A or B) in the thread running the modules. Commands from the manager thread arrive as command events and are dispatched to processCommand.
Commands are also synchronized with the data flow through the Event manager (a minimal dispatch-loop sketch follows below).
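A minimal sketch of such an event manager: producers (device threads, the manager thread) enqueue events, and the single thread owning the modules pops them and invokes the corresponding handler, so data and command handling are serialized. The names (EventManager, Fire, RunLoop) are assumptions for this sketch, not the DABC classes.

// Sketch of event dispatch: device/manager threads enqueue events, the
// module thread executes the handlers (processInput, processCommand) one
// by one. Illustrative only, not the DABC implementation.
#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>

class EventManager {
public:
    // Called by a transport ("data ready") or by the manager thread (command).
    void Fire(std::function<void()> handler) {
        {
            std::lock_guard<std::mutex> lock(mutex_);
            events_.push(std::move(handler));
        }
        cv_.notify_one();
    }
    // Runs in the thread that owns the modules; serializes data and commands.
    void RunLoop() {
        for (;;) {
            std::unique_lock<std::mutex> lock(mutex_);
            cv_.wait(lock, [this] { return !events_.empty(); });
            std::function<void()> handler = std::move(events_.front());
            events_.pop();
            lock.unlock();
            handler();   // e.g. module.processInput() or module.processCommand()
        }
    }
private:
    std::mutex mutex_;
    std::condition_variable cv_;
    std::queue<std::function<void()>> events_;
};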
12
Back pressure mechanism
  • Basic idea
  • the sender may send packets only after the receiver has confirmed (with a special acknowledge message) that it has enough resources to receive them (see the credit-counter sketch after this list)
  • Implemented in the transport layer (not visible to the user)
  • Impact on module code
  • No additional effort is required
  • Can be enabled/disabled for any port
  • To block a connection, simply stop reading packets from it
  • Pro: an easy method for traffic control in a small system
  • Con: an easy way to block the complete network when a single node hangs
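The acknowledge-based scheme above can be expressed as a simple credit counter on each side. This is a hedged sketch under that assumption; the class names (SenderFlowControl, ReceiverFlowControl) and the credit granularity are illustrative, not the DABC protocol.

// Sketch of acknowledge-based back pressure: the sender only sends while it
// holds credits; the receiver grants credits (acknowledge messages) as long
// as it has free resources. Illustrative only.
#include <cstdint>

class SenderFlowControl {
public:
    explicit SenderFlowControl(uint32_t initial_credits)
        : credits_(initial_credits) {}
    bool MaySend() const { return credits_ > 0; }
    void OnPacketSent() { if (credits_ > 0) --credits_; }
    void OnAcknowledge(uint32_t granted) { credits_ += granted; } // receiver grants more
private:
    uint32_t credits_;   // packets the receiver has agreed to accept
};

// Receiver side: grants credits only while it has free buffers; simply not
// reading from a connection stops granting credits and blocks that sender.
class ReceiverFlowControl {
public:
    explicit ReceiverFlowControl(uint32_t free_buffers)
        : free_buffers_(free_buffers) {}
    uint32_t GrantCredits() {               // carried by an acknowledge message
        uint32_t granted = free_buffers_;
        free_buffers_ = 0;
        return granted;
    }
    void OnBufferReleased() { ++free_buffers_; }
private:
    uint32_t free_buffers_;
};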

13
Class diagram as currently implemented
Diagram: the core classes Module and Port together with the PCIboard, Bnet, Socket, and InfiniBand classes.
14
Slow control
  • Tools for control
  • Configuration via XML files
  • State machines, Infospace, message/error loggers,
    monitoring
  • Communication: web server, SOAP, DIM
  • Connectivity through DIM to LabView, EPICS, Java, any DIM client/server
  • Java with NetBeans (Matisse GUI builder), maybe soon in Eclipse
  • Front-end controls?
  • Mix of cooperating control systems
  • First LabView and Java GUIs operable

15
Controls, monitoring, communication
Diagram: an XDAQ Executive (process, address space) hosts the XDAQ Application with its Infospace, state machine, and the DABC data flow (modules, command queue); GRIDCC also appears in the diagram. A web server (SOAP) serves web browsers and a SOAP-client Java GUI; a DIM server serves DIM clients (Java GUI, LabView GUI, EPICS GUI). Infospace = remotely accessible parameters.
16
LabView-DIM Control GUI
Generic construction of a fixed parameter table from DIM servers
Dietrich Beck, EE
17
Java-DIM Control
Generic GUI construction from the commands and parameters offered by the DABC DIM servers. Applications create the commands and parameters and publish them. Ratemeter, trending, statistics.
18
Generic Java DIM GUI controls
19
Event building network (BNet)
20
BNet data flow: bidirectional approach
Diagram: front ends (DataDispatcher, MBS readout, other front ends) feed Linux nodes running DABC over Gigabit Ethernet (GE). The Sender part of each node collects, sorts, and tags the data and ships them over InfiniBand (IB) to the Receiver part of the building nodes, which perform building, filtering, and analysis and pass events on to analysis and archive. Sender and Receiver run on the same Linux/DABC nodes (bidirectional approach). GE = Gigabit Ethernet, IB = InfiniBand.
21
BNet modules view
Diagram (6 different modules): each of the M sender nodes runs a SubeventCombiner (with user plugin) and a DataSender with N outputs; each of the N receiver nodes runs a DataReceiver with M inputs, an EventBuilder (with user plugin), and an optional EventFilter (with user plugin) in front of Analysis / Storage.
22
BNet components
  • BNet modules
  • Readout: collects packets from readout channels (MBS, PCI board, other)
  • SubeventCombiner: searches for and combines data belonging to the same event or time frame
  • DataSender: sends subevent data over the network to the corresponding event-building node
  • DataReceiver: forwards packets from the network to the event builder
  • EventBuilder: builds complete events
  • EventFilter: performs first-level event filtering
  • BNet user plugins (see the interface sketch after this list)
  • find the data tags (event id or time stamp) in raw packets
  • resort data for sending over the net
  • build events from the received subevent packets
  • implement the filtering algorithm for events
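The four plugin tasks above map naturally onto a small interface. The following C++ sketch is an assumption made for illustration (the names RawPacket, Subevent, Event, BnetUserPlugin, FindTag, Resort, BuildEvent, Filter are invented here); it is not the actual DABC/BNet plugin API.

// Sketch of a BNet user-plugin interface covering the four listed tasks:
// tag finding, resorting, event building, and filtering. Illustrative only.
#include <cstdint>
#include <vector>

struct RawPacket { std::vector<unsigned char> data; };
struct Subevent  { uint64_t tag = 0; std::vector<unsigned char> data; };
struct Event     { uint64_t tag = 0; std::vector<Subevent> subevents; };

class BnetUserPlugin {
public:
    virtual ~BnetUserPlugin() = default;
    // find the data tag (event id or time stamp) in a raw packet
    virtual uint64_t FindTag(const RawPacket& packet) = 0;
    // resort subevents so that data with the same tag go to the same builder node
    virtual std::vector<Subevent> Resort(const std::vector<RawPacket>& packets) = 0;
    // build one complete event from the subevent packets received for a tag
    virtual Event BuildEvent(const std::vector<Subevent>& subevents) = 0;
    // first-level event filtering; return false to drop the event
    virtual bool Filter(const Event& event) = 0;
};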

23
InfiniBand testbench for BNet
  • InfiniBand: reliable, low-latency, zero-copy, high-speed data transport
  • Aim: proof of principle as an event-building network candidate for CBM
  • Tests during the last year
  • GSI cluster: 4 nodes, SDR
  • Forschungszentrum Karlsruhe (March 2007): 23 nodes, DDR
  • Uni Mainz (August 2007): 110 nodes, DDR
  • DDR = double data rate (up to 20 Gb/s), SDR = single data rate (10 Gb/s)

Point-to-point tests
BNet prototype (GSI, 4 Nodes)
Thanks to Frank Schmitz, Ivan Kondov, and the Project CampusGrid at FZK; thanks to Klaus Merle and Markus Tacke at the Zentrum für Datenverarbeitung, Uni Mainz.
24
Scaling of asynchronous traffic
25
Chaotic (async.) versus scheduled (sync.)
26
DABC further tasks
  • Achieved
  • Control infrastructure
  • setup, configure
  • communication
  • very first Java GUI
  • Data flow engine
  • multi-threading
  • InfiniBand
  • Sockets (Gigabit Ethernet)
  • first PCI Express board
  • good performance
  • back pressure
  • To do
  • Data formats
  • Error handling/recovery
  • MBS event building
  • Time-stamped data
  • Final API definitions
  • Documentation