Title: LOFAR project
1. LOFAR project
- Astroparticle Physics workshop
- 26 April 2004
2. LOFAR concept
- Combine advances in enabling IT
- inexpensive environmental sensors: 10,000s of sensors
- wide area optical broadband networks: custom, GigaPort/Géant
- high performance computing: IBM BlueGene/L
- to make a shared aperture multi-telescope
- but also to sense and interpret the environment in innovative ways
System spec driver
3. LOFAR Sensors
- Sensor type: applications
- HF-antenna: astrophysics, astro-particle physics
- VHF-antenna: cosmology / early Universe, solar effects on Earth / space weather
- Geophones: ground subsidence, gas/oil extraction
- Weather: micro-climate prediction, precision agriculture, wind energy
- Water: precision agriculture, habitat management, public safety
- Infra-sound: atmospheric turbulence, meteors, explosions, sonic booms
4. LOFAR Phase 1
- Radio telescope
- Seismic imager
- Precision weather for agriculture, wind energy
- Integrate LOFAR network into regional fibre network, sharing costs with schools, health centres etc.
5. Radio Telescope Specifications
- Frequency range
- 20–80 MHz, 120–240 MHz
- Angular resolution
- a few to 10 arcsec (see the quick check below)
- Sensitivity
- 100x previous instruments at these frequencies
- Shared aperture multi-telescope
- up to 8 independent telescopes
- plus geophone, weather etc arrays
- operated from remote Science Operations Centers
- similar to LHC tier-1 centers
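A quick order-of-magnitude check on the resolution figure, as a minimal Python sketch: it assumes diffraction-limited imaging, theta ≈ lambda/B, and takes the ~150 km maximum baseline quoted later in this talk for the Bsik configuration.

```python
import math

def resolution_arcsec(freq_mhz: float, baseline_km: float) -> float:
    """Diffraction-limited angular resolution theta ~ lambda / B, in arcsec."""
    wavelength_m = 299.792458 / freq_mhz       # c / f, with f in MHz, gives metres
    theta_rad = wavelength_m / (baseline_km * 1e3)
    return math.degrees(theta_rad) * 3600.0

# Band edges from this slide; ~150 km is the Bsik maximum baseline.
for f_mhz in (20, 80, 120, 240):
    print(f"{f_mhz:3d} MHz, 150 km: {resolution_arcsec(f_mhz, 150):5.1f} arcsec")
```

This gives roughly 2 arcsec at 240 MHz up to ~20 arcsec at 20 MHz, consistent with the few-to-tens-of-arcsec specification above.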
6. One day in the life of LOFAR, the radio telescope
[Figure: observing schedule over one day, by telescope nr.]
7. Challenges
- Data rate
- 15 Tbit/s total data generated (increasing later)
- 330 Gbit/s input data rate to central processor
- 1 Gbit/s to distributed Science Operations Centres
- Computational resources
- 34 TFLOP/s in custom co-processor (IBM BG/L); see the budget sketch below
- 500 TBytes on-line temporary storage
- Calibration
- adaptive multi-patch all-sky phase correction
- 10 sec duty cycle
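The rate and FLOP figures above combine into a rough per-byte compute budget; a minimal sketch using only the numbers quoted on this slide (assuming 8 bits per byte and ignoring all overheads):

```python
# Figures quoted on this slide; purely an order-of-magnitude check.
peak_flops    = 34e12    # 34 TFLOP/s in the BG/L co-processor
input_bits_s  = 330e9    # 330 Gbit/s into the central processor
output_bits_s = 1e9      # 1 Gbit/s on to the Science Operations Centres

input_bytes_s = input_bits_s / 8
print(f"FLOP available per input byte: {peak_flops / input_bytes_s:,.0f}")    # ~800
print(f"On-line data reduction factor: {input_bits_s / output_bits_s:,.0f}x")  # ~330
```

So the central processor has on the order of 800 floating-point operations to spend on every input byte, while reducing the data volume by a factor of a few hundred before shipping it out.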
8. IBM BlueGene/L
- IBM
- 1st research machine on the road to multi-peta-FLOP/s
- 3 BG/L machines under construction: LLNL, LOFAR, IBM Research
- numbers 1-10 of the Top-500 supercomputers in one machine (LLNL)
- SoC technology, standard components for reliability
- dual PowerPC 440 cores per node with 700 MHz clock
- scalability to many times 100,000 nodes
- low power, air cooled
- 20 W per node (see the check below)
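A one-line consistency check between this per-node figure and the total power quoted for LOFAR on the next slide (assuming the 20 W is per compute node; I/O nodes, interconnect and cooling are not itemised here):

```python
# 20 W per node (this slide) x 6,000 nodes (next slide).
watts_per_node, lofar_nodes = 20, 6_000
print(f"Compute nodes alone: {watts_per_node * lofar_nodes / 1e3:.0f} kW")  # 120 kW
# The quoted 150 kW then leaves ~30 kW for I/O nodes, interconnect and services.
```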
9. IBM BlueGene/L
- LOFAR
- BG/L is our 1st non-custom central processor
- total CPU power is interesting (34 TFLOP/s) and scalable
- component failure rate: one every 3 months, DRAM dominated
- BG/L is an embedded co-processor in a Linux cluster
- stripped-down Linux kernel on-chip
- general purpose capability allows complex modelling on-line, in real time
- efficient for complex arithmetic, streaming applications (see the sketch below)
- 330 Gb/s input data rate initially, 768 Gb/s max
- low power: 150 kW for LOFAR (6k nodes)
- scalable beyond LOFAR to SKA requirements
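To illustrate why a streaming, complex-arithmetic workload maps well onto such a machine, here is a minimal NumPy sketch of the kind of cross-correlation kernel involved; the station count, block length and data type are illustrative values, not LOFAR design parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
n_stations, block_len = 8, 4096   # illustrative only

# Complex antenna voltage streams for one integration block.
x = (rng.standard_normal((n_stations, block_len))
     + 1j * rng.standard_normal((n_stations, block_len))).astype(np.complex64)

# Visibility matrix V[i, j] = sum_t x_i(t) * conj(x_j(t)): a streaming
# complex multiply-accumulate over every station pair.
visibilities = x @ x.conj().T
print(visibilities.shape)   # (8, 8)
```

The work is dominated by regular complex multiply-accumulates over continuously arriving data, which is exactly the streaming pattern the slide refers to.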
10. Tier-0 computing: LHC, LOFAR in 2006
11. LOFAR with Bsik financing
- Central core plus 45 stations, 150 km max baseline
12. Mid-LOFAR would extend into Lower Saxony, Schleswig-Holstein, Nordrhein-Westfalen
- Max-LOFAR would have stations from Cambridge UK to Potsdam DE, from Nançay FR to Växjö SE
13. Post-2005: JIVE LOFAR data processing centre
- 30 Gbps → 2 Tbps
- LOFAR, the Sensor Network, is under consideration as an FP7 Technology Platform
14. LOFAR project timeline
- PDR in June/Oct 2003; €14M expended
- Dutch funding end 2003: €52M for infrastructure
- funding must be matched by partners
- 18-member consortium, additional partners possible
- formal goal is economic positioning w.r.t. adaptive sensor networks
- RF, seismic, infra-sound, wind-energy sensors
- prototyping of a full station is in progress
- 100 low frequency antennas in the field, now making all-sky videos
- end 2004: expect a 2-beam web-based system on-line (to gain experience)
- issues: calibration, RFI, adaptive re-allocation of resources
- BlueGene/L delivery in 1Q-2005
- FDR start in mid-2004, complete mid-2005
- procurement start mid-2004, end mid-2006
- initial operational status end-2006 (solar minimum)
- full operational status mid-2008
15. Remaining tasks for which partners are being sought
Where?
- Array configuration: size, new stations!
- extension of array size to 400 km is highly desirable
- cost is €500k per station
- fiber connections through Géant, national academic networks
- Definition, designation of operations centers
- Science Operations Centers are remote, on-line
- basic data taking and archiving of observations
- financing mostly local, plus contribution to common services
- Engineering Operations Center in Dwingeloo
- monitor system, perform maintenance
- integrated operations team (with WSRT, possibly JIVE)
- Operational modelling and User interface
- use of (quasi-real-time) GRID technologies foreseen
- work packages not funded / manned yet
Where?
16. User involvement
- Test User Group
- Heino Falcke, leader
- Lars Bähren, Michiel Breintjens, Stefan Wijholds, etc.
- open, remote access to developing system
- step-wise functionality improvements until 2006
- 1st user workshop Dwingeloo, May 24-25, 2004
- ASTRON is ready to host a (limited) number of young researchers to test and help develop the system
- Formal operations from 2007
- scheduling will be an interesting problem
17. LOFAR Research Consortium
- Universities: Univ. of Amsterdam, TU Delft, TU Eindhoven, Univ. of Groningen, Leiden Univ., Nijmegen Univ., Uppsala Univ.
- Research Institutes: ASTRON (management org.), CWI, IMAG, KNMI, TNO-NITG, LOPES Consortium, MPIfR-Bonn
- Commercial: Ordina Technical Automation bv, Dutch Space bv, Twente Institute for Wireless and Mobile Communications bv, ScienceTechnology bv