Transcript and Presenter's Notes

Title: Computing Challenges in High Energy Physics and Beyond


1
Computing Challenges in High Energy Physics and Beyond
  • Lyle Winton
  • Experimental Particle Physics, University of
    Melbourne
  • Australian Frontiers of Science symposium, 2005

2
Overview
  • Introduction (to High Energy Physics)
  • Belle Experiment
  • ATLAS Experiment
  • Data Grids
  • Quantum Chromodynamics (QCD)
  • Medical Physics

3
Introduction
  • What we do in experimental High Energy or
    Particle Physics
  • investigate the frontiers of the matter and
    forces making up the universe
  • at extremely high energies
  • at extremely small scales
  • the environment of the early universe

4
Introduction
  • How we do this
  • construct particle accelerators
  • create high energy protons/electrons
  • collide these to deeply probe structure, to
    produce large forces, or to produce large
    amounts of free energy
  • collide many to search for rare processes
  • construct instrumented, precision detectors
  • reconstruct the nanoscopic collisions
  • identify and measure the resulting particles
  • Majority of activities involve high performance
    computing
  • designing accelerators and detectors
  • collecting and filtering experimental data
  • data processing
  • data analysis

5
Introduction - Data Processing
[Diagram: the data processing chain. Online DAQ produces Raw Events;
Reconstruction turns these into Reconstructed Events; Event Analysis
reduces them to Parameters/Histograms for Fitting, Modelling, etc.
and Statistical Analysis; Simulation feeds the same chain.]
6
Introduction - Simulation
  • Simulated collisions or events (using Monte
    Carlo techniques; a toy sketch follows this
    list)
  • used to predict what we'll see (features of data)
  • Essential to support design of systems
  • Essential for analysis
  • acceptances/efficiencies, fine tuning,
    understanding uncertainties
  • Computationally intensive
  • collisions, interactions, decays
  • all components and materials (ATLAS is 22x22x46
    m, 7000 tons, 10 µm accuracy)
  • tracking, energy deposition
  • all electronics effects (signal shapes,
    thresholds, noise, cross-talk)
  • ratio of greater than 3:1 for Simulated:Real data
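A minimal sketch of the Monte Carlo idea behind this kind of event
simulation (a Python toy, not the experiments' actual code; the
lifetime and resolution figures are illustrative assumptions):

```python
import random

def simulate_event(true_energy_gev, resolution=0.05, lifetime_ps=1.5):
    """Toy Monte Carlo for one 'event': draw a decay time from an
    exponential distribution, then smear the energy with a Gaussian
    detector resolution -- two standard MC ingredients."""
    decay_time = random.expovariate(1.0 / lifetime_ps)   # proper time, ps
    measured = random.gauss(true_energy_gev, resolution * true_energy_gev)
    return decay_time, measured

# Generate a sample and compare to truth, as a real analysis would
# do at vastly larger scale (billions of events).
sample = [simulate_event(10.0) for _ in range(100_000)]
mean_e = sum(e for _, e in sample) / len(sample)
print(f"mean measured energy: {mean_e:.3f} GeV (truth: 10.000 GeV)")
```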

7
HEP Projects
  • The ATLAS Experiment
  • Large Hadron Collider (LHC) at CERN, Geneva
  • Search for the Higgs particle, which may lead to
    understanding the origins of mass.
  • Collaboration: 1800 people, 150 institutes, 34
    countries
  • over 3.5 PB data per year
  • operational in 2007
  • Software framework has progressed!
  • The Belle Experiment
  • KEK B-Factory, Japan
  • Investigating fundamental violation of symmetry
    in nature (Charge Parity), which may help explain
    the universal matter-antimatter imbalance.
  • Collaboration: 400 people, 50 institutes
  • 100s of TB data currently
  • Soon to encounter the need for much more CPU!

8
Belle Experiment
  • Belle on the KEKB Accelerator, Japan
  • Australia a member since 1997
  • collides e- e+ to generate B mesons
  • Investigating fundamental violation of symmetry
    in nature (Charge Parity)
  • may help explain the universal matter-antimatter
    imbalance
  • Luminosity increasing exponentially
  • Great!
  • increase in collisions
  • increase in data
  • greater statistics, probe more deeply
  • Means more simulated data needed
  • maintain the 3:1 ratio (Sim:Real data)

9
Belle Experiment
  • Created a computing challenge
  • 4 billion events simulated in 2004
  • 3 seconds of CPU per event (see the estimate
    below)
  • Saturated KEK computing facilities!
  • Belle Monte Carlo Production
  • Facilities around the world contributed CPU
  • Australia a major contributor, using our
    computing facilities
  • APAC, ANUSF, VPAC, SC3, Melbourne Uni's ARC
  • data replicated between Australia and Japan via
    SRB (Storage Resource Broker)
  • Effort ongoing in 2005
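A back-of-the-envelope check of the slide's figures, as a short
sketch (both inputs are taken from the bullets above):

```python
events = 4e9          # events simulated in 2004 (from the slide)
cpu_per_event = 3.0   # CPU seconds per event (from the slide)

cpu_seconds = events * cpu_per_event
cpu_years = cpu_seconds / (365.25 * 24 * 3600)
print(f"{cpu_seconds:.2e} CPU-seconds = {cpu_years:.0f} CPU-years")
# -> 1.20e+10 CPU-seconds = 380 CPU-years of continuous computing,
#    which is why a single site's facilities saturate.
```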

10
ATLAS Experiment
  • Complexity of experimental data (14 TeV collision
    energy)
  • ~23 proton-proton collisions for every bunch
    crossing
  • ~700 tracks in the inner detector
  • can take 100 ns for particles to exit the
    detector (particles from 4 bunch crossings in
    the detector at once)
  • Precision detector
  • 22 m high and wide, 46 m long, 7000 tons
  • tracking accuracy down to 10 µm
  • High Radiation Environment
  • up to 160,000 Gy per year (Inner Detector)
  • flux from thermal neutrons up to TeV particles
  • much simulation has gone into the design
  • Extreme volume of data
  • Bunch collisions occur 40 million times a second
  • 1 PB/s data output, filtered to 3 GB/s
  • Event Filter will be a 2000 CPU farm
  • processes data in real time
  • filters down to 300 MB/s stored
  • up to 10 PB per year (long term storage); see
    the estimate below
  • looking for 1 event embedded in 10 thousand
    billion (10,000,000,000,000 : 1)
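A rough consistency check of these rates; a sketch only, where the
assumption of continuous running gives an upper bound on the yearly
volume:

```python
detector_output = 1e15   # ~1 PB/s raw off the detector (from the slide)
stored_rate = 300e6      # 300 MB/s after the Event Filter (from the slide)

seconds_per_year = 365.25 * 24 * 3600   # continuous running: upper bound
print(f"overall rejection factor: {detector_output / stored_rate:.1e}")
print(f"stored per year (upper bound): "
      f"{stored_rate * seconds_per_year / 1e15:.1f} PB")
# -> rejection ~3.3e+06, ~9.5 PB/yr, consistent with the slide's
#    'up to 10 PB per year'.
```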

11
ATLAS Experiment
  • Simulation Challenges
  • large body of knowledge encapsulated in code
  • interaction and decay of fundamental particles
  • passage of particles through matter
  • volume, complexity, precision of data that will
    be recorded
  • > 3 billion events per year
  • Simulation can take 45 min per event
  • showering of particles in material
  • large number of secondaries
  • additional particles to track in simulation
  • eg. electrons in calorimeter
  • efforts to speed up by parametrising showers
    (see the sketch below)
  • stop tracking each particle
  • simulate shower shapes with energy deposited in
    the detector
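One standard way to parametrise electromagnetic showers is a
gamma-function longitudinal profile; a minimal sketch, where the
shape parameters a and b are illustrative rather than fitted values:

```python
import math

def longitudinal_profile(t, e0_gev, a=4.0, b=0.5):
    """Gamma-function parametrisation of a shower's longitudinal
    energy deposition dE/dt (t in radiation lengths). Depositing
    energy from this curve replaces tracking every secondary
    electron and photon through the material."""
    return e0_gev * b * (b * t) ** (a - 1) * math.exp(-b * t) / math.gamma(a)

# Deposit a 100 GeV shower into 1-X0 slices of a calorimeter
# (midpoint evaluation per slice).
e0 = 100.0
slices = [longitudinal_profile(t + 0.5, e0) for t in range(25)]
print(f"energy captured in 25 X0: {sum(slices):.1f} GeV of {e0} GeV")
```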

12
ATLAS Experiment
  • Large Collaboration
  • 1800 physicists, >150 universities and labs, 34
    countries
  • many individual studies all requiring access to
    data and CPUs
  • Extensive Code (4M lines)
  • embodies much physics and experimental knowledge
  • management and distribution systems have been
    developed
  • Documentation and Communication
  • AccessGrid and Virtual Rooms, CDS (CERN Document
    Server), Wiki, Database and Meta-Data systems
  • Much effort on policy

13
Data Grid
  • To solve these challenges we look towards new
    HPC techniques
  • The Grid
  • "... an infrastructure that enables the
    integrated, collaborative use of high-end
    computers, networks, databases, and scientific
    instruments owned and managed by multiple
    organizations."
  • one global peta-scale computing resource
  • Effort to provide transparent access to
    processing power (on tap)
  • Implemented as a Middleware solution
  • we are looking at the Data Grid
  • Grid computing where access to data is extremely
    important
  • Effort to help share, manage, and process large
    amounts of distributed data
  • some examples
  • Earth Systems Grid
  • climate studies in the US
  • Global Bioinformatics Grid
  • collaborative data for The Bioinformatics
    Organisation
  • LCG (LHC Computing Grid)
  • HEP driver for the EDG project, which has become
    the EGEE project (>60 MEuro, primarily manpower)

14
Data Grid
  • LHC Computing Grid (LCG)
  • about 10% complete
  • largest international scientific Grid
  • the computing infrastructure to support the 4
    experiments on the LHC
  • supports 5000 scientists from 500 institutes
  • 10s of PB/yr of data
  • order of 100,000 CPUs
  • currently 130 sites, 11,000 CPUs
  • 140 Tera-flops by end of 2006
  • Fastest super-computer: 70 Tflops
  • Earth Simulator: 40 Tflops (Japan)

15
Data Grid
  • ATLAS collaboration
  • largest user of the LHC Computing Grid
  • 24,000 x 3.6 GHz P4 (for 2008)
  • 20 PB (for 2008)
  • Data to distribute, summarised (a volume
    estimate follows this list):
  • RAW, SIM: 2 MB/event
  • ESD (event summary): 0.5 MB
  • AOD (analysis object): 100 kB
  • TAG (event tags): 1 kB
  • LCG Tier structure
  • Tier 0: all RAW
  • Tier 1: 0.1 of RAW; all ESD, AOD, TAG
  • Tier 2: 0.3 of AOD; all TAG
  • Your workstation connects into this
  • From your workstation
  • low latency access to data within 48 hrs
  • shared access to CPU from anywhere
  • users compose a virtual data set
  • data could physically be anywhere
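With the per-event sizes above, annual volumes per format follow
directly. A sketch, assuming ~3 billion events per year (reusing the
event count quoted on slide 11 as an illustration):

```python
# Per-event sizes from the slide, in bytes.
sizes = {"RAW/SIM": 2e6, "ESD": 0.5e6, "AOD": 100e3, "TAG": 1e3}

events_per_year = 3e9   # assumption: ~3 billion events per year
for fmt, size in sizes.items():
    print(f"{fmt:8s}{size * events_per_year / 1e15:7.3f} PB/year")
# RAW/SIM dominates (6 PB/yr) while TAG is tiny (3 TB/yr), which is
# why every tier can hold all TAG but only Tier 0 holds all RAW.
```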

16
Data Grid
  • Advanced HPC essential for the future of our
    research
  • Large Scale International Collaborations
  • help Australians better utilise International
    Facilities
  • Access to Experimental Data, Simulation, and
    Results (critical)
  • Australian HEP Grid Program
  • Investigating the use of Data Grid technologies
  • Funding since 2002 from VPAC, ARC, and APAC
  • Collaborating with Computer Science to research
    tools
  • distributed data processing
  • distributed task scheduling
  • virtual organisation management
  • meta-data systems
  • Participated in major Data Grid projects: the LCG
    and NorduGrid
  • Aim to drive infrastructure development within
    Australia
  • HEP computing challenges provide a killer app

17
Data Grid
  • Australian HEP Grid Program outcomes
  • Built and demonstrated Grid testbeds using
    various middleware (Globus, NorduGrid, LCG)
  • Demonstrated HEP applications on Grid middleware
  • Demonstrated HEP portals to the Grid (web based
    and command line)
  • Developed virtual organisation systems for Grid
    administration
  • Developed high level job scheduling tools for
    Data Grid
  • Recognised as leading Grid application
  • APAC Grid program and Uni. Melb. eResearch pilot
    program
  • driver for advanced networks and data
    infrastructure within Australia
  • Belle simulation, ATLAS challenges

18
Quantum Chromodynamics (QCD)
  • the fundamental quantum field theory of the
    Standard Model
  • Centre for the Subatomic Structure of Matter
    (CSSM), The University of Adelaide
  • describes the interactions between quarks and
    gluons
  • eg. found in nuclei (protons, neutrons)
  • a flux tube binds quarks in nuclei
  • simulations on a space-time lattice are the only
    first-principles approach to study it
  • Ideal: large physical volume, fine lattice
    spacing
  • typically 20 cubed or slightly larger (see the
    estimate below)
  • eg. QCD Lava Lamp (vacuum action density)
  • can take months to years on Tera-flop scale
    computers
  • International Lattice Data Grid (ILDG)
  • sharing generated data sets
  • sharing initial lattice state data
  • can help save computation time
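A rough sense of the scale involved; a sketch of the memory for one
gauge configuration, assuming the standard storage of four SU(3)
link matrices per site (the 20^3 x 40 volume here is illustrative of
the "typically 20 cubed" lattices mentioned above):

```python
# Each lattice site carries 4 SU(3) link matrices, i.e. 3x3 complex
# matrices of double-precision numbers.
sites = 20**3 * 40
bytes_per_site = 4 * 3 * 3 * 2 * 8   # links * entries * (re, im) * double
print(f"{sites:,} sites -> {sites * bytes_per_site / 1e6:.0f} MB "
      f"per gauge configuration")
# Generating thousands of such configurations, each requiring many
# large sparse-matrix inversions, is what consumes months to years
# on Tera-flop scale machines.
```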

19
Medical Physics
  • Simulation from experiments like ATLAS has led
    to technology transfers
  • Astrophysics (cosmic rays, telescope design)
  • Radiation Protection (effects on astronauts,
    equipment)
  • major transfers into Medicine
  • Emission Tomography (eg. PET)
  • Radiation Therapy (eg. brachytherapy, electron
    therapy, proton therapy)
  • Radiation Therapy equipment design (low energy
    particle accelerators)

20
Medical Physics
  • Efforts within Australia
  • University of Melbourne, University of
    Wollongong, Peter Mac.
  • PET
  • Radiation Therapy (prostate cancer, proton)
  • Nanodosimetry (DNA level, cell death)
  • HEP collaboration: GEANT
  • GEANT4 toolkit for simulating the passage of
    particles through matter
  • incorporates effects needed for high energy (LHC)
    and low energy
  • able to model complex geometries
  • accurately predicts dose
  • GEANT4
  • used to help design medical equipment
  • inexpensive high resolution PET using advanced
    silicon technology
  • Nanodosimetry detectors
  • real-time patient planning
  • reproduce real geometry and tissues (from CT)
  • predict optimal plan
  • real-time, before patient moves (few minutes)
  • Investigating cluster and Grid computing
  • Physics processes modelled by GEANT4 (a toy
    transport example follows this list):
  • Multiple scattering
  • Bremsstrahlung
  • Ionisation
  • Annihilation
  • Photoelectric effect
  • Compton scattering
  • Rayleigh effect
  • γ conversion
  • e+e- pair production
  • Synchrotron radiation
  • Transition radiation
  • Cherenkov
  • Refraction
  • Reflection
  • Absorption
  • Scintillation
  • Fluorescence
  • Auger
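A toy illustration of the Monte Carlo transport idea underlying such
dose calculations; a sketch only, far simpler than GEANT4 (no
scattering or secondaries, and the attenuation coefficient is an
illustrative value, roughly water at ~1 MeV photon energy):

```python
import random

def depth_dose(n_photons=100_000, mu_per_cm=0.07, depth_cm=30, bins=30):
    """Sample each photon's interaction depth from the exponential
    (Beer-Lambert) attenuation law and histogram where interactions
    occur -- the core of Monte Carlo dose estimation."""
    hist = [0] * bins
    for _ in range(n_photons):
        d = random.expovariate(mu_per_cm)        # free path in cm
        if d < depth_cm:
            hist[int(d / depth_cm * bins)] += 1
    return hist

for i, n in enumerate(depth_dose()[:5]):
    print(f"{i}-{i+1} cm: {n} interactions")
```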

21
Acknowledgements
  • Experimental Particle Physics Group, University
    of Melbourne
  • Glenn Moloney
  • Marco La Rosa
  • Melbourne Advanced Research Computing Centre
    (ARC)
  • Dirk Van Der Knijff, Rob Sturrock
  • Special Research Centre for the Subatomic
    Structure of Matter (CSSM), University of
    Adelaide
  • Derek Leinweber
  • Centre for Medical Radiation Physics, University
    of Wollongong
  • Andrew Wroe, Dean Cutajar, Anatoly Rosenfeld
  • Australian National University Supercomputing
    Facility
  • Stephen McMahon, Jon Smillie
  • Victorian Partnership for Advanced Computing
  • Damon Smith
  • Australian Partnership for Advanced Computing
  • Australian Research Council
  • Belle Collaboration (cast of hundreds)
  • ATLAS Collaboration (cast of thousands)