Transcript and Presenter's Notes

Title: High Performance Computing in Particle Physics


1
  • High Performance Computing in Particle Physics
    and Cosmology
  • Application to Neutrino Parameters Correlations
  • Nidal CHAMOUN
  • Department of Physics,
  • HIAST,
  • Damascus, Syria

HIAST
2
Table of Contents
  • Basics
  • Need for Powerful Computing in High Energy
    Physics
  • Art of Cosmological Simulations
  • The LHC Grid
  • Neutrino Parameters: a model with many free
    parameters

3
Powerful Computing in High Energy Physics
  • Theoretical particle physics is an integral part
    of the world-wide activities to search for new
    physics beyond the Standard Model of High Energy
    Physics.
  • In order to distinguish signs of non-standard
    physics from our present description of
    elementary particle interactions, it is mandatory
    to have theoretical predictions originating from
    the underlying theory alone, i.e. from a priori
    computations and without further approximations.

4
Powerful Computing in High Energy Physics
  • Several applications in computational particle
    physics are still far beyond the reach of
    state-of-the-art computers: following the
    evolution of even simple dynamical equations
    responsible for very complex behavior can require
    inordinately long execution times.
  • For instance, in the approach of Lattice Gauge
    Theory the continuum of nature is replaced with a
    discrete lattice of space-time points.

5
Powerful Computing in High Energy Physics
  • This lattice approximation allows for numerical
    simulations on massively parallel computer
    architectures.
  • Furnished with this conceptual tool, the high
    precision experimental data expected from the
    newly planned accelerators can be interpreted in
    a clean manner, free of built-in approximations.
  • However, despite its space-time economy, the
    lattice needs the power of the world's largest
    supercomputers to perform the calculations
    required to solve the complicated equations
    describing elementary particle interactions.

6
Powerful Computing in High Energy Physics
  • This is the case when we integrate, say, over the
    configuration space of a three-dimensional
    lattice system with very many sites, requiring up
    to ~10^12 (tera) Monte Carlo steps, which is still
    an intractable task (a toy illustration follows
    below).
  • Extensive use of parallelism is the main avenue
    to boost computer performance.
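
To give a concrete, much simplified flavour of the kind of
configuration-space sampling involved, the sketch below runs
Metropolis Monte Carlo sweeps over a small three-dimensional
Ising-like lattice in Python. It only illustrates the cost
structure (sites times sweeps); it is not lattice-gauge-theory
code, and the lattice size, coupling and number of sweeps are
arbitrary choices.

import numpy as np

# Illustrative Metropolis sampling of a 3D Ising-like lattice.
# A real lattice gauge theory simulation updates gauge link variables,
# but the cost structure (number of sites x number of sweeps) is analogous.
L = 16                      # lattice extent per dimension (illustrative)
beta = 0.22                 # coupling (illustrative)
rng = np.random.default_rng(0)
spins = rng.choice([-1, 1], size=(L, L, L))

def sweep(spins):
    """One Metropolis sweep: propose a spin flip at every site."""
    for x in range(L):
        for y in range(L):
            for z in range(L):
                s = spins[x, y, z]
                # sum of the six nearest neighbours (periodic boundaries)
                nb = (spins[(x + 1) % L, y, z] + spins[(x - 1) % L, y, z]
                      + spins[x, (y + 1) % L, z] + spins[x, (y - 1) % L, z]
                      + spins[x, y, (z + 1) % L] + spins[x, y, (z - 1) % L])
                dE = 2.0 * s * nb           # energy change of flipping s
                if dE <= 0 or rng.random() < np.exp(-beta * dE):
                    spins[x, y, z] = -s

for _ in range(10):   # a realistic study needs ~10^9-10^12 such updates
    sweep(spins)
print("magnetisation per site:", spins.mean())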

7
Art of Cosmological Simulations
  • During the last 10 years new extensive
    observations of the Universe were made using both
    ground-based telescopes and space instruments.
  • The huge observational progress has been
    accompanied by considerable effort to advance our
    theoretical understanding of the formation of the
    different components of the observed structure of
    the Universe: galaxies and their satellites,
    clusters of galaxies, and super-clusters.
  • the standard cosmological model

8
Art of Cosmological Simulations
  • A substantial part of this theoretical progress
    is due to the improvement of numerical methods
    and models, which mimic structure formation on
    different scales using a new generation of
    massively parallel supercomputers.
  • The nonlinear evolution of cosmological
    fluctuations can be studied only numerically. The
    details of galaxy formation must be followed
    using hydrodynamic simulations.
  • However, many features can already be studied by
    semi-analytical methods.

9
Art of Cosmological Simulations
  • Modern astrophysics and cosmology are
    characterized by complex problems whose dynamical
    range covers extended space-time scales, from the
    stability of solar systems to the physics of
    quasars and large-scale structures.
  • Thus, the requirements for modern cosmological
    simulations are extreme: a very large dynamical
    range for force resolution and many millions of
    particles are needed (see the sketch below).
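
As a rough illustration of why the particle number drives the
cost, the sketch below (illustrative only; the particle number,
softening length and units are arbitrary) computes gravitational
accelerations by direct summation over all particle pairs. Its
O(N^2) scaling is what pushes production cosmology codes onto
tree or particle-mesh algorithms and massively parallel machines.

import numpy as np

# Direct-summation gravitational accelerations: O(N^2) pair interactions.
# Production codes replace this with tree / particle-mesh methods, but the
# sketch shows why millions of particles need parallel hardware.
N = 1000                        # illustrative particle number
eps = 0.01                      # force softening length (illustrative)
rng = np.random.default_rng(1)
pos = rng.random((N, 3))        # positions in a unit box
mass = np.full(N, 1.0 / N)      # equal masses, total mass 1 (G = 1 units)

def accelerations(pos, mass, eps):
    acc = np.zeros_like(pos)
    for i in range(N):
        d = pos - pos[i]                       # vectors to all other particles
        r2 = (d ** 2).sum(axis=1) + eps ** 2   # softened squared distances
        r2[i] = np.inf                         # exclude self-interaction
        acc[i] = (mass[:, None] * d / r2[:, None] ** 1.5).sum(axis=0)
    return acc

acc = accelerations(pos, mass, eps)
print("mean |acceleration| =", np.linalg.norm(acc, axis=1).mean())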

10
Art of Cosmological Simulations
  • Case study: "Helium and deuterium abundances as
    a test for the time variation of the fine
    structure constant and the Higgs vacuum
    expectation value",
  • J. Phys. G: Nucl. Part. Phys. 34 (2007) 163-176,
    by Chamoun, Mosquera, Landau and Vucetich

We used semi-analytical methods (Astrophys. J. 378
(1991) 504-518) to calculate the abundances of
helium and deuterium produced during Big Bang
nucleosynthesis, assuming the fine structure
constant and the Higgs vacuum expectation value may
vary in time.
11
Time Variation of Fundamental Constants ↔
Nucleosynthesis
12
Time Variation of Fundamental Constants ↔
Nucleosynthesis
  • We assumed that the discrepancy between the
    standard BBN (SBBN) estimates for 4He and D and
    their observational data is due to a change in
    time of the fundamental constants: the Higgs vev v
    and the fine structure constant α.
  • We analysed the dependence of the 4He and D
    abundances on these fundamental constants within
    perturbation theory, and on deviations with
    respect to the mean value of the baryonic density
    (schematically, see the expansion below).
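
Schematically, the perturbative treatment amounts to a
first-order expansion of each abundance around its standard
value; the sensitivity coefficients below stand for the numbers
computed numerically in the paper and are not values quoted
from it:

% first-order dependence of an abundance Y (4He or D) on the fractional
% variations of alpha, the Higgs vev v and the baryon density
Y \simeq Y_0 \left( 1
        + c_\alpha \,\frac{\Delta\alpha}{\alpha}
        + c_v \,\frac{\Delta v}{v}
        + c_B \,\frac{\Delta\Omega_B h^2}{\Omega_B h^2} \right)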

13
Time Variation of Fundamental Constants ↔
Nucleosynthesis
  • The calculation of the abundances of heavier
    elements requires many more computations and
    coupled equations to be solved.

14
Table of Contents
  • Basics
  • Need for Powerful Computing in High Energy
    Physics
  • Art of Cosmological Simulations
  • The LHC Grid
  • Neutrino Parameters: a model with many free
    parameters

15
The Grid: What is it?
processing power on demand, like electrical power
a virtual metacomputer for the seamless access to
dispersed resources
coordinated resource sharing and problem solving
in dynamic, multi-institutional, virtual
organizations
16
The Grid: Virtual Organizations
a group of individuals or institutes who are
geographically distributed but appear to function
as one single unified organization
[Diagram: organizations O1, O2 and O3 linked over
the Internet, grouped into virtual organizations
VO1 and VO2]
17
The Grid: What is it?
reality is catching up fast with the dream
  • Distributed computing: a method of computer
    processing in which different parts of a program
    run simultaneously on two or more computers that
    communicate with each other over a network. It is
    a form of segmented or parallel computing. It also
    requires that the division of the program take
    into account the different environments in which
    the different sections of the program will run (a
    minimal sketch follows below).
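
A minimal sketch of this idea, assuming the sub-tasks are
independent: on a real grid the workers would be remote nodes
reached through middleware, which is imitated here with local
processes from Python's standard library; the task function and
parameter list are placeholders.

from concurrent.futures import ProcessPoolExecutor

def analyse(parameter):
    """Placeholder for one independent piece of the computation
    (in reality this could take hours of CPU time)."""
    return parameter, parameter ** 2

if __name__ == "__main__":
    parameters = range(16)               # the full problem, split into parts
    with ProcessPoolExecutor(max_workers=4) as pool:
        # each worker handles part of the problem simultaneously
        for p, result in pool.map(analyse, parameters):
            print(f"parameter {p} -> {result}")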

18
What can the Grid do?
What type of applications will the Grid be used
for?
The first big-time users of the Grid will probably
be scientists with challenging applications that
are simply too difficult to run on just one set of
computers.
19
Computational problems
  • Degree of parallelism: the ability to split a
    problem into many smaller sub-problems that can be
    worked on by different processors in parallel.
  • Degree of granularity: the dependence of each
    sub-problem on the results of other sub-problems.
  • As a rule of thumb, fine-grained calculations are
    better suited to big, monolithic supercomputers.
  • On the other hand, coarse-grained calculations
    (high-throughput computing) are ideal for a more
    loosely coupled network of computers.

20
Large Hadron Collider Computing Grid project at
CERN
  • To study the fundamental properties of subatomic
    particles and forces

Brookhaven National Laboratory (US)
Fermi National Accelerator Laboratory (Fermilab) (US)
Forschungszentrum Karlsruhe (Germany)
Rutherford Appleton Laboratory (UK)
CCIN2P3 (France)
INFN-CNAF (Italy)
SARA/NIKHEF (Netherlands)
21
Grid @ CERN
  • CERN has a reputation for being at the forefront
    of networking technology - "where the Web was
    born" is the lab's motto. When it comes to Grid
    technology this is particularly true: CERN is
    leading some of the most ambitious Grid projects
    in the world.
  • The Large Hadron Collider (LHC), due to run fully
    in autumn 2009, will smash protons into protons
    at nearly the speed of light to create conditions
    that occurred a few seconds after the Big Bang.

22
Need for Grid Computing in LHC
  • These collisions will happen at an unprecedented
    energy of 14 trillion electron volts (14 TeV). The
    beam collisions could reveal physics beyond the
    Standard Model.
  • Once operational, we shall collect data of the
    order of 15 petabytes (15 million gigabytes) every
    year. That is more than 1000 times the amount of
    information printed in book form every year around
    the world, and nearly 1% of all the information
    that humans produce on the planet each year,
    including digital images and photos.
  • These data shall be used by thousands of
    scientists from all around the world.

23
The LHC Computing Grid
  • One of the challenges this scenario poses is to
    build and maintain data storage and analysis
    infrastructure for the entire high energy physics
    community.
  • The current model adopted at the LHC is a
    four-tier grid structure which distributes the
    data worldwide. Formal access to the NWIC Grid is
    requested in support of these activities.
  • This LHC Computing Grid, launched on 3 October
    2008, is a distribution network: the data stream
    from the detectors provides approximately 300
    GB/s, which, filtered for "interesting events",
    results in a "raw data" stream of about 300 MB/s
    (see the rough estimate after this list).
  • The CERN computer centre, considered "Tier 0",
    has a dedicated 10 Gbit/s connection to the
    counting room.
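
As a rough consistency check (assuming, purely for
illustration, the typical figure of about 10^7 seconds of data
taking per year, which is not stated on the slide): 300 MB/s x
10^7 s ≈ 3 PB of raw data per year, which together with
reconstructed and simulated data across the experiments is
compatible with the quoted ~15 PB per year.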

24
Large Hadron Collider Computing Grid project at
CERN
  • The LHC at Geneva produces roughly 15 petabytes
    (15 million gigabytes) of data annually. These
    data are accessed by the high energy physics
    community throughout the world.

Multiple copies of the data and automatic
reassignment of computational tasks to available
resources ensure load balancing and facilitate
access to the data
5000 scientists from about 500 research
institutes and universities
25
  • MINSP (short for Minimization of String
    Potential)
  • country: Syria
  • author: Nizar Alhafez
  • institute: HIAST
  • domain: Theoretical Physics
  • contacts: nzhafez@hiast.edu.sy
  • description: We seek a minimum of a potential
    function (coming from the physics of string
    theory) involving many parameters. One run (with
    the parameters fixed at particular values) lasts
    between 3 and 10 hours, so spanning the parameters
    over the given regions would need more than 15
    years on a single machine (a rough estimate is
    sketched below).
  • requirements: The application requires the AXION
    software. It has been installed on the EUMEDGRID
    e-Science infrastructure.
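
To see how a figure of this order arises (the run count below
is purely illustrative, not taken from the slide): scanning on
the order of 2 x 10^4 parameter combinations at roughly 6.5
hours per run gives about 1.3 x 10^5 CPU hours, i.e. close to
15 years on a single machine, which is why the runs are farmed
out to the grid.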

26
Table of Contents
  • Basics
  • Need for Powerful Computing in High Energy
    Physics
  • Art of Cosmological Simulations
  • The LHC Grid
  • Neutrino Parameters: a model with many free
    parameters

27
Neutrino Physics
  • Neutrinos (meaning "Small neutral ones") are
    elementary particles that often travel close to
    the speed of light, lack an electric charge, are
    able to pass through ordinary matter almost
    undisturbed and are thus extremely difficult to
    detect. Neutrinos have a minuscule, but nonzero
    mass.
  • The establishment of a non-vanishing neutrino
    mass in neutrino oscillation experiments is one of
    the major new achievements in theoretical physics
    in the last decade.

28
Neutrino experimental constraints

29
A Model for Neutrino Mass Matrix
  • "Zero minors of the neutrino mass matrix",
    Phys. Rev. D 78, 073002 (2008), by Lashin and
    Chamoun
  • We examine the possibility that a certain class
    of neutrino mass matrices, namely, those with two
    independent vanishing minors in the flavor basis,
    regardless of being invertible or not, is
    sufficient to describe current data.
  • Strategy: span the free parameters over their
    accepted ranges and test whether or not there are
    acceptable choices meeting the current data
    constraints.

30
Neutrino Mass Matrix
31
Neutrino Mass Matrix
32
Neutrino Mass Matrix
33
Neutrino Model with two vanishing minors
34
What about a One Vanishing Minor Neutrino Mass
Matrix?
  • A work in progress
  • Fewer constraints → more free parameters
  • Need to span a larger space → need a more
    powerful computer
  • Can deduce correlations among the different
    parameters
  • Strategy: again, the nature of the nested loops
    suggests using a distributed computing program
    that is split into parts running simultaneously on
    multiple computers communicating over a network
    (see the sketch below).
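
A sketch of how such nested loops split into independent
pieces, assuming (purely for illustration) that the outermost
loop runs over the phase δ and that an is_acceptable test
stands in for the real oscillation-data constraints; on the
grid, each chunk would be submitted as a separate job rather
than to a local process pool.

import numpy as np
from concurrent.futures import ProcessPoolExecutor

def is_acceptable(delta, rho, sigma):
    """Placeholder constraint; the real test checks the vanishing-minor
    relations against the allowed neutrino oscillation parameters."""
    return abs(np.sin(delta) * np.cos(rho) - np.sin(sigma)) < 1e-3

def scan_chunk(delta_chunk):
    """Scan one chunk of the delta range against the full inner loops."""
    hits = []
    for delta in delta_chunk:
        for rho in np.linspace(0.0, np.pi, 100):
            for sigma in np.linspace(0.0, np.pi, 100):
                if is_acceptable(delta, rho, sigma):
                    hits.append((delta, rho, sigma))
    return hits

if __name__ == "__main__":
    deltas = np.linspace(0.0, 2.0 * np.pi, 80)
    chunks = np.array_split(deltas, 8)     # one chunk per worker / grid job
    with ProcessPoolExecutor(max_workers=8) as pool:
        accepted = [pt for chunk_hits in pool.map(scan_chunk, chunks)
                    for pt in chunk_hits]
    print("accepted parameter points:", len(accepted))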

35
Neutrino Model with one vanishing minor
  • We obtain acceptable points (here, ρ and σ) for
    fixed acceptable choices of the given parameters
    (here δ). However, we need to span δ over its
    whole admissible range.

36
Table of Contents
  • Basics
  • Need for Powerful Computing in High Energy
    Physics
  • Art of Cosmological Simulations
  • The LHC Grid
  • Neutrino Parameters: a model with many free
    parameters

37
Discussion