ATLAS Software Installation at UIUC

1
ATLAS Software Installation at UIUC
"Mutability is immutable." - Heraclitus, ~400 B.C.E.

D. Errede, M. Neubauer
  • Goals:
  • 1) ability to analyze data locally (with constraints)
  • 2) ability to connect to other machines around the world
  • 3) ability to contribute to the detector commissioning and calibration locally
  • 4) ability to store reasonable quantities of data for ease of analysis
  • Status:
  • 1) not completely accomplished
  • 2) not completely accomplished
  • 3) presently done at CERN (Irene Vichou); effort to make contributions locally (see S. Errede's talk)
  • 4) discussion with the local UIUC computing center on access to available disk space?

2
  • Ability to analyze data locally
  • Accomplished:
  • installation of ATLAS release software locally, and also installed per machine (restrictions on NFS-type installation; reduces disk-space usage)
  • ROOT software for ntuple/tree analysis exists. We do not expect to be able to generate large data sets locally, of complete Monte Carlo data for instance, because of the well-known restrictions on generation time. Perhaps fast MC data can be generated locally (?)
  • installation of the Scientific Linux 3 and 4 platforms on several local Linux machines
  • installation of the Open Science Grid User Interface, allowing access to large farms (as clients, not as a Computing Element). Software installed and simple Condor submission tested; however, not all local idiosyncrasies are understood yet. Installed as a user, not yet globally for the group
  • installation of dq2 software to access data files from the Open Science Grid (a sketch of a simple dq2 + Condor test follows this list)
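
As an illustration of the simple test described in the list above, here is a minimal sketch (not part of the original slides) driven from Python. The dataset name, executable, and submit-file contents are hypothetical placeholders; it assumes the dq2 client and the OSG/Condor tools installed above provide dq2-get and condor_submit on the user's PATH.

    import subprocess
    import textwrap

    # Hypothetical sketch only: the dataset name, executable, and file names
    # are placeholders, and dq2-get / condor_submit are assumed to be on the
    # PATH from the dq2 and OSG client installations described above.

    dataset = "user.example.testdataset"  # placeholder dataset name

    # Copy the dataset files to the local disk with the dq2 client.
    subprocess.run(["dq2-get", dataset], check=True)

    # A minimal Condor submit description for a single vanilla-universe job.
    submit_description = textwrap.dedent("""\
        universe   = vanilla
        executable = run_analysis.sh
        output     = job.out
        error      = job.err
        log        = job.log
        queue
    """)
    with open("job.sub", "w") as fh:
        fh.write(submit_description)

    # Hand the job to the local Condor scheduler via the OSG client tools.
    subprocess.run(["condor_submit", "job.sub"], check=True)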

3
  • Ability to analyze data locally
  • 1) Not accomplished:
  • installation of the OSG UI centrally for all users
  • Ability to connect to other machines around the world
  • 2) Accomplished:
  • security issues related to logging into CERN lxplus machines (LCG access, dq2 working, ROOT is prohibitively slow), SLAC Linux machines, BNL Linux machines
  • OSG UI installation allowing access to large OSG farms; tested in a simple way but not fully tested
  • Ability to contribute to the detector commissioning and calibration locally
  • 3) See Steve Errede's talk. Presumably dealing with ROOT ntuples is the primary requirement here. ROOT exists and is usable (a short ROOT example follows this list)
  • Ability to store reasonable quantities of data for ease of analysis
  • 4) Discussion with CITES and NCSA on access to some fraction of a petabyte of disk. NCSA has resources that perhaps we can tap into in the future. We already have a 10 Gbit/s connection to CITES and then to Chicago's hub
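
Since dealing with ROOT ntuples is named above as the primary requirement for local commissioning and calibration work, here is a minimal sketch (not part of the original slides) of reading an ntuple with ROOT from Python. The file name, tree name, and branch name are hypothetical placeholders.

    import ROOT

    # Minimal sketch of local ntuple/tree analysis with ROOT (via PyROOT).
    # "ntuple.root", "CollectionTree", and the branch name are placeholders.
    f = ROOT.TFile.Open("ntuple.root")
    tree = f.Get("CollectionTree")

    # Book a histogram and fill it from one (assumed) branch of the tree.
    h = ROOT.TH1F("h_pt", "Muon pT;pT [GeV];entries", 100, 0.0, 200.0)
    for event in tree:
        h.Fill(event.muon_pt)  # branch name is an assumption

    # Save the result so it can be inspected later.
    out = ROOT.TFile("hist.root", "RECREATE")
    h.Write()
    out.Close()
    f.Close()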

4
ARE WE DONE? ARE WE CLOSE TO BEING DONE?

NO. We will be establishing a systematic analysis effort locally. (See Mark Neubauer's talk.)

The HECA State of Illinois grant (D. Errede, P.I.) contributed over $200k toward computing equipment, etc., from 2000-2004, which will be used by the ATLAS group here, though in limited fashion due to memory constraints: 16 dual-processor boxes for muon collaboration use.

Next project: setting up 3 Linux machines locally (OSG00, 01, X0) with SL3/4 platforms on which to install ATLAS software for testing. The software changes and is updated constantly, hence we will be starting with a clean slate on which to work. We know that some software doesn't work on top of other software, for instance. Important to this effort will be the authority to use root privileges on our machines. The addition of Mark Neubauer to this effort is a tremendous contribution, given his background installing the Condor batch queue system at CDF and hence his knowledge of other required languages such as Python, as can be seen from his good idea for the next step/project.

Overall comment: because of the evolving nature of the ATLAS software, what is easy at the present time would have been quite difficult earlier on.
5
Toward a Tier-3 @ UIUC: The Problem
  • The scale of computing requirements for the LHC experiments is unprecedented in HEP's history
  • ATLAS: ~3x10^3 collaborators spread across the globe
  • 3 PB/year RAW, 1 PB/year ESD
  • Much progress has been made on the pieces necessary to get ATLAS physics done on globally distributed computing (GRID)
  • Many complex pieces to handle authentication, VOs, job submission/handling/monitoring, data handling, etc.
  • The system needs to be exercised from the perspective of a physicist just trying to get their physics done (i.e. the end user)
  • Convenient and flexible interface to authentication, dataset creation/consumption, job submission, monitoring, output retrieval (a rough sketch of this end-user chain follows this list)
  • Support and reliability of service, deserving of a production system
  • These goals have not yet been fully achieved in ATLAS but need to be for the experiment to be successful!
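
As a rough, hypothetical sketch of the end-user chain listed above (authentication, dataset consumption, job submission, monitoring), the following Python fragment strings the pieces together. The command names and arguments are assumptions based on standard grid and Condor client tools; the dataset and submit-file names are placeholders.

    import subprocess

    # Hypothetical sketch of the end-user chain; command names and arguments
    # are assumptions, dataset and submit-file names are placeholders.

    # 1) Authentication: obtain a grid proxy for the ATLAS VO.
    subprocess.run(["voms-proxy-init", "-voms", "atlas"], check=True)

    # 2) Dataset consumption: look up a dataset with the dq2 client.
    subprocess.run(["dq2-ls", "user.example.dataset"], check=True)

    # 3) Job submission: hand a prepared submit description to Condor.
    subprocess.run(["condor_submit", "analysis.sub"], check=True)

    # 4) Monitoring: check the state of the submitted job(s).
    subprocess.run(["condor_q"], check=True)

    # 5) Output retrieval would follow once the job completes
    #    (e.g. collecting job.out and any output files it produced).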

6
Toward a Tier-3 @ UIUC: A Solution?
  • Deploy a large computing cluster configured as a Tier-3 and operated as a model Tier-3 in terms of reliability and utilization for doing physics at a home institution
  • Q: How large is "large"? A: Large enough for the ATLAS computing organization to pay attention and for UIUC to get our physics done. Could be as large as most Tier-2 sites
  • Why do this at UIUC?
  • We have an enormous amount of high-end computing infrastructure and technical expertise to pull this off!
  • Infrastructure includes HEP MRL computing, NCSA, and the recent availability of a 10 Gbit/s network pipe to Chicago (ATLAS Tier-2 Center)
  • Technical expertise includes the HEP group, networking experts, and new addition Mark Neubauer
  • Neubauer: at MIT, then UCSD with F. Wurthwein
  • led the complete redesign of CDF analysis computing -> the CDF Analysis Facility (CAF)
  • CAF Project Leader (2001-2004)
  • Involved in the subsequent migration to Condor and utilization of offsite computing (one of, if not the, first operating grids in HEP)
  • Goals:
  • Drive ATLAS into a successful computing model from the analysis end
  • Get our physics done, and strengthen our collaborations (e.g. FTK)

7
Toward a Tier-3 @ UIUC: A Prototype
  • Have recently assembled a prototype system to begin work on an ATLAS Tier-3 @ UIUC (many thanks to Dave Lesny)
  • 3 dual-processor Xeon boxes with 2 GB of memory, coming out of existing equipment

8
Toward a Tier-3 @ UIUC
[Pictures: FNAL CAF / 9 DCAFs (now); Proto-CAF (Oct 2001); Proto-Tier-3 @ UIUC (picture to be inserted)]
9
Toward a Tier-3 @ UIUC
"The time to repair the roof is when the sun is shining." - JFK