Title: Ivanov V.V.
1 Grid computing for CBM at JINR/Dubna
- Ivanov V.V.
- Laboratory of Information Technologies, Joint Institute for Nuclear Research, Dubna, Russia
- CBM Collaboration meeting, GSI, Darmstadt, 9-12 March 2005
2 Main directions of this activity include
- Integration and shared use of informational and computational resources, distributed databases, electronic libraries. Realisation of the Dubna-Grid project.
- Use of modern systems for storing and processing large-scale arrays of model and experimental data.
- Possibility of remote participation of experts of the JINR Member States at the basic facilities of JINR.
- Joint work on managing corporate networks, including the problems of control, analysis, and protection of networks, servers, and information.
- Joint mastering and application of Grid technologies for physical experiments, and participation in the creation of national Grid segments.
- Joint work on the creation of distributed supercomputer applications.
3 JINR telecommunication links
4 JINR Gigabit Ethernet infrastructure (2003-2004)
6 Star-like logical topology of the JINR Gigabit Ethernet backbone, with Cisco Catalyst 6509 and Catalyst 3550 switches at the centre of the core, Cisco Catalyst 3550 switches in 7 JINR divisions (6 Laboratories and the JINR Administration), and a Cisco Catalyst 3750 switch in LIT.
7 In 2004 the network of the Laboratory of Information Technologies remained part of the JINR backbone, while the other 7 JINR divisions were isolated from the backbone behind their Catalyst 3550 switches. Access is controlled (Cisco PIX-525 firewall) at the entrance of the network.
8 Characteristics of the network
- High-speed transport structure (1000 Mbit/s)
- Security: controlled access (Cisco PIX-525 firewall) at the entrance of the network
- Partially isolated local traffic (6 divisions have their own subnetworks with a Cisco Catalyst 3550 as gateway)
9 Network monitoring: incoming and outgoing traffic distribution
- Total for 2004, incoming: 36.1 Tb
- Total for 2004, outgoing: 43.64 Tb
11 JINR Central Information and Computing Complex
12 Russian regional centre: the DataGrid cloud
[Diagram: LCG Tier1/Tier2 cloud (CERN, FZK); Grid access at Gbit/s to the RRC-LHC Tier2 cluster; regional connectivity cloud backbone at Gbit/s, with 100-1000 Mbit/s links to labs and collaborative centers]
13 LCG Grid Operations Centre: LCG-2 job submission monitoring map
14 LHC Computing Grid Project (LCG)
- LCG deployment and operation
- LCG Testsuite
- CASTOR
- LCG AA: Genser/MCDB
- ARDA
15 Main results of the LCG project
- Development of the G2G (GoToGrid) system to support the installation and debugging of LCG sites.
- Participation in the development of the CASTOR system: elaboration of a subservient module that will serve as a garbage collector.
- Development of the database structure, creation of a set of base modules, and development of a web interface for creating/adding articles (descriptions of files with events and related objects) to the database: http://mcdb.cern.ch
- Testing the reliability of data transfer over the GridFTP protocol implemented in the Globus Toolkit 3.0 package (see the sketch after this list).
- Testing the EGEE middleware (gLite) components: the Metadata and Fireman catalogs.
- Development of code for constant WMS (Workload Management System) monitoring of the INFN site gundam.chaf.infn.it in the testbed of the new EGEE middleware, gLite.
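The slides do not include the test code itself; the following is a minimal sketch of such a transfer-reliability test, assuming the standard globus-url-copy client from the Globus Toolkit is on the PATH. The endpoint URLs and trial count are invented for illustration.

```python
import subprocess

# Hypothetical endpoints; the actual hosts used in the tests are not
# named in the slides.
SRC = "gsiftp://se.example.jinr.ru/data/sample.root"
DST = "file:///tmp/sample.root"

def transfer_once(timeout_s=300):
    """Run one GridFTP transfer via the Globus Toolkit CLI client."""
    result = subprocess.run(["globus-url-copy", SRC, DST], timeout=timeout_s)
    return result.returncode == 0

def success_rate(n_trials=10):
    """Fraction of transfers that completed without error."""
    return sum(transfer_once() for _ in range(n_trials)) / n_trials

if __name__ == "__main__":
    print(f"GridFTP transfer success rate: {success_rate():.0%}")
```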
16 LCG AA: Genser/MCDB
- Correct Monte Carlo simulation of complicated processes requires rather sophisticated expertise.
- Different physics groups often need the same MC samples.
- Public availability of the event files speeds up their validation.
- A central, public location where well-documented event files can be found.
- The goal of MCDB is to improve the communication between Monte Carlo experts and end-users.
17 Main features of LCG MCDB
- The most important reason to develop LCG MCDB is to lift the restrictions of the CMS MCDB.
- An SQL-based database.
- Wide search abilities (a hypothetical query sketch follows this list).
- Possibility to keep events at the particle level as well as at the partonic level.
- Support for large event files (stored in CASTOR at CERN).
- Direct programming interface from LCG collaboration software.
- Inheritance of all the advantages of the predecessor, CMS MCDB.
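The MCDB schema is not described in these slides; the sketch below is a purely hypothetical illustration of what an SQL search over an event-file catalogue could look like. All table and column names are invented, and SQLite stands in for the real database server.

```python
import sqlite3

# Invented schema: articles describe MC samples, event_files point to
# the (CASTOR-resident) files themselves. Not the real LCG MCDB layout.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE articles (
        id INTEGER PRIMARY KEY,
        title TEXT, generator TEXT, process TEXT,
        level TEXT  -- 'particle' or 'parton'
    );
    CREATE TABLE event_files (
        article_id INTEGER REFERENCES articles(id),
        castor_path TEXT, n_events INTEGER
    );
""")
con.execute("INSERT INTO articles VALUES (1, 'ttbar sample', 'PYTHIA', 'pp -> ttbar', 'particle')")
con.execute("INSERT INTO event_files VALUES (1, '/castor/cern.ch/mcdb/ttbar.ev', 100000)")

# A 'wide search': find event files by generator and process keyword.
rows = con.execute("""
    SELECT a.title, f.castor_path, f.n_events
    FROM articles a JOIN event_files f ON f.article_id = a.id
    WHERE a.generator = ? AND a.process LIKE ?
""", ("PYTHIA", "%ttbar%")).fetchall()
print(rows)
```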
18 MCDB Web Interface
http://mcdb.cern.ch
Only the Mozilla browser is supported (for the time being).
19 High Energy Physics Web at LIT
Idea: create a server with web access to the computing resources of LIT for Monte Carlo simulations, mathematical support, etc.
Goals:
- Provide physicists with informational and mathematical support
- Monte Carlo simulations at the server (a hypothetical request sketch follows below)
- Provide physicists with new calculation/simulation tools
- Create a copy of GENSER of the LHC Computing Grid project
- Introduce young physicists into the HEP world
HIJING Web Interface: HepWeb.jinr.ru will include FRITIOF, HIJING, the Glauber approximation, the Reggeon approximation, ...
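How such a web front-end to server-side simulations is invoked is not specified in the slides; below is a minimal hypothetical sketch of a client call. The endpoint path and every form parameter are invented for illustration only.

```python
import urllib.parse
import urllib.request

# Invented endpoint and parameter names; the real HepWeb interface may
# differ entirely.
params = urllib.parse.urlencode({
    "generator": "HIJING",
    "projectile": "Au",
    "target": "Au",
    "sqrt_s_nn_gev": 200,
    "n_events": 100,
}).encode()

# POST the form and show the start of the generated output.
with urllib.request.urlopen("http://hepweb.jinr.ru/run", params) as resp:
    print(resp.read().decode()[:200])
```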
20 Fixed bug in the HIJING Monte Carlo model: the fix ensures energy conservation. V.V. Uzhinsky (LIT)
21 G2G (GoToGrid)
- G2G is a web-based tool to support the generic installation and configuration of (LCG) grid middleware
- The server runs at CERN
- Relevant site-dependent configuration information is stored in a database
- It provides added-value tools, configuration files and documentation to install a site manually (or by a third-party fabric management tool); a hypothetical sketch of the idea follows this list
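The concrete G2G data model and file formats are not given in the slides; the sketch below only illustrates the general idea of rendering middleware configuration files from centrally stored, site-dependent values. All keys and host names are invented.

```python
# Invented site profile; in G2G such values live in a central database
# at CERN rather than in a local dict.
site_profile = {
    "SITE_NAME": "JINR-LCG2",
    "CE_HOST": "ce.example.jinr.ru",
    "SE_HOST": "se.example.jinr.ru",
    "BDII_HOST": "bdii.example.cern.ch",
}

def render_config(profile: dict) -> str:
    """Render a simple KEY=value configuration file from a site profile."""
    return "\n".join(f"{key}={value}" for key, value in profile.items()) + "\n"

# Write the generated file that a manual (or fabric-managed) install
# would then consume.
with open("site.conf", "w") as f:
    f.write(render_config(site_profile))
```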
22 G2G features are thought to be useful for ALL sites
- First-level assistance and hints (Grid Assistant)
- Site profile editing tool
  - for small sites
- Customized tools to make manual installation easier
  - for large sites
- Documentation to configure fabric management tools
  - and for us (support sites)
- Centralized repository to query for site configuration
23 Deployment strategy
- Current LCG release (LCG-2_2_0)
- Next LCG release
24 EGEE (Enabling Grids for E-sciencE)
Participation in the EGEE project together with 7 Russian scientific centres: creation of an infrastructure for applying Grid technologies on a petabyte scale. The JINR group activity includes the following main directions:
- SA1: European Grid Operations, Support and Management
- NA2: Dissemination and Outreach
- NA3: User Training and Induction
- NA4: Application Identification and Support
25 Russian Data Intensive GRID (RDIG) Consortium: EGEE Federation
Eight institutes made up the RDIG (Russian Data Intensive GRID) consortium as a national federation in the EGEE project:
- IHEP - Institute of High Energy Physics (Protvino)
- IMPB RAS - Institute of Mathematical Problems in Biology (Pushchino)
- ITEP - Institute of Theoretical and Experimental Physics (Moscow)
- JINR - Joint Institute for Nuclear Research (Dubna)
- KIAM RAS - Keldysh Institute of Applied Mathematics (Moscow)
- PNPI - Petersburg Nuclear Physics Institute (Gatchina)
- RRC KI - Russian Research Centre Kurchatov Institute (Moscow)
- SINP MSU - Skobeltsyn Institute of Nuclear Physics (MSU, Moscow)
26 LCG/EGEE Infrastructure
- An LCG/EGEE infrastructure comprising managing servers and 10 two-processor computing nodes has been created.
- Software for the CMS, ATLAS, ALICE and LHCb experiments has been installed and tested.
- Participation in mass simulation sessions for these experiments.
- A server based on the MonALISA system has been installed for monitoring Russian LCG sites.
- Research on the possibilities of other monitoring systems (GridICE, MapCenter).
27 Participation in DC04
Production in the framework of the Data Challenges was accomplished at local JINR LHC and LCG-2 farms.
- CMS: 150,000 events (350 GB); 0.5 TB of B-physics data was downloaded to the CCIC for analysis; the JINR contribution to CMS DC04 was at a level of 0.3%.
- ALICE: the JINR contribution to ALICE DC04 was at a level of 1.4% of the total number of successfully completed AliEn jobs.
- LHCb: the JINR contribution to LHCb DC04 was 0.5%.
29 Dubna educational and scientific network: Dubna-Grid Project (2004)
Participants: Laboratory of Information Technologies (JINR); University "Dubna"; Directorate of the programme for the development of the science city Dubna; University of Chicago, USA; University of Lund, Sweden.
Creation of a Grid testbed on the basis of the resources of Dubna scientific and educational establishments, in particular JINR Laboratories, the International University "Dubna", secondary schools and other organizations.
More than 1000 CPUs
30 City high-speed network
The 1 Gbps city high-speed network was built on the basis of a single-mode fiber-optic cable with a total length of almost 50 km. The total number of networked computers in the educational organizations exceeds 500, all easily administered.
31 Network of the University "Dubna"
The computer network of the University "Dubna" connects, via a backbone fiber-optic highway, the computer networks of the buildings housing the university complex. Three server centres maintain the applications and services of computer classes, departments and university subdivisions, as well as the computer classes of secondary schools. The total number of PCs exceeds 500.
32 Concluding remarks
- The JINR/Dubna Grid segment and personnel are well prepared to be effectively involved in the MC simulation and data analysis activity of the CBM experiment.
- A working group will prepare a proposal on a common JINR-GSI-Bergen Grid activity for the CBM experiment.
- The proposal will be presented at the CBM Collaboration meeting in September.