1 CMS Detector DB project
- CD Briefing
- May 12, 2004
- Lee Lueking
2 Outline
- Overview
- Short-term goals and achievements (March-July 2004)
- DB Applications for HCAL test beam
- DB Operations for HCAL test beam
- Medium-term objectives (July-December 2004)
- Long term objectives and uncertainties (05-06)
3 USCMS DetDB Mission
- Collect use cases, establish the requirements, and design and build a functioning database system for use in the 2004 HCAL testbeam operation.
- Extend the experience gained in the HCAL project to additional Fermilab detector interests, including EMU and PIXEL.
- Use the testbeam experience as a prototype for the full-scale CMS detector database project needed for full-scale testing in 2006 and operation in 2007.
- Establish close relationships with the CMS and LCG teams, and with IT at CERN, to understand the broader database landscape being established for the LHC. Become involved in the planning, and carefully define the Fermilab contributions.
4 Project Milestones
- May 10, 2004: HCAL testbeam at CERN. The testbeam will run through the first week of October.
- July 2004: Extend the HCAL DetDB to the EMU detector, also in the test beam.
- September 22, 2004: Slice test for the testbeam including the HCAL and EMU detectors. Will use COBRA/ORCA (CMS framework and reconstruction). Testbeam ends mid-October.
- November 2004: CMS Analysis Challenge will attempt to do analysis in the US, with Detector DB information available at FNAL.
- November 2004: Computing TDR is due; it may have database elements included. The baseline is to have Tier 2 centers producing MC data.
- December 2004: Provide a construction database for the PIXEL detector.
- Early 2005: Begin building database applications for the HCAL, EMU, and PIXEL detectors to be used in the production experiment, both online and offline.
- Late 2005: Contribute to the TDR for Regional Centers.
- Early 2006: Full-scale testing of database applications.
- 2007: Detector operation.
(Timeline graphic: the milestones above are grouped into Phase 1, Phase 2, and Phase 3.)
5 HCAL Testbeam DB
- Test beam details
- HCAL Testbeam description
- Schedule
- Use cases
- Design
- Implementation
CMS DetDB Phase 1: March-July 2004
Details at http://lynx.fnal.gov/uscms-db-wiki/Draft_20Project_20Document
6 CMS HCAL Testbeam @ CERN
7 DetDB Project History
Date | Goal | Comments
March 12, 2004 | Use Cases | Requirements completed
March 19, 2004 | Preliminary Schema & API designs discussed | Need DB browsing, client API, PVSS and ROOT interfaces
April 16, 2004 | Evaluate loading and access tools | SQL*Loader, Tora, ROOT. Other options discussed
April 23, 2004 | Review Equipment DB | Use SQL*Loader for last year's TB data. Schemas in Oracle Designer. Reviews by Anil Kumar, Taka Yasuda
May 7, 2004 | Review Conditions DB |
May 14, 2004 | Review Configuration DB | Limited Test Beam operations planned
8 Categories of DB Information (Glossary)
- Construction
  - Test results for each detector component
  - Details of detector construction
- Equipment
  - Inventory of all detector components
  - Details of channel mapping and electronics modules
- Configuration
  - Download constants for front-end electronics
  - Includes HV, LV, pedestals, etc.
- Conditions
  - Measured values for HV, LV, temps
  - Beam positions
  - Offline pedestals and gains
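The categories above can be sketched as a toy relational schema. This is purely illustrative: all table and column names are assumptions for this sketch, not the actual testbeam schema (which was designed in Oracle Designer); Python's sqlite3 stands in for Oracle.

```python
import sqlite3

# Toy schema for three of the glossary categories; names are invented.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Equipment: inventory of detector components and channel mapping.
cur.execute("""CREATE TABLE equipment (
    component_id   INTEGER PRIMARY KEY,
    component_type TEXT,     -- e.g. PMT, HPD, megatile
    channel        INTEGER)""")

# Configuration: constants downloaded to the front-end electronics.
cur.execute("""CREATE TABLE configuration (
    channel  INTEGER,
    hv       REAL,           -- high-voltage setting
    lv       REAL,           -- low-voltage setting
    pedestal REAL)""")

# Conditions: measured values, tagged with a validity interval.
cur.execute("""CREATE TABLE conditions (
    channel    INTEGER,
    quantity   TEXT,         -- e.g. 'HV', 'temperature', 'gain'
    value      REAL,
    valid_from TEXT,
    valid_to   TEXT)""")

cur.execute("INSERT INTO equipment VALUES (1, 'PMT', 101)")
cur.execute("INSERT INTO conditions VALUES "
            "(101, 'HV', 1250.0, '2004-05-10', '2004-05-11')")

# Typical access pattern: measured conditions joined to equipment by channel.
row = cur.execute(
    "SELECT e.component_type, c.value FROM equipment e "
    "JOIN conditions c ON e.channel = c.channel "
    "WHERE c.quantity = 'HV'").fetchone()
print(row)  # ('PMT', 1250.0)
```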
9 Applications for HCAL Test Beam
10 Work List
- Schema design and review
- Loading the database
- Accessing the database
- Deployment
- Testing and Infrastructure
Detailed work list: http://lynx.fnal.gov/uscms-db-wiki/Work_20List
11 DetDB Server Machines
- Purchasing two Dell PowerEdge 2650 servers, one for FNAL and one for the CERN test beam.
- The FNAL machine's OS and Oracle will be managed by CSS/DSG. Will run Prod and Int instances.
- The CERN machine's OS and Oracle will be managed by a designated CERN person (Frank Glege). Will run Prod and Int instances.
- Will install Red Hat Enterprise Server OS and Oracle 10g.
- Plan to use Oracle Streams replication, one-way from CERN to FNAL.
- Two development servers will be in place at FNAL to test all configurations and replication scripts. Will use the existing machine Bager and a new machine.
- Details of the agreement for support by DSG are available on the Wiki.
12 Manpower
- FNAL (March-July)
  - PPD/CMS: Shuichi Kunori (UMD, 0.2), Taka Yasuda (0.2), Stefan Piperov (0.8→0.5), Jordan Damgov (0.8→0.5), new hire in June (→1.0)
  - CD/CEPA/DBS: Lee Lueking (0.3), Yuyi Guo (0.8)
  - CD/CSS: Anil Kumar (0.2), Maurine Mihalek (0.0→0.2)
  - Total CD: 1.4 FTE
- CERN
  - USCMS: Serguei Sergueev (HCAL DCS), Michael Case (CAS)
  - CMS: Frank Glege (coordinator of CMS DB project)
  - IT: Andrea Valassi (LCG conditions DB coordinator)
13 Medium-term Objectives
- Additional CERN-to-FNAL DB replication work. Possibility for Tier 0 to Tier 1 transfer.
- Add EMU to the testbeam. Use the HCAL experience to provide an EMU DetDB system.
- Integrate with COBRA/ORCA (CMS framework and reconstruction package).
- Participate in the November Physics Challenge.
- PIXEL construction DB.
CMS DetDB Phase 2: July-December 2004
14 CMS Detectors w/ FNAL Responsibility
Detector | Construction | Equipment | Config. | Conditions | Comments
HCAL | PMT DB in MySQL, HPD in ASCII, megatiles in ASCII | Inventory DB at CERN | Download constants to front-end electronics | Measured values for Slow Controls, peds, and gains | 10k channels, relatively stable calibration
EMU | Existing hardware DB at CERN | Similar to HCAL | Similar to HCAL | Similar to HCAL | 100k channels
PIXEL | Use Cases and Requirements being assembled | TBD | TBD | 2-2.4 GB per calibration set; possibly multiple sets daily | 70M channels
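A quick back-of-envelope check of the PIXEL numbers quoted above (2-2.4 GB per calibration set, 70M channels), assuming for illustration three calibration sets per day:

```python
# Rough arithmetic on the PIXEL figures; the sets-per-day value is an
# assumption ("possibly multiple sets daily"), not a quoted number.
channels = 70_000_000             # 70M channels
set_size_bytes = 2.4e9            # upper end: 2.4 GB per calibration set

bytes_per_channel = set_size_bytes / channels
print(round(bytes_per_channel, 1))   # ~34.3 bytes per channel

sets_per_day = 3                  # illustrative assumption
daily_volume_gb = sets_per_day * set_size_bytes / 1e9
print(daily_volume_gb)            # 7.2 GB/day to store (and potentially replicate)
```

Even at the low end, this is orders of magnitude more calibration data than the HCAL case, which is why PIXEL appears separately in the planning.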
15 Replication
- Significant work has been done at FNAL on Oracle v9 Streams replication. There are still many problems with throughput and stability.
- Oracle 10g promises to be much better, but needs to be thoroughly tested and understood.
- A project to test this in cooperation with CERN will help in understanding the issues and feasibility.
- Possibly, this could be tied to the November Physics Challenge.
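For orientation, the capture/propagate/apply pipeline that Streams implements can be illustrated with a toy one-way (CERN to FNAL) replication sketch. This is conceptual only: Oracle Streams works on the database redo logs and is configured inside the database, not in application code, and all names below are invented.

```python
from collections import deque

source = {}            # master copy (stands in for the CERN database)
replica = {}           # read-only copy (stands in for FNAL)
change_log = deque()   # captured changes awaiting propagation

def write_source(key, value):
    """Every write at the source is also captured in the change log."""
    source[key] = value
    change_log.append((key, value))

def propagate():
    """Apply queued changes, in order, to the replica."""
    while change_log:
        key, value = change_log.popleft()
        replica[key] = value

write_source("hv_channel_101", 1250.0)
write_source("pedestal_101", 3.2)
propagate()
print(replica == source)  # True: replica has caught up with the source
```

The throughput and stability problems noted above arise in the real capture and apply stages, which this sketch deliberately omits.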
16 DATABASE/SYSTEM SUPPORT FOR CMS DATABASE SERVERS
DRAFT 03/04/2004
System and Oracle Support
- Database Support
- System Support
- Levels of Support
- Shifts
- System levels: Production, Integration, Development
- Proposal is at http://www-css.fnal.gov/dsg/internal/cms/CMS/cms_db_n_system_support_proposal.html
17 (CERN) Oracle Contract
From Jamie Shires' March presentation
- Previously based on active users, audited via CERN tools
- Highly non-standard, obsolete list of products and machines
- Maintenance costs were growing at 15% compound
- New contract based on named users (standard Oracle license)
- Platform independent
- Location independent
- iAS licenses dramatically increased
- Maintenance costs reduced and fixed for 5 years
- Extended to all CERN staff users (HR numbers)
- S/W can be installed and run at collaborating institutes for CERN work
- Double-edged sword: support issues for outside use are a big concern
18 Oracle Licensing
From Jamie Shires' May 10 e-mail message
- You do not need to register users. If you wish to download Oracle from CERN, then at least one person will have to sign a couple of forms, essentially exactly as was done before for Objectivity. See http://wwwdb.web.cern.ch/wwwdb/grid-data-management-deployment.html for more details.
- We do not give out a CSI. We have to finalise the 'support agreement' - support is the big question. Essentially, any problem would have to be reproduced at CERN. We expect larger sites to (already?) run their own Oracle services and hence have their own support group / channels. I suspect this is true for FNAL?
- We would like to stay in contact with you and learn more about your applications / needs. For the machine at CERN, would you expect us to play a role in its configuration / management? Thanks a lot, --- Jamie
19 (CERN) Oracle Distribution
From Jamie Shires' March presentation
- Users (sites) must register (group OR) and sign a contract addendum
- Oracle DB and iAS packaged and redistributed as the basis of the file (metadata) catalog for LCG
- Well-defined application, well shielded from users
- Defined versions of Oracle components, single supported platform (RHEL)
- Tools / kits now used within the IT-DB group for CERN services
- Also at a few Tier 1 sites (CNAF, FZK, RAL, ...)
- First non-LCG customer: COMPASS
  - Offload a significant(?) fraction of their production / analysis
  - Requires local Oracle expertise
  - Bulk data distribution a few times per year via transportable tablespaces
- Requests from other groups are in the queue, proceeding at a rate we can support without impacting CERN production services
- Not targeting general-purpose DB services outside (Yet? Ever?)
20 Long-Term Possibilities/Uncertainties
- We hope that the DBs built for the testbeam will be prototypes on which to build production DB applications for the CMS online and offline DBs.
- LCG has existing efforts which are discussing plans for databases.
  - POOL has plans to provide a DB interface.
  - The CondDB project is active: http://lcgapp.cern.ch/project/CondDB/
- The FNAL FroNtier approach has been offered for large-scale DB access.
CMS DetDB Phase 3: 2005-2007
21 Possible Goal
- Conditions and other DB info needed for offline copied to Tier 0.
- DB info needed for offline analysis replicated to Tier 1.
- Lightweight caching scheme (like FroNtier) used for access by Tier 2 sites.
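The "lightweight caching" idea can be illustrated with a minimal read-through cache. FroNtier itself serves database query results over HTTP through web proxy caches; this toy sketch shows only the read-through pattern, and all names in it are illustrative.

```python
central_db = {"hcal_gain_101": 1.02}   # stands in for the Tier 0/1 database
db_hits = 0                            # counts queries reaching the central DB

cache = {}                             # the Tier 2 site's local cache

def lookup(key):
    """Return a value from the local cache, fetching from the central DB on a miss."""
    global db_hits
    if key not in cache:
        db_hits += 1                   # only a cache miss reaches the central DB
        cache[key] = central_db[key]
    return cache[key]

first = lookup("hcal_gain_101")        # miss: goes to the central DB
second = lookup("hcal_gain_101")       # hit: served locally
print(first, second, db_hits)          # 1.02 1.02 1
```

The point for Tier 2 sites is that repeated reads of the same conditions data never re-query the central database, so access scales with the number of distinct queries rather than the number of clients.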
(Diagram: data flow among P5 (Point 5), CERN IT, FNAL, and other sites.)
22 Conclusion
- Work is ongoing to provide an Oracle-based DB system for the HCAL test beam operation at CERN this summer and into the fall.
- Work will continue to apply this experience to EMU in the testbeam in the summer, and to a PIXEL construction DB by the end of 2004.
- There is potential for continued work beyond the test beam to provide DB applications for the CMS detector HCAL, EMU, and PIXEL systems. Development in 2005 and rigorous testing in 2006 will ensure the DB applications are ready for production CMS operations in 2007.
- The current level of manpower from CD is 1.4 FTE. We ask that this level continue at least through the finish of Phase 1. An assessment of manpower needs will then be provided for Phase 2.