Title: AMI Status & Plans
AMI Status & Plans
Jerome Fulachier
ATLAS Software Week, Database session
AMI Status (1)
Introduction of web services in the AMI architecture
- A prototype (client/server) is nearly finished.
- Achieved using JWSDP technology with Axis servlets.
- It is built around a light client that transmits queries to the AMI web service.
- This web service provides the metadata management layer for the offline software.
- It makes the client independent of the query-processing implementation, so the AMI core layer can be modified without redeploying clients.
- It can collaborate with other web services such as SQLTuple (see Julius's presentation).
- Other clients can easily be developed from the AMI web service WSDL file (C, PHP, ...); a minimal Java client is sketched after the diagram below.
[Architecture diagram: on the client side, an AMI WS client reaches the AMI Web Service over HTTP(S)/SOAP, while the AMI servlets are reached over plain HTTP; both front ends sit on the AMI client/core layer, which accesses the AMI DBs.]
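As an illustration of the light-client idea, here is a minimal Java sketch using Axis dynamic invocation. The endpoint URL, the operation name "executeCommand" and the command string are assumptions for illustration only; the real names come from the AMI web service WSDL file.

    import java.net.URL;
    import javax.xml.namespace.QName;
    import org.apache.axis.client.Call;
    import org.apache.axis.client.Service;

    public class AmiWsClientSketch {
        public static void main(String[] args) throws Exception {
            // Hypothetical endpoint and operation; read the real ones from the WSDL.
            URL endpoint = new URL("https://amihost.example.org:8443/AMI/services/AmiWebService");

            Service service = new Service();
            Call call = (Call) service.createCall();
            call.setTargetEndpointAddress(endpoint);
            call.setOperationName(new QName("http://ami.example.org/", "executeCommand"));

            // The light client only forwards the query string; all query
            // processing stays in the AMI core layer on the server side.
            String xmlResult = (String) call.invoke(new Object[] { "ListDatasets -project=DC1" });
            System.out.println(xmlResult);
        }
    }

Because such a client depends only on the WSDL contract, the AMI core layer can change without redeploying it.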
AMI Status (2)
XML introduced for internal information management
- Done to make the integration of web services technology easier.
- Gives clients the possibility to process AMI results in a more consistent and standardized way; a parsing sketch follows.
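A minimal sketch of client-side processing of an XML result, assuming (purely for illustration) a document made of row elements with named field children; the actual AMI result schema may differ.

    import java.io.StringReader;
    import javax.xml.parsers.DocumentBuilderFactory;
    import org.w3c.dom.Document;
    import org.w3c.dom.Element;
    import org.w3c.dom.NodeList;
    import org.xml.sax.InputSource;

    public class AmiXmlResultSketch {
        // "row", "field" and "name" are hypothetical names, not the real AMI schema.
        public static void printRows(String xmlResult) throws Exception {
            Document doc = DocumentBuilderFactory.newInstance()
                    .newDocumentBuilder()
                    .parse(new InputSource(new StringReader(xmlResult)));
            NodeList rows = doc.getElementsByTagName("row");
            for (int i = 0; i < rows.getLength(); i++) {
                NodeList fields = ((Element) rows.item(i)).getElementsByTagName("field");
                for (int j = 0; j < fields.getLength(); j++) {
                    Element f = (Element) fields.item(j);
                    System.out.print(f.getAttribute("name") + "=" + f.getTextContent() + " ");
                }
                System.out.println();
            }
        }
    }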
New entity relation types supported
- AMI supports aggregation and association relations. Both types are treated as 1-to-n. This leads to redundant records when n-to-n relations are described, but input and cascaded deletes are simpler.
- To implement n-to-n relations we have introduced in AMI the concept of bridging entities (intra- and inter-database); see the sketch after the diagram below.
[Diagram: a bridging entity INTER_BRIDGE links AN_ENTITY in AMI database X to KEYWORD in AMI database Y, each side through a 1-to-n relation.]
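A minimal JDBC sketch of the bridging-entity idea, using hypothetical table and column names (not the real AMI schema): each bridge row pairs a record of AN_ENTITY in database X with a KEYWORD in database Y, so each side only ever sees a 1-to-n relation.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;

    public class BridgeEntitySketch {
        public static void main(String[] args) throws Exception {
            // Hypothetical connection parameters.
            Connection c = DriverManager.getConnection(
                    "jdbc:mysql://localhost:3306/amiDatabaseX", "ami", "secret");

            // One row per (entity, keyword) pair: the n-to-n relation becomes
            // two 1-to-n relations, at the price of some redundancy.
            PreparedStatement ps = c.prepareStatement(
                    "INSERT INTO INTER_BRIDGE (entityId, keywordDb, keywordId) VALUES (?, ?, ?)");
            ps.setLong(1, 42L);              // a row of AN_ENTITY in database X
            ps.setString(2, "amiDatabaseY"); // inter-database bridge: target schema
            ps.setLong(3, 7L);               // a row of KEYWORD in database Y
            ps.executeUpdate();
            ps.close();
            c.close();
        }
    }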
AMI Status (3)
Study of how database schema versioning will be managed in AMI
- It is clear that there will be some changes in the schema for DC2.
- Physicists will still wish to refer to data produced in DC1.
- We have the choice between converting DC1 data to the new DC2 schema, or implementing a schema versioning system.
- A document on this topic is available on the AMI web site (author Solveig Albrand); it describes how these changes could be managed in AMI.
Database server
- MySQL server fixed / NFS server (async. → sync.).
- RAID disks and tape backup successfully tested.
Production layer
- Production web interfaces improved (better dataset submission procedure).
- DC1 database coherence checked (dataset affiliation).
- DC1 database status:
    generation       153 datasets /  1,275 partitions
    simulation       439 datasets / 62,184 partitions
    pile-up          167 datasets / 38,065 partitions
    reconstruction   131 datasets / 31,958 partitions
Plans for DC2 (1)
Replace all old AMI clients with new light ones using the AMI web service.
Design new databases for DC2 following the schema evolution plans:
- Reinforce the notion of "physics working group". Instead of attaching just one group to each dataset, we will introduce the "principal" or "initiating" physics working group, and allow a 1-to-n relationship between a dataset and other physics groups.
- Introduce the notion of "physics property", which will allow classification and search in a more sophisticated way.
- Put in place a mechanism to enforce unique logicalDatasetNames across all production projects. A user requesting information about a particular dataset could then be directed to the correct schema using the logicalDatasetName as a key (see the sketch below).
- Remove references to the physical location of the dataset.
- Improve management of dataset provenance (a link to the production system is needed to obtain information on "transformations").
Deploy new AMI databases for the DC2 production processing steps.
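A sketch of the planned uniqueness-and-routing mechanism; the class and method names are invented for illustration.

    import java.util.HashMap;
    import java.util.Map;

    // A registry mapping each unique logicalDatasetName to the
    // production-project schema that describes the dataset.
    public class DatasetNameResolverSketch {
        private final Map<String, String> nameToSchema = new HashMap<String, String>();

        // Enforce uniqueness across all production projects at registration time.
        public void register(String logicalDatasetName, String schema) {
            if (nameToSchema.containsKey(logicalDatasetName)) {
                throw new IllegalArgumentException(
                        "logicalDatasetName already used: " + logicalDatasetName);
            }
            nameToSchema.put(logicalDatasetName, schema);
        }

        // A user query is directed to the correct schema using the name as key.
        public String schemaFor(String logicalDatasetName) {
            return nameToSchema.get(logicalDatasetName);
        }
    }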
Plans for DC2 (2)
Link to the new production replica catalogue
- The replica catalogue used in DC1 was Magda. It is not totally clear what will be needed for DC2, but we will probably link to a replica web service that directs queries to one of two replica catalogue implementations. It will certainly be the DMS web service (see http://mbranco.home.cern.ch/mbranco/cern/dms.html).
Add new functions to the AMI command interface
- Bulk inserts to manage large inserts (see the JDBC sketch after this list).
- Improve the querying syntax.
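A sketch of what a bulk insert could look like on the JDBC side, batching many rows into one round trip and one commit; the table and column names are hypothetical.

    import java.sql.Connection;
    import java.sql.PreparedStatement;

    public class BulkInsertSketch {
        public static void insertPartitions(Connection c, String[] names) throws Exception {
            c.setAutoCommit(false);          // one commit for the whole batch
            PreparedStatement ps = c.prepareStatement(
                    "INSERT INTO partition (logicalFileName) VALUES (?)");
            for (int i = 0; i < names.length; i++) {
                ps.setString(1, names[i]);
                ps.addBatch();               // queue rows instead of one round trip each
            }
            ps.executeBatch();
            c.commit();
            ps.close();
        }
    }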
Integration of POOL event collection metadata
- We will introduce a mechanism to know whether data is contained in POOL, and thus has explicit POOL event collection metadata.
- To keep some backward compatibility with the DC1 semantics, and to accommodate the mix of POOL and non-POOL data likely in DC2, we will continue to use the entity "partition" (perhaps changing its name).
- We will introduce a "collectionType" parameter.
- Athena algorithms working with this schema could be directed to a POOL event collection iterator (using SQLTuple). The Athena algorithm would "see" the dataset/dataset partition as a POOL multi-collection. A branching sketch follows.
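A sketch of how a reader could branch on the planned "collectionType" parameter; all interfaces and names below are invented for illustration and are not POOL or Athena API.

    public class PartitionReaderSketch {
        interface EventIterator { boolean next(); }

        public EventIterator open(String partitionName, String collectionType) {
            if ("POOL".equals(collectionType)) {
                // Data registered in POOL: delegate to a POOL event collection
                // iterator (in practice via SQLTuple), so the algorithm sees
                // the dataset partition as a POOL multi-collection.
                return openPoolCollection(partitionName);
            }
            // DC1-style non-POOL partition: fall back to plain file access.
            return openPlainFiles(partitionName);
        }

        private EventIterator openPoolCollection(String name) { /* ... */ return null; }
        private EventIterator openPlainFiles(String name)     { /* ... */ return null; }
    }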
Forward
AMI Databases
- Define and deploy new AMI databases for the combined Test Beam.
- Migrate AMI databases to a server at CERN (to benefit from IT support).
- Migrate AMI databases from MySQL to Oracle and/or to a Spitfire server.
AMI Software
- Integrate the AMI web service in the Grid using OGSA specifications (ARDA).
- Build a nightly database coherence and status tool (student project).
- Define a transaction model (local and distributed).
- Re-design and implement a new version of Tag Collector based on the AMI architecture.
Questions
- Do we have to fit AMI users to a standard ATLAS users database?
- When do we have to deal with certificates?
AMI Tag Collector (1)
A new Tag Collector: why?
- The main goal is to add value to this product (so that it can be used by other projects).
- Redefine its whole structure and functionality in order to increase its modularity.
- Redefine its interface in order to make information more readable and to provide better navigation.
- Provide new functionality following user requests (e.g. offer the same functionality for branch releases as for other releases).
- Provide a more generic tool, independent of the context it works with (CVS/CMT). A plug-in mechanism would ease implementation for other configuration and versioning tools; an interface sketch follows the diagram below.
[Diagram: the Tag Collector core with interchangeable CVS and CMT plug-ins.]
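A sketch of the plug-in idea: the Tag Collector core would program against an abstract repository interface, with CVS and CMT specifics confined to interchangeable implementations. The interface and method names are invented for illustration.

    // The core only sees this contract; tool specifics live in plug-ins.
    public interface RepositoryPlugin {
        // List the tags available for a package in the underlying repository.
        String[] listTags(String packageName);

        // Check that a tag exists before it is collected into a release.
        boolean tagExists(String packageName, String tag);
    }

    // A CVS implementation would wrap CVS commands; a CMT implementation would
    // read CMT requirements files. Supporting a new tool means a new plug-in.
    class CvsPlugin implements RepositoryPlugin {
        public String[] listTags(String packageName) { return new String[0]; }
        public boolean tagExists(String packageName, String tag) { return false; }
    }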
AMI Tag Collector (2)
A new Tag Collector: how?
- The new Tag Collector will be implemented using Java technology (web services and servlets).
- Its storage system and interfaces will be built on top of the AMI architecture.
- It will reuse the AMI libraries.
- It does not need to be aware of the particular database (DBMS) used; see the connection-factory sketch after the diagram below.
- We have an IT engineer, employed by the CNRS (working period of 2 to 12 months), for this work on the Tag Collector.
- A requirements document has been produced, and a formal ATLAS review will take place in two weeks.
[Architecture diagram: on the client side, a TC WS client reaches the TC Web Service over HTTP(S)/SOAP, while the TC web interface is reached over HTTP; both sit on the TC core, which uses AMI to access the Tag Collector DB.]
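A sketch of the DBMS independence mentioned above, assuming (for illustration) a tagcollector.properties file holding the driver, URL and credentials; swapping MySQL for Oracle then becomes a configuration change, not a code change.

    import java.io.FileInputStream;
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.util.Properties;

    // The Tag Collector core only ever sees a java.sql.Connection.
    public class TcConnectionFactorySketch {
        public static Connection open() throws Exception {
            Properties p = new Properties();
            p.load(new FileInputStream("tagcollector.properties"));  // hypothetical file
            Class.forName(p.getProperty("jdbc.driver"));   // e.g. com.mysql.jdbc.Driver
            return DriverManager.getConnection(
                    p.getProperty("jdbc.url"),             // e.g. jdbc:mysql://host:3306/tc
                    p.getProperty("jdbc.user"),
                    p.getProperty("jdbc.password"));
        }
    }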
AMI Architecture (1)
[Diagram: AMI entities (project, process) with 1-to-n references, accessed through BkkJDBC.]
AMI Architecture (2)
[Diagram: AMI/BkkJDBC plug-ins connect to the AMI DBs in two ways: within the local cluster with login/password on port 3306, or remotely with a certificate over HTTPS (SOAP) on port 8443.]