Title: AMS TIM, CERN, Apr 21, 2004

Slide 1: AMS TIM, CERN, Apr 21, 2004
AMS-02 Computing and Ground Centers
Alexei Klimentov (Alexei.Klimentov@cern.ch)
Slide 2: Outline
- AMS-02 Ground Centers
- AMS Computing at CERN
- MC Production
- Manpower
Slide 3: AMS-02 Ground Support Systems
- Payload Operations Control Center (POCC) at CERN (first 2-3 months in Houston)
  - control room, usual source of commands
  - receives Health Status (HS), monitoring and science data in real-time
  - receives NASA video
  - voice communication with NASA flight operations
- Backup Control Station at JSC (TBD)
- Monitor Station at MIT
  - backup of the control room
  - receives Health Status (HS) and monitoring data in real-time
  - voice communication with NASA flight operations
- Science Operations Center (SOC) at CERN (first 2-3 months in Houston)
  - receives the complete copy of ALL data
  - data processing and science analysis
  - data archiving and distribution to Universities and Laboratories
- Ground Support Computers (GSC) at Marshall Space Flight Center
  - receives data from NASA -> buffer -> retransmit to the Science Center (a store-and-forward sketch follows this list)
- Regional Centers
  - Madrid, MIT, Yale, Bologna, Milan, Karlsruhe, Lyon, Taipei, Nanjing, Shanghai; 19 centers
  - analysis facilities to support geographically close Universities
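The GSC's role in this chain is pure store-and-forward: buffer everything arriving from NASA on local disk, then retransmit it to the Science Center. A minimal sketch of that pattern in Python, assuming invented hostnames, ports, and spool layout (an illustration only, not the actual AMS ground software):

    import socket, pathlib

    BUFFER_DIR = pathlib.Path("gsc_buffer")    # hypothetical spool area
    NASA_PORT = 9800                           # hypothetical inbound port
    SOC_ADDR = ("soc.example.org", 9801)       # hypothetical SOC endpoint

    def receive_and_buffer(name):
        """Accept one inbound connection and spool its stream to disk."""
        BUFFER_DIR.mkdir(exist_ok=True)
        with socket.create_server(("", NASA_PORT)) as srv:
            conn, _ = srv.accept()
            with conn, open(BUFFER_DIR / name, "wb") as spool:
                while chunk := conn.recv(65536):
                    spool.write(chunk)   # buffer first: survive SOC link outages

    def retransmit():
        """Forward every spooled file to the SOC, deleting only on success."""
        for path in sorted(BUFFER_DIR.glob("*")):
            with socket.create_connection(SOC_ADDR) as soc, open(path, "rb") as f:
                soc.sendfile(f)
            path.unlink()                # keep the file if the send failed

Decoupling reception from retransmission is what lets the GSC keep taking data even when the link to the Science Center is down.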
Slide 4: AMS facilities / NASA facilities (diagram; no further text)
Slide 6: AMS-02 Ground Centers. Science Operations Center. Computing Facilities.
(block diagram; labels grouped as far as recoverable)
- Analysis Facilities (Linux cluster): interactive and batch physics analysis; 10-20 dual-processor PCs
- Central Data Services:
  - Shared Tape Servers: tape robots, LTO and DLT tape drives
  - Shared Disk Servers: 25 TB of disk, 6 PC-based servers
- 5 PC servers: batch data processing, interactive physics analysis
- AMS Regional Centers, connected via the CERN/AMS network
Slide 7: AMS Computers at CERN
- Central services (Web, Batch, Database)
- Offline SW repository
- SW development (libraries, compilers, SW tools)
- Computing facilities
- Data storage and archiving (AMS-01, TestBeam, MC)
- MC Production
Slide 8: AMS Computers at CERN
CPU clock, 1998 -> 2004: 7.5x faster (450 MHz -> 3.4 GHz)
Disk capacity, 1998 -> 2004: 14x larger (17 GB -> 250 GB)
- Buy the bulk of the computing power late, to get a better price/performance ratio
- Build the system gradually
Out of warranty
AMS computers loading (weekly graph, 30-minute average): max CPU 98%, average 85%
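The growth factors follow directly from the quoted figures; a quick check (the slide rounds both values):

    cpu_1998, cpu_2004 = 450e6, 3.4e9       # clock, Hz
    disk_1998, disk_2004 = 17, 250          # capacity, GB
    print(f"CPU clock:     x{cpu_2004 / cpu_1998:.1f}")    # ~7.6 (quoted: 7.5)
    print(f"Disk capacity: x{disk_2004 / disk_1998:.1f}")  # ~14.7 (quoted: 14)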
Slide 9: AMS-02 Computing Facilities
Ready and operational; the bulk of the CPU and disk purchasing happens at L-9 months (nine months before launch).
Slide 10: AMS Computing (action items, Jan 2004)
- Action items:
  - improve networking topology
    - gigabit switch will be installed before the end of January
    - Dr. Wu Hua (SEU) started to work on network monitoring
  - dedicated batch, web, and production servers
    - evaluate the market; the order will be placed in Q1 2004
- Closed:
  - switch installed at the beginning of February (AE, AK)
  - server was delivered this week and will be installed in April (AE, AK)
  - network monitoring is in production (Wu Hua; draft note: ams.cern.ch/AMS/Computing/network_monitor.pdf)
Slide 11: AMS Computing. SOC Prototype.
(timeline: MC production from Q2 2004; milestones in Q4 2004 and Q1 2005; CIEMAT, ITEP, and MIT groups)
Slide 12: AMS-02 Ground Centers Prototypes
- Centers' architecture and functions are identified
- The estimation of the needed computing power and the benchmarking for AMS-02 data processing are done (together with V. Choutko; will be re-evaluated based on MC production benchmarks)
- Networking and data transmission issues are studied
- SW and centers prototypes are used for AMS-02 MC mass production
- Data transfer computer is installed and will be in production in May (M. Boschini et al.)
Slide 13: MC Production Y2004A
V. Choutko, A. Eline, A. Klimentov
Slide 14: Year 2004 MC Production
- Started Jan 15, 2004
- Central MC Database
- Distributed MC Production
- Central MC storage and archiving
- Distributed access (under test)
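The slides do not spell out how the Central MC Database and the distributed production sites interact. One plausible minimal reading, sketched in Python with SQLite standing in for the central server (the schema and function names are invented, not the actual AMS book-keeping code):

    import sqlite3

    db = sqlite3.connect("mc_bookkeeping.db")
    db.executescript("""
    CREATE TABLE IF NOT EXISTS jobs  (id INTEGER PRIMARY KEY, dataset TEXT,
                                      status TEXT DEFAULT 'queued', site TEXT);
    CREATE TABLE IF NOT EXISTS files (job_id INTEGER, path TEXT, size_mb REAL);
    """)

    def claim_job(site):
        """Assign the next queued job slice to a remote production site."""
        row = db.execute("SELECT id FROM jobs WHERE status='queued' LIMIT 1").fetchone()
        if row is None:
            return None
        db.execute("UPDATE jobs SET status='running', site=? WHERE id=?",
                   (site, row[0]))
        db.commit()
        return row[0]

    def register_output(job_id, path, size_mb):
        """Record a produced file and mark its job as done."""
        db.execute("INSERT INTO files VALUES (?, ?, ?)", (job_id, path, size_mb))
        db.execute("UPDATE jobs SET status='done' WHERE id=?", (job_id,))
        db.commit()

Central book-keeping of this kind is what makes "distributed access" possible: every produced file is registered in one place, wherever it was generated.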
Slide 15: Y2004 MC Production Centers
Note: MC production is not the primary responsibility of the people participating in it.
Slide 16: MC Production Statistics
97 days, 4139 GB
40% of the MC production is done; it will finish by the end of July
URL: pcamsf0.cern.ch/mm.html
Slide 17: Y2004 MC Production Highlights
- Data are generated at remote sites, transmitted to AMS@CERN, and available for analysis
- Transmission, process communication, and book-keeping programs have been debugged; the same approach will be used for AMS-02 data handling
- 97 days of running (95% stability)
- 15 Universities and Labs
- 4.1 TB of data produced, stored, and archived
- Peak rate 130 GB/day (12 Mbit/sec)
- 547 computers
- Daily CPU equivalent: 173 1-GHz CPUs running for 97 days, 24 h/day
To support MC production we need one more disk server (5 TB) in Q2 and a production farm prototype (10 CPUs) in Q4.
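The rates quoted above are mutually consistent; a quick check, assuming decimal units (1 GB = 8000 Mbit):

    total_gb, days = 4100, 97
    print(f"average rate: {total_gb / days:.0f} GB/day")   # ~42 GB/day
    peak_gb_per_day = 130
    mbit_per_s = peak_gb_per_day * 8000 / 86400            # GB/day -> Mbit/s
    print(f"peak rate:    {mbit_per_s:.0f} Mbit/s")        # ~12, as quoted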