Transcript and Presenter's Notes

Title: title.open ( ); revolution {execute};

1
Tony Doyle a.doyle@physics.gla.ac.uk
GridPP Year 1 to Year 2, Collaboration Meeting, Imperial College, 16 September 2002
2
Outline: GridPP Year 1 to Year 2..
  • Who are we?
  • Are we a Grid?
  • Historical Perspective
  • Philosophy of the Grid?
  • Shared Distributed Resources 2003
  • Cartology of the Grid
  • Will the EDG middleware be robust?
  • LHC Computing Grid Status Report
  • Are we organised?
  • Achievements and Issues
  • (Not really a) Summary

3
Who are we?
Nick White /O=Grid/O=UKHEP/OU=hepgrid.clrc.ac.uk/CN=Nick White member
Roger Jones /O=Grid/O=UKHEP/OU=lancs.ac.uk/CN=Roger Jones member
Sabah Salih /O=Grid/O=UKHEP/OU=hep.man.ac.uk/CN=Sabah Salih member
Santanu Das /O=Grid/O=UKHEP/OU=hep.phy.cam.ac.uk/CN=Santanu Das member
Tony Cass /O=Grid/O=CERN/OU=cern.ch/CN=Tony Cass member
David Kelsey /O=Grid/O=UKHEP/OU=pp.rl.ac.uk/CN=David Kelsey member
Henry Nebrensky /O=Grid/O=UKHEP/OU=brunel.ac.uk/CN=Henry Nebrensky member
Paul Kyberd /O=Grid/O=UKHEP/OU=brunel.ac.uk/CN=Paul Kyberd member
Peter Hobson /O=Grid/O=UKHEP/OU=brunel.ac.uk/CN=Peter R Hobson member
Robin Middleton /O=Grid/O=UKHEP/OU=pp.rl.ac.uk/CN=Robin Middleton member
Alexander Holt /O=Grid/O=UKHEP/OU=ph.ed.ac.uk/CN=Alexander Holt member
Alasdair Earl /O=Grid/O=UKHEP/OU=ph.ed.ac.uk/CN=Alasdair Earl member
Akram Khan /O=Grid/O=UKHEP/OU=ph.ed.ac.uk/CN=Akram Khan member
Stephen Burke /O=Grid/O=UKHEP/OU=pp.rl.ac.uk/CN=Stephen Burke member
Paul Millar /O=Grid/O=UKHEP/OU=ph.gla.ac.uk/CN=Paul Millar member
Andy Parker /O=Grid/O=UKHEP/OU=hep.phy.cam.ac.uk/CN=M.A.Parker member
Neville Harnew /O=Grid/O=UKHEP/OU=physics.ox.ac.uk/CN=Neville Harnew member
Pete Watkins /O=Grid/O=UKHEP/OU=ph.bham.ac.uk/CN=Peter Watkins member
Owen Maroney /O=Grid/O=UKHEP/OU=phy.bris.ac.uk/CN=Owen Maroney member
Alex Finch /O=Grid/O=UKHEP/OU=lancs.ac.uk/CN=Alex Finch member
Antony Wilson /O=Grid/O=UKHEP/OU=pp.rl.ac.uk/CN=Antony Wilson member
Tim Folkes /O=Grid/O=UKHEP/OU=hepgrid.clrc.ac.uk/CN=Tim Folkes member
Stan Thompson /O=Grid/O=UKHEP/OU=ph.gla.ac.uk/CN=A. Stan Thompson member
Mark Hayes /O=Grid/O=UKHEP/OU=amtp.cam.ac.uk/CN=Mark Hayes member
Todd Huffman /O=Grid/O=UKHEP/OU=physics.ox.ac.uk/CN=B. Todd Huffman member
Glenn Patrick /O=Grid/O=UKHEP/OU=pp.rl.ac.uk/CN=G N Patrick member
Pete Gronbech /O=Grid/O=UKHEP/OU=physics.ox.ac.uk/CN=Pete Gronbech member
Nick Brook /O=Grid/O=UKHEP/OU=phy.bris.ac.uk/CN=Nick Brook member
Marc Kelly /O=Grid/O=UKHEP/OU=phy.bris.ac.uk/CN=Marc Kelly member
Dave Newbold /O=Grid/O=UKHEP/OU=phy.bris.ac.uk/CN=Dave Newbold member
Kate Mackay /O=Grid/O=UKHEP/OU=phy.bris.ac.uk/CN=Catherine Mackay member
Girish Patel /O=Grid/O=UKHEP/OU=ph.liv.ac.uk/CN=Girish D. Patel member
David Martin /O=Grid/O=UKHEP/OU=ph.gla.ac.uk/CN=David J. Martin member
Peter Faulkner /O=Grid/O=UKHEP/OU=ph.bham.ac.uk/CN=Peter Faulkner member
David Smith /O=Grid/O=UKHEP/OU=ph.bham.ac.uk/CN=David Smith member
Steve Traylen /O=Grid/O=UKHEP/OU=hepgrid.clrc.ac.uk/CN=Steve Traylen member
Ruth Dixon del Tufo /O=Grid/O=UKHEP/OU=hepgrid.clrc.ac.uk/CN=Ruth Dixon del Tufo member
Linda Cornwall /O=Grid/O=UKHEP/OU=hepgrid.clrc.ac.uk/CN=Linda Cornwall member
/O=Grid/O=UKHEP/OU=hep.ucl.ac.uk/CN=Yee-Ting Li member
Paul D. Mealor /O=Grid/O=UKHEP/OU=hep.ucl.ac.uk/CN=Paul D Mealor member
/O=Grid/O=UKHEP/OU=hep.ucl.ac.uk/CN=Paul A Crosby member
David Waters /O=Grid/O=UKHEP/OU=hep.ucl.ac.uk/CN=David Waters member
Bob Cranfield /O=Grid/O=UKHEP/OU=hep.ucl.ac.uk/CN=Bob Cranfield member
Ben West /O=Grid/O=UKHEP/OU=hep.ucl.ac.uk/CN=Ben West member
Rod Walker /O=Grid/O=UKHEP/OU=hep.ph.ic.ac.uk/CN=Rod Walker member
/O=Grid/O=UKHEP/OU=hep.ph.ic.ac.uk/CN=Philip Lewis member
Dave Colling /O=Grid/O=UKHEP/OU=hep.ph.ic.ac.uk/CN=Dr D J Colling member
Alex Howard /O=Grid/O=UKHEP/OU=hep.ph.ic.ac.uk/CN=Alex Howard member
Roger Barlow /O=Grid/O=UKHEP/OU=hep.man.ac.uk/CN=Roger Barlow member
Joe Foster /O=Grid/O=UKHEP/OU=hep.man.ac.uk/CN=Joe Foster member
Alessandra Forti /O=Grid/O=UKHEP/OU=hep.man.ac.uk/CN=Alessandra Forti member
Peter Clarke /O=Grid/O=UKHEP/OU=hep.ucl.ac.uk/CN=Peter Clarke member
Andrew Sansum /O=Grid/O=UKHEP/OU=hepgrid.clrc.ac.uk/CN=Andrew Sansum member
John Gordon /O=Grid/O=UKHEP/OU=hepgrid.clrc.ac.uk/CN=John Gordon member
Andrew McNab /O=Grid/O=UKHEP/OU=hep.man.ac.uk/CN=Andrew McNab member
Richard Hughes-Jones /O=Grid/O=UKHEP/OU=hep.man.ac.uk/CN=Richard Hughes-Jones member
Gavin McCance /O=Grid/O=UKHEP/OU=ph.gla.ac.uk/CN=Gavin McCance member
Tony Doyle /O=Grid/O=UKHEP/OU=ph.gla.ac.uk/CN=Tony Doyle admin
Alex Martin /O=Grid/O=UKHEP/OU=ph.qmw.ac.uk/CN=A.J.Martin member
Steve Lloyd /O=Grid/O=UKHEP/OU=ph.qmw.ac.uk/CN=S.L.Lloyd admin
John Gordon /O=Grid/O=UKHEP/OU=hepgrid.clrc.ac.uk/CN=John Gordon member
We need a New Year 2 Group Photo
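
(For illustration only: the entries above are X.509 certificate subject names plus a VO role. A minimal Python sketch of parsing one entry into name, institute and role; the regular expression and helper name are invented for this example and are not part of any GridPP tool.)

```python
# Minimal sketch: parse VO membership entries of the form
#   "<display name> /O=Grid/O=UKHEP/OU=<institute>/CN=<cert name> <role>"
# The entry format follows the list above; the helper itself is illustrative only.
import re

ENTRY_RE = re.compile(
    r"^(?P<display>.*?)\s*"            # optional display name before the DN
    r"(?P<dn>/O=.*?/CN=[^/]+?)\s+"     # the certificate subject DN
    r"(?P<role>member|admin)$"         # VO role recorded after the DN
)

def parse_member(entry: str):
    """Return (display_name, institute_ou, cert_cn, role) for one list entry."""
    m = ENTRY_RE.match(entry.strip())
    if m is None:
        raise ValueError(f"unrecognised entry: {entry!r}")
    dn = m.group("dn")
    # Split the DN into its components, e.g. OU=ph.gla.ac.uk, CN=Tony Doyle
    rdns = dict(part.split("=", 1) for part in dn.strip("/").split("/"))
    return m.group("display") or rdns["CN"], rdns.get("OU"), rdns["CN"], m.group("role")

print(parse_member(
    "Tony Doyle /O=Grid/O=UKHEP/OU=ph.gla.ac.uk/CN=Tony Doyle admin"
))
# ('Tony Doyle', 'ph.gla.ac.uk', 'Tony Doyle', 'admin')
```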
4
Are we a Grid?
http://www-fp.mcs.anl.gov/foster/Articles/WhatIsTheGrid.pdf
  • Coordinates resources that are not subject to
    centralized control
  • using standard, open, general-purpose protocols
    and interfaces
  • to deliver nontrivial qualities of service
  • YES. This is why development and maintenance of a
    UK-EU-US testbed is important.
  • YES... Globus/Condor-G/EDG meet this
    requirement. Common experiment application layers
    are also important here e.g. SAM, GANGA.
  • NO(T YET). Experiments should define whether this
    is true via this year's data analyses and
    challenges.

5
Historical Perspective
  • I wrote in 1990 a program called "WorlDwidEweb",
    a point and click hypertext editor which ran on
    the "NeXT" machine. This, together with the first
    Web server, I released to the High Energy Physics
    community at first, and to the hypertext and NeXT
    communities in the summer of 1991.
  • Tim Berners-Lee
  • The first three years were a phase of persuasion,
    aided by my colleague and first convert Robert
    Cailliau, to get the Web adopted
  • We needed seed servers to provide incentive and
    examples, and all over the world inspired people
    put up all kinds of things
  • Between the summers of 1991 and 1994, the load on
    the first Web server ("info.cern.ch") rose
    steadily by a factor of 10 every year

6
News - Summer 2002
  • GridPP Demonstrations at the UK e-Science All
    Hands Meeting Fri 30 August 2002
  • EU DataGrid Testbed 1.2 released Mon 12 August
    2002
  • Computer Science Fellowships Wed 31 July 2002
  • GGF5 in Edinburgh Thu 25 July 2002
  • OGSA Early Adopters Workshop Sat 6 July 2002
  • GridPP sponsors joint ATLAS, LHCb Workshop Fri
    31 May 2002
  • Getting started on the EDG Testbed Fri 24 May
    2002
  • First significant use of UK particle physics Grid
    Sat 11 May 2002

7
News Spring 2002
  • GridPP demonstrated at NeSC opening Thu 25 April
    2002
  • First TierA/Prototype Tier1 Hardware delivered
    Wed 13 March 2002
  • LHC Computing Grid Project launched Mon 11 March
    2002
  • Fourth EDG Conference in Paris Mon 4 March 2002
  • The DataGrid project successfully passes the
    first year review Fri 1 March 2002
  • RAL included in successful deployment of DataGrid
    1.1 Fri 1 March 2002

8
News Winter 2002
  • 25 February 2002: Second round of PPARC funded
    GRID Computing Opportunities at CERN announced
  • 25 February 2002: Second round of PPARC e-Science
    Studentships announced
  • 21 February 2002: X.509 Certificates authenticate
    file transfers across the Atlantic
  • 17 February 2002: Fourth Global Grid Forum held in Toronto

9
News Autumn 2001
  • Internet2, Geant and the Grid discussed in
    Guardian article, 8 November 2001
  • IBM announces worldwide participation in grid
    initiatives, 2 August 2001
  • What might this tell us about Year 1?
  • The year has been busy for everyone..
  • (2-4-6-8) A lot has been happening inside and
    outside.. Linear growth
  • In the end we demonstrated that we lead Grid
    development/deployment in the UK..
  • Which is being recognised externally, but we all
    need to plan for future success(es)..

10
Interlude: Last week's News, E-Science testbed
we can (and should) help
  • The issue in question is how most effectively to
    encourage use of the emerging UK Grid
    infrastructure by scientists and engineers. The
    proposal we discussed at our meeting this week
    was that JCSR might fund a 'Grid Computing
    Testbed' with the specific purpose of
    accelerating development and deployment of the
    research Grid. This would be of a scale which
    would provide a significant level of
    computational resource and it would be available
    to researchers only through the use of digital
    certificates and Globus but it would be free at
    the point of use. Through access to this testbed,
    users would provide convincing evidence of the
    usability and usefulness of the Grid.
  • It has been suggested that the most useful and
    cost effective type of resource to provide would
    be one or more Beowulf clusters (commodity
    processor clusters with high throughput, low
    latency interconnect) of a size not readily
    available to researchers at their home
    institution, say with 256 processors.
  • David Boyd (JCSR, 13/9/02)
  • We needed seed servers to provide incentive and
    examples

11
Philosophy of the Grid?
  • Everything is becoming, nothing is. Plato
  • Common sense is the best distributed commodity
    in the world. For every (wo)man is convinced
    (s)he is well supplied with it. Descartes
  • The superfluous is very necessary Voltaire
  • Heidegger, Heidegger was a boozy beggar, I drink
    therefore I am Python
  • Only daring speculation can lead us further, and
    not accumulation of facts. Einstein
  • The real, then, is that which, sooner or later,
    information and reasoning would finally result
    in. C. S. Peirce
  • The philosophers have only interpreted the world
    in various ways the point is to change it. Marx
  • (some of) these may be relevant to your view of
    The Grid

12
Another Grid?
  • .Net
  • March 2001
  • My (i.e. Microsoft) Services
  • Alerts, Application settings, Calendar,
    Categories, Contacts, Devices, Documents,
    FavouriteWebSites, Inbox, Lists, Location,
    Presence, Profile, Services, Wallet(?!)
  • Microsoft drops My Services centralised
    planning June 2002
  • Microsoft unveils new identification plans
    July 2002
  • TrustBridge Grid
  • Distributed
  • Authentication using .Net Passport (and other)
    servers (Kerberos 5.0)
  • Authorisation defined by a Federation VO
  • OO using C#
  • Microsoft increased research budget by 20% in
    2003 (from $2.8B to $3.4B) to develop .Net

13
Red pill or blue pill?
  • The three criteria apply most clearly to the various large-scale Grid deployments being undertaken within the scientific community... Each of these systems integrates resources from multiple institutions, each with their own policies and mechanisms; uses open, general-purpose (Globus Toolkit) protocols to negotiate and manage sharing; and addresses multiple quality of service dimensions, including security, reliability, and performance.
  • Microsoft .NET is a set of Microsoft software technologies for connecting your world of information, people, systems and devices. It enables an unprecedented level of software integration through the use of XML web services: small, discrete, building-block applications that connect to each other - as well as to other, larger applications - via the Internet.

14
GridPP Vision
  • From Web to Grid - Building the next IT
    Revolution
  • Premise
  • The next IT revolution will be the Grid. The
    Grid is a practical solution to the
    data-intensive problems that must be overcome if
    the computing needs of many scientific
    communities and industry are to be fulfilled over
    the next decade.
  • Aim
  • The GridPP Collaboration aims to develop and
    deploy a large-scale science Grid in the UK for
    use by the worldwide particle physics community.

Many Challenges.. Shared distributed
infrastructure For all applications
15
GridPP Overview
£17m 3-year project funded by PPARC through the e-Science Programme
CERN - LCG (start-up phase): funding for staff and hardware...
Applications £1.99m
Operations £1.88m
Tier-1/A £3.66m
CERN £5.67m
DataGrid £3.78m
EDG - UK Contributions: Architecture, Testbed-1, Network Monitoring, Certificates, Security, Storage Element, R-GMA, LCFG, MDS deployment, GridSite, SlashGrid, Spitfire
http://www.gridpp.ac.uk
Applications (start-up phase): BaBar, CDF/D0 (SAM), ATLAS/LHCb, CMS, (ALICE), UKQCD
16
GridPP Bridge
Provide architecture and middleware
Future LHC Experiments
Running US Experiments
Build Tier-A/prototype Tier-1 and Tier-2 centres
in the UK and join worldwide effort to develop
middleware for the experiments
Use the Grid with simulated data
Use the Grid with real data
17
Grid issues Coordination
  • Technical part is not the only problem
  • Sociological problems? resource sharing
  • Short-term productivity loss but long-term gain
  • Key? communication/coordination between
    people/centres/countries
  • This kind of world-wide close coordination across
    multi-national collaborations has never been done
    in the past
  • We need mechanisms here to make sure that all
    centres are part of a global planning
  • In spite of different conditions of funding,
    internal planning, timescales etc
  • The Grid organisation mechanisms should be
    complementary and not parallel or conflicting to
    existing experiment organisation
  • LCG-DataGRID-eSC-GridPP
  • BaBar-CDF-D0-ALICE-ATLAS-CMS-LHCb-UKQCD
  • Local Perspective build upon existing strong PP
    links in the UK to build a single Grid for all
    experiments

18
Shared Distributed Resources 2003
  • Tier-1 600 CPUs 150 TB
  • Tier-2 e(4000 CPUs 200 TB)
  • which could be shared by the experiments to first
    test the concept and then meet particular
    deadlines.
  • The key is to create a Grid.
  • Steps could be
  • Agree to allow other experiments' software to be
    installed
  • Agree to share on a limited basis e.g. limited
    tests on allocated days
  • Aim to extend this capability... to increase e
    (efficiency)

Tier-1
  • The real, then, is that which, sooner or later,
    information and reasoning would finally result
    in

19
A Grid in 2003?
  • This will all be a bit ad-hoc
  • Just as it is in Testbed-1
  • but Testbed-2 will be different
  • Information Services will be increasingly
    important

Tier-1
  • to deliver nontrivial qualities of service

20
Distributed Resources in 2003?
  • This will be less ad-hoc
  • should also consider other applications

QCDGrid
Tier-1
LISA
H1
UKDMC
ZEUS
  • The first three years were a phase of
    persuasion

21
Cartology of the Grid
Dynamic version Including Resource
Discovery? Network monitoring? CPU load
average? Disk resource availability?
What can we learn by looking at a few
maps?
22
Connectivity of UK Grid Sites. BW to Campus, BW
to site, limit
Glasgow 1G 100M 30M?
Edinburgh 1G 100M
Lancaster 155M 100M move to cnlman at 155Mbit
Durham 155M ??100M
Manchester 1G 100M 1 G soon
Sheffield 155M ??100M
Liverpool 155M 100M 4x155M soon. To hep ?
Cambridge 1G 16M?
DL 155M 100M
UCL 155M 1G 30M?
Birmingham 622M ?? 100M
IC 155M 34M then 1G to Hep
Oxford 622M 100M
RAL 622M 100M Gig on site soon
QMW 155M ??
Swansea 155M 100M
Brunel 155M ??
Bristol 622M 100M
RHBNC 34M 155M soon ?? 100M
Portsmouth 155M 100M
Southampton 155M 100M
Sussex 155M 100M
23
EDG TestBed 1 Status: 13 Sep 2002 14:46
  • Web interface showing status of (400) servers
    at testbed 1 sites
  • Production Centres

24
EDG TestBed 1 Status: 13 Sep 2002 14:46
  • Spot the difference?
  • Dynamic version, regenerated each night from
    MDS (as sketched below)
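
(Hedged illustration of what a nightly regeneration from MDS involves: MDS 2.x publishes site information over LDAP, so a status page can be rebuilt by polling each site's index. The endpoint, base DN, object class and attribute below are placeholders rather than the actual testbed configuration, and python-ldap is assumed to be available.)

```python
# Sketch: poll an MDS (LDAP-based) information index and print one status
# line per host. Host, port, base DN, object class and attribute are placeholders.
import ldap   # python-ldap, assumed installed

MDS_URL = "ldap://giis.example.ac.uk:2135"          # placeholder GIIS endpoint
BASE_DN = "Mds-Vo-name=example, o=Grid"             # placeholder VO base DN

def site_status():
    conn = ldap.initialize(MDS_URL)
    conn.simple_bind_s()                             # anonymous bind
    results = conn.search_s(BASE_DN, ldap.SCOPE_SUBTREE,
                            "(objectclass=MdsHost)", ["Mds-Host-hn"])
    for dn, attrs in results:
        hosts = [h.decode() for h in attrs.get("Mds-Host-hn", [])]
        print(dn, hosts)

if __name__ == "__main__":
    site_status()
```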

25
We're getting there.. Status: 13 Sep 2002 14:46
Integrated Information Service inc. resource
discovery
26
GridPP Sites in Testbed: Status 13 Sep 2002 14:46??
Dynamic version, regenerated from R-GMA, including Resource Discovery, Network monitoring, CPU load average, Disk resources
More work needed. Part of Testbed and Network Monitoring. Input from Information Service.
Part of GUIDO? (What's GUIDO?)
27
Network 2003
  • Internal networking is currently a hybrid of
  • 100Mb(ps) to nodes of cpu farms
  • 1Gb to disk servers
  • 1Gb to tape servers
  • UK academic network SuperJANET4
  • 2.5Gb backbone upgrading to 20Gb in 2003
  • EU SJ4 has 2.5Gb interconnect to Geant
  • US New 2.5Gb link to ESnet and Abilene for
    researchers
  • UK involved in networking development
  • internal with Cisco on QoS
  • external with DataTAG

28
Robust? Development Infrastructure
  • CVS Repository
  • management of DataGrid source code
  • all code available (some mirrored)
  • Bugzilla
  • Package Repository
  • public access to packaged DataGrid code
  • Development of Management Tools
  • statistics concerning DataGrid code
  • auto-building of DataGrid RPMs
  • publishing of generated API documentation
  • latest build Release 1.2 (August 2002)

140,506 lines of code in 10 languages (Release 1.0); a sketch of the nightly build and code statistics follows.
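
(Hedged sketch of the auto-build idea listed above: check the DataGrid modules out of CVS, build RPMs, and count lines per language for the code statistics. The CVS root, module names and paths are placeholders, not the real repository layout.)

```python
# Sketch only: nightly checkout, RPM build and line-count statistics.
# CVSROOT, module names and output paths are placeholders.
import subprocess, pathlib, collections

CVSROOT = ":pserver:anonymous@cvs.example.org:/cvs/datagrid"   # placeholder
MODULES = ["workload", "datamgmt", "infosys"]                  # placeholder modules
WORKDIR = pathlib.Path("/tmp/edg-nightly")

def checkout_and_build():
    WORKDIR.mkdir(parents=True, exist_ok=True)
    for module in MODULES:
        subprocess.run(["cvs", "-d", CVSROOT, "checkout", module],
                       cwd=WORKDIR, check=True)
        spec = WORKDIR / module / f"{module}.spec"
        if spec.exists():
            subprocess.run(["rpmbuild", "-ba", str(spec)], check=True)

def line_counts():
    """Crude per-extension line counts, in the spirit of the code statistics."""
    counts = collections.Counter()
    for path in WORKDIR.rglob("*"):
        if path.is_file() and path.suffix in {".c", ".cpp", ".java", ".py", ".pl", ".sh"}:
            counts[path.suffix] += sum(1 for _ in path.open(errors="ignore"))
    return counts

if __name__ == "__main__":
    checkout_and_build()
    print(line_counts())
```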
29
Robust? Software Evaluation
ETT Extensively Tested in Testbed
UT Unit Testing
IT Integrated Testing
NI Not Installed
NFF Some Non-Functioning Features
MB Some Minor Bugs
SD Successfully Deployed
Component ETT UT IT NI NFF MB SD
Resource Broker v v v l
Job Desc. Lang. v v v l
Info. Index v v v l
User Interface v v v l
Log. Book. Svc. v v v l
Job Sub. Svc. v v v l
Broker Info. API v v l
SpitFire v v l
GDMP l
Rep. Cat. API v v l
Globus Rep. Cat. v v l
Component ETT UT IT NI NFF MB SD
SE Info. Prov. V v l
File Elem. Script l
Info. Prov. Config. V v l
RFIO V v l
MSS Staging l
Mkgridmap daemon v l
CRL update daemon v l
Security RPMs v l
EDG Globus Config. v v l
Component ETT UT IT NI NFF MB SD
Schema v v v l
FTree v v l
R-GMA v v l
Archiver Module v v l
GRM/PROVE v v l
LCFG v v v l
CCM v l
Image Install. v l
PBS Info. Prov. v v v l
LSF Info. Prov. v v l
Component ETT UT IT NI NFF MB SD
PingER v v l
UDPMon v v l
IPerf v v l
Globus2 Toolkit v v l
30
Robust? Middleware Testbed(s)
Validation/Maintenance > Testbed(s): EU-wide development
31
Robust? Code Development Issues
  • Reverse Engineering (C++ code analysis and restructuring, coding standards) > abstraction of existing code to UML architecture diagrams
  • Language choice (currently 10 used in DataGrid)
  • Java = C++ -- features (global variables, pointer manipulation, goto statements, etc.).
  • Constraints (performance, libraries, legacy code)
  • Testing (automation, object oriented testing)
  • Industrial strength?
  • OGSA-compliant?
  • O(20 year) Future proof??

ETT Extensively Tested in Testbed
UT Unit Testing
IT Integrated Testing
NI Not Installed
NFF Some Non-Functioning Features
MB Some Minor Bugs
SD Successfully Deployed
32
LHC Computing GridHigh Level Planning
1. CERN
Prototype of Hybrid Event Store (Persistency
Framework)
Hybrid Event Store available for general users
Distributed production using grid services
Full Persistency Framework
applications
Distributed end-user interactive analysis
Grid as a Service
LHC Global Grid TDR
50% prototype (LCG-3) available
LCG-1 reliability and performance targets
First Global Grid Service (LCG-1) available
33
LCG Level 1 Milestonesproposed to LHCC
M1.1 - June 03 First Global Grid Service (LCG-1) available -- this milestone and M1.3 defined in detail by end 2002
M1.2 - June 03 Hybrid Event Store (Persistency Framework) available for general users
M1.3a - November 03 LCG-1 reliability and performance targets achieved
M1.3b - November 03 Distributed batch production using grid services
M1.4 - May 04 Distributed end-user interactive analysis -- detailed definition of this milestone by November 03
M1.5 - December 04 50% prototype (LCG-3) available -- detailed definition of this milestone by June 04
M1.6 - March 05 Full Persistency Framework
M1.7 - June 05 LHC Global Grid TDR
34
LCG Level 1 Milestones
1. CERN
Hybrid Event Store available for general users
applications
Distributed production using grid services
Distributed end-user interactive analysis
Full Persistency Framework
grid
LHC Global Grid TDR
50% prototype (LCG-3) available
LCG-1 reliability and performance targets
First Global Grid Service (LCG-1) available
35
LCG Process
SC2
PEB
requirements
Architects Forum: Design decisions, implementation strategy for physics applications
Grid Deployment Board: Coordination, standards, management policies for operating the LCG Grid Service
36
LCG Project Execution Board
  • All UK-funded posts now filled (20 people)
  • Management Areas
  • Applications
  • Fabric PASTA 2002 Report
  • GRID Technology
  • GRID Deployment
  • Grid Deployment Board
  • Each report is available via
  • http://lhcgrid.web.cern.ch/LHCgrid/peb/
  • The process is open, clear and intuitive.

37
Recruitment status
38
(No Transcript)
39
Events.. to Files.. to Events
Event 1 Event 2 Event 3
Data Files
Data Files
Data Files
RAW
Tier-0 (International)
RAW
RAW
Data Files
RAW Data File
ESD
Tier-1 (National)
Data Files
ESD
ESD
Data Files
Data Files
ESD Data
AOD
Tier-2 (Regional)
AOD
AOD
Data Files
Data Files
Data Files
AOD Data
TAG
Tier-3 (Local)
TAG
TAG
TAG Data
Not all pre-filtered events are interesting; non pre-filtered events may be. File Replication Overhead (see the sketch below).
Interesting Events List
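
(Illustration of the file-replication overhead noted above, with invented file names, sizes and event maps: copying whole files to satisfy an interesting-events list moves far more data than the events themselves.)

```python
# Illustrative only: file-level versus event-level replication overhead.
# File names, sizes and the event-to-file map are invented for the example.

files = {                      # file -> (size in GB, events stored in it)
    "aod_001.root": (1.0, range(0, 1000)),
    "aod_002.root": (1.0, range(1000, 2000)),
    "aod_003.root": (1.0, range(2000, 3000)),
}
interesting_events = [3, 1500, 1501]          # the "Interesting Events List"
event_size_gb = 0.001                         # ~1 MB per AOD event (assumed)

# File replication: copy every file containing at least one wanted event.
files_needed = [f for f, (_, evts) in files.items()
                if any(e in evts for e in interesting_events)]
file_copy_gb = sum(files[f][0] for f in files_needed)

# Event replication: copy only the selected events (what a replicated
# event-level database would allow).
event_copy_gb = len(interesting_events) * event_size_gb

print(f"file-level copy : {file_copy_gb:.3f} GB ({files_needed})")
print(f"event-level copy: {event_copy_gb:.3f} GB")
# The file-level transfer moves ~2 GB to deliver ~3 MB of selected events.
```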
40
Events.. to Events: Event Replication and Query Optimisation
Event 1 Event 2 Event 3
Distributed (Replicated) Database
RAW
Tier-0 (International)
RAW
RAW
ESD
Tier-1 (National)
ESD
ESD
AOD
Tier-2 (Regional)
AOD
AOD
TAG
Tier-3 (Local)
TAG
TAG
Knowledge Stars in Stripes
Interesting Events List
41
POOL
Persistency Framework
42
Giggle
RLI
Hierarchical indexing. The higher-level RLI contains pointers to lower-level RLIs or LRCs.
Scalable? Trade-off: Consistency versus Efficiency (see the sketch below)
RLI
RLI
RLI = Replica Location Index
LRC = Local Replica Catalog
LRC
LRC
LRC
LRC
LRC
Storage Element
Storage Element
Storage Element
Storage Element
Storage Element
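
(A minimal sketch of the two-level lookup idea: Local Replica Catalogs map logical file names to physical replicas at one site, while the Replica Location Index only records which catalogues might know about a file. Class and method names are invented for the sketch, not the Giggle API.)

```python
# Illustrative two-level replica location index, loosely following the
# LRC/RLI split described above. Names are invented for the sketch.

class LocalReplicaCatalog:
    """LRC: maps logical file names (LFNs) to physical replicas at one site."""
    def __init__(self, site):
        self.site = site
        self.replicas = {}          # lfn -> set of physical file names

    def register(self, lfn, pfn):
        self.replicas.setdefault(lfn, set()).add(pfn)

    def lookup(self, lfn):
        return self.replicas.get(lfn, set())


class ReplicaLocationIndex:
    """RLI: maps LFNs to the LRCs (or lower-level RLIs) that may hold them.
    The index is only periodically refreshed, so answers can be stale:
    this is the consistency-versus-efficiency trade-off noted above."""
    def __init__(self):
        self.index = {}             # lfn -> set of LRCs

    def refresh_from(self, lrc):
        for lfn in lrc.replicas:
            self.index.setdefault(lfn, set()).add(lrc)

    def locate(self, lfn):
        # Ask only the catalogues that the (possibly stale) index points at.
        hits = {}
        for lrc in self.index.get(lfn, set()):
            pfns = lrc.lookup(lfn)
            if pfns:
                hits[lrc.site] = pfns
        return hits


# Usage: two storage elements, one higher-level index.
ral, glasgow = LocalReplicaCatalog("RAL"), LocalReplicaCatalog("Glasgow")
ral.register("lfn:run42.raw", "srm://ral/run42.raw")
glasgow.register("lfn:run42.raw", "srm://gla/run42.raw")

rli = ReplicaLocationIndex()
rli.refresh_from(ral)
rli.refresh_from(glasgow)
print(rli.locate("lfn:run42.raw"))   # {'RAL': {...}, 'Glasgow': {...}}
```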
43
LCG SC2 (See Nick's Talk)
44
LCG SC2 (See Nick's Talk)
e.g. Persistency (POOL) The RTAG convened in ten
three-hour sessions during the weeks of 28
January, 18 February, and 11 March, and delivered
an interim report to the SC2 on 8 March. An
additional report was provided during the LCG
Launch Workshop on 12 March. A final report to
the SC2 is expected on 5 April 2002. Developer
Release November 2003.
45
Grid Technology Deployment
  • Close collaboration between LCG and EDG on
    integration and certification of grid middleware
  • common teams being established
  • prepares the ground for long-term LCG support of
    grid middleware
  • Importance of a common grid middleware toolkit
  • compatible with implementations in Europe, US
  • flexible enough to evolve with mainline grid
    developments
  • responding to the needs of the experiments
  • GLUE common US-European activity to achieve
    compatible solutions
  • supported by DataTAG, iVDGL, ..
  • Grid Deployment Board
  • first task is the detailed definition of LCG-1,
    the initial LCG Global Grid Service
  • this will include defining the set of grid
    middleware tools to be deployed
  • target full definition of LCG-1 by the end of
    the year - LCG-1 in operation
    mid-2003

46
Are we (sufficiently) well organised?
Is each part of this structure working? Is the
whole working? Comments welcome.
47
GridPP Project Map - Elements
48
GridPP Project Map - Metrics and Tasks
Available from Web Pages. Provides Structure for PMB (and Dave's talk)
49
This year's high point?
50
Things Missing, apparently
i.e. not ideal but it works
51
Next year's high point?
  • Which experiments will use the Grid most
    efficiently? (Experiment monitoring throughout
    the year - a common way of measuring
    experiments' data-handling metrics: CPU, Disk,
    users, jobs, memory reqts. etc..)
  • What middleware will be used?
  • How efficient will the UK testbed be?
  • How well integrated will it be?
  • How well will we share resources?
  • Need to anticipate questions, in order to answer
    them

52
GUI last meeting
53
GUI this meeting: GridPP User Interface (GUIDO)
  • generic user interface to enable experiments to
    access resources efficiently..

Talk: Demonstration of GridPP portal - Sarah Marr and Dave Colling
54
GridPP User Interface: Pulling it all together
GridPP Web Links 2003? Demonstration elements: GridSite, GUIDO, R-GMA, Short and Long-Term Monitoring, Local and WAN Monitoring.
SAM
55
GridPP Achievements and Issues
  • 1st Year Achievements
  • Complete Project Map
  • Applications Middleware Hardware
  • Fully integrated with EU DataGrid, LCG and SAM
    Projects
  • Rapid middleware deployment /testing
  • Integrated US-EU applications development e.g.
    BaBarEDG
  • Roll-out document for all sites in the UK (Core
    Sites, Friendly Testers, User Only).
  • Testbed up and running at 15 sites in the UK
  • Tier-1 Deployment
  • 200 GridPP Certificates issued
  • First significant use of Grid by an external user
    (LISA simulations) in May 2002
  • Web page development (GridSite)
  • Issues for Year 2
  • Status 13 Sep 2002 14:46 GMT: monitor
    and improve testbed deployment efficiency, short-term
    (10 min) and long-term (monthly)
  • Importance of EU-wide development of middleware
    and integration with US-led approach
  • Integrated Testbed for use/testing by all
    applications
  • Common integration layer between middleware and
    application software
  • Integrated US-EU applications development
  • Tier-1 Grid Production Mode
  • Tier-2 Definitions and Deployment
  • Integrated Tier-1 Tier-2 Testbed
  • Transfer to UK e-Science CA
  • Integration with other UK projects e.g.
    AstroGrid, MyGrid
  • Publication of YOUR work

56
Summary (by Project Map areas)
  • Grid success is fundamental for PP
  • CERN: LCG, Grid as a Service.
  • DataGrid: Middleware built upon Globus and
    Condor-G. Testbed 1 deployed.
  • Applications: complex, need to interface to
    middleware.
  • LHC Analyses: ongoing feedback/development.
  • Other Analyses: have immediate requirements.
    Integrated using Globus, Condor, EDG/SAM tools
  • Infrastructure: Tiered computing to the
    physicist's desktop
  • Scale in UK? 1 PByte and 2,000 distributed CPUs
  • GridPP in Sept 2004
  • Integration ongoing with UK e-science
  • Dissemination
  • Co-operation required with other
    disciplines/industry
  • Finances under control, but need to start
    looking to Year 4..
  • Year 1 was a good starting point. First Grid jobs
    have been submitted..
  • Looking forward to Year 2. Web services ahead..
    but
  • Experiments will define whether this experiment
    is successful (or not)

57
Holistic View: Multi-layered Issues (Not a Status Report)
GridPP
applications
infrastructure
middleware
58
GridPP5 - Welcome
  • Opening Session (Chair Dave Britton)
  • 1100-1130 Welcome and Introduction - Steve
    Lloyd
  • 1130-1200 GridPP Project Status - Tony Doyle
  • 1200-1230 Project Management - Dave Britton
  • Experiment Developments I (Chair Roger Barlow)
  • 1330-1350 EB News and LCG SC2 Activities -
    Nick Brook
  • 1350-1410 WP8 status - Frank Harris
  • 1410-1435 ATLAS/LHCb GANGA Development -
    Alexander Soroko
  • 1435-1500 ATLAS Installation and Validation
    Tools - Roger Jones
  • 1500-1515 UK e-Science Grid for ATLAS -
    Matt Palmer
  • 1515-1540 CMS status and future plans -
    Peter Hobson
  • Experiment Developments II (Chair Nick Brook)
  • 1600-1625 UKQCD Status and Future Plans -
    James Perry
  • 1625-1650 BaBar Status and Future Plans -
    David Smith
  • 1650-1700 SAM - Introduction - Rick St Denis
  • 1700-1720 SAM Status and Future Plans -
    Stefan Stonjek
  • 1720-1740 SAM-Grid Status and Future Plans -
    Rod Walker
  • 1740-1755 Demonstration of GridPP portal -
    Sarah Marr and Dave Colling
  • LeSC Perspective, Middleware and Testbed
    (Chair Pete Clarke)
  • 900-930 London eScience Centre -
    Steven Newhouse, Technical Director of LeSC
  • 930-1000 EDG Overview, inc. Feedback from EDG
    Retreat and Plans for Testbed 2 - Steve
    Fisher
  • 1000-1030 Tier 1/A Status Report
    - Andrew Sansum
  • 1030-1100 Testbed Deployment Status
    - Andrew McNab
  • Testbed Installation Experiences, Issues and
    Plans (Chair John Gordon)
  • 1130-1145 Meta Directory Service - Steve
    Traylen
  • 1145-1200 Virtual Organisation
    - Andrew McNab
  • 1200-1215 Replica Catalog - Owen Moroney
  • 1215-1230 Resource Broker - Dave Colling
  • Grid Middleware Status and Plans (Chair Steve
    Lloyd)
  • 1330-1350 WP1 Overview - Dave Colling
  • 1350-1410 WP2 Overview - Gavin McCance
  • 1410-1430 WP3 Overview - Steve Fisher
  • 1430-1450 WP4 Status - Tony Cass
  • 1450-1510 WP5 Overview - John Gordon
  • 1510-1530 WP7 Networking and Security -
    Paul Mealor