Title: HPWREN and International Collaborations
1. Applications and Middleware Work in PRAGMA Grid
Cindy Zheng, PRAGMA Grid Coordinator
Pacific Rim Application and Grid Middleware Assembly
http://www.pragma-grid.net
http://goc.pragma-grid.net
2. Overview
- Introduction to PRAGMA Grid
- People
- Hardware
- Software
- Operations
- Grid Applications
- Grid Middleware
- Security
- Infrastructure
- Services
- Education Grid
- Collaborations/Integrations
- Grid Interoperations
3. PRAGMA Grid
UZH Switzerland
AIST OsakaU UTsukuba TITech Japan
JLU China
NCSA USA
CNIC GUCAS China
KISTI Korea
BU USA
UUtah USA
LZU China
SDSC USA
ASGC NCHC Taiwan
CUHK HongKong
UPRM Puerto Rico
UoHyd India
CICESE Mexico
ASTI Philippines
UNAM Mexico
NECTEC ThaiGrid Thailand
HCMUT HUT IOIT-HCM Vietnam
CeNAT-ITCR Costa Rica
SKU UI Indonesia
MIMOS USM Malaysia
APAC QUT Australia
BII IHPC NGO NTU Singapore
UCN Chile
UChile Chile
MU Australia
BESTGrid New Zealand
32 institutions in 16 countries/regions, 27 compute sites (10 in preparation)
4. PRAGMA Grid Members and Team
http://goc.pragma-grid.net/wiki/index.php/Site_status_and_tasks
- Sites
  - 23 sites from PRAGMA member institutions
  - 15 sites from non-PRAGMA member institutions
  - 27 sites contributed compute clusters
- Team members
  - 190 and growing
  - One management contact per site
  - 1-3 technical support contacts per site
  - 1-4 application drivers per application team
  - 1-5 members per middleware development team
5. PRAGMA Grid Compute Resources
http://goc.pragma-grid.net/pragma-doc/computegrid.html
6. Characteristics of PRAGMA Grid
- Grass-roots
  - Voluntary contributions
  - Open (PRAGMA member or not, Pacific Rim or not)
  - Long-term collaborative working experiment
- Heterogeneous
  - Funding
  - No uniform infrastructure management
  - Variety of sciences and applications
  - Site policies, system and network environments
- Realistically tough
  - Good for development, collaborations, integrations and testing
7. PRAGMA Grid Software Layers
http://goc.pragma-grid.net/pragma-doc/userguide/join.html
- Applications: Phylogenetic, FMO, CSTFT, Savannah, MM5, AMBER, Siesta
- Application and infrastructure middleware: Ninf-G, Nimrod/G, Mpich-GX, Gfarm, SCMSWeb, MOGAS, CSF
- Globus (required)
- Local job scheduler (one required): SGE, PBS, LSF, SQMS
8. PRAGMA Grid Operations
9. One of the major lessons from PRAGMA Grid, one that everybody has noticed and would agree with: you have to Grid people before you can Grid machines.
10. Grid Operations
http://goc.pragma-grid.net, http://wiki.pragma-grid.net
- Develop and maintain mutually beneficial, happy relationships among all people involved
  - Geographies, time zones, languages
  - Funding, chains of command, priorities
  - Mutual benefit, consensus, active leadership
  - Coordinator, site contacts
- Collaboration tools
  - Mailing lists, VTCs, Skype, semi-annual workshops
  - Grid Operation Center (GOC)
  - Wiki, where all sites, application teams and middleware teams collaborate
- Heterogeneity
  - Tolerate, overcome, and take advantage of it
  - Software inventory instead of a software stack
  - Many sub-grids for applications
  - Recommendations instead of requirements
  - Software licenses (grid-wide AMBER license)
11. Create New Ways to Operate
http://goc.pragma-grid.net, http://wiki.pragma-grid.net
- Lacks precedent
  - Everyone contributes ideas and suggestions
  - Evolving and improving over time
  - Everyone documents and updates (wiki)
- Create new procedures
  - New site setup to join PRAGMA Grid:
    http://goc.pragma-grid.net/pragma-doc/userguide/join.html
  - New user/application to run in PRAGMA Grid:
    http://goc.pragma-grid.net/pragma-doc/userguide/pragma_user_guide.html
- Tabulate information
  - Application pages, site pages, resource tables, status pages
- Publish instructions
  - Software deployment procedures, tools
12. Application Driven
13. Applications and Middleware
http://goc.pragma-grid.net/applications/default.html
- Real science applications are paired with and drive middleware development
- Open to applications of all scientific disciplines
- Achieve long runs and scientific results
- 30 applications in 3 years
  - Structural biology
    - Phaser (MU, Australia)
  - Quantum mechanics, quantum chemistry
    - TDDFT, QM-MD, FMO/Ninf-G (AIST, Japan)
  - Climate simulation
    - Savannah/Nimrod (MU, Australia)
    - MM5/Mpich-GX (CICESE, Mexico; KISTI, Korea)
  - Genomics and meta-genomics
    - iGAP/Gfarm/CSF (UCSD, USA; AIST, Japan; JLU, China)
    - HPM genomics (IOIT-HCM, Vietnam)
    - mpiBLAST/Mpich-G2 (ASGC, Taiwan)
    - Phylogenetics/Gfarm/CSF (UWisc and UCSD, USA)
  - Computational chemistry and fluid dynamics
    - CSE-Online (UUtah, USA)
    - e-AIRS (KISTI, Korea)
14. Grid Middleware
15. Ninf-G
http://ninf.apgrid.org
- Developed by AIST, Japan
- Based on the GridRPC model (a sketch of the model follows this list)
- Supports parallel computing
- Integrated into NMI release 8 (the first non-US software in NMI)
- Integrates with Rocks
- GridRPC is an OGF standard
- 4 applications ran in PRAGMA Grid, 2 ran across multiple grids
  - TDDFT
  - QM/MD
  - FMO
  - CSTFT (UPRM)
- Achieved long runs (50 days)
- Improved fault tolerance
- Simplified deployment procedures
- Sped up development cycles
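To make the GridRPC model concrete, here is a minimal conceptual sketch in Python. The real Ninf-G API is in C (grpc_function_handle_init, grpc_call, grpc_wait and friends); the class, function and host names below are illustrative stand-ins, not the actual API.

    # Conceptual sketch of the GridRPC model behind Ninf-G.
    from concurrent.futures import ThreadPoolExecutor

    class FunctionHandle:
        """Stands in for a GridRPC handle bound to a remote server."""
        def __init__(self, server, func_name):
            self.server, self.func_name = server, func_name

        def call(self, *args):
            # Real GridRPC ships args to the remote server (via Globus),
            # runs the registered function there, and returns the result.
            print(f"running {self.func_name}{args} on {self.server}")
            return sum(args)  # stand-in for the remote computation

    # Task-parallel pattern typical of Ninf-G applications (e.g., FMO):
    servers = ["clusterA.example.org", "clusterB.example.org"]  # hypothetical
    handles = [FunctionHandle(s, "fragment_energy") for s in servers]
    with ThreadPoolExecutor() as pool:
        futures = [pool.submit(h.call, i, i + 1) for i, h in enumerate(handles)]
        print([f.result() for f in futures])  # analogous to grpc_wait_all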
16. Nimrod/G
http://www.csse.monash.edu.au/davida/nimrod
- Developed by Monash University (MU), Australia
- Supports large-scale parameter sweeps on grid infrastructure (a sweep sketch follows this list)
- Easy user interface: Nimrod portals at
  - MU, Australia
  - UZurich, Switzerland
  - UCSD, USA
- 4 applications ran in PRAGMA Grid and 2 ran in multiple grids
  - Savannah climate simulation (MU, Australia)
  - GAMESS/APBS (UZurich, Switzerland)
  - Siesta (UZurich, Switzerland)
  - Structural biology (MU, Australia)
- Developed an interface to Unicore with UZH
- Achieved long runs (90 different scenarios of 6 weeks each)
- Improved fault tolerance (innovative time_step)
- Enhancements in data and storage handling
[Figure: a Nimrod plan file with its description of parameters]
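A Nimrod experiment is declared in a plan file that names parameters and tasks; the scheduler then expands the cross-product into independent jobs and farms them out. A minimal Python sketch of that expansion, with made-up parameter names:

    # What Nimrod/G automates: expand a parameter sweep into jobs.
    import itertools

    parameters = {                      # hypothetical sweep parameters
        "fire_intensity": [0.5, 1.0, 1.5],
        "monsoon_onset_day": [330, 340, 350],
    }
    jobs = [dict(zip(parameters, combo))
            for combo in itertools.product(*parameters.values())]

    for i, job in enumerate(jobs):
        # Nimrod/G would run each job on some grid resource, stage files
        # in and out, and retry on failure (its fault tolerance role).
        print(f"job {i}: {job}")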
17. MPICH-GX
http://www.moredream.org/mpich.htm
- MPICH-GX
  - Developed by the Korea Institute of Science and Technology Information (KISTI), Korea
  - Based on MPICH-G2
  - Grid-enabled MPI, supporting
    - Private IP addresses
    - Fault tolerance
- MM5 and WRF
  - CICESE, Mexico
  - Medium-scale atmospheric simulation models (an MPI sketch follows this list)
- Experiment (KGrid)
  - WRF works well with MPICH-GX
  - MM5 experienced scaling problems with MPICH-GX when using more than 24 processors in a cluster
  - Functionality of the private IP support is usable
  - Performance of the private IP support is reasonable
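MPICH-GX runs ordinary MPI codes across cluster boundaries, including nodes on private IPs. The models above are Fortran; as a stand-in, here is a minimal MPI program of the same shape, written with mpi4py for brevity:

    # Each rank computes on its piece of the domain, then results are
    # combined; under MPICH-GX the ranks may span clusters and grids.
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    local_value = rank * 1.0          # stand-in for a subdomain result
    total = comm.reduce(local_value, op=MPI.SUM, root=0)
    if rank == 0:
        print(f"{size} ranks, reduced total = {total}")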
18. Education Grid
http://prime.ucsd.edu
http://prius.ics.es.osaka-u.ac.jp/en/index.html
- PRIME (Pacific Rim Undergraduate Experiences) provides UCSD undergraduate students international, interdisciplinary research internships and cultural experiences, in collaboration with PRAGMA since 2004.
- PRIUS (Pacific Rim International UniverSity) provides Osaka University students expert lectures and internships abroad, in collaboration with PRAGMA since 2005.
- Sample middleware projects
  - MOGAS
  - Grid security analysis
- Sample applications run in PRAGMA Grid this year
  - Climate modeling
  - Multi-walled carbon nanotube and polyethylene oxide composite computer visualization model
  - Metabolic regulation of ionic currents and pumps in a rabbit ventricular myocyte model
  - Improving binding energy using quantum mechanics
  - Cardiac mechanics modeling
  - H5N1 simulation
  - Shp2 protein tyrosine phosphatase inhibitor simulation for cancer research
19. Science, Technologies, Collaborations, Integrations
20. Collaborations with Science and Technology Teams
- Grid security
  - Naregi (Japan), APGrid, GAMA (SDSC, USA)
- Grid infrastructure
  - Monitoring: SCMSWeb (ThaiGrid, Thailand)
  - Accounting: MOGAS (NTU, Singapore)
  - Metascheduling: Community Scheduler Framework (JLU, China)
  - Cyber-environment: CSE-Online (UUtah, USA)
- Rocks and middleware (SDSC, USA)
  - Ninf-G, SCE, Gfarm, Bio, KRocks, Condor, ...
- Science, data grid, sensor, network
  - Biosciences: avian flu, portal, ...
  - Gfarm-FUSE (AIST, Japan)
  - GEON data network
  - GLEON sensor network
  - OptIPuter
    - High-performance networked TDW
  - Telescience
21. Grid Security
- Trust in PRAGMA Grid: http://goc.pragma-grid.net/pragma-doc/certificates.html
  - IGTF distribution
  - Non-IGTF distribution (trusts all PRAGMA Grid sites)
- APGrid PMA
  - One of the three IGTF founding PMAs
  - Many PRAGMA Grid sites are members
- PRAGMA CA
  - Naregi-CA
    - AIST, UCSD, UChile, UoHyd, UPRM
  - PRAGMA CA (experimental and production)
    - Based on Naregi-CA
    - Catch-all CA for PRAGMA
    - The production CA will be IGTF compliant
- MyProxy and VOMS services
  - APAC and UCSD
- Work with GAMA
  - Integrate with Naregi-CA (Naregi, UCSD)
  - Integration with VOMS (AIST)
  - Add a servlet for account management (UChile)
22. Gfarm Grid File System
http://datafarm.apgrid.org
- UTsukuba; open-source development at SourceForge.net
- Grid file system that federates the storage of each site
- A meta-server keeps track of file copies and locations
- Can be mounted from cluster nodes and clients (GfarmFS-FUSE; a usage sketch follows this list)
- Parallel I/O and near-site copies for scalable performance
- Replication for fault tolerance
- Uses GSI authentication
- Easy application deployment and file sharing
- Distributed meta-servers?
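Once a site mounts Gfarm through GfarmFS-FUSE, applications see ordinary POSIX files, so grid-wide sharing needs no code changes. A small sketch, assuming a hypothetical mount point /gfarm and hypothetical file names:

    # Plain POSIX I/O against a GfarmFS-FUSE mount; paths are hypothetical.
    import shutil
    from pathlib import Path

    shared = Path("/gfarm/pragma/amber")      # hypothetical shared directory
    shared.mkdir(parents=True, exist_ok=True)

    # Stage an input once; sites holding a replica read it locally,
    # which is where the scalable performance comes from.
    shutil.copy("equilibration.inpcrd", shared / "equilibration.inpcrd")
    print([p.name for p in shared.iterdir()])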
23. Develop and Test GfarmFS-FUSE in PRAGMA Grid
http://goc.pragma-grid.net/wiki/index.php/Resources_and_Data
- Testing with applications
  - iGAP (Gfarm team, Japan; UCSD, USA; JLU, China)
    - Huge number of small files
    - High meta-data access overhead
    - A meta-data cache server brought dramatic improvements (44 sec -> 3.54 sec)
  - AMBER (USM, Malaysia; Gfarm team, Japan)
    - Remote Gfarm meta-server
    - The meta-server is the bottleneck
    - File-sharing permissions, security
    - Version 2.0 improved performance
    - Used as shared storage only
- Version 1.4 works well in a local or regional grid
  - GeoGrid, Japan
  - CLGrid, Chile
- Integration
  - SCMSWeb (ThaiGrid, Thailand)
  - Rocks (SDSC, USA; UZH, Switzerland)
24. SCMSWeb
http://www.opensce.org/components/SCMSWeb
- Developed by Kasetsart University and ThaiGrid
- Web-based real-time grid monitoring system
  - System usage, job/queue status
  - Probes Globus authentication, job submission, GridFTP, Gfarm access, ... (a probe sketch follows this list)
  - Network bandwidth measurements with Iperf
  - PRAGMA Grid geo map
- Supports Linux and Solaris; good meta-view, easy user interface, excellent user support
- Developed and tested in PRAGMA Grid
  - Deployed at 27 sites; improved scalability and performance
  - Sites helped with porting to ia64 and Solaris
  - Demands push fast expansion of functionality
  - More regional/national grids learned about and adopted it
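The real SCMSWeb probes authenticate and submit test jobs; the toy probe below only checks that a site's Globus services answer on their default ports (2119 for the GRAM gatekeeper, 2811 for GridFTP). The host name is hypothetical.

    # Minimal TCP reachability probe, a simplified stand-in for SCMSWeb's
    # service probes (which also test auth, job submission, and transfers).
    import socket

    PROBES = {"GRAM": 2119, "GridFTP": 2811}   # Globus default ports

    def probe(host, port, timeout=5.0):
        """Return True if a TCP connection to host:port succeeds."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    for service, port in PROBES.items():
        ok = probe("cluster.example.org", port)
        print(f"{service:8s} {'PASS' if ok else 'FAIL'}")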
25. SCMSWeb Collaborations and Integrations
- Grid Interoperation Now (GIN, OGF): http://forge.gridforum.org/sf/wiki/do/viewPage/projects.gin/wiki/GinOps
  - Worked with PRAGMA Grid, TeraGrid, OSG, NorduGrid and EGEE on GIN testbed monitoring (http://goc.pragma-grid.net/cgi-bin/scmsweb/probe.cgi); added probes to handle various grid service configurations/tests
  - Worked with CERN and implemented an XML -> LDIF translator for the GIN geo map (a toy translator follows this list): http://maps.google.com/maps?q=http://lfield.home.cern.ch/lfield/gin.kml
  - Worked with many grid monitoring software developers on a common schema for cross-grid monitoring: http://wiki.pragma-grid.net/index.php?title=GIN_%28Grid_Inter-operation_Now%29_Monitoring
- Software integration and interoperations
  - Rocks SCE roll
  - MOGAS grid accounting
  - Bandwidth measurements
  - Data federator for grid applications
  - Provide site software information
  - Standardize data extraction and formats
  - Improve data storage with an RDBMS
  - Condor, CSF: provide resource info
  - Interoperate with other monitoring software
  - Ganglia support
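For the geo map, SCMSWeb's XML site data had to become LDIF for the map tooling. A toy translator, assuming a hypothetical site-list schema and hypothetical LDIF attribute names (the real schemas are richer):

    # Parse a small XML site list and emit one LDIF record per site.
    import xml.etree.ElementTree as ET

    XML = """<sites>
      <site name="SDSC" country="USA" lat="32.88" lon="-117.24"/>
      <site name="AIST" country="Japan" lat="36.06" lon="140.13"/>
    </sites>"""

    for site in ET.fromstring(XML):
        print(f"dn: cn={site.get('name')},o=grid")
        print(f"cn: {site.get('name')}")
        print(f"country: {site.get('country')}")
        print(f"location: {site.get('lat')} {site.get('lon')}")
        print()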
26. MOGAS
http://ntu-cg.ntu.edu.sg/pragma/index.jsp
- Multi-Organization Grid Accounting System (MOGAS)
  - Led by Nanyang Technological University, funded by the National Grid Office in Singapore
  - Built on the Globus core (GridFTP, GRAM, GSI)
  - Supports GT2/3/4; SGE, PBS
  - Usage at job/user/cluster/OU/grid levels; job logs, metering and charging tools (a rollup sketch follows this list)
- Developed and tested in PRAGMA Grid
  - Deployed at 14 sites with different GT versions, job schedulers, GRAM scripts and security policies
  - Feedback led to improved, automated deployment procedures
  - Decentralized servers and a better database improve scalability and performance
  - Collaborations and integrations with applications and other middleware teams push the development of an easier database interface and more efficient data collection
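The job/user/cluster/OU/grid levels above are successive rollups of the same job records. A sketch over made-up accounting records (field names are hypothetical):

    # Aggregate CPU hours at several organizational levels.
    from collections import defaultdict

    jobs = [  # hypothetical normalized accounting records
        {"user": "alice", "cluster": "sdsc", "grid": "PRAGMA", "cpu_hours": 12.0},
        {"user": "bob",   "cluster": "ntu",  "grid": "PRAGMA", "cpu_hours": 30.5},
        {"user": "alice", "cluster": "sdsc", "grid": "PRAGMA", "cpu_hours": 7.5},
    ]

    for level in ("user", "cluster", "grid"):
        usage = defaultdict(float)
        for job in jobs:
            usage[job[level]] += job["cpu_hours"]
        print(level, dict(usage))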
27. CSF4
http://goc.pragma-grid.net/wiki/index.php/CSF_server_and_portal
- Community Scheduler Framework, v4: a meta-scheduler (a toy scheduling policy follows this list)
  - Developed by Jilin University, China
  - Grid services hosted in GT4; WSRF compliant; an execution component of Globus Toolkit 4
  - Open source: http://sourceforge.net/projects/gcsf
  - Supports GT2/4; LSF, PBS, SGE, Condor
  - Easy user interface: a portal
- Testing and collaborating in PRAGMA
  - Testing with applications iGAP and AFG (UCSD, AIST, KISTI, ...)
  - Collaborate and integrate with Gfarm on data staging (AIST, Japan)
  - Set up a CSF server and portal (SDSC, USA)
  - Collaborate/integrate with SCMSWeb for resource information (ThaiGrid, Thailand)
  - Leverage resources and a global grid testing environment
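A meta-scheduler's core decision is which site's local scheduler receives a job. A toy least-loaded policy, with hypothetical load figures standing in for the resource information CSF4 can get from monitors such as SCMSWeb:

    # Choose a dispatch target among sites running different local schedulers.
    sites = [
        {"name": "jlu",  "scheduler": "LSF", "free_cpus": 12},
        {"name": "sdsc", "scheduler": "SGE", "free_cpus": 48},
        {"name": "aist", "scheduler": "PBS", "free_cpus": 30},
    ]

    def pick_site(min_cpus):
        """Best-fit by headroom: the eligible site with the most free CPUs."""
        eligible = [s for s in sites if s["free_cpus"] >= min_cpus]
        return max(eligible, key=lambda s: s["free_cpus"], default=None)

    print(pick_site(16))   # -> the "sdsc" entry in this toy data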
28. Computational Science and Engineering Online (CSE-Online)
http://cse-online.net
- Developed by the University of Utah, USA (Thanh N. Truong)
- A desktop tool whose user-friendly interface enables seamless access to remote data, tools and grid computing resources
- Currently supports computational chemistry
- Can be customized for other science domains
- Developed an interface to TeraGrid
  - Collaborated with ThaiGrid as a case study
  - Used for a computational workshop
- Extending grid access to a portal architecture
  - Improved security
- Working on an interface to PRAGMA Grid
  - Heterogeneity
[Figure: application areas - quantum chemistry, nano-materials, drug design]
29. Collaborations with OptIPuter, GLIF and CAMERA
http://www.optiputer.net
- OptIPuter (Optical networking, Internet Protocol, computer storage, processing and visualization technologies)
  - Infrastructure that tightly couples computational resources over parallel optical networks using the IP communication mechanism
  - The central architectural element is optical networking, not computers
  - Enables scientists who are generating terabytes and petabytes of data to interactively visualize, analyze, and correlate their data from multiple storage sites connected to optical networks
- Rocks/SAGE VIS roll (SDSC)
  - Networked Tile Display Walls (TDW)
  - Low cost
  - For research collaboration
  - For remote education and conferencing
  - Deployed at many PRAGMA Grid sites
30. Build a Rocks/SAGE OptIPortal
[Figure: OptIPortal tiled display walls at UZurich, CNIC, NCHC and Osaka U]
31. Global Lambda Integrated Facility (GLIF)
http://www.glif.is
[Figure: GLIF map; visualization courtesy of Bob Patterson, NCSA]
- GLIF links map to many PRAGMA Grid sites
- PRAGMA Grid uses GLIF to solve grid application bandwidth problems
32. Integrate CAMERA and PRAGMA Grid
Microbial metagenomicist user base: over 1,300 registered users from 48 countries
33. Calit2/PRAGMA/CAMERA/LambdaGrid Collaborations
PRAGMA countries with CAMERA registered users
- Add a CAMERA server to the PRAGMA Grid testbed
  - Ad hoc supercomputing
  - Nimrod?
  - New bioinformatics apps
- Set up PRAGMA OptIPortal LambdaGrid for several PRAGMA sites
  - KISTI, Konkuk U (Korea)
  - AIST, Osaka U (Japan)
  - CNIC (China)
  - NCHC (Taiwan)
  - APAC, UMelbourne, Monash U, U Queensland (Australia)
  - CICESE (Mexico)
  - UZurich (Switzerland)
  - Plus other volunteers!
Source: Paul Gilna, Kayo Arima, Calit2
34. Grid Interoperation
35. Grid Interoperation Now (GIN)
http://forge.gridforum.org/sf/wiki/do/viewPage/projects.gin/wiki/GinOps
- Open Grid Forum and GIN
  - GIN-OPS (led by PRAGMA)
- GIN testbed (February 2006, ongoing)
  - One or more clusters from each grid
  - Still part of each production grid
  - Running real science applications
  - Explore interoperation issues
  - Develop solutions
  - Provide insight to standardization efforts
- Application driven
  - TDDFT/Ninf-G (PRAGMA - AIST, Japan)
    - PRAGMA, TeraGrid, OSG, NorduGrid, EGEE
  - Savannah fire simulation (PRAGMA - Monash University, Australia)
    - PRAGMA, TeraGrid, OSG
36. Grid Interoperation Now (GIN)
http://forge.gridforum.org/sf/wiki/do/viewPage/projects.gin/wiki/GinOps
- Software interfaces and integration
  - Ninf-G (AIST/PRAGMA) - NorduGrid
  - Nimrod/G (MU-PRIME/PRAGMA) - Unicore
  - SCMSWeb (ThaiGrid/PRAGMA) - Condor (UWisc/OSG)
  - SCMSWeb (ThaiGrid/PRAGMA) - BDII (CERN)
  - VDT (OSG) and Rocks (SDSC/PRAGMA) integration
- Multi-grid monitoring
  - Led by ThaiGrid/PRAGMA
  - SCMSWeb probe matrix (PRAGMA - ThaiGrid, Thailand)
  - Common schema (http://goc.pragma-grid.net/wiki/index.php?title=GIN_%28Grid_Inter-operation_Now%29_Monitoring)
    - PRAGMA: SCMSWeb, MOGAS
    - TeraGrid: Globus GT4.0.1, Ganglia, Nagios
    - EGEE: MonALISA
    - NorduGrid/ARC: NorduGrid/MDS2, NorduGrid Grid Monitor
38. Peer-Grid Interoperation Experiments
http://goc.pragma-grid.net/wiki/index.php/Main_Page#Grid_Inter-operations
- Different from the GIN testbed
  - More resources and support from each grid
  - Either uni-directional or bi-directional application runs
  - Long runs to achieve scientific results
- OSG <-> PRAGMA (January 2007, ongoing)
- How
  - Each grid identifies management, application drivers, and resource supporters
  - All participants document application requirements, meetings, issues, solutions, status, and results at wiki.pragma-grid.net
- Resources
  - OSG: FermilabGrid; will add UWisc
  - PRAGMA Grid: any sites the application drivers choose to use
- Applications
  - OSG: GISolve, spatial interpolation (UIowa, USA)
  - PRAGMA
    - FMO/Ninf-G, quantum chemistry (AIST, Japan) - completed
    - Structural biology (MU, Australia) - starting soon
39. OSG-PRAGMA Grid Interoperation Experiments
http://goc.pragma-grid.net/wiki/index.php/Main_Page#Grid_Inter-operations
- More resources and support from each grid, but no special arrangements
- Application long run
  - GridFMO/Ninf-G, large-scale quantum chemistry (Tsutomu Ikegami, AIST, Japan)
  - 240 CPUs from OSG and PRAGMA Grid; 10 days x 7 calculations
  - Fault tolerance enabled the long run
  - Meaningful and usable scientific results
40. PRAGMA Summit
- Start of a series of three workshops
  - First: March/April 2008
  - Organized by UZurich
  - Swiss Grid, European Federated Grid (Euro-Grid), and PRAGMA
- Goals
  - Inform and learn from each other
  - Seek ways to collaborate
- Collaboration work started in summer 2007
  - Nimrod interface to UNICORE
41. Lessons Learned from Grid Interoperation
http://forge.gridforum.org/sf/wiki/do/viewPage/projects.gin/wiki/GinOps
- Grid interoperation makes large-scale calculations possible
- Differences among grids provide learning, collaboration and integration opportunities
  - IGTF, VOMS (GIN)
  - Common Software Area (TeraGrid)
  - Ninf-G - NorduGrid
  - Nimrod/G - Unicore
  - SCMSWeb - Condor
  - SCMSWeb - BDII
  - SCMSWeb probe matrix for GIN testbed monitoring
  - Common schema among many grid monitoring systems
  - VDT (OSG) and Rocks (SDSC/PRAGMA) integration
- Differences in grid environments are a source of difficulties for users and applications
  - Different user access setup procedures - take extra effort
  - Different job submission protocols (a bridging sketch follows this list)
    - GRAM, sandbox, GridFTP, modified GRAM, ...
  - One-to-one interfaces - are they scalable? Possible standards?
- Middleware fault tolerance and flexible resource management are important
  - Cope with unfamiliar fault conditions, lack of parallel computation support, ...
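One pragmatic way to live with per-grid submission protocols is a thin adapter per grid behind a common interface. A sketch; the command names are indicative of real tools (globus-job-submit for GT2-style GRAM, ngsub for NorduGrid ARC) but exact flags vary by deployment, and the host and executable are hypothetical.

    # Map a grid name to its submission command template, then submit
    # through one call site.
    SUBMIT_COMMANDS = {
        "PRAGMA":    ["globus-job-submit", "{host}", "{executable}"],
        "NorduGrid": ["ngsub", "-c", "{host}", "{executable}"],
    }

    def submit(grid, host, executable):
        cmd = [part.format(host=host, executable=executable)
               for part in SUBMIT_COMMANDS[grid]]
        print("would run:", " ".join(cmd))   # subprocess.run(cmd) for real

    submit("PRAGMA", "cluster.example.org", "/bin/hostname")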
42. Collaborate in Publishing Research Results
- Some published papers in 2007
  - Amaro RE, Minh DDL, Cheng LS, Lindstrom WM Jr, Olson AJ, Lin JH, Li WW, and McCammon JA. "Remarkable Loop Flexibility in Avian Influenza N1 and Its Implications for Antiviral Drug Design." J. Am. Chem. Soc. 2007, 129, 7764-7765. (PRIME)
  - Choi Y, Jung S, Kim D, Lee J, Jeong K, Lim SB, Heo D, Hwang S, and Byeon OH. "Glyco-MGrid: A Collaborative Molecular Simulation Grid for e-Glycomics." In 3rd IEEE International Conference on e-Science and Grid Computing, Bangalore, India, 2007. Accepted.
  - Ding Z, Wei W, Luo Y, Ma D, Arzberger PW, and Li WW. "Customized Plug-in Modules in Metascheduler CSF4 for Life Sciences Applications." New Generation Computing, in press, 2007.
  - Ding Z, Wei S, Ma D, and Li WW. "VJM: A Deadlock-Free Resource Co-allocation Model for Cross Domain Parallel Jobs." In HPC Asia 2007, Seoul, Korea, 2007, in press.
  - Görgen K, Lynch H, Abramson D, Beringer J, and Uotila P. "Savanna fires increase monsoon rainfall as simulated using a distributed computing environment." To appear, Geophysical Research Letters.
  - Ichikawa K, Date S, Krishnan S, Li W, Nakata K, Yonezawa Y, Nakamura H, and Shimojo S. "Opal OP: An extensible Grid-enabling wrapping approach for legacy applications." GCA2007, Proceedings of the 3rd Workshop on Grid Computing Applications, pp. 117-127, Singapore, June 2007. (PRIUS)
  - Ichikawa K, Date S, and Shimojo S. "A Framework for Meta-Scheduling WSRF-Based Services." Proceedings of the 2007 IEEE Pacific Rim Conference on Communications, Computers and Signal Processing (PACRIM 2007), Victoria, Canada, pp. 481-484, Aug. 2007. (PRIUS)
  - Kuwabara S, Ichikawa K, Date S, and Shimojo S. "A Built-in Application Control Module for SAGE." Proceedings of the 2007 IEEE Pacific Rim Conference on Communications, Computers and Signal Processing (PACRIM 2007), Victoria, Canada, pp. 117-120, Aug. 2007. (PRIUS)
  - Takeda S, Date S, Zhang J, Lee BU, and Shimojo S. "Security Monitoring Extension for MOGAS." GCA2007, Proceedings of the 3rd Workshop on Grid Computing Applications, pp. 128-137, Singapore, June 2007. (PRIUS)
  - Tilak S, Hubbard P, Miller M, and Fountain T. "The Ring Buffer Network Bus (RBNB) DataTurbine Streaming Data Middleware for Environmental Observing Systems." To appear in the Proceedings of e-Science 2007.
  - Zheng C, Katz M, Papadopoulos P, Abramson D, Ayyub S, Enticott C, Garic S, Goscinski W, Arzberger P, Lee BS, Phatanapherom S, Sriprayoonsakul S, Uthayopas P, Tanaka Y, Tanimura Y, and Tatebe O. "Lessons Learned Through Driving Science Applications in the PRAGMA Grid." Int. J. Web and Grid Services, Vol. 3, No. 3, pp. 287-312, 2007.
43. Summary
- PRAGMA Grid
  - A shared vision lowers resistance to using others' software and testing on others' resources
  - Formed new development collaborations
  - Its size and heterogeneity let us explore issues which any functional grid must resolve
    - Management, resource and software coordination
    - Identity and fault management
    - Scalability and performance
  - Feedback between application and middleware teams helps improve software and promotes software integration
- A heterogeneous global grid
  - Is realistic and challenging
  - Can be good for middleware development and testing
  - Can be useful for real science
- Impact
  - Software dissemination (Rocks, Ninf-G, Nimrod, SCMSWeb, Naregi-CA, ...)
  - Helped new national/regional grids (Chile, Vietnam, Hong Kong, ...)
- The key is people; the key is collaboration
44. How Can I Participate?
- Get involved now
  - PRAGMA or similar collaborative communities
- Costs a little, benefits a lot
  - Being part of a larger grid effort
  - Learning from doing
  - Building collaborations
  - Developing bigger/better ideas and projects
  - Pushing the use of networks and other infrastructure
45. A Grass-Roots Effort
- "One of the most important lessons of the Internet is that it grows most successfully where grass roots initiatives are encouraged and enabled. The Internet has historically grown from the bottom up, and this aspect continues to fuel its continued growth in the academic and commercial sectors."
- Vint Cerf, UN Economic and Social Council, 2000
46. Acknowledgments
- PRAGMA is supported by the National Science Foundation (Grant Nos. INT-0216895, INT-0314015, OCI-0627026) and by member institutions
- PRIME is supported by the National Science Foundation under NSF INT 04007508
- PRAGMA Grid is the result of contributions and support from all PRAGMA Grid team members
Thank you!
http://www.pragma-grid.net
http://goc.pragma-grid.net
http://wiki.pragma-grid.net
47. PRAGMA
A practical collaborative framework: people and applications
Overarching goals: strengthen existing and establish new collaborations; work with science teams to advance grid technologies and improve the underlying infrastructure, in the Pacific Rim and globally.
http://www.pragma-grid.net
48. PRAGMA Member Institutions
CRAY PNWG USA
JLU China
KBSI KISTI Konkuk Korea
CNIC China
AIST CCS CMC NARC OsakaU TITech Japan
UUtah USA
UoHyd India
CalIT2 CRBS SDSC UCSD USA
APAN Japan
NCSA StarLight TransPAC2 USA
ASGCC NCHC Taiwan
CICESE Mexico
KU NECTEC TNGC Thailand
APAC Australia
BII IHPC NGO Singapore
BeSTGRID New Zealand
MIMOS USM Malaysia
MU Australia
33 institutions from 12 countries/regions
Founded 2002; supported by members
http://www.pragma-grid.net
49. Overview and Approach
Process to promote routine use: team science
Application-driven collaborations: applications + middleware
Outcomes: improved middleware, broader use, new collaborations, technology transfer, standards, publications, new knowledge, data access, education
50. PRAGMA Working Groups
- Bioscience
- Telescience
- Geo-science
- Resources and data
- Grid middleware interoperability
- Global grid usability and productivity
- The PRAGMA Grid effort is led by the Resources and Data working group, but relies on collaborations and contributions among all working groups.
51. Lessons Learned from Running Applications
- PRAGMA Grid and its heterogeneous environment are great for
  - Testing
  - Collaborating
  - Integrating
  - Sharing
- Not easy
  - Middleware needs improvements
    - Working in a heterogeneous environment
    - Fault tolerance
  - Need user-friendly portals and services
- Automate and integrate
  - Information collection (grid monitoring, workflow)
  - Decisions and executions (scheduling)
  - Domain-specific, easy user interfaces (portals, CE tools)
52. MM5-WRF/MPICH-GX Experiment
[Figure: Hurricane Marty and Santa Ana winds simulations run with MPICH-GX (private IP and fault tolerance support), spanning KGrid, SDSC (USA), and the eolo and pluto clusters at CICESE, Ensenada, México.]
53. "PRAGMA is a great model and needs to be emulated. It has helped weaken barriers between different research groups across different continents and allowed people to trust and collaborate rather than compete."
- Arun Agarwal, UoHyd, India
54. PRAGMA Gfarm Datagrid
http://goc.pragma-grid.net/pragma-doc/datagrid.html
- Compute cluster