Cluster Computing, Volume 10, Issue 3, pp 351–364

NEKTAR, SPICE and Vortonics: using federated grids for large scale scientific applications

  • Bruce Boghosian
  • Peter Coveney
  • Suchuan Dong
  • Lucas Finn
  • Shantenu Jha
  • George Karniadakis
  • Nicholas Karonis
Original Paper

Abstract

In response to a joint call from the US NSF and the UK EPSRC for applications that aim to utilize the combined computational resources of both countries, three computational science groups from UCL, Tufts and Brown Universities teamed up with a middleware team from NIU/Argonne to meet the challenge. Although the groups had three distinct codes and aims, their projects shared a common feature: each comprised a large-scale distributed application that required high-end networking and advanced middleware to be deployed effectively. Cross-site runs, for example, proved to be a very effective strategy for overcoming the limitations of a single resource (see the sketch after this abstract).

The seamless federation of a grid-of-grids remains difficult. Even if interoperability existed at the middleware and software-stack levels, it would not guarantee that the federated grids could be utilized for large-scale distributed applications. There are important additional requirements, for example compatible and consistent usage policies, automated advanced reservations and, most important of all, co-scheduling. This paper outlines the scientific motivation and describes why distributed resources are critical for all three projects. It documents the challenges encountered in using a grid-of-grids and some of the solutions devised in response.
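As an illustration of what a cross-site run looks like from the application's point of view, the minimal C/MPI sketch below (illustrative only, not code from the paper) simply reports where each rank is running. Under a grid-enabled MPI implementation such as MPICH-G2, the ranks started at the participating sites all appear in a single MPI_COMM_WORLD, so the application itself needs no site-specific logic.

    /* cross_site_hello.c - minimal sketch, not taken from the paper.
     * When launched across two sites with a grid-enabled MPI such as
     * MPICH-G2, all ranks share one MPI_COMM_WORLD and each simply
     * reports the machine it landed on. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, size, len;
        char host[MPI_MAX_PROCESSOR_NAME];

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);
        MPI_Get_processor_name(host, &len);

        /* In a federated run the reported hosts span both grids. */
        printf("rank %d of %d on %s\n", rank, size, host);

        MPI_Finalize();
        return 0;
    }

Because wide-area latency between sites is orders of magnitude larger than intra-machine latency, topology-aware communication and careful domain decomposition are essential for applications of this kind.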

Keywords

Distributed supercomputers · Federated grids · Interoperability · Optical lightpaths · MPICH-G2 · Co-scheduling

Copyright information

© Springer Science+Business Media, LLC 2007

Authors and Affiliations

  • Bruce Boghosian (1)
  • Peter Coveney (2)
  • Suchuan Dong (3)
  • Lucas Finn (1)
  • Shantenu Jha (2)
  • George Karniadakis (3)
  • Nicholas Karonis (4, 5)

  1. Department of Mathematics, Tufts University, Medford, USA
  2. Centre for Computational Science, UCL, London, UK
  3. Division of Applied Mathematics, Brown University, Providence, USA
  4. Department of Computer Science, Northern Illinois University, DeKalb, USA
  5. Argonne National Laboratory, Argonne, USA
