Journal of Grid Computing, Volume 11, Issue 2, pp 265–280

HPC on the Grid: The Theophys Experience

  • Roberto Alfieri
  • Silvia Arezzini
  • Alberto Ciampa
  • Roberto De Pietri
  • Enrico Mazzoni

Abstract

The Grid Virtual Organization (VO) “Theophys”, associated with the INFN (Istituto Nazionale di Fisica Nucleare), is a theoretical physics community with varied computational demands, ranging from serial to SMP, MPI and hybrid jobs. Over the past 20 years this has led to the use of the Grid infrastructure for serial jobs, while multi-threaded, MPI and hybrid jobs have been executed on several small to medium size clusters installed at different sites and accessed through standard local submission methods. This work analyzes the support for parallel jobs in the scientific Grid middlewares and then describes how the community unified the management of most of its computational needs (serial and parallel) on the Grid, through the development of a specific project that integrates serial and parallel resources in a common Grid-based framework. A centralized national cluster is deployed inside this framework, providing “Wholenodes” reservations, CPU affinity, and other new features supporting our High Performance Computing (HPC) applications in the Grid environment. Examples of the cluster performance for relevant parallel applications in theoretical physics are reported, focusing on the different kinds of parallel jobs that can be served by the new features introduced in the Grid.
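
The hybrid jobs mentioned above combine MPI processes with OpenMP threads, and the “Wholenodes” reservation and CPU affinity features exist to pin such jobs cleanly onto reserved cores. As a purely illustrative sketch (written for this summary, not taken from the paper), the following minimal hybrid MPI+OpenMP program in C has each MPI rank spawn OpenMP threads and print the core every thread runs on, a simple way to check that affinity is applied as intended on a whole-node reservation.

    #define _GNU_SOURCE          /* needed for sched_getcpu() on glibc */
    #include <mpi.h>
    #include <omp.h>
    #include <sched.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int provided, rank, nranks, namelen;
        char host[MPI_MAX_PROCESSOR_NAME];

        /* MPI_THREAD_FUNNELED is enough: only the master thread calls MPI. */
        MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nranks);
        MPI_Get_processor_name(host, &namelen);

        #pragma omp parallel
        {
            /* Each OpenMP thread of each MPI rank reports the core it runs on. */
            printf("host %s  rank %d/%d  thread %d/%d  core %d\n",
                   host, rank, nranks,
                   omp_get_thread_num(), omp_get_num_threads(),
                   sched_getcpu());
        }

        MPI_Finalize();
        return 0;
    }

A typical build is mpicc -fopenmp with an mpirun/mpiexec launch across the reserved nodes; the exact launcher, batch-system directives and pinning flags are site- and MPI-flavour-dependent and are therefore left unspecified here.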

Keywords

HPC, Grid, Theoretical physics

Copyright information

© Springer Science+Business Media B.V. 2012

Authors and Affiliations

  • Roberto Alfieri (1)
  • Silvia Arezzini (2)
  • Alberto Ciampa (2)
  • Roberto De Pietri (1)
  • Enrico Mazzoni (2)

  1. INFN Parma and Parma University, Parma, Italy
  2. INFN Pisa, Pisa, Italy
