PMI: A Scalable Parallel Process-Management Interface for Extreme-Scale Systems

  • Pavan Balaji
  • Darius Buntinas
  • David Goodell
  • William Gropp
  • Jayesh Krishna
  • Ewing Lusk
  • Rajeev Thakur
Part of the Lecture Notes in Computer Science book series (LNCS, volume 6305)

Abstract

Parallel programming models on large-scale systems require a scalable system for managing the processes that make up the execution of a parallel program. The process-management system must be able to launch millions of processes quickly when starting a parallel program and must provide mechanisms for the processes to exchange the information needed to enable them to communicate with each other. MPICH2 and its derivatives achieve this functionality through a carefully defined interface, called PMI, that allows different process managers to interact with the MPI library in a standardized way. In this paper, we describe the features and capabilities of PMI. We describe both PMI-1, the current generation of PMI used in MPICH2 and all its derivatives, and PMI-2, the second-generation interface that eliminates various shortcomings of PMI-1. Along with the interface itself, we describe a reference implementation of both PMI-1 and PMI-2 in a new process-management framework within MPICH2, called Hydra, and compare their performance in running MPI jobs with thousands of processes.
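
As a concrete illustration of the information exchange the abstract refers to, below is a minimal C sketch of the put/commit/barrier/get pattern an MPI library can use over the PMI-1 key-value space to wire up processes. The function names follow MPICH's pmi.h; the "addr" key and the address string are hypothetical placeholders, and return-code checking is omitted for brevity.

    /* Minimal sketch (not the paper's code): publish and retrieve
     * per-process connection info through the PMI-1 key-value space.
     * Assumes the process manager's pmi.h and PMI library are available;
     * the "addr" key and its value are illustrative placeholders. */
    #include <stdio.h>
    #include <pmi.h>

    int main(void)
    {
        int spawned, rank, size;
        char kvsname[256], key[64], value[256];

        PMI_Init(&spawned);                 /* attach to the process manager */
        PMI_Get_rank(&rank);                /* this process's rank in the job */
        PMI_Get_size(&size);                /* total number of processes */
        PMI_KVS_Get_my_name(kvsname, sizeof kvsname);  /* job's key-value space */

        /* Publish this process's (hypothetical) connection address. */
        snprintf(key, sizeof key, "addr-%d", rank);
        snprintf(value, sizeof value, "host:port-for-rank-%d", rank);
        PMI_KVS_Put(kvsname, key, value);
        PMI_KVS_Commit(kvsname);            /* make local puts visible */
        PMI_Barrier();                      /* wait until everyone has published */

        /* Retrieve a peer's address, e.g. the next rank's. */
        snprintf(key, sizeof key, "addr-%d", (rank + 1) % size);
        PMI_KVS_Get(kvsname, key, value, sizeof value);
        printf("rank %d sees peer info: %s\n", rank, value);

        PMI_Finalize();
        return 0;
    }

How this sketch is compiled and linked (for example, against libpmi supplied by the process manager) depends on the installation and is left as an assumption here.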

Copyright information

© Springer-Verlag Berlin Heidelberg 2010

Authors and Affiliations

  • Pavan Balaji (1)
  • Darius Buntinas (1)
  • David Goodell (1)
  • William Gropp (2)
  • Jayesh Krishna (1)
  • Ewing Lusk (1)
  • Rajeev Thakur (1)
  1. Argonne National Laboratory, Argonne, USA
  2. University of Illinois, Urbana, USA