HyMPI – A MPI Implementation for Heterogeneous High Performance Systems

  • Francisco Isidro Massetto
  • Augusto Mendes Gomes Junior
  • Liria Matsumoto Sato
Part of the Lecture Notes in Computer Science book series (LNCS, volume 3947)


This paper presents HyMPI, a runtime system that integrates several MPI implementations and is used to develop heterogeneous high-performance applications. A single system image can thus be composed of single-processor and multiprocessor nodes running different operating systems and MPI implementations, as well as heterogeneous clusters acting as nodes of the system. HyMPI supports blocking and non-blocking point-to-point communication and collective communication primitives, in order to widen the range of high-performance applications it can serve and to remain compatible with the MPI standard.
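Because HyMPI keeps compatibility with the MPI standard, an application written against standard MPI primitives should, in principle, run on the integrated system without source changes. The sketch below is illustrative only and makes no assumptions about HyMPI's internals or API; it simply exercises, in plain MPI C, the three kinds of primitives the abstract mentions (blocking point-to-point, non-blocking point-to-point, and a collective):

```c
/* Minimal sketch of a standard MPI program of the kind HyMPI aims to run
 * unchanged across heterogeneous nodes; illustrative only, not HyMPI's API. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int token = rank;

    if (rank != 0) {
        /* Non-blocking point-to-point: each worker sends its id to the master. */
        MPI_Request req;
        MPI_Isend(&token, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &req);
        MPI_Wait(&req, MPI_STATUS_IGNORE);
    } else {
        for (int i = 1; i < size; i++) {
            int recv_token;
            /* Blocking receive on the master process. */
            MPI_Recv(&recv_token, 1, MPI_INT, i, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            printf("master received token from rank %d\n", recv_token);
        }
    }

    /* Collective primitive: sum of all ranks, delivered to every process. */
    int sum = 0;
    MPI_Allreduce(&token, &sum, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD);
    if (rank == 0) printf("sum of ranks = %d\n", sum);

    MPI_Finalize();
    return 0;
}
```

In the setting the abstract describes, each node could build this same source against its local MPI implementation, with HyMPI bridging the messages between them.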


Keywords: Message Passing Interface · Heterogeneous Cluster · Master Process · Collective Communication · Globus Toolkit




Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • Francisco Isidro Massetto (1)
  • Augusto Mendes Gomes Junior (1)
  • Liria Matsumoto Sato (1)

  1. Polytechnic School, University of São Paulo, São Paulo, Brazil
