
A novel strategy for building interoperable MPI environment in heterogeneous high performance systems

Abstract

Breakthrough advances in microprocessor technology and efficient power management have altered the course of processor development with the emergence of multi-core technology, bringing higher levels of processing power. Many-core technology has boosted the computing power provided by clusters of workstations or SMPs, delivering large computational power at an affordable cost using solely commodity components. Such cluster and multi-cluster computing systems run different implementations of message-passing libraries and system software (including operating systems). To guarantee correct execution of a message-passing parallel application in a computing environment other than the one for which it was originally developed, a review of the application code is needed. In this paper, a hybrid communication interfacing strategy is proposed to execute a parallel application across a group of computing nodes belonging to different clusters or multi-clusters (computing systems that may be running different operating systems and MPI implementations), interconnected with public or private IP addresses, and responding interchangeably to user execution requests. Experimental results obtained by executing benchmark parallel applications demonstrate the feasibility and effectiveness of the proposed strategy.
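The keywords below ("Local Root", "Master Process") suggest a routing scheme in which each cluster designates a local root that relays inter-cluster traffic through a central master, while intra-cluster messages stay inside the native MPI implementation. The following is a minimal, hypothetical sketch of that routing pattern only; the class and method names (`Master`, `LocalRoot`, `route`, `deliver`) are illustrative assumptions, not the paper's actual implementation, and real inter-cluster transport would use sockets across public/private IP boundaries rather than in-process calls.

```python
# Hypothetical simulation of master/local-root message routing between
# clusters. All names are illustrative; this is not the paper's API.

class Master:
    """Central master: knows every cluster's local root and forwards
    inter-cluster messages to the destination cluster's root."""
    def __init__(self):
        self.roots = {}

    def register(self, cluster, root):
        self.roots[cluster] = root

    def route(self, dst_cluster, msg):
        # Deliver to the remote cluster via its local root.
        self.roots[dst_cluster].deliver(msg)


class LocalRoot:
    """Per-cluster gateway: local messages are delivered directly
    (standing in for the cluster's native MPI), remote ones are
    relayed through the master."""
    def __init__(self, cluster, master):
        self.cluster = cluster
        self.master = master
        self.inbox = {}  # rank -> list of received payloads
        master.register(cluster, self)

    def send(self, dst_cluster, dst_rank, payload):
        if dst_cluster == self.cluster:
            self.deliver((dst_rank, payload))          # intra-cluster
        else:
            self.master.route(dst_cluster, (dst_rank, payload))  # inter-cluster

    def deliver(self, msg):
        rank, payload = msg
        self.inbox.setdefault(rank, []).append(payload)


master = Master()
a = LocalRoot("A", master)
b = LocalRoot("B", master)
a.send("B", 0, "hello")   # crosses cluster boundary via the master
a.send("A", 1, "local")   # stays inside cluster A
print(b.inbox[0])  # → ['hello']
```

The point of the pattern is that ordinary processes never need to know whether a peer is reachable directly: the local root decides, per message, between the native MPI path and the master-mediated path.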



Author information

Correspondence to Kuan-Ching Li.


About this article

Cite this article

Massetto, F.I., Sato, L.M. & Li, K. A novel strategy for building interoperable MPI environment in heterogeneous high performance systems. J Supercomput 60, 87–116 (2012). https://doi.org/10.1007/s11227-009-0272-y

Keywords

  • Transmission Time
  • Message Passing Interface
  • Parallel Application
  • Local Root
  • Master Process