
MPI_Connect managing heterogeneous MPI applications interoperation and process control

Extended abstract
  • Graham E. Fagg
  • Kevin S. London
  • Jack J. Dongarra
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 1497)

Abstract

Presently, different vendors' MPI implementations cannot interoperate directly with each other. As a result, running a distributed computation across different vendors' machines requires the use of a single portable MPI implementation, such as MPICH, on all of them. This solution may be sub-optimal since it cannot utilize the vendors' own optimized MPI implementations. MPI_Connect, a software package currently under development at the University of Tennessee, provides the needed interoperability between different vendors' optimized MPI implementations. The project grew out of the PVMPI project, which used PVM to provide inter-platform communication and process control, and was upgraded to use the new MetaComputing SNIPE system, which has proven more flexible and less restrictive than PVM when operating on certain MPPs. MPI_Connect provides two distinct programming models to its users. The first is a single MPI_COMM_WORLD model, similar to that provided by the contemporary PACX project, in which inter-communication is completely transparent to MPI applications, thus requiring no source-level modification of applications. The second is that of uniquely identified process groups that inter-communicate via MPI point-to-point calls. Both systems use the MPI profiling interface to maintain portability between MPI implementations. A unique feature of this system is its ability to allow MPI-2 dynamic process control and interoperation between MPI implementations. Currently supported implementations include MPICH, LAM 6, IBM MPIF and SGI MPI.
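To make the profiling-interface mechanism concrete, the sketch below (in C, under stated assumptions) shows how an interposition layer of this kind can redefine MPI_Send so that purely local messages still use the vendor's optimized PMPI_Send entry point, while messages addressed to processes running under a different MPI implementation are diverted to an external transport such as SNIPE. The conn_* helpers are hypothetical placeholders introduced for illustration, not MPI_Connect's published API.

    /*
     * Hedged sketch only -- not the authors' actual source.  It illustrates the
     * mechanism named in the abstract: interposing on MPI calls via the standard
     * MPI profiling interface, so that local traffic still goes through the
     * vendor's optimized library (the PMPI_ entry points) while traffic addressed
     * to processes under a different MPI implementation is diverted to an
     * external transport (SNIPE in the real system).
     */
    #include <mpi.h>

    /* Placeholder: would consult the interoperability layer's global process map. */
    static int conn_rank_is_remote(int rank, MPI_Comm comm)
    {
        (void)rank; (void)comm;
        return 0;                      /* stub: treat every rank as local */
    }

    /* Placeholder: would hand the message to the SNIPE-based relay. */
    static int conn_send_remote(void *buf, int count, MPI_Datatype type,
                                int dest, int tag, MPI_Comm comm)
    {
        (void)buf; (void)count; (void)type; (void)dest; (void)tag; (void)comm;
        return MPI_ERR_OTHER;          /* stub: no external transport here */
    }

    /*
     * Intercepted MPI_Send (MPI-1 binding).  Applications link against this
     * wrapper unchanged; it forwards local sends to the vendor's PMPI_Send and
     * remote sends to the inter-implementation transport.
     */
    int MPI_Send(void *buf, int count, MPI_Datatype datatype,
                 int dest, int tag, MPI_Comm comm)
    {
        if (conn_rank_is_remote(dest, comm))
            return conn_send_remote(buf, count, datatype, dest, tag, comm);
        return PMPI_Send(buf, count, datatype, dest, tag, comm);
    }

Because only standard PMPI_ symbols from the underlying library are referenced, a wrapper of this form can be rebuilt unchanged against MPICH, LAM 6, IBM or SGI MPI, which is the portability property the abstract attributes to the profiling interface.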

Keywords

Message Passing Interface · Message Passing Interface Process · Message Passing Interface Implementation · Message Passing Interface Application · Message Passing Interface Communicator
These keywords were added by machine and not by the authors. This process is experimental and the keywords may be updated as the learning algorithm improves.


References

  1. A. L. Beguelin, J. J. Dongarra, A. Geist, R. J. Manchek, and V. S. Sunderam. Heterogeneous Network Computing. Sixth SIAM Conference on Parallel Processing, 1993.
  2. Thomas Beisel. "Ein effizientes Message-Passing-Interface (MPI) fuer HiPPI". Diploma thesis, University of Stuttgart, 1996.
  3. Greg Burns, Raja Daoud, and James Vaigl. LAM: An Open Cluster Environment for MPI. Technical report, Ohio Supercomputer Center, Columbus, Ohio, 1994.
  4. Fei-Chen Cheng. Unifying the MPI and PVM 3 Systems. Technical report, Department of Computer Science, Mississippi State University, May 1994.
  5. Nathan Doss, William Gropp, Ewing Lusk, and Anthony Skjellum. A Model Implementation of MPI. Technical Report MCS-P393-1193, Mathematics and Computer Science Division, Argonne National Laboratory, Argonne, IL 60439, 1993.
  6. Graham E. Fagg, Jack J. Dongarra, and Al Geist. PVMPI Provides Interoperability Between MPI Implementations. Proceedings of the Eighth SIAM Conference on Parallel Processing, March 1997.
  7. Graham E. Fagg, Keith Moore, Jack Dongarra, and Al Geist. Scalable Networked Information Processing Environment (SNIPE). Proceedings of SuperComputing '97, San Jose, November 1997.
  8. Message Passing Interface Forum. MPI: A Message-Passing Interface Standard. International Journal of Supercomputer Applications, 8(3/4), 1994. Special issue on MPI.
  9. Alexander Reinefeld, Jörn Gehring, and Matthias Brune. Communicating Across Parallel Message-Passing Environments. Journal of Systems Architecture, Special Issue on Cluster Computing, 1997.

Copyright information

© Springer-Verlag Berlin Heidelberg 1998

Authors and Affiliations

  • Graham E. Fagg (1)
  • Kevin S. London (1)
  • Jack J. Dongarra (1, 2)
  1. Department of Computer Science, University of Tennessee, Knoxville
  2. Mathematical Sciences Section, Oak Ridge National Laboratory, Oak Ridge
