
Learning from the Success of MPI

  • William D. Gropp
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 2228)

Abstract

The Message Passing Interface (MPI) has been extremely successful as a portable way to program high-performance parallel computers. This success has come in spite of the view of many that message passing is difficult and that other approaches, including automatic parallelization and directive-based parallelism, are easier to use. This paper argues that MPI has succeeded because it addresses all of the important issues in providing a parallel programming model.
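
To make the programming model under discussion concrete, the following is a minimal C sketch (illustrative only, not taken from the paper) of the two-sided message passing that MPI standardizes: process 0 sends an integer to process 1 through the MPI_COMM_WORLD communicator.

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char *argv[])
    {
        int rank, size;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this process's id   */
        MPI_Comm_size(MPI_COMM_WORLD, &size);  /* number of processes */

        if (rank == 0 && size >= 2) {
            int value = 42;  /* hypothetical payload for illustration */
            /* Explicit send: buffer, count, datatype, destination rank,
               tag, and communicator are all part of the portable API. */
            MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            int value;
            MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            printf("rank 1 received %d from rank 0\n", value);
        }

        MPI_Finalize();
        return 0;
    }

Built with mpicc and launched with mpiexec -n 2, the program prints the received value on rank 1; the same source runs unchanged on any conforming MPI implementation, which is the portability the abstract refers to.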

Keywords

Programming Model · Message Passing Interface · Parallel Programming Model · Parallel Virtual Machine · High Performance Computing Application

Copyright information

© Springer-Verlag Berlin Heidelberg 2001

Authors and Affiliations

  • William D. Gropp
  1. Mathematics and Computer Science Division, Argonne National Laboratory, Argonne
