Learning from the Success of MPI

  • Conference paper
  • Conference: High Performance Computing — HiPC 2001 (HiPC 2001)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 2228)

Abstract

The Message Passing Interface (MPI) has been extremely successful as a portable way to program high-performance parallel computers. This success has occurred in spite of the view of many that message passing is difficult and that other approaches, including automatic parallelization and directive-based parallelism, are easier to use. This paper argues that MPI has succeeded because it addresses all of the important issues in providing a parallel programming model.
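For readers who have not seen the programming model the abstract refers to, the following is a minimal sketch of MPI point-to-point message passing in C. It is illustrative only and not taken from the paper; the value sent and the message tag are arbitrary.

    /* Minimal MPI example: rank 0 sends an integer to rank 1. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        int rank, size, value = 42;
        MPI_Init(&argc, &argv);                  /* start the MPI runtime */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);    /* this process's rank */
        MPI_Comm_size(MPI_COMM_WORLD, &size);    /* total number of processes */
        if (size >= 2) {
            if (rank == 0) {
                /* blocking send: buffer, count, type, destination, tag, communicator */
                MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
            } else if (rank == 1) {
                MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
                printf("rank 1 received %d\n", value);
            }
        }
        MPI_Finalize();                          /* shut down MPI */
        return 0;
    }

Compiled with an MPI wrapper compiler (for example, mpicc) and launched with mpiexec -n 2, this exercises the portable send/receive interface whose design the paper examines.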

Copyright information

© 2001 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Gropp, W.D. (2001). Learning from the Success of MPI. In: Monien, B., Prasanna, V.K., Vajapeyam, S. (eds) High Performance Computing — HiPC 2001. HiPC 2001. Lecture Notes in Computer Science, vol 2228. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-45307-5_8

  • DOI: https://doi.org/10.1007/3-540-45307-5_8

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-43009-4

  • Online ISBN: 978-3-540-45307-9

