Abstract
The Message Passing Interface (MPI) has been extremely successful as a portable way to program high-performance parallel computers. This success has occurred in spite of the view of many that message passing is difficult and that other approaches, including automatic parallelization and directive-based parallelism, are easier to use. This paper argues that MPI has succeeded because it addresses all of the important issues in providing a parallel programming model.
Copyright information
© 2001 Springer-Verlag Berlin Heidelberg
Cite this paper
Gropp, W.D. (2001). Learning from the Success of MPI. In: Monien, B., Prasanna, V.K., Vajapeyam, S. (eds) High Performance Computing — HiPC 2001. HiPC 2001. Lecture Notes in Computer Science, vol 2228. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-45307-5_8
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-43009-4
Online ISBN: 978-3-540-45307-9