Open MPI: A Flexible High Performance MPI

  • Richard L. Graham
  • Timothy S. Woodall
  • Jeffrey M. Squyres
Part of the Lecture Notes in Computer Science book series (LNCS, volume 3911)


A large number of MPI implementations are currently available, each of which emphasizes different aspects of high-performance computing or is intended to solve a specific research problem. The result is a myriad of incompatible MPI implementations, all of which require separate installation and which, in combination, present significant logistical challenges for end users. Building on prior research, and influenced by experience gained from the code bases of the LAM/MPI, LA-MPI, FT-MPI, and PACX-MPI projects, Open MPI is an all-new, production-quality MPI-2 implementation that is fundamentally centered around component concepts. Open MPI provides a unique combination of novel features previously unavailable in an open-source, production-quality implementation of MPI. Its component architecture provides a stable platform for third-party research and enables the run-time composition of independent software add-ons. This paper presents a high-level overview of the goals, design, and implementation of Open MPI, as well as performance results for its point-to-point implementation.
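The run-time composition of components described in the abstract is exposed in released versions of Open MPI through its Modular Component Architecture (MCA): frameworks such as the byte-transfer layer (btl) are populated by components that can be listed, selected, and tuned from the command line without recompiling the application. A minimal sketch follows; the component names (`tcp`, `self`), the parameter `btl_tcp_eager_limit`, and the application name `./ring_app` reflect later Open MPI releases and are illustrative assumptions, not details taken from this paper.

```shell
# List the point-to-point (btl) components Open MPI discovered at run time
ompi_info | grep btl

# Run a job composed of only the TCP and self (loopback) components
mpirun --mca btl tcp,self -np 2 ./ring_app

# Tune a component parameter at launch time, again without recompilation
mpirun --mca btl tcp,self --mca btl_tcp_eager_limit 65536 -np 2 ./ring_app
```

Because selection happens at launch time, the same installation can serve, for example, both an InfiniBand cluster and a TCP-only development machine.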


Keywords: Message Passing Interface, Message Size, Component Architecture, Component Framework, Memory Pool





Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • Richard L. Graham (1)
  • Timothy S. Woodall (1)
  • Jeffrey M. Squyres (2)
  1. Advanced Computing Laboratory, Los Alamos National Lab, USA
  2. Open System Laboratory, Indiana University, USA
