Open MPI: A Flexible High Performance MPI
A large number of MPI implementations are currently available, each of which emphasizes different aspects of high-performance computing or is intended to solve a specific research problem. The result is a myriad of incompatible MPI implementations, all of which require separate installation, and the combination of which presents significant logistical challenges for end users. Building upon prior research, and influenced by experience gained from the code bases of the LAM/MPI, LA-MPI, FT-MPI, and PACX-MPI projects, Open MPI is an all-new, production-quality MPI-2 implementation that is fundamentally centered around component concepts. Open MPI provides a unique combination of novel features previously unavailable in an open-source, production-quality implementation of MPI. Its component architecture both provides a stable platform for third-party research and enables the run-time composition of independent software add-ons. This paper presents a high-level overview of the goals, design, and implementation of Open MPI, as well as performance results for its point-to-point implementation.
Keywords: Message Passing Interface, Message Size, Component Architecture, Component Framework, Memory Pool
- 1. Bosilca, G., Bouteiller, A., Cappello, F., Djilali, S., Fedak, G., Germain, C., Herault, T., Lemarinier, P., Lodygensky, O., Magniette, F., Neri, V., Selikhov, A.: MPICH-V: Toward a scalable fault tolerant MPI for volatile nodes. In: SC 2002 Conference CD, Baltimore, MD, paper 298, LRI, IEEE/ACM SIGARCH (2002)
- 2. Bernholdt, D.E., et al.: A component architecture for high-performance scientific computing. Intl. J. High-Performance Computing Applications (2004)
- 3. Fagg, G.E., Gabriel, E., Chen, Z., Angskun, T., Bosilca, G., Bukovsky, A., Dongarra, J.J.: Fault tolerant communication library and applications for high performance. In: Los Alamos Computer Science Institute Symposium, Santa Fe, NM, October 27-29 (2003)
- 6. Liu, J., Wu, J., Kini, S.P., Wyckoff, P., Panda, D.K.: High performance RDMA-based MPI implementation over InfiniBand. In: ICS 2003. Proceedings of the 17th Annual International Conference on Supercomputing, pp. 295–304. ACM Press, New York (2003)
- 7.Message Passing Interface Forum. MPI: A Message Passing Interface Standard (June 1995), http://www.mpi-forum.org/
- 8.Message Passing Interface Forum. MPI-2: Extensions to the Message Passing Interface (July 1997), http://www.mpi-forum.org/
- 10. Sankaran, S., Squyres, J.M., Barrett, B., Lumsdaine, A., Duell, J., Hargrove, P., Roman, E.: The LAM/MPI checkpoint/restart framework: System-initiated checkpointing. International Journal of High Performance Computing Applications (to appear, 2004)
- 11. Shipman, G.M.: InfiniBand scalability in Open MPI. Master's thesis, University of New Mexico (December 2005)
- 12. Snell, Q.O., Mikler, A.R., Gustafson, J.L.: NetPIPE: A Network Protocol Independent Performance Evaluator. In: IASTED International Conference on Intelligent Information Management and Systems (June 1996)