Can MPI Be Used for Persistent Parallel Services?

  • Robert Latham
  • Robert Ross
  • Rajeev Thakur
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4192)

Abstract

MPI is routinely used for writing parallel applications, but it is not commonly used for writing long-running parallel services, such as parallel file systems or job schedulers. Nonetheless, MPI does have many features that are potentially useful for writing such software. Using the PVFS2 parallel file system as a motivating example, we studied the needs of software that provides persistent parallel services and evaluated whether MPI is a good match for those needs. We also ran experiments to determine the gaps between what the MPI Standard enables and what MPI implementations currently support. The results of our study indicate that MPI can enable persistent parallel systems to be developed with less effort and can provide high performance, but MPI implementations will need to provide better support for certain features. We also describe an area where additions to the MPI Standard would be useful.
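As a rough, hypothetical illustration of the kind of MPI-2 functionality the paper considers for persistent services, the sketch below shows a long-running server that accepts a connection from an independently started client through MPI's dynamic process management (MPI_Open_port, MPI_Publish_name, MPI_Comm_accept). The service name "pvfs2-like-service" and the overall structure are assumptions made for illustration; the example is not taken from the paper or from PVFS2.

    /* Hypothetical sketch of a persistent MPI service accepting one client
     * connection via MPI-2 dynamic process management; not from the paper. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        char port[MPI_MAX_PORT_NAME];
        MPI_Comm client;

        MPI_Init(&argc, &argv);

        /* Open a port and publish it under a (hypothetical) service name so
         * separately launched clients can find it with MPI_Lookup_name. */
        MPI_Open_port(MPI_INFO_NULL, port);
        MPI_Publish_name("pvfs2-like-service", MPI_INFO_NULL, port);

        /* A real service would loop here, accepting and serving clients. */
        MPI_Comm_accept(port, MPI_INFO_NULL, 0, MPI_COMM_SELF, &client);
        printf("client connected\n");
        /* ... exchange requests and replies over 'client' with MPI_Send/MPI_Recv ... */

        MPI_Comm_disconnect(&client);
        MPI_Unpublish_name("pvfs2-like-service", MPI_INFO_NULL, port);
        MPI_Close_port(port);
        MPI_Finalize();
        return 0;
    }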



Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • Robert Latham (1)
  • Robert Ross (1)
  • Rajeev Thakur (1)

  1. Mathematics and Computer Science Division, Argonne National Laboratory, Argonne, USA
