Writing Parallel Libraries with MPI - Common Practice, Issues, and Extensions

  • Torsten Hoefler
  • Marc Snir
Part of the Lecture Notes in Computer Science book series (LNCS, volume 6960)

Abstract

Modular programming is an important software design concept. We discuss principles for programming parallel libraries, show several successful library implementations, and introduce a taxonomy for existing parallel libraries. We derive common requirements that parallel libraries place on the programming framework. We then show how those requirements are supported in the Message Passing Interface (MPI) standard. We also note several potential pitfalls for library implementers using MPI. Finally, we conclude with a discussion of the state of the art in parallel library programming and provide some guidelines for library designers.

Keywords

Message Passing Interface · Topology Mapping · Virtual Topology · Collective Communication · Communication Library

Copyright information

© Springer-Verlag Berlin Heidelberg 2011

Authors and Affiliations

  • Torsten Hoefler — University of Illinois, Urbana, USA
  • Marc Snir — University of Illinois, Urbana, USA