
Using MPI Derived Datatypes in Numerical Libraries

  • Conference paper
Recent Advances in the Message Passing Interface (EuroMPI 2011)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 6960)


Abstract

By way of example, this paper examines the potential of MPI user-defined datatypes for distributed data structure manipulation in numerical libraries. The three examples, namely gather/scatter of column-wise distributed two-dimensional matrices, matrix transposition, and redistribution of doubly cyclically distributed matrices as used in the Elemental dense matrix library, show that distributed data structures can be conveniently expressed with the derived-datatype mechanisms of MPI, yielding at the same time worthwhile performance advantages over straightforward, handwritten implementations. Experiments have been performed on different systems with the mpich2 and OpenMPI library implementations. We report results for a SunFire X4100 system with the mvapich2 library. We point out cases where the current MPI collective interfaces do not provide sufficient functionality.
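The column gather/scatter case rests on the observation that a column of a row-major matrix occupies a regularly strided layout, which MPI can describe directly with a derived datatype such as MPI_Type_vector. The following is a minimal sketch, not taken from the paper: plain C showing the handwritten pack loop that such a datatype lets the MPI library absorb (the function name `pack_column` and the parameter choices are illustrative assumptions; the MPI call appears only in a comment).

```c
#include <stddef.h>

/* Pack column `col` of a row-major `rows` x `cols` matrix into `out`.
   This handwritten copy loop is what a derived datatype such as
     MPI_Type_vector(rows, 1, cols, MPI_DOUBLE, &column_type);
   allows the MPI library to perform internally during communication,
   potentially avoiding the intermediate buffer altogether. */
void pack_column(const double *matrix, size_t rows, size_t cols,
                 size_t col, double *out)
{
    for (size_t r = 0; r < rows; ++r)
        out[r] = matrix[r * cols + col];  /* stride of `cols` elements */
}
```

With the datatype variant, a single send of `rows` elements of `column_type` replaces the explicit pack-then-send sequence, which is the kind of convenience and performance trade-off the paper's experiments quantify.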






Copyright information

© 2011 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Bajrović, E., Träff, J.L. (2011). Using MPI Derived Datatypes in Numerical Libraries. In: Cotronis, Y., Danalis, A., Nikolopoulos, D.S., Dongarra, J. (eds) Recent Advances in the Message Passing Interface. EuroMPI 2011. Lecture Notes in Computer Science, vol 6960. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-24449-0_6


  • DOI: https://doi.org/10.1007/978-3-642-24449-0_6

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-24448-3

  • Online ISBN: 978-3-642-24449-0
