Implementing Fast and Reusable Datatype Processing

  • Robert Ross
  • Neill Miller
  • William D. Gropp
Part of the Lecture Notes in Computer Science book series (LNCS, volume 2840)

Abstract

Methods for describing structured data are a key aid in application development. The MPI standard defines a system for creating “MPI types” at run time and using these types when passing messages, performing RMA operations, and accessing data in files. Similar capabilities are available in other middleware. Unfortunately, many implementations perform poorly when processing these structured data types. This situation leads application developers to avoid these components entirely, instead performing any necessary data processing by hand.

In this paper we describe an internal representation of types and a system for processing this representation that helps maintain the highest possible performance during processing. The performance of this system, used in the MPICH2 implementation, is compared to well-written manual processing routines and other available MPI implementations. We show that performance for most tested types is comparable to manual processing. We identify additional opportunities for optimization and other software where this implementation can be leveraged.
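To make the kind of structured data the paper targets concrete, here is a minimal sketch (not the paper's MPICH2 internal representation) of how a regularly strided datatype, described by a (count, blocklength, stride) tuple analogous to MPI_Type_vector, can drive a generic pack routine instead of hand-written copying code. The function name `pack_vector` and the element-unit stride convention are illustrative assumptions.

```python
# Illustrative sketch: a derived-datatype description, in the spirit of
# MPI_Type_vector, used to gather non-contiguous elements into a packed
# buffer. This is NOT the paper's implementation, only an analogy.

def pack_vector(buf, count, blocklength, stride):
    """Gather `count` blocks of `blocklength` contiguous elements,
    with successive blocks starting `stride` elements apart."""
    out = []
    for i in range(count):
        start = i * stride
        out.extend(buf[start:start + blocklength])
    return out

# Example: pack one column of a 4x4 row-major matrix stored as a flat list.
matrix = list(range(16))  # rows: [0..3], [4..7], [8..11], [12..15]
column0 = pack_vector(matrix, count=4, blocklength=1, stride=4)
# column0 == [0, 4, 8, 12]
```

A well-optimized datatype engine recognizes such regular descriptions and processes them with tight loops rather than interpreting the type element by element, which is what allows performance comparable to manual packing.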



Copyright information

© Springer-Verlag Berlin Heidelberg 2003

Authors and Affiliations

  • Robert Ross (1)
  • Neill Miller (1)
  • William D. Gropp (1)

  1. Mathematics and Computer Science Division, Argonne National Laboratory, Argonne, USA
