Implementing and Benchmarking Derived Datatypes in Metacomputing

  • Edgar Gabriel
  • Michael Resch
  • Roland Rühle
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 2110)

Abstract

In recent years, flexible data structures have become a common programming tool in engineering and scientific simulation. Standard programming languages such as Fortran, C, and C++ allow users to define their own datatypes for such structures. For parallel programming this raises a particular problem when data must be exchanged between processes. Regular data structures occupy contiguous memory and can therefore be transferred to other processes easily. Irregular data structures, however, are more difficult to handle, and the cost of communicating them can be considerable. MPI (the Message Passing Interface) provides so-called "derived datatypes" to address this problem, and on many systems derived datatypes have been implemented efficiently. When running MPI on a cluster of systems connected by wide-area networks, however, such optimized implementations are not yet available, and the communication overhead can be substantial. The purpose of this paper is to show how this problem can be overcome by considering both the nature of the derived datatype and the cluster of systems used. We present an optimized implementation and show results for clusters of supercomputers.
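To make the abstract's contrast concrete, here is a minimal sketch (illustrative only, not code from the paper) of how an MPI derived datatype describes non-contiguous data: MPI_Type_vector lays out one column of a row-major matrix, so the column can be sent without hand-packing it into a contiguous buffer. The matrix shape and process roles are assumptions made for this example.

/* Minimal sketch: send one non-contiguous column of a row-major
 * matrix using an MPI derived datatype (MPI_Type_vector). */
#include <mpi.h>
#include <stdio.h>

#define ROWS 4
#define COLS 5

int main(int argc, char **argv)
{
    double a[ROWS][COLS];
    int rank;
    MPI_Datatype column;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* ROWS blocks of 1 double each, strided by the row length COLS:
     * this describes a single matrix column without copying it. */
    MPI_Type_vector(ROWS, 1, COLS, MPI_DOUBLE, &column);
    MPI_Type_commit(&column);

    if (rank == 0) {
        for (int i = 0; i < ROWS; i++)
            for (int j = 0; j < COLS; j++)
                a[i][j] = i * COLS + j;
        /* Send column 2; the MPI library gathers the strided elements. */
        MPI_Send(&a[0][2], 1, column, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(&a[0][2], 1, column, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        for (int i = 0; i < ROWS; i++)
            printf("a[%d][2] = %g\n", i, a[i][2]);
    }

    MPI_Type_free(&column);
    MPI_Finalize();
    return 0;
}

On a single parallel machine the MPI library can pack such a type efficiently; across a wide-area link between machines, as in the metacomputing setting of the paper, the cost of packing and transferring such non-contiguous data is exactly the overhead the optimized implementation aims to reduce.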

Copyright information

© Springer-Verlag Berlin Heidelberg 2001

Authors and Affiliations

  • Edgar Gabriel
  • Michael Resch
  • Roland Rühle
  1. High Performance Computing Center Stuttgart, Stuttgart, Germany
