Achieving Performance Portability with SKaMPI for High-Performance MPI Programs

  • Ralf Reussner
  • Gunnar Hunzelmann
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 2074)

Abstract

Current development processes for parallel software often fail to deliver portable software, because they usually require a tedious tuning phase to reach acceptable performance. This tuning phase is costly and yields machine-specifically tuned (i.e., less portable) software. Designing software for both performance and portability in the early stages of development requires performance data for all targeted parallel hardware platforms. In this paper we present a publicly available database containing the data software developers need to design and implement portable, high-performing MPI software.
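
To make the setting concrete, the following sketch shows the kind of measurement such a database records: the average running time of a collective operation (here MPI_Bcast) over a range of message lengths on a given platform. This is an illustration, not SKaMPI's actual code; the barrier-plus-loop timing shown is the naive approach, and a careful benchmark must additionally guard against pipelining between successive broadcasts and clock-granularity effects.

```c
/* Minimal sketch (not the SKaMPI implementation): time MPI_Bcast for
 * several message lengths. A performance database could store one such
 * (platform, operation, message length, time) record per measurement. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    const int lengths[] = {1024, 4096, 16384, 65536};   /* message sizes in bytes */
    const int repetitions = 100;
    int rank;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    for (size_t i = 0; i < sizeof lengths / sizeof lengths[0]; i++) {
        char *buf = malloc(lengths[i]);     /* contents irrelevant for timing */

        MPI_Barrier(MPI_COMM_WORLD);        /* synchronize processes before timing */
        double start = MPI_Wtime();
        for (int r = 0; r < repetitions; r++)
            MPI_Bcast(buf, lengths[i], MPI_CHAR, 0, MPI_COMM_WORLD);
        double elapsed = MPI_Wtime() - start;

        if (rank == 0)                      /* report the mean time per call */
            printf("MPI_Bcast %6d bytes: %.3f us/call\n",
                   lengths[i], 1e6 * elapsed / repetitions);
        free(buf);
    }

    MPI_Finalize();
    return 0;
}
```

Given per-platform numbers of this kind, a developer can compare, for example, a broadcast against a hand-rolled point-to-point dissemination on every targeted machine before committing the design to one communication structure.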

Keywords

Message Passing Interface · Message Length · Root Process · Collective Operation · Parallel Software

Copyright information

© Springer-Verlag Berlin Heidelberg 2001

Authors and Affiliations

  • Ralf Reussner (1)
  • Gunnar Hunzelmann (1)

  1. Chair Computer Science for Engineering and Science, Universität Karlsruhe (T. H.), Karlsruhe, Germany