SPMD OpenMP versus MPI on a IBM SMP for 3 Kernels of the NAS Benchmarks

  • Géraud Krawezik
  • Guillaume Alléon
  • Franck Cappello
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 2327)

Abstract

Shared memory multiprocessors are becoming more popular as the building blocks of large parallel computers, and the current trend is to increase the number of processors inside each node. Many existing applications nevertheless use the message passing paradigm even when running on shared memory machines, for three main reasons: 1) the legacy of earlier versions written for distributed memory computers, 2) the difficulty of obtaining high performance with OpenMP when using loop-level parallelization, and 3) the complexity of writing multithreaded programs with a low-level thread library. In this paper we demonstrate that OpenMP can provide better performance than MPI on SMP machines. We use a coarse-grain parallelization approach, also known as the SPMD programming style, with OpenMP. The performance evaluation considers the IBM SP3 NH2 and three kernels of the NAS benchmarks: FT, CG and MG. We compare three implementations of each kernel: the NAS 2.3 MPI version, a fine-grain (loop-level) OpenMP version, and our SPMD OpenMP version. A breakdown of the execution times explains the performance results.
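To make the distinction concrete, here is a minimal sketch (hypothetical C/OpenMP code, not taken from the paper) contrasting the two OpenMP styles: the fine-grain style opens a parallel region around each loop nest, while the SPMD style opens a single parallel region for the whole computation and assigns each thread a fixed block of the iteration space, much as an MPI rank owns a block of a distributed array.

    #include <omp.h>

    #define N 1000000
    static double x[N], y[N];

    /* Fine-grain (loop-level) style: a fork/join around each loop. */
    void daxpy_loop_level(double a)
    {
        #pragma omp parallel for
        for (int i = 0; i < N; i++)
            y[i] += a * x[i];
    }

    /* SPMD (coarse-grain) style: one parallel region for the whole
       computation; each thread computes its own fixed block of the
       iteration space, as an MPI rank would own a block of a
       distributed array. (Hypothetical kernel for illustration.) */
    void daxpy_spmd(double a)
    {
        #pragma omp parallel
        {
            int tid   = omp_get_thread_num();
            int nth   = omp_get_num_threads();
            int chunk = (N + nth - 1) / nth;   /* ceiling division */
            int lo    = tid * chunk;
            int hi    = lo + chunk < N ? lo + chunk : N;
            for (int i = lo; i < hi; i++)
                y[i] += a * x[i];
        }
    }

The SPMD form pays the fork/join overhead once for many loops and keeps each thread bound to the same data across computation phases, which is the coarse-grain behaviour the paper evaluates against the loop-level one.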

Keywords

Shared Memory · Message Passing · Loop Nest · Shared Memory Machine · OpenMP Implementation

Copyright information

© Springer-Verlag Berlin Heidelberg 2002

Authors and Affiliations

  • Géraud Krawezik (1, 2)
  • Guillaume Alléon (2)
  • Franck Cappello (1)
  1. Laboratoire de Recherche en Informatique, Université Paris-Sud, Orsay Cedex, France
  2. EADS - Corporate Research Center, Blagnac Cedex, France
