Exploring Unexpected Behavior in MPI

  • Martin Schulz
  • Dieter Kranzlmüller
  • Bronis R. de Supinski
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4208)


MPI has become the dominant programming paradigm in high-performance computing, partly due to its portability: an MPI application can run on a wide range of architectures. Unfortunately, MPI guarantees portability only at the source level; a code that compiles everywhere will not necessarily exhibit the same behavior on every platform. The MPITEST suite provides a series of microkernels that probe MPI implementations across different systems. All MPITEST codes conform to the MPI standard, yet their behavior is implementation dependent and can therefore lead to unexpected results. In this paper we introduce MPITEST and present examples from the test suite, along with their surprising results and consequences on a series of platforms. The goal of this work is to demonstrate the general problem and to raise awareness in the MPI user community.
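The paper's own test cases are not reproduced on this page, but the kind of implementation-dependent behavior it studies can be sketched with a classic example (not taken from MPITEST): two ranks that both send before receiving. The MPI standard calls this pattern "unsafe" because its completion depends on whether the implementation buffers the outgoing message; the buffering (eager) threshold differs between MPI libraries and platforms, so identical source code may run on one system and deadlock on another.

```c
/* Sketch of implementation-dependent MPI behavior (not from MPITEST).
 * Both ranks call MPI_Send before MPI_Recv. If the message fits under
 * the implementation's eager threshold it is buffered and the program
 * completes; above that threshold both sends block in a rendezvous
 * protocol and the program deadlocks. The threshold varies across
 * implementations and platforms, so the observed behavior varies too.
 * Run with exactly 2 ranks, e.g.: mpirun -np 2 ./a.out 1000000
 */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    int rank, peer;
    int n = (argc > 1) ? atoi(argv[1]) : 1;  /* message size in ints */
    int *sendbuf, *recvbuf;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    peer = 1 - rank;                         /* the other of the 2 ranks */

    sendbuf = calloc(n, sizeof(int));
    recvbuf = calloc(n, sizeof(int));

    /* Both ranks send first: permitted by the standard, but "unsafe" --
     * correctness depends on implementation-provided buffering. */
    MPI_Send(sendbuf, n, MPI_INT, peer, 0, MPI_COMM_WORLD);
    MPI_Recv(recvbuf, n, MPI_INT, peer, 0, MPI_COMM_WORLD,
             MPI_STATUS_IGNORE);

    printf("rank %d: completed with n = %d\n", rank, n);
    free(sendbuf);
    free(recvbuf);
    MPI_Finalize();
    return 0;
}
```

On many implementations the run with `n = 1` completes while a run with a few hundred thousand ints hangs; the exact crossover point is an implementation tuning parameter, which is precisely the kind of platform-dependent outcome the paper examines.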


Keywords: Test Suite · Message Passing Interface · Lawrence Livermore National Laboratory · Unexpected Behavior · Benchmark Suite




  1. Bailey, D., Harris, T., Saphir, W., der Wijngaart, R.V., Woo, A., Yarrow, M.: The NAS Parallel Benchmarks 2.0. Report NAS-95-020, NASA Ames Research Center, Moffett Field, CA (December 1995)
  2. de Supinski, B.: The ASCI PSE Milepost: Run-Time Systems Performance Tests. In: Proceedings of the International Conference on Parallel and Distributed Processing Techniques and Applications (PDPTA) (June 2001)
  3. de Supinski, B., Karonis, N.: Accurately Measuring Broadcasts in a Computational Grid. In: Proceedings of the 8th IEEE International Symposium on High-Performance Distributed Computing (HPDC), pp. 29–37 (1999)
  4. Kranzlmüller, D., Schulz, M.: Notes on Nondeterminism in Message Passing Programs. In: Proceedings of the 9th European PVM/MPI Users’ Group Meeting, pp. 357–367 (September 2002)
  5. Lawrence Livermore National Laboratory: The ASCI Purple benchmark codes (October 2002)
  6. Message Passing Interface Forum (MPIF): MPI: A Message-Passing Interface Standard. Technical Report, University of Tennessee, Knoxville (June 1995)
  7. Reussner, R., Sanders, P., Prechelt, L., Müller, M.: SKaMPI: A Detailed, Accurate MPI Benchmark. In: Proceedings of the 5th European PVM/MPI Users’ Group Meeting, pp. 52–59 (September 1998)
  8. Vuduc, R., Schulz, M., Quinlan, D., de Supinski, B., Sæbjørnsen, A.: Improving Distributed Memory Applications Testing by Message Perturbation. In: Proceedings of Parallel and Distributed Systems: Testing and Debugging (PADTAD) (July 2006)

Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • Martin Schulz (1)
  • Dieter Kranzlmüller (2)
  • Bronis R. de Supinski (1)
  1. Center for Applied Scientific Computing, Lawrence Livermore National Laboratory, Livermore, USA
  2. Institute of Graphics and Parallel Processing (GUP), Joh. Kepler University Linz, Linz, Austria
