Evaluation of Interpreted Languages with Open MPI

  • Matti Bickel
  • Adrian Knoth
  • Mladen Berekovic
Part of the Lecture Notes in Computer Science book series (LNCS, volume 6960)

Abstract

High performance computing (HPC) seems to be one of the last monopolies of low-level languages like C and FORTRAN. The de facto standard for HPC, the Message Passing Interface (MPI), defines APIs for C, FORTRAN and C++ only. This paper evaluates current alternatives among interpreted languages, specifically Python and C#. MPI library wrappers for both languages are examined and their performance is compared to that of native (C) Open MPI using two benchmarks. Both languages compare favorably with the native implementation in both code compactness and runtime performance.
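
A comparison of this kind rests on simple, repeatable communication benchmarks. The sketch below is not taken from the paper; it assumes the mpi4py binding (the common Python MPI wrapper) and a ping-pong pattern to illustrate what a message-size benchmark of this kind looks like in Python. All names and parameters here (message_size, reps) are illustrative choices, not values from the evaluation.

    # Minimal ping-pong sketch with mpi4py (illustrative, not the authors' code).
    # Run with two processes, e.g.: mpiexec -n 2 python pingpong.py
    from mpi4py import MPI
    import time

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()

    message_size = 1 << 20            # 1 MiB payload (arbitrary example size)
    payload = bytearray(message_size)
    reps = 100

    comm.Barrier()                    # synchronise ranks before timing
    start = time.perf_counter()
    for _ in range(reps):
        if rank == 0:
            comm.Send([payload, MPI.BYTE], dest=1, tag=0)
            comm.Recv([payload, MPI.BYTE], source=1, tag=0)
        elif rank == 1:
            comm.Recv([payload, MPI.BYTE], source=0, tag=0)
            comm.Send([payload, MPI.BYTE], dest=0, tag=0)
    elapsed = time.perf_counter() - start

    if rank == 0:
        # Each iteration moves the payload twice (there and back).
        latency_us = elapsed / (2 * reps) * 1e6
        bandwidth = 2 * reps * message_size / elapsed / 2**20
        print(f"{message_size} B: {latency_us:.1f} us/msg, {bandwidth:.1f} MiB/s")

The buffer-based Send/Recv calls (uppercase in mpi4py) bypass Python object pickling, which is what makes a Python measurement directly comparable to its C counterpart; the paper's actual benchmark code and wrapper versions may differ.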

Keywords

Cellular Automaton · Message Passing Interface · High Performance Computing · Board Size · Message Size

Copyright information

© Springer-Verlag Berlin Heidelberg 2011

Authors and Affiliations

  • Matti Bickel (1)
  • Adrian Knoth (1)
  • Mladen Berekovic (1)
  1. Institute of Computer Science, Friedrich-Schiller-University Jena, Germany
