MPI Application Development Using the Analysis Tool MARMOT

  • Bettina Krammer
  • Matthias S. Müller
  • Michael M. Resch
Part of the Lecture Notes in Computer Science book series (LNCS, volume 3038)

Abstract

The Message Passing Interface (MPI) is widely used to write parallel programs based on message passing. Due to the complexity of parallel programming, there is a need for tools that support the development process. In many situations, incorrect usage of MPI by the application programmer can be detected automatically. Examples are the introduction of irreproducibility, deadlocks, and the incorrect management of resources such as communicators, groups, datatypes and operators. We describe the tool MARMOT, which implements some of these checks. Finally, we report our experiences with three applications of the CrossGrid project regarding the usability and performance of this tool.
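
To make these error classes concrete, the following minimal C sketch (illustrative only, not taken from the paper or from MARMOT's test suite) shows a send/send exchange whose correctness depends on MPI-internal buffering. Runtime checkers of this kind typically flag such a pattern as a potential deadlock, since the MPI standard does not guarantee that a blocking MPI_Send returns before the matching receive is posted.

```c
/* Illustrative sketch: an "unsafe" message exchange.
 * Both ranks call a blocking MPI_Send before the matching MPI_Recv;
 * with large messages or a non-buffering implementation, neither send
 * returns and the program deadlocks. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int rank, size, sendbuf, recvbuf;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (size >= 2 && rank < 2) {
        int partner = 1 - rank;
        sendbuf = rank;
        /* Potential deadlock: both processes may block here, so neither
         * ever reaches the MPI_Recv that would unblock the other. */
        MPI_Send(&sendbuf, 1, MPI_INT, partner, 0, MPI_COMM_WORLD);
        MPI_Recv(&recvbuf, 1, MPI_INT, partner, 0, MPI_COMM_WORLD, &status);
        printf("rank %d received %d\n", rank, recvbuf);
    }

    MPI_Finalize();
    return 0;
}
```

Tools of this kind typically intercept each MPI call (for instance via the MPI profiling interface) and verify arguments and call ordering before invoking the actual library routine, which is how patterns like the one above can be reported at runtime.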



Copyright information

© Springer-Verlag Berlin Heidelberg 2004

Authors and Affiliations

  • Bettina Krammer (1)
  • Matthias S. Müller (1)
  • Michael M. Resch (1)
  1. High Performance Computing Center Stuttgart, Stuttgart, Germany