Automatic MPI to AMPI Program Transformation Using Photran

  • Stas Negara
  • Gengbin Zheng
  • Kuo-Chuan Pan
  • Natasha Negara
  • Ralph E. Johnson
  • Laxmikant V. Kalé
  • Paul M. Ricker
Part of the Lecture Notes in Computer Science book series (LNCS, volume 6586)

Abstract

Adaptive MPI (AMPI) is an implementation of the Message Passing Interface (MPI) standard. AMPI benefits MPI applications with features such as dynamic load balancing, virtualization, and checkpointing. Because AMPI runs multiple user-level threads per physical core, global variables become an obstacle: all threads within a process would share them. Converting an MPI program to AMPI therefore requires eliminating its global variables, a task that is tedious and error-prone when done by hand. In this paper, we present a Photran-based tool that automates this task through a source-to-source transformation for Fortran programs. We evaluate the tool on the multi-zone NAS Parallel Benchmarks and demonstrate it on FLASH, a real-world large-scale code, presenting preliminary results of running FLASH on AMPI. Both evaluations show significant performance improvement with AMPI, demonstrating that the tool makes adopting AMPI easier and more productive.



Copyright information

© Springer-Verlag Berlin Heidelberg 2011

Authors and Affiliations

  • Stas Negara (1)
  • Gengbin Zheng (1)
  • Kuo-Chuan Pan (2)
  • Natasha Negara (3)
  • Ralph E. Johnson (1)
  • Laxmikant V. Kalé (1)
  • Paul M. Ricker (2)

  1. Department of Computer Science, University of Illinois at Urbana-Champaign, Urbana, USA
  2. Department of Astronomy, University of Illinois at Urbana-Champaign, Urbana, USA
  3. Department of Computing Science, University of Alberta, Alberta, Canada
