MPI Thread-Level Checking for MPI+OpenMP Applications

  • Emmanuelle Saillard
  • Patrick Carribault
  • Denis Barthou
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9233)


MPI is the most widely used parallel programming model. But the decreasing amount of memory per compute core tends to push MPI to be mixed with shared-memory approaches like OpenMP. In such cases, the interoperability of these two models is challenging. The MPI 2.0 standard defines the so-called thread level to indicate how MPI will interact with threads. But even though hybrid programs are becoming more common, debugging tools are still lacking, in particular for checking thread-level compliance. To fill this gap, we propose a static analysis to verify the thread level required by an application. This work extends PARCOACH, a GCC plugin focused on the detection of MPI collective errors in MPI and MPI+OpenMP programs. We validated our analysis on computational benchmarks and applications and measured a low overhead.


Static verification · OpenMP · MPI · MPI thread level



Copyright information

© Springer-Verlag Berlin Heidelberg 2015

Authors and Affiliations

  • Emmanuelle Saillard (1)
  • Patrick Carribault (1)
  • Denis Barthou (2)
  1. CEA, DAM, DIF, Arpajon, France
  2. Bordeaux Institute of Technology, LaBRI / INRIA, Bordeaux, France
