ParFuse: Parallel and Compositional Analysis of Message Passing Programs

  • Sriram Aananthakrishnan
  • Greg Bronevetsky
  • Mark Baranowski
  • Ganesh Gopalakrishnan
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10136)

Abstract

Static analysis discovers provably true properties about the behaviors of programs that are useful in optimization, debugging and verification. Sequential static analysis techniques fail to interpret the message passing semantics of MPI and thus cannot optimize or check the message passing behaviors of MPI programs. In this paper, we introduce an abstraction for approximating the message passing behaviors of MPI programs that is more precise than prior work and is applicable to a wide variety of applications. Our approach builds on the compositional paradigm, transparently extending sequential analyses with MPI support through composition with our MPI analyses. This is the first framework in which the dataflow analysis is carried out in parallel on a cluster, with message-carried dataflow facts refining the inter-process dataflow analysis states. We detail ParFuse – a framework that supports such parallel and compositional analysis of MPI programs – report its scalability, and discuss prospects for extending our work to more powerful analyses.
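To illustrate the idea of message-carried dataflow facts described above, the following is a minimal, hypothetical sketch (not the paper's actual implementation, which runs on a cluster over ROSE): a constant-propagation analysis of two MPI ranks in which the fact known about a sent variable is attached to the matching channel, and the receiver joins that fact into its own analysis state. All names (`join`, `channel`, `analyze_rank0`, `analyze_rank1`) are invented for this sketch.

```python
# Hypothetical sketch of message-carried dataflow facts for constant
# propagation across two simulated MPI ranks. TOP means "not a constant".
TOP = "TOP"

def join(a, b):
    """Lattice join: None is bottom, equal constants stay, else TOP."""
    if a is None:
        return b
    if b is None:
        return a
    return a if a == b else TOP

# channel[(src, dst, tag)] -> dataflow fact carried by the matching message
channel = {}

def analyze_rank0():
    # Models rank 0 executing: x = 42; MPI_Send(x, dest=1, tag=0)
    state = {"x": 42}
    # Attach the fact about the sent buffer to the channel (joined with
    # any fact from a previous fixpoint iteration).
    channel[(0, 1, 0)] = join(channel.get((0, 1, 0)), state["x"])
    return state

def analyze_rank1():
    # Models rank 1 executing: MPI_Recv(y, src=0, tag=0); z = y + 1
    state = {}
    fact = channel.get((0, 1, 0))
    if fact is not None:
        state["y"] = fact
        state["z"] = fact + 1 if fact != TOP else TOP
    return state

# Fixpoint: re-run the per-rank analyses until the channel facts stabilize.
for _ in range(2):
    s0 = analyze_rank0()
    s1 = analyze_rank1()

print(s1)  # {'y': 42, 'z': 43}
```

In the sketch, rank 1's analysis learns that `y` is the constant 42 only because the fact traveled with the message edge; a purely sequential analysis of rank 1 in isolation would have to assume `y` is TOP.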

Notes

Acknowledgments

This research was supported in part by NSF ACI 1148127, CCF 1439002, CCF 1346756 and DOE grant “Static Analysis using ROSE”.


Copyright information

© Springer International Publishing AG 2017

Authors and Affiliations

  • Sriram Aananthakrishnan (1)
  • Greg Bronevetsky (2)
  • Mark Baranowski (1)
  • Ganesh Gopalakrishnan (1)
  1. University of Utah, Salt Lake City, USA
  2. Google Inc., Mountain View, USA