Protocols by Default: Safe MPI Code Generation Based on Session Types

  • Nicholas Ng
  • Jose Gabriel de Figueiredo Coutinho
  • Nobuko Yoshida

Part of the Lecture Notes in Computer Science book series (LNCS, volume 9031)


This paper presents a code generation framework for type-safe and deadlock-free Message Passing Interface (MPI) programs. The code generation process starts with the definition of the global topology using a protocol specification language based on parameterised multiparty session types (MPST). An MPI parallel program backbone is automatically generated from the global specification. The backbone code can then be merged with sequential code describing the application behaviour, resulting in a complete MPI program. This merging process is fully automated through an aspect-oriented compilation approach. In this way, programmers only need to supply the intended communication protocol and the sequential code to obtain parallelised programs that are guaranteed free from communication mismatches, type errors, and deadlocks. The code generation framework also integrates an optimisation method that overlaps communication and computation, and can derive not only representative parallel programs with common parallel patterns (such as ring and stencil) but also distributed applications from arbitrary MPST protocols. We show that our tool generates efficient and scalable MPI applications and improves programmer productivity. For instance, our benchmarks involving representative parallel and application-specific patterns speed up sequential execution by up to 31 times and reduce programming effort by an average of 39%.
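To illustrate the starting point of the workflow, the sketch below shows what a global protocol for a ring pattern might look like in a parameterised-MPST protocol language such as Pabble. The protocol name, role name, and exact syntax here are illustrative assumptions for exposition, not taken verbatim from the paper.

```
// Hypothetical sketch of a parameterised MPST global protocol (Pabble-style).
// A single role Worker, parameterised over indices 1..N, passes an int
// around a ring topology.
global protocol Ring(role Worker[1..N]) {
  rec Loop {
    (int) from Worker[i:1..N-1] to Worker[i+1];  // forward along the ring
    (int) from Worker[N] to Worker[1];           // close the ring
    continue Loop;                               // iterate until the kernel terminates
  }
}
```

From such a specification, the framework would generate the MPI send/receive backbone for each Worker rank, leaving hooks that the aspect-oriented merge fills in with the user's sequential kernel code.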


Keywords: Kernel function · Message Passing Interface · Parallel application · Loop condition · Session type



Copyright information

© Springer-Verlag Berlin Heidelberg 2015

Authors and Affiliations

  • Nicholas Ng (1)
  • Jose Gabriel de Figueiredo Coutinho (1)
  • Nobuko Yoshida (1)

  1. Imperial College London, London, UK
