Leveraging C++ Meta-programming Capabilities to Simplify the Message Passing Programming Model

  • Simone Pellegrini
  • Radu Prodan
  • Thomas Fahringer
Part of the Lecture Notes in Computer Science book series (LNCS, volume 6960)


Message passing is the primary programming model used for distributed memory systems. Because it aims at performance, its level of abstraction is low, which often makes distributed memory programming difficult and error-prone. In this paper, we leverage the expressivity and meta-programming capabilities of the C++ language to raise the abstraction level and simplify message passing programming. We redefine the semantics of the assignment operator to work in a distributed memory fashion and leave to the compiler the burden of generating the required communication operations. By enforcing stricter checks at compile time, we statically capture common programming errors without incurring runtime overhead.


Keywords: Message passing · C++ meta-programming · PGAS





Copyright information

© Springer-Verlag Berlin Heidelberg 2011

Authors and Affiliations

  • Simone Pellegrini¹
  • Radu Prodan¹
  • Thomas Fahringer¹

  1. University of Innsbruck, Distributed and Parallel Systems Group, Innsbruck, Austria
