International Journal of Parallel Programming, Volume 44, Issue 3, pp 552–573

Synthesizing MPI Implementations from Functional Data-Parallel Programs

DOI: 10.1007/s10766-015-0359-4

Cite this article as:
Aubrey-Jones, T. & Fischer, B. Int J Parallel Prog (2016) 44: 552. doi:10.1007/s10766-015-0359-4

Abstract

Distributed memory architectures such as Linux clusters have become increasingly common but remain difficult to program. We target this problem and present a novel technique to automatically generate data distribution plans, and subsequently MPI implementations in C++, from programs written in a functional core language. The main novelty of our approach is that we support distributed arrays, maps, and lists in the same framework, rather than just arrays. We formalize distributed data layouts as types, which are then used both to search (via type inference) for optimal data distribution plans and to generate the MPI implementations. We introduce the core language and explain our formalization of distributed data layouts. We describe how we search for data distribution plans using an adaptation of the Damas–Milner type inference algorithm, and how we generate MPI implementations in C++ from such plans.

Keywords

Data parallelism · Data distribution · Type inference · Code generation · MPI

Copyright information

© Springer Science+Business Media New York 2015

Authors and Affiliations

  1. University of Southampton, Southampton, UK
  2. University of Stellenbosch, Stellenbosch, South Africa
