
PolyAPM: Parallel Programming via Stepwise Refinement with Abstract Parallel Machines

  • Nils Ellmenreich
  • Christian Lengauer
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 2670)

Abstract

Writing a parallel program is a difficult task that must meet several, sometimes conflicting, goals. The manual approach is time-consuming and error-prone, while the use of parallelising compilers reduces the programmer's control and often does not yield an optimal result. In our approach, PolyAPM, the programming process is structured as a series of source-to-source transformations. Each intermediate result is a program for an Abstract Parallel Machine (APM), on which it can be executed in order to evaluate the transformation. We propose a decision tree of programs and corresponding APMs that helps to explore alternative design decisions. Our approach stratifies the effects of individual, self-contained transformations and enables their evaluation during the parallelisation process.
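The core idea, transforming a program step by step and executing each intermediate version on an abstract machine to judge the transformation, can be illustrated with a minimal sketch. All names below (`Loop`, `parallelise`, `run_on_apm`) are hypothetical and chosen for illustration; they do not come from the paper, and the APM here is reduced to a trivial cost model.

```python
# Minimal, illustrative sketch of the PolyAPM workflow (all names are
# invented for this example): a program is rewritten source-to-source,
# and each intermediate version is "executed" on a simple Abstract
# Parallel Machine (APM) simulator to evaluate the transformation.

from dataclasses import dataclass
from math import ceil

@dataclass
class Loop:
    """A single loop: `trips` iterations, each costing `body_cost` work units."""
    trips: int
    body_cost: int
    parallel: bool = False  # has the loop been marked parallel?

def parallelise(loop: Loop) -> Loop:
    """One source-to-source transformation step: mark the loop parallel
    (this sketch assumes the iterations are independent)."""
    return Loop(loop.trips, loop.body_cost, parallel=True)

def run_on_apm(loop: Loop, processors: int) -> int:
    """Execute the program on an APM with `processors` processing elements;
    return the abstract number of time steps."""
    if loop.parallel:
        return ceil(loop.trips / processors) * loop.body_cost
    return loop.trips * loop.body_cost

seq = Loop(trips=1000, body_cost=2)
par = parallelise(seq)  # one refinement step in the decision tree

# Running both versions on a 4-processor APM quantifies the effect of
# the transformation before any real machine code exists:
print(run_on_apm(seq, 4))  # 2000
print(run_on_apm(par, 4))  # 500
```

In the full approach, each transformation in the decision tree would produce such an executable intermediate program, and alternative branches could be compared by their APM execution results.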

Keywords

Parallel Program · Source Program · Program Transformation · Abstract Machine · Loop Body



Copyright information

© Springer-Verlag Berlin Heidelberg 2003

Authors and Affiliations

  • Nils Ellmenreich¹
  • Christian Lengauer¹
  1. Fakultät für Mathematik und Informatik, Universität Passau, Germany
