
State of the art in compiling HPF

The Data Parallel Programming Model

Part of the book series: Lecture Notes in Computer Science ((LNCS,volume 1132))

Abstract

Offering the user a convenient programming model based on the data-parallel paradigm is one thing; running the resulting applications fast is the next challenge for a language aiming at high performance on massively parallel machines. This paper discusses the issues involved in HPF compilation and presents optimization techniques targeting the message-passing SPMD programming model of distributed-memory MIMD architectures.
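The core of such a translation is the owner-computes rule: each processor executes only the assignments whose left-hand-side elements it owns, and remote right-hand-side elements arrive by message passing. A minimal sketch of the underlying index arithmetic for an HPF-style BLOCK distribution (the function names are illustrative, not from the paper):

```python
def block_size(n, p):
    """Block size when n elements are BLOCK-distributed over p processors
    (ceiling division, as in HPF's default BLOCK mapping)."""
    return -(-n // p)

def owner(i, n, p):
    """Processor that owns global index i under a BLOCK distribution."""
    return i // block_size(n, p)

def local_index(i, n, p):
    """Local index of global element i on its owning processor."""
    return i % block_size(n, p)

# Example: 100 elements over 4 processors -> blocks of 25.
# Global index 99 lives on processor 3, at local index 24.
print(owner(99, 100, 4), local_index(99, 100, 4))  # → 3 24
```

The generated SPMD node code guards each statement with such an ownership test and inserts sends/receives for the non-local operands; the optimizations surveyed in this paper largely aim at hoisting, vectorizing, or eliminating those communications.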





Editor information

Guy-René Perrin, Alain Darte


Copyright information

© 1996 Springer-Verlag Berlin Heidelberg

About this chapter

Cite this chapter

Coelho, F., Germain, C., Pazat, JL. (1996). State of the art in compiling HPF. In: Perrin, GR., Darte, A. (eds) The Data Parallel Programming Model. Lecture Notes in Computer Science, vol 1132. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-61736-1_45


  • DOI: https://doi.org/10.1007/3-540-61736-1_45

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-61736-5

  • Online ISBN: 978-3-540-70646-5
