
Compiler analysis for irregular problems in Fortran D

  • R. von Hanxleden
  • K. Kennedy
  • C. Koelbel
  • R. Das
  • J. Saltz
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 757)

Abstract

Many parallel programs require run-time support to implement the communication caused by indirect data references. In previous work, we have developed the inspector-executor paradigm to handle these cases. This paper extends that work by developing a dataflow framework to aid in placing the executor's communication calls. Our dataflow analysis determines when it is safe to combine communication statements, move them into less frequently executed code regions, or avoid them altogether in favor of reusing data that are already buffered locally.
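As a rough illustration of the inspector-executor paradigm the abstract refers to, the sketch below separates the two phases: an inspector scans the indirection array once to build a communication schedule, and an executor reuses that schedule to gather off-processor data before running the indirect loop. This is a minimal single-process Python sketch; the function names and data layout are invented for illustration and do not reflect the Fortran D runtime's actual interface.

```python
def inspector(index_array, my_lo, my_hi):
    """Inspector phase: scan the indirection array once and record which
    referenced global indices fall outside this processor's local block
    [my_lo, my_hi). The schedule maps each off-processor index to a slot
    in the local receive buffer."""
    off_proc = sorted({g for g in index_array if not (my_lo <= g < my_hi)})
    return {g: slot for slot, g in enumerate(off_proc)}

def executor(x, index_array, my_lo, my_hi, schedule):
    """Executor phase: 'communicate' the scheduled elements into a local
    buffer (this stands in for a gather call on a real machine), then run
    the indirect loop entirely on locally available data."""
    buf = [x[g] for g in schedule]  # insertion order matches the slots
    def fetch(g):
        return x[g] if my_lo <= g < my_hi else buf[schedule[g]]
    return sum(fetch(g) for g in index_array)

x = list(range(10))                # global array; this processor owns x[0:5]
ia = [1, 7, 3, 9, 7]               # indirect references x[ia[i]]
sched = inspector(ia, 0, 5)        # schedule built once...
r1 = executor(x, ia, 0, 5, sched)  # ...and reused across executor calls
r2 = executor(x, ia, 0, 5, sched)
```

The dataflow analysis described in the paper addresses exactly the reuse shown in the last two lines: deciding when a schedule and its buffered data remain valid, so that the gather can be hoisted out of a loop or elided entirely rather than repeated on every iteration.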

Keywords

Program text, index array, communication call, indirect reference, runtime support
These keywords were added by machine and not by the authors. This process is experimental and the keywords may be updated as the learning algorithm improves.

References

  1. T. W. Clark, R. v. Hanxleden, K. Kennedy, C. Koelbel, and L. R. Scott. Evaluating parallel languages for molecular dynamics computations. In Scalable High Performance Computing Conference, Williamsburg, VA, April 1992.
  2. R. Das, D. Mavriplis, J. Saltz, S. Gupta, and R. Ponnusamy. The design and implementation of a parallel unstructured Euler solver using software primitives, AIAA-92-0562. In Proceedings of the 30th Aerospace Sciences Meeting. AIAA, January 1992.
  3. R. Das, R. Ponnusamy, J. Saltz, and D. Mavriplis. Distributed memory compiler methods for irregular problems — data copy reuse and runtime partitioning. ICASE Report 91-73, Institute for Computer Applications in Science and Engineering, Hampton, VA, September 1991.
  4. T. Gross and P. Steenkiste. Structured dataflow analysis for arrays and its use in an optimizing compiler. Software—Practice and Experience, 20(2):133–155, February 1990.
  5. S. Hiranandani, K. Kennedy, C. Koelbel, U. Kremer, and C. Tseng. An overview of the Fortran D programming system. In Proceedings of the Fourth Workshop on Languages and Compilers for Parallel Computing, Santa Clara, CA, August 1991.
  6. S. Horwitz, T. Reps, and D. Binkley. Interprocedural slicing using dependence graphs. ACM Transactions on Programming Languages and Systems, 12(1):26–60, January 1990.
  7. J. Kam and J. Ullman. Global data flow analysis and iterative algorithms. Journal of the ACM, 23(1):159–171, January 1976.
  8. J. Knoop, O. Rüthing, and B. Steffen. Lazy code motion. In Proceedings of the ACM SIGPLAN '92 Conference on Programming Language Design and Implementation, San Francisco, CA, June 1992.
  9. C. Koelbel, P. Mehrotra, and J. Van Rosendale. Supporting shared data structures on distributed memory machines. In Proceedings of the Second ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming, Seattle, WA, March 1990.
  10. R. Mirchandaney, J. Saltz, R. Smith, D. Nicol, and K. Crowley. Principles of runtime support for parallel processors. In Proceedings of the Second International Conference on Supercomputing, St. Malo, France, July 1988.
  11. J. Saltz, K. Crowley, R. Mirchandaney, and H. Berryman. Run-time scheduling and execution of loops on message passing machines. Journal of Parallel and Distributed Computing, 8(2):303–312, 1990.

Copyright information

© Springer-Verlag Berlin Heidelberg 1993

Authors and Affiliations

  • R. von Hanxleden (1)
  • K. Kennedy (1)
  • C. Koelbel (1)
  • R. Das (2)
  • J. Saltz (2)
  1. Rice University, Houston
  2. University of Maryland, College Park
