An overview of the PTRAN analysis system for multiprocessing

  • Fran Allen
  • Michael Burke
  • Philippe Charles
  • Ron Cytron
  • Jeanne Ferrante
Session 3: Software Environments For Parallel Machines
Part of the Lecture Notes in Computer Science book series (LNCS, volume 297)

Abstract

PTRAN (Parallel TRANslator) is a system for automatically restructuring sequential FORTRAN programs for execution on parallel architectures. This paper describes PTRAN-A: the currently operational analysis phase of PTRAN. The analysis is both broad and deep, incorporating interprocedural information into dependence analysis. The system is organized around a persistent database of program and procedure information. PTRAN incorporates several new, fast algorithms in a pragmatic design.
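
The abstract's mention of dependence analysis can be made concrete with a small example. The sketch below is a minimal, illustrative Python version of a conservative GCD-style test for one pair of linear array subscripts; it is not PTRAN's algorithm, and the function name and signature are hypothetical. The test reports "no dependence" only when the subscript equation provably has no integer solution, and "maybe" otherwise; production dependence tests additionally take loop bounds and direction/distance information into account.

    from math import gcd

    def gcd_test(a: int, b: int, c: int, d: int) -> bool:
        """Conservative GCD test for one subscript pair.

        Can A(a*i + b) and A(c*j + d) refer to the same element for some
        integer loop indices i and j?  False means provably never;
        True means a dependence cannot be ruled out by this test alone.
        """
        if a == 0 and c == 0:
            return b == d                   # both subscripts are constants
        g = gcd(abs(a), abs(c))
        return (d - b) % g == 0             # a*i - c*j = d - b is solvable in integers iff g divides d - b

    # A(2*i) written and A(2*j + 1) read can never touch the same element.
    assert gcd_test(2, 0, 2, 1) is False
    # A(2*i) written and A(4*j + 2) read may conflict, so assume a dependence.
    assert gcd_test(2, 0, 4, 2) is True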

Keywords

Formal Parameter · Constant Propagation · Dependence Analysis · Summary Information · Control Flow Graph

Copyright information

© Springer-Verlag Berlin Heidelberg 1988

Authors and Affiliations

  • Fran Allen (1)
  • Michael Burke (1)
  • Philippe Charles (1)
  • Ron Cytron (1)
  • Jeanne Ferrante (1)

  1. Computer Science Department, IBM T. J. Watson Research Center, Yorktown Heights
