Automatic parallelization of Fortran programs in the presence of procedure calls

  • Rémi Triolet
  • Paul Feautrier
  • François Irigoin
Part of the Lecture Notes in Computer Science book series (LNCS, volume 213)


Scientific programs must be transformed and adapted to supercomputers to be executed efficiently. Automatic parallelization has been a very active research field for the past few years and effective restructuring compilers have been developed.

CALL statements are not properly handled by current automatic tools, even though it has recently been shown that parallel execution of subroutines provides good speedups on multiprocessor machines.

In this paper, a method to parallelize programs with CALL statements is described. It is based on the computation of subroutine effects. The basic principles of the method are to keep the general structure of the program, to compute the effects of called procedures on their callers, and to find and use predicates on scalar variables to improve array handling.

First it is shown that some knowledge of the CALL statement context is necessary to compute a property of a procedure accurately. This leads us to define the notions of static procedure occurrence and static occurrence tree.
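The distinction between a procedure and its static occurrences can be illustrated with a toy call graph (a minimal sketch; the names `CALL_SITES` and `occurrences` are ours, not the paper's):

```python
# Toy call graph: each procedure lists its call sites in textual order.
# A procedure called from two different sites gives rise to two distinct
# static occurrences, each identified by its chain of call sites.
CALL_SITES = {
    "MAIN":  ["SOLVE", "SOLVE"],   # MAIN contains two CALL SOLVE statements
    "SOLVE": ["DOT"],
    "DOT":   [],
}

def occurrences(proc, path=""):
    """Enumerate the static occurrence tree rooted at `proc`."""
    me = path + "/" + proc
    result = [me]
    for site, callee in enumerate(CALL_SITES[proc]):
        result += occurrences(callee, me + "[%d]" % site)
    return result

# 3 procedures, but 5 static occurrences: SOLVE and DOT are each
# analysed once per calling context.
print(occurrences("MAIN"))
```

Analysing each occurrence separately lets the compiler exploit context-specific information, such as the actual arguments at that call site, instead of a single worst-case summary per procedure.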

Then a new concept, called region, is introduced to define precisely the effect of a procedure execution. These regions allow us to describe, in a calling procedure, the parts of arrays which are read or written by the called procedure executions. The main lines of the effect computation algorithm are given. It is based on a bottom-up analysis of the static occurrence tree.
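A loose illustration of the idea (the paper's regions are more general, describing sets of array elements; the one-dimensional intervals and names below are our simplification):

```python
# Simplified stand-in for a region: the interval of subscripts of an
# array that a procedure execution reads or writes.

def written_region(n):
    """Write effect of a hypothetical 'CALL INIT(T, N)' that assigns
    T(1), ..., T(N): the region is parameterised by the scalar N."""
    return (1, n)

def regions_disjoint(r1, r2):
    (lo1, hi1), (lo2, hi2) = r1, r2
    return hi1 < lo2 or hi2 < lo1

# Two calls writing T(1..10) and T(11..20) have disjoint write regions,
# so there is no dependence on T and the two CALLs may run in parallel.
first = written_region(10)
second = (11, 20)
print(regions_disjoint(first, second))   # True
```

Summarising a callee by its regions, rather than conservatively assuming it may touch every element of every argument array, is what allows a dependence test between whole CALL statements.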

Like regions, the computation and use of predicates among scalar variables to improve array handling are new in restructuring compilers. Classical semantic analysis methods are adapted to our special needs and extended to the interprocedural case. It is also briefly explained how the predicates these methods provide can be used.

Finally, the introduction of our method in a restructuring compiler is reported.





Copyright information

© Springer-Verlag Berlin Heidelberg 1986

Authors and Affiliations

  • Rémi Triolet (1)
  • Paul Feautrier (2)
  • François Irigoin (3)
  1. Center for Supercomputer R & D, University of Illinois, USA
  2. Université Paris VI, MASI, France
  3. Ecole des Mines de Paris, Centre d'Automatique et Informatique, France
