Abstract
Scientific programs must be transformed and adapted to supercomputers to be executed efficiently. Automatic parallelization has been a very active research field for the past few years and effective restructuring compilers have been developed.
Although CALL statements are not properly handled by current automatic tools, it has recently been shown that parallel execution of subroutines provides good speedups on multiprocessor machines.
In this paper, a method to parallelize programs with CALL statements is described. It is based on the computation of subroutine effects. The basic principles of the method are to keep the general structure of the program, to compute the effects of called procedures on those calling and to find out and use predicates on scalar variables to improve array handling.
First it is shown that some knowledge of the CALL statement context is necessary to accurately compute a property of a procedure. This leads us to define the notions of static procedure occurrence and static occurrence tree.
Then a new concept, called region, is introduced to define precisely the effect of a procedure execution. These regions describe, in a calling procedure, the parts of arrays that are read or written by executions of the called procedure. The outline of the effect computation algorithm is given; it is based on a bottom-up analysis of the static occurrence tree.
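The bottom-up effect computation can be illustrated with a minimal sketch. All names here (`Region`, `Occurrence`, `summarize_effects`) and the box representation of regions are illustrative assumptions, not the paper's actual data structures; the paper's regions are more general than per-dimension bounding boxes.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Region:
    """A set of array elements, approximated here as a box of per-dimension bounds."""
    array: str
    bounds: tuple  # ((lo, hi), ...), one pair per dimension

    def union_box(self, other):
        # Over-approximate the union of two boxes by their bounding box.
        assert self.array == other.array
        merged = tuple((min(a, c), max(b, d))
                       for (a, b), (c, d) in zip(self.bounds, other.bounds))
        return Region(self.array, merged)

@dataclass
class Occurrence:
    """A static procedure occurrence: one node of the static occurrence tree."""
    name: str
    local_reads: list = field(default_factory=list)   # Regions read directly
    local_writes: list = field(default_factory=list)  # Regions written directly
    callees: list = field(default_factory=list)       # child Occurrences

def merge(regions):
    """Collapse regions on the same array into one over-approximating region."""
    by_array = {}
    for r in regions:
        by_array[r.array] = by_array[r.array].union_box(r) if r.array in by_array else r
    return list(by_array.values())

def summarize_effects(occ):
    """Bottom-up pass: the effect of a node is its own accesses plus
    the already-summarized effects of its callees."""
    reads, writes = list(occ.local_reads), list(occ.local_writes)
    for child in occ.callees:
        c_reads, c_writes = summarize_effects(child)
        reads += c_reads
        writes += c_writes
    return merge(reads), merge(writes)
```

For example, a main procedure that reads `A(5:20)` and calls a subroutine writing `A(1:10)` is summarized as reading `A(5:20)` and writing `A(1:10)` without expanding the call inline.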
Like regions, the computation and use of predicates among scalar variables to improve array handling are new in restructuring compilers. Classical semantic analysis methods are adapted to our special needs and extended to the interprocedural case. It is also briefly explained how the predicates these methods provide can be used.
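One way such predicates help array handling is by bounding a symbolic array region so that a disjointness test can succeed. The following sketch is a hypothetical illustration, not the paper's method: it assumes predicates are available as known upper bounds on scalar variables, and uses a crude default lower bound of 0 for unconstrained symbols.

```python
# Deciding whether two 1-D array sections are disjoint, given a predicate
# on a scalar variable. Symbolic bounds are represented by variable names.
def disjoint_1d(write_bounds, read_bounds, predicate):
    """write_bounds, read_bounds: (lo, hi), where a bound may be a symbolic name.
    predicate: dict mapping symbolic names to known upper bounds."""
    def upper(x):
        return predicate.get(x, float("inf")) if isinstance(x, str) else x
    def lower(x):
        return 0 if isinstance(x, str) else x  # crude default for symbols
    w_lo, w_hi = lower(write_bounds[0]), upper(write_bounds[1])
    r_lo, r_hi = lower(read_bounds[0]), upper(read_bounds[1])
    return w_hi < r_lo or r_hi < w_lo

# With the predicate N <= 100, a write to A(1:N) cannot touch a read of A(101:200):
assert disjoint_1d((1, "N"), (101, 200), {"N": 100})
# Without the predicate, the analysis must conservatively assume overlap:
assert not disjoint_1d((1, "N"), (101, 200), {})
```

The same idea scales to systems of linear inequalities, where emptiness of the intersection is decided by elimination methods rather than interval comparison.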
Finally, the introduction of our method in a restructuring compiler is reported.
Copyright information
© 1986 Springer-Verlag Berlin Heidelberg
Triolet, R., Feautrier, P., Irigoin, F. (1986). Automatic parallelization of fortran programs in the presence of procedure calls. In: Robinet, B., Wilhelm, R. (eds) ESOP 86. ESOP 1986. Lecture Notes in Computer Science, vol 213. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-16442-1_16
Print ISBN: 978-3-540-16442-5
Online ISBN: 978-3-540-39782-3