International Journal of Parallel Programming

Volume 20, Issue 1, pp 23–53

Dataflow analysis of array and scalar references

  • Paul Feautrier
Article

Abstract

Given a program written in a simple imperative language (assignment statements, for loops, affine indices and loop limits), this paper presents an algorithm for analyzing the patterns along which values flow as the execution proceeds. For each array or scalar reference, the result is the name and iteration vector of the source statement as a function of the iteration vector of the referencing statement. The paper discusses several applications of the method: conversion of a program to a set of recurrence equations, array and scalar expansion, program verification, and parallel program construction.
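
As a purely illustrative companion to the abstract, the sketch below shows what a "source" is for a toy two-level loop nest, computed by brute-force enumeration of iterations. The loop nest, the statement names S1 and S2, and the affine subscripts are invented for this illustration and do not come from the paper; the paper's algorithm derives the same source functions symbolically, by parametric integer programming, without executing the loops.

    # Minimal sketch (assumed example, not the paper's method): compute, by
    # explicit enumeration, the source of each value read in a toy affine loop
    # nest.  The example program is
    #
    #   for i = 0 .. N-1
    #     for j = 0 .. N-1
    #       S1:  a[i + j] = ...            (write)
    #       S2:  ...      = a[i + j - 1]   (read)
    #
    # For each read executed by S2 at iteration vector (i, j), record the
    # statement and iteration vector of the last write to that array cell, or
    # "input" when the value comes from outside the loop nest.
    N = 4
    last_write = {}   # array cell -> (statement name, iteration vector)
    sources = {}      # (reading statement, iteration vector) -> source

    for i in range(N):
        for j in range(N):
            last_write[i + j] = ("S1", (i, j))   # S1 writes a[i + j]
            cell = i + j - 1                     # S2 reads a[i + j - 1]
            sources[("S2", (i, j))] = last_write.get(cell, "input")

    for (stmt, it), src in sorted(sources.items()):
        print(f"source of the read in {stmt} at {it}: {src}")

For this example the enumeration shows that, whenever j >= 1, the read in S2 at (i, j) is sourced by the write of S1 at (i, j-1). The paper obtains source functions of exactly this closed form symbolically, valid for every value of N, rather than by enumeration.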

Key Words

Dataflow analysis, semantics analysis, array expansion

Copyright information

© Plenum Publishing Corporation 1991

Authors and Affiliations

  • Paul Feautrier
    1. Laboratoire MASI, Université P. et M. Curie, Paris Cedex 05, France
