
Exact versus approximate array region analyses

  • Béatrice Creusillet
  • François Irigoin
Program Analysis
Part of the Lecture Notes in Computer Science book series (LNCS, volume 1239)

Abstract

Advanced program optimizations, such as array privatization, require precise array data flow analyses, usually relying on conservative over- (or may) and under- (or must) approximations of array element sets [25, 33, 21]. In a recent study [13], we proposed to compute exact sets whenever possible. But the advantages of this approach were still an open issue, which this paper discusses.

It is first recalled that must array region analyses cannot be defined on lattices. This implies that there is no best solution for such data flow problems, and that ad hoc solutions must be devised.

For that purpose, it is suggested to perform under- and over-approximate analyses simultaneously, and to enhance the results of must analyses with those of may analyses whenever the latter can be proved exact according to an exactness criterion. This is equivalent to our previous approach, and is more effective than relying solely on existing techniques such as widening and narrowing operators, which may fail to expose exact solutions even when such solutions are computable. This method is very general and could be applied to other types of analyses.
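The scheme described above can be illustrated on a deliberately simplified domain. The sketch below (an assumption for illustration, not the paper's actual convex-region implementation) uses 1-D intervals as array regions, carries an exactness flag through the over-approximate union, and promotes a may region to a must region when it is provably exact; the names `Region`, `union`, and `must_region` are hypothetical.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Region:
    """A 1-D array region [lo, hi] with an exactness flag.
    'exact' means the interval equals the true set of accessed elements."""
    lo: int
    hi: int
    exact: bool

def union(a: Region, b: Region) -> Region:
    """Over-approximate (may) union: the interval hull of a and b.
    Exactness criterion for this toy domain: the hull is exact iff both
    operands are exact and the intervals overlap or are adjacent, so the
    hull introduces no spurious elements."""
    lo, hi = min(a.lo, b.lo), max(a.hi, b.hi)
    exact = a.exact and b.exact and (min(a.hi, b.hi) + 1 >= max(a.lo, b.lo))
    return Region(lo, hi, exact)

def must_region(may: Region, fallback: Region) -> Region:
    """Promote the may (over-approximate) result to a must (under-
    approximate) result when it is provably exact; otherwise keep a
    conservative fallback under-approximation."""
    return may if may.exact else fallback

# Two writes covering A[0..4] and A[3..9]: the hull A[0..9] is exact,
# so it can also serve as the must region.
r = union(Region(0, 4, True), Region(3, 9, True))
# Disjoint writes A[0..2] and A[8..9]: the hull contains spurious
# elements, so exactness is lost and only the fallback is trusted.
s = union(Region(0, 2, True), Region(8, 9, True))
```

The point of the flag is that a single analysis produces both approximations: wherever exactness is preserved, the may and must regions coincide, which is what makes transformations such as array privatization applicable.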

Keywords

Array region analysis, semantical analysis, exactness


References

  1. Corinne Ancourt and François Irigoin. Scanning polyhedra with DO loops. In Symposium on Principles and Practice of Parallel Programming, pages 39–50, April 1991.
  2. V. Balasundaram and K. Kennedy. A technique for summarizing data access and its use in parallelism enhancing transformations. In International Conference on Programming Language Design and Implementation, pages 41–53, June 1989.
  3. Garrett Birkhoff. Lattice Theory, volume XXV of AMS Colloquium Publications. American Mathematical Society, Providence, Rhode Island, third edition, 1967.
  4. W. Blume and R. Eigenmann. Performance analysis of parallelizing compilers on the Perfect Benchmarks programs. IEEE Transactions on Parallel and Distributed Systems, 3(6):643–656, November 1992.
  5. François Bourdoncle. Sémantique des Langages Impératifs d'Ordre Supérieur et Interprétation Abstraite. PhD thesis, École Polytechnique, November 1992.
  6. D. Callahan and K. Kennedy. Analysis of interprocedural side effects in a parallel programming environment. Journal of Parallel and Distributed Computing, 5:517–550, 1988.
  7. Fabien Coelho. Compilation of I/O communications for HPF. In Frontiers'95, pages 102–109, February 1995. Available via http://www.cri.ensmp.fr/~coelho.
  8. Fabien Coelho and Corinne Ancourt. Optimal compilation of HPF remappings. Technical Report A-277-CRI, CRI, École des Mines de Paris, October 1995. To appear in JPDC in 1996.
  9. Patrick Cousot. Méthodes Itératives de Construction et d'Approximation de Points Fixes d'Opérateurs Monotones sur un Treillis, Analyse Sémantique des Programmes. PhD thesis, Institut National Polytechnique de Grenoble, March 1978.
  10. Patrick Cousot and Radhia Cousot. Abstract interpretation: a unified lattice model for static analysis of programs by construction or approximation of fixpoints. In Symposium on Principles of Programming Languages, pages 238–252, 1977.
  11. Patrick Cousot and Radhia Cousot. Higher-order abstract interpretation (and application to comportment analysis generalizing strictness, termination, projection and PER analysis of functional languages). In International Conference on Computer Languages, pages 95–112. IEEE Computer Society Press, May 1994.
  12. Béatrice Creusillet. Array regions for interprocedural parallelization and array privatization. Report A-279, CRI, École des Mines de Paris, November 1995. Available at http://www.cri.ensmp.fr/~creusil.
  13. Béatrice Creusillet and François Irigoin. Interprocedural array region analyses. In Languages and Compilers for Parallel Computing, number 1033 in Lecture Notes in Computer Science, pages 46–60. Springer-Verlag, August 1995.
  14. Béatrice Creusillet and François Irigoin. Interprocedural array region analyses. To appear in International Journal of Parallel Programming (special issue on LCPC), 24(6), 1996. Extended version of [13].
  15. Paul Feautrier. Array expansion. In International Conference on Supercomputing, pages 429–441, July 1988.
  16. Paul Feautrier. Dataflow analysis of array and scalar references. International Journal of Parallel Programming, 20(1):23–53, September 1991.
  17. Kyle Gallivan, William Jalby, and Dennis Gannon. On the problem of optimizing data transfers for complex memory systems. In International Conference on Supercomputing, pages 238–253, July 1988.
  18. S. Graham and M. Wegman. A fast and usually linear algorithm for global flow analysis. Journal of the ACM, 23(1):172–202, January 1976.
  19. Junjie Gu, Zhiyuan Li, and Gyungho Lee. Symbolic array dataflow analysis for array privatization and program parallelization. In Supercomputing, December 1995.
  20. C. Gunter and D. Scott. Denotational semantics. In Jan van Leeuwen, editor, Theoretical Computer Science, volume B, chapter 12. Elsevier Science Publishers, 1990.
  21. Mary Hall, Saman Amarasinghe, Brian Murphy, Shih-Wei Liao, and Monica Lam. Detecting coarse-grain parallelism using an interprocedural parallelizing compiler. In Supercomputing, December 1995.
  22. Mary Hall, Brian Murphy, Saman Amarasinghe, Shih-Wei Liao, and Monica Lam. Interprocedural analysis for parallelization. In Languages and Compilers for Parallel Computing, Lecture Notes in Computer Science, pages 61–80. Springer-Verlag, August 1995.
  23. Michael Hind, Michael Burke, Paul Carini, and Sam Midkiff. An empirical study of precise interprocedural array analysis. Scientific Programming, 3(3):255–271, May 1994.
  24. François Irigoin, Pierre Jouvelot, and Rémi Triolet. Semantical interprocedural parallelization: An overview of the PIPS project. In International Conference on Supercomputing, pages 144–151, June 1991.
  25. Zhiyuan Li. Array privatization for parallel execution of loops. In International Conference on Supercomputing, pages 313–322, July 1992.
  26. Dror E. Maydan, Saman P. Amarasinghe, and Monica S. Lam. Array data-flow analysis and its use in array privatization. In Symposium on Principles of Programming Languages, January 1993.
  27. Peter Mosses. Denotational semantics. In Jan van Leeuwen, editor, Theoretical Computer Science, volume B, chapter 11. Elsevier Science Publishers, 1990.
  28. William Pugh. A practical algorithm for exact array dependence analysis. Communications of the ACM, 35(8):102–114, August 1992.
  29. Peiyi Tang. Exact side effects for interprocedural dependence analysis. In International Conference on Supercomputing, pages 137–146, July 1993.
  30. Rémi Triolet. Contribution à la parallélisation automatique de programmes Fortran comportant des appels de procédures. PhD thesis, Paris VI University, 1984.
  31. Rémi Triolet, Paul Feautrier, and François Irigoin. Direct parallelization of call statements. In ACM SIGPLAN Symposium on Compiler Construction, pages 176–185, 1986.
  32. Peng Tu. Automatic Array Privatization and Demand-Driven Symbolic Analysis. PhD thesis, University of Illinois at Urbana-Champaign, 1995.
  33. Peng Tu and David Padua. Array privatization for shared and distributed memory machines (extended abstract). In Workshop on Languages and Compilers for Distributed Memory Machines, pages 64–67, 1992.
  34. Peng Tu and David Padua. Automatic array privatization. In Languages and Compilers for Parallel Computing, August 1993.
  35. David Wonnacott. Constraint-Based Array Dependence Analysis. PhD thesis, University of Maryland, College Park, August 1995.

Copyright information

© Springer-Verlag Berlin Heidelberg 1997

Authors and Affiliations

  • Béatrice Creusillet (1)
  • François Irigoin (1)
  1. École des Mines de Paris, Centre de Recherche en Informatique, Fontainebleau Cedex, France
