Maximal Static Expansion

  • Denis Barthou
  • Albert Cohen
  • Jean-François Collard


Memory expansion is a classical means of extracting parallelism from imperative programs. However, current techniques require a runtime mechanism to restore the data flow whenever the expansion maps two definitions reaching the same use to two different memory locations (e.g., the φ functions of the SSA framework). This paper presents an expansion framework for any type of data structure in any imperative program, without any need for dynamic data-flow restoration. The key idea is to group together the definitions that reach a common use; we show that such an expansion boils down to mapping each group to a single memory cell.
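The grouping idea can be illustrated on a small loop (a hand-worked sketch, not the paper's algorithm; the arrays A, B, C and the even/odd predicate are illustrative assumptions). Full single-assignment expansion would give the two definitions of tmp distinct cells and need a runtime φ at the use; grouping the two definitions, which reach the same use, maps them to one cell per iteration, so the loop is parallelizable and the use reads a statically known location:

```c
#define N 8

/* Original program: the scalar tmp is reused across iterations,
 * so output and anti dependences on tmp serialize the loop. */
void original(const int *A, const int *B, int *C) {
    int tmp = 0;
    for (int i = 0; i < N; i++) {
        if (i % 2 == 0) tmp = A[i];  /* S1: definition */
        else            tmp = B[i];  /* S2: definition */
        C[i] = tmp;                  /* S3: use */
    }
}

/* Static expansion: S1 and S2 both reach the use S3 of the same
 * iteration, so they are grouped and mapped to the SAME cell tmp[i].
 * Each iteration writes a private cell (iterations are independent),
 * and S3 reads tmp[i] directly -- no runtime phi function is needed
 * to restore the data flow. */
void expanded(const int *A, const int *B, int *C) {
    int tmp[N];
    for (int i = 0; i < N; i++) {
        if (i % 2 == 0) tmp[i] = A[i];
        else            tmp[i] = B[i];
        C[i] = tmp[i];
    }
}
```

Both versions compute the same C; only the memory mapping of the intermediate value differs.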

Keywords: automatic parallelization; array expansion; reaching definition analysis



Copyright information

© Plenum Publishing Corporation 2000

Authors and Affiliations

  • Denis Barthou, PRiSM, Université de Versailles, Versailles, France
  • Albert Cohen, PRiSM, Université de Versailles, Versailles, France
  • Jean-François Collard, PRiSM, Université de Versailles, Versailles, France
