Putting Polyhedral Loop Transformations to Work

  • Cédric Bastoul
  • Albert Cohen
  • Sylvain Girbal
  • Saurabh Sharma
  • Olivier Temam
Part of the Lecture Notes in Computer Science book series (LNCS, volume 2958)

Abstract

We seek to extend the scope and efficiency of iterative compilation techniques by searching not only for program transformation parameters but for the most appropriate transformations themselves. For that purpose, we need a generic way to express program transformations and compositions of transformations. In this article, we introduce a framework for the polyhedral representation of a wide range of transformations in a unified way. We also show that it is possible to generate efficient code after the application of polyhedral program transformations. Finally, we demonstrate an implementation of the polyhedral representation and code generation techniques in the Open64/ORC compiler.
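The abstract's core idea, representing loop iterations as integer points of a polyhedron and transformations as affine maps on those points, can be illustrated with a minimal sketch. This is not the paper's implementation (which targets the Open64/ORC compiler); it is a toy example of the standard polyhedral view, with all names and constants chosen here for illustration.

```python
# A loop nest's iteration domain as affine inequalities, and loop
# interchange as a unimodular map on iteration vectors.

N = 4

# Iteration domain of:  for i in 0..N-1: for j in 0..i: S(i, j)
# encoded as constraints  a*i + b*j + c >= 0.
A = [
    ( 1,  0,  0),      # i >= 0
    (-1,  0,  N - 1),  # N-1 - i >= 0
    ( 0,  1,  0),      # j >= 0
    ( 1, -1,  0),      # i - j >= 0
]

def in_domain(i, j):
    return all(a * i + b * j + c >= 0 for a, b, c in A)

# Enumerate the integer points of the polyhedron (statement instances).
original = [(i, j) for i in range(N) for j in range(N) if in_domain(i, j)]

# Loop interchange is the unimodular transformation (i, j) -> (j, i).
T = ((0, 1), (1, 0))
transformed = sorted(
    (T[0][0] * i + T[0][1] * j, T[1][0] * i + T[1][1] * j)
    for (i, j) in original
)

# The transformation reorders the scan of the domain but preserves the
# set of statement instances.
assert set(transformed) == {(j, i) for (i, j) in original}
```

In the framework the paper describes, such transformations are composed symbolically on the polyhedral representation, and efficient loop code is then regenerated from the transformed polyhedra.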




Copyright information

© Springer-Verlag Berlin Heidelberg 2004

Authors and Affiliations

  • Cédric Bastoul (1, 3)
  • Albert Cohen (1)
  • Sylvain Girbal (1, 2, 4)
  • Saurabh Sharma (1)
  • Olivier Temam (2)
  1. A3 group, INRIA Rocquencourt, France
  2. LRI, Paris South University, France
  3. PRiSM, University of Versailles, France
  4. LIST, CEA Saclay, France
