
A methodology correlating code optimizations with data memory accesses, execution time and energy consumption

Published in The Journal of Supercomputing

Abstract

The proliferation of data and of electronic devices has put software with low execution time and energy consumption in the spotlight. The key to optimizing software is the correct choice, ordering and parameterization of optimization transformations, a problem that has remained open in compilation research for decades, for several reasons. First, most transformations are interdependent, so addressing them separately is not effective. Second, it is very hard to couple the transformation parameters to the processor architecture (e.g., cache size) and to the algorithm characteristics (e.g., data reuse); therefore, compiler designers and researchers either do not take them into account at all or do so only partly. Third, the exploration space, i.e., the set of all optimization configurations that have to be explored, is huge, so searching it is impractical. In this paper, the above problems are addressed for data-dominant affine loop kernels. A novel methodology is presented that reduces the exploration space of six code optimizations by many orders of magnitude. The objective can be execution time (ET), energy consumption (E) or the number of L1, L2 and main memory accesses. The exploration space is reduced in two phases: first, by applying a novel register blocking algorithm and a novel loop tiling algorithm and, second, by computing the maximum and minimum ET/E values for each optimization set. The proposed methodology has been evaluated on both embedded and general-purpose CPUs and on seven well-known algorithms, achieving high memory access, speedup and energy consumption gains (from 1.17 up to 40) over the gcc compiler, hand-written optimized code and Polly. The exploration space from which the near-optimum parameters are selected is reduced by 17 up to 30 orders of magnitude.



Notes

  1. This paper is an extension of the conference paper 'A methodology for efficient code optimizations and memory management', presented at the ACM International Conference on Computing Frontiers 2018.


Acknowledgements

This work is partly supported by the European Commission under the H2020-ICT-2015 call, Contract 687584, Transparent heterogeneous hardware Architecture deployment for eNergy Gain in Operation (TANGO) project.

Author information


Corresponding author

Correspondence to Vasilios Kelefouras.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Kelefouras, V., Djemame, K. A methodology correlating code optimizations with data memory accesses, execution time and energy consumption. J Supercomput 75, 6710–6745 (2019). https://doi.org/10.1007/s11227-019-02880-z

