Towards an Energy-Aware Framework for Application Development and Execution in Heterogeneous Parallel Architectures

  • Karim Djemame
  • Richard Kavanagh
  • Vasilios Kelefouras
  • Adrià Aguilà
  • Jorge Ejarque
  • Rosa M. Badia
  • David García Pérez
  • Clara Pezuela
  • Jean-Christophe Deprez
  • Lotfi Guedria
  • Renaud De Landtsheer
  • Yiannis Georgiou
Chapter

Abstract

The goal of the Transparent heterogeneous hardware Architecture deployment for eNergy Gain in Operation (TANGO) project is to characterise the factors that affect power consumption in software development and operation for Heterogeneous Parallel Architecture (HPA) environments. Its main contribution is the combination of requirements engineering and design modelling for self-adaptive software systems with power-consumption awareness in these environments. Energy efficiency and application quality factors are integrated across the application lifecycle (design, implementation and operation). To support this, the key novelty of the project is a reference architecture and its implementation. Moreover, the project provides a programming model with built-in support for a range of hardware architectures, including heterogeneous clusters, heterogeneous chips and programmable logic devices. This leads to a new cross-layer programming approach for heterogeneous parallel hardware architectures featuring software and hardware modelling. The architecture supports optimization of application power consumption, performance, data location and time-criticality, as well as security and dependability requirements on the target hardware.
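To make the programming-model idea concrete, the following is a minimal, purely illustrative sketch of annotation-driven task offloading in the spirit of task-based frameworks such as COMPSs. All names here (`task`, `ToyRuntime`, `submit_on`) are hypothetical and do not represent the actual TANGO API; the sketch only shows how a task can declare the hardware it may run on, leaving device selection to a runtime.

```python
# Toy task-based model: a decorator declares which devices a task may
# use, and a runtime picks the first declared device that the platform
# actually provides. Hypothetical names; not the TANGO programming model.
from dataclasses import dataclass
from typing import Callable, Tuple, Any


@dataclass
class Task:
    fn: Callable            # the work to execute
    devices: Tuple[str, ...]  # hardware the task is allowed to use
    args: Tuple[Any, ...] = ()


class ToyRuntime:
    """Dispatches each task to the first available declared device."""

    def __init__(self, available):
        self.available = set(available)
        self.log = []  # records (task name, chosen device) per submission

    def submit(self, t: Task):
        # Pick the first declared device present on this platform,
        # falling back to the CPU if none match.
        device = next((d for d in t.devices if d in self.available), "cpu")
        self.log.append((t.fn.__name__, device))
        return t.fn(*t.args)


def task(*, devices=("cpu",)):
    """Decorator marking a function as an offloadable task."""
    def wrap(fn):
        def submit_on(runtime, *args):
            return runtime.submit(Task(fn, devices, args))
        return submit_on
    return wrap


@task(devices=("gpu", "cpu"))
def saxpy(a, x, y):
    # Scalar a times x plus y, element-wise.
    return [a * xi + yi for xi, yi in zip(x, y)]


rt = ToyRuntime(available={"cpu", "gpu"})
result = saxpy(rt, 2.0, [1.0, 2.0], [3.0, 4.0])  # runs on "gpu" here
```

The point of the separation is that the task body stays device-agnostic while the runtime, not the programmer, makes the placement decision, which is where energy- and performance-aware policies can be plugged in.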

Acknowledgements

This work has been supported by the European Commission through the Horizon 2020 Research and Innovation programme under contract 687584 (TANGO project), by the Spanish Government under contract TIN2015-65316 and grant SEV-2015-0493 (Severo Ochoa Program), and by the Generalitat de Catalunya under contracts 2014-SGR-1051 and 2014-SGR-1272.

Copyright information

© Springer International Publishing AG, part of Springer Nature 2019

Authors and Affiliations

  • Karim Djemame (1)
  • Richard Kavanagh (1)
  • Vasilios Kelefouras (1)
  • Adrià Aguilà (2)
  • Jorge Ejarque (2)
  • Rosa M. Badia (2)
  • David García Pérez (3)
  • Clara Pezuela (3, corresponding author)
  • Jean-Christophe Deprez (4)
  • Lotfi Guedria (4)
  • Renaud De Landtsheer (4)
  • Yiannis Georgiou (5)

  1. School of Computing, University of Leeds, Leeds, UK
  2. Barcelona Supercomputing Center (BSC), Barcelona, Spain
  3. Atos Research & Innovation, Atos Spain SA, Madrid, Spain
  4. CETIC, Charleroi, Belgium
  5. Bull Atos Technologies, Les Clayes-sous-Bois, France