Pre-exascale Architectures: OpenPOWER Performance and Usability Assessment for French Scientific Community

  • Gabriel Hautreux
  • Alfredo Buttari
  • Arnaud Beck
  • Victor Cameo
  • Dimitri Lecas
  • Dominique Aubert
  • Emeric Brun
  • Eric Boyer
  • Fausto Malvagi
  • Gabriel Staffelbach
  • Isabelle d’Ast
  • Joeffrey Legaux
  • Ghislain Lartigue
  • Gilles Grasseau
  • Guillaume Latu
  • Juan Escobar
  • Julien Bigot
  • Julien Derouillat
  • Matthieu Haefele
  • Nicolas Renon
  • Philippe Parnaudeau
  • Philippe Wautelet
  • Pierre-Francois Lavallee
  • Pierre Kestener
  • Remi Lacroix
  • Stephane Requena
  • Anthony Scemama
  • Vincent Moureau
  • Jean-Matthieu Etancelin
  • Yann Meurdesoif
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10524)

Abstract

Exascale implies a major prerequisite in terms of energy efficiency, as an improvement of an order of magnitude must be reached in order to stay within an acceptable envelope of 20 MW. To meet this objective while continuing to sustain performance, HPC architectures have to become denser, embedding many-core processors (with up to several hundred computing cores), and/or heterogeneous, that is, using graphics processors or FPGAs. These energy-saving constraints will also affect the underlying hardware architectures (e.g., memory and storage hierarchies, networks) as well as system software (runtimes, resource managers, file systems, etc.) and programming models. While some of these architectures, such as hybrid machines, have existed for a number of years and occupy noticeable ranks in the TOP500 list, they are still limited to a small number of scientific domains and, moreover, require a significant porting effort. However, recent developments in new programming paradigms (especially around OpenMP and OpenACC) make these architectures much more accessible to programmers. In order to make the most of these upcoming breakthrough technologies, GENCI and its partners have set up a technology watch group and lead collaborations with vendors, relying on HPC experts and early access to HPC solutions. The two main objectives are to provide guidance and to prepare the scientific communities for the challenges of exascale architectures.

This paper describes the work performed on the OpenPOWER platform, one of the platforms targeted for exascale.

Keywords

OpenPOWER assessment · Technological watch · OpenMP · OpenACC · Benchmarks · Usability · Programmability

Notes

Acknowledgments

GENCI thanks all its partners within the project for their support as well as IBM and nVIDIA experts and all the developers that contributed to the work performed on this platform.


Copyright information

© Springer International Publishing AG 2017

Authors and Affiliations

  • Gabriel Hautreux (1)
  • Alfredo Buttari (7)
  • Arnaud Beck (11)
  • Victor Cameo (9)
  • Dimitri Lecas (9)
  • Dominique Aubert (9)
  • Emeric Brun (10)
  • Eric Boyer (1)
  • Fausto Malvagi (10)
  • Gabriel Staffelbach (4)
  • Isabelle d’Ast (4)
  • Joeffrey Legaux (4)
  • Ghislain Lartigue (5)
  • Gilles Grasseau (11)
  • Guillaume Latu (2)
  • Juan Escobar (2)
  • Julien Bigot (2)
  • Julien Derouillat (2)
  • Matthieu Haefele (2)
  • Nicolas Renon (8)
  • Philippe Parnaudeau (1)
  • Philippe Wautelet (1)
  • Pierre-Francois Lavallee (1)
  • Pierre Kestener (1)
  • Remi Lacroix (1)
  • Stephane Requena (1)
  • Anthony Scemama (3)
  • Vincent Moureau (5)
  • Jean-Matthieu Etancelin (6)
  • Yann Meurdesoif (6)
  1. GENCI, Paris, France
  2. Maison de la Simulation, CEA, CNRS, Univ. Paris-Sud, UVSQ, Université Paris-Saclay, Gif-sur-Yvette, France
  3. Laboratoire de Chimie et Physique Quantiques, Université de Toulouse, CNRS, UPS, Toulouse, France
  4. CERFACS, Toulouse, France
  5. CORIA, CNRS UMR6614, Normandie Université, Saint-Etienne-du-Rouvray, France
  6. CReSTIC EA3804, ROMEO HPC Center, University of Reims Champagne-Ardenne, Reims, France
  7. IRIT, CNRS UMR5505, Université de Toulouse, Toulouse, France
  8. CALMIP, Université de Toulouse, Université Paul Sabatier, CNRS, UMS3667, Toulouse, France
  9. Observatoire Astronomique de Strasbourg, UMR 7550, Université de Strasbourg - CNRS, Strasbourg, France
  10. Den-Service d’Études des Réacteurs et de Mathématiques Appliquées (SERMA), CEA, Université Paris-Saclay, Gif-sur-Yvette, France
  11. Leprince-Ringuet Laboratory (LLR), CNRS/IN2P3, Ecole Polytechnique, Palaiseau, France