A Case Study for Performance Portability Using OpenMP 4.5

  • Rahulkumar Gayatri (Email author)
  • Charlene Yang
  • Thorsten Kurth
  • Jack Deslippe
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11381)

Abstract

In recent years, the HPC landscape has shifted away from traditional multi-core CPU systems toward energy-efficient architectures, such as many-core CPUs and accelerators like GPUs, to achieve high performance. The goal of performance portability is to enable developers to rapidly produce applications that run efficiently on a variety of these architectures, with little to no architecture-specific code adaptation required. We implement a key kernel from a materials science application using OpenMP 3.0, OpenMP 4.5, OpenACC, and CUDA on Intel architectures (Xeon and Xeon Phi) and NVIDIA GPUs (P100 and V100). We compare the performance of the OpenMP 4.5 implementation with that of the more architecture-specific implementations, examine the performance of the OpenMP 4.5 implementation on CPUs after back-porting, share our experience optimizing large reduction loops, and discuss the latest compiler status for OpenMP 4.5 and OpenACC.

Keywords

OpenMP 3.0 · OpenMP 4.5 · OpenACC · CUDA · Parallel programming models · P100 · V100 · Xeon Phi · Haswell

Acknowledgement

This research used resources of the Oak Ridge Leadership Computing Facility and the National Energy Research Scientific Computing Center (NERSC), which are supported by the Office of Science of the U.S. Department of Energy. While the use of the GPP kernel in this work was largely for exploration of performance portability strategies rather than of the kernel itself, JD acknowledges support for the discussions around BerkeleyGW and use of the GPP kernel from the Center for Computational Study of Excited-State Phenomena in Energy Materials (C2SEPEM), which is funded by the U.S. Department of Energy, Office of Science, Basic Energy Sciences, Materials Sciences and Engineering Division under Contract No. DE-AC02-05CH11231, as part of the Computational Materials Sciences Program.

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  • Rahulkumar Gayatri (1) — Email author
  • Charlene Yang (1)
  • Thorsten Kurth (1)
  • Jack Deslippe (1)

  1. National Energy Research Scientific Computing Center (NERSC), Lawrence Berkeley National Laboratory (LBNL), Berkeley, USA