Hands on with OpenMP4.5 and Unified Memory: Developing Applications for IBM’s Hybrid CPU + GPU Systems (Part II)

  • Leopold Grinberg
  • Carlo Bertolli
  • Riyaz Haque
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10468)

Abstract

Integrating multiple types of compute elements and memories in a single system requires proper support at the system-software level, including the operating system (OS), compilers, and drivers. The OS schedules work on the different compute elements and manages memory operations across multiple memory pools, including page migration. Compilers and programming languages provide the tools for exploiting advanced architectural features. In this paper we encourage code developers to work with experimental versions of compilers and with OpenMP standard extensions designed for hybrid OpenPOWER nodes. Specifically, we focus on nested parallelism and Unified Memory as key elements for efficient system-wide programming of the CPU and GPU resources of OpenPOWER systems. We give implementation details using code samples and discuss the limitations of the presented approaches.

Keywords

OpenPOWER · HPC · Offloading · Directive-based programming · Nested parallelism

Acknowledgement

This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under contract DE-AC52-07NA27344 (LLNL-CONF-730616) and supported by the Office of Science, Office of Advanced Scientific Computing Research.

Copyright information

© Springer International Publishing AG 2017

Authors and Affiliations

  1. IBM Research, Yorktown Heights, USA
  2. LLNL, Livermore, USA