Abstract
Integrating multiple types of compute elements and memories in a single system requires proper support at the system-software level, including the operating system (OS), compilers, and drivers. The OS schedules work on the different compute elements and manages memory operations across multiple memory pools, including page migration. Compilers and programming languages provide the tools for exploiting advanced architectural features. In this paper we encourage code developers to work with experimental compiler versions and OpenMP standard extensions designed for hybrid OpenPOWER nodes. Specifically, we focus on nested parallelism and Unified Memory as key elements for efficient system-wide programming of the CPU and GPU resources of OpenPOWER systems. We provide implementation details through code samples and discuss the limitations of the presented approaches.
Acknowledgement
This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under contract DE-AC52-07NA27344 (LLNL-CONF-730616) and supported by the Office of Science, Office of Advanced Scientific Computing Research.
Copyright information
© 2017 Springer International Publishing AG
Cite this paper
Grinberg, L., Bertolli, C., Haque, R. (2017). Hands on with OpenMP4.5 and Unified Memory: Developing Applications for IBM's Hybrid CPU + GPU Systems (Part II). In: de Supinski, B., Olivier, S., Terboven, C., Chapman, B., Müller, M. (eds) Scaling OpenMP for Exascale Performance and Portability. IWOMP 2017. Lecture Notes in Computer Science, vol 10468. Springer, Cham. https://doi.org/10.1007/978-3-319-65578-9_2
Print ISBN: 978-3-319-65577-2
Online ISBN: 978-3-319-65578-9