Locality-Aware Automatic Parallelization for GPGPU with OpenHMPP Directives


The use of GPUs for general-purpose computation has increased dramatically in recent years, driven by rising demands for computing power and the tremendous computing capacity that GPUs offer at low cost. Hence, new programming models have been developed to integrate these accelerators with high-level programming languages, giving rise to heterogeneous computing systems. Unfortunately, this heterogeneity is also exposed to the programmer, complicating its exploitation. This paper presents a new technique to automatically rewrite sequential programs into parallel counterparts targeting GPU-based heterogeneous systems. The original source code is analyzed through domain-independent computational kernels, which hide the complexity of implementation details by providing a non-statement-based, high-level, hierarchical representation of the application. Next, a locality-aware technique based on standard compiler transformations is applied to the original code through OpenHMPP directives. Two representative case studies from scientific computing have been selected: the three-dimensional discrete convolution and the single-precision general matrix multiplication (SGEMM). The effectiveness of our technique is corroborated by a performance evaluation on NVIDIA GPUs.






Acknowledgements

This research was supported by the Ministry of Economy and Competitiveness of Spain and FEDER Funds of the European Union (Projects TIN2010-16735 and TIN2013-42148-P), by the Galician Government under the Consolidation Program of Competitive Reference Groups (Reference GRC2013-055), and by the FPU Program of the Ministry of Education of Spain (Reference AP2008-01012). We want to acknowledge the staff of CAPS Entreprise for their support in carrying out this work, as well as Roberto R. Expósito for his help in configuring the pluton cluster for our experiments. Finally, we thank the anonymous reviewers for their suggestions, which helped improve the paper.

Author information



Corresponding author

Correspondence to José M. Andión.


About this article


Cite this article

Andión, J.M., Arenaz, M., Bodin, F. et al. Locality-Aware Automatic Parallelization for GPGPU with OpenHMPP Directives. Int J Parallel Prog 44, 620–643 (2016).



Keywords

  • Heterogeneous systems
  • Locality
  • Automatic parallelization
  • OpenHMPP
  • Domain-independent kernel