International Journal of Parallel Programming, Volume 45, Issue 2, pp 262–282

Automatic CPU/GPU Generation of Multi-versioned OpenCL Kernels for C++ Scientific Applications

  • Rafael Sotomayor
  • Luis Miguel Sanchez
  • Javier Garcia Blas
  • Javier Fernandez
  • J. Daniel Garcia

Abstract

Parallelism has become one of the most widely adopted paradigms for improving performance. However, it forces software developers to adapt applications and coding mechanisms to exploit the available computing devices. Legacy source code needs to be rewritten to take advantage of multi-core and many-core computing devices. Writing parallel applications in a traditional way is hard, expensive, and time consuming. Furthermore, there is often more than one possible transformation or optimization that can be applied to a single piece of legacy code, so many parallel versions of the same original sequential code need to be considered. In this paper, we describe an automatic parallel source code generation workflow (REWORK) for parallel heterogeneous platforms. REWORK automatically identifies promising kernels in legacy C++ source code and generates multiple specialized kernel versions, selecting the most adequate one based on both static source code characteristics and the target platform.
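To illustrate the multi-versioning idea described above, the following is a minimal, hypothetical sketch (not the REWORK implementation, whose details are in the paper body): a regular loop of the kind such a tool might flag as a kernel candidate, two interchangeable "versions" of that kernel, and a selector that picks one from a static characteristic. All names (`saxpy_seq`, `saxpy_alt`, `select_version`) are invented for this example.

```cpp
#include <cstddef>
#include <functional>
#include <vector>

// Hypothetical "kernel": an element-wise saxpy loop, the kind of regular
// computation a kernel-identification pass might flag as a candidate.
std::vector<float> saxpy_seq(float a, const std::vector<float>& x,
                             const std::vector<float>& y) {
    std::vector<float> r(x.size());
    for (std::size_t i = 0; i < x.size(); ++i) r[i] = a * x[i] + y[i];
    return r;
}

// A second, semantically equivalent version of the same kernel, standing in
// for an alternative generated variant (e.g. one targeting a CPU OpenCL
// device rather than a GPU).
std::vector<float> saxpy_alt(float a, const std::vector<float>& x,
                             const std::vector<float>& y) {
    std::vector<float> r(y);  // start from y, then accumulate a*x in place
    for (std::size_t i = 0; i < x.size(); ++i) r[i] += a * x[i];
    return r;
}

using Kernel = std::function<std::vector<float>(
    float, const std::vector<float>&, const std::vector<float>&)>;

// Illustrative selector: choose a version from a static characteristic.
// Here the problem size stands in for the richer source-code and platform
// metrics a real multi-versioning workflow would consult.
Kernel select_version(std::size_t n) {
    return (n < 1024) ? Kernel(saxpy_seq) : Kernel(saxpy_alt);
}
```

In a real workflow the selection criteria would combine static code metrics with target-device characteristics; this sketch only shows the shape of dispatching among pre-generated versions.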

Keywords

OpenCL · C++ · Multi-versioning · Code generation


Acknowledgments

The research leading to these results has received funding from the European Union Seventh Framework Programme (FP7/2007-2013) under grant agreement n. 609666 (REPARA) and by the Spanish Ministry of Economics and Competitiveness under the grant TIN2013-41350-P.


Copyright information

© Springer Science+Business Media New York 2016

Authors and Affiliations

  • Rafael Sotomayor (1)
  • Luis Miguel Sanchez (1)
  • Javier Garcia Blas (1)
  • Javier Fernandez (1)
  • J. Daniel Garcia (1)
  1. University Carlos III of Madrid, Leganes, Spain
