Effects of Compiler Optimizations in OpenMP to CUDA Translation

  • Amit Sabne
  • Putt Sakdhnagool
  • Rudolf Eigenmann
Part of the Lecture Notes in Computer Science book series (LNCS, volume 7312)

Abstract

One thrust of the OpenMP standard development focuses on support for accelerators. An important question is whether OpenMP extensions are needed for this purpose, and how much performance difference they make. The same question is relevant for related efforts in support of accelerators, such as OpenACC. The present paper pursues this question. We analyze the effects of individual optimization techniques in OpenMPC, a previously developed system that translates OpenMP programs into GPU code. We also propose a new tuning strategy, called Modified IE (MIE), which overcomes several inefficiencies of the original OpenMPC tuning scheme. Furthermore, MIE addresses the challenge of tuning in the presence of runtime variations caused by the memory transfers between the CPU and the GPU. On average, MIE performs 11% better than the previous tuning system while keeping the time complexity of the tuning process polynomial.

Keywords

GPU · CUDA · Tuning System · Compiler Optimizations


References

  1. Lee, S., Eigenmann, R.: OpenMPC: Extended OpenMP programming and tuning for GPUs. In: Proceedings of the 2010 ACM/IEEE International Conference for High Performance Computing, Networking, Storage and Analysis, SC 2010, pp. 1–11. IEEE Computer Society, Washington, DC (2010)
  2. Blume, W., Eigenmann, R.: Performance analysis of parallelizing compilers on the Perfect Benchmarks programs. IEEE Transactions on Parallel and Distributed Systems 3, 643–656 (1992)
  3. Pan, Z., Eigenmann, R.: Fast and effective orchestration of compiler optimizations for automatic performance tuning. In: Proceedings of the International Symposium on Code Generation and Optimization, CGO 2006, pp. 319–332. IEEE Computer Society, Washington, DC (2006)
  4. Triantafyllis, S., Vachharajani, M., Vachharajani, N., August, D.I.: Compiler optimization-space exploration. In: Proceedings of the International Symposium on Code Generation and Optimization: Feedback-Directed and Runtime Optimization, CGO 2003, pp. 204–215. IEEE Computer Society, Washington, DC (2003)
  5. Pinkers, R.P.J., Knijnenburg, P.M.W., Haneda, M., Wijshoff, H.A.G.: Statistical selection of compiler options. In: Proceedings of the IEEE Computer Society's 12th Annual International Symposium on Modeling, Analysis, and Simulation of Computer and Telecommunications Systems, MASCOTS 2004, pp. 494–501. IEEE Computer Society, Washington, DC (2004)
  6. Cooper, K.D., Subramanian, D., Torczon, L.: Adaptive optimizing compilers for the 21st century. J. Supercomput. 23, 7–22 (2002)
  7. OpenMP 3.1 released (July 2011), http://openmp.org/wp/openmp-31-released/
  8. OpenACC (November 2011), http://www.openacc-standard.org/

Copyright information

© Springer-Verlag Berlin Heidelberg 2012

Authors and Affiliations

  • Amit Sabne
  • Putt Sakdhnagool
  • Rudolf Eigenmann
  1. Purdue University, West Lafayette, USA