Using Compiler Directives for Accelerating CFD Applications on GPUs

  • Haoqiang Jin
  • Mark Kellogg
  • Piyush Mehrotra
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 7312)


As the current trend of parallel systems is towards a cluster of multi-core nodes enhanced with accelerators, software development for such systems has become a major challenge. Both low-level and high-level programming models have been developed to address complex hierarchical structures at different hardware levels and to ease the programming effort. However, achieving the desired performance goal is still not a simple task. In this study, we describe our experience with using the accelerator directives developed by the Portland Group to port a computational fluid dynamics (CFD) application benchmark to a general-purpose GPU platform. Our work focuses on the usability of this approach and examines the programming effort and achieved performance on two Nvidia GPU-based systems. The study shows very promising results in terms of programmability as well as performance when compared to other approaches such as the CUDA programming model.


Keywords: GPU Programming · Accelerator Directives · Performance Evaluation





Copyright information

© Springer-Verlag Berlin Heidelberg 2012

Authors and Affiliations

  • Haoqiang Jin (1)
  • Mark Kellogg (1)
  • Piyush Mehrotra (1)

  1. NAS Division, NASA Ames Research Center, Moffett Field, USA
