A multi-GPU accelerated solver for the three-dimensional two-phase incompressible Navier-Stokes equations

Special Issue Paper

Abstract

The use of graphics hardware for general-purpose computations allows scientists to speed up their numerical codes enormously. We investigate the impact of this technology on our computational fluid dynamics solver for the three-dimensional two-phase incompressible Navier-Stokes equations, which is based on the level set technique and applies Chorin's projection approach. To our knowledge, this is the first time that a two-phase solver for the Navier-Stokes equations benefits from the computational power of modern graphics hardware. As part of our project, a Jacobi-preconditioned conjugate gradient solver for the pressure Poisson equation and the reinitialization of the level set function of our CPU-based code were ported to the graphics processing unit (GPU). Both are implemented in double precision and parallelized with the Message Passing Interface (MPI). We obtain speedups of 16.2 for the Poisson solver and 8.6 for the reinitialization on one GPU compared to a single CPU. Our implementation scales nearly perfectly on multiple GPUs of a distributed-memory cluster, resulting in speedups of 115.8 and 53.7 on eight GPUs of our cluster. Furthermore, our complete multi-GPU accelerated solver achieves a speedup of 69.6 on eight GPUs/CPUs.
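To illustrate the kind of GPU kernels involved, the following is a minimal CUDA sketch of a Jacobi-preconditioned conjugate gradient iteration for the pressure Poisson equation, using a 7-point Laplacian stencil in double precision. The names, grid dimensions, and the use of cuBLAS for the vector operations are illustrative assumptions and not taken from the paper; boundary conditions, the MPI domain decomposition, and the ghost-layer exchanges of the actual solver are omitted.

#include <cuda_runtime.h>
#include <cublas_v2.h>

#define NX 64
#define NY 64
#define NZ 64
#define N  (NX * NY * NZ)

__device__ __forceinline__ int idx(int i, int j, int k) {
    return i + NX * (j + NY * k);
}

// y = A*x for the 7-point Laplacian; boundary rows are placeholders (identity).
__global__ void apply_laplacian(const double *x, double *y, double inv_h2) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    int j = blockIdx.y * blockDim.y + threadIdx.y;
    int k = blockIdx.z * blockDim.z + threadIdx.z;
    if (i >= NX || j >= NY || k >= NZ) return;
    if (i == 0 || j == 0 || k == 0 || i == NX - 1 || j == NY - 1 || k == NZ - 1) {
        y[idx(i, j, k)] = x[idx(i, j, k)];   // real boundary treatment omitted
        return;
    }
    y[idx(i, j, k)] = inv_h2 * (6.0 * x[idx(i, j, k)]
        - x[idx(i - 1, j, k)] - x[idx(i + 1, j, k)]
        - x[idx(i, j - 1, k)] - x[idx(i, j + 1, k)]
        - x[idx(i, j, k - 1)] - x[idx(i, j, k + 1)]);
}

// z = M^{-1} r with the Jacobi preconditioner M = diag(A) = 6 / h^2.
__global__ void jacobi_precondition(const double *r, double *z, double h2) {
    int n = blockIdx.x * blockDim.x + threadIdx.x;
    if (n < N) z[n] = r[n] * h2 / 6.0;
}

// Host-side PCG loop; dot products and vector updates use cuBLAS.
// Assumes x = 0 on entry; multi-GPU halo exchanges are omitted.
void pcg_poisson(double *d_x, const double *d_b, double h, int max_iter, double tol) {
    double *d_r, *d_z, *d_p, *d_Ap;
    cudaMalloc(&d_r, N * sizeof(double));  cudaMalloc(&d_z, N * sizeof(double));
    cudaMalloc(&d_p, N * sizeof(double));  cudaMalloc(&d_Ap, N * sizeof(double));
    cublasHandle_t blas;  cublasCreate(&blas);

    dim3 block3(8, 8, 8), grid3((NX + 7) / 8, (NY + 7) / 8, (NZ + 7) / 8);
    int threads = 256, blocks = (N + threads - 1) / threads;
    double inv_h2 = 1.0 / (h * h);

    cublasDcopy(blas, N, d_b, 1, d_r, 1);                        // r = b - A*0 = b
    jacobi_precondition<<<blocks, threads>>>(d_r, d_z, h * h);   // z = M^{-1} r
    cublasDcopy(blas, N, d_z, 1, d_p, 1);                        // p = z

    double rz, rz_new, pAp;
    cublasDdot(blas, N, d_r, 1, d_z, 1, &rz);

    for (int it = 0; it < max_iter && rz > tol * tol; ++it) {
        apply_laplacian<<<grid3, block3>>>(d_p, d_Ap, inv_h2);
        cublasDdot(blas, N, d_p, 1, d_Ap, 1, &pAp);
        double alpha = rz / pAp, neg_alpha = -alpha, one = 1.0;
        cublasDaxpy(blas, N, &alpha, d_p, 1, d_x, 1);            // x += alpha*p
        cublasDaxpy(blas, N, &neg_alpha, d_Ap, 1, d_r, 1);       // r -= alpha*A*p
        jacobi_precondition<<<blocks, threads>>>(d_r, d_z, h * h);
        cublasDdot(blas, N, d_r, 1, d_z, 1, &rz_new);
        double beta = rz_new / rz;
        cublasDscal(blas, N, &beta, d_p, 1);                     // p = z + beta*p
        cublasDaxpy(blas, N, &one, d_z, 1, d_p, 1);
        rz = rz_new;
    }
    cublasDestroy(blas);
    cudaFree(d_r);  cudaFree(d_z);  cudaFree(d_p);  cudaFree(d_Ap);
}

In a distributed-memory setting such as the one described above, each MPI rank would own a subdomain of the pressure grid and exchange ghost layers before every stencil application, and the dot products would additionally require a global reduction (e.g. MPI_Allreduce) across ranks.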

Keywords

Computational fluid dynamics · Graphics hardware · Navier-Stokes equations · Multi-GPU · Two-phase flows

Copyright information

© Springer-Verlag 2010

Authors and Affiliations

1. Institute for Numerical Simulation, University of Bonn, Bonn, Germany
