Hybrid Parallelism for CFD Simulations: Combining MPI with OpenMP

  • Conference paper
  • Published in: Parallel Computational Fluid Dynamics 2007

Part of the book series: Lecture Notes in Computational Science and Engineering (LNCSE, volume 67)

Abstract

In this paper, the performance of a hybrid programming approach combining MPI and OpenMP for a parallel CFD solver was studied on a single cluster of multi-core nodes. Timing costs for computation and communication were compared across different scenarios. The MPI-based parallel sections of the solver were tuned with OpenMP directives and library functions. The BigRed parallel system at Indiana University was used for parallel runs on 8, 16, 32, and 64 compute nodes with 4 processors (cores) per node; four threads were used within each node, one per core. It was observed that pure MPI performed better than the hybrid MPI/OpenMP version in overall elapsed time, although the hybrid approach showed improved communication time in some cases. In terms of parallel speedup and efficiency, the hybrid results were close to those of MPI and were higher for processor counts below 32. In general, MPI outperforms the hybrid approach for our applications on this particular computing platform.
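
The hybrid scheme summarized above keeps MPI for communication between nodes while OpenMP threads share the compute loops inside each node. The following is a minimal sketch of that pattern, not the authors' solver: the stencil update and names such as n_local, u, u_new, and local_res are assumptions introduced for illustration, and the FUNNELED threading level simply means only the master thread makes MPI calls.

/* Hypothetical hybrid MPI+OpenMP sketch (illustrative only; array names and
   the update loop are assumptions, not taken from the paper's solver). */
#include <mpi.h>
#include <omp.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    int provided, rank, size;

    /* Request FUNNELED support: only the master thread calls MPI. */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    const int n_local = 100000;                   /* cells owned by this rank */
    double *u     = malloc(n_local * sizeof(double));
    double *u_new = malloc(n_local * sizeof(double));
    for (int i = 0; i < n_local; ++i) u[i] = 1.0;

    double local_res = 0.0, global_res = 0.0;

    /* OpenMP threads (one per core) share the compute loop of one MPI rank. */
    #pragma omp parallel for reduction(+:local_res)
    for (int i = 1; i < n_local - 1; ++i) {
        u_new[i] = 0.5 * (u[i - 1] + u[i + 1]);   /* stand-in for a flux update */
        double d = u_new[i] - u[i];
        local_res += d * d;
    }

    /* Inter-node communication stays in MPI, outside the threaded region. */
    MPI_Allreduce(&local_res, &global_res, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);

    if (rank == 0)
        printf("ranks=%d threads=%d residual=%e\n",
               size, omp_get_max_threads(), global_res);

    free(u);
    free(u_new);
    MPI_Finalize();
    return 0;
}

Such a program would typically be built with an MPI compiler wrapper that enables OpenMP, for example mpicc -fopenmp hybrid.c, and launched with one MPI rank per node and OMP_NUM_THREADS=4 to mirror the four threads per node used in the paper.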

Copyright information

© 2009 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Yilmaz, E., Payli, R., Akay, H., Ecer, A. (2009). Hybrid Parallelism for CFD Simulations: Combining MPI with OpenMP. In: Parallel Computational Fluid Dynamics 2007. Lecture Notes in Computational Science and Engineering, vol 67. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-92744-0_50
