Investigating the Scalability of OpenFOAM for the Solution of Transport Equations and Large Eddy Simulations

  • Orlando Rivera
  • Karl Fürlinger
  • Dieter Kranzlmüller
Part of the Lecture Notes in Computer Science book series (LNCS, volume 7017)

Abstract

OpenFOAM is a mainstream open-source framework for flexible simulation in several areas of CFD and engineering, whose syntax is a high-level representation of the mathematical notation of physical models. We use the backward-facing step geometry with Large Eddy Simulation (LES) and semi-implicit methods to investigate the scalability and the most important MPI characteristics of OpenFOAM. We find that the master-slave strategy introduces an unexpected bottleneck in the communication of scalar values when more than a hundred MPI tasks are employed. An extensive analysis reveals that this anomaly affects only a few MPI tasks but results in a severe overall performance reduction. The analysis in this paper is performed with IPM, a portable profiling and workload characterization tool for MPI programs.
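
The master-slave exchange of scalar values referred to above can be pictured as a gather-to-root reduction followed by a broadcast of the combined result. The C++/MPI sketch below is an illustration of that pattern, contrasted with a tree-based MPI_Allreduce; the function name masterSlaveSum, the dummy local value, and the timing are illustrative assumptions and do not reproduce OpenFOAM's actual Pstream implementation.

// Minimal sketch (not OpenFOAM source): a master-slave scalar reduction
// (gather on rank 0, combine, broadcast) versus MPI_Allreduce.
#include <mpi.h>
#include <vector>
#include <cstdio>

// Master-slave style sum of one scalar: every rank sends its value to
// rank 0, rank 0 accumulates, and the result is broadcast back.
static double masterSlaveSum(double local, MPI_Comm comm)
{
    int rank, size;
    MPI_Comm_rank(comm, &rank);
    MPI_Comm_size(comm, &size);

    std::vector<double> all(rank == 0 ? size : 0);
    MPI_Gather(&local, 1, MPI_DOUBLE,
               all.data(), 1, MPI_DOUBLE, 0, comm);

    double global = 0.0;
    if (rank == 0)
    {
        for (double v : all) global += v;
    }
    MPI_Bcast(&global, 1, MPI_DOUBLE, 0, comm);
    return global;
}

int main(int argc, char** argv)
{
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    double local = rank + 1.0;   // stand-in for a residual or flux sum

    // Master-slave variant: rank 0 serializes the work and becomes a
    // bottleneck as the number of tasks grows.
    double t0 = MPI_Wtime();
    double s1 = masterSlaveSum(local, MPI_COMM_WORLD);
    double t1 = MPI_Wtime();

    // Tree-based collective for comparison.
    double s2 = 0.0;
    MPI_Allreduce(&local, &s2, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);
    double t2 = MPI_Wtime();

    if (rank == 0)
    {
        std::printf("gather+bcast: sum=%g time=%gs; allreduce: sum=%g time=%gs\n",
                    s1, t1 - t0, s2, t2 - t1);
    }
    MPI_Finalize();
    return 0;
}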

Keywords

Computational Fluid Dynamics · Large Eddy Simulation · Domain Decomposition · Cell Mesh · Wall Time

References

  1.
  2. Fürlinger, K., Wright, N.J., Skinner, D.: Effective performance measurement at petascale using IPM. In: Proceedings of the Sixteenth IEEE International Conference on Parallel and Distributed Systems (ICPADS 2010), Shanghai, China (December 2010)
  3. Karypis, G., Kumar, V.: MeTis: Unstructured Graph Partitioning and Sparse Matrix Ordering System, Version 4.0 (2009), http://www.cs.umn.edu/~metis
  4. Pringle, G.: Porting OpenFOAM to HECToR: A dCSE Project (2010), http://www.hector.ac.uk/cse/distributedcse/reports/openfoam/openfoam/index.html
  5. Jasak, H.: Error Analysis and Estimation for the Finite Volume Method with Applications to Fluid Flow. PhD thesis, Department of Mechanical Engineering, Imperial College of Science, Technology and Medicine (1996)
  6. Kobayashi, H., Wu, X.: Application of a local subgrid model based on coherent structures to complex geometries. Center for Turbulence Research, Stanford University and NASA, Annual Research Brief, pp. 69–77 (2006)
  7. Le, H., Moin, P., Kim, J.: Direct numerical simulation of turbulent flow over a backward-facing step. Journal of Fluid Mechanics 330(1), 349–374 (1997)
  8. Weller, H.G., Tabor, G., Jasak, H., Fureby, C.: A tensorial approach to computational continuum mechanics using object-oriented techniques. Computers in Physics 12(6), 620–631 (1998)
  9. HPC Advisory Council: OpenFOAM Performance Benchmark and Profiling (2010), http://www.hpcadvisorycouncil.com/pdf/OpenFOAM_Analysis_and_Profiling_Intel.pdf
  10. Berselli, L.C., Iliescu, T., Layton, W.J.: Mathematics of Large Eddy Simulations of Turbulent Flows, 1st edn., pp. 18–25. Springer, Heidelberg (2005)
  11. Leibniz-Rechenzentrum (LRZ): Hardware Description of HLRB II (2009), http://www.lrz.de/services/compute/hlrb/hardware/
  12. Calegari, P., Gardner, K., Loewe, B.: Performance Study of OpenFOAM v1.6 on a Bullx HPC Cluster with a Panasas Parallel File System. In: Open Source CFD Conference, Barcelona, Spain (November 2009)
  13. Rivera, O., Fürlinger, K.: Parallel Aspects of OpenFOAM with Large Eddy Simulations. In: Proceedings of the 2011 International Conference on High Performance Computing and Communications (HPCC 2011), Banff, Canada (September 2011)

Copyright information

© Springer-Verlag Berlin Heidelberg 2011

Authors and Affiliations

  • Orlando Rivera (1)
  • Karl Fürlinger (2)
  • Dieter Kranzlmüller (1, 2)
  1. Leibniz Supercomputing Centre (LRZ), Munich, Germany
  2. MNM-Team, Ludwig-Maximilians-Universität (LMU), Munich, Germany