Towards Petascale Computing with Parallel CFD codes
Many world-leading high-end computing (HEC) facilities now offer over 100 Teraflop/s of performance, and several initiatives have begun to look forward to Petascale computing (10^15 flop/s). Los Alamos National Laboratory and Oak Ridge National Laboratory (ORNL) already operate Petascale systems, which lead the current (Nov 2008) TOP500 list. Computing at the Petascale raises a number of significant challenges for parallel computational fluid dynamics codes. Most significantly, further improvements to the performance of individual processors will be limited, and Petascale systems are therefore likely to contain 100,000+ processors. A critical aspect of utilising high-end Terascale and Petascale resources is thus the scalability of the underlying numerical methods: both the scaling of execution time with the number of processors and the scaling of execution time with problem size. In this paper we analyse the performance of several CFD codes for a range of datasets on some of the latest high-performance computing architectures. These include Direct Numerical Simulation (DNS) via the SBLI and SENGA2 codes, and Large Eddy Simulation (LES) using both STREAMS LES and the general-purpose open-source CFD code Code_Saturne.
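The claim that scalability dominates at 100,000+ processors can be illustrated with Amdahl's law, which bounds the achievable speedup when some fraction of the work remains serial. The sketch below is not taken from the paper; the 0.1% serial fraction is an assumed value chosen purely to show how quickly speedup saturates at Petascale processor counts.

```python
def amdahl_speedup(serial_fraction: float, n_procs: int) -> float:
    """Ideal speedup on n_procs processors when serial_fraction of the
    work cannot be parallelised (Amdahl's law)."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_procs)

# With an assumed serial fraction of 0.1%, speedup stalls near 1000x
# no matter how many processors are added.
for p in (1_000, 10_000, 100_000):
    print(f"{p:>7} procs: speedup {amdahl_speedup(0.001, p):7.1f}")
```

Even this optimistic model ignores communication overheads, which typically grow with processor count, so the practical motivation for highly scalable numerical methods is stronger still.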
Key words: Petascale, High-End Computing, Parallel Performance, Direct Numerical Simulations, Large Eddy Simulations
- 1. Top 500 Supercomputer sites, http://www.top500.org/
- 2. N.D. Sandham, M. Ashworth and D.R. Emerson, Direct Numerical Simulation of Shock/Boundary Layer Interaction, http://www.cse.clrc.ac.uk/ceg/sbli.shtml
- 4. L. Temmerman, M.A. Leschziner, C.P. Mellen, and J. Fröhlich, Investigation of wall-function approximations and subgrid-scale models in large eddy simulation of separated flow in a channel with streamwise periodic constrictions, International Journal of Heat and Fluid Flow, 24(2): 157–180, 2003.
- 5. F. Archambeau, N. Méchitoua, and M. Sakiz, Code_Saturne: a finite volume code for the computation of turbulent incompressible flows - Industrial Applications, Int. J. Finite Volumes, February 2004.
- 6. MPI: A Message Passing Interface Standard, Message Passing Interface Forum, 1995, http://www.netlib.org/mpi/index.html
- 7. EDF Research and Development, http://rd.edf.com/107008i/EDF.fr/Research-and-Development/softwares/Code-Saturne.html
- 8. HPCx - The UK's World-Class Service for World-Class Research, http://www.hpcx.ac.uk
- 9. STFC's Computational Science and Engineering Department, http://www.cse.scitech.ac.uk/
- 10. HECToR UK National Supercomputing Service, http://www.hector.ac.uk
- 11. Supercomputing at Oak Ridge National Laboratory, http://computing.ornl.gov/supercomputing.shtml
- 12. The Green Top 500 List, http://www.green500.org/lists/2007/11/green500.php
- 13. M. Bull, Single Node Performance Analysis of Applications on HPCx, HPCx Technical Report HPCxTR0703, 2007, http://www.hpcx.ac.uk/research/hpc/technical_reports/HPCxTR0703.pdf
- 14. J. Bonelle, Y. Fournier, F. Jusserand, S. Ploix, L. Maas, B. Quach, Numerical methodology for the study of a fluid flow through a mixing grid, Presentation to Club Utilisateurs Code_Saturne, 2007, http://research.edf.com/fichiers/fckeditor/File/EDFRD/Code_Saturne/ClubU/2007/07-mixing_grid_HPC.pdf