Towards Petascale Computing with Parallel CFD codes

  • A. G. Sunderland
  • M. Ashworth
  • C. Moulinec
  • N. Li
  • J. Uribe
  • Y. Fournier
Conference paper
Part of the Lecture Notes in Computational Science and Engineering book series (LNCSE, volume 74)

Abstract

Many world-leading high-end computing (HEC) facilities now offer over 100 Teraflop/s of performance, and several initiatives have begun to look forward to Petascale computing (10^15 flop/s). Los Alamos National Laboratory and Oak Ridge National Laboratory (ORNL) already operate Petascale systems, which lead the current (Nov 2008) TOP500 list [1]. Computing at the Petascale raises a number of significant challenges for parallel computational fluid dynamics codes. Most significantly, further improvements to the performance of individual processors will be limited, and Petascale systems are therefore likely to contain 100,000+ processors. A critical aspect of utilising high Terascale and Petascale resources is thus the scalability of the underlying numerical methods, both the scaling of execution time with the number of processors and the scaling of time with problem size. In this paper we analyse the performance of several CFD codes for a range of datasets on some of the latest high-performance computing architectures. This includes Direct Numerical Simulations (DNS) via the SBLI [2] and SENGA2 [3] codes, and Large Eddy Simulations (LES) using both STREAMS LES [4] and the general-purpose open-source CFD code Code_Saturne [5].
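The two notions of scalability mentioned above correspond to the usual strong-scaling (fixed problem size, increasing processor count) and weak-scaling (problem size grown in proportion to processor count) metrics. As a minimal illustration of how these are typically computed from measured runtimes, the sketch below uses hypothetical timings; the function names and numbers are illustrative assumptions, not taken from the paper.

```python
# Illustrative strong/weak scaling metrics; the timings below are
# hypothetical and do not come from the benchmark results in the paper.

def strong_scaling(t_base, p_base, t_p, p):
    """Speedup and parallel efficiency for a FIXED total problem size,
    measured relative to a baseline run on p_base processors."""
    speedup = t_base / t_p
    efficiency = speedup / (p / p_base)  # 1.0 is ideal linear scaling
    return speedup, efficiency

def weak_scaling(t_base, t_p):
    """Efficiency when problem size grows with processor count:
    ideally the runtime stays constant, giving efficiency 1.0."""
    return t_base / t_p

# Hypothetical timings in seconds for a fixed-size CFD problem.
s, e = strong_scaling(t_base=1000.0, p_base=128, t_p=150.0, p=1024)
print(f"speedup = {s:.1f}x, efficiency = {e:.2f}")  # speedup = 6.7x, efficiency = 0.83
```

An efficiency well below 1.0 at large processor counts is the symptom the paper is probing: communication and load imbalance grow relative to useful computation as the per-processor workload shrinks.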

Key words

Petascale, High-End Computing, Parallel Performance, Direct Numerical Simulations, Large Eddy Simulations


References

  1. Top 500 Supercomputer sites, http://www.top500.org/
  2. N.D. Sandham, M. Ashworth and D.R. Emerson, Direct Numerical Simulation of Shock/Boundary Layer Interaction, http://www.cse.clrc.ac.uk/ceg/sbli.shtml
  3.
  4. L. Temmerman, M.A. Leschziner, C.P. Mellen, and J. Frohlich, Investigation of wall-function approximations and subgrid-scale models in large eddy simulation of separated flow in a channel with streamwise periodic constrictions, International Journal of Heat and Fluid Flow, 24(2): 157-180, 2003.
  5. F. Archambeau, N. Méchitoua, and M. Sakiz, Code_Saturne: a finite volume code for the computation of turbulent incompressible flows - industrial applications, Int. J. Finite Volumes, February 2004.
  6. MPI: A Message Passing Interface Standard, Message Passing Interface Forum, 1995, http://www.netlib.org/mpi/index.html
  7.
  8. HPCx - The UK's World-Class Service for World-Class Research, http://www.hpcx.ac.uk
  9. STFC's Computational Science and Engineering Department, http://www.cse.scitech.ac.uk/
  10. HECToR UK National Supercomputing Service, http://www.hector.ac.uk
  11. Supercomputing at Oak Ridge National Laboratory, http://computing.ornl.gov/supercomputing.shtml
  12.
  13. M. Bull, Single Node Performance Analysis of Applications on HPCx, HPCx Technical Report HPCxTR0703, 2007, http://www.hpcx.ac.uk/research/hpc/technical_reports/HPCxTR0703.pdf
  14. J. Bonelle, Y. Fournier, F. Jusserand, S. Ploix, L. Maas, B. Quach, Numerical methodology for the study of a fluid flow through a mixing grid, Presentation to Club Utilisateurs Code_Saturne, 2007, http://research.edf.com/fichiers/fckeditor/File/EDFRD/Code_Saturne/ClubU/2007/07-mixing_grid_HPC.pdf

Copyright information

© Springer Berlin Heidelberg 2010

Authors and Affiliations

  • A. G. Sunderland (1)
  • M. Ashworth (1)
  • C. Moulinec (1)
  • N. Li (2)
  • J. Uribe (3)
  • Y. Fournier (4)
  1. STFC Daresbury Laboratory, Warrington, UK
  2. NAG Ltd, Oxford, UK
  3. University of Manchester, Manchester, UK
  4. EDF R&D, Chatou, France
