
Performance of Scientific Applications on Modern Supercomputers

  • Frank Deserno
  • Georg Hager
  • Frank Brechtefeld
  • Gerhard Wellein
Conference paper

Abstract

We discuss performance characteristics of scientific applications on modern computer architectures, ranging from commodity “off-the-shelf” (COTS) systems such as clusters to tailored High Performance Computing (HPC) systems, e.g. the NEC SX6 or the CRAY X1. The application programs are selected from important HPC projects that have been supported by the KONWIHR project cxHPC. In general we focus on single-processor performance and give optimisation/parallelisation hints where appropriate. For computational fluid dynamics (CFD) applications we also discuss parallel performance in order to compare COTS with tailored HPC systems. We find that an HPC environment with a few tailored “central” high-end systems and “local” mid-size COTS systems best supports our users' requirements.
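
To make the notion of single-processor performance concrete, the following minimal C sketch (an illustration only, not code from the paper) implements the classic vector triad a(i) = b(i) + c(i)*d(i). On cache-based COTS microprocessors its sustained MFlop/s rate is limited by memory bandwidth, whereas vector systems such as the NEC SX6 sustain a much larger fraction of peak. The array length N and repetition count NITER below are arbitrary illustrative choices.

/* Vector triad micro-benchmark: a hypothetical sketch of the kind of
 * bandwidth-limited kernel used to characterise single-processor
 * performance; not taken from the paper. Compile e.g. with: cc -O3 triad.c */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N     (4 * 1024 * 1024)  /* array length, well beyond typical cache sizes */
#define NITER 20                 /* repetitions to obtain a measurable runtime    */

int main(void)
{
    double *a = malloc(N * sizeof(double));
    double *b = malloc(N * sizeof(double));
    double *c = malloc(N * sizeof(double));
    double *d = malloc(N * sizeof(double));
    if (!a || !b || !c || !d) return 1;

    for (long i = 0; i < N; i++) {           /* initialise the operands */
        b[i] = 1.0; c[i] = 2.0; d[i] = 0.5;
    }

    clock_t start = clock();
    for (int it = 0; it < NITER; it++) {
        for (long i = 0; i < N; i++)         /* 2 flops, 4 memory streams per i */
            a[i] = b[i] + c[i] * d[i];
        if (a[N / 2] < 0.0) puts("never");   /* prevent dead-code elimination */
    }
    double secs = (double)(clock() - start) / CLOCKS_PER_SEC;

    printf("vector triad: %.1f MFlop/s\n", 2.0 * N * NITER / secs / 1.0e6);

    free(a); free(b); free(c); free(d);
    return 0;
}

On a cache-based processor the measured rate is essentially the memory bandwidth divided by the 32 bytes of traffic per loop iteration (plus write-allocate transfers on many architectures), which is one of the effects such single-processor comparisons revolve around.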

Keywords

High Performance Computing, Memory Bandwidth, Cache Line, Shared Memory System, High Performance Computing System

References

  1. TOP500 list, November 2003, available at http://www.top500.org/
  2. L. Oliker, A. Canning, J. Carter, J. Shalf, D. Skinner, S. Ethier, R. Biswas, J. Djomehri, and R. Van der Wijngaart, Evaluation of Cache-based Superscalar and Cacheless Vector Architectures for Scientific Computations, in Proc. SC2003, CD-ROM, 2003.
  3. The ASCI program: http://www.llnl.gov/asci/
  4. A. J. van der Steen and J. Dongarra, Overview of Recent Supercomputers (thirteenth edition), available at http://www.phys.uu.nl/steen/web03/overview.html
  5. H. Sakagami, H. Murai, Y. Seo, and M. Yokokawa, 14.9 TFLOPS three-dimensional fluid simulation for fusion science with HPF on the Earth Simulator, in Proc. SC2002, CD-ROM, 2002.
  6. S. Shingu et al., A 26.58 Tflops global atmospheric simulation with the spectral transform method on the Earth Simulator, in Proc. SC2002, CD-ROM, 2002.
  7. D. Komatitsch, S. Tsuboi, C. Ji, and J. Tromp, A 14.6 billion degrees of freedom, 5 teraflops, 2.5 terabyte earthquake simulation on the Earth Simulator, in Proc. SC2003, CD-ROM, 2003.
  8. The KONWIHR project: http://konwihr.in.tum.de/
  9. M. Glück, M. Breuer, F. Durst, A. Halfmann, and E. Rank, Numerical Prediction of Deformations and Oscillations of Wind-Exposed Structures, in S. Wagner et al. (Eds.): High Performance Computing in Science and Engineering, Munich 2002, pp. 11–20, Springer-Verlag, 2003.
  10. M. Breuer, N. Jovicic, and K. Mazaev, Large-Eddy and Detached-Eddy Simulation of the Flow Around High-Lift Configurations, in S. Wagner et al. (Eds.): High Performance Computing in Science and Engineering, Munich 2002, pp. 11–20, Springer-Verlag, 2003.
  11. P. Lammers, K. Beronov, G. Brenner, and F. Durst, Direct Simulation with the Lattice Boltzmann Code BEST of Developed Turbulence in Channel Flows, in S. Wagner et al. (Eds.): High Performance Computing in Science and Engineering, Munich 2002, pp. 11–20, Springer-Verlag, 2003.
  12. L. Palm and F. Brechtefeld, A User-Oriented Set of Quantum Chemical Benchmarks, in S. Wagner et al. (Eds.): High Performance Computing in Science and Engineering, Munich 2002, pp. 11–20, Springer-Verlag, 2003.
  13. S. Behling, R. Bell, P. Farrell, H. Holthoff, F. O'Connell, and W. Weir, The POWER4 Processor Introduction and Tuning Guide, IBM Redbooks, 2001, www.ibm.com/redbooks/
  14. F. Krämer, IBM, private communication.
  15. H. L. Stone, Iterative solution of implicit approximations of multidimensional partial differential equations, SIAM J. Numerical Analysis, 5(5), 1968.
  16. G. Hager, F. Deserno, and G. Wellein, Pseudo-Vectorization and RISC Optimization Techniques for the Hitachi SR8000 Architecture, in High Performance Computing in Science and Engineering, Munich 2002, Springer-Verlag, Berlin Heidelberg, 2003, ISBN 3-540-00474-2.
  17. J. H. Ferziger and M. Perić, Computational Methods for Fluid Dynamics, Springer-Verlag, 1999.
  18. J. Reeve, A. Scurr, and J. Merlin, Parallel Versions of Stone's Strongly Implicit Algorithm, Concurrency: Practice and Experience 13, 2001.
  19. Basic code examples for the algorithms in J. H. Ferziger and M. Perić, Computational Methods for Fluid Dynamics [17] can be obtained from ftp://ftp.springer.de/pub/technik/peric/
  20. D. A. Wolf-Gladrow, Lattice-Gas Cellular Automata and Lattice-Boltzmann Models, Springer-Verlag, 2000, ISBN 3-540-66973-6.
  21. S. Succi, The Lattice Boltzmann Equation for Fluid Dynamics and Beyond, Numerical Mathematics and Scientific Computation, Oxford University Press, 2001.
  22. M. Henkel, M. Pleimling, C. Godrèche, and J.-M. Luck, Aging, Phase Ordering, and Conformal Invariance, Phys. Rev. Lett. 87, 265701 (2001).
  23. M. Reiher, O. Salomon, D. Sellmann, and B. A. Hess, Dinuclear Diazene Iron and Ruthenium Complexes as Models for Studying Nitrogenase Activity, Chem. Eur. J. 23, 5195–5202 (2001).

Copyright information

© Springer-Verlag Berlin Heidelberg 2005

Authors and Affiliations

  • Frank Deserno (1)
  • Georg Hager (1)
  • Frank Brechtefeld (1)
  • Gerhard Wellein (1)

  1. Regionales Rechenzentrum Erlangen, Erlangen, Germany
