Efficiency of High Order Spectral Element Methods on Petascale Architectures

  • Maxwell Hutchinson
  • Alexander Heinecke
  • Hans Pabst
  • Greg Henry
  • Matteo Parsani
  • David Keyes
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9697)

Abstract

High order methods for the solution of PDEs expose a trade-off between computational cost and accuracy on a per-degree-of-freedom basis. In many cases, the added cost takes the form of higher arithmetic intensity, affecting data movement only minimally. As architectures tend towards wider vector instructions and demand higher arithmetic intensities, the best order for a particular simulation may change.

This study highlights preferred orders by identifying the high order efficiency frontier of the spectral element method implemented in Nek5000 and NekBox: the set of orders and meshes that minimize computational cost at fixed accuracy. First, we extract Nek's order-dependent computational kernels and demonstrate exceptional hardware utilization by hardware-aware implementations. Then, we perform production-scale calculations of the nonlinear single mode Rayleigh-Taylor instability on BlueGene/Q and Cray XC40-based supercomputers to highlight the influence of the architecture. Accuracy is defined with respect to physical observables, and computational costs are measured by the core-hour charge of the entire application. The total number of grid points needed to achieve a given accuracy is reduced by increasing the polynomial order. On the XC40 and BlueGene/Q, polynomial orders as high as 31 and 15, respectively, come at no marginal cost per timestep. Taken together, these observations lead to a strong preference for high order discretizations that use fewer degrees of freedom. From a performance point of view, we demonstrate up to 60% full-application bandwidth utilization at scale and achieve ≈1 PFlop/s of compute performance in Nek's most flop-intensive methods.
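
The order-dependent kernels mentioned above are tensor contractions: applying a one-dimensional operator along each index of an element's N x N x N data reduces to small matrix-matrix multiplications whose dimensions are set by the polynomial order (N = p + 1). The C sketch below is a minimal illustration of that structure under assumed names; it is not Nek5000's Fortran implementation or a hardware-aware small-GEMM library such as LIBXSMM, and it treats the element data as an N x N^2 column-major matrix when applying the operator along the leading index.

    #include <stddef.h>

    /* C (N x N^2) = D (N x N) * U (N x N^2), column-major storage.
       For polynomial order p, N = p + 1. Per element, the flop count
       grows as O(N^4) while the data moved stays O(N^3), so arithmetic
       intensity rises with order, which is the trade-off described in
       the abstract. Names are illustrative assumptions. */
    void element_operator_apply(size_t n, const double *D,
                                const double *U, double *C)
    {
        const size_t n2 = n * n;  /* number of columns of U and C */
        for (size_t j = 0; j < n2; ++j)
            for (size_t i = 0; i < n; ++i) {
                double acc = 0.0;
                for (size_t k = 0; k < n; ++k)
                    acc += D[i + k * n] * U[k + j * n];
                C[i + j * n] = acc;
            }
    }

Because n is small (tens, not thousands) and known at dispatch time, specialized code generation for exactly these shapes is what allows the high hardware utilization reported above.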

Keywords

High order · Vectorization · Spectral element method · Nek5000


Copyright information

© Springer International Publishing Switzerland 2016

Authors and Affiliations

  • Maxwell Hutchinson (1)
  • Alexander Heinecke (2)
  • Hans Pabst (3)
  • Greg Henry (4)
  • Matteo Parsani (5)
  • David Keyes (5)

  1. Department of Physics, University of Chicago, Chicago, USA
  2. Intel Corporation, Santa Clara, USA
  3. Intel Semiconductor AG, Zurich, Switzerland
  4. Intel Corporation, Hillsboro, USA
  5. Extreme Computing Research Center, KAUST, Thuwal, Kingdom of Saudi Arabia
