Petaflop Seismic Simulations in the Public Cloud

  • Alexander Breuer
  • Yifeng Cui
  • Alexander Heinecke
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11501)


During the last decade, cloud services and Infrastructure as a Service (IaaS) have become a popular solution for diverse applications. Additionally, hardware support for virtualization has closed performance gaps compared to on-premises, bare-metal systems. This development is driven by offloaded hypervisors and full CPU virtualization. Today's cloud service providers, such as Amazon or Google, offer the ability to assemble application-tailored clusters that maximize performance. However, from an interconnect point of view, one has to tackle a 4–5\(\times \) slow-down in terms of bandwidth and a 25\(\times \) slow-down in terms of latency, compared to the latest high-speed, low-latency interconnects. Taking into account the high per-node, accelerator-driven performance of the latest supercomputers, we observe that the network-bandwidth performance of recent cloud offerings is within 2\(\times \) of that of large supercomputers. To address these challenges, we present a comprehensive application-centric approach for high-order seismic simulations utilizing the ADER discontinuous Galerkin (ADER-DG) finite element method, which exhibits excellent communication characteristics. This covers tuning of the operating system (normally not possible on supercomputers), micro-benchmarking, and finally, the efficient execution of our solver in the public cloud. With this performance-oriented end-to-end workflow, we achieved 1.09 PFLOPS on 768 AWS c5.18xlarge instances, offering 27,648 cores with 5 PFLOPS of theoretical computational power. This corresponds to a peak efficiency of over 20% and close to 90% parallel efficiency in a weak-scaling setup. In terms of strong scalability, we were able to strong-scale a science scenario from 2 to 64 instances with 60% parallel efficiency. This work is, to the best of our knowledge, the first of its kind at such a large scale.
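The efficiency figures in the abstract follow from simple ratios. As a minimal sketch, using only the numbers stated above (the helper function and its name are illustrative, not part of the paper's software):

```python
# Peak efficiency: achieved FLOPS divided by theoretical peak FLOPS.
# Figures taken from the abstract: 1.09 PFLOPS achieved on 768 AWS
# c5.18xlarge instances (27,648 cores) with 5 PFLOPS theoretical peak.
achieved_pflops = 1.09
theoretical_pflops = 5.0
peak_efficiency = achieved_pflops / theoretical_pflops  # 0.218, i.e. over 20%

# Strong-scaling parallel efficiency: observed speedup divided by the
# ideal speedup when growing a fixed problem from n_base to n_scaled nodes.
def parallel_efficiency(t_base, t_scaled, n_base, n_scaled):
    speedup = t_base / t_scaled
    ideal_speedup = n_scaled / n_base
    return speedup / ideal_speedup

print(f"peak efficiency: {peak_efficiency:.1%}")
```

For example, the reported 60% parallel efficiency when strong-scaling from 2 to 64 instances means the solver ran 0.6 × 32 = 19.2 times faster on 64 instances than on 2.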


High-order DG · Seismic simulations · Earthquake simulations · Cloud computing · Petascale computing



EDGE, EDGEcut, and the discussed cloud-related scripts are available under the BSD-3 license from the linked resources. We thank David Lenz for his contributions to EDGEcut. We thank the AWS Cloud Credits for Research program and the Academic Google Cloud program. At AWS, we thank Walker Stemple, Linda Hedges, Aaron Bucher, Heather Matson, Randy Ridgley, and Pierre-Yves Aquilanti for their patient and very helpful support. This work was supported by the Southern California Earthquake Center through award #18211.

Optimization Notice. Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors. Performance tests, such as SYSmark and MobileMark, are measured using specific computer systems, components, software, operations, and functions. Any change to any of those factors may cause the results to vary. You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases, including the performance of that product when combined with other products. Intel, Xeon, and Intel Xeon Phi are trademarks of Intel Corporation in the U.S. and/or other countries.



Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  • Alexander Breuer
    • 1
  • Yifeng Cui
    • 1
  • Alexander Heinecke
    • 2
  1. UC San Diego, La Jolla, USA
  2. Intel Corporation, Santa Clara, USA
