Petaflop Seismic Simulations in the Public Cloud
During the last decade, cloud services and infrastructure as a service have become a popular solution for diverse applications. Additionally, hardware support for virtualization has closed performance gaps compared to on-premises, bare-metal systems. This development is driven by offloaded hypervisors and full CPU virtualization. Today's cloud service providers, such as Amazon or Google, offer the ability to assemble application-tailored clusters that maximize performance. However, from an interconnect point of view, one has to tackle a 4–5× slowdown in bandwidth and a 25× slowdown in latency, compared to the latest high-speed, low-latency interconnects. Taking into account the high per-node and accelerator-driven performance of the latest supercomputers, we observe that the network-bandwidth performance of recent cloud offerings is within 2× of that of large supercomputers. To address these challenges, we present a comprehensive, application-centric approach for high-order seismic simulations utilizing the ADER discontinuous Galerkin finite element method, which exhibits excellent communication characteristics. The approach covers tuning of the operating system (normally not possible on supercomputers), micro-benchmarking, and, finally, the efficient execution of our solver in the public cloud. With this performance-oriented end-to-end workflow, we achieved 1.09 PFLOPS on 768 AWS c5.18xlarge instances, offering 27,648 cores with 5 PFLOPS of theoretical peak performance. This corresponds to an achieved peak efficiency of over 20% and a close-to-90% parallel efficiency in a weak-scaling setup. In terms of strong scalability, we scaled a science scenario from 2 to 64 instances with 60% parallel efficiency. This work is, to the best of our knowledge, the first of its kind at such a large scale.
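The headline figures above can be cross-checked with simple arithmetic: 768 c5.18xlarge instances at 36 physical cores each give the reported 27,648 cores, and 1.09 PFLOPS out of 5 PFLOPS theoretical peak is about 21.8%, i.e. "over 20%". A minimal sketch (the timing arguments in the strong-scaling helper are hypothetical placeholders, not measurements from the paper):

```python
# Back-of-the-envelope check of the headline numbers from the abstract.
INSTANCES = 768
CORES_PER_INSTANCE = 36          # physical cores of an AWS c5.18xlarge
ACHIEVED_PFLOPS = 1.09
PEAK_PFLOPS = 5.0

def peak_efficiency(achieved: float, peak: float) -> float:
    """Fraction of the theoretical peak that was sustained."""
    return achieved / peak

def parallel_efficiency(t_base: float, t_scaled: float,
                        n_base: int, n_scaled: int) -> float:
    """Strong-scaling efficiency: measured speedup over ideal speedup."""
    return (t_base / t_scaled) / (n_scaled / n_base)

if __name__ == "__main__":
    print(INSTANCES * CORES_PER_INSTANCE)   # prints 27648
    print(f"{peak_efficiency(ACHIEVED_PFLOPS, PEAK_PFLOPS):.1%}")  # prints 21.8%
```

The 60% strong-scaling figure means the measured speedup from 2 to 64 instances was roughly 0.6 × 32, i.e. about 19×.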
Keywords: High-order DG · Seismic simulations · Earthquake simulations · Cloud computing · Petascale computing
EDGE, EDGEcut and the discussed cloud-related scripts are available under BSD-3 from the linked resources at: http://dial3343.org. We thank David Lenz for his contributions to EDGEcut. We thank the AWS Cloud Credits for Research and the Academic Google Cloud programs. At AWS we thank Walker Stemple, Linda Hedges, Aaron Bucher, Heather Matson, Randy Ridgley and Pierre-Yves Aquilanti for their patient and very helpful support. This work was supported by the Southern California Earthquake Center through award #18211.
Optimization Notice. Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors. Performance tests, such as SYSmark and MobileMark, are measured using specific computer systems, components, software, operations and functions. Any change to any of those factors may cause the results to vary. You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases, including the performance of that product when combined with other products. For more information go to http://www.intel.com/performance. Intel, Xeon, and Intel Xeon Phi are trademarks of Intel Corporation in the U.S. and/or other countries.