Abstract
We employ sparse grid regression to predict the run time of three types of numerical simulation: molecular dynamics (MD), weather simulation, and climate simulation. We study the impact of algorithmic, OpenMP/MPI, and hardware-aware optimization parameters on performance, and show that normalizing run time data via algorithmic complexity arguments significantly improves prediction accuracy. Mean relative prediction errors lie in the range of a few percent; in MD, exploring a five-dimensional parameter space yields mean relative prediction errors of ca. 15% using ca. 178 run time samples.
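The normalization idea from the abstract can be illustrated with a minimal sketch. The example below is hypothetical (synthetic data, ordinary polynomial least squares as a stand-in for an actual sparse grid basis): measured MD run times that scale roughly as O(N/p) for N molecules on p processes are divided by N/p before fitting, so the regressor only has to capture the smoother deviation from the complexity model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "measured" run times (stand-in for real benchmark samples):
# ideal N/p scaling times a mild, smooth parallel-efficiency factor plus noise.
N = rng.integers(10_000, 200_000, size=50).astype(float)  # molecule counts
p = rng.integers(1, 65, size=50).astype(float)            # process counts
t = 1e-4 * N / p * (1.0 + 0.3 * np.log(p) / np.log(64.0)) \
    * rng.normal(1.0, 0.02, size=50)

# Complexity-based normalization: regress y = t / (N/p) instead of t itself.
y = t / (N / p)

def design(N, p):
    """Low-degree polynomial basis in rescaled parameters.

    A sparse grid basis (hierarchical hat functions) would replace this
    matrix in the actual method; the normalization step is unchanged.
    """
    x1 = N / 200_000.0               # rescale to [0, 1]
    x2 = np.log(p) / np.log(64.0)    # rescale to [0, 1]
    return np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x2**2])

coef, *_ = np.linalg.lstsq(design(N, p), y, rcond=None)

# Predict: undo the normalization to recover absolute run times.
t_pred = design(N, p) @ coef * (N / p)
err = np.mean(np.abs(t_pred - t) / t)
print(f"mean relative error: {100 * err:.1f}%")
```

Because the normalized target varies only mildly across the parameter space, even this crude basis recovers run times to within a few percent on the synthetic data, mirroring the benefit the paper reports for the sparse grid regressor.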
Acknowledgements
Financial support by the Federal Ministry of Education and Research, Germany, grant number 01IH16008B (project TaLPas), and by the European project ESiWACE is acknowledged. ESiWACE has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 675191. P. Neumann thanks K. Brusch, R. Brown and P. Harder for initial works on sparse grid-based performance evaluations.
Copyright information
© 2020 Springer Nature Switzerland AG
About this paper
Cite this paper
Neumann, P. (2020). Sparse Grid Regression for Performance Prediction Using High-Dimensional Run Time Data. In: Schwardmann, U., et al. Euro-Par 2019: Parallel Processing Workshops. Euro-Par 2019. Lecture Notes in Computer Science(), vol 11997. Springer, Cham. https://doi.org/10.1007/978-3-030-48340-1_46
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-48339-5
Online ISBN: 978-3-030-48340-1
eBook Packages: Computer Science (R0)