
Sparse Grid Regression for Performance Prediction Using High-Dimensional Run Time Data

  • Conference paper

Euro-Par 2019: Parallel Processing Workshops (Euro-Par 2019)

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 11997)

Abstract

We employ sparse grid regression to predict run times for three types of numerical simulation: molecular dynamics (MD), weather simulation, and climate simulation. We study the impact of algorithmic, OpenMP/MPI, and hardware-aware optimization parameters on performance. We show that normalizing the run time data via algorithmic complexity arguments significantly improves prediction accuracy. Mean relative prediction errors lie in the range of a few percent; in MD, a five-dimensional parameter-space exploration yields mean relative prediction errors of ca. 15% from ca. 178 run time samples.
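
The abstract names two ingredients, sparse grid regression and complexity-based normalization of the run time data, but the page does not reproduce the method itself. The following is a minimal NumPy sketch of least-squares regression on a regular sparse grid with hierarchical hat basis functions (interior points only), plus a hypothetical normalization of MD run times by an assumed O(N) complexity term. Function names such as fit_sparse_grid, the level convention, and the complexity model are illustrative assumptions, not the authors' implementation; inputs are assumed to be scaled to the unit cube.

```python
import itertools
import numpy as np

def hat(level, index, x):
    # 1D hierarchical hat basis function: phi_{l,i}(x) = max(0, 1 - |2^l x - i|)
    return np.maximum(0.0, 1.0 - np.abs(2.0 ** level * x - index))

def sparse_grid_basis(dim, max_level):
    # Regular sparse grid without boundary points: multi-levels l >= 1 with
    # |l|_1 <= max_level + dim - 1 and odd indices 1, 3, ..., 2^l - 1 per dimension.
    basis = []
    for levels in itertools.product(range(1, max_level + 1), repeat=dim):
        if sum(levels) > max_level + dim - 1:
            continue
        for idx in itertools.product(*[range(1, 2 ** l, 2) for l in levels]):
            basis.append((levels, idx))
    return basis

def design_matrix(X, basis):
    # Evaluate every tensor-product basis function at every sample; X is (n, d) in [0, 1]^d.
    A = np.ones((X.shape[0], len(basis)))
    for m, (levels, idx) in enumerate(basis):
        for j in range(X.shape[1]):
            A[:, m] *= hat(levels[j], idx[j], X[:, j])
    return A

def fit_sparse_grid(X, y, max_level=3, lam=1e-6):
    # Ridge-regularized least squares for the hierarchical coefficients.
    basis = sparse_grid_basis(X.shape[1], max_level)
    A = design_matrix(X, basis)
    alpha = np.linalg.solve(A.T @ A + lam * np.eye(len(basis)), A.T @ y)
    return basis, alpha

def predict(X, basis, alpha):
    return design_matrix(X, basis) @ alpha

# Hypothetical usage: X holds optimization parameters scaled to [0, 1]^d, t the measured
# run times, and n_particles the particle counts of the corresponding MD runs. Dividing t
# by an assumed O(N) complexity term mirrors the normalization idea from the abstract;
# predictions are rescaled afterwards.
#   y = t / n_particles
#   basis, alpha = fit_sparse_grid(X, y, max_level=3)
#   t_pred = predict(X_new, basis, alpha) * n_particles_new
```

With this level convention, a level-3 regular grid in five dimensions has 71 interior basis functions, so a sample set of the size quoted in the abstract (ca. 178 runs) already overdetermines the least-squares system; the ridge term lam only stabilizes the solve.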


Notes

  1. https://redmine.dkrz.de/projects/icon-benchmark/wiki/Instructions_on_download_execution_and_analysis_ICON_Benchmark_v160.


Acknowledgements

Financial support by the Federal Ministry of Education and Research, Germany, grant number 01IH16008B (project TaLPas), and by the European project ESiWACE is acknowledged. ESiWACE has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 675191. P. Neumann thanks K. Brusch, R. Brown and P. Harder for initial work on sparse grid-based performance evaluations.

Author information

Corresponding author: Philipp Neumann.



Copyright information

© 2020 Springer Nature Switzerland AG

About this paper


Cite this paper

Neumann, P. (2020). Sparse Grid Regression for Performance Prediction Using High-Dimensional Run Time Data. In: Schwardmann, U., et al. (eds.) Euro-Par 2019: Parallel Processing Workshops. Euro-Par 2019. Lecture Notes in Computer Science, vol. 11997. Springer, Cham. https://doi.org/10.1007/978-3-030-48340-1_46


  • DOI: https://doi.org/10.1007/978-3-030-48340-1_46


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-48339-5

  • Online ISBN: 978-3-030-48340-1

  • eBook Packages: Computer Science, Computer Science (R0)
