
Understanding the Scalability of Molecular Simulation Using Empirical Performance Modeling

  • Conference paper
  • In: Programming and Performance Visualization Tools (ESPT 2017, ESPT 2018, VPA 2017, VPA 2018)

Abstract

Molecular dynamics (MD) simulation allows for the study of static and dynamic properties of molecular ensembles at various molecular scales, from monatomic fluids to macromolecules such as proteins and nucleic acids. It has applications in biology, materials science, biochemistry, and biophysics. Recent developments in simulation techniques have spurred the emergence of the computational molecular engineering (CME) field, which focuses specifically on the needs of industrial users in engineering. Within CME, the simulation code ms2 allows users to calculate thermodynamic properties of bulk fluids. It is a parallel code that aims to extend the temporal range of the simulation while keeping the execution time minimal. In this paper, we use empirical performance modeling to study the impact of simulation parameters on the execution time. Our approach is a systematic workflow that can serve as a blueprint in other fields that aim to scale their simulation codes. We show that the generated models can help users better understand how to scale a simulation with minimal increase in execution time.
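To make the method concrete, the following is a minimal sketch of the core idea behind empirical performance modeling: measure execution time at several values of a parameter (here, the process count p) and fit candidate functions of the performance-model-normal-form shape t(p) ≈ c0 + c1 · p^i · log2(p)^j, keeping the best-fitting term. This is an illustrative Python sketch in the spirit of tools such as Extra-P, not the tooling or data used in the paper; the measurements and exponent ranges below are invented placeholders.

    import itertools
    import numpy as np

    # Hypothetical measurements: process counts and mean execution times (s).
    p = np.array([16, 32, 64, 128, 256, 512], dtype=float)
    t = np.array([12.1, 13.0, 14.2, 15.6, 17.3, 19.4])

    I = [0, 0.5, 1, 1.5, 2]   # candidate polynomial exponents
    J = [0, 1, 2]             # candidate logarithmic exponents

    best = None
    for i, j in itertools.product(I, J):
        if i == 0 and j == 0:
            continue  # skip the constant-only model
        term = p**i * np.log2(p)**j
        A = np.column_stack([np.ones_like(p), term])  # design matrix [1, p^i * log2(p)^j]
        coef, *_ = np.linalg.lstsq(A, t, rcond=None)  # least-squares fit of c0, c1
        rss = float(np.sum((A @ coef - t) ** 2))      # residual sum of squares
        if best is None or rss < best[0]:
            best = (rss, i, j, coef)

    rss, i, j, (c0, c1) = best
    print(f"t(p) = {c0:.2f} + {c1:.3g} * p^{i} * log2(p)^{j}  (RSS = {rss:.3g})")

A full workflow such as Extra-P's additionally handles multiple parameters and multi-term models, repeats measurements to control noise, and uses statistical model selection; the sketch keeps only the single-parameter core of fitting and ranking candidate terms.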


Acknowledgements

This work was supported by the German Research Foundation (DFG) through the Program Performance Engineering for Scientific Software and the ExtraPeak project, by the German Federal Ministry of Education and Research (BMBF) through the TaLPas project under Grant No. 01IH16008D, and by the US Department of Energy through the PRIMA-X project under Grant No. DE-SC0015524. The authors would like to thank the partners of the TaLPas project for fruitful discussions. Finally, the authors would also like to express their gratitude to the High Performance Computing Center Stuttgart (HLRS) and the University Computing Center (Hochschulrechenzentrum) of Technische Universität Darmstadt for providing access to the machines Hazel Hen and Lichtenberg, respectively.

Author information

Corresponding author

Correspondence to Sergei Shudler.

Copyright information

© 2019 Springer Nature Switzerland AG

About this paper

Cite this paper

Shudler, S., Vrabec, J., Wolf, F. (2019). Understanding the Scalability of Molecular Simulation Using Empirical Performance Modeling. In: Bhatele, A., Boehme, D., Levine, J., Malony, A., Schulz, M. (eds) Programming and Performance Visualization Tools. ESPT 2017, ESPT 2018, VPA 2017, VPA 2018. Lecture Notes in Computer Science, vol 11027. Springer, Cham. https://doi.org/10.1007/978-3-030-17872-7_8

  • DOI: https://doi.org/10.1007/978-3-030-17872-7_8

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-17871-0

  • Online ISBN: 978-3-030-17872-7

  • eBook Packages: Computer Science, Computer Science (R0)
