
Understanding the Scalability of Molecular Simulation Using Empirical Performance Modeling

  • Sergei Shudler
  • Jadran Vrabec
  • Felix Wolf
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11027)

Abstract

Molecular dynamics (MD) simulation allows for the study of static and dynamic properties of molecular ensembles at various molecular scales, from monatomics to macromolecules such as proteins and nucleic acids. It has applications in biology, materials science, biochemistry, and biophysics. Recent developments in simulation techniques have spurred the emergence of the computational molecular engineering (CME) field, which focuses specifically on the needs of industrial users in engineering. Within CME, the simulation code ms2 allows users to calculate thermodynamic properties of bulk fluids. It is a parallel code that aims to scale the temporal range of the simulation while keeping the execution time minimal. In this paper, we use empirical performance modeling to study the impact of simulation parameters on the execution time. Our approach is a systematic workflow that can serve as a blueprint in other fields that aim to scale their simulation codes. We show that the generated models help users better understand how to scale a simulation with a minimal increase in execution time.
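
To make the modeling approach concrete, the sketch below fits simple scaling terms of the form c0 + c1 * p^i * log2(p)^j, in the spirit of Extra-P-style empirical performance modeling, to measured execution times and keeps the best-fitting candidate. It is a minimal Python illustration assuming NumPy is available; the measurement values, the exponent search space, and the helper name fit_term are hypothetical and do not reproduce the paper's ms2 experiments or the actual Extra-P implementation.

```python
# Minimal sketch of single-parameter empirical performance modeling:
# fit t(p) ~ c0 + c1 * p**i * log2(p)**j for a small set of candidate
# exponents (i, j) and keep the candidate with the smallest error.
# The measurements below are hypothetical placeholders.
import itertools
import numpy as np

procs = np.array([16, 32, 64, 128, 256], dtype=float)  # process counts p
times = np.array([12.1, 13.0, 14.4, 16.3, 18.9])       # execution times [s]

def fit_term(p, t, i, j):
    """Least-squares fit of t ~ c0 + c1 * p**i * log2(p)**j; returns model and SSE."""
    term = p**i * np.log2(p)**j
    A = np.column_stack([np.ones_like(p), term])
    (c0, c1), *_ = np.linalg.lstsq(A, t, rcond=None)
    sse = float(np.sum((t - (c0 + c1 * term))**2))
    return (c0, c1, i, j), sse

# Small search space of exponents, similar in spirit to the performance
# model normal form used by Extra-P.
candidates = [fit_term(procs, times, i, j)
              for i, j in itertools.product([0, 0.5, 1, 1.5, 2], [0, 1, 2])
              if (i, j) != (0, 0)]
(c0, c1, i, j), sse = min(candidates, key=lambda c: c[1])
print(f"best model: t(p) ~ {c0:.2f} + {c1:.4g} * p^{i} * log2(p)^{j} (SSE = {sse:.3f})")
```

In a real workflow, such fits would be generated by a tool like Extra-P from profiling measurements and extended to several simulation parameters at once; the sketch only conveys the basic fit-and-rank idea behind the models discussed in the paper.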

Keywords

Molecular dynamics · Performance modeling · Parallel programming

Acknowledgements

This work was supported by the German Research Foundation (DFG) through the Program Performance Engineering for Scientific Software and the ExtraPeak project, by the German Federal Ministry of Education and Research (BMBF) through the TaLPas project under Grant No. 01IH16008D, and by the US Department of Energy through the PRIMA-X project under Grant No. DE-SC0015524. The authors would like to thank the partners of the TaLPas project for fruitful discussions. Finally, the authors would also like to express their gratitude to the High Performance Computing Center Stuttgart (HLRS) and the University Computing Center (Hochschulrechenzentrum) of Technische Universität Darmstadt for providing access to the Hazel Hen and Lichtenberg machines, respectively.

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. Argonne National Laboratory, Lemont, USA
  2. Thermodynamics and Process Engineering, Technical University Berlin, Berlin, Germany
  3. Laboratory for Parallel Programming, Technical University Darmstadt, Darmstadt, Germany
