HPC Hardware Efficiency for Quantum and Classical Molecular Dynamics

  • Vladimir V. Stegailov
  • Nikita D. Orekhov
  • Grigory S. Smirnov
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9251)

Abstract

The development of new HPC architectures proceeds faster than the corresponding adjustment of the algorithms behind such fundamental mathematical models as quantum and classical molecular dynamics. Clear criteria are needed to judge the computational efficiency of a particular model on particular hardware, and the LINPACK benchmark alone can no longer serve this role. In this work we consider a practical metric: the time-to-solution versus the peak computational performance of a given hardware system. Using this metric we compare different hardware for the CP2K and LAMMPS software packages widely used for atomistic modeling. The metric considered can serve as a universal, unambiguous scale for ranking different types of supercomputers.
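
As a rough, purely illustrative sketch of the metric described above (measured time-to-solution set against the peak performance of the hardware it was obtained on), the Python snippet below compares a few hypothetical systems. All machine names, peak rates, timings and the single-number score used for ranking are invented here for illustration and are not taken from the paper.

# Minimal sketch (not from the paper): comparing hardware by the
# time-to-solution vs. peak-performance metric described in the abstract.
# All machine names, peak rates and timings below are hypothetical.
from dataclasses import dataclass


@dataclass
class SystemRun:
    name: str                  # hypothetical machine name
    rpeak_tflops: float        # theoretical peak performance, TFLOPS
    time_to_solution_s: float  # wall-clock time for a fixed MD benchmark run


runs = [
    SystemRun("cluster-A (CPU only)", 100.0, 420.0),
    SystemRun("cluster-B (CPU+GPU)", 400.0, 150.0),
    SystemRun("cluster-C (many-core)", 250.0, 300.0),
]

# One possible single-number reading of the metric (not necessarily the
# authors' definition): the product T * Rpeak, i.e. how much nominal
# capability is "spent" to solve the same fixed problem. Lower is better.
for run in sorted(runs, key=lambda r: r.time_to_solution_s * r.rpeak_tflops):
    cost = run.time_to_solution_s * run.rpeak_tflops
    print(f"{run.name:24s} T = {run.time_to_solution_s:6.1f} s  "
          f"Rpeak = {run.rpeak_tflops:6.1f} TFLOPS  T*Rpeak = {cost:9.0f}")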

Keywords

Molecular dynamics, Atomistic simulation, Classical molecular dynamics, Molecular dynamics model, Quantum molecular dynamics

Acknowledgment

This work is partially supported by grant No. 14-50-00124 of the Russian Science Foundation.

Copyright information

© Springer International Publishing Switzerland 2015

Authors and Affiliations

  • Vladimir V. Stegailov (1, 2, 3)
  • Nikita D. Orekhov (1, 2)
  • Grigory S. Smirnov (1, 2)
  1. Joint Institute for High Temperatures of RAS, Moscow, Russia
  2. Moscow Institute of Physics and Technology, Dolgoprudny, Russia
  3. National Research University Higher School of Economics, Moscow, Russia
