Efficient Implementation of the Force Calculation in MD Simulations

  • Alexander Heinecke
  • Wolfgang Eckhardt
  • Martin Horsch
  • Hans-Joachim Bungartz
Chapter
Part of the SpringerBriefs in Computer Science book series (BRIEFSCOMPUTER)

Abstract

This chapter describes how the computational kernel of MD simulations, the force calculation between particles, can be mapped to different kinds of hardware with minimal changes to the software. Since ls1 mardyn is based on the so-called linked-cells algorithm, several different facets of this approach are optimized. First, we present a newly developed sliding window traversal of the entire data structure, which enables the seamless integration of new optimizations such as the vectorization of the Lennard-Jones-12-6 potential. Second, we describe and evaluate several variants of mapping this potential to today’s SIMD/vector hardware using intrinsics, depending on the functionality offered by the hardware, with the Intel Xeon processor and the Intel Xeon Phi coprocessor as examples. This is done for single-center as well as for multi-centered rigid-body molecules.
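
For illustration, the following is a minimal, scalar C++ sketch of a Lennard-Jones-12-6 force kernel over a structure-of-arrays particle layout, of the kind the chapter maps to SIMD hardware; it is not the ls1 mardyn implementation, and all names (SoA, lj_forces, eps24, sig2, rc2) are chosen for this example only.

    #include <cstddef>
    #include <vector>

    // Hypothetical structure-of-arrays (SoA) particle container; the SoA layout
    // is what makes the inner loop amenable to vectorization.
    struct SoA {
        std::vector<double> x, y, z;    // positions
        std::vector<double> fx, fy, fz; // accumulated forces
    };

    // Lennard-Jones-12-6 forces between all particle pairs of two cells a and b.
    // U(r) = 4*eps*((sig/r)^12 - (sig/r)^6); the force on particle i is
    // F_i = 24*eps/r^2 * (2*(sig/r)^12 - (sig/r)^6) * (r_i - r_j).
    // Parameters: eps24 = 24*eps, sig2 = sig*sig, rc2 = squared cutoff radius.
    void lj_forces(SoA& a, SoA& b, double eps24, double sig2, double rc2) {
        for (std::size_t i = 0; i < a.x.size(); ++i) {
            for (std::size_t j = 0; j < b.x.size(); ++j) {
                const double dx = a.x[i] - b.x[j];
                const double dy = a.y[i] - b.y[j];
                const double dz = a.z[i] - b.z[j];
                const double r2 = dx * dx + dy * dy + dz * dz;
                if (r2 > rc2) continue;              // cutoff: skip distant pairs
                const double inv_r2 = 1.0 / r2;
                const double sr2 = sig2 * inv_r2;    // (sig/r)^2
                const double sr6 = sr2 * sr2 * sr2;  // (sig/r)^6
                const double scale = eps24 * inv_r2 * sr6 * (2.0 * sr6 - 1.0);
                a.fx[i] += scale * dx; b.fx[j] -= scale * dx; // Newton's third law
                a.fy[i] += scale * dy; b.fy[j] -= scale * dy;
                a.fz[i] += scale * dz; b.fz[j] -= scale * dz;
            }
        }
    }

In vectorized variants of such a kernel, the inner j-loop is processed in SIMD lanes and the cutoff check is typically realized by masking rather than branching; the chapter discusses how this mapping depends on the gather/scatter and masking support of the target hardware.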

Keywords

Molecular dynamics simulation · Memory optimizations · Structure of arrays · Vectorization · Gather · Scatter · Lennard-Jones potential · Intel Xeon Phi

Copyright information

© The Author(s) 2015

Authors and Affiliations

  • Alexander Heinecke (1)
  • Wolfgang Eckhardt (2)
  • Martin Horsch (3)
  • Hans-Joachim Bungartz (2)
  1. Intel Corporation, Santa Clara, USA
  2. Technische Universität München, Garching, Germany
  3. University of Kaiserslautern, Kaiserslautern, Germany
