A thread-level parallelization of pairwise additive potential and force calculations suitable for current many-core architectures

  • Yoshimichi Andoh
  • Soichiro Suzuki
  • Satoshi Ohshima
  • Tatsuya Sakashita
  • Masao Ogino
  • Takahiro Katagiri
  • Noriyuki Yoshii
  • Susumu Okazaki


In molecular dynamics (MD) simulations, the pairwise additive calculation of potentials and of their derivatives with respect to the coordinates, i.e., the forces, such as the Lennard–Jones interactions and the short-range part of the Coulombic interactions, accounts for the bulk of the arithmetic operations. Achieving high thread-level parallelization efficiency for these pairwise additive potential and force calculations is therefore essential to use current supercomputers with many-core architectures effectively. In this paper, we propose four new thread-level parallelization algorithms for the pairwise additive potential and force calculations and implement the four codes in an MD program based on the fast multipole method. Performance benchmarks were carried out on the FX100 supercomputer and on the Intel Xeon Phi coprocessor. The code achieves high thread-level parallelization efficiency with 32 threads on the FX100 and with up to 60 threads on the Xeon Phi.
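To make the kind of kernel under discussion concrete, the sketch below computes Lennard–Jones potentials and forces with an OpenMP-threaded double loop. It is not one of the four algorithms proposed in the paper: it uses a brute-force N-squared loop without cutoffs, cell lists, or the FMM-based domain decomposition, and it avoids inter-thread write conflicts simply by not exploiting Newton's third law. The function and parameter names (lj_forces, eps, sigma) are illustrative assumptions, not part of the authors' code.

```c
/*
 * Minimal sketch of a thread-parallel pairwise Lennard-Jones
 * potential/force loop using OpenMP. This is NOT one of the four
 * algorithms proposed in the paper; it only illustrates the type of
 * pairwise additive calculation being parallelized. A production MD
 * code would apply a cutoff with cell lists / domain decomposition,
 * as MODYLAS does, with the fast multipole method for electrostatics.
 */
#include <math.h>
#include <omp.h>
#include <stdio.h>
#include <stdlib.h>

typedef struct { double x, y, z; } vec3;

/* Accumulate LJ forces into f and return the total potential energy.
 * eps and sigma are assumed single-species LJ parameters. */
double lj_forces(int n, const vec3 *r, vec3 *f, double eps, double sigma)
{
    double u_total = 0.0;
    const double s6 = pow(sigma, 6);

    #pragma omp parallel for reduction(+ : u_total) schedule(dynamic, 16)
    for (int i = 0; i < n; ++i) {
        vec3 fi = {0.0, 0.0, 0.0};
        for (int j = 0; j < n; ++j) {
            if (j == i) continue;
            double dx = r[i].x - r[j].x;
            double dy = r[i].y - r[j].y;
            double dz = r[i].z - r[j].z;
            double r2 = dx * dx + dy * dy + dz * dz;
            double inv_r6  = s6 / (r2 * r2 * r2);
            double inv_r12 = inv_r6 * inv_r6;
            /* U_ij = 4*eps*((sigma/r)^12 - (sigma/r)^6); each unordered
             * pair is visited twice, so add half the pair energy here. */
            u_total += 2.0 * eps * (inv_r12 - inv_r6);
            /* F_i = 24*eps*(2*(sigma/r)^12 - (sigma/r)^6) * r_ij / r^2 */
            double fscale = 24.0 * eps * (2.0 * inv_r12 - inv_r6) / r2;
            fi.x += fscale * dx;
            fi.y += fscale * dy;
            fi.z += fscale * dz;
        }
        f[i] = fi;  /* each thread writes only its own rows: no race */
    }
    return u_total;
}

int main(void)
{
    enum { N = 1000 };
    vec3 *r = malloc(N * sizeof *r);
    vec3 *f = malloc(N * sizeof *f);
    /* place particles on a crude cubic lattice to avoid overlaps */
    for (int i = 0; i < N; ++i)
        r[i] = (vec3){ (i % 10) * 1.5, ((i / 10) % 10) * 1.5, (i / 100) * 1.5 };
    double u = lj_forces(N, r, f, 1.0, 1.0);
    printf("U = %f, f[0].x = %f\n", u, f[0].x);
    free(r);
    free(f);
    return 0;
}
```

Skipping the action-reaction symmetry doubles the arithmetic but keeps each force array element owned by exactly one thread, which is the simplest way to avoid write races under OpenMP; recovering that factor of two efficiently under threading is exactly where more sophisticated schemes are needed.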


Keywords: Thread-level parallelization · Pairwise additive calculations · Domain decomposition · Molecular dynamics simulation · Fast multipole method



We thank Dr. Y. Komura for valuable suggestions on code 1. This work is supported by the “Joint Usage/Research Center for Interdisciplinary Large-Scale Information Infrastructures” and “High Performance Computing Infrastructure” in Japan (Project IDs jh150015-NA11, jh160040-NAJ, and jh170024-NAH). This work is also supported by FLAGSHIP2020, MEXT, within priority study 5: Development of new fundamental technologies for high-efficiency energy creation, conversion/storage and use (Proposal No. hp170241). This work is partially funded by MEXT’s program for the Development and Improvement for the Next Generation Ultra High-Speed Computer System, under its Subsidies for Operating the Specific Advanced Large Research Facilities (S. S.). Benchmark calculations were performed at the Information Technology Center (ITC) of Nagoya University and at the ITC of The University of Tokyo. This work is also supported by JSPS KAKENHI Grant Number 16K21094 (Y. A.) and by MEXT KAKENHI Grant No. 26410012 (N. Y.).

Supplementary material

Supplementary material 1: 11227_2018_2272_MOESM1_ESM.pdf (PDF, 79 kB)



Copyright information

© Springer Science+Business Media, LLC, part of Springer Nature 2018

Authors and Affiliations

  • Yoshimichi Andoh (1)
  • Soichiro Suzuki (2)
  • Satoshi Ohshima (3)
  • Tatsuya Sakashita (4)
  • Masao Ogino (5)
  • Takahiro Katagiri (5)
  • Noriyuki Yoshii (1)
  • Susumu Okazaki (4)

  1. Center for Computational Science, Graduate School of Engineering, Nagoya University, Nagoya, Japan
  2. RIKEN Advanced Institute for Computational Science, Kobe, Japan
  3. Information Technology Center, The University of Tokyo, Tokyo, Japan
  4. Department of Materials Chemistry, Nagoya University, Nagoya, Japan
  5. Information Technology Center, Nagoya University, Nagoya, Japan
