On Quantum Chemistry Code Adaptation for RSC PetaStream Architecture

  • Vladimir Mironov
  • Maria Khrenova
  • Alexander Moskovsky
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9137)

Abstract

Molecular simulations based on quantum chemistry methods consume a large share of CPU cycles in modern high-performance computing centers. The evolution of processors and HPC architectures requires continual adaptation of software to new hardware generations. The present work concentrates on optimizing the widely used GAMESS code for the Intel Xeon Phi architecture and the recently introduced RSC PetaStream platform. Because improved parallelization is required, the most frequently used Hartree-Fock and DFT methods are examined for additional parallelization opportunities. Achieving good performance on Xeon Phi also demands vectorization, which is particularly important for the calculation of electron repulsion integrals (ERIs).
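For illustration only, the sketch below shows the kind of loop restructuring that exposes ERI-style inner loops to the wide SIMD units of Xeon Phi: primitive-shell data are kept in contiguous structure-of-arrays storage and the reduction over primitives is marked with an OpenMP SIMD directive. This is a toy C example under assumed names (N_PRIM, the a/b/ca/cb arrays); it is not the GAMESS implementation described in the paper, and the integral expression is only schematic.

```c
/* Toy sketch, not GAMESS code: mimics the innermost primitive loop of a
 * contracted s-type integral kernel to show how unit-stride arrays plus
 * an OpenMP SIMD directive let the compiler vectorize the reduction. */
#include <math.h>
#include <stdio.h>

#define N_PRIM 16   /* hypothetical contraction length, padded for SIMD */

int main(void)
{
    /* structure-of-arrays: exponents and contraction coefficients of two
     * primitive shells kept in separate contiguous arrays */
    double a[N_PRIM], b[N_PRIM], ca[N_PRIM], cb[N_PRIM];
    for (int i = 0; i < N_PRIM; ++i) {   /* toy input data */
        a[i]  = 0.5 + 0.10 * i;
        b[i]  = 0.3 + 0.05 * i;
        ca[i] = 1.0 / (1.0 + i);
        cb[i] = 1.0 / (2.0 + i);
    }

    double rab2 = 1.44;   /* squared distance between shell centres */
    double ssss = 0.0;

    /* each iteration is independent and data accesses are unit-stride,
     * so the reduction vectorizes cleanly */
    #pragma omp simd reduction(+:ssss)
    for (int i = 0; i < N_PRIM; ++i) {
        double p  = a[i] + b[i];
        double mu = a[i] * b[i] / p;
        double k  = exp(-mu * rab2);          /* Gaussian product prefactor */
        ssss     += ca[i] * cb[i] * k / (p * sqrt(p));
    }

    printf("toy (ss|ss)-like partial sum: %e\n", ssss);
    return 0;
}
```

The example compiles with any C99 compiler (e.g. cc -O2 -fopenmp-simd toy.c -lm); with the Intel compiler, the optimization report can be used to confirm that the marked loop actually vectorizes on the target hardware.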

Keywords

Quantum chemistry · Hartree-Fock · Density functional theory · Intel Xeon Phi · GAMESS

Acknowledgements

This work is supported by the Intel Parallel Computing Center program. We thank Georg Zitzlsberg (Intel Corp.) and Klaus-Dieter Oertel (Intel Corp.) for their valuable advice.

Copyright information

© Springer International Publishing Switzerland 2015

Authors and Affiliations

  • Vladimir Mironov (1)
  • Maria Khrenova (1)
  • Alexander Moskovsky (2)
  1. Chemistry Department, Lomonosov Moscow State University, Moscow, Russia
  2. ZAO “RSC Technologies”, Moscow, Russia
