OpenAtom: Scalable Ab-Initio Molecular Dynamics with Diverse Capabilities
The complex interplay of tightly coupled, but disparate, computation and communication operations poses several challenges for simulating atomic scale dynamics on multi-petaflops architectures. OpenAtom addresses these challenges by exploiting overdecomposition and asynchrony in Charm++, and scales to thousands of cores for realistic scientific systems with only a few hundred atoms. At the same time, it supports several interesting ab-initio molecular dynamics simulation methods including the Car-Parrinello method, the Born-Oppenheimer method, k-points, parallel tempering, and path integrals. This paper showcases the diverse functionality as well as the scalability of OpenAtom via performance case studies, with a focus on recent additions and improvements to OpenAtom. In particular, we study a metal organic framework (MOF) that consists of 424 atoms and is being explored as a candidate for a hydrogen storage material. Simulations of this system are scaled to large core counts on Cray XE6 and IBM Blue Gene/Q systems, and a time per step as low as \(1.7\,s\) is demonstrated for simulating path integrals with 32 beads of MOF on 262,144 cores of Blue Gene/Q.
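As background for the methods named above, the Car-Parrinello scheme propagates the electronic orbitals with a fictitious dynamics derived from an extended Lagrangian. The following is a minimal sketch of the standard textbook form; the symbols are generic conventions and are not notation defined in this paper:

\[
  \mathcal{L}_{\mathrm{CP}}
    = \mu \sum_i \int d\mathbf{r}\, \bigl|\dot{\psi}_i(\mathbf{r})\bigr|^2
    + \frac{1}{2} \sum_I M_I \dot{\mathbf{R}}_I^2
    - E\bigl[\{\psi_i\},\{\mathbf{R}_I\}\bigr]
    + \sum_{i,j} \Lambda_{ij} \bigl( \langle \psi_i \mid \psi_j \rangle - \delta_{ij} \bigr),
\]

where \(\mu\) is the fictitious electronic mass, \(\psi_i\) are the Kohn-Sham orbitals, \(\mathbf{R}_I\) and \(M_I\) are the ionic positions and masses, \(E\) is the Kohn-Sham total energy, and the \(\Lambda_{ij}\) enforce orbital orthonormality. In the path-integral runs reported above, each of the 32 beads is a replica of the full system in the usual imaginary-time discretization, which multiplies both the computational cost and the available parallelism relative to a single-bead simulation.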
Keywords: Metal Organic Framework · Runtime System · Density Object · Pair Calculator · Remote Method Invocation
This research is partly funded by the NSF SI2-SSI grant titled "Collaborative Research: Scalable, Extensible, and Open Framework for Ground and Excited State Properties of Complex Systems" (ID ACI 13-39715). This research is also part of the Blue Waters sustained-petascale computing project, which is supported by the National Science Foundation (award number OCI 07-25070) and the state of Illinois. Blue Waters is a joint effort of the University of Illinois at Urbana-Champaign and its National Center for Supercomputing Applications.
This research used resources of the Argonne Leadership Computing Facility at Argonne National Laboratory, which is supported by the Office of Science of the U.S. Department of Energy under contract DE-AC02-06CH11357. This research also used computer time on Livermore Computing’s high performance computing resources, provided under the M&IC Program.