Atomistic algorithms, in which nuclear degrees of freedom are evolved on an underlying potential energy surface (PES), are commonly used to unravel physical phenomena in diverse scientific fields such as materials science, biology, and chemistry. The underlying PES may be given by empirical classical interatomic potentials or by fully quantum-mechanical descriptions that include electronic degrees of freedom. The main goal of these algorithms is to predict the response of an arrangement of atoms under different external conditions.
Molecular dynamics (MD) simulations evolve the nuclear degrees of freedom according to a set of equations of motion (EOM), fully resolving the atomic vibrations. These EOM involve the forces acting on each atom, whose calculation is the main computational bottleneck of MD simulations: the more accurate the forces, the larger the computational burden. In addition, for the integration to be stable on a given PES, the timestep must be small compared to the period of atomic vibrations, which is a second critical source of inefficiency. The phenomena of interest, however, often take place on time scales much longer than the atomic vibrations that set the maximum usable integration timestep. This makes straightforward MD inefficient at exploring the PES, to the point that it may be unable to capture mechanisms that are crucial to the process under study. Typically, MD simulations are limited to sub-microsecond timescales, even at the lowest end of the accuracy spectrum. Addressing this limitation calls for advanced algorithms that overcome the so-called time scale problem, either to predict long-time evolution more efficiently or to sample the PES more extensively and reach regions of phase space that traditional MD cannot.
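To make the timestep constraint concrete, the following is a minimal sketch of a velocity Verlet integrator, a standard MD integration scheme; the forces_fn callable and the numerical values quoted in the comments are illustrative assumptions, not taken from any article in this issue:

    import numpy as np

    def velocity_verlet(positions, velocities, masses, forces_fn, dt, n_steps):
        """Minimal velocity Verlet loop; forces_fn evaluates the PES gradient.

        The per-step force call dominates the cost, and dt must stay well
        below the shortest vibrational period (often ~10-100 fs), so dt is
        typically ~1 fs regardless of how slow the phenomenon of interest is.
        """
        forces = forces_fn(positions)
        for _ in range(n_steps):
            velocities += 0.5 * dt * forces / masses[:, None]
            positions += dt * velocities
            forces = forces_fn(positions)      # dominant cost of each step
            velocities += 0.5 * dt * forces / masses[:, None]
        return positions, velocities

    # Illustration: ~1 ns of simulated time at dt = 1 fs already requires
    # 10**6 force evaluations, and a microsecond requires 10**9.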
Two main strategies have been pursued to mitigate the time scale problem, targeting the two computational bottlenecks discussed above: (i) accelerating the calculation of the forces acting on every atom and (ii) bypassing or mitigating the limitations inherent to the short integration timestep by focusing on the slow state-to-state dynamics that control the evolution of materials over long timescales.
In the context of quantum MD (including electronic degrees of freedom), the main strategy so far has been to make the calculation of the forces faster, as accuracy is required to study the problem at hand. Indeed, even in Kohn–Sham density functional theory,1 a widely used mean-field approximation of the quantum many-body problem, the cubic scaling of computing the single-electron wave functions makes large-scale calculations prohibitively expensive. To overcome this limitation, there are many ongoing efforts to develop reduced-order-scaling approaches for Kohn–Sham DFT.2,3 Another promising approach is orbital-free density functional theory (OF-DFT), where the kinetic energy of non-interacting electrons is modeled as an explicit functional of the electron density and the computational complexity scales linearly with system size. The article in this Focus Issue by Witt et al.4 presents a comprehensive overview of OF-DFT, including the development of various kinetic energy density functionals and their accuracy, as well as a wide range of problems where OF-DFT has been adopted.
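As a schematic illustration of the idea (not drawn from the article by Witt et al.,4 whose functionals are considerably more sophisticated), the OF-DFT energy is expressed directly in terms of the electron density n(r), with the non-interacting kinetic energy approximated, in the simplest case, by the Thomas–Fermi functional (in Hartree atomic units):

    E[n] = T_s[n] + E_H[n] + E_{xc}[n] + \int v_{ext}(\mathbf{r})\, n(\mathbf{r})\, d\mathbf{r},
    \qquad
    T_s[n] \approx T_{TF}[n] = \frac{3}{10}\left(3\pi^2\right)^{2/3} \int n^{5/3}(\mathbf{r})\, d\mathbf{r},

which is minimized over n(r) subject to the constraint that the density integrates to the number of electrons. Because only the density, rather than a full set of orbitals, is optimized, the cost grows (quasi-)linearly with system size.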
Another recent development that accelerates quantum-mechanical MD is extended Lagrangian Born–Oppenheimer (XLBO) MD, a formalism that has proven extremely efficient for describing the evolution of an ensemble of atoms with quantum accuracy. The XLBO approach relies on the introduction of an auxiliary electronic degree of freedom as a dynamical variable, whose equations of motion can be integrated without breaking time reversibility, even under approximate self-consistent field convergence. This framework therefore enables accurate, energy-conserving simulations at a fraction of the cost of regular Born–Oppenheimer simulations.5–11 While originally formulated for microcanonical trajectories, it has recently been shown12 that the XLBO formalism is also compatible with the noise introduced via thermostatting in canonical simulations, maintaining the efficiency of the original formulation.
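Schematically, and following the general structure of the XLBO formulations,5–11 with details such as the generalized metric of Ref. 11 omitted here, the extended Lagrangian supplements the Born–Oppenheimer Lagrangian with an auxiliary density n(r) harmonically coupled to the optimized ground-state density \rho(r):

    \mathcal{L}_{XBO} = \frac{1}{2}\sum_I M_I \dot{\mathbf{R}}_I^2 - U[\mathbf{R};\rho]
      + \frac{\mu}{2}\int \dot{n}^2(\mathbf{r})\, d\mathbf{r}
      - \frac{\mu\omega^2}{2}\int \left[\rho(\mathbf{r}) - n(\mathbf{r})\right]^2 d\mathbf{r}.

In the limit of vanishing fictitious mass \mu, the auxiliary variable obeys \ddot{n} = \omega^2(\rho - n), which can be integrated with a time-reversible scheme and supplies an accurate extrapolated starting guess for the self-consistent field optimization at every step.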
These developments, among others, give access to longer simulated times than their traditional counterparts, which opens the possibility of coupling them with methods that explore state-to-state dynamics like the ones described below. Still, the development of advanced algorithms on this front is crucial to broaden the range of applicability of these methods and to gain access to physical problems of interest with unprecedented accuracy.
On the other hand, when the computational expense of the forces is not daunting, methods to overcome the second bottleneck can be applied. In the case of one-ended methods, where the goal is to generate individual trajectories that are representative of the long-time evolution of the system, acceleration techniques tend to fall into one of two broad classes: accelerated MD (AMD) and kinetic Monte Carlo (KMC) algorithms. The acceleration provided by these methods comes with a tradeoff: in exchange for longer times, one gives up on generating trajectories that are fully continuous in phase space and only requires that the trajectories be statistically correct at a coarse level. Specifically, these methods produce so-called state-to-state trajectories, where a state typically corresponds to a finite volume in configuration space associated with a given metastable configuration of the material. In other words, one might not be able to precisely determine the phase space variables of a system while it explores a given state, but the statistics of jumps from state to state should be preserved. When one is interested in predicting the behavior of materials over timescales that far exceed typical vibrational periods, this is usually all the information that is needed.
The first class of methods, the AMD techniques, was pioneered by Arthur F. Voter in the late 1990s.13–15 The philosophy that underpins AMD is to hang on to the most powerful feature of MD: that it can be used to generate statistically correct escape times and pathways without any a priori knowledge of the process. While the inability to do so directly under the conditions of interest is the root cause of the timescale problem, this limitation can be mitigated by instead running dynamics in specially crafted conditions where transitions are faster, and by unbiasing the results back to the target dynamics. This usually entails resorting to various rate theories, but it can also rely on simple limiting behaviors. The main strength of AMD methods is that fairly strong statements can often be made about the quality of the resulting state-to-state dynamics.16,17 However, the performance of AMD methods can be variable, as it relies on a large separation of timescales between vibrating within a state and escaping from it. When this separation is large, AMD can provide considerable acceleration, sometimes reaching seconds of simulated time; when it is limited by the presence of low barriers, the acceleration over MD can be modest.
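As a concrete example of such unbiasing, in hyperdynamics13 the trajectory evolves on a biased potential V(r) + \Delta V(r), where the non-negative bias \Delta V vanishes at the dividing surfaces between states, and the physical time is recovered by reweighting each MD step with the instantaneous boost factor:

    t_{hyper} = \sum_{i=1}^{N_{steps}} \Delta t_{MD}\, e^{\,\Delta V(\mathbf{r}(t_i))/k_B T},

so the accumulated clock advances, on average, much faster than the raw MD time, while transition state theory guarantees that the relative state-to-state escape probabilities are preserved.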
A contemporary area of interest in the development of AMD methods addresses the question of how best to leverage the massive parallelism provided by current supercomputers.18,19 This is non-trivial, as conventional MD shows notoriously poor strong scaling, i.e., the increase in simulation throughput at fixed system size quickly saturates as more and more processing elements are added. This is because integrating atomistic EOM is an inherently serial task: one timestep has to be completed before work on the next can start, so computation can only be distributed within a given timestep, not across timesteps. Of course, more accurate, and hence more computationally intensive, methods can efficiently exploit more processing elements than computationally cheaper empirical potentials; nonetheless, scaling often breaks down before computational resources are fully utilized, especially on leadership-scale computers with hundreds of thousands of cores. This presents an opportunity if these extra resources can be put to good use. Another area of interest is the development of methods that maintain efficiency even when the separation of timescales between vibrations and transitions is small, either because the system is large, so it contains many local environments that enable transitions at significant rates, or because the system is inherently complex. While a truly general solution to this problem might not exist, there is great potential to improve performance in the common case where trajectories are trapped within super-basins, i.e., sets of states connected by barriers that are low compared to those that need to be overcome to exit the set. This kind of situation leads to repetitive transitions, which offers opportunities for optimization.20 Both of these aspects are examined in articles in this Focus Issue, "Long-Time Molecular Dynamics Simulations on Massively Parallel Platforms: A Comparison of Parallel Replica Dynamics and Parallel Trajectory Splicing" (p. 813) and "Speculation and Replication in Temperature Accelerated Dynamics" (p. 823).
The second class of algorithms focused on long-term dynamics belongs to the kinetic Monte Carlo (KMC) family. The fundamental hypothesis of the KMC algorithm is that the average probability, to first order in δt, that a particular event α takes place in the next time interval δt equals cα δt, where cα is a rate constant.21–23 This assumption implies that the process studied is Markovian (i.e., the probability of escaping a state in a given time is just a function of the phase space variables at the present time). Gillespie22,23 showed how, from this fundamental hypothesis, a simple algorithm can be derived to follow the state-to-state dynamics of a process governed by such a relation. The KMC algorithm samples the time evolution of the process with the correct probability distribution, generating one realization of the master equation. In physical processes it is often the case that not all the rates of the possible events are known a priori, or that the number of events is so large that a direct solution of the master equation becomes daunting. In these cases, KMC offers a feasible alternative for the study of such processes, for example, in the context of chemical reactions, kinetic processes in alloys, systems under irradiation, or a combination thereof. It is also worth noting that, because KMC follows the state-to-state dynamics with the atomic vibrations integrated out, it can address processes spanning a wide range of time scales, unlike traditional molecular dynamics models.
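For concreteness, the following is a minimal sketch of a residence-time algorithm of this family;21–23 the rate list here is a hypothetical input supplied by the caller, whereas the on-the-fly variants discussed below rebuild it at every visited state.

    import math
    import random

    def kmc_step(rates):
        """Perform one residence-time KMC step.

        rates: list of (event, c_alpha) pairs for the current state, where
        c_alpha * dt is the first-order probability that event alpha fires
        in an interval dt. Returns the chosen event and the time advance,
        drawn from an exponential distribution with total rate c_tot.
        """
        c_tot = sum(c for _, c in rates)
        # Select an event with probability proportional to its rate.
        target = random.random() * c_tot
        cumulative = 0.0
        for event, c in rates:
            cumulative += c
            if cumulative >= target:
                break
        # Advance the clock; vibrational time scales are integrated out.
        dt = -math.log(1.0 - random.random()) / c_tot
        return event, dt

    # Hypothetical usage: two competing events with rates in 1/s.
    t = 0.0
    for _ in range(5):
        event, dt = kmc_step([("vacancy_hop", 1.0e6), ("cluster_dissociation", 2.0e3)])
        t += dt
        # ...update the configuration according to `event` here...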
KMC has been extensively used to study chemistry,24 thermal properties,25 surface growth,26 and microstructure evolution under irradiation.27–36 Different subclasses of algorithms have been developed depending on the specific processes to be studied. Object KMC28,37,38 and event-driven KMC39,40 algorithms, which model only the evolution of certain pre-defined objects, rely on a priori knowledge of the rates of all possible events; these models are ubiquitous in the study of the behavior of irradiation-created defects. Rigid-lattice models (lattice KMC) assume that the possible events can be described on an underlying lattice and constitute a known set whose rates can be calculated on the fly. These models have been used to study chemical evolution under thermal and irradiation conditions.32,34,41 On the other hand, newer algorithms have been developed that are capable of building the event catalog on the fly, making no assumptions about what these events might be; the adaptive KMC (AKMC),42,43 k-ART,44–46 and the self-evolving atomistic KMC (SEAKMC)47,48 are some examples. These algorithms can accurately describe the evolution of a complex process but, in general, are limited in system size and physical time compared to the less flexible or less accurate algorithms mentioned above. It is also worth mentioning that several parallelization schemes have been developed over the years to take advantage of the large-scale parallel architectures commonly available today.49–53
ON THE COVER
Energy landscape with different configurations of a platinum nanoparticle.
References
W. Kohn and L.J. Sham: Self-consistent equations including exchange and correlation effects. Phys. Rev. 140, A1133–A1138 (1965).
D.R. Bowler and T. Miyazaki: O(N) methods in electronic structure calculations. Rep. Prog. Phys. 75, 036503 (2012).
S. Goedecker: Linear scaling electronic structure methods. Rev. Mod. Phys. 71, 1085 (1999).
W.C. Witt, B.G. del Rio, J.M. Dieterich, and E.A. Carter: Orbital-free density functional theory for materials research. J. Mater. Res., 1–19 (2018). doi: https://doi.org/10.1557/jmr.2017.462.
A. Niklasson, C. Tymczak, and M. Challacombe: Time-reversible Born-Oppenheimer molecular dynamics. Phys. Rev. Lett. 97, (2006).
A. Niklasson: Extended Born-Oppenheimer molecular dynamics. Phys. Rev. Lett. 100, (2008).
A.M.N. Niklasson et al.: Extended Lagrangian Born–Oppenheimer molecular dynamics with dissipation. J. Chem. Phys. 130, 214109 (2009).
P. Steneteg, I.A. Abrikosov, V. Weber, and A.M.N. Niklasson: Wave function extended Lagrangian Born-Oppenheimer molecular dynamics. Phys. Rev. B 82, (2010).
J. Hutter: Car–Parrinello molecular dynamics. Wiley Interdiscip. Rev.: Comput. Mol. Sci. 2, 604–612 (2012).
P. Souvatzis and A.M.N. Niklasson: Extended Lagrangian Born-Oppenheimer molecular dynamics in the limit of vanishing self-consistent field optimization. J. Chem. Phys. 139, 214102 (2013).
A.M.N. Niklasson and M.J. Cawkwell: Generalized extended Lagrangian Born-Oppenheimer molecular dynamics. J. Chem. Phys. 141, 164123 (2014).
E. Martínez, M.J. Cawkwell, A.F. Voter, and A.M.N. Niklasson: Thermostating extended Lagrangian Born-Oppenheimer molecular dynamics. J. Chem. Phys. 142, 154120 (2015).
A.F. Voter: A method for accelerating the molecular dynamics simulation of infrequent events. J. Chem. Phys. 106, 4665 (1997).
A.F. Voter: Parallel replica method for dynamics of infrequent events. Phys. Rev. B 57, R13985 (1998).
M.R. Sørensen and A.F. Voter: Temperature-accelerated dynamics for simulation of infrequent events. J. Chem. Phys. 112, 9599 (2000).
C.L. Bris, T. Lelievre, M. Luskin, and D. Perez: A mathematical formalization of the parallel replica dynamics. Monte Carlo Methods Appl. 18, 119 (2012).
T. Lelièvre: Accelerated dynamics: Mathematical foundations and algorithmic improvements. Eur. Phys. J.: Spec. Top. 224, 2429–2444 (2015).
D. Perez, R. Huang, and A.F. Voter: Long-time molecular dynamics simulations on massively parallel platforms: A comparison of parallel replica dynamics and parallel trajectory splicing. J. Mater. Res., 1–10 (2017). doi: https://doi.org/10.1557/jmr.2017.456.
R.J. Zamora, D. Perez, and A.F. Voter: Speculation and replication in temperature accelerated dynamics. J. Mater. Res., 1–12 (2018). doi: https://doi.org/10.1557/jmr.2018.17.
D. Perez, E.D. Cubuk, A. Waterland, E. Kaxiras, and A.F. Voter: Long-time dynamics through parallel trajectory splicing. J. Chem. Theory Comput. (2015). doi: https://doi.org/10.1021/acs.jctc.5b00916.
A.B. Bortz, M.H. Kalos, and J.L. Lebowitz: A new algorithm for Monte Carlo simulation of Ising spin systems. J. Comput. Phys. 17, 10–18 (1975).
D.T. Gillespie: A general method for numerically simulating the stochastic time evolution of coupled chemical reactions. J. Comput. Phys. 22, 403–434 (1976).
D.T. Gillespie: Exact stochastic simulation of coupled chemical reactions. J. Phys. Chem. 81, 2340–2361 (1977).
Q. Yang, C.A. Sing-Long, and E.J. Reed: L1 regularization-based model reduction of complex chemistry molecular dynamics for statistical learning of kinetic Monte Carlo models. MRS Adv. 1, 1767–1772 (2016).
E. Martínez, O. Senninger, C-C. Fu, and F. Soisson: Decomposition kinetics of Fe–Cr solid solutions during thermal aging. Phys. Rev. B 86, (2012).
S.R. Acharya and T.S. Rahman: Toward multiscale modeling of thin-film growth processes using SLKMC. J. Mater. Res. 33, 709–719 (2018).
M.J. Caturla et al.: Comparative study of radiation damage accumulation in Cu and Fe. J. Nucl. Mater. 276, 13–21 (2000).
I. Martin-Bragado, A. Rivera, G. Valles, J.L. Gomez-Selles, and M.J. Caturla: MMonCa: An object kinetic Monte Carlo simulator for damage irradiation evolution and defect diffusion. Comput. Phys. Commun. 184, 2703–2710 (2013).
C.S. Becquart and C. Domain: Modeling microstructure and irradiation effects. Metall. Mater. Trans. A 42, 852–870 (2010).
R.E. Stoller, S.I. Golubov, C. Domain, and C.S. Becquart: Mean field rate theory and object kinetic Monte Carlo: A comparison of kinetic models. J. Nucl. Mater. 382, 77–90 (2008).
F. Soisson: Kinetic Monte Carlo simulations of radiation induced segregation and precipitation. J. Nucl. Mater. 349, 235–250 (2006).
F. Soisson: Monte Carlo simulations of segregation and precipitation in alloys under irradiation. Philos. Mag. 85, 489–495 (2005).
F. Soisson and T. Jourdan: Radiation-accelerated precipitation in Fe–Cr alloys. Acta Mater. 103, 870–881 (2016).
F. Soisson and G. Martin: Monte Carlo simulations of the decomposition of metastable solid solutions: Transient and steady-state nucleation kinetics. Phys. Rev. B 62, 203 (2000).
D. Terentyev et al.: Further development of large-scale atomistic modelling techniques for Fe–Cr alloys. J. Nucl. Mater. 409, 167–175 (2011).
T. Oppelstrup, V. Bulatov, G. Gilmer, M. Kalos, and B. Sadigh: First-passage Monte Carlo algorithm: Diffusion without all the hops. Phys. Rev. Lett. 97, (2006).
T.S. Hudson, S.L. Dudarev, M-J. Caturla, and A.P. Sutton: Effects of elastic interactions on post-cascade radiation damage evolution in kinetic Monte Carlo simulations. Philos. Mag. 85, 661–675 (2005).
M. Wen, N.M. Ghoniem, and B.N. Singh: Dislocation decoration and raft formation in irradiated materials. Philos. Mag. 85, 2561–2580 (2005).
C-C. Fu, J. Dalla Torre, F. Willaime, J-L. Bocquet, and A. Barbu: Multiscale modelling of defect kinetics in irradiated iron. Nat. Mater. 4, 68–74 (2004).
T. Oppelstrup, D.R. Jefferson, V.V. Bulatov, and L.A. Zepeda-Ruiz: SPOCK: Exact Parallel Kinetic Monte-Carlo on 1.5 Million Tasks (ACM Press, 2016); pp. 127–130. doi: https://doi.org/10.1145/2901378.2901403.
R.A. Enrique and P. Bellon: Compositional patterning in systems driven by competing dynamics of different length scale. Phys. Rev. Lett. 84, 2885 (2000).
G. Henkelman and H. Jónsson: Long time scale kinetic Monte Carlo simulations without lattice approximation and predefined event table. J. Chem. Phys. 115, 9657 (2001).
L. Vernon, S.D. Kenny, R. Smith, and E. Sanville: Growth mechanisms for TiO2 at its rutile (110) surface. Phys. Rev. B 83, (2011).
L.K. Béland, P. Brommer, F. El-Mellouhi, J-F. Joly, and N. Mousseau: Kinetic activation-relaxation technique. Phys. Rev. E 84, (2011).
F. El-Mellouhi, N. Mousseau, and L. Lewis: Kinetic activation-relaxation technique: An off-lattice self-learning kinetic Monte Carlo algorithm. Phys. Rev. B 78, (2008).
M-C. Marinica, F. Willaime, and N. Mousseau: Energy landscape of small clusters of self-interstitial dumbbells in iron. Phys. Rev. B 83, (2011).
L. Xu and G. Henkelman: Adaptive kinetic Monte Carlo for first-principles accelerated dynamics. J. Chem. Phys. 129, 114104 (2008).
H. Xu, Y.N. Osetsky, and R.E. Stoller: Simulating complex atomistic processes: On-the-fly kinetic Monte Carlo scheme with selective active volumes. Phys. Rev. B 84, (2011).
Y. Shim and J. Amar: Rigorous synchronous relaxation algorithm for parallel kinetic Monte Carlo simulations of thin film growth. Phys. Rev. B 71, (2005).
Y. Shim and J. Amar: Semirigorous synchronous sublattice algorithm for parallel kinetic Monte Carlo simulations of thin film growth. Phys. Rev. B 71, (2005).
E. Martínez, J. Marian, M.H. Kalos, and J.M. Perlado: Synchronous parallel kinetic Monte Carlo for continuum diffusion-reaction systems. J. Comput. Phys. 227, 3804–3823 (2008).
E. Martínez, P.R. Monasterio, and J. Marian: Billion-atom synchronous parallel kinetic Monte Carlo simulations of critical 3D Ising systems. J. Comput. Phys. 230, 1359–1369 (2011).
G. Nandipati et al.: Parallel kinetic Monte Carlo simulations of Ag(111) island coarsening using a large database. J. Phys.: Condens. Matter 21, 084214 (2009).