Background

The concept of integrated computational materials engineering (ICME) has garnered extensive interest for its potential to improve understanding of material structures and thereby enhance manufactured materials [1, 2]. This concept is important both as a research topic and, pragmatically, as a manufacturing approach to accelerate novel materials development and understanding. The central concept of ICME is hierarchical bridging of spatial and time scales, wherein different computational paradigms are employed to understand material behavior [3]. This bridging may be split into two parts. Downscaling is the methodical movement down through computational length/time scales by first defining the required material properties at the upper scale and then identifying what lower scale properties and structures are necessary for the material to fulfill those requirements. Once this process is completed, the movement through the scales is reversed and upscaling is employed by simulating and analyzing lower scales to define their effect on properties at higher length scales. ICME is a complex method of studying materials which was envisioned some time ago [4], but it has become more and more prominent as the computational methods and understanding required to use it effectively for various materials have matured [5, 6].

This complex methodology represents the most effective method known to connect the physics of electron interactions to macroscopic structures. It requires experimental validation to ensure that the approximations at each scale do not lead to errors dominating the response. At each scale, the microstructure and material arrangement have a critical effect on the material properties. These microstructural effects must be incorporated manually at higher scales. In contrast, at lower scales, the microstructural effects would be captured naturally if the simulated domain were not too confined for them to become noticeable. Thus, a detailed focus on microstructural effects at each scale is necessary to fully optimize material design.

Currently, there are three major areas where ICME would benefit from further improvements:

  1. Reduction of the inherent degrees of approximation at each length scale. At each length scale, approximations appropriate to the resolution of that scale are made. This reduces the amount of information required to describe the material and the number of computations required to predict its response. However, it leads to loss of physics inherently captured by lower scales. This physics must be recaptured by formalism at the higher scale that statistically recovers the average results of the lower scale physics (e.g., potentials, equations of state). Improvements in the accuracy of the formalism which averages the lower scale behavior would improve prediction accuracy at each scale.

  2. Improvement of the computational implementation of each scale to increase efficiency. Each scale is limited in the spatial and time extents that can be simulated, leading to small-scale and transient artifacts. Better computational algorithms, particularly better parallelization, would lead to significantly enhanced results, effectively allowing the domains of neighboring scales to overlap more and provide model validation.

  3. Strengthening of the bridges connecting the scales to ensure that model parameters at each scale are optimally chosen. Naturally, each scale makes predictions only as accurate as the parameters to which it is fit. These parameters should be chosen based on key physical properties which models at multiple scales can predict. Higher scale model parameters are calibrated to produce the same values of these properties as the lower scales. Selection of the most appropriate physical properties for each application and development of more rigorous and/or automated methods of calibration can make upscaling both more accurate and more efficient.

The purpose of this work is to advance this third area of development for metals between the ab initio and atomistic scales. At the ab initio scale, density functional theory (DFT) is employed, and at the atomistic scale, the modified embedded atom method (MEAM) is employed. Both of these methods contain significant approximations but have been previously noted for their usefulness and are popular today. DFT has several notable strengths: its fundamental equations are derived from the exact quantum Hamiltonian of the system, hence it is called an ab initio, i.e., first principles, method [7, 8]. Typically, for any input system, DFT produces a relaxed structure and the ground-state energy [9]. This information is applied in various ways to produce other properties such as the stiffness tensor. DFT is less accurate or less efficient for computing properties which depend on electron excitations [10]. Also, DFT does not provide an indication of the local distribution of system energy. The most notable approximation made in DFT is in the functional form given to the exchange-correlation energy functional which describes electron-electron repulsions, the exact form of which is unknown [8]. DFT also only produces the ground-state energy and system wavefunction and neglects dynamic effects of the nuclei [11]. Thus behavior which occurs far from the material equilibrium will not be captured. Nevertheless, DFT is the most exact method known which scales as N^3, where N is the number of electrons. This scaling generally limits DFT to simulating only hundreds or thousands of atoms. Pseudopotentials are generally employed to describe the combined behavior of the nuclei and the most tightly bound electrons, so that only the valence electrons are separately considered [12, 13]. This greatly increases efficiency, given DFT’s N^3 scaling, with relatively minor sacrifices in accuracy. Parallelization of traditional DFT implementations is fairly effective but requires a high communication bandwidth since the computation is inherently nonlocal.

The MEAM is described by an atomistic potential which consists of a pairwise form and an embedding function with angular screening, which improves structural accuracy [14–16]. The MEAM formulation is empirical rather than ab initio, but calibration of its 13 free parameters for each element to match ab initio results strengthens the physical grounding while allowing the development of an efficient model. The MEAM has the disadvantage that each alloy combination must be separately calibrated, as the introduction of new elements alters the electron densities, directly altering certain calibration parameters in ways that are not obvious at the atomistic scale [17]. The MEAM formalism was developed to describe metallic bonding [18]. Many aspects of covalent bonding cannot be adequately captured by MEAM, restricting it primarily to metal alloys (although recently some modifications of the formalism to handle covalent bonds have been developed [19]). MEAM scales linearly in the number of atoms, enabling computations with tens of millions of atoms on modern computational hardware. In the MEAM formulation, each atom’s potential depends only upon its nearest neighbors. Therefore, parallel computation with spatial partitioning of MEAM simulations requires relatively little information to be passed from each processor’s partition to the others. This contrasts strongly with DFT computations. Efficient spatial partitioning allows nearly linear scaling as the number of processors is increased, until each spatial partition includes only a few atoms.
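The locality that makes spatial partitioning efficient can be illustrated with a generic cell-list neighbor search (a minimal Python sketch for illustration only, not the MEAM or LAMMPS implementation): because each atom interacts only within a finite cutoff, candidate neighbors need only be sought in adjacent cells, giving the linear scaling in atom count described above.

```python
import math
from collections import defaultdict

def dist_pbc(a, b, box):
    """Minimum-image distance in a cubic periodic box of side `box`."""
    d2 = 0.0
    for x, y in zip(a, b):
        d = abs(x - y) % box
        d2 += min(d, box - d) ** 2
    return math.sqrt(d2)

def build_neighbor_lists(positions, box, cutoff):
    """Cell-list neighbor search, O(N) for short-ranged potentials.

    Atoms are binned into cells no smaller than the cutoff, so neighbors
    need only be sought in the 27 surrounding cells. Each cell touches
    only its immediate surroundings, which is why such simulations map
    naturally onto spatial partitions with little communication.
    """
    n_cells = max(1, int(box // cutoff))
    cell_size = box / n_cells
    cells = defaultdict(list)
    for idx, p in enumerate(positions):
        key = tuple(int(c // cell_size) % n_cells for c in p)
        cells[key].append(idx)
    neighbors = {i: [] for i in range(len(positions))}
    for (cx, cy, cz), members in cells.items():
        # Deduplicate wrapped cell keys so small boxes are not double counted.
        nearby = {((cx + dx) % n_cells, (cy + dy) % n_cells, (cz + dz) % n_cells)
                  for dx in (-1, 0, 1) for dy in (-1, 0, 1) for dz in (-1, 0, 1)}
        for other in nearby:
            for i in members:
                for j in cells.get(other, ()):
                    # i < j ensures each pair is recorded exactly once.
                    if i < j and dist_pbc(positions[i], positions[j], box) < cutoff:
                        neighbors[i].append(j)
                        neighbors[j].append(i)
    return neighbors
```

Because only a thin shell of boundary cells must be exchanged between partitions, communication volume grows with partition surface area rather than volume, in contrast to the nonlocal communication pattern of DFT.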

At the ab initio scale of DFT, which is usually described as the lowest scale useful for deriving bulk material properties, alloying is naturally captured because the electrons and nuclei are each separately tracked in the computational framework. Atomistic scales do not have this advantage because the individual electrons are replaced by an energy potential associated with each atom, and thus the potentials require substantial calibration to artificially define electron densities of alloy structures. At the atomistic scale, deformation properties such as crack initiation and growth, dislocations, grain boundary sliding, grain growth, and twinning are all naturally captured because these properties all follow naturally from the discrete nature of the ordered atomic lattice under stress. These properties all must be defined artificially at higher scales where the material is approximated as continuous. Without a good understanding of these properties, the strengths of new alloys after various processing steps cannot be clearly derived. Thus a rigorous and efficient method of developing atomistic potentials and measuring properties with them which may be used for upscaling is of paramount importance for the development of better materials via the ICME framework.

Efforts have been made recently to make atomistic potential development with MEAM more rigorous. A design framework for atomistic potentials which implements sensitivity analysis to tailor potential accuracy was introduced by [20]. Other authors have focused on uncertainty analysis methods to predict and manage potential reliability [21]. Meanwhile, recent efforts have focused on making upscaling from atomistic potentials to coarse-grained or continuum models more rigorous using similar uncertainty quantification [22–25]. The bridging process is thus usually complex and difficult to precisely quantify, but current progress suggests that it is possible to reduce user mediation in the bridging process to merely the specification of desired primary applications.

As a means to streamline this process, we have developed a novel heuristic software package, the MEAM Parameter Calibration tool (MPC tool), for development and calibration of MEAM alloy potentials from target data obtained from a database. The database is filled from experimental measurements and DFT simulations. This software package semi-automates the fitting process for MEAM alloys. It allows a rigorous algorithm to be used to link target data and calibration parameters but also enables the user to specify fitting methods, manually define parameters, or emphasize accuracy of certain properties when fitting for a particular application. Previously, we introduced this software methodology in a more primitive form for Al as a test material [26] and performed a sensitivity and uncertainty analysis on our calibration approach [27]. This software tool has now been redesigned in a generalized form to work appropriately for all elements to which the MEAM is applicable, regardless of their cohesive structure, and to calibrate alloy parameters using binary and ternary data for all element combinations.

Methods

Many different target properties have been used in the past for the fitting of MEAM alloys to experimental and first principles data. The targets discussed in detail below are chosen primarily with the objective of fitting the plasticity properties of the material. Some of them, such as the energy-volume curves for various lattices and the elastic constants, are closely ingrained in the MEAM formalism and should always be included as targets. Others, such as the generalized stacking fault energies, ensure reasonable results for specific aspects of plasticity. The MPC tool is designed with a modular approach so that various targets may be used and others ignored. The modular approach also ensures that new target data may be added later for calibration or validation with minimal effort.

Calibration of MEAM parameters for metal alloy plasticity was based first on the following properties of single elements determined by DFT or experiments:

  1. The energy-volume curve for the most stable lattice structure. This automatically contains the atomic radius, the cohesive energy, and the bulk modulus.

  2. The energy-volume curves for two other lattice structures. Transition energies from the primary lattice structure to secondary lattice structures are obtained from these data coupled with the first set of properties.

  3. The elastic constants. For cubic lattices, there are three independent elastic constants. For transversely isotropic materials, there are five independent elastic constants, and for lower symmetry materials, there are up to 21 independent elastic constants. All independent elastic constants are calibrated for the primary lattice structure.

  4. The vacancy formation energy. This may be computed for either relaxed or unrelaxed ion positions. We prefer using the relaxed positions, although this increases the calibration time.

  5. The interstitial formation energy. Multiple sites may be chosen for inserting interstitials, e.g., tetrahedral, octahedral, or dumbbell positions. These are determined based on the primary lattice, and all ion positions are relaxed.

  6. The surface formation energy. Surfaces on close-packed planes are defined based on the primary lattice structure. These may also be relaxed or unrelaxed, and again we prefer the relaxed values.

  7. The generalized stacking fault energy (GSFE). The GSFE is determined for slip directions on slip planes of the primary lattice. Dislocation properties are generally captured well if the GSFE curve closely matches DFT targets.

Calibration targets were computed from the raw simulation data as follows. The energy in the energy-volume curve was divided by the number of atoms to get the energy-per-atom vs. volume. Then the cohesive energy and relaxed volume were obtained by interpolation using a Murnaghan equation of state [28]. The lattice parameter and atomic radius were derived from the relaxed volume using the basic lattice-dependent formulas which relate lattice parameter, atomic radius, and volume.
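As a concrete illustration of this step, the following Python sketch (with hypothetical data; not the MPC tool's actual code) fits energy-per-atom vs. volume-per-atom data to the Murnaghan equation of state to extract the cohesive energy E0, relaxed volume V0, and bulk modulus B0:

```python
import numpy as np
from scipy.optimize import curve_fit

def murnaghan(V, E0, V0, B0, Bp):
    """Murnaghan equation of state: energy per atom as a function of
    volume per atom, with bulk modulus B0 and its pressure derivative Bp."""
    return E0 + B0 * V / Bp * ((V0 / V) ** Bp / (Bp - 1.0) + 1.0) - B0 * V0 / (Bp - 1.0)

def fit_eos(volumes, energies):
    """Fit an E-V curve; returns (E0, V0, B0, Bp).

    The initial guess takes the grid minimum for E0/V0 and generic
    values for B0 and Bp, which is usually sufficient for convergence.
    """
    volumes = np.asarray(volumes)
    energies = np.asarray(energies)
    p0 = (energies.min(), volumes[np.argmin(energies)], 1.0, 4.0)
    popt, _ = curve_fit(murnaghan, volumes, energies, p0=p0, maxfev=10000)
    return popt
```

The relaxed lattice parameter then follows from V0 through the lattice-dependent volume formula (e.g., a = (4 V0)^(1/3) for an fcc cell with one atom per primitive cell).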

The elastic constants were obtained from the following formula [29]:

$$ E\left(e_{i}\right) = E_{0}-P(V)\Delta V+\frac{V}{2}C_{ij}e_{i}e_{j}+O\left(e_{i}^{3}\right) $$
(1)

Here Einstein notation has been used; e and C are the strain tensor and the stiffness tensor in Voigt notation. E_0 is the cohesive energy, and E(e_i) is the energy per atom for a given strain. V is the volume per atom, and P(V) is the pressure, which can be expressed as a trace of the stress and thus as a contraction of the stiffness tensor and the strain. As the strain components approach zero, the higher order terms (\(O\left ({e_{i}^{3}}\right)\)) become negligible, and thus the stiffnesses may be computed from the cell energy at a given strain. The energies at small positive and negative strain values were used along with the equilibrium energy to correctly produce the curvature proportional to each elastic constant. The equilibrium volume was obtained from the Murnaghan equation of state, but the energies for both equilibrium and strained configurations were calculated directly rather than obtained from the equation of state. The strain values are adjustable to fit the needs of various materials.
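The curvature extraction described above can be sketched as follows (a minimal illustration assuming a strain pattern that excites a single Voigt component at the equilibrium volume, where P is approximately zero):

```python
def stiffness_from_energies(E_plus, E_zero, E_minus, volume, strain):
    """Recover a stiffness from a central-difference curvature.

    Near equilibrium, Eq. (1) reduces to E(e) ~ E0 + V*C*e^2/2 for a
    single strain component, so the second derivative of the energy per
    atom with respect to strain, divided by the volume per atom, gives C.
    """
    d2E = (E_plus - 2.0 * E_zero + E_minus) / strain ** 2
    return d2E / volume
```

In practice the MPC tool fits combinations of strain patterns so that each independent elastic constant of the primary lattice is isolated in turn; the finite-difference form above is exact for the quadratic part of the energy.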

The vacancy formation energy is given by:

$$ E_{\text{vac}}=E_{t}-N E_{c} $$
(2)

where E_t is the total simulation energy, N is the number of atoms, and E_c is the cohesive energy per atom. Since periodic conditions are used, a significant number of unit cells, and hence atoms, must be replicated in the simulation to avoid interactions between the vacancy and its periodic images. The exact size of our simulations varies with the lattice type, but typically over 100 atoms are simulated for the vacancy energy.

Interstitial energies are computed using the same formula as the vacancy energy. Again, a large cell must be used and interstitial energies differ depending upon the substitution site for the extra atom.

The surface formation energy is given by

$$ E_{s}=\frac{E_{t}-N E_{c}}{2A} $$
(3)

where E_s is the surface formation energy and A is the surface area. The factor of two appears because the periodic conditions enforce that two free surfaces are simulated. The box dimension normal to the surface is twice the thickness of the material so that the spacing of the surfaces through the vacuum and through the material is identical. A material thickness of at least 10 Angstroms was employed to minimize interactions between the two surfaces.

The GSFE is given by:

$$ E_{\text{GSFE}}(s) = \frac{E_{t}(s)-E_{t}(0)}{A} $$
(4)

The shift s is a displacement along the slip direction of all the material above the slip plane, which slices the simulation box in half. The GSFE simulation contains free surfaces on the top and bottom of the box. The energy of these surfaces is subtracted from E_GSFE by the E_t(0) term, which also subtracts the cohesive energy of all atoms. This setup is used rather than having stacking faults on the top/bottom of the cell because not all GSFE curves are symmetric with respect to the sign of the shift, and a setup including two stacking faults would average them incorrectly in non-symmetric cases.
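Equations (2)-(4) reduce to simple energy bookkeeping once the simulation energies are in hand; a minimal Python sketch (with hypothetical values, not the MPC tool's code) makes the cancellations explicit:

```python
def vacancy_energy(E_total, n_atoms, E_coh):
    """Eq. (2): formation energy of a point defect (vacancy, or with the
    appropriate cell a self-interstitial) in an N-atom periodic cell."""
    return E_total - n_atoms * E_coh

def surface_energy(E_total, n_atoms, E_coh, area):
    """Eq. (3): the periodic slab contains two free surfaces, hence 2A."""
    return (E_total - n_atoms * E_coh) / (2.0 * area)

def gsfe(E_shifted, E_unshifted, area):
    """Eq. (4): E_t(0) already carries both the cohesive energy of all
    atoms and the slab's two free surfaces, so both cancel in the
    difference and only the stacking fault energy per area remains."""
    return (E_shifted - E_unshifted) / area
```

Sweeping the shift s over a grid of displacements and applying `gsfe` to each relaxed energy traces out the full GSFE curve used as a calibration target.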

The DFT target values for these properties were computed using VASP [13]. VASP uses periodic boundary conditions. In all simulations, ion positions were relaxed, but box shape and size were unaltered. (Energy-volume curves were generated by running a series of simulations in which the volume was held constant within each simulation but varied from one simulation to the next.) The relaxed atomic radius from the energy-volume curve was used for the ion positions in all other calculations. For lower symmetry structures such as the hexagonal close-packed structure, there are two independent lattice parameters. Ideal values were obtained for each of these, and then the energy-volume curve was obtained keeping the c/a ratio constant. Because of the periodic boundary conditions, surfaces and generalized stacking faults were produced in sets of two, one in the center of the simulation cell and a second on the top/bottom face. LAMMPS was used for atomistic scale simulations [30]. At the atomistic scale, simulations identical to those employed for DFT calibration targets were used. LAMMPS was compiled as a shared library, allowing it to be directly integrated into our calibration algorithm.

Alloy properties must also be fit in order to develop a meaningful atomistic potential. The following properties of alloys are used for multi-element calibration:

  1. Heat of formation and lattice parameters of binary lattice structures. Lattices such as B1 (the NaCl structure), B2, C1, and L1_2 may be used to calibrate solubility properties or to validate potential predictions. The MEAM formalism allows one binary lattice heat of formation and lattice parameter to be explicitly defined as potential parameters. These are generally fit to DFT data for either B1 or B2.

  2. Binary lattice elastic constants. Generally, only the bulk modulus is used for calibration.

  3. Foreign element substitution energy. When the foreign atom is about the same size as or larger than the atoms of the material into which it is substituted, this property is closely related to solubility and diffusion.

  4. Foreign interstitial energy. If the foreign atom is significantly smaller than the atoms of the bulk material, inserting it in an interstitial site of the lattice is more useful than replacing a lattice site.

  5. Binding energy between two foreign elements. This is the only target used to fit ternary alloy properties. It is also used for binary alloys where both foreign elements are of the same type. The binding energies are defined here as the difference in energy between having two neighboring foreign elements in a matrix and having the same two elements in the matrix far away from one another.

Heat of formation is computed per mole as follows:

$$ H_{l}(i,j) = \frac{E_{l}(i,j)-N(i)E_{c}(i)-N(j)E_{c}(j)}{N(i)+N(j)} $$
(5)

Here i represents atoms and energies of one element, and j represents those of another; l denotes the binary lattice in question. It should be noted that for lattices which are not composed of equal numbers of each type of atom, H_l is not symmetric with respect to switching i and j. The heat of formation is computed for a fully relaxed lattice found using energy-volume curves, as described for single element lattices. E_c(i) is the cohesive energy per atom of the ith element.

Binary elastic constants are computed in the same manner as those for single elements.

The foreign element substitution and foreign interstitial energies are computed using the following equation:

$$ E_{f}(i,j)=E_{t}-N(i) E_{c}(i) - E_{c}(j) $$
(6)

where E_c(i) is the cohesive energy of the matrix and E_c(j) is the cohesive energy of the foreign element.

Binding energies are computed using a similar formula:

$$ E_{b}=E_{t}-N(i) E_{c}(i) - E_{c}(j) - E_{c}(k) - E_{f}(i,j) - E_{f}(i,k) $$
(7)
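The alloy formulas (5)-(7) can likewise be sketched directly (hypothetical numbers; here n_i counts the matrix atoms of element i actually present in each defected cell):

```python
def heat_of_formation(E_lattice, n_i, E_c_i, n_j, E_c_j):
    """Eq. (5): per-atom heat of formation of a binary lattice relative
    to the cohesive energies of its constituents. Note the asymmetry in
    i and j for non-equiatomic lattices such as L1_2."""
    return (E_lattice - n_i * E_c_i - n_j * E_c_j) / (n_i + n_j)

def foreign_defect_energy(E_total, n_i, E_c_i, E_c_j):
    """Eq. (6): formation energy of a single substitutional or
    interstitial foreign atom j in an element-i matrix."""
    return E_total - n_i * E_c_i - E_c_j

def binding_energy(E_total, n_i, E_c_i, E_c_j, E_c_k, E_f_ij, E_f_ik):
    """Eq. (7): subtracting the isolated single-defect formation energies
    E_f leaves only the interaction energy of the neighboring pair;
    negative values indicate that the two foreign atoms attract."""
    return E_total - n_i * E_c_i - E_c_j - E_c_k - E_f_ij - E_f_ik
```

In each case E_total is the relaxed energy of the defected periodic cell, so the same large-cell requirements discussed for single element defects apply.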

Results and discussion

Synopsis of the workflow

The software package is divided into two parts: the first computes the desired target properties from DFT, while the second calibrates the MEAM potential to match those targets. To connect the two parts, the target properties of single elements and alloys are cataloged by the DFT code in a database which is accessed by our calibration package. Values which are already published may be entered manually into the database in lieu of being computed from first principles using VASP. Initial guess values for MEAM parameters are loaded from a standard MEAM library file and parameter file. Initial MEAM values for the target properties are computed from these guess values. After this, the user may specify which parameters are to be calibrated to which target properties. After calibration, the user reevaluates the difference between target properties and their MEAM predictions. This is repeated iteratively for various combinations of target properties and parameters until a satisfactory match is made for all properties. This process is depicted in the flowchart shown in Fig. 1. Currently, a beta version of this software package may be accessed for testing purposes from our website [31].

Fig. 1
figure 1

The workflow of calibrating and validating a new MEAM potential using our software is illustrated with a flowchart. The user starts with an initial guess MEAM potential and published experimental and first principles data. Other needed data for calibration may be obtained using the DFT portion of the code which runs VASP. Once all necessary data is input into a database, the calibration of the initial potential to the target data is performed using the MEAM calibration graphical user interface (GUI). Validation of the calibrated potential should be performed before it is complete

Software features

Since the first principles calculations do not require any user interaction beyond the initial definitions for properties to compute and accuracy parameters, the program for performing these calculations is currently script-based. A user defines the elements to be computed and their properties using simple keyword strings. K-point resolution, energy convergence, and ionic and electronic relaxation steps may be adjusted but the default values should give reasonable results in nearly all cases. Complete results of these calculations are retained in a directory tree, and the resulting properties are compiled in a database to be read by the calibration routine.

Unlike obtaining first principles data, the calibration of an atomic potential is a necessarily interactive process. Although it can be entirely automated, this is not usually desirable since dedicated users can usually outperform an automated calibration routine by repeating small calibration steps and tweaking certain parameters. To support this process, a graphical user interface was developed to simplify the steps a user needs to take to calibrate a potential effectively, regardless of how much interaction is desired. A series of screenshots of the MPC software package are shown in Figs. 2 and 3. Potentials for these illustrations were taken from the five element MEAM potential by [17]. The figures illustrate the steps of initiating the software and selecting elements, viewing plots comparing MEAM predictions to DFT data, and the calibration process. Calibration progress is depicted by the current value of the error function, which is updated at each calibration step. Calibration may be paused by the user at any point, retaining the current parameter values. Changes made to parameters by calibration may also be reverted if they have an undesired effect on other properties.

Fig. 2
figure 2

Graphical user interface of the MPC software. The interface is divided into sections showing current library entries, parameter file entries, controls for plotting, calibration, and running LAMMPS, and a plot area which may be used in various ways

Fig. 3
figure 3

Graphical user interface demonstration of the calibration process where here the GSFE curves are calibrated to DFT point data

The calibration process can be a time-consuming task, as there are many combinations of properties and parameters which may be chosen. Hence, there are certain recommended combinations that can optimize the process [26]. A recommended sequence of calibration steps may be selected one-by-one from the menu by beginning users. More advanced users may wish to deviate from these steps, as better results can sometimes be achieved with other calibration combinations. The software also allows manual changes to parameters, which lets users see immediately what effect those parameters have on target properties and can sometimes speed the fitting process.

The MPC software contains plot tools allowing users to select a wide variety of ways to depict data from DFT and the current MEAM potential. Some of these plots are shown in Figs. 2 and 3 illustrating comparisons of energy vs. volume for various elements and lattices, and generalized stacking fault energies for several slip modes. Much of this data, particularly for combinations of multiple elements, exceeds what can practically be used for calibration and so it is often used as validation of the potential.

Design philosophy and capabilities

A large number of software packages are available for querying properties of materials using MEAM potentials and other models. Some of these also include calibration routines, and many have more capabilities than our tool. One key feature that distinguishes our software package is its heuristic approach to potential development, which emphasizes an intuitive interface with the user. The MPC software will be useful both to advanced programmers well versed in atomistic codes and to users who have nearly no knowledge of atomistics and only wish to develop a working potential for a single niche application. The program is designed to have a minimal learning curve so that students, whether in a classroom or research setting, can learn to use it quickly enough that they spend most of their time generating useful results rather than learning how to navigate the program.

A second key feature of our software package is its blend of automation and user flexibility. The software will automatically calibrate parameters to fit target properties from DFT or experiments. However, some target properties are better matched to certain parameters than others, and this varies depending upon the application for which the potential is intended. Thus the user is allowed to choose which parameters and properties to match in each calibration. This is an iterative process in which the parameters are fit and then fit again until everything matches well. Manual parameter adjustment is also necessary at times to achieve the best results. Our software strikes a balance between automating the whole process and allowing the user enough flexibility to get better results.

Future developments

The following features are currently in development or planned for future releases:

  1. Expansion of the single element target properties available to include the energy required to add an atom to a free surface at various locations. This is a common metric used to fit certain potentials when surface energies are important.

  2. Additional binary and ternary target properties. These include elastic constants for ternary lattices and binding energies between foreign elements and vacancies or self-interstitials.

  3. Validation metrics for thermal properties. In general, these will be too computationally expensive to use directly for calibration, but accurate predictions of the melting temperature and stable binary phases are essential for many alloy designs.

  4. A user interface for the first principles code used to compute the database of properties for calibration. This is currently done with a script-based program, but a user interface will reduce the learning curve so that nearly any user will be able to perform these calculations.

Validation of the software

The MPC tool has been tested with various single elements and alloying combinations, but most of the new potentials developed using the tool have not been validated enough for publication. Validation external to the MPC tool will always be necessary as MEAM is an empirical formulation and may lead to unexpected errors if adequate validation is skipped. However, some validation of the tool within this publication is necessary and so we here illustrate its use to develop a new Mg potential with plasticity property predictions which are superior to previous potentials. These results should be considered preliminary as the predicted potential has not been subjected to extensive finite temperature testing to ensure its accuracy.

Application of MEAM fitting to magnesium

Magnesium makes an ideal candidate single element to demonstrate the need for a better MEAM fitting technique. Mg is an important lightweighting candidate material for vehicular applications, but its low formability at room temperature has hampered economical implementation of its ideal strength-to-weight and stiffness-to-weight ratios in large-scale industrial applications. Extensive research into alloying recipes has so far had only limited success in improving its formability. Brittle failure during low temperature forming has been traced to the nucleation of highly disordered twin embryos, due to the lack of available slip systems. This stems from the low-symmetry hexagonal close-packed structure, which in Mg has easy slip only on the basal plane. Thus, researchers have concluded that alloying elements which inhibit twinning and increase the propensity for non-basal slip may provide enough formability to make Mg a viable economic lightweighting material.

The search for alloying recipes which fulfill these objectives using experiments alone is very time-consuming and costly. Therefore, accurate atomistic models of Mg alloys are needed to predict promising alloying components and concentrations. However, the single element Mg alone proves to be quite challenging to accurately fit for plastic behavior. Mg alloys have five active slip modes (basal 〈a〉 : \(\langle 2\bar {1}\bar {1}0\rangle \{0001\}\), prismatic 〈a〉: \(\langle 2\bar {1}\bar {1}0\rangle \{01\bar {1}0\}\), pyramidal 〈a〉: \(\langle 2\bar {1}\bar {1}0\rangle \{01\bar {1}1\}\), first-order pyramidal 〈c+a〉:\(\langle 2\bar {1}\bar {1}3\rangle \{10\bar {1}\bar {1}\}\), and second-order pyramidal 〈c+a〉: \(\langle 2\bar {1}\bar {1}3\rangle \{2\bar {1}\bar {1}\bar {2}\}\)), and five twin modes (\(\{10\bar {1}2\}, \{10\bar {1}1\}, \{10\bar {1}3\}, \{11\bar {2}1\}\), and \(\{11\bar {2}2\}\)). The \(\{11\bar {2}1\}\) and \(\{11\bar {2}2\}\) twin modes are not generally observed in pure Mg, but are present in some alloys which contain relatively small percentages of rare-earth elements. The most important of these mechanisms are basal 〈a〉 slip, second order pyramidal 〈c+a〉 slip, \(\{10\bar {1}2\}\) twinning, and \(\{10\bar {1}1\}\) twinning. To our knowledge, no previous potential has been calibrated or validated to all of these dominant plastic mechanisms in Mg. Inaccurate behavior for any of them leads to a Mg potential with lopsided plasticity predictions, such as predicting a preference for first order 〈c+a〉 slip over second order 〈c+a〉 slip. This preference, which disagrees with most experimental data, is present in all current Mg potentials.

Here we introduce a new preliminary Mg potential, fit primarily to the stable and unstable Shockley fault energies, the stable second-order pyramidal fault energy, the \(\{10\bar {1}2\}\) twin boundary energy, and the \(\{10\bar {1}1\}\) twin boundary energy. The cohesive energy, lattice parameter, and bulk modulus are also incorporated directly into the potential, since they are analytically related to certain parameters. The potential is compared with experimental and DFT results and with four previously published Mg potentials.

The potential parameters

As initial guess values for the parameters, we chose the potential of Wu [32]. This potential fits basal 〈a〉 and pyramidal 〈c+a〉 slip well, but predicts the \(\{10\bar {1}2\}\) twin boundary energy 28% too high and the \(\{10\bar {1}1\}\) twin boundary energy 17% too high. After fitting to improve the twin boundary energies while retaining a good fit for the basal Shockley and second-order pyramidal GSFE curves, we obtained the parameters given in Table 1. The values of Wu’s [32] potential are given for comparison.
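The refit described above amounts to a weighted least-squares problem over the fault and twin boundary energies, started from an existing parameter set. The sketch below illustrates that structure only: the target values, weights, two-parameter model, and random search are all placeholders standing in for the actual MPC targets, MEAM parameters, and optimizer:

```python
import random

# Hypothetical fitting targets (values are illustrative, not the
# actual DFT/experimental targets used in the paper).
targets = {
    "shockley_stable":   0.030,
    "shockley_unstable": 0.090,
    "pyrII_stable":      0.170,
    "twin_1012":         0.007,  # {10-12} twin boundary energy
    "twin_1011":         0.005,  # {10-11} twin boundary energy
}
# Emphasize the twin boundary energies, which the starting
# parameter set overpredicts.
weights = {"twin_1012": 2.0, "twin_1011": 2.0}

def model(params, key):
    """Placeholder forward model. In a real workflow this would run an
    atomistic simulation with the candidate MEAM parameters and measure
    the corresponding fault or boundary energy."""
    a, b = params
    base = {"shockley_stable": a * 0.030, "shockley_unstable": a * 0.090,
            "pyrII_stable": b * 0.170, "twin_1012": b * 0.008,
            "twin_1011": a * 0.006}
    return base[key]

def cost(params):
    """Weighted sum of squared residuals over all fitting targets."""
    return sum(weights.get(k, 1.0) * (model(params, k) - t) ** 2
               for k, t in targets.items())

# Crude random-restart local search from an initial guess; (1.0, 1.0)
# stands in for the published parameter set used as the starting point.
best = (1.0, 1.0)
rng = random.Random(0)
for _ in range(2000):
    trial = tuple(p + rng.uniform(-0.05, 0.05) for p in best)
    if cost(trial) < cost(best):
        best = trial
```

In practice a derivative-based or simplex optimizer would replace the random search, but the shape of the problem, targets, weights, a forward model, and a scalar cost, is the same.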

Table 1 MEAM parameters for new Mg potential

Like the potential by Wu [32], we used a radial cutoff of 5.875 Å.

Predicted material properties

Material properties predicted by the new potential are compared with experimental and simulation values in Table 2.

Table 2 Property predictions by new MEAM Mg potential

Here property predictions for all atomistic potentials were calculated using the MPC software. DFT and experimental reference values are obtained from the literature [17, 32–39] and supplemented with our own DFT calculations. Figure 4 shows the GSFE curves computed with our potential. The first-order pyramidal slip and Shockley curves follow minimum energy paths rather than straight lines along the 〈c+a〉 vector and \(\langle 10\bar {1}0\rangle \) vector, respectively.
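A straight-path GSFE curve is generated by rigidly displacing one half of the crystal along the fault vector and recording the excess energy per unit area at each step. The sketch below shows that sampling loop with a placeholder energy function (a sinusoid, a classic first approximation); a real calculation would instead relax a supercell with the MEAM potential or DFT at each displacement:

```python
import math

def slab_energy(frac):
    """Placeholder for the energy per unit area (J/m^2) of a supercell
    whose upper half is displaced by frac * b along the fault vector,
    relaxed normal to the fault plane. The sinusoidal form and the
    0.092 J/m^2 amplitude are illustrative assumptions only."""
    gamma_us = 0.092  # illustrative unstable fault energy
    return gamma_us * math.sin(math.pi * frac) ** 2

def gsfe_curve(n=11):
    """Sample the generalized stacking fault energy at n evenly spaced
    displacements along the path, referenced to the perfect crystal."""
    e0 = slab_energy(0.0)
    return [(i / (n - 1), slab_energy(i / (n - 1)) - e0)
            for i in range(n)]
```

For a minimum energy path, as used for the Shockley and first-order pyramidal curves above, the in-plane displacement at each step would additionally be relaxed rather than constrained to the straight line between end states.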

Fig. 4 GSFEs for major slip modes calculated with the new Mg MEAM potential

The Wu [32] potential probably should be considered the best all-round potential because of its superior predictions for surface and vacancy energies, accurate elastic constants, and generally reasonable plastic mode energies. However, the new potential demonstrates superior predictions for twin boundaries and some stacking faults, but gives a low \(C_{13}\) elastic constant, similar to the Liu [40] potential, along with low surface energies and a very low vacancy energy. The new potential developed here must be considered preliminary, since it has not undergone significant finite temperature testing. However, it clearly validates the use of the MPC tool. Our objective was to better predict slip and twinning properties while lowering the priority of other properties, and the resulting potential reflects exactly that objective. At low stresses and at low temperatures relative to melting, where deformation properties can be more accurately assessed, the vacancy and surface energies will not affect results nearly as much as the GSFE curves. As the stress increases, the potential would over-predict cracking and void growth, but at MD timescales these processes occur at very high stresses relative to slip and twin propagation. The low \(C_{13}\) constant could also be a concern, but the wealth of simulation data produced using the Liu [40] potential with a similar \(C_{13}\) value suggests that it does not significantly hamper the accuracy of the potential. In an upcoming work, we will further explore the relative merits of this potential and possibly suggest modifications to further improve its predictive accuracy.

Conclusions

The MPC software was designed from the ground up as an ICME tool which enables researchers who are not experts in atomistics or density functional theory to develop potentials. This was made possible by making the fitting process less arbitrary and by drastically simplifying the process of running DFT and atomistic simulations and comparing the resulting data during calibration. Rapid development techniques are especially important for alloy potentials, whose number of fitting parameters grows rapidly with the number of elements. The potential development scheme has been validated by applying it to the challenging multi-objective problem of magnesium plasticity. The resulting potential was shown to be highly competitive with state-of-the-art published potentials.

Future innovations in metallic materials development may be greatly accelerated by simplifying the development of new potentials which reliably capture physical behavior at the atomistic scale. This, in turn, makes developing more accurate higher-scale models an easier and more rapid task, advancing the untapped potential for efficient large-scale materials development. The MPC software fills a key gap in rapid materials development by making more intuitive the task of creating alloy potentials which properly describe material behavior across a variety of compositions.