
A Time Integration Method for Phase-Field Modeling

  • Tsung-Hui Huang
  • Tzu-Hsuan Huang
  • Yang-Shan Lin
  • Chih-Hsiang Chang
  • Shu-Wei Chang
  • Chuin-Shan Chen
Original Research

Abstract

A novel numerical time integration method for solving phase-field problems is presented. The method is based on the generalized single step single solve (GSSSS) family of algorithms, which preserves second-order time accuracy and provides controllable numerical dissipation independently on each time variable. Furthermore, we demonstrate an enhancement of the time integration method that reduces the numerical oscillation of the differential algebraic equations arising from the phase-field model: the algebraic equation is evaluated at \(t_{n+1}\) instead of the general time level \(t_{n+W_{1}}\). This enhancement suppresses the numerical oscillation associated with non-dissipative schemes. Two popular phase-field examples, the Cahn–Hilliard equation and a simple phase-field-crystal equation, are used to demonstrate the capability of the proposed time integration scheme. We conclude that this approach has a significant advantage over currently used algorithms and provides a new avenue toward robust time integration schemes for phase-field problems with a high-order term in the free energy.

Keywords

Cahn–Hilliard model · Differential algebraic equations · GSSSS family of algorithms · Phase-field-crystal model · Phase-field method

Introduction

Numerical simulation of phase-field modelling has become popular in recent years. The method was originally applied to solidification dynamics [3] and is now widely used in various large-scale problems with interfacial phenomena, such as fracture dynamics [15], bubble kinematics [12], and microstructure evolution [5]. The phase-field method uses a field variable, known as the order parameter, governed by a partial differential equation, to describe the phase transition across an interface. Two specific values (e.g. − 1 and 1) represent the two phases, connected by a continuous diffuse boundary. Thus, a front-tracking technique is no longer necessary and the phase propagation can be traced more easily [6, 17]. The phase-field method is increasingly applied to problems that are difficult to study experimentally, and numerical simulation of the phase-field method therefore continues to improve in robustness and accuracy in the pursuit of a better understanding of complex interfacial phenomena [6, 7, 8, 9, 12, 18, 22].

An essential feature of the phase-field method is that phase-field equations can be derived from a higher-order energetic variational formulation [10]. This variational form usually contains a biharmonic (or higher-order) operator. Therefore, numerical methods such as the finite element method require higher continuity (\(C^{1}\) or higher) to approximate the solution of the weak form. One way to bypass higher-continuity functions is to introduce extra variables to represent the high-order terms, so that piecewise linear approximation becomes applicable for the weak form solution. This procedure is very common for the phase-field method [23] and turns the phase-field equations from a differential system into a differential algebraic system of index 1. Another noteworthy feature is that the phase-field method often results in a stiff system, depending on the interface width: as the width is reduced, the system becomes extremely stiff [23], which usually causes numerical instability. Also, in the formulation of the phase-field model, thermal noise is usually introduced at the interface to obtain physically significant features such as dendrite morphology [23].

Semi-implicit time integration schemes were first used for phase-field equations, meaning that the differential equations and the algebraic equations are solved separately with different methods [2, 16, 24], for example, the Euler method combined with the Crank–Nicolson method. This approach may reduce the global time accuracy and is only conditionally stable. Also, solving the DAEs separately is computationally inefficient. To address this issue, implicit schemes with numerical dissipation have been proposed. Software packages with dissipative schemes, such as DASSL [21] or ode15s in MATLAB, have performed well in simulating DAEs. The backward difference formulation (BDF) and some Runge–Kutta type methods are also widely used for phase-field problems [1, 29]. However, these methods exhibit considerable numerical dissipation, which may over-smear the thermal fluctuation mentioned above. Investigation of non-dissipative schemes and controllable dissipation with unconditional stability in phase-field modeling is therefore required.

As a result, we introduce a second-order accurate, unconditionally stable, single step single solve scheme. The proposed algorithm is capable of controlling numerical dissipation separately on the prime variables and their time derivatives. In addition, we evaluate the algebraic variables at time level \(t_{n+1}\), which stabilizes the numerical solution when using the midpoint-type non-dissipative methods included in GSSSS. This extended approach demonstrates a possible avenue for discovering robust time integration schemes. In “The framework of GSSSS family of algorithms for phase-field equations”, the framework of the GSSSS family of algorithms is defined for index-1 DAEs. In “Stability and accuracy of GSSSS family of algorithm”, we briefly introduce the stability and accuracy features of GSSSS. In “Numerical examples”, two numerical examples are shown to verify the analytical algorithmic properties and the extension to non-linear problems. Finally, in “Conclusions”, we give the conclusions of this study.

The Framework of GSSSS Family of Algorithms for Phase-Field Equations

Before expressing the problem with the proposed method, the notations and corresponding definitions used in the equations and derivations are listed in Table 1 to help readers follow the article.
Table 1

Notations used for solving DAEs with the GSSSS time integration method

\(\varOmega\), \(\varGamma\): domain and boundary of the phase-field problem

\(\partial\): partial differential operator

\(\delta\): variational differential operator

\(\nabla\): gradient operator

\(\nabla ^2\): Laplacian operator, \(\nabla ^2= \nabla \cdot \nabla\)

\(\theta ^{h}\): Galerkin finite element approximation of \(\theta\)

\(\varDelta t\): timestep size in the fully discretized system of DAEs

\(\dot{\theta }\): time derivative of the variable \(\theta\)

\(\theta _{0}\), \(t_{0}\): variable \(\theta\) and time t evaluated at the initial timestep \(t=0\)

\(\theta_{n}\), \(t_{n}\): variable \(\theta\) and time t evaluated at the nth timestep \(t=t_{n}\)

\(t_{n+W_{1}}\): time t evaluated at the \(n+W_{1}\) level, \(t=(1-W_{1})t_{n}+W_{1}t_{n+1}\)

\(\tilde{\theta }\): variable \(\theta\) evaluated at time \(t=t_{n+W_{1}}\); for the displacement, velocity and algebraic variables it is defined in Eq. (7)

In a general phase-field problem, a free energy can be expressed in a simple form
$$\begin{aligned} \mathcal {E} = \int _{\varOmega } H(\phi ) + \dfrac{\epsilon }{2}\left| \nabla \phi \right| ^2 \, d\varOmega \end{aligned}$$
(1)
where \(\phi\) is the order parameter representing the phases, \(H(\phi )\) is a double-well function and \(\epsilon\) is the width of the phases. The presence of \(H(\phi )\) drives the order parameter \(\phi\) toward specific values such as − 1 and 1. The dynamics of a phase-field system can then be defined as:
$$\begin{aligned} \dfrac{\partial \phi }{\partial t} = D\dfrac{\delta \mathcal {E}}{\delta \phi } = D \nabla ^2 \left( \dfrac{\partial H(\phi )}{\partial \phi } + \epsilon \nabla ^2 \phi \right) \end{aligned}$$
(2)
where t is time and D is the diffusivity. Solving the weak form of Eq. (2) usually requires \(C^{1}\)-continuity. However, we can introduce an additional algebraic variable \(\psi\) so that Eq. (2) can be rewritten as:
$$\begin{aligned} \begin{aligned} \dfrac{\partial \phi }{\partial t}&= D \nabla ^2 \psi \\ \psi&= \dfrac{\partial H(\phi )}{\partial \phi } + \epsilon \nabla ^2 \phi \end{aligned} \end{aligned}$$
(3)
which only requires \(C^{0}\)-continuity in \(\phi\) and \(\psi\); \(\psi\) is often regarded as the chemical potential of the system. Equation (3) is an index-1 differential algebraic equation. We consider a weak form solution with Galerkin approximation to Eq. (3) in the semi-explicit form
$$\begin{aligned} \begin{aligned} \dot{\phi ^{h}}&= F(\phi ^{h},\psi ^{h},t) \\ 0&= Q(\phi ^{h},\psi ^{h},t) \end{aligned} \end{aligned}$$
(4)
with the initial condition
$$\begin{aligned} \phi ^{h}(t=0) = \phi ^{h}_{0}, \quad \psi ^{h}(t=0) = \psi ^{h}_{0} \end{aligned}$$
(5)
where \(\phi ^{h}\) and \(\psi ^{h}\) are the Galerkin approximations of the weak form of Eq. (3), F and Q are non-linear operators for the differential and algebraic equations, respectively, and \(\dot{\phi ^{h}}\) is the time derivative of the order parameter.
The GSSSS family of algorithms for ordinary differential equations and differential algebraic equations was recently formulated and patented by Tamma et al. [27, 28]. In addition, previous work on the application of the GSSSS family of algorithms to multibody dynamics problems is presented in [11], which resolves issues in second-order, non-linear, index-3 DAEs. Recently, the application of GSSSS to differential algebraic systems was investigated in the same manner [25]. Both studies demonstrated the significance of (1) evaluating the algebraic variables at a distinct time level and (2) evaluating the non-linear term inside the non-linear operator. The following predictor-corrector form for the numerical solution of Eq. (4) can then be expressed:
$$\begin{aligned} \begin{aligned} \tilde{\dot{\phi ^{h}}}&= F(\tilde{\phi ^{h}},\tilde{\psi ^{h}},t_{n+W_{1}}) \\ 0&= Q(\tilde{\phi ^{h}},\tilde{\psi ^{h}},t_{n+W_{1}}) \end{aligned} \end{aligned}$$
(6)
where the dynamic variables follow
$$\begin{aligned} \begin{aligned} \tilde{\dot{\phi ^{h}}}&= (1-\varLambda _{6}W_{1})\dot{\phi ^{h}}_{n}+\varLambda _{6}W_{1}\dot{\phi ^{h}}_{n+1}\\ \tilde{\phi ^{h}}&= \phi ^{h}_{n}+\varLambda _{4}W_{1}\varDelta t \dot{\phi ^{h}}_{n} + \varLambda _{5}W_{2}\varDelta t(\dot{\phi ^{h}}_{n+1}-\dot{\phi ^{h}}_{n})\\ \tilde{\psi ^{h}}&= (1-W_{1})\psi ^{h}_{n} + W_{1}\psi ^{h}_{n+1} \end{aligned} \end{aligned}$$
(7)
with the update equation
$$\begin{aligned} \phi ^{h}_{n+1} = \phi ^{h}_{n} + \lambda _{4} \varDelta t \dot{\phi ^{h}}_{n} + \lambda _{5} \varDelta t (\dot{\phi ^{h}}_{n+1}-\dot{\phi ^{h}}_{n}) \end{aligned}$$
(8)
where \(\varDelta t\) is the timestep size and the algorithmic parameters are defined through the weighted time residual procedure [25, 30] as:
$$\begin{aligned} \begin{aligned}&\varLambda _{4}W_{1} = \frac{1}{1+\rho }, \quad \lambda _{4} = 1\\&\varLambda _{5}W_{2} = \frac{1}{(1+\rho )(1+\rho _{s})}, \quad \lambda _{5} = \frac{1}{1+\rho _{s}}\\&\varLambda _{6}W_{1} = \frac{3+\rho +\rho _{s}-\rho \rho _{s}}{2(1+\rho )(1+\rho _{s})}, \quad W_{1} = \frac{1}{1+\rho }. \end{aligned} \end{aligned}$$
(9)
The parameters \(\rho\) and \(\rho _s\) in Eq. (9) are bounded by
$$\begin{aligned} 0 \le \rho _{s} \le \rho \le 1 \end{aligned}$$
(10)
which provide a family of implicit schemes with desirable characteristics:
  • Unconditional stability

  • Second-order time accuracy

  • Controllable numerical dissipation by \(\rho\) and \(\rho _s\)

  • Zero-order overshoot behaviour

These characteristics are proved and designed into the time integration as shown in [25, 26, 30]. In addition, the GSSSS schemes control the numerical dissipation on the primary variables and their time derivatives separately, where
  • Principal root \(\rho\) controls the numerical dissipation on \(\phi ^{h}\)

  • Spurious root \(\rho _s\) controls the numerical dissipation on \(\dot{\phi ^{h}}\)

This feature is highly suitable for phase-field problems involving fast variation near the interface. However, the GSSSS family of algorithms contains midpoint schemes which can produce unstable solutions [12]. This instability will be shown in “Numerical examples”.
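To make Eqs. (7)–(9) concrete, the sketch below computes the algorithmic parameters for a chosen pair \((\rho ,\rho _s)\) and advances one step of a scalar test equation \(\dot{\phi } = -\kappa \phi\) (no algebraic part). The test problem and the function names are illustrative assumptions, not part of the original formulation; at \((\rho ,\rho _s)=(1,1)\) the weights reduce to the midpoint values noted in the text.

```python
import numpy as np

def gsssss_parameters(rho, rho_s):
    """Algorithmic parameters of Eq. (9) for spectral radii 0 <= rho_s <= rho <= 1."""
    L4W1 = 1.0 / (1.0 + rho)
    L5W2 = 1.0 / ((1.0 + rho) * (1.0 + rho_s))
    L6W1 = (3.0 + rho + rho_s - rho * rho_s) / (2.0 * (1.0 + rho) * (1.0 + rho_s))
    W1 = 1.0 / (1.0 + rho)
    lam4, lam5 = 1.0, 1.0 / (1.0 + rho_s)
    return L4W1, L5W2, L6W1, W1, lam4, lam5

def gsssss_step_scalar(phi_n, phidot_n, kappa, dt, rho, rho_s):
    """One GSSSS step for the linear test ODE phi_dot = -kappa*phi.
    The weighted equation (6) with the predictors of Eq. (7) is linear in
    dphidot = phidot_{n+1} - phidot_n, so it is solved in closed form here."""
    L4W1, L5W2, L6W1, W1, lam4, lam5 = gsssss_parameters(rho, rho_s)
    # residual: (phidot_n + L6W1*dphidot) + kappa*(phi_n + L4W1*dt*phidot_n + L5W2*dt*dphidot) = 0
    dphidot = -(phidot_n + kappa * (phi_n + L4W1 * dt * phidot_n)) / (L6W1 + kappa * L5W2 * dt)
    phidot_np1 = phidot_n + dphidot
    phi_np1 = phi_n + lam4 * dt * phidot_n + lam5 * dt * dphidot  # update equation (8)
    return phi_np1, phidot_np1

# (rho, rho_s) = (1, 1) gives W1 = 0.5 and Lambda6*W1 = 0.5, i.e. midpoint weights.
print(gsssss_parameters(1.0, 1.0))
```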
For second-order systems, it is well known that midpoint-type methods should evaluate the algebraic variables at time level \(t_{n+1}\) instead of at the algorithmic time level [20] (e.g. \(n+0.5\) for Crank–Nicolson and other midpoint methods). It was found that, with this approach, the non-dissipative schemes yield stable numerical solutions [25]. The importance of time-level consistency and evaluation issues was also presented in [26]. This approach can be extended to first-order phase-field systems. Consider the non-linear operator Q in Eq. (6) with the same predictors and correctors (7) and (8), respectively. The algebraic equation Q is then evaluated at \(t_{n+1}\) instead of \(t_{n+W_{1}}\) as
$$\begin{aligned} Q(\phi ^{h}_{n+1},\psi ^{h}_{n+1},t_{n+1}) = 0 \end{aligned}$$
(11)
Thus Q can be solved implicitly at time-level \(n+1\). To demonstrate the effect of such time-level implementation, we show two options for solving a phase-field model in a DAE system as shown in Table 2.
Table 2

Options of evaluation time-level for DAEs

Option I:

\(\tilde{\dot{\phi ^{h}}} = F(\tilde{\phi ^{h}},\tilde{\psi ^{h}},t_{n+W_{1}})\), evaluated at \(t_{n+W_{1}}\)

\(0 = Q(\tilde{\phi ^{h}},\tilde{\psi ^{h}},t_{n+W_{1}})\), evaluated at \(t_{n+W_{1}}\)

Option II:

\(\tilde{\dot{\phi ^{h}}} = F(\tilde{\phi ^{h}},\tilde{\psi ^{h}},t_{n+W_{1}})\), evaluated at \(t_{n+W_{1}}\)

\(0 = Q(\phi ^{h}_{n+1},\psi ^{h}_{n+1},t_{n+1})\), evaluated at \(t_{n+1}\)

Stability and Accuracy of GSSSS Family of Algorithm

In order to analyze the numerical stability and accuracy of the current method with the two options, we transform the phase-field DAEs into the Kronecker canonical form to study the analytic properties of the amplification matrix [4]. Consider the DAEs shown in Eq. (6) for Option I, together with the update equation Eq. (8):
$$\begin{aligned} \begin{aligned} \tilde{\dot{\phi ^{h}}}&= F(\tilde{\phi ^{h}},\tilde{\psi ^{h}},t_{n+W_{1}}) \\ 0&= Q(\tilde{\phi ^{h}},\tilde{\psi ^{h}},t_{n+W_{1}}) \\ \phi ^{h}_{n+1}&= \phi ^{h}_{n} + \lambda _{4} \varDelta t \dot{\phi ^{h}}_{n} + \lambda _{5} \varDelta t (\dot{\phi ^{h}}_{n+1}-\dot{\phi ^{h}}_{n}). \end{aligned} \end{aligned}$$
(12)
Alternatively, for Option II, we have
$$\begin{aligned} \begin{aligned} \tilde{\dot{\phi ^{h}}}&= F(\tilde{\phi ^{h}},\tilde{\psi ^{h}},t_{n+W_{1}}) \\ 0&= Q(\phi ^{h}_{n+1},\psi ^{h}_{n+1},t_{n+1}) \\ \phi ^{h}_{n+1}&= \phi ^{h}_{n} + \lambda _{4} \varDelta t \dot{\phi ^{h}}_{n} + \lambda _{5} \varDelta t (\dot{\phi ^{h}}_{n+1}-\dot{\phi ^{h}}_{n}). \end{aligned} \end{aligned}$$
(13)
Solving the differential, algebraic and update equations together in Eq. (12) results in a system:
$$\begin{aligned} y_{n+1} = \mathbf {A} y_{n} + \mathbf {B} \end{aligned}$$
(14)
where y is the solution vector
$$\begin{aligned} y = \begin{bmatrix} \phi ^{h} ,\dot{\phi ^{h}} , \psi ^{h} \end{bmatrix}^{T} \end{aligned}$$
(15)
and \(\mathbf {A}\) and \(\mathbf {B}\) are the amplification matrix and force vector, respectively. The numerical stability can therefore be investigated by examining \(\mathbf {A}\). A simple temporal analysis is performed here to explain the mechanism of using Option I and Option II for phase-field equations. We first consider the one-dimensional system of Eq. (2) with parameters \(D=1\), \(\epsilon =1\), and \(H(\phi )=0\). Option I with this setting then results in the matrix form of the system:
$$\begin{aligned} \begin{aligned} \mathbf {C}\tilde{\dot{\phi }} + \mathbf {K} \tilde{\mu }&= 0 \\ \mathbf {K}\tilde{\phi } + \mathbf {C}\tilde{\mu }&= 0 \end{aligned} \end{aligned}$$
(16)
where
$$\begin{aligned} \begin{aligned} \mathbf {C}&= \int _{\varOmega } \mathbf {N}^{T} \mathbf {N} d\varOmega \\ \mathbf {K}&= \int _{\varOmega } \mathbf {N}_{,x}^{T} \mathbf {N}_{,x} d\varOmega . \end{aligned} \end{aligned}$$
(17)
Here \(\mathbf {N}\) denotes the finite element shape function. By considering the Fourier representation of each term in the finite element formulation with full integration, the matrices \(\mathbf {C}\) and \(\mathbf {K}\) can be represented by \(\hat{c}\) and \(\hat{k}\) as
$$\begin{aligned} \begin{aligned} \hat{c}&= \dfrac{\varDelta x}{3} (2+\cos (\theta )) \\ \hat{k}&= \dfrac{2}{\varDelta x} (1-\cos (\theta )) \end{aligned} \end{aligned}$$
(18)
where \(\theta =\eta \varDelta x\) is the normalized wave number ranging from 0 to \(\pi\), \(\eta\) is the wave number, and \(\varDelta x\) is the element size. Considering the case of zero wave number (namely infinite wavelength), the amplification matrix can then be expressed as:
$$\begin{aligned} \mathbf {A} = \begin{bmatrix} 1&\dfrac{\varDelta t(\rho - \rho _{s} + \rho \rho _{s} - 1)}{\rho + \rho _{s} - \rho \rho _{s} + 3}&0 \\ 0&-\dfrac{\rho + \rho _{s} + 3\rho \rho _{s} - 1}{\rho + \rho _{s} - \rho \rho _{s} + 3}&0 \\ 0&0&-\rho \end{bmatrix}. \end{aligned}$$
(19)
The spectral roots of \(\mathbf {A}\) are obtained from the characteristic equation
$$\begin{aligned} det(\mathbf {A} - \xi \mathbf {I}) = 0 \end{aligned}$$
(20)
with the spectral stability criteria [14]:
$$\begin{aligned} \begin{aligned}&|\xi _{k}| \le 1 \\&if \quad \xi _{k}=\xi _{l}, \ k \ne l, \quad then \quad |\xi _{k}| < 1 \end{aligned} \end{aligned}$$
(21)
where \(\xi _{k}\) are the spectral roots of the amplification matrix, evaluated from Eqs. (19) and (20). The spectral roots for Option I then become
$$\begin{aligned} \xi ^{Option I}_{k} = \begin{bmatrix} 1 \\ -\rho \\ \dfrac{(3\rho _{s} + 1)\rho + \rho _{s} - 1}{(\rho _{s} - 1)\rho - \rho _{s} - 3} \end{bmatrix} \end{aligned}$$
(22)
and it is easily found that \(\rho =\rho _{s}=1\) results in roots with multiplicity, \(\xi _{k} = [1,-1,-1]\), and is therefore unstable by the stability criterion (21). By choosing \(\rho<1, \rho _{s}<1\), repeated spectral roots of unit modulus are avoided and the numerical solution is stable. Also, from the amplification matrix (19), one can see that the algebraic variable \(\psi ^{h}\) is controlled only by the algorithmic parameter \(\rho\), so any perturbation in \(\psi ^{h}\) depends entirely on \(\rho\).
On the other hand, for Option II, in which the algebraic equation is discretized at time level \(t_{n+1}\), the spectral roots are evaluated as
$$\begin{aligned} \xi ^{Option II}_{k} = \begin{bmatrix} 0 \\ 1\\ \dfrac{(3\rho _{s} + 1)\rho + \rho _{s} - 1}{(\rho _{s} - 1)\rho - \rho _{s} - 3} \end{bmatrix} \end{aligned}$$
(23)
and the multiplicity of spectral roots is successfully suppressed. In this way, the solution is stable for any choice of \((\rho ,\rho _{s})\), while the algorithmic parameters can still be chosen to introduce numerical dissipation.

In the general situation of normalized wave numbers \(\theta\) from 0 to \(\pi\), the spectral roots from Option I always contain one root \(\xi ^{Option I}_{k}=-\rho\), which makes it hard to reduce numerical perturbations when \(\rho = 1\). This root is removed in the case of Option II, which always contains one root \(\xi ^{Option II}_{k}=0\) together with two conjugate roots, making the solution more stable since the stability criterion (21) is always satisfied. Adopting Option II should therefore be more robust in modeling phase-field equations.
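The closed-form roots of Eqs. (22) and (23) can be checked numerically. The short script below, an illustrative verification rather than part of the paper, evaluates both sets of roots at zero wave number and flags repeated roots of unit modulus, which is exactly the situation forbidden by criterion (21).

```python
import numpy as np

def roots_option1(rho, rho_s):
    """Closed-form spectral roots of Eq. (22) (zero wave number, Option I)."""
    xi3 = ((3.0 * rho_s + 1.0) * rho + rho_s - 1.0) / ((rho_s - 1.0) * rho - rho_s - 3.0)
    return np.array([1.0, -rho, xi3])

def roots_option2(rho, rho_s):
    """Closed-form spectral roots of Eq. (23) (zero wave number, Option II)."""
    xi3 = ((3.0 * rho_s + 1.0) * rho + rho_s - 1.0) / ((rho_s - 1.0) * rho - rho_s - 3.0)
    return np.array([0.0, 1.0, xi3])

def has_repeated_unit_root(xi, tol=1e-12):
    """Second part of criterion (21): repeated roots must have modulus < 1."""
    on_circle = xi[np.isclose(np.abs(xi), 1.0, atol=tol)]
    return len(on_circle) != len(np.unique(np.round(on_circle, 12)))

for rho, rho_s in [(1.0, 1.0), (0.6, 0.3), (0.3, 0.3)]:
    o1, o2 = roots_option1(rho, rho_s), roots_option2(rho, rho_s)
    print(f"(rho, rho_s) = ({rho}, {rho_s})")
    print("  Option I :", o1, "repeated unit root:", has_repeated_unit_root(o1))
    print("  Option II:", o2, "repeated unit root:", has_repeated_unit_root(o2))
```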

The purpose of this paper is to demonstrate the capability of the GSSSS method for phase-field problems with non-dissipative schemes or controllable dissipation. Full analyses of the numerical stability, dissipation, accuracy and other properties of GSSSS for DAE systems of different index can be found in [19, 25, 26]. The temporal analysis above, together with the numerical evidence from these references, is summarized in Table 3.
Table 3

Stability of the GSSSS scheme for the two options in index-1 DAEs

Option I, \((\rho ,\rho _{s}) = (1, 1)\): no dissipation, unstable

Option I, \((\rho< 1,\rho _{s} < 1)\): dissipation, stable

Option II, \((\rho ,\rho _{s}) = (1, 1)\): no dissipation, stable

Option II, \((\rho< 1,\rho _{s} < 1)\): dissipation, stable

It is shown in [25, 28] that the algorithmic parameter \(\rho\) controls the numerical dissipation on the prime variable \(\phi ^{h}\) and the algebraic variable \(\psi ^{h}\). On the other hand, \(\rho _{s}\) controls the numerical dissipation on \(\dot{\phi ^{h}}\). Larger \(\rho\) and \(\rho _s\) provide less dissipation, and \((\rho ,\rho _s)=(1,1)\) (equivalent to the Crank–Nicolson scheme) produces a completely non-dissipative system. It should also be noted that \(\rho =1\) produces a time integration scheme with time level \(t_{n+W_{1}}=t_{n+0.5}\), which is equivalent to the general midpoint schemes [25, 26]. From Table 3 and the temporal analysis, we can see that for non-dissipative schemes, evaluating the algebraic equations at the end of the timestep recovers a stable solution by avoiding repeated spectral roots of unit modulus. On the other hand, adding numerical dissipation always gives a stable solution. In addition, it is proved in [25] that under Option II, perturbations in the algebraic variables decay if \((\rho ,\rho _s)=(1,\rho _s)\) with \(\rho _s<1\). Related results will be shown in “Numerical examples”.

As noted in “The framework of GSSSS family of algorithms for phase-field equations”, all schemes in the GSSSS family of algorithms are second-order time accurate. A noteworthy property of the GSSSS family of algorithms is the time level of the time derivative \(\dot{\phi ^{h}}\): \(\dot{\phi ^{h}}\) preserves second-order time accuracy when it is evaluated at time \(t_{n-(\varLambda _{6}W_{1}-W_{1})}\) instead of at time \(t_{n}\). The complete proof of the time order of accuracy can be found in [19, 25, 26]. To conclude, the features of unconditional stability and second-order accuracy are practical for solving differential-algebraic phase-field equations with perturbed initial conditions or noise. Numerical evidence verifying these properties is provided in the next section.

Numerical Examples

Two numerical examples are used to demonstrate and explore the options of the proposed scheme. The first example is the Cahn–Hilliard equation described in [23], which expresses the separation of mixtures with a miscibility interface in the phase diagram and is probably the best-known phase-field model. The second example is a simple phase-field-crystal (PFC) model as in [29], which is an extension of the phase-field model; PFC has recently become popular in multi-scale modelling of microstructure evolution. Both examples are classic problems in phase-field simulation. In addition, they can be derived from a high-order free energy, which is common in this field. To discretise in space, we employ the Galerkin finite element method with \(C^{0}\) elements on a regular mesh. Then, we apply the GSSSS family of algorithms to demonstrate the time marching results.

The solution is produced under several different combinations of \(\rho\) and \(\rho _{s}\), resulting in algorithms denoted by \((\rho ,\rho _{s})\). For both cases, convergence is assessed against a numerical solution with very refined timesteps, which is treated as the exact solution \(z_{exact}\). The comparative numerical solution \(z_{numerical}\) is then produced with larger timesteps, and the error is defined by \(|(z_{numerical}-z_{exact})/z_{exact}|\). A small sketch of this error measure and the resulting observed order of accuracy is given below.
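A minimal sketch of the error measure and the observed order of accuracy it yields, assuming the reference solution z_exact comes from a run with a much finer timestep as described above:

```python
import numpy as np

def relative_error(z_numerical, z_exact):
    """Pointwise relative error |(z_num - z_exact)/z_exact| reduced to a scalar."""
    return np.max(np.abs((z_numerical - z_exact) / z_exact))

def observed_order(err_coarse, err_fine, dt_coarse, dt_fine):
    """Observed convergence rate from errors at two timestep sizes; a value
    close to 2 indicates the second-order accuracy expected of GSSSS."""
    return np.log(err_coarse / err_fine) / np.log(dt_coarse / dt_fine)
```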

Cahn–Hilliard Equations

The Cahn–Hilliard equation from mathematical physics describes the process of phase separation, usually in the form
$$\begin{aligned} \dfrac{\partial \phi }{\partial t} = D\dfrac{\delta \mathcal {E}}{\delta \phi } = D \nabla ^2 (\phi ^3 - \phi + \gamma \nabla ^2 \phi ) \end{aligned}$$
(24)
where \(\phi\) is the order parameter describing the phase, D is a diffusion coefficient, and \(\sqrt{\gamma }\) is the width of the diffusive interface. \(\mathcal {E}\) is the free energy of the system, whose variational formulation yields the standard phase-field equation. Our problem is posed on a space/time domain \(\varOmega \times (0,T)\), where \(\varOmega\) is a square two-dimensional domain and (0, T) is the time interval. Periodic boundary conditions are imposed on all boundaries. The algebraic variable \(\mu\) is often used to represent the chemical potential in the form
$$\begin{aligned} \mu = \phi ^3 - \phi + \gamma \nabla ^2 \phi \end{aligned}$$
(25)
and Eq. (25) can be regarded as an algebraic equation. Therefore, an index-1 DAE can then be expressed as
$$\begin{aligned} \dfrac{\partial \phi }{\partial t} = D \nabla ^2 \mu \end{aligned}$$
(26)
with \(\mu\) defined in Eq. (25) and the free energy \(\mathcal {E}\) for Eq. (24) is defined as
$$\begin{aligned} \mathcal {E} = \int _{\varOmega } \dfrac{1}{4}(\phi ^2-1)^2 + \dfrac{\gamma }{2} \left| \nabla \phi \right| ^2 \, d\varOmega . \end{aligned}$$
(27)
The first term in Eq. (27) is a double-well potential acting as chemical free energy. The second term is the surface free energy. During simulation, the segregation of an initially mixed binary fluid into domains can be seen. An important feature of the Cahn–Hilliard equation is that the mean phase \(\phi\) is conserved (also noted as mass conservation). That is,
$$\begin{aligned} \varPhi = \int _{\varOmega } \phi d\varOmega \end{aligned}$$
(28)
is a constant and so
$$\begin{aligned} \dfrac{d \varPhi }{dt} = 0 \end{aligned}$$
(29)
i.e., the rate of change of the total phase is zero.
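Because the Galerkin shape functions form a partition of unity, the discrete counterpart of Eq. (28) is \(\varPhi \approx \mathbf{1}^{T}\mathbf{C}\phi\) with the consistent mass matrix \(\mathbf{C}\) defined below in Eq. (32). The following check is a small illustrative sketch, not part of the original implementation.

```python
import numpy as np

def total_phase(C, phi):
    """Discrete counterpart of Eq. (28): integral of the interpolated phase field,
    computed with the consistent mass matrix C as ones^T C phi."""
    return np.ones(phi.size) @ (C @ phi)

def check_mass_conservation(C, phi_history, tol=1e-10):
    """Verify Eq. (29) over a sequence of timestep solutions (list of nodal vectors)."""
    Phi = np.array([total_phase(C, phi) for phi in phi_history])
    return np.all(np.abs(Phi - Phi[0]) < tol), Phi
```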
By the standard Galerkin finite element method with \(C^{0}\) element, Eq. (26) can be written as
$$\begin{aligned} \begin{aligned} \mathbf {C}\dot{\phi } + \mathbf {K}\mu&= 0 \\ \mathbf {A}\phi + \mathbf {C}\mu + \mathbf {F}(\phi )&= 0 \end{aligned} \end{aligned}$$
(30)
with the initial conditions
$$\begin{aligned} \phi (t=0) = \phi _0, \quad \mu (t=0) = \mu _{0} \end{aligned}$$
(31)
where \(\mathbf {C}\), \(\mathbf {K}\), \(\mathbf {A}\) and \(\mathbf {F}\) are the matrices and force vector of the Galerkin finite element method
$$\begin{aligned} \begin{aligned} \mathbf {C}&= \int _{\varOmega } \mathbf {N}^{T} \mathbf {N} d\varOmega \\ \mathbf {K}&= \int _{\varOmega } \mathbf {B}^{T} D \mathbf {B} d\varOmega \\ \mathbf {A}&= \int _{\varOmega } \mathbf {N}^{T} \gamma \mathbf {N} d\varOmega \\ \mathbf {F}&= \int _{\varOmega } \mathbf {N}^{T} (\phi ^3 - \phi ) d\varOmega \\ \end{aligned} \end{aligned}$$
(32)
with \(\mathbf {N}\) and \(\mathbf {B}\) the finite element shape functions and their spatial derivatives. Then, the GSSSS family with Option I results in
$$\begin{aligned} \begin{aligned} \mathbf {C}\tilde{\dot{\phi }} + \mathbf {K} \tilde{\mu }&= 0 \\ \mathbf {A}\tilde{\phi } + \mathbf {C}\tilde{\mu } + \mathbf {F}(\tilde{\phi })&= 0 \end{aligned} \end{aligned}$$
(33)
with
$$\begin{aligned} \phi _{n+1} = \phi _{n} + \lambda _{4} \varDelta t \dot{\phi }_{n} + \lambda _{5} \varDelta t (\dot{\phi }_{n+1}-\dot{\phi }_{n}). \end{aligned}$$
(34)
At each timestep, Eqs. (33) are solved iteratively by Newton–Raphson iterations given by
$$\begin{aligned} \begin{bmatrix} \varLambda _{6}W_{1} \mathbf {C}&W_{1} \mathbf {K} \\ \varLambda _{5}W_{2}\varDelta t (\mathbf {A}+ \mathbf {F}_{\phi })&W_{1} \mathbf {C} \end{bmatrix} \begin{Bmatrix} \varDelta \phi ^{k+1} \\ \varDelta \mu ^{k+1} \end{Bmatrix} = \begin{bmatrix} \mathbf {R}_{1}^{k} \\ \mathbf {R}_{2}^{k} \end{bmatrix} \end{aligned}$$
(35)
where \(\mathbf {R}_{1}^{k}\) and \(\mathbf {R}_{2}^{k}\) are the residuals in the k-step iteration given by the left hand sides of Eq. (33). The solution is iterated until the convergence criterion
$$\begin{aligned} \Vert \mathbf {R}^{k+1} \Vert < tol \end{aligned}$$
(36)
is satisfied. Equations (33) and (35) with Option II, on the other hand, become
$$\begin{aligned} \begin{aligned} \mathbf {C}\tilde{\dot{\phi }} + \mathbf {K} \tilde{\mu }&= 0 \\ \mathbf {A}\phi _{n+1} + \mathbf {C}\mu _{n+1} + \mathbf {F}(\phi _{n+1})&= 0 \end{aligned} \end{aligned}$$
(37)
and
$$\begin{aligned} \begin{bmatrix} \varLambda _{6}W_{1} \mathbf {C}&W_{1} \mathbf {K} \\ \lambda _{5}\varDelta t (\mathbf {A}+ \mathbf {F}_{,\phi })&\mathbf {C} \end{bmatrix} \begin{Bmatrix} \varDelta \phi ^{k+1} \\ \varDelta \mu ^{k+1} \end{Bmatrix} = \begin{bmatrix} \mathbf {R}_{1}^{k} \\ \mathbf {R}_{2}^{k} \end{bmatrix} \end{aligned}$$
(38)
with the same convergence criterion. The diffusion coefficient is chosen as \(D=1\), and the interface width \(\sqrt{\gamma }\) is set by \(\gamma = \sqrt{2}\). The initial condition is a random perturbation between \((-0.05,0.05)\). All variables and parameters are dimensionless. The spatial domain is discretized with \(64 \times 64\) quadrilateral finite elements with element size \(1 \times 1\). The time domain is discretized with timestep size \(\varDelta t = 2\). The tolerance is set to \(10^{-7}\).
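A sketch of one GSSSS timestep for the semi-discrete Cahn–Hilliard system (30) with Option II is given below, following Eqs. (7), (8), (37) and (38). The assembled matrices C, K, A and the callables Fvec(phi) and dFdphi(phi) (the nonlinear force of Eq. (32) and its Jacobian) are assumed to be supplied by the spatial discretization; the Newton increments are taken in the rate and algebraic variables, consistent with the block matrix of Eq. (38), and the sign convention of the Newton solve is the usual one and is an assumption here.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def ch_gsssss_step_option2(C, K, A, Fvec, dFdphi, phi_n, phidot_n, mu_n,
                           dt, rho, rho_s, tol=1e-7, max_iter=20):
    """One GSSSS step for the Cahn-Hilliard DAEs (30) with Option II, Eqs. (37)-(38)."""
    # algorithmic parameters, Eq. (9)
    L4W1 = 1.0 / (1.0 + rho)
    L5W2 = 1.0 / ((1.0 + rho) * (1.0 + rho_s))
    L6W1 = (3.0 + rho + rho_s - rho * rho_s) / (2.0 * (1.0 + rho) * (1.0 + rho_s))
    W1 = 1.0 / (1.0 + rho)
    lam4, lam5 = 1.0, 1.0 / (1.0 + rho_s)

    phidot_np1 = phidot_n.copy()   # initial guess for the new rate
    mu_np1 = mu_n.copy()           # initial guess for the algebraic variable
    n = phi_n.size
    for _ in range(max_iter):
        dphidot = phidot_np1 - phidot_n
        # predictors of Eq. (7) and update of Eq. (8)
        phidot_t = phidot_n + L6W1 * dphidot
        phi_t = phi_n + L4W1 * dt * phidot_n + L5W2 * dt * dphidot
        mu_t = (1.0 - W1) * mu_n + W1 * mu_np1
        phi_np1 = phi_n + lam4 * dt * phidot_n + lam5 * dt * dphidot
        # residuals of Eq. (37): differential part at t_{n+W1}, algebraic part at t_{n+1}
        R1 = C @ phidot_t + K @ mu_t
        R2 = A @ phi_np1 + C @ mu_np1 + Fvec(phi_np1)
        if np.linalg.norm(np.concatenate([R1, R2])) < tol:   # criterion (36)
            break
        # Newton matrix of Eq. (38); unknown increments in (phidot_{n+1}, mu_{n+1})
        J = sp.bmat([[L6W1 * C, W1 * K],
                     [lam5 * dt * (A + dFdphi(phi_np1)), C]], format="csc")
        d = spla.spsolve(J, -np.concatenate([R1, R2]))
        phidot_np1 = phidot_np1 + d[:n]
        mu_np1 = mu_np1 + d[n:]
    phi_np1 = phi_n + lam4 * dt * phidot_n + lam5 * dt * (phidot_np1 - phidot_n)
    return phi_np1, phidot_np1, mu_np1
```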
Fig. 1

Snapshots of the phase field from random initial data, demonstrating its evolution through the phase separation, run using algorithmic parameters \((\rho ,\rho _s)=(0.3,0.3)\) with Option II. At the beginning of the simulation, a–c, the phase separates rapidly toward the values of − 1 and 1 for each phase. Then, the two phases start to segregate towards the topological equilibrium as shown in d–f. Finally, the simulation slows down in g–i as it approaches the minimum of the free energy \(\mathcal {E}\)

Snapshots of the simulation are shown in Fig. 1a–i. The dynamics of the Cahn–Hilliard equation are driven by the free energy Eq. (27). The first part of the dynamics is the phase separation driven by the minimization of the chemical free energy, which occurs on a very short time scale. After separation, the phases start coarsening until they achieve the equilibrium topology as shown in Fig. 1i.
Fig. 2

Total phase change and phase velocity in the simulation with respect to specific \((\rho ,\rho _{s})\). It can be shown that mass conservation is exactly satisfied

From Fig. 2a, b, it can be seen that mass conservation, Eqs. (28) and (29), is satisfied within tolerance.
Fig. 3

Evolution of the mean chemical potential \(\mu\). \((\rho ,\rho _{s})=(1,1)\) shows oscillation in the algebraic variables under Option I, although conservation is satisfied. Numerical dissipation (reducing \(\rho\)) and Option II both eliminate the oscillation. Option II is preferable to simply introducing dissipation in Option I, since with Option II the oscillation disappears completely at the beginning of the simulation, whereas reducing \(\rho\) in Option I requires additional time to damp the influence of the perturbation

It is known that the Cahn–Hilliard equation is unstable under random perturbation within the spinodal region. The mean chemical potential \(\bar{\mu }\) in Fig. 3 shows oscillatory behaviour for the non-dissipative scheme \((\rho ,\rho _s)=(1,1)\), which is equivalent to the Crank–Nicolson method. It should be noted that \(\rho\) controls the numerical dissipation on the algebraic variable \(\mu\) in Option I; therefore, directly reducing the value of \(\rho\) dramatically reduces the numerical oscillation and effectively eliminates the non-physical behaviour. On the other hand, the oscillation of the algebraic variables in Option II is reduced for all choices of \((\rho ,\rho _s)\).
Fig. 4

Cross-section of each variable in the interface area. The phase field is between − 1 and 1 for all cases of time integration scheme. Velocity and chemical potential, on the other hand, demonstrate unstable solutions for the case of \((\rho ,\rho _s)=(1,1)\) in Option I

We can also observe from Fig. 4a–c that although (1, 1) can produce a stable solution for phase field \(\phi\), the velocity and algebraic variables show very unstable solutions and therefore, dissipation is required. On the other hand, Option II can also provide stable solutions.
Fig. 5

Energy and phase evolution of Cahn–Hilliard equations, demonstrating monotonically decreasing behaviour

The free energy evolution of the phase separation/segregation can be seen in Fig. 5. One can easily observe that the free energy decays until it reaches its minimum. At the beginning of the simulation, the free energy decay and the onset of phase separation occur very quickly. After that, it takes most of the remaining simulation time for the phases to merge and approach the equilibrium phase topology.
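The free energy decay reported here can also be monitored discretely. In the sketch below the chemical term of Eq. (27) is integrated by interpolating its integrand at the nodes and using the consistent mass matrix, while the surface term uses \(\mathbf{K}\) from Eq. (32) with \(D=1\); this quadrature choice is an assumption made for brevity, not taken from the paper.

```python
import numpy as np

def cahn_hilliard_energy(C, K, phi, gamma):
    """Approximate discrete free energy of Eq. (27).
    Chemical part: nodal interpolation of (phi^2 - 1)^2 / 4 integrated with the
    consistent mass matrix C; surface part: (gamma/2) * phi^T K phi with D = 1."""
    chemical = np.ones(phi.size) @ (C @ ((phi**2 - 1.0)**2 / 4.0))
    surface = 0.5 * gamma * (phi @ (K @ phi))
    return chemical + surface
```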
Fig. 6

Convergence plot for the displacement \(\phi\), velocity \(\dot{\phi }\) and chemical potential \(\mu\) under different choices of \((\rho ,\rho _s)\) (dashed line: (0.6, 0.3) with Option II; solid line: (1, 1) with Option I). To properly show the accuracy of \(\dot{\phi }\), Option II was used to obtain \(\dot{\phi }_{exact}\) and \(\dot{\phi }_{numerical}\). All variables show second-order accuracy for the two smallest timesteps

Finally, a convergence plot for the Cahn–Hilliard equation is given in Fig. 6. The second-order accuracy of the GSSSS scheme can be seen for different choices of \((\rho ,\rho _s)\).

Phase-Field-Crystal Equations

The second example is a simple phase-field-crystal method, which describes crystallization under undercooling. The problem is posed over the spatial domain \(\varOmega\) and time domain (0, T) and given by
$$\begin{aligned} \dfrac{\partial \phi }{\partial t} = \dfrac{\delta \mathcal {E}}{\delta \phi } = \nabla ^2 [(1+\nabla ^2)^{2}\phi + \phi ^3 - \epsilon \phi ] \end{aligned}$$
(39)
where \(\phi\) represents the approximate atomistic density field, and \(\epsilon\) represents a critical transition variable associated with the degree of undercooling. \(\mathcal {E}\) represents the free energy of the model. Like the Cahn–Hilliard equation, Eq. (39) can be written as a DAE system that consists of three coupled equations
$$\begin{aligned} \begin{aligned} \dfrac{\partial \phi }{\partial t}&= \nabla ^2 \sigma \\ \sigma&= (1+\nabla ^2)\theta + \phi ^3 - \epsilon \phi \\ \theta&= (1+\nabla ^2)\phi \end{aligned} \end{aligned}$$
(40)
with the free energy \(\mathcal {E}\) defined by
$$\begin{aligned} \mathcal {E} = \int _{\varOmega } \dfrac{1}{4}\phi ^4 + \dfrac{1}{2} \epsilon \phi ^2 + \dfrac{1}{2}(\phi ^2-2|\nabla \phi |^2 + (\theta -\phi )^2) \, d\varOmega \end{aligned}$$
(41)
where \(\sigma\) and \(\theta\) are algebraic variables introduced to decouple high-order terms in Eq. (39). Similarly, the PFC model has mass conservation such that
$$\begin{aligned} \varPhi = \int _{\varOmega } \phi d\varOmega \end{aligned}$$
(42)
is a constant. Thus,
$$\begin{aligned} \dfrac{d \varPhi }{dt} = 0. \end{aligned}$$
(43)
Then, the finite element method for the weak form solution leads to
$$\begin{aligned} \begin{aligned} \mathbf {C}\dot{\phi } + \mathbf {K} \sigma&= 0 \\ \mathbf {C}\sigma - \mathbf {C}\theta + \mathbf {K}\theta + \mathbf {F}(\phi )&= 0 \\ \mathbf {C}\theta - \mathbf {C}\phi + \mathbf {K}\phi&= 0 \end{aligned} \end{aligned}$$
(44)
along with the initial condition
$$\begin{aligned} \phi (t=0) = \phi _{0}, \quad \sigma (t=0) = \sigma _{0}, \quad \theta (t=0) = \theta _{0} \end{aligned}$$
(45)
where, similarly, the matrices and force vector are
$$\begin{aligned} \begin{aligned} \mathbf {C}&= \int _{\varOmega } \mathbf {N}^{T} \mathbf {N} d\varOmega \\ \mathbf {K}&= \int _{\varOmega } \mathbf {B}^{T} \mathbf {B} d\varOmega \\ \mathbf {F}&= \int _{\varOmega } \mathbf {N}^{T} (\phi ^3 - \epsilon \phi ) d\varOmega . \end{aligned} \end{aligned}$$
(46)
Then, discretizing in time according to the GSSSS scheme with Option I gives
$$\begin{aligned} \begin{aligned} \mathbf {C}\tilde{\dot{\phi }} + \mathbf {K}\tilde{\sigma }&= 0 \\ \mathbf {C}\tilde{\sigma } - \mathbf {C}\tilde{\theta } + \mathbf {K}\tilde{\theta } + \tilde{\mathbf {F}}(\phi )&= 0 \\ \mathbf {C}\tilde{\theta } - \mathbf {C}\tilde{\phi } + \mathbf {K}\tilde{\phi }&= 0 \end{aligned} \end{aligned}$$
(47)
with
$$\begin{aligned} \phi _{n+1} = \phi _{n} + \lambda _{4} \varDelta t \dot{\phi }_{n} + \lambda _{5} \varDelta t (\dot{\phi }_{n+1}-\dot{\phi }_{n}). \end{aligned}$$
(48)
At each timestep, Eqs. (47) are solved iteratively by Newton–Raphson iterations given by
$$\begin{aligned} \begin{bmatrix} \varLambda _{6}W_{1} \mathbf {C}&W_{1} \mathbf {K}&\mathbf {0}\\ \varLambda _{5}W_{2}\varDelta t \mathbf {F}_{,\phi }&W_{1} \mathbf {C}&W_{1} (\mathbf {K}-\mathbf {C}) \\ \varLambda _{5}W_{2}\varDelta t (\mathbf {K}-\mathbf {C})&\mathbf {0}&W_{1} \mathbf {C} \\ \end{bmatrix} \begin{Bmatrix} \varDelta \phi ^{k+1} \\ \varDelta \sigma ^{k+1} \\ \varDelta \theta ^{k+1} \end{Bmatrix} = \begin{bmatrix} \mathbf {R}_{1}^{k} \\ \mathbf {R}_{2}^{k} \\ \mathbf {R}_{3}^{k} \end{bmatrix} \end{aligned}$$
(49)
where \(\mathbf {R}_{1}^{k}\), \(\mathbf {R}_{2}^{k}\) and \(\mathbf {R}_{3}^{k}\) are the residuals in the k-step iteration given by the left hand sides of Eq. (47). The solution is iterated until the convergence criterion (36) is satisfied. Similarly, Eqs. (47) and (49) with Option II become
$$\begin{aligned} \begin{aligned} \mathbf {C}\tilde{\dot{\phi }} + \mathbf {K}\tilde{\sigma }&= 0 \\ \mathbf {C}\sigma - \mathbf {C}\theta _{n+1} + \mathbf {K}\theta _{n+1} + \mathbf {F}(\phi _{n+1})&= 0 \\ \mathbf {C}\theta _{n+1} - \mathbf {C}\phi _{n+1} + \mathbf {K}\phi _{n+1}&= 0 \end{aligned} \end{aligned}$$
(50)
and
$$\begin{aligned} \begin{bmatrix} \varLambda _{6}W_{1} \mathbf {C}&W_{1} \mathbf {K}&\mathbf {0}\\ \lambda _{5}\varDelta t \mathbf {F}_{\phi }&\mathbf {C}&\mathbf {K}-\mathbf {C} \\ \lambda _{5}\varDelta t (\mathbf {K}-\mathbf {C})&\mathbf {0}&\mathbf {C} \\ \end{bmatrix} \begin{Bmatrix} \varDelta \phi ^{k+1} \\ \varDelta \sigma ^{k+1} \\ \varDelta \theta ^{k+1} \end{Bmatrix} = \begin{bmatrix} \mathbf {R}_{1}^{k} \\ \mathbf {R}_{2}^{k} \\ \mathbf {R}_{3}^{k} \end{bmatrix} \end{aligned}$$
(51)
with the same convergence criterion. All simulation variables and parameters are dimensionless. Solutions were run for \(\epsilon = 0.325\), and the initial phase \(\phi\) is assigned by the one-mode approximation corresponding to a triangular configuration described in [29]. The spatial domain is discretized with \(64 \times 64\) quadrilateral elements with element size \(1\times 1\). The time domain is discretized with timestep size \(\varDelta t = 2\). The tolerance is set to \(10^{-7}\).
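For the phase-field-crystal system, the Option II Newton matrix of Eq. (51) has a 3 × 3 block structure. The sketch below assembles it with SciPy sparse blocks; the matrices C and K and the Jacobian dFdphi of the nonlinear force are assumed to come from Eq. (46), and the block ordering of the unknown increments follows Eq. (51).

```python
import scipy.sparse as sp

def pfc_option2_newton_matrix(C, K, dFdphi, dt, rho, rho_s):
    """Block Newton matrix of Eq. (51) for the PFC DAEs (44) with Option II.
    The three blocks of unknown increments are ordered (phi, sigma, theta) as in Eq. (51)."""
    L6W1 = (3.0 + rho + rho_s - rho * rho_s) / (2.0 * (1.0 + rho) * (1.0 + rho_s))
    W1 = 1.0 / (1.0 + rho)
    lam5 = 1.0 / (1.0 + rho_s)
    return sp.bmat([[L6W1 * C,            W1 * K, None],
                    [lam5 * dt * dFdphi,  C,      K - C],
                    [lam5 * dt * (K - C), None,   C]], format="csc")
```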
Fig. 7

Simulation of the atomistic field from a random initial condition \(\bar{\phi } = \sqrt{\epsilon }/2\). The algorithmic parameters \((\rho,\rho_s)=(0.3,0.3)\) are used. It can be seen that it takes a short period for crystallites to merge, as shown in a–c. After that, it takes a long time for the dimensionless atoms to rearrange into a BCC crystalline structure

Fig. 8

Snapshots of the approximate dimensionless atomistic density field, demonstrating its evolution throughout the simulation, run using a \(64\times 64\) computational mesh and a timestep size \(\varDelta t = 2\) with algorithmic parameters \((\rho ,\rho _s)=(0.3,0.3)\) and Option II. The initial crystallite in the centre of the domain expands due to the degree of undercooling \(\epsilon\) and is distributed in a body-centred cubic (BCC) crystalline structure [29]

To see how perturbation influences the solution, a perturbed initial condition (randomly distributed with \(\bar{\phi }\in [-0.05,0.05]\)) is also used for the phase-field-crystal model, as shown in Fig. 7a–f. With such an initial condition, the numerical instability is shown more clearly in Fig. 9c, d.

Then, to observe how the BCC crystalline structure forms, a simulation is run with a crystalline initial condition, as shown in Fig. 8a–c. The BCC crystalline structure is well captured. PFC acts as an extension of the phase-field method in which density functional theory is introduced to describe dislocations and atomistic length-scale behaviour. It is more sensitive than the phase-field model due to the fast variation between approximate atoms; therefore, error perturbations make the simulation very unstable. It can be seen that choosing Option II with a dissipative scheme yields a stable solution. On the other hand, the traditional midpoint method (\(\rho = 1\)) often results in poor solutions for the algebraic variables, as shown in Fig. 9c, d.
Fig. 9

Variation of different variables in the simulation with respect to specific \((\rho ,\rho _{s})\). It can be seen that mass conservation is satisfied. Only \((\rho ,\rho _{s})=(1,1)\) shows oscillation in each variable, even though mass conservation is satisfied within tolerance. We note that for the PFC model, the perturbation can be eliminated by introducing numerical dissipation or by solving Option II instead of Option I. Further, Option II converges to a stable solution within the early timesteps, while the dissipative schemes with Option I take some time to reduce the perturbation

From Fig. 9a, b, mass conservation is verified for different choices of \((\rho ,\rho _s)\). On the other hand, the algebraic variables in Fig. 9c, d demonstrate oscillatory behaviour similar to that of the Cahn–Hilliard equation discussed in the previous section. Theoretically, the algebraic variable \(\bar{\sigma }\) converges to the stable value \(\sqrt{\epsilon }/2\); numerically, (1, 1) with Option I produces undamped oscillation around this stable solution. Hence, introducing numerical dissipation can reduce the impact of the perturbation. Option II, however, demonstrates an enhancement of the original approach that eliminates the oscillatory behaviour of the DAEs.
Fig. 10

Energy evolution of the phase-field-crystal model, demonstrating decay behaviour. The phase field shows the approximate atomistic field

Free energy decay can be seen in Fig. 10. One can notice that, compared with the case of the Cahn–Hilliard equations, a rough atomistic field appears within a very short time interval and then takes a long time to rearrange until it reaches the complete BCC structure.
Fig. 11

Convergence plot for the displacement \(\phi\), velocity \(\dot{\phi }\) and algebraic variables \(\theta\) and \(\sigma\) under different choices of \((\rho ,\rho _s)\) (dashed line: (0.6, 0.3) with Option II; solid line: (1, 1) with Option I). To properly show the accuracy of \(\dot{\phi }\), Option II was used to obtain \(\dot{\phi }_{exact}\) and \(\dot{\phi }_{numerical}\). All variables show second-order accuracy for the two smallest timesteps

Finally, a convergence plot for the PFC model can be found in Fig. 11. The second-order accuracy of the GSSSS scheme can be seen for different choices of \((\rho ,\rho _s)\). For both numerical examples, the PF and PFC models verify the numerical properties of stability and accuracy under the different Options. Option I acts as an initial and straightforward approach, and shows unstable solutions unless numerical dissipation is introduced; even when dissipation is added artificially to reduce the oscillation, it takes a while for the solution to converge. On the other hand, the new approach, Option II, shows stable and robust solutions and can deal with the instability arising in non-dissipative midpoint methods. The proposed method has been applied to phase-field modeling of a manufacturing process in [13].

Conclusions

In this research, we presented an unconditionally stable, generalized single step single solve (GSSSS) scheme to solve phase-field problems in differential algebraic form. GSSSS can control numerical dissipation as well as preserve second-order time accuracy for this class of problems. We also demonstrated an enhancement of the original approach (Option I) by evaluating the algebraic equation at time level \(t_{n+1}\) (Option II). This reduces the instability arising in general phase-field problems and rescues otherwise unstable schemes, such as Crank–Nicolson and other midpoint methods, from oscillatory behaviour in the algebraic equations. Thus, non-dissipative schemes such as Crank–Nicolson or the midpoint rule can be retained. Two numerical examples were provided as numerical evidence to verify that: (1) the algorithmic accuracy and stability properties hold; (2) numerical dissipation is important for controlling numerical oscillation; and (3) Option II (evaluating the algebraic equation at time level \(t_{n+1}\)) is a valid time integration approach that can overcome the instability arising in stiff DAEs, especially for phase-field problems with a high-order free energy. This research demonstrates a new avenue for non-dissipative schemes and provides an opportunity toward robust time integration schemes for phase-field modelling.

Notes

Acknowledgements

We would like to acknowledge the help regarding GSSSS time integration method from Dr. Shimada and Prof. Tamma at University of Minnesota, Twin Cities. We are grateful for the computational resources from National Center of High-performance Computing (NCHC). This work is supported by the Industrial Technology Research Institute (ITRI) at Hsinchu, Taiwan.

References

  1. C. Andersson, Phase-field simulation of dendritic solidification. Unpublished PhD thesis, Royal Institute of Technology, Stockholm (2002)
  2. V.E. Badalassi, H.D. Ceniceros, S. Banerjee, Computation of multiphase systems with phase field models. J. Comput. Phys. 190(2), 371–397 (2003)
  3. W.J. Boettinger, J.A. Warren, C. Beckermann, A. Karma, Phase-field simulation of solidification 1. Ann. Rev. Mater. Res. 32(1), 163–194 (2002)
  4. K.E. Brenan, S.L. Campbell, L.R. Petzold, Numerical Solution of Initial-Value Problems in Differential-Algebraic Equations, vol. 14 (SIAM, Philadelphia, 1996)
  5. L.-Q. Chen, Phase-field models for microstructure evolution. Ann. Rev. Mater. Res. 32(1), 113–140 (2002)
  6. G.J. Fix, in Free Boundary Problems: Theory and Applications, vol. II, ed. by A. Fasano, M. Primicerio (Pitman, Boston, 1983), p. 580
  7. H. Gomez, T.J.R. Hughes, Provably unconditionally stable, second-order time-accurate, mixed variational methods for phase-field models. J. Comput. Phys. 230(13), 5310–5327 (2011)
  8. H. Gomez, X. Nogueira, An unconditionally energy-stable method for the phase field crystal equation. Comput. Methods Appl. Mech. Eng. 249, 52–61 (2012)
  9. H. Gomez, A. Reali, G. Sangalli, Accurate, efficient, and (iso)geometrically flexible collocation methods for phase-field models. J. Comput. Phys. 262, 153–171 (2014)
  10. M.E. Gurtin, Generalized Ginzburg–Landau and Cahn–Hilliard equations based on a microforce balance. Phys. D Nonlinear Phenom. 92(3), 178–192 (1996)
  11. A.J. Hoitink, Application of the GSSSS family of algorithms to the natural index 3 differential-algebraic equations of multibody dynamics. PhD thesis, University of Minnesota (2011)
  12. J. Hua, P. Lin, C. Liu, Q. Wang, Energy law preserving \(c^0\) finite element schemes for phase field models in two-phase flow computations. J. Comput. Phys. 230(19), 7115–7131 (2011)
  13. T.-H. Huang, T.-H. Huang, Y.-S. Lin, C.-H. Chang, P.-Y. Chen, S.-W. Chang, C.-S. Chen, Phase-field modeling of microstructural evolution by freeze-casting. Adv. Eng. Mater. 20(3), 1700343 (2018)
  14. T.J.R. Hughes, The Finite Element Method: Linear Static and Dynamic Finite Element Analysis (Courier Corporation, Chelmsford, 2012)
  15. A. Karma, D.A. Kessler, H. Levine, Phase-field model of mode III dynamic fracture. Phys. Rev. Lett. 87(4), 045501 (2001)
  16. R. Kobayashi, A numerical approach to three-dimensional dendritic solidification. Exp. Math. 3(1), 59–81 (1994)
  17. J.S. Langer, Models of pattern formation in first-order phase transitions. Dir. Condens. Matter Phys. 1, 165–186 (1986)
  18. C. Liu, J. Shen, A phase field model for the mixture of two incompressible fluids and its approximation by a Fourier-spectral method. Phys. D Nonlinear Phenom. 179(3), 211–228 (2003)
  19. S.U.B. Masuri, M. Sellier, X. Zhou, K.K. Tamma, Design of order-preserving algorithms for transient first-order systems with controllable numerical dissipation. Int. J. Numer. Methods Eng. 88(13), 1411–1448 (2011)
  20. D. Negrut, R. Rampalli, G. Ottarsson, A. Sajdak, On the use of the HHT method in the context of index 3 differential algebraic equations of multibody dynamics, in ASME 2005 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference (American Society of Mechanical Engineers, 2005), pp. 207–218
  21. L.R. Petzold, A description of DASSL: a differential/algebraic system solver. Sci. Comput. 1, 65–68 (1982)
  22. S. Praetorius, A. Voigt, Development and analysis of a block-preconditioner for the phase-field crystal equation. SIAM J. Sci. Comput. 37(3), B425–B451 (2015)
  23. N. Provatas, K. Elder, Phase-Field Methods in Materials Science and Engineering (Wiley, Hoboken, 2011)
  24. N. Provatas, N. Goldenfeld, J. Dantzig, Adaptive mesh refinement computation of solidification microstructures using dynamic data structures. J. Comput. Phys. 148(1), 265–290 (1999)
  25. M. Shimada, Novel design and development of isochronous time integration architectures for ordinary differential equations and differential-algebraic equations: computational science and engineering applications. PhD thesis, University of Minnesota (2014)
  26. M. Shimada, A.J. Hoitink, K.K. Tamma, The fundamentals underlying the computations of acceleration for general dynamic applications: issues and noteworthy perspectives. CMES Comput. Model. Eng. Sci. 104(2), 133–158 (2015)
  27. M. Shimada, S.U.B. Masuri, K.K. Tamma, A novel design of an isochronous integration [iIntegration] framework for first/second order multidisciplinary transient systems. Int. J. Numer. Methods Eng. 102(3–4), 867–891 (2015)
  28. K.K. Tamma, M. Shimada, S.U.B. Masuri, X. Zhou, Computer-implemented method for performing simulation. US Patent App. 14/314,925, June 25 (2014)
  29. P. Vignal, L. Dalcin, D.L. Brown, N. Collier, V.M. Calo, An energy-stable convex splitting for the phase-field crystal equation. Comput. Struct. 158, 355–368 (2015)
  30. X. Zhou, K.K. Tamma, Design, analysis, and synthesis of generalized single step single solve and optimal algorithms for structural dynamics. Int. J. Numer. Methods Eng. 59(5), 597–668 (2004)

Copyright information

© Korean Multi-Scale Mechanics (KMSM) 2019

Authors and Affiliations

  1. University of California, San Diego, La Jolla, USA
  2. Department of Civil Engineering, National Taiwan University, Taipei, Taiwan
  3. Industrial Technology Research Institute, Hsinchu, Taiwan
