Abstract
The chapter reviews the basic analytic solutions of phase-field models at the mesoscopic scale. First, the phase-field equation is derived from the Clausius-Duhem relation applied to a simple phase-field functional. The concept of the variational derivative, related to the gradient contribution in the phase-field functional, is introduced. The traveling-wave solution, or solitonian wave solution, of the phase-field equation in one dimension is derived for two relevant potential functions. Finally, the relations between model parameters and physically defined material parameters are derived.
Keywords
- Phase field
- Analytics of moving boundary problems
- Partial differential equations
- Solitonian wave solutions
- Variational derivative
- Clausius-Duhem relation
- Double well potential
- Double obstacle potential
- Capillarity
- Gibbs-Thomson equation
1 The Problem of Propagating a Wave Front on a Numerical Grid
In this chapter, we will recapitulate the analytical background of traveling-wave solutions in 1D with two of the most commonly used separating potentials: the so-called "double-well" and "double-obstacle" potentials. This solution, also known as the antisymmetric soliton solution (see the previous Chap. 1), is the basic reason that phase fields work so well in a numerical setting: the solution is quite robust against perturbations, which are inevitable in numerical simulations. This resembles the physical fact that a real wave front is robust against perturbations such as wind or a bird. The front self-stabilizes while traveling over the numerical grid, just like a solitonian wave on the surface of water.
Let us start with an example. We define a wave front ϕ(x, t0) at time t0 in one dimension x:
We know the wave front from Fig. 1.4b: the solitonian wave, but now defined between 0 and 1, as is the common convention in phase field nowadays. In the following we adopt the convention that the left-hand side of the wave is ϕ = 1 and its right-hand side ϕ = 0. The front is centered around x = 0, and the width η is a measure of the distance between ϕ = 0.05 and ϕ = 0.95. This "cutoff" is necessary since the hyperbolic tangent converges to 0 or 1 only at infinity, and we need a finite measure of the interface width.
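Written out, a front with exactly these properties is the hyperbolic-tangent profile; the factor 3 in the argument is fixed by the 0.05/0.95 width convention (an assumption of this sketch):

```latex
\phi(x,t_0)=\frac12\left[1-\tanh\!\left(\frac{3x}{\eta}\right)\right],
\qquad
\phi(-\infty)=1,\quad\phi(+\infty)=0 .
```

Indeed, at x = ∓η/2 one finds ϕ = ½[1 ± tanh(3/2)] ≈ 0.95 and 0.05, so the 0.05-to-0.95 distance is (very nearly) η.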
We now want to propagate this wave with constant speed v0. One writes:
You may try to solve this equation numerically in 1D (as we do in the class), applying the profile (2.1) as a starting condition. However, this will simply not work for propagating the wave over a reasonable distance while maintaining its profile! Equation (2.2) is called a hyperbolic transport equation, and it is prone to severe numerical instabilities up to shock waves, which we do not want at all. The equation thus has to be regularized by an appropriate method. We will not go into the details of numerical regularization approaches in hydrodynamic wave mechanics, but we will modify the equation for better solvability. At the same time, we want to keep the exact solution of (2.2). The only things we can then do to the equation are: (i) add 0, or (ii) multiply by 1. We will do the first. We add:
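To see the instability concretely, here is a minimal sketch (not the course program): a forward-time, centered-space (FTCS) discretization of (2.2), applied to a tanh front with 0.05/0.95-width η, an assumption matching the convention above. FTCS is unconditionally unstable for pure advection, and the profile is destroyed within a few hundred steps.

```python
import numpy as np

# Hyperbolic transport equation (2.2), d(phi)/dt = -v0 d(phi)/dx,
# discretized forward-time / centered-space (FTCS) -- an unstable scheme.
eta, v0 = 1.0, 1.0
dx, dt = 0.1, 0.08                  # Courant number C = v0*dt/dx = 0.8
x = np.arange(0.0, 100.0, dx)
phi = 0.5 * (1.0 - np.tanh(3.0 * (x - 20.0) / eta))

for _ in range(800):
    dphidx = np.zeros_like(phi)
    dphidx[1:-1] = (phi[2:] - phi[:-2]) / (2.0 * dx)   # centered derivative
    phi[1:-1] -= v0 * dt * dphidx[1:-1]                # ends held fixed

# The exact solution is the same front shifted by v0*t and stays in [0, 1];
# the FTCS solution instead develops grid-scale oscillations that explode.
print(np.max(np.abs(phi)))
```

An upwind scheme would be stable but smears the front by numerical diffusion; either way, the bare transport equation is a poor starting point for a numerical method.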
Does this look strange? Let us do the derivation. We will prove that the expression (2.3) vanishes identically for the profile (2.1), which is self-similar in time for a propagating planar wave, a wave in 1D. Therefore, it adds 0 to the original Eq. (2.2). We add it:
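A sketch of the construction in the notation of the tanh profile (the constant D > 0 is a bookkeeping factor of this sketch; the lectures absorb the prefactors into their model parameters). For the tanh front one checks directly that

```latex
\frac{\partial\phi}{\partial x}=-\frac{6}{\eta}\,\phi(1-\phi),
\qquad
\frac{\partial^2\phi}{\partial x^2}=\frac{36}{\eta^2}\,\phi(1-\phi)(1-2\phi),
\qquad\Longrightarrow\qquad
\frac{\partial\phi}{\partial t}
=-v_0\frac{\partial\phi}{\partial x}
+D\left[\frac{\partial^2\phi}{\partial x^2}
-\frac{36}{\eta^2}\,\phi(1-\phi)(1-2\phi)\right].
```

The bracket vanishes identically on the profile, so we really only added 0; but the equation is now parabolic. Using the first identity, the advection term can moreover be replaced by the reaction term \(v_0\frac{6}{\eta}\phi(1-\phi)\), which gives the reaction-diffusion form of the phase-field equation.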
This is the famous "phase-field equation" we will be dealing with in the rest of the lectures.Footnote 1 Adding this strange 0, the expression (2.3), which vanishes for the profile (2.1), changes the type of the equation from hyperbolic to parabolic. It becomes a "diffusion equation," which is easy to solve in a discretized manner on a numerical grid. Let us go through this step by step.
2 Equation of Motion for the Phase Field
We start with the phase-field functional density, Eq. (1.3). A functional, by its mathematical definition, is a mapping of an arbitrary set of functions (functional densities), within a given definition domain, onto the real numbers. These functional densities, functions of space and time, are very difficult to compare directly. If we map the functions onto a scalar, the comparison becomes easy: the functions can then be ordered on a linear coordinate indicating whether they are "larger" or "smaller." For a human, sadly speaking, "money" is an often-used measure; it measures the value of all goods that can be bought. In metallurgy, we like to use the free energy of the system because it is a thermodynamic functional of the state variables temperature and pressure, two quantities that are easily controllable in lab experiments. Other functionals are useful under different conditions. In thermodynamics, we use the Legendre transformation to relate these functionals uniquely. Nevertheless, different functionals should be used for different applications.
To introduce “time,” we start from an entropy functional S. The second law of thermodynamics demands that entropy increases with time:
Here, the symbol δ denotes the “variational derivative” of a functional, i.e., of a function of functions. This will be elaborated below in some detail. For a rigorous derivation, see Kiselev, Shnir, and Tregubovich’s book [2]. For now, let us simply take the message that Eq. (2.5) can easily be fulfilled if
This relation is termed the “Clausius–Duhem relation” in continuum mechanics. It is not a “physical law,” but it is the simplest possible Ansatz to ensure the positivity of entropy production; it works well and is generally accepted. It is termed the “relaxation Ansatz” because it has only a first derivative in time and no accelerations. Since the entropy S enters the Gibbs free energy F with a “−” sign, we accept that:
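In formulas, with the relaxation constant τ of the lectures (this sketch assumes ϕ is the only evolving field):

```latex
\frac{dS}{dt}=\int_\Omega\frac{\delta S}{\delta\phi}\,\frac{\partial\phi}{\partial t}\,d^3x\;\ge\;0
\qquad\text{fulfilled by}\qquad
\frac{\partial\phi}{\partial t}\propto\frac{\delta S}{\delta\phi}\,;
\qquad
\tau\,\frac{\partial\phi}{\partial t}=-\frac{\delta F}{\delta\phi}\,.
```

The minus sign appears because a positive rate of entropy production corresponds to a decrease of the free energy F.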
\(F=\int_\Omega d^3x\,f\) is the total free energy within the 3D domain Ω with the free-energy density f. We have to work on the functional as an integral over the domain of interest since the free-energy density f of a phase-field model is a function of the gradient of ϕ, ∇ϕ, which, by construction, is not defined at a single point in space but needs at least two different points. Let us treat the three parts of the free energy in (1.3) separately:
We start with the easiest one: f2. We are not afraid of the phase-field function ϕ(x, t); we treat it as a normal variable at the point in space x and time t under consideration, and it should not depend on the values of the field at other locations regarding the function f2. This will come later. Therefore, we can treat the variational derivative as a normal partial derivative. We calculate:Footnote 2
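Writing the double-well part as \(f_2=\gamma\,\phi^2(1-\phi)^2\) (taking γ as the bare prefactor is an assumption of this sketch; the lectures may normalize it differently), the calculation gives:

```latex
\frac{\delta f_2}{\delta\phi}
=\frac{\partial f_2}{\partial\phi}
=\gamma\,\frac{\partial}{\partial\phi}\Bigl[\phi^2(1-\phi)^2\Bigr]
=2\gamma\,\phi(1-\phi)(1-2\phi)\,.
```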
As mentioned before, this part is only active in the interface. It changes its sign at \(\phi =\frac 12\), i.e., in the center of the interface. It is the first derivative of the so-called double-well potential, also called the Landau potential, ϕ2(1 − ϕ)2. Therefore, we call it the potential contribution.
In the third term, f3 (2.9), we have already separated the phase-field-dependent part \(h(\phi )=\frac {3\phi ^2-2\phi ^3}6\), the so-called coupling function, from the physical part, i.e., the Gibbs-energy-density difference between the phases as a function of temperature, composition, strain, and other parameters, Δg(T, c, 𝜖, …). Here, we will take this as a constant Δg = Δg0. Because this part also does not depend on non-local contributions, i.e., gradients of the phase field, the functional derivative can again be treated as a normal partial derivative:
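With the coupling function \(h(\phi)=\frac{3\phi^2-2\phi^3}{6}\), so that \(h'(\phi)=\phi(1-\phi)\), the calculation reads:

```latex
\frac{\delta f_3}{\delta\phi}
=\frac{\partial}{\partial\phi}\bigl[h(\phi)\,\Delta g_0\bigr]
=h'(\phi)\,\Delta g_0
=\phi(1-\phi)\,\Delta g_0\,,
```

which vanishes for ϕ = 0 and ϕ = 1, i.e., the driving force acts only inside the interface.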
As we can see, the special form of (2.9) has been chosen such that the free-energy difference acts only within the interface when 0 < ϕ < 1. Δg0 is called the “driving force,” since it makes a positive or negative contribution to the evolution of the phase field \(\dot \phi \), driving it to grow or to shrink.
The first part (2.7) is more involved because it contains a gradient in ϕ, (∇ϕ)2; in 1D, \((\frac {\partial \phi }{\partial x})^2\). We will perform the variational differentiation in 3D, so as not to confuse the dimensionality of the problem with the dimensionality of the variational derivative. The 1D case works identically. Applying the chain rule, we find:
The difficulty lies in finding a way to treat the variation δ(∇ϕ) of the gradient. It helps to realize that all derivatives (variational δϕ, partial ∂ϕ, or spatial ∇ϕ) are linear operators and can therefore be interchanged. What we will do is commute the operators δ and ∇ by a partial integration in space:
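Assuming the gradient contribution is written as \(f_1=\frac{\epsilon}{2}(\nabla\phi)^2\) (the placement of the factor ½ is a convention of this sketch), the commutation by partial integration reads:

```latex
\delta\!\int_\Omega\frac{\epsilon}{2}\,(\nabla\phi)^2\,d^3x
=\int_\Omega\epsilon\,\nabla\phi\cdot\nabla(\delta\phi)\,d^3x
=\oint_{\partial\Omega}\epsilon\,\delta\phi\,\nabla\phi\cdot d\vec{A}
-\int_\Omega\epsilon\,(\nabla^2\phi)\,\delta\phi\,d^3x\,,
```

so that, with the boundary term omitted, \(\frac{\delta f_1}{\delta\phi}=-\epsilon\,\nabla^2\phi\).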
The boundary integral is mostly omitted with the reasoning that the interface should not touch the boundary of the domain. This will not always hold in applications, so boundary conditions should be treated with care; the topic, however, will only be touched on in some numerical exercises. The most important aspect of the derivation is the change of sign in the gradient contribution (2.12) due to the partial integration. In the free-energy functional, both interface contributions (2.7) and (2.8) are positive penalties against forming interfaces; in the equation of motion they counteract each other with opposite signs: the Laplace or diffusion operator acts to smear out the profile, while the potential term acts to sharpen it, as we will see in detail. Collecting all terms defines the equation of motion of the phase field, the so-called phase-field Eq. (2.4), now with physical constants in Kobayashi's notation. It is derived from the Clausius-Duhem relation (2.6) with the proportionality constant τ, which is called the relaxation constant.
For completeness, a common formal replacement of the functional derivative for theories with gradient contributions (quantum mechanics in general) has the following form, known as the “Euler–Lagrange equation,” in which ∇ϕ is treated as a symbolic entity in the denominator of the first differential operator:
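In this shorthand, the rule reads:

```latex
\frac{\delta F}{\delta\phi}
=\frac{\partial f}{\partial\phi}
-\nabla\cdot\frac{\partial f}{\partial(\nabla\phi)}\,.
```

Applied to a gradient term \(\frac{\epsilon}{2}(\nabla\phi)^2\), the second operator reproduces \(-\epsilon\nabla^2\phi\), in agreement with the partial integration above.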
3 Traveling-Wave Solution for the Double-Well Potential
A "traveling-wave solution," as introduced in Chap. 1 (Fig. 1.4), has the special feature of self-similarity in time (for constant driving force Δg0 and velocity v0); i.e., we can define a coordinate ξ = x − v0t to describe this solution in its own frame, moving with constant velocity with respect to a resting frame. We derive this solution as the minimum solution of the functionals (1.3) or (1.4). Of course, the condition of self-similarity, \(\frac {\partial \phi }{\partial t}\big |_{\xi }=0\), and the minimum-energy condition are related, because a minimum solution requires the vanishing of the first variation, \(\frac {\partial \phi }{\partial t}\propto \frac {\delta F}{\delta \phi }=0\) in Eq. (2.15). Here, we will not try to derive the solution by applying some scheme to solve partial differential equations, but we will prove that the given solution from the literature does the job!Footnote 3 The solution has already been introduced in (2.1) for the initial time step t = 0. We change x → ξ = x − v0t to find the traveling solution:
To prove that this is a solution of (2.15), we have to compute the first and second derivatives in the space coordinate x:
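The two derivative identities can be checked symbolically. This sketch assumes the tanh form of the traveling profile with the 0.05/0.95-width convention for η used above:

```python
import sympy as sp

# Traveling-wave profile in the moving frame xi = x - v0*t.
xi, eta = sp.symbols('xi eta', positive=True)
phi = (1 - sp.tanh(3 * xi / eta)) / 2

# First derivative: d(phi)/dxi = -(6/eta) * phi * (1 - phi)
first = sp.simplify(sp.diff(phi, xi) + 6 / eta * phi * (1 - phi))

# Second derivative: d2(phi)/dxi2 = (36/eta**2) * phi * (1 - phi) * (1 - 2*phi)
second = sp.simplify(sp.diff(phi, xi, 2)
                     - 36 / eta**2 * phi * (1 - phi) * (1 - 2 * phi))

print(first, second)   # both residuals simplify to 0
```

Both derivatives are thus local functions of ϕ itself, which is the key property exploited in the rest of the derivation.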
We find:
This is the first important result, and all phase-field models agree: the minimum solution of ϕ for a phase-field functional (1.3) with Δg = 0, or with a self-similar solution and arbitrary but constant velocity v, demands an interface width
Inserting this relation back into the phase-field Eq. (2.15), only the driving force part remains, and we have:
We compare this with the Gibbs–Thomson equation for a moving planar interface, i.e., where the capillarity term is 0, and the interface mobility is Mϕ:
This relates the relaxation constant τ to the interface mobility Mϕ:
Finally, we have to relate 𝜖 and γ to the interface energy σ. The interface energy (in units \(\left [ \frac {J}{m^2}\right ]\)) must come out if we integrate the free energy density (in units \(\left [ \frac {J}{m^3}\right ]\)) in the normal direction through the interface:
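A sketch of the integral, assuming the gradient term is written as \(f_1=\frac{\epsilon}{2}(\nabla\phi)^2\) and using equipartition \(f_1=f_2\) along the equilibrium profile (both are conventions restated here, not new results):

```latex
\sigma=\int_{-\infty}^{\infty}\bigl(f_1+f_2\bigr)\,dx
=\int_{-\infty}^{\infty}\epsilon\left(\frac{\partial\phi}{\partial x}\right)^{\!2} dx
=\epsilon\,\frac{6}{\eta}\int_{0}^{1}\phi(1-\phi)\,d\phi
=\frac{\epsilon}{\eta}\,,
```

where the integration over x was substituted by an integration over ϕ, \(dx=d\phi/\frac{\partial\phi}{\partial x}\), together with the first-derivative relation of the tanh profile. In this convention the result is simply ε = ση.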
We have used the first derivative of ϕ with respect to x, \(\frac {\partial \phi }{\partial x}\) (2.17), twice, along with the relation between 𝜖, γ, and η (2.20), and we have substituted the integration over x by an integration over ϕ in (2.25). To summarize this derivation, we insert the relations between the model parameters in Kobayashi's notation, 𝜖, γ, and τ, and the physical parameters Mϕ, σ, and η into (2.15) to arrive at the final phase-field equation with a double-well potential:
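In physical parameters the resulting equation can be sketched as follows; the prefactors 36/η² and 6/η follow from the tanh profile with the 0.05/0.95 width convention, while the notational arrangement of Eq. (2.28) itself may differ:

```latex
\frac{\partial\phi}{\partial t}
= M_\phi\left\{\sigma\left[\nabla^2\phi-\frac{36}{\eta^2}\,\phi(1-\phi)(1-2\phi)\right]
+\frac{6}{\eta}\,\phi(1-\phi)\,\Delta g\right\}.
```

For a planar tanh front the bracket vanishes identically, and the remaining driving term yields v0 = MϕΔg, the Gibbs-Thomson limit for a flat interface.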
Going back to our original problem of propagating a wave front with constant speed v0 (2.2), we can now prove (2.3) just by inserting the second derivative (2.18). In fact, both terms cancel if the front has the correct contour. The first derivative (2.17) then motivates the ansatz for the driving force \(m(\phi )\propto \frac \partial {\partial x}\phi \propto \phi (1-\phi )\) (2.9). Now we see that the hyperbolic transport Eq. (2.2) is turned into a parabolic equation with a second derivative in the space coordinate x. This type of equation, also termed a diffusion equation, is well behaved and easily solved on a computer. You can try this out using the program with which you tried to solve the hyperbolic Eq. (2.2).
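The parabolic equation can indeed be integrated with the simplest explicit scheme. The sketch below uses the physical double-well form with the prefactor convention 36/η² for the potential and 6/η for the coupling (an assumption; adapt if your notation differs) and checks that the front moves at v = MϕΔg while keeping its tanh contour:

```python
import numpy as np

# Explicit Euler integration of the parabolic phase-field equation with a
# double-well potential, written in physical parameters (arbitrary units).
sigma, M, eta, dg = 1.0, 1.0, 6.0, 0.1   # expected front speed v = M*dg = 0.1
dx, dt, nsteps = 1.0, 0.1, 2000

x = np.arange(0.0, 200.0, dx)
phi = 0.5 * (1.0 - np.tanh(3.0 * (x - 50.0) / eta))

def front(p):
    """x-position of the phi = 0.5 level set, by linear interpolation."""
    i = int(np.argmax(p < 0.5))          # first grid point right of the front
    return x[i - 1] + dx * (p[i - 1] - 0.5) / (p[i - 1] - p[i])

x0 = front(phi)
for _ in range(nsteps):
    lap = np.zeros_like(phi)
    lap[1:-1] = (phi[2:] - 2.0 * phi[1:-1] + phi[:-2]) / dx**2
    well = (36.0 / eta**2) * phi * (1.0 - phi) * (1.0 - 2.0 * phi)
    drive = (6.0 / eta) * phi * (1.0 - phi) * dg
    phi[1:-1] += dt * M * (sigma * (lap[1:-1] - well[1:-1]) + drive[1:-1])

print(front(phi) - x0)   # close to v * t = 0.1 * 200 = 20: stable propagation
```

In contrast to the hyperbolic case, the front travels many interface widths without losing its shape: the competition between the Laplacian and the potential term self-stabilizes the profile.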
4 Interpretation of the Phase-Field Equation
Let us now have a closer look at the phase-field Eq. (2.28) to determine the effect of each term on the phase-field profile. The first term is the Laplacian, or diffusion operator. It smoothens any profile, ensuring a smooth phase-field transition between 0 and 1. For the hyperbolic-tangent profile defined above, it contributes negative increments where \(\phi > \frac {1}{2}\) and positive increments where \(\phi < \frac {1}{2}\), as indicated in Fig. 2.1a. The second term stems from the double-well potential. Its first derivative changes sign at \(\phi = \frac {1}{2}\) because the potential function is symmetric around \(\frac {1}{2}\). Now we see that the potential contribution has negative increments where the Laplacian has positive increments, and vice versa [Fig. 2.1b].
This competition between the two contributions guarantees a stable phase-field profile! If a numerical solution deviates from the correct profile, the contributions do not balance but push the solution back to the correct contour. Later, in Chap. 3, we will see that this balance is also violated for a curved interface in more than one dimension; this will be used to account for capillarity. Furthermore, Δg will not be a constant in real applications: it will be positive or negative, and it will vary from point to point in space and time. It will depend on the transport of temperature, solute, and momentum, as will be explained in later chapters.
5 Phase-Field Equation and Traveling-Wave Solution for the Double-Obstacle Potential
As a general comment, we should keep in mind that the potential function (2.8), the double-well potential, is not a unique choice. It was first introduced by Landau to explain the behavior of a ferromagnet at its critical point, called the Curie point in this system. Only at this critical point, which marks a second-order phase transformation, is it justified to truncate the potential describing the interface energies, the so-called Landau potential, to this simple form (for details, see [6]). In phase field, however, we are dealing with first-order phase transformations, such as solidification or solid-state phase transformations, which are far away from a critical point. We also stick to the notion of a mesoscopic phase-field method (Sect. 1.4), whereby phenomena inside the interface cannot, and need not, be interpreted physically. Therefore, we have some freedom to choose a potential function in a form different from (2.8).
For reasons that will be explained in Chap. 6, the multi-phase-field method, as implemented in OpenPhase [3], applies the so-called double-obstacle potential:
The "obstacle" is realized by the absolute-value signs, which flip the negative branches of the function ϕ(1 − ϕ) to positive values.Footnote 4 Figure 2.2 shows a comparison between the double-well and double-obstacle potentials.
As we can see, the graphs of the two potentials are similar. They are constructed such that: (i) the free energy goes to infinity, f → +∞, for ϕ → ±∞; (ii) they have two minima with f = 0 at ϕ = 0 and ϕ = 1; and (iii) the region of the so-called potential barrier between 0 and 1 has an area that is \(\frac 12\) the interface energy σ. Other potentials fulfilling the above criteria can be used, but they are little discussed in the phase-field literature. As long as we are speaking about a two-phase system, they should not show another minimum. More complex potentials, so-called "multi-well potentials," are used for multi-phase systems (e.g., [1, 4]).
The minimum solution of the free energy with the double-obstacle potential (2.29) is:
As before, we derive in the transition region 0 < ϕ < 1 (do this as an exercise!):
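In the convention that the interface occupies the finite region |ξ| < η/2 (so that η is now the full interface width), the sine profile and its derivatives in the transition region read; this is a sketch of the content of (2.30)-(2.32), with prefactors fixed by that width convention:

```latex
\phi(\xi)=
\begin{cases}
1, & \xi\le-\frac{\eta}{2},\\[2pt]
\frac12\left[1-\sin\!\left(\frac{\pi\xi}{\eta}\right)\right], & |\xi|<\frac{\eta}{2},\\[2pt]
0, & \xi\ge\frac{\eta}{2},
\end{cases}
\qquad
\frac{\partial\phi}{\partial\xi}=-\frac{\pi}{\eta}\sqrt{\phi(1-\phi)},
\qquad
\frac{\partial^2\phi}{\partial\xi^2}=-\frac{\pi^2}{\eta^2}\left(\phi-\frac12\right).
```

In contrast to the tanh profile, this solution reaches 0 and 1 exactly at finite distance; outside the transition region the obstacle clips ϕ to the pure phases.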
The final task is to find a proper coupling function mDO corresponding to the first derivative in ϕ, (2.31), of the phase-field profile (2.30) for the double-obstacle potential. We only give here the function and leave the proof as an exercise:
Repeating the analysis for the double-well potential, we find the relations between the model parameters 𝜖DO, γDO, and τDO and the physical parameters η, σ, and Mϕ:
Collecting all these pieces, we define the free-energy density in physical notation for the double-obstacle potential:
The normalization of the interface contribution, \(\frac 8{\pi ^2}\), is necessary to fulfill the condition that the integral of this term in the normal direction through the interface equals the interface energy σ [cf. (2.26)]. We also choose a notation in which σ is divided by a length, the interface width η, to emphasize that this contribution is an energy density like Δg. The term in the brackets is then a dimensionless contour function indicating the interface position. In the physical parametrization, the phase-field equation with the double-obstacle potential reads:
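A form consistent with the sine profile and the planar limit v0 = MϕΔg is the following; the placement of the prefactors is the common convention of multi-phase-field implementations such as OpenPhase and is stated here as an assumption:

```latex
\frac{\partial\phi}{\partial t}
= M_\phi\left\{\sigma\left[\nabla^2\phi+\frac{\pi^2}{\eta^2}\left(\phi-\frac12\right)\right]
+\frac{\pi}{\eta}\sqrt{\phi(1-\phi)}\,\Delta g\right\},
```

valid for 0 < ϕ < 1, with ϕ clipped to 0 and 1 outside the transition region (the "obstacle"). For the planar sine profile the bracket vanishes identically, and the driving term again yields v0 = MϕΔg.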
6 Gibbs–Thomson Limit of the Phase-Field Equation
Summarizing all contributions, we can return to the underlying physical equation of interest: the Gibbs-Thomson equation (1.5). We have already done this in 3D, relating the velocity of the interface \(\vec v\) to the rate of change of the phase field via the normal vector to the interface \(\vec n=\frac {\vec \nabla \phi }{|\vec \nabla \phi |}\):
The curvature of the interface κ, which is given by the strange expressions in Fig. 2.3, will be derived in Chap. 3.
7 Exercises
Exercise
Prove that the coupling function in (2.33) does the job, i.e., that the derivative of this function with respect to ϕ is proportional to the first derivative of ϕ in the case where a double-obstacle potential is used.
Exercise
Prove the relation (2.34) between the model parameters and the physical parameters for the double-obstacle potential.
Exercise
Check the relations between the model parameters as indicated in Fig. 2.3.
Further Reading
- Appendix A in the review "Phase-field models in materials science" [5], where another potential function, the "top hat" function, is introduced.
Notes
- 1.
The equation is given here without physical prefactors, which will be used to multiply the 0 later. Why? To make you curious for Chap. 3: because in 3D, the “0” gets a physical meaning!
- 2.
Note that the common convention in physics and mathematics literature about variational derivatives δ violates the otherwise accepted convention that a symbol \(\frac \delta \delta \) should be dimensionless. F is an extensive (absolute) quantity, while f is an intensive quantity (defined per volume). The variational derivative removes the volume integral and the volume increment d3x.
- 3.
A reference to the first scientist who derived this solution is difficult. It is not a particular solution for a phase field; it is very general.
- 4.
Note that these absolute signs are omitted in several publications for better readability, but they are definitely necessary for the model to work. In the numerical calculations, they are realized by a sharp cutoff against negative values of ϕ < 0 as well as against values larger than 1, ϕ > 1.
References
1. R. Folch, M. Plapp, Quantitative phase-field modeling of two-phase growth. Phys. Rev. E 72, 011602 (2005). https://doi.org/10.1103/PhysRevE.72.011602
2. V.G. Kiselev, Y.M. Shnir, A.Y. Tregubovich, Introduction to Quantum Field Theory (CRC Press, Boca Raton, 2000). https://doi.org/10.1201/b16984
3. OpenPhase. https://openphase.rub.de/
4. O. Shchyglo, U. Salman, A. Finel, Martensitic phase transformations in Ni–Ti-based shape memory alloys: the Landau theory. Acta Mater. 60(19), 6784–6792 (2012). https://doi.org/10.1016/j.actamat.2012.08.056
5. I. Steinbach, Phase-field models in materials science. Model. Simul. Mater. Sci. Eng. 17, 073001 (2009)
6. I. Steinbach, Phase-field model for microstructure evolution at the mesoscopic scale. Annu. Rev. Mater. Res. 43, 89–107 (2013). https://doi.org/10.1146/annurev-matsci-071312-121703
Rights and permissions
Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.
Copyright information

© 2023 The Author(s)

Steinbach, I., Salama, H. (2023). Analytics. In: Lectures on Phase Field. Springer, Cham. https://doi.org/10.1007/978-3-031-21171-3_2. Print ISBN: 978-3-031-21170-6; Online ISBN: 978-3-031-21171-3.