Keywords

1 What Is a Phase Field?

A phase field, in the context of these lectures, is a scalar field in space and time that indicates the local state of matter as characterized by its phase state. This may not help much for now: do not worry, we will proceed step by step, explaining all expressions. Firstly, the present section will explain what a phase field is—i.e., what do we understand by this expression? Secondly, in Sect. 1.2, we will investigate which types of materials problems can be examined using phase fields. In Sect. 1.3, the historical background of phase field will be detailed; perhaps you will come back to this part at the end of the lectures, at which time you should be better able to relate the different historical aspects of phase field to their current state of development. In the last section of this introductory chapter, Sect. 1.4, we will focus in particular on the scale of the materials problem: the mesoscopic scale. Let us begin!

The “phase” of matter—of a lump of atoms within a reference volume—denotes the state of crystalline order between the atoms. It is a central concept in physics and a central element of phase-field theory. The order of atoms—i.e., their position in a crystalline lattice: FCC, BCC, or other crystalline structures—defines the phase state. In addition, the amorphous, liquid, and gaseous states—i.e., the absence of crystalline order—are characterized as phases. If the phase state is a gas, we may be referring to an atmosphere outside of a solid-state sample or a pore inside the sample. Liquids are mainly considered as a melted phase in connection with a solid crystal. This can be water and ice, or molten iron in connection with already-solidified iron and slag. Further types of phase are characterized by magnetic or ferroic order, superconductivity, or plasma. The phase field, a field in space and time, indicates the state of a material point (in space and time): whether solid, liquid, gas, plasma, ferromagnetic, etc. It tells nothing about other properties of matter at this point (in space and time), like its temperature, pressure, or the composition of elements or molecules. Of course, this information is also needed. But it is not enough to characterize the material at this point (in space and time). We need to know its phase state, which is enclosed in the local value of the “phase field.”

In the physics literature, the phase state is characterized by a so-called “order parameter,” which is normalized between 0 and 1. We will use the letter ϕ to represent this parameter. The term “order” relates to “crystalline order,” which is different between different phases. For example, a crystalline solid will be stable at low temperatures, while its melt will be stable above the melting temperature. An important element of the concept of phases is that there is a discontinuity in their order; the order does not change gradually if, for example, the temperature changes: there is an abrupt change in the atomic order (at least for phase transformations of the first kind, which we will be considering here). This discontinuity is also reflected in discontinuities in the properties of the phases, such as their elasticity, viscosity, thermal or electrical conductivity. In other words: the phase state can be determined uniquely from the material properties of the piece of matter we are investigating.

Another important aspect is that the phase state need not be stable. We also consider metastable or unstable phases. This means that two materials with the same temperature, pressure, and composition may be in different phase states. In other words, to characterize the state of a material, not only pressure, temperature, and composition are needed, but also information about the atomic order: its phase state.

To make this concept more transparent to materials scientists, let us discuss a metallic alloy, say a binary aluminium–silicon alloy, regarding possible phases at different temperatures and compositions. Figure 1.1b shows the phase diagram of this alloy. This is called a “eutectic” phase diagram, as it has three stable phases in different regimes of alloy composition c and temperature T, as indicated in the figure: liquid, FCC aluminum, and silicon (diamond lattice). The far right-hand side of the diagram, which would be almost pure silicon, is not shown. These regimes in which a phase is stable are also termed “phase fields” in alloy thermodynamics. Don’t be confused. They are fields in temperature, composition, and pressure, not in space and time as we will use the term in the context of these lectures. But both usages of “phase field”, of course, are somewhat related.

Fig. 1.1

(a) Schematic of the Gibbs energy curves of a binary alloy in two different phase states, α and β, for a given temperature T0. The common tangent determines: (i) the phase compositions cα and cβ of an alloy with nominal composition within the two-phase region, and (ii) the minimum Gibbs energy of a two-phase mixture with this nominal composition. (b) Linearized Al–Si phase diagram, indicating the stable composition of cα and cβ for the given temperature T0

In-between these stable regions of individual phases, we have the so-called “two-phase” regions, where neither of the adjacent phases can be stable on its own. The reason for this can be read off the Gibbs energies of the system as a function of composition [see Fig. 1.1a]. For a given temperature and pressure, the Gibbs energy functions Gα and Gβ of the two phases, α and β, are displayed schematically. The minimum of the total Gibbs energy is determined by the so-called double-tangent construction, where the common tangent touches the curves at the phase compositions cα and cβ. With the fraction of phases fα = 1 − fβ, we define the total Gibbs energy Gtotal:

$$\displaystyle \begin{aligned} {} G_{\mathrm{total}} = f_\alpha G_\alpha + f_\beta G_\beta. \end{aligned} $$
(1.1)

The total Gibbs energy is minimal along the tangent line between cα and cβ, weighted with the fractions fα and fβ. Each individual phase with composition c (cα < c < cβ) will have a higher energy. Therefore, a material with a nominal composition c0 in the two-phase region must decompose into two phases so that it reaches a lower total Gibbs energy. We say that the composition region between cα and cβ is “forbidden” for both phases. There is an “energy barrier” in the Gibbs energy of the material that separates it into different phases. Other properties of the different phases—such as elasticity, thermal conductivity, and density—show a similar behavior: they are clearly different in the two phases.
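To make the double-tangent construction and Eq. (1.1) concrete, here is a small numerical sketch. The parabolic Gibbs curves and all numbers are hypothetical (chosen with equal minima, so that the common tangent is simply the horizontal line G = 0), not Al–Si data:

```python
# Hypothetical Gibbs energy curves of two phases alpha and beta at fixed T0.
# Equal minima are chosen so the common tangent is the horizontal line G = 0.
G_alpha = lambda c: (c - 0.1) ** 2      # minimum at c_alpha = 0.1
G_beta = lambda c: (c - 0.9) ** 2       # minimum at c_beta = 0.9

c_alpha, c_beta = 0.1, 0.9              # tangent (equilibrium) compositions
c0 = 0.5                                # nominal composition, two-phase region

# Lever rule: phase fractions of a mixture with overall composition c0
f_beta = (c0 - c_alpha) / (c_beta - c_alpha)
f_alpha = 1.0 - f_beta

# Eq. (1.1): total Gibbs energy of the two-phase mixture
G_total = f_alpha * G_alpha(c_alpha) + f_beta * G_beta(c_beta)

# The mixture lies below either single phase held at the nominal composition
print(G_total, G_alpha(c0), G_beta(c0))
```

The printed comparison shows why the single-phase states are “forbidden” between cα and cβ: the two-phase mixture on the tangent always has the lower energy.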

All of this is well known to most of our readers, but we repeat it here to perform the following thought experiment. The upper panel of Fig. 1.2 displays a cross section through a two-phase mixture between liquid (white phase) and FCC aluminum (black phase) close to equilibrium. A line scan of the local composition is performed, and the corresponding composition c(x) is displayed in the lower panel of this figure. This switches from cα to cβ and back again. It is now an easy exercise to normalize this composition to determine the phase fractions as a function of space, as also displayed along the line scan:

$$\displaystyle \begin{aligned} {} f_\alpha = \frac{c - c_\beta}{c_\alpha - c_\beta}, \qquad f_\beta = \frac{c - c_\alpha}{c_\beta - c_\alpha}; \qquad f_\beta = 1 - f_\alpha. \end{aligned} $$
(1.2)
Fig. 1.2

Scheme of a solid–liquid phase mixture (solid in black, liquid in white) close to the equilibrium of the compositions within the individual phases. The scan evaluates the local composition, which varies between the solid concentration cα and the liquid concentration cβ. On the left-hand axis, the value of the phase field ϕsolid is displayed, with α indexing the solid phase

The fractions fα and fβ are usually used to characterize a phase mixture as a whole; the fractions are not considered as fields in space and time. Therefore, we use a different notation: we use ϕα(x), the “phase field,” which varies in space between 0 and 1. Note that we will also consider interfaces between phases as “diffuse,” i.e., with a finite width. Therefore, the graph in Fig. 1.2 is not step-wise: it is “smeared out,” as will be discussed in detail in Sect. 1.3.1. The normalized function “phase field” is also called an “indicator function” because it indicates where a phase is present in space. In the same way as it is defined from the varying phase concentrations in (1.2), it could be defined from the elasticity or the density of the individual phases. We can also map the phase field back to the local composition, elasticity, and density if we know the local phase-field value and the properties of the individual phases.
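This back-and-forth mapping between the phase field and a phase-dependent property can be sketched in a few lines. The tanh profile and the compositions below are illustrative assumptions (not fitted to any alloy), chosen only to mimic the smeared-out line scan of Fig. 1.2:

```python
import numpy as np

c_alpha, c_beta = 0.02, 0.12          # hypothetical phase compositions
x = np.linspace(-5.0, 5.0, 201)       # line-scan coordinate
eta = 1.0                             # assumed diffuse-interface width

# A smooth "smeared-out" indicator function, as in Fig. 1.2
phi_alpha = 0.5 * (1.0 - np.tanh(3.0 * x / eta))

# Map the phase field to the local composition ...
c = phi_alpha * c_alpha + (1.0 - phi_alpha) * c_beta

# ... and recover the phase field from the composition via Eq. (1.2)
f_alpha = (c - c_beta) / (c_alpha - c_beta)
```

By construction, `f_alpha` reproduces `phi_alpha` exactly: the indicator function can be read off from any property that differs between the phases.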

The phase field is a field in space that indicates the position at which we find a special phase α, ϕα(x) = 1, or do not find this phase, ϕα(x) = 0. Then we let this evolve in time (!):

$$\displaystyle \begin{aligned} f_\alpha \rightarrow \phi_\alpha(x)\rightarrow \phi_\alpha(x,t). \end{aligned}$$

We can see that a multi-phase material can be characterized regarding its local phase state. We can use the standard concepts of alloy thermodynamics to determine the local phase fractions in a small reference volume containing, say, a thousand atoms, which is large enough to characterize the phase state reliably. Then, we will find either the value 1 or 0, or, if our reference volume is intersected by an interface, a value between 0 and 1. Interfaces may move, and phases can change.

The content of the following chapters involves establishing how to determine the evolution of phases in complex materials under various conditions.

2 What Is the Purpose of Phase Field?

Generally speaking, the phase-field method uses a set of partial differential equations to describe a materials problem with evolving microstructures. Let us first elaborate on what we mean by “evolving microstructures.” The microstructure of a multicrystalline material—and we will mainly speak about the microstructures of metallic materials—is the distribution and shape of different grains in that material. These individual grains have many attributes: their orientation with respect to a reference orientation; their composition, pressure, and temperature; along with important crystalline defects such as dislocations, stacking faults, and vacancies. If two neighboring crystalline grains are of the same phase but have different orientations, they form a “grain boundary.” Consequently, they are considered as different elements of the microstructure and will be denoted by different indicators, that is, by different phase fields. Two grains of different phases form a “phase boundary,” and a pore within a solid grain forms a “surface.”

We will describe all these different cases using phase fields. This means that a phase field marks a region of space in which the material can be (i) characterized uniquely by its phase state and orientation or (ii) identified as an interface (where the phase state is undetermined). Practically speaking, the phase field \(\phi _\alpha (\vec x)\), a field in three-dimensional (3D) space \(\vec x\), has, by convention, the value \(\phi _\alpha (\vec x)=1\) if the material point at position \(\vec x\) belongs to the phase α indexed by ϕα. For \(1>\phi _\alpha (\vec x)>0\), the point lies in an interface; for \(\phi _\alpha (\vec x)=0\), it belongs to a phase other than α. Other conventions can be found in the literature, but this is the prevailing one. If there are only two grains in our material, e.g., a small inclusion in a matrix, we will skip the subscript α and index the inclusion by ϕ(x) = 1 and the matrix by ϕ(x) = 0 (or vice versa). The phase boundary is marked by an intermediate value 0 < ϕ(x) < 1. Treating the field as continuous, or “diffuse,” as we do in phase-field theory, the boundary has to have a finite width η.

Such a construction is very helpful for characterizing a polycrystal, a multi-phase material, or any general static microstructure. We want, however, to allow the microstructure to change in time, ϕ(x) → ϕ(x, t). Now we have to distinguish two cases as follows. (i) The microstructure is just “deforming,” i.e., we find a mapping of space coordinates \(\vec x(t_1) \rightarrow \vec x(t_2)\). In continuum mechanics, this is called the mapping from the “reference frame” at time t1 to the “actual frame” at time t2. In the second case, (ii), the microstructure is “evolving”; it may be growing, swelling, or shrinking. It may have been born during the time interval t1 → t2 (“nucleated,” as we say in materials science), or it may have died, i.e., fully transformed to a different phase. Thus, it cannot just be referenced in two different frames. In general, one material point in an evolving structure transforms to a material point of a different kind (e.g., liquid to solid). No physical movement of matter has to be involved in this process. We will speak in these chapters about case (ii), evolving microstructures. We will in addition allow them to deform (see Chap. 7), but this is a side remark.

The example shown at the end of this chapter is a dendrite growing in two dimensions (2D). In this example, which we undertake as an exercise, only the interface between the phases (solid and liquid) “moves.” This means that a material point, which had been liquid in the starting condition, changes its phase state to solid. However, no material is moving, neglecting the effects of shrinkage during the phase transformation and motion of the solid and the melt. For small changes in the shape of the dendrite, we could also treat this problem using finite elements. Then, the elements would have to be swelling or shrinking in both phases. However, in the general case, if we start from full liquid and simulate to full solid, we will need so-called adaptive finite elements, which continuously adapt to the shape of the dendrite. This is not impossible: it has been demonstrated by several authors, particularly in 2D simulations, but also in 3D (e.g., by Provatas et al. [16] and Schmidt [17]). However, this is numerically expensive, particularly in 3D. The phase-field method offers an elegant and numerically efficient way to treat this problem of evolving, growing, or shrinking microstructures. Aside from using adaptive finite elements, alternative approaches include the cellular automata and level set methods (see Further Reading).

Having described a materials problem—e.g., solidification or coarsening of a microstructure during heat treatment—as a moving-boundary problem using a set of partial differential equations, we will also need external boundary conditions and initial conditions. Furthermore, we will need a good database for physical input data and constitutive models for all the properties of the bulk phases and boundaries. Last but not least, we will need good solvers and sound numerics. All of this will be touched on during these lectures. It should be noted that the scientist applying the method will have to be patient: a good numerical solution to a problem may take weeks or months….

3 History of the Phase-Field Method

3.1 Microscopic Phase Field

The phase-field method can be split into two branches with very different histories, interpretations, and intentions. The first branch is often referred to as “order-parameter theory,” the “microscopic phase-field method,” or the “time-dependent Ginzburg–Landau theory.” This branch rests on thermodynamic grounds, with a special focus on interfaces. Commonly, van der Waals is noted as the father of this branch; he rationalized from general considerations that a diffuse interface between two phases is more likely than a sharp interface [23], although at that time, the atomistic structure of matter had not yet been established. As a next step, Landau introduced the concept of an order parameter into the thermodynamic description of materials. Ginzburg must be referenced as introducing gradients of the order parameter into the concept, representing interfaces or phase boundaries. Other stepping-stones in the development are the Cahn–Hilliard theory of spinodal decomposition [3] and Khachaturyan’s theory of microelasticity [8]. Wheeler, Boettinger, and McFadden introduced a first model of alloy solidification in 1992 [24], which combines a Cahn–Hilliard model of a material with a miscibility gap and a phase-field model of the second type, which we will call a “mesoscopic phase-field model” below. A compilation of both branches with notable contributions, far from complete, is shown in Fig. 1.3. Before giving the historical details of the second branch—mesoscopic models—let us outline the problem that urges us to extend microscopic models, along with its possible solution.

Fig. 1.3

Two branches of phase-field theory—microscopic and mesoscopic—highlighting important steps in their development. Today, the branches converge to a common understanding accepting the strengths of each approach. We stop this history about 20 years ago, so newer developments in phase-field theory are not included, such as models of fracture [6, 14, 18] or quantum phase fields [11, 21]

3.2 The Problem of Scale

All phase-field models—and this must be very clear—agree in terms of their general structure; they start from a thermodynamic functional density f(ϕ, ∇ϕ, T, c, 𝜖, …) with three terms of different character:

$$\displaystyle \begin{aligned} {} f = \frac\epsilon2 (\nabla\phi)^2 + \frac\gamma4\phi^2(1-\phi)^2 + m(\phi, T, c, \epsilon, \ldots). \end{aligned} $$
(1.3)

Here, we use the notation of Kobayashi [9]. The first and second terms relate to interface or grain-boundary contributions. This is easy to see, since the first term vanishes in the bulk when ϕ = const, i.e., in the bulk of the grains ϕ ≡ 1 or ϕ ≡ 0 (in the convention we shall use throughout this script). For these conditions, the second term—the so-called “double-well potential”—also vanishes by construction. Both terms represent a positive energy penalty within the interface, 0 < ϕ < 1, which is related to the interface energy (see Chap. 2). The last term in (1.3) relates to the bulk energy difference between phases. This is a function of temperature T, composition c, strain 𝜖, and other material states such as magnetism. Charges may be added, but those cases are not treated in this lecture series, and they are little investigated in phase-field theory.
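A quick numerical check of this structure makes the role of the two interfacial terms visible. The coefficients and the tanh profile below are purely illustrative assumptions (set to 1 and to a generic diffuse shape, not derived from (1.3)):

```python
import numpy as np

eps_c, gamma = 1.0, 1.0                  # illustrative values of epsilon, gamma
x = np.linspace(-10.0, 10.0, 2001)
dx = x[1] - x[0]

phi = 0.5 * (1.0 - np.tanh(x))           # a diffuse profile from phi=1 to phi=0
grad_phi = np.gradient(phi, dx)

# Interfacial part of Eq. (1.3): gradient term + double-well potential
f_int = 0.5 * eps_c * grad_phi**2 + 0.25 * gamma * phi**2 * (1.0 - phi)**2

center, bulk = f_int[len(x) // 2], f_int[0]
print(bulk, center)
```

The printed values confirm the statement in the text: the energy-density penalty vanishes in the bulk (ϕ ≡ 0 or ϕ ≡ 1) and is concentrated inside the diffuse interface, 0 < ϕ < 1.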

We can also write down the functional in the form:

$$\displaystyle \begin{aligned} {} f = \frac\sigma\eta \left[\eta^2 (\nabla\phi)^2 + 72\phi^2(1-\phi)^2\right] + \Delta g(\phi, T, c, \epsilon, \ldots), \end{aligned} $$
(1.4)

where σ is the interfacial energy, η is the interfacial width, and Δg is the energy difference between phases, the function m in Eq. (1.3) above. Now, if all the parameters of the equations—𝜖, γ, m, σ, η and Δg—are constants, we can find a direct one-to-one correlation between the two representations, as will be shown in Chap. 2.
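For constant parameters, part of this correspondence can already be read off by expanding (1.4) and matching coefficients term by term with (1.3); the identification of m with Δg is immediate:

$$\displaystyle \begin{aligned} \frac\epsilon2 = \sigma\eta \;\Rightarrow\; \epsilon = 2\sigma\eta, \qquad \frac\gamma4 = \frac{72\,\sigma}{\eta} \;\Rightarrow\; \gamma = \frac{288\,\sigma}{\eta}, \qquad m = \Delta g. \end{aligned}$$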

We can take for granted that the relation between m and Δg works, since it is not phase-field specific: it is the standard thermodynamic Gibbs energy difference between different phases. The relations between 𝜖, γ, σ, and η pose the following problem: if 𝜖 and γ, as well as the interface energy σ, are determined from an underlying microscopic theory, the interface width η is uniquely fixed. If all parameters are correct, we will find, as in real materials, an interface width η ≈ 1 nm. This is good if we are investigating materials at the nanometer scale (as scientists do with microscopic phase-field models); it is not so good if we want to investigate microstructures at the micrometer scale, because it will not be feasible to resolve the diffuse interface of a phase field with a width of 1 nm in a 3D simulation of micrometer-sized grain structures. How can this dilemma be solved? We seek a theory that can be matched to a so-called “sharp-interface” theory (see Chap. 4). In this theory, the interface width has no physical importance. To match a diffuse-interface method (the phase-field method) to the theory of such a sharp interface, we need to remove the influence of the interface width on all quantities with physical meaning. We seek a theory that is agnostic regarding the interface width of the real material: the interface width shall be scalable for convenience of numerical simulation. To say this clearly: it is frustrating to throw away the physical insight of a phase field to predict the structure of an interface; it is, however, the great success of mesoscopic phase-field models to make quantitative predictions of microstructural processes at larger scales possible!

4 Mesoscopic Phase-Field Model

The mesoscopic phase-field modelFootnote 1 stems from a very different route than thermodynamics. It is rooted in wave mechanics, in particular in “traveling-wave solutions” called “solitons” in the physics literature. The investigation of these phenomena dates back to Korteweg, a Dutch mathematician of the nineteenth century. It is reported that he observed a single wave packet, caused by a boat knocking against a channel wall in Amsterdam, his hometown. The wave travelled for a long distance without changing its shape. He and his collaborators wrote down a non-linear wave equation for such a phenomenon in the shallow waters of rivers or ocean shores. This is known today as the Korteweg–de Vries (KdV) equation; it is of third order in the space derivative, and its solution is displayed in Fig. 1.4a.

Fig. 1.4

Schematic representation of solitonian waves. The left-hand panel displays a wave pulse ϕ(ξ) of width η along the space coordinate x traveling with speed v; this remains self-similar in the moving coordinate ξ = x − vt. The right-hand panel shows a wave front; this is an integral form of the wave pulse. The width η and moving coordinate ξ are the same as for the wave pulse

The special feature of this solution is self-similarity in its local coordinate ξ = x − vt, with the velocity v of the wave packet, withstanding moderate outer perturbations. This property, as realized by Langer, can be used to propagate interfaces as solitonian waves in a numerical simulation: the shape of the wave front is not part of the sought solution, i.e., it can be eliminated from the solution. The wave packet in Fig. 1.4a, however, does not distinguish between the two sides of the packet, right or left. Therefore, in phase-field models, the integral form of the KdV equation is used: a second-order equation with an asymmetric solution, as displayed in Fig. 1.4b. This is more or less exactly what we today call a phase field: the minimum solution of (1.3) or (1.4).
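One convenient closed form for such a front (an illustrative choice, not the unique solution) is a logistic profile of width parameter w, which travels without change of shape in the moving coordinate ξ = x − vt:

$$\displaystyle \begin{aligned} \phi(\xi) = \frac{1}{1+e^{-\xi/w}} = \frac12\left[1+\tanh\frac{\xi}{2w}\right], \qquad \frac{d\phi}{d\xi} = \frac1w\,\phi\,(1-\phi), \end{aligned}$$

so the slope is concentrated in the front region, 0 < ϕ < 1, much as for the phase-field profiles associated with (1.3) and (1.4).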

In the physics literature, both the symmetric and the anti-symmetric solutions are termed “solitons”: self-similar traveling-wave solutions of non-linear wave equations, in contrast to periodic solutions of linear wave equations. There are further aspects of this connection between linear and non-linear wave equations that relate to the fundamental understanding of matter (see Chap. 8). In particular, soliton solutions can be used for quantization if applied to an appropriate wave function, as already pointed out by Olsen in 1974 [15]. This “quantization of phase field” has been applied to studying scale formation in the universe [22]. Enthusiastically speaking, quantization of phase field opens the road to a new understanding of the physical world: space, time, and matter are unified in a wave-mechanical description that is consistent with both quantum mechanics and thermodynamics (see further reading, “Solitons and quantum phase field,” on this topic).

Returning to the classical phase field in materials science, the idea is to combine the equation for the solitonian front, which travels with a given velocity v, with a transport equation in the bulk phases ahead of and behind the wave. The transport in the bulk phases, of course, is influenced on the one hand by the wave itself. On the other hand, the state of the phases ahead and behind will affect the wave velocity v. For the classical soliton, the front velocity is a constant such that the traveling wave is a plane wave; in our applications, however, the velocity is a vector in 3D space, varying at each position of the wave front dependent on the interaction of the wave front with its surroundings: \(v \rightarrow \vec v(\vec x, t)\).Footnote 2 The equation used to couple both phenomena is the empirical Gibbs–Thomson equation. This relates the velocity of the front linearly to the kinetic undercooling of a front with mobility Mϕ and the front normal \(\vec n\):

$$\displaystyle \begin{aligned} {} \vec v = \vec n \; M^\phi \left ( \sigma^* \kappa + \Delta g \right ). \end{aligned} $$
(1.5)

The Gibbs–Thomson equation considers capillarity via the interface stiffness σ∗ and the local curvature of the front κ; i.e., the kinetic undercooling is expressed as the curvature-corrected Gibbs energy difference between the bulk phases Δg. The Gibbs–Thomson equation is not dependent on the interface width η: it represents a sharp-interface model at the mesoscopic scale, where the interface can be approximated as a discontinuity between the phases. The Gibbs–Thomson equation (1.5) can be replaced by an appropriate phase-field equation (see Chap. 2). Since the width of the phase-field function in the transition between phases has no analogue in the Gibbs–Thomson equation, it can be chosen (within certain bounds) for numerical convenience. This will be treated in detail in Chap. 4.
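A few lines make the critical-radius consequence of (1.5) explicit. All numbers are hypothetical order-of-magnitude values, and the sign convention κ = −2/R for a convex solid sphere in 3D (so that capillarity opposes growth) is one common choice, not fixed by the equation itself:

```python
# Hypothetical, order-of-magnitude values for a metal:
M = 1.0e-9          # interface mobility M_phi        [m^4 / (J s)]
sigma_star = 0.3    # interface stiffness sigma*      [J / m^2]
dg = 1.0e6          # driving force Delta_g for solidification [J / m^3]

def front_speed(R):
    """Normal speed of a spherical solid of radius R, Eq. (1.5).
    Assumed convention: kappa = -2/R for a convex solid particle."""
    kappa = -2.0 / R
    return M * (sigma_star * kappa + dg)

# Radius at which capillarity exactly balances the bulk driving force:
R_crit = 2.0 * sigma_star / dg
print(R_crit, front_speed(0.5 * R_crit), front_speed(2.0 * R_crit))
```

Below `R_crit` the speed is negative (the particle shrinks); above it the front advances into the melt. This is the classical nucleation barrier expressed through the kinetics of Eq. (1.5).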

It was also realized early on [1, 2] that the curvature κ is inherently accounted for in a phase-field description of a curved interface. This will be discussed in detail in Chap. 3.

4.1 Applications

The phase-field method now has many applications, from biophysics to astrophysics. Its main field of application is materials science, and metallurgy in particular. As described above, we distinguish the two main approaches: microscopic models, which are applied mostly to solid-state transformations (the school of Armen Khachaturyan); and mesoscopic models, which were first applied to solidification (the school of Jim Langer). Today, there are well-established methods for combining both branches for specific materials problems; these range, in the context of metallurgy, from solidification and heat treatment to in-service degradation until failure. A new branch has arisen with phase-field models of fracture [7, 14, 18]. This will, however, not be covered in this lecture series. One common aspect of all these applications is the evolution of the microstructure. Evolving structures define a special class of moving-boundary problems, in which phase-field models offer elegant and efficient solutions.

Example—Dendritic Growth

Dendritic solidification occurs when a solid crystal grows into an undercooled melt. The interface becomes unstable, forming tips and branches, and develops into a “tree-like” structure: hence the term “dendrite.”

Equiaxed Dendrite

This example is related to Kobayashi’s dendrite in 2D [9, 10], and it is an exercise that we undertake in the class. One very good result from a former student is displayed in Fig. 1.5. The “100-line code” (see Appendix A.1), written in C++, evolves the phase-field variable and temperature field with adiabatic boundary conditions. One circular grain is placed at the bottom of the 2D simulation zone at the start of the simulation, and the system size is 300 × 600 grid points. The phase field ϕ is displayed for different time steps.

Fig. 1.5

Dendritic solidification of a pure substance into an undercooled melt. (a) t = 0.05 s. (b) t = 0.25 s. (c) t = 0.75 s. (d) t = 1.5 s
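For orientation, here is a heavily condensed sketch of the same explicit scheme, written in Python for brevity (the actual exercise code of Appendix A.1 is in C++). It uses Kobayashi’s model equations and parameter values [9] but, as simplifications of this sketch, drops the interfacial anisotropy (so the seed grows as a circle, not a dendrite), uses periodic instead of adiabatic boundaries, and a much smaller grid:

```python
import numpy as np

# Stripped-down, isotropic variant of Kobayashi's 2D model [9]:
#   tau * dphi/dt = eps^2 * lap(phi) + phi(1-phi)(phi - 1/2 + m(T))
#   dT/dt = lap(T) + K * dphi/dt,   m(T) = (alpha/pi) * atan(gamma*(T_eq - T))
nx = ny = 64
dx, dt = 0.03, 1.0e-4
eps, tau, K = 0.01, 3.0e-4, 1.6
alpha, gamma, T_eq = 0.9, 10.0, 1.0

def lap(a):
    """Five-point Laplacian with periodic boundaries."""
    return (np.roll(a, 1, 0) + np.roll(a, -1, 0) +
            np.roll(a, 1, 1) + np.roll(a, -1, 1) - 4.0 * a) / dx**2

phi = np.zeros((nx, ny))
T = np.zeros((nx, ny))                        # undercooled melt: T < T_eq
xx, yy = np.indices((nx, ny))
phi[(xx - nx // 2) ** 2 + (yy - ny // 2) ** 2 < 5 ** 2] = 1.0  # seed nucleus

initial_solid = int((phi > 0.5).sum())
for _ in range(500):                          # explicit Euler time stepping
    m = (alpha / np.pi) * np.arctan(gamma * (T_eq - T))
    dphi = (eps**2 * lap(phi) +
            phi * (1.0 - phi) * (phi - 0.5 + m)) * dt / tau
    T += dt * lap(T) + K * dphi               # release of latent heat
    phi += dphi
final_solid = int((phi > 0.5).sum())
print(initial_solid, final_solid)
```

The coupled update is the essential point: the phase field advances the front, and the latent-heat term K dϕ/dt feeds back into the temperature field, which in turn throttles the driving force m(T).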

Further Reading

  • Long-range elastic, electrostatic, and magnetic interactions connected to domain structure evolution during structural, ferroelectric, and ferromagnetic phase transformations, a recent review: [4].

  • Fracture: [6, 14, 18].

  • Level set: [5, 20].

  • Hydrodynamic interpretation of phase field: [12].

  • Solitons and quantum phase field: [11, 19, 21].