1 Introduction

The thresholding scheme is a highly efficient computational scheme for multiphase mean curvature flow (MCF) which was originally introduced by Merriman, Bence, and Osher [27, 28]. The main motivation for MCF comes from metallurgy, where it models the slow relaxation of grain boundaries in polycrystals [31]. Each “phase” in our mathematical jargon corresponds to a grain, i.e., a region of homogeneous crystallographic orientation. The effective surface tension \(\sigma _{ij}(\nu )\) and the mobility \(\mu _{ij}(\nu )\) of a grain boundary depend on the mismatch between the lattices of the two adjacent grains \(\Omega _i\) and \(\Omega _j\) and on the relative orientation of the grain boundary, given by its normal vector \(\nu \). It is well known that for small mismatch angles, the dependence on the normal can be neglected [32]. The effective evolution equations then read

$$\begin{aligned} V_{ij} = -\mu _{ij} \sigma _{ij} H_{ij} \quad \text {along the grain boundary }\Sigma _{ij}, \end{aligned}$$
(1)

where \(V_{ij} \) and \(H_{ij}\) denote the normal velocity and mean curvature of the grain boundary \(\Sigma _{ij} = \partial \Omega _i \cap \partial \Omega _j\), respectively. These equations are coupled by the Herring angle condition

$$\begin{aligned} \sigma _{ij}\nu _{ij} + \sigma _{jk}\nu _{jk} + \sigma _{ki}\nu _{ki} =0 \quad \text {along triple junctions } \Sigma _{ij} \cap \Sigma _{jk}, \end{aligned}$$
(2)

which is a balance-of-forces condition and simply states that triple junctions are in local equilibrium; here \(\nu _{ij}\) denotes the unit normal of \(\Sigma _{ij}\) pointing from \(\Omega _i\) into \(\Omega _j\), (Fig. 1). We refer the interested reader to [17] for more background on the modeling.

Fig. 1: The Herring angle condition at a triple junction

Efficient numerical schemes make it possible to carry out large-scale simulations which give insight into relevant statistics like the average grain size or the grain boundary character distribution, as an alternative to studying corresponding mean field limits as in [4, 18]. The main obstruction to a direct discretization of the dynamics (1)–(2) is the ubiquity of topological changes in the network of grain boundaries, such as the vanishing of grains. Thresholding instead handles such topological changes naturally. The scheme is a time discretization which alternates between the following two operations: (i) convolution with a smooth kernel; (ii) thresholding. The second step is a simple pointwise operation, and also the first step can be implemented efficiently using the Fast Fourier Transform. One of the main objectives of our analysis is to rigorously justify this intriguingly simple scheme in the presence of such topological changes.
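
To fix ideas, the following is a minimal sketch of one step of the basic two-phase scheme of [27, 28] (convolution with the heat kernel followed by thresholding at 1/2), using an FFT-based convolution on the periodic cell \([0,1)^d\). The implementation details (function names, discretization) are illustrative only and not part of the analysis below.

    import numpy as np

    def heat_convolution(u, t):
        # Convolution with the heat kernel G_t on the torus [0,1)^d via FFT:
        # in Fourier space this amounts to multiplication by exp(-4*pi^2*t*|k|^2).
        freqs = [np.fft.fftfreq(n, d=1.0 / n) for n in u.shape]
        k2 = sum(np.meshgrid(*[f**2 for f in freqs], indexing="ij"))
        return np.real(np.fft.ifftn(np.fft.fftn(u) * np.exp(-4.0 * np.pi**2 * t * k2)))

    def two_phase_thresholding_step(chi, h):
        # One step of the basic scheme: (i) convolve the indicator function of
        # the phase with the heat kernel at time h, (ii) threshold at the level 1/2.
        return (heat_convolution(chi, h) > 0.5).astype(float)

Iterating two_phase_thresholding_step on an initial indicator function then produces the time-discrete approximation at times h, 2h, and so on.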

The basis of our analysis is the underlying gradient-flow structure of (1)–(2), which means that the solution follows the steepest descent in an energy landscape. More precisely, the energy is the total interfacial area weighted by the surface tensions \(\sigma _{ij}\), and the metric tensor is the \(L^2\)-product on normal velocities, weighted by the inverse mobilities \(\frac{1}{\mu _{ij}}\). One can read off this structure from the inequality

$$\begin{aligned} \frac{d}{dt} \sum _{i,j=1}^N \sigma _{ij} \mathrm {Area}(\Sigma _{ij}) = - \sum _{i,j=1}^N \frac{1}{\mu _{ij}} \int _{\Sigma _{ij}} V_{ij}^2\, dS \le 0, \end{aligned}$$

which is valid for sufficiently regular solutions to (1)–(2). In the seminal work [8], Esedoḡlu and Otto showed that the efficient thresholding scheme respects this gradient-flow structure as it may be viewed as a minimizing movements scheme in the sense of De Giorgi. More precisely, they show that each step in the scheme is equivalent to solving a variational problem of the form

$$\begin{aligned} \min _\chi \left\{ \frac{1}{2h} d_h^2(\Sigma ,\Sigma ^{n-1}) + E_h(\Sigma ) \right\} , \end{aligned}$$
(3)

where \(E_h(\Sigma )\) and \(d_h(\Sigma ,\Sigma ^{n-1})\) are proxies for the total interfacial energy of the configuration \(\Sigma \) and the distance of the configuration \(\Sigma \) to the one at the previous time step \(\Sigma ^{n-1}\), respectively. Since the work of Jordan, Kinderlehrer, and Otto [15], the importance of the formerly often neglected metric in such gradient-flow structures has been widely appreciated. Also in the present work, the focus lies on the metric, which in the case of MCF is well-known to be completely degenerate [29]. This explains the proxy for the metric appearing in the related well-known minimizing movements scheme for MCF by Almgren, Taylor, and Wang [1], and Luckhaus and Sturzenhecker [25]. This remarkable connection between the numerical scheme and the theory of gradient flows has the practical implication that it made clear how to generalize the algorithm to arbitrary surface tensions \(\sigma _{ij}\). From the point of view of numerical analysis, (3) means that thresholding behaves like the implicit Euler scheme and is therefore unconditionally stable. The variational interpretation of the thresholding scheme has of course implications for the analysis of the algorithm as well. It allowed Otto and one of the authors to prove convergence results in the multiphase setting [19, 20], which lies beyond the reach of the more classical viscosity approach based on the comparison principle implemented in [3, 9, 13]. This variational viewpoint also turned out to be useful in different frameworks, such as MCF in higher codimension [23] or the Muskat problem [14]. The only downside of the generalization [8] is the somewhat unnatural choice of effective mobilities \(\mu _{ij}=\frac{1}{\sigma _{ij}}\). Only recently, Salvador and Esedoḡlu [33] have presented a strikingly simple way to incorporate a wide class of mobilities \(\mu _{ij}\) as well. Their algorithm is based on the fact pointed out in [7] that although the same kernel appears in the energy and the metric, each term only uses certain properties of the kernel, which can be tuned independently: Starting from two Gaussian kernels \(G_\gamma \) and \(G_\beta \) of different width, they find a positive linear combination \(K_{ij}=a_{ij} G_\gamma + b_{ij} G_\beta \), whose effective mobility and surface tension match the given \(\mu _{ij}\) and \(\sigma _{ij}\), respectively. It is remarkable that this algorithm retains the same simplicity and structure as the previous ones [8, 28]. We refer to Sect. 2 for the precise statement of the algorithm.

In the present work, we prove the first convergence result for this new general scheme. The main novelty here is that the proof applies in the full generality of this new scheme incorporating arbitrary mobilities. Furthermore, this is the first proof of De Giorgi’s inequality in the multiphase case. We exploit the gradient-flow structure and show that under the natural assumption of energy convergence, any limit of thresholding satisfies De Giorgi’s inequality, a weak notion of multiphase mean curvature flow. This assumption is inspired by the fundamental work of Luckhaus–Sturzenhecker [25] and has appeared in the context of thresholding in [19, 20]. We expect it to hold true before the onset of singularities such as the vanishing of grains. Furthermore, at least in the simpler two-phase case, it can be verified for certain singularities [5, 6]. We would in fact expect this assumption to hold generically; proving this, however, seems to be a difficult problem in the multiphase case.

The present work fits into the theory of general gradient flows even better than the two previous ones [19, 20] and crucially depends on De Giorgi’s abstract framework, cf. [2]. This research direction was initiated by Otto and the first author and appeared in the lecture notes [21]. There, De Giorgi’s inequality is derived for the simple model case of two phases. Here, we complete these ideas and use a careful localization argument to generalize this result to the multiphase case. A further particular novelty of our work is that for the first time, we prove the convergence of the new scheme for arbitrary mobilities [33].

Our proof rests on the fact that thresholding, like any minimizing movements scheme, satisfies a sharp energy-dissipation inequality of the form

$$\begin{aligned} E_h(\Sigma ^h(T)) + \frac{1}{2} \int _0^T \left( \frac{1}{h^2}d_h^2(\Sigma ^h(t),\Sigma ^h(t-h)) +|\partial E_h|^2(\tilde{\Sigma }^h(t)) \right) dt \le E_h(\Sigma (0)), \end{aligned}$$
(4)

where \(\Sigma ^h(t)\) denotes the piecewise constant interpolation in time of our approximation, \({{\tilde{\Sigma }}}^h(t)\) denotes another, intrinsic interpolation in terms of the variational scheme, cf. Lemma 3, and \(|\partial E_h|\) is the metric slope of \(E_h\), cf. (33).

Our main goal is to pass to the limit in (4) and obtain the sharp energy-dissipation relation for the limit, which in the simple two-phase case formally reads

$$\begin{aligned} \sigma \mathrm {Area}(\Sigma (T)) + \frac{1}{2} \int _0^T \int _{\Sigma (t)} \left( \frac{1}{\mu }V^2 + \mu \sigma ^2 H^2 \right) dS\,dt \le \sigma \mathrm {Area}(\Sigma (0)). \end{aligned}$$
(5)

To this end, one needs sharp lower bounds for the terms on the left-hand side of (4). While the proof of the lower bound on the metric slope of the energy

$$\begin{aligned} \liminf _{h\downarrow 0} \int _0^T |\partial E_h|^2({{\tilde{\Sigma }}}^h(t)) \,dt \ge \mu \sigma ^2 \int _0^T \int _{\Sigma (t)} H^2 dS\,dt \end{aligned}$$
(6)

is a straightforward generalization of the argument in [21], the main novelty of the present work lies in the sharp lower bound for the distance term of the form

$$\begin{aligned} \liminf _{h\downarrow 0} \int _0^T \frac{1}{h^2} d_h^2(\Sigma ^h(t),\Sigma ^h(t-h))\,dt \ge \frac{1}{\mu }\int _0^T\int _{\Sigma (t)} V^2\, dS\,dt. \end{aligned}$$
(7)

This requires us to work on a mesoscopic time scale \(\tau \sim \sqrt{h}\), which is much larger than the microscopic time-step size h and which is natural in view of the parabolic nature of our problem. It is remarkable that De Giorgi’s inequality (5) in fact characterizes the solution of MCF under additional regularity assumptions. Indeed, if \(\Sigma (t)\) evolves smoothly, then \(\sigma \,\mathrm {Area}(\Sigma (T)) - \sigma \,\mathrm {Area}(\Sigma (0)) = \sigma \int _0^T \int _{\Sigma (t)} V H\, dS\,dt\), so that after completing the square, this inequality can be rewritten as

$$\begin{aligned} \frac{1}{2} \int _0^T \int _{\Sigma (t)} \sigma \Big ( \frac{1}{\sqrt{\mu \sigma }} V + \sqrt{\mu \sigma } H \Big )^2 dS\,dt \le 0, \end{aligned}$$
(8)

and therefore \(V=-\mu \sigma H\). For expository purposes, we focused here on the vanilla two-phase case. In the multiphase case, the resulting inequality implies both the PDEs (1) and the balance-of-forces conditions (2), cf. Remark 1. An optimal energy-dissipation relation like the one here also plays a crucial role in the recent weak-strong uniqueness result for multiphase mean curvature flow by Fischer, Hensel, Simon, and one of the authors [10]. There, a new dynamic analogue of calibrations is introduced and uniqueness is established in the following two steps: (i) any strong solution is a calibrated flow and (ii) every calibrated flow is unique in the class of weak solutions. In fact, Hensel and the first author recently showed in [11] that (a slightly weaker version of) De Giorgi’s inequality is sufficient for weak-strong uniqueness. De Giorgi’s general strategy which we are implementing here is also related to the approaches by Sandier and Serfaty [34] and Mielke [30]. They provide sufficient conditions for gradient flows to converge, in the same spirit in which \(\Gamma \)-convergence of energy functionals implies the convergence of minimizers. In the dynamic situation it is clear that one needs conditions on both the energy and the metric in order to verify such a convergence.

There has been continuous interest in MCF in the mathematics literature, so we only point out some of the most relevant recent advances. We refer the interested reader to the introductions of [19] and [22] for further related references. The existence of global solutions to multiphase MCF has only been established recently by Kim and Tonegawa [16] who carefully adapt Brakke’s original construction and show in addition that phases do not vanish spontaneously. For the reader who wants to familiarize themselves with this topic, we recommend the recent notes [37]. Another approach to understanding the long-time behavior of MCF is to restart strong solutions after singular times. This amounts to solving the Cauchy problem with non-regular initial data, such as planar networks of curves with quadruple junctions. In this two-dimensional setting, this has been achieved by Ilmanen, Neves, and Schulze [12] by gluing in self-similarly expanding solutions for which it is possible to show that the initial condition is attained in some measure theoretic way. Most recently, using a similar approach of gluing in self-similar solutions, but also relying on blow-ups from geometric microlocal analysis, Lira, Mazzeo, Pluda, and Saez [24] were able to construct such strong solutions, prove stronger convergence towards the initial (irregular) network of curves, and classify all such strong solutions.

The rest of the paper is structured as follows. In Sect. 2 we recall the thresholding scheme for arbitrary mobilities introduced in [33], show its connection to the abstract framework of gradient flows, and record the direct implications of this theory. We state and discuss our main results in Sect. 3. Section 4 contains the localization argument in space, which will play a crucial role in the proofs which are gathered in Sect. 5. Finally, in the short “Appendix”, we record some basic facts about thresholding.

2 Setup and the modified thresholding scheme

Here and in the rest of the paper, \([0,1)^d\) denotes the d-dimensional torus. Thus, when we deal with functions \(u: [0,1)^d \rightarrow {\mathbf {R}}\), we always assume that they have periodic boundary conditions; in particular, they can be extended periodically to \({\mathbf {R}}^d\). In general, if u is a function as before and \(f: {\mathbf {R}}^d \rightarrow {\mathbf {R}}\), then by \(f * u\) we mean the convolution on \({\mathbf {R}}^d\) between f and the periodic extension of u, i.e.

$$\begin{aligned} f*u(x) := \int _{{\mathbf {R}}^d} f(z) u(x-z)dz,\ x \in {\mathbf {R}}^d \end{aligned}$$
(9)

when this expression makes sense.

2.1 The modified algorithm

We start by describing the algorithm proposed by Salvador and Esedoḡlu [33]. Let the symmetric matrix \(\varvec{\sigma } = (\sigma _{ij})_{ij} \in {\mathbf {R}}^{N\times N}\) of surface tensions and the symmetric matrix \(\varvec{\mu } = (\mu _{ij})_{ij}\) of mobilities be given. In this work we define for notational convenience \(\sigma _{ii} = \mu _{ii} = 0\). Let \(\gamma> \beta > 0\) be given. Define the matrices \({\mathbb {A}} = (-a_{ij})_{ij} \in {\mathbf {R}}^{N \times N}\) and \({\mathbb {B}} = (-b_{ij})_{ij} \in {\mathbf {R}}^{N \times N}\) by

$$\begin{aligned}&a_{ij} = \frac{\sqrt{\pi }\sqrt{\gamma }}{\gamma - \beta }(\sigma _{ij} - \beta \mu _{ij}^{-1}), \end{aligned}$$
(10)
$$\begin{aligned}&b_{ij} = \frac{\sqrt{\pi }\sqrt{\beta }}{\gamma - \beta }(-\sigma _{ij} + \gamma \mu _{ij}^{-1}), \end{aligned}$$
(11)

for \(i \ne j\) and \(a_{ii} = b_{ii} = 0\). Then \(a_{ij}, b_{ij}\) are uniquely determined as solutions of the following linear system

$$\begin{aligned} {\left\{ \begin{array}{ll} \sigma _{ij} = \frac{a_{ij}\sqrt{\gamma }}{\sqrt{\pi }} + \frac{b_{ij}\sqrt{\beta }}{\sqrt{\pi }}, \\ \mu _{ij}^{-1} = \frac{a_{ij}}{\sqrt{\pi }\sqrt{\gamma }} + \frac{b_{ij}}{\sqrt{\pi }\sqrt{\beta }}. \end{array}\right. } \end{aligned}$$
(12)
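
For the reader's convenience, one checks directly that (10)–(11) indeed solve (12): inserting them into the right-hand sides of (12) gives

$$\begin{aligned} \frac{a_{ij}\sqrt{\gamma }}{\sqrt{\pi }} + \frac{b_{ij}\sqrt{\beta }}{\sqrt{\pi }}&= \frac{\gamma (\sigma _{ij} - \beta \mu _{ij}^{-1}) + \beta (-\sigma _{ij} + \gamma \mu _{ij}^{-1})}{\gamma - \beta } = \sigma _{ij}, \\ \frac{a_{ij}}{\sqrt{\pi }\sqrt{\gamma }} + \frac{b_{ij}}{\sqrt{\pi }\sqrt{\beta }}&= \frac{(\sigma _{ij} - \beta \mu _{ij}^{-1}) + (-\sigma _{ij} + \gamma \mu _{ij}^{-1})}{\gamma - \beta } = \mu _{ij}^{-1}. \end{aligned}$$

As a simple illustrative example (not taken from [33]): for \(\sigma _{ij} \equiv 1\) and \(\mu _{ij} \equiv 1\) for \(i \ne j\), \(\gamma = 4\) and \(\beta = \frac{1}{4}\), formulas (10)–(11) give \(a_{ij} = b_{ij} = \frac{2}{5}\sqrt{\pi }\).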

The algorithm introduced by Salvador and Esedoḡlu is as follows. Let the time step size \(h > 0\) be fixed. Hereafter \(G_{\gamma }^h := G_{\gamma h}^{(d)}\) denotes the d-dimensional heat kernel (18) at time \(\gamma h\).

Algorithm 1

(Modified thresholding scheme) Let \(\{\Omega _1^{0}, . . . , \Omega _N^{0}\}\) be disjoint open subsets of \([0,1)^d\) such that \([0, 1)^d = \cup _{i} \overline{\Omega _i^{0}}\). To obtain the new collection \(\{\Omega _1^{n+1}, . . . , \Omega _N^{n+1}\}\) at time \(t= h (n+1)\) from the collection \(\{\Omega _1^n, . . . , \Omega _N^n\}\) at time \(t = hn\), perform the following three steps (a schematic implementation is sketched after the algorithm):

  1. (1)

    For any \(i = 1 , . . . , N\) form the convolutions

    $$\begin{aligned} \phi ^n_{1,i} = G_{\gamma }^h * {\mathbf {1}}_{\Omega _i^n},\ \phi ^n_{2,i} = G_{\beta }^h * {\mathbf {1}}_{\Omega _i^n} \end{aligned}$$
  2. (2)

    For any \(i= 1, . . . , N\) form the comparison functions

    $$\begin{aligned} \psi ^n_{i} = \sum _{j\ne i} a_{ij}\phi ^n_{1,j} + b_{ij}\phi ^n_{2,j}. \end{aligned}$$
  3. (3)

    Thresholding step: define

    $$\begin{aligned} \Omega _i^{n+1} := \left\{ x: \psi _i^n(x) < \min _{j \ne i} \psi _j^n(x) \right\} . \end{aligned}$$
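
In schematic form, one step of Algorithm 1 may be implemented as sketched below, reusing the FFT-based heat_convolution from the two-phase sketch in the introduction. The data layout and the assumption that the coefficient matrices \(a_{ij}, b_{ij}\) have been precomputed from (10)–(11) are choices of this illustration, not prescriptions of [33].

    def modified_thresholding_step(chi, a, b, gamma, beta, h):
        # chi: list of N {0,1}-valued arrays, the indicators of Omega_1^n, ..., Omega_N^n.
        # a, b: coefficient matrices (a_ij), (b_ij) from (10)-(11), with zero diagonal.
        N = len(chi)
        # Step (1): convolutions with the heat kernels at times gamma*h and beta*h.
        phi1 = [heat_convolution(c, gamma * h) for c in chi]
        phi2 = [heat_convolution(c, beta * h) for c in chi]
        # Step (2): comparison functions psi_i = sum_{j != i} a_ij*phi1_j + b_ij*phi2_j.
        psi = [sum(a[i][j] * phi1[j] + b[i][j] * phi2[j] for j in range(N) if j != i)
               for i in range(N)]
        # Step (3): thresholding, i.e. assign each point to the phase whose
        # comparison function is smallest there.
        labels = np.argmin(np.stack(psi), axis=0)
        return [(labels == i).astype(float) for i in range(N)]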

We will assume the following:

$$\begin{aligned}&\text {The coefficients}\ a_{ij}, b_{ij}\ \text {satisfy the strict triangle inequality.} \end{aligned}$$
(13)
$$\begin{aligned}&\text {The matrices}\ {\mathbb {A}}\ \text {and}\ {\mathbb {B}}\ \text {are positive definite on}\ (1, . . . , 1)^{\perp }. \end{aligned}$$
(14)

In particular, for \(v \in (1, . . . , 1)^{\perp }\) we can define norms

$$\begin{aligned} |v|_{{\mathbb {A}}}^2 = v \cdot {\mathbb {A}}v,\ |v|_{{\mathbb {B}}}^2 = v \cdot {\mathbb {B}}v. \end{aligned}$$

We remark that we need the matrices \({\mathbb {A}}, {\mathbb {B}}\) to be positive definite on \((1, . . . , 1)^{\perp }\) to guarantee that the functional defined in (28) is a distance; see the comment following (28) below.

Observe that condition (13) is always satisfied if we choose \(\gamma \) large and \(\beta \) small provided the surface tensions and the inverse of the mobilities satisfy the strict triangle inequality. Indeed, define

$$\begin{aligned} m_{\sigma } = \min _{i,j,k} \{\sigma _{ik} + \sigma _{kj} - \sigma _{ij}\}\ \text {and}\ M_{\sigma } = \max _{i,j,k} \{\sigma _{ik} + \sigma _{kj} - \sigma _{ij}\}, \end{aligned}$$

where \(i, j, k\) range over all triples of distinct indices \(1\le i,j,k \le N\). Define \(m_{\frac{1}{\mu }}\) and \(M_{\frac{1}{\mu }}\) in a similar way. Since by (10) and (11) we have \(a_{ik} + a_{kj} - a_{ij} \ge \frac{\sqrt{\pi }\sqrt{\gamma }}{\gamma -\beta }\big (m_{\sigma } - \beta M_{\frac{1}{\mu }}\big )\) and \(b_{ik} + b_{kj} - b_{ij} \ge \frac{\sqrt{\pi }\sqrt{\beta }}{\gamma -\beta }\big (\gamma m_{\frac{1}{\mu }} - M_{\sigma }\big )\) for any such triple, \(a_{ij}\) and \(b_{ij}\) satisfy the (strict) triangle inequality if

$$\begin{aligned} \beta < \frac{m_{\sigma }}{M_{\frac{1}{\mu }}}\ \text {and}\ \gamma > \frac{M_{\sigma }}{m_{\frac{1}{\mu }}}, \end{aligned}$$
(15)

which can always be achieved since \(\gamma> \beta > 0\) are arbitrary. For the second condition (14), we have the following result of Salvador and Esedoḡlu [33].

Lemma 1

Let the matrix \(\sigma \) of the surface tensions and the matrix \(\frac{1}{\mu }\) of the inverse mobilities (where the diagonal entries are set to zero) be negative definite on \((1, . . . , 1)^\perp \). Let \(\gamma > \beta \) be such that

$$\begin{aligned} \gamma > \frac{\min _{i=1, . . . , N-1} s_i}{\max _{i=1, . . . , N-1} m_i},\ \beta < \frac{\max _{i=1, . . . , N-1}s_i}{\min _{i=1, . . . , N-1}m_i} \end{aligned}$$
(16)

where \(s_i\) and \(m_i\) are the nonzero eigenvalues of \(J\sigma J\) and \(J\frac{1}{\mu }J\) respectively, where the matrix J has components \(J_{ij} = \delta _{ij} - \frac{1}{N}\). Then \({\mathbb {A}}\) and \({\mathbb {B}}\) are positive definite on \((1, . . . , 1)^{\perp }\).

In particular, if we choose \(\gamma \) large enough and \(\beta \) small enough, condition (14) on the matrices \({\mathbb {A}}, {\mathbb {B}}\) is satisfied provided the matrices \(\sigma \) and \(\frac{1}{\mu }\) are negative definite on \((1, . . . , 1)^{\perp }\). By a classical result of Schoenberg [35] this is the case if and only if \(\sqrt{\sigma _{ij}}\) and \(1/\sqrt{\mu _{ij}}\) are \(\ell ^2\) embeddable. In particular, this holds for the choice of Read-Shockley surface tensions and equal mobilities.

For \(1 \le i \ne j \le N\) define the kernels

$$\begin{aligned} K_{ij}(z) = a_{ij}G_{\gamma }(z) + b_{ij}G_{\beta }(z) \end{aligned}$$
(17)

where, for a given \(t > 0\), we define \(G^{(d)}_t\) as the heat kernel in \({\mathbf {R}}^d\), i.e.,

$$\begin{aligned} G^{(d)}_t(z) = \frac{e^{-\frac{|z|^2}{4t}}}{\sqrt{4\pi t}^d}. \end{aligned}$$
(18)

If the dimension d is clear from the context, we suppress the superscript (d) in (18). We recall here some basic properties of the heat kernel.

$$\begin{aligned} G_t(z)&> 0 \qquad \text {(non-negativity)}, \end{aligned}$$
(19)
$$\begin{aligned} G_t(z)&= G_t(Rz)\ \forall R \in O(d) \qquad \text {(symmetry)}, \end{aligned}$$
(20)
$$\begin{aligned} G_{t}(z)&= \frac{1}{\sqrt{t}^d}G_1\left( \frac{z}{\sqrt{t}}\right) \qquad \text {(scaling)}, \end{aligned}$$
(21)
$$\begin{aligned} G_t * G_s&= G_{t+s} \qquad \text {(semigroup property)}, \end{aligned}$$
(22)
$$\begin{aligned} G_t^{(d)}(z)&= \prod _{i=1}^d G_t^{(1)}(z_i) \qquad \text {(factorization property)}. \end{aligned}$$
(23)

We observe that the kernels \(K_{ij}\) are positive, with positive Fourier transform \({\hat{K}}_{ij}\), provided \(\gamma > \max _{i,j}\sigma _{ij}\mu _{ij}\) and \(\beta < \min _{i,j}\sigma _{ij}\mu _{ij}\); indeed, by (10) and (11), these conditions guarantee that \(a_{ij}, b_{ij} > 0\), so that each \(K_{ij}\) is a positive combination of two Gaussians. In particular, assuming

  1. (1)

    \(\sigma _{ij}\) and \(\frac{1}{\mu _{ij}}\) satisfy the strict triangle inequality,

  2. (2)

    \(\sigma \) and \(\frac{1}{\mu }\) are negative definite on \((1, . . . , 1)^{\perp }\),

we can always achieve the conditions posed on \({\mathbb {A}}, {\mathbb {B}}\) and the positivity of the kernels \(K_{ij}\) by choosing \(\gamma \) large and \(\beta \) small.

Given any \(h > 0\) we define the scaled kernels

$$\begin{aligned} K^h_{ij}(z) = \frac{1}{\sqrt{h}^{d}}K_{ij}\left( \frac{z}{\sqrt{h}}\right) , \end{aligned}$$
(24)

then the first and the second step in Algorithm 1 may be compactly rewritten as follows

$$\begin{aligned} \psi _i^n = \sum _{j\ne i} K_{ij}^h * {\mathbf {1}}_{\Omega _j^n}. \end{aligned}$$
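
For the reader's convenience: by the scaling property (21) of the heat kernel,

$$\begin{aligned} K^h_{ij}(z) = \frac{a_{ij}}{\sqrt{h}^{d}}G_{\gamma }\left( \frac{z}{\sqrt{h}}\right) + \frac{b_{ij}}{\sqrt{h}^{d}}G_{\beta }\left( \frac{z}{\sqrt{h}}\right) = a_{ij}G_{\gamma h}(z) + b_{ij}G_{\beta h}(z) = a_{ij}G^h_{\gamma }(z) + b_{ij}G^h_{\beta }(z), \end{aligned}$$

so that \(K_{ij}^h * {\mathbf {1}}_{\Omega _j^n} = a_{ij}\phi ^n_{1,j} + b_{ij}\phi ^n_{2,j}\), which is exactly the combination appearing in the second step of Algorithm 1.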

For later use, we also introduce the kernel

$$\begin{aligned} K(z) = \frac{1}{2}G_{\gamma }(z) + \frac{1}{2}G_{\beta }(z). \end{aligned}$$
(25)

2.2 Connection to De Giorgi’s minimizing movements

The first observation is that Algorithm 1 has a minimizing movements interpretation. To explain this, let us introduce the class

$$\begin{aligned} {\mathcal {A}} := \left\{ \chi : [0,1)^d \rightarrow \{0,1\}^N\biggr |\ \sum _{k=1}^N \chi _k = 1 \right\} \end{aligned}$$

and its relaxation

$$\begin{aligned} {\mathcal {M}} := \left\{ u: [0,1)^d \rightarrow [0,1]^N\biggr |\ \sum _{k=1}^N u_k = 1 \right\} . \end{aligned}$$

If \(\chi \in {\mathcal {A}}\cap BV([0,1)^d)^N\), then each of the sets \(\Omega _i := \{\chi _i = 1\}\) is a set of finite perimeter. We denote by \(\partial ^*\Omega _i\) the reduced boundary of the set \(\Omega _i\), and for any pair \(1\le i\ne j\le N\) we denote by \(\Sigma _{ij} := \partial ^*\Omega _i \cap \partial ^*\Omega _j\) the interface between the sets. For \(u \in {\mathcal {M}}\) we define

$$\begin{aligned} E(u) := {\left\{ \begin{array}{ll} \sum _{i,j}\sigma _{ij}{\mathcal {H}}^{d-1}(\Sigma _{ij})\ &{}\text {if}\ u \in {\mathcal {A}} \cap BV([0,1)^d)^N, \\ +\infty \ &{}\text {otherwise}. \end{array}\right. } \end{aligned}$$
(26)

For \(h>0\) fixed we define the approximate energy \(E_h\) for \(u \in {\mathcal {M}}\) by

$$\begin{aligned} E_h(u) = \sum _{i,j} \frac{1}{\sqrt{h}}\int _{[0,1)^d} u_iK_{ij}^h * u_j dx. \end{aligned}$$
(27)

For \(u,v \in {\mathcal {M}}\) and \(h>0\) we also define the distance

$$\begin{aligned} d_h^2(u,v)&:= -2hE_h(u-v) = -2\sqrt{h}\sum _{i,j} \int (u_i - v_i)K_{ij}^h*(u_j - v_j)dx \nonumber \\&=2\sqrt{h}\int |G_{\gamma }^{h/2} *(u-v)|_{{\mathbb {A}}}^2 + |G_{\beta }^{h/2} *(u-v)|_{{\mathbb {B}}}^2\ dx, \end{aligned}$$
(28)

where we used the semigroup property (22) and the symmetry (20) to derive the last equality. We also point out that since \(\sum _i u_i = \sum _i v_i = 1\) a.e., we have \(G_{\gamma }^{h/2}*(u-v), G_{\beta }^{h/2}*(u-v) \in (1, . . . , 1)^{\perp }\). Hence the assumptions on \({\mathbb {A}}\) and \({\mathbb {B}}\) guarantee that \(d_h\) defines a distance on \({\mathcal {M}}\) (and on \({\mathcal {A}}\)).
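
For the reader's convenience, the key identity behind the last step is that, by the semigroup property (22) and the symmetry (20), for \(\theta \in \{\gamma , \beta \}\) and \(w, {\tilde{w}} \in L^2([0,1)^d)\)

$$\begin{aligned} \int _{[0,1)^d} w\, G_{\theta }^{h} * {\tilde{w}}\, dx = \int _{[0,1)^d} \big (G_{\theta }^{h/2} * w\big )\big (G_{\theta }^{h/2} * {\tilde{w}}\big )\, dx, \end{aligned}$$

so that \(-\sum _{i,j} \int (u_i - v_i)K_{ij}^h*(u_j - v_j)\,dx = \int |G_{\gamma }^{h/2} *(u-v)|_{{\mathbb {A}}}^2 + |G_{\beta }^{h/2} *(u-v)|_{{\mathbb {B}}}^2\, dx\), in view of \({\mathbb {A}} = (-a_{ij})_{ij}\) and \({\mathbb {B}} = (-b_{ij})_{ij}\).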

Lemma 2

The pair \(({\mathcal {M}}, d_h)\) is a compact metric space. The function \(E_h\) is continuous with respect to \(d_h\). For every \(1 \le i \le N\) and \(n \in {\mathbf {N}}\) define \(\chi _i^n = {\mathbf {1}}_{\Omega ^n_i}\), where \(\Omega _1^n, . . . , \Omega _N^n\) are obtained from \(\Omega _1^{n-1}, . . . , \Omega _N^{n-1}\) by the thresholding scheme. Then \(\chi ^n\) minimizes

$$\begin{aligned} \frac{1}{2h}d_h^2(u, \chi ^{n-1}) + E_h(u)\ \text {among all}\ u \in {\mathcal {M}}. \end{aligned}$$
(29)

Proof

For \(u, v \in {\mathcal {M}}\) definition (28) and the fact that \({\mathbb {A}}\) and \({\mathbb {B}}\) are positive definite imply that \(d_h\) is a distance on \({\mathcal {M}}\). The fact that \(({\mathcal {M}}, d_h)\) is compact and \(E_h\) is continuous is just a consequence of the fact that \(d_h\) metrizes the weak convergence in \(L^2\) on \({\mathcal {M}}\), the interested reader may find the details of the reasoning in [21]. We are thus left with showing that \(\chi ^n\) satisfies (29). For \(u, v \in L^2([0,1)^d)\) define

$$\begin{aligned} (u,v) = \frac{1}{\sqrt{h}}\sum _{i,j}\int u_i K^{h}_{ij}*v_j dx, \end{aligned}$$

then by the symmetry (20) of the Gaussian kernel and by the symmetry of both matrices \({\mathbb {A}}, {\mathbb {B}}\) it is not hard to show that \((\cdot , \cdot )\) is symmetric. In particular we can write for any \(u \in {\mathcal {M}}\)

$$\begin{aligned} \frac{1}{2h}d_h^2(u,\chi ^{n-1}) + E_h(u)&= -E_h(u-\chi ^{n-1}) + E_h(u) \\&=-(u-\chi ^{n-1}, u-\chi ^{n-1}) + (u,u) \\&=2(\chi ^{n-1}, u) - (\chi ^{n-1}, \chi ^{n-1}). \end{aligned}$$

Thus (29) is equivalent to the fact that \(\chi ^n\) minimizes \((\chi ^{n-1}, u)\) among all \(u \in {\mathcal {M}}\). Since, by the compact rewriting of the scheme in terms of the kernels \(K^h_{ij}\),

$$\begin{aligned} (\chi ^{n-1}, u) = \frac{1}{\sqrt{h}}\int \sum _{i} u_i \psi ^{n-1}_i dx, \end{aligned}$$

we see that \(\chi ^{n}\), which at (almost) every point puts all of the mass on a phase with the smallest value of \(\psi ^{n-1}\), minimizes the integrand pointwise, and thus it is a minimizer for the functional.\(\square \)

The previous lemma allows us to apply the general theory of gradient flows in [2] to this particular problem. We record the key statement for our purposes in the following lemma, which will be applied to \(({\mathcal {M}}, d_h)\), where \(d_h\) is the metric (28).

Lemma 3

Let \(({\mathcal {M}},d)\) be a compact metric space and \(E:{\mathcal {M}} \rightarrow {\mathbf {R}}\) be continuous. Given \(\chi ^0 \in {\mathcal {M}}\) and \(h > 0\) consider a sequence \(\{\chi ^n\}_{n\in {\mathbf {N}}}\) satisfying

$$\begin{aligned} \chi ^n\ \text {minimizes}\ \frac{1}{2h}d^2(u, \chi ^{n-1})+ E(u)\ \text {among all}\ u\in {\mathcal {M}}. \end{aligned}$$
(30)

Then we have for all \(t \in {\mathbf {N}}h\)

$$\begin{aligned} \begin{aligned} E(\chi (t)) + \frac{1}{2}\int _0^t\left( \frac{1}{h^2}d^2(\chi (s+h),\chi (s)) +|\partial E|^2(u(s))\right) ds \le E(\chi ^0). \end{aligned} \end{aligned}$$
(31)

Here \(\chi (t)\) is the piecewise constant interpolation, and u(t) is the so-called variational interpolation, which for \(n \in {\mathbf {N}}\) and \(t \in ((n-1)h, nh]\) is defined by

$$\begin{aligned} u(t) \in {\text {argmin}}_{u \in {\mathcal {M}}}\left\{ E(u) + \frac{d^2(u, \chi ^{n-1})}{2(t-(n-1)h)} \right\} , \end{aligned}$$
(32)

and \(|\partial E|(u)\) is the metric slope defined by

$$\begin{aligned} |\partial E|(u) := \limsup _{d(u,v) \rightarrow 0} \frac{(E(u) - E(v))_+}{d(u,v)} \in [0,\infty ]. \end{aligned}$$
(33)

Moreover, the variational interpolation u(t) satisfies

$$\begin{aligned} \int _0^{\infty }\frac{1}{2h^2}d^2(u(t),\chi (t))dt&\le E(\chi ^0), \end{aligned}$$
(34)
$$\begin{aligned} E(u(t))&\le E(\chi (t))\ \text {for all}\ t\ge 0. \end{aligned}$$
(35)

3 Statement of results

Our main result is the convergence of the modified thresholding scheme to a weak notion of multiphase mean curvature flow. More precisely, given an initial partition \(\{\Omega _1^0, . . . , \Omega _N^0\}\) of \([0,1)^d\) encoded by \(\chi ^0: [0,1)^d \rightarrow \{0,1\}^{N}\) such that \(\sum _i \chi _i^0 = 1\), define \(\chi ^h: [0,1)^d \times {\mathbf {R}} \rightarrow \{0,1\}^N\) by setting

$$\begin{aligned} \begin{aligned}&\chi ^h(t,x) = \chi ^0(x)\ \text {for}\ t < h, \\&\chi ^h(t,x) = \chi ^n(x)\ \text {for}\ t \in [nh, (n+1)h)\ \text {for}\ n \in {\mathbf {N}}. \end{aligned} \end{aligned}$$
(36)

If \(\chi ^0\) is a function of bounded variation, we denote by \(\Sigma _{ij}^0 := \partial ^*\Omega _i^0 \cap \partial ^*\Omega _j^0\). Our main result is contained in the following theorem.

Theorem 1

Given \(\chi ^0 \in {\mathcal {A}}\) such that \(\nabla \chi ^0\) is a bounded measure and a sequence \(h\downarrow 0\), let \(\chi ^h\) be defined by (36). Assume that there exists \(\chi : [0,1)^d \times (0,T) \rightarrow [0,1]^N\) such that

$$\begin{aligned} \chi ^h\rightharpoonup \chi \ \text {in}\ L^1([0,1)^d \times (0,T)). \end{aligned}$$
(37)

Then \(\chi \in \{0,1\}^N\) almost everywhere, \(\sum _{i}\chi _i = 1\) and \(\chi \in L^1((0,T), BV([0,1)^d))^N\).

If we assume that

$$\begin{aligned} \limsup _{h \downarrow 0} \int _0^T E_h(\chi ^h(t))dt \le \sum _{i,j} \sigma _{ij}\int _0^T {\mathcal {H}}^{d-1}(\Sigma _{ij}(t))dt, \end{aligned}$$
(38)

then \(\chi \) is a De Giorgi solution in the sense of Definition 1 below.

The convergence assumption (38) is motivated by a similar assumption on the implicit time discretization in the seminal paper [25] by Luckhaus and Sturzenhecker, and has also appeared in previous work in the context of the thresholding scheme [19,20,21]. As of now, this assumption can be verified only in particular cases, such as before the first singularity [36] or for certain types of singularities, namely mean convex ones, meaning \(H>0\). This was shown for the implicit time discretization in [6] and a proof in the case of the thresholding scheme will appear in a forthcoming work by Fuchs and the first author.

Inspired by the general framework [2] and [34], generalizing the previous two-phase version [21], we propose the following definition for weak solutions in the case of multiphase mean curvature flow.

Definition 1

Given \(\chi ^0 \in {\mathcal {A}}\) such that \(\nabla \chi ^0\) is a bounded measure, a map \(\chi : [0,1)^d \times (0,T) \rightarrow \{0,1\}^N\) such that \(\sum _{i}\chi _i = 1\) and \(\chi \in L^1((0,T), BV([0,1)^d))^N\) is called a De Giorgi solution to the multiphase mean curvature flow with surface tensions \(\sigma _{ij}\) and mobilities \(\mu _{ij}\) provided the following three facts hold:

  1. (1)

    There exist \(H_{ij} \in L^2({\mathcal {H}}^{d-1}_{|\Sigma _{ij}}(dx)dt)\) which are mean curvatures in the weak sense, i.e., such that for any test vector field \(\xi \in C^{\infty }_c([0,1)^d\times (0,T))^d\)

    $$\begin{aligned}&\sum _{i,j}\sigma _{ij}\int _{[0,1)^d\times (0,T)} (\nabla \cdot \xi - \nu _{ij}\cdot \nabla \xi \nu _{ij}){\mathcal {H}}^{d-1}_{|\Sigma _{ij}(t)}(dx)dt \nonumber \\&\quad = -\sum _{i,j}\sigma _{ij}\int _{[0,1)^d\times (0,T)}H_{ij}\nu _{ij}\cdot \xi {\mathcal {H}}^{d-1}_{|\Sigma _{ij}(t)}(dx)dt. \end{aligned}$$
    (39)
  2. (2)

    There exist normal velocities \(V_{ij} \in L^2({\mathcal {H}}^{d-1}_{|\Sigma _{ij}(t)}(dx)dt)\) with

    $$\begin{aligned} \begin{aligned}&\int _{[0,1)^d} \eta (t=0)\chi _i^0 dx + \int _{[0,1)^d\times (0,T)} \partial _t\eta \ \chi _i\ dxdt \\&\quad +\sum _{k\ne i}\int _{[0,1)^d\times (0,T)} \eta V_{ik}\ {\mathcal {H}}^{d-1}_{|\Sigma _{ik}(t)}(dx)dt = 0 \end{aligned} \end{aligned}$$

    for all \(\eta \in C^{\infty }_c([0,1)^d\times [0,T))\).

  3. (3)

    De Giorgi’s inequality is satisfied, i.e.,

    $$\begin{aligned} \begin{aligned}&\limsup _{\tau \downarrow 0}\frac{1}{\tau }\sum _{i,j}\sigma _{ij}\int _{(T-\tau , T)}{\mathcal {H}}^{d-1}(\Sigma _{ij}(t))dt \\&\quad + \frac{1}{2}\sum _{i,j}\int _{[0,1)^d\times (0,T)} \left( \frac{V_{ij}^2}{\mu _{ij}} + \mu _{ij}\sigma _{ij}^2H_{ij}^2 \right) {\mathcal {H}}^{d-1}_{|\Sigma _{ij}(t)}(dx)dt \le \sum _{i,j}\sigma _{ij}{\mathcal {H}}^{d-1}(\Sigma ^0_{ij}). \end{aligned} \end{aligned}$$
    (40)

Remark 1

Observe that inequality (40) together with the definition of the weak mean curvatures gives a notion of weak solution for the multiphase mean curvature flow incorporating both the dynamics \(V_{ij} = -\sigma _{ij}\mu _{ij}H_{ij}\) and the Herring angle condition at triple junctions. Indeed if \(\chi : [0,1)^d \times (0,T) \rightarrow \{0,1\}^N\) with \(\sum _{i} \chi _i(t) = 1\) is such that the sets \(\Omega _i(t) = \{\chi _i(\cdot , t) = 1\}\) meet along smooth interfaces \(\Sigma _{ij} := \partial \Omega _i \cap \partial \Omega _j\) which evolve smoothly and satisfy (39), (40) then

  1. (1)

    The Herring angle condition at triple junctions is satisfied. Indeed by the divergence theorem on surfaces (see Theorem 11.8 and Remark 11.42 in [26]) for any \(\xi \in C^{\infty }_c([0,1)^d)^d\)

    $$\begin{aligned} \int _{\Sigma _{ij}(t)} (\nabla \cdot \xi - \nu _{ij}\cdot \nabla \xi \nu _{ij})\ {\mathcal {H}}^{d-1}_{|\Sigma _{ij}(t)}(dx) =&-\int _{\Sigma _{ij}(t)} H_{ij}\nu _{ij}\cdot \xi \,{\mathcal {H}}^{d-1}_{|\Sigma _{ij}(t)}(dx) \\&+ \int _{\partial \Sigma _{ij}(t)} \xi \cdot J\nu _{ij} {\mathcal {H}}^{d-2}(dx), \end{aligned}$$

    where J denotes the rotation by ninety degrees in the normal plane to the triple junction \(\partial \Sigma _{ij}(t)\). Thus (39) and \(H_{ij} \in L^2({\mathcal {H}}^{d-1}_{|\Sigma _{ij}(t)}(dx)dt)\) imply that

    $$\begin{aligned}&\sigma _{i_1i_2}\int _{\partial \Sigma _{i_1i_2}(t)} \xi \cdot J\nu _{i_1i_2} {\mathcal {H}}^{d-2}(dx) \\&\quad + \sigma _{i_2i_3}\int _{\partial \Sigma _{i_2i_3}(t)} \xi \cdot J\nu _{i_2i_3} {\mathcal {H}}^{d-2}(dx) \\&\quad + \sigma _{i_3i_1}\int _{\partial \Sigma _{i_3i_1}(t)} \xi \cdot J\nu _{i_3i_1} {\mathcal {H}}^{d-2}(dx) = 0, \end{aligned}$$

    which forces \(\sigma _{i_1i_2}\nu _{i_1i_2} + \sigma _{i_2i_3}\nu _{i_2i_3} + \sigma _{i_3i_1}\nu _{i_3i_1} = 0\) at triple junctions.

  2. (2)

    We have \(V_{ij} = -\sigma _{ij}\mu _{ij}H_{ij}\) on \(\Sigma _{ij}(t)\). Indeed in the smooth case inequality (40) reduces to

    $$\begin{aligned}&\sum _{i,j}\sigma _{ij}\int _{(0, T)}\frac{d}{dt}{\mathcal {H}}^{d-1}(\Sigma _{ij}(t))dt \\&\quad + \frac{1}{2}\sum _{i,j}\int _{[0,1)^d\times (0,T)} \left( \frac{V_{ij}^2}{\mu _{ij}} + \mu _{ij}\sigma _{ij}^2H_{ij}^2 \right) {\mathcal {H}}^{d-1}_{|\Sigma _{ij}(t)}(dx)dt \le 0. \end{aligned}$$

    Using the Herring angle condition we have

    $$\begin{aligned} \sum _{i,j}\sigma _{ij}\frac{d}{dt}{\mathcal {H}}^{d-1}(\Sigma _{ij}(t)) = \sum _{i,j} \sigma _{ij}\int _{[0,1)^d} V_{ij}H_{ij} {\mathcal {H}}^{d-1}_{|\Sigma _{ij}(t)}(dx) \end{aligned}$$

    and after completing the square we arrive at

    $$\begin{aligned} \sum _{i,j}\sigma _{ij}\int _{[0,1)^d\times (0,T)} \left( \frac{V_{ij}}{\sqrt{\mu _{ij}\sigma _{ij}}} + \sqrt{\mu _{ij}\sigma _{ij}}H_{ij} \right) ^2{\mathcal {H}}^{d-1}_{|\Sigma _{ij}(t)}(dx)dt \le 0, \end{aligned}$$

    which implies \(V_{ij} = -\sigma _{ij}\mu _{ij}H_{ij}\).

The following lemma establishes, in addition to a compactness statement, that our convergence can be localized in the space and time variables x and t, and also in the variable z appearing in the convolution.

Lemma 4

We have the following:

  1. (i)

    Let \(\{\chi ^h\}_{h\downarrow 0}\) be a sequence of \(\{0,1\}^N\)-valued functions on \((0,T) \times [0,1)^d\) that satisfies \(\chi ^h \in {\mathcal {A}}\) for a.e. t and

    $$\begin{aligned} \limsup _{h\downarrow 0} \left( \mathop {{\text {esssup}}}\limits _{t \in (0,T)} E_h(\chi ^h(t)) + \int _0^T \frac{1}{2h^2}d_h^2(\chi ^h(t), \chi ^h(t-h))dt\right) < \infty \end{aligned}$$
    (41)

    and that is piecewise constant in time in the sense of (36). Such a sequence is precompact in \(L^1([0,1)^d \times (0,T))^N\) and any weak limit \(\chi \) is such that \(\chi \in L^1((0,T), BV([0,1)^d))^N\) with

    $$\begin{aligned} \sum _{i,j} \sigma _{ij}\int _0^T{\mathcal {H}}^{d-1}(\Sigma _{ij}(t))dt \le \liminf _{h \downarrow 0} \int _0^T E_h(\chi ^h(t))dt. \end{aligned}$$
    (42)
  2. (ii)

    Assume that \(u^h\) is a sequence of \([0,1]^N\)-valued functions with \(\sum _i u_i^h = 1\) such that (38) holds (with \(\chi ^h\) replaced by \(u^h\)) and such that \(u^h \rightarrow \chi \) in \(L^1([0,1)^d\times (0,T))^N\) holds. Assume also that

    $$\begin{aligned} \limsup _{h\downarrow 0}\mathop {{\text {esssup}}}\limits _{t \in (0,T)} E_h(u^h(t)) < \infty . \end{aligned}$$
    (43)

    Then as measures on \({\mathbf {R}}^d \times [0,1)^d \times (0,T)\) we have the following weak convergences for any \(i \ne j\)

    $$\begin{aligned}&\frac{K_{ij}(z)}{\sqrt{h}}u_i^h(x,t)u_j^h(x-\sqrt{h}z,t)dxdtdz \nonumber \\&\quad \rightharpoonup K_{ij}(z)(\nu _{ij}(x,t)\cdot z)_+{\mathcal {H}}^{d-1}_{|\Sigma _{ij}(t)}(dx)dtdz. \end{aligned}$$
    (44)
    $$\begin{aligned}&\frac{K_{ij}(z)}{\sqrt{h}}u_i^h(x-\sqrt{h}z,t)u_j^h(x,t)dxdtdz \nonumber \\&\quad \rightharpoonup K_{ij}(z)(\nu _{ij}(x,t)\cdot z)_-{\mathcal {H}}^{d-1}_{|\Sigma _{ij}(t)}(dx)dtdz. \end{aligned}$$
    (45)

    Here \(\nu _{ij}(\cdot , t)\) denotes the outer measure theoretic unit normal of \(\Omega _i(t)\) restricted to the interface \(\Sigma _{ij}(t)\). Moreover, the convergences may also be tested with continuous functions which have polynomial growth in \(z \in {\mathbf {R}}^d\).

The next proposition is the main ingredient in the proof of Theorem 1. It establishes the sharp lower bound on the distance-term.

Proposition 1

Suppose that (37) and the conclusion of Lemma 4 (ii) hold. Assume also that the left hand side of (47) is finite. Then for every \(1 \le k \le N\) there exists \(V_k \in L^2(|\nabla \chi _k|dt)\) such that

$$\begin{aligned} \partial _t\chi _k = V_k|\nabla \chi _k|dt \end{aligned}$$
(46)

in the sense of distributions. Given \(i \ne j\), it holds that \(V_{i}(x,t) = - V_{j}(x,t)\) on \(\Sigma _{ij}(t)\) and if we define \(V_{ij}(x,t) := V_i(x, t)|_{\Sigma _{ij}(t)}\) then we have

$$\begin{aligned} \liminf _{h \downarrow 0} \int _{0}^T \frac{1}{h^2}d_h^2(\chi ^h(t), \chi ^h(t-h))dt \ge \sum _{i,j}\frac{1}{\mu _{ij}} \int _0^T\int _{\Sigma _{ij}(t)} |V_{ij}(x,t)|^2{\mathcal {H}}^{d-1}(dx)dt. \end{aligned}$$
(47)

The final ingredient is the analogous sharp lower bound for the metric slope.

Proposition 2

Suppose that the conclusion of Lemma 4 (ii) holds and that (37) holds with \(\chi ^h\) replaced by \(u^h\). Then for any \(i \ne j\) there exists a mean curvature \(H_{ij} \in L^2({\mathcal {H}}^{d-1}_{\Sigma _{ij}(t)}(dx)dt)\) in the sense of (39). Moreover the following inequality is true:

$$\begin{aligned} \liminf _{h\downarrow 0} \int _0^T |\partial E_h|^2(u^h(t)) dt \ge \sum _{i,j} \mu _{ij}\sigma _{ij}^2\int _0^T\int _{\Sigma _{ij}(t)}|H_{ij}(x,t)|^2{\mathcal {H}}^{d-1}(dx)dt. \end{aligned}$$
(48)

We will present the proofs of Theorem 1, Lemma 4, Proposition 1 and Proposition 2 in Sect. 5. Before doing that, we need a simple geometric measure theory construction.

4 Construction of suitable partitions of unity

In the sequel we will frequently want to localize on one of the interfaces. To do so, we need to construct a suitable family of balls on which the behavior of the flow is split into two majority phases and several minority phases. Hereafter we will ignore the time variable and consider a map \(\chi : [0, 1)^d \rightarrow \{0,1\}^N\) such that \(\chi \in BV([0,1)^d, {\mathbf {R}}^N)\), \(\sum _k \chi _k = 1\). Given \(1 \le i < j \le N\) we denote by \(\partial ^*\Omega _i\) the reduced boundary of the set \(\{\chi _i = 1\}\) and by \(\Sigma _{ij} = \partial ^*\Omega _i \cap \partial ^*\Omega _j\) the interface between phase i and phase j. Given a real number \(r > 0\) and a natural number \(n \in {\mathbf {N}}\) we define

$$\begin{aligned} {\mathcal {F}}_n^r:=\left\{ B(x, nr\sqrt{d}):\ x \in r{\mathbf {Z}}^d \cap [0, 1)^d \right\} \end{aligned}$$
(49)

where the balls appearing in the definition are intended to be open. Observe that for any \(n \ge 2\) and any \(r > 0\) the collection of balls in \({\mathcal {F}}_n^r\) is a covering of \([0,1)^d\) with the property that any point \(x \in [0,1)^d\) lies in at most c(n, d) distinct balls belonging to \({\mathcal {F}}^r_n\), where \(0 < c(n,d) \le (2n)^d\) is a constant that depends on n and d but not on r. Given numbers \(1 \le l \ne p \le N\) we define

$$\begin{aligned} {\mathcal {E}}^r := \left\{ B \in {\mathcal {F}}^r_2:\ B \cap \Sigma _{lp} \ne \emptyset ,\ \frac{{\mathcal {H}}^{d-1}(\Sigma _{ij} \cap 2B)}{\omega _{d-1}(4r)^{d-1}} \le \frac{1}{2^d},\ \{i,j\} \ne \{l,p\} \right\} . \end{aligned}$$
(50)

Here 2B denotes the ball with center given by the center of B and twice its radius. Given l, p as above, denote by \(\{B_{m}^r\}\) an enumeration of \({\mathcal {E}}^r\) and by \(\{\rho _{m}\}\) a smooth partition of unity subordinate to \(\{B_{m}^r\}\). Then the following result holds true (for a proof, see the “Appendix”).

Lemma 5

Fix \(1 \le l \ne p \le N\). With the above construction the following two properties hold.

  1. (i)

    For any \(1 \le i \ne j \le N\), \(\{i,j\} \ne \{l,p\}\) and any \(\eta \in L^1({\mathcal {H}}^{d-1}_{|\Sigma _{ij}})\)

    $$\begin{aligned} \lim _{r\downarrow 0} \sum _{m} \int _{B_{m}^r} \eta {\mathcal {H}}^{d-1}_{|\Sigma _{ij}}(dx) = 0. \end{aligned}$$
    (51)
  2. (ii)

    For any \(\eta \in L^1({\mathcal {H}}^{d-1}_{|\Sigma _{lp}})\)

    $$\begin{aligned} \lim _{r\downarrow 0} \sum _{m} \int \rho _{m} \eta {\mathcal {H}}^{d-1}_{|\Sigma _{lp}}(dx) = \int \eta {\mathcal {H}}^{d-1}_{|\Sigma _{lp}}(dx). \end{aligned}$$
    (52)

5 Proofs

5.1 Proof of Theorem 1

Proof

By Lemma 2, we can apply Lemma 3 on the metric space \(({\mathcal {M}}, d_h)\) so that we get inequality (31) with \((E,d,\chi ,u) = (E_h, d_h, \chi ^h, u^h)\). Our first observation is that

$$\begin{aligned} \lim _{h\downarrow 0} E_h(\chi ^0) = \sum _{i,j}\sigma _{ij}{\mathcal {H}}^{d-1}(\Sigma _{ij}^0), \end{aligned}$$
(53)

which follows from the consistency, cf. Lemma 7 in the “Appendix”. Inequality (31) then yields that the sequence \(\chi ^h\) satisfies (41), so that Lemma 4 (i) applies to get that \(\chi \in L^1((0,T), BV([0,1)^d))^N\), \(\chi \in \{0,1\}^N\) a.e., \(\sum _{i} \chi _i = 1\) and, after extracting a subsequence, \(\chi ^h \rightarrow \chi \) in \(L^2([0,1)^d \times (0,T))^N\). We claim that this implies \(u^h \rightarrow \chi \) in \(L^2([0,1)^d \times (0,T))^N\). To see this, observe that (34) implies

$$\begin{aligned} \begin{aligned} hE_h(\chi ^0)&\ge -\int _0^T E_h(u^h(t) - \chi ^h(t))dt \\&\ge C \frac{1}{\sqrt{h}}\sum _{i=1}^N\left( \int |G_{\gamma }^{h/2}*(u_i^h - \chi _i^h)|^2 dxdt + \int |G_{\beta }^{h/2}*(u_i^h - \chi _i^h)|^2 dxdt \right) \end{aligned} \end{aligned}$$
(54)

where C is a constant which depends on \(N, {\mathbb {A}}, {\mathbb {B}}\) but not on h and comes from the fact that all norms on \((1, . . . , 1)^{\perp }\) are comparable. Inequality (54) clearly implies that \(K^h*u^h - K^h*\chi ^h\) converges to zero in \(L^2\). Observe that inequality (35) in particular yields (43). Recalling (158) in the “Appendix”, we learn that \(u^h - \chi ^h\) converges to zero in \(L^2\). This implies that we can apply Lemma 4 (ii) both to the sequence \(u^h\) and the sequence \(\chi ^h\). In particular, we may apply Proposition 1 for \(\chi ^h\) and Proposition 2 for \(u^h\). Now the proof follows the same strategy as the one in the two-phase case in [21]. For the sake of completeness, we sketch the argument here. First of all, Lemma 3 gives inequality (31) for \((E_h, d_h, \chi ^h, u^h)\), namely for \(n \in {\mathbf {N}}\)

$$\begin{aligned} \rho (nh) \le E_h(\chi ^0), \end{aligned}$$
(55)

where we set \(\rho (t) = E_h(\chi ^h(t)) + \frac{1}{2}\int _0^t \left( \frac{1}{h^2} d_h^2(\chi ^h(s+h), \chi ^h(s)) + |\partial E_h(u^h(s))|^2\right) ds\). Multiplying (55) by \(\eta (nh) - \eta ((n+1)h)\) for some non-increasing function \(\eta \in C_c([0,T))\) and summing over \(n\), we get \(-\int \frac{d\eta }{dt} \rho dt \le (\eta (0) + h \sup \left| \frac{d\eta }{dt} \right| )E_h(\chi ^0)\). As test function \(\eta \), we now choose \(\eta (t) = \max \{\min \{\frac{T-t}{\tau }, 1\}, 0\}\) and obtain

$$\begin{aligned}&\frac{1}{\tau }\int _{T-\tau }^T E_h(\chi ^h(t))dt \nonumber \\&\quad + \frac{1}{2}\int _0^{T-\tau } \left( \frac{1}{h^2}d_h^2(\chi ^h(t), \chi ^h(t-h)) + |\partial E_h(u^h(t))|^2\right) dt \le (1 + \frac{h}{\tau })E_h(\chi ^0). \end{aligned}$$
(56)
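
For completeness, one way to obtain the intermediate estimate \(-\int \frac{d\eta }{dt} \rho \, dt \le (\eta (0) + h \sup |\frac{d\eta }{dt}|)E_h(\chi ^0)\) used above is the following. Testing (29) with \(u = \chi ^{n-1}\) shows that \(E_h(\chi ^n)\) is non-increasing in n, and since the integrand defining \(\rho \) is non-negative, we have \(\rho (t) \le \rho ((n+1)h) + E_h(\chi ^n) - E_h(\chi ^{n+1})\) for \(t \in [nh, (n+1)h)\). Hence, as \(\eta \) is non-increasing,

$$\begin{aligned} -\int \frac{d\eta }{dt} \rho \, dt&\le \sum _{n \ge 0}\big (\eta (nh) - \eta ((n+1)h)\big )\Big (\rho ((n+1)h) + E_h(\chi ^n) - E_h(\chi ^{n+1})\Big ) \\&\le \eta (0)E_h(\chi ^0) + h \sup \Big |\frac{d\eta }{dt}\Big |\, E_h(\chi ^0), \end{aligned}$$

where in the last step we used (55), \(\eta (nh) - \eta ((n+1)h) \le h \sup |\frac{d\eta }{dt}|\), the telescoping structure of both sums, and \(E_h \ge 0\).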

Now it remains to pass to the limit as \(h \downarrow 0\): to get (40) from inequality (56) one uses the lower semicontinuity (42) for the first left hand side term, the sharp bound (47) for the second left hand side term, the bound (48) for the last left hand side term, and finally the consistency Lemma 7 in the “Appendix” to treat the right hand side term. To conclude (40), one then passes to the limit \(\tau \downarrow 0\). \(\square \)

5.2 Proof of Lemma 4

Proof

Argument for (i) For the compactness, the arguments in [21] adapt to this setting with minor changes. The first observation is that, by inequality (158) in the “Appendix”, one needs to prove compactness in \(L^2([0,1)^d \times (0,T))^N\) of \(\{K^h * \chi ^h\}_{h\downarrow 0}\). For this, one just needs a modulus of continuity in time. I.e. it is sufficient to prove that there exists a constant \(C > 0\) independent of h such that \(I_h(s) \le C\sqrt{s}\), where

$$\begin{aligned} I_h(s) = \int _{(s,T) \times [0,1)^d} |\chi ^h(x,t) - \chi ^h(x, t-s)|^2dxdt. \end{aligned}$$

This can be done by applying the argument in [21] word by word once we show the following: for any pair \(\chi , \chi ' \in {\mathcal {A}}\), we have

$$\begin{aligned} \int |\chi -\chi '|dx \le \frac{C}{\sqrt{h}}d_h^2(\chi , \chi ') + C\sqrt{h}\left( E_h(\chi ) + E_h(\chi ')\right) . \end{aligned}$$
(57)

Here the constant C depends on \(N, {\mathbb {A}}, {\mathbb {B}}\) but not on h.

To prove (57) we proceed as follows: let \({\mathbb {S}} \in {\mathbf {R}}^{N\times N}\) be a symmetric matrix which is positive definite on \((1, . . . , 1)^{\perp }\). Since any two norms on a finite dimensional space are comparable, there exists a constant \(C>0\) depending on \({\mathbb {S}}\) and N such that

$$\begin{aligned} |\chi - \chi '| \le |\chi - \chi '|^2 \le C|\chi - \chi '|^2_{{\mathbb {S}}} \end{aligned}$$
(58)

where \(|\cdot |_{{\mathbb {S}}}\) denotes the norm induced by \({\mathbb {S}}\). For a function \(u \in {\mathcal {M}}\) write \(({\tilde{K}}^h *) u\) for the function

$$\begin{aligned} \left( ({\tilde{K}}^h *) u\right) _i = \sum _{j \ne i} K_{ij}^h * u_j. \end{aligned}$$

Then we calculate

$$\begin{aligned} \begin{aligned} |\chi -\chi '|^2_{{\mathbb {S}}} = -(\chi -\chi ')\cdot ({\tilde{K}}^h*)(\chi -\chi ') + (\chi -\chi ')\cdot ({\mathbb {S}} + ({\tilde{K}}^h*))(\chi -\chi '). \end{aligned} \end{aligned}$$
(59)

Select \({\mathbb {S}} = (s_{ij})\) where \(s_{ij} = -\int K_{ij}(z) dz = -(a_{ij} + b_{ij})\), i.e., \({\mathbb {S}} = {\mathbb {A}} + {\mathbb {B}}\). Then, by our assumption (14), \({\mathbb {S}}\) is positive definite on \((1, . . . , 1)^{\perp }\) and after integration on \([0,1)^d\) identity (59) becomes

$$\begin{aligned} \int |\chi -\chi '|^2_{{\mathbb {S}}}dx = \frac{1}{2\sqrt{h}}d_h^2(\chi , \chi ') + \int (\chi -\chi ')({\mathbb {S}} + ({\tilde{K}}^h*))(\chi -\chi ') dx. \end{aligned}$$

We now proceed to estimate the integral on the right hand side. By the choice of \({\mathbb {S}}\) and Jensen’s inequality we have

$$\begin{aligned}&\int (\chi -\chi ')({\mathbb {S}} + ({\tilde{K}}^h*))(\chi -\chi ') dx \le C\int |({\mathbb {S}} + ({\tilde{K}}^h*))(\chi -\chi ')| dx \nonumber \\&\quad \le C\sum _{i,j} \int K^h_{ij}(z)|(\chi _j-\chi _j')(x-z) - (\chi _j-\chi _j')(x)|dxdz. \end{aligned}$$
(60)

Using the triangle inequality and (156) in the “Appendix” we can estimate the right hand side to obtain the following inequality

$$\begin{aligned}&\int (\chi -\chi ')({\mathbb {S}} + ({\tilde{K}}^h*))(\chi -\chi ') dx\nonumber \\&\quad \le C\sum _{i,j} \left( \sum _{k \ne j}\int K^h_{ij}(z)\chi _j(x-z)\chi _k(x)dxdz\right. \nonumber \\&\qquad \qquad \quad \, \left. +\sum _{k \ne j}\int K^h_{ij}(z)\chi _j(x)\chi _k(x-z)dxdz\right. \nonumber \\&\qquad \qquad \quad \, \left. +\sum _{k \ne j}\int K^h_{ij}(z)\chi '_j(x-z)\chi '_k(x)dxdz\right. \nonumber \\&\qquad \qquad \quad \, \left. + \sum _{k \ne j}\int K^h_{ij}(z)\chi '_j(x)\chi '_k(x-z)dxdz\right) . \end{aligned}$$
(61)

Observing that there is a constant \(C>0\) such that \(K_{ij} \le CK_{jk}\) for all indices \(i, j, k\), we conclude that

$$\begin{aligned} \int (\chi -\chi ')({\mathbb {S}} + ({\tilde{K}}^h*))(\chi -\chi ') dx \le C \sqrt{h}\left( E_h(\chi ) + E_{h}(\chi ') \right) . \end{aligned}$$

This proves (57) and closes the argument for the compactness.

We also have to prove (42), but this follows from (44) with \(u^h\) replaced by \(\chi ^h\) once we have shown that the limit \(\chi \) is such that \(|\nabla \chi |\) is a bounded measure, equiintegrable in time. Indeed one can check from the proof of (44) that the lower bound of (44) does not require the extra assumption (38). Thus one gets that

$$\begin{aligned}&\liminf _{h\downarrow 0}\int _0^T E_h(\chi ^h(t))dt = \liminf _{h\downarrow 0} \sum _{i \ne j} \frac{1}{\sqrt{h}}\int _0^T \int _{[0,1)^d} \chi _i^h K_{ij}^h *\chi _j^h dx dt \\&\quad \ge \sum _{i \ne j}\liminf _{h\downarrow 0} \frac{1}{\sqrt{h}} \int _0^T \int _{[0,1)^d} \chi _i^h K_{ij}^h *\chi _j^h dx dt \\&\quad = \sum _{i \ne j}\liminf _{h\downarrow 0} \frac{1}{\sqrt{h}} \int _0^T \int _{[0,1)^d} \int _{{\mathbf {R}}^d}\chi _i^h(x,t) K_{ij}^h(z)\chi _j^h(x-z, t) dz dx dt \\&\quad \ge \sum _{i \ne j}\int _0^T \int _{[0,1)^d} \int _{{\mathbf {R}}^d}K_{ij}(z)(\nu _{ij} \cdot z)_{+} dz {\mathcal {H}}^{d-1}_{|\Sigma _{ij}(t)}(dx) dt \\&\quad = \sum _{i \ne j} \sigma _{ij} \int _{0}^T {\mathcal {H}}^{d-1}(\Sigma _{ij}(t)) dt, \end{aligned}$$

where in the last two lines we used (44) and the definition of \(\sigma _{ij}\). To prove that the limit \(\chi \) is such that \(|\nabla \chi |\) is a bounded measure, equiintegrable in time, one can proceed with an argument similar to the one used in [21] for the two-phase case. Observe that this only requires the weaker assumption (43).

Argument for (ii) As mentioned in the previous paragraph, we already know that the limit \(\chi \) is such that \(|\nabla \chi |\) is a bounded measure, equiintegrable in time. We will prove (44). Then (45) easily follows by recalling that \(\nu _{ij} = - \nu _{ji}\). A standard argument (to be found in [21]) which relies on the exponential decay of the kernel yields the fact that we can test the convergence (44) with functions with at most polynomial growth in z provided we already have the result for bounded and continuous test functions; thus we focus on this case.

Let \(\xi \in C_b({\mathbf {R}}^d \times [0,1)^d \times (0,T))\) be a bounded and continuous function. To show (44) we aim at showing that

$$\begin{aligned} \begin{aligned}&\lim _{h\downarrow 0}\int \xi (z,x,t)\frac{K_{ij}(z)}{\sqrt{h}}u_i^h(x,t)u_j^h(x-\sqrt{h}z,t)dxdtdz\\&\quad = \int \xi (z,x,t)K_{ij}(z)(\nu _{ij}(x,t)\cdot z)_+ {\mathcal {H}}^{d-1}_{|\Sigma _{ij}(t)}(dx)dtdz. \end{aligned} \end{aligned}$$
(62)

Upon splitting \(\xi \) into the positive and the negative part, by linearity we may assume that \(0 \le \xi \le 1\). We can split (62) into the local lower bound

$$\begin{aligned} \begin{aligned}&\liminf _{h\downarrow 0}\int \xi (z,x,t)\frac{K_{ij}(z)}{\sqrt{h}}u_i^h(x,t)u_j^h(x-\sqrt{h}z,t)dzdxdt \\&\quad \ge \int \xi (z,x,t)K_{ij}(z)(\nu _{ij}(x,t)\cdot z)_+ {\mathcal {H}}^{d-1}_{|\Sigma _{ij}(t)}(dx)dtdz \end{aligned} \end{aligned}$$
(63)

and the global upper bound

$$\begin{aligned} \begin{aligned}&\limsup _{h\downarrow 0}\int \frac{K_{ij}(z)}{\sqrt{h}}u_i^h(x,t)u_j^h(x-\sqrt{h}z,t)dzdxdt \\&\quad \le \int K_{ij}(z)(\nu _{ij}(x,t)\cdot z)_+ {\mathcal {H}}^{d-1}_{\Sigma _{ij}(t)}(dx)dtdz. \end{aligned} \end{aligned}$$
(64)

Indeed we can recover the limsup inequality in (62) by splitting \(\xi = 1 - (1 - \xi )\), applying the local lower bound (63) to \(1-\xi \) and combining it with the global upper bound (64).

We first concentrate on the local lower bounds in the case where \(u^h = \chi \), namely we will show

$$\begin{aligned} \begin{aligned}&\liminf _{h\downarrow 0}\int \xi (z,x,t)\frac{K_{ij}(z)}{\sqrt{h}}\chi _i(x,t)\chi _j(x-\sqrt{h}z,t)dzdxdt \\&\quad \ge \int \xi (z,x,t)K_{ij}(z)(\nu _{ij}(x,t)\cdot z)_+ {\mathcal {H}}^{d-1}_{|\Sigma _{ij}(t)}(dx)dtdz. \end{aligned} \end{aligned}$$
(65)

By Fatou’s lemma the claim is reduced to showing that for a.e. point t in time and every \(z \in {\mathbf {R}}^d\)

$$\begin{aligned}&\liminf _{h\downarrow 0}\int \xi (z,x,t)\frac{K_{ij}(z)}{\sqrt{h}}\chi _i(x,t)\chi _j(x-\sqrt{h}z,t)dx \nonumber \\&\quad \ge \int \xi (z,x,t)K_{ij}(z)(\nu _{ij}(x,t)\cdot z)_+ {\mathcal {H}}^{d-1}_{|\Sigma _{ij}(t)}(dx). \end{aligned}$$
(66)

Fix a point t such that \(\chi (\cdot , t) \in BV([0,1)^d, \{0,1\}^N)\) and any \(z \in {\mathbf {R}}^d\). In the sequel, we will drop those variables, so \(\chi (x) = \chi (x,t)\), \(\xi (x) = \xi (z,x,t)\). By approximation we may assume that \(\xi \in C^{\infty }([0,1)^d)\). Let \(\rho _{mij}\) be a partition of unity obtained by applying the construction of Sect. 4 to the function \(\chi (x)\) on the interface \(\Sigma _{ij}\). Let \(\nu _i\) be the outer measure theoretic normal of \(\Omega _i(t)\). Then by Lemma 5 we have

$$\begin{aligned}&\int \xi (x) (\nu _{ij}(x) \cdot z)_+{\mathcal {H}}^{d-1}_{|\Sigma _{ij}}(dx) \nonumber \\&\quad = \lim _{r\downarrow 0} \left( \sum _{m \in {\mathbf {N}}} \int \rho _{mij}(x)\xi (x) (\nu _{ij}(x) \cdot z)_+{\mathcal {H}}^{d-1}_{|\Sigma _{ij}}(dx)\right) \nonumber \\&\quad = \lim _{r\downarrow 0} \sum _{m \in {\mathbf {N}}} \left( \int \rho _{mij}(x)\xi (x) (\nu _{i}(x) \cdot z)_+{\mathcal {H}}^{d-1}_{|\partial ^*\Omega _i}(dx)\right. \nonumber \\&\qquad \qquad \qquad \quad \, \left. - \sum _{k \ne i,j}\int \rho _{mij}(x)\xi (x) (\nu _{i}(x) \cdot z)_+{\mathcal {H}}^{d-1}_{|\Sigma _{ik}}(dx) \right) \nonumber \\&\quad = \lim _{r\downarrow 0} \sum _{m \in {\mathbf {N}}} \int \rho _{mij}(x)\xi (x) (\nu _{i}(x) \cdot z)_+{\mathcal {H}}^{d-1}_{|\partial ^*\Omega _i}(dx). \end{aligned}$$
(67)

We now focus on estimating the argument of the last limit. Observe that \((\nu _{i}(x) \cdot z)_+{\mathcal {H}}^{d-1}_{|\partial ^*\Omega _i}(dx) = (\partial _z \chi _i)_+\), thus by definition of positive part of a measure, given \(\epsilon > 0\) we can select, for any \(m \in {\mathbf {N}}\), a function \({\tilde{\xi }}_m \in C^1_c(B_{m})\) such that \(0 \le {\tilde{\xi }}_m \le 1\) and such that

$$\begin{aligned} \int \rho _{mij}\xi {\tilde{\xi }}_m \partial _z\chi _i + 2^{-m}\epsilon \ge \int \rho _{mij}\xi (\nu _i \cdot z)_+{\mathcal {H}}^{d-1}_{|\partial ^*\Omega _i}(dx). \end{aligned}$$
(68)

Let \(\eta _m := \rho _{mij}\xi {\tilde{\xi }}_m \in C^1_c(B_m)\), then

$$\begin{aligned} \int \eta _m\partial _z\chi _i&= -\int \partial _z\eta _m\chi _i dx\\&= \lim _{h\downarrow 0} \int \frac{\eta _m(x+\sqrt{h}z) - \eta _m(x)}{\sqrt{h}} \chi _i(x)dx\\&= \lim _{h\downarrow 0} \int \eta _m(x) \frac{\chi _i(x) - \chi _i(x-\sqrt{h}z)}{\sqrt{h}}dx. \end{aligned}$$

Using that \(\chi _i(x) - \chi _i(x-\sqrt{h}z) \le \chi _i(x)(1-\chi _i(x-\sqrt{h}z))\) (because \(\chi _i \in \{0, 1\}\)) and that \(1-\chi _i = \sum _{k \ne i} \chi _k\) we can estimate the last integral by

$$\begin{aligned}&\liminf _{h\downarrow 0} \sum _{k\ne i}\int \eta _m(x) \frac{\chi _i(x)\chi _k(x-\sqrt{h}z)}{\sqrt{h}}dx\\&\quad \le \liminf _{h\downarrow 0} \int \eta _m(x) \frac{\chi _i(x)\chi _j(x-\sqrt{h}z)}{\sqrt{h}}dx \\&\qquad + \limsup _{h\downarrow 0} \sum _{k\ne i,j}\int \eta _m(x) \frac{\chi _i(x)\chi _k(x-\sqrt{h}z)}{\sqrt{h}}dx \\&\quad \le \liminf _{h\downarrow 0} \int \eta _m(x) \frac{\chi _i(x)\chi _j(x-\sqrt{h}z)}{\sqrt{h}}dx \\&\qquad + \sum _{k\ne i,j}\limsup _{h\downarrow 0}\int \eta _m(x) \frac{\chi _i(x)\chi _k(x-\sqrt{h}z)}{\sqrt{h}}dx. \end{aligned}$$

Observe that for each \(m \in {\mathbf {N}}\), using also the consistency Lemma 7

$$\begin{aligned}&\limsup _{h\downarrow 0}\int \eta _m(x) \frac{\chi _i(x)\chi _k(x-\sqrt{h}z)}{\sqrt{h}}dx \\&\quad \le \limsup _{h\downarrow 0}\int \eta _m(x) \frac{\chi _i(x)\chi _k(x-\sqrt{h}z)+\chi _i(x-\sqrt{h}z)\chi _k(x)}{\sqrt{h}}dx \\&\quad = \int \eta _m(x) |\nu _{ik}(x) \cdot z| {\mathcal {H}}^{d-1}_{|\Sigma _{ik}}(dx) \\&\quad \le |z|{\mathcal {H}}^{d-1}(B_{mij}^r \cap \Sigma _{ik}). \end{aligned}$$

Thus we obtain

$$\begin{aligned} \begin{aligned} \int \eta _m\partial _z\chi _i \le&\, \liminf _{h\downarrow 0} \int \eta _m(x) \frac{\chi _i(x)\chi _j(x-\sqrt{h}z)}{\sqrt{h}}dx \\&+ \sum _{k\ne i,j}|z|{\mathcal {H}}^{d-1}(B_{mij}^r \cap \Sigma _{ik}). \end{aligned} \end{aligned}$$

Inserting back into (67), recalling also Lemma 5 and the inequality (68), using Fatou’s lemma, the fact that \(\rho _{mij}\) is a partition of unity and that \(0\le {\tilde{\xi }}_m \le 1\) we obtain that

$$\begin{aligned} \int \xi (x) (\nu _{ij}(x) \cdot z)_+{\mathcal {H}}^{d-1}_{|\Sigma _{ij}}(dx) \le \liminf _{h\downarrow 0} \int \xi (x) \frac{\chi _i(x)\chi _j(x-\sqrt{h}z)}{\sqrt{h}}dx + \epsilon \end{aligned}$$

and (65) follows letting \(\epsilon \) go to zero. To derive inequality (63) we just apply Lemma 9 in the “Appendix”.

To get the upper bound (64) we argue as follows. First of all, recall Assumption (38), which states that

$$\begin{aligned} \limsup _{h\downarrow 0}\int _0^TE_h(u^h(t))dt \le \int _0^T E(\chi (t))dt. \end{aligned}$$
(69)

Now, if we define

$$\begin{aligned} e^{ij}_h(u^h) = \frac{1}{\sqrt{h}}\int _0^T\int u_i^h(t) K_{ij}^h * u_j^h(t) dxdt, \end{aligned}$$

we have by (63) that \(\liminf _{h\downarrow 0} e_h^{ij}(u^h) \ge e^{ij}(\chi )\), where \(e^{ij}(\chi )\) is defined in the obvious way. Assume that there exists a pair \((i,j)\) such that \(\limsup _{h\downarrow 0} e_h^{ij}(u^h) > e^{ij}(\chi )\); then

$$\begin{aligned} \begin{aligned} \int _0^T E(\chi (t))dt&\ge \limsup _{h\downarrow 0} \int _0^TE_h(u^h(t))dt \\&\ge \sum _{(l,p) \ne (i,j)} \liminf _{h\downarrow 0} e_h^{lp}(u^h) + \limsup _{h\downarrow 0} e_h^{ij}(u^h) \\&> \int _0^T E(\chi (t))dt \end{aligned} \end{aligned}$$

which is a contradiction; here the second inequality uses the elementary superadditivity \(\limsup _{h\downarrow 0}(a_h + b_h) \ge \liminf _{h\downarrow 0} a_h + \limsup _{h\downarrow 0} b_h\). Thus we have proved (64). \(\square \)

5.3 Proof of Proposition 1

Proof

Since we assume that the left hand side of (47) is finite, in view of (28), upon passing to a subsequence we may assume that, in the sense of distributions, the limit

$$\begin{aligned} \lim _{h\downarrow 0} \frac{1}{h\sqrt{h}} \left( \left| G^{h/2}_{\gamma } * (\chi - \chi ( \cdot - h))\right| _{{\mathbb {A}}}^2 + \left| G^{h/2}_{\beta }*(\chi - \chi ( \cdot - h))\right| _{{\mathbb {B}}}^2 \right) = \omega \end{aligned}$$
(70)

exists as a finite positive measure on \([0,1)^d \times (0,T)\). Here \(\chi _l^h( \cdot - h)\) denotes the time shift of the function \(\chi _l^h\). We denote by \(\tau \) a small fraction of the characteristic spatial scale, namely \(\tau = \alpha \sqrt{h}\) for some \(\alpha > 0\), which we think of as a small number. Given \(1 \le l \le N\) we define

$$\begin{aligned} \delta \chi ^h_l := \chi ^h_l - \chi ^h_l( \cdot - \tau ). \end{aligned}$$
(71)

We divide the proof into two parts: first we show that the normal velocities exist, and afterwards we prove the sharp bound. Before doing so, let us state two distributional inequalities that will be used later:

  • In a distributional sense it holds that

    $$\begin{aligned} \limsup _{h\downarrow 0} -\frac{1}{\sqrt{h}}\sum _{i\ne j}\delta \chi _i K_{ij}^h *\delta \chi _j \le \alpha ^2\omega . \end{aligned}$$
    (72)
  • There exists a constant \(C> 0\) such that for any \(1 \le i \le N\) and any \(\theta \in \{\gamma , \beta \}\) in a distributional sense it holds that

    $$\begin{aligned} \limsup _{h\downarrow 0} \frac{1}{\sqrt{h}}(\chi _i - \chi _i(\cdot -\tau ))G_{\theta }^h * (\chi _i - \chi _i(\cdot -\tau )) \le C\alpha ^2\omega . \end{aligned}$$
    (73)

We observe that it suffices to prove (72); then (73) follows immediately. Indeed, recall that \({\mathbb {A}}\) and \({\mathbb {B}}\) are positive definite on \((1, \dots , 1)^{\perp }\). In particular there exists a constant \(C>0\) such that for any \(v \in (1, \dots , 1)^{\perp }\) one has \(|v|^2_{{\mathbb {A}}} + |v|_{{\mathbb {B}}}^2 \ge C|v|^2 \ge Cv_i^2\) for any \(i \in \{1, \dots , N\}\). Applying this to the vector \(v = G_\theta ^{h/2}*\delta \chi \), whose components \(v_i = G_\theta ^{h/2}*\delta \chi _i\) sum to zero because \(\sum _l \chi _l = 1\) (so that indeed \(v \in (1,\dots ,1)^{\perp }\)), one gets

$$\begin{aligned} |G_\theta ^{h/2}*\delta \chi _i|^2 \le \frac{1}{C}\left( |G_{\theta }^{h/2} *\delta \chi |_{{\mathbb {A}}}^2 + |G_{\theta }^{h/2} * \delta \chi |_{{\mathbb {B}}}^2\right) . \end{aligned}$$
(74)

The claim then follows from the definition of \(\omega \), (72), the symmetry (20) and the semigroup property (22). Indeed it is sufficient to check that, in the sense of distributions

$$\begin{aligned} \lim _{h\downarrow 0} \frac{1}{\sqrt{h}}\sum _{i \ne j} \delta \chi _i K_{ij}^h*\delta \chi _j +\frac{1}{\sqrt{h}}\left( |G_{\gamma }^{h/2} *\delta \chi |_{{\mathbb {A}}}^2 + |G_{\beta }^{h/2} * \delta \chi |_{{\mathbb {B}}}^2 \right) = 0. \end{aligned}$$
(75)

To this aim, pick a test function \(\xi \in C^{\infty }_c([0,1)^d\times (0,T))\). Spelling out the definition of the norms \(|\cdot |_{{\mathbb {A}}}\) and \(|\cdot |_{{\mathbb {B}}}\), the claim is proved once we show that

$$\begin{aligned} \lim _{h\downarrow 0} \frac{1}{\sqrt{h}}\sum _{i\ne j} a_{ij} \int \xi (\delta \chi _i G_{\gamma }^h*\delta \chi _j - G_{\gamma }^{h/2}*\delta \chi _i G_{\gamma }^{h/2}*\delta \chi _j)dxdt = 0, \end{aligned}$$
(76)

and the same claim with \(a_{ij}\), \(\gamma \) replaced by \(b_{ij}, \beta \) respectively.

We concentrate on (76). Clearly, we are done once we show that for any \(i \ne j\)

$$\begin{aligned} \lim _{h\downarrow 0} \frac{1}{\sqrt{h}}\int \xi (\delta \chi _i G_{\gamma }^h*\delta \chi _j - G_{\gamma }^{h/2}*\delta \chi _i G_{\gamma }^{h/2}*\delta \chi _j)dxdt = 0. \end{aligned}$$
(77)

To show this, using the semigroup property (22) we rewrite the argument of the limit as

$$\begin{aligned} -\frac{1}{\sqrt{h}}\int [\xi , G_{\gamma }^{h/2}*](\delta \chi _i)G_{\gamma }^{h/2}*\delta \chi _j dxdt, \end{aligned}$$
(78)

where \([\xi , G_{\gamma }^{h/2}*]\) denotes the commutator of multiplying by \(\xi \) and convolving with \(G_{\gamma }^{h/2}\), i.e.

$$\begin{aligned}{}[\xi , G_{\gamma }^{h/2}*](f) = \xi G_{\gamma }^{h/2}*f - G_{\gamma }^{h/2}*(\xi f), \end{aligned}$$
(79)
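
For concreteness, spelling out (79) gives the pointwise representation

$$\begin{aligned}{}[\xi , G_{\gamma }^{h/2}*](f)(x) = \int G_{\gamma }^{h/2}(z)\left( \xi (x) - \xi (x-z)\right) f(x-z)\, dz, \end{aligned}$$

so that the first inequality in (81) below is nothing but the Cauchy–Schwarz inequality with respect to the measure \(G_{\gamma }^{h/2}(z)\, dz\).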

for every function f for which this expression makes sense. We observe that by the boundedness of the measures \(\frac{1}{\sqrt{h}}|G_{\gamma }^{h/2}*\delta \chi |^2_{{\mathbb {A}}}\) it suffices to show

$$\begin{aligned} \lim _{h\downarrow 0} \frac{1}{\sqrt{h}}\int |[\xi , G_{\gamma }^{h/2}*](\delta \chi _i)|^2dxdt = 0. \end{aligned}$$
(80)

To prove this, spelling out the integrand, using the Cauchy–Schwarz inequality and recalling the scaling (21) we observe that

$$\begin{aligned}&\int |[\xi , G_{\gamma }^{h/2}*](\delta \chi _i)|^2dxdt \nonumber \\&\quad \le \int \left( \int |\xi (x,t) - \xi (x-z,t)|^2G_{\gamma }^{h/2}(z)dz\right) G_{\gamma }^{h/2}*|\delta \chi _i(x,t)|^2dxdt\nonumber \\&\quad \le \frac{h}{2} \sup |\nabla \xi |^2\int G_{\gamma }(z)|z|^2dz \int _0^T \int |\delta \chi _i(x,t)|^2dxdt. \end{aligned}$$
(81)

Observe that by the compactness of \(\chi ^h\) in \(L^2([0,1)^d\times (0,T))\), (81) is of order h, thus (80) indeed holds true.

Now we can turn to the proof of (72), which is essentially already contained in the paper [21]. For the convenience of the reader we sketch the main ideas here. One reduces the claim to proving the following facts:

$$\begin{aligned}&\lim _{h\downarrow 0} \frac{1}{\sqrt{h}}\sum _{ij}\delta \chi _i K^h_{ij}*\delta \chi _j - \frac{1}{\sqrt{h}}\left( \left| G_{\gamma } ^{h/2}* \delta \chi \right| _{{\mathbb {A}}}^2 + \left| G_{\beta }^{h/2}*\delta \chi \right| _{{\mathbb {B}}}^2\right) = 0, \end{aligned}$$
(82)
$$\begin{aligned}&\limsup _{h \downarrow 0} \frac{1}{\sqrt{h}}\left| G_{\gamma }^{h/2} * \delta \chi \right| _{{\mathbb {A}}}^2 - \alpha ^2 \frac{1}{h\sqrt{h}}\left| G_{\gamma }^{h/2} * (\chi - \chi ( \cdot - h))\right| _{{\mathbb {A}}}^2 \le 0, \end{aligned}$$
(83)
$$\begin{aligned}&\limsup _{h \downarrow 0} \frac{1}{\sqrt{h}}\left| G_{\beta }^{h/2}*\delta \chi \right| _{{\mathbb {B}}}^2 - \alpha ^2 \frac{1}{h\sqrt{h}} \left| G_{\beta }^{h/2}*(\chi - \chi ( \cdot - h))\right| _{{\mathbb {B}}}^2 \le 0. \end{aligned}$$
(84)

Claim (82) was proved in the previous paragraph, while (83) and (84) are consequences of Jensen’s inequality in the time variable for the convex functions \(|\cdot |_{{\mathbb {A}}}^2\) and \(|\cdot |_{{\mathbb {B}}}^2\), respectively. More precisely, assume without loss of generality that \(\tau = Nh\) for some \(N \in {\mathbf {N}}\) (with a slight abuse of notation, since N also denotes the number of phases); then, by a telescoping argument and Jensen’s inequality for \(|\cdot |_{{\mathbb {A}}}^2\), we get

$$\begin{aligned} \begin{aligned}&\frac{1}{\sqrt{h}}|G_{\gamma }^{h/2}*\delta \chi |_{{\mathbb {A}}}^2 \\&\quad \le N\sum _{n=0}^{N-1}\frac{1}{\sqrt{h}}|G_{\gamma }^{h/2}*(\chi ^h(\cdot -nh) - \chi ^h(\cdot -(n+1)h))|_{{\mathbb {A}}}^2. \end{aligned} \end{aligned}$$
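
Here we used the telescoping decomposition \(\delta \chi = \sum _{n=0}^{N-1}\left( \chi ^h(\cdot -nh) - \chi ^h(\cdot -(n+1)h)\right) \) (recall that \(\tau = Nh\)), the linearity of the convolution, and the elementary convexity estimate \(\left| \sum _{n=0}^{N-1} v_n\right| _{{\mathbb {A}}}^2 \le N \sum _{n=0}^{N-1} |v_n|_{{\mathbb {A}}}^2\).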

Recalling that \(N = \alpha /\sqrt{h}\) we can rewrite the right hand side as

$$\begin{aligned} \frac{\alpha ^2}{N}\sum _{n=0}^{N-1}\frac{1}{h\sqrt{h}}|G_{\gamma }^{h/2}*(\chi ^h(\cdot -nh) - \chi ^h(\cdot - (n+1)h))|^2_{{\mathbb {A}}}. \end{aligned}$$
(85)

This is an average of time shifts of \(\alpha ^2\frac{1}{h\sqrt{h}}|G_{\gamma }^{h/2}*(\chi ^h - \chi ^h(\cdot - h))|^2_{{\mathbb {A}}}\). Since \(Nh = o(1)\) all these time shifts are small, thus the average has the same distributional limit as \(\alpha ^2\frac{1}{h\sqrt{h}}|G_{\gamma }^{h/2}*(\chi ^h - \chi ^h(\cdot - h))|^2_{{\mathbb {A}}}\). This proves (83). The argument for (84) is similar.

Existence of the normal velocities. We now prove the existence of the normal velocities. Fix \(1 \le i \le N\) and observe that for \(w \in \{\gamma , \beta \}\) we have

$$\begin{aligned} \begin{aligned} |\chi _i - \chi _i(\cdot -\tau )| \le&(\chi _i - \chi _i(\cdot -\tau )) G_w^h * (\chi _i - \chi _i(\cdot -\tau )) + |\chi _i - G_{w}^h * \chi _i| \\&+ |\chi _i(\cdot -\tau ) - G_w^h*\chi _i(\cdot -\tau )|, \end{aligned} \end{aligned}$$
(86)

which follows simply by observing that \(|\chi _i - \chi _i(\cdot -\tau )| = |\chi _i - \chi _i(\cdot -\tau )|^2 = (\chi _i - \chi _i(\cdot -\tau ))G_w^h * (\chi _i - \chi _i(\cdot -\tau )) + (\chi _i - \chi _i(\cdot -\tau ))(\chi _i - G_w^h * \chi _i) + (\chi _i(\cdot -\tau ) - \chi _i)(\chi _i(\cdot -\tau ) - G_w^h * \chi _i(\cdot -\tau ))\). Using Jensen’s inequality and the elementary identity (156) in the “Appendix” we have

$$\begin{aligned} \begin{aligned} |\chi _i - G_{w}^h * \chi _i|&\le \int G_{w}^h(z)|\chi _i(x) - \chi _i(x-z)| dz \\&= \int G_{w}^h(z)\chi _i(x)(1 - \chi _i(x-z)) dz + \int G_{w}^h(z)(1-\chi _i(x)) \chi _i(x-z)dz \\&= \sum _{k\ne i}\int G_{w}^h(z)\chi _i(x)\chi _k(x-z) dz + \sum _{k\ne i}\int G_{w}^h(z)\chi _k(x) \chi _i(x-z)dz. \end{aligned} \end{aligned}$$
(87)

Now observe that by testing (44) with \(G_{w}/K_{ik}\) (which is bounded, and thus admissible), we learn that

$$\begin{aligned} \lim _{h\downarrow 0}\frac{1}{\sqrt{h}}\int G_{w}^h(z)\chi _i(x)\chi _k(x-z) dz = \int G_{w}(z)(\nu _{ik}(x,t) \cdot z)_{+}dz{\mathcal {H}}^{d-1}_{|\Sigma _{ik}(t)}(dx)dt. \end{aligned}$$
(88)

Thus, if we divide (86) by \(\sqrt{h}\) and let \(h \downarrow 0\), using also (73) we obtain

$$\begin{aligned} \begin{aligned} \alpha |\partial _t\chi _i|&\le \liminf _{h\downarrow 0}\frac{|\delta \chi _i|}{\sqrt{h}} \\&\le \limsup _{h\downarrow 0}\frac{|\delta \chi _i|}{\sqrt{h}} \\&\le C\alpha ^2\omega + C{\mathcal {H}}^{d-1}_{|\partial ^* \Omega _i(t)}(dx)dt, \end{aligned} \end{aligned}$$
(89)

where C is a constant which depends on \(\gamma , \beta , N\), the mobilities and the surface tensions. If we divide by \(\alpha \) and then let \(\alpha \rightarrow 0\), we learn that \(|\partial _t\chi _i|\) is absolutely continuous with respect to \({\mathcal {H}}^{d-1}_{|\partial ^*\Omega _i(t)}(dx)dt\). In particular, there exists \(V_i \in L^1({\mathcal {H}}^{d-1}_{|\partial ^*\Omega _i(t)}(dx)dt)\) which is the normal velocity of \(\chi _i\), in the sense that \(\partial _t\chi _i = V_i|\nabla \chi _i|\) holds distributionally. The optimal integrability \(V_i \in L^2({\mathcal {H}}^{d-1}_{|\partial ^*\Omega _i(t)}(dx)dt)\) will be shown in the second part of the proof. Let us record for later use that by a similar reasoning we actually obtain that \(\limsup _{h} \frac{|\delta \chi _i|}{\sqrt{h}}\) is absolutely continuous with respect to \({\mathcal {H}}^{d-1}_{|\partial ^*\Omega _i(t)}(dx)dt\). Thus in particular inequality (89) holds with \(\omega \) replaced by its absolutely continuous part with respect to \({\mathcal {H}}^{d-1}_{|\partial ^*\Omega _i(t)}(dx)dt\); denoting this part by \(\omega ^{ac}_i\), this means

$$\begin{aligned} \limsup _{h\downarrow 0} \frac{|\delta \chi _i|}{\sqrt{h}} \le C\alpha ^2\omega _i^{ac} + C{\mathcal {H}}^{d-1}_{|\partial ^*\Omega _i(t)}(dx)dt. \end{aligned}$$
(90)

Sharp bound. For a given \(1 \le i \le N\) we denote by \(\delta \chi _i^+\) and \(\delta \chi _i^-\) the positive and negative parts of \(\delta \chi _i\), respectively, i.e. we set \(\delta \chi _i^+ := (\chi _i - \chi _i(\cdot - \tau ))_{+}\) and \(\delta \chi _i^- := (\chi _i - \chi _i(\cdot - \tau ))_{-}\). Before entering the proof of the sharp bound, we need to prove the following property: for any \(i \ne j\), in a distributional sense, it holds that

$$\begin{aligned} \lim _{h \downarrow 0} \frac{1}{\sqrt{h}} \delta \chi _i^+ K_{ij}^h*\delta \chi _j^+ = 0 = \lim _{h \downarrow 0} \frac{1}{\sqrt{h}} \delta \chi _i^- K_{ij}^h*\delta \chi _j^-. \end{aligned}$$
(91)

We focus on the first limit, the second one being analogous. The first observation is that the limit

$$\begin{aligned} \lambda := \lim _{h\downarrow 0} \frac{1}{\sqrt{h}}\delta \chi _i^+K_{ij}^h * \delta \chi _j^+ \end{aligned}$$
(92)

is a nonnegative bounded measure, which is absolutely continuous with respect to \({\mathcal {H}}^{d-1}_{|\Sigma _{ij}(t)}(dx)dt\). Indeed, spelling out the z-integral and using the fact that \(\delta \chi _i^+ = \chi _i(1-\chi _i(\cdot -\tau ))\) we obtain

$$\begin{aligned} \frac{1}{\sqrt{h}}\delta \chi _i^+K_{ij}^h * \delta \chi _j^+&= \frac{1}{\sqrt{h}}\int K_{ij}^h(z)\delta \chi _i^+(x,t)\delta \chi _j^+(x-z, t)dz \\&\le \frac{1}{\sqrt{h}}\int K_{ij}^h(z)\chi _i(x,t)\chi _j(x-z, t)dz. \end{aligned}$$

By (44) in Lemma 4, as \(h\downarrow 0\), the right hand side converges to

$$\begin{aligned} \int K_{ij}(z) (\nu _{ij}(x,t)\cdot z)_{+} {\mathcal {H}}^{d-1}_{|\Sigma _{ij}(t)}(dx)dt, \end{aligned}$$
(93)

which is absolutely continuous with respect to \({\mathcal {H}}^{d-1}_{|\Sigma _{ij}(t)}(dx)dt\).

Now, given \(\nu _0 \in {\mathbf {S}}^{d-1}\) we claim that

$$\begin{aligned} \begin{aligned} \lambda \le&\int _{\nu _0 \cdot z \le 0} K_{ij}(z)(\nu _{ij}\cdot z)_+{\mathcal {H}}^{d-1}_{|\Sigma _{ij}(t)}(dx)dt \\&+ \int _{\nu _0 \cdot z \ge 0} K_{ij}(z)(\nu _{ij}\cdot z)_-{\mathcal {H}}^{d-1}_{|\Sigma _{ij}(t)}(dx)dt. \end{aligned} \end{aligned}$$
(94)

To see this, we rewrite the argument of the limit in (92) as

$$\begin{aligned} \frac{1}{\sqrt{h}} \int _{{\mathbf {R}}^d} \lambda _{h}(t,x,z)dz, \end{aligned}$$
(95)

where we set \(\lambda _h(t, x, z) := \chi _i(x,t)(1-\chi _i)(x, t-\tau )K_{ij}^h(z)\chi _j(x-z,t)(1-\chi _j)(x-z,t-\tau )\), which is precisely the integrand of (95). Using the fact that \(0 \le \chi _i \le 1\) and \(\sum _l \chi _l = 1\) we obtain the following inequalities

$$\begin{aligned} \lambda _h&\le \chi _i(x,t)K_{ij}^h(z)\chi _j(x-z,t), \end{aligned}$$
(96)
$$\begin{aligned} \lambda _h&\le \chi _j(x,t-\tau )K_{ij}^h(z)\chi _i(x-z,t-\tau ) \nonumber \\&\quad + C\sum _{k\ne i,j} K_{ij}^h(z)\left( |\delta \chi _k|(x,t) + |\delta \chi _k|(x-z,t)\right) . \end{aligned}$$
(97)

Here C is a constant that does not depend on h; for (97), note that \(\delta \chi _i^+(x,t) \le \chi _j(x,t-\tau ) + \sum _{k\ne i,j}|\delta \chi _k|(x,t)\) (on the set \(\{\delta \chi _i^+ = 1\}\) the phase occupying x at time \(t-\tau \) is either j or some \(k \ne i,j\), and in the latter case \(|\delta \chi _k|(x,t) = 1\)), together with the analogous bound \(\delta \chi _j^+(x-z,t) \le \chi _i(x-z,t-\tau ) + \sum _{k\ne i,j}|\delta \chi _k|(x-z,t)\); expanding the product yields (97). Using inequality (96) on the domain \(\{\nu _0 \cdot z \le 0\}\) and inequality (97) on the domain \(\{\nu _0 \cdot z \ge 0\}\) we obtain

$$\begin{aligned} {\lambda } \le&\limsup _{h\downarrow 0}\frac{1}{\sqrt{h}}\int _{\nu _0 \cdot z \le 0} \chi _i(x,t)K_{ij}^h(z)\chi _j(x-z,t)dz \\&+ \limsup _{h\downarrow 0}\frac{1}{\sqrt{h}}\int _{\nu _0 \cdot z \ge 0} \chi _j(x,t-\tau )K_{ij}^h(z)\chi _i(x-z,t-\tau )dz \\&+ C\sum _{k\ne i, j}\limsup _{h\downarrow 0}\left( \frac{1}{\sqrt{h}}\int K_{ij}^h(z)|\delta \chi _k|(x,t)dz + \frac{1}{\sqrt{h}}\int K_{ij}^h(z)|\delta \chi _k|(x-z,t)dz \right) . \end{aligned}$$

Observe that for any \(1\le k\le N\) we have

$$\begin{aligned} \limsup _{h\downarrow 0}\frac{1}{\sqrt{h}}\int K_{ij}^h(z)|\delta \chi _k|(x,t)dz = \limsup _{h\downarrow 0}\frac{1}{\sqrt{h}}\int K_{ij}^h(z)|\delta \chi _k|(x-z,t)dz. \end{aligned}$$
(98)

This can be seen by showing that

$$\begin{aligned} \lim _{h\downarrow 0}\frac{1}{\sqrt{h}}\int K_{ij}^h(z)\left( |\delta \chi _k|(x,t)-|\delta \chi _k|(x-z,t) \right) dz = 0, \end{aligned}$$
(99)

which can be shown to be true by testing with an admissible test function, and putting the spatial shift z on it. Thus recalling (44) and (90), we obtain that

$$\begin{aligned} \begin{aligned} \lambda \le&\int _{\nu _0 \cdot z \le 0} K_{ij}(z)(\nu _{ij}\cdot z)_+{\mathcal {H}}^{d-1}_{|\Sigma _{ij}(t)}(dx)dt \\&+ \int _{\nu _0 \cdot z \ge 0} K_{ij}(z)(\nu _{ij}\cdot z)_-{\mathcal {H}}^{d-1}_{|\Sigma _{ij}(t)}(dx)dt \\&+ C\sum _{k\ne i,j}\left( \alpha ^2\omega _k^{ac} + {\mathcal {H}}^{d-1}_{|\partial ^*\Omega _k(t)}(dx)dt\right) . \end{aligned} \end{aligned}$$
(100)

Since we already know that \(\lambda \) is absolutely continuous with respect to \({\mathcal {H}}^{d-1}_{|\Sigma _{ij}(t)}(dx)dt\), the same bound holds true if we replace the right hand side with its absolutely continuous part with respect to \({\mathcal {H}}^{d-1}_{|\Sigma _{ij}(t)}(dx)dt\). Observing that for \(k \ne i, j\) by Lemma 6 in the “Appendix” the measures \({\mathcal {H}}^{d-1}_{|\partial ^*\Omega _k(t)}(dx)dt\) and \({\mathcal {H}}^{d-1}_{|\Sigma _{ij}(t)}(dx)dt\) are mutually singular, this yields (94).

Writing \(\lambda = \theta (x,t){\mathcal {H}}^{d-1}_{|\Sigma _{ij}(t)}(dx)dt\) for some \(L^1({\mathcal {H}}^{d-1}_{|\Sigma _{ij}(t)}(dx)dt)\)-function \(\theta \) we obtain that inequality (94) yields

$$\begin{aligned} \begin{aligned} \theta (x,t)&\le \int _{\nu _0 \cdot z \le 0}K_{ij}(z)(\nu _{ij}(x,t) \cdot z)_+dz \\&\quad +\int _{\nu _0 \cdot z \ge 0}K_{ij}(z)(\nu _{ij}(x,t) \cdot z)_-dz \end{aligned} \end{aligned}$$
(101)

for every \(\nu _0 \in {\mathbf {S}}^{d-1}\) and \({\mathcal {H}}^{d-1}_{|\Sigma _{ij}(t)}(dx)dt\)-a.e. \((x,t) \in [0,1)^d \times (0,T)\). By a separability argument, we see that the null set on which (101) does not hold can be chosen independently of \(\nu _0\). If we select \(\nu _0 = \nu _{ij}(x,t)\), both integrals on the right hand side of (101) vanish, and this yields \(\theta \le 0\) almost everywhere with respect to \({\mathcal {H}}^{d-1}_{|\Sigma _{ij}(t)}(dx)dt\). Since we already know that \(\lambda \) is nonnegative, this gives \(\lambda = 0\).

Before getting the sharp bound, we check that for any \(i\ne j\) we have \(V_i = -V_j\) a.e. with respect to \({\mathcal {H}}^{d-1}_{|\Sigma _{ij}(t)}(dx)dt\). To see this, we start by observing that if \(\xi \in C^{\infty }_c([0,1)^d\times (0,T))\), thanks to the fact that \(\sum _{k\ne i} \chi _k = 1 - \chi _i\), we get

$$\begin{aligned} \begin{aligned} \int \xi V_i{\mathcal {H}}^{d-1}_{|\partial ^*\Omega _i(t)}(dx)dt&= -\int \partial _t\xi \chi _i dxdt \\&=\sum _{k\ne i} \int \partial _t\xi \chi _k dxdt \\&=-\sum _{k\ne i} \int \xi V_k{\mathcal {H}}^{d-1}_{|\partial ^*\Omega _k(t)}(dx)dt. \end{aligned} \end{aligned}$$
(102)

Choosing \(\xi = f(t)g(x)\) for some \(f \in C^{\infty }_c((0,T))\) and \(g \in C^{\infty }([0,1)^d)\), by a separability argument, we obtain that for a.e. t and every \(g \in C^{\infty }([0,1)^d)\)

$$\begin{aligned} \begin{aligned} \int g V_i{\mathcal {H}}^{d-1}_{|\partial ^*\Omega _i(t)}(dx)&= -\sum _{k\ne i} \int g V_k{\mathcal {H}}^{d-1}_{|\partial ^*\Omega _k(t)}(dx). \end{aligned} \end{aligned}$$
(103)

Pick t such that (103) holds. Let \(g \in C^{\infty }([0,1)^d)\) and let \(\rho _{m}\) be a partition of unity obtained by the construction of Sect. 4 applied to the function \(\chi (\cdot , t)\) on the interface \(\Sigma _{ij}(t)\). Then

$$\begin{aligned} \begin{aligned} \sum _{m \in {\mathbf {N}}}\int \rho _{m}g V_i{\mathcal {H}}^{d-1}_{|\partial ^*\Omega _i(t)}(dx)&= -\sum _{m \in {\mathbf {N}}}\sum _{k\ne i} \int \rho _{m}g V_k{\mathcal {H}}^{d-1}_{|\partial ^*\Omega _k(t)}(dx). \end{aligned} \end{aligned}$$
(104)

Passing to the limit \(r\downarrow 0\) in (104) we get by Lemma 5 that

$$\begin{aligned} \begin{aligned} \int g V_i{\mathcal {H}}^{d-1}_{|\Sigma _{ij}(t)}(dx)&= -\int g V_j{\mathcal {H}}^{d-1}_{|\Sigma _{ij}(t)}(dx). \end{aligned} \end{aligned}$$
(105)

Since this identity holds for any \(g \in C^{\infty }([0,1)^d)\), a density argument gives \(V_i(x, t) = -V_j(x,t)\) for \({\mathcal {H}}^{d-1}_{|\Sigma _{ij}(t)}\)-a.e. x. In other words

$$\begin{aligned} \int |V_i(x,t) +V_j(x,t)|{\mathcal {H}}^{d-1}_{|\Sigma _{ij}(t)}(dx) = 0. \end{aligned}$$
(106)

Integrating in time yields that \(V_i = -V_j\) a.e. with respect to \({\mathcal {H}}^{d-1}_{|\Sigma _{ij}(t)}(dx)dt\).

We now proceed with the derivation of the sharp lower bound. Define \(c_{ij} := \int K_{ij}(z)dz\). Then we have

$$\begin{aligned}&c_{ij}(|\delta \chi _i| + |\delta \chi _j|) = c_{ij}(\delta \chi _i^+ + \delta \chi _j^- + \delta \chi _i^- + \delta \chi _j^+) \nonumber \\&\quad =\frac{1}{2}\left( \delta \chi _i^+ K_{ij}^h * (1 - \delta \chi _j^-) + (1-\delta \chi _j^-) K_{ij}^h * \delta \chi _i^+ + \delta \chi _j^- K_{ij}^h * (1 - \delta \chi _i^+) \right. \nonumber \\&\qquad \qquad \left. + (1-\delta \chi _i^+) K_{ij}^h * \delta \chi _j^- + \delta \chi _i^- K_{ij}^h * (1 - \delta \chi _j^+) + (1-\delta \chi _j^+) K_{ij}^h * \delta \chi _i^- \right. \nonumber \\&\qquad \qquad \left. + \delta \chi _j^+ K_{ij}^h * (1 - \delta \chi _i^-) + (1-\delta \chi _i^-) K_{ij}^h * \delta \chi _j^+\right) \nonumber \\&\qquad \qquad + \left( \delta \chi _i^+ K_{ij}^h*\delta \chi _j^- + \delta \chi _j^- K_{ij}^h*\delta \chi _i^+ + \delta \chi _i^- K_{ij}^h*\delta \chi _j^+ + \delta \chi _j^+ K_{ij}^h*\delta \chi _i^- \right) . \end{aligned}$$
(107)

Now we rewrite the terms in the second parenthesis using \(-ab = a_+b_- + a_-b_+ - a_+b_+ - a_-b_-\); adding and subtracting the contributions of the minority phases, we obtain

$$\begin{aligned} \begin{aligned} c_{ij}(|\delta \chi _i| + |\delta \chi _j|) \le&\, \frac{1}{2}\Bigg ( \delta \chi _i^+ K_{ij}^h * (1 - \delta \chi _j^-) + (1-\delta \chi _j^-) K_{ij}^h * \delta \chi _i^+ + \delta \chi _j^- K_{ij}^h * (1 - \delta \chi _i^+) \\&\qquad + (1-\delta \chi _i^+) K_{ij}^h * \delta \chi _j^- + \delta \chi _i^- K_{ij}^h * (1 - \delta \chi _j^+) + (1-\delta \chi _j^+) K_{ij}^h * \delta \chi _i^- \\&\qquad + \delta \chi _j^+ K_{ij}^h * (1 - \delta \chi _i^-) + (1-\delta \chi _i^-) K_{ij}^h * \delta \chi _j^+\Bigg ) - \sum _{l,p}\delta \chi _l K_{lp}^h*\delta \chi _p \\&\qquad +\delta \chi _i^+ K_{ij}^h*\delta \chi _j^+ + \delta \chi _i^- K_{ij}^h*\delta \chi _j^- + \delta \chi _j^+ K_{ij}^h*\delta \chi _i^+ \\&\qquad +\delta \chi _j^- K_{ij}^h*\delta \chi _i^- + \sum _{\{l,p\} \ne \{i,j\}}\delta \chi _l K_{lp}^h*\delta \chi _p. \end{aligned} \end{aligned}$$
(108)

Now the main idea is to split the integral of \(K_{ij}\) in the definition of \(c_{ij}\) into two parts. More precisely, by the symmetry (20), for any \(\nu _0 \in {\mathbf {S}}^{d-1}\) and any \(V_0 > 0\) we have

$$\begin{aligned} c_{ij} = 2\int _{0 \le \nu _0 \cdot z \le \alpha V_0}K_{ij}(z)dz + 2\int _{\nu _0 \cdot z > \alpha V_0}K_{ij}(z)dz. \end{aligned}$$
(109)

Substituting into (108) and dividing by \(\sqrt{h}\) we obtain

$$\begin{aligned}&2\int _{0 \le \nu _0 \cdot z \le \alpha V_0} K_{ij}(z)dz\frac{(|\delta \chi _i| + |\delta \chi _j|)}{\sqrt{h}} \nonumber \\&\quad \le \frac{1}{2\sqrt{h}}\bigg ( \delta \chi _i^+ K_{ij}^h * (1 - \delta \chi _j^-) + (1-\delta \chi _j^-) K_{ij}^h * \delta \chi _i^+ + \delta \chi _j^- K_{ij}^h * (1 - \delta \chi _i^+) \nonumber \\&\qquad \qquad \qquad + (1-\delta \chi _i^+) K_{ij}^h * \delta \chi _j^- + \delta \chi _i^- K_{ij}^h * (1 - \delta \chi _j^+) + (1-\delta \chi _j^+) K_{ij}^h * \delta \chi _i^- \nonumber \\&\qquad \qquad \qquad + \delta \chi _j^+ K_{ij}^h * (1 - \delta \chi _i^-) + (1-\delta \chi _i^-) K_{ij}^h * \delta \chi _j^+ \nonumber \\&\qquad \qquad \qquad -4\int _{\nu _0 \cdot z > \alpha V_0}K_{ij}(z)dz(|\delta \chi _i| + |\delta \chi _j|) \nonumber \\&\qquad \qquad \qquad - 2\sum _{l,p}\delta \chi _l K_{lp}^h*\delta \chi _p +2 \delta \chi _i^+ K_{ij}^h*\delta \chi _j^+ + 2\delta \chi _i^- K_{ij}^h*\delta \chi _j^- \nonumber \\&\qquad \qquad \qquad + 2\delta \chi _j^+ K_{ij}^h*\delta \chi _i^+ + 2\delta \chi _j^- K_{ij}^h*\delta \chi _i^- \nonumber \\&\qquad \qquad \qquad + 2\sum _{(l,p) \ne (i,j), (l,p) \ne (j,i)}\delta \chi _l K_{lp}^h*\delta \chi _p \bigg ). \end{aligned}$$
(110)

We will be interested in bounding the \(\liminf \) of the left hand side. Observe that the distributional limit of the last five terms is non-positive. Indeed, the limits of the first four terms vanish distributionally by property (91), while the last term is bounded from above by

$$\begin{aligned} 2 \sum _{(l,p) \ne (i,j), (l,p) \ne (j,i)}\delta \chi _l^+ K_{lp}^h*\delta \chi _p^+ + \delta \chi _l^- K_{lp}^h*\delta \chi _p^-, \end{aligned}$$

which vanishes distributionally in the limit \(h \downarrow 0\) by property (91). We thus obtain that the \(\liminf \) of the left hand side of (110) is bounded from above by

$$\begin{aligned} \begin{aligned}&\liminf _{h\downarrow 0}\frac{1}{2\sqrt{h}}\bigg ( \delta \chi _i^+ K_{ij}^h * (1 - \delta \chi _j^-) + (1-\delta \chi _j^-) K_{ij}^h * \delta \chi _i^+ + \delta \chi _j^- K_{ij}^h * (1 - \delta \chi _i^+) \\&\qquad \qquad \qquad \quad + (1-\delta \chi _i^+) K_{ij}^h * \delta \chi _j^- + \delta \chi _i^- K_{ij}^h * (1 - \delta \chi _j^+) + (1-\delta \chi _j^+) K_{ij}^h * \delta \chi _i^- \\&\qquad \qquad \qquad \quad + \delta \chi _j^+ K_{ij}^h * (1 - \delta \chi _i^-) + (1-\delta \chi _i^-) K_{ij}^h * \delta \chi _j^+ \\&\qquad \qquad \qquad \quad - 4\int _{\nu _0 \cdot z > \alpha V_0}K_{ij}(z)dz(|\delta \chi _i| + |\delta \chi _j|) - 2\sum _{l,p}\delta \chi _l K_{lp}^h*\delta \chi _p\bigg ). \end{aligned} \end{aligned}$$
(111)

For the last term we use the sharp bound (72), which relates it to our dissipation measure \(\omega \). For the other terms we also need a good bound; this cannot be done as naively as before, since we want the bound to be sharp. We claim that

$$\begin{aligned}&\limsup _{h\downarrow 0}\frac{1}{\sqrt{h}}\Bigg ( \delta \chi _i^+ K_{ij}^h * (1 - \delta \chi _j^-) + (1-\delta \chi _j^-) K_{ij}^h * \delta \chi _i^+ + \delta \chi _j^- K_{ij}^h * (1 - \delta \chi _i^+) \nonumber \\&\qquad \qquad \qquad \quad + (1-\delta \chi _i^+) K_{ij}^h * \delta \chi _j^- + \delta \chi _i^- K_{ij}^h * (1 - \delta \chi _j^+) + (1-\delta \chi _j^+) K_{ij}^h * \delta \chi _i^- \nonumber \\&\qquad \qquad \qquad \quad + \delta \chi _j^+ K_{ij}^h * (1 - \delta \chi _i^-) + (1-\delta \chi _i^-) K_{ij}^h * \delta \chi _j^+ \nonumber \\&\qquad \qquad \qquad \quad - 4\int _{\nu _0 \cdot z > \alpha V_0}K_{ij}(z)dz(|\delta \chi _i| + |\delta \chi _j|) \Bigg ) \nonumber \\&\qquad \qquad \qquad \quad \le 8\int _{0 \le \nu _0 \cdot z \le \alpha V_0} K_{ij}(z)|\nu _{ij}(x)\cdot z|dz {\mathcal {H}}^{d-1}_{|\Sigma _{ij}(t)}(dx)dt \nonumber \\&\qquad \qquad \qquad \quad + C\sum _{k\ne i,j}(\alpha ^2 \omega ^{ac}_k + {\mathcal {H}}^{d-1}_{|\partial ^*\Omega _k(t)}(dx))dt. \end{aligned}$$
(112)

Here C is a constant that depends on \(\gamma , \beta , {\mathbb {A}}, {\mathbb {B}}\), but not on h. Assume for the moment that (112) is true and let us conclude the argument in this case. Using (112) and (72) we obtain

$$\begin{aligned}&2\liminf _{h\downarrow 0}\int _{0 \le \nu _0 \cdot z \le \alpha V_0} K_{ij}(z)dz\frac{(|\delta \chi _i| + |\delta \chi _j|)}{\sqrt{h}} \nonumber \\&\quad \le \alpha ^2\omega + 4\int _{0 \le \nu _0 \cdot z \le \alpha V_0} K_{ij}(z)|\nu _{ij}(x)\cdot z|dz {\mathcal {H}}^{d-1}_{|\Sigma _{ij}(t)}(dx)dt \nonumber \\&\qquad + C\sum _{k \ne i,j}\left( \alpha ^2 \omega ^{ac}_k + {\mathcal {H}}^{d-1}_{|\partial ^*\Omega _k(t)}(dx)dt\right) \end{aligned}$$
(113)

in the sense of distributions on \([0,1)^d\times (0,T)\). Observe also that, by (89), the left hand side of (113) is an upper bound for \(2\alpha \int _{0 \le \nu _0 \cdot z \le \alpha V_0}K_{ij}(z)dz\,(|\partial _t\chi _i| + |\partial _t\chi _j|)\), thus the inequality still holds true if the left hand side is replaced by this term. Remember that \(\omega _k^{ac}\) is absolutely continuous with respect to \({\mathcal {H}}^{d-1}_{|\partial ^*\Omega _k(t)}(dx)dt\), thus there exist functions \(W_k \in L^1({\mathcal {H}}^{d-1}_{|\partial ^*\Omega _k(t)}(dx)dt)\) such that \(\omega _k^{ac} = W_k(x,t){\mathcal {H}}^{d-1}_{|\partial ^*\Omega _k(t)}(dx)dt\). We now disintegrate the measure \(\omega \), i.e. we find a Borel family \(\omega _{t}, t \in (0,T)\), of positive measures on \([0,1)^d\) such that \(\omega = \omega _t \otimes dt\). Having said this, it is not hard to see that (113) holds in a disintegrated version, i.e. we have for Lebesgue a.e. \(t \in (0,T)\)

$$\begin{aligned}&2\alpha \int _{0 \le \nu _0 \cdot z \le \alpha V_0}K_{ij}(z)dz(|V_i(x,t)|{\mathcal {H}}^{d-1}_{|\partial ^*\Omega _i(t)}(dx)+|V_j(x,t)|{\mathcal {H}}^{d-1}_{|\partial ^*\Omega _j(t)}(dx)) \nonumber \\&\quad \le \alpha ^2\omega _t + 4\int _{0 \le \nu _0 \cdot z \le \alpha V_0} K_{ij}(z)|\nu _{ij}(x)\cdot z|dz {\mathcal {H}}^{d-1}_{|\Sigma _{ij}(t)}(dx) \nonumber \\&\qquad + C\sum _{k \ne i,j}(\alpha ^2 W_k(x,t) + 1) {\mathcal {H}}^{d-1}_{|\partial ^*\Omega _k(t)}(dx). \end{aligned}$$
(114)

Here \(\nu _0 \in {\mathbf {S}}^{d-1}\) and \(V_0 \in (0, \infty )\) are arbitrary: indeed even if the set of points in time for which (114) holds is a priori dependent on \(\nu _0\) and \(V_0\), a standard separability argument allows us to conclude that we can get rid of this dependence.

Fix a point t in time such that (114) holds. In what follows, we drop the time variable t, which is fixed, so for example \(V_i(x) = V_i(x,t)\), \(\Sigma _{ij} = \Sigma _{ij}(t)\) and so on. Fix a nonnegative \(\xi \in C([0,1)^d)\) and observe that, since \(|V_i| = |V_j| = |V_{ij}|\) on \(\Sigma _{ij}\) by definition of \(V_{ij}\), and since \(\Sigma _{ij} \subset \partial ^*\Omega _i \cap \partial ^*\Omega _j\), we have

$$\begin{aligned}&4\alpha \int _{0 \le \nu _0 \cdot z \le \alpha V_0}K_{ij}(z)dz \int _{[0,1)^d} \xi (x)|V_{ij}(x)|{\mathcal {H}}^{d-1}_{|\Sigma _{ij}}(dx) \nonumber \\&\quad \le \alpha ^2\int _{[0,1)^d}\xi (x)\omega _t(dx)+4\int _{[0,1)^d}\int _{0 \le \nu _0 \cdot z \le \alpha V_0} K_{ij}(z)|\nu _{ij}(x)\cdot z|dz\xi (x) {\mathcal {H}}^{d-1}_{|\Sigma _{ij}(t)}(dx) \nonumber \\&\qquad + C\sum _{k \ne i,j}\int _{[0,1)^d}\xi (x)(\alpha ^2 W_k(x,t) + 1) {\mathcal {H}}^{d-1}_{|\partial ^*\Omega _k(t)}(dx). \end{aligned}$$
(115)

Let us relabel \(\nu _0\), \(V_0\) and \(\xi \) to make clear that they may depend on the pair \((i,j)\). Thus \(\nu _0^{ij} \in {\mathbf {S}}^{d-1}\), \(V_0^{ij} \in (0, \infty )\) and nonnegative \(\xi _{ij} \in C([0,1)^d)\) are arbitrary, and it holds that

$$\begin{aligned}&4\alpha \int _{0 \le \nu _0^{ij} \cdot z \le \alpha V^{ij}_0}K_{ij}(z)dz \int _{[0,1)^d} \xi _{ij}(x)|V_{ij}(x)|{\mathcal {H}}^{d-1}_{|\Sigma _{ij}}(dx) \nonumber \\&\quad \le \alpha ^2\int _{[0,1)^d}\xi _{ij}(x)\omega _t(dx)+4\int _{[0,1)^d}\int _{0 \le \nu _0^{ij} \cdot z \le \alpha V_0^{ij}} K_{ij}(z)|\nu _{ij}(x)\cdot z|dz\xi _{ij}(x) {\mathcal {H}}^{d-1}_{|\Sigma _{ij}(t)}(dx) \nonumber \\&\qquad + C\sum _{k \ne i,j}\int _{[0,1)^d}\xi _{ij}(x)(\alpha ^2 W_k(x,t) + 1) {\mathcal {H}}^{d-1}_{|\partial ^*\Omega _k(t)}(dx). \end{aligned}$$
(116)

Let \(\{\rho _{m}\}\) be a partition of unity obtained using the construction of Sect. 4 applied to the function \(\chi (\cdot , t)\) on the interface \(\Sigma _{ij}(t)\). Use the above inequality with \(\xi _{ij}\) replaced by \(\rho _{m}\xi _{ij}\) and sum over m and ij to get

$$\begin{aligned}&\begin{aligned} \sum _{i<j}\sum _{m\in {\mathbf {N}}} \mathbf {LH}_m^{ij} \le \sum _{i<j}\sum _{m \in {\mathbf {N}}} ({\mathbf {I}}_m^{ij}+\mathbf {II}_m^{ij}+\mathbf {III}_m^{ij}) \end{aligned} \end{aligned}$$
(117)

where we have set

$$\begin{aligned} {\mathbf {LH}_m^{ij}}&= 4\alpha \int _{0 \le \nu _0^{ij} \cdot z \le \alpha V^{ij}_0}K_{ij}(z)dz \int _{[0,1)^d} \rho _{mij}(x)\xi _{ij}(x)|V_{ij}(x)|{\mathcal {H}}^{d-1}_{|\Sigma _{ij}}(dx),\\ {{\mathbf {I}}^{ij}_m}&= \alpha ^2\int _{[0,1)^d}\xi _{ij}(x)\rho _{mij}(x)\omega _t(dx),\\ {\mathbf {II}^{ij}_m}&= 4\int _{[0,1)^d}\int _{0 \le \nu _0^{ij} \cdot z \le \alpha V_0^{ij}} K_{ij}(z)|\nu _{ij}(x)\cdot z|dz\rho _{mij}(x)\xi _{ij}(x) {\mathcal {H}}^{d-1}_{|\Sigma _{ij}(t)}(dx),\\ {\mathbf {III}^{ij}_m}&= C\sum _{k \ne i,j}\int _{[0,1)^d}\rho _{mij}(x)\xi _{ij}(x)(\alpha ^2 W_k(x,t) + 1) {\mathcal {H}}^{d-1}_{|\partial ^*\Omega _k(t)}(dx). \end{aligned}$$

Observe that

$$\begin{aligned} \sum _{i<j}\sum _{m \in {\mathbf {N}}} {\mathbf {I}}_m^{ij} \le \sum _{i<j} \alpha ^2\int _{[0,1)^d}\xi _{ij}(x)\omega _t(dx) \end{aligned}$$
(118)

because \(\rho _{m}\) is a partition of unity. Moreover by Lemma 5 we get

$$\begin{aligned} {\lim _{r\downarrow 0}\sum _{i<j}\sum _{m \in {\mathbf {N}}}\mathbf {LH}_m^{ij}}&=\sum _{i<j} 4\alpha \int _{0 \le \nu _0^{ij} \cdot z \le \alpha V^{ij}_0}K_{ij}(z)dz \int _{[0,1)^d}\xi _{ij}(x)|V_{ij}(x)|{\mathcal {H}}^{d-1}_{|\Sigma _{ij}}(dx),\\ {\lim _{r\downarrow 0} \sum _{i<j}\sum _{m \in {\mathbf {N}}} \mathbf {II}_m^{ij}}&=\sum _{i<j} 4\int _{[0,1)^d}\int _{0 \le \nu _0^{ij} \cdot z \le \alpha V_0^{ij}} K_{ij}(z)|\nu _{ij}(x)\cdot z|dz\xi _{ij}(x) {\mathcal {H}}^{d-1}_{|\Sigma _{ij}(t)}(dx),\\ {\lim _{r\downarrow 0} \sum _{i<j}\sum _{m \in {\mathbf {N}}} \mathbf {III}_m^{ij}}&= 0. \end{aligned}$$

Putting things together we obtain that for any \(\nu _0^{ij} \in {\mathbf {S}}^{d-1}\), any \(V_0^{ij} \in (0, \infty )\), and any nonnegative \(\xi _{ij} \in C([0,1)^d)\)

$$\begin{aligned}&4\alpha \sum _{i<j}\int _{0 \le \nu _0^{ij} \cdot z \le \alpha V^{ij}_0}K_{ij}(z)dz \int _{[0,1)^d}\xi _{ij}(x)|V_{ij}(x)|{\mathcal {H}}^{d-1}_{|\Sigma _{ij}}(dx) \nonumber \\&\quad \le \sum _{i<j}\alpha ^2\int _{[0,1)^d}\xi _{ij}(x)\omega _t(dx) \nonumber \\&\qquad + 4\sum _{i<j}\int _{[0,1)^d}\int _{0 \le \nu _0^{ij} \cdot z \le \alpha V_0^{ij}} K_{ij}(z)|\nu _{ij}(x)\cdot z|dz\xi _{ij}(x) {\mathcal {H}}^{d-1}_{|\Sigma _{ij}(t)}(dx). \end{aligned}$$
(119)

We now claim that by approximation the above inequality is valid for any simple function \(\xi _{ij} \ge 0\). To see this, it is clear that we can concentrate on \(\xi _{ij} = w_{ij}{\mathbf {1}}_{B_{ij}}\), where \(B_{ij} \subset [0,1)^d\) are Borel and \(w_{ij} \ge 0\). Observe that by the dominated convergence theorem, the family

$$\begin{aligned} {\mathcal {F}} := \left\{ B = \prod _{i<j} B_{ij}:\ B_{ij} \in {\mathcal {B}}([0,1)^d)\ \text {s.t.}\ \forall w_{ij} \ge 0\ \text { (119) holds with}\ \xi _{ij} = w_{ij} {\mathbf {1}}_{B_{ij}}\right\} \nonumber \\ \end{aligned}$$
(120)

is a monotone class. Thus by the monotone class theorem we just need to show that it contains all products of open sets. But this is easy: given open sets \(B_{ij} \subset [0,1)^d\), we can always find sequences \(\eta _{k}^{ij}\) of continuous functions with compact support such that \(0 \le \eta _k^{ij} \le {\mathbf {1}}_{B_{ij}}\) and \(\eta _k^{ij} \rightarrow {\mathbf {1}}_{B_{ij}}\) pointwise, so the claim follows by the monotone convergence theorem.

With this in place, one can use an approximation argument to replace the vector \(\nu _0^{ij}\) with the \({\mathcal {H}}^{d-1}\)-measurable vector-valued function \(\nu _{ij}\), obtaining the following inequality:

$$\begin{aligned}&4\alpha \sum _{i<j}\int _{[0,1)^d}\int _{0< \nu _{ij}(x) \cdot z< \alpha V^{ij}_0}K_{ij}(z)dz \xi _{ij}(x)|V_{ij}(x)|{\mathcal {H}}^{d-1}_{|\Sigma _{ij}}(dx) \nonumber \\&\quad \le \sum _{i<j} \alpha ^2\int _{[0,1)^d}\xi _{ij}(x)\omega _t(dx) \nonumber \\&\qquad + 4\sum _{i<j}\int _{[0,1)^d}\int _{0 \le \nu _{ij} \cdot z \le \alpha V_0^{ij}} K_{ij}(z)|\nu _{ij}(x)\cdot z|dz\xi _{ij}(x) {\mathcal {H}}^{d-1}_{|\Sigma _{ij}}(dx). \end{aligned}$$
(121)

Now divide by \(\alpha ^2\) and send \(\alpha \) to zero. Record the following limits, which can be computed by spelling out the definition of \(K_{ij}\) and recalling the symmetry property (20) and the factorization property (23) of the heat kernel:

$$\begin{aligned}&\lim _{\alpha \downarrow 0} \frac{1}{\alpha }\int _{0< \nu _{ij}(x) \cdot z < \alpha V_0^{ij}} K_{ij}(z) dz = \frac{V_0^{ij}}{2\mu _{ij}}. \nonumber \\&\lim _{\alpha \downarrow 0} \frac{1}{\alpha ^2}\int _{0 \le \nu _{ij}(x) \cdot z \le \alpha V_0^{ij}} K_{ij}(z)|\nu _{ij}(x)\cdot z| dz = \frac{(V_0^{ij})^2}{4\mu _{ij}}. \end{aligned}$$
(122)
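
For the reader’s convenience we sketch the computation behind (122). For the first limit, slicing the z-integral orthogonally to \(\nu _{ij}(x)\) by Fubini’s theorem gives

$$\begin{aligned} \frac{1}{\alpha }\int _{0< \nu _{ij}(x) \cdot z < \alpha V_0^{ij}} K_{ij}(z)\, dz = \frac{1}{\alpha }\int _0^{\alpha V_0^{ij}}\left( \int _{\nu _{ij}(x) \cdot z = s} K_{ij}(z)\, {\mathcal {H}}^{d-1}(dz)\right) ds, \end{aligned}$$

which converges, as \(\alpha \downarrow 0\), to \(V_0^{ij}\int _{\nu _{ij}(x) \cdot z = 0} K_{ij}\, d{\mathcal {H}}^{d-1}\); the value \(\frac{V_0^{ij}}{2\mu _{ij}}\) stated in (122) thus corresponds to the normalization \(\int _{\nu \cdot z = 0} K_{ij}\, d{\mathcal {H}}^{d-1} = \frac{1}{2\mu _{ij}}\) of the kernel (the analogous identity for the heat kernel \(G_w\) is recorded at the end of the proof of Proposition 2). The second limit follows in the same way, the extra factor \(|\nu _{ij}(x)\cdot z| = s\) on each slice contributing \(\frac{1}{\alpha ^2}\int _0^{\alpha V_0^{ij}} s\, ds = \frac{(V_0^{ij})^2}{2}\).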

Then if we insert back into (121) we obtain

$$\begin{aligned}&\sum _{i<j}\frac{2}{\mu _{ij}}\int _{[0,1)^d}V_0^{ij} \xi _{ij}(x)|V_{ij}(x)|{\mathcal {H}}^{d-1}_{|\Sigma _{ij}}(dx) \end{aligned}$$
(123)
$$\begin{aligned}&\quad \le \sum _{i<j}\int _{[0,1)^d}\xi _{ij}(x)\,\omega _t(dx) + \sum _{i<j}\int _{[0,1)^d}\frac{(V_0^{ij})^2}{\mu _{ij}}\xi _{ij}(x) {\mathcal {H}}^{d-1}_{|\Sigma _{ij}(t)}(dx). \end{aligned}$$
(124)

Now given \(M > 0\) take sequences of simple functions

$$\begin{aligned} s_m^{ij} = \sum _{k=1}^{p_m} w_k^{ij}{\mathbf {1}}_{B^k_{ij}} \end{aligned}$$
(125)

such that \(s_m^{ij} \rightarrow |V_{ij}|{\mathbf {1}}_{\{|V_{ij}| \le M\}}\) as \(m \rightarrow +\infty \) monotonically almost everywhere with respect to \({\mathcal {H}}^{d-1}_{|\Sigma _{ij}(t)}\). We are assuming that \(\{B_{ij}^k\}_{k=1, \dots , p_m}\) are disjoint and \({\mathcal {H}}^{d-1}\)-measurable, with the property that \(B_{ij}^{k_1} \cap B_{lr}^{k_2} = \emptyset \) if \(\{i,j\} \ne \{l,r\}\). Choosing \(V_0^{ij} = w_k^{ij}\), \(\xi _{ij} = {\mathbf {1}}_{B^k_{ij}}\) in (123), summing over k, and using the disjointness of the sets \(B_{ij}^k\) to bound \(\sum _{i<j}\sum _k \int _{[0,1)^d}{\mathbf {1}}_{B_{ij}^k}\,\omega _t(dx) \le \int _{[0,1)^d}\omega _t(dx)\), we obtain

$$\begin{aligned}&\sum _{i<j}\frac{2}{\mu _{ij}}\int _{[0,1)^d}s_m^{ij}|V_{ij}(x)|{\mathcal {H}}^{d-1}_{|\Sigma _{ij}}(dx) \nonumber \\&\quad \le \int _{[0,1)^d}\omega _t(dx) + \sum _{i<j}\int _{[0,1)^d}\frac{(s_m^{ij})^2}{\mu _{ij}} {\mathcal {H}}^{d-1}_{|\Sigma _{ij}(t)}(dx). \end{aligned}$$
(126)

Taking the limit \(m\rightarrow +\infty \), using the monotone convergence theorem we obtain

$$\begin{aligned}&\sum _{i<j}\frac{2}{\mu _{ij}}\int _{[0,1)^d}|V_{ij}(x)|^2{\mathbf {1}}_{\{|V_{ij}|\le M\}}{\mathcal {H}}^{d-1}_{|\Sigma _{ij}}(dx) \nonumber \\&\quad \le \int _{[0,1)^d}\omega _t(dx) + \sum _{i<j}\int _{[0,1)^d}\frac{|V_{ij}(x)|^2}{\mu _{ij}}{\mathbf {1}}_{\{|V_{ij}|\le M\}}{\mathcal {H}}^{d-1}_{|\Sigma _{ij}}(dx) \end{aligned}$$
(127)

or, in other words,

$$\begin{aligned} \sum _{i<j}\frac{1}{\mu _{ij}}\int _{[0,1)^d}|V_{ij}(x)|^2{\mathbf {1}}_{\{|V_{ij}|\le M\}}{\mathcal {H}}^{d-1}_{|\Sigma _{ij}}(dx) \le \int _{[0,1)^d}\omega _t(dx). \end{aligned}$$

Recall that \(\mu _{ij} = \mu _{ji}\), thus the inequality above may be rewritten as

$$\begin{aligned} \sum _{i,j}\frac{1}{2\mu _{ij}}\int _{[0,1)^d}|V_{ij}(x)|^2{\mathbf {1}}_{\{|V_{ij}|\le M\}}{\mathcal {H}}^{d-1}_{|\Sigma _{ij}}(dx) \le \int _{[0,1)^d}\omega _t(dx). \end{aligned}$$

If we now integrate in time we learn by the monotone convergence theorem that \(V_{ij} \in L^2({\mathcal {H}}^{d-1}_{|\Sigma _{ij}(t)}(dx)dt)\) and that the sharp bound (47) is satisfied.

Proof of (112). To prove (112) we proceed in several steps.

First of all, we claim that the first eight terms may be substituted by

$$\begin{aligned}&2\int _{\nu _0 \cdot z \ge 0}K_{ij}^h(z) \bigg ( |\delta \chi _i^+ - \delta \chi _j^-(\cdot -z)| + |\delta \chi _i^+(\cdot -z) - \delta \chi _j^-| \nonumber \\&\qquad \quad \qquad \qquad \qquad + |\delta \chi _i^- - \delta \chi _j^+(\cdot -z)| + |\delta \chi _i^-(\cdot -z) - \delta \chi _j^+| \bigg )dz. \end{aligned}$$
(128)

To show this, observe that we may replace the implicit z-integrals in the convolutions in the first eight terms by twice the integrals over the half space \(\{\nu _0 \cdot z \ge 0\}\) instead of \({\mathbf {R}}^d\). This is clearly true once we observe that, in the sense of distributions,

$$\begin{aligned}&\lim _{h\downarrow 0}\frac{1}{\sqrt{h}}\left( \delta \chi _{i}^+ \int _{\nu _0\cdot z \ge 0} K_{ij}^h(z)(1-\delta \chi _j^-(\cdot - z)) dz \right. \nonumber \\&\qquad \qquad \quad \left. +(1- \delta \chi _{j}^-) \int _{\nu _0\cdot z \ge 0} K_{ij}^h(z)\delta \chi _i^+(\cdot - z) dz\right) \nonumber \\&\quad {=}\lim _{h\downarrow 0}\frac{1}{\sqrt{h}}\left( \delta \chi _{i}^+ \int _{\nu _0\cdot z \le 0} K_{ij}^h(z)(1-\delta \chi _j^-(\cdot - z)) dz \right. \nonumber \\&\qquad \qquad \qquad \quad \left. +(1- \delta \chi _{j}^-) \int _{\nu _0\cdot z \le 0} K_{ij}^h(z)\delta \chi _i^+(\cdot - z) dz\right) \end{aligned}$$
(129)

and that similar identities hold after exchanging the roles of i, j and of \(+, -\). That (129) holds is not difficult to show: multiplying the arguments of both limits by a test function \(\xi \in C^{\infty }_c([0,1)^d \times (0, T))\) and integrating over space-time, one observes that, since the kernel is even, the argument of the second limit is just a spatial shift (by z) of the first one. By translation invariance the spatial shift may be put onto the test function, and thanks to the scaling of the kernel one gets the claim. We may thus replace the first eight terms of the left hand side of (112) by twice the same terms with the z-integration restricted to the half space \(\{\nu _0 \cdot z \ge 0\}\). Relying again on the fact that \(\delta \chi _i^+ \in \{0,1\}\) and on identity (156) in the “Appendix”, we obtain (128), as claimed.

Now we need two inequalities for the integrand. First note that the integrand is a mixed space-time second-order finite difference. We claim that

$$\begin{aligned}&|\delta \chi _i^+ - \delta \chi _j^-(\cdot -z)| + |\delta \chi _i^+(\cdot -z) - \delta \chi _j^-|+|\delta \chi _i^- - \delta \chi _j^+(\cdot -z)| + |\delta \chi _i^-(\cdot -z) - \delta \chi _j^+| \nonumber \\&\quad \le {\left\{ \begin{array}{ll} |\delta \chi _i^+ - \delta \chi _i^+(\cdot -z)| + |\delta \chi _i^- - \delta \chi _i^-(\cdot -z)| + |\delta \chi _j^+ - \delta \chi _j^+(\cdot -z)| + |\delta \chi _j^- - \delta \chi _j^-(\cdot -z)| \\ \quad +\, 4\sum _{k\ne i,j} (|\delta \chi _k| + |\delta \chi _k(\cdot -z)|), \\ { } \\ |\delta \chi _i| + |\delta \chi _i(\cdot -z)| + |\delta \chi _j| + |\delta \chi _j(\cdot -z)|. \end{array}\right. } \end{aligned}$$
(130)

The second follows from the triangle inequality. To show the first one, observe that

$$\begin{aligned} {|\delta \chi _i^+ - \delta \chi _j^-(\cdot -z)|}&= (1-\delta \chi _i^+)\delta \chi _j^-(\cdot -z) + \delta \chi _i^+(1-\delta \chi _j^-(\cdot -z)) \nonumber \\ { }&\le (1-\delta \chi _i^+)\delta \chi _i^+(\cdot -z) + \sum _{k \ne i,j} |\delta \chi _k(\cdot -z)| + \delta \chi _j^-(1-\delta \chi _j^-(\cdot -z)) \nonumber \\&\quad + \sum _{k \ne i,j} |\delta \chi _k| \end{aligned}$$
(131)

and that similarly

$$\begin{aligned} {|\delta \chi _i^+(\cdot -z) - \delta \chi _j^-|} =&(1-\delta \chi _i^+(\cdot -z))\delta \chi _j^- + \delta \chi _i^+(\cdot -z)(1-\delta \chi _j^-) \nonumber \\ { }\le&(1-\delta \chi _i^+(\cdot -z))\delta \chi _i^+ + \sum _{k \ne i,j} |\delta \chi _k| + \delta \chi _j^-(\cdot -z)(1-\delta \chi _j^-) \nonumber \\&+ \sum _{k \ne i,j} |\delta \chi _k(\cdot -z)|. \end{aligned}$$
(132)

Summing up the two inequalities we get

$$\begin{aligned}&|\delta \chi _i^+ - \delta \chi _j^-(\cdot -z)| + |\delta \chi _i^+(\cdot -z) - \delta \chi _j^-| \nonumber \\&\quad \le |\delta \chi _i^+ - \delta \chi _i^+(\cdot -z)| + |\delta \chi _j^- - \delta \chi _j^-(\cdot -z)| + 2\sum _{k\ne i,j} \left( |\delta \chi _k| + |\delta \chi _k( \cdot -z)|\right) . \end{aligned}$$
(133)

Similar bounds hold for the remaining terms in (130).

We now split the integral (128) into the domains of integration \(\{0 \le \nu _0 \cdot z \le \alpha V_0\}\) and \(\{ \nu _0 \cdot z > \alpha V_0\}\). On the first one we use the first inequality in (130) for the integrand. Recalling identity (156) and inequality (157) in the “Appendix”, and using the fact that \(\sum _k \chi _k = 1\), we obtain

$$\begin{aligned}&2\int _{0 \le \nu _0 \cdot z \le \alpha V_0}K_{ij}^h(z)\bigg ( |\delta \chi _i^+ - \delta \chi _j^-(\cdot -z)| + |\delta \chi _i^+(\cdot -z) - \delta \chi _j^-| \nonumber \\&\qquad +|\delta \chi _i^- - \delta \chi _j^+(\cdot -z)| + |\delta \chi _i^-(\cdot -z) - \delta \chi _j^+| \bigg )dz \nonumber \\&\quad \le 2\int _{0 \le \nu _0 \cdot z \le \alpha V_0}K_{ij}^h(z)\bigg (|\chi _i - \chi _i(\cdot -z)| + |\chi _i(\cdot -\tau ) - \chi _i(\cdot -\tau ,\cdot -z)| \nonumber \\&\qquad +|\chi _j - \chi _j(\cdot -z)| + |\chi _j(\cdot -\tau ) - \chi _j(\cdot -\tau ,\cdot -z)| \nonumber \\&\qquad +8\sum _{k\ne i,j} |\delta \chi _k| + |\delta \chi _k(\cdot -z)|\bigg )dz\nonumber \\&\quad \le 2\int _{0 \le \nu _0 \cdot z \le \alpha V_0}K_{ij}^h(z)\left( \chi _i\chi _j(\cdot -z) + \chi _i(\cdot -z)\chi _j + \sum _{k\ne i,j}\chi _i\chi _k(\cdot -z) + \chi _i(\cdot -z)\chi _k \right. \nonumber \\&\qquad \left. + \chi _i(\cdot -\tau )\chi _j(\cdot -\tau , \cdot -z) + \chi _i(\cdot -\tau ,\cdot -z)\chi _j(\cdot -\tau ) \right. \nonumber \\&\qquad \left. + \sum _{k\ne i,j}\chi _i(\cdot -\tau )\chi _k(\cdot -\tau ,\cdot -z) + \chi _i(\cdot -\tau , \cdot -z)\chi _k(\cdot -\tau ) \right. \nonumber \\&\qquad \left. +\chi _j\chi _i(\cdot -z) + \chi _j(\cdot -z)\chi _i + \sum _{k\ne i,j}\chi _j\chi _k(\cdot -z) + \chi _j(\cdot -z)\chi _k \right. \nonumber \\&\qquad \left. +\chi _j(\cdot -\tau )\chi _i(\cdot -\tau , \cdot -z) + \chi _j(\cdot -\tau ,\cdot -z)\chi _i(\cdot -\tau ) \right. \nonumber \\&\qquad \left. + \sum _{k\ne i,j}\chi _j(\cdot -\tau )\chi _k(\cdot -\tau , \cdot -z) + \chi _j(\cdot -\tau , \cdot -z)\chi _k(\cdot -\tau ) \right. \nonumber \\ {}&\qquad \left. + 8\sum _{k\ne i,j} |\delta \chi _k| + |\delta \chi _k(\cdot -z)|\right) dz. \end{aligned}$$
(134)

On the set \(\{\nu _0 \cdot z > \alpha V_0\}\) we use the second inequality in (130), obtaining

$$\begin{aligned}&2\int _{ \nu _0 \cdot z> \alpha V_0} K_{ij}^h(z)\left( |\delta \chi _i^+ - \delta \chi _j^-(\cdot -z)| + |\delta \chi _i^+(\cdot -z) - \delta \chi _j^-|\right. \nonumber \\&\qquad \qquad \qquad \qquad \qquad \left. +|\delta \chi _i^- - \delta \chi _j^+(\cdot -z)| + |\delta \chi _i^-(\cdot -z) - \delta \chi _j^+| \right) dz \nonumber \\&\quad \le 2\int _{ \nu _0 \cdot z > \alpha V_0} K_{ij}^h(z)(|\delta \chi _i| + |\delta \chi _i(\cdot -z)| + |\delta \chi _j| + |\delta \chi _j(\cdot -z)|)dz. \end{aligned}$$
(135)

We now observe that for any \(1 \le k \le N\) we have, as we already observed in (99)

$$\begin{aligned} \lim _{h\downarrow 0} \frac{1}{\sqrt{h}}\int _{0 \le \nu _0 \cdot z \le \alpha V_0} K_{ij}^h(z)(|\delta \chi _k(\cdot -z)| - |\delta \chi _k|)dz = 0, \end{aligned}$$
(136)

thus in particular

$$\begin{aligned}&\limsup _{h \downarrow 0} \frac{1}{\sqrt{h}}\int _{0 \le \nu _0 \cdot z \le \alpha V_0} K_{ij}^h(z)|\delta \chi _k(\cdot -z)|dz \nonumber \\&\quad = \limsup _{h\downarrow 0}\frac{1}{\sqrt{h}}\int _{0 \le \nu _0 \cdot z \le \alpha V_0} K_{ij}^h(z)|\delta \chi _k|dz. \end{aligned}$$
(137)

By putting the time shift \(\tau \) on the test function it is easy to check that the terms of (134) which involve the shift \(\tau \) have the same distributional limit as the corresponding terms without the time shift. Thus, recalling (90) and relying on (136) and (44), we obtain that, after inserting (134) and (135) into (128), the left hand side of (112) is bounded by

$$\begin{aligned}&8\int _{0 \le \nu _0 \cdot z \le \alpha V_0}K_{ij}(z)((\nu _{ij}(x,t)\cdot z)_++(\nu _{ij}(x,t)\cdot z)_-){\mathcal {H}}^{d-1}_{|\Sigma _{ij}(t)}(dx)dt dz \\&\quad + C\sum _{k\ne i,j} \int _{0 \le \nu _0 \cdot z \le \alpha V_0}K_{ik}(z)((\nu _{ik}(x,t)\cdot z)_++(\nu _{ik}(x,t)\cdot z)_-){\mathcal {H}}^{d-1}_{|\Sigma _{ik}(t)}(dx)dtdz \\&\quad +C\sum _{k\ne i,j}(\alpha ^2\omega _k^{ac} + {\mathcal {H}}^{d-1}_{|\partial ^*\Omega _k(t)}(dx)dt), \end{aligned}$$

which clearly gives the claim once we realize that

$$\begin{aligned} \begin{aligned}&\int _{0 \le \nu _0 \cdot z \le \alpha V_0}K_{ik}(z)((\nu _{ik}(x)\cdot z)_++(\nu _{ik}(x)\cdot z)_-)dz \\&\quad \le 2\int _{{\mathbf {R}}^d}K_{ik}(z)|z|dz \le C. \end{aligned} \end{aligned}$$

\(\square \)

5.4 Proof of Proposition 2

Proof

The proof is along the same lines as Proposition 2 in [21], where the claim is shown in the case of two phases. For the convenience of the reader, we outline the strategy of the full proof, providing details only for the required changes. The proof is split into several steps.

step 1. The first observation is that for any \(h>0\), any admissible \(u \in {\mathcal {M}}\) and any smooth vector field \(\xi \) we have the following lower bound for the metric slope, cf. (33)

$$\begin{aligned} \frac{1}{2}|\partial E_h|^2(u) \ge \delta E_h(u)_\bullet \xi - \frac{1}{2}\left( \delta d_h(\cdot ,u)_\bullet \xi \right) ^2. \end{aligned}$$

Here \(\delta \) denotes the first variation, which is computed by considering the curve \(s \mapsto u^s\) of configurations solving the transport equations

$$\begin{aligned} {\left\{ \begin{array}{ll} \partial _su_i^s + \xi \cdot \nabla u_i^s = 0, \\ u_i^s(\cdot , 0) = u_i(\cdot ), \end{array}\right. } \end{aligned}$$
(138)

and by setting

$$\begin{aligned} \delta E_h(u)_\bullet \xi := \frac{d}{ds}_{|{s=0}} E_h(u^s)\ \text {and}\ \delta d_h(\cdot , u)_\bullet \xi := \frac{d}{ds}_{|{s=0}} d_h(u, u^s). \end{aligned}$$
(139)

step 2. The second observation is a representation formula for \(\delta E_h(u)_\bullet \xi \). Namely

$$\begin{aligned} \begin{aligned} \delta E_h(u)_\bullet \xi&= \sum _{i,j}\frac{1}{\sqrt{h}}\left( \int \nabla \cdot \xi u_iK_{ij}^h*u_jdx + \int \nabla \cdot \xi u_jK_{ij}^h*u_idx\right. \\&\qquad \qquad \qquad \left. + \int [\xi , \nabla K_{ij}^h*](u_j)u_i dx\right) . \end{aligned} \end{aligned}$$
(140)

Here \([\xi , \nabla K_{ij}^h*]\) denotes the commutator between multiplication by \(\xi \) and convolution with \(\nabla K_{ij}^h\); the definition is analogous to the one given in (79). To check this formula one starts by assuming u to be smooth; an approximation argument then gives the result for a general \(u \in {\mathcal {M}}\).
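
For smooth u, the computation behind (140) can be sketched as follows (assuming, as the formula itself suggests, that \(E_h(u) = \frac{1}{\sqrt{h}}\sum _{i,j}\int u_i K_{ij}^h*u_j\, dx\)): differentiating along (138) and integrating by parts,

$$\begin{aligned} \frac{d}{ds}_{|{s=0}}\int u_i^s K_{ij}^h*u_j^s\, dx&= -\int (\xi \cdot \nabla u_i)\, K_{ij}^h*u_j\, dx - \int u_i\, K_{ij}^h*(\xi \cdot \nabla u_j)\, dx \\&= \int \nabla \cdot \xi \, u_iK_{ij}^h*u_j\, dx + \int \nabla \cdot \xi \, u_jK_{ij}^h*u_i\, dx + \int [\xi , \nabla K_{ij}^h*](u_j)u_i\, dx, \end{aligned}$$

where the last equality uses the symmetry of \(K_{ij}^h\) (so that \(\int f\, K_{ij}^h*g\, dx = \int g\, K_{ij}^h*f\, dx\)) and the antisymmetry of \(\nabla K_{ij}^h\) to rewrite \(\int u_j\, \xi \cdot \nabla K_{ij}^h*u_i\, dx = -\int u_i\, \nabla K_{ij}^h*(\xi u_j)\, dx\).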

step 3. Representation for \(\delta d_h(\cdot , u)_\bullet \xi \). One checks that

$$\begin{aligned}&\frac{1}{2}\left( \delta d_h(\cdot , u)_\bullet \xi \right) ^2 \nonumber \\&\quad = \frac{\sqrt{h}}{2}\sum _{i,j}\left( \int u_i\xi \cdot \nabla ^2K_{ij}^h *(\xi u_j)dx + \int u_j\xi \cdot \nabla ^2K_{ij}^h *(\xi u_i)dx\right. \nonumber \\&\qquad \qquad \qquad \quad \left. +\int u_i \nabla \cdot \xi \nabla K_{ij}^h*(\xi u_j)dx + \int u_j \nabla \cdot \xi \nabla K_{ij}^h*(\xi u_i)dx\right. \nonumber \\&\qquad \qquad \qquad \quad \left. -\int u_i \nabla \cdot \xi K_{ij}^h*(u_j\nabla \cdot \xi )dx-\int u_j \nabla \cdot \xi K_{ij}^h*(u_i\nabla \cdot \xi )dx\right. \nonumber \\&\qquad \qquad \qquad \quad \left. -\int \xi u_i \nabla K_{ij}^h*(u_j\nabla \cdot \xi )dx-\int \xi u_j \nabla K_{ij}^h*(u_i\nabla \cdot \xi )dx\right) . \end{aligned}$$
(141)

Once again this formula can be easily checked when u is smooth; an approximation argument then gives the extension to the case \(u \in {\mathcal {M}}\).

step 4. Passage to the limit in \(\delta E_h\). We claim that

$$\begin{aligned} \lim _{h\downarrow 0} \int _0^T \delta E_h(u^h(t))_\bullet \xi dt = \sum _{i,j} \sigma _{ij}\int \left( \nabla \cdot \xi - \nu _{ij} \cdot \nabla \xi \nu _{ij}\right) {\mathcal {H}}^{d-1}_{|\Sigma _{ij}(t)}(dx)dt. \end{aligned}$$
(142)

The proof is very similar to the two-phase case, and relies on the weak convergences (44) and (45). Firstly, testing (44) with \(\nabla \cdot \xi \) we get

$$\begin{aligned} \begin{aligned}&\lim _{h\downarrow 0} \sum _{i,j}\frac{1}{\sqrt{h}}\int \left( \nabla \cdot \xi u_i^hK_{ij}^h*u_j^h + \nabla \cdot \xi u_j^hK_{ij}^h*u_i^h \right) dxdt \\&\quad =\sum _{i,j}2\sigma _{ij}\int \nabla \cdot \xi {\mathcal {H}}^{d-1}_{|\Sigma _{ij}(t)}(dx)dt. \end{aligned} \end{aligned}$$

For the term involving the commutator, one checks that

$$\begin{aligned} \lim _{h\downarrow 0} \frac{1}{\sqrt{h}}\left( \int [\xi ,\nabla K_{ij}^h*](u_j^h)u_i^hdxdt - \int \nabla \xi z\cdot \nabla K_{ij}^h(z)u_j^h(x-z,t)u_i^h(x,t)dzdxdt \right) = 0. \end{aligned}$$

With this in place, we observe that

$$\begin{aligned} \begin{aligned}&\lim _{h \downarrow 0}\frac{1}{\sqrt{h}}\int \nabla \xi z\cdot \nabla K_{ij}^h(z)u_j^h(x-z,t)u_i^h(x,t)dzdxdt \\&\quad = \int \nabla \xi (x,t)z \cdot \nabla K_{ij}(z)(\nu _{ij}(x,t)\cdot z)_{+} {\mathcal {H}}^{d-1}_{|\Sigma _{ij}(t)}(dx)dt \end{aligned} \end{aligned}$$

which can be seen by testing (44) with \(\frac{\nabla \xi z \cdot \nabla K_{ij}(z)}{K_{ij}(z)}\), which is of polynomial growth in z. To conclude (142) we just need to show that for any symmetric matrix \(A \in {\mathbf {R}}^{d\times d}\) and any unit vector \(\nu \) we have

$$\begin{aligned} \int Az \cdot \nabla K_{ij}(z)(\nu \cdot z)_{+}dz = -\sigma _{ij}\left( {\text {tr}}A + \nu \cdot A\nu \right) . \end{aligned}$$

Using the definition of the kernel \(K_{ij}\) it suffices to show that

$$\begin{aligned} \int Az \cdot \nabla G_{w}(z)(\nu \cdot z)_{+}dz = -\frac{\sqrt{w}}{\sqrt{\pi }}\left( {\text {tr}}A + \nu \cdot A\nu \right) \ w \in \{\gamma , \beta \}. \end{aligned}$$
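
A sketch of this computation, assuming the standard normalization of the heat kernel, \(G_w(z) = (4\pi w)^{-d/2}e^{-|z|^2/(4w)}\) (i.e. a centered Gaussian with covariance \(2w\,\mathrm {Id}\), consistent with the hyperplane integral recorded at the end of step 5): integrating by parts,

$$\begin{aligned} \int Az \cdot \nabla G_{w}(z)(\nu \cdot z)_{+}dz = -\int G_w(z)\, \nabla \cdot \left( Az\, (\nu \cdot z)_+\right) dz = -\int G_w(z)\left( {\text {tr}}A\, (\nu \cdot z)_+ + (Az\cdot \nu )\,{\mathbf {1}}_{\{\nu \cdot z > 0\}}\right) dz, \end{aligned}$$

and for such a Gaussian one computes \(\int (\nu \cdot z)_+G_w(z)\, dz = \sqrt{w/\pi }\) and \(\int (Az\cdot \nu )\,{\mathbf {1}}_{\{\nu \cdot z>0\}}G_w(z)\, dz = (\nu \cdot A\nu )\sqrt{w/\pi }\), which gives the claim.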

step 5. Passage to the limit in \(\delta d_h(\cdot , u)_\bullet \xi \). We claim that

$$\begin{aligned} \begin{aligned} \lim _{h\downarrow 0} \frac{1}{2}\left( \delta d_h(\cdot , u^h)_\bullet \xi \right) ^2 = \sum _{i,j}\frac{1}{2\mu _{ij}}\int (\xi \cdot \nu _{ij})^2 {\mathcal {H}}^{d-1}_{|\Sigma _{ij}(t)}(dx)dt. \end{aligned} \end{aligned}$$
(143)

To prove this, we observe that the terms which do not involve the Hessian \(\nabla ^2 K_{ij}^h\) are all \(O(\sqrt{h})\). For example, to prove that

$$\begin{aligned} \sqrt{h}\int u_i^h\nabla \cdot \xi \nabla K_{ij}^h *(\xi u_j^h)dxdt = O(\sqrt{h}), \end{aligned}$$
(144)

spell out the integral in the convolution, use the fact that \(\nabla K_{ij}^h = \frac{1}{\sqrt{h}^{d+1}}\nabla K_{ij}(\frac{z}{\sqrt{h}})\), use the fact that \(\nabla \cdot \xi (x,t)\xi (x-\sqrt{h}z,t)\) is bounded and test (44) with \(\nabla K_{ij} / K_{ij}\). The other terms can be treated similarly. For the terms involving the Hessian of the kernel, we split the claim into

$$\begin{aligned} {\lim _{h\downarrow 0}\sqrt{h}\int u_i^h(\xi \cdot \nabla ^2K_{ij}^h *u_j)\xi dx dt}&= \frac{1}{2\mu _{ij}}\int (\xi \cdot \nu _{ij}(x,t))^2{\mathcal {H}}^{d-1}_{|\Sigma _{ij}(t)}(dx)dt, \end{aligned}$$
(145)
$$\begin{aligned} {\sqrt{h}\int u_i^h\xi \cdot [\xi , \nabla ^2K_{ij}^h*](u_j^h)dxdt}&= O(\sqrt{h}). \end{aligned}$$
(146)

The proof of (146) is similar to the argument for (144). In fact, while the additional derivative on the kernel gives an additional factor \(\frac{1}{\sqrt{h}}\), we gain a factor \(\sqrt{h}\) by the Lipschitz estimate

$$\begin{aligned} |\xi (x, t) - \xi (x - \sqrt{h}z, t)| \le \sqrt{h} \Vert \nabla \xi \Vert _{\infty }. \end{aligned}$$
(147)

To prove identity (145), observe that by spelling out the z-integral, changing variables, and testing (44) with \(\frac{\xi (x,t)\cdot \nabla ^2 K_{ij}(z)\xi (x, t)}{K_{ij}(z)}\), we obtain

$$\begin{aligned} \begin{aligned}&\lim _{h\downarrow 0}\sqrt{h}\int u_i^h(\xi \cdot \nabla ^2K_{ij}^h *u^h_j)\xi dx dt \\&\quad = \int \xi \cdot \nabla ^2K_{ij}(z) \xi (\nu _{ij}(x,t)\cdot z)_+ {\mathcal {H}}^{d-1}_{|\Sigma _{ij}(t)}(dx)dt. \end{aligned} \end{aligned}$$

Now identity (143) follows from the following formula: For any two vectors \(\xi \in {\mathbf {R}}^d\) and \(\nu \in {\mathbf {S}}^{d-1}\) we have

$$\begin{aligned} \int \xi \cdot \nabla ^2K_{ij}(z) \xi (\nu \cdot z)_+ dz = \frac{1}{2\mu _{ij}}(\xi \cdot \nu )^2. \end{aligned}$$
(148)

To check (148), by relying on the definition of the kernels, we just need to show that for \(w \in \{\gamma , \beta \}\)

$$\begin{aligned} \int \xi \cdot \nabla ^2G_w(z) \xi (\nu \cdot z)_+ dz = \frac{1}{2\sqrt{\pi w}}(\xi \cdot \nu )^2. \end{aligned}$$

Since the kernel is isotropic, we can reduce to the case \(\xi = e_1\), thus we need to prove

$$\begin{aligned} \int \partial ^2_1G_w(z)(\nu \cdot z)_+ dz = \frac{1}{2\sqrt{\pi w}}\nu _1^2. \end{aligned}$$

This can be done by integrating by parts twice and observing that

$$\begin{aligned} \int _{\nu \cdot z = 0} G_w(z) {\mathcal {H}}^{d-1}(dz) = \frac{1}{2\sqrt{\pi w}}. \end{aligned}$$
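
As a purely numerical sanity check (not part of the argument), the following Python snippet verifies the two Gaussian identities used in steps 4 and 5 by Monte Carlo integration. It assumes the standard heat-kernel normalization \(G_w(z) = (4\pi w)^{-d/2}e^{-|z|^2/(4w)}\), i.e. a centered Gaussian with covariance \(2w\,\mathrm {Id}\), which is the normalization consistent with the hyperplane integral displayed above.

```python
import numpy as np

# Monte Carlo check of the Gaussian identities of steps 4 and 5, assuming
# G_w(z) = (4*pi*w)^(-d/2) * exp(-|z|^2/(4w)), i.e. a centered Gaussian with
# covariance 2*w*Id (whose hyperplane integral is 1/(2*sqrt(pi*w))).
rng = np.random.default_rng(0)
d, w, n = 3, 0.7, 2_000_000
nu = np.array([0.3, -0.5, 0.8]); nu /= np.linalg.norm(nu)            # arbitrary unit vector
A = np.array([[1.0, 0.2, 0.0], [0.2, 2.0, -0.3], [0.0, -0.3, 0.5]])  # arbitrary symmetric matrix

Z = rng.normal(scale=np.sqrt(2.0 * w), size=(n, d))                  # samples with density G_w
pos = np.maximum(Z @ nu, 0.0)                                        # (nu . z)_+

# Step 4:  int A z . grad G_w(z) (nu . z)_+ dz = -sqrt(w/pi) * (tr A + nu . A nu).
# Since grad G_w(z) = -z/(2w) * G_w(z), the integral is an expectation under G_w.
lhs_step4 = np.mean(-(np.einsum('ij,ij->i', Z @ A, Z) / (2.0 * w)) * pos)
rhs_step4 = -np.sqrt(w / np.pi) * (np.trace(A) + nu @ A @ nu)

# Step 5:  int d^2/dz_1^2 G_w(z) (nu . z)_+ dz = nu_1^2 / (2*sqrt(pi*w)).
# Here d^2/dz_1^2 G_w(z) = (z_1^2/(4 w^2) - 1/(2 w)) * G_w(z).
lhs_step5 = np.mean((Z[:, 0] ** 2 / (4.0 * w ** 2) - 1.0 / (2.0 * w)) * pos)
rhs_step5 = nu[0] ** 2 / (2.0 * np.sqrt(np.pi * w))

print(lhs_step4, rhs_step4)   # each printed pair agrees up to Monte Carlo error
print(lhs_step5, rhs_step5)
```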

conclusion. By step 1 we have

$$\begin{aligned} \frac{1}{2}\int _0^T|\partial E_h|^2(u^h)\ dt \ge \int _0^T \delta E_h(u^h)_\bullet \xi dt - \frac{1}{2}\int _0^T \left( \delta d_h(\cdot , u^h)_\bullet \xi \right) ^2dt. \end{aligned}$$

Taking the liminf on the left hand side, using step 4 and step 5 we get that for any smooth vector field \(\xi \)

$$\begin{aligned} \begin{aligned} \liminf _{h\downarrow 0} \frac{1}{2}\int _0^T|\partial E_h|^2(u^h) dt&\ge \sum _{i,j}\bigg [ \sigma _{ij}\int \left( \nabla \cdot \xi - \nu _{ij}\cdot \nabla \xi \nu _{ij}\right) {\mathcal {H}}^{d-1}_{|\Sigma _{ij}(t)}(dx)dt \\&\qquad \qquad \, - \frac{1}{2\mu _{ij}} \int (\xi \cdot \nu _{ij})^2\ {\mathcal {H}}^{d-1}_{|\Sigma _{ij}(t)}(dx)dt \bigg ]. \end{aligned} \end{aligned}$$

Since the left hand side is bounded, replacing \(\xi \) by \(\lambda \xi \) and optimizing over \(\lambda > 0\) shows that \(\xi \mapsto \sum _{i,j}\sigma _{ij}\int \left( \nabla \cdot \xi - \nu _{ij}\cdot \nabla \xi \nu _{ij}\right) {\mathcal {H}}^{d-1}_{|\Sigma _{ij}(t)}(dx)dt\) is a bounded linear functional with respect to the \(L^2({\mathcal {H}}^{d-1}_{|\bigcup _{i,j}\Sigma _{ij}(t)}(dx)dt)\)-norm; the Riesz representation theorem for \(L^2\) then yields functions \(H_{ij} \in L^2({\mathcal {H}}^{d-1}_{|\Sigma _{ij}(t)}(dx)dt)\) such that

$$\begin{aligned} \sum _{i,j}\sigma _{ij}\int \left( \nabla \cdot \xi - \nu _{ij}\cdot \nabla \xi \nu _{ij}\right) \ {\mathcal {H}}^{d-1}_{|\Sigma _{ij}(t)}(dx)dt =-\sum _{i,j}\sigma _{ij}\int H_{ij}\nu _{ij} \cdot \xi \ {\mathcal {H}}^{d-1}_{|\Sigma _{ij}(t)}(dx)dt \end{aligned}$$

and such that for any \(\xi \in L^2({\mathcal {H}}^{d-1}_{|\bigcup _{i,j}\Sigma _{ij}(t)}(dx)dt)\)

$$\begin{aligned} \begin{aligned} \liminf _{h\downarrow 0} \frac{1}{2}\int _0^T|\partial E_h|^2(u^h)\ dt&\ge \sum _{i,j}\bigg ( -\sigma _{ij}\int H_{ij}\nu _{ij}\cdot \xi {\mathcal {H}}^{d-1}_{|\Sigma _{ij}(t)}(dx)dt \\&\qquad \qquad - \frac{1}{2\mu _{ij}} \int (\xi \cdot \nu _{ij})^2 {\mathcal {H}}^{d-1}_{|\Sigma _{ij}(t)}(dx)dt \bigg ). \end{aligned} \end{aligned}$$

Since the integration measures are mutually singular we can test with a vector field \(\xi \in L^2({\mathcal {H}}^{d-1}_{|\bigcup _{i,j}\Sigma _{ij}(t)}(dx)dt)\) such that \(\xi _{|\Sigma _{ij}(t)} = -\mu _{ij}\sigma _{ij}H_{ij}\nu _{ij}\). This yields

$$\begin{aligned} \begin{aligned} \liminf _{h\downarrow 0} \frac{1}{2}\int _0^T|\partial E_h|^2(u_h)\ dt \ge&\frac{1}{2}\sum _{i,j}\sigma _{ij}^2\mu _{ij}\int H_{ij}^2\ {\mathcal {H}}^{d-1}_{|\Sigma _{ij}(t)}(dx)dt. \end{aligned} \end{aligned}$$
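
Indeed, for this choice of \(\xi \) one has \(\xi \cdot \nu _{ij} = -\mu _{ij}\sigma _{ij}H_{ij}\) on \(\Sigma _{ij}(t)\), so that each summand \(-\sigma _{ij}\int H_{ij}\nu _{ij}\cdot \xi \,{\mathcal {H}}^{d-1}_{|\Sigma _{ij}(t)}(dx)dt - \frac{1}{2\mu _{ij}}\int (\xi \cdot \nu _{ij})^2\,{\mathcal {H}}^{d-1}_{|\Sigma _{ij}(t)}(dx)dt\) equals

$$\begin{aligned} \sigma _{ij}^2\mu _{ij}\int H_{ij}^2\ {\mathcal {H}}^{d-1}_{|\Sigma _{ij}(t)}(dx)dt - \frac{1}{2}\sigma _{ij}^2\mu _{ij}\int H_{ij}^2\ {\mathcal {H}}^{d-1}_{|\Sigma _{ij}(t)}(dx)dt = \frac{1}{2}\sigma _{ij}^2\mu _{ij}\int H_{ij}^2\ {\mathcal {H}}^{d-1}_{|\Sigma _{ij}(t)}(dx)dt. \end{aligned}$$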

\(\square \)