1 Introduction

A long-standing open problem in the study of Calabi-Yau manifolds is whether, given a Lagrangian submanifold, one can find a special Lagrangian in its homology or Hamiltonian isotopy class. Special Lagrangians are always area-minimising, so one way to approach the existence problem is to try to minimise area among all Lagrangians in a given class. This minimisation problem turns out to be very subtle and fraught with difficulties. Indeed, Schoen and Wolfson [14] showed that when the real dimension is 4, given a particular class one can find a Lagrangian minimising area among Lagrangians in that class, but that the minimiser need not be a special Lagrangian. Later Wolfson [20] found a K3 surface and a Lagrangian sphere in this surface such that the area minimiser among Lagrangians in the homology class of the sphere is not special Lagrangian, while the area minimiser among all surfaces in that class is not Lagrangian.

An alternative way of approaching the problem is to consider mean curvature flow. Mean curvature flow is a geometric evolution of submanifolds where the velocity at any point is given by the mean curvature vector. This can also be seen as the gradient descent for the area functional. Smoczyk showed in [15] that the Lagrangian condition is preserved by mean curvature flow if the ambient space is Kähler–Einstein, and consequently mean curvature flow has been proposed as a means of constructing special Lagrangians. In order to flow to a special Lagrangian, one would need to show that the flow exists for all time. This however can’t be expected in general, as finite time singularities abound. See for example Neves [13]. For a nice overview on what is known about singularities of Lagrangian mean curvature flow, we refer the reader to the survey paper of Neves [12].

A natural question is whether it might be possible to continue the flow in a weaker sense once a singularity develops and, in doing so, to push through the singularity. Since all special Lagrangians are zero-Maslov class, and the Maslov class is preserved by Lagrangian mean curvature flow, the mean curvature flow of zero-Maslov class Lagrangians is of particular interest. In this case the structure of singularities is relatively well understood: Neves [11] has shown that a singularity of zero-Maslov class Lagrangian mean curvature flow must be asymptotic to a union of special Lagrangian cones. We note that in \(\mathbb {C}^2\) every such union is simply a union of Lagrangian planes, and so the case we consider in the theorem below is not overly restrictive. In this paper we consider the simplest such singularity, namely that in which each singularity is asymptotic to the union of two non-area-minimising, transversally intersecting Lagrangian planes. Specifically we prove the following theorem, which serves as a partial answer to Problem 3.14 in [8].

Theorem

(Short-time existence) Suppose that \(L\subset \mathbb {C}^n\) is a compact Lagrangian submanifold of \(\mathbb {C}^n\) with a finite number of singularities, each of which is asymptotic to a pair of transversally intersecting planes \(P_1 + P_2\), where neither \(P_1 + P_2\) nor \(P_1 - P_2\) is area-minimising. Then there exists \(T > 0\) and a Lagrangian mean curvature flow \((L_t)_{0<t<T}\) such that as \(t \searrow 0\), \(L_t \rightarrow L\) as varifolds and in \(C^\infty _\mathrm{loc}\) away from the singularities.

We remark that the assumptions \(L \subset \mathbb {C}^n\) and L compact are made to simplify the analysis in the sequel; however, since the analysis is entirely local in nature, we may relax these to \(L \subset M\) for some Calabi-Yau manifold M, and to L non-compact provided, in the latter case, that we impose suitable conditions at infinity.

In the one-dimensional case all curves are Lagrangian. In [5], Ilmanen, Neves and Schulze considered the flow of planar networks, that is, finite unions of embedded line segments of non-zero length meeting only at their endpoints. They showed that there exists a flow of regular networks, that is, networks in which at any meeting point exactly three line segments come together at angles of \(2\pi /3\), starting at any initial non-regular network. To do so they performed a gluing procedure to obtain an approximating family of regular initial conditions, and proved uniform estimates on the corresponding flows, allowing them to pass to a limit of flows to prove the result. The proof here is based heavily on their arguments, and many of the calculations we do are similar to those in that paper. To prove the short-time existence, we construct a smooth approximating family \(L^s\) of initial conditions via a surgery procedure. Specifically, we take a singularity asymptotic to some non-area-minimising pair of planes \(P_1 + P_2\), cut it out and glue in a piece of the Lagrangian self-expander asymptotic to those planes, at a scale determined by s. For full details see Sect. 7. Each of these approximating Lagrangians is smooth, and hence standard short-time existence theory gives a smooth Lagrangian mean curvature flow \(L^s_t\) corresponding to each s. As \(s\rightarrow 0\) the curvature of \(L^s\) blows up, so the existence time of the flows \(L^s_t\) guaranteed by the standard theory goes to zero. Instead we prove uniform estimates on the Gaussian density ratios of \(L^s_t\), which, combined with the local regularity result of White [19], provide uniform curvature estimates, interior in time, on the flows \(L^s_t\); from these we obtain a uniform time of existence, allowing us to pass to a limit of flows and prove the main result.

There are two key components in the proof of the estimates on the Gaussian density ratios. The first is a stability result for self-expanding solutions of Lagrangian mean curvature flow. More specifically, we show that if a Lagrangian is weakly close to a Lagrangian self-expander in an \(L^2\) sense, then it is close in a stronger \(C^{1,\alpha }\) sense. The proof of this stability result depends crucially on a uniqueness result for zero-Maslov smooth self-expanders asymptotic to transverse pairs of planes due to Lotay and Neves [10] and Imagi et al. [6]. The second component is a monotonicity formula for the self-expander equation, which allows us to show that the approximating family of initial conditions that we construct in the proof, which are self-expanders in a ball, remain weakly close to the self-expander for a short time. The combination of these results tells us that the evolution of the approximating flows is close to the evolution of the self-expander near the singularity. Since self-expanders move by dilation, we have good curvature control on the self-expander, and hence estimates on the Gaussian densities of the approximating flow.

Organisation The paper is organised as follows. In Sect. 2 we recall key definitions and results. In Sect. 3 we derive evolution equations and monotonicity formulas for geometric quantities under the flow. In Sect. 4 we prove the stability result mentioned above. Section 5 contains the proof of the main theorem, which gives uniform estimates on the Gaussian density ratios of the approximating family near the singularity. From this we get uniform estimates, interior in time, on the curvature of the approximating family, which allows us to appeal to a compactness argument. Section 6 contains the proof of the short-time existence result itself. Section 7 details the construction of the approximating family used in the proof of the main theorem. Finally the appendix, Sect. A, contains miscellaneous technical results, including Ecker–Huisken style curvature estimates for high-codimension mean curvature flow.

2 Preliminaries

2.1 Mean curvature flow

Let \(M^n \subset \mathbb {R}^{n+k}\) be an n-dimensional embedded submanifold of \(\mathbb {R}^{n+k}\). A mean curvature flow is a one parameter family of immersions \(F:M\times [0,T)\rightarrow \mathbb {R}^{n+k}\) such that the normal velocity at any point is given by the mean curvature vector, that is

$$\begin{aligned} \frac{dF}{dt} = \vec {H}. \end{aligned}$$

Of particular interest to us are self-expanders. These are submanifolds \(M \subset \mathbb {R}^{n+k}\) satisfying the elliptic equation

$$\begin{aligned} \vec {H} - x^\perp = 0, \end{aligned}$$

where \((\cdot )^\perp \) is the projection to the normal space. In this case one can show that \(M_t = \sqrt{2t}M\) is a solution of mean curvature flow.
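Indeed, a quick way to verify this: parametrising \(M_t\) by \(F(x,t) = \sqrt{2t}\,x\) for \(x \in M\), and using that the mean curvature vector scales like the inverse of length under dilations, we have

$$\begin{aligned} \left( \frac{dF}{dt}\right) ^\perp = \left( \frac{x}{\sqrt{2t}}\right) ^\perp = \frac{x^\perp }{\sqrt{2t}} = \frac{\vec {H}_M(x)}{\sqrt{2t}} = \vec {H}_{M_t}(\sqrt{2t}\,x), \end{aligned}$$

so the normal velocity of the family \(M_t\) is exactly its mean curvature.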

A fundamental tool in the analysis of mean curvature flow is the Gaussian density. We first define the backwards heat kernel \(\rho _{(x_0,t_0)}\) as follows

$$\begin{aligned} \rho _{(x_0,t_0)}(x,t) := \frac{1}{(4\pi (t_0 - t))^{n/2}}\exp \left( -\frac{|x-x_0|^2}{4(t_0 - t)}\right) . \end{aligned}$$

Next, for a mean curvature flow \((M_t)_{0\le t<T}\) we define the Gaussian density ratio centred at \((x_0,t_0)\) and at scale r by

$$\begin{aligned} \Theta (x_0,t_0,r)&:= \int _{M_{t_0 - r^2}} \rho _{(x_0,t_0)}(x,t_0 - r^2)d\mathcal {H}^n(x) \\&= \int _{M_{t_0 - r^2}} \frac{1}{(4\pi r^2)^{n/2}}\exp \left( -\frac{|x-x_0|^2}{4r^2}\right) d\mathcal {H}^n(x) \end{aligned}$$

which is defined for \(0 < t_0 \le T\), \(0 < r \le \sqrt{t_0}\) and any \(x_0 \in \mathbb {R}^{n+k}\). Huisken [4] proved the following monotonicity formula.

Theorem 2.1

(Monotonicity Formula) If \((M_t)_{0\le t<t_0}\) is a mean curvature flow, then

$$\begin{aligned} \frac{d}{dt}\int _{M_t} \rho _{(x_0,t_0)}(x,t)d\mathcal {H}^n(x) = -\int _{M_t} \left| \vec {H} - \frac{(x_0 - x)^\perp }{2(t_0 - t)}\right| ^2\rho _{(x_0,t_0)}(x,t)d\mathcal {H}^n(x). \end{aligned}$$

In particular, since the integral \(\int _{M_t} \rho _{(x_0,t_0)}(x,t)d\mathcal {H}^n(x)\) is non-increasing in t and \(t_0 - r^2\) decreases as r increases, it follows that \(\Theta (x_0,t_0,r)\) is non-decreasing in r. Consequently we can define the Gaussian density as

$$\begin{aligned} \Theta (x_0,t_0) := \lim _{r\searrow 0} \Theta (x_0,t_0,r). \end{aligned}$$
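As a simple illustrative example, if the flow is a static n-plane \(P \subset \mathbb {R}^{n+k}\), then writing \(d := \mathrm {dist}(x_0, P)\) a direct computation with the Gaussian integral gives

$$\begin{aligned} \Theta (x_0,t_0,r) = \int _P \frac{1}{(4\pi r^2)^{n/2}}\exp \left( -\frac{|x-x_0|^2}{4r^2}\right) d\mathcal {H}^n(x) = e^{-d^2/4r^2} \le 1, \end{aligned}$$

with equality precisely when \(x_0 \in P\); in particular every point of the plane has Gaussian density 1.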

One can show that \((x_0,t_0)\) is a regular point of the flow if and only if \(\Theta (x_0,t_0) = 1\). The following local regularity theorem of White [19] says that if the density ratios are close to 1, then that is enough to get curvature estimates.

Theorem 2.2

(Local regularity) Let \(\tau > 0\). There are constants \(\varepsilon _0(n,k) > 0\) and \(C_0(n,k,\tau ) < \infty \) such that if \(\partial M_t \cap B_{2r}(x_0) = \emptyset \) for \(t \in [0,r^2)\) and

$$\begin{aligned} \Theta (x,t,\rho ) \le 1 + \varepsilon _0 \quad \rho \le \tau \sqrt{t}, \, x\in B_{2r}(x_0), \, t \in [0,r^2), \end{aligned}$$

then

$$\begin{aligned} |A|(x,t) \le \frac{C_0}{\sqrt{t}} \quad x \in M_t \cap B_r(x_0), \, t\in [0,r^2), \end{aligned}$$

where \(A(x,t)\) is the second fundamental form of \(M_t\) at the point x.

Finally, we introduce what it means for two manifolds to be \(\varepsilon \)-close in \(C^{1,\alpha }\). Given an open set \(U\subset \mathbb {R}^{n+k}\) and two n-dimensional submanifolds \(\Sigma \) and L of U, we say that \(\Sigma \) and L are 1-close in \(C^{1,\alpha }(W)\), for \(W \subset U\) with \(\mathrm {dist}(W, \partial U) \ge 1\), if for every \(x \in W\) the sets \(B_1(x) \cap \Sigma \) and \(B_1(x) \cap L\) are both graphical over some common n-dimensional plane and, denoting the respective graph functions by u and v, we have \(\Vert u - v\Vert _{1,\alpha } \le 1\). We then say that \(\Sigma \) and L are \(\varepsilon \)-close in \(C^{1,\alpha }(W)\) if, after rescaling by a factor \(1/\varepsilon \), the submanifolds \(\varepsilon ^{-1}\Sigma \) and \(\varepsilon ^{-1}L\) are 1-close in \(C^{1,\alpha }(\varepsilon ^{-1}W)\), where now \(\mathrm {dist}(\varepsilon ^{-1}W, \partial (\varepsilon ^{-1}U)) \ge 1\).

2.2 Lagrangian submanifolds and Lagrangian mean curvature flow

We consider \(\mathbb {C}^n\) with the standard complex coordinates \(z_j = x_j + iy_j\). We will often identify \(\mathbb {C}^n\) with \(\mathbb {R}^{2n}\). We let J denote the standard complex structure on \(\mathbb {C}^n\) and \(\omega \) the standard symplectic form on \(\mathbb {C}^n\), defined by

$$\begin{aligned} \omega = \sum _{i=1}^n \mathrm {d}x_i \wedge \mathrm {d}y_i. \end{aligned}$$

We say that a smooth n-dimensional submanifold \(L \subset \mathbb {C}^n\) is Lagrangian if \(\omega |_L = 0\). We also consider the closed n-form \(\Omega \), called the holomorphic volume form, defined by

$$\begin{aligned} \Omega := dz_1\wedge \dots \wedge dz_n. \end{aligned}$$

On any oriented Lagrangian L a simple computation shows that \(\Omega |_L = e^{i\theta _L} \mathrm {vol}_L\), where \(\mathrm {vol}_L\) is the volume form on L. We call \(e^{i\theta _L}:L\rightarrow S^1\) the Lagrangian phase, and \(\theta _L\) the Lagrangian angle, which may be a multi-valued function. We henceforth suppress the subscript L. In the case that \(\theta \) is a single-valued function, we say that the Lagrangian L is zero-Maslov. An equivalent condition is \([d\theta ] = 0\), that is, the closed form \(d\theta \) is exact. If \(\theta \equiv \theta _0\) is constant, then we say that L is special Lagrangian. In this case L is calibrated by \(\mathrm {Re}(e^{-i\theta _0}\Omega )\), and hence is area-minimising in its homology class.
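For example, a Lagrangian plane of the form \(P_\phi := \{(e^{i\phi _1}x_1, \dots , e^{i\phi _n}x_n) : x_j \in \mathbb {R}\}\) has

$$\begin{aligned} \Omega |_{P_\phi } = e^{i(\phi _1 + \dots + \phi _n)}\, dx_1\wedge \dots \wedge dx_n = e^{i(\phi _1 + \dots + \phi _n)}\,\mathrm {vol}_{P_\phi }, \end{aligned}$$

so its Lagrangian angle is the constant \(\phi _1 + \dots + \phi _n\), and every such plane is special Lagrangian (for the appropriate phase).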

We also consider the Liouville form \(\lambda \) on \(\mathbb {C}^n\) defined by

$$\begin{aligned} \lambda := \sum _{j=1}^n x_jdy_j - y_jdx_j. \end{aligned}$$

A simple calculation verifies that \(d\lambda = 2\omega \). If there exists some function \(\beta \) on L such that \(\lambda |_L = d\beta \) then we say that L is exact. In this paper we will be more interested in local exactness, that is, in primitives of \(\lambda |_L\) defined only on some open subset of L. It can be shown that any smooth Lagrangian is locally exact.
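A short way to see this: \(d(\lambda |_L) = (d\lambda )|_L = 2\omega |_L = 0\), so \(\lambda |_L\) is a closed 1-form and the Poincaré lemma provides a primitive on any ball in L. As a concrete example, on a Lagrangian plane P through the origin we have, for \(x \in P\) and \(\tau \in T_xP = P\),

$$\begin{aligned} \lambda (\tau ) = \langle Jx, \tau \rangle = \omega (x, \tau ) = 0, \end{aligned}$$

so \(\lambda |_P = 0\) and one may take \(\beta \equiv 0\); in particular P is exact.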

The following remarkable property of smooth Lagrangians relates the Lagrangian angle and mean curvature vector (see for example [16])

$$\begin{aligned} \vec {H} = J\nabla \theta . \end{aligned}$$

Consequently we see that the smooth minimal Lagrangians are exactly the smooth special Lagrangians.

A Lagrangian mean curvature flow in \(\mathbb {C}^n\) is a mean curvature flow \((L_t)_{0\le t<T}\) with \(L_0\) Lagrangian. As proved by Smoczyk [15], it turns out that the Lagrangian condition is preserved by the mean curvature flow.

3 Evolution equations and monotonicity formulas

In this section we compute evolution equations for different geometric quantities under the flow, and then use these to prove a local monotonicity formula for a primitive of the expander equation.

Lemma 3.1

Let \((L_t)_{0\le t<T}\) be a Lagrangian mean curvature flow in \(\mathbb {C}^n\). The following evolution equations hold.

  1. (i)

    \(\frac{d\theta _t}{d t} = \Delta \theta _t\), where \(\theta _t\) is the Lagrangian angle for \(L_t\). Note that since only derivatives of \(\theta _t\) appear here, this does not require the assumption that the flow is zero-Maslov.

  2. (ii)

    In an open set where the flow is zero-Maslov and exact with \(\beta _t\) a primitive for the Liouville form, \(\frac{d\beta _t}{d t} = \Delta \beta _t - 2\theta _t\).

  3. (iii)

    \(\left( \frac{d \rho _{(x_0,t_0)}}{d t} + \Delta \rho _{(x_0,t_0)}\right) - |\vec {H}|^2\rho _{(x_0,t_0)}= - \left| \vec {H} - \frac{(x_0 - x)^\perp }{2(t_0 -t)}\right| ^2\rho _{(x_0,t_0)}\).

Remark

In particular, from part (iii) we have

$$\begin{aligned} \left( \frac{d \rho _{(x_0,t_0)}}{d t} + \Delta \rho _{(x_0,t_0)}\right) - |\vec {H}|^2\rho _{(x_0,t_0)} \le 0 \end{aligned}$$

Proof

(i) Differentiating the holomorphic volume form \(\Omega \) and using Cartan’s formula we have

$$\begin{aligned} \frac{d\Omega }{d t}&= \mathcal {L}_{\vec {H}}\Omega = d(\vec {H} \lrcorner \Omega ) = d(ie^{i\theta _t}\nabla \theta _t \lrcorner \mathrm {vol}_{L_t}) \\&= ie^{i\theta _t}d(\nabla \theta _t \lrcorner \mathrm {vol}_{L_t}) - e^{i\theta _t}d\theta _t\wedge (\nabla \theta _t \lrcorner \mathrm {vol}_{L_t}) \\&= ie^{i\theta _t}\mathrm {div}(\nabla \theta _t)\mathrm {vol}_{L_t} - e^{i\theta _t}d\theta _t\wedge (\nabla \theta _t \lrcorner \mathrm {vol}_{L_t}). \end{aligned}$$

On the other hand

$$\begin{aligned} \frac{d\Omega }{d t} = \frac{d}{d t}\left( e^{i\theta _t}\mathrm {vol}_{L_t}\right) = ie^{i\theta _t}\frac{d\theta _t}{d t}\mathrm {vol}_{L_t} + e^{i\theta _t}\frac{d}{d t}\mathrm {vol}_{L_t}. \end{aligned}$$

Comparing real and imaginary parts we have (i).
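Explicitly, dividing both expressions for \(\frac{d\Omega }{d t}\) by \(e^{i\theta _t}\) and comparing coefficients of \(\mathrm {vol}_{L_t}\) gives

$$\begin{aligned} \frac{d\theta _t}{d t} = \mathrm {div}(\nabla \theta _t) = \Delta \theta _t \quad \text {and} \quad \frac{d}{d t}\mathrm {vol}_{L_t} = -d\theta _t\wedge (\nabla \theta _t \lrcorner \mathrm {vol}_{L_t}) = -|\nabla \theta _t|^2\mathrm {vol}_{L_t}, \end{aligned}$$

the second identity being the usual first variation of volume, \(\frac{d}{d t}\mathrm {vol}_{L_t} = -|\vec {H}|^2\mathrm {vol}_{L_t}\), in view of \(\vec {H} = J\nabla \theta _t\).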

(ii) Using Cartan’s formula again and denoting the Liouville form by \(\lambda _t\), we have

$$\begin{aligned} d\left( \frac{d\beta _t}{d t}\right)&= \mathcal {L}_{\vec {H}}\lambda _t = d(\vec {H}\lrcorner \lambda _t) + \vec {H}\lrcorner d\lambda _t \\&= d(\vec {H}\lrcorner \lambda _t) + J\nabla \theta _t \lrcorner 2\omega \\&= d(\vec {H}\lrcorner \lambda _t) - 2d\theta _t. \end{aligned}$$

Hence

$$\begin{aligned} d\left( \frac{d\beta _t}{d t} - \vec {H}\lrcorner \lambda _t + 2\theta _t\right) = 0. \end{aligned}$$

By possibly adding a time-dependent constant to \(\beta _t\) this implies

$$\begin{aligned} \frac{d\beta _t}{d t} = \vec {H}\lrcorner \lambda _t - 2\theta _t. \end{aligned}$$

Hence it only remains to show that \(\vec {H}\lrcorner \lambda _t = \Delta \beta _t\). We first show that \(\nabla \beta _t = (Jx)^T\). Indeed we have \(d\beta _t = \lambda _t\), thus for a tangent vector \(\tau \)

$$\begin{aligned} \langle \nabla \beta _t, \tau \rangle = d\beta _t(\tau ) = \lambda _t(\tau ) = \langle Jx,\tau \rangle = \langle (Jx)^T, \tau \rangle . \end{aligned}$$

With this in hand we now choose normal coordinates at a point x, and denote the coordinate tangent vectors by \(\{\partial _1,\dots ,\partial _n\}\). Then we calculate

$$\begin{aligned} \nabla _i\nabla _j\beta _t&= \langle \nabla _i(Jx)^T, \partial _j\rangle = \partial _i\langle Jx, \partial _j \rangle - \langle (Jx)^T, D_{\partial _i}\partial _j\rangle \\&= \langle J\partial _i, \partial _j\rangle + \langle Jx, D_{\partial _i}\partial _j\rangle - \langle (Jx)^T, D_{\partial _i}\partial _j \rangle \\&= \omega (\partial _i, \partial _j) + \langle (Jx)^\perp , D_{\partial _i}\partial _j \rangle \\&= \langle Jx, h_{ij} \rangle , \end{aligned}$$

where \(h_{ij}\) is the second fundamental form, and where we have used that \(\omega (\partial _i, \partial _j) = 0\) since \(L_t\) is Lagrangian. Taking the trace of each side we have

$$\begin{aligned} \Delta \beta _t = \langle Jx, \vec {H} \rangle = \vec {H}\lrcorner \lambda _t. \end{aligned}$$

(iii) We may assume without loss of generality that \(x_0 = 0\) and \(t_0 = 0\), and we will suppress the subscripts of \(\rho \). We first calculate

$$\begin{aligned} \frac{\partial \rho }{\partial t} = \left( -\frac{n}{2t} - \frac{|x|^2}{4t^2}\right) \rho \quad \frac{\partial \rho }{\partial x^i} = \frac{x^i}{2t}\rho \quad \frac{\partial ^2\rho }{\partial x^i \partial x^j} = \left( \frac{\delta _{ij}}{2t} + \frac{x^ix^j}{4t^2}\right) \rho . \end{aligned}$$

Then we have

$$\begin{aligned} \frac{\partial \rho }{\partial t} + \mathrm {div}(D\rho )&= \left( -\frac{n}{2t} - \frac{|x|^2}{4t^2}\right) \rho + \sum _{i,j = 1}^{n+k} p^{ij}\frac{\partial ^2\rho }{\partial x^i\partial x^j} \\&= \left( -\frac{n}{2t} - \frac{|x|^2}{4t^2}\right) \rho + \sum _{i,j=1}^{n+k} p^{ij}\left( \frac{\delta _{ij}}{2t} + \frac{x^ix^j}{4t^2}\right) \rho \\&= \left( -\frac{n}{2t} - \frac{|x|^2}{4t^2}\right) \rho + \frac{n}{2t}\rho + \frac{|x^T|^2}{4t^2}\rho = -\frac{|x^\perp |^2}{4t^2}\rho , \end{aligned}$$

where \(p^{ij}\) denotes the matrix of the projection onto \(T_xM\). We therefore calculate

$$\begin{aligned} \left( \frac{d}{dt} + \Delta \right) \rho&= \frac{\partial \rho }{\partial t} + \langle D\rho , \vec {H} \rangle + \mathrm {div}(\nabla \rho )\\&= \frac{\partial \rho }{\partial t} + \mathrm {div}(D\rho ) + 2\langle D\rho , \vec {H}\rangle \\&= \frac{\partial \rho }{\partial t} + \mathrm {div}(D\rho ) - \left| \vec {H} - \frac{x^\perp }{2t}\right| ^2 \rho + \frac{|x^\perp |^2}{4t^2}\rho + |\vec {H}|^2\rho \\&= -\left| \vec {H} - \frac{x^\perp }{2t}\right| ^2\rho + |\vec {H}|^2\rho , \end{aligned}$$

which establishes the claim. \(\square \)

Remark

From the above evolution equations we see that both the zero-Maslov condition, and local exactness are preserved by the flow. Indeed that the zero-Maslov condition is preserved follows easily, and for local exactness we observe

$$\begin{aligned} \frac{d\lambda _t}{d t} = \mathcal {L}_{\vec {H}}\lambda _t&= d(\vec {H}\lrcorner \lambda _t) + \vec {H}\lrcorner d\lambda _t \\&= d(\vec {H}\lrcorner \lambda _t) + J\nabla \theta _t\lrcorner 2\omega \\&= d(\vec {H}\lrcorner \lambda _t) - 2d\theta _t. \end{aligned}$$

So by the fundamental theorem of calculus we have

$$\begin{aligned} \lambda _t = \lambda _0 + \int _0^t \frac{d\lambda _s}{ds}ds, \end{aligned}$$

where the right hand side is exact if \(\lambda _0\) is.

Let \(\phi \) be a cut-off function supported on \(B_3\) with \(0\le \phi \le 1\), \(\phi \equiv 1\) on \(B_2\) and the estimates \(|D\phi | \le 2\) and \(|D^2\phi | \le C\). We then have the following lemma.

Lemma 3.2

Suppose that \((L_t)\) are exact in \(B_3\) and define \(\alpha _t := \beta _t + 2t\theta _t\). Then

$$\begin{aligned} \frac{d}{dt}\int _{L_t} \phi \alpha _t^2\rho d\mu \le -\int _{L_t}\phi |2t\vec {H} - x^\perp |^2\rho d\mu + C\int _{L_t\cap (B_3{\setminus } B_2)} \alpha _t^2\rho d\mu . \end{aligned}$$

where \(C = C(\phi )\).

Remark

Note that it follows from Lemma 3.1 that

$$\begin{aligned} \frac{d}{dt}\alpha _t = \Delta \beta _t - 2\theta _t + 2\theta _t + 2t\Delta \theta _t = \Delta \alpha _t. \end{aligned}$$

This is the motivation for why we might expect \(\alpha _t\) to satisfy some sort of monotonicity formula in the first place.

Proof

We calculate

$$\begin{aligned} \left( \frac{d}{dt} - \Delta \right) \phi = \frac{\partial \phi }{\partial t} - \mathrm {div}D\phi = -\Delta _{\mathbb {R}^{2n}}\phi + \mathrm {tr}_{(TL)^\perp } D^2 \phi \le C\chi _{B_3{\setminus } B_2}, \end{aligned}$$

where \(\chi _{B_3{\setminus } B_2}\) denotes the indicator function on \(B_3 {\setminus } B_2\). Then

$$\begin{aligned} \left( \frac{d}{dt} - \Delta \right) (\phi \alpha _t^2)&= \phi \left( \frac{d}{dt} - \Delta \right) \alpha _t^2 + \alpha _t^2\left( \frac{d}{dt} - \Delta \right) \phi - 2\langle \nabla \phi , \nabla \alpha _t^2\rangle \\&\le 2\phi \alpha _t\left( \frac{d}{dt} \!-\! \Delta \right) \alpha _t \!-\! 2\phi |\nabla \alpha _t|^2 \!+\! C\alpha _t^2\chi _{B_3{\setminus } B_2} \!-\! 4\alpha _t \langle \nabla \phi , \nabla \alpha _t\rangle . \end{aligned}$$

Using Young’s inequality we estimate the last term on the set \(\{\phi > 0\}\) as follows

$$\begin{aligned} -4\alpha _t\langle \nabla \phi , \nabla \alpha _t\rangle \le 4|D\phi ||\alpha _t||\nabla \alpha _t| \le \phi |\nabla \alpha _t|^2 + \frac{4|D\phi |^2}{\phi }\alpha _t^2 \le \phi |\nabla \alpha _t|^2 + C\alpha _t^2\chi _{B_3{\setminus } B_2}, \end{aligned}$$

where we used that

$$\begin{aligned} \frac{|D\phi |^2}{\phi } \le 2\max |D^2\phi | \le C. \end{aligned}$$

This is true of any non-negative, compactly supported smooth (or even \(C^2\)) function.
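A quick way to see this, under the assumptions \(\phi \ge 0\) and \(|D^2\phi | \le M\): Taylor's theorem gives, for every h,

$$\begin{aligned} 0 \le \phi (x+h) \le \phi (x) + \langle D\phi (x), h\rangle + \frac{M}{2}|h|^2, \end{aligned}$$

and choosing \(h = -D\phi (x)/M\) yields \(|D\phi (x)|^2 \le 2M\phi (x)\).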

$$\begin{aligned} \left( \frac{d}{dt} - \Delta \right) \phi \alpha _t^2 \le -\phi |\nabla \alpha _t|^2 + C\alpha _t^2\chi _{B_3{\setminus } B_2}. \end{aligned}$$

We now just differentiate under the integral and use Green’s identity to get

$$\begin{aligned} \frac{d}{dt}\int _{L_t} \phi \alpha _t^2\rho d\mu&= \int _{L_t} \rho \frac{d(\phi \alpha _t^2)}{dt} + \phi \alpha _t^2\frac{d\rho }{d t} - |\vec {H}|^2\phi \alpha _t^2\rho d\mu \\&= \int _{L_t} \phi \alpha _t^2\Delta \rho - \rho \Delta (\phi \alpha _t^2)d\mu + \int _{L_t} \rho \frac{d(\phi \alpha _t^2)}{dt} + \phi \alpha _t^2\frac{d\rho }{d t}\\&\quad -|\vec {H}|^2\phi \alpha _t^2\rho d\mu \\&= \int _{L_t} \rho \left( \frac{d}{dt} - \Delta \right) (\phi \alpha _t^2) + \left( \left( \frac{d}{dt} + \Delta \right) \rho - |\vec {H}|^2\rho \right) \phi \alpha _t^2 d\mu \\&\le - \int _{L_t} \phi \rho |\nabla \alpha _t|^2d\mu + C\int _{L_t\cap (B_3{\setminus } B_2)} \alpha _t^2\rho d\mu . \end{aligned}$$

So we are left with precisely the desired inequality since \(\nabla \alpha _t = \nabla \beta _t + 2t\nabla \theta _t = Jx^\perp - 2tJ\vec {H}\). \(\square \)
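As a consistency check, on the exact, zero-Maslov self-expanding solution \(L_t = \sqrt{2t}\,\Sigma \) one has \(2t\vec {H} = x^\perp \), and hence

$$\begin{aligned} \nabla \alpha _t = Jx^\perp - 2tJ\vec {H} = 0, \end{aligned}$$

so \(\alpha _t = \beta _t + 2t\theta _t\) is spatially constant and the first term on the right hand side of Lemma 3.2 vanishes, as one would expect for a flow which is exactly a self-expander.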

4 Stability of self-expanders

In this section we prove a dynamic stability result for Lagrangian self-expanders. More specifically, we show that if a Lagrangian submanifold is asymptotic to some pair of planes and is almost a self-expander in a weak sense, then the submanifold is actually close in a stronger topology to some self-expander. Let \(P_1\), \(P_2\subset \mathbb {C}^n\) be Lagrangian planes intersecting transversally such that neither \(P_1 + P_2\) nor \(P_1 - P_2\) is area-minimising. We write \(P := P_1 + P_2\). We will need the following uniqueness result, proved by Lotay and Neves [10] in dimension 2 and by Imagi et al. [6] in dimensions 3 and higher.

Theorem 4.1

There exists a unique smooth, zero-Maslov class Lagrangian self-expander asymptotic to P.

Theorem 4.2

Fix R, r, \(\tau > 0\), \(\alpha \), \(\varepsilon _0 < 1\), and \(C, M < \infty \). Let \(\Sigma \) be the unique smooth zero-Maslov Lagrangian self-expander asymptotic to P. Then for all \(\varepsilon > 0\) there exist \(\tilde{R} \ge R\) and \(\eta \), \(\nu > 0\), each depending on \(\varepsilon _0\), \(\varepsilon \), r, R, \(\tau \), \(\alpha \), C, M and P, such that if L is a smooth Lagrangian submanifold which is zero-Maslov in \(B_{\tilde{R}}\) and

  1. (i)

    \(|A|\le M\) on \(L\cap B_{\tilde{R}}\),

  2. (ii)

    \(\int _L \rho _{(x,0)}(y,-r^2)d\mathcal {H}^n \le 1 + \varepsilon _0\) for all x and \(0 < r \le \tau \),

  3. (iii)

    \(\int _{L\cap B_{\tilde{R}}} |\vec {H} - x^\perp |^2d\mathcal {H}^n \le \eta \),

  4. (iv)

    The connected components of \(L\cap A(r,\tilde{R})\) (where \(A(r, \tilde{R}) := \overline{B_{\tilde{R}}}{\setminus } B_r\)) are in one to one correspondence with the connected components of \(P \cap A(r, \tilde{R})\) and

    $$\begin{aligned} \mathrm {dist}(x, P) \le \nu + C\mathrm {exp}\left( \frac{-|x|^2}{C}\right) , \end{aligned}$$

    for all \(x \in L\cap A(r, \tilde{R})\);

then L is \(\varepsilon \)-close to \(\Sigma \) in \(C^{1,\alpha }(\overline{B}_{\tilde{R}})\).

Proof

Seeking a contradiction, suppose that the result were not true. Then there would exist sequences \(\nu _i \searrow 0\), \(\eta _i \searrow 0\), \(R_i \rightarrow \infty \) and \(L_i\) such that each \(L_i\) is a smooth Lagrangian submanifold of \(\mathbb {C}^n\) that is zero-Maslov in \(B_{R_i}\), satisfying

  1. (1)

    \(|A^{L_i}| \le M\) on \(L_i\cap B_{R_i}\),

  2. (2)

    \(\int _{L_i} \rho _{(x,0)}(y,-r^2)d\mathcal {H}^n \le 1 + \varepsilon _0\) for all x and \(0 < r \le \tau \),

  3. (3)

    \(\int _{L_i\cap B_{R_i}} |\vec {H} - x^\perp |^2 d\mathcal {H}^n \le \eta _i\)

  4. (4)

    The connected components of \(L_i\cap A(r,R_i)\) are in one to one correspondence with the connected components of \(P\cap A(r,R_i)\) and

    $$\begin{aligned} \mathrm {dist}(x,P) \le \nu _i + C\exp \left( \frac{-|x|^2}{C}\right) \end{aligned}$$

    for all \(x \in L_i\cap A(r,R_i)\),

  5. (5)

    \(L_i\) is not \(\varepsilon \)-close to \(\Sigma \) in \(C^{1,\alpha }(B_{R_i})\).

By virtue of (1), (4), and a suitable interpolation inequality, it follows that for some \(\rho > 0\), outside of \(B_\rho \), \(L_i\) and \(\Sigma \) are both \(\varepsilon /4\)-close to P in \(C^{1,\alpha }\). Hence, in order that (5) is satisfied, we conclude that for large i, \(L_i\) is not \(\varepsilon \)-close to \(\Sigma \) in \(C^{1,\alpha }(B_\rho )\).

On the other hand, by (1) and (2) we may extract a subsequence of \(L_i\) that converges in \(C^{1,\alpha }_\mathrm{loc}\) for all \(\alpha < 1\) to some limit \(L_\infty \), a \(C^{1,1}\) zero-Maslov Lagrangian submanifold. The estimate (2) passes to the limit and tells us that \(L_\infty \) has unit multiplicity everywhere, and bounded area ratios. Since \(L_\infty \) is \(C^{1,1}\) we can define mean curvature in a weak sense, and (3) implies

$$\begin{aligned} \int _{L_\infty } |\vec {H} - x^\perp |^2 d\mathcal {H}^n = 0. \end{aligned}$$

By standard Schauder theory for elliptic PDE, this immediately implies that \(L_\infty \) is in fact smooth and satisfies the expander equation in the classical sense. Consequently \(L_\infty \) is a smooth, zero-Maslov class Lagrangian self-expander, and (4) implies that \(L_\infty \) is asymptotic to P. Theorem 4.1 then implies that \(L_\infty = \Sigma \), which contradicts the fact, established above, that \(L_i\) is not \(\varepsilon \)-close to \(\Sigma \) in \(C^{1,\alpha }(B_\rho )\) for large i. \(\square \)

5 Main theorem

Suppose, as in the previous section, that \(P := P_1 + P_2\) is a pair of transversely intersecting Lagrangian planes such that neither \(P_1 + P_2\) nor \(P_1 - P_2\) is area-minimising, and that \(\Sigma \) is a zero-Maslov Lagrangian self-expander asymptotic to P. We assume the existence of a family \((L^s)_{0<s\le c}\) of compact Lagrangians, each exact and zero-Maslov in \(B_4\), satisfying the following properties. The existence of such a family will be established in Sect. 7.

  1. (H1)

    The area ratios are uniformly bounded, i.e. there exists a constant \(D_1\) such that

    $$\begin{aligned} \mathcal {H}^n(L^s\cap B_r(x)) \le D_1 r^n \quad \forall r > 0, \quad \forall s \in (0,c], \quad \forall x. \end{aligned}$$
  2. (H2)

    There is a constant \(D_2\) such that for every s and \(x \in L^s\cap B_4\)

    $$\begin{aligned} |\theta ^s(x)| + |\beta ^s(x)| \le D_2(|x|^2 + 1), \end{aligned}$$

    where \(\theta ^s\) and \(\beta ^s\) are, respectively, the Lagrangian angle of \(L^s\) and a primitive for the Liouville form on \(L^s\).

  3. (H3)

    For any \(\alpha \in (0, 1)\), the rescaled manifolds \(\tilde{L}^s := (2s)^{-1/2}L^s\) converge in \(C^{1,\alpha }_\mathrm{loc}\) to \(\Sigma \). Moreover the second fundamental form of \(\tilde{L}^s\) is bounded uniformly in s and without loss of generality we can assume that

    $$\begin{aligned} \lim _{s\rightarrow 0} (\tilde{\theta }^s + \tilde{\beta }^s) = 0 \end{aligned}$$

    locally on \(\tilde{L}^s\). (Note that \(\tilde{L}^s\) is exact in the ball \(B_{4(2s)^{-1/2}}\) so we can make sense of \(\tilde{\beta }^s\) in the limit.)

  4. (H4)

    The connected components of \(P \cap A(r_0\sqrt{s}, 4)\) are in one to one correspondence with the connected components of \(L^s\cap A(r_0\sqrt{s}, 4)\), and each component can be parametrised as a graph over the corresponding plane \(P_i\)

    $$\begin{aligned} L^s \cap A(r_0\sqrt{s},3) \subset \{x + u_s(x) | x \in P\cap A(r_0\sqrt{s}, 3)\} \subset L^s\cap A(r_0\sqrt{s}, 4), \end{aligned}$$

    where the function \(u_s:P\cap A(r_0\sqrt{s}, 3) \rightarrow P^\perp \) is normal to P and satisfies the estimate

    $$\begin{aligned} |u_s(x)| + |x|\left| \overline{\nabla }u_s(x)\right| + |x|^2|\overline{\nabla }^2 u_s(x)| \le D_3\left( |x|^2 + \sqrt{2s}e^{-b|x|^2/2s}\right) , \end{aligned}$$

    where \(\overline{\nabla }\) denotes the covariant derivative on P, and \(b > 0\).

We will denote by \((L^s_t)_{t\in [0,T_s)}\) a smooth solution of Lagrangian mean curvature flow with initial condition \(L^s\). For \(x_0 \in \mathbb {R}^{2n}\) and \(t > 0\) we define

$$\begin{aligned} \Phi (x_0,t)(x) := \rho _{(x_0,0)}(x, -t) = \frac{1}{(4\pi t)^{n/2}}\mathrm {exp}\left( -\frac{|x-x_0|^2}{4t}\right) \end{aligned}$$

We introduce a slightly modified notion of the Gaussian density ratios, which we will nevertheless continue to refer to as the Gaussian density ratios: the Gaussian density ratio of \(L^s_t\) at \(x_0\) and scale r is denoted \(\Theta _t^s(x_0,r)\) and defined as

$$\begin{aligned} \Theta _t^s(x_0,r) := \int _{L^s_t} \Phi (x_0, r^2)d\mathcal {H}^n = \int _{L^s_t} \frac{1}{(4\pi r^2)^{n/2}}e^{-|x - x_0|^2/4r^2}d\mathcal {H}^n(x), \end{aligned}$$
(5.1)

for every \(t < T_s\). The monotonicity formula of Huisken (Theorem 2.1) tells us that

$$\begin{aligned} \Theta _t^s(x_0,r) = \Theta ^s(x_0,t+r^2,r) \le \Theta ^s(x_0,t+r^2,\rho ) = \int _{L^s_{t + r^2 - \rho ^2}} \Phi (x_0,\rho ^2)d\mathcal {H}^n, \end{aligned}$$

for all \(\rho \ge r\). In particular choosing \(\rho ^2 = t+r^2\) we have

$$\begin{aligned} \Theta ^s_t(x_0,r) \le \int _{L^s}\Phi (x_0,t+r^2)d\mathcal {H}^n. \end{aligned}$$

We also define

$$\begin{aligned} \tilde{L}^s_t = \frac{L^s_t}{\sqrt{2(s+t)}}. \end{aligned}$$

We will denote by \(\tilde{\Theta }^s_t(x_0,r)\) the Gaussian density ratios of \((\tilde{L}^s_t)\), that is

$$\begin{aligned} \tilde{\Theta }^s_t(x_0,r) := \int _{\tilde{L}^s_t}\Phi (x_0,r^2)d\mathcal {H}^n. \end{aligned}$$

One of the primary reasons for modifying the Gaussian density ratios is that our new ratios behave well under the above rescaling. Indeed we can calculate

$$\begin{aligned} \Theta ^s_t(x_0,r) = \tilde{\Theta }^s_t\left( \frac{x_0}{\sqrt{2(s+t)}}, \frac{r}{\sqrt{2(s+t)}}\right) . \end{aligned}$$
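Explicitly, this follows from the substitution \(x = \sqrt{2(s+t)}\,y\) in (5.1): writing \(\tilde{x}_0 := x_0/\sqrt{2(s+t)}\) and \(\tilde{r} := r/\sqrt{2(s+t)}\), we have \(d\mathcal {H}^n(x) = (2(s+t))^{n/2}d\mathcal {H}^n(y)\) and

$$\begin{aligned} \Theta ^s_t(x_0,r) = \int _{L^s_t} \frac{e^{-|x-x_0|^2/4r^2}}{(4\pi r^2)^{n/2}}d\mathcal {H}^n(x) = \int _{\tilde{L}^s_t} \frac{e^{-|y-\tilde{x}_0|^2/4\tilde{r}^2}}{(4\pi \tilde{r}^2)^{n/2}}d\mathcal {H}^n(y) = \tilde{\Theta }^s_t(\tilde{x}_0, \tilde{r}). \end{aligned}$$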

The primary goal of this section is now to prove the following result.

Theorem 5.1

Let \(\varepsilon _0 > 0\). There are \(s_0\), \(\delta _0\) and \(\tau \) depending on \(D_1, D_2, D_3\), \(\Sigma \) and \(r_0\) such that if

$$\begin{aligned} t \le \delta _0, \, r^2 \le \tau t \,\, \text {and} \,\, s\le s_0, \end{aligned}$$

then

$$\begin{aligned} \Theta _t^s(x_0,r) \le 1 + \varepsilon _0 \end{aligned}$$

for every \(x_0 \in B_1\).

We start by proving that estimates like the one in the above theorem hold either for a short time or far from the origin.

Lemma 5.2

(Far from the origin estimate) Let \(\varepsilon _0 > 0\). There are \(\delta _1 > 0\), \(K_0 < \infty \) such that if \(r^2 \le t \le \delta _1\) and \(s > 0\), then

$$\begin{aligned} \Theta ^s_t(x_0, r) \le 1 + \varepsilon _0, \end{aligned}$$

for all \(x_0 \in A(K_0\sqrt{2t}, 1)\).

Proof

We first claim that there is a \(K_0 < \infty \) such that if \(y_0 \in \mathbb {R}^{2n}\) has \(|y_0| \ge K_0\), then for any \(\lambda > 0\) and s we have

$$\begin{aligned} \int _{\lambda (L^s\cap B_3(0))}\Phi (y_0,1)d\mathcal {H}^n \le 1 + \varepsilon _0/2. \end{aligned}$$

Indeed if this were not the case, then there would exist sequences \(y_i\), \(\lambda _i\) and \(s_i\) with \(|y_i| \rightarrow \infty \) such that

$$\begin{aligned} \int _{\lambda _i(L^{s_i}\cap B_3(0))} \Phi (y_i, 1) d\mathcal {H}^n \ge 1 + \varepsilon _0/2. \end{aligned}$$
(5.2)

First we note that \(\lambda _i\) must be unbounded since, for some universal constant C we have

$$\begin{aligned} \int _{\lambda _i(L^{s_i}\cap B_3(0))} \Phi (y_i, 1) d\mathcal {H}^n&\le \int _{\lambda _i(L^{s_i}\cap B_3(0))}\frac{1}{(4\pi )^{n/2}}e^{-|y_i|^2/8}e^{|x|^2/4}d\mathcal {H}^n \\&\le e^{-|y_i|^2/8}\lambda _i^n\int _{L^{s_i}\cap B_3} \frac{1}{(4\pi )^{n/2}}e^{9\lambda _i^2/4}d\mathcal {H}^n \\&\le C\lambda _i^n e^{-|y_i|^2/8 + c\lambda _i^2}\underbrace{\mathcal {H}^n(L^{s_i}\cap B_3(0))}_{\le D_13^n}, \end{aligned}$$

so it is easily seen that if \(\lambda _i\) were bounded then (5.2) would fail for large i. Next from the estimate (H4) we have that

$$\begin{aligned} |\overline{\nabla }^2 u^j_s(x)| \le C\left( 1 + \frac{\sqrt{2s}}{|x|^2}e^{-b|x|^2/2s}\right) , \end{aligned}$$

for every \(x \in A(r_0\sqrt{2s}, 4)\) and hence

$$\begin{aligned} |A| \le C\left( 1 + \frac{1}{\sqrt{2s}}e^{-b|x|^2/2s}\right) \end{aligned}$$

on \(B_3\cap L^s\), since on \(B_{r_0\sqrt{2s}}\) we have \(|A| \le C(2s)^{-1/2}\) where C is a curvature bound for \(\Sigma \). We rescale and define

$$\begin{aligned} \hat{L_i} := \lambda _iL^{s_i} \quad \sigma _i := \lambda _i^2s_i, \end{aligned}$$

so that on \(\hat{L_i}\) we have the estimate

$$\begin{aligned} |A| \le \frac{C}{\lambda _i}\left( 1 + \frac{1}{\sqrt{s_i}}e^{-b|x|^2/2\lambda _i^2s_i}\right) = C\left( \lambda _i^{-1} + \sigma _i^{-1/2} e^{-b|x|^2/2\sigma _i}\right) . \end{aligned}$$

Consequently \(|A|\rightarrow 0\) uniformly on compact sets centred at \(y_i\) (note that \(\sigma ^{-1/2}e^{-b|x|^2/2\sigma } \le C(b)|x|^{-1}\) uniformly in \(\sigma > 0\)), so it follows that locally \(\hat{L_i} - y_i \) converges to a plane. Since the Gaussian density ratio of a plane at any scale is at most 1, this contradicts (5.2) for large i.

We next observe that (H1) ensures that we may choose \(\delta _1 > 0\) small enough such that for any \(x_0 \in B_1(0)\) and \(l \le 2\delta _1\) we have

$$\begin{aligned} \int _{L^s{\setminus } B_3} \Phi (x_0, l) d \mathcal {H}^n \le \varepsilon _0/2. \end{aligned}$$

By the monotonicity formula we have that for any \(r^2, t \le \delta _1\)

$$\begin{aligned} \Theta ^s_t(x_0, r)&\le \int _{L^s} \Phi (x_0, r^2 + t) d \mathcal {H}^n \\&= \int _{L^s{\setminus } B_3}\Phi (x_0, r^2 + t) d \mathcal {H}^n + \int _{L^s\cap B_3}\Phi (x_0, r^2 + t) d \mathcal {H}^n \\&\le \varepsilon _0/2 + \int _{(r^2+t)^{-1/2}(L^s\cap B_3)}\Phi \left( \frac{x_0}{\sqrt{r^2 + t}},1\right) d\mathcal {H}^n \\&\le 1 + \varepsilon _0 \end{aligned}$$

provided that \(|x_0| \ge K_0\sqrt{r^2 + t}\), so imposing the additional requirement that \(r^2 \le t\) this gives precisely the desired result. \(\square \)

Remark

We observe that increasing \(K_0\) only weakens the statement of the lemma, and so we may increase it freely if necessary without affecting the conclusions. This will be important in the next lemma, and also in the proof of the main theorem, where we will assume that \(K_0\) is at least 1.

Lemma 5.3

(Short-time estimate) Let \(\varepsilon _0 > 0\). There are \(s_1 > 0\) and \(q_1 \in (0,1)\) such that if \(s \le s_1\), \(r^2 \le q_1 s\) and \(t \le q_1s\) then

$$\begin{aligned} \Theta ^s_t(x,r) \le 1 + \varepsilon _0, \end{aligned}$$
(5.3)

for all \(x \in B_1\).

Proof

Fix \(\alpha \in (0,1)\) and let \(q_1 = q_1(\Sigma , \varepsilon _0, \alpha )\) be as in Lemma 8.2. We may assume without loss of generality that \(q_1 < 1\). By Lemma 5.2 we need only prove the estimate for \(x \in B_{K_0\sqrt{2t}}\). We seek to apply Lemma 8.2 with \(R = K_0\sqrt{q_1} + 1\), which we can assume is at least 2 by increasing \(K_0\), to the rescaled flow \(\hat{L}^s_t := (2s)^{-1/2}L^s_{2st}\). This is a mean curvature flow with initial condition \(\tilde{L}^s\). By (H3) we know that \(\tilde{L}^s \rightarrow \Sigma \) in \(C^{1,\alpha }_\mathrm{loc}\). In particular, letting \(\varepsilon = \varepsilon (\varepsilon _0, \Sigma , \alpha )\) be as in Lemma 8.2, if s is small enough we can ensure that \(\tilde{L}^s\) is \(\varepsilon \)-close to \(\Sigma \) in \(C^{1,\alpha }(B_R(0))\). The conclusion of Lemma 8.2 then says that for \(r^2, t \le q_1\) and \(x \in B_{K_0\sqrt{q_1}}\) we have

$$\begin{aligned} \hat{\Theta }^s_t(x,r) = \int _{\hat{L}^s_t} \Phi (x, r^2) d\mathcal {H}^n = \int _{L^s_{2st}} \Phi (\sqrt{2s}\,x, 2sr^2) d\mathcal {H}^n \le 1 + \varepsilon _0, \end{aligned}$$

or in other words

$$\begin{aligned} \Theta ^s_t(x,r) \le 1 + \varepsilon _0, \end{aligned}$$

for all \(r^2, t \le q_1s\) and \(x\in B_{K_0\sqrt{2sq_1}}\). However since \(t\le q_1s\) this holds for all \(x \in B_{K_0\sqrt{2t}}\). \(\square \)

The next lemma shows that in an annular region, and for short times, we retain control, uniform in s, on both the distance to P and the Gaussian density ratios.

Lemma 5.4

(Proximity to \(P = P_1+P_2\)) There are constants \(C_1\) and \(r_1\) such that for any \(\nu > 0\) there are \(s_2, \delta _2 > 0\) such that the following holds. If \(s \le s_2\) and \(t \le \delta _2\) then we have the estimate

$$\begin{aligned} \mathrm {dist}(y_0, P) \le \nu + C_1e^{-|y_0|^2/C_1} \quad \forall y_0 \in \tilde{L}^s_t \cap A(r_1, (s+t)^{-1/8}), \end{aligned}$$

and if in addition \(r \le 2\), then

$$\begin{aligned} \tilde{\Theta }^s_t(y_0, r) \le 1 + \frac{\varepsilon _0}{2} + \nu \quad \forall y_0 \in A(r_1, (s+t)^{-1/8}). \end{aligned}$$

Remark

Note in particular that \(r_1\) does not depend on \(\nu \), which will be important later.

Proof

We consider \(t\le \delta _2\) and \(s\le s_2\) (both \(\delta _2\) and \(s_2\) to be chosen) and define

$$\begin{aligned} l := \frac{t}{2(s+t)} \quad \Sigma ^{(s,t)} := \frac{L^s}{\sqrt{2(s+t)}}. \end{aligned}$$

Clearly \(l \le 1/2\) and also from (H4) we have that if \(s_2\), \(\delta _2\) are chosen small enough, then \(\Sigma ^{(s,t)}\cap A(r_0, 3(s+t)^{-1/8})\) is graphical over \(P \cap A(r_0, 3(s+t)^{-1/8})\). Moreover if \(v_{(s,t)}\) is the function arising from this graphical decomposition then we have by scaling the estimate of (H4) that

$$\begin{aligned}&|v_{(s,t)}(x)| + |x||\overline{\nabla }v_{(s,t)}(x)| + |x|^2|\overline{\nabla }^2v_{(s,t)}(x)|\\&\quad \le D_3\left( \sqrt{2(s+t)}|x|^2 + \left( \frac{\sqrt{2s}}{\sqrt{2(s+t)}}\right) e^{-2b(s+t)|x|^2/2s}\right) \\&\quad \le D_3\left( \sqrt{2(s+t)}|x|^2 + e^{-b|x|^2}\right) . \end{aligned}$$
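The first inequality here is just (H4) applied at the point \(y = \sqrt{2(s+t)}\,x\): writing \(v_{(s,t)}(x) = (2(s+t))^{-1/2}u_s(\sqrt{2(s+t)}\,x)\), the zeroth order term for instance satisfies

$$\begin{aligned} |v_{(s,t)}(x)| = \frac{|u_s(y)|}{\sqrt{2(s+t)}} \le \frac{D_3}{\sqrt{2(s+t)}}\left( |y|^2 + \sqrt{2s}e^{-b|y|^2/2s}\right) = D_3\left( \sqrt{2(s+t)}|x|^2 + \frac{\sqrt{2s}}{\sqrt{2(s+t)}}e^{-2b(s+t)|x|^2/2s}\right) , \end{aligned}$$

and the first and second order terms scale in exactly the same way.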

Let \(c > 0\) be a constant that will be chosen later. If \(s_2(D_3, r_0, c)\) and \(\delta _2(D_3, r_0, c) > 0\) are small enough and \(r_1(P,c) \ge \max \{r_0, 1\}\) is chosen to be large enough then we can ensure that

$$\begin{aligned} |v_{(s,t)}(x)| + |x||\overline{\nabla }v_{(s,t)}(x)| \le D_3\left( \sqrt{2(s+t)}|x|^2 + e^{-b|x|^2}\right) \le c/2 \end{aligned}$$
(5.4)

on \(A(r_1, 3(s+t)^{-1/8})\). Indeed \(x \in A(r_1, 3(s+t)^{-1/8})\) implies that \(|x|^2 \le 9(s+t)^{-1/4}\), and so \(\sqrt{2(s+t)}|x|^2 \le 9\sqrt{2}(s_2 + \delta _2)^{1/4}\) can be bounded in terms of \(s_2\) and \(\delta _2\). From now on we fix some \(y_0 \in \tilde{L}^s_t\cap A\left( 3r_1 + 1, (s+t)^{-1/8}\right) \). Since \(y_0\sqrt{2(s+t)}\) is a regular point of \((L^s_t)\), the monotonicity formula implies

$$\begin{aligned} 1 \le \Theta ^s_0(y_0\sqrt{2(s+t)},\sqrt{t}) = \int _{\Sigma ^{(s,t)}}\Phi (y_0, l)d\mathcal {H}^n =: I + J + K, \end{aligned}$$

where

$$\begin{aligned} I&:= \int _{\Sigma ^{(s,t)}{\setminus } B_{3(s+t)^{-1/8}}} \Phi (y_0, l) d\mathcal {H}^n, \\ J&:= \int _{\Sigma ^{(s,t)}\cap B_{r_1}} \Phi (y_0, l)d\mathcal {H}^n, \\ K&:= \int _{\Sigma ^{(s,t)}\cap A(r_1, 3(s+t)^{-1/8})} \Phi (y_0, l)d\mathcal {H}^n. \end{aligned}$$

We first estimate I. If \(|x| \ge 3(s+t)^{-1/8} \ge 3|y_0|\) then

$$\begin{aligned} |x - y_0|^2 \ge |x|^2 - 2|x||y_0| + |y_0|^2 \ge |x|^2 - \frac{2|x|^2}{3} + |y_0|^2 = \frac{|x|^2}{3} + |y_0|^2, \end{aligned}$$

so

$$\begin{aligned}&\Phi (y_0, l) = \frac{1}{(4\pi l)^{n/2}}e^{-|x-y_0|^2/4l} \le \frac{1}{(4\pi l)^{n/2}}e^{-|y_0|^2/4l}e^{-|x|^2/12l}\\&\quad =3^{n/2}e^{-|y_0|^2/4l}\Phi (0,3l). \end{aligned}$$

Therefore by choosing \(C_1 = C_1(D_1, n)\) we can estimate

$$\begin{aligned} I = \int _{\Sigma ^{(s,t)}{\setminus } B_{3(s+t)^{-1/8}}} \Phi (y_0, l)d\mathcal {H}^n&\le 3^{n/2} e^{-|y_0|^2/4l}\int _{\Sigma ^{(s,t)}{\setminus } B_{3(s+t)^{-1/8}}} \Phi (0, 3l)d\mathcal {H}^n \\&\le 3^{n/2}e^{-|y_0|^2/4l}\int _{(3l)^{-1/2}\Sigma ^{(s,t)}} \Phi (0,1) d\mathcal {H}^n \\&\le C_1 e^{-|y_0|^2/C_1}, \end{aligned}$$

since l is bounded independent of s and t, and the estimate (H1) is scale invariant, so in particular is satisfied by \((3l)^{-1/2}\Sigma ^{(s,t)}\).

Next we estimate J. Similarly as before we find that for \(|x| \le r_1 \le |y_0|/3\) we have

$$\begin{aligned} |x-y_0|^2 \ge |x|^2 + \frac{|y_0|^2}{3}. \end{aligned}$$

Thus

$$\begin{aligned} \Phi (y_0, l) \le e^{-|y_0|^2/12l}\Phi (0, l) \quad \text {on } B_{r_1}, \end{aligned}$$

hence by possibly increasing \(C_1\) if necessary we have

$$\begin{aligned} J = \int _{\Sigma ^{(s,t)}\cap B_{r_1}} \Phi (y_0, l) d\mathcal {H}^n \le e^{-|y_0|^2/12l}\int _{\Sigma ^{(s,t)}\cap B_{r_1}}\Phi (0,l)d\mathcal {H}^n \le C_1e^{-|y_0|^2/C_1}. \end{aligned}$$

Finally we deal with K. We denote by \(a_i\) the orthogonal projection of \(y_0\) onto \(P_i\) and by \(b_i\) the orthogonal projection of \(y_0\) onto \(P_i^\perp \). We suppose without loss of generality that

$$\begin{aligned} \mathrm {dist}(y_0, P) = |b_1|. \end{aligned}$$

We will also denote by \(\Sigma ^{(s,t)}_i\) the component of \(\Sigma ^{(s,t)}\cap A(r_1, 3(s+t)^{-1/8})\) that is graphical over \(\Pi _i := P_i\cap A(r_1, 3(s+t)^{-1/8})\), and by \(v^i_{(s,t)}\) the corresponding graph function. Since \(P_1\cap P_2 = \{0\}\), there is some \(c = c(P) > 0\) such that \(|b_2| \ge c|y_0|\); notice that since \(|b_2| \le |y_0|\) we must have \(c \le 1\). Suppose that \(x \in \Sigma ^{(s,t)}_2\), and denote by \(x'\) its orthogonal projection onto \(P_2\). Then we have

$$\begin{aligned} |y_0 - x|^2 = |a_2 + b_2 - x' - v^2_{(s,t)}(x')|^2 = |a_2 - x'|^2 + |b_2 - v^2_{(s,t)}(x')|^2. \end{aligned}$$

Moreover by (5.4), if \(r_1\) is chosen large enough (and in particular larger than 1),

$$\begin{aligned} |v^2_{(s,t)}(x')| \le \frac{c}{2} \le \frac{c|y_0|}{2}, \end{aligned}$$

so

$$\begin{aligned} |b_2 - v^2_{(s,t)}(x')| \ge |b_2| - |v^2_{(s,t)}(x')| \ge \frac{c|y_0|}{2}. \end{aligned}$$

Consequently, denoting by \(g_{ij} := \delta _{ij} + D_iv^2_{(s,t)}\cdot D_jv^2_{(s,t)}\) the induced metric on the graph, we can estimate

$$\begin{aligned} \int _{\Sigma ^{(s,t)}_2}\Phi (y_0,l) d\mathcal {H}^n&= \int _{\Pi _2} \frac{1}{(4\pi l)^{n/2}} \exp \left( \frac{-|a_2 - x'|^2-|b_2 - v_{(s,t)}^2(x')|^2}{4l}\right) \\&\quad \sqrt{\mathrm {det}(g_{ij})}d x' \\&\le C e^{-c^2|y_0|^2/16l}\int _{P_2}\frac{1}{(4\pi l)^{n/2}}e^{-|a_2 - x'|^2/4l} dx' \\&\le C_1e^{-|y_0|^2/C_1}, \end{aligned}$$

where we used (5.4) to estimate the gradient terms arising in the surface measure. Combining this with the estimates for I and J we have that

$$\begin{aligned} 1 \le \int _{\Sigma ^{(s,t)}} \Phi (y_0, l) d\mathcal {H}^n \le \int _{\Sigma ^{(s,t)}_1} \Phi (y_0, l) d\mathcal {H}^n + C_1\exp \left( \frac{-|y_0|^2}{C_1}\right) . \end{aligned}$$
(5.5)

Increasing \(r_1\) for the last time if necessary, we can ensure that

$$\begin{aligned} C_1\exp \left( \frac{-|y_0|^2}{C_1}\right) \le \frac{1}{2}. \end{aligned}$$

Therefore we have that

$$\begin{aligned} \frac{1}{2} \le \int _{\Sigma ^{(s,t)}_1} \Phi (y_0,l)d\mathcal {H}^n \le C\sup _{\Pi _1} \exp \left( -\frac{|b_1 - v^1_{(s,t)}|^2}{4l}\right) . \end{aligned}$$

Therefore \(|b_1 - v^1_{(s,t)}|^2/4l\) is bounded on \(\Pi _1\) independently of l, s and t, and thus we can estimate

$$\begin{aligned} \frac{|b_1 - v^1_{(s,t)}|^2}{4l} \le C\left( 1 - e^{-|b_1 - v^1_{(s,t)}|^2/4l}\right) , \end{aligned}$$

on \(\Pi _1\) where C is independent of s and t. Moreover because the matrix \((D_iv^1_{(s,t)}\cdot D_jv^1_{(s,t)})\) has non-negative eigenvalues we have that \(\sqrt{\det (g_{ij})} \ge 1\), so we can estimate

$$\begin{aligned} \int _{\Pi _1}&\frac{|v^1_{(s,t)} - b_1|^2}{4l} \frac{\exp (-|x' - a_1|^2/4l)}{(4\pi l)^{n/2}}dx' \\&\le C\int _{\Pi _1} \left( 1 - \exp \left( -\frac{|v^1_{(s,t)} - b_1|^2}{4l}\right) \right) \frac{\exp (-|x' - a_1|^2/4l)}{(4\pi l)^{n/2}} \sqrt{\det (g_{ij})}dx'\\&= C\left( \int _{\Pi _1} \frac{\exp (-|x' - a_1|^2/4l)}{(4\pi l)^{n/2}}\sqrt{\det (g_{ij})}dx' - \int _{\Sigma ^{(s,t)}_1}\Phi (y_0,l)d\mathcal {H}^n\right) \\&\le C\left( \int _{\Pi _1} \frac{\exp (-|x' - a_1|^2/4l)}{(4\pi l)^{n/2}}\sqrt{\det (g_{ij})}dx' - 1\right) + C_1\exp (-|y_0|^2/C_1) \\&\le C\int _{\Pi _1} \frac{\exp (-|x' - a_1|^2/4l)}{(4\pi l)^{n/2}} \left( \sqrt{\det (g_{ij})} - 1 \right) dx' + C_1\exp (-|y_0|^2/C_1), \end{aligned}$$

where we used (5.5). Next, from the Taylor expansions for square root and determinant, we have \(\sqrt{1 + x} = 1 + x/2 + O(x^2)\) and \(\det (I + A) = 1 + \mathrm {tr}(A) + O(|A|^2)\), which implies

$$\begin{aligned} \sqrt{\det (g_{ij})} - 1&= \left( 1 + \sum _{i=1}^n|D_iv^1_{(s,t)}|^2 + O(|\overline{\nabla }v^1_{(s,t)}|^4)\right) ^{1/2} - 1 \\&\le \frac{n}{2}|\overline{\nabla }v^1_{(s,t)}|^2 + O(|\overline{\nabla }v^1_{(s,t)}|^4) \\&\le C|\overline{\nabla }v^1_{(s,t)}|^2, \end{aligned}$$

where the last line follows from the fact that \(|\overline{\nabla }v^1_{(s,t)}|\) is bounded on \(A(r_1, 3(s+t)^{-1/8})\) by (5.4). Putting the above two estimates together we find

$$\begin{aligned}&\int _{\Pi _1} \frac{|v^1_{(s,t)} - b_1|^2}{4l}\frac{\exp (-|x' - a_1|^2/4l)}{(4\pi l)^{n/2}}dx' \\&\quad \le C\int _{\Pi _1} |\overline{\nabla }v^1_{(s,t)}|^2\frac{\exp (-|x' - a_1|^2/4l)}{(4\pi l)^{n/2}}dx' + C_1\exp (-|y_0|^2/C_1). \end{aligned}$$

Therefore since

$$\begin{aligned} |b_1|^2 \le \left( |b_1 - v^1_{(s,t)}| + |v^1_{(s,t)}|\right) ^2 \le 2\left( |b_1 - v^1_{(s,t)}|^2 + |v^1_{(s,t)}|^2\right) , \end{aligned}$$

we can estimate, integrating both sides against the weight \((4\pi l)^{-n/2}\exp (-|x' - a_1|^2/4l)\) over \(\Pi _1\) and using the previous estimate,

$$\begin{aligned} |b_1|^2 \le C_1\int _{\Pi _1} (|v^1_{(s,t)}|^2 + |\overline{\nabla }v^1_{(s,t)}|^2)\frac{\exp (-|x' - a_1|^2/4l)}{(4\pi l)^{n/2}}dx' + C_1\exp (-|y_0|^2/C_1). \end{aligned}$$
(5.6)

Note that here we used the fact that the integral of \((4\pi l)^{-n/2}\exp (-|x' - a_1|^2/4l)\) over \(\Pi _1\) can be bounded below by a constant, on account of the fact that l is bounded independently of s and t, and the outer radius in the definition of \(\Pi _1\) is bounded below by \(3(s_2 + \delta _2)^{-1/8}\), which, by choice of \(s_2\) and \(\delta _2\), we can assume to be greater than \(2r_1\) say. Since \(b_1\) is constant in \(x'\), we may divide by this integral and rearrange to obtain the above inequality. We now want to control the integral terms on the right hand side. First we observe that \(|a_1| \ge c|y_0|\) for some constant depending only on P. This follows from the fact that we assumed \(y_0\) was closer to \(P_1\) than \(P_2\), and hence lies in some fixed conical neighbourhood of \(P_1\). Moreover for any \(0 < l \le 1\) and any x, \(a_1 \in \mathbb {R}^{2n}\) we have

$$\begin{aligned} 2b|x+a_1|^2 + \frac{|x|^2}{4l}&= |x|^2\left( \frac{1}{4l} + 2b\right) + 2|a_1|^2b + 4bx\cdot a_1 \\&\ge |x|^2\left( \frac{1}{4l} + 2b\right) + 2|a_1|^2b - \frac{16bl + 1}{8l}|x|^2 - \frac{32b^2l}{16bl+1}|a_1|^2 \\&\ge \frac{|x|^2}{8l} + \frac{2b|a_1|^2}{16bl+1}. \end{aligned}$$

Furthermore for \(x \in \Pi _1\) we have \(|x| \ge 1\), so by (5.4)

$$\begin{aligned} |\overline{\nabla }v^1_{(s,t)}|^2 \le |x|^2|\overline{\nabla }v^1_{(s,t)}|^2 \le C\left( (s+t)|x|^2 + e^{-2b|x|^2}\right) . \end{aligned}$$

Hence for some \(C_1 = C_1(D_1, D_3, P)\) we have

$$\begin{aligned} \int _{\Pi _1} |\overline{\nabla }v^1_{(s,t)}|^2\frac{e^{-|x' - a_1|^2/4l}}{(4\pi l)^{n/2}}dx'&\le C_1\int _{\Pi _1}\left( (s+t)|x'|^2 + e^{-2b|x'|^2}\right) \frac{e^{-|x' - a_1|^2/4l}}{(4\pi l)^{n/2}}dx' \\&\le C_1(s+t) + C_1\int _{\mathbb {R}^n}e^{-b|x'|^2}\frac{e^{-|x' - a_1|^2/4l}}{(4\pi l)^{n/2}}dx' \\&\le C_1(s+t) + C_1\int _{\mathbb {R}^n}e^{-b|x'+a_1|^2} \frac{e^{-|x'|^2/4l}}{(4\pi l)^{n/2}}dx' \\&\le C_1(s+t) + C_1e^{-|a_1|^2/C_1}\int _{\mathbb {R}^n}\frac{e^{-|x'|^2/8l}}{(4\pi l)^{n/2}}dx' \\&\le C_1\left( (s+t) + e^{-|y_0|^2/C_1}\right) . \end{aligned}$$

In the first line we used the fact that integrating \(|x'|^2\) against \((4\pi l)^{-n/2}\exp (-|x'-a_1|^2/4l)\) over \(\mathbb {R}^n\) can be bounded in terms of the scale l, which we observed earlier is bounded by \(1/2\). Similarly, using (5.4) again, we can estimate

$$\begin{aligned} |v^1_{(s,t)}|^2 \le C_1\left( (t+s)|x|^4 + e^{-2b|x|^2}\right) . \end{aligned}$$

So an entirely analogous calculation establishes the estimate

$$\begin{aligned} \int _{\Pi _1} |v^1_{(s,t)}|^2\frac{e^{-|x' - a_1|^2/4l}}{(4\pi l)^{n/2}}dx' \le C_1\left( (s+t) + e^{-|y_0|^2/C_1}\right) . \end{aligned}$$

Therefore from (5.6) we have

$$\begin{aligned} |b_1|^2 \le C_1\left( (s+t) + e^{-|y_0|^2/C_1}\right) , \end{aligned}$$

so choosing \(s_2\) and \(\delta _2\) depending on \(D_1, D_2, P, r_0, \nu \) and b, we conclude that for all \(s \le s_2\) and \(t \le \delta _2\)

$$\begin{aligned} |b_1| = \mathrm {dist}(y_0, P) \le \nu + C_1e^{-|y_0|^2/C_1}. \end{aligned}$$

We next want to show that, by possibly increasing \(r_1\) and decreasing \(s_2\) and \(\delta _2\) if necessary, we also have the estimate

$$\begin{aligned} \tilde{\Theta }_t^s(y_0, r) \le 1 + \frac{\varepsilon _0}{2} + \nu \end{aligned}$$

for any \(r \le 2\). We have

$$\begin{aligned} \tilde{\Theta }^s_t(y_0, r)&= \int _{\tilde{L}^s_t} \frac{1}{(4\pi r^2)^{n/2}}\exp \left( \frac{-|x-y_0|^2}{4r^2}\right) d\mathcal {H}^n \\&= \int _{L^s_t} \frac{1}{(4\pi (2(s+t))r^2)^{n/2}}\exp \left( \frac{-|x - \sqrt{2(s+t)}y_0|^2}{4r^2(2(s+t))}\right) d\mathcal {H}^n \\&= \Theta ^s_{t}(\sqrt{2(s+t)}y_0, \sqrt{2(s+t)}r). \end{aligned}$$

Applying the monotonicity formula we have

$$\begin{aligned} \Theta ^s_{t}(\sqrt{2(s+t)}y_0, \sqrt{2(s+t)}r) \le \Theta ^s_0(\sqrt{2(s+t)}y_0, \sqrt{2(s+t)r^2 + t}), \end{aligned}$$

so we find, recalling that \(l = t/(2(s+t))\),

$$\begin{aligned} \tilde{\Theta }_t^s(y_0, r)&\le \int _{L^s} \frac{1}{(4\pi (2(s+t)r^2 + t))^{n/2}}\exp \left( \frac{-|x - \sqrt{2(s+t)}y_0|^2}{4(2(s+t)r^2 + t)}\right) d\mathcal {H}^n \\&= \int _{\Sigma ^{(s,t)}} \frac{1}{(4\pi (r^2 + l))^{n/2}}\exp \left( \frac{-|x - y_0|^2}{4(l + r^2)}\right) d \mathcal {H}^n \\&= \int _{\Sigma ^{(s,t)}} \Phi (y_0, l + r^2) d\mathcal {H}^n. \end{aligned}$$

Therefore by splitting up the integral as before and estimating exactly analogously we have

$$\begin{aligned} \tilde{\Theta }^s_{t}(y_0, r)&\le \int _{\Sigma ^{(s,t)}_1} \Phi (y_0, l + r^2)d\mathcal {H}^n + C_1 \exp \left( \frac{-|y_0|^2}{C_1}\right) \\&\le \int _{\Pi _1} \frac{\exp \left( \frac{-|x' - a_1|^2}{4(l + r^2)} \right) }{(4\pi (l+r^2))^{n/2}}\sqrt{\mathrm {det}(g_{ij})}dx' + C_1 \exp \left( \frac{-|y_0|^2}{C_1}\right) \\&\le 1 + C_1\int _{\Pi _1}|\overline{\nabla }v^1_{(s,t)}|^2 \frac{\exp \left( \frac{-|x' - a_1|^2}{4(l + r^2)} \right) }{(4\pi (l+r^2))^{n/2}}dx' +C_1 \exp \left( \frac{-|y_0|^2}{C_1}\right) \\&\le 1 + C_1(s+t) + C_1\int _{\mathbb {R}^n} e^{-2b|x'|^2} \frac{\exp \left( \frac{-|x' - a_1|^2}{4(l + r^2)} \right) }{(4\pi (l+r^2))^{n/2}} dx' + C_1 \exp \left( \frac{-|y_0|^2}{C_1}\right) \\&= 1 + C_1(s+t) + C_1\int _{\mathbb {R}^n} e^{-2b|x'+a_1|^2} \frac{\exp \left( \frac{-|x'|^2}{4(l + r^2)} \right) }{(4\pi (l+r^2))^{n/2}} dx'\\&\quad + C_1 \exp \left( \frac{-|y_0|^2}{C_1}\right) . \end{aligned}$$

We want to estimate the exponential terms and pull out an exponential factor in \(|a_1|\) so we estimate

$$\begin{aligned} 2b|x+a_1|^2 + \frac{|x|^2}{4(l+r^2)}&\ge |x|^2\frac{8b(l+r^2)+1}{4(l+r^2)} + 2b|a_1|^2 - \frac{16b(l+r^2) + 1}{8(l+r^2)}|x|^2 \\&\quad - \frac{32b^2(l+r^2)}{16b(l+r^2) + 1}|a_1|^2 \\&= \frac{|x|^2}{8(l+r^2)} + \frac{2b|a_1|^2}{16b(l+r^2) + 1} \\&\ge \frac{|x|^2}{8(l+r^2)} + \frac{|a_1|^2}{C_1} \end{aligned}$$

where we used the fact that l and r are both bounded independently of s and t. Therefore putting this together we have

$$\begin{aligned} \tilde{\Theta }_t^s(y_0,r)&\le 1 + C_1(s+t) + C_1e^{-|a_1|^2/C_1}\int _{\mathbb {R}^n}\frac{e^{-|x|^2/8(l+r^2)}}{(4\pi (l+r^2))^{n/2}}dx + C_1e^{-|y_0|^2/C_1} \\&\le 1 + C_1(s+t) + C_1e^{-|y_0|^2/C_1}. \end{aligned}$$

Evidently an appropriate choice of \(r_1\), \(s_2\) and \(\delta _2\) yields the required result. \(\square \)

The following two lemmas show that we have additional control in annular regions, specifically on the normal deviation, the curvature, the Lagrangian angle and the primitive for the Liouville form.

Lemma 5.5

Let \(F^s_t: L^s\rightarrow \mathbb {R}^{2n}\) be the normal deformation such that \(L^s_t = F^s_t(L^s)\). We also define \(\tilde{F}^s_t := (2(s+t))^{-1/2}F^s_t\) so that \(\tilde{L}^s_t = \tilde{F}^s_t(L^s)\). Then there exist \(r_2\), \(\delta _3\), \(s_3\) and \(K < \infty \) such that if \(t \le \delta _3\) and \(s \le s_3\) then

$$\begin{aligned} \left| \tilde{F}^s_0(x) - \tilde{F}^s_t(x)\right| \le K \quad \text {whenever} \quad \tilde{F}^s_0(x) \in A\left( r_2, (s+t)^{-1/8}/4\right) . \end{aligned}$$

Proof

By Lemma 5.4 (proximity to P) we may choose \(r_2 \ge 1\), \(\delta _3\) and \(s_3\) such that if \(t \le \delta _3\) and \(s \le s_3\) then

$$\begin{aligned} \Theta ^s_t(x,r) \le 1 + \varepsilon _0 \end{aligned}$$

for all \(r \le 2\sqrt{2(s+t)}\) and \(x \in A\left( r_2\sqrt{2(s+t)}, \sqrt{2}(s+t)^{3/8}\right) \). Hence by White’s regularity theorem (Theorem 2.2) we can find a C such that

$$\begin{aligned} \left| \frac{dF^s_t(p)}{dt}\right| = |\vec {H}| \le \frac{C}{\sqrt{t}}, \end{aligned}$$

whenever \(F^s_t(p) \in A\left( 2r_2\sqrt{2(s+t)}, \sqrt{2(s+t)}(s+t)^{-1/8}/2\right) \). Therefore, choosing a larger \(r_2\) and smaller \(s_3\), \(\delta _3\) if necessary we obtain, by the fundamental theorem of calculus,

$$\begin{aligned} \left| F^s_t(p) - F^s_0(p)\right| \le \int _0^t \frac{C}{\sqrt{u}}du = 2C\sqrt{t}, \end{aligned}$$

whenever

$$\begin{aligned} F^s_0(p)\in A\left( r_2(2(s+t))^{1/2}, (2(s+t))^{1/2}(s+t)^{-1/8}/4\right) , \end{aligned}$$

which establishes the result. \(\square \)

Lemma 5.6

There are \(\delta _4 > 0\), \(s_4 > 0\) and \(D_4 < \infty \) such that for \(0<s \le s_4\) and \(t<\delta _4\)

$$\begin{aligned} |A^s_t(x)| + |\theta ^s_t(x)| + |\beta ^s_t(x)| \le D_4 \quad \forall x \in L^s_t\cap A(1/3,3). \end{aligned}$$
(5.7)

Proof

The estimate is clearly true for \(t = 0\) by assumption (H2). Moreover, by (H4) we can assume that for s sufficiently small, each of the \(L^s\) is the graph of a function with small gradient in the region \(A(1/4, 4)\). Applying Lemma 8.1 we find that \(L^s_t\) remains graphical with small gradient in \(A(2/7, 7/2)\) for some short time, which implies that \(|\theta ^s_t| \le C\) for \(\delta _4\) chosen small enough.

That \(|A^s_t|\) is bounded follows from Lemma 8.1 and Corollary 8.4, since Lemma 8.1 implies small gradient for a short time, which allows us to apply Corollary 8.4 to get uniform curvature bounds for some short time in \(A(1/3, 3)\).

Since \(|\theta ^s_t|\) and \(|A^s_t|\) are both bounded, we have from the evolution equations of \(\beta ^s_t\) (see Lemma 3.1) that

$$\begin{aligned} \left| \frac{d\beta ^s_t}{dt}\right| \le \left| \langle Jx, \vec {H}\rangle \right| + 2|\theta ^s_t| \le C. \end{aligned}$$

Hence for some suitable short time, \(|\beta ^s_t|\) also remains bounded in A(1/3, 3). \(\square \)

The last of the technical lemmas in this section uses the monotonicity formula of Sect. 3 to show that after waiting for a short time dependent on s, we can find times at which the scaled flow \(\tilde{L}^s_t\) is close to a self-expander in an \(L^2\) sense. We later use this in the proof of the main theorem to get estimates on the density ratios via the stability result.

Lemma 5.7

Let \(a > 1\). Let \(q_1\) be as given by Lemma 5.3, and set \(q := q_1/a\). Then for all \(\eta > 0\) and \(R > 0\) there exist \(\delta _5 > 0\), \(s_5 > 0\) such that for all \(s \le s_5\) and \(qs \le T \le \delta _5\) we have

$$\begin{aligned} \frac{1}{(a-1)T}\int _T^{aT}\int _{\tilde{L}^s_t\cap B_{R}}|\vec {H} - x^\perp |^2d\mathcal {H}^ndt \le \eta . \end{aligned}$$

Proof

Fix \(R > 0\), \(\eta > 0\). Suppose \(s \le s_5\) and \(qs \le T \le \delta _5\), with \(\delta _5\) and \(s_5\) yet to be determined. Furthermore, we set \(T_0 := R^2(s + aT) + aT\). Throughout the proof, we denote by C a constant which depends on a, R and q, but not on T or s. We estimate

$$\begin{aligned}&\frac{1}{(a-1)T}\int _T^{aT}\int _{\tilde{L}^s_t\cap B_R}|\vec {H} - x^\perp |^2 d\mathcal {H}^ndt \nonumber \\&\quad = \frac{1}{(a-1)T}\int _T^{aT}(2(s+t))^{-n/2 - 1}\int _{L^s_t\cap B_{R\sqrt{2(s+t)}}}|2(s+t)\vec {H} - x^\perp |^2d\mathcal {H}^ndt. \end{aligned}$$
(5.8)

Now supposing that \(s_5\) and \(\delta _5\) are small enough we can ensure that \(R\sqrt{2(s+t)} \le 2\). Moreover on \(B_{R\sqrt{2(s+t)}}\) we have

$$\begin{aligned} (T_0 - t)^{n/2}\rho _{0,T_0}(x,t)&= \frac{1}{(4\pi )^{n/2}}\exp \left( -\frac{|x|^2}{4(T_0 - t)}\right) \\&\ge \frac{1}{(4\pi )^{n/2}}\exp \left( -\frac{R^22(s+t)}{4(T_0 - t)}\right) . \end{aligned}$$

Since \(T_0 - t = R^2(s + aT) + aT - t \ge R^2(s + aT) \ge R^2(s + t)\), it follows that

$$\begin{aligned} (T_0 - t)^{n/2}\rho _{0,T_0}(x,t) \ge \frac{1}{(4\pi )^{n/2}}\exp \left( -\frac{1}{2}\right) . \end{aligned}$$

Hence we can continue estimating (5.8) using the localized monotonicity formula of Lemma 3.2 (\(\phi \) denotes the cut-off function given in that lemma which is 1 on \(B_2\) and 0 outside of \(B_3\))

$$\begin{aligned} (5.8)&\le \frac{C}{T}\int _T^{aT}(s+t)^{-(n+2)/2}(T_0 - t)^{n/2}\int _{L^s_t} \phi |2(s+t)\vec {H} - x^\perp |^2\rho _{0,T_0}d\mathcal {H}^ndt \nonumber \\&\le \frac{C}{T}\int _T^{aT}(s+T)^{-(n+2)/2}(T_0 \!-\! T)^{n/2}\int _{L^s_t\cap A(2,3)}|\beta ^s_t \!+\! 2(s\!+\!t)\theta ^s_t|^2\rho _{0,T_0}d\mathcal {H}^ndt \nonumber \\&\quad +\frac{C}{T}(s+T)^{-(n+2)/2}(T_0-T)^{n/2}\int _{L^s_T}\phi |\beta ^s_T + 2(s+T)\theta ^s_T|^2\rho _{0,T_0}d\mathcal {H}^n. \end{aligned}$$
(5.9)

Now, using the localized monotonicity formula a second time, we have the estimate

$$\begin{aligned} \frac{d}{dt}\int _{L^s_t}\phi |\beta ^s_t + 2(s+t)\theta ^s_t|^2\rho _{0,T_0}d\mathcal {H}^n \le C\int _{L^s_t\cap A(2,3)}|\beta ^s_t + 2(s+t)\theta ^s_t|^2\rho _{0,T_0}d\mathcal {H}^n \end{aligned}$$

so

$$\begin{aligned} \int _{L^s_T}\phi |\beta ^s_T+2(s+T)\theta ^s_T|^2\rho _{0,T_0}d\mathcal {H}^n&\le \int _{L^s_0}\phi |\beta ^s_0+2s\theta ^s_0|^2\rho _{0,T_0}d\mathcal {H}^n \\&\quad + C\int _0^T\int _{L^s_t\cap A(2,3)}|\beta ^s_t \!+\! 2(s\!+\!t)\theta ^s_t|^2\rho _{0,T_0}d\mathcal {H}^ndt, \end{aligned}$$

hence

$$\begin{aligned} (5.9)&\le \frac{C}{T}(s+T)^{-(n+2)/2}(T_0 - T)^{n/2}\int _{L^s_0}\phi |2s\theta ^s_0 + \beta ^s_0|^2\rho _{0,T_0}d\mathcal {H}^n \\&\quad + \frac{C}{T}(s+T)^{-(n+2)/2}(T_0 \!-\! T)^{n/2}\int _0^{aT}\int _{L^s_t\cap A(2,3)}|2(s\!+\!t)\theta ^s_t \!+\! \beta ^s_t|^2\rho _{0,T_0}d\mathcal {H}^ndt. \end{aligned}$$

Now \(T_0 - T \le C(s+T)\), with C depending only on R and a, so estimating the terms in front of the integrals we have

$$\begin{aligned} (5.9)&\le \frac{C}{T(s+T)}\int _{L^s_0}\phi |2s\theta ^s_0 + \beta ^s_0|^2\rho _{0,T_0}d\mathcal {H}^n \\&\quad + \frac{C}{T(s+T)}\int _0^{aT}\int _{L^s_t\cap A(2,3)}|2(s+t)\theta ^s_t + \beta ^s_t|^2\rho _{0,T_0}d\mathcal {H}^ndt\\&=: A + B. \end{aligned}$$

We first estimate B. Notice that by Lemma 5.6 we have

$$\begin{aligned} |2(s+t)\theta ^s_t + \beta ^s_t|^2 \le \left( 2(s+t)|\theta ^s_t| + |\beta ^s_t|\right) ^2 \le C((s+t) + 1)^2. \end{aligned}$$

Hence, we can estimate

$$\begin{aligned} B&\le \frac{C((s+aT) + 1)^2}{T(s+T)}\int _0^{aT}\int _{L^s_t\cap A(2,3)}\rho _{0,T_0}d\mathcal {H}^ndt \nonumber \\&\le \frac{C((s+aT)+1)^2}{T(s+T)}\int _0^{aT}\int _{L^s_t\cap A(2,3)} |x|^4\rho _{0,T_0}d\mathcal {H}^ndt \nonumber \\&= \frac{C((s+aT)+1)^2}{T(s+T)}\int _0^{aT}(T_0 - t)^2\int _{(T_0 - t)^{-1/2}(L^s_t\cap A(2,3))} |x|^4\rho _{0,1}d\mathcal {H}^ndt \nonumber \\&\le \frac{C((s+aT)+1)^2}{T(s+T)} T_0^3\sup _{t\in [0,aT]}\int _{(T_0 - t)^{-1/2}(L^s_t\cap A(2,3))} |x|^4\exp \left( -\frac{|x|^2}{4}\right) d\mathcal {H}^n. \end{aligned}$$
(5.10)
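
For clarity, the change of variables in the third line above is \(x = (T_0 - t)^{1/2}y\), under which

$$\begin{aligned} |x|^4\rho _{0,T_0}(x,t)\,d\mathcal {H}^n(x) = (T_0 - t)^2|y|^4\frac{e^{-|y|^2/4}}{(4\pi )^{n/2}}\,d\mathcal {H}^n(y), \end{aligned}$$

and in the final line we used \(\int _0^{aT}(T_0 - t)^2dt \le T_0^2\,aT \le T_0^3\), since \(aT \le T_0\).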

We note that \(T_0 \le (R^2(1/q + a) + a)T= CT\), \(T_0 \le C(s+T)\) and \(T_0 \ge R^2(s + aT)\) so we can estimate

$$\begin{aligned} (5.10)&\le C(T_0 + 1)^2T_0 \sup _{t\in [0,aT]}\int _{(T_0 - t)^{-1/2}L^s_t\cap A(2,3)} |x|^4\exp \left( -\frac{|x|^2}{4}\right) d\mathcal {H}^n \\&\le C(T_0 + 1)^2T_0, \end{aligned}$$

where we can estimate the supremum by a uniform constant because the \(L^s_t\) all have area ratios bounded by a uniform constant. Moreover \(T_0 \le R^2\delta _5(1/q + a) + a\delta _5\), so that by possibly decreasing \(\delta _5\) we can ensure that \(B \le \eta /2\).

We next estimate A,

$$\begin{aligned} A \le \frac{C}{T(s+T)}\int _{L_0^s\cap B_3}|2s\theta ^s_0 + \beta ^s_0|^2\rho _{0,T_0}d\mathcal {H}^n. \end{aligned}$$
(5.11)

First recall that if \(\beta ^s\) is a primitive for the Liouville form on some \(L^s\), then \(\beta ^s_l := l^{-2}\beta ^s\), understood as the function \(x \mapsto l^{-2}\beta ^s(lx)\) on \(l^{-1}L^s\), is a primitive for the Liouville form on \(l^{-1}L^s\). From here on we suppress the subscript 0 of the \(\beta ^s\) and \(\theta ^s\), since we only ever integrate over the manifolds \(L^s_0\), and we instead use a subscript l to denote the rescaling factor of the \(\beta ^s\). We define

$$\begin{aligned} l := \sqrt{2(s+T)}, \qquad \sigma := \frac{s}{s+T}, \end{aligned}$$

then

$$\begin{aligned} (5.11)&= \frac{C(s+T)}{T}\int _{l^{-1}(L^s_0\cap B_3)} |\sigma \theta ^s + \beta ^s_l|^2\rho _{0,l^{-2}T_0}d\mathcal {H}^n \\&\le C \int _{l^{-1}(L^s_0\cap B_3)}|\sigma \theta ^s + \beta ^s_l|^2\rho _{0,l^{-2}T_0}d\mathcal {H}^n, \end{aligned}$$

since \(T \ge qs\), so we can absorb \((s+T)/T\) into the constant. Define

$$\begin{aligned} F(s,T) := \int _{l^{-1}(L^s_0\cap B_3)} |\sigma \theta ^s + \beta ^s_l|^2\rho _{0,l^{-2}T_0}d\mathcal {H}^n. \end{aligned}$$
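
For the reader's convenience, we record the change of variables \(x \mapsto lx\) used to rewrite (5.11) above. With \(\beta ^s_l(x) = l^{-2}\beta ^s(lx)\) as in the preceding paragraph, and using that \(2s = \sigma l^2\), that the Lagrangian angle is scale-invariant, that \(\rho _{0,T_0}(lx,0) = l^{-n}\rho _{0,l^{-2}T_0}(x,0)\), and that \(\mathcal {H}^n\) scales by \(l^n\), we have

$$\begin{aligned} \int _{L_0^s\cap B_3}|2s\theta ^s_0 + \beta ^s_0|^2\rho _{0,T_0}d\mathcal {H}^n = l^4\int _{l^{-1}(L^s_0\cap B_3)}|\sigma \theta ^s + \beta ^s_l|^2\rho _{0,l^{-2}T_0}d\mathcal {H}^n, \end{aligned}$$

and \(l^4/(T(s+T)) = 4(s+T)/T\), which accounts for the factor \((s+T)/T\).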

Notice that from the definition of \(T_0\) we can find \(C > 0\) independent of T and s such that \(l^{-2}T_0 \in [C^{-1}, C]\). We want to show that by possibly again decreasing \(s_5\) and \(\delta _5\), we can ensure

$$\begin{aligned} F(s,T) \le \eta /2. \end{aligned}$$

Seeking a contradiction, suppose that this is not the case. Then we can find sequences \(s_i\) and \(T_i\) both converging to 0 with \(qs_i \le T_i\) and such that

$$\begin{aligned} F(s_i,T_i) > \eta /2. \end{aligned}$$

After possibly extracting a subsequence which we don’t relabel, we may assume that \(l_i^{-2}T_0 \rightarrow T_1\). We split the rest of the proof into two cases.

Case 1 Suppose that (after possibly extracting a further subsequence) we have that \(\sigma _i \rightarrow \sigma > 0\). Then by (H3) we have

$$\begin{aligned} l^{-1}_iL^{s_i}_0 = \sigma _i^{1/2}\tilde{L}^{s_i}_0 \rightarrow \sigma ^{1/2}\Sigma \end{aligned}$$

in \(C^{1,\alpha }\). Therefore we have

$$\begin{aligned} \lim _{i\rightarrow \infty } F(s_i, T_i)&= \lim _{i\rightarrow \infty }\int _{\sigma _i^{1/2}\tilde{L}^{s_i}_0\cap l_i^{-1}B_3}|\sigma _i\theta ^{s_i} + \beta ^{s_i}_{l_i}|^2\rho _{0,l_i^{-2}T_0}d\mathcal {H}^n \\&= \lim _{i\rightarrow \infty } \sigma _i^2 \int _{\tilde{L}^{s_i}_0\cap (2s_i)^{-1/2} B_3} |\tilde{\theta }^{s_i} + \tilde{\beta }^{s_i}|^2\rho _{0,l_i^{-2}\sigma _i^{-1}T_0}d\mathcal {H}^n = 0, \end{aligned}$$

because \(|\tilde{\theta }^{s_i} + \tilde{\beta }^{s_i}|\) is bounded by \(D_2(1 + |x|^2)\) on \(B_{3(2s_i)^{-1/2}}\), which means that since \(l_i^{-2}\sigma _i^{-1}T_0 \rightarrow \sigma ^{-1}T_1 > 0\) the contribution to the integral outside some fixed large ball is small uniformly in i. Moreover by (H3) we have \(\lim _{i\rightarrow \infty }|\tilde{\theta }^{s_i} + \tilde{\beta }^{s_i}|^2 = 0\) locally, so inside this large ball the integral can be made as small as desired.

Case 2 Suppose now that after possibly passing to a subsequence, which we do not relabel, we have \(\sigma _i \rightarrow 0\). Then, with \(r_0\) defined as in property (H4) of the family \(L^s\), we find

$$\begin{aligned}&\lim _{i\rightarrow \infty }\int _{l_i^{-1}(L^{s_i}_0\cap B_{r_0\sqrt{s_i}})}|\sigma _i\theta ^{s_i} + \beta ^{s_i}_{l_i}|^2\rho _{0,l_i^{-2}T_0}d\mathcal {H}^n\\&\quad = \lim _{i\rightarrow \infty }\int _{\sigma _i^{1/2}\tilde{L}^{s_i}_0\cap B_{r_0\sqrt{\sigma _i/2}}} |\sigma _i\theta ^{s_i} + \beta ^{s_i}_{l_i}|^2\rho _{0,l_i^{-2}T_0}d\mathcal {H}^n \\&\quad = \lim _{i\rightarrow \infty }\sigma _i^2\int _{\tilde{L}^{s_i}_0\cap B_{r_0/\sqrt{2}}} |\tilde{\theta }^{s_i} + \tilde{\beta }^{s_i}|^2 \rho _{0,\sigma _i^{-1}l_i^{-2}T_0}d\mathcal {H}^n = 0, \end{aligned}$$

because \(|\tilde{\theta }^{s_i} + \tilde{\beta }^{s_i}|^2 \rightarrow 0\) locally, and \(\rho \) is bounded. So to estimate \(\lim _{i\rightarrow \infty }F(s_i,T_i)\) we need only control the integral in the annulus \(A(r_0\sqrt{\sigma _i/2},3l_i^{-1})\). We first notice that by (H4), provided i is large enough, \(l_i^{-1}L^{s_i}\cap A(r_0\sqrt{\sigma _i/2}, 3l_i^{-1})\) is graphical over P, and if \(v_i\) is the function arising from this decomposition we have the estimate

$$\begin{aligned} |v_i(x')| + |x'||\overline{\nabla }v_i(x')| + |x'|^2|\overline{\nabla }^2v_i(x')| \le D_3\left( l_i|x'|^2 + \sigma _i^{1/2} e^{-b|x'|^2/2\sigma _i}\right) . \end{aligned}$$

In the graphical region, the normal space to the graph is spanned by the vectors \(n_j := (-\overline{\nabla }v^j_i, e_j)\) for \(j = 1,\dots ,n\), where \(e_j\) denotes the vector in \(\mathbb {R}^n\) whose jth entry is 1 and all other entries are 0, and \(v^j_i\) is the jth coordinate of \(v_i\). Then, given an orthonormal basis \(\nu _1,\dots ,\nu _n\) for the normal space, we have \(\nu _j = \sum _{k=1}^n\alpha _{jk}n_k\), where the \(\alpha _{jk}\) are the real coefficients expressing \(\nu _j\) in terms of the \(n_k\); since \(|\overline{\nabla }v_i|\) is bounded in this region, these coefficients are uniformly bounded. It then follows that

$$\begin{aligned} |x^\perp | \le C\sum _{j=1}^n |\langle x, n_j\rangle |, \end{aligned}$$

where C depends only on a uniform bound for the \(\alpha _{jk}\). Now

$$\begin{aligned} \langle x, n_j\rangle&= \left\langle (x', v_i(x')), (-\overline{\nabla }v^j_i, e_j)\right\rangle \\&= -\left\langle x', \overline{\nabla }v^j_i(x')\right\rangle + v^j_i(x') \end{aligned}$$

from which it follows that

$$\begin{aligned} |x^\perp | \le C\left( |v_i(x')| + |x'||\overline{\nabla }v_i(x')|\right) . \end{aligned}$$

Therefore

$$\begin{aligned} |\nabla \beta ^{s_i}_{l_i}| = |x^\perp | \le C\left( l_i|x'|^2 + \sigma _i^{1/2}\right) . \end{aligned}$$
(5.12)

Using this estimate we can control \(\beta ^{s_i}_{l_i}\) independently of i on the annular region \(A(r_0\sqrt{\sigma _i/2}, 3l_i^{-1}) \cap l_i^{-1}L^{s_i}\). Indeed suppose that \(x \in A(r_0\sqrt{\sigma _i/2}, 3l_i^{-1}) \cap l_i^{-1} L^{s_i}\), then there is a corresponding \(x' \in A(r_0\sqrt{\sigma _i/2}, 3l_i^{-1}) \cap P\) such that \(x = x' + v_i(x')\). Define

$$\begin{aligned} x_i' := \frac{r_0\sqrt{\sigma _i}}{\sqrt{2}} \frac{x'}{|x'|} \quad \text { and }\quad x_i := x_i' + v_i(x_i'). \end{aligned}$$

Note that \(x_i\) of course depends on the original choice of x as well as i. We may now define a curve in \(l^{-1}_iL^{s_i}\) by setting

$$\begin{aligned} \gamma (t) := x_i' + t(x' - x_i') + v_i(x_i' + t(x' - x_i')). \end{aligned}$$

By the fundamental theorem of calculus we can write

$$\begin{aligned} \beta ^{s_i}_{l_i}(x)&= \beta ^{s_i}_{l_i}(x_i) + \int _0^1 \frac{d}{dt}\beta ^{s_i}_{l_i}(\gamma (t))dt \\&\le \beta ^{s_i}_{l_i}(x_i) + \int _0^1|\nabla \beta ^{s_i}_{l_i}(\gamma (t))||\gamma '(t)|dt, \end{aligned}$$

and furthermore

$$\begin{aligned} |\gamma '(t)| \le |x' - x_i'| + |\overline{\nabla }v_i||x' - x_i'| \le C|x| \end{aligned}$$

so

$$\begin{aligned} \beta ^{s_i}_{l_i}(x)&\le \beta ^{s_i}_{l_i}(x_i) + C|x|\int ^1_0 l_i|x_i' + t(x' - x_i')|^2 + \sigma _i^{1/2}dt \\&\le \beta ^{s_i}_{l_i}(x_i) + C\left( l_i|x|^3 + \sigma _i^{1/2}|x|\right) . \end{aligned}$$

Now \(\beta ^{s_i}_{l_i}(x_i) = \sigma _i\tilde{\beta }^{s_i}(\sigma ^{-1/2}_ix_i)\); moreover, since \(\sigma _i^{-1/2}x_i\) is bounded independently of i and of the original choice of x, we have from property (H3) of \(L^s\) that

$$\begin{aligned} \lim _{i\rightarrow \infty } \tilde{\beta }^{s_i}\left( \sigma _i^{-1/2}x_i\right) + \tilde{\theta }^{s_i}\left( \sigma _i^{-1/2}x_i\right) = 0 \end{aligned}$$

uniformly in x. Thus

$$\begin{aligned} \lim _{i\rightarrow \infty } \beta ^{s_i}_{l_i}(x_i) = -\lim _{i\rightarrow \infty }\sigma _i\tilde{\theta }^{s_i}\left( \sigma _i^{-1/2}x_i\right) = 0 \end{aligned}$$

uniformly in x, as \(\tilde{\theta }^{s_i}\) is bounded and \(\sigma _i \rightarrow 0\). Therefore we may bound \(|\beta ^{s_i}_{l_i}(x_i)|\) by some sequence \(b_i\) with \(b_i \rightarrow 0\). Consequently we have the estimate

$$\begin{aligned} |\beta ^{s_i}_{l_i}(x)| \le C\left( l_i|x|^3 + \sigma _i^{1/2}|x|\right) + b_i \end{aligned}$$

on \(A\left( r_0\sqrt{\sigma _i/2}, 3l_i^{-1}\right) \cap l_i^{-1}L^{s_i}\), hence

$$\begin{aligned} \lim _{i\rightarrow \infty } F(s_i,T_i)&= \lim _{i\rightarrow \infty } \int _{l_i^{-1}L^{s_i}\cap A\left( r_0\sqrt{\sigma _i/2}, 3l_i^{-1}\right) } |\sigma _i\theta ^{s_i} + \beta ^{s_i}_{l_i}|^2\rho _{0,l_i^{-2}T_0}d\mathcal {H}^n \\&= \lim _{i\rightarrow \infty }\int _{l_i^{-1}L^{s_i}\cap A\left( r_0\sqrt{\sigma _i/2}, 3l_i^{-1}\right) } |\beta ^{s_i}_{l_i}|^2\rho _{0,l_i^{-2}T_0}d\mathcal {H}^n \\&\le \lim _{i\rightarrow \infty } C\left( l_i^2 + \sigma _i + b_i^2\right) \int _{l_i^{-1}L^{s_i}}(|x|^6 + |x|^2 + 1)\rho _{0,l_i^{-2}T_0}d\mathcal {H}^n = 0, \end{aligned}$$

where we again used the fact that \(l_i^{-2}T_0 \rightarrow T_1 > 0\), so that outside of some large ball the contribution to the integral is very small. This limit being zero is a contradiction, so we are done. \(\square \)

We may now embark on the proof of Theorem 5.1. Changing scale, to prove the main theorem it in fact suffices to prove the following statement (which is very slightly stronger, owing to the bound on the scale of the density ratios):

Theorem

(Rescaled main theorem) There exist \(s_0\), \(\delta _0\) and \(\tau \) such that if \(t \le \delta _0\), \(r^2 \le \tau \) and \(s \le s_0\), then

$$\begin{aligned} \tilde{\Theta }^s_t(x_0, r) \le 1 + \varepsilon _0 \end{aligned}$$

for all \(x_0\) with \(|x_0| \le (2(s+t))^{-1/2}\).

Let \(q_1\) be defined as in Lemma 5.3, and recall that \(q_1 < 1\). If we set \(\tau := q_1/(2(q_1 + 1))\), then the rescaled version of Lemma 5.3 implies

Lemma

(Rescaled short-time existence) If \(s \le s_1\), \(t \le q_1 s\) and \(r^2 \le \tau \) then

$$\begin{aligned} \tilde{\Theta }^s_t(y_0, r) \le 1 + \varepsilon _0 \end{aligned}$$

whenever \(|y_0| \le (2(s+t))^{-1/2}\).

Similarly the rescaled Lemma 5.2 tells us that

Lemma

(Rescaled far from origin) If \(r^2 \le \tau \) and \(q_1s \le t \le \delta _1\) then

$$\begin{aligned} \tilde{\Theta }^s_t(y_0, r) \le 1 + \varepsilon _0 \end{aligned}$$

whenever \(K_0 \le |y_0| \le (2(s+t))^{-1/2}\).

Thus to prove the rescaled main theorem, it suffices to show that for appropriately chosen \(s_0, \delta _0\) and \(\tau \) the following holds true: if \(r^2 \le \tau \), \(s \le s_0\), \(t \le \delta _0\) and \(t \ge q_1 s\) then

$$\begin{aligned} \tilde{\Theta }^s_t(y_0, r) \le 1 + \varepsilon _0 \end{aligned}$$

whenever \(|y_0| \le K_0\). This is what we now show.

Proof of Theorem 5.1

For each s we define

$$\begin{aligned} T_s := \sup \left\{ T | \tilde{\Theta }^s_t(y_0, r) \le 1 + \varepsilon _0 \,\, \forall r^2 \le \tau , t \le T, |y_0| \le K_0 \right\} . \end{aligned}$$

We now claim that we can find \(\delta _0 > 0\) and \(s_0 > 0\) such that \(T_s \ge \delta _0\) for all \(s \le s_0\). Indeed, with \(\tau = q_1/(2(q_1+1))\) as above, we choose \(a > 1\) with \(a < (1 + 2\tau )\). Let \(C_0\) be the constant of White’s local regularity theorem (Theorem 2.2), and set

$$\begin{aligned} \tilde{C} := C_0\frac{\sqrt{2(a+3)}}{\sqrt{q_1(a-1)}}. \end{aligned}$$
(5.13)

We next let \(r_3 := \max \{r_0, r_1, r_2, 1\}\), where \(r_0\), \(r_1\), and \(r_2\) are as in, respectively, the construction of the approximating family, Lemmas 5.4 and 5.5. Let \(R := \sqrt{1+ 2q_1}K_0 + r_3\) and note that \(R \ge 2\). Next fix \(\alpha \in (0,1)\) as in the proof of Lemma 5.3, and \(\varepsilon = \varepsilon (\Sigma , \varepsilon _0, \alpha )\) as given by Lemma 8.2. We apply the stability result, Theorem 4.2 with \(R = R\); \(r = r_3\); \(C = \max \{C_1, C\}\) the constants from Lemma 5.4, and the construction of the approximating family respectively; \(M = \tilde{C}\); \(\tau = \tau \); \(\Sigma = \Sigma \) and \(\varepsilon = \varepsilon \). Thus we obtain \(\tilde{R} \ge R\), \(\eta > 0\) and \(\nu \ge 0\) as in the theorem. Apply Lemma 5.7 with \(\eta = \eta /2\) and \(R = \tilde{R}\). This gives \(s_5\) and \(\delta _5\) such that the lemma holds. Next apply Lemma 5.4 with \(\nu \) to obtain \(s_2\) and \(\delta _2\). We now let \(s_0 := \min \{s_1,s_2,s_3,s_4,s_5\}\) and \(\delta _0 := \min \{\delta _1, \delta _2, \delta _3, \delta _4, \delta _5\}\). We finally possibly decrease \(s_0\) and \(\delta _0\) slightly to ensure that

$$\begin{aligned} (s_0 + \delta _0)^{-1/8} \ge 2\tilde{R}. \end{aligned}$$

This will ensure that in the annular region \(A(r_3, \tilde{R})\) we have all of the estimates of the intermediate lemmas of this section. We now claim that these \(s_0\) and \(\delta _0\) are the required constants. Specifically we claim that for all \(s \le s_0\) we have \(T_s \ge \delta _0\). Indeed, suppose that this were not the case and that for some \(s \le s_0\) we have \(T_s < \delta _0\). Our goal is to show that the hypotheses of Theorem 4.2 are satisfied by \(\tilde{L}^s_t\) for some t close to \(T_s\), so that we can conclude \(\tilde{L}^s_t\) is \(C^{1,\alpha }\) close to \(\Sigma \). Lemma 8.2 will then give density ratio bounds for times past \(T_s\), resulting in a contradiction. To this end we define \(T := T_s/a\), then since \(T < T_s\) we have for all \(t \in [T, T_s)\)

$$\begin{aligned} \tilde{\Theta }^s_t(x,r) \le 1 + \varepsilon _0, \end{aligned}$$

for all \(r^2 \le \tau \) and \(x \in B_{K_0}\). In fact, as has already been observed, the same is true for all \(|x| \le (s+t)^{-1/8}\), so in particular for all \(|x| \le 2\tilde{R}\). Let \(\hat{L}^s_l\) denote the Lagrangian mean curvature flow with initial condition \(\tilde{L}^s_T\). Let \(\sigma ^2 = 2(s + T)\), then we can write \(\tilde{L}^s_T = \sigma ^{-1}L^s_T\). Then we may write \(\hat{L}^s_l\) as

$$\begin{aligned} \hat{L}^s_l = \sigma ^{-1}L^s_{T+\sigma ^2l} = \frac{\sqrt{2(T+s+\sigma ^2l)}}{\sqrt{2(T+s)}}\tilde{L}^s_{T+\sigma ^2l} = \sqrt{1 + 2l}\tilde{L}^s_{T + \sigma ^2l}. \end{aligned}$$
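
Since Gaussian density ratios are invariant under simultaneously rescaling the submanifold, the centre and the radius, this identity can be restated as

$$\begin{aligned} \hat{\Theta }^s_l(x,r) = \tilde{\Theta }^s_{T + \sigma ^2l}\left( \frac{x}{\sqrt{1+2l}}, \frac{r}{\sqrt{1+2l}}\right) , \end{aligned}$$

a relation which is used again towards the end of the proof.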

This implies the density ratio control

$$\begin{aligned} \hat{\Theta }^s_l(x,r) \le 1 + \varepsilon _0, \end{aligned}$$

for all l such that \(T + \sigma ^2l \in [T, T_s)\), \(r^2 \le \tau \) and \(x \in B_{2\tilde{R}}\). By White’s local regularity theorem (Theorem 2.2) we get curvature bounds of the form

$$\begin{aligned} |\hat{A}^s_l | \le \frac{C_0}{\sqrt{l}} \quad \text {for } l \le \tau , \text { on } B_{\tilde{R}}, \end{aligned}$$

or, scaled back to the original scale this means

$$\begin{aligned} |A^s_t| \le \frac{C_0}{\sqrt{t - T}}, \end{aligned}$$
(5.14)

on \(B_{\sigma \tilde{R}}\) for all \(t < T_s\) with \(T \le t \le T + 2(s+T)\tau = (1 + 2\tau )T + 2\tau s\). Notice in particular that
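
The passage from the curvature bound on \(\hat{L}^s_l\) to (5.14) is the usual scaling of the second fundamental form: since \(\hat{L}^s_l = \sigma ^{-1}L^s_{T+\sigma ^2l}\), we have \(|\hat{A}^s_l| = \sigma |A^s_{T+\sigma ^2l}|\), and \(B_{\tilde{R}}\) corresponds to \(B_{\sigma \tilde{R}}\) at the original scale, so with \(t = T + \sigma ^2l\),

$$\begin{aligned} |A^s_t| = \sigma ^{-1}|\hat{A}^s_l| \le \frac{C_0}{\sigma \sqrt{l}} = \frac{C_0}{\sigma \sqrt{(t-T)/\sigma ^2}} = \frac{C_0}{\sqrt{t-T}}. \end{aligned}$$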

$$\begin{aligned} T_s = aT \le (1 + 2\tau )T + 2 \tau s, \end{aligned}$$

so the above estimate always holds up to time \(T_s\). Let \(t_0 := T(a+1)/2\). Then from (5.14) we see

$$\begin{aligned} |A^s_{t_0}| \le \frac{C_0}{\sqrt{t_0 - T}} = \frac{C_0\sqrt{2}}{\sqrt{(a-1)T}} = \frac{C_0\sqrt{\frac{2(a+3)}{a-1}}}{\sqrt{(a + 3)T}} = \frac{C_0\sqrt{\frac{2(a+3)}{a-1}}}{\sqrt{2(t_0 + T)}}. \end{aligned}$$

Recall that \(T \ge q_1s\) and \(q_1 < 1\) so

$$\begin{aligned} |A^s_{t_0}| \le \frac{C_0\sqrt{\frac{2(a+3)}{a-1}}}{\sqrt{2(t_0 + q_1s)}} \le \frac{\tilde{C}}{\sqrt{2(t_0 + s)}}, \end{aligned}$$

on \(B_{\sigma \tilde{R}}\), where \(\tilde{C}\) is defined as in (5.13). Similarly, if \(t > 0\) is such that \(t_0 + t \le T_s\) then

$$\begin{aligned} |A^s_{t_0 + t}| \le \frac{C_0}{\sqrt{t_0 + t - T}} \le \frac{C_0\sqrt{2}}{\sqrt{(a-1)T + t}} \le \frac{\tilde{C}}{\sqrt{2(t_0 + t + s)}}. \end{aligned}$$

In other words, for each \(t \in [t_0, T_s)\) we have

$$\begin{aligned} |A^s_t| \le \frac{\tilde{C}}{\sqrt{2(s+t)}} \quad \text { on } B_{\sigma \tilde{R}}, \end{aligned}$$

which implies that for each \(t \in [t_0, T_s)\) we have

$$\begin{aligned} |\tilde{A}^s_t| \le \tilde{C} \quad \text { on } B_{\tilde{R}}. \end{aligned}$$

This means that \(\tilde{L}^s_t\) satisfies condition (i) of Theorem 4.2 with \(M = \tilde{C}\) and \(\tilde{R} = \tilde{R}\) for every \(t \in [t_0, T_s)\). Next, since Lemma 5.7 (applied with \(\eta /2\)) bounds the average of the quantity below over \([T, T_s]\) by \(\eta /2\), and \([t_0, T_s]\) occupies half of that interval, the average over \([t_0, T_s]\) is at most \(\eta \); we may therefore select \(t_1 \in [t_0, T_s)\) with

$$\begin{aligned} \int _{\tilde{L}^s_{t_1}\cap B_{\tilde{R}}} |\vec {H} - x^\perp |^2d\mathcal {H}^n \le \eta . \end{aligned}$$

So \(\tilde{L}^s_{t_1}\) also satisfies condition (iii) of Theorem 4.2. Condition (iv) of Theorem 4.2 holds for \(\tilde{L}^s_{t_1}\) by Lemma 5.4, and condition (ii) holds by definition of \(T_s\) as \(t_1 < T_s\). Hence Theorem 4.2 implies that \(\tilde{L}^s_{t_1}\) is \(\varepsilon \)-close to \(\Sigma \) in \(C^{1,\alpha }(B_{\tilde{R}})\). Redefine \(\hat{L}^s_l\) to be the Lagrangian mean curvature flow with initial condition \(\tilde{L}^s_{t_1}\). As before we know that we can write

$$\begin{aligned} \hat{L}^s_l = \sqrt{1 + 2l}\tilde{L}^s_{t_1 + 2(s+t_1)l}. \end{aligned}$$

Then Lemma 8.2 applied to \(\hat{L}^s_l\) says that

$$\begin{aligned} \hat{\Theta }^s_l(x,r) \le 1 + \varepsilon _0 \quad \text {for } r^2, \, l \le q_1 \end{aligned}$$

for \(|x| \le \tilde{R} - 1\). Since \(\tilde{R} \ge R = \sqrt{1 + 2q_1}K_0 + r_3\) and \(r_3 \ge 1\), this means that the same is true for \(|x| \le \sqrt{1 + 2q_1}K_0\). Rescaling, this is equivalent to

$$\begin{aligned} \tilde{\Theta }^s_{t_1 + 2(s+t_1)l}\left( \frac{x}{\sqrt{1 + 2l}}, \frac{r}{\sqrt{1 + 2l}}\right) \le 1 + \varepsilon _0 , \end{aligned}$$

for \(r^2\), \(l \le q_1\) and \(|x| \le \sqrt{1 + 2q_1}K_0\). Or in other words

$$\begin{aligned} \tilde{\Theta }^s_t(x,r) \le 1 + \varepsilon _0, \end{aligned}$$

for \(r^2 \le q_1/(1 + 2q_1)\) (so in particular for \(r^2 \le \tau \)), \(|x| \le K_0\) and \(t_1 \le t \le (1 + 2q_1)t_1 + 2q_1s\). However, \((1 + 2q_1)t_1 + 2q_1s > at_1 > aT = T_s\), which contradicts the definition of \(T_s\). \(\square \)

6 Short-time existence

In this section we prove the following short-time existence result using Theorem 5.1.

Theorem 6.1

Suppose that \(L\subset \mathbb {C}^n\) is a compact Lagrangian submanifold of \(\mathbb {C}^n\) with a finite number of singularities, each of which is asymptotic to a pair of transversally intersecting planes \(P_1 + P_2\) where neither \(P_1 + P_2\) nor \(P_1 - P_2\) are area minimizing. Then there exists \(T > 0\) and a Lagrangian mean curvature flow \((L_t)_{0<t<T}\) such that as \(t \searrow 0\), \(L_t \rightarrow L\) as varifolds and in \(C^\infty _\mathrm{loc}\) away from the singularities.

Proof

For simplicity we suppose that L has only one singularity, located at the origin. The case where L has more than one singularity follows by entirely analogous arguments. Let \((L^s)_{0<s\le c}\) be the approximating family constructed in Sect. 7. By standard short-time existence theory for smooth compact mean curvature flow, for each \(s \in (0,c]\) there exists a Lagrangian mean curvature flow \((L^s_t)_{0 \le t \le T_s}\) starting from \(L^s\), with \(T_s > 0\). We claim that there exists a \(T_0 > 0\) such that \(T_s \ge T_0\) for all s sufficiently small, and that, furthermore, we have interior estimates on |A| and its higher derivatives for all \(t > 0\) which are independent of s. By virtue of Lemma 8.1, we can apply Corollary 8.4 on small balls everywhere outside \(B_{1/3}\) to get uniform curvature bounds outside of \(B_{1/2}\) up to time \(\min \{T_s,\delta \}\), where \(\delta > 0\) is independent of s. Uniform estimates on the higher derivatives then follow immediately by standard parabolic PDE theory.

To obtain the desired bounds on \(B_{1/2}\) we use Theorem 5.1. Let \(\varepsilon _0 > 0\) be the constant of Brian White’s local regularity theorem. Then Theorem 5.1 says that there exist \(s_0\), \(\delta _0\) and \(\tau \) such that for all \(s \le s_0\), \(t \le \delta _0\), \(r^2 \le \tau t\) and \(x_0 \in B_{1/2}\) we have

$$\begin{aligned} \Theta ^s_t(x_0, r) = \Theta ^s(x_0, t+r^2, r) \le 1 + \varepsilon _0. \end{aligned}$$

This implies that for all \(s \le s_0\), \(t\le \delta _0\) and \(r^2 \le \tau t\) we have \(\Theta ^s(x_0, t, r) \le 1 + \varepsilon _0\). We now fix \(s \le s_0\), \(t_0 < \min \{\delta _0, T_s\}\), and \(\rho \le \min \{1/4, \sqrt{t_0}\}\). Then it follows that \(B_{2\rho }(x_0) \subset B_1\), and furthermore that

$$\begin{aligned} \Theta ^s(x,t,r) \le 1 + \varepsilon _0 \end{aligned}$$

for all \(r \le \tau \rho ^2\), and \((x,t) \in B_{2\rho }(x_0) \times (t_0 - \rho ^2 , t_0]\). Then it immediately follows from White’s theorem that

$$\begin{aligned} |A^s_t(x)| \le \frac{C}{\sqrt{t - t_0 + \rho ^2}} \end{aligned}$$

for all \((x,t) \in B_\rho (x_0) \times (t_0 - \rho ^2, t_0]\), where C depends only on \(\tau \) and \(\varepsilon _0\). These estimates are then uniform in s for \(s \le s_0\). Moreover, these curvature bounds, along with those outside of the ball \(B_{1/2}\), imply that \(T_s \ge \min \{\delta , \delta _0\}\).

Because the estimates are independent of s, we may take a subsequential limit of the flows in the varifold topology; the estimates pass to the limit, and so we obtain a limiting flow \((L_t)_{0<t<T_0}\) for which \(L_t \rightarrow L\) as varifolds as \(t \searrow 0\).

Note that away from the singularities, we can obtain uniform curvature estimates on |A| thanks to Corollary 8.4, so it follows that \((L_t)\) attains the initial data L in \(C^\infty _\mathrm{loc}\) away from the singular points. \(\square \)

7 Construction of approximating family

In this section, we consider a Lagrangian submanifold L of \(\mathbb {C}^n\) with a singularity at the origin which is asymptotic to the pair of planes P considered in Sect. 4. We approximate L by gluing in, at smaller and smaller scales in place of the singularity, the self-expander \(\Sigma \) which is asymptotic to P. We will show that this yields a family of compact Lagrangians, exact in \(B_4\), satisfying the hypotheses (H1)-(H4) given in Sect. 5 that are required to implement the analysis in that section.

Since L is conically singular we may write \(L\cap B_4\) as a graph over \(P\cap B_4\) (possibly rescaling L so that this is the case). We may further apply the Lagrangian neighbourhood theorem (its extension to cones was proved by Joyce, [7, Theorem 4.1]), so that we may identify \(L\cap B_4\) with the graph of a one-form \(\gamma \) on P. Recall that the manifold corresponding to the graph of such a one-form is Lagrangian if and only if the one-form is closed.

Moreover, since we have assumed that L is exact inside \(B_4\), there exists \(u\in C^\infty (P\cap B_4)\) such that \(\mathrm {d}u=\gamma \). Since we know that \(\gamma \) must decay quadratically, we can choose a primitive for \(\gamma \) which has cubic decay, i.e.,

$$\begin{aligned} |\nabla ^ku(x)|\le C|x|^{3-k}. \end{aligned}$$
(7.1)

We saw in Theorem 4.1 that there exists a unique, smooth zero-Maslov self-expander asymptotic to P. We may also identify the self-expander outside a ball of radius \(r_0\) with the graph of a one-form over P and, since a zero-Maslov class Lagrangian self-expander is globally exact, there exists a function \(v\in C^\infty (P\backslash B_{r_0})\) such that the self-expander is described by the exact one-form \(\psi =\mathrm {d}v\) on \(P\backslash B_{r_0}\). Further, Lotay and Neves proved [10, Theorem 3.1]

$$\begin{aligned} \Vert v\Vert _{C^k(P\backslash B_r)}\le Ce^{-br^2} \quad \text {for all }r\ge r_0. \end{aligned}$$
(7.2)

We will glue \(\Sigma _s:=\sqrt{2s}\Sigma \) into the initial condition L to resolve the singularity. Our new manifold, \(L^s\), will be the rescaled self-expander \(\Sigma _s\) inside \(B_{r_0\sqrt{2s}}\), the manifold L outside \(B_{4}\), and will smoothly interpolate between the two on the annulus \(A(r_0\sqrt{2s},4)\).

To do this, we will glue together the primitives of the one-forms corresponding to these manifolds, before taking the exterior derivative. This gives us a one-form that will describe \(L^s\) on the annulus \(A(r_0\sqrt{2s},4)\), which ensures \(L^s\) is still Lagrangian and is exact in \(B_4\). We will then show that this family satisfies the properties (H1)-(H4).

Let \(\varphi :\mathbb {R}_+\rightarrow [0,1]\) be a smooth function satisfying \(\varphi \equiv 1\) on [0, 1] and \(\varphi \equiv 0\) on \([2,\infty )\). Consider, for \(r_0\sqrt{2s}\le |x|\le 4\) and \(0<s\le c\), the one-form

$$\begin{aligned} \gamma _s(x)&=\mathrm {d}w_s(x)=\mathrm {d}\left[ \varphi (s^{-1/4}|x|)2sv(x/\sqrt{2s})+(1-\varphi (s^{-1/4}|x|))u(x)\right] , \end{aligned}$$
(7.3)

where \(r_0\sqrt{2s}<s^{1/4}<2s^{1/4}<4\) holds for all \(s\le c\). Notice that in particular we must have \(c<1\). Then \(\gamma _s(x)\equiv \psi _s(x):=\sqrt{2s}\psi (x/\sqrt{2s})\), the one-form corresponding to the rescaled self-expander \(\Sigma _s\), for \(|x|<s^{1/4}\), and \(\gamma _s\equiv \gamma \) for \(|x|>2s^{1/4}\). Notice that since \(\gamma _s\) is exact, it is closed, and therefore its graph corresponds to an exact Lagrangian.
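
Indeed, for \(|x|<s^{1/4}\) the cut-off satisfies \(\varphi (s^{-1/4}|x|)=1\), so the chain rule gives

$$\begin{aligned} \gamma _s(x)=\mathrm {d}\left[ 2s\,v(x/\sqrt{2s})\right] =\sqrt{2s}\,(\mathrm {d}v)(x/\sqrt{2s})=\sqrt{2s}\,\psi (x/\sqrt{2s})=\psi _s(x), \end{aligned}$$

which is precisely the one-form describing the rescaled self-expander \(\Sigma _s=\sqrt{2s}\Sigma \).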

We define the smooth exact Lagrangian \(L^s\) by

  • \(L^s\cap B_{r_0\sqrt{2s}}=\Sigma _s\cap B_{r_0\sqrt{2s}}\),

  • \(L^s\cap A(r_0\sqrt{2s},4)=\)graph \(\gamma _s\),

  • \(L^s\backslash B_4=L\backslash B_4\).

We will now show that \(L^s\) satisfies (H1)-(H4).

For (H1), notice that both the self-expander and the initial condition individually satisfy (H1), and so for the rescaled self-expander, we have that

$$\begin{aligned} \mathcal {H}^n(\Sigma _s\cap B_R)&=\mathcal {H}^n((\sqrt{2s}\Sigma )\cap B_R)=(2s)^{n/2}\mathcal {H}^n(\Sigma \cap B_{R/\sqrt{2s}}) \\&\le (2s)^{n/2}D_1\left( \frac{R}{\sqrt{2s}}\right) ^n=D_1R^n. \end{aligned}$$

Since \(L^s\) interpolates between \(\Sigma _s\) and L on a compact region, \(L^s\) satisfies (H1).

We see that (H2) is satisfied because the Lagrangian angle of the initial condition L and the self-expander \(\Sigma \) are bounded, as is that of the rescaled self-expander \(\Sigma _s\) by Lemma 3.1 (i) and the maximum principle, since the Lagrangian angle of P is locally constant. When we interpolate between the two, we may consider the formula for the Lagrangian angle of a Lagrangian graph, as seen in [1, p. 5]. This tells us that a Lagrangian graph in \(\mathbb {C}^n\) (over \(\mathbb {R}^n\)) given by \((x_1,...,x_n,u_1(x),...,u_n(x))\), where \(u:\mathbb {R}^n\rightarrow \mathbb {R}, u_i:=\frac{\partial u}{\partial x_i},\) has Lagrangian angle

$$\begin{aligned} \theta =\sum \arctan \lambda _i, \end{aligned}$$

where the \(\lambda _i\) are the eigenvalues of the Hessian of u. Since the eigenvalues of the Hessian are controlled by the second derivatives of u, if the \(C^2\) norm of u is small then the Lagrangian angle of the graph is close to the Lagrangian angle of the plane over which the graph is taken; in particular, the Lagrangian angle of the graph is uniformly bounded. In our case the Lagrangian angle of the graph of \(\gamma _s\) is given by the sum of the arctangents of the eigenvalues of the Hessian of the function \(w_s\), and, as we will show when we prove (H4), the \(C^2\) norm of \(w_s\) is small; this means that we can uniformly bound the Lagrangian angle of the graph of \(\gamma _s\), and so the Lagrangian angle of \(L^s\).
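
In particular, using \(|\arctan \lambda _i|\le |\lambda _i|\) and the fact that each eigenvalue is bounded by the norm of the Hessian, over each plane of P (on which the Lagrangian angle \(\theta _P\) is constant) one obtains the elementary bound

$$\begin{aligned} |\theta - \theta _P| = \Big |\sum _{i=1}^n\arctan \lambda _i\Big | \le \sum _{i=1}^n|\lambda _i| \le n\left\| \overline{\nabla }^2w_s\right\| , \end{aligned}$$

so a uniform bound on the \(C^2\) norm of \(w_s\) yields a uniform bound on the Lagrangian angle of the interpolating region.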

On the initial condition, since \(\lambda =Jx\), we have that \(\mathrm {d}\beta _L=\lambda |_L=(Jx)^T\). Therefore \(\beta _L\) is bounded quadratically, and so is the primitive for the Liouville form of \(L^s\backslash B_{2s^{1/4}}\). On the self-expander, applying the maximum principle to Lemma 3.1 (ii), we have that \(\beta ^s\) (the primitive of \(\lambda |_{\Sigma _s}\)) is bounded by \(\beta _P\), and so \(|\beta ^s(x)|\le |\beta _P(x)|\le C|x|^2\) for \(|x|<s^{1/4}\). So it remains to check that this still holds where we interpolate. We perform a calculation similar to that in the proof of Lemma 3.1 (ii). We have that, for \(L^s_t\) the manifold described by the graph of the one-form \(t\mathrm {d}w_s\),

$$\begin{aligned} \frac{\mathrm {d}}{\mathrm {d}t} \lambda |_{L^s_t}=:\mathcal {L}_{J\nabla w_s} \lambda |_{L^s_t}=\mathrm {d}(J\nabla w_s\lrcorner \lambda |_{L^s_t})+J\nabla w_s\lrcorner \mathrm {d}\lambda |_{L^s_t}. \end{aligned}$$

Since \(\mathrm {d}\lambda =\omega \) and \(J\nabla w_s\lrcorner \omega =\mathrm {d}w_s\), and possibly adding a constant, depending on s and t, to \(\beta ^s_t\), we have that

$$\begin{aligned} \frac{\mathrm {d}\beta ^s_t}{\mathrm {d}t}=-2w_s+\langle x,\nabla w_s \rangle |_{L^s_t}, \end{aligned}$$

where \(\mathrm {d}\beta ^s_t\) is equal to the restriction of the Liouville form \(\lambda \) to the graph of \(t\gamma _s\). Integrating, we find that

$$\begin{aligned} \beta ^s=\beta _P-2w_s+\int _0^1\langle x,\nabla w_s\rangle |_{L^s_t} \,\mathrm {d}t, \end{aligned}$$

where \(\beta _P\) is the primitive for \(\lambda \) on P. Now \(w_s\) is bounded by \(D(1+|x|^2)\) independently of s, using (7.1) and (7.2), as is \(\langle x,\nabla w_s\rangle \) (using Cauchy–Schwarz together with the estimates (7.1) and (7.2)), so we find that \(\beta ^s\) is bounded independently of s on the annulus \(A(s^{1/4},2s^{1/4})\). Therefore, we have that

$$\begin{aligned} |\theta ^s(x)|+|\beta ^s(x)|\le D_2(|x|^2+1), \end{aligned}$$

and so (H2) is satisfied.

To show that (H3) is satisfied, recall that we define \(L^s\) by \(L^s\cap B_{r_0\sqrt{2s}}=\Sigma _s\cap B_{r_0\sqrt{2s}}\) and \(L^s\backslash B_4=L\backslash B_4\), interpolating smoothly between the two; the interpolation takes place exactly where \(s^{1/4}\le |x|\le 2s^{1/4}\). Therefore, when we rescale by \(1/\sqrt{2s}\), we have that \(\tilde{L}^s\cap B_{r_0}=\Sigma \cap B_{r_0}\). So it remains to check convergence outside this ball.

On the annulus \(r_0\le |x|\le 4/\sqrt{2s}\), \(\tilde{L}^s\) is identified with the graph of the following one-form

$$\begin{aligned} \tilde{\gamma }_s (x)=\mathrm {d}\left[ \varphi (s^{1/4}|x|)v(x)+(1-\varphi (s^{1/4}|x|))\frac{u(\sqrt{2s}x)}{2s}\right] . \end{aligned}$$

From this expression, noticing that

$$\begin{aligned} \frac{|u(\sqrt{2s}x)|}{2s}\le C\frac{(2s)^{3/2}|x|^3}{2s}=C\sqrt{2s}\,|x|^3, \end{aligned}$$

we see that, as \(s\rightarrow 0\), \(\tilde{\gamma }_s\rightarrow \mathrm {d}v=\psi \) smoothly on compact sets, where \(\psi \) is the one-form whose graph is identified with \(\Sigma \). This says that, outside \(B_{r_0}\), \(\tilde{L}^s\rightarrow \Sigma \) smoothly on compact sets as \(s\rightarrow 0\), which is stronger than the required \(C^{1,\alpha }_\mathrm{loc}\) convergence.

Finally, we check that the second fundamental form of \(\tilde{L}^s\) is uniformly bounded in s. The second fundamental form of \(\Sigma \) is bounded, and if A is the second fundamental form of L, then rescaling L by \((2s)^{-1/2}\) means that the second fundamental form scales by \(\sqrt{2s}\). Since \(\sqrt{2s}<1\), we can uniformly bound both second fundamental forms, so that \(\tilde{L}^s\), which is built from \(\Sigma \) and \((2s)^{-1/2}L\) (with the interpolation region controlled by the estimates on \(\gamma _s\) established below), has second fundamental form uniformly bounded in s.

To see (H4), first notice that since we can write \(L^s\cap A(r_0\sqrt{2s},4)\) as a graph over \(P\cap A(r_0\sqrt{2s},4)\), we have that \(L^s\) has the same number of connected components as P in the annulus \(A(r_0\sqrt{2s},4)\). We now must estimate \(\gamma _s\). Firstly, note that we have

$$\begin{aligned} |\nabla ^k(v(x/\sqrt{2s}))|\le |(2s)^{-k/2}(\nabla ^kv)(x/\sqrt{2s})|\le C(2s)^{-k/2}e^{-b|x|^2/2s}, \end{aligned}$$
(7.4)

where we have used (7.2).

We will need different estimates on \(2s\nabla ^{2}v(x/\sqrt{2s})\) and \(2s\nabla ^{3}v(x/\sqrt{2s})\), which we find as follows.

$$\begin{aligned} |2s\nabla ^2v(x/\sqrt{2s})|&\le Ce^{-b|x|^2/2s}=C\frac{\sqrt{2s}}{|x|}\frac{|x|}{\sqrt{2s}}e^{-b|x|^2/2s} \nonumber \\&=C\frac{\sqrt{2s}}{|x|}e^{-\tilde{b} |x|^2/2s}\frac{|x|}{\sqrt{2s}}e^{-\tilde{b} |x|^2/2s} \le \tilde{C} \frac{\sqrt{2s}}{|x|}e^{-\tilde{b} |x|^2/2s}, \end{aligned}$$
(7.5)

where \(\tilde{b}=b/2\) and \(\tilde{C}= Ce^{-1/2}/\sqrt{b}\), since the function \(y\mapsto ye^{-by^2/2}\) is bounded independently of y (by \(e^{-1/2}/\sqrt{b}\)) on \(\mathbb {R}\), and so \(\tilde{C}\) is independent of s.

A similar calculation, this time using the uniform boundedness of the function \(y\mapsto ye^{-by/2}\) for \(y>0\), shows that

$$\begin{aligned} |2s\nabla ^3 v(x/\sqrt{2s})|\le C \frac{\sqrt{2s}}{|x|^2}e^{-b|x|^2/2s}, \end{aligned}$$
(7.6)

where we make C (which remains independent of s) larger if necessary and b smaller (which does not affect the previous estimates).

We have, using the definition in (7.3),

$$\begin{aligned} |\gamma _s|&=|\nabla w_s|=|\varphi '(s^{-1/4}|x|)2s^{3/4}v(x/\sqrt{2s})+\varphi (s^{-1/4}|x|)2s\nabla [v(x/\sqrt{2s})] \\&\quad -s^{-1/4}\varphi '(s^{-1/4}|x|)u(x)+(1-\varphi (s^{-1/4}|x|))\nabla u(x)|, \end{aligned}$$

and, using that \(s^{3/4}=\sqrt{s}s^{1/4}< \sqrt{s}\) since \(s<1\), (7.1) and (7.4) imply that

$$\begin{aligned} |\gamma _s|&\le \sqrt{2s}Ce^{-b|x|^2/2s}+\sqrt{2s}Ce^{-b|x|^2/2s}+C|x|^{3-1}+C|x|^2 \nonumber \\&\le C\left[ \sqrt{2s}e^{-b|x|^2/2s}+|x|^2\right] , \end{aligned}$$
(7.7)

where we have made C larger.

Now consider

$$\begin{aligned} |\nabla \gamma _s|&=|\nabla ^2 w_s|=|\varphi ''(s^{-1/4}|x|)2s^{1/2}v(x/\sqrt{2s})+\varphi '(s^{-1/4}|x|)4s^{3/4}\nabla [v(x/\sqrt{2s})] \\&\quad +\varphi (s^{-1/4}|x|)2s\nabla ^2[v(x/\sqrt{2s})]-s^{-1/2}\varphi ''(s^{-1/4}|x|)u(x)\\&\quad -2s^{-1/4}\varphi '(s^{-1/4}|x|)\nabla u(x) +(1-\varphi (s^{-1/4}|x|))\nabla ^2u(x)| \end{aligned}$$

Using that on the support of \(\varphi '\) and \(\varphi ''\) we have (since \(s<1\)) \(\sqrt{s}<s^{1/4}\le \sqrt{2}\sqrt{2s}/|x|\), and applying the estimates (7.4) and (7.5), we obtain

$$\begin{aligned} |\nabla \gamma _s|&\le C\left[ \left( \frac{\sqrt{2s}}{|x|}+\frac{\sqrt{2s}}{|x|}+\frac{\sqrt{2s}}{|x|}\right) e^{-b|x|^2/2s}+|x|^{3-2}+|x|^{2-1}+|x|\right] \nonumber \\&\le C\left[ \frac{\sqrt{2s}}{|x|}e^{-b|x|^2/2s}+|x|\right] . \end{aligned}$$
(7.8)

Finally, performing a similar computation to those above and combining (7.4), (7.5) and (7.6) we find that

$$\begin{aligned} |\nabla ^2\gamma _s|\le C\left[ \frac{\sqrt{2s}}{|x|^2}e^{-b|x|^2/2s}+1\right] . \end{aligned}$$
(7.9)

Combining (7.7), (7.8) and (7.9), we have that

$$\begin{aligned} |\gamma _s|+|x||\nabla \gamma _s|+|x|^2|\nabla ^2\gamma _s|\le D_3\left( |x|^2+\sqrt{2s}e^{-b|x|^2/2s}\right) , \end{aligned}$$

where \(D_3\) is a constant independent of s. Therefore (H4) is satisfied.