1 Introduction and Main Results

The mean curvature flow is a 1-parameter family \(F_t\) of embeddings of a manifold \(M\) into a semi-Riemannian manifold that evolves in the direction of the mean curvature vector, that is, \(F_t\) satisfies

$$\begin{aligned} \partial _t F(x,t) = \vec {H}(x,t) \,, \qquad F(x,0) = F_0(x) \end{aligned}$$

for all \(x\in M\) and \(t\in [0,T)\) for some \(T>0\). In particular, the case of hypersurfaces in Riemannian manifolds has been studied extensively in the literature; see e. g. [2] for a recent survey.

Another interesting case that has been studied less extensively is that of hypersurfaces in globally hyperbolic Lorentzian manifolds. In this case, the mean curvature flow is a parabolic equation precisely if the immersion is spacelike. Further, if the ambient space is a product manifold \(M\times \mathbb {R}\) and the evolving hypersurfaces are graphs over \(M\), the flow is given entirely in terms of a function \(u:M\times [0,T)\rightarrow \mathbb {R}\), subject to the equation

$$\begin{aligned} \partial _t u = \sqrt{1 - |\nabla u|_{\sigma }^2} \, \mathrm {div}_{\sigma }\left( \frac{^{\sigma }\nabla u}{\sqrt{1-|\nabla u|_{\sigma }^2}} \right) \,, \end{aligned}$$

where \(\sigma \) is the metric on M and the norms and derivatives are taken with respect to \(\sigma \). Ecker and Huisken considered the mean curvature flow in cosmological spacetimes satisfying the timelike convergence condition to find hypersurfaces of prescribed mean curvature [6]. However, they had to assume a structural monotonicity condition for the prescribed mean curvature. The timelike convergence condition and the structural monotonicity condition were later removed by Gerhardt [7]. Long-time existence of the mean curvature flow in Minkowski space was shown in [4]. In [3], Ecker considered the mean curvature flow in asymptotically flat manifolds and proved long-time existence and convergence to a maximal hypersurface, provided that the spacetime satisfies a weak energy condition.

In this paper, we consider the asymptotic behaviour of solutions of the mean curvature flow in asymptotically flat manifolds. In contrast to [3], we do not need any energy condition on the spacetime, but we assume that the spacetime is of product type. Our main result is as follows:

Theorem 1.1

Let \((M^n,\sigma )\), \(n\ne 2\), be an asymptotically flat Riemannian manifold and let \(M\times \mathbb {R}\) be equipped with the Lorentzian product metric

$$\begin{aligned} \sigma - (\mathrm {d}x^0)^2 \,. \end{aligned}$$

Furthermore, let \(F_0\subset M\times \mathbb {R}\) be a hypersurface which is the graph of a function \(u_0\in C^{0,1}(M)\) with Lipschitz constant smaller than \(1-\varepsilon \) for some \(\varepsilon >0\) and such that \(u_0(x)\rightarrow s\) for some \(s\in \mathbb {R}\) as \(x\rightarrow \infty \). Then there exists a solution \(F_t\) to the mean curvature flow which starts at \(F_0\) and exists for all times \(t>0\). Further, \(F_t\) is the graph of a function \(u_t:M\rightarrow \mathbb {R}\) for each t and \(u_t\) converges to \(u_{\infty }(x)=s\), uniformly in all derivatives as \(t\rightarrow \infty \).

In this paper, we prove the theorem for manifolds with one Euclidean end, but the proof can easily be generalised to the case of multiple ends. Similarly, the result also holds for manifolds which are asymptotically locally Euclidean (ALE).

Our solution of the mean curvature flow is constructed as the limit of solutions of Dirichlet problems on large balls. In spatial dimensions \(n\ge 3\), we construct rotationally symmetric stationary solutions of the mean curvature flow in Minkowski space. In asymptotic coordinates, these solutions serve as barriers in order to guarantee that the limit solution stays asymptotic to \(M\times \{s\}\) for positive times, i. e. the solution cannot lift off at spatial infinity, meaning it cannot happen that \(|u_t-s|\) eventually exceeds some positive constant outside every compact subset. The behaviour of the hypersurfaces at spatial infinity has been raised as an open point in [3, Remark 3.2].

Furthermore, we use these barriers at infinity and a carefully chosen quantity that controls the solution in the interior to prove uniform \(C^1\)-bounds, from which we obtain global existence. The convergence behaviour as \(t\rightarrow \infty \) follows from a combination of maximum principle arguments.

We have to exclude the case \(n=2\) because we can neither construct nice barriers at infinity nor use the simple structure of the evolution equation as in the case \(n=1\). Therefore, we get an unsatisfactory gap in the main theorem. However, we conjecture that the assertion of our main result also holds in dimension \(n=2\).

Corollary 1.2

Let \((M\times N,\sigma +\eta )\) be the Riemannian product of an asymptotically flat manifold \((M^n,\sigma )\), \(n\ne 2\), and a Riemannian manifold \((N,\eta )\) of bounded geometry (i.e. with positive injectivity radius, bounded curvature and bounded derivatives of the curvature of all orders) and let \(M\times N\times \mathbb {R}\) be equipped with the Lorentzian product metric

$$\begin{aligned} \sigma + \eta - (\mathrm {d}x^0)^2\,. \end{aligned}$$

Furthermore, let \(F_0\subset M\times N\times \mathbb {R}\) be a hypersurface which is given by the graph of a bounded function \(u_0\in C^{0,1}(M\times N)\) with Lipschitz constant at most \(1-\varepsilon \) for some \(\varepsilon >0\) and such that \(u_0(x,y)\rightarrow s\) uniformly in \(y\in N\) for some \(s\in \mathbb {R}\) as \(x\rightarrow \infty \). Then there exists a solution \(F_t\) to the mean curvature flow which starts at \(F_0\) and exists for all times \(t>0\). Further, \(F_t\) is the graph of a function \(u_t:M\times N\rightarrow \mathbb {R}\) for each t and \(u_t\) converges to \(u_{\infty }(x)=s\), uniformly in all derivatives as \(t\rightarrow \infty \).

The corollary is a consequence of our main result: Pick an initial function \(v_0:M\rightarrow \mathbb {R}\) that satisfies the assumptions of the theorem and in addition fulfils the inequality \(|u_0(x,y)-s|\le v_0(x)\) for all \((x,y)\in M\times N\). Then the mean curvature flow in \(M\times \mathbb {R}\) starting at \({\text {graph }}(v_0)\) (represented by the function \(v_t\)) can be extended to a mean curvature flow on \(M\times N\times \mathbb {R}\) (by letting \(v_t\) be constant in the N-direction). By adding a constant \(\varepsilon >0\) to \(v_t\) and applying a point-picking maximum principle, we see that \(v_t\) serves as a barrier for \(u_t\).

Remark 1.3

If \(N=\mathbb {R}^{n-1}\) (equipped with the flat metric) in Corollary 1.2, one obtains the main theorem under weaker assumptions in the case of the Minkowski space:

  1. (i)

    If M in the main theorem is \(\mathbb {R}^n\) with the flat metric, the assertion also holds for \(u_0\in C^{0,1}(\mathbb {R}^n)\) that does not converge to \(s\in \mathbb {R}\) in all directions but is bounded and satisfies \(u_0(x_1,\ldots ,x_n)\rightarrow s\) uniformly in \(x_2,\ldots ,x_n\) as \(|x_1|\rightarrow \infty \).

  2. (ii)

    Suppose \(F_0\) is a spacelike hypersurface that converges uniformly in one direction to an arbitrary spacelike hyperplane \(\Sigma \subset \mathbb {R}^{n,1}\) in the following sense: There is an isometry \(A\in \mathrm {Iso}(\mathbb {R}^{n,1})\) such that \(A(\Sigma )=\mathbb {R}^{n}\times \left\{ s\right\} \) and \(A(F_0)=\text {graph }(u_0)\) where \(u_0\) satisfies the assumptions of (i). Then the mean curvature flow starting at \(F_0\) exists for all times and converges to \(\Sigma \) uniformly in all derivatives.

From an analytical viewpoint, it would be very interesting to relax the condition on the Lipschitz constant of \(u_0\) so as to allow \(|\nabla u_0|\rightarrow 1\) at infinity. Moreover, it would also be interesting to generalise the result from product spacetimes to stationary spacetimes. We plan to address both issues in future work.

This paper is organised as follows. In Sect. 2, we introduce notation, conventions and setup. In Sect. 3, we construct rotationally symmetric barriers at infinity. In Sect. 4, we describe a general procedure to interpolate between arbitrary initial data on an asymptotically flat manifold and zero initial data on flat space. In Sect. 5, we show that, due to our assumptions, the mean curvature flow cannot lift off at infinity. In Sects. 6 and 7, we prove gradient estimates for the Dirichlet problem at the boundary and in the interior, respectively. In Sect. 8, we conclude with the proof of the main theorem.

2 Preliminaries

2.1 Notation and Conventions

Let \((M,\sigma )\) be an n-dimensional Riemannian manifold. We consider the Lorentzian product manifold

$$\begin{aligned} \bigl (M\times \mathbb {R}, h\bigr )\,, \qquad \text {where} \quad h{:}{=} \sigma - \bigl (\mathrm {d}x^0\bigr )^2 \,, \end{aligned}$$

and where \(x^{0}\) is the coordinate on \(\mathbb {R}\). Locally (i. e. with respect to local coordinates \(\{x^k\}\)), the expression for the metric is given by

$$\begin{aligned} h = \sigma _{ij} \mathrm {d}x^i \mathrm {d}x^j - (\mathrm {d}x^0)^2 \,, \end{aligned}$$

where we use the Einstein summation convention and the indices \(i,j,k,\dots \) will always run from 1 to \(n{:}{=} \dim M\). Associated to the metric h we have its Levi-Civita connection \(^{h}\nabla \), which splits according to the product structure as the direct sum of the Levi–Civita connection \(^{\sigma }\nabla \) of \(\sigma \) and the ordinary derivative on the \(\mathbb {R}\)-factor. Here, \(^{\sigma }\nabla \) has Christoffel symbols given by \(^{\sigma }\!\varGamma _{ij}^k\); in particular, all Christoffel symbols of h involving a 0-index vanish.

A smooth function \(u:M\rightarrow \mathbb {R}\) naturally defines an embedding into the product manifold \(M\times \mathbb {R}\) via

$$\begin{aligned} F:M\rightarrow M\times \mathbb {R}\,, \qquad F(x){:}{=} \bigl ( x, u(x) \bigr ) \,. \end{aligned}$$

Then the graph of the map u is the image of F,

$$\begin{aligned} \text {graph }u {:}{=} F(M) = \bigl \{ \bigl ( x, u(x) \bigr ) : x\in M \bigr \} \subset M\times \mathbb {R}\,. \end{aligned}$$

The induced metric g on M is defined as

$$\begin{aligned} g {:}{=} F^*h \end{aligned}$$

and is locally given by

$$\begin{aligned} g_{ij} = (F^*h)_{ij} = \sigma _{ij} - u_i u_j \,, \end{aligned}$$

where \(u_j {:}{=} \frac{\partial u}{\partial x^j}\) denotes the derivative of the function u with respect to the local coordinates \(\{x^k\}\). The Levi–Civita connection of g will be denoted by \(\nabla \).

For the Ricci curvatures associated to the metrics h and \(\sigma \), we will write \(\text {Ric}_h\) and \(\text {Ric}_{\sigma }\), respectively.

The different norms induced by the metrics will be indicated by an index. For example, we write \(|\cdot |_g\) for the norm induced by the metric g. The gradient associated to a metric will be denoted by the same symbol as the corresponding connection (e. g. \((^{\sigma }\nabla u)^i = \sigma ^{ij} u_j\)). Note that when writing the norm of a derivative, we will use the same metric for the connection and for the norm (unless otherwise stated), e. g. \(|\nabla u|_{\sigma } {:}{=} \left| ^{\sigma }\nabla u\right| _{\sigma }\).

Throughout, we will often use the notation “\(x \lesssim y\)”, which means “there is a universal constant \(C=C(n)>0\) such that \(x\le Cy\)” and “\(x\simeq y\)” means “\(x\lesssim y\) and \(y\lesssim x\)”.

Let us endow \(\mathbb {R}^n\) with its Euclidean metric \(\delta _{ij}\). For the following definition, norms \(|\cdot |\), raising and lowering of indices, and differentiation \(\nabla \) are taken with respect to the Euclidean metric, unless otherwise specified. Moreover, for a function f we denote by \(f_{i}\) and \(f_{ij}\), respectively, its first and second derivatives with respect to the Euclidean metric, i. e. partial derivatives in Cartesian coordinates.

Definition 2.1

A Riemannian manifold \((M,\sigma )\) is called asymptotically flat if there exist a compact set \(K\subset M\) and a Euclidean ball \(B_{R_0}(0)\subset \mathbb {R}^n\) such that \(M{\setminus } K\) is diffeomorphic to \(\mathbb {R}^n{\setminus } B_{R_0}(0)\), and if there is a bounded function \(\omega :(R_0,\infty )\rightarrow \mathbb {R}_+\) with \(\omega (\xi )\rightarrow 0\) as \(\xi \rightarrow \infty \) such that for all \(r\ge R_0\),

$$\begin{aligned} \left| \sigma _{ij}-\delta _{ij}\right|&\lesssim \omega (r), \end{aligned}$$
(2.1)
$$\begin{aligned} \left| \nabla \left( \sigma _{ij}-\delta _{ij}\right) \right| = \left| \nabla \sigma _{ij}\right|&\lesssim \frac{\omega (r)}{r}, \end{aligned}$$
(2.2)

where r is the Euclidean distance to the origin and the diffeomorphism is suppressed in the formulæ.

Remark 2.2

  1. (i)

    Note that asymptotic flatness implies

    $$\begin{aligned} \left| \sigma ^{ij}-\delta ^{ij}\right|&\lesssim \omega (r), \end{aligned}$$
    (2.3)
    $$\begin{aligned} \left| ^{\sigma }\!\varGamma ^k_{ij} \right| = \left| \tfrac{1}{2} \sigma ^{kl}(\sigma _{li,j}+\sigma _{lj,i}-\sigma _{ij,l})\right|&\lesssim \frac{\omega (r)}{r}. \end{aligned}$$
    (2.4)
  2. (ii)

    We use a weaker definition of asymptotic flatness than most other papers, where one often assumes that \(\omega (r)=r^{-\tau }\) for some \(\tau >0\) (e.g. in [3]). In this case, one calls \((M,\sigma )\) asymptotically flat of order \(\tau \).

The diffeomorphism \(\varphi :M{\setminus } K\rightarrow \mathbb {R}^n{\setminus } B_{R_0}(0)\) is also called an asymptotic coordinate system and \(\varphi =(x^1,\ldots ,x^n)\) are called asymptotic coordinates. The Riemannian distance function of \(\sigma \) will be denoted by d and metric balls of radius R around \(x\in M\) will be denoted by \(B_R(x)\). If \(R\ge R_0\), we use the notation

$$\begin{aligned} K_R:= M{\setminus } (\varphi ^{-1}(\mathbb {R}^n{\setminus } B_{R}(0))). \end{aligned}$$

We call \(M{\setminus } K_R=\varphi ^{-1}\bigl (\mathbb {R}^n{\setminus } B_{R}(0)\bigr )\) an asymptotic coordinate neighbourhood.

Finally, we call \(u\in C^{0,1}(M)\) uniformly spacelike if there exists an \(\varepsilon >0\) such that the Lipschitz constant (with respect to \(\sigma \)) is at most \( 1-\varepsilon \). For \(u\in C^1(M)\), this is equivalent to \(|\nabla u|_{\sigma }\le 1-\varepsilon \). The notion of uniform spacelikeness will become clearer in the next subsection.

2.2 Mean Curvature Flow of Spacelike Hypersurfaces in Static Spacetimes

Let \(u:M\times [0,T)\rightarrow \mathbb {R}\) be smooth and set \(u_t(x){:}{=} u(x,t)\). We say that the family \((u_t)_{t\in [0,T)}\) evolves under the mean curvature flow if the family of graphs

$$\begin{aligned} \text {graph }u_t {:}{=} \bigl \{ (x,u(x,t)) : x \in M \bigr \} \subset M\times \mathbb {R}\end{aligned}$$

satisfies the mean curvature flow equation, that is, there exists another family of maps \(F:M\times [0,T)\rightarrow M\times \mathbb {R}\) such that

$$\begin{aligned} {\left\{ \begin{array}{ll} \partial _t F_t(x) = \vec {H}(x,t) \quad \forall x\in M \,, \\ F_0(x) = (x,u_0(x)) \,, \end{array}\right. } \end{aligned}$$
(2.5)

and \(\text {graph }u_t=F_t(M)\). Here, \(\vec {H}(x,t)\) denotes the mean curvature vector of \(F_t(M)\) at \(F_t(x)\). To get a parabolic equation, we have to assume that the hypersurfaces \(F_t(M)\) are spacelike. This is the case if and only if \(|\nabla u|_{\sigma }<1\) everywhere. Under these assumptions, the mean curvature flow system may equivalently be written entirely in terms of u as

$$\begin{aligned} \partial _t u = \sqrt{1-| \nabla u |_{\sigma } ^2} \, \mathrm {div}_{\sigma } \left( \frac{^{\sigma }\nabla u}{\sqrt{1-| \nabla u|_{\sigma }^2}} \right) \,. \end{aligned}$$
(2.6)

Here, \(\mathrm {div}_{\sigma }\) denotes the divergence with respect to the metric \(\sigma \). Using that the inverse metric is given by

$$\begin{aligned} g^{ij}(\sigma ,\nabla u) =\sigma ^{ij} + \frac{\sigma ^{ik} \sigma ^{jl} u_k u_l}{1-\sigma ^{kl}u_k u_l} \,, \end{aligned}$$

we can also write

$$\begin{aligned} \partial _t u=g^{ij}(\sigma ,\nabla u) \, ^{\sigma }\nabla ^2_{ij} u \,. \end{aligned}$$
(2.7)
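
As a quick consistency check (not part of the original text), the following small sympy sketch verifies symbolically, in a low dimension, that the displayed matrix is indeed the inverse of \(g_{ij}=\sigma _{ij}-u_iu_j\); this is the Sherman–Morrison identity and justifies passing from (2.6) to the trace form (2.7).

```python
# Minimal sympy sketch (not from the paper): check that
# g^{ij}(sigma, du) = sigma^{ij} + sigma^{ik} sigma^{jl} u_k u_l / (1 - sigma^{kl} u_k u_l)
# is the inverse of g_ij = sigma_ij - u_i u_j.
import sympy as sp

n = 2  # small dimension to keep the symbolic computation cheap; the identity is general
sigma = sp.Matrix(n, n, lambda i, j: sp.Symbol(f's{min(i, j)}{max(i, j)}'))  # symmetric sigma_ij
du = sp.Matrix([sp.Symbol(f'u{i}') for i in range(n)])                        # the covector u_i

sigma_inv = sigma.inv()                               # sigma^{ij}
g_lower = sigma - du*du.T                             # g_ij = sigma_ij - u_i u_j
du_up = sigma_inv*du                                  # u^i = sigma^{ij} u_j
denom = 1 - (du.T*sigma_inv*du)[0, 0]                 # 1 - sigma^{kl} u_k u_l
g_upper = sigma_inv + (du_up*du_up.T)/denom           # the claimed inverse metric

print((g_upper*g_lower - sp.eye(n)).applyfunc(sp.simplify))  # expected: zero matrix
```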

Lemma 2.3

The evolution equation for u is given by

$$\begin{aligned} \left( \frac{d}{dt} - \Delta \right) u = 0 \,, \end{aligned}$$

where \(\frac{d}{dt}\) denotes the total time derivative and \(\Delta \) is the Laplacian with respect to the induced metric \(g\).

Proof

In [7, Lemma 3.5] (in the notation of that paper), we have the formula

$$\begin{aligned} \left( \frac{d}{dt} - \Delta \right) u = e^{-\psi }v^{-1}f-e^{-\psi }g^{ij}\bar{h}_{ij}+\bar{\varGamma }_{00}^{0}|D u|^2+2\bar{\varGamma }_{0i}^0u^i, \end{aligned}$$

for the mean curvature flow

$$\begin{aligned} \partial _t F(x,t) = (\vec {H}-f\nu )(x,t) \end{aligned}$$

with a prescribed function f in a general Lorentzian manifold. In our setting, \(f=0\). The Christoffel symbols appearing above vanish because \(M\times \mathbb {R}\) carries a product metric. For the same reason, the second fundamental form \(\bar{h}\) of the hypersurfaces \(M\times \left\{ s\right\} \) vanishes. Thus, the whole right-hand side vanishes and we are left with the assertion of the lemma. \({\square }\)

Let us set

$$\begin{aligned} v {:}{=} \frac{1}{\sqrt{1-|\nabla u|^2_{\sigma }}} = - \left\langle \nu , \frac{\partial }{\partial {x_0}} \right\rangle \,, \quad \text {where} \quad \nu {:}{=} \frac{\bigl (^{\sigma }\nabla u, 1\bigr )}{\sqrt{1-|\nabla u|^2_{\sigma }}} \end{aligned}$$
(2.8)

is the unit normal vector of the graph.

Lemma 2.4

Assume that \(\text {graph }u_t\) is strictly spacelike. Then the relations

$$\begin{aligned} \frac{1}{v^2} \sigma&\le g \le \sigma \,, \\ 1 - \frac{1}{v^2}&= |\nabla u|^2_{\sigma } \le |\nabla u|^2 = v^2-1 \end{aligned}$$

hold.

Proof

The first relation follows by noting that for all vectors \(w\in TM\) we have

$$\begin{aligned} \frac{1}{v^2} = 1 - |\nabla u|^2_{\sigma } \,, \qquad |\nabla u|^2_{\sigma } \sigma (w,w) \ge (\partial _w u)^2 \end{aligned}$$

and \(g(w,w)=\sigma (w,w)-(\partial _wu)^2\le \sigma (w,w)\). For the second relation, we note that \(\sigma ^{-1}\le g^{-1}\le v^2 \sigma ^{-1}\) and calculate

$$\begin{aligned} |\nabla u|^2_{\sigma } = \sigma ^{ij} u_i u_j \le g^{ij} u_i u_j = |\nabla u|^2 \,. \end{aligned}$$

For the remaining equality, we calculate

$$\begin{aligned} |\nabla u|^2&= g^{ij} u_i u_j = \left( \sigma ^{ij} + \frac{\sigma ^{ik} \sigma ^{jl} u_k u_l}{1-\sigma ^{kl} u_k u_l} \right) u_i u_j \\&= |\nabla u|^2_{\sigma } + \frac{|\nabla u|^4_{\sigma }}{1-|\nabla u|_{\sigma }^2} = \frac{|\nabla u|_{\sigma }^2}{1-|\nabla u|_{\sigma }^2} = v^2-1 \,. \end{aligned}$$

\({\square }\)
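
The identities of Lemma 2.4 can also be spot-checked numerically; the following short script (illustrative only, with randomly generated data) does this for a random positive definite \(\sigma \) and a strictly spacelike gradient.

```python
# Numeric spot check (illustrative, random data) of the identities in Lemma 2.4:
# 1 - 1/v^2 = |du|_sigma^2 and |du|_g^2 = v^2 - 1 for g_ij = sigma_ij - u_i u_j.
import numpy as np

rng = np.random.default_rng(0)
n = 4
A = rng.normal(size=(n, n))
sigma = A @ A.T + n*np.eye(n)                        # a random positive definite metric
du = rng.normal(size=n)                              # a covector u_i ...
du *= 0.9/np.sqrt(du @ np.linalg.solve(sigma, du))   # ... scaled so |du|_sigma^2 = 0.81 < 1

grad_sigma_sq = du @ np.linalg.solve(sigma, du)      # sigma^{ij} u_i u_j
v = 1.0/np.sqrt(1.0 - grad_sigma_sq)
g = sigma - np.outer(du, du)                         # induced metric g_ij (positive definite)
grad_g_sq = du @ np.linalg.solve(g, du)              # g^{ij} u_i u_j

print(np.isclose(1.0 - 1.0/v**2, grad_sigma_sq))     # True
print(np.isclose(grad_g_sq, v**2 - 1.0))             # True
```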

3 Construction of a Barrier

For this section we assume \(n\ge 3\). In the cases \(n=1,2\), the graphical maximal surface described below is unbounded. Therefore, these cases are slightly more complicated and will be discussed in separate remarks in later sections.

We wish to find a rotationally symmetric positive function b defined on \(\mathbb {R}^n {\setminus } B_{r_0}(0)\) (where \(\mathbb {R}^n\) is equipped with an asymptotically flat metric \(\sigma \)) for some \(r_0>0\), such that \(b(x)\gg 1\) for \(|x|=r_0\), \(b(x)\rightarrow 0\) for \(|x|\rightarrow \infty \), and such that b is a static supersolution of (2.7), i. e.

$$\begin{aligned} g^{ij}(\sigma ,\nabla b) \, ^{\sigma }\nabla ^2_{ij} b \equiv \left( \sigma ^{ij} + \frac{\sigma ^{ik} \sigma ^{jl} b_k b_l}{1-\sigma ^{kl}b_k b_l} \right) \left( b_{ij}- ^{\sigma }\!\varGamma ^k_{ij} b_k\right) \le 0. \end{aligned}$$
(3.1)

Let us first consider \(\sigma _{ij}(x)=\delta _{ij}\) and the case of equality in (3.1). That is, we consider the equation for maximal spacelike hypersurfaces in Minkowski space. We denote the function corresponding to b in this case by \(\beta \). We choose \(\beta \) to be rotationally symmetric and by abuse of notation we write \(\beta (x)\equiv \beta (r)\). Also setting \(\beta ^i = \beta _k \delta ^{ki}\), (3.1) becomes

$$\begin{aligned} 0&= \left( \delta ^{ij}+ \frac{ \beta ^i \beta ^j}{1-\beta ^k\beta _k} \right) \beta _{ij} \nonumber \\&= \left( \delta ^{ij} + \frac{(\beta ')^2}{1-(\beta ')^2}\frac{x^i x^j}{r^2}\right) \left( \beta '' \frac{x_i x_j}{r^2} + \beta ' \left( \frac{\delta _{ij}}{r}-\frac{x_i x_j}{r^3}\right) \right) \nonumber \\&= \frac{\beta ''}{1-(\beta ')^2} + \beta ' \frac{n-1}{r}. \end{aligned}$$
(3.2)

Rewriting this equation in the form

$$\begin{aligned} \frac{\beta ''}{(1-\beta ')(1+\beta ')\beta '} = -\frac{n-1}{r} \end{aligned}$$

makes it easy to integrate under the condition \(-1<\beta '<0\). For any \(c>0\), we obtain a solution of (3.2) of the form

$$\begin{aligned} \beta ' = -\left( 1+c\,r^{2n-2}\right) ^{-1/2}. \end{aligned}$$
(3.3)

We remark that \(\beta '\) is integrable at infinity only for \(n\ge 3\) and a solution with \(\beta (r)\rightarrow 0\) for \(r\rightarrow \infty \) is available only in this case.
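
The following sympy sketch (illustrative, not part of the argument) confirms symbolically that (3.3) solves the radial equation (3.2).

```python
# Sympy check: beta' = -(1 + c r^{2n-2})^{-1/2} from (3.3) satisfies the radial
# equation (3.2), i.e. beta''/(1-(beta')^2) + (n-1) beta'/r = 0.
import sympy as sp

r, c, n = sp.symbols('r c n', positive=True)
beta_p = -(1 + c*r**(2*n - 2))**sp.Rational(-1, 2)
beta_pp = sp.diff(beta_p, r)

residual = beta_pp/(1 - beta_p**2) + (n - 1)*beta_p/r
print(sp.simplify(residual))   # expected: 0
```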

Lemma 3.1

There exists \(r_1>0\), depending only on \(\sigma \), such that for any \(r_0 \ge r_1\) there exists a rotationally symmetric function \(b:\mathbb {R}^n{\setminus } B_{r_0}(0) \rightarrow \mathbb {R}\) with the following properties:

  1. (i)

    \(b(x) \simeq r_0^{n-\frac{3}{2}}\, |x|^{-(n-\frac{5}{2})}\) and

  2. (ii)

    \(g^{ij}(\sigma ,\nabla b) ^{\sigma }\nabla ^2_{ij}b \le 0\), i. e. b is a static supersolution.

Proof

In order to construct a supersolution, we are going to tweak the exponent of r in (3.3) to make \(\beta \) into a proper supersolution b strong enough to survive a change from the Euclidean metric to \(\sigma \). For \(r_0>0\) (to be chosen later) we consider the rotationally symmetric function b(r) given on \(\mathbb {R}^n{\setminus } B_{r_0}(0)\) by the conditions \(b(r)\rightarrow 0\) for \(r\rightarrow \infty \) and

$$\begin{aligned} b' = -\left( 1+\left( \frac{r}{r_0}\right) ^{2n-3}\right) ^{-1/2}. \end{aligned}$$
(3.4)

The second derivative is given by

$$\begin{aligned} b'' = \left( n-\tfrac{3}{2}\right) \frac{\frac{1}{r_0}\left( \frac{r}{r_0}\right) ^{2n-4}}{\left( 1+ \left( \frac{r}{r_0}\right) ^{2n-3}\right) ^{3/2}}. \end{aligned}$$
(3.5)

We will also need

$$\begin{aligned} 1-(b')^2 = 1- \frac{1}{1+\left( \frac{r}{r_0}\right) ^{2n-3}} = \frac{\left( \frac{r}{r_0}\right) ^{2n-3}}{1+\left( \frac{r}{r_0}\right) ^{2n-3}}. \end{aligned}$$
(3.6)

Now, instead of fulfilling (3.2), b is a strict supersolution:

$$\begin{aligned} \left( \delta ^{ij}+ \frac{ b^i b^j}{1-b^kb_k} \right) b_{ij}&= \frac{b''}{1-(b')^2} + b' \frac{n-1}{r} \nonumber \\&= (n-\tfrac{3}{2}) \frac{r^{-1}}{\left( 1+\left( \frac{r}{r_0}\right) ^{2n-3}\right) ^{1/2}} +(n-1)\frac{b'}{r} \nonumber \\&= \frac{1}{2} \frac{b'}{r} \simeq - r^{-1} \left( \frac{r}{r_0}\right) ^{-n+\frac{3}{2}} \,. \end{aligned}$$
(3.7)
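
The computation (3.4)–(3.7) can likewise be verified symbolically; the sketch below (illustrative only) checks that the flat radial operator applied to b equals \(\tfrac{1}{2}\,b'/r\).

```python
# Sympy check of (3.4)-(3.7): for b' = -(1 + (r/r0)^{2n-3})^{-1/2}, the flat radial
# operator b''/(1-(b')^2) + (n-1) b'/r equals b'/(2r), so b is a strict supersolution.
import sympy as sp

r, r0, n = sp.symbols('r r_0 n', positive=True)
b_p = -(1 + (r/r0)**(2*n - 3))**sp.Rational(-1, 2)
b_pp = sp.diff(b_p, r)

lhs = b_pp/(1 - b_p**2) + (n - 1)*b_p/r
print(sp.simplify(lhs - b_p/(2*r)))   # expected: 0
```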

We shall see that the left-hand sides of (3.1) and (3.7) differ by terms of lower order than the right-hand side of (3.7). In this way we will obtain (3.1). Before we continue let us remark the following:

$$\begin{aligned} |\nabla b|&\simeq \left( \frac{r}{r_0}\right) ^{-n+\frac{3}{2}}, \end{aligned}$$
(3.8)
$$\begin{aligned} |\nabla ^2 b|&\simeq r^{-1}\left( \frac{r}{r_0}\right) ^{-n+\frac{3}{2}}. \end{aligned}$$
(3.9)

Furthermore we note \(1-\delta ^{kl}b_k b_l \ge \frac{1}{2}\) and by assuming that \(r_0\) is sufficiently large we can infer by (2.1) that \(1-\sigma ^{kl}b_kb_l \ge \frac{1}{3}\). This allows us to estimate in the following way, using (2.3) and \(|\nabla b|\le 1\):

$$\begin{aligned} \left| \frac{1}{1-\sigma ^{kl}b_kb_l} - \frac{1}{1-\delta ^{kl}b_kb_l}\right| = \left| \frac{(\sigma ^{kl}-\delta ^{kl})b_kb_l}{(1-\sigma ^{kl}b_kb_l)(1-\delta ^{kl}b_kb_l)} \right| \lesssim \omega (r). \end{aligned}$$
(3.10)

It is now straightforward to estimate the error term from \(g^{ij}\):

$$\begin{aligned} \begin{aligned}&\left| g^{ij}(\sigma ,\nabla b)-g^{ij}(\delta ,\nabla b) \right| \\&\quad = \left| \sigma ^{ij} + \frac{\sigma ^{ik}\sigma ^{jl} b_kb_l}{1-\sigma ^{kl}b_kb_l} - \delta ^{ij} - \frac{\delta ^{ik}\delta ^{jl} b_kb_l}{1-\delta ^{kl}b_kb_l}\right| \\&\quad \le \left| \sigma ^{ij}-\delta ^{ij}\right| + \left| \frac{\sigma ^{ik}\sigma ^{jl}-\delta ^{ik}\delta ^{jl}}{1-\sigma ^{kl}b_kb_l} \right| + \left| \delta ^{ik}\delta ^{jl} \right| \left| \frac{1}{1-\sigma ^{kl}b_kb_l}-\frac{1}{1-\delta ^{kl}b_kb_l}\right| \\&\quad \lesssim \omega (r). \end{aligned} \end{aligned}$$
(3.11)

We can finally compare the operators for the different metrics:

$$\begin{aligned} \begin{aligned}&\left| g^{ij}(\sigma ,\nabla b) ^{\sigma }\nabla ^2_{ij}b - g^{ij}(\delta ,\nabla b) b_{ij} \right| \\&\quad \le \left| g^{ij}(\sigma ,\nabla b) - g^{ij}(\delta ,\nabla b) \right| |b_{ij}| + \left| g^{ij}(\sigma ,\nabla b)\right| \left| ^{\sigma }\!\varGamma ^k_{ij}b_k\right| \\&\quad \lesssim \frac{\omega (r)}{r} \left( \frac{r}{r_0}\right) ^{-n+\frac{3}{2}} \end{aligned} \end{aligned}$$
(3.12)

by (3.11), (3.9), (3.8), (2.4), and the fact that \(\left| g^{ij}(\sigma ,\nabla b)\right| \) is bounded (by (2.3) and \(r_0\gg 1\)). Since by (3.7)

$$\begin{aligned} g^{ij}(\delta ,\nabla b) b_{ij}= \left( \delta ^{ij}+ \frac{ b^i b^j}{1-b^kb_k} \right) b_{ij} \simeq - r^{-1} \left( \frac{r}{r_0}\right) ^{-n+\frac{3}{2}} \,, \end{aligned}$$
(3.13)

we use (3.12) for \(1\ll r_0 \le r\) to conclude the desired result (3.1), i. e. b is a supersolution. As we choose b such that \(b(r)\rightarrow 0\) for \(r\rightarrow \infty \) we find

$$\begin{aligned} b(r) = -\int \limits _{r}^{\infty } b'(s) \, \mathrm {d}s \simeq \int \limits _{r}^{\infty } \left( \frac{s}{r_0}\right) ^{-n+\frac{3}{2}} \mathrm {d}s \simeq r_0^{n-\frac{3}{2}}\, r^{-n+\frac{5}{2}} \end{aligned}$$
(3.14)

which finishes the proof. \({\square }\)
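
The decay rate in (3.14) can be illustrated numerically; the following sketch (assuming NumPy and SciPy are available, and fixing \(n=3\) and \(r_0=10\) purely for illustration) computes \(b(r)\) by quadrature and compares it with \(r_0^{n-3/2}\,r^{-n+5/2}\).

```python
# Numeric illustration (not from the paper) of the decay rate (3.14) for n = 3:
# b(r) = \int_r^\infty (1 + (s/r0)^{2n-3})^{-1/2} ds is comparable to r0^{3/2} r^{-1/2}.
import numpy as np
from scipy.integrate import quad

n, r0 = 3, 10.0
integrand = lambda s: (1.0 + (s/r0)**(2*n - 3))**(-0.5)

for r in [r0, 10*r0, 100*r0, 1000*r0]:
    b, _ = quad(integrand, r, np.inf)
    print(f"r = {r:8.0f}   b(r)/(r0^(3/2) r^(-1/2)) = {b/(r0**1.5 * r**-0.5):.3f}")
# The ratio stays bounded between positive constants (it tends to 2 as r -> infinity).
```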

To make use of the supersolution b as a barrier, we need to make sure that a solution cannot touch b at \(r=r_0\). But this is ensured by the lemma, since b can be made arbitrarily large at \(r=r_0\) by enlarging \(r_0\).

Corollary 3.2

For any \(h>0\) and for any \(r_1>0\) there exists \(r_0\ge r_1\), depending only on \(\sigma \), \(h\), and \(r_1\), and a rotationally symmetric function \(b:\mathbb {R}^n{\setminus } B_{r_0}(0) \rightarrow \mathbb {R}\) with the following properties:

  1. (i)

    \(b>0\),

  2. (ii)

    \(b(x)\rightarrow 0\) for \(|x|\rightarrow \infty \),

  3. (iii)

    \(b(x)\ge h\) for \(|x|=r_0\), and

  4. (iv)

    \(g^{ij}(\sigma ,\nabla b) ^{\sigma }\nabla ^2_{ij}b \le 0\), i. e. b is a static supersolution.

Remark 3.3

The following observations are crucial:

  1. (i)

    Because the operator only depends on derivatives, \(b+\varepsilon \) is a supersolution for any \(\varepsilon \in \mathbb {R}\).

  2. (ii)

    Because of \(g^{ij}(\sigma , -\nabla b)= g^{ij}(\sigma , \nabla b)\), \(-b\) is a subsolution and yields a barrier from below.

4 Interpolation of Initial Data

Proposition 4.1

Let \((M,\sigma )\) be an asymptotically flat Riemannian manifold and \(R_0>0\) so large that we have an asymptotic coordinate system

$$\begin{aligned} \varphi :M{\setminus } K\rightarrow \mathbb {R}^n{\setminus } B_{R_0}(0). \end{aligned}$$

Let \(u_0\in C^{0,1}(M)\) be uniformly spacelike with Lipschitz constant at most \(1-\varepsilon \). Then for every \(R_2>R_1\ge R_0\), there exists a metric \(\widetilde{\sigma }\) and a function \(\widetilde{u}_0\) such that

$$\begin{aligned} \widetilde{\sigma }_{ij} = \sigma _{ij} \quad&\text {and} \quad \widetilde{u}_0 = u_0 \qquad \text {on} \;\; K_{R_1},\\ \widetilde{\sigma }_{ij} = \delta _{ij} \quad&\text {and} \quad \widetilde{u}_0 = 0 \qquad \text {on} \;\; M{\setminus } K_{R_2} \end{aligned}$$

and such that \(\widetilde{u}_0\) is uniformly spacelike with respect to \(\widetilde{\sigma }\), with the same constant \(\varepsilon >0\) as above.

Proof

For the proof, we may assume that \(u_0\in C^1(M)\) since the Lipschitz case follows immediately from an approximation argument. For notational convenience, set \(S_1{:}{=} R_1\) and \(S_4{:}{=} R_2\). Pick constants \(S_2,S_3\) such that \(S_1<S_2<S_3<S_4\) and choose smooth cutoff functions \(\psi _i:M\rightarrow [0,1]\), \(i=1,2,3\) such that

$$\begin{aligned} \psi _i\equiv 1 \quad \text { on }K_{S_i},\qquad \psi _i\equiv 0 \quad \text { on }M{\setminus } K_{S_{i+1}}. \end{aligned}$$

Now fix \(\lambda > 1\) and let

$$\begin{aligned} \widehat{\sigma }_{ij}&= \psi _1 \sigma _{ij} + (1-\psi _1) \lambda \sigma _{ij} \,, \\ \widetilde{\sigma }_{ij}&= \psi _3 \widehat{\sigma }_{ij} + (1-\psi _3) \delta _{ij} = \psi _3 (\psi _1 \sigma _{ij} + (1-\psi _1) \lambda \sigma _{ij}) + (1-\psi _3) \delta _{ij} \end{aligned}$$

and

$$\begin{aligned} \widetilde{u}_0 = \psi _2 u_0 \,. \end{aligned}$$

Then we have \( \widetilde{\sigma }_{ij} = \sigma _{ij}\) and \(\widetilde{u}_0 = u_0 \) on \(K_{S_1}\) and \(\widetilde{\sigma }_{ij} = \delta _{ij} \) and \(\widetilde{u}_0 = 0\) outside of \(K_{S_4}\). Because \(\widetilde{u}_0\) vanishes even outside of \(K_{S_3}\), it remains to prove that \(\widetilde{u}_0\) is uniformly spacelike on \(K_{S_3}{\setminus } K_{S_1}\). On \(K_{S_2}{\setminus } K_{S_1}\), we have

$$\begin{aligned} \widetilde{u}_0 = u_0 \quad \text {and} \quad \widetilde{\sigma }_{ij} = \widehat{\sigma }_{ij} \,, \end{aligned}$$

and since \(\widehat{\sigma }_{ij} = f \sigma _{ij}\) with a function \(f \ge 1\), we have

$$\begin{aligned} |\nabla \widetilde{u}_0|^2_{\widetilde{\sigma }}&= |\nabla u_0|^2_{\widehat{\sigma }} = \frac{1}{f} |\nabla u_0|^2_{\sigma }\le (1-\varepsilon )^2 \end{aligned}$$

in this region. Finally, let us check the region \(K_{S_3}{\setminus } K_{S_2}\). There we have \( \widetilde{\sigma }_{ij} = \lambda \sigma _{ij}\) so that

$$\begin{aligned} |\nabla \widetilde{u}_0|^2_{\widetilde{\sigma }} = \frac{1}{\lambda } |\nabla \widetilde{u}_0|^2_{{\sigma }}\le \frac{2}{\lambda } \Bigl (u_0^2 |\nabla \psi _2|^2_{\sigma } + \psi _2^2|\nabla u_0|^2_{\sigma }\Bigr )\le (1-\varepsilon )^2 \,, \end{aligned}$$

provided that \(\lambda > 1\) was chosen large enough depending on \(u_0,\varepsilon \) and \(\psi _2\). This finishes the proof. \({\square }\)
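
The quantitative step in the last region can be illustrated in a one-dimensional toy model; the sketch below (with hypothetical data \(u_0\), \(\psi _2\), \(S_2\), \(S_3\) chosen only for illustration) computes a sufficient value of \(\lambda \) from the displayed bound.

```python
# 1D toy illustration (hypothetical data, flat metric) of the choice of lambda in the
# proof of Proposition 4.1: on the transition region one needs
# |d(psi2*u0)/dx|^2 / lambda <= (1-eps)^2, and 2*(u0^2*|psi2'|^2 + psi2^2*|u0'|^2)
# gives a sufficient lambda.
import numpy as np

eps = 0.1
u0 = lambda x: 0.8*np.cos(0.9*x)        # hypothetical datum with Lipschitz constant 0.72 < 1 - eps
S2, S3 = 10.0, 12.0                     # boundaries of the transition region

def psi2(x):                            # smooth cutoff: 1 for x <= S2, 0 for x >= S3
    t = np.clip((x - S2)/(S3 - S2), 0.0, 1.0)
    return 1.0 - (3*t**2 - 2*t**3)

x = np.linspace(S2, S3, 2001)
h = x[1] - x[0]
d = lambda f: np.gradient(f(x), h)      # numerical derivative on the transition region

bound = 2*(u0(x)**2*d(psi2)**2 + psi2(x)**2*d(u0)**2)
lam = max(1.0, bound.max()/(1 - eps)**2)
check = (d(lambda y: psi2(y)*u0(y))**2).max()/lam
print("sufficient lambda:", round(lam, 3))
print("max |(psi2*u0)'|^2/lambda =", round(check, 3), "<= (1-eps)^2 =", (1 - eps)**2)
```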

Remark 4.2

Note that in this construction the geometry of \(\tilde{\sigma }\) can be chosen uniformly bounded (depending on \(\sup _M |u_0|\), \(\varepsilon \) and the difference \(R_2-R_1\)). Note further that \(\tilde{u}_0(x)\) has the same sign as \(u_0(x)\) for all \(x\in M\) and that \(|\tilde{u}_0|\le |u_0|\).

5 No Lift-Off

In the following (Lemma 5.1 and Remark 5.2) we will show that our constructed solution has no lift-off behaviour at infinity. Because our construction involves approximation by compact problems, the proof is an easy maximum principle argument in this case. On the other hand, excluding lift-off behaviour directly for any given solution needs an additional argument to compensate for the non-compactness and is therefore treated afterwards (Theorem 5.4).

Lemma 5.1

Let \(\sigma \) be an asymptotically flat metric on \(\mathbb {R}^n\) \((n\ge 3)\) and \(u_0\in C^{0,1}(\mathbb {R}^n)\) a function which is uniformly spacelike and satisfies \(|u_0(x)|\rightarrow 0\) as \(|x|\rightarrow \infty \). Let \(R>0\) and \(u_{0,R}\) be a function constructed from \(u_0\) according to Proposition 4.1 that vanishes outside of \(B_R(0)\) and is equal to \(u_0\) on a smaller ball. Finally, let \(T\in (0,\infty ]\) and assume that \(u_R\in C^{2;1}\big (B_R(0)\times (0,T)\big )\cap C^0\big (\overline{B_R(0)}\times [0,T]\big )\) is a solution of the initial boundary value problem

$$\begin{aligned} {\left\{ \begin{array}{ll} \partial _t u_R(x,t)=g^{ij}(\sigma ,\nabla u_R) ^{\sigma }\nabla ^2_{ij} u_R(x,t) &{} \text {for } (x,t) \in B_R(0)\times (0,T),\\ u_R(x,0) = u_{0,R}(x) &{} \text {for } x\in B_R(0),\\ u_R(x,t) = 0 &{} \text {for } (x,t) \in \partial B_R(0)\times [0,T], \end{array}\right. } \end{aligned}$$
(5.1)

which is extended by zero for \(|x|>R\). Then for every \(\varepsilon >0\), there is a function \(b_\varepsilon :\mathbb {R}^n\rightarrow \mathbb {R}_+\), depending only on \(\sigma \) (through (2.1) and (2.2)), \(\varepsilon \), and \(u_0\), such that

  1. (i)

    \(|u_R(x,t)| \le b_\varepsilon (x)\) for \((x,t)\in \mathbb {R}^n\times (0,T)\) and

  2. (ii)

    \(b_\varepsilon (x)\rightarrow \varepsilon \text { for } |x| \rightarrow \infty .\)

Remark 5.2

Let \(b(x){:}{=} \inf _{\varepsilon >0}b_{\varepsilon }(x)\), which depends on \(\sigma \) and \(u_0\). Then

  1. (i)

    \(|u_R(x,t)| \le b(x)\) for \((x,t)\in \mathbb {R}^n\times (0,T)\) and

  2. (ii)

    \(b(x)\rightarrow 0 \text { for } |x| \rightarrow \infty .\)

Proof of Lemma 5.1

The maximum principle implies \(\sup _{\mathbb {R}^n\times [0,T)} u_R = \sup _{\mathbb {R}^n} u_{0,R}\). For \(\varepsilon >0\), there exists \(r_1=r_1(\varepsilon )>0\) such that \(|u_0(x)|<\varepsilon \) for \(|x|\ge r_1(\varepsilon )\). Using \(h{:}{=} \sup _{\mathbb {R}^n\times [0,T)}u_R\), Corollary 3.2 gives an \(r_0\ge r_1\) and a function \(b:\mathbb {R}^n{\setminus } B_{r_0}(0)\rightarrow \mathbb {R}_+\) with the properties listed there. Instead of b we will use \(b_\varepsilon {:}{=} b+\varepsilon \) (cf. Remark 3.3). Finally, we extend \(b_\varepsilon \) to \(B_{r_0}(0)\) by the constant \(h+\varepsilon \). The asserted property (ii) is readily seen.

For \(|x|< r_0\), we have the inequality \(u_R(x,t) \le h\le b_\varepsilon (x)\) and for \(|x|\ge R\), we have \(|u_R(x,t)|=0<b_\varepsilon (x)\), so it only remains to prove assertion (i) in the region \(B_R(0){\setminus } B_{r_0}(0)\) (if non-empty). To achieve this we use the comparison principle. Because of \(r_0 \ge r_1\) we have by assumption \(b_\varepsilon (x)>\varepsilon \ge u_R(x,0)\) in \(B_R(0){\setminus } B_{r_0}(0)\). By the definition of h we have \(b_\varepsilon (x)\ge h+\varepsilon > u_R(x,t)\) for \(|x|=r_0\) and also \(b_\varepsilon (x)>0=u_R(x,t)\) for \(|x|=R\). So we have \(u_R(x,t)< b_{\varepsilon }(x)\) on the parabolic boundary of \(\bigl (B_R(0){\setminus } B_{r_0}(0)\bigr ) \times (0,T)\). Note that \(b_\varepsilon \) is a supersolution in the viscosity sense as a minimum of two supersolutions. Thus, the comparison principle implies \(u_R(x,t)<b_\varepsilon (x)\) on \(B_R(0){\setminus } B_{r_0}(0)\).

An analogous argument shows \(u_R(x,t)>-b_\varepsilon (x)\) on \(B_R(0){\setminus } B_{r_0}(0)\) (cf. Remark 3.3). \({\square }\)

Remark 5.3

The assertions of Lemma 5.1, and therefore of Remark 5.2, also hold when replacing \((\mathbb {R}^n,\sigma )\) by an asymptotically flat manifold and \(B_R(0)\) by \(K_R\). In this case, one uses constant barriers in the interior of the asymptotic neighbourhood.

Later on, we will show that the \(u_R\) converge (for \(R\rightarrow \infty \)) to a solution of the initial value problem on the whole manifold that preserves the asymptotic behaviour. In the following, we will show that this behaviour is preserved for any solution of the initial value problem. Uniqueness of such a solution is shown by using the comparison principle.

Theorem 5.4

Let \(T\in (0,\infty ]\) and let \(\sigma \) be an asymptotically flat metric on \(\mathbb {R}^n\) \((n\ge 3)\). Suppose \(u_0\in C^{0,1}(\mathbb {R}^n)\) is uniformly spacelike and satisfies

$$\begin{aligned} u_0(x)\rightarrow 0 \quad \text {for} \quad |x|\rightarrow \infty . \end{aligned}$$
(5.2)

Let \(u\in C^{2;1}\bigl (\mathbb {R}^n\times (0,T)\bigr )\cap C^0\bigl (\mathbb {R}^n\times [0,T)\bigr )\) be a solution of (2.7) with \(u(\cdot ,0)=u_0\). Further suppose that the solution is uniformly spacelike, i. e. there exists \(\mu >0\) such that \(|\nabla u|_\sigma ^2\le 1- \mu \). Then for any \(\varepsilon >0\) and any \(t^*\in (0,T)\), there exists \(R>0\), such that

$$\begin{aligned} |u(x,t)| \le \varepsilon \qquad \text {for all} \quad |x|\ge R \quad \text {and} \quad t\in (0,t^*] \,. \end{aligned}$$
(5.3)

Moreover, this convergence is uniform in t and in u in the following sense. There is a function \(b:\mathbb {R}^n\rightarrow \mathbb {R}_+\), dependent only on \(u_0\) and \(\sigma \), such that

  1. (i)

    \(|u(x,t)|\le b(x)\) for \((x,t)\in \mathbb {R}^n\times (0,T)\) and

  2. (ii)

    \(b(x)\rightarrow 0\) for \(|x|\rightarrow \infty \).

Remark 5.5

The exact dependence on \(u_0\) in Theorem 5.4 is given through \(\sup _{\mathbb {R}^n} u_0\) and the function \(r_1(\varepsilon ) {:}{=}\sup \{ |x| : |u_0(x)|>\varepsilon \}\). The dependence on \(\sigma \) is given through (2.1) and (2.2).

Remark 5.6

The barrier construction explained in the proof below also works for dimensions \(n=1,2\). As a consequence, (5.3) also holds in these dimensions but we do not get (i) and (ii) in Theorem 5.4 because Lemma  5.1 and Remark 5.2 cannot be used in these cases.

Proof of Theorem 5.4

The essential part of the proof is to establish (5.3). Once this is shown, we consider \(b_{\varepsilon }\) as defined in the proof of Lemma 5.1. We deduce from (5.3) that for each \(t^*\in (0,T)\) and each \(\zeta \) with \(0<\zeta <\varepsilon \) there exists \(R\ge 1\), such that \(b_{\varepsilon } - u \ge \zeta \) in \(\bigl (\mathbb {R}^n {\setminus } B_R(0) \bigr )\times [0,t^*)\). Hence, we can apply the maximum principle, although \(\mathbb {R}^n\) is not compact, and deduce that \(b_{\varepsilon }\ge u\). Then we take the infimum over \(\varepsilon >0\) as in Remark 5.2 and arrive at the second assertion about the barrier function.

Now we turn to the technical details. First, we will always work far out, where \(\sigma \) is close to the Euclidean metric. In particular, after possibly decreasing \(\mu >0\), we may assume that

$$\begin{aligned} |\nabla u|^2\le 1-\mu \end{aligned}$$
(5.4)

holds in the region we consider. Now we define the barrier function. Let \(\alpha >0\). For \(t_0<-1\) and \(x_0\in \mathbb {R}^n\) we define

$$\begin{aligned} \widehat{b}(x,t)\equiv \widehat{b}_{x_0,t_0}(x,t)&{:}{=} \sqrt{2n(t-t_0)+|x-x_0|^2} + \alpha \, t, \\&\quad (x,t)\in B_{\rho }(x_0)\times [0,-t_0], \end{aligned}$$

where \(\rho \equiv \rho (t_0) \equiv \rho (t_0,\mu ) {:}{=} \sqrt{\frac{2-\mu }{\mu }4n(-t_0)}\). We are now going to prove that the function \(\widehat{b}\) satisfies

$$\begin{aligned} \partial _t \widehat{b}&= \frac{n}{\sqrt{2n(t-t_0)+|x-x_0|^2}} + \alpha ,\end{aligned}$$
(5.5)
$$\begin{aligned} \widehat{b}_i&= \frac{(x-x_0)_i}{\sqrt{2n(t-t_0)+|x-x_0|^2}},\end{aligned}$$
(5.6)
$$\begin{aligned} \widehat{b}_{ij}&= \frac{1}{\sqrt{2n(t-t_0)+|x-x_0|^2}} \left( \delta _{ij} - \frac{ (x-x_0)_i (x-x_0)_j}{2n(t-t_0)+|x-x_0|^2}\right) ,\end{aligned}$$
(5.7)
$$\begin{aligned} \partial _t \widehat{b}&-g^{ij}(\delta ,\nabla \widehat{b}) \widehat{b}_{ij} = \alpha , \end{aligned}$$
(5.8)
$$\begin{aligned} 1&-|\nabla \widehat{b}|^2 \ge \frac{\mu }{4}, \end{aligned}$$
(5.9)
$$\begin{aligned} \widehat{b}_i&\frac{(x-x_0)^i}{|x-x_0|} \ge \sqrt{1-\frac{\mu }{2}} \quad \text {for } |x-x_0|=\rho . \end{aligned}$$
(5.10)

Direct calculations yield Eqs. (5.5), (5.6), and (5.7). For ease of notation we may ignore \(\alpha \) and \(x_0\) (by setting them equal to zero), as it is easy to incorporate these afterwards. Now we address (5.8). We have \(\widehat{b}_i = \frac{x_i}{\widehat{b}}\) and

$$\begin{aligned} g^{ij}(\delta ,\nabla \widehat{b})\widehat{b}_{ij}&= \left( \delta ^{ij} + \frac{x^i x^j}{\widehat{b}^2-|x|^2}\right) \left( \frac{\delta _{ij}}{\widehat{b}}-\frac{x_i x_j}{\widehat{b}^3}\right) \\&= \frac{n}{\widehat{b}}-\frac{|x|^2}{\widehat{b}^3}+\frac{|x|^2}{\widehat{b}\,(\widehat{b}^2-|x|^2)} -\frac{|x|^4}{\widehat{b}^3(\widehat{b}^2-|x|^2)}\\&= \frac{n}{\widehat{b}}- \frac{(\widehat{b}^2-|x|^2)|x|^2 - \widehat{b}^2 |x|^2 + |x|^4}{\widehat{b}^3(\widehat{b}^2-|x|^2)} = \frac{n}{\widehat{b}}\\&= \partial _t \widehat{b} \,. \end{aligned}$$

The inequality (5.9) is shown by

$$\begin{aligned} 1-|\nabla \widehat{b}|^2 = \frac{2n(t-t_0)}{2n(t-t_0)+|x|^2} \ge \frac{2n(-t_0)}{2n(-t_0)+\rho ^2} = \frac{1}{1+\frac{2-\mu }{\mu }2} = \frac{\mu }{4-\mu } \ge \frac{\mu }{4}. \end{aligned}$$
(5.11)

Finally, we derive the estimate (5.10): With \(|x|=\rho \) and \(t\le -t_0\), we get

$$\begin{aligned} \widehat{b}_i\frac{x^i}{|x|} = \frac{|x|}{\widehat{b}} = \sqrt{\frac{\rho ^2}{2n(t-t_0)+\rho ^2}} \ge \sqrt{\frac{\rho ^2}{4n(-t_0)+\rho ^2}} = \sqrt{\frac{\frac{2-\mu }{\mu }}{1+\frac{2-\mu }{\mu }}} = \sqrt{1-\frac{\mu }{2}}. \end{aligned}$$
(5.12)
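
The identities (5.5)–(5.10), in particular (5.8), can also be checked symbolically; the following sympy sketch (illustrative, with \(n=3\) and \(x_0=0\)) verifies (5.8) and the algebraic identity behind (5.9).

```python
# Sympy check of the barrier identities: with b = sqrt(2n(t-t0)+|x|^2) + alpha*t we
# expect d_t b - g^{ij}(delta, grad b) b_{ij} = alpha (eq. (5.8)) and
# 1 - |grad b|^2 = 2n(t-t0)/(2n(t-t0)+|x|^2) (used for (5.9)).
import sympy as sp

t, t0, alpha = sp.symbols('t t_0 alpha', real=True)
X = sp.symbols('x1:4', real=True)           # n = 3, x_0 = 0 for simplicity
n = len(X)
w = sp.sqrt(2*n*(t - t0) + sum(xi**2 for xi in X))
b = w + alpha*t

Db = [sp.diff(b, xi) for xi in X]
D2b = [[sp.diff(b, xi, xj) for xj in X] for xi in X]
gradsq = sum(db**2 for db in Db)
gij = [[(1 if i == j else 0) + Db[i]*Db[j]/(1 - gradsq) for j in range(n)] for i in range(n)]

op = sp.diff(b, t) - sum(gij[i][j]*D2b[i][j] for i in range(n) for j in range(n))
print(sp.simplify(op - alpha))                              # expected: 0, i.e. (5.8)
print(sp.simplify(1 - gradsq - 2*n*(t - t0)/w**2))          # expected: 0
```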

The action of the parabolic operator on \(\widehat{b}\) can be estimated in the following way:

$$\begin{aligned}&\partial _t \widehat{b} -g^{ij}(\sigma ,\nabla \widehat{b}) ^{\sigma }\nabla ^2_{ij}\widehat{b}\\&\quad \ge \partial _t \widehat{b}-g^{ij}(\delta ,\nabla \widehat{b})\widehat{b}_{ij} - \left| g^{ij}(\delta ,\nabla \widehat{b})\widehat{b}_{ij} - g^{ij}(\sigma ,\nabla \widehat{b}) ^{\sigma }\nabla ^2_{ij}\widehat{b} \right| \\&\quad \ge \alpha - \left| g^{ij}(\sigma ,\nabla \widehat{b}) - g^{ij}(\delta ,\nabla \widehat{b}) \right| |\widehat{b}_{ij}| - \left| g^{ij}(\sigma ,\nabla \widehat{b})\right| \left| ^{\sigma }\!\varGamma ^k_{ij}\right| |\widehat{b}_k| \,. \end{aligned}$$

We may use (5.9), the calculation of (3.11), \(|\widehat{b}_{ij}|\lesssim 1\) for \(-t_0\gg 1\), and (2.4) to conclude

$$\begin{aligned} \partial _t \widehat{b} -g^{ij}(\sigma ,\nabla \widehat{b}) ^{\sigma }\nabla ^2_{ij}\widehat{b} > 0 \end{aligned}$$
(5.13)

if we are sufficiently far out, i. e. \(|x_0|-\rho (t_0)\) is sufficiently large depending on \(\alpha \), \(\mu \), and \(\sigma \) only.

We have shown that \(\widehat{b}+h\) (\(h\in \mathbb {R}\)) is an upper barrier for u on \(B_\rho (x_0)\times [0,-t_0]\) (\(|x_0|\) large) if \(\widehat{b}+h>u\) holds initially: Because of (5.4) and (5.10) there cannot be a first contact point of u and \(\widehat{b}+h\) at \(\partial B_\rho (x_0)\times (0,-t_0)\). By (5.13) and (2.7), there cannot be a first contact in the interior either. For a more detailed exposition, see Proposition 5.7 below.

To prove (5.3) we show that for any \(t^*\in (0,T)\) and any \(\varepsilon >0\) there is \(R>0\) such that \(u(x,t) < \varepsilon \) for any \(|x|>R\) and any \(t\in (0,t^*]\). So let \(t^*\in (0,T)\) and \(\varepsilon >0\) be given. By (5.2) there is \(R_0\) such that \(u(x,0)<\frac{\varepsilon }{2}\) for \(|x|\ge R_0\). Let us choose \(|t_0|\) so large and \(\alpha \) so small that \(\partial _t \widehat{b}_{x_0,t_0}(x_0,t)<\frac{\varepsilon }{2t^*}\) for all \(t\in [0,t^*]\). After that we fix an arbitrary \(x_0\in \mathbb {R}^n\) with \(|x_0|>R_0+\rho (t_0)\) and \(|x_0|\) being so large that (5.4) and (5.13) hold in \(B_{\rho }(x_0)\times (0,-t_0)\). Recall also (5.10). Then \(\widehat{b}_{x_0,t_0}+h\) with \(h=-\widehat{b}_{x_0,t_0}(x_0,0)+\frac{\varepsilon }{2}\) is an upper barrier for u in \(B_\rho (x_0)\times (0,-t_0)\). We may assume \(t^*<-t_0\). In particular, we obtain for \(t\in (0,t^*)\)

$$\begin{aligned} u(x_0,t)<\widehat{b}_{x_0,t_0}(x_0,t)+h \le \frac{\varepsilon }{2}+t^* \sup _{\tau \in [0,t^*]} \partial _t \widehat{b}_{x_0,t_0}(x_0,\tau ) \le \varepsilon \,, \end{aligned}$$

which establishes the upper bound in (5.3); the corresponding lower bound follows analogously, using \(-\widehat{b}_{x_0,t_0}-h\) as a lower barrier. \({\square }\)

To prove a version of this theorem for asymptotically flat manifolds, we apply a comparison principle for barriers with large gradient at the boundary and solutions with bounded gradient. In [1, Lemma 4.1], a similar technique has been used.

Proposition 5.7

Let \(\Omega \) be an open bounded subset of a complete manifold M with \(\partial \Omega \in C^1\) and outer unit normal \(\nu \). Let \(0<T\le \infty \). Let \(u\in C^{2;1}(M\times (0,T))\cap C^0(M\times [0,T))\) be a solution to (2.7) in \(\Omega \times (0,T)\). Let \(b\in C^{2;1}\bigl (\overline{\Omega }\times [0,T)\bigr )\) fulfil

$$\begin{aligned}\partial _t b\ge g^{ij}(\sigma ,\nabla b) ^{\sigma }\nabla ^2_{ij} b\quad \text {in }\Omega \times (0,T).\end{aligned}$$

Assume that

$$\begin{aligned}u(\cdot ,0)\le b(\cdot ,0)\quad \text {in }\Omega \quad \text {and}\quad \sup \limits _{\Omega \times (0,T)}|\nabla u| <\inf \limits _{\partial \Omega \times (0,T)}\langle \nabla b,\nu \rangle .\end{aligned}$$

Then

$$\begin{aligned} u\le b\quad \text {in }\Omega \times [0,T). \end{aligned}$$

Proof

By replacing b by \(b+\varepsilon (1+t)\) and later considering \(\varepsilon \searrow 0\), we may assume without loss of generality that \(\partial _t b> g^{ij}(\sigma ,\nabla b) ^{\sigma } \nabla ^2_{ij}b\) and \(u(\cdot ,0)<b(\cdot ,0)\). It suffices to prove the result for any finite S such that \(0<S<T\). We exhaust \(\Omega \) by compact sets \(K_l\), \(l\in \mathbb N\), such that \(K_l\nearrow \Omega \) and \(\partial K_l\rightarrow \partial \Omega \) in \(C^1\) in a local graph representation. We may assume without loss of generality that l is large enough so that

$$\begin{aligned} \sup \limits _{\Omega \times (0,S]} |\nabla u|< \inf \limits _{\partial K_l\times (0,S]} \langle \nabla b,\nu \rangle , \end{aligned}$$
(5.14)

where \(\nu \) is the outer unit normal to \(\partial K_l\). We fix l and argue by contradiction. Hence, we may assume that there exists a first \((x_0,t_0)\in K_l\times (0,S]\), such that \(u(x_0,t_0)=b(x_0,t_0)\). Now, we distinguish two cases:

  • If \(x_0\in \partial K_l\), we still have \(b-u\ge 0\) in \(K_l\times \{t_0\}\) with equality at \(x_0\) and deduce that \(\langle \nabla (b-u),\nu \rangle \le 0\) at \((x_0,t_0)\). We get \(\langle \nabla b,\nu \rangle \le |\nabla u|\) there. This contradicts (5.14).

  • A first touching of u and b in the interior of \(K_l\), however, is excluded by the maximum principle as we have assumed that b is a strict supersolution.

We obtain \(u<b\) in \(K_l\times [0,S]\). Now, we let \(l\rightarrow \infty \) and \(S\nearrow T\). The claim follows. \({\square }\)

Corollary 5.8

Theorem 5.4 holds for asymptotically flat \((M^n,\sigma )\) with \(n\ge 3\).

Proof

The only case where the proof slightly differs from the proof of Theorem 5.4 is the second-to-last paragraph: In this case one chooses \(|x_0|\) (in an asymptotic neighbourhood) and \(\rho >0\) so large that \(M{\setminus } B_{\rho }(x_0)\) lies in an asymptotic coordinate neighbourhood and such that \(|\nabla b|>\sup _M |\nabla u_0|\) on \(\partial B_{\rho }(x_0)\). Then, \(b+h\) is an upper barrier for u by Proposition 5.7. \({\square }\)

6 Boundary Gradient Estimates

Our goal in this section is to prove boundary gradient estimates for the modified Dirichlet problem for the mean curvature flow.

Proposition 6.1

Let \((M,\sigma )\) be an asymptotically flat manifold and \(u_0\in C^{0,1}(M)\) be a function which is uniformly spacelike and satisfies \(|u_0(x)|\rightarrow 0\) as \(|x|\rightarrow \infty \). Let \(r_0\) (depending on \((M,\sigma )\) and \(u_0\)) be so large that \(\mathbb {R}^n{\setminus } B_{r_0}(0)\) lies in the image of an asymptotic coordinate system and such that there exists a rotationally symmetric static supersolution \(b_{r_0}\) of the mean curvature flow with \(b_{r_0}=\sup u_0+1\) at \(\partial K_{r_0}\). Let \(R>\max \{2,r_0\}\) and let \(u_{0,R}\) be modified initial data and \(\sigma _{R}\) be a modified metric, such that \(u_{0,R}=0\) and \(\sigma _{R}=\delta \) on \(M{\setminus } K_R\) and \(u_{0,R}=u_0\) and \(\sigma _{R}=\sigma \) in \(K_{R-1}\). Finally, let \(T\in (0,\infty ]\) and \(u_R\in C^{2;1}\big (B_{R^4}(0)\times (0,T)\big )\cap C^0\big (\overline{B_{R^4}(0)}\times [0,T)\big )\) be a solution of the initial boundary value problem

$$\begin{aligned} {\left\{ \begin{array}{ll} \partial _t u_{R}(x,t)=g^{ij}(\sigma _{R},\nabla u_{R}) \, ^{\sigma _R}\nabla ^2_{ij} u_{R}(x,t) &{} \text {for} \;\; (x,t) \in B_{R^4}(0)\times (0,T) \,,\\ u_{R}(x,0) = u_{0,{R}}(x) &{} \text {for} \;\; x\in B_{R^4}(0) \,,\\ u_{R}(x,t) = 0 &{} \text {for} \;\; (x,t) \in \partial B_{R^4}(0)\times [0,T) \,. \end{array}\right. } \end{aligned}$$
(6.1)

Then we obtain

$$\begin{aligned} |\nabla u_R|_{{\sigma }_R}(x,t) \lesssim R^{-3n+\frac{9}{2}} \end{aligned}$$

for all \(|x| = R^4\) and \(t \in [0,T)\).

Proof

Let \(b_{R}\) be a rotationally symmetric supersolution constructed in Sect. 3. By Corollary 3.2, it can be chosen such that \(b_{R}|_{\partial K_R}\ge \sup |u_0|+1\). Let \(a_R\) be the value of \(b_{R}\) at \(\partial K_{R^4}\). Then \(a_R\simeq R^{-3n+\frac{17}{2}}<1\) by Lemma 3.1 for \(R\gg 1\). Let \(\tilde{b}_R=b_{R}-a_R\). Then \(\tilde{b}_R\) has the following properties:

$$\begin{aligned} \tilde{b}_R(x)\ge \sup _M |u_0| \;\; \text {for} \;\; |x|=R\,, \qquad \tilde{b}_R(x)=0 \;\; \text {for} \;\; |x|=R^4\,, \end{aligned}$$

where \(x\in M\) is identified with a vector in \(\mathbb {R}^n\) via the asymptotic chart. If \(u_R\) is a solution of the Dirichlet problem of the proposition, we obviously have \(|u_R(x,t)|\le \sup _M |u_0|\) for all times. Therefore,

$$\begin{aligned} u_{R}(x,t)\le \tilde{b}_{R}(x) \quad \text {for}\;\;|x|=R \;\; \text {and} \;\; |x|=R^4 \,, \qquad t\in [0,T) \,. \end{aligned}$$

By construction, \(0=|u_{0,R}|\le \tilde{b}_{R}\) on \(K_{R^4}{\setminus } K_R\). By the maximum principle and because \(b_{R}\) is a static supersolution, we obtain

$$\begin{aligned} -\widetilde{b}_R \le u_R \le \widetilde{b}_R \end{aligned}$$

on \(\bigl (K_{R^4}{\setminus } K_R\bigr )\times [0,T)\) and we have equality on \(\partial K_{R^4}\times [0,T)\). This implies in particular

$$\begin{aligned} \left| \frac{\partial u_R}{\partial \nu }(x,t)\right| \le \left| \frac{\partial \widetilde{b}_R}{\partial \nu }(x)\right| \quad \text {for} \quad x\in \partial K_{R^4}\,, \quad t\in (0,T) \,. \end{aligned}$$

Since by (3.8) we have

$$\begin{aligned} |\nabla b_R(x)|&\simeq \left( \frac{|x|}{R}\right) ^{-n+\frac{3}{2}} \,, \end{aligned}$$

and since \(|\nabla b_R|=|\nabla \widetilde{b}_R|\), and because the derivatives of \(u_R\) tangential to the boundary vanish, we finally get

$$\begin{aligned} |\nabla u_R|_{{\sigma }_R}(x,t) \lesssim R^{-3n+\frac{9}{2}} \end{aligned}$$

for all \(|x| = R^4\) and \(t \in [0,T)\). \({\square }\)
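
The exponent bookkeeping in the preceding proof can be summarised in a few lines (illustrative only):

```python
# Exponent arithmetic behind the proof of Proposition 6.1, using
# b(r) ~ r0^{n-3/2} r^{-(n-5/2)} from (3.14) and |grad b|(r) ~ (r/r0)^{-n+3/2}
# from (3.8), evaluated at r0 = R and r = R^4.
import sympy as sp

n = sp.Symbol('n', positive=True)
exp_aR = (n - sp.Rational(3, 2)) + 4*(-n + sp.Rational(5, 2))   # exponent of R in a_R = b_R(R^4)
exp_gradb = 3*(-n + sp.Rational(3, 2))                          # exponent of R in |grad b_R|(R^4)
print(sp.simplify(exp_aR))     # 17/2 - 3*n  (negative for n >= 3, so a_R < 1 for large R)
print(sp.simplify(exp_gradb))  # 9/2 - 3*n   (matching the boundary gradient bound)
```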

Remark 6.2

In the cases \(n=1,2\), we also get bounds on the derivative at \(|x|=R^4\). The fact that in these cases b is unbounded as \(|x|\rightarrow \infty \) does not play a role in the proof of the above proposition.

7 Derivative Estimates

In this section, we are going to prove uniform derivative estimates for solutions of the modified initial boundary value problem (6.1) for the mean curvature flow. These estimates are crucial as they directly yield long-time existence of the solutions. The key point is to compute an evolution inequality for a good quantity and to apply the maximum principle. Similar ideas were used by Gerhardt [7].

Let u evolve under mean curvature flow (2.6). Recall from [6, Proposition 3.2], that the evolution of the quantity v introduced in (2.8) under the mean curvature flow is given by

$$\begin{aligned} \left( \frac{d}{dt} - \Delta \right) v = -v \bigl ( |A|^2 + \text {Ric}_{h}(\nu ,\nu ) \bigr ) \,, \end{aligned}$$
(7.1)

where \(\text {Ric}_{h}\) is the Ricci tensor of the metric \(h=\sigma _{ij} \mathrm {d}x^i \mathrm {d}x^j-(\mathrm {d}x^0)^2\), A is the second fundamental form and \(\nu \) is the unit normal of the hypersurface as defined in (2.8). In this section, \(\langle .,.\rangle \), |.|, \(\nabla \) and \(\Delta \) are taken with respect to the induced metric \(g\) on the graph of u unless otherwise specified.

Lemma 7.1

Let \((M,\sigma )\) be a Riemannian manifold of bounded geometry and \(u:M\rightarrow \mathbb {R}\) be a smooth function with spacelike graph in \((M\times \mathbb {R},h)\). Then the Ricci tensor \(\text {Ric}_h\) of the metric h, evaluated on the unit normal \(\nu \) of \(\text {graph }(u)\), satisfies

$$\begin{aligned} \bigl | \text {Ric}_{h}(\nu ,\nu ) \bigr | \le C (v^2-1) \end{aligned}$$

with some \(C\ge 0\).

Proof

Using the product structure of \(M\times \mathbb {R}\), we can evaluate the Ricci tensor,

$$\begin{aligned} \text {Ric}_{h}(\nu ,\nu )&= \text {Ric}_{\sigma } \left( \frac{^{\sigma }\nabla u}{\sqrt{1-|\nabla u|^2_{\sigma }}}, \frac{^{\sigma }\nabla u}{\sqrt{1-|\nabla u|^2_{\sigma }}} \right) = v^2 \text {Ric}_{\sigma }\bigl (^{\sigma }\nabla u, ^{\sigma }\nabla u\bigr ) \,, \end{aligned}$$

where \(^{\sigma }\nabla \) denotes the gradient with respect to \(\sigma \). From the definition of v in Eq. (2.8) we infer \(v^2|\nabla u|^2_{\sigma } = v^2 - 1\). Since M has bounded geometry, its Ricci tensor satisfies \(|\text {Ric}_{\sigma }(w,w)|\le C|w|^2_{\sigma }\) for some \(C\ge 0\) and all w. Thus,

$$\begin{aligned} \bigl | \text {Ric}_{h}(\nu ,\nu ) \bigr | \le v^2 C |\nabla u|^2_{\sigma } = C(v^2-1) \,. \end{aligned}$$

\({\square }\)

In the sequel, we will study a function also considered by Gerhardt [7] (for similar ideas, see also e. g. [6, Proof of Proposition 4.4] or [8, Proof of Theorem 15]).

Lemma 7.2

Let \((M,\sigma )\) be a Riemannian manifold of bounded geometry and let \(u:M\times [0,T)\rightarrow \mathbb {R}\) be a solution of the mean curvature flow in \((M\times \mathbb {R},h)\). Then, the function \(\varphi \) defined by

$$\begin{aligned} \varphi {:}{=} v \exp \bigl ( \mu \mathrm {e}^{\lambda u} \bigr ), \,\qquad \lambda ,\mu >0, \end{aligned}$$
(7.2)

evolves according to the equation

$$\begin{aligned} \left( \frac{d}{dt} - \Delta \right) \varphi&= - \varphi \Bigl ( |A|^2 + \text {Ric}_{h}(\nu ,\nu ) \Bigr ) - 2\lambda \mu \frac{\varphi }{v} \mathrm {e}^{ \lambda u}\langle \nabla u, \nabla v\rangle \\&\qquad - \mu \lambda ^2 \varphi \mathrm {e}^{\lambda u} \Bigl \{ \mu \mathrm {e}^{\lambda u} + 1 \Bigr \} |\nabla u|^2 \,. \end{aligned}$$

Proof

For the first derivative of \(\varphi \), we calculate

$$\begin{aligned} \varphi _j = v_j \exp \bigl ( \mu \mathrm {e}^{\lambda u} \bigr ) + \mu \lambda \varphi \mathrm {e}^{\lambda u} u_j \,. \end{aligned}$$
(7.3)

Further, for the second derivative we get

$$\begin{aligned} \nabla ^2_{ij}\varphi= & {} \bigl ( \nabla _{ij}^2 v\bigr ) \exp \bigl ( \mu \mathrm {e}^{\lambda u} \bigr ) + \mu \lambda \exp \bigl ( \mu \mathrm {e}^{\lambda u} \bigr ) \mathrm {e}^{\lambda u} u_i v_j \\&+ \mu \lambda \bigl ( v_i \exp (\mu \mathrm {e}^{\lambda u} ) + \mu \lambda \varphi \mathrm {e}^{\lambda u} u_i \bigr ) \mathrm {e}^{\lambda u} u_j \\&+ \mu \lambda ^2 \varphi \mathrm {e}^{\lambda u} u_i u_j + \mu \lambda \varphi \mathrm {e}^{\lambda u} \nabla ^2_{ij} u \,. \end{aligned}$$

Taking the trace with respect to \(g\), we obtain

$$\begin{aligned} \Delta \varphi&= \frac{\varphi }{v} \Delta v + 2 \lambda \mu \exp \bigl ( \mu \mathrm {e}^{\lambda u} \bigr ) \mathrm {e}^{\lambda u} \langle \nabla u, \nabla v\rangle \\&\qquad + \mu ^2\lambda ^2 \varphi \mathrm {e}^{2\lambda u} |\nabla u|^2 + \mu \lambda ^2 \varphi \mathrm {e}^{\lambda u} |\nabla u|^2 + \mu \lambda \varphi \mathrm {e}^{\lambda u} \Delta u \,. \end{aligned}$$

Since the time derivative of \(\varphi \) is given by \(\partial _t \varphi = (\partial _t v) \frac{\varphi }{v} + \mu \lambda \varphi \mathrm {e}^{\lambda u} \partial _t u\), Lemma 2.3 implies that \(\varphi \) evolves according to

$$\begin{aligned} \left( \frac{d}{dt} - \Delta \right) \varphi&= \frac{\varphi }{v}\left( \frac{d}{dt} - \Delta \right) v + \mu \lambda \varphi \mathrm {e}^{\lambda u}\underbrace{\left( \frac{d}{dt} - \Delta \right) u}_{=0} - 2\lambda \mu \frac{\varphi }{v} \mathrm {e}^{\lambda u}\langle \nabla u, \nabla v\rangle \\&\qquad - \mu \lambda ^2 \varphi \mathrm {e}^{\lambda u} \Bigl \{ \mu \mathrm {e}^{\lambda u} + 1 \Bigr \} |\nabla u|^2 \,. \end{aligned}$$

The claim now follows from the evolution Eq. (7.1) for v. \({\square }\)
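
The formula for \(\Delta \varphi \) in the proof only uses the product and chain rules, which hold for any Laplace–Beltrami operator, so it can be checked with the flat Laplacian on \(\mathbb {R}^2\); the following sympy sketch (illustrative, not part of the argument) does this.

```python
# Sympy sketch: the Leibniz/chain-rule identity behind the formula for Laplacian(phi)
# in the proof of Lemma 7.2, checked with the flat Laplacian and gradient on R^2.
import sympy as sp

x, y = sp.symbols('x y', real=True)
lam, mu = sp.symbols('lambda mu', positive=True)
u = sp.Function('u')(x, y)
v = sp.Function('v')(x, y)

lap = lambda f: sp.diff(f, x, 2) + sp.diff(f, y, 2)
grad = lambda f: sp.Matrix([sp.diff(f, x), sp.diff(f, y)])

E = sp.exp(mu*sp.exp(lam*u))
phi = v*E

claimed = ((phi/v)*lap(v)
           + 2*lam*mu*E*sp.exp(lam*u)*grad(u).dot(grad(v))
           + mu**2*lam**2*phi*sp.exp(2*lam*u)*grad(u).dot(grad(u))
           + mu*lam**2*phi*sp.exp(lam*u)*grad(u).dot(grad(u))
           + mu*lam*phi*sp.exp(lam*u)*lap(u))
print(sp.simplify(lap(phi) - claimed))   # expected: 0
```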

Lemma 7.3

Let \((M,\sigma )\) be a Riemannian manifold of bounded geometry and let \(u:M\rightarrow \mathbb {R}\) be strictly spacelike. Then, the estimate

$$\begin{aligned} \bigl | \langle \nabla u, \nabla v\rangle \bigr | \le |A| |\nabla u|^2 \end{aligned}$$

holds.

Proof

Using the relation \(v^2=1+|\nabla u|^2\) from Lemma 2.4, we calculate

$$\begin{aligned} \partial _i (v^2) = 2v v_i = 2 g^{jk} (\nabla ^2_{ij} u) u_k \,. \end{aligned}$$

Consequently,

$$\begin{aligned} \bigl | \langle \nabla u, \nabla v\rangle \bigr | = \frac{1}{v} \bigl | g^{jk} g^{il} (\nabla _{ij}^2 u) u_k u_l \bigr | \le \frac{1}{v} |\nabla ^2 u| |\nabla u|^2 \,. \end{aligned}$$

Let \(\pi _{\mathbb {R}}:M\times \mathbb {R}\rightarrow \mathbb {R}\) denote the projection onto the second factor. Then the claim follows from

$$\begin{aligned} |\nabla ^2 u| = |\mathrm {d}\pi _{\mathbb {R}}(\nabla ^2 F)| = |A| \, |\mathrm {d}\pi _{\mathbb {R}}(\nu )| = |A| v \,. \end{aligned}$$

\({\square }\)

Lemma 7.4

Let \(u_R\) be a solution of the modified initial boundary value problem (6.1). Then \(\sup _{M\times \{t\}}v\le c\) (where v is computed as in (2.8) using \(u_R\) instead of u) for some finite constant \(c\ge 1\) which depends on \(\sigma \) and the initial data, but is independent of \(R\gg 1\).

Proof

Note that as long as the solution exists, we have

$$\begin{aligned} \min u_0\le \min u_{0,R}\le u_{t,R}\le \max u_{0,R} \le \max u_0 \end{aligned}$$

by the avoidance principle and the construction of \(u_{0,R}\). After possibly applying a shift to \(u_0\), we further may assume

$$\begin{aligned} \min u_0 \ge 0 \,. \end{aligned}$$

Using Lemma 7.3 and \(|\nabla u_R|^2\le v^2\) (see Lemma 2.4), we obtain the estimate

$$\begin{aligned} 2 \langle \nabla u_R, \nabla v\rangle \le \varepsilon |A|^2 v + \frac{|\nabla u_R|^4}{\varepsilon v} \le v \left( \varepsilon |A|^2 + \frac{|\nabla u_R|^2}{\varepsilon } \right) \end{aligned}$$

for any \(\varepsilon >0\); here we applied Young's inequality \(2ab\le \varepsilon a^2+\varepsilon ^{-1}b^2\) with \(a=|A|\sqrt{v}\) and \(b=|\nabla u_R|^2/\sqrt{v}\). Together with Lemmas 7.1 and 7.2, this implies

$$\begin{aligned} \left( \frac{\mathrm{{d}}}{\mathrm{{d}}t} - \Delta \right) \varphi&\le -\varphi \left( \left( 1 - \varepsilon \lambda \mu \mathrm {e}^{\lambda u_R} \right) |A|^2 - C(v^2-1) \right) \\&\quad - \lambda \mu \varphi \mathrm {e}^{\lambda u_R} \left( \lambda \bigl (\mu \mathrm {e}^{\lambda u_R}+1\bigr ) - \frac{1}{\varepsilon } \right) |\nabla u_R|^2 \,. \end{aligned}$$

Now let \(\varepsilon {:}{=}\mathrm {e}^{-\lambda u_R}\), \(\lambda {:}{=} C\) and \(\mu {:}{=} \frac{1}{C}\). Then \(\varepsilon \lambda \mu \mathrm {e}^{\lambda u_R} = 1\), and using \(|\nabla u_R|^2=v^2-1\) we calculate

$$\begin{aligned} \left( \frac{\mathrm{{d}}}{\mathrm{{d}}t} - \Delta \right) \varphi&\le \varphi C(v^2-1) - \varphi \mathrm {e}^{Cu_R} \bigl ( \mathrm {e}^{Cu_R} + C - \mathrm {e}^{C u_R} \bigr )(v^2-1) \\&= -\varphi C\bigl ( \mathrm {e}^{Cu_R} - 1 \bigr ) (v^2-1) {\mathop {\le }\limits ^{\min u_R \ge 0}}0 \,. \end{aligned}$$

By the gradient estimates in Proposition 6.1, the function \(\varphi \) is bounded at \(\partial B_{R^4}(0)\) for positive times. We can thus apply the maximum principle to conclude that

$$\begin{aligned} \sup _{B_{R^4}(0)\times [0,T)}\varphi \le \max \left\{ \sup _{B_{R^4}(0)\times \left\{ 0\right\} }\varphi ,\sup _{\partial B_{R^4}(0)\times [0,T)}\varphi \right\} \,, \end{aligned}$$

which yields the statement of the lemma. \({\square }\)

Lemma 7.5

Let \(R>0\) be large enough, \(T\in (0,\infty ]\) and let \((M^n,\sigma )\), \(n\ge 3\), be an asymptotically flat manifold. Suppose \(u_R\in C^{2;1}\big (K_R\times (0,T)\big )\cap C^0\big (\overline{K_R}\times [0,T)\big )\) is a solution of the modified initial boundary value problem

$$\begin{aligned} {\left\{ \begin{array}{ll} \partial _t u_R(x,t)=g^{ij}(\sigma _R,\nabla u_R) \, ^{\sigma }\nabla ^2_{ij} u_R(x,t) &{} \text {for} \;\; (x,t) \in K_R\times (0,T) \,,\\ u_R(x,0) = u_{0,R}(x) &{} \text {for} \;\; x\in K_R \,,\\ u_R(x,t) = 0 &{} \text {for} \;\; (x,t) \in \partial K_R\times [0,T] \,, \end{array}\right. } \end{aligned}$$

where \(u_{0,R}\) and \(\sigma _R\) are the modified initial data provided by Proposition 4.1.

We extend \(u_{0,R}\) and \(u_R\) by zero for \(|x|>R\). We additionally assume that we are given \(\varepsilon >0\) and \(r_1>0\) such that \(|u_0(x)|\le \varepsilon \) for \(|x| \ge r_1\).

Then there are a function \(b_\varepsilon :M\rightarrow \mathbb {R}_+\), depending only on \(\sigma \), \(\varepsilon \), \(r_1\), and \(\sup _{M} u_0\), and a small constant \(\kappa >0\), depending only on \(\sigma \), \(r_1\) and \(\sup _{M}\bigl ( |u_0|+|\nabla u_0|_{\sigma }\bigr )\), such that for all sufficiently large R:

  (i) \(|u_R(x,t)| \le b_\varepsilon (x)\) for \((x,t)\in M\times (0,T)\),

  (ii) \(b_\varepsilon (x)\rightarrow \varepsilon \text { for } |x| \rightarrow \infty \),

  (iii) \(|\nabla u_R(x,t)|_{\sigma } \le 1-\kappa \) for \((x,t)\in M\times (0,T)\).

Consequently, each solution \(u_R\) can be extended for all times and satisfies the estimates (i) and (iii) uniformly for all \(t\in (0,\infty )\).

Proof

Choose \(b_{\varepsilon }\) as in the proof of Lemma 5.1. The estimates (i) and (ii) hold due to Remark 5.3. Part (iii) follows directly from Lemma 7.4. Note that all these estimates are independent of T. Suppose that \(T_{\text {max}}<\infty \) is the maximal time of existence. By standard theory of parabolic equations, we also have bounds on all derivatives of \(u_R\) on \((T_{\text {max}}/2,T_{\text {max}})\) which are uniform up to \(T_{\text {max}}\). Therefore, \(u_{t,R}\) converges in all derivatives, as \(t\rightarrow T_{\text {max}}\), to some function \(u_{T_{\text {max}},R}\), and we can apply short-time existence to extend \(u_R\) smoothly beyond \(T_{\text {max}}\), which contradicts the maximality of \(T_{\text {max}}\). \({\square }\)

Remark 7.6

Suppose \(r_1\) is given as a function of \(\varepsilon \), in the sense that for all \(\varepsilon >0\) we have \(|u_0(x)|<\varepsilon \) for \(|x|\ge r_1(\varepsilon )\). We may then take the infimum \(b(x){:}{=} \inf _{\varepsilon >0}b_{\varepsilon }(x)\), which depends on \(\sigma \), \(\sup _{M} u_0\), and the function \(r_1\). With \(\kappa >0\) as above and \(R\gg 1\) large enough, we then have

  (i) \(|u_R(x,t)| \le b(x)\) for \((x,t)\in M\times (0,T)\),

  (ii) \(b(x)\rightarrow 0 \text { for } |x| \rightarrow \infty \),

  (iii) \(|\nabla u_R(x,t)|_{\sigma _R} \le 1-\kappa \) for \((x,t)\in M\times (0,T)\).
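
Property (ii) follows from part (ii) of Lemma 7.5: since \(0\le b\le b_{\varepsilon }\) and \(b_{\varepsilon }(x)\rightarrow \varepsilon \) as \(|x|\rightarrow \infty \), we get \(\limsup _{|x|\rightarrow \infty } b(x)\le \varepsilon \) for every \(\varepsilon >0\), and hence \(b(x)\rightarrow 0\).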

8 Long-Time Existence and Convergence

In this section, we are going to prove the main statements of the paper.

Theorem 8.1

Let \((M^n,\sigma )\) be an asymptotically flat manifold with \(n\ge 3\) and \(u_0\in C^{0,1}(M)\) be uniformly spacelike such that

$$\begin{aligned} u_0(x)\rightarrow 0 \quad \text {for} \quad |x|\rightarrow \infty . \end{aligned}$$

Then there exists a unique uniformly spacelike solution of (2.7) with \(u(\cdot ,0)=u_0\) which exists for all times and satisfies \(u\in C^{2;1}\bigl (M\times (0,\infty ) \bigr )\cap C^0\bigl (M\times [0,\infty )\bigr )\). Moreover, u satisfies

$$\begin{aligned} u(x,t)\rightarrow 0 \quad \text {for} \quad |x|\rightarrow \infty . \end{aligned}$$
(8.1)

This convergence is uniform in t. More precisely, there is a function \(b:M\rightarrow \mathbb {R}\) and a constant \(\kappa >0\), both depending only on \(u_0\) and \(\sigma \), such that

  (i) \(|u(x,t)|\le b(x)\) for \((x,t)\in M\times (0,\infty )\),

  (ii) \(b(x)\rightarrow 0\) for \(|x|\rightarrow \infty \),

  (iii) \(|\nabla u(x,t)|_{\sigma } \le 1-\kappa \) for \((x,t)\in M\times (0,\infty )\).

Proof

By Lemma 7.5, the solutions \(u_R\) exist for all times and are uniformly spacelike, with bounds independent of t and R. By standard theory for parabolic equations, we get uniform bounds on higher derivatives on \((\delta ,\infty )\), where the bounds depend only on \(u_0\), \(\sigma \) and \(\delta \) and are, in particular, independent of R. Therefore, by the Arzelà-Ascoli theorem, the family \(u_R\) subconverges in all derivatives to a solution u of (2.7) with initial data \(u_0\). The estimates (i)–(iii) are consequences of the estimates (i)–(iii) in Lemma 7.5 and Remark 7.6.

To show uniqueness, one uses (8.1) in the following sense: Suppose that v(x,t) is another uniformly spacelike solution of (2.7) which is defined up to a time T. Then by Theorem 5.4, v also satisfies (8.1) for each \(t\in [0,T)\). Therefore, \(u\pm \varepsilon \) can be used as barriers for v on \(M\times [0,T)\). The maximum principle implies that for each \(\varepsilon >0\), we get \(u-\varepsilon \le v\le u+\varepsilon \). Uniqueness follows by letting \(\varepsilon \rightarrow 0\). \({\square }\)

Remark 8.2

Statement (8.1) also holds in dimensions \(n=1,2\), but we cannot conclude (i) and (ii) in these cases. These properties are, however, essential in the proof of the main theorem below. For the case \(n=1\), we give a separate proof in the subsection below.

Theorem 8.3

Let u be the solution in Theorem 8.1. Then \(u\left( \cdot ,t\right) \xrightarrow [t\rightarrow \infty ]{}0\) uniformly in \(C^{l}\) for all \(l\in \mathbb {N}_{0}\). More precisely, for each \(l\in \mathbb {N}_0\), there exists a function \(v_l:[0,\infty )\rightarrow \mathbb {R}\) with \(v_l(t)\rightarrow 0\) as \(t\rightarrow \infty \) such that

$$\begin{aligned} \bigl \Vert u_t\bigr \Vert _{C^{l}(M)}\le v_l(t) \,. \end{aligned}$$

Proof

First we prove the statement for \(l=0\). Since u decays as \(|x|\rightarrow \infty \) by Theorem 8.1, we can apply the maximum principle to conclude that

$$\begin{aligned} v:[0,\infty )\rightarrow \mathbb {R}\,, \qquad v(t){:}{=} \sup _{x\in M} |u(x,t)| \,, \end{aligned}$$

is monotonically decreasing. If \(v(t)\rightarrow 0\) for \(t\rightarrow \infty \), the claim follows. Assume therefore that \(v\left( t\right) \xrightarrow [t\rightarrow \infty ]{}\delta >0\). By Theorem 8.1 there exists \(R>0\) such that \(\left| u\left( x,t\right) \right| \le \frac{\delta }{2}\) for all \(\left| x\right| \ge R\) and \(t\ge 0\). Using the strict maximum principle once again in \(\overline{B_{R}\left( 0\right) }\times [0,\infty )\) yields that either v is strictly decreasing or \(u\left( \cdot ,t\right) \) is constant for all \(t\ge T_{0}\) for some \(T_0\ge 0\). In the second case we conclude (again by Theorem 8.1) that \(u\left( \cdot ,t\right) =0\) for all \(t\ge T_{0}\), which contradicts \(v(t)\rightarrow \delta >0\).

Let \(t_{k}\ge 0\) with \(t_{k}\xrightarrow [k\rightarrow \infty ]{}\infty \) be arbitrary. Due to the uniform estimates for u in \(C^{l}\) for \(l\in \mathbb {N}\) it follows by the Arzelà-Ascoli theorem (after choosing a subsequence also labelled \(\left( t_{k}\right) _{k\in \mathbb {N}}\)) that

$$\begin{aligned} u\left( \cdot ,t+t_{k}\right) \xrightarrow [k\rightarrow \infty ]{}\tilde{u}\left( \cdot ,t\right) \end{aligned}$$

uniformly, and \(\tilde{u}\) is a solution of the same differential equation as u. Due to the uniform convergence, we conclude that

$$\begin{aligned} \sup _{x\in M}\left| \tilde{u}\left( x,t\right) \right|&=\lim \limits _{k\rightarrow \infty }\sup \limits _{x\in M}\left| u\left( x,t+t_{k}\right) \right| =\lim \limits _{k\rightarrow \infty }v\left( t+t_{k}\right) =\delta >0 \end{aligned}$$
(8.2)

for all \(t\ge 0\). Because \(\left| u\left( x,t\right) \right| \xrightarrow [\left| x\right| \rightarrow \infty ]{}0\) uniformly in t, the same holds for \(\tilde{u}\). Therefore, \(|\tilde{u}(\cdot ,t)|\) attains its supremum for each \(t\ge 0\). By the strict maximum principle, this supremum is strictly decreasing in t unless \(\tilde{u}\) is constant, in which case it vanishes identically. Either way, this contradicts (8.2).

For arbitrary \(l\in \mathbb {N}\), the statement follows immediately from the following interpolation inequality

$$\begin{aligned} \left\| \nabla f\right\| _{C^{0}\left( M \right) }^{2}\le c\left( n\right) \left\| f\right\| _{C^{0}\left( M\right) }\bigl \Vert f\bigr \Vert _{C^{2}\left( M\right) } \end{aligned}$$

applied iteratively to u and its derivatives, since \(\left\| u\left( \cdot ,t\right) \right\| _{C^{0}\left( M\right) }\xrightarrow [t\rightarrow \infty ]{}0\) and uniform bounds for all derivatives of u hold. \({\square }\)
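
For the reader's convenience, we sketch the one-dimensional model case of this interpolation inequality; the constant below is illustrative and not optimal. For \(f\in C^2(\mathbb {R})\) and \(h>0\), Taylor's theorem gives

$$\begin{aligned} |f'(x)| \le \frac{|f(x+h)-f(x)|}{h} + \frac{h}{2}\sup |f''| \le \frac{2\Vert f\Vert _{C^0}}{h} + \frac{h}{2}\Vert f\Vert _{C^2} \,. \end{aligned}$$

Choosing \(h=2\sqrt{\Vert f\Vert _{C^0}/\Vert f\Vert _{C^2}}\) (for \(f\not \equiv 0\)) yields \(\Vert f'\Vert _{C^0}^2\le 4\,\Vert f\Vert _{C^0}\Vert f\Vert _{C^2}\); on a manifold of bounded geometry, the same argument along geodesics yields the stated inequality with a constant \(c(n)\).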

8.1 The One-Dimensional Case

For \((M,\sigma )=(\mathbb {R},\delta )\), the mean curvature flow can be written in a particularly simple form which allows us to prove our main result with a different method. To be more precise, for maps \(u:\mathbb {R}\times [0,T)\rightarrow \mathbb {R}\), the graphical mean curvature flow (2.6) may equivalently be written as

$$\begin{aligned} \partial _t u = \frac{u''}{1-\bigl (u'\bigr )^2} = \frac{1}{2} \left[ \ln \frac{1+u'}{1-u'} \right] '\,. \end{aligned}$$
(8.3)
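
The second equality in (8.3) is a direct computation: since \(|u'|<1\),

$$\begin{aligned} \frac{1}{2}\left[ \ln \frac{1+u'}{1-u'}\right] ' = \frac{1}{2}\left( \frac{u''}{1+u'}+\frac{u''}{1-u'}\right) = \frac{u''}{1-(u')^2} \,. \end{aligned}$$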

From Theorem 8.1 and Remark 8.2, we already know that (8.3) admits uniformly spacelike solutions that exist for all times, provided that the initial data are sufficiently well behaved. In the following, we show convergence to 0 for uniformly spacelike \(u_0\), and we even obtain a convergence rate if \(u_0\in L^2(\mathbb {R})\).
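
As a purely illustrative aside (not used in the proofs), the explicit form of (8.3) lends itself to a simple numerical experiment. The following minimal finite-difference sketch, with hypothetical grid parameters, integrates the flow for a rapidly decaying, uniformly spacelike profile and prints \(\sup |u|\) and the \(L^2\) norm at the final time.

```python
import numpy as np

# Illustrative explicit finite-difference sketch for the one-dimensional flow (8.3),
#   du/dt = u'' / (1 - (u')^2),
# with a rapidly decaying, uniformly spacelike initial profile.
# All grid parameters are hypothetical choices, not taken from the paper.

L, N = 40.0, 801                 # spatial interval [-L, L] and number of grid points
x = np.linspace(-L, L, N)
h = x[1] - x[0]
u = 0.4 * np.exp(-x**2)          # sup |u_0'| < 1, so the graph is uniformly spacelike

dt = 0.2 * h**2                  # small explicit time step (parabolic CFL-type choice)
T = 20.0

for _ in range(int(T / dt)):
    ux = (np.roll(u, -1) - np.roll(u, 1)) / (2 * h)        # central first derivative
    uxx = (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / h**2  # central second derivative
    u += dt * uxx / (1.0 - ux**2)
    u[0] = u[-1] = 0.0           # keep the (essentially zero) boundary values fixed

print("sup|u(.,T)| =", np.abs(u).max())
print("L2 norm     =", np.sqrt(np.sum(u**2) * h))
```

Both quantities should decay in t, in line with Lemma 8.4 and the \(t^{-1/4}\) rate obtained in Proposition 8.6 below.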

Lemma 8.4

Let \(u:\mathbb {R}\times (0,T)\rightarrow \mathbb {R}\) be a smooth solution of (8.3) (which is continuous up to \(t=0\)) with uniformly spacelike initial data \(u_0\in C^{0,1}(\mathbb {R})\cap L^2(\mathbb {R})\). Then \(u_t\in L^2(\mathbb {R})\) and \(\left\| u_t\right\| _{L^2}\le \left\| u_0\right\| _{L^{2}}\).

Proof

Choose a smooth cutoff function \(\psi \) with \(0\le \psi \le 1\),

$$\begin{aligned} \psi (x) = {\left\{ \begin{array}{ll} 1 \,, &{} x\in [-1,1] \,, \\ 0 \,, &{} x\notin (-2,2) \,, \end{array}\right. } \qquad \text {and} \qquad |\psi '|\le 2 \,. \end{aligned}$$

Let \(\varphi (x)=\psi (x/R)\) where \(R>0\). Then we get

$$\begin{aligned} \partial _t\int _{\mathbb {R}}\varphi ^2u^2\mathrm {d}x&= 2\int _{\mathbb {R}}\varphi ^2(\partial _t u) u\mathrm {d}x=\int _{\mathbb {R}}\varphi ^2u\left( \ln \frac{1+u'}{1-u'}\right) '\mathrm {d}x\\&\quad =-2\int _{\mathbb {R}}\varphi '\varphi u\left( \ln \frac{1+u'}{1-u'}\right) \mathrm {d}x -\int _{\mathbb {R}}\varphi ^2u'\left( \ln \frac{1+u'}{1-u'}\right) \mathrm {d}x\\&\quad \le \frac{1}{\delta }\int _{\mathbb {R}}(\varphi ')^2u^2\mathrm {d}x+\delta \int _{\mathbb {R}}\varphi ^2\left( \ln \frac{1+u'}{1-u'}\right) ^2\mathrm {d}x \\&\qquad -\int _{\mathbb {R}}\varphi ^2u'\left( \ln \frac{1+u'}{1-u'}\right) \mathrm {d}x , \end{aligned}$$

where \(\delta >0\) is fixed below. Also using \(x\ln \frac{1+x}{1-x} \ge x^2\) for \(|x|<1\), we further estimate

$$\begin{aligned} \partial _t\int _{\mathbb {R}}\varphi ^2u^2\mathrm {d}x&\le \frac{4}{\delta R^2}\int _{-2R}^{2R}u^2\mathrm {d}x+ C \delta \int _{\mathbb {R}}\varphi ^2(u')^2\mathrm {d}x -\int _{\mathbb {R}}\varphi ^2(u')^2\mathrm {d}x\\&\le \frac{4}{\delta R^2}\int _{-2R}^{2R}u^2\mathrm {d}x \,, \end{aligned}$$

where C depends on \(\sup |u'_t|\le 1-\varepsilon \) and we fix \(\delta \in (0,C^{-1})\). Now, define

$$\begin{aligned} A(t,R)=\sup _{y\in \mathbb {R}}\int _{\mathbb {R}}u^2\varphi _{R,y}^2\mathrm {d}x \,, \end{aligned}$$

where \(\varphi _{R,y}(x)=\psi ((x-y)/R)\). Note that A(t,R) is bounded because u is. After integration in time, we get

$$\begin{aligned} \int _{\mathbb {R}}\varphi _{R,y}^2u_t^2\mathrm {d}x&\le \int _{\mathbb {R}}\varphi _{R,y}^2u_0^2\mathrm {d}x+\frac{4}{\delta R^2}\int _0^t \int _{-2R}^{2R}u_s^2\mathrm {d}x \mathrm {d}s \\&\le A(0,R) +\frac{8}{\delta R^2}\int _0^tA(s,R)\mathrm {d}s \end{aligned}$$

and taking the supremum over y on the left-hand side yields

$$\begin{aligned} A(t,R)\le A(0,R)+\frac{8}{\delta R^2}\int _0^tA(s,R)\mathrm {d}s \end{aligned}$$

and by the Gronwall inequality, we obtain

$$\begin{aligned} A(t,R)\le A(0,R)\exp \left( \frac{8t}{\delta R^2}\right) \end{aligned}$$

and the result follows by letting \(R\rightarrow \infty \). \({\square }\)
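
For completeness, the elementary inequality \(x\ln \frac{1+x}{1-x}\ge x^2\) used above follows from the power series of the logarithm: for \(|x|<1\),

$$\begin{aligned} x\ln \frac{1+x}{1-x} = 2\sum _{k=0}^{\infty }\frac{x^{2k+2}}{2k+1} \ge 2x^2 \ge x^2 \,. \end{aligned}$$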

Now we extend the cutoff argument to get a uniform estimate of the \(H^1\)-norm.

Lemma 8.5

Let \(u:\mathbb {R}\times (0,T)\rightarrow \mathbb {R}\) be a smooth solution of (8.3) with uniformly spacelike initial data \(u_0\in C^{0,1}(\mathbb {R})\cap L^2(\mathbb {R})\). Then \(u_t\in H^1(\mathbb {R})\) and

$$\begin{aligned} \left\| u_t\right\| _{L^2}^2+t \left\| u_t'\right\| _{L^2}^2\le \left\| u_0\right\| _{L^2}^2 \end{aligned}$$

for all \(t>0\).

Proof

Let \(\varphi \) be as in the proof of Lemma 8.4. Then

$$\begin{aligned}&\partial _t \left( t \int _{\mathbb {R}}\varphi ^2(u')^2\mathrm {d}x\right) =\int _{\mathbb {R}}\varphi ^2(u')^2\mathrm {d}x+2t\int _{\mathbb {R}}\varphi ^2(u')(\partial _t u)'\mathrm {d}x\\&\quad =\int _{\mathbb {R}}\varphi ^2(u')^2\mathrm {d}x+2t\int _{\mathbb {R}}\varphi ^2u'\left( \frac{u''}{1-(u')^2}\right) '\mathrm {d}x\\&\quad =\int _{\mathbb {R}}\varphi ^2(u')^2\mathrm {d}x -2t\int _{\mathbb {R}}\varphi ^2\frac{(u'')^2}{1-(u')^2}\mathrm {d}x-4t\int _{\mathbb {R}}\varphi '\varphi u'\left( \frac{u''}{1-(u')^2}\right) \mathrm {d}x\\&\quad \le \int _{\mathbb {R}}\varphi ^2(u')^2\mathrm {d}x -2t\int _{\mathbb {R}}\varphi ^2\frac{(u'')^2}{1-(u')^2}\mathrm {d}x\\&\qquad +\frac{2t}{\delta }\int _{\mathbb {R}}(\varphi ')^2(u')^2\mathrm {d}x+2t\delta \int _{\mathbb {R}}\varphi ^2\frac{(u'')^2}{(1-(u')^2)^2}\mathrm {d}x\\&\quad \le \int _{\mathbb {R}}\varphi ^2(u')^2\mathrm {d}x+\frac{2t}{\delta }\int _{\mathbb {R}}(\varphi ')^2(u')^2\mathrm {d}x -2t(1-C\delta )\int _{\mathbb {R}}\varphi ^2\frac{(u'')^2}{1-(u')^2}\mathrm {d}x\\&\quad \le \int _{\mathbb {R}}\varphi ^2(u')^2\mathrm {d}x+\frac{8t}{R^2\delta }\int _{-2R}^{2R}(u')^2\mathrm {d}x -2t(1-C\delta )\int _{\mathbb {R}}\varphi ^2\frac{(u'')^2}{1-(u')^2}\mathrm {d}x \,, \end{aligned}$$

where C depends on \(\sup |u'_t|<1\). Now, pick \(\alpha \in (0,1)\). Then, combining the above estimate with the one from Lemma 8.4, we obtain for any \(\delta \in (0,(1-\alpha )C^{-1})\)

$$\begin{aligned}&\partial _t\left( \int _{\mathbb {R}}\varphi ^2u^2\mathrm {d}x+\alpha t \int _{\mathbb {R}}\varphi ^2(u')^2\mathrm {d}x\right) \le \frac{4}{\delta R^2}\int _{-2R}^{2R}u^2\mathrm {d}x+\frac{8\alpha t}{R^2\delta }\int _{-2R}^{2R}(u')^2\mathrm {d}x\\&\qquad -(1-\alpha -C\delta )\int _{\mathbb {R}}\varphi ^2(u')^2\mathrm {d}x-2t(1-C\delta ) \alpha \int _{\mathbb {R}}\varphi ^2\frac{(u'')^2}{1-(u')^2}\mathrm {d}x\\&\quad \le \frac{4}{\delta R^2}\int _{-2R}^{2R}u^2\mathrm {d}x+\frac{8\alpha t}{R^2\delta }\int _{-2R}^{2R}(u')^2\mathrm {d}x \,. \end{aligned}$$

Analogously to the above, we define

$$\begin{aligned} A(t,R,\alpha )=\sup _{y\in \mathbb {R}}\left( \int _{\mathbb {R}}u^2\varphi _{R,y}^2\mathrm {d}x+\alpha t\int _{\mathbb {R}}(u')^2\varphi _{R,y}^2\mathrm {d}x\right) \,, \end{aligned}$$

where \(\varphi _{R,y}(x)\) is as in the proof of the previous lemma. After integration in time, we get

$$\begin{aligned}&\int _{\mathbb {R}}\varphi _{R,y}^2u_t^2\mathrm {d}x+\alpha t\int _{\mathbb {R}}\varphi _{R,y}^2(u'_t)^2\mathrm {d}x \\&\quad \le \int _{\mathbb {R}}\varphi _{R,y}^2u_0^2\mathrm {d}x +\frac{4}{\delta R^2}\int _0^t \int _{-2R}^{2R}u_s^2\mathrm {d}x\mathrm {d}s + \frac{8\alpha }{R^2\delta }\int _0^ts\int _{-2R}^{2R}(u'_s)^2\mathrm {d}x\mathrm {d}s \\&\quad \le A(0,R,\alpha )+\frac{16}{\delta R^2}\int _0^tA(s,R,\alpha )\mathrm {d}s \end{aligned}$$

and taking the supremum over y on the left-hand side yields

$$\begin{aligned} A(t,R,\alpha )\le A(0,R,\alpha )+\frac{16}{\delta R^2}\int _0^tA(s,R,\alpha )\mathrm {d}s. \end{aligned}$$

By the Gronwall inequality, we obtain

$$\begin{aligned} A(t,R,\alpha )\le A(0,R,\alpha )\exp \left( \frac{16t}{\delta R^2}\right) \end{aligned}$$

and the result follows by letting \(R\rightarrow \infty \) and then \(\alpha \rightarrow 1\). \({\square }\)

Proposition 8.6

Let \(u:\mathbb {R}\times (0,T)\rightarrow \mathbb {R}\) be a smooth solution of (8.3) with uniformly spacelike initial data \(u_0\in C^{0,1}(\mathbb {R})\cap L^2(\mathbb {R})\). Then \(u_t\rightarrow 0\) for \(t\rightarrow \infty \), uniformly in all derivatives. More precisely,

$$\begin{aligned} \left\| u^{(k)}\right\| _{L^{\infty }}\le C_k\cdot t^{-\frac{1}{4}} \end{aligned}$$

for some constants \(C_k\ge 0\) and for all \(t>0\).

Proof

By the one-dimensional Gagliardo–Nirenberg inequality together with Lemmas 8.4 and 8.5, we obtain

$$\begin{aligned} \left\| u\right\| _{L^{\infty }}\le C\left\| u'\right\| _{L^2}^{\frac{1}{2}}\left\| u\right\| _{L^2}^{\frac{1}{2}} = Ct^{-\frac{1}{4}}(t^{\frac{1}{2}}\left\| u'\right\| _{L^2})^{\frac{1}{2}}\left\| u\right\| _{L^2}^{\frac{1}{2}} \le Ct^{-\frac{1}{4}}\,. \end{aligned}$$

For higher derivatives, the assertion follows from short-time derivative estimates of the form

$$\begin{aligned} \left\| u^{(k)}_{t+1}\right\| _{L^{\infty }}\le C_k\cdot \left\| u_t\right\| _{L^{\infty }} \,, \end{aligned}$$

which finishes the proof. \({\square }\)
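
For the reader's convenience, the form of the Gagliardo–Nirenberg inequality used here follows from the fundamental theorem of calculus and the Cauchy–Schwarz inequality: for \(u\in H^1(\mathbb {R})\),

$$\begin{aligned} u(x)^2 = 2\int _{-\infty }^{x} u\, u' \,\mathrm {d}y \le 2\left\| u\right\| _{L^2}\left\| u'\right\| _{L^2} \,, \end{aligned}$$

so that \(\left\| u\right\| _{L^{\infty }}\le \sqrt{2}\,\left\| u'\right\| _{L^2}^{\frac{1}{2}}\left\| u\right\| _{L^2}^{\frac{1}{2}}\).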

Remark 8.7

The convergence rate in Proposition 8.6 is the same as one gets for solutions of the 1-dimensional heat equation with initial data in \(L^2\). This can be seen as follows: If \(u_0\in L^2(\mathbb {R})\), the solution of the heat equation with initial data \(u_0\) is given by

$$\begin{aligned} u(x,t)=\int _{\mathbb {R}}K(y-x,t)u_0(y)\mathrm {d}y\,,\qquad K(x,t)=(4\pi t)^{-1/2}\cdot e^{-\frac{x^2}{4t}}\,, \end{aligned}$$

where K is the 1-dimensional heat kernel. A direct computation proves that the inequality \(\left\| K(\cdot ,t)\right\| _{L^2}\le C\cdot t^{-1/4}\) holds. Finally, Young’s inequality for convolutions implies that

$$\begin{aligned} \left\| u(\cdot ,t)\right\| _{L^{\infty }}\le \left\| K(\cdot ,t)\right\| _{L^2}\left\| u_0\right\| _{L^2}\le C\cdot t^{-1/4}\cdot \left\| u_0\right\| _{L^2}\,, \end{aligned}$$

which is the estimate we claimed.
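
Explicitly, the Gaussian integral \(\int _{\mathbb {R}}\mathrm {e}^{-\frac{x^2}{2t}}\,\mathrm {d}x=\sqrt{2\pi t}\) gives

$$\begin{aligned} \left\| K(\cdot ,t)\right\| _{L^2}^2 = (4\pi t)^{-1}\int _{\mathbb {R}}\mathrm {e}^{-\frac{x^2}{2t}}\,\mathrm {d}x = (8\pi t)^{-\frac{1}{2}} \,, \end{aligned}$$

that is, \(\left\| K(\cdot ,t)\right\| _{L^2}=(8\pi t)^{-\frac{1}{4}}\).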

Theorem 8.8

Let \(u:\mathbb {R}\times (0,T)\rightarrow \mathbb {R}\) be a smooth, uniformly spacelike solution of (8.3) with uniformly spacelike initial data \(u_0\in C^{0,1}(\mathbb {R})\) such that \(u_0(x)\rightarrow 0\) as \(|x|\rightarrow \infty \). Then \(u_t\rightarrow 0\) as \(t\rightarrow \infty \), uniformly in all derivatives. More precisely, there exist functions \(v_k(t)\) with \(v_k(t)\rightarrow 0\) as \(t\rightarrow \infty \) such that

$$\begin{aligned} \left\| u^{(k)}\right\| _{L^{\infty }}\le v_k(t) \end{aligned}$$

for all \(t>0\).

Proof

For each \(\varepsilon >0\), pick a compactly supported, smooth, uniformly spacelike function \(v_{0,\varepsilon }\) such that

$$\begin{aligned} -v_{0,\varepsilon }-\frac{\varepsilon }{2}\le u_0\le v_{0,\varepsilon }+\frac{\varepsilon }{2}\,. \end{aligned}$$

Let \(v_{\varepsilon }\) denote the solution of (8.3) with initial data \(v_{0,\varepsilon }\). Since the equation is invariant under \(u\mapsto -u\) and \(u\mapsto u+\mathrm {const}\), the maximum principle yields

$$\begin{aligned} -v_{\varepsilon }-\frac{\varepsilon }{2}\le u\le v_{\varepsilon }+\frac{\varepsilon }{2} \end{aligned}$$

for all times. Since \(v_{\varepsilon }(\cdot ,t)\rightarrow 0\) uniformly as \(t\rightarrow \infty \) by Proposition 8.6, we conclude that \(|u|\le \varepsilon \) for all \(t\ge T(\varepsilon )\) and some large time \(T(\varepsilon )\), and the assertion for u follows for \(k=0\). For the higher derivatives, the claim follows from standard estimates. \({\square }\)