Abstract
In this paper we consider a fractional analogue of the Monge–Ampère operator. Our operator is a concave envelope of fractional linear operators of the form \( \inf _{A\in \mathcal {A}}L_Au, \) where the set of operators is a degenerate class that corresponds to all affine transformations of determinant one of a given multiple of the fractional Laplacian. We set up a relatively simple framework of global solutions prescribing data at infinity and global barriers. In our key estimate, we show that the operator remains strictly elliptic, which allows us to apply known regularity results for uniformly elliptic operators and deduce that solutions are classical.
Introduction
The classical Monge–Ampère equation \(\det D^2u=f\) arises in many areas of analysis, geometry, and applied mathematics. Standard boundary value problems are the Dirichlet problem and the optimal transportation problem, where one prescribes the image of a domain by the gradient map.
In the Dirichlet problem, one prescribes a smooth domain \(\Omega \subset \mathbb {R}^n\), boundary data g(x) on \(\partial \Omega \), and a right-hand side f(x) in \(\Omega \), and studies existence and regularity of a function u such that
For the problem to fit in the framework of the theory of fully nonlinear elliptic equations, one must seek convex solutions u to ensure that \(\det D^2 u\) is indeed a monotone function of \(D^2u\). Thus, we must require f(x) to be positive and \(\Omega \) to be convex. The convexity of \(\Omega \) is required in order to construct appropriate smooth subsolutions that act as lower barriers, see [2].
In that case, there is considerable work in the literature establishing the existence, uniqueness and regularity of solutions to (1.1), see [1, 2, 8, 10] and the references therein. The main ingredients entering the theory are, roughly speaking, the following:
-
(a)
The Monge–Ampère equation is a concave fully nonlinear equation. For a convex solution
$$\begin{aligned} \det D^2u=f \end{aligned}$$is equivalent to
$$\begin{aligned} \inf _{L\in \mathcal {A}}Lu=f, \end{aligned}$$where \(\mathcal {A}\) is the family of linear operators \(Lu=\mathrm{trace}\left( AD^2u\right) \) for \(A>0\) with eigenvalues \(\lambda _j(A)\) that satisfy \(\prod _{j=1}^{n} \lambda _j(A)=n^{-n}f^{n-1}\). Furthermore, if we take nA equal to the matrix of cofactors of \(D^2u\), then
$$\begin{aligned} n^n\prod _{j=1}^{n} \lambda _j(A)=(\det D^2u)^{n-1}=f^{n-1}, \end{aligned}$$the infimum is realized and the equation satisfied. Moreover, from the concavity of \(\det ^{1/n}(\cdot )\) any other choice of the eigenvalues would give a larger value than the prescribed f, making u a subsolution.
In other words, the Monge–Ampère equation can be thought of as the infimum of a family of linear operators that consists of all affine transformations of determinant one of a given multiple of the Laplacian.
-
(b)
The fact that \(\det D^2u\) can be represented as a concave fully nonlinear equation implies that pure second derivatives are subsolutions of an equation with bounded measurable coefficients and as such, are bounded from above. Indeed, if we consider the second-order incremental quotient in the direction \(e\in \partial B_1(0)\),
$$\begin{aligned} \delta (u,x_0,he)=u(x_0+he)+u(x_0-he)-2u(x_0) \end{aligned}$$and choose
$$\begin{aligned} Lv=\mathrm{trace}\left( AD^2v\right) \end{aligned}$$with nA the matrix of cofactors of \(D^2u(x_0)\), we have that \(Lu(x_0)=f(x_0)\) while on the other hand, the matrix
$$\begin{aligned} B=\left[ \frac{f(x_0+he)}{f(x_0)}\right] ^\frac{n-1}{n}A \end{aligned}$$satisfies
$$\begin{aligned} \det B=n^{-n}f(x_0+he)^{n-1}, \end{aligned}$$which makes it eligible to compete for the minimum of \(\mathrm{trace}\left( ND^2u(x_0+he)\right) \). This implies,
$$\begin{aligned} \left[ \frac{f(x_0+he)}{f(x_0)}\right] ^\frac{n-1}{n}Lu(x_0+he)\ge f(x_0+he) \end{aligned}$$or equivalently,
$$\begin{aligned} Lu(x_0+he)\ge f(x_0+he)^\frac{1}{n}f(x_0)^\frac{n-1}{n}. \end{aligned}$$We deduce that at a maximum of a second derivative \(D_{ee}^2u\) the function f must satisfy,
$$\begin{aligned} f(x_0)^\frac{n-1}{n} D_{ee}^2\,f^{1/n}(x_0)\le 0. \end{aligned}$$If \(D_{ee}^2f\) is bounded and we have an appropriate barrier, plus control of the second derivatives of u at the boundary of \(\Omega \), we deduce that u is not only convex but also semiconcave. For that purpose, the boundary and data must be smooth and the domain strictly convex. This allows for the construction of appropriate subsolutions as barriers.
-
(c)
Then, the last ingredient of the theory is that, for a convex solution, the equation \(\prod _{j=1}^{n}\lambda _j=f\) with f strictly positive implies that all \(\lambda _j\) are strictly positive (and not merely non-negative). This implies that the operators involved in the minimization can be restricted to a uniformly elliptic family, and the corresponding general theory applies. In particular, the Evans–Krylov theorem implies that solutions are \(\mathcal {C}^{2,\alpha }\) and, from there, two derivatives smoother than f.
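The representation in (a) rests on the classical matrix identity \(n\det (M)^{1/n}=\inf \{\mathrm{trace}(AM):\ A>0,\ \det A=1\}\), with the infimum attained at \(A=\det (M)^{1/n}M^{-1}\). The following snippet is an illustrative numerical check of this identity (not part of the paper; the dimension and random seed are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4

# Random symmetric positive definite matrix playing the role of D^2 u.
G = rng.standard_normal((n, n))
M = G @ G.T + n * np.eye(n)

# Claimed value of the infimum: n * det(M)^(1/n).
target = n * np.linalg.det(M) ** (1.0 / n)

# Minimizer: A* = det(M)^(1/n) M^(-1), which has det A* = 1.
A_star = np.linalg.det(M) ** (1.0 / n) * np.linalg.inv(M)
attained = np.trace(A_star @ M)

# Random competitors with det A = 1 never beat the target (AM-GM on the
# eigenvalues of A^(1/2) M A^(1/2)).
gap_min = np.inf
for _ in range(100):
    H = rng.standard_normal((n, n))
    A = H @ H.T + n * np.eye(n)
    A /= np.linalg.det(A) ** (1.0 / n)   # normalize to det A = 1
    gap_min = min(gap_min, np.trace(A @ M) - target)
```

The same computation, rescaled so that \(\det A=n^{-n}f^{n-1}\), produces the family \(\mathcal {A}\) described above.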
The discussion above suggests that one could carry out a similar program for a non-local or fractional Monge–Ampère equation of the form
where the set of operators \(L_A\) corresponds to that of all affine transformations of determinant one of a given multiple of the fractional Laplacian. In fact, as in [2], one may write any concave function of the Hessian as an infimum of affine transformations of the Laplacian (the affine transformations now corresponding to the different linearization coefficients of the function \(F(\lambda _1,\ldots ,\lambda _n)\)) and consider the corresponding nonlocal operator.
One can take \(\inf _{A\in \mathcal {A}} L_Au=f\), where \(\mathcal {A}\) corresponds to a family of symmetric positive matrices with determinant bounded from above and below,
and
The kernel under consideration need not be \(|A^{-1}x|^{-(n+2s)}\); it could be a more general kernel K(Ax). Note also that the geometry of the domain is an important issue for the “inherited from the boundary” regularity theory for degenerate operators depending on the eigenvalues of the Hessian, see [2].
In this article we shall set up a relatively simple framework of global solutions prescribing data at infinity and global barriers, to avoid having to deal with the technical issues inherited from boundary data, which are rather complex for non-local equations. As in the second-order case, we intend to prove:
-
(a)
Existence of solutions.
-
(b)
Solutions are semiconcave, i.e. second derivatives are bounded from above.
-
(c)
Along each line, the fractional Laplacian is bounded from above and strictly positive.
-
(d)
The operators that are close to the infimum remain strictly elliptic.
-
(e)
The non-local fully nonlinear theory developed in [3, 4] applies, in particular the nonlocal Evans–Krylov theorem, and solutions are “classical”.
To be more precise, let us introduce the non-local Monge–Ampère operator \(\mathcal {D}_s\) that we are going to consider in the sequel, given by
We shall always use the definition that is most suitable to each case. Let us mention that if u is convex, asymptotically linear, and \(1/2<s<1\), then
up to a constant factor that depends only on the dimension n (see Appendix A for a proof of this fact).
Another recent attempt to approach nonlocal Monge–Ampère operators is the operator proposed in [5]. The interested reader should also check [7].
Remark 1.1
We can assume without loss of generality that the matrices A in the definition of \(\mathcal {D}_s u(x)\) are symmetric and positive definite. This follows from the (unique) polar decomposition of \(A^{-1}\), namely \(A^{-1} = OS^{-1}\), where O is orthogonal and \(S^{-1}\) is a positive definite symmetric matrix.
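The reduction in Remark 1.1 can be illustrated numerically: the polar decomposition \(A^{-1}=OS^{-1}\) is computable from an SVD, and \(|A^{-1}x|=|S^{-1}x|\) since O is orthogonal, so only the symmetric factor enters the kernel \(|A^{-1}x|^{-(n+2s)}\). A minimal sketch (illustrative; the matrix and seed are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3

# A generic invertible matrix standing in for A^{-1}.
A_inv = rng.standard_normal((n, n)) + n * np.eye(n)

# Polar decomposition via SVD: A^{-1} = U diag(s) V^t = (U V^t)(V diag(s) V^t).
U, sig, Vt = np.linalg.svd(A_inv)
O = U @ Vt                        # orthogonal factor
S_inv = Vt.T @ np.diag(sig) @ Vt  # symmetric positive definite factor

x = rng.standard_normal(n)
err_factorization = np.linalg.norm(A_inv - O @ S_inv)
err_orthogonality = np.linalg.norm(O.T @ O - np.eye(n))
# |A^{-1} x| = |S^{-1} x|: the orthogonal factor does not change the kernel.
err_kernel = abs(np.linalg.norm(A_inv @ x) - np.linalg.norm(S_inv @ x))
```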
We shall study the following Dirichlet problem,
where \(1/2<s<1\) and we prescribe boundary data at infinity \(\phi (x)\) (that, at the same time, acts as a smooth lower barrier). The results below can be extended to the problem
under appropriate assumptions on g (see [9] for a local analogue of problem (1.4)). Let us now describe the precise hypotheses that we shall require on g and \(\phi \).
First, \(\phi \in \mathcal {C}^{2,\alpha }(\mathbb {R}^n)\) is strictly convex in compact sets and \(\phi =\Gamma +\eta \) near infinity, with \(\Gamma (x)\) a cone and
for some constants \(a>0\) and \(0<\epsilon <n\). In particular, as \(|x|\rightarrow \infty \),
(see Section 2 for the definition of the fractional Laplacian) and
from homogeneity, where \(c_1,c_2\) are some positive constants depending on the strict convexity of the section of \(\Gamma \). We normalize \(\phi \) so that \(\phi (0)=0\), \(\nabla \phi (0)=0\).
The model problem that we consider is \(g(x,u(x))=u(x)- \phi (x)\). On the other hand, the general hypotheses on \(g: \mathbb {R}^{n+1} \rightarrow \mathbb {R}\) that we shall consider are:
and, there exists \(\mu >0\) such that
We would like to point out that hypothesis (1.5) implies that the function g is locally Lipschitz continuous (see for instance [6, Proposition 2.1.7]). In particular,
for any \(R>0\). Therefore, hypothesis (1.6) could be replaced, for instance, by the following
and
In the sequel, we shall assume (1.6) for simplicity.
The paper is organized as follows. In Section 2 we present the notation, the notion of solution, and some preliminary results. In Section 3 we prove the main result of the paper, namely, that matrices that are too degenerate do not count for the infimum in (1.2), effectively proving that the fractional Monge–Ampère operator is locally uniformly elliptic and thus the known theory for uniformly elliptic nonlocal operators applies (see for instance [3] and the references therein). In Section 4 we prove a comparison principle for problem (1.4), and in Section 5 we prove Lipschitz continuity and semiconcavity of solutions to problem (1.4). Finally, in Section 6 we prove existence of solutions to the model problem (1.3).
Notation and Preliminaries
In this section we are going to state notations and recall some basic results and definitions.
For square matrices, \(A>0\) means positive definite and \(A\ge 0\) positive semidefinite. We shall denote \(\lambda _i(A)\) the eigenvalues of A, in particular \(\lambda _{\min }(A)\) and \(\lambda _{\max }(A)\) are the smallest and largest eigenvalues, respectively.
We shall denote the k-dimensional ball of radius 1 and center 0 by \(B_1^{k}(0)=\{x\in \mathbb {R}^k:\ |x|\le 1\}\) and the corresponding \((k-1)\)-dimensional sphere by \(\partial B_1^{k}(0)=\{x\in \mathbb {R}^k:\ |x|=1\}\). Whenever k is clear from context, we shall simply write \(B_1(0)\) and \(\partial B_1(0)\). \(\mathcal {H}^{k}\) stands for the k-dimensional Hausdorff measure. We shall denote \(\omega _{k}= \mathcal {H}^{k-1}\big (\partial B_1^{k}(0)\big )=k\,|B_1^{k}(0)|=\frac{2\pi ^{k/2}}{\Gamma (k/2)}\).
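The identity \(\omega _k=k\,|B_1^k(0)|=\frac{2\pi ^{k/2}}{\Gamma (k/2)}\) follows from the volume formula \(|B_1^k(0)|=\pi ^{k/2}/\Gamma (k/2+1)\) together with \(\Gamma (k/2+1)=\tfrac{k}{2}\Gamma (k/2)\). A quick numerical confirmation (illustrative only):

```python
import math

def omega(k):
    # omega_k = 2 * pi^(k/2) / Gamma(k/2): surface area of the unit sphere.
    return 2.0 * math.pi ** (k / 2.0) / math.gamma(k / 2.0)

def ball_volume(k):
    # |B_1^k(0)| = pi^(k/2) / Gamma(k/2 + 1): volume of the unit ball.
    return math.pi ** (k / 2.0) / math.gamma(k / 2.0 + 1.0)

# Check omega_k = k * |B_1^k(0)| for a range of dimensions.
max_error = max(abs(omega(k) - k * ball_volume(k)) for k in range(1, 9))
```

For instance, \(\omega _2=2\pi \) (circumference of the unit circle) and \(\omega _3=4\pi \).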
Given a function u, we shall denote the second-order increment of u at x in the direction of y as \(\delta (u,x,y)=u(x+y)+u(x-y)-2u(x)\).
Let \(A\subset \mathbb {R}^n\) be an open set. We say that a function \(u:A \rightarrow \mathbb {R}\) is semiconcave if it is continuous in A and there exists \(C \ge 0\) such that \(\delta (u,x,y) \le C|y|^2\) for all \(x, y\in \mathbb {R}^n\) such that \([x-y, x +y] \subset A\). The constant C is called a semiconcavity constant for u in A.
Alternatively, a function u is semiconcave in A with constant C if \(u(x)- \frac{C}{2}|x|^2\) is concave in A. Geometrically, this means that the graph of u can be touched from above at every point by a paraboloid of the type \(a+\langle b,x\rangle +\frac{C}{2}|x|^2\).
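The equivalence of the two definitions can be sanity-checked numerically on a concrete example, say \(u(x)=\sin x\) on the real line, which is semiconcave with constant \(C=1\) since \(u''=-\sin \le 1\) (an illustrative check, not part of the paper):

```python
import numpy as np

C = 1.0
u = np.sin  # semiconcave with constant C = 1 since u'' = -sin <= 1

xs = np.linspace(-10.0, 10.0, 201)
ys = np.linspace(-3.0, 3.0, 121)

# Increment definition: delta(u, x, y) <= C |y|^2 for all x, y.
delta_excess = max(
    float(u(x + y) + u(x - y) - 2.0 * u(x) - C * y * y)
    for x in xs for y in ys
)

# Paraboloid definition: w(x) = u(x) - (C/2) x^2 is concave, so its
# second increments are non-positive.
w = lambda x: u(x) - 0.5 * C * x * x
h = 0.05
concavity_excess = max(float(w(x + h) + w(x - h) - 2.0 * w(x)) for x in xs)
```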
A function u is called semiconvex in A if \(-u\) is semiconcave.
Let us mention here for the reader’s convenience the definition of the fractional Laplacian ,
where \(c_{n,s}\) is a normalization constant. Notice that \(-c_{n,s}^{-1}\,(- \Delta )^{s}u(x)\) belongs to the class of operators over which the infimum in the definition of \(\mathcal {D}_s u(x)\) is taken.
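Two features of this increment form are worth noting: the second increment \(\delta (u,x,y)\) vanishes identically for affine functions, and for \(\mathcal {C}^2\) functions it is \(O(|y|^2)\) near \(y=0\), which makes the integrand integrable at the origin when \(s<1\). An illustrative one-dimensional check (the sample points and functions are arbitrary choices):

```python
import numpy as np

x0 = 0.3
ys = np.linspace(-1.0, 1.0, 401)

# Affine functions have identically zero second increments, so planes do
# not contribute to the integral defining (-Delta)^s.
affine = lambda x: 3.0 * x - 7.0
affine_increment = max(
    abs(affine(x0 + y) + affine(x0 - y) - 2.0 * affine(x0)) for y in ys
)

# For u = cos, delta(u, x0, y) = -2 cos(x0) (1 - cos y), so
# |delta(u, x0, y)| <= |y|^2 and the integrand is O(|y|^(1-2s)) at 0.
ratio = max(
    abs(np.cos(x0 + y) + np.cos(x0 - y) - 2.0 * np.cos(x0)) / (y * y)
    for y in ys if y != 0.0
)
```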
We recall from [3] the notion of viscosity solution that we are going to use in the sequel.
Definition 2.1
A function \(u :\mathbb {R}^n \rightarrow \mathbb {R}\), upper (resp. lower) semicontinuous in \(\overline{\Omega }\), is said to be a subsolution (resp. supersolution) to \(\mathcal {D}_su = f\), and we write \(\mathcal {D}_su\ge f\) (resp. \(\mathcal {D}_su \le f\)), if whenever all of the following hold,
-
x is a point in \(\Omega \),
-
N is an open neighborhood of x in \(\Omega \),
-
\(\psi \) is some \(C^2\) function in \(\overline{N}\),
-
\(\psi (x) = u(x)\),
-
\(\psi (y) > u(y)\) (resp. \(\psi (y) < u(y)\)) for every \(y \in N \setminus \{x\}\),
and if we let
then we have \(\mathcal {D}_sv(x) \ge f(x)\) (resp. \(\mathcal {D}_sv(x) \le f(x)\)). A solution is a function u that is both a subsolution and a supersolution.
The following lemma states that \(\mathcal {D}_su\) can be evaluated classically at those points x where u can be touched by a paraboloid.
Lemma 2.2
Let \(1/2<s<1\) and \(u:\mathbb {R}^n\rightarrow \mathbb {R}\) with asymptotically linear growth. If we have \(\mathcal {D}_{s}u\ge f\) in \(\mathbb {R}^n\) (resp. \(\mathcal {D}_{s}u\le f\)) in the viscosity sense and \(\psi \) is a \(\mathcal {C}^2\) function that touches u from above (below) at a point x, then \(\mathcal {D}_s u(x)\) is defined in the classical sense and \(\mathcal {D}_s u(x)\ge f(x)\) (resp. \(\mathcal {D}_{s}u(x)\le f(x)\)).
Proof
Let us deal first with the subsolution case, that is, assume first that \(\psi \in \mathcal {C}^2\) touches u from above at a point x. Define for \(r>0\),
Then, we have that
and then the arguments in the proof of [3, Lemma 3.3] yield that \(\delta (u,x,y)/|y|^{n+2s}\) is integrable. Therefore, \(-(- \Delta )^su(x)\) is defined in the classical sense and \(\mathcal {D}_s u(x)<+\infty \). Notice that,
Thus, \(\delta (u,x,y)/|A^{-1}y|^{n+2s}\) is integrable and
is also defined in the classical sense. By definition of viscosity solution, we have
But then, \(0\le \delta (v_r-u,x,y)\le \delta (v_{r_0}-u,x,y)\) for all \(r<r_0\), \(\delta (v_{r_0}-u,x,y)/|A^{-1}y|^{n+2s}\) is integrable and \(\delta (v_r-u,x,y)\rightarrow 0\) as \(r\rightarrow 0\). Hence, by the dominated convergence theorem, \(L_A (v_r-u)(x)\rightarrow 0\) as \(r\rightarrow 0\). We conclude \(L_A u(x)\ge f(x)\) in the classical sense. Since the matrix A is arbitrary and we could pick any matrix \(A>0\) with \(\det A=1\), we have that \(\mathcal {D}_s u(x)\ge f(x)\) in the classical sense.
In the supersolution case, that is, when \(\psi \in \mathcal {C}^2\) touches u from below at x, some modifications are required. Fix \(\epsilon >0\), arbitrary, and let \(A_\epsilon >0\) with \(\det A_\epsilon =1\) such that
It is easy to see that \(\delta (v_r,x,y)\) is non-decreasing in r and \(\delta (v_r,x,y)\rightarrow \delta (u,x,y)\) as \(r\rightarrow 0\). By the monotone convergence theorem, \(\delta (u,x,y)/|A_{\epsilon }^{-1}y|^{n+2s}\) is integrable and \(L_{A_\epsilon } u(x)\le f(x)+\epsilon \) in the classical sense. We find that
and we conclude letting \(\epsilon \rightarrow 0\), since it is arbitrary. \(\square \)
Local Uniform Ellipticity of the Fractional Monge–Ampère Equation
In this section we shall prove that the infimum in the definition of \(\mathcal {D}_s\), see (1.2), cannot be realized by matrices that are too degenerate, effectively proving that the fractional Monge–Ampère operator is locally uniformly elliptic. Then, existing theory for uniformly elliptic operators is available (see [3, 4] and the references therein).
To this aim, consider the following approximating, non-degenerate operator,
Let us point out that the conditions \(\det A=1\) and \(\lambda _{\min }(A)\ge \theta \) imply \(\lambda _{\max }(A)\le \theta ^{1-n}\), and this bound is realized by matrices with eigenvalue \(\theta \) (multiplicity \(n-1\)) and eigenvalue \(\theta ^{1-n}\) (simple). Therefore, \(\mathcal {D}_s^\theta \) belongs to the class of uniformly elliptic, nonlocal operators with extremal Pucci operators
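A quick numerical check of this eigenvalue bound (illustrative; the dimension, \(\theta \), and seed are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(2)
n, theta = 4, 0.3
bound = theta ** (1 - n)   # claimed bound on lambda_max

# Extremal matrix: eigenvalue theta with multiplicity n-1 and one
# eigenvalue theta^(1-n); its determinant is 1 and the bound is attained.
extremal = np.diag([theta] * (n - 1) + [bound])
det_extremal = np.linalg.det(extremal)

# Random det-1 matrices with lambda_min >= theta never exceed the bound,
# since lambda_max = 1 / (product of the other eigenvalues) <= theta^(1-n).
excess = 0.0
for _ in range(200):
    G = rng.standard_normal((n, n))
    A = G @ G.T + 0.1 * np.eye(n)
    A /= np.linalg.det(A) ** (1.0 / n)     # normalize to det A = 1
    lam = np.linalg.eigvalsh(A)
    if lam[0] >= theta:
        excess = max(excess, lam[-1] - bound)
```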
and
Observe that in general \(\mathcal {M}_{\theta ,\theta ^{1-n}}^{-} u(x)< \mathcal {D}_s^\theta u(x),\) as the class of matrices over which the infimum is taken is broader for the Pucci operator.
The main result of this section and of the paper is the following.
Theorem 3.1
Consider \(\frac{1}{2}<s<1\) and let u be Lipschitz continuous and semiconcave (with constants L and C respectively) and such that
in the viscosity sense for some constant \(\eta _0>0\) and \(\Omega \subset \mathbb {R}^n\). Then,
in the classical sense, for \(\mathcal {D}_s^\theta \) the approximating operator defined by (3.1) and
with \(\mu _0,\mu _1\) defined in (3.8) and (3.9) below.
Remark 3.2
(Limits as \(s\rightarrow 1\)). It can be checked that
as \(s\rightarrow 1\). In particular, Theorem 3.1 is stable in the limit as \(s\rightarrow 1\).
It is illustrative for the sequel to show how the ideas in the proof of Theorem 3.1 work in the local case. More precisely, assume u is semiconcave with constant C and such that
(about the normalization \((4n)^{-1}\omega _n\), recall (3.2) and Lemma A.2). We want to prove that the Monge–Ampère operator is actually non-degenerate, that is,
for some \(\theta >0\). The proof has two steps:
1. The second derivative of u in the direction e is strictly positive and bounded (uniformly) for every direction. More precisely,
for \(\bar{\mu }_0\) independent of e (given by (3.5) below), and C the semiconcavity constant of u. The proof of the upper bound follows from the definition of semiconcavity. For the lower bound, choose \(A=PJP^t\) with J a diagonal matrix with eigenvalues \(\epsilon \) (single) and \(\epsilon ^\frac{1}{1-n}\) (multiplicity \(n-1\)), and P an orthogonal matrix whose first column is e (notice that \(\det (A) =1\)). Then,
by semiconcavity. Choosing \(\epsilon \) small enough, e.g. \(\epsilon =\big (\frac{1}{2}C(n-1)\omega _nn^{-1}\eta _0^{-1}\big )^\frac{n-1}{2}\) we get
which is equivalent to (3.4). For future reference, \(\bar{\mu }_0\) is given by
2. The infimum in the Monge–Ampère operator cannot be achieved for matrices that are too degenerate. More precisely, let A be a matrix with \(\det (A)=1\) and write \(A=PJP^t\) with P orthogonal; then
using that \(1=\det (A)\le \lambda _{\min }(A)\lambda _{\max }(A)^{n-1}\). We conclude that matrices with very small eigenvalues will produce very large operators that will not count for the infimum (see the proof of Theorem 3.1 for details).
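The inequality behind step 2, \(1=\det A\le \lambda _{\min }(A)\lambda _{\max }(A)^{n-1}\), can be confirmed numerically on random matrices normalized to determinant one (an illustrative check, not part of the paper):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5

# For det A = 1: lambda_min * lambda_max^(n-1) >= det A = 1, so a small
# lambda_min forces lambda_max >= lambda_min^(-1/(n-1)) to be large.
slack_min = np.inf
for _ in range(300):
    G = rng.standard_normal((n, n))
    A = G @ G.T + 1e-3 * np.eye(n)
    A /= np.linalg.det(A) ** (1.0 / n)     # normalize to det A = 1
    lam = np.linalg.eigvalsh(A)            # eigenvalues in ascending order
    slack_min = min(slack_min, lam[0] * lam[-1] ** (n - 1) - 1.0)
```

A large \(\lambda _{\max }\), combined with the uniform lower bound on pure second derivatives from step 1, makes \(\mathrm{trace}\left( AD^2u\right) \) large, so such matrices cannot compete for the infimum.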
For simplicity, we shall assume that \(0 \in \Omega \) and then prove (3.3) for \(x=0\). Note for the sequel that since u is semiconcave, Lemma 2.2 implies that \(\mathcal {D}_s u(x)\) is defined in the classical sense for all \(x\in \Omega \) and (3.2) holds pointwise.
The proof of Theorem 3.1 has, again, two parts. In the first part we prove that the (one-dimensional) fractional Laplacian of the restriction of u to any line is positive and bounded from above. Then, in the second part, we shall use this fact to prove that
for \(\mu _0\) given by (3.8). Therefore the infimum in the fractional Monge–Ampère operator cannot be achieved for matrices that are too degenerate.
The two parts we have mentioned correspond to the following two results.
Proposition 3.3
Assume the same hypotheses of Theorem 3.1. Then, for every \(e\in \partial B_1(0)\),
with
for \(C_1,C_2\) defined in (3.12) and (3.13), and
Remark 3.4
Proposition 3.3 yields (3.4) in the limit as \(s\rightarrow 1\) since \(\lim _{s\rightarrow 1}\mu _0=\bar{\mu }_0/2\) (with \(\bar{\mu }_0\) defined by (3.5)), \(\lim _{s\rightarrow 1}\mu _1=C/2\) and
Proposition 3.5
Assume \(\epsilon _1,\ldots ,\epsilon _n\) are positive constants such that \(\prod _{j=1}^n\epsilon _j=1\). Then, under the same hypotheses of Theorem 3.1, we have,
with \(\mu _0\) defined in (3.8).
Remark 3.6
Proposition 3.5 implies (3.7), which yields (3.6) in the limit as \(s\rightarrow 1\) since
Propositions 3.3 and 3.5 (that we prove below) yield the main result of this section, Theorem 3.1.
Proof of Theorem 3.1
Consider a symmetric matrix \(A>0\) with \(\det A=1\) and \(\lambda _{\min }(A)<\frac{1}{k}\). We can write \(A=PJP^t\), and denote \(\tilde{u}(y)=u(Py)\). Observe that then Proposition 3.5 (see also (3.7)) implies
and we get the estimate
Observe that by choosing \(A=I\), Proposition 3.3 yields
Therefore, from (3.10) and (3.11) we have that whenever \(k> \left( n\mu _1\mu _0^{-1} \right) ^ \frac{n-1}{2s}\),
This implies (3.3), since
\(\square \)
The rest of this section is devoted to the proof of Propositions 3.3 and 3.5.
Proof of Propositions 3.3 and 3.5
Our goal is to prove that the (one-dimensional) fractional Laplacian of the restriction of u to any line is positive and bounded from above. In the proof of Proposition 3.3 we need several partial results.
In the sequel, we denote \(\bar{y}=(y_2,\ldots ,y_n)\in \mathbb {R}^{n-1}\) and \(v(y)=u(y)-u(0)\).
Lemma 3.7
Let \(\epsilon >0\) and assume the same hypotheses of Theorem 3.1. Then,
with
for \(\mu _1\) given by (3.9).
Proof
Since u is Lipschitz and semiconcave, we have
A change of variables
yields,
and the result follows noticing that both integrals on the right-hand side are constant. \(\square \)
Lemma 3.8
We have,
where
Proof
A change of variables \(z_1=y_1\), \(z_j=\frac{y_{j}}{\epsilon ^{\frac{n}{n-1}}\,y_{1}}\), \(j=2,\ldots ,n\) yields,
\(\square \)
Lemmas 3.7 and 3.8 allow us to prove that the one-dimensional fractional Laplacian of the restriction \(v(y_1,\bar{0})\) is strictly positive.
Lemma 3.9
Under the same hypotheses of Theorem 3.1, we have
where \(\mu _0\) is given by (3.8).
Proof
From Lemmas 3.7 and 3.8, we have that
Then, by (3.2) and the definition of \(\mathcal {D}_s\) we get
Therefore,
We get the result from this expression by choosing \( \epsilon =\left( \frac{\eta _0}{2C_1}\right) ^\frac{n-1}{2s}. \) \(\square \)
From Lemma 3.9 we can finally prove Proposition 3.3.
Proof of Proposition 3.3
First, we are going to prove that the one-dimensional fractional Laplacian of the restriction of u to any line is bounded above. Indeed, from the Lipschitz continuity and semiconcavity of u,
where \(\mu _1\) is given by (3.9).
Now, fix \(e\in \partial B_1(0)\), and choose P such that e is its first column and the rest of columns complete an orthonormal basis of \(\mathbb {R}^n\). Notice that \(\tilde{u}(x)=u(Px)\) is in the hypotheses of Theorem 3.1. Hence, we can apply Lemma 3.9 to \(\tilde{u}\) and get
but then, \(\tilde{u}(y_1,\bar{0})=\tilde{u}(y_1e_1)=u(y_1Pe_1)=u(y_1e)\) by definition of P. \(\square \)
Next, we provide the proof of Proposition 3.5 that uses Proposition 3.3.
Proof of Proposition 3.5
Our aim is to prove that the infimum in the fractional Monge–Ampère operator is not realized by matrices that are very degenerate. From Proposition 3.3, we have
Proposition B.1 yields the estimate,
where we have used that \(\prod _{j=1}^n\epsilon _j=1\). This completes the proof. \(\square \)
Comparison and Uniqueness
Next, we prove a comparison principle that yields uniqueness for problem (1.4). Notice that the same arguments apply to the operator \((1-s)\mathcal {D}_s\) giving a stable result in the limit as \(s\rightarrow 1\).
Theorem 4.1
Assume \(1/2<s<1\), and let \(g:\mathbb {R}^{n+1}\rightarrow \mathbb {R}\) be a continuous function satisfying (1.7). Consider \(\phi \in \mathcal {C}^{2,\alpha }(\mathbb {R}^n)\), and \(u\in USC\) and \(v\in LSC\) such that
in the viscosity sense. Then, \(u\le v\) in \(\mathbb {R}^n\).
Remark 4.2
It is also possible to assume \(t\mapsto g(x,t)\) strictly increasing for any \(x\in \mathbb {R}^n\) instead of (1.7) to derive a contradiction in (4.12).
Proof
Let us first present the ideas of the proof in the case when u, v are a classical sub- and supersolution, then we shall consider the viscosity counterparts.
Since we seek to prove \(u \le v\), let us assume to the contrary that \(\sup _{\mathbb {R}^n}(u-v)>0\). As \((u- v)(x)\rightarrow 0\) as \(|x|\rightarrow \infty \), there exists \(x_0 \in \mathbb {R}^n\) such that
Fix \(\delta >0\), arbitrary, and let \(A_\delta >0\) with \(\det A_\delta =1\), such that
for \(L_{A_\delta }\) defined as in (2.1). On the other hand, for the same matrix,
At a maximum point \(\delta (u-v,x_0,y)\le 0\), and
Therefore, since \(\delta \) is arbitrary, we can let \(\delta \rightarrow 0\) and get
a contradiction with the fact that \(g(x_0,\cdot )\) is strictly increasing.
In the general case, we cannot be certain that \(L_{A_\delta } u(x_0)\) and \(L_{A_\delta } v(x_0)\) above are well defined, since u and v may not have the necessary regularity. To remedy that we shall use sup- and inf-convolutions and work with regularized functions. However, we shall rather apply the regularizations to the functions \(\bar{u}=u- \phi \) and \(\bar{v}=v- \phi \), since they are bounded above and below respectively (notice that \(\bar{u}\in USC\), \(\bar{v}\in LSC\), and \(\bar{u}(x),\bar{v}(x) \rightarrow 0\) as \(|x|\rightarrow \infty \) imply that \(\bar{u},\bar{v}\) have respectively a maximum and a minimum).
Consider the sup- and inf-convolution of \(\bar{u}, \bar{v}\), respectively,
and
Before proceeding, let us recall for the reader’s convenience two properties of \(\bar{u}^\epsilon \) that we shall use in the sequel. Analogous properties hold for \(\bar{v}_\epsilon \), noticing that \(\bar{v}_\epsilon =-(- \bar{v})^\epsilon \).
-
(1)
\(\bar{u}^\epsilon \) is bounded above. Since \(\bar{u}\) is bounded above by some constant C, we have
$$\begin{aligned} \bar{u}^\epsilon (x)\le \sup _{y} \left\{ C- \frac{|x-y|^2}{\epsilon }\right\} =C. \end{aligned}$$ -
(2)
The supremum in the definition of (4.1) is achieved. In fact,
$$\begin{aligned} \bar{u}^\epsilon (x)=\sup _{|y-x|^2\le 2\Vert \bar{u}\Vert _\infty \epsilon } \left\{ \bar{u}(y)- \frac{|x-y|^2}{\epsilon }\right\} =\bar{u}(x^*)- \frac{|x-x^*|^2}{\epsilon } \end{aligned}$$(4.2)for some \(x^*\) such that
$$\begin{aligned} |x-x^*|^2\le 2\Vert \bar{u}\Vert _\infty \epsilon \end{aligned}$$(4.3)(here we are slightly abusing notation for the sake of brevity since, as \(\bar{u}\in USC\), we should write \(\sup \bar{u}\) instead of \(\Vert \bar{u}\Vert _\infty \)). To see this, first notice that since \(\bar{u}^\epsilon \) is bounded above, for any given \(\delta >0\) there exists \(x_\delta \) such that,
$$\begin{aligned} \bar{u}^\epsilon (x)= \sup _{y} \left\{ \bar{u}(y)- \frac{|x-y|^2}{\epsilon }\right\} \le \bar{u}(x_\delta )- \frac{|x-x_\delta |^2}{\epsilon }+\delta . \end{aligned}$$Since \(\bar{u}(x)\le \bar{u}^\epsilon (x)\) (pick \(y=x\) in the definition of \(\bar{u}^\epsilon (x)\)), we conclude that \(|x - x_\delta |^2\le (2\Vert \bar{u}\Vert _\infty +1)\epsilon \), assuming \(\delta <1\). Therefore,
$$\begin{aligned} \bar{u}^\epsilon (x)\le \sup _{|y-x|^2\le (2\Vert \bar{u}\Vert _\infty +1)\epsilon } \left\{ \bar{u}(y)- \frac{|x-y|^2}{\epsilon }\right\} +\delta . \end{aligned}$$Since \(\delta \) is arbitrary, we can let \(\delta \rightarrow 0\) and conclude that the supremum in the definition of (4.1) is achieved,
$$\begin{aligned} \bar{u}^\epsilon (x)=\sup _{|y-x|^2\le (2\Vert \bar{u}\Vert _\infty +1)\epsilon } \left\{ \bar{u}(y)- \frac{|x-y|^2}{\epsilon }\right\} . \end{aligned}$$At this point, we can repeat the previous argument with \(\delta =0\) and get formula (4.2).
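The two properties above can be visualized with a simple grid-based sketch of the sup-convolution (illustrative; the one-dimensional sample function, grid, and \(\epsilon \) are arbitrary choices):

```python
import numpy as np

eps = 0.1
grid = np.linspace(-5.0, 5.0, 2001)
u = np.exp(-grid ** 2) * np.sin(5.0 * grid)   # bounded sample function
sup_u = float(np.abs(u).max())                # stands in for ||u||_inf

ok_bounds = True
ok_radius = True
for i in range(0, len(grid), 50):
    # u^eps(x_i) = max_j { u_j - |x_i - x_j|^2 / eps } on the grid.
    vals = u - (grid[i] - grid) ** 2 / eps
    j = int(np.argmax(vals))
    val = float(vals[j])
    # Property (1): u <= u^eps <= sup u.
    ok_bounds = ok_bounds and (u[i] - 1e-12 <= val <= sup_u + 1e-12)
    # Property (2): the maximizer lies within radius sqrt(2 ||u||_inf eps).
    ok_radius = ok_radius and (
        (grid[i] - grid[j]) ** 2 <= 2.0 * sup_u * eps + 1e-12
    )
```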
Now, again for the sake of contradiction, assume \(\sup _{\mathbb {R}^n}(u-v)>0\). Notice that \(\bar{u}^\epsilon (x)- \bar{v}_\epsilon (x)\ge \bar{u}(x)-\bar{v}(x)\) (pick \(y=x\) in the definitions of \(\bar{u}^\epsilon (x),\bar{v}_\epsilon (x)\)), and therefore,
Moreover, \((\bar{u}^\epsilon - \bar{v}_\epsilon )(x)\rightarrow 0\) as \(|x|\rightarrow \infty \). To see this, notice that
and \( \bar{u} (x)- \bar{v} (x)\), \(\sup _{|y|^2\le 2\Vert \bar{u}\Vert _\infty \epsilon } \bar{u}(x+y)\), and \( \inf _{|y|^2\le 2\Vert \bar{v}\Vert _\infty \epsilon } \bar{v}(x+y)\) converge to 0 as \(|x|\rightarrow \infty \).
Thus, there exists \(x_\epsilon \) such that
An important point in the sequel is that both functions \(\bar{u}^\epsilon \) and \( \bar{v}_\epsilon \) are \(\mathcal {C}^{1,1}\) at \(x_\epsilon \), so that the integrals in the operators appearing in the subsequent computations are well-defined. This follows from the following three facts:
-
The paraboloid
$$\begin{aligned} P(x)=\bar{u}(x_\epsilon ^*)- \frac{|x-x_\epsilon ^*|^2}{\epsilon } \end{aligned}$$touches \(\bar{u}^\epsilon \) from below at \(x_\epsilon \) for \(x_\epsilon ^*\) such that \(\bar{u}^\epsilon (x_\epsilon )= \bar{u}(x_\epsilon ^*)- \frac{|x_\epsilon -x_\epsilon ^*|^2}{\epsilon }.\)
-
The paraboloid
$$\begin{aligned} Q(x)=\bar{v}(x_{\epsilon ,*})+ \frac{|x-x_{\epsilon ,*}|^2}{\epsilon } \end{aligned}$$touches \(\bar{v}_\epsilon \) from above at \(x_\epsilon \) for \(x_{\epsilon ,*}\) such that \(\bar{v}_\epsilon (x_\epsilon )= \bar{v}(x_{\epsilon ,*})+ \frac{|x_\epsilon -x_{\epsilon ,*}|^2}{\epsilon }.\)
-
Since \(x_\epsilon \) is a maximum point of \(\bar{u}^\epsilon - \bar{v}_\epsilon \), the function \(\bar{v}_\epsilon (x)-\bar{v}_\epsilon (x_\epsilon )+\bar{u}^\epsilon (x_\epsilon )\) touches \(\bar{u}^\epsilon \) from above at \(x_\epsilon \).
We conclude from these three facts that the paraboloids \(Q(x)- \bar{v}_\epsilon (x_\epsilon )+\bar{u}^\epsilon (x_\epsilon )\) and \(P(x)+ \bar{v}_\epsilon (x_\epsilon )-\bar{u}^\epsilon (x_\epsilon )\) touch respectively \(\bar{u}^\epsilon \) from above and \(\bar{v}_\epsilon \) from below at the point \(x_\epsilon \). Therefore, both \(\bar{u}^\epsilon \) and \(\bar{v}_\epsilon \) can be touched from above and below by a paraboloid at \(x_\epsilon \) and they are \(\mathcal {C}^{1,1}\) at \(x_\epsilon \).
The fact that both \(\bar{u}^\epsilon \) and \(\bar{v}_\epsilon \) are \(\mathcal {C}^{1,1}\) at \(x_\epsilon \) is crucial to make rigorous the formal argument described at the beginning of the proof. Since \(\bar{u}^\epsilon \in \mathcal {C}^{1,1}\) at \(x_\epsilon \), there exists a paraboloid P(x) that touches \(\bar{u}^\epsilon \) from above at \(x_\epsilon \). Then, the function
touches u from above at \(x_{\epsilon }^*\). On the other hand, there exists a paraboloid Q(x) that touches \(\bar{v}_\epsilon \) from below at \(x_\epsilon \) and then, the function
touches v from below at \(x_{\epsilon ,*}\). By Lemma 2.2 we have
in the classical sense.
Fix \(\eta >0\), arbitrary, and let \(A_\eta >0\) with \(\det A_\eta =1\) such that
and
with \(L_{A_\eta }\) defined as in (2.1). Subtracting, we get
The rest of the proof is devoted to deriving a contradiction from the previous inequality by showing that, for \(\epsilon \) small enough, the left-hand side is strictly smaller than the right-hand side.
Let us prove first that,
By definition of the operator \(L_{A_\eta }\), we have
Notice that
Since the proof of both inequalities is analogous, let us show how to obtain the first one. As we have seen,
On the other hand, picking \(z=x_\epsilon ^*-x_\epsilon \),
From these two expressions, we get (4.9).
Now, using (4.9), we have that
Observe that \(x_\epsilon \) is a maximum point of \(\bar{u}^\epsilon -\bar{v}_\epsilon \) and therefore \(\delta (\bar{u}^\epsilon -\bar{v}_\epsilon ,x_\epsilon ,y)\le 0\). We conclude
From (4.6), (4.8) and (4.10), we get
Recall from (4.4) and (4.5) that
Therefore,
or equivalently,
Notice that from estimate (4.3) and its analogous for the inf-convolution, we have
Thus, by the continuity of \(\phi \), we have that for \(\epsilon \) small enough,
Since \(\phi \in \mathcal {C}^{2,\alpha },\) in particular \(L_{A_\eta } \phi (x)\) is a continuous function and, for \(\epsilon \) small enough
By the continuity of g, we can also assume that \(g\big (x_\epsilon ^*,v(x_{\epsilon ,*})\big )-g\big (x_{\epsilon ,*},v(x_{\epsilon ,*})\big )\ge - \eta \). Then, we have from (4.11) and (1.7) that
Since \(\eta \) is arbitrary, we can choose \(\eta \le \frac{\mu }{12}\sup _{\mathbb {R}^n}(\bar{u}- \bar{v})\) and get a contradiction. \(\square \)
Lipschitz Continuity and Semiconcavity of Solutions
In this section, we prove Lipschitz continuity and semiconcavity of solutions to (1.4) with \(\phi \) under the hypotheses of Section 1. These results are needed to fulfill the hypotheses of Theorem 3.1.
Remark 5.1
The regularity results below apply to the operator \((1-s)\mathcal {D}_s\). Notice that all constants involved in the estimates are independent of s and allow passing to the limit as \(s\rightarrow 1\).
We start with the particular case when \(g(x,v(x))=v(x)- \phi (x)\) to illustrate the key ideas.
Proposition 5.2
Assume \(\phi \) is semiconcave and Lipschitz continuous and let v be the solution of
Then, v is Lipschitz continuous and semiconcave with the same constants as \(\phi \).
Proof
In the following proof, we assume for clarity of presentation that v is a classical solution to (5.1) and that all the equations hold pointwise. The argument can be made rigorous by a regularization procedure (similar to the one in the proof of Theorem 4.1) that is explained in detail in the proofs of the more general Propositions 5.3 and 5.4 below, so we omit it here.
1. For the proof of Lipschitz continuity, fix \(e\in \mathbb {R}^n\) and consider the first-order incremental quotient \(v(x+e)-v(x)\). Observe that
as \(|x|\rightarrow \infty \), and therefore \(v(x+e)-v(x)\) is bounded above. Furthermore, we can assume that
since we are done otherwise. Then, there exists some \(x_0\) such that
Fix \(\eta >0\), arbitrary, and let \(A_\eta >0\) such that
and
with \(L_{A_\eta }\) defined as in (2.1). We have from the above expressions that
Notice that \(\delta \big (v(\cdot +e)-v,x_0,y\big )\le 0\), and therefore \(L_{A_\eta }\big (v(x_0+e)-v(x_0)\big )\le 0\). Consequently,
and we conclude letting \(\eta \rightarrow 0\).
A symmetric argument, where \(x_0\) is a point such that
and the operator \(L_{A_\eta }\) is such that,
and
yields
2. For the proof of semiconcavity, consider the second-order incremental quotient \(\delta (v,x,e)=v(x+e)+v(x-e)-2v(x)\). Denote by \(SC(\phi )\) the semiconcavity constant of \(\phi \), and notice that
so \(\delta (v,x,e)\) is bounded above. Furthermore, we can assume that
since we are done otherwise. Then, there exists some \(x_0\) such that
As before, fix \(\eta >0\) arbitrary, and let \(A_\eta >0\) such that
and
with \(L_{A_\eta }\) defined as in (2.1). We have from the above expressions that
Notice that \(\delta \big (\delta (v,\cdot \, ,e),x_0,z\big )\le 0\), and therefore \(L_{A_\eta }\delta (v,x_0,e)\le 0\). Consequently,
We conclude letting \(\eta \rightarrow 0\). \(\square \)
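The pointwise mechanism behind both steps above, used again in Propositions 5.3 and 5.4, is elementary: at a global maximum point of a function, every second-order incremental quotient is nonpositive, so any operator obtained by averaging second differences against a nonnegative kernel is nonpositive there. A minimal numerical sketch (the function w, the exponent s, and the truncated kernel sum are illustrative choices, not from the paper):

```python
def delta(w, x, y):
    """Second-order incremental quotient: delta(w, x, y) = w(x+y) + w(x-y) - 2*w(x)."""
    return w(x + y) + w(x - y) - 2 * w(x)

# w has a strict global maximum at x0 = 0 (an illustrative choice).
w = lambda x: 1.0 / (1.0 + x * x)
x0 = 0.0

# Every second difference at the maximum point is nonpositive ...
offsets = [0.1 * k for k in range(1, 200)]
assert all(delta(w, x0, y) <= 0 for y in offsets)

# ... hence any average against a nonnegative kernel (a truncated caricature of
# L_A w(x0) with kernel |y|^{-(1+2s)}; s = 0.75 is illustrative) is nonpositive too.
s = 0.75
Lw = sum(delta(w, x0, y) * y ** (-(1 + 2 * s)) * 0.1 for y in offsets)
assert Lw <= 0
```

This is exactly the step "\(\delta \big (\delta (v,\cdot \, ,e),x_0,z\big )\le 0\), and therefore \(L_{A_\eta }\delta (v,x_0,e)\le 0\)" in discrete form.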
In the next result we prove that solutions to (1.4) are Lipschitz continuous whenever g on the right-hand side satisfies (1.6) and (1.7).
Proposition 5.3
(Lipschitz continuity of the solution) Let \(g: \mathbb {R}^{n+1} \rightarrow \mathbb {R}\) satisfy (1.6) and (1.7). Then, v, the solution to (1.4), is uniformly Lipschitz continuous, namely, for every \(x,y\in \mathbb {R}^n\),
Proof
The following proof uses a regularization process similar to the proof of Theorem 4.1. For the sake of clarity, let us present first the main ideas assuming that v is a classical solution.
Fix \(e\in \mathbb {R}^n\) and consider the first-order incremental quotient \(v(x+e)-v(x)\). Observe that
as \(|x|\rightarrow \infty \), and therefore \(v(x+e)-v(x)\) is bounded above. Furthermore, we can assume that
since we are done otherwise. Then, there exists some \(x_0\) such that
Fix \(\eta >0\), arbitrary, and let \(A_\eta >0\) with \(\det A_\eta =1\) such that
and
with \(L_{A_\eta }\) defined as in (2.1).
We have from the above expressions that
Notice that \(\delta \big (v(\cdot +e)-v,x_0,y\big )\le 0\), and therefore \(L_{A_\eta }\big (v(x_0+e)-v(x_0)\big )\le 0\). Consequently,
At this point we can let \(\eta \rightarrow 0\) and, using (1.6) and (1.7), get
A symmetric argument, where \(x_0\) is a point such that
and the operator \(L_{A_\eta }\) is such that,
and
yields
and from there,
In general, in the above argument we cannot guarantee that v is regular enough so that both \(L_{A_\eta } v(x_0+e)\) and \(L_{A_\eta } v(x_0)\) are well-defined and the corresponding equations hold in the classical sense.
To complete the argument, we are going to use a regularization process similar to the one in the proof of Theorem 4.1. Let us give the details of the proof of (5.2).
To simplify the notation in the sequel, let us denote \(u(x)=v(x+e)\) and consider the sup- and inf-convolution of u, and v, respectively,
and
In the proof of Theorem 4.1 we were dealing with the regularization of \(v-\phi \), a bounded function. In our case, v is not bounded, but its growth at infinity is controlled by \(\phi \), which allows us to prove the following:
-
(1)
\(u^\epsilon (x)\) is bounded above. Specifically, there exists a constant \(C>0\) depending only on \(\phi \) and \(\Vert v-\phi \Vert _\infty \) such that \(u^\epsilon (x)\le C(1+|x+e|)\). To see this, notice that by our hypotheses on \(\phi \),
$$\begin{aligned} \phi (x)\le a|x|^{-\epsilon }+\Gamma (x)\le a|x|^{-\epsilon }+b|x|\le a+b|x| \end{aligned}$$for |x| large enough, where b depends on the convexity of the sections of \(\Gamma \). Since \(\phi \) is bounded near 0, we conclude that \( \phi (x)\le a+b|x|\) for all x, possibly with a different constant a. Since \(v-\phi \) is bounded,
$$\begin{aligned} u^\epsilon (x)= & {} \sup _{y} \left\{ (v-\phi )(y+e)+\phi (y+e)- \frac{|x-y|^2}{\epsilon }\right\} \\\le & {} \sup _{y} \left\{ \Vert v-\phi \Vert _\infty +a+b|y+e|- \frac{|x-y|^2}{\epsilon }\right\} \\\le & {} \Vert v-\phi \Vert _\infty +a+b|x+e|+ \sup _{y} \left\{ b|x-y|- \frac{|x-y|^2}{\epsilon }\right\} \\\le & {} \Vert v-\phi \Vert _\infty +a+b|x+e|+ b^2\epsilon \le C(1+|x+e|). \end{aligned}$$ -
(2)
As a consequence, the supremum in the definition of \(u^\epsilon (x)\) is finite, and for any given \(\delta >0\) there exists \(x_\delta \) such that,
$$\begin{aligned} u^\epsilon (x)= \sup _{y} \left\{ u(y)- \frac{|x-y|^2}{\epsilon }\right\} \le u(x_\delta )- \frac{|x-x_\delta |^2}{\epsilon }+\delta . \end{aligned}$$ -
(3)
The supremum in the definition of \(u^\epsilon \) is achieved. In fact,
$$\begin{aligned} u^\epsilon (x)\le \sup _{|y-x|\le \sqrt{\epsilon }R} \left\{ u(y)- \frac{|x-y|^2}{\epsilon }\right\} = u(x^*)- \frac{|x-x^*|^2}{\epsilon } \end{aligned}$$for some \(x^*\) such that \(|x-x^*|\le \sqrt{\epsilon }R\), where R depends on \(\text {Lip}(\phi )\) and \( \Vert v-\phi \Vert _\infty \) but can be chosen independent of \(\epsilon \) and x.
To see this, fix \(\delta <1\) and notice that \( u(x)\le u^\epsilon (x) \le u(x_\delta )- \frac{|x-x_\delta |^2}{\epsilon }+\delta . \) We conclude
$$\begin{aligned} \begin{aligned} \frac{|x - x_\delta |^2}{\epsilon }&\le (v-\phi )(x_\delta +e)- (v-\phi )(x+e)+\text {Lip}(\phi )\,|x_\delta -x|+\delta \\&\le 2\Vert v-\phi \Vert _\infty +\text {Lip}(\phi )\,|x_\delta -x|+1. \end{aligned} \end{aligned}$$From this expression, it follows that \(|x - x_\delta |<\sqrt{\epsilon }R\) for some R as before (\(\sqrt{\epsilon }R\) is basically the larger root of the quadratic polynomial in \(|x - x_\delta |\)). Therefore,
$$\begin{aligned} u^\epsilon (x)\le \sup _{|y-x|\le \sqrt{\epsilon }R} \left\{ u(y)- \frac{|x-y|^2}{\epsilon }\right\} +\delta . \end{aligned}$$Since \(\delta \) is arbitrary, we can let \(\delta \rightarrow 0\) and conclude that the supremum in the definition of \( u^\epsilon \) is achieved.
-
(4)
Analogous properties hold for \(v_\epsilon \). Notice that property (1) is simpler,
$$\begin{aligned} v_\epsilon (x)= & {} \inf _{y} \left\{ v(y)+ \frac{|x-y|^2}{\epsilon }\right\} =\inf _{y} \left\{ (v-\phi )(y)+\phi (y)+ \frac{|x-y|^2}{\epsilon }\right\} \\\ge & {} \inf (v-\phi )>-\infty . \end{aligned}$$
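Properties (1)–(3) can be explored numerically with a brute-force computation of the sup-convolution on a grid. In the sketch below, u = |·| is an illustrative Lipschitz function, not the solution v of the paper; the checks mirror the properties above: \(u^\epsilon \) touches u from above, \(\sup (u^\epsilon -u)\) is of order \(\epsilon \), and the supremum is attained at a point close to x.

```python
# Brute-force sup-convolution u^eps(x) = sup_y { u(y) - |x - y|^2 / eps } on a
# 1-D grid.  u = |.| is an illustrative Lipschitz function (Lip(u) = 1), not the
# solution v of the paper.
u = lambda x: abs(x)
eps = 0.1
grid = [i * 0.001 - 4.0 for i in range(8001)]   # y-grid on [-4, 4]

def sup_conv(x):
    """Return (u^eps(x), maximizer y*) computed by brute force over the grid."""
    best_y = max(grid, key=lambda y: u(y) - (x - y) ** 2 / eps)
    return u(best_y) - (x - best_y) ** 2 / eps, best_y

for x in [-1.0, -0.3, 0.0, 0.2, 1.5]:
    ue, y_star = sup_conv(x)
    assert ue >= u(x) - 1e-9                  # u^eps touches u from above
    assert ue - u(x) <= eps / 4 + 1e-3        # for u = |.|, sup (u^eps - u) = eps/4
    assert abs(y_star - x) <= eps / 2 + 1e-3  # maximizer stays within eps*Lip(u)/2
```

For this u the maximizer lies within \(\epsilon \,\mathrm {Lip}(u)/2\) of x, consistent with (and sharper than) the \(\sqrt{\epsilon }R\) bound of property (3).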
We are ready now to complete the proof. Following the formal argument above, we can assume that there exists \(x_0\) such that
First, we need to prove that there exists \(x_\epsilon \) such that
To see this, observe that
On the other hand,
Therefore, for \(\epsilon \) small enough, \(\text {Lip}(\phi )\,(2\sqrt{\epsilon }R+|e|)< \sup _{x\in \mathbb {R}^n}(u^\epsilon -v_\epsilon )\) and
Following the proof of Theorem 4.1, we can prove that both \( u^\epsilon \) and \( v_\epsilon \) are \(\mathcal {C}^{1,1}\) at \(x_\epsilon \), so that the integrals in the subsequent computations are well defined. The idea is that the paraboloids
and
touch respectively \( u^\epsilon \) from above and \( v_\epsilon \) from below at the point \(x_\epsilon \). Therefore, \( u^\epsilon \) and \( v_\epsilon \) can both be touched from above and below by a paraboloid at \(x_\epsilon \) and they are \(\mathcal {C}^{1,1}\) at \(x_\epsilon \).
Since \( u^\epsilon \in \mathcal {C}^{1,1}\) at \(x_\epsilon \), there exists a paraboloid P(x) that touches \( u^\epsilon \) from above at \(x_\epsilon \). Then
touches u from above at \(x_{\epsilon }^*\). Equivalently,
touches v from above at \(x_{\epsilon }^*+e\). On the other hand, there exists a paraboloid Q(x) that touches \( v_\epsilon \) from below at \(x_\epsilon \) and then
touches v from below at \(x_{\epsilon ,*}\). By Lemma 2.2 we have
in the classical sense.
Fix \(\eta >0\), arbitrary, and let \(A_\eta >0\) such that
and
with \(L_{A_\eta }\) defined as in (2.1). Subtracting, we get
By definition of the operator \(L_{A_\eta }\), we have
Notice that, as in the proof of Theorem 4.1,
Observe that \(x_\epsilon \) is a maximum point of \( u^\epsilon - v_\epsilon \) and therefore \(\delta ( u^\epsilon -v_\epsilon ,x_\epsilon ,y)\le 0\). We conclude
Notice that
Since \(\eta \) is arbitrary, we can let \(\eta \rightarrow 0\) and get
The result follows letting \(\epsilon \rightarrow 0\). \(\square \)
In the next result we show that, under certain conditions on the right-hand side g, solutions to (1.4) are semiconcave, that is, informally, that their second derivatives are bounded from above. Before stating the result, let us identify heuristically the natural hypotheses on g under which semiconcavity can be expected of the solutions.
To simplify, consider instead of \(\mathcal {D}_s\) a linear operator \(L_A\) (defined as in (2.1)) such that
Formally, we have that \(D_{ee}^2v(x_0)\) satisfies
where \(\sum _{1\le i,j\le n} \partial ^2_{x_ix_j}g(x_0,v(x_0))e_ie_j\) is the second derivative in the direction e, at the point \(x_0\), of the composite function \(x\mapsto g(x,v(x))\). Now, if \(x_0\) is a maximum point of \(D_{ee}^2v\) we get
It can be checked that
where \(\partial ^2_{i,j}g(x,v(x))\) and \(\partial _{n+1}g(x,v(x))\) denote derivatives of g as a function of \(n+1\) variables evaluated at the point (x, v(x)). Writing \(\xi =( e^t, \ \langle \nabla v (x),e\rangle )^t\) for convenience, we have
or equivalently,
This inequality suggests that in order to get an upper bound on \(D_{ee}^2v(x_0)\) it is natural to require \(D^2g\ge -C \, Id\) and \(\partial _{n+1}g(x_0,v(x_0))>\mu >0\), namely hypotheses (1.5) and (1.7), since then
From here we have the desired estimate as long as we can guarantee that v is Lipschitz. In Proposition 5.3 we proved that this is actually the case provided hypotheses (1.6) and (1.7) hold true.
In the following result we justify the heuristic argument above.
Proposition 5.4
(Semiconcavity of the solution) Let \(g: \mathbb {R}^{n+1} \rightarrow \mathbb {R}\) satisfy (1.5), (1.6), and (1.7). Then, the solution to (1.4) is semiconcave, that is, for every \(x\in \mathbb {R}^n\),
Proof
Let v be the solution to problem (1.4), \(e\in \mathbb {R}^n\) fixed, and assume that
as the result is trivial otherwise. We observe that \(\delta (v,x,e)\rightarrow 0\) as \(|x| \rightarrow \infty \). To see this, notice first that \(\delta (v,x,e)=\delta (v- \phi ,x,e)+\delta (\phi ,x,e)=o(1)+\delta (\phi ,x,e)\) as \(|x| \rightarrow \infty \). Also, by our hypotheses on \(\phi \), we have that
Therefore, there is some \(x_0\) such that
To complete the proof we need a regularization process as in the proof of Proposition 5.3. Again, let us present the ideas first assuming that v is a classical solution and all the equations hold pointwise.
Fix \(\eta >0\) arbitrary, and let \(A_\eta >0\) such that
and
with \(L_{A_\eta }\) defined as in (2.1). We have from the above expressions that
Notice that \(\delta \big (\delta (v,\cdot \, ,e),x_0,z\big )\le 0\), and therefore \(L_{A_\eta }\delta (v,x_0,e)\le 0\). Consequently,
At this point we can let \(\eta \rightarrow 0\) and rewrite the resulting expression as
for \(\theta _1= \big ( e,v(x_0+e)-v(x_0) \big )\) and \(\theta _2 =\big ( -e,v(x_0-e)-v(x_0) \big ) \). Then, by (1.5) and (1.7) we have
and therefore, for any \(x\in \mathbb {R}^n\),
The result follows applying Proposition 5.3.
To complete the proof in the general case, let us sketch the regularization procedure. The details follow the lines of the proof of Proposition 5.3. To simplify the notation, let us denote \(u(x)=v(x+e),\) \(w(x)=v(x-e)\) and consider the sup-convolution of u, w and the inf-convolution of v, namely,
and
for some points \(x^*,x^{**},\) and \(x_*\) within a distance \(\sqrt{\epsilon }R\) from x (see property 3 in the proof of Proposition 5.3).
Assume (5.3). Then, on the one hand, we have that
On the other hand,
as \(|x|\rightarrow \infty \). Therefore, for \(\epsilon \) small enough, there exists \(x_\epsilon \) such that
Now, consider the following three paraboloids:
and
Then, all three \(u^\epsilon ,w^\epsilon \), and \(v_\epsilon \) are \(\mathcal {C}^{1,1}\) at \(x_\epsilon \). To see this, notice that
-
P(x) touches \( u^\epsilon \) from below at \(x_\epsilon \) and
$$\begin{aligned} 2Q(x)-R(x)+u^\epsilon (x_\epsilon )+w^\epsilon (x_\epsilon )-2v_\epsilon (x_\epsilon ) \end{aligned}$$touches it from above.
-
Q(x) touches \( v_\epsilon \) from above at \(x_\epsilon \) and
$$\begin{aligned} \frac{1}{2}P(x)+\frac{1}{2}R(x)+ v_\epsilon (x_\epsilon )-\frac{1}{2}u^\epsilon (x_\epsilon )-\frac{1}{2}w^\epsilon (x_\epsilon ) \end{aligned}$$touches it from below.
-
R(x) touches \( w^\epsilon \) from below at \(x_\epsilon \) and
$$\begin{aligned} 2Q(x)-P(x)+u^\epsilon (x_\epsilon )+w^\epsilon (x_\epsilon )-2v_\epsilon (x_\epsilon ) \end{aligned}$$touches it from above.
Then, there are three paraboloids that touch v from above at \(x_\epsilon ^*+e\) and \(x_\epsilon ^{**}-e\), and from below at \(x_{\epsilon ,*}\). By Lemma 2.2 we have
and
in the classical sense. Fix \(\eta >0\), arbitrary, and let \(A_\eta >0\) such that
with \(L_{A_\eta }\) defined as in (2.1). Then
As in the proof of Theorem 4.1,
Since \(\eta \) is arbitrary, we conclude
Rearranging terms, we get,
Let us analyze the left-hand side of the inequality first. By (1.7) and (1.6), we have
If we denote \(\theta =(x_\epsilon ^*+e-x_{\epsilon ,*},v(x_\epsilon ^*+e)-v(x_{\epsilon ,*}))\), the right-hand side becomes
where in the last step we have used (1.5). Therefore,
Observe that
On the other hand,
Finally, recall that \(|x_\epsilon ^*-x_{\epsilon ,*}|\le 2\sqrt{\epsilon }R\) and \(|x_\epsilon ^*+x_{\epsilon }^{**}-2x_{\epsilon ,*}|\le 4\sqrt{\epsilon }R\). All the above together yields,
Now we can let \(\epsilon \rightarrow 0\) and apply Proposition 5.3 to conclude. \(\square \)
Existence of Solutions
In this section we prove existence of solutions to the problem
One could consider existence for more general problems with a right-hand side g(x, u) as in (1.4). The arguments below would work under assumptions on g that guarantee the existence of appropriate sub- and supersolutions as well as comparison (see Section 4).
The idea to find a supersolution is that, by the definition of \(\mathcal {D}_{s}\) as an infimum of linear operators, it is enough to have the appropriate inequality for just one of them.
Lemma 6.1
Denote \(g(x)=\min \{1,|x|^{-(2s+\tau )}\}\) and \(u_F(x)=C_F\cdot |x|^{2s-n}\), the fundamental solution of \((-\Delta )^s\) for an appropriate constant \(C_F\). Then, there exist constants \(0<\tau <\min \{2s-1,n-2s\}\) and \(M>0\) such that \(u=\phi +M\cdot \big (u_F*g\big )\) satisfies
Proof
We construct an upper barrier of the form \(u=\phi +w\), where w is a potential. We start the construction of w with \(w_0=u_F* g_0\), where \(g_0(x)=|x|^{-(2s+\tau )}\) for some small \(0<\tau <n-2s\). Since both \(n-2s<n\) and \(2s+\tau <n\), while \((n-2s)+(2s+\tau )>n\), there are constants \(a_0,a_1>0\) such that
Also, by construction,
Notice that \(w_0\) decays at infinity (and therefore \((u-\phi )=w_0\rightarrow 0\) as \(|x|\rightarrow \infty \)) but it is not bounded at 0. Consequently, we truncate \(g_0\) and define \(g_1=\min \{1,g_0\}\) and \(w_1=u_F*g_1\).
The function \(w_1\) is bounded, still radially decreasing, and has the same decay as \(w_0\). To prove the last assertion, first notice that
The function \(g_0-g_1\) is supported in the ball of radius one; therefore,
For \(|y|\le 1\) and \(|x|>2\) we have \(|y|<|x|/2\) and from there we deduce
On the other hand, \(\tau <n-2s\) implies that
is constant and therefore
for some constants \(b_0,b_1>0\). Again, since \(\tau <n-2s\), estimates (6.3), (6.4), and (6.5) prove that \(w_1\) has the same decay as \(w_0\), as claimed.
Moreover, there exist constants \(A_0,A_1>0\) such that
Also, by construction, we have
As a third and final step, we dilate \(w_1\) to our final w. To this aim, define \(w(x)=M\cdot w_1(x)\) for some large constant M to be chosen. Then,
and
We are ready to check that for an appropriate M the function \(u=\phi +w\) satisfies
Indeed,
where \(c_{n,s},A_0\) are given and M is to be chosen. On the other hand,
From our hypotheses on \(\phi ,\)
for |x| large, while \(-(-\Delta )^s\phi (x)\) is bounded in every neighborhood of the origin. Therefore
for some constant \(C>0\).
In view of (6.6), (6.7), and (6.8), we only have to control \(C\cdot \min \{1,|x|^{1-2s}\}\) by \(c_{n,s}M A_0\min \{1,|x|^{-\tau }\}\) to conclude. If \(\tau <2s-1\), a large value of M does it. \(\square \)
Proposition 6.2
(Existence of solutions) There exists a unique solution of
Proof
First, observe that \(\phi \) is a subsolution to the problem, since by convexity \(\delta ( \phi ,x,y)\ge 0\), and therefore \(\mathcal {D}_{s} \phi (x)\ge 0\).
To find a supersolution notice that \(-c_{n,s}^{-1}\,(- \Delta )^{s}\) is one of the operators that compete for the infimum in the definition of \(\mathcal {D}_{s}\), and we know from Lemma 6.1 that there is a function \(\bar{u}\) such that \(-c_{n,s}^{-1}\,(- \Delta )^{s}\bar{u}\le \bar{u}- \phi \) with the right “boundary data at infinity”, that is, \((\bar{u}-\phi )\rightarrow 0\) as \(|x|\rightarrow \infty \). We have,
By comparison, see Theorem 4.1, \(\phi \le \bar{u}\).
Consider the following approximating problem,
where \(\mathcal {D}_{s}^\epsilon \) is the following non-degenerate operator,
Notice that \(\phi \) and \(\bar{u}\) as above are respectively a sub- and supersolution for these approximating problems for every \(\epsilon \).
Being uniformly elliptic, the approximating problems have a solution for every \(\epsilon \). To show this, consider for every \(k=1,2,\ldots \) the following uniformly elliptic Dirichlet problem
Then, for every k there exists a unique solution \(u_k\), which is regular (with estimates depending on \(\epsilon \) but not on k), see [3, 4] and the references therein. By comparison (see [3]) we have
for every \(k_1\le k_2\) (notice that \(\phi \) and \(\bar{u}\) are always a sub- and supersolution), that is, the sequence \(u_k\) is monotone increasing. We conclude that the sequence converges locally uniformly to the solution of the approximating problem.
Now, observe that \(\mathcal {D}_s u(x)\le \mathcal {D}_s^{\epsilon _1} u(x)\le \mathcal {D}_s^{\epsilon _2} u(x)\) for any \(\epsilon _1\le \epsilon _2\), since the infimum is smaller the larger the class of matrices. In particular, every \(u_\epsilon \) is a supersolution of (6.9), as well as of the approximating problem for every smaller \(\epsilon \). Picking \(\epsilon _k=1/k\) for \(k=1,2,\ldots \) we have by comparison (Theorem 4.1) that
The arguments in Section 5 imply that every \(u_\epsilon \) is Lipschitz continuous with the same constant as \(\phi \), hence uniformly in \(\epsilon \). Therefore, when \(\epsilon \) goes to 0, the sequence \(u_\epsilon \) converges monotonically and uniformly to the solution of problem (6.9), which is unique by comparison. \(\square \)
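The monotonicity \(\mathcal {D}_s u\le \mathcal {D}_s^{\epsilon _1} u\le \mathcal {D}_s^{\epsilon _2} u\) for \(\epsilon _1\le \epsilon _2\) can be illustrated with a local second-order caricature of the operators: replacing \(L_Au(x)\) by \(\mathrm {tr}(AA^tD^2u(x))\) and taking, in two dimensions, \(A=\mathrm {diag}(a,1/a)\) so that \(\det A=1\). The truncation \(\epsilon Id< A <\epsilon ^{-1} Id\) then restricts a to \([\epsilon ,1/\epsilon ]\). The eigenvalues below are illustrative values, not from the paper:

```python
import math

# Local second-order caricature: for D^2u = diag(l1, l2) > 0 and A = diag(a, 1/a)
# (so det A = 1), tr(A A^t D^2u) = a^2*l1 + l2/a^2.  The truncated operator
# minimizes over a in [eps, 1/eps]; the full operator over all a > 0.
l1, l2 = 2.0, 5.0

def truncated_inf(eps, n_grid=200000):
    step = (1.0 / eps - eps) / n_grid
    return min((eps + k * step) ** 2 * l1 + l2 / (eps + k * step) ** 2
               for k in range(n_grid + 1))

full_inf = 2.0 * math.sqrt(l1 * l2)   # matrix AM-GM (Lemma A.3 with n = 2)

v_09, v_05, v_01 = truncated_inf(0.9), truncated_inf(0.5), truncated_inf(0.1)

# The infimum decreases as the class of matrices grows with shrinking eps ...
assert full_inf - 1e-9 <= v_01
assert v_01 <= v_05 + 1e-6
assert v_05 <= v_09 + 1e-6
# ... and once [eps, 1/eps] contains the minimizer a = (l2/l1)^{1/4}, the
# truncation is inactive and the full infimum is attained.
assert abs(v_05 - full_inf) < 1e-6
assert v_09 > full_inf + 0.1          # eps = 0.9 truncates away the minimizer
```

This is only a sketch of the monotonicity in \(\epsilon \) at the level of the symbol; the operators in the paper are of course nonlocal.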
To conclude, we show that the right-hand side of problem (6.9) is positive, so that Theorem 3.1 applies. Therefore, the operator remains uniformly elliptic and the standard regularity results for uniformly elliptic equations, see [3, 4], are available.
Proposition 6.3
Let u be the solution to problem (6.9). Then, \(u>\phi \) in \(\mathbb {R}^n\).
Proof
As pointed out in the proof of Proposition 6.2, we have \(u\ge \phi \) in \(\mathbb {R}^n\) by comparison; therefore, we only need to prove that the inequality is strict. We argue by contradiction and assume that there is \(x_0\in \mathbb {R}^n\) such that \(u(x_0)=\phi (x_0)\).
Observe that then \(\phi \) touches u from below at \(x_0\). We can replace \(\phi \) by \(\tilde{\phi }\in \mathcal {C}^{2,\alpha }(\mathbb {R}^n)\), also strictly convex on compact sets and asymptotic to some cone at infinity, in such a way that \(\tilde{\phi }\) touches \(\phi \) (and also u) from below at \(x_0\), and this is the only contact point.
Then we can use \(\tilde{\phi }\) as a test in the definition of viscosity solution of problem (6.9) to get \(\mathcal {D}_{s} \tilde{\phi }(x_0)\le 0\). On the other hand, \(\mathcal {D}_{s} \tilde{\phi }(x_0)\ge 0\) by convexity, and we conclude that \(\mathcal {D}_{s} \tilde{\phi }(x_0)=0\).
Then we claim that there exists a direction \(e\in \partial B_1(0)\) such that the one-dimensional fractional Laplacian of the restriction of \(\tilde{\phi }\) to the direction e is non-positive, namely \(-\big (- \Delta \big )_{e}^s\tilde{\phi }(x_0)\le 0\). This contradicts the fact that \(-\big (- \Delta \big )_{e}^s\tilde{\phi }(x_0)>0\), since \(\tilde{\phi }\) is convex and non-constant.
To prove the claim, notice that for all \(k=1,2,\ldots \) there exists \(e_k\in \partial B_1(0)\) such that
(otherwise, there exists \(\mu >0\) such that \(-\big (- \Delta \big )_{e}^s\tilde{\phi }(x_0)>\mu \) uniformly in e and we can argue as in Proposition 3.5 to show that \(\mathcal {D}_{s} \tilde{\phi }(x_0)>0\)).
By the convexity and linear growth at infinity of \(\tilde{\phi }\) we have that
independently of k. Passing to a subsequence if necessary, we can assume that \(e_k\rightarrow e\) as \(k\rightarrow \infty \) and then we can pass to the limit in (6.10) by the dominated convergence theorem. This concludes the proof of the claim. \(\square \)
Remark 6.4
Another approach to show existence would be to solve a “truncated problem” with the restriction \(\epsilon Id < A <\epsilon ^{-1} Id\) besides the condition \(\det (A)=1\). For this problem, solutions exist and are smooth (depending on \(\epsilon \)) by existing theory (see [3, 4]), but they are also semiconcave independently of \(\epsilon \) (Section 5), and the proof that the operator remains strictly non-degenerate (Section 3) applies directly to these solutions for \(\epsilon \) small enough.
References
Caffarelli, L.A., Nirenberg, L., Spruck, J.: The Dirichlet problem for nonlinear second-order elliptic equations. I. Monge–Ampère equation. Commun. Pure Appl. Math. 37(3), 369–402 (1984)
Caffarelli, L.A., Nirenberg, L., Spruck, J.: The Dirichlet problem for nonlinear second-order elliptic equations. III. Functions of the eigenvalues of the Hessian. Acta Math. 155(3–4), 261–301 (1985)
Caffarelli, L., Silvestre, L.: Regularity theory for fully nonlinear integro-differential equations. Commun. Pure Appl. Math. 62(5), 597–638 (2009)
Caffarelli, L., Silvestre, L.: The Evans-Krylov theorem for nonlocal fully nonlinear equations. Ann. Math. (2) 174(2), 1163–1187 (2011)
Caffarelli, L.A., Silvestre, L.: A nonlocal Monge–Ampère equation. Commun. Anal. Geom., to appear
Cannarsa, P., Sinestrari, C.: Semiconcave Functions, Hamilton–Jacobi Equations, and Optimal Control, Progress in Nonlinear Differential Equations and their Applications, 58. Birkhauser Boston Inc, Boston (2004)
Guillen, N., Schwab, R.W.: Aleksandrov–Bakelman–Pucci type estimates for integro-differential equations. Arch. Ration. Mech. Anal. 206(1), 111–157 (2012)
Gutiérrez, C.: The Monge–Ampère equation, Progress in Nonlinear Differential Equations and their Applications, 44. Birkhäuser Boston, Inc., Boston (2001). xii+127 pp
Jian, H., Wang, X.J.: Existence of entire solutions to the Monge–Ampère equation. Am. J. Math. 136(4), 1093–1106 (2014)
Trudinger, N.S., Wang, X.J.: The Monge–Ampère equation and its geometric applications. Handbook of Geometric Analysis 1, 467–524 (2008)
Luis Caffarelli has been supported by NSF DMS-1540162. Fernando Charro partially supported by a MEC-Fulbright and Juan de la Cierva fellowships and MINECO grants MTM2011-27739-C04-01 and MTM2014-52402-C3-1-P, and is part of the Catalan research group 2014 SGR 1083 (“grup reconegut”).
Appendices
Appendix A
In this appendix we include for the reader’s convenience the proof of the following fact, mentioned in the introduction.
Lemma A.1
If u is convex, asymptotically linear and \(1/2<s<1\), then
in the viscosity sense.
The proof of Lemma A.1 is a direct consequence of the following two results.
Lemma A.2
If u is asymptotically linear and \(1/2<s<1\), then
in the viscosity sense.
Lemma A.3
Let B be a symmetric and positive semidefinite matrix. Then,
We devote the rest of this appendix to the proof of Lemmas A.2 and A.3 (Lemma A.3 is well known, but we include a proof for completeness).
Proof of Lemma A.2
We shall first consider the case when \(u\in \mathcal {C}^2(\mathbb {R}^n)\) and then show how to adapt the arguments to the viscosity setting. To this aim, let \(A>0\) be such that \(\det A=1\), and fix \(0<\rho <R\) to be chosen later on. Then,
Now, we are going to analyze each term on the right-hand side of (A.1). First, notice that
The last equality follows by integrating by parts (notice that y is the unit normal to \(\partial B_1(0)\)).
Fix \(\epsilon >0\), small. Since \(\delta (u,x,Ay)= \langle D^2u(x)Ay,Ay\rangle +o(|y|^2)\) as \(|y|\rightarrow 0\), we have that
if \(|y|\le \rho \) with \(\rho \) sufficiently small. Thus,
On the other hand,
As for the last term in (A.1), since u is asymptotically linear, for \(R>0\) large enough, there exists some constant \(L>0\) such that \(|\delta (u,x,Ay)|\le 2L |Ay| \le 2L \lambda _{\max }^{1/2}(AA^t) |y| \). Therefore,
Collecting (A.1)– (A.5), we get
and then,
Since both A and \(\epsilon \) are arbitrary, we have
On the other hand, from (A.1)– (A.5) we also get
Let \(A_\epsilon >0\) with \(\det A_\epsilon =1\) such that
Then,
Finally, letting first \(s\rightarrow 1\) and then \(\epsilon \rightarrow 0\), we conclude
and therefore, the equality.
To conclude, let us show how to adapt this argument to the viscosity setting. According to Definition 2.1, whenever a function \(\psi \) touches u (from above or from below) at a point x in the sense that
-
\(\psi (x) = u(x)\),
-
\(\psi (y) > u(y)\) (resp. \(\psi (y) < u(y)\)) for every \(y \in N \setminus \{x\}\),
where N is a neighborhood of x and \(\psi \in C^2(\overline{N})\), then we have to evaluate \(\mathcal {D}_s v(x)\) for
and consider the appropriate inequality in the corresponding equation. Therefore, the main difference when carrying out the above argument in the viscosity sense is that (A.1) reads
provided \(\rho \) and R are respectively small and large enough. Then, we can apply estimates (A.2)–(A.5) to each of the terms in (A.8) and get the corresponding one-sided inequality (analogous to (A.6), (A.7)) needed to read the equation in the viscosity sense, namely,
\(\square \)
Proof of Lemma A.3
Let A with \(\det A=1\). Then, the matrix \(AA^t\) is symmetric and positive definite. Hence, the inequality between the arithmetic and geometric means yields,
As this is true for any A with \(\det A=1\), we deduce,
To derive the converse inequality, assume first that \(B>0\). If we choose
with \(P_B\) such that \(B=P_B\,\mathrm{diag}(\lambda _i(B))P_B^t\) and \(P_BP_B^t=I\), we get
Let us now consider the case when \(B\ge 0\). Since the result is trivial when \(B=0\), we can assume that 0 is an eigenvalue of the matrix B with multiplicity \(n-k<n\), that is,
with \(P_BP_B^t=I\). Fix \(\epsilon >0\) and define
with \(P_B\) the same as before and
In this way, \(A_\epsilon \) is positive definite with \(\mathrm{det} (A_\epsilon )=1\), and
Since \(\epsilon >0\) is arbitrary, we conclude that
\(\square \)
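Lemma A.3 asserts that \(\inf _{\det A=1}\mathrm {tr}(AA^tB)=n\det (B)^{1/n}\) for B symmetric positive semidefinite. Since the trace depends on A only through \(S=AA^t\), one may equivalently range over symmetric positive definite S with \(\det S=1\); for \(B>0\) the infimum is attained at \(S=\det (B)^{1/n}B^{-1}\), as in the proof above. A 2×2 numerical sketch in pure Python (the matrix B is an arbitrary illustrative choice):

```python
import math, random

# 2x2 helpers (pure Python, no external libraries).
def det2(M):  return M[0][0] * M[1][1] - M[0][1] * M[1][0]
def mul2(M, N):
    return [[sum(M[i][k] * N[k][j] for k in range(2)) for j in range(2)] for i in range(2)]
def tr2(M):   return M[0][0] + M[1][1]

B = [[2.0, 1.0], [1.0, 3.0]]          # symmetric, positive definite (illustrative)
n = 2
target = n * det2(B) ** (1.0 / n)     # n * det(B)^{1/n} = 2 * sqrt(5)

# Optimal S = A A^t = det(B)^{1/n} * B^{-1}: det S = 1 and the infimum is attained.
d = det2(B)
Binv = [[B[1][1] / d, -B[0][1] / d], [-B[1][0] / d, B[0][0] / d]]
S_opt = [[d ** (1.0 / n) * Binv[i][j] for j in range(2)] for i in range(2)]
assert abs(det2(S_opt) - 1.0) < 1e-9
assert abs(tr2(mul2(S_opt, B)) - target) < 1e-9

# Random S = M M^t normalized to det S = 1 never beats the optimum (AM-GM).
random.seed(0)
for _ in range(2000):
    M = [[random.uniform(-2, 2) for _ in range(2)] for _ in range(2)]
    if abs(det2(M)) < 1e-3:
        continue                       # skip near-singular samples
    S = mul2(M, [[M[0][0], M[1][0]], [M[0][1], M[1][1]]])   # M M^t
    S = [[S[i][j] / det2(S) ** (1.0 / n) for j in range(2)] for i in range(2)]
    assert tr2(mul2(S, B)) >= target - 1e-9
```

The random search is of course no proof; it merely illustrates that the explicit choice of \(A_\epsilon \) in the lemma is extremal.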
Appendix B: Estimates of the Integral of the Kernel on the Sphere
Proposition B.1
Let \(\epsilon _j>0\), for \(j=1,\ldots ,k\). Then,
Remark B.2
Notice that
Proof
1. Start by denoting
Multiply both sides by \(r^{1-2s}e^{-r^2}\) and integrate from 0 to \(\infty \), to get
Using polar coordinates, we get
Notice that our choice of the radial function in the numerator, namely \(|x|^2e^{-|x|^2}\), makes the integral finite and well-defined.
2. We prove first the lower estimate. A change of variables \(y_j=\epsilon _jx_j\) in (B.1) yields
Performing a change of variables \(t=\big (\sum _{j=1}^{k}\epsilon _{j}^{-2}y_j^2\big )^{1/2}r\) we get
Collecting (B.1), (B.2), and (B.3) yields
Then, Jensen's inequality with weights \(y_j^2\) (notice that \(\sum _{j=1}^ky_j^2=|y|^2=1\)) yields
Integrating by parts (note that y is the unit normal) we have
3. The upper estimate follows from the identity
upon observing that
4. To prove (B.4), we shall use the following formula, which follows from a change of variables in the definition of the Gamma function:
Applying this formula with \(h=\frac{k+2s}{2}\) and \(\lambda =\sum _{i=1}^{k}\epsilon _{i}^2x_i^2\) we can rewrite the kernel as
We get,
A change of variables \(y_j=(1+z\epsilon _j^2)^{1/2}x_j\) yields
Identity (B.4) follows from (B.1), (B.5), and (B.6) with the change of variables \(t=\epsilon _i^2 z\). \(\square \)
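The Gamma-function formula used in step 4, namely \(\lambda ^{-h}=\frac{1}{\Gamma (h)}\int _0^\infty z^{h-1}e^{-\lambda z}\,dz\) for \(h,\lambda >0\), is easy to check numerically; the values of h and \(\lambda \) below are arbitrary illustrative choices:

```python
import math

# Check lambda^{-h} = (1/Gamma(h)) * int_0^infty z^{h-1} e^{-lambda z} dz
# by a midpoint rule; h and lam are arbitrary test values.
h, lam = 2.3, 1.7

dz = 5e-4
zmax = 40.0                      # e^{-1.7*40} ~ 3e-30: the tail is negligible
n_steps = int(zmax / dz)
integral = sum((dz * (k + 0.5)) ** (h - 1) * math.exp(-lam * dz * (k + 0.5)) * dz
               for k in range(n_steps))

lhs = lam ** (-h)
rhs = integral / math.gamma(h)
assert abs(lhs - rhs) < 1e-5
```

The change of variables \(t=\lambda z\) reduces the integral to the definition of \(\Gamma (h)\), which is what the numerical agreement reflects.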
Caffarelli, L., Charro, F. On a Fractional Monge–Ampère Operator. Ann. PDE 1, 4 (2015). https://doi.org/10.1007/s40818-015-0005-x