Introduction

The classical Monge–Ampère equation \(\det D^2u=f\) arises in many areas of analysis, geometry, and applied mathematics. Standard boundary value problems are the Dirichlet problem and the optimal transportation problem, in which one prescribes the image of the domain under the gradient map.

In the Dirichlet problem, one prescribes a smooth domain \(\Omega \subset \mathbb {R}^n\), boundary data g(x) on \(\partial \Omega \), and a right-hand side f(x) in \(\Omega \), and studies existence and regularity of a function u such that

$$\begin{aligned} \left\{ \begin{array}{lll} \det D^2u(x) =f(x) &{} \text {in} &{} \Omega \\ u(x)=g(x) &{} \text {on} &{} \partial \Omega . \end{array} \right. \end{aligned}$$
(1.1)

For the problem to fit into the framework of the theory of fully nonlinear elliptic equations, one must seek convex solutions u to ensure that \(\det D^2 u\) is indeed a monotone function of \(D^2u\). Thus, we must require f(x) to be positive and \(\Omega \) to be convex. The convexity of \(\Omega \) is required in order to construct appropriate smooth subsolutions that act as lower barriers, see [2].

In that case, there is considerable work in the literature establishing the existence, uniqueness and regularity of solutions to (1.1), see [1, 2, 8, 10] and the references therein. The main ingredients entering the theory are, roughly speaking, the following:

  1. (a)

The Monge–Ampère equation is a concave fully nonlinear equation. For a convex solution,

    $$\begin{aligned} \det D^2u=f \end{aligned}$$

    is equivalent to

    $$\begin{aligned} \inf _{L\in \mathcal {A}}Lu=f, \end{aligned}$$

    where \(\mathcal {A}\) is the family of linear operators \(Lu=\mathrm{trace}\left( AD^2u\right) \) for \(A>0\) with eigenvalues \(\lambda _j(A)\) that satisfy \(\prod _{j=1}^{n} \lambda _j(A)=n^{-n}f^{n-1}\). Furthermore, if we take nA equal to the matrix of cofactors of \(D^2u\), then

    $$\begin{aligned} n^n\prod _{j=1}^{n} \lambda _j(A)=(\det D^2u)^{n-1}=f^{n-1}, \end{aligned}$$

    the infimum is realized and the equation satisfied. Moreover, from the concavity of \(\det ^{1/n}(\cdot )\) any other choice of the eigenvalues would give a larger value than the prescribed f, making u a subsolution.

    In other words, the Monge–Ampère equation can be thought of as the infimum of a family of linear operators that consists of all affine transformations of determinant one of a given multiple of the Laplacian.

  2. (b)

    The fact that \(\det D^2u\) can be represented as a concave fully nonlinear equation implies that pure second derivatives are subsolutions of an equation with bounded measurable coefficients and as such, are bounded from above. Indeed, if we consider the second-order incremental quotient in the direction \(e\in \partial B_1(0)\),

$$\begin{aligned} \delta (u,x_0,he)=u(x_0+he)+u(x_0-he)-2u(x_0) \end{aligned}$$

    and choose

    $$\begin{aligned} Lv=\mathrm{trace}\left( AD^2v\right) \end{aligned}$$

    with nA the matrix of cofactors of \(D^2u(x_0)\), we have that \(Lu(x_0)=f(x_0)\) while on the other hand, the matrix

    $$\begin{aligned} B=\left[ \frac{f(x_0+he)}{f(x_0)}\right] ^\frac{n-1}{n}A \end{aligned}$$

    satisfies

    $$\begin{aligned} \det B=n^{-n}f(x_0+he)^{n-1}, \end{aligned}$$

    which makes it eligible to compete for the minimum of \(\mathrm{trace}\left( ND^2u(x_0+he)\right) \). This implies,

    $$\begin{aligned} \left[ \frac{f(x_0+he)}{f(x_0)}\right] ^\frac{n-1}{n}Lu(x_0+he)\ge f(x_0+he) \end{aligned}$$

    or equivalently,

    $$\begin{aligned} Lu(x_0+he)\ge f(x_0+he)^\frac{1}{n}f(x_0)^\frac{n-1}{n}. \end{aligned}$$

    We deduce that at a maximum of a second derivative \(D_{ee}^2u\) the function f must satisfy,

    $$\begin{aligned} f(x_0)^\frac{n-1}{n} D_{ee}^2\,f^{1/n}(x_0)\le 0. \end{aligned}$$

    If \(D_{ee}^2f\) is bounded and we have an appropriate barrier, plus control of the second derivatives of u at the boundary of \(\Omega \), we deduce that u is not only convex but also semiconcave. For that purpose, the boundary and data must be smooth and the domain strictly convex. This allows for the construction of appropriate subsolutions as barriers.

  3. (c)

Then, the last ingredient of the theory is that, for a convex solution, the equation \(\prod _{j=1}^{n}\lambda _j=f\) with f strictly positive implies that all \(\lambda _j\) are strictly positive (and not merely non-negative). This implies that the operators involved in the minimization can be restricted to a uniformly elliptic family and the corresponding general theory applies. In particular, the Evans–Krylov theorem implies that solutions are \(\mathcal {C}^{2,\alpha }\) and, from there, two derivatives smoother than f.
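The infimum representation in (a) lends itself to a quick numerical sanity check. The following Python sketch (our illustration, not part of the paper's argument; \(n=2\) with a sample Hessian H) verifies that every competitor A with \(\prod _j\lambda _j(A)=n^{-n}f^{n-1}\) gives \(\mathrm{trace}(AH)\ge f\), with equality when nA is the cofactor matrix of H:

```python
import math, random

n = 2
H = [[2.0, 0.5], [0.5, 1.0]]                 # a sample positive definite Hessian
f = H[0][0] * H[1][1] - H[0][1] ** 2          # det H = 1.75

detA = f ** (n - 1) / n ** n                  # constraint: prod of eigenvalues of A

def trace_AH(lam1, phi):
    # A = R diag(lam1, detA/lam1) R^t, with R the rotation by angle phi
    lam2 = detA / lam1
    c, s = math.cos(phi), math.sin(phi)
    a11 = lam1 * c * c + lam2 * s * s
    a22 = lam1 * s * s + lam2 * c * c
    a12 = (lam1 - lam2) * c * s
    return a11 * H[0][0] + 2 * a12 * H[0][1] + a22 * H[1][1]

random.seed(0)
values = [trace_AH(math.exp(random.uniform(-3, 3)), random.uniform(0, math.pi))
          for _ in range(10_000)]
assert min(values) >= f - 1e-9                # every competitor lies above f = det H

# the infimum is attained when nA is the cofactor matrix of H
A_opt = [[H[1][1] / n, -H[0][1] / n], [-H[0][1] / n, H[0][0] / n]]
attained = A_opt[0][0] * H[0][0] + 2 * A_opt[0][1] * H[0][1] + A_opt[1][1] * H[1][1]
assert abs(attained - f) < 1e-12
```

The lower bound is exactly the arithmetic–geometric mean inequality \(\mathrm{trace}(AH)\ge n(\det A\det H)^{1/n}\).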

The discussion above suggests that one could carry out a similar program for a non-local or fractional Monge–Ampère equation of the form

$$\begin{aligned} \inf L_Au=f \end{aligned}$$

where the set of operators \(L_A\) corresponds to that of all affine transformations of determinant one of a given multiple of the fractional Laplacian. In fact, one may consider any concave function of the Hessian as in [2] as an infimum of affine transformations of the Laplacian, the affine transformations corresponding now to the different linearization coefficients of the function \(F(\lambda _1,\ldots ,\lambda _n)\) and consider the corresponding nonlocal operator.

One can take \(\inf _{A\in \mathcal {A}} L_Au=f\), where \(\mathcal {A}\) corresponds to a family of symmetric positive matrices with determinant bounded from above and below,

$$\begin{aligned} 0<\lambda \le \det A\le \Lambda , \end{aligned}$$

and

$$\begin{aligned} L_Au(x)=\int _{\mathbb {R}^n}\frac{u(x+y)+u(x-y)-2u(x)}{|A^{-1}y|^{n+2s}}\,dy. \end{aligned}$$

The kernel under consideration need not necessarily be \(|A^{-1}x|^{-(n+2s)};\) it could be a more general kernel K(Ax). In fact, the geometry of the domain is an important issue for the “inherited from the boundary” regularity theory for degenerate operators depending on the eigenvalues of the Hessian, see [2].

In this article we shall set up a relatively simple framework of global solutions prescribing data at infinity and global barriers to avoid having to deal with the technical issues inherited from boundary data, which is rather complex for non-local equations. As in the second order case, we intend to prove:

  1. (a)

    Existence of solutions.

  2. (b)

    Solutions are semiconcave, i.e. second derivatives are bounded from above.

  3. (c)

    Along each line, the fractional Laplacian is bounded from above and strictly positive.

  4. (d)

    The operators that are close to the infimum remain strictly elliptic.

  5. (e)

The non-local fully nonlinear theory developed in [3, 4] applies, in particular the nonlocal Evans–Krylov theorem, and solutions are “classical”.

To be more precise, let us introduce the non-local Monge–Ampère operator \(\mathcal {D}_s\) that we are going to consider in the sequel, given by

$$\begin{aligned} \begin{aligned} \mathcal {D}_s u(x)&=\inf \bigg \{ \text {P.V.}\int _{\mathbb {R}^n}\frac{u(y)-u(x)}{|A^{-1}(y-x)|^{n+2s}}\, \, dy\ \bigg |\ A>0,\ \det A=1\bigg \}\\&=\inf \bigg \{\frac{1}{2} \int _{\mathbb {R}^n}\frac{u(x+y)+u(x-y)-2u(x)}{|A^{-1}y|^{n+2s}}\, dy\ \bigg |\ A>0,\ \det A=1\bigg \}. \end{aligned} \end{aligned}$$
(1.2)

We shall always use the definition that is most suitable to each case. Let us mention that if u is convex, asymptotically linear, and \(1/2<s<1\), then

$$\begin{aligned} \lim _{s\rightarrow 1}\big ((1-s)\,\mathcal {D}_s u(x)\big )=\det (D^2u(x))^{1/n}, \end{aligned}$$

up to a constant factor that depends only on the dimension n (see Appendix A for a proof of this fact).

Another recent approach to nonlocal Monge–Ampère operators is the one proposed in [5]. The interested reader should also check [7].

Remark 1.1

We can assume without loss of generality that the matrices A in the definition of \(\mathcal {D}_s u(x)\) are symmetric and positive definite. This follows from the (unique) polar decomposition of \(A^{-1}\), namely \(A^{-1} = OS^{-1}\), where O is orthogonal and \(S^{-1}\) is a positive definite symmetric matrix.
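This reduction can be illustrated numerically: since the kernel only sees \(|A^{-1}y|\), and \(|A^{-1}y|=|OS^{-1}y|=|S^{-1}y|\) for O orthogonal, replacing \(A^{-1}\) by its symmetric polar factor does not change the operator. A Python sketch (ours; \(n=2\), with a hand-rolled 2×2 symmetric square root):

```python
import math, random

def mat_vec(M, y):
    return (M[0][0] * y[0] + M[0][1] * y[1], M[1][0] * y[0] + M[1][1] * y[1])

def sym_factor(M):
    # S^{-1} = sqrt(M^t M), via the closed form for square roots of 2x2
    # symmetric positive definite matrices:
    # sqrt(B) = (B + sqrt(det B) I) / sqrt(trace B + 2 sqrt(det B))
    B = [[M[0][0] ** 2 + M[1][0] ** 2, M[0][0] * M[0][1] + M[1][0] * M[1][1]],
         [M[0][0] * M[0][1] + M[1][0] * M[1][1], M[0][1] ** 2 + M[1][1] ** 2]]
    d = math.sqrt(B[0][0] * B[1][1] - B[0][1] ** 2)
    t = math.sqrt(B[0][0] + B[1][1] + 2 * d)
    return [[(B[0][0] + d) / t, B[0][1] / t], [B[0][1] / t, (B[1][1] + d) / t]]

M = [[1.3, -0.7], [0.4, 0.9]]    # plays the role of A^{-1}: invertible, not symmetric
S_inv = sym_factor(M)
random.seed(1)
for _ in range(100):
    y = (random.uniform(-5, 5), random.uniform(-5, 5))
    # |A^{-1} y| = |S^{-1} y| for every y
    assert abs(math.hypot(*mat_vec(M, y)) - math.hypot(*mat_vec(S_inv, y))) < 1e-9
```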

We shall study the following Dirichlet problem,

$$\begin{aligned} \left\{ \begin{array}{lll} \mathcal {D}_su(x) =u(x)-\phi (x) &{} \text {in} &{} \mathbb {R}^n \\ (u- \phi )(x) \rightarrow 0 &{} \text {as} &{} |x|\rightarrow \infty , \end{array} \right. \end{aligned}$$
(1.3)

where \(1/2<s<1\) and we prescribe boundary data at infinity \(\phi (x)\) (that, at the same time, acts as a smooth lower barrier). The results below can be extended to the problem

$$\begin{aligned} \left\{ \begin{array}{lll} \mathcal {D}_su(x) = g\big (x,u(x)\big ) &{} \text {in} &{} \mathbb {R}^n \\ (u- \phi )(x) \rightarrow 0 &{} \!\text {as} &{} |x|\rightarrow \infty \end{array} \right. \end{aligned}$$
(1.4)

under appropriate assumptions on g (see [9] for a local analogue of problem (1.4)). Let us now describe the precise hypotheses that we shall require on g and \(\phi \).

First, \(\phi \in \mathcal {C}^{2,\alpha }(\mathbb {R}^n)\) is strictly convex in compact sets and \(\phi =\Gamma +\eta \) near infinity, with \(\Gamma (x)\) a cone and

$$\begin{aligned} |\eta (x)| \le a |x|^{-\epsilon }, \qquad |\nabla \eta (x)| \le a |x|^{-(1+\epsilon )}, \qquad \text {and} \qquad |D^2\eta (x) |\le a |x|^{-(2+\epsilon )} \end{aligned}$$

for some constants \(a>0\) and \(0<\epsilon <n\). In particular, as \(|x|\rightarrow \infty \),

$$\begin{aligned} -(- \Delta )^s \eta (x)= O\big ( |x| ^{-(2s+ \epsilon )}\big ) \end{aligned}$$

(see Section 2 for the definition of the fractional Laplacian) and

$$\begin{aligned} c_1|x|^{1-2s}\le -(- \Delta )^s \Gamma (x)\le c_2|x|^{1-2s} \end{aligned}$$

from homogeneity, where \(c_1,c_2\) are some positive constants depending on the strict convexity of the section of \(\Gamma \). We normalize \(\phi \) so that \(\phi (0)=0\), \(\nabla \phi (0)=0\).
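For a one-dimensional illustration of this homogeneity (ours, not part of the argument), take the cone \(\Gamma (x)=|x|\) on \(\mathbb {R}\); then \(\frac{1}{2}\int _{\mathbb {R}}\delta (\Gamma ,x,t)\,|t|^{-1-2s}\,dt\) has the closed form \(x^{1-2s}/(s(2s-1))\) for \(x>0\) and is homogeneous of degree \(1-2s\), as the following Python sketch checks by quadrature (the normalizing constant \(c_{1,s}\) is omitted):

```python
import math

def frac_lap_cone(x, s, T=10_000.0, N=200_000):
    # delta(G, x, t) = 0 for |t| <= x and 2(t - x) for t > x (x > 0), so
    # (1/2) * integral of delta / |t|^{1+2s} = 2 * int_x^inf (t - x) t^{-1-2s} dt
    h = (T - x) / N
    total = 0.0
    for i in range(N):                       # midpoint rule on (x, T]
        t = x + (i + 0.5) * h
        total += (t - x) * t ** (-1 - 2 * s)
    # exact tail: int_T^inf (t - x) t^{-1-2s} dt
    tail = T ** (1 - 2 * s) / (2 * s - 1) - x * T ** (-2 * s) / (2 * s)
    return 2 * (total * h + tail)

s = 0.75
I1, I2 = frac_lap_cone(1.0, s), frac_lap_cone(2.0, s)
assert abs(I2 / I1 - 2 ** (1 - 2 * s)) < 1e-3   # homogeneity of degree 1 - 2s
assert abs(I1 - 1 / (s * (2 * s - 1))) < 1e-2   # closed form x^{1-2s}/(s(2s-1))
```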

The model problem that we consider is \(g(x,u(x))=u(x)- \phi (x)\). On the other hand, the general hypotheses on \(g: \mathbb {R}^{n+1} \rightarrow \mathbb {R}\) that we shall consider are:

$$\begin{aligned}&g\,\text {is globally semiconvex with constant}\,C, \end{aligned}$$
(1.5)
$$\begin{aligned}&x\mapsto g(x,t)\,\text {is Lipschitz continuous with constant}\,\text {Lip}(g),\,\text {uniformly in}\,t,\qquad \qquad \end{aligned}$$
(1.6)

and, there exists \(\mu >0\) such that

$$\begin{aligned} g(x,t_1)-g(x,t_2)\ge \mu (t_1-t_2)\qquad \forall t_1,t_2\in \mathbb {R},\ t_1\ge t_2\quad \text {uniformly in}\ x. \end{aligned}$$
(1.7)

We would like to point out that hypothesis (1.5) implies that the function g is locally Lipschitz continuous (see for instance [6, Proposition 2.1.7]). In particular,

$$\begin{aligned} \frac{|g(x,t)-g(y,t)|}{|x-y|} \le \frac{2\,\text {osc}\big (g(\cdot ,t),\overline{B}_{R/2}(x_0)\big )}{R}+CR \qquad \forall x,y\in B_{R/2}(x_0), \end{aligned}$$

for any \(R>0\). Therefore, hypothesis (1.6) could be replaced, for instance, by the following

$$\begin{aligned} \begin{aligned}&x\mapsto g(x,t)\ \text {is Lipschitz continuous in}\,\mathbb {R}^{n}\setminus B_{R_0}(0)\,\text {for some radius}\,{R_0}>0 \\&\quad \text {with constant}\,\text {Lip}(g,\mathbb {R}^{n}\setminus B_{R_0}(0)),\,\text {uniformly in}\,t, \end{aligned} \end{aligned}$$
(1.8)

and

$$\begin{aligned} \text {osc}\big (g(\cdot ,t),\overline{B}_{ R_0/2}(x_0)\big )\ \text {bounded in}\,t. \end{aligned}$$
(1.9)

In the sequel, we shall assume (1.6) for simplicity.

The paper is organized as follows. In Section 2 we present the notation, the notion of solution, and some preliminary results. In Section 3 we prove the main result of the paper, namely, that matrices that are too degenerate do not count for the infimum in (1.2), effectively proving that the fractional Monge–Ampère operator is locally uniformly elliptic and thus the known theory for uniformly elliptic nonlocal operators applies (see for instance [3] and the references therein). In Section 4 we prove a comparison principle for problem (1.4), and in Section 5 we prove Lipschitz continuity and semiconcavity of solutions to problem (1.4). Finally, in Section 6 we prove existence of solutions to the model problem (1.3).

Notation and Preliminaries

In this section we are going to state notations and recall some basic results and definitions.

For square matrices, \(A>0\) means positive definite and \(A\ge 0\) positive semidefinite. We shall denote \(\lambda _i(A)\) the eigenvalues of A, in particular \(\lambda _{\min }(A)\) and \(\lambda _{\max }(A)\) are the smallest and largest eigenvalues, respectively.

We shall denote the k-dimensional ball of radius 1 and center 0 by \(B_1^{k}(0)=\{x\in \mathbb {R}^k:\ |x|\le 1\}\) and the corresponding \((k-1)\)-dimensional sphere by \(\partial B_1^{k}(0)=\{x\in \mathbb {R}^k:\ |x|=1\}\). Whenever k is clear from context, we shall simply write \(B_1(0)\) and \(\partial B_1(0)\). \(\mathcal {H}^{k}\) stands for the k-dimensional Hausdorff measure. We shall denote \(\omega _{k}= \mathcal {H}^{k-1}\big (\partial B_1^{k}(0)\big )=k\,|B_1^{k}(0)|=\frac{2\pi ^{k/2}}{\Gamma (k/2)}\).
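The identity \(\omega _k=k\,|B_1^{k}(0)|=2\pi ^{k/2}/\Gamma (k/2)\) follows from \(|B_1^{k}(0)|=\pi ^{k/2}/\Gamma (k/2+1)\) and \(\Gamma (k/2+1)=\frac{k}{2}\Gamma (k/2)\); a quick Python check (ours):

```python
import math

def omega(k):
    # omega_k = 2 pi^{k/2} / Gamma(k/2)
    return 2 * math.pi ** (k / 2) / math.gamma(k / 2)

assert abs(omega(2) - 2 * math.pi) < 1e-12       # length of the unit circle
assert abs(omega(3) - 4 * math.pi) < 1e-12       # area of the unit sphere
for k in range(1, 10):
    ball_volume = math.pi ** (k / 2) / math.gamma(k / 2 + 1)
    assert abs(omega(k) - k * ball_volume) < 1e-10
```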

Given a function u, we shall denote the second-order increment of u at x in the direction of y as \(\delta (u,x,y)=u(x+y)+u(x-y)-2u(x)\).

Let \(A\subset \mathbb {R}^n\) be an open set. We say that a function \(u:A \rightarrow \mathbb {R}\) is semiconcave if it is continuous in A and there exists \(C \ge 0\) such that \(\delta (u,x,y) \le C|y|^2\) for all \(x, y\in \mathbb {R}^n\) such that \([x-y, x +y] \subset A\). The constant C is called a semiconcavity constant for u in A.

Alternatively, a function u is semiconcave in A with constant C if \(u(x)- \frac{C}{2}|x|^2\) is concave in A. Geometrically, this means that the graph of u can be touched from above at every point by a paraboloid of the type \(a+\langle b,x\rangle +\frac{C}{2}|x|^2\).

A function u is called semiconvex in A if \(-u\) is semiconcave.
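As a toy example (ours, not from the paper), \(u(x)=\sin x\) is semiconcave on \(\mathbb {R}\) with constant \(C=1\): indeed \(\delta (u,x,y)=2\sin x\,(\cos y-1)\) and \(1-\cos y\le y^2/2\). The Python sketch below tests this numerically:

```python
import math, random

def delta(u, x, y):
    # second-order increment of u at x in the direction y
    return u(x + y) + u(x - y) - 2 * u(x)

# delta(sin, x, y) = 2 sin(x)(cos(y) - 1) <= 2(1 - cos y) <= y^2
random.seed(4)
for _ in range(10_000):
    x, y = random.uniform(-10, 10), random.uniform(-10, 10)
    assert delta(math.sin, x, y) <= y * y + 1e-12
```

Equivalently, \(\sin x - x^2/2\) is concave, since its second derivative is \(-\sin x-1\le 0\).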

Let us mention here for the reader’s convenience the definition of the fractional Laplacian,

$$\begin{aligned} -(- \Delta )^{s}u(x)= & {} c_{n,s}\,\text {P.V.}\int _{\mathbb {R}^n}\frac{u(y)-u(x)}{|y-x|^{n+2s}}\, \, dy \\ {}= & {} \frac{c_{n,s}}{2}\,\int _{\mathbb {R}^n}\frac{u(x+y)+u(x-y)-2u(x)}{|y|^{n+2s}}\, \, dy \end{aligned}$$

where \(c_{n,s}\) is a normalization constant. Notice that \(-c_{n,s}^{-1}\,(- \Delta )^{s}u(x)\) belongs to the class of operators over which the infimum in the definition of \(\mathcal {D}_s u(x)\) is taken.

We recall from [3] the notion of viscosity solution that we are going to use in the sequel.

Definition 2.1

A function \(u :\mathbb {R}^n \rightarrow \mathbb {R}\), upper (resp. lower) semicontinuous in \(\overline{\Omega }\), is said to be a subsolution (supersolution) to \(\mathcal {D}_su = f\), and we write \(\mathcal {D}_su\ge f\) (resp. \(\mathcal {D}_su \le f\)), if every time all the following happen,

  • x is a point in \(\Omega \),

  • N is an open neighborhood of x in \(\Omega \),

  • \(\psi \) is some \(C^2\) function in \(\overline{N}\),

  • \(\psi (x) = u(x)\),

  • \(\psi (y) > u(y)\) (resp. \(\psi (y) < u(y)\)) for every \(y \in N \setminus \{x\}\),

and if we let

$$\begin{aligned} v := {\left\{ \begin{array}{ll} \psi &{}\text {in } \quad N \\ u &{}\text {in }\quad \mathbb {R}^n \setminus N \ , \end{array}\right. } \end{aligned}$$

then we have \(\mathcal {D}_sv(x) \ge f(x)\) (resp. \(\mathcal {D}_sv(x) \le f(x)\)). A solution is a function u that is both a subsolution and a supersolution.

The following lemma states that \(\mathcal {D}_su\) can be evaluated classically at those points x where u can be touched by a paraboloid.

Lemma 2.2

Let \(1/2<s<1\) and \(u:\mathbb {R}^n\rightarrow \mathbb {R}\) with asymptotically linear growth. If we have \(\mathcal {D}_{s}u\ge f\) in \(\mathbb {R}^n\) (resp. \(\mathcal {D}_{s}u\le f\)) in the viscosity sense and \(\psi \) is a \(\mathcal {C}^2\) function that touches u from above (below) at a point x, then \(\mathcal {D}_s u(x)\) is defined in the classical sense and \(\mathcal {D}_s u(x)\ge f(x)\) (resp. \(\mathcal {D}_{s}u(x)\le f(x)\)).

Proof

Let us deal first with the subsolution case, that is, assume first that \(\psi \in \mathcal {C}^2\) touches u from above at a point x. Define for \(r>0\),

$$\begin{aligned} v_r(y) = {\left\{ \begin{array}{ll} \psi (y) &{} \text {in } \quad B_r(x) \\ u(y) &{} \text {in } \quad \mathbb {R}^n \setminus B_r(x). \end{array}\right. } \end{aligned}$$

Then, we have that

$$\begin{aligned} -c_{n,s}^{-1}\,(- \Delta )^{s}v_r(x)\ge \mathcal {D}_{s}v_r(x)\ge f(x) \end{aligned}$$

and then the arguments in the proof of [3, Lemma 3.3] yield that \(\delta (u,x,y)/|y|^{n+2s}\) is integrable. Therefore, \(-(- \Delta )^su(x)\) is defined in the classical sense and \(\mathcal {D}_s u(x)<+\infty \). Notice that,

$$\begin{aligned} \lambda _{\min }(A)^{n+2s}\frac{\delta (u,x,y)}{|y|^{n+2s}}\le \frac{\delta (u,x,y)}{|A^{-1}y|^{n+2s}}\le \lambda _{\max }(A)^{n+2s}\frac{\delta (u,x,y)}{|y|^{n+2s}}. \end{aligned}$$

Thus, \(\delta (u,x,y)/|A^{-1}y|^{n+2s}\) is integrable and

$$\begin{aligned} L_A u(x)= \frac{1}{2}\int _{\mathbb {R}^n}\frac{\delta (u,x,y)}{|A^{-1}y|^{n+2s}}\, dy \end{aligned}$$
(2.1)

is also defined in the classical sense. By definition of viscosity solution, we have

$$\begin{aligned} L_A u(x)+ L_A (v_r-u)(x)\ge \mathcal {D}_s v_r(x)\ge f(x). \end{aligned}$$

But then, \(0\le \delta (v_r-u,x,y)\le \delta (v_{r_0}-u,x,y)\) for all \(r<r_0\), \(\delta (v_{r_0}-u,x,y)/|A^{-1}y|^{n+2s}\) is integrable and \(\delta (v_r-u,x,y)\rightarrow 0\) as \(r\rightarrow 0\). Hence, by the dominated convergence theorem, \(L_A (v_r-u)(x)\rightarrow 0\) as \(r\rightarrow 0\). We conclude \(L_A u(x)\ge f(x)\) in the classical sense. Since the matrix A is arbitrary and we could pick any matrix \(A>0\) with \(\det A=1\), we have that \(\mathcal {D}_s u(x)\ge f(x)\) in the classical sense.

In the supersolution case, that is, when \(\psi \in \mathcal {C}^2\) touches u from below at x, some modifications are required. Fix \(\epsilon >0\), arbitrary, and let \(A_\epsilon >0\) with \(\det A_\epsilon =1\) such that

$$\begin{aligned} L_{A_\epsilon } v_r(x)\le f(x)+\epsilon . \end{aligned}$$

It is easy to see that \(\delta (v_r,x,y)\) is non-decreasing in r and \(\delta (v_r,x,y)\rightarrow \delta (u,x,y)\) as \(r\rightarrow 0\). By the monotone convergence theorem, \(\delta (u,x,y)/|A_{\epsilon }^{-1}y|^{n+2s}\) is integrable and \(L_{A_\epsilon } u(x)\le f(x)+\epsilon \) in the classical sense. We find that

$$\begin{aligned} \mathcal {D}_s u(x)\le L_{A_\epsilon } u(x)\le f(x)+\epsilon , \end{aligned}$$

and we conclude letting \(\epsilon \rightarrow 0\), since it is arbitrary. \(\square \)

Local Uniform Ellipticity of the Fractional Monge–Ampère Equation

In this section we shall prove that the infimum in the definition of \(\mathcal {D}_s\), see (1.2), cannot be realized by matrices that are too degenerate, effectively proving that the fractional Monge–Ampère operator is locally uniformly elliptic. Then, existing theory for uniformly elliptic operators is available (see [3, 4] and the references therein).

To this aim, consider the following approximating, non-degenerate operator,

$$\begin{aligned} \begin{aligned} \mathcal {D}_s^\theta u(x)&=\inf \bigg \{ \text {P.V.}\int _{\mathbb {R}^n}\frac{u(y)-u(x)}{|A^{-1}(y-x)|^{n+2s}}\, \, dy\ \bigg |\ A>0,\ \det A=1,\ \lambda _{\min }(A)\ge \theta \bigg \}\\&=\inf \bigg \{\frac{1}{2} \int _{\mathbb {R}^n}\frac{u(x+y)+u(x-y)-2u(x)}{|A^{-1}y|^{n+2s}}\, dy\ \bigg |\ A>0,\\&\qquad \quad \quad \det A=1,\ \lambda _{\min }(A)\ge \theta \bigg \}. \end{aligned} \end{aligned}$$
(3.1)

Let us point out that the conditions \(\det A=1\) and \(\lambda _{\min }(A)\ge \theta \) imply \(\lambda _{\max }(A)\le \theta ^{1-n}\), and this bound is realized by matrices with eigenvalues \(\theta \) (multiplicity \(n-1\)) and \(\theta ^{1-n}\) (simple). Therefore, \(\mathcal {D}_s^\theta \) belongs to the class of uniformly elliptic, nonlocal operators with extremal Pucci operators

$$\begin{aligned} \mathcal {M}_{\theta ,\theta ^{1-n}}^{+} u(x)=\sup \bigg \{\frac{1}{2} \int _{\mathbb {R}^n}\frac{u(x+y)+u(x-y)-2u(x)}{|A^{-1}y|^{n+2s}}\, dy\ \bigg |\ \theta \, I\le A \le \theta ^{1-n}I\bigg \} \end{aligned}$$

and

$$\begin{aligned} \mathcal {M}_{\theta ,\theta ^{1-n}}^{-} u(x)=\inf \bigg \{\frac{1}{2} \int _{\mathbb {R}^n}\frac{u(x+y)+u(x-y)-2u(x)}{|A^{-1}y|^{n+2s}}\, dy\ \bigg |\ \theta \, I\le A \le \theta ^{1-n}I\bigg \}. \end{aligned}$$

Observe that in general \(\mathcal {M}_{\theta ,\theta ^{1-n}}^{-} u(x)< \mathcal {D}_s^\theta u(x),\) as the class of matrices over which the infimum is taken is broader for the Pucci operator.
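The implication \(\det A=1\), \(\lambda _{\min }(A)\ge \theta \Rightarrow \lambda _{\max }(A)\le \theta ^{1-n}\) follows from \(\lambda _{\max }=\big (\prod _{j\ne \max }\lambda _j\big )^{-1}\le \theta ^{-(n-1)}\). The following Python sketch (ours; \(n=3\)) samples admissible eigenvalue configurations and checks the bound and its extremal case:

```python
import math, random

n, theta = 3, 0.2
random.seed(2)
tested = 0
for _ in range(10_000):
    # sample n-1 eigenvalues >= theta and fix the last one via det A = 1;
    # keep the sample only if it also satisfies the lower bound
    lams = [theta * math.exp(random.uniform(0, 2)) for _ in range(n - 1)]
    last = 1.0 / math.prod(lams)
    if last < theta:
        continue
    lams.append(last)
    tested += 1
    assert max(lams) <= theta ** (1 - n) + 1e-12

assert tested > 0
# the bound is attained by eigenvalues theta (multiplicity n-1) and theta^{1-n}
extremal = [theta] * (n - 1) + [theta ** (1 - n)]
assert abs(math.prod(extremal) - 1.0) < 1e-12
assert max(extremal) == theta ** (1 - n)
```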

The main result of this section and of the paper is the following.

Theorem 3.1

Consider \(\frac{1}{2}<s<1\) and let u be Lipschitz continuous and semiconcave (with constants L and C respectively) and such that

$$\begin{aligned} (1-s)\,\mathcal {D}_su(x)\ge \eta _0 \quad \forall x \in \Omega \end{aligned}$$
(3.2)

in the viscosity sense for some constant \(\eta _0>0\) and \(\Omega \subset \mathbb {R}^n\). Then,

$$\begin{aligned} \mathcal {D}_s u(x)=\mathcal {D}_s^\theta u(x) \quad \forall x \in \Omega \end{aligned}$$
(3.3)

in the classical sense, for \(\mathcal {D}_s^\theta \) the approximating operator defined by (3.1) and

$$\begin{aligned} \theta < \left( \frac{\mu _0}{n\mu _1} \right) ^ \frac{n-1}{2s} \end{aligned}$$

with \(\mu _0,\mu _1\) defined in (3.8) and (3.9) below.

Remark 3.2

(Limits as \(s\rightarrow 1\)). It can be checked that

$$\begin{aligned} \frac{\mu _0}{\mu _1} = O\big (\eta _0^n (2s-1)^{n}\big ) \end{aligned}$$

as \(s\rightarrow 1\). In particular, Theorem 3.1 is stable in the limit as \(s\rightarrow 1\).

It is illustrative for the sequel to show how the ideas in the proof of Theorem 3.1 work in the local case. More precisely, assume u semiconcave with constant C and such that

$$\begin{aligned} \frac{\omega _n}{4n}\cdot \inf \big \{\mathrm{trace}\big (AA^tD^2u(x)\big )\, |\ \det A=1\big \}\ge \eta _0\qquad \forall x\in \Omega \end{aligned}$$

(about the normalization \((4n)^{-1}\omega _n\), recall (3.2) and Lemma A.2). We want to prove that the Monge–Ampère operator is actually non-degenerate, that is,

$$\begin{aligned}&\inf \big \{\mathrm{trace}\big (AA^tD^2u(x)\big )\, |\ \det A=1\big \}\\&\quad \quad =\inf \big \{\mathrm{trace}\big (AA^tD^2u(x)\big )\, |\ \det A=1,\ \lambda _{\min }(A)\ge \theta \big \} \end{aligned}$$

for some \(\theta >0\). The proof has two steps:

1. The second derivative of u in the direction e is strictly positive and bounded (uniformly) for every direction. More precisely,

$$\begin{aligned} 0<\bar{\mu }_0\le u_{ee}(x)\le C\qquad \forall e\in \partial B_1(0), \end{aligned}$$
(3.4)

for \(\bar{\mu }_0\) independent of e (given by (3.5) below), and C the semiconcavity constant of u. The proof of the upper bound follows from the definition of semiconcavity. For the lower bound, choose \(A=PJP^t\) with J a diagonal matrix with eigenvalues \(\epsilon \) (single) and \(\epsilon ^\frac{1}{1-n}\) (multiplicity \(n-1\)), and P an orthogonal matrix whose i-th column is e (notice that \(\det (A) =1\)). Then,

$$\begin{aligned} \begin{aligned} \frac{4n\,\eta _0}{\omega _n}\le \text {trace}(AA^tD^2u(x))&=\sum _{j=1}^{n}\lambda _j^2(A)(P^tD^2u(x)P)_{jj}\\&=\epsilon ^2\,(P^tD^2u(x)P)_{ii}+\epsilon ^\frac{2}{1-n}\sum _{j\ne i}^{n}(P^tD^2u(x)P)_{jj}\\&\le \epsilon ^2\,(P^tD^2u(x)P)_{ii}+C(n-1)\epsilon ^\frac{2}{1-n} \end{aligned} \end{aligned}$$

by semiconcavity. Choosing \(\epsilon \) appropriately, namely \(\epsilon =\big (\frac{1}{2}C(n-1)\omega _nn^{-1}\eta _0^{-1}\big )^\frac{n-1}{2}\), so that the last term equals \(\frac{2n\eta _0}{\omega _n}\), we get

$$\begin{aligned} 0<\bar{\mu }_0\le \big (P^tD^2u(x)P\big )_{ii}\le C\qquad \forall i=1,\ldots ,n, \end{aligned}$$

which is equivalent to (3.4). For future reference, \(\bar{\mu }_0\) is given by

$$\begin{aligned} \bar{\mu }_0=\left( \frac{2n\eta _0}{\omega _n}\right) ^n(C(n-1))^{1-n}. \end{aligned}$$
(3.5)

2. The infimum in the Monge–Ampère operator cannot be achieved for matrices that are too degenerate. More precisely, let A with \(\det (A)=1\) and write \(A=PJP^t\) with P orthogonal; then

$$\begin{aligned} \text {trace}(AA^tD^2u(x))=\sum _{i=1}^{n}\lambda _i^2(A)(P^tD^2u(x)P)_{ii}\ge \bar{\mu }_0\sum _{i=1}^{n}\lambda _i^2(A)\ge \bar{\mu }_0\, \lambda _{\min }(A)^{- \frac{2}{n-1}}, \end{aligned}$$
(3.6)

using that \(1=\det (A)\le \lambda _{\min }(A)\lambda _{\max }(A)^{n-1}\). We conclude that matrices with very small eigenvalues will produce very large operators that will not count for the infimum (see the proof of Theorem 3.1 for details).
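The last inequality in (3.6), \(\sum _i\lambda _i^2(A)\ge \lambda _{\min }(A)^{-2/(n-1)}\) when \(\det A=1\), can be tested numerically; a short Python sketch (ours; \(n=4\), random eigenvalue samples):

```python
import math, random

n = 4
random.seed(3)
for _ in range(10_000):
    lams = [math.exp(random.uniform(-2, 2)) for _ in range(n - 1)]
    lams.append(1.0 / math.prod(lams))       # enforce det A = 1
    lhs = sum(l * l for l in lams)
    # from 1 = det A <= lam_min * lam_max^{n-1}: lam_max^2 >= lam_min^{-2/(n-1)}
    rhs = min(lams) ** (-2.0 / (n - 1))
    assert lhs >= rhs - 1e-9
```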

For simplicity, we shall assume that \(0 \in \Omega \) and then prove (3.3) for \(x=0\). Note for the sequel that since u is semiconcave, Lemma 2.2 implies that \(\mathcal {D}_s u(x)\) is defined in the classical sense for all \(x\in \Omega \) and (3.2) holds pointwise.

The proof of Theorem 3.1 has, again, two parts. In the first part we prove that the (one-dimensional) fractional Laplacian of the restriction of u to any line is positive and bounded from above. Then, in the second part, we shall use this fact to prove that

$$\begin{aligned} (1-s)\int _{\mathbb {R}^n}\frac{u(y)-u(0)}{|A^{-1}y|^{n+2s}}\,dy\ge \frac{ \mu _0\,\omega _n}{2n}\, \sum _{j=1}^{n}\lambda _j^{2s}(A)\ge \frac{ \mu _0\,\omega _n}{2n}\, \lambda _{\min }(A)^{- \frac{2s}{n-1}}. \end{aligned}$$
(3.7)

for \(\mu _0\) given by (3.8). Therefore the infimum in the fractional Monge–Ampère operator cannot be achieved for matrices that are too degenerate.

The two parts we have mentioned correspond to the following two results.

Proposition 3.3

Assume the same hypotheses of Theorem 3.1. Then, for every \(e\in \partial B_1(0)\),

$$\begin{aligned} 0<\mu _0\le - (1-s)\big (- \Delta \big )_{e}^su(0)=(1-s)\int _{\mathbb {R}} \frac{u(te)-u(0)}{|t|^{1+2s}}\,dt\le \mu _1, \end{aligned}$$

with

$$\begin{aligned} \mu _0=C_1^{1-n}C_2^{-1}\left( \frac{\eta _0}{2}\right) ^n \end{aligned}$$
(3.8)

for \(C_1,C_2\) defined in (3.12) and (3.13), and

$$\begin{aligned} \mu _1=\frac{1-s}{2}\int _{\mathbb {R}} \frac{\min \{2L\,|t|,C|t|^2\}}{|t|^{1+2s}}\,dt=\frac{L^{2-2s}}{2s-1}\left( \frac{C}{2}\right) ^{2s-1}. \end{aligned}$$
(3.9)
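The closed form (3.9) for \(\mu _1\) can be confirmed by quadrature; the Python sketch below (ours; the integral is truncated at T, beyond which the integrand is exactly \(2L\,t^{-1-2s}\) and the tail is added analytically) checks it for sample values of L, C and s:

```python
import math

def mu1_quadrature(L, C, s, T=1000.0, N=500_000):
    # midpoint rule on (0, T] for (1-s)/2 * int_R min{2L|t|, C|t|^2} / |t|^{1+2s} dt
    h = T / N
    total = 0.0
    for i in range(N):
        t = (i + 0.5) * h
        total += min(2 * L * t, C * t * t) * t ** (-1 - 2 * s)
    tail = 2 * L * T ** (1 - 2 * s) / (2 * s - 1)   # exact for T > 2L/C
    return (1 - s) * (total * h + tail)             # (1-s)/2 int_R = (1-s) int_0^inf

checks = []
for L, C, s in [(1.0, 2.0, 0.75), (3.0, 0.5, 0.6)]:
    closed = L ** (2 - 2 * s) * (C / 2) ** (2 * s - 1) / (2 * s - 1)
    checks.append((mu1_quadrature(L, C, s), closed))
    assert abs(checks[-1][0] - checks[-1][1]) < 2e-2 * closed
```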

Remark 3.4

Proposition 3.3 yields (3.4) in the limit as \(s\rightarrow 1\) since \(\lim _{s\rightarrow 1}\mu _0=\bar{\mu }_0/2\) (with \(\bar{\mu }_0\) defined by (3.5)), \(\lim _{s\rightarrow 1}\mu _1=C/2\) and

$$\begin{aligned} \lim _{s\rightarrow 1}(1-s)\int _{\mathbb {R}} \frac{u(te)-u(0)}{|t|^{1+2s}}\,dt=\frac{u_{ee}(0)}{2}. \end{aligned}$$
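The limit in Remark 3.4 can also be observed numerically. In the Python sketch below (ours; \(u(t)=\sqrt{1+t^2}\), so \(u''(0)=1\) and the limit should be 1/2), the quadratic part of u near the origin is integrated in closed form to tame the singularity as \(s\rightarrow 1\):

```python
import math

def F(s, T=1000.0, N=200_000):
    u = lambda t: math.sqrt(1 + t * t)
    # int_0^1 (u(t) - 1 - t^2/2) t^{-1-2s} dt: smooth integrand, ~ -t^{3-2s}/8
    h1 = 1.0 / N
    part1 = 0.0
    for i in range(N):
        t = (i + 0.5) * h1
        part1 += (u(t) - 1 - t * t / 2) * t ** (-1 - 2 * s)
    part1 *= h1
    part2 = 1 / (2 * (2 - 2 * s))            # int_0^1 (t^2/2) t^{-1-2s} dt, exact
    # int_1^T (u(t) - 1) t^{-1-2s} dt by quadrature, then a tail using u(t) ~ t
    h2 = (T - 1) / N
    part3 = 0.0
    for i in range(N):
        t = 1 + (i + 0.5) * h2
        part3 += (u(t) - 1) * t ** (-1 - 2 * s)
    part3 *= h2
    tail = T ** (1 - 2 * s) / (2 * s - 1) - T ** (-2 * s) / (2 * s)
    return 2 * (1 - s) * (part1 + part2 + part3 + tail)

F999, F9999 = F(0.999), F(0.9999)
assert abs(F999 - 0.5) < 0.01     # approaches u''(0)/2 = 1/2 as s -> 1
assert abs(F9999 - 0.5) < 0.001
```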

Proposition 3.5

Assume \(\epsilon _1,\ldots ,\epsilon _n\) are positive constants such that \(\prod _{j=1}^n\epsilon _j=1\). Then, under the same hypotheses of Theorem 3.1, we have,

$$\begin{aligned} (1-s)\int _{\mathbb {R}^n}\frac{u(y)-u(0)}{\left( \sum _{j=1}^n\epsilon _j^2y_j^2\right) ^\frac{n+2s}{2}}\,dy \ge \frac{ \mu _0\,\omega _n}{2n}\cdot \sum _{j=1}^{n}\frac{1}{\epsilon _j^{2s}}, \end{aligned}$$

with \(\mu _0\) defined in (3.8).

Remark 3.6

Proposition 3.5 implies (3.7), which yields (3.6) in the limit as \(s\rightarrow 1\) since

$$\begin{aligned} \lim _{s\rightarrow 1}(1-s)\int _{\mathbb {R}^n}\frac{u(y)-u(0)}{|A^{-1}y|^{n+2s}}\,dy=\frac{\omega _n}{4n}\,\text {trace}(AA^tD^2u(x)). \end{aligned}$$

Propositions 3.3 and 3.5 (that we prove below) allow us to prove the main result of this section, Theorem 3.1.

Proof of Theorem 3.1

Consider a symmetric matrix \(A>0\) with \(\det A=1\) and \(\lambda _{\min }(A)<\frac{1}{k}\). We can write \(A=PJP^t\) with P orthogonal and J diagonal, and denote \(\tilde{u}(y)=u(Py)\) and \(\epsilon _j=\lambda _j(A)^{-1}\), so that \(\prod _{j=1}^n\epsilon _j=1\). Observe that then Proposition 3.5 (see also (3.7)) implies

$$\begin{aligned} \int _{\mathbb {R}^n}\frac{u(y)-u(0)}{|A^{-1}y|^{n+2s}}\,dy= & {} \int _{\mathbb {R}^n}\frac{u(Py)-u(0)}{|J^{-1}y|^{n+2s}}\,dy = \int _{\mathbb {R}^n}\frac{\tilde{u}(y)- \tilde{u}(0)}{\left( \sum _{j=1}^n\epsilon _j^2y_j^2\right) ^\frac{n+2s}{2}}dy\\> & {} \frac{\mu _0\,\omega _n}{2n(1-s)} \,k^\frac{2s}{n-1} \end{aligned}$$

and we get the estimate

$$\begin{aligned} \inf \left\{ \int _{\mathbb {R}^n}\frac{u(y)-u(0)}{|A^{-1}y|^{n+2s}}\,dy\ \bigg |\ A>0,\ \det A=1, \ \lambda _{\min }(A)<\frac{1}{k} \right\} \ge \frac{\mu _0\,\omega _n}{2n(1-s)} \,k^\frac{2s}{n-1}. \end{aligned}$$
(3.10)

Observe that by choosing \(A=I\), Proposition 3.3 yields

$$\begin{aligned}&\inf \left\{ \int _{\mathbb {R}^n}\frac{u(y)-u(0)}{|A^{-1}y|^{n+2s}}\,dy\ \bigg |\ A>0,\ \det A=1 \right\} \le \int _{\mathbb {R}^n}\frac{u(y)-u(0)}{|y|^{n+2s}}\,dy\nonumber \\&\quad = \int _{\partial {B_1(0)}}\int _0^\infty \frac{u(re)-u(0)}{r^{1+2s}}\,dr\,d\mathcal {H}^{n-1}(e) \le \frac{\mu _1 \omega _{n}}{2(1-s)}. \end{aligned}$$
(3.11)

Therefore, from (3.10) and (3.11) we have that whenever \(k> \left( n\mu _1\mu _0^{-1} \right) ^ \frac{n-1}{2s}\),

$$\begin{aligned}&\inf \left\{ \int _{\mathbb {R}^n}\frac{u(y)-u(0)}{|A^{-1}y|^{n+2s}}\,dy\ \bigg |\ A>0,\ \det A=1, \ \lambda _{\min }(A)<\frac{1}{k} \right\} \\&\quad > \inf \left\{ \int _{\mathbb {R}^n}\frac{u(y)-u(0)}{|A^{-1}y|^{n+2s}}\,dy\ \bigg |\ A>0,\ \det A=1 \right\} . \end{aligned}$$

This implies (3.3), since

$$\begin{aligned}&\inf \left\{ \int _{\mathbb {R}^n}\frac{u(y)-u(0)}{|A^{-1}y|^{n+2s}}\,dy\ \bigg |\ A>0,\ \det A=1 \right\} \\&\quad = \mathrm{min}\Bigg \{ \inf \left\{ \int _{\mathbb {R}^n}\frac{u(y)-u(0)}{|A^{-1}y|^{n+2s}}\,dy\ \bigg |\ A>0,\ \det A=1, \ \lambda _{\min }(A)<\frac{1}{k} \right\} , \\&\quad \qquad \qquad \,\, \inf \left\{ \int _{\mathbb {R}^n}\frac{u(y)-u(0)}{|A^{-1}y|^{n+2s}}\,dy\ \bigg |\ A>0,\ \det A=1, \ \lambda _{\min }(A)\ge \frac{1}{k} \right\} \Bigg \}. \end{aligned}$$

\(\square \)

The rest of this section is devoted to the proof of Propositions 3.3 and 3.5.

Proof of Propositions 3.3 and 3.5

Our goal is to prove that the (one-dimensional) fractional Laplacian of the restriction of u to any line is positive and bounded from above. In the proof of Proposition 3.3 we need several partial results.

In the sequel, we denote \(\bar{y}=(y_2,\ldots ,y_n)\in \mathbb {R}^{n-1}\) and \(v(y)=u(y)-u(0)\).

Lemma 3.7

Let \(\epsilon >0\) and assume the same hypotheses of Theorem 3.1. Then,

$$\begin{aligned} (1-s)\int _{\mathbb {R}^n}\frac{u(y_1,\bar{y})-u(y_1,\bar{0})}{\left( \epsilon ^2y_1^2+\epsilon ^\frac{-2}{n-1}|\bar{y}|^2\right) ^\frac{n+2s}{2}}\,dy\le C_1\cdot \epsilon ^\frac{2s}{n-1} \end{aligned}$$

with

$$\begin{aligned} C_1=\frac{\sqrt{\pi }\cdot \Gamma \left( \frac{n-1}{2}+s\right) }{\Gamma \left( \frac{n}{2}+s\right) }\cdot \frac{\mu _1\omega _{n-1}}{2}, \end{aligned}$$
(3.12)

for \(\mu _1\) given by (3.9).

Proof

Since u is Lipschitz and semiconcave, we have

$$\begin{aligned} \int _{\mathbb {R}^n}\frac{u(y_1,\bar{y})-u(y_1,\bar{0})}{\left( \epsilon ^2y_1^2+\epsilon ^\frac{-2}{n-1}|\bar{y}|^2\right) ^\frac{n+2s}{2}}\,dy \le \frac{1}{2}\int _{\mathbb {R}^n}\frac{\min \left\{ 2L\,|\bar{y}|,C\,|\bar{y}|^2\right\} }{\left( \epsilon ^2y_1^2+\epsilon ^\frac{-2}{n-1}|\bar{y}|^2\right) ^\frac{n+2s}{2}}\,dy. \end{aligned}$$

A change of variables

$$\begin{aligned} z_1=\epsilon ^{\frac{n}{n-1}}\, y_1\,|\bar{y}|^{-1},\qquad z_j=y_j,\quad j=2,\ldots ,n, \end{aligned}$$

yields,

$$\begin{aligned} \int _{\mathbb {R}^n}\frac{\min \left\{ 2L\,|\bar{y}|,C\,|\bar{y}|^2\right\} }{\left( \epsilon ^2y_1^2+\epsilon ^\frac{-2}{n-1}|\bar{y}|^2\right) ^\frac{n+2s}{2}}\,dy= & {} \epsilon ^\frac{2s}{n-1}\cdot \int _{\mathbb {R}} \left( 1+z_1^2\right) ^{- \frac{n+2s}{2}}dz_1\\&\cdot \int _{\mathbb {R}^{n-1}}\frac{\min \left\{ 2L\,|\bar{z}|,C\,|\bar{z}|^2\right\} }{|\bar{z}|^{n-1+2s}}\,d\bar{z}, \end{aligned}$$

and the result follows noticing that both integrals on the right-hand side are constant. \(\square \)

Lemma 3.8

We have,

$$\begin{aligned} \int _{\mathbb {R}^n}\frac{v(y_1,\bar{0})}{\left( \epsilon ^2y_1^2+\epsilon ^\frac{-2}{n-1}|\bar{y}|^2\right) ^\frac{n+2s}{2}}\,dy = C_2\ \epsilon ^{-2s}\int _{\mathbb {R}} \frac{v(y_1,\bar{0})}{|y_1|^{1+2s}}\,dy_1 \end{aligned}$$

where

$$\begin{aligned} C_2=\omega _{n-1}\cdot \frac{\Gamma \left( \frac{n-1}{2}\right) \Gamma \left( s+\frac{1}{2}\right) }{2\,\Gamma \left( \frac{n}{2}+s\right) }. \end{aligned}$$
(3.13)

Proof

A change of variables \(z_1=y_1\), \(z_j=\frac{y_{j}}{\epsilon ^{\frac{n}{n-1}}\,y_{1}}\), \(j=2,\ldots ,n\) yields,

$$\begin{aligned} \begin{aligned}&\int _{\mathbb {R}^n}\frac{v(y_1,\bar{0})}{\left( \epsilon ^2y_1^2+\epsilon ^\frac{-2}{n-1}|\bar{y}|^2\right) ^\frac{n+2s}{2}}\,dy \\&\quad =\frac{1}{\epsilon ^{n+2s}}\int _{\mathbb {R}^{n-1}}\int _{\mathbb {R}} \frac{v(y_1,\bar{0})}{|y_1|^{n+2s}}\,\left( 1+\epsilon ^{- \frac{2n}{n-1}}\frac{|\bar{y}|^2}{y_1^2}\right) ^{- \frac{n+2s}{2}}\,d\bar{y}\,dy_1\\&\quad =\frac{1}{\epsilon ^{2s}}\int _{\mathbb {R}} \frac{v(z_1,\bar{0})}{|z_1|^{1+2s}}\,dz_1\,\int _{\mathbb {R}^{n-1}}\frac{d\bar{z}}{\left( 1+|\bar{z}|^2\right) ^{\frac{n+2s}{2}}}= \frac{C_2}{\epsilon ^{2s}}\int _{\mathbb {R}} \frac{v(y_1,\bar{0})}{|y_1|^{1+2s}}\,dy_1. \end{aligned} \end{aligned}$$

\(\square \)
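As a sketch of where (3.13) comes from: passing to polar coordinates in \(\bar{z}\) and substituting \(t=r^2\) reduces the angular integral to a Beta function,

```latex
\int_{\mathbb{R}^{n-1}}\frac{d\bar{z}}{\left(1+|\bar{z}|^2\right)^{\frac{n+2s}{2}}}
 = \omega_{n-1}\int_0^\infty \frac{r^{n-2}}{\left(1+r^2\right)^{\frac{n+2s}{2}}}\,dr
 = \frac{\omega_{n-1}}{2}\,B\!\left(\tfrac{n-1}{2},\,s+\tfrac12\right)
 = \omega_{n-1}\cdot\frac{\Gamma\!\left(\frac{n-1}{2}\right)\Gamma\!\left(s+\frac12\right)}{2\,\Gamma\!\left(\frac{n}{2}+s\right)}=C_2,
```

again with \(\omega _{n-1}\) the surface measure of the unit sphere in \(\mathbb {R}^{n-1}\).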

Lemmas 3.7 and 3.8 allow us to prove that the one-dimensional fractional Laplacian of the restriction \(v(y_1,\bar{0})\) is strictly positive.

Lemma 3.9

Under the same hypotheses of Theorem 3.1, we have

$$\begin{aligned} (1-s)\int _{\mathbb {R}} \frac{u(y_1,\bar{0})-u(0)}{|y_1|^{1+2s}}\,dy_1\ge \mu _0, \end{aligned}$$

where \(\mu _0\) is given by (3.8).

Proof

From Lemmas 3.7 and 3.8, we have that

$$\begin{aligned} \frac{C_1\cdot \epsilon ^\frac{2s}{n-1}}{1-s}\ge \int _{\mathbb {R}^n}\frac{v(y_1,\bar{y})}{\left( \epsilon ^2y_1^2+\epsilon ^\frac{-2}{n-1}|\bar{y}|^2\right) ^\frac{n+2s}{2}}\,dy-C_2\ \epsilon ^{-2s}\int _{\mathbb {R}} \frac{v(y_1,\bar{0})}{|y_1|^{1+2s}}\,dy_1. \end{aligned}$$

Then, by (3.2) and the definition of \(\mathcal {D}_s\) we get

$$\begin{aligned}&\int _{\mathbb {R}^n}\frac{v(y_1,\bar{y})}{\left( \epsilon ^2y_1^2+\epsilon ^\frac{-2}{n-1}|\bar{y}|^2\right) ^\frac{n+2s}{2}}\,dy\ge \inf \left\{ \int _{\mathbb {R}^n}\frac{u(y)-u(0)}{|A^{-1}y|^{n+2s}}\,dy\ \bigg |\ A>0,\ \det A=1 \right\} \\&\quad \ge \frac{\eta _0}{1-s}>0. \end{aligned}$$

Therefore,

$$\begin{aligned} C_1\cdot \epsilon ^\frac{2s}{n-1}\ge \eta _0-C_2\ \epsilon ^{-2s}(1-s)\int _{\mathbb {R}} \frac{v(y_1,\bar{0})}{|y_1|^{1+2s}}\,dy_1. \end{aligned}$$

We get the result from this expression by choosing \( \epsilon =\left( \frac{\eta _0}{2C_1}\right) ^\frac{n-1}{2s}. \) \(\square \)
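To make the last step explicit (a sketch, with v as in Lemma 3.8): the choice of \(\epsilon \) makes \(C_1\epsilon ^\frac{2s}{n-1}\) equal to \(\eta _0/2\), so

```latex
C_2\,\epsilon^{-2s}(1-s)\int_{\mathbb{R}}\frac{v(y_1,\bar{0})}{|y_1|^{1+2s}}\,dy_1
 \;\ge\; \eta_0-\frac{\eta_0}{2}=\frac{\eta_0}{2},
\qquad\text{hence}\qquad
(1-s)\int_{\mathbb{R}}\frac{v(y_1,\bar{0})}{|y_1|^{1+2s}}\,dy_1
 \;\ge\; \frac{\eta_0}{2C_2}\left(\frac{\eta_0}{2C_1}\right)^{n-1},
```

a positive constant depending only on \(\eta _0\), n, and s, which is an admissible value for \(\mu _0\) in (3.8).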

From Lemma 3.9 we can finally prove Proposition 3.3.

Proof of Proposition 3.3

First, we are going to prove that the one-dimensional fractional Laplacian of the restriction of u to any line is bounded above. Indeed, from the Lipschitz continuity and semiconcavity of u,

$$\begin{aligned} \int _{\mathbb {R}} \frac{u(te)-u(0)}{|t|^{1+2s}}\,dt= & {} \int _{\mathbb {R}} \frac{\frac{1}{2}u(te)+\frac{1}{2}u(-te)-u(0)}{|t|^{1+2s}}\,dt \\\le & {} \frac{1}{2}\int _{\mathbb {R}} \frac{\min \{2L\,|t|,C|t|^2\}}{|t|^{1+2s}}\,dt=\frac{\mu _1}{1-s}, \end{aligned}$$

where \(\mu _1\) is given by (3.9).
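The constant \(\mu _1\) can be computed explicitly (a sketch, assuming \(\frac12<s<1\) so that both pieces of the integral converge): splitting at the crossover point \(t_0=2L/C\), where \(2L\,t=C\,t^2\),

```latex
\frac{1}{2}\int_{\mathbb{R}}\frac{\min\{2L|t|,C|t|^2\}}{|t|^{1+2s}}\,dt
 = \int_0^{t_0} C\,t^{1-2s}\,dt+\int_{t_0}^{\infty} 2L\,t^{-2s}\,dt
 = \frac{C\,t_0^{2-2s}}{2-2s}+\frac{2L\,t_0^{1-2s}}{2s-1}
 = \frac{\mu_1}{1-s}.
```

In particular \(\mu _1=\frac{C}{2}\,t_0^{2-2s}+\frac{2L(1-s)}{2s-1}\,t_0^{1-2s}\) remains bounded as \(s\rightarrow 1\), consistent with the stability of the estimates in that limit.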

Now, fix \(e\in \partial B_1(0)\), and choose P such that e is its first column and the remaining columns complete an orthonormal basis of \(\mathbb {R}^n\). Notice that \(\tilde{u}(x)=u(Px)\) satisfies the hypotheses of Theorem 3.1. Hence, we can apply Lemma 3.9 to \(\tilde{u}\) and get

$$\begin{aligned} (1-s)\int _{\mathbb {R}} \frac{\tilde{u}(y_1,\bar{0})- \tilde{u}(0)}{|y_1|^{1+2s}}\,dy_1\ge \mu _0, \end{aligned}$$

The result follows since \(\tilde{u}(y_1,\bar{0})=\tilde{u}(y_1e_1)=u(y_1Pe_1)=u(y_1e)\) by the definition of P. \(\square \)

Next, we provide the proof of Proposition 3.5 that uses Proposition 3.3.

Proof of Proposition 3.5

Our aim is to prove that the infimum in the fractional Monge–Ampère operator is not realized by matrices that are very degenerate. From Proposition 3.3, we have

$$\begin{aligned} \begin{aligned} \int _{\mathbb {R}^n}\frac{u(y)-u(0)}{\left( \sum _{j=1}^n\epsilon _j^2y_j^2\right) ^\frac{n+2s}{2}}\,dy&= \int _{\partial {B_1(0)}}\int _0^\infty \frac{u(re)-u(0)}{r^{1+2s}}\,dr\;\frac{1}{\left( \sum _{j=1}^n\epsilon _j^2e_j^2\right) ^\frac{n+2s}{2}}\,d\mathcal {H}^{n-1}(e)\\&\ge \frac{ \mu _0}{2(1-s)} \int _{\partial {B_1(0)}}\frac{1}{\left( \sum _{j=1}^n\epsilon _j^2e_j^2\right) ^\frac{n+2s}{2}}\,d\mathcal {H}^{n-1}(e). \end{aligned} \end{aligned}$$
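The last inequality follows from Proposition 3.3 by pairing antipodal directions: since the spherical weight \(\big (\sum _{j=1}^n\epsilon _j^2e_j^2\big )^{-\frac{n+2s}{2}}\) is even in e, for each direction

```latex
\int_0^\infty \frac{u(re)-u(0)}{r^{1+2s}}\,dr+\int_0^\infty \frac{u(-re)-u(0)}{r^{1+2s}}\,dr
 =\int_{\mathbb{R}}\frac{u(te)-u(0)}{|t|^{1+2s}}\,dt
 \;\ge\;\frac{\mu_0}{1-s},
```

and averaging over e and \(-e\) on the sphere produces the factor \(\frac12\).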

Proposition B.1 yields the estimate,

$$\begin{aligned} \int _{\partial {B_1(0)}}\frac{1}{\left( \sum _{j=1}^n\epsilon _j^2e_j^2\right) ^\frac{n+2s}{2}}\,d\mathcal {H}^{n-1}(e) \ge \frac{\omega _n}{n}\sum _{j=1}^{n}\frac{1}{\epsilon _j^{2s}}, \end{aligned}$$

where we have used that \(\prod _{j=1}^n\epsilon _j=1\). This completes the proof. \(\square \)

Comparison and Uniqueness

Next, we prove a comparison principle that yields uniqueness for problem (1.4). Notice that the same arguments apply to the operator \((1-s)\mathcal {D}_s\), giving a result that is stable in the limit as \(s\rightarrow 1\).

Theorem 4.1

Assume \(1/2<s<1\), and let \(g:\mathbb {R}^{n+1}\rightarrow \mathbb {R}\) be a continuous function satisfying (1.7). Consider \(\phi \in \mathcal {C}^{2,\alpha }(\mathbb {R}^n)\), and \(u\in USC\) and \(v\in LSC\) such that

$$\begin{aligned} \left\{ \begin{array}{lll} \mathcal {D}_s u(x) \ge g(x,u) &{} \text {in} &{} \mathbb {R}^n \\ (u- \phi )(x)\rightarrow 0 &{} \text {as} &{} |x|\rightarrow \infty \end{array} \right. \qquad \text {and} \qquad \left\{ \begin{array}{lll} \mathcal {D}_s v(x)\le g(x,v) &{} \text {in} &{} \mathbb {R}^n \\ (v- \phi )(x)\rightarrow 0 &{} \text {as} &{} |x|\rightarrow \infty . \end{array} \right. \end{aligned}$$

in the viscosity sense. Then, \(u\le v\) in \(\mathbb {R}^n\).

Remark 4.2

It is also possible to assume \(t\mapsto g(x,t)\) strictly increasing for any \(x\in \mathbb {R}^n\) instead of (1.7) to derive a contradiction in (4.12).

Proof

Let us first present the ideas of the proof in the case when u, v are a classical sub- and supersolution; then we shall consider the viscosity counterparts.

Since we seek to prove \(u \le v\), let us assume to the contrary that \(\sup _{\mathbb {R}^n}(u-v)>0\). As \((u- v)(x)\rightarrow 0\) as \(|x|\rightarrow \infty \), there exists \(x_0 \in \mathbb {R}^n\) such that

$$\begin{aligned} (u- v)(x_0) = \sup _{\mathbb {R}^n}(u-v)>0. \end{aligned}$$

Fix \(\delta >0\), arbitrary, and let \(A_\delta >0\) with \(\det A_\delta =1\), such that

$$\begin{aligned} L_{A_\delta } v(x_0)\le \mathcal {D}_s v(x_0) +\delta \le g(x_0,v(x_0))+\delta , \end{aligned}$$

for \(L_{A_\delta }\) defined as in (2.1). On the other hand, for the same matrix,

$$\begin{aligned} L_{A_\delta } u(x_0)\ge \mathcal {D}_s u(x_0) \ge g(x_0,u(x_0)). \end{aligned}$$

Since \(x_0\) is a maximum point of \(u-v\), we have \(\delta (u-v,x_0,y)\le 0\), and therefore

$$\begin{aligned} 0 \ge L_{A_\delta } (u-v)(x_0)\ge g(x_0,u(x_0))-g(x_0,v(x_0))- \delta . \end{aligned}$$

Therefore, since \(\delta \) is arbitrary, we can let \(\delta \rightarrow 0\) and get

$$\begin{aligned} g(x_0,v(x_0)) \ge g(x_0,u(x_0)), \end{aligned}$$

a contradiction with the fact that \(g(x_0,\cdot )\) is strictly increasing.

In the general case, we cannot be certain that \(L_{A_\delta } u(x_0)\) and \(L_{A_\delta } v(x_0)\) above are well defined, since u and v may not have the necessary regularity. To remedy that we shall use sup- and inf-convolutions and work with regularized functions. However, we shall rather apply the regularizations to the functions \(\bar{u}=u- \phi \) and \(\bar{v}=v- \phi \), since they are bounded above and below respectively (notice that \(\bar{u}\in USC\), \(\bar{v}\in LSC\), and \(\bar{u}(x),\bar{v}(x) \rightarrow 0\) as \(|x|\rightarrow \infty \) imply that \(\bar{u},\bar{v}\) have respectively a maximum and a minimum).

Consider the sup- and inf-convolution of \(\bar{u}, \bar{v}\), respectively,

$$\begin{aligned} \bar{u}^\epsilon (x)=\sup _{y} \left\{ \bar{u}(y)- \frac{|x-y|^2}{\epsilon }\right\} \end{aligned}$$
(4.1)

and

$$\begin{aligned} \bar{v}_\epsilon (x)=\inf _{y} \left\{ \bar{v}(y)+ \frac{|x-y|^2}{\epsilon }\right\} . \end{aligned}$$

Before proceeding with the proof, let us recall for the reader's convenience two properties of \(\bar{u}^\epsilon \) that we shall use in the sequel. Analogous properties hold for \(\bar{v}_\epsilon \) noticing that \(\bar{v}_\epsilon =-(- \bar{v})^\epsilon \).

  1. (1)

    \(\bar{u}^\epsilon \) is bounded above. Since \(\bar{u}\) is bounded above by some constant C, we have

    $$\begin{aligned} \bar{u}^\epsilon (x)\le \sup _{y} \left\{ C- \frac{|x-y|^2}{\epsilon }\right\} =C. \end{aligned}$$
  2. (2)

    The supremum in the definition of (4.1) is achieved. In fact,

    $$\begin{aligned} \bar{u}^\epsilon (x)=\sup _{|y-x|^2\le 2\Vert \bar{u}\Vert _\infty \epsilon } \left\{ \bar{u}(y)- \frac{|x-y|^2}{\epsilon }\right\} =\bar{u}(x^*)- \frac{|x-x^*|^2}{\epsilon } \end{aligned}$$
    (4.2)

    for some \(x^*\) such that

    $$\begin{aligned} |x-x^*|^2\le 2\Vert \bar{u}\Vert _\infty \epsilon \end{aligned}$$
    (4.3)

    (here we are slightly abusing notation for the sake of brevity since, as \(\bar{u}\in USC\), we should write \(\sup \bar{u}\) instead of \(\Vert \bar{u}\Vert _\infty \)). To see this, first notice that since \(\bar{u}^\epsilon \) is bounded above, for any given \(\delta >0\) there exists \(x_\delta \) such that,

    $$\begin{aligned} \bar{u}^\epsilon (x)= \sup _{y} \left\{ \bar{u}(y)- \frac{|x-y|^2}{\epsilon }\right\} \le \bar{u}(x_\delta )- \frac{|x-x_\delta |^2}{\epsilon }+\delta . \end{aligned}$$

    Since \(\bar{u}(x)\le \bar{u}^\epsilon (x)\) (pick \(y=x\) in the definition of \(\bar{u}^\epsilon (x)\)), we conclude that \(|x - x_\delta |^2\le (2\Vert \bar{u}\Vert _\infty +1)\epsilon \), assuming \(\delta <1\). Therefore,

    $$\begin{aligned} \bar{u}^\epsilon (x)\le \sup _{|y-x|^2\le (2\Vert \bar{u}\Vert _\infty +1)\epsilon } \left\{ \bar{u}(y)- \frac{|x-y|^2}{\epsilon }\right\} +\delta . \end{aligned}$$

    Since \(\delta \) is arbitrary, we can let \(\delta \rightarrow 0\) and conclude that the supremum in the definition of (4.1) is achieved,

    $$\begin{aligned} \bar{u}^\epsilon (x)=\sup _{|y-x|^2\le (2\Vert \bar{u}\Vert _\infty +1)\epsilon } \left\{ \bar{u}(y)- \frac{|x-y|^2}{\epsilon }\right\} . \end{aligned}$$

    At this point, we can repeat the previous argument with \(\delta =0\) and get formula (4.2).

Now, again for the sake of contradiction, assume \(\sup _{\mathbb {R}^n}(u-v)>0\). Notice that \(\bar{u}^\epsilon (x)- \bar{v}_\epsilon (x)\ge \bar{u}(x)-\bar{v}(x)\) (pick \(y=x\) in the definitions of \(\bar{u}^\epsilon (x),\bar{v}_\epsilon (x)\)), and therefore,

$$\begin{aligned} \sup _{\mathbb {R}^n}(\bar{u}^\epsilon - \bar{v}_\epsilon )\ge \sup _{\mathbb {R}^n}(\bar{u}- \bar{v})=\sup _{\mathbb {R}^n}(u-v)>0. \end{aligned}$$
(4.4)

Moreover, \((\bar{u}^\epsilon - \bar{v}_\epsilon )(x)\rightarrow 0\) as \(|x|\rightarrow \infty \). To see this, notice that

$$\begin{aligned} \begin{aligned} \bar{u} (x)- \bar{v} (x)&\le \bar{u} ^\epsilon (x)- \bar{v} _\epsilon (x) \\&=\sup _{|y|^2\le 2\Vert \bar{u}\Vert _\infty \epsilon } \left\{ \bar{u}(x+ y)- \frac{|y|^2}{\epsilon }\right\} - \inf _{|y|^2\le 2\Vert \bar{v}\Vert _\infty \epsilon } \left\{ \bar{v}(x+ y)+ \frac{|y|^2}{\epsilon }\right\} \\&\le \sup _{|y|^2\le 2\Vert \bar{u}\Vert _\infty \epsilon } \bar{u}(x+y)- \inf _{|y|^2\le 2\Vert \bar{v}\Vert _\infty \epsilon } \bar{v}(x+y), \end{aligned} \end{aligned}$$

and \( \bar{u} (x)- \bar{v} (x)\), \(\sup _{|y|^2\le 2\Vert \bar{u}\Vert _\infty \epsilon } \bar{u}(x+y)\), and \( \inf _{|y|^2\le 2\Vert \bar{v}\Vert _\infty \epsilon } \bar{v}(x+y)\) converge to 0 as \(|x|\rightarrow \infty \).

Thus, there exists \(x_\epsilon \) such that

$$\begin{aligned} (\bar{u}^\epsilon - \bar{v}_\epsilon )(x_\epsilon )=\sup _{\mathbb {R}^n}(\bar{u}^\epsilon - \bar{v}_\epsilon ). \end{aligned}$$
(4.5)

An important point in the sequel is that both functions \(\bar{u}^\epsilon \) and \( \bar{v}_\epsilon \) are \(\mathcal {C}^{1,1}\) at \(x_\epsilon \), so that the integrals in the operators appearing in the subsequent computations are well-defined. This follows from the following three facts:

  • The paraboloid

    $$\begin{aligned} P(x)=\bar{u}(x_\epsilon ^*)- \frac{|x-x_\epsilon ^*|^2}{\epsilon } \end{aligned}$$

    touches \(\bar{u}^\epsilon \) from below at \(x_\epsilon \) for \(x_\epsilon ^*\) such that \(\bar{u}^\epsilon (x_\epsilon )= \bar{u}(x_\epsilon ^*)- \frac{|x_\epsilon -x_\epsilon ^*|^2}{\epsilon }.\)

  • The paraboloid

    $$\begin{aligned} Q(x)=\bar{v}(x_{\epsilon ,*})+ \frac{|x-x_{\epsilon ,*}|^2}{\epsilon } \end{aligned}$$

    touches \(\bar{v}_\epsilon \) from above at \(x_\epsilon \) for \(x_{\epsilon ,*}\) such that \(\bar{v}_\epsilon (x_\epsilon )= \bar{v}(x_{\epsilon ,*})+ \frac{|x_\epsilon -x_{\epsilon ,*}|^2}{\epsilon }.\)

  • Since \(x_\epsilon \) is a maximum point of \(\bar{u}^\epsilon - \bar{v}_\epsilon \), the function \(\bar{v}_\epsilon (x)-\bar{v}_\epsilon (x_\epsilon )+\bar{u}^\epsilon (x_\epsilon )\) touches \(\bar{u}^\epsilon \) from above at \(x_\epsilon \).

We conclude from these three facts that the paraboloids \(Q(x)- \bar{v}_\epsilon (x_\epsilon )+\bar{u}^\epsilon (x_\epsilon )\) and \(P(x)+ \bar{v}_\epsilon (x_\epsilon )-\bar{u}^\epsilon (x_\epsilon )\) touch respectively \(\bar{u}^\epsilon \) from above and \(\bar{v}_\epsilon \) from below at the point \(x_\epsilon \). Therefore, both \(\bar{u}^\epsilon \) and \(\bar{v}_\epsilon \) can be touched from above and below by a paraboloid at \(x_\epsilon \) and they are \(\mathcal {C}^{1,1}\) at \(x_\epsilon \).

The fact that both \(\bar{u}^\epsilon \) and \(\bar{v}_\epsilon \) are \(\mathcal {C}^{1,1}\) at \(x_\epsilon \) is crucial to make rigorous the formal argument described at the beginning of the proof. Since \(\bar{u}^\epsilon \in \mathcal {C}^{1,1}\) at \(x_\epsilon \), there exists a paraboloid P(x) that touches \(\bar{u}^\epsilon \) from above at \(x_\epsilon \). Then, the function

$$\begin{aligned} \tilde{P}(x)=P(x+x_{\epsilon }-x_{\epsilon }^*)+\frac{|x_{\epsilon }-x_{\epsilon }^*|^2}{\epsilon }+\phi (x) \end{aligned}$$

touches u from above at \(x_{\epsilon }^*\). On the other hand, there exists a paraboloid Q(x) that touches \(\bar{v}_\epsilon \) from below at \(x_\epsilon \) and then, the function

$$\begin{aligned} \tilde{Q}(x)=Q(x+x_{\epsilon }-x_{\epsilon ,*})-\frac{|x_{\epsilon }-x_{\epsilon ,*}|^2}{\epsilon }+\phi (x) \end{aligned}$$

touches v from below at \(x_{\epsilon ,*}\). By Lemma 2.2 we have

$$\begin{aligned} \mathcal {D}_s u(x_\epsilon ^*)\ge g\big (x_\epsilon ^*,u(x_\epsilon ^*)\big ), \qquad \text {and} \qquad \mathcal {D}_s v(x_{\epsilon ,*})\le g\big (x_{\epsilon ,*},v(x_{\epsilon ,*})\big ) \end{aligned}$$

in the classical sense.

Fix \(\eta >0\), arbitrary, and let \(A_\eta >0\) with \(\det A_\eta =1\) such that

$$\begin{aligned} L_{A_\eta } v(x_{\epsilon ,*})\le g\big (x_{\epsilon ,*},v(x_{\epsilon ,*})\big )+\eta , \end{aligned}$$

and

$$\begin{aligned} L_{A_\eta } u(x_\epsilon ^*)\ge \mathcal {D}_s u(x_\epsilon ^*)\ge g\big (x_\epsilon ^*,u(x_\epsilon ^*)\big ), \end{aligned}$$

with \(L_{A_\eta }\) defined as in (2.1). Subtracting, we get

$$\begin{aligned} L_{A_\eta } u(x_\epsilon ^*)-L_{A_\eta } v(x_{\epsilon ,*})\ge g\big (x_\epsilon ^*,u(x_\epsilon ^*)\big )-g\big (x_{\epsilon ,*},v(x_{\epsilon ,*})\big )- \eta . \end{aligned}$$
(4.6)

The rest of the proof is devoted to deriving a contradiction from the previous inequality by showing that, for \(\epsilon \) small enough, the left-hand side is strictly smaller than the right-hand side.

Let us prove first that,

$$\begin{aligned} \lim _{\epsilon \rightarrow 0}\big (L_{A_\eta } u(x_\epsilon ^*)-L_{A_\eta } v(x_{\epsilon ,*})\big )\le 0. \end{aligned}$$
(4.7)

By definition of the operator \(L_{A_\eta }\), we have

$$\begin{aligned} L_{A_\eta } u(x_\epsilon ^*)-L_{A_\eta } v(x_{\epsilon ,*})=\frac{1}{2} \int _{\mathbb {R}^n}\frac{\delta (u,x_\epsilon ^*,y)- \delta (v,x_{\epsilon ,*},y)}{|A_\eta ^{-1}y|^{n+2s}}\, dy. \end{aligned}$$
(4.8)

Notice that

$$\begin{aligned} \delta (\bar{u}^\epsilon ,x_\epsilon ,y)\ge \delta (\bar{u},x_\epsilon ^*,y)\qquad \text {and}\qquad \delta (\bar{v}_\epsilon ,x_\epsilon ,y)\le \delta (\bar{v},x_{\epsilon ,*},y). \end{aligned}$$
(4.9)

Since the proof of both inequalities is analogous, let us show how to obtain the first one. As we have seen,

$$\begin{aligned} \bar{u}^\epsilon (x_\epsilon )= \bar{u}(x_\epsilon ^*)- \frac{|x_\epsilon -x_\epsilon ^*|^2}{\epsilon }. \end{aligned}$$

On the other hand, picking \(z=x_\epsilon ^*-x_\epsilon \),

$$\begin{aligned} \bar{u} ^\epsilon (x_\epsilon \pm y) =\sup _{z} \left\{ \bar{u}(x_\epsilon \pm y+z)- \frac{|z|^2}{\epsilon }\right\} \ge \bar{u}(x_\epsilon ^*\pm y)- \frac{|x_\epsilon -x_\epsilon ^*|^2}{\epsilon }. \end{aligned}$$

From these two expressions, we get (4.9).

Now, using (4.9), we have that

$$\begin{aligned} \begin{aligned} \delta (u,x_\epsilon ^*,y)- \delta (v,x_{\epsilon ,*},y)&= \delta (\bar{u},x_\epsilon ^*,y)- \delta (\bar{v},x_{\epsilon ,*},y) + \delta (\phi ,x_\epsilon ^*,y)- \delta (\phi ,x_{\epsilon ,*},y)\\&\le \delta (\bar{u}^\epsilon ,x_\epsilon ,y)- \delta (\bar{v}_\epsilon ,x_\epsilon ,y) + \delta (\phi ,x_\epsilon ^*,y)- \delta (\phi ,x_{\epsilon ,*},y)\\&= \delta (\bar{u}^\epsilon -\bar{v}_\epsilon ,x_\epsilon ,y) + \delta (\phi ,x_\epsilon ^*,y)- \delta (\phi ,x_{\epsilon ,*},y). \end{aligned} \end{aligned}$$

Observe that \(x_\epsilon \) is a maximum point of \(\bar{u}^\epsilon -\bar{v}_\epsilon \) and therefore \(\delta (\bar{u}^\epsilon -\bar{v}_\epsilon ,x_\epsilon ,y)\le 0\). We conclude

$$\begin{aligned} \delta (u,x_\epsilon ^*,y)- \delta (v,x_{\epsilon ,*},y)\le \delta (\phi ,x_\epsilon ^*,y)- \delta (\phi ,x_{\epsilon ,*},y). \end{aligned}$$
(4.10)

From (4.6), (4.8) and (4.10), we get

$$\begin{aligned} \begin{aligned}&L_{A_\eta } \phi (x_\epsilon ^*)-L_{A_\eta } \phi (x_{\epsilon ,*}) =\frac{1}{2} \int _{\mathbb {R}^n}\frac{\delta (\phi ,x_\epsilon ^*,y)- \delta (\phi ,x_{\epsilon ,*},y)}{|A_\eta ^{-1}y|^{n+2s}}\, dy\\&\quad \ge \frac{1}{2} \int _{\mathbb {R}^n}\frac{\delta (u,x_\epsilon ^*,y)- \delta (v,x_{\epsilon ,*},y)}{|A_\eta ^{-1}y|^{n+2s}}\, dy =L_{A_\eta } u(x_\epsilon ^*)-L_{A_\eta } v(x_{\epsilon ,*})\\&\quad \ge g\big (x_\epsilon ^*,u(x_\epsilon ^*)\big )-g\big (x_\epsilon ^*,v(x_{\epsilon ,*})\big )+g\big (x_\epsilon ^*,v(x_{\epsilon ,*})\big )-g\big (x_{\epsilon ,*},v(x_{\epsilon ,*})\big )- \eta . \end{aligned} \end{aligned}$$
(4.11)

Recall from (4.4) and (4.5) that

$$\begin{aligned} (\bar{u}^\epsilon - \bar{v}_\epsilon )(x_\epsilon )=\sup _{\mathbb {R}^n}(\bar{u}^\epsilon - \bar{v}_\epsilon )\ge \sup _{\mathbb {R}^n}(\bar{u}- \bar{v})>0. \end{aligned}$$

Therefore,

$$\begin{aligned} \bar{u}(x_\epsilon ^*)- \bar{v}(x_{\epsilon ,*}) \ge \sup _{\mathbb {R}^n}(\bar{u}- \bar{v})+\frac{|x_\epsilon -x_\epsilon ^*|^2+|x_\epsilon -x_{\epsilon ,*}|^2}{\epsilon }, \end{aligned}$$

or equivalently,

$$\begin{aligned} u(x_\epsilon ^*)- v(x_{\epsilon ,*}) \ge \sup _{\mathbb {R}^n}(\bar{u}- \bar{v})+\big (\phi (x_\epsilon ^*)- \phi (x_{\epsilon ,*}) \big )+\frac{|x_\epsilon -x_\epsilon ^*|^2+|x_\epsilon -x_{\epsilon ,*}|^2}{\epsilon }. \end{aligned}$$

Notice that from estimate (4.3) and its analogue for the inf-convolution, we have

$$\begin{aligned} |x_\epsilon -x_\epsilon ^*|^2\le 2\Vert \bar{u}\Vert _\infty \epsilon \qquad \text {and}\qquad |x_\epsilon -x_{\epsilon ,*}|^2\le 2\Vert \bar{v}\Vert _\infty \epsilon . \end{aligned}$$

Thus, by the continuity of \(\phi \), we have that for \(\epsilon \) small enough,

$$\begin{aligned} u(x_\epsilon ^*)- v(x_{\epsilon ,*}) \ge \frac{1}{2}\sup _{\mathbb {R}^n}(\bar{u}- \bar{v})>0. \end{aligned}$$

Since \(\phi \in \mathcal {C}^{2,\alpha }\), the function \(L_{A_\eta } \phi (x)\) is in particular continuous and, for \(\epsilon \) small enough,

$$\begin{aligned} \big ( L_{A_\eta } \phi (x_\epsilon ^*)-L_{A_\eta } \phi (x_{\epsilon ,*})\big )\le \eta . \end{aligned}$$

By the continuity of g, we can also assume that \(g\big (x_\epsilon ^*,v(x_{\epsilon ,*})\big )-g\big (x_{\epsilon ,*},v(x_{\epsilon ,*})\big )\ge - \eta \). Then, we have from (4.11) and (1.7) that

$$\begin{aligned} 3\eta \ge g\big (x_\epsilon ^*,u(x_\epsilon ^*)\big )-g\big (x_\epsilon ^*,v(x_{\epsilon ,*})\big )\ge \mu \big (u(x_\epsilon ^*)- v(x_{\epsilon ,*})\big ) \ge \frac{\mu }{2}\sup _{\mathbb {R}^n}(\bar{u}- \bar{v})>0. \end{aligned}$$
(4.12)

Since \(\eta \) is arbitrary, we can choose \(\eta \le \frac{\mu }{12}\sup _{\mathbb {R}^n}(\bar{u}- \bar{v})\) and get a contradiction. \(\square \)
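Explicitly, writing \(S=\sup _{\mathbb {R}^n}(\bar{u}- \bar{v})>0\), the choice \(\eta =\frac{\mu }{12}S\) in (4.12) gives

```latex
\frac{\mu S}{4}=3\eta\;\ge\;\frac{\mu}{2}\,S,
```

which is impossible since \(\mu ,S>0\).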

Lipschitz Continuity and Semiconcavity of Solutions

In this section, we prove Lipschitz continuity and semiconcavity of solutions to (1.4) with \(\phi \) under the hypotheses of Section 1. These results are needed to fulfill the hypotheses of Theorem 3.1.

Remark 5.1

The regularity results below apply to the operator \((1-s)\mathcal {D}_s\). Notice that all constants involved in the estimates are independent of s and allow passing to the limit as \(s\rightarrow 1\).

We start with the particular case when \(g(x,v(x))=v(x)- \phi (x)\) to illustrate the key ideas.

Proposition 5.2

Assume \(\phi \) is semiconcave and Lipschitz continuous and let v be the solution of

$$\begin{aligned} \left\{ \begin{array}{lll} \mathcal {D}_sv(x)=v(x)- \phi (x) &{} \text {in} &{} \mathbb {R}^n \\ (v- \phi )(x)\rightarrow 0 &{} \text {as} &{} |x|\rightarrow \infty . \end{array} \right. \end{aligned}$$
(5.1)

Then, v is Lipschitz continuous and semiconcave with the same constants as \(\phi \).

Proof

In the following proof, we assume for clarity of presentation that v is a classical solution to (5.1) and all the equations hold pointwise. The argument can be made rigorous using a regularization procedure (similar to the one in the proof of Theorem 4.1) that is explained in detail in the proofs of the more general Propositions 5.3 and 5.4 below, so we shall skip it here.

1. For the proof of Lipschitz continuity, fix \(e\in \mathbb {R}^n\) and consider the first-order incremental quotient \(v(x+e)-v(x)\). Observe that

$$\begin{aligned} v(x+e)-v(x)= & {} (v-\phi )(x+e)-(v-\phi )(x)\\&+\,\phi (x+e)-\phi (x)\le o(1)+\text {Lip}(\phi )\,|e| \end{aligned}$$

as \(|x|\rightarrow \infty \), and therefore \(v(x+e)-v(x)\) is bounded above. Furthermore, we can assume that

$$\begin{aligned} \sup _{x\in \mathbb {R}^n}\big (v(x+e)-v(x)\big )>\text {Lip}(\phi )\,|e|, \end{aligned}$$

since we are done otherwise. Then, there exists some \(x_0\) such that

$$\begin{aligned} v(x_0+e)-v(x_0)=\sup _{x\in \mathbb {R}^n}\big (v(x+e)-v(x)\big ). \end{aligned}$$

Fix \(\eta >0\), arbitrary, and let \(A_\eta >0\) such that

$$\begin{aligned} L_{A_\eta } v(x_0)\le v(x_0)-\phi (x_0)+\eta , \end{aligned}$$

and

$$\begin{aligned} L_{A_\eta } v(x_0+e)\ge \mathcal {D}_s v(x_0+e)\ge v(x_0+e)-\phi (x_0+e), \end{aligned}$$

with \(L_{A_\eta }\) defined as in (2.1). We have from the above expressions that

$$\begin{aligned} L_{A_\eta } \big (v(x_0+e)- v(x_0)\big )\ge \big (v(x_0+e)- v(x_0)\big )-\big (\phi (x_0+e)- \phi (x_0)\big )-\eta . \end{aligned}$$

Notice that \(\delta \big (v(\cdot +e)-v,x_0,y\big )\le 0\), and therefore \(L_{A_\eta }\big (v(x_0+e)-v(x_0)\big )\le 0\). Consequently,

$$\begin{aligned} \sup _{x\in \mathbb {R}^n}\big (v(x+e)-v(x)\big )= v(x_0+e)- v(x_0)\le \text {Lip}(\phi )|e|+\eta \end{aligned}$$

and we conclude letting \(\eta \rightarrow 0\).

A symmetric argument, where \(x_0\) is a point such that

$$\begin{aligned} v(x_0+e)-v(x_0)=\inf _{x\in \mathbb {R}^n}\big (v(x+e)-v(x)\big )<-\text {Lip}(\phi )\,|e|, \end{aligned}$$

and the operator \(L_{A_\eta }\) is such that,

$$\begin{aligned} L_{A_\eta } v(x_0+e)\le v(x_0+e)-\phi (x_0+e)+\eta , \end{aligned}$$

and

$$\begin{aligned} L_{A_\eta } v(x_0)\ge v(x_0)-\phi (x_0) \end{aligned}$$

yields

$$\begin{aligned} \inf _{x\in \mathbb {R}^n}\big (v(x+e)-v(x)\big )\ge -\text {Lip}(\phi )\,|e|. \end{aligned}$$

2. For the proof of semiconcavity, consider the second-order incremental quotient \(\delta (v,x,e)=v(x+e)+v(x-e)-2v(x)\). Denote by \(SC(\phi )\) the semiconcavity constant of \(\phi \), and notice that

$$\begin{aligned} \delta (v,x,e)= \delta (v-\phi ,x,e)+\delta (\phi ,x,e)\le o(1)+SC(\phi )\,|e|^2 \qquad \text {as}\ |x|\rightarrow \infty \end{aligned}$$

so \(\delta (v,x,e)\) is bounded above. Furthermore, we can assume that

$$\begin{aligned} \sup _{x\in \mathbb {R}^n}\delta (v,x,e)>SC(\phi )\,|e|^2 \end{aligned}$$

since we are done otherwise. Then, there exists some \(x_0\) such that

$$\begin{aligned} \delta (v,x_0,e)=\sup _{x\in \mathbb {R}^n} \delta (v,x,e). \end{aligned}$$

As before, fix \(\eta >0\) arbitrary, and let \(A_\eta >0\) such that

$$\begin{aligned} L_{A_\eta } v(x_0)\le v(x_0)- \phi (x_0)+\eta , \end{aligned}$$

and

$$\begin{aligned} L_{A_\eta } v(x_0\pm e)\ge \mathcal {D}_s v(x_0\pm e)\ge v(x_0\pm e)- \phi (x_0\pm e), \end{aligned}$$

with \(L_{A_\eta }\) defined as in (2.1). We have from the above expressions that

$$\begin{aligned} L_{A_\eta }\delta (v,x_0,e)\ge \delta (v,x_0,e)- \delta (\phi ,x_0,e)-2\eta . \end{aligned}$$

Notice that \(\delta \big (\delta (v,\cdot \, ,e),x_0,z\big )\le 0\), and therefore \(L_{A_\eta }\delta (v,x_0,e)\le 0\). Consequently,

$$\begin{aligned} \delta (v,x,e)\le \delta (v,x_0,e)\le \delta (\phi ,x_0,e)+2\eta \le SC(\phi )\,|e|^2+2\eta . \end{aligned}$$

We conclude letting \(\eta \rightarrow 0\). \(\square \)

In the next result we prove that solutions to (1.4) are Lipschitz continuous whenever g on the right-hand side satisfies (1.6) and (1.7).

Proposition 5.3

(Lipschitz continuity of the solution) Let \(g: \mathbb {R}^{n+1} \rightarrow \mathbb {R}\) satisfy (1.6) and (1.7). Then, v, the solution to (1.4), is uniformly Lipschitz continuous, namely, for every \(x,y\in \mathbb {R}^n\),

$$\begin{aligned} \frac{|v(x)-v(y) |}{|x-y|}\le \max \left\{ \frac{\text {Lip}(g)}{\mu }, \text {Lip}(\phi )\right\} . \end{aligned}$$

Proof

The following proof uses a regularization process similar to the proof of Theorem 4.1. For the sake of clarity, let us present first the main ideas assuming that v is a classical solution.

Fix \(e\in \mathbb {R}^n\) and consider the first-order incremental quotient \(v(x+e)-v(x)\). Observe that

$$\begin{aligned}&v(x+e)-v(x)= (v-\phi )(x+e)-(v-\phi )(x)\\&\quad +\,\, \phi (x+e)-\phi (x)\le o(1)+\text {Lip}(\phi )\,|e| \end{aligned}$$

as \(|x|\rightarrow \infty \), and therefore \(v(x+e)-v(x)\) is bounded above. Furthermore, we can assume that

$$\begin{aligned} \sup _{x\in \mathbb {R}^n}\big (v(x+e)-v(x)\big )>\text {Lip}(\phi )\,|e|, \end{aligned}$$

since we are done otherwise. Then, there exists some \(x_0\) such that

$$\begin{aligned} v(x_0+e)-v(x_0)=\sup _{x\in \mathbb {R}^n}\big (v(x+e)-v(x)\big ). \end{aligned}$$

Fix \(\eta >0\), arbitrary, and let \(A_\eta >0\) with \(\det A_\eta =1\) such that

$$\begin{aligned} L_{A_\eta } v(x_0)\le g\left( x_0,v(x_0)\right) +\eta , \end{aligned}$$

and

$$\begin{aligned} L_{A_\eta } v(x_0+e)\ge \mathcal {D}_s v(x_0+e)\ge g\left( x_0+e,v(x_0+e)\right) , \end{aligned}$$

with \(L_{A_\eta }\) defined as in (2.1).

We have from the above expressions that

$$\begin{aligned} L_{A_\eta } v(x_0+e)-L_{A_\eta } v(x_0)\ge g\left( x_0+e,v(x_0+e)\right) -g\left( x_0,v(x_0)\right) -\eta . \end{aligned}$$

Notice that \(\delta \big (v(\cdot +e)-v,x_0,y\big )\le 0\), and therefore \(L_{A_\eta }\big (v(x_0+e)-v(x_0)\big )\le 0\). Consequently,

$$\begin{aligned} g\left( x_0+ e,v(x_0+ e)\right) -g\left( x_0,v(x_0)\right) \pm g\left( x_0+e,v(x_0)\right) \le \eta . \end{aligned}$$

At this point we can let \(\eta \rightarrow 0\) and, using (1.6) and (1.7), get

$$\begin{aligned} v(x+e)-v(x) \le v(x_0+e)-v(x_0) \le \frac{\text {Lip}(g)}{\mu } |e|. \end{aligned}$$
(5.2)

A symmetric argument, where \(x_0\) is a point such that

$$\begin{aligned} v(x_0+e)-v(x_0)=\inf _{x\in \mathbb {R}^n}\big (v(x+e)-v(x)\big )<-\text {Lip}(\phi )\,|e|, \end{aligned}$$

and the operator \(L_{A_\eta }\) is such that,

$$\begin{aligned} L_{A_\eta } v(x_0+e)\le g\left( x_0+e,v(x_0+e)\right) +\eta , \end{aligned}$$

and

$$\begin{aligned} L_{A_\eta } v(x_0)\ge g\left( x_0,v(x_0)\right) \end{aligned}$$

yields

$$\begin{aligned} g\left( x_0,v(x_0)\right) -g\left( x_0+ e,v(x_0+ e)\right) \pm g\left( x_0+e,v(x_0)\right) \le 0, \end{aligned}$$

and from there,

$$\begin{aligned} -\frac{\text {Lip}(g)}{\mu } |e|\le v(x_0+e)-v(x_0) \le v(x+e)-v(x) . \end{aligned}$$

In general, in the above argument we cannot guarantee that v is regular enough so that both \(L_{A_\eta } v(x_0+e)\) and \(L_{A_\eta } v(x_0)\) are well-defined and the corresponding equations hold in the classical sense.

To complete the argument, we are going to use a regularization process similar to the one in the proof of Theorem 4.1. Let us show the details in the proof of (5.2).

To simplify the notation in the sequel, let us denote \(u(x)=v(x+e)\) and consider the sup- and inf-convolutions of u and v, respectively,

$$\begin{aligned} u^\epsilon (x)=\sup _{y} \left\{ u(y)- \frac{|x-y|^2}{\epsilon }\right\} =\sup _{y} \left\{ v(y+e)- \frac{|x-y|^2}{\epsilon }\right\} \end{aligned}$$

and

$$\begin{aligned} v_\epsilon (x)=\inf _{y} \left\{ v(y)+ \frac{|x-y|^2}{\epsilon }\right\} . \end{aligned}$$

In the proof of Theorem 4.1 we were dealing with the regularization of \(v-\phi \), a bounded function. In our case, v is not bounded, but its growth at infinity is controlled by \(\phi \), which allows us to prove the following:

  1. (1)

    \(u^\epsilon (x)\) is bounded above. Specifically, there exists a constant \(C>0\) depending only on \(\phi \) and \(\Vert v-\phi \Vert _\infty \) such that \(u^\epsilon (x)\le C(1+|x+e|)\). To see this, notice that by our hypotheses on \(\phi \),

    $$\begin{aligned} \phi (x)\le a|x|^{-\epsilon }+\Gamma (x)\le a|x|^{-\epsilon }+b|x|\le a+b|x| \end{aligned}$$

for |x| large enough, where b depends on the convexity of the sections of \(\Gamma \). Since \(\phi \) is bounded near 0, we conclude that \( \phi (x)\le a+b|x|\) for all x, possibly with a different constant a. Since \(v-\phi \) is bounded,

    $$\begin{aligned} u^\epsilon (x)= & {} \sup _{y} \left\{ (v-\phi )(y+e)+\phi (y+e)- \frac{|x-y|^2}{\epsilon }\right\} \\\le & {} \sup _{y} \left\{ \Vert v-\phi \Vert _\infty +a+b|y+e|- \frac{|x-y|^2}{\epsilon }\right\} \\\le & {} \Vert v-\phi \Vert _\infty +a+b|x+e|+ \sup _{y} \left\{ b|x-y|- \frac{|x-y|^2}{\epsilon }\right\} \\\le & {} \Vert v-\phi \Vert _\infty +a+b|x+e|+ b^2\epsilon \le C(1+|x+e|). \end{aligned}$$
  2. (2)

    As a consequence, the supremum in the definition of \(u^\epsilon (x)\) is finite, and for any given \(\delta >0\) there exists \(x_\delta \) such that,

    $$\begin{aligned} u^\epsilon (x)= \sup _{y} \left\{ u(y)- \frac{|x-y|^2}{\epsilon }\right\} \le u(x_\delta )- \frac{|x-x_\delta |^2}{\epsilon }+\delta . \end{aligned}$$
  3. (3)

    The supremum in the definition of \(u^\epsilon \) is achieved. In fact,

    $$\begin{aligned} u^\epsilon (x)\le \sup _{|y-x|\le \sqrt{\epsilon }R} \left\{ u(y)- \frac{|x-y|^2}{\epsilon }\right\} = u(x^*)- \frac{|x-x^*|^2}{\epsilon } \end{aligned}$$

    for some \(x^*\) such that \(|x-x^*|\le \sqrt{\epsilon }R\), where R depends on \(\text {Lip}(\phi )\) and \( \Vert v-\phi \Vert _\infty \) but can be chosen independent of \(\epsilon \) and x.

    To see this, fix \(\delta <1\) and notice that \( u(x)\le u^\epsilon (x) \le u(x_\delta )- \frac{|x-x_\delta |^2}{\epsilon }+\delta . \) We conclude

    $$\begin{aligned} \begin{aligned} \frac{|x - x_\delta |^2}{\epsilon }&\le (v-\phi )(x_\delta +e)- (v-\phi )(x+e)+\text {Lip}(\phi )\,|x_\delta -x|+\delta \\&\le 2\Vert v-\phi \Vert _\infty +\text {Lip}(\phi )\,|x_\delta -x|+1. \end{aligned} \end{aligned}$$

    From this expression, it follows that \(|x - x_\delta |<\sqrt{\epsilon }R\) for some R as before (\(\sqrt{\epsilon }R\) is essentially the largest root of the quadratic polynomial in \(|x - x_\delta |\)). Therefore,

    $$\begin{aligned} u^\epsilon (x)\le \sup _{|y-x|\le \sqrt{\epsilon }R} \left\{ u(y)- \frac{|x-y|^2}{\epsilon }\right\} +\delta . \end{aligned}$$

    Since \(\delta \) is arbitrary, we can let \(\delta \rightarrow 0\) and conclude that the supremum in the definition of \( u^\epsilon \) is achieved.

  4. (4)

    Analogous properties hold for \(v_\epsilon \). Notice that property (1) is simpler,

    $$\begin{aligned} v_\epsilon (x)= & {} \inf _{y} \left\{ v(y)+ \frac{|x-y|^2}{\epsilon }\right\} =\inf _{y} \left\{ (v-\phi )(y)+\phi (y)+ \frac{|x-y|^2}{\epsilon }\right\} \\\ge & {} \inf (v-\phi )>-\infty . \end{aligned}$$
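These regularization properties can be illustrated numerically. The following sketch (our own one-dimensional example; the function f and all parameters are arbitrary choices, not objects from the proof) computes discrete sup- and inf-convolutions on a grid and checks the pointwise ordering together with the convergence as \(\epsilon \rightarrow 0\).

```python
# Illustrative numerical sketch (our own one-dimensional example; the
# function f and all parameters are arbitrary choices, not objects from
# the proof): discrete sup- and inf-convolutions
#   f^eps(x) = sup_y { f(y) - |x-y|^2/eps },
#   f_eps(x) = inf_y { f(y) + |x-y|^2/eps }.
def sup_convolution(f, xs, eps):
    return [max(f(y) - (x - y) ** 2 / eps for y in xs) for x in xs]

def inf_convolution(f, xs, eps):
    return [min(f(y) + (x - y) ** 2 / eps for y in xs) for x in xs]

f = lambda x: -abs(x)                      # Lipschitz, Lip(f) = 1
xs = [i * 0.01 for i in range(-300, 301)]  # grid on [-3, 3]

gaps = {}
for eps in (0.4, 0.1):
    up = sup_convolution(f, xs, eps)
    low = inf_convolution(f, xs, eps)
    # pointwise ordering f_eps <= f <= f^eps (take y = x in the sup/inf)
    assert all(lo <= f(x) <= hi for lo, x, hi in zip(low, xs, up))
    gaps[eps] = max(hi - f(x) for hi, x in zip(up, xs))

# the regularizations approach f as eps -> 0
assert gaps[0.1] < gaps[0.4]
```

For this f the sup-convolution equals \(f+\epsilon /4\) away from the kink at the origin, so the measured gap decreases linearly in \(\epsilon \).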

We are ready now to complete the proof. Following the formal argument above, we can assume that there exists \(x_0\) such that

$$\begin{aligned} v(x_0+e)-v(x_0)=\sup _{x\in \mathbb {R}^n}\big (v(x+e)-v(x)\big )>\text {Lip}(\phi )\,|e|. \end{aligned}$$

First, we need to prove that there exists \(x_\epsilon \) such that

$$\begin{aligned} (u^\epsilon -v_\epsilon )(x_\epsilon )=\sup _{x\in \mathbb {R}^n}(u^\epsilon -v_\epsilon ). \end{aligned}$$

To see this, observe that

$$\begin{aligned} \sup _{x\in \mathbb {R}^n}(u^\epsilon -v_\epsilon )\ge (u^\epsilon -v_\epsilon )(x_0)\ge (u-v)(x_0)= & {} \sup _{x\in \mathbb {R}^n}\big (v(x+e)-v(x)\big )\\> & {} \text {Lip}(\phi )\,|e|. \end{aligned}$$

On the other hand,

$$\begin{aligned} \begin{aligned} (u^\epsilon -v_\epsilon )(x)&=u(x^*)- \frac{|x-x^*|^2}{\epsilon }-v(x_*)- \frac{|x-x_*|^2}{\epsilon }\\&\le (v-\phi )(x^*+e)-(v-\phi )(x_*)+\text {Lip}(\phi )\,(2\sqrt{\epsilon }R+|e|). \end{aligned} \end{aligned}$$

Therefore, for \(\epsilon \) small enough, \(\text {Lip}(\phi )\,(2\sqrt{\epsilon }R+|e|)< \sup _{x\in \mathbb {R}^n}(u^\epsilon -v_\epsilon )\) and

$$\begin{aligned} (u^\epsilon -v_\epsilon )(x)\le \text {Lip}(\phi )\,(2\sqrt{\epsilon }R+|e|)+o(1)<\sup _{x\in \mathbb {R}^n}(u^\epsilon -v_\epsilon )\qquad \text {as}\ |x|\rightarrow \infty , \end{aligned}$$

so the supremum of \(u^\epsilon -v_\epsilon \) is attained at some point \(x_\epsilon \).

Following the proof of Theorem 4.1, we can prove that both \( u^\epsilon \) and \( v_\epsilon \) are \(\mathcal {C}^{1,1}\) at \(x_\epsilon \), so that the integrals in the subsequent computations are well defined. The idea is that the paraboloids

$$\begin{aligned} \frac{|x-x_{\epsilon ,*}|^2}{\epsilon }+v(x_{\epsilon ,*})- v_\epsilon (x_\epsilon )+ u^\epsilon (x_\epsilon ) \end{aligned}$$

and

$$\begin{aligned} - \frac{|x-x_\epsilon ^*|^2}{\epsilon }+ v_\epsilon (x_\epsilon )+u(x_\epsilon ^*)- u^\epsilon (x_\epsilon ) \end{aligned}$$

touch respectively \( u^\epsilon \) from above and \( v_\epsilon \) from below at the point \(x_\epsilon \). Therefore, \( u^\epsilon \) and \( v_\epsilon \) can both be touched from above and below by a paraboloid at \(x_\epsilon \) and they are \(\mathcal {C}^{1,1}\) at \(x_\epsilon \).

Since \( u^\epsilon \in \mathcal {C}^{1,1}\) at \(x_\epsilon \), there exists a paraboloid P(x) that touches \( u^\epsilon \) from above at \(x_\epsilon \). Then

$$\begin{aligned} P(x+x_{\epsilon }-x_{\epsilon }^*)+\frac{|x_{\epsilon }-x_{\epsilon }^*|^2}{\epsilon } \end{aligned}$$

touches u from above at \(x_{\epsilon }^*\). Equivalently,

$$\begin{aligned} P(x-e+x_{\epsilon }-x_{\epsilon }^*)+\frac{|x_{\epsilon }-x_{\epsilon }^*|^2}{\epsilon } \end{aligned}$$

touches v from above at \(x_{\epsilon }^*+e\). On the other hand, there exists a paraboloid Q(x) that touches \( v_\epsilon \) from below at \(x_\epsilon \) and then

$$\begin{aligned} Q(x+x_{\epsilon }-x_{\epsilon ,*})-\frac{|x_{\epsilon }-x_{\epsilon ,*}|^2}{\epsilon } \end{aligned}$$

touches v from below at \(x_{\epsilon ,*}\). By Lemma 2.2 we have

$$\begin{aligned} \mathcal {D}_s v(x_\epsilon ^*+e)\ge g\big (x_\epsilon ^*+e,v(x_\epsilon ^*+e)\big ), \qquad \text {and} \qquad \mathcal {D}_s v(x_{\epsilon ,*})\le g\big (x_{\epsilon ,*},v(x_{\epsilon ,*})\big ) \end{aligned}$$

in the classical sense.

Fix \(\eta >0\), arbitrary, and let \(A_\eta >0\) such that

$$\begin{aligned} L_{A_\eta } v(x_{\epsilon ,*})\le g\big (x_{\epsilon ,*},v(x_{\epsilon ,*})\big )+\eta , \end{aligned}$$

and

$$\begin{aligned} L_{A_\eta } v(x_\epsilon ^*+e)\ge \mathcal {D}_s v(x_\epsilon ^*+e)\ge g\big (x_\epsilon ^*+e,v(x_\epsilon ^*+e)\big ), \end{aligned}$$

with \(L_{A_\eta }\) defined as in (2.1). Subtracting, we get

$$\begin{aligned} L_{A_\eta } v(x_\epsilon ^*+e)-L_{A_\eta } v(x_{\epsilon ,*})\ge g\big (x_\epsilon ^*+e,v(x_\epsilon ^*+e)\big )-g\big (x_{\epsilon ,*},v(x_{\epsilon ,*})\big )- \eta . \end{aligned}$$

By definition of the operator \(L_{A_\eta }\), we have

$$\begin{aligned} L_{A_\eta } v(x_\epsilon ^*+e)-L_{A_\eta } v(x_{\epsilon ,*})=\frac{1}{2} \int _{\mathbb {R}^n}\frac{\delta (u,x_\epsilon ^*,y)- \delta (v,x_{\epsilon ,*},y)}{|A_\eta ^{-1}y|^{n+2s}}\, dy. \end{aligned}$$

Notice that, as in the proof of Theorem 4.1,

$$\begin{aligned} \delta ( u^\epsilon ,x_\epsilon ,y)\ge \delta ( u,x_\epsilon ^*,y)\qquad \text {and}\qquad \delta ( v_\epsilon ,x_\epsilon ,y)\le \delta ( v,x_{\epsilon ,*},y). \end{aligned}$$

Observe that \(x_\epsilon \) is a maximum point of \( u^\epsilon - v_\epsilon \) and therefore \(\delta ( u^\epsilon -v_\epsilon ,x_\epsilon ,y)\le 0\). We conclude

$$\begin{aligned} \begin{aligned} \eta&\ge g\big (x_\epsilon ^*+e,v(x_\epsilon ^*+e)\big )-g\big (x_\epsilon ^*+e,v(x_{\epsilon ,*})\big )+g\big (x_\epsilon ^*+e,v(x_{\epsilon ,*})\big )-g\big (x_{\epsilon ,*},v(x_{\epsilon ,*})\big )\\&\ge \mu \,\big (v(x_\epsilon ^*+e)-v(x_{\epsilon ,*})\big )-\text {Lip}(g)\,|x_\epsilon ^*+e-x_{\epsilon ,*}|.\end{aligned} \end{aligned}$$

Notice that

$$\begin{aligned} v(x_\epsilon ^*+e)-v(x_{\epsilon ,*})\ge (u^\epsilon -v_\epsilon )(x_{\epsilon })= & {} \sup _{\mathbb {R}^n}(u^\epsilon -v_\epsilon )\ge (u^\epsilon -v_\epsilon )(x_0)\ge (u-v)(x_0)\\= & {} \sup _{\mathbb {R}^n}(u-v). \end{aligned}$$

Since \(\eta \) is arbitrary, we can let \(\eta \rightarrow 0\) and get

$$\begin{aligned} \mu \sup _{x\in \mathbb {R}^n}\big (v(x+e)-v(x)\big )\le & {} \text {Lip}(g)\,\big (|e|+|x_\epsilon ^*-x_{\epsilon }|+|x_\epsilon -x_{\epsilon ,*}|\big )\\\le & {} \text {Lip}(g)\,\big (|e|+2\sqrt{\epsilon }R\big ). \end{aligned}$$

The result follows letting \(\epsilon \rightarrow 0\). \(\square \)

In the next result we show that, under certain conditions on the right-hand side g, solutions to (1.4) are semiconcave; informally, their second derivatives are bounded from above. Before stating the result, let us identify heuristically the natural hypotheses on g in our context if semiconcavity is to be expected from the solutions.

To simplify, consider instead of \(\mathcal {D}_s\) a linear operator \(L_A\) (defined as in (2.1)) such that

$$\begin{aligned} L_{A} v(x)= g\left( x,v(x)\right) . \end{aligned}$$

Formally, we have that \(D_{ee}^2v(x_0)\) satisfies

$$\begin{aligned} L_{A} D_{ee}^2v(x_0)= \sum _{1\le i,j\le n} \partial ^2_{x_ix_j}g(x_0,v(x_0))e_ie_j, \end{aligned}$$

where \(\sum _{1\le i,j\le n} \partial ^2_{x_ix_j}g(x_0,v(x_0))e_ie_j\) is the second derivative in the direction e, at the point \(x_0\), of the composite function \(x\mapsto g(x,v(x))\). Now, if \(x_0\) is a maximum point of \(D_{ee}^2v\) we get

$$\begin{aligned} \sum _{1\le i,j\le n} \partial ^2_{x_ix_j}g(x_0,v(x_0))e_ie_j=L_{A} D_{ee}^2v(x_0)\le 0. \end{aligned}$$

It can be checked that

$$\begin{aligned} \begin{aligned} \big [\partial ^2_{x_ix_j}g(x,v(x))\big ]_{1\le i,j\le n}=&\left[ \begin{array}{cc} I_{n\times n}&\nabla v (x)^t \end{array} \right] _{n\times (n+1)} \left[ \partial ^2_{i,j}g(x,v(x)) \right] _{1\le i,j\le n+1} \\&\times \left[ \begin{array}{c} I_{n\times n} \\ \nabla v (x) \end{array} \right] _{(n+1)\times n}+\partial _{n+1}g(x,v(x)) \, D^2v(x) \end{aligned} \end{aligned}$$

where \(\partial ^2_{i,j}g(x,v(x))\) and \(\partial _{n+1}g(x,v(x))\) denote derivatives of g as a function of \(n+1\) variables evaluated at the point \((x,v(x))\). Writing \(\xi =( e^t, \ \langle \nabla v (x),e\rangle )^t\) for convenience, we have

$$\begin{aligned} 0\ge & {} \sum _{1\le i,j\le n} \partial ^2_{x_ix_j}g(x_0,v(x_0))e_ie_j\\= & {} \sum _{1\le i,j\le n+1} \partial ^2_{i,j}g(x_0,v(x_0))\xi _i\xi _j +\partial _{n+1}g(x_0,v(x_0)) \, D_{ee}^2v(x_0) \end{aligned}$$

or equivalently,

$$\begin{aligned} \partial _{n+1}g(x_0,v(x_0)) \, D_{ee}^2v(x_0)\le -\sum _{1\le i,j\le n+1} \partial ^2_{i,j}g(x_0,v(x_0))\xi _i\xi _j . \end{aligned}$$

This inequality suggests that in order to get an upper bound on \(D_{ee}^2v(x_0)\) it is natural to require \(D^2g\ge -C \, Id\) and \(\partial _{n+1}g(x_0,v(x_0))>\mu >0\), namely hypotheses (1.5) and (1.7), since then

$$\begin{aligned} \mu \,D_{ee}^2v(x_0)\le - \sum _{1\le i,j\le n+1} \partial ^2_{i,j}g(x_0,v(x_0))\xi _i\xi _j \le C\, |\xi |^2 \le C\, (1+|\nabla v(x_0) |^2). \end{aligned}$$

From here we have the desired estimate as long as we can guarantee that v is Lipschitz. In Proposition 5.3 we proved that this is actually the case provided hypotheses (1.6) and (1.7) hold true.
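The chain-rule identity above can be sanity-checked numerically. The following sketch (our own smooth example; g and v are arbitrary choices, not the functions of the paper) verifies, in dimension \(n=1\), that the second derivative of the composite \(x\mapsto g(x,v(x))\) matches \(\sum _{i,j}\partial ^2_{i,j}g\,\xi _i\xi _j+\partial _{n+1}g\,v''\) with \(\xi =(1,v'(x))\).

```python
import math

# Sanity check (our own smooth example, not the g and v of the paper) of
# the chain-rule identity behind the heuristic, in dimension n = 1:
#   (g(x, v(x)))'' = g_xx + 2 g_xv v' + g_vv (v')^2 + g_v v'',
# i.e. sum_{i,j} d2_{ij} g xi_i xi_j + d_{n+1} g v'' with xi = (1, v').
g = lambda x, v: math.sin(x) * v + v * v   # sample right-hand side g(x, v)
v = math.cos                               # sample function v

def second_difference(F, x, h=1e-4):
    # delta(F, x, h) / h^2, a finite-difference approximation of F''(x)
    return (F(x + h) + F(x - h) - 2.0 * F(x)) / h ** 2

x0 = 0.3
lhs = second_difference(lambda x: g(x, v(x)), x0)

s, c = math.sin(x0), math.cos(x0)          # v(x0) = c, v'(x0) = -s, v''(x0) = -c
g_xx, g_xv, g_vv, g_v = -s * c, c, 2.0, s + 2.0 * c  # partials of g at (x0, v(x0))
xi = (1.0, -s)                             # xi = (e, <grad v, e>) with e = 1
rhs = g_xx * xi[0] ** 2 + 2.0 * g_xv * xi[0] * xi[1] + g_vv * xi[1] ** 2 + g_v * (-c)

assert abs(lhs - rhs) < 1e-5
```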

In the following result we justify the heuristic argument above.

Proposition 5.4

(Semiconcavity of the solution) Let \(g: \mathbb {R}^{n+1} \rightarrow \mathbb {R}\) satisfy (1.5), (1.6), and (1.7). Then, the solution to (1.4) is semiconcave, that is, for every \(x\in \mathbb {R}^n\),

$$\begin{aligned} \delta (v,x,y) \le \frac{C}{\mu } \left( 1+\max \left\{ \left( \frac{\text {Lip}(g)}{\mu } \right) ^2, \text {Lip}(\phi )^2\right\} \right) |y|^2. \end{aligned}$$

Proof

Let v be the solution to problem (1.4), \(e\in \mathbb {R}^n\) fixed, and assume that

$$\begin{aligned} \sup _{x\in \mathbb {R}^n} \delta (v,x,e)>0, \end{aligned}$$

as the result is trivial otherwise. We observe that \(\delta (v,x,e)\rightarrow 0\) as \(|x| \rightarrow \infty \). To see this, notice first that \(\delta (v,x,e)=\delta (v- \phi ,x,e)+\delta (\phi ,x,e)=o(1)+\delta (\phi ,x,e)\) as \(|x| \rightarrow \infty \). Also, by our hypotheses on \(\phi \), we have that

$$\begin{aligned} \frac{\delta (\phi ,x,e)}{|e|^2}=O\left( \frac{1}{|x|}\right) \quad \text {as}\ |x| \rightarrow \infty . \end{aligned}$$

Therefore, there is some \(x_0\) such that

$$\begin{aligned} \delta (v,x_0,e)=\sup _{x\in \mathbb {R}^n} \delta (v,x,e)>0. \end{aligned}$$
(5.3)

To complete the proof we need a regularization process as in the proof of Proposition 5.3. Again, let us present the ideas first assuming that v is a classical solution and all the equations hold pointwise.

Fix \(\eta >0\) arbitrary, and let \(A_\eta >0\) such that

$$\begin{aligned} L_{A_\eta } v(x_0)\le g\left( x_0,v(x_0)\right) +\eta , \end{aligned}$$

and

$$\begin{aligned} L_{A_\eta } v(x_0\pm e)\ge \mathcal {D}_s v(x_0\pm e)\ge g\big (x_0\pm e, v(x_0\pm e)\big ), \end{aligned}$$

with \(L_{A_\eta }\) defined as in (2.1). We have from the above expressions that

$$\begin{aligned} L_{A_\eta }\delta (v,x_0,e)\ge & {} g\left( x_0+ e,v(x_0+ e)\right) +g\left( x_0- e,v(x_0- e)\right) \\&-\,2g\left( x_0,v(x_0)\right) -2\eta . \end{aligned}$$

Notice that \(\delta \big (\delta (v,\cdot \, ,e),x_0,z\big )\le 0\), and therefore \(L_{A_\eta }\delta (v,x_0,e)\le 0\). Consequently,

$$\begin{aligned} g\left( x_0+ e,v(x_0+ e)\right) +g\left( x_0- e,v(x_0- e)\right) -2g\left( x_0,v(x_0)\right) \le 2\eta . \end{aligned}$$

At this point we can let \(\eta \rightarrow 0\) and rewrite the resulting expression as

$$\begin{aligned} g\big ((x_0,v(x_0))+ \theta _2\big ) - g\big ((x_0,v(x_0))- \theta _1\big )\le & {} 2g(x_0,v(x_0)) - g\big ((x_0,v(x_0))+ \theta _1\big ) \\&- g\big ((x_0,v(x_0))- \theta _1\big ) \end{aligned}$$

for \(\theta _1= \big ( e,v(x_0+e)-v(x_0) \big )\) and \(\theta _2 =\big ( -e,v(x_0-e)-v(x_0) \big ) \). Then, by (1.5) and (1.7) we have

$$\begin{aligned} \begin{aligned} \mu \,\delta (v,x_0,e)&\le g\big (x_0-e,v(x_0-e)\big )-g\big (x_0-e,2v(x_0)-v(x_0+e)\big ) \\&= g\big ((x_0,v(x_0))+ \theta _2\big )- g\big ((x_0,v(x_0))- \theta _1\big ) \\&\le 2g(x_0,v(x_0)) - g\big ((x_0,v(x_0))+ \theta _1\big ) - g\big ((x_0,v(x_0))- \theta _1\big ) \le C| \theta _1|^2 \end{aligned} \end{aligned}$$

and therefore, for any \(x\in \mathbb {R}^n\),

$$\begin{aligned} \delta (v,x,e) \le \delta (v,x_0,e) \le \frac{C}{\mu } \left( 1+\left( \frac{v(x_0+e)-v(x_0)}{|e|}\right) ^2\right) |e|^2. \end{aligned}$$

The result follows applying Proposition 5.3.

To complete the proof in the general case, let us sketch the regularization procedure. The details follow the lines of the proof of Proposition 5.3. To simplify the notation, let us denote \(u(x)=v(x+e),\) \(w(x)=v(x-e)\) and consider the sup-convolution of uw and the inf-convolution of v, namely,

$$\begin{aligned} u^\epsilon (x)= & {} \sup _{y} \left\{ u(y)- \frac{|x-y|^2}{\epsilon }\right\} =\sup _{y} \left\{ v(y+e)- \frac{|x-y|^2}{\epsilon }\right\} \\= & {} v(x^*+e)- \frac{|x-x^*|^2}{\epsilon },\\ w^\epsilon (x)= & {} \sup _{y} \left\{ w(y)- \frac{|x-y|^2}{\epsilon }\right\} =\sup _{y} \left\{ v(y-e)- \frac{|x-y|^2}{\epsilon }\right\} \\= & {} v(x^{**}-e)- \frac{|x-x^{**}|^2}{\epsilon }, \end{aligned}$$

and

$$\begin{aligned} v_\epsilon (x)=\inf _{y} \left\{ v(y)+ \frac{|x-y|^2}{\epsilon }\right\} = v(x_*)+ \frac{|x-x_*|^2}{\epsilon }, \end{aligned}$$

for some points \(x^*,x^{**},\) and \(x_*\) within a distance \(\sqrt{\epsilon }R\) from x (see property 3 in the proof of Proposition 5.3).

Assume (5.3). Then, on the one hand, we have that

$$\begin{aligned} \sup _{\mathbb {R}^n}(u^\epsilon +w^\epsilon -2v_\epsilon )\ge u^\epsilon (x_0)+w^\epsilon (x_0)-2v_\epsilon (x_0)\ge \delta (v,x_0,e)=\sup _{x\in \mathbb {R}^n} \delta (v,x,e)>0. \end{aligned}$$

On the other hand,

$$\begin{aligned} \begin{aligned} u^\epsilon (x)+ w^\epsilon (x)-2 v_\epsilon (x)&\le (v-\phi )(x^*+e)+ (v-\phi )(x^{**}-e)- 2 (v-\phi )(x_*)\\&\quad +\big (\phi (x^*+e)-\phi (x+e)\big )+\big (\phi (x^{**}-e)-\phi (x-e)\big )\\&\quad - 2 \big ( \phi (x_*)-\phi (x)\big )+\delta (\phi ,x,e)\\&\le o(1)+4\text {Lip}(\phi )\sqrt{\epsilon }R + O\left( \frac{1}{|x|}\right) \end{aligned} \end{aligned}$$

as \(|x|\rightarrow \infty \). Therefore, for \(\epsilon \) small enough, there exists \(x_\epsilon \) such that

$$\begin{aligned} u^\epsilon (x_\epsilon )+w^\epsilon (x_\epsilon )-2v_\epsilon (x_\epsilon )=\sup _{\mathbb {R}^n}(u^\epsilon +w^\epsilon -2v_\epsilon ). \end{aligned}$$

Now, consider the following three paraboloids:

$$\begin{aligned} P(x)= u(x_\epsilon ^*)- \frac{|x-x_\epsilon ^*|^2}{\epsilon }, \qquad Q(x)= v(x_{\epsilon ,*})+ \frac{|x-x_{\epsilon ,*}|^2}{\epsilon }, \end{aligned}$$

and

$$\begin{aligned} R(x)= w(x_\epsilon ^{**})- \frac{|x-x_\epsilon ^{**}|^2}{\epsilon }. \end{aligned}$$

Then, all three \(u^\epsilon ,w^\epsilon \), and \(v_\epsilon \) are \(\mathcal {C}^{1,1}\) at \(x_\epsilon \). To see this, notice that

  • P(x) touches \( u^\epsilon \) from below at \(x_\epsilon \) and

    $$\begin{aligned} 2Q(x)-R(x)+u^\epsilon (x_\epsilon )+w^\epsilon (x_\epsilon )-2v_\epsilon (x_\epsilon ) \end{aligned}$$

    touches from above.

  • Q(x) touches \( v_\epsilon \) from above at \(x_\epsilon \) and

    $$\begin{aligned} \frac{1}{2}P(x)+\frac{1}{2}R(x)+ v_\epsilon (x_\epsilon )-\frac{1}{2}u^\epsilon (x_\epsilon )-\frac{1}{2}w^\epsilon (x_\epsilon ) \end{aligned}$$

    touches from below.

  • R(x) touches \( w^\epsilon \) from below at \(x_\epsilon \) and

    $$\begin{aligned} 2Q(x)-P(x)+u^\epsilon (x_\epsilon )+w^\epsilon (x_\epsilon )-2v_\epsilon (x_\epsilon ) \end{aligned}$$

    touches from above.

Then, there are three paraboloids that touch v from above at \(x_\epsilon ^*+e\) and \(x_\epsilon ^{**}-e\), and from below at \(x_{\epsilon ,*}\). By Lemma 2.2 we have

$$\begin{aligned} \mathcal {D}_s v(x_\epsilon ^*+e)\ge g\big (x_\epsilon ^*+e,v(x_\epsilon ^*+e)\big ),\qquad \mathcal {D}_s v(x_\epsilon ^{**}-e)\ge g\big (x_\epsilon ^{**}-e,v(x_\epsilon ^{**}-e)\big ), \end{aligned}$$

and

$$\begin{aligned} \mathcal {D}_s v(x_{\epsilon ,*})\le g\big (x_{\epsilon ,*},v(x_{\epsilon ,*})\big ) \end{aligned}$$

in the classical sense. Fix \(\eta >0\), arbitrary, and let \(A_\eta >0\) such that

$$\begin{aligned} L_{A_\eta } v(x_{\epsilon ,*})\le g\big (x_{\epsilon ,*},v(x_{\epsilon ,*})\big )+\eta \end{aligned}$$

with \(L_{A_\eta }\) defined as in (2.1). Then

$$\begin{aligned}&L_{A_\eta } u(x_\epsilon ^*)+ L_{A_\eta } w(x_\epsilon ^{**})-2L_{A_\eta } v(x_{\epsilon ,*})\\&\quad \ge g\big (x_\epsilon ^*+e,v(x_\epsilon ^*+e)\big )+g\big (x_\epsilon ^{**}-e,v(x_\epsilon ^{**}-e)\big )-2g\big (x_{\epsilon ,*},v(x_{\epsilon ,*})\big )-2 \eta . \end{aligned}$$

As in the proof of Theorem 4.1,

$$\begin{aligned} \delta ( u,x_\epsilon ^*,y)+\delta ( w,x_\epsilon ^{**},y)-2 \delta ( v,x_{\epsilon ,*},y)\le \delta ( u^\epsilon +w^\epsilon -2v_\epsilon ,x_\epsilon ,y)\le 0. \end{aligned}$$

Since \(\eta \) is arbitrary, we conclude

$$\begin{aligned} g\big (x_\epsilon ^*+e,v(x_\epsilon ^*+e)\big )+g\big (x_\epsilon ^{**}-e,v(x_\epsilon ^{**}-e)\big )-2g\big (x_{\epsilon ,*},v(x_{\epsilon ,*})\big )\le 0. \end{aligned}$$

Rearranging terms, we get,

$$\begin{aligned}&g\big (x_\epsilon ^{**}-e,v(x_\epsilon ^{**}-e)\big ) \pm g\big (x_{\epsilon }^{**}-e,2v(x_{\epsilon ,*})-v(x_\epsilon ^*+e)\big )\\&\quad \quad -\,g\big (2x_{\epsilon ,*}-x_\epsilon ^*-e,2v(x_{\epsilon ,*})-v(x_\epsilon ^*+e)\big )\\&\quad \le 2g\big (x_{\epsilon ,*},v(x_{\epsilon ,*})\big )-g\big (x_\epsilon ^*+e,v(x_\epsilon ^*+e)\big )\\&\quad \quad -\,g\big (2x_{\epsilon ,*} -x_\epsilon ^*-e,2v(x_{\epsilon ,*})-v(x_\epsilon ^*+e)\big ). \end{aligned}$$

Let us analyze the left-hand side of the inequality first. By (1.7) and (1.6), we have

$$\begin{aligned}&g\big (x_\epsilon ^{**}-e,v(x_\epsilon ^{**}-e)\big ) \pm g\big (x_{\epsilon }^{**}-e,2v(x_{\epsilon ,*})-v(x_\epsilon ^*+e)\big )\\&\quad \quad -\,g\big (2x_{\epsilon ,*}-x_\epsilon ^*-e,2v(x_{\epsilon ,*})-v(x_\epsilon ^*+e)\big )\\&\quad \ge \mu (v(x_\epsilon ^*+e)+v(x_\epsilon ^{**}-e)-2v(x_{\epsilon ,*}))-\text {Lip}(g)|x_\epsilon ^*\\&\quad \quad +\,x_{\epsilon }^{**}-2x_{\epsilon ,*}|. \end{aligned}$$

If we denote \(\theta =(x_\epsilon ^*+e-x_{\epsilon ,*},v(x_\epsilon ^*+e)-v(x_{\epsilon ,*}))\), the right-hand side becomes

$$\begin{aligned}&2g\big (x_{\epsilon ,*},v(x_{\epsilon ,*})\big )-g\big (x_\epsilon ^*+e,v(x_\epsilon ^*+e)\big )-g\big (2x_{\epsilon ,*}-x_\epsilon ^*-e,2v(x_{\epsilon ,*})-v(x_\epsilon ^*+e)\big )\\&\quad = 2g\big (x_{\epsilon ,*},v(x_{\epsilon ,*})\big )-g\big ((x_{\epsilon ,*},v(x_{\epsilon ,*}))+\theta )-g\big ((x_{\epsilon ,*},v(x_{\epsilon ,*}))-\theta )\le C|\theta |^2 \end{aligned}$$

where in the last step we have used (1.5). Therefore,

$$\begin{aligned} \mu (v(x_\epsilon ^*+e)+v(x_\epsilon ^{**}-e)-2v(x_{\epsilon ,*}))\le C|\theta |^2+\text {Lip}(g)|x_\epsilon ^*+x_{\epsilon }^{**}-2x_{\epsilon ,*}|. \end{aligned}$$

Observe that

$$\begin{aligned}&v(x_\epsilon ^*+e)+v(x_\epsilon ^{**}-e)-2v(x_{\epsilon ,*})\ge (u^\epsilon +w^\epsilon -2v_\epsilon )(x_\epsilon )\\&\quad \ge (u^\epsilon +w^\epsilon -2v_\epsilon )(x_0)\ge (u+w-2v)(x_0)=\delta (v,x_0,e)=\sup _{x\in \mathbb {R}^n}\delta (v,x,e). \end{aligned}$$

On the other hand,

$$\begin{aligned} |\theta |^2= & {} |x_\epsilon ^*+e-x_{\epsilon ,*}|^2+|v(x_\epsilon ^*+e)-v(x_{\epsilon ,*})|^2\le (1+\text {Lip}(v)^2)|x_\epsilon ^*+e-x_{\epsilon ,*}|^2\\\le & {} (1+\text {Lip}(v)^2)(|x_\epsilon ^*-x_{\epsilon ,*}|+|e|)^2. \end{aligned}$$

Finally, recall that \(|x_\epsilon ^*-x_{\epsilon ,*}|\le 2\sqrt{\epsilon }R\) and \(|x_\epsilon ^*+x_{\epsilon }^{**}-2x_{\epsilon ,*}|\le 4\sqrt{\epsilon }R\). All the above together yields,

$$\begin{aligned} \mu \sup _{x\in \mathbb {R}^n}\delta (v,x,e) \le C(1+\text {Lip}(v)^2)(2\sqrt{\epsilon }R+|e|)^2+4\text {Lip}(g)\sqrt{\epsilon }R. \end{aligned}$$

Now we can let \(\epsilon \rightarrow 0\) and apply Proposition 5.3 to conclude. \(\square \)

Existence of Solutions

In this section we prove existence of solutions to the problem

$$\begin{aligned} \left\{ \begin{array}{lll} \mathcal {D}_{s} u=u-\phi &{} \text {in} &{} \mathbb {R}^n \\ (u- \phi )(x)\rightarrow 0 &{} \text {as} &{} |x|\rightarrow \infty . \end{array} \right. \end{aligned}$$
(6.1)

One could consider existence for more general problems with a right-hand side \(g(x,u)\) as in (1.4). The arguments below would work under assumptions on g that guarantee the existence of appropriate sub- and supersolutions as well as comparison (see Section 4).

The idea to find a supersolution is that, by the definition of \(\mathcal {D}_{s}\) as an infimum of linear operators, it is enough to have the appropriate inequality for just one of them.

Lemma 6.1

For \(0<\tau <\min \{2s-1,n-2s\}\), denote \(g(x)=\min \{1,|x|^{-(2s+\tau )}\}\) and let \(u_F(x)=C_F\cdot |x|^{2s-n}\) be the fundamental solution of \((-\Delta )^s\) for an appropriate constant \(C_F\). Then, there exists a constant \(M>0\) such that \(u=\phi +M\cdot \big (u_F*g\big )\) satisfies

$$\begin{aligned} \left\{ \begin{array}{lll} -(- \Delta )^{s}u\le c_{n,s}(u- \phi ) &{} \text {in} &{} \mathbb {R}^n \\ (u- \phi )(x)\rightarrow 0 &{} \text {as} &{} |x|\rightarrow \infty . \end{array} \right. \end{aligned}$$
(6.2)

Proof

We construct an upper barrier of the form \(u=\phi +w\) as a potential. We start the construction of w with \(w_0=u_F* g_0\) where \(g_0(x)=|x|^{-(2s+\tau )}\) for some small \(0<\tau <n-2s\). Since both \(n-2s<n\) and \(2s+\tau <n\) while \((n-2s)+(2s+\tau )>n\), there are constants \(a_0,a_1>0\) such that

$$\begin{aligned} a_0|x|^{-\tau }\le w_0(x)\le a_1|x|^{-\tau }\quad \text {as}\quad |x|\rightarrow \infty . \end{aligned}$$
(6.3)
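One way to see the decay rate in (6.3) is through the exact homogeneity of \(w_0\), which follows from the change of variables \(y=\lambda z\) for \(\lambda >0\):

$$\begin{aligned} w_0(\lambda x)&=C_F\int _{\mathbb {R}^n}|y|^{-(2s+\tau )}\,|\lambda x-y|^{2s-n}\,dy =C_F\,\lambda ^{n-(2s+\tau )+(2s-n)}\int _{\mathbb {R}^n}|z|^{-(2s+\tau )}\,|x-z|^{2s-n}\,dz\\&=\lambda ^{-\tau }\,w_0(x). \end{aligned}$$

Since \(w_0\) is radial, this gives \(w_0(x)=w_0(x/|x|)\,|x|^{-\tau }\), from which (6.3) follows.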

Also, by construction,

$$\begin{aligned} (-\Delta )^sw_0(x)=g_0(x)=|x|^{-(2s+\tau )}. \end{aligned}$$

Notice that \(w_0\) decays at infinity (and therefore \((u-\phi )=w_0\rightarrow 0\) as \(|x|\rightarrow \infty \)) but it is not bounded at 0. Consequently, we truncate \(g_0\) and define \(g_1=\min \{1,g_0\}\) and \(w_1=u_F*g_1\).

The function \(w_1\) is bounded, still radially decreasing, and has the same decay as \(w_0\). To prove the last assertion, first notice that

$$\begin{aligned} w_1=u_F*g_1=w_0-u_F*(g_0-g_1). \end{aligned}$$
(6.4)

The function \(g_0-g_1\) is supported in the ball of radius one, therefore

$$\begin{aligned} \big (u_F*(g_0-g_1)\big )(x)=C_F\int _{B_1(0)}\left( |y|^{-(2s+\tau )}-1\right) |x-y|^{2s-n}\,dy. \end{aligned}$$

For \(|y|\le 1\) and \(|x|>2\) we have \(|y|<|x|/2\) and from there we deduce

$$\begin{aligned} \left( \frac{2}{3}\right) ^{n-2s}|x|^{2s-n}\le |x-y|^{2s-n}\le 2^{n-2s}|x|^{2s-n}. \end{aligned}$$

On the other hand, \(\tau <n-2s\) implies that

$$\begin{aligned} \int _{B_1(0)}\left( |y|^{-(2s+\tau )}-1\right) \,dy \end{aligned}$$

is finite and therefore

$$\begin{aligned} b_0|x|^{2s-n}\le \big (u_F*(g_0-g_1)\big )(x)\le b_1|x|^{2s-n}\quad \text {as}\quad |x|\rightarrow \infty \end{aligned}$$
(6.5)

for some constants \(b_0,b_1>0\). Again, since \(\tau <n-2s\), estimates (6.3), (6.4), and (6.5) prove that \(w_1\) has the same decay as \(w_0\), as claimed.

Moreover, there exist constants \(A_0,A_1>0\) such that

$$\begin{aligned} A_0\min \{1,|x|^{-\tau }\}\le w_1(x)\le A_1\min \{1,|x|^{-\tau }\}. \end{aligned}$$

Also, by construction, we have

$$\begin{aligned} (-\Delta )^sw_1(x)=g_1(x)=\min \{1,|x|^{-(2s+\tau )}\}. \end{aligned}$$

As a third and final step, we scale \(w_1\) to obtain the final w. To this aim, define \(w(x)=M\cdot w_1(x)\) for some large constant M to be chosen. Then,

$$\begin{aligned} (-\Delta )^sw=M\cdot g_1(x)=M\, \min \{1,|x|^{-(2s+\tau )}\} \end{aligned}$$

and

$$\begin{aligned} w(x)\ge M A_0\min \{1,|x|^{-\tau }\}. \end{aligned}$$

We are ready to check that for an appropriate M the function \(u=\phi +w\) satisfies

$$\begin{aligned} -(-\Delta )^su\le c_{n,s} (u-\phi ). \end{aligned}$$

Indeed,

$$\begin{aligned} c_{n,s}(u-\phi )=c_{n,s}w\ge c_{n,s}M A_0\min \{1,|x|^{-\tau }\}\end{aligned}$$
(6.6)

where \(c_{n,s},A_0\) are given and M is to be chosen. On the other hand,

$$\begin{aligned} \begin{aligned} -(-\Delta )^su(x)&=-(-\Delta )^sw(x)-(-\Delta )^s\phi (x)\\&=-M\min \{1,|x|^{-(2s+\tau )}\}-(-\Delta )^s\phi (x)\le -(-\Delta )^s\phi (x). \end{aligned} \end{aligned}$$
(6.7)

From our hypotheses on \(\phi ,\)

$$\begin{aligned} -(-\Delta )^s\phi (x)=-(-\Delta )^s\Gamma (x)-(-\Delta )^s\eta (x)\le C\big (|x|^{1-2s}+|x|^{-(2s+\epsilon )}\big )\le C\,|x|^{1-2s} \end{aligned}$$

for |x| large, while \(-(-\Delta )^s\phi (x)\) is bounded in every neighborhood of the origin. Therefore

$$\begin{aligned} -(-\Delta )^s\phi (x)\le C\cdot \min \{1,|x|^{1-2s}\} \end{aligned}$$
(6.8)

for some constant \(C>0\).

In view of (6.6), (6.7), and (6.8), we only have to control \(C\cdot \min \{1,|x|^{1-2s}\}\) by \(c_{n,s}M A_0\min \{1,|x|^{-\tau }\}\) to conclude. Since \(\tau <2s-1\), we have \(|x|^{1-2s}\le |x|^{-\tau }\) for \(|x|\ge 1\), so any \(M\ge C/(c_{n,s}A_0)\) does it. \(\square \)

Proposition 6.2

(Existence of solutions) There exists a unique solution of

$$\begin{aligned} \left\{ \begin{array}{lll} \mathcal {D}_{s} u=u- \phi &{} \text {in} &{} \mathbb {R}^n \\ (u- \phi )(x)\rightarrow 0 &{} \text {as} &{} |x|\rightarrow \infty . \end{array} \right. \end{aligned}$$
(6.9)

Proof

First, observe that \(\phi \) is a subsolution to the problem, since by convexity \(\delta ( \phi ,x,y)\ge 0\), and therefore \(\mathcal {D}_{s} \phi (x)\ge 0\).
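The convexity observation can be checked numerically. The sketch below (our own example; \(\phi (x)=\sqrt{1+x^2}\), \(s=3/4\), and the quadrature parameters are arbitrary choices, not objects from the paper) approximates the one-dimensional integral \(\frac{1}{2}\int _{\mathbb {R}}\delta (\phi ,x,y)\,|y|^{-1-2s}\,dy\) and confirms it is positive, since every second difference \(\delta (\phi ,x,y)\) is nonnegative by convexity. Note that linear growth of \(\phi \) requires \(s>1/2\) for convergence, consistent with the constraint \(\tau <2s-1\) in Lemma 6.1.

```python
import math

# Illustrative sketch (our own example): for the smooth convex function
# phi(x) = sqrt(1 + x^2), asymptotic to the cone |x|, every second
# difference delta(phi, x, y) = phi(x+y) + phi(x-y) - 2 phi(x) is >= 0,
# so the 1-D fractional-Laplacian-type integral is positive.
s = 0.75                                   # s > 1/2: needed for linear growth
phi = lambda x: math.hypot(1.0, x)         # convex, linear growth at infinity

def one_dim_frac_lap(phi, x, h=0.005, L=100.0):
    # Midpoint rule for (1/2) int_R delta/|y|^{1+2s} dy; by symmetry in y
    # this equals int_0^inf delta/y^{1+2s} dy.  Truncating the (positive)
    # tail at L only decreases the value, so positivity is unaffected.
    total, y = 0.0, h / 2.0
    while y < L:
        delta = phi(x + y) + phi(x - y) - 2.0 * phi(x)
        total += delta / y ** (1.0 + 2.0 * s) * h
        y += h
    return total

vals = [one_dim_frac_lap(phi, x) for x in (0.0, 1.0, 5.0)]
assert all(val > 0.0 for val in vals)      # positive termwise by convexity
```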

To find a supersolution notice that \(-c_{n,s}^{-1}\,(- \Delta )^{s}\) is one of the operators that compete for the infimum in the definition of \(\mathcal {D}_{s}\), and we know from Lemma 6.1 that there is a function \(\bar{u}\) such that \(-c_{n,s}^{-1}\,(- \Delta )^{s}\bar{u}\le \bar{u}- \phi \) with the right “boundary data at infinity”, that is, \((\bar{u}-\phi )\rightarrow 0\) as \(|x|\rightarrow \infty \). We have,

$$\begin{aligned} \mathcal {D}_{s} \bar{u}(x)\le -c_{n,s}^{-1}\,(- \Delta )^{s}\bar{u}(x)\le \bar{u}- \phi . \end{aligned}$$

By comparison, see Theorem 4.1, \(\phi \le \bar{u}\).

Consider the following approximating problem,

$$\begin{aligned} \left\{ \begin{array}{lll} \mathcal {D}_{s}^\epsilon u=u- \phi &{} \text {in} &{} \mathbb {R}^n \\ (u- \phi )(x)\rightarrow 0 &{} \text {as} &{} |x|\rightarrow \infty , \end{array} \right. \end{aligned}$$

where \(\mathcal {D}_{s}^\epsilon \) is the following non-degenerate operator,

$$\begin{aligned} \begin{aligned}&\mathcal {D}_s^\epsilon u(x)=\inf \bigg \{ \text {P.V.}\int _{\mathbb {R}^n}\frac{u(y)-u(x)}{|A^{-1}(y-x)|^{n+2s}}\, \, dy\ \bigg |\ A>0,\ \det A=1,\ \\&\quad \qquad \qquad \qquad \qquad \epsilon \,Id<A<\epsilon ^{-1}\,Id\bigg \}\\&=\inf \bigg \{\frac{1}{2} \int _{\mathbb {R}^n}\frac{u(x+y)+u(x-y)-2u(x)}{|A^{-1}y|^{n+2s}}\, dy\ \bigg |\ A>0,\ \det A=1,\ \\&\quad \qquad \qquad \epsilon \,Id<A<\epsilon ^{-1}\,Id\bigg \}. \end{aligned} \end{aligned}$$

Notice that \(\phi \) and \(\bar{u}\) as above are respectively a sub- and supersolution for these approximating problems for every \(\epsilon \).

Being uniformly elliptic, the approximating problems have a solution for every \(\epsilon \). To show this, consider for every \(k=1,2,\ldots \) the following uniformly elliptic Dirichlet problem

$$\begin{aligned} \left\{ \begin{array}{l@{\quad }l} \mathcal {D}_{s}^\epsilon u=u- \phi &{} \text {in } B_k(0) \\ u= \phi &{}\text {in } \mathbb {R}^n\setminus {}B_k(0). \end{array} \right. \end{aligned}$$

Then, for every k there exists a unique solution \(u_k\), which is regular (with estimates depending on \(\epsilon \) but not on k), see [3, 4] and the references therein. By comparison (see [3]) we have

$$\begin{aligned} \phi \le u_{k_1}\le u_{k_2}\le \bar{u}\qquad \text {in} \quad \ B_{k_1}(0) \end{aligned}$$

for every \(k_1\le k_2\) (notice that \(\phi \) and \(\bar{u}\) are always a sub- and supersolution), that is, the sequence \(u_k\) is monotone increasing. We conclude that the sequence converges locally uniformly to the solution of the approximating problem.

Now, observe that \(\mathcal {D}_s u(x)\le \mathcal {D}_s^{\epsilon _1} u(x)\le \mathcal {D}_s^{\epsilon _2} u(x)\) for any \(\epsilon _1\le \epsilon _2\) since the infimum is smaller the larger the class of matrices. In particular, we have that every \(u_\epsilon \) is a supersolution of (6.9) and a supersolution of the approximating problem with every smaller \(\epsilon \). Picking \(\epsilon _k=1/k\) for \(k=1,2,\ldots \) we have by comparison (Theorem 4.1) that

$$\begin{aligned} \phi \le \cdots \le u_{\epsilon _k}\le \cdots \le u_{\epsilon _2}\le u_{\epsilon _1} \le \bar{u}. \end{aligned}$$

The arguments in Section 5 imply that every \(u_\epsilon \) is Lipschitz continuous with the same constant as \(\phi \), hence uniformly in \(\epsilon \). Therefore, when \(\epsilon \) goes to 0, the sequence \(u_\epsilon \) converges monotonically and uniformly to the solution of problem (6.9), which is unique by comparison. \(\square \)

To conclude, we show that the right-hand side of problem (6.9) is positive and Theorem 3.1 applies. Therefore, the operator remains uniformly elliptic and standard regularity results for uniformly elliptic equations, see [3, 4] are available.

Proposition 6.3

Let u be the solution to problem (6.9). Then, \(u>\phi \) in \(\mathbb {R}^n\).

Proof

As pointed out in the proof of Proposition 6.2 we have \(u\ge \phi \) in \(\mathbb {R}^n\) by comparison, therefore we need to prove that the inequality is strict. We argue by contradiction and assume to the contrary that there is \(x_0\in \mathbb {R}^n\) such that \(u(x_0)=\phi (x_0)\).

Observe that then \(\phi \) touches u from below at \(x_0\). We can replace \(\phi \) by \(\tilde{\phi }\in \mathcal {C}^{2,\alpha }(\mathbb {R}^n)\), also strictly convex on compact sets and asymptotic to some cone at infinity, in such a way that \(\tilde{\phi }\) touches \(\phi \) (and also u) from below at \(x_0\), and this is the only contact point.

Then we can use \(\tilde{\phi }\) as a test in the definition of viscosity solution of problem (6.9) to get \(\mathcal {D}_{s} \tilde{\phi }(x_0)\le 0\). On the other hand, \(\mathcal {D}_{s} \tilde{\phi }(x_0)\ge 0\) by convexity, and we conclude that \(\mathcal {D}_{s} \tilde{\phi }(x_0)=0\).

Then we claim that there exists a direction \(e\in \partial B_1(0)\) such that the one-dimensional fractional Laplacian of the restriction of \(\tilde{\phi }\) to the direction e is non-positive, namely \(-\big (- \Delta \big )_{e}^s\tilde{\phi }(x_0)\le 0\). This contradicts the fact that \(-\big (- \Delta \big )_{e}^s\tilde{\phi }(x_0)>0\), which holds since \(\tilde{\phi }\) is convex and non-constant.

To prove the claim, notice that for all \(k=1,2,\ldots \) there exists \(e_k\in \partial B_1(0)\) such that

$$\begin{aligned} -\big (- \Delta \big )_{e_k}^s\tilde{\phi }(x_0)=\frac{1}{2} \int _{\mathbb {R}} \frac{\delta (\tilde{\phi },x_0,te_k)}{|t|^{1+2s}}\,dt\le \frac{1}{k} \end{aligned}$$
(6.10)

(otherwise, there exists \(\mu >0\) such that \(-\big (- \Delta \big )_{e}^s\tilde{\phi }(x_0)>\mu \) uniformly in e and we can argue as in Proposition 3.5 to show that \(\mathcal {D}_{s} \tilde{\phi }(x_0)>0\)).

By the convexity and linear growth at infinity of \(\tilde{\phi }\) we have that

$$\begin{aligned} 0\le \frac{\delta (\tilde{\phi },x_0,te_k)}{|t|^{1+2s}}\le \frac{\min \{2\,\text {Lip}(\tilde{\phi })|t|,C|t|^2\}}{|t|^{1+2s}}\in L^1(\mathbb {R}) \end{aligned}$$

independently of k. Passing to a subsequence if necessary, we can assume that \(e_k\rightarrow e\) as \(k\rightarrow \infty \) and then we can pass to the limit in (6.10) by the dominated convergence theorem. This concludes the proof of the claim. \(\square \)

Remark 6.4

Another approach to show existence would be to solve a “truncated problem” with the restriction \(\epsilon Id< A <\epsilon ^{-1} Id\) besides the condition \(\det (A)=1\). For this problem, solutions exist and are smooth (depending on \(\epsilon \)) by existing theory (see [3, 4]), but they are also semiconcave independently of \(\epsilon \) (Section 5), and the proof that the operator remains strictly non-degenerate (Section 3) applies directly to these solutions for \(\epsilon \) small enough.