1 Introduction

The fractional Laplacian is a singular integral operator defined, e.g., by

$$\begin{aligned} (-\Delta )^su(x):=-\frac{1}{2} C_{N,s} \int _{{\mathbb {R}}^N} \frac{\delta (u, x, y)}{|y|^{N+2s}} \, dy \end{aligned}$$

with \(s \in (0,1)\) and

$$\begin{aligned} \delta (u, x, y)= u(x+y)+ u(x-y)-2u(x), \end{aligned}$$

so that the value of \((-\Delta )^su\) at x depends on the value of u in the whole of \({\mathbb {R}}^N\). But, of course, it is possible to define singular integral operators that depend only on subdimensional sets of \({\mathbb {R}}^N\). For example, one can consider one-dimensional sets, fixing a direction \(\xi \in {\mathbb {R}}^N\) and letting

$$\begin{aligned} {\mathcal {I}}_\xi u(x) :=C_s \int _{0}^{+\infty } \frac{\delta (u, x, \tau \xi )}{\tau ^{1+2s}} \, d\tau . \end{aligned}$$

Here \(C_s=C_{1,s}\), so that \({\mathcal {I}}_\xi u(x)\) acts as the 2s-fractional derivative of u in the direction \(\xi\). Hence, denoting by \({\mathcal {V}}_k\) the family of k-dimensional orthonormal sets in \({\mathbb {R}}^N\), we define the following nonlocal nonlinear operators

$$\begin{aligned} {\mathcal {I}}_k^+ u(x) := \sup \left\{ \sum _{i=1}^k {\mathcal {I}}_{\xi _i} u(x) :\{ \xi _i \}_{i=1}^k \in {\mathcal {V}}_k \right\} \\ {\mathcal {I}}_k^- u(x) := \inf \left\{ \sum _{i=1}^k {\mathcal {I}}_{\xi _i} u(x) :\{ \xi _i \}_{i=1}^k \in {\mathcal {V}}_k \right\} . \end{aligned}$$
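
In passing, the fact that \({\mathcal {I}}_\xi\) acts as a one-dimensional 2s-fractional derivative can be made precise: writing \(g(t):=u(x+t\xi )\) and noting that \(\delta (u, x, \tau \xi )=\delta (g, 0, \tau )\) is even in \(\tau\), the definition of \((-\Delta )^s\) above, applied with \(N=1\), gives

$$\begin{aligned} {\mathcal {I}}_\xi u(x)=C_{1,s} \int _{0}^{+\infty } \frac{\delta (g, 0, \tau )}{\tau ^{1+2s}} \, d\tau =\frac{1}{2}\, C_{1,s} \int _{{\mathbb {R}}} \frac{\delta (g, 0, \tau )}{|\tau |^{1+2s}} \, d\tau =-(-\Delta )^s g(0). \end{aligned}$$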

These operators have been considered very recently in [7], where representation formulas were given, and in [12], where the operators \({\mathcal {I}}_1^\pm\) are shown to be related to a notion of fractional convexity. These extremal operators, even for \(k=N\), are intrinsically different from the fractional Laplacian, and we will exhibit some new phenomena that arise. We concentrate in particular on Dirichlet problems in bounded domains with data prescribed in the exterior.

Precisely, for \(\Omega\) a bounded domain of \({\mathbb {R}}^N\), we will study:

$$\begin{aligned} \left\{ \begin{array}{cl} {\mathcal {I}}^\pm _ku(x)+c(x)u(x)=f(x) &{} {\text { in }}\Omega \\ u=0 &{} {\text { in }}{\mathbb {R}}^N\backslash \Omega . \end{array}\right. \end{aligned}$$
(1.1)

The first difference we wish to emphasize is that, in general, these operators are not continuous: even if u belongs to \(C^\infty (\Omega )\) and is bounded, \({\mathcal {I}}_k^\pm u(\cdot )\) may fail to be continuous. What is required in order to have continuity, or lower or upper semicontinuity, is a global regularity condition on u; this will be shown in Proposition 3.1. This is a striking difference with respect to nonlinear integro-differential operators such as those considered in [10], which are continuous once \(C^{1, 1}\) regularity holds in the domain \(\Omega\). Such continuity properties play a key role in the arguments used in [10] for the proofs of the comparison principle, the Alexandrov–Bakelman–Pucci estimate, and the Harnack inequality, showing that the setting we are interested in deviates in a substantial way from [10].

Nevertheless, we will show that the comparison principle still holds for \({\mathcal {I}}_k^\pm\) in any bounded domain; we recall that a comparison principle for \({\mathcal {I}}_1^\pm\) was also proved in [12], but under the assumption that the domain is strictly convex. We wish to remark that the comparison principle here is in fact very simple compared to the local case. As is well known, in the theory of viscosity solutions the comparison principle for second-order operators requires the Jensen–Ishii lemma, see [11], which in turn relies on a rather involved proof that uses tools from convex analysis. Here, instead, the proof is completely self-contained and uses only a straightforward calculation, somewhat closer to the case of first-order local equations, where only the doubling of variables technique is used.

Via an adaptation of Perron's method from [11], the comparison principle allows us to prove existence of solutions for (1.1). Let us mention that existence in a very general setting that includes elliptic integro-differential operators was proved in [2, 3]. However, the approach we use is quite direct, and it seemed to us simpler and friendlier to the reader to give the proof rather than to check whether we fit into the general Barles–Chasseigne–Imbert setting.

We conclude with the proof of Hölder estimates for \({\mathcal {I}}_1^\pm\) in uniformly convex domains and with the validity of the maximum principle for the operators

$$\begin{aligned} {\mathcal {I}}_k^\pm \cdot +\mu \cdot \end{aligned}$$

with \(\mu\) below the generalized principal eigenvalues, which, adapting the classical definition in [4], we set as

$$\begin{aligned} \mu _k^\pm = \sup \{ \mu :\exists v \in LSC(\Omega )\cap L^\infty ({\mathbb {R}}^N), v>0 {\text { in }} \Omega , v \ge 0 {\text { in }} {\mathbb {R}}^N, {\mathcal {I}}_k^\pm v + \mu v \le 0 {\text { in }} \Omega \}. \end{aligned}$$

Let us mention that, with our choice of the constant \(C_s\), the operators \({\mathcal {I}}_k^\pm\) converge, as \(s \rightarrow 1^-\), to the operators \({\mathcal {P}}_k^\pm\), the so-called truncated Laplacians, defined by

$$\begin{aligned} {\mathcal {P}}^+_k (D^2u)(x):= \sum _{i=N-k+1}^{N} \lambda _i(D^2 u(x)) = \max \left\{ \sum _{i=1}^k \langle D^2 u(x) \xi _i, \xi _i \rangle \, :\, \{ \xi _i\}_{i=1}^k \in {\mathcal {V}}_k \right\} \end{aligned}$$

and

$$\begin{aligned} {\mathcal {P}}^-_k (D^2u)(x):= \sum _{i=1}^{k} \lambda _i(D^2 u(x)) = \min \left\{ \sum _{i=1}^k \langle D^2 u(x) \xi _i, \xi _i \rangle \, :\, \{ \xi _i\}_{i=1}^k \in {\mathcal {V}}_k \right\} , \end{aligned}$$

where \(\lambda _i(D^2 u)\), \(i=1,\ldots ,N\), are the eigenvalues of \(D^2u\) arranged in nondecreasing order, see [5, 6, 9, 15]. Of course, there are other classes of nonlocal operators that approximate \({\mathcal {P}}^\pm _k (D^2 u)(x)\), as can be seen in [7], but we have concentrated on those that are more of a novelty.
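
To fix ideas on the formulas above, if \(D^2u(x)\) is diagonal with eigenvalues \(\lambda _1\le \cdots \le \lambda _N\) along the coordinate directions, then the extrema are attained on the coordinate frames \(\{ e_{N-k+1},\ldots ,e_N\}\) and \(\{ e_1,\ldots ,e_k\}\) respectively, and

$$\begin{aligned} {\mathcal {P}}^+_k (D^2u)(x)=\lambda _{N-k+1}+\cdots +\lambda _N, \qquad {\mathcal {P}}^-_k (D^2u)(x)=\lambda _1+\cdots +\lambda _k. \end{aligned}$$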

In general, we wish to emphasize that in this setting there are differences both with the corresponding local operators and with more standard nonlocal operators. We have already seen that these operators are in general not continuous. Moreover, it is immediate that even when \(k=N\), which in the local case gives \({\mathcal {P}}^+_N (D^2u)(x)={\mathcal {P}}^-_N (D^2u)(x)=\Delta u(x)\), it is not true that \({\mathcal {I}}_N^-\) coincides with \({\mathcal {I}}_N^+\), nor that either of them coincides with the fractional Laplacian. There are further differences, for example regarding the validity of the strong maximum principle, see Theorem 4.3 and Proposition 4.7, or regarding the fact that for \({\mathcal {P}}^\pm _k\) the supremum (infimum) among all possible k-dimensional frames is in fact a maximum (minimum), while here the extremum may not be attained, as shown in the examples before Proposition 3.1. Hence we encourage the reader to pursue her reading in order to see all these fascinating differences.

This paper is organized as follows.

After a preliminary section, in Sect. 3 we study continuity properties of \({\mathcal {I}}_k^\pm\). We first give counterexamples showing that in general these operators are not continuous, and then we prove that they preserve upper (or lower) semicontinuity under suitable global assumptions. As a related result, we also show that the supremum and the infimum in the definitions of \({\mathcal {I}}_k^\pm\) are in general not attained.

Section 4 is devoted to the proof of the comparison principle. There we also investigate the validity and the failure of the strong maximum/minimum principles for these operators. Moreover, we prove a Hopf-type lemma for \({\mathcal {I}}_N^-\) and \({\mathcal {I}}_k^+\).

In Sect. 5, we exploit the uniform convexity of the domain \(\Omega\) to construct first barrier functions, and then solutions of the Dirichlet problem via Perron's method [11].

Section 6 is devoted to the analysis of the validity of the maximum principle for \({\mathcal {I}}_k^\pm \cdot +\mu \cdot\), and to its relation with principal eigenvalues.

Finally, Hölder estimates for solutions of \({\mathcal {I}}_1^\pm u= f\) in \(\Omega\), \(u=0\) in \({\mathbb {R}}^N {\setminus } \Omega\), where \(\Omega\) is a uniformly convex domain, are proved in Sect. 7.

We will use them in Sect. 8 to prove the existence of a positive principal eigenfunction.

Notations

\(B_r(x)\): ball centered at x of radius r

\({\mathcal {S}}^{N-1}\): unit sphere in \({\mathbb {R}}^{N}\)

\(\{ e_i\}_{i=1}^N\): canonical basis of \({\mathbb {R}}^N\)

\(d(x)\): \(\inf _{y \in \partial \Omega } \left| x-y\right|\), the distance function from \(x \in \Omega\) to \(\partial \Omega\)

\(LSC(\Omega )\): space of lower semicontinuous functions on \(\Omega\)

\(USC(\Omega )\): space of upper semicontinuous functions on \(\Omega\)

\(\delta (u, x, y)\): \(u(x+y)+ u(x-y)-2u(x)\)

\({\mathcal {I}}_\xi u(x)\): \(C_s \int _{0}^{+\infty } \frac{\delta (u, x, \tau \xi )}{\tau ^{1+2s}} \, d\tau\), where \(\xi \in {\mathcal {S}}^{N-1}\) and \(C_s\) is a normalizing constant

\({\hat{x}}\): \(\frac{x}{\left| x\right| }\)

\(\beta (a, b)\): \(\int _0^1 t^{a-1} (1-t)^{b-1} \, dt\)

\({\mathcal {V}}_k\): the family of k-dimensional orthonormal sets in \({\mathbb {R}}^N\)

2 Preliminaries

We recall the definition of viscosity solution in this nonlocal context [2, 3]. For definitions and main properties of viscosity solutions in the classical local framework, we refer to the survey [11].

Henceforth, we consider bounded functions \(u:\mathbb {R}^N\rightarrow \mathbb {R}\) which are measurable along one-dimensional affine subspaces of \(\mathbb {R}^N\). That is, for every \(x\in \mathbb {R}^N\) and \(\xi \in {\mathcal {S}}^{N-1}\) we require the map

$$\begin{aligned} \tau \in \mathbb {R}\mapsto u(x+\tau \xi ) \end{aligned}$$

to be measurable. In the rest of the paper we shall tacitly assume this condition without mentioning it again and, with a slight abuse of notation, we shall simply write \(u\in L^\infty (\mathbb {R}^N)\).

Definition 2.1

Given a function \(f \in C(\Omega \times {\mathbb {R}})\), we say that \(u \in L^\infty ({\mathbb {R}}^N) \cap LSC(\Omega )\) (respectively, \(USC(\Omega )\)) is a (viscosity) supersolution (respectively, subsolution) to

$$\begin{aligned} {\mathcal {I}}_k^+ u +f(x, u(x)) = 0 {\text { in }} \Omega \end{aligned}$$
(2.1)

if for every point \(x_0 \in \Omega\) and every function \(\varphi \in C^2(B_\rho (x_0))\), \(\rho >0\), such that \(x_0\) is a minimum (resp. maximum) point of \(u - \varphi\), one has

$$\begin{aligned} {\mathcal {I}}(u, \varphi , x_0, \rho ) +f(x_0, u(x_0)) \le 0 \quad ({\text {resp. }}\ge 0) \end{aligned}$$
(2.2)

where

$$\begin{aligned} {\mathcal {I}}(u, \varphi , x_0, \rho ) = C_s \sup _{\{\xi _i \} \in {\mathcal {V}}_k} \sum _{i=1}^k \left( \int _{0}^\rho \frac{\delta (\varphi , x_0, \tau \xi _i)}{\tau ^{1+2s}} \, d\tau + \int _{\rho }^{+\infty } \frac{\delta (u, x_0, \tau \xi _i)}{\tau ^{1+2s}} \, d\tau \right) . \end{aligned}$$

We say that a continuous function u is a solution of (2.1) if it is both a supersolution and a subsolution of (2.1). We analogously define viscosity sub/super solutions for the operator \({\mathcal {I}}_k^-\), taking the infimum over \({\mathcal {V}}_k\) in place of the supremum.

Remark 2.2

We stress that the definition above is modeled on \(-(-\Delta )^s\) rather than on \((-\Delta )^s\); that is, in a certain sense, a minus sign in front of the operator is taken into account.

Remark 2.3

In the definition of supersolution above, we can assume without loss of generality that \(u > \varphi\) in \(B_\rho (x_0) {\setminus } \{ x_0 \}\), and \(\varphi (x_0) = u(x_0)\). Indeed, let us assume that for any such \(\varphi\)

$$\begin{aligned} C_s \sup _{\{\xi _i \} \in {\mathcal {V}}_k} \sum _{i=1}^k \left( \int _{0}^\rho \frac{\delta (\varphi , x_0, \tau \xi _i)}{\tau ^{1+2s}} \, d\tau + \int _{\rho }^{+\infty } \frac{\delta (u, x_0, \tau \xi _i)}{\tau ^{1+2s}} \, d\tau \right) + f(x_0, u(x_0)) \le 0 \end{aligned}$$

is satisfied, and consider a general \({\tilde{\varphi }} \in C^2(B_\rho (x_0))\) such that \(u-{\tilde{\varphi }}\) has a minimum at \(x_0\). For any \(n \in {\mathbb {N}}\), we take

$$\begin{aligned} \varphi _n(x)={\tilde{\varphi }}(x) + u(x_0) - {\tilde{\varphi }}(x_0) - \frac{1}{n} \left| x-x_0\right| ^2, \end{aligned}$$

and notice that \(u(x_0)=\varphi _n(x_0)\), and since \(u(x_0) - {\tilde{\varphi }}(x_0) \le u(x)-{\tilde{\varphi }}(x)\),

$$\begin{aligned} \varphi _n(x) \le u(x) - \frac{1}{n} \left| x-x_0\right| ^2 < u(x) \end{aligned}$$

for any \(x \in B_\rho (x_0) {\setminus } \{ x_0 \}\). Also, for any \(n \in {\mathbb {N}}\),

$$\begin{aligned}&C_s \sup _{\{\xi _i \} \in {\mathcal {V}}_k} \sum _{i=1}^k \left( \int _{0}^\rho \frac{\delta ({\tilde{\varphi }}, x_0, \tau \xi _i)}{\tau ^{1+2s}} \, d\tau + \int _{\rho }^{+\infty } \frac{\delta (u, x_0, \tau \xi _i)}{\tau ^{1+2s}} \, d\tau \right) \\&\quad + f(x_0, u(x_0)) \le C_s \frac{k \rho ^{2-2s}}{n (1-s)}, \end{aligned}$$

and the conclusion follows taking the limit \(n \rightarrow \infty\).
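
For the reader's convenience, the right-hand side of the last inequality follows from applying the assumed inequality to \(\varphi _n\) together with the elementary computation

$$\begin{aligned} \delta (\varphi _n, x_0, \tau \xi )=\delta ({\tilde{\varphi }}, x_0, \tau \xi )-\frac{2}{n}\,\tau ^2, \qquad C_s \sum _{i=1}^k\int _0^\rho \frac{2\tau ^2/n}{\tau ^{1+2s}} \, d\tau =C_s \frac{k \rho ^{2-2s}}{n(1-s)}. \end{aligned}$$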

Remark 2.4

We point out that if we verify (2.2) for \(\rho _1\), then it is also verified for any \(\rho _2 > \rho _1\), since

$$\begin{aligned} {\mathcal {I}}(u, \varphi , x_0, \rho _2) \le {\mathcal {I}}(u, \varphi , x_0, \rho _1). \end{aligned}$$

Remark 2.5

The operators \({\mathcal {I}}_k^\pm\) satisfy the following ellipticity condition: if \(\psi _1, \psi _2 \in C^2(B_\rho (x_0)) \cap L^{\infty }({\mathbb {R}}^N)\) for some \(\rho >0\) are such that \(\psi _1-\psi _2\) has a maximum in \(x_0\), then

$$\begin{aligned} {\mathcal {I}}_k^\pm \psi _1 (x_0) \le {\mathcal {I}}_k^\pm \psi _2 (x_0). \end{aligned}$$

Indeed, if \(\psi _1(x_0)-\psi _2(x_0) \ge \psi _1(x)-\psi _2(x)\) for all \(x \in \mathbb R^N,\) then

$$\begin{aligned} \delta (\psi _1, x_0, \tau \xi _i) \le \delta (\psi _2, x_0, \tau \xi _i) \end{aligned}$$

which yields the conclusion.
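
More explicitly, setting \(w=\psi _1-\psi _2\), the assumption gives \(w(x_0\pm \tau \xi _i)\le w(x_0)\), and therefore

$$\begin{aligned} \delta (\psi _1, x_0, \tau \xi _i)-\delta (\psi _2, x_0, \tau \xi _i)=\big (w(x_0+\tau \xi _i)-w(x_0)\big )+\big (w(x_0-\tau \xi _i)-w(x_0)\big )\le 0 \end{aligned}$$

for every \(\tau >0\) and every \(\xi _i \in {\mathcal {S}}^{N-1}\).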

Remark 2.6

Notice that in the definition above we assumed \(u \in L^\infty ({\mathbb {R}}^N)\), as this will be enough for our purposes; however, one can also consider unbounded functions u satisfying a suitable growth condition at infinity, see [7].

3 Continuity

In this section, we study continuity properties of the maps \(x \mapsto {\mathcal {I}}_k^\pm u(x)\). We start by showing that the assumption \(u \in C^2(\Omega ) \cap L^\infty ({\mathbb {R}}^N)\), which ensures that \({\mathcal {I}}_k^\pm u(x)\) is well defined, is in fact not enough to guarantee the continuity of \({\mathcal {I}}_k^\pm u(x)\) with respect to x. A more global assumption is needed, as will be shown later.

Let u be the function defined as follows:

$$\begin{aligned} u(x)=\left\{ \begin{array}{rl} 0 &{} {\text {if }}\; |x|\le 1 {\text { or }} \langle x, e_N \rangle \le 0\\ -1 &{} {\text {otherwise.}} \end{array}\right. \end{aligned}$$
(3.1)

Set \(\Omega =B_1(0)\). The map

$$\begin{aligned} x\in \Omega \mapsto {\mathcal {I}}^+_ku(x) \end{aligned}$$

is well defined, since u is bounded in \({\mathbb {R}}^N\) and smooth (in fact constant) in \(\Omega\). We shall prove that it is not continuous at \(x=0\) when \(k<N\).

Let us first compute the value \({\mathcal {I}}_k^+u(0)\). Since \(u\le 0\) in \({\mathbb {R}}^N\) it turns out that for any \(|\xi |=1\)

$$\begin{aligned} {\mathcal {I}}_\xi u(0)=C_s \int _0^{+\infty }\frac{u(\tau \xi )+u(-\tau \xi )}{\tau ^{1+2s}}\,d\tau \le 0. \end{aligned}$$

Hence,

$$\begin{aligned} \sup _{\left\{ \xi _i\right\} _{i=1}^k\in {{\mathcal {V}}}_k}\sum _{i=1}^k{\mathcal {I}}_{\xi _i} u(0)\le 0. \end{aligned}$$
(3.2)

On the other hand, choosing the first k unit vectors \(e_1,\ldots ,e_k\) of the standard basis, we obtain that

$$\begin{aligned} {\mathcal {I}}_{e_1}u(0)=\cdots ={\mathcal {I}}_{e_k}u(0)=0. \end{aligned}$$
(3.3)
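
Indeed, since \(k<N\), each of the directions \(e_1,\ldots ,e_k\) is orthogonal to \(e_N\), so that by (3.1)

$$\begin{aligned} u(\pm \tau e_i)=0 \quad {\text {for all }} \tau >0, \ i=1,\ldots ,k, \qquad {\text {hence }}\ \delta (u, 0, \tau e_i)\equiv 0. \end{aligned}$$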

Hence, by (3.2)-(3.3)

$$\begin{aligned} {\mathcal {I}}^+_ku(0)=0. \end{aligned}$$
Fig. 1: We represent with \(P_1\) the point \(\frac{1}{n} e_N + \tau _1(n) \xi\), whereas \(P_2=\frac{1}{n} e_N - \tau _2(n) \xi\)

Now we are going to prove that

$$\begin{aligned} \limsup _{n\rightarrow +\infty }{\mathcal {I}}^+_ku\left( \frac{1}{n}e_N\right) <0 \end{aligned}$$

where \(e_N=(0,\ldots ,0,1)\). Fix any \(|\xi |=1\). Since \({\mathcal {I}}_\xi u={\mathcal {I}}_{-\xi }u\), we can further assume that \(\left\langle \xi ,e_N\right\rangle \ge 0\). Then, for any \(n>1\),

$$\begin{aligned} \begin{aligned} {\mathcal {I}}_\xi u\left( \frac{1}{n} e_N\right)&=C_s \int _0^{+\infty }\frac{u(\frac{1}{n}e_N+\tau \xi )+u(\frac{1}{n}e_N-\tau \xi )}{\tau ^{1+2s}}\,d\tau \\&=C_s \left( -\int _{\tau _1(n)}^{\tau _2(n)}\frac{1}{\tau ^{1+2s}}\,d\tau +\int _{\tau _2(n)}^{+\infty }\frac{-1+u(\frac{1}{n}e_N-\tau \xi )}{\tau ^{1+2s}}\,d\tau \right) \end{aligned} \end{aligned}$$
(3.4)

where

$$\begin{aligned} \tau _1(n)=-\frac{\left\langle \xi ,e_N\right\rangle }{n}+\sqrt{\left( \frac{\left\langle \xi ,e_N\right\rangle }{n}\right) ^2+1-\frac{1}{n^2}} \end{aligned}$$

and

$$\begin{aligned} \tau _2(n)=\frac{\left\langle \xi ,e_N\right\rangle }{n}+\sqrt{\left( \frac{\left\langle \xi ,e_N\right\rangle }{n}\right) ^2+1-\frac{1}{n^2}}. \end{aligned}$$

Notice that if \(\tau \le \tau _1(n)\) then \(\frac{1}{n} e_N \pm \tau \xi \in \overline{B_1(0)}\); if \(\tau \in (\tau _1(n), \tau _2(n)]\) then \(\frac{1}{n} e_N - \tau \xi \in \overline{B_1(0)}\), while \(\frac{1}{n} e_N + \tau \xi \not \in \overline{B_1(0)}\); finally, if \(\tau > \tau _2(n)\), then \(\frac{1}{n} e_N \pm \tau \xi \not \in \overline{B_1(0)}\), see also Fig. 1.
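
Indeed, \(\tau _1(n)\) and \(\tau _2(n)\) are the positive roots of the equations \(|\frac{1}{n}e_N+\tau \xi |=1\) and \(|\frac{1}{n}e_N-\tau \xi |=1\) respectively, that is of

$$\begin{aligned} \tau ^2\pm \frac{2\left\langle \xi ,e_N\right\rangle }{n}\,\tau +\frac{1}{n^2}-1=0, \end{aligned}$$

with the sign \(+\) for \(\tau _1(n)\) and \(-\) for \(\tau _2(n)\).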

Using \(u\le 0\) we obtain from (3.4) that

$$\begin{aligned} {\mathcal {I}}_\xi u\left( \frac{1}{n} e_N\right) \le - C_s \int _{\tau _1(n)}^{+\infty }\frac{1}{\tau ^{1+2s}}\,d\tau . \end{aligned}$$

Moreover, since \(\tau _1(n)\le \sqrt{1-\frac{1}{n^2}}\), we infer that

$$\begin{aligned} {\mathcal {I}}_\xi u\left( \frac{1}{n} e_N\right) \le -C_s \int _{\sqrt{1-\frac{1}{n^2}}}^{+\infty }\frac{1}{\tau ^{1+2s}}\,d\tau =-C_s \frac{1}{2s{(1-\frac{1}{n^2})}^s} \end{aligned}$$
(3.5)

for any \(|\xi |=1\). Then,

$$\begin{aligned} {\mathcal {I}}_k^+u\left( \frac{1}{n} e_N\right) \le -\frac{kC_s }{2s{(1-\frac{1}{n^2})}^s} \end{aligned}$$

and

$$\begin{aligned} \limsup _{n\rightarrow +\infty }{\mathcal {I}}_k^+u\left( \frac{1}{n} e_N\right) \le -\frac{kC_s }{2s}<0 \end{aligned}$$

as we wanted to show.

A slight modification of the function u in (3.1) allows us to show that the map

$$\begin{aligned} x\in \Omega \mapsto {\mathcal {I}}^+_Nu(x) \end{aligned}$$

is also, in general, not continuous.

Consider the function

$$\begin{aligned} u(x)=\left\{ \begin{array}{rl} 0 &{} {\text {if }} |x|\le 1, {\text { or }}\langle x, e_N \rangle \le 0 {\text { or }}\sum _{i=1}^{N-1} \langle x, e_i \rangle ^2=0\\ -1 &{} {\text {otherwise.}} \end{array}\right. \end{aligned}$$

As before, using the fact that \(u\le 0\) in \({\mathbb {R}}^N\) and that

$$\begin{aligned} {\mathcal {I}}_{e_1}u(0)=\cdots ={\mathcal {I}}_{e_N}u(0)=0, \end{aligned}$$

we have

$$\begin{aligned} {\mathcal {I}}_N^+u(0)=0. \end{aligned}$$

Moreover, for any \(|\xi |=1\) such that \(\left\langle \xi ,e_N\right\rangle \in [0,1)\), (3.5) still holds. Since for any orthonormal basis \(\left\{ \xi _1,\ldots ,\xi _N\right\}\) there is at most one \(\xi _i\) such that \(\left\langle \xi _i,e_N\right\rangle =1\), we deduce that

$$\begin{aligned} {\mathcal {I}}_N^+u\left( \frac{1}{n} e_N\right) \le -C_s \frac{N-1}{2s{(1-\frac{1}{n^2})}^s} \end{aligned}$$

and

$$\begin{aligned} \limsup _{n\rightarrow +\infty }{\mathcal {I}}_N^+u\left( \frac{1}{n} e_N\right) \le -C_s \frac{N-1}{2s}. \end{aligned}$$

A further consequence of the lack of continuity is that the \(\sup\) or \(\inf\) in the definition of \({\mathcal {I}}_k^\pm\) is in general not attained under the sole assumption \(u \in C^2(\Omega ) \cap L^\infty ({\mathbb {R}}^N)\). As an example, take

$$\begin{aligned} u(x)={\left\{ \begin{array}{ll} 0 &{}{\text {if}}\, \left| x\right| \le 1 {\text { or }} \langle x, e_N \rangle \le 0 \\ e^{-\langle x, e_N \rangle } &{}{\text {otherwise.}} \end{array}\right. } \end{aligned}$$

Then

$$\begin{aligned} {\mathcal {I}}_1^+ u(0)= \sup _{\left| \xi \right| =1} {\mathcal {I}}_\xi u(0)=C_s \sup _{\left| \xi \right| =1} \int _0^{+\infty } \frac{u(\tau \xi ) + u(-\tau \xi )}{\tau ^{1+2s}} \, d\tau . \end{aligned}$$

Since \({\mathcal {I}}_\xi u(0)={\mathcal {I}}_{-\xi } u(0)\), we can assume without loss of generality that \(\langle \xi , e_N \rangle \in [0, 1]\). Thus,

$$\begin{aligned} {\mathcal {I}}_1^+ u(0) = C_s \sup _{\left| \xi \right| =1, \langle \xi , e_N \rangle \ge 0} \int _0^{+\infty }\frac{u(\tau \xi )}{\tau ^{1+2s}} \, d\tau . \end{aligned}$$

Notice that

$$\begin{aligned} \int _0^{+\infty }\frac{u(\tau \xi )}{\tau ^{1+2s}} \, d\tau = {\left\{ \begin{array}{ll} 0 &{}{\text { if }} \langle \xi , e_N \rangle =0 \\ f(\langle \xi , e_N \rangle ) &{}{\text { if }} \langle \xi , e_N \rangle \in (0, 1], \end{array}\right. } \end{aligned}$$

where

$$\begin{aligned} f(y) = \int _1^{+\infty } \frac{e^{-\tau y}}{\tau ^{1+2s}}\, d\tau , \end{aligned}$$

which is continuous and monotone decreasing and

$$\begin{aligned} \sup _{y \in (0, 1]} f(y)= f(0)=\int _1^{+\infty } \frac{1}{\tau ^{1+2s}} \, d\tau . \end{aligned}$$

Therefore, we deduce

$$\begin{aligned} {\mathcal {I}}_1^+ u(0) =C_s \int _1^{+\infty } \frac{1}{\tau ^{1+2s}} \, d\tau . \end{aligned}$$
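
Explicitly, evaluating the integral,

$$\begin{aligned} {\mathcal {I}}_1^+ u(0) =C_s \int _1^{+\infty } \frac{d\tau }{\tau ^{1+2s}} =\frac{C_s}{2s}. \end{aligned}$$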

However, there does not exist any \(\xi\) such that \({\mathcal {I}}_1^+ u(0)={\mathcal {I}}_\xi u(0)\).

Let us now turn to \({\mathcal {I}}_k^+\) with \(2 \le k \le N\), and consider the function

$$\begin{aligned} u(x)= {\left\{ \begin{array}{ll} e^{-\langle x, e_N \rangle } &{}{\text { if }} \sum _{i=1}^{N-2} \langle x, e_i \rangle ^2=0, \, \langle x, e_{N-1} \rangle ^2+ \langle x, e_N \rangle ^2>1, \, \langle x, e_N \rangle >0 \\ 0 &{}{\text { otherwise.}} \end{array}\right. } \end{aligned}$$

In this case,

$$\begin{aligned} {\mathcal {I}}_k^+ u(0)=\sup _{\theta \in [0, \pi /2] } ({\mathcal {I}}_{\eta _1} u(0) + {\mathcal {I}}_{\eta _2}u(0)), \end{aligned}$$

where

$$\begin{aligned} \eta _1=(0, \dots , 0, \cos \theta , \sin \theta ), \quad \eta _2=(0, \dots , 0, -\sin \theta , \cos \theta ). \end{aligned}$$

Thus, one has

$$\begin{aligned} {\mathcal {I}}_{\eta _1} u(0) + {\mathcal {I}}_{\eta _2}u(0)= {\left\{ \begin{array}{ll} \displaystyle {C_s \int _1^{+\infty } \frac{e^{-\tau \sin \theta } + e^{-\tau \cos \theta }}{\tau ^{1+2s}} \,d \tau } &{}{\text { if }} \theta \in (0, \pi /2)\\ &{}\\ \displaystyle {C_s \int _1^{+\infty } \frac{e^{-\tau }}{\tau ^{1+2s}} \, d\tau } &{}{\text { if }} \theta =0 {\text { or }} \theta =\pi /2. \end{array}\right. } \end{aligned}$$
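
Indeed, for \(\theta \in (0, \pi /2)\) and \(\tau >1\), the points \(\tau \eta _1\) and \(\tau \eta _2\) lie in the plane spanned by \(e_{N-1}\) and \(e_N\), outside the closed unit ball, and have positive last coordinate, while \(-\tau \eta _1\) and \(-\tau \eta _2\) have nonpositive last coordinate; hence

$$\begin{aligned} u(\tau \eta _1)=e^{-\tau \sin \theta }, \quad u(\tau \eta _2)=e^{-\tau \cos \theta }, \quad u(-\tau \eta _1)=u(-\tau \eta _2)=0. \end{aligned}$$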

Now, let us compute the supremum of the function

$$\begin{aligned} F(\theta )=\int _1^{+\infty } \frac{e^{-\tau \sin \theta } + e^{-\tau \cos \theta }}{\tau ^{1+2s}} \,d \tau = \int _1^{+\infty } \frac{f(\tau , \theta )}{\tau ^{1+2s}} \, d\tau . \end{aligned}$$

Observe that

$$\begin{aligned} 0 \le \frac{f(\tau , \theta )}{\tau ^{1+2s}} \le \frac{2}{\tau ^{1+2s}} \in L^1(1, +\infty ), \end{aligned}$$
(3.6)

and that

$$\begin{aligned} \frac{1}{\tau ^{1+2s}} \left| \frac{\partial f}{\partial \theta }\right| = \frac{1}{\tau ^{2s}} \left| -e^{-\tau \sin \theta } \cos \theta + e^{-\tau \cos \theta } \sin \theta \right| \le \frac{2}{\tau ^{2s}} \in L^1(1, +\infty ), \end{aligned}$$
(3.7)

as \(s > 1/2\). By (3.6) and (3.7), \(F(\theta ) \in C^1(0, \pi /2)\) and

$$\begin{aligned} F'(\theta )=\int _1^{+\infty } \frac{\frac{\partial f}{\partial \theta }}{\tau ^{1+2s}} \, d\tau . \end{aligned}$$

Moreover,

$$\begin{aligned} \frac{\partial ^2 f}{\partial \theta ^2} = \tau ^2 e^{-\tau \sin \theta } \cos ^2 \theta + \tau e^{-\tau \sin \theta } \sin \theta + \tau ^2 e^{-\tau \cos \theta } \sin ^2 \theta + \tau e^{-\tau \cos \theta } \cos \theta >0 \end{aligned}$$
(3.8)

for all \(\tau >1\) and \(\theta \in (0, \pi /2)\). Also,

$$\begin{aligned} \frac{\partial f}{\partial \theta }(\tau , \pi /4)=0. \end{aligned}$$
(3.9)

Combining (3.8) and (3.9), and noticing that (3.8) implies that \(\theta \mapsto \frac{\partial f}{\partial \theta }(\tau , \theta )\) is strictly increasing, we conclude

$$\begin{aligned} F'(\theta )<0, {\text { if }} \theta \in (0, \pi /4), \quad F'(\theta )>0, {\text { if }} \theta \in (\pi /4, \pi /2). \end{aligned}$$

Finally,

$$\begin{aligned} \lim _{\theta \rightarrow 0^+} F(\theta )=\lim _{\theta \rightarrow \pi /2^-} F(\theta )=\int _1^{+\infty } \frac{1+e^{-\tau }}{\tau ^{1+2s}} \, d\tau , \end{aligned}$$

which implies

$$\begin{aligned} \sup _{0< \theta < \pi /2} F(\theta )=\int _1^{+\infty } \frac{1+e^{-\tau }}{\tau ^{1+2s}} \, d\tau . \end{aligned}$$

Therefore,

$$\begin{aligned} {\mathcal {I}}_k^+ u(0)= C_s \int _1^{+\infty } \frac{1+e^{-\tau }}{\tau ^{1+2s}} \, d\tau \end{aligned}$$

however, there does not exist any \(\theta \in [0, \pi /2]\) such that

$$\begin{aligned} {\mathcal {I}}_{\eta _1} u(0) + {\mathcal {I}}_{\eta _2}u(0)= C_s \int _1^{+\infty } \frac{1+e^{-\tau }}{\tau ^{1+2s}} \, d\tau . \end{aligned}$$

Proposition 3.1

Let \(u\in C^2(\Omega )\cap L^\infty ({\mathbb {R}}^N)\) and consider the maps

$$\begin{aligned} \begin{aligned}&\Psi :(x,\xi )\in \Omega \times {{\mathcal {S}}}^{N-1}\mapsto {\mathcal {I}}_\xi u(x)\\&{\mathcal {I}}^\pm _ku:x\in \Omega \mapsto {\mathcal {I}}^\pm _ku(x). \end{aligned} \end{aligned}$$

If \(u\in LSC({\mathbb {R}}^N)\) (respectively, \(USC({\mathbb {R}}^N)\), \(C({\mathbb {R}}^N)\)) then

  1. (i)

    \(\Psi \in LSC(\Omega \times {{\mathcal {S}}}^{N-1})\) (respectively, \(USC(\Omega \times {{\mathcal {S}}}^{N-1})\), \(C(\Omega \times {{\mathcal {S}}}^{N-1})\));

  2. (ii)

    \({\mathcal {I}}^\pm _ku\in LSC(\Omega )\) (respectively, \(USC(\Omega )\), \(C(\Omega )\)).

Proof

  1. (i)

    Let \((x_n,\xi _n)\rightarrow (x_0,\xi _0)\in \Omega \times {{\mathcal {S}}}^{N-1}\) as \(n\rightarrow +\infty\). Fix \(R>0\) such that \({\overline{B}}_R(x_0)\subset \Omega\) and set \(M=\displaystyle \max _{x\in {\overline{B}}_R(x_0)}\left\| D^2u(x)\right\|\). For \(\rho \in (0,\frac{R}{2})\) it holds that \(B_{2\rho }(x_0)\subset B_R(x_0)\) and, for n sufficiently large and any \(\tau \in [0,\rho )\), that \(x_n\pm \tau \xi _n\in B_{2\rho }(x_0)\). By a second-order Taylor expansion, we have

    $$\begin{aligned} {\mathcal {I}}_{\xi _n}u(x_n)-{\mathcal {I}}_{\xi _0}u(x_0)\ge -C_s\frac{M\rho ^{2-2s}}{1-s}+C_s \int _\rho ^{+\infty }\frac{\delta (u, x_n, \tau \xi _n)}{\tau ^{1+2s}}\,d\tau -C_s \int _\rho ^{+\infty }\frac{\delta (u, x_0, \tau \xi _0)}{\tau ^{1+2s}}\,d\tau \,. \end{aligned}$$

    Since \(u(x_n)\rightarrow u(x_0)\) as \(n\rightarrow +\infty\), by the continuity of u in \(\Omega\), and using the lower semicontinuity of u in \({\mathbb {R}}^N\), we have

    $$\begin{aligned} \liminf _{n\rightarrow +\infty }\delta (u, x_n, \tau \xi _n) \ge \delta (u, x_0, \tau \xi _0) \end{aligned}$$

    for any \(\tau \in (0,+\infty )\). Moreover, taking into account that \(\rho >0\) and \(u\in L^\infty ({\mathbb {R}}^N)\), by means of Fatou’s lemma we also infer that

    $$\begin{aligned} \liminf _{n\rightarrow +\infty }[ {\mathcal {I}}_{\xi _n}u(x_n)-{\mathcal {I}}_{\xi _0}u(x_0)] \ge -C_s\frac{M\rho ^{2-2s}}{1-s}. \end{aligned}$$

    Since \(\rho\) can be chosen arbitrarily small, we conclude that

    $$\begin{aligned} \liminf _{n\rightarrow +\infty }\Psi (x_n, \xi _n)\ge \Psi (x_0, \xi _0). \end{aligned}$$

    In a similar way one can prove that \(\Psi \in USC(\Omega \times {{\mathcal {S}}}^{N-1})\) if \(u\in USC({\mathbb {R}}^N)\). In particular \(\Psi \in C(\Omega \times {{\mathcal {S}}}^{N-1})\) when u is continuous in \({\mathbb {R}}^N\).

  2. (ii)

    By the assumption \(u\in C^2(\Omega )\cap L^\infty ({\mathbb {R}}^N)\), we first note that, for any \(x\in \Omega\), \({\mathcal {I}}_\xi u(x)\) is uniformly bounded with respect to \(\xi \in {\mathcal {S}}^{N-1}\). Hence,

    $$\begin{aligned} -\infty<{\mathcal {I}}^-_ku(x)\le {\mathcal {I}}^+_ku(x)<+\infty . \end{aligned}$$

    Moreover, for any compact \(K \subset \Omega\) there exists a constant \(M_K\) such that

    $$\begin{aligned} -M_K \le {\mathcal {I}}_k^- u \le {\mathcal {I}}_k^+ u \le M_K. \end{aligned}$$

    Henceforth, we shall consider \({\mathcal {I}}^-_k\), the other case being similar.

Let \(x_n\rightarrow x_0\in \Omega\) as \(n\rightarrow +\infty\) and let \(\varepsilon >0\). By the definitions of lower limit and \({\mathcal {I}}_k^-u\), there exist a subsequence \((x_{n_m})_{m}\) and k sequences \((\xi _i(m))_m\subset {\mathcal {S}}^{N-1}\), \(i=1,\ldots ,k\), such that for any \(m \in {\mathbb {N}}\)

$$\begin{aligned} \liminf _{n\rightarrow +\infty }{\mathcal {I}}_k^-u(x_n)+2\varepsilon \ge {\mathcal {I}}_k^-u(x_{n_m})+\varepsilon \ge \sum _{i=1}^k\Psi (x_{n_m},\xi _i(m)). \end{aligned}$$
(3.10)

Up to extracting a further subsequence, we can assume that \(\xi _i(m)\rightarrow {\bar{\xi }}_i\), as \(m\rightarrow +\infty\), for any \(i=1,\ldots ,k\). Since \(\Psi \in LSC(\Omega \times {{\mathcal {S}}}^{N-1})\) by (i), we can pass to the limit as \(m\rightarrow +\infty\) in (3.10) to get

$$\begin{aligned} \liminf _{n\rightarrow +\infty }{\mathcal {I}}_k^-u(x_n)+2\varepsilon \ge \sum _{i=1}^k\Psi (x_0,{\bar{\xi }}_i)\ge {\mathcal {I}}^-_ku(x_0). \end{aligned}$$

Sending \(\varepsilon \rightarrow 0\), this implies that \({\mathcal {I}}_k^-u\in LSC(\Omega )\).

The proof that \({\mathcal {I}}_k^-u\in USC(\Omega )\) under the assumption \(u\in USC({\mathbb {R}}^N)\) is more standard, since \({\mathcal {I}}_k^-u(x)=\inf _{\left\{ \xi _i\right\} _{i=1}^k\in {{\mathcal {V}}}_k}\sum _{i=1}^k\Psi (x,\xi _i)\), \(\Psi (\cdot ,\xi _i)\in USC(\Omega )\) by (i), and the infimum of a family of upper semicontinuous functions is upper semicontinuous.

Lastly, if \(u\in C({\mathbb {R}}^N)\), by the previous cases \({\mathcal {I}}_k^-u\) is in turn continuous in \(\Omega\). \(\square\)

4 Comparison and maximum principles

We consider the problems

$$\begin{aligned} \left\{ \begin{array}{cl} {\mathcal {I}}^\pm _ku+c(x)u=f(x) &{} {\text { in }}\Omega \\ u=0 &{} {\text { in }}{\mathbb {R}}^N\backslash \Omega \end{array}\right. \end{aligned}$$
(4.1)

Theorem 4.1

Let \(\Omega \subset {\mathbb {R}}^N\) be a bounded domain and let \(c(x),f(x)\in C(\Omega )\) be such that \(\left\| c^+\right\| _{\infty }<C_s \frac{k}{s}({\text {diam}}(\Omega ))^{-2s}\). If \(u\in USC({\overline{\Omega }})\cap L^\infty ({\mathbb {R}}^N)\) and \(v\in LSC({\overline{\Omega }})\cap L^\infty ({\mathbb {R}}^N)\) are, respectively, sub- and supersolution of (4.1), then \(u\le v\) in \(\Omega\).

Proof

We shall detail the proof in the case \({\mathcal {I}}^+_k\), the same arguments applying to \({\mathcal {I}}^-_k\) as well. We argue by contradiction by supposing that there exists \(x_0\in \Omega\) such that

$$\begin{aligned} \max _{{\mathbb {R}}^N}(u-v)=u(x_0)-v(x_0)>0. \end{aligned}$$

Doubling the variables, for \(n\in \mathbb {N}\) we consider \((x_n,y_n)\in {\overline{\Omega }}\times {\overline{\Omega }}\) such that

$$\begin{aligned} \max _{{\overline{\Omega }}\times {\overline{\Omega }}}(u(x)-v(y)-n|x-y|^2)=u(x_n)-v(y_n)-n|x_n-y_n|^2\ge u(x_0)-v(x_0). \end{aligned}$$
(4.2)

Using [11, Lemma 3.1], up to subsequences, we have

$$\begin{aligned} \lim _{n\rightarrow +\infty }(x_n,y_n)=({\bar{x}},{\bar{x}})\in \Omega \times \Omega \end{aligned}$$
(4.3)

and

$$\begin{aligned} \lim _{n\rightarrow +\infty }u(x_n)=u({\bar{x}}),\quad \lim _{n\rightarrow +\infty }v(x_n)=v({\bar{x}}),\quad u({\bar{x}})-v({\bar{x}})=u(x_0)-v(x_0). \end{aligned}$$
(4.4)

By semicontinuity of u and v we can find moreover \(\varepsilon >0\) such that

$$\begin{aligned} u(x)<u(x_0)-v(x_0) \quad \forall x\in \Omega _\varepsilon \end{aligned}$$
(4.5)

and also

$$\begin{aligned} -v(x)<u(x_0)-v(x_0) \quad \forall x\in \Omega _\varepsilon \end{aligned}$$
(4.6)

where \(\Omega _\varepsilon =\left\{ x\in {\overline{\Omega }}:\;{\text {dist}}(x,\partial \Omega )<\varepsilon \right\}\). We first claim that for \(n\ge \frac{\left\| u\right\| _\infty +\left\| v\right\| _\infty }{\varepsilon ^2}\)

$$\begin{aligned} \max _{{\overline{\Omega }}\times {\overline{\Omega }}}[ u(x)-v(y)-n|x-y|^2]=\max _{{\mathbb {R}}^N\times {\mathbb {R}}^N}[u(x)-v(y)-n|x-y|^2]\,. \end{aligned}$$
(4.7)

To show (4.7) take any \((x,y)\notin {\overline{\Omega }}\times {\overline{\Omega }}\):

Case 1. If \(|x-y|\ge \varepsilon\), then \(u(x)-v(y)-n|x-y|^2\le \left\| u\right\| _\infty +\left\| v\right\| _\infty -n\varepsilon ^2\le 0\);

Case 2. If \(|x-y|<\varepsilon\) and both \(x\notin {\overline{\Omega }}\) and \(y\notin {\overline{\Omega }}\), then \(u(x)-v(y)-n|x-y|^2\le 0\);

Case 3. If \(|x-y|<\varepsilon\) and \(x\notin {\overline{\Omega }},\ y\in {\overline{\Omega }}\) or \(x\in {\overline{\Omega }},\ y\notin {\overline{\Omega }}\), then using (4.5) and (4.6) we infer that \(u(x)-v(y)-n|x-y|^2<u(x_0)-v(x_0)\).

Thus, (4.7) is proved.

Taking the functions \(\varphi _n(x):=u(x_n)+ n|x-y_n|^2 - n|x_n-y_n|^2\) and \(\phi _n(y):=v(y_n)- n|x_n-y|^2 +n|x_n-y_n|^2\), we see that \(\varphi _n\) touches u from above at \(x_n\), while \(\phi _n\) touches v from below at \(y_n\). Hence, for any \(\rho >0\)

$$\begin{aligned} \begin{aligned} f(x_n)&\le c(x_n)u(x_n)+C_s \sup _{\left\{ \xi _i\right\} _{i=1}^k\in {{\mathcal {V}}}_k}\sum _{i=1}^k \left( \int _0^\rho \frac{\delta (\varphi _n, x_n, \tau \xi _i)}{\tau ^{1+2s}}\,d\tau +\int _\rho ^{+\infty }\frac{\delta (u, x_n, \tau \xi _i)}{\tau ^{1+2s}}\,d\tau \right) \\&=c(x_n)u(x_n)+C_s\frac{kn\rho ^{2-2s}}{1-s}+C_s \sup _{\left\{ \xi _i\right\} _{i=1}^k\in {{\mathcal {V}}}_k}\left( \sum _{i=1}^k\int _\rho ^{+\infty }\frac{\delta (u, x_n, \tau \xi _i)}{\tau ^{1+2s}}\,d\tau \right) . \end{aligned} \end{aligned}$$
(4.8)
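
Here we used that, by the parallelogram identity,

$$\begin{aligned} \delta (\varphi _n, x_n, \tau \xi _i)=n\left( |x_n+\tau \xi _i-y_n|^2+|x_n-\tau \xi _i-y_n|^2-2|x_n-y_n|^2\right) =2n\tau ^2, \end{aligned}$$

so that \(C_s \sum _{i=1}^k\int _0^\rho \frac{2n\tau ^2}{\tau ^{1+2s}}\,d\tau =C_s \frac{kn\rho ^{2-2s}}{1-s}\).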

In a dual fashion

$$\begin{aligned} f(y_n)\ge c(y_n)v(y_n)-C_s\frac{kn\rho ^{2-2s}}{1-s}+C_s \sup _{\left\{ \xi _i\right\} _{i=1}^k\in {{\mathcal {V}}}_k}\left( \sum _{i=1}^k\int _\rho ^{+\infty }\frac{\delta (v, y_n, \tau \xi _i)}{\tau ^{1+2s}}\,d\tau \right) . \end{aligned}$$
(4.9)

Subtracting (4.9) from (4.8), we then obtain

$$\begin{aligned} \begin{aligned} f(x_n)-f(y_n)&\le C_s\frac{2kn\rho ^{2-2s}}{1-s}+c(x_n)u(x_n)-c(y_n)v(y_n)\\&\quad +C_s \sup _{\left\{ \xi _i\right\} _{i=1}^k\in {{\mathcal {V}}}_k}\left(\sum _{i=1}^k\int _\rho ^{+\infty }\frac{\delta (u, x_n, \tau \xi _i) - \delta (v, y_n, \tau \xi _i)}{\tau ^{1+2s}}\,d\tau\right) . \end{aligned} \end{aligned}$$
(4.10)

From (4.2) and (4.7), we have

$$\begin{aligned} u(x)-v(y)-n|x-y|^2\le u(x_n)-v(y_n)-n|x_n-y_n|^2\quad \forall x,y\in {\mathbb {R}}^N. \end{aligned}$$

Choosing in particular \(x=x_n\pm \tau \xi _i\) and \(y=y_n\pm \tau \xi _i\) (so that \(|x-y|=|x_n-y_n|\)), and summing the two resulting inequalities, we deduce that

$$\begin{aligned} \delta (u, x_n, \tau \xi _i) - \delta (v, y_n, \tau \xi _i) \le 0 \end{aligned}$$

for any \(\tau >0\) and for any \(|\xi _i|=1\). Thus, (4.10) implies, assuming without loss of generality that \(\rho < {\text {diam}}(\Omega )\),

$$\begin{aligned} \begin{aligned} f(x_n)-f(y_n)&\le C_s\frac{2kn\rho ^{2-2s}}{1-s}+c(x_n)u(x_n)-c(y_n)v(y_n)\\&\quad +C_s \sup _{\left\{ \xi _i\right\} _{i=1}^k\in {{\mathcal {V}}}_k}\left(\sum _{i=1}^k\int _{{\text {diam}}(\Omega )}^{+\infty }\frac{\delta (u, x_n, \tau \xi _i) - \delta (v, y_n, \tau \xi _i)}{\tau ^{1+2s}}\,d\tau\right) . \end{aligned} \end{aligned}$$
(4.11)

Since \(\Omega \subset B_{{\text {diam}}(\Omega )}(x_n)\) and \(x_n\pm \tau \xi _i \notin B_{{\text {diam}}(\Omega )}(x_n)\) for any \(\tau \ge {\text {diam}}(\Omega )\), we have \(u(x_n\pm \tau \xi _i)\le 0\). For the same reason, \(v(y_n\pm \tau \xi _i)\ge 0\) when \(\tau \ge {\text {diam}}(\Omega )\). Hence,

$$\begin{aligned} \delta (u, x_n, \tau \xi _i) - \delta (v, y_n, \tau \xi _i)\le -2(u(x_n)-v(y_n)) \end{aligned}$$

and

$$\begin{aligned} \begin{aligned} f(x_n)-f(y_n)\le&\; C_s\frac{2kn\rho ^{2-2s}}{1-s}+c(x_n)u(x_n)-c(y_n)v(y_n)\\&\quad -C_s (u(x_n)-v(y_n))\frac{k}{s}({\text {diam}}(\Omega ))^{-2s}. \end{aligned} \end{aligned}$$
(4.12)
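
Here, in the last term, we used the elementary identity

$$\begin{aligned} \int _{{\text {diam}}(\Omega )}^{+\infty }\frac{d\tau }{\tau ^{1+2s}}=\frac{1}{2s}\,({\text {diam}}(\Omega ))^{-2s}. \end{aligned}$$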

Letting first \(\rho \rightarrow 0\), then \(n\rightarrow +\infty\) and using (4.3)-(4.4) we obtain

$$\begin{aligned} 0\le (u(x_0)-v(x_0))\left( c({\bar{x}})-C_s \frac{k}{s}({\text {diam}}(\Omega ))^{-2s}\right) \end{aligned}$$

which is a contradiction since \(u(x_0)-v(x_0)>0\) and \(\left\| c^+\right\| _{\infty }<C_s \frac{k}{s}({\text {diam}}(\Omega ))^{-2s}\). \(\square\)

In what follows, we clarify what we mean by (weak) maximum/minimum principle.

Definition 4.2

We say that the operator \({\mathcal {I}}\) satisfies the weak maximum principle in \(\Omega\) if

$$\begin{aligned} {\mathcal {I}} u \ge 0 {\text { in }} \Omega , \quad u \le 0 {\text { in }} {\mathbb {R}}^N {\setminus } \Omega \quad \Longrightarrow \quad u \le 0 {\text { in }} \Omega , \end{aligned}$$

and that it satisfies the strong maximum principle in \(\Omega\) if

$$\begin{aligned} {\mathcal {I}} u \ge 0 {\text { in }} \Omega , \quad u \le 0 {\text { in }}{\mathbb {R}}^N \quad \Longrightarrow \quad u < 0 {\text { or }} u \equiv 0 {\text { in }} \Omega . \end{aligned}$$

Correspondingly, \({\mathcal {I}}\) satisfies the weak minimum principle in \(\Omega\) if

$$\begin{aligned} {\mathcal {I}} u \le 0 {\text { in }} \Omega , \quad u \ge 0 {\text { in }} {\mathbb {R}}^N {\setminus } \Omega \quad \Longrightarrow \quad u \ge 0 {\text { in }} \Omega , \end{aligned}$$

and it satisfies the strong minimum principle in \(\Omega\) if

$$\begin{aligned} {\mathcal {I}} u \le 0 {\text { in }} \Omega , \quad u \ge 0 {\text { in }} {\mathbb {R}}^N \quad \Longrightarrow \quad u > 0 {\text { or }} u \equiv 0 {\text { in }} \Omega . \end{aligned}$$

The weak minimum/maximum principles follow by applying the comparison principle (Theorem 4.1) with \(u\equiv 0\) or \(v\equiv 0\), respectively. However, the operators \({\mathcal {I}}_k^\pm\) do not always satisfy the strong maximum or minimum principle, see also [7].

Theorem 4.3

The following conclusions hold.

  1. (i)

    The operators \({\mathcal {I}}_k^-\), with \(k < N\), do not satisfy the strong minimum principle in \(\Omega\).

  2. (ii)

    The operator \({\mathcal {I}}_N^-\) satisfies the strong minimum principle in \(\Omega\).

  3. (iii)

    The operators \({\mathcal {I}}_k^+\), with \(k \le N\), satisfy the strong minimum principle in \(\Omega\).

Remark 4.4

We notice that since \({\mathcal {I}}_k^+ (-u)= -{\mathcal {I}}_k^- u\), corresponding results hold for the maximum principle.

Proof

  1. (i)

    We refer to Proposition 2.2 in [7] for a counterexample.

  2. (ii)

    Let us assume that u satisfies

    $$\begin{aligned} {\left\{ \begin{array}{ll} {\mathcal {I}}_N^- u \le 0 &{}{\text { in }} \Omega \\ u \ge 0 &{}{\text { in }} {\mathbb {R}}^N \end{array}\right. } \end{aligned}$$

    and let \(u(x_0)=0\) for some \(x_0 \in \Omega\). We want to prove that \(u \equiv 0\) in \(\Omega\). Let us proceed by contradiction, and assume there exists \(y \in \Omega\) such that \(u(y) >0\). Let us choose a ball \(B_R(y)\) such that

    • \(B_R(y) \subset \Omega\)

    • \(\exists\, x_1\in\partial B_R(y)\) such that \(u(x_1)=0\)

    • \(u(x)>0\) for all \(x\in\overline B_R(y)\backslash\left\{x_1\right\}\).

    Then, by the definition of viscosity supersolution, for fixed \(\rho >0\) and \(\varphi \in C^2(B_\rho (x_1))\) for which \(x_1\) is a minimum point of \(u-\varphi\), and for every \(\varepsilon >0\), there exists an orthonormal basis \(\{ \xi _1, \dots , \xi _N \}=\{ \xi _1(\varepsilon ), \dots , \xi _N(\varepsilon ) \}\) such that

    $$\begin{aligned} \varepsilon \ge C_s \sum _{i=1}^N \left( \int _{0}^\rho \frac{\delta (\varphi , x_1, \tau \xi _i)}{\tau ^{1+2s}} \, d\tau +\int _{\rho }^{+\infty } \frac{\delta (u, x_1, \tau \xi _i)}{\tau ^{1+2s}} \, d\tau \right) . \end{aligned}$$
    (4.13)

    Fix \(\rho < \frac{2R}{\sqrt{N}}\), and choose \(\varphi \equiv 0\) on \(B_\rho (x_1)\). Moreover, we know that there exists \(j=j(\varepsilon )\) such that

    $$\begin{aligned} \langle \xi _j, \widehat{x_1- y} \rangle \ge \frac{1}{\sqrt{N}}, \quad {\text { with }} \widehat{x_1- y} =\frac{x_1-y}{\left| x_1-y\right| }. \end{aligned}$$

    In particular, one has \(\rho < 2R\langle \xi _j, \widehat{x_1- y} \rangle\). Then, taking into account that \(u(x_1)=0\) and \(u \ge 0\), from (4.13) one has

    $$\begin{aligned} \varepsilon&\ge C_s \sum _{i=1}^N \int _{\rho }^{+\infty } \frac{u(x_1 + \tau \xi _i)+u(x_1-\tau \xi _i)}{ \tau ^{1+2s}} \, d\tau \\&= C_s \sum _{i \ne j} \int _{\rho }^{+\infty } \frac{u(x_1 + \tau \xi _i)+ u(x_1-\tau \xi _i) }{\tau ^{1+2s}} \, d\tau + C_s \int _{\rho }^{+\infty } \frac{u(x_1 + \tau \xi _j)+ u(x_1-\tau \xi _j)}{ \tau ^{1+2s}} \, d\tau \\&\ge C_s \int _{\rho }^{+\infty } \frac{u(x_1 - \tau \xi _j)}{\tau ^{1+2s}} \, d\tau \ge C_s \int _{\rho }^{2R\langle \xi _j, \widehat{x_1- y} \rangle } \frac{u(x_1 - \tau \xi _j)}{\tau ^{1+2s}} \, d\tau \\&\ge C_s \frac{1}{2s} \left( \rho ^{-2s} - \left( \frac{2R}{\sqrt{N}} \right) ^{-2s} \right) \min _{{\overline{B}}_R(y) {\setminus } B_\rho (x_1)} u, \end{aligned}$$

    as \(x_1-\tau \xi _j \in {\overline{B}}_R(y) {\setminus } B_\rho (x_1)\) if \(\rho<\tau < 2R\langle \xi _j, \widehat{x_1- y} \rangle\), which gives the contradiction if \(\varepsilon\) is small enough.                                                              

  3. (iii)

    The conclusion for the operators \({\mathcal {I}}_k^+\) follows recalling

    $$\begin{aligned} {\mathcal {I}}_k ^+ u(x) \le 0\; \Rightarrow \; {\mathcal {I}}_N^- u(x)\le 0. \end{aligned}$$

    Indeed, since \({\mathcal {I}}_k ^+ u(x) \le 0\) one has \(\sum _{i=1}^k {\mathcal {I}}_{\xi _i} u(x) \le 0\) for any \(\{ \xi _1, \dots , \xi _k\} \in {\mathcal {V}}_k\). Fix any \(\{ {\bar{\xi }}_1, \dots , {\bar{\xi }}_N \} \in {\mathcal {V}}_N\), and denote with \({\mathcal {A}}_k\) the set of all subsets of cardinality k of \(\{ {\bar{\xi }}_1, \dots , {\bar{\xi }}_N \}\). Clearly, \({\mathcal {A}}_k \subset {\mathcal {V}}_k\). In particular,

    $$\begin{aligned} 0 \ge \sum _{\{ \xi _i \} \in {\mathcal {A}}_k } \sum _{i=1}^k {\mathcal {I}}_{\xi _i} u(x) ={{N-1}\atopwithdelims (){k-1}} \sum _{i=1}^N {\mathcal {I}}_{{\bar{\xi }}_i} u(x), \end{aligned}$$

    from which the conclusion follows.

\(\square\)

Remark 4.5

Notice that the proofs above only require \(\Omega\) to be connected, and not necessarily bounded.

Remark 4.6

The same proof as in item (iii) shows that

$$\begin{aligned} {\mathcal {I}}_k ^+ u(x) \le 0\; \Rightarrow \; {\mathcal {I}}_{k+1}^+ u(x)\le {\mathcal {I}}_k ^+ u(x) \end{aligned}$$

and

$$\begin{aligned} {\mathcal {I}}_k ^- u(x) \le 0\; \Rightarrow \; {\mathcal {I}}_{k-1}^- u(x)\le {\mathcal {I}}_k ^- u(x)\,. \end{aligned}$$

Actually, the operators \({\mathcal {I}}_k^+\) satisfy a stronger condition than the strong minimum principle, which is also satisfied by the fractional Laplacian, and which turns out to be false for \({\mathcal {I}}_N^-\).

Proposition 4.7

One has

  1. (i)

    The operators \({\mathcal {I}}_k^+\), with \(k \le N\), satisfy the following

    $$\begin{aligned} {\mathcal {I}}_k^+ u(x) \le 0 {\text { in }} \Omega , \quad u \ge 0 {\text { in }}{\mathbb {R}}^N \; \Rightarrow \; u > 0 {\text { in }} \Omega {\text { or }} u \equiv 0 {\text { in }} {\mathbb {R}}^N. \end{aligned}$$
  2. (ii)

    There exist functions u such that \({\mathcal {I}}_N^- u \le 0\) in \(\Omega\), \(u \equiv 0\) in \({\overline{\Omega }}\), and \(u \not \equiv 0\) in \({\mathbb {R}}^N {\setminus } {\overline{\Omega }}\).

Fig. 2: Graphic representation of the function u in the proof of Proposition 4.7 (ii), with \(N=2\)

Proof

  1. (i)

    Take u which satisfies the assumptions of the minimum principle, and assume there exists \(x_0 \in \Omega\) such that \(u(x_0)=0\). By the strong minimum principle in \(\Omega\), we know that \(u \equiv 0\) in \(\Omega\), in particular \(u \ge 0\) in \({\mathbb {R}}^N\). Choose any orthonormal basis \(\{ \xi _1, \dots , \xi _{N} \}\) of \({\mathbb {R}}^N\). Thus, recalling that \(u\ge 0\) in \({\mathbb {R}}^N\),

    $$\begin{aligned} 0 \ge {\mathcal {I}}_k^+ u(x_0)&\ge \sum _{i=1}^k {\mathcal {I}}_{\xi _i} u(x_0) =C_s \sum _{i=1}^k \int _0^{+\infty } \frac{u(x_0 + \tau \xi _i) + u(x_0-\tau \xi _i)}{\tau ^{1+2s}} \, d\tau . \end{aligned}$$

    Hence, since \(u \ge 0\) in \({\mathbb {R}}^N\), we conclude that \(u \equiv 0\) on every line passing through \(x_0\) with direction \(\xi _i\). Since the directions are arbitrary, we get the conclusion.

  2. (ii)

    Take

    $$\begin{aligned} u(x)={\left\{ \begin{array}{ll} 0 &{}{\text { if there exists }} i =1, \dots , N {\text { such that }} \left| \langle x, e_i \rangle \right| \le 1\\ 1 &{}{\text { otherwise,}} \end{array}\right. } \end{aligned}$$

    see also Fig. 2, and notice that

    $$\begin{aligned} {\mathcal {I}}_N^- u (x)\le \sum _{i=1}^N {\mathcal {I}}_{e_i} u (x) =0 {\text { in }} B_1(0), \end{aligned}$$

    where \(\{ e_i\}_{i=1}^N\) denotes the canonical basis. Moreover, \(u \equiv 0\) in \({\overline{B}}_1(0)\), while \(u \not \equiv 0\) in \({\mathbb {R}}^N {\setminus } {\overline{B}}_1(0)\). \(\square\)

We now prove a Hopf-type Lemma. We will borrow some ideas from [14], where the fractional Laplacian is taken into account. The next known computation (see [8, end of Section 2.6]) provides a useful barrier function.

Lemma 4.8

For any \(\xi \in {\mathcal {S}}^{N-1}\) one has

$$\begin{aligned} {\mathcal {I}}_\xi {(R^2-|x|^2)}^s_+= - C_s \beta (1-s, s) \, {\text { in }} B_R(0), \end{aligned}$$

where

$$\begin{aligned} \beta (1-s, s)=\int _0^1 t^{-s} (1-t)^{s-1} \, dt \end{aligned}$$

is the Beta function. In particular,

$$\begin{aligned} {\mathcal {I}}_k^+ {(R^2-|x|^2)}^s_+= {\mathcal {I}}_k^- {(R^2-|x|^2)}^s_+= - k \, C_s \beta (1-s, s) \, {\text { in }} B_R(0). \end{aligned}$$

For completeness’ sake, we give a sketch of the proof.

Sketch of the proof. Call \(v(x)={(R^2-|x|^2)}^s_+\), and define \(u : {\mathbb {R}}\rightarrow {\mathbb {R}}\) as \(u(t)={(1-|t|^2)}^s_+\). Notice that for \(x\in B_R(0)\)

$$\begin{aligned} {\mathcal {I}}_\xi v(x)=C_s \, P.V. \int _{\mathbb {R}}\frac{{(R^2-|x+\tau \xi |^2)}^s_+-{(R^2-|x|^2)}^s}{\left| \tau \right| ^{1+2s}} \, d\tau . \end{aligned}$$

Now, one performs the change of variable

$$\begin{aligned} \tau =-\langle x, \xi \rangle + t\sqrt{{R}^2- |x|^2+\langle x, \xi \rangle ^2 } \end{aligned}$$

to get

$$\begin{aligned} {\mathcal {I}}_\xi v(x)= -(-\Delta )^s u \left( \frac{\langle x, \xi \rangle }{\sqrt{R^2- |x|^2+\langle x, \xi \rangle ^2}} \right) =- C_s \beta (1-s, s). \end{aligned}$$
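
Indeed, with this change of variables one has

$$\begin{aligned} R^2-|x+\tau \xi |^2=R^2-|x|^2-2\tau \langle x, \xi \rangle -\tau ^2=\left( R^2-|x|^2+\langle x, \xi \rangle ^2\right) (1-t^2), \end{aligned}$$

so that \(v(x+\tau \xi )=\left( R^2-|x|^2+\langle x, \xi \rangle ^2\right) ^s u(t)\), while \(v(x)=\left( R^2-|x|^2+\langle x, \xi \rangle ^2\right) ^s u\Big (\frac{\langle x, \xi \rangle }{\sqrt{R^2-|x|^2+\langle x, \xi \rangle ^2}}\Big )\).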

The last equality follows from equation (2.43) in [8], see also [13], and the fact that \(\frac{\left| \langle x, \xi \rangle \right| }{\sqrt{R^2- |x|^2+\langle x, \xi \rangle ^2}}<1\). \(\square\)

Proposition 4.9

Let \(\Omega\) be a bounded \(C^2\) domain, and let u satisfy

$$\begin{aligned} {\left\{ \begin{array}{ll} {\mathcal {I}}_N^- u \le 0 &{}{\text { in }} \Omega \\ u \ge 0 &{}{\text { in }} {\mathbb {R}}^N {\setminus } \Omega . \end{array}\right. } \end{aligned}$$

Assume \(u \not \equiv 0\) in \(\Omega\). Then, there exists a positive constant \(c=c(\Omega , u)\) such that

$$\begin{aligned} u(x) \ge c\, d(x)^s\quad \forall x\in {\overline{\Omega }}. \end{aligned}$$
(4.14)

Notice that the conclusion is not true for the operators \({\mathcal {I}}_k^-\), \(k < N\). Indeed, consider the function

$$\begin{aligned} u(x)={\left\{ \begin{array}{ll} e^{-\frac{1}{1-\left| x\right| ^2}} &{}{\text { if }} \left| x\right| < 1 \\ 0 &{}{\text { if }} \left| x\right| \ge 1 \end{array}\right. } \end{aligned}$$

and take \(\{ \xi _i \} \in {\mathcal {V}}_k\) such that \(\langle x, \xi _i \rangle =0\) for any \(i=1, \dots , k\), which is possible since \(k<N\). Hence,

$$\begin{aligned} \left| x+\tau \xi _i\right| ^2 =\left| x\right| ^2 + \tau ^2 \ge \left| x\right| ^2 \end{aligned}$$

and using the radial monotonicity of u

$$\begin{aligned} {\mathcal {I}}_k^- u(x) \le \sum _{i=1}^k {\mathcal {I}}_{\xi _i} u(x) \le 0 {\text { in }} B_1(0). \end{aligned}$$

However, u clearly does not satisfy

$$\begin{aligned} u(x) \ge c\, d(x)^\gamma \end{aligned}$$

for any positive constants \(c, \gamma\).

As a consequence of Proposition 4.9, we immediately obtain the following

Corollary 4.10

Let \(\Omega\) be a bounded \(C^2\) domain, and let u satisfy

$$\begin{aligned} {\left\{ \begin{array}{ll} {\mathcal {I}}_k^+ u \le 0 &{}{\text { in }} \Omega \\ u \ge 0 &{}{\text { in }} {\mathbb {R}}^N {\setminus } \Omega . \end{array}\right. } \end{aligned}$$

Assume \(u \not \equiv 0\) in \(\Omega\). Then,

$$\begin{aligned} u(x) \ge c\, d(x)^s \end{aligned}$$

for some positive constant \(c=c(\Omega , u)\).

Remark 4.11

We also point out that from Proposition 4.9 one can deduce the strong maximum/minimum principle for the operators \({\mathcal {I}}_k^+\), \({\mathcal {I}}_N^-\), which however follows also by a more direct argument as we showed in Theorem 4.3.

Fig. 3: The blue vector represents \(\xi _1(n)\), and the red segment corresponds to points \(x_n -\tau \xi _1(n)\), with \(\tau \in [\tau _1(n), \tau _2(n)]\)

Proof of Proposition 4.9

By the weak and strong minimum principles, see Theorem 4.1 and Theorem 4.3-(ii), \(u>0\) in \(\Omega\). Therefore, for any compact subset K of \(\Omega\) we have

$$\begin{aligned} \inf _{y \in K} u(y) >0. \end{aligned}$$
(4.15)

Without loss of generality we can further assume that u vanishes somewhere in \(\partial \Omega\), otherwise the conclusion is obvious.

Since \(\Omega\) is a \(C^2\) domain, there exists a positive constant \(\varepsilon\), depending on \(\Omega\), such that for any \(x \in \Omega _\varepsilon =\{ x \in \Omega : d(x) < \varepsilon \}\) there are a unique \(z \in \partial \Omega\) for which \(d(x)=\left| x-z\right|\) and a ball \(B_{2\varepsilon }({\bar{y}}) \subset \Omega\) such that \(\overline{B_{2\varepsilon }({\bar{y}})} \cap ({\mathbb {R}}^N {\setminus } \Omega )=\{ z \}\).

Now we consider the radial function \(w(x)={((2\varepsilon )^2-\left| x- {\bar{y}}\right| ^2)}^s_+\) which satisfies, see Lemma 4.8, the equation

$$\begin{aligned} {\mathcal {I}}_N^- w=-N \, C_s \beta (1-s, s) \, {\text { in }} B_{2\varepsilon }({\bar{y}}). \end{aligned}$$

We claim that there exists \({\bar{n}}= {\bar{n}}(u, \varepsilon )\) such that

$$\begin{aligned} u \ge w_{{\bar{n}}} {\text { in }} {\mathbb {R}}^N, \end{aligned}$$

where

$$\begin{aligned} w_n(x) = \frac{1}{n} w(x). \end{aligned}$$

This implies (4.14). Indeed, for any \(x\in \Omega _\varepsilon\)

$$\begin{aligned} w_{{\bar{n}}} (x)=\frac{1}{{\bar{n}}} ((2\varepsilon )^2-\left| x- {\bar{y}}\right| ^2)^s_+ \ge \frac{2\varepsilon }{{\bar{n}}} \left| x-z\right| ^s =\frac{2\varepsilon }{{\bar{n}}} d(x)^s, \end{aligned}$$
(4.16)

and

$$\begin{aligned} u(x)\ge \min _{y\in \Omega \backslash \Omega _\varepsilon }\frac{u(y)}{d(y)^s}d(x)^s\quad \forall x\in \Omega \backslash \Omega _\varepsilon . \end{aligned}$$
(4.17)

From (4.16)-(4.17) we obtain (4.14) with \(c=\min \left\{ \frac{2\varepsilon }{{\bar{n}}} ,\min _{y\in \Omega \backslash \Omega _\varepsilon }\frac{u(y)}{d(y)^s}\right\}\).

We proceed by contradiction in order to prove the claim; hence, we suppose that for any \(n \in {\mathbb {N}}\)

$$\begin{aligned} v_n = w_n-u \end{aligned}$$

is USC and positive somewhere. From now on, for simplicity of notation, we assume that \(B_{2\varepsilon }({\bar{y}}) =B_1(0)\). Since

$$\begin{aligned} w_n =0 \le u {\text { in }} {\mathbb {R}}^N {\setminus } B_1(0), \end{aligned}$$

we know that \(v_n\) attains its positive maximum at some \(x_n \in B_1(0) \subset \Omega\). One has

$$\begin{aligned} 0<u(x_n) < w_n (x_n). \end{aligned}$$

Also, \(w_n \rightarrow 0\) uniformly in \({\mathbb {R}}^N\), thus

$$\begin{aligned} \lim _{n \rightarrow +\infty } u(x_n) =0. \end{aligned}$$
(4.18)

Therefore, recalling (4.15), \(\left| x_n\right| \rightarrow 1\) as \(n \rightarrow \infty\), hence in particular \(x_n \in B_1(0) {\setminus } B_{r_0}(0)\), where \(r_0=\sqrt{1 - \frac{1}{2N}}\), and \(d(x_n) < (1-r_0)/2\) for n large enough.

Since \({\mathcal {I}}_N^- u \le 0\) in \(\Omega\), we know that for every test function \(\varphi \in C^2(B_\rho (x_n))\) such that \(x_n\) is a minimum point of \(u-\varphi\), one has

$$\begin{aligned} \inf _{\{ \xi _i \} \in {\mathcal {V}}_N} \sum _{i=1}^N \left( \int _{0}^{\rho } \frac{\delta (\varphi , x_n, \tau \xi _i)}{\tau ^{1+2s}} \, d\tau + \int _{\rho }^{+\infty } \frac{\delta (u, x_n, \tau \xi _i)}{\tau ^{1+2s}} \, d\tau \right) \le 0, \end{aligned}$$

and in particular for any \(n \in {\mathbb {N}}\) there exists \(\{ \xi _1(n), \dots , \xi _N(n) \}\) orthonormal basis of \({\mathbb {R}}^N\) such that

$$\begin{aligned} \sum _{i=1}^N \left( \int _{0}^{\rho } \frac{\delta (\varphi , x_n, \tau \xi _i(n))}{\tau ^{1+2s}} \, d\tau + \int _{\rho }^{+\infty }\frac{\delta (u, x_n, \tau \xi _i(n))}{\tau ^{1+2s}} \, d\tau \right) \le \frac{1}{n}. \end{aligned}$$
(4.19)

Since \(\{ \xi _1(n), \dots , \xi _N(n) \}\) is a basis of \({\mathbb {R}}^N\), there exists at least one \(\xi _i(n)\) such that \(\langle {\hat{x}}_n, \xi _i(n) \rangle \ge \frac{1}{\sqrt{N}}\) (up to replacing \(\xi _i(n)\) with \(-\xi _i(n)\), which leaves the integrals in (4.19) unchanged). Without loss of generality, we can suppose that \(i=1\). Let us choose \(\rho = d(x_n) < (1- r_0)/2\), and \(\varphi (x)=w_n (x) \in C^2(B_\rho (x_n))\) as test function.

We consider the left-hand side of (4.19), and we aim at providing a positive lower bound independent of n, which will give the desired contradiction. Let us start with the second integral in (4.19) for each fixed \(i=2, \dots , N\), and let us notice that, since \(x_n\) is a maximum point for \(v_n\),

$$\begin{aligned} \int _{\rho }^{+\infty }\frac{\delta (u, x_n, \tau \xi _i(n))}{\tau ^{1+2s}}\, d\tau \ge \int _{\rho }^{+\infty }\frac{\delta (w_n, x_n, \tau \xi _i(n))}{\tau ^{1+2s}}\, d\tau . \end{aligned}$$

On the other hand, in order to estimate the integral for \(i=1\), we split it as follows:

$$\begin{aligned} \int _{\rho }^{+\infty }\frac{\delta (u, x_n, \tau \xi _1(n))}{\tau ^{1+2s}}\, d\tau =J_1+ J_2+J_3, \end{aligned}$$
(4.20)

where

$$\begin{aligned} J_1= & {} \int _{\rho }^{\tau _1(n)} \frac{\delta (u, x_n, \tau \xi _1(n))}{\tau ^{1+2s}} \, d \tau , \\ J_2= & {} \int _{\tau _1(n)}^{\tau _2(n)} \frac{\delta (u, x_n, \tau \xi _1(n))}{\tau ^{1+2s}} d \tau \end{aligned}$$

and

$$\begin{aligned} J_3= \int _{\tau _2(n)}^{+\infty } \frac{\delta (u, x_n, \tau \xi _1(n))}{\tau ^{1+2s}}d \tau , \end{aligned}$$

with

$$\begin{aligned} \tau _1(n)=\frac{\left| x_n\right| }{\sqrt{N}} - \sqrt{ 1 - \frac{1}{2N} - \left| x_n\right| ^2\left( 1-\frac{1}{N}\right) } \end{aligned}$$

and

$$\begin{aligned} \tau _2(n)= \frac{\left| x_n\right| }{\sqrt{N}} + \sqrt{ 1 - \frac{1}{2N} - \left| x_n\right| ^2\left( 1-\frac{1}{N}\right) }. \end{aligned}$$

Notice that if \(\tau \in [\tau _1(n), \tau _2(n)]\) then \(x_n - \tau \xi _1(n) \in B_{r_0}(0)\), as

$$\begin{aligned} \left| x_n -\tau \xi _1(n)\right| ^2 \le \left| x_n\right| ^2+ \tau ^2 - \frac{2 \tau \left| x_n\right| }{\sqrt{N}} \le 1 -\frac{1}{2N}, \end{aligned}$$

see also Fig. 3. Also, for n large we can assume \(\rho =d(x_n)< \tau _1(n) < \tau _2(n)\), since as \(n\rightarrow +\infty\), \(d(x_n)\rightarrow 0\), \(\tau _1(n)\rightarrow \frac{1}{\sqrt{N}}\left( 1-\frac{1}{\sqrt{2}}\right)\) and \(\tau _2(n)\rightarrow \frac{1}{\sqrt{N}}\left( 1+\frac{1}{\sqrt{2}}\right)\).

Integrals \(J_1\) and \(J_3\) can be estimated once again as above, exploiting the inequality

$$\begin{aligned} \delta (u, x_n, \tau \xi _1(n)) \ge \delta (w_n, x_n, \tau \xi _1(n)). \end{aligned}$$

In order to estimate \(J_2\), we now use the fact that \(u(x_n - \tau \xi _1 (n))\ge \min _{{\overline{B}}_{r_0}} u >0\). We obtain

$$\begin{aligned} J_2&\ge \int _{\tau _1(n)}^{\tau _2(n)} \frac{u(x_n - \tau \xi _1(n)) - 2u(x_n)}{\tau ^{1+2s}}\,d\tau \\&\ge \left( \min _{{\overline{B}}_{r_0}} u-2u(x_n)\right) \, \int _{\tau _1(n)}^{\tau _2(n)} \frac{d\tau }{\tau ^{1+2s}}= \frac{\min _{{\overline{B}}_{r_0}} u-2u(x_n)}{2s} \, \left( \frac{1}{\tau _1(n)^{2s}} - \frac{1}{\tau _2(n)^{2s}} \right) . \end{aligned}$$

Now, putting estimates above together and recalling (4.19), one has

$$\begin{aligned} \begin{aligned} \frac{1}{n}&\ge \sum _{i=1}^N \int _0^{+\infty } \frac{\delta (w_n, x_n, \tau \xi _i(n))}{\tau ^{1+2s}}\,d\tau - \int _{\tau _1(n)}^{\tau _2(n)} \frac{\delta (w_n, x_n, \tau \xi _1(n))}{\tau ^{1+2s}}\,d\tau \\&\quad + \frac{\min _{{\overline{B}}_{r_0}} u-2u(x_n)}{2s} \, \left( \frac{1}{\tau _1(n)^{2s}} - \frac{1}{\tau _2(n)^{2s}} \right) \,. \end{aligned} \end{aligned}$$
(4.21)

Notice that, as \(n\rightarrow +\infty\)

$$\begin{aligned} \left| \int _{\tau _1(n)}^{\tau _2(n)} \frac{\delta (w_n, x_n, \tau \xi _1(n))}{\tau ^{1+2s}} \, d \tau \right| \le \frac{2}{s \, n} \left( \frac{1}{\tau _1(n)^{2s}} - \frac{1}{\tau _2(n)^{2s}} \right) \rightarrow 0, \end{aligned}$$

and that by Lemma 4.8

$$\begin{aligned} \sum _{i=1}^N \int _0^{+\infty } \frac{\delta (w_n, x_n, \tau \xi _i(n))}{\tau ^{1+2s}} \, d \tau = - \frac{N}{n} C_s \beta (1-s, s). \end{aligned}$$

Thus, by taking the limit \(n \rightarrow +\infty\) in (4.21) and using (4.18) we get the contradiction

$$\begin{aligned} 0 < \frac{1}{2s} \min _{{\overline{B}}_{r_0}} u \, \left( \left( \frac{1}{\sqrt{N}} \left( 1 - \frac{1}{\sqrt{2}} \right) \right) ^{-2s} - \left( \frac{1}{\sqrt{N}} \left( 1 +\frac{1}{\sqrt{2}} \right) \right) ^{-2s} \right) \le 0 \,. \end{aligned}$$

\(\square\)

5 Stability and the Perron method

We now give some stability results which will be crucial for our purposes. They have been treated in a very general context in [2, 3], see also [1]; here we give a simplified proof with full details for the operators \({\mathcal {I}}_k^\pm\).

For the local counterparts, we refer to [11]. Let us set

$$\begin{aligned} u_* (x)= \sup _{r>0} \inf _{\left| y-x\right| \le r} u(y), \quad u^*(x) = \inf _{r>0} \sup _{\left| y-x\right| \le r} u(y) \end{aligned}$$

and

$$\begin{aligned} {\liminf }_* u_n(x)=\lim _{j \rightarrow \infty } \inf \left\{ u_n(y): n \ge j, \, \left| y-x\right| \le \frac{1}{j} \right\} , \\ {\limsup }^* u_n (x)= \lim _{j \rightarrow \infty } \sup \left\{ u_n(y): n \ge j, \, \left| y-x\right| \le \frac{1}{j} \right\} . \end{aligned}$$
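For instance, for the (non-convergent) sequence \(u_n(x)=\sin (n x)\) on \({\mathbb {R}}\) one has

$$\begin{aligned} {\limsup }^* u_n \equiv 1, \qquad {\liminf }_* u_n \equiv -1, \end{aligned}$$

while if \(u_n \rightarrow u\) locally uniformly then both half-relaxed limits coincide with u. Similarly, \(u_*\) and \(u^*\) are, respectively, the lower and upper semicontinuous envelopes of u.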

Lemma 5.1

Let \(u_n \in USC(\Omega )\) (respectively, \(LSC(\Omega )\)) be a sequence of subsolutions (supersolutions) of

$$\begin{aligned} {\mathcal {I}}_k^\pm u_n = f_n(x) {\text { in }} \Omega , \end{aligned}$$
(5.1)

where \(f_n\) are locally uniformly bounded functions, and \(u_n \le 0\) (\(u_n \ge 0\)) in \({\mathbb {R}}^N {\setminus } \Omega\). We assume that there exists \(M>0\) such that for any \(n \in {\mathbb {N}}\)

$$\begin{aligned} \left\| u_n\right\| _\infty \le M {\text { in }} {\mathbb {R}}^N. \end{aligned}$$
(5.2)

Then \({\overline{u}}:= {\limsup }^* u_n\) (resp. \({\underline{u}} :={\liminf }_* u_n\)) is a subsolution (resp. supersolution) of

$$\begin{aligned} {\mathcal {I}}_k^\pm \overline u = \underline f(x) {\text { in }} \Omega \quad(\text{resp.}\;\; {\mathcal {I}}_k^\pm \underline u = \overline f(x) {\text { in }} \Omega), \end{aligned}$$

such that \(\overline u \le 0\) (resp. \(\underline u \ge 0\)) in \({\mathbb {R}}^N {\setminus } {\overline{\Omega }}\), where \({\underline{f}}=\liminf _* f_n\) (resp. \({\overline{f}}=\limsup ^* f_n\)).

Remark 5.2

Notice that in general we cannot guarantee that the limit solution \({\overline{u}}\) is \(\le 0\) also on the boundary of the domain \(\Omega\). However, in our next results, we will always be able to avoid this difficulty, by comparing the limit solution with the distance function to the boundary, see also Lemma 6.5.

Proof

Let us only consider \({\mathcal {I}}_k^+\), the case of \({\mathcal {I}}_k^-\) being analogous. Let us fix \(x_0 \in \Omega\), and let us choose \(\Phi \in C^2(B_\rho (x_0))\) such that \(\Phi (x_0)={\overline{u}} (x_0)\) and \(\Phi > {\overline{u}}\) in \(B_\rho (x_0) {\setminus } \{ x_0 \}\). We can choose \(x_n \rightarrow x_0\) such that, up to a subsequence, \(u_n-\Phi\) attains a maximum over \({\overline{B}}_{\rho /2}(x_0)\) at \(x_n\), and \({\overline{u}}(x_0)=\lim _n u_n(x_n)\). Since the \(u_n\) are subsolutions, there exist \(\{ \xi _i(n) \} \in {\mathcal {V}}_k\) such that

$$\begin{aligned} f_n(x_n) - \frac{1}{n} \le C_s\sum _{i=1}^k \left( \int _{0}^{\rho /2} \frac{\delta (\Phi , x_n, \tau \xi _i(n))}{\tau ^{1+2s}} \, d\tau + \int _{\rho /2}^{+\infty } \frac{\delta (u_n, x_n, \tau \xi _i(n))}{\tau ^{1+2s}}\, d\tau \right) \end{aligned}$$
(5.3)

Up to extracting a further subsequence, we can assume \(\xi _i(n) \rightarrow {\bar{\xi }}_i\) as \(n \rightarrow \infty\). Then, recalling \(\Phi \in C^2(B_\rho (x_0))\),

$$\begin{aligned} \lim _{n\rightarrow +\infty } \int _{0}^{\rho /2} \frac{\delta (\Phi , x_n, \tau \xi _i(n))}{\tau ^{1+2s}}\, d\tau = \int _{0}^{\rho /2} \frac{\delta (\Phi , x_0, \tau {\bar{\xi }}_i)}{\tau ^{1+2s}} \, d\tau . \end{aligned}$$

On the other hand, applying Fatou's lemma in its reverse form for the \(\limsup\) (which is allowed since, by hypothesis (5.2), the integrands are bounded from above by \(4M\tau ^{-(1+2s)}\), which is integrable on \((\rho /2,+\infty )\)),

$$\begin{aligned} \limsup _{n\rightarrow +\infty } \int _{\rho /2}^{+\infty } \frac{\delta (u_n, x_n, \tau \xi _i(n))}{\tau ^{1+2s}} \, d\tau \le \int _{\rho /2}^{+\infty } \frac{\delta ({\overline{u}}, x_0, \tau {\bar{\xi }}_i)}{\tau ^{1+2s}} \, d\tau \end{aligned}$$

Thus, recalling (5.3), passing to the limit, and also using that \(\Phi \ge {\overline{u}}\) in \(B_\rho (x_0)\),

$$\begin{aligned} \begin{aligned} {\underline{f}}(x_0)&\le C_s\sum _{i=1}^k\left( \int _{0}^{\rho /2} \frac{\delta (\Phi , x_0, \tau {\bar{\xi }}_i)}{\tau ^{1+2s}}\, d\tau + \int _{\rho /2}^{+\infty } \frac{\delta ({\overline{u}}, x_0, \tau {\bar{\xi }}_i)}{\tau ^{1+2s}}\, d\tau \right) \\&\le C_s\sum _{i=1}^k\left( \int _{0}^{\rho } \frac{\delta (\Phi , x_0, \tau {\bar{\xi }}_i)}{\tau ^{1+2s}}\, d\tau + \int _{\rho }^{+\infty } \frac{\delta ({\overline{u}}, x_0, \tau {\bar{\xi }}_i)}{\tau ^{1+2s}} \, d\tau \right) \end{aligned} \end{aligned}$$

which implies the conclusion. \(\square\)

Analogously one proves

Lemma 5.3

Let \((u_\alpha )_\alpha \subseteq USC(\Omega )\) (respectively, \(LSC(\Omega )\)) be a family of subsolutions (supersolutions) of

$$\begin{aligned} {\mathcal {I}}_k^\pm u_\alpha = f_\alpha (x){\text { in }} \Omega \end{aligned}$$

such that \(u_\alpha \le 0\) (\(u_\alpha \ge 0\)) in \({\mathbb {R}}^N {\setminus } \Omega\), and there exists \(M>0\) such that for any \(\alpha\)

$$\begin{aligned} \left\| u_\alpha \right\| _\infty \le M {\text { in }} {\mathbb {R}}^N, \end{aligned}$$

where \(f_\alpha\) are uniformly bounded. Set \(u=\sup _\alpha u_\alpha\) (resp. \(v=\inf _{\alpha } u_\alpha\)). Then \(u^*\) (resp. \(v_*\)) is a subsolution (resp. supersolution) of

$$\begin{aligned} {\mathcal {I}}_k^\pm u = f(x) {\text { in }} \Omega \end{aligned}$$

such that \(u \le 0\) (\(u \ge 0\)) in \({\mathbb {R}}^N {\setminus } \Omega\), where \(f=(\inf _\alpha f_\alpha )_*\) (resp. \(f=(\sup _\alpha f_\alpha )^*\)).

As a consequence, we get the following analog of the Perron method.

Lemma 5.4

Let \({\underline{u}}\) and \({\overline{u}}\) in \(C({\mathbb {R}}^N)\) be, respectively, sub- and supersolutions of

$$\begin{aligned} {\mathcal {I}}_k^\pm u =f(x) {\text { in }} \Omega , \end{aligned}$$
(5.4)

such that \({\underline{u}}= {\overline{u}}=0\) in \({\mathbb {R}}^N {\setminus } \Omega\). Then there exists a solution \(v \in C({\mathbb {R}}^N)\) to (5.4) such that \({\underline{u}} \le v \le {\overline{u}}\), and \(v=0\) in \({\mathbb {R}}^N {\setminus } \Omega\).

Proof

In what follows we only consider the case \({\mathcal {I}}_k^+\); similar considerations hold for \({\mathcal {I}}_k^-\). Let

$$\begin{aligned} v= \sup \{ u: \, u {\text { is a subsolution to (5.4) s.t. }} u \le {\overline{u}} {\text { in }} {\mathbb {R}}^N\}. \end{aligned}$$

Notice that \(v \in L^\infty ({\mathbb {R}}^N)\): indeed, \({\underline{u}} \le {\overline{u}}\) by Theorem 4.1, so that \({\underline{u}}\) belongs to the family defining v, while every member of this family is bounded from above by \({\overline{u}}\); hence

$$\begin{aligned} {\underline{u}} \le v_* \le v \le v^* \le {\overline{u}}, \end{aligned}$$

which also implies \(v=0\) in \({\mathbb {R}}^N {\setminus } \Omega\). We know by Lemma 5.3 that \(v^*\) is a subsolution to (5.4), thus \(v^* \le v\) by maximality of v and \(v=v^*\). We claim that \(v_*\) is a supersolution to (5.4). If the claim is true, then by the comparison principle Theorem 4.1 we conclude \(v^* \le v_*\), and since the other inequality trivially holds, then \(v=v_*=v^* \in C({\mathbb {R}}^N)\) is a solution to (5.4) such that \(v=0\) in \({\mathbb {R}}^N {\setminus } \Omega\).

We now prove the claim. Let us assume by contradiction that \(v_*\) is not a supersolution. Then, there exists \(x_0 \in \Omega\), \(\rho >0\) and \(\Phi \in C^2(\overline{B_\rho (x_0)})\) such that \(\Phi (x_0)=v_*(x_0)\), \(\Phi < v_*\) in \(\overline{B_\rho (x_0)} {\setminus } \{ x_0 \}\), and

$$\begin{aligned} {\mathcal {I}}_k^+ \Psi (x_0) > f(x_0), \end{aligned}$$
(5.5)

where \(\Psi \in LSC({\mathbb {R}}^N) \cap L^\infty ({\mathbb {R}}^N) \cap C^2(B_\rho (x_0))\) is defined as

$$\begin{aligned} \Psi (x)= {\left\{ \begin{array}{ll} \Phi (x) &{} {\text { if }} x \in {\overline{B}}_\rho (x_0) \\ v_*(x) &{} {\text { if }} x \in {\mathbb {R}}^N {\setminus } {\overline{B}}_\rho (x_0). \end{array}\right. } \end{aligned}$$

By Proposition 3.1, there exist \(r < \rho /2\) and \(\varepsilon _0 >0\) such that

$$\begin{aligned} {\mathcal {I}}_k^+ \Psi (x) \ge f(x)+\varepsilon _0 \end{aligned}$$
(5.6)

for any \(x \in B_r(x_0)\). Moreover, for any \(\eta >0\) let

$$\begin{aligned} \Psi _{\eta }(x)= {\left\{ \begin{array}{ll} \Phi (x) +\eta &{} {\text { if }} x \in {\overline{B}}_\rho (x_0) \\ v_*(x) &{} {\text { if }} x \in {\mathbb {R}}^N {\setminus } {\overline{B}}_\rho (x_0). \end{array}\right. } \end{aligned}$$

Then,

$$\begin{aligned} {\mathcal {I}}_k^+ \Psi _{\eta }(x) \ge f(x) \end{aligned}$$
(5.7)

for any \(\eta < \eta _1=\varepsilon _0 C_s^{-1} \frac{s}{k} \left( \frac{\rho }{2} \right) ^{2s}\) and for any \(x \in B_r(x_0)\). Indeed, notice that \(\Psi _{\eta } = \Psi + \eta \chi _{{\overline{B}}_\rho (x_0)}\), where \(\chi _A\) is the characteristic function of the set A, and that for any \(\left| \xi \right| =1\) and \(x \in B_r (x_0)\), \(x \pm \tau \xi \in B_\rho (x_0)\) if \(\tau < \rho -r\). Thus, by direct computations

$$\begin{aligned} {\mathcal {I}}_\xi \chi _{{\overline{B}}_\rho (x_0)} (x)&= C_s \int _{\rho -r}^{+\infty } \frac{\delta (\chi _{{\overline{B}}_\rho (x_0)} , x, \tau \xi )}{\tau ^{1+2s}} \, d \tau \ge -2 C_s \int _{\rho -r}^{+\infty } \frac{1}{\tau ^{1+2s}} \, d \tau \\&= -\frac{C_s }{s} (\rho - r)^{-2s} \ge -\frac{C_s }{s} \left( \frac{\rho }{2} \right) ^{-2s}. \end{aligned}$$

Thus,

$$\begin{aligned} {\mathcal {I}}_k^+ \Psi _{\eta }(x) \ge {\mathcal {I}}_k^+ \Psi (x) - C_s \frac{k}{s} \left( \frac{\rho }{2} \right) ^{-2s} \eta \ge f(x) + \varepsilon _0-C_s \frac{k}{s} \left( \frac{\rho }{2} \right) ^{-2s}\eta \ge f(x) \end{aligned}$$

by using (5.6).
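Indeed, by the choice of \(\eta _1\),

$$\begin{aligned} C_s \frac{k}{s} \left( \frac{\rho }{2} \right) ^{-2s}\eta< C_s \frac{k}{s} \left( \frac{\rho }{2} \right) ^{-2s}\eta _1 = \varepsilon _0 \quad {\text { for any }} \eta <\eta _1. \end{aligned}$$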

Let us take

$$\begin{aligned} \eta _2= \min _{{\overline{B}}_{\rho }(x_0) {\setminus } B_{r/2}(x_0)} (v_* - \Phi )>0, \end{aligned}$$

so that \(v_* > \Phi +\eta\) in \({\overline{B}}_{\rho }(x_0) {\setminus } B_{r/2}(x_0)\) for any \(\eta < \eta _2\).

Consider

$$\begin{aligned} \eta _0\le \min \{\eta _1, \eta _2 \}. \end{aligned}$$

Define

$$\begin{aligned} w= {\left\{ \begin{array}{ll} \max \{ v, \Psi _{\eta _0} \} &{}{\text { in }} B_{r}(x_0)\\ v &{}{\text { in }} {\mathbb {R}}^N {\setminus } B_{r}(x_0). \end{array}\right. } \end{aligned}$$

In particular, \(w(x) \ge \Psi _{\eta _0}(x)\) for all x.

Let us prove that w is a subsolution. Let us fix \({\bar{x}} \in \Omega\), and let us choose \(\varphi \in C^2(B_\varepsilon ({\bar{x}}))\) such that \(w({\bar{x}})=\varphi ({\bar{x}})\), and \(w(x) \le \varphi (x)\) in \(B_\varepsilon ({\bar{x}})\).

If \(w({\bar{x}} )=v({\bar{x}})\), then \(\varphi\) is a test function for v, and we exploit the fact that v is a subsolution. If \(w({\bar{x}})=\Phi ({\bar{x}})+ \eta _0> v({\bar{x}})\), then in particular \({\bar{x}} \in B_{r/2}(x_0)\). Set

$$\begin{aligned} \theta (x)= {\left\{ \begin{array}{ll} \varphi (x) &{}{\text { if }} x \in B_\varepsilon ({\bar{x}}) \\ w(x) &{}{\text { if }} x \in {\mathbb {R}}^N {\setminus } B_\varepsilon ({\bar{x}}). \end{array}\right. } \end{aligned}$$

One has

$$\begin{aligned} \theta ({\bar{x}})=\varphi ({\bar{x}})=w({\bar{x}})=\Phi ({\bar{x}})+\eta _0=\Psi _{\eta _0}({\bar{x}}). \end{aligned}$$

Also, \(\theta (x)\ge \Psi _{\eta _0}(x)\) for any x. Indeed, if \(x \in B_\varepsilon ({\bar{x}})\), then \(\theta (x) =\varphi (x) \ge w(x) \ge \Psi _{\eta _0}(x)\), whereas if \(x \not \in B_\varepsilon ({\bar{x}})\), then \(\theta (x)=w(x) \ge \Psi _{\eta _0}(x)\). Therefore,

$$\begin{aligned} {\mathcal {I}}_k^+ \theta ({\bar{x}}) \ge {\mathcal {I}}_k^+ \Psi _{\eta _0} ({\bar{x}}) \ge f({\bar{x}}) \end{aligned}$$

by (5.7).

Hence, w is a subsolution, and this yields a contradiction. Indeed, there exists a sequence \(x_n \rightarrow x_0\) such that \(\lim _{n\rightarrow \infty } v(x_n)=v_*(x_0)\), and one has

$$\begin{aligned} \lim _n (w(x_n)-v(x_n) )= \max \{ v_*(x_0), \Phi (x_0)+\eta _0 \} - v_*(x_0) = \eta _0 >0. \end{aligned}$$

Thus, \(w(x) > v(x)\) for some x. Finally, we notice that \(w \le {\overline{u}}\) by comparison, and as a consequence \(w \le v\) by maximality of v, a contradiction. \(\square\)

We finally prove existence of a unique solution to the Dirichlet problem in uniformly convex domains

$$\begin{aligned} \Omega =\bigcap _{y\in Y}B_R(y). \end{aligned}$$

The proof will be based on the stability properties above.

Theorem 5.5

Let f be a bounded continuous function, and let \(\Omega\) be a uniformly convex domain. Then there exists a unique function \(u \in C({\mathbb {R}}^N)\) such that

$$\begin{aligned} {\left\{ \begin{array}{ll} {\mathcal {I}}_k^\pm u = f(x) &{}{\text { in }} \Omega \\ u=0 &{}{\text { in }} {\mathbb {R}}^N {\setminus } \Omega . \end{array}\right. } \end{aligned}$$
(5.8)

Proof

Exploiting the barrier functions of Lemma 4.8, we build suitable sub- and supersolutions. Indeed, for any \(y \in Y\) one considers the function

$$\begin{aligned} v_y(x)=M(R^2-\left| x-y\right| ^2)^s_+ \end{aligned}$$

which for \(M=M(k, s)\) big enough satisfies

$$\begin{aligned} {\mathcal {I}}_k^+ v_y \le - \left\| f\right\| _\infty {\text { in }} B_R(y). \end{aligned}$$

We now take

$$\begin{aligned} v(x)= \inf _{y \in Y} v_y(x) \end{aligned}$$
(5.9)

which is a supersolution to (5.8). In order to prove this, first we note that \(0 \le v(x) \le M R^{2s}\), hence v is bounded. Moreover, notice that \(v \in C^{0, s}({\mathbb {R}}^N)\). Indeed, for any \(x, z \in {\overline{\Omega }}\), one has

$$\begin{aligned} \begin{aligned} \left| v(x)-v(z)\right|&\le \sup _y \left| v_y(x) - v_y(z)\right| \\&=M \sup _y \left| (R^2 - \left| x-y\right| ^2)^s-(R^2 - \left| z-y\right| ^2)^s \right| \\&\le M \sup _y \left| (R^2 -\left| x-y\right| ^2) - (R^2 -\left| z-y\right| ^2)\right| ^s \\&= M \sup _y \left| \left| z-y\right| ^2 - \left| x-y\right| ^2\right| ^s \\&= M \sup _y (\left| z-y\right| + \left| x-y\right| )^s \left| \left| z-y\right| -\left| x-y\right| \right| ^s \\&\le M (2R)^s \left| z-x\right| ^s. \end{aligned} \end{aligned}$$

Moreover, \(v = 0\) in \({\mathbb {R}}^N {\setminus } \Omega\). Indeed, if \(x \not \in \Omega\), there exists \(y=y(x)\) such that \(x \not \in B_R(y)\) which implies

$$\begin{aligned} 0 \le v(x) \le v_y(x) =M(R^2-\left| x-y\right| ^2)_+^s=0. \end{aligned}$$

The infimum in definition (5.9) is attained, as given \(x_0 \in \Omega\), we can choose \(y_0 \in Y\) and \(z_0 \in \partial B_R(y_0)\) such that

$$\begin{aligned} \left| x_0 - z_0\right| = d(x_0)= \eta . \end{aligned}$$

Therefore, as \(B_\eta (x_0) \subseteq \Omega \subseteq B_R(y)\) for any \(y \in Y\),

$$\begin{aligned} \left| y-x_0\right| \le R-\eta = \left| x_0-y_0\right| \end{aligned}$$

and as a consequence, since \(v_y(x_0)\) is decreasing with respect to \(\left| x_0-y\right| \), the infimum in (5.9) is attained at \(y_0\), that is, \(v(x_0)=v_{y_0}(x_0)\). In particular,

$$\begin{aligned} {\mathcal {I}}_k^+ v_{y_0}(x_0) \le - \left\| f\right\| _\infty , \end{aligned}$$

which, since \(v\le v_{y_0}\) in \({\mathbb {R}}^N\) and \(v(x_0)=v_{y_0}(x_0)\), yields in the viscosity sense

$$\begin{aligned} {\mathcal {I}}_k^+ v(x) \le - \left\| f\right\| _\infty \, {\text { in }} \Omega . \end{aligned}$$

Analogously, we take the supremum of the subsolutions

$$\begin{aligned} w_y(x)=- v_y(x). \end{aligned}$$

Notice that

$$\begin{aligned} {\mathcal {I}}_k^+ w_y(x) \ge {\mathcal {I}}_k^- w_y(x)=-{\mathcal {I}}_k^+ v_y(x) \ge \left\| f\right\| _\infty {\text { in }} B_R(y) \end{aligned}$$

for a sufficiently big constant M.

We now exploit the Perron method, applying Lemma 5.4, to get a solution to (5.8). Uniqueness follows from Theorem 4.1. \(\square\)

6 Maximum principles and principal eigenvalues

We finally define the following generalized principal eigenvalues, adapting the classical definition in [4],

$$\begin{aligned} \mu _k^\pm = \sup \left\{ \mu :\, \exists v \in LSC(\Omega )\cap L^\infty ({\mathbb {R}}^N), v>0 {\text { in }} \Omega , v \ge 0 {\text { in }} {\mathbb {R}}^N, {\mathcal {I}}_k^\pm v + \mu v \le 0 {\text { in }} \Omega \right\} . \end{aligned}$$

Also let us set

$$\begin{aligned} {\bar{\mu }}^\pm _k=\sup \left\{ \mu :\,\exists v\in LSC(\Omega )\cap L^\infty ({\mathbb {R}}^N),\,\inf _\Omega v>0,\,v\ge 0\;{\text { in }} {\mathbb {R}}^N,\;{\mathcal {I}}^\pm _kv+\mu v\le 0 {\text { in }} \Omega \right\} . \end{aligned}$$

Remark 6.1

In this section we only consider the operators \({\mathcal {I}}^\pm _k(\cdot )+\mu \cdot \); however, up to some technicalities, one can also treat operators with a zero order term, like \({\mathcal {I}}^\pm _k(\cdot )+c(x) \cdot + \mu \cdot \).

Theorem 6.2

The operators \({\mathcal {I}}^\pm _k(\cdot )+\mu \cdot\) satisfy the maximum principle for \(\mu <{\bar{\mu }}^\pm _k\).

Proof

We consider \({\mathcal {I}}^+_k\), the other case being analogous. Let \(\mu <{\bar{\mu }}^+_k\) and let \(u\in USC({\overline{\Omega }})\cap L^\infty ({\mathbb {R}}^N)\) be a solution of

$$\begin{aligned} \left\{ \begin{array}{cl} {\mathcal {I}}^+_ku+\mu u\ge 0 &{} {\text { in }}\Omega \\ u\le 0 &{} {\text { in }}{\mathbb {R}}^N\backslash \Omega . \end{array}\right. \end{aligned}$$

By contradiction, we suppose that \(u(x_0)>0\) for some \(x_0\in \Omega\). In view of Theorem 4.1 we have \(\mu >0\). By the definition of \({\bar{\mu }}^+_k\) there exist \(\eta \in (\mu ,{\bar{\mu }}^+_k)\) and a nonnegative bounded function \(v\in LSC(\Omega )\) such that

$$\begin{aligned} {\mathcal {I}}^+_kv+\eta v\le 0\quad {\text { in }}\Omega \; \text { and }\; \displaystyle \inf _\Omega v>0. \end{aligned}$$

Set \(\gamma =\sup _\Omega \frac{u}{v}\). Then,

$$\begin{aligned} 0<\frac{u(x_0)}{v(x_0)}\le \gamma <+\infty \end{aligned}$$

and for any \(\varepsilon \in (0,\gamma )\) there exists \(z_\varepsilon \in \Omega\) such that

$$\begin{aligned} u(z_\varepsilon )-(\gamma -\varepsilon )v(z_\varepsilon )>0. \end{aligned}$$

From this, we infer that there exists \(x_\varepsilon \in \Omega\) such that

$$\begin{aligned} M_\varepsilon :=\max _{{\overline{\Omega }}} [u(x)-(\gamma -\varepsilon )v(x)] =u(x_\varepsilon )-(\gamma -\varepsilon )v(x_\varepsilon )>0. \end{aligned}$$

For \(n\in \mathbb {N}\) let \(x_n=x_n(\varepsilon ), y_n=y_n(\varepsilon )\in {\overline{\Omega }}\) be such that

$$\begin{aligned} \begin{aligned} \max _{{\overline{\Omega }}\times {\overline{\Omega }}}[u(x)-(\gamma -\varepsilon )v(y)-n|x-y|^2]&=u(x_n)-(\gamma -\varepsilon )v(y_n)-n|x_n-y_n|^2\\&\ge M_\varepsilon >0. \end{aligned} \end{aligned}$$
(6.1)

Arguing as in the proof of Theorem 4.1 we find that, for n sufficiently large,

$$\begin{aligned} \max _{{\overline{\Omega }}\times {\overline{\Omega }}}[u(x)-(\gamma -\varepsilon )v(y)-n|x-y|^2]=\max _{{\mathbb {R}}^N\times {\mathbb {R}}^N}[u(x)-(\gamma -\varepsilon )v(y)-n|x-y|^2]. \end{aligned}$$
(6.2)

Moreover, up to extracting a subsequence, we may further assume that \((x_n,y_n)\rightarrow ({\bar{x}},{\bar{x}})\), with \({\bar{x}}\in \Omega\). Using \(\varphi _n(x)=u(x_n) + n|x-y_n|^2- n\left| x_n-y_n\right| ^2\) as a test function for u at \(x_n\), testing \((\gamma -\varepsilon )v\) at \(y_n\) with \(\phi _n(y)=(\gamma -\varepsilon )v(y_n) -n|x_n-y|^2 + n\left| x_n-y_n\right| ^2\), and finally subtracting the corresponding inequalities, see also the proof of Theorem 4.1, we obtain

$$\begin{aligned} \begin{aligned} \eta (\gamma -\varepsilon )v(y_n)&\le \mu u(x_n)+C_s(\gamma -\varepsilon +1)\frac{nk\rho ^{2-2s}}{1-s}\\&\quad +C_s \sup _{\left\{ \xi _i\right\} _{i=1}^k\in {{\mathcal {V}}}_k}\sum _{i=1}^k\int _\rho ^{+\infty }\frac{\delta (u, x_n, \tau \xi _i)- \delta ((\gamma -\varepsilon ) v, y_n, \tau \xi _i)}{\tau ^{1+2s}}\,d\tau . \end{aligned} \end{aligned}$$

By (6.1)-(6.2) it follows that \(\delta (u, x_n, \tau \xi _i)- \delta ((\gamma -\varepsilon ) v, y_n, \tau \xi _i)\le 0\).
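Indeed, since by (6.2) the pair \((x_n,y_n)\) maximizes \(u(x)-(\gamma -\varepsilon )v(y)-n|x-y|^2\) over \({\mathbb {R}}^N\times {\mathbb {R}}^N\), for every \(\tau >0\) one has

$$\begin{aligned} u(x_n\pm \tau \xi _i)-(\gamma -\varepsilon )v(y_n\pm \tau \xi _i)-n|x_n-y_n|^2\le u(x_n)-(\gamma -\varepsilon )v(y_n)-n|x_n-y_n|^2, \end{aligned}$$

and summing the two inequalities corresponding to the signs \(\pm\) gives the claimed sign condition. Hence,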

$$\begin{aligned} \eta (\gamma -\varepsilon )v(y_n)\le \mu u(x_n)+C_s(\gamma -\varepsilon +1)\frac{nk\rho ^{2-2s}}{1-s}. \end{aligned}$$

Letting \(\rho \rightarrow 0\)

$$\begin{aligned} \eta (\gamma -\varepsilon )v(y_n)\le \mu u(x_n). \end{aligned}$$

Then, as \(n\rightarrow +\infty\)

$$\begin{aligned} \eta (\gamma -\varepsilon )v({\bar{x}})\le \liminf _{n\rightarrow +\infty }\eta (\gamma -\varepsilon )v(y_n)\le \limsup _{n\rightarrow +\infty }\mu u(x_n)\le \mu u({\bar{x}})\le \mu \gamma v({\bar{x}}). \end{aligned}$$

Since v and \(\gamma\) are positive and \(\varepsilon\) can be chosen arbitrarily small, we reach the contradiction

$$\begin{aligned} \eta \le \mu . \end{aligned}$$

\(\square\)

Proposition 6.3

One has

  1. (i)

    \({\bar{\mu }}_k^-=\mu _k^-=+\infty\) for any \(k < N\).

  2. (ii)

    If \(B_{R_1} \subseteq \Omega \subseteq B_{R_2}\), then

    $$\begin{aligned}\frac{c_2}{R_2^{2s}} \le {\bar{\mu }}_1^+ \le \cdots \le {\bar{\mu }}_N^+\le {\bar{\mu }}_N^- \le \frac{c_1}{R_1^{2s}} < +\infty , \end{aligned}$$

    where \(c_1, c_2\) are positive constants depending on s.

Proof

  1. (i)

    Let \(w(x)=e^{-\alpha \left| x\right| ^2} > 0\) for \(\alpha >0\) and fix any \(\mu >0\). Since

    $$\begin{aligned} \int _0^{+\infty } (1-e^{-\alpha \tau ^2}) \tau ^{-(1+2s)} \, d\tau = \alpha ^s \int _0^{+\infty } (1-e^{- \tau ^2}) \tau ^{-(1+2s)} \, d\tau , \end{aligned}$$

    using Theorem 3.4 in [7] (see also Remark 3.5) we obtain

    $$\begin{aligned} {\mathcal {I}}_k^- w + \mu w&= k {\mathcal {I}}_{x^\perp } w + \mu w \\&= - 2k C_s e^{-\alpha \left| x\right| ^2} \int _0^{+\infty } (1-e^{-\alpha \tau ^2}) \tau ^{-(1+2s)} \, d\tau + \mu e^{-\alpha \left| x\right| ^2} = 0 \end{aligned}$$

    if

    $$\begin{aligned} \alpha ^s=\frac{\mu }{2k C_s \int _0^{+\infty } (1-e^{-\tau ^2}) \tau ^{-(1+2s)}\, d\tau }, \end{aligned}$$

    where \(x^\perp\) is a unit vector such that \(\langle x, x^\perp \rangle =0\).

  2. (ii)

    We first note that in the definitions of \({\bar{\mu }}^\pm _k\) it is not restrictive to suppose \(\mu \ge 0\) (since the constant function \(v=1\) is a positive solution of \({\mathcal {I}}^\pm _kv=0\)). Moreover if \(\mu \ge 0\) and v is a nonnegative supersolution of the equation

    $$\begin{aligned} {\mathcal {I}}^+_kv+\mu v=0 \quad {\text {in }} \Omega , \end{aligned}$$

    then \({\mathcal {I}}^+_kv\le 0\) in \(\Omega\) and using Remark 4.6 we have

    $$\begin{aligned} {\mathcal {I}}^+_{k+1}v+\mu v\le 0 \quad {\text { in }}\Omega . \end{aligned}$$

    This leads to \({\bar{\mu }}^+_k\le {\bar{\mu }}^+_{k+1}\) for any \(k=1,\dots ,N-1\). If \(k=N\), using the inequality \({\mathcal {I}}^-_N\le {\mathcal {I}}^+_N\) we immediately obtain that \({\bar{\mu }}^+_N\le {\bar{\mu }}^-_N\).

Also, by scaling we obtain

$$\begin{aligned} {\bar{\mu }}_N^- (\Omega ) \le {\bar{\mu }}_N^- (B_{R_1}) = \frac{{\bar{\mu }}_N^- (B_1)}{R_1^{2s}}. \end{aligned}$$
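Indeed, any constant \(\mu\) which is admissible in the definition of \({\bar{\mu }}_N^- (\Omega )\) is also admissible for \({\bar{\mu }}_N^- (B_{R_1})\), since \(B_{R_1} \subseteq \Omega\) and the corresponding function v can be used on the smaller ball; this gives the first inequality. As for the scaling identity, if v is admissible for \({\bar{\mu }}_N^- (B_1)\) with constant \(\mu\), then \(v_R(x):=v(x/R)\) satisfies, by the change of variables \(\tau =R\sigma\) in each one-dimensional integral,

$$\begin{aligned} {\mathcal {I}}_\xi v_R(x)= C_s \int _0^{+\infty } \frac{\delta (v, x/R, (\tau /R) \xi )}{\tau ^{1+2s}} \, d\tau = R^{-2s}\, {\mathcal {I}}_\xi v(x/R), \end{aligned}$$

so that \({\mathcal {I}}_N^- v_R + \mu R^{-2s} v_R \le 0\) in \(B_R\); arguing in both directions one obtains \({\bar{\mu }}_N^- (B_R)={\bar{\mu }}_N^- (B_1)/R^{2s}\).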

Hence, it is sufficient to prove that \({\bar{\mu }}_N^- (B_1)\) is bounded from above.

Arguing as in [16], choose a continuous function \(h \ge 0\), \(h \not \equiv 0\), with compact support in \(B_1\). By Theorem 5.5, there exists a unique solution to the following problem:

$$\begin{aligned} {\left\{ \begin{array}{ll} - {\mathcal {I}}_N^- v = h &{}{\text { in }}B_1 \\ v=0 &{}{\text { in }} {\mathbb {R}}^N {\setminus } B_1. \end{array}\right. } \end{aligned}$$

By Theorem 4.1 and Theorem 4.3, \(v >0\) in \(B_1\). Since h has compact support and v is continuous and positive in \(B_1\), we may select a constant \(\rho _0 >0\) such that \(\rho _0 v \ge h\) in \(B_1\). Therefore, v satisfies

$$\begin{aligned} {\left\{ \begin{array}{ll} {\mathcal {I}}_N^- v +\rho _0v\ge 0 &{}{\text { in }}B_1 \\ v=0 &{}{\text { in }} {\mathbb {R}}^N {\setminus } B_1. \end{array}\right. } \end{aligned}$$

By Theorem 6.2 we infer that \({\bar{\mu }}_N^- \le \rho _0\).

As for the bound from below, we observe that \(u(x)={(R_2^2 - \left| x\right| ^2)}^s_+ + \varepsilon\) satisfies

$$\begin{aligned} {\mathcal {I}}_1^+ u + \mu u=- C_s \beta (1-s, s) + \mu u \le 0 \end{aligned}$$

if we take \(\mu \le \frac{C_s \beta (1-s,s)}{R_2^{2s}+\varepsilon }\); since \(\varepsilon >0\) is arbitrary, this gives \({\bar{\mu }}_1^+ \ge \frac{ C_s \beta (1-s,s)}{R_2^{2s}}>0\). \(\square\)

Remark 6.4

Notice that the proof of (i) above suggests the existence of a continuum of eigenvalues in \((0, +\infty )\) for \({\mathcal {I}}_k^- + \mu\) in \(\mathbb {R}^N\).

We now consider uniformly convex domains and prove that \({\bar{\mu }}_k^+=\mu _k^+\). Moreover this common value turns out to be the optimal threshold for the validity of the maximum principle. We start with the next Lemma which will be crucial in the rest of the paper.

Lemma 6.5

Let m be a positive constant and let u be a solution of

$$\begin{aligned} {\left\{ \begin{array}{ll} {\mathcal {I}}_k^+u (x) \ge -m &{}{\text { in }} \Omega \\ u \le 0 &{}{\text { in }} {\mathbb {R}}^N{\setminus } \Omega , \end{array}\right. } \end{aligned}$$

where the domain \(\Omega\) is uniformly convex. Then there exists a positive constant \(C=C(\Omega , m, s)\) such that

$$\begin{aligned} u(x) \le C \, d(x)^s \end{aligned}$$
(6.3)

for any \(x \in {\overline{\Omega }}\).

Proof

Fix any \(y \in Y\) and consider the function

$$\begin{aligned} v_y(x)=M {(R^2-\left| x-y\right| ^2)}^s _+ \end{aligned}$$

where M is such that \(k M C_s \beta (1-s, s)=m\). Then,

$$\begin{aligned} {\mathcal {I}}_k^+ v_y(x) = - k M C_s \beta (1-s, s)=-m. \end{aligned}$$

Also, we point out that \(v_y(x) \ge 0\) in \({\mathbb {R}}^N\). By the comparison principle, see Theorem 4.1, \(u(x) \le v_y(x)\) in \({\mathbb {R}}^N\). Let \(x \in \Omega\) and select \(z \in \partial \Omega\) so that \(d(x)=\left| x-z\right|\). Choose \(y \in Y\) such that \(z \not \in B_R(y)\). Notice that since \(\left| x-y\right| \le R\),

$$\begin{aligned} {(R^2-\left| x-y\right| ^2)}^s&= {(R-\left| x-y\right| )}^s{(R+\left| x-y\right| )}^s \le 2^s R^s {(R-\left| x-y\right| )}^s \\&\le 2^s R^s \left| x-z\right| ^s= 2^s R^s d(x)^s, \end{aligned}$$

where we used that \(R=\left| z-y\right| \), so that \(R-\left| x-y\right| =\left| z-y\right| -\left| x-y\right| \le \left| x-z\right| \) by the triangle inequality.

Thus, for any \(x \in \overline{\Omega }\)

$$\begin{aligned} u(x) \le M {(R^2-\left| x-y\right| ^2)}^s \le M 2^s R^s d(x)^s, \end{aligned}$$

leading to (6.3) with \(C= M 2^s R^s\). \(\square\)

Theorem 6.6

Let \(\Omega\) be a uniformly convex domain. There exists a nonnegative subsolution \(v \not \equiv 0\) of

$$\begin{aligned} {\left\{ \begin{array}{ll} {\mathcal {I}}_k^+ v + {\bar{\mu }}_k^+ v= 0 &{}{\text { in }} \Omega \\ v =0 &{}{\text { in }} {\mathbb {R}}^N {\setminus } \Omega . \end{array}\right. } \end{aligned}$$

Proof

Let us consider the problem

$$\begin{aligned} {\left\{ \begin{array}{ll} {\mathcal {I}}_k^+ w +\left( {\bar{\mu }}_k^+ - \frac{1}{n} \right) w=-1 &{}{\text { in }} \Omega \\ w =0 &{}{\text { in }} {\mathbb {R}}^N {\setminus } \Omega , \end{array}\right. } \end{aligned}$$
(6.4)

and define

$$\begin{aligned} A_n=\{ w \in USC({\mathbb {R}}^N) {\text { nonnegative subsolution of (6.4) s.t. }} w=0 {\text { on }} {\mathbb {R}}^N {\setminus } \Omega \}. \end{aligned}$$

One has \(\emptyset \ne A_n \subseteq A_{n+1}\). We claim that there exist \(w_n \in A_n\) such that \(\lim _n \left\| w_n\right\| _\infty =+ \infty\). If the claim is true, then we define \(z_n=\frac{w_n}{\left\| w_n\right\| }\), which satisfy

$$\begin{aligned} {\mathcal {I}}_k^+ z_n +\left( {\bar{\mu }}_k^+ - \frac{1}{n} \right) z_n \ge - \frac{1}{\left\| w_n\right\| } {\text { in }} \Omega . \end{aligned}$$

By semicontinuity, there exists a sequence \(x_n \in \Omega\) such that \(\sup _\Omega z_n=z_n(x_n)=1\). Up to a subsequence, \(x_n \rightarrow x_0\), and by Lemma 6.5 we have \(x_0 \in \Omega\), since \(1=z_n(x_n)\le C\, d(x_n)^s\) for a positive constant C independent of n. Thus, \(v(x)={\limsup _n}^* z_n(x)\) satisfies, by Lemma 5.1,

$$\begin{aligned} {\mathcal {I}}_k^+ v + {\bar{\mu }}_k^+ v \ge 0 {\text { in }} \Omega \end{aligned}$$

and, again by Lemma 6.5, \(v = 0\) on \({\mathbb {R}}^N {\setminus } \Omega\). Also, \(v(x_0)=1\), and the proof is complete.

Let us now prove the claim. We will proceed by contradiction, assuming that every sequence \(u_n \in A_n\) satisfies \(\limsup _n \left\| u_n\right\| _\infty < + \infty\), and split the proof into steps.

Step 1. We show that \(U_n(x)=\sup _{w \in A_n} w(x) < +\infty\) for any x and any n.

If this is not the case, then there exist \({\bar{n}}\) and \({\bar{x}}\) such that \(U_{{\bar{n}}}({\bar{x}})=+\infty\), and, by definition of supremum, there exists a sequence \((u_n)_n \subseteq A_{{\bar{n}}}\) such that \(\lim _n u_n ({\bar{x}}) =+ \infty\). Since \(A_{{\bar{n}}} \subseteq A_n\) for any \(n \ge {\bar{n}}\), we have \(u_n \in A_n\) for any \(n \ge {\bar{n}}\) and \(\lim _n \left\| u_n\right\| _\infty =+\infty\), a contradiction.

Step 2. One has \(\left\| U_n\right\| _\infty < +\infty\) for any fixed n.

Indeed, if there exists \({\bar{n}}\) such that \(\left\| U_{{\bar{n}}}\right\| _\infty =+\infty\), then there exists \(x_n \in \Omega\) and \(u_n \in A_{{\bar{n}}}\) such that \(u_n(x_n) \rightarrow +\infty\). Then, \(u_n \in A_n\) for any \(n \ge {\bar{n}}\), and \(\left\| u_n\right\| _\infty \ge u_n(x_n) \rightarrow +\infty\), a contradiction.

Step 3. We show that there exists a constant \(C>0\) such that \(\left\| U_n\right\| _\infty \le C\) uniformly in n.

Notice that \(\left\| U_n\right\| _\infty \le \left\| U_{n+1}\right\| _\infty\) and hence if it is not bounded, then \(\left\| U_n\right\| _\infty \rightarrow \infty\), thus \(\left\| u_n\right\| _\infty \rightarrow \infty\) for a sequence \(u_n \in A_n\), a contradiction.

Step 4. One has that \(U_n=(U_n)^*\) and that \(U_n\) is a subsolution to (6.4) such that \(U_n=0\) in \({\mathbb {R}}^N {\setminus } \Omega\).

Indeed, \((U_n)^*\) is a subsolution by Lemma 5.3. Moreover, since for any \(u \in A_n\)

$$\begin{aligned} {\left\{ \begin{array}{ll} {\mathcal {I}}_k^+ u \ge -(1+{\bar{\mu }}_k^+ C) &{} {\text { in }} \Omega \\ u=0 &{} {\text { in }} {\mathbb {R}}^N {\setminus } \Omega , \end{array}\right. } \end{aligned}$$

where C is the constant found in Step 3, by applying Lemma 6.5 we have \(u(x) \le {\tilde{C}} d(x)^s\), for a positive constant \({\tilde{C}}= {\tilde{C}}({\bar{\mu }}_k^+ C, s, \Omega )\), and as a consequence \((U_n)^*=0\) in \({\mathbb {R}}^N {\setminus } \Omega\). Finally, by maximality of \(U_n\), we conclude \(U_n=(U_n)^*\).

Step 5. Conclusion of the proof of the claim.

By using the same argument as in the proof of Lemma 5.4 (in particular the bump construction), we prove that \((U_n)_*\) is a supersolution to (6.4), which implies that \((U_n)_*+\varepsilon\) is a supersolution of

$$\begin{aligned} {\mathcal {I}}_k^+ w +\left( {\bar{\mu }}_k^+ + \frac{1}{n} \right) w=0 {\text { in }} \Omega \end{aligned}$$

if n is sufficiently big, and \(\varepsilon\) is sufficiently small. Also, \((U_n)_*+\varepsilon >0\) in \({\overline{\Omega }}\), which contradicts the definition of \({\bar{\mu }}_k^+\). \(\square\)

Lemma 6.7

Let \(\Omega\) be a convex domain. Then \(\mu _k^+={\bar{\mu }}_k^+\).

Proof

Fix any \(\varepsilon >0\). Let \(v \in LSC(\Omega )\cap L^\infty ({\mathbb {R}}^N)\) be such that \(v>0\) in \(\Omega\), \(v \ge 0\) in \({\mathbb {R}}^N\), and \({\mathcal {I}}_k^+ v + (\mu _k^+-\varepsilon ) v \le 0\) in \(\Omega\). Fix \(x_0 \in \Omega\), and observe that

$$\begin{aligned} {\tilde{v}}(x)= v\left( \frac{x+\varepsilon x_0}{1+\varepsilon } \right) \end{aligned}$$

satisfies

$$\begin{aligned} {\mathcal {I}}_k^+ {\tilde{v}} + \frac{\mu _k^+-\varepsilon }{(1+\varepsilon )^{2s}} {\tilde{v}} \le 0 {\text { in }} \Omega . \end{aligned}$$
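Indeed, setting \(x_\varepsilon = \frac{x+\varepsilon x_0}{1+\varepsilon }\), which belongs to \(\Omega\) for any \(x \in \Omega\) by convexity, the change of variables \(\tau =(1+\varepsilon )\sigma\) in each one-dimensional integral gives

$$\begin{aligned} {\mathcal {I}}_\xi {\tilde{v}}(x)= C_s \int _0^{+\infty } \frac{\delta \left( v, x_\varepsilon , \tfrac{\tau }{1+\varepsilon } \xi \right) }{\tau ^{1+2s}} \, d\tau = (1+\varepsilon )^{-2s}\, {\mathcal {I}}_\xi v(x_\varepsilon ), \end{aligned}$$

so that \({\mathcal {I}}_k^+ {\tilde{v}}(x)=(1+\varepsilon )^{-2s}\,{\mathcal {I}}_k^+ v(x_\varepsilon )\le -\frac{\mu _k^+-\varepsilon }{(1+\varepsilon )^{2s}}\, {\tilde{v}}(x)\).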

Also, \({\tilde{v}} >0\) in \({\overline{\Omega }}\), as \(\Omega\) is convex. Thus,

$$\begin{aligned} {\bar{\mu }}_k^+ \ge \frac{\mu _k^+-\varepsilon }{(1+\varepsilon )^{2s}} \end{aligned}$$

from which, letting \(\varepsilon \rightarrow 0\), we obtain \(\mu _k^+\le {\bar{\mu }}_k^+\); since the family of functions in the definition of \({\bar{\mu }}_k^+\) is contained in the one defining \(\mu _k^+\), the reverse inequality always holds, and equality follows. \(\square\)

Theorem 6.8

Let \(\Omega\) be a uniformly convex domain. The operator

$$\begin{aligned} {\mathcal {I}}^+_k + \mu \end{aligned}$$

satisfies the maximum principle if and only if \(\mu< \mu _k^+ < +\infty\), and correspondingly

$$\begin{aligned} {\mathcal {I}}_k^- + \mu \end{aligned}$$

satisfies the maximum principle for any \(\mu \in \mathbb {R}\).

Proof

This follows immediately from Theorems 6.2 and 6.6, Proposition 6.3 and Lemma 6.7. \(\square\)

7 Hölder estimates

Proposition 7.1

Let u satisfy

$$\begin{aligned} {\left\{ \begin{array}{ll} {\mathcal {I}}_1^+ u(x) =f(x) &{} {\text { in }}\Omega \\ u=0 &{} {\text { in }}{\mathbb {R}}^N {\setminus } \Omega , \end{array}\right. } \end{aligned}$$
(7.1)

where \(\Omega\) is a uniformly convex domain. If \(s>\frac{1}{2}\), then u is Hölder continuous of order \(2s-1\) in \({\mathbb {R}}^N\).

Proof

It is sufficient to show that, for any \(x, y \in \overline{\Omega }\) such that \(\left| x-y\right| < {\bar{\rho }}\), where \({\bar{\rho }}={\bar{\rho }}(s,\left\| f\right\| _\infty )\) is a positive constant to be determined below, one has

$$\begin{aligned} u(x)-u(y) \le L \left| x-y\right| ^{2s-1} \end{aligned}$$
(7.2)

with \(L=L(\Omega , \left\| u\right\| _\infty , \left\| f\right\| _\infty ,s)\). Fix \(\theta \in (s,2s)\) and consider

$$\begin{aligned} w(|x|)=-\left| x\right| ^{2s-1} + \left| x\right| ^\theta , \end{aligned}$$

which attains its minimum at

$$\begin{aligned} r_0= \left( \frac{2s-1}{\theta } \right) ^{\frac{1}{\theta -2s+1}}\,. \end{aligned}$$
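Indeed, a direct computation gives

$$\begin{aligned} w'(r)=-(2s-1)r^{2s-2}+\theta r^{\theta -1}, \end{aligned}$$

which vanishes on \((0,+\infty )\) if and only if \(r^{\theta -2s+1}=\frac{2s-1}{\theta }\). Notice also that, since \(2s-1<\theta\), one has \(r_0<1\) and

$$\begin{aligned} w(r_0)=r_0^{2s-1}\left( r_0^{\theta -2s+1}-1\right) =r_0^{2s-1}\, \frac{2s-1-\theta }{\theta }<0, \end{aligned}$$

a fact that will be used below.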

Set

$$\begin{aligned} v(x)= {\left\{ \begin{array}{ll} w(|x|) &{}{\text {if }} \left| x\right| \le r_0\\ w(r_0) &{}{\text {if }} \left| x\right| > r_0. \end{array}\right. } \end{aligned}$$
(7.3)

We claim that there exists \({\bar{\rho }}={\bar{\rho }}(s,\left\| f\right\| _\infty )\) sufficiently small such that

$$\begin{aligned} {\mathcal {I}}_1^+ v(x) \ge \left\| f\right\| _\infty \quad \forall x\in B_{{\bar{\rho }}}(0)\backslash \left\{ 0\right\} . \end{aligned}$$
(7.4)

In order to show (7.4), we fix \(x \in B_{{\bar{\rho }}}(0){\setminus }\{0\}\), where \({\bar{\rho }} < r_0\) will be chosen later, and notice that, since \({\mathcal {I}}_1^+ v(x) \ge I_{{\hat{x}}} v(x)\) with \({\hat{x}}=x/\left| x\right| \), it is sufficient to estimate the integral in the radial direction; thus

$$\begin{aligned} I_{{\hat{x}}} v (x)&= C_s \int _{0}^{+\infty } \frac{\delta (v, x, \tau {\hat{x}})}{\tau ^{1+2s}} \, d\tau \\&= C_s \Big ( \int _{0}^{r_0 - \left| x\right| } \frac{\delta (w, x, \tau {\hat{x}})}{\tau ^{1+2s}}\, d\tau + \int _{r_0 - \left| x\right| }^{r_0 +\left| x\right| } \frac{w(|x-\tau {\hat{x}}|) +w(r_0)-2 w(|x|) }{\tau ^{1+2s}}\, d\tau \\&\quad +2 \int _{r_0 + \left| x\right| }^{+\infty } \frac{w(r_0) - w(|x|) }{\tau ^{1+2s}}\, d\tau \Big ). \end{aligned}$$

We now add and subtract the integral

$$\begin{aligned} C_s \int _{r_0 - \left| x\right| }^{+\infty } \frac{\delta (w, x, \tau {\hat{x}})}{\tau ^{1+2s}}\, d\tau , \end{aligned}$$

and as a result

$$\begin{aligned} I_{{\hat{x}}}v(x)=C_s (J_1+J_2+J_3), \end{aligned}$$

where

$$\begin{aligned} J_1= & {} \int _{0}^{+\infty } \frac{\delta (w, x, \tau {\hat{x}})}{\tau ^{1+2s}}\, d\tau =- \int _{0}^{+\infty } \frac{\delta (\left| x\right| ^{2s-1}, x, \tau {\hat{x}})}{\tau ^{1+2s}}\, d\tau + \int _{0}^{+\infty } \frac{\delta (\left| x\right| ^{\theta }, x, \tau {\hat{x}})}{\tau ^{1+2s}} \, d\tau , \\ J_2= & {} \int _{r_0 + \left| x\right| }^{+\infty } \frac{w(r_0) - w(|x-\tau {\hat{x}}|) }{\tau ^{1+2s}} \, d\tau \end{aligned}$$

and

$$\begin{aligned} J_3= \int _{r_0 -\left| x\right| }^{+\infty } \frac{w(r_0) - w(|x+\tau {\hat{x}}|) }{\tau ^{1+2s}}\, d\tau . \end{aligned}$$

Recall that

$$\begin{aligned} J_1= c_\theta \left| x\right| ^{\theta -2s}, \end{aligned}$$

where \(c_\theta >0\) as \(\theta > 2s-1\), see Lemma 3.6 in [7]. Moreover, using \(w(r_0)<0\),

$$\begin{aligned} J_2&= \int _{r_0 + \left| x\right| }^{+\infty } \frac{w(r_0)}{\tau ^{1+2s}}\,d\tau - \int _{r_0 + \left| x\right| }^{+\infty } \frac{w(|x-\tau {\hat{x}}|) }{\tau ^{1+2s}}\,d\tau \\&= \frac{1}{2s} w(r_0) (r_0 +\left| x\right| )^{-2s} +\int _{r_0 + \left| x\right| }^{+\infty } \frac{\left| \left| x\right| - \tau \right| ^{2s-1} - \left| \left| x\right| - \tau \right| ^\theta }{\tau ^{1+2s}}\,d\tau \\&\ge \frac{1}{2s} w(r_0) (r_0 +\left| x\right| )^{-2s} - \left| x\right| ^{\theta -2s} \int _{r_0/\left| x\right| +1}^{+\infty } \frac{\left| 1-\tau \right| ^\theta }{\tau ^{1+2s}}\,d\tau \\&\ge \frac{1}{2s} w(r_0) r_0^{-2s} - \left| x\right| ^{\theta -2s} \int _{r_0/{\bar{\rho }}+1}^{+\infty } \frac{\left| 1-\tau \right| ^\theta }{\tau ^{1+2s}}\,d\tau \\&\ge \frac{1}{2s} w(r_0) r_0^{-2s} - \left| x\right| ^{\theta -2s} \int _{r_0/{\bar{\rho }}+1}^{+\infty }\tau ^{\theta -1-2s}\,d\tau \\&= \frac{1}{2s} w(r_0) r_0^{-2s}-\frac{\left| x\right| ^{\theta -2s}}{2s-\theta }{\left( 1+\frac{r_0}{{\bar{\rho }}}\right) }^{\theta -2s}\,. \end{aligned}$$

Similarly, for \({\bar{\rho }}<\frac{r_0}{2}\)

$$\begin{aligned} J_3&= \int _{r_0- \left| x\right| }^{+\infty } \frac{w(r_0)}{\tau ^{1+2s}}\,d\tau - \int _{r_0- \left| x\right| }^{+\infty }\frac{w(|x+\tau {\hat{x}}|) }{\tau ^{1+2s}}\,d\tau \\&\ge \frac{1}{2s} w(r_0) (r_0-\left| x\right| )^{-2s} - \left| x\right| ^{\theta -2s} \int _{r_0/\left| x\right| - 1}^{+\infty }\frac{\left| 1+\tau \right| ^\theta }{\tau ^{1+2s}}\,d\tau \\&\ge \frac{1}{2s} w(r_0) (r_0-{\bar{\rho }})^{-2s} - \left| x\right| ^{\theta -2s} \int _{r_0/{\bar{\rho }}- 1}^{+\infty }\frac{\left| 1+\tau \right| ^\theta }{\tau ^{1+2s}}\,d\tau \\&\ge \frac{1}{2s} w(r_0) (r_0-{\bar{\rho }})^{-2s} - 2^\theta \left| x\right| ^{\theta -2s} \int _{r_0/{\bar{\rho }}- 1}^{+\infty }\tau ^{\theta -1-2s}\,d\tau \\&=\frac{1}{2s} w(r_0) (r_0-{\bar{\rho }})^{-2s}-\frac{2^\theta \left| x\right| ^{\theta -2s} }{2s-\theta }{\left( \frac{r_0}{{\bar{\rho }}}-1\right) }^{\theta -2s}\,. \end{aligned}$$

Summing up,

$$\begin{aligned} I_{{\hat{x}}}v(x)\ge & {} C_s \left| x\right| ^{\theta -2s} \Big ( c_\theta - \frac{1}{2s-\theta }{\left( 1+\frac{r_0}{{\bar{\rho }}}\right) }^{\theta -2s} -\frac{2^\theta }{2s-\theta }{\left( \frac{r_0}{{\bar{\rho }}}-1\right) }^{\theta -2s}\\&+\frac{1}{2s} {{\bar{\rho }}}^{2s-\theta } w(r_0) \left( r_0^{-2s}+ (r_0-{\bar{\rho }})^{-2s} \right) \Big ). \end{aligned}$$

Since the expression in parentheses tends to \(c_\theta >0\) as \({\bar{\rho }}\rightarrow 0\), while \(\left| x\right| ^{\theta -2s}\ge {\bar{\rho }}^{\theta -2s}\rightarrow +\infty \), we can pick \({\bar{\rho }}={\bar{\rho }}(s,\left\| f\right\| _\infty )\) sufficiently small such that

$$\begin{aligned} {\mathcal {I}}_1^+ v(x) \ge \left\| f\right\| _\infty {\text { in }} B_{{\bar{\rho }}}(0) {\setminus } \{0\}. \end{aligned}$$
(7.5)

This shows (7.4).

Let \(x_0, y_0 \in {\overline{\Omega }}\) with \(\left| x_0-y_0\right| < {\bar{\rho }}\) and take

$$\begin{aligned} v_{y_0}(x)=u(y_0) + L v(x-y_0) \quad x \in B_{{\bar{\rho }}}(y_0), \end{aligned}$$

where \(L>0\). We want to prove that there is \(L=L(\Omega ,\left\| u\right\| _\infty ,\left\| f\right\| _\infty ,s)\) sufficiently large such that

$$\begin{aligned} v_{y_0}(x_0)\le u(x_0). \end{aligned}$$
(7.6)

This readily implies (7.2) since \(v_{y_0}(x_0)\ge u(y_0)-L|x_0-y_0|^{2s-1}\) and \(x_0,y_0\) are arbitrary points of \({\overline{\Omega }}\) with \(\left| x_0-y_0\right| < {\bar{\rho }}\).

To obtain (7.6) we make use of the comparison principle, see Theorem 4.1, in \(\Omega \cap B_{{\bar{\rho }}}(y_0)\backslash \left\{ y_0\right\}\). By (7.5), if \(L\ge 1\) then

$$\begin{aligned} {\mathcal {I}}_1^+v_{y_0}(x) \ge \left\| f\right\| _\infty {\text { in }} B_{{\bar{\rho }}}(y_0) {\setminus } \{y_0\}, \end{aligned}$$

hence \(v_{y_0}\) is a subsolution of \({\mathcal {I}}_1^+v=f(x)\) in \(B_{{\bar{\rho }}}(y_0) {\setminus } \{y_0\}\). As far as the exterior boundary condition is concerned, first notice that by definition \(v_{y_0}(y_0)=u(y_0)\). Now let \(x \in {\mathbb {R}}^N\backslash B_{{\bar{\rho }}}(y_0)\). Since the function v(x) is radially decreasing it turns out that

$$\begin{aligned} v(x-y_0) \le - {\bar{\rho }}^{2s-1} + {\bar{\rho }}^\theta \end{aligned}$$

and, for

$$\begin{aligned} L\ge \frac{2\left\| u\right\| _\infty }{{{\bar{\rho }}}^{2s-1}-{{\bar{\rho }}}^\theta }, \end{aligned}$$
(7.7)

that

$$\begin{aligned} v_{y_0}(x)=u(y_0)+Lv(x-y_0) \le u(y_0)-L{\bar{\rho }}^{2s-1} + L{\bar{\rho }}^\theta \le u(y_0)-2 \left\| u\right\| _\infty \le u(x). \end{aligned}$$

It remains to prove the inequality \(v_{y_0}(x)\le u(x)\) for \(x \in \overline{B_{{\bar{\rho }}}(y_0)} \cap \Omega ^c\). For this we recall that by Lemma 6.5 there exists a positive constant \(C=C(\Omega ,\left\| f\right\| _\infty ,s)\) such that

$$\begin{aligned} u(y_0) \le C d(y_0)^s\le C|x-y_0|^s\,. \end{aligned}$$
(7.8)

Notice that the function \(r\in (0,+\infty )\mapsto r^{s-1} - r^{\theta - s}\) is decreasing, thus

$$\begin{aligned} r^{s-1} - r^{\theta - s}\ge {\bar{\rho }}^{s-1} - {\bar{\rho }}^{\theta - s}\quad \forall r\in (0,{\bar{\rho }}]. \end{aligned}$$
(7.9)

Using (7.9) with \(r=|x-y_0|\) and (7.8) we obtain, for \(x \in \overline{B_{{\bar{\rho }}}(y_0)} \cap \Omega ^c\), that

$$\begin{aligned} \begin{aligned} u(x)=0&\ge u(y_0) - C\left| x-y_0\right| ^s \\&\ge u(y_0) - L\left| x-y_0\right| ^{2s-1} + L \left| x-y_0\right| ^\theta =v_{y_0}(x) \end{aligned} \end{aligned}$$

provided

$$\begin{aligned} L\ge \frac{C}{{{\bar{\rho }}}^{s-1}-{{\bar{\rho }}}^{\theta -s}}\,. \end{aligned}$$
(7.10)

Summing up, by (7.7) and (7.10), if

$$\begin{aligned} L\ge \max \left\{ \frac{2\left\| u\right\| _\infty }{{\bar{\rho }}^{2s-1}-{\bar{\rho }}^\theta },\frac{C}{{\bar{\rho }}^{s-1}-{\bar{\rho }}^{\theta -s}},1\right\} , \end{aligned}$$

then by comparison we conclude that (7.6) holds, as we wanted to show. \(\square\)

Let us point out that, as in the local setting (see [6, Section 3]), the uniform convexity of \(\Omega\) was exploited in the proof of Proposition 7.1 only to get (7.8), hence to apply the comparison principle up to the boundary. Moreover, in order to obtain interior Hölder estimates it is in fact sufficient to assume that u is only a supersolution.

Proposition 7.2

Let \(\Omega\) be a bounded domain of \({\mathbb {R}}^N\), and let \(s > \frac{1}{2}\). Then:

  1. i)

    for any compact \(K\subset \Omega\) and any supersolution u of (7.1), there exists a positive constant \(C=C(K,\Omega ,\left\| u\right\| _\infty ,\left\| f\right\| _\infty , s)\) such that \(\left\| u\right\| _{C^{0,2s-1}(K)}\le C\);

  2. ii)

    any supersolution u which satisfies (6.3) is \((2s-1)\)-Hölder continuous in \({\overline{\Omega }}\).

In the next theorem, we obtain global Hölder equicontinuity of sequences of solutions with uniformly bounded right hand sides. We shall use it in the next section for the existence of a principal eigenfunction.

Theorem 7.3

Let \(s > \frac{1}{2}\), and let \(u_n\in C({\overline{\Omega }})\cap L^\infty ({\mathbb {R}}^N)\) be solutions of

$$\begin{aligned} {\left\{ \begin{array}{ll} {\mathcal {I}}^+_1u_n=f_n(x) &{} {\text { in }}\Omega \\ u_n=0 &{} {\text { in }}{\mathbb {R}}^N\backslash \Omega , \end{array}\right. } \end{aligned}$$

where the domain \(\Omega\) is uniformly convex and \(f_n\in C(\Omega )\) for any \(n\in \mathbb {N}\). Assume that there exists a positive constant D such that

$$\begin{aligned} \sup _{n\in \mathbb {N}}\left\| f_n\right\| _{L^\infty {(\Omega )}}\le D. \end{aligned}$$
(7.11)

Then there exists \({\tilde{D}}={\tilde{D}}(D,\Omega ,s)>0\) such that

$$\begin{aligned} \sup _{n\in \mathbb {N}}\left\| u_n\right\| _{C^{0,2s-1}(\mathbb {R}^N)}\le {\tilde{D}}. \end{aligned}$$
(7.12)

Proof

We start by showing that \(\sup _n\left\| u_n\right\| _{L^\infty ({\mathbb {R}}^N)}<+\infty\). Let R, just depending on \(\Omega\), be such that \(B_R(0)\supseteq \Omega\) and consider the function

$$\begin{aligned} \varphi (x)=\frac{D}{C_s\beta (1-s,s)}{\left( R^2-|x|^2\right) }^s_+. \end{aligned}$$

By Lemma 4.8, \(\varphi\) solves

$$\begin{aligned} {\left\{ \begin{array}{ll} {\mathcal {I}}^+_1\varphi =-D &{} {\text { in }}\Omega \\ \varphi \ge 0 &{} {\text { in }}{\mathbb {R}}^N\backslash \Omega . \end{array}\right. } \end{aligned}$$

For any \(n\in \mathbb {N}\), using (7.11), \(u_n\) satisfies

$$\begin{aligned} {\left\{ \begin{array}{ll} {\mathcal {I}}^+_1u_n\ge -D &{} {\text { in }}\Omega \\ u_n=0 &{} {\text { in }}{\mathbb {R}}^N\backslash \Omega . \end{array}\right. } \end{aligned}$$

Hence, by the comparison Theorem 4.1 we get

$$\begin{aligned} u_n(x)\le \varphi (x)\le \frac{DR^{2s}}{C_s\beta (1-s,s)} \quad \forall x\in \Omega . \end{aligned}$$
(7.13)

In a similar fashion we also obtain

$$\begin{aligned} u_n(x)\ge -\frac{DR^{2s}}{C_s\beta (1-s,s)} \quad \forall x\in \Omega . \end{aligned}$$
(7.14)

From (7.13)-(7.14) we infer that \(\sup _n\left\| u_n\right\| _{L^\infty ({\mathbb {R}}^N)}<+\infty\). Arguing as in the proof of Proposition 7.1, with the notation used there and with v as defined in (7.3), we can pick \({\bar{\rho }}={\bar{\rho }}(s,D)\) such that

$$\begin{aligned} {\mathcal {I}}^+_1v(x)\ge D\quad {\text { in }}B_{{\bar{\rho }}}(0)\backslash \left\{ 0\right\} . \end{aligned}$$

Moreover, by Lemma 6.5 there exists a positive constant \(C=C(\Omega ,D,s)\) such that

$$\begin{aligned} u_n(x) \le C d(x)^s\quad \forall x\in {\overline{\Omega }}. \end{aligned}$$

Hence, by taking

$$\begin{aligned} L\ge \max \left\{ \frac{2\sup _n\left\| u_n\right\| _\infty }{{\bar{\rho }}^{2s-1}-{\bar{\rho }}^\theta },\frac{C}{{\bar{\rho }}^{s-1}-{\bar{\rho }}^{\theta -s}},1\right\} \end{aligned}$$

we conclude that, for any \(n\in \mathbb {N}\) and any \(x,y\in {\overline{\Omega }}\) such that \(|x-y|\le {\bar{\rho }}\),

$$\begin{aligned} u_n(x)-u_n(y)\le L|x-y|^{2s-1}. \end{aligned}$$

This readily implies (7.12), since for \(|x-y|>{\bar{\rho }}\) one can simply use the uniform bound \(u_n(x)-u_n(y)\le 2\sup _n\left\| u_n\right\| _\infty \le 2\sup _n\left\| u_n\right\| _\infty \,{\bar{\rho }}^{1-2s}\,|x-y|^{2s-1}\). \(\square\)

8 Existence of a principal eigenfunction

The main result of this section is the following

Theorem 8.1

Let \(\Omega\) be a uniformly convex domain, and let \(s > \frac{1}{2}\). Then, there exists a positive function \(\psi _1 \in C^{0,2s-1}({\overline{\Omega }})\) such that

$$\begin{aligned} {\left\{ \begin{array}{ll} {\mathcal {I}}_1^+ \psi _1 + \mu _1^+ \psi _1=0 &{}{\text { in }} \Omega \\ \psi _1=0 &{}{\text { in }} {\mathbb {R}}^N {\setminus } \Omega . \end{array}\right. } \end{aligned}$$
(8.1)

For this, we first prove the solvability of the Dirichlet problem “below the principal eigenvalue”.

Theorem 8.2

Let \(\Omega\) be a uniformly convex domain, \(s > \frac{1}{2}\), and let \(f\in C(\Omega )\cap L^\infty (\Omega )\). Then there exists a solution \(u\in C^{0,2s-1}({\overline{\Omega }})\) of

$$\begin{aligned} {\left\{ \begin{array}{ll} {\mathcal {I}}^+_1u+\mu u=f(x) &{} {\text { in }}\Omega \\ u=0 &{} {\text { in }}{\mathbb {R}}^N\backslash \Omega , \end{array}\right. } \end{aligned}$$
(8.2)

in the following cases:

  1. (i)

    for any \(\mu\) if \(f\ge 0\)

  2. (ii)

    for any \(\mu <\mu ^+_1\).

In the case \(\mu <\mu ^+_1\) the solution is unique.

Proof

We can assume \(\mu >0\), since for \(\mu \le 0\) the arguments of the proof of Theorem 5.5 continue to apply to the operator \({\mathcal {I}}^\pm _k(\cdot )+\mu \cdot \).

  1. (i)

    Let \(w_1=0\) and define iteratively \(w_{n+1} \in C({\mathbb {R}}^N)\) as the solution, obtained by Theorem 5.5, of

    $$\begin{aligned} {\left\{ \begin{array}{ll} {\mathcal {I}}_1^+ w_{n+1} = f(x) - \mu w_n(x) &{}{\text { in }} \Omega \\ w_{n+1}=0 &{}{\text { in }} {\mathbb {R}}^N {\setminus } \Omega . \end{array}\right. } \end{aligned}$$
    (8.3)

    Note that the sequence \((w_n)_n\) is nonincreasing and in particular \(w_n \le 0\) for any n. Indeed, since \(f\ge 0\) then \(w_2\le 0=w_1\) by Theorem 4.1. Moreover assuming by induction \(w_{n+1}\le w_n\), one has

    $$\begin{aligned} {\mathcal {I}}_1^+ w_{n+2} = f- \mu w_{n+1} \ge f- \mu w_{n} = {\mathcal {I}}_1^+ w_{n+1}, \end{aligned}$$

    hence again by comparison \(w_{n+2} \le w_{n+1}\).

    We now show that \(\sup _n \left\| w_n\right\| _\infty < +\infty\). If this is true, then in view of Theorem 7.3, the sequence \((w_n)_n\) converges uniformly in \({\mathbb {R}}^N\) to \(u\in C^{0,2s-1}({\mathbb {R}}^N)\), and passing to the limit in (8.3) we conclude, exploiting Lemma 5.1. Let us assume by contradiction that \(\lim _{n\rightarrow +\infty } \left\| w_n\right\| _\infty = +\infty\), and let \(v_n=\frac{w_n}{\left\| w_n\right\| }\). Then

    $$\begin{aligned} {\left\{ \begin{array}{ll} {\mathcal {I}}_1^+ v_{n+1}=\frac{f(x)}{\left\| w_{n+1}\right\| } - \mu \frac{\left\| w_n\right\| }{\left\| w_{n+1}\right\| }v_n(x) &{}{\text { in }} \Omega \\ v_{n+1}=0 &{}{\text { in }} {\mathbb {R}}^N {\setminus } \Omega . \end{array}\right. } \end{aligned}$$

    Then, again by the Hölder estimate (7.12), the sequence \((v_n)_n\) converges uniformly, up to a subsequence, to a function \(v \le 0\). Since, up to extracting a further subsequence, \(\frac{\left\| w_n\right\| }{\left\| w_{n+1}\right\| } \rightarrow \tau \le 1\), we may pass to the limit to get

    $$\begin{aligned} {\left\{ \begin{array}{ll} {\mathcal {I}}_1^+ v +\mu \tau v =0 &{}{\text { in }} \Omega \\ v =0 &{}{\text { in }} {\mathbb {R}}^N {\setminus } \Omega . \end{array}\right. } \end{aligned}$$

    Now since \({\mathcal {I}}^-_1(-v)+\mu \tau (-v)=0\) in \(\Omega\), by Theorem 6.8 we infer that v in fact vanishes everywhere. This is in contradiction to \(\left\| v\right\| _\infty =1\).

  2. (ii)

    We first claim that there exists a nonnegative solution \({\overline{w}}\in C^{0,2s-1}({\mathbb {R}}^N)\) of

    $$\begin{aligned} {\left\{ \begin{array}{ll} {\mathcal {I}}_1^+{\overline{w}} + \mu {\overline{w}}=-\left\| f\right\| _\infty &{}{\text { in }} \Omega \\ {\overline{w}}=0 &{}{\text { in }} {\mathbb {R}}^N {\setminus } \Omega . \end{array}\right. } \end{aligned}$$
    (8.4)

    As above, we define \(w_1=0\) and \(w_{n+1}\) be the solution of

    $$\begin{aligned} {\left\{ \begin{array}{ll} {\mathcal {I}}_1^+ w_{n+1} = -\left\| f\right\| _\infty - \mu w_n(x) &{}{\text { in }} \Omega \\ w_{n+1}=0 &{}{\text { in }} {\mathbb {R}}^N {\setminus } \Omega . \end{array}\right. } \end{aligned}$$

    The sequence \((w_n)_n\) is nondecreasing. Using now that \(\mu <\mu ^+_1\), and arguing as in case (i), we also infer that \(\sup _n \left\| w_n\right\| _\infty < +\infty\). Then, by Theorem 7.3, \(w_n\) converges uniformly in \({\mathbb {R}}^N\) to a function \({\overline{w}}\in C^{0,2s-1}({\mathbb {R}}^N)\) which is a solution of (8.4).

For the general case, let us denote by \({\underline{w}}\) the solution of

$$\begin{aligned} {\left\{ \begin{array}{ll} {\mathcal {I}}_1^+{\underline{w}} + \mu {\underline{w}}=\left\| f\right\| _\infty &{}{\text { in }} \Omega \\ {\underline{w}}=0 &{}{\text { in }} {\mathbb {R}}^N {\setminus } \Omega . \end{array}\right. } \end{aligned}$$

obtained in i). Notice that \({\underline{w}}\le 0\le {\overline{w}}\).

Now let us define \(u_1= \underline{w}\) and let \(u_{n+1}\) be the solution of

$$\begin{aligned} {\left\{ \begin{array}{ll} {\mathcal {I}}_1^+ u_{n+1} = f(x) - \mu u_n &{}{\text { in }} \Omega \\ u_{n+1} =0 &{}{\text { in }} {\mathbb {R}}^N {\setminus } \Omega . \end{array}\right. } \end{aligned}$$

We want to show that \(\underline{w} \le u_n \le {\overline{w}}\). This is true for \(n=1\). Let us assume by induction that this holds true at level n, and notice that

$$\begin{aligned} {\mathcal {I}}_1^+ u_{n+1}\ge - \left\| f\right\| _\infty - \mu \overline{w}={\mathcal {I}}_1^+ {\overline{w}}\quad {\text { in }}\Omega \end{aligned}$$

and similarly

$$\begin{aligned} {\mathcal {I}}_1^+ u_{n+1}\le \left\| f\right\| _\infty - \mu \underline{w}={\mathcal {I}}_1^+ {\underline{w}}\quad {\text { in }}\Omega . \end{aligned}$$

Hence, by comparison we have \(\underline{w} \le u_{n+1} \le {\bar{w}}\). As a consequence, the sequence \((u_n)_n\) is bounded in \(C^{0,2s-1}({\mathbb {R}}^N)\) and up to a subsequence it converges uniformly to a function \(u\in C^{0,2s-1}({\mathbb {R}}^N)\) which is the desired solution.

It remains to show that (8.2) has at most one solution. For this notice that if u and v are, respectively, sub- and supersolution of \({\mathcal {I}}^+_1u+\mu u=f\) in \(\Omega\), then the difference \(w=u-v\) is a viscosity subsolution of

$$\begin{aligned} {\mathcal {I}}^+_1w+\mu w=0\quad {\text { in }}\Omega . \end{aligned}$$

This easily follows if at least one of u and v is in \(C^2(\Omega )\). If instead u and v are merely semicontinuous, then, using the doubling variable technique as in the proof of Theorem 4.1 with minor changes, we obtain the result. Hence, if \(u_1\) and \(u_2\) are solutions of (8.2) then \(w=u_1-u_2\) solves

$$\begin{aligned} {\left\{ \begin{array}{ll} {\mathcal {I}}^+_1w+\mu w\ge 0 &{} {\text { in }}\Omega \\ w=0 &{} {\text { in }}{\mathbb {R}}^N\backslash \Omega . \end{array}\right. } \end{aligned}$$

By Theorem 6.8, we infer that \(u_1\le u_2\). Reversing the roles of \(u_1\) and \(u_2\) we conclude that \(u_1=u_2\). \(\square\)

We are now in a position to give the proof of Theorem 8.1.

Proof of Theorem 8.1

In view of Theorem 8.2, for any \(n\in \mathbb {N}\) there exists a solution \(w_n\in C^{0,2s-1}({\overline{\Omega }})\) of

$$\begin{aligned} {\left\{ \begin{array}{ll} {\mathcal {I}}_1^+ w_n +\left( \mu _1^+ -\frac{1}{n} \right) w_n=-1 &{}{\text { in }} \Omega \\ w_n>0 &{}{\text { in }} \Omega \\ w_n =0 &{}{\text { in }} {\mathbb {R}}^N {\setminus } \Omega . \end{array}\right. } \end{aligned}$$

We claim that \(\sup _n \left\| w_n\right\| =+\infty\). If not, we can pick \(j\in \mathbb {N}\) such that \(j\ge 2\sup _n \left\| w_n\right\|\). Hence, \(w_j\) solves

$$\begin{aligned} {\left\{ \begin{array}{ll} {\mathcal {I}}_1^+ w_j +\left( \mu _1^+ +\frac{1}{j} \right) w_j\le 0 &{}{\text { in }} \Omega \\ w_j>0 &{}{\text { in }} \Omega \\ w_j =0 &{}{\text { in }} {\mathbb {R}}^N {\setminus } \Omega . \end{array}\right. } \end{aligned}$$

This contradicts the maximality of \(\mu _1^+\), and proves that \(\sup _n \left\| w_n\right\| =+\infty\). Up to a subsequence, we may assume \(\lim _n \left\| w_n\right\| = +\infty\), and we can introduce the functions \(z_n=\frac{w_n}{\left\| w_n\right\| }\), which turn out to be solutions of

$$\begin{aligned} {\left\{ \begin{array}{ll} {\mathcal {I}}_1^+ z_n + \left( \mu _1^+ -\frac{1}{n}\right) z_n = - \frac{1}{\left\| w_n\right\| }&{}{\text { in }} \Omega \\ z_n =0 &{}{\text { in }} {\mathbb {R}}^N {\setminus } \Omega . \end{array}\right. } \end{aligned}$$

Using the estimate (7.12), the sequence \((z_n)_n\) converges uniformly, up to a subsequence, to a function \(\psi _1\in C^{0,2s-1}({\overline{\Omega }})\) which is a solution of (8.1). Moreover, \(\psi _1 \ge 0\) in \(\Omega\) by construction and \(\left\| \psi _1 \right\| _\infty =1\). By the strong minimum principle, see Theorem 4.3-iii), we conclude that \(\psi _1>0\) in \(\Omega\). \(\square\)