1 Introduction and main results

Let \(N \in {\mathbb {N}}\) and \(0< 2s < N\), for some \(s \in (0,1)\), and let \(\Omega \subset {\mathbb {R}}^N\) be a bounded open set with \(C^2\) boundary. The goal of the present paper is to analyze the variational problem of minimizing, for a given \(a \in C({\overline{\Omega }})\), the quotient functional

$$\begin{aligned} {\mathcal {S}}_{a}[u]:= \frac{ \int _{{\mathbb {R}}^N} |{(-\Delta )^{s/2}}u|^2 \mathop {}\!\textrm{d}y + \int _\Omega a(y) u(y)^2 \mathop {}\!\textrm{d}y}{\Vert u\Vert _{L^{\frac{2N}{N-2s}}(\Omega )}^2} \end{aligned}$$
(1.1)

over functions in the space

$$\begin{aligned} {{\widetilde{H}}^s(\Omega )}:= \left\{ u \in H^s({\mathbb {R}}^N) : \, u \equiv 0 \quad \text { on } {\mathbb {R}}^N \setminus \Omega \right\} , \end{aligned}$$
(1.2)

where \(u \in H^s({\mathbb {R}}^N)\) iff

$$\begin{aligned} \Vert u \Vert _{L^2({\mathbb {R}}^N)} + \left( \int _{{\mathbb {R}}^N} |{(-\Delta )^{s/2}}u|^2 \mathop {}\!\textrm{d}y \right) ^{1/2} < \infty , \end{aligned}$$
(1.3)

and the fractional Laplacian \({(-\Delta )^{s}}u\) is defined for any \(u \in H^s({\mathbb {R}}^N)\) through the Fourier representation

$$\begin{aligned} (-\Delta )^s u = {\mathcal {F}}^{-1} (|\xi |^{2s} {\mathcal {F}} u). \end{aligned}$$
(1.4)

We also recall the singular integral representation of the fractional Laplacian (see [12, 26]):

$$\begin{aligned} (-\Delta )^s u(x):= C_{N,s} P.V. \int _{{\mathbb {R}}^N} \frac{u(x) - u(y)}{|x-y|^{N+2s}} \mathop {}\!\textrm{d}y, \end{aligned}$$
(1.5)

where

$$\begin{aligned} C_{N,s} := \frac{s 2^{2s} \Gamma \left( \frac{N+2s}{2}\right) }{\pi ^{N/2} \Gamma (1-s)}. \end{aligned}$$
(1.6)

The associated infimum,

$$\begin{aligned} S(a):= \inf \left\{ {\mathcal {S}}_{a}[u] : \, u \in {{\widetilde{H}}^s(\Omega )}\right\} , \end{aligned}$$
(1.7)

is to be compared with the number \(S:= S_{N,s}:= S(0)\), which is equal to the best constant in the fractional Sobolev embedding

$$\begin{aligned} \int _{{\mathbb {R}}^N} |{(-\Delta )^{s/2}}u|^2 \mathop {}\!\textrm{d}y \ge S \Vert u\Vert _{L^\frac{2N}{N-2s}({\mathbb {R}}^N)}^2, \end{aligned}$$
(1.8)

given by

$$\begin{aligned} S_{N,s}:= 2^{2s} \pi ^s \frac{\Gamma \left( \frac{N+2s}{2}\right) }{\Gamma \left( \frac{N-2s}{2}\right) } \left( \frac{\Gamma (N/2)}{\Gamma (N)}\right) ^{2s/N}, \end{aligned}$$
(1.9)

where \(\Gamma (x):= \int _0^\infty t^{x-1} e^{-t} \mathop {}\!\textrm{d}t\) denotes the Euler Gamma function.
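The constants in (1.6) and (1.9) are straightforward to evaluate numerically. The following minimal sketch (using Python with scipy, an assumption of this illustration and not used anywhere in the paper) does so for a sample pair (N, s) in the range \(2s< N < 4s\) studied below.

```python
# Numerical evaluation (illustration only) of C_{N,s} from (1.6) and S_{N,s} from (1.9).
import math
from scipy.special import gamma

def C_Ns(N, s):
    # C_{N,s} = s * 2^{2s} * Gamma((N+2s)/2) / (pi^{N/2} * Gamma(1-s)), cf. (1.6)
    return s * 2**(2 * s) * gamma((N + 2 * s) / 2) / (math.pi**(N / 2) * gamma(1 - s))

def S_Ns(N, s):
    # S_{N,s} = 2^{2s} * pi^s * Gamma((N+2s)/2)/Gamma((N-2s)/2) * (Gamma(N/2)/Gamma(N))^{2s/N}, cf. (1.9)
    return (2**(2 * s) * math.pi**s * gamma((N + 2 * s) / 2) / gamma((N - 2 * s) / 2)
            * (gamma(N / 2) / gamma(N))**(2 * s / N))

print(C_Ns(3, 0.9), S_Ns(3, 0.9))   # N = 3, s = 0.9 satisfies 2s < N < 4s
```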

We note that the embedding \({{\widetilde{H}}^s(\Omega )}\hookrightarrow L^{\frac{2N}{N-2s}}(\Omega )\) holds, and that the associated best constant is in fact independent of \(\Omega \) and equal to the best full-space Sobolev constant \(S_{N,s}\). This follows, e.g., from the computations in [42]. An alternative proof of this fact is provided by Theorem 2.1 below.

In the classical case \(s = 1\), problem (1.7) was first studied in the famous paper [10] by Brezis and Nirenberg, who were interested in obtaining positive solutions to the associated elliptic equation. One of the main findings of that paper is that the behavior of (1.7) depends on the space dimension N in a rather striking way. Indeed, when \(N \ge 4\), one has \(S(a) < S\) if and only if \(a(x) < 0\) for some \(x \in \Omega \). On the other hand, when \(N = 3\), one has \(S(a) = S\) whenever \(\Vert a\Vert _\infty \) is small enough, leaving open the question of characterizing the cases \(S(a) < S\) in terms of a. In [20], Druet proved that, for \(N=3\), the following equivalence holds:

$$\begin{aligned} S(a)< S \qquad \iff \qquad \phi _a(x) < 0 \quad \text { for some } x \in \Omega , \end{aligned}$$
(1.10)

where \(\phi _a(x)\) denotes the Robin function associated to a (see (1.11) below). This answered positively a conjecture previously formulated by Brezis in [9].

For a fractional power \(s \in (0,1)\) and assuming \(a = - \lambda \) for some constant \(\lambda >0\), Brezis–Nirenberg type results have been obtained by Servadei and Valdinoci:

  1. (i)

    In [42], they proved that, for \(N \ge 4s\), \(S(-\lambda ) < S\) whenever \(\lambda > 0\);

  2. (ii)

    In [40], they proved that, for \(2s< N < 4s\), there is \(\lambda _{s} \in (0, \lambda _{1,s})\) (where \(\lambda _{1,s}\) is the first Dirichlet eigenvalue of \((-\Delta )^s\)) such that for every \(\lambda \in (\lambda _{s}, \lambda _{1,s})\), one has \(S(-\lambda ) < S\).

In this paper, we shall exclusively be concerned with the low-dimensional range \(2s< N < 4s\). This is the natural replacement of the classical case \(N=3\), \(s=1\), as indicated by the results above. One may also notice that when \(2s <N\), the Green’s function for \((-\Delta )^s\) on \({\mathbb {R}}^N\) behaves like \(G(x,y) \sim |x-y|^{-N+2s}\) near the diagonal and thus lies in \(L^2_\text {loc}({\mathbb {R}}^N)\) precisely if \(N < 4s\), compare [31].
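Indeed, in polar coordinates around a point x,

$$\begin{aligned} \int _{B_r(x)} |x-y|^{-2(N-2s)} \mathop {}\!\textrm{d}y = |{\mathbb {S}}^{N-1}| \int _0^r \rho ^{N-1-2(N-2s)} \mathop {}\!\textrm{d}\rho < \infty \qquad \iff \qquad N < 4s. \end{aligned}$$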

A notion central to what follows is that of a critical function a, which was introduced by Hebey and Vaugon in [30] for \(s = 1\) and readily generalizes to the fractional setting. Indeed, the following definition is naturally suggested by the behavior of S(a) just described.

Definition 1.1

(Critical function) Let \(a \in C({\overline{\Omega }})\). We say that a is critical if \(S(a) = S\) and \(S({\tilde{a}}) < S(a)\) for every \({\tilde{a}} \in C({\overline{\Omega }})\) with \({\tilde{a}} \le a\) and \({\tilde{a}} \not \equiv a\).

When \(N \ge 4s\), the result of [42] implies that the only critical potential is \(a \equiv 0\). For this case, or more generally for \(N > 2s\) with \(a \equiv 0\), the recent literature is rather rich in refined results going beyond [42]. Notably, in [15, 16], the authors proved the fractional counterpart of some conjectures by Brezis and Peletier [11] concerning the blow-up asymptotics of minimizers to the problem \(S(-\varepsilon )\) and a related problem with subcritical exponent \(p-\varepsilon \) as \(\varepsilon \rightarrow 0\). For the fractional subcritical problem, we also mention the result on Gamma convergence from [36]. In the classical case \(s = 1\), such results are due to Han [29] and Rey [37, 38]. Corresponding existence results, including results for non-minimizing multi-bubble solutions, are given in [15, 16], as well as in [18, 28].

In contrast to this, in the more challenging setting of dimension \(2s< N < 4s\), critical functions can have all possible shapes and are necessarily non-zero, compare [20] and Corollary 1.3 below. In this setting, and notably in the presence of a critical function, results of Han–Rey type as just discussed are much more scarce in the literature. Even in the local case \(s=1\) and \(N=3\), the conjecture of Brezis and Peletier (see [11, Conjecture 3.(ii)]) which involves a (constant) critical function has only been proved recently in [24]. The purpose of the present paper is to treat the analogous question for low dimensions \(2s< N < 4s\) in the fractional setting.

1.1 Main results

For all of our results, a crucial role is played by the Green’s function of \({(-\Delta )^{s}}+ a\), which we introduce now. For a function \(a \in C({\overline{\Omega }})\) such that \((-\Delta )^s + a\) is coercive, i.e.

$$\begin{aligned} \int _{{\mathbb {R}}^{N}} |{(-\Delta )^{s/2}}v|^2 \mathop {}\!\textrm{d}y + \int _\Omega a v^2 \mathop {}\!\textrm{d}y \ge c \int _{{\mathbb {R}}^N} |{(-\Delta )^{s/2}}v|^2 \mathop {}\!\textrm{d}y \end{aligned}$$

for some \(c > 0\) and every \(v \in {{\widetilde{H}}^s(\Omega )}\), we define \(G_a: \Omega \times {\mathbb {R}}^N \rightarrow {\mathbb {R}}\) as the unique function such that for every fixed \(x \in \Omega \)

$$\begin{aligned} {\left\{ \begin{array}{ll} ((- \Delta )^s + a) G_a(x, \cdot ) = \gamma _{N,s} \delta _x &{}\quad \text {in } \Omega , \\ G_a(x, \cdot ) = 0 &{} \quad \text {on } {\mathbb {R}}^N \setminus \Omega . \end{array}\right. } \end{aligned}$$

Here, we set \(\gamma _{N,s} = \frac{2^{2\,s} \pi ^{N/2} \Gamma (s)}{\Gamma (\frac{N-2\,s}{2})}\), so that \({(-\Delta )^{s}}|y|^{-N+2\,s} = \gamma _{N,s} \delta _0\) on \({\mathbb {R}}^N\). Thus, this choice of \(\gamma _{N,s}\) ensures that we can write \(G_a\) as a sum of its singular part and its regular part \(H_a(x,y)\) as follows:

$$\begin{aligned} G_a(x,y) = \frac{1}{|x-y|^{N-2s}} - H_a(x,y). \end{aligned}$$

The function \(H_a\) is continuous up to the diagonal, see e.g. Lemma A.3. Therefore, we may define the Robin function

$$\begin{aligned} \phi _a(x):= H_a(x,x), \qquad x \in \Omega . \end{aligned}$$
(1.11)

We prove several properties of the Green’s functions \(G_a\) in Appendix A.

Our first main result is the following extension of Druet’s theorem from [20] to the fractional case.

Theorem 1.2

(Characterization of criticality) Let \(2s< N < 4s\) and let \(a \in C({\overline{\Omega }})\) be such that \((-\Delta )^s + a\) is coercive. The following properties are equivalent.

  1. (i)

    There is \(x \in \Omega \) such that \(\phi _a(x) < 0\).

  2. (ii)

    \(S(a) < S\).

  3. (iii)

    S(a) is achieved by some function \(u \in {{\widetilde{H}}^s(\Omega )}\).

As an immediate corollary, we can characterize critical functions in terms of their Robin function.

Corollary 1.3

Let a be critical. Then \(\inf _{x \in \Omega } \phi _a(x) = 0\).

The implications \((i) \Rightarrow (ii)\) and \((ii) \Rightarrow (iii)\) in Theorem 1.2 are well-known: indeed, \((i) \Rightarrow (ii)\) easily follows by the proper choice of test functions thanks to Theorem 2.1 below; the implication \((ii) \Rightarrow (iii)\) is the fractional version of the seminal observation in [10] (see [42, Theorem 2]).

Our proof of \((iii) \Rightarrow (ii)\) is the content of Proposition 3.1 below and follows [20, Step 1]. The most involved proof is that of the implication \((ii) \Rightarrow (i)\), which we give in Sect. 4. We adapt the strategy developed by Esposito in [22], who gave an alternative proof of that implication for \(s=1\). His approach is based on an expansion of the energy functional \({\mathcal {S}}_{a - \varepsilon }[u_\varepsilon ]\) as \(\varepsilon \rightarrow 0\), where a is critical as in Definition 1.1 and \(u_\varepsilon \) is a minimizer of \(S(a - \varepsilon )\).

In fact, by using the techniques applied in the recent work [25] for \(s = 1\), we are even able to push this expansion of \({\mathcal {S}}_{a - \varepsilon }[u_\varepsilon ]\) further by one order of \(\varepsilon \) and derive precise asymptotics of the energy \(S(a -\varepsilon )\) and of the sequence \((u_\varepsilon )\).

To give a precise statement of our results, let us fix some more assumptions and notations. We denote the zero set of the Robin function \(\phi _a\) by

$$\begin{aligned} {\mathcal {N}}_a:= \{ x \in \Omega : \phi _a(x) = 0\}. \end{aligned}$$

It follows from Theorem 1.2 that \(\inf _\Omega \phi _a(x) = 0\) if and only if a is critical. In particular, \({\mathcal {N}}_a\) is not empty if a is critical.

We will consider perturbations of a of the form \(a + \varepsilon V\), with non-constant \(V \in L^\infty (\Omega )\). For such V, following [25], we let

$$\begin{aligned} Q_V(x):= \int _\Omega V(y) G_a(x,y)^2 \mathop {}\!\textrm{d}y \end{aligned}$$

and

$$\begin{aligned} {\mathcal {N}}_a(V):= \{ x \in {\mathcal {N}}_a : \, Q_V(x) < 0\}. \end{aligned}$$

Finally, we shall assume that \(\Omega \) has \(C^2\) boundary and that

$$\begin{aligned} a \in C({{\bar{\Omega }}}) \cap C^1(\Omega ), \ a(x) < 0 \quad \text { for all } x \in {\mathcal {N}}_a. \end{aligned}$$
(1.12)

By Corollary 2.2, we have a priori that \(a(x) \le 0\) on \({\mathcal {N}}_a\). Also, (1.12) is clearly satisfied in the canonical case of constant a, since criticality of a forces \(a < 0\) in that case. In this sense we may say that assumption (1.12) is not severe. The \(C^2\) assumption on \(\Omega \) ensures that the concentration point \(x_0\) of \(u_\varepsilon \) does not lie on the boundary; see Proposition 3.6 and Lemma A.1 below.

We point out that with our methods we are able to prove the following theorems only for the restricted dimensional range \(\tfrac{8}{3}s< N < 4s\), which enters in Sect. 5. We discuss this assumption in some more detail after the statement of Theorem 1.6 below.

The following theorem describes the asymptotics of the perturbed minimal energy \(S(a + \varepsilon V)\) as \(\varepsilon \rightarrow 0+\). It shows, in particular, the non-obvious fact that the condition \({\mathcal {N}}_a(V) \ne \emptyset \) is sufficient to have \(S(a+ \varepsilon V) < S\).

Theorem 1.4

(Energy asymptotics) Let \( \tfrac{8}{3} s< N < 4s\). Let us assume that \({\mathcal {N}}_a(V) \ne \emptyset \). Then, \(S(a+\varepsilon V) < S\) for all \(\varepsilon >0\) and

$$\begin{aligned} \lim _{\varepsilon \rightarrow 0^+} \frac{S(a+\varepsilon V) -S}{\varepsilon ^{\frac{2s}{4s-N}}}&= - \sigma _{N,s} \sup _{x \in {\mathcal {N}}_a(V)} \frac{|Q_V(x)|^\frac{2s}{4s-N}}{|a(x)|^\frac{N-2s}{4s-N}} , \end{aligned}$$

where \(\sigma _{N,s} > 0\) is a dimensional constant given explicitly by

$$\begin{aligned} \sigma _{N,s} = A_{N,s}^{-\frac{N-2s}{N}} \left( \alpha _{N,s} + c_{N,s} d_{N,s} b_{N,s} \right) ^{-\frac{N-2s}{4s-N}} \left( \frac{N-2s}{2s} \right) ^\frac{2s}{4s-N} \frac{4s - N}{N-2s}. \end{aligned}$$

The constants \(A_{N,s}\), \(\alpha _{N,s}\), \(c_{N,s}\), \(d_{N,s}\), and \(b_{N,s}\) are given explicitly in Lemma B.5 below.

On the other hand, when \({\mathcal {N}}_a(V) = \emptyset \), the next theorem shows that the asymptotics become trivial provided \(Q_V > 0\) on \({\mathcal {N}}_a\). Only in the case when \(\min _{{\mathcal {N}}_a} Q_V = 0\) do we not obtain the precise leading term of \(S(a + \varepsilon V) - S\).

Theorem 1.5

(Energy asymptotics, degenerate case) Let \( \tfrac{8}{3} s< N < 4\,s\). Let us assume that \({\mathcal {N}}_a(V) = \emptyset \). Then \(S(a+\varepsilon V) = S + o(\varepsilon ^2)\) as \(\varepsilon \rightarrow 0^+\). If, in addition, \(Q_V(x) >0\) for all \(x \in {\mathcal {N}}_a\) then \(S(a+\varepsilon V) = S\) for sufficiently small \(\varepsilon >0\).

For a potential V such that \({\mathcal {N}}_a(V) \ne \emptyset \), and thus \(S(a + \varepsilon V) < S\) by Theorem 1.4, a minimizer \(u_\varepsilon \) of \(S(a + \varepsilon V)\) exists by Theorem 1.2. We now turn to studying the asymptotic behavior of the sequence \((u_\varepsilon )\). In fact, since our methods are purely variational, we do not need to require that the \(u_\varepsilon \) satisfy a corresponding equation and we can equally well treat a sequence of almost minimizers in the sense of (1.17) below.

Since the functional \({\mathcal {S}}_{a}\) is merely a perturbation of the standard Sobolev quotient functional, it is not surprising that, to leading order, the sequence \(u_\varepsilon \) approaches the family of functions

$$\begin{aligned} U_{x, \lambda }(y) = \left( \frac{\lambda }{1 + \lambda ^2|x-y|^2} \right) ^\frac{N-2s}{2}, \qquad x \in {\mathbb {R}}^N, \quad \lambda > 0. \end{aligned}$$
(1.13)

The \(U_{x, \lambda }\) are precisely the optimizers of the fractional Sobolev inequality on \({\mathbb {R}}^N\)

$$\begin{aligned} \Vert (-\Delta )^{s/2} u\Vert _{L^2({\mathbb {R}}^N)}^2 \ge S_{N,s} \Vert u\Vert _{L^{\frac{2N}{N-2s}}({\mathbb {R}}^N)}^2 \end{aligned}$$
(1.14)

and satisfy the equation

$$\begin{aligned} (-\Delta )^s U_{x, \lambda }(y) = c_{N,s} U_{x, \lambda }(y)^\frac{N+2s}{N-2s} \end{aligned}$$
(1.15)

with \(c_{N,s} > 0\) given explicitly in Lemma B.5.
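As a quick sanity check on (1.13) (a numerical sketch only, relying on scipy and not used anywhere in the arguments), one can verify by radial quadrature that \(\int _{{\mathbb {R}}^N} U_{x, \lambda }^p \mathop {}\!\textrm{d}y\) is independent of \(\lambda \); the common value \(\int _{{\mathbb {R}}^N} U_{0,1}^p \mathop {}\!\textrm{d}y = \pi ^{N/2} \Gamma (N/2)/\Gamma (N)\) is the quantity through which \(A_{N,s}\) enters Step 1 of the proof of Theorem 2.1 below.

```python
# Scale invariance of the L^p norm of the bubbles (1.13): a numerical illustration.
import math
from scipy.integrate import quad
from scipy.special import gamma

N, s = 3, 0.9                          # sample values with 8s/3 < N < 4s
p = 2 * N / (N - 2 * s)

def bubble_Lp_p(lam):
    # \int_{R^N} U_{0,lam}^p dy = |S^{N-1}| \int_0^infty (lam / (1 + lam^2 r^2))^{p(N-2s)/2} r^{N-1} dr
    surface = 2 * math.pi**(N / 2) / gamma(N / 2)      # |S^{N-1}|
    integrand = lambda r: (lam / (1 + lam**2 * r**2))**(p * (N - 2 * s) / 2) * r**(N - 1)
    value, _ = quad(integrand, 0, math.inf)
    return surface * value

print(bubble_Lp_p(1.0), bubble_Lp_p(10.0))              # equal, by scale invariance
print(math.pi**(N / 2) * gamma(N / 2) / gamma(N))       # closed form of the common value
```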

Since we are working on the bounded set \(\Omega \), the first refinement of the approximation consists in ‘projecting’ the functions \(U_{x, \lambda }\) to \({{\widetilde{H}}^s(\Omega )}\). That is, we consider the unique function \(PU_{x, \lambda }\in {{\widetilde{H}}^s(\Omega )}\) satisfying

$$\begin{aligned} {\left\{ \begin{array}{ll} (- \Delta )^s PU_{x, \lambda } = (- \Delta )^s U_{x, \lambda } &{} \quad \text {in } \Omega , \\ PU_{x, \lambda }= 0 &{} \quad \text {on } {\mathbb {R}}^N \setminus \Omega \end{array}\right. } \end{aligned}$$
(1.16)

in the weak sense, that is,

$$\begin{aligned} \int _{{\mathbb {R}}^N} {(-\Delta )^{s/2}}PU_{x, \lambda }{(-\Delta )^{s/2}}\eta \mathop {}\!\textrm{d}y = \int _{{\mathbb {R}}^N} {(-\Delta )^{s}}U_{x, \lambda }\eta \mathop {}\!\textrm{d}y = c_{N,s} \int _\Omega U_{x, \lambda }^\frac{N+2s}{N-2s} \eta \mathop {}\!\textrm{d}y \end{aligned}$$

for every \(\eta \in {{\widetilde{H}}^s(\Omega )}\).

Finally, we introduce the space

$$\begin{aligned} T_{x, \lambda }= \text {span} \left\{ PU_{x, \lambda }, \partial _{\lambda }PU_{x, \lambda }, \{\partial _{x_i}PU_{x, \lambda }\}_{i=1}^N \right\} \subset {{\widetilde{H}}^s(\Omega )}\end{aligned}$$

and denote by \(T_{x, \lambda }^\perp \subset {{\widetilde{H}}^s(\Omega )}\) its orthogonal complement in \({{\widetilde{H}}^s(\Omega )}\) with respect to the scalar product \((u,v):= \int _{{\mathbb {R}}^N} {(-\Delta )^{s/2}}u {(-\Delta )^{s/2}}v \mathop {}\!\textrm{d}y\). Moreover, let us denote by \(\Pi _{x, \lambda }\) and \(\Pi _{x, \lambda }^\bot \) the projections onto \(T_{x, \lambda }\) and \(T_{x, \lambda }^\bot \) respectively.

Then we have the following result.

Theorem 1.6

(Concentration of almost-minimizers) Let \( \tfrac{8}{3} s< N < 4\,s\). Suppose that \((u_\varepsilon ) \subset {{\widetilde{H}}^s(\Omega )}\) is a sequence such that

$$\begin{aligned} \lim _{\varepsilon \rightarrow 0} \frac{{\mathcal {S}}_{a+\varepsilon V}[u_\varepsilon ] - S(a+\varepsilon V)}{S - S(a+\varepsilon V)} = 0 \quad \text { and } \quad \int _\Omega u_\varepsilon ^p \mathop {}\!\textrm{d}y = \int _{{\mathbb {R}}^N} U_{0,1}^p \mathop {}\!\textrm{d}y. \end{aligned}$$
(1.17)

Then there exist sequences \((x_\varepsilon ) \subset \Omega \), \((\lambda _\varepsilon ) \subset (0, \infty )\), \((w_\varepsilon ) \subset T_{x_\varepsilon , \lambda _\varepsilon }^\bot \), and \((\alpha _\varepsilon ) \subset {\mathbb {R}}\) such that, up to extraction of a subsequence,

$$\begin{aligned} u_\varepsilon = \alpha _\varepsilon \left( PU_{x_\varepsilon , \lambda _\varepsilon } + \lambda _\varepsilon ^{-\frac{N-2s}{2}} \Pi ^\perp _{x_\varepsilon ,\lambda _\varepsilon }\left( H_0(x_\varepsilon , \cdot ) - H_a(x_\varepsilon , \cdot ) \right) + r_\varepsilon \right) . \end{aligned}$$
(1.18)

Moreover, as \(\varepsilon \rightarrow 0\), we have

$$\begin{aligned}&x_\varepsilon \rightarrow x_0 \ \text { for some }x_0 \in {\mathcal {N}}_a(V)\text { such that } \quad \frac{|Q_V(x_0)|^{\frac{2s}{4s-N}}}{|a(x_0)|^{\frac{N-2s}{4s-N}}} = \sup _{y \in {\mathcal {N}}_a(V)} \frac{|Q_V(y)|^{\frac{2s}{4s-N}}}{|a(y)|^{\frac{N-2s}{4s-N}}}, \\&\phi _a(x_\varepsilon ) = o(\varepsilon ), \\&\lim _{\varepsilon \rightarrow 0} \varepsilon ^\frac{1}{4s-N} \lambda _\varepsilon = \left( \frac{2s (\alpha _{N,s} + c_{N,s} d_{N,s} b_{N,s} ) |a(x_0)| }{(N-2s) |Q_V(x_0)| } \right) ^\frac{1}{4s-N} , \\&\alpha _\varepsilon = \xi + {\mathcal {O}}\left( \varepsilon ^\frac{N-2s}{4s-N}\right) \text { for some } \xi \in \{\pm 1\}. \end{aligned}$$

Finally, \(r_\varepsilon \in T^\perp _{x_\varepsilon ,\lambda _\varepsilon }\) and \(\Vert (-\Delta )^{\frac{s}{2}} r_\varepsilon \Vert _{L^2({\mathbb {R}}^N)}^2 = o\left( \varepsilon ^{\frac{2s}{4s-N}}\right) \).

The constants \(\alpha _{N,s}\), \(c_{N,s}\), \(d_{N,s}\), and \(b_{N,s}\) are given explicitly in Lemma B.5.

Remark 1.7

Since a is critical and \(\phi _a(x_0) = 0\), \(x_0\) is in particular a global minimum of \(\phi _a\). Thus, we obtain the commonly found necessary condition \(\nabla \phi _a(x_0) = 0\), provided that \(\phi _a\) is differentiable. We strongly expect this to be the case for smooth enough a, with a proof along the lines of Lemma A.3 (see also [24, Sect. B.2]), but prefer not to go into these details here.

Theorem 1.6 should be seen as the low-dimensional counterpart of [16, Theorems 1.1 and 1.2], which concerns \(N > 4s\). The decisive additional complication to be overcome in our case is the presence of a non-zero critical function a. More concretely, the coefficient \(\phi _a(x)\) of the subleading term of the energy expansion vanishes due to criticality of a (compare Theorem 2.1 and Lemma 5.5). As a consequence, it is only after further refining the expansion that we are able to conclude the desired information about the concentration behavior of the sequence \(u_\varepsilon \).

In the same vein, the energy expansions from Theorem 1.4 are harder to obtain than their analogues in higher dimensions \(N \ge 4\,s\). Indeed, for \(N > 4s\) we have

$$\begin{aligned} S(\varepsilon V) = S_{N,s} - {\tilde{c}}_{N,s} \sup _{ \{x \in \Omega : \, V(x) < 0\}} \phi _0(x)^{-\frac{2s}{N-4s}} |V(x)|^\frac{N-2s}{N-4s} \varepsilon ^\frac{N-2s}{N-4s} + o\left( \varepsilon ^\frac{N-2s}{N-4s}\right) , \end{aligned}$$
(1.19)

where \({\tilde{c}}_{N,s} > 0\) is some dimensional constant. In this case, a sharp upper bound on \(S(\varepsilon V)\) can already be derived from testing \({\mathcal {S}}_{\varepsilon V}\) against the family of functions \(PU_{x, \lambda }\). In contrast, for \(2s< N < 4s\) this family needs to be modified by a lower order term in order to give the sharp upper bound for Theorem 1.4 (see (2.1) and Theorem 2.1 below). For details of the computations in the case \(N \ge 4s\), we refer to the forthcoming work [19]. It is noteworthy that the auxiliary minimization problem giving the subleading coefficient in (1.19) is local in V in the sense that it only involves the pointwise value V(x), whereas that of Theorem 1.4 contains the non-local quantity \(Q_V\).
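Heuristically, the exponents in (1.19) can be read off from a simple balancing (a sketch only; the rigorous expansion for \(N > 4s\) is carried out in [19]): testing with \(PU_{x, \lambda }\) produces, beyond the constant \(S_{N,s}\), a competition of the form \(c_1 \phi _0(x) \lambda ^{-N+2s} - c_2 \varepsilon |V(x)| \lambda ^{-2s}\) with constants \(c_1, c_2 > 0\), and minimizing over \(\lambda \) gives

$$\begin{aligned} \lambda \sim \left( \frac{\phi _0(x)}{\varepsilon |V(x)|} \right) ^\frac{1}{N-4s} \qquad \text {and a minimal value of order } - \phi _0(x)^{-\frac{2s}{N-4s}} |V(x)|^\frac{N-2s}{N-4s} \varepsilon ^\frac{N-2s}{N-4s}, \end{aligned}$$

in accordance with (1.19).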

Let us now describe in more detail the approach we use in the proofs of Theorems 1.4, 1.5 and 1.6, which are, in fact, intimately linked. Firstly, the family of functions \(\psi _{x, \lambda }\) defined in (2.1) below yields an upper bound for \(S(a + \varepsilon V)\), which will turn out to be sharp. The strategy we use to prove the corresponding lower bound on \({\mathcal {S}}_{a + \varepsilon V}[u_\varepsilon ]\), for a sequence \((u_\varepsilon )\) as in (1.17), can be traced back at least to work of Rey [37, 38] and Bahri–Coron [3] on the classical Brezis–Nirenberg problem for \(s =1\); it was adapted to treat problems with a critical potential a when \(s=1\), \(N = 3\) in [22] and, more recently, in [24, 25]. This strategy features two characteristic steps, namely (i) supplementing the initial asymptotic expansion \(u_\varepsilon = \alpha _\varepsilon (PU_{x_\varepsilon , \lambda _\varepsilon } + w_\varepsilon )\), obtained by a concentration-compactness argument, by the orthogonality condition \(w_\varepsilon \in T_{x_\varepsilon , \lambda _\varepsilon }^\bot \) and (ii) using a certain coercivity inequality, valid for functions in \(T_{x_\varepsilon , \lambda _\varepsilon }^\bot \), to improve the bound on the remainder \(w_\varepsilon \). The basic instance of this strategy is carried out in Sect. 3. Indeed, after performing steps (i) and (ii), in Proposition 3.6 below we are able to exclude concentration near \(\partial \Omega \) and obtain a quantitative bound on \(w_\varepsilon = \alpha _\varepsilon ^{-1} u_\varepsilon - PU_{x_\varepsilon , \lambda _\varepsilon }\). As in [23, 37], this piece of information is enough to arrive at (1.19) and similar conclusions when \(N > 4s\); see the forthcoming paper [19] for details.

On the other hand, when \(2s< N < 4s\), the bound that Proposition 3.6 provides for the modified difference \(u_\varepsilon - \psi _{x_\varepsilon , \lambda _\varepsilon }\) is still insufficient. For \(s = 1\), it was however observed in [25] that one can refine the expansion of \(u_\varepsilon \) by reiterating steps (i) and (ii). Here, we carry out their strategy in a streamlined version (compare Remark 5.1) and for fractional \(s \in (0,1)\). That is, one writes \(w_\varepsilon = \psi _{x_\varepsilon , \lambda _\varepsilon } - PU_{x_\varepsilon , \lambda _\varepsilon } + q_\varepsilon \), decomposes \(q_\varepsilon = t_\varepsilon + r_\varepsilon \) with \(r_\varepsilon \in T_{x_\varepsilon , \lambda _\varepsilon }^\bot \) and applies the coercivity inequality a second time. We are able to conclude as long as the technical condition \(8s/3 < N\) is met (which is equivalent to \(\lambda ^{-3N+6s} = o(\lambda ^{-2s})\)). Indeed, in that case the leading contributions of \(t_\varepsilon \) to the energy, which enter to orders \(\lambda ^{-N+2s}\) and \(\lambda ^{-2N+4s}\), cancel precisely; see Lemma 5.8. If \(N \le 8s/3\), a plethora of additional terms in \(t_\varepsilon \), which contribute to orders \(\lambda ^{-k(N -2s)}\) with \(3 \le k \le \tfrac{2s}{N-2s}\), will become relevant, and we were not able to treat those in a systematic way. It is natural to expect that the cancellation phenomenon that occurs for \(k =1,2\) still persists for \(k \ge 3\). This would allow us to prove Theorems 1.4, 1.5, and 1.6 for general \(N > 2s\). For further details of the argument and the difficulties just discussed, we refer to Sect. 5.

One may note that the method of proof just described makes no use of two common techniques used to treat similar fractional-order problems. Firstly, we do not employ the extension formulation for \((-\Delta )^s\) due to either [14] for the restricted or to [13, 44] for the spectral fractional Laplacian, differently from, e.g., [5, 15, 16, 18, 28]. Secondly, using the properties of \(PU_{x, \lambda }\) (as given in Lemma A.2) we avoid lengthy calculations with singular integrals, appearing e.g. in [42], while at the same time optimizing the cutoff procedure with respect to [42]. We do use the singular integral formulation of \((-\Delta )^s\) in the proof of Proposition 3.1, but not in the main line of the asymptotic analysis argument.

To conclude this introduction, let us mention that several works in the literature (see [5, 7, 45]) treat the problem corresponding to (1.7) for a different notion of Dirichlet boundary conditions for \((-\Delta )^s\) on \(\Omega \), namely the spectral fractional Laplacian, defined by classical spectral theory using the \(L^2(\Omega )\) orthonormal basis of Dirichlet eigenfunctions of \(-\Delta \). In contrast to this, the notion of \((-\Delta )^s\) we use in this paper, as defined in (1.4) or (1.5) on \({{\widetilde{H}}^s(\Omega )}\) given by (1.2), usually goes in the literature by the name of restricted fractional Laplacian. A nice discussion of these two operators, as well as a method of unified treatment for both, can be found in [18] (see also [41]).

1.2 Further perspectives

As far as we know, the role of the threshold configurations given by \(k(N -2s) = 2s\) for \(k \ge 1\) in the fractional Brezis–Nirenberg problem (1.7) has only been investigated in the literature for \(k = 1\) corresponding to \(N = 4s\), below which the problem is known to behave differently by the results quoted above. It would be exciting to exhibit some similar, possibly refined, qualitative change in the behavior of (1.7) at one or each of the following thresholds \(N = 3s\), \(N = 8s/3\), \(N= 10s/4\), etc.

For the fractional Brezis–Nirenberg problem in general, and for the low-dimensional range \(2s< N < 4s\) in particular, many intriguing questions around the blow-up analysis remain open. For instance, one may consider PDE solutions \((u_\varepsilon )\) which are not necessarily energy-minimizing and even possibly admit several concentration points, as done for \(s = 1\) in [4, 32].

Another possible extension is the case of higher-order derivatives \(s > 1\), for which the maximum principle may fail and additional boundary conditions need to be imposed. For instance, to the best of our knowledge, even for \(s = 2\) the analogue of Theorem 1.2 is not known (we refer to [27, Chapters 7.5–7.9] for some polyharmonic problems with critical growth). Finally, it could be of interest to study the limit case \(2s = N\) corresponding to the Moser–Trudinger inequality, see [21] for a very general blow-up result in this case when \(s = 1\).

1.3 Notation

We will often abbreviate the fractional critical Sobolev exponent by \(p:= \frac{2N}{N-2s}\). For any \(q \ge 1\), we abbreviate \(\Vert \cdot \Vert _q:= \Vert \cdot \Vert _{L^q({\mathbb {R}}^N)}\). When \(q=2\), we sometimes write \(\Vert \cdot \Vert := \Vert \cdot \Vert _2\).

Unless stated otherwise, we shall always assume \(s\in (0,1)\) and \(N \in (2\,s, 4\,s)\).

For \(x \in \Omega \), we use the shorthand \(d(x) = \text {dist}(x, \partial \Omega )\).

For a set M and functions \(f,g: M \rightarrow {\mathbb {R}}_+\), we shall write \(f(m) \lesssim g(m)\) if there exists a constant \(C > 0\), independent of m, such that \(f(m) \le C g(m)\) for all \(m \in M\), and accordingly for \(\gtrsim \). If \(f \lesssim g\) and \(g \lesssim f\), we write \(f \sim g\).

The various constants appearing throughout the paper and their numerical values are collected in Lemma B.5 in the Appendix.

2 Proof of the upper bound

The following theorem gives the asymptotics of \({\mathcal {S}}_{a + \varepsilon V}[\psi _{x, \lambda }]\), for the test function

$$\begin{aligned} \psi _{x, \lambda }(y):= PU_{x, \lambda }(y) - \lambda ^{-\frac{N-2s}{2}} (H_a(x,y) - H_0(x,y)) \end{aligned}$$
(2.1)

as \(\lambda \rightarrow \infty \).

Theorem 2.1

(Expansion of \({\mathcal {S}}_{a + \varepsilon V}{[\psi _{x, \lambda }]}\)) As \(\lambda \rightarrow \infty \), uniformly for x in compact subsets of \(\Omega \) and for \(\varepsilon \ge 0\),

$$\begin{aligned}&\Vert {(-\Delta )^{s/2}}\psi _{x, \lambda }\Vert _2^2 + \int _\Omega (a + \varepsilon V) \psi _{x, \lambda }^2 \mathop {}\!\textrm{d}y \nonumber \\&\quad = c_{N,s} A_{N,s} - c_{N,s} a_{N,s} \phi _a(x) \lambda ^{-N+2s} \nonumber \\&\qquad + (c_{N,s} d_{N,s} b_{N,s} - \alpha _{N,s}) a(x) \lambda ^{-2s} + \varepsilon \lambda ^{-N + 2s} Q_V(x) + o (\lambda ^{-2s}) + o(\varepsilon \lambda ^{-N+2s}) \nonumber \\ \end{aligned}$$
(2.2)

and

$$\begin{aligned} \int _\Omega \psi _{x, \lambda }^p \mathop {}\!\textrm{d}y&= A_{N,s} - p a_{N,s} \phi _a(x) \lambda ^{-N+2s} + {\mathcal {T}}_1 (\phi _a(x), \lambda ) + p d_{N,s} b_{N,s} a(x) \lambda ^{-2s} + o(\lambda ^{-2s}). \end{aligned}$$
(2.3)

In particular,

$$\begin{aligned} {\mathcal {S}}_{a + \varepsilon V} [\psi _{x, \lambda }]&= S + A_{N,s}^{-\frac{N-2s}{N}} \Big [ a_{N,s} c_{N,s} \phi _a(x) \lambda ^{-N+2s} + {\mathcal {T}}_2 (\phi _a(x), \lambda ) \nonumber \\&\quad - a(x)\lambda ^{-2s} (\alpha _{N,s} + c_{N,s} d_{N,s} b_{N,s} ) \nonumber \\&\quad + \varepsilon \lambda ^{-N+2s} Q_V(x) \Big ] + o( \lambda ^{-2s}) + o(\varepsilon \lambda ^{-N+2s}). \end{aligned}$$
(2.4)

Here, \({\mathcal {T}}_i(\phi , \lambda )\) are (possibly empty) sums of the form

$$\begin{aligned} {\mathcal {T}}_i(\phi , \lambda ):= \sum _{k = 2}^{K} \gamma _i(k) \phi ^k \lambda ^{-k(N-2s)} \end{aligned}$$
(2.5)

for suitable coefficients \(\gamma _i(k) \in {\mathbb {R}}\), where \(K = \lfloor \frac{2\,s}{N-2\,s} \rfloor \) is the largest integer less than or equal to \(\frac{2s}{N-2s}\).
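For orientation: since \(N \in {\mathbb {N}}\), \(s \in (0,1)\) and \(2s< N < 4s\), only \(N \in \{1,2,3\}\) can occur. For \(N = 3\) (which requires \(s \in (\tfrac{3}{4},1)\)) one always has \(\frac{2s}{N-2s} < 2\), so that \(K = 1\) and the sums (2.5) are empty, whereas, for example, \(N = 2\) and \(s = \tfrac{4}{5}\) give \(\frac{2s}{N-2s} = 4\) and \(K = 4\).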

Theorem 2.1 is valid irrespective of the criticality of a. The following corollary states two consequences of Theorem 2.1, which concern in particular critical potentials.

Corollary 2.2

(Properties of critical potentials)

  1. (i)

    If \(S(a) = S\), then \(\phi _a(x) \ge 0\) for all \(x \in \Omega \).

  2. (ii)

    If \(S(a) = S\) and \(\phi _a(x) = 0\) for some \(x \in \Omega \), then \(a(x) \le 0\).

Proof

Both statements follow from Theorem 2.1 applied with \(\varepsilon = 0\). Indeed, suppose that either \(\phi _a(x) < 0\) or \(\phi _a(x) = 0 < a(x)\) for some \(x \in \Omega \). In both cases, (2.4) gives \(S(a) \le {\mathcal {S}}_{a}[\psi _{x, \lambda }] < S\) for \(\lambda > 0\) large enough, contradiction. \(\square \)

Based on Theorem 2.1, we can now derive the following upper bound for \(S(a+\varepsilon V)\) provided that \({\mathcal {N}}_a(V)\) is not empty.

Corollary 2.3

Suppose that \({\mathcal {N}}_a(V) \ne \emptyset \). Then \(S(a+\varepsilon V) < S\) for all \(\varepsilon > 0\) and, as \(\varepsilon \rightarrow 0+\),

$$\begin{aligned} S(a+\varepsilon V) \le S - \sigma _{N,s} \sup _{x \in {\mathcal {N}}_a(V)} \frac{|Q_V(x)|^\frac{2s}{4s-N}}{|a(x)|^\frac{N-2s}{4s-N}} \varepsilon ^\frac{2s}{4s-N} + o\left( \varepsilon ^\frac{2s}{4s-N}\right) , \end{aligned}$$
(2.6)

where

$$\begin{aligned} \sigma _{N,s} = A_{N,s}^{-\frac{N-2s}{N}} \left( \alpha _{N,s} + c_{N,s} d_{N,s} b_{N,s} \right) ^{-\frac{N-2s}{4s-N}} \left( \frac{N-2s}{2s} \right) ^\frac{2s}{4s-N} \frac{4s - N}{N-2s}. \end{aligned}$$
(2.7)

Proof

Let us fix \(\varepsilon > 0\) and \(x \in {\mathcal {N}}_a(V)\). Then, by (2.4),

$$\begin{aligned} S(a+\varepsilon V)&\le {\mathcal {S}}_{a +\varepsilon V}[\psi _{x,\lambda }] \end{aligned}$$
(2.8)
$$\begin{aligned}&= S + A_{N,s}^{-\frac{N-2s}{N}} \left( - (a(x) +o(1)) \lambda ^{-2s} (\alpha _{N,s} + c_{N,s} d_{N,s} b_{N,s} ) \right. \nonumber \\&\quad \left. + \varepsilon \lambda ^{-N+2s} (Q_V(x) +o(1)) \right) . \end{aligned}$$
(2.9)

We first optimize the right side over \(\lambda > 0\). Since \(A_\varepsilon := (-a(x) +o(1)) (\alpha _{N,s} + c_{N,s} d_{N,s} b_{N,s} )\) and \(B_\varepsilon := -Q_V(x)+o(1)\) are strictly positive by our assumptions, we are in the situation of Lemma B.6. Picking \(\lambda = \lambda _0(\varepsilon )\) given by (B.5), we have \(o(\lambda ^{-2s}) = o(\varepsilon ^\frac{2s}{4s-N})\). Thus, by (B.6), we get, as \(\varepsilon \rightarrow 0+\),

$$\begin{aligned} S(a+\varepsilon V)&\le S - \varepsilon ^\frac{2s}{4s-N} \frac{|Q_V(x)|^\frac{2s}{4s-N}}{|a(x)|^\frac{N-2s}{4s-N}} A_{N,s}^{-\frac{N-2s}{N}} \left( \alpha _{N,s} + c_{N,s} d_{N,s} b_{N,s} \right) ^{-\frac{N-2s}{4s-N}}\\&\quad \left( \frac{N-2s}{2s} \right) ^\frac{2s}{4s-N} \frac{4s - N}{N-2s} \\&\quad + o(\varepsilon ^\frac{2s}{4s-N}). \end{aligned}$$

Optimizing over \(x\in {\mathcal {N}}_a(V)\) completes the proof of (2.6). In particular, \(S(a + \varepsilon V) < S\) for small enough \(\varepsilon > 0\). Since \(S(a + \varepsilon V)\) is a concave function of \(\varepsilon \) (being the infimum over u of functions \({\mathcal {S}}_{a + \varepsilon V}[u]\) which are linear in \(\varepsilon \)) and \(S(a) = S\), this implies that \(S(a + \varepsilon V) < S\) for every \(\varepsilon > 0\). \(\square \)
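For the reader's orientation, we record the elementary minimization that underlies Lemma B.6 and produces the exponent \(\varepsilon ^\frac{2s}{4s-N}\) above; the following is only a sketch, the precise version with error terms being (B.5)–(B.6). For constants \(A, B > 0\) and \(2s< N < 4s\), the function \(\lambda \mapsto A \lambda ^{-2s} - \varepsilon B \lambda ^{-(N-2s)}\) attains its minimum over \((0, \infty )\) at

$$\begin{aligned} \lambda _0 = \left( \frac{2s A}{(N-2s) B \varepsilon } \right) ^\frac{1}{4s-N}, \qquad \text {with minimal value } \quad - \frac{4s-N}{2s} \left( \frac{N-2s}{2s} \right) ^\frac{N-2s}{4s-N} \frac{B^\frac{2s}{4s-N}}{A^\frac{N-2s}{4s-N}} \, \varepsilon ^\frac{2s}{4s-N}. \end{aligned}$$

With \(A = (\alpha _{N,s} + c_{N,s} d_{N,s} b_{N,s}) |a(x)|\) and \(B = |Q_V(x)|\), this reproduces the limit of \(\varepsilon ^\frac{1}{4s-N} \lambda _\varepsilon \) stated in Theorem 1.6 and, up to the prefactor \(A_{N,s}^{-\frac{N-2s}{N}}\) and the elementary identity \(\left( \frac{N-2s}{2s} \right) ^\frac{N-2s}{4s-N} \frac{4s-N}{2s} = \left( \frac{N-2s}{2s} \right) ^\frac{2s}{4s-N} \frac{4s-N}{N-2s}\), the constant \(\sigma _{N,s}\) from (2.7).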

Proof of Theorem 2.1

Step 1: Expansion of the numerator. Since \((-\Delta )^s H_a(x, \cdot ) = a G_a(x, \cdot )\), the function \(\psi _{x, \lambda }\) satisfies

$$\begin{aligned} {(-\Delta )^{s}}\psi _{x, \lambda }= c_{N,s} U_{x, \lambda }^{p-1} - \lambda ^{-\frac{N-2s}{2}} a G_a(x, \cdot ). \end{aligned}$$

Therefore, recalling Lemma A.2,

$$\begin{aligned} \Vert {(-\Delta )^{s/2}}\psi _{x, \lambda }\Vert _2^2&= \int _\Omega \psi _{x, \lambda }(y) {(-\Delta )^{s}}\psi _{x, \lambda }(y) \mathop {}\!\textrm{d}y \\&= \int _\Omega \left( U_{x, \lambda }- \lambda ^{-\frac{N-2s}{2}} H_a(x, \cdot ) - f_{x, \lambda }\right) \left( c_{N,s} U_{x, \lambda }^{p-1} - \lambda ^{-\frac{N-2s}{2}} a G_a(x, \cdot ) \right) \mathop {}\!\textrm{d}y \\&= c_{N,s} \int _\Omega U_{x, \lambda }^p \mathop {}\!\textrm{d}y - c_{N,s} \lambda ^{-\frac{N-2s}{2}} \int _\Omega U_{x, \lambda }^{p-1} H_a(x, \cdot ) \mathop {}\!\textrm{d}y \\&\quad - \lambda ^{-\frac{N-2s}{2}} \int _\Omega a G_a(x, \cdot ) \left( U_{x, \lambda }- \lambda ^{-\frac{N-2s}{2}} H_a(x, \cdot ) \right) \mathop {}\!\textrm{d}y \\&\quad - \int _\Omega f_{x, \lambda }\left( c_{N,s} U_{x, \lambda }^{p-1} - \lambda ^{-\frac{N-2s}{2}} a G_a(x,\cdot ) \right) \mathop {}\!\textrm{d}y. \end{aligned}$$

We now treat the four terms on the right side separately.

A simple computation shows that \(\int _{{\mathbb {R}}^N {\setminus } B_{d(x)}(x)} U_{x, \lambda }^p \mathop {}\!\textrm{d}y = {\mathcal {O}}(\lambda ^{-N})\). Thus the first term is given by

$$\begin{aligned} c_{N,s} \int _\Omega U_{x, \lambda }^p \mathop {}\!\textrm{d}y = c_{N,s} \left( \int _{{\mathbb {R}}^N} U_{0,1}^p \mathop {}\!\textrm{d}y \right) + {\mathcal {O}}(\lambda ^{-N}) = c_{N,s} A_{N,s} + o( \lambda ^{-2s}). \end{aligned}$$

The second term is, by Lemma A.4,

$$\begin{aligned} - c_{N,s} \lambda ^{-\frac{N-2s}{2}} \int _\Omega U_{x, \lambda }^{p-1} H_a(x, \cdot ) \mathop {}\!\textrm{d}y&= - c_{N,s} a_{N,s} \phi _a(x) \lambda ^{-N+2s} + c_{N,s} d_{N,s} b_{N,s} a(x) \lambda ^{-2s} + o( \lambda ^{-2s}). \end{aligned}$$

The third term will be combined with a term coming from \(\int _\Omega (a + \varepsilon V) \psi _{x, \lambda }^2 \mathop {}\!\textrm{d}y\), see below.

The fourth term can be bounded, by Lemma B.1 and recalling \(\Vert f_{x, \lambda }\Vert _\infty \lesssim \lambda ^{-\frac{N+4-2s}{2}}\) from Lemma A.2, by

$$\begin{aligned} \left| \int _\Omega f_{x, \lambda }\left( c_{N,s} U_{x, \lambda }^{p-1} - \lambda ^{-\frac{N-2s}{2}} a G_a(x,\cdot ) \right) \mathop {}\!\textrm{d}y \right|&\lesssim \Vert f_{x, \lambda }\Vert _\infty \left( \Vert U_{x, \lambda }\Vert _{p-1}^{p-1} + \lambda ^{-\frac{N-2s}{2}} \right) \\&\lesssim \lambda ^{-N - 2 + 2s} = o(\lambda ^{-2s}). \end{aligned}$$

Now we treat the potential term. We have

$$\begin{aligned} \int _\Omega (a + \varepsilon V) \psi _{x, \lambda }^2 \mathop {}\!\textrm{d}y&= \int _\Omega (a + \varepsilon V) \left( U_{x, \lambda }- \lambda ^{-\frac{N-2s}{2}} H_a(x, \cdot ) - f_{x, \lambda }\right) ^2 \mathop {}\!\textrm{d}y \\&= \int _\Omega (a + \varepsilon V) \left( U_{x, \lambda }- \lambda ^{-\frac{N-2s}{2}} H_a(x, \cdot ) \right) ^2 \mathop {}\!\textrm{d}y \\&\quad - 2 \int _\Omega (a + \varepsilon V) f_{x, \lambda }\left( U_{x, \lambda }- \lambda ^{-\frac{N-2s}{2}} H_a(x, \cdot ) \right) \mathop {}\!\textrm{d}y + \int _\Omega (a + \varepsilon V) f_{x, \lambda }^2 \mathop {}\!\textrm{d}y . \end{aligned}$$

Similarly to the computations above, the terms containing \(f_{x, \lambda }\) are bounded by

$$\begin{aligned} \left| \int _\Omega (a + \varepsilon V) f_{x, \lambda }\left( U_{x, \lambda }- \lambda ^{-\frac{N-2s}{2}} H_a(x, \cdot ) \right) \mathop {}\!\textrm{d}y \right|&\lesssim \Vert f_{x, \lambda }\Vert _\infty \left( \Vert U_{x, \lambda }\Vert _1 + \lambda ^{-\frac{N-2s}{2}}\right) \\&\lesssim \lambda ^{-N - 2 + 2s} = o\left( \lambda ^{-2s}\right) \end{aligned}$$

and

$$\begin{aligned} \left| \int _\Omega (a + \varepsilon V) f_{x, \lambda }^2 \mathop {}\!\textrm{d}y \right| \lesssim \Vert f_{x, \lambda }\Vert ^2_\infty \lesssim \lambda ^{-N-4+2s} = o\left( \lambda ^{-2s}\right) . \end{aligned}$$

Finally, we combine the main term with the third term in the expansion of \(\Vert {(-\Delta )^{s/2}}\psi _{x, \lambda }\Vert _2^2\) from above. Recalling that

$$\begin{aligned} U_{x, \lambda }- \lambda ^{-\frac{N-2s}{2}} H_a(x, \cdot ) - \lambda ^{-\frac{N-2s}{2}} G_a(x, \cdot )= & {} U_{x, \lambda }- \lambda ^{-\frac{N-2s}{2}} |x-y|^{-N+ 2s} \nonumber \\= & {} -\lambda ^\frac{N-2s}{2}h(\lambda (x-y)) \end{aligned}$$
(2.10)

with h as in Lemma B.3, we get

$$\begin{aligned}&- \lambda ^{-\frac{N-2s}{2}} \int _\Omega a G_a(x, \cdot ) \left( U_{x, \lambda }- \lambda ^{-\frac{N-2s}{2}} H_a(x, \cdot ) \right) \mathop {}\!\textrm{d}y + \int _\Omega (a + \varepsilon V) \left( U_{x, \lambda }- \lambda ^{-\frac{N-2s}{2}} H_a(x, \cdot ) \right) ^2 \mathop {}\!\textrm{d}y \\&\quad = - \int _\Omega a \left( U_{x, \lambda }- \lambda ^{-\frac{N-2s}{2}} H_a(x, \cdot ) \right) \lambda ^{\frac{{N-2s}}{2}} h(\lambda (x-y)) \mathop {}\!\textrm{d}y + \varepsilon \int _\Omega V \left( U_{x, \lambda }- \lambda ^{-\frac{N-2s}{2}} H_a(x, \cdot ) \right) ^2 \mathop {}\!\textrm{d}y. \end{aligned}$$

Since

$$\begin{aligned} \int _\Omega a H_a(x, \cdot ) h(\lambda (x-y)) \mathop {}\!\textrm{d}y \lesssim \lambda ^{-N} \Vert h\Vert _{L^1({\mathbb {R}}^N)} =o(\lambda ^{-2s}), \end{aligned}$$

by Lemma B.4, we have

$$\begin{aligned} - \int _\Omega a \left( U_{x, \lambda }- \lambda ^{-\frac{N-2s}{2}} H_a(x, \cdot ) \right) \lambda ^{\frac{{N-2s}}{2}} h(\lambda (x-y)) \mathop {}\!\textrm{d}y = -\alpha _{N,s} \lambda ^{-2s} a(x) + o(\lambda ^{-2s}). \end{aligned}$$

Moreover, again by (2.10), and using the fact that \(h \in L^2({\mathbb {R}}^N)\) by Lemma B.3,

$$\begin{aligned}&\varepsilon \int _\Omega V \left( U_{x, \lambda }- \lambda ^{-\frac{N-2s}{2}} H_a(x, \cdot ) \right) ^2 \mathop {}\!\textrm{d}y \\&\quad = \varepsilon \lambda ^{-N + 2s} \int _\Omega V G_a(x, \cdot )^2 \mathop {}\!\textrm{d}y - 2 \varepsilon \int _\Omega V G_a(x, \cdot ) h(\lambda (x-y)) \mathop {}\!\textrm{d}y \\&\qquad + \varepsilon \lambda ^{N-2s} \int _\Omega V h(\lambda (x-y))^2 \mathop {}\!\textrm{d}y \end{aligned}$$

with

$$\begin{aligned} \varepsilon \int _\Omega V G_a(x, \cdot ) h(\lambda (x-y)) \mathop {}\!\textrm{d}y \lesssim \varepsilon \lambda ^{-N/2} \Vert G_a(x, \cdot )\Vert _2 \Vert h\Vert _2 = o\left( \varepsilon \lambda ^{-N+2s}\right) \end{aligned}$$

and

$$\begin{aligned} \varepsilon \lambda ^{N-2s} \int _\Omega V h(\lambda (x-y))^2 \mathop {}\!\textrm{d}y \lesssim \varepsilon \lambda ^{-2s} \Vert h\Vert _2^2 = o\left( \varepsilon \lambda ^{-N+2s}\right) . \end{aligned}$$

This completes the proof of the claimed expansion (2.2).

Step 2: Expansion of the denominator. Recall \(p = \frac{2N}{N-2\,s}\). Firstly, writing \(\psi _{x, \lambda }= U_{x, \lambda }- \lambda ^{-\frac{N-2s}{2}} H_a(x, \cdot ) - f_{x, \lambda }\), we have

$$\begin{aligned} \int _\Omega \psi _{x, \lambda }^p \mathop {}\!\textrm{d}y= & {} \int _\Omega \left( U_{x, \lambda }- \lambda ^{-\frac{N-2s}{2}} H_a(x, \cdot )\right) ^p \mathop {}\!\textrm{d}y \\{} & {} + {\mathcal {O}}\left( \Vert U_{x, \lambda }- \lambda ^{-\frac{N-2s}{2}} H_a(x, \cdot )\Vert _{p-1}^{p-1} \Vert f_{x, \lambda }\Vert _\infty + \Vert f_{x, \lambda }\Vert _\infty ^p\right) . \end{aligned}$$

Using Lemma B.1 and the bound \(\Vert f_{x, \lambda }\Vert _\infty \lesssim \lambda ^{-\frac{N+4-2\,s}{2}}\), we deduce that the remainder term is \(o(\lambda ^{-2s})\). To evaluate the main term, from Taylor’s formula for \(t \mapsto t^p\), we have

$$\begin{aligned} (a + b)^p = a^p + pa^{p-1} b + \sum _{k=2}^K {p \atopwithdelims ()k} a^{p-k} b^k + {\mathcal {O}}\left( a^{p-K-1} |b|^{K+1} + |b|^p\right) . \end{aligned}$$

Here, \({p \atopwithdelims ()k}:= \frac{\Gamma (p+1)}{\Gamma (p-k+1) \Gamma (k+1)}\) is the generalized binomial coefficient and \(K = \lfloor \frac{2\,s}{N-2\,s} \rfloor \) as in the statement of the theorem. Applying this with \(a = U_{x, \lambda }(y)\) and \(b = - \lambda ^{-\frac{N-2s}{2}} H_a(x, \cdot )\), we find

$$\begin{aligned}&\int _\Omega (U_{x, \lambda }- \lambda ^{-\frac{N-2s}{2}} H_a(x, \cdot ))^p \mathop {}\!\textrm{d}y \\&\quad = \int _\Omega U_{x, \lambda }^p \mathop {}\!\textrm{d}y - p \lambda ^{-\frac{N-2s}{2}} \int _\Omega U_{x, \lambda }^{p-1} H_a(x, \cdot ) \mathop {}\!\textrm{d}y \\&\qquad + {\mathcal {O}}\left( \lambda ^{-N+2s} \int _\Omega U_{x, \lambda }^{p-2} H_a(x, \cdot )^2 \mathop {}\!\textrm{d}y \right) + {\mathcal {O}}\left( \lambda ^{-N} \int _\Omega H_a(x, \cdot )^p \mathop {}\!\textrm{d}y \right) . \end{aligned}$$

By Lemma A.4, the claimed expansion (2.3) follows.

Step 3: Expansion of the quotient. For \(\alpha = 2/p \in (0,1)\), and fixed \(a > 0\), we again use the Taylor expansion

$$\begin{aligned} (a + b)^{-\alpha } = a^{-\alpha } - \alpha a^{-\alpha - 1} b + \sum _{k=2}^K {{-\alpha } \atopwithdelims ()k} a^{-\alpha - k} b^k + {\mathcal {O}}(b^{K+1}). \end{aligned}$$

By Step 2, we may apply this with \(a = A_{N,s}\) and \(b = - p a_{N,s} \phi _a(x) \lambda ^{-N+2\,s} + {\mathcal {T}}_1 (\phi _a(x), \lambda ) + p d_{N,s} b_{N,s} a(x) \lambda ^{-2\,s} + o(\lambda ^{-2\,s})\). Since \(b = {\mathcal {O}}(\lambda ^{-N+2\,s})\) and \(K+1 > \frac{2\,s}{N-2\,s}\), we have \({\mathcal {O}}(b^{K+1}) = o(\lambda ^{-2s})\) and thus

$$\begin{aligned} \left( \int _\Omega \psi _{x, \lambda }^p \mathop {}\!\textrm{d}y \right) ^{-2/p}&= A_{N,s}^{-\frac{N-2s}{N}} + A_{N,s}^{-\frac{2(N-s)}{N}} \left( 2 a_{N,s} \phi _a(x) \lambda ^{-N+2s} - 2 d_{N,s} b_{N,s} a(x) \lambda ^{-2s} \right) \end{aligned}$$
(2.11)
$$\begin{aligned}&\quad + {\mathcal {T}}_3(\phi _a(x), \lambda ) + o(\lambda ^{-2s}) \end{aligned}$$
(2.12)

for some term \({\mathcal {T}}_3(\phi , \lambda )\) as in (2.5). Multiplying this expansion with (2.2), we obtain

$$\begin{aligned} {\mathcal {S}}_{a + \varepsilon V} [\psi _{x, \lambda }]&= c_{N,s} A_{N,s}^\frac{2s}{N} + A_{N,s}^{-\frac{N-2s}{N}} \Big [ a_{N,s} c_{N,s} \phi _a(x) \lambda ^{-N+2s} + {\mathcal {T}}_2(\phi _a(x),\lambda ) \\&\quad - a(x) \lambda ^{-2s} (\alpha _{N,s} + c_{N,s} d_{N,s} b_{N,s}) + \varepsilon \lambda ^{-N+2s} Q_V(x) \Big ]\\&\quad + o( \lambda ^{-2s}) + o(\varepsilon \lambda ^{-N+2s}). \end{aligned}$$

By integrating the equation \((-\Delta )^s U_{0,1} = c_{N,s} U_{0,1}^{p-1}\) and using the fact that \(U_{0,1}\) minimizes the Sobolev quotient on \({\mathbb {R}}^N\) (or by a computation on the numerical values of the constants given in Lemma B.5), we have \(c_{N,s} A_{N,s}^\frac{2s}{N} = S\). Hence, this is the expansion claimed in (2.4). \(\square \)

3 Proof of the lower bound I: a first expansion

3.1 Non-existence of a minimizer for S(a)

In this section, we prove that for a critical potential a, the infimum S(a) is not attained. As we will see in Sect. 3.2, this implies the important basic fact that the minimizers for \(S(a+ \varepsilon V)\) must blow up as \(\varepsilon \rightarrow 0\).

The following is the main result of this section.

Proposition 3.1

(Non-existence of a minimizer for S(a)) Suppose that \(a \in C({\overline{\Omega }})\) is a critical potential. Then

$$\begin{aligned} S(a) = \inf _{u \in {{\widetilde{H}}^s(\Omega )}, u \not \equiv 0} \frac{\int _{{\mathbb {R}}^N} |{(-\Delta )^{s/2}}u|^2 \mathop {}\!\textrm{d}y + \int _\Omega a u^2 \mathop {}\!\textrm{d}y}{\Vert u\Vert _\frac{2N}{N-2s}^2} \end{aligned}$$

is not achieved.

For \(s=1\), Proposition 3.1 was proved by Druet [20] and we follow his strategy. The feature that makes the generalization of [20] to \(s \in (0,1)\) not completely straightforward is its use of the product rule for ordinary derivatives. Instead, we shall use the identity

$$\begin{aligned} {(-\Delta )^{s}}(uv) = u {(-\Delta )^{s}}v + v {(-\Delta )^{s}}u - I_s(u,v), \end{aligned}$$
(3.1)

where

$$\begin{aligned} I_s(u,v)(x):= C_{N,s} P.V. \int _{{\mathbb {R}}^N} \frac{(u(x) - u(y))(v(x) - v(y))}{|x-y|^{N+2s}} \mathop {}\!\textrm{d}y \end{aligned}$$

with \(C_{N,s}\) as in (1.6). While the relation (3.1) can be verified by a simple computation (see e.g. [26, Lemma 20.2]), it leads to more complicated terms than those arising in Druet’s proof. To be more precise, the term \(\int _\Omega u^2 |\nabla \varphi |^2\) from [20] is replaced by the term \({\mathcal {I}}(\varphi )\) defined in (3.6), which is more involved to evaluate for the right choice of \(\varphi \).

Proof of Proposition 3.1

For the sake of finding a contradiction, we suppose that there exists u which achieves S(a), normalized so that

$$\begin{aligned} \int _\Omega u^{\frac{2N}{N-2s}} \mathop {}\!\textrm{d}y = 1. \end{aligned}$$
(3.2)

Then u satisfies the equation

$$\begin{aligned} {(-\Delta )^{s}}u + a u = S u^\frac{N+2s}{N-2s} \end{aligned}$$
(3.3)

with Lagrange multiplier \(S = S_{N,s}\) equal to the Sobolev constant. Indeed, this value is determined by integrating the equation against u and using (3.2).

Since \(S(a) = S\), we have, for every \(\varphi \in C^\infty ({\mathbb {R}}^N)\) and \(\varepsilon > 0\), and abbreviating \(p = \frac{2N}{N-2s}\),

$$\begin{aligned} S \left( \int _\Omega \left( u(1+ \varepsilon \varphi )\right) ^p \mathop {}\!\textrm{d}y \right) ^\frac{2}{p} \le \int _{{\mathbb {R}}^N} |{(-\Delta )^{s/2}}\left( u(1 + \varepsilon \varphi )\right) |^2 \mathop {}\!\textrm{d}y + \int _\Omega a u^2 (1+ \varepsilon \varphi )^2 \mathop {}\!\textrm{d}y.\nonumber \\ \end{aligned}$$
(3.4)

We shall expand both sides of (3.4) in powers of \(\varepsilon \). For the left side, a simple Taylor expansion together with (3.2) gives

$$\begin{aligned}{} & {} \left( \int _\Omega \left( u(1+ \varepsilon \varphi ) \right) ^p \mathop {}\!\textrm{d}y \right) ^\frac{2}{p} \nonumber \\{} & {} \quad = 1 + 2 \varepsilon \int _\Omega u^p \varphi \mathop {}\!\textrm{d}y + \varepsilon ^2 \left( (p-1) \int _\Omega u^p \varphi ^2 \mathop {}\!\textrm{d}y - (p-2) \left( \int _\Omega u^p \varphi \mathop {}\!\textrm{d}y \right) ^2 \right) + o(\varepsilon ^2).\nonumber \\ \end{aligned}$$
(3.5)

Expanding the right side is harder and we need to invoke the fractional product rule (3.1). Firstly, integrating by parts, we have

$$\begin{aligned} \int _{{\mathbb {R}}^N} |{(-\Delta )^{s/2}}\left( u(1 + \varepsilon \varphi )\right) |^2 \mathop {}\!\textrm{d}y = \int _{{\mathbb {R}}^N} u (1 + \varepsilon \varphi ) {(-\Delta )^{s}}\left( u(1 + \varepsilon \varphi )\right) \mathop {}\!\textrm{d}y. \end{aligned}$$

By (3.1), we can write

$$\begin{aligned} {(-\Delta )^{s}}\left( u(1 + \varepsilon \varphi )\right) = (1 + \varepsilon \varphi ) {(-\Delta )^{s}}u + \varepsilon u {(-\Delta )^{s}}\varphi - \varepsilon I_s (u, \varphi ). \end{aligned}$$

Hence,

$$\begin{aligned} \int _{{\mathbb {R}}^N} |{(-\Delta )^{s/2}}\left( u(1 + \varepsilon \varphi )\right) |^2 \mathop {}\!\textrm{d}y = \int _\Omega u {(-\Delta )^{s}}u (1+ \varepsilon \varphi )^2 \mathop {}\!\textrm{d}y + \varepsilon ^2 {\mathcal {I}}(\varphi ), \end{aligned}$$

where we write

$$\begin{aligned} {\mathcal {I}}(\varphi ) := \varepsilon ^{-1} \int _\Omega u (1 + \varepsilon \varphi ) \left( u {(-\Delta )^{s}}\varphi - I_s(u, \varphi ) \right) \mathop {}\!\textrm{d}y. \end{aligned}$$
(3.6)

Writing out \({(-\Delta )^{s}}\varphi \) as the singular integral given by (1.5), we obtain (we drop the principal value for simplicity)

$$\begin{aligned} {\mathcal {I}}(\varphi )&= \varepsilon ^{-1} C_{N,s} \iint _{{\mathbb {R}}^N \times {\mathbb {R}}^N} u(x) u(y) (1 + \varepsilon \varphi (x)) \frac{\varphi (x) - \varphi (y)}{|x-y|^{N+2s}} \mathop {}\!\textrm{d}x \mathop {}\!\textrm{d}y \nonumber \\&= \frac{C_{N,s}}{2} \iint _{{\mathbb {R}}^N \times {\mathbb {R}}^N} u(x) u(y) \frac{|\varphi (x) - \varphi (y)|^2}{|x-y|^{N+2s}} \mathop {}\!\textrm{d}x \mathop {}\!\textrm{d}y. \end{aligned}$$
(3.7)

The last equality follows by symmetrizing in the x and y variables.
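In more detail, the part of the double integral without the factor \(\varepsilon \varphi (x)\) vanishes because its integrand is antisymmetric under exchanging x and y, while for the remaining part

$$\begin{aligned} \iint _{{\mathbb {R}}^N \times {\mathbb {R}}^N} u(x) u(y) \varphi (x) \frac{\varphi (x) - \varphi (y)}{|x-y|^{N+2s}} \mathop {}\!\textrm{d}x \mathop {}\!\textrm{d}y = \frac{1}{2} \iint _{{\mathbb {R}}^N \times {\mathbb {R}}^N} u(x) u(y) \frac{|\varphi (x) - \varphi (y)|^2}{|x-y|^{N+2s}} \mathop {}\!\textrm{d}x \mathop {}\!\textrm{d}y, \end{aligned}$$

as one sees by averaging the integral with its version with x and y exchanged.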

Thus we can write the right side of (3.4) as

$$\begin{aligned}&\int _{{\mathbb {R}}^N} |{(-\Delta )^{s/2}}\left( u(1 + \varepsilon \varphi )\right) |^2 \mathop {}\!\textrm{d}y + \int _\Omega a u^2 (1+ \varepsilon \varphi )^2 \mathop {}\!\textrm{d}y \\&\quad = \int _\Omega u ({(-\Delta )^{s}}u + a u ) (1 + \varepsilon \varphi )^2 \mathop {}\!\textrm{d}y + \varepsilon ^2 {\mathcal {I}}(\varphi ) \\&\quad = S \int _\Omega u^p (1 + \varepsilon \varphi )^2 \mathop {}\!\textrm{d}y + \varepsilon ^2 {\mathcal {I}}(\varphi ), \end{aligned}$$

where we used Eq. (3.3). After expanding the square \((1 + \varepsilon \varphi )^2\), the terms of orders 1 and \(\varepsilon \) on both sides of (3.4) cancel precisely. For the coefficients of \(\varepsilon ^2\), we thus recover the inequality

$$\begin{aligned} \int _\Omega u^p \varphi ^2 \mathop {}\!\textrm{d}y \le \frac{1}{S(p-2)} {\mathcal {I}}(\varphi ) + \left( \int _\Omega u^p \varphi \mathop {}\!\textrm{d}y \right) ^2. \end{aligned}$$
(3.8)

We now make a suitable choice of \(\varphi \), which turns (3.8) into the desired contradiction. As in [20], we choose

$$\begin{aligned} \varphi _i(y):= ({\mathcal {S}}(y))_i, \quad i = 1,\ldots , N+1, \end{aligned}$$

where \({\mathcal {S}}: {\mathbb {R}}^N \rightarrow {\mathbb {S}}^N\) is the (inverse) stereographic projection, i.e. [34, Sect. 4.4]

$$\begin{aligned} \varphi _i = \frac{2y_i}{1 + |y|^2} \quad \text { for } i = 1,\ldots , N, \qquad \varphi _{N+1} = \frac{1- |y|^2}{1 + |y|^2}. \end{aligned}$$
(3.9)
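Note that, directly from (3.9),

$$\begin{aligned} \sum _{i=1}^{N+1} \varphi _i(y)^2 = \frac{4|y|^2 + (1-|y|^2)^2}{(1+|y|^2)^2} = \frac{(1+|y|^2)^2}{(1+|y|^2)^2} = 1 \qquad \text {for all } y \in {\mathbb {R}}^N, \end{aligned}$$

an identity which is used below.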

Moreover, we may assume (up to scaling and translating \(\Omega \) if necessary) that the balancing condition

$$\begin{aligned} \int _\Omega u^p \varphi _i \mathop {}\!\textrm{d}y = 0, \qquad i = 1,\ldots , N+1, \end{aligned}$$
(3.10)

is satisfied. Since [20] is rather brief on this point, we include some details in Lemma 3.2 below for the convenience of the reader.

By definition, we have \(\sum _{i = 1}^{N+1} \varphi _i^2 = 1\). Testing (3.8) with \(\varphi _i\) and summing over i thus yields, by (3.10),

$$\begin{aligned} 1 = \int _\Omega u^p \mathop {}\!\textrm{d}y \le \frac{1}{S(p-2)} \sum _{i = 1}^{N+1} {\mathcal {I}}(\varphi _i). \end{aligned}$$
(3.11)

To obtain a contradiction and finish the proof, we now show that \(\sum _{i = 1}^{N+1} {\mathcal {I}}(\varphi _i) < S(p-2)\). By definition of \(\varphi _i\), we have

$$\begin{aligned} \sum _{i=1}^{N+1}{\mathcal {I}}(\varphi _i) = \frac{C_{N,s}}{2} \iint _{{\mathbb {R}}^N \times {\mathbb {R}}^N} u(x) u(y) \frac{|{\mathcal {S}}(x) - {\mathcal {S}}(y)|^2}{|x-y|^{N+2s}} \mathop {}\!\textrm{d}x \mathop {}\!\textrm{d}y. \end{aligned}$$
(3.12)

To evaluate this integral further, we pass to \({\mathbb {S}}^N\). Set \(J_{{\mathcal {S}}^{-1}}(\eta ):= \det D {\mathcal {S}}^{-1} (\eta )\) and define

$$\begin{aligned} U(\eta ):= u({\mathcal {S}}^{-1}(\eta )) J_{{\mathcal {S}}^{-1}} (\eta )^\frac{1}{p}, \end{aligned}$$

so that \(\int _{{\mathbb {S}}^N} U^p \mathop {}\!\textrm{d}\eta = 1\). Since the distance transforms as

$$\begin{aligned} |{\mathcal {S}}^{-1}(\eta ) - {\mathcal {S}}^{-1} (\xi )| = J_{{\mathcal {S}}^{-1}}(\eta )^\frac{1}{2N} |\eta - \xi | J_{{\mathcal {S}}^{-1}}(\xi )^\frac{1}{2N}, \end{aligned}$$

changing variables in (3.12) gives

$$\begin{aligned} \sum _{i=1}^{N+1} {\mathcal {I}}(\varphi _i) = \frac{C_{N,s}}{2} \iint _{{\mathbb {S}}^N \times {\mathbb {S}}^N} \frac{U(\eta ) U(\xi )}{|\eta - \xi |^{N+2s -2}} \mathop {}\!\textrm{d}\eta \mathop {}\!\textrm{d}\xi . \end{aligned}$$
(3.13)

By applying first Cauchy–Schwarz and then Hölder’s inequality, we estimate

$$\begin{aligned} \sum _{i=1}^{N+1} {\mathcal {I}}(\varphi _i)&\le \frac{C_{N,s}}{2} \iint _{{\mathbb {S}}^N \times {\mathbb {S}}^N} \frac{U(\eta )^2}{|\eta - \xi |^{N+2s -2}} \mathop {}\!\textrm{d}\eta \mathop {}\!\textrm{d}\xi \nonumber \\&= \frac{C_{N,s}}{2} \delta _{N,s} \int _{{\mathbb {S}}^N} U(\eta )^2 \mathop {}\!\textrm{d}\eta < \frac{C_{N,s}}{2} \delta _{N,s} |{\mathbb {S}}^N|^\frac{2s}{N}, \end{aligned}$$
(3.14)

where the last inequality is strict. Indeed, U vanishes near the south pole of \({\mathbb {S}}^N\), hence there cannot be equality in Hölder’s inequality applied with the functions \(U^2\) and 1. Moreover, above we abbreviated

$$\begin{aligned} \delta _{N,s}:= \int _{{\mathbb {S}}^N} \frac{1}{|\eta - \xi |^{N+2s-2}} \mathop {}\!\textrm{d}\xi \end{aligned}$$

(note that this number is independent of \(\eta \in {\mathbb {S}}^N\)). By transforming back to \({\mathbb {R}}^N\) and evaluating a Beta function integral, the value of \(\delta _{N,s}\) can be computed explicitly to be

$$\begin{aligned} \delta _{N,s} = 2^{2-2s} \pi ^{N/2} \frac{\Gamma (1-s)}{\Gamma \left( \frac{N}{2} + 1 - s\right) }. \end{aligned}$$

Inserting this into estimate (3.14), as well as the explicit values of \(C_{N,s}\) given in (1.6) and of \(S_{N,s}\) given in (1.9), a direct computation then gives

$$\begin{aligned} \frac{ \sum _{i=1}^{N+1}{\mathcal {I}}(\varphi _i) }{S_{N,s}} < \frac{1}{S_{N,s}} \frac{C_{N,s}}{2} \delta _{N,s} |{\mathbb {S}}^N|^\frac{2s}{N} = \frac{s 2^{2-2s}}{N-2s} \left( 2 \frac{\Gamma (N) \Gamma (\frac{1}{2})}{\Gamma \left( \frac{N+1}{2}\right) \Gamma \left( \frac{N}{2}\right) } \right) ^\frac{2s}{N}. \end{aligned}$$

It can be easily shown by induction over N that

$$\begin{aligned} 2 \frac{\Gamma (N) \Gamma (\frac{1}{2})}{\Gamma \left( \frac{N+1}{2}\right) \Gamma (\frac{N}{2})} = 2^N \end{aligned}$$

for every \(N \in {\mathbb {N}}\), and hence

$$\begin{aligned} \sum _{i=1}^{N+1} {\mathcal {I}}(\varphi _i) < S \frac{4s}{N-2s} = S(p-2). \end{aligned}$$

This is the desired contradiction to (3.11). \(\square \)
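The chain of constants in the final step is also easy to check numerically. The following sketch (an illustration only; it uses scipy and the standard formula \(|{\mathbb {S}}^N| = 2 \pi ^{(N+1)/2}/\Gamma (\tfrac{N+1}{2})\), neither of which is part of the proof) verifies that \(\frac{C_{N,s}}{2} \delta _{N,s} |{\mathbb {S}}^N|^{2s/N} = S_{N,s} (p-2)\) for sample admissible values of N and s.

```python
# Numerical check of (C_{N,s}/2) * delta_{N,s} * |S^N|^{2s/N} = S_{N,s} * (p - 2), p = 2N/(N-2s),
# using (1.6), (1.9) and the expression for delta_{N,s} above.
import math
from scipy.special import gamma

def both_sides(N, s):
    p = 2 * N / (N - 2 * s)
    C = s * 2**(2 * s) * gamma((N + 2 * s) / 2) / (math.pi**(N / 2) * gamma(1 - s))       # (1.6)
    S = (2**(2 * s) * math.pi**s * gamma((N + 2 * s) / 2) / gamma((N - 2 * s) / 2)
         * (gamma(N / 2) / gamma(N))**(2 * s / N))                                         # (1.9)
    delta = 2**(2 - 2 * s) * math.pi**(N / 2) * gamma(1 - s) / gamma(N / 2 + 1 - s)
    sphere = 2 * math.pi**((N + 1) / 2) / gamma((N + 1) / 2)                               # |S^N|
    return (C / 2) * delta * sphere**(2 * s / N), S * (p - 2)

print(both_sides(3, 0.9))   # the two values coincide
print(both_sides(2, 0.7))
```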

Here is the lemma that we referred to in the previous proof. It expands an argument sketched in [20, Step 1]. To emphasize its generality, instead of \(u^p\) we state it for a general nonnegative function h with \(\int _\Omega h = 1\).

Lemma 3.2

Let \(\Omega \subset {\mathbb {R}}^N\) be an open bounded set and \(0 \le h \in L^1(\Omega )\) with \(\Vert h\Vert _1 = 1\). Then there is \((y, t) \in {\mathbb {R}}^N \times (0, \infty )\) such that

$$\begin{aligned} F(y,t)&:= \int _\Omega h(x) \frac{2t(x-y)}{1+ t^2|x-y|^2} \mathop {}\!\textrm{d}x = 0, \\ G(y,t)&:= \int _\Omega h(x) \frac{1 - t^2|x-y|^2}{1 + t^2 |x-y|^2} \mathop {}\!\textrm{d}x = 0. \end{aligned}$$

Proof

Define \(H: {\mathbb {R}}^N \times {\mathbb {R}}\rightarrow {\mathbb {R}}^{N+1}\) by

$$\begin{aligned} H(y,s):= \left( F\left( y, \frac{s + \sqrt{s^2 + 4}}{2} \right) + y, G\left( y, \frac{s + \sqrt{s^2 + 4}}{2} \right) + s \right) . \end{aligned}$$

We claim that

$$\begin{aligned} |H(y,s)|^2 \le |y|^2 + s^2 \end{aligned}$$
(3.15)

whenever \(|y|^2 + s^2\) is large enough. Thus, for large enough radii \(R >0\), the map H sends \(B(0,R) \subset {\mathbb {R}}^{N+1}\) into itself. By the Brouwer fixed point theorem, H has a fixed point (y, s). Then the pair \((y,\frac{s + \sqrt{s^2 + 4}}{2})\) satisfies the property stated in the lemma.

To prove (3.15), it is more natural to set \(t:= \frac{s + \sqrt{s^2 + 4}}{2} > 0\), so that \(s = t - t^{-1}\). By writing out \(|H(y,s)|^2\), (3.15) is equivalent to

$$\begin{aligned} 2 y \cdot F(y,t) + 2 (t - t^{-1}) G(y,t) + |F(y,t)|^2 + |G(y,t)|^2 \le 0 \end{aligned}$$
(3.16)

whenever \(|y|^2 + (t - t^{-1})^2\) is large enough.
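To see this equivalence, note that \(H(y,s) = (F(y,t) + y, \, G(y,t) + s)\) with \(s = t - t^{-1}\) (indeed, \(t^{-1} = \tfrac{\sqrt{s^2+4} - s}{2}\), so \(t - t^{-1} = s\)), and therefore

$$\begin{aligned} |H(y,s)|^2 = |y|^2 + s^2 + 2 y \cdot F(y,t) + 2 (t - t^{-1}) G(y,t) + |F(y,t)|^2 + |G(y,t)|^2, \end{aligned}$$

so that (3.15) holds at (y, s) if and only if the left side of (3.16) is nonpositive.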

First, it is easy to see that \(y \cdot F(y,t)\), \(F(y,t)\) and \(G(y,t)\) are bounded in absolute value uniformly in \((y,t) \in {\mathbb {R}}^N \times (0,\infty )\). Moreover, there is \(C > 0\) such that

$$\begin{aligned} G(y,t) = 1 - 2 \int _\Omega h(x) \frac{t^2|x-y|^2}{1 + t^2|x-y|^2} \mathop {}\!\textrm{d}x {\left\{ \begin{array}{ll} \ge \frac{1}{2} &{} \quad \text {if } 0 < t \le 1/C, \\ \le - \frac{1}{2} &{} \quad \text {if } t \ge C. \end{array}\right. } \end{aligned}$$

Therefore, \((t - t^{-1}) G(y,t) \rightarrow -\infty \) as \(t \rightarrow 0\) or \(t \rightarrow \infty \). Thus (3.16) holds whenever \((t - t^{-1})^2\) is large enough.

Thus, in what follows, we assume that \(t \in [1/C, C]\) and prove that (3.16) holds if |y| is large enough. For convenience, fix some sequence \((y, t)\) with \(|y| \rightarrow \infty \) and \(t \rightarrow t_0 \in (0,\infty )\). Then \(|F(y,t)| \rightarrow 0\) and \(G(y,t) \rightarrow -1\). Moreover, since \(\Omega \) is bounded, \(\frac{|x-y|}{|y|} \rightarrow 1\) uniformly in \(x \in \Omega \) and hence

$$\begin{aligned} 2 y \cdot F(y,t) = - \int _\Omega h(x) \frac{4t|y|^2}{1 + t^2|x-y|^2} + {\mathcal {O}} \left( \int _\Omega h(x) \frac{t|y|}{1 + t^2 |x-y|^2} \mathop {}\!\textrm{d}x \right) \rightarrow - \frac{4}{t_0}. \end{aligned}$$

Altogether, the quantity on the left side of (3.16) thus tends to \(-2 t_0 - 2 t_0^{-1} + 1 \le -3 < 0\) (note that \(t_0 + t_0^{-1} \ge 2\)), which concludes the proof of (3.16). \(\square \)

3.2 Profile decomposition

The following proposition gives an asymptotic decomposition of a general sequence of normalized (almost) minimizers of \(S(a+\varepsilon V)\).

Proposition 3.3

(Profile decomposition) Let \(a \in C({\overline{\Omega }})\) be critical and let \(V \in C({\overline{\Omega }})\) be such that \({\mathcal {N}}_a(V) \ne \emptyset \). Suppose that \((u_\varepsilon ) \subset {{\widetilde{H}}^s(\Omega )}\) is a sequence such that

$$\begin{aligned} \lim _{\varepsilon \rightarrow 0} \frac{{\mathcal {S}}_{a+\varepsilon V}[u_\varepsilon ] - S(a+\varepsilon V)}{S - S(a+\varepsilon V)} = 0 \quad \text { and } \quad \int _\Omega u_\varepsilon ^p \mathop {}\!\textrm{d}y = \int _{{\mathbb {R}}^N} U_{0,1}^p \mathop {}\!\textrm{d}y, \end{aligned}$$
(3.17)

where \(U_{0,1}\) is given by (1.13). Then, there are sequences \((x_\varepsilon ) \subset \Omega \), \((\lambda _\varepsilon ) \subset (0, \infty )\), \((w_\varepsilon ) \subset T_{x_\varepsilon , \lambda _\varepsilon }^\bot \), and \((\alpha _\varepsilon ) \subset {\mathbb {R}}\) such that, up to extraction of a subsequence,

$$\begin{aligned} u_\varepsilon = \alpha _\varepsilon \left( PU_{x_\varepsilon , \lambda _\varepsilon } + w_\varepsilon \right) . \end{aligned}$$
(3.18)

Moreover, as \(\varepsilon \rightarrow 0\), we have

$$\begin{aligned} \Vert (-\Delta )^{s/2} w_\varepsilon \Vert _2&\rightarrow 0, \\ d(x_\varepsilon ) \lambda _\varepsilon&\rightarrow \infty , \\ x_\varepsilon&\rightarrow x_0, \\ \alpha _\varepsilon&\rightarrow \pm 1. \end{aligned}$$

In what follows, we shall always work with a sequence \(u_\varepsilon \) that satisfies the assumptions of Proposition 3.3. For readability, we shall often drop the index \(\varepsilon \) from \(\alpha _\varepsilon \), \(x_\varepsilon \), \(\lambda _\varepsilon \) and \(w_\varepsilon \), and write \(d:= d_\varepsilon := d(x_\varepsilon )\). Moreover, we adopt the convention that we always assume the strict inequality

$$\begin{aligned} S(a + \varepsilon V) < S. \end{aligned}$$
(3.19)

In Theorems 1.4 and 1.6, we assume \({\mathcal {N}}_a(V) \ne \emptyset \), so assumption (3.19) is certainly justified in view of Corollary 2.3. For Theorem 1.5, where we assume \({\mathcal {N}}_a(V) = \emptyset \), we discuss the role of assumption (3.19) in the proof of that theorem in Sect. 6.

Proof

Step 1. We derive a preliminary decomposition in terms of the Sobolev optimizers \(U_{z, \lambda }\) and without orthogonality condition on the remainder, see (3.20) below.

The assumptions imply that the sequence \((u_\varepsilon )\) is bounded in \({{\widetilde{H}}^s(\Omega )}\), hence, up to a subsequence, we may assume \(u_\varepsilon \rightharpoonup u_0\) for some \(u_0 \in {{\widetilde{H}}^s(\Omega )}\). By the argument given in [25, Proof of Proposition 3.1, Step 1], the fact that \({\mathcal {S}}_{a + \varepsilon V}[u_\varepsilon ] \rightarrow S(a) = S\) implies that \(u_0\) is a minimizer for S(a), unless \(u_0 \equiv 0\). Since such a minimizer does not exist by Proposition 3.1, we conclude that, in fact, \(u_\varepsilon \rightharpoonup 0\) in \({{\widetilde{H}}^s(\Omega )}\).

By Rellich’s theorem, \(u_\varepsilon \rightarrow 0\) strongly in \(L^2(\Omega )\), in particular \(\int _\Omega (a + \varepsilon V) u_\varepsilon ^2 = o(1)\). The assumption (1.17) thus implies that \((u_\varepsilon )\) is a minimizing sequence for the Sobolev quotient \(\int _{{\mathbb {R}}^N} |{(-\Delta )^{s/2}}u|^2 \mathop {}\!\textrm{d}y / \Vert u\Vert ^2_\frac{2N}{N-2\,s}\). Therefore, the assumptions of [35, Theorem 1.3] are satisfied, and we may conclude by that theorem that there are sequences \((z_\varepsilon ) \subset {\mathbb {R}}^N\), \((\mu _\varepsilon ) \subset (0, \infty )\), \((\sigma _\varepsilon )\) such that

$$\begin{aligned} \mu _\varepsilon ^{-\frac{N-2s}{2}} u_\varepsilon (z_\varepsilon + \mu _\varepsilon ^{-1} \cdot ) \rightarrow \beta U_{0,1} \end{aligned}$$

in \({\dot{H}}^s({\mathbb {R}}^N)\), for some \(\beta \in {\mathbb {R}}\). By the normalization condition from (1.17), we have \(\beta \in \{ \pm 1\}\). Now, a change of variables \(y = z_\varepsilon + \mu _\varepsilon ^{-1} x\) implies

$$\begin{aligned} u_\varepsilon (y) = U_{z_\varepsilon , \mu _\varepsilon }(y) + \sigma _\varepsilon , \end{aligned}$$
(3.20)

where still \(\sigma _\varepsilon \rightarrow 0\) in \({\dot{H}}^s({\mathbb {R}}^N)\), since the \({\dot{H}}^s({\mathbb {R}}^N)\)-norm is invariant under this change of variable.

Moreover, since \(\Omega \) is smooth, the fact that

$$\begin{aligned} \int _{\mu _\varepsilon (\Omega - z_\varepsilon )} U_{0,1}^p \mathop {}\!\textrm{d}y = \int _\Omega U_{z_\varepsilon , \mu _\varepsilon }^p \mathop {}\!\textrm{d}y = \int _{{\mathbb {R}}^N} U_{0,1}^p \mathop {}\!\textrm{d}y + o(1) \end{aligned}$$

implies \(\mu _\varepsilon \text {dist}(z_\varepsilon , {\mathbb {R}}^N {\setminus } \Omega ) \rightarrow \infty \).
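The first equality above is simply a change of variables: assuming that the bubbles from (1.13) are normalized so that \(U_{z, \mu }(y) = \mu ^{\frac{N-2s}{2}} U_{0,1}(\mu (y - z))\), the substitution \(x = \mu _\varepsilon (y - z_\varepsilon )\) gives

$$\begin{aligned} \int _\Omega U_{z_\varepsilon , \mu _\varepsilon }(y)^p \mathop {}\!\textrm{d}y = \mu _\varepsilon ^{N} \int _\Omega U_{0,1}(\mu _\varepsilon (y - z_\varepsilon ))^p \mathop {}\!\textrm{d}y = \int _{\mu _\varepsilon (\Omega - z_\varepsilon )} U_{0,1}(x)^p \mathop {}\!\textrm{d}x, \end{aligned}$$

since \(\frac{p(N-2s)}{2} = N\).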

Step 2. We make the necessary modifications to derive (3.18) from (3.20). The crucial argument is furnished by [2, Proposition 4.3], which generalizes the corresponding statement by Bahri and Coron [3, Proposition 7] to fractional \(s \in (0,1)\). It states the following. Suppose that \(u \in {{\widetilde{H}}^s(\Omega )}\) with \(\Vert u\Vert _{{\widetilde{H}}^s(\Omega )}= A_{N,s}\) satisfies

$$\begin{aligned} \inf \left\{ \Vert {(-\Delta )^{s/2}}( u - PU_{x, \lambda })\Vert _2 : \, x \in \Omega , \, \lambda d(x) > \eta ^{-1} \right\} < \eta \end{aligned}$$
(3.21)

for some \(\eta > 0\). Then, if \(\eta \) is small enough, the minimization problem

$$\begin{aligned} \inf \left\{ \Vert {(-\Delta )^{s/2}}( u - \alpha PU_{x, \lambda }) \Vert _2 : \, x \in \Omega , \, \lambda d(x) > (4\eta )^{-1}, \, \alpha \in (1/2, 2) \right\} \end{aligned}$$
(3.22)

has a unique solution.

By the decomposition from Step 1 and Lemma A.2, we have

$$\begin{aligned} \Vert {(-\Delta )^{s/2}}(u_\varepsilon - PU_{z_\varepsilon , \mu _\varepsilon })\Vert _2 \le \Vert {(-\Delta )^{s/2}}(U_{z_\varepsilon , \mu _\varepsilon } - PU_{z_\varepsilon , \mu _\varepsilon }) \Vert _2 + \Vert {(-\Delta )^{s/2}}\sigma _\varepsilon \Vert _2 \rightarrow 0 \end{aligned}$$

as \(\varepsilon \rightarrow 0\), so that (3.21) is satisfied by \(u_\varepsilon \) for all \(\varepsilon \) small enough, with a constant \(\eta _\varepsilon \) tending to zero. We thus obtain the desired decomposition

$$\begin{aligned} u_\varepsilon = \alpha _\varepsilon (PU_{x_\varepsilon , \lambda _\varepsilon } + w_\varepsilon ) \end{aligned}$$

by taking \((x_\varepsilon , \lambda _\varepsilon , \alpha _\varepsilon )\) to be the solution to (3.22) and \(w_\varepsilon := \alpha _\varepsilon ^{-1} u_\varepsilon - PU_{x_\varepsilon , \lambda _\varepsilon }\). To verify the claimed asymptotic behavior of the parameters, note that since \(\eta _\varepsilon \rightarrow 0\), by definition of the minimization problem (3.22), we have \(\Vert {(-\Delta )^{s/2}}w_\varepsilon \Vert _2 < \eta _\varepsilon \rightarrow 0\) and \(\lambda _\varepsilon d(x_\varepsilon ) > (4 \eta _\varepsilon )^{-1} \rightarrow \infty \). Since \(\Omega \) is bounded, the convergence \(x_\varepsilon \rightarrow x_0 \in {\overline{\Omega }}\) is ensured by passing to a suitable subsequence. Finally, using (1.17), we have

$$\begin{aligned} \int _{{\mathbb {R}}^N} U_{0,1}^p \mathop {}\!\textrm{d}y = \int _\Omega u_\varepsilon ^p \mathop {}\!\textrm{d}y = |\alpha _\varepsilon |^p \int _\Omega PU_{x_\varepsilon , \lambda _\varepsilon }^p \mathop {}\!\textrm{d}y + o(1) = |\alpha _\varepsilon |^p \int _{{\mathbb {R}}^N} U_{0,1}^p \mathop {}\!\textrm{d}y + o(1), \end{aligned}$$

which implies \(\alpha _\varepsilon = \pm 1 + o(1)\). \(\square \)

3.3 Coercivity

In the following sections, our goal is to improve the bounds from Proposition 3.3 step by step.

The following inequality, and its improvement in Proposition 3.5 below, will be central. For \(s=1\), these inequalities are due to Rey [38, Eq. (D.1)] and Esposito [22, Lemma 2.1], respectively, whose proofs inspired those given below.

Proposition 3.4

(Coercivity inequality) For all \(x \in {\mathbb {R}}^N\) and \(\lambda > 0\), we have

$$\begin{aligned} \Vert (-\Delta )^{s/2} v\Vert _2^2 - c_{N,s} (p-1) \int _\Omega U_{x, \lambda }^{p-2} v^2 \mathop {}\!\textrm{d}y \ge \frac{4s}{N+2s+2} \Vert (-\Delta )^{s/2} v\Vert _2^2 \end{aligned}$$
(3.23)

for all \(v \in T_{x, \lambda }^\bot \).

As a corollary, we can include the lower order term \(\int _\Omega a v^2\), at least when \(d(x) \lambda \) is large enough and at the price of having a non-explicit constant on the right side. This is the form of the inequality which we shall use below to refine our error bounds in Sects. 3.4 and 5.2.

Proposition 3.5

(Coercivity inequality with potential a) Let \((x_n) \subset \Omega \) and \((\lambda _n) \subset (0, \infty )\) be sequences such that \({{\,\textrm{dist}\,}}(x_n, \partial \Omega ) \lambda _n \rightarrow \infty \). Then there is \(\rho > 0\) such that for all n large enough,

$$\begin{aligned}{} & {} \Vert (-\Delta )^{s/2} v\Vert _2^2 + \int _\Omega a v^2 \mathop {}\!\textrm{d}y - c_{N,s} (p-1) \int _\Omega U_{x_n, \lambda _n} ^{p-2} v^2 \mathop {}\!\textrm{d}y \ge \rho \Vert (-\Delta )^{s/2} v\Vert _2^2,\nonumber \\{} & {} \quad \text { for all } v \in T_{x_n, \lambda _n}^\bot . \end{aligned}$$
(3.24)

Proof

Abbreviate \(U_n:= U_{x_n, \lambda _n}\) and \(T_n:= T_{x_n, \lambda _n}\). We follow the proof of [22] and define

$$\begin{aligned} C_n:= \inf \left\{ 1 + \int _\Omega a v^2 \mathop {}\!\textrm{d}y - c_{N,s} (p-1) \int _\Omega U_n ^{p-2} v^2 \mathop {}\!\textrm{d}y : \, v \in T_n^\bot ,\, \Vert {(-\Delta )^{s/2}}v\Vert = 1 \right\} . \end{aligned}$$

Then \(C_n\) is bounded from below, uniformly in n. We first claim that \(C_n\) is achieved whenever \(C_n <1\). Indeed, fix n and let \(v_k\) be a minimizing sequence. Up to a subsequence, \(v_k \rightharpoonup v_\infty \) in \({{\widetilde{H}}^s(\Omega )}\) and consequently \(\Vert {(-\Delta )^{s/2}}v_\infty \Vert \le 1\) and \(\int _\Omega a v_k^2 - c_{N,s} (p-1) \int _\Omega U_n^{p-2} v_k^2 \mathop {}\!\textrm{d}y \rightarrow \int _\Omega a v_\infty ^2 - c_{N,s} (p-1) \int _\Omega U_n^{p-2} v_\infty ^2 \mathop {}\!\textrm{d}y\), by compact embedding \({{\widetilde{H}}^s(\Omega )}\hookrightarrow L^2(\Omega )\). Thus

$$\begin{aligned}&(1- C_n) \Vert {(-\Delta )^{s/2}}v_\infty \Vert ^2 + \int _\Omega a v_\infty ^2 \mathop {}\!\textrm{d}y - c_{N,s} (p-1) \int _\Omega U_n ^{p-2} v_\infty ^2 \mathop {}\!\textrm{d}y \\&\quad \le (1 - C_n) + \int _\Omega a v_\infty ^2 \mathop {}\!\textrm{d}y - c_{N,s} (p-1) \int _\Omega U_n ^{p-2} v_\infty ^2 \mathop {}\!\textrm{d}y = 0. \end{aligned}$$

On the other hand, the left hand side of the above inequality must itself be non-negative, for otherwise \({\tilde{v}}:= v_\infty /\Vert {(-\Delta )^{s/2}}v_\infty \Vert \) (notice that \(C_n < 1\) enforces \(v_\infty \not \equiv 0\)) yields a contradiction to the definition of \(C_n\) as an infimum. Thus the above inequality must be in fact an equality, whence \(\Vert {(-\Delta )^{s/2}}v_\infty \Vert = 1\). We have thus proved that \(C_n\) is achieved if \(C_n < 1\).

Now, assume for contradiction, up to passing to a subsequence, that \(\lim _{n \rightarrow \infty } C_n =: L \le 0\). By the first part of the proof, let \(v_n\) be a minimizer satisfying

$$\begin{aligned} (1- C_n) \int _{{\mathbb {R}}^N} {(-\Delta )^{s/2}}v_n {(-\Delta )^{s/2}}w \mathop {}\!\textrm{d}y + \int _\Omega a v_n w \mathop {}\!\textrm{d}y - c_{N,s} (p-1) \int _\Omega U_n^{p-2} v_n w \mathop {}\!\textrm{d}y = 0\qquad \end{aligned}$$
(3.25)

for all \(w \in T_n^\bot \). Up to passing to a subsequence, we may assume \(v_n \rightharpoonup v \in {{\widetilde{H}}^s(\Omega )}\). We claim that

$$\begin{aligned} (1-L) {(-\Delta )^{s}}v + a v = 0 \qquad \text { in } ({{\widetilde{H}}^s(\Omega )})'. \end{aligned}$$
(3.26)

Assuming (3.26) for the moment, we obtain a contradiction as follows. Testing (3.26) against \(v \in {{\widetilde{H}}^s(\Omega )}\) gives

$$\begin{aligned} \Vert {(-\Delta )^{s/2}}v\Vert ^2 + \int _\Omega a v^2 \mathop {}\!\textrm{d}y = L \Vert {(-\Delta )^{s/2}}v\Vert ^2 \le 0. \end{aligned}$$

On the other hand, by coercivity of \({(-\Delta )^{s}}+ a\), the left hand side must be nonnegative and hence \(v \equiv 0\). By compact embedding, we deduce \(v_n \rightarrow 0\) strongly in \(L^2(\Omega )\) and thus

$$\begin{aligned} C_n = 1 - c_{N,s} (p-1) \int _\Omega U_n^{p-2} v_n^2 \mathop {}\!\textrm{d}y + o(1) \ge \frac{4s}{N+2s+2} + o(1). \end{aligned}$$

Here, the last inequality follows from Proposition 3.4, applied to \(v_n \in T_n^\bot \) with \(\Vert {(-\Delta )^{s/2}}v_n\Vert = 1\). This is the desired contradiction to \(\lim _{n \rightarrow \infty } C_n \le 0\).

At last, we prove (3.26). Let \(\varphi \in {{\widetilde{H}}^s(\Omega )}\) be given and write \(\varphi = u_n + w_n\), with \(u_n \in T_n\) and \(w_n \in T_n^\bot \). By (3.25) and using \( \int _{{\mathbb {R}}^N} {(-\Delta )^{s/2}}v_n {(-\Delta )^{s/2}}u_n = 0\), we have

$$\begin{aligned}&(1- C_n) \int _{{\mathbb {R}}^N} {(-\Delta )^{s/2}}v_n {(-\Delta )^{s/2}}\varphi \mathop {}\!\textrm{d}y + \int _\Omega a v_n \varphi \mathop {}\!\textrm{d}y - c_{N,s} (p-1) \int _\Omega U_n^{p-2} v_n \varphi \mathop {}\!\textrm{d}y \end{aligned}$$
(3.27)
$$\begin{aligned}&\quad = \int _\Omega a v_n u_n \mathop {}\!\textrm{d}y - c_{N,s} (p-1) \int _\Omega U_n^{p-2} v_n u_n \mathop {}\!\textrm{d}y = {\mathcal {O}}(\Vert {(-\Delta )^{s/2}}u_n\Vert ). \end{aligned}$$
(3.28)

On the one hand,

$$\begin{aligned} \left| \int _\Omega U_n^{p-2} v_n \varphi \mathop {}\!\textrm{d}y \right| \le \Vert v_n\Vert _p \Vert U_n^{p-2} \varphi \Vert _\frac{p}{p-1} \rightarrow 0 \end{aligned}$$

because \(\Vert v_n\Vert _p \lesssim \Vert {(-\Delta )^{s/2}}v_n\Vert = 1\) by Sobolev embedding, \(\varphi ^\frac{p}{p-1} \in L^{p-1} = (L^\frac{p-1}{p-2}(\Omega ))'\) and \(U_n^\frac{(p-2)p}{p-1} \rightharpoonup 0\) weakly in \(L^\frac{p-1}{p-2}(\Omega )\). Thus, by weak convergence, the expression in (3.27) tends to

$$\begin{aligned} (1- L) \int _{{\mathbb {R}}^N} {(-\Delta )^{s/2}}v {(-\Delta )^{s/2}}\varphi \mathop {}\!\textrm{d}y + \int _\Omega a v \varphi \mathop {}\!\textrm{d}y \end{aligned}$$

as desired. In view of (3.28), the proof of (3.26) is thus complete if we can show \(\Vert {(-\Delta )^{s/2}}u_n \Vert \rightarrow 0\). This is again a consequence of weak convergence to zero of the \(U_n\). Indeed, by Lemmas B.1 and B.2, we have

$$\begin{aligned} \left| \int _{{\mathbb {R}}^N} {(-\Delta )^{s/2}}\frac{PU_n}{\Vert {(-\Delta )^{s/2}}PU_n\Vert } {(-\Delta )^{s/2}}\varphi \mathop {}\!\textrm{d}y \right|&\lesssim \int _{{\mathbb {R}}^N} U_n^{p-1} |\varphi | \mathop {}\!\textrm{d}y = o(1) ,\\ \left| \int _{{\mathbb {R}}^N} {(-\Delta )^{s/2}}\frac{\partial _\lambda PU_n}{\Vert {(-\Delta )^{s/2}}\partial _{\lambda }PU_n\Vert } {(-\Delta )^{s/2}}\varphi \mathop {}\!\textrm{d}y \right|&\lesssim \lambda \int _{{\mathbb {R}}^N} U_n^{p-2} \partial _\lambda U_n \varphi \mathop {}\!\textrm{d}y \lesssim \Vert U_n^{p-2}\varphi \Vert _\frac{p}{p-1} = o(1), \end{aligned}$$

and, similarly,

$$\begin{aligned} \int _{{\mathbb {R}}^N} {(-\Delta )^{s/2}}(\lambda ^{-N+2s} \partial _{x_i} PU_n) {(-\Delta )^{s/2}}\varphi \mathop {}\!\textrm{d}y = o(1). \end{aligned}$$

Here, we used again that \(U_n^{p-1} \rightharpoonup 0\) in \(L^\frac{p}{p-1}\) and \(U_n^\frac{(p-2)p}{p-1} \rightharpoonup 0\) in \(L^\frac{p-1}{p-2}\) weakly, and \(\int _\Omega U^\frac{(p-2)p}{p-1} \varphi ^\frac{p}{p-1} \mathop {}\!\textrm{d}y = o(1)\) by weak convergence.

From the convergence to zero of these scalar products, one can conclude \(u_n \rightarrow 0\) by using the fact that the \(PU_n\), \(\partial _{\lambda }PU_n\), \(\partial _{x_i}PU_n\) are ‘asymptotically orthogonal’ by the bounds of Lemma B.2. For a detailed argument, we refer to Lemma 5.2 below, see also [25, Lemma 6.1]. \(\square \)

Let us now prepare the proof of Proposition 3.4. We recall that \({{\mathcal {S}}}: {\mathbb {R}}^N \rightarrow {{\mathbb {S}}^N}{\setminus } \{-e_{N+1}\}\) (where \(-e_{N+1}= (0,\ldots ,0,-1) \in {\mathbb {R}}^{N+1}\) is the south pole) denotes the inverse stereographic projection defined in (3.9), with Jacobian \(J_{{\mathcal {S}}}(x):= \det D {{\mathcal {S}}}(x) = \left( \frac{2}{1+|x|^2}\right) ^N\).

Given a function v on \({\mathbb {R}}^N\), we may define a function u on \({{\mathbb {S}}^N}\) by setting

$$\begin{aligned} u(\omega ):= v_{{{\mathcal {S}}}^{-1}}(\omega ):= v({{\mathcal {S}}}^{-1}(\omega )) J_{{{\mathcal {S}}}^{-1}}(\omega )^\frac{N-2s}{2N}, \qquad \omega \in {{\mathbb {S}}^N}\setminus \{-e_{N+1}\}. \end{aligned}$$

The inverse of this map is of course given by

$$\begin{aligned} v(y):= u_{{\mathcal {S}}}(y):= u({{\mathcal {S}}}(y)) J_{{\mathcal {S}}}(y)^\frac{N-2s}{2N}, \qquad y \in {\mathbb {R}}^N. \end{aligned}$$

The exponent in the determinant factor is chosen such that \(\Vert v\Vert _{L^p({\mathbb {R}}^N)} = \Vert u\Vert _{L^p({{\mathbb {S}}^N})}\).
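Indeed, since \(p = \frac{2N}{N-2s}\), the exponent satisfies \(\frac{p(N-2s)}{2N} = 1\), so that by the change of variables formula

$$\begin{aligned} \int _{{\mathbb {R}}^N} |v(y)|^p \mathop {}\!\textrm{d}y = \int _{{\mathbb {R}}^N} |u({{\mathcal {S}}}(y))|^p J_{{\mathcal {S}}}(y) \mathop {}\!\textrm{d}y = \int _{{{\mathbb {S}}^N}} |u(\omega )|^p \mathop {}\!\textrm{d}\sigma (\omega ). \end{aligned}$$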

For a basis \((Y_{l,m})\) of \(L^2({{\mathbb {S}}^N})\) consisting of \(L^2\)-normalized spherical harmonics, write \(u \in L^2({{\mathbb {S}}^N})\) as \(u = \sum _{l,m} u_{l,m} Y_{l,m}\) with coefficients \(u_{l,m} \in {\mathbb {R}}\). With

$$\begin{aligned} \lambda _l = \frac{\Gamma \left( l + \frac{N}{2}+ s\right) }{\Gamma \left( l + \frac{N}{2}- s\right) }, \end{aligned}$$
(3.29)

the Paneitz operator \({\mathcal {P}}_{2s}\) is defined by

$$\begin{aligned} {{\mathcal {P}}_{2s}}u:= \sum _{l,m} \lambda _l u_{l,m} Y_{l,m} \end{aligned}$$

for every \(u \in L^2({{\mathbb {S}}^N})\) such that \(\sum _{l,m} \lambda _l u_{l,m}^2 < \infty \).

It is well-known (see [6]) that, for every \(v \in C^\infty _0({\mathbb {R}}^N)\), we have,

$$\begin{aligned} {(-\Delta )^{s}}v(x) = J_{{\mathcal {S}}}(x)^\frac{N+2s}{2N} {{\mathcal {P}}_{2s}}u ({{\mathcal {S}}}(x)), \end{aligned}$$
(3.30)

where \(u = v_{{\mathcal {S}}}\). Thus, we have

$$\begin{aligned} \int _{{\mathbb {R}}^N} |{(-\Delta )^{s/2}}v|^2 \mathop {}\!\textrm{d}y= \int _{{\mathbb {R}}^N} v {(-\Delta )^{s}}v \mathop {}\!\textrm{d}y = \int _{{{\mathbb {S}}^N}} u {{\mathcal {P}}_{2s}}u \mathop {}\!\textrm{d}y = \sum _{l,m} \lambda _l u_{l,m}^2. \end{aligned}$$

Since \(C^\infty _0({\mathbb {R}}^N)\) is dense in the space \({\mathcal {D}}^{s,2}({\mathbb {R}}^N):= \{ v \in L^\frac{2N}{N-2\,s}({\mathbb {R}}^N) \,: \, {(-\Delta )^{s/2}}v \in L^2({\mathbb {R}}^N) \}\) (see, e.g., [8]), the equality

$$\begin{aligned} \int _{{\mathbb {R}}^N} |{(-\Delta )^{s/2}}v|^2 \mathop {}\!\textrm{d}y= \sum _{l,m} \lambda _l u_{l,m}^2 \end{aligned}$$
(3.31)

extends to all \(v \in {\mathcal {D}}^{s,2}({\mathbb {R}}^N)\). In particular, it holds for \(v \in {{\widetilde{H}}^s(\Omega )}\).

Proof of Proposition 3.4

We first prove (3.23) for \((x,\lambda ) = (0,1)\). Let \(v \in T_{0,1}^\perp \) and denote \(u = v_{{\mathcal {S}}}\). We claim that the orthogonality conditions on v imply that \(u_{l,m} = 0\) for \(l = 0,1\). Indeed, e.g. from \(v \perp \partial _{\lambda }PU_{0,1}\) and recalling \(J_{{\mathcal {S}}}(y) = (\frac{2}{1+|y|^2})^N = 2^N U_{0,1}^\frac{2N}{N-2s}\), we compute

$$\begin{aligned} 0&= \int _{{\mathbb {R}}^N} {(-\Delta )^{s/2}}v {(-\Delta )^{s/2}}\partial _{\lambda }PU_{0,1} \mathop {}\!\textrm{d}y = c_{N,s} \frac{N+2s}{N-2s} \int _{{\mathbb {R}}^N} v U_{0,1}^\frac{4s}{N-2s} \partial _{\lambda }U_{0,1} \mathop {}\!\textrm{d}y \\&= c_{N,s} \frac{N+2s}{N-2s} 2^{-\frac{N+2s}{2}} \int _{{\mathbb {R}}^N} v(y) J_{{\mathcal {S}}}(y)^\frac{N+2s}{2N} \frac{1-|y|^2}{1+|y|^2} \mathop {}\!\textrm{d}y \\&= c_{N,s} \frac{N+2s}{N-2s} 2^{-\frac{N+2s}{2}} \int _{{\mathbb {S}}^N}u(\omega ) \omega _{N+1} \mathop {}\!\textrm{d}\sigma (\omega ). \end{aligned}$$

Analogous calculations show that \(v \perp PU_{0,1}\) implies \(\int _{{\mathbb {S}}^N}u = 0\) and that \(v \perp \partial _{x_i}PU_{0,1}\) implies \(\int _{{\mathbb {S}}^N}u \omega _i = 0\) for \(i = 1,\ldots ,N\). Since the functions 1 and \(\omega _i\) (\(i = 1,\ldots ,N+1\)) form a basis of the space of spherical harmonics of angular momenta \(l = 0\) and \(l=1\) respectively, we have proved our claim.

Since the eigenvalues \(\lambda _l\) of \({{\mathcal {P}}_{2s}}\) are increasing in l, changing back variables to \({\mathbb {R}}^N\), we deduce from (3.31) that

$$\begin{aligned} \int _{{\mathbb {R}}^N} |{(-\Delta )^{s/2}}v|^2 \mathop {}\!\textrm{d}y = \sum _{l,m} \lambda _l u_{l,m}^2 \ge \lambda _2 \int _{{\mathbb {S}}^N}u(\omega )^2 \mathop {}\!\textrm{d}\sigma (\omega ) = 2^{2s} \lambda _2 \int _{{\mathbb {R}}^N} v^2(y) U_{0,1}^{p-2}(y) \mathop {}\!\textrm{d}y. \end{aligned}$$

By an explicit computation using the numerical values of \(\lambda _2\) given by (3.29) and \(c_{N,s}\) given in Lemma B.5, this is equivalent to

$$\begin{aligned} \Vert (-\Delta )^{s/2} v\Vert _2^2 - c_{N,s} (p-1) \int _\Omega U_{0,1}^{p-2} v^2 \mathop {}\!\textrm{d}y \ge \frac{4s}{N + 2s +2} \Vert (-\Delta )^{s/2} v\Vert _2^2, \end{aligned}$$
(3.32)

which is the desired inequality.
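For the reader's convenience, we spell out the last step. From the bound \(\int _{{\mathbb {R}}^N} |{(-\Delta )^{s/2}}v|^2 \mathop {}\!\textrm{d}y \ge 2^{2s} \lambda _2 \int _{{\mathbb {R}}^N} v^2 U_{0,1}^{p-2} \mathop {}\!\textrm{d}y\), inequality (3.32) follows once one checks that \(c_{N,s} (p-1) 2^{-2s} \lambda _2^{-1} = 1 - \frac{4s}{N+2s+2}\). Assuming the value \(c_{N,s} = 2^{2s} \frac{\Gamma (\frac{N}{2}+s)}{\Gamma (\frac{N}{2}-s)}\) (which follows from (3.30) applied to constant functions on \({{\mathbb {S}}^N}\) with the normalization \(J_{{\mathcal {S}}} = 2^N U_{0,1}^{\frac{2N}{N-2s}}\) used above, and which we take to coincide with the constant of Lemma B.5), together with \(p-1 = \frac{N+2s}{N-2s}\) and (3.29), one computes

$$\begin{aligned} \frac{c_{N,s}(p-1)}{2^{2s} \lambda _2} = \frac{N+2s}{N-2s} \cdot \frac{\Gamma \left( \frac{N}{2}+s\right) \Gamma \left( \frac{N}{2}+2-s\right) }{\Gamma \left( \frac{N}{2}-s\right) \Gamma \left( \frac{N}{2}+2+s\right) } = \frac{N+2s}{N-2s} \cdot \frac{\left( \frac{N}{2}-s\right) \left( \frac{N}{2}+1-s\right) }{\left( \frac{N}{2}+s\right) \left( \frac{N}{2}+1+s\right) } = \frac{N+2-2s}{N+2+2s} = 1 - \frac{4s}{N+2s+2}. \end{aligned}$$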

The case of general \((x, \lambda ) \in \Omega \times (0, \infty )\) can be deduced from this by scaling. Indeed, for \(v \in T_{x, \lambda }^\bot \), set \(v_{x, \lambda }(y):= v(x - \lambda ^{-1}y)\). Then \(v_{x, \lambda }\in T_{0,1}^\bot \) with respect to the set \(\lambda (x - \Omega )\), so that by the above \(v_{x, \lambda }\) satisfies

$$\begin{aligned} \Vert (-\Delta )^{s/2} v_{x, \lambda }\Vert _2^2 - c_{N,s} (p-1) \int _{\lambda (x -\Omega )} U_{0,1} ^{p-2} v_{x, \lambda }^2 \mathop {}\!\textrm{d}y \ge \frac{4s}{N + 2s +2} \Vert (-\Delta )^{s/2} v_{x, \lambda }\Vert _2^2. \end{aligned}$$
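In more detail, assuming the scaling relation \(U_{x, \lambda }(z) = \lambda ^{\frac{N-2s}{2}} U_{0,1}(\lambda (z - x))\) (consistent with (1.13)), the substitution \(z = x - \lambda ^{-1} y\) shows that both sides of the inequality transform by the same factor:

$$\begin{aligned} \Vert (-\Delta )^{s/2} v_{x, \lambda }\Vert _2^2 = \lambda ^{N-2s} \Vert (-\Delta )^{s/2} v\Vert _2^2, \qquad \int _{\lambda (x -\Omega )} U_{0,1}^{p-2} v_{x, \lambda }^2 \mathop {}\!\textrm{d}y = \lambda ^{N-2s} \int _\Omega U_{x, \lambda }^{p-2} v^2 \mathop {}\!\textrm{d}y. \end{aligned}$$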

Changing back variables now yields (3.23) for general \((x, \lambda )\). \(\square \)

3.4 Improved a priori bounds

The main result of this section is the following proposition, which improves Proposition 3.3. It states that the concentration point \(x_0\) does not lie on the boundary of \(\Omega \) and gives an optimal quantitative bound on w.

Proposition 3.6

As \(\varepsilon \rightarrow 0\),

$$\begin{aligned} \Vert {(-\Delta )^{s/2}}w \Vert = {\mathcal {O}}\left( \lambda ^{-\frac{N-2s}{2}}\right) \end{aligned}$$
(3.33)

and

$$\begin{aligned} d^{-1} = {\mathcal {O}}(1). \end{aligned}$$
(3.34)

In particular, \(x_0 \in \Omega \).

The proposition will readily follow from the following expansion of \({\mathcal {S}}_{a + \varepsilon V}[u_\varepsilon ]\) with respect to the decomposition \(u_\varepsilon = \alpha (PU_{x, \lambda }+ w)\) obtained in the previous section.

Lemma 3.7

As \(\varepsilon \rightarrow 0\), we have

$$\begin{aligned} {\mathcal {S}}_{a + \varepsilon V}[u_\varepsilon ]&= S + 2^{2s} \pi ^{N/2} \frac{\Gamma (s)}{\Gamma \left( \frac{N-2s}{2}\right) } \left( \frac{S}{c_{N,s}} \right) ^{- \frac{N-2s}{2s}} \phi _0(x) \lambda ^{-N + 2s} \\&\quad + \left( \frac{S}{c_{N,s}} \right) ^{- \frac{N-2s}{2s}} \left( \Vert {(-\Delta )^{s/2}}w\Vert ^2 + \int _\Omega a w^2 \mathop {}\!\textrm{d}y - c_{N,s} (p-1) \int _\Omega U_{x, \lambda }^{p-2} w^2 \mathop {}\!\textrm{d}y \right) \\&\quad + {\mathcal {O}} \left( \lambda ^{-\frac{N-2s}{2}} \Vert {(-\Delta )^{s/2}}w\Vert \right) + o\left( (d \lambda )^{-N+2s}\right) + o\left( \Vert {(-\Delta )^{s/2}}w\Vert ^2\right) . \end{aligned}$$

Proof of Proposition 3.6

Using the almost minimality assumption (1.17) and the coercivity inequality from Proposition 3.5, the expansion from Lemma 3.7 yields the inequality

$$\begin{aligned} 0&\ge (1+ o(1))( S - S(a + \varepsilon V)) + c \phi _0(x) \lambda ^{-N+2s} + c \Vert {(-\Delta )^{s/2}}w\Vert ^2 \\&\quad + {\mathcal {O}} \left( \lambda ^{-\frac{N-2s}{2}} \Vert {(-\Delta )^{s/2}}w\Vert \right) + o\left( (d \lambda )^{-N+2s}\right) \end{aligned}$$

for some \(c > 0\). By Lemma A.1, we have the lower bound \(\phi _0(x) \gtrsim d^{-N+2s}\). Using the estimate

$$\begin{aligned} {\mathcal {O}}\left( \lambda ^{-\frac{N-2s}{2}} \Vert {(-\Delta )^{s/2}}w\Vert \right) \le \delta \Vert {(-\Delta )^{s/2}}w \Vert ^2 + C_\delta \lambda ^{-N+2s}, \end{aligned}$$

which is valid for every \(\delta > 0\) by Young's inequality, we obtain, by taking \(\delta \) small enough,

$$\begin{aligned} C_\delta \lambda ^{-N+2s} \ge (1+ o(1))( S - S(a + \varepsilon V)) + c (d \lambda )^{-N+2s} + c \Vert {(-\Delta )^{s/2}}w\Vert ^2. \end{aligned}$$

Since all three terms on the right side are nonnegative, we may compare each of them separately with the left side: the bound on \(\Vert {(-\Delta )^{s/2}}w\Vert ^2\) gives (3.33), while \(c (d\lambda )^{-N+2s} \le C_\delta \lambda ^{-N+2s}\) gives \(d^{-N+2s} \lesssim 1\), which is (3.34). \(\square \)

Proof of Lemma 3.7

Step 1: Expansion of the numerator. By orthogonality, we have

$$\begin{aligned} \Vert {(-\Delta )^{s/2}}(PU_{x, \lambda }+w)\Vert ^2 = \Vert {(-\Delta )^{s/2}}PU_{x, \lambda }\Vert ^2 + \Vert {(-\Delta )^{s/2}}w\Vert ^2. \end{aligned}$$

The main term can be written as

$$\begin{aligned} \Vert {(-\Delta )^{s/2}}PU_{x, \lambda }\Vert ^2&= \int _\Omega PU_{x, \lambda }{(-\Delta )^{s}}PU_{x, \lambda }\mathop {}\!\textrm{d}y = c_{N,s} \int _\Omega U_{x, \lambda }^{p-1} PU_{x, \lambda }\mathop {}\!\textrm{d}y \\&= c_{N,s} \int _\Omega U_{x, \lambda }^p \mathop {}\!\textrm{d}y + c_{N,s} \lambda ^{-\frac{N-2s}{2}} \int _\Omega U_{x, \lambda }^{p-1} H_0(x,\cdot ) \mathop {}\!\textrm{d}y \\&\quad + {\mathcal {O}}(\Vert f_{x, \lambda }\Vert _\infty \int _\Omega U_{x, \lambda }^{p-1} \mathop {}\!\textrm{d}y), \end{aligned}$$

where we used \(PU_{x, \lambda }= U_{x, \lambda }- \lambda ^{-\frac{N-2\,s}{2}} H_0(x,\cdot ) + f_{x, \lambda }\) with \(\Vert f_{x, \lambda }\Vert _\infty \lesssim \lambda ^\frac{-N+4-2\,s}{2} d^{-N -2 +2\,s}\), by Lemma A.2. Thus,

$$\begin{aligned} \Vert f\Vert _\infty \int _\Omega U_{x, \lambda }^{p-1} \mathop {}\!\textrm{d}y \lesssim (d\lambda )^{-N+2s -2} = o\left( (d\lambda )^{-N+2s}\right) . \end{aligned}$$

Next, we have

$$\begin{aligned} \int _{{\mathbb {R}}^N \setminus \Omega } U_{x, \lambda }^p \mathop {}\!\textrm{d}y \le \int _{{\mathbb {R}}^N \setminus B_d} U_{x, \lambda }^p \mathop {}\!\textrm{d}y \lesssim \int _{d\lambda }^\infty \frac{r^{N-1}}{(1+r^2)^N} \mathop {}\!\textrm{d}r \lesssim (d\lambda )^{-N} = o\left( (d\lambda )^{-N+2s}\right) \end{aligned}$$

and thus

$$\begin{aligned} c_{N,s} \int _\Omega U_{x, \lambda }^p \mathop {}\!\textrm{d}y = c_{N,s} \Vert U_{0,1}\Vert _p^p + o\left( (d\lambda )^{-N+2s}\right) . \end{aligned}$$

Finally, using the fact that \(H_0(x,y) = \phi _0(x) + {\mathcal {O}}( \Vert \nabla _y H_0(x,\cdot )\Vert _\infty |x-y|) = \phi _0(x) + {\mathcal {O}}( d^{-N+2\,s-1} |x-y|)\) by Lemma A.1, we have

$$\begin{aligned} \lambda ^{-\frac{N-2s}{2}} \int _\Omega U_{x, \lambda }^{p-1} H_0(x,\cdot ) \mathop {}\!\textrm{d}y&= \lambda ^{-\frac{N-2s}{2}} \phi _0(x) \int _{B_d} U_{x, \lambda }^{p-1} \mathop {}\!\textrm{d}y \\&\quad + {\mathcal {O}}(d^{-N+2s-1} \lambda ^{-\frac{N-2s}{2}} \int _{B_d} U_{x, \lambda }^{p-1} |x-y| \mathop {}\!\textrm{d}y ) \\&\quad + \lambda ^{-\frac{N-2s}{2}} \int _{\Omega \setminus B_d} U_{x, \lambda }^{p-1} H_0(x,y)\mathop {}\!\textrm{d}y . \end{aligned}$$

Since \(H_0(x,y) \lesssim d^{-N+2s}\) by Lemma A.1, the last term is

$$\begin{aligned} \lambda ^{-\frac{N-2s}{2}} \int _{\Omega \setminus B_d} U_{x, \lambda }^{p-1} H_0(x,\cdot ) \mathop {}\!\textrm{d}y \lesssim d^{-N+2s} \lambda ^{-\frac{N-2s}{2}} \int _{{\mathbb {R}}^N \setminus B_d} U_{x, \lambda }^{p-1} \mathop {}\!\textrm{d}y \lesssim (d\lambda )^{-N} = o((d\lambda )^{-N+2s}). \end{aligned}$$

Similarly,

$$\begin{aligned} \lambda ^{-\frac{N-2s}{2}} \phi _0(x) \int _{B_d} U_{x, \lambda }^{p-1} \mathop {}\!\textrm{d}y&= \lambda ^{-\frac{N-2s}{2}} \phi _0(x) \Vert U_{0,1}\Vert _{p-1}^{p-1} + {\mathcal {O}}\left( \phi _0(x) \lambda ^{-\frac{N-2s}{2}} \int _{{\mathbb {R}}^N \setminus B_d} U_{x, \lambda }^{p-1} \mathop {}\!\textrm{d}y \right) \\&= \lambda ^{-\frac{N-2s}{2}} \phi _0(x) \Vert U_{0,1}\Vert _{p-1}^{p-1} + {\mathcal {O}}((d\lambda )^{-N}) \\&= \lambda ^{-\frac{N-2s}{2}} \phi _0(x) \Vert U_{0,1}\Vert _{p-1}^{p-1} + o((d\lambda )^{-N+2s}). \end{aligned}$$

Finally,

$$\begin{aligned} d^{-N+2s-1} \lambda ^{-\frac{N-2s}{2}} \int _{B_d} U_{x, \lambda }^{p-1} |x-y|&\lesssim (d\lambda )^{-N+2s-1} \int _0^{d\lambda } \frac{r^N}{(1+r^2)^\frac{N+2s}{2}}\mathop {}\!\textrm{d}r = o((d\lambda )^{-N+2s}) \end{aligned}$$

(where one needs to distinguish the cases where \(1-2s\) is positive, negative or zero because the \(\mathop {}\!\textrm{d}r\)-integral is divergent if \(1-2s \ge 0\)).
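For concreteness, one elementary way to organize this case distinction is via the bound

$$\begin{aligned} \int _0^{d\lambda } \frac{r^N}{(1+r^2)^\frac{N+2s}{2}}\mathop {}\!\textrm{d}r \lesssim {\left\{ \begin{array}{ll} (d\lambda )^{1-2s} &{} \quad \text {if } 1-2s > 0, \\ 1 + \log (d\lambda ) &{} \quad \text {if } 1-2s = 0, \\ 1 &{} \quad \text {if } 1-2s < 0, \end{array}\right. } \end{aligned}$$

and in each of the three cases multiplying by \((d\lambda )^{-N+2s-1}\) gives \(o((d\lambda )^{-N+2s})\), since \(d\lambda \rightarrow \infty \).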

Collecting all the previous estimates, we have proved

$$\begin{aligned} \Vert {(-\Delta )^{s/2}}PU_{x, \lambda }\Vert ^2&= c_{N,s} \Vert U_{x, \lambda }\Vert _p^p + c_{N,s} \lambda ^{-N+2s} \Vert U_{x, \lambda }\Vert _{p-1}^{p-1} \phi _0(x) + o( (d\lambda )^{-N +2s}). \end{aligned}$$
(3.35)

The potential term splits as

$$\begin{aligned} \int _\Omega (a+\varepsilon V) (PU_{x, \lambda }+w)^2 \mathop {}\!\textrm{d}y= & {} \int _\Omega (a+ \varepsilon V) PU_{x, \lambda }^2 \mathop {}\!\textrm{d}y + \int _\Omega a w^2 \mathop {}\!\textrm{d}y \\{} & {} + \int _\Omega \left( (a+ \varepsilon V) PU_{x, \lambda }w + \varepsilon V w^2 \right) \mathop {}\!\textrm{d}y \end{aligned}$$

and we can estimate

$$\begin{aligned} \left| \int _\Omega (a+ \varepsilon V) PU_{x, \lambda }^2 \mathop {}\!\textrm{d}y \right| \lesssim \Vert U_{x, \lambda }\Vert _2^2 \lesssim \lambda ^{-N + 2s} \end{aligned}$$

as well as

$$\begin{aligned} \int _\Omega \left( (a+ \varepsilon V) PU_{x, \lambda }w + \varepsilon V w^2 \right) \mathop {}\!\textrm{d}y&\lesssim \Vert PU_{x, \lambda }\Vert _{p'} \Vert w\Vert _p + \varepsilon \Vert w\Vert ^2 \\&= {\mathcal {O}}\left( \lambda ^{-\frac{N-2s}{2}} \Vert {(-\Delta )^{s/2}}w\Vert ) + o(\Vert {(-\Delta )^{s/2}}w\Vert ^2\right) . \end{aligned}$$

In summary, we have, for the numerator of \({\mathcal {S}}_{a+ \varepsilon V}[u_\varepsilon ]\),

$$\begin{aligned}&\alpha ^{-2} \left( \Vert {(-\Delta )^{s/2}}u\Vert ^2 + \int _\Omega (a + \varepsilon V) u^2 \mathop {}\!\textrm{d}y \right) \\&\quad = c_{N,s} \Vert U_{x, \lambda }\Vert _p^p + c_{N,s} \lambda ^{-N+2s} \Vert U_{x, \lambda }\Vert _{p-1}^{p-1} \phi _0(x) + \Vert {(-\Delta )^{s/2}}w\Vert ^2 + \int _\Omega a w^2 \mathop {}\!\textrm{d}y \\&\qquad +{\mathcal {O}}(\lambda ^{-N+2s}) +{\mathcal {O}}\left( \lambda ^{-\frac{N-2s}{2}} \Vert {(-\Delta )^{s/2}}w\Vert \right) + o\left( (d\lambda )^{-N +2s}\right) + o\left( \Vert {(-\Delta )^{s/2}}w\Vert ^2\right) . \end{aligned}$$

Step 2: Expansion of the denominator. By Taylor’s formula,

$$\begin{aligned} (PU_{x, \lambda }+w)^p = PU_{x, \lambda }^p + p PU_{x, \lambda }^{p-1} w + \frac{p(p-1)}{2} PU_{x, \lambda }^{p-2} w^2 + {\mathcal {O}}\left( PU_{x, \lambda }^{p-3} |w|^3 + |w|^p\right) . \end{aligned}$$

Note that, strictly speaking, we use this formula if \(p \ge 3\). If \(2< p \le 3\), the same is true without the remainder term \(PU^{p-3} |w|^3\), which does not affect the rest of the proof. To evaluate the main term, we write \(PU_{x, \lambda }= U_{x, \lambda }- \varphi _{x, \lambda }\) with \(\varphi _{x, \lambda }:= \lambda ^{-\frac{N-2s}{2}} H_0(x, \cdot ) + f_{x, \lambda }\) (see Lemma A.2). Then,

$$\begin{aligned} \int _\Omega PU_{x, \lambda }^p \mathop {}\!\textrm{d}y&= \int _\Omega U_{x, \lambda }^p \mathop {}\!\textrm{d}y - p \int _\Omega U_{x, \lambda }^{p-1} \varphi _{x, \lambda }\mathop {}\!\textrm{d}y + {\mathcal {O}}\left( \int _\Omega (U_{x, \lambda }^{p-2} \varphi _{x, \lambda }^2 + \varphi _{x, \lambda }^p) \mathop {}\!\textrm{d}y \right) \\&= \Vert U_{0,1}\Vert _p^p - p \lambda ^{-N+2s} \Vert U_{0,1}\Vert _{p-1}^{p-1} \phi _0(x) + o((d\lambda )^{-N+2s}), \end{aligned}$$

where we used that, by Lemmas A.2 and B.1, \(\int _\Omega U_{x, \lambda }^{p-2} \varphi _{x, \lambda }^2 \mathop {}\!\textrm{d}y \le \Vert U_{x, \lambda }\Vert _{p-2}^{p-2} \Vert \varphi _{x, \lambda }\Vert ^2_\infty \lesssim (d\lambda )^{-2N+4\,s} = o((d\lambda )^{-N+2\,s})\) and \(\Vert \varphi _{x, \lambda }\Vert _p^p \lesssim (d\lambda )^{-N} = o((d\lambda )^{-N+2\,s})\).

Next, the integral of the remainder term is controlled by

$$\begin{aligned} \int _\Omega (PU_{x, \lambda }^{p-3} |w|^3 + |w|^p) \mathop {}\!\textrm{d}y \lesssim \Vert PU_{x, \lambda }\Vert ^{p-3}_p \Vert w\Vert _p^3 + \Vert w\Vert _p^p = o(\Vert {(-\Delta )^{s/2}}w\Vert ^2). \end{aligned}$$

The term linear in w is

$$\begin{aligned} \int _\Omega PU_{x, \lambda }^{p-1} w \mathop {}\!\textrm{d}y = \int _\Omega U_{x, \lambda }^{p-1} w \mathop {}\!\textrm{d}y + {\mathcal {O}} \left( \int _\Omega (U_{x, \lambda }^{p-2} \varphi _{x, \lambda }|w| + \varphi _{x, \lambda }^{p-1} |w|) \mathop {}\!\textrm{d}y \right) . \end{aligned}$$

Now, by orthogonality of w, we have

$$\begin{aligned} \int _\Omega U_{x, \lambda }^{p-1} w \mathop {}\!\textrm{d}y = c_{N,s}^{-1} \int _\Omega (-\Delta )^s U_{x, \lambda }w \mathop {}\!\textrm{d}y = \int _{{\mathbb {R}}^N} {(-\Delta )^{s/2}}U_{x, \lambda }{(-\Delta )^{s/2}}w \mathop {}\!\textrm{d}y = 0. \end{aligned}$$

Moreover, using \(\Vert \varphi _{x, \lambda }\Vert _p \lesssim (d \lambda )^{-\frac{N-2\,s}{2}}\) by Lemma A.2, we get

$$\begin{aligned} \left| \int _\Omega \varphi _{x, \lambda }^{p-1} w \mathop {}\!\textrm{d}y \right| \le \Vert \varphi _{x, \lambda }\Vert _p^{p-1} \Vert w\Vert _p \le (d\lambda )^{-\frac{N+2s}{2}} \Vert {(-\Delta )^{s/2}}w\Vert = o((d\lambda )^{-N+2s}). \end{aligned}$$

Using additionally that \(\Vert \varphi _{x, \lambda }\Vert _\infty \lesssim d^{-N+2\,s} \lambda ^{-\frac{N-2\,s}{2}}\) by Lemma A.2, by the same computation as in [23, Lemma A.1] we get \(\Vert U_{x, \lambda }^{p-2} \varphi _{x, \lambda }\Vert _{\frac{p}{p-1}} \lesssim (d\lambda )^{-N+2\,s}\) and therefore

$$\begin{aligned} \int _\Omega U_{x, \lambda }^{p-2} \varphi _{x, \lambda }|w| \mathop {}\!\textrm{d}y \lesssim (d\lambda )^{-N+2s} \Vert {(-\Delta )^{s/2}}w\Vert . \end{aligned}$$

In summary, we have, for the denominator of \({\mathcal {S}}_{a+ \varepsilon V}[u_\varepsilon ]\),

$$\begin{aligned} \alpha ^{-p} \int _\Omega u_\varepsilon ^p \mathop {}\!\textrm{d}y&= \Vert U_{0,1}\Vert _p^p - p \lambda ^{-N+2s} \Vert U_{0,1}\Vert _{p-1}^{p-1} \phi _0(x) + \frac{p(p-1)}{2} \int _\Omega U_{x, \lambda }^{p-2} w^2 \mathop {}\!\textrm{d}y \\&\quad + {\mathcal {O}}((d\lambda )^{-N+2s} \Vert {(-\Delta )^{s/2}}w\Vert ) + o(\Vert {(-\Delta )^{s/2}}w \Vert ^2) + o((d\lambda )^{-N+2s}). \end{aligned}$$

Step 3: Expansion of the quotient. Using Taylor's formula, we find, for the reciprocal of the denominator,

$$\begin{aligned} \left( \alpha ^{-p} \int _\Omega u_\varepsilon ^p \mathop {}\!\textrm{d}y \right) ^{-2/p}&= \Vert U_{0,1}\Vert _p^{-2} + 2 \Vert U_{0,1}\Vert _p^{-p-2} \Vert U_{0,1}\Vert _{p-1}^{p-1} \lambda ^{-N+2s} \phi _0(x) \\&\quad - (p-1) \Vert U_{0,1}\Vert _p^{-p-2} \int _\Omega U_{x, \lambda }^{p-2} w^2 \mathop {}\!\textrm{d}y \\&\quad + {\mathcal {O}}((d\lambda )^{-N+2s} \Vert {(-\Delta )^{s/2}}w\Vert ) +o(\Vert {(-\Delta )^{s/2}}w \Vert ^2) + o((d\lambda )^{-N+2s}). \end{aligned}$$

Multiplying this with the expansion for the numerator found above, we obtain

$$\begin{aligned} {\mathcal {S}}_{a+ \varepsilon V}[u_\varepsilon ]&= c_{N,s} \Vert U_{0,1}\Vert _p^{p-2} + \lambda ^{-N+2s} c_{N,s} \Vert U_{0,1}\Vert _p^{-2} \Vert U_{0,1}\Vert _{p-1}^{p-1} \phi _0(x) \\&\quad + \Vert U_{0,1}\Vert _p^{-2} \left( \Vert {(-\Delta )^{s/2}}w\Vert ^2 + \int _\Omega a w^2 \mathop {}\!\textrm{d}y - c_{N,s} (p-1) \int _\Omega U_{x, \lambda }^{p-2} w^2 \mathop {}\!\textrm{d}y \right) \\&\quad + {\mathcal {O}}((d\lambda )^{-N+2s} \Vert {(-\Delta )^{s/2}}w\Vert ) +o(\Vert {(-\Delta )^{s/2}}w \Vert ^2) + o((d\lambda )^{-N+2s}). \end{aligned}$$

Expressing the various constants using Lemma B.5, we find

$$\begin{aligned} c_{N,s} \Vert U_{0,1}\Vert _p^{p-2}&= S , \\ \Vert U_{0,1}\Vert _p^{-2}&= \left( \frac{S}{c_{N,s}} \right) ^{- \frac{N-2s}{2s}}, \\ c_{N,s} \Vert U_{0,1}\Vert _p^{-2} \Vert U_{0,1}\Vert _{p-1}^{p-1}&= 2^{2s} \pi ^{N/2} \frac{\Gamma (s)}{\Gamma \left( \frac{N-2s}{2}\right) } \left( \frac{S}{c_{N,s}} \right) ^{- \frac{N-2s}{2s}}. \end{aligned}$$

This yields the expansion claimed in the lemma. \(\square \)
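As a consistency check, the first of these identities can also be verified directly against the value of \(S_{N,s}\) in (1.9): with the normalization \(U_{0,1}(y) = (1+|y|^2)^{-\frac{N-2s}{2}}\) and the value \(c_{N,s} = 2^{2s} \frac{\Gamma (\frac{N+2s}{2})}{\Gamma (\frac{N-2s}{2})}\) assumed above, one has

$$\begin{aligned} \Vert U_{0,1}\Vert _p^p = \int _{{\mathbb {R}}^N} \frac{\mathop {}\!\textrm{d}y}{(1+|y|^2)^{N}} = \pi ^{N/2} \frac{\Gamma (N/2)}{\Gamma (N)}, \qquad c_{N,s} \Vert U_{0,1}\Vert _p^{p-2} = 2^{2s} \pi ^{s} \frac{\Gamma \left( \frac{N+2s}{2}\right) }{\Gamma \left( \frac{N-2s}{2}\right) } \left( \frac{\Gamma (N/2)}{\Gamma (N)}\right) ^{2s/N} = S_{N,s}, \end{aligned}$$

since \(\frac{p-2}{p} = \frac{2s}{N}\).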

4 Proof of Theorem 1.2

At this point, we have collected sufficiently precise information on the behavior of a general almost minimizing sequence to prove Theorem 1.2.

The main difficulty of the argument consists in constructing, for a critical potential a, a point \(x_0 \in \Omega \) at which \(\phi _a(x_0) = 0\). To do so, we carry out some additional analysis for a sequence \(u_\varepsilon \) which we assume to consist of true minimizers of \(S(a - \varepsilon )\), not only almost minimizers as in the rest of this paper. We make this additional assumption essentially for convenience and brevity of the argument, see the remark below Lemma 4.1.

Indeed, since a is critical, we have \(S(a - \varepsilon ) < S\) for every \(\varepsilon > 0\). By the results of [42], which adapts the classical lemma of Lieb contained in [10] to the fractional case, this strict inequality implies existence of a minimizer \(u_\varepsilon \) of \(S(a - \varepsilon )\). Normalizing \(\int _\Omega u_\varepsilon ^\frac{2N}{N-2\,s} \mathop {}\!\textrm{d}y = A_{N,s}\) as in (1.17), \(u_\varepsilon \) satisfies the equation

$$\begin{aligned} {(-\Delta )^{s}}u_\varepsilon + (a - \varepsilon ) u_\varepsilon = \frac{S(a-\varepsilon )}{A_{N,s}^{2s/N}} u_\varepsilon ^\frac{N+2s}{N-2s} \quad \text {on } \Omega , \qquad u \equiv 0 \quad \text { on } {\mathbb {R}}^N \setminus \Omega . \end{aligned}$$
(4.1)
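For orientation, we note where the constant on the right side of (4.1) comes from: writing \(\mu _\varepsilon \) (an auxiliary label, not used elsewhere) for the Lagrange multiplier of the constrained minimization, the Euler–Lagrange equation of \({\mathcal {S}}_{a - \varepsilon }\) for a nonnegative minimizer reads

$$\begin{aligned} {(-\Delta )^{s}}u_\varepsilon + (a - \varepsilon ) u_\varepsilon = \mu _\varepsilon \, u_\varepsilon ^\frac{N+2s}{N-2s}, \qquad \mu _\varepsilon = \frac{\int _{{\mathbb {R}}^N} |{(-\Delta )^{s/2}}u_\varepsilon |^2 \mathop {}\!\textrm{d}y + \int _\Omega (a-\varepsilon ) u_\varepsilon ^2 \mathop {}\!\textrm{d}y}{\Vert u_\varepsilon \Vert _{L^p(\Omega )}^p} = \frac{S(a - \varepsilon ) \Vert u_\varepsilon \Vert _{L^p(\Omega )}^2}{\Vert u_\varepsilon \Vert _{L^p(\Omega )}^p} = \frac{S(a-\varepsilon )}{A_{N,s}^{2s/N}}, \end{aligned}$$

where the last equality uses the normalization \(\int _\Omega u_\varepsilon ^p \mathop {}\!\textrm{d}y = A_{N,s}\) and \(\frac{p-2}{p} = \frac{2s}{N}\).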

By using equation (4.1), we can conveniently extract the leading term of the remainder term \(w_\varepsilon \). We do this in the following lemma, which is the key step in the proof of Theorem 1.2.

Lemma 4.1

Let \(u_\varepsilon \) be minimizers of \(S(a - \varepsilon )\) which satisfy (4.1). Then we have

$$\begin{aligned} S(a-\varepsilon ) = S + c_{N,s} A_{N,s}^{-2/p} \phi _a(x) \lambda ^{-N+2s} + o(\lambda ^{-N+2s}). \end{aligned}$$
(4.2)

If \(8s/3 < N\), Lemma 4.1 is in fact implied by the more refined analysis carried out in Sect. 5 below, which does not use equation (4.1). If \(2s < N \le 8s/3\), we speculate that one can prove Lemma 4.1 for almost minimizers not satisfying (4.1) by arguing as in [25, Sect. 5], but we do not pursue this explicitly here.

Proof of Lemma 4.1

Clearly, the analysis carried out in Sect. 3 so far applies to the sequence \((u_\varepsilon )\). Thus, up to passing to a subsequence, we may assume that \(u_\varepsilon = \alpha _\varepsilon (PU_{x_\varepsilon , \lambda _\varepsilon } + w_\varepsilon )\) with \(\alpha _\varepsilon \rightarrow 1\), \(x_\varepsilon \rightarrow x_0 \in \Omega \) and \(\Vert {(-\Delta )^{s/2}}w_\varepsilon \Vert _2 \lesssim \lambda ^{-\frac{N-2s}{2}}\) as \(\varepsilon \rightarrow 0\).

Thus the sequence \({\tilde{w}}_\varepsilon := \lambda _\varepsilon ^{\frac{N-2s}{2}} w_\varepsilon \) is bounded in \({{\widetilde{H}}^s(\Omega )}\) and converges weakly in \({{\widetilde{H}}^s(\Omega )}\), up to a subsequence, to some \({\tilde{w}}_0 \in {{\widetilde{H}}^s(\Omega )}\). Inserting the expansion \(u = \alpha ( PU_{x, \lambda }+ w)\) in (4.1), the equation fulfilled by \({\tilde{w}}_\varepsilon \) reads

$$\begin{aligned} {(-\Delta )^{s}}{\tilde{w}}_\varepsilon + (a - \varepsilon ) {\tilde{w}}_\varepsilon = -(a - \varepsilon ) PU_{x, \lambda }\lambda ^\frac{N-2s}{2}+ \lambda ^{-2s} \frac{S(a-\varepsilon )}{A_{N,s}^{2s/N}} \left( PU_{x, \lambda }\lambda ^\frac{N-2s}{2}+ {\tilde{w}}_\varepsilon \right) ^\frac{N+2s}{N-2s}.\nonumber \\ \end{aligned}$$
(4.3)

By Lemma A.2, we can write

$$\begin{aligned} PU_{x, \lambda }\lambda ^{\frac{N-2s}{2}} = G_0(x, \cdot ) - \lambda ^{N-2s} h(\lambda (x - \cdot )) - \lambda ^{N-2s} f_{x, \lambda }\end{aligned}$$

with h as in Lemma B.3. By the bounds on h and \(f_{x, \lambda }\) from Lemmas A.2 and B.3, this yields

$$\begin{aligned} PU_{x, \lambda }\lambda ^{\frac{N-2s}{2}} \rightarrow G_0 (x_0, \cdot ) \, \text { uniformly on compacts of } \Omega \setminus \{x_0\} \, \text { and in } L^q(\Omega ), \, q < \tfrac{N}{N-2s}.\nonumber \\ \end{aligned}$$
(4.4)

Letting \(\varepsilon \rightarrow 0\) in (4.3), we obtain

$$\begin{aligned} \int _{{\mathbb {R}}^N} {(-\Delta )^{s/2}}{\tilde{w}}_0 {(-\Delta )^{s/2}}\varphi \mathop {}\!\textrm{d}y + \int _\Omega a {\tilde{w}}_0 \varphi \mathop {}\!\textrm{d}y = - \int _\Omega a G_0(x_0, \cdot ) \varphi \mathop {}\!\textrm{d}y \end{aligned}$$
(4.5)

for every \(\varphi \in C^\infty _c(\Omega \setminus \{x_0\})\). Now it is straightforward to show that \(C^\infty _c(\Omega {\setminus } \{x_0\})\) is dense in \({{\widetilde{H}}^s(\Omega )}\), by using a cutoff function argument together with identity (3.1). Thus, by approximation, (4.5) even holds for every \(\varphi \in {{\widetilde{H}}^s(\Omega )}\). In other words, \({\tilde{w}}_0\) weakly solves the equation

$$\begin{aligned} (-\Delta )^s {\tilde{w}}_0 + a {\tilde{w}}_0 = - a G_0(x_0, \cdot ) \quad \text { on } \Omega , \qquad {\tilde{w}}_0 \equiv 0 \quad \text { on } {\mathbb {R}}^N \setminus \Omega . \end{aligned}$$

By uniqueness of solutions, we conclude \({\tilde{w}}_0 = H_0(x_0, \cdot ) - H_a(x_0, \cdot )\).

We will now use this information to prove the desired expansion (4.2) of the energy \(S(a-\varepsilon ) = {\mathcal {S}}_{a -\varepsilon } [PU_{x, \lambda }+ w]\). Indeed, using the already established bound \(\Vert {(-\Delta )^{s/2}}w\Vert \lesssim \lambda ^{-\frac{N-2s}{2}}\), the numerator is

$$\begin{aligned} \Vert {(-\Delta )^{s/2}}PU_{x, \lambda }\Vert ^2 + \int _\Omega a (PU_{x, \lambda }^2 + 2 PU_{x, \lambda }w) \mathop {}\!\textrm{d}y + \Vert {(-\Delta )^{s/2}}w\Vert ^2 + \int _\Omega a w^2 \mathop {}\!\textrm{d}y + o(\lambda ^{-N+2s}). \end{aligned}$$
(4.6)

By integrating the equation for w against w and recalling \(\frac{S(a-\varepsilon )}{A_{N,s}^{2s/N}} = c_{N,s} + o(1)\), we easily find the asymptotic identity (compare [22, Eq. (8)] for \(s= 1\))

$$\begin{aligned} \Vert {(-\Delta )^{s/2}}w\Vert ^2 + \int _\Omega a w^2 \mathop {}\!\textrm{d}y = c_{N,s} (p-1) \int _\Omega U_{x, \lambda }^{p-2} w^2 \mathop {}\!\textrm{d}y - \int _\Omega a PU_{x, \lambda }w \mathop {}\!\textrm{d}y. \end{aligned}$$

Inserting this in (4.6), together with the expansion of \(\Vert {(-\Delta )^{s/2}}PU_{x, \lambda }\Vert ^2\) given in (3.35), the numerator of \({\mathcal {S}}_{a -\varepsilon } [PU_{x, \lambda }+ w]\) becomes

$$\begin{aligned}{} & {} c_{N,s} A_{N,s} - c_{N,s} a_{N,s} \phi _0(x) \lambda ^{-N+2s} + \int _\Omega a (PU_{x, \lambda }^2 + PU_{x, \lambda }w) \mathop {}\!\textrm{d}y \nonumber \\{} & {} \quad + c_{N,s} (p-1) \int _\Omega U_{x, \lambda }^{p-2} w^2 \mathop {}\!\textrm{d}y + o(\lambda ^{-N+2s}). \end{aligned}$$
(4.7)

The reciprocal of the denominator of \({\mathcal {S}}_{a -\varepsilon } [PU_{x, \lambda }+ w]\), by the computations in the proof of Lemma 3.7, is given by

$$\begin{aligned} \left( \int _\Omega (PU_{x, \lambda }+ w)^p \mathop {}\!\textrm{d}y \right) ^{-2/p} = A_{N,s}^{-\frac{2}{p}} - A_{N,s}^{-\frac{2}{p} - 1} \left( - 2 \phi _0(x) \lambda ^{-N+2s} + (p-1) \int _\Omega U_{x, \lambda }^{p-2} w^2 \mathop {}\!\textrm{d}y \right) .\nonumber \\ \end{aligned}$$
(4.8)

Multiplying out (4.7) and (4.8), the terms in \(\int _\Omega U^{p-2} w^2 \mathop {}\!\textrm{d}y\) cancel precisely and we obtain

$$\begin{aligned} S(a-\varepsilon )&= S + c_{N,s} A_{N,s}^{-2/p} a_{N,s} \phi _0(x) \lambda ^{-N+2s} \nonumber \\&\quad + A_{N,s}^{-2/p} \lambda ^{-N+2s} \left( \int _\Omega a \left( (\lambda ^\frac{N-2s}{2}PU_{x, \lambda })^2 + \lambda ^\frac{N-2s}{2}PU_{x, \lambda }{\tilde{w}}\right) \mathop {}\!\textrm{d}y \right) + o(\lambda ^{-N+2s}). \nonumber \\ \end{aligned}$$
(4.9)

Now we are ready to return to our findings about \({\tilde{w}}_0\). Indeed, by (4.4), and observing that \(G_0(x, \cdot )\) is an admissible test function in (4.5), we get

$$\begin{aligned}&\int _\Omega a \left( (\lambda ^\frac{N-2s}{2}PU_{x, \lambda })^2 + \lambda ^\frac{N-2s}{2}PU_{x, \lambda }{\tilde{w}}\right) \mathop {}\!\textrm{d}y \nonumber \\&\quad = \int _\Omega a \left( G_0(x, \cdot )^2 + G_0(x, \cdot ) {\tilde{w}}_0 \right) \mathop {}\!\textrm{d}y + o(1) \end{aligned}$$
(4.10)
$$\begin{aligned}&\quad =- \int _\Omega {(-\Delta )^{s/2}}{\tilde{w}}_0 {(-\Delta )^{s/2}}G_0(x, \cdot ) \mathop {}\!\textrm{d}y +o(1)= - {\tilde{w}}_0(x) \nonumber \\&\quad = \gamma _{N,s} (\phi _a(x) - \phi _0(x)) + o(1). \end{aligned}$$
(4.11)

By inserting this into (4.9) and observing that \(\gamma _{N,s} = c_{N,s} a_{N,s}\) by the numerical values given in Lemma B.5, the proof is complete. \(\square \)

Now we have all the ingredients to give a quick proof of our first main result.

Proof of Theorem 1.2

As explained after the statement of the theorem, it only remains to prove the implication \((ii) \Rightarrow (i)\). Suppose thus \(S(a) < S\) and let \(c > 0\) be the smallest number such that \({\bar{a}}:= a +c\) satisfies \(S({\bar{a}}) = S\). For \(\varepsilon > 0\), let \(u_\varepsilon \) be the sequence of minimizers \(S({\bar{a}} - \varepsilon )\), normalized to satisfy (4.1). By Lemma 4.1, we have

$$\begin{aligned} S > S({\bar{a}} - \varepsilon ) = S + c_{N,s} A_{N,s}^{-2/p} \phi _{{\bar{a}}}(x) \lambda ^{-N+2s} + o(\lambda ^{-N+2s}). \end{aligned}$$

Letting \(\varepsilon \rightarrow 0\), this shows \(\phi _{{\bar{a}}}(x_0) \le 0\). By the resolvent identity, we have for every \(x \in \Omega \)

$$\begin{aligned} \phi _{{\bar{a}}}(x) = \phi _a(x) + \int _{\Omega } ({\bar{a}}-a)(z) G_a(x,z) G_{{\bar{a}}}(z,x) \mathop {}\!\textrm{d}z > \phi _a(x), \end{aligned}$$

since \({\bar{a}} - a \equiv c > 0\) and the Green's functions are positive; in other words, \(\phi \) is strictly increasing in the potential. Thus \(\phi _a(x_0) < \phi _{{\bar{a}}}(x_0) \le 0\), and the proof is complete. \(\square \)

5 Proof of the lower bound II: a refined expansion

This section is the most technical of the paper. It is devoted to extracting the leading term of the remainder w and to obtaining sufficiently good bounds on the new error term. In Sect. 5.2 we will need to work under the additional assumption \(8s/3 < N\) in order to obtain the required precision.

Concretely, we write

$$\begin{aligned} w = \lambda ^{-\frac{N-2s}{2}} (H_0(x, \cdot ) - H_a(x, \cdot )) + q \end{aligned}$$

and decompose the remainder further into a tangential and an orthogonal part

$$\begin{aligned} q = t +r, \qquad t \in T_{x, \lambda }, \quad r \in T_{x, \lambda }^\bot . \end{aligned}$$

(We keep omitting the subscript \(\varepsilon \).) A refined expansion of \({\mathcal {S}}_{a + \varepsilon V}[u_\varepsilon ]\) then yields an error term in r which can be controlled using the coercivity inequality of Proposition 3.5. The refined expansion is derived in Sect. 5.2 below.

On the other hand, since t is an element of the \((N+2)\)-dimensional space \(T_{x, \lambda }\), it can be bounded by essentially explicit computations. This is achieved in Sect. 5.1.

Remark 5.1

The present Sect. 5 thus constitutes the analogue of [25, Sect. 6], where the same analysis is carried out for the case \(s=1\) and \(N=3\). We emphasize that, despite these similarities, our approach is conceptually somewhat simpler than that of [25]. Indeed, the argument in [25] relies on an intermediate step involving a spectral cutoff construction, through which the a priori bound \(\Vert \nabla q\Vert =o(\lambda ^{-1/2}) = o(\lambda ^{-\frac{N-2s}{2}})\) is obtained.

By contrast, we are able to conduct the following analysis with only the weaker bound \(\Vert {(-\Delta )^{s/2}}q\Vert = {\mathcal {O}}(\lambda ^{-\frac{N-2s}{2}})\) at hand (which follows from Proposition 3.6). This comes at the price of some additional explicit error terms in r, which can however be conveniently absorbed (see Lemmas 5.7 and 5.9). Since \(N > 8s/3\) is fulfilled when \(N=3\), \(s=1\), this simplified proof of course also works in the particular situation of [25].

5.1 A precise description of t

For \(\lambda \) large enough, the functions \(PU_{x, \lambda }\), \(\partial _{\lambda }PU_{x, \lambda }\) and \(\partial _{x_i}PU_{x, \lambda }\), \(i = 1,\ldots ,N\) are linearly independent. There are therefore uniquely determined coefficients \(\beta , \gamma , \delta _i\), \(i = 1,\ldots ,N\), such that

$$\begin{aligned} t = \beta \lambda ^{-N+2s} PU_{x, \lambda }+ \gamma \lambda ^{-N+2s+1} \partial _{\lambda }PU_{x, \lambda }+ \sum _{i = 1}^N \delta _i \lambda ^{-N+2s-2} \partial _{x_i}PU_{x, \lambda }. \end{aligned}$$
(5.1)

Here the choice of the different powers of \(\lambda \) multiplying the coefficients is justified by the following result.

Lemma 5.2

As \(\varepsilon \rightarrow 0\), we have \(\beta , \gamma , \delta _i = {\mathcal {O}}(1)\).

As a corollary, we obtain estimates on t in various norms.

Lemma 5.3

As \(\varepsilon \rightarrow 0\),

$$\begin{aligned} \Vert {(-\Delta )^{s/2}}t \Vert _2 \lesssim \lambda ^{-N+2s} \quad \text { and } \quad \Vert t\Vert _\frac{2N}{N+2s} \lesssim \Vert t\Vert _2 \lesssim \lambda ^{-\frac{3N -6s}{2}}. \end{aligned}$$

Proof

Recall that \(PU_{x, \lambda }= U_{x, \lambda }- \lambda ^{-\frac{N-2s}{2}} H_0(x,\cdot ) + f_{x, \lambda }\). Then all bounds follow in a straightforward way from (5.1) together with Lemma 5.2 and the standard bounds from Lemmas A.1, A.2, B.1 and B.2. \(\square \)

Proof of Lemma 5.2

Step 1. We introduce the normalized basis functions

$$\begin{aligned} {\tilde{\varphi }}_1:= \frac{PU_{x, \lambda }}{\Vert {(-\Delta )^{s/2}}PU_{x, \lambda }\Vert }, \quad {\tilde{\varphi }}_2:= \frac{\partial _{\lambda }PU_{x, \lambda }}{\Vert {(-\Delta )^{s/2}}\partial _{\lambda }PU_{x, \lambda }\Vert }, \quad {\tilde{\varphi }}_j:= \frac{\partial _{x_{j-2}} PU_{x, \lambda }}{\Vert {(-\Delta )^{s/2}}\partial _{x_{j-2}} PU_{x, \lambda }\Vert },\nonumber \\ \end{aligned}$$
(5.2)

and prove that

$$\begin{aligned} a_j:= \int _{{\mathbb {R}}^N} {(-\Delta )^{s/2}}{\tilde{\varphi }}_j {(-\Delta )^{s/2}}t \mathop {}\!\textrm{d}y = {\left\{ \begin{array}{ll} {\mathcal {O}}(\lambda ^{-N+2s}), &{}\quad j = 1,2, \\ {\mathcal {O}}(\lambda ^{-N+2s-1}), &{}\quad j = 3,\ldots , N+2. \end{array}\right. } \end{aligned}$$
(5.3)

Since \(\lambda ^{-\frac{N-2\,s}{2}} (H_0(x,\cdot ) - H_a(x,\cdot )) + t + r = w \in T_{x, \lambda }^\bot \), and \(r \in T_{x, \lambda }^\bot \), we have

$$\begin{aligned} a_j&= \lambda ^{-\frac{N-2s}{2}} \int _{{\mathbb {R}}^N} {(-\Delta )^{s/2}}{\tilde{\varphi }}_j {(-\Delta )^{s/2}}\left( H_a(x,\cdot ) - H_0(x,\cdot )\right) \mathop {}\!\textrm{d}y \\&= \lambda ^{-\frac{N-2s}{2}} \int _{{\mathbb {R}}^N} {(-\Delta )^{s}}\tilde{\varphi _j} \left( H_a(x,\cdot ) - H_0(x,\cdot )\right) \mathop {}\!\textrm{d}y . \end{aligned}$$

Thus,

$$\begin{aligned} a_1&= \lambda ^{-\frac{N-2s}{2}} \Vert {(-\Delta )^{s/2}}PU_{x, \lambda }\Vert ^{-1} c_{N,s} \int _\Omega U_{x, \lambda }^\frac{N+2s}{N-2s} \left( H_a(x,\cdot ) - H_0(x,\cdot )\right) \mathop {}\!\textrm{d}y \\&\quad \lesssim \lambda ^{-N+2s}, \end{aligned}$$

where we used that by Lemma B.2, \(\Vert {(-\Delta )^{s/2}}PU_{x, \lambda }\Vert ^{-1} \lesssim 1\). The bound for \(a_2\) follows similarly. To obtain the claimed improved bound for \(a_j\), \(j = 3,\ldots ,N+2\), we write

$$\begin{aligned} a_{i+2}&\lesssim \lambda ^{-\frac{N-2s}{2}} \Vert {(-\Delta )^{s/2}}\partial _{x_i}PU_{x, \lambda }\Vert ^{-1} \int _\Omega U_{x, \lambda }^\frac{4s}{N-2s} \partial _{x_i}U_{x, \lambda } \left( H_a(x,\cdot ) - H_0(x,\cdot )\right) \mathop {}\!\textrm{d}y \\&= \lambda ^{-\frac{N-2s}{2}} \Vert {(-\Delta )^{s/2}}\partial _{x_i}PU_{x, \lambda }\Vert ^{-1} {\mathcal {O}} \left( \int _{{\mathbb {R}}^N \setminus B_d} U_{x, \lambda }^\frac{4s}{N-2s} |\partial _{x_i}U_{x, \lambda }| \mathop {}\!\textrm{d}y + \int _\Omega U_{x, \lambda }^\frac{4s}{N-2s} |\partial _{x_i}U_{x, \lambda }| |x-y| \mathop {}\!\textrm{d}y \right) \\&\lesssim \lambda ^{-N+2s-1}. \end{aligned}$$

Here, we wrote \(H_a(x,y) - H_0(x,y) = \phi _a(x) -\phi _0(x) + {\mathcal {O}}(|x-y|)\) and used that by oddness of \(\partial _{x_i}U\),

$$\begin{aligned} (\phi _a(x) -\phi _0(x)) \int _{B_d} U_{x, \lambda }^\frac{4s}{N-2s} \partial _{x_i}U_{x, \lambda }= 0. \end{aligned}$$

This concludes the proof of (5.3).

Step 2. We write

$$\begin{aligned} t = \sum _{j = 1}^{N+2} b_j {\tilde{\varphi }}_j, \end{aligned}$$

with

$$\begin{aligned} b_1&= \beta \lambda ^{-N+2s} \Vert {(-\Delta )^{s/2}}PU_{x, \lambda }\Vert , \quad b_2 = \gamma \lambda ^{-N+2s+1} \Vert {(-\Delta )^{s/2}}\partial _{\lambda }PU_{x, \lambda }\Vert , \\ b_j&= \delta _{j-2} \lambda ^{-N+2s-2} \Vert {(-\Delta )^{s/2}}\partial _{x_{j-2}} PU_{x, \lambda }\Vert , \qquad j= 3,\ldots ,N+2. \end{aligned}$$

Our goal is to show that

$$\begin{aligned} b_j = a_j + {\mathcal {O}}(\lambda ^{-N+2s}) \sup _k a_k, \qquad j = 1,\ldots ,N+2. \end{aligned}$$
(5.4)

Then, from (5.4), we conclude by using the estimates on the \(a_j\) from (5.3) and Lemma B.2.

To prove (5.4), we define the Gram matrix G by

$$\begin{aligned} G_{j,k}:= ({(-\Delta )^{s/2}}{\tilde{\varphi }}_j, {(-\Delta )^{s/2}}{\tilde{\varphi }}_k). \end{aligned}$$

By Lemma B.2 and the definition of the \({\tilde{\varphi }}_j\) it is easily checked that

$$\begin{aligned} G_{j,k} = \delta _{j,k} + {\mathcal {O}}(\lambda ^{-N+2s}), \qquad j,k = 1,\ldots , N+2. \end{aligned}$$

Thus, for sufficiently large \(\lambda \), G is invertible with

$$\begin{aligned} (G^{-1})_{j,k} = \delta _{j,k} + {\mathcal {O}}(\lambda ^{-N+2s}). \end{aligned}$$
(5.5)
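One way to obtain (5.5), which we sketch for completeness, is via a Neumann series: writing \(G = \mathrm {Id} + E\) with \(E_{j,k} = {\mathcal {O}}(\lambda ^{-N+2s})\) by the previous display, for \(\lambda \) large enough we have

$$\begin{aligned} G^{-1} = (\mathrm {Id} + E)^{-1} = \sum _{m=0}^{\infty } (-E)^m = \mathrm {Id} + {\mathcal {O}}(\lambda ^{-N+2s}), \end{aligned}$$

where the series converges since the operator norm of E on \({\mathbb {R}}^{N+2}\) is \({\mathcal {O}}(\lambda ^{-N+2s}) < 1\).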

By definition of G,

$$\begin{aligned} \psi _j:= \sum _{k=1}^{N+2} (G^{-1/2})_{j,k} {\tilde{\varphi }}_k \end{aligned}$$
(5.6)

is an orthonormal basis of \(T_{x, \lambda }\). We can therefore write

$$\begin{aligned} t&= \sum _j \left( {(-\Delta )^{s/2}}\psi _j, {(-\Delta )^{s/2}}t \right) \psi _j = \sum _j \sum _k (G^{-1/2})_{j,k} \left( {(-\Delta )^{s/2}}{\tilde{\varphi }}_k, {(-\Delta )^{s/2}}t \right) \psi _j\\&= \sum _j \sum _k (G^{-1/2})_{j,k} a_k \sum _l (G^{-1/2})_{j,l} {\tilde{\varphi }}_l \\&= \sum _l \left( \sum _k \left( \sum _j (G^{-1/2})_{l,j} (G^{-1/2})_{j,k} \right) a_k \right) {\tilde{\varphi }}_l \\&= \sum _l \left( \sum _k (G^{-1})_{l,k} a_k \right) {\tilde{\varphi }}_l. \end{aligned}$$

Thus, \(b_l = \sum _k (G^{-1})_{l,k} a_k\) and (5.4) follows from (5.5). \(\square \)

Remark 5.4

By treating the terms in the above proof more carefully, it can be shown in fact that \(\lambda ^{N - 2\,s} \beta \), \(\lambda ^{N- 2\,s-1}\gamma \) and \(\lambda ^{N-2\,s+2} \delta _i\) have a limit as \(\lambda \rightarrow \infty \). Indeed, for instance, the leading orders of the expressions \(\int _\Omega U_{x, \lambda }^\frac{N+2\,s}{N-2\,s} (H_a(x, \cdot ) - H_0(x, \cdot )) \mathop {}\!\textrm{d}y\) and \(\Vert {(-\Delta )^{s/2}}PU_{x, \lambda }\Vert \) going into the leading behavior of \(\beta \) can be explicitly evaluated, see Lemma A.4 and the proof of Lemma B.2 respectively. We do not need the behavior of the coefficients \(\beta , \gamma , \delta _i\) to that precision in what follows, so we do not state them explicitly.

5.2 The new expansion of \({\mathcal {S}}_{a+\varepsilon V} [u]\)

Our goal is now to expand the value of the energy functional \({\mathcal {S}}_{a + \varepsilon V} [u_\varepsilon ]\) with respect to the refined decomposition introduced above, namely

$$\begin{aligned} u = \alpha (\psi _{x, \lambda }+ q) = \alpha \left( PU_{x, \lambda }+ \lambda ^{-\frac{N-2s}{2}} \left( H_0(x, \cdot ) - H_a(x, \cdot )\right) + t + r \right) . \end{aligned}$$

In all that follows, we work under the important assumption that

$$\begin{aligned} -3N + 6s< -2s, \quad \text { i.e. } \tfrac{8}{3} s < N \end{aligned}$$
(5.7)

so that \(\lambda ^{-3N + 6s} = o(\lambda ^{-2s})\). Assumption (5.7) has the consequence that, using the available bounds on t and r, we can expand the energy \({\mathcal {S}}_{a + \varepsilon V}[u]\) up to \(o(\lambda ^{-2s})\) errors in a way that does not depend on t. This is the content of the next lemma.

Lemma 5.5

As \(\varepsilon \rightarrow 0\), we have

$$\begin{aligned} {\mathcal {S}}_{a + \varepsilon V}[u_\varepsilon ]&= {\mathcal {S}}_{a + \varepsilon V} [\psi _{x, \lambda }] + D_0^{-2/p} \left( {\mathcal {E}}_0[r] - \frac{2 N_0}{p D_0} I[r] + o(\Vert {(-\Delta )^{s/2}}r\Vert ^2) \right) \\&\quad + o(\lambda ^{-2s}) + o(\varepsilon \lambda ^{-N+ 2s}) + o(\phi _a(x) \lambda ^{-N+2s}). \end{aligned}$$

Here,

$$\begin{aligned} N_0:= \Vert {(-\Delta )^{s/2}}\psi _{x, \lambda }\Vert _2^2 + \int _\Omega (a+ \varepsilon V) \psi _{x, \lambda }^2 \mathop {}\!\textrm{d}y, \qquad D_0:= \int _\Omega \psi _{x, \lambda }^p \mathop {}\!\textrm{d}y, \end{aligned}$$
(5.8)

and I[r] is as defined in (5.10) below.

We emphasize that the contribution of t enters only into the remainders \(o(\lambda ^{-2\,s}) + o(\varepsilon \lambda ^{-N+ 2\,s})+ o(\phi _a(x) \lambda ^{-N+2\,s})\). This is remarkable because t enters at the orders \(\lambda ^{-N+2\,s} \gg \lambda ^{-2\,s}\) and \(\lambda ^{-2N+4\,s} \gg \lambda ^{-2\,s}\) (the latter if \(N< 3\,s\)) into both the numerator and the denominator of \( {\mathcal {S}}_{a + \varepsilon V}[u_\varepsilon ]\), see Lemmas 5.6 and 5.7 below. When calculating the quotient, these contributions cancel precisely, as we verify in Lemma 5.8 below. Heuristically, such a phenomenon is to be expected because (up to projection onto \({{\widetilde{H}}^s(\Omega )}\) and perturbation by \(a + \varepsilon V\)) t represents, by definition, the directions along which the quotient functional is invariant. As already pointed out in the introduction, we suspect, but cannot prove, that in the absence of assumption (5.7) the contributions of t to the higher-order terms of order \(\lambda ^{-kN + 2ks}\) for \(3 \le k \le \tfrac{2N}{N-2\,s}\) would continue to cancel.

We prove Lemma 5.5 by separately expanding the numerator and the denominator of \({\mathcal {S}}_{a + \varepsilon V}[u_\varepsilon ]\). We abbreviate

$$\begin{aligned} {\mathcal {E}}_\varepsilon [u]:= \Vert {(-\Delta )^{s/2}}u\Vert ^2 + \int _\Omega (a + \varepsilon V)u^2 \mathop {}\!\textrm{d}y \end{aligned}$$

and write \({\mathcal {E}}_\varepsilon [u,v]\) for the associated bilinear form.

Lemma 5.6

(Expanding the numerator) As \(\varepsilon \rightarrow 0\),

$$\begin{aligned} |\alpha |^{-2} {\mathcal {E}}_\varepsilon [u_\varepsilon ]&= {\mathcal {E}}_\varepsilon [\psi _{x, \lambda }] + \left( 2 {\mathcal {E}}_0[\psi _{x, \lambda }, t] + \Vert {(-\Delta )^{s/2}}t\Vert ^2 \right) + {\mathcal {E}}_0[r] + o(\lambda ^{-2s}) + o(\varepsilon \lambda ^{-N+ 2s}). \end{aligned}$$

Proof

We write \(\alpha ^{-1} u_\varepsilon = \psi _{x, \lambda }+ t + r\) and therefore

$$\begin{aligned} {\mathcal {E}}_\varepsilon [u_\varepsilon ] = {\mathcal {E}}_\varepsilon [\psi _{x, \lambda }] + 2 {\mathcal {E}}_\varepsilon [\psi _{x, \lambda }, t+r] + {\mathcal {E}}_\varepsilon [t+r]. \end{aligned}$$
(5.9)

The third term on the right side is

$$\begin{aligned} {\mathcal {E}}_\varepsilon [t+r] = {\mathcal {E}}_0[t] + 2{\mathcal {E}}_0[t,r] + {\mathcal {E}}_0[r] + \varepsilon \int _\Omega V (t+r)^2 \mathop {}\!\textrm{d}y. \end{aligned}$$

Now \(\int _{{\mathbb {R}}^N} {(-\Delta )^{s/2}}t {(-\Delta )^{s/2}}r \mathop {}\!\textrm{d}y = 0\) by orthogonality and therefore, by Lemma 5.3,

$$\begin{aligned} {\mathcal {E}}_0[t,r]&= \int _\Omega a t r \mathop {}\!\textrm{d}y = {\mathcal {O}}(\Vert {(-\Delta )^{s/2}}r\Vert \Vert t\Vert _\frac{2N}{N+2s}) = {\mathcal {O}}(\lambda ^\frac{-3N+6s}{2} \Vert {(-\Delta )^{s/2}}r\Vert ) \\&= o(\lambda ^{-2s}) + o(\Vert {(-\Delta )^{s/2}}r\Vert ^2), \end{aligned}$$

where the last equality is a consequence of assumption (5.7) and Young’s inequality. Finally, again by Lemma 5.3,

$$\begin{aligned} \varepsilon \int _\Omega V (t+r)^2 \mathop {}\!\textrm{d}y = {\mathcal {O}}( \varepsilon (\Vert t\Vert ^2 + \Vert {(-\Delta )^{s/2}}r\Vert ^2)) = o(\varepsilon \lambda ^{-N+2s}) + o( \Vert {(-\Delta )^{s/2}}r\Vert ^2). \end{aligned}$$

The second term on the right side of (5.9) is

$$\begin{aligned} 2 {\mathcal {E}}_\varepsilon [\psi _{x, \lambda }, t+r] = 2 {\mathcal {E}}_0[\psi _{x, \lambda }, t] + 2 {\mathcal {E}}_0[\psi _{x, \lambda }, r ] + 2 \varepsilon \int _\Omega V \psi _{x, \lambda }(t + r) \mathop {}\!\textrm{d}y. \end{aligned}$$

To start with, using Lemma 5.3,

$$\begin{aligned} \varepsilon \int _\Omega V \psi _{x, \lambda }(t + r) \mathop {}\!\textrm{d}y&= {\mathcal {O}}( \varepsilon ( \Vert t\Vert _\frac{2N}{N+2s} + \Vert \psi \Vert _\frac{2N}{N+2s} \Vert {(-\Delta )^{s/2}}r\Vert )) \\&= {\mathcal {O}}( \varepsilon \lambda ^{-\frac{N-2s}{2}} \Vert {(-\Delta )^{s/2}}r\Vert ) + o(\varepsilon \lambda ^{-N+2s})\\&= o(\varepsilon \lambda ^{-N+2s}) + o(\Vert {(-\Delta )^{s/2}}r\Vert ^2), \end{aligned}$$

again by Young’s inequality. Moreover, using that \(r \in T_{x, \lambda }^\bot \), that \({(-\Delta )^{s}}H_a(x, \cdot ) = a G_a(x, \cdot )\) and \({(-\Delta )^{s}}H_0(x,\cdot ) = 0\), and integrating by parts,

$$\begin{aligned} {\mathcal {E}}_0[\psi _{x, \lambda }, r ]&= \lambda ^{-\frac{N-2s}{2}} \int _\Omega {(-\Delta )^{s/2}}(H_0 - H_a)(x, \cdot ) {(-\Delta )^{s/2}}r \mathop {}\!\textrm{d}y + \int _\Omega a \psi _{x, \lambda }r \mathop {}\!\textrm{d}y \\&= \int _\Omega a (- \lambda ^{-\frac{N-2s}{2}} G_a(x, \cdot ) + \psi _{x, \lambda }) r \mathop {}\!\textrm{d}y. \end{aligned}$$

Since we can write \(\lambda ^{-\frac{N-2s}{2}} G_a(x,y) - \psi _{x, \lambda }(y) = \lambda ^\frac{N-2s}{2}h(\lambda (x-y)) + f_{x, \lambda }\) with h as in Lemma B.3 and \(f_{x, \lambda }\) as in Lemma A.2, we get

$$\begin{aligned} {\mathcal {E}}_0[\psi _{x, \lambda }, r ]&\lesssim \left( \lambda ^{-\frac{N-2s}{2}} \Vert h(\lambda \cdot )\Vert _\frac{2N}{N+2s} + \Vert f_{x, \lambda }\Vert _\infty \right) \Vert {(-\Delta )^{s/2}}r\Vert \lesssim \lambda ^{-2s} \Vert {(-\Delta )^{s/2}}r\Vert = o(\lambda ^{-2s}) \end{aligned}$$

by the bounds in those lemmas. Finally,

$$\begin{aligned} {\mathcal {E}}_0[ t] = \Vert {(-\Delta )^{s/2}}t\Vert ^2 + \int _\Omega a t^2 \mathop {}\!\textrm{d}y \end{aligned}$$

and \(\int _\Omega a t^2 \mathop {}\!\textrm{d}y \lesssim \Vert t\Vert _2^2 \lesssim \lambda ^{-3N+6\,s} = o(\lambda ^{-2\,s})\) by Lemma 5.3 and Assumption (5.7). \(\square \)
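For the reader's convenience, we make the Young's inequality step used in the above proof explicit (an elementary sketch): for any \(\theta \in (0,1]\),

$$\begin{aligned} \lambda ^\frac{-3N+6s}{2} \Vert {(-\Delta )^{s/2}}r\Vert \le \frac{1}{2\theta } \lambda ^{-3N+6s} + \frac{\theta }{2} \Vert {(-\Delta )^{s/2}}r\Vert ^2, \end{aligned}$$

where the first term on the right side is \(o(\lambda ^{-2s})\) by (5.7), and choosing \(\theta = \theta (\varepsilon ) \rightarrow 0\) sufficiently slowly makes the second term \(o(\Vert {(-\Delta )^{s/2}}r\Vert ^2)\). The same elementary argument is used at several places below.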

Lemma 5.7

(Expanding the denominator) As \(\varepsilon \rightarrow 0\),

$$\begin{aligned} |\alpha |^{-p} \int _\Omega u_\varepsilon ^p \mathop {}\!\textrm{d}y&= \int _\Omega \psi _{x, \lambda }^p \mathop {}\!\textrm{d}y + \left( p \int _\Omega \psi _{x, \lambda }^{p-1} t \mathop {}\!\textrm{d}y + \frac{p(p-1)}{2} \int _\Omega \psi _{x, \lambda }^{p-2} t^2 \mathop {}\!\textrm{d}y \right) \\&\quad +\frac{p(p-1)}{2} \int _\Omega \psi _{x, \lambda }^{p-2} r^2 \mathop {}\!\textrm{d}y \\&\quad + {\mathcal {O}}\left( \lambda ^{-\frac{N-2s}{2}} \int _\Omega U^{p-2} |H_a| |r| \mathop {}\!\textrm{d}y \right) + o(\Vert {(-\Delta )^{s/2}}r\Vert ^2) + o(\lambda ^{-2s}). \end{aligned}$$

Proof

Write \(\alpha ^{-1} u_\varepsilon = \psi _{x, \lambda }+t+ r\). We expand

$$\begin{aligned} \int _\Omega (\psi _{x, \lambda }+ t +r)^p \mathop {}\!\textrm{d}y&= \int _\Omega (\psi _{x, \lambda }+r)^p \mathop {}\!\textrm{d}y\\&\quad + p \int _\Omega (\psi _{x, \lambda }+ r)^{p-1} t \mathop {}\!\textrm{d}y + \frac{p(p-1)}{2} \int _\Omega (\psi _{x, \lambda }+ r)^{p-2} t^2 \mathop {}\!\textrm{d}y \\&\quad + {\mathcal {O}}\left( \Vert \psi _{x, \lambda }+ r\Vert _p^{p-3} \Vert {(-\Delta )^{s/2}}t\Vert ^3 + \Vert {(-\Delta )^{s/2}}t\Vert ^p \right) . \end{aligned}$$

By Lemma 5.3 together with assumption (5.7), the last term is \(o(\lambda ^{-2s})\). The third term is, by Lemma 5.3,

$$\begin{aligned} \int _\Omega (\psi _{x, \lambda }+ r)^{p-2} t^2 \mathop {}\!\textrm{d}y = \int _\Omega \psi _{x, \lambda }^{p-2} t^2 \mathop {}\!\textrm{d}y+ {\mathcal {O}}\left( \lambda ^{-2N+4s} \Vert {(-\Delta )^{s/2}}r\Vert \right) . \end{aligned}$$

The second term is

$$\begin{aligned} \int _\Omega (\psi _{x, \lambda }+ r)^{p-1} t \mathop {}\!\textrm{d}y = \int _\Omega \psi _{x, \lambda }^{p-1} t \mathop {}\!\textrm{d}y+ (p-1) \int _\Omega \psi _{x, \lambda }^{p-2} r t \mathop {}\!\textrm{d}y+ o( \Vert {(-\Delta )^{s/2}}r\Vert ^2 ). \end{aligned}$$

The remaining term \(\int _\Omega \psi _{x, \lambda }^{p-2} r t\mathop {}\!\textrm{d}y\) needs to be expanded more carefully. Using \(\psi _{x, \lambda }= U_{x, \lambda }- \lambda ^{-\frac{N-2s}{2}} H_a(x, \cdot ) - f_{x, \lambda }\) with \(\Vert \lambda ^{-\frac{N-2s}{2}} H_a(x, \cdot ) + f_{x, \lambda }\Vert _\infty \lesssim \lambda ^{-\frac{N-2s}{2}}\), we write

$$\begin{aligned} \int _\Omega \psi _{x, \lambda }^{p-2} r t \mathop {}\!\textrm{d}y&= \int _\Omega U_{x, \lambda }^{p-2} r t \mathop {}\!\textrm{d}y+ {\mathcal {O}} \left( \lambda ^{-\frac{N-2s}{2}} \Vert {(-\Delta )^{s/2}}r\Vert \Vert {(-\Delta )^{s/2}}t\Vert \right) \end{aligned}$$

and using assumption (5.7), the remainder is bounded by

$$\begin{aligned} \lambda ^{-\frac{N-2s}{2}} \Vert {(-\Delta )^{s/2}}r\Vert \Vert {(-\Delta )^{s/2}}t\Vert \lesssim \lambda ^{\frac{-3N+6s}{2}} \Vert {(-\Delta )^{s/2}}r\Vert = o(\lambda ^{-2s}) + o(\Vert {(-\Delta )^{s/2}}r\Vert ^2). \end{aligned}$$

Now using orthogonality of r and the expansion (5.1) of t, by some standard calculations, whose details we omit, one obtains

$$\begin{aligned} \int _\Omega U_{x, \lambda }^{p-2} r t \mathop {}\!\textrm{d}y&= {\mathcal {O}} \left( \lambda ^{\frac{-3N+6s}{2}} \Vert {(-\Delta )^{s/2}}r \Vert \right) = o(\lambda ^{-2s}) + o(\Vert {(-\Delta )^{s/2}}r\Vert ^2), \end{aligned}$$

where we used again assumption (5.7) for the last equality.

It remains only to treat the t-independent term \(\int _\Omega (\psi _{x, \lambda }+ r)^p\mathop {}\!\textrm{d}y\). We find

$$\begin{aligned} \int _\Omega (\psi _{x, \lambda }+ r)^p \mathop {}\!\textrm{d}y&= \int _\Omega \psi _{x, \lambda }^p \mathop {}\!\textrm{d}y + p \int _\Omega \psi _{x, \lambda }^{p-1} r \mathop {}\!\textrm{d}y \\&\quad + \frac{p(p-1)}{2} \int _\Omega \psi _{x, \lambda }^{p-2} r^2 \mathop {}\!\textrm{d}y + o( \Vert {(-\Delta )^{s/2}}r\Vert ^2). \end{aligned}$$

Using orthogonality of r, we get that \(\int _\Omega U_{x, \lambda }^{p-1} r \mathop {}\!\textrm{d}y = 0\) and hence

$$\begin{aligned} \int _\Omega \psi _{x, \lambda }^{p-1} r \mathop {}\!\textrm{d}y= {\mathcal {O}}\left( \lambda ^{-\frac{N-2s}{2}} \int _\Omega U_{x, \lambda }^{p-2} |H_a(x, \cdot )| |r| \mathop {}\!\textrm{d}y + \lambda ^{-\frac{N+2s}{2}} \Vert {(-\Delta )^{s/2}}r\Vert \right) \end{aligned}$$

and \(\lambda ^{-\frac{N+2\,s}{2}} \Vert {(-\Delta )^{s/2}}r\Vert \lesssim \lambda ^{-N} = o(\lambda ^{-2\,s})\). Finally, we have

$$\begin{aligned} \int _\Omega \psi _{x, \lambda }^{p-2} r^2 \mathop {}\!\textrm{d}y = \int _\Omega U_{x, \lambda }^{p-2} r^2 \mathop {}\!\textrm{d}y + o(\Vert {(-\Delta )^{s/2}}r\Vert ^2). \end{aligned}$$

Collecting all the estimates gives the claim of the lemma. \(\square \)

We can now prove the claimed expansion of the energy functional.

Proof of Lemma 5.5

We write the expansions of the numerator and the denominator as

$$\begin{aligned} {\mathcal {E}}_\varepsilon [u_\varepsilon ] = N_0 + N_1 + {\mathcal {E}}_0[r] + o\left( \lambda ^{-2s} + (\varepsilon + \phi _a(x)) \lambda ^{-N+2s}\right) , \end{aligned}$$

where

$$\begin{aligned} N_0 = {\mathcal {E}}_\varepsilon [\psi ], \quad N_1:= 2 {\mathcal {E}}_0[\psi _{x, \lambda }, t] + \Vert {(-\Delta )^{s/2}}t\Vert ^2, \end{aligned}$$

and

$$\begin{aligned} \int _\Omega u_\varepsilon ^p = D_0 + D_1 + {\mathcal {I}}[r] + o(\lambda ^{-2s}), \end{aligned}$$

where

$$\begin{aligned} D_0 = \int _\Omega \psi ^p, \qquad D_1:= p \int _\Omega \psi _{x, \lambda }^{p-1} t + \frac{p(p-1)}{2} \int _\Omega \psi _{x, \lambda }^{p-2} t^2, \end{aligned}$$

and

$$\begin{aligned} {\mathcal {I}}[r]:= \frac{p(p-1)}{2} \int _\Omega \psi _{x, \lambda }^{p-2} r^2 + {\mathcal {O}}(\lambda ^{-\frac{N-2s}{2}} \int _\Omega U^{p-2} |H_a| |r| \mathop {}\!\textrm{d}y). \end{aligned}$$
(5.10)

Taylor expanding up to and including second order, we find

$$\begin{aligned} \left( \int _\Omega u_\varepsilon ^p \right) ^{-2/p}&= D_0^{-2/p} \left( 1 - \frac{2}{p} \frac{D_1 + I[r]}{D_0} + \frac{p+2}{p^2} \frac{(D_1 + I[r])^2}{D_0^2} \right) + o(\lambda ^{-2s}). \end{aligned}$$
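The coefficients in the last display are those of the second-order Taylor expansion of \(x \mapsto (1+x)^{-2/p}\) at \(x = 0\); as a quick check,

$$\begin{aligned} (1+x)^{-2/p} = 1 - \frac{2}{p} x + \frac{1}{2} \cdot \frac{2}{p} \left( \frac{2}{p} + 1 \right) x^2 + {\mathcal {O}}(x^3) = 1 - \frac{2}{p} x + \frac{p+2}{p^2} x^2 + {\mathcal {O}}(x^3), \end{aligned}$$

applied with \(x = (D_1 + I[r])/D_0 = o(1)\).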

We now observe \({\mathcal {I}}[r] \lesssim \Vert {(-\Delta )^{s/2}}r\Vert ^2 + o(\phi _a(x) \lambda ^{-N+2\,s} + \lambda ^{-2\,s})\) since

$$\begin{aligned} \lambda ^{-\frac{N-2s}{2}} \int _\Omega U^{p-2} |H_a| |r|&\lesssim \int _\Omega U^{p-2} r^2 + \lambda ^{-N+2s} \int _\Omega U^{p-2} H_a^2 \\&\lesssim \Vert {(-\Delta )^{s/2}}r\Vert ^2 + o(\lambda ^{-N+2s} \phi _a(x)) \end{aligned}$$

Hence we can simplify the expression of the denominator to

$$\begin{aligned} \left( \int _\Omega u_\varepsilon ^p \right) ^{-2/p}&= D_0^{-2/p} \left( 1 - \frac{2}{p} \frac{D_1 + I[r]}{D_0} + \frac{p+2}{p^2} \frac{D_1^2}{D_0^2} \right) + o(\Vert {(-\Delta )^{s/2}}r\Vert ^2) \\&\quad + o(\phi _a(x) \lambda ^{-N+2s})+ o(\lambda ^{-2s}). \end{aligned}$$

Multiplying this with the expansion of the numerator from above, we find

$$\begin{aligned} {\mathcal {S}}_{a + \varepsilon V}[u_\varepsilon ]&= D_0^{-2/p} N_0 + D_0^{-2/p} \left( N_1 - \frac{2}{p} \frac{N_0}{D_0} D_1 - \frac{2}{p} \frac{N_1 D_1}{D_0} + \frac{p+2}{p^2} \frac{D_1^2 N_0}{D_0^2} \right) \\&\quad + D_0^{-2/p} \left( {\mathcal {E}}_0[r] - \frac{2 N_0}{p D_0} I[r] + o(\Vert {(-\Delta )^{s/2}}r\Vert ^2) \right) + o(\lambda ^{-2s}) + o(\phi _a(x)\lambda ^{-N+2s}). \end{aligned}$$

We show in Lemma 5.8 below that the bracket containing the terms \(N_1\) and \(D_1\), i.e. the contribution of t, vanishes up to order \(o(\lambda ^{-2s})\), due to cancellations. Noting that \(D_0^{-2/p} N_0\) is nothing but \({\mathcal {S}}_{a+ \varepsilon V}[\psi _{x, \lambda }]\), the expansion claimed in Lemma 5.5 follows. \(\square \)

Lemma 5.8

Assume (5.7) and let \(N_0\), \(N_1\), \(D_0\), \(D_1\) be defined as in the proof of Lemma 5.5. Then

$$\begin{aligned} N_1&= 2 \beta c_{N,s}A_{N,s} \lambda ^{-N+2s} \\&\quad + c_{N,s} \lambda ^{-2N + 4s} \left( \beta ^2 A_{N,s} + \gamma ^2 (p-1) B_{N,s} - 2 a_{N,s} \phi _0(x) (\beta - \frac{N-2s}{2}\gamma ) \right) + o(\lambda ^{-2s}) \end{aligned}$$

and

$$\begin{aligned} D_1&= \lambda ^{-N+2s} p \beta A_{N,s} + \lambda ^{-2N + 4s} \left( \frac{p(p-1)}{2} (\beta ^2 A_{N,s} + \gamma ^2 B_{N,s}) -p (\beta - \frac{N-2s}{2} \gamma ) a_{N,s} \phi _0(x) \right) \\&\quad + o(\phi _a(x) \lambda ^{-N+2s}) + o(\lambda ^{-2s}), \end{aligned}$$

where we abbreviated \(B_{N,s}:= \int _{{\mathbb {R}}^N} U_{0,1}^{p-2} |\partial _{\lambda }U_{0,1}|^2 \mathop {}\!\textrm{d}y\).

In particular,

$$\begin{aligned} N_1 - \frac{2}{p} \frac{N_0}{D_0} D_1 - \frac{2}{p} \frac{N_1 D_1}{D_0} + \frac{p+2}{p^2} \frac{D_1^2 N_0}{D_0^2} = o(\lambda ^{-2s}) + o(\phi _a(x) \lambda ^{-N+2s}). \end{aligned}$$

Proof

We start with expanding \(N_1 = 2{\mathcal {E}}_0[\psi , t] + \Vert {(-\Delta )^{s/2}}t\Vert ^2 \). From Lemma B.2 and the expansion (5.1) for t, one easily sees that

$$\begin{aligned} \Vert {(-\Delta )^{s/2}}t\Vert ^2&= \beta ^2 \lambda ^{-2N+4s} \Vert {(-\Delta )^{s/2}}PU_{x, \lambda }\Vert ^2 + \gamma ^2 \lambda ^{-2N+4s+2} \Vert {(-\Delta )^{s/2}}\partial _{\lambda }PU_{x, \lambda }\Vert ^2 \\&= \beta ^2 c_{N,s} A_{N,s} \lambda ^{-2N + 4s} + \gamma ^2 (p-1) c_{N,s}B_{N,s} \lambda ^{-2N + 4s} + o(\lambda ^{-2s}), \end{aligned}$$

where we also used assumption (5.7). Next, recalling \(({(-\Delta )^{s}}+ a) \psi _{x, \lambda }= c_{N,s} U_{x, \lambda }^{p-1} - a \left( \lambda ^{\frac{N-2s}{2}} h(\lambda (x - \cdot )) + f_{x, \lambda }\right) \) with h as in Lemma B.3, we easily obtain

$$\begin{aligned} 2 {\mathcal {E}}_0[\psi _{x, \lambda }, t]&= 2 c_{N,s} \int _\Omega U_{x, \lambda }^{p-1} t \mathop {}\!\textrm{d}y + o(\lambda ^{-2s}) = 2 \beta c_{N,s} A_{N,s} \lambda ^{-N+2s} \\&\quad - 2 c_{N,s} a_{N,s} \phi _0(x) \lambda ^{-2N+4s}\left( \beta - \frac{N-2s}{2}\gamma \right) + o(\lambda ^{-2s}). \end{aligned}$$

(Observe that the leading order term with \(\gamma \) vanishes because \(\int _{{\mathbb {R}}^N} U_{0,1}^{p-1} \partial _{\lambda }U_{0,1} = \tfrac{1}{p} \tfrac{\textrm{d}}{\textrm{d}\lambda }\big |_{\lambda = 1} \int _{{\mathbb {R}}^N} U_{0,\lambda }^{p} \mathop {}\!\textrm{d}y = 0\), the critical \(L^p\) norm of \(U_{0,\lambda }\) being independent of \(\lambda \).) This proves the claimed expansion for \(N_1\). For \(D_1\), we have

$$\begin{aligned} \int _\Omega \psi _{x, \lambda }^{p-1} t \mathop {}\!\textrm{d}y&= \lambda ^{-N+2s} \beta \int _\Omega \psi _{x, \lambda }^{p-1} PU_{x, \lambda }\mathop {}\!\textrm{d}y + \gamma \lambda ^{-N+2s+1} \int _\Omega \psi _{x, \lambda }^{p-1} \partial _{\lambda }PU_{x, \lambda }\mathop {}\!\textrm{d}y + o(\lambda ^{-2s}). \end{aligned}$$

Writing out \(\psi _{x, \lambda }= U_{x, \lambda }- \lambda ^{-\frac{N-2s}{2}} H_a(x, \cdot ) - f_{x, \lambda }\) and \(PU_{x, \lambda }= U_{x, \lambda }- \lambda ^{-\frac{N-2s}{2}} H_0(x, \cdot ) - f\), by the usual bounds together with assumption (5.7) we get

$$\begin{aligned} \lambda ^{-N+2s} \beta \int _\Omega \psi ^{p-1} PU_{x, \lambda }\mathop {}\!\textrm{d}y&= \lambda ^{-N+2s} \beta A_{N,s} - \lambda ^{-2N+4s} \beta a_{N,s} \phi _0(x) \\&\quad + o(\lambda ^{-N+2s} \phi _a(x)) + o(\lambda ^{-2s}). \end{aligned}$$

Similarly,

$$\begin{aligned} \gamma \lambda ^{-N+2s+1} \int _\Omega \psi ^{p-1} \partial _{\lambda }PU_{x, \lambda }\mathop {}\!\textrm{d}y&= \gamma \frac{N-2s}{2} \lambda ^{-2N+4s} a_{N,s} \phi _0(x) + o(\lambda ^{-N+2s} \phi _a(x)) + o(\lambda ^{-2s}). \end{aligned}$$

Observe that the leading order term with \(\gamma \) vanishes because \(\int _{{\mathbb {R}}^N} U_{0,1}^{p-1} \partial _{\lambda }U_{0,1} = 0\). Finally,

$$\begin{aligned} \int _\Omega \psi _{x, \lambda }^{p-2} t^2 \mathop {}\!\textrm{d}y&= \lambda ^{-2N + 4s} \left( \beta ^2 A_{N,s} + \gamma ^2 B_{N,s} \right) + o(\lambda ^{-2s}). \end{aligned}$$

Putting together the above, we end up with the claimed expansion for \(D_1\).

The last assertion of the lemma follows from the expansions of \(N_0\), \(D_0\), \(N_1\), and \(D_1\) by an explicit calculation whose details we omit. \(\square \)
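Heuristically, the cancellation asserted in the last display can be seen at leading order as follows (a sketch in which we do not track the lower-order corrections): since \(N_0/D_0 = c_{N,s}\) up to lower-order terms (cf. the proof of Lemma 5.9 below),

$$\begin{aligned} N_1 - \frac{2}{p} \frac{N_0}{D_0} D_1 \approx 2 \beta c_{N,s} A_{N,s} \lambda ^{-N+2s} - \frac{2}{p} \, c_{N,s} \cdot p \beta A_{N,s} \lambda ^{-N+2s} = 0, \end{aligned}$$

so the contributions proportional to \(\beta \lambda ^{-N+2s}\) cancel exactly. Keeping track of the corrections of order \(\lambda ^{-2N+4s}\), together with the terms \(- \frac{2}{p} \frac{N_1 D_1}{D_0}\) and \(\frac{p+2}{p^2} \frac{D_1^2 N_0}{D_0^2}\), which are of that order, is the content of the omitted explicit calculation.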

Based on the refined expansion of \({\mathcal {S}}_{a+ \varepsilon V}[u_\varepsilon ]\) obtained in Lemma 5.5, we are now in a position to give the proofs of our main results.

We first use the coercivity inequality from Proposition 3.4 to control the terms involving r that appear in Lemma 5.5.

Lemma 5.9

(Coercivity result) There is \(\rho > 0\) such that, as \(\varepsilon \rightarrow 0\),

$$\begin{aligned} {\mathcal {E}}_0[r] - \frac{2 N_0}{p D_0} I[r] \ge \rho \Vert {(-\Delta )^{s/2}}r\Vert ^2. \end{aligned}$$

Proof

Recalling the definition (5.10) of \({\mathcal {I}}[r]\) and observing that \(N_0/D_0 = c_{N,s}\), we find by Proposition 3.5 that

$$\begin{aligned} {\mathcal {E}}_0[r] - \frac{2 N_0}{p D_0} I[r]&= \Vert {(-\Delta )^{s/2}}r\Vert ^2 + \int _\Omega a r^2 \mathop {}\!\textrm{d}y - c_{N,s} (p-1) \int _\Omega U_{x, \lambda }^{p-2} r^2 \mathop {}\!\textrm{d}y \\&\quad + {\mathcal {O}}\left( \lambda ^{-\frac{N-2s}{2}} \int _\Omega U_{x, \lambda }^{p-2} |H_a(x, \cdot )| |r| \mathop {}\!\textrm{d}y \right) +o(\Vert {(-\Delta )^{s/2}}r\Vert ^2) \\&\ge \rho \Vert {(-\Delta )^{s/2}}r\Vert ^2 + {\mathcal {O}}\left( \lambda ^{-\frac{N-2s}{2}} \int _\Omega U_{x, \lambda }^{p-2} |H_a(x, \cdot )| |r| \mathop {}\!\textrm{d}y \right) \end{aligned}$$

for some \(\rho > 0\). The remaining error term can be bounded as follows:

$$\begin{aligned} \lambda ^{-\frac{N-2s}{2}} \int _\Omega U_{x, \lambda }^{p-2} |H_a(x, \cdot )| |r| \mathop {}\!\textrm{d}y&\le \delta ' \int _\Omega U_{x, \lambda }^{p-2} r^2 \mathop {}\!\textrm{d}y + C \lambda ^{-N+2s} \int _\Omega U_{x, \lambda }^{p-2} H_a(x,\cdot )^2 \mathop {}\!\textrm{d}y \\&\le \delta \Vert {(-\Delta )^{s/2}}r\Vert ^2 + {\mathcal {O}}(\lambda ^{-2N+4s} \phi _a(x)^2) + o(\lambda ^{-2s}) \\&\le \delta \Vert {(-\Delta )^{s/2}}r\Vert ^2 + o(\lambda ^{-N+2s} \phi _a(x) + \lambda ^{-2s}), \end{aligned}$$

where we used Lemma A.4. By choosing \(\delta > 0\) small enough, we obtain the conclusion. \(\square \)
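A bound used repeatedly above (for instance in the second step of the last display, and in the estimate following (5.10)) is \(\int _\Omega U_{x, \lambda }^{p-2} r^2 \mathop {}\!\textrm{d}y \lesssim \Vert {(-\Delta )^{s/2}}r\Vert ^2\). We sketch the standard argument, via Hölder's inequality with exponents \(\tfrac{p}{p-2}\) and \(\tfrac{p}{2}\) followed by the Sobolev inequality:

$$\begin{aligned} \int _\Omega U_{x, \lambda }^{p-2} r^2 \mathop {}\!\textrm{d}y \le \left( \int _\Omega U_{x, \lambda }^{p} \mathop {}\!\textrm{d}y \right) ^\frac{p-2}{p} \left( \int _\Omega |r|^{p} \mathop {}\!\textrm{d}y \right) ^\frac{2}{p} \le S^{-1} \Vert U_{0,1}\Vert _{L^p({\mathbb {R}}^N)}^{p-2} \Vert {(-\Delta )^{s/2}}r\Vert ^2, \end{aligned}$$

where we used \(\Vert U_{x, \lambda }\Vert _{L^p({\mathbb {R}}^N)} = \Vert U_{0,1}\Vert _{L^p({\mathbb {R}}^N)}\) by scale invariance of the critical norm.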

6 Proof of the main results

Combining Lemma 5.9 with Lemma 5.5 gives a lower bound on \({\mathcal {S}}_{a + \varepsilon V}[u_\varepsilon ]\). Using the almost-minimizing assumption (1.17) and the expansion from Theorem 2.1, this lower bound can be stated as follows:

$$\begin{aligned} 0&\ge (1 + o(1)) (S - S(a + \varepsilon V)) + {\mathcal {R}} \nonumber \\&\quad + A_{N,s}^{-\frac{N-2s}{N}} \left( (Q_V(x) +o(1)) \varepsilon \lambda ^{-N+2s} - (a(x)+o(1))(\alpha _{N,s} + c_{N,s} d_{N,s} b_{N,s} ) \lambda ^{-2s} \right) , \nonumber \\ \end{aligned}$$
(6.1)

where

$$\begin{aligned} {\mathcal {R}}:= A_{N,s}^\frac{N-2s}{N} \left( a_{N,s} c_{N,s} (1 + o(1)) \phi _a(x) \lambda ^{-N + 2s} + {\mathcal {T}}_2(\phi _a(x), \lambda )\right) + \rho \Vert {(-\Delta )^{s/2}}r\Vert _2^2, \end{aligned}$$

for some \(\rho > 0\), and \({\mathcal {T}}_2(\phi _a(x), \lambda )\) as in (2.5).

Recall that \(\phi _a \ge 0\) by Corollary 2.2 and that \(\phi _a(x)\) is bounded because \(x_0 \in \Omega \). Since \({\mathcal {T}}_2\) is a sum of higher powers \((\phi _a(x) \lambda ^{-N+2\,s})^k\) with \(k \ge 2\), we have \({\mathcal {R}} \ge 0\) for \(\varepsilon \) small enough.

Lemma 6.1

As \(\varepsilon \rightarrow 0\), \(\phi _a(x) = o(1)\). In other words, \(x_0 \in {\mathcal {N}}_a\).

Proof

Since \(S - S(a + \varepsilon V) \ge 0\) and \(\Vert {(-\Delta )^{s/2}}r\Vert _2^2 \ge 0\), the bound (6.1) gives

$$\begin{aligned} \phi _a(x) \lesssim \varepsilon + \lambda ^{N-4s} + \lambda ^{N-2s} {\mathcal {T}}_2(\phi _a(x), \lambda ). \end{aligned}$$

Since \(\phi _a(x)\) is uniformly bounded, we can bound \({\mathcal {T}}_2(\phi _a(x), \lambda ) \lesssim \lambda ^{-2N + 4s}\), which concludes the proof. \(\square \)
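In the last step, the conclusion is obtained from the chain

$$\begin{aligned} \phi _a(x) \lesssim \varepsilon + \lambda ^{-(4s-N)} + \lambda ^{N-2s} \lambda ^{-2N+4s} = \varepsilon + \lambda ^{-(4s-N)} + \lambda ^{-(N-2s)} = o(1), \end{aligned}$$

using that \(\lambda \rightarrow \infty \) as \(\varepsilon \rightarrow 0\) along the concentrating sequence and that \(2s< N < 4s\).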

Lemma 6.2

If \({\mathcal {N}}_a(V) \ne \emptyset \), then \(x_0 \in {\mathcal {N}}_a(V)\).

In the proof of this lemma, we need the assumption (1.12), i.e. that \(a(x) < 0\) on \({\mathcal {N}}_a\).

Proof

By Lemma 6.1 we only need to prove that \(Q_V(x_0) <0\).

Inserting into (6.1) the upper bound on \(S(a+ \varepsilon V)\) from Corollary 2.3, which yields a lower bound on \(S - S(a+ \varepsilon V)\), and using \({\mathcal {R}} \ge 0\), we obtain that

$$\begin{aligned} (Q_V(x) +o(1)) \varepsilon \lambda ^{-N+2s} \le - C_1 \varepsilon ^\frac{2s}{4s-N} - C_2 \lambda ^{-2s}. \end{aligned}$$

Here, the numbers \(C_1\) and \(C_2\) are given by

$$\begin{aligned} C_1:= (1+o(1)) \sigma _{N,s} \sup _{x \in {\mathcal {N}}_a(V)} \frac{|Q_V(x)|^\frac{2s}{4s-N}}{|a(x)|^\frac{N-2s}{4s-N}}, \qquad C_2:= -a(x)+o(1). \end{aligned}$$

Using Lemma 6.1 and the assumption \(a < 0\) on \({\mathcal {N}}_a\), we see that \(C_2\) is strictly positive and bounded away from zero. Since \({\mathcal {N}}_a(V)\) is not empty, the same is true for \(C_1\). Thus, by Young’s inequality,

$$\begin{aligned} - C_1 \varepsilon ^\frac{2s}{4s-N} - C_2 \lambda ^{-2s} \le - c \varepsilon \lambda ^{-N+2s} \end{aligned}$$

for some \(c > 0\). This implies \(Q_V(x_0)< -c < 0\) as desired. \(\square \)
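The Young's inequality step can be made explicit as follows (an elementary sketch; note that \(\tfrac{4s-N}{2s} + \tfrac{N-2s}{2s} = 1\) thanks to \(2s< N < 4s\)):

$$\begin{aligned} \varepsilon \lambda ^{-N+2s} = \varepsilon \cdot \lambda ^{-(N-2s)} \le \frac{4s-N}{2s} \, \varepsilon ^\frac{2s}{4s-N} + \frac{N-2s}{2s} \, \lambda ^{-2s}, \end{aligned}$$

so that \(C_1 \varepsilon ^\frac{2s}{4s-N} + C_2 \lambda ^{-2s} \ge c \, \varepsilon \lambda ^{-N+2s}\) with \(c:= \min \left\{ \tfrac{2s}{4s-N} C_1, \tfrac{2s}{N-2s} C_2\right\} > 0\), which is the inequality used in the last display.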

Now we are ready to prove our main results Theorems 1.4, 1.5, and 1.6.

Proof of Theorem 1.4

By using \({\mathcal {R}} \ge 0\) and minimizing the last term over \(\lambda \), as in the proof of Corollary 2.3, the bound (6.1) implies

$$\begin{aligned} (1+ o(1)) (S(a + \varepsilon V) - S)&\ge - \sigma _{N,s}\frac{|Q_V(x_0)|^\frac{2s}{4s-N}}{|a(x_0)|^\frac{N-2s}{4s-N}} \varepsilon ^\frac{2s}{4s-N} + o\left( \varepsilon ^\frac{2s}{4s-N} \right) \\&\ge - \sigma _{N,s} \sup _{x \in {\mathcal {N}}_a(V)} \frac{|Q_V(x)|^\frac{2s}{4s-N}}{|a(x)|^\frac{N-2s}{4s-N}} \varepsilon ^\frac{2s}{4s-N} + o\left( \varepsilon ^\frac{2s}{4s-N}\right) , \end{aligned}$$

where the last inequality follows from Lemma 6.2. This is equivalent to

$$\begin{aligned} S(a + \varepsilon V) \ge S - \sigma _{N,s} \sup _{x \in {\mathcal {N}}_a(V)} \frac{|Q_V(x)|^\frac{2s}{4s-N}}{|a(x)|^\frac{N-2s}{4s-N}} \varepsilon ^\frac{2s}{4s-N} + o\left( \varepsilon ^\frac{2s}{4s-N} \right) . \end{aligned}$$

Since the matching upper bound has already been proved in Corollary 2.3, the proof of the theorem is complete. \(\square \)
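For orientation, we sketch the elementary minimization over \(\lambda \) invoked above. For constants \(A, B > 0\) (think of \(A \sim |a(x_0)|\) and \(B \sim |Q_V(x_0)|\), up to the \((N,s)\)-dependent prefactors in (6.1); cf. the quantities \(A_\varepsilon \), \(B_\varepsilon \) in the proof of Theorem 1.6 below), the function

$$\begin{aligned} f(\lambda )&:= - B \varepsilon \lambda ^{-(N-2s)} + A \lambda ^{-2s} \quad \text {is minimized at} \quad \lambda _* = \left( \frac{2s A}{(N-2s) B} \right) ^\frac{1}{4s-N} \varepsilon ^{-\frac{1}{4s-N}}, \\ f(\lambda _*)&= - \frac{4s-N}{2s} \left( \frac{N-2s}{2s} \right) ^\frac{N-2s}{4s-N} \frac{B^\frac{2s}{4s-N}}{A^\frac{N-2s}{4s-N}} \, \varepsilon ^\frac{2s}{4s-N}. \end{aligned}$$

This explains both the exponent \(\varepsilon ^\frac{2s}{4s-N}\) and the quotient \(|Q_V(x_0)|^\frac{2s}{4s-N}/|a(x_0)|^\frac{N-2s}{4s-N}\) appearing in the bound above, as well as the form of the asymptotics (6.5) for \(\lambda \) obtained in the proof of Theorem 1.6 below; we do not track here how the \((N,s)\)-dependent factors combine into \(\sigma _{N,s}\).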

Proof of Theorem 1.5

Since \(x_0 \in {\mathcal {N}}_a\) by Lemma 6.1, by assumption we have \(Q_V(x_0) \ge 0\) and \(a(x_0) < 0\). Together with \({\mathcal {R}} \ge 0\), the bound (6.1) then implies

$$\begin{aligned} 0 \ge (1 + o(1)) (S - S(a + \varepsilon V)) + c \lambda ^{-2s} + o(\varepsilon \lambda ^{-N+2s}) \end{aligned}$$

for some \(c > 0\). Since \(o(\varepsilon \lambda ^{-N+2s}) \ge - \frac{c}{2} \lambda ^{-2s} + o(\varepsilon ^\frac{2s}{4s - N})\) by Young’s inequality, this implies \(S(a+\varepsilon V) \ge S + o(\varepsilon ^\frac{2s}{4s-N})\). Since the inequality

$$\begin{aligned} S(a +\varepsilon V) \le S \end{aligned}$$
(6.2)

always holds (e.g. by Theorem 2.1), we obtain \(S(a+\varepsilon V) = S + o(\varepsilon ^\frac{2s}{4s-N})\) as desired.

Now assume that additionally \(Q_V(x_0) > 0\). With \({\mathcal {R}} \ge 0\), (6.1) implies, for \(\varepsilon > 0\) small enough and some \(C_1, C_2 > 0\),

$$\begin{aligned} S(a + \varepsilon V) - S \ge C_1 \varepsilon \lambda ^{-N+2s} + C_2 \lambda ^{-2s} > 0, \end{aligned}$$

which contradicts (6.2). Thus assumption (3.19), under which we have worked so far, cannot be satisfied, and we must have \(S(a + \varepsilon _0 V) = S\) for some \(\varepsilon _0 > 0\). Since \(S(a + \varepsilon V)\) is concave in \(\varepsilon \) (being the infimum of functions linear in \(\varepsilon \)) and since \(S(a) = S\), we must have \(S(a + \varepsilon V) = S\) for all \(\varepsilon \in [0, \varepsilon _0]\). \(\square \)
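The concavity argument in the last step can be spelled out briefly: for fixed u, the map \(\varepsilon \mapsto {\mathcal {S}}_{a + \varepsilon V}[u]\) is affine, so \(\varepsilon \mapsto S(a + \varepsilon V)\), being an infimum of affine functions, is concave. Hence, for \(t \in [0,1]\),

$$\begin{aligned} S(a + t \varepsilon _0 V) \ge (1-t)\, S(a) + t\, S(a + \varepsilon _0 V) = (1-t)\, S + t\, S = S, \end{aligned}$$

and together with (6.2) this forces \(S(a + \varepsilon V) = S\) for all \(\varepsilon \in [0, \varepsilon _0]\).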

Proof of Theorem 1.6

We may first observe that the upper and lower bounds on \(S(a + \varepsilon V)\) already discussed in the proof of Theorem 1.4 imply

$$\begin{aligned} \frac{|Q_V(x_0)|^\frac{2s}{4s-N}}{|a(x_0)|^\frac{N-2s}{4s-N}} = \sup _{x \in {\mathcal {N}}_a} \frac{|Q_V(x)|^\frac{2s}{4s-N}}{|a(x)|^\frac{N-2s}{4s-N}}. \end{aligned}$$
(6.3)

Now, by using additionally Lemma B.6, the estimate (6.1) becomes

$$\begin{aligned} (1 + o(1)) \left( S(a +\varepsilon V) -S\right) \ge - \sigma _{N,s} \frac{|Q_V(x_0)|^\frac{2s}{4s-N}}{|a(x_0)|^\frac{N-2s}{4s-N}} \varepsilon ^\frac{2s}{4s-N} + {\mathcal {R}}' + o\left( \varepsilon ^\frac{2s}{4s-N}\right) , \end{aligned}$$
(6.4)

where

$$\begin{aligned} {\mathcal {R}}' = {\left\{ \begin{array}{ll} {\mathcal {R}} + c_0 \varepsilon ^\frac{2s-2}{4s-N} \left( \lambda ^{-1} - \lambda _0(\varepsilon )^{-1}\right) ^2 &{} \quad \text {if } \left( \frac{A_\varepsilon }{B_\varepsilon }\right) ^{\frac{1}{4s-N}} \varepsilon ^{-\frac{1}{4s-N}} \lambda ^{-1} \le 2 \left( \frac{2s}{N-2s}\right) ^\frac{1}{N-4s}, \\ {\mathcal {R}} + c_0 \varepsilon ^\frac{2s}{4s-N} &{} \quad \text {if } \left( \frac{A_\varepsilon }{B_\varepsilon }\right) ^{\frac{1}{4s-N}} \varepsilon ^{-\frac{1}{4s-N}} \lambda ^{-1} > 2 \left( \frac{2s}{N-2s}\right) ^\frac{1}{N-4s}, \end{array}\right. } \end{aligned}$$

in the notation of Lemma B.6, for \(A_\varepsilon := A_{N,s}^{-\frac{N-2\,s}{N}} (\alpha _{N,s} + c_{N,s} d_{N,s} b_{N,s} )(|a(x_0)|+o(1))\) and \(B_\varepsilon := A_{N,s}^{-\frac{N-2\,s}{N}} (|Q_V(x_0)| + o(1)) \), with \(\lambda _0(\varepsilon )\) given by (B.5). Now applying in (6.4) the upper bound on \(S(a +\varepsilon V)\) from Corollary 2.3 yields

$$\begin{aligned} {\mathcal {R}}' = o(\varepsilon ^\frac{2s}{4s-N} ). \end{aligned}$$

The terms that make up \({\mathcal {R}}'\) being separately nonnegative, this implies \({\mathcal {R}} = o(\varepsilon ^\frac{2s}{4s-N})\) and \((\lambda ^{-1} - \lambda _0(\varepsilon )^{-1})^2 = o(\varepsilon ^\frac{2}{4\,s-N})\), that is,

$$\begin{aligned} \lambda&= \left( \frac{2s A_\varepsilon }{(N-2s) B_\varepsilon } \right) ^\frac{1}{4s-N} \varepsilon ^{-\frac{1}{4s-N}} + o\left( \varepsilon ^{-\frac{1}{4s-N}}\right) \nonumber \\&= \left( \frac{2s (\alpha _{N,s} + c_{N,s} d_{N,s} b_{N,s} ) |a(x_0)| }{(N-2s) |Q_V(x_0)| } \right) ^\frac{1}{4s-N} \varepsilon ^{-\frac{1}{4s-N}} + o\left( \varepsilon ^{-\frac{1}{4s-N}}\right) \end{aligned}$$
(6.5)

and

$$\begin{aligned} \Vert {(-\Delta )^{s/2}}r\Vert _2 = o\left( \varepsilon ^\frac{s}{4s-N}\right) . \end{aligned}$$
(6.6)

Inserting the asymptotics of \(\lambda \) back into \({\mathcal {R}} = o\left( \varepsilon ^\frac{2\,s}{4\,s-N}\right) \) now gives

$$\begin{aligned} \phi _a(x) = o(\varepsilon ). \end{aligned}$$
(6.7)
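Indeed, \({\mathcal {R}}\) contains the term \(a_{N,s} c_{N,s} (1+o(1)) \phi _a(x) \lambda ^{-N+2s}\), and (6.5) gives \(\lambda ^{-N+2s} = (c + o(1))\, \varepsilon ^\frac{N-2s}{4s-N}\) for some constant \(c > 0\); hence (using, as in the discussion after (6.1), that \({\mathcal {T}}_2\) is dominated by this term) the relation \({\mathcal {R}} = o(\varepsilon ^\frac{2s}{4s-N})\) translates into the exponent count

$$\begin{aligned} \phi _a(x) = o\left( \varepsilon ^{\frac{2s}{4s-N}} \lambda ^{N-2s} \right) = o\left( \varepsilon ^{\frac{2s}{4s-N} - \frac{N-2s}{4s-N}} \right) = o\left( \varepsilon ^{\frac{4s-N}{4s-N}} \right) = o(\varepsilon ). \end{aligned}$$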

It remains to derive the claimed expansion for \(\alpha \). From Lemma 5.7, we deduce

$$\begin{aligned} |\alpha |^{-\frac{2N}{N-2s}} \int _\Omega u_\varepsilon ^\frac{2N}{N-2s} \mathop {}\!\textrm{d}y = \int _\Omega \psi _{x, \lambda }^\frac{2N}{N-2s} \mathop {}\!\textrm{d}y + {\mathcal {O}}\left( \Vert {(-\Delta )^{s/2}}t\Vert _2 + \Vert {(-\Delta )^{s/2}}r\Vert ^2_2 + \lambda ^{-N+2s}\right) . \end{aligned}$$

Using the bound \(\Vert {(-\Delta )^{s/2}}t\Vert _2 \lesssim \lambda ^{-N+2s}\), together with (6.5), (6.6) and the expansion of \(\int _\Omega \psi _{x, \lambda }^\frac{2N}{N-2s}\) from Theorem 2.1, we obtain

$$\begin{aligned} |\alpha |^{-p} = 1 + {\mathcal {O}}(\lambda ^{-N+2s}) = 1 + {\mathcal {O}}\left( \varepsilon ^\frac{N-2s}{4s-N}\right) . \end{aligned}$$

This completes the proof of Theorem 1.6. \(\square \)