1 Introduction

1.1 Background

Recall that the Sobolev inequality for exponent 2 states that there exists a positive constant \(S>0\) such that

$$\begin{aligned} S \Vert u\Vert _{L^{{2}^*}} \le \left\Vert \nabla u\right\Vert _{L^2} \end{aligned}$$
(1.1)

for \(n\ge 3\), \(u\in C^{\infty }_c(\mathbb {R}^n)\) and \(2^{*}=\frac{2n}{n-2}.\) By density, one can extend this inequality to functions in the homogeneous Sobolev space \(\dot{H}^1(\mathbb {R}^n)\). Talenti [20] and Aubin [1] independently computed the optimal \(S>0\) and showed that the extremizers of (1.1) are functions of the form

$$\begin{aligned} U[z,\lambda ](x) = c_{n}\left( \frac{\lambda }{1+\lambda ^2|x-z|^2}\right) ^{(n-2)/2}, \end{aligned}$$
(1.2)

where \(c_n>0\) is a dimension-dependent constant. Furthermore, it is well known that the Euler–Lagrange equation associated with the inequality (1.1) is

$$\begin{aligned} \Delta u + u|u|^{2^*-2}=0. \end{aligned}$$
(1.3)

In particular, Caffarelli, Gidas, and Spruck in [2] showed that the Talenti bubbles are the only positive solutions of the equation

$$\begin{aligned} \Delta u + u^{p}=0, \end{aligned}$$
(1.4)

where \(p=2^{*}-1.\) Geometrically, Eq. (1.4) arises in the context of the Yamabe problem, which, given a Riemannian manifold \((M,g_0)\) of dimension \(n\ge 3\), asks for a metric g conformal to \(g_0\) with prescribed scalar curvature equal to some function R. When \((M,g_0)\) is the round sphere and R is a positive constant, the result of Caffarelli, Gidas, and Spruck can be seen as a rigidity result in the sense that the only possible conformal factors are the Talenti bubbles defined in (1.2). For more details regarding the geometric perspective see the discussion in Section 1.1 in [4].

Following the rigidity result discussed above, it is natural to consider the case of almost rigidity. Here one asks whether being an approximate solution of (1.3) implies that u is close to a Talenti bubble. We first observe that this statement is not always true. For instance, consider two bubbles \(U_1 = U[Re_1,1]\) and \(U_2=U[-Re_1,1]\), where \(R\gg 1\) and \(e_1=(1,0,\ldots ,0)\in \mathbb {R}^n\). Then \(u=U_1+U_2\) is an approximate solution of (1.3) in a well-defined sense; however, it is clear that u is not close to either of the two bubbles. Thus, it is possible that if u approximately solves (1.3), then it is close to a sum of Talenti bubbles. Indeed, Struwe in [19] showed that the above case, also known as bubbling, is the only possible worst case. More precisely, he showed the following

Theorem 1.1

(Struwe). Let \(n \ge 3\) and \(\nu \ge 1\) be positive integers. Let \(\left( u_{k}\right) _{k \in \mathbb {N}} \subseteq \dot{H}^{1}\left( \mathbb {R}^{n}\right) \) be a sequence of nonnegative functions such that

$$\begin{aligned} \left( \nu -\frac{1}{2}\right) S^{n} \le \int _{\mathbb {R}^{n}}\left| \nabla u_{k}\right| ^{2} \le \left( \nu +\frac{1}{2}\right) S^{n} \end{aligned}$$
(1.5)

with \(S=S(n)\) being the sharp constant for the Sobolev inequality defined in (1.1). Assume that

$$\begin{aligned} \left\| \Delta u_{k}+u_{k}^{2^{*}-1}\right\| _{H^{-1}} \rightarrow 0 \quad \text{ as } k \rightarrow \infty \text{. } \end{aligned}$$
(1.6)

Then there exist a sequence \(\left( z_{1}^{(k)}, \ldots , z_{\nu }^{(k)}\right) _{k \in \mathbb {N}}\) of \(\nu \) -tuples of points in \(\mathbb {R}^{n}\) and a sequence \(\left( \lambda _{1}^{(k)}, \ldots , \lambda _{\nu }^{(k)}\right) _{k \in \mathbb {N}}\) of \(\nu \) -tuples of positive real numbers such that

$$\begin{aligned} \left\| \nabla \left( u_{k}-\sum _{i=1}^{\nu } U\left[ z_{i}^{(k)}, \lambda _{i}^{(k)}\right] \right) \right\| _{L^{2}} \rightarrow 0 \quad \text{ as } k \rightarrow \infty . \end{aligned}$$
(1.7)

Morally, the above statement says that if the energy of a function \(u_k\) is comparable to the energy of \(\nu \) bubbles as in (1.5), and if \(u_k\) almost solves (1.3) in the sense that the deficit (1.6) is small, then \(u_k\) is close to \(\nu \) bubbles in the \(\dot{H}^1\) sense. For instance, when \(\nu =1\) the energy constraint in (1.5) forbids us from writing \(u_k\) as the sum of two bubbles, since that would imply that \(u_k\) has energy close to \(2S^n\). Thus, Struwe’s result gives us a qualitative answer to the almost rigidity problem posed earlier.

Building upon this work, Ciraolo, Figalli, and Maggi in [4] proved the first sharp quantitative stability result around one bubble, i.e. the case \(\nu =1\). Their result implies that the distance to the bubble, as in (1.7), is linearly controlled by the deficit in (1.6). Thus one might be inclined to conjecture that the same should hold in the case of more than one bubble, i.e. when \(\nu \ge 2.\)

Figalli and Glaudo in [12] investigated this problem and gave a positive answer for any dimension \(n\ge 3\) when the number of bubbles \(\nu = 1\), and for dimensions \(3\le n \le 5\) when the number of bubbles \(\nu \ge 2.\) More precisely, they proved the following

Theorem 1.2

(Figalli–Glaudo). Let the dimension satisfy \(3\le n\le 5\) when \(\nu \ge 2\), or \(n\ge 3\) when \(\nu =1\). Then there exist a small constant \(\delta (n,\nu ) > 0\) and a constant \(C(n,\nu )>0\) such that the following statement holds. Let \(u\in \dot{H}^1(\mathbb {R}^n)\) be such that

$$\begin{aligned} \left\Vert {\nabla u - \sum _{i=1}^{\nu }\nabla \tilde{U}_i}\right\Vert _{L^2} \le \delta \end{aligned}$$
(1.8)

where \(\left( \tilde{U}_{i}\right) _{1 \le i \le \nu }\) is any family of \(\delta \)-interacting Talenti bubbles. Then there exists a family of Talenti bubbles \((U_i)_{i=1}^{\nu }\) such that

$$\begin{aligned} \left\Vert {\nabla u - \sum _{i=1}^{\nu }\nabla U_i}\right\Vert _{L^2} \le C \Vert \Delta u + u|u|^{p-1}\Vert _{H^{-1}}, \end{aligned}$$
(1.9)

where \(p=2^{*}-1=\frac{n+2}{n-2}.\)

Here, given \(\nu \ge 1\) and \(\delta > 0\), a family of bubbles \((U_i[z_i, \lambda _i])_{i=1}^{\nu }\) is said to be \(\delta \)-interacting if

$$\begin{aligned} Q = \max _{i\ne j}\min \left( \frac{\lambda _i}{\lambda _j}, \frac{\lambda _j}{\lambda _i}, \frac{1}{\lambda _i\lambda _j|z_i-z_j|^2} \right) \le \delta . \end{aligned}$$
(1.10)

Note that the definition of \(\delta \)-interaction between bubbles follows naturally by estimating the \(\dot{H}^1\)-interaction between any two bubbles. This is explained in Remark 3.2 in [12], which we recall here for the reader’s convenience. If \(U_1=U\left[ z_1, \lambda _1\right] \) and \(U_2=U\left[ z_2, \lambda _2\right] \) are two bubbles, then from the interaction estimate in Proposition B.2 in [12] and the fact that \(-\Delta U_i=U_i^{2^*-1}\) for \(i=1,2\), we have

$$\begin{aligned} \int _{\mathbb {R}^n} \nabla U_1 \cdot \nabla U_2=\int _{\mathbb {R}^n} U_1^{2^*-1} U_2 \approx \min \left( \frac{\lambda _1}{\lambda _2}, \frac{\lambda _2}{\lambda _1}, \frac{1}{\lambda _1 \lambda _2\left| z_1-z_2\right| ^2}\right) ^{\frac{n-2}{2}}. \end{aligned}$$

In particular, if \(U_1\) and \(U_2\) belong to a \(\delta \)-interacting family then their \(\dot{H}^1\)-scalar product is bounded by \(\delta ^{\frac{n-2}{2}}\).
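For instance, for the two bubbles \(U_1 = U[Re_1,1]\) and \(U_2=U[-Re_1,1]\) considered above, the quantity in (1.10) is

$$\begin{aligned} Q = \min \left( 1,\,1,\,\frac{1}{(2R)^2}\right) = \frac{1}{4R^2}, \end{aligned}$$

so the pair is \(\delta \)-interacting as soon as \(R\ge \frac{1}{2\sqrt{\delta }}\), and its \(\dot{H}^1\)-scalar product is of order \(R^{2-n}\).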

In the proof of the above theorem, the authors first approximate the function u by a linear combination \(\sigma = \sum _{i=1}^\nu \alpha _i U_i[z_i,\lambda _i]\) where \(\alpha _i\) are real-valued coefficients. They then show that each coefficient \(\alpha _i\) is close to 1. To quantify this notion we say that the family \((\alpha _i, U_i[z_i, \lambda _i])_{i=1}^{\nu }\) is \(\delta \)-interacting if (1.10) holds and

$$\begin{aligned} |\alpha _i-1|\le \delta . \end{aligned}$$
(1.11)

The authors of [12] also constructed counterexamples in the higher-dimensional case \(n\ge 6\), proving that the estimate (1.9) does not hold there. The optimal estimates in dimension \(n\ge 6\) were recently established by Deng, Sun, and Wei in [8]. They proved the following

Theorem 1.3

(Deng–Sun–Wei). Let the dimension \(n \ge 6 \) and the number of bubbles \(\nu \ge 2\). Then there exist \(\delta =\delta (n, \nu )>0\) and a large constant \(C=C(n,\nu )>0\) such that the following holds. Let \(u \in \dot{H}^{1}\left( \mathbb {R}^{n}\right) \) be a function such that

$$\begin{aligned} \left\Vert \nabla u-\sum _{i=1}^{\nu } \nabla \tilde{U}_{i}\right\Vert _{L^2} \le \delta \end{aligned}$$
(1.12)

where \(\left( \tilde{U}_{i}\right) _{1 \le i \le \nu }\) is a \(\delta \)-interacting family of Talenti bubbles. Then there exists a family of Talenti bubbles \((U_i)_{i=1}^{\nu }\) such that

$$\begin{aligned} \left\Vert \nabla u-\sum _{i=1}^{\nu } \nabla U_{i}\right\Vert _{L^2} \le C\left\{ \begin{array}{ll} \Gamma |\log \Gamma |^{\frac{1}{2}} &{} \text{ if } n=6, \\ \Gamma ^{\frac{p}{2}} &{} \text{ if } n \ge 7 \end{array}\right. \end{aligned}$$
(1.13)

for \(\Gamma =\left\| \Delta u+u|u|^{p-1}\right\| _{H^{-1}}\) and \(p=2^{*}-1=\frac{n+2}{n-2}.\)

Given these results, it is natural to wonder whether they generalize to other settings. One possibility is to observe that the Sobolev inequality (1.1) can be generalized to functions in fractional Sobolev spaces. This generalization is better known as the Hardy–Littlewood–Sobolev (HLS) inequality, which states that there exists a positive constant \(S>0\) such that for all \(u\in C^{\infty }_{c}(\mathbb {R}^n)\), we have

$$\begin{aligned} S \Vert u\Vert _{L^{{2}^*}} \le \left\Vert (-\Delta )^{s/2}u\right\Vert _{L^2} \end{aligned}$$
(1.14)

where \(s\in (0,1)\), \(n>2s\), and \(2^* = \frac{2n}{n-2s}.\) Observe that by a density argument, the HLS inequality also holds for all functions \(u\in \dot{H}^s(\mathbb {R}^n),\) where we define the fractional homogeneous Sobolev space \(\dot{H}^s(\mathbb {R}^n)=\dot{W}^{s,2}(\mathbb {R}^n)\) as the closure of the space of test functions \(C_c^{\infty }(\mathbb {R}^n)\) with respect to the norm

$$\begin{aligned} \left\Vert {u}\right\Vert _{\dot{H}^s}^2 = \left\Vert (-\Delta )^{s/2} u\right\Vert _{L^2}^2 = \frac{C_{n,s}}{2} \int _{\mathbb {R}^n} \int _{\mathbb {R}^n} \frac{|u(x)-u(y)|^2}{|x-y|^{n+2s}} dx dy \end{aligned}$$

equipped with the natural inner product for \(u,v\in \dot{H}^s\)

$$\begin{aligned} \langle u,v\rangle _{\dot{H}^s} = \frac{C_{n,s}}{2}\int _{\mathbb {R}^n} \int _{\mathbb {R}^n} \frac{(u(x)-u(y)) (v(x)-v(y)) }{|x-y|^{n+2s}} dx dy, \end{aligned}$$

where \(C_{n,s}>0\) is a constant depending on n and s. Furthermore Lieb in [16] found the optimal constant and established that the extremizers of (1.14) are functions of the form

$$\begin{aligned} U[z,\lambda ](x) = c_{n,s} \left( \frac{\lambda }{1+ \lambda ^2|x-z|^2}\right) ^{(n-2s)/2} \end{aligned}$$
(1.15)

for some \(\lambda >0\) and \(z\in \mathbb {R}^n\), where we choose the constant \(c_{n,s}\) such that the bubble \(U(x)=U[z,\lambda ](x)\) satisfies

$$\begin{aligned} \int _{\mathbb {R}^n} |(-\Delta )^{s/2}U|^2 = \int _{\mathbb {R}^n} U^{2^*}=S^{n/s}. \end{aligned}$$
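One can check that such a choice is possible: after multiplication by a suitable constant, the extremizer U solves the Euler–Lagrange equation \((-\Delta )^s U = U^{2^*-1}\) (see (1.16) below), and testing this equation against U gives \(\int _{\mathbb {R}^n} |(-\Delta )^{s/2}U|^2 = \int _{\mathbb {R}^n} U^{2^*}\). Combining this with equality in (1.14) yields

$$\begin{aligned} S\left( \int _{\mathbb {R}^n} U^{2^*}\right) ^{1/2^*} = \left( \int _{\mathbb {R}^n} U^{2^*}\right) ^{1/2}, \quad \text{ and hence }\quad \int _{\mathbb {R}^n} U^{2^*} = S^{\frac{1}{1/2-1/2^*}} = S^{n/s}. \end{aligned}$$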

Similar to the Sobolev inequality, the Euler–Lagrange equation associated with (1.14) is

$$\begin{aligned} (-\Delta )^s u = u^{p}, \end{aligned}$$
(1.16)

where \(p=2^*-1=\frac{n+2s}{n-2s}.\) Chen, Li, and Ou in [3] showed that the only positive solutions to (1.16) are the bubbles described in (1.15). Following this rigidity result, Palatucci and Pisante in [18] proved an analog of Struwe’s result on any bounded subset of \(\mathbb {R}^n\). However, as noted in [9], the same proof carries over to \(\mathbb {R}^n\). We state the result in the multi-bubble case (i.e. when \(\nu \ge 1\)); see also Lemma 2.1 in [9] for the single-bubble case.

Theorem 1.4

(Palatucci–Pisante, Nitti–König). Let \(n\) and \(\nu \ge 1\) be positive integers, let \(0<s<n / 2\), and let \(\left( u_{k}\right) _{k \in \mathbb {N}} \subseteq \dot{H}^{s}\left( \mathbb {R}^{n}\right) \) be a sequence of functions such that

$$\begin{aligned} \left( \nu -\frac{1}{2}\right) S^{n/s} \le \int _{\mathbb {R}^{n}}\left| (-\Delta )^{s/2} u_{k}\right| ^{2} \le \left( \nu +\frac{1}{2}\right) S^{n/s} \end{aligned}$$
(1.17)

with \(S=S(n,s)\) being the sharp constant for the HLS inequality defined in (1.14). Assume that

$$\begin{aligned} \left\| (-\Delta )^{s} u_{k}-u_{k}^{2^{*}-1}\right\| _{\dot{H}^{-s}} \rightarrow 0 \quad \text{ as } k \rightarrow \infty \text{. } \end{aligned}$$
(1.18)

Then there exist a sequence \(\left( z_{1}^{(k)}, \ldots , z_{\nu }^{(k)}\right) _{k \in \mathbb {N}}\) of \(\nu \) -tuples of points in \(\mathbb {R}^{n}\) and a sequence \(\left( \lambda _{1}^{(k)}, \ldots , \lambda _{\nu }^{(k)}\right) _{k \in \mathbb {N}}\) of \(\nu \) -tuples of positive real numbers such that

$$\begin{aligned} \left\| u_{k}-\sum _{i=1}^{\nu } U\left[ z_{i}^{(k)}, \lambda _{i}^{(k)}\right] \right\| _{\dot{H}^{s}} \rightarrow 0 \quad \text{ as } k \rightarrow \infty . \end{aligned}$$
(1.19)

Thus following the above qualitative result, we establish the following sharp quantitative almost rigidity for the critical points of the HLS inequality

Theorem 1.5

Let the dimension satisfy \(n >2s\), where \(s\in (0,1)\), and let the number of bubbles \(\nu \ge 1\). Then there exist \(\delta =\delta (n,\nu , s)>0\) and a large constant \(C=C(n,\nu , s)>0\) such that the following holds. Let \(u \in \dot{H}^{s}\left( \mathbb {R}^{n}\right) \) be a function such that

$$\begin{aligned} \left\| u-\sum _{i=1}^{\nu } \tilde{U}_{i}\right\| _{\dot{H}^s} \le \delta \end{aligned}$$
(1.20)

where \(\left( \tilde{U}_{i}\right) _{1 \le i \le \nu }\) is a \(\delta \)-interacting family of Talenti bubbles. Then there exists a family of Talenti bubbles \(\left( {U}_{i}\right) _{1 \le i \le \nu }\) such that

$$\begin{aligned} \left\| u-\sum _{i=1}^{\nu } U_{i}\right\| _{\dot{H}^s} \le C\left\{ \begin{array}{ll} \Gamma &{} \text{ if } 2s< n < 6s,\\ \Gamma |\log \Gamma |^{\frac{1}{2}} &{} \text{ if } n = 6s\\ \Gamma ^{\frac{p}{2}} &{} \text{ if } n > 6s \end{array}\right. \end{aligned}$$
(1.21)

for \(\Gamma =\left\| (-\Delta )^{s} u-u|u|^{p-1}\right\| _{H^{-s}}\) and \(p=2^*-1=\frac{n+2s}{n-2s}.\)

Note that, analogous to the local case \(s=1\), the definition of \(\delta \)-interaction between bubbles carries over naturally: given any two bubbles \(U_1=U\left[ z_1, \lambda _1\right] \) and \(U_2=U\left[ z_2, \lambda _2\right] \), we can estimate their interaction using Proposition B.2 in [12] and the fact that \((-\Delta )^s U_i=U_i^p\) for \(i=1,2\) to obtain

$$\begin{aligned} \int _{\mathbb {R}^n} (-\Delta )^{s/2} U_1 \cdot (-\Delta )^{s/2} U_2=\int _{\mathbb {R}^n} U_1^p U_2 \approx \min \left( \frac{\lambda _1}{\lambda _2}, \frac{\lambda _2}{\lambda _1}, \frac{1}{\lambda _1 \lambda _2\left| z_1-z_2\right| ^2}\right) ^{\frac{n-2s}{2}}. \end{aligned}$$

In particular, if \(U_1\) and \(U_2\) belong to a \(\delta \)-interacting family then their \(\dot{H}^s\)-scalar product is bounded by \(\delta ^{\frac{n-2s}{2}}\). Furthermore, the conditions on the scaling and translation parameters in Theorem 1.1 of Palatucci and Pisante [18] imply that the bubbling observed in the non-local setting is caused by the same mechanism as in the local case, i.e. it occurs when either the distance between the centers of the bubbles or their relative concentration scales (such as \(\lambda _i/\lambda _j\) in (1.31)) tend to infinity.

The above theorem, besides being a natural generalization, also provides a quantitative rate of convergence to equilibrium for the non-local fast diffusion equation

$$\begin{aligned} {\left\{ \begin{array}{ll}\partial _t u+(-\Delta )^s\left( |u|^{m-1} u\right) =0, &{} t>0, x \in \mathbb {R}^n \\ u(0, x)=u_0(x), &{} x \in \mathbb {R}^n\end{array}\right. } \end{aligned}$$
(1.22)

with

$$\begin{aligned} m:=\frac{1}{p}=\frac{1}{2^*-1}=\frac{n-2 s}{n+2 s}. \end{aligned}$$

In particular, Nitti and König in [9] established the following result

Theorem 1.6

(Nitti–König). Let \(n \in \mathbb {N}\) and \(s \in (0, \min \{1, n / 2\})\). Let \(u_0 \in C^2\left( \mathbb {R}^n\right) \) be such that \(u_0 \ge 0\) and \(|x|^{2s-n} u_0^m\left( x /|x|^2\right) \) can be extended to a positive \(C^2\) function at the origin. Then there exists an extinction time \(\bar{T}=T\left( u_0\right) \in (0, \infty )\) such that the solution u of (1.22) satisfies \(u(t, x)>0\) for \(t \in (0, \bar{T})\) and \(u(\bar{T}, \cdot ) \equiv 0\). Moreover, there exist \(z \in \mathbb {R}^n\) and \(\lambda >0\) such that

$$\begin{aligned} \left\| \frac{u(t, \cdot )}{U_{\bar{T}, z, \lambda }(t, \cdot )}-1\right\| _{L^{\infty }\left( \mathbb {R}^n\right) } \le C_*(\bar{T}-t)^\kappa \quad \forall t \in (0, \bar{T}), \end{aligned}$$

for some \(C_*>0\) depending on the initial datum \(u_0\) and all \(\kappa <\kappa _{n, s}\), where

$$\begin{aligned} \kappa _{n, s}:=\frac{1}{(n+2-2 s)(p-1)} \gamma _{n, s}^2 \end{aligned}$$

with \(p=\frac{n+2\,s}{n-2\,s}\), \(\gamma _{n, s}=\frac{4\,s}{n+2\,s+2}\) and

$$\begin{aligned} U_{\bar{T}, z, \lambda }(t, x):=\left( \frac{p-1}{p}\right) ^{\frac{p}{p-1}}(\bar{T}-t)^{\frac{p}{p-1}} U[z, \lambda ]^p(x), \quad t \in (0, \bar{T}), x \in \mathbb {R}^n \end{aligned}$$

with \(\frac{p}{p-1}=\frac{1}{1-m}=\frac{n+2 s}{4 s}\).
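One can check directly that \(U_{\bar{T}, z, \lambda }\) is the separable solution of (1.22) generated by the bubble \(U[z,\lambda ]\): writing \(c_p=\left( \frac{p-1}{p}\right) ^{\frac{p}{p-1}}\) and using \((-\Delta )^s U[z,\lambda ]=U[z,\lambda ]^p\) together with \(m=1/p\), we have \(U_{\bar{T}, z, \lambda }^{m} = c_p^{1/p}(\bar{T}-t)^{\frac{1}{p-1}} U[z,\lambda ]\), and therefore

$$\begin{aligned} \partial _t U_{\bar{T}, z, \lambda }+(-\Delta )^s\left( U_{\bar{T}, z, \lambda }^{m}\right) = \left( -\tfrac{p}{p-1}c_p + c_p^{1/p}\right) (\bar{T}-t)^{\frac{1}{p-1}}\, U[z,\lambda ]^p = 0, \end{aligned}$$

since \(\tfrac{p}{p-1}c_p = \left( \tfrac{p-1}{p}\right) ^{\frac{1}{p-1}} = c_p^{1/p}\).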

Using Theorem 1.5 one can also establish Theorem 1.6 by following the arguments of the proof of Theorem 5.1 in [12]. However, such an approach would not provide explicit bounds on the rate parameter \(\kappa \). The remarkable point of Theorem 1.6 is that the authors provide an explicit bound on the parameter \(\kappa .\)

1.2 Proof sketch and challenges

The proof of Theorem 1.5 follows by adapting the arguments in [12] and [8]; however, we need to overcome several difficulties introduced by the non-locality of the fractional Laplacian.

To understand the technical challenges involved, we sketch the proof of Theorem 1.5. First, consider the case when \(n\in (2s,6s)\). Then the linear estimate in (1.21) follows by adapting the arguments in [12]. The starting point of their argument is to approximate the function u by a linear combination of Talenti bubbles. This can be achieved, for instance, by solving the following minimization problem

$$\begin{aligned} \left\Vert u -\sum _{i=1}^{\nu }\alpha _i U[z_i, \lambda _i] \right\Vert _{\dot{H}^s}= {\mathop {\min }_{\begin{array}{c} \tilde{z}_{1}, \ldots , \tilde{z}_{\nu } \in \mathbb {R}^n \\ \tilde{\lambda }_{1}, \ldots , \tilde{\lambda }_{\nu }\in \mathbb {R}_{+}, \tilde{\alpha }_1,\ldots ,\tilde{\alpha }_{\nu } \in \mathbb {R} \end{array}}} \left\Vert u-\sum _{i=1}^{\nu }\tilde{\alpha }_i U[\tilde{z}_i, \tilde{\lambda }_i]\right\Vert _{\dot{H}^s} \end{aligned}$$

where \(U_i = U[z_i, \lambda _i]\) and \(\sigma = \sum _{i=1}^\nu \alpha _i U_i\) is the linear combination of Talenti bubbles closest to the function u in the \(\dot{H}^{s}(\mathbb {R}^n)\) norm. Denote the error between u and this approximation by \(\rho \), i.e.

$$\begin{aligned} u = \sigma + \rho . \end{aligned}$$

Now the idea is to estimate the \(\dot{H}^s\) norm of \(\rho \). We compute

$$\begin{aligned} \left\Vert \rho \right\Vert _{\dot{H}^s}^2=\langle \rho ,\rho \rangle _{\dot{H}^s}&= \langle \rho ,u-\sigma \rangle _{\dot{H}^s} =\langle \rho ,u\rangle _{\dot{H}^s} = \langle \rho ,(-\Delta )^s u\rangle _{L^2} \\&= \langle \rho ,(-\Delta )^s u-u|u|^{p-1}\rangle _{L^2} + \langle \rho ,u|u|^{p-1}\rangle _{L^2}\\&\le \left\Vert \rho \right\Vert _{\dot{H}^s} \Vert (-\Delta )^s u-u|u|^{p-1}\Vert _{{H}^{-s}} + \int _{\mathbb {R}^n} u|u|^{p-1}\rho . \end{aligned}$$

where in the third equality we made use of the orthogonality between \(\rho \) and \(\sigma \), which follows from differentiating the minimization problem with respect to the coefficients \(\alpha _i\). On expanding \(u=\sigma +\rho \) we can further estimate the second term in the final inequality as follows

$$\begin{aligned} \int _{\mathbb {R}^n} \rho u|u|^{p-1}&\le p\int _{\mathbb {R}^n} |\sigma |^{p-1} \rho ^2 \\&\quad + C_{n,\nu }\left( \int _{\mathbb {R}^n}|\sigma |^{p-2} |\rho |^3 + \int _{\mathbb {R}^n}|\rho |^{p+1}+\sum _{1\le i\ne j\le \nu } \int _{\mathbb {R}^n}\rho U_i^{p-1} U_j\right) . \end{aligned}$$

The argument for controlling the second and third terms follows in the same way as in [12], and this is where one uses the fact that \(n<6s.\) However, it is not so clear how to estimate the first and the last terms. For the first term, ideally, we would like to show that

$$\begin{aligned} p \int _{\mathbb {R}^n} |\sigma |^{p-1} \rho ^2 \le \tilde{c} \left\Vert \rho \right\Vert _{\dot{H}^s}^2 \end{aligned}$$

where \(\tilde{c}<1\) is a positive constant. In [12], the authors make use of a spectral argument that essentially says that the third eigenvalue of a linearized operator associated with (1.16) is strictly larger than p. Such a result had not been proved in the non-local setting, and thus our first contribution is to prove this fact rigorously in Sect. 2.2.

By linearizing around a single bubble we deduce that

$$\begin{aligned} p \int _{\mathbb {R}^n} |U|^{p-1} \rho ^2 \le \tilde{c} \left\Vert \rho \right\Vert _{\dot{H}^s}^2 \end{aligned}$$

however, recall that our goal is to estimate \(\int _{\mathbb {R}^n} |\sigma |^{p-1} \rho ^2\) instead of \(\int _{\mathbb {R}^n} |U|^{p-1} \rho ^2.\) Thus one would like to localize \(\sigma \) by a bump function \(\Phi _i\) such that \(\sigma \Phi _i \approx \alpha _i U_i.\) This allows us to show that

$$\begin{aligned} \int _{\mathbb {R}^{n}}\left( \rho \Phi _{i}\right) ^{2} U_{i}^{p-1} \le \frac{1}{\Lambda } \int _{\mathbb {R}^{n}}\left| (-\Delta )^{s/2}\left( \rho \Phi _{i}\right) \right| ^{2}+o(1)\left\Vert \rho \right\Vert _{\dot{H}^s}^{2} \end{aligned}$$

where \(\Lambda >p.\) Now observe that in the local setting, estimating the first term on the right-hand side of the above inequality would be elementary since we could simply write the following identity

$$\begin{aligned} \int _{\mathbb {R}^n}\left| \nabla \left( \rho \Phi _i\right) \right| ^2=\int _{\mathbb {R}^n}|\nabla \rho |^2 \Phi _i^2+\int _{\mathbb {R}^n} \rho ^2\left| \nabla \Phi _i\right| ^2+2 \int _{\mathbb {R}^n} \rho \Phi _i \nabla \rho \cdot \nabla \Phi _i. \end{aligned}$$

However, this clearly fails in the non-local setting. To get around this issue we observe that

$$\begin{aligned} (-\Delta )^{s/2}(\rho \Phi _i) = \rho (-\Delta )^{s/2}\Phi _i +\Phi _i(-\Delta )^{s/2}\rho +\mathcal {C}(\rho , \Phi _i) \end{aligned}$$

where \(\mathcal {C}(\rho ,\Phi _i)\) is the error term introduced by the fractional Laplacian. Thus, to estimate the fractional gradient of the product we must control this error term, which is fortunately handled by the commutator estimates from Theorem A.8 in [13]. These estimates state that the remainder term \(\mathcal {C}(\rho , \Phi _i)\) satisfies

$$\begin{aligned} \left\Vert \mathcal {C}(\rho , \Phi _i)\right\Vert _{L^2}\le C\left\Vert {(-\Delta )^{s_{1} / 2} \Phi _i}\right\Vert _{L^{p_{1}}} \left\Vert {(-\Delta )^{s_{2} / 2} \rho }\right\Vert _{L^{p_{2}}} \end{aligned}$$
(1.23)

provided that \(s_{1}, s_{2} \in [0, s], s=s_{1}+s_{2}\) and \(p_{1}, p_{2} \in (1,+\infty )\) satisfy

$$\begin{aligned} \frac{1}{2}=\frac{1}{p_{1}}+\frac{1}{p_{2}}. \end{aligned}$$

Setting \(s_1=s\), \(s_2=0\), \(p_1=n/s\), and \(p_2=2^* = \frac{2n}{n-2s}\), we get

$$\begin{aligned} \left\Vert \mathcal {C}(\rho , \Phi _i)\right\Vert _{L^2}\le C\left\Vert {(-\Delta )^{s / 2} \Phi _i}\right\Vert _{L^{n/s}}\left\Vert {\rho }\right\Vert _{L^{2^*}}\lesssim \left\Vert {(-\Delta )^{s / 2} \Phi _i}\right\Vert _{L^{n/s}}\left\Vert \rho \right\Vert _{\dot{H}^s}. \end{aligned}$$
(1.24)
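Note that this choice of exponents is admissible for (1.23) since

$$\begin{aligned} \frac{1}{p_1}+\frac{1}{p_2} = \frac{s}{n} + \frac{n-2s}{2n} = \frac{1}{2}. \end{aligned}$$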

Thus it remains to control the fractional gradient of the bump function which we establish in Lemma 2.1.

Next, we consider the case when \(n\ge 6s\). We follow the proof strategy of [8]. The argument proceeds in the following steps.

Step (i). First we obtain a family of bubbles \((U_i)_{i=1}^{\nu }\) as a result of solving the following minimization problem

$$\begin{aligned} \left\Vert u - \sum _{i=1}^{\nu }U[z_i, \lambda _i]\right\Vert _{\dot{H}^s} ={\mathop {\inf }_{\begin{array}{c} \tilde{z}_{1},\ldots ,\tilde{z}_{\nu } \in \mathbb {R}^{n} \\ \tilde{\lambda }_{1},\ldots , \tilde{\lambda }_{\nu }>0 \end{array}}} \left\Vert u - \sum _{i=1}^{\nu }U[\tilde{z}_i, \tilde{\lambda }_i]\right\Vert _{\dot{H}^s} \end{aligned}$$
(1.25)

and denote the error \(\rho = u - \sum _{i=1}^{\nu }U_i= u - \sigma .\) Since \((-\Delta )^s U_i = U_i^p\) for \(i=1,\ldots ,\nu \) we have

$$\begin{aligned} (-\Delta )^s u-u|u|^{p-1}&=(-\Delta )^s(\sigma +\rho )-(\sigma +\rho )|\sigma +\rho |^{p-1} =(-\Delta )^s \rho -p\sigma ^{p-1}\rho \nonumber \\&\quad -(\sigma ^p - \sum _{i=1}^{\nu }U_i^p) - ((\sigma +\rho )|\sigma +\rho |^{p-1}-\sigma ^p-p\sigma ^{p-1}\rho ). \end{aligned}$$
(1.26)

Thus the error \(\rho \) satisfies

$$\begin{aligned} (-\Delta )^s\rho -p\sigma ^{p-1}\rho -I_1-I_2-f=0, \end{aligned}$$
(1.27)

where

$$\begin{aligned} f&= (-\Delta )^s u -u|u|^{p-1},\quad I_1=\sigma ^p- \sum _{i=1}^{\nu }U_i^p, \nonumber \\ I_2&=(\sigma +\rho )|\sigma +\rho |^{p-1}-\sigma ^p-p\sigma ^{p-1}\rho . \end{aligned}$$
(1.28)

Clearly (1.20) implies that \(\left\Vert \rho \right\Vert _{\dot{H}^s}\le \delta \), and furthermore the family of bubbles \((U_i)_{i=1}^{\nu }\) is also \(\delta '\)-interacting, where \(\delta '\) tends to 0 as \(\delta \) tends to 0. In the first step we decompose \(\rho = \rho _0 + \rho _1\), where we prove the existence of the first approximation \(\rho _0\), which solves the following system

$$\begin{aligned} \left\{ \begin{array}{l}(-\Delta )^s \rho _{0}-\left[ \left( \sigma +\rho _{0}\right) ^{p}-\sigma ^{p}\right] =\left( \sigma ^{p}-\sum _{j=1}^{\nu } U_{j}^{p}\right) -\sum _{i=1}^{\nu } \sum _{a=1}^{n+1} c_{a}^{i} U_{i}^{p-1} Z_{i}^{a} \\ \langle \rho _0,Z_i^a\rangle _{\dot{H}^s}=0, \quad i=1,\ldots ,\nu ;\quad a=1, \ldots , n+1\end{array}\right. \end{aligned}$$
(1.29)

where \((c_a^i)\) is a family of scalars and the \(Z^a_i\) are the rescaled derivatives of \(U[z_i,\lambda _i]\), defined as follows

$$\begin{aligned} Z_{i}^{a}=\frac{1}{\lambda _{i}} \frac{\partial U\left[ z, \lambda _{i}\right] }{\partial z^{a}}\bigg |_{z=z_{i}}, \quad Z_{i}^{n+1}=\lambda _{i} \frac{\partial U\left[ z_{i}, \lambda \right] }{\partial \lambda }\bigg |_{\lambda =\lambda _{i}} \end{aligned}$$
(1.30)

for \(1\le i\le \nu \) and \(1\le a\le n.\)
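Although the explicit expressions are not needed in what follows, for the profile (1.15) a direct computation gives

$$\begin{aligned} Z_{i}^{a}(x) = (n-2s)\,\frac{\lambda _i (x-z_i)_a}{1+\lambda _i^2|x-z_i|^2}\,U_i(x), \qquad Z_{i}^{n+1}(x) = \frac{n-2s}{2}\,\frac{1-\lambda _i^2|x-z_i|^2}{1+\lambda _i^2|x-z_i|^2}\,U_i(x), \end{aligned}$$

for \(1\le a\le n\), so that \(|Z_i^a|\lesssim U_i\) pointwise for every \(1\le a\le n+1.\)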

Step (ii). The next step is to establish point-wise estimates on \(\rho _0.\) In order to do this we argue as in the proof of Proposition 3.3 in [8] to show that

$$\begin{aligned} |I_1|=|\sigma ^p-\sum _{i=1}^{\nu }U_i^p|\lesssim V(x) = \sum _{i=1}^{\nu }\left( \frac{\lambda _{i}^{\frac{n+2s}{2}} R^{2s-n}}{\left\langle y_{i}\right\rangle ^{4s}} \chi _{\left\{ \left| y_{i}\right| \le R\right\} }+\frac{\lambda _{i}^{\frac{n+2s}{2}} R^{-4s}}{\left\langle y_{i}\right\rangle ^{n-2s}} \chi _{\left\{ \left| y_{i}\right| \ge R / 2\right\} }\right) \end{aligned}$$

which along with an a priori estimate and a fixed point argument allows us to show that

$$\begin{aligned} |\rho _0(x)|\lesssim W(x) =\sum _{i=1}^{\nu }\left( \frac{\lambda _{i}^{\frac{n-2s}{2}} R^{2s-n}}{\left\langle y_{i}\right\rangle ^{2s}} \chi _{\left\{ \left| y_{i}\right| \le R\right\} }+\frac{\lambda _{i}^{\frac{n-2s}{2}} R^{-4s}}{\left\langle y_{i}\right\rangle ^{n-4s}} \chi _{\left\{ \left| y_{i}\right| \ge R / 2\right\} }\right) \end{aligned}$$

where \(y_i=\lambda _i(x-z_i)\) and

$$\begin{aligned} R = \min _{i\ne j}\frac{R_{ij}}{2} = \frac{1}{2}\min _{i\ne j}\max \left( \sqrt{\lambda _{i} / \lambda _{j}}, \sqrt{\lambda _{j} / \lambda _{i}}, \sqrt{\lambda _{i} \lambda _{j}}\left| z_{i}-z_{j}\right| \right) . \end{aligned}$$

This also gives us control on the energy of \(\rho _0\); in particular, arguing as in the proof of Proposition 3.9 in [8], one can show that

$$\begin{aligned} \left\Vert (-\Delta )^{s/2}\rho _0\right\Vert _{L^2}\lesssim {\left\{ \begin{array}{ll}Q|\log Q|^{\frac{1}{2}}, &{}\text {if }n=6s,\\ Q^{\frac{p}{2}},&{}\text {if }n> 6s.\end{array}\right. } \end{aligned}$$

Here we define the interaction term Q by first defining the interaction between two bubbles \(U_i\) and \(U_j\)

$$\begin{aligned} q_{ij} = \left( \frac{\lambda _i}{\lambda _j} + \frac{\lambda _j}{\lambda _i} + \lambda _i\lambda _j|z_i-z_j|^2\right) ^{-(n-2s)/2} \end{aligned}$$
(1.31)

and then setting

$$\begin{aligned} Q = \max \{q_{ij}: 1\le i\ne j\le \nu \}. \end{aligned}$$
(1.32)

Since the bubbles are \(\delta \)-interacting, the quantity Q is small; indeed \(Q\le \delta ^{(n-2s)/2}\).
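For illustration, the following minimal Python sketch evaluates the pairwise interactions \(q_{ij}\) from (1.31) and the quantity Q from (1.32) for a given family of parameters; the function name and the sample parameters below are ours and purely illustrative.

```python
import numpy as np

def interaction_Q(z, lam, n, s):
    """Compute Q from (1.32), i.e. the largest pairwise interaction q_ij from (1.31).

    z   : array of shape (nu, n)  -- bubble centers z_i
    lam : array of shape (nu,)    -- concentration parameters lambda_i
    n, s: dimension and fractional parameter, with n > 2s
    """
    nu = len(lam)
    Q = 0.0
    for i in range(nu):
        for j in range(nu):
            if i == j:
                continue
            dist2 = np.sum((z[i] - z[j]) ** 2)
            base = lam[i] / lam[j] + lam[j] / lam[i] + lam[i] * lam[j] * dist2
            q_ij = base ** (-(n - 2 * s) / 2)  # interaction between U_i and U_j
            Q = max(Q, q_ij)
    return Q

# Two well-separated bubbles of equal scale interact weakly:
z = np.array([[10.0, 0.0, 0.0], [-10.0, 0.0, 0.0]])
lam = np.array([1.0, 1.0])
print(interaction_Q(z, lam, n=3, s=0.5))
```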

Step (iii). Next, we can estimate the second term \(\rho _1\). First observe that \(\rho _1\) satisfies

$$\begin{aligned} \left\{ \begin{array}{l} (-\Delta )^s \rho _{1}-\left[ \left( \sigma +\rho _{0}+\rho _{1}\right) ^{p}-\left( \sigma +\rho _{0}\right) ^{p}\right] -\sum _{i=1}^{\nu } \sum _{a=1}^{n+1} c_{a}^{i} U_{i}^{p-1} Z_{i}^{a}-f=0 \\ \langle \rho _1,Z_i^a\rangle _{\dot{H}^s}=0, \quad i=1,\ldots ,\nu ;\quad a=1, \ldots , n+1\end{array}\right. \end{aligned}$$
(1.33)

Using the above equation and the decomposition

$$\begin{aligned} \rho _1 = \sum _{i=1}^{\nu } \beta ^{i} U_{i}+\sum _{i=1}^{\nu } \sum _{a=1}^{n+1} \beta _{a}^{i} Z_{i}^{a}+\rho _{2}, \end{aligned}$$

we can estimate \(\rho _2\) and the absolute values of the scalar coefficients \(\beta ^i\) and \(\beta ^i_a\). This allows us to estimate the energy of \(\rho _1.\) In particular, using arguments similar to those in the proofs of Propositions 3.10, 3.11, and 3.12 in [8], one can show that

$$\begin{aligned} \left\Vert (-\Delta )^{s/2}\rho _1\right\Vert _{L^2}\lesssim Q^2+\Vert f\Vert _{H^{-s}}. \end{aligned}$$

Step (iv). In the final step, combining the energy estimates for \(\rho _0\) and \(\rho _1\) from Steps (ii) and (iii), one can finally arrive at the desired estimate (1.21) using the same argument as in Lemma 2.1 and Lemma 2.3 in [8]. This concludes the proof of Theorem 1.5 in the case when the dimension satisfies \(n\ge 6s.\)

Even though the proof sketch outlined above seems to carry over in a straightforward manner, we decided to include this case for a few reasons. First, we were curious to understand how the fractional parameter s influences the estimates in our main theorem. Second, we tried to explain the intuition behind some technical estimates, which we hope might be useful for readers not very familiar with the finite-dimensional reduction method (see [6] for an introduction to this technique). Finally, we encountered an interesting technical issue in Step (ii) of the proof. The authors of [8] make use of the following pointwise differential inequality in the case when \(s=1\).

Proposition 1.7

The functions \(\tilde{W}\) and \(\tilde{V}\) satisfy

$$\begin{aligned} (-\Delta )^s \tilde{W}\ge \alpha _{n,s} \tilde{V}, \end{aligned}$$
(1.34)

where \(\alpha _{n,s}>0\) is a constant depending on n and s.

For precise definitions of \(\tilde{W}\) and \(\tilde{V}\) see (3.30). After some straightforward reductions of \(\tilde{W}\) and \(\tilde{V}\), it suffices to establish

$$\begin{aligned} (-\Delta )^s[(1+|x|^2)^{-s}]\ge \alpha _{n,s} [1+|x|^2]^{-2s}. \end{aligned}$$
(1.35)

for some constant \(\alpha _{n,s}>0.\) In the local setting \(s=1\) the inequality can be checked by a direct computation; however, it is not clear whether it generalizes to the fractional case as well. We show that this is indeed true.
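Indeed, for \(s=1\) a direct computation with \(r=|x|\) gives

$$\begin{aligned} -\Delta \left[ (1+r^2)^{-1}\right] = \frac{2n+(2n-8)r^2}{(1+r^2)^{3}} \ge \frac{2n-8}{(1+r^2)^{2}}, \end{aligned}$$

so that (1.35) holds with \(\alpha _{n,1}=2n-8>0\) whenever \(n\ge 5\), and in particular in the relevant regime \(n\ge 6\).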

Such a pointwise differential inequality seems to be new and, for instance, does not follow from the well-known pointwise differential inequality

$$\begin{aligned} (-\Delta )^{s}(\varphi (f))(x) \le \varphi ^{\prime }(f(x)) \cdot (-\Delta )^{s} f(x) \end{aligned}$$

where \(\varphi \in C^1(\mathbb {R})\) is convex and \(f\in \mathcal {S}(\mathbb {R}^n)\). We prove the inequality (1.35) directly by using an integral representation of the fractional derivative of \(\frac{1}{(1+|x|^2)^s}\) and by counting the number of zeros of a certain hypergeometric function.

2 Case when dimension \(n<6s \)

The goal of this section is to prove Theorem 1.5 in the case when the dimension satisfies \(2s<n<6s\). The proof is similar to the proof of Theorem 3.3 in [12]; however, we need to make some modifications to establish the spectral estimate and the interaction integral estimate. We first prove Theorem 1.5 assuming that the desired spectral estimate holds.

Proof

Consider the following minimization problem

$$\begin{aligned} \left\Vert u -\sum _{i=1}^{\nu }\alpha _i U[z_i, \lambda _i] \right\Vert _{\dot{H}^s}= {\mathop {\min }_{\begin{array}{c} \tilde{z}_{1}, \ldots , \tilde{z}_{\nu } \in \mathbb {R}^n \\ \tilde{\lambda }_{1}, \ldots , \tilde{\lambda }_{\nu }\in \mathbb {R}_{+}, \tilde{\alpha }_1,\ldots ,\tilde{\alpha }_{\nu } \in \mathbb {R} \end{array}}}\left\Vert u-\sum _{i=1}^{\nu }\tilde{\alpha }_i U[\tilde{z}_i, \tilde{\lambda }_i]\right\Vert _{\dot{H}^s} \end{aligned}$$

where \(U_i = U[z_i, \lambda _i]\) and \(\sigma = \sum _{i=1}^{\nu } \alpha _i U_i\) is the linear combination of Talenti bubbles closest to the function u in the \(\dot{H}^{s}(\mathbb {R}^n)\) norm. Thus, if we set

$$\begin{aligned} u = \sigma + \rho \end{aligned}$$

we immediately have that \(\left\Vert \rho \right\Vert _{\dot{H}^s}\le \delta \), and since the family \((\tilde{U}_i)_{i=1}^{\nu }\) is \(\delta \)-interacting we also deduce that the family \((\alpha _i, U_i)_{i=1}^{\nu }\) is \(\delta '\)-interacting, where \(\delta '\rightarrow 0\) as \(\delta \rightarrow 0.\)

Furthermore, for each bubble \(U_i\) with \(1\le i \le \nu \), we have the following orthogonality conditions as before

$$\begin{aligned} \langle \rho ,U_i\rangle _{\dot{H}^s}&= 0 \end{aligned}$$
(2.1)
$$\begin{aligned} \langle \rho ,\partial _{\lambda }U_i\rangle _{\dot{H}^s}&= 0 \end{aligned}$$
(2.2)
$$\begin{aligned} \langle \rho ,\partial _{z_j}U_i\rangle _{\dot{H}^s}&= 0 \end{aligned}$$
(2.3)

for any \(1\le j\le n.\) Using Lemma 2.3, we deduce that \(U_i, \partial _{\lambda } U_i\) and \(\partial _{z_j} U_i\) are eigenfunctions of the operator \(\frac{(-\Delta )^s}{U_i^{p-1}}\), and thus for each \(1\le i\le \nu \) and \(1\le j\le n\), we get

$$\begin{aligned} \int _{\mathbb {R}^n} \rho \cdot U_i^p&= 0 , \end{aligned}$$
(2.4)
$$\begin{aligned} \int _{\mathbb {R}^n} \rho \cdot \partial _{\lambda } U_i U_i^{p-1}&=0, \end{aligned}$$
(2.5)
$$\begin{aligned} \int _{\mathbb {R}^n} \rho \cdot \partial _{z_j} U_i U_i^{p-1}&=0. \end{aligned}$$
(2.6)

Next using integration by parts and (2.1), we get

$$\begin{aligned} \left\Vert \rho \right\Vert _{\dot{H}^s}^2=\langle \rho ,\rho \rangle _{\dot{H}^s}&= \langle \rho ,u-\sigma \rangle _{\dot{H}^s} =\langle \rho ,u\rangle _{\dot{H}^s} = \langle \rho ,(-\Delta )^s u\rangle _{L^2} \\&= \langle \rho ,(-\Delta )^s u-u|u|^{p-1}\rangle _{L^2} + \langle \rho ,u|u|^{p-1}\rangle _{L^2}\\&\le \left\Vert \rho \right\Vert _{\dot{H}^s} \Vert (-\Delta )^s u-u|u|^{p-1}\Vert _{{H}^{-s}} + \int _{\mathbb {R}^n} u|u|^{p-1}\rho . \end{aligned}$$

In order to estimate the second term, consider the following two estimates

$$\begin{aligned} |(\sigma + \rho )|\sigma + \rho |^{p-1} - \sigma |\sigma |^{p-1}|&\le p|\sigma |^{p-1} |\rho | + C_{n,s}(|\sigma |^{p-2} |\rho |^2 + |\rho |^{p}) \\ | \sigma |\sigma |^{p-1} - \sum _{i=1}^{\nu } \alpha _i |\alpha _i|^{p-1}U_i^p|&\lesssim \sum _{1\le i\ne j\le \nu } U_i^{p-1} U_j, \end{aligned}$$

where \(C_{n,s}>0\) is some constant depending on n and s. Thus, using the triangle inequality and the orthogonality condition (2.4), we get

$$\begin{aligned} \int _{\mathbb {R}^n} \rho u|u|^{p-1}&\le p\int _{\mathbb {R}^n} |\sigma |^{p-1} \rho ^2 \nonumber \\&\quad + C_{n,\nu }\left( \int _{\mathbb {R}^n}|\sigma |^{p-2} |\rho |^3 + \int _{\mathbb {R}^n}|\rho |^{p+1}+\sum _{1\le i\ne j\le \nu } \int _{\mathbb {R}^n}\rho U_i^{p-1} U_j\right) . \end{aligned}$$
(2.7)

For the first term, by Lemma 2.5, we get

$$\begin{aligned} p \int _{\mathbb {R}^n} |\sigma |^{p-1} \rho ^2 \le \tilde{c} \left\Vert \rho \right\Vert _{\dot{H}^s}^2 \end{aligned}$$
(2.8)

provided \(\delta '\) is small, where \(\tilde{c} = \tilde{c}(n,\nu )<1\) is a positive constant. Using the Hölder and Sobolev inequalities for the remainder terms, together with the constraint \(n<6s\), yields

$$\begin{aligned} \begin{aligned}&\int _{\mathbb {R}^{n}} |\sigma |^{p-2}|\rho |^{3} \le \Vert \sigma \Vert _{L^{2^{*}}}^{p-2}\Vert \rho \Vert _{L^{2^{*}}}^{3} \lesssim \left\Vert \rho \right\Vert _{\dot{H}^s} ^{3},\\&\int _{\mathbb {R}^{n}}|\rho |^{p+1} \lesssim \left\Vert \rho \right\Vert _{\dot{H}^s}^{p+1} ,\\&\int _{\mathbb {R}^{n}}\rho U_{i}^{p-1} U_{j} \lesssim \Vert \rho \Vert _{L^{{2}^*}} \Vert U_i^{p-1}U_j\Vert _{L^{(2^*)'}} \lesssim \left\Vert \rho \right\Vert _{\dot{H}^s} \Vert U_i^{p-1}U_j\Vert _{L^{(2^*)'}}. \end{aligned} \end{aligned}$$
(2.9)

For the last estimate using an integral estimate similar to Proposition B.2 in Appendix B of [12], we get

$$\begin{aligned} \left\Vert {U_i^{p-1}U_j}\right\Vert _{L^{2n/(n+2s)}}&= \left( \int _{\mathbb {R}^n} U_i^{2p n/(n+2s)}U_j^{2n/(n+2s)}\right) ^{(n+2s)/2n} \approx Q_{ij}^{(n-2s)/2}\nonumber \\&\approx \int _{\mathbb {R}^n} U_i^{p} U_j, \end{aligned}$$
(2.10)

where

$$\begin{aligned} Q_{ij} = \min \left( \frac{\lambda _{j}}{\lambda _{i}},\frac{\lambda _{i}}{\lambda _{j}}, \frac{1}{\lambda _{i} \lambda _{j}\left| z_{i}-z_{j}\right| ^{2}}\right) , i\ne j. \end{aligned}$$

Furthermore, for the interaction term using the same argument as in Proposition 3.11 in [12], we have that for any \(\varepsilon > 0\), there exists a small enough \(\delta '>0\) such that

$$\begin{aligned} \int _{\mathbb {R}^n} U_i^p U_j \lesssim \varepsilon \left\Vert \rho \right\Vert _{\dot{H}^s}+\Vert (-\Delta )^s u - u|u|^{p-1}\Vert _{{H}^{-s}}+\left\Vert \rho \right\Vert _{\dot{H}^s}^{2}, \end{aligned}$$
(2.11)

and

$$\begin{aligned} |\alpha _i - 1| \lesssim \varepsilon \left\Vert \rho \right\Vert _{\dot{H}^s}+\Vert (-\Delta )^s u - u|u|^{p-1}\Vert _{{H}^{-s}}+\left\Vert \rho \right\Vert _{\dot{H}^s}^{2}. \end{aligned}$$
(2.12)

Thus, plugging (2.8), (2.9), (2.10), and (2.11) into (2.7), we get

$$\begin{aligned} \left\Vert \rho \right\Vert _{\dot{H}^s}^2 \le (\tilde{c}+ C\varepsilon ) \left\Vert \rho \right\Vert _{\dot{H}^s}^2 + C_0(\left\Vert \rho \right\Vert _{\dot{H}^s}^3 + \left\Vert \rho \right\Vert _{\dot{H}^s}^{p+1} + \left\Vert \rho \right\Vert _{\dot{H}^s}\Vert (-\Delta )^s u - u|u|^{p-1}\Vert _{{H}^{-s}}). \end{aligned}$$

Therefore, if we choose \(\varepsilon \) such that \( \tilde{c}+ C\varepsilon <1\), then we can absorb the term \((\tilde{c}+ C\varepsilon ) \left\Vert \rho \right\Vert _{\dot{H}^s}^2\) into the left-hand side of the inequality to obtain

$$\begin{aligned} \left\Vert \rho \right\Vert _{\dot{H}^s}^2 \lesssim \left\Vert \rho \right\Vert _{\dot{H}^s}^3 + \left\Vert \rho \right\Vert _{\dot{H}^s}^{p+1} + \left\Vert \rho \right\Vert _{\dot{H}^s}\Vert (-\Delta )^s u - u|u|^{p-1}\Vert _{{H}^{-s}}. \end{aligned}$$

Since \(\left\Vert \rho \right\Vert _{\dot{H}^s}\le \delta \ll 1\), we can now deduce that

$$\begin{aligned} \left\Vert \rho \right\Vert _{\dot{H}^s} \lesssim \Vert (-\Delta )^s u - u|u|^{p-1}\Vert _{{H}^{-s}}. \end{aligned}$$
(2.13)

Finally, if we decompose u in the following manner

$$\begin{aligned} u = \sum _{i=1}^{\nu } U_i + \sum _{i=1}^{\nu } (\alpha _i - 1) U_i + \rho = \sum _{i=1}^{\nu } U_i + \rho ', \end{aligned}$$

then using (2.12) and (2.13), we get

$$\begin{aligned} \left\Vert \rho '\right\Vert _{\dot{H}^s} \lesssim \sum _{i=1}^{\nu } |\alpha _i-1| + \left\Vert \rho \right\Vert _{\dot{H}^s} \lesssim \Vert (-\Delta )^s u - u|u|^{p-1}\Vert _{{H}^{-s}}. \end{aligned}$$

Thus \(U_1, U_2, \ldots , U_{\nu }\) is the desired family of Talenti bubbles. \(\square \)

We now turn to prove the spectral estimate (2.8). For this, we begin by constructing bump functions that will localize the linear combination of bubbles.

2.1 Construction of bump functions

The construction of bump functions allows us to localize the sum of bubbles \(\sigma \) such that in a suitable region we can assume \(\sigma \Phi _i \approx \alpha _i U_i\) for some bubble \(U_i\) and associated bump function \(\Phi _i.\) As a preliminary step we start by constructing cut-off functions with suitable properties.

Lemma 2.1

(Construction of Cut-off Function). Let \(n > 2s.\) Given a point \(\overline{x}\in \mathbb {R}^n\) and two radii \(0<r< R\), there exists a Lipschitz cut-off function \(\varphi =\varphi _{\overline{x}, r, R}:\mathbb {R}^n \rightarrow [0,1]\) such that the following holds

  1. (i)

    \(\varphi \equiv 1\) on \(B(\overline{x}, r).\)

  2. (ii)

    \(\varphi \equiv 0\) outside \(B(\overline{x}, R).\)

  3. (iii)

    \(\int _{\mathbb {R}^n} |(-\Delta )^{s/2} \varphi |^{n/s} \lesssim \left( \log (R/r)\right) ^{1-n/s}.\)

Proof

For simplicity, we set \(\overline{x}=0\). Then we define the function \(\varphi :\mathbb {R}^n\rightarrow [0,1]\) as follows

$$\begin{aligned} \varphi (x):=\left\{ \begin{array}{ll}F(r), &{}\quad |x| \le r \\ F(|x|), &{}\quad r <|x|\le R \\ F(R), &{}\quad |x|>R\end{array}\right. \end{aligned}$$
(2.14)

where \(F(x)=F(|x|)=\frac{\log (R/|x|)}{\log (R/r)}.\) The function \(\varphi \) clearly satisfies the first two conditions. Next, we estimate the fractional gradient. Recall the pointwise formula for the fractional Laplacian (up to a normalizing constant, which we suppress)

$$\begin{aligned} (-\Delta )^{s/2}\varphi (x) = \int _{\mathbb {R}^n} \frac{\varphi (x)-\varphi (y)}{|x-y|^{n+s}}dy. \end{aligned}$$

When \(|x|>2R\), since the integrand is non-zero only when \(|y|<R\), we have \(|x-y|>\frac{1}{2}|x|\) and therefore

$$\begin{aligned} |(-\Delta )^{s/2}\varphi (x)|\lesssim \log ^{-1}(R/r)\frac{R^n}{|x|^{n+s}}. \end{aligned}$$

Thus

$$\begin{aligned} \int _{|x|>2R} |(-\Delta )^{s/2}\varphi (x)|^{n/s} dx \lesssim \log ^{-n/s}(R/r). \end{aligned}$$

On the other hand, when \(|x|<2R\) we distinguish three regimes: \(|x|<r\), \(r<|x|<R\), and \(R<|x|<2R\). When \(|x|<r\), we split the domain of integration into the three regions \(r<|y|<2r\), \(2r<|y|<R\), and \(|y|>R.\) Since \(|x|<r\) and \(r<|y|<2r\) imply that \(|x-y|<3r\), we have

$$\begin{aligned}&\int _{|x|<r} |(-\Delta )^{s/2}\varphi (x)|^{n/s} dx \le \int _{|x|<r} \left( \int _{|x-y|<3r }\frac{\left| \varphi (x)-\varphi (y)\right| }{|x-y|^{n+s}}dy\right) ^{n/s} dx\\&\quad +\int _{|x|<r} \left( \int _{2r<|y|<R}\frac{\left| \varphi (x)-\varphi (y)\right| }{|x-y|^{n+s}}dy\right) ^{n/s}dx +\int _{|x|<r}\left( \int _{|y|>R}\frac{\left| \varphi (x)-\varphi (y)\right| }{|x-y|^{n+s}}dy\right) ^{n/s}dx. \end{aligned}$$

Denote

$$\begin{aligned} \textrm{I}&= \int _{|x-y|<3r }\frac{\left| \varphi (x)-\varphi (y)\right| }{|x-y|^{n+s}}dy,\quad \textrm{II} = \int _{2r<|y|<R}\frac{\left| \varphi (x)-\varphi (y)\right| }{|x-y|^{n+s}}dy\\ \textrm{III}&= \int _{|y|>R}\frac{\left| \varphi (x)-\varphi (y)\right| }{|x-y|^{n+s}}dy. \end{aligned}$$

Estimating each term

$$\begin{aligned} \textrm{I}&\lesssim \frac{\log ^{-1}(R/r)}{r}\int _{|x-y|<3r} \frac{1}{|x-y|^{n+s-1}}dy\lesssim \frac{\log ^{-1}(R/r)}{r^s} \\ \textrm{II}&= \log ^{-1}(R/r) \int _{ 2r<|y|<R} \frac{\log (|y|)-\log (r)}{|x-y|^{n+s}}dy \\&= \log ^{-1}(R/r) \sum _{k=1}^{K}\int _{ 2^kr<|y|<2^{k+1}r} \frac{\log (|y|)-\log (r)}{|x-y|^{n+s}}dy\\&\le \frac{\log ^{-1}(R/r)}{r^{n+s}} \sum _{k=1}^{K}\int _{ 2^kr<|y|<2^{k+1}r} \frac{(k+1)\log 2}{(2^k-1)^{n+s}}dy\\&\lesssim \frac{\log ^{-1}(R/r)}{r^s} \sum _{k=1}^{K}\frac{(k+1)2^{kn}}{(2^k-1)^{n+s}}\\&\lesssim \frac{\log ^{-1}(R/r)}{r^s}\\ \textrm{III}&\lesssim \frac{1}{R^s}, \end{aligned}$$

where we used the fact that \(\varphi \) is Lipschitz when estimating \(\textrm{I}\), that the number of dyadic annuli is \(K\approx \log _2(R/r)\), and that \(\sum _{k\ge 1}(k+1)2^{kn}(2^k-1)^{-(n+s)}\lesssim \sum _{k\ge 1}(k+1)2^{-ks}<\infty \) since \(s>0\). This implies that

$$\begin{aligned} \int _{|x|<r} |(-\Delta )^{s/2}\varphi (x)|^{n/s} dx&\lesssim \log ^{-n/s}(R/r) + \frac{r^n}{R^n} \\&\lesssim \log ^{1-n/s}(R/r). \end{aligned}$$

When \(r<|x|<R\), we can use the expression in Table 1 of [14] to deduce that

$$\begin{aligned} |(-\Delta )^{s/2}\varphi (x)| \lesssim \log ^{-1}(R/r)\frac{1}{|x|^s}, \end{aligned}$$

and therefore

$$\begin{aligned} \int _{r<|x|<R} |(-\Delta )^{s/2}\varphi (x)|^{n/s} dx \lesssim \log ^{1-n/s}(R/r). \end{aligned}$$

Following an argument similar to the one used in the case \(|x|<r\), we can estimate in the regime \(R<|x|<2R\) to get

$$\begin{aligned}&\int _{R<|x|<2R} |(-\Delta )^{s/2}\varphi (x)|^{n/s}dx \lesssim \log ^{1-n/s}(R/r). \end{aligned}$$

Thus combining the above estimates we get

$$\begin{aligned} \int _{\mathbb {R}^n} |(-\Delta )^{s/2}\varphi (x)|^{n/s}dx&\lesssim \log ^{1-n/s}(R/r). \end{aligned}$$

\(\square \)
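Before turning to the bump functions, we record a minimal Python sketch (ours, purely illustrative and not used in the proofs) of the logarithmic cut-off (2.14):

```python
import numpy as np

def log_cutoff(x, center, r, R):
    """Logarithmic cut-off phi_{center,r,R} from (2.14):
    equal to 1 on B(center, r), 0 outside B(center, R),
    and log(R/|x-center|)/log(R/r) in between."""
    x = np.atleast_2d(x)
    dist = np.linalg.norm(x - center, axis=1)
    dist = np.clip(dist, r, R)          # freeze the profile at the two radii
    return np.log(R / dist) / np.log(R / r)

# Example: phi equals 1 at the center, 1/2 at the geometric mean radius, 0 far away.
center = np.zeros(3)
pts = np.array([[0.0, 0.0, 0.0], [2.0, 0.0, 0.0], [100.0, 0.0, 0.0]])
print(log_cutoff(pts, center, r=1.0, R=4.0))
```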

The construction of cut-off functions allows us to deduce the following lemma whose proof is identical to the proof of Lemma 3.9 in [12], except for the \(L^{n/s}\) estimate which is a consequence of property (iii) in Lemma 2.1.

Lemma 2.2

(Construction of Bump Function). Given dimension \(n>2s\), number of bubbles \(\nu \ge 1\) and parameter \(\hat{\varepsilon }>0\) there exists a \(\delta = \delta (n, \nu , \hat{\varepsilon },s) > 0\) such that for a \(\delta \)-interacting family of Talenti bubbles \((U_i)_{i=1}^{\nu }\) where \(U_i = U[z_i, \lambda _i]\) there exists a family of Lipschitz bump functions \(\Phi _i:\mathbb {R}^n \rightarrow [0,1]\) such that the following hold,

  1. (i)

    Most of the mass of the function \(U_i^{p+1}\) is in the region \(\{\Phi _i = 1\}\) or more precisely,

    $$\begin{aligned} \int _{\left\{ \Phi _{i}=1\right\} } U_{i}^{p+1} \ge (1-\hat{\varepsilon }) S^{n/s}. \end{aligned}$$
    (2.15)
  2. (ii)

    The bubble \(U_i\) is much larger than any other bubble in the region \(\{\Phi _i > 0\}\), or more precisely,

    $$\begin{aligned} \hat{\varepsilon } U_{i}>U_{j} \end{aligned}$$
    (2.16)

    for each index \(j\ne i.\)

  3. (iii)

    The \(L^{n/s}\) norm of the function \((-\Delta )^{s/2} \Phi _{i}\) is small, or more precisely,

    $$\begin{aligned} \left\| (-\Delta )^{s/2} \Phi _{i}\right\| _{L^{n/s}} \le \hat{\varepsilon }. \end{aligned}$$
    (2.17)
  4. (iv)

    Finally, for all \(j\ne i\) such that \(\lambda _j \le \lambda _i\), we have

    $$\begin{aligned} \frac{\sup _{\left\{ \Phi _{i}>0\right\} } U_{j}}{\inf _{\left\{ \Phi _{i}>0\right\} } U_{j}} \le 1+\hat{\varepsilon }. \end{aligned}$$
    (2.18)

2.2 Spectral properties of the linearized operator

Consider the linearized equation,

$$\begin{aligned} (-\Delta )^s \varphi = pU^{p-1} \varphi \end{aligned}$$

where \(\varphi \in \dot{H}^s\) and \(U(x)=U[z,\lambda ](x).\) By exploiting the positivity of the second variation of \(\delta (u) = \left\Vert u\right\Vert _{\dot{H}^s}^2 - S^2 \Vert u\Vert _{L^{{2}^*}}^2\) around the bubble U and using Theorem 1.1 in [11], we can deduce the following result

Lemma 2.3

The operator \(\mathcal {L}=\frac{(-\Delta )^s}{U^{p-1}}\) has a discrete spectrum with increasing eigenvalues \(\{\alpha _{i}\}_{i=1}^{\infty }\) such that,

  1. (i)

    The first eigenvalue \(\alpha _{1} = 1\) with eigenspace \(H_1 = {\text {span}}(U).\)

  2. (ii)

    The second eigenvalue \(\alpha _{2} = p\) with eigenspace \(H_2 ={\text {span}}(\partial _{z_1}U,\partial _{z_2}U,\ldots , \partial _{z_n}U,\partial _{\lambda }U)\).

Proof

Since the embedding \(\dot{H}^s \hookrightarrow L^{2}_{U^{p-1}}\) is compact, the spectrum of the operator \(\mathcal {L}\) is discrete. This can be proved by following the same strategy as in the proof of Proposition A.1 in [12], along with a fractional Rellich–Kondrachov theorem, which is stated as Theorem 7.1 in [10]. Furthermore, as the following identities hold

$$\begin{aligned} (-\Delta )^s U=U^{p}, \quad (-\Delta )^s\left( \partial _{\lambda } U\right) =p U^{p-1} \partial _{\lambda } U, \quad (-\Delta )^s\left( \nabla _{z_j} U\right) =p U^{p-1} \nabla _{z_j} U, \end{aligned}$$

for \(1\le j\le n\), it is clear that 1 and p are eigenvalues with eigenfunctions U and the partial derivatives \(\partial _{z_j}U\) and \(\partial _{\lambda } U\) respectively. For the first part, since the function \(U>0\), we deduce that the first eigenvalue is \(\alpha _1=1\) which is simple and therefore \(H_1={\text {span}}(U).\) For the second part, recall the min–max characterization of the second eigenvalue

$$\begin{aligned} \alpha _{2}=\inf \left\{ \frac{\langle w,w\rangle _{\dot{H}^s}}{\langle w,w\rangle _{L^2_{U^{p-1}}}}: \langle w,U\rangle _{L^2_{U^{p-1}}} = 0\right\} . \end{aligned}$$

We first show that \(\alpha _2\le p\). For this, consider the second variation of the quantity \(\delta (u) = \left\Vert u\right\Vert _{\dot{H}^s}^2 - S^2 \Vert u\Vert _{L^{{2}^*}}^2\) around the bubble U. Since U is an extremizer for the HLS inequality, we know that

$$\begin{aligned} \frac{d^{2}}{d \varepsilon ^{2}}\bigg |_{\varepsilon =0} \delta (U+\varepsilon \varphi ) \ge 0, \quad \forall \varphi \in \dot{H}^s(\mathbb {R}^n). \end{aligned}$$

Furthermore using \(\int _{\mathbb {R}^n} U^{2^*} = S^{n/s}\) for any \(\varphi \in \dot{H}^s\), we get

$$\begin{aligned} \frac{d^{2}}{d \varepsilon ^{2}}\bigg |_{\varepsilon =0} \delta (U+\varepsilon \varphi )&= 2\langle \varphi ,\varphi \rangle _{\dot{H}^s} - 2S^2(2-2^*) (S^{n/s})^{2/2^* - 2}\left( \int _{\mathbb {R}^n} U^{p}\varphi \right) ^2 \\&\quad - 2S^2 p (S^{n/s})^{2/2^*-1}\int _{\mathbb {R}^n} U^{p-1}\varphi ^2. \end{aligned}$$

Thus, when \(\langle \varphi ,U\rangle _{L^2_{U^{p-1}}}=\int _{\mathbb {R}^n} \varphi U^p = 0\), the second term vanishes and we get

$$\begin{aligned} p \int _{\mathbb {R}^n} U^{p-1}\varphi ^2 = p \langle \varphi ,\varphi \rangle _{L^2_{U^{p-1}}} \le \langle \varphi ,\varphi \rangle _{\dot{H}^s}. \end{aligned}$$
(2.19)
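Here we have used that the constant in front of the last integral is exactly one: since \(\frac{2}{2^*}-1=-\frac{2s}{n}\), we have

$$\begin{aligned} S^2\left( S^{n/s}\right) ^{2/2^*-1} = S^2\, S^{-2} = 1. \end{aligned}$$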

This implies that \(\alpha _2\le p\), and equality is attained when \(\varphi = \partial _{\lambda } U\) or \(\varphi = \partial _{z_j} U\) for \(1\le j\le n.\) Thus the second eigenvalue is \(\alpha _2 = p.\) Finally, we need to argue that \(H_2 ={\text {span}}(\partial _{z_1}U,\partial _{z_2}U,\ldots , \partial _{z_n}U,\partial _{\lambda }U)\). For this we make use of Theorem 1.1 in [11], which states the following

Theorem 2.4

Let \(n>2s\) and \(s\in (0,1)\). Then the solution

$$\begin{aligned} U(x)=\alpha _{n, s}\left( \frac{1}{1+|x|^{2}}\right) ^{\frac{n-2s}{2}} \end{aligned}$$

of the equation \((-\Delta )^s U = U^p\) is nondegenerate, in the sense that all bounded solutions of the linearized equation \((-\Delta )^s \varphi = pU^{p-1}\varphi \) are linear combinations of the functions

$$\begin{aligned} \partial _{\lambda } U, \partial _{z_1} U, \partial _{z_2} U,\ldots , \partial _{z_n} U. \end{aligned}$$

Thus, if we argue that the solutions to the linearized equation \((-\Delta )^s \varphi = p U^{p-1}\varphi \) are bounded, then we can apply the above theorem to deduce that \(H_2 ={\text {span}}(\partial _{z_1}U,\partial _{z_2}U,\ldots , \partial _{z_n}U,\partial _{\lambda }U)\). For this, we use a bootstrap argument. Let \(\varphi \in \dot{H}^s\) satisfy the equation \((-\Delta )^s \varphi = pU^{p-1}\varphi \). Then since \(pU^{p-1}\varphi \in \dot{H}^s\), we get that \((-\Delta )^s \varphi \in \dot{H}^s\), which implies that \(\varphi \in \dot{H}^{3s}.\) Indeed using the definition of \(\dot{H}^{3s}\) norm, we have

$$\begin{aligned} \left\Vert \varphi \right\Vert _{\dot{H}^{3s}}^2&= \left\Vert (-\Delta )^{3s/2}\varphi \right\Vert _{L^2}^2 = \left\Vert (-\Delta )^{s/2}[(-\Delta )^s\varphi ]\right\Vert _{L^2}^2 = \left\Vert (-\Delta )^s \varphi \right\Vert _{\dot{H}^{s}}^2 < +\infty . \end{aligned}$$

However, now since \(\varphi \in \dot{H}^{3s}\) we can repeat the same argument to get \(\varphi \in \dot{H}^{5s}.\) Proceeding in this manner we deduce that \(\varphi \in \dot{H}^{(2k+1)s}\) for any \(k\in \mathbb {N}.\) Thus for large enough \(k\in \mathbb {N}\), we have \(2(2k+1)s > n\) and therefore by Sobolev embedding for fractional spaces (see for instance Theorem 4.47 in [7]) we get that \(\varphi \in L^{\infty }(\mathbb {R}^n)\). This allows us to use Theorem 2.4 and thus we deduce that \(H_2 ={\text {span}}(\partial _{z_1}U,\partial _{z_2}U,\ldots , \partial _{z_n}U,\partial _{\lambda }U)\). \(\square \)

As done in [12], by localizing the linear combination of bubbles using bump functions and the spectral properties derived in Lemma 2.3, we can show the following inequality,

Lemma 2.5

Let \(n>2s\) and \(\nu \ge 1.\) Then there exists a \(\delta > 0\) such that if \((\alpha _i, U_i)_{i=1}^{\nu }\) is a \(\delta \)-interacting family of bubbles, where we denote \(U_i = U[z_i,\lambda _i]\), and \(\rho \in \dot{H}^s(\mathbb {R}^n)\) is a function satisfying the orthogonality conditions (2.1), (2.2) and (2.3), then there exists a constant \(\tilde{c}<1\) such that the following holds

$$\begin{aligned} \int _{\mathbb {R}^n} |\sigma |^{p-1}\rho ^2 \le \frac{\tilde{c}}{p}\int _{\mathbb {R}^n} |(-\Delta )^{s/2} \rho |^2 \end{aligned}$$

where \(\sigma = \sum _{i=1}^{\nu } \alpha _i U_i.\)

Proof

Let \(\hat{\varepsilon } > 0\) be fixed, and let \(\delta >0\) and the family of bump functions \((\Phi _i)_{i=1}^{\nu }\) be as in Lemma 2.2. Then using (2.16), we get

$$\begin{aligned} \int _{\mathbb {R}^n} |\sigma |^{p-1}\rho ^2&= \int _{\left\{ \sum _{i} \Phi _i \ge 1\right\} } |\sigma |^{p-1}\rho ^2 + \int _{\left\{ \sum _{i} \Phi _i< 1\right\} } |\sigma |^{p-1}\rho ^2 \\&\le \sum _{i=1}^{\nu } \int _{\{\Phi _i > 0\}} |\sigma |^{p-1}\rho ^2 + \int _{\left\{ \sum _{i} \Phi _i< 1\right\} } |\sigma |^{p-1}\rho ^2 \\&\le (1+o(1))\sum _{i=1}^{\nu } \int _{\mathbb {R}^n} \Phi _{i}^2 U_i^{p-1}\rho ^2 + \int _{\left\{ \sum _{i} \Phi _i < 1\right\} } |\sigma |^{p-1}\rho ^2, \end{aligned}$$

where o(1) denotes a quantity that tends to 0 as \(\delta \rightarrow 0.\) We can estimate the second term using the Hölder and Sobolev inequalities

$$\begin{aligned} \int _{\left\{ \sum \Phi _{i}<1\right\} } |\sigma |^{p-1} \rho ^{2}&\le \left( \int _{\left\{ \sum \Phi _{i}<1\right\} } |\sigma |^{2^{*}}\right) ^{\frac{p-1}{2^{*}}}\Vert \rho \Vert _{L^{2^{*}}}^{2} \nonumber \\&\le C \left( \sum _{i=1}^{\nu }\int _{\left\{ \Phi _{i}<1\right\} } U_i^{2^{*}}\right) ^{\frac{p-1}{2^{*}}}\left\Vert \rho \right\Vert _{\dot{H}^s}^{2} \nonumber \\&\le o(1) \left\Vert \rho \right\Vert _{\dot{H}^s}^{2}. \end{aligned}$$
(2.20)

To estimate the first term, we first show that

$$\begin{aligned} \int _{\mathbb {R}^{n}}\left( \rho \Phi _{i}\right) ^{2} U_{i}^{p-1} \le \frac{1}{\Lambda } \int _{\mathbb {R}^{n}}\left| (-\Delta )^{s/2}\left( \rho \Phi _{i}\right) \right| ^{2}+o(1)\left\Vert \rho \right\Vert _{\dot{H}^s}^{2} \end{aligned}$$
(2.21)

where \(\Lambda > p\) is the third eigenvalue of the operator \(\frac{(-\Delta )^s}{U_i^{p-1}}.\) To prove this estimate we show that \(\rho \Phi _i\) almost satisfies the orthogonality conditions (2.1), (2.2) and (2.3). Let \(f:\mathbb {R}^n\rightarrow \mathbb {R}\) be a function equal to \(U_i, \partial _{\lambda } U_i\) or \(\partial _{z_j} U_i\) up to scaling, normalized so that \(\int _{\mathbb {R}^n} f^2 U_{i}^{p-1} =1.\) Then using (2.1), (2.2) and (2.3), we get

$$\begin{aligned} \left| \langle (\rho \Phi _i),f\rangle _{L^{2}_{U^{p-1}}}\right|&= \left| \int _{\mathbb {R}^n} \rho \Phi _i f U_i^{p-1}\right| = \left| \int _{\mathbb {R}^n} \rho f U_i^{p-1} (1-\Phi _i)\right| \\&\le \Vert \rho \Vert _{L^{2^*}}\left( \int _{\mathbb {R}^{n}} f^{2} U_{i}^{p-1}\right) ^{1/2}\left( \int _{\left\{ \Phi _{i}<1\right\} } U_{i}^{2^{*}}\right) ^{s/n} \le o(1)\left\Vert \rho \right\Vert _{\dot{H}^s}. \end{aligned}$$

Using Lemma 2.3, we can now conclude (2.21). We can further estimate the first term in (2.21) by using Theorem A.8 in [13], which states that the remainder term \(\mathcal {C}(\rho , \Phi _i)=(-\Delta )^{s/2}(\rho \Phi _i)-\rho (-\Delta )^{s/2}\Phi _i -\Phi _i(-\Delta )^{s/2}\rho \) satisfies the following estimate

$$\begin{aligned} \left\Vert \mathcal {C}(\rho , \Phi _i)\right\Vert _{L^2}\le C\left\Vert {(-\Delta )^{s_{1} / 2} \Phi _i}\right\Vert _{L^{p_{1}}} \left\Vert {(-\Delta )^{s_{2} / 2} \rho }\right\Vert _{L^{p_{2}}} \end{aligned}$$
(2.22)

provided that \(s_{1}, s_{2} \in [0, s], s=s_{1}+s_{2}\) and \(p_{1}, p_{2} \in (1,+\infty )\) satisfy

$$\begin{aligned} \frac{1}{2}=\frac{1}{p_{1}}+\frac{1}{p_{2}}. \end{aligned}$$

Setting \(s_1=s\), \(s_2=0\), \(p_1=n/s\), and \(p_2=2^* = \frac{2n}{n-2s}\), and using (2.22) and (2.17), we get

$$\begin{aligned} \left\Vert \mathcal {C}(\rho , \Phi _i)\right\Vert _{L^2}\le C\left\Vert {(-\Delta )^{s / 2} \Phi _i}\right\Vert _{L^{n/s}}\left\Vert {\rho }\right\Vert _{L^{2^*}}\le o(1)\left\Vert \rho \right\Vert _{\dot{H}^s}. \end{aligned}$$
(2.23)

We can estimate the term \(\rho (-\Delta )^{s/2}\Phi _i\) using the Hölder and Sobolev inequalities along with (2.17) as follows

$$\begin{aligned} \int _{\mathbb {R}^n}\rho ^2 |(-\Delta )^{s/2} \Phi _i|^2&\le \Vert \rho \Vert _{L^{{2}^*}}^2 \left\Vert {(-\Delta )^{s/2} \Phi _i}\right\Vert ^2_{L^{n/s}} \le o(1) \left\Vert \rho \right\Vert _{\dot{H}^s}^2. \end{aligned}$$

Thus

$$\begin{aligned} \sum _{i=1}^{\nu }\left\Vert (-\Delta )^{s/2}(\rho \Phi _i)\right\Vert _{L^2}&\le \sum _{i=1}^{\nu } \left\Vert \mathcal {C}(\rho , \Phi _i)\right\Vert _{L^2} + \left\Vert \Phi _i(-\Delta )^{s/2} \rho \right\Vert _{L^2} + \left\Vert \rho (-\Delta )^{s/2}\Phi _i\right\Vert _{L^2}\nonumber \\&\le o(1)\left\Vert \rho \right\Vert _{\dot{H}^s} + \sum _{i=1}^{\nu }\left\Vert \Phi _i(-\Delta )^{s/2} \rho \right\Vert _{L^2}. \end{aligned}$$
(2.24)

Since the bump functions have disjoint support by construction, we get

$$\begin{aligned} \sum _{i=1}^{\nu } \int _{\mathbb {R}^n}|\Phi _i (-\Delta )^{s/2}\rho |^2 \le \left\Vert \rho \right\Vert _{\dot{H}^s}^2. \end{aligned}$$
(2.25)

Therefore using (2.20), (2.21), (2.24) and (2.25), we get

$$\begin{aligned} \int _{\mathbb {R}^{n}} |\sigma |^{p-1} \rho ^{2}&\le (1+o(1))\sum _{i=1}^{\nu } \int _{\mathbb {R}^n} \Phi _{i}^2 U_i^{p-1}\rho ^2 + \int _{\left\{ \sum _{i} \Phi _i < 1\right\} } |\sigma |^{p-1}\rho ^2 \\&\le (1+o(1))\left( \frac{1}{\Lambda } \sum _{i=1}^{\nu }\int _{\mathbb {R}^{n}}\left| (-\Delta )^{s/2}\left( \rho \Phi _{i}\right) \right| ^{2} + o(1)\left\Vert \rho \right\Vert _{\dot{H}^s}^{2}\right) + o(1) \left\Vert \rho \right\Vert _{\dot{H}^s}^{2} \\&\le \left( \frac{1}{\Lambda }+o(1)\right) \left\Vert \rho \right\Vert _{\dot{H}^s}^{2}, \end{aligned}$$

which implies the desired estimate. \(\square \)

2.3 Interaction integral estimate

In this section, we will prove (2.11) and (2.12).

Lemma 1.13

Let \(2\,s< n < 6\,s\) and \(\nu \ge 1.\) For any \(\varepsilon >0\) there exists \(\delta >0\) such that the following holds. Let \((\alpha _i, U_i)_{i=1}^{\nu }\) be a \(\delta \)-interacting family, let \(u = \sum _{i=1}^{\nu } \alpha _i U_i + \rho \), and suppose that \(\rho \) satisfies (2.1), (2.2), (2.3) with \(\left\Vert \rho \right\Vert _{\dot{H}^s} \le 1.\) Then for all \(i=1,2,\ldots , \nu \) we have

$$\begin{aligned} \left| \alpha _{i}-1\right| \lesssim \varepsilon \left\Vert \rho \right\Vert _{\dot{H}^s}+\left\| (-\Delta )^{s} u-u|u|^{p-1}\right\| _{H^{-s}}+\left\Vert \rho \right\Vert _{\dot{H}^s}^{2} \end{aligned}$$
(2.26)

and for each \(j\ne i\)

$$\begin{aligned} \int _{\mathbb {R}^n} U_i^p U_j \lesssim \varepsilon \left\Vert \rho \right\Vert _{\dot{H}^s}+\Vert (-\Delta )^s u - u|u|^{p-1}\Vert _{{H}^{-s}}+\left\Vert \rho \right\Vert _{\dot{H}^s}^{2}, \end{aligned}$$
(2.27)

Proof

We begin by ordering the bubbles so that the \(\lambda _i\) are decreasing; thus \(U_1\) is the most concentrated bubble. The proof of this lemma then proceeds by induction on the index i. Fix \(1\le i\le \nu \), assume that the claim holds for all indices \(j<i\), and let \(V= \sum _{j=1,j\ne i }^{\nu } \alpha _j U_j.\)

For \(\varepsilon >0\) (in particular \(\varepsilon =o(1)\)) let \(\Phi _i\) be the bump function associated to \(U_i\) as in Lemma 2.2. Then consider the following decomposition

$$\begin{aligned} \left( \alpha _i-\alpha _i^{p}\right) U_i^{p}-p (\alpha _i U_i)^{p-1} V=&-(-\Delta )^s \rho +((-\Delta )^s u-u|u|^{p-1})-\sum _{j\ne i} \alpha _{j} U_{j}^{p} \\&+p(\alpha _i U_i)^{p-1}\rho +\left[ (\sigma +\rho )|\sigma +\rho |^{p-1}-\sigma ^{p}-p \sigma ^{p-1} \rho \right] \\ {}&+\left[ p \sigma ^{p-1} \rho -p(\alpha _i U_i)^{p-1} \rho \right] \\ {}&+\left[ (\alpha _i U_i+V)^{p}-(\alpha _i U_i)^{p}-p(\alpha _i U_i)^{p-1} V\right] . \end{aligned}$$

The term \((\alpha _i-\alpha _i^p)U_i^p\) allows us to estimate \(|\alpha _i-1|\), while the term \(U_i^{p-1}V\) will help us establish the integral estimate. Furthermore, since we want a control that is linear in \(\left\Vert {(-\Delta )^s u-u|u|^{p-1}}\right\Vert _{H^{-s}}\), we introduce the term \(((-\Delta )^s u-u|u|^{p-1})\) on the right-hand side. Notice that on the region \(\{\Phi _i > 0\}\) we can use (2.16) to get

$$\begin{aligned} \sum _{j\ne i} \alpha _j U_j^p&\le o(1)U_i^{p-1} V, \\ |p\sigma ^{p-1}\rho - p(\alpha _i U_i)^{p-1} \rho |&\le o(1) p|\rho | U_i^{p-1}, \\ |(\alpha _i U_i+V)^{p}-(\alpha _i U_i)^{p}-p(\alpha _i U_i)^{p-1} V|&\le o(1) U_i^{p-1} V,\\ |(\sigma + \rho )|\sigma + \rho |^{p-1} -\sigma ^{p} - p\sigma ^{p-1}\rho |&\lesssim |\rho |^{p} + U^{p-2}\rho ^2. \end{aligned}$$

Thus combining the above estimates we get

$$\begin{aligned}&|(\alpha _i-\alpha _i^{p}) U_i^{p}-(p \alpha _i^{p-1}+o(1)) U_i^{p-1} V + (-\Delta )^s \rho -((-\Delta )^s u-u|u|^{p-1})-p(\alpha _i U_i)^{p-1} \rho | \\&\quad \lesssim |\rho |^{p}+ U_i^{p-2}|\rho |^{2}+o(1)(U_i^{p-1}|\rho |). \end{aligned}$$

Testing the above estimate with \(f\Phi _i\) where \(f = U_i\) or \(f=\partial _{\lambda } U_i\) and using orthogonality conditions (2.1), (2.2) and (2.3) we get

$$\begin{aligned}&\left| \int _{\mathbb {R}^n}\left[ \left( \alpha _i-\alpha _i^{p}\right) U_i^{p}-\left( p \alpha _i^{p-1}+o(1)\right) U_i^{p-1} V\right] f \Phi _i\right| \lesssim \left| \int _{\mathbb {R}^n} (-\Delta )^{s/2} \rho (-\Delta )^{s/2}(f \Phi _i)\right| \\&\quad +\left| \int _{\mathbb {R}^n}\left( (-\Delta )^{s} u-u|u|^{p-1}\right) f \Phi _i\right| +\left| \int _{\mathbb {R}^n} U_i^{p-1} f \rho \Phi _i\right| +\int _{\mathbb {R}^n}|\rho |^{p}|f| \Phi _i\\&\quad +o(1)\int _{\mathbb {R}^n} U_i^{p-2}|f||\rho |^{2} \Phi _i+ o(1)\int _{\mathbb {R}^n} U_i^{p-1}|f||\rho | \Phi _i. \end{aligned}$$

To estimate the first two terms we make use of the Kato-Ponce inequality (1.23). Thus for the first term, we have

$$\begin{aligned} \left| \int _{\mathbb {R}^n} (-\Delta )^{s/2} \rho (-\Delta )^{s/2}(f \Phi _i)\right|&\le \int _{\mathbb {R}^n} |\mathcal {C}(f, \Phi _i) (-\Delta )^{s/2} \rho | + \int _{\mathbb {R}^n} |(-\Delta )^{s/2} \rho f (-\Delta )^{s/2}\Phi _i| \\&\quad + \left| \int _{\mathbb {R}^n} (-\Delta )^{s/2} \rho \, (\Phi _i-1) (-\Delta )^{s/2}f \right| \end{aligned}$$

where we made use of the orthogonality conditions (2.1) and (2.2) for the last term and \(\mathcal {C}(f,\Phi _i) = (-\Delta )^{s/2}(f\Phi _i)- f (-\Delta )^{s/2}\Phi _i -\Phi _i (-\Delta )^{s/2}f.\) Then using (1.23), Hölder and Sobolev inequalities we get

$$\begin{aligned} \int _{\mathbb {R}^n} |\mathcal {C}(f, \Phi _i) (-\Delta )^{s/2} \rho |&\le \left\Vert \mathcal {C}(f,\Phi _i)\right\Vert _{L^2} \left\Vert \rho \right\Vert _{\dot{H}^s} \le \Vert f\Vert _{L^{{2}^*}} \left\Vert {(-\Delta )^{s/2} \Phi _i}\right\Vert _{L^{n/s}} \left\Vert \rho \right\Vert _{\dot{H}^s}\\&\lesssim o(1)\left\Vert \rho \right\Vert _{\dot{H}^s}\\ \int _{\mathbb {R}^n} |(-\Delta )^{s/2} \rho f (-\Delta )^{s/2}\Phi _i|&\le \left\Vert \rho \right\Vert _{\dot{H}^s} \Vert f\Vert _{L^{{2}^*}}\left\Vert {(-\Delta )^{s/2}\Phi _i}\right\Vert _{L^{n/s}} \le o(1)\left\Vert \rho \right\Vert _{\dot{H}^s} \\ \left| \int _{\mathbb {R}^n} (-\Delta )^{s/2} \rho \, (\Phi _i-1) (-\Delta )^{s/2}f \right|&\le \left\Vert \rho \right\Vert _{\dot{H}^s} \left( \int _{\{\Phi _i<1\}} |(-\Delta )^{s/2} f|^2 \right) ^{1/2} \le o(1) \left\Vert \rho \right\Vert _{\dot{H}^s}. \end{aligned}$$

Combining the above estimates yields

$$\begin{aligned} \left| \int _{\mathbb {R}^n} (-\Delta )^{s/2} \rho (-\Delta )^{s/2}(f \Phi _i)\right| \lesssim o(1)\left\Vert \rho \right\Vert _{\dot{H}^s}. \end{aligned}$$

For the second term

$$\begin{aligned} \left| \int _{\mathbb {R}^n}\left( (-\Delta )^s u-u|u|^{p-1}\right) f \Phi _i\right|&\le \Vert (-\Delta )^su-u|u|^{p-1}\Vert _{{H}^{-s}} \left\Vert (-\Delta )^{s/2}(f\Phi _i)\right\Vert _{L^2} \\&\lesssim \Vert (-\Delta )^su-u|u|^{p-1}\Vert _{{H}^{-s}} \end{aligned}$$

as \(\left\Vert (-\Delta )^{s/2}(f\Phi _i)\right\Vert _{L^2} \lesssim 1\) by (1.23). The other terms can be estimated in the same way as in [12]. Thus

$$\begin{aligned} \left| \int _{\mathbb {R}^n} U_i^{p-1} f \rho \Phi _i\right|&= \left| \int _{\mathbb {R}^n} U_i^{p-1} f \rho (\Phi _i-1)\right| \\&\lesssim \left\Vert \rho \right\Vert _{\dot{H}^s} \left( \int _{\{\Phi _i<1\}}\left( U_i^{p-1}|f|\right) ^{\frac{2^{*}}{p}}\right) ^{\frac{p}{2^{*}}} \\&\lesssim o(1) \left\Vert \rho \right\Vert _{\dot{H}^s}, \\ \int _{\mathbb {R}^n}|\rho |^{p}|f| \Phi _i&\le \left\Vert \rho \right\Vert _{\dot{H}^s}^p\left\Vert f\right\Vert _{L^{2^*}} \lesssim \left\Vert \rho \right\Vert _{\dot{H}^s}^p, \\ \int _{\mathbb {R}^n} U_i^{p-2}|f||\rho |^{2} \Phi _i&\le \left\Vert \rho \right\Vert _{\dot{H}^s}^2 \left\Vert {U_i^{p-2}f}\right\Vert _{L^{2^*/(p-1)}} \lesssim \left\Vert \rho \right\Vert _{\dot{H}^s}^2, \\ \int _{\mathbb {R}^n} U_i^{p-1}|f||\rho | \Phi _i&\le \left\Vert \rho \right\Vert _{\dot{H}^s} \left\Vert {U_i^{p-1} f}\right\Vert _{L^{2^*/p}} \lesssim \left\Vert \rho \right\Vert _{\dot{H}^s}. \end{aligned}$$

Thus combining the above estimates we finally get

$$\begin{aligned} \begin{aligned} \left| \int _{\mathbb {R}^n}\left[ \left( \alpha _i-\alpha _i^{p}\right) U_i^{p}-\left( p \alpha _i^{p-1}+o(1)\right) U_i^{p-1} V\right] f \Phi _i\right| \\ \lesssim o(1)\left\Vert \rho \right\Vert _{\dot{H}^s} + \left\| (-\Delta )^s u-u|u|^{p-1}\right\| _{H^{-s}}+\left\Vert \rho \right\Vert _{\dot{H}^s}^2. \end{aligned} \end{aligned}$$
(2.28)

Now if we split \(V = V_1 + V_2\) where \(V_1 =\sum _{j<i} \alpha _jU_j\) and \(V_2 = \sum _{j>i}\alpha _jU_j\) then we know by our induction hypothesis that our claim holds for all indices \(j<i.\) Thus

$$\begin{aligned} \int _{\mathbb {R}^n} U_i^{p-1} V_{1}|f| \Phi _i \lesssim \int U_i^{p} V_{1} \lesssim o(1)\left\Vert \rho \right\Vert _{\dot{H}^s} + \left\| (-\Delta )^s u-u|u|^{p-1}\right\| _{H^{-s}}+\left\Vert \rho \right\Vert _{\dot{H}^s}^2. \end{aligned}$$
(2.29)

Furthermore using (2.18) we have

$$\begin{aligned} V_2(x)\Phi _i(x) = (1+o(1))V_2(0)\Phi _i(x), \quad \forall x\in \mathbb {R}^n. \end{aligned}$$

If \(\alpha _i = 1\) there is nothing to prove; otherwise define \(\theta = \frac{p\alpha _i^{p-1}V_2(0)}{\alpha _i-\alpha _i^{p}}\) and thus, using the previous estimate and (2.29), we can rewrite (2.28) as

$$\begin{aligned} \begin{aligned} \left| \alpha _i-\alpha _i^{p}\right| \left| \int _{\mathbb {R}^n}\left( U_i^{p}-(1+o(1)) \theta U_i^{p-1}\right) f \Phi _i\right|&\lesssim o(1)\left\Vert \rho \right\Vert _{\dot{H}^s} + \left\| (-\Delta )^s u-u|u|^{p-1}\right\| _{H^{-s}}\\&\quad +\left\Vert \rho \right\Vert _{\dot{H}^s}^2. \end{aligned} \end{aligned}$$
(2.30)

Using (2.15) we can expand the integral on the left-hand side as follows

$$\begin{aligned} \int _{\mathbb {R}^n}\left( U_i^{p}-(1+o(1)) \theta U_i^{p-1}\right) f \Phi _i&=\int _{\mathbb {R}^n} U_i^{p} f -\theta \int _{\mathbb {R}^n} U_i^{p-1} f+ \int _{\mathbb {R}^n} U_i^{p} f (\Phi _i-1) \\&\quad + o(1) \theta \int _{\mathbb {R}^n} U_i^{p-1} f (\Phi _i - 1), \end{aligned}$$

where the last two terms can be estimated using (2.15) and \(|f|\lesssim U_i\)

$$\begin{aligned} \int _{\mathbb {R}^n} U_i^{p} f (\Phi _i-1) \lesssim \int _{\{\Phi _i<1\}} U_i^{p+1} = o(1). \end{aligned}$$

Thus we have

$$\begin{aligned} \int _{\mathbb {R}^n}\left( U_i^{p}-(1+o(1)) \theta U_i^{p-1}\right) f \Phi _i=\int _{\mathbb {R}^n} U_i^{p} f-\theta \int _{\mathbb {R}^n} U_i^{p-1} f +o(1). \end{aligned}$$
(2.31)

To prove (2.26) we need to show that the left-hand side of (2.31) cannot be too small for \(f=U_i\) or \(f=\partial _{\lambda } U_i.\) It is enough to check that

$$\begin{aligned} \frac{\int _{\mathbb {R}^n} U_i^{2^{*}}}{\int _{\mathbb {R}^n} U_i^{p}} \ne \frac{\int _{\mathbb {R}^n} U_i^{p} \partial _{\lambda } U_i}{\int _{\mathbb {R}^n} U_i^{p-1} \partial _{\lambda } U_i} \end{aligned}$$
(2.32)

because otherwise, we would have that

$$\begin{aligned} \int _{\mathbb {R}^n}\left( U_i^{p}-(1+o(1)) \theta U_i^{p-1}\right) f \Phi _i=o(1), \end{aligned}$$

which can be made arbitrarily small. To check (2.32) observe that

$$\begin{aligned} \frac{\int _{\mathbb {R}^n} U_i^{2^{*}}}{\int _{\mathbb {R}^n} U_i^{p}} = \frac{S^{n/s}}{\int _{\mathbb {R}^n} U_i^{p}} > 0 \end{aligned}$$

while,

$$\begin{aligned} \frac{\int _{\mathbb {R}^n} U_i^{p} \partial _{\lambda } U_i}{\int _{\mathbb {R}^n} U_i^{p-1} \partial _{\lambda } U_i} = \frac{\frac{1}{p+1}\frac{d}{d\lambda }_{|\lambda =\lambda _i}\int _{\mathbb {R}^n} U[0,\lambda ]^{p+1}}{\frac{1}{p}\frac{d}{d\lambda }_{|\lambda =\lambda _i}\int _{\mathbb {R}^n} U[0,\lambda ]^{p}} = 0. \end{aligned}$$
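Here the numerator vanishes because \(\int _{\mathbb {R}^n} U[0,\lambda ]^{p+1} = \int _{\mathbb {R}^n} U[0,\lambda ]^{2^*}\) is independent of \(\lambda \) by scale invariance, while

$$\begin{aligned} \int _{\mathbb {R}^n} U[0,\lambda ]^{p} = \lambda ^{\frac{2s-n}{2}} \int _{\mathbb {R}^n} U[0,1]^{p}, \end{aligned}$$

so the denominator is nonzero.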

Thus combining (2.31) and (2.32) we get

$$\begin{aligned} 1\lesssim \max _{f\in \{U_i, \partial _{\lambda } U_i\}} \left| \int \left( U_i^{p}-(1+o(1)) \theta U_i^{p-1}\right) f \Phi _i\right| \end{aligned}$$

Choosing the \(f\in \{U_i, \partial _{\lambda } U_i\}\) for which the above integral is maximized we get

$$\begin{aligned} \left| \alpha _i-\alpha _i^{p}\right|&\lesssim \left| \alpha _i-\alpha _i^{p}\right| \max _{f\in \{U_i, \partial _{\lambda } U_i\}} \left| \int \left( U_i^{p}-(1+o(1)) \theta U_i^{p-1}\right) f \Phi _i\right| \\&\lesssim o(1)\left\Vert \rho \right\Vert _{\dot{H}^s} + \left\| (-\Delta )^s u-u|u|^{p-1}\right\| _{H^{-s}}+\left\Vert \rho \right\Vert _{\dot{H}^s}^2. \end{aligned}$$

This proves (2.26). To prove (2.27) we use (2.26) and (2.28) with \(f=U_i\) to get

$$\begin{aligned} \int _{\mathbb {R}^n} U_i^p V \Phi _i \lesssim o(1)\left\Vert \rho \right\Vert _{\dot{H}^s} + \left\| (-\Delta )^s u-u|u|^{p-1}\right\| _{H^{-s}}+\left\Vert \rho \right\Vert _{\dot{H}^s}^2, \end{aligned}$$

which in particular implies that for all indices \(j\ne i\), we have

$$\begin{aligned} \int _{B(z_i, \lambda _i^{-1})} U_i^p U_j \lesssim o(1)\left\Vert \rho \right\Vert _{\dot{H}^s} + \left\| (-\Delta )^s u-u|u|^{p-1}\right\| _{H^{-s}}+\left\Vert \rho \right\Vert _{\dot{H}^s}^2. \end{aligned}$$

Using an integral estimate similar to Proposition B.2 in Appendix B of [12], we deduce that (2.27) also holds for all indices \(j>i\), and thus we are done. \(\square \)

3 Case when dimension \(n\ge 6s \)

The goal of this section is to prove Theorem 1.5 in the case when the dimension satisfies \(n\ge 6s\). We follow the steps outlined in Sect. 1.2.

3.1 Existence of the first approximation \(\rho _0\)

In this section, our goal is to find a function \(\rho _0\) and a set of scalars \(\{c^i_a\}\) such that the following system is satisfied

$$\begin{aligned} \left\{ \begin{array}{l}(-\Delta )^s \varphi -\left[ \left( \sigma +\varphi \right) ^{p}-\sigma ^{p}\right] =h-\sum _{i=1}^{\nu } \sum _{a=1}^{n+1} c_{a}^{i} U_{i}^{p-1} Z_{i}^{a} \\ \langle \varphi ,Z_i^a\rangle _{\dot{H}^s}=0, \quad i=1,\ldots ,\nu ;\quad a=1, \ldots , n+1\end{array}\right. \end{aligned}$$
(3.1)

Here the \(U_i = U[z_i,\lambda _i]\) form a family of \(\delta \)-interacting bubbles, \(\sigma = \sum _{i=1}^{\nu } U_i\), the \(Z_i^{a} \) are the derivatives defined in (1.30), and \(h = \sigma ^{p}-\sum _{j=1}^{\nu } U_{j}^{p}\). Following the finite-dimensional reduction argument, we first solve the corresponding system with the linearized operator

$$\begin{aligned} \left\{ \begin{array}{l}(-\Delta )^s \varphi -p\sigma ^{p-1}\varphi =h-\sum _{i=1}^{\nu } \sum _{a=1}^{n+1} c_{a}^{i} U_{i}^{p-1} Z_{i}^{a} \\ \langle \varphi ,Z_i^a\rangle _{\dot{H}^s}=0, \quad i=1,\ldots ,\nu ;\quad a=1, \ldots , n+1\end{array}\right. \end{aligned}$$
(3.2)

Since the proof follows from a fixed point argument, we need to define the appropriate normed space. Denote for \(i\ne j\) and \(i,j\in I\) where \(I=\{1,\ldots , \nu \}\)

$$\begin{aligned} R_{ij}&= \max \left( \sqrt{\lambda _{i} / \lambda _{j}}, \sqrt{\lambda _{j} / \lambda _{i}}, \sqrt{\lambda _{i} \lambda _{j}}\left| z_{i}-z_{j}\right| \right) = \varepsilon ^{-1}_{ij}, \end{aligned}$$
(3.3)
$$\begin{aligned} R&= \frac{1}{2}\min _{i\ne j} R_{ij} . \end{aligned}$$
(3.4)

The quantity \(R_{ij}\) measures how far apart the two bubbles are, either in concentration scale or in the distance between their centers. We further identify two regimes of interest in the subsequent definition.

Definition 3.1

If \(R_{ij} = \sqrt{\lambda _{i} \lambda _{j}}\left| z_{i}-z_{j}\right| \) then we call the bubbles \(U_i\) and \(U_j\) a bubble cluster. Otherwise, we call them a bubble tower.
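For instance, if \(z_1=z_2\) and \(\lambda _1/\lambda _2 = R^2\gg 1\), then \(R_{12}=\sqrt{\lambda _1/\lambda _2}=R\) and the pair \(U_1, U_2\) is a bubble tower, whereas if \(\lambda _1=\lambda _2=\lambda \) and \(\lambda |z_1-z_2| = R\gg 1\), then \(R_{12}=\sqrt{\lambda _1\lambda _2}\,|z_1-z_2|=R\) and the pair is a bubble cluster.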

We now define two norms that capture the behavior of the interaction term h.

Definition 3.2

Define the norm \(\left\Vert {\cdot }\right\Vert _{*}\) as

$$\begin{aligned} \left\Vert {\varphi }\right\Vert _{*} = \sup _{x \in \mathbb {R}^{n}}|\varphi (x)| W^{-1}(x) \end{aligned}$$
(3.5)

and the norm \(\left\Vert {\cdot }\right\Vert _{**}\) as

$$\begin{aligned} \left\Vert {h}\right\Vert _{**} = \sup _{x \in \mathbb {R}^{n}}|h(x)| V^{-1}(x), \end{aligned}$$
(3.6)

where the functions V and W are defined using \(y_i = \lambda _i(x-z_i)\)

$$\begin{aligned} V(x)&= \sum _{i=1}^{\nu }\left( \frac{\lambda _{i}^{\frac{n+2s}{2}} R^{2s-n}}{\left\langle y_{i}\right\rangle ^{4s}} \chi _{\left\{ \left| y_{i}\right| \le R\right\} }+\frac{\lambda _{i}^{\frac{n+2s}{2}} R^{-4s}}{\left\langle y_{i}\right\rangle ^{n-2s}} \chi _{\left\{ \left| y_{i}\right| \ge R / 2\right\} }\right) \end{aligned}$$
(3.7)
$$\begin{aligned} W(x)&=\sum _{i=1}^{\nu }\left( \frac{\lambda _{i}^{\frac{n-2s}{2}} R^{2s-n}}{\left\langle y_{i}\right\rangle ^{2s}} \chi _{\left\{ \left| y_{i}\right| \le R\right\} }+\frac{\lambda _{i}^{\frac{n-2s}{2}} R^{-4s}}{\left\langle y_{i}\right\rangle ^{n-4s}} \chi _{\left\{ \left| y_{i}\right| \ge R / 2\right\} }\right) . \end{aligned}$$
(3.8)
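Note that the two pieces in each summand match, up to constants, on the overlap region \(|y_i|\approx R\), where

$$\begin{aligned} \frac{\lambda _{i}^{\frac{n-2s}{2}} R^{2s-n}}{\left\langle y_{i}\right\rangle ^{2s}} \approx \frac{\lambda _{i}^{\frac{n-2s}{2}} R^{-4s}}{\left\langle y_{i}\right\rangle ^{n-4s}} \approx \lambda _{i}^{\frac{n-2s}{2}} R^{-n} \quad \text { and }\quad \frac{\lambda _{i}^{\frac{n+2s}{2}} R^{2s-n}}{\left\langle y_{i}\right\rangle ^{4s}} \approx \frac{\lambda _{i}^{\frac{n+2s}{2}} R^{-4s}}{\left\langle y_{i}\right\rangle ^{n-2s}} \approx \lambda _{i}^{\frac{n+2s}{2}} R^{-n-2s}, \end{aligned}$$

so the characteristic functions do not create jumps in \(W\) or \(V\) of more than a constant factor.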

Using the norm \(\Vert \cdot \Vert _{**}\) we can obtain control on the interaction term h for small enough \(\delta \). This is the content of the next lemma.

Lemma 3.3

There exists a small constant \(\delta _0 =\delta _{0}(n)\) and large constant \(C=C(n)\) such that if \(\delta <\delta _0\) then

$$\begin{aligned} \Vert \sigma ^p - \sum _{i=1}^{\nu }U_i^p\Vert _{**} \le C(n). \end{aligned}$$

Proof

We first prove the desired estimate when \(\nu =2\). Then since

$$\begin{aligned} \left| \sigma ^p - \sum _{i=1}^{\nu } U_i^p\right| \le C \sum _{i\ne j}\left[ (U_i+U_j)^p -U_i^p-U_j^p\right] \end{aligned}$$

we can conclude the proof in the case when \(\nu >2.\) Thus for the remainder of the proof assume that \(\nu = 2.\) As the bubbles are weakly interacting, the bubbles \(U_1\) and \(U_2\) either form a bubble tower or a bubble cluster. We will analyze each case separately.

Case 1. (Bubble Tower) Assume w.l.o.g. that \(\lambda _1 > \lambda _2\), in other words \(U_1\) is more concentrated than \(U_2.\) Then \(R_{12} = \sqrt{\frac{\lambda _1}{\lambda _2}} \gg 1.\) We will estimate the function \(h =(U_1+U_2)^p - U_1^p-U_2^p\) in different regimes.

Core region of \(U_1\): We define the core region of \(U_1\) as the following set

$$\begin{aligned} {\text {Core}}(U_1) = \left\{ x\in \mathbb {R}^n: |x-z_1| < \frac{1}{\sqrt{\lambda _1 \lambda _2}}\right\} \end{aligned}$$
(3.9)

Working in the \(z_1\) centered co-ordinates \(y_1 = \lambda _1(x-z_1)\), we first have that

$$\begin{aligned} U_1(x) = U[z_1,\lambda _1](x) = \lambda _1^{(n-2s)/2}U[0,1](y_1) = \lambda _1^{(n-2s)/2}U(y_1). \end{aligned}$$

Furthermore denoting \(\lambda _2 = \lambda _1 R_{12}^{-2}\) and \(\xi _2 = \lambda _1(z_2-z_1)\) we have the following identity

$$\begin{aligned} \lambda _2(x-z_2) = R_{12}^{-2}\lambda _1 (x-z_1) + R_{12}^{-2}\lambda _1(z_1-z_2) = R_{12}^{-2}(y_1 -\xi _2). \end{aligned}$$

Thus we can express \(U_2\) as follows

$$\begin{aligned} U_2(x)&= U[z_2,\lambda _2](x) = c_{n,s} \left( \frac{\lambda _2}{1+\lambda _2^{2}|x-z_2|^{2}}\right) ^{\frac{n-2s}{2}} \\&= c_{n,s} \left( \frac{\lambda _1 R_{12}^{-2}}{1+R_{12}^{-4}|y_1-\xi _2|^{2}}\right) ^{\frac{n-2s}{2}} = \frac{c_{n,s}\lambda _1^{(n-2s)/2} R_{12}^{2s-n}}{(1+R_{12}^{-4}|y_1-\xi _2|^{2})^{(n-2s)/2}}. \end{aligned}$$

Note that by definition of \(R_{12}\) in (3.3), we have

$$\begin{aligned} |\xi _2| = |\lambda _1(z_1-z_2)|\le \frac{R_{12}\lambda _1}{\sqrt{\lambda _1 \lambda _2}}\le R_{12}^2. \end{aligned}$$

Now we are in position to estimate h in the core region of \(U_1.\) Observe that for \(x\in {\text {Core}}(U_1)\) we have

$$\begin{aligned} |y_1| = |\lambda _1(x-z_1)| < \frac{\lambda _1}{\sqrt{\lambda _1 \lambda _2}} = R_{12} \end{aligned}$$

and thus if \(|y_1|\le \frac{R_{12}}{2}\) we have that \(U_2\lesssim U_1\). Therefore

$$\begin{aligned} h \lesssim U_1^{p-1}U_2&\approx \frac{\lambda _1^{(n+2s)/2} R_{12}^{2s-n}}{(1+|y_1|^2)^{(n+2s)/2}(1+R_{12}^{-4}|y_1-\xi _2|^{2})^{(n-2s)/2}}\nonumber \\&\lesssim \frac{\lambda _1^{(n+2s)/2} R_{12}^{2s-n}}{\langle y_1 \rangle ^{n+2s} (1 + |y_1|^{-2})^{(n-2s)/2}} \lesssim \frac{\lambda _1^{(n+2s)/2} R_{12}^{2s-n}}{\langle y_1 \rangle ^{4s} }. \end{aligned}$$
(3.10)

Outside the Core region of \(U_1\): In this case we consider \(R_{12}/3 \le |y_1| \le 2 R_{12}^2\) and thus we get \(U_1 \approx \lambda _1^{(n-2\,s)/2} |y_1|^{2\,s-n}\). This is because

$$\begin{aligned} U_1&= c_{n,s} \lambda _1^{(n-2s)/2} \frac{1}{(1+|y_1|^2)^{(n-2s)/2}} \lesssim \lambda _1^{(n-2s)/2} |y_1|^{2s-n}, \\ \lambda _1^{(n-2s)/2} |y_1|^{2s-n}&\lesssim \frac{\lambda _1^{(n-2s)/2}}{(9|y_1|^2+|y_1|^2)^{(n-2s)/2}} \lesssim U_1 = c_{n,s} \frac{\lambda _1^{(n-2s)/2}}{(1+|y_1|^2)^{(n-2s)/2}}, \end{aligned}$$

where for the second estimate we used the fact that \(\frac{1}{3} \le \frac{R_{12}}{3} \le |y_1|\) implies \(9|y_1|^2 \ge 1.\) On the other hand, \(U_2 \approx \lambda _1^{(n-2s)/2} R_{12}^{2s-n}.\) This is because

$$\begin{aligned} U_2&= \frac{c_{n,s}\lambda _1^{(n-2s)/2} R_{12}^{2s-n}}{(1+R_{12}^{-4}|y_1-\xi _2|^{2})^{(n-2s)/2}} \lesssim \lambda _1^{(n-2s)/2} R_{12}^{2s-n}, \\ \lambda _1^{(n-2s)/2} R_{12}^{2s-n}&\lesssim U_2 = \frac{c_{n,s}\lambda _1^{(n-2s)/2} R_{12}^{2s-n}}{(1+R_{12}^{-4}|y_1-\xi _2|^{2})^{(n-2s)/2}}, \end{aligned}$$

where for the second estimate we used the fact that \(|\xi _2|^2\le R_{12}^4\) and \(|y_1|^2\le 4R_{12}^4\) to get

$$\begin{aligned} R_{12}^{-4}|y_1-\xi _2|^2 \le R_{12}^{-4}(|y_1|^2 + |\xi _2|^2) \le 5 \lesssim 1. \end{aligned}$$

Thus we get

$$\begin{aligned} h&= U_2^p \left[ (1+U_1/U_2)^p - 1 - (U_1/U_2)^p\right] \nonumber \\&\lesssim \lambda _{1}^{(n+2s) / 2} R_{12}^{-(n+2s)}\left| \left( 1+\frac{R_{12}^{n-2s}}{|y_1|^{n-2s}}\right) ^{p}-1-\left( \frac{R_{12}}{|y_1|}\right) ^{p(n-2s)}\right| \nonumber \\&\lesssim \frac{\lambda _{1}^{(n+2s) / 2} R_{12}^{-(n+2s)}}{|y_1/R_{12}|^{n-2s}} \approx \frac{\lambda _{1}^{(n+2s) / 2} R_{12}^{-4s}}{\left\langle y_{1}\right\rangle ^{n-2s}}. \end{aligned}$$
(3.11)

Core region of \(U_2\): We define the core region of \(U_2\) as

$$\begin{aligned} {\text {Core}}(U_2) = \left\{ x\in \mathbb {R}^n: |x-z_2| < \frac{1}{\lambda _2} \sqrt{\frac{\lambda _1}{\lambda _2}}\right\} . \end{aligned}$$
(3.12)

Note that in the \(z_2\) centered co-ordinates \(y_2 = \lambda _2(x-z_2)\) the points \(x\in \mathbb {R}^n\) satisfying \(|y_2|< R_{12}\) are precisely the ones forming \({\text {Core}}(U_2).\) As before, in these new co-ordinates we can rewrite \(U_2(x) = \lambda _2^{(n-2\,s)/2} U(y_2)\) and \(U_1\) as

$$\begin{aligned} U_{1}(x)=\frac{\lambda _{2}^{(n-2s) / 2} R_{12}^{2s-n}}{\left( R_{12}^{-4}+\left| y_{2}-\xi _{1}\right| ^{2}\right) ^{(n-2s) / 2}} \end{aligned}$$

where \(\xi _1 = \lambda _2(z_1-z_2)\) such that

$$\begin{aligned} |\xi _1|=\lambda _2|z_1-z_2|\le \sqrt{\frac{\lambda _2}{\lambda _1}} \sqrt{\lambda _1\lambda _2}|z_1-z_2|\le \sqrt{\frac{\lambda _2}{\lambda _1}}\sqrt{\frac{\lambda _1}{\lambda _2}}=1. \end{aligned}$$

Since \(y_2 - \xi _1 = R_{12}^{-2}y_1\), in the region \(1\le |y_2-\xi _1| \le R_{12}/2\) we have \(R_{12}^2 \le |y_1| \le R_{12}^3/2.\) This region is indeed contained in the core region of \(U_2\), as

$$\begin{aligned} |y_2|\le |\xi _1| + R_{12}^{-2}|y_1|\le 1+R_{12}/2\le R_{12} \end{aligned}$$

and thus \(U_1\lesssim U_2\) which implies that

$$\begin{aligned} h \lesssim U_2^{p-1} U_1 \lesssim \frac{\lambda _2^{(n+2s)/2}R_{12}^{2s-n}}{\langle y_2 \rangle ^{4s}}. \end{aligned}$$
(3.13)

Outside the Core region of \(U_2\): In this region \(|y_2-\xi _1| \ge R_{12}/3 \) and, therefore

$$\begin{aligned} h \lesssim U_{2}^{p} \lesssim \frac{\lambda _{1}^{(n+2s) / 2} R_{12}^{-2s}}{\left\langle y_{1}\right\rangle ^{n}} \lesssim \frac{\lambda _{1}^{(n+2s) / 2} R_{12}^{-4s}}{\left\langle y_{1}\right\rangle ^{n-2s}}. \end{aligned}$$
(3.14)

Thus if we put together estimates (3.10), (3.11), (3.13) and (3.14) we get

$$\begin{aligned} h \lesssim \sum _{i=1}^{2}\left( \frac{\lambda _{i}^{\frac{n+2s}{2}} R^{2s-n}}{\left\langle y_{i}\right\rangle ^{4s}} \chi _{\left\{ \left| y_{i}\right| \le R / 2\right\} }+\frac{\lambda _{i}^{\frac{n+2s}{2}} R^{-4s}}{\left\langle y_{i}\right\rangle ^{n-2s}} \chi _{\left\{ \left| y_{i}\right| \ge R / 3\right\} }\right) , \end{aligned}$$
(3.15)

which in particular implies that \(h(x)\le C(n,s) V(x)\) for some positive constant \(C(n,s)>0\) depending only on \(n\) and \(s\). Thus, if the bubbles form a tower then Lemma 3.3 holds. The proof when the bubbles form a cluster follows from the same argument as in the proof of Proposition 3.3 in [8], with minor modifications to the exponents involving the parameter \(s\). \(\square \)

Next, we look at a system similar to (3.2) and deduce an a priori estimate on its solution.

Lemma 3.4

There exists a positive \(\delta _{0}\) and a constant C, independent of \(\delta \), such that for all \(\delta \leqslant \delta _{0}\), if \(\left\{ U_{i}\right\} _{1 \le i \le \nu }\) is a \(\delta \) -interacting bubble family and \(\varphi \) solves the equation

$$\begin{aligned} \left\{ \begin{array}{l} (-\Delta )^s \varphi -p \sigma ^{p-1} \varphi +h=0 \\ \langle \varphi ,Z_i^a\rangle _{\dot{H}^s}=0, \quad i=1, \ldots , \nu ; a=1, \ldots , n+1 \end{array}\right. \end{aligned}$$
(3.16)

then

$$\begin{aligned} \Vert \varphi \Vert _{*} \le C\Vert h\Vert _{* *}, \end{aligned}$$
(3.17)

where the norms \(\Vert \varphi \Vert _{*} = \sup _{x\in \mathbb {R}^n} |\varphi (x)|W^{-1}(x)\) and \(\Vert h\Vert _{**} = \sup _{x\in \mathbb {R}^n}|h(x)|V^{-1}(x)\) with weight functions

$$\begin{aligned} V(x)&= \sum _{i=1}^{\nu }\left( \frac{\lambda _{i}^{\frac{n+2s}{2}} R^{2s-n}}{\left\langle y_{i}\right\rangle ^{4s}} \chi _{\left\{ \left| y_{i}\right| \le R\right\} }+\frac{\lambda _{i}^{\frac{n+2s}{2}} R^{-4s}}{\left\langle y_{i}\right\rangle ^{n-2s}} \chi _{\left\{ \left| y_{i}\right| \ge R / 2\right\} }\right) ,\\ W(x)&= \sum _{i=1}^{\nu }\left( \frac{\lambda _{i}^{\frac{n-2s}{2}} R^{2s-n}}{\left\langle y_{i}\right\rangle ^{2s}} \chi _{\left\{ \left| y_{i}\right| \le R\right\} }+\frac{\lambda _{i}^{\frac{n-2s}{2}} R^{-4s}}{\left\langle y_{i}\right\rangle ^{n-4s}} \chi _{\left\{ \left| y_{i}\right| \ge R / 2\right\} }\right) . \end{aligned}$$

Proof

We will prove this result following the same contradiction-based argument outlined in [8]. Thus, assume that the estimate (3.17) does not hold. Then there exist sequences \((U^{(k)}_i)_{i=1}^{\nu }\) of \(\frac{1}{k}\)-interacting families of bubbles, \(h = h_k\) with \(\Vert h_k\Vert _{**}\rightarrow 0\) as \(k\rightarrow \infty \), and \(\varphi = \varphi _k\) with \(\Vert \varphi _k\Vert _{*} = 1\), satisfying

$$\begin{aligned} \left\{ \begin{array}{l} (-\Delta )^s \varphi _k-p \sigma ^{p-1} \varphi _k+h_k=0 \\ \langle \varphi _k,Z_i^{a^{(k)}}\rangle _{\dot{H}^s}=0, \quad i=1,\ldots ,\nu ; a=1, \ldots , n+1 \end{array}\right. \end{aligned}$$
(3.18)

Here \(Z_i^{a^{(k)}}\) are the partial derivatives of \(U_i^{(k)}\) defined as in (1.30). We will often drop the superscript for convenience and set \(U_i = U_i^{(k)}\) and \(\sigma = \sigma ^{(k)} =\sum _{i=1}^{\nu } U_i^{(k)}.\) Next, denote \(z_{ij}^{(k)} = \lambda _i^{(k)}(z_j^{(k)}-z_i^{(k)})\) for \(i\ne j.\) For each such sequence we can assume that either \(\lim _{k\rightarrow \infty } |z_{ij}^{(k)}| = \infty \) or \(\lim _{k\rightarrow \infty } z_{ij}^{(k)}\) exists and \(\lim _{k\rightarrow \infty } |z_{ij}^{(k)}|<\infty \). Thus define the following two index sets for \(i\in I=\{1,2,\ldots ,\nu \}\)

$$\begin{aligned} I_{i, 1}&= \left\{ j\in I\setminus \{i\} | \lim _{k \rightarrow \infty } z_{i j}^{(k)} \text { exists and } \lim _{k \rightarrow \infty }|z_{i j}^{(k)}|<\infty \right\} ,\\ I_{i, 2}&=\left\{ j\in I\setminus \{i\} | \lim _{k \rightarrow \infty } z_{i j}^{(k)}=\infty \right\} . \end{aligned}$$

Given a bubble \(U_i\), we can further classify the bubbles \(U_j\), \(j\ne i\), according to whether they form a bubble tower or a bubble cluster with \(U_i\) and according to their relative concentration:

$$\begin{aligned} T_{i}^{+}&=\left\{ j\in I\setminus \{i\} \mid \lambda _{j}^{(k)} \ge \lambda _{i}^{(k)}, R_{ij}^{(k)}=\sqrt{\lambda _{j}^{(k)} / \lambda _{i}^{(k)}}\right\} \\ T_{i}^{-}&=\left\{ j\in I\setminus \{i\} \mid \lambda _{j}^{(k)}<\lambda _{i}^{(k)}, R_{ij}^{(k)}=\sqrt{\lambda _{i}^{(k)} / \lambda _{j}^{(k)}}\right\} \\ C_{i}^{+}&=\left\{ j\in I\setminus \{i\} |\lim _{k \rightarrow \infty } \lambda _{j}^{(k)} / \lambda _{i}^{(k)}=\infty , R_{ij}^{(k)}=\sqrt{\lambda _{i}^{(k)} \lambda _{j}^{(k)}}| z_{i}^{(k)}-z_{j}^{(k)}|\right\} \\ C_{i}^{-}&=\left\{ j\in I\setminus \{i\}|\lim _{k \rightarrow \infty } \lambda _{j}^{(k)} / \lambda _{i}^{(k)}<\infty , R_{ij}^{(k)}=\sqrt{\lambda _{i}^{(k)} \lambda _{j}^{(k)}}| z_{i}^{(k)}-z_{j}^{(k)}|\right\} . \end{aligned}$$

Setting \(y_i^{(k)} = \lambda _i^{(k)}(x-z_i^{(k)})\) and \(L_i = \max _{j\in I_{i,1}}\{\lim _{k\rightarrow \infty } |z_{ij}^{(k)}|\}+L\) we define

$$\begin{aligned} \Omega ^{(k)}&= \bigcup _{i\in I} \left\{ |y_i^{(k)}|\le L_i\right\} , \end{aligned}$$
(3.19)
$$\begin{aligned} \Omega _{i}^{(k)}&=\left\{ |y_{i}^{(k)}| \le L_i\right\} \bigcap \left( \bigcap _{j \in T_{i}^{+} \cup C_{i}^{+}}\left\{ \left| y_{i}^{(k)}-z_{i j}^{(k)}\right| \ge \varepsilon \right\} \right) \end{aligned}$$
(3.20)

where \(i\in I\), \(j\ne i\), \(L>0\) is a large constant and \(\varepsilon >0\) is a small constant to be determined later. Observe that for large k, our choice of \(L_i\) ensures that \(\{|y_i^{(k)}-z_{ij}^{(k)}|=\varepsilon \}\subset \{|y_i^{(k)}|\le L_i\}\) for \(j\in T_i^+\cup C_i^+.\)

Dropping superscripts we re-write the weight functions W and V as defined earlier in the following manner

$$\begin{aligned} W = \sum _{i=1}^{\nu } \left( w_{i,1} + w_{i,2}\right) ,\quad V = \sum _{i=1}^{\nu } \left( v_{i,1} + v_{i,2}\right) , \end{aligned}$$

where for each \(i\in I\) and \(R=\frac{1}{2}\min _{j\ne i} R_{ij}^{(k)}\rightarrow \infty \) as \(k\rightarrow \infty \), we have

$$\begin{aligned} w_{i, 1}(x)&=\frac{\lambda _{i}^{\frac{n-2s}{2}} R^{2s-n}}{\left\langle y_{i}\right\rangle ^{2s}} \chi _{\left\{ \left| y_{i}\right| \le R\right\} }, \quad w_{i, 2}(x)=\frac{\lambda _{i}^{\frac{n-2s}{2}} R^{-4s}}{\left\langle y_{i}\right\rangle ^{n-4s}} \chi _{\left\{ \left| y_{i}\right| \ge R / 2\right\} }, \end{aligned}$$
(3.21)
$$\begin{aligned} v_{i, 1}(x)&=\frac{\lambda _{i}^{\frac{n+2s}{2}} R^{2s-n}}{\left\langle y_{i}\right\rangle ^{4s}} \chi _{\left\{ \left| y_{i}\right| \le R\right\} }, \quad v_{i, 2}(x)=\frac{\lambda _{i}^{\frac{n+2s}{2}} R^{-4s}}{\left\langle y_{i}\right\rangle ^{n-2s}} \chi _{\left\{ \left| y_{i}\right| \ge R / 2\right\} }. \end{aligned}$$
(3.22)

Since \(\Vert \varphi _k\Vert _{*}=1\), by definition we have that \(|\varphi _k|(x) \le W(x)\). In particular, there exist points \(x_k\in \mathbb {R}^n\) such that \(|\varphi _k|(x_k)=W(x_k).\) We will now analyze several cases based on where the sequence \(\{x_k\}_{k \in \mathbb {N}}\) lives.

Case 1. Suppose \(\{x_k\}_{k \in \mathbb {N}}\subset \Omega _i\) for some \(i\in I.\) W.l.o.g. \(i=1\) and set

$$\begin{aligned} \left\{ \begin{array}{l}\tilde{\varphi }_{k}\left( y_{1}\right) :=W^{-1}\left( x_{k}\right) \varphi _{k}(x) \quad \text{ with } y_{1}=\lambda _{1}\left( x-z_{1}\right) \\ \tilde{h}_{k}\left( y_{1}\right) :=\lambda _{1}^{-2s} W^{-1}\left( x_{k}\right) h_{k}(x), \quad \tilde{\sigma }_{k}\left( y_{1}\right) :=\sigma _{k}(x)\end{array}\right. \end{aligned}$$
(3.23)

Then since \(\varphi _k\) solves the system (3.18), the rescaled functions \(\tilde{\varphi }_k\) and \(\tilde{h}_k\) satisfy

$$\begin{aligned} \left\{ \begin{array}{ll} (-\Delta )^s \tilde{\varphi }_{k}\left( y_{1}\right) -p \lambda _{1}^{-2s} \tilde{\sigma }_{k}^{p-1}\left( y_{1}\right) \tilde{\varphi }_{k}\left( y_{1}\right) +\tilde{h}_{k}\left( y_{1}\right) =0 &{} \text{ in } \mathbb {R}^{n} \\ \int _{\mathbb {R}^n} U^{p-1} Z_{1}^{a} \tilde{\varphi }_{k} d y_{1}=0, &{} 1 \le a \le n+1\end{array}\right. \end{aligned}$$
(3.24)

where \(U = U[0,1](y_1)\). Set \(\tilde{z}_j = z_{1j}^{(k)}\), \(\bar{z}_j = \lim _{k\rightarrow \infty } \tilde{z}_{j}\), and define

$$\begin{aligned} E_{1}&=\bigcap _{j \in T_{1}^{+} \cap I_{1,1}}\left\{ \left| y_{1}-\tilde{z}_{j}\right| \ge 1 / M\right\} , \quad E_{2}=\bigcap _{j \in C_{1}^{+} \cap I_{1,1}}\left\{ \left| y_{1}-\bar{z}_{j}\right| \ge \left| \bar{z}_{j}\right| / M\right\} , \nonumber \\ K_M&=\left\{ \left| y_{1}\right| \le M\right\} \cap E_{1} \cap E_{2}. \end{aligned}$$
(3.25)

If we choose M large enough, then for k large enough the points \(\lambda _1(x_k-z_1)\) belong to \(K_M\). To prove estimate (3.17) in Case 1 we need to make use of the following proposition.

Proposition 3.5

In each compact subset \(K_{M}\), it holds that, as \(k \rightarrow \infty \)

$$\begin{aligned} \lambda _{1}^{-2s} \tilde{\sigma }_{k}^{p-1} \rightarrow U[0,1]^{p-1}, \quad \left| \tilde{h}_{k}\right| \rightarrow 0 \end{aligned}$$
(3.26)

uniformly. Moreover, we have

$$\begin{aligned} \left| \tilde{\varphi }_{k}\right| \left( y_{1}\right) \lesssim \left| y_{1}-\tilde{z}_{j}\right| ^{4s-n}+1, \quad j \in T_{1}^{+} \cup C_{1}^{+}. \end{aligned}$$
(3.27)
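Heuristically, the first convergence in (3.26) reflects the scaling of the dominant bubble: since \((p-1)\frac{n-2s}{2}=2s\),

$$\begin{aligned} \lambda _1^{-2s}\, U_1^{p-1}(x) = \lambda _1^{-2s}\left( \lambda _1^{(n-2s)/2} U(y_1)\right) ^{p-1} = U(y_1)^{p-1}, \end{aligned}$$

while the contribution of the remaining bubbles becomes negligible on the compact sets \(K_M\) as \(k\rightarrow \infty \).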

Postponing the proof of Proposition 3.5, we first complete the proof of (3.17) in this case. Note that by elliptic regularity theory and (3.26), \(\tilde{\varphi }_k\) converges, up to a subsequence, in \(K_M\). Letting \(M\rightarrow \infty \) and using a diagonal argument, we get that \(\tilde{\varphi }_k\rightarrow \tilde{\varphi }\) weakly, where \(\tilde{\varphi }\) satisfies

$$\begin{aligned} \left\{ \begin{array}{ll} (-\Delta )^s \tilde{\varphi }-p U^{p-1} \tilde{\varphi }=0, &{} \text{ in } \mathbb {R}^{n} \backslash \left\{ \bar{z}_{j} \mid j \in \left( T_{1}^{+} \cup C_{1}^{+}\right) \cap I_{1,1}\right\} \\ |\tilde{\varphi }|\left( y_{1}\right) \lesssim \left| y_{1}-\bar{z}_{j}\right| ^{4s-n}+1, &{} j \in \left( T_{1}^{+} \cup C_{1}^{+}\right) \cap I_{1,1}, \\ \int _{\mathbb {R}^n} U^{p-1} Z^{a} \tilde{\varphi } d y_{1}=0, &{} 1 \le a \le n+1\end{array}\right. \end{aligned}$$
(3.28)

The orthogonality condition implies that \(\tilde{\varphi }\) is orthogonal to the kernel of the linearized operator \((-\Delta )^s - pU^{p-1}\), and thus \(\tilde{\varphi } =0.\) On the other hand, since \(|Y_k|= |\lambda _1(x_k-z_1)|\le L_1\), up to a subsequence we have that \(Y_k\rightarrow Y_{\infty }\) and thus \(|\tilde{\varphi }|(Y_{\infty }) = 1\), which is clearly a contradiction.

Case 2. Suppose \(\{x_k\}_{k \in \mathbb {N}}\subset \Omega {\setminus } \bigcup _{i\in I}\Omega _i.\) Then there exist an index \(i\in I\) and a subsequence, still denoted \(\{x_k\}_{k \in \mathbb {N}}\), satisfying

$$\begin{aligned} \{x_k\}_{k \in \mathbb {N}}\subset A_i = \bigcup _{j \in T_{i}^{+} \cup C_{i}^{+}}\left\{ \left| y_{i}\right| \le L_i,\left| y_{i}-\tilde{z}_{i j}\right| \le \varepsilon ,\left| y_{j}\right| \ge L_j\right\} . \end{aligned}$$

We assume w.l.o.g. that \(i=1\). For \(\mu \in (0,1/2)\) define the function

$$\begin{aligned} F(a, b)=\frac{a+b}{2}-\sqrt{\left( \frac{a-b}{2}\right) ^{2}+\mu a b}, \end{aligned}$$
(3.29)

which approximates the function \(\min \{a,b\}.\) Using F we define new weight functions \(\tilde{W}\) and \(\tilde{V}\) as follows

$$\begin{aligned} \tilde{W}(x)&=\sum _{j \in J_{1}} \lambda _{j}^{\frac{n-2s}{2}} F\left( \frac{R^{2s-n}}{\left\langle y_{j}\right\rangle ^{2s}}, \frac{R^{-4s}}{\left\langle y_{j}\right\rangle ^{n-4s}}\right) +w_{1,1}(x), \end{aligned}$$
(3.30)
$$\begin{aligned} \tilde{V}(x)&=\sum _{j \in J_{1}} \lambda _{j}^{\frac{n+2s}{2}} F\left( \frac{R^{2s-n}}{\left\langle y_{j}\right\rangle ^{4s}}, \frac{R^{-4s}}{\left\langle y_{j}\right\rangle ^{n-2s}}\right) +v_{1,1}(x). \end{aligned}$$
(3.31)
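Indeed, for all \(a,b>0\) and \(\mu \in (0,1/2)\) one can check that

$$\begin{aligned} (1-\sqrt{\mu })\min \{a,b\} \le F(a,b) \le \min \{a,b\}, \end{aligned}$$

so \(F(a,b)\) is comparable to \(\min \{a,b\}\), while being smooth, concave and \(1\)-homogeneous on \((0,\infty )^2\).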

Then

$$\begin{aligned} \tilde{W}(x)&\approx \sum _{j \in J_{1}}\left( w_{j, 1}(x)+w_{j, 2}(x)\right) +w_{1,1}(x) \\ \tilde{V}(x)&\approx \sum _{j \in J_{1}}\left( v_{j, 1}(x)+v_{j, 2}(x)\right) +v_{1,1}(x), \end{aligned}$$

where \(J_1 = T_{1}^{+} \cup C_{1}^{+}\). We need the following three propositions to conclude estimate (3.17) for Case 2.

Proposition 3.6

The functions \(\tilde{W}\) and \(\tilde{V}\) satisfy

$$\begin{aligned} (-\Delta )^s \tilde{W}\ge \alpha _{n,s} \tilde{V}, \end{aligned}$$
(3.32)

where \(\alpha _{n,s}>0\) is a constant depending on n and s.

Proposition 3.7

For two bubbles \(U_{i}\) and \(U_{j}\) and \(k\) large enough, in the region \(\left\{ \left| y_{i}\right| \ge L_i,\left| y_{j}\right| \ge L_j\right\} \) it holds that

$$\begin{aligned} \sum _{m, l \in \{i,j\}} U_{m}^{p-1}\left( w_{l, 1}+w_{l, 2}\right) \le \varepsilon _{1} \sum _{l \in \{i,j\}}\left( v_{l, 1}+v_{l, 2}\right) \end{aligned}$$
(3.33)

with \(\varepsilon _1>0\) depending on \(L\), \(n\), \(s\) and \(\varepsilon .\)

Proposition 3.8

For bubble \(U_{i}\), let \(J_{i}=T_{i}^{+} \cup C_{i}^{+}.\) In the region \(A_{i}\), we have

$$\begin{aligned} U_{i}^{p-1} \sum _{j \in J_{i}}\left( w_{j, 1}+w_{j, 2}\right)&\le \varepsilon _{1} \sum _{j \in J_{i}}\left( v_{j, 1}+v_{j, 2}\right) \end{aligned}$$
(3.34)
$$\begin{aligned} \sum _{j \in J_{i}} U_{j}^{p-1} w_{i, 1}&\le \varepsilon _{1} \sum _{j \in J_{i}}\left( v_{j, 1}+v_{j, 2}\right) +\varepsilon _{1} v_{i, 1} \end{aligned}$$
(3.35)

when k is large enough.

From Proposition 3.6 we have \((-\Delta )^s \tilde{W}\ge \alpha _{n,s} \tilde{V}\). Furthermore in \(A_1\) it is clear that \(U_1 \gg \sum _{j\in T_1^{-}\cup C_1^{-}} U_j\) and therefore

$$\begin{aligned} \sigma _{k}^{p-1}=\left( \sum _{j \in J_{1}} U_{j}+U_{1}+\sum _{j \in T_{1}^{-} \cup C_{1}^{-}} U_{j}\right) ^{p-1} \le \sum _{j \in J_{1}} U_{j}^{p-1}+C U_{1}^{p-1} \end{aligned}$$

where \(C=C(n,s)>0\) is a constant depending on n and s. This estimate combined with (3.33), (3.34) and (3.35) implies

$$\begin{aligned} \begin{aligned} \sigma _{k}^{p-1} \tilde{W}\le&\sum _{i, j \in J_{1}} U_{i}^{p-1}\left( w_{j, 1}+w_{j, 2}\right) +C U_{1}^{p-1} \sum _{j \in J_{1}}\left( w_{j, 1}+w_{j, 2}\right) \\ {}&+w_{1,1} \sum _{j \in J_{1}} U_{j}^{p-1}+C U_{1}^{p-1} w_{1,1} \\ \le&\left( 4\nu ^2 \varepsilon _{1}+C\right) \tilde{V}, \end{aligned} \end{aligned}$$
(3.36)

in the region \(A_1.\) Thus choosing large L and small \(\varepsilon \), we get

$$\begin{aligned} (-\Delta )^s \tilde{W}-p \sigma _{k}^{p-1} \tilde{W} \ge \tilde{V},\quad \text { in } A_{1}. \end{aligned}$$

From Case 1, for large k, we have

$$\begin{aligned} |\varphi _k|(x)\le C \Vert h_k\Vert _{**}W(x),\quad \forall x \in \bigcup _{i=1}^\nu \Omega _i \end{aligned}$$
(3.37)

where \(C=C(n,\nu ,s)>0\) is a large constant. This is because if (3.37) were not true then for any \(\gamma \in \mathbb {N}\) there exist \(k=k(\gamma )\) and \(x_k\in \bigcup _{i=1}^\nu \Omega _i\) such that

$$\begin{aligned} |\varphi _k|(x_k)W^{-1}(x_k)>\gamma \Vert h_k\Vert _{**}. \end{aligned}$$

W.l.o.g. assume that \(|\varphi _k|(x_k)W^{-1}(x_k)=1\), then \(\Vert h_k\Vert _{**}\le \frac{1}{\gamma }\rightarrow 0\) as \(\gamma \rightarrow +\infty \) and \(k=k(\gamma )\rightarrow +\infty .\) However this cannot happen as shown in Case 1. Next, for \(i\in T_{1}^{-}\) with \(|y_i|=\frac{\lambda _i}{\lambda _1}|y_1- \tilde{z}_i|\le L_i,\) for large k, we have

$$\begin{aligned} \begin{aligned} \frac{w_{1,1}}{w_{i, 1}}&=\left( \frac{\lambda _{1}}{\lambda _{i}}\right) ^{\frac{n-2s}{2}} \frac{\left\langle y_{i}\right\rangle ^{2s}}{\left\langle y_{1}\right\rangle ^{2s}} \ge L^{-2s}\left( \frac{\lambda _{1}}{\lambda _{i}}\right) ^{\frac{n-2s}{2}} \gg 1, \\ \frac{v_{1,1}}{v_{i, 1}}&=\left( \frac{\lambda _{1}}{\lambda _{i}}\right) ^{\frac{n+2s}{2}} \frac{\left\langle y_{i}\right\rangle ^{4s}}{\left\langle y_{1}\right\rangle ^{4s}} \ge L^{-4s}\left( \frac{\lambda _{1}}{\lambda _{i}}\right) ^{\frac{n+2s}{2}} \gg 1. \end{aligned} \end{aligned}$$
(3.38)

Similarly if \(i\in C_1^{-}\) then \(|y_i| =\frac{\lambda _i}{\lambda _1}|y_1- \tilde{z}_i| \ge \frac{\lambda _i}{2\lambda _1}|\tilde{z}_i| = \frac{\lambda _i|z_1-z_i|}{2}\), which in turn implies

$$\begin{aligned} \begin{aligned} \frac{w_{1,1}}{w_{i, 2}}&=R^{6s-n}\left( \frac{\lambda _{1}}{\lambda _{i}}\right) ^{\frac{n-2s}{2}} \frac{\left\langle y_{i}\right\rangle ^{n-4s}}{\left\langle y_{1}\right\rangle ^{2s}} \ge 2^{4s-n} L^{-2s} R^{2s}\left( \frac{\lambda _{1}}{\lambda _{i}}\right) \gg 1, \\ \frac{v_{1,1}}{v_{i, 2}}&=R^{6s-n}\left( \frac{\lambda _{1}}{\lambda _{i}}\right) ^{\frac{n+2s}{2}} \frac{\left\langle y_{i}\right\rangle ^{n-2s}}{\left\langle y_{1}\right\rangle ^{4s}} \ge 2^{2s-n} L^{-4s} R^{4s}\left( \frac{\lambda _{1}}{\lambda _{i}}\right) ^{2s} \gg 1. \end{aligned} \end{aligned}$$
(3.39)

Combining estimates (3.38) and (3.39) we get the following estimate in the region \(A_1\)

$$\begin{aligned} \begin{aligned} W&\approx \tilde{W} + \sum _{j\in T_1^{-}\cup C_1^{-}} (w_{j,1} + w_{j,2}) \approx \tilde{W},\\ V&\approx \tilde{V} + \sum _{j\in T_1^{-}\cup C_1^{-}} (v_{j,1} + v_{j,2}) \approx \tilde{V}. \end{aligned} \end{aligned}$$
(3.40)

For large k, \(\partial A_1 \subset \cup _{i\in I}\partial \Omega _i\) and therefore from (3.37) and (3.40) we conclude that

$$\begin{aligned} |\varphi _k|(x) \le C \Vert h_k\Vert _{**} \tilde{W}(x),\quad \forall x \in \partial A_1 \end{aligned}$$

which along with (3.36) implies that \(\pm C\Vert h_k\Vert _{**}\tilde{W}\) is an upper/lower barrier for \(\varphi _k\) and therefore

$$\begin{aligned} |\varphi _k|(x_k) \tilde{W}^{-1}(x_k) \lesssim \Vert h_k\Vert _{**} \rightarrow 0. \end{aligned}$$

This contradicts the fact that \(|\varphi _k|(x_k) =W(x_k).\)

Case 3. Finally consider the case \(\{x_k\}_{k \in \mathbb {N}}\subset \Omega ^c = \bigcap _{i\in I}\{|y_i|> L_i\}.\) Working with approximations of W and V similar to those defined in (3.30) and (3.31), and arguing as in the proof of Proposition 3.6, we get that \((-\Delta )^s \tilde{W}\ge \alpha _{n,s} \tilde{V}\). Next using Proposition 3.7, we get

$$\begin{aligned} \sigma _k^{p-1} \tilde{W} (x) \le \nu ^2 \varepsilon _1 \tilde{V} (x),\quad \forall x\in \Omega ^c. \end{aligned}$$

Consequently in the region \(\Omega ^c\), we get

$$\begin{aligned} (-\Delta )^s \tilde{W}-p \sigma _{k}^{p-1} \tilde{W} \ge \tilde{V}. \end{aligned}$$
(3.41)

From the previous two cases we know that for large k, we have

$$\begin{aligned} |\varphi _k|(x) \le C \Vert h_k\Vert _{**} \tilde{W}(x) \end{aligned}$$

in the region \(\Omega \) and thus the above estimate also holds on the boundary \(\partial \Omega = \partial \Omega ^c.\) Thus \(\pm C \Vert h_k\Vert _{**} \tilde{W}(x)\) is an upper/lower barrier for the function \(\varphi _k\). This implies that

$$\begin{aligned} |\varphi _k|(x) \le C \Vert h_k\Vert _{**} \tilde{W}(x), \quad \forall x\in \Omega ^c, \end{aligned}$$

which in turn implies

$$\begin{aligned} |\varphi _k|(x_k) \tilde{W}^{-1}(x_k) \le C \Vert h_k\Vert _{**} \rightarrow 0. \end{aligned}$$

This contradicts the fact that \(|\varphi _k|(x_k) =W(x_k).\) To complete the proof we prove Proposition 3.6, since the proofs of the other propositions can be found in [8] after modifying the exponents of the weights by the parameter \(s\).

Proof of Proposition 3.6

Using the concavity of F and the integral representation of the fractional Laplacian we deduce that

$$\begin{aligned} (-\Delta )^s\tilde{W}(x)\ge \sum _{j\in J_1}\lambda _j^{\frac{n-2s}{2}}\left( \frac{\partial F}{\partial a}(a_j, b_j) (-\Delta )^s a_j + \frac{\partial F}{\partial b}(a_j, b_j) (-\Delta )^s b_j\right) , \end{aligned}$$

where \(a_j = R^{2\,s-n}\langle y_j\rangle ^{-2\,s}\), \(b_j=R^{-4\,s}\langle y_j\rangle ^{4\,s-n}\) and \(y_j=\lambda _j(x-z_j).\) Thus to obtain a lower bound for \((-\Delta )^s\tilde{W}\) we first show that

$$\begin{aligned} (-\Delta )^s \langle y_j\rangle ^{-2s} \ge c_{n,s} \langle y_j\rangle ^{-4s},\quad (-\Delta )^s \langle y_j\rangle ^{4s-n} \ge c_{n,s} \langle y_j\rangle ^{2s-n}. \end{aligned}$$

We prove the first estimate since the argument for the second inequality is the same. Furthermore, by scaling we can consider the case when \(\lambda _j=1\) and \(z_j=0\). Thus we need to show that

$$\begin{aligned} (-\Delta )^s[(1+|x|^2)^{-s}]\ge \alpha _{n,s} [1+|x|^2]^{-2s}. \end{aligned}$$

Using the hypergeometric function as in Table 1 in [14], we get

$$\begin{aligned} (-\Delta )^s[(1+|x|^2)^{-s}] = c_{n,s} {}_{2}F_{1}\left( n/2+s,2s,n/2;-|x|^2\right) \end{aligned}$$

for constant \(c_{n,s}>0\) depending on n and s. By the integral representation as in (15.6.1) in [17], we have

$$\begin{aligned} {}_{2}F_{1}\left( n/2+s,2s,n/2;-|x|^2\right) = \frac{\Gamma (n/2)}{\Gamma (2s)\Gamma (n/2-2s)}\int _{0}^{1}\frac{t^{2s-1}(1-t)^{n/2-2s-1}}{(1+|x|^2t)^{n/2+s}}dt\ge 0 \end{aligned}$$

as \(n>4s.\) Next observe that the left-hand side has no roots. This is because if we use transformation (15.8.1) in [17] we get

$$\begin{aligned} {}_{2}F_{1}\left( n/2+s,2s,n/2;-|x|^2\right) = (1+|x|^2)^{-2s}{}_{2}F_{1}\left( -s,2s,n/2;\frac{|x|^2}{1+|x|^2}\right) \end{aligned}$$

and therefore we can compute the number of zeros using (15.3.1) in [17] as suggested in [15] to get

$$\begin{aligned} N(n,s) = \lfloor s \rfloor + \frac{1}{2}(1+S) = \frac{1}{2}(1+S) \end{aligned}$$

where \(S = {\text {Sign}}(\Gamma (-s)\Gamma (2\,s)\Gamma (n/2+s)\Gamma (n/2-2\,s))=-1\). Then since \(n>4s\), we get \(N(n,s) = 0.\) Thus the function \({}_{2}F_{1}\left( -s,2s,n/2;\tau \right) \) is strictly positive on \([0,1]\) and, by continuity, it is bounded below on \([0,1]\) by some constant \(\alpha _{n,s}>0\). Therefore

$$\begin{aligned} (-\Delta )^s[(1+|x|^2)^{-s}] = c_{n,s}(1+|x|^2)^{-2s}\,{}_{2}F_{1}\left( -s,2s,n/2;\tfrac{|x|^2}{1+|x|^2}\right) \ge \alpha _{n,s} c_{n,s}[1+|x|^2]^{-2s}, \end{aligned}$$

which gives us the desired estimate. Finally using the homogeneity of the function F we can conclude the proof of Proposition 3.6. \(\square \)

The next result gives an estimate for the coefficients \(c^i_a\) of the system (3.2).

Lemma 3.9

Let \(\sigma \) be a sum of \(\delta \)-interacting bubbles and let \(\varphi , h\) and \(c^j_b\) for \(j=1,\ldots ,\nu \) and \(b=1,2,\ldots , n+1\) satisfy the system (3.2). Then

$$\begin{aligned} |c^j_b|\lesssim Q \Vert h\Vert _{**} + Q^p\Vert \varphi \Vert _{*}, \end{aligned}$$
(3.42)

where Q is the interaction term as defined in (1.32).

Proof

For simplicity we set \(j=1\) and \(\nu =2.\) Multiplying (3.2) by \(Z^b_1\) and integrating, we get

$$\begin{aligned} \int _{\mathbb {R}^n} p\sigma ^{p-1}\varphi Z^b_1 = \int _{\mathbb {R}^n} hZ^b_1 + \sum _{a=1}^{n+1}c^{1}_a \int _{\mathbb {R}^n} U_1^{p-1}Z_1^a Z^b_1 + c^{2}_a\int _{\mathbb {R}^n} U_2^{p-1}Z_2^a Z^b_1. \end{aligned}$$

Using Lemma 3.16 we get

$$\begin{aligned} \sum _{a=1}^{n+1} \int _{\mathbb {R}^n} c^{1}_a U_1^{p-1}Z_1^a Z^b_1 + \int _{\mathbb {R}^n} c^{2}_a U_2^{p-1}Z_2^a Z^b_1 = c^1_b \gamma ^b + \sum _{a=1}^{n+1} c^{2}_a O(q_{12}). \end{aligned}$$

Using Lemma 3.18 we can also estimate the first term on the RHS as follows

$$\begin{aligned} \left| \int _{\mathbb {R}^n} h Z_1^b\right| \lesssim \Vert h\Vert _{**} \int _{\mathbb {R}^n} V U_1 \lesssim Q \Vert h\Vert _{**}. \end{aligned}$$

Moving to the LHS, since \(\varphi \) satisfies

$$\begin{aligned} \int _{\mathbb {R}^n} U_1^{p-1} \varphi Z_1^b = 0 \end{aligned}$$

for \(b=1,2,\ldots , n+1\) and \(|Z_1^b|\lesssim U_1\) we have

$$\begin{aligned} \left| \int _{\mathbb {R}^n} p \sigma ^{p-1} \varphi Z_1^{b} \right|&=\left| \int _{\mathbb {R}^n} p (\sigma ^{p-1} - U_1^{p-1}) \varphi Z_1^{b} \right| \lesssim \Vert \varphi \Vert _{*} \left| \int _{\mathbb {R}^n} (\sigma ^{p-1} - U_1^{p-1}) U_1 W \right| \\&\lesssim \Vert \varphi \Vert _{*} \left| \int _{\mathbb {R}^n} (\sigma ^{p} - U_1^{p}-U_2^p) W \right| \lesssim \Vert \varphi \Vert _{*} \int _{\mathbb {R}^n} V W \lesssim \Vert \varphi \Vert _{*} Q^p, \end{aligned}$$

where we used Lemma 3.3 to control the interaction term \(\sigma ^p-\sum _{i=1}^\nu U_i^p\) and Lemma 3.17 to control the integral term \(\int _{\mathbb {R}^n} V W.\) Thus we get that \(\{c^1_b\}_{b=1}^{n+1}\) satisfies the following system

$$\begin{aligned} c^1_b \gamma ^b + \sum _{a=1}^{n+1}c_a^2 O(q_{12}) = \int _{\mathbb {R}^n} p\sigma ^{p-1} \varphi Z_1^b - \int _{\mathbb {R}^n} h Z_1^b \end{aligned}$$

and since \(q_{12}< Q < \delta \) the above system is solvable such that the estimate (3.42) also holds. To prove the estimate for \(\nu >2\) bubbles, one just needs to observe that for each \(j\in I\)

$$\begin{aligned} \left( \sigma ^{p-1}-U_{j}^{p-1}\right) U_{j} \le \sum _{i=1}^{\nu }\left( \sigma ^{p-1}-U_{i}^{p-1}\right) U_{i}=\sigma ^{p}-\sum _{i} U_{i}^{p} \end{aligned}$$

and thus repeating the same argument as above one ends up with the following system

$$\begin{aligned} c_{b}^{1} \gamma ^{b}+\sum _{i \ne 1} \sum _{a=1}^{n+1} c_{a}^{i} O\left( q_{i 1}\right) =\int p \sigma ^{p-1} \varphi Z_{1}^{b}+\int h Z_{1}^{b} \end{aligned}$$

which is a solvable system for \(\delta \) small enough. \(\square \)

Next, we prove that the system (3.2) has a unique solution under some smallness conditions.

Lemma 3.10

There exists a constant \(\delta _0 > 0\) and \(C>0\) independent of \(\delta \) such that for \(\delta \le \delta _0\) and any h such that \(\Vert h\Vert _{**}<\infty \) the system (3.2) has a unique solution \(\varphi \equiv L_{\delta }(h)\) such that

$$\begin{aligned} \Vert L_{\delta }(h)\Vert _{*}\le C \Vert h\Vert _{**}, \quad |c^i_a| \le \delta \Vert h\Vert _{**}. \end{aligned}$$
(3.43)

Proof

We imitate the proof of Proposition 4.1 in [5]. For this consider the space of functions

$$\begin{aligned} H = \{\varphi \in \dot{H}^{s}(\mathbb {R}^n): \langle \varphi ,Z_i^a\rangle _{\dot{H}^s}=p\int _{\mathbb {R}^n} U_i^{p-1} \varphi Z_i^a = 0, i=1,\ldots ,\nu , a =1,\ldots , n+1\}, \end{aligned}$$

endowed with the natural inner product

$$\begin{aligned} \langle \varphi ,\psi \rangle _{H} = \langle \varphi ,\psi \rangle _{\dot{H}^s}= \frac{C_{n,s}}{2}\int _{\mathbb {R}^n} \int _{\mathbb {R}^n} \frac{(\varphi (x)-\varphi (y)) (\psi (x)-\psi (y)) }{|x-y|^{n+2s}} dx dy. \end{aligned}$$

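Note that the two forms of the orthogonality constraint in the definition of H coincide because \(Z_i^a\) solves the linearized equation: differentiating \((-\Delta )^s U_i = U_i^{p}\) in the parameters \(\lambda _i\) and \(z_i\) gives \((-\Delta )^s Z_i^a = p U_i^{p-1} Z_i^a\), and hence

$$\begin{aligned} \langle \varphi ,Z_i^a\rangle _{\dot{H}^s} = \int _{\mathbb {R}^n} \varphi \, (-\Delta )^s Z_i^a = p\int _{\mathbb {R}^n} U_i^{p-1} \varphi \, Z_i^a. \end{aligned}$$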
In the weak form solving the system (3.2) is equivalent to finding a function \(\varphi \in H\) that for all \(\psi \in H\) satisfies

$$\begin{aligned} \langle \varphi ,\psi \rangle _{H} = \langle p\sigma ^{p-1}\varphi +h,\psi \rangle _{L^2} \end{aligned}$$

which in operator form can be written as

$$\begin{aligned} \varphi = T_{\delta }(\varphi ) + \tilde{h} \end{aligned}$$

where \(\tilde{h}\) depends linearly on h and \(T_\delta \) is a compact operator on H. Then Fredholm’s alternative implies that there exists a unique solution \(\varphi \in H\) to the above equation provided the only solution to

$$\begin{aligned} \varphi = T_{\delta }(\varphi ) \end{aligned}$$

is \(\varphi \equiv 0\) in H. In other words we want to show that the following equation has only the trivial solution in H

$$\begin{aligned} (-\Delta )^s \varphi - p\sigma ^{p-1}\varphi +\sum _{i, a} c^i_a U_i^{p-1}Z_i^a=0. \end{aligned}$$

We proceed by contradiction. Suppose there exists a non-trivial solution \(\varphi \equiv \varphi _{\delta }\). W.l.o.g. assume that \(\Vert \varphi _{\delta }\Vert _{*} =1.\) However from Lemma 3.4 and Lemma 3.9 we get that \(\Vert \varphi _{\delta }\Vert _{*}\rightarrow 0\) as \(Q \rightarrow 0\) since

$$\begin{aligned} \Vert \varphi _{\delta }\Vert _{*} \le C \sum _{i,a}|c_a^i|\le C' Q^p\Vert \varphi _{\delta }\Vert _{*}\lesssim Q^p, \end{aligned}$$

which is a contradiction. Thus for each h, the system (3.2) admits a unique solution in H. Furthermore the estimates (3.43) also follow from Lemma 3.4 and Lemma 3.9. \(\square \)

With this Lemma in hand, we can now prove the main result of this section.

Proposition 3.11

Suppose \(\delta \) is small enough. Then there exist \(\rho _0\) and a family of scalars \(\{c^i_a\}\) which solve (3.1) with

$$\begin{aligned} |\rho _0(x)|\le C W(x). \end{aligned}$$
(3.44)

Proof

Set

$$\begin{aligned} N_{1}(\varphi ) = (\sigma +\varphi )^p-\sigma ^p - p\sigma ^{p-1}\varphi , \quad N_2 = \sigma ^p - \sum _{i=1}^\nu U_i^p \end{aligned}$$

and denote \(L_{\delta }(h)\) to be the solution to the system (3.2). Then

$$\begin{aligned} (-\Delta )^s \varphi - p\sigma ^{p-1}\varphi = N_1(\varphi ) + N_2- \sum _{i,a} c^{i}_a U^{p-1}_i Z_i^a. \end{aligned}$$

Thus solving (3.1) is the same as solving

$$\begin{aligned} \varphi = A(\varphi ) = L_{\delta }(N_1(\varphi )) +L_{\delta }(N_2) \end{aligned}$$

where \(L_{\delta }\) is defined in Lemma 3.10. We show that A is a contraction map in a suitable normed space and show the existence of a solution \(\rho _0\) using the fixed point theorem. First using Lemma 3.3 we have that

$$\begin{aligned} \Vert N_2\Vert _{**} = \Vert h\Vert _{**} \le C_2 \end{aligned}$$

for some large constant \(C_2 = C_2(n,s)>0.\) To control \(N_1\) observe that

$$\begin{aligned} |N_1(\varphi )| \le C|\varphi |^p \le C \Vert \varphi \Vert _{*} W^p \end{aligned}$$

which implies that

$$\begin{aligned} \Vert N_1(\varphi )\Vert _{**} \le C \Vert \varphi \Vert _{*} \sup _{x\in \mathbb {R}^n} W^p(x) V^{-1}(x) \le C_1 R^{-4(p-1)} \Vert \varphi \Vert _{*} \end{aligned}$$

for some large constant \(C_1= C_1(n,s)>0\) that also satisfies the estimate

$$\begin{aligned} \Vert L_{\delta }(h)\Vert _{*}\le C_1 \Vert h\Vert _{**}. \end{aligned}$$

Define the space

$$\begin{aligned} E = \{u\in C^{1}(\mathbb {R}^n) \cap \dot{H}^{s}(\mathbb {R}^n): \Vert u\Vert _{*}\le C_1C_2 + 1\}. \end{aligned}$$

We show that the operator A is a contraction on the space \((E, \Vert \cdot \Vert _{*}).\) First we show that \(A(E)\subset E.\) For this take \(\varphi \in E.\) Then

$$\begin{aligned} \Vert A(\varphi )\Vert _{*}&= \Vert L_{\delta }(N_1(\varphi )) + L_{\delta }(N_2)\Vert _{*} \le C_1 \Vert N_1(\varphi )\Vert _{**} + C_1\Vert N_2\Vert _{**}\\&\le C_1^2R^{-4(p-1)}(C_1C_2 + 1) + C_1 C_2 \le C_1 C_2 + 1 \end{aligned}$$

for \(R\gg 1\) when \(\delta \) is small. Next, we show that A is a contraction map. Observe that for \(\varphi _1, \varphi _2\in E\) we have

$$\begin{aligned} \Vert A(\varphi _1) - A(\varphi _2)\Vert _{*} \le \Vert N_1(\varphi _1) - N_1(\varphi _2)\Vert _{**}. \end{aligned}$$

Since for \(n\ge 6\,s\) we have \(|N_1'(t)|\le C |t|^{p-1}\), it follows that

$$\begin{aligned} |N_1(\varphi _1) - N_1(\varphi _2)| V^{-1}&\le C(|\varphi _1|^{p-1}+|\varphi _2|^{p-1})|\varphi _1 -\varphi _2| V^{-1}\\&\le C (\Vert \varphi _1\Vert _{*}^{p-1} + \Vert \varphi _2\Vert _{*}^{p-1}) \Vert \varphi _1 -\varphi _2\Vert _{*} W^{p}V^{-1} \\&\le \frac{1}{2} \Vert \varphi _1 -\varphi _2\Vert _{*} , \end{aligned}$$

where we choose small enough \(\delta \) such that \(R\gg 1.\) Then we get

$$\begin{aligned} \Vert A(\varphi _1) - A(\varphi _2)\Vert _{*} \le \Vert N_1(\varphi _1) - N_1(\varphi _2)\Vert _{**} \le \frac{1}{2}\Vert \varphi _1 -\varphi _2\Vert _{*}. \end{aligned}$$

This shows that the equation \(\varphi = A(\varphi )\) has a unique solution. Finally from Lemma 3.10 we get

$$\begin{aligned} \Vert \varphi \Vert _{*}&= \Vert A(\varphi )\Vert _{*} \le \Vert L_{\delta }(N_1(\varphi ))\Vert _{*} + \Vert L_{\delta }(N_2)\Vert _{*} \\&\le C_1\Vert N_1(\varphi )\Vert _{**} + C_1\Vert N_2\Vert _{**} \le C_1C_2 + 1 \le C \end{aligned}$$

for a large constant \(C>0.\) \(\square \)

3.2 Energy estimates of the first approximation \(\rho _0\)

In this section, our goal is to establish an \(L^2\) estimate for \((-\Delta )^{s/2} \rho _0\) where \(\rho _0\) is the solution of the system (3.1) as in Proposition 3.11.

Using Lemma 3.17, Lemma 3.20, and Proposition 3.11 we obtain the following estimate.

Proposition 3.12

Suppose \(\delta \) is small enough. Then, for \(n\ge 6s\)

$$\begin{aligned} \Vert (-\Delta )^{s/2}\rho _0\Vert _{L^2}\lesssim {\left\{ \begin{array}{ll}Q|\log Q|^{\frac{1}{2}}, &{}\text {if }n=6s,\\ Q^{\frac{p}{2}},&{}\text {if }n> 6s.\end{array}\right. }. \end{aligned}$$
(3.45)

Proof

Testing the equation in (3.1) with \(\rho _0\) we get

$$\begin{aligned} \left\Vert (-\Delta )^{s/2}\rho _0 \right\Vert _{L^2}^2 \lesssim \int _{\mathbb {R}^n} \sigma ^{p-1}\rho _0^2 + \int _{\mathbb {R}^n} |\rho _0|^{p+1} + \int _{\mathbb {R}^n} \left( \sigma ^p-\sum _{i=1}^{\nu }U_i^p\right) \rho _0, \end{aligned}$$

where we used the inequality \(|(\sigma + \rho _0)^p-\sigma ^p-p\sigma ^{p-1}\rho _0|\lesssim |\rho _0|^p.\) Using \(\sigma ^{p-1} \lesssim U_1^{p-1} + \cdots +U_\nu ^{p-1}\), \(|\rho _0(x)|\lesssim W(x)\) from Proposition 3.11 and Lemma 3.19 we get

$$\begin{aligned} \int _{\mathbb {R}^n} \sigma ^{p-1} \rho _0^2&\lesssim \int _{\mathbb {R}^n} \sum _{i=1}^{\nu } U_i^{p-1} W^2 \\&\lesssim \sum _{i=1}^{\nu }\sum _{j=1}^{2}\int _{\mathbb {R}^n} U_i^{p-1}w^2_{j,1}+U_i^{p-1}w^2_{j,2} \lesssim R^{-n-2s} \approx Q^{p}. \end{aligned}$$

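Here we use the relation \(Q\approx R^{2s-n}\) between the interaction term and the separation parameter \(R\), which gives

$$\begin{aligned} Q^{p} = R^{(2s-n)\frac{n+2s}{n-2s}} = R^{-(n+2s)} = R^{-n-2s}. \end{aligned}$$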
For the second term using the Sobolev inequality we have

$$\begin{aligned} \int _{\mathbb {R}^n} |\rho _0|^{p+1} \lesssim \left\Vert (-\Delta )^{s/2} \rho _0\right\Vert _{L^2}^{p+1}. \end{aligned}$$
(3.46)

Finally for the interaction term recall that Lemma 3.3 implies that \(|h| \lesssim V(x).\) Using this estimate along with \(|\rho _0| \lesssim W(x)\) we get

$$\begin{aligned} \int _{\mathbb {R}^n} \left| \left( \sigma ^p - \sum _{i=1}^{\nu }U_i^p\right) \rho _0\right|&\lesssim \int _{\mathbb {R}^n} V W\\&\approx \sum _{i=1}^{\nu }\sum _{j=1}^{2} \int _{\mathbb {R}^n} v_{i,1}w_{j,1} + v_{i,1}w_{j,2} + v_{i,2}w_{j,1} + v_{i,2}w_{j,2}. \end{aligned}$$

Using Lemma 3.17 we get that the RHS is bounded by (up to a constant) \(R^{-n-2\,s} \approx Q^{p}\) when \(n> 6\,s\). When \(n=6s\) we can obtain a more precise estimate using Lemma 3.20 since \(p=2\)

$$\begin{aligned} \int _{\mathbb {R}^n} |((U_1+U_2)^2-U_1^2 -U_2^2) \rho _0| \lesssim \int _{\mathbb {R}^n} U_1 U_2 W \lesssim R^{-8s}\log R \approx Q^{2}|\log Q|. \end{aligned}$$

This concludes the proof of Proposition 3.12. \(\square \)

3.3 Energy estimate of the second approximation \(\rho _1\)

To estimate the energy of the error term \(\rho \), defined by the minimization process in (1.25), we set \(\rho _1 = \rho -\rho _0.\) Since \(\rho \) satisfies (1.27) and \(\rho _0\) satisfies (3.1), the second approximation \(\rho _1\) satisfies

$$\begin{aligned} \left\{ \begin{array}{l} (-\Delta )^s \rho _{1}-\left[ \left( \sigma +\rho _{0}+\rho _{1}\right) ^{p}-\left( \sigma +\rho _{0}\right) ^{p}\right] -\sum _{j=1}^{\nu } \sum _{a=1}^{n+1} c_{a}^{j} U_{j}^{p-1} Z_{j}^{a}-f=0 \\ \langle \rho _1,Z_j^a\rangle _{\dot{H}^s}=0, \quad j=1,\ldots ,\nu ;\quad a=1, \ldots , n+1\end{array}\right. \end{aligned}$$
(3.47)

Here recall that \(f= (-\Delta )^s u -u|u|^{p-1}.\) We further decompose \(\rho _1\) as

$$\begin{aligned} \rho _1=\sum _{i=1}^{\nu }\beta ^i U_i+\sum _{i=1}^{\nu }\sum _{a=1}^{n+1}\beta _a^iZ_{i}^a+\rho _2, \end{aligned}$$
(3.48)

where \(\rho _2\) satisfies the following orthogonality conditions

$$\begin{aligned} \langle \rho _2,U_i\rangle _{\dot{H}^s}=\langle \rho _2,Z_i^a\rangle _{\dot{H}^s} = 0, \end{aligned}$$
(3.49)

for \(i=1,\ldots ,\nu \) and \(a=1,\ldots , n+1.\) Thus, in order to estimate the \(L^2\) norm of \((-\Delta )^{s/2} \rho _1\), we first estimate the \(L^2\) norm of \((-\Delta )^{s/2} \rho _2\) and then the coefficients \(\beta ^i\) and \(\beta ^i_a\).
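Let us also note why the decomposition (3.48)–(3.49) is well defined. The coefficients \(\beta ^i,\beta ^i_a\) solve the linear system (a sketch; the inner products can be estimated as in Lemma 3.16 and the computation leading to (3.56))

$$\begin{aligned} \langle \rho _1,U_j\rangle _{\dot{H}^s}&=\sum _{i}\beta ^i\langle U_i,U_j\rangle _{\dot{H}^s}+\sum _{i,a}\beta _a^i\langle Z_i^a,U_j\rangle _{\dot{H}^s},\\ \langle \rho _1,Z_j^b\rangle _{\dot{H}^s}&=\sum _{i}\beta ^i\langle U_i,Z_j^b\rangle _{\dot{H}^s}+\sum _{i,a}\beta _a^i\langle Z_i^a,Z_j^b\rangle _{\dot{H}^s}, \end{aligned}$$

for \(j=1,\ldots ,\nu \) and \(b=1,\ldots ,n+1\); for \(\delta \) small the matrix of this system is a small perturbation of an invertible diagonal matrix, so the coefficients are uniquely determined and \(\rho _2\) is then given by (3.48).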

Lemma 3.13

Suppose \(\delta \) is small enough. Then

$$\begin{aligned} \Vert (-\Delta )^{s/2}\rho _2\Vert _{L^2}\lesssim \sum _{i=1}^{\nu }|\beta ^i|+\sum _{i=1}^{\nu }\sum _{a=1}^{n+1}|\beta _a^i|+\Vert f\Vert _{H^{-s}}. \end{aligned}$$
(3.50)

Proof

Testing (3.47) with \(\rho _2\) and using (3.48) and (3.49) we get

$$\begin{aligned} \int _{\mathbb {R}^n} |(-\Delta )^{s/2} \rho _2|^2 \lesssim \int _{\mathbb {R}^n} |(\sigma +\rho _0+\rho _1)^p-(\sigma +\rho _0)^p|\rho _2 + \int _{\mathbb {R}^n} |f \rho _2|. \end{aligned}$$
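Let us briefly record why no \(c^j_a\)-terms appear on the right-hand side. Since each \(Z_j^a\) solves the linearized equation \((-\Delta )^s Z_j^a = pU_j^{p-1}Z_j^a\) (being a derivative of the bubble family \(U[z,\lambda ]\)), the orthogonality (3.49) gives

$$\begin{aligned} \int _{\mathbb {R}^n} U_j^{p-1}Z_j^a\,\rho _2 = \frac{1}{p}\int _{\mathbb {R}^n} (-\Delta )^{s}Z_j^a\,\rho _2 = \frac{1}{p}\langle Z_j^a,\rho _2\rangle _{\dot{H}^s}=0, \end{aligned}$$

while \(\int _{\mathbb {R}^n} (-\Delta )^{s}\rho _1\,\rho _2 = \langle \rho _1,\rho _2\rangle _{\dot{H}^s} = \Vert (-\Delta )^{s/2}\rho _2\Vert _{L^2}^2\) by (3.48) and (3.49).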

The second term can be trivially estimated as follows

$$\begin{aligned} \int _{\mathbb {R}^n} |f \rho _2| \lesssim \Vert f\Vert _{{H}^{-s}}\left\Vert (-\Delta )^{s/2} \rho _2\right\Vert _{L^2}. \end{aligned}$$
(3.51)

To estimate the first term, we make use of the following inequality

$$\begin{aligned} |(\sigma + \rho _0 + \rho _1)^p - (\sigma +\rho _0)^p - p(\sigma + \rho _0)^{p-1}\rho _1| \le |\rho _1|^{p} \end{aligned}$$

and therefore we get

$$\begin{aligned} \int _{\mathbb {R}^n} |(\sigma +\rho _0+\rho _1)^p-(\sigma +\rho _0)^p|\rho _2 \le p\int _{\mathbb {R}^n} |\sigma +\rho _0|^{p-1}|\rho _1\rho _2| + \int _{\mathbb {R}^n} |\rho _1|^{p}|\rho _2|. \end{aligned}$$
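The elementary inequality above uses \(1<p\le 2\), which holds since \(n\ge 6s\); a sketch of its proof (for real \(a,b\), with powers understood as \(t\mapsto t|t|^{p-1}\) to handle signs) is

$$\begin{aligned} (a+b)^p-a^p-pa^{p-1}b = p\int _0^1\big [(a+tb)^{p-1}-a^{p-1}\big ]b\,dt, \qquad \big |(a+tb)^{p-1}-a^{p-1}\big |\le |tb|^{p-1}, \end{aligned}$$

so the left-hand side is bounded by \(p\int _0^1 t^{p-1}|b|^p\,dt=|b|^p\); here we used that \(r\mapsto r^{p-1}\) is \((p-1)\)-Hölder continuous since \(0<p-1\le 1\).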

Using the decomposition of \(\rho _1\) in (3.48) we get

$$\begin{aligned} p\int _{\mathbb {R}^n} |\sigma + \rho _0|^{p-1}|\rho _1\rho _2|\le p\int _{\mathbb {R}^n} |\sigma + \rho _0|^{p-1}|\rho _2|^2 + \mathcal {B}\sum _{i=1}^{\nu }\int _{\mathbb {R}^n} |\sigma + \rho _0|^{p-1} U_i |\rho _2| \end{aligned}$$

where \(\mathcal {B} = \sum _i|\beta ^i| + \sum _{i,a} |\beta ^{i}_a|\) and we also used \(|Z_i^a|\lesssim U_i.\) For the first term, we have

$$\begin{aligned} p\int _{\mathbb {R}^n} |\sigma + \rho _0|^{p-1}|\rho _2|^2&\le p\int _{\mathbb {R}^n} \sigma ^{p-1}|\rho _2|^2 + p\int _{\mathbb {R}^n} |\rho _0|^{p-1}|\rho _2|^2 \\&\le \big (\tilde{c} + C \left\Vert (-\Delta )^{s/2} \rho _0\right\Vert _{L^2}^{p-1}\big )\left\Vert (-\Delta )^{s/2} \rho _2\right\Vert _{L^2}^2 \end{aligned}$$

where we made use of the Sobolev inequality and the spectral inequality of the same form as (2.8) with constant \(\tilde{c}<1.\) For the remaining terms, using the Sobolev inequality we have

$$\begin{aligned} \int _{\mathbb {R}^n} |\sigma +\rho _0|^{p-1} U_i |\rho _2| \lesssim \left\Vert {|\sigma + \rho _0|^{p-1} U_i}\right\Vert _{L^{\frac{2n}{n+2s}}} \left\Vert (-\Delta )^{s/2} \rho _2\right\Vert _{L^2} \lesssim \left\Vert (-\Delta )^{s/2} \rho _2\right\Vert _{L^2} \end{aligned}$$

and

$$\begin{aligned} \int _{\mathbb {R}^n} |\rho _1|^p |\rho _2| \lesssim \left\Vert {\rho _1}\right\Vert ^p_{L^{2^*}} \left\Vert {\rho _2}\right\Vert _{L^{2^*}} \lesssim (\mathcal {B} + \left\Vert (-\Delta )^{s/2} \rho _2\right\Vert _{L^2})^p \left\Vert (-\Delta )^{s/2} \rho _2\right\Vert _{L^2}. \end{aligned}$$

Thus combining the above estimates we get

$$\begin{aligned} p\int _{\mathbb {R}^n} |\sigma + \rho _0|^{p-1}|\rho _1\rho _2| + \int _{\mathbb {R}^n} |\rho _1|^p|\rho _2|&\lesssim (\tilde{c}+C\left\Vert (-\Delta )^{s/2} \rho _0\right\Vert _{L^2}^{p-1})\left\Vert (-\Delta )^{s/2} \rho _2\right\Vert _{L^2}^2 + \mathcal {B}\left\Vert (-\Delta )^{s/2} \rho _2\right\Vert _{L^2} \nonumber \\&\quad + (\mathcal {B} + \left\Vert (-\Delta )^{s/2} \rho _2\right\Vert _{L^2})^p \left\Vert (-\Delta )^{s/2} \rho _2\right\Vert _{L^2}. \end{aligned}$$
(3.52)
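To see how these bounds combine, write \(X=\Vert (-\Delta )^{s/2}\rho _2\Vert _{L^2}\); then the inequality obtained by testing, together with (3.51) and (3.52), reads, schematically,

$$\begin{aligned} X^2 \lesssim \big (\tilde{c}+C\Vert (-\Delta )^{s/2}\rho _0\Vert _{L^2}^{p-1}\big )X^2 + \mathcal {B}X + (\mathcal {B}+X)^pX + \Vert f\Vert _{H^{-s}}X, \end{aligned}$$

so (3.50) follows once the terms quadratic (or of higher order) in \(X\) are absorbed into the left-hand side.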

For small enough \(\delta \), Proposition 3.12 gives \(\left\Vert (-\Delta )^{s/2} \rho _0\right\Vert _{L^2}\ll 1.\) Furthermore, for small enough \(\delta \) we may also assume \(\mathcal {B}<1\) and \(\Vert (-\Delta )^{s/2}\rho _2\Vert _{L^2}<1\), so the terms quadratic (and of higher order) in \(\Vert (-\Delta )^{s/2}\rho _2\Vert _{L^2}\) can be absorbed into the left-hand side, and (3.50) follows. \(\square \)

Estimate (3.50) for \(\rho _2\) suggests that the next natural step is to control \(\mathcal {B}\), the sum of the absolute values of the coefficients in the decomposition (3.48).

Lemma 3.14

If \(\delta \) is small enough, then for every \(i\) and \(a\)

$$\begin{aligned} |\beta ^i|+|\beta _a^i|\lesssim Q^2+\Vert f\Vert _{H^{-s}}. \end{aligned}$$
(3.53)

Proof

We start by multiplying equation (3.47) by the bubble \(U_k\) and integrating by parts. Thus we get

$$\begin{aligned}&\int _{\mathbb {R}^n} (-\Delta )^{s/2} \rho _1 \cdot (-\Delta )^{s/2} U_k - \int _{\mathbb {R}^n}[(\sigma + \rho _0 + \rho _1)^p - (\sigma +\rho _0)^p]U_k \\&\quad - \sum _{i,a}c^{i}_a \int _{\mathbb {R}^n} U_i^{p-1}Z_{i}^a U_k - \int _{\mathbb {R}^n} U_k f=0. \end{aligned}$$

Using the inequality

$$\begin{aligned} \left| (\sigma +\rho _0)^{p-1}- U_k^{p-1}\right| \lesssim \sum _{i\ne k} U_i^{p-1} + |\rho _0|^{p-1}, \end{aligned}$$

we get the following estimate

$$\begin{aligned}&\left| \int _{\mathbb {R}^n}[(\sigma +\rho _0+\rho _1)^p-(\sigma +\rho _0)^p]U_k-p\int _{\mathbb {R}^n} U_k^{p}\rho _1\right| \\&\quad \lesssim \sum _{i\ne k}\int _{\mathbb {R}^n} U_i^{p-1}U_k|\rho _1|+ \int _{\mathbb {R}^n} |\rho _1|^p U_k+\int _{\mathbb {R}^n} |\rho _0|^{p-1}|\rho _1|U_k. \end{aligned}$$

Estimating the three terms separately we get

$$\begin{aligned} \int _{\mathbb {R}^n} U_i^{p-1}|\rho _1|U_k&\lesssim \Vert (-\Delta )^{s/2} \rho _1\Vert _{L^{2}}\Vert U_i^{p-1}U_k\Vert _{L^{\frac{2n}{n+2s}}}\lesssim o(1)\left( \mathcal {B}+\Vert f\Vert _{H^{-s}}\right) ,\text { when }i\ne k, \\ \int _{\mathbb {R}^n} |\rho _1|^p U_k&\lesssim \mathcal {B}^p+\int _{\mathbb {R}^n}|\rho _2|^pU_k\lesssim \mathcal {B}^p + \left\Vert (-\Delta )^{s/2} \rho _2\right\Vert _{L^2}^p \lesssim \mathcal {B}^p+\Vert f\Vert _{H^{-s}}^p,\\ \int _{\mathbb {R}^n}|\rho _0|^{p-1}|\rho _1|U_k&\lesssim \Vert (-\Delta )^{s/2}\rho _0\Vert _{L^2}^{p-1}\Vert (-\Delta )^{s/2}\rho _1\Vert _{L^2}\lesssim o(1)\left( \mathcal {B}+\Vert f\Vert _{H^{-s}}\right) , \end{aligned}$$

where \(o(1)\) denotes a quantity that tends to 0 as \(\delta \rightarrow 0.\) Next, using an integral estimate similar to Proposition B.2 in Appendix B of [12] together with \(|Z_j^a|\lesssim U_j\), we get that

$$\begin{aligned} \int _{\mathbb {R}^n} U_j^{p-1}Z_j^a U_k \lesssim \int _{\mathbb {R}^n} U_j^{p}U_k \lesssim Q \end{aligned}$$

when \(j\ne k\); for \(j=k\) the integral vanishes, since \(U_k^{p}Z_k^a\) is (up to a constant) the derivative of \(U_k^{p+1}\) with respect to the corresponding parameter and \(\int _{\mathbb {R}^n} U_k^{p+1}\) does not depend on the centers or the scales. Furthermore, the coefficients \(c^i_a\) can be estimated using Lemma 3.3 and Lemma 3.9 to get

$$\begin{aligned} |c^i_a| \lesssim Q\Vert h\Vert _{**} + Q^p\Vert \rho _0\Vert _{*} \lesssim Q. \end{aligned}$$
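Combining the last two displays, the total contribution of the correction terms in the tested equation is at most quadratic in \(Q\):

$$\begin{aligned} \Big |\sum _{i,a}c^i_a\int _{\mathbb {R}^n} U_i^{p-1}Z_i^a U_k\Big | \lesssim \sum _{i\ne k}\sum _{a=1}^{n+1}|c^i_a|\,Q \lesssim Q^2. \end{aligned}$$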

Thus we get

$$\begin{aligned} \left| \int _{\mathbb {R}^n} (-\Delta )^{s/2} \rho _1(-\Delta )^{s/2} U_k-p\int _{\mathbb {R}^n} U_k^p\rho _1\right| \lesssim o(1)\mathcal {B}+Q^2+\Vert f\Vert _{{H}^{-s}}. \end{aligned}$$
(3.54)

Writing \(p\int _{\mathbb {R}^n} U_k^p\rho _1 - \int _{\mathbb {R}^n} (-\Delta )^{s/2} \rho _1(-\Delta )^{s/2} U_k = (p-1)\int _{\mathbb {R}^n} U_k^p\rho _1\) (which follows from \((-\Delta )^s U_k = U_k^p\)) and using the decomposition of \(\rho _1\) in (3.48) along with the orthogonality condition for \(\rho _2\) in (3.49) we get

$$\begin{aligned} p\int _{\mathbb {R}^n} U_k^p\rho _1 - \int _{\mathbb {R}^n} (-\Delta )^{s/2} \rho _1(-\Delta )^{s/2} U_k&= (p-1)\left( \sum _i \beta ^i\int _{\mathbb {R}^n} U_k^{p} U_i+\sum _{i,a} \beta _a^i\int _{\mathbb {R}^n} U_k^{p}Z_i^a \right) \nonumber \\&= (p-1)\beta ^k S^{p+1} + \sum _{i\ne k}\beta ^i O(q_{ik}) + \sum _{i\ne k,a}\beta ^{i}_a O(q_{ik}) \end{aligned}$$
(3.55)

where we made use of an integral estimate similar to Proposition B.2 in Appendix B of [12] in the last step. On the other hand, the orthogonality condition \(\langle \rho _1,Z_k^b\rangle _{\dot{H}^s} =0\), (3.48), (3.49), and Lemma 3.16 imply

$$\begin{aligned} 0&=\sum _{i}\beta ^i\int _{\mathbb {R}^n} (-\Delta )^{s/2} U_i\cdot (-\Delta )^{s/2} Z^b_k+\sum _{i,a}\beta _a^i\int _{\mathbb {R}^n} (-\Delta )^{s/2} Z_i^a\cdot (-\Delta )^{s/2} Z^b_k \nonumber \\&= \sum _{i\ne k}\beta ^i O(q_{ik}) + \beta ^k_b + \sum _{i\ne k, a\ne b} \beta ^i_a O(q_{ik}). \end{aligned}$$
(3.56)

Thus combining (3.56) and (3.55) we get

$$\begin{aligned}&(p-1)\beta ^k S^{p+1} + \sum _{i\ne k}\beta ^i O(q_{ik}) + \sum _{i\ne k,a}\beta ^{i}_a O(q_{ik}) \\&\quad = (p-1)\beta ^k S^{p+1} + \sum _{i\ne k}\beta ^{i} O(q_{ik}) + \sum _{i\ne k,a\ne b}\beta ^{i}_a O(q_{ik}) + \sum _{i\ne k}\beta ^{i}_b O(q_{ik}) \\&\quad = (p-1)\beta ^k S^{p+1} -\beta ^k_b + \sum _{i\ne k} \beta ^i_b O(q_{ik}) \end{aligned}$$

Since the \(q_{ik}\) are \(o(1)\) for \(\delta \) small, the resulting linear system in the coefficients \(\beta ^k,\beta ^k_b\) is diagonally dominant; combining it with (3.54), summing over \(k\) and \(b\), and absorbing the \(o(1)\mathcal {B}\) term, we obtain the desired bound (3.53). \(\square \)

Lemma 3.15

Let \(\delta \) be small enough. Then,

$$\begin{aligned} \Vert (-\Delta )^{s/2}\rho _1\Vert _{L^2}\lesssim Q^2+\Vert f\Vert _{H^{-s}}. \end{aligned}$$
(3.57)

Proof

From (3.48), (3.50) and (3.53) we get that

$$\begin{aligned} \left\Vert (-\Delta )^{s/2} \rho _1\right\Vert _{L^2} \lesssim \mathcal {B} + \left\Vert (-\Delta )^{s/2}\rho _2\right\Vert _{L^2} \lesssim \mathcal {B} + \left\Vert {f}\right\Vert _{H^{-s}} \lesssim Q^2 +\left\Vert {f}\right\Vert _{H^{-s}}. \end{aligned}$$

\(\square \)

3.4 Conclusion

Assuming Steps (i), (ii), and (iii), we now conclude the proof of Theorem 1.5 when \(n\ge 6s\).

Proof

The proof proceeds in four steps.

Step 1. Recall the equation satisfied by the error \(\rho \)

$$\begin{aligned} (-\Delta )^s\rho -p\sigma ^{p-1}\rho -I_1-I_2-f=0 \end{aligned}$$

where

$$\begin{aligned} f&= (-\Delta )^s u -u|u|^{p-1},\quad I_1=\sigma ^p- \sum _{i=1}^{\nu }U_i^p, \\ I_2&=(\sigma +\rho )|\sigma +\rho |^{p-1}-\sigma ^p-p\sigma ^{p-1}\rho . \end{aligned}$$

Multiplying by \(Z_k^{n+1}\), integrating by parts, and using the orthogonality condition for \(\rho \), we get

$$\begin{aligned} \left| \int _{\mathbb {R}^n} I_1 Z^{n+1}_k\right| \le \int _{\mathbb {R}^n} p\sigma ^{p-1}|\rho Z^{n+1}_k| + \int _{\mathbb {R}^n} |I_2 Z^{n+1}_k| + \int _{\mathbb {R}^n} |f Z^{n+1}_k|. \end{aligned}$$

Using \(|I_2|\lesssim |\rho |^p\) for the second term and \(|Z^{n+1}_k|\lesssim U_k\) for the third term we get

$$\begin{aligned} \left| \int _{\mathbb {R}^n} I_1 Z^{n+1}_k\right| \lesssim \int _{\mathbb {R}^n} \sigma ^{p-1}|\rho Z^{n+1}_k| + \int _{\mathbb {R}^n} |\rho |^p |Z^{n+1}_k| + \Vert f\Vert _{{H}^{-s}}. \end{aligned}$$
(3.58)

Next, we further estimate the first two terms in the above estimate.

Step 2. For \(\delta \) small, from Lemma 3.13 in [8] we deduce that

$$\begin{aligned} \left| \int _{\mathbb {R}^n} \sigma ^{p-1} \rho Z_k^{n+1}\right| \lesssim o(Q)+\Vert f\Vert _{H^{-s}},\quad \int _{\mathbb {R}^n} |\rho |^{p} |Z_k^{n+1}|\lesssim o(Q)+\Vert f\Vert _{H^{-s}}, \end{aligned}$$
(3.59)

where Q is defined as in (1.32) and o(Q) denotes a quantity that satisfies \(o(Q)/Q\rightarrow 0\) as \(Q\rightarrow 0.\)

Step 3. Thus from (3.58) and (3.59) we get

$$\begin{aligned} \left| \int _{\mathbb {R}^n} I_1 Z_k^{n+1}\right| \lesssim o(Q) + \Vert f\Vert _{{H}^{-s}}. \end{aligned}$$
(3.60)

Then, using Lemma 2.1 from [8], we get that

$$\begin{aligned} \int _{\mathbb {R}^n} I_1 Z_k^{n+1} = \int _{\mathbb {R}^n} I_1 \lambda _k \partial _{\lambda _k} U_k = \int _{\mathbb {R}^n} U_i^p \lambda _k \partial _{\lambda _k} U_k + o(Q) \end{aligned}$$
(3.61)

for some \(i\ne k.\)

Step 4. Arguing as in Lemma 2.3 in [8] we deduce that \(Q\lesssim \Vert f\Vert _{{H}^{-s}}.\) Thus, using the fact that \(\rho = \rho _0 + \rho _1\) together with the estimates (3.45) and (3.57) (and the monotonicity of \(t\mapsto t|\log t|^{\frac{1}{2}}\) and \(t\mapsto t^{\frac{p}{2}}\) for small \(t>0\)), we get that

$$\begin{aligned} \left\Vert (-\Delta )^{s/2} \rho \right\Vert _{L^2}&\le \left\Vert (-\Delta )^{s/2} \rho _0\right\Vert _{L^2} + \left\Vert (-\Delta )^{s/2} \rho _1\right\Vert _{L^2}\\&\lesssim {\left\{ \begin{array}{ll} \Vert f\Vert _{H^{-s}}\left| \log \Vert f\Vert _{H^{-s}}\right| ^{\frac{1}{2}},&{}\text {if }n=6s,\\ \Vert f\Vert _{H^{-s}}^{\frac{p}{2}},&{}\text {if }n> 6s,\end{array}\right. } \end{aligned}$$

which concludes the proof of Theorem 1.5. \(\square \)