1 Introduction and Main Results

In this paper, we are concerned with the following fractional Kirchhoff problem

$$\begin{aligned} \Big (a+b{\int _{\mathbb {R}^{N}}}|(-\Delta )^{\frac{s}{2}}u|^2\mathrm{{d}}x\Big )(-\Delta )^su+u=u^p,\quad \text {in}\ \mathbb {R}^{N}, \end{aligned}$$
(1.1)

where \(a,b>0\), \((-\Delta )^s\) is the pseudo-differential operator defined by

$$\begin{aligned} \mathcal {F}((-\Delta )^su)(\xi )=|\xi |^{2s}\mathcal {F}(u)(\xi ),\ \ \xi \in \mathbb {R}^N, \end{aligned}$$

where \(\mathcal {F}\) denotes the Fourier transform, and p satisfies

$$\begin{aligned} 1<p<2_s^*-1= {\left\{ \begin{array}{ll} \frac{N+2s}{N-2s}, &{} 0<s<\frac{N}{2}, \\ +\infty , &{} s\ge \frac{N}{2}, \end{array}\right. } \end{aligned}$$

where \(2^*_s\) is the standard fractional Sobolev critical exponent. Recently, Rădulescu and Yang [41] established uniqueness and nondegeneracy of positive solutions to (1.1) for \(\frac{N}{4}<s<1\). In this paper, we consider the high-dimensional cases, i.e. \(N\ge 4s\). We also refer to [26, 40, 44, 45] for critical cases and single/multi-peak solutions in this direction.
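Before proceeding, we note that the Fourier-multiplier definition of \((-\Delta )^s\) above is easy to experiment with numerically. The following sketch is purely illustrative and plays no role in the proofs: it discretizes \((-\Delta )^s\) on a large periodic grid via the FFT (the grid size, box length and Gaussian test function are arbitrary choices) and checks the semigroup property that applying \((-\Delta )^{1/2}\) twice agrees with \(-\Delta \).

```python
import numpy as np

def frac_laplacian_1d(u, L, s):
    """Apply (-Delta)^s on a periodic grid via the multiplier |xi|^{2s}.

    Discrete stand-in for F((-Delta)^s u)(xi) = |xi|^{2s} F(u)(xi);
    L is a large period, so boundary effects on a rapidly decaying u
    are negligible.
    """
    n = len(u)
    xi = 2 * np.pi * np.fft.fftfreq(n, d=L / n)   # angular Fourier frequencies
    return np.fft.ifft(np.abs(xi) ** (2 * s) * np.fft.fft(u)).real

# Sanity check: applying the half-Laplacian (s = 1/2) twice should agree with
# the full Laplacian (s = 1), since the multipliers compose: |xi| * |xi| = |xi|^2.
n, L = 2048, 40.0
x = np.linspace(-L / 2, L / 2, n, endpoint=False)
u = np.exp(-x ** 2)                               # rapidly decaying test function

twice_half = frac_laplacian_1d(frac_laplacian_1d(u, L, 0.5), L, 0.5)
full = frac_laplacian_1d(u, L, 1.0)
assert np.max(np.abs(twice_half - full)) < 1e-8
```

On a periodic box the multipliers compose exactly, so the check passes essentially to machine precision.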

If \(s=1\), Eq. (1.1) reduces to the well-known Kirchhoff type problem, whose variants have been studied extensively in the literature. The equation that goes under the name of Kirchhoff equation was proposed in [28] as a model for the transverse oscillation of a stretched string in the form

$$\begin{aligned} \rho h \partial _{t t}^{2} u-\left( p_{0}+\frac{\mathcal {E}h}{2 L} \int _{0}^{L}\left| \partial _{x} u\right| ^{2} \mathrm{{d}}x\right) \partial _{x x}^{2} u=0, \end{aligned}$$
(1.2)

for \(t \ge 0\) and \(0<x<L\), where \(u=u(t, x)\) is the lateral displacement at time t and position x, \(\mathcal {E}\) is the Young modulus, \(\rho \) is the mass density, h is the cross-section area, L is the length of the string, and \(p_{0}\) is the initial stress tension. Problem (1.2) and its variants have been studied extensively in the literature. Bernstein obtained a global stability result in [10], which was generalized to arbitrary dimension \(N\ge 1\) by Pohožaev in [37]. We also point out that such problems may describe processes in biological systems that depend on an average of the system itself, such as population density (see e.g. [9]). Much interesting work on Kirchhoff equations can be found in [15, 27, 33, 43] and the references therein. We also refer to [38] for a recent survey of results connected to this model.

On the other hand, the interest in generalizing the model introduced by Kirchhoff to the fractional case does not arise only for mathematical purposes. In fact, following the ideas of [11] and the concept of fractional perimeter, Fiscella and Valdinoci proposed in [20] an equation describing the behaviour of a string constrained at the extrema in which the fractional length of the rope appears. Recently, problems similar to (1.1) have been extensively investigated by many authors using different techniques, producing several relevant results (see, e.g. [1,2,3,4, 6, 8, 23,24,25, 34,35,36, 42]).

Besides, if \(b=0\) in (1.1), then we are led immediately to the following fractional Schrödinger equation

$$\begin{aligned} a(-\Delta )^su+u=u^p, \quad \text {in} \,\,\mathbb {R}^{N}. \end{aligned}$$
(1.3)

This equation is related to the standing wave solutions of the time-dependent fractional Schrödinger equation

$$\begin{aligned} ih\frac{\partial \psi }{\partial t}=h^{2s}(-\Delta )^s\psi +V(x)\psi -f(x,|\psi |),\,\,\, \text {in}\ \mathbb {R}^N\times \mathbb {R}, \end{aligned}$$
(1.4)

where h is the Planck constant and V(x) is a potential function. Eq. (1.4) was introduced by Laskin [29, 30] as a fundamental equation of fractional quantum mechanics in the study of particles on stochastic fields modelled by Lévy processes. For \(0<s<1\), the fractional Sobolev space \(H^s(\mathbb {R}^N)\) is defined by

$$\begin{aligned} H^{s}(\mathbb {R}^N)=\bigg \{u\in L^2(\mathbb {R}^N):\frac{u(x)-u(y)}{|x-y|^{\frac{N}{2}+s}}\in L^2({\mathbb {R}^{N}\times \mathbb {R}^{N}})\bigg \}, \end{aligned}$$

endowed with the natural norm

$$\begin{aligned} \Vert u\Vert ^2=\int _{\mathbb {R}^N}|u|^2\mathrm{{d}}x+\int \int _{\mathbb {R}^N\times \mathbb {R}^N}\frac{|u(x)-u(y)|^2}{|x-y|^{N+2s}}\mathrm{{d}}x\mathrm{{d}}y. \end{aligned}$$

From [17], we have

$$\begin{aligned} \Vert (-\Delta )^{\frac{s}{2}}u\Vert ^2_2=\int _{\mathbb {R}^N}|\xi |^{2s}|\mathcal {F}(u)|^2\mathrm{{d}}\xi =\frac{1}{2}C(N,s) \int _{\mathbb {R}^N\times \mathbb {R}^N}\frac{|u(x)-u(y)|^2}{|x-y|^{N+2s}}\mathrm{{d}}x\mathrm{{d}}y, \end{aligned}$$

and the fractional Gagliardo–Nirenberg–Sobolev inequality

$$\begin{aligned} \int _{\mathbb {R}^N}|u|^{p+1}\mathrm{{d}}x\le \mathcal {S}\Big (\int _{\mathbb {R}^N}|(-\Delta )^{\frac{s}{2}}u|^2\mathrm{{d}}x\Big )^{\frac{N(p-1)}{4s}} \Big (\int _{\mathbb {R}^N}|u|^2\mathrm{{d}}x\Big )^{\frac{p-1}{4s}(2s-N)+1}, \end{aligned}$$
(1.5)

where \(\mathcal {S}>0\) is the best constant. It follows from (1.5) that, for every nonzero \(u\in H^s(\mathbb {R}^N)\),

$$\begin{aligned} J(u)=\frac{\mathcal {S}\Big (\int _{\mathbb {R}^N}|(-\Delta )^{\frac{s}{2}}u|^2\mathrm{{d}}x\Big )^{\frac{N(p-1)}{4s}}\Big (\int _{\mathbb {R}^N}|u|^2\mathrm{{d}}x\Big )^{\frac{p-1}{4s}(2s-N)+1}}{\int _{\mathbb {R}^N}|u|^{p+1}\mathrm{{d}}x}>0. \end{aligned}$$
(1.6)

Since the fractional Laplacian \((-\Delta )^s\) is a nonlocal operator, one cannot directly apply the usual techniques for the classical Laplacian. Therefore, several ideas have been proposed recently. In [12], Caffarelli and Silvestre expressed the operator \((-\Delta )^s\) on \(\mathbb {R}^N\) as a generalized elliptic boundary value problem with local differential operators defined on the upper half-space \(\mathbb {R}^{N+1}_+=\{(t,x):t>0,x\in \mathbb {R}^N\}\). By means of Lyapunov–Schmidt reduction, concentration phenomena of solutions were considered independently in [13, 16]. For more interesting results concerning the existence, multiplicity and concentration of solutions for fractional Laplacian equations, we refer the reader to [5, 17, 18] and the references therein.

Uniqueness of ground states of nonlocal equations similar to Eq. (1.3) is of fundamental importance in the stability and blow-up analysis for solitary wave solutions of nonlinear dispersive equations, for example, the generalized Benjamin–Ono equation. In contrast to the classical limiting case \(s=1\), in which standard ODE techniques are applicable, uniqueness of ground state solutions to Eq. (1.3) is a genuinely difficult problem. In the case \(s=\frac{1}{2}\) and \(N=1\), Amick and Toland [7] obtained the uniqueness result for solitary waves of the Benjamin–Ono equation. After that, Lenzmann [31] obtained the uniqueness of ground states for the pseudorelativistic Hartree equation in three dimensions. In [21], Frank and Lenzmann extended the results in [7] to the case \(s \in (0,1)\) and \(N=1\) with completely new methods. For the higher-dimensional case, Fall and Valdinoci [19] established the uniqueness and nondegeneracy of ground state solutions of (1.3) when \(s \in (0,1)\) is sufficiently close to 1 and p is subcritical. In their striking paper [22], Frank, Lenzmann and Silvestre solved the problem completely: they showed that the ground state solution of (1.3) is unique for arbitrary space dimension \(N \ge 1\) and all admissible subcritical exponents p. Moreover, they also established the nondegeneracy of ground state solutions. We summarize their main results as follows.

Proposition 1.1

Let \(N\ge 1\), \(0<s<1\) and \(1<p<2_s^*-1\). Then the following holds.

(i):

there exists a minimizer \(Q\in H^{s}(\mathbb {R}^N)\) for J(u), which can be chosen to be a nonnegative function that solves Eq. (1.3);

(ii):

there exists some \(x_0\in \mathbb {R}^N\) such that \(Q(\cdot -x_0)\) is radial, positive and strictly decreasing in \(r=|x-x_0|\). Moreover, the function Q belongs to \(C^\infty (\mathbb {R}^N)\cap H^{2s+1}(\mathbb {R}^N)\) and satisfies

$$\begin{aligned} \frac{C_1}{1+|x|^{N+2s}}\le Q(x)\le \frac{C_2}{1+|x|^{N+2s}},\quad \forall \ x\in \mathbb {R}^N, \end{aligned}$$

with some constants \(C_2\ge C_1>0\);

(iii):

Q is the unique solution of (1.3) up to translation.

Proposition 1.2

Let \(N\ge 1\), \(0<s<1\), \(1<p<2_s^*-1\) and c be a positive constant. Suppose that \(Q\in H^s(\mathbb {R}^N)\) is a ground state solution of

$$\begin{aligned} c(-\Delta )^sQ+Q=|Q|^p\quad \text {in}\quad \mathbb {R}^{N} \end{aligned}$$
(1.7)

and \(T_+\) denotes the corresponding linearized operator given by

$$\begin{aligned} T_+=c(-\Delta )^s+1-p|Q|^{p-1}. \end{aligned}$$

Then the following holds.

(i):

Q is nondegenerate, i.e., \(\ker T_+=span\{\partial _{x_1}Q, \partial _{x_2}Q,\cdots , \partial _{x_N}Q\}\);

(ii):

the restriction of \(T_+\) on \(L^2_{rad}(\mathbb {R}^N)\) is one-to-one and thus it has an inverse \(T_+^{-1}\) acting on \(L^2_{rad}(\mathbb {R}^N)\);

(iii):

\(T_+Q=-(p-1)Q^{p}\) and \(T_+R=-2sQ\), where \(R=\frac{2s}{p-1}Q+x\cdot \nabla Q\).

From the viewpoint of the calculus of variations, the fractional Kirchhoff problem (1.1) is much more complex and difficult than the classical fractional Laplacian Eq. (1.3) due to the appearance of the term \(b\big ({\int _{\mathbb {R}^{N}}}|(-\Delta )^{\frac{s}{2}}u|^2\mathrm{{d}}x\big )(-\Delta )^su\), which is of order four. A fundamental task in the study of problem (1.1) is therefore to clarify the effects of this nonlocal term. The only uniqueness and non-degeneracy results that we know of for solutions of problem (1.1) are proved in [41] for the case \(\frac{N}{4}<s<1\), and in [14, 32] for the case \(s=1\). As in [41], let U be a positive ground state solution of (1.1) and set

$$\begin{aligned} \mathcal {E}_0=a+b\Vert (-\Delta )^{\frac{s}{2}}U\Vert ^2_2\quad \text {and}\quad {\tilde{U}}(x)=U(\mathcal {E}_0^{\frac{1}{2s}}x). \end{aligned}$$

Then, it is easy to check that \({\tilde{U}}\) is a positive solution of (1.3) and a minimizer of J(u). Therefore, from the uniqueness result for positive solutions of problem (1.3), we know that any solution U(x) of problem (1.1) with \(a,b > 0\) has the following form

$$\begin{aligned} U(x)=Q(\mathcal {E}_0^{-\frac{1}{2s}}x). \end{aligned}$$

Consequently, the solvability of problem (1.1) is equivalent to the solvability of the following algebraic equation:

$$\begin{aligned} f(\mathcal {E})=\mathcal {E}-a-b\Vert (-\Delta )^{\frac{s}{2}}Q\Vert ^2_2\mathcal {E}^{\frac{N-2s}{2s}}=0, \quad \mathcal {E}\in (a,+\infty ). \end{aligned}$$

This observation makes the question of uniqueness and multiplicity of solutions to problem (1.1) very simple. Therefore, the main focus of the present paper is the non-degeneracy of positive solutions of problem (1.1). The main results of this paper are collected in the following statements.
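The reduction just described turns (1.1) into a one-dimensional root-finding problem once \(\Vert (-\Delta )^{\frac{s}{2}}Q\Vert ^2_2\) is known. The sketch below is illustrative only: the value `kappa = 1.0` is a stand-in for \(\Vert (-\Delta )^{\frac{s}{2}}Q\Vert ^2_2\), which is not computed here, and the sample parameters put us in the regime \(N<4s\), where a single root exists.

```python
# Solving (1.1) reduces to the scalar equation
#     f(E) = E - a - b*kappa*E^((N-2s)/(2s)) = 0  on (a, +inf),
# where kappa stands for ||(-Δ)^{s/2}Q||_2^2 (NOT computed here; the value
# below is a purely illustrative stand-in).
a, b, s, N, kappa = 1.0, 0.1, 0.6, 2, 1.0   # N < 4s: one root, as in Theorem 1.1(i)

def f(E):
    return E - a - b * kappa * E ** ((N - 2 * s) / (2 * s))

def bisect(g, lo, hi, tol=1e-12):
    """Standard bisection; assumes g(lo) and g(hi) have opposite signs."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if g(lo) * g(mid) > 0 else (lo, mid)
    return 0.5 * (lo + hi)

E0 = bisect(f, a, 10.0)            # f(a) < 0 and f -> +inf, so a root exists in (a, 10)
assert abs(f(E0)) < 1e-9
U_scale = E0 ** (-1.0 / (2 * s))   # the solution is then U(x) = Q(E0^{-1/(2s)} x)
```

Of course, this is only a caricature of the analysis: the point of Theorem 1.1 is to count the roots of f exactly in every regime of N, s, a, b.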

Theorem 1.1

Assume that \(a,b>0\) and \(1<p<2^*_s-1\). Then the following statements are true:

(i):

If \(1<N <4s\), then problem (1.1) has exactly one solution;

(ii):

If \(N=4s\), then problem (1.1) is solvable if and only if \(b\Vert (-\Delta )^{\frac{s}{2}}Q\Vert ^2_2<1\), and in this case problem (1.1) has exactly one solution;

(iii):

If \(N>4s\), then problem (1.1) is solvable if and only if

$$\begin{aligned} b\Vert (-\Delta )^{\frac{s}{2}}Q\Vert ^2_2\le \frac{2sa^{\frac{4s-N}{2s}}(N-4s)^{\frac{N-4s}{2s}}}{(N-2s)^{\frac{N-2s}{2s}}}. \end{aligned}$$

Furthermore, problem (1.1) has exactly one solution when equality holds, and exactly two solutions when the inequality is strict.

Moreover, denoting any solution by U, there exists some \(x_0\in \mathbb {R}^N\) such that \(U(\cdot -x_0)\) is radial, positive and strictly decreasing in \(r=|x-x_0|\). Furthermore, the function U belongs to \(C^\infty (\mathbb {R}^N)\cap H^{2s+1}(\mathbb {R}^N)\) and satisfies

$$\begin{aligned} \frac{C_1}{1+|x|^{N+2s}}\le U(x)\le \frac{C_2}{1+|x|^{N+2s}},\quad \forall \ x\in \mathbb {R}^N, \end{aligned}$$

with some constants \(C_2\ge C_1>0\).

Theorem 1.2

Suppose that \(a,b>0\). Then any positive solution U(x) of problem (1.1) is non-degenerate if one of the following conditions holds:

  • \(1\le N\le 4s\);

  • \(N>4s\) and \(b\Vert (-\Delta )^{\frac{s}{2}}Q\Vert ^2_2 \ne \frac{2sa^{\frac{4s-N}{2s}}(N-4s)^{\frac{N-4s}{2s}}}{(N-2s)^{\frac{N-2s}{2s}}}\).

By Theorem 1.2, we can now apply the Lyapunov–Schmidt reduction to study the perturbed fractional Kirchhoff equation

$$\begin{aligned} \Big (\varepsilon ^{2s}a+\varepsilon ^{4s-N} b{\int _{\mathbb {R}^{N}}}|(-\Delta )^{\frac{s}{2}}u|^2\mathrm{{d}}x\Big )(-\Delta )^su+V(x)u=u^p,\quad \text {in}\ \mathbb {R}^{N}, \end{aligned}$$
(1.8)

where \(V: \mathbb {R}^{N} \rightarrow \mathbb {R}\) is a bounded continuous function. We look for solutions of (1.8) in the Sobolev space \(H^s(\mathbb {R}^N)\) for sufficiently small \(\varepsilon \), which are called semiclassical solutions. We also call such solutions concentrating solutions, since they concentrate at a certain point of the potential function V. Moreover, it is expected that this approach can deal with problem (1.8) for all \(1<p<2^*_s-1\) in a unified way. To state the following results, let us introduce some notation that will be used throughout the paper. For \(\varepsilon >0\) and \(y=\left( y_{1}, y_{2},\cdots , y_{N}\right) \in \mathbb {R}^{N}\), write

$$\begin{aligned} U_{\varepsilon , y}(x)=U\left( \frac{x-y}{ \varepsilon }\right) , \quad x \in \mathbb {R}^{N}. \end{aligned}$$

Assume that \(V: \mathbb {R}^{N} \rightarrow \mathbb {R}\) satisfies the following conditions:

\((V_1)\):

V is a bounded continuous function with \(\inf \limits _{x \in \mathbb {R}^{N}} V>0\);

\((V_2)\):

There exist \(x_{0} \in \mathbb {R}^{N}\) and \(r_{0}>0\) such that

$$\begin{aligned} V\left( x_{0}\right)<V(x) \quad \text{ for } 0<\left| x-x_{0}\right| <r_{0}, \end{aligned}$$

and \(V \in C^{\alpha }\left( {\bar{B}}_{r_{0}}\left( x_{0}\right) \right) \) for some \(0<\alpha <\frac{N+4 s}{2}\). That is, V is \(\alpha \)-Hölder continuous around \(x_{0}\).

The assumption \((V_1)\) allows us to introduce the inner products

$$\begin{aligned} \langle u, v\rangle _{\varepsilon }=\int _{\mathbb {R}^{N}}\left( \varepsilon ^{2s} a (-\Delta )^{\frac{s}{2}} u \cdot (-\Delta )^{\frac{s}{2}} v+V(x) u v\right) \mathrm{{d}}x, \end{aligned}$$

for \(u, v \in H^{s}(\mathbb {R}^{N})\). We also write

$$\begin{aligned} H_{\varepsilon }=\left\{ u \in H^{s}(\mathbb {R}^{N}):\Vert u\Vert _{\varepsilon }=\langle u, u\rangle _{\varepsilon }^{\frac{1}{2}}<\infty \right\} . \end{aligned}$$

Now we state the existence result as follows.

Theorem 1.3

Let \(a,b>0\), \(1<p<2^*_s-1\) and let V satisfy \((V_1)\) and \((V_2)\). Assume that \(N=4s\) and \(b \int _{\mathbb {R}^{N}}|(-\Delta )^{\frac{s}{2}} Q|^{2} \mathrm{{d}}x<1\). Then there exists \(\varepsilon _{0}>0\) such that for all \(\varepsilon \in \left( 0, \varepsilon _{0}\right) \), problem (1.8) has a solution \(u_{\varepsilon }\) of the form

$$\begin{aligned} u_{\varepsilon }=U\left( \frac{x-y_{\varepsilon }}{\varepsilon }\right) +\varphi _{\varepsilon } \end{aligned}$$

with \(\varphi _{\varepsilon } \in H_{\varepsilon }\), satisfying

$$\begin{aligned} \begin{aligned} y_{\varepsilon }&\rightarrow x_{0}, \\ \left\| \varphi _{\varepsilon }\right\| _{\varepsilon }&=o\left( \varepsilon ^{\frac{N}{2}}\right) \\ \end{aligned} \end{aligned}$$

as \(\varepsilon \rightarrow 0\) .

Theorem 1.4

Let \(a,b>0\), \(1<p<2^*_s-1\) and let V satisfy \((V_1)\) and \((V_2)\). Assume that \(N>4s\) and \(b \int _{\mathbb {R}^{N}}|(-\Delta )^{\frac{s}{2}} Q|^{2} \mathrm{{d}}x<\frac{2sa^{\frac{4s-N}{2s}}(N-4s)^{\frac{N-4s}{2s}}}{(N-2s)^{\frac{N-2s}{2s}}}\). Let \(U_{i}\) \((i=1,2)\) be the two positive solutions of problem (1.1). Then there exists \(\varepsilon _{0}>0\) such that for all \(\varepsilon \in \left( 0, \varepsilon _{0}\right) \), problem (1.8) has two solutions \(u_{\varepsilon }^{i}(x)\) \((i=1,2)\) of the form

$$\begin{aligned} u_{\varepsilon }^i(x)=U_i\left( \frac{x-y_{\varepsilon }}{\varepsilon }\right) +\varphi _{\varepsilon }^i(x), \end{aligned}$$

with \(\varphi _{\varepsilon }^i \in H_{\varepsilon }\), satisfying

$$\begin{aligned} \begin{aligned} y_{\varepsilon }^i&\rightarrow x_{0}, \\ \left\| \varphi _{\varepsilon }^i\right\| _{\varepsilon }&=o\left( \varepsilon ^{\frac{N}{2}}\right) \\ \end{aligned} \end{aligned}$$

as \(\varepsilon \rightarrow 0\) .

This paper is organized as follows. We complete the proof of Theorem 1.1 in Sect. 2 and prove Theorem 1.2 in Sect. 3. In Sect. 4, we present some basic results and explain the strategy of the proofs of Theorems 1.3 and 1.4.

Notation. Throughout this paper, we make use of the following notations.

  • For any \(R>0\) and for any \(x\in \mathbb {R}^N\), \(B_R(x)\) denotes the ball of radius R centered at x;

  • \(\Vert \cdot \Vert _q\) denotes the usual norm of the space \(L^q(\mathbb {R}^N),1\le q\le \infty \);

  • \(o_n(1)\) denotes a quantity that tends to 0 as \(n\rightarrow \infty \);

  • C or \(C_i\ (i=1,2,\cdots )\) denote positive constants that may change from line to line.

2 Proof of Theorem 1.1

In this section, we analyze the existence of solutions for the following fractional Kirchhoff problem

$$\begin{aligned} {\left\{ \begin{array}{ll} \Big (a+b{\int _{\mathbb {R}^{N}}}|(-\Delta )^{\frac{s}{2}}u|^2\mathrm{{d}}x\Big )(-\Delta )^su+u=u^p, &{} \text {in}\ \mathbb {R}^N, \\ u(x)>0, &{} \text {in}\ \mathbb {R}^N, \\ u(x)\in H^s(\mathbb {R}^N). \end{array}\right. } \end{aligned}$$
(2.1)

As mentioned in the introduction, we know that any solution to (2.1) has the following form

$$\begin{aligned} {U}(x)=Q\Big (\mathcal {E}_0^{-\frac{1}{2s}}x-x_0\Big ). \end{aligned}$$

where \(\mathcal {E}_0\) satisfies

$$\begin{aligned} \mathcal {E}_0=a+b\Vert (-\Delta )^{\frac{s}{2}}Q\Vert ^2_2\mathcal {E}_0^{\frac{N-2s}{2s}}, \end{aligned}$$

and Q is the unique positive radial solution to the following problem

$$\begin{aligned} {\left\{ \begin{array}{ll} (-\Delta )^sQ+Q=Q^p, &{} \text {in}\ \mathbb {R}^N, \\ Q(x)>0, &{} \text {in}\ \mathbb {R}^N, \\ Q(x)\in H^s(\mathbb {R}^N). \end{array}\right. } \end{aligned}$$
(2.2)

Let Q be the unique positive solution of (2.2) and also a minimizer of J(u). Consider the equation

$$\begin{aligned} f(\mathcal {E})=\mathcal {E}-a-b\Vert (-\Delta )^{\frac{s}{2}}Q\Vert ^2_2\mathcal {E}^{\frac{N-2s}{2s}}=0, \quad \mathcal {E}\in (a,+\infty ). \end{aligned}$$
(2.3)

Therefore, to find a solution U(x) of (2.1), it suffices to find positive solutions of the above algebraic Eq. (2.3).

\(\mathbf{{Case\, 1\,\, 1<N<4s:}}\) In this case, we have \(\frac{N-2s}{2s}<1\), which implies that \(\lim \limits _{\mathcal {E}\rightarrow +\infty }f(\mathcal {E})=+\infty \). Moreover, one has \(f(a)<0\). Consequently, there exists a unique \(\mathcal {E}_0>a\) such that \(f(\mathcal {E}_0)=0\), which means that (2.1) has a unique solution.

\(\mathbf{{Case\, 2\,\, N=4s:}}\) In this case, (2.3) becomes

$$\begin{aligned} \mathcal {E}-a-b\Vert (-\Delta )^{\frac{s}{2}}Q\Vert ^2_2\mathcal {E}=0, \end{aligned}$$
(2.4)

which means that this equation has a unique positive solution

$$\begin{aligned} \mathcal {E}_0=\frac{a}{1-b\Vert (-\Delta )^{\frac{s}{2}}Q\Vert ^2_2}, \end{aligned}$$
(2.5)

if and only if \(b<\frac{1}{\Vert (-\Delta )^{\frac{s}{2}}Q\Vert ^2_2}\).

\(\mathbf{{Case\, 3\,\, N>4s:}}\) A simple computation implies that

$$\begin{aligned} f'(\mathcal {E})=1-\frac{N-2s}{2s}b\Vert (-\Delta )^{\frac{s}{2}}Q\Vert ^2_2\mathcal {E}^{\frac{N-4s}{2s}}, \end{aligned}$$
(2.6)

which means that \(f(\mathcal {E})\) has a unique maximum point

$$\begin{aligned} \mathcal {E}_0=\left( \frac{2s}{(N-2s)\Vert (-\Delta )^{\frac{s}{2}}Q\Vert ^2_2b}\right) ^{\frac{2s}{N-4s}}>0, \end{aligned}$$
(2.7)

and the maximum of \(f(\mathcal {E})\) is

$$\begin{aligned} f(\mathcal {E}_0)=(\frac{N-4s}{N-2s})\left( \frac{2s}{(N-2s)\Vert (-\Delta )^{\frac{s}{2}}Q\Vert ^2_2b}\right) ^{\frac{2s}{N-4s}}-a. \end{aligned}$$
(2.8)

It is easy to see that \(f(\mathcal {E}_0)\ge 0\) is equivalent to

$$\begin{aligned} b\Vert (-\Delta )^{\frac{s}{2}}Q\Vert ^2_2\le \frac{2sa^{\frac{4s-N}{2s}}(N-4s)^{\frac{N-4s}{2s}}}{(N-2s)^{\frac{N-2s}{2s}}}. \end{aligned}$$
(2.9)

Since \(f^{\prime \prime }(\mathcal {E})<0\) in \((0,+\infty )\) because \(N>4s\), we know that \(f(\mathcal {E})\) is concave in \((0,+\infty )\). Noting further that \(f(0)=-a<0\) and \(\lim \limits _{\mathcal {E} \rightarrow +\infty } f(\mathcal {E})=-\infty \), a necessary and sufficient condition for the solvability of Eq. (2.3) in \((0,+\infty )\) is \(f\left( \mathcal {E}_0\right) \ge 0\). Hence, Eq. (2.3) has a solution in \((0,+\infty )\) if and only if inequality (2.9) holds. Furthermore, we have

(i):

If \(b\Vert (-\Delta )^{\frac{s}{2}}Q\Vert ^2_2= \frac{2sa^{\frac{4s-N}{2s}}(N-4s)^{\frac{N-4s}{2s}}}{(N-2s)^{\frac{N-2s}{2s}}}\), then Eq. (2.3) has exactly one positive solution \(\mathcal {E}_0\) defined by (2.7);

(ii):

If \(b\Vert (-\Delta )^{\frac{s}{2}}Q\Vert ^2_2< \frac{2sa^{\frac{4s-N}{2s}}(N-4s)^{\frac{N-4s}{2s}}}{(N-2s)^{\frac{N-2s}{2s}}}\), then Eq. (2.3) has exactly two positive solutions \(\mathcal {E}_{1}\) and \(\mathcal {E}_{2}\) such that \(\mathcal {E}_{1} \in \left( 0, \mathcal {E}_{0}\right) \) and \(\mathcal {E}_{2} \in \left( \mathcal {E}_{0},+\infty \right) \).
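The trichotomy in Case 3 can be observed directly by counting sign changes of f. The following sketch is illustrative only: s = 1/2, N = 4, a = 1 are sample values, and `kappa` stands in for \(\Vert (-\Delta )^{\frac{s}{2}}Q\Vert ^2_2\), which enters only through the product \(b\kappa \); with these values the right-hand side of (2.9) evaluates to 4/27.

```python
import numpy as np

# Sample parameters with N > 4s; only the product b*kappa matters below,
# where kappa stands in for ||(-Δ)^{s/2}Q||_2^2.
s, N, a = 0.5, 4, 1.0
threshold = (2 * s * a ** ((4 * s - N) / (2 * s))
             * (N - 4 * s) ** ((N - 4 * s) / (2 * s))
             / (N - 2 * s) ** ((N - 2 * s) / (2 * s)))
assert abs(threshold - 4 / 27) < 1e-12        # the right-hand side of (2.9)

E = np.linspace(1e-6, 10.0, 2_000_001)

def count_roots(bk):
    """Count sign changes of f(E) = E - a - bk * E^{(N-2s)/(2s)} on (0, 10]."""
    fE = E - a - bk * E ** ((N - 2 * s) / (2 * s))
    return int(np.sum(np.sign(fE[1:]) != np.sign(fE[:-1])))

assert count_roots(0.9 * threshold) == 2      # strict inequality: two solutions
assert count_roots(1.1 * threshold) == 0      # above the threshold: no solution

# At equality the two roots collapse at E0 = (2s/((N-2s)*bk))^{2s/(N-4s)} = 3/2:
E0 = (2 * s / ((N - 2 * s) * threshold)) ** ((2 * s) / (N - 4 * s))
f_at_E0 = E0 - a - threshold * E0 ** ((N - 2 * s) / (2 * s))
assert abs(E0 - 1.5) < 1e-12 and abs(f_at_E0) < 1e-12
```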

Up to now, we have proved Theorem 1.1. Next, we analyze the asymptotic behavior of the solutions obtained above as \(b\rightarrow 0\). In the case \(1< N\le 4s\), if we denote by \(\mathcal {E}_0\) the unique positive solution to Eq. (2.3), we have \(\lim \limits _{b \rightarrow 0} b\mathcal {E}_0=0\). From this we infer that the following conclusion holds.

Theorem 2.1

Assume that \(1<N\le 4s\). Let \(U_{b}(x)\) be the unique solution to problem (2.1). Then \(\lim \limits _{b \rightarrow 0} U_{b}(x)=Q(x)\) pointwise.

In the case \(N>4s\), if \(b\Vert (-\Delta )^{\frac{s}{2}}Q\Vert ^2_2< \frac{2sa^{\frac{4s-N}{2s}}(N-4s)^{\frac{N-4s}{2s}}}{(N-2s)^{\frac{N-2s}{2s}}}\), Eq. (2.3) has exactly two solutions \(\mathcal {E}_{1}\) and \(\mathcal {E}_{2}\) such that

$$\begin{aligned} 0<\mathcal {E}_{1}<\mathcal {E}_{0} \text{ and } \mathcal {E}_{0}<\mathcal {E}_{2}<+\infty , \text{ where } \mathcal {E}_{0}=\left( \frac{2s}{(N-2s)\Vert (-\Delta )^{\frac{s}{2}}Q\Vert ^2_2b}\right) ^{\frac{2s}{N-4s}}.\nonumber \\ \end{aligned}$$
(2.10)

Correspondingly, problem (2.1) has exactly two solutions

$$\begin{aligned} U_{b}^{1}(x)=Q\left( \mathcal {E}_{1}^{-\frac{1}{2s}}x\right) \quad \text{ and } \quad U_{b}^{2}(x)=Q\left( \mathcal {E}_{2}^{-\frac{1}{2s}}x\right) . \end{aligned}$$

From (2.10), we can see that

$$\begin{aligned} \lim _{b \rightarrow 0} \mathcal {E}_{2} \ge \lim _{b \rightarrow 0} \mathcal {E}_{0}=+\infty . \end{aligned}$$

Hence,

$$\begin{aligned} \lim _{b \rightarrow 0} U_{b}^{2}(x)=Q(0), \quad \forall x \in \mathbb {R}^{N}. \end{aligned}$$

By a similar analysis, we have \(\lim \limits _{b \rightarrow 0} b \mathcal {E}_{1}=0\), and the following conclusion holds.

Theorem 2.2

Suppose that \(N>4s\). Then

$$\begin{aligned} \lim _{b \rightarrow 0} U_{b}^{1}(x)=Q(x) \quad \text{ and } \quad \lim _{b \rightarrow 0} U_{b}^{2}(x)=Q(0), \quad \forall x \in \mathbb {R}^{N}. \end{aligned}$$
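The limits in Theorem 2.2 reflect the behaviour of the two roots of (2.3) as \(b\rightarrow 0\): \(\mathcal {E}_1\) stays bounded while \(\mathcal {E}_2\) blows up. A quick numerical illustration (sample values s = 1/2, N = 4, a = 1, with `kappa` = 1 standing in for \(\Vert (-\Delta )^{\frac{s}{2}}Q\Vert ^2_2\), so that (2.3) reads \(E-1-bE^3=0\)):

```python
import numpy as np

def roots(b):
    """Return the two positive roots E1 < E2 of E - 1 - b*E^3 = 0 (b small)."""
    r = np.roots([-b, 0.0, 1.0, -1.0])           # coefficients of -b*E^3 + E - 1
    pos = np.sort(r[(np.abs(r.imag) < 1e-9) & (r.real > 0)].real)
    return pos[0], pos[-1]

E1_list, E2_list = zip(*(roots(b) for b in (1e-2, 1e-4, 1e-6)))
assert abs(E1_list[-1] - 1.0) < 1e-3          # E1 -> a = 1, hence b*E1 -> 0
assert E2_list[0] < E2_list[1] < E2_list[2]   # E2 grows without bound
assert E2_list[-1] > 1e2                      # consistent with E2 ~ b^{-1/2} here
```

Since \(\mathcal {E}_2^{-\frac{1}{2s}}\rightarrow 0\), the rescaled profile \(Q(\mathcal {E}_2^{-\frac{1}{2s}}x)\) flattens to the constant Q(0), which is exactly the second limit in Theorem 2.2.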

3 Nondegeneracy Results

In this section we prove the nondegeneracy result of Theorem 1.2. For positive constants a, b, we define the differential operator L by

$$\begin{aligned} L(u)=\left( a+b \int _{\mathbb {R}^{N}}|(-\Delta )^{\frac{s}{2}}u|^{2} \mathrm{{d}} x\right) (-\Delta )^s u+u-|u|^{p-1} u, \end{aligned}$$

for any \(u \in H^{s}(\mathbb {R}^{N})\) in the weak sense. The linearized operator \(\mathcal {L}_+\) of L at U is defined as

$$\begin{aligned} \mathcal {L}_+(\varphi )=\left. \frac{d L(U+t \varphi )}{d t}\right| _{t=0}, \quad \forall \varphi \in H^{s}(\mathbb {R}^{N}). \end{aligned}$$

It is easy to see that for any \(\varphi \in H^s(\mathbb {R}^N)\),

$$\begin{aligned} \begin{aligned} \mathcal {L}_+(\varphi )&=\Big (a+b{\int _{\mathbb {R}^{N}}}|(-\Delta )^{\frac{s}{2}}U|^2\mathrm{{d}}x\Big )(-\Delta )^s\varphi +\varphi -pU^{p-1}\varphi +2b\\&\qquad \quad \Big ({\int _{\mathbb {R}^{N}}}(-\Delta )^{\frac{s}{2}}U(-\Delta )^{\frac{s}{2}}\varphi \mathrm{{d}}x\Big )(-\Delta )^sU\\&=T_+(\varphi )+L_2(\varphi )(-\Delta )^sU, \end{aligned} \end{aligned}$$

acting on \(L^2(\mathbb {R}^N)\) with domain D(L), where

$$\begin{aligned} T_+(\varphi )=c(-\Delta )^s\varphi +\varphi -pU^{p-1}\varphi , \end{aligned}$$

with \(c=a+b\int _{\mathbb {R}^{N}}|(-\Delta )^{\frac{s}{2}}U|^2\mathrm{{d}}x\) and

$$\begin{aligned} L_2(\varphi )=2b\Big ({\int _{\mathbb {R}^{N}}}(-\Delta )^{\frac{s}{2}}U(-\Delta )^{\frac{s}{2}}\varphi \mathrm{{d}}x\Big ). \end{aligned}$$

We also denote by \({\text {Ker}}(L)\) the kernel space of a linear operator L, that is

$$\begin{aligned} {\text {Ker}}(L)=\{\varphi \in D(L): L(\varphi )=0\}. \end{aligned}$$

Definition 3.1

Let \(U \in H^{s}(\mathbb {R}^{N})\) be a solution to \(L(u)=0\). We say that U is non-degenerate if \({\text {Ker}}\left( \mathcal {L}_+\right) ={\text {span}}\left\{ \frac{\partial U}{\partial x_{1}},\frac{\partial U}{\partial x_{2}},\cdots ,\frac{\partial U}{\partial x_{N}}\right\} \).

In the sequel, we always use U(x) to denote a positive solution to the equation \(L(u)=0\) in \(H^{s}(\mathbb {R}^{N})\). We divide the proof of Theorem 1.2 into the following series of lemmas.

Lemma 3.1

\({\text {Ker}}(T_+)={\text {span}}\left\{ \frac{\partial U}{\partial x_{1}},\frac{\partial U}{\partial x_{2}},\cdots ,\frac{\partial U}{\partial x_{N}}\right\} \).

Proof

Since U(x) is a positive solution to the equation \(L(u)=0\), U(x) satisfies

$$\begin{aligned} \left( a+b \int _{\mathbb {R}^{N}}|(-\Delta )^{\frac{s}{2}} U|^{2} \mathrm{{d}}x\right) (-\Delta )^s U+U-U^{p}=0,\quad \text {in}\ \mathbb {R}^{N}. \end{aligned}$$
(3.1)

For any fixed \(i \in \{1,2, \ldots , N\}\), taking the partial derivative with respect to \(x_{i}\) on both sides of Eq. (3.1), we obtain

$$\begin{aligned} \left( a+b \int _{\mathbb {R}^{N}}|(-\Delta )^{\frac{s}{2}} U|^{2} \mathrm{{d}}x\right) (-\Delta )^s \frac{\partial U}{\partial x_{i}}+\frac{\partial U}{\partial x_{i}}-p U^{p-1} \frac{\partial U}{\partial x_{i}}=0, \quad \text {in}\ \mathbb {R}^{N}. \end{aligned}$$

This implies that \(T_+\left( \frac{\partial U}{\partial x_{i}}\right) =0\) for any fixed \(i \in \{1,2, \ldots , N\} .\) Therefore,

$$\begin{aligned} {\text {span}}\left\{ \frac{\partial U}{\partial x_{1}},\frac{\partial U}{\partial x_{2}},\cdots ,\frac{\partial U}{\partial x_{N}}\right\} \subseteq {\text {Ker}}(T_+). \end{aligned}$$

On the other hand, for any \(\varphi \in {\text {Ker}}(T_+)\), from the definition of \({\text {Ker}}(T_+)\), we have

$$\begin{aligned} c (-\Delta )^s \varphi +\varphi -p U^{p-1} \varphi =0. \end{aligned}$$
(3.2)

Let \(x=c^{\frac{1}{2s}} y, \hat{\varphi }(y)=\varphi (c^{\frac{1}{2s}} y)=\varphi (x)\) and \(Q(y)=U(c^{\frac{1}{2s}} y)=U(x) .\) Then Eq. (3.2) becomes

$$\begin{aligned} (-\Delta )^s\hat{\varphi }(y)+\hat{\varphi }(y)-p Q^{p-1}(y) \hat{\varphi }(y)=0. \end{aligned}$$
(3.3)

Noting that Q(y) is a solution to Eq. (1.3), Eq. (3.3) implies that \(\hat{\varphi }(y) \in {\text {Ker}}\left( T_+\right) \). Therefore, it follows from Proposition 1.2 that there are real numbers \(a_{i}\ (i \in \{1,2, \ldots , N\})\) such that

$$\begin{aligned} \hat{\varphi }(y)=\sum _{i=1}^{N} a_{i} \frac{\partial Q}{\partial y_{i}}. \end{aligned}$$

Since \(\frac{\partial Q}{\partial y_{i}}=c^{\frac{1}{2s}} \frac{\partial U}{\partial x_{i}}\), we have

$$\begin{aligned} \varphi (x)=\hat{\varphi }(y)=\sum _{i=1}^{N} a_{i} \frac{\partial Q}{\partial y_{i}}=\sum _{i=1}^{N} a_{i} c^{\frac{1}{2s}} \frac{\partial U}{\partial x_{i}}. \end{aligned}$$

This implies that \(\varphi \in {\text {span}}\left\{ \frac{\partial U}{\partial x_{1}},\frac{\partial U}{\partial x_{2}},\cdots ,\frac{\partial U}{\partial x_{N}}\right\} .\) From the arbitrariness of \(\varphi \), we have \({\text {Ker}}(T_+) \subseteq {\text {span}}\left\{ \frac{\partial U}{\partial x_{1}},\frac{\partial U}{\partial x_{2}},\cdots ,\frac{\partial U}{\partial x_{N}}\right\} \). Thus, \({\text {Ker}}(T_+)={\text {span}}\left\{ \frac{\partial U}{\partial x_{1}},\frac{\partial U}{\partial x_{2}},\cdots ,\frac{\partial U}{\partial x_{N}}\right\} \).

\(\square \)

Since \(\frac{\partial U}{\partial x_{i}}\) is not radially symmetric, we have the following corollary:

Corollary 3.1

\(T_+\) is invertible on \(L_{\text{ rad } }^{2}(\mathbb {R}^{N})\).

Lemma 3.2

Let U(x) be a positive solution to the equation \(L(u)=0\) in \(H^{s}(\mathbb {R}^{N}) .\) Then \(L_2\left( \frac{\partial U}{\partial x_{i}}\right) =0\) for \(i \in \{1,2, \ldots , N\}\).

Proof

By the definition of \(L_2\), and since U solves the equation

$$\begin{aligned} c(-\Delta )^s U+U=U^p. \end{aligned}$$

we have

$$\begin{aligned} L_2\left( \frac{\partial U}{\partial x_{i}}\right) =2 b \int _{\mathbb {R}^{N}} \frac{\partial U}{\partial x_{i}} (-\Delta )^sU \mathrm{{d}}x. \end{aligned}$$

Therefore,

$$\begin{aligned} L_2\left( \frac{\partial U}{\partial x_{i}}\right) =\frac{2 b}{c} \int _{\mathbb {R}^{N}}\left( U^{p}-U\right) \frac{\partial U}{\partial x_{i}} \mathrm{{d}}x=\frac{2 b}{c} \int _{\mathbb {R}^{N}} \frac{\partial \left( \frac{1}{p+1} U^{p+1}-\frac{1}{2} U^2\right) }{\partial x_{i}} \mathrm{{d}}x. \end{aligned}$$

Since, for any fixed i, up to a translation, the function \(\frac{\partial \left( \frac{1}{p+1} U^{p+1}-\frac{1}{2} U^2\right) }{\partial x_{i}}\) is odd in the variable \(x_{i}\), it is easy to see that

$$\begin{aligned} \int _{\mathbb {R}^{N}} \frac{\partial \left( \frac{1}{p+1} U^{p+1}-\frac{1}{2} U^2\right) }{\partial x_{i}} \mathrm{{d}}x=0. \end{aligned}$$

Therefore, \(L_2\left( \frac{\partial U}{\partial x_{i}}\right) =0\). \(\square \)

Lemma 3.3

Let U(x) be a positive solution to the equation \(L(u)=0\) in \(H^{s}(\mathbb {R}^{N}) .\) If \(N>4s\) and

$$\begin{aligned} \frac{(N-2s) b \int _{\mathbb {R}^{N}}|(-\Delta )^{\frac{s}{2}} U|^{2} \mathrm{{d}}x}{2sc}=1, \end{aligned}$$

then

$$\begin{aligned} b \int _{\mathbb {R}^{N}}|(-\Delta )^{\frac{s}{2}} Q|^{2} \mathrm{{d}}x= \frac{2sa^{\frac{4s-N}{2s}}(N-4s)^{\frac{N-4s}{2s}}}{(N-2s)^{\frac{N-2s}{2s}}}, \end{aligned}$$

where \(Q \in H^{s}(\mathbb {R}^{N})\) is the unique positive solution to Eq. (1.3).

Proof

Noting that \(c=a+b \int _{\mathbb {R}^{N}}|(-\Delta )^{\frac{s}{2}} U|^{2} \mathrm{{d}}x\), the assumption

$$\begin{aligned} \frac{(N-2s) b \int _{\mathbb {R}^{N}}|(-\Delta )^{\frac{s}{2}} U|^{2} \mathrm{{d}}x}{2sc}=1, \end{aligned}$$

implies

$$\begin{aligned} b \int _{\mathbb {R}^{N}}|(-\Delta )^{\frac{s}{2}} U|^{2} \mathrm{{d}}x=\frac{2sa}{N-4s} \text{ and } c=\frac{(N-2s)a}{N-4s}. \end{aligned}$$

Since \(U(x) \in H^{s}(\mathbb {R}^{N})\) is a positive solution to the equation \(L(u)=0\), we know that U(x) has the following form

$$\begin{aligned} U(x)=Q\left( c^{-\frac{1}{2s}}x\right) , \end{aligned}$$

with \(Q(x) \in H^{s}(\mathbb {R}^{N})\) being the unique positive solution to the Eq. (1.3). Therefore,

$$\begin{aligned} \int _{\mathbb {R}^{N}}|(-\Delta )^{\frac{s}{2}} U|^{2} \mathrm{{d}}x= & {} c^{\frac{N-2s}{2s}} \int _{\mathbb {R}^{N}}|(-\Delta )^{\frac{s}{2}} Q|^{2} \mathrm{{d}}x\\= & {} \left( \frac{(N-2s)a}{N-4s}\right) ^{\frac{N-2s}{2s}} \int _{\mathbb {R}^{N}}|(-\Delta )^{\frac{s}{2}} Q|^{2} \mathrm{{d}}x. \end{aligned}$$

Therefore, we have

$$\begin{aligned} b \int _{\mathbb {R}^{N}}|(-\Delta )^{\frac{s}{2}} Q|^{2} \mathrm{{d}}x= \frac{2sa^{\frac{4s-N}{2s}}(N-4s)^{\frac{N-4s}{2s}}}{(N-2s)^{\frac{N-2s}{2s}}}. \end{aligned}$$

This completes the proof. \(\square \)
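The algebra in the proof of Lemma 3.3 can be verified mechanically. The following check uses the sample values s = 1/2, N = 5, a = 2 (any \(N>4s\) and \(a>0\) would do) and confirms that the displayed formulas for \(b\Vert (-\Delta )^{\frac{s}{2}}U\Vert ^2_2\) and c satisfy the hypothesis of the lemma and reproduce the threshold constant from Theorem 1.1.

```python
# Arithmetic check of Lemma 3.3 for sample values (illustrative only).
s, N, a = 0.5, 5, 2.0

bU = 2 * s * a / (N - 4 * s)          # b * ||(-Δ)^{s/2}U||_2^2
c = (N - 2 * s) * a / (N - 4 * s)     # c = a + b * ||(-Δ)^{s/2}U||_2^2
assert abs(c - (a + bU)) < 1e-12                        # the two formulas are consistent
assert abs((N - 2 * s) * bU / (2 * s * c) - 1) < 1e-12  # the hypothesis of the lemma

# Changing variables U(x) = Q(c^{-1/(2s)}x) gives
# ||(-Δ)^{s/2}U||² = c^{(N-2s)/(2s)} ||(-Δ)^{s/2}Q||², hence:
bQ = bU * c ** (-(N - 2 * s) / (2 * s))
threshold = (2 * s * a ** ((4 * s - N) / (2 * s))
             * (N - 4 * s) ** ((N - 4 * s) / (2 * s))
             / (N - 2 * s) ** ((N - 2 * s) / (2 * s)))
assert abs(bQ - threshold) < 1e-12    # matches the constant in (2.9)
```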

Lemma 3.4

Let U(x) be a positive solution to the equation \(L(u)=0\) in \(H^{s}(\mathbb {R}^{N}) .\) Suppose that

$$\begin{aligned} 1< N \le 4s, \end{aligned}$$

or

$$\begin{aligned} N>4s\quad \text {and}\quad b \int _{\mathbb {R}^{N}}|(-\Delta )^{\frac{s}{2}} Q|^{2} \mathrm{{d}}x \ne \frac{2sa^{\frac{4s-N}{2s}}(N-4s)^{\frac{N-4s}{2s}}}{(N-2s)^{\frac{N-2s}{2s}}} . \end{aligned}$$

Then

$$\begin{aligned} {\text {Ker}}\left( \mathcal {L}_+\right) \bigcap L_{r a d}^{2}(\mathbb {R}^{N})=\{0\}. \end{aligned}$$

Proof

Assume that \(v\in H^s(\mathbb {R}^N)\cap L^2_{rad}(\mathbb {R}^N)\) belongs to \(\ker \mathcal {L}_+\). Then we have

$$\begin{aligned}&\Big (a+b{\int _{\mathbb {R}^{N}}}|(-\Delta )^{\frac{s}{2}}U|^2\mathrm{{d}}x\Big )(-\Delta )^sv+v-pU^{p-1}v\nonumber \\&\quad =-2b\Big ({\int _{\mathbb {R}^{N}}}(-\Delta )^{\frac{s}{2}}U(-\Delta )^{\frac{s}{2}}v \mathrm{{d}}x\Big )(-\Delta )^sU. \end{aligned}$$
(3.4)

Let \(c=a+b\Vert (-\Delta )^{\frac{s}{2}}U\Vert ^2_2\). Recall that U is a ground state solution of (1.1). It follows that c is a constant independent of U under the assumptions of Theorem 1.1; hence U solves (1.3) with this c. We can then rewrite (3.4) as

$$\begin{aligned} T_+v=-2b\Big ({\int _{\mathbb {R}^{N}}}(-\Delta )^{\frac{s}{2}}U(-\Delta )^{\frac{s}{2}}v \mathrm{{d}}x\Big )(-\Delta )^sU=-\frac{2b\sigma _v}{c}(-U+U^{p}), \end{aligned}$$
(3.5)

where

$$\begin{aligned} \sigma _v={\int _{\mathbb {R}^{N}}}(-\Delta )^{\frac{s}{2}}U(-\Delta )^{\frac{s}{2}}v \mathrm{{d}}x. \end{aligned}$$

By applying Proposition 1.2, we conclude that

$$\begin{aligned} v=-\frac{2b\sigma _v}{c}T_+^{-1}(-U+U^{p})=-\frac{b\sigma _v}{sc}\psi , \end{aligned}$$
(3.6)

where \(\psi =x\cdot \nabla U\). Multiplying (3.6) by \((-\Delta )^sU\) and integrating over \(\mathbb {R}^N\), we see that

$$\begin{aligned} \int _{\mathbb {R}^{N}}v(-\Delta )^{s}U\mathrm{{d}}x=-\frac{b\sigma _v}{sc}\int _{\mathbb {R}^{N}}\psi (-\Delta )^{s}U\mathrm{{d}}x. \end{aligned}$$
(3.7)

Note that

$$\begin{aligned} \int _{\mathbb {R}^{N}}v(-\Delta )^{s}U\mathrm{{d}}x=\int _{\mathbb {R}^{N}}(-\Delta )^{\frac{s}{2}}U(-\Delta )^{\frac{s}{2}}v \mathrm{{d}}x, \end{aligned}$$
(3.8)

and

$$\begin{aligned} \int _{\mathbb {R}^{N}}\psi (-\Delta )^{s}U\mathrm{{d}}x=\frac{2s-N}{2}\int _{\mathbb {R}^{N}}|(-\Delta )^{\frac{s}{2}}U|^2 \mathrm{{d}}x, \end{aligned}$$
(3.9)

(see e.g. [39]). We then conclude from (3.7)-(3.9) that

$$\begin{aligned} \sigma _v=-\frac{b(2s-N)\sigma _v}{2sc}\int _{\mathbb {R}^{N}}|(-\Delta )^{\frac{s}{2}}U|^2 \mathrm{{d}}x=-\frac{(c-a)(2s-N)}{2sc}\sigma _v. \end{aligned}$$

It follows from Lemma 3.3 that

$$\begin{aligned} 1+\frac{(2s-N) b \int _{\mathbb {R}^{N}}|(-\Delta )^{\frac{s}{2}} U|^{2} \mathrm{{d}}x}{2sc} \ne 0, \end{aligned}$$

provided that \(1< N \le 4s\), or \(N>4s\) and \(b \int _{\mathbb {R}^{N}}|(-\Delta )^{\frac{s}{2}} Q|^{2} \mathrm{{d}}x \ne \frac{2sa^{\frac{4s-N}{2s}}(N-4s)^{\frac{N-4s}{2s}}}{(N-2s)^{\frac{N-2s}{2s}}}\). Therefore, under this assumption, the factor multiplying \(\sigma _v\) is nonzero, so \(\sigma _v=0\), and (3.6) then yields \(v \equiv 0\). This completes the proof. \(\square \)
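The dichotomy in Lemma 3.4 can be read off from the scalar factor \(1+\frac{(2s-N)\, b\Vert U\Vert ^2}{2sc}\) multiplying \(\sigma _v\). A small exact-arithmetic sketch (with illustrative rational parameter values, not taken from the paper) confirms that this factor vanishes exactly at the threshold \(b\Vert U\Vert ^2=\frac{2sa}{N-4s}\) of Lemma 3.3 when \(N>4s\), and stays positive whenever \(N\le 4s\):

```python
from fractions import Fraction as F

def sigma_factor(N, s, a, bU2):
    """The factor multiplying sigma_v in the proof of Lemma 3.4:
    1 + (2s-N)*b*||U||^2 / (2sc), where c = a + b*||U||^2."""
    c = a + bU2
    return 1 + (2 * s - N) * bU2 / (2 * s * c)

# When N > 4s the factor vanishes exactly at b*||U||^2 = 2sa/(N-4s):
N, s, a = F(6), F(1), F(3)
threshold = 2 * s * a / (N - 4 * s)
assert sigma_factor(N, s, a, threshold) == 0

# When N <= 4s the factor is positive for every b*||U||^2 > 0, since then
# |2s-N|*bU2/(2sc) <= bU2/c < 1  (as c = a + bU2 > bU2):
N, s = F(3), F(1)
for bU2 in (F(1, 10), F(1), F(100)):
    assert sigma_factor(N, s, a, bU2) > 0
```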

Proof of Theorem 1.2

Let \(U(x) \in H^{s}(\mathbb {R}^{N})\) be a positive solution to the equation \(L(u)=0 .\) For any \(i \in \{1,2, \ldots , N\}\), by Lemmas 3.1 and  3.2, we have

$$\begin{aligned} \mathcal {L}_+\left( \frac{\partial U}{\partial x_{i}}\right) =T_+\left( \frac{\partial U}{\partial x_{i}}\right) +L_2\left( \frac{\partial U}{\partial x_{i}}\right) (-\Delta )^s U=0. \end{aligned}$$

This implies that

$$\begin{aligned} {\text {span}}\left\{ \frac{\partial U}{\partial x_{1}},\frac{\partial U}{\partial x_{2}},\cdots ,\frac{\partial U}{\partial x_{N}}\right\} \subseteq {\text {Ker}}\left( \mathcal {L}_+\right) . \end{aligned}$$

On the other hand, for any \(\varphi (x) \in {\text {Ker}}\left( \mathcal {L}_+\right) \), we have

$$\begin{aligned} T_+(\varphi )=-L_2(\varphi )(-\Delta )^s U. \end{aligned}$$
(3.10)

To prove \(\varphi \in {\text {span}}\left\{ \frac{\partial U}{\partial x_{1}},\frac{\partial U}{\partial x_{2}},\cdots ,\frac{\partial U}{\partial x_{N}}\right\} \), note that, by Corollary 3.1, there exists a unique radial function \(\psi _{1}(r) \in L_{\text{ rad } }^{2}(\mathbb {R}^{N})\) such that

$$\begin{aligned} T_+\left( \psi _{1}\right) =-L_2(\varphi )(-\Delta )^s U. \end{aligned}$$
(3.11)

Set \(W=\varphi (x)-\psi _{1}(r) .\) Then, from (3.10) and (3.11), we have \(T_+(W)=0 .\) Therefore, it follows from Lemma 3.1 that there are some real numbers \(a_{i}\) such that

$$\begin{aligned} W=\sum _{i=1}^{N} a_{i} \frac{\partial U}{\partial x_{i}}. \end{aligned}$$

This implies that

$$\begin{aligned} \varphi (x)=\psi _{1}(r)+\sum _{i=1}^{N} a_{i} \frac{\partial U}{\partial x_{i}} \end{aligned}$$
(3.12)

for some real numbers \(a_{i} .\) Noting that \(\varphi (x)\) and \(\frac{\partial U}{\partial x_{i}}\) are in \({\text {Ker}}\left( \mathcal {L}_+\right) \), we can conclude from (3.12) that \(\mathcal {L}_+\left( \psi _{1}(r)\right) =0 .\) That is \(\psi _{1}(r) \in {\text {Ker}}\left( \mathcal {L}_+\right) .\) Hence, it follows from Lemma 3.4 that \(\psi _{1}(r) \equiv 0 .\) Now, from (3.12), we have

$$\begin{aligned} \varphi (x)=\sum _{i=1}^{N} a_{i} \frac{\partial U}{\partial x_{i}}, \end{aligned}$$

for some real numbers \(a_{i} .\) This implies that \(\varphi \in {\text {span}}\left\{ \frac{\partial U}{\partial x_{1}},\frac{\partial U}{\partial x_{2}},\cdots ,\frac{\partial U}{\partial x_{N}}\right\} .\) From the arbitrariness of \(\varphi \), we see that \({\text {Ker}}\left( \mathcal {L}_+\right) \subseteq {\text {span}}\left\{ \frac{\partial U}{\partial x_{1}},\frac{\partial U}{\partial x_{2}},\cdots ,\frac{\partial U}{\partial x_{N}}\right\} \).

In conclusion, we have \({\text {Ker}}\left( \mathcal {L}_+\right) ={\text {span}}\left\{ \frac{\partial U}{\partial x_{1}},\frac{\partial U}{\partial x_{2}},\cdots ,\frac{\partial U}{\partial x_{N}}\right\} .\) That is, U(x) is non-degenerate. This completes the proof of Theorem 1.2.\(\square \)

4 The Lyapunov–Schmidt Reduction

As mentioned in the Introduction, the non-degeneracy of positive solutions to the limit problem in the entire space can be used to construct concentrating solutions of singularly perturbed problems. Here, we take the following problem as an example:

$$\begin{aligned} \left\{ \begin{array}{l} \Big (\varepsilon ^{2s}a+\varepsilon ^{4s-N} b{\int _{\mathbb {R}^{N}}}|(-\Delta )^{\frac{s}{2}}u|^2\mathrm{{d}}x\Big )(-\Delta )^su+V(x)u=u^p,\quad \text {in}\ \mathbb {R}^{N},\\ 0<u(x) \in H^{s}(\mathbb {R}^{N}), \end{array}\right. \end{aligned}$$
(4.1)

where \(V: \mathbb {R}^{N} \rightarrow \mathbb {R}\) satisfies the following conditions:

\((V_1)\):

V is a bounded continuous function with \(\inf \limits _{x \in \mathbb {R}^{N}} V>0\);

\((V_2)\):

There exist \(x_{0} \in \mathbb {R}^{N}\) and \(r_{0}>0\) such that

$$\begin{aligned} V\left( x_{0}\right)<V(x) \quad \text{ for } 0<\left| x-x_{0}\right| <r_{0}, \end{aligned}$$

and \(V \in C^{\alpha }\left( {\bar{B}}_{r_{0}}\left( x_{0}\right) \right) \) for some \(0<\alpha <\frac{N+4 s}{2}\). That is, V is Hölder continuous of order \(\alpha \) around \(x_{0}\).
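For orientation, a concrete potential satisfying \((V_1)\) and \((V_2)\) (with \(x_0=0\), \(r_0=1\) and \(\alpha =\frac{1}{2}\); this choice is purely illustrative and not taken from the paper) is \(V(x)=1+\min \{|x|^{1/2},1\}\). The sketch below checks the Hölder bound \(|V(x)-V(y)|\le |x-y|^{1/2}\) on random samples; it holds with constant 1 by the subadditivity of \(t\mapsto t^{1/2}\):

```python
import itertools
import math
import random

ALPHA = 0.5  # Hölder exponent of the sample potential

def V(x):
    """Sample potential: bounded, inf V = 1 > 0, strict minimum at x0 = 0,
    Hölder continuous of order ALPHA."""
    r = math.sqrt(sum(t * t for t in x))
    return 1.0 + min(r ** ALPHA, 1.0)

def holder_quotient_ok(points, tol=1e-12):
    # |V(x) - V(y)| <= |x - y|^ALPHA for all sampled pairs (constant 1)
    for x, y in itertools.combinations(points, 2):
        d = math.sqrt(sum((p - q) ** 2 for p, q in zip(x, y)))
        if abs(V(x) - V(y)) > d ** ALPHA + tol:
            return False
    return True

random.seed(0)
samples = [[random.uniform(-2.0, 2.0) for _ in range(3)] for _ in range(100)]
```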

It is known that every solution to Eq. (4.1) is a critical point of the energy functional \(I_{\varepsilon }: H_{\varepsilon } \rightarrow \mathbb {R}\), given by

$$\begin{aligned} I_{\varepsilon }(u)=\frac{1}{2}\Vert u\Vert _{\varepsilon }^{2}+\frac{b\varepsilon ^{4s-N}}{4}\left( \int _{\mathbb {R}^{N}}|(-\Delta )^{\frac{s}{2}} u|^{2}\mathrm{{d}}x\right) ^{2}-\frac{1}{p+1} \int _{\mathbb {R}^{N}} u^{p+1}\mathrm{{d}}x, \end{aligned}$$

for \(u \in H_{\varepsilon }\). It is standard to verify that \(I_{\varepsilon } \in C^{2}\left( H_{\varepsilon }\right) .\) So we are left to find a critical point of \(I_{\varepsilon }\). Since the procedure of the Lyapunov–Schmidt reduction is the same as in [41], we only state the relevant lemmas and explain the strategy of the proof; readers interested in the full details are referred to [41].

4.1 Finite Dimensional Reduction

We restrict our argument to the existence of a critical point of \(I_{\varepsilon }\) that concentrates as \(\varepsilon \rightarrow 0\). For \(\delta ,\eta >0\), we define

$$\begin{aligned} M_{\varepsilon ,\eta }=\left\{ (y, \varphi ): y \in B_{\delta }(x_0), \varphi \in E_{\varepsilon ,y}\right\} , \end{aligned}$$

where \(E_{\varepsilon ,y}\) is defined by

$$\begin{aligned} E_{\varepsilon , y}:=\left\{ \varphi \in H_{\varepsilon }:\left\langle \frac{\partial U_{\varepsilon , y}}{\partial y^{i}}, \varphi \right\rangle _{\varepsilon }=0, i=1, \ldots , N\right\} . \end{aligned}$$

We are looking for a critical point of the form

$$\begin{aligned} u_\varepsilon =U_{\varepsilon ,y}+\varphi _{\varepsilon }. \end{aligned}$$

For this we introduce a new functional \(J_{\varepsilon }:M_{\varepsilon ,\eta } \rightarrow \mathbb {R}\) defined by

$$\begin{aligned} J_{\varepsilon }(y, \varphi )=I_{\varepsilon }\left( U_{\varepsilon , y}+\varphi \right) , \quad \varphi \in E_{\varepsilon , y}. \end{aligned}$$

In fact, we divide the proof of Theorems 1.3 and 1.4 into two steps:

Step 1:

for each \(\varepsilon , \delta \) sufficiently small and for each \(y \in B_{\delta }(x_0)\), we will find a critical point \(\varphi _{\varepsilon , y}\) of \(J_{\varepsilon }(y, \cdot )\) (the map \(y \mapsto \varphi _{\varepsilon , y}\) is of class \(C^{1}\));

Step 2:

for each \(\varepsilon , \delta \) sufficiently small, we will find a critical point \(y_{\varepsilon }\) for the function \(j_{\varepsilon }\) : \(B_{\delta }(x_0) \rightarrow \mathbb {R}\) induced by

$$\begin{aligned} y \mapsto j_{\varepsilon }(y) \equiv J_{\varepsilon }\left( y, \varphi _{\varepsilon , y}\right) . \end{aligned}$$
(4.2)

That is, we will find a critical point \(y_{\varepsilon }\) in the interior of \(B_{\delta }(x_0)\).

It is standard to verify, by the chain rule, that \(\left( y_{\varepsilon }, \varphi _{\varepsilon , y_{\varepsilon }}\right) \) is a critical point of \(J_{\varepsilon }\) for \(\varepsilon \) sufficiently small. This gives a solution \(u_{\varepsilon }= U_{\varepsilon , y_{\varepsilon }}+\varphi _{\varepsilon , y_{\varepsilon }}\) to Eq. (4.1) for \(\varepsilon \) sufficiently small by virtue of the following lemma.
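To illustrate the two-step scheme in the simplest possible setting, the toy computation below runs the same reduction on an explicit finite-dimensional functional \(J(y,\varphi )\). All formulas here are invented for illustration and have no connection to \(J_{\varepsilon }\) beyond the structure of the argument: Step 1 solves \(\partial _{\varphi }J=0\) for each fixed y, and Step 2 minimizes the reduced functional \(j(y)=J(y,\varphi _y)\).

```python
import math

def J(y, phi):
    # Toy energy: a double-well in y plus a non-degenerate quadratic in phi,
    # coupled through the (invented) term 0.1*sin(y).
    return (y ** 2 - 1) ** 2 + 0.5 * (phi - 0.1 * math.sin(y)) ** 2

def step1(y, iters=20):
    """Step 1: for fixed y, find the critical point of phi -> J(y, phi)
    by Newton's method (here d2J/dphi2 = 1, so one step lands exactly)."""
    phi = 0.0
    for _ in range(iters):
        grad = phi - 0.1 * math.sin(y)   # dJ/dphi
        phi -= grad                      # Newton step with Hessian 1
    return phi

def reduced(y):
    """Step 2 ingredient: the reduced functional j(y) = J(y, phi_y)."""
    return J(y, step1(y))

# Step 2: minimize j over a grid; the minimizer sits at the well y = 1.
grid = [i / 1000 for i in range(500, 1501)]
y_star = min(grid, key=reduced)
```

Here the reduced functional collapses to \((y^2-1)^2\), so the grid minimizer is \(y_\star =1\); in the paper the analogous minimization is carried out over \(B_{\delta }(x_0)\) in Sect. 4.2.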

Lemma 4.1

There exist \(\varepsilon _{0}, \eta _{0}>0\) such that for \(\varepsilon \in \left( 0, \varepsilon _{0}\right] , \eta \in \left( 0, \eta _{0}\right] \), and \((y, \varphi ) \in M_{\varepsilon , \eta }\) the following are equivalent:

(i):

\(u_{\varepsilon }= U_{\varepsilon , y}+\varphi \) is a critical point of \(I_{\varepsilon }\) in \(H_{\varepsilon }\).

(ii):

\((y, \varphi )\) is a critical point of \(J_{\varepsilon }\).

Now, in order to realize Step 1, we expand \(J_{\varepsilon }(y, \cdot )\) near \(\varphi =0\) for each fixed y as follows:

$$\begin{aligned} J_{\varepsilon }(y, \varphi )=J_{\varepsilon }(y, 0)+l_{\varepsilon }(\varphi )+\frac{1}{2}\left\langle \mathcal {L}_{\varepsilon } \varphi , \varphi \right\rangle +R_{\varepsilon }(\varphi ), \end{aligned}$$

where \(J_{\varepsilon }(y, 0)=I_{\varepsilon }\left( U_{\varepsilon , y}\right) \), and \(l_{\varepsilon }, \mathcal {L}_{\varepsilon }\) and \(R_{\varepsilon }\) are defined for \(\varphi , \psi \in H_{\varepsilon }\) as follows:

$$\begin{aligned} \begin{aligned} l_{\varepsilon }(\varphi )&=\left\langle I_{\varepsilon }^{\prime }\left( U_{\varepsilon , y}\right) , \varphi \right\rangle \\&=\left\langle U_{\varepsilon , y}, \varphi \right\rangle _{\varepsilon }+b\varepsilon ^{4s-N}\left( \int _{\mathbb {R}^{N}}\left| (-\Delta )^{\frac{s}{2}}U_{\varepsilon , y}\right| ^{2}\mathrm{{d}}x\right) \int _{\mathbb {R}^{N}} (-\Delta )^{\frac{s}{2}} U_{\varepsilon , y}\\&\qquad \cdot (-\Delta )^{\frac{s}{2}} \varphi \mathrm{{d}}x-\int _{\mathbb {R}^{N}} U_{\varepsilon , y}^{p} \varphi \mathrm{{d}}x, \end{aligned} \end{aligned}$$
(4.3)

and \(\mathcal {L}_{\varepsilon }: L^{2}\left( \mathbb {R}^{N}\right) \rightarrow L^{2}\left( \mathbb {R}^{N}\right) \) is the linearized operator at \(U_{\varepsilon , y}\), whose associated bilinear form is given by

$$\begin{aligned} \begin{aligned} \left\langle \mathcal {L}_{\varepsilon } \varphi , \psi \right\rangle&=\left\langle I_{\varepsilon }^{\prime \prime }\left( U_{\varepsilon , y}\right) [\varphi ], \psi \right\rangle \\&=\langle \varphi , \psi \rangle _{\varepsilon }+ b\varepsilon ^{4s-N}\left( \int _{\mathbb {R}^{N}}\left| (-\Delta )^{\frac{s}{2}} U_{\varepsilon , y}\right| ^{2}\mathrm{{d}}x\right) \int _{\mathbb {R}^{N}} (-\Delta )^{\frac{s}{2}} \varphi \cdot (-\Delta )^{\frac{s}{2}} \psi \mathrm{{d}}x\\&\quad +2\varepsilon ^{4s-N}b\left( \int _{\mathbb {R}^{N}} (-\Delta )^{\frac{s}{2}} U_{\varepsilon , y} \cdot (-\Delta )^{\frac{s}{2}} \varphi \mathrm{{d}}x\right) \\&\qquad \left( \int _{\mathbb {R}^{N}} (-\Delta )^{\frac{s}{2}} U_{\varepsilon , y} \cdot (-\Delta )^{\frac{s}{2}} \psi \mathrm{{d}}x\right) -p \int _{\mathbb {R}^{N}} U_{\varepsilon , y}^{p-1} \varphi \psi \mathrm{{d}}x, \end{aligned} \end{aligned}$$

and \(R_{\varepsilon }\) denotes the second order remainder term given by

$$\begin{aligned} R_{\varepsilon }(\varphi )=J_{\varepsilon }(y, \varphi )-J_{\varepsilon }(y, 0)-l_{\varepsilon }(\varphi )-\frac{1}{2}\left\langle \mathcal {L}_{\varepsilon } \varphi , \varphi \right\rangle . \end{aligned}$$
(4.4)

We remark that \(R_{\varepsilon }\) belongs to \(C^{2}\left( H_{\varepsilon }\right) \), since every term on the right-hand side of (4.4) does.

Lemma 4.2

Assume that V satisfies \((V_1)\) and \((V_2)\). Then, there exists a constant \(C>0\), independent of \(\varepsilon \), such that for any \(y \in B_{1}(0)\), there holds

$$\begin{aligned} \left| l_{\varepsilon }(\varphi )\right| \le C \varepsilon ^{\frac{N}{2}}\left( \varepsilon ^{\alpha }+|V(y)-V(x_0)|\right) \Vert \varphi \Vert _{\varepsilon }, \end{aligned}$$

for \(\varphi \in H_{\varepsilon }\). Here \(\alpha \) denotes the order of the Hölder continuity of V in \(B_{r_0}(x_0)\).

Lemma 4.3

There exists a constant \(C>0\), independent of \(\varepsilon \) and b, such that for \(i \in \{0,1,2\}\), there hold

$$\begin{aligned} \left\| R_{\varepsilon }^{(i)}(\varphi )\right\| \le C \varepsilon ^{-\frac{N(p-1)}{2}}\Vert \varphi \Vert _{\varepsilon }^{p+1-i}+C(b+1) \varepsilon ^{-\frac{N}{2}}\left( 1+\varepsilon ^{-\frac{N}{2}}\Vert \varphi \Vert _{\varepsilon }\right) \Vert \varphi \Vert _{\varepsilon }^{N-i}, \end{aligned}$$

for all \(\varphi \in H_{\varepsilon }\).

Lemma 4.4

Assume that V satisfies \((V_1)\) and \((V_2)\). Then, for \(\varepsilon >0\) sufficiently small, there holds

$$\begin{aligned} I_{\varepsilon }\left( U_{\varepsilon , y}\right) = A \varepsilon ^{N}+B \varepsilon ^{N}\left( V\left( y\right) -V\left( x_0\right) \right) +O\left( \varepsilon ^{N+\alpha }\right) , \end{aligned}$$

where

$$\begin{aligned} A&=\frac{1}{2} \int _{\mathbb {R}^{N}}\left( a|(-\Delta )^{\frac{s}{2}} U|^{2}+U^{2}\right) \mathrm{{d}}x+\frac{b}{4}\left( \int _{\mathbb {R}^{N}}|(-\Delta )^{\frac{s}{2}} U|^{2}\mathrm{{d}}x\right) ^{2}\\&\qquad -\frac{1}{p+1} \int _{\mathbb {R}^{N}} U^{p+1}\mathrm{{d}}x, \end{aligned}$$

and

$$\begin{aligned} B=\frac{1}{2} \int _{\mathbb {R}^{N}} U^{2}\mathrm{{d}}x. \end{aligned}$$

We now complete Step 1 of the Lyapunov–Schmidt reduction. We first consider the operator \(\mathcal {L}_{\varepsilon }\) defined above. The following result shows that \(\mathcal {L}_{\varepsilon }\) is invertible when restricted to \(E_{\varepsilon , y}\).

Lemma 4.5

There exist \(\varepsilon _{1}>0, \delta _{1}>0\) and \(\rho >0\) sufficiently small, such that for every \(\varepsilon \in \left( 0, \varepsilon _{1}\right) , \delta \in \left( 0, \delta _{1}\right) \), there holds

$$\begin{aligned} \left\| \mathcal {L}_{\varepsilon } \varphi \right\| _{\varepsilon } \ge \rho \Vert \varphi \Vert _{\varepsilon }, \quad \forall \varphi \in E_{\varepsilon , y}, \end{aligned}$$

uniformly with respect to \(y \in B_{\delta }(x_0)\).

Lemma 4.5 implies that, restricted to \(E_{\varepsilon , y}\), the operator \(\mathcal {L}_{\varepsilon }: E_{\varepsilon , y} \rightarrow E_{\varepsilon , y}\) has a bounded inverse, with \(\left\| \mathcal {L}_{\varepsilon }^{-1}\right\| \le \rho ^{-1}\) uniformly with respect to \(y \in B_{\delta }(x_0)\). This yields the following reduction map.

Lemma 4.6

There exist \(\varepsilon _{0}>0, \delta _{0}>0\) sufficiently small such that for all \(\varepsilon \in \left( 0, \varepsilon _{0}\right) , \delta \in \) \(\left( 0, \delta _{0}\right) \), there exists a \(C^{1}\) map \(\varphi _{\varepsilon }: B_{\delta }(x_0) \rightarrow H_{\varepsilon }\) with \(y \mapsto \varphi _{\varepsilon , y} \in E_{\varepsilon , y}\) satisfying

$$\begin{aligned} \left\langle \frac{\partial J_{\varepsilon }\left( y, \varphi _{\varepsilon , y}\right) }{\partial \varphi }, \psi \right\rangle _{\varepsilon }=0, \quad \forall \psi \in E_{\varepsilon , y}. \end{aligned}$$

Moreover, there exist constants \(C>0\), independent of \(\varepsilon \), and \(\kappa \in (0,\frac{\alpha }{2})\) such that

$$\begin{aligned} \Vert \varphi _{\varepsilon , y}\Vert _{\varepsilon } \le C \varepsilon ^{\frac{N}{2}+\alpha -\kappa }+C \varepsilon ^{\frac{N}{2}} \left( V\left( y\right) -V\left( x_{0}\right) \right) ^{1-\kappa }. \end{aligned}$$

4.2 Proof of Theorems 1.3 and 1.4

Let \(\varepsilon _{0}\) and \(\delta _{0}\) be defined as in Lemma 4.6 and let \(\varepsilon <\varepsilon _{0}\). Fix \(0< \delta <\delta _{0}\). Let \(y \mapsto \varphi _{\varepsilon , y}\) for \(y \in B_{\delta }(x_0)\) be the map obtained in Lemma 4.6. As explained in Step 2, by Lemma 4.1 it suffices to find a critical point of the function \(j_{\varepsilon }\) defined in (4.2). By the Taylor expansion, we have

$$\begin{aligned} j_{\varepsilon }(y)=J_{\varepsilon }\left( y, \varphi _{\varepsilon , y}\right) =I_{\varepsilon }\left( U_{\varepsilon , y}\right) +l_{\varepsilon }\left( \varphi _{\varepsilon , y}\right) +\frac{1}{2}\left\langle \mathcal {L}_{\varepsilon } \varphi _{\varepsilon , y}, \varphi _{\varepsilon , y}\right\rangle +R_{\varepsilon }\left( \varphi _{\varepsilon , y}\right) . \end{aligned}$$

We first analyze the asymptotic behavior of \(j_{\varepsilon }\) as \(\varepsilon \rightarrow 0\).

By Lemmas 4.2,  4.3,  4.4 and  4.6, we have

$$\begin{aligned} \begin{aligned} j_{\varepsilon }(y)&=I_{\varepsilon }\left( U_{\varepsilon , y}\right) +O\left( \left\| l_{\varepsilon }\right\| \left\| \varphi _{\varepsilon ,y}\right\| _{\varepsilon }+\left\| \varphi _{\varepsilon ,y}\right\| _{\varepsilon }^{2}\right) \\&=A \varepsilon ^{N}+B \varepsilon ^{N}\left( V\left( y\right) -V\left( x_{0}\right) \right) + O\left( \varepsilon ^{N}\right) \left( \varepsilon ^{\alpha -\kappa }\right. \\&\qquad \left. + \left( V\left( y\right) -V\left( x_{0}\right) \right) ^{1-\kappa }\right) ^2+O\left( \varepsilon ^{N+\alpha }\right) . \end{aligned} \end{aligned}$$
(4.5)
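For the reader's convenience, here is how the error term in (4.5) follows from Lemmas 4.2 and 4.6. Writing \(\Delta V:=V(y)-V(x_0)\), which is bounded on \(B_{\delta }(x_0)\), this is only a restatement of the estimates above, not an extra assumption:

```latex
\begin{aligned}
\|l_{\varepsilon}\|\,\|\varphi_{\varepsilon,y}\|_{\varepsilon}
  &\le C\varepsilon^{N}\bigl(\varepsilon^{\alpha}+\Delta V\bigr)
       \bigl(\varepsilon^{\alpha-\kappa}+(\Delta V)^{1-\kappa}\bigr)
   \le C\varepsilon^{N}\bigl(\varepsilon^{\alpha-\kappa}
       +(\Delta V)^{1-\kappa}\bigr)^{2},\\
\|\varphi_{\varepsilon,y}\|_{\varepsilon}^{2}
  &\le C\varepsilon^{N}\bigl(\varepsilon^{\alpha-\kappa}
       +(\Delta V)^{1-\kappa}\bigr)^{2},
\end{aligned}
```

since \(\varepsilon ^{\alpha }\le \varepsilon ^{\alpha -\kappa }\) for \(0<\varepsilon \le 1\) and \(\Delta V\le C(\Delta V)^{1-\kappa }\) by the boundedness of V.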

Now consider the minimization problem

$$\begin{aligned} j_{\varepsilon }\left( y_{\varepsilon }\right) \equiv \inf _{y \in \overline{B_{\delta }(x_0)}} j_{\varepsilon }(y). \end{aligned}$$

Since \(j_{\varepsilon }\) is continuous on the compact set \(\overline{B_{\delta }(x_0)}\), the infimum is achieved by some \(y_{\varepsilon } \in \overline{B_{\delta }(x_0)} .\) We will prove that \(y_{\varepsilon }\) is an interior point of \(B_{\delta }(x_0)\).

To prove the claim, we apply a comparison argument. In what follows we assume, without loss of generality, that \(x_{0}=0\). Let \(e \in \mathbb {R}^{N}\) with \(|e|=1\), and set \(z_{\epsilon }=\epsilon ^{\eta } e \in B_{\delta }(0)\), where \(\eta >1\) will be chosen sufficiently large later. By the asymptotic formula (4.5), we have

$$\begin{aligned} \begin{aligned} j_{\epsilon }\left( z_{\epsilon }\right) =\,&A \epsilon ^{N}+B \epsilon ^{N}\left( V\left( z_{\epsilon }\right) -V(0)\right) +O\left( \epsilon ^{N+\alpha }\right) \\&+O\left( \epsilon ^{N}\right) \left( \epsilon ^{\alpha -\kappa }+\left( V\left( z_{\epsilon }\right) -V(0)\right) ^{1-\kappa }\right) ^{2}. \end{aligned} \end{aligned}$$

Applying the Hölder continuity of V, we derive that

$$\begin{aligned} \begin{aligned} j_{\epsilon }\left( z_{\epsilon }\right) =\,&A \epsilon ^{N}+O\left( \epsilon ^{N+\alpha \eta }\right) +O\left( \epsilon ^{N+\alpha }\right) \\&+O\left( \epsilon ^{N}\left( \epsilon ^{2(\alpha -\kappa )}+\epsilon ^{2 \eta \alpha (1-\kappa )}\right) \right) \\ =\,&A \epsilon ^{N}+O\left( \epsilon ^{N+\alpha }\right) , \end{aligned} \end{aligned}$$

where \(\eta >1\) is chosen sufficiently large. Each error term is indeed \(O(\epsilon ^{N+\alpha })\), since \(\alpha \eta \ge \alpha \), \(2(\alpha -\kappa )>\alpha \) (because \(\kappa <\alpha /2\)), and \(2\eta \alpha (1-\kappa )\ge \alpha \) once \(\eta \) is large enough (recall that \(\kappa \ll \alpha / 2\)). Thus, using \(j_{\epsilon }\left( y_{\epsilon }\right) \le j_{\epsilon }\left( z_{\epsilon }\right) \), we deduce

$$\begin{aligned} B \epsilon ^{N}\left( V\left( y_{\epsilon }\right) -V(0)\right) +O\left( \epsilon ^{N}\right) \left( \epsilon ^{\alpha -\kappa }+\left( V\left( y_{\epsilon }\right) -V(0)\right) ^{1-\kappa }\right) ^{2} \le O\left( \epsilon ^{N+\alpha }\right) . \end{aligned}$$

That is,

$$\begin{aligned} B\left( V\left( y_{\epsilon }\right) -V(0)\right) +O(1)\left( \epsilon ^{\alpha -\kappa }+\left( V\left( y_{\epsilon }\right) -V(0)\right) ^{1-\kappa }\right) ^{2} \le O\left( \epsilon ^{\alpha }\right) . \end{aligned}$$
(4.6)

If \(y_{\epsilon } \in \partial B_{\delta }(0)\), then by the assumption \((V_2)\), we have

$$\begin{aligned} V\left( y_{\epsilon }\right) -V(0) \ge c_{0}>0, \end{aligned}$$

for some constant \(c_{0}>0\). Indeed, by \((V_2)\) and the continuity of V, the function \(V-V(0)\) attains a positive minimum on the compact sphere \(\partial B_{\delta }(0)\), provided \(0<\delta <r_{0}\). Thus, noting that \(B>0\) from Lemma 4.4 and sending \(\epsilon \rightarrow 0\), we infer from (4.6) that

$$\begin{aligned} c_{0} \le 0. \end{aligned}$$

We reach a contradiction. This proves the claim, and therefore \(y_{\epsilon }\) is a critical point of \(j_{\epsilon }\) in the interior of \(B_{\delta }(x_0)\). Theorems 1.3 and 1.4 now follow from the claim and Lemma 4.1. \(\square \)