1 Introduction and Main Result

In this paper, we study the existence and the concentration behavior of multi-peak solutions to the following singularly perturbed fourth-order nonlinear Schrödinger equation with mixed dispersion:

$$\begin{aligned} {\varepsilon ^4}{\Delta ^2}u - \beta {\varepsilon ^2}\Delta u + V(x)u = |u{|^{p - 2}}u{\text { in }}{{\mathbb {R}}^N},\quad u \in {H^2}({{\mathbb {R}}^N}), \end{aligned}$$
(1.1)

where \(\varepsilon \) is a small positive parameter, \(N \ge 5\), \(2< p < {2^ * }: = 2N/(N - 4)\), and the potential \(V:{{\mathbb {R}}^N} \rightarrow {\mathbb {R}}\) satisfies:

\((V_1)\) \(V \in C({{\mathbb {R}}^N},{\mathbb {R}}) \cap {L^\infty }({{\mathbb {R}}^N})\) and \(\mathop {\inf }\nolimits _{x \in {{\mathbb {R}}^N}} V(x) = {V_0} > 0\);

\((V_2)\) there exist K mutually disjoint bounded domains \({\Lambda ^k}\ (k = 1,2,\ldots ,K)\) such that

$$\begin{aligned} {m_k}: = \mathop {\inf }\limits _{{\Lambda ^k}} V < \mathop {\min }\limits _{\partial {\Lambda ^k}} V. \end{aligned}$$

We set

$$\begin{aligned} {{\mathcal {M}}^k}: = \{ x \in {\Lambda ^k}:{\text { }}V(x) = {m_k}\} . \end{aligned}$$

This kind of hypothesis was first introduced by del Pino and Felmer [1] and Gui [2]. Without loss of generality, we may assume that \({\text {dist}}\, ({\Lambda ^{{k_1}}},{\Lambda ^{{k_2}}}) > 0\) for each \({k_1} \ne {k_2}\), \(1 \le {k_1},{k_2} \le K\); this can be achieved by making \({\Lambda ^k}\) smaller if necessary. Moreover, denoting \(m: = {\max _{1 \le k \le K}}{m_k}\), we also assume that \(\beta \ge 2{m^{1/2}}\).
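The role of the assumption \(\beta \ge 2{m^{1/2}}\) deserves a brief comment: it guarantees that for every \(\alpha \in (0,m]\) the symbol of the fourth-order operator factors into two second-order Helmholtz symbols with real roots,

$$\begin{aligned} |\xi {|^4} + \beta |\xi {|^2} + \alpha = \left( {|\xi {|^2} + {\lambda _1}} \right) \left( {|\xi {|^2} + {\lambda _2}} \right) ,\quad {\lambda _{1,2}} = \frac{{\beta \mp \sqrt{{\beta ^2} - 4\alpha } }}{2}, \end{aligned}$$

which requires \({\beta ^2} \ge 4\alpha \), that is, \(\beta \ge 2{\alpha ^{1/2}}\). This factorization is exploited in Sect. 2, where the fundamental solution of the associated linear operator is decomposed accordingly.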

Problem (1.1) arises from seeking standing waves for the following time-dependent fourth-order Schrödinger equation

$$\begin{aligned} i\partial _t{\psi }-\gamma {\Delta }^{2}{\psi }+\mu \Delta \psi + {|\psi |}^{p-2}{\psi }=0, ~~~~\psi (0,x)=\psi _0(x),~~~~(t,x)\in {{\mathbb {R}}}\times {{\mathbb {R}}^N},\nonumber \\ \end{aligned}$$
(1.2)

which was introduced by Karpman [3] to regularize and stabilize solutions of the classical Schrödinger equation. Local well-posedness of the Cauchy problem (1.2) in \(H^2({\mathbb {R}}^N)\) for \(2<p<2^*\) was proved in [4]. We also refer the reader to [5,6,7] for global well-posedness and scattering, and to [8, 9] for the existence of finite-time blow-up solutions and the stability and instability of standing wave solutions to (1.2).

As shown in [8, 9], the added defocusing fourth-order dispersion term (with \(\gamma >0\) small enough) clearly helps to stabilize the standing waves of problem (1.2). The effect of the fourth-order dispersion term (focusing or defocusing) depends on whether it is small or large compared with the Laplacian; see [9, Sect. 6] for details. It is therefore natural to study the asymptotic behavior of standing waves of problem (1.2) as \(\gamma ,\mu \rightarrow 0^+\) (which may depend on how the two parameters compare). This is the main purpose of the present paper.

When the fourth-order dispersion term in (1.1) vanishes, it reduces to the classical singularly perturbed Schrödinger equation

$$\begin{aligned} - {\varepsilon ^2}\Delta u + V(x)u = f(u),{\text { }}x \in {{\mathbb {R}}^N},{\text { }}N \ge 1. \end{aligned}$$
(1.3)

Floer and Weinstein [10] considered (1.3) in the one-dimensional case, where \(f(u) = {u^3}\) and \(V \in {L^\infty }({\mathbb {R}})\) with \(\mathop {\inf }\nolimits _{\mathbb {R}} V > 0\). They constructed a single-peak solution concentrating around any given non-degenerate critical point of V(x). This result was extended to higher dimensions by Oh [11] for \(f(u) = {u^{p-1}}\) \((2< p < \frac{2N}{N-2})\) and potentials V in a Kato class. Furthermore, Oh [12] proved the existence of multi-peak solutions concentrating around any finite subset of the non-degenerate critical points of V. The arguments developed in [10,11,12] are mainly based on a Lyapunov–Schmidt reduction, which requires the uniqueness and non-degeneracy of ground state solutions to the following “limiting equation”

$$\begin{aligned} \left\{ \begin{gathered} - \Delta u + mu = {u^{p-1}}{\text { in }} {{\mathbb {R}}^N},{\text { }}m> 0, \\ u > 0,{\text { }}u \in {H^1}({{\mathbb {R}}^N}),{\text { }}u(0) = \mathop {\max }\limits _{x \in {{\mathbb {R}}^N}} u(x),{\text { }}\Bigl (2< p < \frac{2N}{N-2}\Bigr ). \\ \end{gathered} \right. \end{aligned}$$
(1.4)

Namely, there exists a unique positive radially symmetric solution \(u \in {H^1}({{\mathbb {R}}^N})\) to (1.4), and the kernel of the linearized operator \(Lw = - \Delta w + mw - (p-1){u^{p - 2}}w\) in \(H^1({\mathbb {R}}^N)\) is spanned by \(\{ u_{x_1},\ldots ,u_{x_N}\}\). However, the uniqueness and non-degeneracy of ground state solutions to the “limiting problem”
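For completeness, we recall the standard rescaling reducing (1.4) to the case \(m=1\): if w is the unique positive radial solution of \( - \Delta w + w = {w^{p - 1}}\), then

$$\begin{aligned} u(x): = {m^{\frac{1}{{p - 2}}}}w({m^{1/2}}x) \end{aligned}$$

solves \( - \Delta u + mu = {u^{p - 1}}\), so the uniqueness and non-degeneracy statements for general \(m>0\) follow from the case \(m=1\).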

$$\begin{aligned} {\Delta ^2}u - \beta \Delta u + \alpha u = |u{|^{p - 2}}u{\text { in }}{{\mathbb {R}}^N},\quad u \in {H^2}({{\mathbb {R}}^N}) \qquad \qquad ({E_{\beta ,\alpha }}) \end{aligned}$$

corresponding to problem (1.1) are, in general, difficult to verify. These properties were partially proved by Bonheure et al. [8] only for the case \(2<p< 2+\frac{2}{N}\). Note that the present paper covers the wider range \(2< p < \frac{2N}{N-4}\).

On the other hand, Rabinowitz [13] used the mountain pass theorem to show that (1.3) possesses a positive ground state solution for \(\varepsilon > 0\) small under the conditions:

\((V_3)\) \({V_\infty } = \mathop {\liminf }\nolimits _{|x| \rightarrow \infty } V(x)> {V_0} = \mathop {\inf }\nolimits _{x \in {{\mathbb {R}}^N}} V(x) > 0\).

We also refer to Wang [14] who proved that the positive ground state solutions to (1.3) obtained in [13] must concentrate at global minima of V as \(\varepsilon \rightarrow 0\). del Pino and Felmer [15] studied (1.3) with the conditions on V replaced by

\((V_4)\) \(\mathop {\inf }\nolimits _{x \in {{\mathbb {R}}^N}} V(x) > 0\);

\((V_5)\) There is a bounded domain \(\Lambda \) such that

$$\begin{aligned} \mathop {\inf }\limits _\Lambda V < \mathop {\min }\limits _{\partial \Lambda } V. \end{aligned}$$

They proved that for \(\varepsilon > 0\) small, (1.3) possesses a positive bound state solution which concentrates around the local minima of V in \(\Lambda \) as \(\varepsilon \rightarrow 0\). del Pino and Felmer [1] and Gui [2] obtained multi-peak solutions to (1.3) which concentrate at any prescribed finite set of local minima of the potential, possibly degenerate, by gluing localized solutions due to Coti Zelati and Rabinowitz [16, Proposition 3.4].

Although there are many works dealing with the singularly perturbed Schrödinger equation (1.3), only a few deal with biharmonic semilinear equations; among them we mention [17]. Pimenta and Soares [17] studied the following biharmonic Schrödinger equation

$$\begin{aligned} {\varepsilon ^4}{\Delta ^2}u + V(x)u = f(u){\text { in }}{{\mathbb {R}}^N},\quad u \in {H^2}({{\mathbb {R}}^N}). \end{aligned}$$
(1.5)

They developed the methods of [13, 14] to obtain a family of solutions to (1.5) which concentrates around the global minima of V as \(\varepsilon \rightarrow 0\), where f has subcritical growth.

To the best of our knowledge, the existence and concentration behavior of multi-peak solutions to (1.1) have not yet been studied. It is worth pointing out that for the fourth-order nonlinear Schrödinger equation (1.1), some of the methods used in the literature have to be substantially modified. First, the splitting \(u = {u^ + } - {u^ - }\) is not available in \({H^2}({{\mathbb {R}}^N})\), which causes the classical Nash–Moser type iteration technique to fail. Second, the lack of a general maximum principle for the operator \({\Delta ^2}\) causes considerable difficulty in finding multi-peak solutions to problem (1.1). On the other hand, since for each fixed \(\varepsilon > 0\) the limit \(\mathop {\lim }\nolimits _{|x| \rightarrow \infty } V(\varepsilon x)\) may not exist (and even if it exists, \(V(\varepsilon x)\) need not converge uniformly for \(\varepsilon > 0\) small as \(|x| \rightarrow \infty \)), the common method of [18] for establishing the decay of solutions to biharmonic equations cannot be applied. Consequently, the classical global penalization method of Byeon and Wang [19], which relies heavily on the uniform exponential decay of solutions to (1.1), cannot be used directly. As we shall see later, these two aspects prevent us from applying variational methods in a standard way.

Our main result is stated in what follows.

Theorem 1.1

Assume that \(N \ge 5\), \(\beta \ge 2{m^{1/2}}\) and that the potential V satisfies \((V_1)\) and \((V_2)\). For any two positive integers \({K_1}\), \({K_2}\) with \({K_1} + {K_2} = K\), there exists \({\varepsilon _0} > 0\) such that for every \(\varepsilon \in (0,{\varepsilon _0}]\), (1.1) possesses a sign-changing bound state solution \({u_\varepsilon } \in {H^2}({{\mathbb {R}}^N}) \cap {C^4}({{\mathbb {R}}^N})\). Moreover, for each \(1 \le i \le {K_1}\) and \(1 \le j \le {K_2}\), \({u_\varepsilon }\) possesses exactly one maximum point \(x_\varepsilon ^{p(i)}\) in \({\Lambda ^{p(i)}}\) and exactly one minimum point \(x_\varepsilon ^{q(j)}\) in \({\Lambda ^{q(j)}}\), satisfying

$$\begin{aligned} \mathop {\lim }\limits _{\varepsilon \rightarrow 0} {\text {dist}}\,(x_\varepsilon ^{p(i)},{{\mathcal {M}}^{p(i)}}) = 0{\text { and }}\mathop {\lim }\limits _{\varepsilon \rightarrow 0} {\text {dist}}\,(x_\varepsilon ^{q(j)},{{\mathcal {M}}^{q(j)}}) = 0, \end{aligned}$$

where \(\{ p(1),\ldots ,p({K_1}),q(1),\ldots ,q({K_2})\} \) is a rearrangement of \(\{ 1,2,\ldots ,K\}\).

To conclude this section, we sketch our proof. First, we consider the “limiting problem” \(({E_{\beta ,\alpha }})\) with \(\alpha ,\beta > 0\) and \(\beta \ge 2{\alpha ^{1/2}}\). Whether the positive (or negative) solution to \(({E_{\beta ,\alpha }})\) is unique is unknown. Nevertheless, we can prove that the set of positive (or negative) ground state solutions to \(({E_{\beta ,\alpha }})\) enjoys certain compactness properties (Proposition 2.2). This is crucial for finding multi-peak solutions which are close to a set of prescribed functions. More precisely, we search for a solution of (1.1) which consists essentially of K disjoint parts, each part being close to a ground state solution of the “limiting equation” \(({E_{\beta ,\alpha }})\) associated with the corresponding set \({{\mathcal {M}}^k}\).

To study (1.1), we work with the following equivalent equation

$$\begin{aligned} {\Delta ^2}v - \beta \Delta v + V(\varepsilon x)v = |v{|^{p - 2}}v{\text { in }}{{\mathbb {R}}^N},{v \in {H^2}({{\mathbb {R}}^N})}. \end{aligned}$$
(1.6)
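Indeed, (1.1) and (1.6) are related by the standard rescaling \(u(x) = v(x/\varepsilon )\): since \(\Delta [v(\cdot /\varepsilon )](x) = {\varepsilon ^{ - 2}}(\Delta v)(x/\varepsilon )\) and \({\Delta ^2}[v(\cdot /\varepsilon )](x) = {\varepsilon ^{ - 4}}({\Delta ^2}v)(x/\varepsilon )\), we have

$$\begin{aligned} {\varepsilon ^4}{\Delta ^2}u(x) - \beta {\varepsilon ^2}\Delta u(x) + V(x)u(x) = \left( {{\Delta ^2}v - \beta \Delta v + V(\varepsilon y)v} \right) \Big |_{y = x/\varepsilon }, \end{aligned}$$

so u solves (1.1) if and only if v solves (1.6).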

The corresponding energy functional to (1.6) is

$$\begin{aligned} {I_\varepsilon }(v) = \frac{1}{2}\int _{{{\mathbb {R}}^N}} {|\Delta v{|^2}} + \frac{1}{2}\beta \int _{{{\mathbb {R}}^N}} {|\nabla v{|^2}} + \frac{1}{2}\int _{{{\mathbb {R}}^N}} {V(\varepsilon x){v^2}} - \frac{1}{p}\int _{{{\mathbb {R}}^N}} {|v{|^p}},\ v \in {H_\varepsilon }, \end{aligned}$$

where \({H_\varepsilon }\) is a class of weighted Sobolev spaces defined as follows:

$$\begin{aligned} H_\varepsilon := \left\{ {v \in {H^2}({{\mathbb {R}}^N}):\int _{{{\mathbb {R}}^N}} {V(\varepsilon x){v^2}} < \infty } \right\} . \end{aligned}$$

Unlike in [13], where the minimum of V(x) is global and the mountain pass theorem can be applied globally, the condition \((V_2)\) in the present paper is local, so we need a penalization method introduced in [1, 2, 15], which helps us to overcome the lack of compactness caused by the unboundedness of the domain \({{\mathbb {R}}^N}\). For this purpose, we modify the functional \({I_\varepsilon }\). Following [1, 2, 15], we define auxiliary functionals \({J_\varepsilon }\) and \(J_\varepsilon ^k\ (k = 1,\ldots ,K)\), respectively (see Sect. 3 for details). It will be shown that this type of penalization forces the concentration phenomena to occur inside \(\Lambda = \cup _{k = 1}^K{\Lambda ^k}\) (Lemma 3.4).

In order to obtain a critical point \({v_\varepsilon }\) of \({J_\varepsilon }\), we use a version of the quantitative deformation lemma (Lemma 3.7) to construct a special convergent Palais–Smale sequence of \({J_\varepsilon }\) for \(\varepsilon > 0\) small. To prove that \({v_\varepsilon }\) is indeed a solution to the original problem (1.6), we need to exhibit a uniform decay of \({v_\varepsilon }\) at infinity. For this purpose, we establish a local \({W^{4,p}}\)-estimate and a global \({L^\infty }\)-estimate for solutions to fourth-order semilinear elliptic equations (Proposition 2.3).

This paper is organized as follows. In Sect. 2, we present some preliminary results. In Sect. 3, we prove the main result, Theorem 1.1.

2 Auxiliary Results

The “limiting problem” to (1.1) is

$$\begin{aligned} {\Delta ^2}u - \beta \Delta u + \alpha u = |u{|^{p - 2}}u{\text { in }}{{\mathbb {R}}^N},\quad u \in {H^2}({{\mathbb {R}}^N}), \qquad \qquad ({E_{\beta ,\alpha }}) \end{aligned}$$

where \(\alpha ,\beta > 0\) and \(\beta \ge 2{\alpha ^{1/2}}\). The functional corresponding to \(({E_{\beta ,\alpha }})\) is defined as

$$\begin{aligned}&{I_{\beta ,\alpha }}(u) \\&\quad = \frac{1}{2}\int _{{{\mathbb {R}}^N}} {|\Delta u{|^2}} + \frac{1}{2}\beta \int _{{{\mathbb {R}}^N}} {|\nabla u{|^2}} + \frac{1}{2}\alpha \int _{{{\mathbb {R}}^N}} {{|u|^2}} - \frac{1}{p}\int _{{{\mathbb {R}}^N}} {|u{|^p}},{\text { }}u \in {H^2}({{\mathbb {R}}^N}), \end{aligned}$$

where

$$\begin{aligned} {H^2}({{\mathbb {R}}^N}): = \{ u \in {L^2}({{\mathbb {R}}^N}):\nabla u \in {L^2}({{\mathbb {R}}^N}),\Delta u \in {L^2}({{\mathbb {R}}^N})\} \end{aligned}$$

endowed with the equivalent norm

$$\begin{aligned} {\Vert u \Vert _{{H^2}({{\mathbb {R}}^N})}}: = {\left( {\int _{{{\mathbb {R}}^N}} {|\Delta u{|^2}} + \int _{{{\mathbb {R}}^N}} {|u{|^2}} } \right) ^{1/2}}. \end{aligned}$$
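Note that this norm indeed controls the full \(H^2\)-norm: integrating by parts and using Young's inequality,

$$\begin{aligned} \int _{{{\mathbb {R}}^N}} {|\nabla u{|^2}} = - \int _{{{\mathbb {R}}^N}} {u\Delta u} \le \frac{1}{2}\int _{{{\mathbb {R}}^N}} {|\Delta u{|^2}} + \frac{1}{2}\int _{{{\mathbb {R}}^N}} {|u{|^2}} \le \frac{1}{2}\Vert u \Vert _{{H^2}({{\mathbb {R}}^N})}^2, \end{aligned}$$

which justifies the equivalence of \({\Vert \cdot \Vert _{{H^2}({{\mathbb {R}}^N})}}\) with the usual Sobolev norm.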

Denote by \({c_{\beta ,\alpha }}\) the ground state level of \(({E_{\beta ,\alpha }})\), that is,

$$\begin{aligned} {c_{\beta ,\alpha }}: = \mathop {\inf }\limits _{u \in {{\mathcal {G}}_{\beta ,\alpha }}} {I_{\beta ,\alpha }}(u), \end{aligned}$$

where \({{\mathcal {G}}_{\beta ,\alpha }}: = \{ u \in {H^2}({{\mathbb {R}}^N})\backslash \{ 0\} :{I'_{\beta ,\alpha }}(u) = 0\}\). Arguing as in [13, 20], we see that

$$\begin{aligned} {c_{\beta ,\alpha }}= & {} \mathop {\inf }\limits _{\gamma \in {\Gamma _{\beta ,\alpha }}} \mathop {\max }\limits _{t \in [0,1]} {I_{\beta ,\alpha }}(\gamma (t)) = \mathop {\inf }\limits _{u \in {H^2}({{\mathbb {R}}^N})\backslash \{ 0\} } \mathop {\sup }\limits _{t> 0} {I_{\beta ,\alpha }}(tu) \nonumber \\= & {} \mathop {\inf }\limits _{u \in {{\mathcal {N}}_{\beta ,\alpha }}} {I_{\beta ,\alpha }}(u) > 0, \end{aligned}$$
(2.1)

where the set of paths is defined as

$$\begin{aligned} {\Gamma _{\beta ,\alpha }}: = \left\{ {\gamma \in C([0,1],{H^2}({{\mathbb {R}}^N})):\gamma (0) = 0{\text { and }}{I_{\beta ,\alpha }}(\gamma (1)) < 0} \right\} \end{aligned}$$
(2.2)

and \({{{\mathcal {N}}_{\beta ,\alpha }}}\) is the Nehari manifold defined by

$$\begin{aligned} {{\mathcal {N}}_{\beta ,\alpha }}: = \{ u \in {H^2}({{\mathbb {R}}^N})\backslash \{ 0\} :\langle {{I'_{\beta ,\alpha }}(u),u} \rangle = 0\}. \end{aligned}$$
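For \(u \ne 0\), writing \(\Vert u \Vert _{\beta ,\alpha }^2: = \int _{{{\mathbb {R}}^N}} {|\Delta u{|^2}} + \beta \int _{{{\mathbb {R}}^N}} {|\nabla u{|^2}} + \alpha \int _{{{\mathbb {R}}^N}} {|u{|^2}} \) for brevity, the fibering map \(t \mapsto {I_{\beta ,\alpha }}(tu) = \frac{{{t^2}}}{2}\Vert u \Vert _{\beta ,\alpha }^2 - \frac{{{t^p}}}{p}\int _{{{\mathbb {R}}^N}} {|u{|^p}} \) attains its unique maximum on \((0, + \infty )\) at

$$\begin{aligned} {t_u} = {\left( {\frac{{\Vert u \Vert _{\beta ,\alpha }^2}}{{\int _{{{\mathbb {R}}^N}} {|u{|^p}} }}} \right) ^{\frac{1}{{p - 2}}}}, \end{aligned}$$

and \({t_u}u \in {{\mathcal {N}}_{\beta ,\alpha }}\); this elementary computation underlies the second and third characterizations of \({c_{\beta ,\alpha }}\) in (2.1).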

The following result on the ground state solutions of \(({E_{\beta ,\alpha }})\) was proved in [21].

Proposition 2.1

([21], Theorem 1) Assume that \(\alpha > 0\), \(\beta \ge 2{\alpha ^{1/2}}\), \(N \ge 5\) and \(2< p < {2^ * }: = 2N/(N - 4)\). Then \(({E_{\beta ,\alpha }})\) has a nontrivial ground state solution, and any ground state solution of \(({E_{\beta ,\alpha }})\) does not change sign, is radially symmetric around some point and is strictly radially decreasing.

Let \(S_{\beta ,\alpha }^ + \) (resp. \(S_{\beta ,\alpha }^ - \)) denote the set of positive (resp. negative) ground state solutions U (resp. V) of \(({E_{\beta ,\alpha }})\) satisfying \(U(0) = \mathop {\max }_{x \in {{\mathbb {R}}^N}} U(x)\) (resp. \(V(0) = \mathop {\min }_{x \in {{\mathbb {R}}^N}} V(x)\)). We obtain the following compactness of \(S_{\beta ,\alpha }^ + \) (resp. \(S_{\beta ,\alpha }^ - \)).

Proposition 2.2

Assume that \(\alpha > 0\), \(\beta \ge 2{\alpha ^{1/2}}\), \(N \ge 5\), then \(S_{\beta ,\alpha }^ + \) and \(S_{\beta ,\alpha }^ - \) are compact in \({H^2}({{\mathbb {R}}^N})\).

Proof

For any \(U \in S_{\beta ,\alpha }^ +\),

$$\begin{aligned} \begin{array}{ll} {c_{\beta ,\alpha }} &{}= \displaystyle {I_{\beta ,\alpha }}(U) - \frac{1}{p}\left\langle {{I'_{\beta ,\alpha }}(U),U} \right\rangle \\ &{}= \displaystyle \left( {\frac{1}{2} - \frac{1}{p}} \right) \left( {\int _{{{\mathbb {R}}^N}} {|\Delta U{|^2}} + \beta \int _{{{\mathbb {R}}^N}} {|\nabla U{|^2}} + \alpha \int _{{{\mathbb {R}}^N}} {|U{|^2}} } \right) , \\ \end{array} \end{aligned}$$

thus \(S_{\beta ,\alpha }^ +\) is bounded in \({H^2}({{\mathbb {R}}^N})\).

For any sequence \(\{ {U_n}\} _{n = 1}^\infty \subset S_{\beta ,\alpha }^ +\), up to a subsequence, we may assume that there is a \({U_0} \in {H^2}({{\mathbb {R}}^N})\) such that

$$\begin{aligned} {U_n} \rightharpoonup {U_0}{\text { in }}{H^2}({{\mathbb {R}}^N}) \end{aligned}$$
(2.3)

and \({U_0}\) satisfies \(({E_{\beta ,\alpha }})\). Next, we claim that there exist a sequence \(\{ {x_n}\} _{n = 1}^\infty \subset {{\mathbb {R}}^N}\) and \(R>0\), \({\beta _0}>0\) such that

$$\begin{aligned} \int _{{B_R}({x_n})} {|{U_n}{|^2}} \ge {\beta _0}. \end{aligned}$$
(2.4)

Otherwise, by the vanishing theorem (see [22, Lemma I.1]), it follows that

$$\begin{aligned} \int _{{{\mathbb {R}}^N}} {|{U_n}{|^q}} \rightarrow 0{\text { as }}n \rightarrow \infty {\text { for }}2< q < {2^ * }. \end{aligned}$$
(2.5)

(2.5) and \(\left\langle {{I'_{\beta ,\alpha }}({U_n}),{U_n}} \right\rangle = 0\) imply that \({\Vert {{U_n}} \Vert _{{H^2}({{\mathbb {R}}^N})}} = o(1)\), which contradicts the fact that \({I_{\beta ,\alpha }}({U_n}) = {c_{\beta ,\alpha }} > 0\); thus (2.4) holds. In view of Proposition 2.1, \({U_n}\) is radially symmetric around 0 and strictly radially decreasing, so we see from (2.4) that

$$\begin{aligned} \int _{{B_R}(0)} {|{U_n}{|^2}} \ge {\beta _0}. \end{aligned}$$
(2.6)

(2.3) and (2.6) imply that \({U_0}\) is nontrivial, then

$$\begin{aligned} \begin{array}{ll} {c_{\beta ,\alpha }} &{}\le \displaystyle {I_{\beta ,\alpha }}({U_0}) - \frac{1}{p}\left\langle {{I'_{\beta ,\alpha }}({U_0}),{U_0}} \right\rangle \\ &{}=\displaystyle \left( {\frac{1}{2} - \frac{1}{p}} \right) \left( {\int _{{{\mathbb {R}}^N}} {|\Delta {U_0}{|^2}} + \beta \int _{{{\mathbb {R}}^N}} {|\nabla {U_0}{|^2}} + \alpha \int _{{{\mathbb {R}}^N}} {|{U_0}{|^2}} } \right) \\ &{}\le \displaystyle \mathop {\lim }\limits _{n \rightarrow \infty } \left( {\frac{1}{2} - \frac{1}{p}} \right) \left( {\int _{{{\mathbb {R}}^N}} {|\Delta {U_n}{|^2}} + \beta \int _{{{\mathbb {R}}^N}} {|\nabla {U_n}{|^2}} + \alpha \int _{{{\mathbb {R}}^N}} {|{U_n}{|^2}} } \right) \\ &{}= \displaystyle \mathop {\lim }\limits _{n \rightarrow \infty } \left( {{I_{\beta ,\alpha }}({U_n}) - \frac{1}{p}\left\langle {{I'_{\beta ,\alpha }}({U_n}),{U_n}} \right\rangle } \right) = {c_{\beta ,\alpha }}, \\ \end{array} \end{aligned}$$
(2.7)

by (2.3) and (2.7), we obtain \({U_n} \rightarrow {U_0}\) in \({H^2}({{\mathbb {R}}^N})\). This completes the proof that \(S_{\beta ,\alpha }^ +\) is compact in \({H^2}({{\mathbb {R}}^N})\). Similarly, we also see that \(S_{\beta ,\alpha }^ -\) is compact in \({H^2}({{\mathbb {R}}^N})\).    \(\square \)

For \(u \in {L^1}({{\mathbb {R}}^N})\), we define its Fourier transform \({\mathcal {F}}u = {{\hat{u}}}\) by

$$\begin{aligned} {\mathcal {F}}u(\xi )= {{{\hat{u}}}}(\xi ): = \frac{1}{{{{(2\pi )}^{N/2}}}}\int _{{{\mathbb {R}}^N}} {{e^{ - ix\cdot \xi }}u(x)} \mathrm{d}x \end{aligned}$$

and its inverse Fourier transform \({{\mathcal {F}}^{ - 1}}u\) by

$$\begin{aligned} {{\mathcal {F}}^{ - 1}}u(x) = {\check{u}}(x): = \frac{1}{{{{(2\pi )}^{N/2}}}}\int _{{{\mathbb {R}}^N}} {{e^{i\xi \cdot x}}u(\xi )} \mathrm{d}\xi . \end{aligned}$$

We recall that the fundamental solutions to the Helmholtz equation are solutions to

$$\begin{aligned} - \Delta {{\mathcal {K}}_\mu } + \mu {{\mathcal {K}}_\mu } = {\delta (0)}, \end{aligned}$$
(2.8)

where \(\mu \in {\mathbb {C}}\) and \(\delta (0)\) stands for the Dirac mass centered at 0. Of course, \({{\mathcal {K}}_\mu }\) is not uniquely determined, but in what follows we always choose the one satisfying a nice integrability condition, namely \({{\mathcal {K}}_\mu } \in {L^1}({{\mathbb {R}}^N})\). Fix \({c_0} > 0\) small such that \({\beta ^2} - 4{c_0} > 0\) and \({c_0} < \mathop {\inf }_{{{\mathbb {R}}^N}} V(x)\). Arguing as in Example 1 in Sect. 4.3.1 of [23], we see that

$$\begin{aligned} {{\mathcal {K}}_{{\lambda _i}}}: = \frac{1}{{{{(2\pi )}^{N/2}}}}{\left( {\frac{1}{{|\xi {|^2} + {\lambda _i}}}} \right) ^ \vee } = \frac{1}{{{{(4\pi )}^{N/2}}}}\int _0^{ + \infty } {\frac{{{e^{ - {\lambda _i}t - \frac{{|x{|^2}}}{{4t}}}}}}{{{t^{N/2}}}}} \mathrm{d}t \quad (x \ne 0), \end{aligned}$$

where \({{\mathcal {K}}_{{\lambda _i}}}(i = 1,2)\) are the fundamental solutions to (2.8) with

$$\begin{aligned} {\lambda _1} = \frac{{\beta - \sqrt{{\beta ^2} - 4{c_0}} }}{2}{\text { and }}{\lambda _2} = \frac{{\beta + \sqrt{{\beta ^2} - 4{c_0}} }}{2}. \end{aligned}$$
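These formulas follow from the subordination identity \(\frac{1}{{|\xi {|^2} + {\lambda _i}}} = \int _0^{ + \infty } {{e^{ - (|\xi {|^2} + {\lambda _i})t}}} \mathrm{d}t\) together with the inverse Fourier transform of the Gaussian, \({{\mathcal {F}}^{ - 1}}({e^{ - t|\xi {|^2}}})(x) = {(2t)^{ - N/2}}{e^{ - \frac{{|x{|^2}}}{{4t}}}}\) in the normalization fixed above: by Fubini's theorem,

$$\begin{aligned} \frac{1}{{{{(2\pi )}^{N/2}}}}{\left( {\frac{1}{{|\xi {|^2} + {\lambda _i}}}} \right) ^ \vee } = \frac{1}{{{{(2\pi )}^{N/2}}}}\int _0^{ + \infty } {{e^{ - {\lambda _i}t}}\frac{{{e^{ - \frac{{|x{|^2}}}{{4t}}}}}}{{{{(2t)}^{N/2}}}}} \mathrm{d}t = \frac{1}{{{{(4\pi )}^{N/2}}}}\int _0^{ + \infty } {\frac{{{e^{ - {\lambda _i}t - \frac{{|x{|^2}}}{{4t}}}}}}{{{t^{N/2}}}}} \mathrm{d}t. \end{aligned}$$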

Here, we observe that \({{\mathcal {K}}_{{\lambda _i}}} \in {L^1}({{\mathbb {R}}^N})\) is radially symmetric, non-negative, non-increasing in \(r = |x|\), and decays exponentially at infinity. Moreover, it is smooth in \({{\mathbb {R}}^N}\backslash \{ 0\} \). Next, we denote by \({\mathcal {K}}\) the fundamental solution to the operator \({\Delta ^2} - \beta \Delta + {c_0}Id\), that is,

$$\begin{aligned} {\Delta ^2}{\mathcal {K}} - \beta \Delta {\mathcal {K}} + {c_0}{\mathcal {K}} = {\delta (0)}. \end{aligned}$$
(2.9)

Taking the Fourier transform in (2.9), we get

$$\begin{aligned} \begin{array}{ll} {\mathcal {K}} &{}= \displaystyle \frac{1}{{{{(2\pi )}^{N/2}}}}{\left( {\frac{1}{{|\xi {|^4} + \beta |\xi {|^2} + {c_0}}}} \right) ^ \vee } \\ &{}= \displaystyle \frac{1}{{\sqrt{{\beta ^2} - 4{c_0}} }}\frac{1}{{{{(2\pi )}^{N/2}}}}{\left( {\frac{1}{{|\xi {|^2} + {\lambda _1}}} - \frac{1}{{|\xi {|^2} + {\lambda _2}}}} \right) ^ \vee } \\ &{} \displaystyle = \frac{1}{{\sqrt{{\beta ^2} - 4{c_0}} }}\left( {{{\mathcal {K}}_{{\lambda _1}}} - {{\mathcal {K}}_{{\lambda _2}}}} \right) . \\ \end{array} \end{aligned}$$

Moreover, we see that \(0 \le {\mathcal {K}} \in {L^1}({{\mathbb {R}}^N})\).
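The partial-fraction step in the last display can be checked numerically at the level of the symbol \(s = |\xi |^2\); the short sketch below (the sample values \(\beta = 3\), \({c_0} = 0.5\) are illustrative choices of ours, not taken from the paper) verifies the factorization \({s^2} + \beta s + {c_0} = (s + {\lambda _1})(s + {\lambda _2})\) and the decomposition used for \({\mathcal {K}}\).

```python
import math

def lambdas(beta, c0):
    """Roots of mu^2 - beta*mu + c0 = 0, i.e. the factorization
    s^2 + beta*s + c0 = (s + l1)(s + l2) with s = |xi|^2;
    real whenever beta^2 >= 4*c0."""
    d = math.sqrt(beta**2 - 4 * c0)
    return (beta - d) / 2, (beta + d) / 2

beta, c0 = 3.0, 0.5          # sample values with beta^2 > 4*c0
l1, l2 = lambdas(beta, c0)
d = math.sqrt(beta**2 - 4 * c0)

# Partial fractions: 1/(s^2 + beta*s + c0)
#   = (1/sqrt(beta^2 - 4*c0)) * (1/(s + l1) - 1/(s + l2))
for s in [0.1, 1.0, 7.5]:    # s plays the role of |xi|^2
    lhs = 1.0 / (s**2 + beta * s + c0)
    rhs = (1.0 / d) * (1.0 / (s + l1) - 1.0 / (s + l2))
    assert abs(lhs - rhs) < 1e-12

# Vieta's relations behind the factorization
assert abs(l1 * l2 - c0) < 1e-12 and abs(l1 + l2 - beta) < 1e-12
```

The same algebra, with \(s\) replaced by \( - \Delta \), gives the operator identity \({\Delta ^2} - \beta \Delta + {c_0}Id = ( - \Delta + {\lambda _1}Id)( - \Delta + {\lambda _2}Id)\) used later in the proof of Proposition 2.3.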

The following local \({W^{4,p}}\)-estimate for fourth-order semilinear elliptic equations with mixed dispersion is the key to obtaining the uniform global \({L^\infty }\)-estimate of solutions to (1.1). Although the proof is standard, we have not found in the literature a local \({W^{4,p}}\)-estimate suited to fourth-order semilinear elliptic equations with mixed dispersion, so for the reader's convenience we give a detailed proof.

Proposition 2.3

Let \(h \in {L^p}({{\mathbb {R}}^N})\), \(1< p < \infty \) and let \(u: = {\mathcal {K}} * h\). Then \(u \in {W^{4,p}}({{\mathbb {R}}^N})\),

$$\begin{aligned} {\Delta ^2}u - \beta \Delta u + {c_0}u = h{\text { a.e. in }}{{\mathbb {R}}^N} \end{aligned}$$
(2.10)

and for any \(x \in {{\mathbb {R}}^N}\),

$$\begin{aligned} {\Vert u \Vert _{{W^{4,p}}({B_1}(x))}} \le C\Bigl ( {{{\Vert h \Vert }_{{L^p}({B_2}(x))}} + {{\Vert u \Vert }_{{L^p}({B_2}(x))}}} \Bigr ), \end{aligned}$$
(2.11)

where \(C>0\) depends only on N and p.

Proof

Let us first deal with the case \(p=2\). If \(h \in C_c^\infty ({{\mathbb {R}}^N})\), since \({\mathcal {K}} \in {L^1}({{\mathbb {R}}^N})\), we see from the dominated convergence theorem that

$$\begin{aligned}&u: = {\mathcal {K}} * h = \frac{1}{{\sqrt{{\beta ^2} - 4{c_0}} }}\left( {{{\mathcal {K}}_{{\lambda _1}}} * h - {{\mathcal {K}}_{{\lambda _2}}} * h} \right) \\&\quad : = \frac{1}{{\sqrt{{\beta ^2} - 4{c_0}} }}({g_{{\lambda _1}}} - {g_{{\lambda _2}}}) \in {C^\infty }({{\mathbb {R}}^N}). \end{aligned}$$

We claim that u satisfies (2.10) in the classical sense. To see this, fix \(\delta > 0\); then, for each \(i = 1,2\),

$$\begin{aligned} \begin{array}{ll} - \Delta {g_{{\lambda _i}}} + {\lambda _i}{g_{{\lambda _i}}} &{}= \displaystyle \int _{{B_\delta }(0)} {{{\mathcal {K}}_{\lambda _i}}(y)( - {\Delta _x}h(x - y) + {\lambda _i}h(x - y))} \mathrm{d}y \\ &{}\quad +\displaystyle \int _{{{\mathbb {R}}^N}\backslash {B_\delta }(0)} {{{\mathcal {K}}_{\lambda _i}}(y)( - {\Delta _x}h(x - y) + {\lambda _i}h(x - y))} \mathrm{d}y \\ &{}= \displaystyle (I) + (II). \\ \end{array} \end{aligned}$$
(2.12)

We see that

$$\begin{aligned} |(I)| \le C\Bigl ( {{{\Vert h \Vert }_{{L^\infty }({{\mathbb {R}}^N})}} + {{\Vert {{\nabla ^2}h} \Vert }_{{L^\infty }({{\mathbb {R}}^N})}}} \Bigr )\int _{{B_\delta }(0)} {{{\mathcal {K}}_{{\lambda _i}}}(y)} \mathrm{d}y = o(1){\text { as }}\delta \rightarrow 0.\nonumber \\ \end{aligned}$$
(2.13)

An integration by parts yields

$$\begin{aligned}&\int _{{{\mathbb {R}}^N}\backslash {B_\delta }(0)} {{{\mathcal {K}}_{{\lambda _i}}}(y){\Delta _x}h(x - y)} \mathrm{d}y \nonumber \\&\quad = \int _{{{\mathbb {R}}^N}\backslash {B_\delta }(0)} {{{\mathcal {K}}_{{\lambda _i}}}(y){\Delta _y}h(x - y)} \mathrm{d}y \nonumber \\&\quad = - \int _{{{\mathbb {R}}^N}\backslash {B_\delta }(0)} {\nabla {{\mathcal {K}}_{{\lambda _i}}}(y){\nabla _y}h(x - y)} \mathrm{d}y + \int _{\partial {B_\delta }(0)} {{{\mathcal {K}}_{{\lambda _i}}}(y)\frac{{\partial h}}{{\partial \nu }}(x - y)} \mathrm{d}S(y) \nonumber \\&\quad = {(II)_1} + {(II)_2}, \end{aligned}$$
(2.14)

where \(\nu \) denotes the inward-pointing unit normal along \({\partial {B_\delta }(0)}\). Note that

$$\begin{aligned} \begin{array}{ll} |{(II)_2}| &{}\displaystyle \le {\left\| {\nabla h} \right\| _{{L^\infty }({{\mathbb {R}}^N})}}\int _{\partial {B_\delta }(0)} {{{\mathcal {K}}_{{\lambda _i}}}(y)} \mathrm{d}S(y) \\ &{}\displaystyle \le C\int _{\partial {B_\delta }(0)} {\left( {\int _0^{ + \infty } {\frac{{{e^{ - {\lambda _i}t - \frac{{{\delta ^2}}}{{4t}}}}}}{{{t^{N/2}}}}\mathrm{d}t} } \right) } \mathrm{d}S(y) \\ &{}\displaystyle \le C{\delta ^{N - 1}}\int _0^{ + \infty } {\frac{{{e^{ - \frac{{{\delta ^2}}}{{4t}}}}}}{{{t^{N/2}}}}\mathrm{d}t} \\ &{}\displaystyle \mathop = \limits ^{t' = t/{\delta ^2}} C\delta \int _0^{ + \infty } {\frac{{{e^{ - \frac{1}{{4t'}}}}}}{{{{(t')}^{N/2}}}}\mathrm{d}t'} \le C\delta . \\ \end{array} \end{aligned}$$
(2.15)

We continue by integrating by parts once again in the term \({(II)_1}\) to get that

$$\begin{aligned} {(II)_1} = \int _{{{\mathbb {R}}^N}\backslash {B_\delta }(0)} {\Delta {{\mathcal {K}}_{{\lambda _i}}}(y)h(x - y)} \mathrm{d}y - \int _{\partial {B_\delta }(0)} {\frac{{\partial {{\mathcal {K}}_{{\lambda _i}}}}}{{\partial \nu }}(y)h(x - y)} \mathrm{d}S(y).\qquad \end{aligned}$$
(2.16)

Since

$$\begin{aligned} \nabla {{\mathcal {K}}_{\lambda _i}}(y) = \frac{1}{{{{(4\pi )}^{N/2}}}}\int _0^{ + \infty } {\frac{{{e^{ - {\lambda _i}t - \frac{{|y{|^2}}}{{4t}}}}}}{{{t^{N/2}}}}\left( { - \frac{y}{{2t}}} \right) } \mathrm{d}t \quad (y \ne 0) \end{aligned}$$

and \(\nu = - y/|y| = - y/\delta \) on \({\partial {B_\delta }(0)}\), consequently,

$$\begin{aligned} \begin{array}{ll} \displaystyle \frac{{\partial {{\mathcal {K}}_{{\lambda _i}}}}}{{\partial \nu }}(y) &{}= \displaystyle \nabla {{\mathcal {K}}_{\lambda _i}}(y) \cdot \nu \\ &{}=\displaystyle \frac{1}{{2{{(4\pi )}^{N/2}}}}\int _0^{ + \infty } {\frac{{{e^{ - {\lambda _i}t - \frac{{{\delta ^2}}}{{4t}}}}}}{{{t^{\frac{N}{2} + 1}}}}\delta } \mathrm{d}t \\ &{}\displaystyle \mathop = \limits ^{t' = t/{\delta ^2}} \frac{1}{{2{{(4\pi )}^{N/2}}{\delta ^{N - 1}}}}\int _0^{ + \infty } {\frac{{{e^{ - {\lambda _i}{\delta ^2}t' - \frac{1}{{4t'}}}}}}{{{{(t')}^{\frac{N}{2} + 1}}}} } \mathrm{d}t' \\ \end{array} \end{aligned}$$

on \({\partial {B_\delta }(0)}\). Hence we get

$$\begin{aligned}&\mathop {\lim }\limits _{\delta \rightarrow 0} \int _{\partial {B_\delta }(0)} {\frac{{\partial {{\mathcal {K}}_{\lambda _i}}}}{{\partial \nu }}(y)h(x - y)} \mathrm{d}S(y) \nonumber \\&\quad = \mathop {\lim }\limits _{\delta \rightarrow 0} \frac{1}{{2{{(4\pi )}^{N/2}}}}\left( {\int _0^{ + \infty } {\frac{{{e^{ - {\lambda _i}{\delta ^2}t' - \frac{1}{{4t'}}}}}}{{{{(t')}^{\frac{N}{2} + 1}}}}} \mathrm{d}t'} \right) \frac{1}{{{\delta ^{N - 1}}}}\int _{\partial {B_\delta }(x)} {h(y)} \mathrm{d}S(y) \nonumber \\&\quad = \frac{1}{{2{{(4\pi )}^{N/2}}}}\left( {\int _0^{ + \infty } {\frac{{{e^{ - \frac{1}{{4t'}}}}}}{{{{(t')}^{\frac{N}{2} + 1}}}}} \mathrm{d}t'} \right) {S_N}h(x) \nonumber \\&\quad \mathop =\limits ^{t = 1/4t'} \frac{1}{{2{\pi ^{N/2}}}}\left( {\int _0^{ + \infty } {{e^{ - t}}{t^{\frac{N}{2} - 1}}} \mathrm{d}t} \right) {S_N}h(x) \nonumber \\&\quad = \frac{1}{{2{\pi ^{N/2}}}}\Gamma \left( {\frac{N}{2}} \right) {S_N}h(x) = h(x), \end{aligned}$$
(2.17)

where \({S_N}\) is the surface area of the unit sphere \({\partial {B_1 }(0)}\) in \({{\mathbb {R}}^N}\). Since \(- \Delta {{\mathcal {K}}_{\lambda _i}} + {\lambda _i}{{\mathcal {K}}_{\lambda _i}} = 0\) away from 0, plugging (2.13)–(2.17) into (2.12) and letting \(\delta \rightarrow 0\), we see that

$$\begin{aligned} - \Delta {g_{{\lambda _i}}} + {\lambda _i}{g_{{\lambda _i}}} = h, \end{aligned}$$

then

$$\begin{aligned}&{\Delta ^2}u - \beta \Delta u + {c_0}u \\&\quad = \frac{1}{{\sqrt{{\beta ^2} - 4{c_0}} }}\left( ( - \Delta + {\lambda _2}Id)( - \Delta + {\lambda _1}Id){g_{{\lambda _1}}}\right. \\&\qquad \left. - ( - \Delta + {\lambda _1}Id)( - \Delta + {\lambda _2}Id){g_{{\lambda _2}}} \right) \\&\quad = \frac{1}{{\sqrt{{\beta ^2} - 4{c_0}} }}\left( {{\lambda _2} - {\lambda _1}} \right) h = h, \end{aligned}$$

this proves the claim. Consequently, for any ball \({B_R}(0)\),

$$\begin{aligned} \int _{{B_R}(0)} {{{({\Delta ^2}u - \beta \Delta u + {c_0}u)}^2}} = \int _{{B_R}(0)} {{h^2}}. \end{aligned}$$
(2.18)

Integrating by parts, we obtain

$$\begin{aligned} \int _{{B_R}(0)} {{\Delta ^2}u \cdot \Delta u}= & {} - \int _{{B_R}(0)} {|\nabla (\Delta u){|^2}} + \int _{\partial {B_R}(0)} {\frac{{\partial \Delta u}}{{\partial \nu '}}\Delta u}, \end{aligned}$$
(2.19)
$$\begin{aligned} \int _{{B_R}(0)} {\Delta u \cdot u}= & {} - \int _{{B_R}(0)} {|\nabla u{|^2}} + \int _{\partial {B_R}(0)} {\frac{{\partial u}}{{\partial \nu '}}u}, \end{aligned}$$
(2.20)

and

$$\begin{aligned} \int _{{B_R}(0)} {{\Delta ^2}u \cdot u} = \int _{{B_R}(0)} {|\Delta u{|^2}} - \int _{\partial {B_R}(0)} {\frac{{\partial u}}{{\partial \nu '}}\Delta u} + \int _{\partial {B_R}(0)} {\frac{{\partial \Delta u}}{{\partial \nu '}}u,} \end{aligned}$$
(2.21)

where \({\nu '}\) is the outward-pointing unit normal vector field along \({\partial {B_R}(0)}\). Assume that \({\text {supp}}\,h \subset {B_{{R_0}}}(0)\). For \(R > 2{R_0}\) and \(x \in \partial {B_R}(0)\), arguing similarly to (2.15), we see that for \(k \in {\mathbb {N}}\),

$$\begin{aligned} |{D^k}u| \le C\int _{{B_{{R_0}}}(0)} {|{D^k}{\mathcal {K}}(x - y)| \cdot |h(y)|} \mathrm{d}y \le C/{R^{N - 2 + k}}. \end{aligned}$$

Letting \(R \rightarrow \infty \) in (2.18)–(2.21), we get

$$\begin{aligned} {\Vert u \Vert _{{H^4}({{\mathbb {R}}^N})}} \le C{\Vert h \Vert _{{L^2}({{\mathbb {R}}^N})}}. \end{aligned}$$
(2.22)

Fixing \(1 \le i,j,k,l \le N\), we define the linear operator \(T:C_c^\infty ({{\mathbb {R}}^N}) \rightarrow {C^\infty }({{\mathbb {R}}^N})\) by

$$\begin{aligned} Th: = {D_{ijkl}}({\mathcal {K}} * h). \end{aligned}$$

Since \(C_c^\infty ({{\mathbb {R}}^N})\) is dense in \({L^2}({{\mathbb {R}}^N})\), by approximation and (2.22), we see that T extends uniquely to a bounded linear operator from \({L^2}({{\mathbb {R}}^N})\) to \({L^2}({{\mathbb {R}}^N})\). By the classical Calderón–Zygmund decomposition and the Marcinkiewicz interpolation theorem (see [24, Theorem 9.9]), we see that for \(1< p < \infty \),

$$\begin{aligned} {\Vert {Th} \Vert _{{L^p}({{\mathbb {R}}^N})}} \le C{\Vert h \Vert _{{L^p}({{\mathbb {R}}^N})}}, \end{aligned}$$

where \(C>0\) depends only on N and p. Moreover, since \({\mathcal {K}} \in {L^1}({{\mathbb {R}}^N})\), by Young’s inequality for convolution, we have

$$\begin{aligned} {\Vert {{\mathcal {K}} * h} \Vert _{{L^p}({{\mathbb {R}}^N})}} \le {\Vert {\mathcal {K}} \Vert _{{L^1}({{\mathbb {R}}^N})}}{\Vert h \Vert _{{L^p}({{\mathbb {R}}^N})}}. \end{aligned}$$

Hence

$$\begin{aligned} {\Vert u \Vert _{{W^{4,p}}({{\mathbb {R}}^N})}} \le C{\Vert h \Vert _{{L^p}({{\mathbb {R}}^N})}}. \end{aligned}$$
(2.23)
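The step from the bounds on \(Th = {D_{ijkl}}({\mathcal {K}} * h)\) and on \({\mathcal {K}} * h\) to the full \(W^{4,p}\) estimate (2.23) also requires control of the intermediate derivatives; one standard route, sketched here for the reader's convenience, is the Gagliardo–Nirenberg interpolation inequality applied to \(u = {\mathcal {K}} * h\):

$$\begin{aligned} {\Vert {{D^k}u} \Vert _{{L^p}({{\mathbb {R}}^N})}} \le C\Vert {{D^4}u} \Vert _{{L^p}({{\mathbb {R}}^N})}^{k/4}\Vert u \Vert _{{L^p}({{\mathbb {R}}^N})}^{1 - k/4} \le C\left( {{{\Vert {{D^4}u} \Vert }_{{L^p}({{\mathbb {R}}^N})}} + {{\Vert u \Vert }_{{L^p}({{\mathbb {R}}^N})}}} \right) ,\quad k = 1,2,3. \end{aligned}$$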

For any \(1< {s_1}< {s_2} < 2\), we define a cut-off function \(0 \le \eta \le 1 \) such that \(\eta = 1\) on \({B_{{s_1}}}(x)\), \(\eta = 0\) on \({{\mathbb {R}}^N}\backslash {B_{{s_2}}}(x)\) and \(|{D^k}\eta | \le C/{({s_2} - {s_1})^k}\), \(k \in {\mathbb {N}}\). Setting \(v = \eta u\), we see that v satisfies

$$\begin{aligned} {\Delta ^2}v - \beta \Delta v + {c_0}v = {{\bar{h}}}, \end{aligned}$$

where

$$\begin{aligned} {{\bar{h}}} = \eta h + 4\nabla \eta \nabla (\Delta u) + 6\Delta \eta \Delta u + 4\nabla (\Delta \eta )\nabla u + {\Delta ^2}\eta u - 2\beta \nabla \eta \nabla u - \beta \Delta \eta u. \end{aligned}$$

From (2.23) and the fact that \(1< {s_1}< {s_2} < 2\), we obtain

$$\begin{aligned} {\Vert u \Vert _{{W^{4,p}}({B_{{s_1}}}(x))}} \le C\left( {{{\Vert h \Vert }_{{L^p}({B_{{s_2}}}(x))}} + \sum \limits _{k = 0}^3 {\frac{1}{{{{({s_2} - {s_1})}^{4 - k}}}}{{\Vert {{D^k}u} \Vert }_{{L^p}({B_{{s_2}}}(x))}}} } \right) . \end{aligned}$$

By the interpolation inequality in Sobolev spaces (see [24, Theorem 7.28]), we see that

$$\begin{aligned} {\Vert u \Vert _{{W^{4,p}}({B_{{s_1}}}(x))}} \le \frac{1}{2}{\Vert u \Vert _{{W^{4,p}}({B_{{s_2}}}(x))}} + \frac{C}{{{{({s_2} - {s_1})}^4}}}{\Vert u \Vert _{{L^p}({B_{{s_2}}}(x))}} + C{\Vert h \Vert _{{L^p}({B_{{s_2}}}(x))}}.\nonumber \\ \end{aligned}$$
(2.24)

Letting \({t_0} = 1\) and \({t_{i + 1}} = {t_i} + (1 - \tau ){\tau ^i}\), where \(0< \tau < 1\) is to be fixed later, by (2.24),

$$\begin{aligned} {\Vert u \Vert _{{W^{4,p}}({B_{{t_i}}}(x))}}\le & {} \frac{1}{2}{\Vert u \Vert _{{W^{4,p}}({B_{{t_{i + 1}}}}(x))}} + \frac{C}{{{{(1 - \tau )}^4}{\tau ^{4i}}}}{\Vert u \Vert _{{L^p}({B_{{t_{i + 1}}}}(x))}} \nonumber \\&\quad + C{\Vert h \Vert _{{L^p}({B_{{t_{i + 1}}}}(x))}}. \end{aligned}$$
(2.25)

Iterating (2.25) n times, we have

$$\begin{aligned}&{\Vert u \Vert _{{W^{4,p}}({B_1}(x))}} \le \frac{1}{{{2^n}}}{\Vert u \Vert _{{W^{4,p}}({B_{{t_n}}}(x))}} \\&\quad + C\left[ {\frac{1}{{{{(1 - \tau )}^4}}}{{\Vert u \Vert }_{{L^p}({B_{{t_n}}}(x))}} + {{\Vert h \Vert }_{{L^p}({B_{{t_n}}}(x))}}} \right] \sum \limits _{i = 0}^{n - 1} {\frac{1}{{{2^i}}}{\tau ^{ - 4i}}}. \end{aligned}$$
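Two elementary facts, recorded here for the reader's convenience, govern the limit below: the radii \(t_n\) remain bounded, and the geometric factor sums explicitly,

$$\begin{aligned} {t_n} = 1 + (1 - \tau )\sum \limits _{i = 0}^{n - 1} {{\tau ^i}} = 2 - {\tau ^n} \le 2\quad {\text {and}}\quad \sum \limits _{i = 0}^\infty {\frac{1}{{{2^i}}}{\tau ^{ - 4i}}} = \frac{1}{{1 - \frac{1}{2}{\tau ^{ - 4}}}}\quad {\text {whenever }}\frac{1}{2}{\tau ^{ - 4}} < 1. \end{aligned}$$

In particular, \({\Vert u \Vert _{{W^{4,p}}({B_{{t_n}}}(x))}} \le {\Vert u \Vert _{{W^{4,p}}({B_2}(x))}}\) for all n.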

Choosing \(\tau \in (0,1)\) such that \(\frac{1}{2}{\tau ^{ - 4}} < 1\) and letting \(n \rightarrow \infty \), we get (2.1). \(\square \)

3 The Singularly Perturbed Problem

Making the change of variables \(v(x): = u(\varepsilon x)\), problem (1.1) can be rewritten as

$$\begin{aligned} {\Delta ^2}v - \beta \Delta v + V(\varepsilon x)v = |v{|^{p - 2}}v{\text { in }}{{\mathbb {R}}^N},{v \in {H^2}({{\mathbb {R}}^N})}. \end{aligned}$$
(3.1)

The energy functional corresponding to (3.1) is

$$\begin{aligned} {I_\varepsilon }(v) = \frac{1}{2}\int _{{{\mathbb {R}}^N}} {|\Delta v{|^2}} + \frac{1}{2}\beta \int _{{{\mathbb {R}}^N}} {|\nabla v{|^2}} + \frac{1}{2}\int _{{{\mathbb {R}}^N}} {V(\varepsilon x){v^2}} - \frac{1}{p}\int _{{{\mathbb {R}}^N}} {|v{|^p}},v \in {H_\varepsilon }, \end{aligned}$$

where \({H_\varepsilon }\) is the weighted Sobolev space

$$\begin{aligned} \left\{ {v \in {H^2}({{\mathbb {R}}^N}):\int _{{{\mathbb {R}}^N}} {V(\varepsilon x){v^2}} < \infty } \right\} \end{aligned}$$

equipped with the norm

$$\begin{aligned} {\Vert v \Vert _{{H_\varepsilon }}}: = {\left( {\int _{{{\mathbb {R}}^N}} {|\Delta v{|^2}} + \int _{{{\mathbb {R}}^N}} {V(\varepsilon x){v^2}} } \right) ^{1/2}}. \end{aligned}$$

Moreover, the norm of \({H_\varepsilon }\) is equivalent to that of \({H^2}({{\mathbb {R}}^N})\) owing to \(0 < {V_0} \le V \in {L^\infty }({{\mathbb {R}}^N})\) and the interpolation inequality \(\int _{{{\mathbb {R}}^N}} {|\nabla v{|^2}} = - \int _{{{\mathbb {R}}^N}} {v\Delta v} \le {\Vert v \Vert _{{L^2}}}{\Vert {\Delta v} \Vert _{{L^2}}}\). It will be convenient to consider mutually disjoint open sets \(\widetilde{{\Lambda ^k}}\) compactly containing \({{\Lambda ^k}}\) and satisfying \(V(x) > \mathop {\inf }_{\xi \in {\Lambda ^k}} V(\xi )\) for all \(x \in \overline{\widetilde{{\Lambda ^k}}} \backslash {\Lambda ^k}\). We assume that \({\text {dist}}\,(\widetilde{{\Lambda ^{{k_1}}}},\widetilde{{\Lambda ^{{k_2}}}}) > 0\) for \({k_1} \ne {k_2}\); this can be achieved by making \(\widetilde{{\Lambda ^k}}\) smaller if necessary. From now on, we define \(\Lambda = \cup _{k = 1}^K{\Lambda ^k}\), \({\widetilde{\Lambda }} = \cup _{k = 1}^K\widetilde{{\Lambda ^k}}\) and \({\mathcal {M}} = \mathop \cup \nolimits _{k = 1}^K {{\mathcal {M}}^k}\). Let \(V_0\) be as in \((V_1)\) and choose \(a > 0\) such that \({a^{p - 2}} < \frac{1}{{{l_0}}}{V_0}\) with \({l_0} > \frac{p}{{p - 2}}\). Following [1, 2, 15] with minor modifications, we define the truncated nonlinearity

$$\begin{aligned} {g_\varepsilon }(x,u): = \chi (\varepsilon x)|u{|^{p - 2}}u + (1 - \chi (\varepsilon x))\min \{ |u{|^{p - 2}},{a^{p - 2}}\} u \end{aligned}$$

and

$$\begin{aligned} g_\varepsilon ^k(x,u): = {\chi ^k}(\varepsilon x)|u{|^{p - 2}}u + (1 - {\chi ^k}(\varepsilon x))\min \{ |u{|^{p - 2}},{a^{p - 2}}\} u{\text { }}(1 \le k \le K), \end{aligned}$$

respectively, where \(0 \le {\chi ^k}(x) \le 1\) is a smooth function such that \({\chi ^k}(x) = 1\) on \({\Lambda ^k}\), \({\chi ^k}(x) = 0\) on \({{\mathbb {R}}^N}\backslash \widetilde{{\Lambda ^k}}\) and \(\chi (x): = \sum _{k = 1}^K {{\chi ^k}(x)} \). Moreover, we set

$$\begin{aligned} {G_\varepsilon }(x,u): = \int _0^u {{g_\varepsilon }(x,\tau )} d\tau ~\text {and}~G_\varepsilon ^k(x,u): = \int _0^u {g_\varepsilon ^k(x,\tau )} d\tau \end{aligned}$$

accordingly. Finally, the penalized functionals \({J_\varepsilon }\) and \(J_\varepsilon ^k\ (k = 1,\ldots ,K)\) on \({H_\varepsilon }\) are defined as

$$\begin{aligned} {J_\varepsilon }(v): = \frac{1}{2}\int _{{{\mathbb {R}}^N}} {|\Delta v{|^2}} + \frac{1}{2}\beta \int _{{{\mathbb {R}}^N}} {|\nabla v{|^2}} + \frac{1}{2}\int _{{{\mathbb {R}}^N}} {V(\varepsilon x){v^2}} - \int _{{{\mathbb {R}}^N}} {{G_\varepsilon }(x,v)} \end{aligned}$$

and

$$\begin{aligned} J_\varepsilon ^k(v): = \frac{1}{2}\int _{{{\mathbb {R}}^N}} {|\Delta v{|^2}} + \frac{1}{2}\beta \int _{{{\mathbb {R}}^N}} {|\nabla v{|^2}} + \frac{1}{2}\int _{{{\mathbb {R}}^N}} {V(\varepsilon x){v^2}} - \int _{{{\mathbb {R}}^N}} {G_\varepsilon ^k(x,v)}. \end{aligned}$$

As we shall see, this type of modification acts as a penalization forcing the concentration phenomena to occur inside \(\Lambda \). It is standard to see that the functionals \({J_\varepsilon }\) and \(J_\varepsilon ^k\ (k = 1,\ldots ,K)\) are in \({C^1}({H_\varepsilon },{\mathbb {R}})\). To find solutions to (3.1) which concentrate around \({\mathcal {M}}\) as \(\varepsilon \rightarrow 0\), we shall search for critical points \(v_\varepsilon \) of \({J_\varepsilon }\) for which \({g_\varepsilon }(x,{v_\varepsilon }) = |{v_\varepsilon }{|^{p - 2}}{v_\varepsilon }\). The following lemma says that \({J_\varepsilon }\) and \(J_\varepsilon ^k\ (k = 1,\ldots ,K)\) satisfy the Palais–Smale condition; it can be proved as in [15, Lemma 1.1], so we omit the proof.
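We note in passing why the constraint \({a^{p - 2}} < \frac{1}{{{l_0}}}{V_0}\) makes the truncation effective: wherever \(\chi (\varepsilon x) = 0\), the penalized nonlinearity is absorbed by the potential term, since

$$\begin{aligned} {g_\varepsilon }(x,u)u = \min \{ |u{|^{p - 2}},{a^{p - 2}}\} {u^2} \le {a^{p - 2}}{u^2} \le \frac{1}{{{l_0}}}V(\varepsilon x){u^2}\quad {\text {and}}\quad {G_\varepsilon }(x,u) \le \frac{1}{{2{l_0}}}V(\varepsilon x){u^2}. \end{aligned}$$

These elementary bounds are used repeatedly in the compactness arguments below.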

Lemma 3.1

For each fixed \(\varepsilon > 0\), let \(\{ {u_n}\} _{n = 1}^\infty \) be a sequence in \({H_\varepsilon }\) such that \({J_\varepsilon }({u_n})\) (or \(J_\varepsilon ^k({u_n})\)) is bounded and \({J'_\varepsilon }({u_n})\) (or \({\left( {J_\varepsilon ^k} \right) ^\prime }({u_n})\)) \(\rightarrow 0\). Then \(\{ {u_n}\} _{n = 1}^\infty \) has a convergent subsequence in \({H_\varepsilon }\).

Defining \(S_{\beta ,{m_{p(i)}}}^ + \) (or \(S_{\beta ,{m_{q(j)}}}^ - \)) to be the set of positive (or negative) ground state solutions U (or V) of \(({E_{\beta ,{m_{p(i)}}}})\) (or \(({E_{\beta ,{m_{q(j)}}}})\)) satisfying \(U(0) = \mathop {\max }_{x \in {{\mathbb {R}}^N}} U(x)\) (or \(V(0) = {\min _{x \in {{\mathbb {R}}^N}}}V(x)\)) and setting

$$\begin{aligned} {\delta _0}: = \frac{1}{{10}}\min \left\{ {{\text {dist}}\,\{ {\mathcal {M}},{{\mathbb {R}}^N}\backslash \Lambda \},\mathop {\min }\limits _{{k_1} \ne {k_2}} {\text {dist}}\,(\widetilde{{\Lambda ^{{k_1}}}},\widetilde{{\Lambda ^{{k_2}}}})} \right\} , \end{aligned}$$

we fix a cut-off function \(\varphi \in C_c^\infty ({{\mathbb {R}}^N},[0,1])\) such that \(\varphi (x) = 1\) for \(|x| \le {\delta _0} \), \(\varphi (x) = 0\) for \(|x| \ge 2{\delta _0}\), \(|\nabla \varphi | \le C/{\delta _0} \) and \(|\Delta \varphi | \le C/{({\delta _0})^2}\). For \(\varepsilon > 0\) small, we will find a solution of (3.1) near the set

$$\begin{aligned} \begin{array}{ll} {X_\varepsilon }: = &{}\Bigl \{ \sum \limits _{i = 1}^{{K_1}} {\varphi (\varepsilon x - {{{{\bar{z}}}}^i}){U^i}(x - ({{{{\bar{z}}}}^i}/\varepsilon ))} + \sum \limits _{j = 1}^{{K_2}} {\varphi (\varepsilon x - {{{{\tilde{z}}}}^j}){V^j}(x - ({{{{\tilde{z}}}}^j}/\varepsilon ))} \\ &{}:{{{{\bar{z}}}}^i} \in {({{\mathcal {M}}^{p(i)}})^{{\delta _0}}}{\text {,}}{{{{\tilde{z}}}}^j} \in {({{\mathcal {M}}^{q(j)}})^{{\delta _0}}}{\text { and }}{U^i} \in S_{\beta ,{m_{p(i)}}}^ +,{V^j} \in S_{\beta ,{m_{q(j)}}}^ - \Bigr \}. \\ \end{array} \end{aligned}$$

Here \({({{\mathcal {M}}^k})^{\delta _0} }: = \bigl \{ {y \in {{\mathbb {R}}^N}:\mathop {\inf }_{z \in {{\mathcal {M}}^k}} |y - z| \le {\delta _0} } \bigr \}\). Similarly, for \(A \subset {H_\varepsilon }\), we use the notation

$$\begin{aligned} {A^a}: = \bigl \{ {u \in {H_\varepsilon }:\mathop {\inf }\limits _{v \in A} {{\Vert {u - v} \Vert }_{{H_\varepsilon }}} \le a} \bigr \}. \end{aligned}$$

For each \(1 \le i \le {K_1}\) and \(1 \le j \le {K_2}\), let \(U_ * ^i\) (or \(V_ * ^j\)) be a positive (or negative) ground state solution of \(({E_{\beta ,{m_{p(i)}}}})\) (or \(({E_{\beta ,{m_{q(j)}}}})\)); then there is \({S_i} > 0\) (or \({T_j} > 0\)) such that \({I_{\beta ,{m_{p(i)}}}}({S_i}U_*^i) < - 1\) (or \({I_{\beta ,{m_{q(j)}}}}({T_j}V_*^j) < - 1\)). Moreover, we choose \(z_*^k \in {{\mathcal {M}}^k}\) for each \(1 \le k \le K\). We define

$$\begin{aligned}&U_{\varepsilon ,{{\bar{s}}}}^i(x): = \varphi (\varepsilon x - z_*^{p(i)}){{\bar{s}}}U_*^i(x - (z_*^{p(i)}/\varepsilon )),V_{\varepsilon ,{{\bar{t}}}}^j(x)\nonumber \\&\quad : = \varphi (\varepsilon x - z_*^{q(j)}){{\bar{t}}}V_*^j(x - (z_*^{q(j)}/\varepsilon )) \end{aligned}$$
(3.2)

for each \(\varepsilon > 0\) and \({{\bar{s}}}\), \({{\bar{t}}} > 0\). Noting that \({\text {supp}}U_{\varepsilon ,{{\bar{s}}}}^i \subset {\Lambda ^{p(i)}}/\varepsilon \) and \({\text {supp}}V_{\varepsilon ,{{\bar{t}}}}^j \subset {\Lambda ^{q(j)}}/\varepsilon \), direct calculations show that for each \(1 \le i \le {K_1}\),

$$\begin{aligned} J_\varepsilon ^{p(i)}(U_{\varepsilon ,{S_i}}^i) = {I_\varepsilon }(U_{\varepsilon ,{S_i}}^i) = {I_{\beta ,{m_{p(i)}}}}({S_i}U_*^i) + o(1)< - 1 + o(1) < - \frac{1}{2} \end{aligned}$$
(3.3)

for \(\varepsilon > 0\) small. Similarly, we also see that for each \(1 \le j \le {K_2}\),

$$\begin{aligned} J_\varepsilon ^{q(j)}(V_{\varepsilon ,{T_j}}^j) < - \frac{1}{2} \end{aligned}$$
(3.4)

for \(\varepsilon > 0\) small. We define

$$\begin{aligned} {{{{\tilde{c}}}}_\varepsilon }: = \mathop {\max }\limits _{(s,t) \in {{[0,1]}^K}} {J_\varepsilon }({\gamma _\varepsilon }(s,t)), \end{aligned}$$

where

$$\begin{aligned} {\gamma _\varepsilon }(s,t): = \sum _{i = 1}^{{K_1}} {U_{\varepsilon ,{s_i}{S_i}}^i} + \sum _{j = 1}^{{K_2}} {V_{\varepsilon ,{t_j}{T_j}}^j} \end{aligned}$$
(3.5)

for \((s,t): = ({s_1},\ldots ,{s_{{K_1}}},{t_1},\ldots ,{t_{{K_2}}}) \in {[0,1]^K}\). We have the following estimates:

Lemma 3.2

(i) \(\mathop {\lim }\limits _{\varepsilon \rightarrow 0} {{{{\tilde{c}}}}_\varepsilon } = \sum \limits _{k = 1}^K {{c_{\beta ,{m_k}}}}\);

(ii) \(\mathop {\lim }\limits _{\varepsilon \rightarrow 0} \mathop {\max }\limits _{(s,t) \in \partial {{[0,1]}^K}} {J_\varepsilon }({\gamma _\varepsilon }(s,t)) \le \sum \limits _{k = 1}^K {{c_{\beta ,{m_k}}}} - \sigma \),

where \(0< \sigma < \min \{ {c_{\beta ,{m_k}}}:k = 1,2,\ldots ,K\}\) is a fixed number.

Proof

Since for each \(1 \le {k_1},{k_2} \le K\) with \({k_1} \ne {k_2}\), \({\Lambda ^{{k_1}}} \cap {\Lambda ^{{k_2}}} = \emptyset \) and \({\text {supp}}U_{\varepsilon ,{s_i}{S_i}}^i \subset {\Lambda ^{p(i)}}/\varepsilon \), \({\text {supp}}V_{\varepsilon ,{t_j}{T_j}}^j \subset {\Lambda ^{q(j)}}/\varepsilon \), we see that

$$\begin{aligned} \begin{array}{ll} {{{{\tilde{c}}}}_\varepsilon }&{}= \sum \limits _{i = 1}^{{K_1}} {\mathop {\max }\limits _{{s_i} \in [0,1]} J_\varepsilon ^{p(i)}(U_{\varepsilon ,{s_i}{S_i}}^i)} + \sum \limits _{j = 1}^{{K_2}} {\mathop {\max }\limits _{{t_j} \in [0,1]} J_\varepsilon ^{q(j)}(V_{\varepsilon ,{t_j}{T_j}}^j)} \\ &{}= \sum \limits _{i = 1}^{{K_1}} {\mathop {\max }\limits _{{s_i} \in [0,1]} {I_{\beta ,{m_{p(i)}}}}({s_i}{S_i}U_*^i)} + \sum \limits _{j = 1}^{{K_2}} {\mathop {\max }\limits _{{t_j} \in [0,1]} {I_{\beta ,{m_{q(j)}}}}({t_j}{T_j}V_ * ^j)} + o(1) \\ &{}= \sum \limits _{k = 1}^K {{c_{\beta ,{m_k}}}} + o(1), \\ \end{array} \end{aligned}$$

so (i) holds. Moreover, (ii) follows immediately from (3.3) and (3.4). \(\square \)

Set

$$\begin{aligned} c_\varepsilon ^k: = \mathop {\inf }\limits _{\gamma \in \Gamma _\varepsilon ^k} \mathop {\max }\limits _{r \in [0,1]} J_\varepsilon ^k(\gamma (r)), \end{aligned}$$

where

$$\begin{aligned} \begin{array}{ll} \Gamma _\varepsilon ^k: =&{} \{ \gamma \in C([0,1],{H_\varepsilon }):\gamma (0) = 0{\text { and }}\gamma (1) = U_{\varepsilon ,{S_i}}^i{\text { if }}k = p(i),i = 1,\ldots ,{K_1} \\ &{}{\text {or }}\gamma (1) = V_{\varepsilon ,{T_j}}^j{\text { if }}k = q(j),j = 1,\ldots ,{K_2}\}. \\ \end{array} \end{aligned}$$

We have the following estimates:

Lemma 3.3

For each \(1 \le k \le K\),

$$\begin{aligned} \mathop { {\lim } }\limits _{\varepsilon \rightarrow 0} c_\varepsilon ^k = {{c_{\beta ,{m_k}}}}. \end{aligned}$$

Proof

For each \(1 \le k \le K\), the upper estimate of the form

$$\begin{aligned} \mathop {{\overline{\lim }} }\limits _{\varepsilon \rightarrow 0} c_\varepsilon ^k \le {c_{\beta ,{m_k}}} \end{aligned}$$
(3.6)

follows immediately from the use of a test path constructed as in the proof of Lemma 3.2 (i).

On the other hand, we see from Lemma 3.1 that \(J_\varepsilon ^k\) satisfies the Palais–Smale condition on \({H_\varepsilon }\). By (3.3) and (3.4), the mountain pass theorem implies that for \(\varepsilon > 0\) small, \(c_\varepsilon ^k\) is a critical value of \(J_\varepsilon ^k\). Let \(w_\varepsilon ^k\) be an associated critical point. Using the definition of \(g_\varepsilon ^k\) and (3.6), we see that for \(\varepsilon > 0\) small,

$$\begin{aligned}&\int _{{{\mathbb {R}}^N}} {|\Delta w_\varepsilon ^k{|^2}} + \beta \int _{{{\mathbb {R}}^N}} {|\nabla w_\varepsilon ^k{|^2}} + \int _{{{\mathbb {R}}^N}} {V(\varepsilon x)|w_\varepsilon ^k{|^2}} \\&\quad \le C + 2\int _{{{\mathbb {R}}^N}} {G_\varepsilon ^k(x,w_\varepsilon ^k)} \\&\quad \le C + \frac{2}{p}\int _{{{\mathbb {R}}^N}} {{\chi ^k}(\varepsilon x)|w_\varepsilon ^k{|^p}} + {a^{p - 2}}\int _{{{\mathbb {R}}^N}} {(1 - {\chi ^k}(\varepsilon x))|w_\varepsilon ^k{|^2}} \\&\quad \le C + \frac{2}{p}\int _{{{\mathbb {R}}^N}} {g_\varepsilon ^k(x,w_\varepsilon ^k)w_\varepsilon ^k} + \frac{1}{{{l_0}}}\int _{{{\mathbb {R}}^N}} {V(\varepsilon x)|w_\varepsilon ^k{|^2}} , \end{aligned}$$

combining with \(\left\langle {(J_\varepsilon ^k)'(w_\varepsilon ^k),w_\varepsilon ^k} \right\rangle = 0\), we obtain

$$\begin{aligned} \left( {\frac{{p - 2}}{p} - \frac{1}{{{l_0}}}} \right) \left( {\int _{{{\mathbb {R}}^N}} {|\Delta w_\varepsilon ^k{|^2}} + \beta \int _{{{\mathbb {R}}^N}} {|\nabla w_\varepsilon ^k{|^2}} + \int _{{{\mathbb {R}}^N}} {V(\varepsilon x)|w_\varepsilon ^k{|^2}} } \right) \le C \end{aligned}$$
(3.7)

for \(\varepsilon > 0\) small.

For any sequence \(\{ {\varepsilon _n}\} _{n = 1}^\infty \) with \({\varepsilon _n} \rightarrow 0\), we claim that, up to a subsequence, there exist \(\{ {y_n}\} _{n = 1}^\infty \subset {{\mathbb {R}}^N}\), \(R>0\) and \({\beta _0} > 0\) such that

$$\begin{aligned} \int _{{B_R}({y_n})} {{{\left| {w_{{\varepsilon _n}}^k} \right| }^2}} \ge {\beta _0}. \end{aligned}$$
(3.8)

Otherwise, by the vanishing theorem (see [22, Lemma I.1]), it follows that

$$\begin{aligned} \int _{{{\mathbb {R}}^N}} {{{\left| {w_{{\varepsilon _n}}^k} \right| }^q}} \rightarrow 0 \end{aligned}$$

as \(n \rightarrow \infty \) for all \(2< q < {2^ * }\). Combining \(\left\langle {(J_{{\varepsilon _n}}^k)'(w_{{\varepsilon _n}}^k),w_{{\varepsilon _n}}^k} \right\rangle = 0\) with the definition of \(g_\varepsilon ^k\), we see that \({\left\| {w_{{\varepsilon _n}}^k} \right\| _{{H_{{\varepsilon _n}}}}} = o(1)\), which contradicts \(J_{{\varepsilon _n}}^k(w_{{\varepsilon _n}}^k) = c_{{\varepsilon _n}}^k \ge {c_{\beta ,{V_0}}} > 0\).
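To spell out the last step: testing with \(w_{{\varepsilon _n}}^k\) and using the pointwise bound \(g_\varepsilon ^k(x,u)u \le |u{|^p} + \frac{1}{{{l_0}}}V(\varepsilon x){u^2}\), which follows from the definition of \(g_\varepsilon ^k\) and the choice of a, we obtain

$$\begin{aligned} \left( {1 - \frac{1}{{{l_0}}}} \right) \left\| {w_{{\varepsilon _n}}^k} \right\| _{{H_{{\varepsilon _n}}}}^2 \le \int _{{{\mathbb {R}}^N}} {|w_{{\varepsilon _n}}^k{|^p}} \rightarrow 0, \end{aligned}$$

where the convergence comes from the vanishing theorem applied with \(q = p \in (2,{2^ * })\).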

Moreover, we also have

$$\begin{aligned} {\text {dist}}\,({\varepsilon _n}{y_n},\widetilde{{\Lambda ^k}}) \le {\varepsilon _n}R. \end{aligned}$$
(3.9)

Indeed, for any fixed \(\delta > 0\), we define a smooth cut-off function \(0 \le \psi (x) \le 1\) such that \(\psi (x) = 0\) for \(x \in \widetilde{{\Lambda ^k}}\), \(\psi (x) = 1\) for \(x \in {{\mathbb {R}}^N}\backslash {(\widetilde{{\Lambda ^k}})^\delta }\), \(|\nabla \psi | \le C/\delta \) and \(|\Delta \psi | \le C/{\delta ^2}\). Using \(\left\langle {(J_{{\varepsilon _n}}^k)'(w_{{\varepsilon _n}}^k),w_{{\varepsilon _n}}^k\psi ({\varepsilon _n}x)} \right\rangle = 0\), the definition of \(g_\varepsilon ^k\) and the fact that \({\text {supp}}\,\psi ({\varepsilon _n}x) \cap (\widetilde{{\Lambda ^k}}/{\varepsilon _n}) = \emptyset \), we get

$$\begin{aligned}&\left( {1 - \frac{1}{{{l_0}}}} \right) {V_0}\int _{{{\mathbb {R}}^N}} {|w_{{\varepsilon _n}}^k{|^2}\psi ({\varepsilon _n}x)} \\&\quad \le \left( {1 - \frac{1}{{{l_0}}}} \right) \int _{{{\mathbb {R}}^N}} {V({\varepsilon _n}x)|w_{{\varepsilon _n}}^k{|^2}\psi ({\varepsilon _n}x)} \\&\quad \le - 2\int _{{{\mathbb {R}}^N}} {\Delta w_{{\varepsilon _n}}^k(\nabla w_{{\varepsilon _n}}^k \cdot \nabla \psi ({\varepsilon _n}x))} - \int _{{{\mathbb {R}}^N}} {\Delta w_{{\varepsilon _n}}^kw_{{\varepsilon _n}}^k\Delta \psi ({\varepsilon _n}x)} \\&\qquad - \beta \int _{{{\mathbb {R}}^N}} {w_{{\varepsilon _n}}^k(\nabla w_{{\varepsilon _n}}^k \cdot \nabla \psi ({\varepsilon _n}x))} \\&\quad \le \frac{C}{\delta }{\varepsilon _n} + \frac{C}{{{\delta ^2}}}\varepsilon _n^2. \end{aligned}$$

If there is a subsequence, still denoted by \(\{ {\varepsilon _n}\} _{n = 1}^\infty \), such that \({B_R}({y_n}) \cap ({(\widetilde{{\Lambda ^k}})^\delta }/{\varepsilon _n}) = \emptyset \), then

$$\begin{aligned} \int _{{B_R}({y_n})} {|w_{{\varepsilon _n}}^k{|^2}} \le \frac{C}{\delta }{\varepsilon _n} + \frac{C}{{{\delta ^2}}}\varepsilon _n^2, \end{aligned}$$

which contradicts (3.8). Thus, for \({\varepsilon _n} > 0\) small, \({B_R}({y_n}) \cap ({(\widetilde{{\Lambda ^k}})^\delta }/{\varepsilon _n}) \ne \emptyset \), which means that \({\text {dist}}\,({\varepsilon _n}{y_n},\widetilde{{\Lambda ^k}}) \le {\varepsilon _n}R + \delta \). Letting \(\delta \rightarrow {0^ + }\), we obtain (3.9).

Letting \(v_{{\varepsilon _n}}^k: = w_{{\varepsilon _n}}^k(x + {y_n})\), by (3.7), (3.8) and (3.9), we see that, up to a subsequence, \({\varepsilon _n}{y_n} \rightarrow {y^k} \in \overline{\widetilde{{\Lambda ^k}}} \), \(v_{{\varepsilon _n}}^k \rightharpoonup {v^k}\) in \({H^2}({{\mathbb {R}}^N})\), where \({v^k}\) is a nontrivial solution of

$$\begin{aligned} {\Delta ^2}u - \beta \Delta u + V({y^k})u = {g^k}(u), \end{aligned}$$
(3.10)

where

$$\begin{aligned} {g^k}(u) = {\chi ^k}({y^k})|u{|^{p - 2}}u + (1 - {\chi ^k}({y^k}))\min \{ |u{|^{p - 2}},{a^{p - 2}}\} u. \end{aligned}$$

We denote

$$\begin{aligned} {h_n}: = \frac{1}{2}\left( {|\Delta v_{{\varepsilon _n}}^k{|^2} + \beta |\nabla v_{{\varepsilon _n}}^k{|^2} + V({\varepsilon _n}x + {\varepsilon _n}{y_n})|v_{{\varepsilon _n}}^k{|^2}} \right) - G_{{\varepsilon _n}}^k(x + {y_n},v_{{\varepsilon _n}}^k). \end{aligned}$$

A standard argument shows that \(v_{{\varepsilon _n}}^k \rightarrow {v^k}\) in \(H_{{\text {loc}}}^2({{\mathbb {R}}^N})\). Thus, for each fixed \(R>0\),

$$\begin{aligned} \mathop {\lim }\limits _{n \rightarrow \infty } \int _{{{B_R}(0)}} {{h_n}} = \frac{1}{2}\int _{{B_R}(0)} {\left( {|\Delta {v^k}{|^2} + \beta |\nabla {v^k}{|^2} + V({y^k})|{v^k}{|^2}} \right) } - \int _{{B_R}(0)} {{G^k}({v^k})} ,\nonumber \\ \end{aligned}$$
(3.11)

where \({G^k}(u): = \int _0^u {{g^k}(s)} \mathrm{d}s\). Let \(0 \le {\varphi _R} \le 1\) be a smooth cut-off function such that \({\varphi _R} = 0\) on \({B_{R - 1}}(0)\), \({\varphi _R} = 1\) on \({{\mathbb {R}}^N}\backslash {B_R}(0)\), \(|\nabla {\varphi _R}| \le C\) and \(|\Delta {\varphi _R}| \le C\). We choose \({\varphi _R}v_{{\varepsilon _n}}^k\) as a test function for

$$\begin{aligned} {\Delta ^2}v_{{\varepsilon _n}}^k - \beta \Delta v_{{\varepsilon _n}}^k + V({\varepsilon _n}x + {\varepsilon _n}{y_n})v_{{\varepsilon _n}}^k = g_{{\varepsilon _n}}^k(x + {y_n},v_{{\varepsilon _n}}^k) \end{aligned}$$

to get

$$\begin{aligned} {E_n} + 2\int _{{{\mathbb {R}}^N}\backslash {B_R}(0)} {{h_n}} + \int _{{{\mathbb {R}}^N}\backslash {B_R}(0)} {2G_{{\varepsilon _n}}^k(x + {y_n},v_{{\varepsilon _n}}^k) - g_{{\varepsilon _n}}^k(x + {y_n},v_{{\varepsilon _n}}^k)v_{{\varepsilon _n}}^k} = 0,\nonumber \\ \end{aligned}$$
(3.12)

where

$$\begin{aligned} \begin{array}{ll} {E_n} =&{} \displaystyle \int _{{B_R}(0)\backslash {B_{R - 1}}(0)} {\Delta v_{{\varepsilon _n}}^k\Delta ({\varphi _R}v_{{\varepsilon _n}}^k)} + \beta \int _{{B_R}(0)\backslash {B_{R - 1}}(0)} {\nabla v_{{\varepsilon _n}}^k\nabla ({\varphi _R}v_{{\varepsilon _n}}^k)} \\ &{} \displaystyle + \int _{{B_R}(0)\backslash {B_{R - 1}}(0)} {V({\varepsilon _n}x + {\varepsilon _n}{y_n})|v_{{\varepsilon _n}}^k{|^2}{\varphi _R}} \\ &{} - \int _{{B_R}(0)\backslash {B_{R - 1}}(0)} {g_{{\varepsilon _n}}^k(x + {y_n},v_{{\varepsilon _n}}^k)v_{{\varepsilon _n}}^k{\varphi _R}}. \\ \end{array} \end{aligned}$$

The facts that \(v_{{\varepsilon _n}}^k \rightarrow {v^k}\) in \(H_{{\text {loc}}}^2({{\mathbb {R}}^N})\) and \({v^k} \in {H^2}({{\mathbb {R}}^N})\) imply that for any \(\delta > 0\), there exists \(R > 0\) such that \(\mathop {{\overline{\lim }} }\nolimits _{n \rightarrow \infty } |{E_n}| \le \delta \). On the other hand, the definition of \(g_\varepsilon ^k\) gives \(2G_{{\varepsilon _n}}^k(x + {y_n},v_{{\varepsilon _n}}^k) - g_{{\varepsilon _n}}^k(x + {y_n},v_{{\varepsilon _n}}^k)v_{{\varepsilon _n}}^k \le 0\). Using this in (3.12) and combining with (3.11), we have \(\mathop {{\underline{\lim }} }\nolimits _{n \rightarrow \infty } J_{{\varepsilon _n}}^k(w_{{\varepsilon _n}}^k) \ge {J^k}({v^k})\), where \({J^k}\) is the functional corresponding to (3.10). Since \(V({y^k}) \ge {m_k}\) and \({G^k}({v^k}) \le \frac{1}{p}|{v^k}{|^p}\), we have \({J^k}({v^k}) \ge {c_{\beta ,{m_k}}}\). The arbitrariness of \(\{ {\varepsilon _n}\} _{n = 1}^\infty \) implies that \(\mathop {{\underline{\lim }} }\nolimits _{\varepsilon \rightarrow 0} c_\varepsilon ^k \ge {c_{\beta ,{m_k}}}\). This finishes the proof. \(\square \)

The following lemma is key to the proof of Theorem 1.1:

Lemma 3.4

For each \({d_0} > 0\) small and \(\{ {\varepsilon _n}\} _{n = 1}^\infty \), \(\{ {u_{{\varepsilon _n}}}\} _{n = 1}^\infty \) satisfying

$$\begin{aligned} \mathop {\lim }\limits _{n \rightarrow \infty } {\varepsilon _n} = 0,{u_{{\varepsilon _n}}} \in X_{{\varepsilon _n}}^{{d_0}},\mathop {\lim }\limits _{n \rightarrow \infty } {J_{{\varepsilon _n}}}({u_{{\varepsilon _n}}}) \le \sum \limits _{k = 1}^K {{c_{\beta ,{m_k}}}} {\text { and }}\mathop {\lim }\limits _{n \rightarrow \infty } {\Vert {{J'_{{\varepsilon _n}}}({u_{{\varepsilon _n}}})} \Vert _{{{({H_{{\varepsilon _n}}})}^{ - 1}}}} = 0, \end{aligned}$$

there exist, up to a subsequence, \(\{ y_{{\varepsilon _n}}^{p(i)}\} _{n = 1}^\infty \subset {{\mathbb {R}}^N}\), \({z^{p(i)}} \in {{\mathcal {M}}^{p(i)}}\), \({U^i} \in S_{\beta ,{m_{p(i)}}}^ + \) \((1 \le i \le {K_1})\) and \(\{ y_{{\varepsilon _n}}^{q(j)}\} _{n = 1}^\infty \subset {{\mathbb {R}}^N}\), \({z^{q(j)}} \in {{\mathcal {M}}^{q(j)}}\), \({V^j} \in S_{\beta ,{m_{q(j)}}}^ - \) \((1 \le j \le {K_2})\) such that

$$\begin{aligned} \mathop {\lim }\limits _{n \rightarrow \infty } |{\varepsilon _n}y_{{\varepsilon _n}}^{p(i)} - {z^{p(i)}}| = 0,{\text { }}\mathop {\lim }\limits _{n \rightarrow \infty } |{\varepsilon _n}y_{{\varepsilon _n}}^{q(j)} - {z^{q(j)}}| = 0 \end{aligned}$$

and

$$\begin{aligned}&\mathop {\lim }\limits _{n \rightarrow \infty } \left\| {u_{{\varepsilon _n}}} - \sum \limits _{i = 1}^{{K_1}} {\varphi ({\varepsilon _n}x - {\varepsilon _n}y_{{\varepsilon _n}}^{p(i)}){U^i}(x - y_{{\varepsilon _n}}^{p(i)})} \right. \\&\quad \left. - \sum \limits _{j = 1}^{{K_2}} {\varphi ({\varepsilon _n}x - {\varepsilon _n}y_{{\varepsilon _n}}^{q(j)}){V^j}(x - y_{{\varepsilon _n}}^{q(j)})} \right\| _{{H_{{\varepsilon _n}}}} = 0. \end{aligned}$$

Proof

For notational simplicity, we write \(\varepsilon \) for \({\varepsilon _n}\) and still use \(\varepsilon \) after taking a subsequence. By the definition of \(X_\varepsilon ^{{d_0}}\) and the compactness of \(S_{\beta ,{m_{p(i)}}}^ + \), \(S_{\beta ,{m_{q(j)}}}^ - \) and \({({{\mathcal {M}}^k})^{{\delta _0}}}\), we see that there exist \({{{{\bar{W}}}}^i} \in S_{\beta ,{m_{p(i)}}}^ + \), \({{{{\tilde{W}}}}^j} \in S_{\beta ,{m_{q(j)}}}^ - \), \({\{ z_\varepsilon ^{p(i)}\} _{\varepsilon > 0}} \subset {({{\mathcal {M}}^{p(i)}})^{{\delta _0}}}\), \({\{ z_\varepsilon ^{q(j)}\} _{\varepsilon > 0}} \subset {({{\mathcal {M}}^{q(j)}})^{{\delta _0}}}\) such that for \(\varepsilon > 0\) small and \(1 \le i \le {K_1}\), \(1 \le j \le {K_2}\),

$$\begin{aligned}&\left\| {u_\varepsilon } - \sum \limits _{i = 1}^{{K_1}} {\varphi (\varepsilon x - z_\varepsilon ^{p(i)}){{{{\bar{W}}}}^i}(x - (z_\varepsilon ^{p(i)}/\varepsilon ))} \right. \nonumber \\&\quad \left. - \sum \limits _{j = 1}^{{K_2}} {\varphi (\varepsilon x - z_\varepsilon ^{q(j)}){{{{\tilde{W}}}}^j}(x - (z_\varepsilon ^{q(j)}/\varepsilon ))} \right\| _{{H_\varepsilon }} \le 2{d_0} \end{aligned}$$
(3.13)

and

$$\begin{aligned} z_\varepsilon ^{p(i)} \rightarrow {z^{p(i)}} \in {({{\mathcal {M}}^{p(i)}})^{{\delta _0}}}{\text { and }}z_\varepsilon ^{q(j)} \rightarrow {z^{q(j)}} \in {({{\mathcal {M}}^{q(j)}})^{{\delta _0}}}{\text { as }}\varepsilon \rightarrow 0. \end{aligned}$$
(3.14)

Step 1: We claim that

$$\begin{aligned} \mathop {\lim }\limits _{\varepsilon \rightarrow 0} \mathop {\sup }\limits _{y \in {A_\varepsilon }} \int _{{B_1}(y)} {|{u_\varepsilon }{|^2}} = 0, \end{aligned}$$
(3.15)

where \({A_\varepsilon } = \cup _{k = 1}^K({B_{3{\delta _0} /\varepsilon }}(z_\varepsilon ^k/\varepsilon )\backslash {B_{{\delta _0} /2\varepsilon }}(z_\varepsilon ^k/\varepsilon ))\).

Assume on the contrary that there exists \(r>0\) such that

$$\begin{aligned} \mathop {{\underline{\lim }} }\limits _{\varepsilon \rightarrow 0} \mathop {\sup }\limits _{y \in {A_\varepsilon }} \int _{{B_1}(y)} {|{u_\varepsilon }{|^2}} = 2r > 0, \end{aligned}$$

then there exists \({y_\varepsilon } \in {A_\varepsilon }\) such that for \(\varepsilon > 0\) small,

$$\begin{aligned} \int _{{B_1}({y_\varepsilon })} {|{u_\varepsilon }{|^2}} \ge r > 0. \end{aligned}$$
(3.16)

Setting \({v_\varepsilon }(x): = {u_\varepsilon }(x + {y_\varepsilon })\), we see that, up to a subsequence, there exists \(v \in {H^2}({{\mathbb {R}}^N})\backslash \{ 0\} \) such that \({v_\varepsilon } \rightharpoonup v\) in \({H^2}({{\mathbb {R}}^N})\) and \(\varepsilon {y_\varepsilon } \rightarrow {x_0} \in \overline{ \cup _{k = 1}^K({B_{3{\delta _0}}}({z^k})\backslash {B_{{\delta _0}/2}}({z^k}))} \subset {({\mathcal {M}})^{4{\delta _0}}} \subset \Lambda \). Moreover, we see that v satisfies \(({E_{\beta ,V({x_0})}})\). Since

$$\begin{aligned} \begin{array}{ll} {c_{\beta ,V({x_0})}} &{}\displaystyle \le {I_{\beta ,V({x_0})}}(v) -\displaystyle \frac{1}{p}\left\langle {{I'_{\beta ,V({x_0})}}(v),v} \right\rangle \\ &{}\displaystyle = \left( {\frac{1}{2} - \frac{1}{p}} \right) \left( {\int _{{{\mathbb {R}}^N}} {|\Delta v{|^2}} + \beta \int _{{{\mathbb {R}}^N}} {|\nabla v{|^2}} + V({x_0}) \int _{{{\mathbb {R}}^N}} {|v{|^2}} } \right) \\ \end{array} \end{aligned}$$

then for \(R>0\) large,

$$\begin{aligned}&\mathop {{\underline{\lim }} }\limits _{\varepsilon \rightarrow 0} \left( {\frac{1}{2} - \frac{1}{p}} \right) \left( {\int _{{B_R}({y_\varepsilon })} {|\Delta {u_\varepsilon }{|^2}} + \beta \int _{{B_R}({y_\varepsilon })} {|\nabla {u_\varepsilon }{|^2}} + V({x_0}) \int _{{B_R}({y_\varepsilon })} {|{u_\varepsilon }{|^2}} } \right) \nonumber \\&\quad = \mathop {{\underline{\lim }} }\limits _{\varepsilon \rightarrow 0} \left( {\frac{1}{2} - \frac{1}{p}} \right) \left( {\int _{{B_R}(0)} {|\Delta {v_\varepsilon }{|^2}} + \beta \int _{{B_R}(0)} {|\nabla {v_\varepsilon }{|^2}} + V({x_0}) \int _{{B_R}(0)} {|{v_\varepsilon }{|^2}} } \right) \nonumber \\&\quad \ge \left( {\frac{1}{2} - \frac{1}{p}} \right) \left( {\int _{{B_R}(0)} {|\Delta v{|^2}} + \beta \int _{{B_R}(0)} {|\nabla v{|^2}} + V({x_0}) \int _{{B_R}(0)} {|v{|^2}} } \right) \ge \frac{1}{2}{c_{\beta ,V({x_0})}} \nonumber \\&\quad > 0. \end{aligned}$$
(3.17)

On the other hand, by (3.13) and Sobolev’s imbedding theorem, we have

$$\begin{aligned}&\int _{{B_R}({y_\varepsilon })} {|\Delta {u_\varepsilon }{|^2}} + \beta \int _{{B_R}({y_\varepsilon })} {|\nabla {u_\varepsilon }{|^2}} + V({x_0})\int _{{B_R}({y_\varepsilon })} {|{u_\varepsilon }{|^2}} \\&\quad \le C\sum \limits _{i = 1}^{{K_1}} {\int _{{B_R}({y_\varepsilon } - (z_\varepsilon ^{p(i)}/\varepsilon ))} {|\Delta {{{{\bar{W}}}}^i}{|^2} + |\nabla {{{{\bar{W}}}}^i}{|^2} + |{{{{\bar{W}}}}^i}{|^2}} } \\&\qquad + C\sum \limits _{j = 1}^{{K_2}} {\int _{{B_R}({y_\varepsilon } - (z_\varepsilon ^{q(j)}/\varepsilon ))} {|\Delta {{{{\tilde{W}}}}^j}{|^2} + |\nabla {{{{\tilde{W}}}}^j}{|^2} + |{{{{\tilde{W}}}}^j}{|^2}} } + C{d_0} + o(1) \\&\quad = C{d_0} + o(1), \end{aligned}$$

where \(o(1) \rightarrow 0\) as \(\varepsilon \rightarrow 0\) and we have used the fact that \(|{y_\varepsilon } - (z_\varepsilon ^k/\varepsilon )| \ge {\delta _0} /2\varepsilon \). This contradicts (3.17) for \(d_0\) small. Hence, (3.15) holds.

Since

$$\begin{aligned} \mathop {\sup }\limits _{y \in {A_\varepsilon }} \int _{{B_1}(y)} {|{u_\varepsilon }{|^2}} \ge \mathop {\sup }\limits _{y \in {{\mathbb {R}}^N}} \int _{{B_1}(y)} {|{\eta _\varepsilon }{u_\varepsilon }{|^2}} , \end{aligned}$$

where \({\eta _\varepsilon } \in C_c^\infty ({{\mathbb {R}}^N},[0,1])\) is such that \({\eta _\varepsilon }(x) = 1\) for \(x \in \mathop \cup \nolimits _{k = 1}^K ({B_{(3{\delta _0} /\varepsilon ) - 2}}(z_\varepsilon ^k/\varepsilon )\backslash \)\({B_{({\delta _0} /2\varepsilon ) + 2}}(z_\varepsilon ^k/\varepsilon ))\), \({\text {supp}}{\eta _\varepsilon } \subset \mathop \cup \nolimits _{k = 1}^K ({B_{(3{\delta _0} /\varepsilon ) - 1}}(z_\varepsilon ^k/\varepsilon )\backslash {B_{({\delta _0} /2\varepsilon ) + 1}}(z_\varepsilon ^k/\varepsilon ))\), \(|\nabla {\eta _\varepsilon }| \le C\) and \(|\Delta {\eta _\varepsilon }| \le C\), we deduce from (3.15), the boundedness of \({\{ {\eta _\varepsilon }{u_\varepsilon }\} _{\varepsilon > 0}}\) in \({H^2}({{\mathbb {R}}^N})\) and the vanishing theorem (see [22, Lemma I.1]) that for \(2< q < {2^ * }\),

$$\begin{aligned} \mathop {\lim }\limits _{\varepsilon \rightarrow 0} \int _{\mathop \cup \limits _{k = 1}^K ({B_{2{\delta _0} /\varepsilon }}(z_\varepsilon ^k/\varepsilon )\backslash {B_{{\delta _0} /\varepsilon }}(z_\varepsilon ^k/\varepsilon ))} {|{u_\varepsilon }{|^q}} = 0. \end{aligned}$$
(3.18)

Step 2: Let \({u_{\varepsilon ,1}}(x): = \sum \nolimits _{k = 1}^K {u_{\varepsilon ,1}^k(x)} : = \sum \nolimits _{k = 1}^K {\varphi (\varepsilon x - z_\varepsilon ^k){u_\varepsilon }(x)} \) and \({u_{\varepsilon ,2}}(x): = {u_\varepsilon }(x) - {u_{\varepsilon ,1}}(x)\). By (3.18) and the boundedness of \({\{ {u_\varepsilon }\} _{\varepsilon > 0}}\) in \({H_\varepsilon }\), we see that

$$\begin{aligned} \int _{{{\mathbb {R}}^N}} {|\Delta {u_\varepsilon }{|^2}}\ge & {} \int _{{{\mathbb {R}}^N}} {|\Delta u_{\varepsilon ,1}{|^2}} + \int _{{{\mathbb {R}}^N}} {|\Delta u_{\varepsilon ,2}{|^2}} + o(1), \end{aligned}$$
(3.19)
$$\begin{aligned} \int _{{{\mathbb {R}}^N}} {|\nabla {u_\varepsilon }{|^2}}\ge & {} \int _{{{\mathbb {R}}^N}} {|\nabla u_{\varepsilon ,1}{|^2}} + \int _{{{\mathbb {R}}^N}} {|\nabla u_{\varepsilon ,2}{|^2}} + o(1), \end{aligned}$$
(3.20)
$$\begin{aligned} \int _{{{\mathbb {R}}^N}} {V(\varepsilon x)|{u_\varepsilon }{|^2}}\ge & {} \int _{{{\mathbb {R}}^N}} {V(\varepsilon x)|u_{\varepsilon ,1}{|^2}} + \int _{{{\mathbb {R}}^N}} {V(\varepsilon x)|u_{\varepsilon ,2}{|^2}} , \end{aligned}$$
(3.21)
$$\begin{aligned} \int _{{{\mathbb {R}}^N}} {{G_\varepsilon }(x,{u_\varepsilon })}= & {} \int _{{{\mathbb {R}}^N}} {{G_\varepsilon }(x,{u_{\varepsilon ,1}})} + \int _{{{\mathbb {R}}^N}} {{G_\varepsilon }(x,{u_{\varepsilon ,2}})} + o(1). \end{aligned}$$
(3.22)

From (3.19)–(3.22), we infer that

$$\begin{aligned} {J_\varepsilon }({u_\varepsilon }) \ge {J_\varepsilon }(u_{\varepsilon ,1}) + {J_\varepsilon }(u_{\varepsilon ,2}) + o(1). \end{aligned}$$
(3.23)

By (3.13), it follows that

$$\begin{aligned}&{\left\| {{u_{\varepsilon ,2}}} \right\| _{{H_\varepsilon }}} \nonumber \\&\quad \le \left\| {u_{\varepsilon ,1}} - \sum \limits _{i = 1}^{{K_1}} {\varphi (\varepsilon x - z_\varepsilon ^{p(i)}){{{{\bar{W}}}}^i}(x - (z_\varepsilon ^{p(i)}/\varepsilon ))} \right. \nonumber \\&\qquad \left. - \sum \limits _{j = 1}^{{K_2}} {\varphi (\varepsilon x - z_\varepsilon ^{q(j)}){{{{\tilde{W}}}}^j}(x - (z_\varepsilon ^{q(j)}/\varepsilon ))} \right\| _{{H_\varepsilon }} + 2{d_0} \nonumber \\&\quad \le {\left\| {{u_{\varepsilon ,2}}} \right\| _{{H_\varepsilon }\left( {\mathop \cup \limits _{k = 1}^K {B_{2{\delta _0}/\varepsilon }}(z_\varepsilon ^k/\varepsilon )} \right) }} + 4{d_0} \nonumber \\&\quad \le C{\left\| {{u_\varepsilon }} \right\| _{{H_\varepsilon }\left( {\mathop \cup \limits _{k = 1}^K \left( {{B_{2{\delta _0}/\varepsilon }}(z_\varepsilon ^k/\varepsilon )\backslash {B_{{\delta _0}/\varepsilon }}(z_\varepsilon ^k/\varepsilon )} \right) } \right) }} + 4{d_0} \nonumber \\&\quad \le C{\sum \limits _{i = 1}^{{K_1}} {\left\| {\varphi (\varepsilon x - z_\varepsilon ^{p(i)}){{{{\bar{W}}}}^i}(x - (z_\varepsilon ^{p(i)}/\varepsilon ))} \right\| } _{{H^2}({B_{2{\delta _0}/\varepsilon }}(z_\varepsilon ^{p(i)}/\varepsilon )\backslash {B_{{\delta _0}/\varepsilon }}(z_\varepsilon ^{p(i)}/\varepsilon ))}} \nonumber \\&\qquad + C{\sum \limits _{j = 1}^{{K_2}} {\left\| {\varphi (\varepsilon x - z_\varepsilon ^{q(j)}){{{{\tilde{W}}}}^j}(x - (z_\varepsilon ^{q(j)}/\varepsilon ))} \right\| } _{{H^2}({B_{2{\delta _0}/\varepsilon }}(z_\varepsilon ^{q(j)}/\varepsilon )\backslash {B_{{\delta _0}/\varepsilon }}(z_\varepsilon ^{q(j)}/\varepsilon ))}} + C{d_0} \nonumber \\&\quad \le C{\sum \limits _{i = 1}^{{K_1}} {\left\| {{{{{\bar{W}}}}^i}} \right\| } _{{H^2}({B_{2{\delta _0}/\varepsilon }}(0)\backslash {B_{{\delta _0}/\varepsilon }}(0))}} + C{\sum \limits _{j = 1}^{{K_2}} {\left\| {{{{{\tilde{W}}}}^j}} \right\| } _{{H^2}({B_{2{\delta _0}/\varepsilon }}(0)\backslash {B_{{\delta _0}/\varepsilon }}(0))}} \nonumber \\&\qquad + C{d_0} = C{d_0} + o(1), \end{aligned}$$

where \(o(1) \rightarrow 0\) as \(\varepsilon \rightarrow 0\). Thus, \(\mathop {{\overline{\lim }} }\limits _{\varepsilon \rightarrow 0} {\Vert {u_{\varepsilon ,2}} \Vert _{{H_\varepsilon }}} \le C{d_0}\).

On the other hand, since \(\langle {{J'_\varepsilon }({u_\varepsilon }),{u_{\varepsilon ,2}}} \rangle \rightarrow 0\) as \(\varepsilon \rightarrow 0\), we deduce from (3.2) and Sobolev’s imbedding theorem that

$$\begin{aligned} \Vert {{u_{\varepsilon ,2}}} \Vert _{{H_\varepsilon }}^2 \le C\Vert {{u_{\varepsilon ,2}}} \Vert _{{H_\varepsilon }}^p + o(1). \end{aligned}$$

Choosing \({d_0} > 0\) small, we see that \({\Vert {{u_{\varepsilon ,2}}} \Vert _{{H_\varepsilon }}} = o(1)\). Then, by (3.23),

$$\begin{aligned} {J_\varepsilon }({u_\varepsilon }) \ge {J_\varepsilon }(u_{\varepsilon ,1}) + o(1). \end{aligned}$$
(3.24)
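For clarity, since \(p > 2\) and \(\mathop {{\overline{\lim }} }\nolimits _{\varepsilon \rightarrow 0} {\Vert {{u_{\varepsilon ,2}}} \Vert _{{H_\varepsilon }}} \le C{d_0}\), the smallness argument just used can be written out as follows, with \(C\) denoting the constants above (possibly changing from line to line):

$$\begin{aligned} \Vert {{u_{\varepsilon ,2}}} \Vert _{{H_\varepsilon }}^2 \le C\Vert {{u_{\varepsilon ,2}}} \Vert _{{H_\varepsilon }}^{p - 2}\Vert {{u_{\varepsilon ,2}}} \Vert _{{H_\varepsilon }}^2 + o(1) \le C{(C{d_0})^{p - 2}}\Vert {{u_{\varepsilon ,2}}} \Vert _{{H_\varepsilon }}^2 + o(1), \end{aligned}$$

so that, once \({d_0}\) is chosen with \(C{(C{d_0})^{p - 2}} \le \frac{1}{2}\), we conclude \(\Vert {{u_{\varepsilon ,2}}} \Vert _{{H_\varepsilon }}^2 \le 2o(1) = o(1)\).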

Step 3: For each \(1 \le k \le K\), let \({{\tilde{w}}}_\varepsilon ^k(x): = u_{\varepsilon ,1}^k(x + (z_\varepsilon ^k/\varepsilon )) = \varphi (\varepsilon x){u_\varepsilon }(x + (z_\varepsilon ^k/\varepsilon ))\). Then, up to a subsequence, \(\exists {{{{\tilde{w}}}}^k} \in {H^2}({{\mathbb {R}}^N})\) such that \({{\tilde{w}}}_\varepsilon ^k \rightharpoonup {{{{\tilde{w}}}}^k}\) in \({H^2}({{\mathbb {R}}^N})\) as \(\varepsilon \rightarrow 0\). Next, we claim that

$$\begin{aligned} {{\tilde{w}}}_\varepsilon ^k \rightarrow {{{{\tilde{w}}}}^k}{\text { in }}{L^q}({{\mathbb {R}}^N}){\text { for }}q \in (2,{2^ * }). \end{aligned}$$
(3.25)

If not, by the vanishing theorem (see [22, Lemma I.1]), \(\exists r > 0\) such that

$$\begin{aligned} \mathop {{\underline{\lim }} }\limits _{\varepsilon \rightarrow 0} \mathop {\sup }\limits _{x \in {{\mathbb {R}}^N}} \int _{{B_1}(x)} {|{{\tilde{w}}}_\varepsilon ^k - {{{{\tilde{w}}}}^k}{|^2}} = 2r > 0, \end{aligned}$$

then for \(\varepsilon > 0\) small, \(\exists x_\varepsilon ^k \in {{\mathbb {R}}^N}\) such that

$$\begin{aligned} \int _{{B_1}(x_\varepsilon ^k)} {|{{\tilde{w}}}_\varepsilon ^k - {{{{\tilde{w}}}}^k}{|^2}} \ge r > 0. \end{aligned}$$
(3.26)

There are two cases:

Case 1: \({\{ x_\varepsilon ^k\} _{\varepsilon > 0}}\) is bounded, that is, \(|x_\varepsilon ^k| \le {R_k}\) for some \({R_k} > 0\). Then for \(\varepsilon > 0\) small,

$$\begin{aligned} \int _{{B_{{R_k} + 1}}(0)} {|{{\tilde{w}}}_\varepsilon ^k - {{{{\tilde{w}}}}^k}{|^2}} \ge r > 0, \end{aligned}$$

which contradicts that \({{\tilde{w}}}_\varepsilon ^k \rightarrow {{{{\tilde{w}}}}^k}\) in \(L_{{\text {loc}}}^2({{\mathbb {R}}^N})\).

Case 2: \({\{ x_\varepsilon ^k\} _{\varepsilon > 0}}\) is unbounded. By (3.26),

$$\begin{aligned} \mathop {{\underline{\lim }} }\limits _{\varepsilon \rightarrow 0} \int _{{B_1}(x_\varepsilon ^k)} {|\varphi (\varepsilon x){u_\varepsilon }(x + (z_\varepsilon ^k/\varepsilon )){|^2}} \ge r > 0. \end{aligned}$$
(3.27)

Since \(\varphi (x) = 0\) for \(|x| \ge 2{\delta _0} \), we see that \(|{x_\varepsilon ^k}| \le 3{\delta _0} /\varepsilon \) for \(\varepsilon > 0\) small. Moreover, \(|{x_\varepsilon ^k}| \le {\delta _0} /2\varepsilon \) for \(\varepsilon > 0\) small. Indeed, if not, then \({x_\varepsilon ^k} \in {B_{3{\delta _0} /\varepsilon }}(0)\backslash {B_{{\delta _0} /2\varepsilon }}(0)\), and by (3.2),

$$\begin{aligned}&\mathop {{\underline{\lim }} }\limits _{\varepsilon \rightarrow 0} \int _{{B_1}(x_\varepsilon ^k)} {|\varphi (\varepsilon x){u_\varepsilon }(x + (z_\varepsilon ^k/\varepsilon )){|^2}} \\&\quad \le \mathop {{\underline{\lim }} }\limits _{\varepsilon \rightarrow 0} \mathop {\sup }\limits _{z \in {B_{3{\delta _0} /\varepsilon }}(0)\backslash {B_{{\delta _0} /2\varepsilon }}(0)} \int _{{B_1}(z)} {|{u_\varepsilon }(x + (z_\varepsilon ^k/\varepsilon )){|^2}} \\&\quad = \mathop {{\underline{\lim }} }\limits _{\varepsilon \rightarrow 0} \mathop {\sup }\limits _{y \in {B_{3{\delta _0} /\varepsilon }}(z_\varepsilon ^k/\varepsilon )\backslash {B_{{\delta _0} /2\varepsilon }}(z_\varepsilon ^k/\varepsilon )} \int _{{B_1}(y)} {|{u_\varepsilon }{|^2}} \\&\quad \le \mathop {{\underline{\lim }} }\limits _{\varepsilon \rightarrow 0} \mathop {\sup }\limits _{y \in {A_\varepsilon }} \int _{{B_1}(y)} {|{u_\varepsilon }{|^2}} = 0, \end{aligned}$$

which contradicts (3.27). Up to a subsequence, \(\varepsilon x_\varepsilon ^k \rightarrow {x^k} \in \overline{{B_{{\delta _0} /2}}(0)} \) and \({{\bar{w}}}_\varepsilon ^k(x): = {{\tilde{w}}}_\varepsilon ^k(x + x_\varepsilon ^k) \rightharpoonup {{{{\bar{w}}}}^k}\) in \({H^2}({{\mathbb {R}}^N})\). By (3.27), \({{{{\bar{w}}}}^k} \ne 0\) and satisfies \(({E_{\beta ,V({z^k} + {x^k})}})\). Arguing as in Step 1, we get a contradiction for \({d_0} > 0\) small. Thus (3.25) follows.

Similar to the argument in Lemma 3.2(i), we have \({J_\varepsilon }({u_{\varepsilon ,1}}) = \sum _{k = 1}^K {{J_\varepsilon }(u_{\varepsilon ,1}^k(x))}\). Recalling that for each \(1 \le k \le K\), \(z_\varepsilon ^k \rightarrow {z^k}\) and \({{\tilde{w}}}_\varepsilon ^k(x) = u_{\varepsilon ,1}^k(x + (z_\varepsilon ^k/\varepsilon ))\), by (3.24) and (3.25), we obtain

$$\begin{aligned} \sum \limits _{k = 1}^K {{I_{\beta ,V({z^k})}}({{{{\tilde{w}}}}^k})} \le \sum \limits _{k = 1}^K {{c_{\beta ,{m_k}}}}. \end{aligned}$$
(3.28)

For any \(\psi \in C_c^\infty ({{\mathbb {R}}^N})\), take \(\psi (x - (z_\varepsilon ^k/\varepsilon ))\) as a test function for \({J'_\varepsilon }({u_\varepsilon })\). Since for \(\varepsilon > 0\) small, \({\text {supp}}\,\psi (x - (z_\varepsilon ^k/\varepsilon )) \subset \Lambda /\varepsilon \), we see that \({{{{{\tilde{w}}}}^k}}\) is a solution of \(({E_{\beta ,V({z^k})}})\). Moreover, thanks to (3.25), \(\langle {{J'_\varepsilon }({u_\varepsilon }),u_{\varepsilon ,1}^k} \rangle \rightarrow 0\) and \({\Vert {{u_{\varepsilon ,2}}} \Vert _{{H_\varepsilon }}} \rightarrow 0\) as \(\varepsilon \rightarrow 0\), we have

$$\begin{aligned}&\int _{{{\mathbb {R}}^N}} {|\Delta {{{{\tilde{w}}}}^k}{|^2}} + \beta \int _{{{\mathbb {R}}^N}} {|\nabla {{{{\tilde{w}}}}^k}{|^2}} + \int _{{{\mathbb {R}}^N}} {V({z^k})|{{{{\tilde{w}}}}^k}{|^2}} \\&\quad \le \mathop {{\underline{\lim }} }\limits _{\varepsilon \rightarrow 0} \left[ {\int _{{{\mathbb {R}}^N}} {|\Delta {{\tilde{w}}}_\varepsilon ^k{|^2}} + \beta \int _{{{\mathbb {R}}^N}} {|\nabla {{\tilde{w}}}_\varepsilon ^k{|^2}} + \int _{{{\mathbb {R}}^N}} {V(\varepsilon x + z_\varepsilon ^k)|{{\tilde{w}}}_\varepsilon ^k{|^2}} } \right] \\&\quad = \mathop {{\underline{\lim }} }\limits _{\varepsilon \rightarrow 0} \int _{{{\mathbb {R}}^N}} {|{{\tilde{w}}}_\varepsilon ^k{|^p}} = \int _{{{\mathbb {R}}^N}} {|{{{{\tilde{w}}}}^k}{|^p}} = \int _{{{\mathbb {R}}^N}} {|\Delta {{{{\tilde{w}}}}^k}{|^2}} + \beta \int _{{{\mathbb {R}}^N}} {|\nabla {{{{\tilde{w}}}}^k}{|^2}} + \int _{{{\mathbb {R}}^N}} {V({z^k})|{{{{\tilde{w}}}}^k}{|^2}}, \end{aligned}$$

then as \(\varepsilon \rightarrow 0\),

$$\begin{aligned} \left\{ \begin{gathered} \int _{{{\mathbb {R}}^N}} {|\Delta {{\tilde{w}}}_\varepsilon ^k{|^2}} \rightarrow \int _{{{\mathbb {R}}^N}} {|\Delta {{{{\tilde{w}}}}^k}{|^2}}, \\ \int _{{{\mathbb {R}}^N}} {|\nabla {{\tilde{w}}}_\varepsilon ^k{|^2}} \rightarrow \int _{{{\mathbb {R}}^N}} {|\nabla {{{{\tilde{w}}}}^k}{|^2}}, \\ \int _{{{\mathbb {R}}^N}} {V(\varepsilon x + z_\varepsilon ^k)|{{\tilde{w}}}_\varepsilon ^k{|^2}} \rightarrow \int _{{{\mathbb {R}}^N}} {V({z^k})|{{{{\tilde{w}}}}^k}{|^2}}. \\ \end{gathered} \right. \end{aligned}$$
(3.29)

By (3.13), (3.25) and \({\Vert {{u_{\varepsilon ,2}}} \Vert _{{H_\varepsilon }}} = o(1)\), we see that \({{{{\tilde{w}}}}^k} \ne 0\) for \({d_0} > 0\) small. Thus

$$\begin{aligned} {I_{\beta ,V({z^k})}}({{{{\tilde{w}}}}^k}) \ge {c_{\beta ,V({z^k})}}. \end{aligned}$$
(3.30)

Since \({z^k} \in {({{\mathcal {M}}^k})^{\delta _0} } \subset {\Lambda ^k}\), (3.28) and (3.30) imply that \(V({z^k}) = {m_k}\), \({z^k} \in {{\mathcal {M}}^k}\) and \({I_{\beta ,{m_k}}}({{{{\tilde{w}}}}^k}) = {c_{\beta ,{m_k}}}\). Moreover,

$$\begin{aligned} {m_k}\int _{{{\mathbb {R}}^N}} {|{{\tilde{w}}}_\varepsilon ^k{|^2}} \le \int _{{{\mathbb {R}}^N}} {V(\varepsilon x + z_\varepsilon ^k)|{{\tilde{w}}}_\varepsilon ^k{|^2}}, \end{aligned}$$

by (3.29), \({{\tilde{w}}}_\varepsilon ^k \rightarrow {{{{\tilde{w}}}}^k}\) in \({H^2}({{\mathbb {R}}^N})\). At this point, it is clear that for \({d_0} > 0\) small and each \(1 \le i \le {K_1}\), \(1 \le j \le {K_2}\), \(\exists {U^i} \in {S_{\beta ,{m_{p(i)}}}}\), \({V^j} \in {S_{\beta ,{m_{q(j)}}}}\) and \({{{{\bar{z}}}}^{p(i)}},{{{{\bar{z}}}}^{q(j)}} \in {{\mathbb {R}}^N}\) such that \({{{{\tilde{w}}}}^{p(i)}}(x) = {U^i}(x - {{{{\bar{z}}}}^{p(i)}})\), \({{{{\tilde{w}}}}^{q(j)}}(x) = {V^j}(x - {{{{\bar{z}}}}^{q(j)}})\). Therefore, as \(\varepsilon \rightarrow 0\),

$$\begin{aligned}&\Bigl \Vert {u_{{\varepsilon }}} - \sum \limits _{i = 1}^{{K_1}} {\varphi (\varepsilon x - (z_\varepsilon ^{p(i)} + \varepsilon {{{{\bar{z}}}}^{p(i)}})){U^i}(x - ((z_\varepsilon ^{p(i)}/\varepsilon ) + {{{{\bar{z}}}}^{p(i)}}))} \\&\quad - \sum \limits _{j = 1}^{{K_2}} {\varphi (\varepsilon x - (z_\varepsilon ^{q(j)} + \varepsilon {{{{\bar{z}}}}^{q(j)}})){V^j}(x - ((z_\varepsilon ^{q(j)}/\varepsilon ) + {{{{\bar{z}}}}^{q(j)}}))} {\Bigr \Vert _{{H_\varepsilon }}} \rightarrow 0. \end{aligned}$$

This completes the proof. \(\square \)

We define \(J_\varepsilon ^\alpha \subset {H_\varepsilon }\) by

$$\begin{aligned} J_\varepsilon ^\alpha : = \{ u \in {H_\varepsilon }:{J_\varepsilon }(u) \le \alpha \}. \end{aligned}$$

Lemma 3.5

Let \({d_0}\) be the number given in Lemma 3.4. Then for any \(d \in (0,{d_0})\), there exist \({\varepsilon _d} > 0\), \({\rho _d} > 0\) and \({\omega _d} > 0\) such that

$$\begin{aligned} {\Vert {{J'_\varepsilon }(u)} \Vert _{{{({H_\varepsilon })}^{ - 1}}}} \ge {\omega _d} \end{aligned}$$

for all \(u \in J_\varepsilon ^{\sum \nolimits _{k = 1}^K {{c_{\beta ,{m_k}}}} + {\rho _d}} \cap (X_\varepsilon ^{{d_0}}\backslash X_\varepsilon ^d)\) with \(\varepsilon \in (0,{\varepsilon _d})\).

Proof

Assume on the contrary that there exist \(d \in (0,{d_0})\), \(\{ {\varepsilon _n}\} _{n = 1}^\infty \), \(\{ {\rho _n}\} _{n = 1}^\infty \) with \({\varepsilon _n}\), \({\rho _n} \rightarrow 0\) and \({u_n} \in J_{{\varepsilon _n}}^{\sum \nolimits _{k = 1}^K {{c_{\beta ,{m_k}}}} + {\rho _n}} \cap (X_{{\varepsilon _n}}^{{d_0}}\backslash X_{{\varepsilon _n}}^d)\) such that

$$\begin{aligned} {\Vert {{J'_{{\varepsilon _n}}}({u_n})} \Vert _{{{({H_{{\varepsilon _n}}})}^{ - 1}}}} \rightarrow 0{\text { as }}n \rightarrow \infty . \end{aligned}$$

By Lemma 3.4, for each \(1 \le i \le {K_1}\), \(1 \le j \le {K_2}\), we find \(\{ y_n^{p(i)}\} _{n = 1}^\infty ,\{ y_n^{q(j)}\} _{n = 1}^\infty \subset {{\mathbb {R}}^N}\), \({z^{p(i)}} \in {{\mathcal {M}}^{p(i)}}\), \({z^{q(j)}} \in {{\mathcal {M}}^{q(j)}}\), \({U^i} \in {S_{\beta ,{m_{p(i)}}}}\), \({V^j} \in {S_{\beta ,{m_{q(j)}}}}\) such that

$$\begin{aligned} \mathop {\lim }\limits _{n \rightarrow \infty } |{\varepsilon _n}y_n^{p(i)} - {z^{p(i)}}| = 0,{\text { }}\mathop {\lim }\limits _{n \rightarrow \infty } |{\varepsilon _n}y_n^{q(j)} - {z^{q(j)}}| = 0 \end{aligned}$$

and

$$\begin{aligned}&\mathop {\lim }\limits _{n \rightarrow \infty } \left\| {u_n} - \sum \limits _{i = 1}^{{K_1}} {\varphi ({\varepsilon _n}x - {\varepsilon _n}y_n^{p(i)}){U^i}(x - y_n^{p(i)})} \right. \\&\quad \left. - \sum \limits _{j = 1}^{{K_2}} {\varphi ({\varepsilon _n}x - {\varepsilon _n}y_n^{q(j)}){V^j}(x - y_n^{q(j)})} \right\| _{{H_{{\varepsilon _n}}}} = 0, \end{aligned}$$

which gives that \({u_n} \in X_{{\varepsilon _n}}^d\) for large n. This contradicts that \({u_n} \notin X_{{\varepsilon _n}}^d\). \(\square \)

Lemma 3.6

There exists \({T_0} > 0\) with the following property: for any \(\delta > 0\) small, there exist \({\alpha _\delta } > 0\) and \({\varepsilon _\delta } > 0\) such that if \({J_\varepsilon }({\gamma _\varepsilon }(s,t)) \ge \sum \nolimits _{k = 1}^K {{c_{\beta ,{m_k}}}} - {\alpha _\delta }\) and \(\varepsilon \in (0,{\varepsilon _\delta })\), then \({\gamma _\varepsilon }(s,t) \in X_\varepsilon ^{{T_0}\delta }\), where \({\gamma _\varepsilon }(s,t)\) has been mentioned in (3.5).

Proof

First, there is a \({T_0} > 0\) such that for each \(1 \le k \le K\) and \(u \in {H^2}({{\mathbb {R}}^N})\),

$$\begin{aligned} {\Vert {\varphi (\varepsilon x - z_*^k)u(x - (z_*^k/\varepsilon ))} \Vert _{{H_\varepsilon }}} \le {T_0}{\Vert {u(x)} \Vert _{{H^2}({{\mathbb {R}}^N})}}, \end{aligned}$$
(3.31)

where \(z_ * ^k \in {{\mathcal {M}}^k}\) has been mentioned in (3.2). We define

$$\begin{aligned} \begin{array}{ll} &{}{\alpha _\delta } =\displaystyle \frac{1}{4}\min \Bigl \{ \sum \limits _{k = 1}^K {{c_{\beta ,{m_k}}}} - \sum \limits _{i = 1}^{{K_1}} {{I_{\beta ,{m_{p(i)}}}}({s_i}{S_i}U_*^i)} - \sum \limits _{j = 1}^{{K_2}} {{I_{\beta ,{m_{q(j)}}}}({t_j}{T_j}V_*^j)} \\ &{}\quad :{s_i},{t_j} \in [0,1],\sum \limits _{i = 1}^{{K_1}} {|{s_i}{S_i} - 1|{{\Vert {U_*^i} \Vert }_{{H^2}({{\mathbb {R}}^N})}}} + \sum \limits _{j = 1}^{{K_2}} {|{t_j}{T_j} - 1|{{\Vert {V_*^j} \Vert }_{{H^2}({{\mathbb {R}}^N})}}} \ge \delta \Bigr \} > 0, \\ \end{array} \end{aligned}$$

Then we have

$$\begin{aligned} \begin{gathered} \sum \limits _{i = 1}^{{K_1}} {{I_{\beta ,{m_{p(i)}}}}({s_i}{S_i}U_*^i)} + \sum \limits _{j = 1}^{{K_2}} {{I_{\beta ,{m_{q(j)}}}}({t_j}{T_j}V_*^j)} \ge \sum \limits _{k = 1}^K {{c_{\beta ,{m_k}}}} - 2{\alpha _\delta }{\text { implies }} \\ \sum \limits _{i = 1}^{{K_1}} {|{s_i}{S_i} - 1|{{\left\| {U_*^i} \right\| }_{{H^2}({{\mathbb {R}}^N})}}} + \sum \limits _{j = 1}^{{K_2}} {|{t_j}{T_j} - 1|{{\left\| {V_*^j} \right\| }_{{H^2}({{\mathbb {R}}^N})}}} \le \delta . \\ \end{gathered} \end{aligned}$$
(3.32)

Similar to the proof of Lemma 3.2(i), we see that there exists an \(\varepsilon _\delta >0\) such that

$$\begin{aligned} \mathop {\max }\limits _{(s,t) \in {{[0,1]}^K}} \left| {{J_\varepsilon }({\gamma _\varepsilon }(s,t)) - \sum \limits _{i = 1}^{{K_1}} {{I_{\beta ,{m_{p(i)}}}}({s_i}{S_i}U_*^i)} - \sum \limits _{j = 1}^{{K_2}} {{I_{\beta ,{m_{q(j)}}}}({t_j}{T_j}V_*^j)} } \right| \le {\alpha _\delta }\quad \qquad \end{aligned}$$
(3.33)

for all \(\varepsilon \in (0,{\varepsilon _\delta })\). Thus if \(\varepsilon \in (0,{\varepsilon _\delta })\) and \({J_\varepsilon }({\gamma _\varepsilon }(s,t)) \ge \sum \nolimits _{k = 1}^K {{c_{\beta ,{m_k}}}} - {\alpha _\delta }\), by (3.32) and (3.33), we have

$$\begin{aligned} \sum \limits _{i = 1}^{{K_1}} {|{s_i}{S_i} - 1|{{\Vert {U_*^i} \Vert }_{{H^2}({{\mathbb {R}}^N})}}} + \sum \limits _{j = 1}^{{K_2}} {|{t_j}{T_j} - 1|{{\Vert {V_*^j} \Vert }_{{H^2}({{\mathbb {R}}^N})}}} \le \delta , \end{aligned}$$

by (3.31), we have

$$\begin{aligned}&\left\| {\gamma _\varepsilon }(s,t) - \sum \limits _{i = 1}^{{K_1}} {\varphi (\varepsilon x - z_*^{p(i)})U_*^i(x - (z_*^{p(i)}/\varepsilon ))} \right. \\&\qquad \left. - \sum \limits _{j = 1}^{{K_2}} {\varphi (\varepsilon x - z_*^{q(j)})V_*^j(x - (z_*^{q(j)}/\varepsilon ))} \right\| _{{H_\varepsilon }} \\&\quad \le \sum \limits _{i = 1}^{{K_1}} {|{s_i}{S_i} - 1|{{\left\| {\varphi (\varepsilon x - z_*^{p(i)})U_*^i(x - (z_*^{p(i)}/\varepsilon ))} \right\| }_{{H_\varepsilon }}}} \\&\qquad + \sum \limits _{j = 1}^{{K_2}} {|{t_j}{T_j} - 1|{{\left\| {\varphi (\varepsilon x - z_*^{q(j)})V_*^j(x - (z_*^{q(j)}/\varepsilon ))} \right\| }_{{H_\varepsilon }}}} \\&\quad \le {T_0}\sum \limits _{i = 1}^{{K_1}} {|{s_i}{S_i} - 1|{{\left\| {U_*^i} \right\| }_{{H^2}({{\mathbb {R}}^N})}}} + {T_0}\sum \limits _{j = 1}^{{K_2}} {|{t_j}{T_j} - 1|{{\left\| {V_*^j} \right\| }_{{H^2}({{\mathbb {R}}^N})}}} \le {T_0}\delta . \end{aligned}$$

Thus \({\gamma _{\varepsilon } }(s,t) \in X_\varepsilon ^{{T_0}\delta }\). \(\square \)

Choose \({\delta _1} > 0\) such that \({T_0}{\delta _1} < {d_0}/4\), let \({\bar{\alpha }} = \min \{ {\alpha _{{\delta _1}}},\sigma \} \) and fix \(d = {d_1}: = {d_0}/4\) in Lemma 3.5. To prove the next lemma, we use the idea developed in [25]. However, for constructing multi-peak solutions, we give a proof slightly different from the one in [25], where only the single-peak solution was considered.

Lemma 3.7

\(\exists {\bar{\varepsilon }} > 0\) such that for each \(\varepsilon \in (0,{\bar{\varepsilon }} ]\), there exists a sequence \(\{ {v_{n,\varepsilon }}\} _{n = 1}^\infty \subset J_\varepsilon ^{{{{{\tilde{c}}}}_\varepsilon } + \varepsilon } \cap X_\varepsilon ^{{d_0}}\) such that \({J'_\varepsilon }({v_{n,\varepsilon }}) \rightarrow 0\) in \({({H_\varepsilon })^{ - 1}}\) as \(n \rightarrow \infty \).

Proof

Assume on the contrary that there exist arbitrarily small \(\varepsilon > 0\) and \(\gamma (\varepsilon ) > 0\) such that

$$\begin{aligned} {\Vert {{J'_\varepsilon }(u)} \Vert _{{{({H_\varepsilon })}^{ - 1}}}} \ge \gamma (\varepsilon ) > 0 \end{aligned}$$
(3.34)

for \(u \in J_\varepsilon ^{{{{{\tilde{c}}}}_{\varepsilon }} + \varepsilon } \cap X_\varepsilon ^{{d_0}} \).

Let Y be a pseudo-gradient vector field for \({{J'_{\varepsilon }}}\) in \({H_\varepsilon }\), that is, a locally Lipschitz continuous vector field \(Y:{H_\varepsilon } \rightarrow {H_\varepsilon }\) such that for every \(u \in {H_\varepsilon }\),

$$\begin{aligned} {\Vert {Y(u)} \Vert _{{H_\varepsilon }}}\le & {} 2{\Vert {{J'_\varepsilon }(u)} \Vert _{{{({H_\varepsilon })}^{ - 1}}}}, \end{aligned}$$
(3.35)
$$\begin{aligned} \langle {J'_\varepsilon }(u),Y(u)\rangle\ge & {} \Vert {{J'_\varepsilon }(u)} \Vert _{{{({H_\varepsilon })}^{ - 1}}}^2. \end{aligned}$$
(3.36)

Let \({\psi _1}\), \({\psi _2}\) be locally Lipschitz continuous functions in \({H_\varepsilon }\) such that \(0 \le {\psi _1},{\psi _2} \le 1\) and

$$\begin{aligned} {\psi _1}(u)= & {} \left\{ \begin{array}{ll} 1, &{}\sum \limits _{k = 1}^K {{c_{\beta ,{m_k}}}} -\displaystyle \frac{1}{2}{\bar{\alpha }} \le {J_\varepsilon }(u) \le {{{{\tilde{c}}}}_\varepsilon }, \\ 0, &{}{J_\varepsilon }(u) \le \sum \limits _{k = 1}^K {{c_{\beta ,{m_k}}}} - {\bar{\alpha }} {\text { or }}{{{{\tilde{c}}}}_\varepsilon } + \varepsilon \le {J_\varepsilon }(u), \end{array} \right. \\ {\psi _2}(u)= & {} \left\{ \begin{array}{ll} 1,&{}u \in X_\varepsilon ^{3{d_0}/4}, \\ 0,&{}u \notin X_\varepsilon ^{{d_0}}. \end{array} \right. \end{aligned}$$

Consider the following ordinary differential equation:

$$\begin{aligned} \left\{ \begin{gathered} \frac{d}{{dr}}\eta (r,u) = - \frac{{Y(\eta (r,u))}}{{{{\Vert {Y(\eta (r,u))} \Vert }_{{H_\varepsilon }}}}}{\psi _1}(\eta (r,u)){\psi _2}(\eta (r,u)), \\ \eta (0,u) = u. \\ \end{gathered} \right. \end{aligned}$$
(3.37)

By (3.35)–(3.37), we have

$$\begin{aligned}&\frac{d}{{dr}}{J_\varepsilon }(\eta (r,u)) \\&\quad = \left\langle {{J'_\varepsilon }(\eta (r,u)),\frac{d}{{dr}}\eta (r,u)} \right\rangle \\&\quad = \left\langle {{J'_\varepsilon }(\eta (r,u)), - \frac{{Y(\eta (r,u))}}{{{{\Vert {Y(\eta (r,u))} \Vert }_{{H_\varepsilon }}}}}{\psi _1}(\eta (r,u)){\psi _2}(\eta (r,u))} \right\rangle \\&\quad \le - \frac{{{\psi _1}(\eta (r,u)){\psi _2}(\eta (r,u))}}{{{{\Vert {Y(\eta (r,u))} \Vert }_{{H_\varepsilon }}}}}\Vert {{J'_\varepsilon }(\eta (r,u))} \Vert _{{{{({H_\varepsilon })}^{ - 1}}}}^2 \\&\quad \le - \frac{1}{2}{\psi _1}(\eta (r,u)){\psi _2}(\eta (r,u)){\Vert {{J'_\varepsilon }(\eta (r,u))} \Vert _{{{{({H_\varepsilon })}^{ - 1}}}}} \\ \end{aligned}$$

and combining this with Lemma 3.2(i), Lemma 3.5, (3.34), (3.37) and the definitions of \({\psi _1}\), \({\psi _2}\), it is standard to show that \(\eta \in C([0, + \infty ) \times {H_\varepsilon },{H_\varepsilon })\) and satisfies the following for \(\varepsilon > 0\) small:

(i) \(\frac{d}{{dr}}{J_{\varepsilon }}(\eta (r,u)) \le 0\) for each \(r \in [0,+\infty )\) and \(u \in {H_\varepsilon }\);

(ii) \(\frac{d}{{dr}}{J_\varepsilon }(\eta (r,u)) \le -{\omega _{{d_1}}}/2\) if \(\eta (r,u) \in \overline{J_\varepsilon ^{{{{{\tilde{c}}}}_\varepsilon }}\backslash J_\varepsilon ^{\sum \nolimits _{k = 1}^K {{c_{\beta ,{m_k}}}} - \frac{1}{2}{\bar{\alpha }} }} \cap \overline{X_\varepsilon ^{3{d_0}/4}\backslash X_\varepsilon ^{{d_0}/4}} \);

(iii) \(\frac{d}{{dr}}{J_\varepsilon }(\eta (r,u)) \le -\gamma (\varepsilon )/2\) if \(\eta (r,u) \in \overline{J_\varepsilon ^{{{{{\tilde{c}}}}_\varepsilon }}\backslash J_\varepsilon ^{\sum \nolimits _{k = 1}^K {{c_{\beta ,{m_k}}}} - \frac{1}{2}{\bar{\alpha }} }} \cap X_\varepsilon ^{3{d_0}/4}\);

(iv) \(\eta (r,u) = u\) if \({J_\varepsilon }(u) \le \sum \nolimits _{k = 1}^K {{c_{\beta ,{m_k}}}} - {\bar{\alpha }} \).

Setting \({r_1}: = {\omega _{{d_1}}}{d_0}/\gamma (\varepsilon )\) and \({\xi _\varepsilon }(s,t): = \eta ({r_1},{\gamma _\varepsilon }(s,t))\), we have the following cases:

Case 1: \({\gamma _\varepsilon }(s,t) \in J_\varepsilon ^{\sum \nolimits _{k = 1}^K {{c_{\beta ,{m_k}}}} - {\bar{\alpha }} }\). By (iv), we see that

$$\begin{aligned} \eta (r,{\gamma _\varepsilon }(s,t)) = {\gamma _\varepsilon }(s,t). \end{aligned}$$
(3.38)

Case 2: \({\gamma _\varepsilon }(s,t) \notin J_\varepsilon ^{\sum \nolimits _{k = 1}^K {{c_{\beta ,{m_k}}}} - {\bar{\alpha }} }\). By Lemma 3.6 and the definition of \({{{{\tilde{c}}}}_\varepsilon }\), we see that

$$\begin{aligned} {\gamma _\varepsilon }(s,t) \in \overline{J_\varepsilon ^{{{{{\tilde{c}}}}_\varepsilon }}\backslash J_\varepsilon ^{\sum \nolimits _{k = 1}^K {{c_{\beta ,{m_k}}}} - {\bar{\alpha }} }} \cap X_\varepsilon ^{{d_0}/4}. \end{aligned}$$

Moreover, we have

$$\begin{aligned} \eta (r,{\gamma _\varepsilon }(s,t)) \in X_\varepsilon ^{{d_0}}{\text { for }}r \in [0,{r_1}]. \end{aligned}$$
(3.39)

Indeed, if not, \(\exists r' \in [0,{r_1}]\) such that \(\eta (r',{\gamma _\varepsilon }(s,t)) \notin X_\varepsilon ^{{d_0}}\). Denote

$$\begin{aligned} r'': = \sup \left\{ {r \in [0,r']:\eta (r,{\gamma _{\varepsilon }}(s,t)) \in X_\varepsilon ^{{d_0}}} \right\} , \end{aligned}$$

then by (3.37) and the definition of \({\psi _2}\), we see \(\eta (r',{\gamma _\varepsilon }(s,t)) = \eta (r'',{\gamma _\varepsilon }(s,t)) \in X_\varepsilon ^{{d_0}}\), which leads to a contradiction.

Next, we divide Case 2 into the following three subcases:

Case 2.1: \(\eta ({r_1},{\gamma _\varepsilon }(s,t)) \in J_\varepsilon ^{\sum \nolimits _{k = 1}^K {{c_{\beta ,{m_k}}}} - \frac{1}{2}{\bar{\alpha }} }\);

Case 2.2: \(\eta ({r_1},{\gamma _\varepsilon }(s,t)) \in J_\varepsilon ^{{{{{\tilde{c}}}}_\varepsilon }}\backslash J_\varepsilon ^{\sum \nolimits _{k = 1}^K {{c_{\beta ,{m_k}}}} - \frac{1}{2}{\bar{\alpha }} }\) and \(\eta (r,{\gamma _\varepsilon }(s,t)) \notin X_\varepsilon ^{3{d_0}/4}\) for some \(r \in [0,{r_1}]\);

Case 2.3: \(\eta ({r_1},{\gamma _\varepsilon }(s,t)) \in J_\varepsilon ^{{{{{\tilde{c}}}}_\varepsilon }}\backslash J_\varepsilon ^{\sum \nolimits _{k = 1}^K {{c_{\beta ,{m_k}}}} - \frac{1}{2}{\bar{\alpha }} }\) and \(\eta (r,{\gamma _\varepsilon }(s,t)) \in X_\varepsilon ^{3{d_0}/4}\) for all \(r \in [0,{r_1}]\).

In Case 2.2, denote

$$\begin{aligned} {r_2}: = \inf \left\{ {r \in [0,{r_1}]:\eta (r,{\gamma _{\varepsilon }}(s,t)) \notin X_\varepsilon ^{3{d_0}/4}} \right\} \end{aligned}$$

and

$$\begin{aligned} {r_3}: = \sup \left\{ {r \in [0,{r_2}]:\eta (r,{\gamma _{\varepsilon }}(s,t)) \in X_\varepsilon ^{{d_0}/4}} \right\} , \end{aligned}$$

then by (3.37), \({r_2} - {r_3} \ge \frac{1}{2}{d_0}\) and \(\eta (r,{\gamma _{\varepsilon }}(s,t)) \in \overline{X_\varepsilon ^{3{d_0}/4}\backslash X_\varepsilon ^{{d_0}/4}} \) for each \(r \in [{r_3},{r_2}]\). By (i), (ii) and Lemma 3.2(i), we obtain

$$\begin{aligned}&{J_\varepsilon }(\eta ({r_1},{\gamma _{\varepsilon }}(s,t))) \\&\quad = {J_\varepsilon }({\gamma _{\varepsilon }}(s,t)) + \int _0^{{r_1}} {\frac{d}{{dr}}{J_\varepsilon }(\eta (r,{\gamma _{\varepsilon }}(s,t)))} \mathrm{d}r \\&\quad \le {{{{\tilde{c}}}}_{\varepsilon }} + \int _{{r_3}}^{{r_2}} {\frac{d}{{dr}}{J_\varepsilon }(\eta (r,{\gamma _{\varepsilon }}(s,t)))} \mathrm{d}r \\&\quad \le {{{{\tilde{c}}}}_{\varepsilon }} - \frac{1}{4}{\omega _{{d_1}}}{d_0} = \sum \limits _{k = 1}^K {{c_{\beta ,{m_k}}}} - \frac{1}{4}{\omega _{{d_1}}}{d_0} + o(1), \end{aligned}$$

where \( o(1) \rightarrow 0\) as \(\varepsilon \rightarrow 0\).
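The lower bound \({r_2} - {r_3} \ge \frac{1}{2}{d_0}\) used above reflects the unit-speed normalization of the flow: by (3.37), \({\Vert {\frac{d}{{dr}}\eta (r,u)} \Vert _{{H_\varepsilon }}} \le 1\), while passing from \(X_\varepsilon ^{{d_0}/4}\) to the complement of \(X_\varepsilon ^{3{d_0}/4}\) requires a displacement of at least \(3{d_0}/4 - {d_0}/4 = {d_0}/2\) in \({H_\varepsilon }\), so that

$$\begin{aligned} \frac{{{d_0}}}{2} \le {\left\| {\eta ({r_2},{\gamma _\varepsilon }(s,t)) - \eta ({r_3},{\gamma _\varepsilon }(s,t))} \right\| _{{H_\varepsilon }}} \le \int _{{r_3}}^{{r_2}} {{{\left\| {\frac{d}{{dr}}\eta (r,{\gamma _\varepsilon }(s,t))} \right\| }_{{H_\varepsilon }}}} \,\mathrm{d}r \le {r_2} - {r_3}. \end{aligned}$$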

In Case 2.3, by (iii) and the definition of \(r_1\), we have

$$\begin{aligned} \begin{array}{ll} {J_\varepsilon }(\eta ({r_1},{\gamma _{\varepsilon }}(s,t))) \displaystyle &{}= {J_\varepsilon }({\gamma _{\varepsilon }}(s,t)) + \displaystyle \int _0^{{r_1}} {\frac{d}{{dr}}{J_\varepsilon }(\eta (r,{\gamma _{\varepsilon }}(s,t)))} \mathrm{d}r \\ \displaystyle &{}\le {{{{\tilde{c}}}}_{\varepsilon }} - \displaystyle \frac{1}{2}{\omega _{{d_1}}}{d_0} = \sum \limits _{k = 1}^K {{c_{\beta ,{m_k}}}} - \displaystyle \frac{1}{2}{\omega _{{d_1}}}{d_0} + o(1). \\ \end{array} \end{aligned}$$

To sum up, choosing \({\bar{\mu }} = \min \left\{ {{{\bar{\alpha }}}/2,{\omega _{{d_1}}}{d_0}/4} \right\} > 0\), we see that, for \({(s,t) \in {{[0,1]}^K}}\),

$$\begin{aligned} {J_\varepsilon }({\xi _\varepsilon }(s,t)) \le \sum \limits _{k = 1}^K {{c_{\beta ,{m_k}}}} - {\bar{\mu }} + o(1). \end{aligned}$$
(3.40)

From (3.38) and (3.39), we have

$$\begin{aligned} {\Vert {{\xi _\varepsilon }(s,t)} \Vert _{{H_\varepsilon }}} \le C{\text { for }}\varepsilon > 0{\text { small and }}(s,t) \in {[0,1]^K}. \end{aligned}$$
(3.41)

Let \({k_\varepsilon } \in {\mathbb {N}}\) be such that \(k_\varepsilon ^2 \le {\delta _0}/(5\varepsilon )\) and \({k_\varepsilon } \rightarrow \infty \) as \(\varepsilon \rightarrow 0\), and put

$$\begin{aligned} {{{{\tilde{A}}}}_{j,\varepsilon }}: = {({\tilde{\Lambda }} /\varepsilon )^{2{\delta _0}/\varepsilon + 5(j + 1){k_\varepsilon }}}\backslash {({\tilde{\Lambda }} /\varepsilon )^{2{\delta _0}/\varepsilon + 5j{k_\varepsilon }}},j = 0,1,\ldots ,{k_\varepsilon } - 1. \end{aligned}$$

By (3.41), we see that

$$\begin{aligned} \sum \limits _{j = 0}^{{k_\varepsilon } - 1} {\int _{{{{{\tilde{A}}}}_{j,\varepsilon }}} {|\Delta {\xi _\varepsilon }(s,t){|^2} + \beta |\nabla {\xi _\varepsilon }(s,t){|^2} + V(\varepsilon x)|{\xi _\varepsilon }(s,t){|^2}} } \le C. \end{aligned}$$
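Since the annular sets \({{{{\tilde{A}}}}_{j,\varepsilon }}\) are pairwise disjoint, the minimum of these \({k_\varepsilon }\) integrals is at most their average; this pigeonhole observation gives

$$\begin{aligned} \mathop {\min }\limits _{0 \le j \le {k_\varepsilon } - 1} \int _{{{{{\tilde{A}}}}_{j,\varepsilon }}} {|\Delta {\xi _\varepsilon }(s,t){|^2} + \beta |\nabla {\xi _\varepsilon }(s,t){|^2} + V(\varepsilon x)|{\xi _\varepsilon }(s,t){|^2}} \le \frac{C}{{{k_\varepsilon }}}. \end{aligned}$$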

Thus, there exists a \({j_\varepsilon } \in \{ 0,1,\ldots ,{k_\varepsilon } - 1\} \) such that

$$\begin{aligned} \int _{{{{{\tilde{A}}}}_{{j_\varepsilon },\varepsilon }}} {|\Delta {\xi _\varepsilon }(s,t){|^2} + \beta |\nabla {\xi _\varepsilon }(s,t){|^2} + V(\varepsilon x)|{\xi _\varepsilon }(s,t){|^2}} \le C/{k_\varepsilon } \rightarrow 0 \end{aligned}$$
(3.42)

uniformly for \((s,t) \in {[0,1]^K}\). Choosing cut-off functions \({\zeta _{\varepsilon ,1}}\) and \({\zeta _{\varepsilon ,2}}\) such that

$$\begin{aligned} {\zeta _{\varepsilon ,1}}(x)= & {} \left\{ \begin{gathered} 1,{\text { if }}x \in {({\tilde{\Lambda }} /\varepsilon )^{2{\delta _0}/\varepsilon + (5{j_\varepsilon } + 1){k_\varepsilon }}}, \\ 0,{\text { if }}x \in {{\mathbb {R}}^N}\backslash {({\tilde{\Lambda }} /\varepsilon )^{2{\delta _0}/\varepsilon + (5{j_\varepsilon } + 2){k_\varepsilon }}}, \\ \end{gathered} \right. \\ {\zeta _{\varepsilon ,2}}(x)= & {} \left\{ \begin{gathered} 0,{\text { if }}x \in {({\tilde{\Lambda }} /\varepsilon )^{2{\delta _0}/\varepsilon + (5{j_\varepsilon } + 3){k_\varepsilon }}}, \\ 1,{\text { if }}x \in {{\mathbb {R}}^N}\backslash {({\tilde{\Lambda }} /\varepsilon )^{2{\delta _0}/\varepsilon + (5{j_\varepsilon } + 4){k_\varepsilon }}} \\ \end{gathered} \right. \end{aligned}$$

and \({\xi _{\varepsilon ,i}}(s,t): = {\zeta _{\varepsilon ,i}}{\xi _\varepsilon }(s,t)\), \(i=1,2\). By (3.42), we have

$$\begin{aligned} {\left\| {{\xi _\varepsilon }(s,t) - {\xi _{\varepsilon ,1}}(s,t) - {\xi _{\varepsilon ,2}}(s,t)} \right\| _{{H_\varepsilon }}} \rightarrow 0{\text { as }}\varepsilon \rightarrow 0 \end{aligned}$$
(3.43)

uniformly for \((s,t) \in {[0,1]^K}\). Then (3.42) and (3.43) imply that

$$\begin{aligned} {J_\varepsilon }({\xi _\varepsilon }(s,t)) \ge {J_\varepsilon }({\xi _{\varepsilon ,1}}(s,t)) + {J_\varepsilon }({\xi _{\varepsilon ,2}}(s,t)) + o(1). \end{aligned}$$
(3.44)

In Case 1, by (3.38), \({\xi _{\varepsilon ,2}}(s,t) = {\zeta _{\varepsilon ,2}}{\xi _\varepsilon }(s,t) = 0\). In Case 2, by (3.39),

$$\begin{aligned} {\left\| {{\xi _{\varepsilon ,2}}(s,t)} \right\| _{{H_\varepsilon }}} = {\left\| {{\zeta _{\varepsilon ,2}}{\xi _\varepsilon }(s,t)} \right\| _{{H_\varepsilon }}} \le C{\left\| {{\xi _\varepsilon }(s,t)} \right\| _{{H_\varepsilon }({{\mathbb {R}}^N}\backslash {{({\tilde{\Lambda }} /\varepsilon )}^{2{\delta _0}/\varepsilon }})}} \le C{d_0}. \end{aligned}$$

Choosing \({d_0} > 0\) small, we see from Sobolev’s imbedding theorem that

$$\begin{aligned} {J_\varepsilon }({\xi _{\varepsilon ,2}}(s,t)) \ge \left\| {{\xi _{\varepsilon ,2}}(s,t)} \right\| _{{H_\varepsilon }}^2\left( {\frac{1}{2} - Cd_0^{p - 2}} \right) \ge 0. \end{aligned}$$
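In more detail, assuming (as in del Pino–Felmer type penalizations) that \(0 \le {G_\varepsilon }(x,u) \le \frac{1}{p}|u{|^p}\), the Sobolev embedding gives \(\int _{{{\mathbb {R}}^N}} {|{\xi _{\varepsilon ,2}}(s,t){|^p}} \le C\left\| {{\xi _{\varepsilon ,2}}(s,t)} \right\| _{{H_\varepsilon }}^p\), and hence

$$\begin{aligned} {J_\varepsilon }({\xi _{\varepsilon ,2}}(s,t)) \ge \frac{1}{2}\left\| {{\xi _{\varepsilon ,2}}(s,t)} \right\| _{{H_\varepsilon }}^2 - C\left\| {{\xi _{\varepsilon ,2}}(s,t)} \right\| _{{H_\varepsilon }}^{p - 2}\left\| {{\xi _{\varepsilon ,2}}(s,t)} \right\| _{{H_\varepsilon }}^2 \ge \left\| {{\xi _{\varepsilon ,2}}(s,t)} \right\| _{{H_\varepsilon }}^2\left( {\frac{1}{2} - C{{(C{d_0})}^{p - 2}}} \right) , \end{aligned}$$

where \({\left\| {{\xi _{\varepsilon ,2}}(s,t)} \right\| _{{H_\varepsilon }}} \le C{d_0}\) was used in the last step; absorbing constants yields the estimate displayed above.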

No matter which case occurs, we always have

$$\begin{aligned} {J_\varepsilon }({\xi _\varepsilon }(s,t)) \ge {J_\varepsilon }({\xi _{\varepsilon ,1}}(s,t)) + o(1). \end{aligned}$$
(3.45)

Next, for each \(1 \le k \le K\), define \(\xi _{\varepsilon ,1}^k(s,t)(x) = {\xi _{\varepsilon ,1}}(s,t)(x)\) for \(x \in {(\widetilde{{\Lambda ^k}}/\varepsilon )^{3{\delta _0}/\varepsilon }}\) and \(\xi _{\varepsilon ,1}^k(s,t)(x) = 0\) for \(x \notin {(\widetilde{{\Lambda ^k}}/\varepsilon )^{3{\delta _0}/\varepsilon }}\). Arguing as in the proof of Lemma 3.2(i), we get

$$\begin{aligned} {J_\varepsilon }({\xi _{\varepsilon ,1}}(s,t)) \ge \sum \limits _{k = 1}^K {{J_\varepsilon }(\xi _{\varepsilon ,1}^k(s,t))} + o(1) = \sum \limits _{k = 1}^K {J_\varepsilon ^k(\xi _{\varepsilon ,1}^k(s,t))} + o(1). \end{aligned}$$
(3.46)

Next, we introduce some notation as in [16]. For \((s,t) \in {[0,1]^K}\), let

$$\begin{aligned} {0_{{s_i}}}= & {} ({s_1}, \ldots ,{s_{i - 1}},0,{s_{i + 1}}, \ldots ,{s_{{K_1}}},{t_1}, \ldots ,{t_{{K_2}}})\\ {\text { and }}{1_{{s_i}}}= & {} ({s_1}, \ldots ,{s_{i - 1}},1,{s_{i + 1}}, \ldots ,{s_{{K_1}}},{t_1}, \ldots ,{t_{{K_2}}}). \end{aligned}$$

Similarly, we can also define \({0_{{t_j}}}\) and \({1_{{t_j}}}\). We see from Lemma 3.2(ii) and (iv) in the proof of Lemma 3.7 that \({\xi _\varepsilon }({0_{{s_i}}}) = {\gamma _\varepsilon }({0_{{s_i}}})\), \({\xi _\varepsilon }({0_{{t_j}}}) = {\gamma _\varepsilon }({0_{{t_j}}})\) and \({\xi _\varepsilon }({1_{{s_i}}}) = {\gamma _\varepsilon }({1_{{s_i}}})\), \({\xi _\varepsilon }({1_{{t_j}}}) = {\gamma _\varepsilon }({1_{{t_j}}})\). By the definition of \(\xi _{\varepsilon ,1}^k(s,t)\), we see that \(J_\varepsilon ^{p(i)}(\xi _{\varepsilon ,1}^{p(i)}({0_{{s_i}}})) = J_\varepsilon ^{p(i)}(0) = 0\), \(J_\varepsilon ^{q(j)}(\xi _{\varepsilon ,1}^{q(j)}({0_{{t_j}}})) = J_\varepsilon ^{q(j)}(0) = 0\) and \(J_\varepsilon ^{p(i)}(\xi _{\varepsilon ,1}^{p(i)}({1_{{s_i}}})) = J_\varepsilon ^{p(i)}(U_{\varepsilon ,{S_i}}^i) < 0\), \(J_\varepsilon ^{q(j)}(\xi _{\varepsilon ,1}^{q(j)}({1_{{t_j}}})) = J_\varepsilon ^{q(j)}(V_{\varepsilon ,{T_j}}^j) < 0\) for \(\varepsilon > 0\) small by (3.3) and (3.4). It then follows from the celebrated gluing method of Coti Zelati and Rabinowitz (see [16, Proposition 3.4]) that there exists \(({{{{\bar{s}}}}_\varepsilon },{{{{\bar{t}}}}_\varepsilon }) \in {[0,1]^K}\) such that

$$\begin{aligned} J_\varepsilon ^k(\xi _{\varepsilon ,1}^k({{{{\bar{s}}}}_\varepsilon },{{{{\bar{t}}}}_\varepsilon })) \ge c_\varepsilon ^k{\text { for each }}1 \le k \le K. \end{aligned}$$
(3.47)

(3.45), (3.46), (3.47) and Lemma 3.3 yield

$$\begin{aligned} \mathop {\max }\limits _{(s,t) \in {{[0,1]}^K}} {J_\varepsilon }({\xi _\varepsilon }(s,t)) \ge \sum \limits _{k = 1}^K {{c_{\beta ,{m_k}}}} + o(1), \end{aligned}$$

which contradicts (3.4) for \(\varepsilon > 0\) small. \(\square \)

Proof of Theorem 1.1

By Lemma 3.7, there exists \({\bar{\varepsilon }} > 0\) such that for each \(\varepsilon \in (0,{\bar{\varepsilon }} ]\), there exists a sequence \(\{ {v_{n,\varepsilon }}\} _{n = 1}^\infty \subset J_\varepsilon ^{{{{{\tilde{c}}}}_\varepsilon } + \varepsilon } \cap X_\varepsilon ^{{d_0}}\) such that \({J'_\varepsilon }({v_{n,\varepsilon }}) \rightarrow 0\) in \({({H_\varepsilon })^{ - 1}}\) as \(n \rightarrow \infty \). By Lemma 3.1, there exists \({v_\varepsilon } \in J_\varepsilon ^{{{{{\tilde{c}}}}_\varepsilon } + \varepsilon } \cap X_\varepsilon ^{{d_0}}\) such that, up to a subsequence, \({v_{n,\varepsilon }} \rightarrow {v_\varepsilon }\) in \({H_\varepsilon }\) and \({v_\varepsilon }\) satisfies

$$\begin{aligned} {\Delta ^2}{v_\varepsilon } - \beta \Delta {v_\varepsilon } + V(\varepsilon x){v_\varepsilon } = {g_\varepsilon }(x,{v_\varepsilon }){\text { in }}{{\mathbb {R}}^N}. \end{aligned}$$
(3.48)

Since \({c_{\beta ,{m_k}}} > 0\) \((1 \le k \le K)\), we see that \(0 \notin X_\varepsilon ^{{d_0}}\) for \({d_0} > 0\) small. Thus \({v_\varepsilon } \ne 0\).

For any sequence \(\{ {\varepsilon _n}\} _{n = 1}^\infty \) with \({\varepsilon _n} \rightarrow 0\), by Lemma 3.4, there exist, up to a subsequence, \(\{ y_{{\varepsilon _n}}^{p(i)}\} _{n = 1}^\infty \subset {{\mathbb {R}}^N}\), \({z^{p(i)}} \in {{\mathcal {M}}^{p(i)}}\), \({U^i} \in S_{\beta ,{m_{p(i)}}}^ + \) \((1 \le i \le {K_1})\) and \(\{ y_{{\varepsilon _n}}^{q(j)}\} _{n = 1}^\infty \subset {{\mathbb {R}}^N}\), \({z^{q(j)}} \in {{\mathcal {M}}^{q(j)}}\), \({V^j} \in S_{\beta ,{m_{q(j)}}}^ - \) \((1 \le j \le {K_2})\) such that as \(n \rightarrow \infty \),

$$\begin{aligned} |{\varepsilon _n}y_{{\varepsilon _n}}^{p(i)} - {z^{p(i)}}| \rightarrow 0,{\text { }} |{\varepsilon _n}y_{{\varepsilon _n}}^{q(j)} - {z^{q(j)}}| \rightarrow 0 \end{aligned}$$
(3.49)

and

$$\begin{aligned}&\left\| {v_{{\varepsilon _n}}} - \sum \limits _{i = 1}^{{K_1}} {\varphi ({\varepsilon _n}x - {\varepsilon _n}y_{{\varepsilon _n}}^{p(i)}){U^i}(x - y_{{\varepsilon _n}}^{p(i)})} \right. \nonumber \\&\quad \left. - \sum \limits _{j = 1}^{{K_2}} {\varphi ({\varepsilon _n}x - {\varepsilon _n}y_{{\varepsilon _n}}^{q(j)}){V^j}(x - y_{{\varepsilon _n}}^{q(j)})} \right\| _{{H_{{\varepsilon _n}}}} \rightarrow 0. \end{aligned}$$
(3.50)

For each \(R>0\), we have

$$\begin{aligned}&\left\| {{v_{{\varepsilon _n}}}} \right\| _{{L^2}({{\mathbb {R}}^N}\backslash \mathop \cup \limits _{k = 1}^K {B_R}(y_{{\varepsilon _n}}^k))} \nonumber \\&\quad \le \left\| {v_{{\varepsilon _n}}} - \sum \limits _{i = 1}^{{K_1}} {\varphi ({\varepsilon _n}x - {\varepsilon _n}y_{{\varepsilon _n}}^{p(i)}){U^i}(x - y_{{\varepsilon _n}}^{p(i)})} \right. \nonumber \\&\qquad \left. - \sum \limits _{j = 1}^{{K_2}} {\varphi ({\varepsilon _n}x - {\varepsilon _n}y_{{\varepsilon _n}}^{q(j)}){V^j}(x - y_{{\varepsilon _n}}^{q(j)})} \right\| _{{L^2}({{\mathbb {R}}^N})} \nonumber \\&\qquad + \sum \limits _{i = 1}^{{K_1}} {{{\left\| {{U^i}} \right\| }_{{L^2}({{\mathbb {R}}^N}\backslash {B_R}(0))}}} + \sum \limits _{j = 1}^{{K_2}} {{{\left\| {{V^j}} \right\| }_{{L^2}({{\mathbb {R}}^N}\backslash {B_R}(0))}}}. \end{aligned}$$
(3.51)

On the other hand, since \({v_{{\varepsilon _n}}} \in X_{{\varepsilon _n}}^{{d_0}}\), the sequence \(\{ {v_{{\varepsilon _n}}}\} _{n = 1}^\infty \) is bounded in \({H^2}({{\mathbb {R}}^N})\). We rewrite (3.48) as

$$\begin{aligned} {\Delta ^2}{v_{\varepsilon _n} } - \beta \Delta {v_{\varepsilon _n} } + {c_0}{v_{\varepsilon _n} } = ({c_0} - V({\varepsilon _n} x)){v_{\varepsilon _n} } + {g_{\varepsilon _n} }(x,{v_{\varepsilon _n} }){\text { in }}{{\mathbb {R}}^N}, \end{aligned}$$

where \({c_0}>0\) is the constant introduced in (2.9). Observing that \({h_n}:=({c_0} - V({\varepsilon _n} x)){v_{\varepsilon _n} } + {g_{\varepsilon _n} }(x,{v_{\varepsilon _n} }) \in L_{{\text {loc}}}^q({{\mathbb {R}}^N})\) for \(1 \le q \le \frac{{2N}}{{(N - 4)(p - 1)}}\), we deduce from Sobolev’s imbedding theorem and a classical bootstrap argument based on the local \({W^{4,p}}\)-estimates for fourth-order semilinear elliptic equations (Proposition 2.3) that \({v_{{\varepsilon _n}}} \in W_{{\text {loc}}}^{4,q}({{\mathbb {R}}^N})\) for every \(q \ge 1\), with a uniform estimate on unit balls. Taking \(q > N/4\), Morrey’s inequality implies that \(\{ {v_{{\varepsilon _n}}}\} _{n = 1}^\infty \) is bounded in \({L^\infty }({{\mathbb {R}}^N})\). Letting \(p=N\) in (2.1), we see that for any \(x \in {{\mathbb {R}}^N}\),

$$\begin{aligned} {\left\| {{v_{{\varepsilon _n}}}} \right\| _{{W^{4,N}}({B_1}(x))}} \le C\left( {{{\left\| {{h_n}} \right\| }_{{L^N}({B_2}(x))}} + {{\left\| {{v_{{\varepsilon _n}}}} \right\| }_{{L^N}({B_2}(x))}}} \right) \le C\left\| {{v_{{\varepsilon _n}}}} \right\| _{{L^2}({B_2}(x))}^{2/N}, \end{aligned}$$

and hence, by Morrey’s inequality,

$$\begin{aligned} {\left\| {{v_{{\varepsilon _n}}}} \right\| _{{L^\infty }({B_1}(x))}} \le C\left\| {{v_{{\varepsilon _n}}}} \right\| _{{L^2}({B_2}(x))}^{2/N}, \end{aligned}$$
(3.52)

where \(C>0\) depends only on N. We obtain from (3.50), (3.51) and (3.52) that for any \(\delta >0\), there exists \({R_\delta } > 0\) such that

$$\begin{aligned} |{v_{{\varepsilon _n}}}(x)| < \delta {\text { uniformly for }}x \in {{\mathbb {R}}^N}\backslash \mathop \cup \limits _{k = 1}^K {B_{{R_\delta }}}(y_{{\varepsilon _n}}^k){\text { and }}{\varepsilon _n} > 0{\text { small}}. \end{aligned}$$
(3.53)
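The bootstrap used above can be illustrated numerically. The sketch below (purely illustrative; `bootstrap_steps` is a hypothetical helper, not from the paper) iterates the gain of integrability: starting from \(v \in {L^{2N/(N-4)}}\) and \(|{h_n}| \lesssim |v{|^{p - 1}}\), each application of the \({W^{4,q}}\)-estimate and the imbedding \({W^{4,q}} \hookrightarrow {L^{Nq/(N - 4q)}}\) raises the exponent until \(q > N/4\), at which point Morrey’s inequality yields an \({L^\infty }\) bound.

```python
def bootstrap_steps(N, p, max_iter=50):
    """Count bootstrap iterations until h_n lands in L^q with q > N/4.

    Illustrative sketch only: starts from the Sobolev imbedding
    H^2(R^N) -> L^{2N/(N-4)} and iterates
    W^{4,q} -> L^{Nq/(N-4q)} (for q < N/4), h_n ~ |v|^{p-1}.
    Assumes N >= 5, 2 < p < 2N/(N-4), and sample values that avoid
    the borderline case q = N/4.
    """
    q = (2 * N / (N - 4)) / (p - 1)  # initial integrability of h_n
    for step in range(1, max_iter + 1):
        if q > N / 4:                # Morrey: W^{4,q} -> L^infinity
            return step
        q = (N * q / (N - 4 * q)) / (p - 1)  # one bootstrap iteration
    return None


# For N = 6, p = 3: h_n already lies in L^3 and 3 > 6/4, so one step suffices.
print(bootstrap_steps(6, 3))     # -> 1
# For N = 10, p = 2.8 the gain is slower and two iterations are needed.
print(bootstrap_steps(10, 2.8))  # -> 2
```

For subcritical \(p\) the iteration terminates in finitely many steps, which is what the uniform \({W_{{\text {loc}}}^{4,q}}\) estimate above encodes.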

Choosing \(\delta = a\) in (3.53), by (3.49), we have \(\mathop \cup \nolimits _{k = 1}^K {B_{{R_a}}}(y_{{\varepsilon _n}}^k) \subset \Lambda /{\varepsilon _n}\) for \({\varepsilon _n} > 0\) small. Thus, we see from the definition of \({g_\varepsilon }\) that \({v_{{\varepsilon _n}}}\) is a solution to (3.1). Moreover, by Proposition 2.3, Morrey’s inequality and Schauder’s estimate, we see that \({v_{{\varepsilon _n}}} \in {C^4}({{\mathbb {R}}^N})\). Therefore \({u_{{\varepsilon _n}}}(x): = {v_{{\varepsilon _n}}}(x/{\varepsilon _n})\) is a classical solution to the original problem (1.1) with \(\varepsilon \) replaced by \({\varepsilon _n}\).
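The rescaling step can be checked symbolically. The following one-dimensional sympy sketch (an illustration under the simplifying assumption \(N = 1\); the computation is identical term by term in higher dimension) confirms that \(u(x) = v(x/\varepsilon )\) turns \({\varepsilon ^4}u'''' - \beta {\varepsilon ^2}u'' + V(x)u\) into the rescaled operator applied to \(v\):

```python
import sympy as sp

x, y, eps, beta = sp.symbols('x y epsilon beta', positive=True)
v, V = sp.Function('v'), sp.Function('V')

u = v(x / eps)  # rescaled function u(x) = v(x/eps)
lhs = (eps**4 * sp.diff(u, x, 4)
       - beta * eps**2 * sp.diff(u, x, 2)
       + V(x) * u)

# Substituting x = eps*y should recover v''''(y) - beta*v''(y) + V(eps*y)*v(y).
lhs_scaled = lhs.subs(x, eps * y).doit()
target = sp.diff(v(y), y, 4) - beta * sp.diff(v(y), y, 2) + V(eps * y) * v(y)

assert sp.simplify(lhs_scaled - target) == 0
print("rescaling identity verified")
```

Each \(\varepsilon \)-power is exactly absorbed by the chain rule, which is why \({v_{{\varepsilon _n}}}\) solving (3.1) makes \({u_{{\varepsilon _n}}}\) a solution of (1.1).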

Since \(\{ {v_{{\varepsilon _n}}}\} _{n = 1}^\infty \) is bounded in \({L^\infty }({{\mathbb {R}}^N})\), by Proposition 2.3 and Morrey’s inequality, we see that for each \(1 \le i \le K_1\), \(1 \le j \le K_2\), \(\{ {v_{{\varepsilon _n}}}(x + y_{{\varepsilon _n}}^{p(i)})\} _{n = 1}^\infty \) and \(\{ {v_{{\varepsilon _n}}}(x + y_{{\varepsilon _n}}^{q(j)})\} _{n = 1}^\infty \) are bounded in \(C_{{\text {loc}}}^{3,\alpha }({{\mathbb {R}}^N})\) for some \(0< \alpha < 1\). It follows from the Arzelà-Ascoli theorem and (3.50) that

$$\begin{aligned} {v_{{\varepsilon _n}}}(x + y_{{\varepsilon _n}}^{p(i)}) \rightarrow {U^i}(x){\text { and }}{v_{{\varepsilon _n}}}(x + y_{{\varepsilon _n}}^{q(j)}) \rightarrow {V^j}(x){\text { in }}C_{{\text {loc}}}^3{\text {(}}{{\mathbb {R}}^N}{\text {) as }}n \rightarrow \infty .\nonumber \\ \end{aligned}$$
(3.54)

In particular,

$$\begin{aligned} {v_{{\varepsilon _n}}}( y_{{\varepsilon _n}}^{p(i)}) \rightarrow {U^i}(0) > 0{\text { and }}{v_{{\varepsilon _n}}}( y_{{\varepsilon _n}}^{q(j)}) \rightarrow {V^j}(0) < 0{\text { as }}n \rightarrow \infty . \end{aligned}$$
(3.55)

Letting \(x_{{\varepsilon _n}}^{p(i)}\) (resp. \(x_{{\varepsilon _n}}^{q(j)}\)) be a maximum (resp. minimum) point of \({u_{{\varepsilon _n}}}\) in \(\overline{{\Lambda ^{p(i)}}} \) (resp. \(\overline{{\Lambda ^{q(j)}}} \)), we obtain from (3.55) that for \({\varepsilon _n} > 0\) small,

$$\begin{aligned} {u_{{\varepsilon _n}}}(x_{{\varepsilon _n}}^{p(i)}) = {v_{{\varepsilon _n}}}(x_{{\varepsilon _n}}^{p(i)}/{\varepsilon _n}) \ge {v_{{\varepsilon _n}}}(y_{{\varepsilon _n}}^{p(i)}) \ge \frac{{{U^i}(0)}}{2} > 0 \end{aligned}$$
(3.56)

and

$$\begin{aligned} {u_{{\varepsilon _n}}}(x_{{\varepsilon _n}}^{q(j)}) = {v_{{\varepsilon _n}}}(x_{{\varepsilon _n}}^{q(j)}/{\varepsilon _n}) \le {v_{{\varepsilon _n}}}(y_{{\varepsilon _n}}^{q(j)}) \le \frac{{{V^j}(0)}}{2} < 0. \end{aligned}$$
(3.57)

Taking \(\delta = {\bar{\delta }} : = \min \left\{ {\{ {U^i}(0)/2\} _{i = 1}^{{K_1}} \cup \{ - {V^j}(0)/2\} _{j = 1}^{{K_2}}} \right\} \) in (3.53), there exists \({R_{{\bar{\delta }}}} > 0\) such that \(|{v_{{\varepsilon _n}}}(x)| < {{\bar{\delta }}}\) for all \(x \in {{\mathbb {R}}^N}\backslash \mathop \cup \nolimits _{k = 1}^K {B_{{R_{{\bar{\delta }}}}}}(y_{{\varepsilon _n}}^k)\). Recalling (3.56) and (3.57), we have

$$\begin{aligned} |(x_{{\varepsilon _n}}^k/{\varepsilon _n}) - y_{{\varepsilon _n}}^k| \le {R_{{\bar{\delta }} }}, \end{aligned}$$
(3.58)

thus \(x_{{\varepsilon _n}}^k \rightarrow {z^k} \in {{\mathcal {M}}^k}\) as \(n \rightarrow \infty \).

It remains to prove the uniqueness of \(x_{{\varepsilon _n}}^{p(i)}\) and \(x_{{\varepsilon _n}}^{q(j)}\). For each \(1 \le i \le {K_1}\), assume on the contrary that, up to a subsequence, \({u_{{\varepsilon _n}}}\) possesses at least two maximum points \(x_{{\varepsilon _n},l}^{p(i)}\) in \({\Lambda ^{p(i)}}\) \((l=1,2)\). By (3.58), for each \(l=1,2\), after passing to a subsequence, \((x_{{\varepsilon _n},l}^{p(i)}/{\varepsilon _n}) - y_{{\varepsilon _n}}^{p(i)} \rightarrow {P_l} \in \overline{{B_{{R_{{\bar{\delta }} }}}}(0)}\). Setting \({v_{{\varepsilon _n},l}}(x) = {u_{{\varepsilon _n}}}({\varepsilon _n}x + x_{{\varepsilon _n},l}^{p(i)})\), we see from (3.54) that

$$\begin{aligned} {v_{{\varepsilon _n},l}}(x) \rightharpoonup {U^i}(x + {P_l}){\text { in }}{H^2}({{\mathbb {R}}^N}){\text { and }}{v_{{\varepsilon _n},l}}(x) \rightarrow {U^i}(x + {P_l}){\text { in }}C_{{\text {loc}}}^3({{\mathbb {R}}^N}).\qquad \end{aligned}$$
(3.59)

Since \(x_{{\varepsilon _n},l}^{p(i)}\) is a maximum point of \({u_{{\varepsilon _n}}}\), zero is a local maximum point of \({U^i}(x + {P_l})\). As Proposition 2.1 shows, \(U^i\) is radially symmetric, strictly decreasing and has its unique local maximum point at zero; hence \({P_l}=0\).

Next, we claim that

$$\begin{aligned} \Delta {U^i}(0) < 0. \end{aligned}$$
(3.60)

Suppose not; since zero is a maximum point of \({U^i}\), we then have \(\Delta {U^i}(0) = 0\). Set \({W^i}: = - \Delta {U^i} + \frac{\beta }{2}{U^i}\); then \(({U^i},{W^i})\) satisfies

$$\begin{aligned} \left\{ \begin{gathered} - \Delta {U^i} + \frac{\beta }{2}{U^i} - {W^i} = 0, \\ - \Delta {W^i} + \frac{\beta }{2}{W^i} + \left( {{m_{p(i)}} - \frac{{{\beta ^2}}}{4}} \right) {U^i} - |{U^i}{|^{p - 2}}{U^i} = 0. \\ \end{gathered} \right. \end{aligned}$$
(3.61)
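The equivalence of this system with the original fourth-order equation is a direct computation that only uses the linearity of the Laplacian. The following sympy sketch (one-dimensional, with the nonlinearity abstracted as a function \(g\), purely illustrative) checks that eliminating \(W^i\) recovers \(\Delta^2 U^i - \beta \Delta U^i + m_{p(i)} U^i = |U^i|^{p-2}U^i\):

```python
import sympy as sp

r, beta, m = sp.symbols('r beta m', positive=True)
U = sp.Function('U')(r)
g = sp.Function('g')(r)           # stands for the nonlinearity |U|^{p-2} U

lap = lambda f: sp.diff(f, r, 2)  # 1-D Laplacian; the identity is linear-algebraic

W = -lap(U) + (beta / 2) * U      # the substitution W := -Delta U + (beta/2) U

# Second equation of the system, with W eliminated:
second_eq = -lap(W) + (beta / 2) * W + (m - beta**2 / 4) * U - g

# Original fourth-order equation, moved to one side:
fourth_order = lap(lap(U)) - beta * lap(U) + m * U - g

assert sp.expand(second_eq - fourth_order) == 0
print("reduction verified")
```

The cross terms \(\pm \frac{{{\beta ^2}}}{4}U\) cancel identically, which is exactly why the shift \(m_{p(i)} - \frac{{{\beta ^2}}}{4}\) appears in the second equation.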

Since \({U^i} > 0\) and \(\frac{{{\beta ^2}}}{4} \ge {m_{p(i)}}\), by (3.61) and the strong maximum principle, \({W^i}>0\). In view of [26, Theorem 1] or the continued proof of Theorem 1.1 in [21], \({U^i}\) and \({W^i}\) must be radially symmetric and strictly decreasing with respect to zero. Setting \(\varphi (r) = {U^i}(r) - {U^i}(0)\) and \(\psi (r) = {W^i}(r) - {W^i}(0)\), we compute

$$\begin{aligned} \Delta \varphi (r)= & {} \Delta {U^i}(r) = \frac{\beta }{2}(\varphi (r) + {U^i}(0)) - (\psi (r) + {W^i}(0)) \\= & {} \frac{\beta }{2}\varphi (r) - \psi (r) + \Delta {U^i}(0), \end{aligned}$$

then

$$\begin{aligned} - \Delta \varphi (r) + \frac{\beta }{2}\varphi (r) = \psi (r) \le 0. \end{aligned}$$

By the strong maximum principle, either \(\varphi \equiv 0\) or \(\varphi < 0\) away from zero; since the maximum \(\varphi (0) = 0\) is attained at an interior point, we get \(\varphi \equiv 0\), which contradicts the fact that \({U^i}\) is strictly decreasing. Hence, (3.60) holds. Therefore, we can choose \({r_0}>0\) such that \(({U^i})''(r) < 0\) for \(0 \le r \le {r_0}\). By (3.59) and [27, Lemma 4.2], we see that

$$\begin{aligned} \frac{{|x_{{\varepsilon _n},1}^{p(i)} - x_{{\varepsilon _n},2}^{p(i)}|}}{{{\varepsilon _n}}} \ge {r_0} > 0, \end{aligned}$$

which contradicts the fact that \((x_{{\varepsilon _n},l}^{p(i)}/{\varepsilon _n}) - y_{{\varepsilon _n}}^{p(i)} \rightarrow {P_l} = 0\). This proves the uniqueness of \(x_{{\varepsilon _n}}^{p(i)}\). The proof of the uniqueness of \(x_{{\varepsilon _n}}^{q(j)}\) is similar and we omit it.

Since \(\{ {\varepsilon _n}\} _{n = 1}^\infty \) is arbitrary, we obtain all the results in Theorem 1.1. \(\square \)