1 Introduction

We look for solutions \((u,\lambda ) \in H^1(\mathbb {R}^N) \times \mathbb {R}\) to the following equation

$$\begin{aligned} -\Delta u + \lambda u = g(u) \quad \textrm{in} \ \mathbb {R}^N, \ N \ge 2 \end{aligned}$$

paired with the constraint

$$\begin{aligned} \int _{\mathbb {R}^N} |u|^2 \, dx =\rho ^2, \end{aligned}$$

where \(\rho >0\) is fixed and g satisfies suitable assumptions (see (g0)–(g4) below). Equations like (1.1) arise when one looks for standing-wave solutions to the evolution equation

$$\begin{aligned} \textrm{i} \partial _t \Psi + \Delta \Psi + g(\Psi ) = 0, \end{aligned}$$

i.e., \(\Psi (t,x) = e^{\textrm{i} \lambda t} u(x)\). The constraint (1.2), in turn, is motivated by the fact that, formally, the quantity

$$\begin{aligned} \int _{\mathbb {R}^N} |\Psi (t,x)|^2 \, dx \end{aligned}$$

is conserved in time. Moreover, the \(L^2\)-norm squared of a solution to (1.3) has a precise physical meaning in the contexts from which this equation is derived: it represents, e.g., the power supply in nonlinear optics and the total number of atoms in Bose–Einstein condensation. The model for g that inspires this paper is

$$\begin{aligned} g(s) = s \ln s^2, \end{aligned}$$

which was introduced in [4] to obtain the separability of non-interacting subsystems; observe that, in this case,

$$\begin{aligned} \lim _{s \rightarrow 0} \frac{g(s)}{s} = -\infty \end{aligned}$$

which is known in the literature as the strongly sublinear (or infinite-mass) regime. Since then, both (1.1)—possibly with an external potential—and (1.3), in both cases with g as in (1.4), have attracted great interest: we mention [13, 31, 33, 34, 35] for the former context and [8, 9, 11, 14] (see also [10, Section 9.3]) for the latter. The common feature of most of the articles above is that the authors can work in the subspace of \(H^1(\mathbb {R}^N)\)

$$\begin{aligned} \mathcal {H}:= \left\{ u \in H^1(\mathbb {R}^N): \int _{\mathbb {R}^N} u^2 |\ln u^2| \, dx < \infty \right\} , \end{aligned}$$

where the energy functional (see (1.7) below) is of class \(\mathcal {C}^1\) (we mention that, instead, the approach of [13] is based on critical point theory for lower semi-continuous functionals, while that of [14] relies on a truncation argument). Of course, this strategy is not viable for generic nonlinearities such as those considered in this paper; generic nonlinear terms were recently considered in [15, 24].

Concerning normalized solutions, i.e., solutions in the presence of (1.2), to the best of our knowledge they have been treated only in two very recent works: in [30], with

$$\begin{aligned} {g(s) = \alpha s \ln s^2 + \mu |s|^{p-2} s}, \end{aligned}$$

where \(\alpha ,\mu \in \mathbb {R}\) and \(2 < p \le 2^*\), and in [37], which mainly concerns systems of two equations. It is worth mentioning that, when g is as in (1.4), if (v, 0) solves (1.1) and \(v \ne 0\), then \((u,\lambda )\) given by

$$\begin{aligned} u = \frac{\rho }{|v|_2} v, \quad \lambda = 2 \ln \left( \frac{\rho }{|v|_2}\right) \end{aligned}$$

is a solution to (1.1)–(1.2). Nonetheless, the approach of [30, 37] still makes use of \(\mathcal {H}\). In particular, to recover compactness, the authors consider the subspace of radially symmetric functions as in [9, 11], which embeds compactly into \(L^q(\mathbb {R}^N)\) for every \(q \in [2,2^*)\); this is different from the classical result concerning the subspace of radial functions in \(H^1(\mathbb {R}^N)\), which embeds compactly into \(L^q(\mathbb {R}^N)\) only for \(q \in (2,2^*)\). The additional compact embedding into \(L^2(\mathbb {R}^N)\) proves to be extremely useful owing to the constraint (1.2); in particular, this is what allows the authors to work with every \(\rho >0\) when \(2< p < 2 + 4/N\), where p is the same as in (1.5). Once again, here we cannot utilize the compact embedding into \(L^2(\mathbb {R}^N)\) because g need not be given by (1.5).
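The rescaling displayed above relies on a pointwise identity for the logarithmic nonlinearity (1.4): if \(-\Delta v = g(v)\), then \(u = cv\) with \(c = \rho /|v|_2\) satisfies \(-\Delta u + (2\ln c)u = g(u)\), because \(c\,g(v) = g(cv) - (2\ln c)\,cv\). The following is a quick numerical sketch of this algebraic identity (an illustration only, not part of the argument):

```python
import math

def g(s):
    # logarithmic nonlinearity (1.4): g(s) = s ln s^2
    return s * math.log(s * s)

def identity_gap(c, v):
    # If -Δv = g(v), i.e. (v,0) solves (1.1), and u = c v with c = ρ/|v|_2 > 0,
    # then -Δu = c g(v) = g(u) - (2 ln c) u, so (u, 2 ln c) solves (1.1)-(1.2).
    # Here we check the pointwise identity c g(v) = g(c v) - (2 ln c) c v.
    return c * g(v) - (g(c * v) - 2.0 * math.log(c) * c * v)

# the gap vanishes (up to rounding) for any c > 0 and v ≠ 0
gaps = [identity_gap(c, v) for c in (0.5, 1.0, 3.7) for v in (-2.0, 0.1, 1.3)]
```

This is exactly the mechanism giving \(\lambda = 2 \ln (\rho /|v|_2)\) in the displayed formula.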

The function g in (1.1) satisfies the following assumptions. Define \(G(s)=\int _0^s g(t)\, dt\) and

$$\begin{aligned} G_+(s) = {\left\{ \begin{array}{ll} \displaystyle \int _0^s \max \{g(t),0\} \, dt &{} \text {if } s \ge 0,\\ \displaystyle \int _{s}^0 \max \{-g(t),0\} \, dt &{} \text {if } s < 0. \end{array}\right. } \end{aligned}$$
  1. (g0)

    \(g:\mathbb {R}\rightarrow \mathbb {R}\) is continuous and \(g(0)=0\).

  2. (g1)

    \(\lim _{s\rightarrow 0}G_+(s)/|s|^2=0\) and \(\limsup _{s \rightarrow 0} g_+(s)/s < \infty \), where \(g_+ = G_+'\).

  3. (g2)

    If \(N\ge 3\), then \(\limsup _{|s|\rightarrow \infty }|g(s)|/|s|^{2^*-1}<\infty \), where \(2^*=\frac{2N}{N-2}\). If \(N=2\), then \(\lim _{|s|\rightarrow \infty }g(s)/e^{\alpha s^2} = 0\) for all \(\alpha > 4\pi \).

  4. (g3)

    \(\lim _{|s|\rightarrow \infty } G_+(s) / |s|^{2+4/N}=0\).

  5. (g4)

    There exists \(\xi _0 \ne 0\) such that \(G(\xi _0)>0\).

The assumptions (g0), (g2), and (g4) are classical in the context of elliptic problems (cf. [2, 3]), while (g1) is a weaker version of what is normally assumed in the presence of \(L^2\)-constraints, i.e., \(\lim _{s\rightarrow 0} g(s) / s = 0\); (g3) indicates that the growth of \(G_+\) is mass-subcritical at infinity. With these assumptions, we can deal with nonlinearities that are much more general than (1.5); for example, we can consider functions g that behave like \(-|s|^{\omega - 1} s\) or \(|s|^{\omega - 1} s \ln s^2\) for |s| small, with \(0< \omega < 1\). We can also cover the case when g goes to 0 more slowly than any power, e.g., when g behaves like \(1 / \ln s^2\) for |s| small.
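The last example is instructive: \(g(s) = 1/\ln s^2\) tends to 0 at the origin, but more slowly than \(|s|^\varepsilon \) for any \(\varepsilon > 0\). A small numerical sketch of this behaviour (an illustration only):

```python
import math

def g(s):
    # model term behaving like 1/ln s^2 near the origin
    return 1.0 / math.log(s * s)

# evaluate along s = 10^{-k}: |g(s)| -> 0 as s -> 0 ...
vals = {k: g(10.0 ** (-k)) for k in (1, 5, 50)}
# ... yet |g(s)| / |s|^eps blows up for any fixed eps > 0 (here eps = 0.1)
ratios = {k: abs(vals[k]) / (10.0 ** (-k)) ** 0.1 for k in (1, 5, 50)}
```

In particular no power-type bound \(|g(s)| \lesssim |s|^\varepsilon \) can hold near 0, which is why such terms escape the usual \(H^1(\mathbb {R}^N)\) framework.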

Let us define

$$\begin{aligned} \mathcal {D}:= \left\{ u \in H^1(\mathbb {R}^N): \int _{\mathbb {R}^N} |u|^2 \, dx \le \rho ^2 \right\} \quad \text {and} \quad {\mathcal {S}}:= \left\{ u \in H^1(\mathbb {R}^N): \int _{\mathbb {R}^N} |u|^2 \, dx = \rho ^2 \right\} . \end{aligned}$$

Sometimes, we will write explicitly \(\mathcal {D}(\rho )\) and \({\mathcal {S}}(\rho )\). Let also \(J :H^1(\mathbb {R}^N) \rightarrow \mathbb {R}\cup \{\infty \}\) be the energy functional associated with (1.1)–(1.2), i.e.,

$$\begin{aligned} J(u) = \frac{1}{2} \int _{\mathbb {R}^N} |\nabla u|^2 \, dx - \int _{\mathbb {R}^N} G(u) \, dx. \end{aligned}$$

Our first result is the existence of a solution to (1.1)–(1.2), with additional properties. We point out that by a solution to (1.1) we mean a pair \((u,\lambda ) \in H^1(\mathbb {R}^N) \times \mathbb {R}\) such that \(J'(u)v = -\lambda \int _{\mathbb {R}^N} uv \, dx\) for every \(v \in \mathcal {C}^{\infty }_0(\mathbb {R}^N)\).

Theorem 1.1

If g satisfies (g0)–(g4), then there exists \(\bar{\rho }>0\) such that for every \(\rho >\bar{\rho }\) there exist \(\lambda >0\) and \(u\in {\mathcal {S}}\) such that \(J(u)=\min _{\mathcal {D}}J<0\) and \((u,\lambda )\) is a solution to (1.1)–(1.2). Moreover, u has constant sign and is, up to a translation, radial and radially monotonic.

Remark 1.2

Of course, (g0)–(g4) include the classical case \(\lim _{s \rightarrow 0} g(s) / s = 0\), which implies that J is of class \(\mathcal {C}^1\) in \(H^1(\mathbb {R}^N)\). In this case, with the notation of Theorem 1.1, u is actually a critical point of \(J|_{\mathcal {S}}\). In addition, if \(\lim _{s \rightarrow 0} G(s) / |s|^{2+4/N} = \infty \) (i.e., in the mass-subcritical regime), then one can take \(\bar{\rho } = 0\) in Theorem 1.1 (see, e.g., [28] for more details). An important example in this context is given by nonlinear media with saturation [18], where \(g(s) = s^3 / (1 + s^2)\).

Remark 1.3

Under the assumptions (g0)–(g4), if there exists a solution \((u,\lambda ) \in H^1(\mathbb {R}^N) \times \mathbb {R}\) to (1.1)–(1.2) with \(J(u) \le 0\), then \(\lambda > 0\). As a matter of fact, \(J(u) \le 0\) implies that \(\int _{\mathbb {R}^N} G(u) \, dx\) is finite, thus from [24, Proposition 3.1] \((u,\lambda )\) satisfies the Pohožaev identity \((N-2) \int _{\mathbb {R}^N} |\nabla u|^2 \, dx = 2N \int _{\mathbb {R}^N} \left( G(u) - \frac{\lambda }{2} |u|^2 \right) dx\) and, consequently, \(0 \ge J(u) = \frac{1}{N} |\nabla u|_2^2 - \frac{\lambda }{2} \rho ^2\).
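The last step of the remark is pure bookkeeping: substituting \(\int _{\mathbb {R}^N} G(u)\,dx\) from the Pohožaev identity into the definition of J gives \(J(u) = \frac{1}{N}|\nabla u|_2^2 - \frac{\lambda }{2}\rho ^2\). A sketch checking this algebra, with \(T\) standing for \(|\nabla u|_2^2\) and arbitrary placeholder values:

```python
# Pohožaev: (N-2) T = 2N (G_int - λ ρ²/2), i.e. G_int = (N-2)T/(2N) + λρ²/2.
# Then J = T/2 - G_int must equal T/N - λρ²/2.
def J_from_pohozaev(N, T, lam, rho):
    G_int = (N - 2) * T / (2 * N) + lam * rho ** 2 / 2  # forced by Pohožaev
    return T / 2 - G_int                                 # definition of J

checks = []
for N in (2, 3, 4, 7):
    T, lam, rho = 1.7, 0.9, 2.4        # arbitrary placeholder values
    checks.append(J_from_pohozaev(N, T, lam, rho) - (T / N - lam * rho ** 2 / 2))
```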

Our next result concerns the limit of the ground state energy map \(\rho \mapsto \inf _{\mathcal {D}(\rho )} J\) as \(\rho \rightarrow \infty \).

Proposition 1.4

If g satisfies (g0)–(g4), then

$$\begin{aligned} \lim _{\rho \rightarrow \infty } \inf _{\mathcal {D}(\rho )} J = -\infty . \end{aligned}$$

Moreover, we obtain the relative compactness in \(H^1(\mathbb {R}^N)\), up to translations, of minimizing sequences for \(\inf _{\mathcal {D}(\rho )} J\). Here and in the sequel, with a slight abuse of notation, if \((u_n)_n\) is a sequence in X, we write \(u_n \in X\) instead of \((u_n)_n \subset X\).

Theorem 1.5

If (g0)–(g4) hold, then there exists \(\bar{\rho } > 0\) such that for every \(\rho > \bar{\rho }\), if \(0 < \rho _n \rightarrow \rho \) and \(u_n \in \mathcal {D}(\rho _n)\) is such that \(J(u_n) \rightarrow \inf _{\mathcal {D}(\rho )} J\), then \(u_n\) is relatively compact in \(H^1(\mathbb {R}^N)\) up to translations.

It is well known that such relative compactness is an important step toward the orbital stability of solutions to (1.1)–(1.2). Unfortunately, we currently lack the tools to investigate the dynamics of (1.3) in this general setting, hence this issue is postponed to future work. Moreover, unlike the classical approach when J is of class \(\mathcal {C}^1\), where the existence of a constrained minimizer for J – and, consequently, of a solution to (1.1)–(1.2) – is a by-product of the aforementioned relative compactness, here we first prove the existence of a solution to (1.1)–(1.2) and then use this fact to prove the compactness result.

When g is as in (1.5), we additionally have the following non-existence result.

Theorem 1.6

Let g be given by (1.5) with \(\alpha > 0\), \(\mu \in \mathbb {R}\), and \(2 < p \le 2^*\) (\(p > 2\) if \(N = 2\)).

(i) If \(\rho \gg 1\) and either \(-\frac{\alpha p}{p-2}e^{-p/2} < \mu \le 0\), or \(\mu > 0\) and \(2<p<2+4/N\), then there exists a solution \((u,\lambda ) \in H^1(\mathbb {R}^N) \times (0,\infty )\) to (1.1)–(1.2) such that \(G(u) \in L^1(\mathbb {R}^N)\) and \(J(u) = \min _{\mathcal {D}}J < 0\).

(ii) If \(\mu \le -\frac{\alpha p}{p-2}e^{-p/2}\), then there exist no solutions \((u,\lambda ) \in H^1(\mathbb {R}^N) \times [0,\infty )\) to (1.1)–(1.2) such that \(G(u) \in L^1(\mathbb {R}^N)\).

Remark 1.7

In Theorem 1.1, we look for (ground state) solutions with sufficiently large \(\rho \) because this ensures that the energy is negative, which, in turn, is used to prove existence. Then, as pointed out in Remark 1.3, a consequence is that \(\lambda >0\). On the other hand, in the setting of Theorem 1.6 (i) with \(\mu >0\), ground state solutions still exist for all \(\rho >0\), as in Corollary 1.9 below, but with no information about the Lagrange multiplier or the energy.

We follow the approach of [24] and consider a family of approximating problems. Note that the approximations are different from those in [15], where the nonlinearity is modified in a neighbourhood of infinity, and are more similar to those in [14], although not the same, as the nonlinearity therein is of the form (1.4).

Let \(g_{-}(s):=g_{+}(s)-g(s)\) and \(G_{-}(s):=G_{+}(s)-G(s)\ge 0\) for \(s\in \mathbb {R}\). In view of (g1) and (g3), \(G_{+}(u)\in L^{1}(\mathbb {R}^N)\) for \(u \in H^1(\mathbb {R}^N)\); however, \(G_{-}(u)\) may not be integrable unless \(G_{-}(s) \lesssim |s|^{2}\) for small |s|. Moreover, for any \(u \in H^1(\mathbb {R}^N)\), J is differentiable at u along functions in \(\mathcal {C}_0^\infty (\mathbb {R}^N)\), but it is not of class \(\mathcal {C}^1\) in general. In order to overcome this problem, for every \(\varepsilon \in (0,1)\) let us take an even function \(\varphi _\varepsilon :\mathbb {R}\rightarrow [0,1]\) such that \(\varphi _\varepsilon (s) = |s| / \varepsilon \) for \(|s|\le \varepsilon \), \(\varphi _{\varepsilon }(s) = 1\) for \(|s|\ge \varepsilon \). We introduce a new functional,

$$\begin{aligned} J_\varepsilon (u) = \frac{1}{2} \int _{\mathbb {R}^N} |\nabla u|^2 \, dx + \int _{\mathbb {R}^N} G_{-}^\varepsilon (u) \, dx - \int _{\mathbb {R}^N} G_{+}(u) \, dx, \end{aligned}$$

where \(G_{-}^\varepsilon (s)=\int _0^s \varphi _\varepsilon (t)g_-(t)\, dt\), \(s\in \mathbb {R}\). Observe that \(G_{-}^\varepsilon (s) \le c_\varepsilon |s|^{2}\) for every \(|s| \le 1\) and some constant \(c_\varepsilon >0\) depending only on \(\varepsilon \). Hence, for \(\varepsilon \in (0,1)\), \(J_\varepsilon \) is of class \(\mathcal {C}^1\) on \(H^1(\mathbb {R}^N)\).
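To see why the truncation produces a quadratic bound near the origin, consider the model \(g_-(t) = t|\ln t^2|\) on \((0,1]\) (the negative part coming from (1.4)): for \(\varepsilon < e^{-1}\) one has \(\varphi _\varepsilon (t)g_-(t) \le |\ln \varepsilon ^2|\,t\), whence \(G_-^\varepsilon (s) \le \frac{1}{2}|\ln \varepsilon ^2|\,s^2\). A numerical sketch of this bound under the stated model (an illustration, not the general proof):

```python
import math

eps = 0.1  # any ε ∈ (0, e^{-1}) works for this model

def phi(t):
    # φ_ε: |t|/ε for |t| ≤ ε, equal to 1 beyond
    return min(abs(t) / eps, 1.0)

def g_minus(t):
    # model negative part of g(s) = s ln s^2 on (0, 1]
    return t * abs(math.log(t * t))

# φ_ε(t) g_-(t) ≤ |ln ε²| t on (0, 1]: the truncated term is linearly bounded,
# hence G_-^ε(s) = ∫_0^s φ_ε g_- dt ≤ (|ln ε²|/2) s², i.e. quadratic near 0.
c_eps = abs(math.log(eps * eps))
grid = [k / 10000.0 for k in range(1, 10001)]
worst = max(phi(t) * g_minus(t) / t for t in grid)
```

The maximum of the ratio is attained near \(t = \varepsilon \), where it equals \(|\ln \varepsilon ^2|\); without the factor \(\varphi _\varepsilon \), the ratio \(g_-(t)/t = |\ln t^2|\) would be unbounded as \(t \rightarrow 0\).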

The idea of considering the \(L^2\)-disc \(\mathcal {D}\) instead of the \(L^2\)-sphere \({\mathcal {S}}\) was introduced in [5] in a context where the nonlinear term is mass-supercritical and Sobolev-subcritical at infinity (i.e., \(\lim _{|s| \rightarrow \infty } G(s) / |s|^{2+4/N} = \infty \), \(\lim _{|s| \rightarrow \infty } G(s) / |s|^{2^*} = 0\)) and mass-critical or -supercritical at the origin (i.e., \(\limsup _{s \rightarrow 0} G(s) / |s|^{2+4/N} < \infty \)); later on, it was exploited again in [25], in a similar context that allows systems with Sobolev-critical nonlinearities, in [28], for equations and systems of equations in the mass-critical or -subcritical setting, and in [6], where polyharmonic operators and Hardy-type potentials are considered. One of the reasons for this choice is that the embedding \(H^1(\mathbb {R}^N) \hookrightarrow L^2(\mathbb {R}^N)\) is not compact, even when restricting to radial functions or imposing symmetries as in [20]; as a consequence, the weak limit of a weakly convergent sequence in \({\mathcal {S}}\) need not belong to \({\mathcal {S}}\). On the contrary, weak limits of sequences in \(\mathcal {D}\) stay in \(\mathcal {D}\), and the additional information that such a weak limit belongs to the set one is working with proves to be useful. In this paper, this strategy is fundamental: it allows the critical point found in Theorem 2.1 below – the existence result for the perturbed problem – to be a minimizer of the corresponding energy functional over the whole set \(\mathcal {D}\), and this fact plays an important role in the proof of Theorem 1.1 because the “candidate minimizer” of \(J|_\mathcal {D}\) might not, initially, belong to \({\mathcal {S}}\).

Now, we turn to the problem of finding multiple solutions (in fact, infinitely many) to (1.1)–(1.2). In this case, we need to change both the assumptions on g and our approach. The idea is to build a subspace of \(H^1(\mathbb {R}^N)\) in the spirit of \(\mathcal {H}\), but in a more general setting. There are two main issues with using the previous approach to obtain an infinite sequence of solutions. First, even at the level of the perturbed functional, one can obtain an arbitrarily large number of solutions if \(\rho \) is sufficiently large, but it is unclear whether infinitely many solutions exist; this is caused by the lack of a compact embedding into \(L^2(\mathbb {R}^N)\), which in turn requires each of the usual minimax values (cf. Theorem 4.9 below) to be negative, for which we need \(\rho \) to be large (and how large depends on each minimax level). Second, given a family (depending on \(\varepsilon \)) of critical values of the perturbed functional obtained as just described, and the corresponding family of critical points, it is not clear whether the latter converges to a critical point of the unperturbed functional, unlike what happens in the proof of Theorem 1.1, where we can exploit the fact that the critical levels are minima.

We assume that the right-hand side in (1.1) is given by

$$\begin{aligned} g(s) = f(s) - a(s) \end{aligned}$$

together with the following assumptions, where \(F(s) = \int _0^s f(t) \, dt\), \(A(s) = \int _0^s a(t) \, dt\), \(F_+\) is defined via (1.6) replacing g with f, and \(f_+ = F_+'\).

  1. (A)

    \(A \in \mathcal {C}^1(\mathbb {R})\) is an N-function that satisfies the \(\Delta _2\) and \(\nabla _2\) conditions globally and such that \(\lim _{s \rightarrow 0} A(s)/s^2 = \infty \) and \(s \mapsto a(s)s\) is convex.

  2. (f0)

    \(f :\mathbb {R}\rightarrow \mathbb {R}\) is continuous and odd.

  3. (f1)

    \(\lim _{s \rightarrow 0} f_+(s) / s = 0\).

  4. (f2)

If \(N \ge 3\), then \(|f(s)| \lesssim a(s) + |s| + |s|^{2^*-1}\); if \(N = 2\), then for all \(q \ge 2\) and \(\alpha > 4\pi \) there holds \(|f(s)| \lesssim a(s) + |s| + |s|^{q-1} \left( e^{\alpha s^2} - 1\right) \).

  5. (f3)

    \(\limsup _{|s| \rightarrow \infty } f_+(s) / |s|^{1+4/N} < \infty \).

Observe that (f3) implies

$$\begin{aligned} \eta := \limsup _{|s| \rightarrow \infty } \frac{F_+(s)}{|s|^{2+4/N}} < \infty . \end{aligned}$$

Let us define

$$\begin{aligned} W:= \left\{ u \in H^1(\mathbb {R}^N): A(u) \in L^1(\mathbb {R}^N) \right\} \end{aligned}$$

and, for a subgroup \(\mathcal {O}\subset \mathcal {O}(N)\),

$$\begin{aligned} W_{\mathcal {O}}:= \left\{ u \in W: u(g\cdot ) = u \text { for all } g \in \mathcal {O}\right\} . \end{aligned}$$

If \(N = 4\) or \(N \ge 6\), with the purpose of finding non-radial solutions (cf. [1, 23]), fix \(2 \le M \le N/2\) such that \(N - 2M \ne 1\) and let \(\tau (x) = (x_2,x_1,x_3)\) for \(x = (x_1,x_2,x_3) \in \mathbb {R}^M \times \mathbb {R}^M \times \mathbb {R}^{N-2M}\) (\(x_3\) is removed from the definition of \(\tau \) if \(N = 2M\)). Then, let

$$\begin{aligned} \mathcal {X}:= \left\{ u \in W_{\mathcal {O}(M) \times \mathcal {O}(M) \times \mathcal {O}(N-2M)}: u \circ \tau = -u\right\} \end{aligned}$$

(\(\mathcal {O}(N-2M)\) is removed from the definition of \(\mathcal {X}\) if \(N = 2M\)) and observe that \(\mathcal {X}\cap W_{\mathcal {O}(N)} = \{0\}\). For \(2< p < 2^*\), let us denote by \(C_{N,p}\) the best constant in the Gagliardo–Nirenberg inequality:

$$\begin{aligned} |u|_p \le C_{N,p} |\nabla u|_2^{N(1/2-1/p)} |u|_2^{1-N(1/2-1/p)} \quad \forall \, u \in H^1(\mathbb {R}^N). \end{aligned}$$

Our multiplicity result reads as follows.

Theorem 1.8

Let g be given by (1.9) such that (A), (f0)–(f3), and

$$\begin{aligned} 2 \eta C_{N,2+4/N}^{2+4/N} \rho ^{4/N} < 1 \end{aligned}$$

hold, where \(\eta \) is defined in (1.10). Then there exist infinitely many solutions \((u,\lambda ) \in W_{\mathcal {O}(N)} \times \mathbb {R}\) to (1.1)–(1.2), one of which – say, \((\bar{u},\bar{\lambda })\) – has the property that \(J(\bar{u}) = \min _{{\mathcal {S}}\cap W} J\). If, in addition, \(N = 4\) or \(N \ge 6\), then there exist infinitely many solutions \((v,\mu ) \in \mathcal {X}\times \mathbb {R}\), one of which – say, \((\bar{v},\bar{\mu })\) – has the property that \(J(\bar{v}) = \min _{{\mathcal {S}}\cap \mathcal {X}} J\).

By setting

$$\begin{aligned} A(s):= {\left\{ \begin{array}{ll} - \alpha s^2 \ln s^2 &{} \text {if } 0 \le |s| < e^{-3},\\ 3 \alpha s^2 + 4 \alpha e^{-3}|s| - \alpha e^{-6} &{} \text {if } |s| \ge e^{-3}, \end{array}\right. } \quad F(s) = \alpha s^2 \ln s^2 + A(s) + \frac{\mu }{p} |s|^p, \end{aligned}$$

with \(\alpha > 0\) and \(\mu ,p\) in the appropriate range, one checks that (A) and (f0)–(f3) hold and recovers the case (1.5).
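For (A) to hold, the two branches of A must glue in a \(\mathcal {C}^1\) fashion at \(|s| = e^{-3}\): both the values (equal to \(6\alpha e^{-6}\)) and the derivatives (equal to \(10\alpha e^{-3}\)) match there. A quick numerical sketch of this matching (α is an arbitrary positive placeholder):

```python
import math

alpha = 1.3  # placeholder α > 0
s0 = math.exp(-3)

def A_inner(s):   # branch for 0 ≤ |s| < e^{-3}
    return -alpha * s * s * math.log(s * s)

def A_outer(s):   # branch for |s| ≥ e^{-3}
    return 3 * alpha * s * s + 4 * alpha * math.exp(-3) * abs(s) - alpha * math.exp(-6)

def dA_inner(s):  # derivative of the inner branch, s > 0
    return -alpha * (2 * s * math.log(s * s) + 2 * s)

def dA_outer(s):  # derivative of the outer branch, s > 0
    return 6 * alpha * s + 4 * alpha * math.exp(-3)

value_gap = A_inner(s0) - A_outer(s0)      # both branches equal 6 α e^{-6} at s0
deriv_gap = dA_inner(s0) - dA_outer(s0)    # both derivatives equal 10 α e^{-3} at s0
```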

Corollary 1.9

Let g be given by (1.5) with \(\alpha > 0\), \(\mu \ge 0\), and \(2 < p \le 2+4/N\). If (1.12) holds, then there exists a solution \((u,\lambda ) \in W\times \mathbb {R}\) to (1.1)–(1.2) such that \(J(u) = \min _{{\mathcal {S}}\cap W}J\).

In particular, we get a solution to (1.1)–(1.2) with (1.5) for any \(\rho >0\) provided that \(2<p<2+4/N\). Moreover, (f3) allows us to handle terms g that grow “too fast” to be included in the usual \(H^1(\mathbb {R}^N)\) setting. For example, letting

$$\begin{aligned} A(s):= {\left\{ \begin{array}{ll} \displaystyle -\alpha s^2 \ln s^2 &{} \text {if } 0 \le |s| < e^{-3},\\ \displaystyle \frac{10 \alpha }{p} e^{3p-6}|s|^p + 2 \alpha \left( 3 - \frac{5}{p}\right) e^{-6} &{} \text {if } |s| \ge e^{-3}, \end{array}\right. } \end{aligned}$$

with \(\alpha > 0\), we can allow \(f_-(s) \approx |s|^{p-2} s\) for \(|s| \gg 1\) for every \(p > 2\), even when \(N \ge 3\) and \(p > 2^*\). At the same time, we can treat nonlinearities that are “more singular” at the origin, for example, letting

$$\begin{aligned} A(s):= \frac{1}{q} |s|^q, \quad 1< q < 2, \end{aligned}$$

and taking any f that satisfies (f0)–(f3). The price to pay for this different approach is that we need (1.9) and (A) to construct a proper functional setting (i.e., an Orlicz one); this prevents us from dealing, e.g., with terms that behave like \(1/\ln s^2\) near the origin, which are instead allowed by the use of the approximating problems to find least-energy solutions. On the other hand, we can weaken the assumptions on f, the “remaining” part of the nonlinearity: it can be controlled by a instead of the usual conditions in an \(H^1(\mathbb {R}^N)\) setting (in particular, we can deal with Sobolev-supercritical nonlinearities), we do not need \(F > A\) anywhere, \(f_+\) can have mass-critical growth at infinity (provided \(\rho \) is small), and there are no restrictions on \(\rho \) if \(f_+\) has mass-subcritical growth at infinity; none of these scenarios can be handled with the previous approach. Except for [30, Theorems 1.2 and 1.3], where the nonlinear term has the very specific form (1.5), Theorem 1.8 seems to be the first multiplicity result in which normalized solutions are sought in the strongly sublinear regime, and the first at all to provide more than two solutions.

The paper is structured as follows: in Sect. 2, we obtain least-energy solutions to the perturbed problem; in Sect. 3, we prove Theorems 1.1, 1.5, 1.6, and Proposition 1.4; in Sect. 4, we obtain infinitely many solutions to (1.1)–(1.2).

2 Solutions to the perturbed problem

Recall from (1.8) the definition of \(J_\varepsilon :H^1(\mathbb {R}^N) \rightarrow \mathbb {R}\). The main purpose of this section is to prove the following result, where we denote \(c_\varepsilon (\rho ):= \inf _{\mathcal {D}(\rho )} J_\varepsilon \).

Theorem 2.1

Let g satisfy (g0)–(g4). Then for every \(\beta > 0\) there exists \(\bar{\rho }> 0\) such that for every \(\rho > \bar{\rho }\) and \(\varepsilon > 0\) there exist \(\lambda _\varepsilon > 0\) and \(u_\varepsilon \in H^1(\mathbb {R}^N)\) such that \(J_\varepsilon (u_\varepsilon ) = c_\varepsilon (\rho ) < -\beta \) and

$$\begin{aligned} {\left\{ \begin{array}{ll} -\Delta u_\varepsilon + \lambda _\varepsilon u_\varepsilon = g_+(u_\varepsilon ) - \varphi _\varepsilon (u_\varepsilon ) g_-(u_\varepsilon ) \quad \text {in } \mathbb {R}^N\\ \int _{\mathbb {R}^N}|u_\varepsilon |^2\,dx=\rho ^2. \end{array}\right. } \end{aligned}$$

Moreover, every such \(u_\varepsilon \) has constant sign and, up to a translation, is radial and radially monotonic.

We begin by proving a series of lemmas.

Lemma 2.2

If g satisfies (g0)–(g3), then \(J_\varepsilon |_\mathcal {D}\) is coercive and bounded below.


Proof

From (g1) and (g3), for every \(\delta > 0\) there exists \(C_\delta > 0\) such that for every \(s \in \mathbb {R}\)

$$\begin{aligned} G_+(s) \le C_\delta |s|^2 + \delta |s|^{2+4/N}. \end{aligned}$$

Consequently, from (1.11), for every \(u \in \mathcal {D}\) there holds

$$\begin{aligned} \begin{aligned} J_\varepsilon (u)&\ge \int _{\mathbb {R}^N} \frac{1}{2} |\nabla u|^2 - G_+(u) \, dx \ge \frac{1}{2} |\nabla u|_2^2 - C_\delta |u|_2^2 - \delta |u|_{2+4/N}^{2+4/N}\\&\ge \frac{1}{2} |\nabla u|_2^2 - C_\delta \rho ^2 - \delta C_{N,2+4/N}^{2+4/N} \rho ^{4/N} |\nabla u|_2^2. \end{aligned} \end{aligned}$$

Taking \(\delta < \left( 2 C_{N,2+4/N}^{2+4/N} \rho ^{4/N}\right) ^{-1}\) we conclude. \(\square \)
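The last chain of inequalities uses the Gagliardo–Nirenberg inequality (1.11) with \(p = 2+4/N\), for which the exponents collapse to \(|u|_{2+4/N}^{2+4/N} \le C_{N,2+4/N}^{2+4/N} |\nabla u|_2^2 \, |u|_2^{4/N}\). A sketch checking this exponent arithmetic:

```python
# With p = 2 + 4/N and θ = N(1/2 - 1/p), raising (1.11) to the power p gives
# |u|_p^p ≤ C^p |∇u|_2^{θp} |u|_2^{(1-θ)p}; here θp = 2 and (1-θ)p = 4/N,
# which is exactly the form used in the proof of Lemma 2.2.
results = []
for N in (2, 3, 4, 5, 10):
    p = 2 + 4 / N
    theta = N * (0.5 - 1 / p)
    results.append((theta * p, (1 - theta) * p, 4 / N))
```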

Lemma 2.3

If g satisfies (g0)–(g2) and (g4), then for every \(\beta >0\) there exists \(\bar{\rho }>0\) such that

$$\begin{aligned} c_\varepsilon (\rho ) < -\beta \end{aligned}$$

for every \(\rho >\bar{\rho }\) and every \(\varepsilon \in (0,1)\).


Proof

Arguing as in [2, Proof of Theorem 2] and observing that \(G^\varepsilon := G_+ - G_-^\varepsilon \ge G\), we find \(u \in H^1(\mathbb {R}^N)\) such that \(\int _{\mathbb {R}^N} G^\varepsilon (u) \, dx \ge 1\) for every \(\varepsilon \in (0,1)\). Define \(u_\rho := u(|u|_2^{2/N}\rho ^{-2/N}\cdot ) \in {\mathcal {S}}(\rho )\). There holds

$$\begin{aligned} \begin{aligned} J_\varepsilon (u_\rho )&= \int _{\mathbb {R}^N} \frac{1}{2} |\nabla u_\rho |^2 - G^\varepsilon (u_\rho ) \, dx = \frac{|\nabla u|_2^2}{2|u|_2^{2(N-2)/N}} \rho ^{2 - 4/N} - \frac{\rho ^2}{|u|_2^2} \int _{\mathbb {R}^N} G^\varepsilon (u) \, dx\\&\le \frac{|\nabla u|_2^2}{2|u|_2^{2(N-2)/N}} \rho ^{2 - 4/N} - \frac{\rho ^2}{|u|_2^2} \rightarrow -\infty \end{aligned} \end{aligned}$$

as \(\rho \rightarrow \infty \), whence the statement. \(\square \)
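The two prefactors in the computation above come from the change of variables \(u_\rho = u(\sigma \cdot )\) with \(\sigma = (|u|_2/\rho )^{2/N}\): the mass and potential terms scale by \(\sigma ^{-N} = \rho ^2/|u|_2^2\), the gradient term by \(\sigma ^{2-N}\). A sketch of this exponent bookkeeping (m plays the role of \(|u|_2\); the values are placeholders):

```python
# σ = (m/ρ)^{2/N}: then σ^{-N} = (ρ/m)² (mass and potential terms) and
# σ^{2-N} = ρ^{2-4/N} / m^{2-4/N} (gradient term), so the kinetic part of
# J_ε(u_ρ) grows like ρ^{2-4/N} while the potential part grows like ρ².
checks = []
for N in (2, 3, 5, 8):
    rho, m = 7.3, 1.9          # placeholder values for ρ and |u|_2
    sigma = (m / rho) ** (2 / N)
    checks.append(sigma ** (-N) - (rho / m) ** 2)
    checks.append(sigma ** (2 - N) - rho ** (2 - 4 / N) / m ** (2 - 4 / N))
```

Since \(2 - 4/N < 2\), the negative \(\rho ^2\) term dominates as \(\rho \rightarrow \infty \), which is the heart of the proof.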

Lemma 2.4

If (g0)–(g3) hold, then for every \(\rho _1,\rho _2>0\) there holds \(c_\varepsilon (\sqrt{\rho _1^2+\rho _2^2})\le c_\varepsilon (\rho _1)+c_\varepsilon (\rho _2)\).


Proof

Fix \(\delta >0\). From the definition of \(c_\varepsilon \), which is finite due to Lemma 2.2, and the density of \(\mathcal {C}_0^\infty (\mathbb {R}^N)\) in \(H^1(\mathbb {R}^N)\), there exist \(u_1,u_2\in \mathcal {C}_0^\infty (\mathbb {R}^N)\) such that \(|u_i|_2\le \rho _i\) and \(J_\varepsilon (u_i)\le c_\varepsilon (\rho _i)+\delta \), \(i=1,2\). By translation invariance, we may assume that \(u_1\) and \(u_2\) have disjoint supports, so that \(J_\varepsilon (u_1+u_2) = J_\varepsilon (u_1) + J_\varepsilon (u_2)\). Then \(|u_1+u_2|_2^2 = |u_1|_2^2 + |u_2|_2^2 \le \rho _1^2 + \rho _2^2\) and so

$$\begin{aligned} c_\varepsilon \left( \sqrt{\rho _1^2+\rho _2^2}\right) \le J_\varepsilon (u_1+u_2)\le c_\varepsilon (\rho _1)+c_\varepsilon (\rho _2)+2\delta . \end{aligned}$$

Since \(\delta >0\) is arbitrary, the statement follows. \(\square \)

The following lemma is inspired by [29, Lemma 3.2 (ii)].

Lemma 2.5

Assume (g0)–(g3) hold and let \(\rho _1,\rho _2 > 0\). If there exist \(u_i\in \mathcal {D}(\rho _i)\) such that \(J_\varepsilon (u_i) = c_\varepsilon (\rho _i)\), \(i=1,2\), and \((u_1,u_2) \ne (0,0)\), then \(c_\varepsilon (\sqrt{\rho _1^2+\rho _2^2}) < c_\varepsilon (\rho _1) + c_\varepsilon (\rho _2)\).


Proof

For every \(s\ge 1\) and \(i=1,2\) there holds

$$\begin{aligned} c_\varepsilon (\sqrt{s} \rho _i)\le & {} J_\varepsilon \bigl (u_i(s^{-1/N}\cdot )\bigr )=s\left( \frac{1}{2s^{2/N}}\int _{\mathbb {R}^N}|\nabla u_i|^2\,dx-\int _{\mathbb {R}^N}G^\varepsilon (u_i)\,dx\right) \\\le & {} sJ_\varepsilon (u_i)=sc_\varepsilon (\rho _i). \end{aligned}$$

Moreover, if \(s>1\) and \(u_i \ne 0\), then \(c_\varepsilon (\sqrt{s} \rho _i)<sc_\varepsilon (\rho _i)\). Let us assume, without loss of generality, that \(u_1 \ne 0\). If \(\rho _1\ge \rho _2\), then

$$\begin{aligned} c_\varepsilon \left( \sqrt{\rho _1^2+\rho _2^2}\right) < \frac{\rho _1^2+\rho _2^2}{\rho _1^2} c_\varepsilon (\rho _1) = c_\varepsilon (\rho _1) + \frac{\rho _2^2}{\rho _1^2} c_\varepsilon (\rho _1) \le c_\varepsilon (\rho _1) + c_\varepsilon (\rho _2). \end{aligned}$$

If \(\rho _1<\rho _2\), then

$$\begin{aligned} c_\varepsilon \left( \sqrt{\rho _1^2+\rho _2^2}\right) \le \frac{\rho _1^2+\rho _2^2}{\rho _2^2} c_\varepsilon (\rho _2) = \frac{\rho _1^2}{\rho _2^2} c_\varepsilon (\rho _2) + c_\varepsilon (\rho _2) < c_\varepsilon (\rho _1) + c_\varepsilon (\rho _2). \end{aligned}$$

\(\square \)

Remark 2.6

(a) Lemmas 2.4 and 2.5 still hold if \(J_\varepsilon \) is replaced with J.

(b) As proved in [28, Lemma 2.5 (ii)], Lemma 2.5 holds under the weaker assumption that \(J_\varepsilon (u_1) = c_\varepsilon (\rho _1)\) or \(J_\varepsilon (u_2) = c_\varepsilon (\rho _2)\).

Lemma 2.7

Let g satisfy (g0)–(g4). Then, for every \(\beta >0\), there exists \(\bar{\rho }>0\) such that for every \(\rho >\bar{\rho }\) and \(\varepsilon >0\) we have \(c_\varepsilon (\rho ) < -\beta \) and, if \(0 < \rho _n \rightarrow \rho \) and \(u_n \in \mathcal {D}(\rho _n)\) is such that \(J_\varepsilon (u_n) \rightarrow c_\varepsilon (\rho )\), then there exists \(u \in \mathcal {D}(\rho ) {\setminus } \{0\}\) such that, up to translations, \(u_n \rightarrow u\) in \(L^p(\mathbb {R}^N)\), \(2<p<2^*\).


Proof

Let \(\bar{\rho }>0\) be determined by Lemma 2.3 and consider \(u_n\) and \(\rho _n\) as in the statement. Then \(u_n\) is bounded in \(H^1(\mathbb {R}^N)\) in view of Lemma 2.2. If

$$\begin{aligned} \lim _n\max _{y\in \mathbb {R}^N}\int _{B_1(y)}|u_n|^2\,dx=0, \end{aligned}$$

then \(\lim _n|u_n|_p=0\) for every \(2<p<2^*\) due to Lions’ lemma [21, Lemma I.1 in Part 2]; hence, taking into account (g1) and (g3), \(\lim _n\int _{\mathbb {R}^N}G_+(u_n)\,dx=0\) and, consequently, \(c_\varepsilon (\rho )=\lim _nJ_\varepsilon (u_n)\ge 0\), which contradicts Lemma 2.3. Therefore, there exist \(y_n\in \mathbb {R}^N\) and \(u\in \mathcal {D}(\rho ){\setminus }\{0\}\) such that, up to a subsequence, \(u_n(\cdot -y_n)\rightharpoonup u\) in \(H^1(\mathbb {R}^N)\) and \(u_n(\cdot -y_n)\rightarrow u\) a.e. in \(\mathbb {R}^N\). Let us denote \(v_n(x):=u_n(x-y_n)-u(x)\). By the Brezis–Lieb lemma [7, Theorem 2],

$$\begin{aligned} \lim _nJ_\varepsilon (u_n)-J_\varepsilon (v_n)=J_\varepsilon (u). \end{aligned}$$

We want to prove that

$$\begin{aligned} \lim _n\max _{y\in \mathbb {R}^N}\int _{B_1(y)}|v_n|^2\,dx=0, \end{aligned}$$

which, as before, yields the statement of the lemma. If this is not the case, then there exist \(z_n\in \mathbb {R}^N\) and \(v\in H^1(\mathbb {R}^N){\setminus }\{0\}\) such that, denoting \(w_n(x):=v_n(x-z_n)-v(x)\), we have \(w_n\rightharpoonup 0\) in \(H^1(\mathbb {R}^N)\), \(w_n\rightarrow 0\) a.e. in \(\mathbb {R}^N\), and

$$\begin{aligned} \lim _nJ_\varepsilon (v_n)-J_\varepsilon (w_n)=J_\varepsilon (v). \end{aligned}$$

Observe that, again owing to the Brezis–Lieb lemma,

$$\begin{aligned} \lim _n |u_n|_2^2 - |w_n|_2^2 = \lim _n |u_n|_2^2 - |v_n|_2^2 + |v_n|_2^2 - |w_n|_2^2 = |u|_2^2 + |v|_2^2, \end{aligned}$$

whence, denoting \(\beta :=|u|_2>0\) and \(\gamma :=|v|_2>0\), there holds

$$\begin{aligned} \rho ^2 - \beta ^2 - \gamma ^2 \ge \limsup _n |u_n|_2^2 - \beta ^2 - \gamma ^2 = \limsup _n |w_n|_2^2 =: \delta ^2 \ge 0. \end{aligned}$$

If \(\delta >0\), then let us set \(\tilde{w}_n:=\frac{\delta }{|w_n|_2}w_n\in {\mathcal {S}}(\delta )\). Via explicit computations (cf., e.g., [29, Lemma 2.4]), we have \(\lim _nJ_\varepsilon (w_n)-J_\varepsilon (\tilde{w}_n)=0\), hence, together with Lemma 2.4 and since \(c_\varepsilon \) is non-increasing,

$$\begin{aligned} \begin{aligned} c_\varepsilon (\rho )&= \lim _n J_\varepsilon (u_n) = J_\varepsilon (u) + J_\varepsilon (v) + \lim _n J_\varepsilon (w_n) = J_\varepsilon (u) + J_\varepsilon (v) + \lim _n J_\varepsilon (\tilde{w}_n)\\&\ge c_\varepsilon (\beta ) + c_\varepsilon (\gamma ) + c_\varepsilon (\delta ) \ge c_\varepsilon (\sqrt{\beta ^2+\gamma ^2+\delta ^2}) \ge c_\varepsilon (\rho ). \end{aligned} \end{aligned}$$

Thus all the inequalities in (2.2) are in fact equalities and, in particular, \(J_\varepsilon (u) = c_\varepsilon (\beta )\) and \(J_\varepsilon (v) = c_\varepsilon (\gamma )\). Therefore Lemma 2.5 yields \(c_\varepsilon (\beta ) + c_\varepsilon (\gamma ) + c_\varepsilon (\delta ) > c_\varepsilon (\sqrt{\beta ^2+\gamma ^2+\delta ^2})\), which contradicts (2.2). If \(\delta =0\), then \(w_n\rightarrow 0\) in \(L^2(\mathbb {R}^N)\), which, together with (1.11), implies \(\lim _n \int _{\mathbb {R}^N} G_+(w_n) \, dx = 0\), whence \(\liminf _n J_\varepsilon (w_n) \ge 0\). Then (2.2) becomes

$$\begin{aligned} c_\varepsilon (\rho )= & {} \lim _n J_\varepsilon (u_n) = J_\varepsilon (u) + J_\varepsilon (v) + \lim _n J_\varepsilon (w_n) \ge c_\varepsilon (\beta ) + c_\varepsilon (\gamma ) \\\ge & {} c_\varepsilon (\sqrt{\beta ^2+\gamma ^2}) = c_\varepsilon (\rho ) \end{aligned}$$

and we reach a contradiction as before. \(\square \)

Let us recall that every solution \((u_\varepsilon ,\lambda _\varepsilon )\) to the differential equation in (2.1) satisfies the Pohožaev identity

$$\begin{aligned} \int _{\mathbb {R}^N} (N-2) |\nabla u_\varepsilon |^2 + N \lambda _\varepsilon |u_\varepsilon |^2 \, dx = 2N \int _{\mathbb {R}^N} G_+(u_\varepsilon ) - G_-^\varepsilon (u_\varepsilon ) \, dx. \end{aligned}$$

Proof of Theorem 2.1

Let \(\bar{\rho }>0\) be determined by Lemma 2.3 and consider a minimizing sequence \(u_n\in \mathcal {D}(\rho )\) for \(c_\varepsilon (\rho )\). Then, by virtue of Lemmas 2.2 and 2.7, there exists \(u_\varepsilon \in \mathcal {D}(\rho )\setminus \{0\}\) such that, up to subsequences and translations, \(u_n\rightharpoonup u_\varepsilon \) in \(H^1(\mathbb {R}^N)\) and \(u_n\rightarrow u_\varepsilon \) in \(L^p(\mathbb {R}^N)\) for every \(2<p<2^*\) and a.e. in \(\mathbb {R}^N\). From (g1) and (g3) we have \(\lim _n\int _{\mathbb {R}^N}G_+(u_n)\,dx=\int _{\mathbb {R}^N} G_+(u_\varepsilon ) \, dx\), therefore, using Fatou's lemma, \(c_\varepsilon (\rho )\le J_\varepsilon (u_\varepsilon )\le \lim _nJ_\varepsilon (u_n)=c_\varepsilon (\rho )<-\beta \). In particular, there exists \(\lambda _\varepsilon \in \mathbb {R}\) such that

$$\begin{aligned} -\Delta u_\varepsilon +\lambda _\varepsilon u_\varepsilon =g^\varepsilon (u_\varepsilon ) \quad \text {in } \mathbb {R}^N. \end{aligned}$$

Note that \(\lambda _\varepsilon =0\) if \(u_\varepsilon \) is an interior point of \(\mathcal {D}(\rho )\), i.e., \(|u_\varepsilon |_2<\rho \). If \(\lambda _\varepsilon \le 0\), then (2.3) yields \(0>J_\varepsilon (u_\varepsilon )\ge |\nabla u_\varepsilon |_2^2/N\ge 0\), a contradiction, therefore \(\lambda _\varepsilon >0\) and \(u_\varepsilon \in {\mathcal {S}}(\rho )\). The last statement follows from [16, Theorem 1.4] (see also [22]). \(\square \)

Remark 2.8

The proof of Theorem 2.1 shows that every minimizing sequence for \(c_\varepsilon (\rho )\) is relatively compact in \(H^1(\mathbb {R}^N)\) up to a translation. As a matter of fact, from \(\lim _n\int _{\mathbb {R}^N}G_+(u_n)\,dx=\int _{\mathbb {R}^N}G_+(u_\varepsilon )\,dx\) and \(\lim _nJ_\varepsilon (u_n)=J_\varepsilon (u_\varepsilon )\) we get \(\lim _n|\nabla u_n|_2=|\nabla u_\varepsilon |_2\). Moreover,

$$\begin{aligned} \rho =|u_\varepsilon |_2\le \liminf _n|u_n|_2\le \limsup _n|u_n|_2\le \rho , \end{aligned}$$

i.e., \(\lim _n|u_n|_2=|u_\varepsilon |_2\). Recalling that \(u_n\rightharpoonup u_\varepsilon \) in \(H^1(\mathbb {R}^N)\), we obtain \(u_n\rightarrow u_\varepsilon \) in \(H^1(\mathbb {R}^N)\).

3 Solutions to (1.1) and related properties

Throughout this section, it is assumed that (g0)–(g4) hold. We consider the family \((u_\varepsilon ,\lambda _\varepsilon )_{\varepsilon \in (0,1)}\) of functions and positive numbers given by Theorem 2.1.

Lemma 3.1

The quantities \(|\nabla u_\varepsilon |_2\) and \(\lambda _\varepsilon \) are bounded uniformly with respect to \(\varepsilon \in (0,1)\).


Proof
From (g0), (g1), and (g3), for every \(\delta >0\) there exists \(C_\delta >0\) such that for every \(s\in \mathbb {R}\)

$$\begin{aligned} G_+(s)\le C_\delta |s|^2+\delta |s|^{2+4/N}. \end{aligned}$$

Let us first consider the case \(N \ge 3\). Since \(\lambda _\varepsilon >0\), from (2.3) we obtain

$$\begin{aligned} \frac{1}{2^*} |\nabla u_\varepsilon |_2^2 < \int _{\mathbb {R}^N} G_+(u_\varepsilon ) \, dx \le C_\delta |u_\varepsilon |_2^2 + \delta |u_\varepsilon |_{2+4/N}^{2+4/N} \le C_\delta \rho ^2 + \delta C_{N,2+4/N}^{2+4/N} \rho ^{4/N} |\nabla u_\varepsilon |_2^2. \end{aligned}$$

Taking \(\delta < \left( 2^* C_{N,2+4/N}^{2+4/N} \rho ^{4/N}\right) ^{-1}\), we obtain the boundedness of \(|\nabla u_\varepsilon |_2\). In addition, this and again (2.3) yield

$$\begin{aligned} 0< \frac{2^*}{2} \lambda _\varepsilon \rho ^2< \int _{\mathbb {R}^N} |\nabla u_\varepsilon |^2 + \frac{2^*}{2} \lambda _\varepsilon |u_\varepsilon |^2 + 2^* G_-^\varepsilon (u_\varepsilon ) \, dx = 2^* \int _{\mathbb {R}^N} G_+(u_\varepsilon ) \, dx \le C < \infty \end{aligned}$$

and \(\lambda _\varepsilon \) is bounded as well. Now, let us consider the case \(N = 2\). From (2.3) we get

$$\begin{aligned} \rho ^2 \lambda _\varepsilon &\le \int _{\mathbb {R}^N} \lambda _\varepsilon |u_\varepsilon |^2 + 2 G_-^\varepsilon (u_\varepsilon ) \, dx = 2 \int _{\mathbb {R}^N} G_+(u_\varepsilon ) \, dx\\&\le C_\delta \rho ^2 + \delta C_{N,2+4/N}^{2+4/N} \rho ^{4/N} |\nabla u_\varepsilon |_2^2. \end{aligned}$$

Moreover, again using (2.3) we obtain

$$\begin{aligned} -\beta > J_\varepsilon (u_\varepsilon ) = \frac{1}{2} \left( |\nabla u_\varepsilon |_2^2 - \rho ^2 \lambda _\varepsilon \right) , \end{aligned}$$

which, together with (3.2), implies

$$\begin{aligned} \frac{1}{2} |\nabla u_\varepsilon |_2^2 \le C_\delta \rho ^2 - \beta + \delta C_{N,2+4/N}^{2+4/N} \rho ^{4/N} |\nabla u_\varepsilon |_2^2. \end{aligned}$$

Taking \(\delta < \left( 2 C_{N,2+4/N}^{2+4/N} \rho ^{4/N}\right) ^{-1}\), we obtain the boundedness of \(|\nabla u_\varepsilon |_2\), hence that of \(\lambda _\varepsilon \) follows from (3.2). \(\square \)
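Both chains of inequalities above use the \(L^2\)-critical case of the Gagliardo–Nirenberg inequality, which (assuming, as the notation suggests, that \(C_{N,2+4/N}\) denotes the best constant of the corresponding embedding) reads:

```latex
|u|_{2+4/N}^{2+4/N}
\le C_{N,2+4/N}^{2+4/N} \, |\nabla u|_2^2 \, |u|_2^{4/N}
\quad \text{for all } u \in H^1(\mathbb{R}^N).
```

The exponent \(2+4/N\) is precisely the one for which the gradient term enters quadratically, which is what makes the smallness condition on \(\delta \) depend only on \(\rho \) and not on \(u_\varepsilon \).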

Note that for every \(\varepsilon \in (0,1)\) and every \(u\in \mathcal {D}\) there holds \(J_\varepsilon (u_\varepsilon )\le J_\varepsilon (u)\le J(u)\), whence \(J_\varepsilon (u_\varepsilon )\le \inf _{\mathcal {D}(\rho )} J =: c(\rho )\).

Proof of Theorem 1.1

From Lemma 3.1 there exist \(u \in \mathcal {D}\) and \(\lambda \ge 0\) such that, up to a subsequence, \(u_\varepsilon \rightharpoonup u\) in \(H^1(\mathbb {R}^N)\) and \(\lambda _\varepsilon \rightarrow \lambda \) as \(\varepsilon \rightarrow 0^+\). Arguing as in [24, Proof of Theorem 1.1] we obtain that \(\varphi _\varepsilon (u_\varepsilon )g_-(u_\varepsilon )v\rightarrow g_-(u)v\) a.e. for every \(v\in \mathcal {C}_0^\infty (\mathbb {R}^N)\) and that \(g_+(u_\varepsilon )v\) and \(\varphi _\varepsilon (u_\varepsilon )g_-(u_\varepsilon )v\) are uniformly integrable (and tight). We deduce that for every \(v\in \mathcal {C}_0^\infty (\mathbb {R}^N)\)

$$\begin{aligned} -\lambda \int _{\mathbb {R}^N}uv\,dx\leftarrow J_\varepsilon '(u_\varepsilon )v\rightarrow J'(u)v, \end{aligned}$$

i.e.,

$$\begin{aligned} -\Delta u+\lambda u=g(u) \quad \text {in } \mathbb {R}^N. \end{aligned}$$

Moreover, (3.1) (when \(N \ge 3\)) or (3.2) (when \(N = 2\)) yields

$$\begin{aligned} \int _{\mathbb {R}^N}G_-^\varepsilon (u_\varepsilon )\,dx\le \int _{\mathbb {R}^N}G_+(u_\varepsilon )\,dx\le C \end{aligned}$$

hence, in view of Fatou’s lemma, \(G_-(u)\in L^1(\mathbb {R}^N)\). In particular, from [24, Proposition 3.1], \((u,\lambda )\) satisfies the Pohožaev identity

$$\begin{aligned} (N-2) \int _{\mathbb {R}^N} |\nabla u|^2 \, dx = 2N \int _{\mathbb {R}^N} G(u) - \frac{1}{2} \lambda |u|^2 \, dx. \end{aligned}$$

We can assume that \(|\nabla u_\varepsilon |_2\) and \(\int _{\mathbb {R}^N}G_-^\varepsilon (u_\varepsilon )\,dx\) are convergent. Since \(\int _{\mathbb {R}^N}G_+(u_\varepsilon )\,dx\rightarrow \int _{\mathbb {R}^N}G_+(u)\,dx\) (recall that each \(u_\varepsilon \) is radially symmetric), we have \(J(u)\le \lim _\varepsilon J_\varepsilon (u_\varepsilon )<0\) from Lemma 2.3; in particular, \(u\ne 0\). If \(\lambda =0\), then from (3.3) we have \(J(u)=|\nabla u|_2^2/N>0\), therefore \(\lambda >0\). We prove that \(u_\varepsilon \rightarrow u\) in \(H^1(\mathbb {R}^N)\). Since \(\lambda >0\), from (3.3) it follows that

$$\begin{aligned}\begin{aligned}&\int _{\mathbb {R}^N} (N-2) |\nabla u|^2 + N \lambda |u|^2 + 2N G_-(u) \, dx\\&\quad \le \lim _\varepsilon \int _{\mathbb {R}^N} (N-2) |\nabla u_\varepsilon |^2 + N \lambda |u_\varepsilon |^2 + 2N G^\varepsilon _-(u_\varepsilon ) \, dx\\&\quad = 2N \lim _\varepsilon \int _{\mathbb {R}^N} G_+(u_\varepsilon ) \, dx = 2N \int _{\mathbb {R}^N} G_+(u) \, dx\\&\quad = \int _{\mathbb {R}^N}(N-2) |\nabla u|^2 + N \lambda |u|^2 + 2N G_-(u) \, dx, \end{aligned}\end{aligned}$$

which yields \(|u_\varepsilon |_2 \rightarrow |u|_2\); in particular, \(u\in {\mathcal {S}}\). Furthermore, in a similar fashion we obtain \(J(u) \le \lim _\varepsilon J_\varepsilon (u_\varepsilon ) \le c(\rho ) \le J(u)\), which shows that \(|\nabla u_\varepsilon |_2 \rightarrow |\nabla u|_2\) and \(J(u) = c(\rho )\). Since (2.1) is translation-invariant, we can assume that every \(u_\varepsilon \) is radial, hence so is u. Moreover, because \(u \ne 0\), there exists \(x \in \mathbb {R}^N\) such that \(u(x) \ne 0\) and so \(u_\varepsilon (x)\) has the same sign as u(x) for every \(\varepsilon \) sufficiently small; since every \(u_\varepsilon \) has constant sign, it has everywhere the same sign as u(x), hence u has constant sign too. Finally, if there exist \(x,y,z \in \mathbb {R}^N\) such that \(|x|< |y| < |z|\) and either \(u(y) < \min \{u(x),u(z)\}\) or \(u(y) > \max \{u(x),u(z)\}\), then arguing as before we obtain a contradiction. \(\square \)

Remark 3.2

(i) The Proof of Theorem 1.1 contains the relevant result that \(c_\varepsilon (\rho ) \rightarrow c(\rho )\) as \(\varepsilon \rightarrow 0^+\).

(ii) Unlike the proof of Theorem 2.1, we cannot use the information \(\lambda > 0\) to deduce \(u \in {\mathcal {S}}\) because we do not know whether u is a critical point of \(J|_\mathcal {D}\).

Proof of Proposition 1.4

It follows from Lemma 2.3 and Remark 3.2 (i). \(\square \)

Proof of Theorem 1.5

Let \(u_n\) and \(\rho _n\) be as in the statement. We prove preliminarily that \(u_n\) is relatively compact up to translations in \(L^p(\mathbb {R}^N)\), \(2< p < 2^*\). Fix \(\varepsilon >0\). Arguing (by contradiction) as in the proof of Lemma 2.7, we find \(\beta ,\gamma >0\), \(\delta \ge 0\), \(u \in \mathcal {D}(\beta ) {\setminus } \{0\}\), \(v \in \mathcal {D}(\gamma ) {\setminus } \{0\}\), \(y_n,z_n \in \mathbb {R}^N\), and \(v_n,w_n \in H^1(\mathbb {R}^N)\) bounded, none of which depends on \(\varepsilon \), such that \(\rho ^2 - \beta ^2 - \gamma ^2 \ge \delta ^2\) and \(\lim _n J_\varepsilon (u_n) - J_\varepsilon (w_n) = J_\varepsilon (u) + J_\varepsilon (v)\), where \(v_n(x) = u_n(x-y_n) - u(x)\) and \(w_n(x) = v_n(x-z_n) - v(x)\). If \(\delta >0\), then \(\tilde{w}_n:= \frac{\delta }{|w_n|_2}w_n \in {\mathcal {S}}(\delta )\) and, again from [29, Lemma 2.4],

$$\begin{aligned} \begin{aligned} c_\varepsilon (\rho )&= \lim _n J_\varepsilon (u_n) = J_\varepsilon (u) + J_\varepsilon (v) + \lim _n J_\varepsilon (\tilde{w}_n) \ge c_\varepsilon (\beta ) + c_\varepsilon (\gamma ) + c_\varepsilon (\delta )\\&\ge c_\varepsilon (\sqrt{\beta ^2 + \gamma ^2 + \delta ^2}) \ge c_\varepsilon (\rho ). \end{aligned} \end{aligned}$$

Letting \(\varepsilon \rightarrow 0^+\), observing that \(J_\varepsilon (u) \rightarrow J(u)\) and \(J_\varepsilon (v) \rightarrow J(v)\) from the monotone convergence theorem, and using Remark 3.2(i), we obtain

$$\begin{aligned} \begin{aligned} c(\rho )&\ge J(u) + J(v) + \lim _{\varepsilon \rightarrow 0^+} \lim _n J_\varepsilon (\tilde{w}_n) \ge \lim _{\varepsilon \rightarrow 0^+} c_\varepsilon (\beta ) + \lim _{\varepsilon \rightarrow 0^+} c_\varepsilon (\gamma ) + \lim _{\varepsilon \rightarrow 0^+} c_\varepsilon (\delta )\\&\ge \lim _{\varepsilon \rightarrow 0^+} c_\varepsilon (\sqrt{\beta ^2 + \gamma ^2 + \delta ^2}) \ge c(\rho ). \end{aligned} \end{aligned}$$

In particular, \(c_\varepsilon (\beta ) \rightarrow J(u)\) and \(c_\varepsilon (\gamma ) \rightarrow J(v)\) as \(\varepsilon \rightarrow 0^+\), therefore

$$\begin{aligned} c(\beta ) \le J(u) = \lim _{\varepsilon \rightarrow 0^+} c_\varepsilon (\beta ) \le c(\beta ), \quad c(\gamma ) \le J(v) = \lim _{\varepsilon \rightarrow 0^+} c_\varepsilon (\gamma ) \le c(\gamma ), \end{aligned}$$

and so, from Lemma 2.4 and Remark 2.6 (a),

$$\begin{aligned} \begin{aligned} \lim _{\varepsilon \rightarrow 0^+} c_\varepsilon (\sqrt{\beta ^2 + \gamma ^2 + \delta ^2})&\le \lim _{\varepsilon \rightarrow 0^+} c_\varepsilon (\sqrt{\beta ^2 + \gamma ^2}) + \lim _{\varepsilon \rightarrow 0^+} c_\varepsilon (\delta ) \le c(\sqrt{\beta ^2 + \gamma ^2})+ \lim _{\varepsilon \rightarrow 0^+} c_\varepsilon (\delta )\\&< c(\beta ) + c(\gamma ) + \lim _{\varepsilon \rightarrow 0^+} c_\varepsilon (\delta ) = \lim _{\varepsilon \rightarrow 0^+} c_\varepsilon (\sqrt{\beta ^2 + \gamma ^2 + \delta ^2}), \end{aligned} \end{aligned}$$

a contradiction. If \(\delta =0\), then, again as in the proof of Lemma 2.7, \(\liminf _n J_\varepsilon (w_n) \ge 0\) and we get

$$\begin{aligned} c_\varepsilon (\rho )&= \lim _n J_\varepsilon (u_n) = J_\varepsilon (u) + J_\varepsilon (v) + \lim _n J_\varepsilon (w_n) \ge c_\varepsilon (\beta ) + c_\varepsilon (\gamma ) \\&\ge c_\varepsilon (\sqrt{\beta ^2 + \gamma ^2}) \ge c_\varepsilon (\rho ), \end{aligned}$$

thus we obtain a contradiction arguing in a similar (and simpler) way as before. Having proved that \(u_n\) is relatively compact up to translations in \(L^p(\mathbb {R}^N)\), \(2< p < 2^*\), now we prove that the same convergence holds in \(H^1(\mathbb {R}^N)\). Recalling that \(u_n\) is bounded, there exists \(u \in \mathcal {D}(\rho )\) such that, up to translations and subsequences, \(u_n \rightharpoonup u\) in \(H^1(\mathbb {R}^N)\) and \(u_n \rightarrow u\) in \(L^p(\mathbb {R}^N)\), \(2< p < 2^*\), whence

$$\begin{aligned} c(\rho ) \le J(u) \le \lim _n J(u_n) = c(\rho ) \end{aligned}$$

and so \(|\nabla u_n|_2^2 \rightarrow |\nabla u|_2^2\). Moreover, \(J(u) = c(\rho ) < 0\), which implies, in particular, that \(G(u) \in L^1(\mathbb {R}^N)\). Assume by contradiction that \(|u|_2 < \rho \); then, for every \(v \in \mathcal {C}_0^\infty (\mathbb {R}^N)\) and every \(t \in \mathbb {R}\) sufficiently small, \(|u + tv|_2 < \rho \). Next, we check that J is differentiable at u along every \(v \in \mathcal {C}_0^\infty (\mathbb {R}^N)\); of course, this only needs to be checked for the functional \(\int _{\mathbb {R}^N} G(\cdot ) \, dx\). For \(t \ne 0\), from (g0) and (g2), \(\frac{1}{t} [G(u+tv) - G(u)] = \int _0^1 g(u + stv) \, ds \, v\) and the family \(\{\int _0^1 g(u + stv) \, ds \, v\}_t\) is uniformly integrable (and tight), thus the claim holds true. It follows that \(J'(u)[v] = 0\) for every \(v \in \mathcal {C}_0^\infty (\mathbb {R}^N)\), i.e., \(-\Delta u = g(u)\) in \(\mathbb {R}^N\). Then, from [24, Proposition 3.1], u satisfies (3.3) with \(\lambda = 0\), whence \(J(u) = |\nabla u|_2^2 / N > 0\), which is impossible. \(\square \)

Proof of Theorem 1.6

Since \(G(s) = \frac{\alpha }{2} \left( \ln s^2 - 1\right) s^2 + \frac{\mu }{p} |s|^p\), it is clear that (g0)–(g3) hold. We want to prove that (g4) is satisfied if and only if \(\mu > -\frac{\alpha p}{p - 2} e^{-p/2}\). If \(\mu \ge 0\), then clearly (g4) holds, so let us consider the case \(\mu < 0\). Let \(s>0\). Of course, \(G(s)>0\) if and only if

$$\begin{aligned} \widetilde{G}(s):= \frac{\alpha }{2} \left( \ln s^2 - 1\right) + \frac{\mu }{p} s^{p-2} > 0. \end{aligned}$$


Since

$$\begin{aligned} \widetilde{G}'(s) = \frac{\alpha }{s} + \mu \frac{p-2}{p} s^{p-3}, \end{aligned}$$

we have

$$\begin{aligned} \max \widetilde{G} = \frac{\alpha }{2} \left( \ln \left( \frac{\alpha p}{\mu (2-p)}\right) ^{2/(p-2)} - 1\right) - \frac{\alpha }{p-2} \end{aligned}$$

and the claim holds because (g4) is equivalent to \(\max \widetilde{G}>0\). Point (i) is then a consequence of Theorem 1.1. Concerning point (ii), observe that \(\max G = 0\) if and only if \(\mu = - \frac{\alpha p}{p - 2} e^{-p/2}\). Since, from [24, Proposition 3.1], every solution to (1.1) satisfies the Pohožaev identity (3.3), the statement easily holds true if \(\lambda > 0\) or \(N \ge 3\) or \(\mu < - \frac{\alpha p}{p - 2} e^{-p/2}\). Now let us consider the case \(\lambda = 0\), \(N = 2\), and \(\mu = - \frac{\alpha p}{p - 2} e^{-p/2}\) and assume by contradiction that such a u exists. Since (3.3) reads \(\int _{\mathbb {R}^N} G(u) \, dx = 0\), \(G \le 0\), and u is continuous (see, e.g., [32, p. 271]), necessarily \(u \equiv 0\) or \(u \equiv \pm \bar{s}\), where \(\bar{s}\) is the unique \(s>0\) such that \(G(s) = 0\), which contradicts \(|u|_2 = \rho \in (0,\infty )\). \(\square \)
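The threshold can also be checked numerically. The following sketch (with illustrative parameters \(\alpha = 1\), \(p = 3\); the function names are ours, and we read (g4), as the proof indicates, as the requirement that \(G\) takes positive values) tests the sign of \(\sup _{s>0} G(s)\) on a logarithmic grid for \(\mu \) slightly above and slightly below \(-\frac{\alpha p}{p-2}e^{-p/2}\):

```python
import math

def G(s, alpha, mu, p):
    # G(s) = (alpha/2)(ln s^2 - 1) s^2 + (mu/p)|s|^p, evaluated for s > 0
    return 0.5 * alpha * (math.log(s * s) - 1.0) * s * s + mu / p * s ** p

def sup_G_positive(alpha, mu, p):
    # sample s > 0 on a logarithmic grid and test whether sup G > 0
    return any(G(10.0 ** (i / 100), alpha, mu, p) > 0 for i in range(-500, 501))

alpha, p = 1.0, 3.0
mu_star = -alpha * p / (p - 2) * math.exp(-p / 2)   # threshold from Theorem 1.6

print(sup_G_positive(alpha, 0.9 * mu_star, p))  # mu > mu_star: (g4) holds -> True
print(sup_G_positive(alpha, 1.1 * mu_star, p))  # mu < mu_star: G <= 0 -> False
```

For \(\mu \) above the threshold, the maximum of \(\widetilde{G}\) computed in the proof is positive (about \(0.105\) at \(s \approx 4.98\) for these parameters), and the grid detects it; below the threshold \(G\) is everywhere negative.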

4 Multiple solutions

4.1 Functional setting

Let us recall from [27] that a function \(A :\mathbb {R}\rightarrow \mathbb {R}\) is called an N-function if and only if it is nonnegative, even, convex, and satisfies

$$\begin{aligned} \lim _{s \rightarrow 0} \frac{A(s)}{s} = 0 \quad \text {and} \quad \lim _{s \rightarrow \infty } \frac{A(s)}{s} = \infty . \end{aligned}$$

It is said to satisfy the \(\Delta _2\) condition globally if and only if there exists \(K>0\) such that

$$\begin{aligned} A(2s) \le K A(s) \quad \text {for all } s \in \mathbb {R}. \end{aligned}$$

It is said to satisfy the \(\nabla _2\) condition globally if and only if there exists \(\ell > 1\) such that

$$\begin{aligned} 2\ell A(s) \le A(\ell s) \quad \text {for all } s \in \mathbb {R}. \end{aligned}$$

Equivalently, A satisfies the \(\Delta _2\) (respectively, \(\nabla _2\)) condition globally if and only if there exists \(C>1\) such that

$$\begin{aligned} C A(s) \ge s A'(s) \; \text {(respectively, }s A'(s) \ge C A(s)) \quad \text {for all } s \in \mathbb {R}. \end{aligned}$$
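For orientation, a standard example (stated under the assumption \(p>1\), and not needed in the sequel): the model N-function \(A(s)=|s|^p\) satisfies

```latex
s A'(s) = p\,|s|^p = p\,A(s) \quad \text{for all } s \in \mathbb{R},
```

so both the \(\Delta _2\) and the \(\nabla _2\) conditions hold globally with \(C = p\) in the characterization above.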

From now on, A shall denote the same function as in (A). We can define the Orlicz space associated with A as

$$\begin{aligned} V:= \left\{ u \in L^1_{\textrm{loc}}(\mathbb {R}^N): A(u) \in L^1(\mathbb {R}^N)\right\} . \end{aligned}$$

If we define the norm

$$\begin{aligned} \Vert u\Vert _V:= \inf \left\{ \kappa > 0: \int _{\mathbb {R}^N} A(u/\kappa ) \, dx \le 1\right\} , \end{aligned}$$

then \((V,\Vert \cdot \Vert _V)\) is a reflexive Banach space (cf. [27, Theorem IV.I.10]). Moreover, the following holds true.

Lemma 4.1

(i) Let \(u_n,u \in V\). Then \(\lim _n \Vert u_n - u\Vert _V = 0\) if and only if \(\lim _n \int _{\mathbb {R}^N} A(u_n - u) \, dx = 0\).

(ii) Let \(X \subset V\). Then X is bounded if and only if

$$\begin{aligned} \left\{ \int _{\mathbb {R}^N} A(u) \, dx: u \in X\right\} \end{aligned}$$

is bounded.

(iii) Let \(u_n,u \in V\). If \(u_n \rightarrow u\) a.e. and \(\int _{\mathbb {R}^N} A(u_n) \, dx \rightarrow \int _{\mathbb {R}^N} A(u) \, dx\), then \(\Vert u_n - u\Vert _V \rightarrow 0\).


Proof
The first two points follow from [27, Theorem III.IV.12 and Corollary III.IV.15] respectively, while the third one is a consequence of the previous two and [7, Theorem 2, Example (b)]. \(\square \)

Note that \(W = H^1(\mathbb {R}^N) \cap V\). Since \((V,\Vert \cdot \Vert _V)\) is a reflexive Banach space, so is \((W,\Vert \cdot \Vert _W)\), with

$$\begin{aligned} \Vert u\Vert _W^2:= |\nabla u|_2^2 + |u|_2^2 + \Vert u\Vert _V^2. \end{aligned}$$
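As a sanity check (a standard computation, again with the model \(A(s)=|s|^p\), \(p>1\), not used below), one has \(\int _{\mathbb {R}^N} A(u/\kappa )\,dx = |u|_p^p/\kappa ^p\), so the Luxemburg norm reduces to the Lebesgue norm:

```latex
\Vert u\Vert_V
= \inf \left\{ \kappa > 0 : \frac{|u|_p^p}{\kappa^p} \le 1 \right\}
= |u|_p ,
```

i.e., in this case \(V = L^p(\mathbb {R}^N)\) and \(W = H^1(\mathbb {R}^N) \cap L^p(\mathbb {R}^N)\).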

We obtain the following variant of Lions’ lemma.

Lemma 4.2

Suppose that \(u_n \in W\) is bounded and for some \(r>0\)

$$\begin{aligned} \lim _{n\rightarrow \infty }\sup _{y\in \mathbb {R}^N} \int _{B(y,r)} |u_n|^2\,dx=0. \end{aligned}$$

Then \(u_n\rightarrow 0\) in \(L^p(\mathbb {R}^N)\) for every \(p \in [2,2^*)\).


Proof
In view of Lions’ lemma [21, Lemma I.1 in Part 2], \(u_n\rightarrow 0\) in \(L^p(\mathbb {R}^N)\) for every \(p \in (2,2^*)\). We prove the convergence in \(L^2(\mathbb {R}^N)\). Take any \(p \in (2,2^*)\) and \(\varepsilon >0\). We find \(\delta >0\) such that

$$\begin{aligned}\begin{aligned} |s|^2&\le \varepsilon A(s)\quad \hbox { if }|s|\in [0,\delta ],\\ |s|^2&\le \delta ^{2-p} |s|^{p}\quad \hbox { if }|s|>\delta . \end{aligned} \end{aligned}$$

Hence, we get

$$\begin{aligned} \limsup _{n\rightarrow \infty }\int _{\mathbb {R}^N}|u_n|^2\, dx&\le \varepsilon \limsup _{n\rightarrow \infty }\int _{\mathbb {R}^N}A(u_n)\, dx+\delta ^{2-p} \limsup _{n\rightarrow \infty }\int _{\mathbb {R}^N}|u_n|^p\, dx\\&= \varepsilon \limsup _{n\rightarrow \infty }\int _{\mathbb {R}^N}A(u_n)\, dx. \end{aligned}$$

In view of Lemma 4.1 (ii), we conclude by letting \(\varepsilon \rightarrow 0\). \(\square \)

Let \(\mathcal {O}\) be any subgroup of \(\mathcal {O}(N)\) such that \(\mathbb {R}^N\) is compatible with \(\mathcal {O}\) (cf. [20]), i.e., \(\lim _{|y|\rightarrow \infty } m(y,r)=\infty \) for some \(r>0\), where, for \(y \in \mathbb {R}^N\),

$$\begin{aligned} m(y,r):= \sup \big \{n\in \mathbb {N}: \hbox {there exist }g_1,\dots ,g_n\in \mathcal {O}\hbox { such that } B(g_iy,r)\cap B(g_jy,r) = \emptyset \hbox { for }i\ne j\big \}. \end{aligned}$$

In view of [20], \(H^1_{\mathcal {O}}(\mathbb {R}^N)\) embeds compactly into \(L^p(\mathbb {R}^N)\) for \(2<p<2^*\). As for the subspace \(W_\mathcal {O}:=W\cap H^1_{\mathcal {O}}(\mathbb {R}^N)\), we obtain the compact embedding also in \(L^2(\mathbb {R}^N)\).

Corollary 4.3

Suppose that \(u_n \in W_{\mathcal {O}}\) is bounded and \(u_n\rightarrow 0\) in \(L^2_{\textrm{loc}}(\mathbb {R}^N)\). Then \(u_n\rightarrow 0\) in \(L^p(\mathbb {R}^N)\) for every \(p \in [2,2^*)\).


Proof
Suppose that

$$\begin{aligned} \int _{B(y_n,1)} |u_n|^2\,dx\ge c>0 \end{aligned}$$

for some sequence \(y_n \in \mathbb {R}^N\) and a constant c. Observe that in the family \(\{B(hy_n,1)\}_{h\in \mathcal {O}}\) we find an increasing number of disjoint balls provided that \(|y_n|\rightarrow \infty \). Since \(u_n\) is bounded in \(L^2(\mathbb {R}^N)\) and invariant with respect to \(\mathcal {O}\), by (4.3) \((y_n)\) must be bounded. Then for sufficiently large r one obtains

$$\begin{aligned} \int _{B(0,r)} |u_n|^2\,dx\ge \int _{B(y_n,1)} |u_n|^2\,dx\ge c>0, \end{aligned}$$

and we get a contradiction with the convergence of \(u_n\) in \(L^2_{\textrm{loc}}(\mathbb {R}^N)\). We then conclude by Lemma 4.2. \(\square \)

Finally, we have the following result.

Proposition 4.4

If (A) and (f0)–(f3) hold, then the functional \(J|_W :W \rightarrow \mathbb {R}\) is of class \(\mathcal {C}^1\) and, for every \(u \in W \cap {\mathcal {S}}\), \(J|_{W \cap {\mathcal {S}}}'(u) = 0\) if and only if there exists \(\lambda \in \mathbb {R}\) such that \((u,\lambda )\) is a solution to (1.1)–(1.2).


Proof
Let \(B :\mathbb {R}\rightarrow \mathbb {R}\) be the complementary N-function of A. From [27, Theorem II.V.8], it satisfies the \(\Delta _2\) and \(\nabla _2\) conditions globally because A does. Let us recall (cf. [27, Definition III.IV.2, Corollary III.IV.5, and Corollary IV.II.9]) that \(V'\), the dual space of V, is isomorphic to the Orlicz space associated with B. From [27, Theorem I.III.3],

$$\begin{aligned} B\bigl (a(s)\bigr ) = sa(s) - A(s) \le (C-1) A(s), \end{aligned}$$

where \(C>1\) is the constant given in the characterization (4.1) of the \(\Delta _2\) condition. Then, for every \(u,v \in V\), we have

$$\begin{aligned} \left| \int _{\mathbb {R}^N} a(u) v \, dx\right| \le \int _{\mathbb {R}^N} |a(u)| |v| \, dx \lesssim \Vert a(u)\Vert _{V'} \Vert v\Vert _V. \end{aligned}$$

Now we can use the same argument as in [12, Lemma 2.1] to obtain that the functionals \(u \mapsto \int _{\mathbb {R}^N} A(u) \, dx\) and \(u \mapsto \int _{\mathbb {R}^N} F(u) \, dx\) belong to \(\mathcal {C}^1(W)\). The remaining part is obvious. \(\square \)
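The identity \(B\bigl (a(s)\bigr ) = sa(s) - A(s)\) used above is the equality case of Young's inequality for complementary N-functions (with \(a = A'\), as the notation suggests), which we recall:

```latex
st \le A(s) + B(t) \quad \text{for all } s, t \ge 0,
\qquad \text{with equality if and only if } t = a(s) \text{ or } s = B'(t).
```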

4.2 Proof of Theorem 1.8

We recall the definitions \(F_-:= F_+ - F\) and \(f_-:= F_-'\).

Lemma 4.5

If (A), (f0)–(f3), and (1.12) hold, then \(J|_{W \cap {\mathcal {S}}}\) is coercive and bounded below.


Proof
For every \(\delta > 0\), there exists \(C_\delta > 0\) such that for every \(s \in \mathbb {R}\)

$$\begin{aligned} F_+(s) \le C_\delta |s|^2 + (\eta + \delta ) |s|^{2+4/N}. \end{aligned}$$

For every \(u \in W \cap {\mathcal {S}}\) there holds

$$\begin{aligned} \begin{aligned} J(u)&\ge \int _{\mathbb {R}^N} \frac{1}{2} |\nabla u|^2 + A(u) - F_+(u) \, dx\\&\ge \int _{\mathbb {R}^N} \frac{1}{2} |\nabla u|^2 + A(u) - C_\delta |u|^2 - (\eta + \delta ) |u|^{2+4/N} \, dx\\&\ge \frac{1}{2} |\nabla u|_2^2 + \int _{\mathbb {R}^N} A(u) \, dx - C_\delta \rho ^2 - (\eta + \delta ) C_{N,2+4/N}^{2+4/N} \rho ^{4/N} |\nabla u|_2^2. \end{aligned} \end{aligned}$$

We conclude by Lemma 4.1 (ii) and taking \(\delta \) so small that \(2 (\eta + \delta ) C_{N,2+4/N}^{2+4/N} \rho ^{4/N} < 1\). \(\square \)

Remark 4.6

Setting \(f_p := \max \{f,0\}\) and \(f_n := \max \{-f,0\}\), we have that \(f_-(s) = f_n (s)\) if \(s \ge 0\) and \(f_-(s) = -f_p (s)\) if \(s < 0\). In particular, \(f_-(s)s \ge 0\) for every \(s \in \mathbb {R}\).

Lemma 4.7

If (A) and (f0)–(f3) hold, then \(J|_{W_{\mathcal {O}} \cap {\mathcal {S}}}\) satisfies the Palais–Smale condition.


Proof
Let \(u_n \in W_{\mathcal {O}} \cap {\mathcal {S}}\) be such that \(J(u_n)\) is bounded and \(J|_{W_{\mathcal {O}} \cap {\mathcal {S}}}'(u_n) \rightarrow 0\). From Lemma 4.5, \(u_n\) is bounded in W, therefore there exists \(u \in W_{\mathcal {O}}\) such that \(u_n \rightharpoonup u\) in \(W_\mathcal {O}\) up to a subsequence. Then, from Corollary 4.3, \(u_n \rightarrow u\) in \(L^p(\mathbb {R}^N)\) for every \(p \in [2,2^*)\); in particular, \(u \in {\mathcal {S}}\). Up to a second subsequence, we can assume that \(u_n \rightarrow u\) a.e. in \(\mathbb {R}^N\). Additionally, from [3, Lemma 3], there exist \(\lambda _n \in \mathbb {R}\) such that

$$\begin{aligned} -\Delta u_n + \lambda _n u_n - g(u_n) \rightarrow 0 \quad \text {in } W_\mathcal {O}', \end{aligned}$$

where \(W_\mathcal {O}'\) is the dual space of \(W_\mathcal {O}\). Testing (4.4) with \(u_n\), we obtain that \(\lambda _n\) is bounded as well, hence there exists \(\lambda \in \mathbb {R}\) such that \(\lambda _n \rightarrow \lambda \) up to a subsequence, and \((u,\lambda )\) is a solution to (1.1). Finally, from the Nehari identity and the fact that \(\lambda _n \rightarrow \lambda \), we obtain

$$\begin{aligned} \begin{aligned} \int _{\mathbb {R}^N} |\nabla u|^2 + a(u)u + f_-(u)u \, dx&= \int _{\mathbb {R}^N} f_+(u)u - \lambda |u|^2 \, dx\\&= \lim _n \int _{\mathbb {R}^N} f_+(u_n)u_n - \lambda |u_n|^2 \, dx\\&= \lim _n \int _{\mathbb {R}^N} |\nabla u_n|^2 + a(u_n)u_n + f_-(u_n)u_n \, dx, \end{aligned} \end{aligned}$$

which, together with Remark 4.6, implies that \(|\nabla u_n|_2^2 \rightarrow |\nabla u|_2^2\) and \(\int _{\mathbb {R}^N} a(u_n)u_n \, dx \rightarrow \int _{\mathbb {R}^N} a(u)u \, dx\). It remains to prove that \(\Vert u_n - u\Vert _V \rightarrow 0\). By virtue of Lemma 4.1 (iii), it suffices to prove that \(\int _{\mathbb {R}^N} A(u_n) \, dx \rightarrow \int _{\mathbb {R}^N} A(u) \, dx\). Additionally, since A satisfies the \(\nabla _2\) condition globally, this occurs if the sequence \(a(u_n)u_n\) is bounded above by an integrable function, which holds true from (A) and [7, Example (b)]. \(\square \)

Lemma 4.8

For every \(k \ge 1\) there exist \(\pi _k :\mathbb {S}^{k-1} \rightarrow W_{\mathcal {O}(N)} \cap {\mathcal {S}}\) and \(\tilde{\pi }_k :\mathbb {S}^{k-1} \rightarrow \mathcal {X}\cap {\mathcal {S}}\) odd and continuous.


Proof
The existence of \(\pi _k\) was proved in [3, Theorem 10, Lemma 8] (with the notations therein, note that we can take any number different from 0 instead of \(\zeta \) because we do not need \(V(u) \ge 1\)). Then, following [17, Proof of Lemma 3.4] and [23, Remark 4.2], we take \(\phi \in \mathcal {C}^\infty (\mathbb {R})\) odd such that \(0 \le \phi \le 1\) and \(\phi (t) = 1\) for all \(t \ge 1\), and define for \(\sigma \in \mathbb {S}^{k-1}\)

$$\begin{aligned} \tilde{\pi }_k(\sigma )(x):= \pi _k(\sigma )(x) \phi (|x_1| - |x_2|) \quad \text {for all } x = (x_1,x_2,x_3) \in \mathbb {R}^M \times \mathbb {R}^M \times \mathbb {R}^{N-2M}. \end{aligned}$$

\(\square \)

We make use of the following abstract theorem, where \(\mathcal {G}\) stands for the Krasnosel'skii genus [32, Chapter 5].

Theorem 4.9

Let E be a Banach space, \(H \supset E\) a Hilbert space with scalar product \((\cdot |\cdot )\), \(R>0\), \(\mathcal {M}:= \left\{ u \in E: (u|u) = R\right\} \), and \(I \in \mathcal {C}^1(E)\) even such that \(I|_\mathcal {M}\) is bounded below. For every \(k \ge 1\), define

$$\begin{aligned} c_k:= \inf _{A \in \Gamma _k} \sup _{u \in A} I(u), \quad \Gamma _k:= \left\{ A \subset \mathcal {M}: A = -A = \overline{A} \text { and } \mathcal {G}(A) \ge k\right\} . \end{aligned}$$

Then, for every \(k \ge 1\), \(-\infty < c_1 \le \dots \le c_k \le c_{k+1} \le \dots \) and the following holds: if there exist \(k \ge 1\) and \(h \ge 0\) such that \(c_k = \dots = c_{k+h} < \infty \) and \(I|_\mathcal {M}\) satisfies the Palais–Smale condition at the level \(c_k\), then

$$\begin{aligned} \mathcal {G}\left( \left\{ u \in \mathcal {M}: I(u) = c_k \text { and } I|_\mathcal {M}'(u) = 0\right\} \right) \ge h + 1 \end{aligned}$$

(in particular, taking \(h=0\), every \(c_k\) is a critical value of \(I|_\mathcal {M}\)).


Proof
This theorem is basically (part of) [17, Theorem 2.1] (see also [26]), so we omit the proof. The only difference is that the values \(c_k\) are critical regardless of their sign, which is a consequence of \(I|_\mathcal {M}\) satisfying the Palais–Smale condition at any level. \(\square \)

Proof of Theorem 1.8

Let us set \(E = W_{\mathcal {O}(N)}\) (respectively, \(E = \mathcal {X}\)), \(H = L^2(\mathbb {R}^N)\), \(R = \rho ^2\), \(\mathcal {M}= {\mathcal {S}}\cap E\), and \(I = J|_E\). From Lemmas 4.5 and 4.7, \(I|_\mathcal {M}= J|_{{\mathcal {S}}\cap E}\) is bounded below and satisfies the Palais–Smale condition; moreover, from Lemma 4.8, \(\pi _k(\mathbb {S}^{k-1}) \in \Gamma _k\) (respectively, \(\tilde{\pi }_k(\mathbb {S}^{k-1}) \in \Gamma _k\)) for every k, so the numbers \(c_k\) are finite. Applying Theorem 4.9, we conclude the part about the existence of infinitely many solutions. Concerning the existence of a least-energy solution, we consider a sequence \(u_n \in {\mathcal {S}}\cap E\) such that \(\lim _n J(u_n) = \inf _{{\mathcal {S}}\cap E} J\). From Ekeland's variational principle, we can assume that \(u_n\) is a Palais–Smale sequence for \(J|_{{\mathcal {S}}\cap E}\), hence we argue as above to obtain a solution \((\bar{u},\bar{\lambda }) \in ({\mathcal {S}}\cap E) \times \mathbb {R}\) to (1.1)–(1.2) such that \(J(\bar{u}) = \min _{{\mathcal {S}}\cap E} J\). The fact that \(\min _{{\mathcal {S}}\cap W_{\mathcal {O}(N)}} J = \min _{{\mathcal {S}}\cap W} J\) follows from the properties of the Schwarz rearrangement [19, Chapter 3]. \(\square \)