1 Introduction

We consider the following variational problems

$$\begin{aligned} C_{G,\mu ,\alpha }:= \sup \left\{ \int _{\mathbb {R}^2}G(u^2) dx \ \Bigg | \ u \in H^1(\mathbb {R}^2),\ \int _{\mathbb {R}^2}\left( |\nabla u|^2 + \mu u^2\right) dx=\alpha \right\} \end{aligned}$$

and

$$\begin{aligned} D_{G, \alpha }:= \sup \left\{ \frac{\int _{\mathbb {R}^2}G(u^2) dx}{\int _{\mathbb {R}^2}u^2 dx} \ \Bigg | \ u \in H^1(\mathbb {R}^2),\ \int _{\mathbb {R}^2}|\nabla u|^2 dx = \alpha \right\} , \end{aligned}$$

where \(\mu \) and \(\alpha \) are positive constants and \(G: [0,\infty ) \rightarrow \mathbb {R}\) satisfies

  1. (G1)

    \(G(0)=0\), \(G \in C^1\left( (0,\infty ); \mathbb {R}\right) \) and G is convex,

  2. (G2)

    there exists a nonnegative constant m such that \(\lim _{s \rightarrow +0} G(s)/s = m\) and \(G(s) \not \equiv ms\),

  3. (G3)

    \(G(s) \le C e^{Cs}\) holds for all \(s>0\) with some positive constant C.

In the case \(G(s) = s^p\) with \(p > 1\), \(C_{G,\mu ,\alpha }\) is the best constant for the Sobolev embedding \(H^1(\mathbb {R}^2) \hookrightarrow L^{2p}(\mathbb {R}^2)\) and \(D_{G,\alpha }\) is the best constant of the Gagliardo-Nirenberg-Sobolev inequality. It is known that, for any \(\mu \) and \(\alpha \), \(C_{G,\mu ,\alpha }\) is attained by some function, owing to the compactness of the embedding \(H^1_{rad}(\mathbb {R}^2) \hookrightarrow L^{2p}(\mathbb {R}^2)\), and \(D_{G,\alpha }\) is attained as well. On the other hand, if \(G(s)=s\), then \(C_{G,\mu ,\alpha }\) is the best constant for \(H^1(\mathbb {R}^2) \hookrightarrow L^2(\mathbb {R}^2)\) and it is not attained due to the non-compactness of the embedding \(H^1_{rad}(\mathbb {R}^2) \hookrightarrow L^2(\mathbb {R}^2)\). Obviously, if \(G(s) = s\), then \(D_{G,\alpha }=1\) and \(D_{G,\alpha }\) is attained by every function satisfying the constraint.
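
In the power case \(G(s)=s^p\), the relation between \(D_{G,\alpha }\) and the Gagliardo-Nirenberg constant can be made explicit by an elementary scaling: for any \(w \in H^1(\mathbb {R}^2){\setminus }\{0\}\), the function \(u = tw\) with \(t^2 = \alpha /\int _{\mathbb {R}^2}|\nabla w|^2dx\) satisfies the constraint of \(D_{G,\alpha }\) and

$$\begin{aligned} \frac{\int _{\mathbb {R}^2}u^{2p} dx}{\int _{\mathbb {R}^2}u^2 dx} = t^{2(p-1)}\, \frac{\int _{\mathbb {R}^2}w^{2p} dx}{\int _{\mathbb {R}^2}w^2 dx} = \alpha ^{p-1}\, \frac{\int _{\mathbb {R}^2}w^{2p} dx}{\left( \int _{\mathbb {R}^2}|\nabla w|^2 dx\right) ^{p-1}\int _{\mathbb {R}^2}w^2 dx}, \end{aligned}$$

so that \(D_{G,\alpha } = \alpha ^{p-1}C_{GN}\), where \(C_{GN}\) denotes the best constant in the Gagliardo-Nirenberg inequality \(\Vert w\Vert _{L^{2p}}^{2p} \le C_{GN}\Vert \nabla w\Vert _{L^2}^{2(p-1)}\Vert w\Vert _{L^2}^{2}\).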

In the case \(G(s) = e^{s}-1\) and \(\alpha \le 4\pi \), the constant \(C_{G,\mu ,\alpha }\) is the best constant of the Trudinger-Moser inequality, the boundedness of which was obtained by B. Ruf [40]. The existence of a maximizer for \(C_{G,\mu , 4\pi }\) is also proved in [40]. In addition to the existence result, it is shown by M. Ishiwata [16] that there exists a threshold \(\alpha _* < 4\pi \) such that if \(\alpha > \alpha _*\), then \(C_{G,\mu , \alpha } > \alpha /\mu \) and \(C_{G,\mu , \alpha }\) is attained, while if \(\alpha < \alpha _*\), then \(C_{G,\mu , \alpha } = \alpha /\mu \) and \(C_{G,\mu , \alpha }\) is not attained. Concerning \(D_{G,\alpha }\), it is shown by T. Ogawa [34] that there exists a positive constant \(C_0\) such that \(D_{G,1} \le C_0\). Later, it is shown by Adachi and Tanaka [3] that \(D_{G,\alpha }<\infty \) holds if and only if \(\alpha <4\pi \). In [20] and [7], the existence of a maximizer of \(D_{G,\alpha }\) for any \(\alpha <4\pi \) is proved. Moreover, Cassani, Sani and Tarsi [7] obtained a sharp estimate of \(D_{G,\alpha }\) with respect to \(\alpha \), and proved that the boundedness of \(D_{G,\alpha }\) for every \(\alpha <4\pi \) is equivalent to the boundedness of \(C_{G,\mu ,\alpha }\) for \(\mu =1\) and \(\alpha =4\pi \). For more about the existence of extremal functions for the Trudinger-Moser inequality and its generalizations, we refer the reader to [1, 8, 9, 11, 13, 17, 21, 22, 23, 26, 27, 31, 32, 33, 35, 36] and references therein.

Maximizers of \(C_{G,\mu ,\alpha }\) and of \(D_{G,\alpha }\) are solutions of elliptic equations of the form

$$\begin{aligned} - \Delta u + \omega u = \lambda u g(u^2) \quad \textrm{in}\quad \mathbb {R}^2 \end{aligned}$$

with positive constants \(\omega \) and \(\lambda \), where g satisfies \(G(s) = \int _0^s g(t)dt\), and by proper scaling of solutions, the equation can be simplified to

$$\begin{aligned} - \Delta u + u = \Lambda u g(u^2) \quad \textrm{in}\quad \mathbb {R}^2 \end{aligned}$$
(1)

with a positive constant \(\Lambda \). Concerning more general problems, the equation

$$\begin{aligned} {\left\{ \begin{array}{ll} - \Delta u = f(u) \quad \textrm{in}\quad \mathbb {R}^N,\\ u \in H^1(\mathbb {R}^N) \end{array}\right. } \end{aligned}$$
(2)

has been extensively studied, starting from the fundamental papers due to Berestycki and Lions [5] and to Berestycki, Gallouët and Kavian [6]. Equation (2) has a variational structure, and solutions of (2) can be characterized as critical points of the functional \(I:H^1(\mathbb {R}^N) \rightarrow \mathbb {R}\) defined by

$$\begin{aligned} I(u):= \frac{1}{2} \int _{\mathbb {R}^N} |\nabla u|^2 dx - \int _{\mathbb {R}^N}F(u)dx, \end{aligned}$$

where \(F(s) = \int _0^s f(t) dt\). In [5] and [6], the authors establish the existence of a ground state solution, namely, a solution of (2) which has least energy among all nontrivial critical points of I, through the minimization problems:

$$\begin{aligned} \inf \left\{ \int _{\mathbb {R}^N} |\nabla u|^2dx \ \Bigg |\ \int _{\mathbb {R}^N} F(u)dx = 1 \right\} \quad \textrm{for} \quad N \ge 3, \\ \inf \left\{ \int _{\mathbb {R}^2}|\nabla u|^2dx \ \Bigg |\ \int _{\mathbb {R}^2}F(u)dx = 0 \right\} \quad \textrm{for} \quad N =2. \end{aligned}$$
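
In the case \(N \ge 3\), a minimizer v of the first problem satisfies \(-\Delta v = \lambda f(v)\) in \(\mathbb {R}^N\) for some Lagrange multiplier \(\lambda >0\), and a dilation converts it into a solution of (2); a brief sketch of this classical step reads

$$\begin{aligned} u(x):= v\left( x/\sqrt{\lambda }\right) , \qquad - \Delta u(x) = - \frac{1}{\lambda }(\Delta v)\left( x/\sqrt{\lambda }\right) = f(u(x)) \quad \textrm{in}\quad \mathbb {R}^N. \end{aligned}$$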

The uniqueness of ground state solution is studied in [2, 4, 10, 24, 25, 29, 30, 37,38,39, 42, 43]. In particular, if \(f(s) = s^p-as^q-s\) with \(a \ge 0\) and \(1<q<p<(N+2)/(N-2)\), then the ground state solution of (2) is unique.

In this paper, we investigate properties of maximizers of \(C_{G, \mu , \alpha }\) and of \(D_{G,\alpha }\). More precisely, we study the relationship between these maximizers and ground state solutions of (1). As mentioned above, in the case \(G(s) = s\), \(C_{G,\mu ,\alpha }\) is not attained and \(D_{G,\alpha }\) is attained by any function satisfying the constraint. Thus, it is natural to assume that \(G(s) \not \equiv ms\) in (G2).

Concerning maximizers of \(C_{G,\mu ,\alpha }\) and ground state solutions of (1), we prove the following result.

Theorem 1

Assume that \(u_0 \in H^1(\mathbb {R}^2)\) is a maximizer of \(C_{G, \mu , \alpha }\). Then, there exists a positive constant \(\Lambda _0\) such that \(u_0\) is a ground state solution of (1) with \(\Lambda = \Lambda _0\), up to scaling.

The proof of Theorem 1 relies on suitable scaling properties investigated in [7], and we use the best constant \(D_{G,\alpha }\) to identify the Lagrange multiplier. Moreover, we do not use any variational techniques to prove Theorem 1.

In general, ground state solutions of (1) need not be maximizers of \(C_{G, \mu ,\alpha }\). The next result provides an example of a ground state solution which is not a maximizer of \(C_{G, \mu ,\alpha }\).

Theorem 2

Assume that \(G(s) = e^s-1\) and \(w_\Lambda \) is a ground state solution of (1) for \(\Lambda >0\). Let \(\alpha _{\mu } = \int _{\mathbb {R}^2}\left( |\nabla w_\Lambda |^2 + \mu w_\Lambda ^2\right) dx\) for \(\mu >0\). Then, there exists \(\Lambda _* \in (0,1)\) such that for any \(\Lambda \in (0,\Lambda _*)\) and \(\mu >0\), either \(\alpha _{\mu } > 4\pi \), or \(\alpha _{\mu } \le 4 \pi \) and \(\int _{\mathbb {R}^2}G(w_\Lambda ^2)dx < C_{G,\mu ,\alpha _{\mu }}\).

The existence of a ground state solution of (1) with \(G(s)=e^{s}-1\) and \(\Lambda \in (0,1)\) is guaranteed by the result of Ruf and Sani [41]. Theorem 2 asserts that a ground state solution \(w_\Lambda \) of (1) with small \(\Lambda \) is either a critical point, but not a maximizer, of \(\int _{\mathbb {R}^2}(e^{u^2}-1)dx\) under the constraint \(\int _{\mathbb {R}^2}\left( |\nabla u|^2 + \mu u^2\right) dx \le 4\pi \), or a critical point of \(\int _{\mathbb {R}^2}(e^{u^2}-1)dx\) under the constraint \(\int _{\mathbb {R}^2}\left( |\nabla u|^2 + \mu u^2\right) dx > 4\pi \), even though \(C_{G, \mu , \alpha }= \infty \) for \(\alpha >4\pi \). Theorems 1 and 2 thus show that the equivalence between maximizers of \(C_{G,\mu ,\alpha }\) and ground state solutions of (1) does not hold in general.

To state our results regarding the relationship between maximizers of the variational problem \(D_{G,\alpha }\) and ground state solutions of (1), we consider the following additional condition on G.

  1. (G4)

    \(D_{G, \alpha }\) is attained whenever \(D_{G,\alpha }<\infty \).

We prove the following results.

Theorem 3

Assume that G satisfies (G1)-(G3) and \(v_0 \in H^1(\mathbb {R}^2)\) is a maximizer of \(D_{G,\alpha }\). Then, \(v_0\) is a ground state solution of (1) for \(\Lambda = D_{G,\alpha }^{-1}\), up to scaling.

Theorem 4

Assume that G satisfies (G1)-(G4) and \(w_0 \in H^1(\mathbb {R}^2)\) is a ground state solution of (1) for \(\Lambda >0\). Let \(\alpha _0 = \int _{\mathbb {R}^2}|\nabla w_0|^2dx\). Then,

$$\begin{aligned} \Lambda = D_{G, \alpha _0}^{-1} \end{aligned}$$

and \(w_0\) is a maximizer of \(D_{G,\alpha _0}\).

As for the condition (G4), using the results of [3, 18, 19] and the arguments used to prove Theorem 1.1 in [7], we describe some sufficient conditions for (G4). Under the conditions (G1)-(G3), by the result of [18], if G satisfies

$$\begin{aligned} \lim _{s \rightarrow \infty } \frac{sG(s)}{e^{Ks}}=0 \end{aligned}$$
(3)

for some positive constant K provided that \(\lim _{s \rightarrow \infty } G(s)/e^{K_{-}s}=\infty \) and \(\lim _{s \rightarrow \infty } G(s)/e^{K_{+}s}=0\) for any \(K_{-}<K<K_{+}\), then \(D_{G,\alpha } < \infty \) if and only if \(\alpha \le 4\pi /K\). Moreover, by the conditions (G1)-(G3) and the arguments of [7], we derive the existence of a maximizer of \(D_{G,\alpha }\) for any \(\alpha \in (0,4\pi /K]\). If G satisfies

$$\begin{aligned} \lim _{s \rightarrow \infty } \frac{sG(s)}{e^{Ks}}=\infty \quad \textrm{and} \quad \lim _{s \rightarrow \infty } \frac{G(s)}{e^{Ks}}<\infty , \end{aligned}$$
(4)

then \(D_{G,\alpha } < \infty \) for \(\alpha < 4\pi /K\) and \(D_{G,\alpha }=\infty \) for \(\alpha \ge 4\pi /K\) by the results of [3] and [18]. In the former case, there exists a maximizer of \(D_{G,\alpha }\) for any \(\alpha < 4\pi /K\) for the same reason as in the case (3). In the remaining case

$$\begin{aligned} 0<\lim _{s \rightarrow \infty } \frac{sG(s)}{e^{Ks}}<\infty , \end{aligned}$$
(5)

the attainability of \(D_{G, 4\pi /K}\) depends on lower order perturbations included in G. Conditions for the existence and non-existence of a maximizer of \(D_{G, 4\pi /K}\) are given by Theorem 1.1 in [19]. Thus, in addition to the subcritical case, G satisfies (G4) if the growth of G satisfies (3), (4) or (5) together with an existence condition of Theorem 1.1 in [19]. In particular, the functions \(G(s) = e^s-1\) and \(G(s)=s^p\) with \(p>1\) satisfy (G4). It is shown in Corollary 1.3 in [19] that there exists a function G satisfying (5) for which there is no mountain pass solution of (1) with small \(\Lambda \); such a function G does not satisfy (G4).
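
To illustrate the positive cases above: for \(G(s) = e^s-1\), condition (4) holds with \(K=1\), since

$$\begin{aligned} \lim _{s \rightarrow \infty } \frac{s\left( e^s-1\right) }{e^{s}} = \lim _{s \rightarrow \infty } s\left( 1-e^{-s}\right) =\infty \quad \textrm{and} \quad \lim _{s \rightarrow \infty } \frac{e^s-1}{e^{s}} = 1 <\infty , \end{aligned}$$

so \(D_{G,\alpha }<\infty \) exactly for \(\alpha <4\pi \) and a maximizer exists for every such \(\alpha \), while \(G(s)=s^p\) with \(p>1\) has subcritical growth and (G4) is immediate.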

In the special case \(G(s)=s^p\) with \(p>1\), a stronger result follows from the uniqueness result on positive solutions of (1) by Kwong [25]. In the situation \(G(s)=s^p\) for \(p>1\), maximizers of \(C_{G,\mu ,\alpha }\) and \(D_{G,\alpha }\) are positive solutions of (1) with \(\Lambda =1\), up to dilation and multiplication by a constant. Moreover, the existence of a positive ground state solution of (1) with \(\Lambda =1\) is obtained in [6], and the uniqueness of positive solutions of (1) with \(\Lambda =1\) is proved in [25]. Thus, these results yield that all maximizers of \(C_{G,\mu ,\alpha }\) and \(D_{G,\alpha }\), for any positive constants \(\mu \) and \(\alpha \), coincide with the unique positive ground state solution of (1) with \(\Lambda =1\), up to dilation and multiplication by a constant.
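
The reduction to \(\Lambda =1\) mentioned above is an elementary amplitude scaling: with \(G(s)=s^p\) we have \(g(s)=ps^{p-1}\), and if u solves (1), then

$$\begin{aligned} - \Delta u + u = \Lambda p\, u^{2p-1} \quad \textrm{in}\quad \mathbb {R}^2, \qquad w:= (\Lambda p)^{\frac{1}{2p-2}}\, u \quad \Longrightarrow \quad - \Delta w + w = w^{2p-1} \quad \textrm{in}\quad \mathbb {R}^2, \end{aligned}$$

to which the uniqueness result of [25] applies; the constant \(\mu \) in the equation satisfied by a maximizer of \(C_{G,\mu ,\alpha }\) is removed by the dilation \(x \mapsto x/\sqrt{\mu }\) used in the proof of Theorem 1.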

In contrast to Theorem 2, under the additional condition (G4), any ground state solution of (1) attains the maximization problem \(D_{G,\alpha }\) for some \(\alpha \). By Theorems 3, 4 and a scaling property of (1), under the condition (G4) the existence of a maximizer of \(D_{G,\alpha }\) is equivalent to the existence of a ground state solution of (1) with \(\Lambda =D_{G,\alpha }^{-1}\), and the ground state level is \(\alpha /2\) whenever a ground state solution exists.

This paper is organized as follows. In Sect. 2, we prove Theorems 1 and 3. We first prove Theorem 3, and then, using Theorem 3, we prove Theorem 1. The key argument in the proof of Theorem 3 is the characterization of ground state solutions of (2) given in [6] in the subcritical case. In order to prove Theorem 1, we show that a maximizer of \(C_{G,\mu ,\alpha }\) is also a maximizer of \(D_{G,\alpha _1}\) for some \(\alpha _1<\alpha \). In Sect. 3, we prove Theorem 2. To prove Theorem 2, we estimate the Dirichlet norm of the ground state solution \(w_\Lambda \) for small \(\Lambda \). We show that \(w_\Lambda \) concentrates at the origin as \(\Lambda \rightarrow 0\), unless \(\alpha _{\mu } > 4\pi \). Then, under the assumption \(\alpha _{\mu } \le 4\pi \), we apply the blow-up analysis of [27] to \(w_\Lambda \). In Sect. 4, we prove Theorem 4. In Sect. 5, we extend Theorems 1, 2, 3 and 4 to the higher-dimensional setting \(N \ge 3\) and \(W^{1,N}(\mathbb {R}^N)\).

2 Proof of Theorems 1 and 3

In this section, we prove Theorems 1 and 3. In order to prove these theorems, we fix some notations. For a positive constant K, we define

$$\begin{aligned} D^*_{G, \alpha , K}:= \sup \left\{ \int _{\mathbb {R}^2}G(u^2) dx \ \Bigg | \ u \in H^1(\mathbb {R}^2),\ \int _{\mathbb {R}^2}|\nabla u|^2 dx = \alpha , \ \int _{\mathbb {R}^2}u^2 dx =K \right\} . \end{aligned}$$

For G satisfying (G1)-(G3) we define a function g such that \(G(s) = \int _0^s g(t) dt\). We define the energy functional \(I_{\Lambda }:H^1(\mathbb {R}^2) \rightarrow \mathbb {R}\) corresponding to the equation (1) by

$$\begin{aligned} I_{\Lambda }(u):= \frac{1}{2}\int _{\mathbb {R}^2}\left( |\nabla u|^2 + u^2\right) dx - \frac{\Lambda }{2} \int _{\mathbb {R}^2}G(u^2)dx. \end{aligned}$$
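
Critical points of \(I_\Lambda \) are precisely the weak solutions of (1): for \(u, \varphi \in H^1(\mathbb {R}^2)\), the derivative of \(I_\Lambda \) reads

$$\begin{aligned} I_{\Lambda }'(u)\varphi = \int _{\mathbb {R}^2}\left( \nabla u \cdot \nabla \varphi + u \varphi \right) dx - \Lambda \int _{\mathbb {R}^2} u\, g(u^2)\, \varphi \, dx, \end{aligned}$$

and \(I_{\Lambda }'(u)=0\) is the weak formulation of (1).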

Then, the ground state level is defined as

$$\begin{aligned} M_{\Lambda }:= \inf \left\{ I_{\Lambda }(u) \ \Bigg |\ u \in H^1(\mathbb {R}^2) \setminus \{0\} \mathrm{\ is \ a \ solution \ of \ }(1) \right\} . \end{aligned}$$

We summarize some properties of G. By the conditions (G1) and (G2), since the quotient \(G(s)/s\) is nondecreasing by convexity and tends to m as \(s \rightarrow +0\), the lower estimate \(G(s) \ge ms\) holds for any \(s \ge 0\); moreover, there exists \(s_0>0\) such that \(G(s_0)>ms_0\). Set

$$\begin{aligned} S_0:=\inf \left\{ s_0 \ge 0\ \Bigg | \ G(s_0) > ms_0 \right\} . \end{aligned}$$

Using the convexity of G again, we observe that

$$\begin{aligned} G(\kappa s) < \kappa G(s) \quad \mathrm{for\ any}\quad s>S_0 \quad \textrm{and} \quad \kappa \in (0,1). \end{aligned}$$
(6)

Moreover, for the same constant \(S_0\), we have

$$\begin{aligned} G(s) < sg(s) \quad \mathrm{for\ any}\quad s>S_0. \end{aligned}$$
(7)
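
Both inequalities are short consequences of convexity. For the reader's convenience, the chord and supporting line estimates read

$$\begin{aligned} G(\kappa s) = G\left( \kappa s + (1-\kappa ) \cdot 0\right) \le \kappa G(s) + (1-\kappa ) G(0) = \kappa G(s), \qquad G(0) \ge G(s) + g(s)(0-s), \end{aligned}$$

that is, \(G(\kappa s) \le \kappa G(s)\) and \(G(s) \le s g(s)\) for every \(s>0\); equality in either estimate would force G to be linear on \([0,s]\), hence \(G(t)=mt\) there by (G2), which is impossible for \(s>S_0\) by the definition of \(S_0\). This yields the strict inequalities (6) and (7).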

Going back to the properties that \(G(s) \ge ms\) holds for any \(s \ge 0\) and \(G(s_0)>ms_0\) holds for some \(s_0\), we have

$$\begin{aligned} D_{G,\alpha }>m \end{aligned}$$
(8)

for any \(\alpha >0\).
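
The strict inequality in (8) uses the fact that, in two dimensions, functions with arbitrarily small Dirichlet energy may exceed any prescribed height. A standard building block (a sketch; the height \(\sqrt{S_0+1}\) and the parameter \(R>1\) are auxiliary choices) is

$$\begin{aligned} u_R(x):= \sqrt{S_0+1}\, \min \left\{ 1, \frac{\left( \log (R/|x|)\right) _+}{\log R}\right\} , \qquad \int _{\mathbb {R}^2}|\nabla u_R|^2 dx = \frac{2\pi (S_0+1)}{\log R} \rightarrow 0 \quad (R \rightarrow \infty ). \end{aligned}$$

Choosing R so large that \(\int _{\mathbb {R}^2}|\nabla u_R|^2dx \le \alpha \), and multiplying \(u_R\) by the constant \(c \ge 1\) with \(c^2\int _{\mathbb {R}^2}|\nabla u_R|^2dx = \alpha \), we obtain an admissible function for \(D_{G,\alpha }\) whose square exceeds \(S_0\) on the unit disc, so that \(\int _{\mathbb {R}^2}G\left( (cu_R)^2\right) dx > m\int _{\mathbb {R}^2}(cu_R)^2dx\).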

We first prove Theorem 3. Assume that a function G satisfies (G1)-(G3), \(\alpha >0\) and \(v_0 \in H^1(\mathbb {R}^2)\) is a maximizer of \(D_{G,\alpha }\). By the Lagrange multiplier theorem, \(v_0\) satisfies

$$\begin{aligned} - \Lambda _0 \Delta v_0 = \frac{1}{\int _{\mathbb {R}^2}v_0^2dx} \left( - D_{G,\alpha } v_0 + v_0 g(v_0^2)\right) \quad \textrm{in}\quad \mathbb {R}^2, \end{aligned}$$
(9)

where \(\Lambda _0 \in \mathbb {R}\) is the Lagrange multiplier. By (8), we see that \(\Vert v_0\Vert _{L^\infty (\mathbb {R}^2)} > \sqrt{S_0}\), and thus by (7), we have

$$\begin{aligned} \int _{\mathbb {R}^2}v_0^2 g(v_0^2)dx > \int _{\mathbb {R}^2}G(v_0^2)dx. \end{aligned}$$

Multiplying (9) by \(v_0\) and integrating over \(\mathbb {R}^2\), we have

$$\begin{aligned} \Lambda _0 \int _{\mathbb {R}^2}|\nabla v_0|^2dx&= -D_{G,\alpha } + \frac{\int _{\mathbb {R}^2}v_0^2 g(v_0^2)dx}{\int _{\mathbb {R}^2}v_0^2dx} \\&> -D_{G,\alpha } + \frac{\int _{\mathbb {R}^2}G(v_0^2)dx}{\int _{\mathbb {R}^2}v_0^2dx}\\&= 0. \end{aligned}$$

Hence, it holds that \(\Lambda _0>0\).

Set

$$\begin{aligned} w_0(x) = v_0 \left( \theta x\right) \quad \textrm{with}\quad \theta := \sqrt{\frac{\Lambda _0 \int _{\mathbb {R}^2}v_0^2dx}{ D_{G,\alpha }}}. \end{aligned}$$
(10)

Then, \(w_0\) is a solution of

$$\begin{aligned} - \Delta w + w = D_{G,\alpha }^{-1} w g(w^2) \quad \textrm{in}\quad \mathbb {R}^2 \end{aligned}$$
(11)

and it holds that

$$\begin{aligned} \int _{\mathbb {R}^2}|\nabla w_0|^2dx=\alpha . \end{aligned}$$
(12)
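
For the reader's convenience, the computation behind (11) and (12) is as follows: by (9) and the chain rule,

$$\begin{aligned} - \Delta w_0(x) = - \theta ^2 (\Delta v_0)(\theta x) = \frac{\theta ^2}{\Lambda _0 \int _{\mathbb {R}^2}v_0^2dx}\left( - D_{G,\alpha } v_0 + v_0 g(v_0^2)\right) (\theta x) = - w_0(x) + D_{G,\alpha }^{-1} w_0(x)\, g(w_0(x)^2), \end{aligned}$$

where the last equality uses the choice of \(\theta \) in (10); moreover, the Dirichlet integral is invariant under dilations in two dimensions, so \(\int _{\mathbb {R}^2}|\nabla w_0|^2dx = \int _{\mathbb {R}^2}|\nabla v_0|^2dx = \alpha \), which is (12).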

In [6], the Pohozaev identity was shown under the condition that g has subcritical growth. We prove the same identity for G satisfying (G1)-(G3).

Proposition 5

Assume that a function G satisfies the conditions (G1)-(G3). Then, any solution \(u \in H^1(\mathbb {R}^2)\) of (1) with \(\Lambda >0\) satisfies

$$\begin{aligned} \int _{\mathbb {R}^2}\left( \Lambda G(u^2) - u^2\right) dx = 0. \end{aligned}$$

Proof

By the convexity of G, we have

$$\begin{aligned} g(s_1) \le \frac{G(s_2)-G(s_1)}{s_2-s_1} \end{aligned}$$

for any positive constants \(s_1\) and \(s_2\) with \(s_2>s_1\). In particular, it holds that

$$\begin{aligned} g(s_1) \le \frac{G(2s_1)}{s_1} \end{aligned}$$

for any \(s_1>0\), and then, by (G2) and (G3), there exists \(L>0\) such that

$$\begin{aligned} g(s) \le L e^{Ls} \end{aligned}$$

for any \(s \ge 0\). By elliptic regularity theory, we derive that \(u \in W^{2,q}_{loc}(\mathbb {R}^2)\) for any \(q>1\). Hence, applying the argument used to prove Claim 5.3 in [14], we obtain the identity of the proposition. \(\square \)

By Proposition 5, we can write

$$\begin{aligned} M_{\Lambda } = \inf \left\{ \frac{1}{2} \int _{\mathbb {R}^2}|\nabla u|^2 dx \ \Bigg | \ u \in H^1(\mathbb {R}^2) \setminus \{0\} \mathrm{\ is \ a \ solution \ of \ }(1)\right\} . \end{aligned}$$
(13)
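
Indeed, for any nontrivial solution u of (1), Proposition 5 removes the potential terms from the energy:

$$\begin{aligned} I_{\Lambda }(u) = \frac{1}{2}\int _{\mathbb {R}^2}|\nabla u|^2 dx + \frac{1}{2}\int _{\mathbb {R}^2}\left( u^2 - \Lambda G(u^2)\right) dx = \frac{1}{2}\int _{\mathbb {R}^2}|\nabla u|^2 dx. \end{aligned}$$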

Next, we prove the monotonicity of \(D_{G,\alpha }\) with respect to \(\alpha \).

Proposition 6

Assume that \(\beta >0\). Then, for any \(v \in H^1(\mathbb {R}^2)\) satisfying \(\int _{\mathbb {R}^2}|\nabla v|^2dx<\beta \), it holds that

$$\begin{aligned} \frac{\int _{\mathbb {R}^2}G(v^2)dx}{\int _{\mathbb {R}^2}v^2dx} < D_{G,\beta }. \end{aligned}$$

Proof

Let \(v \in H^1(\mathbb {R}^2)\) be such that \(\int _{\mathbb {R}^2}|\nabla v|^2dx<\beta \) and put \(\gamma := \int _{\mathbb {R}^2}|\nabla v|^2dx\). We distinguish two cases:

Case 1.

$$\begin{aligned} \Vert v \Vert _{L^\infty (\mathbb {R}^2)} \le \sqrt{S_0}. \end{aligned}$$

In this case, \(G(v(x)^2)\) coincides with \(mv(x)^2\) for a.e. \(x \in \mathbb {R}^2\). Thus, we have

$$\begin{aligned} \frac{\int _{\mathbb {R}^2}G(v^2)dx}{\int _{\mathbb {R}^2}v^2dx} = m < D_{G, \beta }. \end{aligned}$$

Hence, we obtain the desired estimate.

Case 2.

$$\begin{aligned} \Vert v \Vert _{L^\infty (\mathbb {R}^2)} > \sqrt{S_0}. \end{aligned}$$

We consider

$$\begin{aligned} v_\beta (x) = \sqrt{\frac{\beta }{\gamma }}v(x). \end{aligned}$$

It is easy to check that \(\int _{\mathbb {R}^2}|\nabla v_\beta |^2dx = \beta \). Moreover, by the hypothesis and (6), we derive that

$$\begin{aligned} \int _{\left\{ v> \sqrt{S_0}\right\} } G\left( v^2\right) dx < \frac{\gamma }{\beta }\int _{\left\{ v > \sqrt{S_0}\right\} } G(v_\beta ^2)dx. \end{aligned}$$

Hence,

$$\begin{aligned} \frac{\int _{\mathbb {R}^2}G(v^2)dx}{\int _{\mathbb {R}^2}v^2dx} < \frac{\int _{\mathbb {R}^2}G(v_\beta ^2)dx}{\int _{\mathbb {R}^2}v_\beta ^2dx} \le D_{G,\beta }. \end{aligned}$$

Consequently, we conclude that Proposition 6 holds. \(\square \)

Proof of Theorem 3

Propositions 5 and 6 imply that any nontrivial solution w of (11) necessarily satisfies

$$\begin{aligned} \int _{\mathbb {R}^2}|\nabla w|^2 dx \ge \alpha . \end{aligned}$$
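
Indeed, if w is a nontrivial solution of (11), then Proposition 5 applied with \(\Lambda = D_{G,\alpha }^{-1}\) gives

$$\begin{aligned} \frac{\int _{\mathbb {R}^2}G(w^2)dx}{\int _{\mathbb {R}^2}w^2dx} = D_{G,\alpha }, \end{aligned}$$

and if \(\int _{\mathbb {R}^2}|\nabla w|^2dx < \alpha \) held, Proposition 6 with \(\beta =\alpha \) would force the left-hand side to be strictly smaller than \(D_{G,\alpha }\), a contradiction.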

The estimate and (13) yield the following lower bound of the ground state level:

$$\begin{aligned} M_{D_{G,\alpha }^{-1}}\ge \frac{\alpha }{2}. \end{aligned}$$

Moreover, it holds that, by (12) and (13),

$$\begin{aligned} M_{D_{G,\alpha }^{-1}} \le \frac{1}{2}\int _{\mathbb {R}^2}|\nabla w_0|^2dx =\frac{\alpha }{2}. \end{aligned}$$

Hence, we derive that \(M_{D_{G,\alpha }^{-1}} = \frac{1}{2}\int _{\mathbb {R}^2}|\nabla w_0|^2dx = \frac{\alpha }{2}\). Consequently, \(w_0\) is a ground state solution of (11), and by (10), we conclude Theorem 3. \(\square \)

We next prove Theorem 1. Assume that G satisfies (G1)-(G3), \(\mu >0\), \(\alpha >0\) and \(u_0 \in H^1(\mathbb {R}^2)\) is a maximizer of \(C_{G,\mu ,\alpha }\). The maximizer is a solution of

$$\begin{aligned} - \Delta u + \mu u = \Lambda _1 u g(u^2) \quad \textrm{in}\quad \mathbb {R}^2, \end{aligned}$$

where \(\Lambda _1\) is the Lagrange multiplier characterized by

$$\begin{aligned} \Lambda _1 = \frac{\alpha }{\int _{\mathbb {R}^2}u_0^2 g(u_0^2)dx}. \end{aligned}$$

Since \(\alpha >0\), we see that \(\Lambda _1>0\). We define a constant by

$$\begin{aligned} \alpha _1:= \int _{\mathbb {R}^2}|\nabla u_0|^2dx. \end{aligned}$$

Then, we prove the following proposition.

Proposition 7

The function \(u_0 \in H^1(\mathbb {R}^2)\) is a maximizer of \(D_{G,\alpha _1}\) and we have

$$\begin{aligned} \Lambda _1 = \frac{\mu }{D_{G, \alpha _1}}. \end{aligned}$$
(14)

Proof

By the constraint of \(C_{G, \mu , \alpha }\), we see that

$$\begin{aligned} \int _{\mathbb {R}^2}u_0^2 dx = \frac{\alpha - \alpha _1}{\mu }. \end{aligned}$$

Then, it follows from the definitions of \(C_{G, \mu , \alpha }\) and \(D^*_{G, \beta ,K}\) that

$$\begin{aligned} C_{G, \mu , \alpha } \ge D^*_{G, \alpha _1, (\alpha -\alpha _1)/\mu }. \end{aligned}$$

Since \(u_0\) is a maximizer of \(C_{G,\mu ,\alpha }\) and satisfies the constraint of \(D^*_{G, \alpha _1, (\alpha -\alpha _1)/\mu }\), \(u_0\) also attains the best constant \(D^*_{G, \alpha _1, (\alpha -\alpha _1)/\mu }\).

Here, for any function \(v \in H^1(\mathbb {R}^2)\) and positive constant K, we consider the following scaling

$$\begin{aligned} v_K(x) = v \left( \theta _K x\right) \quad \textrm{with} \quad \theta _K = \sqrt{\frac{\int _{\mathbb {R}^2}v^2dx}{K}}. \end{aligned}$$

Then, we observe that

$$\begin{aligned} \frac{D^*_{G, \beta , K}}{K} = D_{G, \beta } \end{aligned}$$
(15)

for any G and \(\beta >0\), and hence, \(u_0\) also attains \(D_{G, \alpha _1}\).
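
The identity (15) follows from the behavior of each integral under this scaling:

$$\begin{aligned} \int _{\mathbb {R}^2}v_K^2 dx = \theta _K^{-2}\int _{\mathbb {R}^2}v^2 dx = K, \qquad \int _{\mathbb {R}^2}|\nabla v_K|^2 dx = \int _{\mathbb {R}^2}|\nabla v|^2 dx, \qquad \int _{\mathbb {R}^2}G(v_K^2) dx = \frac{K}{\int _{\mathbb {R}^2}v^2 dx}\int _{\mathbb {R}^2}G(v^2) dx, \end{aligned}$$

so dividing the last integral by K reproduces the quotient defining \(D_{G,\beta }\), and the two suprema coincide up to the factor K. Readers who wish to see (15) in action numerically may use the following short script (a sketch; the Gaussian profile, the nonlinearity \(G(s)=e^s-1\), the target mass \(K=5\) and the cutoff rmax are arbitrary choices made for illustration only):

```python
import numpy as np
from scipy.integrate import quad

# Integral of a radial function over R^2: int f(|x|) dx = 2*pi * int_0^infty f(r) r dr.
def integral_R2(f, rmax=30.0):
    value, _ = quad(lambda r: f(r) * r, 0.0, rmax)
    return 2.0 * np.pi * value

v = lambda r: np.exp(-r ** 2)               # illustrative profile v(x) = exp(-|x|^2)
dv = lambda r: -2.0 * r * np.exp(-r ** 2)   # its radial derivative
G = lambda s: np.expm1(s)                   # model nonlinearity G(s) = e^s - 1

mass = integral_R2(lambda r: v(r) ** 2)          # int v^2 dx
dirichlet = integral_R2(lambda r: dv(r) ** 2)    # int |grad v|^2 dx
energy = integral_R2(lambda r: G(v(r) ** 2))     # int G(v^2) dx

K = 5.0                                          # arbitrary target L^2 mass
theta = np.sqrt(mass / K)                        # v_K(x) = v(theta x)
vK = lambda r: v(theta * r)
dvK = lambda r: theta * dv(theta * r)

print(integral_R2(lambda r: vK(r) ** 2), K)                      # L^2 mass equals K
print(integral_R2(lambda r: dvK(r) ** 2), dirichlet)             # Dirichlet integral is unchanged
print(integral_R2(lambda r: G(vK(r) ** 2)) / K, energy / mass)   # the two quotients in (15) agree
```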

Next, we prove the equality (14). The same argument as in the proof of Proposition 5 yields that

$$\begin{aligned} \int _{\mathbb {R}^2}\left( \Lambda _1 G(u_0^2) - \mu u_0^2\right) dx=0, \end{aligned}$$

and then we derive that

$$\begin{aligned} \Lambda _1 D^*_{G, \alpha _1, (\alpha -\alpha _1)/\mu } - (\alpha -\alpha _1) = 0, \end{aligned}$$

or

$$\begin{aligned} \Lambda _1 = \frac{\alpha -\alpha _1}{D^*_{G, \alpha _1, (\alpha -\alpha _1)/\mu }}. \end{aligned}$$

Combining this equality with (15) gives (14), and hence, Proposition 7 is proved. \(\square \)

Proof of Theorem 1

Set

$$\begin{aligned} w_1(x) = u_0\left( x/\sqrt{\mu }\right) . \end{aligned}$$
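
A direct computation shows the effect of this dilation on the equation satisfied by \(u_0\): since \(-\Delta u_0 + \mu u_0 = \Lambda _1 u_0 g(u_0^2)\) and \(\Lambda _1 = \mu /D_{G,\alpha _1}\) by (14),

$$\begin{aligned} - \Delta w_1(x) = - \frac{1}{\mu }(\Delta u_0)\left( x/\sqrt{\mu }\right) = \frac{1}{\mu }\left( - \mu u_0 + \Lambda _1 u_0 g(u_0^2)\right) \left( x/\sqrt{\mu }\right) = - w_1(x) + D_{G,\alpha _1}^{-1} w_1(x)\, g(w_1(x)^2), \end{aligned}$$

and the Dirichlet integral is unchanged: \(\int _{\mathbb {R}^2}|\nabla w_1|^2dx = \int _{\mathbb {R}^2}|\nabla u_0|^2dx = \alpha _1\).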

By Proposition 7 and Theorem 3, \(w_1\) is a ground state solution of

$$\begin{aligned} - \Delta w + w = D_{G,\alpha _1}^{-1} w g(w^2) \quad \textrm{in}\quad \mathbb {R}^2. \end{aligned}$$

Consequently, the proof of Theorem 1 is complete. \(\square \)

3 Proof of Theorem 2

Proof of Theorem 2

Suppose that \(G(s) = e^s-1\) for \(s \ge 0\). Let \(\{\Lambda _n\}\) be a sequence of positive numbers such that \(\Lambda _n \rightarrow 0\) as \(n \rightarrow \infty \) and let \(w_n \in H^1(\mathbb {R}^2)\) be a ground state solution of

$$\begin{aligned} - \Delta w + w = \Lambda _n w e^{w^2} \quad \textrm{in}\quad \mathbb {R}^2. \end{aligned}$$
(16)

We note that \(w_n\) is positive and radially symmetric by the result of [15]. For \(\mu >0\), let \(\alpha _{\mu , n}\) denote \(\int _{\mathbb {R}^2}\left( |\nabla w_n|^2 + \mu w_n^2\right) dx\); in the following, we assume that \(\alpha _{\mu , n} \le 4\pi \). We first prove that \(w_n\) does not attain \(C_{G,\mu ,\alpha _{\mu ,n}}\) for any \(\mu \not = 1\). Assume, on the contrary, that \(\int _{\mathbb {R}^2}\left( e^{w_n^2}-1\right) dx = C_{G,\mu ,\alpha _{\mu ,n}}\) holds with \(\mu \not =1\). We observe that \(w_n\) is a solution of

$$\begin{aligned} - \Delta w + \mu w = \Lambda _\mu w e^{w^2} \quad \textrm{in}\quad \mathbb {R}^2 \end{aligned}$$
(17)

with a Lagrange multiplier \(\Lambda _\mu \) depending on n. Applying the argument in the proof of Proposition 5 to the above equation, we have

$$\begin{aligned} \int _{\mathbb {R}^2}\left[ \mu w_n^2 - \Lambda _\mu \left( e^{w_n^2}-1\right) \right] dx = 0. \end{aligned}$$

On the other hand, by the characterization of ground state solutions of (16) given in [41], we have

$$\begin{aligned} \int _{\mathbb {R}^2}\left[ w_n^2 - \Lambda _n \left( e^{w_n^2}-1\right) \right] dx = 0. \end{aligned}$$
(18)

Dividing the two equalities yields \(\Lambda _\mu = \mu \Lambda _n\). Then, using again that \(w_n\) is a solution of both (16) and (17), we have

$$\begin{aligned} \left( 1-\frac{1}{\mu }\right) \Delta w_n=0, \end{aligned}$$

which implies that \(w_n \equiv 0\). This is a contradiction, and hence, \(w_n\) is not a maximizer of \(C_{G,\mu ,\alpha _{\mu ,n}}\) for \(\mu \not =1\).

In the following, we assume that \(\mu =1\). For simplicity, we set \(C_{\alpha }:= C_{G,1,\alpha }\) and \(\alpha _n:= \alpha _{1,n}\). We will prove that a ground state solution \(w_n\) does not attain \(C_{\alpha _n}\) for sufficiently large n. Going back to (18) and using \(\Lambda _n \rightarrow 0\), we derive that

$$\begin{aligned} \lim _{n \rightarrow \infty } \frac{\int _{\mathbb {R}^2}\left( e^{w_n^2} - 1\right) dx}{\int _{\mathbb {R}^2}w_n^2 dx} = \infty , \end{aligned}$$

which implies that

$$\begin{aligned} \lim _{n \rightarrow \infty } \int _{\mathbb {R}^2}|\nabla w_n|^2 dx \ge 4\pi \end{aligned}$$

by the results in [3]. The lower bound and the assumption on \(\alpha _n\) yield that \(\lim _{n \rightarrow \infty } \alpha _n = 4\pi \), \(\lim _{n \rightarrow \infty } \int _{\mathbb {R}^2}|\nabla w_n|^2 dx = 4\pi \) and \(\lim _{n \rightarrow \infty }\int _{\mathbb {R}^2}w_n^2dx=0\). Hence, \(\{w_n\}\) concentrates at the origin, that is, \(\lim _{n \rightarrow \infty } w_n(0) = \infty \) and \(\lim _{n \rightarrow \infty } w_n(x) = 0\) for all \(x \in \mathbb {R}^2 {\setminus } \{0\}\). Using the same arguments as in [27] to prove the existence of maximizers of \(C_{4\pi }\), we have, after passing to a subsequence,

$$\begin{aligned} \lim _{n \rightarrow \infty }\int _{\mathbb {R}^2}\left( e^{w_n^2}-1\right) dx \le \pi e^{4\pi A} < C_{4\pi } \end{aligned}$$

with an explicit constant A. Hence, by the continuity of the best constant \(C_\alpha \) with respect to \(\alpha \), we derive that \(\int _{\mathbb {R}^2}\left( e^{w_n^2}-1\right) dx < C_{\alpha _n}\) for large n.

Consequently, for sufficiently large n, it holds that \(\int _{\mathbb {R}^2}\left( e^{w_n^2} - 1\right) dx < C_{\alpha _n}\) unless \(\alpha _n > 4 \pi \). The proof of Theorem 2 is complete. \(\square \)

4 Proof of Theorem 4

Proof of Theorem 4

Assume that G satisfies (G1)-(G4) and \(w_0 \in H^1(\mathbb {R}^2)\) is a ground state solution of (1) for \(\Lambda > 0\). We first estimate \(\Lambda \). Since G is convex and G satisfies (G2), by Proposition 5, we derive that

$$\begin{aligned} 0 = \int _{\mathbb {R}^2}\left( \Lambda G(w_0^2)-w_0^2\right) dx >\int _{\mathbb {R}^2}\left( \Lambda m w_0^2-w_0^2\right) dx, \end{aligned}$$

and thus, we have \(\Lambda ^{-1}>m\).

Let \(\alpha _0 = \int _{\mathbb {R}^2}|\nabla w_0|^2dx\). Then, we observe that

$$\begin{aligned} \frac{1}{\Lambda } = \frac{\int _{\mathbb {R}^2}G(w_0^2)dx}{\int _{\mathbb {R}^2}w_0^2dx} \le D_{G,\alpha _0}. \end{aligned}$$

To prove \(\Lambda ^{-1}=D_{G,\alpha _0}\), we assume, on the contrary, that

$$\begin{aligned} \frac{1}{\Lambda } < D_{G,\alpha _0}, \end{aligned}$$

and derive a contradiction. Since \(D_{G,\alpha }\) is continuous with respect to \(\alpha \), \(\lim _{\alpha \rightarrow 0}D_{G,\alpha }=m\) and \(m<\Lambda ^{-1}\), there exists \(\beta \in (0,\alpha _0)\) such that

$$\begin{aligned} \frac{1}{\Lambda } = D_{G,\beta }. \end{aligned}$$

By (G4), there exists \(v_\beta \in H^1(\mathbb {R}^2)\) such that \(\int _{\mathbb {R}^2}|\nabla v_\beta |^2dx=\beta \) and

$$\begin{aligned} \frac{\int _{\mathbb {R}^2}G(v_\beta ^2)dx}{\int _{\mathbb {R}^2}v_\beta ^2dx} = D_{G,\beta }. \end{aligned}$$

Thus, by Theorem 3, \(v_\beta \) is another ground state solution of (1), up to scaling. Recalling the characterization of the ground state level given by (13), we have

$$\begin{aligned} M_{\Lambda } = \frac{1}{2}\int _{\mathbb {R}^2}|\nabla v_\beta |^2dx = \frac{\beta }{2}. \end{aligned}$$

However, since \(w_0\) is also a ground state solution of (1), we have

$$\begin{aligned} M_{\Lambda } = \frac{1}{2}\int _{\mathbb {R}^2}|\nabla w_0|^2dx = \frac{\alpha _0}{2}, \end{aligned}$$

which contradicts that \(\beta <\alpha _0\). Consequently, it holds that \(\Lambda ^{-1} = D_{G,\alpha _0}\) and \(w_0\) is a maximizer of \(D_{G,\alpha _0}\). \(\square \)

5 Higher dimensional case

In this section, we deal with the case \(N \ge 3\) and the space \(W^{1,N}(\mathbb {R}^N)\). We consider \(G: [0,\infty ) \rightarrow \mathbb {R}\) satisfying

  1. (G1)

    \(G(0)=0\), \(G \in C^1\left( (0,\infty ); \mathbb {R}\right) \) and G is convex,

  2. (G2)

    there exists a nonnegative constant m such that \(\lim _{s \rightarrow +0} G(s)/s = m\) and \(G(s) \not \equiv ms\),

  3. (G3)

    \(G(s) \le C e^{Cs^{\frac{1}{N-1}}}\) holds for all \(s>0\) with some positive constant C.

Set

$$\begin{aligned} \mathscr {C}_{G,\mu ,\alpha }:= \sup \left\{ \int _{\mathbb {R}^N} G(|u|^N) dx \ \Bigg | \ u \in W^{1,N}(\mathbb {R}^N),\ \int _{\mathbb {R}^N} \left( |\nabla u|^N + \mu |u|^N\right) dx=\alpha \right\} \end{aligned}$$

and

$$\begin{aligned} \mathscr {D}_{G, \alpha }:= \sup \left\{ \frac{\int _{\mathbb {R}^N} G(|u|^N) dx}{\int _{\mathbb {R}^N} |u|^N dx} \ \Bigg | \ u \in W^{1,N}(\mathbb {R}^N),\ \int _{\mathbb {R}^N} |\nabla u|^N dx = \alpha \right\} , \end{aligned}$$

where \(\mu \) and \(\alpha \) are positive constants. Then, consider the condition

  1. (G4)

    \(\mathscr {D}_{G, \alpha }\) is attained whenever \(\mathscr {D}_{G,\alpha }<\infty \).

It is worth noting that the results in [18] are extended to the case \(N \ge 3\) by Masmoudi and Sani [28]. By the boundedness result in the higher-dimensional case, if G satisfies (G1)-(G3) and

$$\begin{aligned} \lim _{s \rightarrow \infty } \frac{s^{\frac{1}{N-1}}G(s)}{e^{Ks^{\frac{1}{N-1}}}}=0 \end{aligned}$$
(19)

for some positive constant K provided that \(\lim _{s \rightarrow \infty } G(s)/e^{K_{-}s^\frac{1}{N-1}} = \infty \) and \(\lim _{s \rightarrow \infty } G(s)/e^{K_{+}s^\frac{1}{N-1}} = 0\) for any \(K_{-}< K < K_{+}\), then \(\mathscr {D}_{G,\alpha } < \infty \) if and only if \(\alpha \le (\alpha _N^*/K)^{N-1}\), where \(\alpha _N^* = N\omega _{N-1}^{1/(N-1)}\) and \(\omega _{N-1}\) is the surface area of the unit sphere in \(\mathbb {R}^N\). Moreover, \(\mathscr {D}_{G,\alpha }\) is attained for any \(\alpha \le (\alpha _N^*/K)^{N-1}\) by the compactness result in [28] and the arguments of [7] (see Remark 2.8 in [7]). If G satisfies (G1)-(G3),

$$\begin{aligned} \lim _{s \rightarrow \infty } \frac{s^{\frac{1}{N-1}}G(s)}{e^{Ks^{\frac{1}{N-1}}}}=\infty \quad \textrm{and} \quad \lim _{s \rightarrow \infty } \frac{G(s)}{e^{Ks^{\frac{1}{N-1}}}}<\infty , \end{aligned}$$
(20)

then \(\mathscr {D}_{G,\alpha } < \infty \) if and only if \(\alpha < (\alpha _N^*/K)^{N-1}\) by [3] and [28]. In the situation \(\mathscr {D}_{G,\alpha } < \infty \), there exists a maximizer of \(\mathscr {D}_{G,\alpha }\) for the same reason as in the case (19). In the case

$$\begin{aligned} 0<\lim _{s \rightarrow \infty } \frac{s^{\frac{1}{N-1}}G(s)}{e^{Ks^{\frac{1}{N-1}}}}<\infty , \end{aligned}$$

in contrast to the case \(N=2\), a condition for the existence of a maximizer of \(\mathscr {D}_{G,(\alpha _N^*/K)^{N-1}}\) is still open. Thus, if the growth of G is subcritical or satisfies (19) or (20), then G satisfies (G4) in the higher-dimensional case.
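
For orientation, when \(N=2\) these thresholds reduce to the two-dimensional ones of Sect. 1:

$$\begin{aligned} \alpha _2^* = 2\, \omega _1 = 2 \cdot 2\pi = 4\pi , \qquad \left( \frac{\alpha _2^*}{K}\right) ^{N-1} = \frac{4\pi }{K} \quad \textrm{for}\quad N=2, \end{aligned}$$

in agreement with the conditions stated after (3) and (4).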

Quasilinear elliptic equations related to variational problems \(\mathscr {C}_{G,\mu ,\alpha }\) and \(\mathscr {D}_{G, \alpha }\) are of the form

$$\begin{aligned} - \Delta _N u + u^{N-1} = \Lambda u^{N-1}g(u^N), \quad u>0 \quad \textrm{in} \quad \mathbb {R}^N \end{aligned}$$
(21)

with a positive constant \(\Lambda \), where \(\Delta _N\) is the usual N-Laplace operator defined by \(\Delta _N u:= \textrm{div} \left( |\nabla u|^{N-2}\nabla u\right) \). If \(u \in W^{1,N}(\mathbb {R}^N)\) is a solution of (21), then \(u \in C^{1,\rho }_{loc}(\mathbb {R}^N)\) by the conditions (G1)-(G3) and the regularity result obtained by E. DiBenedetto [12]. Thus, by the same argument as in the proof of Claim 5.3 in [14], we obtain that any solution \(u \in W^{1,N}(\mathbb {R}^N)\) of (21) satisfies

$$\begin{aligned} \int _{\mathbb {R}^N} \left( \Lambda G(u^N) - u^N\right) dx=0. \end{aligned}$$

Consequently, we extend Theorems 1-4 to the following results.

Theorem 8

Assume that \(u_0 \in W^{1,N}(\mathbb {R}^N)\) is a maximizer of \(\mathscr {C}_{G, \mu , \alpha }\). Then, there exists a positive constant \(\Lambda _0\) such that \(u_0\) is a ground state solution of (21) with \(\Lambda = \Lambda _0\), up to scaling.

Theorem 9

Assume that

$$\begin{aligned} G(s) = e^{s^{\frac{1}{N-1}}} - \sum _{j=0}^{N-2} \frac{s^{\frac{j}{N-1}}}{j!} \end{aligned}$$

and \(w_\Lambda \) is a ground state solution of (21) for \(\Lambda >0\). Let \(\alpha _{\mu } = \int _{\mathbb {R}^N} \left( |\nabla w_\Lambda |^N + \mu w_\Lambda ^N \right) dx\) for \(\mu >0\). Then, there exists \(\Lambda _* \in (0, (N-1)!)\) such that for any \(\Lambda \in (0,\Lambda _*)\) and \(\mu >0\), either \(\alpha _{\mu } > (\alpha _N^*)^{N-1}\), or \(\alpha _{\mu } \le (\alpha _N^*)^{N-1}\) and \(\int _{\mathbb {R}^N}G(w_\Lambda ^N)dx < \mathscr {C}_{G,\mu ,\alpha _{\mu }}\).

Theorem 10

Assume that G satisfies (G1)-(G3) and \(v_0 \in W^{1,N}(\mathbb {R}^N)\) is a maximizer of \(\mathscr {D}_{G,\alpha }\). Then, \(v_0\) is a ground state solution of (21) for \(\Lambda = \mathscr {D}_{G,\alpha }^{-1}\), up to scaling.

Theorem 11

Assume that G satisfies (G1)-(G4) and \(w_0 \in W^{1,N}(\mathbb {R}^N)\) is a ground state solution of (21) for \(\Lambda >0\). Let \(\alpha _0 = \int _{\mathbb {R}^N}|\nabla w_0|^Ndx\). Then,

$$\begin{aligned} \Lambda = \mathscr {D}_{G, \alpha _0}^{-1} \end{aligned}$$

and \(w_0\) is a maximizer of \(\mathscr {D}_{G,\alpha _0}\).