Abstract
In this paper, for \(d \ge 1\) and \(s \in (0,\frac{d}{2})\), we study the Bianchi–Egnell quotient
where \(S_{d,s}\) is the best Sobolev constant and \({\mathcal {B}}\) is the manifold of Sobolev optimizers. By a fine asymptotic analysis, we prove that when \(d = 1\), there is a neighborhood of \({\mathcal {B}}\) on which the quotient \({\mathcal {Q}}(f)\) is larger than the lowest value attainable by sequences converging to \({\mathcal {B}}\). This behavior is surprising because it is contrary to the situation in dimension \(d \ge 2\) described recently in König (Bull Lond Math Soc 55(4):2070–2075, 2023). This leads us to conjecture that for \(d = 1\), \({\mathcal {Q}}(f)\) has no minimizer on \(\dot{H}^s({\mathbb {R}}^d) \setminus {\mathcal {B}}\), which again would be contrary to the situation in \(d \ge 2\). As a complement to the above, we study a family of test functions which interpolates between one and two Talenti bubbles, for every \(d \ge 1\). For \(d \ge 2\), this family yields an alternative proof of the main result of König (Bull Lond Math Soc 55(4):2070–2075, 2023). For \(d = 1\) we make some numerical observations which support the conjecture stated above.
1 Introduction and main result
For any space dimension \(d \ge 1\), the Sobolev inequality on \(\mathbb {R}^d\) for \(s \in (0,\frac{d}{2})\) reads as
where
The unique optimizers of (1.1) are the bubble functions [2, 16, 17]
It turns out that the quantitative stability of (1.1) can be formulated in terms of the quotient
between the ‘Sobolev deficit’ and the \(\dot{H}^s(\mathbb R^d)\)-distance to the manifold of optimizers. A famous inequality due, for \(s = 1\), to Bianchi and Egnell [3] (see [5] for general \(s \in (0, \frac{d}{2})\)) says that there is \(c_{BE}(s) > 0\) such that
This inequality is optimal in the sense that in the denominator \({{\,\textrm{dist}\,}}_{\dot{H}^s(\mathbb {R}^d)}(f, {\mathcal {B}})^2\) of \({\mathcal {Q}}(f)\), the distance cannot be measured by a stronger norm than \(\dot{H}^s(\mathbb {R}^d)\) and the exponent 2 cannot be replaced by a smaller one if inequality (1.3) is to be satisfied with a strictly positive constant on the right hand side.
The proof strategy of Bianchi and Egnell (and of [5] for general \(s \in (0, d/2)\)) consists in proving first that
for every sequence \(f_n\) which converges (in \(\dot{H}^s(\mathbb {R}^d)\)) to \({\mathcal {B}}\), for an explicit constant \(c_{BE}^{\text {loc}}(s) = \frac{4\,s}{d + 2\,s + 2}\). Then (1.3) can be deduced from this by a compactness argument. For \(s = 1\), using completely new ideas, the authors of the paper [9] have recently given a quantitative version of this argument, which yields an explicit lower bound on \(c_{BE}(1)\).
By definition, we have \(c_{BE}^\text {loc}(s) \ge c_{BE}(s)\). Only recently, it has been observed in [14] that when \(d \ge 2\), there is \(\rho \in \dot{H}^s(\mathbb {R}^d)\) such that for every \(\varepsilon > 0\) small enough, the function \(f_\varepsilon (x) = (1 + |x|^2)^{-\frac{d-2\,s}{2}} + \varepsilon \rho (x)\) satisfies
In particular, this shows that in fact \(c_{BE}(s) < c_{BE}^{\text {loc}}(s)\) strictly, for every \(d \ge 2\) and \(s \in (0, \frac{d}{2})\).
Therefore, the following theorem about the situation in dimension \(d = 1\), a case on which [14] makes no statement, comes as a surprise.
Theorem 1
Let \(d = 1\) and \(s \in (0,\frac{1}{2})\). Then there is a neighborhood \(U \subset \dot{H}^s(\mathbb {R})\) of \({\mathcal {B}}\) such that for every \(f \in U \setminus {\mathcal {B}}\) one has
In view of Theorem 1 and the preceding discussion it is tempting to formulate the following conjecture:
Conjecture 2
Let \(d = 1\) and \(s \in (0, \frac{1}{2})\). Then \(c_{BE}(s) = c_{BE}^{\text {loc}}(s)\) and any minimizing sequence for \(c_{BE}(s)\) converges to \({\mathcal {B}}\). In particular, (1.3) does not admit a minimizer.
We provide some more evidence for Conjecture 2 in Sect. 4.
Let us stress here only that the last part of the conjecture, concerning the non-existence of a minimizer, would also be in contrast to the situation in higher dimensions. Indeed, for \(d \ge 2\), the recent result from [15] shows that for any \(s \in (0, \frac{d}{2})\), every minimizing sequence for \(c_{BE}(s)\) converges (up to conformal symmetries and taking subsequences) to some minimizer \(f \in \dot{H}^s(\mathbb {R}^d) \setminus {\mathcal {B}}\). A key ingredient in the proof in [15] is the strict inequality \(c_{BE}(s) < c_{BE}^{\text {loc}}(s)\), which fails in \(d= 1\) if the first part of Conjecture 2 is true.
The special role of dimension \(d = 1\) for the Bianchi–Egnell inequality apparent from comparing Theorem 1 with the results of [14] is somewhat reminiscent of the situation in [12, Theorem 2]. There, similarly to [14], a certain behavior of a family of so-called reverse Sobolev inequalities of order \(s > d/2\) can be verified for \(d \ge 2\) through an appropriate choice of test functions, but the same test functions do not yield the result if \(d = 1\). To our knowledge, complementing [12, Theorem 2] for \(d = 1\) is still an open question. We are currently not in a position to convincingly explain the origin of the particular behavior of \(d=1\) in either of the settings studied in this paper and in [12]. It would therefore be very interesting to shed some further light on the role of \(d = 1\) in conformally invariant minimization problems of fractional order.
The stability of Sobolev’s and related functional inequalities and the fine properties of the minimization problem (1.3) and its analogues are currently a very active topic of research with many recent contributions. Without attempting to be exhaustive, we mention in particular that the methods and results from [14] and [15] have been recently extended to the Log-Sobolev inequality [6] and to the Caffarelli–Kohn–Nirenberg inequality in the preprints [8, 18]; see also [13]. Besides, an excellent introduction to the topic is provided by the recent lecture notes [11].
2 The Bianchi–Egnell inequality on \({\mathbb {S}}^d\)
The inequalities (1.1) and (1.3) have a conformally equivalent formulation on the d-dimensional sphere \({\mathbb {S}}^d\) (viewed as a subset of \(\mathbb {R}^{d+1}\)), which is in some sense even more natural than that on \(\mathbb {R}^d\) and, at any rate, more convenient for the arguments used in this paper.
The conformal map between \(\mathbb {R}^d\) and \({\mathbb {S}}^d\) which induces this equivalence is the (inverse) stereographic projection \({\mathcal {S}}: \mathbb {R}^d \rightarrow {\mathbb {S}}^d\) given by
We denote by \(J_{{\mathcal {S}}}(x)= |\det D {\mathcal {S}}(x)| = \left( \frac{2}{1 + |x|^2}\right) ^d\) its Jacobian determinant. If \(f \in \dot{H}^s(\mathbb {R}^d)\) and \(u \in H^s({\mathbb {S}}^d)\) are related by
then the Sobolev inequality (1.1) translates to
where \((\cdot , \cdot )\) is the \(L^2({\mathbb {S}}^d)\) scalar product. The operator \(P_s\) appearing here is given on spherical harmonics \(Y_\ell \) of degree \(\ell \ge 0\) by
with
The manifold of optimizers of (2.3) (i.e. the image of the bubble functions \({\mathcal {B}}\) under the transformation (2.2)) is given by
We refer to [11] for a justification of these facts.
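The display with the eigenvalues \(\alpha (\ell )\) is not reproduced above; Sect. 3 quotes \(\alpha (\ell ) = \frac{\Gamma (\ell + \frac{1}{2} + s)}{\Gamma (\ell + \frac{1}{2} - s)}\) for \(d = 1\), and the sketch below assumes the analogous general-\(d\) form \(\alpha (\ell ) = \frac{\Gamma (\ell + \frac{d}{2} + s)}{\Gamma (\ell + \frac{d}{2} - s)}\). Under this assumption, the identity \(c_{BE}^{\text {loc}}(s) = 1 - \frac{\alpha (1)}{\alpha (2)} = \frac{4s}{d + 2s + 2}\), used repeatedly below, can be checked numerically:

```python
from math import gamma, isclose

def alpha(ell, d, s):
    # Assumed eigenvalues of P_s on degree-ell spherical harmonics;
    # the paper states this formula explicitly for d = 1 in Sect. 3.
    return gamma(ell + d / 2 + s) / gamma(ell + d / 2 - s)

def c_loc(d, s):
    # Local Bianchi-Egnell constant c_BE^loc(s) = 4s/(d + 2s + 2).
    return 4 * s / (d + 2 * s + 2)

for d in (1, 2, 3, 5):
    for s in (0.1 * d, 0.25 * d, 0.49 * d):   # s ranges in (0, d/2)
        assert isclose(1 - alpha(1, d, s) / alpha(2, d, s), c_loc(d, s))
```

The identity follows from \(\frac{\alpha (2)}{\alpha (1)} = \frac{\frac{d}{2} + 1 + s}{\frac{d}{2} + 1 - s}\), which telescopes out of the Gamma quotients.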
Finally, the Bianchi–Egnell quotient on \({\mathbb {S}}^d\) reads
and the stability inequality corresponding to (1.3) is
(Notice carefully that the constants \(S_{d,s}\) and \(c_{BE}(s)\) do not change as one passes from \(\mathbb {R}^d\) to \({\mathbb {S}}^d\).)
Notation and conventions. We always consider \(H^s({\mathbb {S}}^d)\) to be equipped with the norm \(\Vert u\Vert := (u, P_s u)^{1/2}\), which is equivalent to the standard norm on \(H^s({\mathbb {S}}^d)\). Here, \((\cdot , \cdot )\) is the \(L^2({\mathbb {S}}^d)\) scalar product.
We always assume that the numbers p, d, and s satisfy the relation \(s \in (0, \frac{d}{2})\) and \(p = \frac{2d}{d-2s}\).
For any \(q \in [1, \infty ]\), we denote for short \(\Vert \cdot \Vert _q:= \Vert \cdot \Vert _{L^q({\mathbb {S}}^d)}\). Also, for brevity we will often write the integral of a real-valued function u defined on \({\mathbb {S}}^d\) as \(\int _{{\mathbb {S}}^d} u\) instead of \(\int _{{\mathbb {S}}^d} u(\omega ) \, d \omega \). The implied measure \(d \omega \) is always taken to be non-normalized standard surface measure, so that \(\int _{{\mathbb {S}}^d} 1 \, d \omega = |{\mathbb {S}}^d|\).
For \(\ell \ge 0\), we denote by \(E_\ell \) (resp. \(E_{\ge \ell }\) or \(E_{\le \ell }\)) the space of spherical harmonics of \({\mathbb {S}}^d\) of degree equal to (resp. at least or at most) \(\ell \).
3 Proof of Theorem 1
In this section, our goal is to prove the following.
Theorem 3
Let \(d = 1\) and \(s \in (0,\frac{1}{2})\). Then there is a neighborhood \(V \subset H^s({\mathbb {S}}^1)\) of \({\mathcal {M}}\) such that for every \(u \in V \setminus {\mathcal {M}}\) one has
The lower bound \(c_{BE}^{\text {loc}}(s)\) in Theorem 3 is sharp. Indeed, for \(u_\mu (\theta ):= 1 + \mu \sin (2 \theta )\) the computations in the proof below (or their less involved variant in [14, proof of Theorem 1]) show that \(\text {dist}(u_\mu , {\mathcal {M}}) \rightarrow 0\) and \({\mathcal {E}}(u_\mu ) \rightarrow c_{BE}^{\text {loc}}(s)\) as \(\mu \rightarrow 0\).
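This limiting behavior can be checked numerically. The sketch below (ours, not the computation from the proof) evaluates \({\mathcal {E}}(u_\mu )\) for \(s = \frac{1}{6}\) (so \(p = 3\)), using the \(d = 1\) eigenvalues \(\alpha (\ell )\) quoted later in this section, the relation \(S_{1,s} = \alpha (0) |{\mathbb {S}}^1|^{1 - \frac{2}{p}}\) stated in Sect. 4, and the small-\(\mu \) identity \({{\,\textrm{dist}\,}}(u_\mu , {\mathcal {M}})^2 = (\mu \sin 2\theta , P_s \, \mu \sin 2\theta ) = \alpha (2) \pi \mu ^2\) from [11, Lemma 12]:

```python
from math import gamma, pi, sin

def alpha(ell, s):
    # d = 1 eigenvalues of P_s: alpha(l) = Gamma(l+1/2+s)/Gamma(l+1/2-s).
    return gamma(ell + 0.5 + s) / gamma(ell + 0.5 - s)

def quotient(mu, s=1 / 6, n=200_000):
    """E(u_mu) for u_mu(theta) = 1 + mu*sin(2*theta) on S^1."""
    p = 2 / (1 - 2 * s)                        # = 3 for s = 1/6
    # (u, P_s u): only the Fourier modes 0 and 2 contribute.
    energy = alpha(0, s) * 2 * pi + alpha(2, s) * pi * mu ** 2
    # ||u||_p^p by the midpoint rule (u is smooth and periodic).
    h = 2 * pi / n
    lp = sum(abs(1 + mu * sin(2 * (k + 0.5) * h)) ** p for k in range(n)) * h
    S = alpha(0, s) * (2 * pi) ** (1 - 2 / p)  # S_{1,s} = alpha(0)|S^1|^{1-2/p}
    # For small mu, dist(u_mu, M)^2 = (rho, P_s rho), rho = mu*sin(2*theta).
    dist2 = alpha(2, s) * pi * mu ** 2
    return (energy - S * lp ** (2 / p)) / dist2

# c_BE^loc(1/6) = 4s/(3 + 2s) = 1/5:
assert abs(quotient(1e-2) - 0.2) < 1e-3
```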
By the equivalence between \(\mathbb {R}^d\) and \({\mathbb {S}}^d\) explained in Sect. 2, Theorem 3 is equivalent to Theorem 1 via stereographic projection. For our arguments, it is however much more convenient to work in the setting of the sphere for at least two reasons. Firstly, Taylor expansions near \({\mathcal {M}}\) are simpler because we can choose the constant function \(1 \in {\mathcal {M}}\) as a basepoint. Secondly, we have the explicit eigenvalues \(\alpha (\ell )\) of the operator \(P_s\) and their associated eigenfunctions \(\sin (\ell \theta )\) and \(\cos (\ell \theta )\) at our disposal.
We now turn to the proof of Theorem 3. Our proof is inspired by [10], where in a similar but different situation a certain term in a Taylor expansion vanishes, but the sign of a certain next-order correction term can be recovered.
Let \((u_n)\) be an arbitrary sequence such that \(\text {dist}_{H^s({\mathbb {S}}^1)}(u_n, {\mathcal {M}}) \rightarrow 0\). Theorem 3 follows if we can prove that \({\mathcal {E}}(u_n) \ge c_{BE}^{\text {loc}}(s)\) for every n large enough. We may thus assume
for otherwise the inequality \({\mathcal {E}}(u_n) \ge c_{BE}^{\text {loc}}(s)\) is automatic for large enough n.
By [7, Lemma 3.4] (more precisely, its conformally equivalent statement on \({\mathbb {S}}^1\) instead of \(\mathbb {R}\)), \(\text {dist}_{H^s({\mathbb {S}}^1)}(u_n, {\mathcal {M}})\) is achieved by some \(h_n \in {\mathcal {M}}\).
Up to multiplying \(u_n\) by a constant \(c_n\) and applying a conformal transformation \(T_n\), neither of which changes \({\mathcal {E}}(u_n)\), we may assume that \(h_n = 1\).
The fact that \(1 \in {\mathcal {M}}\) minimizes \(\text {dist}_{H^s({\mathbb {S}}^1)}(u_n, {\mathcal {M}})\) implies that \(u_n - 1\) is orthogonal (in \(H^s({\mathbb {S}}^1)\)) to the tangent space \(T_1 {\mathcal {M}}\). But this tangent space is precisely spanned by the functions 1, \(\omega _1\) and \(\omega _2\). Thus we have \(\rho _n:= u_n - 1 \in E_{\ge 2}\). We can thus summarize our chosen normalization as
The next lemma refines the decomposition (3.2) and gives additional information.
Lemma 4
Let \((u_n)\) satisfy (3.1) and (3.2). Then there are sequences \(\theta _n \in [0, 2 \pi )\), \(\mu _n > 0\) and \(\eta _n \in E_{\ge 3}\) such that
with \(r_n(\theta ) = \sin (2 (\theta - \theta _n)) \in E_2\), \(\mu _n \rightarrow 0\) and \(\Vert \eta _n\Vert \rightarrow 0\).
Proof
By (3.2), we can write \(u_n = 1+ \rho _n = 1 + \tilde{r}_n + {\tilde{\eta }}_n\) with \(\tilde{r}_n \in E_2\) and \({\tilde{\eta }}_n \in E_{\ge 3}\). Consequently,
and
Using the second-order Taylor expansion \((a + h)^\frac{2}{p} = a^\frac{2}{p} + \frac{2}{p} a^{\frac{2}{p}-1} h - \frac{p-2}{p^2} a^{\frac{2}{p} - 2} h^2 + o(h^2)\), we obtain
(To produce this form of the error term, we use Hölder’s and Sobolev’s inequalities together with the facts that \(\Vert \rho _n\Vert \rightarrow 0\) and \(2 < \min \{3,p\}\).)
By [11, Lemma 12], \({{\,\textrm{dist}\,}}_{H^s({\mathbb {S}}^1)}(u_n, {\mathcal {M}})^2 = (\rho _n, P_s \rho _n)\) for n large enough. Using that \(S_{1,s} (p-1) (2\pi )^{\frac{2}{p}-1} = \alpha (1)\), together with assumption (3.1), we obtain
Now divide both the numerator and the denominator by \((\tilde{r}_n, P_s \tilde{r}_n)\) and abbreviate \(\tau _n:= \frac{({\tilde{\eta }}_n, P_s {\tilde{\eta }}_n)}{(\tilde{r}_n, P_s \tilde{r}_n)}\). Since \(\tilde{r}_n \in E_2\), we have
Moreover, since \({\tilde{\eta }}_n \in E_{\ge 3}\), we have \(\int _{{\mathbb {S}}^1} {\tilde{\eta }}_n^2 \le \frac{1}{\alpha (3)} ({\tilde{\eta }}_n, P_s {\tilde{\eta }}_n)\). Altogether, the above yields
Since \(1 - \frac{\alpha (1)}{\alpha (3)} > 1 - \frac{\alpha (1)}{\alpha (2)} = c_{BE}^{\text {loc}}(s)\), the function \(\tau \mapsto \frac{c_{BE}^{\text {loc}}(s) + (1 - \frac{\alpha (1)}{\alpha (3)}) \tau }{1 + \tau }\) is strictly increasing in \(\tau \in (0, \infty )\) and takes the value \(c_{BE}^{\text {loc}}(s)\) in \(\tau = 0\). Together with (3.3), this forces \(\tau _n = o(1)\).
Now setting \(r_n = \mu _n^{-1} \tilde{r}_n\) and \(\eta _n = \mu _n^{-1} {\tilde{\eta }}_n\) with \(\mu _n = \left( \frac{\int _{{\mathbb {S}}^1} \tilde{r}_n^2}{\pi }\right) ^\frac{1}{2}\) gives the conclusion, noting that \(r_n \in E_2\) can be written as \(r_n = \sin (2 (\theta - \theta _n))\) for some appropriate \(\theta _n \in [0, 2\pi )\). \(\square \)
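The comparison \(1 - \frac{\alpha (1)}{\alpha (3)} > c_{BE}^{\text {loc}}(s)\) used in the proof rests on the strict monotonicity of \(\ell \mapsto \alpha (\ell )\), which is immediate from the \(d = 1\) eigenvalue formula since \(\frac{\alpha (\ell + 1)}{\alpha (\ell )} = \frac{\ell + \frac{1}{2} + s}{\ell + \frac{1}{2} - s} > 1\). A quick numerical confirmation:

```python
from math import gamma

def alpha(ell, s):
    # d = 1 eigenvalues of P_s: alpha(l) = Gamma(l+1/2+s)/Gamma(l+1/2-s).
    return gamma(ell + 0.5 + s) / gamma(ell + 0.5 - s)

for s in (0.05, 1 / 6, 0.25, 0.45):            # s in (0, 1/2)
    # alpha is strictly increasing in ell:
    assert all(alpha(l + 1, s) > alpha(l, s) for l in range(6))
    # hence 1 - alpha(1)/alpha(3) > 1 - alpha(1)/alpha(2) = c_BE^loc(s):
    assert 1 - alpha(1, s) / alpha(3, s) > 1 - alpha(1, s) / alpha(2, s)
```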
Let us now make the additional assumption that
which is the case if \(s \in [\frac{1}{4}, \frac{1}{2})\).
Assuming (3.4), we can prove our main result in a relatively straightforward way. We will explain afterwards how the proof needs to be modified in order to cover the general case.
Proof of Theorem 3; easy case \(p \ge 4\)
Under the additional assumption (3.4) we are now ready to perform the final expansion needed for the proof of Theorem 3. Thanks to \(p \ge 4\), the function \(t \mapsto |t|^{p}\) is four times continuously differentiable on \(\mathbb {R}\), and thus we may expand, for every value of \(\rho _n\),
Let us write \(\rho _n = \mu _n (r_n + \eta _n)\) as in Lemma 4. Since \(\int _{{\mathbb {S}}^1} \rho _n = 0\), this implies
Here, the error term comes from estimating, using Lemma 4,
because \(p > 4\) (if \(p=4\), the expansion (3.5) is exact), and
and likewise for \(\mu _n^4 \int _{{\mathbb {S}}^1} r_n^2 \eta _n^2\), \(\mu _n^4 \int _{{\mathbb {S}}^1} r_n \eta _n^3\) and \(\mu _n^4 \int _{{\mathbb {S}}^1} \eta _n^4\). Similarly, if additionally \(p > 5\),
because \(\int _{{\mathbb {S}}^1} r_n^5 \lesssim \Vert r_n\Vert ^5 = {\mathcal {O}}(1)\) and \(\int _{{\mathbb {S}}^1} \eta _n^5 \lesssim \Vert \eta _n\Vert ^5 = o(1)\) by Hölder’s and Sobolev’s inequalities. Thus (3.5) is proved.
Next, we note that from \(r_n(\theta ) = \sin (2 (\theta - \theta _n))\) we necessarily obtain \(\int _{{\mathbb {S}}^1} r_n^3 = 0\). We emphasize that this property is what makes dimension \(d = 1\) special because for \(d \ge 2\) there are spherical harmonics \(\rho \in E_2\) such that \(\int _{{\mathbb {S}}^d} \rho ^3 \ne 0\), see [14] or Proposition 5 below.
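This vanishing, and the contrast with \(d \ge 2\), can be illustrated numerically. The degree-2 harmonic \(\omega _3^2 - \frac{1}{3}\) on \({\mathbb {S}}^2\) used below is our own illustrative choice, not necessarily the harmonic from [14] or Proposition 5:

```python
from math import pi, sin

def circle_integral(f, n=100_000):
    # Midpoint rule over one period; spectrally accurate for smooth f.
    h = 2 * pi / n
    return sum(f((k + 0.5) * h) for k in range(n)) * h

def segment_integral(f, n=100_000):
    # Midpoint rule on [-1, 1].
    h = 2 / n
    return sum(f(-1 + (k + 0.5) * h) for k in range(n)) * h

# d = 1: the third moment of r(theta) = sin(2*theta) in E_2 vanishes.
assert abs(circle_integral(lambda t: sin(2 * t) ** 3)) < 1e-10

# d = 2: the degree-2 harmonic rho(omega) = omega_3^2 - 1/3 satisfies
# int_{S^2} rho^3 = 2*pi * int_{-1}^1 (t^2 - 1/3)^3 dt = 64*pi/945 > 0.
moment = 2 * pi * segment_integral(lambda t: (t ** 2 - 1 / 3) ** 3)
assert moment > 0 and abs(moment - 64 * pi / 945) < 1e-8
```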
Again by the second-order Taylor expansion \((a + h)^\frac{2}{p} = a^\frac{2}{p} + \frac{2}{p} a^{\frac{2}{p}-1} h - \frac{p-2}{p^2} a^{\frac{2}{p} - 2} h^2 + o(h^2)\), we obtain from (3.5) that
The other terms appearing in \({\mathcal {E}}(u_n)\) can be expanded as
and (for n large enough)
We can now put all of these expansions together to expand \(\mathcal E(u_n)\). Noticing that \(S_{1,s} (p-1) (2 \pi )^{\frac{2}{p} - 1} = \alpha (1)\), we arrive at
Since \(r_n \in E_2\), we have \((r_n, P_s r_n) = \alpha (2) \int _{{\mathbb {S}}^1} r_n^2\) and consequently
Therefore we can write the above expansion as
It remains to find a lower bound which shows that the right side is strictly positive for n large enough.
We expand \(\eta _n\) in spherical harmonics
Up to applying an additional rotation, we may assume for simplicity that \(\theta _n = 0\) in the decomposition of Lemma 4, i.e., \(r_n(\theta ) = \sin 2 \theta \). Since \(\sin ^2 2 \theta = \frac{1}{2} - \frac{1}{2} \cos 4 \theta \), we have
because all other integrals of \(\sin ^2 2\theta \) against the \(\sin k \theta \) and \(\cos k \theta \) with \(k \ge 3\) vanish. Following this observation, we may further decompose
Then, recalling \(c_{BE}^{\text {loc}}(s) = 1 - \frac{\alpha (1)}{\alpha (2)}\) and \({\tilde{\eta }}_n \in E_{\ge 3}\),
Now we drop the first summand, which is nonnegative. In the second summand we complete the square in \(b_{4,n}\) to obtain
Moreover, we can further simplify the term of (3.6) that is quartic in \(r_n\) by observing
Inserting all of this into (3.6), we obtain
Recalling that \(p = \frac{2}{1-2\,s}\) and \(\alpha (\ell ) = \frac{\Gamma (\ell + \frac{1}{2} + s)}{\Gamma (\ell + \frac{1}{2} - s)}\), an explicit computation gives
As a consequence,
for every n large enough. This finishes the proof (in the easy case \(p \ge 4\)). \(\square \)
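As an aside, the elementary Fourier computations with \(\sin ^2 2\theta \) used in the proof can be double-checked numerically:

```python
from math import cos, pi, sin

def inner(f, g, n=20_000):
    # L^2(S^1) pairing by midpoint quadrature (spectrally accurate here).
    h = 2 * pi / n
    return sum(f((k + 0.5) * h) * g((k + 0.5) * h) for k in range(n)) * h

sq = lambda t: sin(2 * t) ** 2
# sin^2(2 theta) = 1/2 - cos(4 theta)/2, so among the modes of degree >= 3
# the only nonzero pairing is with cos(4 theta):
assert abs(inner(sq, lambda t: cos(4 * t)) + pi / 2) < 1e-8
for k in (3, 5, 6, 7):
    assert abs(inner(sq, lambda t, k=k: sin(k * t))) < 1e-8
    assert abs(inner(sq, lambda t, k=k: cos(k * t))) < 1e-8
```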
Let us now explain how to drop the assumption (3.4) that \(p \ge 4\). If \(p < 4\), the very first step in the preceding proof, namely expanding \(|1 + \rho _n|^p\) to fourth order, is not justified, because \(t \mapsto |t|^p\) is not four times continuously differentiable in 0.
This means that the fourth-order expansion of \(|1 + \rho _n(\theta )|^p\) is only justified at points \(\theta \) where \(\rho _n(\theta ) > -1\). For this condition to be fulfilled for all \(\theta \in (0,2\pi )\), we would for example need \(\rho _n\) (or equivalently \(\eta _n\)) to converge to zero uniformly on \({\mathbb {S}}^1\). However, since \(H^s({\mathbb {S}}^1)\) does not embed into \(L^\infty ({\mathbb {S}}^1)\), this is not necessarily the case. To overcome this problem we adapt and simplify a strategy carried out in [10] in a similar situation for \(s = 1\) and \(d \ge 1\).
Proof of Theorem 3, hard case \(p < 4\)
Let us again fix a sequence \(u_n = 1 + \rho _n\) satisfying (3.1) and (3.2). Notice that Lemma 4 holds for \(u_n\) also in this case.
In view of the discussion above, we denote
On \({\mathbb {S}}^1 \setminus {\mathcal {C}}_n\), we have \(\rho _n \in [-\frac{1}{2}, \frac{1}{2}]\) and therefore we can expand, similarly to the above,
On the other hand, since \(t \mapsto |t|^p\) is twice differentiable on \(\mathbb {R}\), on \({\mathcal {C}}_n\) we can still expand to second order,
Let us now observe that we can bound the measure of \({\mathcal {C}}_n\), uniformly in \(n \in \mathbb {N}\), by
Indeed, this follows from
by Sobolev’s inequality and Lemma 4.
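The displayed bound on \(|{\mathcal {C}}_n|\) is not reproduced above. Presumably it is of Chebyshev type; here is a sketch of our reconstruction, under the assumption \({\mathcal {C}}_n = \{\theta \in {\mathbb {S}}^1 : |\rho _n(\theta )| > \tfrac{1}{2}\}\), which is consistent with the description of \({\mathbb {S}}^1 \setminus {\mathcal {C}}_n\) above:

```latex
% Sketch (our reconstruction): Chebyshev-type bound on the measure of C_n.
|{\mathcal {C}}_n|
  \le \int_{\{|\rho_n| > 1/2\}} (2 |\rho_n|)^p \, d\theta
  \le 2^p \int_{{\mathbb {S}}^1} |\rho_n|^p \, d\theta
  \lesssim \Vert \rho_n \Vert^p
  = {\mathcal {O}}(\mu_n^p),
```

where the third step is Sobolev’s inequality on \({\mathbb {S}}^1\) and the last uses \(\rho _n = \mu _n (r_n + \eta _n)\) with \(\Vert r_n + \eta _n\Vert = {\mathcal {O}}(1)\) from Lemma 4.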
Using (3.10) and the fact that \(r_n\) is uniformly bounded, we can bound the error terms in (3.9) by
Moreover, in the same way we can estimate
and
Finally, the error term in (3.8), using that \(\mu _n |\eta _n| \le |\rho _n| + \mu _n |r_n| \le 1\), can be bounded by
With all these error estimates, we have proved that by adding up (3.8) and (3.9) we obtain
Now we can decompose \(\rho _n = \mu _n (r_n + \eta _n)\) and use some estimates we have already explained above. In this way we obtain the analogue of (3.5). From there we can proceed with the proof as in the ‘easy case’ \(p \ge 4\). \(\square \)
4 A family of test functions for \({\mathcal {E}}(u)\)
In this section, we study in some detail a family \((u_\beta )\) of natural test functions which interpolates between one bubble and two bubbles, and which we define in Sect. 4.1 below. In fact, most of our analysis will be carried out in general dimension \(d \ge 1\).
Our analysis of \((u_\beta )\) leads to several interesting consequences. Firstly, for \(d \ge 2\) the \((u_\beta )\) yield a different and very natural choice of test function that gives the strict inequality \(c_{BE}(s) < c_{BE}^{\text {loc}}(s)\) originally proved in [14]. See Sect. 4.2 below.
Secondly, for \(d = 1\), through some computations and basic numerics carried out in Sects. 4.3 and 4.4 below, we show that \({\mathcal {E}}(u_\beta ) > c_{BE}^\text {loc}(s)\) for all values of \(\beta \). Since the \(u_\beta \) can be expected to be very good competitors for the global infimum \(c_{BE}(s)\) (see the discussion in Sect. 4.1), this provides further evidence in support of Conjecture 2.
4.1 Sums of two bubbles and their properties
For any \(d \ge 1\), \(s \in (0, d/2)\) and \(p = \frac{2d}{d-2\,s} \in (2, \infty )\), let
The function \(v_\beta \) is an optimizer of the Sobolev inequality (2.3), i.e., \(v_\beta \in {\mathcal {M}}\). Its normalization is chosen such that
Under the stereographic projection defined in (2.1) and (2.2), the family \((v_\beta )_{\beta \in (-1, 1)}\) corresponds precisely to all dilations \(B_\lambda (x) = \lambda ^\frac{d}{p} B(\lambda x)\), \(\lambda > 0\), of the standard bubble \(B(x) = \left( \frac{2}{1+ |x|^2}\right) ^\frac{d}{p}\) centered at \(0 \in \mathbb {R}^d\).
Let us now consider the family made of sums of two bubbles \(v_\beta \) given by
and note that by symmetry it is sufficient to consider the range \(\beta \in (0, 1)\). In the equivalent setting of \(\mathbb {R}^d\), the family \(u_\beta \) interpolates between one bubble centered at the origin having twice the \(L^p\)-norm of B (for \(\beta = 0\)) and the superposition \(B_\lambda + B_{\lambda ^{-1}}\) of two weakly interacting bubbles centered at 0, with \(\lambda = \lambda (\beta ) = \sqrt{\frac{1+\beta }{1 - \beta }} \rightarrow \infty \) as \(\beta \rightarrow 1\).
Let us discuss in some more detail the reasons why, heuristically, we expect the \((u_\beta )\) to be good competitors for the global infimum \(c_{BE}(s)\), i.e. \(\inf _{\beta \in (0,1)} \mathcal E(u_\beta )\) to give a value close to \(c_{BE}(s)\).
Firstly, it is shown in [15] that the asymptotic value of \({\mathcal {E}}(u_\beta )\) as \(\beta \rightarrow 1\) is
Moreover, the analysis of minimizing sequences from [15] shows that this value is best possible for sequences consisting of at least two non-trivial asymptotically non-interacting parts.
Secondly, as \(\beta \rightarrow 0\), the quotient \({\mathcal {E}}(u_\beta )\) converges to the best local constant \(c_{BE}^{\text {loc}}(s)\). What is more, for every \(d \ge 2\), it even does so from below. This is the content of Proposition 5 below.
The third reason why we expect the \(u_\beta \) to be good competitors for \(c_{BE}(s)\) concerns the whole range \(\beta \in (0,1)\), not just the asymptotic regime. Indeed, the shape of \(u_\beta \) as \(\beta \) varies reflects the two competing requirements on a minimizer u of \({\mathcal {E}}\). On the one hand, the numerator of \({\mathcal {E}}\), i.e., the Sobolev deficit \((u, P_s u) - S_{d,s} \Vert u\Vert _p^2\), should be small, hence u should look like a Talenti bubble (which is the case for \(\beta \) small). On the other hand, the denominator of \({\mathcal {E}}\), i.e., the distance \({{\,\textrm{dist}\,}}(u, \mathcal M)^2\), should be large, which forces u to be different from a Talenti bubble. The family \((u_\beta )\) represents a natural attempt to reconcile these two competing necessities.
In connection with this, we can mention an interesting analogy with the stability inequality associated to the isoperimetric inequality, whose features are similar to inequality (2.5). For \(d=2\), a conjecture which is strongly supported by both numerics and partial rigorous arguments states that the optimal set for the isoperimetric stability inequality is a certain explicit non-convex mask-shaped set, see [4]. This set is precisely a compromise between one and two disjoint balls, in much the same way as \(u_\beta \), for intermediate values of \(\beta \in (0,1)\), is a (probably not fully optimal) compromise between one and two non-interacting bubbles.
4.2 An alternative proof for \(c_{BE}(s) < c_{BE}^{\text {loc}}(s)\) when \(d \ge 2\)
The following proposition yields an alternative proof of the main result of the recent paper [14], namely the fact that \(c_{BE}(s) < c_{BE}^\text {loc}(s)\) for every \(d \ge 2\).
Proposition 5
Let \(d \ge 1\) and let \((u_\beta )_{\beta \in (-1,1)}\) be the family of functions defined in (4.3). Then there are \(c_1(\beta ), c_2 > 0\) such that \(c_1(\beta ) \rightarrow 2\) and
uniformly on \({\mathbb {S}}^d\) as \(\beta \rightarrow 0\), where
As a consequence, \(\lim _{\beta \rightarrow 0} {\mathcal {E}}(u_\beta ) = \frac{4s}{d + 2s +2} = c_{BE}^\text {loc}(s)\). Moreover, if \(d \ge 2\),
for all \(\beta \) small enough.
As we will see in the proof of Proposition 5, the proof of the strict inequality (4.7) boils down, via a Taylor expansion, to proving that \(\int _{{\mathbb {S}}^d} \rho ^3 > 0\) if \(d \ge 2\) (while \(\int _{\mathbb S^1} \rho ^3 =0\) if \(d = 1\)). Exhibiting a degree-two spherical harmonic \(\rho \in E_2\) with this property has been the key observation of [14]; however, the spherical harmonic chosen there is different from \(\rho \) in (4.6). Arguably, the choice of \(\rho \) in (4.6) is more natural, because it comes from the family \((u_\beta )\) via (4.5), while the choice in [14] is made on abstract and purely algebraic grounds. It must be noted that both choices require \(d \ge 2\) for (4.7) to hold, which is consistent with Theorem 1.
Proof of Proposition 5
The proof of (4.5) comes from a straightforward Taylor expansion. Indeed,
Hence
From this we conclude (4.5) by observing that
with \(\rho \) defined by (4.6).
Now we turn to proving (4.7). Given (4.5) and the fact that \(\rho \in E_2\), it now follows from a Taylor expansion of the quotient \(\mathcal E(u_\beta )\) that
for some constant \(c_3 > 0\), whose explicit value is of no interest to us. This follows from the computations made in [14], or rather from their equivalent on \({\mathbb {S}}^d\) via stereographic projection.
To prove (4.7), it therefore only remains to show that \(\int _{{\mathbb {S}}^d}\rho ^3 > 0\), where \(\rho \) is given by (4.6). We compute
Integration by parts yields the relation
for all \(k \ge 2\), \(l \ge 0\). Applying this repeatedly, a straightforward calculation leads to
Thus for \(\rho \) given by (4.6), \(\int _{{\mathbb {S}}^d} \rho ^3\) is strictly positive whenever \(d \ge 2\) (and zero when \(d = 1\)). Hence the proof is complete. \(\square \)
4.3 The case \(p = 3\)
We now turn to evaluating the family \((u_\beta )\) for intermediate values of \(\beta \). This proves to be much harder than obtaining asymptotic values. One of the reasons for this is the fact that there is some \(\beta _0 \in (0,1)\) such that for \(\beta > \beta _0\) the distance \({{\,\textrm{dist}\,}}(u_\beta , {\mathcal {M}})\) is no longer achieved by a constant. To simplify this particular issue, we introduce the modified Bianchi–Egnell quotient
The difference between \(\widetilde{{\mathcal {E}}}(u)\) and \({\mathcal {E}}(u)\) is that the denominator in (4.8) contains the \({H}^s\)-distance to the set \({\mathcal {C}} \subset {\mathcal {M}}\) of constant functions, instead of the distance to the full set of optimizers \({\mathcal {M}}\). The advantage of \(\widetilde{{\mathcal {E}}}(u)\) for computations is that the function \(c \in {\mathcal {C}}\) realizing \({{\,\textrm{dist}\,}}_{H^s({\mathbb {S}}^d)}(u, {\mathcal {C}})\) can be determined very easily for every function \(u \in H^s({\mathbb {S}}^d)\), while this is not so for \({{\,\textrm{dist}\,}}_{H^s(\mathbb S^d)}(u, {\mathcal {M}})\).
Since \({\mathcal {C}} \subset {\mathcal {M}}\), we moreover have
Finally, for small enough \(\beta \) we actually have \(\mathcal E(u_\beta ) =\widetilde{{\mathcal {E}}}(u_\beta )\): this follows from [11, Lemma 12] together with the fact that for the distance minimizer \(c_\beta \) of \({{\,\textrm{dist}\,}}_{H^s({\mathbb {S}}^d)}(u_\beta , {\mathcal {C}})\), one has \(\int _{{\mathbb {S}}^d} (u_\beta - c_\beta ) = \int _{{\mathbb {S}}^d} (u_\beta - c_\beta ) \omega = 0\), for all \(\beta \in (0,1)\). (However, for \(\beta \) close to 1 the minimizer of \({{\,\textrm{dist}\,}}_{H^s({\mathbb {S}}^d)}(u_\beta , {\mathcal {M}})\) must be close to \(v_\beta \) or \(v_{-\beta }\), and hence \({\mathcal {E}}(u_\beta ) > \widetilde{{\mathcal {E}}}(u_\beta )\) for such \(\beta \).)
To simplify computations further, we only consider a special choice of p which gives an algebraically simple expression, namely \(p = 3\). We will make some largely analogous computations for \(p = 4\) in the next subsection.
Our goal in this subsection is thus to confirm that for \(d = 1\) and \(p = 3\) (i.e., \(s = \frac{1}{6}\)) we have
By (4.9), this implies in particular \(\mathcal E(u_\beta ) > c_{BE}^{\textrm{loc}}(\frac{1}{6})\).
Unfortunately, it turns out that we are only able to prove (4.10) numerically. It would be desirable to obtain a mathematically rigorous proof of (4.10) and to extend (4.10) to all values of \(s \in (0, \frac{1}{2})\), either numerically or rigorously.
We emphasize once more that for \(d \ge 2\) property (4.10) must fail for small enough \(\beta \), as a consequence of Proposition 5. Nevertheless, we carry out the following computations for general \(d \ge 1\), because they present no additional difficulty and because the results may be of some independent use.
In the following lemma, we express \(\widetilde{{\mathcal {E}}}(u_\beta )\) conveniently in terms of the quantity
Lemma 6
Let \(p = 3\) and \(d \ge 1\), so that \(s = \frac{d}{6}\). Then for every \(\beta \in (0,1)\), we have
where \(\gamma (\beta ) = \frac{2 \beta }{1 + \beta ^2}\).
Before giving the proof of this lemma, we make a useful observation which explains the origin of the expression for \(\gamma (\beta )\).
Lemma 7
For \(d \ge 1\), let \(s \in (0, \frac{d}{2})\) and \(p = \frac{2d}{d-2s} > 2\). For every \(\beta , \beta ' \in (-1, 1)\), we have
where
In particular, if \(\beta ' = - \beta \), then \(\gamma \) and \(\beta \) are related by
(Here, if \(\gamma = 0\), we interpret \( \frac{1 - \sqrt{1 - \gamma ^2}}{\gamma } = 0\).)
Proof
Let \(B(x) = \left( \frac{2}{1 + |x|^2} \right) ^\frac{d}{p}\) and denote \(B_\lambda (x) = \lambda ^\frac{d}{p} B(\lambda x)\).
A direct computation then gives that
where the transformation \(u \mapsto u_{{\mathcal {S}}}\) is defined by (2.2), and \(\lambda \) and \(\beta \) are related by
By conformal invariance, we have
where, by (4.13), \(\gamma \) is given by
as claimed. The second identity in (4.12) follows from this by using that \(P_s v_\beta = \alpha (0) v_\beta ^{p-1}\) for every \(\beta \in (-1, 1)\).
The expression \(\beta = \frac{1 - \sqrt{1 - \gamma ^2}}{\gamma }\) comes from solving the quadratic equation \((1 + \beta ^2) \gamma = 2 \beta \) and taking into account that both \(\gamma \) and \(\beta \) are in \((-1,1)\). \(\square \)
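The algebra in this last step can be verified numerically with a quick sketch:

```python
from math import isclose, sqrt

def gamma_of_beta(beta):
    # gamma(beta) from Lemma 6 / Lemma 7 with beta' = -beta.
    return 2 * beta / (1 + beta ** 2)

def beta_of_gamma(g):
    # Inverse branch with beta in (-1, 1), as in Lemma 7.
    return 0.0 if g == 0 else (1 - sqrt(1 - g ** 2)) / g

for beta in (0.0, 0.1, 0.5, 0.9, 0.99, -0.7):
    g = gamma_of_beta(beta)
    assert -1 < g < 1
    # g solves the quadratic (1 + beta^2) * gamma = 2 * beta ...
    assert isclose((1 + beta ** 2) * g, 2 * beta)
    # ... and the stated branch recovers beta:
    assert isclose(beta_of_gamma(g), beta, abs_tol=1e-12)
```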
Proof of Lemma 6
We compute the terms in \(\widetilde{{\mathcal {E}}}(u_\beta )\) separately, making repeated use of the normalization (4.2) and Lemma 7. We have
Similarly, we obtain (here is where we use \(p=3\) to simply multiply out the terms!)
Finally, it is easy to see that \({{\,\textrm{dist}\,}}(u_\beta , {\mathcal {C}})\) is uniquely achieved by the constant function \(c_\beta \) which satisfies \(\int _{{\mathbb {S}}^d} c_\beta = \int _{{\mathbb {S}}^d} u_\beta = 2 \int _{{\mathbb {S}}^d} v_\beta \), i.e.,
Therefore
Combining everything, and observing \(S_{d,s} = \alpha (0) |\mathbb S^d|^{1 - \frac{2}{p}} = \alpha (0) |{\mathbb {S}}^d|^{\frac{1}{3}}\) since \(p=3\), we obtain
which is the claimed expression. \(\square \)
To obtain a plot of the values of \(\widetilde{{\mathcal {E}}}(u_\beta )\), we now express \(I(\beta )\) in terms of a suitable hypergeometric function \(_2F_1(a,b; c; z)\). A useful reference for the definition and properties of \(_2F_1(a,b; c; z)\) is [1, Chapter 15]. In any case, all we need to know about the function \(_2F_1(a,b; c; z)\) is the identity stated in (4.16) below.
In fact, with no extra work we can obtain an expression for
for any \(q > 0\), which we will use in the next subsection for \(q = 2\) as well.
Lemma 8
Let \(d \ge 1\) and \(p > 2\). For \(\beta \in (0,1)\) and \(q > 0\), let \(I_q(\beta )\) be given by (4.15). Then
where \(c_d = 2^{d-1} \frac{|{\mathbb {S}}^{d-1}|}{|{\mathbb {S}}^{d}|} \frac{\Gamma (d/2)^2}{\Gamma (d)} = 2^{d-1} \frac{\Gamma (\frac{d}{2}) \Gamma (\frac{d+1}{2})}{\Gamma (\frac{1}{2}) \Gamma (d)}\).
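The equality of the two expressions for \(c_d\) can be spot-checked numerically. This is our own illustration (hypothetical helper names), using the standard formula \(|{\mathbb {S}}^{n}| = 2 \pi ^{(n+1)/2} / \Gamma (\frac{n+1}{2})\):

```python
import math

def sphere_area(n):
    """Surface measure of the unit n-sphere S^n in R^{n+1}."""
    return 2 * math.pi ** ((n + 1) / 2) / math.gamma((n + 1) / 2)

def c_d_geometric(d):
    # c_d = 2^{d-1} * |S^{d-1}| / |S^d| * Gamma(d/2)^2 / Gamma(d)
    return (2 ** (d - 1) * sphere_area(d - 1) / sphere_area(d)
            * math.gamma(d / 2) ** 2 / math.gamma(d))

def c_d_gamma(d):
    # c_d = 2^{d-1} * Gamma(d/2) * Gamma((d+1)/2) / (Gamma(1/2) * Gamma(d))
    return (2 ** (d - 1) * math.gamma(d / 2) * math.gamma((d + 1) / 2)
            / (math.gamma(0.5) * math.gamma(d)))

for d in range(1, 8):
    assert abs(c_d_geometric(d) - c_d_gamma(d)) < 1e-12
```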
With \(z = \frac{2 \beta }{1 + \beta }\), we can rewrite this as
Proof
We recall the integral representation of the hypergeometric function, which reads [1, eq. 15.3.1]
Thus, for every \(\beta \in (0,1)\), we have
Using (4.16) with \(a = dq/p\), \(b = d/2\) and \(c = d\), and inserting \(|{\mathbb {S}}^{d-1}| = \frac{2 \pi ^{d/2}}{\Gamma (\frac{d}{2})}\) we obtain the claimed expression.
Finally, for \(z = \frac{2 \beta }{1 + \beta }\) we have \(\beta = \frac{z}{2-z}\) and hence \(\frac{1- \beta }{1 + \beta } = 1 - z\) by direct computation. This yields the claimed expression for J(z). \(\square \)
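The integral representation [1, eq. 15.3.1] used in the proof can be verified against the defining power series of \(_2F_1\). The sketch below is our own illustration in pure standard-library Python (`hyp2f1_series` and `hyp2f1_integral` are hypothetical names); the parameters follow the shape \(a = dq/p\), \(b = d/2\), \(c = d\) from Lemma 8:

```python
import math

def hyp2f1_series(a, b, c, z, terms=200):
    """Gauss hypergeometric series sum_{n>=0} (a)_n (b)_n / ((c)_n n!) z^n, valid for |z| < 1."""
    total, term = 0.0, 1.0
    for n in range(terms):
        total += term
        term *= (a + n) * (b + n) / ((c + n) * (n + 1)) * z
    return total

def hyp2f1_integral(a, b, c, z, steps=200_000):
    """Euler integral representation [1, eq. 15.3.1] (requires c > b > 0), midpoint rule."""
    pref = math.gamma(c) / (math.gamma(b) * math.gamma(c - b))
    h = 1.0 / steps
    total = 0.0
    for k in range(steps):
        t = (k + 0.5) * h
        total += t ** (b - 1) * (1 - t) ** (c - b - 1) * (1 - z * t) ** (-a)
    return pref * total * h

# sample parameters: d = 3, q/p = 1/4, i.e. a = 3/4, b = 3/2, c = 3
a, b, c, z = 0.75, 1.5, 3.0, 0.4
assert abs(hyp2f1_series(a, b, c, z) - hyp2f1_integral(a, b, c, z)) < 1e-6
```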
We can now combine Lemmas 6 and 8 to express \(\widetilde{{\mathcal {E}}}(u_\beta )\) as an explicit function of one variable, which we can evaluate numerically.
Indeed, for \(\gamma (\beta )= \frac{2\beta }{1 + \beta ^2}\), direct computations give \(\frac{1 - \gamma }{1+ \gamma } = \left( \frac{1 -\beta }{1 + \beta } \right) ^2\) and \(\frac{2 \gamma }{1 + \gamma } = \frac{4 \beta }{(1 + \beta )^2}\), so that
In terms of \(z = \frac{2 \beta }{1 + \beta }\), we can write this as
To simplify the resulting expression for \(\widetilde{{\mathcal {E}}}(u_\beta )\) ((4.18) below) even further, it is convenient to rewrite this once more, in terms of the variable \(y = z(2-z) \, \Leftrightarrow \, z = 1 - \sqrt{1-y}\), as
The expression for \(I(\beta )\) becomes accordingly
Summing up, for \(y = z(2-z) = \frac{4\beta }{(1 + \beta )^2}\) (which varies between 0 and 1 as \(\beta \) does), we have
Figure 1 shows the plot of the function e(y) for dimension \(d= 1\). As expected, it goes to \(\frac{1}{5} = c_{BE}^{\text {loc}}(\frac{1}{6})\) as \(y \rightarrow 0\). Moreover, from the plot, e(y) is manifestly increasing in y. Equivalently, \(\widetilde{{\mathcal {E}}}(u_\beta )\) is increasing in \(\beta \in (0,1)\). This confirms the desired inequality (4.10), at least numerically.
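The chain of substitutions \(\beta \mapsto \gamma \mapsto z \mapsto y\) used above is elementary to verify numerically; a minimal sketch (our own, standard library only):

```python
import math

# Verify the substitutions beta -> gamma -> z -> y and the identities quoted above.
for beta in [0.1, 0.3, 0.5, 0.7, 0.9]:
    gamma = 2 * beta / (1 + beta ** 2)
    z = 2 * beta / (1 + beta)
    y = z * (2 - z)

    assert abs(beta - z / (2 - z)) < 1e-12            # inverse of z(beta)
    assert abs((1 - beta) / (1 + beta) - (1 - z)) < 1e-12
    assert abs((1 - gamma) / (1 + gamma) - ((1 - beta) / (1 + beta)) ** 2) < 1e-12
    assert abs(2 * gamma / (1 + gamma) - 4 * beta / (1 + beta) ** 2) < 1e-12
    assert abs(y - 4 * beta / (1 + beta) ** 2) < 1e-12
    assert abs(z - (1 - math.sqrt(1 - y))) < 1e-12    # y = z(2-z) <=> z = 1 - sqrt(1-y)
```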
The contrast with higher dimensions \(d \ge 2\) becomes clear from the plot of \(y \mapsto e(y)\) for \(p=3\) and \(d=2\) in Fig. 2, which shows that \(\widetilde{\mathcal E}(u_\beta )\) is decreasing in \(\beta \in (0,1)\). This is consistent with the results of [14] and with the fact that \(\lim _{\beta \rightarrow 1} \widetilde{{\mathcal {E}}}(u_\beta ) = 1 - 2^{-\frac{2s}{d}} < \frac{4s}{d + 2s +2} = \lim _{\beta \rightarrow 0} \widetilde{{\mathcal {E}}}(u_\beta )\) for all \(d \ge 2\), \(s \in (0, d/2)\). (On the other hand, for \(d = 1\) one indeed has \(1 - 2^{-2s} > \frac{4s}{1 + 2s +2}\), consistent with Fig. 1.)
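The comparison between these two limits is elementary arithmetic and can be spot-checked as follows (our own sketch; `lim_beta_to_1` and `lim_beta_to_0` are hypothetical names for the two quoted limit formulas):

```python
def lim_beta_to_1(d, s):
    """Quoted limit of E~(u_beta) as beta -> 1: 1 - 2^{-2s/d}."""
    return 1 - 2 ** (-2 * s / d)

def lim_beta_to_0(d, s):
    """Quoted limit of E~(u_beta) as beta -> 0: 4s / (d + 2s + 2)."""
    return 4 * s / (d + 2 * s + 2)

# d = 1: the beta -> 1 limit stays ABOVE the local constant, consistent with Fig. 1
for s in [0.05, 1 / 6, 0.25, 0.4]:
    assert lim_beta_to_1(1, s) > lim_beta_to_0(1, s)

# d >= 2: the inequality reverses, consistent with Fig. 2 and with [14]
for d in [2, 3, 4]:
    for frac in [0.1, 0.5, 0.9]:
        s = frac * d / 2  # sample s in (0, d/2)
        assert lim_beta_to_1(d, s) < lim_beta_to_0(d, s)

# the value quoted for Figure 1 (d = 1, p = 3, i.e. s = 1/6)
assert abs(lim_beta_to_0(1, 1 / 6) - 1 / 5) < 1e-12
```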
The wild oscillations at the left end of the graphs in Figs. 1 and 2 are almost certainly due to numerical instabilities alone. They are greatly reduced if one instead plots the less singular expression \(e_1(y) - \frac{1}{5} e_2(y)\). Indeed, Fig. 3 confirms graphically that \(e_1(y) - \frac{1}{5} e_2(y) > 0\) for \(d = 1\), which is equivalent to (4.10).
One may appreciate how close the graphs in all figures remain to their respective extremal values on a large part of the interval (0, 1).
4.4 The case \(p = 4\)
Another easy test case which gives algebraically simple expressions is \(p = 4\), i.e., \(s = \frac{d}{4}\). We again evaluate \(\widetilde{{\mathcal {E}}}(u_\beta )\) for \(u_\beta = v_\beta + v_{-\beta }\), with \(v_\beta \) given by (4.1). Since we proceed analogously to the case \(p = 3\), we give fewer details here for the sake of brevity.
Similar computations as in the proof of Lemma 6 give
where again
Using the computations and notations from the proof of Lemma 7, by conformal invariance we have
and thus
Thus, as in the case \(p= 3\), with \(y = \frac{4 \beta }{(1 + \beta )^2}\) we can write
with m as in (4.17) (for \(p = 4\)), and
by Lemma 8.
Written in this form, we can plot the function \(y \mapsto f(y)\) for \(d = 1\); see Fig. 4. The qualitative behavior is exactly the same as in the case \(p=3\) studied above: f(y) is strictly increasing in \(y \in (0,1)\), with \(\lim _{y \rightarrow 0} f(y) = \frac{2}{7} = c_{BE}^\text {loc}\left( \frac{1}{4}\right) \).
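The quoted value \(\frac{2}{7}\) is a quick arithmetic check (our own sketch; we also compare it against the \(\beta \rightarrow 1\) limit formula \(1 - 2^{-2s/d}\) quoted in the \(p=3\) discussion, assuming it carries over to \(p=4\)):

```python
# p = 4 forces s = d/4; for d = 1 this gives s = 1/4.
d, s = 1, 1 / 4
lim0 = 4 * s / (d + 2 * s + 2)   # local limit as y -> 0, equals c_BE^loc(1/4)
lim1 = 1 - 2 ** (-2 * s / d)     # beta -> 1 limit (formula from the p = 3 discussion)
assert abs(lim0 - 2 / 7) < 1e-12
assert lim1 > lim0               # d = 1: consistent with f(y) increasing on (0, 1)
```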
A sign-changing family of test functions. When \(p=4\), since \(u^4 = |u|^4\), it is particularly easy to study the value of \({{\mathcal {E}}}\) for sign-changing competitors. It is natural to look at the family
with \(v_\beta \) again as in (4.1). It is easy to see that \({\mathcal {E}}(u) \le {\mathcal {E}}(|u|)\) for every u, and so it seems reasonable to expect that the \(w_\beta \) are actually even stronger competitors than the \(u_\beta = v_\beta + v_{-\beta }\). However, we shall see that this is not true. Actually, we will see (still numerically) that
Indeed, since \(\int _{{\mathbb {S}}^1} w_\beta = 0\), the infimum in \({{\,\textrm{dist}\,}}(w_\beta , {\mathcal {C}})\) is realized by the zero function, and therefore simply \({{\,\textrm{dist}\,}}(w_\beta , {\mathcal {C}})^2 = (w_\beta , P_s w_\beta )\) in this case. With practically the same computations as in the previous cases, we then obtain
Plotting shows that \(\widetilde{{\mathcal {E}}}(w_\beta )\) is strictly decreasing, see Fig. 5.
Moreover, as \(\beta \rightarrow 1\) we have
Therefore,
where we used (4.4).
Data availability
Data sharing not applicable to this article as no datasets were generated or analysed during the current study.
References
Abramowitz, M., Stegun, I.A.: Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables. 10th printing, with corrections. Wiley-Interscience, New York (1972)
Aubin, T.: Problèmes isopérimétriques et espaces de Sobolev. J. Differ. Geom. 11, 573–598 (1976)
Bianchi, G., Egnell, H.: A note on the Sobolev inequality. J. Funct. Anal. 100(1), 18–24 (1991)
Bianchini, C., Croce, G., Henrot, A.: On the quantitative isoperimetric inequality in the plane. ESAIM Control Optim. Calc. Var. 23(2), 517–549 (2017)
Chen, S., Frank, R.L., Weth, T.: Remainder terms in the fractional Sobolev inequality. Indiana Univ. Math. J. 62(4), 1381–1397 (2013)
Chen, L., Lu, G., Tang, H.: Sharp Stability of Log-Sobolev and Moser–Onofri inequalities on the Sphere. arXiv:2210.06727
De Nitti, N., König, T.: Stability with explicit constants of the critical points of the fractional Sobolev inequality and applications to fast diffusion. J. Funct. Anal. 285(9), 110093 (2023)
Deng, S., Tian, X.: On the stability of Caffarelli–Kohn–Nirenberg inequality in \({{\mathbb{R}}}^2\). arXiv:2308.04111
Dolbeault, J., Esteban, M. J., Figalli, A., Frank, R. L., Loss, M.: Sharp stability for Sobolev and log-Sobolev inequalities, with optimal dimensional dependence. arXiv:2209.08651
Frank, R.L.: Degenerate stability of some Sobolev inequalities. Ann. Inst. H. Poincaré Anal. Non Linéaire 39(6), 1459–1484 (2023)
Frank, R. L.: The sharp Sobolev inequality and its stability: an introduction. arXiv:2304.03115
Frank, R.L., König, T., Tang, H.: Reverse conformally invariant Sobolev inequalities on the sphere. J. Funct. Anal. 282(4), 109339 (2022)
Frank, R. L., Peteranderl, J. W.: Degenerate Stability of the Caffarelli–Kohn–Nirenberg Inequality along the Felli–Schneider Curve. arXiv:2308.07917
König, T.: On the sharp constant in the Bianchi–Egnell stability inequality. Bull. Lond. Math. Soc. 55(4), 2070–2075 (2023)
König, T.: Stability for the Sobolev inequality: existence of a minimizer. arXiv:2211.14185
Lieb, E.H.: Sharp constants in the Hardy–Littlewood–Sobolev and related inequalities. Ann. Math. (2) 118(2), 349–374 (1983)
Talenti, G.: Best constant in Sobolev inequality. Ann. Mat. Pura Appl. IV. Ser. 110, 353–372 (1976)
Wei, J. C., Wu, Y.: Stability of the Caffarelli–Kohn–Nirenberg inequality: the existence of minimizers. arXiv:2308.04667
Communicated by P. H. Rabinowitz.