1 Introduction

1.1 Main results

Given a function \(g\in L^{2}(\mathbb{R})\) (called the window), the short-time Fourier transform (STFT) of a function \(f \in L^{2}(\mathbb{R})\) is usually defined as

$$ V_{g}f(x,\omega ) = \int _{\mathbb{R}} e^{- 2 \pi i t \omega} f(t) \overline{g(x-t)} \operatorname{d\!}t. $$
(1.1)

This transform plays a distinguished role in several areas of mathematics, including time-frequency analysis [19], signal processing [30], mathematical physics [29] (where it is also known as the coherent state transform), and semiclassical and microlocal analysis [21, 37].

From the point of view of time-frequency analysis, the STFT measures the “instantaneous frequency” content of the signal \(f\) at each point of the time-frequency plane, much as a music score does. Since, by the uncertainty principle, the notion of “instantaneous frequency” is not well defined for generic signals, the STFT can only concentrate a limited amount of its \(L^{2}\)-norm on any set \(\Omega \subset \mathbb{R}^{2}\) of finite Lebesgue measure \(|\Omega |\), and finding explicit bounds in terms of \(|\Omega |\) is an important issue in time-frequency analysis. For a general window \(g\) this appears to be extremely challenging, and only suboptimal bounds have been obtained: we refer the reader to the work of E. Lieb [28] for what is, to our knowledge, the best result currently available at this level of generality.

For very regular windows, however, the situation improves. In particular, in the relevant case (extensively studied in the literature also in connection with the spectrum of localization operators in the radially symmetric case, see e.g. [1, 9, 18, 35]) where \(g=\varphi \) is the Gaussian window

$$ \varphi (x) = 2^{1/4} e^{-\pi x^{2}}, \quad x \in \mathbb{R}, $$
(1.2)

a complete solution to this concentration problem has recently been given in [33], thus proving a conjecture from [2] (see also [10]). Denoting by \(\mathcal{V}f := V_{\varphi}f\) the STFT with the Gaussian window \(\varphi \) defined in (1.2), the main result of [33] can be stated as follows:

Theorem A

[33]; Faber–Krahn inequality for the STFT

If \(\Omega \subset \mathbb{R}^{2}\) is a measurable set with finite Lebesgue measure \(|\Omega |>0\), and \(f\in L^{2}(\mathbb{R})\setminus \{0\}\) is an arbitrary function, then

$$ \frac{\int _{\Omega} |\mathcal{V}f(x,\omega )|^{2} \operatorname{d\!}x \operatorname{d\!}\omega}{\Vert f\Vert _{L^{2}(\mathbb{R})}^{2}} \leq 1-e^{- \left | \Omega \right |}. $$
(1.3)

Moreover, equality is attained if and only if \(\Omega \) coincides (up to a set of measure zero) with a ball centered at some \(z_{0}=(x_{0}, \omega _{0})\in \mathbb{R}^{2}\) and, at the same time, \(f\) is a function of the form

$$ f(x) = c\, \varphi _{z_{0}}(x) , \qquad \varphi _{z_{0}}(x) := e^{2 \pi i \omega _{0} x} \varphi (x-x_{0}), $$
(1.4)

for some \(c\in \mathbb{C}\setminus \{0\}\).

Note that the optimal functions in (1.4) are scalar multiples of the Gaussian window defined in (1.2), translated and modulated according to the center of the ball \(\Omega \).
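
As a quick check of the equality case (a standard computation, recorded here only for the reader's convenience), one has \(\Vert \varphi \Vert _{L^{2}(\mathbb{R})}=1\) and

$$ |\mathcal{V}\varphi (x,\omega )|^{2}=e^{-\pi (x^{2}+\omega ^{2})}, \qquad \int _{B(0,r)} |\mathcal{V}\varphi (x,\omega )|^{2} \operatorname{d\!}x \operatorname{d\!}\omega = 1-e^{-\pi r^{2}} = 1-e^{-|B(0,r)|}, $$

so the pair \((\varphi , B(0,r))\) indeed saturates (1.3); the general optimal pairs in (1.4) are obtained from this one by translating the ball and translating and modulating \(\varphi \) accordingly.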

This result, which improves upon Lieb’s uncertainty principle [28], has inspired several subsequent works: [36], where a similar result is extended to wavelet transforms; [4], where Kulikov uses techniques inspired by those of [33] to prove some contractivity conjectures; and [14], where R. Frank uses the same circle of ideas to generalize a series of entropy-like inequalities (see also the recent preprint [15]). We refer the reader to [22–25, 27, 31, 34] and the references therein for further closely related work.

In the present paper we investigate the stability of Theorem A: given \(\Omega \subset \mathbb{R}^{2}\) and \(f\in L^{2}(\mathbb{R})\) which are almost optimal, in the sense that they almost saturate inequality (1.3), can we infer (and to what extent) that \(\Omega \) is close to a ball and that \(f\) is close to a function of the form (1.4)? To formulate this question precisely, a crucial point is choosing how to measure almost optimality as well as closeness.

To measure almost optimality in (1.3) for a pair \((f,\Omega )\), we will consider the combined deficit

$$ \delta (f;\Omega ) := 1- \frac{\displaystyle \int _{\Omega} |\mathcal{V}f(x,\omega )|^{2} \, \operatorname{d\!}x \operatorname{d\!}\omega}{\displaystyle (1-e^{-\left | \Omega \right |}) \|f\|_{L^{2}(\mathbb{R})}^{2}}, $$
(1.5)

while we will use the Fraenkel asymmetry of \(\Omega \subset \mathbb{R}^{2}\) to measure its distance to a ball:

$$ \mathcal{A}(\Omega ) := \inf \left \{ \frac{\left | \Omega \triangle B(x,r) \right |}{\left | \Omega \right |} \colon \left | B(x,r) \right | = \left | \Omega \right | , r >0, x \in \mathbb{R}^{2} \right \} . $$
(1.6)

The Fraenkel asymmetry is a natural notion of asymmetry and it is often used to formulate the stability of geometric and functional inequalities, such as the isoperimetric inequality [8, 12, 16, 17] or the Faber–Krahn inequality for the Dirichlet Laplacian [5, 7].
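
As an elementary illustration (not needed in the sequel, but useful to fix ideas), note that \(\mathcal{A}(\Omega )=0\) if and only if \(\Omega \) coincides, up to a set of measure zero, with a ball, while for every ball \(B\) with \(|B|=|\Omega |\) one trivially has

$$ \mathcal{A}(\Omega ) \leq \frac{\left | \Omega \triangle B \right |}{\left | \Omega \right |} \leq \frac{\left | \Omega \right |+\left | B \right |}{\left | \Omega \right |} = 2, $$

so that \(\mathcal{A}\) is a scale-invariant quantity taking values in \([0,2)\) (the value 2 is never attained, since a ball centered at a density point of \(\Omega \) overlaps \(\Omega \) in a set of positive measure).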

Our main result reads as follows:

Theorem 1.1

Stability of the Faber–Krahn inequality for the STFT

There is an explicitly computable constant \(C>0\) such that, for all measurable sets \(\Omega \subset \mathbb{R}^{2}\) with finite measure \(|\Omega |>0\) and all functions \(f \in L^{2}(\mathbb{R})\backslash \{0\}\), we have

$$ \min _{z_{0}\in \mathbb{C}, |c|=\|f\|_{2}} \frac{\|f - c\, \varphi _{z_{0}} \|_{2}}{\|f\|_{2}} \leq C\big( e^{| \Omega |} \delta (f;\Omega )\big)^{1/2}. $$
(1.7)

Moreover, for some explicit constant \(K(|\Omega |)\) we also have

$$ \mathcal{A}(\Omega ) \leq K(|\Omega |) \delta (f;\Omega )^{1/2} . $$
(1.8)

Remark 1.2

Sharpness

In Theorem 1.1 the factor \(\delta (f;\Omega )^{1/2}\) in (1.7) and (1.8) cannot be replaced by \(\delta (f;\Omega )^{\beta}\) for any \(\beta > 1/2\). Similarly, the dependence on \(|\Omega |\) in (1.7) is also sharp, in the sense that the factor \(e^{|\Omega |/2}\) cannot be replaced by \(e^{\beta |\Omega |}\) for any \(\beta <1/2\). We refer to Sect. 6 for the proofs of these claims.

Remark 1.3

Higher dimensions

There is a natural generalization of the STFT to functions \(f\in L^{2}(\mathbb{R}^{d})\), for any \(d\geq 1\). In Sect. 7 we show that a more general version of Theorem 1.1 holds in all dimensions. It is worth noting that, although \(\delta (f;\Omega )^{1/2}\) still controls the distance of \(f\) to the set of optimizers, the dependence of this estimate on \(|\Omega |\) is dimension-dependent.

As observed in [33], if the set \(\Omega \) is fixed and has finite measure, Theorem A (and consequently also Theorem 1.1) can be interpreted in terms of the well-known localization operator [6, 9] defined, in terms of the STFT operator \(\mathcal{V}\colon L^{2}(\mathbb{R})\to L^{2}(\mathbb{R}^{2})\) with Gaussian window, by

$$ L_{\Omega } := \mathcal{V}^{*}\, 1_{\Omega}\, \mathcal{V},\qquad L_{ \Omega}\colon L^{2}(\mathbb{R})\to L^{2}(\mathbb{R}). $$

This is a positive trace-class operator, hence its norm coincides with its largest eigenvalue

$$ \lambda _{1}(\Omega ):= \max _{f\in L^{2}(\mathbb{R})\backslash \{0\}} \frac {\langle L_{\Omega }\, f,f\rangle}{\Vert f\Vert _{L^{2}(\mathbb{R})}^{2}} = \max _{f\in L^{2}(\mathbb{R})\backslash \{0\}} \frac{\int _{\Omega} |\mathcal{V}f(x,\omega )|^{2} \operatorname{d\!}x \operatorname{d\!}\omega}{\int _{\mathbb{R}^{2}} |\mathcal{V}f(x,\omega )|^{2} \operatorname{d\!}x \operatorname{d\!}\omega}. $$
(1.9)

In particular, due to the arbitrariness of \(f\), (1.3) entails that

$$ \lambda _{1}(\Omega )\leq 1-e^{-|\Omega |}, $$
(1.10)

with equality if and only if \(\Omega \) is a ball, and so we call (1.10) a Faber–Krahn inequality, by analogy with the Dirichlet Laplacian. Clearly, for any fixed \(\Omega \), the functions \(f_{\Omega}\) that achieve the maximum in (1.9) (i.e. the eigenfunctions of \(L_{\Omega}\) associated with its first eigenvalue \(\lambda _{1}(\Omega )\)) are those functions whose STFT is optimally concentrated in that particular set \(\Omega \). When \(\Omega \) is a ball, these eigenfunctions are the functions described in (1.4) and appearing also in (1.7): therefore, specifying Theorem 1.1 to the case where \(f = f_{\Omega }\) is the first eigenfunction of \(L_{\Omega}\), normalized so that \(\|f_{\Omega }\|_{L^{2}}=1\), we obtain the following stability result for the first eigenvalue and eigenfunction of localization operators:

Corollary 1.4

Let \(\Omega \subset \mathbb{R}^{2}\) be a measurable set of positive finite Lebesgue measure, and let \(\lambda _{1}(\Omega )\) be the first eigenvalue of the localization operator \(L_{\Omega}\) as in (1.9), with unit-norm eigenfunction \(f_{\Omega}\). Then (1.10) holds true, and

$$ \min _{z_{0}\in \mathbb{C}, |c|=1} \|f_{\Omega } - c\, \varphi _{z_{0}} \|_{2} \leq C e^{|\Omega |/2} \left (1- \frac{\lambda _{1}(\Omega )}{1-e^{-|\Omega |}}\right )^{1/2}, $$
(1.11)

for some universal (explicitly computable) constant \(C\). Moreover, for some explicit constant \(K(|\Omega |)\) we also have

$$ \mathcal{A}(\Omega ) \leq K(|\Omega |) \left (1- \frac{\lambda _{1}(\Omega )}{1-e^{-|\Omega |}}\right )^{1/2} . $$

This result is the analogue of the stability results for the Faber–Krahn inequality for the Dirichlet Laplacian [5, 7, 13]. Note, however, that the stability estimate (1.7) is more general than (1.11), because it holds for arbitrary functions \(f \in L^{2}(\mathbb{R})\), which are not assumed to be eigenfunctions of the localization operator \(L_{\Omega }\). The results of Theorem 1.1 are also stronger than the available stability results for the Faber–Krahn inequality for the Dirichlet Laplacian in another respect: in contrast with [5, 7], our proof of Theorem 1.1 is quantitative and does not rely on compactness arguments such as the penalization method [8]. It is for this reason that the constants in estimates (1.7)–(1.8) can be made explicit. Note, moreover, that the set \(\Omega \) in Theorem 1.1 is not assumed to be smooth; in fact, since \(\mathcal{V}f\) is essentially an entire function via the Bargmann transform, we can replace \(\Omega \) with a suitable super-level set of a holomorphic function, which in Sect. 3 we prove to be very well-behaved (we then use the rigidity of the problem to pass from super-level sets of holomorphic functions back to the original set \(\Omega \)).

We saw in Remark 1.2 that (1.7) is sharp, but whether Corollary 1.4 is sharp as well is a more delicate question. To answer it, one would need to either (i) compute the first eigenfunctions of \(L_{\Omega}\) for domains \(\Omega \) close to a ball, or (ii) given a function \(f\) close to the Gaussian \(\varphi \), construct a domain \(\Omega _{f} \subset \mathbb{R}^{2}\) such that \(f\) is the first eigenfunction of \(L_{\Omega}\). Strategy (i) appears rather difficult: to the best of our knowledge, the eigenfunctions of \(L_{\Omega}\) are not known even in the simple case where \(\Omega \) is an ellipse of small eccentricity; see [1, 9]. Implementing strategy (ii) involves tools essentially disjoint from those of this manuscript and so we decided not to address the question of optimality of Corollary 1.4 here; instead, this is one of the main goals of an upcoming work by the third author [35].
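
For comparison, when \(\Omega \) is a ball centered at the origin the spectrum of \(L_{\Omega}\) is completely explicit; this is classical (see e.g. the radially symmetric analyses in [1, 9]) and is recorded here only for orientation, as it is not needed in our proofs. The Hermite functions are then eigenfunctions of \(L_{B(0,R)}\), with eigenvalues

$$ \frac{1}{k!}\int _{0}^{\pi R^{2}} t^{k} e^{-t} \operatorname{d\!}t, \qquad k=0,1,2,\ldots , $$

which decrease in \(k\); in particular the largest one, \(\lambda _{1}(B(0,R))\) in the notation of (1.9), equals \(1-e^{-\pi R^{2}}=1-e^{-|B(0,R)|}\) and is attained by the Gaussian \(\varphi \), in accordance with the equality cases of (1.10) and (1.4). The computation is most transparent in the Fock-space picture of Sect. 1.2, where the monomial basis (1.13) diagonalizes the corresponding operator.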

To discuss the main ideas behind the proofs of our results, we now briefly recall some facts and background notions from [33], which we shall use throughout the paper. We point out, however, that the proof of Theorem A in [33] cannot be readily adapted to yield quantitative results such as (1.7) or (1.8). Instead, the proof of these inequalities requires a set of new geometric ideas and estimates in the Fock space, which are the core of the present paper and which, often being of a general character (such as Lemma 2.1 or the results in Sect. 3), are of interest in their own right.

1.2 Proof strategy in the Bargmann–Fock space

As shown in [33], energy concentration problems for the STFT can be very cleanly formulated (and dealt with) in terms of the Fock space [39], i.e. the Hilbert space \(\mathcal{F}^{2}(\mathbb{C})\) of all holomorphic functions \(F \colon \mathbb{C}\to \mathbb{C}\) for which

$$ \left \| F \right \|_{\mathcal{F}^{2}} := \left ( \int _{ \mathbb{C}} \left | F(z) \right |^{2} e^{- \pi \left | z \right |^{2}} \operatorname{d\!}z \right )^{1/2} < \infty , $$

endowed with the natural scalar product

$$ \langle F,G\rangle _{\mathcal{F}^{2}}= \int _{\mathbb{C}} F(z) \overline{G(z)}\, e^{- \pi \left | z \right |^{2}} \operatorname{d\!}z. $$

Here and throughout, \(z=x+i y\), and \(\operatorname{d\!}z=\operatorname{d\!}x\operatorname{d\!}y\) denotes Lebesgue measure on ℂ, which we always identify with \(\mathbb{R}^{2}\). This Hilbert space is closely connected to the STFT through the Bargmann transform \(\mathcal{B}\colon L^{2}(\mathbb{R})\to \mathcal{F}^{2}(\mathbb{C})\), defined for \(f \in L^{2}(\mathbb{R})\) as

$$ \mathcal{B}f(z) = 2^{1/4} \int _{\mathbb{R}} f(t) e^{2 \pi t z -\pi t^{2} - \frac{\pi}{2}z^{2}} \operatorname{d\!}t , \quad z \in \mathbb{C}, $$
(1.12)

see e.g. [19, Sect. 3.4]. The Bargmann transform is a unitary isomorphism which maps the orthonormal basis of Hermite functions on ℝ onto the orthonormal basis of \(\mathcal{F}^{2}(\mathbb{C})\) given by the normalized monomials

$$ e_{k}(z) = \left ( \frac{\pi ^{k}}{k!} \right )^{1/2} z^{k} , \quad k=0,1,2, \ldots . $$
(1.13)

More importantly for us, the definition of ℬ encodes the crucial property that

$$ \mathcal{V}f (x,-\omega ) = e^{\pi i x \omega } \mathcal{B}f(z) e^{- \pi \left | z \right |^{2}/2} , \quad z = x+i\omega , $$

which allows us to express the energy concentration in the time-frequency plane in terms of functions in the Fock space, since

$$ \frac{\int _{\Omega } \left | \mathcal{V}f(x,\omega ) \right |^{2} \operatorname{d\!}x \operatorname{d\!}\omega }{\left \| f \right \|_{L^{2}}^{2}} = \frac{\int _{\Omega '} \left | \mathcal{B}f(z) \right |^{2} e^{- \pi \left | z \right |^{2}} \operatorname{d\!}z}{\left \| \mathcal{B}f \right \|_{\mathcal{F}^{2}}^{2}} , $$
(1.14)

where \(\Omega ' = \left \{ (x, \omega ): (x,-\omega ) \in \Omega \right \}\). In this new setting, the image via ℬ of the functions \(\varphi _{z_{0}}\) defined in (1.4) takes the form

$$ \mathcal{B}\varphi _{z_{0}} = F_{z_{0}}, \qquad F_{z_{0}}(z)= e^{- \frac{\pi}{2} \left | z_{0} \right |^{2}} e^{\pi z \overline{z_{0}}}, $$
(1.15)

and therefore Theorem A can be rephrased in terms of the Fock space as follows, cf. [33, Theorem 3.1]:

Theorem B

If \(\Omega \subset \mathbb{R}^{2}\) is a measurable set with positive and finite Lebesgue measure, and if \(F \in \mathcal{F}^{2}(\mathbb{C}) \setminus \left \{ 0 \right \}\) is an arbitrary function, then

$$ \frac{\int _{\Omega } \left | F(z) \right |^{2} e^{-\pi \left | z \right |^{2}} \operatorname{d\!}z}{\left \| F \right \|_{\mathcal{F}^{2}}^{2}} \leq 1-e^{- \left | \Omega \right |} . $$
(1.16)

Moreover, equality is attained if and only if \(\Omega \) coincides (up to a set of measure zero) with a ball centered at some \(z_{0}\in \mathbb{C}\) and, at the same time, \(F = c F_{z_{0}}\) for some \(c\in \mathbb{C}\setminus \{0\}\).
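
For later use we also record the elementary computation behind the equality case (a direct check, included for the reader's convenience): for the extremal functions in (1.15) one has

$$ |F_{z_{0}}(z)|^{2} e^{-\pi \left | z \right |^{2}} = e^{-\pi |z_{0}|^{2}+2\pi \operatorname{Re}(z\overline{z_{0}})-\pi |z|^{2}} = e^{-\pi \left | z-z_{0} \right |^{2}}, $$

so that \(\Vert F_{z_{0}}\Vert _{\mathcal{F}^{2}}=1\) and, when \(\Omega =B(z_{0},r)\), the left-hand side of (1.16) equals \(1-e^{-\pi r^{2}}=1-e^{-|\Omega |}\), as it should.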

Similarly, we can rephrase Theorem 1.1 over the Bargmann–Fock space, as follows:

Theorem 1.5

Fock space version of Theorem 1.1

There is an explicitly computable constant \(C>0\) such that, for all measurable sets \(\Omega \subset \mathbb{R}^{2}\) with positive finite measure and all functions \(F\in \mathcal{F}^{2}(\mathbb{C})\backslash \{0\}\), we have

$$ \min _{\substack{|c|=\|F\|_{\mathcal{F}^{2}},\\ z_{0}\in \mathbb{C}}} \frac{\left \| F-cF_{z_{0}} \right \|_{\mathcal{F}^{2}}}{\left \| F \right \|_{\mathcal{F}^{2}}} \leq C \left (e^{\left | \Omega \right |} \delta (F;\Omega )\right )^{1/2}, $$
(1.17)

where

$$ \delta (F;\Omega ) := 1- \frac{\int _{\Omega }|F(z)|^{2} e^{-\pi |z|^{2}}\operatorname{d\!}z}{(1-e^{-|\Omega |})\Vert F\Vert _{\mathcal{F}^{2}}^{2}}. $$
(1.18)

Moreover, for some explicit constant \(K(|\Omega |)\) we also have

$$ \mathcal{A}(\Omega ) \leq K(|\Omega |) \delta (F;\Omega )^{1/2} . $$
(1.19)

We warn the reader that in (1.18) we use the same notation as in (1.5) to denote the Fock-space counterpart of the deficit. However, no confusion should arise from this abuse of notation, since we always use an upper-case letter to denote elements \(F\) of the Fock space, corresponding to elements \(f\) of \(L^{2}\).

We will provide two different proofs of this theorem, based on a careful study of the real analytic function

$$ u_{F}(z) = u(z) := \left | F(z) \right |^{2} e^{- \pi \left | z \right |^{2}} $$
(1.20)

and the properties of its super-level sets

$$ A_{t} := \left \{ u >t \right \} = \left \{ z \in \mathbb{C} \colon u(z) >t \right \}, $$
(1.21)

where \(F\) is an arbitrary function in \(\mathcal{F}^{2}(\mathbb{C})\setminus \{0\}\). This study was initiated in [33], where it was proved that the distribution function

$$ \mu _{F}(t)=\mu (t) := |A_{t}|,\quad t\geq 0 $$
(1.22)

is locally absolutely continuous on \((0,\infty )\) and satisfies

$$ \mu '(t)\leq - \,\frac {1}{t}\quad \text{for a.e. } t\in (0,T),\qquad T := \max _{z\in \mathbb{C}} u(z), $$
(1.23)

from which one readily obtains that

$$ \mu (t) \geq \log _{+} \frac {T}{t}\quad \text{for all } t>0,\qquad \text{where $\log _{+} x := \max \{0,\log x\}$}. $$
(1.24)
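
Indeed (we spell out the short argument, which is implicit in [33]): since \(\mu \) is locally absolutely continuous on \((0,\infty )\) and \(\mu (T)=|\{u>T\}|=0\), integrating (1.23) over \([t,T]\) gives, for every \(t\in (0,T)\),

$$ \mu (t) = \mu (t)-\mu (T) = -\int _{t}^{T} \mu '(\tau )\operatorname{d\!}\tau \geq \int _{t}^{T} \frac{\operatorname{d\!}\tau}{\tau} = \log \frac{T}{t}, $$

while for \(t\geq T\) one trivially has \(\mu (t)=0=\log _{+}\frac{T}{t}\).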

Notice that, when \(F=c F_{z_{0}}\) as in the last part of Theorem B, then \(T=|c|^{2}\) and \(\mu (t)=\log _{+} T/t\). In [33], (1.23) can be found in the equivalent form

$$ u^{*}(s) + (u^{*})'(s) \geq 0, \text{ for almost every } s \ge 0, $$
(1.25)

where \(u^{*}\colon \mathbb{R}^{+}\to (0,T]\) is the decreasing rearrangement of \(u\), usually defined as

$$ u^{*}(s) := \sup \left \{ t \geq 0 \colon \mu (t) >s \right \}, \qquad s\geq 0. $$
(1.26)

The function \(u^{*}\) was also proved in [33] to be invertible, with \(\mu |_{(0,T]}\) as its inverse (see [4] for a direct use of (1.23) in this form). This fact enables one to find, for any number \(s\geq 0\), a unique super-level set \(A_{t}=A_{u^{*}(s)}\) of measure \(s\), which is the set where \(u\) is most concentrated among all sets of measure \(s\), namely

$$ I_{F}(s) = I(s) := \int _{\left \{ u >u^{*}(s) \right \}} u(z) \operatorname{d\!}z \geq \int _{\Omega }u(z) \operatorname{d\!}z,\quad \text{whenever $|\Omega |=s$.} $$
(1.27)

Based on (1.25), it was proved in [33] that the function \(G(\sigma ) := I(- \log \sigma )\) is convex on \([0,1]\). Since

$$ G(0) = \lim _{s \to \infty} I(s) = \int _{\mathbb{C}} u(z) \operatorname{d\!}z = \|F\|_{\mathcal{F}^{2}}^{2},\qquad G(1) = I(0) = 0, $$

the convexity of \(G\) yields the upper bound \(G(\sigma ) \le (1-\sigma )\,G(0)+\sigma \,G(1)= \|F\|_{\mathcal{F}^{2}}^{2}(1-\sigma )\) or, equivalently (taking \(\sigma =e^{-s}\)),

$$ I(s) \le \|F\|_{\mathcal{F}^{2}}^{2}(1-e^{-s}), $$
(1.28)

which, combined with (1.27), proves (1.16).

It was then observed in [33] that, if equality holds in (1.16), then by convexity we must have \(G(\sigma )\equiv \|F\|_{\mathcal{F}^{2}}^{2} (1-\sigma )\) on \([0,1]\) or, equivalently, \(I(s)=\|F\|_{\mathcal{F}^{2}}^{2} ( 1- e^{-s})\) for every \(s\geq 0\), and in particular

$$ I'(0)=\Vert F\Vert _{\mathcal{F}^{2}}^{2}. $$
(1.29)

But since \(\mathcal{F}^{2}(\mathbb{C})\) is a Hilbert space with reproducing kernel \(K_{w}(z) = e^{\frac{\pi}{2} \left | w \right |^{2}} F_{w}(z)\), we have

$$ |F(z)|^{2} e^{-\pi |z|^{2}} \le \|F\|_{\mathcal{F}^{2}}^{2} $$
(1.30)

for all \(F\in \mathcal{F}^{2}(\mathbb{C})\), with equality at some \(z=z_{0}\) if and only if \(F = c F_{z_{0}}\) for some \(c \in \mathbb{C}\) (see e.g. [33, Proposition 2.1]). Since in any case \(I'(0)=u^{*}(0)=T=\max _{z \in \mathbb{C}} |F(z)|^{2} e^{-\pi |z|^{2}}\), (1.29) shows that equality in (1.16) forces equality (for at least one \(z\)) also in (1.30), and this proves the last part of Theorem B.
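
For completeness we recall the standard one-line argument behind (1.30) (cf. [33, Proposition 2.1]): since \(\Vert K_{z}\Vert _{\mathcal{F}^{2}}^{2}=K_{z}(z)=e^{\pi |z|^{2}}\), the reproducing property and Cauchy-Schwarz give

$$ |F(z)|^{2} = |\langle F, K_{z}\rangle _{\mathcal{F}^{2}}|^{2} \leq \Vert F\Vert _{\mathcal{F}^{2}}^{2}\, \Vert K_{z}\Vert _{\mathcal{F}^{2}}^{2} = \Vert F\Vert _{\mathcal{F}^{2}}^{2}\, e^{\pi |z|^{2}}, $$

with equality at some \(z=z_{0}\) if and only if \(F\) is a scalar multiple of \(K_{z_{0}}\), that is, of \(F_{z_{0}}\).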

In the rest of this introduction (and also in Sect. 2) we assume without loss of generality the normalization condition \({\|F\|_{\mathcal{F}^{2}}=1}\). A simple but fundamental observation, underlying both our proofs of Theorem 1.5, is that closeness to equality in (1.30) can be precisely quantified: indeed,

$$ \min _{\substack{z_{0}\in \mathbb{C}\\ |c|=1}} \Vert F-cF_{z_{0}} \Vert _{\mathcal{F}^{2}}^{2} = 2(1-\sqrt{T})\leq 2(1-T), $$
(1.31)

cf. Lemma 2.5 below. Thus, to prove estimate (1.17) in Theorem 1.5, we need to show that the deficit controls \((1-T)\).

Our first proof of Theorem 1.5 is based on a careful study of the area between the graphs of \(s\mapsto u^{*}(s)\) and \(s\mapsto e^{-s}\). Consider a parameter \(s^{*}>0\), defined to be a solution of the equation

$$ u^{*}(s^{*})=e^{-s^{*}}. $$

Such a solution always exists and, as soon as \(T<1\), it is unique. An argument relying on the convexity inequality (1.25) yields

$$ \int _{0}^{s^{*}} \left (e^{-s} - u^{*}(s)\right ) \operatorname{d\!}s \leq e^{| \Omega |} \delta , \qquad \delta :=\delta (F;\{u>u^{*}(|\Omega |)\}), $$
(1.32)

cf. Lemma 2.3. Thus, to prove the desired stability estimate (1.17), by (1.31) and (1.32) it is enough to show that the integral above controls \((1-T)\). In fact, it is not difficult to see that this integral controls \((1-T)\) to a suboptimal power, as we have

$$ \frac{(1-T)^{2}}{2} = \int _{0}^{s^{*}} (1-s-T)_{+} \operatorname{d\!}s\leq \int _{0}^{s^{*}} \left (e^{-s} - u^{*}(s)\right ) \operatorname{d\!}s \leq e^{|\Omega |} \delta . $$
(1.33)

Thus, by (1.31), (1.33) already yields a suboptimal form of stability.

To upgrade (1.33) to an optimal estimate, we need to estimate the integral in (1.32) much more precisely, and our approach is to give a precise quantification of the equality cases in (1.24). By passing to the inverse functions we have

$$ \int _{e^{-s^{*}}}^{T} \left (\log \frac {1}{t} - \mu (t)\right ) \operatorname{d\!}t \leq \int _{0}^{s^{*}} \left (e^{-s} - u^{*}(s)\right ) \operatorname{d\!}s, $$
(1.34)

cf. (2.50) below, and our proof proceeds by establishing a sharp estimate for the distribution function \(\mu (t)\): precisely, there is a universal constant \(C>0\) such that

$$ \mu (t) \leq (1+C(1-T)) \log \frac {T}{t}, $$
(1.35)

provided that \(t\) and \(T\) are sufficiently close to 1 (see Lemma 2.1); in this paper, \(C\) always denotes a universal constant, which may however change from line to line. Note that, by the suboptimal estimate (1.33), this restriction on \(t\), \(T\) does not restrict generality. Establishing (1.35) is the most delicate part of the whole argument, as this estimate relies on a cancellation effect due to the analyticity of \(F\). The desired estimate (1.17) then follows by an elementary analysis, after plugging (1.35) into (1.34) and using again (1.31) and (1.32).

Concerning the stability of the set in (1.19), we note that it is not clear how to quantify inequality (1.27) used in the proof of Theorem B described above, at least for general sets \(\Omega \). Nonetheless, since we already have estimate (1.17), we know that \(u\) is close to a Gaussian. This allows us to first compare \(\Omega \) with \(A_{u^{*}(|\Omega |)}\), and then compare \(A_{u^{*}(|\Omega |)}\) with a ball.

The strategy described above also works, after adapting the present arguments, to show the stability of a similar Faber–Krahn inequality for wavelet transforms (see [36]). We plan to address this in a future work.

1.3 The geometry of super-level sets and a variational approach

As mentioned above, we will give two different proofs of Theorem 1.5, the first one having been described in the previous subsection. We now describe our second proof, which is variational in nature and rests on the following result of independent interest:

Proposition 1.6

There are small explicit constants \(\delta _{0},c>0\) such that the following holds: for all \(s< c\log (1/\delta _{0})\) and all \(F\in \mathcal{F}^{2}(\mathbb{C})\) such that

$$ e^{s} \delta (F;A_{u^{*}(s)}) \leq \delta _{0} $$

the super-level set

$$ A_{u^{*}(s)}=\{z\in \mathbb{C}:u(z)>u^{*}(s)\} $$

has smooth boundary and convex closure.

Proposition 1.6 shows in particular that level sets of \(u\) sufficiently close to its maximum can be seen as smooth graphs over a circle, and can therefore be deformed to a circle through an appropriate flow. This observation, in turn, allows us to give a variational approach to Theorem 1.5, in the spirit of Fuglede’s computation [16] for the quantitative isoperimetric inequality. We refer the reader to [20] for a detailed introduction to variational methods in shape-optimization problems.

To be precise, and comparing with (1.28), for some fixed \(s>0\) we consider the functional

$$ \mathcal {K}\colon F \mapsto \frac{I_{F}(s)}{\|F\|_{\mathcal{F}^{2}}^{2}}. $$

We study perturbations of \(F_{0}\equiv 1\), i.e. we consider \(F=1+\varepsilon G\) for some small \(\varepsilon >0\). Taking \(\Omega =A_{u^{*}(s)}\), we note that by a formal Taylor expansion we have

$$ \begin{aligned}\delta (1+\varepsilon G;\Omega ) &= \frac{1}{1-e^{-|\Omega |}}\big( \mathcal {K}[1]-\mathcal {K}[1+\varepsilon G]\big) \\ &\geq \frac{1}{1-e^{-|\Omega |}} \Big(-\frac{\varepsilon ^{2}}{2} \nabla ^{2} \mathcal {K}[1](G,G) + o(\varepsilon ^{2})\Big), \end{aligned}$$

since \(\nabla \mathcal {K}[1](G)=0\) for all \(G\in \mathcal{F}^{2}(\mathbb{C})\) satisfying the orthogonality conditions

$$ \langle 1,G\rangle _{\mathcal{F}^{2}} = \langle z, G\rangle _{ \mathcal{F}^{2}} = 0, $$

according to Theorem 5.1 and Lemma 5.2. Thus, once the Taylor expansion above has been justified (and this is achieved in Appendix A), we see that for small perturbations of \(F_{0}\equiv 1\) the deficit is governed by the second variation of \(\mathcal {K}\). For stability to hold, this variation ought to be uniformly negative definite, since \(\varepsilon \) is essentially the left-hand side in (1.17). In Proposition 5.3 we show that, under the above orthogonality conditions, we have

$$ \frac {1}{2}\nabla ^{2} \mathcal {K}[1](G,G) \leq - s e^{-s}\|G\|^{2}_{ \mathcal{F}^{2}}. $$
(1.36)

This inequality is interesting for several reasons. Firstly, it is sharp, as highlighted by taking \(G(z)=z^{2}\). Secondly, by the suboptimal stability result (1.33), to prove (1.17) it is enough to consider functions with small deficit. Therefore, the above Taylor expansion, combined with (1.36), easily yields the stability estimate (1.17), although with a suboptimal dependence of the constant on \(|\Omega |\). Finally, the non-degeneracy of \(\nabla ^{2} \mathcal {K}\) provided by (1.36), combined once again with the above Taylor expansion, shows that the deficit behaves quadratically near \(F_{0}\equiv 1\), which leads to a direct proof of the optimality of our estimates, as claimed in Remark 1.2.
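
It may be useful to make explicit why the two orthogonality conditions above are the natural ones (this is only a reformulation of the preceding discussion, not an extra assumption): writing \(c=1+a\) with \(a\in \mathbb{C}\) and taking \(a\), \(z_{0}\) small, one has, in the \(\mathcal{F}^{2}\)-norm,

$$ c\,F_{z_{0}}(z) = (1+a)\, e^{-\frac{\pi}{2} \left | z_{0} \right |^{2}} e^{\pi z \overline{z_{0}}} = 1 + a + \pi \overline{z_{0}}\, z + O\big(|a|^{2}+|z_{0}|^{2}\big), $$

so the tangent space at \(F_{0}\equiv 1\) to the family of optimizers \(\{cF_{z_{0}}\}\) is \(\operatorname{span}_{\mathbb{C}}\{1,z\}\), and the conditions \(\langle 1,G\rangle _{\mathcal{F}^{2}}=\langle z,G\rangle _{\mathcal{F}^{2}}=0\) simply say that the perturbation \(G\) is orthogonal to the tangent directions of this family at \(F_{0}\).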

1.4 Outline

In Sect. 2 we give a first proof of (1.17), following the strategy described in Sect. 1.2 above. In Sect. 3 we study the geometry of the super-level sets of functions with small deficit and, in particular, we prove Proposition 1.6 above. In Sect. 4 we prove the set stability estimate (1.8). Section 5 contains the variational proof described in Sect. 1.3 and in particular the proof of (1.36). In Sect. 6 we prove the claims from Remark 1.2. Finally, in Sect. 7 we extend our results to the higher-dimensional setting, as claimed in Remark 1.3.

2 First proof of the function stability part

The goal of this section is to prove (1.17), by combining a series of new results (potentially of independent interest) valid for arbitrary functions \(F\in \mathcal{F}^{2}\), which for convenience will be assumed to be normalized by

$$ \Vert F\Vert _{\mathcal{F}^{2}}=1. $$
(2.1)

In these statements we will make extensive use of the notation and of the background results recalled in Sect. 1.2, concerning the functions \(u(z)\), \(\mu (t)\) and \(u^{*}(s)\) associated with a given \(F\in \mathcal{F}^{2}\). In particular, as in (1.23), we will let

$$ T := \max _{z\in \mathbb{C}} u(z)=u^{*}(0)=\max _{z\in \mathbb{C}} |F(z)|^{2} e^{-\pi |z|^{2}}, $$
(2.2)

recalling that \(T\in [0,1]\) whenever (2.1) is assumed.

We also note that, since \(u^{*}\) is (by its definition) equimeasurable with \(u\) and decreasing, there holds

$$ \int _{\{u>u^{*}(s_{0})\}} u(z)\operatorname{d\!}z =\int _{0}^{s_{0}} u^{*}(s) \operatorname{d\!}s \quad \forall s_{0}\geq 0. $$
(2.3)

Moreover, as recalled in Sect. 1.2, when \(F=c F_{z_{0}}\) (with \(|c|=1\)) is one of the optimal functions described in Theorem B, one has \(\mu (t)=\log _{+} \frac {1}{t}\) or, equivalently, \(u^{*}(s)=e^{-s}\). For this reason, a careful comparison between \(e^{-s}\) and \(u^{*}(s)\) (for an arbitrary \(F\) satisfying (2.1)) will be the core of the results of this section. Since, when (2.1) holds, letting \(s_{0}\to \infty \) in (2.3) we have

$$ 1=\int _{0}^{\infty }u^{*}(s)\operatorname{d\!}s =\int _{0}^{\infty }e^{-s}\operatorname{d\!}s, $$
(2.4)

as noted in [27], there exists at least one value \(s^{*}>0\) for which

$$ u^{*}(s)\quad \textstyle\begin{cases} \quad \leq e^{-s} & \text{if $s \in [0,s^{*}]$} \\ \quad \geq e^{-s} & \text{if $s\geq s^{*}$} \end{cases} $$
(2.5)

or, equivalently, in terms of the inverse functions, a value \(t^{*} \in (0,T)\) for which

$$ \mu (t) \quad \textstyle\begin{cases} \quad \geq \log \frac {1}{t} & \text{if $t \in (0,t^{*}]$} \\ \quad \leq \log \frac {1}{t} & \text{if $t\in [t^{*},T]$} \end{cases} $$
(2.6)

(note \(\mu (t)=0\) for \(t\geq T\)). When \(T=1\) (or, equivalently, if \(F\) is one of the optimal functions described in Theorem B, see [33, Proposition 2.1]) and hence \(u^{*}(s)=e^{-s}\), clearly all values of \(s^{*}\) (or \(t^{*}\)) have this property, but when \(T<1\) we will prove in Corollary 2.2 that \(s^{*}\) and \(t^{*}\) are in fact unique, with an unexpected universal upper bound on \(t^{*}\) (or lower bound on \(s^{*}\)).

With this background, we are now ready to state and prove the results of this section, starting with a sharp estimate for \(\mu (t)\), which shows that (1.24) becomes almost an equality when \(T\) is close to 1.

Lemma 2.1

For every \(t_{0}\in (0,1)\), there exists a threshold \(T_{0}\in (t_{0},1)\) and a constant \(C_{0}>0\) with the following property. If \(F\in \mathcal{F}^{2}(\mathbb{C})\) is such that \(\Vert F\Vert _{\mathcal{F}^{2}}=1\) and \(T \geq T_{0}\), then

$$ \mu (t)\leq \left ( 1+{C_{0} \frac{1-T}{T}}\right )\log \frac {T}{t} \quad \forall t\in [t_{0},T]. $$
(2.7)

We note, before proving such a result, that the proof presented below shows that one can choose \(C_{0}=C/t_{0}^{3}\), where \(C\) is some universal constant.

Proof

Given \(t_{0}\in (0,1)\) and \(F\) as in the statement, we split the proof into several steps.

Step I. We may assume that \(u(z)\) achieves its absolute maximum \(T\) at \(z=0\) and that \(F(0)\) is a positive real number, so that \(F(0)=\sqrt {T}\). Expanding \(F\) with respect to the orthonormal basis of monomials (1.13), we have

$$ \frac {F(z)}{\sqrt{T}}=1 +R(z),\quad z\in \mathbb{C}, $$
(2.8)

where \(R(z)\) is the entire function

$$ R(z) := \sum _{n=2}^{\infty }\frac{a_{n}}{\sqrt{T}}\, \frac {\pi ^{n/2} z^{n}}{\sqrt{n!}},\quad z\in \mathbb{C}$$
(2.9)

The fact that \(a_{1}=0\), i.e. \(F'(0)=0\), follows from our assumption that \(u(z)\) has a critical point at \(z=0\): by (1.20) this forces a critical point of \(|F(z)|^{2}\) at \(z=0\), and since \(F(0)=\sqrt{T}\neq 0\) this in turn forces \(F'(0)=0\). The assumption that \(1=\Vert F\Vert _{\mathcal{F}^{2}}^{2}\) takes the form \(1=T+\sum _{n=2}^{\infty }|a_{n}|^{2}\), which we record in the form

$$ \sum _{n=2}^{\infty }\frac{|a_{n}|^{2}}{T}=\frac {1-T}{T} =:\delta ^{2}, $$
(2.10)

hereby defining \(\delta \). In the sequel we will often tacitly assume that \(\delta \) is small enough, depending only on \(t_{0}\); in the end, the required smallness of \(\delta \) will determine the threshold \(T_{0}\) in the statement of Lemma 2.1.

From (2.9), Cauchy-Schwarz and (2.10) we obtain

$$ |R(z)|^{2}\leq \left (\sum _{n=2}^{\infty }\frac{|a_{n}|^{2}}{T} \right ) \left (\sum _{n=2}^{\infty }\frac {\pi ^{n} |z|^{2n}}{n!} \right ) = \delta ^{2}\left (e^{\pi |z|^{2}}-1-\pi |z|^{2}\right ). $$
(2.11)

In particular, \(|R(z)|^{2}\leq \delta ^{2}\left (e^{\pi |z|^{2}}-1\right )\); hence, taking the squared modulus in (2.8), we have

$$ \frac {|F(z)|^{2}}{T} \leq 1 + \delta ^{2}\left (e^{\pi |z|^{2}}-1 \right ) + h(z), $$
(2.12)

where \(h(z)\) is the real valued harmonic function

$$ h(z):=2\mathop{\textrm{Re}} R(z),\qquad z\in \mathbb{C}. $$
(2.13)

Step II: Estimates for \(h\). Since \(|h(z)|\leq 2 |R(z)|\), the elementary inequality \(e^{x}-1-x\leq \frac {x^{2}}{2} e^{x}\), written with \(x=\pi |z|^{2}\) and combined with (2.11), implies

$$ |h(z)|\leq \sqrt{2}\pi \,\delta |z|^{2} e^{\frac{\pi |z|^{2}}{2}}, \qquad \forall z\in \mathbb{C}. $$
(2.14)

Differentiating (2.9), and then using Cauchy-Schwarz and (2.10) as in (2.11), we have

$$ |R'(z)|\leq \sum _{n=2}^{\infty }\frac {|a_{n}|}{\sqrt {T}} \frac {n \pi ^{n/2} |z|^{n-1}}{\sqrt {n!}} \leq \delta \left ( \sum _{n=2}^{ \infty }\frac {n^{2} \pi ^{n} |z|^{2(n-1)}}{n!} \right )^{ \frac {1}{2}}\leq \delta \sqrt{2} \pi |z| e^{\frac {\pi |z|^{2}}{2}} , $$
(2.15)

having used the inequality \(\frac {n^{2}}{n!}\leq \frac {2}{(n-2)!}\) in the last passage. Similarly, differentiating (2.9) twice, using Cauchy-Schwarz and estimating the resulting power series, we find

$$ |R''(z)|\leq \delta \left ( \sum _{n=2}^{\infty } \frac {n^{2}(n-1)^{2} \pi ^{n} |z|^{2(n-2)}}{n!} \right )^{ \frac {1}{2}} \leq \delta C (1+|z|^{2}) e^{\frac {\pi |z|^{2}}{2}}. $$
(2.16)

By (2.13) and the Cauchy-Riemann equations \(|\nabla h(z)|=2 |R'(z)|\) and \(|D^{2} h(z)|=2\sqrt {2} |R''(z)|\), we obtain the following uniform estimates with respect to the angular variable \(\theta \) for the first and second radial derivatives of \(h(r e^{i\theta})\):

$$ \begin{aligned} \left \vert \frac {\partial h(r e^{i\theta})}{\partial r} \right \vert \leq \delta 2\sqrt{2} \pi r e^{\frac {\pi r^{2}}{2}}, \end{aligned} $$
(2.17)

and

$$ \left \vert \frac {\partial ^{2} h(r e^{i\theta})}{\partial r^{2}} \right \vert \leq \delta C (1+r^{2}) e^{\frac {\pi r^{2}}{2}}. $$
(2.18)

Step III: Definition of \(E_{\sigma}\) and \(r_{\!\sigma }(\theta )\). Assuming \(T>t_{0}\), we consider any \(t\in [t_{0},T)\) and any complex number \(z=r e^{i\theta}\) (\(r\geq 0\)), and we observe that

$$ u(r e^{i\theta})>t\quad \iff \quad \frac {t e^{\pi r^{2}}}{T} < \frac {|F(r e^{i\theta})|^{2}}{T}. $$

Hence, by virtue of (2.12), we obtain the implication

$$ u(r e^{i\theta}) > t \quad \Longrightarrow \quad g_{\theta}(r,1) < 1, $$
(2.19)

where, for every fixed \(\theta \in [0,2\pi ]\), \(g_{\theta}\) is defined as

$$ g_{\theta}(r,\sigma ):= e^{\pi r^{2}}\left (\frac {t}{T} -\delta ^{2} \right )+\delta ^{2}- \sigma \,h(r e^{i\theta}),\quad r\geq 0,\quad \sigma \in [0,1]. $$
(2.20)

The variable \(\sigma \in [0,1]\) plays the role of a parameter that defines the family of planar sets

$$ E_{\sigma}:=\left \{ r e^{i\theta}\in \mathbb{C}\,|\,\, g_{\theta}(r, \sigma )< 1\right \},\quad \sigma \in [0,1]. $$

Since (2.19) is equivalent to the set inclusion \(\{u>t\}\subseteq E_{1}\), (2.7) will be proved if we show that

$$ |E_{1}|\leq \left (1+\frac {C\delta ^{2}}{t_{0}^{3}}\right )\log \frac {T}{t}. $$
(2.21)

The advantage of the parameter \(\sigma \) is that we can easily prove the analogous estimate for \(E_{0}\) (which is a disc, since \(g_{\theta}(r,0)\) is independent of \(\theta \)), and then show that this estimate is inherited by every \(E_{\sigma}\) (including \(E_{1}\)), by exploiting a cancellation effect due to the harmonicity of \(h\).

We first show that each set \(E_{\sigma}\) is star-shaped with respect to the origin, by showing that \(g_{\theta}(r,\sigma )\) is increasing in \(r\) (for fixed \(\theta \) and \(\sigma \)). Using (2.17) and assuming e.g. that \(\delta ^{2}+\sqrt {2}\delta \leq t_{0}/2\), we have from (2.20)

$$ \begin{aligned} \frac {\partial g_{\theta}(r,\sigma )}{\partial r} & = 2\pi r e^{\pi r^{2}} \left (\frac {t}{T} -\delta ^{2}\right )-\sigma \frac {\partial h(r e^{i\theta})}{\partial r} \\ &\geq 2\pi r e^{\pi r^{2}}\left (t_{0} -\delta ^{2}\right ) - \delta 2 \sqrt{2} \pi r e^{\frac {\pi r^{2}}{2}} \\ &\geq 2\pi r e^{\pi r^{2}}\left (t_{0} -\delta ^{2}-\sqrt {2} \delta \right ) \geq \pi t_{0} r e^{\pi r^{2}}>0. \end{aligned} $$
(2.22)

Since \(g_{\theta}(0,\sigma )=t/T\geq t_{0}\), integrating the previous bound we also obtain that

$$ g_{\theta}(r,\sigma )\geq \frac {t_{0}}{2}\bigl(1+ e^{\pi r^{2}} \bigr),\quad \forall r\geq 0,\quad \forall \sigma \in [0,1], $$
(2.23)

and hence, since \(g_{\theta}(0,\sigma )=t/T< 1\), for every \(\sigma \in [0,1]\) the equation in \(r>0\)

$$ g_{\theta}(r,\sigma )=1 $$
(2.24)

has a unique solution \(r_{\!\sigma }> 0\), which we shall also denote by \(r_{\!\sigma }(\theta )\) when the dependence of \(r_{\!\sigma }\) on the angle \(\theta \) is to be stressed, as in (2.25) below. Since \(E_{\sigma}\) is star-shaped, using polar coordinates we can compute its area \(|E_{\sigma}|\) in terms of \(r_{\!\sigma }\), as

$$ f(\sigma ):= |E_{\sigma}|=\frac {1}{2} \int _{0}^{2\pi}r_{\!\sigma }( \theta )^{2}\operatorname{d\!}\theta ,\quad \sigma \in [0,1]. $$
(2.25)

Notice that \(f(1)\) is the area of \(E_{1}\) that we want to estimate as in (2.21), while

$$ f(0)=\frac {1}{2}\int _{0}^{2\pi} r_{0}(\theta )^{2}\operatorname{d\!}\theta = \pi r_{0}^{2}, $$
(2.26)

since when \(\sigma =0\), equation (2.24) simplifies to

$$ e^{\pi r_{0}^{2}}\left (\frac {t}{T} -\delta ^{2}\right )+\delta ^{2}=1, $$
(2.27)

so \(r_{0}\) is independent of \(\theta \) and \(E_{0}\) is a ball of radius \(r_{0}\). Note that the sets \(E_{\sigma}\) are uniformly bounded, since (2.23) and (2.24) entail that

$$ \pi r_{\!\sigma }^{2} \leq \log \frac {2}{t_{0}},\quad \forall \sigma \in [0,1]. $$
(2.28)

Step IV: Estimates for \(r_{\!\sigma }'\) and \(r_{\!\sigma }''\). By (2.22) and the implicit function theorem, \(r_{\!\sigma }\) is, for every fixed value of \(\theta \in [0,2\pi ]\), a smooth, bounded function of the parameter \(\sigma \in [0,1]\). Denoting for simplicity by \(r_{\!\sigma }'\) its derivative with respect to \(\sigma \), we have

$$ r_{\!\sigma }'= \frac{\partial r_{\!\sigma }(\theta )}{\partial \sigma}=-\,\, \frac {\frac {\partial g_{\theta}}{\partial \sigma}(r_{\!\sigma },\sigma )}{\frac{\partial g_{\theta}}{\partial r}(r_{\!\sigma },\sigma )}= \frac {h(r_{\!\sigma }e^{i\theta})}{\frac {\partial g_{\theta}}{\partial r}(r_{\!\sigma },\sigma )}, \quad \sigma \in [0,1], $$
(2.29)

and using (2.14) and (2.22) we find the bound

$$ |r_{\!\sigma }'| \leq \frac {\sqrt {2} \,\delta r_{\!\sigma }}{t_{0}}e^{- \frac {\pi r_{\!\sigma }^{2}}{2} }. $$
(2.30)

In particular, this implies that

$$ r_{\!\sigma }^{2} \leq 2 r_{0}^{2}\quad \forall \sigma \in [0,1], $$
(2.31)

Indeed, since by (2.30) \(|r_{\!\sigma }'|/r_{\!\sigma }\leq \sqrt {2}\,\delta /t_{0} \leq \log \sqrt{2}\) provided \(\delta \) is small enough, we have for every \(\sigma \in [0,1]\)

$$ \log r_{\!\sigma }=\log r_{0}+ \int _{0}^{\sigma } \frac {r_{s}'}{r_{s}}\operatorname{d\!}s\leq \log r_{0}+\sigma \log \sqrt{2}\leq \log r_{0}+\log \sqrt{2}, $$

and (2.31) follows.

Differentiating (2.29) with respect to \(\sigma \), we have

$$ r_{\!\sigma }''= \frac {\frac {\partial h(r e^{i\theta})}{\partial r} r_{\!\sigma }'}{ \frac {\partial g_{\theta}}{\partial r}} - \frac {h(r e^{i\theta})\left ( \frac {\partial ^{2} g_{\theta}}{\partial \sigma \partial r}+ \frac {\partial ^{2} g_{\theta}}{\partial r^{2}}r_{\!\sigma }'\right )}{\left (\frac{\partial g_{\theta}}{\partial r}\right )^{2}}. $$
(2.32)

Since by (2.20) and (2.17)

$$\begin{aligned} \left \vert \frac {\partial ^{2} g_{\theta}}{\partial \sigma \partial r} \right \vert = \left \vert \frac {\partial h(r e^{i\theta})}{\partial r} \right \vert \leq \delta C r e^{\frac{\pi r^{2}}{2}}, \end{aligned}$$

while by (2.20) and (2.18)

$$\begin{aligned} \left \vert \frac {\partial ^{2} g_{\theta}}{\partial r^{2}} \right \vert \leq (2\pi +4\pi ^{2} r^{2})e^{\pi r^{2}}\left (\frac {t}{T}- \delta ^{2}\right )+ \sigma \left \vert \frac {\partial ^{2} h(r e^{i\theta})}{\partial r^{2}} \right \vert \\ \leq \frac {t}{T} (2\pi +4\pi ^{2} r^{2})e^{\pi r^{2}}+ \delta C (1+ r^{2})e^{ \frac {\pi r^{2}}{2}} \leq C(1+ r^{2})e^{\pi r^{2}}, \end{aligned}$$

from (2.32) and (2.14) we see that

$$\begin{aligned} |r_{\!\sigma }''|&\leq \frac {\delta 2\sqrt {2}\pi r_{\!\sigma }e^{\frac{\pi r_{\!\sigma }^{2}}{2}}}{\pi t_{0} r_{\!\sigma }e^{\pi r_{\!\sigma }^{2}}} |r_{\!\sigma }'| + \frac {\sqrt {2}\pi \delta r_{\!\sigma }^{2} e^{\frac{\pi r_{\!\sigma }^{2}}{2} }}{(\pi t_{0} r_{\!\sigma }e^{\pi r_{\!\sigma }^{2}})^{2}} \left (\delta C r_{\!\sigma }e^{\frac{\pi r_{\!\sigma }^{2}}{2}}+ C(1+r_{ \!\sigma }^{2})e^{\pi r_{\!\sigma }^{2}} |r_{\!\sigma }'| \right ) \\ &\leq \frac{\delta C e^{-\frac {\pi r_{\!\sigma }^{2}}{2}}}{ t_{0}} |r_{ \!\sigma }'|+ \frac{\delta C e^{-\frac {3 \pi r_{\!\sigma }^{2}}{2}}}{ t_{0}^{2}} \left (\delta r_{\!\sigma }e^{\frac{\pi r_{\!\sigma }^{2}}{2}}+ (1+ r_{ \!\sigma }^{2})e^{\pi r_{\!\sigma }^{2}} |r_{\!\sigma }'| \right ). \end{aligned}$$

Combining with (2.30),

$$ |r_{\!\sigma }''|\leq \frac{ \delta ^{2}C r_{\!\sigma }e^{-\pi r_{\!\sigma }^{2}}}{ t_{0}^{2} } +\frac{\delta C e^{-\frac {3 \pi r_{\!\sigma }^{2}}{2}}}{ t_{0}^{2}} \left (\delta r_{\!\sigma }e^{\frac{\pi r_{\!\sigma }^{2}}{2}}+ (1+ r_{ \!\sigma }^{2})\frac {\delta Cr_{\!\sigma }}{t_{0}}e^{ \frac{\pi r_{\!\sigma }^{2}}{2}} \right ) \leq \frac {\delta ^{2}Cr_{\!\sigma }}{t_{0}^{3}}. $$
(2.33)

Step V: Proof of (2.21). Now, recalling the bounds (2.28) and (2.30), one can differentiate under the integral in (2.25), obtaining

$$ f'(\sigma )=\int _{0}^{2\pi} r_{\!\sigma }(\theta ) \frac{\partial r_{\!\sigma }(\theta )}{\partial \sigma} \operatorname{d\!}\theta . $$
(2.34)

Differentiating (2.34) again, and then using (2.33) and (2.30), we obtain the estimate

$$\begin{aligned} |f''(\sigma )|\leq \int _{0}^{2\pi} \left ( |r_{\!\sigma }'|^{2} +|r_{ \!\sigma }r_{\!\sigma }''|\right )\operatorname{d\!}\theta \leq \frac {C\delta ^{2}}{t_{0}^{3}}\int _{0}^{2\pi} r_{\!\sigma }^{2} \operatorname{d\!}\theta ,\quad \sigma \in [0,1]. \end{aligned}$$

This, combined with (2.31) and recalling (2.26), gives

$$ |f''(\sigma )|\leq \frac{C\delta ^{2}}{t_{0}^{3}} f(0), \quad \forall \sigma \in [0,1]. $$
(2.35)

We now claim that \(f'(0)=0\), which is the crucial step of the proof. Indeed, when \(\sigma =0\), we see from (2.27) and (2.22) that \(r_{0}\) and \(\partial g_{\theta}/\partial r\) are independent of \(\theta \), and therefore, by (2.29), when \(\sigma =0\) we may write

$$ \left .r_{\!\sigma }(\theta ) \frac{\partial r_{\!\sigma }(\theta )}{\partial \sigma}\right \vert _{ \sigma =0}= r_{0}\, \frac {h(r_{0}\, e^{i\theta})}{\displaystyle \frac {\partial g_{\theta}}{\partial r}(r_{0},0)}= \phi (r_{0}) h(r_{0}\, e^{i\theta}), $$

where \(\phi (r_{0})\neq0\) depends on \(r_{0}\) but is independent of \(\theta \). Therefore, from (2.34),

$$\begin{aligned} f'(0)=\int _{0}^{2\pi} r_{\!\sigma }(\theta ) \frac{\partial r_{\!\sigma }(\theta )}{\partial \sigma}\Big\vert _{ \sigma =0}\operatorname{d\!}\theta =\phi (r_{0})\int _{0}^{2\pi} h(r_{0}\, e^{i \theta})\operatorname{d\!}\theta . \end{aligned}$$

On the other hand, the last integral vanishes, since the mean value property of the harmonic function \(h\) gives

$$ \frac {1}{2\pi}\int _{0}^{2\pi} h(r_{0}e^{i\theta})\operatorname{d\!}\theta =h(0)=0. $$

Hence, as \(f'(0)=0\), we may write, through Taylor’s formula,

$$ f(s)=f(0)+ \frac{f''(\sigma )}{2}s^{2}\quad \text{for some $\sigma \in (0,s)$}, $$

and taking \(s=1\) and using (2.35) gives

$$ |E_{1}|= f(1)\leq f(0)+\frac {C\delta ^{2}}{t_{0}^{3}} f(0)=\left (1+ \frac {C\delta ^{2}}{t_{0}^{3}}\right )f(0). $$
(2.36)

Now, as we may assume that \(2\delta ^{2}\leq t_{0}\), we claim that

$$ \pi r_{0}^{2}\leq \left (1+\frac{2\delta ^{2}}{t_{0}}\right )\log \frac {T}{t}, $$
(2.37)

which, according to (2.27), is equivalent to

$$ e^{\pi r_{0}^{2}}=\frac {1-\delta ^{2}}{\frac {t}{T}-\delta ^{2}} \leq \left (\frac {T}{t}\right )^{1+\frac {2\delta ^{2}}{t_{0}}}. $$
(2.38)

Setting for convenience \(\kappa =1/t_{0}\) and defining the function

$$ \psi (\tau ):= \delta ^{2}+ \left (1-\delta ^{2}\tau \right ) \tau ^{2 \delta ^{2}\kappa },\quad \tau \in [1,\kappa ], $$
(2.39)

we observe that (2.38) is equivalent to \(\psi (T/t)\geq 1\). Since \(\psi (1)=1\), \(T/t\in [1,\kappa ]\) and \(\psi \) is concave (note that \(2\delta ^{2}\kappa \leq 1\) by assumption), it suffices to prove that \(\psi (\kappa )\geq 1\). Indeed, we have

$$\begin{aligned} \psi (\kappa ) &= \delta ^{2}+ \left (1-\delta ^{2}\kappa \right )e^{2 \delta ^{2}\kappa \log \kappa} \geq \delta ^{2}+ \left (1-\delta ^{2} \kappa \right )(1+2\delta ^{2}\kappa \log \kappa ) \\ &=1+\delta ^{2}\left (1 + 2\kappa (1 -\delta ^{2}\kappa ) \log \kappa - \kappa \right )\geq 1+\delta ^{2}\left (1 + \kappa \log \kappa - \kappa \right ), \end{aligned}$$

having used \(1-\delta ^{2}\kappa \geq \frac {1}{2}\) in the last passage. Since \(\kappa \log \kappa \geq \kappa -1\) for every \(\kappa \geq 1\), this shows that \(\psi (\kappa )\geq 1\), hence (2.37) is established.

Thus, (2.21) follows by combining (2.36) with (2.26) and (2.37). □

Corollary 2.2

Uniqueness and non-degeneracy of \(t^{*}\)

If \(F\in \mathcal{F}^{2}(\mathbb{C})\) is such that \(\Vert F\Vert _{\mathcal{F}^{2}}=1\) and \(T<1\), then there is a unique value \(t^{*}\in (0,T)\) satisfying (2.6). Moreover,

$$ t^{*} \leq \tau ^{*}, $$
(2.40)

for some universal constant \(\tau ^{*}\in (0,1)\).

Note that the uniqueness of \(t^{*}\) implies the uniqueness of \(s^{*}\) defined in (2.5), and \(t^{*}=e^{-s^{*}}\). We also note that there cannot be any universal lower bound on \(t^{*}\), since \(t^{*}\leq T\) and \(T\) can be arbitrarily small.

Proof

If (2.6) were true for two distinct values \(t_{1}< t_{2}< T\) of \(t^{*}\), then we would have \(\mu (t)=\log 1/t\) for every \(t\in [t_{1},t_{2}]\), whence \(\mu '(t)=-1/t \) for every \(t\in (t_{1},t_{2})\). But the proof of [33, Remark 3.5] shows that this happens if and only if the corresponding sets \(\{ u > t \}\) are balls, \(|\nabla u|\) being constant on each boundary \(\partial \{ u > t \} = \{ u = t \}\): this in turn implies that \(u(z) = e^{-\pi |z-z_{0}|^{2}}\) for some \(z_{0} \in \mathbb{C}\), and hence \(u(z_{0}) = 1\) (or, equivalently, \(u^{*}(s)\equiv e^{-s}\)), contradicting our assumption that \(T<1\).

Now let \(T_{0}\) and \(C_{0}\) be the constants provided by Lemma 2.1 when \(t_{0}=\frac {1}{2}\), and define

$$ \tau ^{*}:= \max \left \{\frac {1}{2}, T_{0},e^{-\,\frac {1}{C_{0}}} \right \}. $$

Given \(F\) as in our statement, if \(t^{*}\leq 1/2\) then clearly \(t^{*}\leq \tau ^{*}\), and the same is true if \(T< T_{0}\), because certainly \(t^{*}\leq T\). Finally, if \(t^{*}>1/2\) and \(T\geq T_{0}\), then (2.7) written with \(t=t^{*}\) becomes

$$ \log \frac {1}{t^{*}}=\mu (t^{*}) \leq (1+C_{0}(1-T))\log \frac {T}{t^{*}}, $$

which is equivalent to

$$ \frac {1}{t^{*}} \leq \left (\frac {T}{t^{*}}\right )^{1+C_{0}(1-T)}, \quad \text{that is,}\quad t^{*} \leq T ^{1+\frac {1}{C_{0}(1-T)}}. $$

But then, since \(T\leq 1\), we obtain

$$ t^{*} \leq T ^{\,\frac {1}{C_{0}(1-T)}} \leq e^{-\,\frac {1}{C_{0}}} $$

and \(t^{*}\leq \tau ^{*}\) also in this case (notice that \(x^{\frac {1}{1-x}}\leq e^{-1}\) for every \(x\in (0,1)\), since \(\log x\leq x-1\)). □

We are now ready to start the comparison between \(u^{*}(s)\) and \(e^{-s}\), where the number \(s^{*}\), uniquely defined by (2.5) if \(T<1\), will play a crucial role. In the next two lemmas, however, it is not necessary to assume that \(T<1\), since when \(T=1\) (and \(u^{*}(s)=e^{-s}\)) their claims remain true (though trivial) for all values of \(s^{*}\).

Lemma 2.3

For every \(F\in \mathcal{F}^{2}(\mathbb{C})\) such that \(\Vert F\Vert _{\mathcal{F}^{2}}=1\) and every \(s_{0}>0\), there holds

$$ \frac{(1- T)^{2}}{2} \leq \int _{0}^{s^{*}} \bigl(e^{-s}-u^{*}(s) \bigr)\operatorname{d\!}s\leq \delta _{s_{0}} e^{s_{0}}, $$
(2.41)

where \(T\) is as in (2.2) and

$$ \delta _{s_{0}}:=1- \frac{\int _{\{u>u^{*}(s_{0})\}} u(z)\operatorname{d\!}z}{1-e^{-s_{0}}} =1- \frac{\int _{0}^{s_{0}} u^{*}(s)\operatorname{d\!}s}{1-e^{-s_{0}}}. $$
(2.42)

Note that \(\delta _{s_{0}}\) coincides with the deficit \(\delta (F;\Omega )\) of Theorem 1.5 when \(\Omega =\{u>u^{*}(s_{0})\}\) is the super-level set of \(u\), with measure \(s_{0}\).

Proof

Instead of writing explicitly \(e^{-s}\), we will use the notation

$$ v^{*}(s):=e^{-s},\quad s\geq 0. $$
(2.43)

This will be particularly useful in Sect. 7, when we adapt the current proof to higher dimensions.

Since \(u^{*}(s)\leq T\) and \(v^{*}(s)\geq 1-s\), and since \(s^{*}\geq \log \frac {1}{T}\geq 1-T\) (because \(e^{-s}>T\geq u^{*}(s)\) for \(s<\log \frac {1}{T}\)), the first inequality in (2.41) follows from

$$\begin{aligned} \int _{0}^{s^{*}} \bigl(v^{*}(s)-u^{*}(s)\bigr)\operatorname{d\!}s&\geq \int _{0}^{s^{*}} \bigl(1-s-T\bigr)_{+}\operatorname{d\!}s \\ &=\int _{0}^{1-T}(1-s-T)\operatorname{d\!}s= \frac {(1-T)^{2}}{2}. \end{aligned}$$
(2.44)

To prove the second inequality, note that \(1-e^{-s_{0}}=\int _{0}^{s_{0}} v^{*}(s)\operatorname{d\!}s\), and hence we can rewrite (2.42) as

$$ \varepsilon := \delta _{s_{0}} \int _{0}^{s_{0}} v^{*}(s)\operatorname{d\!}s = \int _{s_{0}}^{\infty }\left ( u^{*}(s)-v^{*}(s) \right ) \operatorname{d\!}s = \int _{0}^{s_{0}} \left (v^{*}(s) - u^{*}(s)\right ) \operatorname{d\!}s. $$
(2.45)

The key point of the proof is that the ratio

$$ r(s):=\frac{u^{*}(s)}{v^{*}(s)}\quad \text{is an increasing function on $[0,+\infty )$,} $$
(2.46)

as follows immediately from (1.25), since \(r(s)=e^{s}u^{*}(s)\). In order to implement this idea, we must now distinguish two cases:

Case 1: \(s_{0} > s^{*}\). Since \(r(s^{*})=1\) and \(r(s)\) is increasing by the convexity inequality (1.25), we have from (2.45)

$$\begin{aligned} \varepsilon = \int _{s_{0}}^{\infty }u^{*}(s)\left ( 1 - \frac {1}{r(s)} \right ) \operatorname{d\!}s \geq \left ( 1 - \frac {1}{r(s_{0})} \right ) \int _{s_{0}}^{\infty }u^{*}(s) \operatorname{d\!}s. \end{aligned}$$

On the other hand, for the same reason,

$$ \int _{s^{*}}^{s_{0}} \left (u^{*}(s)-v^{*}(s)\right )\operatorname{d\!}s = \int _{s^{*}}^{s_{0}} u^{*}(s) \left (1-\frac {1}{r(s)}\right )\operatorname{d\!}s \leq \left (1- \frac {1}{r(s_{0})}\right ) \int _{s^{*}}^{s_{0}} u^{*}(s) \operatorname{d\!}s, $$

which, combined with the previous estimate, gives

$$ \int _{s^{*}}^{s_{0}} \left (u^{*}(s)-v^{*}(s)\right )\operatorname{d\!}s \leq \varepsilon \,\, \frac{\int _{s^{*}}^{s_{0}} u^{*}(s) \operatorname{d\!}s}{\int _{s_{0}}^{\infty }u^{*}(s) \operatorname{d\!}s}. $$

Thus, recalling (2.45) and using the last inequality, we find that

$$ \int _{s^{*}}^{\infty} \left (u^{*}(s)-v^{*}(s)\right )\operatorname{d\!}s\leq \varepsilon +\varepsilon \,\, \frac{\int _{s^{*}}^{s_{0}} u^{*}(s) \operatorname{d\!}s}{\int _{s_{0}}^{\infty }u^{*}(s) \operatorname{d\!}s}= \varepsilon \, \frac {\int _{s^{*}}^{\infty} u^{*}(s) \operatorname{d\!}s}{\int _{s_{0}}^{\infty }u^{*}(s) \operatorname{d\!}s} \leq \frac {\varepsilon }{\int _{s_{0}}^{\infty }v^{*}(s) \operatorname{d\!}s}, $$
(2.47)

having used (2.4) for the numerator, and the fact that \(u^{*}(s)\geq v^{*}(s)\) when \(s\geq s^{*}\), for the denominator. Since clearly \(\varepsilon \leq \delta _{s_{0}}\) and, by (2.4), \(\int _{0}^{s^{*}} \left (v^{*}(s)-u^{*}(s)\right )\operatorname{d\!}s=\int _{s^{*}}^{\infty }\left (u^{*}(s)-v^{*}(s)\right )\operatorname{d\!}s\), the second inequality in (2.41) follows immediately, since \(\int _{s_{0}}^{\infty} v^{*}(s)\operatorname{d\!}s=e^{-s_{0}}\).

Case 2: \(s_{0} \leq s^{*}\). As \(r(s^{*})=1\) and \(r(s)\) is increasing, we have from (2.45) again that

$$\begin{aligned} \varepsilon = \int _{0}^{s_{0}} v^{*}(s)\left ( 1 -r(s) \right ) \operatorname{d\!}s \geq \left ( 1 -r(s_{0}) \right ) \int _{0}^{s_{0}} v^{*}(s) \operatorname{d\!}s. \end{aligned}$$

On the other hand, for the same reason,

$$ \int _{s_{0}}^{s^{*}} \left (v^{*}(s)-u^{*}(s)\right )\operatorname{d\!}s= \int _{s_{0}}^{s^{*}} v^{*}(s)\left ( 1 -r(s) \right ) \operatorname{d\!}s \leq \left ( 1 -r(s_{0}) \right ) \int _{s_{0}}^{s^{*}} v^{*}(s) \operatorname{d\!}s, $$

which combined with the previous estimate gives

$$ \int _{s_{0}}^{s^{*}} \left (v^{*}(s)-u^{*}(s)\right )\operatorname{d\!}s \leq \varepsilon \,\, \frac{\int _{s_{0}}^{s^{*}} v^{*}(s) \operatorname{d\!}s}{\int _{0}^{s_{0}} v^{*}(s) \operatorname{d\!}s}. $$

Thus, using the last inequality, we find

$$ \int _{0}^{s^{*}}\left (v^{*}(s)-u^{*}(s)\right )\operatorname{d\!}s\leq \varepsilon +\varepsilon \,\, \frac{\int _{s_{0}}^{s^{*}} v^{*}(s) \operatorname{d\!}s}{\int _{0}^{s_{0}} v^{*}(s) \operatorname{d\!}s}= \varepsilon \, \frac {\int _{0}^{s^{*}} v^{*}(s) \operatorname{d\!}s}{\int _{0}^{s_{0}} v^{*}(s) \operatorname{d\!}s} \leq \frac {\varepsilon }{\int _{0}^{s_{0}} v^{*}(s) \operatorname{d\!}s}=\delta _{s_{0}}, $$
(2.48)

and the second inequality in (2.41) follows also in this case. □

We are now ready to show that, in (2.41), the first inequality holds in fact in a much stronger form.

Lemma 2.4

Under the same assumptions as in Lemma 2.3, there holds

$$ 1-T \leq C \int _{0}^{s^{*}} \bigl(e^{-s}-u^{*}(s)\bigr)\operatorname{d\!}s, $$
(2.49)

where \(C>0\) is a universal constant.

Proof

Passing to the inverse functions, and recalling that \(\mu (t)\), restricted to \((0,T)\), is the inverse of \(u^{*}(s)\), we have

$$ \int _{0}^{s^{*}} \bigl(e^{-s}-u^{*}(s)\bigr)\operatorname{d\!}s \geq \int _{0}^{s^{*}} \bigl(\min \{T,e^{-s}\}-u^{*}(s)\bigr)\operatorname{d\!}s =\int _{t^{*}}^{T} \bigl(\log \frac {1}{t} -\mu (t)\bigr)\operatorname{d\!}t. $$
(2.50)
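
The equality in (2.50) is a Fubini-type identity, which we make explicit for the reader's convenience: both sides compute the area of the same plane region, namely

$$ \int _{0}^{s^{*}} \bigl(\min \{T,e^{-s}\}-u^{*}(s)\bigr)\operatorname{d\!}s = \Bigl|\bigl\{ (s,t)\colon 0< s< s^{*},\ u^{*}(s)< t< \min \{T,e^{-s}\}\bigr\}\Bigr| = \int _{t^{*}}^{T} \Bigl(\log \frac {1}{t}-\mu (t)\Bigr)\operatorname{d\!}t, $$

since, for fixed \(t\in (t^{*},T)\), the horizontal section of this region is (up to a null set) the interval \((\mu (t),\log \frac {1}{t})\) by (2.6), while for \(t\leq t^{*}\) it is empty, because then \(\mu (t)\geq \log \frac {1}{t}\geq s^{*}\).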

Observe that, given any universal constant \(\tau \in (0,1)\), in proving (2.49) we may assume (if convenient) that

$$ T \geq \tau , $$
(2.51)

because otherwise (2.49) would immediately follow from the first inequality in (2.41), as soon as \(C\geq 2/(1-\tau )\). In particular, letting \(T_{0}\) and \(C_{0}\) be the constants provided by Lemma 2.1 when \(t_{0}=\tau ^{*}\), where \(\tau ^{*}\) is the constant obtained in Corollary 2.2, we may assume that \(T\geq T_{0}\), so that (2.7) reads

$$ \mu (t)\leq (1+C_{0}(1-T))\log \frac {T}{t}\quad \forall t\in [\tau ^{*},T]. $$
(2.52)

Relying on (2.40), we now use (2.52) to minorize the last integral in (2.50). More precisely, letting \(\tau _{1}\in [\tau ^{*},1)\) denote a universal constant to be chosen later, and further assuming (in addition to \(T\geq T_{0}\)) that (2.51) holds also with \(\tau =\tau _{1}\), from (2.40), (2.52) and (2.50) we find

$$ \int _{0}^{s^{*}} \bigl(e^{-s}-u^{*}(s)\bigr)\operatorname{d\!}s \geq \int _{ \tau _{1}}^{T} \bigl(\log \frac {1}{t} - (1+C_{0}(1-T))\log \frac {T}{t}\bigr)\operatorname{d\!}t. $$
(2.53)

Using \(-\log T\geq 1-T\), for every \(t\in (\tau _{1},T)\) we have

$$ \begin{aligned}\log \frac {1}{t} - (1+C_{0}(1-T))\log \frac {T}{t}&=-\log T - C_{0}(1-T) \log \frac {T}{t} \\ &\geq 1-T- C_{0}(1-T)\log \frac {1}{\tau _{1}}, \end{aligned}$$

and choosing now \(\tau _{1}\in [\tau ^{*},1)\) sufficiently close to 1 in such a way that

$$ \varepsilon _{1}:=1-C_{0} \log \frac {1}{\tau _{1}}>0, $$
(2.54)

from (2.53) and the subsequent estimate we obtain

$$ \int _{0}^{s^{*}} \bigl(e^{-s}-u^{*}(s)\bigr)\operatorname{d\!}s \geq \int _{ \tau _{1}}^{T} (1-T)\left (1-C_{0}\log \frac {1}{\tau _{1}}\right ) \operatorname{d\!}t\geq \varepsilon _{1}(1-T)(T-\tau _{1}). $$
(2.55)

Finally, choosing a larger number \(\tau _{2}\in (\tau _{1},1)\) and further assuming that (2.51) holds also with \(\tau =\tau _{2}\), we obtain

$$ \int _{0}^{s^{*}} \bigl(e^{-s}-u^{*}(s)\bigr)\operatorname{d\!}s \geq \varepsilon _{1}(\tau _{2}-\tau _{1})(1-T) $$

and (2.49) follows, by letting \(C^{-1}=\varepsilon _{1}(\tau _{2}-\tau _{1})\). □

The final ingredient we need is a lemma whose statement and proof are well known in the theory of reproducing kernel Hilbert spaces. For completeness, we provide its proof here.

Lemma 2.5

If \(F\in \mathcal{F}^{2}(\mathbb{C})\) and \(\Vert F\Vert _{\mathcal{F}^{2}}=1\), then

$$ \min _{\substack{z_{0}\in \mathbb{C}\\ |c|=1}} \Vert F-cF_{z_{0}} \Vert _{\mathcal{F}^{2}}^{2}=2\left ( 1-\sqrt {T} \,\right ) \leq 2(1-T). $$
(2.56)

Proof

Since \(\Vert F_{z_{0}}\Vert _{\mathcal{F}^{2}}=1\) for every \(z_{0}\in \mathbb{C}\), for any \(c\) with \(|c|=1\) we have

$$ \Vert F-cF_{z_{0}}\Vert _{\mathcal{F}^{2}}^{2} = 2- 2 \operatorname{Re}\left \langle \overline{c} F, F_{z_{0}} \right \rangle _{\mathcal {F}^{2}}, $$
(2.57)

and, since \(\mathcal{F}^{2}(\mathbb{C})\) is a reproducing kernel Hilbert space with kernel \(K_{w}(z) = e^{\frac{\pi}{2} \left | w \right |^{2}} F_{w}(z)\), we have \(\left \langle F,F_{z_{0}} \right \rangle _{\mathcal {F}^{2}} = F(z_{0}) e^{- \frac{\pi}{2} \left | z_{0} \right |^{2}}\). Therefore,

$$ \Vert F-cF_{z_{0}}\Vert _{\mathcal{F}^{2}}^{2} = 2- 2 \operatorname{Re}\overline{c} F(z_{0}) e^{- \frac{\pi}{2} \left | z_{0} \right |^{2}}, $$

and choosing the unimodular \(c\) that maximizes \(\operatorname{Re}\,\overline{c} F(z_{0})\), namely \(c=F(z_{0})/|F(z_{0})|\) (any unimodular \(c\) if \(F(z_{0})=0\)), we obtain for every \(z_{0}\)

$$ \min _{|c|=1} \Vert F-cF_{z_{0}}\Vert _{\mathcal{F}^{2}}^{2} = 2- 2 |F(z_{0})| e^{- \frac{\pi}{2} \left | z_{0} \right |^{2}} =2-2 \sqrt{u(z_{0})}. $$

The equality in (2.56) then follows by minimizing over \(z_{0}\in \mathbb{C}\), while the inequality follows since \(T\leq 1\) and hence \(\sqrt{T}\geq T\). □
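As a quick numerical sanity check of the reproducing identity used above (not needed for the argument), one can verify \(\left \langle F,F_{z_{0}} \right \rangle _{\mathcal {F}^{2}} = F(z_{0}) e^{- \frac{\pi}{2} \left | z_{0} \right |^{2}}\) on a grid. The sketch below assumes the standard explicit form \(F_{z_{0}}(z)=e^{\pi \overline{z_{0}} z - \frac{\pi}{2}|z_{0}|^{2}}\) for the normalized reproducing kernels, consistent with the expression for \(K_{w}\) recalled above; all numerical parameters are arbitrary.

```python
import numpy as np

# Numerical sanity check of the reproducing identity used in the proof above.
# Assumption (not stated explicitly in the text): the normalized reproducing
# kernels have the standard form F_{z0}(z) = exp(pi*conj(z0)*z - (pi/2)|z0|^2),
# consistent with K_w(z) = e^{(pi/2)|w|^2} F_w(z).
N, L = 1200, 5.0
x = np.linspace(-L, L, N)
X, Y = np.meshgrid(x, x)
Z = X + 1j * Y
dA = (x[1] - x[0]) ** 2
weight = np.exp(-np.pi * np.abs(Z) ** 2)

def inner(F_vals, G_vals):
    # <F, G>_{F^2} ~ Riemann sum of F(z) conj(G(z)) e^{-pi|z|^2}
    return np.sum(F_vals * np.conj(G_vals) * weight) * dA

F = lambda z: np.sqrt(np.pi) * z                      # ||F||_{F^2} = 1
z0 = 0.7 - 0.4j
F_z0 = lambda z: np.exp(np.pi * np.conj(z0) * z - 0.5 * np.pi * abs(z0) ** 2)

lhs = inner(F(Z), F_z0(Z))
rhs = F(z0) * np.exp(-0.5 * np.pi * abs(z0) ** 2)
print(abs(lhs - rhs))   # should be very small (quadrature error only)
```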

We are now ready to prove (1.17).

Proof of (1.17)

By homogeneity, in (1.17) one can assume that \(F\in \mathcal{F}^{2}(\mathbb{C})\) and \(\Vert F\Vert _{\mathcal{F}^{2}}=1\). Then, given \(\Omega \) as in Theorem 1.5 and letting \(s_{0}=|\Omega |\), on combining (2.56) with (2.49) and the second inequality in (2.41), one finds

$$ \min _{\substack{z_{0}\in \mathbb{C}\\ |c|=1}} \Vert F-cF_{z_{0}} \Vert _{\mathcal{F}^{2}}^{2} \leq C\delta _{s_{0}} e^{s_{0}}, $$
(2.58)

where \(\delta _{s_{0}}\) is the deficit defined in (2.42), relative to the super-level set \(\{u>u^{*}(s_{0})\}\). But (1.27) (rewritten with \(s=s_{0}\)) reveals that \(\delta _{s_{0}}\leq \delta (F;\Omega )\), where \(\delta (F;\Omega )\) is the deficit relative to \(\Omega \) as defined in (1.18). Then (1.17) follows from (2.58), taking square roots. □

3 The geometry of super-level sets

In this section we study, for a fixed number \(t>0\), some basic geometric properties of the super-level sets \(\{z\in \mathbb{C}:u_{F}(z)>t\}\). In the proof of Lemma 2.1 we saw that the function \(g_{\theta}(r,\sigma )\), defined in (2.20), is monotone increasing in \(r\) and, in particular, its sub-level sets are star-shaped. We will soon see that, by doing a finer analysis, we can prove a stronger version of this result, namely Proposition 1.6.

We begin by discussing some useful normalizations that we will use throughout the next sections. Let us first consider the quantity

$$ \rho (F) : =\min _{z_{0} \in \mathbb{C}, c \in \mathbb{C}} \frac{ \|F - c \cdot F_{z_{0}}\|_{\mathcal {F}^{2}}}{\|F\|_{\mathcal {F}^{2}}}. $$

Without loss of generality, we will assume that

$$ \rho (F) =\min _{c\in \mathbb{C}} \frac{\| F - c \|_{\mathcal {F}^{2}}}{\|F\|_{\mathcal {F}^{2}}}, $$

that is, the closest function to \(F\) in \(\{c F_{z_{0}}\}_{z_{0} \in \mathbb{C}, c\in \mathbb{C}}\) is a multiple of the constant function \(F_{0}\equiv 1\). This follows by (2.57), since \(\rho (F)^{2} = \min _{z_{0} \in \mathbb{C}} \frac{\|F\|_{\mathcal {F}^{2}}^{2} - |F(z_{0})|^{2} e^{-\pi |z_{0}|^{2}}}{\|F\|_{\mathcal {F}^{2}}^{2}}\). Moreover, we can also assume that

$$ F(0) = 1. $$

Now, we note that, by the previous assumptions, we have

$$\begin{aligned} \begin{aligned} \|F - c \cdot F_{z_{0}}\|_{\mathcal {F}^{2}}^{2} & = \|F\|_{\mathcal {F}^{2}}^{2} + |c|^{2} - 2 \operatorname{Re}(\overline{c} F(z_{0}))e^{-\pi |z_{0}|^{2}/2} \\ & \ge \|F\|_{\mathcal {F}^{2}}^{2} + |c|^{2} - 2|c||F(z_{0})|e^{-\pi |z_{0}|^{2}/2} \\ &\ge \|F\|_{\mathcal {F}^{2}}^{2} - \max _{z_{0} \in \mathbb{C}} |F(z_{0})|^{2}e^{- \pi |z_{0}|^{2}}. \end{aligned} \end{aligned}$$
(3.1)

This shows that \(\rho (F)\) is attained at \(z_{0} = 0\) if and only if 0 is a maximum for \(u_{F}\); hence, our normalization also implies

$$ F'(0)=0. $$

Observe that \(\rho \) differs slightly from the distance to the extremizing class used in (1.17), due to the condition on \(c\). However, it is equivalent to this distance: indeed, by Lemma 2.5, we have

$$ \rho (F) \leq \min _{z_{0} \in \mathbb{C}, |c|=\|F\|_{\mathcal{F}^{2}( \mathbb{C})}} \frac{ \|F - c \cdot F_{z_{0}}\|_{\mathcal {F}^{2}}}{\|F\|_{\mathcal {F}^{2}}} = \min _{|c|=\|F\|_{\mathcal{F}^{2}(\mathbb{C})}} \frac{ \|F - c \|_{\mathcal {F}^{2}}}{\|F\|_{\mathcal {F}^{2}}} \leq \sqrt{2} \rho (F). $$
(3.2)

Lemma 2.5 also shows that

$$ 2\frac{\|F\|_{\mathcal{F}^{2}}-1}{\|F\|_{\mathcal{F}^{2}}} =2\left (1- \frac{|F(0)|}{\|F\|_{\mathcal{F}^{2}}} \right )= \min _{|c|=\|F\|_{ \mathcal{F}^{2}(\mathbb{C})}} \frac{ \|F - c \|_{\mathcal {F}^{2}}^{2}}{\|F\|_{\mathcal {F}^{2}}^{2}}. $$
(3.3)

Note that, by our normalizations, we have \(F(0)=1\leq \|F\|_{\mathcal {F}^{2}}\). On the other hand, (3.3), Lemma 2.5, and (2.41) show that

$$ 1\leq \|F\|_{\mathcal{F}^{2}}\leq \frac{2}{2-C(e^{|\Omega |}\delta (F;\Omega ))^{1/2}}\leq 2, $$
(3.4)

provided that the deficit is sufficiently small.

In addition to \(\rho (F)\), it will be convenient to consider the slightly different quantity

$$ \varepsilon (F):= \|F-1\|_{\mathcal {F}^{2}}. $$

This allows us to write

$$ F=1+\varepsilon G, \qquad \text{where } \|G\|_{\mathcal {F}^{2}} = 1 \text{ and } \varepsilon = \varepsilon (F), $$
(3.5)

and then the above assumptions are translated as

$$ \langle G, 1\rangle _{\mathcal {F}^{2}} = \langle G, z\rangle _{ \mathcal {F}^{2}} = 0. $$
(3.6)
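For later use, let us spell out why the normalizations \(F(0)=1\) and \(F'(0)=0\) translate into (3.6): writing \(G(z)=\sum _{k\geq 0}b_{k}z^{k}\) and using the orthogonality of the monomials in \(\mathcal {F}^{2}(\mathbb{C})\), one finds

$$ \langle G, 1\rangle _{\mathcal {F}^{2}} = b_{0} = G(0)=0, \qquad \pi \,\langle G, z\rangle _{\mathcal {F}^{2}} = b_{1} = G'(0) = 0. $$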

Note that, by (3.4), we can assume that \(\varepsilon \) is sufficiently small: indeed, if \(e^{|\Omega |}\delta (F;\Omega )\) is sufficiently small, then

$$ \frac{\varepsilon (F)}{2} \leq \frac{\varepsilon (F)}{\|F\|_{\mathcal{F}^{2}}}= \rho (F) \leq \min _{|c| = \|F\|_{\mathcal {F}^{2}}} \frac{\| F - c \|_{\mathcal {F}^{2}}}{\|F\|_{\mathcal{F}^{2}}}\leq C \Big(e^{|\Omega |}\delta (F;\Omega )\Big)^{1/4}. $$
(3.7)

We are now ready for the main part of this section. We begin with a key technical lemma which shows that, above a certain threshold, all level sets of the function \(u_{F}\) behave like those of the standard Gaussian, as long as \(\varepsilon (F)\) is sufficiently small.

Lemma 3.1

Let \(F \in \mathcal{F}^{2}(\mathbb{C})\) satisfy the normalizations in the beginning of this section. There are constants \(\varepsilon _{0}, c_{1} >0\) with the following property: if \(\varepsilon (F)\leq \varepsilon _{0}\), then for any \(\alpha \in [0,2\pi ]\), the function

$$ \mathcal{G}_{\alpha}(r) := u_{F}(r e^{i\alpha}) = |F(r e^{i \alpha})|^{2} e^{-\pi r^{2}} $$

is strictly decreasing on the interval \(\left [0,c_{1} \sqrt{\log (1/\varepsilon (F))}\right ]\).

Proof

Without loss of generality we will take \(\alpha =0\). In order to prove the desired assertion, we shall divide our analysis into two cases.

Case 1: \(1/10 < r < c_{1} \sqrt{\log (1/\varepsilon )}\). We differentiate the function \(\mathcal{G}_{0}\) with respect to \(r\), which gives us

$$\begin{aligned} \begin{aligned} \mathcal{G}_{0}'(r) ={}& -2\pi r |F(r)|^{2} e^{-\pi r^{2}} + 2 \operatorname{Re}(F'(r) \overline{F(r)} ) e^{-\pi r^{2}} \\ ={}& -2 \pi r(1 + 2 \varepsilon \operatorname{Re}(G(r)) + \varepsilon ^{2} |G(r)|^{2}) e^{-\pi r^{2}} \\ &{} + 2 \varepsilon \operatorname{Re}(G'(r) \overline{(1+\varepsilon G(r))}) e^{-\pi r^{2}}, \end{aligned} \end{aligned}$$
(3.8)

where in the last line we used (3.5). In order to bound the last term we note that, by the Cauchy integral formula,

$$ |G'(w)| \le \frac{2}{|w|} \sup _{|z| = 2|w|} |G(z)| \leq 2 \frac{e^{2\pi |w|^{2}}}{|w|}, $$
(3.9)

since \(|G(z)|\leq e^{\pi |z|^{2}/2}\), and in addition

$$ |1+\varepsilon G(r)|\leq \sqrt{2}e^{\frac {\pi }{2} r^{2}}, $$

since \(\|1+\varepsilon G\|_{\mathcal {F}^{2}}=\sqrt{1+\varepsilon ^{2}}\le \sqrt{2}\) by (3.6). Thus, as the \(\varepsilon ^{2}\)-term in (3.8) is negative, and \(|\operatorname{Re}G(r)|e^{-\pi r^{2}}\leq 1\), we can estimate

$$ \mathcal{G}_{0}'(r) \leq -2 \pi r(e^{-\pi r^{2}} - 2 \varepsilon ) + \frac{8 \varepsilon }{r} e^{\frac{3\pi}{2}r^{2}}. $$

Since \(r < c_{1} \sqrt{\log (1/\varepsilon )}\), we obtain that \(e^{\pi r^{2}} \le e^{\pi c_{1}^{2} \log (1/\varepsilon )} = \varepsilon ^{-\pi c_{1}^{2}}\). For all \(\varepsilon \) small enough, we have \(e^{-\pi r^{2}}- 2 \varepsilon \geq \varepsilon ^{\pi c_{1}^{2}}- 2 \varepsilon > 0\) provided that \(\pi c_{1}^{2} < 1\). Since also \(1/10\leq r\), we have

$$ \mathcal{G}_{0}'(r) \leq -2 \pi r(\varepsilon ^{\pi c_{1}^{2}}-2 \varepsilon ) + 80 \varepsilon ^{1-\frac{3\pi}{2} c_{1}^{2}}. $$

Hence, as long as

$$ \frac{5\pi}{2} c_{1}^{2}< 1, $$
(3.10)

the term \(-2 \pi r \varepsilon ^{\pi c_{1}^{2}}\) dominates over the others. Thus, for sufficiently small \(\varepsilon \), we have \(\mathcal{G}_{0}'(r) < 0\).

Case 2: \(0 < r \leq 1/10\). Notice that this case is more subtle, as \(\mathcal{G}_{0}'(r) \to 0\) when \(r \to 0\). We will show that the second derivative of \(\mathcal{G}_{0}\) is strictly negative for \(r \in (0,1/10)\): thus the first derivative decreases on \((0,1/10)\) and, as \(\mathcal{G}_{0}'(0) = 0\), it follows that \(\mathcal{G}_{0}'(r) < 0\) in this interval, proving the claim.

Starting from (3.8), we compute:

$$\begin{aligned} \mathcal{G}_{0}''(r) = & -2 \pi (1 + 2 \varepsilon \operatorname{Re}(G(r)) + \varepsilon ^{2} |G(r)|^{2}) e^{-\pi r^{2}} \\ & - 4 \pi \varepsilon r (\operatorname{Re}(G'(r))+ \varepsilon \operatorname{Re}(G'(r)\overline{G(r)}))e^{-\pi r^{2}} \\ &+ 4\pi ^{2} r^{2} (1 + 2 \varepsilon \operatorname{Re}(G(r)) + \varepsilon ^{2} |G(r)|^{2}) e^{-\pi r^{2}} \\ & + 2 \varepsilon \operatorname{Re}(G''(r) \overline{(1+\varepsilon G(r))}) e^{-\pi r^{2}} + 2\varepsilon ^{2} |G'(r)|^{2}e^{- \pi r^{2}} \\ & - 4 \pi r \varepsilon \operatorname{Re}(G'(r) \overline{(1+\varepsilon G(r))}) e^{-\pi r^{2}}. \end{aligned}$$

We now follow the same strategy as in the first case. For \(|w|\leq 1\), we find the estimates

$$\begin{aligned} |G'(w)|\leq \frac{4 \pi}{2\pi} \max _{|z|=2} |G(z)|\leq 2 e^{2\pi}, \qquad |G''(w)| \leq \frac{4 \pi}{\pi} \max _{|z|=2} |G(z)| \leq 4 e^{2 \pi}. \end{aligned}$$
(3.11)

Therefore we have

$$\begin{aligned} \mathcal{G}_{0}''(r) & \leq -2\pi (1-2\pi r^{2})e^{-\pi r^{2}} + \varepsilon \mathfrak{h}(\varepsilon ), \end{aligned}$$

where \(\mathfrak{h} \colon \mathbb{R}\to [0,\infty )\) is a smooth function. Since \(r<1/10\), we have \(1-2\pi r^{2}>0\) and so the first term above is negative. Hence, if \(\varepsilon \) is sufficiently small, it holds that \(\mathcal{G}_{0}''(r) < 0\) for all \(r \in (0,1/10)\), and the conclusion follows. □
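The following minimal numerical sketch (not part of the proof) illustrates Lemma 3.1 for one concrete normalized perturbation; the specific choice \(G(z)=\frac{\pi}{\sqrt{2}}\,z^{2}\) and the parameters below are arbitrary.

```python
import numpy as np

# Illustration of Lemma 3.1 for F = 1 + eps*G with G(z) = (pi/sqrt(2)) z^2,
# a normalized element of F^2(C) satisfying (3.6): along each ray, the function
# r -> |F(r e^{i a})|^2 e^{-pi r^2} is strictly decreasing on [0, 0.5*sqrt(log(1/eps))].
eps = 1e-3
G = lambda z: (np.pi / np.sqrt(2)) * z**2
r = np.linspace(0, 0.5 * np.sqrt(np.log(1 / eps)), 4000)
for a in np.linspace(0, 2 * np.pi, 8, endpoint=False):
    u = np.abs(1 + eps * G(r * np.exp(1j * a))) ** 2 * np.exp(-np.pi * r**2)
    assert np.all(np.diff(u) < 0)
print("radial monotonicity holds on all tested rays")
```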

In spite of its simple nature, we can derive several important conclusions from Lemma 3.1, such as the following result.

Lemma 3.2

Under the same hypotheses of Lemma 3.1, one may find a small constant \(c_{2}> 0\) such that, for \(t > \varepsilon (F)^{c_{2}}\), the level sets

$$ \{z \in \mathbb{C}\colon u_{F}(z) > t \} $$

are all star-shaped with respect to the origin. Moreover, for such \(t\), the boundary \(\partial \{ u_{F} > t\} = \{ u_{F} = t \}\) is a smooth, closed curve.

Proof

Let \(c_{1} >0\) be given by Lemma 3.1. We first prove the following assertion: if \(|z| > c_{1} \sqrt{\log (1/\varepsilon )}\) then

$$ u_{F}(z) < 4\varepsilon ^{\pi c_{1}^{2}}. $$
(3.12)

As before, we use the decomposition (3.5) to write

$$ u_{F}(z) = (1 + 2 \varepsilon \operatorname{Re}(G(z)) + \varepsilon ^{2} |G(z)|^{2})e^{-\pi \left | z \right |^{2}}. $$

For \(|z| > c_{1} \sqrt{\log (1/\varepsilon )}\), and \(\varepsilon \) sufficiently small, since \(\|G\|_{\mathcal {F}^{2}}=1\) one readily sees that

$$ u_{F}(z) \le \varepsilon ^{\pi c_{1}^{2}}(1 + 2 \varepsilon + \varepsilon ^{2}) < 4 \varepsilon ^{\pi c_{1}^{2}}, $$

since we can choose \(\pi c_{1}^{2} \leq \frac {1}{2}\), cf. (3.10).

We now claim that the conclusion of the lemma holds with \(c_{2} =\frac{\pi c_{1}^{2}}{2}\). If this is not the case, there is \(t_{0} > \varepsilon ^{c_{2}}\) such that \(A_{t_{0}} := \{z \in \mathbb{C}\colon u_{F}(z) > t_{0} \}\) is not star-shaped with respect to 0. Thus, there would be a point \(w_{0} \in A_{t_{0}}\), such that, for some \(r \in (0,1)\), \(r\cdot w_{0} \notin A_{t_{0}}\). By (3.12), we must have that

$$ |w_{0}| < c_{1} \sqrt{\log (1/\varepsilon )}; $$
(3.13)

indeed, if \(|w_{0}|>c_{1} \sqrt{\log (1/\varepsilon )}\) then, by choosing \(\varepsilon \) even smaller if need be, we would have \(u(w_{0}) < 4 \varepsilon ^{\pi c_{1}^{2}} < \varepsilon ^{\pi c_{1}^{2}/2}<t_{0}\), contradicting the fact that \(w_{0}\in A_{t_{0}}\). However, (3.13) leads to a contradiction already: if we write \(e^{i \alpha _{0}} = \frac{w_{0}}{|w_{0}|}\) then Lemma 3.1 ensures that the function \(s \mapsto |F(s e^{i \alpha _{0}})|^{2} e^{-\pi s^{2}}\) is strictly decreasing for \(s < c_{1} \sqrt{\log (1/\varepsilon )}\) and thus we would have

$$ t_{0} \geq u_{F}(r w_{0}) > u_{F}(w_{0}) > t_{0}, $$

which is a contradiction. Hence \(A_{t_{0}}\) is star-shaped with respect to the origin.

The final claim of the lemma, concerning the smoothness of the boundary \(\partial \{u_{F} > t\} = \{ u_{F} =t\}\), follows from the Inverse Function Theorem. Indeed, by (3.12) we see that if \(z\) is such that \(u_{F}(z) = t > \varepsilon ^{c_{2}}\) then \(|z| < c_{1} \sqrt{\log (1/\varepsilon )}\), and Lemma 3.1 then guarantees that \(\nabla u_{F}(z) \neq 0\). Thus \(t\) is a regular value of \(u_{F}\) and the set \(\{u_{F}=t\}\) is a smooth curve. □

Lemmata 3.1 and 3.2 already show that the super-level sets of \(u_{F}\) are regular and have controlled geometry. We now show that they are in fact convex:

Proposition 3.3

Under the same assumptions as in Lemma 3.1, there are small constants \(\varepsilon _{0},c_{3} > 0\) such that, as long as \(\varepsilon (F)\leq \varepsilon _{0}\) and \(s<-c_{3} \log (\varepsilon (F))\), the set

$$ A_{u_{F}^{*}(s)} := \{ z \in \mathbb{C}\colon u_{F}(z) > u_{F}^{*}(s) \} $$

has convex closure.

Proof

Choosing \(\varepsilon _{0}\) appropriately, we can apply Lemmas 3.1 and 3.2 to conclude that, for \(t > \varepsilon (F)^{c_{2}}\), the level sets \(\{z \in \mathbb{C}\colon u_{F}(z) > t\}\) are all star-shaped with respect to the origin and have smooth boundary.

We write, for shortness, \(u=u_{F}\) and \(u_{0}=e^{-\pi |\cdot |^{2}}\) throughout the rest of this proof. By the triangle inequality and (3.4), we have

$$\begin{aligned} \begin{aligned} |u -u_{0}| & = \left | (|F|^{2} -1) e^{-\pi |z|^{2}}\right | \\ & \leq |F-1| (|F|+1) e^{-\pi |z|^{2}} \leq \varepsilon (F) (\|F\|_{ \mathcal {F}^{2}} + \|1\|_{\mathcal {F}^{2}}) \leq 3 \varepsilon (F) \end{aligned} \end{aligned}$$
(3.14)

and so

$$ \left \{ u_{0} >u^{*}(s) + 3\varepsilon (F) \right \} \subset \left \{ u >u^{*}(s) \right \} \subset \left \{ u_{0} >u^{*}(s) - 3 \varepsilon (F) \right \}. $$
(3.15)

This implies that

$$ s = |\{u>u^{*}(s)\}| \geq |\{u_{0} > u^{*}(s) + 3 \varepsilon (F)\}| = -\log (u^{*}(s) + 3 \varepsilon (F)) $$

or, rearranging,

$$ u^{*}(s)\geq e^{-s} - 3 \varepsilon (F). $$
(3.16)

In particular, if \(e^{-s} >\varepsilon (F)^{c_{3}}\), then

$$ u^{*}(s) \geq \frac {1}{2} \varepsilon (F)^{c_{3}} \geq \varepsilon (F)^{c_{2}} $$
(3.17)

provided \(c_{3}\) and \(\varepsilon _{0}\) are chosen sufficiently small. Thus, for our choice of parameters, the set \(A_{u^{*}(s)}\) is star-shaped and has a smooth boundary.

Arguing similarly to Lemma 3.1 we see that, by further shrinking \(c_{3}\) if needed, we have

$$ \|u-u_{0}\|_{C^{2}(A_{u^{*}(s)})} \le C_{s} \varepsilon (F), $$
(3.18)

whenever \(s \le - c_{3} \log (\varepsilon (F))\). Indeed, recalling again (3.5), (3.18) is equivalent to

$$ \left \|\left (\operatorname{Re}(G) + \frac{\varepsilon }{2} |G|^{2} \right ) \cdot u_{0} \right \|_{C^{2}(A_{u^{*}(s)})} \le \frac{C_{s}}{2}. $$

Using (3.9), (3.11), a suitable version of the first of those estimates for the second derivative, and (3.14)–(3.17), we see that

$$ \left \|\left (\operatorname{Re}(G) + \frac{\varepsilon }{2} |G|^{2} \right ) \cdot u_{0} \right \|_{C^{2}(A_{u^{*}(s)})} \le C \sup _{w \in A_{u^{*}(s)}} e^{4 \pi |w|^{2}}. $$
(3.19)

If \(w\in A_{u^{*}(s)}\), by (3.15) and similarly to (3.16), we have

$$ e^{-\pi |w|^{2}} \ge u^{*}(s) - 3 \varepsilon (F) \geq \frac{e^{-s}}{2}, $$

and hence (3.19) implies (3.18) with \(C_{s} = C \cdot e^{4s}\), where \(C\) is an absolute constant.

Let then \(\kappa _{s}\) denote the curvature of \(\partial A_{u^{*}(s)}= \{ u = u^{*}(s)\}\), thus

$$ \kappa _{s} = - \frac{\nabla ^{2} u[\nabla u^{\perp}, \nabla u^{\perp}]}{|\nabla u|^{3}}, $$

where \(\nabla u^{\perp}\) denotes the rotation of \(\nabla u\) by \(\pi /2\).
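For instance, for the reference Gaussian \(u_{0}=e^{-\pi |\cdot |^{2}}\) this formula gives, at a point \(z\) with \(|z|=r\),

$$ \kappa = - \frac{\nabla ^{2} u_{0}[\nabla u_{0}^{\perp},\nabla u_{0}^{\perp}]}{|\nabla u_{0}|^{3}} = - \frac{4\pi ^{2} e^{-2\pi r^{2}}\cdot \big(-2\pi r^{2} e^{-\pi r^{2}}\big)}{\big(2\pi r e^{-\pi r^{2}}\big)^{3}} = \frac{1}{r}, $$

as expected for a circle of radius \(r\).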

For \(0< s<-c_{3} \log (\varepsilon (F))\), by (3.15) and (3.17) we have \(\{u_{0}>\frac{1}{4}\varepsilon (F)^{c_{3}}\}\supset A_{u^{*}(s)}\), and hence

$$ |\nabla u_{0}(z)|= 2 \pi |z| e^{-\pi |z|^{2}} \geq C \varepsilon (F)^{c_{3}} \quad \text{in } \partial A_{u^{*}(s)}. $$
(3.20)

Let us denote by \(\tilde{\kappa}_{s}>0\) the curvature of the circle \(\{u_{0} = u^{*}(s) \}\) and notice that, by (3.17), \(\tilde{\kappa}_{s}\to \infty \) as \(s\to 0\). By (3.18) and (3.20), choosing \(c_{3}\) and \(\varepsilon _{0}\) sufficiently small, we have an estimate

$$ \left |\kappa _{s}- \tilde{\kappa}_{s}\right | \leq C C_{s} \frac{\varepsilon (F)}{\varepsilon (F)^{2c_{3}}} \leq \varepsilon (F)^{ \frac {1}{2}}, $$

where we used the bound on \(s\) in the last inequality. Combining the last two facts, we see that we can choose \(\varepsilon _{0}\) small enough so that \(\varepsilon _{0}^{1/4} \leq \tilde{\kappa}_{s}\) and \(\varepsilon _{0}^{1/2}\leq \frac {1}{2} \varepsilon _{0}^{1/4}\). These choices ensure that

$$ \frac{\varepsilon _{0}^{1/4} }{2} \leq \tilde{\kappa}_{s} - \varepsilon _{0}^{1/2}\leq \kappa _{s} $$

for all \(s<-c_{3} \log \varepsilon (F)\). This lower bound implies that \(A_{u^{*}(s)}\) is locally convex. We then use the well-known Tietze–Nakajima theorem (see [32, 38]) which asserts that, as \(\overline{A_{u^{*}(s)}}\) is a closed, connected set, its local convexity implies its convexity, and the assertion is proved. □

Proof of Proposition 1.6

Proposition 1.6 follows immediately from Proposition 3.3 and (3.7), taking \(\Omega =A_{u_{F}^{*}(s)}\) as usual. □

4 Proof of the set stability

In this section we complete the proof of our main Theorem 1.1. As explained in the introduction, it suffices to prove its Fock space analogue, Theorem 1.5.

Proof of Theorems 1.1 and 1.5

Since the function stability has already been proved in Sect. 2, it remains to prove the set stability, i.e. estimate (1.8).

Fix \(f \in L^{2}\) as in the statement of Theorem 1.1, let \(F = \mathcal{B}f\), \(u_{F}(z) = |F(z)|^{2} e^{-\pi \left | z \right |^{2}}\), and let us write \(\delta = \delta (F;\Omega )\) for simplicity. Clearly we may assume that \(\delta \leq \delta _{0}\) for a sufficiently small constant \(\delta _{0}\). We may also suppose that \(F\) is normalized as at the beginning of Sect. 3 and so, as in (3.5), we can write \(F=1+\varepsilon G\), where \(G\), with \(\|G\|_{\mathcal{F}^{2}}=1\), satisfies (3.6) and \(\varepsilon \) satisfies (3.7).

Let \(A_{\Omega} := A_{u_{F}^{*}(|\Omega |)}\), as in (1.21). Let \(\mathcal{T}\) be any transport map \(\mathcal{T}\colon A_{\Omega } \setminus \Omega \to \Omega \setminus A_{ \Omega }\), that is,

$$ 1_{\Omega \setminus A_{\Omega }}(\mathcal{T}(x)) \det \nabla \mathcal{T}(x) = 1_{A_{\Omega }\setminus \Omega }(x), $$

cf. [11, page 12] for details on the existence of such a map. Define

$$ B := \left \{ x \in A_{\Omega } \setminus \Omega \colon | \mathcal{T}(x)|^{2} -|x|^{2} > C_{|\Omega |} \gamma \right \}, $$

where \(C_{|\Omega |}\), \(\gamma \) are constants to be chosen later. Since \(\mathcal{T}\) is a transport map,

$$ \int _{B} \left (u(z) - u(\mathcal{T}(z)) \right ) \operatorname{d\!}z = \int _{B} u - \int _{\mathcal{T}(B)} u \le \int _{A_{\Omega }} u - \int _{ \Omega }u =: d(\Omega ). $$
(4.1)

In (4.1), the inequality holds because \(u_{F}(z) > u_{F}^{*}(|\Omega |)\) for \(z \in A_{\Omega }\setminus \Omega \), while the reverse inequality holds for \(z \in \Omega \setminus A_{\Omega }\). Note that from (1.28) we have the bound

$$ d(\Omega ) \leq \|F\|_{\mathcal{F}^{2}}^{2}(1-e^{-|\Omega |})- \int _{ \Omega }u = \|F\|_{\mathcal{F}^{2}}^{2} (1-e^{-|\Omega |}) \delta \leq 4 (1-e^{-|\Omega |})\delta , $$
(4.2)

by (3.4) and the assumption that \(\delta \) is sufficiently small.

Step I. Control over \(B\). In this step, we will show that

$$ u(z)-u(\mathcal{T}(z))\geq 5 \gamma \quad \text{for } z\in B, $$
(4.3)

after choosing \(C_{|\Omega |}\) and \(\gamma \) correctly. To see this, we begin by writing

$$\begin{aligned} u(z) - u(\mathcal{T}(z)) & = e^{-\pi |z|^{2}} - e^{-\pi |\mathcal{T}(z)|^{2}} \\ & + 2\varepsilon \left ( \operatorname{Re}(G(z)e^{-\pi |z|^{2}}) - \operatorname{Re}(G(\mathcal{T}(z))e^{-\pi |\mathcal{T}(z)|^{2}}) \right ) \\ & + \varepsilon ^{2} \left (|G(z)|^{2} e^{-\pi |z|^{2}} - |G( \mathcal{T}(z))|^{2} e^{-\pi |\mathcal{T}(z)|^{2}} \right ). \end{aligned}$$

Since \(|G| e^{-\pi |\cdot |^{2}/2} \le 1\), and hence also \(|G|^{2}e^{-\pi |\cdot |^{2}} \le 1\), we readily obtain

$$\begin{aligned} \begin{aligned} u(z) - u(\mathcal{T}(z)) & \ge e^{- \pi |z|^{2} } - e^{-\pi | \mathcal{T}(z)|^{2}} - (4\varepsilon + \varepsilon ^{2}) \\ & = e^{-\pi |z|^{2}} \left ( 1 - e^{-\pi \left (|\mathcal{T}(z)|^{2} - |z|^{2}\right )} \right ) - 4\varepsilon - \varepsilon ^{2}, \end{aligned} \end{aligned}$$
(4.4)

whenever \(z \in B\). Moreover, since \(z \in B\subset A_{\Omega }\), (3.14) shows that

$$ e^{-\pi |z|^{2}} \ge u(z) - 3 \varepsilon \ge u^{*}(|\Omega |) - 3 \varepsilon \ge e^{-|\Omega |} - 6 \varepsilon > \frac{e^{-|\Omega |}}{2}; $$

here we used also

$$ e^{-|\Omega |} - 3 \varepsilon \leq u^{*}(|\Omega |)\leq e^{-|\Omega |} + 3 \varepsilon , $$
(4.5)

cf. (3.15) and (3.16). If \(\pi (|\mathcal{T}(z)|^{2}-|z|^{2}) \geq 1\) then we find

$$ u(z)- u(\mathcal{T}(z)) \geq \frac{e^{-|\Omega |}}{2} (1- e^{-1}) - 4 \varepsilon - \varepsilon ^{2} \geq \frac{e^{-|\Omega |}}{4} - 4 \varepsilon - \varepsilon ^{2}. $$

On the other hand, if \(\pi (|\mathcal{T}(z)|^{2}-|z|^{2}) \leq 1\), from (4.4),

$$ u(z) - u(\mathcal{T}(z)) \ge \frac{e^{-|\Omega |} \left (|\mathcal{T}(z)|^{2} - |z|^{2}\right )}{2} - 4 \varepsilon - \varepsilon ^{2} \ge C_{|\Omega |}e^{-|\Omega |} \frac{\gamma}{2} - 4\varepsilon - \varepsilon ^{2}. $$

Choosing \(C_{|\Omega |} = 20 e^{|\Omega |}\) and \(\gamma \ge \varepsilon \), the previous estimates yield the desired (4.3).
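Indeed, with these choices the second case above gives

$$ u(z)- u(\mathcal{T}(z)) \geq 20 e^{|\Omega |}\, e^{-|\Omega |}\, \frac{\gamma}{2} - 4\varepsilon - \varepsilon ^{2} = 10\gamma - 4\varepsilon - \varepsilon ^{2} \geq 5\gamma $$

as soon as \(\varepsilon \leq \gamma \) and \(\varepsilon \leq 1\), while in the first case it suffices that \(20\gamma +16\varepsilon +4\varepsilon ^{2}\leq e^{-|\Omega |}\), which holds once \(\delta \) (and hence \(\gamma \), to be chosen of order \((e^{|\Omega |}\delta )^{1/2}\) in Step II) is sufficiently small in terms of \(|\Omega |\).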

Step II. Showing that \(\Omega \) is close to \(A_{\Omega }\). Note the identities

$$ |\Omega |-|B|= |\Omega |- |\mathcal{T}(B)| = |\Omega \setminus \mathcal{T}(B)| = |\Omega |-|A_{\Omega}\setminus \Omega | + |(\Omega \setminus \mathcal{T}(B))\setminus A_{\Omega}|, $$

hence

$$ \frac {1}{2}|\Omega \Delta A_{\Omega }| = |A_{\Omega }\setminus \Omega | = |B|+ |(\Omega \setminus \mathcal{T}(B))\setminus A_{\Omega}|. $$
(4.6)

In this step, we want to estimate both terms on the right-hand side. The estimate for the first term follows by combining (4.1), (4.2) and (4.3):

$$ |B| \le \frac{d(\Omega )}{5\gamma} \le \frac{4\delta (1-e^{-|\Omega |})}{5 \gamma}. $$
(4.7)

To estimate the second term, note that \(\Omega \setminus \mathcal{T}(B)\) is contained in a \(C_{|\Omega |} \gamma \)-neighborhood of \(A_{\Omega}\); in turn, by (3.15), \(A_{\Omega}\) is nested between two concentric balls:

$$ \{z:e^{-\pi |z|^{2}} > u^{*}(|\Omega |)+ 3 \varepsilon \}\subset A_{ \Omega }\subset \{z:e^{-\pi |z|^{2}} > u^{*}(|\Omega |)- 3 \varepsilon \}=: E_{\Omega}. $$
(4.8)

Combining this information with (4.5), writing \(s=|\Omega |\) for brevity, and setting \(\lambda _{\pm} = \sqrt{-\pi ^{-1} \log (u^{*}(s) \pm 3\varepsilon )}\), we can estimate

$$\begin{aligned} \begin{aligned} |(\Omega \setminus \mathcal{T}(B))\setminus A_{\Omega}| & \leq |B_{C_{| \Omega |} \gamma + \lambda _{-}} \setminus B_{\lambda _{+}}| = \pi (C_{| \Omega |} \gamma + \lambda _{-})^{2} + \log (u^{*}(s) + 3\varepsilon ) \\ & \leq 2 \sqrt{\pi} C_{|\Omega |} \gamma \sqrt{- \log (u^{*}(s) - 3 \varepsilon )} + \pi (C_{|\Omega |} \gamma )^{2} \\ &\hphantom{\leq}{} + \log \left ( \frac{u^{*}(s) + 3\varepsilon }{u^{*}(s)-3\varepsilon } \right ) \\ & \leq 4 C_{|\Omega |} \gamma \sqrt{-\log (u^{*}(s)-3\varepsilon )} + \pi (C_{|\Omega |} \gamma )^{2} + C_{s} \varepsilon \\ & \leq 4 C_{|\Omega |} \gamma \sqrt{|\Omega | + 8 \varepsilon e^{| \Omega |}} + \pi C_{|\Omega |}^{2} \gamma ^{2} + C_{s} \gamma \\ & \leq 4 C_{|\Omega |} \gamma \left (|\Omega |^{1/2} + 4 \varepsilon \frac{e^{|\Omega |}}{|\Omega |^{1/2}}\right ) + \pi C_{|\Omega |}^{2} \gamma ^{2} + C_{s} \gamma \\ & \leq 4 C_{|\Omega |} \gamma \left (|\Omega |^{1/2} + 4 C\delta ^{1/2} \frac{e^{2|\Omega |}}{|\Omega |^{1/2}}\right ) + \pi C_{|\Omega |}^{2} \gamma ^{2} + C_{s} \gamma , \end{aligned} \end{aligned}$$
(4.9)

provided that \(\varepsilon \) is sufficiently small, depending on \(|\Omega |\). Choosing \(\varepsilon \leq \gamma = C (e^{|\Omega |}\delta )^{1/2}\), where \(C\) is the constant provided by Theorem 1.5, and combining (4.6), (4.7) and (4.9), we get

$$ |\Omega \Delta A_{\Omega}| \leq C\delta ^{1/2}, $$

for some new but still explicitly computable constant \(C=C(|\Omega |)\).

Step III. Conclusion. To conclude, we just need to compare \(\Omega \) with the ball \(S_{\Omega}:=\{z:e^{-\pi |z|^{2}}\geq e^{-|\Omega |}\}\). By (4.5) and (4.8), we have \(S_{\Omega }\subset E_{\Omega }\) and

$$ |E_{\Omega }\setminus S_{\Omega }| \le C \varepsilon \leq C(|\Omega |) \delta ^{\frac {1}{2}}, $$
(4.10)

where we also used that \(\varepsilon \leq \gamma = C(e^{|\Omega |}\delta )^{1/2}\), as in Step II. It follows that

$$\begin{aligned} |S_{\Omega}\Delta \Omega | &\leq |\Omega \setminus E_{\Omega}| + |E_{ \Omega}\setminus S_{\Omega}|+|S_{\Omega}\setminus \Omega | \\ & \leq |\Omega \setminus E_{\Omega}| + |E_{\Omega}\setminus S_{\Omega}|+|E_{ \Omega}\setminus \Omega | \leq |E_{\Omega}\Delta \Omega | + C\delta ^{1/2} \end{aligned}$$

and so it is enough to bound \(|E_{\Omega }\triangle \Omega |\). We then estimate

$$\begin{aligned} |E_{\Omega }\triangle \Omega | = |E_{\Omega }\setminus \Omega | + | \Omega \setminus E_{\Omega }| \leq |E_{\Omega }\setminus A_{\Omega }| + |A_{\Omega }\setminus \Omega | + |\Omega \setminus A_{\Omega }| \leq C \delta ^{1/2}, \end{aligned}$$

where in the last inequality we estimate \(|E_{\Omega}\setminus A_{\Omega}|\) as in (4.10) and we also used the estimate from the last step. We have now proved (1.19) and thus also (1.8). □

We remark that, in spite of the sharp exponent of \(\delta \) in the result above, the asymptotic growth of the constant \(K(|\Omega |)\) in (1.8) from the proof above is likely not sharp: as we shall see in Sect. 6, one expects, from the functional stability part, that the sharp growth of the constant should be of the form \(\sim e^{|\Omega |/2}\), while the proof above yields \(K(|\Omega |) \sim e^{2|\Omega |}\).

Although there is room for improving such a constant with the current methods, it is unlikely that these will suffice in order to upgrade \(K(|\Omega |)\) to the aforementioned conjectured optimal growth rate. For that reason, we consider this to be a genuinely interesting problem, which we wish to revisit in a future work.

5 An alternative variational approach to the function stability

The purpose of this section is to give a variational proof of the function stability in Theorem 1.5. Fix \(s>0\) and consider the functional

$$ \mathcal {K}\colon \mathcal{F}^{2}(\mathbb{C})\to \mathbb{R}, \qquad \mathcal {K}[F] := \frac{I_{F}(s)}{\|F\|_{\mathcal {F}^{2}}^{2}}, $$

where we recall that \(I_{F}(s)\) is the integral of \(u_{F}\) over its superlevel set of measure \(s\), cf. (1.27). We will prove the following result:

Theorem 5.1

Fix \(s\in (0,\infty )\). There are explicit constants \(\varepsilon _{0}(s),C(s)>0\) such that, for all \(\varepsilon \in (0,\varepsilon _{0})\), we have

$$ \mathcal {K}[1] - \mathcal {K}[1+ \varepsilon G] \geq C(s)\varepsilon ^{2}, $$

whenever \(G\in \mathcal {F}^{2}(\mathbb{C})\) with \(\|G\|_{\mathcal {F}^{2}}=1\) satisfies (3.6).

The proof of Theorem 5.1 is almost independent of the results of Sect. 2, as we will only rely on the suboptimal stability result from Lemma 2.3. This lemma, in turn, does not rely on the other results from that section.

Let us first note that Theorem 5.1 indeed implies the function stability part of Theorem 1.5, although without the optimal dependence of the constant on \(|\Omega |\).

Alternative proof of (1.17), assuming Theorem 5.1

Without loss of generality, we can assume the normalizations detailed at the beginning of Sect. 3. By the same argument as in (3.7), if the deficit is sufficiently small we see that

$$ \|F-1\|_{\mathcal{F}^{2}} = \varepsilon (F)= \|F\|_{\mathcal{F}^{2}} \rho (F)\leq C \big(e^{|\Omega |}\delta (F;\Omega )\big)^{1/4}, $$

where now the last inequality follows by combining Lemma 2.3 with the simple Lemma 2.5, instead of using (1.17). Here, we take \(\Omega =A_{u_{F}^{*}(t)}=\{u_{F}>u_{F}^{*}(t)\}\). Hence, we can write

$$ F = 1 + \varepsilon G,\qquad \|G\|_{\mathcal {F}^{2}} = 1, $$

where \(G\) satisfies (3.6), and we can assume that \(\varepsilon \) is sufficiently small. Theorem 5.1 then implies that

$$ (1-e^{-\left | \Omega \right |})\delta (F;\Omega ) = \mathcal{K}[1] - \mathcal{K}[F] \ge C(|\Omega |) \varepsilon ^{2} = C(|\Omega |) \|F-1 \|_{\mathcal {F}^{2}}^{2}. $$

To complete the proof it suffices to note that, by our normalizations, \(F(0)=1\leq \|F\|_{\mathcal {F}^{2}}\). Thus

$$ \|F-1\|_{\mathcal{F}^{2}} \geq \rho (F) \geq 2^{-1/2} \min _{z_{0} \in \mathbb{C}, |c|=\|F\|_{\mathcal{F}^{2}(\mathbb{C})}} \frac{ \|F - c \cdot F_{z_{0}}\|_{\mathcal {F}^{2}}}{\|F\|_{\mathcal {F}^{2}}}, $$

where the last inequality follows from (3.2). □

The proof of Theorem 5.1 is based on the following technical result:

Lemma 5.2

There is \(\varepsilon _{0}=\varepsilon _{0}(s)\) and a modulus of continuity \(\eta \), depending only on \(s\), such that

$$ \left | \mathcal {K}[1+ \varepsilon G] - \mathcal {K}[1] - \frac{\varepsilon ^{2}}{2}\nabla ^{2} \mathcal {K}[1](G,G)\right | \le \eta (\varepsilon )\varepsilon ^{2} $$

for all \(0\leq \varepsilon \leq \varepsilon _{0}(s)\) and all \(G\in \mathcal {F}^{2}(\mathbb{C})\) with \(\|G\|_{\mathcal {F}^{2}}=1\) which satisfy (3.6). Here we have defined

$$ \nabla ^{2} \mathcal{K}[1](G,G) := \frac{\operatorname{d\!}^{\,2}}{\operatorname{d\!}\varepsilon ^{\,2}} \mathcal{K}[1+\varepsilon G]\Big|_{ \varepsilon = 0}. $$

The proof of Lemma 5.2 is rather technical and standard, for which reason we have moved it to Appendix A. Lemma 5.2 shows that \(\mathcal {K}[1+\varepsilon G]-\mathcal {K}[1]\) is essentially controlled by the second variation of \(\mathcal {K}\) at 1, in the direction of \(G\). Since 1 is a local maximum point of \(\mathcal {K}\), this variation is negative semi-definite, but to prove Theorem 5.1 we need to show that it is uniformly negative definite. This is the content of the next proposition, which is the main result of this section.

Proposition 5.3

For all \(G\in \mathcal {F}^{2}(\mathbb{C})\) such that \(\|G\|_{\mathcal {F}^{2}}=1\) and which satisfy (3.6), we have

$$ \frac {1}{2}\nabla ^{2} \mathcal {K}[1](G,G)\leq -s e^{-s}. $$

It is clear that Theorem 5.1 is an immediate consequence of the above two results:

Proof of Theorem 5.1

Combining Lemma 5.2 and Proposition 5.3, we have

$$ \mathcal {K}[1]-\mathcal {K}[1+\varepsilon G] \geq - \varepsilon ^{2} \Big(\frac {1}{2} \nabla ^{2} \mathcal {K}[1](G,G) + \eta ( \varepsilon )\Big) \geq \varepsilon ^{2}\Big(\frac{C(s)}{2}-\eta ( \varepsilon )\Big). $$

The conclusion now follows by choosing \(\varepsilon _{0}=\varepsilon _{0}(s)\) even smaller so that \(\frac{C(s)}{4}\geq \eta (\varepsilon _{0})\). □

The rest of this section is dedicated to the proof of Proposition 5.3. Clearly we first need to compute the second variation of \(\mathcal {K}\) and, in order to do so, our strategy is to consider the sets

$$ \Omega _{\varepsilon }:= \{u_{\varepsilon }>u_{\varepsilon }^{*}(s)\}, \qquad u_{\varepsilon }:=u_{1+\varepsilon G} = |1+\varepsilon G|^{2} e^{- \pi |\cdot |^{2}}, $$
(5.1)

and to write

$$ \Omega _{\varepsilon }= \Phi _{\varepsilon }(\Omega _{0}), $$

for a suitable volume-preserving flow \(\Phi _{\varepsilon }\). In order to construct such a flow, we first prove a general lemma which allows us to build a flow that deforms the unit disk into a given family of graphical domains over the unit circle. This type of result is well-known, and we refer the reader for instance to [3, Theorem 3.7] for a more general statement.

Lemma 5.4

Denote by \(D_{0} \subset \mathbb{R}^{2}\) the unit disk, and suppose that we are given a one-parameter family \(\{D_{\varepsilon }\}_{\varepsilon \in [0,\varepsilon _{0}]}\) of domains, whose boundaries are given by smooth graphs over the unit circle:

$$ \partial D_{\varepsilon }= \{(1+g_{\varepsilon }(\omega ))\omega : \omega \in \mathbb{S}^{1}\}. $$

We assume that the family \(\{g_{\varepsilon }\}_{\varepsilon \in [0,\varepsilon _{0}]}\) depends smoothly on \((\varepsilon , \omega )\).

Then there exists a family \(\{Y_{\varepsilon }\}_{\varepsilon \in [0,\varepsilon _{0}]}\) of smooth vector fields, which depends smoothly on the parameter \(\varepsilon \), such that, if \(\Psi _{\varepsilon }\) denotes the flow associated with \(Y_{\varepsilon }\), i.e. if

$$ \frac{\operatorname{d\!}}{\operatorname{d\!}\varepsilon } \Psi _{\varepsilon }= Y_{ \varepsilon }(\Psi _{\varepsilon }), $$

then \(\Psi _{\varepsilon }(D_{0}) = D_{\varepsilon }\). In addition, \(Y_{\varepsilon }\) is such that \(\operatorname*{div}(Y_{\varepsilon }) = 0\) in a neighbourhood of \(\mathbb{S}^{1}\).

Proof

By translating into polar coordinates \(r=|z|\) and \(\omega = z/|z|\) we see that, if we define a vector field \(Y_{\varepsilon }\) locally on a neighbourhood of \(\mathbb{S}^{1}\) by

$$ Y_{\varepsilon }(r,\omega ) = \frac {1}{r} (1+g_{\varepsilon }( \omega ))\partial _{\varepsilon }g_{\varepsilon }(\omega ) \omega , $$

then \(Y_{\varepsilon }\) satisfies \(\operatorname*{div}(Y_{\varepsilon }) = 0\) in a neighbourhood of \(\mathbb{S}^{1}\). Moreover, in the same neighbourhood of \(\mathbb{S}^{1}\), we may write the flow \(\Psi _{\varepsilon }\) of \(Y_{\varepsilon }\) explicitly as

$$ \Psi _{\varepsilon }(r,\omega ) = (r^{2} + (1+g_{\varepsilon }( \omega ))^{2}-1)^{\frac {1}{2}} \omega . $$
(5.2)
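Indeed, writing \(\Psi _{\varepsilon }(r,\omega ) = \rho _{\varepsilon }(r,\omega )\,\omega \) with \(\rho _{\varepsilon } := (r^{2} + (1+g_{\varepsilon }(\omega ))^{2}-1)^{\frac {1}{2}}\), one checks directly that

$$ \partial _{\varepsilon }\rho _{\varepsilon } = \frac{(1+g_{\varepsilon }(\omega ))\,\partial _{\varepsilon }g_{\varepsilon }(\omega )}{\rho _{\varepsilon }}, $$

which is precisely the radial component of \(Y_{\varepsilon }\) evaluated at the point \(\Psi _{\varepsilon }(r,\omega )\); moreover, a radial field of the form \(\frac{a(\omega )}{r}\,\omega \) has vanishing divergence, since \(\frac {1}{r}\partial _{r}\big(r\cdot \frac{a(\omega )}{r}\big)=0\).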

We then extend \(\Psi _{\varepsilon }\) from the neighbourhood of \(\mathbb{S}^{1}\) to the whole complex plane, in such a way that \(\Psi _{\varepsilon }(D_{0}) = D_{\varepsilon }\) and the map \((\varepsilon ,x) \mapsto \Psi _{\varepsilon }(x)\) is smooth. Taking \(\tilde{Y}_{\varepsilon }\) to be the vector field of the extended version of \(\Psi _{\varepsilon }\), we see that \(\tilde{Y}_{\varepsilon }\) is an extension of \(Y_{\varepsilon }\) to the whole space, and moreover, the map \((\varepsilon ,x) \mapsto \tilde{Y}_{\varepsilon }(x)\) is smooth. □

Using the results of Sect. 3 we can readily apply Lemma 5.4 to the sets \(\Omega _{\varepsilon }\):

Lemma 5.5

Let \(G\in \mathcal {F}^{2}(\mathbb{C})\) satisfy (3.6). There is \(\varepsilon _{0} =\varepsilon _{0}(s,\|G\|_{\mathcal {F}^{2}})> 0\) such that, for all \(\varepsilon \in [0,\varepsilon _{0}]\), there are globally defined smooth vector fields \(X_{\varepsilon }\), with associated flows \(\Phi _{\varepsilon }\), such that

$$ \Omega _{\varepsilon }= \Phi _{\varepsilon }(\Omega _{0}). $$

Moreover, \(X_{\varepsilon }\) depends smoothly on \(\varepsilon \) and is divergence-free in a neighborhood of \(\partial \Omega _{0}\). We also have

$$ \int _{\partial \Omega _{\varepsilon }} \langle X_{\varepsilon} , \nu _{\varepsilon }\rangle = 0, $$
(5.3)

where \(\nu _{\varepsilon }\) denotes the outward-pointing unit vector field on \(\partial \Omega _{\varepsilon }\).

Proof

Up to dilating by a constant (which depends only on \(s\)) we can assume that \(\Omega _{0}=B_{1}\). Lemma 3.2 shows that, if \(\varepsilon _{0}\) is chosen sufficiently small, the boundaries \(\partial \Omega _{\varepsilon }\) are smooth and the sets \(\Omega _{\varepsilon }\) are star-shaped with respect to zero, hence they can be written as graphs over \(\mathbb{S}^{1}\):

$$ \partial \Omega _{\varepsilon }= \{ (1+f_{\varepsilon }(\omega )) \, \omega \colon \omega \in \mathbb{S}^{1}\}. $$
(5.4)

We now claim that the function \((\varepsilon ,\omega ) \mapsto f_{\varepsilon }(\omega )\) is smooth as long as \(\varepsilon \) is sufficiently small.

Indeed, for fixed \(\varepsilon \), the function \(\omega \mapsto f_{\varepsilon }(\omega )\) is smooth, by Lemma 3.2, since it is implicitly defined by \(u_{\varepsilon }((1+f_{\varepsilon }(\omega ))\cdot \omega ) = u_{\varepsilon }^{*}(s)\). Moreover, since \(|\nabla u_{\varepsilon }|\) is bounded away from zero by a constant depending only on \(s\) when restricted to \(\{u_{\varepsilon }= u_{\varepsilon }^{*}(s)\}\) (this follows, for instance, from the proof of Lemma 3.1), a quantitative version of the implicit function theorem (cf. [26]) implies that there is a universal \(\varepsilon _{0}(s) > 0\) such that, if \(\varepsilon < \varepsilon _{0}(s)\), then \(\varepsilon \mapsto f_{\varepsilon }(\omega )\) is smooth for any fixed \(\omega \in \mathbb{S}^{1}\). This proves the desired smoothness claim.

By Lemma 5.4, the associated vector fields are explicitly given in a neighbourhood of \(\mathbb{S}^{1}\) by

$$ X_{\varepsilon }(r,\omega ) = \frac {1}{r} (1+f_{\varepsilon }( \omega ))\partial _{\varepsilon }f_{\varepsilon }(\omega ) \omega , $$
(5.5)

and they are divergence-free in a neighbourhood of \(\mathbb{S}^{1}\). Their smoothness then follows from the smoothness of \(f_{\varepsilon }\) in \(\varepsilon \).

To prove the final claim we note that, since \(\Omega _{\varepsilon}\) has constant measure equal to \(s\) for all \(\varepsilon \in [0,\varepsilon _{0}]\), by a calculation in polar coordinates we see that the function

$$ \mathcal{A}(\varepsilon ) := \int _{\partial \Omega _{0}} (1+f_{ \varepsilon}(\omega ))^{2} \operatorname{d\!}\mathcal{H}^{1}(\omega ) $$

is constant in the interval \([0,\varepsilon _{0}]\). Thus,

$$ 0 = \frac{\operatorname{d\!}\mathcal {A}(\varepsilon )}{\operatorname{d\!}\varepsilon } = 2 \int _{\partial \Omega _{0}} \partial _{\varepsilon} f_{\varepsilon}( \omega ) (1 + f_{\varepsilon}(\omega ) ) \operatorname{d\!}\mathcal{H}^{1}( \omega ) = 2 \int _{\partial \Omega _{0}} \langle X_{\varepsilon}, \nu \rangle \operatorname{d\!}\mathcal{H}^{1}(\omega ), $$

and so the integral above has to vanish; here we used the fact that \(\Omega _{0}\) is a ball. Since \(\operatorname*{div}(X_{\varepsilon }) = 0\) in a neighbourhood of \(\partial \Omega _{0}\), and as \(\int _{\partial \Omega _{0}} \langle X_{\varepsilon }, \nu \rangle \operatorname{d\!}\mathcal{H}^{1}= 0\), the divergence theorem shows that, for any Lipschitz Jordan curve \(\gamma \) in the same neighbourhood of \(\partial \Omega _{0}\), we have

$$ \int _{\gamma} \langle X_{\varepsilon }, \nu _{\gamma} \rangle \operatorname{d\!}\mathcal {H}^{1} = 0 , $$

where \(\nu _{\gamma}\) denotes the outward-pointing normal field on \(\gamma \). Thus (5.3) follows. □

Having the previous lemma at our disposal, we can now obtain an explicit formula for \(\nabla ^{2} \mathcal {K}[1]\).

Lemma 5.6

For all \(G\in \mathcal{F}^{2}(\mathbb{C})\) which satisfy (3.6), we have

$$ \frac {1}{2}\nabla ^{2} \mathcal {K}[1](G,G) = \int _{\Omega _{0}} |G|^{2} e^{-\pi |z|^{2}} \operatorname{d\!}z - \|G\|_{\mathcal {F}^{2}}^{2} \int _{\Omega _{0}} e^{-\pi |z|^{2}} \operatorname{d\!}z + \frac{e^{-s}}{2\sqrt{\pi s}} \int _{\partial \Omega _{0}} |G|^{2} \operatorname{d\!}\mathcal {H}^{1}(z). $$
(5.6)

Proof

Setting \(\operatorname{d\!}\sigma (z) := e^{-\pi |z|^{2}} \operatorname{d\!}z\) for brevity, let us introduce the auxiliary functions

$$ I_{\varepsilon }:= \int _{\Omega _{\varepsilon }} |1+\varepsilon G|^{2} \operatorname{d\!}\sigma , \qquad J_{\varepsilon }:=\int _{\mathbb{C}}|1+ \varepsilon G|^{2} \operatorname{d\!}\sigma ; $$
(5.7)

we also write \(K_{\varepsilon }:= \mathcal {K}[1+\varepsilon G]\), where we recall that \(\Omega _{\varepsilon }:= \{u_{\varepsilon }>u_{\varepsilon }^{*}(s) \}\), cf. (5.1). We will always take \(\varepsilon \leq \varepsilon _{0}\), where \(\varepsilon _{0}\) is as in Lemma 5.5. Here and henceforth, we shall denote derivatives of the quantities \(K_{\varepsilon }\), \(I_{\varepsilon }\), \(J_{\varepsilon }\) with respect to \(\varepsilon \) with primes, that is, \(K_{\varepsilon }'\), \(I_{\varepsilon }'\), \(J_{\varepsilon }'\), etc. With that in mind, we have:

$$\begin{aligned} \begin{aligned} K_{\varepsilon }' & = \Big(I'_{\varepsilon }- I_{\varepsilon } \frac{J'_{\varepsilon }}{J_{\varepsilon }}\Big) \frac{1}{J_{\varepsilon }}, \\ K_{\varepsilon }'' &= \Big(I''_{\varepsilon }- I_{\varepsilon } \frac{J''_{\varepsilon }}{J_{\varepsilon }}\Big) \frac{1}{J_{\varepsilon }}- \frac{2 J'_{\varepsilon }}{J_{\varepsilon }^{2}} \Big(I_{\varepsilon }'-I_{ \varepsilon }\frac{J'_{\varepsilon }}{J_{\varepsilon }}\Big), \end{aligned} \end{aligned}$$
(5.8)

and using Reynolds' transport theorem we further compute

$$\begin{aligned} I'_{\varepsilon }& = 2 \int _{\Omega _{\varepsilon }} \text{Re}(G \cdot \overline{1+\varepsilon G}) \operatorname{d\!}\sigma , \qquad J'_{\varepsilon }= 2 \int _{\mathbb{C}}\text{Re} (G \cdot \overline{1+ \varepsilon G}) \operatorname{d\!}\sigma , \end{aligned}$$
(5.9)
$$\begin{aligned} I''_{\varepsilon }& = 2 \int _{\Omega _{\varepsilon }} |G|^{2} \operatorname{d\!}\sigma + 2\int _{\partial \Omega _{\varepsilon }} \text{Re}(G \cdot \overline{1+\varepsilon G}) \langle X_{\varepsilon },\nu _{ \varepsilon }\rangle e^{-\pi |z|^{2}}, \qquad J''_{\varepsilon }= 2 \int _{ \mathbb{C}}|G|^{2} \operatorname{d\!}\sigma . \end{aligned}$$
(5.10)

Here and in what follows, \(X_{\varepsilon }\) denotes the vector fields built in Lemma 5.5. Note that, to obtain (5.9), we used the fact that \(u_{\varepsilon }\) is constant on \(\partial \Omega _{\varepsilon }\), together with the cancelling property (5.3) of the vector fields.

Since \(\langle G, 1\rangle _{\mathcal {F}^{2}}=0\), \(\Omega _{0}\) is a ball and \(G\) is holomorphic, from (5.9) it is easy to see that

$$ I'_{0}=J'_{0}=0 \qquad \implies \qquad \frac{\operatorname{d\!}}{\operatorname{d\!}\varepsilon } \mathcal {K}[1+\varepsilon G] \Big|_{ \varepsilon =0} = K'_{0} = 0, $$
(5.11)

where the implication follows from the first equation in (5.8).

Combining (5.8)–(5.11), we arrive at

$$\begin{aligned} \frac{\operatorname{d\!}^{\,2}}{\operatorname{d\!}\varepsilon ^{\,2}}\mathcal {K}[1+ \varepsilon G]\Bigg|_{\varepsilon = 0 } =&2\Big( \int _{\Omega _{0}} |G|^{2} e^{-\pi |z|^{2}} - \|G\|_{\mathcal {F}^{2}}^{2}\int _{\Omega _{0}} e^{- \pi |z|^{2}} \\ &{} + \int _{\partial \Omega _{0}} \operatorname{Re}(G(z)) \langle X_{0}, \nu \rangle e^{-\pi |z|^{2}} \Big). \end{aligned}$$
(5.12)

Since \(\partial \Omega _{0}\) is a circle of radius \(r_{0}\), where \(\pi r_{0}^{2} =s\) (so that \(e^{-\pi |z|^{2}}=e^{-s}\) and \(2\pi r_{0}=2\sqrt{\pi s}\) on \(\partial \Omega _{0}\)), our main task is to simplify the last term: specifically, we want to show that

$$ \int _{\partial \Omega _{0}} \operatorname{Re}G \, \langle X_{0},\nu \rangle \operatorname{d\!}\mathcal {H}^{1} = \frac{1}{2\pi r_{0}} \int _{\partial \Omega _{0}} |G|^{2} \operatorname{d\!}\mathcal {H}^{1}. $$
(5.13)

In order to prove (5.13), we have to understand how to write \(X_{0}\) in terms of \(G\) on \(\partial \Omega _{0}\).

We first claim that

$$ \frac{\operatorname{d\!}}{\operatorname{d\!}\varepsilon }\Big|_{\varepsilon =0} \mu _{ \varepsilon }(t) = 0, $$
(5.14)

where \(\mu _{\varepsilon }(t) := \mu _{1+\varepsilon G}(t) =|\{u_{ \varepsilon }>t\}|\). To prove this claim we build, exactly as in Lemma 5.5, a family of vector fields \(Y_{\varepsilon }\) with associated flows \(\Psi _{\varepsilon }\) such that \(\Psi _{\varepsilon }(\{u_{0}>t\}) = \{u_{\varepsilon }>t\}\) (note that, by Lemma 3.2, these sets have smooth boundaries, hence we can apply Lemma 5.4). We compute, for \(z \in \partial \{ u_{0} > t \} = \{u_{0} = t\}\),

$$ t \equiv u_{\varepsilon }(\Psi _{\varepsilon }(z))= \left (1 + 2 \varepsilon \operatorname{Re}G(z)- 2\pi \varepsilon \langle Y_{0}(z),z \rangle + O(\varepsilon ^{2})\right )e^{-\pi |z|^{2}}, $$
(5.15)

and thus, since the first order term in \(\varepsilon \) vanishes, we have

$$ \operatorname{Re}G(z) = \pi \langle Y_{0},z\rangle . $$
(5.16)

We can now prove (5.14): again by Reynolds' formula, we have

$$ \frac{\operatorname{d\!}}{\operatorname{d\!}\varepsilon }\Big|_{\varepsilon =0} \mu _{\varepsilon }(t) = \int _{\partial \{u_{0}>t\}} \langle Y_{0}, \nu \rangle \operatorname{d\!}\mathcal {H}^{1} = \int _{\partial \{u_{0}>t\}} \frac{\langle Y_{0}, z \rangle }{|z|} \operatorname{d\!}\mathcal {H}^{1} = \frac{1}{\pi }\int _{\partial \{u_{0}>t\}} \frac{\operatorname{Re}G(z)}{|z|} \operatorname{d\!}\mathcal {H}^{1} = 2 \operatorname{Re}G(0) = 0, $$
(5.17)

since \(\partial \{u_{0}>t\}\) is a circle and \(\operatorname{Re}G\) is harmonic, where the last equality follows from (3.6).

As we explain in Remark 5.7 below, the function \(\varepsilon \mapsto \mu _{\varepsilon }(t)\) is smooth in \(\varepsilon \), whenever \(\varepsilon \) is sufficiently small, and also smooth in \(t\), for \(t \in (\varepsilon ^{c_{2}},\max u_{\varepsilon })\), where \(c_{2}\) is as in Lemma 3.2. Now let us fix \(s>0\) and recall that \(G(0)=0\). Using the smoothness of \(\varepsilon \mapsto \mu _{\varepsilon }\) first and then the smoothness of \(\mu _{0}\) on a neighbourhood of \(u_{0}^{*}(s)\), we obtain:

$$\begin{aligned} s & = \mu _{\varepsilon }(u_{\varepsilon }^{*}(s)) \\ & = \mu _{0}(u_{\varepsilon }^{*}(s))+ 2\varepsilon \operatorname{Re}(G(0)) + O(\varepsilon ^{2}) \\ & = s + (u_{\varepsilon }^{*}(s)-u_{0}^{*}(s))\frac{\operatorname{d\!}}{\operatorname{d\!}t} \Big|_{t=u_{0}^{*}(s)} \mu _{0}(t) + O(\varepsilon ^{2}) \\ & = s - \frac{u_{\varepsilon }^{*}(s)-u_{0}^{*}(s)}{u_{0}^{*}(s)} + O( \varepsilon ^{2}), \end{aligned}$$

where we used the fact that \(G(0)=0\) by (3.6). Thus, after rearranging, we find

$$ u_{\varepsilon }^{*}(s) = (1 + O(\varepsilon ^{2}))e^{-s}. $$

Since \(\Phi _{\varepsilon }\) is the flow of \(X_{\varepsilon }\), we have

$$ \Phi _{\varepsilon }(z) = \Phi _{0}(z) + \varepsilon X_{0}(\Phi _{0}(z)) + O(\varepsilon ^{2}) = z+ \varepsilon X_{0}(z) + O(\varepsilon ^{2}). $$
(5.18)

We now compare the two expansions

$$\begin{aligned} u_{\varepsilon }(\Phi _{\varepsilon }(z)) & = (1 + 2 \varepsilon \operatorname{Re}G(z)- 2\pi \varepsilon \langle X_{0}(z),z\rangle + O( \varepsilon ^{2}) )e^{-\pi |z|^{2}}, \\ u_{\varepsilon }^{*}(s) & = (1 + O(\varepsilon ^{2})) e^{-s}, \end{aligned}$$

and we deduce that, on \(\partial \{u_{0}>u_{0}^{*}(s)\} = \partial \Omega _{0}\), the first order terms in \(\varepsilon \) must be the same, thus

$$ \pi \langle X_{0},z\rangle = \operatorname{Re}G(z) \text{ on } \partial \Omega _{0}. $$
(5.19)

Finally, since \(G\) is holomorphic and \(G(0)=0\), we have

$$ \int _{\partial \Omega _{0}} (\operatorname{Re}G)^{2} \operatorname{d\!}\mathcal {H}^{1} = \int _{\partial \Omega _{0}} \operatorname{Re}(G^{2}) \operatorname{d\!}\mathcal {H}^{1} + \int _{\partial \Omega _{0}} (\operatorname{Im}G)^{2} \operatorname{d\!}\mathcal {H}^{1} = \int _{\partial \Omega _{0}} (\operatorname{Im}G)^{2} \operatorname{d\!}\mathcal {H}^{1}, $$

so that \(\int _{\partial \Omega _{0}} (\operatorname{Re}G)^{2} \operatorname{d\!}\mathcal {H}^{1} = \frac {1}{2}\int _{\partial \Omega _{0}} |G|^{2} \operatorname{d\!}\mathcal {H}^{1}\).

Now (5.13) follows by combining this identity with (5.19):

$$ \int _{\partial \Omega _{0}} \operatorname{Re}G \, \langle X_{0}, \nu \rangle \operatorname{d\!}\mathcal {H}^{1} = \frac{1}{r_{0}} \int _{\partial \Omega _{0}} \operatorname{Re}G \, \langle X_{0}, z \rangle \operatorname{d\!}\mathcal {H}^{1} = \frac{1}{\pi r_{0}} \int _{\partial \Omega _{0}} (\operatorname{Re}G)^{2} \operatorname{d\!}\mathcal {H}^{1} = \frac{1}{2\pi r_{0}} \int _{\partial \Omega _{0}} |G|^{2} \operatorname{d\!}\mathcal {H}^{1}, $$

as wished. □

Proof of Proposition 5.3

Since \(G\in \mathcal {F}^{2}(\mathbb{C})\) satisfies (3.6), we can write

$$ G(z) = \sum _{k=2}^{\infty }a_{k} \Big(\frac{\pi ^{k}}{k!}\Big)^{ \frac {1}{2}} z^{k}, \qquad \|G\|_{\mathcal {F}^{2}}^{2} = \sum _{k=2}^{ \infty} |a_{k}|^{2}. $$

It is direct to see that (5.6) can be rewritten using the power series for \(G\) as

$$\begin{aligned} \begin{aligned} \frac {1}{2}\nabla ^{2} \mathcal {K}[1](G,G) =& \sum _{k=2}^{\infty }|a_{k}|^{2} V_{k}(s), \end{aligned} \end{aligned}$$
(5.20)

where

$$ V_{k}(s):= \frac{\pi ^{k}}{k!} \int _{B(0,\sqrt{ \frac {s}{\pi}})} |z|^{2k} e^{-\pi |z|^{2}} - \int _{B(0,\sqrt{ \frac {s}{\pi}})} e^{-\pi |z|^{2}} + e^{- s}\frac{s^{k}}{k!}. $$
(5.21)
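In the computation below it is convenient to record the following elementary identities, obtained by passing to polar coordinates and substituting \(t=\pi r^{2}\):

$$ \frac{\pi ^{k}}{k!} \int _{B(0,\sqrt{s/\pi })} |z|^{2k} e^{-\pi |z|^{2}} \operatorname{d\!}z = \frac{1}{k!}\int _{0}^{s} t^{k} e^{-t} \operatorname{d\!}t , \qquad \int _{B(0,\sqrt{s/\pi })} e^{-\pi |z|^{2}} \operatorname{d\!}z = 1-e^{-s}. $$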

We claim that \(V_{k}(s) \le 0\) for all \(s \ge 0\). This follows by a simple calculus observation:

$$\begin{aligned} V_{k}(s) &= - \frac{\pi ^{k}}{k!} \int _{\mathbb{C}\setminus B(0, \sqrt{s/\pi})} |z|^{2k} e^{-\pi |z|^{2}} \, \operatorname{d\!}z + \left (1 + \frac{s^{k}}{k!}\right )e^{-s} \\ & = - \frac{\Gamma (k+1,s)}{k!} +\left (1 + \frac{s^{k}}{k!}\right )e^{-s} = - \left ( \sum _{j=1}^{k-1} \frac{s^{j}}{j!} \right ) e^{-s}, \end{aligned}$$

where \(\Gamma (a,s) := \int _{s}^{\infty} r^{a-1} e^{-r} \operatorname{d\!}r\) denotes the upper incomplete Gamma function. In order to conclude the desired bound, notice that \(\lim _{k \to \infty} V_{k}(s) = e^{-s} - 1 <0\), and, since \(V_{k}(s)\) is decreasing in \(k\) for \(s>0\) fixed,

$$ \inf _{k \ge 2} (-V_{k}(s)) = -V_{2}(s) = se^{-s}. $$

The conclusion of Proposition 5.3 follows then directly from (5.20). □
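As a quick numerical sanity check (independent of the argument above), the closed form for \(V_{k}(s)\) and the identity \(\inf _{k\geq 2}(-V_{k}(s)) = se^{-s}\) can be verified as follows.

```python
import numpy as np
from scipy.special import gammaincc, factorial

# Check of the closed form V_k(s) = -(sum_{j=1}^{k-1} s^j/j!) e^{-s} against the
# definition via the upper incomplete Gamma function, and of the fact that
# -V_2(s) = s e^{-s} is the smallest of the values -V_k(s), k >= 2.
def V_direct(k, s):
    # V_k(s) = -Gamma(k+1, s)/k! + (1 + s^k/k!) e^{-s};
    # gammaincc(a, x) is the regularized upper incomplete Gamma, Gamma(a, x)/Gamma(a).
    return -gammaincc(k + 1, s) + (1 + s**k / factorial(k)) * np.exp(-s)

def V_closed(k, s):
    return -sum(s**j / factorial(j) for j in range(1, k)) * np.exp(-s)

for s in (0.3, 1.0, 4.0):
    for k in range(2, 10):
        assert abs(V_direct(k, s) - V_closed(k, s)) < 1e-12
    vals = [-V_closed(k, s) for k in range(2, 30)]
    assert np.argmin(vals) == 0 and abs(vals[0] - s * np.exp(-s)) < 1e-12
print("V_k identities verified")
```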

Remark 5.7

In the proof of Lemma 5.6 above we used the fact that the function \((\varepsilon ,t)\mapsto \mu _{\varepsilon }(t)\) is smooth provided that \(\varepsilon \) is sufficiently small and that \(t\in (\varepsilon ^{c_{2}},\max u_{\varepsilon })\), where \(c_{2}\) is as in Lemma 3.2. This can be seen explicitly as follows: the smoothness in \(\varepsilon \) follows by Lemma 5.4 and the fact that \(\Psi _{\varepsilon }(\{u_{0} > t\}) = \{u_{\varepsilon }> t\}\). On the other hand, by [33, Lemma 3.2] we have

$$ -\partial _{t} \mu _{\varepsilon }(t) = \int _{\left \{ u_{ \varepsilon }= t \right \}} |\nabla u_{\varepsilon }|^{-1} \, \operatorname{d\!}\mathcal{H}^{1} \,\, \text{for almost every } t \text{ in } (0,\max u_{ \varepsilon }). $$

By the proof of Lemma 3.1 we see that

$$ |\nabla u_{\varepsilon }(z)| \geq C(\varepsilon ^{c_{2}}, \|G\|_{ \mathcal{F}^{2}})|z| , $$
(5.22)

hence \(\mu _{\varepsilon }\in C^{0,1}_{\mathrm{loc}}(\varepsilon ^{c_{2}}, \max u_{\varepsilon })\). Moreover, the divergence theorem allows us to write

$$\begin{aligned} -\partial _{t}\mu _{\varepsilon }(t) + \partial _{t}\mu _{ \varepsilon }(t_{0}) & = \int _{\left \{ u_{\varepsilon }= t \right \}} |\nabla u_{\varepsilon }|^{-1} \, \operatorname{d\!}\mathcal{H}^{1} - \int _{ \left \{ u_{\varepsilon }= t_{0} \right \}} |\nabla u_{\varepsilon }|^{-1} \, \operatorname{d\!}\mathcal{H}^{1} \\ & = \int _{\left \{ u_{\varepsilon }= t \right \}} \frac{\nabla u_{\varepsilon }}{|\nabla u_{\varepsilon }|^{2}} \cdot \frac{\nabla u_{\varepsilon }}{|\nabla u_{\varepsilon }|} \, \operatorname{d\!}\mathcal{H}^{1} - \int _{\left \{ u_{\varepsilon }= t_{0} \right \}} \frac{\nabla u_{\varepsilon }}{|\nabla u_{\varepsilon }|^{2}} \cdot \frac{\nabla u_{\varepsilon }}{|\nabla u_{\varepsilon }|} \, \operatorname{d\!}\mathcal{H}^{1} \\ & = - \int _{\left \{ t_{0} > u_{\varepsilon }> t \right \}} \operatorname*{div}\left ( \frac{\nabla u_{\varepsilon }}{|\nabla u_{\varepsilon }|^{2}} \right ) \operatorname{d\!}z, \end{aligned}$$

for a.e. \(t < t_{0}\). By (5.22), \(|\nabla u_{\varepsilon }|^{-2}\) is bounded and smooth in the set \(\{t_{0} > u_{\varepsilon }> t\}\), which shows that \(\partial _{t} \mu _{\varepsilon }\in C^{0,1}_{\mathrm{loc}}( \varepsilon ^{c_{2}}, \max u_{\varepsilon })\). By a straightforward use of the coarea formula, iterating such an argument yields the desired smoothness property of \(\mu _{\varepsilon }\). Moreover, we also have that \(\partial _{t}\mu _{\varepsilon }(t) \le - \frac{1}{t}\), cf. (1.23), and so by the Implicit Function Theorem the functions \(u_{\varepsilon }^{*}(s)\) are differentiable in the variable \(\varepsilon \) for all fixed \(s\), for \(\varepsilon < \varepsilon _{0}(s)\) sufficiently small.

It is important to note that (3.6) is crucial as a normalization for the above proof to work: as a matter of fact, many of the cancellations in the proof of Lemma 5.6 only hold because \(G(0) = \langle G, 1 \rangle = 0\). Moreover, if \(\langle G,z \rangle \neq 0\), it could happen that \(\nabla ^{2} \mathcal{K}[1](G,G) = 0\), which would cause the proof of sharp stability to collapse.

The argument implicit in the reduction to (3.6) is hence a vital part of the proof: heuristically, it plays the pivotal role of providing us with a single point \(z_{0}\) – which, through translations, may be assumed to be the origin – for which one can compare the level sets of the functions \(u_{\varepsilon }\) to balls centered at \(z_{0}\). The fact that \(z_{0}\) is given by the point where each \(u_{\varepsilon }\) attains its maximum allows thus for a connection between the analytic and geometric natures of the problem, highlighting further the importance of the aforementioned reduction.

6 Sharpness of the stability estimates

In this short section we prove the sharpness claimed in Remark 1.2 concerning the estimates in Theorem 1.1. We will see that the variational approach of the previous section is quite useful in this regard. The following is the key proposition we require:

Proposition 6.1

Let \(s>0\) be a fixed real number. There are a constant \(C>0\) and families \(\{\Omega _{\varepsilon }\}_{\varepsilon }\) and \(\{\tilde{F}_{\varepsilon }\}_{\varepsilon } \subset \mathcal {F}^{2}(\mathbb{C})\), defined for all sufficiently small \(\varepsilon \geq 0\), with \(\|\tilde{F}_{\varepsilon }\|_{\mathcal {F}^{2}} = 1\) for every such \(\varepsilon \), such that:

  1. (i)

    \(\Omega _{0}\) is a ball and \(|\Omega _{\varepsilon }| = s\);

  2. (ii)

    \(\inf _{c,z_{0} \in \mathbb{C}} \|\tilde{F}_{\varepsilon }- c \cdot F_{z_{0}} \|_{\mathcal {F}^{2}} \ge \frac{\varepsilon }{C}\);

  3. (iii)

    the deficit satisfies \(\delta (\tilde{F}_{\varepsilon };\Omega _{\varepsilon }) \le C \frac{s e^{-s}}{1-e^{-s}} \varepsilon ^{2}\).

Proof

Let \(F_{\varepsilon }(z) = 1 + \varepsilon z^{2}\) and as usual let us write \(u_{\varepsilon }(z):= |F_{\varepsilon }(z)|^{2} e^{-\pi |z|^{2}}\). Consider, as in (5.1), the domains

$$ \Omega _{\varepsilon }= \{ z \in \mathbb{C}\colon u_{\varepsilon }(z) > u_{\varepsilon }^{*}(s)\}, $$

where \(s> 0\) is fixed. We then have

$$\begin{aligned} \begin{aligned} (1-e^{-s})\delta (F_{\varepsilon };\Omega _{\varepsilon }) & = \mathcal{K}[1] - \mathcal{K}[F_{\varepsilon }] \\ & \leq -\frac{\varepsilon ^{2}}{2} \nabla ^{2} \mathcal{K}[1](z^{2},z^{2}) + \varepsilon ^{2}\eta (\varepsilon ) = \frac{2se^{-s}}{\pi ^{2}} \varepsilon ^{2} + \eta (\varepsilon )\varepsilon ^{2}, \end{aligned} \end{aligned}$$
(6.1)

where we used Lemma 5.2 to pass to the second line and also (5.6) in the last equality. Now note that taking \(\varepsilon \) sufficiently small yields the desired upper bound if we choose \(\tilde{F}_{\varepsilon }= \frac{F_{\varepsilon }}{\|F_{\varepsilon }\|_{\mathcal {F}^{2}}}\). For the lower bound on \(\|\tilde{F}_{\varepsilon }- c F_{z_{0}}\|_{\mathcal {F}^{2}}\), we recall from (3.1) that

$$\begin{aligned} \|\tilde{F}_{\varepsilon }- c \cdot F_{z_{0}}\|_{\mathcal {F}^{2}}^{2} \ge 1 - \max _{z_{0} \in \mathbb{C}} |\tilde{F}_{\varepsilon }(z_{0})|^{2}e^{- \pi |z_{0}|^{2}}. \end{aligned}$$

In order to finish, we only need to show that the only global maximum of \(|F_{\varepsilon }(z)|^{2} e^{-\pi |z|^{2}}\) occurs at \(z=0\), which is equivalent to showing that

$$ (1+ 2 \varepsilon (x^{2} - y^{2}) + \varepsilon ^{2} \left | z \right |^{4}) < e^{\pi |z|^{2}}, $$

for each \(z \in \mathbb{C}\setminus \{0\}\). As \(1 + \pi |z|^{2} + \frac{\pi ^{2}}{2} |z|^{4}< e^{\pi |z|^{2}}\), this inequality is true if \(\varepsilon < \frac{\pi}{4}\). Thus, for such \(\varepsilon \),

$$ \|\tilde{F}_{\varepsilon }- c F_{z_{0}}\|_{\mathcal {F}^{2}}^{2} \ge 1 - \frac{1}{1 + \frac{2}{\pi ^{2}}\varepsilon ^{2}} \ge \frac{\varepsilon ^{2}}{\pi ^{2}}, $$

which concludes the proof. □
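The expansion (6.1) can also be observed numerically. The following grid-based sketch (with arbitrary resolution parameters, and not part of the argument) computes \(\mathcal {K}[1]-\mathcal {K}[F_{\varepsilon }]\) directly and compares the ratio with \(\frac{2se^{-s}}{\pi ^{2}}\).

```python
import numpy as np

# Grid-based sanity check of the expansion (6.1) for F_eps(z) = 1 + eps z^2:
# (K[1] - K[F_eps]) / eps^2 should approach 2 s e^{-s} / pi^2 as eps -> 0.
s, N, L = 1.0, 1500, 4.0
x = np.linspace(-L, L, N)
X, Y = np.meshgrid(x, x)
Z = X + 1j * Y
dA = (x[1] - x[0]) ** 2
m = int(round(s / dA))                   # number of grid cells of total area ~ s

def K(eps):
    # K[F_eps] = (integral of u_eps over its super-level set of measure s) / ||F_eps||^2
    u = np.abs(1 + eps * Z**2) ** 2 * np.exp(-np.pi * np.abs(Z) ** 2)
    I = np.sort(u.ravel())[::-1][:m].sum() * dA
    return I / (1 + 2 * eps**2 / np.pi**2)   # ||1 + eps z^2||_{F^2}^2 = 1 + 2 eps^2/pi^2

for eps in (0.2, 0.1, 0.05):
    print(eps, (K(0.0) - K(eps)) / eps**2, 2 * s * np.exp(-s) / np.pi**2)
```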

We are now ready to prove the claims in Remark 1.2.

Corollary 6.2

The following assertions hold:

  1. (i)

    The factor \(\delta (f;\Omega )^{1/2}\) cannot be replaced by \(\delta (f;\Omega )^{\beta}\), for any \(\beta > 1/2\), in (1.7) and (1.8);

  2. (ii)

    There are no constants \(\alpha \in (0,1)\) and \(C>0\) such that, for all measurable sets \(\Omega \subset \mathbb{C}\) of finite measure and all \(f\in L^{2}(\mathbb{R})\setminus \{0\}\), we have

    $$ \min _{z_{0}\in \mathbb{C}, |c|=\|f\|_{2}} \frac{\|f - c\, \varphi _{z_{0}} \|_{2}}{\|f\|_{2}} \leq C \Big(e^{\alpha | \Omega |} \delta (f;\Omega )\Big)^{1/2}. $$

Proof

Notice that (ii) follows directly from the statement of Proposition 6.1 by taking \(s \to \infty \), so we just have to prove (i). The fact that one cannot improve the exponent in (1.7) follows directly from Proposition 6.1 above.

To see that one cannot improve the exponent in (1.8) we argue as follows. For the domains \(\Omega _{\varepsilon }\) built in Proposition 6.1, we may use Lemma 5.5 to write \(\Omega _{\varepsilon }= \Phi _{\varepsilon }(\Omega _{0})\), provided that \(\varepsilon \) is small enough. As we saw in (5.18) we may write

$$ \Phi _{\varepsilon }(z) = z + \varepsilon X_{0}(z) + O(\varepsilon ^{2}), \qquad \text{where } X_{0}(z) = h_{0}(z) z, $$

for some scalar function \(h_{0}\colon \mathbb{C}\backslash \{0\}\to \mathbb{R}\). Indeed, that \(X_{0}\) has this form follows from its explicit formula (5.5) in Lemma 5.5. Since \(\pi \langle X_{0}(z), z \rangle = \operatorname{Re}(z^{2})\) on \(\partial \Omega _{0}\) by (5.19), we have

$$ h_{0}(z) = \frac{\operatorname{Re}(z^{2})}{\pi |z|^{2}} = \frac{\cos (2 \theta )}{\pi} $$

for \(z = r(\Omega _{0})e^{i \theta} \in \partial \Omega _{0}\), where \(r(\Omega _{0})\) denotes the radius of the ball \(\Omega _{0}\). Hence,

$$\begin{aligned} |\Omega _{\varepsilon }\triangle \Omega _{0}| \geq& |\Omega _{ \varepsilon }\setminus \Omega _{0}| \ge \Big|\Big\{ z=re^{i \theta} \colon r(\Omega _{0})< r < r(\Omega _{0})+\varepsilon \, r(\Omega _{0}) \frac{\cos (2\theta )}{\pi} -C\varepsilon ^{2} \Big\} \Big| \\ >& c \, r( \Omega _{0})^{2} \varepsilon , \end{aligned}$$

which concludes the proof. □

7 Generalizations to higher dimensions

In this section we will provide the generalization of Theorems 1.1 and 1.5 to higher dimensions \(d\geq 1\). We believe that the results of Sects. 5 and 6 also have higher-dimensional counterparts, but as the focus of this paper is on the 1-dimensional case we do not elaborate further on this.

Given a window function \(g \in L^{2}(\mathbb{R}^{d})\), the STFT of a function \(f \in L^{2}(\mathbb{R}^{d})\) is defined as

$$ V_{g} f (x, \omega ) := \int _{\mathbb{R}^{d}} e^{-2 \pi i y \cdot \omega } f(y) \overline{g(x-y)} \operatorname{d\!}y , \quad x , \omega \in \mathbb{R}^{d}, $$

in accordance with (1.1). As in dimension 1, we will only be interested in the case where \(g(x)=\varphi (x)\) is the standard Gaussian window defined as

$$ \varphi (x) := 2^{d/4} e^{- \pi \left | x \right |^{2}} , \quad x \in \mathbb{R}^{d}, $$
(7.1)

so as before we set \(\mathcal{V}f := V_{\varphi} f\). Note that (7.1) reduces to (1.2) when \(d=1\).
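As a preliminary sanity check, not needed in what follows, one may verify numerically that the window in (7.1) has unit norm in \(L^{2}(\mathbb{R}^{d})\), the integral factorizing across coordinates:

```python
import numpy as np
from scipy.integrate import quad

def window_norm_sq(d):
    # ||phi||_{L^2}^2 = (int 2^{1/2} exp(-2 pi x^2) dx)^d, by Fubini
    one_dim, _ = quad(lambda x: np.sqrt(2) * np.exp(-2 * np.pi * x**2), -np.inf, np.inf)
    return one_dim ** d

print([round(window_norm_sq(d), 8) for d in (1, 2, 3)])   # [1.0, 1.0, 1.0]
```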

The \(d\)-dimensional version of Theorem A, proved in [33], can be stated as follows.

Theorem C

[33]; Faber-Krahn inequality for the STFT in dimension \(d\)

If \(\Omega \subset \mathbb{R}^{2d}\) is a measurable set with finite Lebesgue measure \(|\Omega |>0\), and \(f\in L^{2}(\mathbb{R}^{d})\setminus \{0\}\) is an arbitrary function, then

$$ \frac{\int _{\Omega} |\mathcal{V}f(x,\omega )|^{2} \operatorname{d\!}x \operatorname{d\!}\omega}{\Vert f\Vert _{L^{2}(\mathbb{R}^{d})}^{2}} \leq \int _{0}^{|\Omega |} e^{-(d!\, s)^{\frac {1}{d}}} \operatorname{d\!}s. $$
(7.2)

Moreover, equality is attained if and only if \(\Omega \) coincides (up to a set of measure zero) with a ball centered at some \(z_{0}=(x_{0}, \omega _{0})\in \mathbb{R}^{2d}\) and, at the same time, for some \(c\in \mathbb{C}\setminus \{0\}\)

$$ f(x) = c\, \varphi _{z_{0}}(x) , \qquad \varphi _{z_{0}}(x) := e^{2 \pi i \omega _{0}\cdot x} \varphi (x-x_{0}), $$
(7.3)

where \(\varphi \) is the Gaussian defined in (7.1).

We point out that in [33, Theorem 4.1] the right hand side of (7.2) is expressed in terms of the incomplete Gamma function and implicit constants depending on \(d\), whereas the present formulation (which is equivalent but more explicit) is taken from [34] (see the remark after Theorem 2.3 therein).
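Concretely, the substitution \(u=(d!\,s)^{1/d}\) turns the right hand side of (7.2) into \(\gamma (d,(d!\,|\Omega |)^{1/d})/\Gamma (d)\), with \(\gamma \) the lower incomplete Gamma function; the following short comparison (our own cross-check, not taken from [33] or [34]) illustrates this, together with the reduction to \(1-e^{-|\Omega |}\) when \(d=1\).

```python
import numpy as np
from math import factorial
from scipy.integrate import quad
from scipy.special import gammainc     # regularized lower incomplete Gamma: gamma(a, x) / Gamma(a)

def rhs_72(d, area):
    # right hand side of (7.2) by direct quadrature
    return quad(lambda s: np.exp(-(factorial(d) * s) ** (1.0 / d)), 0, area)[0]

for d, area in [(1, 2.0), (2, 2.0), (3, 5.0)]:
    print(d, rhs_72(d, area), gammainc(d, (factorial(d) * area) ** (1.0 / d)))  # columns agree

print(rhs_72(1, 2.0), 1 - np.exp(-2.0))   # for d = 1 this is just 1 - e^{-|Omega|}
```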

It appears from (7.2) that, in dimension \(d\geq 1\), the function

$$ v^{*}(s):=e^{-(d!\, s)^{\frac {1}{d}}},\quad s\geq 0, $$
(7.4)

plays a crucial role: when \(d=1\), \(v^{*}(s)=e^{-s}\) and the right hand side of (7.2) reduces to \(1-e^{-|\Omega |}\), as in (1.16). To state a stability result in dimension \(d\), we must modify the deficit \(\delta \) defined in (1.5) to suit the right hand side of (7.2), so we let

$$ \delta (f;\Omega ) := 1 - \frac {\int _{\Omega }|\mathcal{V} f(x,\omega )|^{2} \, \operatorname{d\!}x \, \operatorname{d\!}\omega }{\|f\|_{L^{2}(\mathbb{R}^{d})}^{2}\,\int _{0}^{|\Omega |} v^{*}(s) \operatorname{d\!}s }, $$
(7.5)

which once again reduces to (1.5) when \(d=1\). Redefining the asymmetry index \(\mathcal{A}(\Omega )\) by simply replacing \(\mathbb{R}^{2}\) with \(\mathbb{R}^{2d}\) in (1.6), our extension of Theorem 1.1 to dimension \(d\) can be stated as follows.

Theorem 7.1

Stability of the FK inequality for the STFT in dimension \(d\)

There is an explicitly computable constant \(C=C(d)>0\) such that, for all measurable sets \(\Omega \subset \mathbb{R}^{2d}\) with finite measure \(|\Omega |>0\) and all functions \(f \in L^{2}(\mathbb{R}^{d})\backslash \{0\}\), we have

$$ \min _{z_{0}\in \mathbb{C}^{d}, |c|=\|f\|_{2}} \frac{\|f - c\, \varphi _{z_{0}} \|_{2}}{\|f\|_{2}} \leq C \left ( \frac{\delta (f;\Omega )}{\int _{|\Omega |}^{\infty }e^{-(d!\, s)^{\frac {1}{d}}} \operatorname{d\!}s } \right )^{\frac {1}{2}} . $$
(7.6)

Moreover, for some explicit constant \(K=K(d,|\Omega |)\) we also have

$$ \mathcal{A}(\Omega ) \leq K \delta (f;\Omega )^{1/2} . $$
(7.7)

As in the case of Theorem 1.1, the first step is to translate the problem into the Fock space \(\mathcal{F}^{2}(\mathbb{C}^{d})\), now defined as the Hilbert space of all holomorphic functions \(F \colon \mathbb{C}^{d} \to \mathbb{C}\) such that

$$ \left \| F \right \|_{\mathcal{F}^{2}} := \left ( \int _{ \mathbb{C}^{d}} \left | F(z) \right |^{2} e^{- \pi \left | z \right |^{2}} \operatorname{d\!}z \right )^{1/2} < \infty , $$

with its induced inner product. An orthonormal basis – that reduces to (1.13) when \(d=1\) – is given, using multi-index notation, by the normalized monomials

$$ e_{\alpha}(z)= (\pi ^{|\alpha |}/\alpha !)^{1/2} \,z^{\alpha},\quad \alpha \in \mathbb{N}^{d},\quad z\in \mathbb{C}^{d}, $$
(7.8)

while the reproducing kernels are the functions \(K_{w}(z)=e^{\frac {\pi }{2} |w|^{2}} F_{w}(z)\), where, in analogy to (1.15),

$$ F_{z_{0}}(z)= e^{-\frac{\pi}{2} \left | z_{0} \right |^{2}} e^{\pi z \cdot \overline{z_{0}}} . $$
(7.9)

The Bargmann transform is now a unitary operator from \(L^{2}(\mathbb{R}^{d})\) onto \(\mathcal{F}^{2}(\mathbb{C}^{d})\), defined as in (1.12), with \(\mathbb{R}^{d}\) and \(\mathbb{C}^{d}\) in place of ℝ and ℂ, and with multi-index notation adopted throughout. Moreover, the functions \(F_{z_{0}}\) in (7.9) are, much as in (1.15), the Bargmann transforms of the optimal functions \(\varphi _{z_{0}}\) defined in (7.3). In this setting, since by an identity similar to (1.14) the concentration of a function \(f\) on \(\Omega \) can still be expressed in terms of its Bargmann transform, one can rephrase Theorem 7.1 in terms of Fock spaces.
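Before stating it, we note that the normalization in (7.8) is easy to confirm numerically: for \(d=1\) the \(\mathcal{F}^{2}\)-norm of \(e_{n}\) reduces to a radial integral (a sanity check only, in our own notation).

```python
import numpy as np
from math import factorial
from scipy.integrate import quad

def fock_norm_sq(n):
    # ||e_n||^2 = (pi^n / n!) int_0^infty r^{2n} e^{-pi r^2} 2 pi r dr   (d = 1, polar coordinates)
    integrand = lambda r: (np.pi**n / factorial(n)) * r**(2 * n) * np.exp(-np.pi * r**2) * 2 * np.pi * r
    return quad(integrand, 0, np.inf)[0]

print([round(fock_norm_sq(n), 8) for n in range(6)])   # [1.0, 1.0, 1.0, 1.0, 1.0, 1.0]
```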

Theorem 7.2

Fock space version of Theorem 7.1

There is a computable constant \(C=C(d)>0\) such that, for all measurable sets \(\Omega \subset \mathbb{R}^{2d}\) with finite measure \(|\Omega |>0\) and all functions \(F\in \mathcal{F}^{2}(\mathbb{C}^{d})\backslash \{0\}\), we have

$$ \min _{ \substack{|c|=\|F\|_{\mathcal{F}^{2}},\\ z_{0}\in \mathbb{C}^{d}}} \frac{\left \| F-cF_{z_{0}} \right \|_{\mathcal{F}^{2}}}{\left \| F \right \|_{\mathcal{F}^{2}}} \leq C \left ( \frac{\delta (F;\Omega )}{\int _{|\Omega |}^{\infty }e^{-(d!\, s)^{\frac {1}{d}}} \operatorname{d\!}s } \right )^{\frac {1}{2}}, $$
(7.10)

where

$$ \delta (F;\Omega ):=1- \frac{\int _{\Omega }|F(z)|^{2} e^{-\pi |z|^{2}}\operatorname{d\!}z}{\Vert F\Vert _{\mathcal{F}^{2}}^{2}\, \int _{0}^{|\Omega |} v^{*}(s)\operatorname{d\!}s}. $$
(7.11)

Moreover, for some explicit constant \(K=K(d,|\Omega |)\) we also have

$$ \mathcal{A}(\Omega ) \leq K \delta (F;\Omega )^{1/2} . $$
(7.12)

We point out that (7.10) reduces to (1.17) when \(d=1\), in which case \(v^{*}(s)=e^{-s}\).

The proof of (7.10) can be obtained by arguments similar to those given in Sect. 2, where every result has a suitable analogue in dimension \(d\). Therefore, we limit ourselves to describing the relevant, and not always trivial, changes that are necessary to adapt Sect. 2 to dimension \(d\).

We start with the necessary background results from [33], which we discussed in Sect. 1.2 only in dimension one. We warn the reader that in [33] some numerical constants were written in terms of \(\boldsymbol{\omega }_{2d}\), the volume of the unit ball in \(\mathbb{R}^{2d}\): here, in (7.13) and (7.14), we write them explicitly using the fact that \(\boldsymbol{\omega }_{2d}=\pi ^{d}/d!\), as done in [34].

Given \(F\in \mathcal{F}^{2}(\mathbb{C}^{d})\), the function \(u\) and its super-level sets \(A_{t}\) are defined as in (1.20) and (1.21), now and henceforth with \(z\in \mathbb{C}^{d}\). The distribution function \(\mu (t)\) is defined as in (1.22), with the adopted convention that \(|\cdot |\) denotes Lebesgue measure in \(\mathbb{R}^{2d}\), but with (1.23) being replaced (see [33, §4]) by

$$ \mu '(t) \leq \,- \,\, \frac{d\,\mu (t)^{1-1/d}}{(d\,!)^{\frac {1}{d}} \, t} \quad \text{for a.e. } t\in (0,T),\quad T := \max _{z\in \mathbb{C}^{d}} u(z), $$
(7.13)

while (1.24) becomes

$$ \mu (t)\geq \frac {1}{d!} \left (\log _{+} \frac {T}{t}\right )^{d} \quad \text{for all } t>0. $$

Similarly, the decreasing rearrangement \(u^{*}(s)\) (i.e., the inverse function of \(\mu (t)\)) is defined exactly as in (1.26), but now with (1.25) being replaced by

$$ (u^{*})'(s)+ \frac {(d\,!)^{\frac {1}{d}}\,\, u^{*}(s)}{d \,\, s^{1-\frac {1}{d}}} \geq 0,\quad \text{for a.e. $s>0$.} $$
(7.14)

These changes are natural in dimension \(d\), since when \(F\) equals one of the optimal functions defined in (7.9), we have \(u(z)=e^{-\pi |z-z_{0}|^{2}}\) and its distribution function is

$$ \mu (t)=\frac {1}{d!} \left (\log _{+} \frac {1}{t}\right )^{d}, \quad t>0, $$
(7.15)

as the explicit volume of the unit ball is \(\boldsymbol{\omega }_{2d}=\pi ^{d}/d!\). Note that, for the particular \(\mu \) in (7.15), (7.13) is an equality. Moreover, if in (7.15) we let \(\mu (t)=s>0\) and solve for \(t\), the resulting inverse function is just the function \(v^{*}(s)\) defined in (7.4), much as \(e^{-s}\) is the inverse of \(\log _{+} \frac {1}{t}\) when \(d=1\). In particular, note that (7.14) becomes an equality when \(u^{*}=v^{*}\).
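As a quick cross-check of these relations (with arbitrarily chosen values of \(d\) and \(s\)), one can verify numerically that \(v^{*}\) inverts the distribution function in (7.15) and that (7.14) holds with equality when \(u^{*}=v^{*}\).

```python
import numpy as np
from math import factorial

def mu(t, d):       # distribution function (7.15)
    return np.maximum(np.log(1.0 / t), 0.0) ** d / factorial(d)

def v_star(s, d):   # the function (7.4)
    return np.exp(-(factorial(d) * s) ** (1.0 / d))

d, s = 3, 0.7
print(mu(v_star(s, d), d))   # ~0.7: mu inverts v*

# equality in (7.14) when u* = v*: (v*)'(s) + (d!)^{1/d} v*(s) / (d s^{1-1/d}) = 0
h = 1e-6
deriv = (v_star(s + h, d) - v_star(s - h, d)) / (2 * h)
print(deriv + factorial(d) ** (1.0 / d) * v_star(s, d) / (d * s ** (1 - 1.0 / d)))   # ~0
```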

Finally, the fact, expressed by (1.27), that super-level sets maximize the concentration under a volume constraint is clearly still valid, and so is (1.30), with equality if and only if \(F\) is a multiple of some \(F_{z_{0}}\).

As for Sect. 2, besides obvious changes such as ℂ being replaced with \(\mathbb{C}^{d}\), a general rule is that \(e^{-s}\) should always be replaced by \(v^{*}(s)\), e.g. in (2.4) and (2.5), and \(\log \) by \((\log )^{d}/d!\), e.g. in (2.6). Accordingly, the claim of Lemma 2.1 becomes

$$ \mu (t) \leq \frac{1}{d!} \left ( 1+C_{0}(1-T) \right ) \left ( \log{T/t} \right )^{d} \quad \forall t\in [t_{0},T], $$

where now the underlying constants may depend on the dimension \(d\). The proof follows the same pattern, with some changes being necessary, which we now indicate. Replacing \(n\) by \(\alpha =(\alpha _{1},\ldots ,\alpha _{d})\in {\mathbb{N}}^{d}\) and adopting the multi-index notation, (2.9) becomes

$$ R(z) := \sum _{|\alpha |\geq 2} \frac{a_{\alpha}}{\sqrt{T}}\, \frac {\pi ^{|\alpha |/2} z^{\alpha }}{\sqrt{\alpha !}},\quad z=(z_{1}, \ldots ,z_{d})\in \mathbb{C}^{d}, $$
(7.16)

where we used the basis defined in (7.8). Accordingly, (2.10) changes into

$$ \sum _{|\alpha |\geq 2} \frac {|a_{\alpha}|^{2}}{T}=\frac {1-T}{T}=: \delta ^{2}. $$
(7.17)

Some additional caution is needed in order to estimate the subsequent power series. The outcome of (2.11) is unchanged, now with \(z\in \mathbb{C}^{d}\), but after Cauchy–Schwarz one faces the multivariate power series

$$ \sum _{|\alpha |\geq 2} \frac {\pi ^{|\alpha |} \left |z^{2\alpha}\right |}{\alpha !}= \sum _{| \alpha |\geq 2} \frac {\pi ^{|\alpha |} \left |z_{1}^{2\alpha _{1}}\cdots z_{d}^{2\alpha _{d}} \right |}{\alpha _{1}!\cdots \alpha _{d}!}=e^{\pi |z|^{2}}-1-\pi |z|^{2}. $$
(7.18)
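Indeed, (7.18) follows by applying the multinomial theorem to each block \(|\alpha |=k\), which yields \((\pi |z|^{2})^{k}/k!\), and summing over \(k\geq 2\); here is a brute-force check at a sample point (the point and the truncation level are ours).

```python
import numpy as np
from math import factorial, prod
from itertools import product

z = np.array([0.4 + 0.3j, -0.2 + 0.5j, 0.1 - 0.6j])      # a sample point in C^3
N = 30                                                    # truncation of each index

total = 0.0
for alpha in product(range(N), repeat=len(z)):
    if sum(alpha) >= 2:
        num = np.pi ** sum(alpha) * np.prod(np.abs(z) ** (2 * np.array(alpha)))
        total += num / prod(factorial(a) for a in alpha)

z2 = float(np.sum(np.abs(z) ** 2))
print(total, np.exp(np.pi * z2) - 1 - np.pi * z2)         # the two values agree
```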

Subtler changes are needed in Step II. After defining \(h(z)\) as in (2.13) and obtaining (2.14), one replaces (2.15) by

$$ \left |\frac{\partial R(z)}{\partial z_{i}}\right |\leq \delta \sqrt{2} \pi |z| e^{\frac{\pi |z|^{2}}{2}},\quad z\in \mathbb{C}^{d},\quad 1 \leq i\leq d. $$
(7.19)

For instance, letting \(e_{1}=(1,0,\ldots ,0)\in {\mathbb{N}}^{d}\), differentiating (7.16), and then using Cauchy-Schwarz and (7.17) one obtains

$$ \left |\frac{\partial R(z)}{\partial z_{1}}\right | \leq \sum _{| \alpha |\geq 2} \frac{|a_{\alpha}|}{\sqrt{T}}\, \frac {\pi ^{|\alpha |/2} \alpha _{1} \left | z^{\alpha -e_{1}} \right |}{\sqrt{\alpha !}} \leq \delta \left ( \sum _{|\alpha |\geq 2} \frac {\pi ^{|\alpha |} \alpha _{1}^{2} \left | z^{2(\alpha -e_{1})} \right |}{\alpha !} \right )^{\frac {1}{2}}. $$
(7.20)

Focussing on multi-indices \(\alpha \) of a given size \(k\geq 2\), we have

$$ \sum _{|\alpha |=k} \frac {\pi ^{|\alpha |} \alpha _{1}^{2} \left | z^{2(\alpha -e_{1})} \right |}{\alpha !} = \sum _{ \substack{|\alpha |=k\\ \alpha _{1}\geq 1}} \frac {\pi ^{k} \alpha _{1} \left | z^{2(\alpha -e_{1})} \right |}{(\alpha _{1} -1)!\alpha _{2}!\cdots \alpha _{d}!} = \sum _{| \beta |=k-1} \frac {\pi ^{k} (1+\beta _{1}) \left | z^{2\beta} \right |}{\beta _{1}!\cdots \beta _{d}!}. $$

Since \(1+\beta _{1}\leq k\leq 2(k-1)\), we have, from the multinomial theorem,

$$\begin{aligned} \sum _{|\alpha |=k} \frac {\pi ^{|\alpha |} \alpha _{1}^{2} \left | z^{2(\alpha -e_{1})} \right |}{\alpha !} \leq& \frac {2\pi ^{k}}{(k-2)!}\sum _{|\beta |=k-1} \frac {(k-1)! \left | z^{2\beta} \right |}{\beta _{1}!\cdots \beta _{d}!} \\ =&\frac {2\pi ^{k}}{(k-2)!} (|z_{1}|^{2}+ \cdots +|z_{d}|^{2})^{k-1} \end{aligned}$$

and, since \(|z_{1}|^{2}+\cdots +|z_{d}|^{2}=|z|^{2}\), summing over \(k\geq 2\) we obtain

$$ \sum _{|\alpha |\geq 2} \frac {\pi ^{|\alpha |} \alpha _{1}^{2} \left | z^{2(\alpha -e_{1})} \right |}{\alpha !} \leq 2 \sum _{k=2}^{\infty } \frac {\pi ^{k} |z|^{2(k-1)}}{(k-2)!} =2\pi ^{2} |z|^{2} e^{\pi |z|^{2}}, $$

which, combined with (7.20), proves (7.19) when \(i=1\) (the proof for \(i>1\) being identical). The analogue of (2.16) now reads

$$ \left |\frac{\partial ^{2} R(z)}{\partial z_{i}\partial z_{j}}\right | \leq C\delta (1+|z|^{2}) e^{\frac {\pi |z|^{2}}{2}},\quad z\in \mathbb{C}^{d}, \quad 1\leq i,j\leq d. $$
(7.21)

For instance, differentiating (7.16) twice, then using Cauchy–Schwarz and (7.17), one finds

$$ \left |\frac{\partial ^{2} R(z)}{\partial z_{1}\partial z_{2}}\right | \leq \delta \left ( \sum _{|\alpha |\geq 2} \frac {\pi ^{|\alpha |} \alpha _{1}^{2} \alpha _{2}^{2} \left | z^{2(\alpha -e_{1}-e_{2})} \right |}{\alpha !} \right )^{\frac {1}{2}}, $$
(7.22)

where \(e_{2}=(0,1,0,\ldots ,0)\in {\mathbb{N}}^{d}\). Since the sum can be restricted to those multi-indices \(\alpha \) where \(\alpha _{1}\geq 1\) and \(\alpha _{2}\geq 1\) (which imply that \(|\alpha |\geq 2\)), letting \(\beta =\alpha -e_{1}-e_{2}\) we have

$$\begin{aligned} \sum _{|\alpha |\geq 2} \frac {\pi ^{|\alpha |} \alpha _{1}^{2} \alpha _{2}^{2} \left | z^{2(\alpha -e_{1}-e_{2})} \right |}{\alpha !} & = \sum _{\beta \in {\mathbb{N}}^{d}} \frac {\pi ^{2+|\beta |} (1+\beta _{1})(1+\beta _{2}) \left | z^{2 \beta} \right |}{\beta !} \\ & = \pi ^{2} S(z_{1})S(z_{2})\prod _{j=3}^{d} \left (\sum _{\beta _{j}=0}^{ \infty } \frac {\pi ^{\beta _{j}} | z_{j}|^{2\beta _{j}}}{\beta _{j}!} \right ) \\ &=\pi ^{2} S(z_{1})S(z_{2}) e^{\pi (|z_{3}|^{2}+ \cdots +|z_{d}|^{2})}, \end{aligned}$$

where

$$ S(z_{i}):=\sum _{\beta _{i}=0}^{\infty } \frac {\pi ^{\beta _{i}} (1+\beta _{i})| z_{i}|^{2\beta _{i}}}{\beta _{i}!} < \pi (1+|z_{i}|^{2}) e^{\pi |z_{i}|^{2}}. $$

Plugging these estimates into (7.22), one obtains (7.21) when \(i=1\) and \(j=2\), and hence also for all \(i\neq j\), by the same argument. Finally, the case where \(i=j\) can be treated similarly, by a suitable modification of (2.16).
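Incidentally, the series defining \(S(z_{i})\) sums exactly to \((1+\pi |z_{i}|^{2})\, e^{\pi |z_{i}|^{2}}\), which makes the bound above evident since \(1<\pi \); a short numerical confirmation (the truncation level is ours):

```python
import numpy as np
from math import factorial

def S_truncated(t, terms=80):    # t stands for |z_i|^2
    return sum(np.pi**b * (1 + b) * t**b / factorial(b) for b in range(terms))

for t in (0.1, 0.5, 1.0, 2.0):
    closed_form = (1 + np.pi * t) * np.exp(np.pi * t)
    bound = np.pi * (1 + t) * np.exp(np.pi * t)
    print(t, S_truncated(t), closed_form, S_truncated(t) < bound)   # sum, closed form, True
```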

Then, by (2.13) and the Cauchy–Riemann equations, it is easy to see that (7.19) and (7.21) provide bounds for the gradient \(\nabla h(z)\) and the Hessian \(D^{2} h(z)\): in particular, in polar coordinates \(r\omega \in \mathbb{R}^{2d}\), one has the following bounds for the radial derivatives

$$ \left \vert \frac {\partial h(r \omega )}{\partial r} \right \vert \leq \delta C r e^{\frac {\pi r^{2}}{2}},\qquad \left \vert \frac {\partial ^{2} h(r \omega )}{\partial r^{2}} \right \vert \leq \delta C (1+r^{2}) e^{\frac {\pi r^{2}}{2}},\quad r\geq 0,\quad \omega \in {\mathbb{S}}^{2d-1}, $$
(7.23)

which replace (2.17) and (2.18). The rest of the proof requires only minor changes, such as the systematic use of polar coordinates \(r\omega \in \mathbb{R}^{2d}\) (instead of \(r e^{i\theta}\)) with \(r\geq 0\) and \(\omega \in \mathbb{S}^{2d-1}\), as in (7.23). In particular, in (2.24) and in the sequel, \(r_{\sigma}=r_{\sigma}(\theta )\) becomes \(r_{\sigma}=r_{\sigma}(\omega )\). Integrals should also be changed accordingly; e.g., (2.25) now becomes

$$ f(\sigma ):=|E_{\sigma}|=\frac {1}{2d} \int _{\mathbb{S}^{2d-1}} r_{ \sigma}(\omega )^{2d}\operatorname{d\!}S(\omega ),\quad \sigma \in [0,1], $$

with the obvious related changes, e.g. in (2.34), while (2.26) becomes

$$ f(0)=\frac {1}{2d} \int _{\mathbb{S}^{2d-1}} r_{0}(\omega )^{2d} \operatorname{d\!}S(\omega )=|B(0,r_{0})|=\frac {\pi ^{d}}{d!} r_{0}^{2d}. $$

Corollary 2.2 is unchanged and has a similar proof, where one of course should replace log with \((\log )^{d}/d!\) as already mentioned.

The claim (2.41) of Lemma 2.3 must be rewritten as

$$ \frac {(1-T)^{d+1}}{(d+1)!}\leq \int _{0}^{s^{*}}\left (v^{*}(s)-u^{*}(s) \right )\operatorname{d\!}s\leq \frac {\delta _{s_{0}}}{\int _{s_{0}}^{\infty }v^{*}(s)\operatorname{d\!}s}, $$
(7.24)

and the proof, once rewritten in terms of \(v^{*}(s)\), remains valid almost ad litteram to prove (7.24), the only necessary changes being the following. For the first inequality in (7.24), in (2.44) one should use, instead of \(v^{*}(s)\geq 1-s\) as when \(d=1\) and \(v^{*}(s)=e^{-s}\), the similar inequality

$$ v^{*}(s)=e^{-(d!\,s)^{\frac {1}{d}}}\geq 1 - (d!\,s)^{\frac {1}{d}}, $$

and change (2.44) into

$$\begin{aligned} \int _{0}^{s^{*}}\left (v^{*}(s)-u^{*}(s)\right )\operatorname{d\!}s \geq& \int _{0}^{s^{*}} \left (1 - (d!\,s)^{\frac {1}{d}}-T\right )_{+}\operatorname{d\!}s \\ =& \int _{0}^{ \frac {(1-T)^{d}}{d!}} \left (1 - (d!\,s)^{\frac {1}{d}}-T\right ) \operatorname{d\!}s, \end{aligned}$$

which after a routine computation yields the first inequality in (7.24).
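For the reader's convenience, here is a numerical verification of this routine computation (the values of \(d\) and \(T\) below are arbitrary).

```python
import numpy as np
from math import factorial
from scipy.integrate import quad

def lhs(d, T):
    # int_0^{(1-T)^d / d!} (1 - (d! s)^{1/d} - T) ds
    upper = (1 - T) ** d / factorial(d)
    return quad(lambda s: 1 - (factorial(d) * s) ** (1.0 / d) - T, 0, upper)[0]

for d, T in [(1, 0.3), (2, 0.5), (3, 0.8)]:
    print(lhs(d, T), (1 - T) ** (d + 1) / factorial(d + 1))   # the two columns agree
```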

Then the proof goes on unaltered, except that now (2.46) follows from (7.14) and (7.4), rather than (1.25) and (2.43), and in Case 1 one arrives at (2.47). Since \(\varepsilon \leq \delta _{s_{0}}\) by virtue of (2.45), (2.47) proves (7.24). Finally, Case 2 requires no changes, since (2.48) already implies (7.24).

The claim of Lemma 2.4 now becomes

$$ 1-T\leq C\int _{0}^{s^{*}} \bigl( v^{*}(s)-u^{*}(s)\bigr)\operatorname{d\!}s. $$
(7.25)

In the proof, the first inequality in (7.24) can now be used to justify, in a similar way, why, in proving (7.25), one can freely assume (2.51) if needed. Thus, replacing also \(\log \frac {1}{t}\) by the right hand side of (7.15), and rewriting (2.52) as

$$ \mu (t)\leq \frac {1}{d!} (1+C_{0}(1-T))\left (\log \frac {T}{t} \right )^{d}\quad \text{for all } t\in [\tau ^{*},T], $$

one obtains the following version of (2.53):

$$ d! \int _{0}^{s^{*}} \bigl(v^{*}(s)-u^{*}(s)\bigr)\operatorname{d\!}s \geq \int _{ \tau _{1}}^{T} \left (\left (\log \frac {1}{t}\right )^{d} - (1+C_{0}(1-T)) \left (\log \frac {T}{t}\right )^{d}\right )\operatorname{d\!}t. $$
(7.26)

Then, using \(a^{d}-b^{d}\geq (a-b)a^{d-1}\) with the choice \(a=\log \frac {1}{t}\) and \(b=\log \frac {T}{t}\), since \(a-b=-\log T\) and \(-\log T\geq 1-T\) we can replace the minorization after (2.53) by

$$\begin{aligned} &\left (\log \frac {1}{t}\right )^{d} - (1+C_{0}(1-T))\left (\log \frac {T}{t}\right )^{d} \\ &\quad \geq (-\log T)\left (\log \frac {1}{t} \right )^{d-1}- C_{0}(1-T)\left (\log \frac {T}{t}\right )^{d} \\ &\quad \geq (1-T) \left (\log \frac {1}{t}\right )^{d-1}- C_{0}(1-T) \left (\log \frac {1}{t}\right )^{d} \\ &\quad \geq (1-T) \left (\log \frac {1}{t}\right )^{d-1}\left (1-C_{0} \log \frac {1}{\tau _{1}}\right ), \end{aligned}$$

for all \(t\in [\tau _{1},T]\). Finally, fixing \(\tau _{1}\in (\tau ^{*},1)\) in analogy to (2.54), from (7.26) and the previous estimate, in place of (2.55) now one obtains

$$ d! \int _{0}^{s^{*}} \bigl(v^{*}(s)-u^{*}(s)\bigr)\operatorname{d\!}s \geq \varepsilon _{1}(1-T)\int _{\tau _{1}}^{T} \left (\log \frac {1}{t} \right )^{d-1}\operatorname{d\!}t. $$

As explained after (2.55), one can proceed by further assuming \(T\geq \tau _{2}>\tau _{1}\), now obtaining

$$ d! \int _{0}^{s^{*}} \bigl(v^{*}(s)-u^{*}(s)\bigr)\operatorname{d\!}s \geq \varepsilon _{1}(1-T)\int _{\tau _{1}}^{\tau _{2}} \left (\log \frac {1}{t}\right )^{d-1}\operatorname{d\!}t, $$

which proves (7.25).

Lemma 2.5, as is well known, remains valid, with the obvious notational changes and the reproducing kernels described before (7.9). As a consequence, the proof of (7.10) can be completed essentially as the proof of (1.17), replacing (2.58) by

$$ \min _{\substack{z_{0}\in \mathbb{C}^{d} \\ |c|=1}} \Vert F-cF_{z_{0}} \Vert _{\mathcal{F}^{2}(\mathbb{C}^{d})}^{2} \leq C \frac {\delta _{s_{0}}}{\int _{s_{0}}^{\infty }v^{*}(s)\operatorname{d\!}s}. $$

Finally, the proof of the set stability remains virtually unchanged. This finishes the proof of Theorem 7.1.

We now discuss the sharpness of the estimate in Theorem 7.1, in analogy to the discussions of Sects. 5 and 6, and we explain how to adapt the arguments of these sections to the case of general dimension; as before, we keep the notation from these sections.

The first observation is that Lemmas 5.4 and 5.5 continue to hold after the obvious changes, with essentially identical proofs. As in Sect. 5, we wish to compute the second variation \(\partial _{\varepsilon }^{2}\mathcal {K}[1+\varepsilon G]|_{ \varepsilon =0} \), where \(G\) now satisfies

$$ \langle G,1 \rangle = \langle G,z_{i} \rangle = 0, \quad i = 1,\dots ,d. $$

We note that the same argument as in the proof of Lemma 5.6 implies that (5.12) still holds when passing to the higher-dimensional case. Hence, we only need to compute \(\langle X_{0}, \nu \rangle \), where \(X_{0}\) is defined as the vector field associated with the flows \(\Phi _{\varepsilon }\) at \(\varepsilon =0\) in the analogue of Lemma 5.5, and \(\nu \) denotes the unit normal at \(\partial \Omega _{0}\).

In order to do so, we adapt the proof of Lemma 5.6: first, note that equations (5.15)–(5.17) hold in the exact same way also in the higher-dimensional case. Moreover, we also note that

$$\begin{aligned} s &= \mu _{\varepsilon }(u_{\varepsilon }^{*}(s)) \\ &= \mu _{0}(u_{\varepsilon }^{*}(s)) + O(\varepsilon ^{2}) \\ &= s - (u_{\varepsilon }^{*}(s) - u_{0}^{*}(s)) \frac{d \cdot s^{1-1/d}}{(d!)^{1/d} u_{0}^{*}(s)} + O(\varepsilon ^{2}), \end{aligned}$$

where the last equality simply follows by differentiating (7.15) with respect to \(t\) and evaluating at \(t = u_{0}^{*}(s)=v^{*}(s)\). We hence conclude once again that

$$\begin{aligned} u_{\varepsilon }(\Phi _{\varepsilon }(z)) & = (1 + 2 \varepsilon \operatorname{Re}G(z)- 2\pi \varepsilon \langle X_{0}(z),z\rangle + O( \varepsilon ^{2}) )e^{-\pi |z|^{2}}, \\ u_{\varepsilon }^{*}(s) & = \left (1 + \frac{(d!)^{1/d}}{d \cdot s^{1-1/d}} O(\varepsilon ^{2})\right ) u_{0}^{*}(s). \end{aligned}$$

Again, since \(\Phi _{\varepsilon }(\{u_{0} = u_{0}^{*}(s)\}) = \{u_{\varepsilon }= u_{ \varepsilon }^{*}(s)\}\) by definition, if one looks at \(z \in \{u_{0} = u_{0}^{*}(s)\}\) and compares the expansion in \(\varepsilon \) of \(u_{\varepsilon }(\Phi _{\varepsilon }(z)) = u_{\varepsilon }^{*}(s)\), one arrives at

$$ \text{Re}(G) = \pi \langle X_{0}, z \rangle = \pi \cdot r(\Omega _{0}) \langle X_{0}, \nu \rangle \text{ on } \partial \Omega _{0}, $$
(7.27)

where \(r(\Omega _{0})\) denotes the radius of \(\Omega _{0}\), which, by the definition of \(v^{*}\) in (7.4), is given by

$$ r(\Omega _{0}) = \left ( \frac{(d!s)^{1/d}}{\pi} \right )^{1/2}. $$

Hence,

$$\begin{aligned} \int _{\partial \Omega _{0}} \operatorname{Re}(G)\, \langle X_{0}, \nu \rangle \, e^{-\pi |z|^{2}} \operatorname{d\!}\mathcal{H}^{2d-1} &= \mathcal{H}^{2d-1}(\partial \Omega _{0})\, \frac{e^{-\pi r(\Omega _{0})^{2}}}{\pi r(\Omega _{0})} \fint _{\partial \Omega _{0}} \operatorname{Re}(G)^{2} \operatorname{d\!}\mathcal{H}^{2d-1} \\ &= \mathcal{H}^{2d-1}(\partial \Omega _{0})\, \frac{e^{-\pi r(\Omega _{0})^{2}}}{2\pi r(\Omega _{0})} \fint _{\partial \Omega _{0}} |G|^{2} \operatorname{d\!}\mathcal{H}^{2d-1} \\ &= \frac{e^{-(d!\,s)^{1/d}}\, d\, s^{1-1/d}}{(d!)^{1/d}} \fint _{\partial \Omega _{0}} |G|^{2} \operatorname{d\!}\mathcal{H}^{2d-1}, \end{aligned}$$

where the second identity may be justified by the fact that \(\Omega _{0}\) is a ball centered at 0 and \(G\), \(\overline{G}\) are harmonic functions with \(G(0) = 0\), so that the spherical averages of \(G^{2}\) and \(\overline{G}^{2}\) over \(\partial \Omega _{0}\) vanish. Thus, as in Sect. 5, this shows that

$$ \frac {1}{2} \frac{\operatorname{d\!}^{\,2}}{\operatorname{d\!}\varepsilon ^{\,2}}\mathcal {K}[1+ \varepsilon G]\Bigg|_{\varepsilon = 0 } = \int _{\Omega _{0}} |G|^{2} e^{-\pi |z|^{2}} \operatorname{d\!}z - \|G\|_{\mathcal {F}^{2}}^{2} \int _{\Omega _{0}} e^{-\pi |z|^{2}} \operatorname{d\!}z + \frac{e^{-(d!\,s)^{1/d}}\, d\, s^{1-1/d}}{(d!)^{1/d}} \fint _{\partial \Omega _{0}} |G|^{2} \operatorname{d\!}\mathcal{H}^{2d-1} . $$
(7.28)

We now wish to compute the right-hand side of (7.28) for

$$ G(z) = \sum _{i=1}^{d} z_{i}^{2} + \sum _{1\le i < j \le d} \sqrt{2} z_{i} z_{j} . $$

In order to do so, we note that each monomial in the definition of \(G\) is orthogonal to every other monomial not just over \(\mathbb{C}^{d}\) but in fact over any ball centered at the origin. Thus we have

$$ \frac {1}{2} \frac{\operatorname{d\!}^{\,2}}{\operatorname{d\!}\varepsilon ^{\,2}}\mathcal {K}[1+ \varepsilon G]\Bigg|_{\varepsilon = 0 } = \int _{\Omega _{0}} |z|^{4} e^{-\pi |z|^{2}} \operatorname{d\!}z - \|G\|_{\mathcal {F}^{2}}^{2} \int _{\Omega _{0}} e^{-\pi |z|^{2}} \operatorname{d\!}z + \frac{e^{-(d!\,s)^{1/d}}\, d\, s^{1-1/d}}{(d!)^{1/d}} \fint _{\partial \Omega _{0}} |z|^{4} \operatorname{d\!}\mathcal{H}^{2d-1} . $$
(7.29)

In order to further analyze (7.29), note the identity

$$\begin{aligned} \int _{\Omega _{0}} |G|^{2} e^{-\pi |z|^{2}} - \|G\|_{\mathcal {F}^{2}}^{2} \int _{\Omega _{0}} e^{-\pi |z|^{2}} =& - \int _{\mathbb{C}^{d} \setminus \Omega _{0}} |G|^{2} e^{-\pi |z|^{2}} \\ &{} + \|G\|_{\mathcal {F}^{2}}^{2} \int _{\mathbb{C}^{d} \setminus \Omega _{0}} e^{-\pi |z|^{2}}. \end{aligned}$$

For our choice of \(G\), we can explicitly evaluate these integrals: indeed, a routine computation implies that

$$ \int _{\mathbb{C}^{d} \setminus \Omega _{0}} |z|^{4} e^{-\pi |z|^{2}} \, \operatorname{d\!}z = \frac{\Gamma (d+2,\pi r(\Omega _{0})^{2})}{\pi ^{2} \Gamma (d)}, $$

while \(\|G\|_{\mathcal {F}^{2}}^{2} = \frac{d(d+1)}{\pi ^{2}}\), which implies that

$$ \|G\|_{\mathcal {F}^{2}}^{2} \int _{\mathbb{C}^{d}\setminus \Omega _{0}} e^{-\pi |z|^{2}} \operatorname{d\!}z = \frac{ d(d+1)\Gamma (d,\pi r( \Omega _{0})^{2}) }{\pi ^{2} \Gamma (d)}. $$

Hence, by using that \(\Gamma (k,x) = (k-1)! e^{-x} \left ( \sum _{j=0}^{k-1} \frac{x^{j}}{j!} \right )\), we conclude that

$$ \int _{\Omega _{0}} |z|^{4} e^{-\pi |z|^{2}} \operatorname{d\!}z- \|G\|_{ \mathcal {F}^{2}}^{2}\int _{\Omega _{0}} e^{-\pi |z|^{2}} \operatorname{d\!}z = - \frac{d s \cdot e^{-(d!s)^{1/d}}}{\pi ^{2}} \left ( 1+d+ (d! s)^{1/d} \right ). $$

Finally, the last term in (7.29) may be explicitly computed to be \(\frac{d \cdot s}{\pi ^{2}} e^{-(d!s)^{1/d}}(d! s)^{1/d}\). Thus, plugging these into (7.29), we obtain

$$ \frac {1}{2} \frac{\operatorname{d\!}^{\,2}}{\operatorname{d\!}\varepsilon ^{\,2}}\mathcal {K}[1+ \varepsilon G]\Bigg|_{\varepsilon = 0 } = - e^{-(d!s)^{1/d}} \cdot d(d+1) \frac{s}{\pi ^{2}}. $$

A straightforward adaptation of the arguments from Sect. 6 shows the desired sharpness of the exponent, as well as the optimality of the order of growth of the constant in (7.10). That is, we are able to obtain the following result:

Corollary 7.3

The following assertions hold:

  1. (i)

    The factor \(\delta (f;\Omega )^{1/2}\) cannot be replaced by \(\delta (f;\Omega )^{\beta}\), for any \(\beta > 1/2\), in (7.6);

  2. (ii)

    There is no \(c\in (0,(d!)^{1/d})\) such that, for all measurable sets \(\Omega \subset \mathbb{C}^{d}\) of finite measure, we have

    $$ \min _{z_{0}\in \mathbb{C}^{d}, |c|=\|f\|_{2}} \frac{\|f - c\, \varphi _{z_{0}} \|_{2}}{\|f\|_{2}} \leq C \Big(e^{c | \Omega |^{1/d}} \delta (f;\Omega )\Big)^{1/2}. $$

Elementary computations reveal that the denominator on the right-hand side of (7.10) behaves as

$$ \int _{|\Omega |}^{\infty }e^{-(d!s)^{1/d}} \operatorname{d\!}s \approx C_{d} | \Omega |^{\frac{d-1}{d}} e^{-(d! |\Omega | )^{1/d}} \quad \text{ as } | \Omega |\to \infty , $$

for some explicitly computable constant \(C_{d}>0\). Thus, Corollary 7.3(ii) indeed yields the desired optimal dependence of (7.10) on \(|\Omega |\).
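In fact, a short computation with the upper incomplete Gamma function suggests that one may take \(C_{d}=(d!)^{(d-1)/d}/(d-1)!\); the sketch below (our own cross-check, for \(d=2\)) illustrates both the exact identity \(\int _{|\Omega |}^{\infty }e^{-(d!\,s)^{1/d}} \operatorname{d\!}s=\Gamma (d,(d!\,|\Omega |)^{1/d})/\Gamma (d)\) and the asymptotics.

```python
import numpy as np
from math import factorial
from scipy.integrate import quad
from scipy.special import gammaincc      # regularized upper incomplete Gamma Q(a, x)

d = 2
X = lambda area: (factorial(d) * area) ** (1.0 / d)

# exact identity via the substitution u = (d! s)^{1/d}
area = 5.0
print(quad(lambda s: np.exp(-X(s)), area, np.inf)[0], gammaincc(d, X(area)))   # agree

# candidate explicit constant (our computation, not taken from the text)
C_d = factorial(d) ** ((d - 1) / d) / factorial(d - 1)
for area in (10.0, 100.0, 1000.0):
    approx = C_d * area ** ((d - 1) / d) * np.exp(-X(area))
    print(area, gammaincc(d, X(area)) / approx)         # ratios slowly approach 1
```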