1 Introduction and Results

The investigation of zeros of random functions with Gaussian distribution is a classical endeavor in statistical physics, motivated in great part by the goal of deriving generic models for the distribution of quantum chaotic systems [7, 8]. This article is concerned with complex-valued random functions on the complex plane, in which case zeros are typically discrete and correspond to phase singularities [9]. While random functions on the Euclidean plane are often studied under the assumption of stationarity, that is, stochastic invariance under Euclidean shifts, we study a form of invariance compatible with the complex structure of the plane, which we call twisted stationarity.

A model case for our theory is given by random power series with properly scaled, independent, normally distributed coefficients. Such random waves are known as translation invariant Gaussian entire functions (TI-GEF) because, even though they are not stochastically invariant under Euclidean shifts, their zeros are [24, 29]. In fact, due to the so-called Calabi rigidity, these are the only examples of Gaussian analytic functions on the plane with stationary zero sets [29].

The notion of twisted stationarity that we introduce abstracts some properties of TI-GEF such as stationarity of zeros, while, crucially, allowing for non-analytic examples. Indeed, our main motivation is the study of certain possibly non-analytic random functions such as the correlation of white noise with the time-frequency shifts of a given reference window function (short-time Fourier transform or cross radar ambiguity [17, Chap. 1] [21, Chap. 3]). While the freedom to choose a reference window function is very valuable in applications, only one specific choice leads to Gaussian entire functions [16, Chap. 15], [5, 6]. Further motivation comes from the fact that an operation as basic as computing a derivative of a TI-GEF (in the sense of complex geometry) does not preserve analyticity. Our starting point is the identification of a common element in the previous examples: stochastic invariance under a certain representation of the Weyl–Heisenberg group. Twisted stationarity provides a unified model for such situations, and the corresponding random functions are called Gaussian Weyl–Heisenberg functions (GWHF).

The zeros of Gaussian entire functions are statistically rich. In contrast to points in a Poisson process, they exhibit repulsion, that is, negative correlation similar to that of charged particles of equal sign, and they are hyperuniform, in the sense that the variance of the number of points in a large observation window is asymptotically smaller than the corresponding expected value [20, 30]. In the non-analytic setting of GWHF we shall find the new element of a signed charge, since, as is the case with non-analytic random waves, zeros may have negative winding numbers [9].

We now introduce the main mathematical objects and results, and provide context on their significance.

1.1 Gaussian Weyl–Heisenberg Functions

We study zero sets of Gaussian circularly symmetric random functions on the plane \(F:{\mathbb {C}} \rightarrow {\mathbb {C}}\) whose covariance kernel is given by twisted convolution:

$$\begin{aligned} {\mathbb {E}} \left[ F(z) \cdot \overline{F(w)} \right] = H(z-w) \cdot e^{i \Im (z {\overline{w}})}, \qquad z,w \in {\mathbb {C}}. \end{aligned}$$
(1.1)

Here, \(H:{\mathbb {C}} \rightarrow {\mathbb {C}}\) is a function called twisted kernel. Gaussianity means that for each \(z_1, \ldots , z_n \in {\mathbb {C}}\), \((F(z_1), \ldots , F(z_n))\) is a normally distributed complex random vector. Circularity means that \(F \sim e^{i \theta } F\), for all \(\theta \in {\mathbb {R}}\), and implies that F has vanishing expectation and pseudo-covariance, i.e., \({\mathbb {E}} \left[ F(z)\right] =0\), \({\mathbb {E}} \left[ F(z) F(w) \right] =0\), for all \(z,w \in {\mathbb {C}}\). Hence, the stochastics of F are completely encoded in the twisted kernel H through (1.1).

While the covariance structure (1.1) without the complex exponential factor would mean that F is stationary, the presence of the oscillatory factor means that F is twisted stationary:

$$\begin{aligned} {\mathbb {E}} \left[ e^{i\Im (z {\overline{\zeta }})} \cdot F(z-\zeta ) \cdot \overline{ e^{i\Im (w {\overline{\zeta }})} \cdot F(w-\zeta )} \right] = {\mathbb {E}} \left[ F(z) \cdot \overline{F(w)} \right] , \qquad z,w, \zeta \in {\mathbb {C}}. \end{aligned}$$
(1.2)

In other words, the stochastics of F are invariant under twisted shifts:

$$\begin{aligned} F(z) \mapsto e^{i\Im (z {\overline{\zeta }})} \cdot F(z-\zeta ),\qquad \zeta \in {\mathbb {C}}. \end{aligned}$$
(1.3)

We call such a random function F a Gaussian Weyl–Heisenberg function (GWHF), as the operators (1.3) generate the (reduced) Weyl–Heisenberg group [17, Chap. 1]. Let us mention some motivating examples (which are developed in more detail in Sect. 6).
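Since the law of a circularly symmetric Gaussian field is determined by its covariance, (1.1) already suggests a direct way to simulate a GWHF on a finite point set: assemble the covariance matrix and factor it. The following minimal sketch (ours, in Python with numpy, independent of the worksheet mentioned in Sect. 1.6) does this for the Gaussian kernel of Example 1.1 below, chosen purely for illustration.

```python
import numpy as np

def sample_gwhf(H, pts, rng=None):
    """Draw one realization of a GWHF at the complex points `pts`, using the
    covariance E[F(z) conj(F(w))] = H(z - w) exp(i Im(z conj(w))) of (1.1)."""
    rng = rng or np.random.default_rng(0)
    Z, W = pts[:, None], pts[None, :]
    cov = H(Z - W) * np.exp(1j * np.imag(Z * np.conj(W)))   # Hermitian PSD by (1.12)
    L = np.linalg.cholesky(cov + 1e-10 * np.eye(len(pts)))  # small jitter for stability
    # standard circularly symmetric Gaussian vector xi with E[xi xi*] = Id:
    xi = (rng.standard_normal(len(pts)) + 1j * rng.standard_normal(len(pts))) / np.sqrt(2)
    return L @ xi                                           # covariance L L* = cov

H = lambda z: np.exp(-0.5 * np.abs(z) ** 2)     # twisted kernel of Example 1.1
x = np.linspace(-3, 3, 40)
pts = (x[:, None] + 1j * x[None, :]).ravel()    # 40 x 40 grid, flattened
F = sample_gwhf(H, pts).reshape(40, 40)
```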

Example 1.1

(Gaussian entire functions) Let F be a GWHF with twisted kernel \(H(z)=e^{-\tfrac{1}{2}\left| z\right| ^2}\) and set \(G(z)=e^{\tfrac{1}{2}\left| z\right| ^2} F(z)\). Then

$$\begin{aligned} {\mathbb {E}} \left[ G(z) \cdot \overline{G(w)} \right] = \exp \Big [\tfrac{1}{2}\left| z\right| ^2+\tfrac{1}{2}\left| w\right| ^2 -\tfrac{1}{2}\left| z-w\right| ^2 + i \Im (z{\bar{w}})\Big ] =e^{z {\bar{w}}}. \end{aligned}$$

Hence, G is a Gaussian entire function on the plane with correlation kernel given by the Bargmann–Fock kernel [35], and its zero set is well-studied [24]. In terms of G, the twisted stationarity property of F (1.3) is an instance of the projective invariance property [29], and, indeed, reflects the invariance of the stochastics of G under Bargmann–Fock shifts:

$$\begin{aligned} G(z) \mapsto e^{-\frac{1}{2} |\zeta |^2 + z {\overline{\zeta }}} \cdot G(z-\zeta ), \qquad \zeta \in {\mathbb {C}}. \end{aligned}$$
(1.4)

Example 1.2

(The short-time Fourier transform of complex white noise)

Given a window function \(g \in {\mathcal {S}}({\mathbb {R}})\), the short-time Fourier transform of a function \(f:{\mathbb {R}} \rightarrow {\mathbb {C}}\) is

$$\begin{aligned} V_g f(x,y) = \int _{{\mathbb {R}}} f(t) \overline{g(t-x)} e^{-2\pi i t y} dt, \qquad (x,y) \in {\mathbb {R}}^2. \end{aligned}$$
(1.5)

The short-time Fourier transform is a windowed Fourier transform, and the value \(V_g f(x,y)\) represents the influence of the frequency y near the time x. As the localizing window g, one often chooses a Gaussian or, more generally, a Hermite function, as these optimize several measures related to Heisenberg’s uncertainty principle.

In signal processing, the short-time Fourier transform is often used to analyze functions (called signals) contaminated with random noise \({\mathcal {N}}\). The corresponding zero sets play an important role in many modern algorithms, for example in the dynamics of certain non-linear procedures to sharpen spectrograms [16, Chap. 12] or in the design of filtering masks, where landmarks are chosen guided by the statistics of zero sets [15].

Of particular interest are the zeros of the short-time Fourier transform of complex white Gaussian noise \(V_g \, {\mathcal {N}}\) [16, Chap. 15]. While many applications demand the use of different window functions g—see, e.g., [16, Sect. 10.2]—zero-statistics for the STFT are currently only understood for Gaussian windows [5, 6], as these facilitate the application of the theory of Gaussian entire functions (Example 1.1). One main motivation for this article is to obtain zero-statistics for general windows g, including for example Hermite functions. (Some related numerics can be found in [16, Chap. 15].)

With an adequate distributional interpretation, the STFT of complex white Gaussian noise with respect to a Schwartz window function g defines a smooth circularly symmetric Gaussian function on the plane. Twisted stationarity is revealed by the transformation

$$\begin{aligned} F(x+iy) := e^{-i xy} \cdot V_g \, {\mathcal {N}} \big (x/\sqrt{\pi }, -y/\sqrt{\pi }\big ), \end{aligned}$$
(1.6)

which, as shown in Sect. 6.1, indeed yields a GWHF. The twisted stationarity of F reflects the invariance of the stochastics of complex white noise under time-frequency shifts

$$\begin{aligned} f(t) \mapsto e^{2 \pi i b t} f(t-a), \qquad (a,b) \in {\mathbb {R}}^2. \end{aligned}$$
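For a quick numerical impression of \(V_g\,{\mathcal {N}}\), complex white noise can be approximated by i.i.d. complex Gaussian samples on a fine grid, and the integral (1.5) by a windowed DFT. The following sketch is only a crude discrete surrogate (with a periodized window, our simplification), not the distributional construction of Sect. 6.1.

```python
import numpy as np

def stft_white_noise(g, T=8.0, n=256, rng=None):
    """Approximate V_g N of (1.5) on an n x n time-frequency grid over [0, T]:
    white noise is modelled by i.i.d. CN(0, 1/dt) samples, the integral by a DFT."""
    rng = rng or np.random.default_rng(1)
    dt = T / n
    t = np.arange(n) * dt
    noise = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2 * dt)
    V = np.empty((n, n), dtype=complex)
    for k in range(n):                                        # time shift x = k * dt
        windowed = noise * np.conj(np.roll(g(t - T / 2), k))  # periodized window
        V[k] = np.fft.fft(windowed) * dt                      # frequencies y = 0, 1/T, ...
    return V

g = lambda t: 2 ** 0.25 * np.exp(-np.pi * t ** 2)             # Gaussian window h_0
V = stft_white_noise(g)
```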

Basic questions about zero sets of short-time Fourier transforms also underlie problems about the spanning properties of the time-frequency shifts of a given function (Gabor systems) [23, 26], or about Berezin’s quantization [22]. The study of random counterparts provides a first form of average case analysis for such problems.

Example 1.3

(Derivatives of GEF) The covariant derivative of an entire function \(G:{\mathbb {C}} \rightarrow {\mathbb {C}}\) is

$$\begin{aligned} {\bar{\partial }}^* G(z) = {\bar{z}}\,G(z) - \partial G(z), \end{aligned}$$

and it is distinguished among other differential operators of order 1 because it commutes with the Bargmann–Fock shifts (1.4). As a consequence, if G is a Gaussian entire function, as in Example 1.1, the stochastics of \({\bar{\partial }}^* G\) are also invariant under Bargmann–Fock shifts, and the transformation

$$\begin{aligned} F(z)=e^{-\frac{1}{2}|z|^2} {\bar{\partial }}^* G(z) \end{aligned}$$
(1.7)

yields a GWHF. The corresponding twisted kernel is computed in Sect. 6.5.

Zeros of covariant derivatives are instrumental in the description of vanishing orders of analytic functions [11, 13]. They are also important in the study of weighted magnitudes of analytic functions. For example, the amplitude \(A(z) = e^{-\frac{1}{2}|z|^2} |G(z)|\) of an entire function G satisfies

$$\begin{aligned} \big |\nabla A \big | = e^{-\frac{1}{2}|z|^2} \big | {\bar{\partial }}^* G \big |. \end{aligned}$$
(1.8)

Thus, the critical points of the amplitude of a Gaussian entire function G are exactly the zeros of the GWHF (1.7)—see also [12, 14]. The squared amplitude \(A^2(z)\) is also of interest, as it corresponds after normalization to the spectrogram of complex white noise with a Gaussian window (i.e., the squared absolute value of the STFT (1.5)) [5]; see also [6, Corollary 2.8].
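Identity (1.8) is elementary but easy to get wrong by a factor. Away from the zeros of G it can be verified symbolically; a quick sanity check with sympy, for the test function \(G(z)=z^2+1\) (our arbitrary choice):

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
z = x + sp.I * y
G, dG = z ** 2 + 1, 2 * z                     # test entire function and its derivative
modG = sp.sqrt(sp.re(G) ** 2 + sp.im(G) ** 2)          # |G(z)| in real coordinates
A = sp.exp(-(x ** 2 + y ** 2) / 2) * modG              # amplitude A(z) = e^{-|z|^2/2}|G(z)|
lhs = sp.sqrt(sp.diff(A, x) ** 2 + sp.diff(A, y) ** 2)                   # |grad A|
rhs = sp.exp(-(x ** 2 + y ** 2) / 2) * sp.Abs(sp.conjugate(z) * G - dG)  # covariant derivative
pt = {x: sp.Rational(3, 10), y: sp.Rational(7, 10)}
print(sp.N((lhs - rhs).subs(pt), 15))         # ~ 0: both sides of (1.8) agree at the point
```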

Example 1.4

(Gaussian poly-entire functions) Iterated covariant derivatives of an analytic function \(G_0\),

$$\begin{aligned} G = ({\bar{\partial }}^*)^{q-1} \, G_0, \end{aligned}$$
(1.9)

are not themselves analytic, but satisfy a higher order Cauchy-Riemann condition

$$\begin{aligned} {\bar{\partial }}\,^q \,G = 0, \end{aligned}$$
(1.10)

known as poly-analyticity [4]. In Vasilevski’s parlance [33], (1.9) is a true or pure poly-entire function, while the more general solution to (1.10),

$$\begin{aligned} G = \sum _{k=0}^{q-1} \frac{1}{\sqrt{k!}} ({\bar{\partial }}^*)^{k} \,G_k, \end{aligned}$$
(1.11)

with \(G_0, \ldots , G_{q-1}\) entire, is a full poly-entire function.

Random Gaussian poly-entire functions of either pure or full type are defined by (1.9) and (1.11), letting \(G_0, \ldots , G_{q-1}\) be independent Gaussian entire functions.

Poly-entire functions are important in statistical physics, for example in the analysis of high-energy systems of particles [2], and we expect their random analogs to also be useful in that field.

1.2 Standing Assumptions

The positive semi-definiteness of the covariance kernel of a GWHF F reads as follows:

$$\begin{aligned} \Big ( H(z_k - z_j) \cdot e^{i \Im (z_k \overline{z_j})} \Big )_{j,k=1, \ldots , n} \ge 0 \qquad \text{ for } \text{ all } z_1,\ldots , z_n \in {\mathbb {C}}. \end{aligned}$$
(1.12)

As a consequence, \(H(-z) = \overline{H(z)}\) and \(H(0) \ge 0\). To avoid trivial cases, we assume that \(H(0) \not = 0\), since, otherwise, F would vanish almost surely. We furthermore impose the normalization

$$\begin{aligned} H(0)=1. \end{aligned}$$
(1.13)

We also assume that

$$\begin{aligned} \left| H(z)\right| < 1, \qquad z \in {\mathbb {C}} \setminus \{0\}, \end{aligned}$$
(1.14)

which means that no two samples F(z), F(w) with \(z \not = w\) are deterministically correlated, as (1.14) amounts to the invertibility of the joint covariance matrix \(\left[ {\begin{matrix} 1 &{} H(z-w) e^{i\Im (z{\bar{w}})}\\ \overline{H(z-w)} e^{-i\Im (z{\bar{w}})} &{} 1 \end{matrix}}\right] \).
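Condition (1.12) can be probed numerically for a candidate twisted kernel by testing the smallest eigenvalue of the matrix in (1.12) at random point configurations; a minimal sketch (numpy, with the kernel of Example 1.1 as test case):

```python
import numpy as np

def min_eigenvalue(H, pts):
    """Smallest eigenvalue of the matrix in (1.12) at the points `pts`;
    positive semi-definiteness requires it to be nonnegative."""
    Z, W = pts[:, None], pts[None, :]
    M = H(W - Z) * np.exp(1j * np.imag(W * np.conj(Z)))  # entry (j,k) involves z_k - z_j
    return np.linalg.eigvalsh(M).min()   # M is Hermitian since H(-z) = conj(H(z))

rng = np.random.default_rng(2)
pts = rng.standard_normal(50) + 1j * rng.standard_normal(50)
print(min_eigenvalue(lambda z: np.exp(-0.5 * np.abs(z) ** 2), pts))  # >= -1e-12
```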

We also assume a certain regularity of the twisted kernel:

$$\begin{aligned} H \text{ is } C^2 \text{ in } \text{ the } \text{ real } \text{ sense }, \end{aligned}$$
(1.15)

and denote the corresponding derivatives with supraindices; e.g., \(H^{(1,1)}(x+iy) = \partial _x \partial _y H(x+iy)\). Finally, we will always assume that F has \(C^2\) paths in the real sense:

$$\begin{aligned} \text{ Almost } \text{ every } \text{ realization } \text{ of } F \text{ is } \text{ a } C^2({\mathbb {R}}^2) \text{ function. } \end{aligned}$$
(1.16)

This is the case, for example, if \(H \in C^6({\mathbb {R}}^2)\) in the real sense [19, Theorem 5], but weaker assumptions also suffice (see [1, Theorem 1.4.2]).

Definition 1.5

A function (twisted kernel) \(H:{\mathbb {C}} \rightarrow {\mathbb {C}}\) is said to satisfy the standing assumptions if (1.12), (1.13), (1.14), (1.15), and (1.16) are fulfilled.

Note that if H satisfies the standing assumptions, then a GWHF F with covariance kernel (1.1) always exists [3, Chap. 1]. For short, we also say that F is a GWHF satisfying the standing assumptions.

1.3 Zero Sets

We are mainly interested in the zero set of a GWHF F, encoded in the random measure

$$\begin{aligned} {\mathcal {Z}}_F&:= \sum _{z \in {\mathbb {C}},\, F(z)=0} \delta _z \,, \end{aligned}$$
(1.17)

where \(\delta _z\) denotes the Dirac measure at z. This measure properly encodes the zero set of F, because, as we prove in Proposition 3.2 below, under the standing assumptions the zeros of F are almost surely simple and non-degenerate (i.e., as a map on \({\mathbb {R}}^2\), the differential matrix of F is invertible).

Our first result describes the first point intensity of zero sets of GWHF.

Theorem 1.6

(First intensity of zero sets) Let F be a GWHF with twisted kernel H satisfying the standing assumptions. Then \({\mathcal {Z}}_F\) is a stationary random measure with first intensity:

$$\begin{aligned} \rho _1=\frac{1}{2\pi } \frac{\Delta _H+2}{\sqrt{\Delta _H+1}}, \end{aligned}$$
(1.18)

where

$$\begin{aligned} \Delta _H:= \det \begin{bmatrix} -H^{(2,0)}(0) -\left| H^{(1,0)}(0)\right| ^2 &{} -H^{(1,1)}(0)-i - H^{(1,0)}(0)H^{(0,1)}(0) \\ -\overline{H^{(1,1)}(0)} +i - \overline{H^{(1,0)}(0)H^{(0,1)}(0)} &{} -H^{(0,2)}(0) -\left| H^{(0,1)}(0)\right| ^2 \end{bmatrix}. \end{aligned}$$
(1.19)

Concretely, for every Borel set \(E \subseteq {\mathbb {C}}\):

$$\begin{aligned} {\mathbb {E}} \big [ \# \big \{z \in E: F(z)=0 \big \}\big ] = \rho _1 |E|. \end{aligned}$$
(1.20)

In addition, \(\Delta _H\ge 0\), and therefore \(\rho _1 \ge 1/\pi \).
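In code, (1.18)–(1.19) amount to a few arithmetic operations once the derivatives of H at 0 are available; a small helper (ours, for illustration), using that the matrix in (1.19) is Hermitian by Lemma 3.1 below:

```python
import numpy as np

def rho1(h10, h01, h20, h02, h11):
    """First intensity (1.18), given H^{(1,0)}(0), H^{(0,1)}(0), H^{(2,0)}(0),
    H^{(0,2)}(0), H^{(1,1)}(0) for a kernel H satisfying the standing assumptions."""
    a = -h20 - abs(h10) ** 2           # diagonal entries of the matrix in (1.19)
    b = -h02 - abs(h01) ** 2
    g = -h11 - 1j - h10 * h01          # off-diagonal entry
    delta = (a * b - abs(g) ** 2).real # Delta_H, the determinant in (1.19)
    return (delta + 2) / (2 * np.pi * np.sqrt(delta + 1))

print(np.pi * rho1(0, 0, -1.0, -1.0, 0))   # 1.0: Example 1.1 gives rho_1 = 1/pi
```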

In many important cases, the twisted kernel H is radial, and the expression for the first point intensity can be simplified.

Corollary 1.7

Let F be as in Theorem 1.6. Assume further that \(H(z)=P\big (|z|^2\big )\), where \(P:{\mathbb {R}} \rightarrow {\mathbb {R}}\) is \(C^2({\mathbb {R}})\). Then \(P'(0) \le -1/2\) and the first point intensity of the zero set of F is

$$\begin{aligned} \rho _1 = - \frac{1}{\pi } \bigg ( P'(0) + \frac{1}{4 P'(0)} \bigg ). \end{aligned}$$
(1.21)
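In the radial case, (1.21) depends on the single number \(P'(0)\); a two-line sketch, evaluated at \(P'(0)=-1/2\) (Gaussian entire functions) and at \(P'(0)=-3/2\), the value that reproduces the true-type \(q=2\) count of Theorem 1.8 below:

```python
import numpy as np

def rho1_radial(dP0):
    """First intensity (1.21) for a radial twisted kernel H(z) = P(|z|^2),
    where dP0 = P'(0) <= -1/2 by Corollary 1.7."""
    assert dP0 <= -0.5
    return -(dP0 + 1.0 / (4.0 * dP0)) / np.pi

print(np.pi * rho1_radial(-0.5))   # 1.0
print(np.pi * rho1_radial(-1.5))   # 1.666... = 5/3, cf. Theorem 1.8 with q = 2
```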

We mention some applications of Theorem 1.6; these are further developed in Sect. 6.

In the context of Examples 1.3 and 1.4 we obtain the following.

Theorem 1.8

The first intensity of the zero set of a true-type poly-entire function as in (1.9) is \(\frac{1}{\pi } \big (q-\frac{1}{2} + \frac{1}{4q-2}\big )\), while that of a full-type one as in (1.11) is \(\frac{1}{2\pi } \big ( q + \frac{1}{q} \big )\).

The base case \(q=1\) is well known, as it corresponds to a Gaussian entire function (Example 1.1), and follows from more general results [24, Sect. 2], while the case \(q=2\) is implicit in [12, 14], since, by (1.8), it corresponds to the number of critical points of the weighted magnitude of a GEF. For large q we see that a true-type poly-entire function has on average \(\approx \tfrac{q}{\pi }\) zeros per unit area, while one of full type has \(\approx \tfrac{q}{2\pi }\) zeros per unit area.

As a second application, we consider the short-time Fourier transform (Example 1.2), and obtain the following.

Theorem 1.9

(First intensity of zeros of STFT of complex white noise) Let \(g:{\mathbb {R}} \rightarrow {\mathbb {C}}\) be a Schwartz function normalized by \(\left| \left| g\right| \right| _2=1\), and consider the following uncertainty constants:

$$\begin{aligned} \begin{aligned} c_1&:= \int _{\mathbb {R}}t \, |g(t)|^2 dt, \qquad c_2 := \int _{\mathbb {R}}t^2 |g(t)|^2 dt, \qquad c_3 := \int _{\mathbb {R}}|g'(t)|^2 dt,\\ c_4&:= -i\int _{{\mathbb {R}}} g(t) \overline{g'(t)} dt, \qquad c_5 := \Im \bigg (\int _{{\mathbb {R}}} tg(t) \overline{g'(t)} dt\bigg ). \end{aligned} \end{aligned}$$
(1.22)

Then the zero set of \(V_g\,{\mathcal {N}}\), i.e., the STFT of complex white noise with window g, has first intensity:

$$\begin{aligned} \rho _{1,g} \equiv \frac{ 4 ( c_2 - c_1^2)c_3 -4 c_2 c_4^2 - 4 c_5^2 - 8 c_1 c_4 c_5 +1}{4 \sqrt{ ( c_2 - c_1^2)c_3 -c_2 c_4^2 -c_5^2 -2c_1 c_4 c_5 }}. \end{aligned}$$

Concretely, for every Borel set \(E \subseteq {\mathbb {C}}\):

$$\begin{aligned} {\mathbb {E}} \big [ \# \big \{z \in E: V_g \, {\mathcal {N}} (z)=0 \big \}\big ] = \rho _{1,g} |E|. \end{aligned}$$

The constants \(c_1, \ldots , c_5\) are real. When g is real-valued, the expression for \(\rho _{1,g}\) further simplifies because \(c_4=c_5=0\).

If we interpret \(|g(t)|^2\) as a probability density on \({\mathbb {R}}\), the uncertainty constants \(c_1\) and \(c_2\) correspond to the expected value and expected spread around the origin. The constants \(c_3\) and \(c_4\) have a similar meaning with respect to the Fourier transform of g. The constant \(c_5\) is more subtle to interpret, as it involves correlations between g and its Fourier transform.
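For a real-valued window (so that \(c_4 = c_5 = 0\)), the constants (1.22) and the resulting \(\rho _{1,g}\) are straightforward to evaluate by quadrature; a sketch with scipy, tested on the Gaussian window (which attains the minimal value 1, cf. Theorem 1.11 below):

```python
import numpy as np
from scipy.integrate import quad

def rho1_stft_real(g, dg, lim=10.0):
    """rho_{1,g} of Theorem 1.9 for a real-valued window g with ||g||_2 = 1
    and derivative dg; then c_4 = c_5 = 0 and only c_1, c_2, c_3 enter."""
    c1 = quad(lambda t: t * g(t) ** 2, -lim, lim)[0]
    c2 = quad(lambda t: t ** 2 * g(t) ** 2, -lim, lim)[0]
    c3 = quad(lambda t: dg(t) ** 2, -lim, lim)[0]
    d = (c2 - c1 ** 2) * c3
    return (4 * d + 1) / (4 * np.sqrt(d))

g = lambda t: 2 ** 0.25 * np.exp(-np.pi * t ** 2)     # h_0, normalized in L^2
dg = lambda t: -2 * np.pi * t * g(t)
print(rho1_stft_real(g, dg))                          # 1.0
```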

We spell out the particular case of Theorem 1.9 for Hermite windows:

$$\begin{aligned} h_{r}(t) = \frac{2^{1/4}}{\sqrt{r!}}\left( \frac{-1}{2\sqrt{\pi }}\right) ^r e^{\pi t^2} \frac{d^r}{dt^r}\left( e^{-2\pi t^2}\right) , \qquad r \ge 0. \end{aligned}$$
(1.23)

Corollary 1.10

The expected number of zeros of the STFT of complex white noise with Hermite window \(h_r\) (1.23) inside a Borel set \(E \subset {\mathbb {C}}\) is

$$\begin{aligned} {\mathbb {E}} \big [ \# \big \{z \in E: V_{h_r} \, {\mathcal {N}} (z)=0 \big \}\big ] = \Big (r + \tfrac{1}{2} + \tfrac{1}{4r+2} \Big ) |E|. \end{aligned}$$
(1.24)

Numerical simulations related to the zeros of the STFT of white noise with \(h_0\) and \(h_1\) as windows can be found in [16, Chap. 15]—see also Fig. 1.
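Corollary 1.10 can also be reproduced symbolically by feeding (1.23) into Theorem 1.9; a sympy sketch for \(r=2\) (by parity, \(c_1=0\), and \(c_4=c_5=0\) since \(h_r\) is real-valued):

```python
import sympy as sp

t, r = sp.symbols('t', real=True), 2
h = (2 ** sp.Rational(1, 4) / sp.sqrt(sp.factorial(r))
     * (-1 / (2 * sp.sqrt(sp.pi))) ** r
     * sp.exp(sp.pi * t ** 2) * sp.diff(sp.exp(-2 * sp.pi * t ** 2), t, r))
c2 = sp.integrate(t ** 2 * h ** 2, (t, -sp.oo, sp.oo))   # c_1 = c_4 = c_5 = 0 here
c3 = sp.integrate(sp.diff(h, t) ** 2, (t, -sp.oo, sp.oo))
rho = (4 * c2 * c3 + 1) / (4 * sp.sqrt(c2 * c3))
print(sp.simplify(rho), r + sp.Rational(1, 2) + sp.Rational(1, 4 * r + 2))  # 13/5 13/5
```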

Note that the expression in Corollary 1.10 is minimal for \(r=0\) (Gaussian case). We will prove that this is in fact an instance of a general phenomenon.

Theorem 1.11

(Uncertainty principle for the zeros of the STFT of white noise) Under the assumptions of Theorem 1.9, the minimal value of \(\rho _{1,g}\) is 1, and it is attained exactly when g is a generalized Gaussian; that is,

$$\begin{aligned} g(t) = \frac{\lambda }{\sqrt{\sigma }} e^{-\tfrac{\pi }{\sigma ^2} \left[ (t-x_0 )^2 + i ( \xi _0 \cdot t + \xi _1 \cdot t^2)\right] }, \qquad t \in {\mathbb {R}}, \end{aligned}$$
(1.25)

with \(\sigma >0\), \(\lambda \in {\mathbb {C}}\), \(|\lambda |=2^{1/4}\), \(x_0\), \(\xi _0\), \(\xi _1 \in {\mathbb {R}}\).

To compare, we note that, in terms of the uncertainty constants (1.22), Heisenberg’s uncertainty inequality reads:

$$\begin{aligned} (c_2-c_1^2) \cdot (c_3-c_4^2) \ge \frac{1}{4}, \end{aligned}$$
(1.26)

where \(\left| \left| g\right| \right| _2=1\), and is saturated by (translated and linearly modulated) Gaussian functions, see, e.g., [17, Corollary 1.35]. Generalized Gaussians (1.25) are sometimes called squeezed states and minimize a refined version of (1.26) that involves the constant \(c_5\) in (1.22), known as the Robertson-Schrödinger uncertainty relations [32]. The proof of Theorem 1.11 exploits the invariance of squeezed states under the canonical transformations of the time-frequency plane (Weyl operators and metaplectic rotations [17]).

1.4 Charged Zeros

We now look into weighting each zero z of a GWHF F with a charge \(\pm 1\), according to whether F preserves or reverses orientation around z. More precisely, we inspect the differential matrix DF of F considered as \(F:{\mathbb {R}}^2 \rightarrow {\mathbb {R}}^2\) and define

$$\begin{aligned} \kappa _z := {\left\{ \begin{array}{ll} 1 &{} \text{ if } \det DF(z) >0 \\ 0 &{} \text{ if } \det DF(z) = 0 \\ -1 &{} \text{ if } \det DF(z) < 0 \end{array}\right. }. \end{aligned}$$

When zeros are interpreted as phase-singularities, charges correspond to the strength of their vorticity [9]. Charges also appear naturally in the study of critical points of random functions, as one investigates the signature of corresponding Hessian matrices—see [12, Sect. 3.1] for an extended discussion.
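On sampled data, charges can be estimated without computing derivatives: the charge of a simple zero equals the winding number of F around a small grid cell containing it. A sketch of this standard discretization (ours; grid convention \(Z_{jk}=x_j+iy_k\)), tested on one analytic and one anti-analytic zero:

```python
import numpy as np

def cell_windings(F):
    """Winding number of the field F around each grid cell: sum of phase
    increments along the counterclockwise cell boundary, wrapped to (-pi, pi].
    A cell containing exactly one simple zero yields its charge +1 or -1."""
    ph = np.angle(F)
    d = lambda a, b: np.angle(np.exp(1j * (b - a)))   # wrapped difference b - a
    w = (d(ph[:-1, :-1], ph[1:, :-1]) + d(ph[1:, :-1], ph[1:, 1:])
         + d(ph[1:, 1:], ph[:-1, 1:]) + d(ph[:-1, 1:], ph[:-1, :-1]))
    return np.rint(w / (2 * np.pi)).astype(int)

x = np.linspace(-1, 1, 41)
Z = x[:, None] + 1j * x[None, :]
a = 0.31 + 0.17j   # a zero location off the grid nodes
print(cell_windings(Z - a).sum(), cell_windings(np.conj(Z) - a).sum())   # 1 -1
```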

We encode charged zeros into the random measure:

$$\begin{aligned} {\mathcal {Z}}^{\kappa }_F&:= \sum _{z \in {\mathbb {C}},\, F(z)=0} \kappa _z \cdot \delta _z\,. \end{aligned}$$
(1.27)

Our next result shows that the corresponding first intensity is independent of the twisted kernel H.

Theorem 1.12

(First intensity of charged zeros) Let F be a GWHF with twisted kernel H satisfying the standing assumptions. Then the random signed measure \({\mathcal {Z}}^{\kappa }_F\) has first intensity \(\rho _1^{\kappa } = \frac{1}{\pi }\), i.e.,

$$\begin{aligned} {\mathbb {E}} \bigg [ \sum _{z \in E,\, F(z)=0} \kappa _z \bigg ] = \frac{1}{\pi } |E|, \qquad E \subseteq {\mathbb {C}} \text{ Borel } \text{ set }. \end{aligned}$$

As mentioned after (1.17), in the situation of Theorem 1.12 the zeros of F are almost surely non-degenerate, and consequently \(\kappa _z=\pm 1\). The fact that the intensity of charged zeros is constant is non-trivial, and shown in Sect. 4.

Charges are straightforward to interpret in Examples 1.1, 1.3, and 1.4, as the transformation \(G(z) \mapsto F(z)=e^{-|z|^2/2} G(z)\) preserves the sign of the Jacobian at a zero—see Sect. 6.7. In the case of Gaussian entire functions, all zeros are positively charged due to the conformality of analytic functions, and, indeed, the first intensities prescribed by Theorems 1.6 and 1.12 coincide. On the other hand, Theorem 1.8 shows that a higher order poly-entire function has a large number of expected zeros per unit area, while, according to Theorem 1.12, most of the corresponding charges cancel. Charges are, in expectation, in a certain equilibrium around the universal density \(1/\pi \).

While zeros of first-order true poly-entire functions correspond to critical points of weighted magnitudes of Gaussian entire functions (Example 1.3), their charges summarize the signatures of the corresponding Hessian matrices—see [12, Sect. 3.1] or Sect. 6.8. In fact, as we show in Sect. 6.8, Theorem 1.12 can be used to rederive a particular case of [12, Corollary 5].

For the STFT of white noise (Example 1.2) the quantities related to charge are

$$\begin{aligned} \mu _{z} = {{\,\mathrm{sgn}\,}}\Big \{\Im \, \Big [ \partial _x (V_g\, {\mathcal {N}})(x,y) \cdot \overline{ \partial _y(V_g\, {\mathcal {N}})(x,y)} \,\Big ] \Big \}, \qquad z=x+iy, \end{aligned}$$
(1.28)

and Theorem 1.12 gives the following.

Corollary 1.13

(Equilibrium of charge for the STFT of complex white noise) Let \(g:{\mathbb {R}} \rightarrow {\mathbb {C}}\) be a Schwartz function. Then the zeros of \(V_g \, {\mathcal {N}}\)—the STFT of complex white noise—satisfy

$$\begin{aligned} {\mathbb {E}} \bigg [ \sum _{z \in E,\, V_g\, {\mathcal {N}} (z) = 0} \mu _z \bigg ] = |E|, \qquad E \subseteq {\mathbb {C}} \text{ Borel } \text{ set }. \end{aligned}$$

Figure 1 illustrates the distribution of charged zeros of one realization of \(V_{h_1} \,{\mathcal {N}}\).

Fig. 1

Distribution of the charged zeros of one realization of \(V_{h_1} {\mathcal {N}}\) on \([0,8]\times [0,8]\). Red plus signs correspond to positive charges while blue circles correspond to negative charges. The number of zeros is 109 and thus close to the expectation \(64\cdot 5/3 \approx 106.7\). The total charge of 61 is also close to the expectation of 64

1.5 Fluctuation of Aggregated Charge

While a general GWHF can have many expected zeros (Theorem 1.6), the corresponding expected charges almost balance out (Theorem 1.12), adding up to the universal density \(1/\pi \). We now look into the stochastic fluctuation of charge when aggregated inside large observation sets, and the extent to which equilibrium is observed at large scales.

A point process is called hyperuniform if the variance of the number of particles within an observation disk of radius R is asymptotically smaller than the corresponding expected number of points [30]. Such fluctuations are also called non-extensive [20], and are anomalously small in comparison to those in ordinary fluids and amorphous solids. Originally introduced in material science, hyperuniformity provides a unified framework to classify crystals and quasicrystals. The notion was subsequently developed into an abstract statistical notion, and found applications in a broad range of topics in physics, number theory, and biology [31]. In particular, hyperuniformity can be formulated, even quantitatively, for charged point processes such as (1.27), and certain classical results can be recast in this light. For example, fluctuations of charged Coulomb systems within observation disks of radius R, if non-extensive, are known to be dominated by the observation perimeter O(R) [27, 28].

Our last result shows that the fluctuations of the aggregated charge of zeros of GWHF with radial twisted kernels are non-extensive, and moreover provides an asymptotic expression for the variance of charge.

Theorem 1.14

(Hyperuniformity of charge) Let F be a GWHF with twisted kernel H satisfying the standing assumptions. Assume further that \(H(z)=P\big (|z|^2\big )\), where \(P:{\mathbb {R}} \rightarrow {\mathbb {R}}\) is \(C^2({\mathbb {R}})\) and

$$\begin{aligned} \sup _{r \ge 0} \big ( \left| P(r)\right| + \left| P'(r)\right| + \left| P''(r)\right| \big ) r^2 < \infty . \end{aligned}$$

Then the charged measure of zeros \({\mathcal {Z}}^{\kappa }_F\) satisfies the following: there exists a constant \(C=C_H>0\) such that for all \(z \in {\mathbb {C}}\),

$$\begin{aligned} \mathrm {Var}\big [{\mathcal {Z}}^{\kappa }_F(B_R(z))\big ] \le C R, \qquad R>0, \end{aligned}$$

while

$$\begin{aligned} \tfrac{1}{R} \mathrm {Var} \Big [ {\mathcal {Z}}^{\kappa }_F(B_R(z)) \Big ] \rightarrow \frac{1}{\pi } \int _0^\infty \frac{2 r^2 P'(r^2)^2}{1-P(r^2)^2} dr, \qquad \text{ as } {R \rightarrow \infty }, \end{aligned}$$

uniformly in z.
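The limiting constant in Theorem 1.14 is a one-dimensional integral that is easy to evaluate numerically; e.g., for Gaussian entire functions, a scipy sketch (the integrand extends continuously to r = 0 with value 1/2, so we start the quadrature slightly above 0):

```python
import numpy as np
from scipy.integrate import quad

def charge_variance_slope(P, dP):
    """Limit of Var[Z^kappa_F(B_R)] / R as R -> infinity in Theorem 1.14,
    for a radial twisted kernel H(z) = P(|z|^2) with derivative dP."""
    f = lambda r: 2 * r ** 2 * dP(r ** 2) ** 2 / (1 - P(r ** 2) ** 2)
    return quad(f, 1e-6, np.inf)[0] / np.pi   # skip the removable 0/0 at r = 0

P = lambda s: np.exp(-s / 2)                  # Gaussian entire functions
dP = lambda s: -0.5 * np.exp(-s / 2)
print(charge_variance_slope(P, dP))           # ~0.1842, i.e., zeta(3/2)/(8 sqrt(pi))
```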

The hypothesis of Theorem 1.14 is satisfied in Example 1.1 (Gaussian entire functions, where all charges are positive and more refined results exist [24, Sect. 3.5]), and in poly-entire contexts (Examples 1.3 and 1.4), as well as for the short-time Fourier transform of white noise (Example 1.2) with Hermite windows—see Sect. 6.7. The case of order one pure poly-entire functions may be interesting in relation to the classification of critical points of Gaussian entire functions—see Sect. 6.8.

In the context of Theorem 1.14, whenever the one-point function of the zero set of F is large, most of the positively charged zeros tend to be surrounded by negatively charged ones, a phenomenon that in the stationary setting is called (almost perfect) screening [10, 34]. In the twisted stationary setting, this phenomenon is universally valid, independently of the particular kernel H. The significance of hyperuniformity thus concerns the empirical observability of the ensemble average claimed in Theorem 1.12: universal screening is observed with growing probability at all sufficiently large scales. For example, by Markov’s inequality, Theorem 1.14 implies that

$$\begin{aligned} \begin{aligned} {\mathbb {P}} \Big [ \big | \tfrac{1}{\left| B_R(z)\right| } {\mathcal {Z}}^{\kappa }_F(B_R(z)) - \tfrac{1}{\pi } \big | > \varepsilon \Big ] = O_\varepsilon \Big (\frac{1}{R^3}\Big ), \end{aligned} \end{aligned}$$

which is consistent with the experiment in Fig. 1, where the prescribed equilibrium is observable already in one realization of a GWHF.

1.6 Organization

Our main tool is direct computation with Kac-Rice formulae and exploitation of the invariance relation (1.2). Section 2 introduces background results and required adaptations to our setting. Section 3 collects preparatory calculations with GWHF. In Sect. 4, we prove all results related to first intensities (Theorems 1.6, 1.12 and Corollary 1.7). In Sect. 5, we study second order statistics of charged zeros and prove Theorem 1.14. Section 6 develops applications to Examples 1.1, 1.3, 1.4, and 1.2, including proofs of Theorems 1.8, 1.9, Corollary 1.10, Theorem 1.11, and Corollary 1.13. The short-time Fourier transform plays a prominent role, as time-frequency techniques are also brought to bear on the other examples. Section 8 contains auxiliary results, including a lengthy calculation, for which we also provide a Python worksheet at https://github.com/gkoliander/gwhf. Section 7 offers conclusions, a discussion on open problems, and perspectives on future work.

2 Preliminaries

2.1 Notation

We use t for real variables and z, w for complex variables. We always use the notation \(z=x+iy\), \(w=u+iv\), with \(x,y,u,v \in {\mathbb {R}}\). The real and imaginary parts of \(z \in {\mathbb {C}}\) are otherwise denoted \(\Re (z)\) and \(\Im (z)\), respectively. The differential of the (Lebesgue) area measure on the plane will be denoted dA for short, while the measure of a set E is |E|. The derivatives of a function \(F:{\mathbb {C}} \rightarrow {\mathbb {C}}\) interpreted as \(F:{\mathbb {R}}^2 \rightarrow {\mathbb {R}}^2\) are denoted by \(F^{(1,0)}\) (real coordinate) and \(F^{(0,1)}\) (imaginary coordinate). Higher derivatives are denoted by \(F^{(k,\ell )}\).

Vectors \((z_1, \ldots , z_n)\in {\mathbb {C}}^n\) are identified with column matrices \((z_1, \ldots , z_n)\in {\mathbb {C}}^{n \times 1}\); \((z_1, \ldots , z_n)^t\) denotes transposition, while \((z_1, \ldots , z_n)^*\) denotes transposition followed by coordinatewise conjugation. We let

$$\begin{aligned} J=\begin{bmatrix} 0 &{}\quad 1 \\ -1 &{}\quad 0 \end{bmatrix} \end{aligned}$$

denote the matrix with the property:

$$\begin{aligned} (z, w)^* J (z,w) = -2i \Im (z{{\bar{w}}}), \qquad (z,w) \in {\mathbb {C}}^2. \end{aligned}$$
(2.1)

The Jacobian of \(F:{\mathbb {C}} \rightarrow {\mathbb {C}}\) at \(z \in {\mathbb {C}}\) is the determinant of its differential matrix DF considered as \(F:{\mathbb {R}}^2 \rightarrow {\mathbb {R}}^2\):

$$\begin{aligned} {{\,\mathrm{Jac}\,}}F(z) := \det DF(z). \end{aligned}$$

The following observations will be used repeatedly:

$$\begin{aligned} {{\,\mathrm{Jac}\,}}F(z) = \det DF(z) = - \Im \big [ F^{(1,0)}(z) \cdot \overline{ F^{(0,1)}(z)}\big ]. \end{aligned}$$
(2.2)

2.2 Gaussian Vectors and Intensities

By a Gaussian vector we always mean a circularly symmetric complex Gaussian random vector, i.e., a random vector X on \({\mathbb {C}}^n\) such that \((\Re (X), \Im (X))\) is normally distributed with zero mean, and such that the pseudo-covariance vanishes:

$$\begin{aligned} {\mathbb {E}}\big [ X X^t \big ] = 0. \end{aligned}$$

A complex Gaussian vector X on \({\mathbb {C}}^n\) is thus determined by its covariance matrix

$$\begin{aligned} {{\,\mathrm{Cov}\,}}[X] = {\mathbb {E}} \big [ X X^* \big ]. \end{aligned}$$

If \({{\,\mathrm{Cov}\,}}[X]\) is non-singular, then X is absolutely continuous and has probability density

$$\begin{aligned} f_X (z_1, \ldots , z_n)= \frac{1}{\pi ^n\det {{{\,\mathrm{Cov}\,}}[X]}} \exp \big ({-(z_1, \ldots , z_n)^* \,({{\,\mathrm{Cov}\,}}[X])^{-1} \, (z_1, \ldots , z_n)}\big ). \end{aligned}$$
(2.3)

Gaussian vectors are not a priori assumed to have non-singular covariances. The zero vector, for example, is a singular Gaussian vector.

If \((X, Y)\) is a Gaussian vector on \({\mathbb {C}}^{n+m}\) and \(h:{\mathbb {C}}^{n} \rightarrow {\mathbb {R}}\) is a function, the conditional expectation \({\mathbb {E}}\big [h(X)\,\big \vert \,Y=0\big ]\) is defined by Gaussian regression. Informally, this involves finding a linear combination of X and Y which is uncorrelated with Y. The following remark makes this intuition precise.

Remark 2.1

(Gaussian regression) Let \((X, Y)\) be a circularly symmetric Gaussian random vector in \({\mathbb {C}}^{n+m}\) with a (possibly singular) covariance matrix

$$\begin{aligned} {{\,\mathrm{Cov}\,}}[(X,Y)] = \begin{bmatrix} A &{} B \\ B^* &{} C \end{bmatrix}, \end{aligned}$$

where \(A={{\,\mathrm{Cov}\,}}[X] \in {\mathbb {C}}^{n \times n}\), \(B \in {\mathbb {C}}^{n \times m}\), and \(C={{\,\mathrm{Cov}\,}}[Y] \in {\mathbb {C}}^{m \times m}\). Assume further that C is nonsingular. Let Z be a circularly symmetric Gaussian random vector in \({\mathbb {C}}^{n}\) with covariance

$$\begin{aligned} {{\,\mathrm{Cov}\,}}[Z] = A-B C^{-1} B^*. \end{aligned}$$

Then, for any locally bounded \(h :{\mathbb {C}}^{n} \rightarrow {\mathbb {R}}\)

$$\begin{aligned} {\mathbb {E}} \big [ h(X) \, \big \vert \, Y=0 \big ] = {\mathbb {E}} \big [ h(Z)\big ] \end{aligned}$$

is the Gaussian regression version of the conditional expectation [3, Eq. (1.5)].
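Remark 2.1 translates directly into code: the conditional law of X given \(Y=0\) is the centered Gaussian with the Schur-complement covariance. A small helper (ours), which we reuse in Sect. 3:

```python
import numpy as np

def regression_cov(cov, n):
    """Covariance A - B C^{-1} B* of X given Y = 0, where (X, Y) has block
    covariance [[A, B], [B*, C]] and X has dimension n (Remark 2.1)."""
    A, B, C = cov[:n, :n], cov[:n, n:], cov[n:, n:]
    return A - B @ np.linalg.solve(C, B.conj().T)

# Toy check: scalar X, Y with correlation b; conditional variance 1 - |b|^2.
b = 0.3 - 0.4j
print(regression_cov(np.array([[1.0, b], [np.conj(b), 1.0]]), 1))   # [[0.75]]
```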

Whenever it exists, the first intensity or one-point intensity of a random signed measure \(\mu \) on \({\mathbb {C}}\) is a measurable function \(\rho :{\mathbb {C}} \rightarrow {\mathbb {R}}\) such that

$$\begin{aligned} {\mathbb {E}} \big [ \mu (E) \big ] = \int _E \rho (z) \,dA(z), \qquad E \text{ Borel } \text{ set }. \end{aligned}$$

Second order intensities are defined in the article as needed. Objects related to charged zeros are denoted with a superscript \(\kappa \).

Background on random Gaussian functions can be found in [1, 3].

2.3 Kac-Rice Formulae

The formulae that describe the statistics of the level sets of Gaussian functions are generically known as Kac-Rice formulae. The following result is quoted from [3, Theorem 6.2]—with the notation \({{\,\mathrm{Jac}\,}}f := \det Df\), for \(f:{\mathbb {R}}^d\rightarrow {\mathbb {R}}^d\).

Proposition 2.2

Let \(U \subset {\mathbb {R}}^d\) be open, \(Z:U \rightarrow {\mathbb {R}}^d\) a Gaussian random field, and \(u \in {\mathbb {R}}^d\). Assume that:

(i) Almost surely, the function \(t \mapsto Z(t)\) is \(C^1\),

(ii) For each \(t \in U\), Z(t) has a non-degenerate distribution (i.e., its covariance is positive-definite),

(iii) \({\mathbb {P}} \left\{ \text {There exists } t \in U \text { such that } Z(t)=u \text { and } {{\,\mathrm{Jac}\,}}Z(t) =0 \right\} =0\).

Then for every Borel set \(E \subset U\):

$$\begin{aligned} {\mathbb {E}} \big [ \# \{t\in E\,:\, Z(t)=u \} \big ] = \int _E {\mathbb {E}} \big [ \left| {{\,\mathrm{Jac}\,}}Z(t) \right| \, \big |\, Z(t)=u \big ] p_{Z(t)}(u) \, dt, \end{aligned}$$
(2.4)

where \(p_{Z(t)}\) is the probability density function of Z(t). In addition, both sides of (2.4) are finite if E is compact.

The following weighted version of Proposition 2.2 is a particular case of [3, Theorem 6.4].

Proposition 2.3

(Expected number of weighted roots) Under the assumptions of Proposition 2.2, let \(\varphi :{\mathbb {R}}^d \times {\mathbb {R}}^{d\times d} \rightarrow {\mathbb {R}}\) be bounded and continuous and \(E \subset U\) compact. Then

$$\begin{aligned} {\mathbb {E}} \left[ \sum _{t \in E, Z(t)=u} \varphi \big (Z(t), DZ(t) \big ) \right] = \int _E {\mathbb {E}} \big [ \left| {{\,\mathrm{Jac}\,}}Z(t) \right| \varphi \big (Z(t), DZ(t) \big ) \, \big |\, Z(t)=u \big ] p_{Z(t)}(u) \, dt. \end{aligned}$$

3 Preparatory Calculations with GWHF

3.1 Covariance Structure of First Derivatives

As a first step in the investigation of a GWHF F, we describe the stochastics of the Gaussian vector

$$\begin{aligned} \big (F(z), F^{(1,0)}(z), F^{(0,1)}(z)\big ) \end{aligned}$$
(3.1)

at a given point \(z\in {\mathbb {C}}\). We start by calculating the covariances

$$\begin{aligned}&{\mathbb {E}} \left[ F(z) \cdot \overline{F(w)} \right] = H(z-w) e^{i (yu-xv)}, \\&{\mathbb {E}} \left[ F(z) \cdot \overline{F^{(1,0)}(w)} \right] = \left( -H^{(1,0)}(z-w) + iy H(z-w) \right) e^{i (yu-xv)}, \\&{\mathbb {E}} \left[ F(z) \cdot \overline{F^{(0,1)}(w)} \right] = \left( -H^{(0,1)}(z-w) - ix H(z-w) \right) e^{i (yu-xv)}, \\&{\mathbb {E}} \left[ F^{(1,0)}(z) \cdot \overline{F^{(1,0)}(w)} \right] \\&\quad = \left( -H^{(2,0)}(z-w)+iy H^{(1,0)}(z-w) + iv H^{(1,0)}(z-w)+yv H(z-w) \right) e^{i (yu-xv)}, \\&{\mathbb {E}} \left[ F^{(1,0)}(z) \cdot \overline{F^{(0,1)}(w)} \right] \\&\quad = \big ( - H^{(1,1)}(z-w)-iH(z-w) - ixH^{(1,0)}(z-w) + iv H^{(0,1)}(z-w)\\&\qquad - xvH(z-w) \big ) e^{i (yu-xv)}, \\&{\mathbb {E}} \left[ F^{(0,1)}(z) \cdot \overline{F^{(0,1)}(w)} \right] \\&\quad = \left( -H^{(0,2)}(z-w)-ix H^{(0,1)}(z-w) -iu H^{(0,1)}(z-w) + xuH(z-w) \right) e^{i (yu-xv)}. \end{aligned}$$

The following lemma will help us simplify further calculations.

Lemma 3.1

Let H be a twisted kernel satisfying the standing assumptions. Then \(H(0)=1\),

$$\begin{aligned}&H^{(1,0)}(0), H^{(0,1)}(0) \in i {\mathbb {R}}, \\&H^{(2,0)}(0), H^{(0,2)}(0), H^{(1,1)}(0) \in {\mathbb {R}}. \end{aligned}$$

Proof

The conclusion follows directly from (1.12). \(\square \)

We now specialize the previous calculations at \(z=w\) and see that the covariance matrix of (3.1) is

$$\begin{aligned} \Gamma (z)&= \begin{pmatrix} 1 &{} iy &{} - ix \\ - iy &{} y^2 &{} -i- xy \\ ix &{} i- xy &{} x^2 \end{pmatrix} + H^{(1,0)}(0) \begin{pmatrix} 0 &{} -1 &{} 0 \\ 1 &{} 2iy &{} - ix \\ 0 &{} - ix &{} 0 \end{pmatrix} \nonumber \\*&\quad + H^{(0,1)}(0) \begin{pmatrix} 0 &{} 0 &{} -1 \\ 0 &{} 0 &{} iy \\ 1 &{} iy &{} -2ix \end{pmatrix} + \begin{pmatrix} 0 &{} 0 &{} 0 \\ 0 &{} -H^{(2,0)}(0) &{} - H^{(1,1)}(0) \\ 0 &{} - H^{(1,1)}(0) &{} -H^{(0,2)}(0) \end{pmatrix}. \end{aligned}$$
(3.2)

We will be mainly interested in conditional expectations of the form

$$\begin{aligned} {\mathbb {E}}\big [ h\big (F^{(1,0)}(z), F^{(0,1)}(z) \big ) \, \big \vert \, F(z)=0\big ]. \end{aligned}$$
(3.3)

According to Remark 2.1, these are \({\mathbb {E}}\big [ h(Z) \big ]\) where \(Z\in {\mathbb {C}}^2\) is a circularly symmetric Gaussian random vector with covariance matrix:

$$\begin{aligned} \Omega = \begin{bmatrix} \alpha &{} \gamma \\ {{\bar{\gamma }}} &{} \beta \end{bmatrix} = \begin{bmatrix} -H^{(2,0)}(0) + \big (H^{(1,0)}(0)\big )^2 &{} - H^{(1,1)}(0)-i +H^{(1,0)}(0)H^{(0,1)}(0) \\ - H^{(1,1)}(0)+i +H^{(1,0)}(0)H^{(0,1)}(0) &{} -H^{(0,2)}(0) + \big ( H^{(0,1)}(0) \big )^2 \end{bmatrix}. \end{aligned}$$
(3.4)
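Numerically, (3.4) is exactly the Gaussian regression of Remark 2.1 applied to (3.2). A check with the kernel of Example 1.1 (so that \(H^{(1,0)}(0)=H^{(0,1)}(0)=H^{(1,1)}(0)=0\) and \(H^{(2,0)}(0)=H^{(0,2)}(0)=-1\)) illustrates that the outcome does not depend on the sample point z:

```python
import numpy as np

def regression_cov(cov, n):   # as in the sketch of Sect. 2.2
    A, B, C = cov[:n, :n], cov[:n, n:], cov[n:, n:]
    return A - B @ np.linalg.solve(C, B.conj().T)

x_, y_ = 0.7, -0.2            # an arbitrary point z = x_ + i y_
Gamma = np.array([[1,        1j * y_,      -1j * x_],
                  [-1j * y_, y_**2 + 1,    -1j - x_ * y_],
                  [1j * x_,  1j - x_ * y_, x_**2 + 1]])   # Gamma(z) of (3.2)
p = [1, 2, 0]                 # reorder so X = (F^{(1,0)}, F^{(0,1)}), Y = F(z)
print(np.round(regression_cov(Gamma[np.ix_(p, p)], 2), 12))  # [[1,-1j],[1j,1]]: Omega of (3.4)
```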

As we can see, the covariance matrix (3.4) and thus the expectation (3.3) do not depend on the specific given point z. The following result formalizes these observations.

Proposition 3.2

Let F be a GWHF satisfying the standing assumptions. Then:

(a) The field of absolute values \(\left| F\right| \) is stationary: for each \(w \in {\mathbb {C}}\), the random functions \(\left| G_w\right| \) and \(\left| F\right| \), where \(G_w(z):= F(z-w)\), have the same distribution.

(b) For any locally bounded function \(h:{\mathbb {R}} \rightarrow {\mathbb {R}}\), the conditional expectation satisfies

$$\begin{aligned} {\mathbb {E}}\big [h\big ({{\,\mathrm{Jac}\,}}F(z)\big ) \, \big \vert \, F(z)=0 \big ] = {\mathbb {E}}\big [h \big ({-}\Im [ Z_1 \overline{ Z_2}]\big ) \big ] \end{aligned}$$

where \(Z = (Z_1, Z_2)\in {\mathbb {C}}^2\) is a circularly symmetric Gaussian random vector with covariance matrix given by \(\Omega \) in (3.4). In particular, \({\mathbb {E}}\big [h\big ({{\,\mathrm{Jac}\,}}F(z)\big ) \, \big \vert \, F(z)=0 \big ]\) does not depend on the choice of \(z\in {\mathbb {C}}\).

(c) The zeros of F are almost surely simple and thus isolated. More precisely,

$$\begin{aligned} {\mathbb {P}} \left\{ \text {There exists } z \in {\mathbb {C}} \text { such that } F(z)=0 \text { and } {{\,\mathrm{Jac}\,}}F(z) =0 \right\} =0. \end{aligned}$$

Proof

For (a) note that the twisted stationarity condition (1.2) implies that \(e^{i\Im (\cdot {\overline{w}})} F(\cdot -w)\) and \(F(\cdot )\) are circularly symmetric complex Gaussian fields with the same covariance structure. The claim about absolute values follows immediately since \(|e^{i\Im (z {\overline{w}})} F(z-w)|= \left| G_w(z)\right| \).

Part (b) follows from the discussion above, as \({{\,\mathrm{Jac}\,}}F(z)=- \Im \big [ F^{(1,0)}(z) \, \overline{ F^{(0,1)}(z)}\big ]\).

For (c), we apply [3, Proposition 6.5]. The required hypotheses are that F be \(C^2\) almost surely, as we assume, and that the probability density of F(z) be bounded near 0, uniformly in z. This last requirement is satisfied because F(z) is a circularly symmetric complex Gaussian vector with variance \({{\,\mathrm{Var}\,}}[F(z)] = H(0)=1\). \(\square \)

3.2 Kac-Rice Formulae for GWHF

The second preparatory step is to check that various Kac-Rice formulae are applicable to GWHF and obtain corresponding intensities for zeros.

Lemma 3.3

Let F be a GWHF with twisted kernel H satisfying the standing assumptions. Then the first intensities of the uncharged and charged zero sets are independent of z and given by

$$\begin{aligned} \rho _1&= \tfrac{1}{\pi } {\mathbb {E}} \left[ \left| {{\,\mathrm{Jac}\,}}F(z)\right| \,\big |\, F(z)=0 \right] , \qquad \text{ for } \text{ any } z \in {\mathbb {C}}, \end{aligned}$$
(3.5)
$$\begin{aligned} \rho ^{\kappa }_1&= \tfrac{1}{\pi } {\mathbb {E}}\left[ {{\,\mathrm{Jac}\,}}F(z) \,\big |\, F(z)=0 \right] , \qquad \text{ for } \text{ any } z \in {\mathbb {C}}. \end{aligned}$$
(3.6)

That is, the random measures (1.17) and (1.27) satisfy

$$\begin{aligned} {\mathbb {E}} \big [ {\mathcal {Z}}_F(E)\big ]&= \rho _1 |E|, \\ {\mathbb {E}} \big [ {\mathcal {Z}}^{\kappa }_F(E)\big ]&= \rho ^{\kappa }_1 |E|, \end{aligned}$$

for every Borel set \(E \subseteq {\mathbb {C}}\).

In addition, we define the semi-charged two-point intensity \(\tau _2^{\kappa }:{\mathbb {C}} \rightarrow {\mathbb {R}}\) by

$$\begin{aligned} \tau _2^{\kappa }(z-w)&= \frac{ {\mathbb {E}}\big [ {{\,\mathrm{Jac}\,}}F(z) \, {{\,\mathrm{Jac}\,}}F(w) \,\big |\, F(z)=F(w)=0 \big ] }{ \pi ^{2}\big (1 - |{H(z-w)}|^2\big ) }, \qquad \text{ for } \text{ any } z,w \in {\mathbb {C}}. \end{aligned}$$
(3.7)

Then \(\tau _2^{\kappa }\) is well-defined and serves as density for the following semi-charged factorial moment:

$$\begin{aligned} {\mathbb {E}} \big [ \big ({\mathcal {Z}}^{\kappa }_F (E)\big )^2-{\mathcal {Z}}_F (E)\big ]&= \int _{E\times E} \tau _2^{\kappa }(z-w) \, dA(z) dA(w), \qquad E \subseteq {\mathbb {C}} \text{ Borel } \text{ set }. \end{aligned}$$
(3.8)

Proof

We first apply Proposition 2.2 to obtain:

$$\begin{aligned} \rho _1(z)&= {\mathbb {E}} \left[ \left| {{\,\mathrm{Jac}\,}}F(z)\right| \,\big |\, F(z)=0 \right] p_{F(z)} (0). \end{aligned}$$

The required regularity hypotheses are verified by Proposition 3.2. Since F(z) is a Gaussian circularly symmetric complex random variable with zero mean and variance \(H(0)=1\), \(p_{F(z)}(0) = \tfrac{1}{\pi }\), and (3.5) follows. The independence of z follows from Property (b) in Proposition 3.2.

Similarly, Proposition 2.3 gives:

$$\begin{aligned} {\mathbb {E}}\Bigg [ \sum _{z \in E, F(z)=0} \varphi ({{\,\mathrm{Jac}\,}}F(z)) \Bigg ]&= \tfrac{1}{{\pi }} \int _E {\mathbb {E}}\big [ \left| {{\,\mathrm{Jac}\,}}F(z)\right| \, \varphi ({{\,\mathrm{Jac}\,}}F(z)) \, \big | \, F(z)=0 \big ] dA(z), \end{aligned}$$
(3.9)

for all compact \(E \subseteq {\mathbb {C}}\) and bounded and continuous \(\varphi :{\mathbb {R}} \rightarrow {\mathbb {R}}\). Formally applying this formula to the non-continuous function \(\varphi (x)={{\,\mathrm{sgn}\,}}(x)\) yields (3.6). To justify this application, fix a compact set \(E\subseteq {\mathbb {C}}\) and let \(\varphi _n :{\mathbb {R}} \rightarrow [-1,1]\) be continuous and such that \(\varphi _n(x)={{\,\mathrm{sgn}\,}}(x)\), for \(\left| x\right| > 1/n\). First note that

$$\begin{aligned} \sum _{z \in E, F(z)=0} \varphi _n({{\,\mathrm{Jac}\,}}F(z)) \longrightarrow \sum _{z \in E, F(z)=0} {{\,\mathrm{sgn}\,}}({{\,\mathrm{Jac}\,}}F(z)) \end{aligned}$$
(3.10)

almost surely, as convergence can only fail when \(F(z)=0\) and \({{\,\mathrm{Jac}\,}}F(z)=0\), and this is a zero probability event according to Property (c) in Proposition 3.2. To show that (3.10) also holds in expectation, we estimate

$$\begin{aligned} \Bigg |\sum _{z \in E, F(z)=0} \varphi _n({{\,\mathrm{Jac}\,}}F(z)) \Bigg |\le \# \{z \in E, F(z)=0\}, \end{aligned}$$

note that \({\mathbb {E}} \big [ \# \{z \in E, F(z)=0\} \big ] = \rho _1 \left| E\right| <\infty \), and invoke the Dominated Convergence Theorem. We now inspect the right-hand side of (3.9). By Property (b) in Proposition 3.2,

$$\begin{aligned} {\mathbb {E}}\big [ \left| {{\,\mathrm{Jac}\,}}F(z)\right| \, \varphi _n({{\,\mathrm{Jac}\,}}F(z)) \, \big | \, F(z)=0 \big ] = {\mathbb {E}}\big [ \left| - \Im [ Z_1 \overline{ Z_2}]\right| \, \varphi _n(- \Im [ Z_1 \overline{ Z_2}]) \big ] \end{aligned}$$
(3.11)

and

$$\begin{aligned} {\mathbb {E}}\big [ {{\,\mathrm{Jac}\,}}F(z) \, \big | \, F(z)=0 \big ] = {\mathbb {E}}\big [ \Im [ Z_1 \overline{ Z_2}] \big ] \end{aligned}$$
(3.12)

where \(Z = (Z_1, Z_2)\in {\mathbb {C}}^2\) is a circularly symmetric Gaussian random vector with covariance matrix \(\Omega \) given by (3.4). Since \(\left| - \Im [ Z_1 \overline{ Z_2}]\right| \, \varphi _n(- \Im [ Z_1 \overline{ Z_2}]) \rightarrow -\Im [ Z_1 \overline{ Z_2}]\) almost surely, and

$$\begin{aligned} \left| - \Im [ Z_1 \overline{ Z_2}] \, \varphi _n(- \Im [ Z_1 \overline{ Z_2}])\right| \le \left| - \Im [ Z_1 \overline{ Z_2}] \right| , \end{aligned}$$

while

$$\begin{aligned} {\mathbb {E}}\big [ \left| - \Im [ Z_1 \overline{ Z_2}] \right| \big ] = {\mathbb {E}}\big [ \left| {{\,\mathrm{Jac}\,}}F(z)\right| \, \big \vert \, F(z)=0 \big ] = \pi \rho _1 < \infty \,, \end{aligned}$$
(3.13)

we can again invoke the Dominated Convergence Theorem to conclude that the right-hand side in (3.11) converges to the right-hand side of (3.12). Summarizing, we have that

$$\begin{aligned} {\mathbb {E}}\Bigg [ \sum _{z \in E, \,F(z)=0} {{\,\mathrm{sgn}\,}}({{\,\mathrm{Jac}\,}}F(z)) \Bigg ]&= \lim _{n\rightarrow \infty }{\mathbb {E}}\Bigg [ \sum _{z \in E, \,F(z)=0} \varphi _n({{\,\mathrm{Jac}\,}}F(z)) \Bigg ] \nonumber \\&= \lim _{n\rightarrow \infty } \tfrac{1}{{\pi }} \int _E {\mathbb {E}}\big [ \left| {{\,\mathrm{Jac}\,}}F(z)\right| \, \varphi _n({{\,\mathrm{Jac}\,}}F(z)) \, \big \vert \, F(z)=0 \big ] dA(z) \nonumber \\&= \tfrac{1}{{\pi }} \int _E {\mathbb {E}}\big [ {{\,\mathrm{Jac}\,}}F(z) \, \big \vert \, F(z)=0 \big ] dA(z), \end{aligned}$$
(3.14)

which yields (3.6). For (3.7), we first note that

$$\begin{aligned} \big ({\mathcal {Z}}^{\kappa }_F (E)\big )^2-{\mathcal {Z}}_F (E) = \sum _{\begin{array}{c} z,w \in E, z \not = w \\ F(z)=F(w)=0 \end{array}} \mathrm {sgn} ({{\,\mathrm{Jac}\,}}F(z)) \cdot \mathrm {sgn} ({{\,\mathrm{Jac}\,}}F(w)). \end{aligned}$$

Let \(\delta >0\) and consider the random Gaussian field on \({\mathbb {C}}^2\) given by

$$\begin{aligned} {\tilde{F}}(z,w) = (F(z),F(w)). \end{aligned}$$

We apply Proposition 2.3 to \({\tilde{F}}\) and use a regularization argument as before to learn that, for any Borel set \({{\tilde{E}}} \subseteq {\mathbb {C}}^2\),

$$\begin{aligned} \begin{aligned}&{\mathbb {E}} \bigg [ \sum _{(z,w) \in {{\tilde{E}}}, \left| z-w\right| \ge \delta } \mathrm {sgn} ({{\,\mathrm{Jac}\,}}F(z)) \cdot \mathrm {sgn} ({{\,\mathrm{Jac}\,}}F(w)) \bigg ] \\&\quad = \int _{{{\tilde{E}}} \setminus \{\left| z-w\right| < \delta \}} \lambda \, {\mathbb {E}}\big [ {{\,\mathrm{Jac}\,}}F(z) \cdot {{\,\mathrm{Jac}\,}}F(w) \,\big |\, F(z)=F(w)=0 \big ] \, dA(z) \, dA(w), \end{aligned} \end{aligned}$$
(3.15)

where

$$\begin{aligned} \lambda =\pi ^{-2}\big (1 - \left| H(z-w)\right| ^2\big )^{-1}. \end{aligned}$$
(3.16)

To apply the weighted Kac-Rice formula it is important that the covariance matrix of \({\tilde{F}}(z,w)\) be non-singular, as granted by (1.14). Proposition 2.3 thus gives (3.15), where \(\lambda \) is the value of the probability density of \({\tilde{F}}(z,w)\) at 0, which is indeed given by (3.16). We let \(E \subseteq {\mathbb {C}}\) be compact, choose \({{\tilde{E}}} \subseteq E \times E\) in (3.15) according to the signs of \({{\,\mathrm{Jac}\,}}F(z)\) and \({{\,\mathrm{Jac}\,}}F(w)\), let \(\delta \rightarrow 0\), and use the Monotone Convergence Theorem to deduce (3.8), albeit with a function depending on \((z,w)\) in lieu of \(\tau _2^{\kappa }\). It thus remains to show that \({\mathbb {E}}\big [ {{\,\mathrm{Jac}\,}}F(z) \, {{\,\mathrm{Jac}\,}}F(w) \,\big |\, F(z)=F(w)=0 \big ]\) depends only on the difference \(z-w\). To this end, note that the twisted stationarity condition (1.2) means that

$$\begin{aligned} F_\zeta (z) := e^{i\Im (z {\overline{\zeta }})} F(z-\zeta ), \qquad z \in {\mathbb {C}}, \end{aligned}$$

and F are circularly symmetric complex Gaussian fields with the same covariance. Thus,

$$\begin{aligned}&{\mathbb {E}} \Big [ {{\,\mathrm{Jac}\,}}F(z) \cdot {{\,\mathrm{Jac}\,}}F(w) \,\big |\, F(z)=F(w)=0 \Big ]\nonumber \\&= {\mathbb {E}} \Big [ {{\,\mathrm{Jac}\,}}F_\zeta (z) \cdot {{\,\mathrm{Jac}\,}}F_\zeta (w) \, \big | \, F_\zeta (z)=F_\zeta (w)=0 \Big ]. \end{aligned}$$
(3.17)

Let us calculate the right-hand side of the previous equation. Writing \(\zeta =a+ib\), we compute

$$\begin{aligned} F^{(1,0)}_\zeta (z)&= e^{i\Im (z {\overline{\zeta }})} \big [ F^{(1,0)}(z-\zeta ) - i b F(z-\zeta ) \big ], \\ F^{(0,1)}_\zeta (z)&= e^{i\Im (z {\overline{\zeta }})} \big [ F^{(0,1)}(z-\zeta ) + i a F(z-\zeta ) \big ]. \end{aligned}$$

Similar equations hold of course for w in lieu of z. We note that the event

$$\begin{aligned} \{F_\zeta (z)=F_\zeta (w)=0\} \end{aligned}$$
(3.18)

is precisely the event \(\{F(z-\zeta )=F(w-\zeta )=0\}\), and that under this event,

$$\begin{aligned} F^{(1,0)}_\zeta (z)&= e^{i\Im (z {\overline{\zeta }})} F^{(1,0)}(z-\zeta ), \\ F^{(0,1)}_\zeta (z)&= e^{i\Im (z {\overline{\zeta }})} F^{(0,1)}(z-\zeta ), \end{aligned}$$

and similarly for w in lieu of z. As a consequence, under (3.18), \({{\,\mathrm{Jac}\,}}F_\zeta (z)={{\,\mathrm{Jac}\,}}F(z-\zeta )\) and \({{\,\mathrm{Jac}\,}}F_\zeta (w)={{\,\mathrm{Jac}\,}}F(w-\zeta )\). Plugging these observations into (3.17) we conclude that

$$\begin{aligned}&{\mathbb {E}} \big [ {{\,\mathrm{Jac}\,}}F(z) \cdot {{\,\mathrm{Jac}\,}}F(w) \,\big \vert \, F(z)=F(w)=0 \big ] \\&\quad = {\mathbb {E}} \big [ {{\,\mathrm{Jac}\,}}F(z-\zeta ) \cdot {{\,\mathrm{Jac}\,}}F(w-\zeta ) \,\big \vert \, F(z-\zeta )=F(w-\zeta )=0 \big ]. \end{aligned}$$

That is, the number \({\mathbb {E}} \big [ {{\,\mathrm{Jac}\,}}F(z) \cdot {{\,\mathrm{Jac}\,}}F(w) \,\big \vert \, F(z)=F(w)=0 \big ]\) depends only on \(z-w\). \(\square \)

4 First Intensities

We can now derive the main results on first intensities.

Proof of Theorem 1.6

By Kac-Rice’s formula (3.5),

$$\begin{aligned} \rho _1&= \frac{1}{\pi } {\mathbb {E}} \big [ \left| {{\,\mathrm{Jac}\,}}F(z)\right| \,\big |\, F(z)=0 \big ] = \frac{1}{{\pi }} {\mathbb {E}} \big [ \big |- \Im \big [ F^{(1,0)}(z) \, \overline{ F^{(0,1)}(z)}\big ] \big | \,\big \vert \, F(z)=0 \big ]. \end{aligned}$$

By Property (b) in Proposition 3.2,

$$\begin{aligned} {\mathbb {E}} \big [ \big |- \Im \big [ F^{(1,0)}(z) \, \overline{ F^{(0,1)}(z)}\big ] \big | \,\big \vert \, F(z)=0 \big ] = {\mathbb {E}}\big [ \big |\Im [ - Z_1 \overline{ Z_2}] \big | \big ] \end{aligned}$$

where \(Z = (Z_1, Z_2)\in {\mathbb {C}}^2\) is a circularly symmetric Gaussian random vector with covariance matrix given by \(\Omega \) in (3.4). Let us assume initially that \(\Omega \) is non-singular, and note that \(\det (\Omega )\) coincides with \(\Delta _H\) as defined in (1.19), by Lemma 3.1, so that \(\Delta _H> 0\). Hence,

$$\begin{aligned} \pi \rho _1 =\frac{1}{\pi ^2 \Delta _H} \int _{{\mathbb {C}}^2} |\Im (z {\overline{w}})| e^{- (z,w)^* \Omega ^{-1} (z,w)} dA(z) dA(w). \end{aligned}$$

We use the following formula [9, eq. (4.32)]

$$\begin{aligned} \left| x\right| =\frac{1}{\pi }\int _{-\infty }^{+\infty } \left( 1-\cos (xt)\right) \frac{dt}{t^2} = \frac{1}{\pi } \Re \bigg ( \int _{-\infty }^{+\infty } \left( 1-e^{itx} \right) \frac{dt}{t^2}\bigg ), \end{aligned}$$

where the last integral is to be understood as a principal value. By Lemma 8.1,

$$\begin{aligned} \pi \rho _1&= \frac{1}{\pi \Delta _H} \Re \bigg ( \int _{-\infty }^{+\infty } \frac{1}{\pi ^2} \int _{{\mathbb {C}}^2} \big (1-e^{it\Im (z{{\bar{w}}})} \big ) e^{- (z,w)^* \Omega ^{-1} (z,w)} dA(z) dA(w) \frac{dt}{t^2} \bigg ) \\&= \frac{1}{\pi \Delta _H} \Re \bigg ( \int _{-\infty }^{+\infty } \bigg ( \Delta _H- \det \bigg ( \Omega ^{-1}+\frac{t}{2} J \bigg )^{-1} \bigg ) \frac{dt}{t^2} \bigg ) \\&= \frac{1}{\pi } \Re \bigg ( \int _{-\infty }^{+\infty } \bigg ( 1- \det \bigg ( I + \frac{t}{2}\Omega J \bigg )^{-1} \bigg ) \frac{dt}{t^2} \bigg ). \end{aligned}$$

Using Lemma 3.1, we note that the off-diagonal element \(\gamma \) in \(\Omega \) satisfies

$$\begin{aligned} {{\bar{\gamma }}}-\gamma =2i, \end{aligned}$$
(4.1)

and, hence,

$$\begin{aligned} \det \bigg (I + \frac{t}{2}\Omega J\bigg ) =\begin{vmatrix} -\frac{t}{2} \gamma + 1&\frac{t}{2} \alpha \\ -\frac{t}{2} \beta&\frac{t}{2} {{\bar{\gamma }}} +1 \end{vmatrix}&=1+\frac{t}{2}({{\bar{\gamma }}}-\gamma ) -\frac{t^2}{4}\big (\left| \gamma \right| ^2-\alpha \beta \big )\\&=1+it +\frac{t^2}{4} \Delta _H. \end{aligned}$$

Therefore

$$\begin{aligned}&\frac{1}{\pi } \Re \bigg ( \int _{-\infty }^{+\infty } \bigg ( 1- \det \bigg ( I + \frac{t}{2}\Omega J \bigg )^{-1} \bigg ) \frac{dt}{t^2} \bigg ) \\&\quad = \frac{1}{\pi } \Re \bigg ( \int _{-\infty }^{+\infty } \bigg ( 1- \frac{1}{1+it +\frac{t^2}{4} \Delta _H} \bigg ) \frac{dt}{t^2} \bigg ) \\&\quad = \frac{1}{\pi } \Re \bigg ( \int _{-\infty }^{+\infty } \bigg ( \frac{1 +\frac{t^2}{2} \Delta _H+\frac{t^4}{16}\Delta _H^2 + t^2 - 1+it -\frac{t^2}{4} \Delta _H}{ 1 +\frac{t^2}{2} \Delta _H+\frac{t^4}{16}\Delta _H^2 + t^2} \bigg ) \frac{dt}{t^2} \bigg ) \\&\quad = \frac{1}{\pi } \int _{-\infty }^{+\infty } \frac{ \frac{1}{4}\Delta _H+\frac{t^2}{16}\Delta _H^2 + 1 }{ 1 +\frac{t^2}{2} \Delta _H+\frac{t^4}{16}\Delta _H^2 + t^2} dt \\&\quad = \frac{1}{\pi } \int _{-\infty }^{+\infty } \frac{ t^2\Delta _H^2 + 4\Delta _H+ 16 }{ t^4 \Delta _H^2 + 8 t^2\Delta _H+ 16 t^2 + 16 } dt\,. \end{aligned}$$

Let us write \(t^4\Delta _H^2 + 8 t^2 \big (\Delta _H+2 \big )+16=\Delta _H^2(t^2+\lambda )(t^2+\mu )\) with

$$\begin{aligned} \Delta _H^2 \lambda \mu = 16, \quad \Delta _H^2(\lambda +\mu )=8(\Delta _H+2). \end{aligned}$$
(4.2)

This shows that \(\lambda ,\mu \in (0,+\infty )\); we may assume that \(\lambda \ge \mu \). A direct calculation further shows that

$$\begin{aligned}&\sqrt{\lambda }-\sqrt{\mu }=\sqrt{\lambda \mu }, \quad \sqrt{\lambda }+\sqrt{\mu }=\sqrt{\Delta _H+1}\sqrt{\lambda \mu },\nonumber \\&\quad \lambda -\mu =\lambda \mu \sqrt{\Delta _H+1} =\tfrac{16}{\Delta _H^2}\sqrt{\Delta _H+1}. \end{aligned}$$
(4.3)

Using (4.2) we write

$$\begin{aligned} {t^2} \Delta _H^2 +4\Delta _H+16&= t^2 \Delta _H^2 +4(\Delta _H+2) +8\\&= \tfrac{\Delta _H^2}{2} \left( (t^2+\lambda ) + (t^2+\mu )\right) +\tfrac{8}{\lambda -\mu }\left( (t^2+\lambda )-(t^2+\mu )\right) \\&= \Delta _H^2 \left( \big (\tfrac{1}{2}+\tfrac{1}{2\sqrt{\Delta _H+1}}\big ) (t^2+\lambda ) + \big (\tfrac{1}{2}-\tfrac{1}{2\sqrt{\Delta _H+1}}\big ) (t^2+\mu ) \right) . \end{aligned}$$

Hence, with the aid of (4.3), we compute

$$\begin{aligned} \pi \rho _1&= \frac{1}{\pi } \int _{-\infty }^\infty \left( \frac{1}{2}-\frac{1}{2\sqrt{\Delta _H+1}}\right) \frac{1}{t^2+\lambda } + \left( \frac{1}{2}+\frac{1}{2\sqrt{\Delta _H+1}}\right) \frac{1}{t^2+\mu } \, dt \\&= \left( \frac{1}{2}-\frac{1}{2\sqrt{\Delta _H+1}}\right) \frac{1}{\sqrt{\lambda }} + \left( \frac{1}{2}+\frac{1}{2\sqrt{\Delta _H+1}}\right) \frac{1}{\sqrt{\mu }} \\&= \frac{1}{\sqrt{\lambda \mu }} \left( \frac{1}{2}\left( \sqrt{\lambda }+\sqrt{\mu }\right) +\frac{1}{2\sqrt{\Delta _H+1}} \left( \sqrt{\lambda }-\sqrt{\mu }\right) \right) \\&= \left( \frac{1}{2}\sqrt{\Delta _H+1} +\frac{1}{2\sqrt{\Delta _H+1}} \right) = \frac{\Delta _H+2}{2\sqrt{\Delta _H+1}}, \end{aligned}$$

as claimed.

Finally, if \(\Omega \) is singular, we let \(Z^\tau := Z + \sqrt{\tau } X\) with X an independent standard circularly symmetric Gaussian vector on \({\mathbb {C}}^2\) and \(\tau \in (0,1)\). Then \(Z^\tau \) has covariance \(\Omega ^\tau = \Omega + \tau I\) and the calculation above shows that

$$\begin{aligned} \lim _{\tau \rightarrow 0+} {\mathbb {E}}\big [ \big |\Im [ - Z^\tau _1 \overline{ Z^\tau _2}] \big | \big ] = \lim _{\tau \rightarrow 0+} \frac{\det (\Omega + \tau I)+2}{2\sqrt{\det (\Omega + \tau I)+1}} = 1\,. \end{aligned}$$

Furthermore, by continuity, \(\big |\Im [ - Z^\tau _1 \overline{ Z^\tau _2}] \big | \longrightarrow \big |\Im [ - Z_1 \overline{ Z_2}] \big |\) almost surely and

$$\begin{aligned} \big |\Im [ - Z^\tau _1 \overline{ Z^\tau _2}] \big | \le \left| Z^\tau _1\right| \cdot \left| Z^\tau _2\right| \le \big (\left| Z_1\right| + \left| X_1\right| \big ) \cdot \big (\left| Z_2\right| + \left| X_2\right| \big ), \qquad \tau \in (0,1). \end{aligned}$$
(4.4)

Since \(Z_1, Z_2, X_1, X_2\) are normal, they are square integrable with respect to the underlying probability, and thus the right hand side of (4.4) is integrable. Hence, by dominated convergence,

$$\begin{aligned} \frac{1}{{\pi }} = \frac{1}{{\pi }} \lim _{\tau \rightarrow 0+} {\mathbb {E}}\big [ \big |\Im [ - Z^\tau _1 \overline{ Z^\tau _2}] \big | \big ] = \frac{1}{{\pi }} {\mathbb {E}}\big [ \big |\Im [ - Z_1 \overline{ Z_2}] \big | \big ] = \rho _1\, . \end{aligned}$$

This completes the proof. \(\square \)
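The probabilistic statement itself can also be probed by simulation: for a covariance \(\Omega \) of the form arising in (3.4), i.e., with \(\Im (\gamma )=-1\), the proof shows that \({\mathbb {E}}\big [ |\Im [-Z_1\overline{Z_2}]| \big ] = (\Delta _H+2)/(2\sqrt{\Delta _H+1})\). A minimal Monte Carlo sketch (ours; it assumes NumPy, and the numerical values are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
a, b, c = 2.0, 3.0, 0.5                       # gamma = c - 1j, so Im(gamma) = -1
Omega = np.array([[a, c - 1j], [c + 1j, b]])  # Hermitian, positive definite
Delta = np.linalg.det(Omega).real

# Z = L xi with L L* = Omega and xi standard circularly symmetric
L = np.linalg.cholesky(Omega)
xi = (rng.standard_normal((2, 10**6)) + 1j*rng.standard_normal((2, 10**6)))/np.sqrt(2)
Z = L @ xi

print(np.abs(np.imag(Z[0]*np.conj(Z[1]))).mean())   # Monte Carlo estimate
print((Delta + 2)/(2*np.sqrt(Delta + 1)))           # closed form
```

The Cholesky factor realizes the prescribed covariance exactly; with \(10^6\) samples the two printed values typically agree to about three decimal places.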

As an application of Theorem 1.6, we derive a simplified expression for the first intensity in the case of radial twisted kernels.

Proof of Corollary 1.7

We use Theorem 1.6. We first note that \(H(0)=P(0)=1\) and compute

$$\begin{aligned} H^{(1,0)}(z)&= 2 x P'\big (\left| z\right| ^2\big ) \\ H^{(1,1)}(z)&= 4 x y P''\big (\left| z\right| ^2\big ) \\ H^{(2,0)}(z)&= 4 x^2 P''\big (\left| z\right| ^2\big ) + 2 P'\big (\left| z\right| ^2\big ). \end{aligned}$$

Hence, \(H^{(1,0)}(0)=H^{(1,1)}(0)=0\), and \(H^{(2,0)}(0)=2P'(0)\). By symmetry, \(H^{(0,1)}(0)=0\), and \(H^{(0,2)}(0)=2P'(0)\). This gives that \(\Omega \), as defined in (3.4), is

$$\begin{aligned} \Omega = \begin{bmatrix} -2P'(0) &{} -i \\ i &{} -2P'(0) \end{bmatrix} \end{aligned}$$
(4.5)

and its determinant \(\Delta _H\) is

$$\begin{aligned} \Delta _H= \big (-2P'(0)\big )^2 - 1 = \big ( 2 P'(0)+1 \big ) \big (2 P'(0)-1\big ). \end{aligned}$$

Because \(\Delta _H\ge 0\), this implies that either \(P'(0)\ge 1/2\) or \(P'(0)\le -1/2\). In addition, since \(\Omega \) is positive semidefinite, its diagonal minor \(-2P'(0)\) has to be nonnegative, i.e., \(P'(0)\le 0\), which leaves \(P'(0)\le -1/2\) as the only valid option. To obtain \(\rho _1\) in (1.18), we calculate

$$\begin{aligned} \Delta _H+ 2&= 4 P'(0)^2 +1,\\ \sqrt{\Delta _H+1}&= -2P'(0), \end{aligned}$$

and (1.18) simplifies to (1.21). \(\square \)
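The simplification can be confirmed symbolically. A SymPy sketch (ours), writing \(P'(0)=-a\) with \(a>0\) in accordance with the constraint \(P'(0)\le -1/2\):

```python
import sympy as sp

a = sp.symbols('a', positive=True)   # a = -P'(0)
p = -a
Delta = (2*p + 1)*(2*p - 1)          # Delta_H = 4 P'(0)^2 - 1
rho1 = (Delta + 2)/(2*sp.pi*sp.sqrt(Delta + 1))   # (1.18)
print(sp.simplify(rho1 + (p + 1/(4*p))/sp.pi))    # -> 0, i.e. (1.18) equals (1.21)
```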

Finally, we derive the first intensity of charged zeros.

Proof of Theorem 1.12

We proceed along the lines of the proof of Theorem 1.6. This time we use the Kac–Rice formula (3.6), which gives that \(\rho ^{\kappa }_1\) is the following constant:

$$\begin{aligned} \rho ^{\kappa }_1 = \frac{1}{{\pi }} {\mathbb {E}} \big [ {-} \Im \big [ F^{(1,0)}(z) \, \overline{ F^{(0,1)}(z)}\big ] \,\big \vert \, F(z)=0 \big ]. \end{aligned}$$

By Proposition 3.2(b),

$$\begin{aligned} {\mathbb {E}} \big [ {-} \Im \big [ F^{(1,0)}(z) \, \overline{ F^{(0,1)}(z)}\big ] \,\big \vert \, F(z)=0 \big ] = {\mathbb {E}}\big [ \Im [ - Z_1 \overline{ Z_2}] \big ] \end{aligned}$$

where \(Z = (Z_1, Z_2)\in {\mathbb {C}}^2\) is a circularly symmetric Gaussian random variable with covariance matrix given by \(\Omega \) in (3.4). Thus,

$$\begin{aligned} \rho ^{\kappa }_1 = {-}\frac{1}{{\pi }} \Im (\gamma ) = \frac{1}{{\pi }} \end{aligned}$$

where \(\gamma \) is the covariance \({\mathbb {E}}\big [ Z_1 \overline{ Z_2}\big ]\) in (3.4). \(\square \)

5 Charge Fluctuations

5.1 Sufficient Conditions for Hyperuniformity

The following lemma gives sufficient conditions for the hyperuniformity of the charged zero set (1.27). These are formulated in terms of the first intensity of the uncharged zero set \(\rho _1\), which is constant by Theorem 1.6, and the semi-charged two-point intensity defined in Lemma 3.3, cf. (3.7).

Lemma 5.1

Let F be a GWHF with twisted kernel H satisfying the standing assumptions. Suppose that

$$\begin{aligned}&\int _{{\mathbb {C}}} (1+\left| z\right| ) \bigg |\frac{1}{\pi ^2}-\tau ^{\kappa }_2(z) \bigg | dA(z) < \infty , \end{aligned}$$
(5.1)

and

$$\begin{aligned}&\int _{{\mathbb {C}}} \left( \frac{1}{\pi ^2}-\tau ^{\kappa }_2(z)\right) dA(z) = \rho _1. \end{aligned}$$
(5.2)

Then there exists \(C>0\) such that for all \(z_0 \in {\mathbb {C}}\),

$$\begin{aligned} \mathrm {Var}\big [{\mathcal {Z}}^{\kappa }_F(B_R(z_0))\big ] \le C R, \qquad R>0, \end{aligned}$$
(5.3)

and

$$\begin{aligned} \tfrac{1}{R} \mathrm {Var}\big [{\mathcal {Z}}^{\kappa }_F(B_R(z_0))\big ] \rightarrow \int _{{\mathbb {C}}} \left| z\right| \left( \frac{1}{\pi ^2}-\tau ^{\kappa }_2(z)\right) dA(z), \qquad \text { as } {R \rightarrow \infty }. \end{aligned}$$
(5.4)

Proof

We let \(z_0 \in {\mathbb {C}}\) and use (3.8) and Theorem 1.12 to compute

$$\begin{aligned}&\mathrm {Var}\Big [{\mathcal {Z}}^{\kappa }_F (B_R(z_0))\Big ] = {\mathbb {E}}\left[ \big ({\mathcal {Z}}^{\kappa }_F (B_R(z_0))\big )^2 \right] - \left( {\mathbb {E}}\left[ {\mathcal {Z}}^{\kappa }_F (B_R(z_0)) \right] \right) ^2 \\&\quad = {\mathbb {E}}\left[ \big ({\mathcal {Z}}^{\kappa }_F (B_R(z_0))\big )^2 -{\mathcal {Z}}_F (B_R(z_0)) \right] - \left( {\mathbb {E}}\left[ {\mathcal {Z}}^{\kappa }_F (B_R(z_0)) \right] \right) ^2 + {\mathbb {E}}\left[ {\mathcal {Z}}_F (B_R(z_0))\right] \\&\quad = \int _{B_R(z_0)\times B_R(z_0)} \tau _2^{\kappa }(z-w) dA(z) dA(w) - \left( \int _{B_R(z_0)} \rho _1^{\kappa } \,dA(z) \right) ^2 + \int _{B_R(z_0)} \rho _1 \, dA(z) \\&\quad = \int _{B_R(z_0)\times B_R(z_0)} \left( \tau _2^{\kappa }(z-w) - \frac{1}{\pi ^2}\right) dA(z) dA(w) + \rho _1 \left| B_R(z_0)\right| . \end{aligned}$$

In terms of the function \(\varphi (z) := \frac{1}{\pi ^2 \rho _1} - \frac{1}{\rho _1} \tau ^{\kappa }_2(z)\), the expression for the variance reads

$$\begin{aligned} \frac{1}{\rho _1} \mathrm {Var}\Big [{\mathcal {Z}}^{\kappa }_F (B_R(z_0))\Big ]&= \left| B_R(z_0)\right| - \int _{B_R(z_0)\times B_R(z_0)} \varphi (z-w) dA(z) dA(w). \end{aligned}$$

The last expression measures the deviation, integrated over the disk \(B_R(z_0)\), between the indicator function of that disk and its convolution with \(\varphi \). Precise estimates are given in Lemma 8.3 in Sect. 8. The hypotheses of Lemma 8.3 are met due to (5.1) and (5.2), and we readily obtain (5.3) and (5.4). \(\square \)

5.2 Computations for Radial Twisted Kernels

For radial twisted kernels, the following proposition provides an expression for the integrals in (5.2). We use the notation of Lemma 5.1.

Proposition 5.2

Let F be a GWHF with twisted kernel H satisfying the standing assumptions. Assume further that \(H(z)=P\big (\left| z\right| ^2\big )\), where \(P:{\mathbb {R}} \rightarrow {\mathbb {R}}\) is \(C^2\), \(P(0)=1\), and

$$\begin{aligned} \sup _{r \ge 0} \big ( \left| P(r^2)\right| + \left| P'(r^2)\right| + \left| P''(r^2)\right| \big ) r^4 < \infty . \end{aligned}$$
(5.5)

Then

$$\begin{aligned} \int _{{\mathbb {C}}} \bigg |\frac{1}{\pi ^2}-\tau ^{\kappa }_2(z) \bigg | dA(z)&< \infty , \end{aligned}$$
(5.6)
$$\begin{aligned} \int _{{\mathbb {C}}} \left| z\right| \bigg |\frac{1}{\pi ^2}-\tau ^{\kappa }_2(z) \bigg | dA(z)&< \infty , \end{aligned}$$
(5.7)
$$\begin{aligned} \int _{{\mathbb {C}}} \bigg (\frac{1}{\pi ^2} - \tau ^{\kappa }_2(z) \bigg ) dA(z)&= - \frac{1}{\pi } \bigg ( P'(0) + \frac{1 }{ 4 P'(0)} \bigg ), \end{aligned}$$
(5.8)
$$\begin{aligned} \int _{{\mathbb {C}}} \left| z\right| \bigg (\frac{1}{\pi ^2}-\tau ^{\kappa }_2(z)\bigg ) dA(z)&= \frac{1}{\pi } \int _0^\infty \frac{2 r^2 P'(r^2)^2}{1-P(r^2)^2} dr. \end{aligned}$$
(5.9)

Proof

Step 1 (Calculation of the semi-charged 2-point intensity)

We recall that the covariance structure of \((F, F^{(1,0)}, F^{(0,1)})\) is given in Sect. 3.1. Here, we further need the covariance structure of \(\big (F(z), F^{(1,0)}(z), F^{(0,1)}(z), F(w), F^{(1,0)}(w), F^{(0,1)}(w)\big )\). We first compute

$$\begin{aligned} H^{(1,0)}(z)&= 2 x P'(\left| z\right| ^2), \\ H^{(0,1)}(z)&= 2 y P'(\left| z\right| ^2), \\ H^{(2,0)}(z)&= 2 P'(\left| z\right| ^2) + 4 x^2 P''(\left| z\right| ^2), \\ H^{(0,2)}(z)&= 2 P'(\left| z\right| ^2) + 4 y^2 P''(\left| z\right| ^2), \\ H^{(1,1)}(z)&= 4 x y P''(\left| z\right| ^2). \end{aligned}$$

Thus, the covariance matrix of \(\big (F(z), F^{(1,0)}(z), F^{(0,1)}(z), F(w), F^{(1,0)}(w), F^{(0,1)}(w)\big )\) can be calculated as

$$\begin{aligned} \begin{pmatrix} \Gamma (z) &{} \Gamma (z,w) \\ \Gamma (z,w)^* &{} \Gamma (w) \end{pmatrix} \end{aligned}$$

where \(\Gamma (z)\) is given by (3.2) and simplifies to

$$\begin{aligned} \Gamma (z)&= \begin{pmatrix} 1 &{} iy &{} - ix \\ - iy &{} y^2 -2 P'(0) &{} -i- xy \\ ix &{} i- xy &{} x^2-2 P'(0) \end{pmatrix} \end{aligned}$$

and \(\Gamma (z,w)\) is

$$\begin{aligned} \Gamma (z,w)&= e^{i (yu-xv)} \left( P\big (\left| z-w\right| ^2\big ) \begin{pmatrix} 1 &{} iy &{} - ix \\ - iv &{} yv &{} -i- xv \\ iu &{} i- uy &{} xu \end{pmatrix} \right. \\*&\quad + 2 P'(\left| z-w\right| ^2) \begin{pmatrix} 0 &{} -(x-u) &{} -(y-v) \\ (x-u) &{} -1+ i (x-u) (y +v) &{} i (y-v)v - i(x-u) x \\ (y-v) &{} i (y-v)y- i(x-u) u &{} -1 - i (y-v)(x +u) \end{pmatrix} \\*&\quad \left. + 4 P''(\left| z-w\right| ^2) \begin{pmatrix} 0 &{} 0 &{} 0 \\ 0 &{} (x-u)^2 &{} -(x-u) (y-v) \\ 0 &{} -(x-u) (y-v) &{} (y-v)^2 \end{pmatrix} \right) . \end{aligned}$$

We are interested in quantities of the form

$$\begin{aligned} {\mathbb {E}}\big [ h\big (F^{(1,0)}(z), F^{(0,1)}(z), F^{(1,0)}(w), F^{(0,1)}(w) \big ) \, \big \vert \, (F(z), F(w)) = (0,0) \big ]. \end{aligned}$$

Following Remark 2.1, this conditional expectation is \({\mathbb {E}}\big [ h(Z) \big ]\) where \(Z\in {\mathbb {C}}^4\) is a circularly symmetric Gaussian random variable with covariance matrix:

$$\begin{aligned} \Omega (z,w) = A-B C^{-1} B^*\,. \end{aligned}$$
(5.10)

Here,

$$\begin{aligned} A = \begin{pmatrix} \Gamma _{2,3;2,3}(z) &{} \Gamma _{2,3;2,3}(z,w) \\ \Gamma _{2,3;2,3}(z,w)^* &{} \Gamma _{2,3;2,3}(w) \end{pmatrix}, \end{aligned}$$
(5.11)
$$\begin{aligned} B&= \begin{pmatrix} \Gamma _{2,3;1}(z) &{} \Gamma _{2,3;1}(z,w) \\ \Gamma _{1;2,3}(z,w)^* &{} \Gamma _{2,3;1}(w) \end{pmatrix} \\&= \begin{pmatrix} -iy &{} e^{i (yu-xv)} \big (- iv P + 2(x-u)P'\big ) \\ ix &{} e^{i (yu-xv)} \big ( iu P + 2(y-v)P'\big ) \\ e^{- i (yu-xv)} \big (- iy P - 2(x-u)P'\big ) &{}-iv \\ e^{- i (yu-xv)} \big (ix P - 2(y-v)P'\big ) &{} iu \end{pmatrix} \end{aligned}$$

where \(\Gamma _{i,j; k,l}\) is the submatrix of \(\Gamma \) containing the rows i and j and columns k and l, and

$$\begin{aligned} C = \begin{pmatrix} 1 &{} e^{i (yu-xv)} P \\ e^{- i (yu-xv)} P &{} 1 \end{pmatrix}. \end{aligned}$$
(5.12)

Thus,

$$\begin{aligned} C^{-1} = \frac{1}{1- P ^2} \begin{pmatrix} 1 &{} -e^{i (yu-xv)} P \\ -e^{- i (yu-xv)} P &{} 1 \end{pmatrix} \end{aligned}$$
(5.13)

with the convention that P, \(P'\), and \(P''\) are understood to be evaluated at \(\left| z-w\right| ^2\).

By Lemma 3.3,

$$\begin{aligned} \begin{aligned} \pi ^2 \tau _2^{\kappa }(z-w)&= \frac{1}{1- P^2 } E, \end{aligned} \end{aligned}$$
(5.14)

where

$$\begin{aligned} E := {\mathbb {E}}\big [ {{\,\mathrm{Jac}\,}}F(z) \, {{\,\mathrm{Jac}\,}}F(w) \,\big |\, F(z)=F(w)=0 \big ]. \end{aligned}$$
(5.15)

We now invoke a variant of Wick’s formula (Isserlis’ theorem), proved in Lemma 8.2 below, and obtain

$$\begin{aligned} E = -\frac{1}{2} \Re \big [ \Omega _{1,2}\Omega _{3,4} + \Omega _{1,4}\Omega _{3,2} - \Omega _{2,1}\Omega _{3,4} - \Omega _{2,4}\Omega _{3,1} \big ]. \end{aligned}$$

Inserting the explicit entries of \(\Omega (z,w)\) from (5.10), we can calculate E and obtain

$$\begin{aligned} \pi ^2 \tau _2^{\kappa }(z-w) = 1 + I'(\left| z-w\right| ^2), \end{aligned}$$

with

$$\begin{aligned} I(s)&= \frac{s \big (2P'(s)^2+\frac{3}{2} P(s)^2\big )}{1-P(s)^2} + \frac{2 s^2 P(s)P'(s)}{(1-P(s)^2)^2} \,. \end{aligned}$$
(5.16)

Section 8.3 contains detailed calculations, which also give the estimate

$$\begin{aligned} \sup _{r \ge 0} (1+r^4) \big |I'(r^2) \big | < \infty . \end{aligned}$$
(5.17)

Step 2 (Conclusions) We first verify (5.6) and (5.7); this then implies that the integrals in (5.8) and (5.9) are absolutely convergent. To this end, we use (5.17) and estimate

$$\begin{aligned} \int _{{\mathbb {C}}} \bigg |\frac{1}{\pi ^2} - \tau ^{\kappa }_2(z) \bigg | dA(z)&= \frac{1}{\pi ^2} \int _{{\mathbb {C}}} \left| I' (|z|^2)\right| dA(z) \\&= \frac{2}{\pi } \int _0^\infty r \left| I'\big (r^2\big )\right| \, dr \\&\lesssim \int _0^\infty \frac{r}{1+r^4} \, dr < \infty , \end{aligned}$$

and, similarly,

$$\begin{aligned} \int _{{\mathbb {C}}} \left| z\right| \bigg |\frac{1}{\pi ^2} - \tau ^{\kappa }_2(z) \bigg | dA(z)&\lesssim \int _0^\infty \frac{r^2}{1+r^4} \, dr < \infty . \end{aligned}$$

For (5.8) first note that

$$\begin{aligned} \lim _{s\rightarrow 0}\frac{s }{1-P(s)^2} = -\frac{1}{2P'(0)}, \end{aligned}$$

while \(\lim _{s \rightarrow \infty } I(s)=0\) by (5.5). Hence,

$$\begin{aligned} \int _{{\mathbb {C}}} \bigg (\frac{1}{\pi ^2} - \tau ^{\kappa }_2(z) \bigg ) dA(z)&= -\frac{1}{\pi ^2} \int _{{\mathbb {C}}} I'\big (\left| z\right| ^2\big ) dA(z) \\&= -\frac{1}{\pi } \int _0^\infty I'(s) ds \\&= \frac{1}{\pi } \lim _{s \rightarrow 0} I(s) \\&= \frac{1}{\pi } \bigg ( -\frac{ 2P'(0)^2+\frac{3}{2}}{2P'(0)} + \frac{1 }{ 2 P'(0)} \bigg ) \\&= - \frac{1}{\pi } \bigg ( P'(0) + \frac{1 }{ 4 P'(0)} \bigg ). \end{aligned}$$

For (5.9), integration by parts gives

$$\begin{aligned} \int _{{\mathbb {C}}} \left| z\right| \Big (\frac{1}{\pi ^2} - \tau ^{\kappa }_2(z) \Big ) dA(z)&= -\frac{1}{\pi ^2} \int _{{\mathbb {C}}} \left| z\right| I'\big (\left| z\right| ^2\big ) dA(z) \\&= -\frac{2}{\pi } \int _0^\infty r^2 I'\big (r^2\big ) dr \\&= -\frac{1}{\pi } \int _0^\infty r I'\big (r^2\big ) 2r dr \\&=\frac{1}{\pi } \int _0^\infty I\big (r^2\big ) dr. \end{aligned}$$

A direct calculation shows that

$$\begin{aligned} I(r^2) = \frac{2 r^2 P'(r^2)^2}{1-P(r^2)^2} + \frac{d}{dr}\bigg [\frac{ r^3 P(r^2)^2 }{ 2(1-P(r^2)^2 )} \bigg ]. \end{aligned}$$
(5.18)

Hence, by (5.5),

$$\begin{aligned} \int _{{\mathbb {C}}} \left| z\right| \Big (\frac{1}{\pi ^2} - \tau ^{\kappa }_2(z) \Big ) dA(z)&= \frac{1}{\pi } \int _0^\infty \frac{2 r^2 P'(r^2)^2}{1-P(r^2)^2} dr + \frac{1}{\pi } \bigg [ \frac{ r^3 P(r^2)^2 }{ 2(1-P(r^2)^2 )} \bigg ]_{r=0}^{\infty } \\&= \frac{1}{\pi } \int _0^\infty \frac{2 r^2 P'(r^2)^2}{1-P(r^2)^2} dr, \end{aligned}$$

as claimed in (5.9). \(\square \)
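The identity (5.18) and the limit behind (5.8) can be verified symbolically for concrete kernels. A SymPy sketch (ours) for the pure-type choice \(P(s)=(1-s)e^{-s/2}\), which corresponds to \(r=1\) in Sect. 6.4:

```python
import sympy as sp

r, s = sp.symbols('r s', positive=True)
P = (1 - s)*sp.exp(-s/2)     # a concrete kernel profile with P(0) = 1, P'(0) = -3/2
Pp = sp.diff(P, s)

# I(s) as in (5.16)
I = s*(2*Pp**2 + sp.Rational(3, 2)*P**2)/(1 - P**2) + 2*s**2*P*Pp/(1 - P**2)**2

# (5.18)
lhs = I.subs(s, r**2)
rhs = ((2*s*Pp**2/(1 - P**2)).subs(s, r**2)
       + sp.diff(r**3*(P**2).subs(s, r**2)/(2*(1 - (P**2).subs(s, r**2))), r))
print(sp.simplify(lhs - rhs))    # -> 0

# the limit used for (5.8): I(0+) = -(P'(0) + 1/(4 P'(0)))
print(sp.limit(I, s, 0))         # -> 5/3, consistent with Corollary 1.10 for r = 1
```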

5.3 Proof of Theorem 1.14

We now derive the main result on hyperuniformity of charge. We invoke Lemma 5.1. Condition (5.1) is satisfied by (5.6) and (5.7) in Proposition 5.2, while (5.2) is seen to hold by comparing the explicit expressions given in Corollary 1.7 and Proposition 5.2. The asymptotic value of the variance in (5.4) is computed in (5.9).\(\square \)

6 Examples and Applications

6.1 The Short Time Fourier Transform of White Noise

Let \(g:{\mathbb {R}} \rightarrow {\mathbb {C}}\) be a Schwartz function. As a first step towards the definition of the short-time Fourier transform of white noise, we consider its distributional formulation. For a Schwartz function \(f:{\mathbb {R}} \rightarrow {\mathbb {C}}\), we write (1.5) as

$$\begin{aligned} V_g f(x,y) = \langle f, \varvec{\Phi }(x,y) g \rangle , \end{aligned}$$
(6.1)

where \(\varvec{\Phi }(x,y) g\) denotes the time-frequency shift

$$\begin{aligned} \varvec{\Phi }(x,y) g(t) = e^{2 \pi i y t} g(t-x), \qquad t \in {\mathbb {R}}. \end{aligned}$$

We define the STFT of a distribution \(f \in {\mathcal {S}}'({\mathbb {R}})\) by (6.1), using the distributional interpretation of the \(L^2\) inner product \(\langle \cdot , \cdot \rangle \), and note that this defines a smooth function on \({\mathbb {R}}^2\). The adjoint short-time Fourier transform \(V^*_g:{\mathcal {S}}({\mathbb {R}}^2) \rightarrow {\mathcal {S}}({\mathbb {R}})\),

$$\begin{aligned} V^*_g \varphi (t) = \int _{{\mathbb {R}}^2} \varphi (x,y) \,\varvec{\Phi }(x,y) g(t) \, dx dy, \end{aligned}$$

provides the following concrete description of the distributional STFT:

$$\begin{aligned} \langle V_g f, \varphi \rangle = \langle f, V^*_g \varphi \rangle , \qquad f \in {\mathcal {S}}'({\mathbb {R}}), \quad \varphi \in {\mathcal {S}}({\mathbb {R}}^2). \end{aligned}$$
(6.2)

See [21, Chap. 11] for more background on the STFT of distributions.

Let \({\mathcal {N}}\) be complex white noise on \({\mathbb {R}}\), that is, \({\mathcal {N}}= \frac{1}{\sqrt{2}} \frac{d}{dt} \big (W_1 + i W_2\big )\), where \(W_1\) and \(W_2\) are independent copies of the Wiener process (Brownian motion with almost surely continuous paths), and the derivative is taken in the distributional sense. The short-time Fourier transform of complex white noise is the random function:

$$\begin{aligned} V_g \, {\mathcal {N}}(z) = \langle {\mathcal {N}}, \varvec{\Phi }(x,y) g \rangle , \qquad z=x+iy \in {\mathbb {C}}; \end{aligned}$$

see [5, 6] for other definitions and a comprehensive discussion on their equivalence. Then \(V_g \, {\mathcal {N}}\) is Gaussian because, as a consequence of (6.2), for any Schwartz function \(\varphi \in {\mathcal {S}}({\mathbb {R}}^2)\), \(\langle V_g \, {\mathcal {N}}, \varphi \rangle = \langle {\mathcal {N}}, V^*_g \varphi \rangle \) is normally distributed. In addition, \(V_g \, {\mathcal {N}}\) is circularly symmetric, as, for any \(\theta \in {\mathbb {R}}\), \(e^{i \theta } \cdot V_g \, {\mathcal {N}}= V_g \, \big ( e^{i \theta } \cdot {\mathcal {N}}\big ) \sim V_g \, {\mathcal {N}}\). One readily verifies that

$$\begin{aligned} {\mathbb {E}} \big [ V_g \,{\mathcal {N}}(z) \cdot \overline{V_g \,{\mathcal {N}}(w)} \big ] = \langle \varvec{\Phi }(u,v) g , \varvec{\Phi }(x,y) g \rangle ,\qquad z=x+iy, \; w=u+iv. \end{aligned}$$
(6.3)

The following lemma relates the STFT of white noise and GWHFs.

Lemma 6.1

Let \(g:{\mathbb {R}} \rightarrow {\mathbb {C}}\) be a Schwartz function normalized by \(\left| \left| g\right| \right| _2=1\), and consider the short-time Fourier transform of complex white noise, twisted and scaled as follows:

$$\begin{aligned} F(z) := e^{-i xy} \cdot V_g \, {\mathcal {N}}\big ({\bar{z}}/\sqrt{\pi }\big ), \qquad z=x+iy. \end{aligned}$$

Then F is a GWHF with twisted kernel

$$\begin{aligned} H(z) = e^{-i xy} \cdot V_g g \big ({\bar{z}}/\sqrt{\pi } \big ), \qquad z=x+iy, \end{aligned}$$

and the standing assumptions are satisfied.

In addition, the zero set of \(V_g \, {\mathcal {N}}\) has a first intensity \(\rho _{1,g}\) related to that of the zero set of F by

$$\begin{aligned} \rho _{1,g} = \pi \rho _1. \end{aligned}$$
(6.4)

Proof

F is Gaussian and circularly symmetric because \(V_g \,{\mathcal {N}}\) is. Using (6.3), we inspect the covariance of F:

$$\begin{aligned} {\mathbb {E}} \big [ F(z) \overline{F(w)} \big ]&= e^{i (uv-xy)} \Big \langle \varvec{\Phi }\big (\tfrac{u}{\sqrt{\pi }},-\tfrac{v}{\sqrt{\pi }}\big ) g , \varvec{\Phi }\big (\tfrac{x}{\sqrt{\pi }},-\tfrac{y}{\sqrt{\pi }}\big ) g \Big \rangle \\&= e^{i (uv-xy)} \int _{{\mathbb {R}}} g\big (t-\tfrac{u}{\sqrt{\pi }}\big ) \overline{g\big (t-\tfrac{x}{\sqrt{\pi }}\big )}e^{-2\sqrt{\pi } i (v-y) t} dt \\&= e^{i (uv-xy)} \int _{{\mathbb {R}}} g(t) \overline{g\big (t+\tfrac{u-x}{\sqrt{\pi }}\big )} e^{- 2 \sqrt{\pi } i (v-y)(t+u/\sqrt{\pi })} dt \\&= e^{i (yu-xv)} e^{-i (x-u)(y-v)} \int _{{\mathbb {R}}} g(t) \overline{g\big (t-\tfrac{(x-u)}{\sqrt{\pi }}\big )} e^{- 2 \sqrt{\pi } i (v-y) t} dt \\&= e^{i\Im (z {{\bar{w}}})} H(z-w). \end{aligned}$$

We now verify the standing assumptions. Since g is Schwartz, H is \(C^\infty \), and (1.15) and (1.16) hold. The normalization condition (1.13) is indeed satisfied since \(H(0)= V_gg(0)=\left| \left| g\right| \right| ^2_2=1\). To check the non-degeneracy condition (1.14) note first that, by Cauchy-Schwarz,

$$\begin{aligned} \left| H(z)\right| = \left| \left<g,\varvec{\Phi }(x/\sqrt{\pi },-y/\sqrt{\pi }) g\right>\right| \le \left| \left| g\right| \right| _2^2=1=H(0). \end{aligned}$$

If equality holds for some \(z=x+iy\), then there exists \(\lambda \in {\mathbb {C}}\) such that

$$\begin{aligned} g = \lambda \varvec{\Phi }(x/\sqrt{\pi },-y/\sqrt{\pi }) g. \end{aligned}$$

This implies

$$\begin{aligned} |g(t)| = | \lambda g(t-x/\sqrt{\pi })|, \qquad t \in {\mathbb {R}}. \end{aligned}$$

Since \(g \in L^2({\mathbb {R}}) \setminus \{0\}\), we must have \(x=0\). Hence,

$$\begin{aligned} g(t) = \lambda e^{-2 \sqrt{\pi } i y t} g(t), \qquad t \in {\mathbb {R}}, \end{aligned}$$

which implies \(y=0\), since \(g \not \equiv 0\). Hence \(z=0\).

Finally, since \(F(z)=0\) if and only if \(V_g \,{\mathcal {N}}({\overline{z}}/\sqrt{\pi })=0\), (6.4) follows. \(\square \)

6.2 Calculation of the First Intensity

We now apply our results to the short-time Fourier transform of complex white noise.

Proof of Theorem 1.9

We consider the functions F and H as in Lemma 6.1, and the first intensities of their zero sets, \(\rho _1\) and \(\rho _{1,g}\), related by (6.4). We calculate

$$\begin{aligned} H(0)&=V_g g(0) = \left| \left| g\right| \right| ^2_2=1, \\ H^{(1,0)}(0)&=\frac{1}{\sqrt{\pi }} (V_g g)^{(1,0)}(0) =-\frac{1}{\sqrt{\pi }} \int _{{\mathbb {R}}} g(t) \overline{g'(t)} dt = -\frac{1}{\sqrt{\pi }} i c_4, \\ H^{(0,1)}(0)&=\frac{-1}{\sqrt{\pi }}(V_g g)^{(0,1)}(0) = 2\sqrt{\pi }i \int _{{\mathbb {R}}} t \left| g(t)\right| ^2 dt = 2\sqrt{\pi }i c_1, \\ H^{(2,0)}(0)&= \frac{1}{\pi } (V_gg)^{(2,0)}(0) =\frac{1}{\pi } \int _{{\mathbb {R}}} g(t) \overline{g''(t)} dt =-\frac{1}{\pi }\int _{{\mathbb {R}}} \left| g'(t)\right| ^2 dt = -\frac{1}{\pi } c_3, \\ H^{(0,2)}(0)&= \frac{1}{\pi } (V_gg)^{(0,2)}(0) = - 4 \pi \int _{{\mathbb {R}}} t^2 \left| g(t)\right| ^2 dt = - 4 \pi c_2, \\ H^{(1,1)}(0)&= -i V_gg(0) - \tfrac{1}{\pi } (V_gg)^{(1,1)}(0) =- i - 2 i\int _{{\mathbb {R}}} tg(t) \overline{g'(t)} dt\\&= 2 \Im \bigg (\int _{{\mathbb {R}}} tg(t) \overline{g'(t)} dt\bigg ) = 2 c_5, \end{aligned}$$

where we used that, by Lemma 3.1, \(H^{(1,1)}(0) \in {\mathbb {R}}\). Note also that \(c_4 \in {\mathbb {R}}\) by Lemma 3.1, while, clearly, \(c_1, c_2, c_3, c_5 \in {\mathbb {R}}\). Thus, (1.19) is given by

$$\begin{aligned} \Delta _H&= \det \begin{bmatrix} \frac{1}{\pi } c_3 -\frac{1}{\pi } c_4^2 &{} -2 c_5 -i - 2 c_1 c_4 \\ -2 c_5 + i - 2 c_1 c_4 &{} 4 \pi c_2 -4 \pi c_1^2 \end{bmatrix} \\&= ( c_3 - c_4^2) (4 c_2 -4 c_1^2) -(4 c_5^2 + 4 c_1^2 c_4^2 + 8 c_1 c_4 c_5 +1) \\&= 4 c_2 c_3 -4 c_2 c_4^2 -4 c_1^2 c_3 - 4 c_5^2 - 8 c_1 c_4 c_5 -1 \\&= 4 ( c_2 - c_1^2)c_3 -4 c_2 c_4^2 - 4 c_5^2 - 8 c_1 c_4 c_5 -1 \end{aligned}$$

and Theorem 1.6, together with (6.4), yield

$$\begin{aligned} \rho _{1,g} = \pi \rho _1 = \frac{ 4 ( c_2 - c_1^2)c_3 -4 c_2 c_4^2 - 4 c_5^2 - 8 c_1 c_4 c_5 +1}{4 \sqrt{ ( c_2 - c_1^2)c_3 -c_2 c_4^2 -c_5^2 -2c_1 c_4 c_5 }}, \end{aligned}$$
(6.5)

as claimed.

Finally, if g is real valued, integration by parts gives

$$\begin{aligned} \int _{{\mathbb {R}}} g(t) g'(t) dt = - \int _{{\mathbb {R}}} g'(t) g(t) dt, \end{aligned}$$

showing that \(c_4=0\), while clearly \(c_5=0\). \(\square \)
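For the Gaussian window (6.8) with \(\sigma =1\), the constants in (1.28) and the intensity (6.5) can be computed in closed form; a SymPy sketch (ours):

```python
import sympy as sp

t = sp.symbols('t', real=True)
g = 2**sp.Rational(1, 4)*sp.exp(-sp.pi*t**2)   # the Gaussian (6.8) with sigma = 1

c1 = sp.integrate(t*g**2, (t, -sp.oo, sp.oo))            # -> 0
c2 = sp.integrate(t**2*g**2, (t, -sp.oo, sp.oo))         # -> 1/(4*pi)
c3 = sp.integrate(sp.diff(g, t)**2, (t, -sp.oo, sp.oo))  # -> pi
c4 = c5 = 0                                              # g is real-valued

num = 4*(c2 - c1**2)*c3 - 4*c2*c4**2 - 4*c5**2 - 8*c1*c4*c5 + 1
den = 4*sp.sqrt((c2 - c1**2)*c3 - c2*c4**2 - c5**2 - 2*c1*c4*c5)
print(sp.simplify(num/den))                              # -> 1
```

This is consistent with the minimal value \(\rho _{1,g}=1\) obtained for Gaussian windows in Sect. 6.3.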

6.3 The Uncertainty Principle for Zeros

In order to show that generalized Gaussian windows minimize the expected number of zeros of the STFT of complex white noise, we first show that the corresponding intensities are invariant under certain transformations that preserve the class of Gaussians.

Lemma 6.2

Let \(g:{\mathbb {R}} \rightarrow {\mathbb {C}}\) be a Schwartz function, and \(x_0, \xi _0, \xi _1 \in {\mathbb {R}}\). Let

$$\begin{aligned} g_1(t) := e^{2 \pi i \left( \xi _0 t + \xi _1 t^2\right) }\cdot g(t-x_0), \qquad t \in {\mathbb {R}}. \end{aligned}$$

Then the first intensities of the zero sets of \(V_{g} \, {\mathcal {N}}\) and \(V_{g_1} \, {\mathcal {N}}\) coincide:

$$\begin{aligned} \rho _{1,g}=\rho _{1,g_1}. \end{aligned}$$

Proof

We proceed in two steps, and exploit different properties of the STFT. We first assume that \(\xi _1=0\) and use the so-called covariance of the STFT under time-frequency shifts:

$$\begin{aligned} V_{g_1} f (x,y) = V_{\varvec{\Phi }(x_0, \xi _0) g} \,f (x,y) = e^{2 \pi i \xi _0 x} \cdot V_g f(x+x_0, y+\xi _0), \end{aligned}$$

which can be verified by direct calculation or deduced from [21, Lemma 3.1.3]. Applying this formula to each realization of complex white noise \(f={\mathcal {N}}\), we deduce that \({\mathcal {Z}}_{F_{g_1}}\) and \({\mathcal {Z}}_{F_{g}}\) are related by a deterministic translation: \({\mathcal {Z}}_{F_{g_1}} = {\mathcal {Z}}_{F_{g}} (\cdot - x _0, \cdot - \xi _0)\). Hence, \(\rho _{1,g}=\rho _{1,g_1}\).

We now assume that \(\xi _0=x_0=0\), so that \(g_1\) and g are related by the unitary operator \(U:L^2({\mathbb {R}}) \rightarrow L^2({\mathbb {R}})\),

$$\begin{aligned} g_1 (t) = U g(t) = e^{2 \pi i \xi _1 t^2} g(t). \end{aligned}$$

The operator U is also an isomorphism on the spaces of Schwartz functions and tempered distributions. For a distribution f, we use the formula

$$\begin{aligned} V_{g_1} \big (U f\big ) (x,y) = e^{-2 \pi i \xi _1 x^2}\cdot V_g f (S(x,y)), \qquad S(x,y)=(x, y - 2 x \xi _1), \end{aligned}$$
(6.6)

which can be readily verified or deduced as special case of the symplectic covariance of the STFT [17, Chap. 4] [21, Sect. 9.4]. Let \({\mathcal {N}}\) be complex white noise; then so is \(U {\mathcal {N}}\) (both generalized Gaussian processes have the same stochastics). In addition, by Lemma 6.1 and Theorem 1.6, the zero sets of \(V_g \, {\mathcal {N}}\) and \(V_{g_1} \, \big (U {\mathcal {N}}\big )\) have first intensities and these are constant. Hence, for any Borel set \(E \subseteq {\mathbb {R}}^2\), by (6.6),

$$\begin{aligned} \rho _{1,g_{1}} |E|&= {\mathbb {E}} \big [ \# \{ (x,y) \in E: V_{g_1} {\mathcal {N}}(x,y) = 0 \} \big ] \\&= {\mathbb {E}} \big [ \# \{ (x,y) \in E: V_{g_1} \big (U {\mathcal {N}}\big ) (x,y) = 0 \} \big ] \\ {}&= {\mathbb {E}} \big [ \# \{ (x,y) \in E: V_{g} \, {\mathcal {N}} (S(x,y)) = 0 \} \big ] \\ {}&= {\mathbb {E}} \big [ \# \{ (x',y') \in S(E): V_{g} \, {\mathcal {N}} (x',y') = 0 \} \big ] \\ {}&= \rho _{1,g} |S(E)| = \rho _{1,g} |E|, \end{aligned}$$

as S is a linear map with determinant equal to 1. Hence, \(\rho _{1,g_1}=\rho _{1,g}\), as claimed.

Finally, the general case without assumptions on \(\xi _0\), \(\xi _1\) and \(x_0\) follows from the discussed special cases by successively considering the effect of the time-frequency shift \(\varvec{\Phi }(x_0,\xi _0)\) and of the quadratic modulation U. \(\square \)
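The covariance formula (6.6) underlying the second step is also easy to check numerically for concrete windows. A Python sketch (ours; the grid, the windows, and the evaluation point are arbitrary choices):

```python
import numpy as np

ts = np.linspace(-12.0, 12.0, 20001)
dt = ts[1] - ts[0]

def stft(f, g, x, y):
    # V_g f(x,y) = <f, Phi(x,y) g>, evaluated by a Riemann sum on the grid ts
    return np.sum(f(ts)*np.conj(g(ts - x))*np.exp(-2j*np.pi*y*ts))*dt

g = lambda t: 2**0.25*np.exp(-np.pi*t**2)
f = lambda t: (1 + 0.5j*t)*np.exp(-np.pi*(t - 0.3)**2)    # an arbitrary test function
xi1 = 0.2
U = lambda h: (lambda t: np.exp(2j*np.pi*xi1*t**2)*h(t))  # quadratic modulation

x, y = 0.7, -0.4
lhs = stft(U(f), U(g), x, y)
rhs = np.exp(-2j*np.pi*xi1*x**2)*stft(f, g, x, y - 2*x*xi1)
print(abs(lhs - rhs))    # close to machine precision
```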

We can now prove the announced uncertainty principle for zero sets.

Proof of Theorem 1.11

We use the notation of the proof of Theorem 1.9. Recall the relation (6.4). As shown in Theorem 1.6 and its proof, \(\rho _1\) as given by (1.18) satisfies \(\rho _1 \ge 1/\pi \) and achieves the value \(1/\pi \) exactly when \(\Delta _H=0\). We now describe the functions attaining that minimum.

Step 1 (Special minimizers) We consider first windows g such that \(c_1=c_4=c_5=0\). For such windows the minimality condition \(\Delta _H=0\) reads \(4 \cdot c_2 \cdot c_3 = 1\) and means that g saturates Heisenberg’s uncertainty relation:

$$\begin{aligned} \int _{\mathbb {R}}t^2 \left| g(t)\right| ^2 dt \cdot \int _{\mathbb {R}}|g'(t)|^2 dt = \frac{1}{4} = \frac{1}{4} \left| \left| g\right| \right| _2^4. \end{aligned}$$
(6.7)

By Heisenberg’s uncertainty principle, the solutions to (6.7) are exactly the Gaussians:

$$\begin{aligned} \frac{\lambda }{\sqrt{\sigma }} e^{-\tfrac{\pi }{\sigma ^2} t^2}, \qquad t \in {\mathbb {R}}, \end{aligned}$$
(6.8)

with \(\sigma >0\) and \(|\lambda |=2^{1/4}\); see, e.g., [17, Corollary 1.35]. Thus, we conclude that the Gaussians (6.8) achieve the minimal intensity \(\rho _{1,g}=1\), and that these are the only minimizers among (unit norm) windows with \(c_1=c_4=c_5=0\).

Step 2 (General minimizers) Suppose that \(\rho _{1,g}\) is minimal and consider

$$\begin{aligned} g_1(t) := e^{-i \big (\xi _0 t + \tfrac{\xi _1}{2} t^2\big )}\cdot g(t-x_0), \qquad t \in {\mathbb {R}}, \end{aligned}$$
(6.9)

with \(x_0, \xi _0,\xi _1 \in {\mathbb {R}}\). Let \(d_1, \ldots , d_5\) be the uncertainty constants defined similarly to \(c_1, \ldots , c_5\) but with respect to \(g_1\). We now show that it is possible to choose the parameters \(x_0, \xi _0,\xi _1\) so that \(d_1=d_4=d_5=0\). First, choosing \(x_0 := -c_1\), we get

$$\begin{aligned} d_1 = \int _{{\mathbb {R}}} t |g(t-x_0)|^2 dt = c_1 + x_0 = 0. \end{aligned}$$

Similarly,

$$\begin{aligned} i d_4&= \int _{{\mathbb {R}}} g(t-x_0) \cdot \left[ (i\xi _0 + i \xi _1 t) \overline{g(t-x_0)} + \overline{g'(t-x_0)}\right] dt\\&= i \xi _0 + i \xi _1 d_1 + i c_4 = i \xi _0 + i c_4, \end{aligned}$$

so it suffices to take \(\xi _0 := -c_4\), which is indeed a real number as proved in Theorem 1.9. Finally,

$$\begin{aligned} d_5&= \Im \left( \int _{{\mathbb {R}}} t g(t-x_0) \cdot \left[ (i\xi _0 + i \xi _1 t) \overline{g(t-x_0)} + \overline{g'(t-x_0)}\right] dt \right) \\&= \Im \big [i \xi _0 d_1 + i \xi _1 d_2\big ] + x_0 c_4 + c_5 = \xi _1 d_2 + x_0 c_4 + c_5. \end{aligned}$$

As \(g \not \equiv 0\), \(d_2>0\). In addition, \(c_4, c_5 \in {\mathbb {R}}\). Hence, \(\xi _1\) can be chosen so that \(d_5=0\).

By Lemma 6.2, \(\rho _{1, g_1}=\rho _{1, g}\) is also minimal. Thus, by Step 1, \(g_1\) must be a Gaussian (6.8), and, therefore, g is a generalized Gaussian (1.25).

Conversely, if g is a generalized Gaussian (1.25), then we can choose \(\xi _0, \xi _1, x_0 \in {\mathbb {R}}\) so that \(g_1\) takes the form (6.8). Hence, by Step 1 and Lemma 6.2, \(\rho _{1,g}=\rho _{1,g_1}=1\). \(\square \)

6.4 Hermite Windows

We now consider Hermite functions

$$\begin{aligned} h_{r}(t) = \frac{2^{1/4}}{\sqrt{r!}}\left( \frac{-1}{2\sqrt{\pi }}\right) ^r e^{\pi t^2} \frac{d^r}{dt^r}\left( e^{-2\pi t^2}\right) , \qquad r \ge 0, \end{aligned}$$
(6.10)

as windows for the STFT. According to Lemma 6.1, \(F(z) := e^{-i xy} V_{h_r} \, {\mathcal {N}}({\overline{z}}/\sqrt{\pi })\), \(z=x+iy\), is a GWHF with twisted kernel \(H(z) = e^{-i xy} V_{h_r} {h_r} ({\overline{z}}/\sqrt{\pi })\). The kernel can be calculated explicitly in terms of the Laguerre polynomials

$$\begin{aligned} L_n(t)=\sum _{j=0}^n (-1)^j \left( {\begin{array}{c}n\\ j\end{array}}\right) \frac{t^j}{j!}, \end{aligned}$$
(6.11)

by the following formula

$$\begin{aligned} H(z) = L_{r}(|z|^2) e^{-\tfrac{1}{2}|z|^2}, \end{aligned}$$
(6.12)

known as the Laguerre connection [17, Theorem (1.104)]. We thus obtain a simple expression for the first intensity of the zeros of the STFT of complex white noise with Hermite windows.

Proof of Corollary 1.10

We write \(H(z)=P(|z|^2)\) with \(P(t)=L_r(t) e^{-t/2}\). By Lemma 6.1, H satisfies the standing assumptions. We can therefore apply Corollary 1.7. Inspecting (6.11) we obtain

$$\begin{aligned} P'(0)&= L'_{r}(0) -\frac{1}{2} L_{r}(0) =-r-\frac{1}{2}. \end{aligned}$$

Using (6.4), we conclude

$$\begin{aligned} \rho _{1,h_r} = \pi \rho _{1} = - \left( P'(0) + \frac{1}{4 P'(0)} \right) = r + \frac{1}{2} + \frac{1}{4r+2}. \end{aligned}$$

\(\square \)
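The computation can be reproduced symbolically for any fixed r; a SymPy sketch (ours):

```python
import sympy as sp

t = sp.symbols('t')
r = 3                                    # any nonnegative integer
P = sp.laguerre(r, t)*sp.exp(-t/2)       # profile of the twisted kernel (6.12)
Pp0 = sp.diff(P, t).subs(t, 0)
print(Pp0)                               # -> -r - 1/2
rho = -(Pp0 + 1/(4*Pp0))                 # = pi * rho_1, cf. (1.21) and (6.4)
print(rho - (sp.Rational(r) + sp.Rational(1, 2) + sp.Rational(1, 4*r + 2)))  # -> 0
```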

6.5 Derivatives of Gaussian Entire Functions

Let \(G_0\) be a Gaussian entire function, that is, a circularly symmetric random function with correlation kernel,

$$\begin{aligned} {\mathbb {E}} \left[ G_0(z) \cdot \overline{G_0(w)} \right] =e^{z {\bar{w}}}, \end{aligned}$$
(6.13)

and consider the iterated covariant derivatives

$$\begin{aligned} G(z)=\big ({\bar{\partial }}^*\big )^{q-1} G_0 = \big ({\bar{z}} - \partial \big )^{q-1} G_0, \end{aligned}$$
(6.14)

where \(q \in {\mathbb {N}}\). G is called a Gaussian poly-entire function of pure type. The following lemma provides an identification with a GWHF.

Lemma 6.3

Let G be a Gaussian poly-entire function of pure-type, as in (6.14). Then

$$\begin{aligned} F(z) = \frac{e^{-\tfrac{1}{2} |z|^2}}{\sqrt{(q-1)!}} \cdot G(z), \qquad z \in {\mathbb {C}}, \end{aligned}$$

is a GWHF with twisted kernel

$$\begin{aligned} H(z)= L_{q-1}(|z|^2) \cdot e^{-\tfrac{1}{2}|z|^2} \end{aligned}$$
(6.15)

satisfying the standing assumptions. Here, \(L_n\) denotes the Laguerre polynomial (6.11).

Proof

We consider the complex Hermite polynomials \(H_{k,j}(z, {\bar{z}})\) defined by:

$$\begin{aligned} H_{k,q-1}(z,{\bar{z}}) := ({\bar{z}} - \partial _z)^{q-1} \big [ z^k \big ], \qquad k \ge 0. \end{aligned}$$

Conjugating the last equation we obtain:

$$\begin{aligned} \overline{H_{k,q-1}(z,{\bar{z}})} = (z - \partial _{{{\bar{z}}}})^{q-1} \big [ {\bar{z}}^k \big ]. \end{aligned}$$

We combine (6.13) and (6.14), expand \(e^{z {\bar{w}}}\) into series, and compute

$$\begin{aligned} {\mathbb {E}} \left[ G(z) \cdot \overline{G(w)} \right]&= \big ({\bar{z}} - \partial _z \big )^{q-1} \big (w - \partial _{{\bar{w}}} \big )^{q-1}\big [e^{z{\bar{w}}}\big ] \\&= \sum _{k \ge 0} \frac{1}{k!} H_{k,q-1}(z,{\bar{z}}) \overline{H_{k,q-1}(w,{\bar{w}})} =(q-1)! \, L_{q-1}(|z-w|^2) e^{z{\bar{w}}}, \end{aligned}$$

where the last equality is proved in [18, Eq. 3.19]; see also [18, Proposition 3.7] and [25, Sect. 2].

Hence,

$$\begin{aligned} {\mathbb {E}} \left[ F(z) \cdot \overline{F(w)} \right]&=\exp \left[ -\tfrac{1}{2}|z|^2-\tfrac{1}{2}|w|^2 + z {\bar{w}} \right] L_{q-1}\big (|z-w|^2\big ) \\&=e^{i \Im (z {\bar{w}})} \cdot H(z-w), \end{aligned}$$

as desired. Finally, note that F is also the GWHF associated in Sect. 6.4 with the STFT with Hermite window \(h_{q-1}\). Hence, the standing assumptions hold by Lemma 6.1. \(\square \)

6.6 Gaussian Poly-entire Functions

We now look into Gaussian poly-entire functions of full type (cf. Example 1.4). These are defined as

$$\begin{aligned} G = \sum _{k=0}^{q-1} \frac{1}{\sqrt{k!}} \big ({\bar{\partial }}^*\big )^k G_k \end{aligned}$$
(6.16)

where \(G_0, \ldots , G_{q-1}\) are independent Gaussian entire functions, and q is called the order of G. The following lemma identifies G with a GWHF, by means of the generalized Laguerre polynomial

$$\begin{aligned} L^{(1)}_n(t) = \sum _{k=0}^n L_k(t). \end{aligned}$$

Lemma 6.4

Let G be a Gaussian poly-entire function of full type of order q, as in (6.16). Then \(F(z) = q^{-1/2} \cdot e^{-\tfrac{1}{2} |z|^2} \cdot G(z)\) is a GWHF with twisted kernel

$$\begin{aligned} H(z)= q^{-1} L^{(1)}_{q-1}(|z|^2) e^{-\tfrac{1}{2}|z|^2} \end{aligned}$$
(6.17)

satisfying the standing assumptions.

Proof

By Lemma 6.3, \(F = \frac{1}{\sqrt{q}}\sum _{k=0}^{q-1} F_k\), where \(F_0, \ldots , F_{q-1}\) are independent GWHF with respective twisted kernels \(H_k(z) = L_k(|z|^2) e^{-\tfrac{1}{2}|z|^2}\). Due to independence,

$$\begin{aligned} {\mathbb {E}} \big [ F(z) \cdot \overline{F(w)} \big ] = \frac{1}{q} \sum _{k=0}^{q-1} e^{i\Im (z {\bar{w}})} H_k(z-w) = e^{i\Im (z {\bar{w}})} H(z-w). \end{aligned}$$

By Lemma 6.3, each twisted kernel \(H_k(z) = L_k(|z|^2) e^{-\tfrac{1}{2}|z|^2}\) satisfies (1.12) and (1.13), and therefore so does its average H. In addition, (1.15) and (1.16) are satisfied as \(H \in C^\infty ({\mathbb {R}}^2)\). \(\square \)

As an application, we obtain the following.

Proof of Theorem 1.8

By Lemmas 6.3 and 6.4, we can apply Corollary 1.7 with \(P(t) = L_{q-1}(t) e^{-t/2}\) or \(P(t)=q^{-1} L^{(1)}_{q-1}(t) e^{-t/2}\). In the first case (pure type), the calculation was carried out in the proof of Corollary 1.10 (where \(r=q-1\)). For the second case (full type), we note that \(L^{(1)}_{q-1}(0)=q\), while

$$\begin{aligned} \frac{d}{dt}L^{(1)}_{q-1}(0)=\sum _{k=0}^{q-1} L'_{k}(0)=\sum _{k=0}^{q-1} (-k) = - \frac{q(q-1)}{2}. \end{aligned}$$

We thus compute,

$$\begin{aligned} P'(0)&= \frac{1}{q} \left( \frac{d}{dt}L^{(1)}_{q-1}(0) -\frac{1}{2} L^{(1)}_{q-1}(0) \right) \\&=\frac{1}{q} \left( -\frac{q(q-1)}{2} -\frac{q}{2} \right) =-\frac{q}{2}, \end{aligned}$$

and, therefore,

$$\begin{aligned} \rho _1 = - \frac{1}{\pi } \left( P'(0) + \frac{1}{4 P'(0)} \right) =\frac{1}{2\pi } \left( q + \frac{1}{q} \right) . \end{aligned}$$

\(\square \)
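The two ingredients \(L^{(1)}_{q-1}(0)=q\) and \(\tfrac{d}{dt}L^{(1)}_{q-1}(0)=-q(q-1)/2\), as well as the final formula, can be cross-checked with SymPy (sketch ours; any positive integer q will do):

```python
import sympy as sp

t = sp.symbols('t')
q = 4
L1 = sum(sp.laguerre(k, t) for k in range(q))            # L^{(1)}_{q-1}
print(sp.simplify(L1 - sp.assoc_laguerre(q - 1, 1, t)))  # -> 0, classical identity

P = L1*sp.exp(-t/2)/q                                    # kernel profile (6.17)
Pp0 = sp.diff(P, t).subs(t, 0)
print(Pp0)                                               # -> -q/2
rho1 = -(Pp0 + 1/(4*Pp0))/sp.pi
print(sp.simplify(rho1 - (q + sp.Rational(1, q))/(2*sp.pi)))  # -> 0
```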

6.7 Charges

We start with the following general observation.

Lemma 6.5

Let \(F,G:{\mathbb {C}} \rightarrow {\mathbb {C}}\) be \(C^1\) in the real sense, and \(z_0 \in {\mathbb {C}}\). If \(F(z_0)=0\) and \(G(z_0) \not =0\), then the charges of F and \(F\cdot G\) at \(z_0\) coincide.

Proof

Using (2.2), the product rule, and the fact that \(F(z_0)=0\), we see that the charge of \(F \cdot G\) at \(z_0\) is

$$\begin{aligned} {{\,\mathrm{sgn}\,}}\Big [ {{\,\mathrm{Jac}\,}}(F\cdot G)(z_0) \Big ]&= - {{\,\mathrm{sgn}\,}}\Big [ \Im \Big [ (F\cdot G)^{(1,0)}(z_0) \cdot \overline{ (F\cdot G)^{(0,1)}(z_0)}\Big ] \Big ] \\&=- {{\,\mathrm{sgn}\,}}\Big [ \Im \Big [ G(z_0) \cdot F^{(1,0)}(z_0) \cdot \overline{G(z_0)} \cdot \overline{ F^{(0,1)}(z_0)}\Big ] \Big ] \\&=- {{\,\mathrm{sgn}\,}}\Big [ \Im \Big [ |G(z_0)|^2 \cdot F^{(1,0)}(z_0) \cdot \overline{ F^{(0,1)}(z_0)}\Big ] \Big ] \\&=- {{\,\mathrm{sgn}\,}}\Big [ \Im \Big [ F^{(1,0)}(z_0) \cdot \overline{ F^{(0,1)}(z_0)}\Big ] \Big ], \end{aligned}$$

which is also the charge of F at \(z_0\). \(\square \)

We first apply Theorem 1.12 to the short-time Fourier transform, and obtain formulas in terms of (1.28).

Proof of Corollary 1.13

By Lemma 6.1, the short-time Fourier transform of complex white noise can be identified with a GWHF by the transformation

$$\begin{aligned} F(z) := e^{-i xy} V_g \, {\mathcal {N}}({\bar{z}}/\sqrt{\pi }), \qquad z=x+iy. \end{aligned}$$

At a zero \(\zeta =a+ib\),

$$\begin{aligned} F^{(1,0)}(\zeta )&= \frac{e^{-i ab}}{\sqrt{\pi }} \big (V_g \, {\mathcal {N}}\big )^{(1,0)}(a/\sqrt{\pi },-b/\sqrt{\pi }), \\ F^{(0,1)}(\zeta )&= -\frac{e^{-i ab}}{\sqrt{\pi }} \big (V_g \, {\mathcal {N}}\big )^{(0,1)}(a/\sqrt{\pi },-b/\sqrt{\pi }), \end{aligned}$$

and, consequently,

$$\begin{aligned} {{\,\mathrm{Jac}\,}}F(\zeta )=\frac{1}{\pi } \Im \Big [\,\big (V_g \, {\mathcal {N}}\big )^{(1,0)}(a/\sqrt{\pi },-b/\sqrt{\pi }) \cdot \overline{\big (V_g \, {\mathcal {N}}\big )^{(0,1)}(a/\sqrt{\pi },-b/\sqrt{\pi })} \,\Big ]. \end{aligned}$$

Applying Theorem 1.12 with the change of variable \(z= {\bar{\zeta }}/\sqrt{\pi }\) we obtain

$$\begin{aligned} {\mathbb {E}} \Big [ \sum _{z \in E, \, V_g\, {\mathcal {N}} (z) = 0} \mu _z \Big ]&= {\mathbb {E}} \Big [ \sum _{\zeta \in \sqrt{\pi } {\bar{E}}, \, F(\zeta ) = 0} \mu _{{\bar{\zeta }}/\sqrt{\pi }} \Big ] \\&={\mathbb {E}} \Big [ \sum _{\zeta \in \sqrt{\pi } {\bar{E}}, \, F(\zeta ) = 0} {{\,\mathrm{sgn}\,}}{{\,\mathrm{Jac}\,}}F(\zeta )\Big ] \\&= \frac{1}{\pi } \big | \sqrt{\pi } {\bar{E}}\big | \\&= |E|, \end{aligned}$$

as claimed. \(\square \)

For the STFT of white noise with a Hermite window (6.10), the twisted kernel is given in (6.12), and we can apply Theorem 1.14 with

$$\begin{aligned} P(t) = L_{r}(t) e^{-t/2}. \end{aligned}$$

After a change of variables as in the proof of Corollary 1.13, we obtain

$$\begin{aligned} \mathrm {Var} \bigg [ \sum _{z \in B_R(z_0), \, V_g\, {\mathcal {N}} (z) = 0} \mu _z \bigg ] \le C_r R, \end{aligned}$$

while

$$\begin{aligned} \frac{1}{R} \mathrm {Var} \bigg [ \sum _{z \in B_R(z_0), \, V_g\, {\mathcal {N}} (z) = 0} \mu _z \bigg ] \rightarrow \pi ^{-1/2} \int _0^\infty \frac{2 t^2 P'(t^2)^2}{1-P(t^2)^2} dt, \qquad \text{ as } {R \rightarrow \infty }, \end{aligned}$$

uniformly in \(z_0\).

Finally, we note that we can also apply Theorems 1.12 and 1.14 to poly-entire functions. Let G be a Gaussian poly-entire function of pure-type, as in (6.14). According to Lemma 6.3, the function

$$\begin{aligned} F(z) = \frac{e^{-\frac{1}{2} |z|^2}}{\sqrt{(q-1)!}} \cdot G(z), \end{aligned}$$

is a GWHF. By Lemma 6.5, the charges of F and G at a zero \(\zeta \) coincide:

$$\begin{aligned} \kappa _{\zeta } = {{\,\mathrm{sgn}\,}}({{\,\mathrm{Jac}\,}}F(\zeta )) = {{\,\mathrm{sgn}\,}}({{\,\mathrm{Jac}\,}}G(\zeta )). \end{aligned}$$

A similar argument applies to poly-entire functions of full type (cf. Example 1.4 and Sect. 6.6). Hence, Theorem 1.12 shows that the first intensity of the charged zeros of G is \(1/\pi \). Similarly, Theorem 1.14 applies to G, and concrete expressions for the asymptotic charged particle variance can be obtained with the functions

$$\begin{aligned} P(r) = {\left\{ \begin{array}{ll} e^{-r/2} \cdot L_{q-1}(r) &{}\text{ pure-type } (6.14)\\ \frac{1}{q} \cdot e^{-r/2} \cdot L^{(1)}_{q-1}(r) &{}\text{ full-type } (6.16) \end{array}\right. }. \end{aligned}$$
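For orientation, the resulting asymptotic slope \(\pi ^{-1/2}\int _0^\infty 2t^2P'(t^2)^2/(1-P(t^2)^2)\,dt\) can be evaluated numerically. The following sketch (ours; it assumes SciPy, the helper names are ours, and it treats the pure-type case) uses the classical identity \(L_n'=-L^{(1)}_{n-1}\):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import eval_laguerre, eval_genlaguerre

def charge_variance_slope(q):
    L = lambda s: eval_laguerre(q - 1, s)
    dL = (lambda s: -eval_genlaguerre(q - 2, 1, s)) if q >= 2 else (lambda s: 0.0)
    P = lambda s: L(s)*np.exp(-s/2)
    Pp = lambda s: (dL(s) - L(s)/2)*np.exp(-s/2)
    f = lambda t: 2*t**2*Pp(t**2)**2/(1 - P(t**2)**2)
    # the integrand extends continuously to t = 0, so starting slightly above 0
    # avoids a 0/0 in floating point at a negligible truncation cost
    val, _ = quad(f, 1e-4, 40.0, limit=200)
    return val/np.sqrt(np.pi)

for q in (1, 2, 3):
    print(q, charge_variance_slope(q))
```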

6.8 First Derivatives of GEF

We now interpret the statistics of zeros of Gaussian poly-entire functions of pure type with \(q=2\), and show how they recover the well-known first-order statistics of critical points of weighted magnitudes of Gaussian entire functions (cf. Example 1.3).

Let G be a Gaussian entire function as in Example 1.1 and consider its amplitude \(A(z) = e^{-\frac{1}{2}|z|^2} |G(z)|\). Then, by (1.8), the critical points of A are exactly the zeros of the GWHF \(F(z)=e^{-\frac{1}{2}|z|^2} {\bar{\partial }}^* G(z)\). By Theorem 1.8 (with \(q=2\)), the first intensity of the critical points of A is therefore \(5/(3\pi )\).

Second, consider a critical point \(z_0\) of A. Then, by Proposition 3.2, with probability one, \(z_0\) is not a zero of G, and near \(z_0\) we can write \(G(z)=L(z)^2\) with L analytic. Hence,

$$\begin{aligned} 2 \partial A = A^{(1,0)} - i A^{(0,1)} = -{\frac{{\overline{L}}}{L}} \cdot F. \end{aligned}$$

As the factor \({{{\overline{L}}}/{L}}\) is smooth (in the real sense) and non-zero near \(z_0\), we conclude by Lemma 6.5 that the charge of F at \(z_0\) is

$$\begin{aligned} \kappa _{z_0} = {{\,\mathrm{sgn}\,}}\Big [ \big [A^{(1,1)}\big ]^2 - A^{(2,0)} A^{(0,2)} \Big ], \end{aligned}$$

that is, the opposite of the sign of the determinant of the Hessian matrix of A at \(z_0\). Hence, \(\kappa _{z_0}=1\) if \(z_0\) is a saddle point of A, while \(\kappa _{z_0}=-1\) if A has a local maximum at \(z_0\) (local minima are excluded, as they are zeros of G [24, Sect. 8.2.2]). Thus, by Theorem 1.12, the first intensity of the quantity “saddle points − local maxima” is \(1/\pi \). Combining this with the first intensity of the total critical points, we conclude that the first intensity of the local maxima of A is \(1/(3\pi )\) whereas that of the saddle points is \(4/(3\pi )\).

While the calculation of the first intensities of the different kinds of critical points of A is well known (for example, they follow as a limit of more precise results for polynomial spaces in [12, Corollary 5]), the hyperuniformity of the statistics of “saddle points − local maxima” is, to the best of our knowledge, a novel consequence of Theorem 1.14.

7 Conclusions and Outlook

We introduced the notion of twisted stationarity for an ensemble of random functions and obtained basic statistics for their zeros. In comparison to the model case of translation invariant Gaussian entire functions, a novel element is found: GWHF may either preserve or reverse orientation around a zero, and zero statistics are thus augmented with the new attribute of charge.

While our result on hyperuniformity of charge is a first step in the exploration of repulsion between zeros of GWHF, as it shows that a universal form of screening is observed at large scales, many important questions remain open. First, Theorem 1.14 was obtained under the assumption that the twisted kernel is radial, which means that statistics are rotationally invariant. We do not know if hyperuniformity of charge holds also for non-radial twisted kernels. Second, no variance estimates were derived for uncharged zeros. We conjecture that the uncharged number variance grows like the perimeter of the observation disk. Finally, numerical experience suggests that the repulsion between zeros of the same charge is stronger than that between oppositely charged ones, but we do not yet have formal statistics justifying that claim.

The short-time Fourier transform of white noise is a prime application of our results, because they open the door to the use of non-Gaussian windows. This new freedom has prospective applications in signal processing which we expect to develop in future work. Indeed, when analyzing a signal, one can often choose the STFT window, and the potentially rich zero statistics that we derived hold simultaneously for all such choices.

8 Auxiliary Results

8.1 Computations with Gaussians

Lemma 8.1

Let \(\Omega \in {\mathbb {C}}^{2\times 2}\) be positive definite and \(t \in {\mathbb {R}}\). Then

$$\begin{aligned} \frac{1}{\pi ^2} \int _{{\mathbb {C}}^2} e^{- ({{\bar{z}}},{{\bar{w}}}) \,\Omega \, (z,w)^t} e^{it \Im (z{{\bar{w}}})} dA(z) \,dA(w) = \frac{1}{\det \big (\Omega + \tfrac{t}{2} J\big )}. \end{aligned}$$
(8.1)

Proof

Write

$$\begin{aligned} \Omega = \begin{pmatrix} a &{} b+id \\ b-id &{} c \end{pmatrix}, \end{aligned}$$

fix \(a>0\), \(b \in {\mathbb {R}}\), and consider both sides of (8.1) as functions of the complex variable \(\xi = \frac{t}{2}+id\). For \(\xi \in i {\mathbb {R}}\) (i.e., \(t=0\)) and \(d^2<ac-b^2\) (i.e., \(\Omega \) positive definite), (8.1) holds because it expresses the fact that the probability density of a complex Gaussian is normalized. We will show that both sides of (8.1) are analytic functions on the domain

$$\begin{aligned} {\mathcal {A}} = \big \{ \xi \in {\mathbb {C}}: \big (\Im [\xi ]\big )^2 < ac-b^2 \big \}. \end{aligned}$$

To this end, we first rewrite

$$\begin{aligned}&\frac{1}{\pi ^2} \int _{{\mathbb {C}}^2} e^{- ({{\bar{z}}},{{\bar{w}}}) \,\Omega \, (z,w)^t} e^{it \Im (z{{\bar{w}}})} dA(z) \,dA(w) \\&\quad =\frac{1}{\pi ^2} \int _{{\mathbb {C}}^2} e^{- a z {{\bar{z}}} - c w {{\bar{w}}} + (\xi -b) z {{\bar{w}}} + (-\xi -b) w {{\bar{z}}}} dA(z) \,dA(w). \end{aligned}$$

Here, the integrand is an analytic function of \(\xi \). To show the analyticity of the integral, we note that for any compact subset \({\mathcal {C}} \subseteq {\mathcal {A}}\), we have \(\vartheta _{{\mathcal {C}}}:=\sup _{\xi \in {\mathcal {C}}} \big (\Im [\xi ]\big )^2 < ac-b^2\). Thus, the integral of the absolute value of the integrand satisfies

$$\begin{aligned} \frac{1}{\pi ^2} \int _{{\mathbb {C}}^2} \big | e^{- ({{\bar{z}}},{{\bar{w}}}) \,\Omega \, (z,w)^t} e^{it \Im (z{{\bar{w}}})} \big | dA(z) \,dA(w)&= \frac{1}{\pi ^2} \int _{{\mathbb {C}}^2} e^{- ({{\bar{z}}},{{\bar{w}}}) \,\Omega \, (z,w)^t} dA(z) \,dA(w) \\&\le \frac{1}{ac-b^2- \vartheta _{{\mathcal {C}}}}. \end{aligned}$$

Hence, the absolute integral is uniformly bounded for \(\xi \in {\mathcal {C}}\). Applying Morera’s theorem and Fubini’s theorem, we can conclude that the integral is analytic as well.

The right-hand side of (8.1) can be rewritten as

$$\begin{aligned} \frac{1}{\det \big (\Omega + \tfrac{t}{2} J\big )} = \frac{1}{ac-(b+\xi )(b-\xi )} \end{aligned}$$
(8.2)

and is also analytic in \(\xi \) as long as \(ac\ne (b+\xi )(b-\xi )\). In particular, for \(\xi \in {\mathcal {A}}\) we have that \(\Re [ac-(b+\xi )(b-\xi )]= ac-b^2+\frac{t^2}{4}-d^2 \ge ac-b^2-d^2>0\). Hence, both sides of (8.1) are analytic on \({\mathcal {A}}\) and coincide on the set \({\mathcal {A}} \cap i {\mathbb {R}}\). By the identity theorem of analytic functions, they thus coincide on \({\mathcal {A}}\). \(\square \)
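Identity (8.1) can be probed by Monte Carlo sampling as well: after division by \(\det \Omega \), its left-hand side is the characteristic function value \({\mathbb {E}}[e^{it\Im (z{\bar{w}})}]\) for \((z,w)\) circularly symmetric with covariance \(\Omega ^{-1}\). A NumPy sketch (ours; the parameter values are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
Omega = np.array([[2.0, 0.4 + 0.3j], [0.4 - 0.3j, 1.5]])  # positive definite
t = 0.7
J = np.array([[0.0, 1.0], [-1.0, 0.0]])

# (z, w) circularly symmetric with covariance Omega^{-1}
L = np.linalg.cholesky(np.linalg.inv(Omega))
xi = (rng.standard_normal((2, 10**6)) + 1j*rng.standard_normal((2, 10**6)))/np.sqrt(2)
z, w = L @ xi

print(np.mean(np.exp(1j*t*np.imag(z*np.conj(w)))))          # Monte Carlo
print(np.linalg.det(Omega)/np.linalg.det(Omega + (t/2)*J))  # from (8.1)
```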

Lemma 8.2

Let v be a 4-dimensional circularly symmetric complex Gaussian vector with covariance matrix \(\Omega \). Then

$$\begin{aligned} {\mathbb {E}}\Big [ \Im (v_1 {{\bar{v}}}_2) \cdot \Im (v_3 {{\bar{v}}}_4 ) \Big ]&= -\frac{1}{2} \Re \Big [ \Omega _{1,2}\Omega _{3,4} + \Omega _{1,4}\Omega _{3,2} - \Omega _{2,1}\Omega _{3,4} - \Omega _{2,4}\Omega _{3,1} \Big ]. \end{aligned}$$

Proof

By Wick’s formula (see, e.g., [24, Lemma 2.1.7]), we have

$$\begin{aligned} {\mathbb {E}}\Big [ \Im (v_1 {{\bar{v}}}_2) \cdot \Im (v_3 {{\bar{v}}}_4 ) \Big ]&= -\frac{1}{2} \Re \,{\mathbb {E}}\Big [v_1 v_3 {{\bar{v}}}_2 {{\bar{v}}}_4 - v_2 v_3 {{\bar{v}}}_1 {{\bar{v}}}_4 \Big ] \\&= -\frac{1}{2} \Re \Big [ \mathrm {per}( \Omega _{1,3;2,4}) - \mathrm {per}(\Omega _{2,3; 1,4}) \Big ] \\&= -\frac{1}{2} \Re \Big [ \Omega _{1,2}\Omega _{3,4} + \Omega _{1,4}\Omega _{3,2} - \Omega _{2,1}\Omega _{3,4} - \Omega _{2,4}\Omega _{3,1} \Big ], \end{aligned}$$

where \(\mathrm {per}\) is the permanent and \(\Omega _{i,j; k,l}\) is the submatrix of \(\Omega \) containing the rows i and j and columns k and l. \(\square \)
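A quick Monte Carlo check of the lemma (sketch ours; any Hermitian positive definite \(\Omega \) will do):

```python
import numpy as np

rng = np.random.default_rng(2)
M = rng.standard_normal((4, 4)) + 1j*rng.standard_normal((4, 4))
Omega = M @ M.conj().T + 4*np.eye(4)       # Hermitian, positive definite

L = np.linalg.cholesky(Omega)
xi = (rng.standard_normal((4, 10**6)) + 1j*rng.standard_normal((4, 10**6)))/np.sqrt(2)
v = L @ xi                                 # circularly symmetric, covariance Omega

lhs = np.mean(np.imag(v[0]*np.conj(v[1]))*np.imag(v[2]*np.conj(v[3])))
O = Omega
rhs = -0.5*np.real(O[0, 1]*O[2, 3] + O[0, 3]*O[2, 1] - O[1, 0]*O[2, 3] - O[1, 3]*O[2, 0])
print(lhs, rhs)                            # agree up to Monte Carlo error
```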

8.2 Regularization by Convolution

Lemma 8.3

Let \(\varphi :{\mathbb {C}} \rightarrow {\mathbb {R}}\) be an integrable function that satisfies

$$\begin{aligned} \int _{\mathbb {C}} \varphi (z) \, dA(z) = 1, \qquad C_\varphi := \int _{{\mathbb {C}}} \left| z\right| \left| \varphi (z)\right| \, dA(z) < \infty . \end{aligned}$$

Then there exists a universal constant \(C>0\) such that for all \(z_0 \in {\mathbb {C}}\),

$$\begin{aligned}&\bigg |\left| B_r(z_0)\right| - \int _{B_r(z_0)} (\varphi * 1_{B_r(z_0)})(z) \, dA(z) \bigg | \le C C_\varphi r, \qquad r>0. \end{aligned}$$
(8.3)

In addition, letting

$$\begin{aligned} I_\varphi := \int _{{\mathbb {C}}} \left| z\right| \varphi (z) \, dA(z), \end{aligned}$$

the following holds:

$$\begin{aligned}&\frac{1}{r} \bigg (\left| B_r(z_0)\right| - \int _{B_r(z_0)} (\varphi * 1_{B_r(z_0)})(z) \, dA(z)\bigg ) \rightarrow I_\varphi , \qquad \text{ as } {r \rightarrow +\infty }. \end{aligned}$$
(8.4)

Proof

We first note the elementary facts

$$\begin{aligned}&\left| B_1(0) \setminus B_1(w)\right| \le C \left| w\right| , \qquad w \in {\mathbb {C}}, \\&\lim _{h \rightarrow 0+} \tfrac{1}{h\left| w\right| } \left| B_1(0) \setminus B_1(h w)\right| =1, \end{aligned}$$

for some constant \(C>0\). For each \(w \in {\mathbb {C}}\), rescaling and translating yields

$$\begin{aligned}&\left| B_r(z_0) \setminus B_r(z_0+w)\right| = r^2\big |B_1\big (\tfrac{z_0}{r}\big ) \setminus B_1\big (\tfrac{z_0+w}{r}\big ) \big | \le C \left| w\right| r, \qquad r > 0, \end{aligned}$$
(8.5)
$$\begin{aligned}&\lim _{r \rightarrow \infty } \tfrac{1}{r\left| w\right| } \left| B_r(z_0) \setminus B_r(z_0+w)\right| =1. \end{aligned}$$
(8.6)

We calculate

$$\begin{aligned}&\left| B_r(z_0)\right| - \int _{B_r(z_0)} (\varphi * 1_{B_r(z_0)})(z) dA(z) \\&\quad =\int _{B_r(z_0)} 1_{B_r(z_0)}(z) \int _{{\mathbb {C}}} \varphi (w) \, dA(w) \, dA(z) - \int _{B_r(z_0)} \int _{{\mathbb {C}}} 1_{B_r(z_0)}(z-w) \varphi (w) \, dA(w) \, dA(z) \\&\quad =\int _{B_r(z_0)} \int _{{\mathbb {C}}} \big ( 1_{B_r(z_0)}(z)-1_{B_r(z_0+w)}(z) \big ) \varphi (w) \, dA(w) \, dA(z) \\&\quad = \int _{{\mathbb {C}}} \varphi (w) \left| B_r(z_0) \setminus B_r(z_0+w)\right| \,dA(w) \end{aligned}$$

where we used Fubini’s theorem. For (8.3), we use (8.5) and estimate

$$\begin{aligned} \bigg | \int _{{\mathbb {C}}} \varphi (w) \big |B_r(z_0) \setminus B_r(z_0+w) \big | \,dA(w) \bigg | \le C \int _{{\mathbb {C}}} \left| \varphi (w)\right| \left| w\right| r \,dA(w) = C C_\varphi r. \end{aligned}$$

For (8.4), we use (8.6) to obtain

$$\begin{aligned} \frac{1}{r} \bigg (\left| B_r(z_0)\right| - \int _{B_r(z_0)} (\varphi * 1_{B_r(z_0)})(z) \, dA(z)\bigg ) = \int _{{\mathbb {C}}} \varphi (w) \left| w\right| \, \frac{\left| B_r(z_0) \setminus B_r(z_0+w)\right| }{r\left| w\right| } \,dA(w) \rightarrow I_\varphi , \end{aligned}$$

as \(r \rightarrow +\infty \), where we used the dominated convergence theorem, as allowed by (8.5). \(\square \)

8.3 Calculation 1

The following calculations can be followed in the symbolic worksheet available at https://github.com/gkoliander/gwhf. We wish to calculate

$$\begin{aligned} E&= -\frac{1}{2} \Re \Big [ \Omega _{1,2}\Omega _{3,4} + \Omega _{1,4}\Omega _{3,2} - \Omega _{2,1}\Omega _{3,4} - \Omega _{2,4}\Omega _{3,1} \Big ] \\&= -\frac{1}{2} \Re \Big [ (\Omega _{1,2} - \Omega _{2,1}) \Omega _{3,4} + \Omega _{1,4}\Omega _{3,2} - \Omega _{2,4}\Omega _{3,1} \Big ] \\&= \Im \big [ \Omega _{1,2} \big ] \Im \big [ \Omega _{3,4} \big ] -\frac{1}{2} \Re \Big [\Omega _{1,4}\Omega _{3,2} - \Omega _{2,4}\Omega _{3,1} \Big ]. \end{aligned}$$

With the notation of the proof of Proposition 5.2, note that

$$\begin{aligned} \Omega _{k,l} = A_{k,l} - \frac{(B_{k,1}- e^{- i (yu-xv)} P B_{k,2}) \overline{B_{l,1}} + (-e^{i (yu-xv)} P B_{k,1} + B_{k,2}) \overline{B_{l,2}}}{1- P ^2} \end{aligned}$$

Inserting the various specific values, and abbreviating \(r^2 = \left| z-w\right| ^2 = (x-u)^2+(y-v)^2\), we obtain

$$\begin{aligned} \Omega _{1,2}&= \Gamma (z)_{2,3} - \frac{(B_{1,1}- e^{- i (yu-xv)} P B_{1,2}) \overline{B_{2,1}} + (-e^{i (yu-xv)} P B_{1,1} + B_{1,2}) \overline{B_{2,2}}}{1- P ^2} \\&= -i-xy - \frac{- xy+ xvP^2 + 2ix (x-u)PP' + (iy P - iv P + 2(x-u)P') ( -iu P + 2(y-v)P')}{1- P ^2} \\&= \Re \big [ \Omega _{1,2} \big ] + i \bigg ( \frac{ - 2 r^2 PP' }{1- P ^2} -1 \bigg ) \end{aligned}$$
$$\begin{aligned} \Omega _{3,4}&= \Gamma (w)_{2,3} - \frac{(B_{3,1}- e^{- i (yu-xv)} P B_{3,2}) \overline{B_{4,1}} + (-e^{i (yu-xv)} P B_{3,1} + B_{3,2}) \overline{B_{4,2}}}{1- P ^2} \\&= -i-uv - \frac{( - iy P - 2(x-u)P' + iv P )(-ix P - 2(y-v)P') + uy P^2 - iu 2(x-u)PP'-uv }{1- P ^2} \\&= \Re \big [ \Omega _{3,4} \big ] + i \bigg ( \frac{ - 2 r^2 PP' }{1- P ^2} -1 \bigg ) \end{aligned}$$

Similarly,

$$\begin{aligned} \Omega _{1,4}&= \Gamma (z,w)_{2,3} - \frac{(B_{1,1}- e^{- i (yu-xv)} P B_{1,2}) \overline{B_{4,1}} + (-e^{i (yu-xv)} P B_{1,1} + B_{1,2}) \overline{B_{4,2}}}{1- P ^2} \\&= e^{i (yu-xv)} \bigg ((-i-xv)P +2 i (v(y-v)-x(x-u))P' -4 (x-u)(y-v)P'' \\&\quad - \frac{(-iy + iv P^2 - 2(x-u)PP' ) \big (-ix P - 2(y-v)P'\big ) + u(y-v) P - 2iu(x-u)P'}{1- P^2} \bigg ) \\&= \frac{e^{i (yu-xv)}}{1- P^2} \Big ((x-u) (y-v) (4 P^2P''-4 PP'^2+P-4P'') + i \big ( -2r^2 P' + P^3-P \big ) \Big ) \end{aligned}$$
$$\begin{aligned} \Omega _{3,2}&= \overline{\Gamma (z,w)_{3,2}} - \frac{(B_{3,1}- e^{- i (yu-xv)} P B_{3,2}) \overline{B_{2,1}} + (-e^{i (yu-xv)} P B_{3,1} + B_{3,2}) \overline{B_{2,2}}}{1- P^2} \\&= e^{-i (yu-xv)} \bigg ((-i-uy)P -2 i (y(y-v)-u(x-u))P' -4 (x-u)(y-v)P'' \\&\quad - \frac{ - x(y-v) P + 2 ix(x-u)P' + ( iy P^2 + 2(x-u)PP' -iv) \big ( -iu P + 2(y-v)P'\big )}{1- P^2} \bigg ) \\&= \frac{e^{-i (yu-xv)}}{1- P^2} \Big ((x-u) (y-v) (4 P^2P''-4 PP'^2+P-4P'') + i \big ( -2r^2 P' + P^3-P \big ) \Big ) \end{aligned}$$

and the real part of the product is given as

$$\begin{aligned} \Re \big [ \Omega _{1,4}\Omega _{3,2}\big ] = \frac{(x-u)^2 (y-v)^2 (Q+P)^2 - (2r^2 P' - P^3+P)^2}{(1- P^2)^2} \end{aligned}$$
(8.7)

where we substituted \(Q = 4P^2P''-4PP'^2-4P''\). Similarly,

$$\begin{aligned} \Omega _{2,4}&= \Gamma (z,w)_{3,3} - \frac{(B_{2,1}- e^{- i (yu-xv)} P B_{2,2}) \overline{B_{4,1}} + (-e^{i (yu-xv)} P B_{2,1} + B_{2,2}) \overline{B_{4,2}}}{1- P^2} \\&= e^{i (yu-xv)} \bigg (xu P - (2+ 2 i (y-v)(x+u))P' +4 (y-v)^2 P'' \\&\quad - \frac{(ix - iu P^2 - 2(y-v)P P' ) \big (-ix P - 2(y-v)P'\big ) - u(x-u) P - 2iu(y-v)P' }{1- P^2} \bigg ) \\&= -\frac{e^{i (yu-xv)}}{1- P^2} \Big ( (x-u)^2 P - 4(y-v)^2 (P^2 P''-P P'^2- P'') + 2 P' (1-P^2) \Big )\\ \Omega _{3,1}&= \overline{\Gamma (z,w)_{2,2}} - \frac{(B_{3,1}- e^{- i (yu-xv)} P B_{3,2}) \overline{B_{1,1}} + (-e^{i (yu-xv)} P B_{3,1} + B_{3,2}) \overline{B_{1,2}}}{1- P^2} \\&= e^{-i (yu-xv)} \bigg ( yv P - (2 - 2 i (x-u)(y+v))P' + 4 (x-u)^2 P'' \\&\quad - \frac{ y(y-v) P - 2iy(x-u)P' + ( iy P^2 + 2(x-u)P P' -iv) ( iv P + 2(x-u)P')}{1- P^2} \bigg ) \\&= \frac{e^{-i (yu-xv)}}{1- P ^2} \Big ( -(y-v)^2 P + 4(x-u)^2 (P^2 P''-P P'^2- P'') - 2 P' (1-P^2) \Big ) \end{aligned}$$

and the real part of the product is given as

$$\begin{aligned} \Re \big [ \Omega _{2,4}\Omega _{3,1}\big ]&= \frac{ \big ((x-u)^2 P - (y-v)^2 Q + 2 P' (1-P^2)\big )\big ((y-v)^2 P - (x-u)^2 Q + 2 P' (1-P^2)\big ) }{(1- P^2)^2} \\&= \frac{ (x-u)^2 (y-v)^2 P^2 - (y-v)^4 PQ + 2 (y-v)^2 PP' (1-P^2) }{(1- P^2)^2} \\&\quad - \frac{ (x-u)^4 P Q - (y-v)^2(x-u)^2 Q^2 + 2 (x-u)^2 QP' (1-P^2) }{(1- P^2)^2} \\&\quad + \frac{ 2 (x-u)^2 P P' (1-P^2) - 2 (y-v)^2 Q P' (1-P^2)+ 4 P'^2 (1-P^2)^2 }{(1- P^2)^2} \\&= \frac{ (x-u)^2 (y-v)^2 (P^2+Q^2) - ((x-u)^4 + (y-v)^4) PQ + 2 r^2 P'(P-Q) (1-P^2) }{(1- P^2)^2} + 4 P'^2 \\&= \frac{ (x-u)^2 (y-v)^2 (P+Q)^2 - r^4 PQ + 2 r^2 P'(P-Q) (1-P^2) }{(1- P^2)^2} + 4 P'^2 \end{aligned}$$

Combining everything, we obtain

$$\begin{aligned} E&= \Im \big [ \Omega _{1,2} \big ] \Im \big [ \Omega _{3,4} \big ] -\frac{1}{2} \Re \Big [\Omega _{1,4}\Omega _{3,2} - \Omega _{2,4}\Omega _{3,1} \Big ] \\&= \bigg ( \frac{ 2 r^2 PP' }{1- P^2} +1 \bigg )^2 - \frac{(x-u)^2 (y-v)^2 (Q+P)^2 - (2r^2 P' - P^3+P)^2}{2(1- P^2)^2} \\&\quad + \frac{ (x-u)^2 (y-v)^2 (P+Q)^2 - r^4 PQ + 2 r^2 P'(P-Q) (1-P^2) }{2(1- P ^2)^2} + 2 P'^2 \\&= \bigg ( \frac{ 2 r^2 PP' }{1- P^2} +1 \bigg )^2 + \frac{ (2r^2 P' + P (1-P^2))^2 - r^4 PQ + 2 r^2 P'(P-Q) (1-P^2) }{2(1- P ^2)^2} + 2 P'^2 \\&= \frac{ 8 r^4 P^2 P'^2 + 4r^4 P'^2 - r^4 PQ }{2(1- P ^2)^2} + \frac{ r^2 P'(7P-Q) }{1- P^2} + 1 + 2 P'^2 +\frac{P^2}{2} \\&= \frac{2r^4 (3 P^2 P'^2 + P'^2 + PP''(1-P^2)) }{(1- P^2)^2} + \frac{ r^2 P'(7P+4PP'^2+4P''(1- P^2)) }{1- P ^2} + 1 + 2 P'^2 +\frac{P^2}{2} \end{aligned}$$

On the other hand, we have for I defined in (5.16) that

$$\begin{aligned} I'&= \frac{(2P'^2+\frac{3}{2} P^2)(1-P^2) + r^2 (4P'P''+3 PP')(1-P^2) + 2 r^2 (2P'^2+\frac{3}{2} P^2)PP'}{(1-P^2)^2} \\&\quad + \frac{4 r^2 PP'(1-P^2)^2 + 2 r^4 (P'^2+PP'')(1-P^2)^2 + 8 r^4 PP' (1-P^2)PP'}{(1-P^2)^4} \\&= \frac{2 r^4 ( 3P^2P'^2+ P'^2+PP''(1-P^2))}{(1-P^2)^3} + \frac{r^2P' ( 7 P + 4PP'^2+4P''(1-P^2))}{(1-P^2)^2} + \frac{2P'^2+\frac{3}{2} P^2 }{1-P^2} \end{aligned}$$

and see that \(E/(1-P^2) -1= I'\).

Finally, we verify (5.17). By (5.5),

$$\begin{aligned} \lim _{r \rightarrow \infty } \frac{1}{1- P^2(r^2)} = 1. \end{aligned}$$

Inspection of each term in the other factor in \(I'\) combined with (5.5) shows that

$$\begin{aligned} \limsup _{r \rightarrow \infty } r^4 \left| I'(r^2)\right| < \infty . \end{aligned}$$

Since \(I'\) is continuous, it follows that

$$\begin{aligned} \sup _{r \ge 0} (1+r^4)\left| I'(r^2)\right| < \infty , \end{aligned}$$

as claimed. \(\square \)