1 Introduction

1.1 Nodal Intersections for Laplace Eigenfunctions

The nodal set of a function is its zero-locus. Several recent works (e.g. [10, 18, 40]) have studied the number of intersections between the nodal lines of eigenfunctions and a fixed reference curve (nodal intersections on ‘generic’ surfaces). It is expected that in many situations the number of nodal intersections obeys the bound \(\ll \sqrt{E}\), where \(E>0\) is the eigenvalue.

On the three-dimensional standard flat torus \(\mathbb {T}^3=\mathbb {R}^3/\mathbb {Z}^3\), the nonzero Laplace eigenvalues, or energy levels, are of the form \(4\pi ^2 m\), \(m\in S^{(3)}\), where

$$\begin{aligned} S^{(3)}:=\{0<m: \ m=a_1^2+a_2^2+a_3^2,\ a_i\in \mathbb {Z}\}. \end{aligned}$$

Let

$$\begin{aligned} \mathcal {E}=\mathcal {E}_m:=\{\mu =(\mu ^{(1)},\mu ^{(2)},\mu ^{(3)})\in \mathbb {Z}^3 : ({\mu ^{(1)}})^2+({\mu ^{(2)}})^2+({\mu ^{(3)}})^2=m\} \end{aligned}$$
(1.1)

be the set of all lattice points on the sphere of radius \(\sqrt{m}\). The (complex-valued) Laplace eigenfunctions may be written as [5]

$$\begin{aligned} G(x)=G_m(x)=\sum _{\mu \in \mathcal {E}} c_{\mu } e^{2\pi i\langle \mu ,x\rangle }, \qquad x\in \mathbb {T}^3, \end{aligned}$$

with \(c_\mu \) Fourier coefficients.

A natural number m is representable as a sum of three integer squares if and only if \(m\ne 4^l(8k+7)\) for k, l non-negative integers [14, 23]. The number of lattice points, or equivalently the number of ways to express m as a sum of three squares, will be denoted

$$\begin{aligned} N=N_m:=|\mathcal {E}|=r_3(m). \end{aligned}$$

Under the assumption \(m\not \equiv 0,4,7 \pmod 8\), one has

$$\begin{aligned} (\sqrt{m})^{1-\epsilon } \ll N \ll (\sqrt{m})^{1+\epsilon } \end{aligned}$$
(1.2)

for all \(\epsilon >0\) [7, section 1]. The condition \(m\not \equiv 0,4,7 \pmod 8\), ensuring \(N_m\rightarrow \infty \), is natural [37, section 1.3]: indeed, if \(m\equiv 7 \pmod 8\), then \(m\not \in S^{(3)}\); on the other hand,

$$\begin{aligned} \mathcal {E}_{4m}=\{2\mu : \mu \in \mathcal {E}_{m}\} \end{aligned}$$

(see, e.g. [23, §20]); hence, it suffices to consider energies \(4\pi ^2 m\) with m not divisible by 4 (see Sect. 4 for details).
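These arithmetic facts are easy to check by brute force. The sketch below is our own illustration (the helper names are not from the paper): it enumerates \(\mathcal {E}_m\), tests Legendre's criterion, and verifies the relation \(\mathcal {E}_{4m}=\{2\mu : \mu \in \mathcal {E}_m\}\) for small m.

```python
from itertools import product
import math

def lattice_points(m):
    """All integer vectors mu with |mu|^2 = m, i.e. the set E_m of (1.1)."""
    r = math.isqrt(m)
    return [v for v in product(range(-r, r + 1), repeat=3)
            if v[0]**2 + v[1]**2 + v[2]**2 == m]

def is_sum_of_three_squares(m):
    """Legendre's criterion: m is a sum of three squares iff m != 4^l (8k + 7)."""
    while m % 4 == 0:
        m //= 4
    return m % 8 != 7

# r_3(m) > 0 exactly when the criterion holds
for m in range(1, 200):
    assert (len(lattice_points(m)) > 0) == is_sum_of_three_squares(m)

# E_{4m} = {2 mu : mu in E_m}: energies with 4 | m carry no new information
for m in (1, 2, 5, 6):
    doubled = sorted(tuple(2 * c for c in v) for v in lattice_points(m))
    assert doubled == sorted(lattice_points(4 * m))
```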

It is known [11, 35] that the nodal set

$$\begin{aligned} \mathcal {A}_G:=\{x\in \mathbb {T}^3 : G(x)=0\} \end{aligned}$$

is a union of smooth surfaces, possibly together with a set of lower dimension (curves and points). Let \(\Sigma \subset \mathbb {T}^3\) be a fixed smooth reference surface. Consider the intersection

$$\begin{aligned} \mathcal {A}_G \cap \Sigma , \end{aligned}$$

in the limit \(m\rightarrow \infty \). Bourgain and Rudnick [5, 6] found that, for \(\Sigma \) real-analytic, with nowhere zero Gauss–Kronecker curvature, there exists \(m_\Sigma \) such that for every \(m\ge m_\Sigma \), the following hold:

  • the surface \(\Sigma \) is not contained in the nodal set of any eigenfunction \(G_m\) [5, Theorem 1.2];

  • the one-dimensional Hausdorff measure of the intersection has the upper bound

    $$\begin{aligned} h_1(\mathcal {A}_{G_m} \cap \Sigma )<C_\Sigma \cdot \sqrt{m} \end{aligned}$$
    (1.3)

    for some constant \(C_\Sigma \) [6, Theorem 1.1];

  • for every eigenfunction \(G_m\),

    $$\begin{aligned} \mathcal {A}_{G_m} \cap \Sigma \ne \emptyset \end{aligned}$$

    [6, Theorem 1.3].

1.2 Arithmetic Random Waves

We work with an ensemble of random Gaussian Laplace toral eigenfunctions (‘arithmetic random waves’ [26, 32, 35])

$$\begin{aligned} F(x)={F_m^{(3)}}(x)=\frac{1}{\sqrt{N}} \sum _{(\mu ^{(1)},\mu ^{(2)},\mu ^{(3)})\in \mathcal {E}} a_{\mu } e^{2\pi i\langle \mu ,x\rangle }, \qquad x\in \mathbb {T}^3, \end{aligned}$$
(1.4)

with eigenvalue \(4\pi ^2m\), where \(a_{\mu }\) are complex standard Gaussian random variables (i.e. we have \(\mathbb {E}[a_{\mu }]=0\) and \(\mathbb {E}[|a_{\mu }|^2]=1\)), independent save for the relations \(a_{-\mu }=\overline{a_{\mu }}\) (so that F(x) is real-valued). Several recent works [3, 8, 30, 37] study the fine properties of the nodal set of (1.4). Our object of investigation is the following.
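The ensemble (1.4) is straightforward to simulate. The following sketch is our own, purely illustrative (`sample_wave` and `lattice_points` are hypothetical helpers): it draws one wave and checks that the symmetry \(a_{-\mu }=\overline{a_{\mu }}\) indeed produces a real-valued function.

```python
import cmath, math, random
from itertools import product

def lattice_points(m):
    """All integer vectors mu with |mu|^2 = m."""
    r = math.isqrt(m)
    return [v for v in product(range(-r, r + 1), repeat=3)
            if v[0]**2 + v[1]**2 + v[2]**2 == m]

def sample_wave(m, seed=0):
    """Draw one arithmetic random wave F_m of (1.4), returned as a callable F(x)."""
    rng = random.Random(seed)
    E = lattice_points(m)
    N = len(E)
    # complex standard Gaussians: E[a] = 0, E[|a|^2] = 1
    a = {mu: complex(rng.gauss(0, 1), rng.gauss(0, 1)) / math.sqrt(2) for mu in E}
    for mu in E:  # impose a_{-mu} = conj(a_mu), pairing each mu with -mu
        a[tuple(-c for c in mu)] = a[mu].conjugate()
    def F(x):
        s = sum(a[mu] * cmath.exp(2j * math.pi * sum(c * t for c, t in zip(mu, x)))
                for mu in E)
        return s / math.sqrt(N)
    return F

# the symmetry a_{-mu} = conj(a_mu) makes F real-valued
F = sample_wave(5)
assert abs(F((0.1, 0.2, 0.3)).imag) < 1e-9
```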

Definition 1.1

Let \(\Sigma \) be a fixed compact regular toral surface, of (finite) area A. Assume that \(\Sigma \) admits a smooth normal vector locally. We define the nodal intersection length for arithmetic random waves as

$$\begin{aligned} \mathcal {L}=\mathcal {L}_m:=h_1(\mathcal {A}_{F_m} \cap \Sigma ), \end{aligned}$$
(1.5)

where \(h_1\) is one-dimensional Hausdorff measure.

We will study \(\mathcal {L}\) in the high energy limit \(m\rightarrow \infty \).

1.3 Prior Work on Nodal Intersections for Random Waves

Denote

$$\begin{aligned} S^{(d)}:=\{0<m: \ m=a_1^2+\cdots +a_d^2,\ a_i\in \mathbb {Z}\} \end{aligned}$$

the set of natural numbers expressible as a sum of d squares. For \(m\in S^{(d)}\), let

$$\begin{aligned} {\mathcal {E}_m^{(d)}}:=\{\mu \in \mathbb {Z}^d : |\mu |^2=m\} \end{aligned}$$

be the set of lattice points on the sphere (or circle for \(d=2\))

$$\begin{aligned} \sqrt{m}\mathcal {S}^{d-1}=\{y\in \mathbb {R}^d : |y|=\sqrt{m}\}, \end{aligned}$$

of cardinality \(N_m^{(d)}=|\mathcal {E}_m^{(d)}|\). Consider the ensemble of arithmetic random waves

$$\begin{aligned} F_m^{(d)}(x)=\frac{1}{\sqrt{N_m^{(d)}}} \sum _{\mu \in {\mathcal {E}_m^{(d)}}} a_{\mu } e^{2\pi i\langle \mu ,x\rangle } \end{aligned}$$
(1.6)

defined on the d-dimensional torus \(\mathbb {T}^d=\mathbb {R}^d/\mathbb {Z}^d\), with eigenvalue \(4\pi ^2m\), and the \(a_\mu \)’s defined as in Sect. 1.2.

Fix a smooth reference curve \(\mathcal {C}\subset \mathbb {T}^d\) and consider the number of nodal intersections

$$\begin{aligned} \mathcal {Z}=\mathcal {Z}_{m}^{(d)}(F)=|\{x : F(x)=0\} \cap \mathcal {C}| \end{aligned}$$
(1.7)

as \(m\rightarrow \infty \). Rudnick and Wigman [36] and subsequently Rossi-Wigman [34] and the author [31] investigated \(\mathcal {Z}_{m}^{(2)}\). Rudnick et al. [37] and subsequently the author [30] studied the three-dimensional analogue \(\mathcal {Z}_{m}^{(3)}\). We may view the nodal intersection length \(\mathcal {L}_m\) as another three-dimensional analogue of \(\mathcal {Z}_{m}^{(2)}\): indeed, in both cases one considers nodal intersections against a submanifold of codimension one.

The expected number of nodal intersections against smooth curves \(\mathcal {C}\) of length L is [36, 37]

$$\begin{aligned} \mathbb {E}[\mathcal {Z}_{m}^{(d)}]=\frac{2}{\sqrt{d}}\sqrt{m}\cdot L, \end{aligned}$$
(1.8)

consistent with (1.3), independent of the geometry. Let \(d=2\): as \(m\rightarrow \infty \), the number of lattice points on \(\sqrt{m}\mathcal {S}^1\) satisfies

$$\begin{aligned} N_m^{(2)}\ll m^\epsilon \qquad \forall \epsilon >0, \end{aligned}$$

and \(N\rightarrow \infty \) for a density one sequence of energy levels. Let the curve \(\mathcal {C}\subset \mathbb {T}^2\) be smooth, of nowhere zero curvature, with arc-length parametrisation given by \(\gamma :[0,L]\rightarrow \mathcal {C}\). Rudnick-Wigman found the precise asymptotic behaviour [36, Theorem 1.2]

$$\begin{aligned} \text {Var}(\mathcal {Z}_{m}^{(2)})= (4B_{\mathcal {C}}(\mathcal {E})-L^2) \cdot \frac{m}{N_m} + O\bigg (\frac{m}{N_m^{3/2}}\bigg ) \end{aligned}$$
(1.9)

where

$$\begin{aligned} B_{\mathcal {C}}(\mathcal {E}):= \int _{\mathcal {C}} \int _{\mathcal {C}} \frac{1}{N_m} \sum _{\mu \in \mathcal {E}} \bigg \langle \frac{\mu }{|\mu |},\dot{\gamma }(t_1)\bigg \rangle ^2 \cdot \bigg \langle \frac{\mu }{|\mu |},\dot{\gamma }(t_2)\bigg \rangle ^2 \hbox {d}t_1\hbox {d}t_2. \end{aligned}$$
(1.10)

This asymptotic behaviour is non-universal: \(B_{\mathcal {C}}(\mathcal {E})\) depends both on \(\mathcal {C}\) and on the angular distribution of the lattice points on \(\sqrt{m}\mathcal {S}^1\), as \(m\rightarrow \infty \). A nice consequence of (1.8) and (1.9) is that the distribution of \(\mathcal {Z}\) is asymptotically concentrated at the mean value, i.e. for all \(\epsilon >0\),

$$\begin{aligned} \lim _{\begin{array}{c} m\rightarrow \infty \\ \text {s.t. } N\rightarrow \infty \end{array}}\mathbb {P}\left( \left| \frac{\mathcal {Z}_{m}^{(2)}(F)}{\sqrt{2m}L}-1\right| >\epsilon \right) =0. \end{aligned}$$

The leading coefficient in (1.9) is always non-negative and bounded [36, sections 1 and 7]:

$$\begin{aligned} 0\le 4B_{\mathcal {C}}(\mathcal {E})-L^2\le L^2, \end{aligned}$$

though it might vanish, for instance when \(\mathcal {C}\) is a circle, independently of \(\mathcal {E}\).

Rossi-Wigman [34] investigated the scenario of curves such that \(4B_{\mathcal {C}}(\mathcal {E})-L^2\) vanishes universally (‘static curves’). For a density one sequence of energies, the precise asymptotics of the variance in the case of static curves are [34, Theorem 1.3]

$$\begin{aligned} \text {Var}(\mathcal {Z}_{m}^{(2)})= (16A_{\mathcal {C}}(\mathcal {E})-L^2) \cdot \frac{m}{4N_m^2} \cdot (1+o(1)), \end{aligned}$$
(1.11)

where \(A_{\mathcal {C}}(\mathcal {E})\) depends both on \(\mathcal {C}\) and on the limiting angular distribution of the lattice points. The leading term in (1.11) is bounded away from zero [34, Theorem 1.3].

1.4 Statement of Main Results

We now state our theorems on expectation and variance of the nodal intersection length (1.5).

Proposition 1.2

Let \(\Sigma \subset \mathbb {T}^3\) be a surface as in Definition 1.1, of area A. Then we have

$$\begin{aligned} \mathbb {E}[\mathcal {L}]=\frac{\pi }{\sqrt{3}}\sqrt{m}\cdot A. \end{aligned}$$
(1.12)

The proof of Proposition 1.2 will be given in Sect. 2. Note that (1.12) is consistent with (1.3) and that (1.12) and (1.8) are of similar shape. In the statement of our next result, \(\overrightarrow{n}(\sigma )\) is the unit normal vector to \(\Sigma \) at the point \(\sigma \).

Theorem 1.3

Let \(\Sigma \subset \mathbb {T}^3\) be a surface as in Definition 1.1, of area A, with nowhere vanishing Gauss–Kronecker curvature. Define

$$\begin{aligned} \mathcal {I}=\mathcal {I}_\Sigma :=\iint _{\Sigma ^2}\langle \overrightarrow{n}(\sigma ),\overrightarrow{n}(\sigma ')\rangle ^2\hbox {d}\sigma \hbox {d}\sigma '. \end{aligned}$$
(1.13)

Then we have as \(m\rightarrow \infty \), \(m\not \equiv 0,4,7 \pmod 8\),

$$\begin{aligned} \text {Var}(\mathcal {L})=\frac{\pi ^2}{60}\frac{m}{N}\left[ 3\mathcal {I}-A^2+O\left( m^{-1/28+o(1)}\right) \right] . \end{aligned}$$
(1.14)

Theorem 1.3 will be proven in Sect. 5. Compare the expressions (1.10) and (1.13): while the integral \(B_{\mathcal {C}}(\mathcal {E})\) depends on both the curve \(\mathcal {C}\subset \mathbb {T}^2\) and the angular distribution of lattice points on circles, the integral \(\mathcal {I}\) depends on \(\Sigma \subset \mathbb {T}^3\) only. This is because lattice points on spheres are equidistributed (Linnik’s problem, see Sect. 4). Our next result concerns the analysis of the quantity \(\mathcal {I}\).

Proposition 1.4

Let \(\Sigma \subset \mathbb {T}^3\) be a surface as in Definition 1.1, of area A. The integral \(\mathcal {I}\) satisfies the sharp bounds

$$\begin{aligned} \frac{A^2}{3}\le \mathcal {I}\le A^2, \end{aligned}$$
(1.15)

so that the leading coefficient of (1.14) is always non-negative and bounded:

$$\begin{aligned} 0\le \frac{\pi ^2}{60}(3\mathcal {I}-A^2)\le \frac{\pi ^2}{30}A^2. \end{aligned}$$

Proposition 1.4 will be proven in Sect. 7. A computation shows that when \(\Sigma \) is a sphere or a hemisphere, the lower bound in (1.15) is achieved; hence, the leading term in (1.14) vanishes: in this case, the variance is of lower order than \(m/N\) (see Sect. 7 for details). As in the problem of nodal intersections against a curve on \(\mathbb {T}^2\) [36], the theoretical maximum of the variance leading term is achieved in the case of intersection with a manifold of identically zero curvature (straight lines in dimension 2, planes in dimension 3). As the case of \(\Sigma \) contained in a plane is excluded by the assumptions of Theorem 1.3, the upper bound of \(A^2\cdot \pi ^2/30\) for the leading coefficient in (1.14) is a supremum rather than a maximum, as in [36] (see Sect. 7).
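For the round sphere, the lower bound in (1.15) has a quick probabilistic interpretation: the unit normal at a uniformly random point of the sphere is itself uniform, and for independent uniform unit vectors \(n,n'\) one has \(\mathbb {E}\langle n,n'\rangle ^2=1/3\), whence \(\mathcal {I}=A^2/3\). A Monte Carlo check (our own sketch; the function name is hypothetical):

```python
import math, random

def sphere_I_over_A2(n_pairs=100_000, seed=1):
    """Estimate I / A^2 = E[<n, n'>^2] for the round sphere, with n, n'
    the unit normals at two independent uniformly random points."""
    rng = random.Random(seed)
    def unit():
        v = [rng.gauss(0, 1) for _ in range(3)]  # uniform direction via Gaussians
        s = math.sqrt(sum(c * c for c in v))
        return [c / s for c in v]
    total = 0.0
    for _ in range(n_pairs):
        total += sum(p * q for p, q in zip(unit(), unit())) ** 2
    return total / n_pairs

assert abs(sphere_I_over_A2() - 1 / 3) < 0.01  # lower bound of (1.15) is attained
```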

Similarly to [36, 37], the above results on expectation and variance have the following consequence.

Theorem 1.5

Let \(\Sigma \subset \mathbb {T}^3\) be a surface as in Definition 1.1, of area A, with nowhere vanishing Gauss–Kronecker curvature. Then the nodal intersection length \(\mathcal {L}\) satisfies, for all \(\epsilon >0\),

$$\begin{aligned} \lim _{\begin{array}{c} m\rightarrow \infty \\ m\not \equiv 0,4,7 \pmod 8 \end{array}}\mathbb {P}\left( \left| \frac{\mathcal {L}}{\pi \sqrt{m}A/\sqrt{3}}-1\right| >\epsilon \right) =0. \end{aligned}$$

Proof

Apply the Chebyshev–Markov inequality together with (1.12) and (1.14). \(\square \)
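To spell out the estimate: by (1.12) and (1.14), with \(3\mathcal {I}-A^2\) bounded (Proposition 1.4) and \(N\gg m^{1/2-\epsilon }\) by (1.2), the Chebyshev–Markov inequality gives

$$\begin{aligned} \mathbb {P}\left( \left| \frac{\mathcal {L}}{\mathbb {E}[\mathcal {L}]}-1\right| >\epsilon \right) \le \frac{\text {Var}(\mathcal {L})}{\epsilon ^2\,\mathbb {E}[\mathcal {L}]^2} \ll _\Sigma \frac{1}{\epsilon ^2}\cdot \frac{m/N}{m} =\frac{1}{\epsilon ^2 N}\rightarrow 0, \end{aligned}$$

since \(N\rightarrow \infty \) along \(m\not \equiv 0,4,7 \pmod 8\).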

1.5 Outline of the Proofs and Plan of the Paper

Throughout we apply many ideas of [26, 35, 36, 37]. The arithmetic random wave \(F^{(d)}:\mathbb {T}^d\rightarrow \mathbb {R}\) (1.6) is a random field. The number of nodal intersections \(\mathcal {Z}^{(d)}\) (1.7) against a curve is the number of zeros of a process, namely the restriction of the random wave F to the curve [34, 36, 37]. For a smooth process \(p:T\rightarrow \mathbb {R}\) defined on an appropriate parameter set \(T\subset \mathbb {R}\), moments of the number of zeros may be computed, under certain assumptions, via Kac–Rice formulas [2, 13].

More generally, given a smooth random field \(P:T\subset _{\text {open}}\mathbb {R}^n\rightarrow \mathbb {R}^{n'}\), let \(\mathcal {V}\) be the Hausdorff measure of its zero set. When \(n-n'=0\), \(\mathcal {V}\) is the number of zeros; when \(n-n'=1\), \(\mathcal {V}\) is the nodal length of P; when \(n-n'=2\), \(\mathcal {V}\) is the nodal area, and so forth. Only the case \(n\ge n'\) is interesting, since otherwise the zero set of P is almost surely empty. One may compute, under appropriate assumptions, the moments of \(\mathcal {V}\) by means of Kac–Rice formulas [2, Theorems 6.2, 6.3, 6.8 and 6.9]. The latter formulas, however, are not applicable to our problem, as the following example illustrates.

Example 1.6

Assume that the surface \(\Sigma \subset \mathbb {T}^3\) is the graph of a differentiable function, in the sense that it admits everywhere the parametrisation

$$\begin{aligned} \gamma :U\subset \mathbb {R}^2&\rightarrow \Sigma , \nonumber \\ (u,v)&\mapsto (u,v,h(u,v)), \end{aligned}$$
(1.16)

with \(h\in C^2(U)\). We restrict F to \(\Sigma \) and obtain the random field \(f:U\subset \mathbb {R}^2\rightarrow \mathbb {R}\),

$$\begin{aligned} f(u,v):=F(\gamma (u,v))=\frac{1}{\sqrt{N}} \sum _{\mu \in \mathcal {E}} a_{\mu } e^{2\pi i\langle \mu ,(u,v,h(u,v))\rangle }. \end{aligned}$$

The zero line of f is not necessarily (isometric to) the nodal intersection curve \(\{x\in \mathbb {T}^3 : F(x)=0\}\): rather, it is isometric to the projection of the nodal intersection curve onto the domain U of the parametrisation \(\gamma \). Therefore, the application of Kac–Rice formulas for f yields the moments of the length of the projected curve (see [2, Theorem 11.3]), not of the intersection curve itself: this is in marked contrast to what happens in the case of the nodal intersections number [34, 36, 37].

Our approach to the problem begins with the derivation of Kac–Rice formulas for a random field defined on a surface, which is done in Sect. 2 (also see [28, Theorem 5.3] and [29, Theorems 4.1 and 4.4]). Applying the Kac–Rice formula for the expectation (Proposition 2.4 to follow), we will prove Proposition 1.2.

For the nodal intersection length variance, we apply Proposition 2.7 and subsequently reduce our problem to estimating the second moment of the covariance function

$$\begin{aligned} r(\sigma ,\sigma '):=\mathbb {E}[F(\sigma )F(\sigma ')] \end{aligned}$$
(1.17)

and of its first- and second-order derivatives. We thus develop an approximate Kac–Rice formula. To state it, we need some extra notation: first, \(M:=4\pi ^2m/3\). Moreover, \(X, X', Y, Y'\) are appropriate \(2\times 2\) matrices, functions of \((\sigma ,\sigma ')\), depending on r, its derivatives, and the surface \(\Sigma \) (see Definition 3.3).

Proposition 1.7

(Approximate Kac–Rice formula). Let \(\Sigma \subset \mathbb {T}^3\) be a surface as in Definition 1.1, with nowhere vanishing Gauss–Kronecker curvature. Then we have

$$\begin{aligned} \text {Var}(\mathcal {L})= & {} M\left[ \iint _{\Sigma ^2} \left( \frac{1}{8}r^2 +\frac{tr(X)}{16} +\frac{tr(X')}{16} +\frac{tr(Y'Y)}{32} \right) \mathrm{d}\sigma \mathrm{d}\sigma ' \right. \\&\left. +\,O_\Sigma \left( \frac{1}{m^{11/16-\epsilon }}\right) \right] . \end{aligned}$$

The proof of this result takes up the whole of Sect. 3. Our problem of computing the nodal intersection length variance (1.14) is thus reduced to estimating the second moment of the covariance function r (1.17) and of its various first- and second-order derivatives, which is carried out in Sect. 5. The error term in Proposition 1.7 comes from bounding the fourth moment of r and of its derivatives: this is done in Sect. 6. To study the second and fourth moments of r, one needs to understand various properties of the lattice point set \(\mathcal {E}\) (1.1), covered in Sect. 4. In Sect. 7, we study the leading term of the nodal intersection length variance (1.14) and establish Proposition 1.4. “Appendix A” is dedicated to proving several auxiliary lemmas.

1.6 Future Directions

As discussed in Sect. 1.3, for the problem of nodal intersections against a curve in two dimensions, Rossi and Wigman [34] investigated the case of static curves and found the precise asymptotic behaviour of the variance, for a density one sequence of energies. It would be interesting to find, if any exist, families of ‘static surfaces’ (other than spheres and hemispheres) satisfying

$$\begin{aligned} \mathcal {I}=\frac{A^2}{3}, \end{aligned}$$

and study the variance asymptotics for these.

In the setting of higher-dimensional standard flat tori \(\mathbb {T}^d\) for \(d\ge 4\), to the best of our knowledge the only result available in the literature is the expectation (1.8). Higher dimensions and intersections against higher-dimensional toral submanifolds are likely to be interesting problems at the interface of number theory and geometry.

2 Kac–Rice Formulas for Random Fields Defined on a Surface

2.1 Background

Consider a random field P defined on a parameter set \(T\subset _{\text {open}}\mathbb {R}^n\) and taking values in \(\mathbb {R}\). We always assume that the realisations, or sample paths, \(t\mapsto P(t)\) of our random field are almost surely continuous. A random field \(P=(P_t)_t\), \(t\in T\), is Gaussian if, for all \(k=1,2,\ldots \) and every \(t_1,\ldots ,t_k\in T\), the random vectors

$$\begin{aligned} (P(t_1),\ldots ,P(t_k)), \end{aligned}$$

called finite-dimensional distributions of P, are multivariate Gaussian. A centred (i.e. mean 0) Gaussian field may be completely described by its covariance function (see Kolmogorov’s theorem [13, section 3.3] or [2, section 1.2]).

The arithmetic random wave (1.4) is a centred Gaussian stationary random field, in the sense that its covariance function

$$\begin{aligned} r(x,y) = \frac{1}{N}\sum _{\mu \in \mathcal {E}} e^{2\pi i\langle \mu ,x-y\rangle }, \qquad x,y\in \mathbb {T}^3 \end{aligned}$$

depends on the difference \(x-y\) only. The restriction of F to \(\Sigma \)

$$\begin{aligned} F(\sigma )=\frac{1}{\sqrt{N}} \sum _{\mu \in \mathcal {E}} a_{\mu } e^{2\pi i\langle \mu ,\sigma \rangle }, \qquad \sigma \in \Sigma \end{aligned}$$

is a centred Gaussian random field, with unit variance and covariance function

$$\begin{aligned} r(\sigma ,\sigma ') = \frac{1}{N}\sum _{\mu \in \mathcal {E}} e^{2\pi i\langle \mu ,\sigma -\sigma '\rangle }, \qquad \sigma ,\sigma '\in \Sigma . \end{aligned}$$
(2.1)
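As a sanity check (ours, not part of the paper), the covariance (2.1) can be evaluated directly: it is real, since \(\mathcal {E}\) is symmetric under \(\mu \mapsto -\mu \), and equals 1 on the diagonal, reflecting the unit variance of F. The helper names below are hypothetical.

```python
import cmath, math
from itertools import product

def lattice_points(m):
    """All integer vectors mu with |mu|^2 = m."""
    r = math.isqrt(m)
    return [v for v in product(range(-r, r + 1), repeat=3)
            if v[0]**2 + v[1]**2 + v[2]**2 == m]

def covariance(m, x, y):
    """The covariance r(x, y) of (2.1); note it depends on x - y only."""
    E = lattice_points(m)
    d = [xi - yi for xi, yi in zip(x, y)]
    return sum(cmath.exp(2j * math.pi * sum(c * t for c, t in zip(mu, d)))
               for mu in E) / len(E)

r = covariance(5, (0.3, 0.1, 0.7), (0.2, 0.5, 0.4))
assert abs(r.imag) < 1e-12                # r is real: E_m is symmetric under mu -> -mu
assert abs(covariance(5, (0.1, 0.2, 0.3), (0.1, 0.2, 0.3)) - 1) < 1e-12  # unit variance
```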

As mentioned in Sect. 1.5, for a process p (i.e. a random field with a one-dimensional parameter set) satisfying appropriate assumptions, moments of the number of zeros may be computed via Kac–Rice formulas [1, 2, 13]. More generally, for a random field defined on \(\mathbb {R}^n\), there exist under certain conditions Kac–Rice formulas computing the moments of the Hausdorff measure \(h_{n-1}\) of its (\((n-1)\)-dimensional) zero set [2, Chapter 6], [9, Section 3]. For a real-valued random field \(P:\Sigma \rightarrow \mathbb {R}\) defined on a smooth surface \(\Sigma \), consider its nodal length,

$$\begin{aligned} h_1\{\sigma \in \Sigma : P(\sigma )=0\}. \end{aligned}$$

The formulas of [2] are not applicable to this case, since in particular \(\Sigma \) is not a set of full measure in \(\mathbb {R}^3\) (also recall Example 1.6). Given a random field \(\mathcal {X}:\mathbb {R}^3\rightarrow \mathbb {R}\) and a surface \(\Sigma \subset \mathbb {R}^3\) satisfying appropriate assumptions, we derive Kac–Rice formulas for the first and second moments of the nodal length of \(P=\left. \mathcal {X}\right| _\Sigma \) (also see [28, Theorem 5.3] and [29, Theorems 4.1 and 4.4]).

2.2 Co-area Formula

Firstly, we require a general (i.e. concerning manifolds) version of the co-area formula.

Proposition 2.1

(General co-area formula [21, Theorem 3.1]). Let X and Y be separable \(C^1\)-smooth Riemannian manifolds with

$$\begin{aligned} \dim (X)=n\ge k=\dim (Y), \end{aligned}$$

and \(\varphi :X\rightarrow Y\) be a Lipschitz map. We denote by \(J\varphi \) the Jacobian of \(\varphi \). Then

$$\begin{aligned} \int _B J\varphi (x)\mathrm{d}h_nx = \int _Y h_{n-k}(B\cap \varphi ^{-1}\{y\})\mathrm{d}h_ky \end{aligned}$$

whenever B is an \(h_n\)-measurable subset of X, and consequently

$$\begin{aligned} \int _X g(x)J\varphi (x)\mathrm{d}h_nx = \int _Y \left[ \int _{\varphi ^{-1}\{y\}} g(x)\mathrm{d}h_{n-k}x \right] \mathrm{d}h_ky \end{aligned}$$

whenever g is an \(h_n\)-integrable function on X.

Next we require the definition of surface gradient. For a differentiable map \(\psi :\mathbb {R}^3\rightarrow \mathbb {R}\) and a point \(x\in \mathbb {R}^3\), we employ the standard notation

$$\begin{aligned} \nabla \psi (x)=\left( \frac{\partial \psi }{\partial x_1},\frac{\partial \psi }{\partial x_2},\frac{\partial \psi }{\partial x_3}\right) \end{aligned}$$

for the gradient of \(\psi \) in \(\mathbb {R}^3\).

Definition 2.2

Fix a surface \(\Sigma \subset \mathbb {R}^3\) as in Definition 1.1. For every point \(\sigma \in \Sigma \), denote by \(T_\sigma \Sigma \) the tangent plane to the surface at \(\sigma \). Given a differentiable map \(G:\mathbb {R}^3\rightarrow \mathbb {R}\), consider its restriction to \(\Sigma \). At each point \(\sigma \in \Sigma \), we define the surface gradient

$$\begin{aligned} \nabla _\Sigma G(\sigma ), \end{aligned}$$

the projection of \(\nabla G(\sigma )\) onto \(T_\sigma \Sigma \).

The surface gradient gives the direction of maximal variation of G at \(\sigma \) (for further details, see, e.g. [1, chapter 7] or [15, section 2.5]). We record that \(JG(\sigma )=|\nabla _\Sigma G(\sigma )|\).
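Concretely, if n is the unit normal at \(\sigma \), the surface gradient is the tangential part \(\nabla G-\langle \nabla G,n\rangle n\). A minimal sketch of this projection (our own; the names are hypothetical):

```python
def surface_gradient(grad, normal):
    """Project an ambient gradient onto the tangent plane with unit normal n:
    grad_Sigma G = grad G - <grad G, n> n  (Definition 2.2)."""
    dot = sum(g * n for g, n in zip(grad, normal))
    return [g - dot * n for g, n in zip(grad, normal)]

n = (0.0, 0.0, 1.0)  # unit normal to the horizontal plane {x3 = const}
assert surface_gradient((0.0, 0.0, 1.0), n) == [0.0, 0.0, 0.0]  # G = x3: normal part killed
assert surface_gradient((1.0, 0.0, 0.0), n) == [1.0, 0.0, 0.0]  # G = x1: tangential part kept
```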

Proposition 2.3

(Deterministic nodal length). Let \(\Sigma \subset \mathbb {R}^3\) be a surface as in Definition 1.1 and \(G:\Sigma \rightarrow \mathbb {R}\) be a smooth map satisfying \(G(\sigma )=0 \Rightarrow \nabla _{\Sigma } G(\sigma )\ne 0\) for every \(\sigma \in \Sigma \). Denote

$$\begin{aligned} \mathcal {L}=\mathcal {L}(G,\Sigma ):=h_1\{\sigma \in \Sigma : G(\sigma )=0\} \end{aligned}$$
(2.2)

the zero-length of G, where \(h_1\) is Hausdorff measure. Then we have

$$\begin{aligned} \mathcal {L}=\lim _{\epsilon \rightarrow 0}\mathcal {L}_\epsilon \end{aligned}$$
(2.3)

with

$$\begin{aligned} \mathcal {L}_\epsilon =\mathcal {L}_\epsilon (G,\Sigma ):= \frac{1}{2\epsilon } \int _\Sigma \chi \left( \frac{G(\sigma )}{\epsilon }\right) |\nabla _\Sigma G(\sigma )|\mathrm{d}\sigma , \end{aligned}$$
(2.4)

where \(\chi \) is the indicator function of the interval \([-1,1]\).

We defer the proof of Proposition 2.3 to “Appendix A”.

2.3 Kac–Rice for the Expectation

Proposition 2.4

(Kac–Rice for the expected length). Let \(\mathcal {X}:\mathbb {R}^3\rightarrow \mathbb {R}\) be a Gaussian random field having \(C^1\) paths, and \(\Sigma \subset \mathbb {R}^3\) a surface as in Definition 1.1. Define \(\mathcal {L}(\mathcal {X},\Sigma )\) and \(\mathcal {L}_\epsilon (\mathcal {X},\Sigma )\) as in (2.2) and (2.4), respectively. Suppose that, for all \(\sigma \in \Sigma \), the distribution of the random variable \(\mathcal {X}(\sigma )\) is non-degenerate. Moreover, assume that \(\mathcal {L}_\epsilon \) is uniformly bounded and that for every \(\sigma \in \Sigma \), the quantity

$$\begin{aligned} \frac{1}{2\epsilon }\mathbb {E}\left[ \chi \left( {\frac{\mathcal {X}(\sigma )}{\epsilon }}\right) \cdot |(\nabla _\Sigma \mathcal {X})(\sigma )|\right] \end{aligned}$$
(2.5)

is bounded as a function of \(\sigma \) uniformly in \(\epsilon \). Then, we have

$$\begin{aligned} \mathbb {E}[\mathcal {L}]=\int _\Sigma K_{1;\Sigma }(\sigma )\mathrm{d}\sigma \end{aligned}$$
(2.6)

where \(K_{1;\Sigma }:\Sigma \rightarrow \mathbb {R}\),

$$\begin{aligned} K_{1;\Sigma }(\sigma ):= \phi _{\mathcal {X}(\sigma )}(0) \cdot \, \mathbb {E}\left[ \left| (\nabla _\Sigma \mathcal {X})(\sigma )\right| \ \big | \ \mathcal {X}(\sigma )=0\right] \end{aligned}$$
(2.7)

and \(\phi _{\mathcal {X}(\sigma )}\) is the probability density function of the random variable \(\mathcal {X}(\sigma )\).

Proof

We follow [35, Proposition 4.1]. By Ylvisaker’s theorem [2, Theorem 1.21], one has

$$\begin{aligned} \mathbb {P}(\exists \sigma \in \Sigma : \mathcal {X}(\sigma )=0, \nabla _{\Sigma } \mathcal {X}(\sigma )=0)=0. \end{aligned}$$

We take expectations on both sides of (2.3):

$$\begin{aligned} \mathbb {E}[\mathcal {L}]=\mathbb {E}\left[ \lim _{\epsilon \rightarrow 0}\frac{1}{2\epsilon } \int _\Sigma \chi \left( \frac{\mathcal {X}(\sigma )}{\epsilon }\right) |\nabla _\Sigma \mathcal {X}(\sigma )|\hbox {d}\sigma \right] . \end{aligned}$$

As \(\mathcal {L}_\epsilon \) is uniformly bounded by assumption, we may apply the dominated convergence theorem:

$$\begin{aligned} \mathbb {E}[\mathcal {L}]=\lim _{\epsilon \rightarrow 0}\frac{1}{2\epsilon }\mathbb {E}\left[ \int _\Sigma \chi \left( {\frac{\mathcal {X}(\sigma )}{\epsilon }}\right) \cdot \, |(\nabla _\Sigma \mathcal {X})(\sigma )|\hbox {d}\sigma \right] . \end{aligned}$$

By Fubini’s theorem,

$$\begin{aligned} \mathbb {E}[\mathcal {L}]=\lim _{\epsilon \rightarrow 0}\frac{1}{2\epsilon }\int _\Sigma \mathbb {E}\left[ \chi \left( {\frac{\mathcal {X}(\sigma )}{\epsilon }}\right) \cdot \, |(\nabla _\Sigma \mathcal {X})(\sigma )|\right] \hbox {d}\sigma . \end{aligned}$$

By assumption, the quantity (2.5) is bounded as a function of \(\sigma \) uniformly in \(\epsilon \); hence, we may apply the dominated convergence theorem to exchange the order of the limit and the integral over \(\Sigma \):

$$\begin{aligned} \mathbb {E}[\mathcal {L}]=\int _\Sigma \lim _{\epsilon \rightarrow 0}\frac{1}{2\epsilon } \mathbb {E}\left[ \chi \left( {\frac{\mathcal {X}(\sigma )}{\epsilon }}\right) \cdot |(\nabla _\Sigma \mathcal {X})(\sigma )|\right] \hbox {d}\sigma . \end{aligned}$$

Via dominated convergence as \(\epsilon \rightarrow 0\),

$$\begin{aligned}&\lim _{\epsilon \rightarrow 0}\frac{1}{2\epsilon } \mathbb {E}\left[ \chi \left( {\frac{\mathcal {X}(\sigma )}{\epsilon }}\right) \cdot |(\nabla _\Sigma \mathcal {X})(\sigma )|\right] \\&\quad = \lim _{\epsilon \rightarrow 0}\frac{1}{2\epsilon }\int _{-\epsilon }^{\epsilon }\mathbb {E}\left[ \left| (\nabla _\Sigma {\mathcal {X}})(\sigma )\right| \big |{\mathcal {X}}(\sigma )=x\right] \phi _{{\mathcal {X}}(\sigma )}(x)\hbox {d}x \nonumber \\&\quad = \phi _{{\mathcal {X}}(\sigma )}(0) \cdot \mathbb {E}\left[ \left| (\nabla _\Sigma {\mathcal {X}})(\sigma )\right| \big |{\mathcal {X}}(\sigma )=0\right] =K_{1;\Sigma }(\sigma ) \end{aligned}$$

hence (2.6). \(\square \)

Let us compare the quantity \(K_{1;\Sigma }\) (2.7) to the zero density function (or first intensity) \(K_1:\mathbb {R}^3\rightarrow \mathbb {R}\),

$$\begin{aligned} K_1(x):=\phi _{\mathcal {X}(x)}(0)\cdot \mathbb {E}\left[ |\nabla \mathcal {X}(x)|\ \big | \ \mathcal {X}(x)=0\right] \end{aligned}$$

of the random field \(\mathcal {X}:\mathbb {R}^3\rightarrow \mathbb {R}\): the zero density function has the gradient \(\nabla \) of \(\mathbb {R}^3\), in place of the surface gradient \(\nabla _\Sigma \), in its definition. We will call \(K_{1;\Sigma }\) the ‘zero density of \(\left. \mathcal {X}\right| _\Sigma \)’, as a generalisation of \(K_1\) to random fields defined on a manifold.

2.4 The Proof of Proposition 1.2

Recall the expression of the arithmetic random wave F (1.4). The following lemma, that will be proven in “Appendix A”, shows that F satisfies one of the hypotheses of Proposition 2.4.

Lemma 2.5

Let \(F=F_m\) be an arithmetic random wave, and \(\Sigma \subset \mathbb {T}^3\) a surface as in Definition 1.1. Then we have

$$\begin{aligned} \mathcal {L}_\epsilon (F,\Sigma )\le 18\sqrt{m}, \end{aligned}$$

with \(\mathcal {L}_\epsilon \) as in (2.4).

We now compute the zero density \(K_{1;\Sigma }\) for arithmetic random waves.

Lemma 2.6

Given the random field \(\mathcal {X}=F\), define the function \(K_{1;\Sigma }\) as in (2.7). Then we have

$$\begin{aligned} K_{1;\Sigma }(\sigma )\equiv \frac{\pi }{\sqrt{3}}\sqrt{m}. \end{aligned}$$
(2.8)

Before proving Lemma 2.6, we will complete the proof of Proposition 1.2.

Proof of Proposition 1.2

We need to show that the hypotheses of Proposition 2.4 hold for the random field \(\mathcal {X}=F\). First, the non-degeneracy condition is met, as F has unit variance. The boundedness of \(\mathcal {L}_\epsilon (F,\Sigma )\) is shown in Lemma 2.5.

Since the quantity

$$\begin{aligned} \frac{1}{2\epsilon }\mathbb {E}\left[ \chi \left( {\frac{F(\sigma )}{\epsilon }}\right) \cdot |\nabla F(\sigma )|\right] \end{aligned}$$

is bounded as a function of \(\sigma \) independently of \(\epsilon \) [35, proof of Proposition 4.1] and since, clearly, \(|\nabla _{\Sigma }F(\sigma )|\le |\nabla F(\sigma )|\), we also obtain the boundedness of

$$\begin{aligned} \frac{1}{2\epsilon }\mathbb {E}\left[ \chi \left( {\frac{F(\sigma )}{\epsilon }}\right) \cdot |\nabla _\Sigma F(\sigma )|\right] \end{aligned}$$

in \(\sigma \), independently of \(\epsilon \). Substituting the expression (2.8) of the zero density \(K_{1;\Sigma }\) into (2.6) yields

$$\begin{aligned} \mathbb {E}[\mathcal {L}] =\int _{\Sigma }\frac{\pi }{\sqrt{3}}\sqrt{m} \ \hbox {d}\sigma =\frac{\pi }{\sqrt{3}}\sqrt{m}\cdot A. \end{aligned}$$

\(\square \)

Proof of Lemma 2.6

We write the zero density function (2.7) for the Gaussian field F:

$$\begin{aligned} K_{1;\Sigma }(\sigma )= \frac{1}{\sqrt{2\pi }} \cdot \mathbb {E}\left[ \left| (\nabla _\Sigma F)(\sigma )\right| \ \big | \ F(\sigma )=0\right] . \end{aligned}$$
(2.9)

Define the vector field

$$\begin{aligned} a:\Sigma&\rightarrow \mathbb {R}^3\nonumber \\ \sigma&\mapsto (\nabla _\Sigma F)(\sigma ). \end{aligned}$$
(2.10)

Since \(F(\sigma )\) and \(a(\sigma )\) are independent (as F has unit variance), we may rewrite (2.9) as

$$\begin{aligned} K_{1;\Sigma }(\sigma ) =\frac{1}{\sqrt{2\pi }}\cdot \mathbb {E}[|a(\sigma )|]. \end{aligned}$$

One has \(\nabla F(x)\sim \mathcal {N}(0,MI_3)\) for each \(x\in \mathbb {T}^3\) [35, (4.1)]. Since at each \(\sigma \) the surface gradient a is the projection of \(\nabla F\) onto \(T_\sigma \Sigma \) (see Definition 2.2), one has

$$\begin{aligned} K_{1;\Sigma }(\sigma )\equiv \frac{\sqrt{M}}{\sqrt{2\pi }}\mathbb {E}[|\hat{a}|], \qquad \hat{a}\sim \mathcal {N}(0,I_2), \end{aligned}$$

so that universally

$$\begin{aligned} K_{1;\Sigma }(\sigma )\equiv \frac{\sqrt{M}}{\sqrt{2\pi }}\frac{\sqrt{2}}{2}\sqrt{\pi } =\frac{\pi }{\sqrt{3}}\sqrt{m}. \end{aligned}$$

\(\square \)
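The last step of the proof above is a standard computation, included for the reader's convenience: \(|\hat{a}|\) is Rayleigh distributed, so its mean can be evaluated explicitly,

$$\begin{aligned} \mathbb {E}[|\hat{a}|]=\int _0^\infty t\cdot te^{-t^2/2}\hbox {d}t=\sqrt{\frac{\pi }{2}}, \qquad \text {whence} \qquad K_{1;\Sigma }\equiv \frac{\sqrt{M}}{\sqrt{2\pi }}\sqrt{\frac{\pi }{2}}=\frac{\sqrt{M}}{2}=\pi \sqrt{\frac{m}{3}}, \end{aligned}$$

in agreement with (2.8), since \(M=4\pi ^2m/3\).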

2.5 Kac–Rice for the Second Moment

Proposition 2.7

(Kac–Rice for the second moment). Let \(\mathcal {X}:\mathbb {R}^3\rightarrow \mathbb {R}\) be a Gaussian random field having \(C^1\) paths, and \(\Sigma \subset \mathbb {R}^3\) a surface as in Definition 1.1. Define \(\mathcal {L}(\mathcal {X},\Sigma )\) and \(\mathcal {L}_\epsilon (\mathcal {X},\Sigma )\) as in (2.2) and (2.4), respectively. Assume that \(\mathcal {L}_\epsilon \) is uniformly bounded and that for almost all \(\sigma ,\sigma '\in \Sigma \), the quantity

$$\begin{aligned} \mathcal {L}_{\epsilon _1,\epsilon _2}(\sigma ,\sigma '):=\frac{1}{4\epsilon _1\epsilon _2} \mathbb {E}\left[ \chi \left( \frac{\mathcal {X}(\sigma )}{\epsilon _1}\right) \chi \left( \frac{\mathcal {X}(\sigma ')}{\epsilon _2}\right) |(\nabla _\Sigma \mathcal {X})(\sigma )||(\nabla _\Sigma \mathcal {X})(\sigma ')| \right] \nonumber \\ \end{aligned}$$
(2.11)

is bounded uniformly in \(\epsilon _1,\epsilon _2\). Suppose further that for almost all \(\sigma ,\sigma '\in \Sigma \), the distribution of the random vector

$$\begin{aligned} \left( \mathcal {X}(\sigma ), \mathcal {X}(\sigma ') \right) \end{aligned}$$

is non-degenerate. Then we have

$$\begin{aligned} \mathbb {E}[\mathcal {L}^2]=\iint _{\Sigma ^2}\tilde{K}_{2;\Sigma }(\sigma ,\sigma ') \mathrm{d}\sigma \mathrm{d}\sigma ' \end{aligned}$$

where \(\tilde{K}_{2;\Sigma }:\Sigma \times \Sigma \rightarrow \mathbb {R}\),

$$\begin{aligned} \tilde{K}_{2;\Sigma }(\sigma ,\sigma '):= & {} \phi _{\mathcal {X}(\sigma ),\mathcal {X}(\sigma ')}(0,0) \nonumber \\&\cdot \mathbb {E}\left[ \left| (\nabla _\Sigma \mathcal {X})(\sigma )\right| \cdot \left| (\nabla _\Sigma \mathcal {X})(\sigma ')\right| \big | \mathcal {X}(\sigma )=\mathcal {X}(\sigma ')=0\right] \end{aligned}$$
(2.12)

and \(\phi _{\mathcal {X}(\sigma ),\mathcal {X}(\sigma ')}\) is the joint probability density function of the random vector \((\mathcal {X}(\sigma ),\mathcal {X}(\sigma '))\).

Proof

We follow [35, Proposition 5.2]. By Ylvisaker’s theorem [2, Theorem 1.21], one has

$$\begin{aligned} \mathbb {P}(\exists \sigma \in \Sigma : \mathcal {X}(\sigma )=0, \nabla _{\Sigma } \mathcal {X}(\sigma )=0)=0. \end{aligned}$$

Therefore (recall (2.4)),

$$\begin{aligned} \mathbb {E}[\mathcal {L}^2]= & {} \mathbb {E}\left[ \lim _{\epsilon _1,\epsilon _2\rightarrow 0}\mathcal {L}_{\epsilon _1}\mathcal {L}_{\epsilon _2}\right] =\mathbb {E}\left[ \lim _{\epsilon _1,\epsilon _2\rightarrow 0} \frac{1}{2\epsilon _1} \int _\Sigma \chi \left( \frac{\mathcal {X}(\sigma )}{\epsilon _1}\right) |(\nabla _\Sigma \mathcal {X})(\sigma )|\hbox {d}\sigma \right. \\&\left. \cdot \frac{1}{2\epsilon _2} \int _\Sigma \chi \left( \frac{\mathcal {X}(\sigma ')}{\epsilon _2}\right) |(\nabla _\Sigma \mathcal {X})(\sigma ')|\hbox {d}\sigma ' \right] . \end{aligned}$$

As \(\mathcal {L}_\epsilon \) is uniformly bounded by assumption, we may apply the dominated convergence theorem:

$$\begin{aligned} \mathbb {E}[\mathcal {L}^2]= & {} \lim _{\epsilon _1,\epsilon _2\rightarrow 0}\frac{1}{4\epsilon _1\epsilon _2} \cdot \mathbb {E}\left[ \iint _{\Sigma ^2} \chi \left( \frac{\mathcal {X}(\sigma )}{\epsilon _1}\right) \chi \left( \frac{\mathcal {X}(\sigma ')}{\epsilon _2}\right) \right. \\&\left. \cdot |(\nabla _\Sigma \mathcal {X})(\sigma )||(\nabla _\Sigma \mathcal {X})(\sigma ')| \hbox {d}\sigma \hbox {d}\sigma ' \right] . \end{aligned}$$

By Fubini’s theorem,

$$\begin{aligned} \mathbb {E}[\mathcal {L}^2]= & {} \lim _{\epsilon _1,\epsilon _2\rightarrow 0}\frac{1}{4\epsilon _1\epsilon _2} \cdot \iint _{\Sigma ^2} \mathbb {E}\left[ \chi \left( \frac{\mathcal {X}(\sigma )}{\epsilon _1}\right) \chi \left( \frac{\mathcal {X}(\sigma ')}{\epsilon _2}\right) \right. \\&\left. \cdot |(\nabla _\Sigma \mathcal {X})(\sigma )||(\nabla _\Sigma \mathcal {X})(\sigma ')| \right] \hbox {d}\sigma \hbox {d}\sigma '. \end{aligned}$$

Next, we exchange the order of taking the limit and the integration over \(\Sigma ^2\), using the boundedness of (2.11) uniformly in \(\epsilon _1,\epsilon _2\) together with the dominated convergence theorem: an upper bound valid for almost all \(\sigma ,\sigma '\) suffices, as changing the values of a function on a set of measure zero affects neither its integrability nor the value of its integral. Finally, as

$$\begin{aligned}&\lim _{\epsilon _1,\epsilon _2\rightarrow 0}\frac{1}{4\epsilon _1\epsilon _2} \mathbb {E}\left[ \chi \left( \frac{\mathcal {X}(\sigma )}{\epsilon _1}\right) \chi \left( \frac{\mathcal {X}(\sigma ')}{\epsilon _2}\right) |(\nabla _\Sigma \mathcal {X})(\sigma )||(\nabla _\Sigma \mathcal {X})(\sigma ')| \right] \\&\quad = \phi _{\mathcal {X}(\sigma ),\mathcal {X}(\sigma ')}(0,0) \cdot \mathbb {E}\left[ \left| (\nabla _\Sigma \mathcal {X})(\sigma )\right| \left| (\nabla _\Sigma \mathcal {X})(\sigma ')\right| \mid \mathcal {X}(\sigma )=\mathcal {X}(\sigma ')=0\right] \\&\quad =\tilde{K}_{2;\Sigma }(\sigma ,\sigma '), \end{aligned}$$

we obtain

$$\begin{aligned} \mathbb {E}[\mathcal {L}^2] =\iint _{\Sigma ^2}\tilde{K}_{2;\Sigma }(\sigma ,\sigma ')\hbox {d}\sigma \hbox {d}\sigma ' \end{aligned}$$

as claimed. \(\square \)

The quantity \(\tilde{K}_{2;\Sigma }\) (2.12) will be called the ‘two-point correlation function of \(\left. \mathcal {X}\right| _\Sigma \)’, as a generalisation of the two-point function, or second intensity, \(\tilde{K_2}:\mathbb {R}^3\times \mathbb {R}^3\rightarrow \mathbb {R}\),

$$\begin{aligned} \tilde{K_2}(x,y):=\phi _{\mathcal {X}(x),\mathcal {X}(y)}(0,0)\cdot \mathbb {E}\left[ |\nabla \mathcal {X}(x)|\cdot |\nabla \mathcal {X}(y)| \ \big | \ \mathcal {X}(x)=\mathcal {X}(y)=0\right] , \end{aligned}$$

to random fields defined on a manifold.

Proposition 2.7 is a Kac–Rice formula for the usual second moment and not for the factorial second moment. This is consistent with Kac–Rice formulas for random fields defined on \(\mathbb {R}^n\) [2, p. 134], as may be seen, for instance, by comparing [2, Theorem 6.3] with [2, Theorem 6.9].

3 Approximate Kac–Rice Formula: Proof of Proposition 1.7

In the present section, we establish the approximate Kac–Rice formula of Proposition 1.7. We will require a few technical lemmas: the proof of these is deferred to “Appendix A”.

3.1 An Expression for the (Scaled) Two-Point Correlation Function \(K_{2;\Sigma }\)

Recall our notation (2.10)

$$\begin{aligned} a(\sigma ):=(\nabla _\Sigma F)(\sigma ) \end{aligned}$$

for the surface gradient and let \(a'=a(\sigma ')\). One has

$$\begin{aligned} a_i=(\nabla F)_i-\langle \nabla F,\overrightarrow{n}\rangle n_i, \qquad \qquad i=1,2,3, \end{aligned}$$
(3.1)

where \(\overrightarrow{n}=(n_1,n_2,n_3)\) is the unit normal to the surface at the point \(\sigma \). At least one coordinate of \(\overrightarrow{n}\), w.l.o.g. \(n_3\), is nonzero; since \(\langle a,\overrightarrow{n}\rangle =0\), it follows that

$$\begin{aligned} a_3=-\frac{n_1}{n_3}a_1-\frac{n_2}{n_3}a_2. \end{aligned}$$

We may thus write

$$\begin{aligned}&\tilde{K}_{2;\Sigma }(\sigma ,\sigma ') =\phi _{F(\sigma ),F(\sigma ')}(0,0) \cdot \mathbb {E}[|a(\sigma )|\cdot |a(\sigma ')| \ \big | \ F(\sigma )=F(\sigma ')=0] \nonumber \\&\quad =\frac{1}{2\pi \sqrt{1-r^2}}\mathbb {E} \left[ \left\{ \left( (n_1^2+n_3^2)a_1^2+(n_2^2+n_3^2)a_2^2+2n_1n_2a_1a_2\right) /n_3^2\right\} ^{1/2}\cdot \right. \nonumber \\&\qquad \left. \cdot \, \left\{ \left( ({n'_1}^2+{n'_3}^2){a'_1}^2+({n'_2}^2+{n'_3}^2){a'_2}^2+2n'_1n'_2a'_1a'_2\right) /{n'_3}^2\right\} ^{1/2} \ \big | \right. \nonumber \\&\qquad \left. \big | \ F(\sigma )=F(\sigma ')=0 \right] . \end{aligned}$$
(3.2)

In order to rewrite (3.2) in a more convenient way, we will need some extra notation. Bearing in mind (3.1), the covariance matrix of a is given by \(M\cdot \Omega \), where \(M=4\pi ^2m/3\) and

$$\begin{aligned} \Omega (\sigma ):=\begin{pmatrix} n_2^2+n_3^2 &{}\quad -n_1n_2 &{}\quad -n_1n_3 \\ -n_1n_2 &{}\quad n_1^2+n_3^2 &{}\quad -n_2n_3 \\ -n_1n_3 &{}\quad -n_2n_3 &{}\quad n_1^2+n_2^2 \end{pmatrix}. \end{aligned}$$
(3.3)

We will denote \(\Omega '=\Omega (\sigma ')\). Recall the expression (2.1) for the covariance function r.
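Numerically, one can verify that \(\Omega \) is precisely the orthogonal projector onto the tangent plane, i.e. \(\Omega =I_3-\overrightarrow{n}\,\overrightarrow{n}^T\). The following Python sketch (our illustration, with an arbitrarily chosen unit normal) checks the matrix (3.3) against this projection form:

```python
import numpy as np

# Check that Omega of (3.3) is the orthogonal projection I_3 - n n^T onto
# the tangent plane, so that Cov(a) = M * Omega for the surface gradient a.
n = np.array([1.0, 2.0, 2.0]) / 3.0          # example unit normal, |n| = 1
n1, n2, n3 = n
Omega = np.array([[n2**2 + n3**2, -n1*n2,         -n1*n3],
                  [-n1*n2,         n1**2 + n3**2, -n2*n3],
                  [-n1*n3,        -n2*n3,          n1**2 + n2**2]])

assert np.allclose(Omega, np.eye(3) - np.outer(n, n))  # projection form
assert np.allclose(Omega @ n, 0)                        # kills the normal
assert np.allclose(Omega @ Omega, Omega)                # idempotent
```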

Definition 3.1

Define the row vector

$$\begin{aligned} D=D(\sigma ,\sigma '):= \left( \frac{\partial r}{\partial \sigma _1}, \frac{\partial r}{\partial \sigma _2}, \frac{\partial r}{\partial \sigma _3}\right) =\frac{2\pi i}{N} \sum _{\mu \in \mathcal {E}} e^{2\pi i\langle \mu ,\sigma -\sigma '\rangle }\cdot \mu , \end{aligned}$$

where for \(j=1,2,3\) we have computed the partial derivatives

$$\begin{aligned} \frac{\partial r}{\partial x_j}(x)=\frac{2\pi i}{N}\sum _{\mu \in \mathcal {E}} e^{2\pi i\langle \mu ,x\rangle }\mu ^{(j)}, \qquad \qquad \mu =(\mu ^{(1)},\mu ^{(2)},\mu ^{(3)}). \end{aligned}$$

Let

$$\begin{aligned} H=H(\sigma ,\sigma '):=H_{r} =-\frac{4\pi ^2}{N} \sum _{\mu \in \mathcal {E}} e^{2\pi i\langle \mu ,\sigma -\sigma '\rangle }\cdot \mu ^T\mu \end{aligned}$$

be the (symmetric) Hessian matrix of r. We also define the matrix

$$\begin{aligned} L:=\begin{pmatrix} 1 &{}\quad 0 \\ 0 &{}\quad 1 \\ 0 &{}\quad 0 \end{pmatrix}. \end{aligned}$$
(3.4)

Note that

$$\begin{aligned} \left( \frac{\partial r}{\partial \sigma '_1}, \frac{\partial r}{\partial \sigma '_2}, \frac{\partial r}{\partial \sigma '_3}\right) =D(\sigma ',\sigma )=-D(\sigma ,\sigma ') \end{aligned}$$

and that \(H(\sigma ',\sigma )=H(\sigma ,\sigma ')\).

Lemma 3.2

The covariance matrix \(\Phi \) of the Gaussian random vector

$$\begin{aligned} \left( F(\sigma ),F(\sigma '), a_1(\sigma ), a_2(\sigma ), a_1(\sigma '), a_2(\sigma ') \right) \end{aligned}$$

is given by

$$\begin{aligned} \Phi _{6\times 6}= \begin{pmatrix} 1 &{}\quad r &{}\quad 0 &{}\quad -D\Omega 'L\\ r &{}\quad 1 &{}\quad D\Omega L &{}\quad 0\\ 0 &{}\quad L^T\Omega D^T &{}\quad ML^T\Omega L &{}\quad -L^T\Omega H\Omega 'L\\ -L^T\Omega 'D^T &{}\quad 0 &{}\quad -L^T\Omega 'H\Omega L &{}\quad ML^T\Omega 'L \end{pmatrix}, \end{aligned}$$

where \(M=4\pi ^2 m/3\), \(\Omega \) is given by (3.3), and DHL are as in Definition 3.1.

The proof of Lemma 3.2 will be given in “Appendix A”. For \(r(\sigma ,\sigma ')\ne \pm 1\), we may apply [2, Proposition 1.2] to rewrite (3.2) as

$$\begin{aligned} \tilde{K}_{2;\Sigma }(\sigma ,\sigma ') =\frac{1}{2\pi \sqrt{1-r^2}} \mathbb {E}\left[ |(\tilde{W_1},\tilde{W_2})|\cdot |(\tilde{W_3},\tilde{W_4})| \right] , \end{aligned}$$

where \(\tilde{W}\sim \mathcal {N}(0,\tilde{\Theta })\) and \(\tilde{\Theta }\) is the reduced covariance matrix

$$\begin{aligned} \tilde{\Theta }= & {} \begin{pmatrix} ML^T\Omega L &{}\quad -L^T\Omega H\Omega 'L \\ -L^T\Omega 'H\Omega L &{}\quad ML^T\Omega ' L \end{pmatrix} \\&- \frac{1}{1-r^2} \begin{pmatrix} L^T\Omega D^TD\Omega L &{}\quad rL^T\Omega D^TD\Omega ' L \\ rL^T\Omega 'D^TD\Omega L &{}\quad L^T\Omega 'D^TD\Omega 'L \end{pmatrix}. \end{aligned}$$

We rescale the random Gaussian vector \(\tilde{W}\) as \(W:=\frac{1}{\sqrt{M}}\tilde{W}\). It follows that

$$\begin{aligned} K_{2;\Sigma }(\sigma ,\sigma '):=\frac{\tilde{K}_{2;\Sigma }(\sigma ,\sigma ')}{M} =\frac{1}{2\pi \sqrt{1-r^2}} \mathbb {E}\left[ |(W_1,W_2)|\cdot |(W_3,W_4)| \right] \end{aligned}$$

(having defined the scaled two-point function \(K_{2;\Sigma }\)), where \(W\sim \mathcal {N}(0,\Theta )\), and \(\Theta \) is the scaled covariance matrix

$$\begin{aligned} \Theta= & {} \begin{pmatrix} L^T\Omega L &{}\quad -L^T\Omega H\Omega 'L/M \\ -L^T\Omega 'H\Omega L/M &{}\quad L^T\Omega ' L \end{pmatrix} \\&- \frac{1}{(1-r^2)M} \begin{pmatrix} L^T\Omega D^TD\Omega L &{}\quad rL^T\Omega D^TD\Omega ' L \\ rL^T\Omega 'D^TD\Omega L &{}\quad L^T\Omega 'D^TD\Omega 'L \end{pmatrix}. \end{aligned}$$

One has

$$\begin{aligned} L^T\Omega L= \begin{pmatrix} n_2^2+n_3^2&{}\quad -n_1n_2 \\ -n_1n_2 &{}\quad n_1^2+n_3^2 \end{pmatrix}, \end{aligned}$$

the upper left \(2\times 2\) block of \(\Omega \). We may find a square root \(Q\) of \((L^T\Omega L)^{-1}\), i.e. a matrix satisfying

$$\begin{aligned} Q^2=(L^T\Omega L)^{-1}. \end{aligned}$$
(3.5)

For instance, we may take explicitly

$$\begin{aligned} Q(\sigma )= \frac{1}{n_3^2+n_3}\cdot \begin{pmatrix} n_1^2+n_3^2+n_3 &{}\quad n_1n_2 \\ n_1n_2 &{}\quad n_2^2+n_3^2+n_3 \end{pmatrix}=Q^T. \end{aligned}$$
(3.6)
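The following Python sketch (our illustration, for one sample unit normal) verifies both the symmetry of Q and the square-root property (3.5):

```python
import numpy as np

# Check that the explicit matrix Q of (3.6) satisfies Q^2 = (L^T Omega L)^{-1},
# where L^T Omega L is the upper-left 2x2 block of Omega.
n1, n2, n3 = 2/7, 3/7, 6/7                   # unit normal: 4 + 9 + 36 = 49
LtOL = np.array([[n2**2 + n3**2, -n1*n2],
                 [-n1*n2,         n1**2 + n3**2]])
Q = np.array([[n1**2 + n3**2 + n3, n1*n2],
              [n1*n2,              n2**2 + n3**2 + n3]]) / (n3**2 + n3)

assert np.allclose(Q, Q.T)                       # Q is symmetric
assert np.allclose(Q @ Q, np.linalg.inv(LtOL))   # Q^2 = (L^T Omega L)^{-1}
```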

Define the Gaussian random vector

$$\begin{aligned} \hat{W}:= \begin{pmatrix} Q &{}\quad 0 \\ 0 &{}\quad Q' \end{pmatrix} W, \end{aligned}$$

with the shorthand \(Q':=Q(\sigma ')\). We obtain

$$\begin{aligned} K_{2;\Sigma }=\frac{1}{2\pi \sqrt{1-r^2}}\mathbb {E}[|(\hat{W_1},\hat{W_2})|\cdot |(\hat{W_3},\hat{W_4})|], \end{aligned}$$

where

$$\begin{aligned} \hat{W}\sim \mathcal {N}(0,\hat{\Theta }) \end{aligned}$$

and \(\hat{\Theta }\) is the covariance matrix

$$\begin{aligned} \hat{\Theta }= & {} \begin{pmatrix} I_2 &{}\quad -QL^T\Omega H\Omega 'LQ'/M \\ -Q'L^T\Omega 'H\Omega LQ/M &{}\quad I_2 \end{pmatrix} \\&- \frac{1}{(1-r^2)M} \begin{pmatrix} QL^T\Omega D^TD\Omega LQ &{}\quad rQL^T\Omega D^TD\Omega ' LQ' \\ rQ'L^T\Omega 'D^TD\Omega LQ &{}\quad Q'L^T\Omega 'D^TD\Omega 'LQ' \end{pmatrix}. \end{aligned}$$

Definition 3.3

Let \(X,Y,X',Y'\) be the following \(2\times 2\) matrices:

$$\begin{aligned}&X:=X(\sigma ,\sigma ')=- \frac{1}{(1-r^2)M}QL^T\Omega D^TD\Omega LQ, \\&X':=X(\sigma ',\sigma ), \\&Y:=Y(\sigma , \sigma ')=-\frac{1}{M}\left[ QL^T\Omega \left( H+\frac{r}{1-r^2}D^TD\right) \Omega 'LQ'\right] , \\&Y':=Y(\sigma ',\sigma ), \end{aligned}$$

where \(\Omega \) is given by (3.3), DHL are as in Definition 3.1, Q is given by (3.6), and \(Q'=Q(\sigma ')\).

We may now rewrite

$$\begin{aligned} \hat{\Theta }=I_4+ \begin{pmatrix} X &{}\quad Y \\ Y' &{}\quad X' \end{pmatrix}. \end{aligned}$$
(3.7)

We have established the following result.

Lemma 3.4

The (scaled) 2-point correlation function has the expression

$$\begin{aligned} K_{2;\Sigma }=\frac{1}{2\pi \sqrt{1-r^2}}\mathbb {E}[|(\hat{W_1},\hat{W_2})|\cdot |(\hat{W_3},\hat{W_4})|], \end{aligned}$$

where

$$\begin{aligned} \hat{W}=(\hat{W_1},\hat{W_2},\hat{W_3},\hat{W_4})\sim \mathcal {N}(0,\hat{\Theta }), \end{aligned}$$

and \(\hat{\Theta }\) is given by (3.7).

3.2 Asymptotics for \(K_{2;\Sigma }\)

We will need the following lemma, to be proven in “Appendix A”.

Lemma 3.5

The entries of \(X,X',Y,Y'\) are uniformly bounded (with respect to \(\sigma ,\sigma '\)):

$$\begin{aligned} X,X',Y,Y'\ll _{\Sigma } 1. \end{aligned}$$
(3.8)

To write an asymptotic expression for the scaled two-point function, we need the Taylor expansion of \(\mathbb {E}[|(\hat{W_1},\hat{W_2})|\cdot |(\hat{W_3},\hat{W_4})|]\) for a Gaussian vector whose covariance is the perturbed \(4\times 4\) identity matrix

$$\begin{aligned} I_4+ \begin{pmatrix} X &{}\quad Y \\ Y' &{}\quad X' \end{pmatrix}, \end{aligned}$$

to the second order. The case where \(X'=X\) and \(Y'=Y\), to the fourth order, was treated in [26, Lemma 5.1] (\(4\times 4\) matrix) and [3, Lemma 5.8] (\(6\times 6\)). In [3, 26], the expansion up to order four is needed in light of a cancellation phenomenon known as ‘arithmetic Berry cancellation’; in this work, we need only an expansion to order two.

Lemma 3.6

Suppose

$$\begin{aligned} \hat{W}=(\hat{W_1},\hat{W_2},\hat{W_3},\hat{W_4})\sim \mathcal {N}(0,\hat{\Theta }), \end{aligned}$$

where

$$\begin{aligned} \hat{\Theta }=I_4+ \begin{pmatrix} X &{}\quad Y \\ Y' &{}\quad X' \end{pmatrix} \end{aligned}$$

is positive definite with real entries, and the \(2\times 2\) blocks \(X,X',Y,Y'\) are symmetric. Then

$$\begin{aligned} \mathbb {E}[|(\hat{W_1},\hat{W_2})|\cdot |(\hat{W_3},\hat{W_4})|]= & {} \frac{\pi }{2}\left( 1+\frac{tr(X)}{4}+\frac{tr(X')}{4}+\frac{tr(Y'Y)}{8}\right) \\&+\,O(X^2+X'^2+Y^4+Y'^4). \end{aligned}$$
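The leading constant \(\pi /2\) is the unperturbed value: for \(\hat{W}\sim \mathcal {N}(0,I_4)\) (i.e. \(X=X'=Y=Y'=0\)), one has \(\mathbb {E}[|(\hat{W_1},\hat{W_2})|\cdot |(\hat{W_3},\hat{W_4})|]=(\mathbb {E}|\chi _2|)^2=\pi /2\). A quick Monte Carlo sketch in Python (our illustration, not part of the argument):

```python
import math
import random

# Monte Carlo check of the leading term in Lemma 3.6: for W ~ N(0, I_4),
# E[|(W1,W2)| * |(W3,W4)|] = (E|chi_2|)^2 = pi/2.
random.seed(7)
n = 200_000
acc = 0.0
for _ in range(n):
    w = [random.gauss(0, 1) for _ in range(4)]
    acc += math.hypot(w[0], w[1]) * math.hypot(w[2], w[3])
est = acc / n
assert abs(est - math.pi / 2) < 0.02
```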

The proof of Lemma 3.6 will be given in “Appendix A”. Assuming it, we obtain the following asymptotic for the scaled two-point function \(K_{2;\Sigma }\).

Proposition 3.7

For \(\sigma ,\sigma '\) such that \(r(\sigma ,\sigma ')\) is bounded away from \(\pm 1\), we have the following asymptotic for the (scaled) two-point correlation function:

$$\begin{aligned} K_{2;\Sigma }(\sigma ,\sigma ')= & {} \frac{1}{4} \left[ 1+\frac{1}{2}r^2+\frac{tr(X)}{4}+\frac{tr(X')}{4}+\frac{tr(Y'Y)}{8} \right] \\&+\,O(r^4{+}X^2+X'^2+Y^4+Y'^4{+}r^2(tr(X)+tr(X')+tr(Y'Y))), \end{aligned}$$

with \(X,X',Y,Y'\) as in Definition 3.3.

Proof

By assumption, \(r(\sigma ,\sigma ')\) is bounded away from \(\pm 1\); hence, \(\frac{1}{\sqrt{1-r^2}}=1+\frac{1}{2}r^2+O(r^4)\). We apply Lemma 3.6 to expand the precise expression of the two-point function given by Lemma 3.4:

$$\begin{aligned} K_{2;\Sigma }(\sigma ,\sigma ')= & {} \frac{1}{2\pi \sqrt{1-r^2}}\cdot \mathbb {E}[|(\hat{W_1},\hat{W_2})|\cdot |(\hat{W_3},\hat{W_4})|] \\= & {} \frac{1}{2\pi }\left( 1+\frac{1}{2}r^2+O(r^4)\right) \cdot \left[ \frac{\pi }{2} \left( 1+\frac{tr(X)}{4}+\frac{tr(X')}{4}+\frac{tr(Y'Y)}{8}\right) \right. \\&\left. +\,O(X^2+X'^2+Y^4+Y'^4)\right] , \end{aligned}$$

hence the result of the present proposition. \(\square \)

3.3 Statement of Further Auxiliary Lemmas

The present section is dedicated to stating lemmas needed to prove Proposition 1.7. The proofs of the lemmas will follow in Sect. 6.

Lemma 3.8

For \(k\ge 0\), we define the k-th moment of the covariance function r (2.1) on the surface \(\Sigma \),

$$\begin{aligned} \mathcal {R}_k(m):=\iint _{\Sigma ^2}r^k(\sigma ,\sigma ')\mathrm{d}\sigma \mathrm{d}\sigma '. \end{aligned}$$
(3.9)

Assume \(\Sigma \) is of nowhere zero Gauss–Kronecker curvature. Then, for every \(\epsilon >0\) we have the upper bound

$$\begin{aligned} \mathcal {R}_4(m)\ll \frac{1}{m^{11/16-\epsilon }}. \end{aligned}$$
(3.10)

The proof of Lemma 3.8 will be given in Sect. 6.

Lemma 3.9

We have the following upper bounds:

$$\begin{aligned}&\iint _{\Sigma ^2}tr(X^2)\mathrm{d}\sigma \mathrm{d}\sigma ',\ \iint _{\Sigma ^2}tr(X'^2)\mathrm{d}\sigma \mathrm{d}\sigma ',\ \iint _{\Sigma ^2}tr(Y^4)\mathrm{d}\sigma \mathrm{d}\sigma ',\\&\iint _{\Sigma ^2}tr(Y'^4)\mathrm{d}\sigma \mathrm{d}\sigma ',\ \iint _{\Sigma ^2}r^2tr(X)\mathrm{d}\sigma \mathrm{d}\sigma ',\ \iint _{\Sigma ^2}r^2tr(X')\mathrm{d}\sigma \mathrm{d}\sigma ',\\&\iint _{\Sigma ^2}r^2tr(Y'Y)\mathrm{d}\sigma \mathrm{d}\sigma '\ll _\Sigma \frac{1}{m^{11/16-\epsilon }}. \end{aligned}$$

The proof of Lemma 3.9 will be given in Sect. 6.

3.4 The Proof of Proposition 1.7

We claim that the hypotheses of Kac–Rice (Proposition 2.7) hold. The non-degeneracy condition is met since

$$\begin{aligned} r(\sigma ,\sigma ')\ne \pm 1 \end{aligned}$$

for almost all \(\sigma ,\sigma '\in \Sigma \). Moreover, by Lemma 2.5, one has the uniform bound

$$\begin{aligned} \mathcal {L}_{\epsilon _1}\mathcal {L}_{\epsilon _2}\le 18^2m. \end{aligned}$$

Thanks to [35, Lemma 5.3] and the fact that \(|\nabla _{\Sigma }F(\sigma )|\le |\nabla F(\sigma )|\), one has for almost all \(\sigma ,\sigma '\in \Sigma \),

$$\begin{aligned} \mathcal {L}_{\epsilon _1,\epsilon _2}(\sigma ,\sigma ')= & {} \frac{1}{4\epsilon _1\epsilon _2} \mathbb {E}\left[ \chi \left( \frac{F(\sigma )}{\epsilon _1}\right) \chi \left( \frac{F(\sigma ')}{\epsilon _2}\right) |(\nabla _\Sigma F)(\sigma )||(\nabla _\Sigma F)(\sigma ')| \right] \\\ll & {} \frac{m}{\sqrt{1-r^2(\sigma ,\sigma ')}}, \end{aligned}$$

where the implied constant is independent of \(\epsilon _1,\epsilon _2\). Therefore, one may exchange the order of taking the limit and the integration over \(\Sigma ^2\) in Proposition 2.7. The hypotheses of Kac–Rice are thus all verified; hence,

$$\begin{aligned} \mathbb {E}[\mathcal {L}^2] =\iint _{\Sigma ^2}\tilde{K}_{2;\Sigma }(\sigma ,\sigma ')\hbox {d}\sigma \hbox {d}\sigma '. \end{aligned}$$
(3.11)

We will show that Proposition 3.7 applies ‘almost everywhere’ in the sense that r is small outside of a small set. Since the surface \(\Sigma \) is compact and regular, one may write

$$\begin{aligned} \Sigma =\bigcup _{1\le q\le Q}\Sigma _q=\bigcup _{1\le q\le Q}\gamma _q(U_q), \end{aligned}$$

where \(\gamma _q:U_q\subset \mathbb {R}^2\rightarrow \Sigma \), \(1\le q\le Q\), are finitely many parametrisations as in (1.16), and the union is disjoint save for boundary overlaps. These overlaps are a finite union of smooth curves possibly together with a set of points; therefore, the intersection with

$$\begin{aligned} \mathcal {A}_F:=\{x\in \mathbb {T}^3 : F(x)=0\} \end{aligned}$$

is a.s. a finite set of points [37, Theorem 1.4].

For each q, consider the smallest rectangle \(\overline{U}_q\supseteq U_q\) with sides parallel to the coordinate axes of \(\mathbb {R}^2\). Partition (with boundary overlaps) \(\overline{U}_q\) into small squares \(\overline{U}_{q,p}\) of side length \(\delta =c_0/\sqrt{m}\) for some small \(c_0>0\). This means \(U_q\) is the disjoint union (with boundary overlaps) of the \(U_q\cap \overline{U}_{q,p}=:U_{q,p}\). Each \(\gamma _q\) is bijective; thus, each \(\Sigma _q\) is the disjoint union of the \(\Sigma _{q,p}:=\gamma _q(U_{q,p})\).

To simplify the notation, we re-label the indices qp to a single index i and write \(\Sigma =\cup _i\Sigma _i\). The set \(\Sigma \times \Sigma \) is thus partitioned (with boundary overlaps) into regions \(\Sigma _i\times \Sigma _j=:V_{i,j}\).

Definition 3.10

We say the region \(V_{i,j}\) is singular if there are points \(\sigma \in \Sigma _i\) and \(\sigma '\in \Sigma _j\) s.t. \(|r(\sigma ,\sigma ')|>1/2\). The union of all singular regions is the singular set S.

Lemma 3.11

The measure of the singular set satisfies for every \(k\ge 0\) the following upper bound:

$$\begin{aligned} \text {meas}(S)\ll \mathcal {R}_k(m), \end{aligned}$$

where \(\mathcal {R}_k(m)\) is the k-th moment (3.9) of the covariance function r on the surface \(\Sigma \).

Proof

As \(r/\sqrt{m}\) is a Lipschitz function with constant independent of m, and each \(\Sigma _i\) has diameter \(\ll c_0/\sqrt{m}\), the variation of r over a region \(V_{i,j}=\Sigma _i\times \Sigma _j\) is \(O(c_0)\); hence, for each singular region,

$$\begin{aligned} |r(\sigma ,\sigma ')|>1/4 \end{aligned}$$

everywhere on \(V_{i,j}\), provided \(c_0\) is chosen sufficiently small. An application of the Chebyshev–Markov inequality to \(|r|^k\) now yields the statement of the present lemma. \(\square \)

Outside of S Proposition 3.7 applies so that we may rewrite (3.11) as

$$\begin{aligned} \mathbb {E}[\mathcal {L}^2]= & {} M\iint _{\Sigma ^2{\setminus } S} \left( \frac{1}{4} +\frac{1}{8}r^2 +\frac{tr(X)}{16} +\frac{tr(X')}{16} +\frac{tr(Y'Y)}{32} \right) \hbox {d}\sigma \hbox {d}\sigma ' \nonumber \\&+\,m\cdot O\left( \iint _{\Sigma ^2}\left( r^4+X^2+X'^2+Y^4+Y'^4+r^2(tr(X)+tr(X')+tr(Y'Y))\right) \hbox {d}\sigma \hbox {d}\sigma '\right) \nonumber \\&+\,\iint _{S}\tilde{K}_{2;\Sigma }(\sigma ,\sigma ')\hbox {d}\sigma \hbox {d}\sigma '. \end{aligned}$$
(3.12)

To control the third summand in (3.12), we state the following auxiliary result.

Lemma 3.12

We have the bound

$$\begin{aligned} \iint _{S}\tilde{K}_{2;\Sigma }(\sigma ,\sigma ')\mathrm{d}\sigma \mathrm{d}\sigma ' \ll m\cdot \mathcal {R}_4(m). \end{aligned}$$

The proof of Lemma 3.12 will follow in “Appendix A”. By Lemmas 3.8, 3.9, and 3.12, we may rewrite (3.12) as

$$\begin{aligned} \mathbb {E}[\mathcal {L}^2]= & {} M\left[ \iint _{\Sigma ^2{\setminus } S} \left( \frac{1}{4} +\frac{1}{8}r^2 +\frac{tr(X)}{16} +\frac{tr(X')}{16} +\frac{tr(Y'Y)}{32} \right) \hbox {d}\sigma \hbox {d}\sigma ' \right. \\&\left. +\,O_\Sigma \left( \frac{1}{m^{11/16-\epsilon }}\right) \right] . \end{aligned}$$

Changing the domain of integration to \(\Sigma ^2\) carries an error of \(O(m\cdot \mathcal {R}_4(m))\) thanks to Lemmas 3.5 and 3.11. Subtracting the expectation squared (Proposition 1.2) completes the proof of the approximate Kac–Rice formula of Proposition 1.7.

4 Lattice Points on Spheres

4.1 Background

To estimate the second and fourth moments of the covariance function r and of its derivatives (in Sects. 5 and 6, respectively), we will need several considerations on lattice points on spheres \(\sqrt{m}\mathcal {S}^2\). An integer m is representable as a sum of three squares if and only if it is not of the form \(4^l(8k+7)\), for k, l non-negative integers [14, 23]. The total number of lattice points \(N_m=r_3(m)\) oscillates: it is unbounded along subsequences, yet vanishes for arbitrarily large m (e.g. whenever \(m\equiv 7 \pmod 8\)). We have the upper bound [7, section 1]

$$\begin{aligned} N \ll (\sqrt{m})^{1+\epsilon } \quad \text {for all} \ \epsilon >0. \end{aligned}$$

The condition \(m\not \equiv 0,4,7 \pmod 8\) is equivalent to the existence of primitive lattice points \((\mu ^{(1)},\mu ^{(2)},\mu ^{(3)})\), meaning \(\mu ^{(1)},\mu ^{(2)},\mu ^{(3)}\) are coprime (see, e.g. [7, section 1] and [37, section 4]). In this case, we have both lower and upper bounds (1.2)

$$\begin{aligned} (\sqrt{m})^{1-\epsilon } \ll N \ll (\sqrt{m})^{1+\epsilon }. \end{aligned}$$

This lower bound is ineffective: the behaviour of \(r_3(m)\) is not completely understood [7, section 1].

Given a sphere \(\mathfrak {C}\subset \mathbb {R}^3\) and a point \(P\in \mathfrak {C}\), we define the spherical cap \(\mathcal {T}\) centred at P to be the intersection of \(\mathfrak {C}\) with the ball \(\mathcal {B}_s(P)\) of radius s centred at P. We will call s the radius of the cap. We shall denote

$$\begin{aligned} \chi (\sqrt{m},s) =\max _\mathcal {T}\#\{\mu \in \mathbb {Z}^3\cap \mathcal {T}\} \end{aligned}$$
(4.1)

the maximal number of lattice points contained in a spherical cap \(\mathcal {T}\subset \sqrt{m}\mathcal {S}^2\) of radius s.
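For small m, the cap count can be explored by brute force. The Python sketch below (our illustration; it restricts cap centres to the lattice points themselves, hence computes a lower bound for (4.1)) recovers the two extreme regimes:

```python
import math
from itertools import product

def lattice_points(m):
    """All mu in Z^3 with |mu|^2 = m, i.e. the set E_m of (1.1)."""
    R = int(math.isqrt(m))
    return [p for p in product(range(-R, R + 1), repeat=3)
            if sum(x * x for x in p) == m]

def chi_lower(m, s):
    """Max number of points of E_m in a cap of radius s, maximising only
    over caps centred at lattice points: a lower bound for chi(sqrt(m), s)."""
    E = lattice_points(m)
    return max(sum(1 for q in E if math.dist(p, q) <= s) for p in E)

E5 = lattice_points(5)
assert chi_lower(5, 0.5) == 1                     # tiny cap: centre only
assert chi_lower(5, 2 * math.sqrt(5)) == len(E5)  # whole sphere: all points
```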

Lemma 4.1

(Bourgain and Rudnick [6, Lemma 2.1]). We have for all \(\epsilon >0\),

$$\begin{aligned} \chi (\sqrt{m},s) \ll m^\epsilon \left( 1+\frac{s^2}{\sqrt{m}^{1/2}}\right) \end{aligned}$$

as \(m\rightarrow \infty \).

Definition 4.2

Given an integer m expressible as a sum of three squares, define

$$\begin{aligned} \widehat{\mathcal {E}}_m:=\mathcal {E}_m/\sqrt{m}\subset \mathcal {S}^{2} \end{aligned}$$
(4.2)

to be the projection of the set of lattice points \(\mathcal {E}_m\) (1.1) on the unit sphere (cf. [7, (1.5)] and [37, (4.3)]).
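For concreteness, here is a small Python sketch (our illustration) enumerating \(\mathcal {E}_m\) and its projection \(\widehat{\mathcal {E}}_m\) for the example \(m=5\) (note \(5\not \equiv 0,4,7 \pmod 8\)):

```python
import math
from itertools import product

# E_5 consists of the signed permutations of (0, 1, 2): N_5 = r_3(5) = 24.
E5 = [(a, b, c) for a, b, c in product(range(-2, 3), repeat=3)
      if a * a + b * b + c * c == 5]
assert len(E5) == 24

# Definition 4.2: project to the unit sphere.
hatE5 = [tuple(x / math.sqrt(5) for x in mu) for mu in E5]
assert all(abs(sum(x * x for x in p) - 1) < 1e-12 for p in hatE5)
```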

Linnik conjectured (and proved under GRH) that the projected lattice points \(\widehat{\mathcal {E}}_m\) become equidistributed as \(m\rightarrow \infty \), \(m\not \equiv 0,4,7 \pmod 8\). This result was proven unconditionally by Duke [16, 17] and by Golubeva and Fomenko [22] following a breakthrough by Iwaniec [24]. As a consequence, one may approximate a summation over the lattice point set by an integral over the unit sphere.

Lemma 4.3

(cf. [33, Lemma 8]). Let g(z) be a \(C^2\)-smooth function on \(\mathcal {S}^2\). For \(m\rightarrow \infty \), \(m\not \equiv 0,4,7 \pmod 8\), we have

$$\begin{aligned} \frac{1}{N}\sum _{\mu \in \mathcal {E}}g\left( \frac{\mu }{|\mu |}\right) = \int _{z\in \mathcal {S}^2}g(z)\frac{\mathrm{d}z}{4\pi } + O_g\left( \frac{1}{m^{1/28-\epsilon }}\right) . \end{aligned}$$

Define the probability measures

$$\begin{aligned} \tau _m=\frac{1}{N}\sum _{\mu \in \mathcal {E}}\delta _{\mu /|\mu |} \end{aligned}$$
(4.3)

on the unit sphere, where \(\delta _x\) is the Dirac delta function at x. By the equidistribution of lattice points on spheres, the \(\tau _m\) converge weak-* to the uniform measure on the unit sphere:

$$\begin{aligned} \tau _m\Rightarrow \frac{\sin (\varphi )\hbox {d}\varphi \hbox {d}\psi }{4\pi } \end{aligned}$$
(4.4)

as \(m\rightarrow \infty \), \(m\not \equiv 0,4,7 \pmod 8\).

Definition 4.4

For \(s>0\), the Riesz s-energy of n (distinct) points \(P_1,\ldots ,P_n\) on \(\mathcal {S}^2\) is defined as

$$\begin{aligned} E_s(P_1,\ldots ,P_n):=\sum _{i\ne j}\frac{1}{|P_i-P_j|^s}. \end{aligned}$$

Bourgain, Sarnak and Rudnick computed the following precise asymptotics for the Riesz s-energy of the projected lattice points \(\widehat{\mathcal {E}}_m\subset \mathcal {S}^2\) (4.2).

Proposition 4.5

([7, Theorem 1.1], [37, Theorem 4.1]). Fix \(0<s<2\). Suppose \(m\rightarrow \infty \), \(m\not \equiv 0,4,7 \pmod 8\). There is some \(\delta >0\) so that

$$\begin{aligned} E_s(\widehat{\mathcal {E}}_m)=I(s)\cdot N^2+O(N^{2-\delta }) \end{aligned}$$

where

$$\begin{aligned} I(s)=\frac{2^{1-s}}{2-s}. \end{aligned}$$

4.2 Lemmas on Lattice Points on Spheres

We will need the following lemma, which also appears in [8].

Lemma 4.6

(cf. [8, Lemma 3.3]). Suppose \(m\rightarrow \infty \), \(m\not \equiv 0,4,7 \pmod 8\). We have, for \(i,j=1,2,3\), \(i\ne j\), and \(0\le k\le l\), \(k+l=4\),

$$\begin{aligned} \frac{1}{m^2N}\sum _{(\mu ^{(1)},\mu ^{(2)},\mu ^{(3)})\in \mathcal {E}}({\mu ^{(i)}})^k({\mu ^{(j)}})^l= {\left\{ \begin{array}{ll} 1/5+O\left( m^{-1/28+o(1)}\right) &{} k=0, \\ 0 &{} k=1, \\ 1/15+O\left( m^{-1/28+o(1)}\right) &{} k=2. \end{array}\right. } \end{aligned}$$

Proof

The case \(k=1\) immediately follows from the symmetries of the lattice point set \(\mathcal {E}\) (cf. [35, Lemma 2.3]). Let \(k=0\): by the equidistribution of lattice points on spheres (Lemma 4.3), we may approximate

$$\begin{aligned} \frac{1}{m^2N}\sum _{\mu \in \mathcal {E}}{\mu ^{(j)}}^4 \sim \frac{1}{4\pi }\int _{z\in \mathcal {S}^2}z_j^4\hbox {d}z \end{aligned}$$
(4.5)

up to an error of \(m^{-1/28+o(1)}\). We use the spherical coordinates

$$\begin{aligned} z=(\sin (\varphi )\cos (\psi ),\sin (\varphi )\sin (\psi ),\cos (\varphi )), \qquad \quad 0\le \varphi \le \pi , \ 0\le \psi \le 2\pi . \end{aligned}$$

Take \(j=3\) (by the symmetry, (4.5) will be independent of the choice of j). The integral in (4.5) may be rewritten as

$$\begin{aligned} \frac{1}{4\pi }\int _{0}^{\pi }\cos ^4(\varphi )\sin (\varphi )\hbox {d}\varphi \int _{0}^{2\pi }\hbox {d}\psi =\frac{1}{5}. \end{aligned}$$

Similarly, for \(k=2\), Lemma 4.3 yields

$$\begin{aligned} \frac{1}{m^2N}\sum _{\mu \in \mathcal {E}}{\mu ^{(i)}}^2{\mu ^{(j)}}^2 \sim \frac{1}{4\pi }\int _{z\in \mathcal {S}^2}z_i^2z_j^2\hbox {d}z. \end{aligned}$$

The latter integral may be computed via the same method as the case \(k=0\) and equals 1/15. \(\square \)
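The symmetry argument for \(k=1\), and the consistency of the three limit values, can be checked exactly on a small example (a Python sketch of ours, with \(m=5\)):

```python
from itertools import product

m = 5
E = [(a, b, c) for a, b, c in product(range(-2, 3), repeat=3)
     if a * a + b * b + c * c == m]
N = len(E)

# k = 1: the sum vanishes exactly, by the sign-flip symmetry of E.
assert sum(mu[0] * mu[1] ** 3 for mu in E) == 0

# Since (|mu|^2)^2 = m^2 for every mu, summing all fourth-moment
# combinations gives exactly m^2 * N; accordingly the limit values
# of Lemma 4.6 satisfy 3*(1/5) + 6*(1/15) = 1.
total = sum(sum(x * x for x in mu) ** 2 for mu in E)
assert total == m * m * N
assert abs(3 * (1 / 5) + 6 * (1 / 15) - 1) < 1e-12
```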

We denote by \(\mathcal {C}(4)\) the set of length-4 spectral correlations, i.e. 4-tuples of lattice points in \(\mathcal {E}_m\) summing to 0:

$$\begin{aligned} \mathcal {C}(4):=\mathcal {C}_m(4) =\left\{ (\mu _1,\mu _2,\mu _3,\mu _4)\in \mathcal {E}_m^4 : \mu _1+\mu _2+\mu _3+\mu _4=0 \right\} . \end{aligned}$$
(4.6)

Denote [6, section 2.3]

$$\begin{aligned} \kappa (R):=\max _{\Pi } \#\{\mu \in \mathbb {Z}^3 : \mu \in R\mathcal {S}^{2}\cap \Pi \} \end{aligned}$$

the maximal number of lattice points in the intersection of \(R\mathcal {S}^{2}\subset \mathbb {R}^3\) and any plane \(\Pi \). Jarník (see [25] and [6, (2.6)]) found the upper bound

$$\begin{aligned} \kappa (R)\ll R^\epsilon , \quad \forall \epsilon >0. \end{aligned}$$
(4.7)

An upper bound for \(|\mathcal {C}(4)|\) may be obtained as follows. We fix two lattice points \(\mu _1,\mu _2\), which may be done in \(N^2\) ways. Supposing that \(\mu _1+\mu _2\ne 0\), the point \(\mu _1+\mu _2+\mu _3\) must lie in the intersection of two spheres centred, respectively, at the origin and at the point \(\mu _1+\mu _2\), both of radius \(\sqrt{m}\). As two distinct spheres intersect in at most a circle, we obtain the upper bound

$$\begin{aligned} |\mathcal {C}(4)|\ll N^2\cdot \kappa (\sqrt{m}). \end{aligned}$$

By (4.7) and (1.2), it now follows that

$$\begin{aligned} |\mathcal {C}(4)|\ll N^{2+\epsilon } \qquad \forall \epsilon >0. \end{aligned}$$
(4.8)
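The correlation set can be counted directly for small m by hashing pair sums, since \(|\mathcal {C}(4)|=\sum _v c(v)c(-v)\) where \(c(v)\) counts pairs \((\mu _1,\mu _2)\) with \(\mu _1+\mu _2=v\). A Python sketch of ours, with the illustrative value \(m=5\):

```python
from collections import Counter
from itertools import product

m = 5
E = [(a, b, c) for a, b, c in product(range(-2, 3), repeat=3)
     if a * a + b * b + c * c == m]
N = len(E)

# |C(4)| via pair-sum hashing: sum_v c(v) * c(-v).
pair_sums = Counter(tuple(x + y for x, y in zip(m1, m2))
                    for m1 in E for m2 in E)
C4 = sum(cnt * pair_sums[tuple(-x for x in v)]
         for v, cnt in pair_sums.items())

# brute-force verification over all N^4 quadruples
brute = sum(1 for q in product(E, repeat=4)
            if all(sum(x) == 0 for x in zip(*q)))
assert C4 == brute
# the 'diagonal' tuples (mu, -mu, nu, -nu) alone give at least N^2 solutions
assert C4 >= N * N
```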

Lemma 4.7

We have the bound

$$\begin{aligned} \sum _{\mathcal {E}^4{\setminus }\mathcal {C}(4)}\frac{1}{|\mu _1+\mu _2+\mu _3+\mu _4|^2}\ll N^{2+5/8+\epsilon }. \end{aligned}$$
(4.9)

Proof

This proof is inspired by the two-dimensional analogue [36, Lemma 6.2]. Let \(v:=\mu _1+\mu _2+\mu _3+\mu _4\): as v is a nonzero integer vector, we have \(|v|\ge 1\). We separate the summation in (4.9) over three ranges: \(1\le |v|\le A\), \(A\le |v|\le B\), and \(|v|\ge B\), where \(A=A(m)\) and \(B=B(m)\) are parameters to be chosen.

First range \(1\le |v|\le A\). Given v and \(\mu _1,\mu _2\) such that \(\mu _1+\mu _2\ne v\), the number of solutions \((\mu _3,\mu _4)\) to

$$\begin{aligned} v=\mu _1+\mu _2+\mu _3+\mu _4 \end{aligned}$$

is \(\ll \kappa (\sqrt{m})\ll m^\epsilon \): to see this, we use the same argument as in the bound for \(|\mathcal {C}(4)|\) above. Therefore,

$$\begin{aligned} \sum _{1\le |v| \le A}\frac{1}{|\mu _1+\mu _2+\mu _3+\mu _4|^2}\ll & {} N^{2+\epsilon }\sum _{1\le |v|\le A}\frac{1}{|v|^2}\ll N^{2+\epsilon }\int _{1\le |x|\le A}\frac{\hbox {d}x}{|x|^2}\nonumber \\\ll & {} N^{2+\epsilon }A. \end{aligned}$$
(4.10)

Second range \(A\le |v|\le B\). We fix \(\mu _1,\mu _2,\mu _3\) and write

$$\begin{aligned} \sum _{A\le |v|\le B}\frac{1}{|\mu _1+\mu _2+\mu _3+\mu _4|^2}\ll \frac{1}{A^2}N^3\cdot |\{\mu _4 : A\le |v|\le B\}|. \end{aligned}$$
(4.11)

We require an estimate for the second factor on the RHS of (4.11). Consider the geometric picture: the vector v lies on the sphere centred at \(\mu _1+\mu _2+\mu _3\) of radius \(\sqrt{m}\), and also in the difference set of the two balls centred at the origin of radii B and A. Therefore, \(\mu _4\) lies on a spherical cap of radius 2B of a sphere of radius \(\sqrt{m}\), hence the bound

$$\begin{aligned} |\{\mu _4 : A\le |v|\le B\}| \le \chi (\sqrt{m}, 2B)\ll m^\epsilon \left( 1+\frac{B^2}{m^{1/4}}\right) \end{aligned}$$
(4.12)

via Lemma 4.1. Substituting (4.12) into (4.11),

$$\begin{aligned} \sum _{A\le |v|\le B}\frac{1}{|\mu _1+\mu _2+\mu _3+\mu _4|^2}\ll \frac{N^3}{A^2}\cdot m^\epsilon \left( 1+\frac{B^2}{m^{1/4}}\right) . \end{aligned}$$
(4.13)

\(\hbox {Third range: } |v|\ge B\). Here we have

$$\begin{aligned} \sum _{|v|\ge B}\frac{1}{|\mu _1+\mu _2+\mu _3+\mu _4|^2}\ll \frac{N^4}{B^2}. \end{aligned}$$
(4.14)

Collecting the estimates (4.10), (4.13) and (4.14), we obtain

$$\begin{aligned} \sum _{\mathcal {E}^4{\setminus }\mathcal {C}(4)}\frac{1}{|\mu _1+\mu _2+\mu _3+\mu _4|^2}\ll N^{2+\epsilon } A+\frac{N^3}{A^2}\cdot m^\epsilon \left( 1+\frac{B^2}{m^{1/4}}\right) +\frac{N^4}{B^2}. \end{aligned}$$

Bearing in mind (1.2), the optimal choice for the parameters is \((A,B)=(N^{5/8},N^{11/16})\), so that one has the estimate

$$\begin{aligned} \sum _{\mathcal {E}^4{\setminus }\mathcal {C}(4)}\frac{1}{|\mu _1+\mu _2+\mu _3+\mu _4|^2}\ll N^{2+5/8+\epsilon } \end{aligned}$$

as claimed. \(\square \)
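The parameter choice can be double-checked by balancing exponents. Writing \(A=N^a\), \(B=N^b\) and \(N\approx m^{1/2}\) (so \(m^{1/4}\approx N^{1/2}\); this is (1.2) up to \(m^\epsilon \) factors), the three bounds (4.10), (4.13) and (4.14) carry the exponents \(2+a\), \(3-2a+2b-\tfrac{1}{2}\) and \(4-2b\) of N, and the sketch below verifies that \((a,b)=(5/8,11/16)\) equalises all three at \(2+5/8\):

```python
from fractions import Fraction as F

a, b = F(5, 8), F(11, 16)             # A = N^a, B = N^b

# Exponents (in N) of the three range estimates, using m^{1/4} ~ N^{1/2}:
first = 2 + a                         # N^{2+eps} * A             (4.10)
second = 3 - 2 * a + 2 * b - F(1, 2)  # (N^3/A^2) * B^2 / m^{1/4} (4.13)
third = 4 - 2 * b                     # N^4 / B^2                 (4.14)

print(first, second, third)           # all equal 21/8 = 2 + 5/8
```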

5 Second Moment of r and of Its Derivatives: Proof of Theorem 1.3

The present section is dedicated to proving Theorem 1.3.

5.1 Statement of Estimates

We recall that A is the area of the surface \(\Sigma \) and r the covariance function (2.1); the matrices \(X,X',Y,Y'\) are as in Definition 3.3.

Lemma 5.1

Assume \(\Sigma \) is of nowhere zero Gauss–Kronecker curvature. Then we have the following estimates:

$$\begin{aligned}&\iint _{\Sigma ^2}r^2\mathrm{d}\sigma \mathrm{d}\sigma ' =\frac{A^2}{N}+O\left( \frac{1}{m^{1-\epsilon }}\right) ; \end{aligned}$$
(5.1)
$$\begin{aligned}&\iint _{\Sigma ^2}tr(X)\mathrm{d}\sigma \mathrm{d}\sigma ' =-\frac{2A^2}{N} +O\left( \frac{1}{m^{11/16-\epsilon }}\right) ; \end{aligned}$$
(5.2)
$$\begin{aligned}&\iint _{\Sigma ^2}tr(X')\mathrm{d}\sigma \mathrm{d}\sigma ' =-\frac{2A^2}{N} +O\left( \frac{1}{m^{11/16-\epsilon }}\right) . \end{aligned}$$
(5.3)

Lemma 5.1 will be proven in Sect. 5.2. We define

$$\begin{aligned} \mathcal {H}=\mathcal {H}_\Sigma (\mathcal {E}):=\iint _{\Sigma ^2}\frac{1}{N}\sum _{\mu \in \mathcal {E}}\left\langle \frac{\mu }{|\mu |},\overrightarrow{n}(\sigma )\right\rangle ^2\cdot \left\langle \frac{\mu }{|\mu |},\overrightarrow{n}(\sigma ')\right\rangle ^2\hbox {d}\sigma \hbox {d}\sigma '. \end{aligned}$$
(5.4)

Lemma 5.2

Assume \(\Sigma \) is of nowhere zero Gauss–Kronecker curvature. Then we have the following estimate:

$$\begin{aligned} \iint _{\Sigma ^2}tr(Y'Y)\mathrm{d}\sigma \mathrm{d}\sigma ' =\frac{3}{N}(A^2+3\mathcal {H})+O\left( \frac{1}{m^{11/16-\epsilon }}\right) . \end{aligned}$$
(5.5)

Lemma 5.2 will be proven in Sect. 5.3. By the equidistribution of lattice points on spheres (Lemma 4.3), one may express \(\mathcal {H}\) in terms of \(\Sigma \) only, as shown in the following result.

Lemma 5.3

As \(m\rightarrow \infty \), \(m\not \equiv 0,4,7 \pmod 8\), one has the estimate

$$\begin{aligned} \mathcal {H}=\frac{1}{15}(A^2+2\mathcal {I})+O\left( \frac{1}{m^{1/28-\epsilon }}\right) . \end{aligned}$$

Proof

Firstly, using Lemma 4.6, we estimate the summation

$$\begin{aligned}&\sum _{\mu \in \mathcal {E}}\left\langle \frac{\mu }{|\mu |},\overrightarrow{n}(\sigma )\right\rangle ^2\cdot \left\langle \frac{\mu }{|\mu |},\overrightarrow{n}(\sigma ')\right\rangle ^2 \nonumber \\&\quad =\frac{N}{15}\left( 3\sum _{i=1}^{3}n_i^2{n'_i}^2 +\sum _{i\ne j}n_i^2{n'_j}^2 +4\sum _{i<j}n_in_jn'_in'_j + O\left( \frac{1}{m^{1/28-\epsilon }}\right) \right) \nonumber \\&\quad =\frac{N}{15}\left( 1+2\langle \overrightarrow{n},\overrightarrow{n}'\rangle ^2 +O\left( \frac{1}{m^{1/28-\epsilon }}\right) \right) . \end{aligned}$$
(5.6)

By substituting (5.6) into (5.4), we obtain

$$\begin{aligned} \mathcal {H}_\Sigma (\mathcal {E})&=\frac{1}{15}\iint _{\Sigma ^2}\left( 1+2\langle \overrightarrow{n},\overrightarrow{n}'\rangle ^2 \right) \hbox {d}\sigma \hbox {d}\sigma '+ O\left( \frac{1}{m^{1/28-\epsilon }}\right) \\&=\frac{1}{15}(A^2+2\mathcal {I})+ O\left( \frac{1}{m^{1/28-\epsilon }}\right) . \end{aligned}$$

\(\square \)

We may now establish the asymptotics for the nodal intersection length variance.

Proof of Theorem 1.3 assuming Lemmas 5.1 and 5.2

On substituting the estimates (5.1), (5.2), (5.3) and (5.5) into the approximate Kac–Rice formula Proposition 1.7, we obtain

$$\begin{aligned} \text {Var}(\mathcal {L})= & {} M\left[ \frac{1}{8}\frac{A^2}{N} +\frac{1}{16}\left( -\frac{2A^2}{N}\right) +\frac{1}{16}\left( -\frac{2A^2}{N}\right) +\frac{1}{32}\frac{3}{N}(A^2+3\mathcal {H}) \right. \\&\left. +\,O\left( \frac{1}{m^{11/16-\epsilon }}\right) \right] =\frac{\pi ^2}{24}\frac{m}{N}\left( 9\mathcal {H}-A^2\right) +O\left( \frac{1}{m^{3/16-\epsilon }}\right) . \end{aligned}$$

The estimate for \(\mathcal {H}\) given by Lemma 5.3 now implies (1.14). \(\square \)

In the rest of Sect. 5, we prove Lemmas 5.1 and 5.2. Firstly, we will require a bound for oscillatory integrals on a surface.

Proposition 5.4

([39, section 7]). If \(\Sigma \) is a hypersurface in \(\mathbb {R}^n\) with non-vanishing Gaussian curvature, then

$$\begin{aligned} \int _\Sigma e^{ix\cdot \xi }\psi (x)\mathrm{d}\sigma (x)=O(|\xi |^{-(n-1)/2}) \end{aligned}$$

as \(|\xi |\rightarrow \infty \), whenever \(\psi \in C_0^{\infty }(\mathbb {R}^n)\).

The case \(n=2\) of Proposition 5.4 was proven by Van der Corput [39, Proposition 2] (see also [36, Lemma 5.2]). Here we will need the case \(n=3\), i.e. as \(|\xi |\rightarrow \infty \),

$$\begin{aligned} \int _\Sigma e^{ix\cdot \xi }\psi (x)\hbox {d}\sigma (x)=O(|\xi |^{-1}). \end{aligned}$$

5.2 Proof of Lemma 5.1

We square the covariance function (2.1)

$$\begin{aligned} r^2(\sigma ,\sigma ') = \frac{1}{N^2}\sum _{\mu ,\mu '} e^{2\pi i\langle \mu -\mu ',\sigma -\sigma '\rangle } \end{aligned}$$

and integrate it over \(\Sigma ^2\): the diagonal terms equal

$$\begin{aligned} \frac{1}{N^2}\iint _{\Sigma ^2}\sum _{\mu }1\hbox {d}\sigma \hbox {d}\sigma ' =\frac{A^2}{N}. \end{aligned}$$

We bound the off-diagonal terms by applying Proposition 5.4

$$\begin{aligned} \frac{1}{N^2}\iint _{\Sigma ^2} \sum _{\mu \ne \mu '} e^{2\pi i\langle \mu -\mu ',\sigma -\sigma '\rangle }\hbox {d}\sigma \hbox {d}\sigma '&=\frac{1}{N^2}\sum _{\mu \ne \mu '}\left| \int _{\Sigma }e^{2\pi i\langle \mu -\mu ',\sigma \rangle }\hbox {d}\sigma \right| ^2 \\&\ll _\Sigma \frac{1}{N^2}\sum _{\mu \ne \mu '}\frac{1}{|\mu -\mu '|^2}, \end{aligned}$$

followed by Proposition 4.5 with \(s=2-\epsilon \):

$$\begin{aligned} \sum _{\mu \ne \mu '}\frac{1}{|\mu -\mu '|^2}\ll \frac{N^2}{(\sqrt{m})^{2-\epsilon }}. \end{aligned}$$

This completes the proof of (5.1).

Next, we show (5.2), (5.3) being similar. The proof of the following preliminary lemma is deferred to Sect. 6.

Lemma 5.5

With DH as in Definition 3.1 and \(\Omega \) as in (3.3), we have the following estimates:

$$\begin{aligned}&-\iint _{\Sigma ^2}\frac{1}{M}D\Omega D^T\mathrm{d}\sigma \mathrm{d}\sigma '=-\frac{2A^2}{N}+O\left( \frac{1}{m^{1-\epsilon }}\right) ; \end{aligned}$$
(5.7)
$$\begin{aligned}&\iint _{\Sigma ^2}\frac{r^2}{M}D\Omega D^T\mathrm{d}\sigma \mathrm{d}\sigma '=O\left( \frac{1}{m^{11/16-\epsilon }}\right) ; \end{aligned}$$
(5.8)
$$\begin{aligned}&\iint _{\Sigma ^2}\frac{1}{M^2}tr\left( H\Omega H\Omega '\right) \mathrm{d}\sigma \mathrm{d}\sigma '=\frac{3}{N}(A^2+3\mathcal {H})+O\left( \frac{1}{m^{1-\epsilon }}\right) ; \end{aligned}$$
(5.9)
$$\begin{aligned}&\iint _{\Sigma ^2}\frac{r^2}{M^2}D\Omega H\Omega 'D^T\mathrm{d}\sigma \mathrm{d}\sigma '=O\left( \frac{1}{m^{11/16-\epsilon }}\right) . \end{aligned}$$
(5.10)

By Definition 3.3,

$$\begin{aligned} X=X(\sigma ,\sigma '):=-\frac{1}{(1-r^2)M}QL^T\Omega D^TD\Omega LQ, \end{aligned}$$

where the matrices DL are as in Definition 3.1, \(\Omega \) as in (3.3), and Q as in (3.6). We separate the domain of integration into the singular set S of Definition 3.10 and its complement:

$$\begin{aligned} \iint _{\Sigma ^2}tr(X)\hbox {d}\sigma \hbox {d}\sigma '=\iint _{\Sigma ^2{\setminus } S}tr(X)\hbox {d}\sigma \hbox {d}\sigma '+O(\text {meas}(S)), \end{aligned}$$
(5.11)

where we used the uniform boundedness of X given by Lemma 3.5. On \(\Sigma ^2{\setminus } S\) the covariance function r is bounded away from \(\pm 1\), so that

$$\begin{aligned} trX =-\frac{1}{M}tr(QL^T\Omega D^TD\Omega LQ) +O\left( tr\left( \frac{r^2}{M}QL^T\Omega D^TD\Omega LQ\right) \right) . \end{aligned}$$
(5.12)

We have

$$\begin{aligned} tr(QL^T\Omega D^TD\Omega LQ)=D\Omega LQQL^T\Omega D^T=D\Omega D^T, \end{aligned}$$
(5.13)

where in the last equality we noted that

$$\begin{aligned} LQ^2L^T\Omega =I_3 \end{aligned}$$
(5.14)

by (3.5). Inserting (5.13) into (5.12) and integrating over \(\Sigma ^2{\setminus } S\),

$$\begin{aligned} \iint _{\Sigma ^2{\setminus } S}tr(X)\hbox {d}\sigma \hbox {d}\sigma '= & {} -\frac{1}{M}\iint _{\Sigma ^2{\setminus } S}D\Omega D^T\hbox {d}\sigma \hbox {d}\sigma ' \nonumber \\&+\,O\left( \iint _{\Sigma ^2{\setminus } S}\frac{r^2}{M}D\Omega D^T\hbox {d}\sigma \hbox {d}\sigma '\right) . \end{aligned}$$
(5.15)

By Definition 3.1,

$$\begin{aligned} D\Omega D^T =-\frac{4\pi ^2}{N^2}\sum _{\mu ,\mu '} e^{2\pi i\langle \mu -\mu ',\sigma -\sigma '\rangle }\mu \Omega \mu '^T. \end{aligned}$$
(5.16)

The uniform bound

$$\begin{aligned} |\mu ^{(i)}|\le \sqrt{m} \qquad i=1,2,3 \end{aligned}$$
(5.17)

implies \(D\Omega D^T\ll M\). We may then rewrite the main term of (5.15) as

$$\begin{aligned} -\frac{1}{M}\iint _{\Sigma ^2{\setminus } S}D\Omega D^T\hbox {d}\sigma \hbox {d}\sigma ' =-\frac{1}{M}\iint _{\Sigma ^2}D\Omega D^T\hbox {d}\sigma \hbox {d}\sigma ' +O\left( \text {meas}(S)\right) .\nonumber \\ \end{aligned}$$
(5.18)

We substitute (5.18) into (5.15) and then (5.15) into (5.11) to obtain

$$\begin{aligned} \iint _{\Sigma ^2}tr(X)\hbox {d}\sigma \hbox {d}\sigma '= & {} -\frac{1}{M}\iint _{\Sigma ^2}D\Omega D^T\hbox {d}\sigma \hbox {d}\sigma '+O\left( \iint _{\Sigma ^2}\frac{r^2}{M}D\Omega D^T\hbox {d}\sigma \hbox {d}\sigma '\right) \nonumber \\&+\,O(\text {meas}(S)). \end{aligned}$$
(5.19)

Inserting the estimates (5.7) and (5.8) into (5.19), and bearing in mind Lemmas 3.11 and 3.8 concludes the proof of (5.2), (5.3) being similar.

5.3 Proof of Lemma 5.2

Similarly to the proof of Lemma 5.1, we write

$$\begin{aligned} \iint _{\Sigma ^2}tr(Y'Y)\hbox {d}\sigma \hbox {d}\sigma '=\iint _{\Sigma ^2{\setminus } S}tr(Y'Y)\hbox {d}\sigma \hbox {d}\sigma '+O(\text {meas}(S)). \end{aligned}$$
(5.20)

By Definition 3.3 and (5.14), on \(\Sigma ^2{\setminus } S\) we write

$$\begin{aligned} Y'Y(\sigma ,\sigma ')= & {} \frac{1}{M}\left[ Q'L^T\Omega '\left( H+\frac{r}{1-r^2}D^TD\right) \Omega LQ\right] \nonumber \\&\cdot \frac{1}{M}\left[ QL^T\Omega \left( H+\frac{r}{1-r^2}D^TD\right) \Omega 'LQ'\right] , \nonumber \\ tr(Y'Y)= & {} \frac{1}{M^2}tr\left( H\Omega H\Omega '\right) +O\left( \frac{r}{M^2}D\Omega H\Omega 'D^T\right) . \end{aligned}$$
(5.21)

We insert (5.21) into (5.20) and use the bound \(tr(H\Omega H\Omega ')\ll M^2\) (recall (5.17)) together with Lemmas 3.11 and 3.8 to obtain

$$\begin{aligned} \iint _{\Sigma ^2}tr(Y'Y)\hbox {d}\sigma \hbox {d}\sigma '= & {} \iint _{\Sigma ^2}\frac{1}{M^2}tr\left( H\Omega H\Omega '\right) \hbox {d}\sigma \hbox {d}\sigma ' \nonumber \\&+O\left( \iint _{\Sigma ^2}\frac{r^2}{M^2}D\Omega H\Omega 'D^T\hbox {d}\sigma \hbox {d}\sigma '\right) +\,O\left( \frac{1}{m^{11/16-\epsilon }}\right) .\nonumber \\ \end{aligned}$$
(5.22)

Substituting (5.9) and (5.10) into (5.22) yields (5.5), concluding the proof of Lemma 5.2.

6 Fourth Moment of r: Proofs of Lemmas 3.8, 5.5, and 3.9

Proof of Lemma 3.8

We write the fourth power of r (2.1)

$$\begin{aligned} r^4(\sigma ,\sigma ') = \frac{1}{N^4}\sum _{\mathcal {E}^4} e^{2\pi i\langle \mu _1+\mu _2+\mu _3+\mu _4,\sigma -\sigma '\rangle } \end{aligned}$$

and substitute it into (3.9) with \(k=4\) to obtain

$$\begin{aligned} \mathcal {R}_4(m)=\frac{1}{N^4} \sum _{\mathcal {E}^4}\iint _{\Sigma ^2}e^{2\pi i\langle \sum \mu _j,\sigma -\sigma '\rangle }\hbox {d}\sigma \hbox {d}\sigma ' \end{aligned}$$

(we abbreviate \(\sum \mu _j:=\mu _1+\mu _2+\mu _3+\mu _4\)). We separate the summation over \(\mathcal {E}^4\) into the set (4.6) of 4-spectral correlations \(\mathcal {C}(4)\) and its complement.

The summation over the 4-correlations is

$$\begin{aligned} \sum _{\mathcal {C}(4)}\iint _{\Sigma ^2}\hbox {d}\sigma \hbox {d}\sigma ' \ll _{\Sigma }|\mathcal {C}(4)|\ll N^{2+\epsilon }, \end{aligned}$$

where we applied (4.8). By Proposition 5.4, the remaining summation is bounded by

$$\begin{aligned}&\sum _{\mathcal {E}^4{\setminus }\mathcal {C}(4)}\iint _{\Sigma ^2}e^{2\pi i\langle \sum \mu _j,\sigma -\sigma '\rangle }\hbox {d}\sigma \hbox {d}\sigma '\\&\quad \ll \sum _{\mathcal {E}^4{\setminus }\mathcal {C}(4)}\frac{1}{|\mu _1+\mu _2+\mu _3+\mu _4|^2}. \end{aligned}$$

By Lemma 4.7, the latter summation has the upper bound \(N^{2+5/8+\epsilon }\). It follows that

$$\begin{aligned} \mathcal {R}_4(m) \ll \frac{1}{N^4}(N^{2+\epsilon }+N^{2+5/8+\epsilon })\ll \frac{1}{m^{11/16-\epsilon }}, \end{aligned}$$

where we applied (1.2). \(\square \)

Proof of Lemma 5.5

One clearly has \(\sum _{\mu }|\mu |^2=mN\); since \(\mathcal {E}\) is invariant under permutations of the coordinates, it follows that

$$\begin{aligned} \sum _{\mu }(\mu ^{(i)})^2=\frac{mN}{3} \qquad \text { for } i=1,2,3. \end{aligned}$$
(6.1)
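This identity, together with the vanishing of the mixed sums \(\sum _{\mu }\mu ^{(i)}\mu ^{(j)}\) (\(i\ne j\)) noted below, follows from the invariance of \(\mathcal {E}\) under permuting coordinates and flipping their signs; both can be confirmed numerically for a sample eigenvalue (plain Python; \(m=30\) is an illustrative choice):

```python
from itertools import product
from math import isqrt

m = 30                                   # 30 is not 0, 4 or 7 mod 8
r = isqrt(m)
E = [v for v in product(range(-r, r + 1), repeat=3)
     if sum(x * x for x in v) == m]
N = len(E)

# sum over E of (mu^(i))^2 equals mN/3 for each coordinate i ...
for i in range(3):
    assert sum(mu[i] ** 2 for mu in E) == m * N // 3
# ... and the mixed sums vanish, by the sign symmetries of E:
for i, j in ((0, 1), (0, 2), (1, 2)):
    assert sum(mu[i] * mu[j] for mu in E) == 0
print(N, m * N // 3)
```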

By the symmetry of the lattice points, one also has \(\sum _{\mu }\mu ^{(i)}\mu ^{(j)}=0\) for any \(i\ne j\). Bearing this in mind, we directly compute the diagonal terms \(\mu =\mu '\) of (5.16) to equal

$$\begin{aligned} \frac{4\pi ^2}{N^2} \sum _{\mu }[(1-n_1^2)(\mu ^{(1)})^2+(1-n_2^2)(\mu ^{(2)})^2+(1-n_3^2)(\mu ^{(3)})^2] =\frac{2M}{N}, \end{aligned}$$

giving a contribution of

$$\begin{aligned} -\frac{1}{M}\iint _{\Sigma ^2}\frac{2M}{N}\hbox {d}\sigma \hbox {d}\sigma '=-\frac{2A^2}{N} \end{aligned}$$
(6.2)

to (5.7). To control the off-diagonal terms \(\mu \ne \mu '\) of \(D\Omega D^T\), we start by applying Proposition 5.4 to the integrals

$$\begin{aligned} \int _{\Sigma }\left( \frac{\mu }{|\mu |}\Omega \frac{\mu '^T}{|\mu '|}(\sigma )\right) e^{2\pi i\langle \mu -\mu ',\sigma \rangle }\hbox {d}\sigma \ll _\Sigma \frac{1}{|\mu -\mu '|} \end{aligned}$$

and

$$\begin{aligned} \int _{\Sigma }e^{2\pi i\langle \mu '-\mu ,\sigma '\rangle }\hbox {d}\sigma ' \ll _\Sigma \frac{1}{|\mu -\mu '|}, \end{aligned}$$

obtaining the bound

$$\begin{aligned}&\sum _{\mu \ne \mu '}\int _{\Sigma } \frac{4\pi ^2}{N^2}\left( \frac{\mu }{|\mu |}\Omega \frac{\mu '^T}{|\mu '|}(\sigma )\right) e^{2\pi i\langle \mu -\mu ',\sigma \rangle }\hbox {d}\sigma \int _{\Sigma }e^{2\pi i\langle \mu '-\mu ,\sigma '\rangle }\hbox {d}\sigma ' \nonumber \\&\quad \ll _\Sigma \frac{1}{N^2}\sum _{\mu \ne \mu '}\frac{1}{|\mu -\mu '|^2}\ll \frac{1}{(\sqrt{m})^{2-\epsilon }}, \end{aligned}$$
(6.3)

where we also applied Proposition 4.5. Consolidating the estimates (6.2) and (6.3) completes the proof of (5.7).

We now show (5.8). To treat the diagonal terms in \(r^2D\Omega D^T\), we use (4.8) and the triangle inequality to obtain

$$\begin{aligned} \iint _{\Sigma ^2}\frac{1}{MN^4}\sum _{\mathcal {C}(4)}e^{2\pi i\langle \sum \mu _j,\sigma -\sigma '\rangle }\mu _3\Omega \mu _4^T \hbox {d}\sigma \hbox {d}\sigma ' \ll \frac{1}{N^{2-\epsilon }} \end{aligned}$$
(6.4)

(recall the abbreviation \(\sum \mu _j:=\mu _1+\mu _2+\mu _3+\mu _4\)). Next, we consider the off-diagonal terms of \(r^2D\Omega D^T\). Invoking Proposition 5.4 again,

$$\begin{aligned}&\sum _{\mathcal {E}^4{\setminus }\mathcal {C}(4)}\int _{\Sigma } \frac{1}{N^4}\left( \frac{\mu _3}{|\mu _3|}\Omega \frac{\mu _4^T}{|\mu _4|}(\sigma )\right) e^{2\pi i\langle \sum \mu _j,\sigma \rangle }\hbox {d}\sigma \int _{\Sigma }e^{2\pi i\langle -\sum \mu _j,\sigma '\rangle }\hbox {d}\sigma ' \nonumber \\&\quad \ll _\Sigma \frac{1}{N^4}\sum _{\mathcal {E}^4{\setminus }\mathcal {C}(4)}\frac{1}{|\sum \mu _j|^2}\ll \frac{1}{m^{11/16-\epsilon }} \end{aligned}$$
(6.5)

having used Lemma 4.7 in the last inequality. Consolidating the estimates (6.4) and (6.5) completes the proof of (5.8).

To prove (5.9), we start with Definition 3.1,

$$\begin{aligned} tr\left( H\Omega H\Omega '\right) =\frac{(4\pi ^2)^2}{N^2}\sum _{\mu ,\mu '}e^{2\pi i\langle \mu -\mu ',\sigma -\sigma '\rangle }\mu \Omega \mu '^T\mu '\Omega '\mu ^T. \end{aligned}$$

One directly calculates the diagonal terms of the latter expression to equal

$$\begin{aligned}&\frac{(4\pi ^2)^2}{N^2}\sum _{\mu }(m-\langle \mu ,n\rangle ^2)\cdot (m-\langle \mu ,n'\rangle ^2) \nonumber \\&\quad =\frac{(4\pi ^2)^2}{N^2}\left[ m^2N-m\sum _{\mu }\langle \mu ,n\rangle ^2-m\sum _{\mu }\langle \mu ,n'\rangle ^2+\sum _{\mu }\langle \mu ,n\rangle ^2\langle \mu ,n'\rangle ^2\right] \nonumber \\&\quad =\frac{(4\pi ^2)^2}{N^2}\left[ \frac{m^2N}{3}+\sum _{\mu }\langle \mu ,n\rangle ^2\langle \mu ,n'\rangle ^2\right] \end{aligned}$$
(6.6)

having used (6.1) in the last step. We control the contribution of the off-diagonal terms of \(tr\left( H\Omega H\Omega '\right) \) via Propositions 5.4 and 4.5:

$$\begin{aligned} \frac{(4\pi ^2)^2}{N^2}\sum _{\mu \ne \mu '}\left| \int _{\Sigma }\frac{\mu }{|\mu |}\Omega \frac{\mu '^T}{|\mu '|}e^{2\pi i\langle \mu -\mu ',\sigma \rangle }\hbox {d}\sigma \right| ^2\ll & {} _\Sigma \frac{1}{N^2}\sum _{\mu \ne \mu '}\frac{1}{|\mu -\mu '|^2} \nonumber \\\ll & {} \frac{1}{(\sqrt{m})^{2-\epsilon }}. \end{aligned}$$
(6.7)

Substituting (6.6) and (6.7) into (5.22) and recalling the definition of \(\mathcal {H}\) (5.4) yields (5.9). The computation for (5.10) is very similar to the preceding ones and we omit it here. \(\square \)

Proof of Lemma 3.9

We will prove a few of the bounds in detail, the remaining ones being similar. By squaring X (Definition 3.3), we obtain

$$\begin{aligned} X^2=\left[ - \frac{1}{(1-r^2)M}\right] ^2\cdot \left( QL^T\Omega D^TD\Omega LQ\right) ^2. \end{aligned}$$

Therefore, outside the singular set S (Definition 3.10),

$$\begin{aligned} tr(X^2) \ll \frac{1}{M^2}(D\Omega D^T)^2 \end{aligned}$$
(6.8)

where we used (5.14). We may thus write

$$\begin{aligned} \iint _{\Sigma ^2}tr(X^2)\hbox {d}\sigma \hbox {d}\sigma '=O\left( \iint _{\Sigma ^2}\frac{1}{M^2}(D\Omega D^T)^2\hbox {d}\sigma \hbox {d}\sigma '\right) +O(\text {meas}(S)). \end{aligned}$$
(6.9)

The diagonal terms of the integral on the RHS in (6.9) give a contribution of

$$\begin{aligned} \iint _{\Sigma ^2}\frac{1}{M^2N^4}\sum _{\mathcal {C}(4)}\mu _3\Omega \mu _4^T\mu _4\Omega '\mu _3^T \hbox {d}\sigma \hbox {d}\sigma ' \ll \frac{1}{N^{2-\epsilon }} \end{aligned}$$
(6.10)

via (4.8). The off-diagonal terms are bounded via Proposition 5.4 and Lemma 4.7:

$$\begin{aligned}&\frac{1}{N^4}\sum _{\mathcal {E}^4{\setminus }\mathcal {C}(4)}\left| \int _{\Sigma } \left( \frac{\mu _3}{|\mu _3|}\Omega \frac{\mu _4^T}{|\mu _4|}(\sigma )\right) e^{2\pi i\langle \sum \mu _j,\sigma \rangle }\hbox {d}\sigma \right| ^2 \nonumber \\&\quad \ll _\Sigma \frac{1}{N^4}\sum _{\mathcal {E}^4{\setminus }\mathcal {C}(4)}\frac{1}{|\sum \mu _j|^2}\ll \frac{1}{m^{11/16-\epsilon }}. \end{aligned}$$
(6.11)

Inserting the bounds (6.10) and (6.11) into (6.9) (and bearing in mind Lemmas 3.11 and 3.8) yields the desired estimate for \(tr(X^2)\). The one for \(tr(X'^2)\) is proven in a similar way.

Let us now prove the bound for \(tr(Y^4)\), the one for \(tr(Y'^4)\) being similar. By Definition 3.3,

$$\begin{aligned} Y=-\frac{1}{M}\left[ QL^T\Omega \left( H+\frac{r}{1-r^2}D^TD\right) \Omega 'LQ'\right] \end{aligned}$$

hence on \(\Sigma ^2{\setminus } S\)

$$\begin{aligned} tr(Y^4)\ll \frac{1}{M^4}tr(H\Omega H\Omega ')^2 \end{aligned}$$
(6.12)

via (5.14). Therefore,

$$\begin{aligned} \iint _{\Sigma ^2}tr(Y^4)\hbox {d}\sigma \hbox {d}\sigma '=O\left( \iint _{\Sigma ^2}\frac{1}{M^4}tr(H\Omega H\Omega ')^2\hbox {d}\sigma \hbox {d}\sigma '\right) +O(\text {meas}(S)).\nonumber \\ \end{aligned}$$
(6.13)

Similarly to the case of \(tr(X^2)\), the diagonal contribution to (6.13) is \(O(N^{-2+\epsilon })\), whereas the off-diagonal terms are bounded invoking Proposition 5.4 and Lemma 4.7 again:

$$\begin{aligned}&\frac{1}{N^4}\sum _{\mathcal {E}^4{\setminus }\mathcal {C}(4)}\left| \int _{\Sigma } \frac{\mu _1}{|\mu _1|}\Omega \frac{\mu _2^T}{|\mu _2|}\frac{\mu _3}{|\mu _3|}\Omega \frac{\mu _4^T}{|\mu _4|}e^{2\pi i\langle \sum \mu _j,\sigma \rangle }\hbox {d}\sigma \right| ^2 \\&\quad \ll _\Sigma \frac{1}{N^4}\sum _{\mathcal {E}^4{\setminus }\mathcal {C}(4)}\frac{1}{|\sum \mu _j|^2}\ll \frac{1}{m^{11/16-\epsilon }}. \end{aligned}$$

Finally, we show the bound for \(r^2 tr(X)\), the ones for \(r^2 tr(X')\) and \(r^2 tr(Y'Y)\) being similar. As in the previous computations, one writes

$$\begin{aligned} \iint _{\Sigma ^2}r^2tr(X)\hbox {d}\sigma \hbox {d}\sigma '=O\left( \iint _{\Sigma ^2}\frac{r^2}{M}D\Omega D^T\hbox {d}\sigma \hbox {d}\sigma '\right) +O(\text {meas}(S)). \end{aligned}$$

The required result now follows from (5.8) and Lemmas 3.11 and 3.8. \(\square \)

7 Study of the Variance Leading Constant: Proof of Proposition 1.4

Recall A is the area of the surface \(\Sigma \), \(\overrightarrow{n}(\sigma )\) the unit normal at the point \(\sigma \in \Sigma \), and \(\mathcal {I}\) the integral (1.13)

$$\begin{aligned} \mathcal {I}=\iint _{\Sigma ^2}\langle \overrightarrow{n}(\sigma ),\overrightarrow{n}(\sigma ')\rangle ^2\hbox {d}\sigma \hbox {d}\sigma '. \end{aligned}$$

The goal of this section is to prove Proposition 1.4, i.e. that the sharp bounds

$$\begin{aligned} \frac{A^2}{3}\le \mathcal {I}\le A^2 \end{aligned}$$

hold.
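Both bounds are attained: for a planar patch all normals coincide, giving \(\mathcal {I}=A^2\), while for the round sphere \(\mathcal {I}/A^2\) is the average of \(\langle \overrightarrow{n},\overrightarrow{n}'\rangle ^2\) over two independent uniform directions, which equals 1/3. The Monte Carlo sketch below (plain Python; the seed and sample size are arbitrary choices) checks the latter value:

```python
import random

random.seed(1)

def uniform_dir():
    """Uniform random unit vector in R^3 (a normal direction on the sphere)."""
    while True:
        v = [random.gauss(0.0, 1.0) for _ in range(3)]
        norm = sum(x * x for x in v) ** 0.5
        if norm > 1e-12:
            return [x / norm for x in v]

n_samples = 200_000
acc = 0.0
for _ in range(n_samples):
    n1, n2 = uniform_dir(), uniform_dir()
    acc += sum(a * b for a, b in zip(n1, n2)) ** 2   # <n, n'>^2
print(acc / n_samples)   # close to 1/3, so I = A^2/3 for the sphere
```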

Lemma 7.1

We have

$$\begin{aligned} \mathcal {I}\le A^2, \end{aligned}$$

with equality if and only if \(\Sigma \) is contained in a plane.

Proof

Since \(|\langle \overrightarrow{n}(\sigma ),\overrightarrow{n}(\sigma ')\rangle |\le 1\) pointwise, we have

$$\begin{aligned} \mathcal {I}\le \iint _{\Sigma ^2}\hbox {d}\sigma \hbox {d}\sigma '=A^2, \end{aligned}$$

with equality if and only if

$$\begin{aligned} \langle \overrightarrow{n}(\sigma ),\overrightarrow{n}(\sigma ')\rangle =\pm 1 \end{aligned}$$

for every \(\sigma ,\sigma '\in \Sigma \), i.e. when the normal vectors to the surface are all parallel.

\(\square \)

We now turn to establishing the lower bound \(\mathcal {I}\ge A^2/3\). For any probability measure \(\tau \) on \(\mathcal {S}^2\) invariant under reflections in the coordinate planes, define the number

$$\begin{aligned} c(\tau ,\Sigma ):=\iint _{\Sigma ^2}\int _{\mathcal {S}^{2}}\left\langle \theta ,\overrightarrow{n}(\sigma )\right\rangle ^2\cdot \left\langle \theta ,\overrightarrow{n}(\sigma ')\right\rangle ^2\hbox {d}\sigma \hbox {d}\sigma 'd\tau (\theta ). \end{aligned}$$

Lemma 7.2

For any probability measure \(\tau \) on \(\mathcal {S}^2\) invariant under reflections in the coordinate planes, we have

$$\begin{aligned} \frac{A^2}{9}\le c(\tau ,\Sigma )\le \frac{A^2}{3}. \end{aligned}$$

Lemma 7.2 will be proven in a moment; assuming it, we may complete the proof of Proposition 1.4.

Proof of Proposition 1.4 Assuming Lemma 7.2

One has

$$\begin{aligned} c\left( \frac{\sin (\varphi )\hbox {d}\varphi \hbox {d}\psi }{4\pi },\Sigma \right) =\frac{A^2+2\mathcal {I}}{15} \end{aligned}$$

hence via Lemma 7.2 we obtain the lower bound in (1.15):

$$\begin{aligned} \mathcal {I}\ge \frac{A^2}{3}. \end{aligned}$$

The upper bound \(\mathcal {I}\le A^2\) in (1.15) has already been proven in Lemma 7.1.

\(\square \)

Proof of Lemma 7.2

We write

$$\begin{aligned} c(\tau ,\Sigma )=\int _{\mathcal {S}^{2}}q(\theta ,\Sigma )^2\hbox {d}\tau (\theta ), \end{aligned}$$
(7.1)

where for \(\theta \in \mathcal {S}^2\) we have defined

$$\begin{aligned} q(\theta ,\Sigma ):=\int _\Sigma \langle \theta ,\overrightarrow{n}(\sigma )\rangle ^2\hbox {d}\sigma . \end{aligned}$$

Let \(\mathfrak {R}\) be the intersection of the unit sphere and the first octant

$$\begin{aligned} \mathfrak {R}:=\{(x,y,z)\in \mathbb {R}^3 : x,y,z>0\}\cap \mathcal {S}^{2}. \end{aligned}$$

For each \(\theta \in \mathfrak {R}\), we complete \(\theta \) to an orthonormal basis \((\theta ,\theta ',\theta '')\) such that \(\theta ,\theta ',\theta ''\) lie in distinct octants, hence

$$\begin{aligned} 3 c(\tau ,\Sigma ) =8\int _\mathfrak {R}\left[ q(\theta ,\Sigma )^2+q(\theta ',\Sigma )^2+q(\theta '',\Sigma )^2\right] \hbox {d}\tau (\theta ) \end{aligned}$$
(7.2)

via (7.1) and the symmetries of \(\tau \). As \(\theta ,\theta ',\theta ''\in \mathcal {S}^2\) are pairwise orthogonal, we have

$$\begin{aligned} q(\theta ,\Sigma )+q(\theta ',\Sigma )+q(\theta '',\Sigma )=A \end{aligned}$$

so that by Cauchy–Schwarz (or the AM–QM inequality)

$$\begin{aligned} \frac{A^2}{3}\le q(\theta ,\Sigma )^2+q(\theta ',\Sigma )^2+q(\theta '',\Sigma )^2\le A^2. \end{aligned}$$
(7.3)

Inserting the two inequalities (7.3) into (7.2) yields the claim of the present lemma. \(\square \)

Let us compare the conditions for the vanishing of the variance leading term in the two- and three-dimensional settings. In the former, this occurs for certain subsets of circles [36, Proposition 7.3] and, more generally, for families of static curves [34, appendix F]. We were able to find specific examples of surfaces such that the variance leading term vanishes (e.g. spheres and hemispheres): it would be interesting to find, if one exists, a more general family of surfaces satisfying this condition.
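The hemisphere example can be verified numerically. Writing \(Q_{ij}:=\int _\Sigma n_in_j\hbox {d}\sigma \), one has \(\mathcal {I}=\sum _{i,j}Q_{ij}^2\); for the upper unit hemisphere a direct computation gives \(Q=(A/3)I_3\) with \(A=2\pi \), so that \(\mathcal {I}=A^2/3\) and, by Lemma 5.3, \(9\mathcal {H}-A^2=\tfrac{2}{5}(3\mathcal {I}-A^2)+o(1)\) vanishes asymptotically. A quadrature sketch (plain Python; midpoint rule in spherical coordinates):

```python
import math

def Q_matrix(n_phi=400, n_psi=400):
    """Q_ij = integral of n_i n_j over the upper unit hemisphere, with
    n = (sin(phi)cos(psi), sin(phi)sin(psi), cos(phi)), dsigma = sin(phi) dphi dpsi."""
    hp, hq = (math.pi / 2) / n_phi, (2 * math.pi) / n_psi
    Q = [[0.0] * 3 for _ in range(3)]
    for k in range(n_phi):
        phi = (k + 0.5) * hp
        for l in range(n_psi):
            psi = (l + 0.5) * hq
            n = (math.sin(phi) * math.cos(psi),
                 math.sin(phi) * math.sin(psi),
                 math.cos(phi))
            w = math.sin(phi) * hp * hq
            for i in range(3):
                for j in range(3):
                    Q[i][j] += n[i] * n[j] * w
    return Q

Q = Q_matrix()
A = 2 * math.pi                             # area of the unit hemisphere
I = sum(Q[i][j] ** 2 for i in range(3) for j in range(3))
print(I / A ** 2)                           # close to 1/3: the leading term vanishes
```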