1 Introduction and Formulation of the Main Result

The Circular \(\beta \)-Ensemble (C\(\beta \)E) is determined by the joint probability density

$$\begin{aligned} p_N^\beta (\theta _1, \ldots , \theta _N)=\frac{1}{Z_{N,\beta }}\prod _{1\le j< k\le N}\left| e^{i\theta _j}-e^{i\theta _k}\right| ^\beta , \ \ \ \end{aligned}$$
(1.1)

where \(\theta =(\theta _1, \ldots , \theta _N) \in {\mathbb {T}}^N, \) i.e. \(0\le \theta _1, \ldots , \theta _N <2 \pi , \ \beta >0, \) and the partition function \(Z_{N,\beta }\) is given by

$$\begin{aligned} Z_{N,\beta }=\int _{{\mathbb {T}}^N} \prod _{1\le j< k\le N}\left| e^{i\theta _j}-e^{i\theta _k}\right| ^\beta d\theta = (2\pi )^N \frac{\Gamma \left( 1+\frac{\beta N}{2}\right) }{\Gamma \left( 1+ \frac{\beta }{2}\right) ^N}. \end{aligned}$$
(1.2)

The ensemble was introduced in Random Matrix Theory by Dyson in [8]. The Circular Unitary Ensemble (CUE) corresponds to \(\beta =2.\) In this special case (1.1) describes the joint distribution of the eigenvalues of an \(N\times N\) random unitary matrix U distributed according to the Haar measure.
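As an illustrative sanity check of the closed form (1.2) (a numerical sketch, not part of the argument): for \(\beta =2\) and \(N=2\) the formula gives \(Z_{2,2}=(2\pi )^2\,\Gamma (3)/\Gamma (2)^2=8\pi ^2,\) which can be compared against a direct discretization of \(\int _{{\mathbb {T}}^2}|e^{i\theta _1}-e^{i\theta _2}|^2\,d\theta :\)

```python
import math
import numpy as np

# Z_{N,beta} from the closed form (1.2), here for beta = 2, N = 2:
# (2*pi)^2 * Gamma(1 + 2) / Gamma(1 + 1)^2 = 8 * pi^2.
Z_formula = (2 * math.pi) ** 2 * math.gamma(3) / math.gamma(2) ** 2

# Direct Riemann sum of |e^{i t1} - e^{i t2}|^2 = 2 - 2 cos(t1 - t2)
# over the torus [0, 2*pi)^2; for a trigonometric polynomial the
# equally spaced sum is exact up to floating-point error.
M = 64
t = 2 * math.pi * np.arange(M) / M
t1, t2 = np.meshgrid(t, t)
Z_numeric = np.sum(2 - 2 * np.cos(t1 - t2)) * (2 * math.pi / M) ** 2

assert abs(Z_formula - 8 * math.pi ** 2) < 1e-10
assert abs(Z_numeric - Z_formula) / Z_formula < 1e-12
```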

In [1]–[3], we studied the limiting distribution of pair counting functions

$$\begin{aligned} S_N(f)=\sum _{1\le i\ne j\le N} f(L_N (\theta _i-\theta _j)_c), \end{aligned}$$
(1.3)

in the Circular \(\beta \)-Ensemble, where \(1\le L_N\le N\) and \((\theta _i-\theta _j)_c\) is the phase difference on the unit circle, i.e.

$$\begin{aligned} (x-y)_c= {\left\{ \begin{array}{ll} x-y&{}\text {if } -\pi \le x-y<\pi ,\\ x-y -2 \pi &{}\text {if } \pi \le x-y<2 \pi ,\\ x-y +2 \pi &{}\text {if } -2 \pi<x-y<-\pi . \end{array}\right. } \end{aligned}$$
(1.4)
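A direct implementation of the circular difference (1.4) may help fix the convention (an illustrative sketch; the function name is ours):

```python
import math

def circ_diff(x, y):
    """Phase difference (x - y)_c on the unit circle, as in (1.4):
    the representative of x - y modulo 2*pi lying in [-pi, pi)."""
    d = x - y
    if d >= math.pi:
        return d - 2 * math.pi
    if d < -math.pi:
        return d + 2 * math.pi
    return d

# (x - y)_c always lands in [-pi, pi) and agrees with x - y mod 2*pi.
assert circ_diff(0.5, 0.2) == 0.5 - 0.2
assert abs(circ_diff(6.0, 0.5) - (6.0 - 0.5 - 2 * math.pi)) < 1e-15
assert abs(circ_diff(0.5, 6.0) - (0.5 - 6.0 + 2 * math.pi)) < 1e-15
assert -math.pi <= circ_diff(6.2, 0.1) < math.pi
```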

The case \(\beta =2, \ L_N=N\) was motivated by a classical result of Montgomery on pair correlation of zeros of the Riemann zeta function [18]–[19]. Let us denote the “non-trivial” zeros as \(\{ 1/2 \pm i \gamma _k, \ \gamma _k>0\}.\) Montgomery’s theorem suggested that the rescaled zeros

$$\begin{aligned} {\tilde{\gamma }}_k= \frac{\gamma _k}{2 \pi } \log (\gamma _k), \end{aligned}$$

asymptotically behave as the eigenvalues of a large random unitary (CUE) matrix. Namely, Montgomery studied the asymptotic behavior of the statistic

$$\begin{aligned} F_T(\alpha )=T^{-1} \sum _{0<{\tilde{\gamma }}_j, {\tilde{\gamma }}_k\le T} \exp (i \alpha ({\tilde{\gamma }}_j-{\tilde{\gamma }}_k)) \frac{4}{4+({\tilde{\gamma }}_j-{\tilde{\gamma }}_k)^2/\log (T)^2}, \end{aligned}$$

for real \(\alpha \) and large real T. Assuming the Riemann Hypothesis, Montgomery argued that

$$\begin{aligned} F(\alpha ):=\lim _{T\rightarrow \infty } F_T(\alpha )={\left\{ \begin{array}{ll} |\alpha |, &{}\text { if } \ 0<|\alpha |<1, \\ 1 , &{}\text { if } \ |\alpha |\ge 1. \end{array}\right. } \end{aligned}$$
(1.5)

Montgomery rigorously proved (1.5) for \(0<|\alpha |<1\) and provided heuristic arguments to support the formula for \(|\alpha |\ge 1.\) He also proved that for \( \alpha =0 \) and large T the statistic behaves as

$$\begin{aligned} F_T(0)=\log (T)^{-1} (1+o(1)). \end{aligned}$$

Since \(\alpha \mapsto \delta (\alpha )+\min (|\alpha |, 1)\) is the Fourier transform of \(\delta (x)+ 1-\left( \frac{\sin (\pi x)}{\pi x}\right) ^2,\) formula (1.5) implies that the asymptotic behavior of the two-point correlations of the rescaled zeros of the Riemann zeta function and of the eigenvalues of a large random unitary matrix coincide in the limit.
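For the reader's convenience, the Fourier pair behind this statement can be verified in one line, with the convention \({\hat{h}}(\alpha )=\int _{{\mathbb {R}}} h(x) e^{-2\pi i \alpha x}\,dx:\) the transform of \(\left( \frac{\sin (\pi x)}{\pi x}\right) ^2\) is the triangle function \(\max (1-|\alpha |,0),\) and the transform of the constant \(1\) is \(\delta (\alpha ),\) so

```latex
% Fourier transform with \hat h(\alpha) = \int_{\mathbb R} h(x) e^{-2\pi i \alpha x}\, dx:
\widehat{\Big(\delta + 1 - \tfrac{\sin^2(\pi x)}{(\pi x)^2}\Big)}(\alpha)
   = 1 + \delta(\alpha) - \max\big(1 - |\alpha|,\, 0\big)
   = \delta(\alpha) + \min\big(|\alpha|,\, 1\big).
```

For \(|\alpha |\le 1\) the right-hand side equals \(\delta (\alpha )+|\alpha |,\) and for \(|\alpha |> 1\) it equals \(1,\) in agreement with (1.5).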

In 1994, Hejhal [13] extended the result to the case of three-point correlation functions, again under a technical condition on the support of the Fourier transform of a test function. In 1996, Rudnick and Sarnak [20] considered the l-level correlation sum, \(l\ge 2,\)

$$\begin{aligned} R_{T,l}(g)=\sum ^{*} g({\tilde{\gamma }}_{j_1}, \ldots , {\tilde{\gamma }}_{j_l}), \end{aligned}$$
(1.6)

where the sum is restricted to distinct rescaled zeros in the interval [0, T] for large T,  and g is a smooth test function satisfying:

  1. (i)

    \(g(x_1, x_2, \ldots , x_l)\) is symmetric.

  2. (ii)

    \(g(x_1+t, x_2+t, \ldots , x_l+t)=g(x_1, x_2, \ldots , x_l)\) for all \(t\in {\mathbb {R}}.\)

  3. (iii)

    \(g(x_1, x_2, \ldots , x_l)\rightarrow 0\) rapidly as \(|x|\rightarrow \infty \) in the hyperplane \(\sum x_j=0.\)

Assuming the Riemann Hypothesis and an additional important technical condition that \({\hat{g}}(\xi _1, \ldots , \xi _l)\), the Fourier transform of g, is supported in \(\sum _j |\xi _j|<2,\) they proved that

$$\begin{aligned} R_{T,l}(g) \rightarrow \int _{{\mathbb {R}}^l} g(x_1, \ldots , x_l) \rho _l(x_1, \ldots , x_l) \delta (\frac{x_1+\ldots +x_l}{l}) dx_1\cdots dx_l, \end{aligned}$$
(1.7)

where \(\delta (x)\) is the delta function and

$$\begin{aligned} \rho _l(x_1, \ldots , x_l)=\det (K(x_i,x_j))_{1\le i,j\le l}, \ \ K(x,y)=\frac{\sin (\pi (x-y))}{\pi (x-y)}, \end{aligned}$$
(1.8)

is the limiting l-point correlation function for the CUE. We refer the reader to [4, 5, 14, 17] and the references therein for additional information on exciting connections between Random Matrices and Number Theory.

Introducing \(f(y_1, \ldots , y_{n})=g(0, y_1, \ldots , y_{l-1}), \ n=l-1, \ \) we can rewrite (1.6) as

$$\begin{aligned} R_{T,l}(g)=\sum ^{*} f({\tilde{\gamma }}_{j_2}-{\tilde{\gamma }}_{j_1}, \ldots , {\tilde{\gamma }}_{j_l}-{\tilde{\gamma }}_{j_1}), \end{aligned}$$
(1.9)

where f is a smooth symmetric test function decaying fast at infinity and the sum is over all distinct rescaled zeros in the interval [0, T]. In this paper, we study the limiting fluctuation of the analogue of (1.9) for the CUE. Specifically, we consider

$$\begin{aligned} S_N(f)=\sum _{1\le j_1, j_2, \ldots , j_{n+1}\le N} f(N (\theta _{j_2}-\theta _{j_1})_c, \ldots , N (\theta _{j_{n+1}}-\theta _{j_1})_c), \end{aligned}$$
(1.10)

where \(\theta =(\theta _1, \ldots , \theta _N) \in {\mathbb {T}}^N\) comes from the CUE and \(f\in C^{\infty }_c({\mathbb {R}}^{n})\) is a smooth test function with compact support. Even though the number of terms in (1.10) is \(N^{n+1},\) the number of non-zero terms in the sum is of order N.
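To see why only \(O(N)\) of the \(N^{n+1}\) terms survive, consider an illustrative toy configuration (deterministic, not the CUE): equally spaced angles \(\theta _j=2\pi j/N\) with \(n=1\) and a hat function supported in \([-7,7].\) Then \(N(\theta _i-\theta _j)_c=2\pi d\) with an integer \(d\) reduced modulo N into \([-N/2, N/2),\) so only \(d\in \{-1,0,1\}\) contribute, giving exactly 3N non-zero terms out of \(N^2:\)

```python
import math
import numpy as np

def circ(d):
    # circular difference (1.4), reduced into [-pi, pi)
    return (d + math.pi) % (2 * math.pi) - math.pi

# Hat function supported in [-7, 7] (an arbitrary illustrative choice).
def f(x):
    return max(0.0, 1.0 - abs(x) / 7.0)

N = 50
theta = 2 * math.pi * np.arange(N) / N   # equally spaced "eigenangles"

# Count the non-zero terms among all N^2 ordered pairs in (1.10) with n = 1.
nonzero = sum(
    1
    for i in range(N)
    for j in range(N)
    if f(N * circ(theta[i] - theta[j])) > 0
)

# Only index differences d = 0, +1, -1 (mod N) give |2*pi*d| < 7,
# so exactly 3N of the N^2 terms are non-zero.
assert nonzero == 3 * N
```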

When \(n=1\) the following result was proven in [1]:

Theorem 1.1

[1]. Let \(f \in C^{\infty }_c({\mathbb {R}})\) be an even, smooth, compactly supported function on the real line and \(\theta =(\theta _1, \ldots , \theta _N) \in {\mathbb {T}}^N\) be CUE-distributed. Consider

$$\begin{aligned} S_N(f(N\cdot ))=\sum _{1\le i,j\le N} f(N (\theta _i-\theta _j)_c). \end{aligned}$$
(1.11)

Then

$$\begin{aligned} {\mathbb {E}}S_N(f)= \sum _{k\in {\mathbb {Z}}} \frac{1}{\sqrt{2\pi }}{\hat{f}}(k/N) \min \left( \frac{|k|}{N}, 1\right) + {\hat{f}}(0) N, \end{aligned}$$

and \((S_N(f(N\cdot )) -{\mathbb {E}}S_N(f(N\cdot ))) N^{-1/2}\) converges in distribution to a centered real Gaussian random variable with variance

$$\begin{aligned}&\frac{1}{\pi } \int _{{\mathbb {R}}} |{\hat{f}}(t)|^2 \min (|t|,1)^2 dt -\frac{1}{\pi } \int _{|s-t|\le 1, |s|\vee |t|\ge 1}{\hat{f}}(t) {\hat{f}}(s) (1-|s-t|) ds dt\nonumber \\&-\frac{1}{\pi } \int _{0\le s,t\le 1, s+t>1}{\hat{f}}(s) {\hat{f}}(t) (s+t-1) ds dt. \end{aligned}$$
(1.12)

Remark 1.2

The limiting distribution of (1.11) does not change if one replaces the circular difference (1.4) in the argument of \(f(N\cdot )\) by the regular one and studies instead

$$\begin{aligned} \sum _{1\le i,j\le N} f(N (\theta _i-\theta _j)), \end{aligned}$$
(1.13)

since the number of pairs of the eigenvalues in a \(O(N^{-1})\) neighborhood of \(\theta =0\) is bounded in probability.

The proof relied on the cumulant technique and had a strong combinatorial flavor. The purpose of this paper is to give a simpler proof that works for arbitrary \(n>1.\) Below we formulate our main result.

Theorem 1.3

Let \(f \in C^{\infty }_c({\mathbb {R}}^{n})\) be a smooth, symmetric, compactly supported function on \({\mathbb {R}}^{n}, \ n\ge 1, \) and \(\theta =(\theta _1, \ldots , \theta _N) \in {\mathbb {T}}^N\) be a CUE-distributed random vector. Consider the l-tuple smoothed counting statistic \( S_N(f), \ l=n+1,\) defined in (1.10). Then \({\mathbb {E}}S_N(f)\) satisfies (3.5)–(3.6) and the normalized random variable \((S_N(f) -{\mathbb {E}}S_N(f)) N^{-1/2}\) converges in distribution to a centered real Gaussian random variable \(N(0, \sigma ^2(f))\) with the limiting variance \(\sigma ^2(f)\) defined in (3.17).

The paper is organized as follows. Some preliminary facts are given in Sect. 2. Section 3 is devoted to the computation of the mathematical expectation and variance of \(S_N(f).\) Theorem 1.3 is proven in Sect. 4. Throughout the paper the notation \(a_N=O(b_N)\) means that the ratio \( a_N/b_N\) is bounded from above in absolute value. The notation \(a_N=o(b_N)\) means that \(a_N/b_N\rightarrow 0\) as \(N\rightarrow \infty .\)

2 Preliminary Facts

Let \(f \in C^{\infty }_c({\mathbb {R}}^{n}).\) When N is sufficiently large, the support of \(f(N x), \ x=(x_1, \ldots , x_{n} ), \) is contained in the cube \([-\pi ,\pi ]^{n}\) and one can write a Fourier series

$$\begin{aligned} f(N x)=f(N x_1, \ldots , N x_n)=\frac{1}{(2\pi )^{n/2} N^{n}} \sum _{k\in {\mathbb {Z}}^n} {\hat{f}}(k_1 N^{-1}, \ldots , k_n N^{-1})e^{i k\cdot x}, \end{aligned}$$

where

$$\begin{aligned} {\hat{f}}(\xi )=\frac{1}{(2\pi )^{n/2}} \int _{{\mathbb {R}}^n}f(x)e^{-i \xi \cdot x} dx. \end{aligned}$$
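As an illustrative numerical check of these normalizations (a sketch of our own with \(n=1\) and the triangular hat \(f(u)=\max (1-|u|,0),\) which is merely continuous rather than smooth, but whose transform \({\hat{f}}(\xi )=(2\pi )^{-1/2}(\sin (\xi /2)/(\xi /2))^2\) is known in closed form):

```python
import math
import numpy as np

N = 8
x0 = 0.05                      # evaluation point; N*x0 = 0.4 lies in supp f

# fhat(xi) = (2*pi)^{-1/2} * (sin(xi/2)/(xi/2))^2, the transform of the
# triangular hat f(u) = max(1 - |u|, 0) under the convention above.
def fhat(xi):
    xi = np.asarray(xi, dtype=float)
    half = np.where(xi == 0.0, 1.0, xi / 2.0)     # avoid division by zero
    val = np.where(xi == 0.0, 1.0, (np.sin(xi / 2.0) / half) ** 2)
    return val / math.sqrt(2.0 * math.pi)

# Partial Fourier series (2*pi)^{-1/2} N^{-1} sum_k fhat(k/N) e^{i k x}.
K = 200_000
k = np.arange(-K, K + 1)
series = np.sum(fhat(k / N) * np.exp(1j * k * x0)).real / (math.sqrt(2 * math.pi) * N)

direct = max(1 - abs(N * x0), 0.0)   # f(N*x0) = 0.6
assert abs(series - direct) < 1e-3   # truncation error is O(1/K)
```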

Then

$$\begin{aligned} S_N(f)&=\sum _{1\le j_1, j_2, \ldots , j_{n+1}\le N} f(N (\theta _{j_2}-\theta _{j_1})_c, \ldots , N (\theta _{j_{n+1}}-\theta _{j_1})_c) \nonumber \\&= \frac{1}{(2\pi )^{n/2} N^{n}} \sum _{k\in {\mathbb {Z}}^n} {\hat{f}}(k_1 N^{-1}, \ldots , k_n N^{-1}) \prod _{j=1}^{n+1} T_{N, k_j}, \end{aligned}$$
(2.1)

where

$$\begin{aligned} T_{N,s}=\sum _{m=1}^N e^{i s \theta _m}=Tr (U^s) \end{aligned}$$
(2.2)

is the trace of the s-th power of a random unitary (CUE) matrix, and \(k_{n+1}=-\sum _{j=1}^n k_j.\)

First, we evaluate \({\mathbb {E}}S_N(f).\ \) One has

$$\begin{aligned} {\mathbb {E}}S_N(f)= \frac{1}{(2\pi )^{n/2} N^{n}} \sum _{k\in {\mathbb {Z}}^n} {\hat{f}}(k_1 N^{-1}, \ldots , k_n N^{-1}) \times {\mathbb {E}}[\prod _{j=1}^{n+1} T_{N, k_j}]. \end{aligned}$$
(2.3)

To compute \({\mathbb {E}}[\prod _{j=1}^{n+1} T_{N, k_j}],\) we study the joint cumulants of the traces of powers of a CUE matrix. We refer the reader to [15] (see also [1], Sect. 5) for the definition and basic properties of the joint cumulants. We will use the notation \(\kappa _m^{(N)}(k_1, \ldots , k_m)\) for the joint cumulant of \(T_{N, k_1}, \ldots , T_{N, k_m}, \) i.e.

$$\begin{aligned} \kappa _m^{(N)}(k_1, \ldots , k_m)=\kappa (T_{N, k_1}, \ldots , T_{N, k_m}). \end{aligned}$$

Recall that

$$\begin{aligned} {\mathbb {E}}[\prod _{j=1}^{n+1} T_{N, k_j}]=\sum _{\pi } \prod _{B\in \pi } \kappa _{|B|}^{(N)}(k_i: i \in B), \end{aligned}$$
(2.4)

where the sum is over all partitions \(\pi \) of \(\{1, \ldots , n+1\},\) B runs through the list of all blocks of the partition \(\pi , \ \) and |B| is the cardinality of a block B. The following result was established in [22] (see also Sect. 5 of [1]):

Lemma 2.1

  1. (i)

If \(p>1\) and either \(k_1+\ldots +k_p\ne 0\) or \( \ \prod _{i=1}^p k_i=0, \ \) or both, then

    $$\begin{aligned} \kappa _p^{(N)}(k_1, \ldots , k_p)=0. \end{aligned}$$
    (2.5)
  2. (ii)

If \(p>1, \ \ \prod _{i=1}^p k_i\ne 0,\) and \(\sum _{i=1}^p k_i=0,\) then

    $$\begin{aligned} \kappa _p^{(N)}(k_1, \ldots , k_p)=\sum _{m=1}^p \frac{(-1)^{m}}{m} \sum _{\begin{array}{c} (p_1, \ldots , p_m):\\ p_1+\ldots +p_m=p, \ p_1, \ldots p_m\ge 1 \end{array}}\frac{1}{p_1!\cdots p_m!} \sum _{\sigma \in S_p} J_N(p_1, \ldots , p_m; k_{\sigma (1)},\ldots , k_{\sigma (p)}), \end{aligned}$$
    (2.6)

    where for positive integers \(p_1, \ldots , p_m\ge 1, \ p_1+\ldots +p_m=p ,\) and integers \(k_1, \ldots , k_p,\) satisfying \(\sum _{i=1}^p k_i=0,\) we define

    $$\begin{aligned}&J_N(p_1, \ldots , p_m; k_1,\ldots , k_p):= \nonumber \\&\min \left( N, \ \max \left( 0, \sum _{i=1}^{p_1} k_i, \sum _{i=1}^{p_1+p_2} k_i, \ldots , \sum _{i=1}^{p_1+\ldots +p_{m-1}} k_i\right) \right. \nonumber \\&\quad \qquad \left. +\max \left( 0, \sum _{i=1}^{p_1} (-k_i), \ldots , \sum _{i=1}^{p_1+\ldots +p_{m-1}} (-k_i)\right) \right) . \end{aligned}$$
    (2.7)
  3. (iii)

    If \(p=1\) then \(\kappa _1^{(N)}(k)=N\) for \(k=0\) and \(\kappa _1^{(N)}(k)=0\) otherwise.
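Formulas (2.6)–(2.7) can be implemented verbatim for small p (a brute-force sketch of our own, with cost growing like \(p!,\) so it is only a check, not a computational tool); this verifies them against (2.11), (2.9), and (2.12):

```python
import math
from itertools import permutations

def J(N, ps, ks):
    """J_N(p_1,...,p_m; k_1,...,k_p) from (2.7): partial sums are taken
    at the break points p_1, p_1+p_2, ..., p_1+...+p_{m-1}."""
    breaks = [sum(ps[:j]) for j in range(1, len(ps))]
    partial = [sum(ks[:b]) for b in breaks]
    return min(N, max([0] + partial) + max([0] + [-s for s in partial]))

def compositions(p, m):
    """All ordered tuples (p_1,...,p_m) of positive integers summing to p."""
    if m == 1:
        yield (p,)
        return
    for first in range(1, p - m + 2):
        for rest in compositions(p - first, m - 1):
            yield (first,) + rest

def kappa(N, ks):
    """kappa_p^{(N)}(k_1,...,k_p) via formula (2.6); requires sum(ks) == 0."""
    p = len(ks)
    total = 0.0
    for m in range(1, p + 1):
        for ps in compositions(p, m):
            w = ((-1) ** m / m) / math.prod(math.factorial(q) for q in ps)
            total += w * sum(J(N, ps, sigma) for sigma in permutations(ks))
    return total

# (2.11): kappa_2(k, -k) = min(N, |k|).
for N in (3, 10):
    for k in (1, 2, 5, 12):
        assert abs(kappa(N, (k, -k)) - min(N, abs(k))) < 1e-9
# (2.9): kappa_3 vanishes once sum |k_i| <= 2N.
assert abs(kappa(10, (1, 1, -2))) < 1e-9
# (2.12), second case: kappa_3 = k_1 + k_2 - N when k_1 + k_2 > N, 0 <= k_i <= N.
assert abs(kappa(3, (2, 2, -4)) - 1) < 1e-9
```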

Remark 2.2

If \(\sum _{i=1}^p k_i=0\) and \(\sum _{i=1}^p |k_i|\le 2 N,\) then

$$\begin{aligned}&J_N(p_1, \ldots , p_m; k_1,\ldots , k_p)= \nonumber \\&\max \left( 0, \sum _{i=1}^{p_1} k_i, \sum _{i=1}^{p_1+p_2} k_i, \ldots , \sum _{i=1}^{p_1+\ldots +p_{m-1}} k_i\right) \quad +\max \left( 0, \sum _{i=1}^{p_1} (-k_i), \ldots , \sum _{i=1}^{p_1+\ldots +p_{m-1}} (-k_i)\right) . \end{aligned}$$
(2.8)

Using a combinatorial identity (Lemma 2 in [22]) one further obtains that

$$\begin{aligned} \kappa _p^{(N)}(k_1, \ldots , k_p)=0 \ \ \text {if} \ \ p>2 \ \ \text {and} \ \ \sum _{i=1}^p |k_i|\le 2 N. \end{aligned}$$
(2.9)

Remark 2.3

It directly follows from (2.7) that

$$\begin{aligned} 0\le J_N(p_1, \ldots , p_m; k_1,\ldots , k_p)\le N. \end{aligned}$$

Therefore (2.5)–(2.6) imply

$$\begin{aligned} |\kappa _p^{(N)}(k_1, \ldots , k_p)|\le Const_p N, \end{aligned}$$
(2.10)

where \(Const_p\) depends only on p.

Remark 2.4

Recall that \(\kappa _p^{(N)}(k_1, \ldots , k_p)\) is a symmetric function of \(k_1, \ldots , k_p\). Direct computations allow one to derive joint cumulants in several important special cases:

$$\begin{aligned}&\kappa _2^{(N)}(k_1, -k_1)=\min (N, |k_1|), \end{aligned}$$
(2.11)
$$\begin{aligned}&\kappa _3^{(N)}(k_1, k_2, -(k_1+k_2))={\left\{ \begin{array}{ll} 0, &{}\text { if } k_1+k_2\le N, k_1, k_2\ge 0,\\ k_1+k_2-N, &{}\text { if } k_1+k_2>N, 0\le k_1, k_2 \le N, \\ k_2, &{}\text { if } k_1+k_2>N, k_1>N, 0\le k_2 \le N, \\ N, &{}\text { if } k_1\ge N, k_2\ge N. \end{array}\right. } \end{aligned}$$
(2.12)
$$\begin{aligned}&\kappa _4^{(N)}(k_1, k_2, -k_1, -k_2)={\left\{ \begin{array}{ll} 0, &{}\text { if } 1\le |k_1|=|k_2|\le N/2,\\ N -2|k_1|, &{}\text { if } N/2< |k_1|=|k_2|\le N,\\ -N , &{}\text {if } |k_1|=|k_2|\ge N,\\ ||k_1|-|k_2||-N, &{}\text { if } 1\le ||k_1|-|k_2||\le N-1,N\le \max (|k_1|,|k_2|),\\ N-|k_1|-|k_2|, &{}\text {if } 1\le |k_1|\ne |k_2|\le N-1, N+1\le |k_1|+|k_2|,\\ 0,&{}\text {else.} \end{array}\right. }. \end{aligned}$$
(2.13)

We note that (2.13) directly follows from Corollary 4.2 in [1]. (2.12) follows from (2.6) and (2.8). Since \(\kappa _p^{(N)}(k_1, \ldots , k_p)\) is a symmetric function that satisfies \(\kappa _p^{(N)}(k_1, \ldots , k_p)=\kappa _p^{(N)}(-k_1, \ldots , -k_p),\) the third cumulant formula (2.12) completely determines \(\kappa _3^{(N)}(k_1, k_2, k_3)\) (recall that \(\kappa _3^{(N)}\) vanishes if \(k_3\ne -k_1-k_2\).) The second cumulant formula (2.11) immediately follows from (2.6) and (2.8).
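The second cumulant formula (2.11) can also be checked by direct simulation, since \(\kappa _2^{(N)}(k,-k)={\mathbb {E}}|Tr(U^k)|^2\) for \(k\ne 0.\) The following Monte Carlo sketch (our own code; `haar_unitary` uses the standard QR construction of Haar-distributed unitaries from a complex Ginibre matrix) is illustrative only:

```python
import numpy as np

rng = np.random.default_rng(0)

def haar_unitary(n):
    """Sample a Haar-distributed (CUE) unitary: QR of a complex Ginibre
    matrix, with columns rescaled by the phases of diag(R)."""
    z = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    d = np.diagonal(r)
    return q * (d / np.abs(d))

N, k, samples = 5, 3, 4000
est = np.mean([abs(np.trace(np.linalg.matrix_power(haar_unitary(N), k))) ** 2
               for _ in range(samples)])

# (2.11) with p = 2: kappa_2(k, -k) = E|Tr U^k|^2 = min(N, k) = 3.
# The Monte Carlo error is of order 0.05, far below the tolerance.
assert abs(est - min(N, k)) < 0.5
```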

The main contribution to (2.1) and (2.3) comes from \(|k|=O(N).\) Define

$$\begin{aligned} t_i=\frac{k_i}{N}, \ \ 1\le i\le p. \end{aligned}$$
(2.14)

Then

$$\begin{aligned} c_p(t_1, \ldots , t_p):= \frac{1}{N} \kappa _p^{(N)}(t_1 N, \ldots , t_p N) \end{aligned}$$
(2.15)

does not depend on N and is a bounded function of \(t_1, \ldots , t_p, \) which is identically zero when \(t_1+\ldots +t_p\ne 0\) for \(p>1\) and is piece-wise linear on \(t_1+\ldots +t_p=0.\) To write down an explicit formula for \(c_p(t_1, \ldots , t_p)\) we define functions \(j(p_1, \ldots , p_m; t_1,\ldots , t_p)\) for positive integers \(p_1, \ldots , p_m,\) satisfying \(\ p_1+\ldots +p_m=p ,\) and real numbers \(t_1, \ldots , t_p,\) as

$$\begin{aligned}&j(p_1, \ldots , p_m; t_1,\ldots , t_p):= \nonumber \\&\min \left( 1, \ \max \left( 0, \sum _{i=1}^{p_1} t_i, \sum _{i=1}^{p_1+p_2} t_i, \ldots , \sum _{i=1}^{p_1+\ldots +p_{m-1}} t_i\right) +\max \left( 0, \sum _{i=1}^{p_1} (-t_i), \ldots ,\sum _{i=1}^{p_1+\ldots +p_{m-1}} (-t_i)\right) \right) . \end{aligned}$$
(2.16)

Lemma 2.1 implies the following result.

Lemma 2.5

Rescaled joint cumulants \(c_p\) defined in (2.15) can be written for \(p>1\) and \(\sum _{i=1}^p t_i=0\) as

$$\begin{aligned} c_p(t_1, \ldots , t_p)=\sum _{m=1}^p \frac{(-1)^{m}}{m} \sum _{\begin{array}{c} (p_1, \ldots , p_m): \\ p_1+\ldots +p_m=p, \ p_1, \ldots p_m\ge 1 \end{array}} \frac{1}{p_1!\cdots p_m!} \sum _{\sigma \in S_p} j(p_1, \ldots , p_m; t_{\sigma (1)},\ldots , t_{\sigma (p)}), \end{aligned}$$
(2.17)

where the functions \(j(p_1, \ldots , p_m; t_1,\ldots , t_p)\) are defined in (2.16). Moreover, the following holds:

$$\begin{aligned}&(i) \ \ c_p(t_1, \ldots , t_p), \ p>1, \ \text { is a bounded symmetric piece-wise linear function on} \\&\quad \sum _{i=1}^p t_i=0. \\&(ii) \ \ c_p(t_1, \ldots , t_p)=0 \ \text {if} \ \ p>1 \ \ \text {and} \ \ \sum _{i=1}^p t_i\ne 0.\\&(iii) \ \ c_1(0)=1 \ \ \text {and} \ \ c_1(t)=0 \ \ \text {for} \ \ t\ne 0. \end{aligned}$$

Remark 2.6

We refer the reader to [9] and [10] for some important earlier results on the moments of \(|T_{N,k}|^2\) for \(k=O(N).\)

3 Expectation and Variance

We start with a computation of \({\mathbb {E}}S_N(f).\) It follows from (2.3), (2.4), (2.15), and (2.12) that

$$\begin{aligned} {\mathbb {E}}S_N(f)&= \frac{1}{(2\pi )^{n/2} N^{n}} \sum _{k_1, \ldots , k_n\in {\mathbb {Z}}} {\hat{f}}(k_1/N, \ldots , k_n/N) \sum _{\pi } N^{|\pi |}\prod _{B\in \pi } c_{|B|}(k_i/N: i \in B) \nonumber \\&= \frac{1}{(2\pi )^{n/2}} \sum _{\pi } N^{|\pi |-n} \sum _{k_1, \ldots , k_n\in {\mathbb {Z}}} {\hat{f}}(k_1/N, \ldots , k_n/N) \prod _{B\in \pi } c_{|B|}(k_i/N: i \in B), \end{aligned}$$
(3.1)

where \(k_{n+1}=-\sum _{i=1}^n k_i, \ \) the sum is over all partitions \(\pi \) of \(\{1, \ldots , n+1\},\) B runs through the list of all blocks of the partition \(\pi ,\ |\pi | \ \) is the number of blocks in a partition \(\pi ,\) and |B| is the cardinality of a block B.

Denote a linear subspace of \({\mathbb {R}}^{n+1}\) as

$$\begin{aligned} L_{\pi }:=\{t=(t_1, \ldots , t_{n+1})\in {\mathbb {R}}^{n+1} : \sum _{i=1}^{n+1} t_i=0; \ \ \sum _{i\in B} t_i=0 \ \forall B\in \pi \}. \end{aligned}$$
(3.2)

It follows from Lemma 2.5, parts (ii)–(iii), that for any fixed partition \(\pi \) the summation in (3.1) is over \(k=(k_1, \ldots , k_{n+1}) \in L_{\pi } \cap \frac{1}{N} {\mathbb {Z}}^{n+1}.\) Indeed, k satisfies the following system of linear equations of rank \(|\pi |:\)

$$\begin{aligned} {\left\{ \begin{array}{ll} &{} \sum _{i=1}^{n+1} k_i=0, \\ &{} \sum _{i\in B} k_i=0, \ \ \forall B\in \pi , \end{array}\right. } \end{aligned}$$
(3.3)

Observe that the first linear equation in (3.3) follows from the remaining \(|\pi |\) independent linear equations in (3.3). Therefore

$$\begin{aligned} \dim L_{\pi }=n+1-|\pi |. \end{aligned}$$
(3.4)
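The rank computation can be illustrated on a concrete toy example (our own choice): take \(n=3\) and the partition \(\pi =\{\{1,2\},\{3,4\}\}\) of \(\{1,2,3,4\},\) so \(|\pi |=2\) and (3.4) predicts \(\dim L_{\pi }=n+1-|\pi |=2:\)

```python
import numpy as np

n = 3                      # so k = (k_1, ..., k_4), with k_4 = -(k_1+k_2+k_3)
pi = [{1, 2}, {3, 4}]      # a partition of {1, 2, 3, 4}; |pi| = 2

# Constraint matrix of the system (3.3): one row for sum_i k_i = 0,
# one row per block B with sum_{i in B} k_i = 0.
rows = [[1, 1, 1, 1]]
for B in pi:
    rows.append([1 if i + 1 in B else 0 for i in range(n + 1)])
A = np.array(rows, dtype=float)

rank = np.linalg.matrix_rank(A)
assert rank == len(pi)          # the first equation is redundant
assert (n + 1) - rank == 2      # dim L_pi = n + 1 - |pi|, matching (3.4)
```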

We note that the sums in (3.1) are Riemann sums (up to a multiplicative factor \(N/(2\pi )^{n/2}\)) corresponding to smooth, rapidly decaying functions \({\hat{f}}(t_1, \ldots , t_n) \prod _{B\in \pi } c_{|B|}(t_i: i \in B)\) on \(L_{\pi }\). Denote by m the number of blocks in \(\pi \) and by \(n_1, \ldots , n_m\) the cardinalities of the blocks. Using Lemma 2.5 we are ready to obtain the asymptotics of the mean of \(S_N(f).\)

Lemma 3.1

$$\begin{aligned} {\mathbb {E}}S_N(f)={\mathcal {M}}(f) N +O(1), \end{aligned}$$
(3.5)

where

$$\begin{aligned}&{\mathcal {M}}(f)= \frac{1}{(2\pi )^{n/2}} \sum _{m=1}^n \frac{1}{m!} \sum _{\begin{array}{c} (n_1, \ldots , n_m):\\ n_1+\ldots +n_m=n+1, \ n_1, \ldots , n_m\ge 1 \end{array}} \frac{(n+1)!}{n_1!\cdots n_m!}\times \nonumber \\&\int _{L_{n_1, \ldots , n_m}} {\hat{f}}(t_1, \ldots , t_n) \prod _{j=1}^m c_{n_j}(t_{M_{j-1}+1}, \ldots , t_{M_j-1}, -t_{M_{j-1}+1}-\ldots - t_{M_j-1}) d\lambda , \end{aligned}$$
(3.6)

\( t_{n+1}=-\sum _{i=1}^n t_i, \ \ M_j=n_1+\ldots +n_j, \ 1\le j\le m,\ M_0=0, \ \) \(L_{n_1, \ldots , n_m}\subset {\mathbb {R}}^{n+1}\) is defined as

$$\begin{aligned} L_{n_1, \ldots , n_m}:=\{t=(t_1, \ldots , t_{n+1})\in {\mathbb {R}}^{n+1} : \ \sum _{i=1}^{n+1} t_i=0; \ \ \sum _{M_{j-1}<i\le M_j} t_i=0, \ \ 1\le j\le m\}, \end{aligned}$$
(3.7)

\(\ \lambda \ \) is the Lebesgue measure on \(L_{n_1, \ldots , n_m},\) i.e.

$$\begin{aligned} \lambda =\prod _{j: n_j>1} dt_{M_{j-1}+1}\cdots dt_{M_j-1}, \end{aligned}$$
(3.8)

and the rescaled joint cumulant functions \(c_{n_j}\)’s are defined in (2.15) and (2.17).

Remark 3.2

For singleton blocks \(B: |B|=n_j=1, \ \) we write \(c_{n_j}(t_{M_{j-1}+1})\) in (3.6) and use \(c_{n_j}(t_{M_{j-1}+1})=c_{n_j}(0)=1\) on \(L_{n_1, \ldots , n_m}.\)

Next, we study \(\text {Var}S_N(f).\) It follows from (2.1) that

$$\begin{aligned}&\text {Var}(S_N(f))= \frac{1}{(2\pi )^{n} N^{2n}} \sum _{k^{(1)}\in {\mathbb {Z}}^n} \sum _{k^{(2)}\in {\mathbb {Z}}^n} {\hat{f}}(k^{(1)}/N) {\hat{f}}(k^{(2)}/N) \times \nonumber \\&\left( {\mathbb {E}}\left[ \prod _{j=1}^{n+1} T_{N, k_j} \prod _{j=1}^{n+1} T_{N, k_{n+1+j}}\right] - {\mathbb {E}}\left[ \prod _{j=1}^{n+1} T_{N, k_j}\right] {\mathbb {E}}\left[ \prod _{j=1}^{n+1} T_{N, k_{n+1+j}}\right] \right) , \end{aligned}$$
(3.9)

where \(k^{(1)}=(k_1, \ldots , k_{n}), \ \ k^{(2)}=(k_{n+2}, \ldots , k_{2n+1})\in {\mathbb {Z}}^{n}, \ \ k_{n+1}:=-\sum _{i=1}^n k_i,\) \(k_{2n+2}:=-\sum _{i=1}^n k_{n+1+i},\) and \(T_{N,s}\) is defined in (2.2).

To study the covariance of the products of traces in (3.9), we rewrite it using (2.4) as

$$\begin{aligned} {\mathbb {E}}\left[ \prod _{j=1}^{n+1} T_{N, k_j} \prod _{j=1}^{n+1} T_{N, k_{n+1+j}}\right] - {\mathbb {E}}\left[ \prod _{j=1}^{n+1} T_{N, k_j}\right] {\mathbb {E}}\left[ \prod _{j=1}^{n+1} T_{N, k_{n+1+j}}\right] = \sum _{\pi }^* \prod _{B\in \pi } \kappa _{|B|}^{(N)}(k_i: i \in B), \end{aligned}$$
(3.10)

where the sum is over all partitions \(\pi \) of \(\{1,2,\ldots , 2n+2\},\) satisfying the condition (3.11) below, B runs through the list of all blocks of the partition \(\pi , \ \) and |B| is the cardinality of a block \(B.\ \) The condition on \(\pi \) is that it is not a union of partitions of \(\{1, \dots , n+1 \}\) and \(\{n+2, \ldots , 2n+2\},\) in other words:

$$\begin{aligned}&\text {The set } \{1,2,\ldots , n+1\} \text { is not a union of some blocks of } \pi . \end{aligned}$$
(3.11)

Similarly to the \({\mathbb {E}}S_N(f)\) computations, define

$$\begin{aligned} L_{\pi }:=\{t=(t_1, \ldots , t_{2n+2})\in {\mathbb {R}}^{2n+2} : \sum _{i=1}^{n+1} t_i=0, \ \sum _{i=1}^{n+1} t_{n+1+i}=0; \ \ \sum _{i\in B} t_i=0 \ \forall B\in \pi \}. \end{aligned}$$
(3.12)

Using (3.10) we obtain

$$\begin{aligned} \text {Var}(S_N(f))&= \frac{1}{(2\pi )^{n} N^{2n}} \sum _{k^{(1)}\in {\mathbb {Z}}^n} \sum _{k^{(2)}\in {\mathbb {Z}}^n} {\hat{f}}(k^{(1)}/N) {\hat{f}}(k^{(2)}/N) \sum _{\pi }^* \prod _{B\in \pi } \kappa _{|B|}^{(N)}(k_i: i \in B)\nonumber \\&= \frac{1}{(2\pi )^{n}} \sum _{\pi }^* N^{|\pi |-2 n} \sum _{k^{(1)}\in {\mathbb {Z}}^n} \sum _{k^{(2)}\in {\mathbb {Z}}^n} {\hat{f}}(k^{(1)}/N) {\hat{f}}(k^{(2)}/N) \prod _{B\in \pi } c_{|B|}(k_i/N: i \in B). \end{aligned}$$
(3.13)

Remark 3.3

As before, the sum \(\sum _{\pi }^*\) in (3.13) is over all partitions \(\pi \) of \(\{1,2,\ldots , 2n+2\}\) satisfying the condition (3.11). The shorthand notation \(c_{|B|}(t_i: i\in B)\) means that the arguments of \(c_{|B|}\) are the \(t_i\)’s corresponding to \(i\in B.\) Since the joint cumulant functions are symmetric the order of the variables is not important.

It follows from Lemma 2.5, (ii) and (iii), that for any fixed \(\pi \) the summation in (3.13) is over \(k=(k_1, \ldots , k_{2n+2}) \in L_{\pi } \cap \frac{1}{N} {\mathbb {Z}}^{2n+2},\) since k satisfies the following system of linear equations of rank \(|\pi |+1:\)

$$\begin{aligned} {\left\{ \begin{array}{ll} &{} \sum _{i=1}^{n+1} k_i=0, \\ &{} \sum _{i=1}^{n+1} k_{n+1+i}=0, \\ &{} \sum _{i\in B} k_i=0, \ \ \forall B\in \pi . \end{array}\right. } \end{aligned}$$
(3.14)

The first linear equation \(\sum _{i=1}^{n+1} k_i=0\) follows from the remaining \(|\pi |+1\) independent linear equations in (3.14). Therefore

$$\begin{aligned} \dim L_{\pi }=2n+2-(|\pi |+1)=2n+1-|\pi |. \end{aligned}$$
(3.15)

Proceeding as in the case of the mathematical expectation above we arrive at

Lemma 3.4

$$\begin{aligned} \text {Var}(S_N(f))=\sigma ^2(f) N +O(1), \end{aligned}$$
(3.16)

where

$$\begin{aligned} \sigma ^2(f)= \frac{1}{(2\pi )^{n}} \sum _{\pi }^* \int _{L_{\pi }} {\hat{f}}(t^{(1)}) {\hat{f}}(t^{(2)}) \prod _{B\in \pi } c_{|B|}(t_i: i\in B) \ d\lambda , \end{aligned}$$
(3.17)

where the sum in (3.17) is over all partitions \(\pi \) of \(\{1,2,\ldots , 2n+2\},\) satisfying the condition (3.11), B runs through the list of all blocks of the partition \(\pi , \ \ |B|\) is the cardinality of a block \(B,\ \ t^{(1)}=(t_1, \ldots , t_n), \ t^{(2)}=(t_{n+2}, \ldots , t_{2n+1}),\) and \(\lambda \) is the Lebesgue measure on \(L_{\pi },\) i.e. it is the product of \(dt_i\)’s taken over all independent variables \(t_i\) from the system of linear equations

$$\begin{aligned} {\left\{ \begin{array}{ll} &{} \sum _{i=1}^{n+1} t_i=0, \\ &{} \sum _{i=1}^{n+1} t_{n+1+i}=0, \\ &{} \sum _{i\in B} t_i=0, \ \ \forall B\in \pi . \end{array}\right. } \end{aligned}$$
(3.18)

4 Proof of the Main Result

The section is devoted to the proof of Theorem 1.3. The proof uses the method of moments and is combinatorial in nature.

Proof

Fix a positive integer \(m>2.\) We have

$$\begin{aligned}&{\mathbb {E}}(S_N(f)-{\mathbb {E}}S_N(f))^m= \frac{1}{(2\pi )^{\frac{n m}{2}} N^{m n}} \sum _{k^{(1)}\in {\mathbb {Z}}^n} \ldots \sum _{k^{(m)}\in {\mathbb {Z}}^n} {\hat{f}}(k^{(1)}/N)\cdots {\hat{f}}(k^{(m)}/N) \times \nonumber \\&{\mathbb {E}}\left[ \prod _{i=0}^{m-1} \left( \prod _{j=1}^{n+1} T_{N, k_{(n+1) i+j}}- {\mathbb {E}}\prod _{j=1}^{n+1} T_{N, k_{(n+1)i+j}}\right) \right] , \end{aligned}$$
(4.1)

\(k^{(1)}=(k_1, \ldots , k_{n}), \ldots , k^{(m)}=(k_{(m-1) (n+1)+1}, \ldots , k_{m (n+1)-1})\in {\mathbb {Z}}^{n}, \) \(k_{n+1}:=-\sum _{j=1}^n k_j, \ \ k_{2(n+1)}=-\sum _{j=1}^{n} k_{n+1+j},\ldots , k_{m(n+1)}:=- \sum _{j=1}^n k_{(m-1) (n+1)+j}.\)

The mathematical expectation on the r.h.s. of (4.1) can be written in terms of joint cumulants using the following lemma.

Lemma 4.1

For centered random variables \( X_1, \ldots , X_{m l}\) with finite moments,

$$\begin{aligned} {\mathbb {E}}\left[ \prod _{i=0}^{m-1} \left( \prod _{j=1}^{l} X_{i l+j}- {\mathbb {E}}\prod _{j=1}^{l} X_{i l+j}\right) \right] = \sum ^*_{\pi } \prod _{B\in \pi } \kappa (X_i: i \in B), \end{aligned}$$
(4.2)

where the sum on the r.h.s. of (4.2) is over all partitions \(\pi \) of \(\{1, \ldots , m l\}\) that do not contain a partition of \(\{ i l +1, \ldots , (i+1) l\}\) for any \(0\le i\le m-1,\) i.e. none of the sets \(\{ i l +1, \ldots , (i+1) l\}\) can be represented as a union of some blocks of \(\pi .\)

Proof

It follows from the formula expressing moments in terms of cumulants (see e.g. (2.4)) that the r.h.s. of (4.2) is equal to a linear combination of \(\prod _{B\in \pi } \kappa (X_i: i \in B),\) where \(\pi \) runs over the list of partitions of \(\{1,2,\ldots , m l\}.\) Thus our goal is to show that the coefficient in the linear combination is either 1 or 0 depending on whether \(\pi \) satisfies the condition in Lemma 4.1 or not.

If no sub-collection of blocks of \(\pi \) forms a partition of \( \{ i l +1, \ldots , (i+1) l\}\) for any \(0\le i\le m-1,\) then the coefficient in front of the product \( \prod _{B\in \pi } \kappa (X_i: i \in B)\) in the linear combination is 1 since the only contribution comes from \({\mathbb {E}}\left[ \prod _{i=0}^{m-1} \prod _{j=1}^{l} X_{i l+j}\right] .\)

Finally, suppose \(s, \ 1\le s\le m,\) of the sets \( \{ i l +1, \ldots , (i+1) l\}, \ 1\le i \le m,\) can be represented as unions of some blocks of \(\pi .\) Then the coefficient in front of \(\prod _{B\in \pi } \kappa (X_i: i \in B)\) is equal to

$$\begin{aligned} \sum _{k=0}^s (-1)^k \frac{s!}{k! (s-k)!}=0. \end{aligned}$$
(4.3)

\(\square \)
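The vanishing in (4.3) is the standard alternating-sum identity \(\sum _{k=0}^s (-1)^k \binom{s}{k}=(1-1)^s=0\) for \(s\ge 1;\) a one-line check:

```python
import math

# (1 - 1)^s = sum_k (-1)^k C(s, k) = 0 for every s >= 1, cf. (4.3).
for s in range(1, 10):
    assert sum((-1) ** k * math.comb(s, k) for k in range(s + 1)) == 0
```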

Following Lemma 4.1 we can rewrite the m-th centered moment as

$$\begin{aligned}&{\mathbb {E}}(S_N(f)-{\mathbb {E}}S_N(f))^m= \nonumber \\&\frac{1}{(2\pi )^{\frac{n m}{2}} N^{m n}} \sum _{k^{(1)}\in {\mathbb {Z}}^n} \ldots \sum _{k^{(m)}\in {\mathbb {Z}}^n} {\hat{f}}(k^{(1)}/N)\cdots {\hat{f}}(k^{(m)}/N) \sum _{\pi }^* \prod _{B\in \pi } \kappa _{|B|}^{(N)}(k_i: i \in B), \end{aligned}$$
(4.4)

where, as before, \(k^{(1)}=(k_1, \ldots , k_{n}), \ldots , k^{(m)}=(k_{(m-1) (n+1)+1}, \ldots , k_{m (n+1)-1})\in {\mathbb {Z}}^{n},\) \( k_{n+1}=-\sum _{j=1}^n k_j, \ldots , k_{m(n+1)}=- \sum _{j=1}^n k_{(m-1) (n+1)+j}.\)

As in Sect. 3, we consider a linear subspace

$$\begin{aligned} L_{\pi }:=\{t\in {\mathbb {R}}^{m (n+1)} : \sum _{j=1}^{n+1} t_{i(n+1)+j}=0, \ \forall \ 0\le i\le m-1; \ \ \sum _{j\in B} t_j=0 \ \forall B\in \pi \}. \end{aligned}$$
(4.5)

It follows from Lemma 2.5, (ii) and (iii), that for any fixed \(\pi \) the summation in (4.4) is over \(k=(k_1, \ldots , k_{m (n+1)}) \in L_{\pi } \cap \frac{1}{N} {\mathbb {Z}}^{m (n+1)},\) since k satisfies the following system of linear equations:

$$\begin{aligned} {\left\{ \begin{array}{ll} &{} \sum _{j=1}^{n+1} k_{i (n+1)+j}=0, \ \forall \ 0\le i\le m-1, \\ &{} \sum _{i\in B} k_i=0, \ \ \forall B\in \pi . \end{array}\right. } \end{aligned}$$
(4.6)

Using (2.15), we rewrite (4.4) as

$$\begin{aligned}&{\mathbb {E}}(S_N(f)-{\mathbb {E}}S_N(f))^m= \nonumber \\&\frac{1}{(2\pi )^{\frac{n m}{2}} N^{m n}} \sum _{k\in L_{\pi }\cap \frac{1}{N} {\mathbb {Z}}^{m (n+1)}} {\hat{f}}(k^{(1)}/N)\cdots {\hat{f}}(k^{(m)}/N) \sum _{\pi }^* \ N^{|\pi |} \prod _{B\in \pi } c_{|B|}(k_i/N: i \in B)=\nonumber \\&\frac{1}{(2\pi )^{\frac{n m}{2}}}\sum _{\pi }^* N^{|\pi |-m n} \sum _{k\in L_{\pi }\cap \frac{1}{N} {\mathbb {Z}}^{m (n+1)}} {\hat{f}}(k^{(1)}/N)\cdots {\hat{f}}(k^{(m)}/N) \prod _{B\in \pi } c_{|B|}(k_i/N: i \in B). \end{aligned}$$
(4.7)

The crucial question in the power counting analysis of the r.h.s. of (4.7) is the dimension of the vector subspace \(L_{\pi }\subset {\mathbb {R}}^{m (n+1)},\) or, equivalently, the rank of the system of linear equations (4.6).

Denote

$$\begin{aligned}{}[1]:=\{1,\ldots , n+1\}, \ [2]:=\{n+2, \ldots , 2n+2\}, \ldots , [m]:=\{(m-1) (n+1)+1, \ldots , m (n+1)\}. \end{aligned}$$

Definition 4.2

For a given partition \(\pi \) of \(\{1, \ldots , m (n+1)\}\) an equivalence relation \(\sim _{\pi }\) is defined on the set \(\{[1], [2], \ldots , [m]\}\) in the following way:

$$\begin{aligned}{}[i]\sim _{\pi } [j] \end{aligned}$$
(4.8)

if and only if there is a block B in the partition \(\pi \) such that \(B\cap [i]\ne \emptyset \) and \(B\cap [j]\ne \emptyset .\)

Remark 4.3

It follows from Lemma 4.1 that the cardinality of each equivalence class of the equivalence relation \(\sim _{\pi }\) is at least 2.

Definition 4.4

We call a partition \(\pi \) optimal if the cardinality of every equivalence class of the equivalence relation \(\sim _{\pi }\) is 2. If \(\pi \) is not optimal, it is called sub-optimal.

Clearly, optimal partitions exist if and only if m is even. The next lemma is the main ingredient of the proof of Theorem 1.3.

Lemma 4.5

  1. (i)

Let m be an even positive integer and \(\pi \) be an optimal partition of \(\{1, \ldots , m (n+1)\}\). Then the linear subspace \(L_{\pi } \subset {\mathbb {R}}^{m (n+1)}\) defined in (4.5) satisfies

    $$\begin{aligned} \dim L_{\pi }= m n +m/2 -|\pi |. \end{aligned}$$
    (4.9)
  2. (ii)

Let \(m>1\) be a positive integer and \(\pi \) be a sub-optimal partition of \(\{1, \ldots , m (n+1)\}\). Then

    $$\begin{aligned} \dim L_{\pi }<m n +m/2 -|\pi |. \end{aligned}$$
    (4.10)

Proof

  1. (i)

    If \(\pi \) is optimal it can be viewed as a union of \(\frac{m}{2}\) partitions \(\pi _{i,j}\) of \([i]\cup [j], \ [i]\sim _{\pi }[j].\) Each partition \(\pi _{i,j}\) corresponds to a vector subspace \(L_{\pi _{i,j}}\subset {\mathbb {R}}^{2n+2}\) of dimension

    $$\begin{aligned} \dim L_{\pi _{i,j}}=2 n+1-|\pi _{i,j}| \end{aligned}$$

    (see (3.12) and (3.15).) Since \(L_{\pi }\) is the Cartesian product of \(L_{\pi _{i,j}}\)’s we have

    $$\begin{aligned} \dim L_{\pi } =\sum \dim L_{\pi _{i,j}}, \end{aligned}$$
    (4.11)

    and (4.9) immediately follows.

  2. (ii)

    Now let us assume that \(\pi \) is a sub-optimal partition. Recall that the subspace \(L_{\pi }\) is determined by the following system of linear equations

    $$\begin{aligned} {\left\{ \begin{array}{ll} &{} \sum _{j=1}^{n+1} t_{i (n+1)+j}=0, \ \ \forall \ 0\le i\le m-1,\\ &{} \sum _{j\in B} t_j=0, \ \ \forall B\in \pi . \end{array}\right. } \end{aligned}$$
    (4.12)

To prove (4.10) we have to show that the rank of this system is bigger than \(|\pi |+\frac{m}{2}.\ \) As before, \(L_{\pi }\) can be viewed as the Cartesian product of the subspaces corresponding to the equivalence classes. Using the additivity of dimension, we can assume without loss of generality that \(\pi \) in (4.12) has only one equivalence class. We claim that in this case the rank of (4.12) is \(|\pi |+m-1>|\pi |+\frac{m}{2},\) since \(m>2.\) To show this, consider \(|\pi |+m-1\) vectors in \({\mathbb {R}}^{m (n+1)},\ \) namely \(\{ \chi _B, \ B\in \pi \} \cup \{ \chi _{[i]}, \ 0\le i<m-1\}:\)

$$\begin{aligned}&\chi _B(j)={\left\{ \begin{array}{ll} &{} 1 \ \ j\in B, \\ &{} 0 \ \ j \notin B, \end{array}\right. } \end{aligned}$$
(4.13)
$$\begin{aligned}&\chi _{[i]}(j)={\left\{ \begin{array}{ll} &{} 1 \ \ j\in [i]=\{i (n+1) +1, \ldots , (i+1) (n+1)\}, \\ &{} 0 \ \ j \notin [i]. \end{array}\right. } \end{aligned}$$
(4.14)

Part (ii) of Lemma 4.5 follows from the next lemma.

Lemma 4.6

Let \(\pi \) be a sub-optimal partition of \(\{1, \ldots , m (n+1)\}, \ \ m\ge 1, \ \) such that the equivalence relation \(\sim _{\pi }\) on \(\{[1], \ldots , [m]\}\) has only one equivalence class. Then the vectors

$$\begin{aligned} \{ \chi _B, \ B\in \pi \} \cup \{ \chi _{[i]}, \ 0\le i<m-1\} \end{aligned}$$
(4.15)

are linearly independent.

Proof

The vectors \(\{ \chi _B, \ B\in \pi \}\) are linearly independent since their supports are disjoint. We show by induction that adding the vectors \(\chi _{[i]}\) one by one preserves linear independence. Suppose that \(\{ \chi _B, \ B\in \pi \} \cup \{ \chi _{[i]}, \ 0\le i<k-1\}, \ k<m, \ \) are linearly independent and \(\chi _{[k-1]}\) can be written as a linear combination of \(\{ \chi _B, \ B\in \pi \} \cup \{ \chi _{[i]}, \ 0\le i<k-1\}.\) Then a non-trivial linear combination of \(\{ \chi _{[i]}, \ 0\le i<k\} \ \) can be written as a linear combination of some of the vectors \(\chi _B.\) This implies that \([1]\cup \ldots \cup [k]\) contains an equivalence class of the equivalence relation \(\sim _{\pi }, \ \) which is a contradiction. Lemma 4.6 is proven. \(\square \)
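A small concrete instance (our own example): take \(m=3, \ n=1,\) so \([1]=\{1,2\}, \ [2]=\{3,4\}, \ [3]=\{5,6\},\) and the sub-optimal partition \(\pi =\{\{1,3\},\{2,5\},\{4,6\}\}\) with a single equivalence class; Lemma 4.6 predicts that the \(|\pi |+m-1=5\) vectors are linearly independent:

```python
import numpy as np

m, n = 3, 1
dim = m * (n + 1)                      # vectors live in R^6
pi = [{1, 3}, {2, 5}, {4, 6}]          # one equivalence class: [1]~[2]~[3]

def indicator(S, dim):
    return np.array([1.0 if j + 1 in S else 0.0 for j in range(dim)])

vecs = [indicator(B, dim) for B in pi]
# chi_{[i]} for the first m - 1 of the sets [1], [2], [3]
vecs += [indicator({1, 2}, dim), indicator({3, 4}, dim)]

M = np.vstack(vecs)
assert np.linalg.matrix_rank(M) == len(pi) + m - 1   # = 5: independent
```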

Since \(L_{\pi }\) is the orthogonal complement of the linear span of the vectors (4.15), this implies part (ii) of Lemma 4.5.

Now we are ready to finish the proof of Theorem 1.3. Let \(m=2 k\) be even. Denote a subsum in (4.7) corresponding to a partition \(\pi \) by \(\Sigma _{\pi }.\) It follows from Lemma 4.5 that only the optimal partitions \(\pi \) give the leading contribution of order \(N^{m/2}\) to the r.h.s. of (4.7). For each sub-optimal \(\pi \) we have \( \ \Sigma _{\pi }= O(N^{\frac{m-1}{2}}). \ \)

There are exactly \((2 k-1)!!\) ways to split the set \(\{[1], \ldots , [2 k] \}\) into pairs. By repeating the variance computations, we note that the sum of \(\Sigma _{\pi }\)’s over all \(\pi \) corresponding to any particular splitting of \(\{[1], \ldots , [2 k] \}\) into pairs gives \( \ \sigma ^{m}(f) N^{\frac{m}{2}} (1+o(1)). \ \)

We conclude that the 2k-th moment of \(\frac{S_N(f)-{\mathbb {E}}S_N(f)}{\sqrt{\text {Var}S_N(f)}}\) converges to \((2k-1)!!\) in the limit \(N\rightarrow \infty .\) The case of odd m is treated similarly. Specifically, one obtains that the odd moments of \(\frac{S_N(f)-{\mathbb {E}}S_N(f)}{\sqrt{\text {Var}S_N(f)}}\) converge to 0 in the limit \(N\rightarrow \infty .\)
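Recall the standard fact behind the method of moments: the 2k-th moment of a standard Gaussian equals \((2k-1)!!=(2k)!/(2^k k!),\) which is exactly the number of pairings of 2k objects counted above. A quick check of the identity:

```python
import math

def double_factorial_odd(k):
    # (2k - 1)!! = product of the odd numbers up to 2k - 1
    out = 1
    for j in range(1, 2 * k, 2):
        out *= j
    return out

for k in range(1, 6):
    # number of pairings of 2k objects = (2k)! / (2^k * k!) = (2k - 1)!!
    assert math.factorial(2 * k) // (2 ** k * math.factorial(k)) == double_factorial_odd(k)
```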

This finishes the proof of Theorem 1.3. \(\square \)