The winding of stationary Gaussian processes
Abstract
This paper studies the winding of a continuously differentiable Gaussian stationary process \(f:\mathbb R\rightarrow \mathbb {C}\) in the interval [0, T]. We give formulae for the mean and the variance of this random variable. The variance is shown to always grow at least linearly with T, and conditions for it to be asymptotically linear or quadratic are given. Moreover, we show that if the covariance function together with its second derivative are in \(L^2(\mathbb R)\), then the winding obeys a central limit theorem. These results correspond to similar results for zeroes of real-valued stationary Gaussian functions by Malevich, Cuzick, Slud and others.
Keywords
Gaussian process · Stationary process · Fluctuations of zeroes · Winding number

Mathematics Subject Classification
60G15 · 60G10 · 30E99

1 Introduction
Gaussian functions on various spaces, and in particular stationary functions (i.e., those functions whose distribution is invariant under shifts), have long been an object of extensive study. Real Gaussian stationary functions \(f:\mathbb R\rightarrow \mathbb R\) are a classical model of random signals, and in particular much effort was devoted to the study of their zeroes [1, 4]. More recently, zeroes of complex Gaussian functions \( f:\mathbb {C}\rightarrow \mathbb {C}\) attracted attention, as they are interesting point processes with intrinsic repulsion [17].
In this paper we study the winding, or the increment of the argument, of planar Gaussian stationary processes \(f:\mathbb R\rightarrow \mathbb {C}\). In light of the argument principle, one might expect winding to be the appropriate analogue in this setting of zeroes in the aforementioned examples; indeed our results and methods are closely related to the corresponding ones for random zeroes, both in the real [30] and complex [11] settings. In this sense, this work is part of an effort to simplify, unify and generalize the tools which are used for analysing random zeroes.
In addition, this work is also motivated by a long history of works concerning the winding of various planar processes. Winding is used to model the entanglement of polymers [14], and the movement of a particle under a random magnetic field [8]. Limit laws and asymptotic behavior of the winding were studied for Brownian motion [27, 32] and certain fractal curves [35], among others. However, perhaps surprisingly, the winding of Gaussian stationary processes appears to be a topic that has been largely ignored. Prior to this work, we know only of a paper by Le Doussal et al. [24], which provides predictions and intriguing examples regarding the nature of the fluctuations of the winding. Their interest was inspired by their research on the winding of particles in random environments [10]. The present paper establishes and extends their predictions. In a somewhat different setting, the winding of the special “Gaussian kernel process” on various curves in the complex plane was studied in [3]. More about background and motivation, including previous related work, may be found in Sect. 2.
We develop an asymptotic formula for the variance \(V(T)=\mathrm {var}\,[\Delta (T)]\) of the winding of f in “time” [0, T] (Theorem 1). By analysing this formula, we show that V(T) is always at least linear in T (Theorem 2). Then we prove that if the covariance function and its second derivative are in \(L^2\), then V(T) is asymptotically linear in T and a central limit theorem holds (Theorem 3). Finally we show that if the spectral measure of the Gaussian process f does not contain any atoms, then V(T) is subquadratic (Theorem 4).
1.1 Definitions
A standard complex Gaussian, denoted \(\mathcal {N}_\mathbb {C}(0,1)\), is a \(\mathbb {C}\)-valued random variable whose distribution has density \( \frac{1}{\pi }e^{-|z|^2}\) against Lebesgue measure on the plane. A complex Gaussian vector is a random vector in \(\mathbb {C}^n\) that is equal in distribution to \(A\mathbf {v}\), where \(\mathbf {v}\) is a random vector in \(\mathbb {C}^m\) whose components are i.i.d. \(\mathcal {N}_\mathbb {C}(0,1)\)-distributed, and A is an \(n \times m\) complex matrix (we always consider centred random variables and processes, i.e., having mean 0).
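To make the definition concrete, here is a minimal numerical sketch (our own illustration, not part of the paper; all names are ours). It samples vectors \(A\mathbf v\) and checks that the empirical covariance matrix \(\mathbb E[XX^*]\) approaches \(AA^*\):

```python
import numpy as np

def sample_complex_gaussian_vectors(A, n_samples, rng):
    """Draw samples of A @ v, where v has i.i.d. N_C(0,1) entries.

    A standard complex Gaussian N_C(0,1) has independent real and
    imaginary parts, each real N(0, 1/2), so that E[|z|^2] = 1.
    """
    m = A.shape[1]
    v = (rng.standard_normal((m, n_samples))
         + 1j * rng.standard_normal((m, n_samples))) / np.sqrt(2)
    return A @ v

rng = np.random.default_rng(0)
A = np.array([[1.0, 0.5], [0.0, 2.0]], dtype=complex)
X = sample_complex_gaussian_vectors(A, 200_000, rng)

emp_cov = X @ X.conj().T / X.shape[1]       # empirical E[X X^*]
err = np.max(np.abs(emp_cov - A @ A.conj().T))
print(err)                                  # small, shrinking like 1/sqrt(n_samples)
```

The centring convention of the text is respected: the sampled vectors have mean 0, and their distribution is determined by \(AA^*\) alone.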
A complex Gaussian process \(f:\mathbb R\rightarrow \mathbb {C}\) is a random process whose finite marginals are Gaussian vectors; that is, for any \(n\in \mathbb N\) and any \(t_1,\ldots , t_n\in \mathbb R\) the vector \((f(t_1), \ldots , f(t_n))\) is a complex Gaussian vector. Such a process is stationary if its distribution is invariant under all real shifts, that is, for any \(n\in \mathbb N\), \(t_1,\ldots , t_n\in \mathbb R\) and \(s\in \mathbb R\) the vectors \((f(t_1),\ldots , f(t_n))\) and \((f(t_1+s),\ldots , f(t_n+s) )\) have the same distribution. We will write GSP to denote a Gaussian stationary process throughout this article.
1.2 Results
In all of our results we assume that \(f:\mathbb R\rightarrow \mathbb {C}\) is a nondegenerate GSP whose spectral measure obeys condition (3). The first result gives explicit formulae for the mean and variance of \(\Delta (T)\).
Theorem 1
 1.
\(\mathbb E[\Delta (T)] = T \ \mathrm {Im}\,r'(0)\).
 2. Denoting \(R(x)=\frac{r'}{r}(x)\) for x such that \(r(x)\ne 0\), we define \(K:\mathbb R\rightarrow \mathbb R\) by
$$\begin{aligned} K(x)={\left\{ \begin{array}{ll} \frac{1}{2} |r'(x)|^2, &{} \text {if } |r(x)|=0\text { or }1,\\ \frac{|r(x)|^2}{1-|r(x)|^2}\ \mathrm {Im}^2 \left\{ R(x)-R(0) \right\} - \frac{1}{2} \log \left( \frac{1}{1-|r(x)|^2}\right) \mathrm {Re}\,\{R'(x)\}, &{} \text {if } 0< |r(x)| < 1. \end{array}\right. } \end{aligned}$$
(5)
Then K is integrable on any compact subset of \(\mathbb R\), and
$$\begin{aligned} \mathrm {var}\,[\Delta (T) ] = T \int _{-T}^T \left( 1-\frac{|x|}{T} \right) K(x)\, dx. \end{aligned}$$
(6)
Remark 1.1
It is not hard to see that K(x) is continuous at the points where \(r(x)=0\) (which may be a large set). On the other hand, there is no natural definition of K(x) at the points where \(|r(x)|=1\), and we have assigned the value \(\frac{1}{2} |r'(x)|^2\) purely for convenience. In the course of the proof we will show that these points are isolated and that K has a logarithmic, integrable singularity at each of them.
Remark 1.2
One may check that the kernel K is always nonnegative, but we will not reproduce the calculations here since they will not be important for our purposes. An alternative form for the variance, which may be more convenient for applications, will be given in the course of the paper (see Proposition 4.1 below)—the kernel \(\widetilde{K}\) given there is trivially nonnegative.
Remark 1.3
Remark 1.4
Although the main focus of this paper is the “large-time” asymptotic behaviour, Le Doussal et al. [24] also mention the short-time asymptotics of \(\mathrm {var}\,[\Delta (T)]\). Our result implies that \(\mathrm {var}\,[\Delta (T)]\sim (r'(0)^2-r''(0))\,T^2\log \tfrac{1}{T}\) as \(T\rightarrow 0\), and further terms in the asymptotic expansion may be obtained if one assumes some extra regularity, namely the existence of higher-order derivatives of r. In Lemma 3.3 we show that \(r'(0)^2-r''(0)>0\).
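To indicate where this asymptotic comes from (a heuristic sketch based on the variance formula (6), assuming r is twice differentiable): near \(x=0\) the logarithmic term of the kernel K dominates, since \(1-|r(x)|^2\sim Cx^2\), while the first term stays bounded. Writing \(R=r'/r\) as above,

```latex
\begin{aligned}
K(x) &\sim -\mathrm{Re}\{R'(0)\}\,\log\tfrac{1}{|x|}
      = \bigl(r'(0)^2 - r''(0)\bigr)\log\tfrac{1}{|x|},
      \qquad x\to 0,\\
\mathrm{var}\,[\Delta(T)]
     &\approx \bigl(r'(0)^2 - r''(0)\bigr)\, T \int_{-T}^{T}
       \Bigl(1-\tfrac{|x|}{T}\Bigr)\log\tfrac{1}{|x|}\,dx
      \sim \bigl(r'(0)^2 - r''(0)\bigr)\, T^2 \log\tfrac{1}{T}.
\end{aligned}
```

Here \(r'(0)^2-r''(0)=-R'(0)\) is real, since \(r'(0)\) is purely imaginary and \(r''(0)\) is real; its positivity is the statement of Lemma 3.3 referred to above.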
Our next theorem states that the variance always grows at least linearly.
Theorem 2
The case of asymptotically linear variance is of particular interest. Below we give a simple condition that is sufficient for this to hold, and prove a central limit theorem (CLT) under this hypothesis.
Theorem 3
Remark 1.5
If \(r,r''\in L^2(\mathbb R)\) then also \(r'\in L^2(\mathbb R)\) (see Observation 5.2). Therefore, the condition \(r, r''\in L^2(\mathbb R)\) is enough to ensure both linear variance and a CLT.
On the other hand, the variance is trivially at most quadratic in T. The following theorem gives a mild mixing condition for the variance to be subquadratic.
Theorem 4
This was already proved in [11], but we repeat the proof at the end of this paper for completeness. We note that, under the assumption that f a.s. has an analytic extension to a strip in the complex plane, the converse to Theorem 4 holds (see [11, Remark 1.5]).
The rest of the paper is organized as follows. Sect. 2 is devoted to a discussion of motivation, related previous work and interesting examples. In Sect. 3 we prove Theorem 1 about the mean and variance. In Sect. 4 we prove Theorem 2 (concerning a lower bound for the variance), after developing an alternative form for the variance (Proposition 4.1). In Sect. 5 we prove Theorem 3 concerning linear variance and a CLT. Finally, Sect. 6 contains the proof of Theorem 4 about subquadratic variance.
Finally, a word about notation. By \(g\lesssim h\) we mean that \(g\le C\cdot h\), where \(C>0\) is a constant (which may vary from line to line, and may depend on fixed parameters). We write \(g=O(h)\) if \(g\lesssim h\). Similarly, \(g\simeq h\) means that \(g\lesssim h\) and \(h\lesssim g\). We use the notation \(g(T) \asymp h(T)\) to denote that \(\lim _{T\rightarrow \infty } \frac{g}{h}(T)\) exists and is some finite positive constant, while we write \(g(T) \sim h(T)\) to denote the more precise \(\lim _{T\rightarrow \infty } \frac{g}{h}(T)=1\).
2 Discussion
2.1 Background and motivation
There are three major motivations for this work. The first comes from theoretical physics, where the winding of planar random processes is used in models of polymers, flux lines in superconductors and the quantum Hall effect (see [8, 14, 16] and the references therein). For this reason, and out of pure mathematical interest, the winding has been studied for certain processes. For planar Brownian motion \(\mathbf{B}\), Spitzer [32] proved, denoting the winding of \(\mathbf{B}\) up to time T by \(\Delta _{\mathbf{B}}(T)\), that \(\Delta _{\mathbf{B}}(T) / \log T\) converges in distribution to a Cauchy random variable. This inspired a long sequence of works (most notably, Pitman–Yor [27, 28]). There was also much interest in windings of various fractal random curves (e.g. SARW [29], SLE and related processes [35]). Very recently, winding of Ornstein–Uhlenbeck processes [33] and of stable processes [7] were studied, including analysis of large scale asymptotics and limit laws. Some other relatively recent studies of winding with physical applications include [9, 14, 15, 23].
Le Doussal et al. [24] have studied the winding of planar Gaussian processes. The authors provide a formula for the variance of the winding of a Gaussian process, not necessarily stationary, with reflectional symmetry. Theorem 1 of this paper is a rigorous derivation of the same formula for stationary processes, without assuming reflectional symmetry. We comment that it is possible to apply our methods to nonstationary processes as well, but we did not pursue this route. Le Doussal et al. also noticed “diffusive behavior” (i.e., that the variance grows at least linearly) in all examples of interest, which led them to predict that “for most stationary processes the winding angle exhibits diffusion”. Theorem 2 establishes this fact for all sufficiently smooth processes.
The second motivation for this work is the extensive study of the zeroes of real stationary Gaussian processes \(f:\mathbb R\rightarrow \mathbb R\). Morally, in many scenarios zeroes are analogous to winding (related, for instance, by the argument principle). The survey [22] gives a good account of the research on zeroes of real GSPs, and we rely on it for details and references in what follows. The mean number of zeroes was computed by Kac [19], while asymptotics of the variance were studied by Cramér–Leadbetter [4], Piterbarg [26] and many others; however, no accessible formula for the variance was given. For this reason, the first CLTs contained conditions about the variance which were hard to check. One such example is the work of Cuzick [5], who proved a CLT whose main condition is linear growth of the variance. Our proof of the CLT in Theorem 3 is inspired by his, where, using our formula from Theorem 1, we can give an explicit condition for this linear growth. It is interesting to note that, after many years, Slud [30] gave a condition for linear growth of the variance of the number of zeroes, which is similar to the one we recovered for the winding in Theorem 3; i.e., that the covariance function and its second derivative are in \(L^2(\mathbb R)\) (see Remark 1.5). However, while in this article we analyse a concrete formula, Slud’s work relies on more sophisticated methods including Wiener–Itô expansions. Moreover, he does not establish “diffusive behavior” (cf. Theorem 2) for zeroes. We do not know of a way to unify these results.
We note that Cuzick’s method was used by Granville–Wigman [13] to study the variance and CLT for the zeroes of random trigonometric polynomials. Their work was reproved and generalised to crossings of any level by Azaïs and León [2] using Wiener–Itô expansions.
The third motivation comes from the study of complex zeroes of random Gaussian analytic functions. These have drawn increasing attention in recent years, as they provide rich and accessible point processes in the plane (see the recent book [17]). One of us [11] proved results very similar to ours about fluctuations of complex zeroes of stationary Gaussian analytic functions (without a CLT). While, once again, the methods are different and a priori neither result implies the other, the variance is shown to always be at least linear (as in Theorem 2), and the condition given for asymptotic linearity is very similar to ours (as in Theorem 3). The proof of subquadratic variance here (Theorem 4) is identical to that of [11].
In recent work [3] the authors also study the increment of the argument of a certain Gaussian process, where previously the focus of study had been the zero set. At a superficial level our work is quite similar to theirs, though the results are in fact quite different in spirit. In [3] the authors study one particular process—the “Gaussian kernel process” given by (10) which extends to an entire function^{1}. This process is very regular since its covariance decays so rapidly. The authors’ main focus is on the covariance between the increment of the argument along two planar curves, and how this covariance depends on the geometry of the intersection of these curves. In contrast, in this paper we consider the simplest possible curve—a long section of the real line—and consider a very wide class of Gaussian processes; indeed these processes may not even extend in a sensible manner to a small neighbourhood of the real line. Further, in this paper we have tried to make minimal assumptions on the decay of the covariance kernel. The only intersection between the main theorems of the two papers is the conclusion that for the winding of the Gaussian kernel process the variance is asymptotically linear and a CLT holds.
We end by posing three natural open problems. The first is to determine the asymptotic behavior of the winding in the case of nonlinear variance (in particular, when the conditions of Theorem 3 do not hold). In similar cases for random real zeroes, Slud has shown that there are regimes of CLT and regimes of non-CLT behavior [31, Thm 3.2].
The second is to derive a quantitative CLT for the winding, that is, to estimate the rate of convergence to the normal law. For these two questions, it may well be the case that the more sophisticated methods of WienerItō expansions could be useful. The third is to prove a converse to Theorem 4 with no further assumptions (that is, that if the spectral measure contains an atom, then the variance is quadratic).
2.2 Examples
In this section we discuss some interesting GSPs. The last two examples were pointed out by Le Doussal et al. [24]. We stress that, while here we present only orders of magnitude for \(\mathrm {var}\,[\Delta (T)]\) in the various examples, often one may apply our results to retrieve exact constants.
Sinc kernel Taking \(\rho = \tfrac{1}{2\pi }\,\mathbb {1}_{[-\pi ,\pi ]}\) one obtains \(r(t)=\mathrm {sinc}(t)=\frac{\sin (\pi t)}{\pi t}\). This process has the representation \(f(t) = \sum _{n\in \mathbb Z} \zeta _n \,\mathrm {sinc}(t-n)\), where \(\{\zeta _n\}_{n\in \mathbb Z}\) are i.i.d. \(\mathcal {N}_\mathbb {C}(0,1)\). Notice that \(f(n)=\zeta _n\) for \(n\in \mathbb Z\), so this process may be regarded as a smooth (in fact, analytic) interpolation of the i.i.d. sequence. For this example, Theorem 3 yields that \(\mathrm {var}\,[\Delta (T)]\asymp T\), and a CLT holds.
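The winding of a sampled path can be computed numerically as a sum of small argument increments. The sketch below (our own illustration; the truncation range, grid and seed are arbitrary choices) first validates the routine on the deterministic curve \(e^{i\pi t}\), then applies it to a finite truncation of the series above:

```python
import numpy as np

def winding(values):
    """Increment of the argument along a finely sampled curve.

    Sums the principal-value angles of consecutive ratios; this is
    valid when each step rotates by less than pi.
    """
    return float(np.sum(np.angle(values[1:] / values[:-1])))

# Sanity check: t -> exp(i*pi*t) on [0, 2] winds by exactly 2*pi.
t = np.linspace(0.0, 2.0, 2001)
assert abs(winding(np.exp(1j * np.pi * t)) - 2 * np.pi) < 1e-9

# Truncated sinc process: f(t) ~ sum_{|n| <= N} zeta_n sinc(t - n).
rng = np.random.default_rng(1)
N, T = 100, 50.0
n = np.arange(-N, N + 1)
zeta = (rng.standard_normal(n.size) + 1j * rng.standard_normal(n.size)) / np.sqrt(2)
t = np.linspace(0.0, T, 5001)
f = np.sinc(t[:, None] - n[None, :]) @ zeta   # np.sinc(x) = sin(pi x)/(pi x)
print("winding over [0, T]:", winding(f))
```

Note that numpy's `np.sinc` uses the same normalised convention \(\sin(\pi x)/(\pi x)\) as the paper.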
Exponential kernel and approximations Consider \(r_{\mathbf{OU}}(t)=e^{-|t|}\). This process is a time-space change of Brownian motion, called the Ornstein–Uhlenbeck (OU) process. Inspired by Spitzer’s limit law for \(\Delta _{\mathbf{B}}\), Vakeroudis [33, Theorem 3.3] has recently shown that \(\frac{\Delta _{\mathbf{OU}}(T)}{T}\) converges in distribution to the Cauchy law; in particular the variance of the winding in each finite interval is infinite. As the OU process is not differentiable, none of our results may be directly applied. However, one may approximate the OU process by differentiable processes. One way to do so is by taking \(r_a(t) = e^{a-\sqrt{a^2+t^2}}\) with \(a\downarrow 0\). For a fixed \(a>0\), since \(r_a\) is infinitely differentiable, we may apply Theorem 1 to see that the variance of the winding of the corresponding process in [0, T] is of order \(\ln (\frac{1}{a})\cdot T\) for \(T\ge a^{1-\varepsilon }\). As \(a\rightarrow 0\) we see that the variance is unbounded, and this holds even on certain short intervals that are not “too short”.
Another approximation may be derived using the spectral measure. The OU process has spectral density \(\frac{1}{\pi (1+\lambda ^2)}\), thus one may consider the spectral density \(\frac{M}{\pi (M-1)}\left( \frac{1}{\lambda ^2+1}-\frac{1}{\lambda ^2+M^2} \right) \) which approximates that of the OU process as \(M\rightarrow \infty \), and satisfies (3) for each fixed M. The corresponding covariance kernel is \(r_M(t)=\frac{ M e^{-|t|} - e^{-M|t|}}{M-1}\), which is twice differentiable. Applying Theorem 1 one gets a variance of size \(\ln M \cdot T\) for \(T\ge M^{-1+\varepsilon }\), and again we see that as \(M\rightarrow \infty \), the variance is unbounded, even on certain short intervals.
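As a quick numerical sanity check (our own, not the paper's): \(r_M(0)=1\), \(r_M'(0)=0\), and a short computation with the one-sided expansions gives \(r_M''(0)=-M\), so the second spectral moment grows like M, consistent with the unbounded variance as \(M\rightarrow\infty\):

```python
import numpy as np

def r_M(t, M):
    """Covariance kernel r_M(t) = (M e^{-|t|} - e^{-M|t|}) / (M - 1)."""
    a = np.abs(t)
    return (M * np.exp(-a) - np.exp(-M * a)) / (M - 1)

M, h = 10.0, 1e-4
d1 = (r_M(h, M) - r_M(-h, M)) / (2 * h)                 # ~ r_M'(0) = 0
d2 = (r_M(h, M) - 2 * r_M(0.0, M) + r_M(-h, M)) / h**2  # ~ r_M''(0) = -M
print(d1, d2)
```

The symmetric differences confirm that the kernel, unlike \(e^{-|t|}\) itself, is twice differentiable at the origin.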
– For \(r(t)=J_0(t)\), one has \(\mathrm {var}\,[\Delta (T)]\asymp T\ln T\). Here \(J_0\) stands for the Bessel function of the first kind of order 0.
– Let \(0<b<\frac{1}{2}\). For \(r(t)=\frac{\cos t}{(1+|t|)^b}\), one has \(\mathrm {var}\,[\Delta (T)]\asymp T^{2-2b}\).
3 Formulae for the mean and variance: Theorem 1
3.1 Preliminaries
In the course of the proof of Theorem 1 we shall make use of the following lemmata. The first is an extension of an exercise in Kahane’s celebrated book [20, Ch. XXII, Ex. 3].
Lemma 3.1
 (a)
\( \mathbb E\left[ \frac{F'_1}{F_1} \right] = \frac{s_{11}}{r_{11}}.\)
 (b) If \(r_{12}\ne 0\), then
$$\begin{aligned} \mathrm {cov}\,\left( \frac{F'_1}{F_1}, \frac{F'_2}{F_2} \right) = \frac{|r_{12}|^2}{r_{11} r_{22}-|r_{12}|^2} \left( \frac{s_{12}}{ r_{12}} - \frac{s_{11}}{r_{11}}\right) \left( \frac{s_{21}}{r_{21}}-\frac{s_{22}}{r_{22}}\right) , \end{aligned}$$
while if \(r_{12}=0\), then \(\mathrm {cov}\,\left( \frac{F'_1}{F_1}, \frac{F'_2}{F_2} \right) =\frac{s_{12} s_{21}}{r_{11} r_{22}}\).
 (c) If \(r_{12}\ne 0\), then
$$\begin{aligned} \mathrm {cov}\,\left( \frac{F'_1}{F_1}, \overline{\left( \frac{F'_2}{F_2}\right) } \right)&= \frac{|r_{12}|^2}{r_{11} r_{22}-|r_{12}|^2} \left( \frac{s_{12}}{ r_{12}} - \frac{s_{11}}{r_{11}}\right) \overline{\left( \frac{s_{21}}{r_{21}}-\frac{s_{22}}{r_{22}}\right) }\\&\quad + \log \left( \frac{r_{11} r_{22}}{ r_{11}r_{22}-|r_{12}|^2 } \right) \cdot \left( \frac{t_{12}}{r_{12}} - \frac{s_{12}\overline{s_{21}} }{(r_{12})^2}\right) , \end{aligned}$$
while if \(r_{12}=0\) then \(\mathrm {cov}\,\left( \frac{F'_1}{F_1}, \overline{\left( \frac{F'_2}{F_2}\right) } \right) =0\).
Remark 3.1
If we fix all of the parameters except for \(r_{12}\) (and \(r_{21}=\overline{r_{12}}\)), then the covariances computed in (b) and (c) are continuous functions of \(r_{12}\) (i.e., at \(r_{12}=0\)).
If we drop any of the assumptions \(r_{11}\ne 0\), \(r_{22}\ne 0\) or \(r_{11}r_{22}\ne |r_{12}|^2\) then the quantities computed in (b) and (c) diverge. We only require \(r_{11}\ne 0\) for (a) to be finite.
All three parts of Lemma 3.1 are proved in a similar way, which we outline below.
Sketch of the proof of Lemma 3.1
As the proofs of the remaining cases are long but contain no new ideas, we omit them from this paper. \(\square \)
Next we note some basic properties of the covariance function.
Observation 3.2
Proof
Recalling that r is the Fourier transform of a probability measure (as in (2)), we get that \(r(-x)=\overline{r(x)}\). All other relations follow easily from this. \(\square \)
The next lemma will allow us to analyse the behavior of r near its extremal points.
Lemma 3.3
– For all \(t\in \mathbb R\), \(|r(t)|\le 1=r(0)\).
– If there exists \(t\ne 0\) such that \(|r(t)|=1\), then there exist \(\lambda _0, \lambda _1\in \mathbb R\) such that \(\mathrm {supp}(\rho )\subseteq \lambda _0 + \lambda _1 \mathbb Z\).
– The set \(D=\{t: \; |r(t)| = 1\}\) is discrete.
– If r is twice differentiable, then there exists \(C>0\) such that for any \(t_m\in D\)
$$\begin{aligned} 1-|r(t)|^2 = C(t-t_m)^2 + o\left( (t-t_m)^2\right) , \text { as } t\rightarrow t_m. \end{aligned}$$
Proof of Lemma 3.3
We shall also use the following integrability lemma.
Lemma 3.4
 (I)
\(\displaystyle \int _0^T \mathbb E\left[ \Big | \frac{f'(t)}{f(t)}\Big |\right] \,dt < \infty .\)
 (II)
\(\displaystyle \int _0^T \int _0^T \mathbb E\left[ \Big | \frac{f'(t) \, f'(s)}{f(t) \, f(s)}\Big |\right] \, dt\ ds < \infty .\)
This lemma first appeared in [11, Lemma 3.4], and though it is stated there for functions that are a.s. analytic, it applies in our setting with no changes to the proof.
Our last lemma is an elementary but useful change of variables.
Lemma 3.5
Proof
3.2 The mean
3.3 The variance
4 An alternative form for the variance and a linear lower bound: Theorem 2
The main goal of this section is to prove Theorem 2 concerning a linear lower bound on the variance. However, most of the section will be devoted to proving the following reformulation of the second part of Theorem 1, from which Theorem 2 will follow rather easily.
Proposition 4.1
A few remarks are in order before we proceed with the proofs.
Remark 4.1
Notice that all terms in this expression are nonnegative. It is interesting to note that \(\widetilde{K}\) can be defined if r is only once differentiable, which suggests that (21) may continue to hold in this case. (The random variable \(\Delta (T)\) can be defined if f is simply continuous.)
Remark 4.2
4.1 Proof of Proposition 4.1
4.2 Proof of Theorem 2
5 Linear variance and CLT: Theorem 3
In this section we prove Theorem 3. We begin with some observations regarding our premises.
Observation 5.1
\(r\in L^2(\mathbb R)\) if and only if the spectral measure \(\rho \) has density \(p(\lambda )\ge 0 \) (w.r.t. the Lebesgue measure) such that \(p\in L^2(\mathbb R)\). Similarly, \(r^{(k)}\in L^2(\mathbb R)\) if and only if \(\rho \) has density \(p(\lambda )\ge 0\) such that \(\lambda ^k p(\lambda )\in L^2(\mathbb R)\).
This observation follows from basic properties of the Fourier transform.
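Concretely (a standard computation; the constant depends on the chosen normalisation of the Fourier transform): differentiating \(r(t)=\int_{\mathbb R} e^{i\lambda t}p(\lambda)\,d\lambda\) under the integral sign and applying Plancherel's theorem gives

```latex
r^{(k)}(t) = \int_{\mathbb R} (i\lambda)^k e^{i\lambda t}\, p(\lambda)\,d\lambda ,
\qquad
\|r^{(k)}\|_{L^2(\mathbb R)}^2 = 2\pi\, \bigl\|\lambda^k p(\lambda)\bigr\|_{L^2(\mathbb R)}^2 ,
```

so each side is finite exactly when the other is.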
Observation 5.2
If \(r, r'' \in L^2(\mathbb R)\), then \(r'\in L^2(\mathbb R)\).
Proof
5.1 Linear variance
In this subsection we show the first part of Theorem 3, that is, that if \(r, r'\in L^2(\mathbb R)\) then the variance of \(\Delta (T)\) is asymptotically linear (in the sense of (7)).
5.2 CLT
 1.
Construct an \(M\)-dependent stationary Gaussian process \(f_M:\mathbb R\rightarrow \mathbb {C}\), which approximates the original process f (in a way to be clarified). For this we employ an approximation strategy of Cuzick [5], although the idea goes back to Malevich [25].
 2.
Show that the increment of the argument of \(f_M\), denoted \(\Delta _M(T)\), obeys a CLT as \(T\rightarrow \infty \) for each fixed M.
 3.
Show that \((\Delta _M(T)-\mathbb E\Delta _M(T))/\sqrt{\mathrm {var}\,(\Delta _M(T))}\) approaches \((\Delta (T)-\mathbb E\Delta (T))/\sqrt{\mathrm {var}\,(\Delta (T))}\) as \(M\rightarrow \infty \) in \(L^2(\mathbb P)\), uniformly in T.
Lemma 5.3
 For each fixed M,$$\begin{aligned} X_M(T) \overset{d}{\longrightarrow } \mathcal {N}_\mathbb R(0,1), \text { as } T\rightarrow \infty . \end{aligned}$$
 We have
$$\begin{aligned} \lim _{M\rightarrow \infty }\mathbb E\left[ (X(T)-X_M(T))^2 \right] =0, \end{aligned}$$
uniformly in T.
5.2.1 Constructing an approximating process
5.2.2 Properties
In this subsection we clarify in what sense \(f_M\) approximates f. More importantly we prove the following key result, which concerns the convergence of the covariance kernels \(r_{M,M}\) and \(r_{M,0}\), and will be essential in proving the CLT in Theorem 3.
Proposition 5.4
We recall the definition of \(M\)-dependence.
Definition
(\(M\)-dependence) Let \(T\subseteq \mathbb R\), and \(M\ge 0\). A stochastic process \((X(t))_{t\in T}\) is \(M\)-dependent if for any \(s_1,s_2\in T\) such that \(s_2-s_1>M\), the sigma-algebras generated by \((X(t))_{t\le s_1}\) and \((X(t))_{t\ge s_2}\) are independent.
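For example (a generic discrete-time illustration of the definition, not the paper's construction of \(f_M\)): a moving average of i.i.d. variables over a window of length \(M+1\) is \(M\)-dependent, since \(X_n\) and \(X_{n+k}\) share no input variables once \(k>M\). Its autocovariance therefore vanishes beyond lag M:

```python
import numpy as np

rng = np.random.default_rng(2)
M = 3
a = np.array([1.0, 0.5, -0.25, 0.125])   # arbitrary window of length M + 1

zeta = rng.standard_normal(200_000)      # i.i.d. inputs
X = np.convolve(zeta, a, mode='valid')   # X_n: moving average of M+1 consecutive zeta's

def emp_autocov(X, k):
    """Empirical autocovariance at lag k (the mean of X is 0)."""
    return float(np.mean(X[: len(X) - k] * X[k:]))

print([round(emp_autocov(X, k), 3) for k in range(M + 3)])
# lags 0..M are nonzero in general; lags k > M are ~0 (independent blocks)
```

For Gaussian processes, as here, vanishing covariance beyond lag M is equivalent to \(M\)-dependence; the compactly supported covariance of \(f_M\) plays exactly this role in the continuous setting.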
Proposition 5.5
The process \(f_M\) is almost surely continuously differentiable, and \(4\pi M\)-dependent.
Further, \(f_M\) approximates f in the following sense, which we immediately deduce from the previous two propositions.
Corollary 5.6
We will now give a series of lemmata and observations which will lead to the proof of the previous two propositions.
Lemma 5.7
 1.
\(\widehat{P}_M(t)\) is twice continuously differentiable on \(\mathbb R\).
 2.
\(0\le \widehat{P}_M(t)\le 1\) for all \(t\in \mathbb R\).
 3.
\(\widehat{P}_M(t)=0\) for \(|t|>4\pi M\).
 4.
For any \(0<\varepsilon <1\) we have \(\widehat{P}_M(t) = 1-\frac{K_2}{M^2} t^2 + O\left( \frac{|t|^{2+\varepsilon }}{M^{2+\varepsilon }}\right) \), as \(t\rightarrow 0\), where^{4} \( K_2 = \frac{1}{2K_1} \int _\mathbb R\lambda ^2 \mathrm {sinc}^4(\lambda ) \ d\lambda \) and the implicit constant depends only on \(\varepsilon \).
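The constant can be checked numerically (our own verification, using \(K_1=\tfrac23\) from footnote 3 and the convention \(\mathrm{sinc}(\lambda)=\frac{\sin(\pi\lambda)}{\pi\lambda}\), which is also numpy's):

```python
import numpy as np

# Footnote 4 claims K_2 = 3 / (8 pi^2). With K_1 = 2/3,
# K_2 = (1 / (2 K_1)) * integral of lambda^2 sinc^4(lambda) d lambda.
K1 = 2.0 / 3.0
step = 1e-3
lam = np.arange(-2000.0, 2000.0, step)
integral = np.sum(lam ** 2 * np.sinc(lam) ** 4) * step   # Riemann sum
K2 = integral / (2 * K1)
print(K2, 3 / (8 * np.pi ** 2))   # both ~ 0.038
```

The integrand decays like \(\lambda^{-2}\), so the truncation to \(|\lambda|\le 2000\) changes the value only in the sixth decimal place.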
Proof
Lemma 5.8
 1.
\(r_{M,M}\) is a twice differentiable function on \(\mathbb R\), supported on \([-4\pi M, 4\pi M]\).
 2.
\(r_{M,M}(0)=r(0)=1\) and \(r'_{M,M}(0)=r'(0)\).
 3.
\(|r_{M,M}(t)|\le |r(t)|\) for all t.
Recalling that \(r_{M,M}(t)=r(t)\widehat{P}_M(t)\) (see (30)), Lemma 5.8 follows immediately from Lemma 5.7 and our assumptions about r.
The previous lemma immediately implies that \(f_M\) is a \(4\pi M\)-dependent process. The next lemma will complete the proof of Proposition 5.5.
Lemma 5.9
Proof
For Proposition 5.4 we shall need two further lemmas about the kernel \(P_M\).
Lemma 5.10
For any \(1\le p<\infty \) and \(h\in L^p(\mathbb R)\), we have \(P_M * h\overset{L^p}{\longrightarrow } h\) as \(M\rightarrow \infty \).
Proof
Observe that \((P_M)_{M>0}\) is a summability kernel; that is, \(P_M(\cdot )\ge 0\), \(\int _\mathbb RP_M = 1\), and for every fixed \(\varepsilon >0\) the convergence \(\lim _{M\rightarrow \infty }\int _{|x|>\varepsilon }P_M =0\) holds. A standard property of summability kernels (see [21, Ch. VI]) establishes our lemma. \(\square \)
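To illustrate the mechanism with a concrete family (the Gaussian kernels below are our stand-in example, not the paper's \(P_M\); any summability kernel behaves the same way):

```python
import numpy as np

def l1_error(M, step=1e-3, half_width=5.0):
    """|| P_M * h - h ||_{L^1} for h = indicator of [-1, 1], where
    P_M(x) = (M / sqrt(2 pi)) exp(-(M x)^2 / 2) is a Gaussian
    summability kernel concentrating at 0 as M grows."""
    x = np.arange(-half_width, half_width, step)
    h = ((x >= -1.0) & (x <= 1.0)).astype(float)
    P = M / np.sqrt(2 * np.pi) * np.exp(-0.5 * (M * x) ** 2)
    conv = np.convolve(h, P, mode='same') * step   # discretised P_M * h
    return float(np.sum(np.abs(conv - h)) * step)

errs = [l1_error(M) for M in (2, 8, 32)]
print(errs)   # decreasing towards 0, roughly like 1/M
```

Even though h is discontinuous, the \(L^1\) error vanishes as the kernel concentrates, which is exactly the property invoked in the proof.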
Lemma 5.11
\(\widehat{P}_M^{\,'},\widehat{P}_M^{\,''}\rightarrow 0\) in \(L^2\) and \(L^{\infty }\) as \(M\rightarrow \infty \).
Proof
We will also need two simple observations.
Observation 5.12
– If \(h_n\overset{L^1}{\rightarrow } h\), then \(\widehat{h}_n \overset{L^\infty }{\rightarrow } \widehat{h}\).
– If \(h_n \overset{L^2}{\rightarrow } h\), then \(\widehat{h}_n\overset{L^2}{\rightarrow } \widehat{h}\).
Observation 5.13
If \(h,h_n\ge 0\) and \(h_n^2\overset{L^1}{\rightarrow } h^2\), then \(h_n\overset{L^2}{\rightarrow } h\).
Proof
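(A one-line sketch reconstructing the omitted argument.) Since \(h_n,h\ge 0\), pointwise \(|h_n-h|\le h_n+h\), and therefore

```latex
|h_n - h|^2 \;\le\; |h_n - h|\,(h_n + h) \;=\; \bigl|h_n^2 - h^2\bigr| ;
```

integrating, \(\|h_n-h\|_{L^2}^2 \le \|h_n^2-h^2\|_{L^1} \rightarrow 0\). \(\square \)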
Proof of Proposition 5.4
5.2.3 CLT for the approximating process
In this subsection we prove that \(\Delta _M(T)\) satisfies a CLT as \(T\rightarrow \infty \).
Proposition 5.14
Our main tool is the following theorem of Diananda [6, Theorem 4], which guarantees a CLT for sums of \(M\)-dependent sequences.
Theorem 5
Applying it, and accounting for differences between discrete and continuous time, we now prove our proposition.
Proof of Proposition 5.14
5.2.4 Quantifying the approximation
In this section we show that, when appropriately normalized, \({\Delta }_M(T)\) approaches \(\Delta (T)\) in \(L^2(\mathbb P)\) as \(M\rightarrow \infty \), uniformly in T. This is stated precisely in the following proposition. For brevity, we write \(\overline{\Delta }(T)= \Delta (T)-\mathbb E\Delta (T)\) and \(\overline{\Delta }_M(T)=\Delta _M(T)-\mathbb E\Delta _M(T)\).
Proposition 5.15
In fact, given our previous variance computations, it is enough to prove the following.
Proposition 5.16
Proof of Proposition 5.16
5.3 Conclusion: Proof of the CLT in Theorem 3
At last, we conclude the proof of the central limit theorem appearing in (8). We apply Lemma 5.3 with \(X(T) = \Delta (T)\) and \(X_M(T)=\Delta _M(T)\). The first condition (a CLT for \(\Delta _M\)) is guaranteed by Proposition 5.14. The second condition (a uniform \(L^2\) approximation) is guaranteed by Proposition 5.15. Thus Lemma 5.3 implies that \(\Delta (T)\) satisfies a CLT in the sense of (8), and we are done.
6 Subquadratic variance: Theorem 4
Lastly, we include the proof of Theorem 4.
Proof of Theorem 4
Footnotes
 1.
Strictly speaking, there is a difference in normalisation: in order to get an entire function one does not normalise the variance to be 1 at every point.
 2.
It might be the case that “\(N=\infty \)”, i.e., that we have a countable number of points in [0, T) where r vanishes. We leave it to the reader to check that this does not affect the proof.
 3.
We remark that one may compute \(K_1 = \tfrac{2}{3}\), though this value will be unimportant for our purposes.
 4.
Again, it is possible to compute \(K_2=\tfrac{3}{8\pi ^2}\).
Acknowledgements
We are grateful to Mikhail Sodin for suggesting the project and for useful discussions. We thank Igor Wigman for many insightful comments, and Baruch Horovitz for a detailed conversation about the motivations coming from physics.
References
 1.Adler, R.J., Taylor, J.E.: Random Fields and Geometry. Springer Monographs in Mathematics. Springer, Berlin (2007)Google Scholar
 2.Azaïs, JeanMarc, León, José R.: CLT for crossings of random trigonometric polynomials. Electron. J. Probab. 18(68), 17 (2013). https://doi.org/10.1214/EJP.v182403 MathSciNetMATHGoogle Scholar
 3.Buckley, J., Sodin, M.: Fluctuations of the increment of the argument for the Gaussian entire function, J. Stat. Phys. (To appear) available onlineGoogle Scholar
 4.Cramér, H., Leadbetter, M.R.: Stationary and related stochastic processes: sample function properties and their applications, Dover publications (2004) (first publised in 1967 by Wiley series)Google Scholar
 5.Cuzick, J.: A central limit theorem for the number of zeros of a stationary Gaussian process. Ann. Probab. 4, 547–556 (1976)MathSciNetCrossRefMATHGoogle Scholar
 6.Diananda, P.H.: The central limit theorem for \(m\)dependent variables. Proc. Camb. Philos. Soc. 51, 92–95 (1955)MathSciNetCrossRefMATHGoogle Scholar
 7.Doney, R., Vakeroudis, S.: Windings of planar Stable Processes, Sminaire de Probabilits XLV, Lecture Notes in Mathematics, vol. 2078, pp. 277–300. Springer (2013)Google Scholar
 8.Drossel, B., Kardar, M.: Winding angle distributions for random walks and flux lines. Phys. Rev. E. 53, 5861 (1996)CrossRefGoogle Scholar
 9.Duplantier, B., Blinder, I.A.: Harmonic measure and winding of random conformal paths: a Coulomb gas perspective. Nuclear Phys. B. 802, 494–513 (2008)MathSciNetCrossRefMATHGoogle Scholar
 10.Etzioni, Y., Horovitz, B., Le Doussal, P.: Rings and Coulomb boxes in dissipative environments. Phys. Rev. B 86, 235406 (2012)CrossRefGoogle Scholar
 11. Feldheim, N.: Variance of the number of zeroes of shift-invariant Gaussian analytic functions (2015). arXiv:1309.2111
 12. Feldheim, N.: Zeroes of Gaussian analytic functions with translation-invariant distribution. Isr. J. Math. 195, 317–345 (2013)
 13. Granville, A., Wigman, I.: The distribution of the zeros of random trigonometric polynomials. Am. J. Math. 133(2), 295–357 (2011)
 14. Grosberg, A., Frisch, H.: Winding angle distribution for planar random walk, polymer ring entangled with an obstacle, and all that: Spitzer–Edwards–Prager–Frisch model revisited. J. Phys. A Math. Gen. 37(8), 3071 (2004)
 15. Hagendorf, C., Le Doussal, P.: SLE on doubly-connected domains and the winding of loop-erased random walks. J. Stat. Phys. 133, 231–254 (2008)
 16. Holcman, D., Yor, M., Vakeroudis, S.: The mean first rotation time of a planar polymer. J. Stat. Phys. 143(6), 1074–1095 (2011)
 17. Hough, J.B., Krishnapur, M., Peres, Y., Virág, B.: Zeros of Gaussian Analytic Functions and Determinantal Point Processes. University Lecture Series, vol. 51. American Mathematical Society (2009)
 18. Jessen, B., Tornehave, H.: Mean motions and almost periodic functions. Acta Math. 77, 137–279 (1945)
 19. Kac, M.: On the average number of real roots of a random algebraic equation. Bull. Am. Math. Soc. 18, 29–35 (1943)
 20. Kahane, J.-P.: Some Random Series of Functions, 2nd edn. Cambridge Studies in Advanced Mathematics, vol. 5. Cambridge University Press, Cambridge (1993)
 21. Katznelson, Y.: An Introduction to Harmonic Analysis, 3rd edn. Cambridge University Press, Cambridge (2004)
 22. Kratz, M.F.: Level crossings and other level functionals of stationary Gaussian processes. Probab. Surv. 3, 230–288 (2006)
 23. Kundu, A., Comtet, A., Majumdar, S.N.: Winding statistics of a Brownian particle on a ring. J. Phys. A: Math. Theor. 47, 385001 (2014)
 24. Le Doussal, P., Etzioni, Y., Horovitz, B.: Winding of planar Gaussian processes. J. Stat. Mech. Theory Exp. 5, P07012 (2009)
 25. Malevich, T.L.: Asymptotic normality of the number of crossings of level zero by a Gaussian process. Theor. Prob. Appl. 14, 287–295 (1969)
 26. Piterbarg, I.: Large deviations of random processes close to Gaussian ones. Theory Prob. Appl. 27, 504–524 (1982)
 27. Pitman, J., Yor, M.: Asymptotic laws of planar Brownian motion. Ann. Probab. 14(3), 733–779 (1986)
 28. Pitman, J., Yor, M.: Further asymptotic laws of planar Brownian motion. Ann. Probab. 17(3), 965–1011 (1989)
 29. Saleur, H.: The winding angle distribution for Brownian and SAW revisited (1993). arXiv:hep-th/9310034
 30. Slud, E.: Multiple Wiener–Itô integral expansions for level-crossing-count functionals. Prob. Theory Relat. Fields 87, 349–364 (1991)
 31. Slud, E.: MWI representation of the number of curve-crossings by a differentiable Gaussian process, with applications. Ann. Prob. 22(3), 1355–1380 (1994)
 32. Spitzer, F.: Some theorems concerning 2-dimensional Brownian motion. Trans. Am. Math. Soc. 87, 187–197 (1958)
 33. Vakeroudis, S.: On the windings of complex-valued Ornstein–Uhlenbeck processes driven by a Brownian motion and by a stable process. Stoch. Int. J. Prob. Stoch. Process. 87(5), 766–793 (2015)
 34. Walters, P.: An Introduction to Ergodic Theory. Graduate Texts in Mathematics, vol. 79. Springer, New York (1982)
 35. Wieland, B., Wilson, D.B.: Winding angle variance of Fortuin–Kasteleyn contours. Phys. Rev. E 68, 056101 (2003). See also arXiv:1002.3220
Copyright information
Open Access. This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.