
The many faces of the stochastic zeta function


Abstract

We introduce a framework to study the random entire function \(\zeta _\beta \) whose zeros are given by the Sine\(_\beta \) process, the bulk limit of beta ensembles. We present several equivalent characterizations, including an explicit power series representation built from Brownian motion. We study related distributions using stochastic differential equations. Our function is a uniform limit of characteristic polynomials in the circular beta ensemble; we give upper bounds on the rate of convergence. Most of our results are new even for classical values of \(\beta \). We provide explicit moment formulas for \(\zeta \) and its variants, and we show that the Borodin–Strahov moment formulas hold for all \(\beta \), both in the limit and for circular beta ensembles. We prove a uniqueness theorem for \(\zeta \) in the Cartwright class and deduce product identities between conjugate values of \(\beta \). The proofs rely on the structure of the Sine\(_\beta \) operator to express \(\zeta \) in terms of a regularized determinant.



Acknowledgements

The first author was partially supported by the NSF award DMS-1712551. The second author was supported by the Canada Research Chair program, the NSERC Discovery Accelerator grant, and the MTA Momentum Random Spectra research group. We thank Lucas Ashbury-Bridgwood, Alexei Borodin, Paul Bourgade, Reda Chhaibi, László Erdős, Yun Li, Vadim Gorin, Alexei Poltoratski, and the anonymous referees for useful comments.



Appendices

Proof of the Taylor Expansion of \(\zeta _{{\varvec{\tau }}}\)

Proof of Proposition 14

We start by adapting Lemma 15 to integral operators with matrix-valued kernels. We can embed the space of \({{\mathbb {R}}}^2\)-valued \(L^2(0, \sigma ]\) functions into the space of real-valued \(L^2(0,2 \sigma ]\) functions using the following invertible isometry:

$$\begin{aligned} g=[g_1,g_2]^{\dagger }, \qquad {\mathcal {J}}g(s)={\left\{ \begin{array}{ll} g_1(s), & 0<s\le \sigma ,\\ g_2(s- \sigma ), & \sigma <s\le 2 \sigma . \end{array}\right. } \end{aligned}$$
(189)

If B is a Hilbert–Schmidt integral operator acting on \({{\mathbb {R}}}^2\)-valued functions on \((0, \sigma ]\) with kernel \(K=\left( \begin{array}{cc} K_{11} & K_{12} \\ K_{21} & K_{22} \end{array} \right) \), then \(A={\mathcal {J}} B {\mathcal {J}}^{-1}\) is a Hilbert–Schmidt integral operator acting on scalar functions on \((0,2 \sigma ]\), and the integral kernel is

$$\begin{aligned} k(s,t)=K_{1+{\mathbf {1}}(s> \sigma ),\, 1+{\mathbf {1}}(t> \sigma )}\big (s-\sigma {\mathbf {1}}(s>\sigma ),\, t-\sigma {\mathbf {1}}(t>\sigma )\big ). \end{aligned}$$
(190)

For a matrix M denote by \(\Vert M\Vert _2\) the Frobenius norm \(\Vert M\Vert _2^2={{\text {Tr}}}\,(M M^\dagger )\).

We have \(\int _0^{ \sigma }\int _0^{ \sigma } \Vert K(s,t)\Vert ^2_2 ds\, dt=\int _0^{2 \sigma }\int _0^{2 \sigma } |k(s,t)|^2 ds\, dt\) and \(\int _0^{2 \sigma } k(s,s)ds=\int _0^{ \sigma } {{\text {Tr}}}\, K(s,s) ds\). Assuming that both of these integrals are finite we may apply Lemma 15 to the integral operator A. Since the spectrum of A is the same as the spectrum of B, we have \({\det }_2(I-z B)={\det }_2(I-z A)\) and hence

$$\begin{aligned} {\det }_2(I-z B)\, e^{-z \int _0^{ \sigma } {{\text {Tr}}}\,K(s,s)\, ds}=1+\sum _{n=1}^\infty (-1)^n \frac{z^n}{n!} \int _{(0,2 \sigma ]^n}\det \left[ k(t_i,t_j) \right] _{i,j=1}^n dt_1\cdots dt_n. \end{aligned}$$
(191)
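
As an illustration, not part of the argument, the finite-dimensional analogue of (190)–(191) can be checked numerically: for a matrix M one has \({\det }_2(I-zM)e^{-z{{\text {Tr}}}\,M}=\det (I-zM)\), and its expansion in principal minors is the discrete counterpart of the multiple integrals in (191). The kernel K in the following sketch is an arbitrary smooth choice, used only for illustration.

```python
import numpy as np
from itertools import combinations

# Hedged numerical sketch (not from the paper) of the discrete analogue of
# (190)-(191). The matrix kernel K below is arbitrary, chosen for illustration.
sigma, m = 1.0, 5                       # coarse grid: 2m = 10 scalar points
h = sigma / m
s = (np.arange(m) + 0.5) * h            # midpoints of (0, sigma]

def K(u, v):
    return np.array([[np.cos(u - v), u * v],
                     [np.sin(u + v), np.exp(-(u - v) ** 2)]])

blocks = np.array([[K(u, v) for v in s] for u in s])       # shape (m, m, 2, 2)
# scalar kernel k of (190): component 1 lives on (0, sigma], component 2 on (sigma, 2 sigma]
k = np.block([[blocks[:, :, 0, 0], blocks[:, :, 0, 1]],
              [blocks[:, :, 1, 0], blocks[:, :, 1, 1]]])
M = k * h                                                  # quadrature discretization

z = 0.7
lhs = np.linalg.det(np.eye(2 * m) - z * M)                 # = det_2(I - zM) e^{-z Tr M}

# Expansion in principal minors; summing over n-tuples as in (191) would give
# the same result times n!, since tuples with a repeated index contribute 0.
rhs = 1.0
for n in range(1, 2 * m + 1):
    rhs += (-z) ** n * sum(np.linalg.det(M[np.ix_(S, S)])
                           for S in combinations(range(2 * m), n))
print(lhs, rhs)                                            # agree to rounding error
```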

From (190) we have

$$\begin{aligned}&\int _{(0,2 \sigma ]^n}\det \left[ k(t_i,t_j) \right] _{i,j=1}^n dt_1\cdots dt_n\\&\quad =\sum _{i_1=1}^2 \cdots \sum _{i_n=1}^2 \int _{(0, \sigma ]^n} \det (K_{i_a, i_b}(s_a,s_b))_{a,b=1}^n ds_1\cdots ds_n. \end{aligned}$$

Now set \(B={\mathtt {r}\,}{\varvec{\tau }}\). By (17), the entries \(K_{ij}\) of the matrix-valued kernel are given by

$$\begin{aligned} K_{i,j}(s,t)=\tfrac{1}{2}\left( a_i(s) c_j(t){\mathbf {1}}(s<t)+c_i(s)a_j(t){\mathbf {1}}(t\le s)\right) . \end{aligned}$$
(192)

Note that \(K(s,t)^{\dagger }=K(t,s)\) and thus \(K_{i,j}(s,t)=K_{j,i}(t,s)\). Because of this we have

$$\begin{aligned}&\frac{1}{n!}\sum _{i_1=1}^2 \cdots \sum _{i_n=1}^2 \int _{(0, \sigma ]^n} \det (K_{i_j, i_\ell }(s_j,s_\ell ))_{j,\ell =1}^n ds_1\cdots ds_n \\&\quad =\sum _{i_1=1}^2 \cdots \sum _{i_n=1}^2\, \, \idotsint \limits _{0<s_1<\cdots <s_n\le \sigma } \det (K_{i_j, i_\ell }(s_j,s_\ell ))_{j,\ell =1}^n ds_1\cdots ds_n. \end{aligned}$$

Fix \(i_1, \ldots , i_n\in \{1,2\}\) and \(0<s_1<\cdots <s_n\le \sigma \). Introduce the temporary notation

$$\begin{aligned} p_k=a_{i_k}(s_k),\quad q_k=c_{i_k}(s_k), \end{aligned}$$

then by (192) we have \(2 K_{i_j, i_\ell }(s_j,s_\ell )= p_{\min (j,\ell )} \cdot q_{\max (j,\ell )}\). For example, for \(n=3\) we have

$$\begin{aligned} (2K_{i_j, i_\ell }(s_j,s_\ell ))_{j,\ell =1}^n=\left( \begin{array}{ccc} p_1 q_1 & p_1 q_2 & p_1 q_3 \\ p_1 q_2 & p_2 q_2 & p_2 q_3 \\ p_1 q_3 & p_2 q_3 & p_3 q_3 \end{array} \right) . \end{aligned}$$

We show that

$$\begin{aligned} \det ( p_{\min (j,\ell )} \cdot q_{\max (j,\ell )})_{j,\ell =1}^n=p_1 q_n\prod _{j=1}^{n-1} (p_{j+1} q_{j}-p_{j}q_{j+1}). \end{aligned}$$
(193)

Subtract \(q_{n}/q_{n-1}\) times row \(n-1\) from row \(n\). Then the last row becomes

$$\begin{aligned}{}[0,\ldots ,0, p_nq_n-p_{n-1} q_n^2/q_{n-1}]. \end{aligned}$$

The identity (193) now follows by induction.
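
As a sanity check, the identity (193) can also be verified symbolically for small n; the sketch below, an addition for illustration only, uses generic symbols \(p_j, q_j\) in place of the quantities defined above.

```python
import sympy as sp

# Symbolic spot-check of the determinant identity (193) for small n;
# p_j, q_j are free symbols, independent of the paper's a_i, c_i.
for n in range(2, 6):
    p = sp.symbols(f'p1:{n + 1}')
    q = sp.symbols(f'q1:{n + 1}')
    M = sp.Matrix(n, n, lambda j, l: p[min(j, l)] * q[max(j, l)])
    rhs = p[0] * q[n - 1]
    for j in range(n - 1):
        rhs *= p[j + 1] * q[j] - p[j] * q[j + 1]
    assert sp.expand(M.det() - rhs) == 0
print("identity (193) verified for n = 2, ..., 5")
```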

Note that \(p_{j+1} q_{j}-p_{j}q_{j+1}=[p_j,q_j]J [p_{j+1},q_{j+1}]^{\dagger }\), hence, with \(v_{i_k}(s_k)=[p_k,q_k]^{\dagger }\) we have

$$\begin{aligned}&\det ( p_{\min (j,\ell )} \cdot q_{\max (j,\ell )})_{j,\ell =1}^n\nonumber \\&\quad =\left[ v_{i_1}(s_1) v_{i_1}(s_1)^{\dagger } J v_{i_2}(s_2) v_{i_2}(s_2)^{\dagger } J\cdots v_{i_n}(s_n) v_{i_n}(s_n)^{\dagger } J\right] _{1,1}. \end{aligned}$$
(194)

Note that

$$\begin{aligned} v_{1}(s)v_1(s)^{\dagger }+v_2(s)v_2(s)^{\dagger }=2U^{\dagger } R(s)U, \quad U=[{\mathfrak {u}}_0 ,{\mathfrak {u}}_1 ]. \end{aligned}$$

Summing (194) for all choices of \(i_1, \ldots , i_n\) gives

$$\begin{aligned} 2^n\left[ U^{\dagger } R(s_1) U J U^{\dagger } R(s_2) U J\cdots U^{\dagger } R(s_n) U J\right] _{1,1}=-(-2)^n {\mathfrak {u}}_0 ^{\dagger } R(s_1)JR(s_2)J\cdots R(s_n){\mathfrak {u}}_1 . \end{aligned}$$

In the last step we used \( U J U^{\dagger }=-J\), which is equivalent to the assumption (14).

The statement (28) of the proposition now follows from (191). \(\square \)

Law of Iterated Logarithm for Brownian Integrals

Theorem 81

For every \(a>2\) and \(b<1/2\) there is a constant \(c\) so that

$$\begin{aligned} P\Big (\sup _{t>0} \left( B(t)^2/t-a \log (1+|\log t|)\right) >y\Big )\le c e^{-by} \quad \text {for all}\quad y\ge 0. \end{aligned}$$
(195)

This is an effective small- and large-time version of the upper bound in the law of the iterated logarithm. By setting \(t=1\) inside the \(\sup \) we get \(B(1)^2\), which shows that the rate of exponential decay cannot be more than 1/2. The lower bound 2 on the parameter a is sharp by the usual law of the iterated logarithm.
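
The bound (195) can also be illustrated by simulation. The sketch below, an addition for illustration only, restricts the supremum to \(t\in [e^{-8},e^{8}]\) on a discrete grid, takes \(a=3\), and estimates the tail probability; the empirical tail decays at an exponential rate close to 1/2, consistent with the theorem.

```python
import numpy as np

# Monte Carlo sketch of the tail in (195), restricted to a finite window and
# a discrete grid, so it only illustrates (and slightly underestimates) the sup.
rng = np.random.default_rng(1)
a = 3.0
t = np.exp(np.linspace(-8.0, 8.0, 4001))      # exponentially spaced time grid
dt = np.diff(t, prepend=0.0)                  # first increment starts from time 0

def sup_statistic():
    B = np.cumsum(rng.standard_normal(t.size) * np.sqrt(dt))
    return np.max(B ** 2 / t - a * np.log1p(np.abs(np.log(t))))

samples = np.array([sup_statistic() for _ in range(2000)])
for y in (0.0, 2.0, 4.0, 6.0):
    print(y, np.mean(samples > y))            # tail decays roughly like exp(-y/2)
```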

Proof

Let \(f,g:(-1,\infty )\rightarrow {\mathbb {R}}\) be non-decreasing functions and \(\varepsilon \in (0,1)\). If \(f(t)\ge g(t)\) for some \(t\ge 0\), then \(f(s)\ge g(s-\varepsilon )\) for all \(s\in [t, t+\varepsilon ]\). Hence

$$\begin{aligned} {\mathbf {1}}\big (f(t)\ge g(t) \text { for some } t\ge 0\big )\le \tfrac{1}{\varepsilon } \int _0^\infty {\mathbf {1}}\big (f(s)\ge g(s-\varepsilon )\big )\,ds. \end{aligned}$$
(196)

Let \({\bar{B}}_t=\max _{0\le s\le t} |B_s|\), and apply (196) to \(f(t)={\bar{B}}(e^{t})^2\) and \(g(t)=e^{t}(y+a\log (1+t))\). The expectation of the resulting inequality bounds \(P\big (\sup _{t\ge 1} (B_t^2/t-a \log (1+\log t))>y\big )\) from above as

$$\begin{aligned} P\big ({\bar{B}}(e^{t})^2\ge g(t)\ \text {for some}\ t\ge 0\big ) \le \tfrac{1}{\varepsilon } \int _0^\infty P\big ({\bar{B}}(e^s)^2\ge e^{s-\varepsilon }(y+a\log (1+s-\varepsilon ))\big )\,ds. \end{aligned}$$
(197)

Since \(\max _{0\le r\le 1} B(r)\) is distributed as |B(1)|, union and Gaussian tail bounds yield

$$\begin{aligned} P({\bar{B}}(e^s)^2\ge e^s x)=P({\bar{B}}(1)^2\ge x)\le 2P(B(1)^2\ge x)\le 2 e^{-x/2}. \end{aligned}$$

Thus the right hand side of (197) is bounded above by

$$\begin{aligned} \tfrac{2}{\varepsilon } \int _0^\infty e^{- e^{-\varepsilon }(y+a\log (1+s-\varepsilon ))/2} ds=\frac{ 4(1-\varepsilon )^{1-e^{-\varepsilon } a/2}e^{-e^{-\varepsilon } y/2}}{(a e^{-\varepsilon }-2)\varepsilon }. \end{aligned}$$

To make the last step valid and to get the required bound we need \(a e^{-\varepsilon }>2\) and \(e^{-\varepsilon }/2>b\), so we choose \(\varepsilon <\min \left( 1, \log (a/2),\log (1/(2b))\right) \). Time inversion \(B_t \rightarrow tB(1/t)\) gives the same bound for the supremum over \(0< t\le 1\), from which (195) follows. \(\square \)
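
The closed-form evaluation of the last integral can be spot-checked numerically for admissible parameter values; the choices \(a=6\), \(y=1.5\), \(\varepsilon =0.2\) in the sketch below are arbitrary, subject to \(ae^{-\varepsilon }>2\).

```python
import numpy as np
from scipy.integrate import quad

# Numerical spot-check of the integral evaluation above, for one arbitrary
# admissible choice of parameters (we need a*exp(-eps) > 2 for convergence).
a, y, eps = 6.0, 1.5, 0.2
integrand = lambda s: np.exp(-np.exp(-eps) * (y + a * np.log(1 + s - eps)) / 2)
val, _ = quad(integrand, 0.0, np.inf)
lhs = 2.0 / eps * val
rhs = (4.0 * (1.0 - eps) ** (1.0 - np.exp(-eps) * a / 2.0)
       * np.exp(-np.exp(-eps) * y / 2.0) / ((a * np.exp(-eps) - 2.0) * eps))
print(lhs, rhs)    # agree to quadrature accuracy
```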

We apply Theorem 81 to estimate the growth of Brownian integrals.

Proposition 82

Suppose that B is a two-sided Brownian motion and that the process \(X_u\), \(u\le 0\), is adapted to the filtration generated by its increments. Assume further that there is a random variable C and a constant \(a>0\) so that \( |X_u|\le C e^{a u} \) for \(u\le 0\). Then

$$\begin{aligned}&\Big |\int _{-\infty }^u X_s\, dB_s \Big |\\&\quad \le \frac{2C}{\sqrt{a}} e^{a u} \left( Z+ \log (1+2a|u|)+|\log a| +\log (1+2|\log C |)\right) \quad \text {for all } u\le 0, \end{aligned}$$

so the left hand side is well defined. With an absolute constant c, the random variable Z satisfies

$$\begin{aligned} P(Z>y)\le ce^{- y}, \quad y\ge 0. \end{aligned}$$
(198)

Proof

We have

$$\begin{aligned} \int _{-\infty }^u X_s^2\, ds \le \int _{-\infty }^u C^2 e^{2a s} ds=\frac{C^2}{2a} e^{2a u}<\infty , \end{aligned}$$

so the process \( M_u=\int _{-\infty }^u X_s\, dB_s \) is well defined. Moreover,

$$\begin{aligned}{}[M]_u\le \frac{C^2}{2a} e^{2a u}, \quad \text {for all}\quad u\le 0. \end{aligned}$$
(199)

By the Dubins–Schwarz theorem there is a Brownian motion \(W(x), x\ge 0\) so that \( M_u=W([M]_u)\). Let

$$\begin{aligned} Z=\tfrac{1}{3} \sup _{x>0} \left( W(x)^2/x-3 \log (1+|\log x|)\right) . \end{aligned}$$

By Theorem 81 this random variable satisfies (198), and for \(u\le 0\) we have

$$\begin{aligned} M_u^2\le 3 [M]_u Z+3 [M]_u \left( 1+\log (1+|\log [M]_u|)\right) . \end{aligned}$$

The function \(x(1+\log (1+| \log x|))\) is increasing, so by (199) we also get

$$\begin{aligned} M_u^2\le \frac{3C^2}{2a} e^{2a u} \left( Z+1+ \log \left( 1+\left| \log \frac{C^2}{2a} e^{2a u}\right| \right) \right) . \end{aligned}$$
(200)

For \(x,y>0\) we have \(| \log (xy)|\le | \log x|+| \log y|\), \(\log (1+x+y)\le \log (1+x)+\log (1+y)\), and \(\log (1+x)\le x\). Using these bounds repeatedly we get

$$\begin{aligned} \log (1+| \log \frac{C^2}{2a} e^{2a u}|)&\le \log (1+2 | \log C|+ | \log 2a|+2a |u|) \\&\le \log (1+2a |u|)+\log 2+| \log a| +\log (1+2| \log C |). \end{aligned}$$

Take square roots in (200) and use the inequality \(\sqrt{1+y}\le 1+y\) for \(y>0\). For q fixed, \(Z+q\) satisfies the same tail bound as Z with a different c. This implies the claim. \(\square \)
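
Proposition 82 can also be illustrated by a simulation, shown below. The integrand \(X_u=e^{au}\cos (B_u)\) with \(C=1\), \(a=1\) is an arbitrary adapted choice, and the integral from \(-\infty \) is truncated at a finite left endpoint, so this is only a sketch of the statement.

```python
import numpy as np

# Euler-type sketch of Proposition 82: left-point Ito sums approximating
# M_u = int_{-infty}^u X dB stay within a multiple of e^{a u}, as predicted
# (here C = 1, a = 1, and -infty is truncated to -T).
rng = np.random.default_rng(2)
a, T, n = 1.0, 20.0, 200_000
du = T / n
u = -T + du * np.arange(1, n + 1)              # grid on (-T, 0]
u_left = u - du                                # left endpoints of the increments
dB = rng.standard_normal(n) * np.sqrt(du)
B = np.concatenate(([0.0], np.cumsum(dB)))     # Brownian path on the grid, started at -T
X = np.exp(a * u_left) * np.cos(B[:-1])        # adapted integrand, |X_u| <= e^{a u}
M = np.cumsum(X * dB)                          # Ito sums for int X dB, evaluated at times u
ratio = np.abs(M) / ((2.0 / np.sqrt(a)) * np.exp(a * u))
print(ratio.max())                             # of moderate size: |M_u| follows the e^{a u} envelope
```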

Moment Bounds for an Almost Linear SDE

The following lemma is used when calculating moments of ratios of \(\zeta \).

Lemma 83

Consider the diffusion

$$\begin{aligned} dX=Y dt+Z dW, \end{aligned}$$
(201)

with \(|Y|\le a |X|\) and \(|Z|\le b |X|\), where \(a\), \(b\), and \(E|X_0|^2\) are finite. Then for any \(t\ge 0\)

$$\begin{aligned} E|X_t|^2\le E|X_0|^2 e^{(2 a+2b^2)t}, \quad EX_t=EX_0+\int _0^{t} E Y_s ds. \end{aligned}$$

In particular, if \(Y=\eta X\) for \(\eta \in {{\mathbb {C}}}\) then \(EX_t=EX_0 e^{\eta t}\).

Proof

Let \(\tau _c\) be the first time \(|X_t|\ge c\). By Itô’s formula

$$\begin{aligned} \int _0^{\tau _c\wedge s}d|X|^2=\int _0^{\tau _c\wedge s}2 \Re ({\bar{X}} Z dW)+(2 \Re X {\bar{Y}} +2|Z|^2)dt. \end{aligned}$$

In the interval \([0,\tau _c\wedge s]\) the quadratic variation is bounded, so

$$\begin{aligned} M_t=|X_{t\wedge \tau _c}|^2-\int _0^{t\wedge \tau _c} (2 \Re X_s {\bar{Y}}_s+2|Z_s|^2)\, ds \end{aligned}$$

is a martingale, and

$$\begin{aligned} E|X_{t\wedge \tau _c}|^2=E|X_0|^2+E\int _0^{t\wedge \tau _c} (2 \Re X_s {\bar{Y}}_s+2|Z_s|^2)\, ds \le E|X_0|^2+(2a+2b^2)\, E\int _0^{t\wedge \tau _c} |X|^2\, ds. \end{aligned}$$
(202)

Since

$$\begin{aligned} \int _0^{t\wedge \tau _c} |X|^2\, ds=\int _0^{t} |X|^2 {\mathbf {1}}(s\le \tau _c)\, ds \le \int _0^{t} |X_{s\wedge \tau _c}|^2\, ds \end{aligned}$$

we have \( E|X_{t\wedge \tau _c}|^2\le E|X_0|^2e^{(2a+2b^2) t} \) by Gronwall's inequality. Letting \(c\rightarrow \infty \), Fatou's lemma gives

$$\begin{aligned} E|X_t|^2\le E|X_0|^2e^{(2a+2b^2) t} \end{aligned}$$

as well. From this we see that the quadratic variation of \( \int _0^{t} Z_s\, dW_s \) has finite expectation, so this integral is a martingale. Thus we can take expectations in (201), which gives the second identity of the lemma. The last claim follows by solving the equation \( EX_t=EX_0+\eta \int _0^{t} EX_s\, ds \). \(\square \)
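
The lemma can be illustrated with a short Euler–Maruyama simulation in the linear case \(Y=\eta X\), \(Z=bX\) (so \(a=|\eta |\)); the parameter values in the sketch below are arbitrary. The empirical mean should track \(EX_0\, e^{\eta t}\), and the empirical second moment should stay below the bound \(E|X_0|^2 e^{(2a+2b^2)t}\).

```python
import numpy as np

# Euler-Maruyama sketch of Lemma 83 with Y = eta*X and Z = b*X (so a = |eta|).
rng = np.random.default_rng(3)
eta, b = 0.3 + 1.0j, 0.4
t_max, steps, paths = 1.0, 1000, 20_000
dt = t_max / steps
X = np.ones(paths, dtype=complex)                       # X_0 = 1
for _ in range(steps):
    dW = rng.standard_normal(paths) * np.sqrt(dt)
    X = X + eta * X * dt + b * X * dW
a = abs(eta)
print(np.mean(X), np.exp(eta * t_max))                  # approximately equal
print(np.mean(np.abs(X) ** 2),                          # stays below the bound of the lemma
      np.exp((2 * a + 2 * b ** 2) * t_max))
```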


Cite this article

Valkó, B., Virág, B. The many faces of the stochastic zeta function. Geom. Funct. Anal. 32, 1160–1231 (2022). https://doi.org/10.1007/s00039-022-00613-8
