Expected Number of Zeros of Random Power Series with Finitely Dependent Gaussian Coefficients

We are concerned with the zeros of random power series whose coefficients form a stationary, centered, complex Gaussian process. We show that the expected number of zeros in every smooth domain in the disk of convergence is less than that of the hyperbolic Gaussian analytic function with i.i.d. coefficients. When the coefficients are finitely dependent, i.e., when the spectral density is a trigonometric polynomial, we derive precise asymptotics of the expected number of zeros inside the disk of radius r centered at the origin as r tends to the radius of convergence; the proof clarifies that the negative contribution to the number of zeros stems from the zeros of the spectral density.


Introduction
Let {ζ_k}_{k=0}^∞ be independent, identically distributed (i.i.d.) standard complex Gaussian random variables. Peres and Virág studied the zeros of the random power series f_PV(z) = Σ_{k=0}^∞ ζ_k z^k and found that the zero point process Σ_{z∈C: f_PV(z)=0} δ_z is a determinantal point process associated with the Bergman kernel [14]. The study of this Gaussian analytic function (GAF) has been developing in several directions (cf. [1,3,5,8,10,11]); however, there seem to be relatively few works on zeros of random power series with dependent Gaussian coefficients. Recently, Mukeru, Mulaudzi, Nazabanita and Mpanda studied the zeros of a Gaussian random power series f_H(z) on the unit disk with coefficients Ξ^(H) = {ξ_k^(H)}_{k=0}^∞ being a fractional Gaussian noise (fGn) with Hurst index 0 ≤ H < 1. They gave an estimate for the expected number of zeros of f_H(z) inside D(r) := {z ∈ C : |z| < r} and showed that it is smaller than that of f_PV(z) by O((1 − r²)^{−1/2}) [12]; their proof was based on the maximum principle via an integral representation of the expectation on D(r). In this paper, we give precise asymptotics as r → 1⁻ of the expected number of zeros in D(r) of a random power series f_Ξ(z) = Σ_{k=0}^∞ ξ_k z^k when Ξ = {ξ_k}_{k=0}^∞ is a stationary, centered, finitely dependent complex Gaussian process, i.e., its spectral density is a trigonometric polynomial of degree n. As will be seen later, the essential idea of our proof is to represent the expected number of zeros as a contour integral on ∂D(r) by using the Stokes theorem, similarly to [2,9], and to keep track of the poles of the integrand indexed by r, i.e., the zeros of a (scaled) spectral density of Ξ, as r → 1⁻. We find that the degeneracy of the zeros of the spectral density sensitively affects the order of the difference between the expected number of zeros of f_Ξ(z) and that of f_PV(z).
Let Ξ = {ξ_k}_{k∈Z} be a stationary, centered, complex Gaussian process with unit variance and covariance function γ(k) = E[ξ_{j+k} ξ̄_j], (1.1) where γ(0) = 1 and γ(−k) = γ̄(k). Throughout this paper, we always assume the variance to be 1. We consider the random power series f_Ξ(z) = Σ_{k=0}^∞ ξ_k z^k. (1.2) For the sake of simplicity, in what follows, we often omit the subscript Ξ in f_Ξ. The covariance kernel of the Gaussian analytic function (GAF) defined in (1.2) is given by K_f(z, w) = E[f(z) f̄(w)] = (G(z) + Ḡ(w) − 1)/(1 − z w̄), (1.3) where G(z) = Σ_{k=0}^∞ γ(k) z^k. (1.4) Since |γ(k)| ≤ γ(0) = 1 follows from positive definiteness, the radius of convergence of G(z) is at least 1. The covariance function γ(k) can be represented as γ(k) = (2π)^{−1} ∫_0^{2π} e^{√−1 kθ} dΔ(θ), where Δ(θ) is called the spectral function of Ξ. When Δ(θ) is absolutely continuous with respect to the Lebesgue measure, the density Δ′(θ) = dΔ(θ)/dθ is called the spectral density of Ξ (cf. [4]). We write G_2(z, z̄) = G(z) + Ḡ(z) − 1 and note that G_2(e^{√−1 θ}, e^{−√−1 θ}) gives the spectral density of the Gaussian process Ξ if G(z) is analytic in a neighborhood of D̄. When {ξ_k}_{k∈Z} are i.i.d., γ(k) = δ_{0,k} (Kronecker's delta) and K_f(z, w) is the Szegő kernel. As mentioned before, Peres and Virág showed that the zeros of f_PV(z) with i.i.d. Gaussian coefficients form the determinantal point process associated with the Bergman kernel [14]. In the present paper, we compare the expected number of zeros of f(z) with finitely dependent Gaussian coefficients with that of f_PV(z).
We first deal with the case of 2-dependent stationary Gaussian processes with covariance function γ(0) = 1, γ(±1) = a, γ(±2) = b, and γ(k) = 0 for |k| ≥ 3. (1.5) We easily verify that {γ(k)}_{k∈Z} is positive definite if and only if (a, b) lies in the region P = P₁ ∪ P₂; see Figure 1. We consider the GAF f_{a,b}(z) associated with (1.5). Since we normalized the variance of ξ_k to be 1, the radius of convergence of the power series f_{a,b}(z) is 1 a.s. for any (a, b) ∈ P.
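The positive definiteness of the covariance and the degeneracy at the corner of P can be checked numerically from the spectral density. The sketch below assumes the reading γ(0) = 1, γ(±1) = a, γ(±2) = b of (1.5), so that the spectral density is 1 + 2a cos θ + 2b cos 2θ; at (a, b) = (2/3, 1/6) this coincides with |1 + e^{iθ}|⁴/6, which vanishes to order 4 at θ = π (i.e., at z = −1).

```python
import numpy as np

def spectral_density(theta, a, b):
    """Spectral density 1 + 2a cos(theta) + 2b cos(2 theta), under the
    reading gamma(0) = 1, gamma(+-1) = a, gamma(+-2) = b of (1.5)."""
    return 1 + 2 * a * np.cos(theta) + 2 * b * np.cos(2 * theta)

theta = np.linspace(0, 2 * np.pi, 4001)

# A sample interior point of P: the density is strictly positive on the circle.
assert spectral_density(theta, 0.3, 0.1).min() > 0

# Degenerate corner (a, b) = (2/3, 1/6): the density coincides with
# |1 + e^{i theta}|^4 / 6, which has a zero of order 4 at theta = pi.
s = spectral_density(theta, 2 / 3, 1 / 6)
target = np.abs(1 + np.exp(1j * theta)) ** 4 / 6
assert np.max(np.abs(s - target)) < 1e-10
```

The identity at the degenerate corner pins down the sign convention: the + case (a = 2/3) puts the zero of the density at z = −1.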
We denote the set of zeros of a GAF f by Z_f and let N_f(r) = #(Z_f ∩ D(r)) be the number of zeros within D(r), the disk of radius r centered at the origin. From now on, for simplicity, we write r → 1 instead of r → 1⁻.
(IV) If (a, b) is in the interior of P, then there exists a non-negative constant C(a, b) such that EN_f(r) = r²/(1 − r²) − C(a, b) + o(1) as r → 1. The constant C(a, b) is positive except for (a, b) = (0, 0). The numbers (I)–(IV) in Theorem 1.1 correspond to those in Figure 1.
The case (a, b) = (0, 0) corresponds to the case of Peres and Virág, f_PV(z), and it is known that EN_{f_PV}(r) = r²/(1 − r²). Therefore, in all cases, the expected number of zeros is less than that of f_PV(z), at least in the limit as r → 1. In fact, we can show the following stronger result. Theorem 1.2. Let f be a GAF defined in (1.2) with (1.3) and (1.4). Let D ⊂ D be a domain with smooth boundaries and N_f(D) be the number of zeros of f inside D. Then EN_f(D) is always less than or equal to EN_{f_PV}(D). Moreover, the equality holds for some (hence any) domain D if and only if f(z) is equal to f_PV(z) in law.
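The Peres–Virág value r²/(1 − r²) can be sanity-checked numerically: since the zeros of f_PV form the determinantal point process with the Bergman kernel [14], the first intensity is 1/(π(1 − |z|²)²), and integrating it over D(r) recovers the closed form. A minimal sketch:

```python
from scipy.integrate import quad

def expected_zeros_pv(r):
    """Integrate the first intensity 1 / (pi (1 - |z|^2)^2) of the zero
    process of f_PV over D(r), in polar coordinates:
    EN(r) = int_0^r 2 pi s * (1 / (pi (1 - s^2)^2)) ds."""
    val, _ = quad(lambda s: 2 * s / (1 - s ** 2) ** 2, 0, r)
    return val

# Matches the closed form r^2 / (1 - r^2).
r = 0.9
assert abs(expected_zeros_pv(r) - r ** 2 / (1 - r ** 2)) < 1e-8
```

The antiderivative of 2s/(1 − s²)² is 1/(1 − s²), so the integral equals 1/(1 − r²) − 1 = r²/(1 − r²), in agreement with the quadrature.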
As was seen above, the asymptotic behavior at (a, b) = (±2/3, 1/6), corresponding to Case (III), is special, since G_2(z, z̄) is the most degenerate in the sense that G_2(z, z̄) = |1 ± z|⁴/6 for z ∈ ∂D; this G_2(z, z̄) has the degenerate zero at z = ∓1. The phenomena are the same in both cases, and so we only deal with the + case below. Now we focus on the n-dependent stationary Gaussian process Ξ with covariance function {γ_n(k)}_{k∈Z} which is the most degenerate in the sense above, i.e., G_2(z, z̄) = binom(2n, n)^{−1} |1 + z|^{2n} (1.11) for z ∈ ∂D, and z = −1 is the zero of order 2n. We remark that for this Gaussian process Ξ we have the moving-average representation ξ_k = binom(2n, n)^{−1/2} Σ_{j=0}^n binom(n, j) ζ_{k−j}, where {ζ_j}_{j∈Z} is an i.i.d. standard complex Gaussian sequence. In this case, we have the following asymptotics, which include (1.8) as the special case n = 2.
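The covariance of the moving-average representation can be cross-checked numerically. The sketch below assumes the binomial weights ξ_k = binom(2n, n)^{−1/2} Σ_{j=0}^n binom(n, j) ζ_{k−j} (our reading, consistent with the spectral density being a constant multiple of |1 + z|^{2n}); by the Vandermonde identity the resulting autocovariance is γ_n(k) = binom(2n, n+k)/binom(2n, n).

```python
import numpy as np
from math import comb

n = 3
# Moving-average weights a_j proportional to binom(n, j), scaled to unit
# variance: sum_j a_j^2 = binom(2n, n) / binom(2n, n) = 1, so gamma(0) = 1.
a = np.array([comb(n, j) for j in range(n + 1)], dtype=float)
a /= np.sqrt((a ** 2).sum())

# Autocovariance gamma(k) = sum_j a_j a_{j+k} of xi_k = sum_j a_j zeta_{k-j}.
gamma = np.array([a[: n + 1 - k] @ a[k:] for k in range(n + 1)])

# Candidate closed form binom(2n, n+k) / binom(2n, n).
target = np.array([comb(2 * n, n + k) / comb(2 * n, n) for k in range(n + 1)])
assert np.allclose(gamma, target)
```

For n = 2 this reproduces the covariance of the degenerate 2-dependent case (a, b) = (2/3, 1/6): γ(1) = binom(4, 3)/binom(4, 2) = 2/3 and γ(2) = binom(4, 4)/binom(4, 2) = 1/6.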
Theorem 1.3. Let γ_n(k) be defined as in (1.11) and let Ξ = {ξ_k}_{k∈Z} be the stationary, centered, complex Gaussian process with covariance function {γ_n(k)}_{k∈Z}. The expected number of zeros of the power series f with coefficients Ξ within D(r) is given by (1.13), where one of the terms in (1.13) vanishes by a cancellation; see the proof of Theorem 1.3 and Remark 4.4.
As will be seen in the proofs of the theorems, the order of the second term in the asymptotic expansion comes from the behavior of the zeros of G_2(z, z̄) in the case of n-dependent Gaussian processes. If G_2(z, z̄) has a zero of multiplicity 2k on ∂D, i.e., so does the spectral density, then a term of order (1 − r²)^{−(2k−1)/(2k)} appears in the asymptotics of EN_f(r) as r → 1. Hence the zeros of the spectral density with the highest multiplicity determine the asymptotics of the second-order term. Therefore, we obtain the following result for general finitely dependent cases.
Corollary 1.5. Let Ξ = {ξ_k}_{k∈Z} be a stationary, centered, finitely dependent, complex Gaussian process. When the spectral density of Ξ has zeros θ_j of multiplicity 2k_j for j = 1, 2, ..., p, we set α = (2k − 1)/(2k) with k = max_{1≤j≤p} k_j; otherwise we set α = 0. Then, there exists a positive constant C_Ξ such that the expected number of zeros of the GAF f with coefficients Ξ within D(r) is given by EN_f(r) = r²/(1 − r²) − C_Ξ (1 − r²)^{−α} (1 + o(1)) as r → 1. For example, the Gaussian process Ξ with G_2(z, z̄) = (const.) Π_{j=1}^p |z + a_j|^{2k_j} for z, a_1, ..., a_p ∈ ∂D and k_1, ..., k_p ≥ 1 gives an example of the GAF described in Corollary 1.5.
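Finite dependence of such a process can be observed numerically: the covariance is recovered from the spectral density by Fourier inversion, and a trigonometric polynomial of degree q forces γ(k) = 0 for |k| > q. A sketch with the hypothetical choice p = 2, a₁ = 1 with k₁ = 2, a₂ = −1 with k₂ = 1 (so the density has zeros of orders 4 and 2 and degree q = 3):

```python
import numpy as np

# Spectral density |z + 1|^4 |z - 1|^2 on the unit circle (zeros at z = -1
# of order 4 and at z = +1 of order 2), normalized below so gamma(0) = 1.
N = 1 << 10
theta = 2 * np.pi * np.arange(N) / N
z = np.exp(1j * theta)
dens = np.abs(z + 1) ** 4 * np.abs(z - 1) ** 2

# Fourier coefficients gamma(k) = (2 pi)^{-1} int dens * e^{-ik theta} dtheta,
# computed exactly (for a trigonometric polynomial) by the DFT.
gamma = np.fft.fft(dens) / N
gamma /= gamma[0].real  # normalize the variance gamma(0) to 1

# Finitely dependent: the density has degree 3, so gamma(k) = 0 for |k| > 3.
assert np.allclose(gamma[4:N - 3], 0, atol=1e-10)
```

Expanding (2 + 2 cos θ)²(2 − 2 cos θ) = 4 + 2 cos θ − 4 cos 2θ − 2 cos 3θ shows the normalized covariance is (1, 1/4, −1/2, −1/4, 0, 0, ...), confirming 3-dependence.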
This paper is organized as follows. In Section 2, we recall the Edelman–Kostlan formula, derive its variants for later use, and prove Theorem 1.2. We also give some examples to illustrate our idea for the computation of the expected number of zeros. In Section 3, we prove Theorem 1.1. In Section 4, we briefly recall the method of Puiseux expansion and prove Theorem 1.3.

The expected number of zeros: examples
2.1. Expected number of zeros. To prove Theorems 1.1 and 1.3, we recall the Edelman–Kostlan formula for the expected number of zeros of a GAF.
Proposition 2.1. Let D ⊂ C be a domain with smooth boundaries, f a GAF defined in a neighborhood of D̄, and N_f(D) the number of zeros of f inside D. Then, assuming that no singularity lies on ∂D for the second equality, EN_f(D) = (1/4π) ∫_D Δ_z log K_f(z, z̄) dm(z) = (1/2πi) ∮_{∂D} ∂_z log K_f(z, z̄) dz, where dm(z) is the Lebesgue measure on the complex plane C and i = √−1 is the imaginary unit.
For the proof of the first equality, see [6]. For the second equality, the Stokes theorem is used as in [2,9].
In our setting, we have much simpler expressions for EN f (r).
Corollary 2.2. Let f be a GAF defined in (1.2) with (1.3) and (1.4). Let D ⊂ D be a domain with smooth boundaries and N_f(D) the number of zeros inside D. Then, EN_f(D) = EN_{f_PV}(D) + J(D), (2.14) where J(D) has the two expressions (2.15) and (2.16). In particular, when D = D(r), (2.14) becomes EN_f(r) = r²/(1 − r²) + J(r), where we simply write J(r) for J(D(r)).
Proof. The first expression (2.15) follows directly from (1.3), (1.4) and the second equality in Proposition 2.1. For the second expression (2.16), since ∂_z̄ Ḡ(z) is the complex conjugate of ∂_z G(z), it is easy to see from the first equality in Proposition 2.1 that (2.16) holds. This completes the proof.
The expression (2.16) essentially, but not explicitly, appeared in [12]. The authors derived a similar expression from the one-point correlation function and used it to evaluate the expected number of zeros in the case of fractional Gaussian noise.
Here Θ(r, z) is the rational function of z obtained from G_2(rz, r z̄) by putting z̄ = z^{−1} on ∂D; when γ(k) is real for every k ∈ Z, it takes a simpler form. Note that Θ(1, e^{iθ}) is the spectral density, at least for finitely dependent Gaussian processes. One can then apply the residue theorem, and from this point of view, the behavior of the zeros of Θ(r, z) as r → 1 is essential for the order of J(r).
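The residue-theorem step can be illustrated on a toy integrand (not the actual Θ(r, z), which depends on the process): for f(z) = 1/(z(z − 2)), the only pole inside the unit circle is z = 0 with residue −1/2, and a discretized contour integral reproduces that value to near machine precision.

```python
import numpy as np

# Toy check of the residue theorem: f(z) = 1 / (z (z - 2)) has its only
# pole inside the unit circle at z = 0, with residue lim_{z->0} 1/(z-2) = -1/2.
N = 4096
theta = 2 * np.pi * np.arange(N) / N
z = np.exp(1j * theta)
f = 1 / (z * (z - 2))

# (1 / 2 pi i) * contour integral of f(z) dz, with dz = i z dtheta,
# approximated by a uniform Riemann sum (spectrally accurate here).
integral = (f * 1j * z).sum() * (2 * np.pi / N) / (2j * np.pi)
assert abs(integral - (-0.5)) < 1e-10
```

The same discretization is what one would use to cross-check a residue evaluation of J(r) numerically for a concrete Θ(r, z).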
Theorem 1.2 is a direct consequence of the second expression (2.16) of J(D).
Proof of Theorem 1.2. The error term J(D) is clearly non-positive from (2.16). Moreover, the right-hand side of (2.16) is zero if and only if G′(z) = 0 for m-a.e. z ∈ D. It follows from the uniqueness theorem that G′(z) is identically zero on D, so that G ≡ γ(0) = 1 and hence γ(k) = 0 for all k ≥ 1; that is, f(z) is equal to f_PV(z) in law. This completes the proof.

2.2. Examples. In this subsection, we present two examples to see how the expected number of zeros behaves as r → 1. Although all the computations are rather straightforward, they are helpful for understanding the situation.
By using z̄ = z^{−1} for z ∈ ∂D, we see the explicit form of Θ(r, z), and we apply (2.18) to this case. The only zero of Θ(r, z), namely z = 0, does not move in r and gives the only pole of the integrand; hence we obtain J(r) from its residue. In this case, G(z) is analytic in D(1/ρ) and Θ(1, z), or equivalently G_2(z, z̄), does not vanish on ∂D.
Remark 2.5. As was seen in this example, the second term J(r) is O(1) as r → 1 whenever G(z) is analytic in a neighborhood of D̄ := D ∪ ∂D and Θ(r, z) does not vanish on ∂D.
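Remark 2.5 can be tested numerically on the concrete covariance γ(k) = ρ^{|k|} (an assumed instance with G(z) = 1/(1 − ρz) analytic in D(1/ρ) and non-vanishing spectral density; the example's exact setup may differ). Evaluating the contour form of the Edelman–Kostlan formula with a truncated kernel, a direct residue computation of ours, not taken from the paper, suggests J(r) = −ρ²r²/(1 − ρ²r²), which indeed stays O(1) as r → 1:

```python
import numpy as np

rho, r, M, N = 0.5, 0.9, 400, 2048
theta = 2 * np.pi * np.arange(N) / N
z = r * np.exp(1j * theta)

# Truncated covariance kernel K_f(z, zbar) = sum_{j,k} rho^{|j-k|} z^j zbar^k.
j = np.arange(M)
gam = rho ** np.abs(j[:, None] - j[None, :])
Z = z[:, None] ** j[None, :]
K = ((Z @ gam) * Z.conj()).sum(axis=1)
dK = (((Z * j) @ gam) * Z.conj()).sum(axis=1) / z  # d/dz in the first slot

# EN_f(r) = (1 / 2 pi i) * contour integral of (dK / K) dz over |z| = r.
EN = ((dK / K * 1j * z).sum() * (2 * np.pi / N) / (2j * np.pi)).real

# Conjectured closed form: r^2/(1 - r^2) - rho^2 r^2 / (1 - rho^2 r^2).
expected = r ** 2 / (1 - r ** 2) - rho ** 2 * r ** 2 / (1 - rho ** 2 * r ** 2)
assert abs(EN - expected) < 1e-6
```

Here the correction term tends to the constant −ρ²/(1 − ρ²) as r → 1, consistent with the O(1) behavior asserted in the remark.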
(ii) From (2.19) we intuitively observe that, near z = 1, the first term ζ/(1 − z) dominates √(1 − ρ) f_PV(z), pushing up the absolute value of f and decreasing the number of zeros.
We would like to emphasize that the behavior of zeros of Θ(r, z) as r → 1 is essential for the asymptotic behavior of the error term J (r).

2-dependent cases
In this section, we prove Theorem 1.1.
Let X = z + z^{−1} and rewrite the denominator as br²X² + arX + 1 − 2br², whose roots X_± are distinct. Here we take the branch of the square root accordingly, and we note that γ = (X_− + (γ − γ^{−1}))/2, i.e., γ + γ^{−1} = X_−. Substituting this into the numerator and expanding it, we use the fact that X_− is a solution of the equation br²X² + arX + 1 − 2br² = 0. Hence it follows from (3.22), (3.23) and (3.24) that the claimed asymptotics hold. This completes the proof of Case (I).
we see that the claimed asymptotics hold. This completes the proof of Case (II).

3.3. Case (III). We give a proof of Case (III).
3.4. Case (IV). Finally, we give a sketch of the proof of Case (IV). Since all zeros of Θ(r, z) stay away from ∂D as r → 1 when (a, b) is in the interior of P, no singularity contributing to the asymptotic behavior appears on the boundary ∂D, and hence it suffices to consider the integral with r set equal to 1. Here we only consider the interior of P₁ with a > 0. We use the same notation as in the proof of Case (I). A little more computation shows that the constant is expressed in terms of µ(a, b) = (1 + 2b)² − 4a² and that C(a, b) > 0 unless (a, b) = (0, 0). We omit the other cases, since the results follow by repeating similar computations.

Degenerate cases
In this section, we give a proof of Theorem 1.3. From (2.18), we obtain an expression for EN_f(r). We note from (1.12) that q_n(1, z) = (z + 1)^{2n}. To see the asymptotic behavior of EN_f(r) as r → 1, we need that of the root z(r) of q_n(r, z(r)) = 0.

4.1. Behavior of the root z(r) as r → 1. We first note that q_n(1, −1) = 0 and ∂_z q_n(r, z)|_{(r,z)=(1,−1)} = 0. Hence, we cannot apply the implicit function theorem in the variable z to q_n(r, z). Alternatively, we follow the strategy of using Puiseux series expansions and the Newton polygon method (cf. [18]).
We first shift variables: setting (x, y) = (1 − r, z + 1) in q_n(r, z), we consider Q_n(x, y) := q_n(1 − x, y − 1). Note that Q_n(0, y) = y^{2n}. Following [18], we denote by C{x, y} (resp. C{x}) the ring of convergent power series in the two variables x, y (resp. the one variable x). If f ∈ C{x, y} satisfies f(0, y) = y^m A(y) with A(0) ≠ 0, then we say that f is regular in y of order m [18, p. 20]. In our setting, Q_n(x, y) is regular in y of order 2n. We can use the following theorem from [18, p. 20, Theorem 2.2.6] to guarantee the existence of 2n distinct solutions of the equation Q_n(x, y) = 0 around (x, y) = (0, 0).
(ii) If f is regular in y of order m, and we write f = UF with U a unit and F a monic polynomial of degree m in y, then there are m such solutions g_j(x^{1/m_j}), all distinct unless the discriminant of F vanishes identically, and F(y) ≡ Π_{j=1}^m (y − g_j(x^{1/m_j})). For our purpose, we need a more explicit form of the g_j's, and so we directly perform the Newton polygon method below.
The solutions y(x) of Q_n(x, y) = 0 in a neighborhood of the origin (0, 0) are described by this theorem, since Q_n(x, y) is a bivariate polynomial. We now compute the asymptotic expansion of y = y(x) with Q_n(x, y(x)) = 0 at the origin, following the Newton polygon method [18, p. 15, Theorem 2.1.1]. Here we give a brief description of the algorithm following [18]. First, given f(x, y) = 0, we plot the point (r, s) of exponents for each term c_{r,s} x^r y^s of f(x, y) with c_{r,s} ≠ 0 on R², and take the convex hull of all the plotted points. Its boundary consists of straight line segments that do not lie on the coordinate axes; this is called the Newton polygon. Secondly, we let m₁ be the reciprocal of the negative of the slope of one of these segments.
Then we consider f(x, x^{m₁}(a₁ + y₁)) and solve for a₁ by focusing on the terms of lowest degree in x, using f(x, y) = 0. Thirdly, let f^{(1)}(x, y₁) = x^{−l} f(x, x^{m₁}(a₁ + y₁)), where l is the intercept with the s-axis. Repeating this process, we obtain a solution y = a₁x^{m₁} + a₂x^{m₁+m₂} + ⋯ of f(x, y) = 0 for f ∈ C{x, y}. For Q_n(x, y), the Newton polygon is the segment joining (1, 0) and (0, 2n), as shown in Figure 2 for n = 4. Thus, it is guaranteed that Q_n(x, y) = 0 has solutions of the form y = a₁x^{1/(2n)} + ⋯.
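The exponent 1/(2n) can be illustrated on the toy polynomial Q(x, y) = y^{2n} − x (our simplified stand-in for Q_n, not the paper's actual polynomial), whose Newton polygon is exactly the segment joining (1, 0) and (0, 2n): all 2n roots in y have modulus x^{1/(2n)}, matching the Puiseux exponent m₁ = 1/(2n).

```python
import numpy as np

n = 2          # so there are 2n = 4 roots
x = 1e-8

# Toy polynomial y^{2n} - x: its Newton polygon joins (1, 0) and (0, 2n),
# with slope -2n, hence Puiseux exponent m_1 = 1/(2n).
coeffs = np.zeros(2 * n + 1)
coeffs[0] = 1.0    # leading coefficient of y^{2n}
coeffs[-1] = -x    # constant term
roots = np.roots(coeffs)

# All 2n roots have modulus x^{1/(2n)} (= 1e-2 here), scaling like x^{1/(2n)}.
assert len(roots) == 2 * n
assert np.allclose(np.abs(roots), x ** (1 / (2 * n)), rtol=1e-4)
```

In the same way, the 2n roots z(r) of q_n(r, z) splitting off from z = −1 are expected to move at the rate (1 − r)^{1/(2n)}, which is the mechanism behind the exponent (2n − 1)/(2n) in Theorem 1.3.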