1 Introduction

Let \(\{\zeta _k\}_{k=0}^{\infty }\) be independent, identically distributed (i.i.d.) standard complex Gaussian random variables. Peres and Virág studied the zeros of the random power series \(f_{\textrm{PV}}(z)=\sum _{k=0}^\infty \zeta _k z^k\) and found that the zero point process \(\sum _{z \in {\mathbb {C}}: f_{\textrm{PV}}(z)=0}\delta _z\) is a determinantal point process associated with the Bergman kernel [15]. The studies around this Gaussian analytic function (GAF) have been developing in several directions (cf. [2, 5, 7, 8, 10–13, 16–18]); however, there seem to be relatively few works on zeros of random power series with dependent Gaussian coefficients. Recently, Mukeru, Mulaudzi, Nazabanita and Mpanda studied the zeros of the Gaussian random power series \(f_H(z)\) on the unit disk whose coefficients \(\Xi ^{(H)} = \{\xi _k^{(H)}\}_{k=0}^{\infty }\) form a fractional Gaussian noise (fGn) with Hurst index \(0\le H<1\). They gave an estimate for the expected number of zeros of \(f_H(z)\) inside \({\mathbb {D}}(r):=\{z\in {\mathbb {C}}:|z|<r\}\) and showed that it is smaller than that of \(f_{\textrm{PV}}(z)\) by \(O((1-r^2)^{-1/2})\) [14]; their proof was based on the maximum principle via an integral representation of the expectation on \({\mathbb {D}}(r)\). In this paper, we give precise asymptotics as \(r \rightarrow 1_-\) of the expected number of zeros in \({\mathbb {D}}(r)\) of a random power series \(f_{\Xi }(z) = \sum _{k=0}^{\infty } \xi _k z^k\) when \(\Xi = \{\xi _k\}_{k=0}^{\infty }\) is a stationary, centered, finitely dependent complex Gaussian process, i.e., its spectral density is a trigonometric polynomial of degree n.
As will be seen later, the essential idea of our proof is to represent the expected number of zeros as a contour integral on \(\partial {\mathbb {D}}(r)\) by using the Stokes theorem, as in [4, 11], and to keep track of the poles of the integrand, indexed by r, i.e., the zeros of a (scaled) spectral density of \(\Xi \), as \(r \rightarrow 1_-\). We found that the degeneracy of the zeros of the spectral density sensitively affects the order of the difference between the expected number of zeros of \(f_{\Xi }(z)\) and that of \(f_{\textrm{PV}}(z)\).

Let \(\Xi = \{\xi _k\}_{k \in {\mathbb {Z}}}\) be a stationary, centered, complex Gaussian process with unit variance and covariance function

$$\begin{aligned} {\textbf{E}}[\xi _k\overline{\xi _l}]=\gamma (l-k), \quad k, l \in {\mathbb {Z}}, \end{aligned}$$
(1.1)

where \(\gamma (0)=1\) and \(\gamma (-k) = \overline{\gamma (k)}\). Throughout this paper, we always assume the variance to be 1. We consider the following random power series

$$\begin{aligned} f_{\Xi }(z) = \sum _{k=0}^\infty \xi _k z^k. \end{aligned}$$
(1.2)

For the sake of simplicity, in what follows, we often omit the subscript \(\Xi \) in \(f_{\Xi }\). The covariance kernel of the GAF defined in (1.2) is given by

$$\begin{aligned} K_f(z,w) = {\textbf{E}}[f(z)\overline{f(w)}] = \frac{1}{1-z{\overline{w}}} G_2(z,w), \end{aligned}$$
(1.3)

where

$$\begin{aligned} G_2(z,w) = 1+G(z) + \overline{G(w)}, \quad G(z) = \sum _{k=1}^\infty \overline{\gamma (k)} z^k. \end{aligned}$$
(1.4)

Since \(|\gamma (k)| \le \gamma (0)=1\) follows from positive definiteness, the radius of convergence of G(z) is at least 1. The covariance function \(\gamma (k)\) can be represented as \(\gamma (k) = (2 \pi )^{-1} \int _0^{2\pi } e^{\sqrt{-1}k\theta } d\Delta (\theta )\), where \(\Delta (\theta )\) is called the spectral function of \(\Xi \). When \(\Delta (\theta )\) is absolutely continuous with respect to the Lebesgue measure, the density \(\Delta '(\theta ) = d\Delta (\theta )/d\theta \) is called the spectral density of \(\Xi \) (cf. [6]). We note that \(G_2(e^{\sqrt{-1}\theta }, e^{\sqrt{-1}\theta })\) gives the spectral density of the Gaussian process \(\Xi \) if G(z) is analytic in a neighborhood of \({\mathbb {D}}\). When \(\{\xi _k\}_{k \in {\mathbb {Z}}}\) are i.i.d., \(\gamma (k) = \delta _{0,k}\) (Kronecker’s delta) and \(K_f(z,w)\) is the Szegő kernel. As mentioned before, Peres and Virág showed that the zeros of \(f_{\textrm{PV}}(z)\) with i.i.d. Gaussian coefficients form the determinantal point process associated with the Bergman kernel [15]. In the present paper, we compare the expected number of zeros of f(z) with finitely dependent Gaussian coefficients with that of \(f_{\textrm{PV}}(z)\).
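Although no numerical computation is used in this paper, the kernel identity (1.3)–(1.4) is easy to check numerically. The following sketch (the 2-dependent covariance parameters and the evaluation points are illustrative choices, not from the text) truncates the double series defining \(K_f(z,w)\) and compares it with the closed form.

```python
def gamma(k, a=0.2, b=0.1):
    # an illustrative real 2-dependent covariance, cf. (1.5)
    k = abs(k)
    return (1.0, a, b)[k] if k <= 2 else 0.0

def kernel_series(z, w, N=80):
    # truncation of K_f(z,w) = sum_{k,l>=0} gamma(l-k) z^k conj(w)^l
    wb = w.conjugate()
    return sum(gamma(l - k) * z**k * wb**l for k in range(N) for l in range(N))

def kernel_formula(z, w, a=0.2, b=0.1):
    # (1 + G(z) + conj(G(w))) / (1 - z*conj(w)) with G(z) = a z + b z^2;
    # since gamma is real, conj(G(w)) = G(conj(w))
    wb = w.conjugate()
    G = lambda u: a*u + b*u**2
    return (1 + G(z) + G(wb)) / (1 - z*wb)

z, w = 0.3 + 0.2j, 0.1 - 0.4j
err = abs(kernel_series(z, w) - kernel_formula(z, w))
```

Since \(|z\overline{w}| < 1\), the truncation error is negligible and the two expressions agree to machine precision.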

We first deal with the case of 2-dependent stationary Gaussian processes with covariance function

$$\begin{aligned} \gamma (k) = {\left\{ \begin{array}{ll} 1 &{} k=0, \\ a &{} |k|= 1, \\ b &{} |k|= 2, \\ 0 &{} \text {otherwise}. \end{array}\right. } \end{aligned}$$
(1.5)

We easily verify that \(\{\gamma (k)\}_{k \in {\mathbb {Z}}}\) is positive definite if and only if \((a,b)\) is in the region \({\mathcal {P}}= {\mathcal {P}}_1 \cup {\mathcal {P}}_2\) with

$$\begin{aligned} {\mathcal {P}}_1 = \left\{ (a,b) \in {\mathbb {R}}^2: \frac{a^2}{8}+\left( b-\frac{1}{4}\right) ^2 \le \frac{1}{16} \right\} \end{aligned}$$

and

$$\begin{aligned} {\mathcal {P}}_2 = \left\{ (a,b) \in {\mathbb {R}}^2: \frac{a^2}{8}+\left( b-\frac{1}{4}\right) ^2\ge \frac{1}{16}, \ |a|-\frac{1}{2} \le b \le \frac{1}{6} \right\} . \end{aligned}$$
Fig. 1

The region \({\mathcal {P}}\) of positive definiteness of \(\gamma (k)\) defined in (1.5). The red and black dashed ellipse is the boundary of \({\mathcal {P}}_1\), the green points are \((a,b) = (\pm 2/3, 1/6)\), and the blue line segments are \(b=|a|-1/2\) for \(-1/2 \le b \le 1/6\). A similar figure can be found in [3, p.72]. In fact, the region \(\mathcal {P}\) coincides with the invertibility region for moving-average (MA(2)) processes (Color figure online)

See Fig. 1. We consider the GAF \(f_{a,b}(z)\) associated with (1.5). Since we normalized the variance of \(\xi _k\) to be 1, the convergence radius of the power series \(f_{a,b}(z)\) is 1 a.s. for any \((a,b) \in {\mathcal {P}}\).
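The characterization of \({\mathcal {P}}\) can also be checked numerically: by Bochner's theorem, \(\{\gamma (k)\}\) is positive definite exactly when the spectral density \(1+2a\cos \theta +2b\cos 2\theta \) is nonnegative on \([0,2\pi )\). A minimal sketch (the sample points below are illustrative choices):

```python
import math

def density_min(a, b, N=20000):
    # minimum over a grid of the spectral density 1 + 2a cos(t) + 2b cos(2t)
    return min(1 + 2*a*math.cos(2*math.pi*j/N) + 2*b*math.cos(4*math.pi*j/N)
               for j in range(N))

m_corner = density_min(2/3, 1/6)     # corner point of P, Case (III): min is 0
m_edge = density_min(0.4, -0.1)      # on the segment b = |a| - 1/2, Case (II)
m_outside = density_min(0.9, 0.1)    # a point outside P: density goes negative
```

For the boundary points the minimum vanishes (at \(\theta = \pi \)), while outside \({\mathcal {P}}\) the density takes negative values, so no stationary Gaussian process has such a covariance.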

We denote the zeros of GAF f by \({\mathcal {Z}}_f\) and let

$$\begin{aligned} N_f(r)=\#\{z\in {\mathcal {Z}}_f : |z|< r\}, \quad r \in (0,1) \end{aligned}$$

be the number of zeros within \({\mathbb {D}}(r)\), the disk of radius r centered at the origin. From now on, for simplicity, we write \(r \rightarrow 1\) instead of \(r\rightarrow 1_-\).

Theorem 1.1

Let \(f_{a,b}\) be the GAF defined in (1.2) with covariance function of the form (1.5) with \((a,b) \in {\mathcal {P}}\). Then, the asymptotic behavior of the expected number of zeros is as follows.

(I) If \((a,b)\) satisfies \(a^2/8+(b-1/4)^2=1/16\) and \(1/6 < b \le 1/2\), then

$$\begin{aligned} {\textbf{E}}N_{f_{a,b}}(r)=\frac{r^2}{1-r^2}-\sqrt{\frac{2b}{6b-1}}\frac{1}{(1-r^2)^{1/2}}+O(1), \quad r\rightarrow 1. \end{aligned}$$
(1.6)

(II) If \((a,b)\) satisfies \(b=|a|-1/2\) and \(-1/2\le b<1/6\), then

$$\begin{aligned} {\textbf{E}}N_{f_{a,b}}(r) = \frac{r^2}{1-r^2}-\frac{1}{2}\sqrt{\frac{1-2b}{1-6b}} \frac{1}{(1-r^2)^{1/2}}+O(1), \quad r \rightarrow 1. \end{aligned}$$
(1.7)

(III) If \((a,b)=(\pm 2/3,1/6)\), then

$$\begin{aligned} {\textbf{E}}N_{f_{a,b}}(r) = \frac{r^2}{1-r^2}-\frac{1}{2^{5/4}}\frac{1}{(1-r^2)^{3/4}}+O\left( \frac{1}{(1-r^2)^{1/4}}\right) , \quad r\rightarrow 1. \end{aligned}$$
(1.8)

(IV) If \((a,b)\) is in the interior of \({\mathcal {P}}\), then there exists a nonnegative constant \(C(a,b)\) such that

$$\begin{aligned} {\textbf{E}}N_{f_{a,b}}(r) = \frac{r^2}{1-r^2} - C(a,b)+O\left( 1-r^2\right) , \quad r \rightarrow 1. \end{aligned}$$
(1.9)

The constant \(C(a,b)\) is positive except for \((a,b)=(0,0)\). The numbers (I)–(IV) in Theorem 1.1 correspond to those in Fig. 1.

The case of \((a,b)=(0,0)\) corresponds to the case of Peres-Virág, \(f_{\textrm{PV}}(z)\), and it is known that

$$\begin{aligned} {\textbf{E}}N_{f_{0,0}}(r) = {\textbf{E}}N_{f_{\textrm{PV}}}(r) = \frac{r^2}{1-r^2}. \end{aligned}$$
(1.10)

Therefore, in all cases, the expected number of zeros is less than that of \(f_{\textrm{PV}}(z)\), at least in the limit as \(r \rightarrow 1\). In fact, we can show the following stronger result.

Theorem 1.2

Let f be a GAF defined in (1.2) with (1.3) and (1.4). Let \(D \subset {\mathbb {D}}\) be a domain with smooth boundaries and \(N_f(D)\) be the number of zeros of f inside D. Then, \({\textbf{E}}N_f(D)\) is always less than or equal to \({\textbf{E}}N_{f_{\textrm{PV}}}(D)\). Moreover, the equality holds for some (hence any) domain D if and only if f is equal to \(f_{\textrm{PV}}\) in law.

As seen above, the asymptotic behavior at \((a,b) = (\pm 2/3, 1/6)\) corresponding to Case (III) is special since \(G_2(z,z)\) is the most degenerate in the sense that

$$\begin{aligned} G_2(z,z) = 1 \pm \frac{2}{3} (z+z^{-1}) + \frac{1}{6} (z^2+z^{-2}) = \frac{1}{6} z^{-2} (z \pm 1)^4 \end{aligned}$$

for \(z \in \partial {\mathbb {D}}= \{z \in {\mathbb {C}}: |z| = 1\}\). The above \(G_2(z,z)\) has a degenerate zero at \(z=\mp 1\). The phenomenon is the same in both cases, so we deal only with the \(+\) case below. Now, we focus on the n-dependent stationary Gaussian process \(\Xi \) with covariance function \(\{\gamma _n(k)\}_{k \in {\mathbb {Z}}}\) which is the most degenerate in the sense above, i.e.,

$$\begin{aligned} \gamma _n(k) = {\left\{ \begin{array}{ll} {2n \atopwithdelims ()n+k} {2n \atopwithdelims ()n} ^{-1} &{} \hbox { if}\ |k| =0, 1, 2, \dots , n, \\ 0 &{} \text {otherwise}, \end{array}\right. } \end{aligned}$$
(1.11)

which is normalized as \(\gamma _n(0)=1\). It is easy to see that

$$\begin{aligned} G_2(z,z) = \sum _{k = -n}^n \gamma _n(k) z^k = {2n \atopwithdelims ()n}^{-1} z^{-n} (z+1)^{2n} \end{aligned}$$
(1.12)

for \(z \in \partial {\mathbb {D}}\), and \(z=-1\) is a zero of order 2n. We remark that for this Gaussian process \(\Xi \) we have the following moving-average representation:

$$\begin{aligned} \xi _k = {2n \atopwithdelims ()n}^{-1/2} \sum _{j=0}^{n} {n \atopwithdelims ()j} \zeta _{k-j}, \quad k=0,1,\dots , \end{aligned}$$

where \(\{\zeta _j\}_{j \in {\mathbb {Z}}}\) is an i.i.d. standard complex Gaussian sequence. In this case, we have the following asymptotics, which includes (1.8) as the special case \(n=2\).
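The covariance (1.11) can be recovered from this moving-average representation via the Vandermonde identity \(\sum _j \binom{n}{j}\binom{n}{j+m} = \binom{2n}{n+m}\). A quick check (illustration only):

```python
from math import comb

def cov_from_ma(n, m):
    # E[xi_k conj(xi_{k+m})] computed from the moving-average coefficients;
    # math.comb(n, j+m) is 0 once j+m > n, which truncates the sum correctly
    return sum(comb(n, j) * comb(n, j + m) for j in range(n + 1)) / comb(2*n, n)

def gamma_n(n, m):
    # the covariance (1.11), normalized so that gamma_n(0) = 1
    return comb(2*n, n + m) / comb(2*n, n) if m <= n else 0.0

ok = all(abs(cov_from_ma(n, m) - gamma_n(n, m)) < 1e-12
         for n in range(1, 7) for m in range(0, n + 3))
```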

Theorem 1.3

Let \(\gamma _n(k)\) be defined as (1.11) and \(\Xi = \{\xi _k \}_{k \in {\mathbb {Z}}}\) be the stationary, centered, complex Gaussian process with covariance function \(\{\gamma _n(k)\}_{k \in {\mathbb {Z}}}\). The expected number of zeros of the power series f with coefficients \(\Xi \) within \({\mathbb {D}}(r)\) is given by

$$\begin{aligned} {\textbf{E}}N_f(r) = \frac{r^2}{1-r^2} - D_n (1-r^2)^{-\frac{2n-1}{2n}} +O((1-r^2)^{-\frac{2n-3}{2n}}), \quad r \rightarrow 1, \end{aligned}$$
(1.13)

where

$$\begin{aligned} D_n= \frac{1}{2n \sin \frac{\pi }{2n}} \left\{ \left( {\begin{array}{c}2(n-1)\\ n-1\end{array}}\right) \right\} ^{\frac{1}{2n}}. \end{aligned}$$
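As a consistency check, for \(n=2\) the constant \(D_n\) should reduce to the coefficient \(2^{-5/4}\) appearing in (1.8):

```python
from math import comb, sin, pi

def D(n):
    # the constant D_n in (1.13)
    return comb(2*(n - 1), n - 1) ** (1/(2*n)) / (2*n*sin(pi/(2*n)))

# D(2) = 2^{1/4} / (4 sin(pi/4)) = 2^{1/4} / 2^{3/2} = 2^{-5/4}
gap = abs(D(2) - 2**(-5/4))
```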

Remark 1.4

The term of order \((1-r^2)^{-\frac{2n-2}{2n}}\) in (1.13) vanishes by a cancellation. See the proof of Theorem 1.3 and Remark 4.4.

As will be seen in the proof of the theorems, the order of the second term in the asymptotic expansion comes from the behavior of the zeros of \(G_2(z,z)\) in the case of n-dependent Gaussian processes. If \(G_2(z,z)\) has a zero of multiplicity 2k on \(\partial {\mathbb {D}}\), or equivalently if the spectral density does, then a term of order \((1-r^2)^{-(2k-1)/(2k)}\) appears in the asymptotics of \({\textbf{E}}N_f(r)\) as \(r \rightarrow 1\). Hence, the zeros of the spectral density of maximal multiplicity determine the asymptotics of the second-order term. Therefore, we obtain the following result for general finitely dependent cases.

Corollary 1.5

Let \(\Xi = \{\xi _k \}_{k \in {\mathbb {Z}}}\) be a stationary, centered, finitely dependent, complex Gaussian process. When the spectral density of \(\Xi \) has zeros \(\theta _j\) of multiplicity \(2k_j\) for \(j=1,2,\dots ,p\), we set \(\alpha = (2k-1)/(2k)\) with \(k = \max _{1\le j \le p} k_j\); otherwise, we set \(\alpha =0\). Then, there exists a positive constant \(C_{\Xi }\) such that the expected number of zeros of the GAF f with coefficients \(\Xi \) within \({\mathbb {D}}(r)\) is given by

$$\begin{aligned} {\textbf{E}}N_f(r) = \frac{r^2}{1-r^2} - C_{\Xi } (1-r^2)^{-\alpha } +o((1-r^2)^{-\alpha }), \quad r \rightarrow 1. \end{aligned}$$

The Gaussian process \(\Xi \) with \(G_2(z,z) = (const.) \prod _{j=1}^p |z + a_j|^{2k_j}\) for \(z, a_1,\dots , a_p \in \partial {\mathbb {D}}\) and \(k_1, \dots , k_p \ge 1\), for instance, provides an example of the GAF described in Corollary 1.5.

This paper is organized as follows. In Sect. 2, we recall the Edelman–Kostlan formula, derive its variants for later use, and prove Theorem 1.2. We also present some examples to illustrate our idea for computing the expected number of zeros. In Sect. 3, we prove Theorem 1.1. In Sect. 4, we briefly recall the method of Puiseux expansion and prove Theorem 1.3.

2 The Expected Number of Zeros: Examples

2.1 Expected Numbers of Zeros

To prove Theorems 1.1 and 1.3, we recall the Edelman–Kostlan formula for the expected number of zeros of a GAF.

Proposition 2.1

Let \(D \subset {\mathbb {C}}\) be a domain with smooth boundaries, f be a GAF defined in a neighborhood of D, and \(N_f(D)\) be the number of zeros of f inside D. Then,

$$\begin{aligned} {\textbf{E}}N_f(D) = \frac{1}{4\pi }\int _{D}\Delta \log K_f(z,z)dm(z) =\frac{1}{2\pi {\textbf{i}}}\oint _{\partial D}\partial _z \log K_f(z,z)dz, \end{aligned}$$

assuming that no singularity lies on \(\partial D\) for the second equality, where dm(z) is the Lebesgue measure on the complex plane \({\mathbb {C}}\) and \({\textbf{i}}=\sqrt{-1}\) is the imaginary unit.

For the proof of the first equality, see [8]. For the second equality, the Stokes theorem is used as in [4, 11]. In our setting, we have much simpler expressions for \({\textbf{E}}N_f(r)\).

Corollary 2.2

Let f be a GAF defined in (1.2) with (1.3) and (1.4). Let \(D \subset {\mathbb {D}}\) be a domain with smooth boundaries and \(N_f(D)\) be the number of zeros inside D. Then,

$$\begin{aligned} {\textbf{E}}N_f(D)=\frac{1}{2\pi {\textbf{i}}} \oint _{\partial D} \frac{{\overline{z}}}{1-|z|^2} dz +{\mathcal {J}}(D), \end{aligned}$$
(2.14)

where \({\mathcal {J}}(D)\) has two expressions as follows:

$$\begin{aligned} {\mathcal {J}}(D) = \frac{1}{2\pi {\textbf{i}}} \oint _{\partial D} \frac{G'(z)}{G_2(z,z)}dz \end{aligned}$$
(2.15)

and

$$\begin{aligned} {\mathcal {J}}(D) = -\frac{1}{\pi }\int _{D} \left( \frac{|G'(z)|}{G_2(z,z)}\right) ^2 dm(z). \end{aligned}$$
(2.16)

In particular, when \(D = {\mathbb {D}}(r)\), (2.14) becomes

$$\begin{aligned} {\textbf{E}}N_f(r)=\frac{r^2}{1-r^2}+{\mathcal {J}}(r), \end{aligned}$$
(2.17)

where we simply write \({\mathcal {J}}(r)\) for \({\mathcal {J}}({\mathbb {D}}(r))\).

Proof

The first expression (2.15) directly follows from (1.3), (1.4) and the second equality in Proposition 2.1. For the second expression (2.16), since \(\overline{\partial _z G(z)} = \partial _{{\overline{z}}} (\overline{G(z)})\), it is easy to see from the first equality in Proposition 2.1 that

$$\begin{aligned} {\mathcal {J}}(D) = \frac{1}{\pi } \int _{D} \partial _z \partial _{{\overline{z}}} \log G_2(z,z) dm(z) = - \frac{1}{\pi } \int _{D} \frac{|\partial _zG(z)|^2}{(1+G(z)+\overline{G(z)})^2} dm(z). \end{aligned}$$

This completes the proof. \(\square \)

The expression (2.16) essentially, but not explicitly, appeared in [14]. They derived a similar expression from the one-point correlation and used it to evaluate the expected number of zeros in the case of fractional Gaussian noise.

Remark 2.3

In our setting, G(z) is a polynomial. By the change of variables \(z \mapsto rz\) in (2.15) with \(D = {\mathbb {D}}(r)\), we have

$$\begin{aligned} {\mathcal {J}}(r) = \frac{r}{2\pi {\textbf{i}}} \oint _{\partial {\mathbb {D}}} \frac{G'(rz)}{\Theta (r,z)} dz, \end{aligned}$$
(2.18)

where \(\Theta (r,z)\) is the rational function of z obtained from \(G_2(rz,rz)\) by putting \({\overline{z}}= z^{-1}\) on \(\partial {\mathbb {D}}\). In particular, when \(\gamma (k)\) is real for every \(k\in {\mathbb {Z}}\), we have

$$\begin{aligned} \Theta (r,z) = \sum _{k \in {\mathbb {Z}}} \gamma (k) r^{|k|} z^k. \end{aligned}$$

Note that \(\Theta (1,e^{i\theta })\) is the spectral density at least for finitely dependent Gaussian processes. Then, one can apply the residue theorem, and from this point of view, the behavior of zeros of \(\Theta (r,z)\) as \(r \rightarrow 1\) is essential for the order of \({\mathcal {J}}(r)\).

Theorem 1.2 is a direct consequence of the second expression (2.16) of \({\mathcal {J}}(D)\).

Proof of Theorem 1.2

The error term \({\mathcal {J}}(D)\) is clearly non-positive from (2.16). Moreover, the right-hand side of (2.16) is zero if and only if \(G'(z) = 0\) for m-a.e. point of D. It follows from the uniqueness theorem that \(G'(z)\) is identically zero on \({\mathbb {D}}\), and thus so is G(z) since \(G(0)=0\). Therefore, f is equal to \(f_{\textrm{PV}}\) in law. \(\square \)

2.2 Examples

In this subsection, we present two examples to see how the expected number of zeros behaves as \(r \rightarrow 1\). Although all computations are rather straightforward, they are helpful for understanding the situation.

Example 2.4

(Ornstein–Uhlenbeck process) Let \(\gamma (k) = \rho ^{|k|} \ (0< \rho < 1)\). The corresponding stationary Gaussian process is the (discrete time) Ornstein–Uhlenbeck process. In this case, we see that \(G(z) = \rho z (1-\rho z)^{-1}\) and

$$\begin{aligned} G_2(z,w) = \frac{1-\rho ^2 z {\overline{w}}}{(1-\rho z)(1- \rho {\overline{w}})}. \end{aligned}$$

By using \({\overline{z}}= z^{-1}\) for \(z \in \partial {\mathbb {D}}\), we see that

$$\begin{aligned} \Theta (r,z) = \frac{z(1-\rho ^2 r^2)}{(1-\rho r z)(z- \rho r)}. \end{aligned}$$

We apply (2.18) to this case. The only pole of the integrand inside \({\mathbb {D}}\) is the zero \(z=0\) of \(\Theta (r,z)\), which does not move as r varies. Hence, we have

$$\begin{aligned} {\textbf{E}}N_f(r) = \frac{r^2}{1-r^2} - \frac{\rho ^2 r^2}{1-\rho ^2 r^2} = \frac{r^2}{1-r^2} - \frac{\rho ^2}{1-\rho ^2} + O(1-r^2), \quad r \rightarrow 1. \end{aligned}$$

In this case, G(z) is analytic in \({\mathbb {D}}(1/\rho )\) and \(\Theta (1,z)\), or equivalently \(G_2(z,z)\), does not vanish on \(\partial {\mathbb {D}}\).
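The residue computation in this example can be verified numerically by evaluating the contour integral (2.18) with the trapezoid rule on \(\partial {\mathbb {D}}\). A sketch (the values of \(\rho \), r and the grid size are arbitrary choices):

```python
import cmath, math

rho, r, N = 0.5, 0.8, 4000

def integrand(z):
    g_prime = rho / (1 - rho*r*z)**2                       # G'(rz)
    theta = z*(1 - (rho*r)**2) / ((1 - rho*r*z)*(z - rho*r))
    return r * g_prime / theta

# (1/(2 pi i)) \oint g(z) dz  =  (1/N) sum_j g(z_j) z_j,  z_j = e^{2 pi i j/N}
zs = [cmath.exp(2j*math.pi*k/N) for k in range(N)]
J = sum(integrand(z)*z for z in zs) / N
residue_value = -(rho*r)**2 / (1 - (rho*r)**2)             # residue at z = 0
err = abs(J - residue_value)
```

Since all poles of the integrand other than \(z=0\) stay well away from the unit circle, the trapezoid rule converges geometrically and the two values agree to machine precision.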

Remark 2.5

As was seen in this example, the second term \({\mathcal {J}}(r)\) is O(1) as \(r \rightarrow 1\) whenever G(z) is analytic in a neighborhood of \({\overline{{\mathbb {D}}}}:= {\mathbb {D}}\cup \partial {\mathbb {D}}\) and \(\Theta (r,z)\) does not vanish on \(\partial {\mathbb {D}}\).

Example 2.6

For \(0<\rho <1\), let \(\zeta \) and \(\{\eta _k\}_{k \in {\mathbb {Z}}}\) be i.i.d. complex standard normal random variables and define the Gaussian process \(\Xi = \{\xi _k\}_{k \in {\mathbb {Z}}}\) by

$$\begin{aligned} \xi _k = \sqrt{\rho } \zeta + \sqrt{1-\rho } \eta _k \quad \text {for}\,\, k \in {\mathbb {Z}}. \end{aligned}$$

Then, the corresponding GAF is equal in law to

$$\begin{aligned} \sqrt{\rho } \frac{\zeta }{1-z} + \sqrt{1-\rho } f_{\textrm{PV}}(z) \end{aligned}$$
(2.19)

and its covariance function is given by

$$\begin{aligned} \gamma (k) = {\left\{ \begin{array}{ll} 1 &{} k=0, \\ \rho &{} \text {otherwise}. \end{array}\right. } \end{aligned}$$

In this case, \(G(z) = \rho z(1-z)^{-1}\) and

$$\begin{aligned} G_2(z,z)=\frac{1-(1-\rho )(z+{\overline{z}})+(1-2\rho )|z|^2}{(1-z)(1-{\overline{z}})}, \end{aligned}$$

and hence

$$\begin{aligned} \Theta (r,z) =- \frac{(1-\rho )rz^2 - (1+(1-2 \rho ) r^2)z +(1-\rho ) r}{(1-rz)(z-r)}. \end{aligned}$$

The zeros of \(\Theta (r,z)\) are \(\nu \) and \(\nu ^{-1}\), where \(\nu =\frac{\delta -\sqrt{\delta ^2-4}}{2}\) and \(\delta =\frac{1+(1-2\rho )r^2}{(1-\rho )r}\). Note that \(\nu \) (resp., \(\nu ^{-1}\)) is inside (resp., outside) \({\mathbb {D}}\). By using (2.18) and the residue theorem, we have

$$\begin{aligned} {\textbf{E}}N_f(r)=\frac{r^2}{1-r^2}-\frac{\rho }{1-\rho }\frac{\nu -r}{(\nu -\nu ^{-1})(1-\nu r)}. \end{aligned}$$

As \(r \rightarrow 1\), we have

$$\begin{aligned} {\textbf{E}}N_f(r) = \frac{r^2}{1-r^2} - \frac{1}{2}\sqrt{\frac{\rho }{1-\rho }} \frac{1}{\sqrt{1-r^2}} + O(1). \end{aligned}$$
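The residue formula above can be compared with direct contour integration of (2.18) at a moderate radius; a numerical sketch (the values of \(\rho \), r and the grid size are arbitrary choices):

```python
import cmath, math

rho, r, N = 0.4, 0.7, 4000
delta = (1 + (1 - 2*rho)*r**2) / ((1 - rho)*r)
nu = (delta - math.sqrt(delta**2 - 4)) / 2          # the root of Theta inside D
J_residue = -(rho/(1 - rho)) * (nu - r) / ((nu - 1/nu)*(1 - nu*r))

def integrand(z):
    g_prime = rho / (1 - r*z)**2                    # G'(rz), G(z) = rho z/(1-z)
    theta = -((1 - rho)*r*z**2 - (1 + (1 - 2*rho)*r**2)*z + (1 - rho)*r) \
            / ((1 - r*z)*(z - r))
    return r * g_prime / theta

zs = [cmath.exp(2j*math.pi*k/N) for k in range(N)]
J_contour = (sum(integrand(z)*z for z in zs) / N).real
err = abs(J_contour - J_residue)
```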

Remark 2.7

(i) The convergence radius of G(z) is 1, and its singularity is located only at \(z=1\). The zeros of \(\Theta (r,z)\) are \(\nu \) and \(\nu ^{-1}\) given above, where \(\nu \) (resp., \(\nu ^{-1}\)) is inside (resp., outside) \({\mathbb {D}}(r)\). Both \(\nu \) and \(\nu ^{-1}\) converge to 1 as \(r \rightarrow 1\), and the second term of \(O((1-r^2)^{-1/2})\) comes from \((\nu -\nu ^{-1})^{-1}\) as the residue at \(z=\nu \).

(ii) From (2.19), we intuitively observe that near \(z=1\), the first term \(\zeta /(1-z)\) pushes up the absolute values of \(\sqrt{1-\rho } f_{\textrm{PV}}(z)\) and decreases the number of zeros.

We would like to emphasize that the behavior of zeros of \(\Theta (r,z)\) as \(r \rightarrow 1\) is essential for the asymptotic behavior of the error term \({\mathcal {J}}(r)\).

3 2-Dependent Cases

In this section, we prove Theorem 1.1.

3.1 Case (I)

First we show Case (I).

Proof of Case (I) in Theorem 1.1

First we note that \(G(z)=az+bz^2\) and then \(\Theta (r,z) = 1 + ar(z+z^{-1}) + br^2(z^2+z^{-2})\). From (2.18), we have

$$\begin{aligned} {\mathcal {J}}(r)=\frac{r}{2\pi {\textbf{i}}}\oint _{\partial {\mathbb {D}}}\frac{a+2brz}{1+ar(z+z^{-1})+br^2(z^2+z^{-2})}dz. \end{aligned}$$
(3.20)

We suppose \((a,b) \in \partial {\mathcal {P}}_1 \cap \partial {\mathcal {P}}\), i.e., \(a = \pm 2 \sqrt{b(1-2b)}\) with \(1/6 \le b \le 1/2\). By the symmetry, it is enough to consider the case \(a>0\). Since the denominator is reciprocal, if \(\gamma \) is one of its roots, then the roots are given as \(\gamma , \gamma ^{-1}, {\bar{\gamma }}, {\bar{\gamma }}^{-1}\). Here, we suppose that \(\gamma \) lies in \({\mathbb {D}}\) and in the upper half-plane. Thus, \(\gamma , {\bar{\gamma }}\) (resp., \(\gamma ^{-1}, {\bar{\gamma }}^{-1}\)) are inside (resp., outside) \({\mathbb {D}}\). By taking the residues at \(\gamma \) and \({\bar{\gamma }}\), we see that

$$\begin{aligned} {\mathcal {J}}(r)&= \frac{1}{2\pi {\textbf{i}}br} \oint _{\partial {\mathbb {D}}}\frac{z^2(a+2brz)}{(z-\gamma )(z-{\bar{\gamma }})(z-\gamma ^{-1})(z-{\bar{\gamma }}^{-1})} dz \\&= \frac{2}{br} \Re \left( \frac{\gamma ^2(a+2br\gamma )}{(\gamma -{\bar{\gamma }})(\gamma -\gamma ^{-1})(\gamma -{\bar{\gamma }}^{-1})} \right) . \end{aligned}$$

Let \(X=z+z^{-1}\) and rewrite the denominator as \(br^2 X^2+arX+1-2br^2\), whose roots are distinct and given by \(X_{\pm }=(-a \pm {\textbf{i}}2 \sqrt{2}b\sqrt{1-r^2})/(2br)\). It is easy to see that

$$\begin{aligned}&\gamma = \frac{X_- + \sqrt{X_-^2-4}}{2}, \quad {\bar{\gamma }}= \frac{X_+ + \sqrt{X_+^2-4}}{2}, \\ {}&\gamma ^{-1} = \frac{X_- - \sqrt{X_-^2-4}}{2}, \quad {\bar{\gamma }}^{-1} = \frac{X_+ - \sqrt{X_+^2-4}}{2}. \end{aligned}$$

Here, we take the branch of \(\sqrt{z}\) such that \(\sqrt{1}=1\) and analytic in \({\mathbb {C}}\setminus (-\infty ,0]\). Note that

$$\begin{aligned} \gamma -\gamma ^{-1} = \sqrt{X_{-}^2-4}&=\frac{1}{br}\left( \sqrt{\frac{\alpha +\sqrt{\alpha ^2+\beta ^2}}{2}}+{\textbf{i}}\sqrt{\frac{-\alpha +\sqrt{\alpha ^2+\beta ^2}}{2}}\right) \end{aligned}$$
(3.21)

with \(\alpha =b-2b^2(r^2+2)\) and \(\beta =2b\sqrt{2(1-r^2)b(1-2b)}\). It is easy to see that

$$\begin{aligned} (\gamma -{\bar{\gamma }})(\gamma -{\bar{\gamma }}^{-1})=\gamma (X_--X_+)=-\gamma \frac{2\sqrt{2(1-r^2)}}{r}{\textbf{i}}\end{aligned}$$

and hence

$$\begin{aligned} {\mathcal {J}}(r) =-\frac{1}{b\sqrt{2(1-r^2)}}\Im \left( \frac{\gamma (a+2br\gamma )}{\gamma -\gamma ^{-1}}\right) . \end{aligned}$$
(3.22)

We note that \(\gamma =(X_{-} + \gamma - \gamma ^{-1})/2\). Substituting it to the numerator and expanding it by \(Y:= \gamma -\gamma ^{-1}\), we have

$$\begin{aligned} \frac{\gamma (a+2br\gamma )}{\gamma -\gamma ^{-1}}&=\frac{1}{2Y} \Big ( X_- (a+brX_-) + (a+2brX_-) Y + br Y^2\Big ) \nonumber \\&=\frac{2br^2-1}{2r} Y^{-1} - {\textbf{i}}b\sqrt{2(1-r^2)}+\frac{brY}{2}. \end{aligned}$$
(3.23)

Here, we used the fact that \(X_-\) is a solution of the equation \(br^2 X^2 + arX + 1-2br^2=0\). Since \(\alpha = -b(6b-1) + O(1-r^2)\) and \(\beta = 2b\sqrt{2b(1-2b)}\sqrt{1-r^2}\), we see that

$$\begin{aligned} \Im Y = \sqrt{\frac{6b-1}{b}} + O(1-r^2), \quad \Im Y^{-1} = - \sqrt{\frac{b}{6b-1}} + O(1-r^2),\quad r\rightarrow 1.\nonumber \\ \end{aligned}$$
(3.24)

Hence, it follows from (3.22), (3.23) and (3.24) that

$$\begin{aligned} {\mathcal {J}}(r)=-\sqrt{\frac{2b}{6b-1}}\frac{1}{\sqrt{1-r^2}}+O(1),\quad r\rightarrow 1. \end{aligned}$$

This completes the proof of Case (I). \(\square \)
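A numerical illustration of (1.6), not used in the proof: on the boundary arc \(a = 2\sqrt{b(1-2b)}\), the quantity \({\mathcal {J}}(r)\sqrt{1-r^2}\) computed from (3.20) should approach \(-\sqrt{2b/(6b-1)}\). Here \(b=0.3\) and \(r=0.9999\) are illustrative choices, and the agreement holds only up to the O(1) correction in (1.6), hence the generous tolerance.

```python
import cmath, math

b = 0.3
a = 2*math.sqrt(b*(1 - 2*b))         # boundary arc of P_1, Case (I)
r = 0.9999                           # illustrative radius close to 1
N = 20000                            # trapezoid points on the unit circle

def integrand(z):
    # r * G'(rz) / Theta(r,z) with G(z) = a z + b z^2, cf. (3.20)
    theta_rz = 1 + a*r*(z + 1/z) + b*r**2*(z**2 + 1/z**2)
    return r*(a + 2*b*r*z) / theta_rz

# (1/(2 pi i)) \oint g(z) dz  =  (1/N) sum_j g(z_j) z_j
zs = [cmath.exp(2j*math.pi*k/N) for k in range(N)]
J = (sum(integrand(z)*z for z in zs) / N).real

scaled = J * math.sqrt(1 - r**2)
target = -math.sqrt(2*b/(6*b - 1))
```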

3.2 Case (II)

Next we prove Case (II).

Proof of Case (II) in Theorem 1.1

By the symmetry, it is enough to consider the case \(b=a-1/2\) \((-1/2\le b\le 1/6)\). We divide the proof of Case (II) into two cases, i.e., (i) \(0<b\le 1/6\) and (ii) \(-1/2\le b\le 0\). In this subsection, we always consider the situation for r sufficiently close to 1 depending on b.

First we prove the case (i). The roots of \(br^2X^2+arX+1-2br^2=0\) are real and given by \(X_{\pm }=(-a \pm \lambda )/(2br)\in {\mathbb {R}}\) with \(\lambda = \sqrt{a^2-4b + 8b^2r^2}\). Note that \(X_\pm ^2-4\ge 0\), and \(X_{+} \rightarrow -2\) and \(X_{-} \rightarrow (2b-1)/(2b)\) as \(r \rightarrow 1\). As in Case (I), by (3.20), since the denominator is reciprocal, if two real roots \(\gamma \) and \(\kappa \) lie inside \({\mathbb {D}}\) with \(\gamma< \kappa < 0\), then all the roots are given as \(\gamma ,\gamma ^{-1},\kappa ,\kappa ^{-1}\). Here \(\gamma ,\kappa \) (resp. \(\gamma ^{-1},\kappa ^{-1}\)) are in \({\mathbb {D}}\cap {\mathbb {R}}\) (resp. in \({\mathbb {D}}^c\cap {\mathbb {R}}\)), which are given by

$$\begin{aligned}&\gamma =\frac{X_++\sqrt{X_+^2-4}}{2}, \ \gamma ^{-1}=\frac{X_+-\sqrt{X_+^2-4}}{2}, \nonumber \\&\kappa =\frac{X_-+\sqrt{X_-^2-4}}{2}, \ \kappa ^{-1}=\frac{X_- - \sqrt{X_-^2-4}}{2}. \end{aligned}$$
(3.25)

By (3.20) and the residue theorem, we see that

$$\begin{aligned} {\mathcal {J}}(r)&= \frac{1}{2\pi {\textbf{i}}br }\oint _{\partial {\mathbb {D}}}\frac{z^2(a+2brz)}{(z-\gamma )(z-\gamma ^{-1})(z-\kappa )(z-\kappa ^{-1})}dz\nonumber \\&=\frac{1}{br}\left\{ \frac{\gamma ^2(a+2br\gamma )}{(\gamma -\gamma ^{-1})(\gamma -\kappa )(\gamma -\kappa ^{-1})}+\frac{\kappa ^2(a+2br\kappa )}{(\kappa -\gamma )(\kappa -\gamma ^{-1})(\kappa -\kappa ^{-1})}\right\} \nonumber \\&= \frac{1}{\lambda } \left\{ \frac{\gamma (a+2br\gamma )}{\gamma -\gamma ^{-1}} - \frac{\kappa (a+2br\kappa )}{\kappa -\kappa ^{-1}}\right\} . \end{aligned}$$
(3.26)

Here, we used

$$\begin{aligned}&(\gamma -\kappa )(\gamma -\kappa ^{-1}) =\gamma (X_+-X_-) = \frac{\gamma \lambda }{br},\\&(\kappa -\gamma )(\kappa -\gamma ^{-1})=\kappa (X_--X_+) = -\frac{\kappa \lambda }{br}. \end{aligned}$$

Since \((\kappa -\kappa ^{-1})^{-1}=O(1)\), it suffices to focus on the first term of (3.26). We again use the expansion in (3.23) and have

$$\begin{aligned} Y = \gamma -\gamma ^{-1}&= 2\sqrt{\frac{1-2b}{1-6b}}\sqrt{1-r^2} + O(1-r^2),\quad r\rightarrow 1. \end{aligned}$$

Therefore,

$$\begin{aligned} {\mathcal {J}}(r)=-\frac{1}{2}\sqrt{\frac{1-2b}{1-6b}}\frac{1}{\sqrt{1-r^2}}+O(1),\quad r\rightarrow 1. \end{aligned}$$

Next we prove the case (ii) of (II). Computation is almost the same as in the case (i) of (II), but we only need to change the roles of \(\gamma , \gamma ^{-1}, \kappa , \kappa ^{-1}\). Indeed, \(\gamma \) and \(\kappa ^{-1}\) (resp. \(\gamma ^{-1},\kappa \)) in (3.25) are in \({\mathbb {D}}\cap {\mathbb {R}}\) (resp. in \({\mathbb {D}}^c\cap {\mathbb {R}}\)). By (3.20), (3.25) and

$$\begin{aligned} (\kappa ^{-1} - \gamma ) (\kappa ^{-1} - \gamma ^{-1}) = \kappa ^{-1}(X_--X_+)= - \frac{\kappa ^{-1} \lambda }{br}, \end{aligned}$$

we see that

$$\begin{aligned} {\mathcal {J}}(r)&=\frac{1}{br}\left\{ \frac{\gamma ^2(a+2br\gamma )}{(\gamma -\gamma ^{-1})(\gamma -\kappa )(\gamma -\kappa ^{-1})}+\frac{\kappa ^{-2}(a+2br\kappa ^{-1})}{(\kappa ^{-1}-\gamma )(\kappa ^{-1}-\gamma ^{-1})(\kappa ^{-1}-\kappa )}\right\} \\&=\frac{1}{\lambda } \left\{ \frac{\gamma (a+2br\gamma )}{\gamma -\gamma ^{-1}} - \frac{\kappa ^{-1}(a+2br\kappa ^{-1})}{\kappa ^{-1}-\kappa }\right\} \\&=-\frac{1}{2}\sqrt{\frac{1-2b}{1-6b}}\frac{1}{\sqrt{1-r^2}}+O(1),\quad r \rightarrow 1. \end{aligned}$$

This completes the proof of Case (II). \(\square \)

Remark 3.1

By continuity, Case (II) exhibits the same asymptotic order as Case (I), but the behavior of the roots \(\gamma ,\gamma ^{-1},\kappa ,\kappa ^{-1}\) in (II) is completely different from that in Case (I). Indeed, \(\gamma ,\gamma ^{-1}\rightarrow -1\) and \(\kappa ,\kappa ^{-1}\rightarrow (2b-1)/(4b)\pm \sqrt{(1-6b)(1+2b)}/(4|b|)\) as \(r\rightarrow 1\) in Case (II). That is, only one pair of roots tends to the boundary \(\partial {\mathbb {D}}\) as \(r\rightarrow 1\), except for \(b=-1/2\). This implies that the asymptotic order is affected by the degeneracy of the roots of \(\Theta (1,z)\) located on the boundary \(\partial {\mathbb {D}}\).

3.3 Case (III)

We give a proof of Case (III).

Proof of Case (III) in Theorem 1.1

Suppose \((a,b) = (2/3, 1/6)\). Since \(\alpha =\frac{1}{18}(1-r^2)\) and \(\beta =\frac{1}{9}\sqrt{2(1-r^2)}\), by (3.21), we have

$$\begin{aligned}&Y = \gamma -\gamma ^{-1} \\&\quad = \frac{1}{r}\left( \sqrt{(1-r^2)+\sqrt{(1-r^2)(9-r^2)}} + {\textbf{i}}\sqrt{-(1-r^2)+\sqrt{(1-r^2)(9-r^2)}} \right) . \end{aligned}$$

It easily follows from this expression that \(\Im Y = O\big ( (1-r^2)^{1/4} \big )\) and

$$\begin{aligned} \Im Y^{-1} = - 2^{-7/4} (1-r^2)^{-1/4} + O\big ( (1-r^2)^{1/4} \big ),\quad r \rightarrow 1. \end{aligned}$$

Hence, from (3.22) and (3.23), we can conclude that

$$\begin{aligned} {\mathcal {J}}(r)&= -2^{-5/4} (1-r^2)^{-3/4} + O\Big ((1-r^2)^{-1/4}\Big ), \quad r \rightarrow 1. \end{aligned}$$

This completes the proof of Case (III). \(\square \)

3.4 Case (IV)

Finally, we give a sketch of the proof of Case (IV). Since all zeros of \(\Theta (r,z)\) stay away from \(\partial {\mathbb {D}}\) as \(r \rightarrow 1\) when \((a,b)\) is in the interior of \({\mathcal {P}}\), no singularity contributing to the asymptotic behavior appears on the boundary \(\partial {\mathbb {D}}\), and hence it suffices to consider \(r = 1\). Here we only consider the interior of \({\mathcal {P}}_1\) and \(a>0\). We use the same notations as in the proof of Case (I). In this case, \(X_{\pm }=(-a\pm {\textbf{i}}\lambda (a,b))/(2b)\) with \(\lambda (a,b)=\sqrt{4b-8b^2-a^2}\), and we see that \((\gamma -{\overline{\gamma }})(\gamma -{\overline{\gamma }}^{-1}) = -\gamma b^{-1} \lambda (a,b) {\textbf{i}}\). Hence,

$$\begin{aligned} C(a,b)=-{\mathcal {J}}(1)=\frac{2}{\lambda (a,b)}\Im \left( \frac{\gamma (a+2b\gamma )}{\gamma -\gamma ^{-1}}\right) . \end{aligned}$$

A little more computation shows that

$$\begin{aligned} C(a,b) = \frac{\mu (a,b)-(2b-1)}{2\lambda (a,b) \mu (a,b)}\sqrt{4b^2+2b-a^2+2b\mu (a,b)}-1, \end{aligned}$$

where \(\mu (a,b)=\sqrt{(1+2b)^2-4a^2}\), and that \(C(a,b) > 0\) unless \((a,b)=(0,0)\). We omit the other cases since the results follow by repeating similar computations.
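The closed form for C(a,b) can be checked against the contour integral \(-{\mathcal {J}}(1)\) from (2.18); a numerical sketch at the illustrative interior point \((a,b)=(0.3,0.1)\) of \({\mathcal {P}}_1\):

```python
import cmath, math

a, b, N = 0.3, 0.1, 4000                 # an arbitrary interior point of P_1
lam = math.sqrt(4*b - 8*b**2 - a**2)
mu = math.sqrt((1 + 2*b)**2 - 4*a**2)
C_closed = ((mu - (2*b - 1)) / (2*lam*mu)
            * math.sqrt(4*b**2 + 2*b - a**2 + 2*b*mu) - 1)

def integrand(z):
    theta1 = 1 + a*(z + 1/z) + b*(z**2 + 1/z**2)   # Theta(1, z)
    return (a + 2*b*z) / theta1

# C(a,b) = -J(1) = -(1/(2 pi i)) \oint (a + 2bz)/Theta(1,z) dz
zs = [cmath.exp(2j*math.pi*k/N) for k in range(N)]
C_contour = -(sum(integrand(z)*z for z in zs) / N).real
err = abs(C_contour - C_closed)
```

For interior \((a,b)\) the zeros of \(\Theta (1,z)\) stay off the unit circle, so the trapezoid rule converges geometrically and the two values of C(a,b) agree to high precision.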

4 Degenerate Cases

In this section, we give a proof of Theorem 1.3. From (2.18), we have

$$\begin{aligned} {\mathcal {J}}(r) = \frac{r}{2\pi {\textbf{i}}} \oint _{\partial {\mathbb {D}}} \frac{G'(z)}{\Theta (r,z)} dz = \frac{r}{2\pi {\textbf{i}}} \oint _{\partial {\mathbb {D}}} \frac{p_n(r,z)}{q_n(r,z)} dz \end{aligned}$$

where \(p_n(r,z) = z^n {2n \atopwithdelims ()n} G'(w)|_{w=rz}\) and

$$\begin{aligned} q_n(r,z):= z^n {2n \atopwithdelims ()n} \Theta (r,z) = z^n \sum _{k=-n}^n {2n \atopwithdelims ()n+k} r^{|k|} z^k. \end{aligned}$$

We note from (1.12) that

$$\begin{aligned} q_n(1,z) = (z+1)^{2n}. \end{aligned}$$

To see the asymptotic behavior of \({\textbf{E}}N_f(r)\) as \(r \rightarrow 1\), we need the asymptotic behavior of the roots z(r) of \(q_n(r,z(r))=0\).

4.1 Behavior of the Root z(r) as \(r \rightarrow 1\)

We first note that \(q_n(1,-1)=0\) and \(\partial _z q_n(r,z)|_{(r,z)=(1,-1)}=0\). Hence, we cannot apply the implicit function theorem in the variable z to \(q_n(r,z)\). Alternatively, we follow a strategy based on Puiseux series expansions and the Newton polygon method (cf. [19]).

First we note that

$$\begin{aligned} \partial _r q_n(r,z) |_{(r,z)=(1,-1)}&= 2\sum _{k=1}^n k(-1)^{n+k}\left( {\begin{array}{c}2n\\ n+k\end{array}}\right) \\&=(-1)^{n+1}\frac{n+1}{2n-1}\left( {\begin{array}{c}2n\\ n+1\end{array}}\right) \ne 0. \end{aligned}$$
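This combinatorial identity can be verified directly; an illustrative Python check using exact rational arithmetic:

```python
from fractions import Fraction
from math import comb

def lhs(n):
    # 2 * sum_{k=1}^n k (-1)^{n+k} C(2n, n+k), i.e., d/dr q_n(r,z) at (r,z) = (1,-1)
    return 2 * sum(k * (-1)**(n + k) * comb(2*n, n + k) for k in range(1, n + 1))

def rhs(n):
    # Claimed closed form (-1)^{n+1} (n+1)/(2n-1) * C(2n, n+1)
    return (-1)**(n + 1) * Fraction(n + 1, 2*n - 1) * comb(2*n, n + 1)

for n in range(1, 12):
    assert lhs(n) == rhs(n) != 0
```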

By shifting \((r,z)\rightarrow (1-r,z+1)\) in \(q_n(r,z)\), we consider

$$\begin{aligned} Q_n(x,y):=\sum _{l=0}^{2n}\left( {\begin{array}{c}2n\\ l\end{array}}\right) (1-x)^{|l-n|}(y-1)^l. \end{aligned}$$
(4.27)

Note that \(Q_n(0,y)=y^{2n}\). Following [19], we denote by \({\mathbb {C}}\{x,y\}\) (resp., \({\mathbb {C}}\{x\}\)) the ring of convergent power series in the two variables x, y (resp., the single variable x). If \(f\in {\mathbb {C}}\{x,y\}\) satisfies \(f(0,y)=y^mA(y)\) with \(A(0)\ne 0\), then we say f is regular in y of order m [19, p.20]. In our setting, \(Q_n(x,y)\) is regular in y of order 2n. We use the following theorem from [19, p.20, Theorem 2.2.6] to guarantee the existence of 2n distinct solutions to the equation \(Q_n(x,y)=0\) around \((x,y)=(0,0)\).

Theorem 4.1

[19] (i) Any equation \(f(x,y)=0\), where \(f\in {\mathbb {C}}\{x,y\}\) with \(f(0,0)=0\) and \(f(0,y)\not \equiv 0 \), admits at least one solution of the form \(y=g(x^{1/m_1})\) with \(g\in {\mathbb {C}}\{x\}\).

(ii) If f is regular in y of order m, and we write \(f=UF\) with U a unit and F a monic polynomial of degree m in y, there are m such solutions \(g_j(x^{1/m_{j}})\), all distinct unless the discriminant of F vanishes identically, and \(F(y)\equiv \prod _{j=1}^m\left( y-g_j(x^{1/m_{j}})\right) \).

For our purpose, we need a more explicit form of the \(g_j\)'s, so we perform the Newton polygon method directly below.

Since \(Q_n(x,y)\) is a bivariate polynomial, the solutions y(x) to \(Q_n(x,y)=0\) in a neighborhood of the origin (0, 0) are described by this theorem. We now compute the asymptotic expansion of \(y=y(x)\) satisfying \(Q_n(x,y(x))=0\) at the origin by the Newton polygon method [19, p.15, Theorem 2.1.1]. Here, we give a brief description of the algorithm following [19]. First, given \(f(x,y)=0\), for each term \(c_{r,s}x^ry^s\) of f(x, y) with \(c_{r,s} \not = 0\) we plot the exponent point (r, s) on \({\mathbb {R}}^2\), and we take the convex hull of all plotted points. The part of its boundary consisting of straight line segments not lying on the coordinate axes is called the Newton polygon. Second, let \(m_1\) be the negative reciprocal of the slope of one of these segments. We then consider \(f(x,x^{m_1}(a_1+y_1))\) and solve for \(a_1\) by equating the terms of lowest degree in x, using \(f(x,y)=0\). Third, let \(f^{(1)}(x,y_1)=x^{-l} f(x,x^{m_1}(a_1+y_1))\), where l is the intercept on the r-axis of the line containing the chosen segment. Repeating this process, we obtain a solution \(y=a_1x^{m_1}+a_2x^{m_1+m_2}+\cdots \) of \(f(x,y)=0\) for \(f\in {\mathbb {C}}\{x,y\}\). For \(Q_n(x,y)\), the Newton polygon is the segment joining (1, 0) and (0, 2n), as shown in Fig. 2 for \(n=4\).

Fig. 2
figure 2

Newton polygon of \(Q_n(x,y)\) for \(n=4\). A point (r, s) is marked when the coefficient of \(x^r y^s\) in \(Q_n(x,y)\) is nonzero
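The shape of the Newton polygon claimed above can be confirmed computationally. This illustrative Python sketch expands \(Q_n(x,y)\), collects the exponent support, and checks that (1, 0) and (0, 2n) occur and that no support point lies below the line through them.

```python
from collections import defaultdict
from math import comb

def Qn_support(n):
    # Exponent support of Q_n(x,y) = sum_l C(2n,l) (1-x)^{|l-n|} (y-1)^l, cf. (4.27).
    coeff = defaultdict(int)
    for l in range(2*n + 1):
        m = abs(l - n)
        for i in range(m + 1):          # expand (1-x)^m
            for j in range(l + 1):      # expand (y-1)^l
                coeff[(i, j)] += (comb(2*n, l) * comb(m, i) * (-1)**i
                                  * comb(l, j) * (-1)**(l - j))
    return {p for p, c in coeff.items() if c != 0}

n = 4
S = Qn_support(n)
# The Newton polygon is the single segment joining (1, 0) and (0, 2n):
assert (1, 0) in S and (0, 2*n) in S
assert all(2*n*r + s >= 2*n for (r, s) in S)  # no support point below that line
```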

Thus, it is guaranteed that \(Q_n(x,y)=0\) has the solution of the form

$$\begin{aligned} y=x^{1/(2n)}(a_1+y_1), \end{aligned}$$

where \(y_1=x^{m_2}(a_2+y_2)\) with positive \(m_2 \in {\mathbb {Q}}\). Setting \(t = x^{1/(2n)}\) (equivalently, \(x=t^{2n}\)) in (4.27) for simplicity, we have

$$\begin{aligned} Q_n(t^{2n}, t(a_1+y_1)) = \sum _{l=0}^{2n}\left( {\begin{array}{c}2n\\ l\end{array}}\right) (1-t^{2n})^{|l-n|}(t(a_1+y_1)-1)^l=0 \end{aligned}$$

and the left-hand side can be expanded as follows:

$$\begin{aligned}&Q_n(t^{2n}, t(a_1+y_1))\\&=\left( \sum _{l=0}^{2n}\left( {\begin{array}{c}2n\\ l\end{array}}\right) |l-n|(-1)^{l+1}+a_1^{2n}+2na_1^{2n-1}y_1+\left( {\begin{array}{c}2n\\ 2\end{array}}\right) a_1^{2n-2}y_1^2 \right) t^{2n} \\&\quad +\sum _{l=0}^{2n}\left( {\begin{array}{c}2n\\ l\end{array}}\right) |l-n|l(-1)^l(a_1+y_1) t^{2n+1}\\&\quad +\sum _{l=0}^{2n}\left( {\begin{array}{c}2n\\ l\end{array}}\right) |l-n|\left( {\begin{array}{c}l\\ 2\end{array}}\right) (-1)^{l-1}(a_1+y_1)^2 t^{2n+2}+O(t^{2n+3}). \end{aligned}$$

Since \(y_1=O(x^{m_2}) = O(t^{2n m_2})\) with \(m_2 \in {\mathbb {Q}}\) positive, the leading term is of order \(t^{2n}\) and its coefficient is given by

$$\begin{aligned} a_1^{2n}+\sum _{l=0}^{2n} \left( {\begin{array}{c}2n\\ l\end{array}}\right) |l-n| (-1)^{l+1}&=a_1^{2n}+2(-1)^{n}\left( {\begin{array}{c}2(n-1)\\ n-1\end{array}}\right) . \end{aligned}$$

Thus, \(a_1\) is characterized by the solution of the equation

$$\begin{aligned} a_1^{2n}+2(-1)^n\left( {\begin{array}{c}2(n-1)\\ n-1\end{array}}\right) =0. \end{aligned}$$
(4.28)
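The closed form of the alternating sum behind (4.28) is easy to confirm; an illustrative Python check:

```python
from math import comb

def S(n):
    # sum_{l=0}^{2n} C(2n,l) |l-n| (-1)^{l+1}
    return sum(comb(2*n, l) * abs(l - n) * (-1)**(l + 1) for l in range(2*n + 1))

# Claimed closed form: S(n) = 2 (-1)^n C(2(n-1), n-1).
for n in range(1, 10):
    assert S(n) == 2 * (-1)**n * comb(2*(n - 1), n - 1)
```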

For this \(a_1\), the term of the lowest order \(t^{2n}\) in \(Q_n(t^{2n}, t(a_1+y_1))\) vanishes and we have

$$\begin{aligned} Q_n^{(1)}(t, y_1)&:=t^{-2n} Q_n(t^{2n},t(a_1+y_1)) \\&=2na_1^{2n-1}y_1+n(2n-1)a_1^{2n-2}y_1^2 \nonumber \\&\quad +c(a_1+y_1)t +\sum _{l=0}^{2n}\left( {\begin{array}{c}2n\\ l\end{array}}\right) |l-n|\left( {\begin{array}{c}l\\ 2\end{array}}\right) (-1)^{l-1}(a_1+y_1)^2 t^2 + O(t^3), \nonumber \end{aligned}$$
(4.29)

where

$$\begin{aligned} c = \sum _{l=0}^{2n} {2n \atopwithdelims ()l} |l-n| l (-1)^l = (-1)^{n+1} 2n {2(n-1) \atopwithdelims ()n-1} \not =0, \end{aligned}$$

which implies \(y_1=O(t)\). Now, we repeat the same procedure for \(Q_n^{(1)}(t,y_1)\). We substitute \(y_1 = t(a_2 + y_2)\) in \(Q_n^{(1)}(t,y_1)\) and compare the term of order t to obtain

$$\begin{aligned} ca_1 + 2na_1^{2n-1} a_2= 0, \end{aligned}$$

and hence

$$\begin{aligned} a_2 = - \frac{c a_1^{-2(n-1)}}{2n} = -\frac{1}{2} a_1^2. \end{aligned}$$
(4.30)

Putting \(y_1=t(a_2+y_2)\) in (4.29) and using (4.28) and (4.30) yields

$$\begin{aligned} t^{-1}Q_n^{(1)}(t, t(a_2+y_2))&= 2na_1^{2n-1}y_2 + \left( c'+cy_2+\cdots \right) t +O(t^2), \end{aligned}$$

where

$$\begin{aligned} c' = n(2n-1)a_1^{2n-2}a_2^2+c a_2 + \sum _{l=0}^{2n}\left( {\begin{array}{c}2n\\ l\end{array}}\right) |l-n|{l \atopwithdelims ()2}(-1)^{l-1}a_1^2 \not =0, \end{aligned}$$

which implies \(y_2=O(t)\). In summary, by taking (4.28), (4.30) and \(y = t \{a_1 + t (a_2 + O(t))\}\) into account, the solutions to the equation \(Q_n(x,y)=0\) around \(x=0\) are of the form

$$\begin{aligned} y^{(n)}_j(x) = b_j^{(n)} x^{1/(2n)} -\frac{1}{2} (b_j^{(n)})^2 x^{1/n} + O(x^{3/(2n)}), \quad \hbox { as}\ x \rightarrow 0, \end{aligned}$$
(4.31)

for \(j=0,1,\dots ,2n-1\), where \(\{b^{(n)}_j\}_{j=0}^{2n-1}\) are the solutions of (4.28).
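The constants in this expansion can be cross-checked numerically. The following illustrative Python sketch verifies the closed form of c against its defining sum and the relation \(a_2=-a_1^2/2\) from (4.30) for each \(2n\)-th root \(a_1\) of \(-2(-1)^n\left( {\begin{array}{c}2(n-1)\\ n-1\end{array}}\right) \), taking \(n=3\) as a sample.

```python
import cmath
from math import comb

n = 3
# Closed form of c from the text, checked against the defining sum.
c = (-1)**(n + 1) * 2*n * comb(2*(n - 1), n - 1)
assert c == sum(comb(2*n, l) * abs(l - n) * l * (-1)**l for l in range(2*n + 1))

# Each 2n-th root a1 of -2(-1)^n C(2(n-1), n-1), as listed in (4.33),
# satisfies a2 = -c * a1^{-2(n-1)} / (2n) = -a1^2 / 2.
R = (2 * comb(2*(n - 1), n - 1)) ** (1 / (2*n))
for j in range(2*n):
    a1 = R * cmath.exp((2*j - n + 1) * cmath.pi * 1j / (2*n))
    a2 = -c * a1**(-2*(n - 1)) / (2*n)
    assert abs(a2 + a1**2 / 2) < 1e-9
```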

Proposition 4.2

Let \(q_n(r,z)= z^n\sum _{k=-n}^n \left( {\begin{array}{c}2n\\ n+k\end{array}}\right) r^{|k|}z^k\). Then, the solutions \(z=z^{(n)}_j(r)\) to the equation \(q_n(r,z)=0\) are of the form

$$\begin{aligned} z^{(n)}_j(r) = -1 + b^{(n)}_j (1-r)^{\frac{1}{2n}} - \frac{1}{2} (b^{(n)}_j)^2 (1-r)^{\frac{1}{n}} + O((1-r)^{\frac{3}{2n}}), \quad r \rightarrow 1,\nonumber \\ \end{aligned}$$
(4.32)

where

$$\begin{aligned} b^{(n)}_j = \left\{ 2\left( {\begin{array}{c}2(n-1)\\ n-1\end{array}}\right) \right\} ^{1/(2n)} \exp \left( \frac{2 j - n+1}{2n} \pi {\textbf{i}}\right) \quad (j=0,1,\dots ,2n-1).\nonumber \\ \end{aligned}$$
(4.33)

Proof

Since \(z_j^{(n)}(r) = -1 + y_j^{(n)}(1-r)\), putting \(x=1-r\) and \(y=z+1\) in (4.31) yields

$$\begin{aligned} z_j^{(n)}(r) + 1&= b_j^{(n)} (1-r)^{\frac{1}{2n}} -\frac{1}{2} (b_j^{(n)})^2 (1-r)^{\frac{1}{n}} + O\Big ((1-r)^{\frac{3}{2n}}\Big ), \end{aligned}$$

as \(r \rightarrow 1\), which proves the assertion. \(\square \)
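Proposition 4.2 can also be checked numerically. This illustrative Python sketch (using NumPy's root finder; the tolerance is heuristic) compares the roots of \(q_n(r,z)\) for r close to 1 with the Puiseux prediction (4.32), taking \(n=2\).

```python
import cmath
from math import comb
import numpy as np

n, r = 2, 1 - 1e-6
x = 1 - r

# q_n(r,z)/z^n as a degree-2n polynomial in z, coefficients listed highest power first.
coeffs = [comb(2*n, n + k) * r**abs(k) for k in range(n, -n - 1, -1)]
roots = np.roots(coeffs)

# Puiseux prediction (4.32): z_j ~ -1 + b_j x^{1/(2n)} - b_j^2 x^{1/n} / 2, b_j as in (4.33).
R = (2 * comb(2*(n - 1), n - 1)) ** (1 / (2*n))
b = [R * cmath.exp((2*j - n + 1) * cmath.pi * 1j / (2*n)) for j in range(2*n)]
pred = [-1 + bj * x**(1/(2*n)) - bj**2 * x**(1/n) / 2 for bj in b]

# Each predicted root lies close to an actual root; the error O(x^{3/(2n)}) is ~3e-5 here,
# much smaller than the ~0.06 spacing between roots, so the matching is unambiguous.
for p in pred:
    assert min(abs(p - z) for z in roots) < 1e-3
```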

4.2 Proof of Theorem 1.3

We first observe the following asymptotics.

Lemma 4.3

For \(k=0,1,\dots ,2n-1\), as \(r \rightarrow 1\),

$$\begin{aligned} \prod _{\begin{array}{c} j=0 \\ {j\not =k} \end{array}}^{2n-1} (z^{(n)}_k(r) - z^{(n)}_j(r))&= (2n)(-1)^{n-1} (e_k^{(n)})^{-1} \left\{ {2(n-1) \atopwithdelims ()n-1}\right\} ^{\frac{2n-1}{2n}} (1-r^2)^{\frac{2n-1}{2n}} \\&\quad \times \left\{ 1 - C_n e_k^{(n)} (1-r^2)^{\frac{1}{2n}} + O\Big ( (1-r^2)^{\frac{1}{n}} \Big )\right\} , \end{aligned}$$

where \(C_n\) is a constant depending only on n and

$$\begin{aligned} e^{(n)}_k = \exp \left( \frac{2 k - n+1}{2n} \pi {\textbf{i}}\right) \quad (k=0,1,\dots ,2n-1). \end{aligned}$$
(4.34)

Proof

From Proposition 4.2, we have

$$\begin{aligned} \prod _{\begin{array}{c} j=0 \\ {j\not =k} \end{array}}^{2n-1} (z^{(n)}_k(r) - z^{(n)}_j(r))&= \prod _{\begin{array}{c} j=0 \\ {j\not =k} \end{array}}^{2n-1} (b^{(n)}_k - b^{(n)}_j) \cdot (1-r)^{\frac{2n-1}{2n}}-\frac{1}{2} \sum _{\begin{array}{c} l=0\\ {l\not =k} \end{array}}^{2n-1} \prod _{\begin{array}{c} j=0 \\ {j\not =k, l} \end{array}}^{2n-1} (b^{(n)}_k - b^{(n)}_j) \nonumber \\&\quad \cdot \Big \{(b^{(n)}_k)^2 - (b^{(n)}_l)^2\Big \} \cdot (1-r)^{\frac{2n}{2n}} + O\Big ( (1-r)^{\frac{2n+1}{2n}} \Big ). \end{aligned}$$

Since \(\displaystyle \prod _{j=0}^{2n-1} (z - e^{\frac{j-k}{n} \pi {\textbf{i}}}) = z^{2n}-1\), by differentiating both sides and putting \(z=1\), we obtain \(\displaystyle \prod _{\begin{array}{c} j=0 \\ {j\not =k} \end{array}}^{2n-1} (1 - e^{\frac{j-k}{n} \pi {\textbf{i}}}) = 2n\) for every \(k =0,1,\dots , 2n-1\). Hence, we have

$$\begin{aligned} \prod _{\begin{array}{c} j=0 \\ {j\not =k} \end{array}}^{2n-1} (e_k^{(n)} - e_j^{(n)}) = (e_k^{(n)})^{2n-1} \prod _{\begin{array}{c} j=0 \\ {j\not =k} \end{array}}^{2n-1} (1 - e^{\frac{j-k}{n} \pi {\textbf{i}}}) = 2n (-1)^{n-1} (e_k^{(n)})^{-1}. \end{aligned}$$

Thus, by (4.33),

$$\begin{aligned} \prod _{\begin{array}{c} j=0 \\ {j\not =k} \end{array}}^{2n-1} (b^{(n)}_k - b^{(n)}_j) = \left\{ 2 {2(n-1) \atopwithdelims ()n-1} \right\} ^{\frac{2n-1}{2n}} 2n (-1)^{n-1} (e_k^{(n)})^{-1}. \end{aligned}$$

Similarly,

$$\begin{aligned}&\sum _{\begin{array}{c} l=0\\ {l\not =k} \end{array}}^{2n-1} \prod _{\begin{array}{c} j=0 \\ {j\not =k, l} \end{array}}^{2n-1} (b^{(n)}_k - b^{(n)}_j) \cdot \Big \{(b^{(n)}_k)^2 - (b^{(n)}_l)^2 \Big \}\\&\qquad = 2 {2(n-1) \atopwithdelims ()n-1} 2n(-1)^{n-1} (e^{(n)}_{k})^{-1} \sum _{\begin{array}{c} l=0\\ {l\not =k} \end{array}}^{2n-1} (e^{(n)}_k + e^{(n)}_l) \\&\qquad =(-1)^{n-1} 8n(n-1) {2(n-1) \atopwithdelims ()n-1}. \end{aligned}$$

Since \(1-r = \frac{1-r^2}{2} + O((1-r^2)^2)\), we obtain the assertion. \(\square \)
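The root-of-unity identity used in this proof is easy to confirm; an illustrative Python check of \(\prod _{j\ne k}(e_k^{(n)}-e_j^{(n)})=2n(-1)^{n-1}(e_k^{(n)})^{-1}\), taking \(n=4\) as a sample:

```python
import cmath

n = 4
e = [cmath.exp((2*k - n + 1) * cmath.pi * 1j / (2*n)) for k in range(2*n)]
for k in range(2*n):
    prod = 1
    for j in range(2*n):
        if j != k:
            prod *= e[k] - e[j]
    # Claimed closed form: 2n (-1)^{n-1} / e_k.
    assert abs(prod - 2*n * (-1)**(n - 1) / e[k]) < 1e-9
```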

Now, we give a proof of Theorem 1.3. We appeal to (2.15) to obtain the asymptotic behavior of \({\mathcal {J}}(r)\). First we remark that the constant \(b^{(n)}_j\) in (4.33) lies in the right-half plane \(\{z\in {\mathbb {C}}: \Re z > 0\}\) for \(j=0,1,\dots ,n-1\) and in the left-half plane \(\{z\in {\mathbb {C}}: \Re z < 0\}\) for \(j=n, n+1, \dots , 2n-1\). Thus, if r is sufficiently close to 1, the roots \(z^{(n)}_j(r)\) for \(j=0,1,\dots ,n-1\) lie inside \({\mathbb {D}}\) and \(z^{(n)}_j(r)\) for \(j=n, n+1,\dots ,2n-1\) lie outside \({\mathbb {D}}\). Therefore, we have

$$\begin{aligned} {\mathcal {J}}(r)&= \frac{r}{2\pi {\textbf{i}}} \oint _{\partial {\mathbb {D}}} \frac{p_n(r,z)}{q_n(r,z)}dz \nonumber \\&= r \sum _{k=0}^{n-1} \textrm{Res}\left( \frac{p_n(r,z)}{\prod _{j=0}^{2n-1} (z - z^{(n)}_j(r))} ; z= z^{(n)}_k(r)\right) \nonumber \\&= r \sum _{k=0}^{n-1} \frac{p_n(r,z^{(n)}_k(r))}{\prod _{j=0, j \not =k}^{2n-1} (z^{(n)}_k(r) - z^{(n)}_j(r))}. \end{aligned}$$
(4.35)

Since \(p_n(1,-1) = (-1)^n {2(n-1) \atopwithdelims ()n-1}\), from Lemma 4.3 and

$$\begin{aligned} p_n(r, z_k^{(n)}(r)) = p_n(1,-1) \left\{ 1 + C_n' e_k^{(n)} (1-r^2)^{1/(2n)} + O\big ( (1-r^2)^{1/n} \big ) \right\} , \end{aligned}$$

we have

$$\begin{aligned} \frac{p_n(r,z^{(n)}_k(r))}{\prod _{j=0, j \not =k}^{2n-1} (z^{(n)}_k(r) - z^{(n)}_j(r))}&= \frac{-1}{2n}{2(n-1) \atopwithdelims ()n-1}^{\frac{1}{2n}} e_k^{(n)} (1-r^2)^{-\frac{2n-1}{2n}} \\&\quad \times \left\{ 1 + (C_n+C_n') e_k^{(n)} (1-r^2)^{\frac{1}{2n}} + O\big ( (1-r^2)^{\frac{2}{2n}} \big )\right\} , \end{aligned}$$

where \(C_n'\) is a constant depending only on n. It is easy to see that

$$\begin{aligned} \sum _{k=0}^{n-1} e_k^{(n)} = (\sin \frac{\pi }{2n})^{-1}, \quad \sum _{k=0}^{n-1} (e_k^{(n)})^2 = 0. \end{aligned}$$
(4.36)
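Both identities in (4.36) (the second requires \(n\ge 2\)) can be confirmed numerically; an illustrative Python check:

```python
import cmath

for n in range(2, 9):
    e = [cmath.exp((2*k - n + 1) * cmath.pi * 1j / (2*n)) for k in range(n)]
    assert abs(sum(e) - 1 / cmath.sin(cmath.pi / (2*n))) < 1e-9  # sum of e_k
    assert abs(sum(z*z for z in e)) < 1e-9                       # sum of e_k^2 vanishes
```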

Therefore, from (4.35), we obtain

$$\begin{aligned} {\mathcal {J}}(r) = \frac{-1}{2n \sin (\frac{\pi }{2n})}{2(n-1) \atopwithdelims ()n-1}^{\frac{1}{2n}} (1-r^2)^{-\frac{2n-1}{2n}} \Big ( 1+ O\big ( (1-r^2)^{\frac{2}{2n}} \big ) \Big ). \end{aligned}$$

This completes the proof.

Remark 4.4

A naive computation gives only the error term \(O\big ( (1-r)^{-(n-1)/n} \big )\). Here, the cancellation in the second identity of (4.36) yields the improved error term \(O\big ( (1-r)^{-(2n-3)/(2n)} \big )\), which matches the direct computation in Case (III) for \(n=2\).

Remark 4.5

This method applies to all cases of finitely dependent Gaussian processes. Indeed, a zero of \(\Theta (1,e^{{\textbf{i}}\theta })\) of order 2k contributes to \({\mathcal {J}}(r)\) a constant multiple of \((1-r^2)^{-\frac{2k-1}{2k}}\).