1 Introduction

This paper concerns discrete and continuous Fourier analysis and some notable analytical techniques. Generally, these ideas can be used in the Fourier analysis of iterated compositions of functions. Such iterative mappings are widely used in practice. A nice introduction to the mathematical theory is given in [17]. We choose one known probabilistic problem as an interesting application of our methods. Some of the existing results will be significantly improved. In this section, we present them briefly. In particular, we mostly skip the probabilistic context, focusing on the analytic part. All the formulas can be understood without additional explanation of the branching-process terminology that we use. The main details will be covered in the following sections.

We consider a simple Galton-Watson branching process in the supercritical case with minimum family size 1, the so-called Schröder case. The probability of the minimum family size satisfies \(0<p_1<1\). Thus, the probability-generating function has the form

$$\begin{aligned} P(z)=p_1z+p_2z^2+p_3z^3+.... \end{aligned}$$

The case of non-zero extinction probability (\(p_0\ne 0\)) can usually be reduced to the supercritical case with the help of the Harris-Sevastyanov transformation. The corresponding information about branching processes is given in, e.g., [3, 12]. We assume that the mean of the offspring distribution is finite, i.e.

$$\begin{aligned} E=p_1+2p_2+3p_3+...<+\infty . \end{aligned}$$

Then one may define the martingale limit, the density of which can be expressed as a Fourier transform of a special function

$$\begin{aligned} p(x)=\frac{1}{2\pi }\int _{-\infty }^{+\infty }\Pi (\textbf{i}y)e^{\textbf{i}yx}dy, \ \ \ \textrm{where}\ \ \ \Pi (z)=\lim _{t\rightarrow +\infty }\underbrace{P\circ ...\circ P}_{t}\left( 1-\frac{z}{E^{t}}\right) , \end{aligned}$$

see [7, 15]. Since the Fourier integral is rather complicated, any expansion and asymptotic analysis of p(x) is welcome. Often, special attention is paid to the analysis of the tails as \(x\rightarrow +0\) or \(x\rightarrow +\infty \). It is proven in [2] that p(x) has the asymptotics

$$\begin{aligned} p(x)=x^{\alpha }V(x)+o(x^{\alpha }),\ \ \ x\rightarrow +0, \end{aligned}$$
(1)

with the explicit exponent \(\alpha =-1-\log _Ep_1\) and a continuous, positive, multiplicatively periodic function V with period E. Further references to this asymptotics are always based on the principal work [2] and do not provide any formula for V(x); see, e.g., the corresponding remarks in [10, 11, 18] devoted to the Schröder case. Even such a result is already valuable, because p(x) is not simple.

Recently, explicit expressions for V(x) in terms of the Fourier coefficients of the 1-periodic Karlin-McGregor function and certain values of the \(\Gamma \)-function were given in [15]. The derivation is based on the complete asymptotic series for the discrete relative limit densities of the number of descendants provided in [16]. However, even for the first asymptotic term, the derivation is somewhat informal because the continuous martingale limit and the discrete distribution of the relative limit densities differ significantly. In particular, only the first term in the two asymptotics has a common nature; all other terms are completely different. Now, I have found beautiful and independent steps allowing me to obtain the complete expansion (not only the first term)

$$\begin{aligned} p(x)=x^{\alpha }V_1(x)+x^{\alpha +\beta }V_2(x)+x^{\alpha +2\beta }V_3(x)+...,\ \ \ x>0, \end{aligned}$$
(2)

with the value \(\alpha \) defined above, \(\beta =-\log _Ep_1>0\), and explicit multiplicatively periodic functions \(V_1\), \(V_2\), \(V_3\),..., see Eqs. (14) and (15). Finally, we obtain a very efficient representation of \(V_n\) in terms of the Fourier coefficients of periodic Karlin-McGregor functions and certain values of the \(\Gamma \)-function

$$\begin{aligned}&V_n(x)=K_n\left( \frac{-\ln x}{\ln E}\right) ,\ \ \ K_n(z)=\kappa _n\sum _{m=-\infty }^{+\infty }\frac{\theta _{m}^{*n}e^{2\pi \textbf{i}mz}}{\Gamma (-\frac{2\pi \textbf{i}m+n\ln p_1}{\ln E})},\\ &\theta _{m}^{*n}=\int _0^1K(x)^ne^{-2\pi \textbf{i}mx}dx, \end{aligned}$$

where

$$\begin{aligned}&K(z)=\Phi (\Pi (E^z))p_1^{-z},\ \ \Phi (z)=\lim _{t\rightarrow \infty }p_1^{-t}\underbrace{P\circ ...\circ P}_{t}(z),\\ &\Phi ^{-1}(z)=\kappa _1z+\kappa _2z^2+\kappa _3z^3+..., \end{aligned}$$

see Eq. (24). All \(\kappa _j\) can be computed explicitly, see Eq. (10). The form of \(V_1(x)\) presented in [15] significantly helped to determine the forms of the other \(V_n(x)\). However, we provide a proof of Eq. (2) free of any assumptions about \(V_1\). The main result is formulated in Theorem 3.1.

Expansion Eq. (2) consists of periodic oscillations amplified by power-law multipliers. As mentioned in, e.g., [4,5,6], this type of behavior can be important in applications in physics and biology. Let us note the continuing interest in the tail behavior of the Galton-Watson process. Recently, for the right (not left) tail, universal estimates were obtained in [9]. Perhaps, in the future, such results can be improved to full asymptotic series, as is done for the left tail in Eq. (2). My hope is also supported by the fact that the series Eq. (2) converges not only for small arguments but for all \(x>0\), which includes the right tail. Of course, Eq. (2) is not the right-tail asymptotics we are looking for, but, I believe, Eq. (2) can be helpful in this search. In a different setup, some estimates related to branching trees are also discussed in [1].

The rest of the paper is organized as follows. Section 2 contains the definitions of the main objects and some preliminary results. The key elements here are Eqs. (14)–(16), which will be elegantly developed in Eq. (19) along with Eqs. (23) and (24) of Sect. 3. Other elements are more technical than masterly and can be carried out in more than one way: the reasoning at the end of Sect. 2 and its alternative versions formulated after Theorem 3.1 provide the conditions required for the theorem. Section 4 contains a numerical comparison of the LHS and RHS of the main result Eq. (2) applied to some concrete examples.

2 Preliminary Results

Let us recall some known facts about branching processes, including some recent results obtained in [16]. Some new results are also included in this section. The Galton–Watson process is defined by

$$\begin{aligned} X_{t+1}=\sum _{j=1}^{X_t}\xi _{j,t},\ \ \ X_0=1,\ \ \ t\in {\mathbb N}\cup \{0\}, \end{aligned}$$
(3)

where all \(\xi _{j,t}\) are independent, identically distributed, natural-number-valued random variables with the probability-generating function

$$\begin{aligned} P(z):=\mathbb {E}z^{\xi }=p_0+p_1z+p_2z^2+p_3z^3+.... \end{aligned}$$
(4)

For simplicity, we consider the case when P is entire. In this case, the first moment

$$\begin{aligned} E=P'(1)=p_1+2p_2+3p_3+...<+\infty \end{aligned}$$
(5)

is automatically finite. A polynomial P is common in practice, but the results discussed below can be extended to a wide class of non-entire P. As mentioned in the Introduction, we assume \(p_0=0\), \(p_1\ne 0\). Another natural assumption is \(p_1<1\); the remaining case \(p_1=1\) is trivial. One of the important limit distributions is the so-called martingale limit \(W=\lim _{t\rightarrow +\infty }E^{-t}X_t\), whose density p(x) will be our main object of study.
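
Before turning to the analytic machinery, it may help to see W numerically. The following minimal Python sketch (an illustration added here, not part of the numerical machinery of [15, 16]; the function name `simulate_W`, the truncation `t_max`, and the sample size are our assumptions) simulates Eq. (3) and approximates W by \(X_{t}/E^{t}\).

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_W(p, t_max=25, n_runs=10_000):
    """Simulate X_t of Eq. (3) and return samples of X_{t_max}/E^{t_max},
    a proxy for the martingale limit W; `p` lists p_1, p_2, ... ."""
    probs = np.asarray(p)
    sizes = np.arange(1, len(p) + 1)
    E = float(np.sum(sizes * probs))          # mean family size, Eq. (5)
    samples = np.empty(n_runs)
    for r in range(n_runs):
        x = 1
        for _ in range(t_max):
            # split the x individuals into families of sizes 1, 2, ...
            counts = rng.multinomial(x, probs)
            x = int(np.sum(sizes * counts))
        samples[r] = x / E**t_max
    return samples

# Example: P(z) = 0.1 z + 0.5 z^2 + 0.4 z^3; E[W] = 1, so the mean is near 1
w = simulate_W([0.1, 0.5, 0.4])
print(w.mean())
```

A histogram of the returned samples can later be compared with the density computed from Eq. (11).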

Under the assumptions discussed above, we can define

$$\begin{aligned} \Phi (z)=\lim _{t\rightarrow \infty }p_1^{-t}\underbrace{P\circ ...\circ P}_{t}(z), \end{aligned}$$
(6)

which is analytic at least for \(|z|<1\). The function \(\Phi \) satisfies the Schröder-type functional equation

$$\begin{aligned} \Phi (P(z))=p_1\Phi (z),\ \ \ \Phi (0)=0,\ \ \ \Phi '(0)=1, \end{aligned}$$
(7)

as is seen from Eq. (6). The function \(\Phi \) has an inverse

$$\begin{aligned} \Phi ^{-1}(z)=\kappa _1z+\kappa _2z^2+\kappa _3z^3+\kappa _4z^4+..., \end{aligned}$$
(8)

analytic in some neighborhood of \(z=0\). The coefficients \(\kappa _j\) can be determined by differentiating the corresponding Poincaré-type functional equation, inverse to Eq. (7),

$$\begin{aligned} P(\Phi ^{-1}(z))=\Phi ^{-1}(p_1z),\ \ \ \Phi ^{-1}(0)=0,\ \ \ (\Phi ^{-1})'(0)=1 \end{aligned}$$
(9)

at \(z=0\). In particular,

$$\begin{aligned} \kappa _1=1,\ \ \ \kappa _2=\frac{p_2}{p_1^2-p_1},\ \ \ \kappa _3=\frac{2p_2\kappa _2+p_3}{p_1^3-p_1},\, \, \,.... \end{aligned}$$
(10)
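
The recursion behind Eq. (10) is easy to automate: matching the coefficient of \(z^n\) on both sides of Eq. (9) expresses \(\kappa _n\) through \(\kappa _1,...,\kappa _{n-1}\). The following Python sketch (an illustration only; the function name `kappa_coefficients` is our assumption) implements this matching with truncated power series.

```python
import numpy as np

def kappa_coefficients(p, N):
    """Coefficients kappa_1..kappa_N of Phi^{-1}(z) = sum_j kappa_j z^j,
    obtained by matching powers of z in P(Phi^{-1}(z)) = Phi^{-1}(p_1 z),
    cf. Eqs. (9)-(10); `p` lists p_1, p_2, ... of the offspring law."""
    p1 = p[0]
    kappa = np.zeros(N + 1)                 # kappa[j] stores kappa_j
    kappa[1] = 1.0
    for n in range(2, N + 1):
        phi_inv = np.zeros(n + 1)
        phi_inv[1:n] = kappa[1:n]           # known part of Phi^{-1}
        power = phi_inv.copy()              # (Phi^{-1})^k, truncated at z^n
        rhs = 0.0
        for k in range(2, n + 1):
            power = np.convolve(power, phi_inv)[:n + 1]
            if k - 1 < len(p):
                rhs += p[k - 1] * power[n]  # coefficient of z^n
        kappa[n] = rhs / (p1**n - p1)
    return kappa[1:]

# Example: P(z) = 0.1 z + 0.5 z^2 + 0.4 z^3
print(kappa_coefficients([0.1, 0.5, 0.4], 4))
```

For these parameters, the first coefficients reproduce \(\kappa _2=p_2/(p_1^2-p_1)\) and \(\kappa _3=(2p_2\kappa _2+p_3)/(p_1^3-p_1)\) from Eq. (10).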

It is shown in, e.g., [7] or [15], that the density p(x), mentioned in Eq. (1) and between Eqs. (5) and (6), can be computed by

$$\begin{aligned} p(x)=\frac{1}{2\pi }\int _{-\infty }^{+\infty }\Pi (\textbf{i}y)e^{\textbf{i}yx}dy=\frac{1}{2\pi \textbf{i}}\int _{\gamma }\Pi (z)e^{zx}dz, \end{aligned}$$
(11)

where \(\gamma \) is a modification of the original contour \(\textbf{i}{\mathbb R}\) discussed below and

$$\begin{aligned} \Pi (z):=\lim _{t\rightarrow +\infty }\underbrace{P\circ ...\circ P}_{t}(1-\frac{z}{E^{t}}). \end{aligned}$$
(12)

This function is entire, satisfying another Poincaré-type functional equation

$$\begin{aligned} P(\Pi (z))=\Pi (Ez),\ \ \ \Pi (0)=1,\ \ \ \Pi '(0)=-1. \end{aligned}$$
(13)

Note that \(E>1\), since \(p_0=0\), \(0<p_1<1\), and \(\sum p_j=1\), see Eq. (5). The existence of the limit in Eq. (12) follows from the standard fact that the linearization of P at 1 multiplies the increment of the argument by E. In other words, one can use \(P(1+w)=1+Ew+O(w^2)\) in Eq. (12) and see the convergence. The same reasoning applies to Eq. (6). The corresponding details related to the theory of Schröder- and Poincaré-type functional equations are available in the introductory book [17].
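
Both limits, Eq. (6) and Eq. (12), can be approximated by truncating the compositions at a finite t, exactly as in their definitions. The naive Python sketch below (an illustration; the truncation parameters are our assumptions, and the fast, exponentially convergent algorithms of [16] are far more accurate, especially for large arguments) also checks the functional equations (7) and (13) numerically.

```python
def P(z, p):
    """Probability-generating function, Eq. (4) with p_0 = 0."""
    return sum(pk * z**(k + 1) for k, pk in enumerate(p))

def mean_E(p):
    """Mean family size E = P'(1), Eq. (5)."""
    return sum((k + 1) * pk for k, pk in enumerate(p))

def Phi(z, p, t=30):
    """Phi(z) of Eq. (6): p_1^{-t} P∘...∘P(z), truncated at t compositions."""
    w = z
    for _ in range(t):
        w = P(w, p)
    return w / p[0]**t

def Pi(z, p, t=25):
    """Pi(z) of Eq. (12): t-fold composition of P applied to 1 - z/E^t."""
    w = 1.0 - z / mean_E(p)**t
    for _ in range(t):
        w = P(w, p)
    return w

# Consistency checks for P(z) = 0.1 z + 0.5 z^2 + 0.4 z^3:
p = [0.1, 0.5, 0.4]
print(Phi(P(0.2, p), p), p[0] * Phi(0.2, p))    # Eq. (7): should nearly agree
print(P(Pi(0.7, p), p), Pi(mean_E(p) * 0.7, p)) # Eq. (13): should nearly agree
```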

Everything is ready to derive a new expansion of p(x) that gives the complete left-tail asymptotic series; the justification for interchanging summation and integration will be discussed below

$$\begin{aligned} p(x)&=\frac{1}{2\pi \textbf{i}}\int _{\gamma }\Phi ^{-1}(\Phi (\Pi (z)))e^{zx}dz =\sum _{n=1}^{+\infty }\frac{\kappa _n}{2\pi \textbf{i}}\int _{\gamma }\Phi (\Pi (z))^ne^{zx}dz \nonumber \\ &=\sum _{n=1}^{+\infty }x^{-1-n\log _Ep_1}V_n(x), \end{aligned}$$
(14)

see Eqs. (11) and (8), where

$$\begin{aligned} V_n(x)=\frac{\kappa _nx^{1+n\log _Ep_1}}{2\pi \textbf{i}}\int _{\gamma }\Phi (\Pi (z))^ne^{zx}dz. \end{aligned}$$
(15)

Using Eqs. (7), (13), and (15), it is easy to check that all \(V_n\) are multiplicatively periodic

$$\begin{aligned} V_n\left(\frac{x}{E}\right)=\frac{\kappa _nx^{\frac{\ln Ep_1^n}{\ln E}}}{2\pi \textbf{i} p_1^n}\int _{\gamma }\Phi (\Pi (z))^ne^{\frac{zx}{E}}\frac{dz}{E}=\frac{\kappa _nx^{\frac{\ln Ep_1^n}{\ln E}}}{2\pi \textbf{i}}\int _{\frac{\gamma }{E}}\Phi (\Pi (z))^ne^{zx}dz=V_n(x). \end{aligned}$$
(16)

The integration contour \(\gamma \) in Eq. (15) should be chosen so that \(\Pi (\gamma )\) lies in the domain where \(\Phi \) is invertible; moreover, \(\gamma \) must be symmetric and connect \(-\textbf{i}\infty \) with \(+\textbf{i}\infty \). One of the choices is \(\gamma =\textbf{i}{\mathbb R}+\varepsilon \) with sufficiently large \(\varepsilon >0\), since \(\Pi (z)\rightarrow 0\) for \({\textrm{Re}}z\geqslant 0\) and \(z\rightarrow \infty \). For contours of this type, the second integral in Eq. (16) takes the same value along \(\gamma \) and along \(\gamma /E\), due to the Cauchy integral theorem. The exact power-law decay of \(\Pi (z)\), when \({\textrm{Re}}z\geqslant 0\) and \(z\rightarrow \infty \), can be determined from the identity

$$\begin{aligned} \Pi (z)=\Phi ^{-1}(K(\log _E z)z^{\frac{\ln p_1}{\ln E}}), \end{aligned}$$
(17)
Fig. 1
Gray strip belongs to the domain of definition of K(z). Paths \(\gamma \) and \(\log _E\gamma \) are plotted with the blue color

where \(K(z)=\Phi (\Pi (E^z))p_1^{-z}\) is 1-periodic by Eqs. (7) and (13). This famous function will reach its full potential a little later, see Eq. (18) and below. Here, we use only basic facts about it. Following [7], the integral in Eq. (11) exists. We assume that it exists in a strong sense, meaning \(\Pi (\textbf{i}y)\rightarrow 0\) for \(y\rightarrow \pm \infty \); this automatically gives \(\Phi (\Pi (\textbf{i}y))\rightarrow 0\) for \(y\rightarrow \pm \infty \). Hence, K(z) defined above is analytic in a neighborhood of \([x+\frac{\textbf{i}\pi }{2\ln E},x+1+\frac{\textbf{i}\pi }{2\ln E}]\) for some large \(x\in {\mathbb R}\). Due to 1-periodicity, K(z) is analytic in some strip of positive width, a neighborhood of the line \({\mathbb R}+\frac{\textbf{i}\pi }{2\ln E}\), and, due to the symmetry, of \({\mathbb R}-\frac{\textbf{i}\pi }{2\ln E}\). In [7, 8, 16], it is shown that K(z) is also analytic in the strip \(|{\textrm{Im}}z|<\frac{\pi }{2\ln E}\). Roughly speaking, this fact can be derived from the definitions of K, \(\Phi \), \(\Pi \), and basic facts about the filled Julia set related to P, which contains the unit disk. The angle \(\frac{\pi }{2}\) at which the boundary of the unit disk (the unit circle) meets the real line at \(z=1\) gives the bound \(\frac{\pi }{2\ln E}\) for the symmetric strip defined above; the factor \(\ln E\) comes from \(E^z\) in the definition of K. As mentioned, the corresponding details are available in, e.g., [7, 8, 16]. Combining the analyticity of K(z) in the strip \(|{\textrm{Im}}z|<\frac{\pi }{2\ln E}\) with its analyticity in the strips neighboring \({\mathbb R}\pm \frac{\textbf{i}\pi }{2\ln E}\), we deduce that K(z) is analytic (and still 1-periodic) in a larger strip \(|{\textrm{Im}}z|<k\) for some \(k>\frac{\pi }{2\ln E}\). This means that the contour \(\log _E\gamma \) lies strictly inside the domain of definition of K, see Fig. 1, and, hence, \(K(\log _E z)\) is bounded and smooth for \(z\in \gamma \). Thus, \(\Pi (z)=O(z^{\frac{\ln p_1}{\ln E}})\), see (17), and if \(\frac{\ln p_1}{\ln E}<-1\) then the integrals in Eq. (15) converge absolutely. Absolute convergence also justifies Eq. (14). Recall that \(p_1<1\) and \(E>1\); hence, \(\frac{\ln p_1}{\ln E}<-1\) is not a rare case. In fact, it can be shown that Eq. (15) converges anyway, because \(\frac{\ln p_1}{\ln E}<0\) and the decaying factor \(z^{\frac{\ln p_1}{\ln E}}\) is multiplied by the oscillatory factor \(e^{zx}\) as \({\textrm{Im}}z\rightarrow \pm \infty \), but this requires more cumbersome calculations.

3 Representation of \(V_n(x)\) Through the Fourier Coefficients of Karlin-McGregor Function

Recall that the Karlin-McGregor function, see [13, 14], is a 1-periodic function given by

$$\begin{aligned} K(z)=\Phi (\Pi (E^z))p_1^{-z}=\sum _{m=-\infty }^{m=+\infty }\theta _me^{2\pi \textbf{i}mz}, \end{aligned}$$
(18)

where the corresponding Fourier coefficients \(\theta _m\) decay exponentially fast. A simple consequence of the theory of periodic functions states that if a 1-periodic function is analytic in the strip \(\{z:\ |{\textrm{Im}}z|<r\}\), then its Fourier coefficients are bounded by \(C(\varepsilon )e^{-2\pi |m|(r-\varepsilon )}\), where m is the index of the coefficient, \(\varepsilon >0\) is arbitrary, and \(C(\varepsilon )\) is common to all the coefficients. We recall only that one may take at least \(r=\frac{\pi }{2\ln E}\) for the Karlin-McGregor function, see the end of Sect. 2 and the details in, e.g., [16]. Let us rewrite Eq. (15) in terms of K:

$$\begin{aligned} V_n(x)&=\frac{\kappa _nx^{\frac{\ln Ep_1^n}{\ln E}}}{2\pi \textbf{i}}\int _{\gamma }p_1^{-nz}\Phi (\Pi (E^z))^np_1^{nz}e^{E^zx}dE^z\nonumber \\ &=\frac{\kappa _nx^{\frac{\ln Ep_1^n}{\ln E}}}{2\pi \textbf{i}}\int _{\log _E\gamma }K(z)^np_1^{nz}e^{E^zx}dE^z\nonumber \\ &=\frac{\kappa _nx^{\frac{\ln Ep_1^n}{\ln E}}}{2\pi \textbf{i}}\int _{\log _E\gamma }\left( \sum _{m=-\infty }^{+\infty }\theta _{m}^{*n}e^{2\pi \textbf{i}mz}\right) p_1^{nz}e^{E^zx}dE^z\nonumber \\ &=\kappa _n\sum _{m=-\infty }^{+\infty }\theta _{m}^{*n}\frac{x^{\frac{\ln Ep_1^n}{\ln E}}}{2\pi \textbf{i}}\int _{\log _E\gamma }e^{2\pi \textbf{i}mz}p_1^{nz}e^{E^zx}dE^z, \end{aligned}$$
(19)

where \(\theta _{m}^{*n}\) are the Fourier coefficients of the 1-periodic function

$$\begin{aligned} K(z)^n=\sum _{m=-\infty }^{m=+\infty }\theta _{m}^{*n}e^{2\pi \textbf{i}mz}. \end{aligned}$$
(20)

They are the convolution powers of the original Fourier coefficients \(\theta _m\). We already know the form of \(V_1(x)\):

$$\begin{aligned} V_1(x)=\sum _{m=-\infty }^{+\infty }\frac{\theta _me^{2\pi \textbf{i}m\frac{-\ln x}{\ln E}}}{\Gamma (-\frac{2\pi \textbf{i}m+\ln p_1}{\ln E})}, \end{aligned}$$
(21)

see [15]. Comparing Eqs. (19) with (21) and taking into account \(\kappa _1=1\), see Eq. (10), we guess the main identity

$$\begin{aligned} \frac{x^{\frac{\ln Ep_1}{\ln E}}}{2\pi \textbf{i}}\int _{\log _E\gamma }e^{2\pi \textbf{i}mz}p_1^{z} e^{E^zx}dE^z=\frac{e^{2\pi \textbf{i}m\frac{-\ln x}{\ln E}}}{\Gamma (-\frac{2\pi \textbf{i}m+\ln p_1}{\ln E})}. \end{aligned}$$
(22)

A direct independent proof of Eq. (22) can be based on the well-known Hankel integral representations of the \(\Gamma \)-function. It is easy to check that, after changes of variables, Eq. (22) becomes equivalent to the formula presented in Example 12.2.6 on page 254 of [19]. This is a standard, but interesting, exercise in complex analysis and special functions. Substituting \(p_1^n\) for \(p_1\) in Eq. (22), we also obtain

$$\begin{aligned} \frac{x^{\frac{\ln Ep_1^n}{\ln E}}}{2\pi \textbf{i}}\int _{\log _E\gamma }e^{2\pi \textbf{i}mz}p_1^{nz}e^{E^zx}dE^z=\frac{e^{2\pi \textbf{i}m\frac{-\ln x}{\ln E}}}{\Gamma (-\frac{2\pi \textbf{i}m+n\ln p_1}{\ln E})}. \end{aligned}$$
(23)

Combining Eqs. (19) with (23), we obtain the final result

$$\begin{aligned} V_n(x)=K_n(\frac{-\ln x}{\ln E}),\ \ \ K_n(z)=\kappa _n\sum _{m=-\infty }^{+\infty }\frac{\theta _{m}^{*n}e^{2\pi \textbf{i}mz}}{\Gamma (-\frac{2\pi \textbf{i}m+n\ln p_1}{\ln E})}. \end{aligned}$$
(24)

The formula Eq. (24) is more convenient for computations than Eq. (15) because it is similar to Eq. (21), for which the efficient numerical schemes developed in [16] were already applied in [15] and showed good results there.
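
A possible computational pipeline for Eq. (24) is sketched below (an illustration only, not the scheme of [16]; it reuses the naive `Pi`, `Phi`, `mean_E` from the sketch in Sect. 2 and `kappa_coefficients` from the sketch after Eq. (10), and the grid size N, the truncation M, and the noise cut-off are assumed parameters): sample K on a uniform grid of its period, take the FFT of \(K^n\) to obtain \(\theta _{m}^{*n}\) as in Eq. (20), and combine them with the \(\Gamma \)-values as in Eq. (24).

```python
import numpy as np
from scipy.special import gamma

# assumes Pi, Phi, mean_E (sketch in Sect. 2) and kappa_coefficients
# (sketch after Eq. (10)) are already defined

def V_n(x, n, p, N=256, M=8, noise=1e-6):
    """Evaluate V_n(x) via Eq. (24): the FFT of K^n over one period gives
    the coefficients theta^{*n}_m of Eq. (20); the series is truncated at
    |m| <= M and coefficients below a crude noise floor are skipped."""
    E = mean_E(p)
    kap = kappa_coefficients(p, n)[n - 1]                # kappa_n
    grid = np.arange(N) / N                              # uniform grid of [0, 1)
    K_vals = np.array([Phi(Pi(E**s, p), p) * p[0]**(-s) for s in grid])
    theta_star = np.fft.fft(K_vals**n) / N               # theta^{*n}_m
    z = -np.log(x) / np.log(E)
    total = 0.0j
    for m in range(-M, M + 1):
        coeff = theta_star[m % N]
        if abs(coeff) < noise:     # buried by the accuracy of the naive Pi/Phi
            continue
        total += coeff * np.exp(2j * np.pi * m * z) \
                 / gamma(-(2j * np.pi * m + n * np.log(p[0])) / np.log(E))
    return (kap * total).real

# First two terms of Eq. (2) for P(z) = 0.1 z + 0.5 z^2 + 0.4 z^3
p = [0.1, 0.5, 0.4]
beta = -np.log(p[0]) / np.log(mean_E(p))
alpha = -1 + beta
x = 0.1
print(x**alpha * V_n(x, 1, p) + x**(alpha + beta) * V_n(x, 2, p))
```

With the naive double-precision \(\Pi \) and \(\Phi \), only the lowest few harmonics of K are resolved, which is why the crude cut-off is included; the schemes of [16] (or higher-precision arithmetic) are needed to exploit the full exponential decay of \(\theta _{m}^{*n}\).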

To justify changing the order of summation and integration in Eq. (19), it is enough to take into account \(\frac{\ln p_1}{\ln E}<-1\) and to remember that K is smooth and bounded in a symmetric strip, parallel to \({\mathbb R}\), of width greater than \(\frac{\pi }{\ln E}\), see the details at the end of Sect. 2. Hence, K(z) is well approximated by its Fourier series for \(z\in \log _E\gamma \), with the necessary rate of convergence. Gathering all these remarks, let us formulate the main result.

Theorem 3.1

If P is entire, the integral in Eq. (11) exists in the strong sense that \(\Pi (\textbf{i}y)\rightarrow 0\) for \(y\rightarrow \pm \infty \), and \(\frac{\ln p_1}{\ln E}<-1\), then Eqs. (2) and (24) hold true.

As mentioned at the end of Sect. 2, the assumption \(\frac{\ln p_1}{\ln E}<-1\) can usually be omitted. The convergence of Eq. (24) is quite fast because \(\theta _{m}^{*n}=O(e^{-2\pi |m|k})\) for some \(k>\frac{\pi }{2\ln E}\), which more than compensates for the small values of \(\Gamma \) in the denominator, which are approximately of order \(e^{-\frac{\pi ^2|m|}{\ln E}}|m|^{-\frac{\ln p_1}{\ln E}-\frac{1}{2}}\), see, e.g., the corresponding analysis based on Stirling approximations of the Gamma function explained in [16]. Thus, the coefficients of the series in Eq. (24) decay exponentially fast.
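
The order-of-magnitude claim for the \(\Gamma \)-denominators can be checked directly. The small sketch below (an illustration, with values matching the first example of Sect. 4) prints the ratio of \(|\Gamma (\cdot )|\) to the Stirling-type estimate; the ratio stabilizes to an m-independent constant.

```python
import numpy as np
from scipy.special import loggamma

p1, E = 0.1, 2.3                       # values of the first example of Sect. 4
lnE, lnp1 = np.log(E), np.log(p1)
for m in (2, 5, 10, 20, 40):
    w = -(2j * np.pi * m + lnp1) / lnE             # argument of Gamma for n = 1
    log_exact = loggamma(w).real                   # log |Gamma(w)|
    log_est = -np.pi**2 * m / lnE + (-lnp1 / lnE - 0.5) * np.log(m)
    print(m, np.exp(log_exact - log_est))          # ratio tends to a constant
```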

The condition \(\Pi (\textbf{i}y)\rightarrow 0\) for \(y\rightarrow \pm \infty \) can be replaced with a simpler condition, as follows directly from Eq. (13) and from the definition of the Julia set.

Proposition 3.2

If \(|\Pi (\textbf{i}y)|<1\) for \(y\in [r,Er]\) for some \(r>0\), then \(\Pi (\textbf{i}y)\rightarrow 0\) for \(y\rightarrow \pm \infty \).

Proof

Indeed, recall that the filled Julia set related to P(z) contains the open unit disk because all the coefficients of P(z) are non-negative and the maximal modulus of P on the boundary of the unit disk is \(P(1)=p_1+p_2+...=1\). Thus, if \(\Pi (\textbf{i}y)\) lies in the interior of the unit disk, then \(\Pi (\textbf{i}y)\) lies in the interior of the corresponding component of the filled Julia set as well. Hence, the P-iterations performed by the first formula in Eq. (13) satisfy

$$\begin{aligned} \Pi (\textbf{i}E^ty)=\underbrace{P\circ ...\circ P}_{t}(\Pi (\textbf{i}y))\rightarrow 0,\ \ y\in [r,Er] \end{aligned}$$
(25)

by the standard properties of Julia sets, since 0 is the unique attracting fixed point of P(z) inside the unit disk. The uniqueness of the attracting point follows from

$$\begin{aligned} |P(z)|\leqslant |z|(p_1+p_2|z|+...)<|z|(p_1+p_2+...)=|z|\ \ \ \textrm{for}\ \ \ |z|<1. \end{aligned}$$

Hence, P is a contracting mapping inside the unit disk, and the P-iterations for any z with \(|z|<1\) converge to 0. Because \(\cup _{t\in {\mathbb N}}[E^tr,E^{t+1}r]=[Er,+\infty )\), we conclude that \(\Pi (\textbf{i}y)\rightarrow 0\) for \(y\rightarrow +\infty \), see Eq. (25). Due to the symmetry, the same holds for \(y\rightarrow -\infty \). \(\square \)

Using the fast exponentially convergent algorithms for the computation of \(\Pi (z)\) developed in, e.g., [16], we can check the condition formulated in Proposition 3.2 numerically; a minimal sketch of such a check is given at the end of this section. On the other hand, if the interior of the corresponding component of the filled Julia set contains a little more than the open unit disk, namely a small sector near \(z=1\) of angle \(\geqslant \pi \), then the condition formulated in Proposition 3.2 is automatically satisfied. This is because, for small y, the point w, defined by

$$\begin{aligned} w=\Pi (\textbf{i}y)=1-\textbf{i}y-\frac{\Pi ''(0)}{2}y^2+O(y^3),\ \ \Pi ''(0)=\frac{P''(0)}{E^2-E}>0, \end{aligned}$$

lies inside the filled Julia set, and after a few, say m, P-iterations, see Eq. (13), \(\Pi (\textbf{i}E^my)=\underbrace{P\circ ...\circ P}_{m}(w)\) will be small enough by the properties of Julia sets already discussed. All the filled Julia sets we tested in various examples contain such sectors, see, e.g., the right panel in Fig. 2 computed with the help of this site\(^{1}\). The Julia sets (black areas) include

$$\begin{aligned} \{z:\ |{\textrm{arg}}(1-z)|\leqslant \frac{\pi }{2}+\delta \} \end{aligned}$$

for all sufficiently small \(|1-z|\) and some \(\delta >0\). The total angle of the sector is \(2(\pi /2+\delta )>\pi \), as required. The sets of parameters \(p_1\), \(p_2\), \(p_3\), and \(p_4\) for which the Julia sets were computed will be used in the examples of the next section.
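
The numerical check mentioned above is straightforward. A minimal sketch (an illustration; it reuses the naive `Pi` and `mean_E` from Sect. 2 instead of the fast algorithms of [16], and the starting point r and the grid size are our assumptions) scans \(|\Pi (\textbf{i}y)|\) on the interval [r, Er], which is exactly the sufficient condition of Proposition 3.2.

```python
import numpy as np

# assumes Pi and mean_E from the sketch in Sect. 2 are already defined

def check_proposition_3_2(p, r=1.0, n_points=200):
    """Return the maximum of |Pi(iy)| on a grid of [r, E r]; a value
    strictly below 1 verifies the condition of Proposition 3.2."""
    E = mean_E(p)
    ys = np.linspace(r, E * r, n_points)
    return max(abs(Pi(1j * y, p)) for y in ys)

print(check_proposition_3_2([0.1, 0.5, 0.4]))
print(check_proposition_3_2([0.1, 0.1, 0.5, 0.3]))
```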

Fig. 2
Two examples of Julia sets computed with the help of this site\(^1\): the cases \(p_1=0.1\), \(p_2=0.5\), \(p_3=0.4\), and \(p_1=0.1\), \(p_2=0.1\), \(p_3=0.5\), \(p_4=0.3\)

4 Examples

As examples, we consider the same cases as in [15]. The difference is that we add the second asymptotic term. We take two sets of parameters

$$\begin{aligned} p_1=0.1,\ p_2=0.5,\ p_3=0.4\ \ \ \textrm{and}\ \ \ p_1=0.1,\ p_2=0.1,\ p_3=0.5,\ p_4=0.3. \end{aligned}$$

Thus, we have two probability-generating functions

$$\begin{aligned} P_1(z)=0.1z+0.5z^2+0.4z^3\ \ \ \textrm{and}\ \ \ P_2(z)=0.1z+0.1z^2+0.5z^3+0.3z^4 \end{aligned}$$

that will be used for the computation of the two densities \(p_1(x)\) and \(p_2(x)\) by Eqs. (11) and (12). Let us compare the densities computed in this way with their first asymptotic terms given by Eqs. (2) and (24). The comparison is provided in Fig. 3. As can be seen, the first asymptotic term already gives a very good approximation, especially for small x.

Fig. 3
Comparison of exact normalized densities with their first asymptotic approximations: the case \(p_1=0.1\), \(p_2=0.5\), \(p_3=0.4\) (upper curves), and \(p_1=0.1\), \(p_2=0.1\), \(p_3=0.5\), \(p_4=0.3\) (bottom curves)

The next step is to compare the difference between exact normalized densities and their first asymptotic terms with the second asymptotic term, see again Eqs. (2) and (24). The comparison is provided in Fig. 4. The results are quite good.

For the computation of the asymptotic terms, we use numerical procedures based on the fast algorithms developed in [16]. Originally, the procedures were developed for the computation of K(z), but they can easily be applied without modification to \(K(z)^n\), which is required in Eq. (20) and then in Eq. (24). For the computation of the densities, we use the FFT (DFT). Some additional tricks are necessary because, roughly speaking, the DFT is designed for the discrete analogs of finite integrals, from \(-\pi \) to \(\pi \), not for the infinite integrals we need, see Eq. (11). Generally speaking, the computation of infinite integrals is much harder than the computation of finite integrals. The asymptotic terms in Eq. (24) require the computation of the Fourier coefficients of \(K(z)^n\), see Eq. (20), which can be performed with the help of a standard DFT, a discrete analog of a finite integral over \([-\pi ,\pi ]\). In this sense, the RHS of Eq. (2) has an obvious advantage over the LHS, which contains the infinite integral. In particular, the numerical accuracy of the computation of the infinite integral in the LHS of Eq. (2) is insufficient to compare it with the second asymptotic term for very small x. Much more expensive calculations are needed to achieve the required accuracy. For this reason, we have not included this comparison (when \(x\approx 0\)) in Fig. 4. At the same time, the RHS of Eq. (2) has no such disadvantages, because the \(V_j(x)\) are multiplicatively periodic and can easily be computed for any x, and, as already said, their computation does not require infinite integrals and can be performed with the standard FFT. Among others, this was one of the motivations for obtaining the main result, the RHS expansion in Eq. (2).
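
For completeness, here is one possible way to approximate the infinite integral of Eq. (11) with a single FFT; this is a rough sketch under our own assumptions (the truncation Y, the grid size N, and the naive `Pi` from Sect. 2), not the scheme actually used for the figures. The integral is truncated to [-Y, Y], \(\Pi (\textbf{i}y)\) is sampled on a uniform grid, and the density is read off on the conjugate grid \(x_j=\pi j/Y\).

```python
import numpy as np

# assumes Pi from the sketch in Sect. 2 is already defined

def density_via_fft(p, Y=150.0, N=2**12):
    """Approximate p(x) of Eq. (11): truncate the Fourier integral to
    [-Y, Y] and evaluate the resulting Riemann sum with one inverse FFT;
    the returned grid is x_j = pi*j/Y."""
    dy = 2 * Y / N
    y = -Y + dy * np.arange(N)
    f = np.array([Pi(1j * yk, p) for yk in y])            # samples of Pi(iy)
    # p(x_j) ~ (dy/2pi) * sum_k f_k exp(i x_j y_k); on this grid the kernel
    # exp(i x_j y_k) reduces to the inverse-DFT kernel times (-1)^j
    vals = dy / (2 * np.pi) * N * np.fft.ifft(f) * (-1.0) ** np.arange(N)
    x = np.pi * np.arange(N) / Y
    return x, vals.real

x, dens = density_via_fft([0.1, 0.5, 0.4])
print(x[1:5])
print(dens[1:5])
```

Only moderate values of j (small x) are reliable on the returned grid, and the truncation at Y limits the accuracy through the power-law tail \(\Pi (\textbf{i}y)=O(|y|^{\ln p_1/\ln E})\); this is precisely why the comparison for very small x is expensive for the LHS of Eq. (2).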

Fig. 4
Comparison of differences between exact normalized densities and their first asymptotic terms with the second asymptotic terms: the case \(p_1=0.1\), \(p_2=0.5\), \(p_3=0.4\) (upper curves), and \(p_1=0.1\), \(p_2=0.1\), \(p_3=0.5\), \(p_4=0.3\) (bottom curves)