1 Introduction

Walter Hayman, in A generalisation of Stirling’s formula, [16], introduced what are now called Hayman (admissible) functions, a concept which has become a cornerstone of asymptotic methods in Analytic Combinatorics. The notion of Hayman function may at first glance appear too specific, tailor-made so that the saddle point approximation and the Laplace method apply. But the Hayman class is a rich collection of functions, closed under certain operations (exponentials of Hayman functions are Hayman) and under small perturbations (the product of a polynomial and a function in the Hayman class is again in the Hayman class); its usefulness in Analytic Combinatorics derives mostly from facts like these.

A few years later, following an idea of Khinchin, Paul Rosenbloom [25] introduced in Complex Analysis a family of probability distributions, which we term here a Khinchin family, associated with any given power series with non-negative coefficients. In this probabilistic framework, he interpreted Hayman functions as power series with Gaussian Khinchin families (see Definition 3.1) and, as a demonstration of the fruitfulness of this viewpoint, Rosenbloom proved the Wiman–Valiron theorem on the maximal term of entire functions, essentially by applying Chebyshev’s inequality to the corresponding Khinchin family.

Luis Báez-Duarte [2] added to this circle of ideas the notion of strongly Gaussian power series (see Definition 3.4) and a basic substitution theorem (see Theorem C in Sect. 3.2.1) quite useful for asymptotic purposes. Products of power series become, on the Khinchin family side, sums of independent random variables, and by appealing to Lyapunov’s approach to the central limit theorem, Báez-Duarte was able to show that the function \(\prod _{j=1}^\infty (1-z^j)^{-1}\) is strongly Gaussian, and to deduce the classical Hardy–Ramanujan partition theorem. See also Candelpergher–Miniconi, [6].

In this paper we follow this thread of Hayman, Rosenbloom and Báez-Duarte.

We take full advantage of the notion of strong gaussianity and go back to Hayman’s basic function theoretical ideas to extract a direct and purely function theoretical criterion (Theorem 4.1) for belonging to the Hayman class—and so, being strongly Gaussian—for power series which do not vanish in their disk of convergence.

We use this criterion, combined with Hayman’s asymptotic formula of Theorem B, to derive in a simple and unified manner asymptotics for the enumeration of an assortment of combinatorial objects.

Further, we use Theorem 4.1 to show, with no intervention of the central limit theorem or the circle method, that generating functions of partitions, standard or with a variety of restrictions on parts, are in the Hayman class, and then with the additional and crucial contribution of Theorem C we derive asymptotic formulas for partitions with parts in an arithmetic sequence (Ingham’s theorem), plane partitions (Wright’s theorem), or colored partitions and, of course, Hardy–Ramanujan’s partition theorem.

We should emphasize that the general approach of Khinchin families and Hayman class functions aims to give first order asymptotic formulae, but not full asymptotic expansions.

In forthcoming work, we plan to extend this approach of Khinchin families and apply it to further asymptotic estimations of coefficients like those of infinite products, of large powers of power series, of generating functions of a variety of trees, etc.

As it turns out, [16] is one of the most cited papers of Hayman and, as far as we know, he did not pursue its topic any further. The present paper is a natural continuation of [16] once you have at your disposal the Khinchin families framework and the contributions of Báez-Duarte [2].

This paper is dedicated to the memory of Walter Hayman. Perhaps he would have been pleased with the extra range of direct applications of his A generalisation of Stirling’s formula, [16], which the present paper contains.

Plan of the paper. Section 2 expounds the basic theory of Khinchin families. In Sect. 3, the notions of Gaussian and strongly Gaussian power series are introduced, their probabilistic and asymptotic implications, like Hayman’s asymptotic formula, Theorem B, are analyzed, and the Hayman class of functions is presented. We also present basic criteria for non-vanishing functions to be Gaussian (Theorem 3.2) and to be strongly Gaussian (Theorem 4.1). This last theorem is applied in Sect. 4 to verify that a number of exponential functions, mostly of combinatorial interest, are strongly Gaussian and to obtain asymptotic formulae for their coefficients. Theorem 4.1 is also the basic tool for the analysis of asymptotic formulae for the number of partitions of various kinds which, with the help of Theorem C, is carried out in Sect. 6.

Notations. For positive sequences \((a_n)\), \((b_n)\), the notation \(a_n\sim b_n\) as \(n\rightarrow \infty \) means that \(\lim _{n\rightarrow \infty } a_n/b_n=1\); and we refer to that as an asymptotic formula. With \(a_n\asymp b_n\) we abbreviate that \(c\le a_n/b_n\le C\) for positive constants c and C. Analogous notations are used for positive functions. \(\Phi \) denotes the distribution function of the standard normal. With ‘ogf’ and ‘egf’ we abbreviate, respectively, ordinary and exponential generating function. The disk of center \(z\in \mathbb {C}\) and radius \(r>0\) is denoted \(\mathbb {D}(z,r)\). The unit disk \(\mathbb {D}(0,1)\) is denoted simply by \(\mathbb {D}\). \(\mathbf {E}(Z)\) and \(\mathbf {V}(Z)\) are reserved for the expectation (mean) and variance of the random variable Z. If X and Y are random variables, with \(X \overset{\mathrm{d}}{=}Y\) we signify that X and Y have the same distribution. We write \(\{x\}\) for the fractional part of the real number x. The Bernoulli numbers are denoted by \(B_j\), for \(j \ge 0\) (note that \(B_1=-1/2\)), while \(B_j(x)\), for \(j\ge 0\), stands for the Bernoulli polynomials. For \(f \in C^{k}[0,N]\), with \(k~\ge ~1\), the Euler summation formula of order k reads:

$$\begin{aligned}&\sum _{j=0}^{N} f(j)=\int _0^N f(t)\, dt \nonumber \\&\quad +\frac{1}{2}f(0)+\frac{1}{2} f(N)+\sum _{j=1}^{k-1} (-1)^{j+1} \frac{B_{j+1}}{(j+1)!} \left( f^{(j)}(N)-f^{(j)}(0)\right) \nonumber \\&\quad +(-1)^{k+1}\int _0^N f^{(k)}(t) \,\frac{{B}_k(\{t\})}{k!}\, dt\, . \end{aligned}$$
(1.1)
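As a quick numerical illustration of (1.1) in its lowest-order case \(k=1\), where only \(B_1(x)=x-1/2\) enters, one may check both sides for a concrete function; the sketch below (with helper names of our own choosing) does so for \(f(x)=x^2\) on [0, 6], integrating over each unit interval separately so that the fractional part \(\{t\}\) causes no trouble.

```python
import math

# Order-1 Euler summation (k = 1 in (1.1)):
#   sum_{j=0}^N f(j) = int_0^N f + (f(0)+f(N))/2 + int_0^N f'(t) B_1({t}) dt,
# with B_1(x) = x - 1/2.  Checked here for f(x) = x^2, N = 6.
N = 6
f = lambda x: x * x
fp = lambda x: 2 * x          # f'

def integral(g, a, b, panels=20000):
    # simple midpoint rule
    h = (b - a) / panels
    return sum(g(a + (i + 0.5) * h) for i in range(panels)) * h

lhs = sum(f(j) for j in range(N + 1))
rhs = (integral(f, 0, N) + (f(0) + f(N)) / 2
       # integrate the B_1({t}) term over each unit interval separately,
       # since B_1({t}) jumps at the integers
       + sum(integral(lambda t: fp(t) * (t - math.floor(t) - 0.5), m, m + 1)
             for m in range(N)))
assert abs(lhs - rhs) < 1e-6
```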

We will frequently refer to the comprehensive treatise Analytic Combinatorics [9] by Flajolet and Sedgewick for background on combinatorial issues.

2 Khinchin Families

We denote by \(\mathcal {K }\) the class of non-constant power series

$$\begin{aligned} f(z)=\sum _{n=0}^\infty a_n z^n, \end{aligned}$$

with positive radius of convergence, which have non-negative Taylor coefficients and such that \(a_0>0\). These are the power series of interest in this paper. Observe that f(t) is increasing for \(t\in [0,R)\), and, since \(f(0)>0\), we have \(f(t)>0\), for \(t \in [0,R)\).

Definition 2.1

The Khinchin family of a power series f in \(\mathcal {K }\) with radius of convergence \(R>0\) is the family of random variables \((X_t)_{t \in [0,R)}\) with values in \(\{0, 1, \ldots \}\) and with mass functions given by

$$\begin{aligned} \mathbf {P}(X_t=n)=\frac{a_n\, t^n}{f(t)}\, , \quad \text{ for } \text{ each } n \ge 0 \quad \text{ and } t\in (0,R)\, , \end{aligned}$$

while \(X_0\equiv 0\).
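By way of illustration (the code and its names are ours, a minimal numerical sketch), the mass functions of Definition 2.1 can be computed directly from truncated Taylor coefficients; for \(f(z)=e^z\) one recovers, as discussed in Sect. 2.1.6, the Poisson distribution of parameter t.

```python
import math

def khinchin_pmf(coeffs, t):
    """Mass function P(X_t = n) = a_n t^n / f(t), from truncated
    Taylor coefficients a_0, a_1, ... of f (a_0 > 0, a_n >= 0)."""
    weights = [a * t**n for n, a in enumerate(coeffs)]
    total = sum(weights)          # truncated value of f(t)
    return [w / total for w in weights]

# f(z) = e^z: the Khinchin family is Poisson of parameter t.
coeffs = [1 / math.factorial(n) for n in range(60)]
pmf = khinchin_pmf(coeffs, 2.0)
assert abs(pmf[3] - math.exp(-2.0) * 2.0**3 / math.factorial(3)) < 1e-12
```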

Any Khinchin family is continuous in distribution in [0, R). No hypothesis on the joint distribution of the variables \(X_t\) is made: these are families, not processes.

2.1 Basic Properties

2.1.1 Mean and Variance Functions

For the mean and the variance of \(X_t\) we reserve the notations \(m(t)=\mathbf {E}(X_t)\) and \(\sigma ^2(t)=\mathbf {V}(X_t)\), for \(t \in [0,R)\). In terms of f, the mean and variance of \(X_t\) may be written as

$$\begin{aligned} m(t)=\frac{t f^\prime (t)}{f(t)}, \qquad \sigma ^2(t)=t m^\prime (t)\, , \quad \text{ for } t \in [0,R)\,. \end{aligned}$$
(2.1)

For each \(t \in (0,R)\), the variable \(X_t\) is not a constant, and so \(\sigma ^2(t)>0\). Consequently, m(t) is strictly increasing in [0, R), though, in general, \(\sigma (t)\) is not increasing, as the case of polynomials shows. We denote

$$\begin{aligned} M_f=\lim _{t \uparrow R} m(t)\,. \end{aligned}$$
(2.2)
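The formulas (2.1) are easy to check numerically; the sketch below (our notation) computes m(t) as \(t f'(t)/f(t)\) and \(\sigma^2(t)=t\,m'(t)\) by a central difference, and confirms the closed forms \(m(t)=t/(1-t)\) and \(\sigma^2(t)=t/(1-t)^2\) for \(f(z)=1/(1-z)\) that appear in Sect. 2.1.6.

```python
def mean_fn(f, fp, t):
    # m(t) = t f'(t) / f(t), formula (2.1)
    return t * fp(t) / f(t)

def var_fn(f, fp, t, h=1e-6):
    # sigma^2(t) = t m'(t), with m' approximated by a central difference
    return t * (mean_fn(f, fp, t + h) - mean_fn(f, fp, t - h)) / (2 * h)

# f(z) = 1/(1-z): geometric family, m(t) = t/(1-t), sigma^2(t) = t/(1-t)^2
f = lambda t: 1 / (1 - t)
fp = lambda t: 1 / (1 - t) ** 2
t = 0.5
assert abs(mean_fn(f, fp, t) - t / (1 - t)) < 1e-12
assert abs(var_fn(f, fp, t) - t / (1 - t) ** 2) < 1e-6
```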

2.1.2 The Case \(M_f=\infty \)

This case \(M_f=\infty \), quite relevant in what follows, holds except in some exceptional cases, which we now specify.

Let \(f \in \mathcal {K }\) with radius of convergence \(R>0\) and assume that \(M_f<\infty \). Then

$$\begin{aligned} (\star )\quad m(t)=\frac{\sum _{n=0}^\infty n a_n t^n}{\sum _{n=0}^\infty a_n t^n}=\frac{t f^\prime (t)}{f(t)}\le M_f\,, \quad \text{ for } t \in (0,R)\,. \end{aligned}$$

For any fixed \(t_0\in (0,R)\), we then have upon integration in \((\star )\) that

$$\begin{aligned} (\star \star )\quad \ln \left( \frac{f(t)}{f(t_0)}\right) \le M_f \ln \left( \frac{t}{t_0}\right) \,, \quad \text{ for } t \in [t_0, R)\,. \end{aligned}$$

Let us distinguish now between radius of convergence R being finite or not.

\(\bullet \) If \(R<+\infty \) and \(M_f<\infty \), then \((\star \star )\) implies that \(\lim _{t\uparrow R} f(t)<\infty \) and then \(\sum _{n=0}^\infty a_n R^n<\infty \). And, besides, from \((\star )\) we deduce also that

$$\begin{aligned} (\flat )\quad \sum _{n=0}^\infty n a_n R^n<\infty \,. \end{aligned}$$

And, conversely, if \((\flat )\) holds and f in \(\mathcal {K}\) has radius of convergence R, then

$$\begin{aligned} M_f=\frac{\sum _{n=0}^\infty n a_n R^n}{\sum _{n=0}^\infty a_n R^n}<\infty \,. \end{aligned}$$

\(\bullet \) If \(R=\infty \) and \(M_f<\infty \), then \((\star \star )\) implies that f is a polynomial. And conversely, for a polynomial f in \(\mathcal {K }\) of degree N, one actually has \(M_f=N\).

In summary,

Lemma 2.2

For \(f(z)=\sum _{n=0}^\infty a_n z^n\) in \(\mathcal {K}\) with radius of convergence \(R>0\), we have \(M_f<\infty \) if and only if \(R<\infty \) and \(\sum _{n=0}^\infty n a_n R^n <\infty \), or if \(R=\infty \) and f is a polynomial.

In the first case, we have \(M_f=(\sum _{n=0}^\infty n a_n R^n) / (\sum _{n=0}^\infty a_n R^n)\). For a polynomial \(f\in \mathcal {K}\), we have \(M_f=\text{ deg }{(f)}\).

In all forthcoming applications, the exceptional cases above will not appear and we will always have \(M_f=+\infty \).

Important notation: If \(M_f=\infty \), we denote by \(t_n\) the unique value \(t\in [0,R)\) such that \(m(t)=n\).

Observe that the \(t_n\) satisfy \(\lim _{n\rightarrow \infty } t_n=R\).
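Since m is continuous and strictly increasing (Sect. 2.1.1), the \(t_n\) can be computed by plain bisection; in the sketch below (names ours) we use \(f(z)=1/(1-z)\), where \(m(t)=t/(1-t)\) gives \(t_n=n/(n+1)\) explicitly.

```python
def solve_t_n(mean, n, lo=0.0, hi=1.0, iters=200):
    """Bisection for the unique t with m(t) = n; m is assumed
    continuous and strictly increasing on [lo, hi)."""
    for _ in range(iters):
        mid = (lo + hi) / 2
        if mean(mid) < n:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

mean = lambda t: t / (1 - t)      # m(t) for f(z) = 1/(1-z), R = 1
for n in (1, 5, 25):
    assert abs(solve_t_n(mean, n) - n / (n + 1)) < 1e-9
```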

2.1.3 Normalization and Characteristic Functions

For each \(t \in (0,R)\), we denote the normalization of \(X_t\) by \(\breve{X}_t\):

$$\begin{aligned} \breve{X}_t=\frac{X_t-m(t)}{\sigma (t)}\, \quad \text{ for } t \in (0,R)\, . \end{aligned}$$

The characteristic function of \(X_t\) may be written in terms of the power series f as

$$\begin{aligned} \begin{aligned}\mathbf {E}(e^{\imath \theta X_t})&=\sum _{n=0}^\infty e^{\imath n \theta } \,\mathbf {P}(X_t=n)=\frac{1}{f(t)}\sum _{n=0}^\infty a_n t^n e^{\imath n \theta }\\&=\frac{f(te^{\imath \theta })}{f(t)}\, , \quad \text{ for } t\in (0,R) \text{ and } \theta \in \mathbb {R}\,, \end{aligned} \end{aligned}$$
(2.3)

while for its normalized version \(\breve{X}_t\) we have

$$\begin{aligned} \begin{aligned} \mathbf {E}(e^{\imath \theta \breve{X}_t})&=\mathbf {E}(e^{\imath \theta (X_t-m(t))/\sigma (t)})\\ {}&=\mathbf {E}(e^{\imath \theta X_t/\sigma (t)}) \,e^{-\imath \theta m(t)/\sigma (t)}\, , \quad \text{ for } t\in (0,R)\, \text{ and }\, \theta \in \mathbb {R}\,, \end{aligned} \end{aligned}$$
(2.4)

and so,

$$\begin{aligned} \left| \mathbf {E}(e^{\imath \theta \breve{X}_t})\right| =\left| \mathbf {E}(e^{\imath \theta X_t/\sigma (t)})\right| \, , \quad \text{ for } t\in (0,R) \text{ and }\, \theta \in \mathbb {R}\,. \end{aligned}$$

2.1.4 Scale

Let \(f \in \mathcal {K }\) have radius of convergence R and Khinchin family \((X_t)_{t \in [0,R)}\). Fix an integer \(m\ge 1\) and let \(g_m(z)\) be the power series defined by \(g_m(z)=f(z^m)\). Then \(g_m\) is also in \(\mathcal {K }\) and \(g_m\) has radius of convergence \(R^{1/m}\).

Let \((Y_t)_{t \in [0, R^{1/m})}\) be the Khinchin family of \(g_m\). We have

$$\begin{aligned} Y_t \overset{\mathrm{d}}{=} m\, X_{t^m} \, , \quad \text{ for } 0\le t <R^{1/m}\,. \end{aligned}$$

For, if \(f(z)=\sum _{n=0}^\infty a_n z^n\), then \(g_m(z)=\sum _{k=0}^\infty a_k z^{km}\) and for each \(k \ge 0\) and each \(0\le t<R^{1/m}\), we have that

$$\begin{aligned} \mathbf {P}(Y_t/m=k)=\mathbf {P}(Y_t=km)=\frac{a_k t^{km}}{g_m(t)}=\frac{a_k t^{km}}{f(t^{m})}=\mathbf {P}(X_{t^m}=k)\,. \end{aligned}$$
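This distributional identity can be confirmed numerically from truncated coefficients; the sketch below (our names) takes \(f(z)=e^z\) and \(m=2\), so that \(g_2(z)=f(z^2)\), and compares the two mass functions.

```python
import math

N = 40
fc = [1 / math.factorial(n) for n in range(N)]   # f(z) = e^z
gc = [0.0] * (2 * N)                             # g_2(z) = f(z^2)
for k in range(N):
    gc[2 * k] = fc[k]

def pmf(coeffs, t):
    w = [a * t**n for n, a in enumerate(coeffs)]
    s = sum(w)
    return [x / s for x in w]

t = 0.8
pf = pmf(fc, t**2)   # family of f at t^m
pg = pmf(gc, t)      # family of g_2 at t
# P(Y_t = 2k) = P(X_{t^2} = k), i.e. Y_t has the law of 2 X_{t^2}
for k in range(5):
    assert abs(pg[2 * k] - pf[k]) < 1e-12
```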

2.1.5 Auxiliary Function F

The function f does not vanish on the real interval [0, R). And so, it does not vanish in some simply connected region containing that interval. There we may consider \(\ln f\), a branch of the logarithm of f which is real on [0, R), and define the function

$$\begin{aligned} F(z)=\ln f(e^z), \end{aligned}$$

which is holomorphic in a region containing \((-\infty , \ln R)\).

If f does not vanish anywhere in the disk \(\mathbb {D}(0,R)\), then the auxiliary function F(z) is defined in the whole half plane \(\mathrm{Re}z< \ln R\). In general, the auxiliary function F(z) is defined and holomorphic in the half band-like region \(\Omega _f=\{s+\imath \theta : s<\ln R, |\theta |< \sqrt{2}/\sigma (e^s)\}\). This follows (see, for instance, [3, Prop. 7.8]) from the following lemma.

Lemma 2.3

If Y is a random variable and \(\mathbf {E}(e^{\imath \theta Y})=0\), then \(\theta ^2 \,\mathbf {V}(Y)\ge 2\).

Combining this lemma and formula (2.3), we deduce that \(f(te^{\imath \theta })\ne 0\), if \(|\theta |<\sqrt{2}/\sigma (t)\), and so \(f(e^{s+\imath \theta })\ne 0\), if \(|\theta |<\sqrt{2}/\sigma (e^s)\), for \(s<\ln R\). Therefore, \(f(e^z)\) vanishes nowhere in \(\Omega _f\).

Proof of Lemma 2.3

By considering \(Y-\mathbf {E}(Y)\), we may assume that \(\mathbf {E}(Y)=0\). For a complex valued \(C^2\) function h defined in \(\mathbb {R}\), we have that

$$\begin{aligned} h(y)-h(0)-h^\prime (0)y =\int _0^y \int _0^{u} h^{\prime \prime }(v) \, dv du\,. \end{aligned}$$

Applying this identity to \(h(y)=e^{\imath \phi y}\), we see that the inequality

$$\begin{aligned} |e^{\imath \phi y}-1-\imath y\phi |\le \frac{\phi ^2 y^2}{2} \end{aligned}$$

holds for \(y,\phi \in \mathbb {R}\). Substituting Y for y, and \(\theta \) for \(\phi \) and taking expectations, we conclude that \(1\le \theta ^2 \,\mathbf {E}(Y^2)/2=\theta ^2 \,\mathbf {V}(Y)/2\).\(\square \)

In terms of the auxiliary function F, the mean and variance functions of the Khinchin family of f, given by (2.1), may be expressed as follows:

$$\begin{aligned} {m(t)=F^\prime (s) \quad \text{ and } \quad \sigma ^2(t)=F^{\prime \prime }(s)\,, \quad \text{ for } t=e^s \text{ and } s<\ln R\,.} \end{aligned}$$
(2.5)

2.1.6 Some Basic Examples

We now exhibit explicit formulas for the mean and variance functions and for the characteristic functions of some basic examples.

  1. (a)

    Let \(f(z)=1+z\). In this case \(R=\infty \), and the mean and variance functions, given by (2.1), are \(m(t)=t/(1+t)\) and \(\sigma ^2(t)=t/(1+t)^2\). For each \(t >0\), the variable \(X_t\) is a Bernoulli variable with parameter \(t/(1+t)\), and its characteristic function is, see formula (2.3),

    $$\begin{aligned} \mathbf {E}(e^{\imath \theta X_t})=\frac{f(te^{\imath \theta })}{f(t)}=\frac{1+t e^{\imath \theta }}{1+t}\, , \quad \text{ for } \theta \in \mathbb {R} \text{ and } t>0\,, \end{aligned}$$

    and thus, using formula (2.4),

    $$\begin{aligned} \mathbf {E}(e^{\imath \theta \breve{X}_t})=\frac{te^{\imath \theta /\sqrt{t}}+e^{-\imath \theta \sqrt{t}}}{1+t}\, , \quad \text{ for } \theta \in \mathbb {R}\quad \text{ and }\, t>0\,. \end{aligned}$$
    (2.6)
  2. (b)

    Let \(f(z)={1}/{(1-z)}\). In this case \(R=1\), and the mean and variance functions are \(m(t)=t/(1-t)\) and \(\sigma ^2(t)=t/(1-t)^2\). For each \(t\in (0,1)\), the variable \(X_t\) is a geometric variable (number of failures until first success) of parameter \(1-t\), and its characteristic function is, see formula (2.3),

    $$\begin{aligned} \mathbf {E}(e^{\imath \theta X_t})=\frac{f(te^{\imath \theta })}{f(t)}=\frac{1-t}{1-te^{\imath \theta }} \,, \quad \text{ for }\, \theta \in \mathbb {R} \text{ and } t\in (0,1)\,, \end{aligned}$$

    and thus, using formula (2.4), we have that

    $$\begin{aligned} \mathbf {E}(e^{\imath \theta \breve{X}_t})= & {} \frac{1-t}{1-te^{\imath \theta (1-t)/\sqrt{t}}} \, e^{-\imath \theta \sqrt{t}}\nonumber \\= & {} \frac{1-t}{e^{\imath \theta \sqrt{t}}-te^{\imath \theta /\sqrt{t}}}\,, \quad \text{ for } \theta \in \mathbb {R}\,\,\quad \text{ and } t\in (0,1)\,. \end{aligned}$$
    (2.7)
  3. (c)

    Let \(f(z)=e^z\). In this case \(R=\infty \), and the mean and variance functions are \(m(t)=t\) and \(\sigma ^2(t)=t\). For each \(t>0\), the variable \(X_t\) in its Khinchin family follows a Poisson distribution with parameter t, and its characteristic function is, see formula (2.3),

    $$\begin{aligned} \mathbf {E}(e^{\imath \theta X_t})=\frac{f(te^{\imath \theta })}{f(t)}=e^{t(e^{\imath \theta }-1)}\,, \quad \text{ for }\, \theta \in \mathbb {R} \text{ and } t>0\,, \end{aligned}$$

    and thus, using formula (2.4), we have that

    $$\begin{aligned} \mathbf {E}(e^{\imath \theta \breve{X}_t})=\exp \!\left( t\left( e^{\imath (\theta /\sqrt{t})}-1-\frac{\imath \theta }{\sqrt{t}}\right) \right) \, , \quad \text{ for } \theta \in \mathbb {R}\,\,\text{ and } \, t >0\, \cdot \end{aligned}$$
    (2.8)
  4. (d)

    Let

    $$\begin{aligned} f(z)=\exp (e^z-1)\triangleq \sum _{n=0}^\infty \frac{{\mathcal {B}_n}}{n!}\, z^n\, ,\quad {\text{ for } z \in \mathbb {C}}\,, \end{aligned}$$

    where \(\mathcal {B}_n\) (not to be confused with the nth Bernoulli number \(B_n\)) is the nth Bell number, which counts the number of partitions of the set \(\{1, \ldots , n\}\); see [9, p. 109]. Then \(R=\infty \), and the mean and variance functions are \(m(t)=t e^t\) and \(\sigma ^2(t)=t(t+1)e^t\). The characteristic function is given by,

    $$\begin{aligned} \mathbf {E}(e^{\imath \theta X_t})=\frac{f(te^{\imath \theta })}{f(t)}=\exp \left( e^{t e^{\imath \theta }}-e^t\right) \, , \quad \text{ for } \theta \in \mathbb {R} \text{ and } t >0\, , \end{aligned}$$

    and thus, using formula (2.4), we have that

    $$\begin{aligned} \mathbf {E}(e^{\imath \theta \breve{X}_t})= \exp \!\left( e^{t e^{\imath \theta e^{-t/2}/\sqrt{t(t+1)}}}-e^t-\imath \theta \sqrt{\frac{t}{t+1}}\,e^{t/2}\right) \, , \quad \text{ for } \theta \in \mathbb {R} \text{ and } t >0\, . \end{aligned}$$

    The asymptotic formula for the \(\mathcal {B}_n\), due to Moser and Wyman [22], is dealt with in Sect. 5.2.
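The identification of the \(\mathcal {B}_n\) as \(n!\) times the Taylor coefficients of \(\exp (e^z-1)\) can be verified with a short computation (the code and names are ours): exponentiate the series \(h(z)=e^z-1\) via the standard recursion for \(g=\exp (h)\), namely \(n\,g_n=\sum _k k\, h_k\, g_{n-k}\), and compare with the familiar Bell-number recurrence \(\mathcal {B}_{n+1}=\sum _k \binom{n}{k}\mathcal {B}_k\).

```python
import math

N = 12
# Taylor coefficients of h(z) = e^z - 1
h = [0.0] + [1 / math.factorial(n) for n in range(1, N)]
# g = exp(h) as a power series: from g' = h' g, n g_n = sum_{k=1}^n k h_k g_{n-k}
g = [1.0] + [0.0] * (N - 1)
for n in range(1, N):
    g[n] = sum(k * h[k] * g[n - k] for k in range(1, n + 1)) / n
bell_from_series = [round(math.factorial(n) * g[n]) for n in range(N)]
# Bell numbers via the standard recurrence B_{n+1} = sum_k C(n,k) B_k
bell = [1]
for n in range(N - 1):
    bell.append(sum(math.comb(n, k) * bell[k] for k in range(n + 1)))
assert bell_from_series == bell
```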

2.1.7 Partition Functions

We next turn our attention to generating functions of partitions of integers, standard and with a variety of restrictions on the admitted parts. These examples of functions in \(\mathcal {K}\) are quite relevant in this paper. Asymptotic formulas for their coefficients are dealt with in Sect. 6.

The mean and variance functions and the characteristic functions of the Khinchin families of these partition functions are not as explicit as in the previous examples. Closed formulas for their mean and variance functions, involving series, are presented in Sect. 2.2.1, and convenient approximations are exhibited in Sect. 6.1.

  1. (e)

    The ogf of partitions, the partition function, given by

    $$\begin{aligned} P(z)=\prod _{j=1}^\infty \frac{1}{1-z^j}=\sum _{n=0}^\infty p(n)\, z^n\, , \quad \text{ for } z \in \mathbb {D}\,, \end{aligned}$$

    is in \(\mathcal {K }\). The ogf Q(z) of partitions into distinct parts (which is also the ogf of partitions into odd parts),

    $$\begin{aligned} Q(z)=\prod _{j=1}^\infty (1+z^j)=\prod _{j=0}^\infty \frac{1}{1-z^{2j+1}}=\sum _{n=0}^\infty q(n)\, z^n\,, \quad \text{ for }\, z \in \mathbb {D}\,, \end{aligned}$$

    is also in \(\mathcal {K }\). Observe that

    $$\begin{aligned} Q(z)=\frac{P(z)}{P(z^2)}\, , \quad \text{ for } z \in \mathbb {D}\, . \end{aligned}$$
    (2.9)

    For integers \(a \ge 1, b \ge 1\), the infinite product

    $$\begin{aligned} P_{a,b}(z)=\prod _{j=0}^\infty \frac{1}{1-z^{aj+b}}\, , \quad \text{ for } z \in \mathbb {D}\,, \end{aligned}$$

    the ogf of the partitions whose parts lie in the arithmetic progression \(\{aj+b:j \ge 0\}\), is also in \(\mathcal {K }\). Observe that \(P_{1,1}\equiv P\) and that \(P_{2,1}\equiv Q\).

  2. (f)

    For integers \(a\ge 1, b \ge 0\), the infinite product \(W_a^b(z)\) given by

    $$\begin{aligned} W_a^b(z)=\prod _{j=1}^\infty \left( \frac{1}{1-z^{j^a}}\right) ^{j^b}\, , \quad \text{ for } z \in \mathbb {D}, \end{aligned}$$

    is also in \(\mathcal {K }\). We have \(W_1^0\equiv P\). Also, \(W_1^{1}(z)\), known as the MacMahon function [20], turns out to be the ogf of plane partitions; see [9, p. 580], and Bender–Knuth [4], for a simple proof of this fact.

Besides, \(W_a^0(z)\) is the ogf of partitions with parts which are ath powers of positive integers.

In general, \(W_a^b(z)\) is the ogf of partitions with parts which are ath powers of positive integers and with number of colors \(j^b\) for part \(j^a\), for \(j \ge 1\). See Remark 2.4 for the terminology.
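The product representations above lend themselves to direct computation of coefficients by the usual dynamic-programming passes, one pass per factor. As a small sketch (our code), we extract the coefficients q(n) both from \(\prod _j (1+z^j)\) and from \(\prod _j 1/(1-z^{2j+1})\), confirming numerically that partitions into distinct parts and partitions into odd parts are equinumerous.

```python
N = 30
# coefficients of prod_{j>=1} (1 + z^j): partitions into distinct parts
distinct = [1] + [0] * N
for j in range(1, N + 1):
    for n in range(N, j - 1, -1):    # backward pass: each part used at most once
        distinct[n] += distinct[n - j]
# coefficients of prod_{j odd} 1/(1 - z^j): partitions into odd parts
odd = [1] + [0] * N
for j in range(1, N + 1, 2):
    for n in range(j, N + 1):        # forward pass: unlimited repetitions
        odd[n] += odd[n - j]
assert distinct == odd               # the two products for Q agree, coefficient by coefficient
```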

Remark 2.4

(Colored/weighted partitions) Given a sequence \((b_j)_{j \ge 1}\) of integers \(b_j\ge 0\), the infinite product

$$\begin{aligned} \prod _{j=1}^\infty \left( \frac{1}{1-z^j}\right) ^{b_j} \end{aligned}$$

is the ogf of colored partitions with coloring sequence \((b_j)_{j \ge 1}\). A colored partition of n with the above coloring sequence is an array of integers \(n(j,k)\ge 0\) with \(j \ge 1\) and \(1 \le k \le b_j\), and \(n=\sum _{j \ge 1; 1\le k \le b_j} n(j,k)\), i.e., a partition whose part j may come in \(b_j\) different colors. In the case of \(W_1^b\), the coloring sequence is \(b_j=j^b\), for \(j \ge 1\).

The colored partitions with the above coloring sequence are partitions in which the part j may appear in any of \(b_j\) available colors, with the order of the colors not mattering.

These colored partitions are also called weighted partitions; the weights being the \(b_j\). See Granovsky–Stark [11], and Granovsky et al. [12], where asymptotics about these (quite general) partitions are approached elegantly and systematically via Meinardus’ theorem, [21], and Khinchin families.
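Coefficients of these colored-partition products are again computable by elementary dynamic programming, applying the geometric-factor pass \(b_j\) times for part j. As a sketch (our code), we take the MacMahon function \(W_1^1\), with coloring sequence \(b_j=j\), and recover the first plane partition numbers 1, 1, 3, 6, 13, 24, 48.

```python
N = 10
coeff = [1] + [0] * N
for j in range(1, N + 1):
    for _ in range(j):               # b_j = j colors for part j (W_1^1, MacMahon)
        for n in range(j, N + 1):    # one pass multiplies by 1/(1 - z^j)
            coeff[n] += coeff[n - j]
# plane partitions of n = 0, 1, ..., 6
assert coeff[:7] == [1, 1, 3, 6, 13, 24, 48]
```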

2.1.8 Products and Independence

If \(f, g\in \mathcal {K }\), then the product \(h=f\cdot g\) is also in \(\mathcal {K }\). If both f and g have radius of convergence at least R, then the product h has radius of convergence at least R. If the respective Khinchin families of fg and h are \((X_t)_{t \in [0,R)}\), \((Y_t)_{t \in [0,R)}\) and \((Z_t)_{t \in [0,R)}\), then the law of \(Z_t\) is the law of the independent sum \(X_t \oplus Y_t\) of the laws of \(X_t\) and \(Y_t\):

$$\begin{aligned} \mathbf {P}(Z_t=n)=\sum _{k=0}^n \mathbf {P}(X_t=k)\,\mathbf {P}(Y_t=n-k)\,, \quad \text{ for } \text{ each } n \ge 0\,. \end{aligned}$$

Thus, products of functions of \(\mathcal {K }\) become, on the Khinchin family side, sums of independent variables.
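A quick numerical check of this convolution identity (code and names ours), with \(f(z)=e^z\) and \(g(z)=1/(1-z)\) truncated at order 40:

```python
import math

N = 40
fa = [1 / math.factorial(n) for n in range(N)]    # f(z) = e^z
gb = [1.0] * N                                    # g(z) = 1/(1-z)
# coefficients of the product h = f * g (Cauchy convolution)
hc = [sum(fa[k] * gb[n - k] for k in range(n + 1)) for n in range(N)]

def pmf(coeffs, t):
    w = [a * t**n for n, a in enumerate(coeffs)]
    s = sum(w)
    return [x / s for x in w]

t = 0.3
pX, pY, pZ = pmf(fa, t), pmf(gb, t), pmf(hc, t)
for n in range(5):
    conv = sum(pX[k] * pY[n - k] for k in range(n + 1))
    assert abs(pZ[n] - conv) < 1e-8   # law of Z_t = law of the independent sum X_t + Y_t
```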

2.1.9 Convergence

Let \((f_k)_{k \ge 1}\) be a sequence of functions in \(\mathcal {K }\), all of them with radius of convergence at least \(R>0\), which converges uniformly on compacts sets of \(\mathbb {D}(0,R)\) to a function \(f\in \mathcal {K }\). Let \((X^{[k]}_t)_{t \in [0,R)}\) and \((X_t)_{t \in [0,R)}\) be the Khinchin families of \(f_k\), for \(k \ge 1\), and, respectively, of f.

Then, for each \(t \in [0,R)\), we have that \((X^{[k]}_t)_{k \ge 1}\) converges in distribution to \(X_t\) as \(k\rightarrow \infty \).

Also, for any moment, say, of order \(q\ge 1\), we have \(\lim _{k \rightarrow \infty } \mathbf {E}(({X_t^{[k]}})^q)=\mathbf {E}(X_t^q)\), since if \(\mathcal {D}\) is the operator \(\mathcal {D}=z(d/dz)\), then \(\mathbf {E}(X_t^q)=\mathcal {D}^q(f)(t)/f(t)\).

In particular, for each \(t \in [0,R)\), we have that \(\lim _{k \rightarrow \infty } m_k(t)=m(t)\) and that \(\lim _{k \rightarrow \infty } \sigma ^2_k(t)=\sigma ^2(t)\), where \(m_k(t), \sigma ^2_k(t)\) and \(m(t), \sigma ^2(t)\) are, respectively, the mean and variance functions of \(f_k\) and f.

2.2 Infinite Products

Let \((f_j)_{j \ge 1}\) be a sequence of functions in \(\mathcal {K }\) all with radius of convergence at least \(R>0\). Assume that the infinite product

$$\begin{aligned} f(z)=\prod _{j=1}^\infty f_j(z) \end{aligned}$$

converges absolutely and uniformly on compact subsets of \(\mathbb {D}(0,R)\).

The product f has non-negative Taylor coefficients. Since \(f_j(0)\ne 0\), for each j, we have \(f(0)\ne 0\), and actually, \(f(0)>0\). Moreover, from

$$\begin{aligned} \sum _{j=1}^\infty \frac{f_j^\prime (t)}{f_j(t)}=\frac{f^\prime (t)}{f(t)}\,, \quad \text{ for } t \in [0,R)\,, \end{aligned}$$

we see that the infinite product f is non-constant. Thus the product function f is in \(\mathcal {K }\) and its power series has radius of convergence at least R.

Let \((X_t)_{t \in (0,R)}\) be the Khinchin family of f and let \(m(t)=\mathbf {E}(X_t)\) and \(\sigma ^2(t)=\mathbf {V}(X_t)\) be its mean and variance functions. For each \(j \ge 1\), the Khinchin family of \(f_j(z)\) is denoted by \(X_{j,t}\) and its mean and variance functions by \(m_j(t)\) and \(\sigma _j^2(t)\), for \(t \in [0,R)\).

Since \(\prod _{j=1}^N f_j(z)\) converges uniformly as \(N \rightarrow \infty \) on compact subsets of \(\mathbb {D}(0,R)\), we have that, for each \(t \in (0,R)\), the sum \( \bigoplus _{j=1}^N X_{j,t}\) converges in law to \(X_t\), as \(N \rightarrow \infty \), and that for the means and variances we have

$$\begin{aligned} m(t)=\sum _{j=1}^\infty m_j(t)\quad \text{ and } \quad \sigma ^2(t)=\sum _{j=1}^\infty \sigma ^2_j(t), \quad \text{ for } \text{ each }\, t\in [0,R). \end{aligned}$$

2.2.1 Partition Functions as Infinite Products

As an illustration of the above, we consider now partitions and partitions into distinct parts.

Consider first the partition function

$$\begin{aligned} P(z)=\prod _{j=1}^\infty \frac{1}{1-z^j}, \quad \text{ for } \, |z|<1. \end{aligned}$$

Let \((X_t)_{t \in [0,1)}\) be its Khinchin family. And let \((Y_t)_{t \in [0,1)}\) be the Khinchin family of \(1/(1-z)\). Because of scaling properties, see Sect. 2.1.4, we have that

$$\begin{aligned} X_t{\overset{\mathrm{d}}{=}}\bigoplus _{j=1}^\infty j Y_{t^j} \, , \quad \text{ for } \, t \in (0,1)\,, \end{aligned}$$

and each \(Y_{t^j}\) is a geometric variable with parameter \(1-t^j\). For its mean and variance functions, m(t) and \( \sigma ^2(t)\), we have that

$$\begin{aligned} m(t)=\sum _{j=1}^\infty \frac{j t^{j}}{1-t^j} \quad \text{ and } \quad \sigma ^2(t)=\sum _{j=1}^\infty \frac{j^2 t^{j}}{(1-t^j)^2}\,, \quad \text{ for } t \in (0,1)\, . \end{aligned}$$

We will obtain closed form approximations of m(t) and \(\sigma ^2(t)\) as \(t \uparrow 1\) in Sect. 6.1.
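As a numerical cross-check (code and names ours), the series for m(t) above agrees with \(t P'(t)/P(t)\) computed from a truncated product via the logarithmic derivative:

```python
import math

J = 200  # truncation of product and series; t^J is negligible for t = 0.9

def m_series(t):
    # m(t) = sum_j j t^j / (1 - t^j)
    return sum(j * t**j / (1 - t**j) for j in range(1, J + 1))

def m_from_product(t, h=1e-6):
    # m(t) = t P'(t)/P(t) = t (log P)'(t), with P a truncated product
    logP = lambda x: -sum(math.log(1 - x**j) for j in range(1, J + 1))
    return t * (logP(t + h) - logP(t - h)) / (2 * h)

assert abs(m_series(0.9) - m_from_product(0.9)) < 1e-3
```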

If \((X_t)_{t \in [0,1)}\) is the Khinchin family of the ogf

$$\begin{aligned} Q(z)=\prod _{j=1}^\infty (1+z^j) \end{aligned}$$

of partitions into distinct parts, we then have, because of scaling properties, see Sect. 2.1.4, that

$$\begin{aligned} X_t {\overset{\mathrm{d}}{=}}\bigoplus _{j=1}^\infty j Y_{t^j} \, , \quad \text{ for } t \in (0,1)\,, \end{aligned}$$

and each \(Y_{t^j}\) is a Bernoulli variable with parameter \(t^j/(1+t^j)\). For its mean and variance functions we have that

$$\begin{aligned} m(t)=\sum _{j=1}^\infty \frac{j t^{j}}{1+t^j} \quad \text{ and }\quad \sigma ^2(t)=\sum _{j=1}^\infty \frac{j^2 t^{j}}{(1+t^j)^2}\,, \quad \text{ for } t \in (0,1)\, . \end{aligned}$$

Analogous expressions for the mean and variance functions of other partition functions like \(P_{a,b}\) or \(W_a^b\) will appear later in Sect. 6.1.

2.3 Hayman’s Identity

For \(f(z)=\sum _{n=0}^\infty a_n z^n \in \mathcal {K }\), Cauchy’s formula for the coefficient \(a_n\) in terms of the characteristic function of its Khinchin family \((X_t)_{t \in [0,R)}\) reads

$$\begin{aligned} a_n=\frac{f(t)}{2\pi \,t^n }\int _{|\theta |<\pi } \mathbf {E}(e^{\imath \theta X_t})\,e^{-\imath \theta n} \, d \theta \, , \quad \text{ for } \text{ each } t \in (0,R) \text{ and } n\ge 1\, . \end{aligned}$$

In terms of the characteristic function of the normalized variable \(\breve{X}_t\), it becomes, for each \(t \in (0,R)\) and \(n \ge 1\),

$$\begin{aligned} a_n=\frac{f(t)}{2\pi \,t^n \,\sigma (t)}\int _{|\theta |<\pi \sigma (t)} \mathbf {E}(e^{\imath \theta \breve{X}_t})\, e^{-\imath \theta (n-m(t))/\sigma (t)} \, d \theta \,. \end{aligned}$$

If \(M_f=\infty \), we may take for each \(n\ge 1\) the (unique) radius \(t_n \in (0,R)\) so that \(m(t_n)=n\), to write

$$\begin{aligned} a_n=\frac{f(t_n)}{2\pi \,t_n^n \,\sigma (t_n)}\int _{|\theta |<\pi \sigma (t_n)} \mathbf {E}(e^{\imath \theta \breve{X}_{t_n}}) \, d \theta \, , \quad \text{ for } \text{ each } n \ge 1\,, \end{aligned}$$
(2.10)

which we call Hayman’s identity.

If we write this identity (2.10) as

$$\begin{aligned} \frac{a_n 2\pi \,t_n^n \,\sigma (t_n)}{f(t_n)}=\int _{|\theta |<\pi \sigma (t_n)} \mathbf {E}(e^{\imath \theta \breve{X}_{t_n}}) \, d \theta \end{aligned}$$

we see that an asymptotic formula for \(a_n\) follows if one is able to determine the behaviour of \(\mathbf {E}(e^{\imath \theta \breve{X}_{t}})\) as \(t \rightarrow R\). (Recall that \(t_n\rightarrow R\), as \(n \rightarrow \infty \), see Sect. 2.1.2.)

Of course, once that has been achieved one still needs to determine the \(t_n\) and also appropriate expressions for the \(f(t_n)\) and the \(\sigma (t_n)\), which, in general, is not so direct. In any case, the plan above is the route we shall follow.
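The plan just described can be tried out numerically on \(f(z)=e^z\), where everything is explicit: \(m(t)=t\) gives \(t_n=n\), \(\sigma (t_n)=\sqrt{n}\), \(f(t_n)=e^n\), and the characteristic function of \(\breve{X}_{t_n}\) is given by formula (2.8). The sketch below (our code) evaluates the integral in Hayman’s identity (2.10) by a midpoint rule and recovers \(a_n=1/n!\).

```python
import cmath, math

n = 10
t, s = float(n), math.sqrt(n)        # t_n = n and sigma(t_n) = sqrt(n) for e^z

def char_norm(theta):
    # formula (2.8): E(exp(i theta X̆_t)) for the Poisson family
    u = 1j * theta / s
    return cmath.exp(t * (cmath.exp(u) - 1 - u))

M = 20000                            # midpoint rule on |theta| < pi sigma(t_n)
half = math.pi * s
step = 2 * half / M
integral = sum(char_norm(-half + (k + 0.5) * step) for k in range(M)) * step
# Hayman's identity (2.10): a_n = f(t_n) / (2 pi t_n^n sigma(t_n)) * integral
a_n = (math.exp(t) / (2 * math.pi * t**n * s) * integral).real
assert abs(a_n * math.factorial(n) - 1) < 1e-6
```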

3 Gaussian Khinchin Families

In this section we discuss the notions of Gaussian and strongly Gaussian Khinchin families, obtain Hayman’s asymptotic formula for the coefficients of strongly Gaussian power series and, finally, introduce the class of Hayman functions, membership of which provides a criterion for being strongly Gaussian.

As we shall see, strongly Gaussian power series are Gaussian (Theorem A), and power series in the Hayman class are strongly Gaussian (Theorem 3.8).

3.1 Gaussian Khinchin Families

Definition 3.1

(Gaussian power series) A power series \(f \in \mathcal {K }\) and its Khinchin family \((X_t)_{t \in [0,R)}\) are termed Gaussian if \(\breve{X}_t\) converges, as \(t\uparrow R\), in distribution to the standard normal or, equivalently, if

$$\begin{aligned} \lim _{t \uparrow R} \mathbf {E}(e^{\imath \theta \breve{X}_t})=e^{-\theta ^2/2}\, , \quad \text{ for } \text{ each } \theta \in \mathbb {R}\,. \end{aligned}$$

For the exponential function \(f(z)=e^z\), we deduce directly by taking the limit as \(t \uparrow \infty \) in formula (2.8) of the characteristic function of its normalized family, with \(\theta ~\in ~\mathbb {R}\) fixed, that \(e^z\) is Gaussian. This fact means in particular that the normalized version of the Poisson random variable \(X_t\), that is, \((X_t-t)/\sqrt{t}\), converges in distribution to the standard normal.
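This convergence is easily observed numerically; in the sketch below (our code), the characteristic function of \(\breve{X}_t\) from formula (2.8) approaches \(e^{-\theta ^2/2}\) as t grows.

```python
import cmath

def char_norm_poisson(theta, t):
    # formula (2.8): E(exp(i theta X̆_t)) for f(z) = e^z
    u = 1j * theta / t**0.5
    return cmath.exp(t * (cmath.exp(u) - 1 - u))

theta = 1.3
target = cmath.exp(-theta**2 / 2)
err_small_t = abs(char_norm_poisson(theta, 10.0) - target)
err_large_t = abs(char_norm_poisson(theta, 10000.0) - target)
# the error shrinks as t increases: the normalized family tends to the standard normal
assert err_large_t < err_small_t and err_large_t < 1e-2
```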

The function \(f(z)=1/(1-z)\) is not Gaussian. By taking limits in formula (2.7) as \(t\uparrow 1\), with \(\theta \in \mathbb {R}\) fixed, we obtain that

$$\begin{aligned} \lim _{t \uparrow 1} \mathbf {E}(e^{\imath \theta \breve{X}_t})= \frac{e^{-\imath \theta }}{1-\imath \theta }\, , \quad \text{ for } \text{ each }\,\theta \in \mathbb {R}\, . \end{aligned}$$

This actually means that \(\breve{X}_t\) converges in distribution towards a variable Z, where \(Z+1\) is an exponential variable of parameter 1.

The function \(f(z)=1+z\) is not Gaussian, either. By taking limits in formula (2.6) as \(t\uparrow \infty \), with \(\theta \in \mathbb {R}\) fixed, we obtain that

$$\begin{aligned} \lim _{t \uparrow \infty } \mathbf {E}(e^{\imath \theta \breve{X}_t})= 1\, , \quad \text{ for } \text{ each } \,\theta \in \mathbb {R}\,. \end{aligned}$$

In other terms, \(\breve{X}_t\) converges in distribution towards the constant 0.

3.1.1 A Criterion for Gaussianity

The following simple criterion for gaussianity of functions \(f\in \mathcal {K }\) non-vanishing in \(\mathbb {D}(0,R)\), given in terms of the auxiliary function F of Sect. 2.1.5, is implicit in Hayman’s paper, see [16, Lem. 4].

Theorem 3.2

(A criterion for gaussianity) If \(f\in \mathcal {K }\) has radius of convergence \(R>0\) and vanishes nowhere in \(\mathbb {D}(0,R)\), and if for the auxiliary function F one has

$$\begin{aligned} \lim _{s \uparrow \ln R} \frac{\sup _{\phi \in \mathbb {R}} |F^{\prime \prime \prime }(s+i\phi )|}{F^{\prime \prime }(s)^{3/2}}=0\, , \end{aligned}$$
(3.1)

then f is Gaussian.

Proof

Since f vanishes nowhere in \(\mathbb {D}(0,R)\), the auxiliary function \(F(z)=\ln f(e^z)\) is defined and is holomorphic in the whole half-plane \(\{z \in \mathbb {C}: \mathrm{Re}z<\ln R\}\). We have, for \(s < \ln R\) and \(\theta \in \mathbb {R}\), that

$$\begin{aligned} \left| F(s+\imath \theta )-F(s)-F^{\prime }(s) \imath \theta + F^{\prime \prime }(s)\frac{\theta ^2}{2}\right| \le \sup _{\phi \in \mathbb {R}} \left| F^{\prime \prime \prime }(s+i\phi )\right| \, \frac{|\theta |^3}{6}\, \cdot \end{aligned}$$

Since \(F^{\prime }(s)=m(e^s)\) and \(F^{\prime \prime }(s)=\sigma ^2(e^s)\) for each \(s < \ln R\), writing \(e^s=t\) and substituting \(\theta \) by \(\theta /\sigma (t)\) we deduce that

$$\begin{aligned} \left| \ln f(te^{\imath \theta /\sigma (t)})-\ln f(t)- \imath \frac{m(t)}{\sigma (t)}\theta + \frac{\theta ^2}{2}\right| \le \frac{\sup _{\phi \in \mathbb {R}} \left| F^{\prime \prime \prime }(s+i\phi )\right| }{\sigma ^3(t)}\, \frac{|\theta |^3}{6}\, , \end{aligned}$$

or, equivalently, for \(t=e^s\) with \(s< \ln R\) and \(\theta \in \mathbb {R}\),

$$\begin{aligned} \left| \ln \mathbf {E}(e^{\imath \theta \breve{X}_t})+\frac{\theta ^2}{2}\right| \le \frac{\sup _{\phi \in \mathbb {R}} \left| F^{\prime \prime \prime }(s+i\phi )\right| }{F^{\prime \prime }(s)^{3/2}}\, \frac{|\theta |^3}{6}\,\cdot \end{aligned}$$
(3.2)

By hypothesis (3.1), the right-hand side of (3.2) tends to 0 as \(s \uparrow \ln R\), for each fixed \(\theta \in \mathbb {R}\), and so \(\lim _{t \uparrow R} \mathbf {E}(e^{\imath \theta \breve{X}_t})=e^{-\theta ^2/2}\); thus f is Gaussian. \(\square \)

3.1.2 Some Applications of the Criterion of Theorem 3.2

Recall the auxiliary function \(F(z)=\ln f(e^z)\) from Sect. 2.1.5.

For the exponential function \(f(z)=e^z\), the auxiliary function is given by \(F(z)=e^z\), and the gaussianity of f follows readily from Theorem 3.2.

Similarly, if \(R(z)=\sum _{j=0}^N b_j z^j\) is a polynomial of degree N such that \(e^{R(z)}\in \mathcal {K }\), then \(e^{R(z)}\) is Gaussian. For in this case \(F(z)=R(e^z)\), and for \(z=s+\imath \phi \) we have that

$$\begin{aligned} |F^{\prime \prime \prime }(z)|=\left| \sum _{j=0}^N b_j \,j^3 e^{jz}\right| \le \sum _{j=0}^N |b_j| \,j^3 e^{js}=O(e^{Ns})\, , \end{aligned}$$

while

$$\begin{aligned} (\star ) \qquad F^{\prime \prime }(s)=\sum _{j=0}^N b_j\,j^2 e^{js}\sim b_N N^2 e^{Ns}\, \quad \text{ as } s \uparrow \infty \, , \end{aligned}$$

and gaussianity follows from Theorem 3.2. Observe that \((\star )\) implies that \(b_N\) is real and \(b_N >0\); besides, since \(e^R \in \mathcal {K }\), the coefficients of the polynomial R must be real numbers.

For the partition function \(P(z)=\prod _{j=1}^\infty 1/(1-z^j)\), the auxiliary function F may be written as

$$\begin{aligned} F(z)=\sum _{j,k\ge 1} \frac{1}{k} \,e^{kj z}\, , \quad \text{ for } \mathrm{Re}z<0\,. \end{aligned}$$

For z such that \(\mathrm{Re}z<0\) and integer \(q\ge 1\), the qth derivative of F is given by

$$\begin{aligned} F^{(q)}(z)=\sum _{j,k\ge 1} k^{q-1} j^q \, e^{kj z}. \end{aligned}$$

For \(z=s+\imath \phi \), with \(s <0\) and \(\phi \in \mathbb {R}\), we then have that

$$\begin{aligned} \left| F^{\prime \prime \prime }(s+\imath \phi )\right| \le \sum _{j,k\ge 1} k^2 j^3 e^{kj s}=F^{\prime \prime \prime }(s)\, . \end{aligned}$$

Condition (3.1) requires one to check that

$$\begin{aligned} \lim _{s \downarrow 0} \frac{F^{\prime \prime \prime }(-s)}{F^{\prime \prime }(-s)^{3/2}}=0\, . \end{aligned}$$

Let us see. For \(s>0\) and integer \(q \ge 1\), we have that

$$\begin{aligned} s^{q+1} F^{(q)}(-s)=\sum _{k \ge 1} k^{q-1} \, s \sum _{j\ge 1} (js)^q \, e^{-k (js)} \, , \end{aligned}$$

and so

$$\begin{aligned} \lim _{s \downarrow 0} s^{q+1} F^{(q)}(-s)= & {} \sum _{k \ge 1} k^{q-1} \int _{0}^\infty x^q e^{-kx} dx=\sum _{k\ge 1}\frac{1}{k^2} \int _0^\infty y^{q} e^{-y} dy\\= & {} \zeta (2) \,\Gamma (q+1)\, , \end{aligned}$$

and, thus,

$$\begin{aligned} F^{(q)}(-s)\sim \zeta (2) \,\Gamma (q+1) \frac{1}{s^{q+1}}\, , \quad \text{ as } s \downarrow 0\,. \end{aligned}$$
(3.3)

In particular,

$$\begin{aligned} \frac{F^{\prime \prime \prime }(-s)}{F^{\prime \prime }(-s)^{3/2}}\sim \frac{3}{\sqrt{2\zeta (2)}} \, s^{1/2}\, , \quad \text{ as } s \downarrow 0\,. \end{aligned}$$

We conclude that the partition function P is Gaussian.
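The asymptotics (3.3) behind this conclusion can be spot-checked numerically by truncating the double sum for \(F^{(q)}(-s)\); a Python sketch (ours; the truncation cutoff is an arbitrary choice):

```python
import math

def partition_F_deriv(q, s, cutoff=50.0):
    # Truncated double sum F^{(q)}(-s) = sum_{j,k >= 1} k^(q-1) j^q e^{-kjs};
    # terms with k*j*s > cutoff are negligible and dropped
    total = 0.0
    for j in range(1, int(cutoff / s) + 1):
        for k in range(1, int(cutoff / (j * s)) + 1):
            total += k ** (q - 1) * j ** q * math.exp(-k * j * s)
    return total

zeta2 = math.pi ** 2 / 6
s = 0.02
for q in (2, 3):
    predicted = zeta2 * math.factorial(q) / s ** (q + 1)  # zeta(2) Gamma(q+1) / s^{q+1}
    print(q, partition_F_deriv(q, s) / predicted)
```

The printed ratios are close to 1 already for moderate s, the relative error being of order s.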

Likewise, one readily verifies that the ogfs Q and \(P_{a,b}\) are Gaussian.

For the infinite product \(W_a^b(z)\) with \(a\ge 1, b\ge 0\), let \(F_{a,b}(z)\) be the corresponding auxiliary function. The following asymptotic formula for the qth derivative of \(F_{a,b}\):

$$\begin{aligned} F_{a,b}^{(q)}(-s) \sim \frac{1}{a} \,\zeta \left( 1+\frac{b+1}{a}\right) \,\Gamma \left( q+\frac{b+1}{a}\right) \, \frac{1}{s^{q+(b+1)/a}}\, , \quad \text{ as } s \downarrow 0\, , \end{aligned}$$
(3.4)

may be obtained with an argument very much like the one above for the partition function P. We deduce that

$$\begin{aligned} \frac{F_{a,b}^{\prime \prime \prime }(-s)}{F_{a,b}^{\prime \prime }(-s)^{3/2}}\asymp s^{(b+1)/(2a)}\, , \quad \text{ as } s \downarrow 0\, ,\end{aligned}$$

thus showing that each \(W_a^b\), with \(a\ge 1, b\ge 0\), is Gaussian.

3.1.3 A Criterion of Gaussianity for \(f=e^g\) in Terms of g

Quite often, we start with a power series g with non-negative coefficients and are actually interested in its exponential \(f=e^g\).

The symbolic method of Combinatorics, see Flajolet–Sedgewick [9, Ch. II], gives that if g is the exponential generating function (egf) of a combinatorial class \(\mathcal {G}\) of labelled objects with \(g(0)=0\), then \(e^g\) is the egf of the class of sets formed with the objects of \(\mathcal {G}\). For instance,

  • \(e^{e^z-1}\) is the egf of the class of sets of non-empty sets (or partitions of a set), see [9, p. 107], to be discussed in Sect.  5.2.1,

  • \(e^{ze^z}\) is the egf of the class of sets of pointed sets (or idempotent functions), see [9, p. 131], to be discussed in Sect. 5.2.2,

  • \(e^{z/(1-z)}\) is the egf of the class of sets of permutations (or ‘fragmented permutations’), see [9, p. 125], to be discussed in Sect. 5.3.

Observe that in the first two instances \(R=\infty \), while in the last one, \(R=1\).

The following theorem, a consequence of Theorem 3.2, exhibits conditions on g which imply that \(f=e^g\) is Gaussian.

Theorem 3.3

(A criterion of gaussianity for \(f=e^g\) in terms of g) Let \(f\in \mathcal {K }\) be such that \(f=e^g\), where g has radius of convergence \(R>0\) and non-negative coefficients. If g is a polynomial of degree 1 or g satisfies

$$\begin{aligned} \lim _{t \uparrow R} \frac{g^{\prime \prime \prime }(t)}{{g^{\prime \prime }(t)}^{3/2}}=0\, , \end{aligned}$$
(3.5)

then f is Gaussian.

The three examples above are readily seen to be Gaussian as a consequence of Theorem 3.3.

Proof

We will assume that g is not a polynomial of degree 1 and that (3.5) is satisfied. The auxiliary function F is given by \(F(z)=\ln f(e^z)=g(e^{z})\), for \(\mathrm{Re}z <\ln R\).

Write g(z) as \(g(z)=\sum _{n=0}^\infty b_n z^n\), with \(b_n \ge 0\), for \(|z|<R\).

For an integer \(j \ge 1\), we have that \(F^{(j)}(z)=\sum _{n=1}^\infty n^j b_n e^{nz}\), for \(\mathrm{Re}z <\ln R\). If we write \(t=e^s\), with \(s <\ln R\), and let \(\phi \in \mathbb {R}\), by using that \(n^3\le \frac{9}{2} n(n-1)(n-2)\) for \(n\ge 3\), we have that

$$\begin{aligned} |F^{\prime \prime \prime }(s+\imath \phi )|&\le \sum _{n=1}^\infty n^3 b_n t^n \le b_1 t+8b_2 t^2 +\frac{9}{2}\,t^3 \sum _{n=3}^\infty n(n-1)(n-2)b_n t^{n-3} \\&\le b_1 t+8 b_2 t^2+\frac{9}{2} \, t^3\,g^{\prime \prime \prime }(t)\, , \end{aligned}$$

and by observing that \(n^2\ge n(n-1)\), also that

$$\begin{aligned} F^{\prime \prime }(s)\ge t^2 g^{\prime \prime }(t)\, . \end{aligned}$$

Therefore,

$$\begin{aligned} (\flat )\quad \frac{\sup _{\phi \in \mathbb {R}}|F^{\prime \prime \prime }(s+\imath \phi )|}{F^{\prime \prime }(s)^{3/2}}\le \frac{b_1 t+8 b_2 t^2+\frac{9}{2} \,t^3 \,g^{\prime \prime \prime }(t)}{t^3 \,g^{\prime \prime }(t)^{3/2}}\, . \end{aligned}$$

If \(R=\infty \), condition (3.1) of Theorem 3.2 follows from \((\flat )\), hypothesis (3.5) and the fact that \(\lim _{t \uparrow R} g^{\prime \prime }(t)>0\).

If \(R <+\infty \), then \((\star )\, \lim _{t \uparrow R} g^{\prime \prime }(t)=+\infty \). For otherwise, because of (3.5), we would have that \(\lim _{t \uparrow R} g^{\prime \prime \prime }(t)=0\), so that \(g^{\prime \prime \prime }\equiv 0\) and g would be a polynomial of degree at most 2 and \(R=\infty \). From \((\star )\), \((\flat )\) and condition (3.5), we conclude that condition (3.1) of Theorem 3.2 is satisfied. \(\square \)
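For the three egf examples of Sect. 3.1.3, condition (3.5) is immediate to check; in the Python sketch below (ours), the second and third derivatives are computed by hand from the closed forms of g:

```python
import math

# g'''(t) / g''(t)^{3/2} for the three examples of Sect. 3.1.3
# (derivatives computed by hand; an illustration, not quoted from the paper)

def ratio_set_partitions(t):
    # g(t) = e^t - 1, R = infinity: g'' = g''' = e^t
    return math.exp(t) / math.exp(t) ** 1.5

def ratio_pointed_sets(t):
    # g(t) = t e^t, R = infinity: g'' = (2+t) e^t, g''' = (3+t) e^t
    return (3 + t) * math.exp(t) / ((2 + t) * math.exp(t)) ** 1.5

def ratio_fragmented_perms(t):
    # g(t) = t/(1-t), R = 1: g'' = 2/(1-t)^3, g''' = 6/(1-t)^4
    return (6 / (1 - t) ** 4) / (2 / (1 - t) ** 3) ** 1.5

print(ratio_set_partitions(40.0), ratio_pointed_sets(40.0), ratio_fragmented_perms(0.999))
```

In each case the ratio tends to 0 as t approaches the radius of convergence, so Theorem 3.3 applies.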

3.2 Strongly Gaussian Khinchin Families

Following Báez-Duarte [2], we introduce the notion of strongly Gaussian power series.

Definition 3.4

(Strongly Gaussian power series) A power series \(f \in \mathcal {K }\) with radius of convergence R and its Khinchin family \((X_t)_{t \in [0,R)}\) are termed strongly Gaussian if

$$\begin{aligned} \mathrm{(a)} \quad \lim _{t \uparrow R} \sigma (t)=+\infty \qquad \text {and}\qquad \mathrm{(b)} \quad \lim _{t \uparrow R} \int \nolimits _{|\theta |<\pi \sigma (t)}\left| \mathbf {E}(e^{\imath \theta \breve{X}_t})-e^{-\theta ^2/2}\right| \, d\theta =0\, . \end{aligned}$$

The exponential function \(f(z)=e^z\) is strongly Gaussian. Recall that in this case \(\sigma (t)=\sqrt{t}\). Strong gaussianity of \(e^z\) follows from its gaussianity and dominated convergence using the bound

$$\begin{aligned} \left| \mathbf {E}(e^{\imath \theta \breve{X}_t})\right| =e^{t (\cos (\theta /\sqrt{t})-1)}\le e^{-2\theta ^2/\pi ^2}\, , \quad \text{ for } \theta \in \mathbb {R},\, t>0 \text{ such } \text{ that } |\theta |< \pi \sqrt{t}\, . \end{aligned}$$
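This majorant, and hence the dominated-convergence argument, can be sampled numerically; a minimal Python sketch (ours; the sampling ranges are arbitrary choices):

```python
import math
import random

def poisson_cf_modulus(theta, t):
    # |E(e^{i theta X_t-breve})| = exp(t (cos(theta/sqrt(t)) - 1)) for X_t ~ Poisson(t)
    return math.exp(t * (math.cos(theta / math.sqrt(t)) - 1))

random.seed(1)
violations = 0
for _ in range(10_000):
    t = random.uniform(0.1, 200.0)
    theta = random.uniform(-math.pi * math.sqrt(t), math.pi * math.sqrt(t))
    # check the Gaussian-type majorant valid for |theta| < pi sqrt(t)
    if poisson_cf_modulus(theta, t) > math.exp(-2 * theta ** 2 / math.pi ** 2) + 1e-12:
        violations += 1
print("violations:", violations)
```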

Theorem A below is both a local and a non-local central limit theorem satisfied by strongly Gaussian power series. It appears in Hayman’s paper [16] as Theorems I and II under the stronger hypothesis that f is in the Hayman class (to be discussed shortly in Sect. 3.3), but the proofs in [16] work for strongly Gaussian power series.

Theorem A

(Hayman’s central limit theorem) If \(f(z)=\sum _{n=0}^\infty a_n z^n\) in \(\mathcal {K }\) is strongly Gaussian, then

$$\begin{aligned} \lim _{t \uparrow R} \sup \limits _{n \in \mathbb {Z}} \left| \frac{a_n t^n}{f(t)} \,\sqrt{2\pi }\sigma (t)-e^{-(n-m(t))^2/(2\sigma ^2(t))}\right| =0\,. \end{aligned}$$
(3.6)

Besides,

$$\begin{aligned} \lim _{t \uparrow R}\mathbf {P}( \breve{X}_t \le b)=\Phi (b)\,, \quad \text{ for } \text{ every } b \in \mathbb {R}\, . \end{aligned}$$

And so, \(\breve{X}_t\) converges in distribution towards the standard normal, and f is Gaussian.

In this statement \(a_n=0\), for \(n <0\). By considering \(n=-1\) in (3.6), it follows that \( \lim _{t \uparrow R}({m(t)}/{\sigma (t)})=+\infty \). In particular, this means that \(M_f\) given by (2.2) is \(M_f=\infty \) for every \(f\in \mathcal {K }\) that is strongly Gaussian.

Theorem B

(Hayman’s asymptotic formula) If \(f(z)=\sum _{n=0}^\infty a_n z^n\) in \(\mathcal {K }\) is strongly Gaussian, then

$$\begin{aligned} a_n \sim \frac{1}{\sqrt{2\pi }} \, \frac{f(t_n)}{t_n^n \,\sigma (t_n)}\, , \quad \text{ as } n \rightarrow \infty \, , \end{aligned}$$
(3.7)

where \(t_n\) is given by \(m(t_n)=n\), for each \(n \ge 1\).

This asymptotic formula is obtained by using Hayman’s identity (2.10) as follows. Write

$$\begin{aligned}&\frac{\sqrt{2\pi } a_n t_n^n \sigma (t_n)}{f(t_n)} - 1=\frac{1}{\sqrt{2\pi }}\int \limits _{|\theta |<\pi \sigma (t_n)}\mathbf {E}(e^{\imath \theta \breve{X}_{t_n}})\, d\theta - 1 \\&\quad =\frac{1}{\sqrt{2\pi }}\int \limits _{|\theta |<\pi \sigma (t_n)}\left( \mathbf {E}(e^{\imath \theta \breve{X}_{t_n}}) - e^{-\theta ^2/2}\right) \, d\theta - \frac{1}{\sqrt{2\pi }}\int \limits _{|\theta |\ge \pi \sigma (t_n)} e^{-\theta ^2/2}\, d\theta , \end{aligned}$$

and observe that conditions (b) and (a) of strong gaussianity (Definition 3.4), respectively, show that the first and second terms of the expression above tend to 0 as \(n\rightarrow \infty \).

Actually, if \(\omega _n\) is a good approximation of \(t_n\), in the sense that

$$\begin{aligned} \lim _{n \rightarrow \infty } \frac{m(\omega _n)-n}{\sigma (\omega _n)}=0\, , \end{aligned}$$

then

$$\begin{aligned} a_n \sim \frac{1}{\sqrt{2\pi }}\,\frac{f(\omega _n)}{\omega _n^n\,\sigma (\omega _n)}\, ,\quad \text{ as } n \rightarrow \infty \,. \end{aligned}$$
(3.8)

Formula (3.8) (and also formula (3.7)) follows immediately from (3.6) of Theorem A.

For the exponential \(f(z)=e^z\) one has \(m(t)=t\) and \(\sigma (t)=\sqrt{t}\), for \(t \ge 0\), and \(t_n=n\) for \(n \ge 1\). The asymptotic formula above gives

$$\begin{aligned} \frac{1}{n!}\sim \frac{1}{\sqrt{2\pi }} \,\frac{e^n}{n^n \sqrt{n}}\, , \quad \text{ as } n \rightarrow \infty \,, \end{aligned}$$

that is, Stirling’s formula.

Remark 3.5

The function \(f(z)=e^{z^2}\) is Gaussian, because of Theorem 3.3. Its variance function \(\sigma ^2(t)=4t^2\) tends towards \(\infty \) as \(t \uparrow \infty \), but f is not strongly Gaussian, since its Taylor coefficients of odd order are null and do not satisfy the asymptotic formula (3.7).
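This failure can be made concrete numerically. In the following Python sketch (ours), the coefficient formula \(a_{2m}=1/m!\) and the closed forms \(m(t)=2t^2\), \(\sigma (t_n)=\sqrt{2n}\) are computed by hand from \(f=e^{z^2}\):

```python
import math

def coeff_exp_z2(n):
    # Taylor coefficient a_n of e^{z^2}: a_{2m} = 1/m!, odd coefficients vanish
    return 0.0 if n % 2 else 1.0 / math.factorial(n // 2)

def hayman_rhs(n):
    # right-hand side of (3.7) for f = e^{z^2}: m(t) = 2t^2 gives t_n = sqrt(n/2),
    # and sigma(t_n) = 2 t_n = sqrt(2n)
    t = math.sqrt(n / 2)
    return math.exp(t * t) / (math.sqrt(2 * math.pi) * t ** n * math.sqrt(2 * n))

for n in (40, 41):
    print(n, coeff_exp_z2(n), hayman_rhs(n))
```

The odd coefficients are 0 while the formula predicts a positive value; the even coefficients exceed the prediction by a factor tending to 2, since the mass of \(X_t\) sits on the even integers.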

3.2.1 Báez-Duarte Substitution

In general, precise expressions for the \(t_n\) are rare, since inverting m(t) is usually complicated. Fortunately, in practice one can make do with a certain asymptotic approximation due to Báez-Duarte [2]. This is the content of Theorem C below.

Suppose that \( f \in \mathcal {K }\) is strongly Gaussian. Assume that \(\widetilde{m}(t)\) is continuous and monotonically increasing to \(+\infty \) in [0, R), and that \(\widetilde{m}(t)\) is a good approximation of m(t) in the sense that

$$\begin{aligned} \lim _{t \uparrow R} \frac{m(t)-\widetilde{m}(t)}{\sigma (t)}=0\,. \end{aligned}$$
(3.9)

Let \(\tau _n\) be defined by \(\widetilde{m}(\tau _n)=n\), for each \(n \ge 1\).

Theorem C

(Substitution) With the notations above, if \(f(z)=\sum _{n=0}^\infty a_n z^n\) in \(\mathcal {K }\) is strongly Gaussian and (3.9) is satisfied, then

$$\begin{aligned} a_n \sim \frac{1}{\sqrt{2\pi }} \,\frac{f(\tau _n)}{\tau _n^n \,\sigma (\tau _n)}\, , \quad \text{ as } n \rightarrow \infty \, . \end{aligned}$$

Moreover, if \(\widetilde{\sigma }(t)\) is such that \(\sigma (t) \sim \widetilde{\sigma }(t)\) as \(t \uparrow R\), we may further write

$$\begin{aligned} a_n \sim \frac{1}{\sqrt{2\pi }} \,\frac{f(\tau _n)}{\tau _n^n \, \widetilde{\sigma }(\tau _n)}\, , \quad \text{ as } n \rightarrow \infty \, . \end{aligned}$$
(3.10)

Theorem C follows readily from (3.8).

In Sect. 6.1, we will exhibit appropriate approximations \(\widetilde{m}\) and \( \widetilde{\sigma }^2\) of the mean and variance functions m and \(\sigma ^2\) of the ogfs of partitions P, Q, \(P_{a,b}\) and \(W_a^b\), satisfying in each case condition (3.9).

3.3 Hayman Class

Strongly Gaussian functions in \(\mathcal {K }\) have excellent asymptotic properties: the central limit Theorem A and the asymptotic formula (3.7) for their coefficients. Thus far, \(f(z)=e^z\) is the only strongly Gaussian function encountered here.

The Hayman class consists of power series f in \(\mathcal {K }\) which satisfy some concrete and verifiable conditions implying that f is strongly Gaussian, see Theorem 3.8.

Definition 3.6

(Hayman class) A power series \(f\in \mathcal {K }\) is in the Hayman class if

$$\begin{aligned} \mathrm{(variance condition):}\quad&\lim _{t \uparrow R} \sigma (t)=\infty \,, \end{aligned}$$
(3.11)

and if for a certain function \(h:[0,R)\rightarrow (0, \pi ]\), which we refer to as the cut (between a major arc and a minor arc), there hold

$$\begin{aligned} \mathrm{(major arc):} \quad&\lim _{t \uparrow R}\sup _{|\theta |\le h(t)\,\sigma (t)} \left| \mathbf {E}(e^{\imath \theta \breve{X}_t})\,e^{\theta ^2/2}-1\right| =0 , \end{aligned}$$
(3.12)
$$\begin{aligned} \mathrm{(minor arc):} \quad&\lim _{t\uparrow R}\sigma (t) \, \sup _{h(t)\sigma (t)< |\theta | \le \pi \sigma (t)} |\mathbf {E}(e^{\imath \theta \breve{X}_t})|=0\, . \end{aligned}$$
(3.13)

The cut h of an f in the Hayman class is not uniquely determined.

The functions in the Hayman class are called admissible by Hayman [16]; they are also called Hayman-admissible, or just H-admissible.

Unlike other accounts (including Hayman’s), we do not allow a finite number of the coefficients of the power series to be negative.

For f in the Hayman class, the characteristic function of \(\breve{X}_t\) is uniformly approximated by \(e^{-\theta ^2/2}\) in the major arc, while it is uniformly \(o(1/\sigma (t))\) in the minor arc.

Observe that condition (3.13) may be written in terms of f or \(X_t\) as the requirement that

$$\begin{aligned} \lim _{t\uparrow R}\sigma (t) \, \sup _{h(t)< |\theta | \le \pi } \frac{|f(te^{\imath \theta })|}{f(t)}=0 \quad \text{ or } \quad \lim _{t\uparrow R}\sigma (t) \, \sup _{h(t)< |\theta | \le \pi } \left| \mathbf {E}(e^{\imath \theta X_t})\right| =0\,. \end{aligned}$$

Lemma 3.7

If \(f\in \mathcal {K }\) is in the Hayman class with cut h, then

$$\begin{aligned} \lim _{t \uparrow R} h(t) \,\sigma (t)=\infty \, . \end{aligned}$$

Proof

For \(\theta _t=h(t)\,\sigma (t)\), condition (3.12) implies that \(\lim _{t \uparrow R} \mathbf {E}(e^{\imath \theta _t \breve{X}_t }) \, e^{\theta _t^2/2}= 1\); while condition (3.13) implies that \(\lim _{t \uparrow R} \sigma (t)\, \mathbf {E}(e^{\imath \theta _t \breve{X}_t})=0\, .\) Thus, \(\lim _{t \uparrow R}{e^{\theta _t^2/2}}/{\sigma (t)}=\infty \, .\) As \(\lim _{t \uparrow R}\sigma (t)=\infty \), the conclusion follows. \(\square \)

The conditions for a power series to be in the Hayman class amount to a criterion to be strongly Gaussian:

Theorem 3.8

Power series in the Hayman class are strongly Gaussian.

Proof

Let us denote

$$\begin{aligned} I_t=\int \nolimits _{|\theta |< \pi \sigma (t)} \left| \mathbf {E}(e^{\imath \theta \breve{X}_t})-e^{-\theta ^2/2}\right| \,d\theta \,, \quad \text{ for } t \in (0,R)\,. \end{aligned}$$

Divide the integral \(I_t\) into an integral \(J_t\) over the major arc and an integral \(K_t\) over the minor arc:

$$\begin{aligned} J_t&=\int \nolimits _{|\theta |\le h(t)\sigma (t)}\left| \mathbf {E}(e^{\imath \theta \breve{X}_t})-e^{-\theta ^2/2}\right| \,d\theta , \\ K_t&=\int \nolimits _{ h(t)\sigma (t)< |\theta | \le \pi \sigma (t)}\left| \mathbf {E}(e^{\imath \theta \breve{X}_t})-e^{-\theta ^2/2}\right| \,d\theta \, . \end{aligned}$$

Bound \(K_t\) as

$$\begin{aligned} K_t\le 2\pi \sigma (t)\, \left( \sup \limits _{ h(t)\sigma (t)< |\theta | \le \pi \sigma (t)}\left| \mathbf {E}(e^{\imath \theta \breve{X}_t})\right| \right) + \int \nolimits _{|\theta |\ge h(t) \sigma (t)} e^{-\theta ^2/2}\, d\theta \, . \end{aligned}$$

The first summand of the bound of \(K_t\) tends to 0, as \(t \uparrow R\), because of the hypothesis (3.13), while the second summand tends to 0, as \(t \uparrow R\), since, as Lemma 3.7 dictates, \(\lim _{t \uparrow R} h(t) \sigma (t)=+\infty \).

Bound \(J_t\) as

$$\begin{aligned} J_t&= \int \nolimits _{|\theta |\le h(t) \sigma (t)} e^{-\theta ^2/2}\left| \mathbf {E}(e^{\imath \theta \breve{X}_t})\,e^{\theta ^2/2}-1\right| \, d\theta \\&\le \left( \int _\mathbb {R}e^{-\theta ^2/2}\, d\theta \right) \sup \limits _{|\theta |\le h(t) \sigma (t)}\left| \mathbf {E}(e^{\imath \theta \breve{X}_t})\,e^{\theta ^2/2}-1\right| \, . \end{aligned}$$

This bound of \(J_t\) tends to 0 as \(t \uparrow R\), by virtue of the hypothesis (3.12). \(\square \)

As a consequence of Theorem 3.8, for asymptotic estimation of coefficients of functions in the Hayman class, one may use the more flexible asymptotic formula (3.10) of Báez-Duarte instead of Hayman’s asymptotic formula (3.7) itself. This fact will be crucial later, particularly for the arguments pertaining to the partition functions of Sect. 6.

3.3.1 Combination and Perturbation of Functions of Hayman Class

Perhaps the reason for the relevance of the Hayman class in combinatorial questions is that it enjoys a number of closure and small-perturbation properties.

For instance (see Hayman [16]):

  (a) If f and g are in the Hayman class, then the product \(f\cdot g\) is in the Hayman class, see [16, Thm. VII].

  (b) If f is in the Hayman class, then \(e^f\) is in the Hayman class, see [16, Thm. VI].

  (c) If f is in the Hayman class and if R is a polynomial in \(\mathcal {K }\), then the product \(R\cdot f\) is in the Hayman class, see [16, Thm. VIII].

  (d) Let \(B(z)=\sum _{n=0}^N b_n z^n\) be a non-constant polynomial with real coefficients such that for each \(d>1\) there exists m, not a multiple of d, such that \(b_m \ne 0\), and such that if m(d) is the largest such m, then \(b_{m(d)}>0\). If \(e^B\) is in \(\mathcal {K }\), then \(e^B\) is in the Hayman class, see [16, Thm. X].
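As an illustration of (d), here is a numerical sketch (ours) with the concrete choice \(B(z)=z+z^2\), which satisfies the hypotheses: the coefficients of \(e^{z+z^2}\) are generated from the recurrence \((n+1)a_{n+1}=a_n+2a_{n-1}\), which follows from \(f^{\prime }=(1+2z)f\), and compared with Hayman’s asymptotic formula (3.7), using \(m(t)=t+2t^2\) and \(\sigma ^2(t)=t+4t^2\):

```python
import math

def coeffs_exp_z_plus_z2(nmax):
    # Taylor coefficients of f = e^{z + z^2}; from f' = (1 + 2z) f one gets
    # (n + 1) a_{n+1} = a_n + 2 a_{n-1}, with a_0 = a_1 = 1
    a = [1.0, 1.0]
    for n in range(1, nmax):
        a.append((a[n] + 2 * a[n - 1]) / (n + 1))
    return a

def hayman_rhs(n):
    # (3.7) for f = e^{z + z^2}: m(t) = t + 2t^2 = n gives t_n = (-1 + sqrt(1+8n))/4,
    # and sigma^2(t) = t + 4t^2
    t = (-1 + math.sqrt(1 + 8 * n)) / 4
    sigma = math.sqrt(t + 4 * t * t)
    return math.exp(t + t * t) / (math.sqrt(2 * math.pi) * t ** n * sigma)

a = coeffs_exp_z_plus_z2(60)
for n in (20, 40, 60):
    print(n, a[n] / hayman_rhs(n))
```

The printed ratios are already close to 1 for quite small n.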

4 Exponentials and Hayman Class

Let g be a non-constant power series with non-negative coefficients and radius of convergence \(R>0\). The exponential \(f=e^g\) of such a g is in \(\mathcal {K }\). As mentioned above, see Sect. 3.1.3, this setting is quite usual in combinatorial questions.

Write \(g(z)=\sum _{n=0}^\infty b_n z^n\), with \(b_n \ge 0\) for \(n \ge 0\), and let \(f\in \mathcal {K }\) be given by \(f=e^g\).

We will exhibit in Theorem 4.1 below conditions on the function g which ensure that f is in the Hayman class so that we can obtain an asymptotic formula for its coefficients.

Equipped with Theorem 4.1 below and the substitution Theorem C, we will discuss in Sects. 5.1, 5.2 and 5.3 a number of asymptotic formulae for coefficients, mostly (but not only) of combinatorial interest, for functions f of the form \(f=e^g\) with non-constant g with non-negative coefficients.

Let \((X_t)_{t \in [0,R)}\) be the Khinchin family of f. The mean and variance functions of f, written in terms of g, are

$$\begin{aligned} m(t)= t g^\prime (t) \quad \text{ and } \quad \sigma ^2(t)=tg^\prime (t)+t^2g^{\prime \prime }(t)\,, \quad \text{ for } t \in (0,R)\,. \end{aligned}$$

Since g has non-negative coefficients, the variance function \(\sigma ^2(t)\) of f is increasing in [0, R). The auxiliary function F of f is the function \(F(z)=g(e^z)\), for z such that \(\mathrm{Re}z < \ln R\).

The variance condition (3.11) for belonging to the Hayman class in terms of g becomes

$$\begin{aligned} \lim _{t \uparrow R} \left( tg^\prime (t) +t^2g^{\prime \prime }(t)\right) =+\infty \, . \end{aligned}$$

We want a cut function h for f.

Let us start with the major arc. The bound (3.2) gives that

$$\begin{aligned} \left| \ln \mathbf {E}(e^{\imath \theta \breve{X}_t})+\frac{\theta ^2}{2}\right| \le \frac{\sup _{\phi \in \mathbb {R}} \left| F^{\prime \prime \prime }(s+\imath \phi )\right| }{\sigma ^3(t)}\, \frac{|\theta |^3}{6}\, , \quad \text{ for } t=e^s \text{ and } s<\ln R\,. \end{aligned}$$

The proof of Theorem 3.3 gives also that

$$\begin{aligned} |F^{\prime \prime \prime }(s+\imath \phi )|\le b_1 t+8 b_2 t^2+\frac{9}{2} \,t^3 g^{\prime \prime \prime }(t) \, , \quad \text{ for } t=e^s, s<\ln R \text{ and } \phi \in \mathbb {R}\, . \end{aligned}$$

Define

$$\begin{aligned} \omega _g(t)=b_1 t+8 b_2 t^2+\frac{9}{2} \,t^3g^{\prime \prime \prime }(t) \, , \quad \text{ for } t \in (0,R)\, . \end{aligned}$$
(4.1)

Consider a potential cut function \(h:[0,R) \rightarrow (0,\pi ]\). Using that \(|e^z-1|\le |z|e^{|z|}\) for all \(z \in \mathbb {C}\), we obtain that

$$\begin{aligned} \sup _{|\theta |\le h(t)\,\sigma (t)} \left| \mathbf {E}(e^{\imath \theta \breve{X}_t})\,e^{\theta ^2/2}-1\right| \le \Omega (t) \,e^{\Omega (t)}, \end{aligned}$$

where \(\Omega (t)\) is given by

$$\begin{aligned} \Omega (t)=\frac{1}{6}\,\omega _g(t)\, h(t)^3\, ,\quad \text{ for } t \in (0,R)\,. \end{aligned}$$

So, for h to fulfill condition (3.12) on the major arc, it is enough to have that

$$\begin{aligned} \lim _{t \uparrow R} \omega _g(t)\, h(t)^3=0\,. \end{aligned}$$

In terms of h and g, condition (3.13) on the minor arc becomes

$$\begin{aligned} \lim _{t \uparrow R} \sigma (t) \exp \left( \sup \limits _{h(t)\le |\theta |\le \pi }\mathrm{Re}g(te^{\imath \theta })-g(t)\right) =0\,. \end{aligned}$$

Thus,

Theorem 4.1

Let g be a non-constant power series with radius of convergence R and non-negative coefficients satisfying the variance condition

$$\begin{aligned} \mathrm{(variance condition):}&\quad \lim _{t \uparrow R} \left( tg^\prime (t) +t^2g^{\prime \prime }(t)\right) =+\infty \, . \end{aligned}$$
(4.2)

If there is a cut function \(h:[0,R)\rightarrow (0,\pi ]\) satisfying

$$\begin{aligned} \mathrm{(major arc):}&\quad \lim _{t \uparrow R} \omega _g(t)\, h(t)^3=0\,, \end{aligned}$$
(4.3)

and

$$\begin{aligned} \mathrm{(minor arc):}&\quad \lim _{t \uparrow R} \sigma (t) \exp \left( \sup \limits _{h(t)< |\theta |\le \pi }\mathrm{Re}g(te^{\imath \theta })-g(t)\right) =0\,,\end{aligned}$$
(4.4)

then \(f=e^g\) is in the Hayman class.

The function \(\omega _g\) appearing in the major arc condition is given by the formula (4.1).

Regarding the variance condition. For finite R, the variance condition (4.2) becomes \(\lim _{t\uparrow R} g^{\prime \prime }(t)=\infty \).

For \(R=\infty \), the variance condition (4.2) is always satisfied, since otherwise we would have that \(\lim _{t \rightarrow \infty } g^{\prime }(t)=0\) and g would be constant.

Regarding the major arc condition. For finite R, the major arc condition (4.3) becomes \(\lim _{t \uparrow R} g^{\prime \prime \prime }(t)\, h(t)^3=0\).

For \(R=\infty \), if g(z) is not a polynomial, the condition on the major arc (4.3) reduces to

$$\begin{aligned} \lim _{t \rightarrow \infty } t^3g^{\prime \prime \prime }(t)\, h(t)^3=0\, , \end{aligned}$$

while if g is a polynomial of degree \(k \ge 2\), then (4.3) reduces to

$$\begin{aligned} \lim _{t \rightarrow \infty } t^k h(t)^3=0\, . \end{aligned}$$

4.1 Some Lemmas for Minor Arc Estimates

In applications of Theorem 4.1, the variance and major arc conditions for a cut for \(f=e^g\) are usually straightforward to verify. It is always the minor arc which requires the most work.

We collect next a few elementary estimates for the functions \(1/(1-z)^k\) on \(|z|=t\) for \(t \in (0,1)\) and integer \(k \ge 1\) which will prove useful in a number of verifications of the minor arc condition (4.4) of Theorem 4.1.

Lemma 4.2

For \(t \in (0,1)\) and \(\theta \in \mathbb {R}\),

$$\begin{aligned} \left| \frac{1-t}{1-te^{\imath \theta }}\right| ^2=\frac{(1-t)^2}{(1-t)^2-2t (\cos \theta -1)}=\left( 1+\dfrac{4t\sin ^2\frac{\theta }{2}}{(1-t)^2 }\right) ^{-1}\, . \end{aligned}$$
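Being an algebraic identity, Lemma 4.2 is easy to spot-check numerically; a small Python sketch (ours):

```python
import cmath
import math

def modulus_sq(t, theta):
    # left-hand side of Lemma 4.2: |(1-t)/(1 - t e^{i theta})|^2
    return abs((1 - t) / (1 - t * cmath.exp(1j * theta))) ** 2

def closed_form(t, theta):
    # right-most expression of Lemma 4.2
    return 1.0 / (1 + 4 * t * math.sin(theta / 2) ** 2 / (1 - t) ** 2)

max_err = max(abs(modulus_sq(t, th) - closed_form(t, th))
              for t in (0.2, 0.5, 0.9, 0.99)
              for th in (0.05, 0.7, 1.5, 3.0))
print(max_err)
```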

Lemma 4.3

Let \(\omega \in (0,\pi )\) and \(t\in (0,1)\). Then

$$\begin{aligned} \sup _{\omega \le |\theta |\le \pi } \mathrm{Re}\left( \frac{1-t}{1-te^{\imath \theta }}\right) =\mathrm{Re}\left( \frac{1-t}{1-te^{\imath \omega }}\right) \quad \text{ and } \quad \sup _{\omega \le |\theta |\le \pi } \left| \frac{1-t}{1-te^{\imath \theta }}\right| =\left| \frac{1-t}{1-te^{\imath \omega }}\right| \, . \end{aligned}$$

Proof

The Möbius transformation \(T(z)=1/(1-z)\) carries the circle \(|z|=r\), with \(r \in (0,1)\), onto the circle orthogonal to the positive real axis through \(1/(1+r)\) and \(1/(1-r)\). For each \(r \in (0,1)\), we have that \(\mathrm{Re}((1-r)/(1-re^{\imath \phi }))\) decreases from 1 to \((1-r)/(1+r)\) as \(\phi \) increases from 0 to \(\pi \) (and as \(\phi \) decreases from 0 to \(-\pi \)).

Monotonicity of modulus follows from Lemma 4.2. \(\square \)

Lemma 4.4

Let h(t) be defined in the interval (0, 1) with values in \((0, \pi )\) and such that \(\lim _{t \uparrow 1}{h(t)}/{(1-t)}=0\). Let k be an integer \(k \ge 1\). Then we have

$$\begin{aligned} \lim _{t \uparrow 1} \frac{(1-t)^2}{h(t)^2} \left( \left| \frac{1-t}{1-te^{\imath h(t)}}\right| ^k-1\right) = -\dfrac{k}{2}\,\cdot \end{aligned}$$

Proof

Lemma 4.2 and \(\lim _{t \uparrow 1} h(t)=0\) give that

$$\begin{aligned} (\star ) \quad \lim _{t \uparrow 1} \left| \frac{1 - t}{1 - te^{\imath h(t)}}\right| =1\,. \end{aligned}$$

Lemma 4.2 gives that

$$\begin{aligned} (\star \star )\quad 1-\left| \frac{1 - t}{1 - te^{\imath h(t)}}\right| ^2\sim \frac{4 t\sin ^2\left( \frac{h(t)}{2}\right) }{(1-t)^2}\,, \quad \text{ as } t \rightarrow 1\,. \end{aligned}$$

The case \(k=2\) follows from \((\star \star )\). The general case follows from combining \((\star )\), the case \(k=2\) and that

$$\begin{aligned} \lim _{y\rightarrow 1}\frac{y^k-1}{y^2-1}=\frac{k}{2}\,\cdot \end{aligned}$$

\(\square \)
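The limit of Lemma 4.4 can be observed numerically; in the following sketch (ours) we take the admissible choice \(h(t)=(1-t)^{3/2}\), for which \(h(t)/(1-t)\rightarrow 0\):

```python
import cmath

def scaled_deviation(t, k):
    # (1-t)^2 / h(t)^2 * ( |(1-t)/(1 - t e^{i h(t)})|^k - 1 ), with h(t) = (1-t)^{3/2}
    h = (1 - t) ** 1.5
    mod = abs((1 - t) / (1 - t * cmath.exp(1j * h)))
    return (1 - t) ** 2 / h ** 2 * (mod ** k - 1)

for k in (1, 2, 3):
    print(k, scaled_deviation(0.999, k))  # approaches -k/2 as t increases to 1
```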

5 Some Examples of Exponentials in the Hayman Class

In this section we verify that relevant examples of \(f=e^g\) are in the Hayman class. Some of them have been mentioned already, see the discussion of Sect. 3.1.3, like \(g(z)=e^{z}-1\) (for non-empty sets) or \(g(z)=ze^z\) (for pointed sets). We shall also consider the example of a polynomial g with non-negative coefficients and the case \(g(z)=(1+z)/(1-z)\).

The function g in all these cases has a closed formula, which facilitates the verification of the conditions of Theorem 4.1, and thus the corroboration that f is in the Hayman class and that Hayman’s asymptotic formula (3.7) applies. Besides, the mean function m(t) and the variance function \(\sigma ^2(t)\) also have closed formulas in all these cases, and the asymptotic behaviour of \(t_n^n\), \(f(t_n)\) and \(\sigma (t_n)\) appearing in (3.7) is quite direct.

5.1 Exponential of a Polynomial

Let \({B}(z)=\sum _{n=0}^N b_n\,z^n\) be a polynomial of degree \(N \ge 1\) with non-negative coefficients. Assume further that \(\mathrm{gcd}\{1 \le n \le N:b_n >0\}=1\). We will check that \(f=e^{{B}}\) is in the Hayman class. For this aim, we could appeal to the general result (d) of Hayman in Sect. 3.3.1. But the present case of non-negative coefficients may be neatly cast as a consequence of Theorem 4.1, and we include a proof.

Proposition 5.1

Let \({B}(z)=\sum _{j=0}^N b_j z^j\) be a polynomial of degree \(N \ge 1\) with non-negative coefficients and such that

$$\begin{aligned} q\triangleq \mathrm{gcd}\{1\le n\le N: b_n > 0\}=1\,. \end{aligned}$$
(5.1)

Then, \(f(z)=e^{{B}(z)}\) is in the Hayman class.

Condition (5.1) is indispensable. In fact, if \(q>1\), then \(f=e^{{B}}\) is not strongly Gaussian, since then \(f(z)=H(z^q)\), for a certain entire power series H, and the coefficients of f cannot satisfy Hayman’s asymptotic formula (3.7) as strongly Gaussian functions do.

Proof

As mentioned above, since the radius of convergence of B(z) is \(R=\infty \), the variance condition (4.2) is satisfied.

We verify next that h(t) given by \(h(t)=\min \{{\pi /N}, t^{-\alpha }\}\), with \(\alpha \in (N/3, N/2)\), satisfies conditions (4.3) and (4.4) of Theorem 4.1 for a cut. Fix \(\alpha \) in that interval.

Condition (4.3) on the major arc reduces, as we have mentioned above, to

$$\begin{aligned} \lim _{t \rightarrow \infty } t^N h(t)^3=0\, , \end{aligned}$$

which is satisfied since \(\alpha >N/3\).

Now, the minor arc. Notice that

$$\begin{aligned} \mathrm{Re}{B}(te^{\imath \theta })-{B}(t)=\sum _{n=1}^N b_n t^n \,(\cos (n \theta )-1) \, . \end{aligned}$$

All the summands above are non-positive. We distinguish now between \(\theta \)’s close to and away from the Nth roots of unity.

Denote \(\mathbb {T}=\{e^{\imath \theta }: \theta \in \mathbb {R}\}=\partial \mathbb {D}\).

For \(t>0\) and \(1\le j <N\), define the arcs in \(\mathbb {T}\)

$$\begin{aligned} I_j(t)=\left\{ e^{\imath \theta }\in \mathbb {T}: \left| \theta - \frac{2\pi j}{N}\right| <h(t)\right\} \, , \end{aligned}$$

and for \(t>0\) define also

$$\begin{aligned} I_0(t)=\left\{ e^{\imath \theta }\in \mathbb {T}: |\theta |<h(t)\right\} . \end{aligned}$$

These \(I_j(t)\), \(0 \le j <N\), are arcs in \(\mathbb {T}\) of length 2h(t) around the Nth roots of unity. \(I_0\) is the major arc. Since \(h(t)\le \pi /N\), the arcs \(I_j(t)\) are disjoint.

Denote also by \(\hat{I}_0(t)\) the arc \(\hat{I}_0(t)=\left\{ e^{\imath \theta }\in \mathbb {T}: |\theta |<N h(t)\right\} \).

Let \(\mathcal {N}=\{1\le n \le N :b_n >0\}\). We will use that, since \(\mathrm{gcd}(\mathcal {N})=1\), no \(e^{\imath \theta }\) with \(\theta \in \mathbb {R}\), except \(e^{\imath \theta }=1\), is simultaneously an nth root of unity for all \(n \in \mathcal {N}\).

For each \(1 \le j < N\), there is \(n_j \in \mathcal {N}\) so that \(e^{\imath 2\pi j/N}\), the center of the arc \(I_j(t)\), is not an \(n_j\)th root of unity. Since \(\lim _{t \uparrow \infty } h(t)=0\), there is \(t_0 >1\) and, for each \(1 \le j < N\), a \(\delta _j >0\), so that

$$\begin{aligned} b_{n_j}\,(\cos (n_j \theta )-1)\le - \delta _j\, , \quad \text{ for } {e^{\imath \theta } \in I_j(t)} \text{ and } t \ge t_0\,. \end{aligned}$$

Let \(\Delta =\min \{\delta _j, 1\le j <N\}\) and \(M=\min \mathcal {N}\). Then

$$\begin{aligned} b_{n_j}\,t^{n_j}\,(\cos (n_j \theta )-1)\le - \Delta t^M\, , \quad \text{ for } {e^{\imath \theta } \in I_j(t)} \text{ and } 1\le j<N \text{ and } t \ge t_0\,. \end{aligned}$$

For \(e^{\imath \theta } \in \mathbb {T}\setminus \left( \bigcup _{j=1}^{N-1} I_j(t) \cup I_0(t)\right) \) we have that \(e^{\imath N\theta } \in \mathbb {T}\setminus \hat{I}_0(t)\) and, thus, using the inequality \(1-\cos x\ge 2x^2/\pi ^2\), valid for \(x\in [-\pi ,\pi ]\), that

$$\begin{aligned} \cos (N\theta )-1\le -\frac{2}{\pi ^2} N^2 h(t)^2\, , \end{aligned}$$

and, therefore, that

$$\begin{aligned} b_N t^N \,(\cos (N\theta )-1)\le -\Lambda \,t^N h(t)^2\, , \end{aligned}$$

with \(\Lambda =(2/\pi ^2) N^2 b_N\).

Therefore, if \(h(t)\le |\theta |\le \pi \) and \(t >t_0\), we have that

$$\begin{aligned} \mathrm{Re}{B}(te^{\imath \theta })-{B}(t)=\sum _{n=1}^N b_n t^n \,(\cos (n \theta )-1) \le \max \left\{ -\Delta t^M, \,\, -\Lambda t^N h(t)^2\right\} \, . \end{aligned}$$

Since \(\sigma (t)\) grows only like a power of t, condition (4.4) on the minor arc is satisfied because \(\alpha <N/2\). We conclude that \(e^{{B}}\) is in the Hayman class. \(\square \)

5.1.1 Frequency of Permutations of Given Order

As a first example of the efficient alliance of strong gaussianity with the substitution Theorem C, we now give, for any fixed integer \(k \ge 1\) and as \(n \rightarrow \infty \), an asymptotic formula for the number \(A_k(n)\) of permutations \(\sigma \in S_n\) such that \(\sigma ^k\) is the identity. Observe, for instance, that \(A_2(n)\) is the number of involutions in \(S_n\). The particular cases \(k=2\) and k prime of this asymptotic formula are due to Moser–Wyman, [23, 24]. For general k, the formula is essentially due to Wilf [28].

The permutations counted by \(A_k(n)\) are those \(\sigma \in S_n\) so that all the cycles in its cycle decomposition have lengths which divide k. Thus, the symbolic method, see [9, p. 124], gives us that the egf \(I_k(z)\) of these permutations is

$$\begin{aligned} I_k(z)=\sum _{n=0}^\infty \frac{A_k(n)}{n!}\, z^n=\exp \left( \sum _{1 \le d\mid k} \frac{1}{d} \, z^d\right) \, . \end{aligned}$$

Denote

$$\begin{aligned} R_k(z)=\sum _{1 \le d\mid k} \frac{1}{d} z^d. \end{aligned}$$

As \(I_k(z)\) is the exponential of the polynomial \(R_k(z)\) and \(\mathrm{gcd}\{d\ge 1: d \mid k\}=1\), we have by Proposition 5.1 that \(I_k(z)\) is in the Hayman class and, in particular, that \(I_k(z)\) is strongly Gaussian.

By applying the asymptotic formula (3.10) of the substitution Theorem C, we will obtain the asymptotic formulae (5.2) and (5.3) for \(A_k(n)\), distinguishing between k odd and k even.

Denote by \(m_k(t)\) and \(\sigma ^2_k(t)\), for \(t>0\), the mean and variance functions of the Khinchin family of \(I_k\). Observe that

$$\begin{aligned} m_k(t)=\sum _{1\le d|k} t^d \quad \text{ and } \quad \sigma _k^2(t)=\sum _{1\le d|k} d t^d,\quad \text{ for } t >0\,. \end{aligned}$$

We take \(\widetilde{\sigma }_k(t)=\sqrt{k}\,t^{k/2}\), for \(t >0\). Observe that \(\sigma _k(t) \sim \widetilde{\sigma }_k(t)\), as \(t \rightarrow \infty \).

To properly define an approximation \(\widetilde{m}_k(t)\) of \(m_k(t)\), we distinguish between k even and k odd.

  1. (a)

    Case k odd. The divisors d of k, different from k, satisfy \(d \le k/3\). Thus \(m_k(t)=t^k+O(t^{k/3})\), as \(t \rightarrow \infty \), and if we define \(\widetilde{m}_k(t)=t^k\), then

    $$\begin{aligned} m_k(t)-\widetilde{m}_k(t)=O(t^{k/3})=o(\widetilde{\sigma }_k(t))\, , \quad \text{ as } t \uparrow \infty \,. \end{aligned}$$

    For \(n \ge 1\), we take \(\tau _n=n^{1/k}\), so that \(\widetilde{m}_k(\tau _n)=n\). Strong gaussianity and the substitution Theorem C give for \(k\ge 1\), odd and fixed, that

    $$\begin{aligned} \frac{A_k(n)}{n!} \sim \frac{1}{\sqrt{2\pi }} \,\frac{1}{\sqrt{k}} \,\frac{e^{R_k(n^{1/k})}}{n^{n/k} \sqrt{n}}\, , \quad \text{ as } n \rightarrow \infty \, . \end{aligned}$$
    (5.2)
  2. (b)

    Case k even. Now k/2 is a divisor of k, and \(t^{k/2}\) is precisely the order of \(\sigma _k(t)\). We have \(m_k(t)=t^k+t^{k/2}+O(t^{k/3})\), as \(t \uparrow \infty \), and we take

    $$\begin{aligned} \widetilde{m}_k(t)=\left( t^k+t^{k/2}+\frac{1}{4}\right) =\left( t^{k/2}+\frac{1}{2}\right) ^2\, , \quad \text{ for } t>0\, . \end{aligned}$$

Thus,

$$\begin{aligned} m_k(t)-\widetilde{m}_k(t)=O(t^{k/3})=o(\widetilde{\sigma }_k(t))\, , \quad \text{ as } t \uparrow \infty \,. \end{aligned}$$

Define \(\tau _n=(\sqrt{n}-1/2)^{2/k}\), so that \(\widetilde{m}_k(\tau _n)=n\), for each \(n \ge 1\). Since

$$\begin{aligned} n \ln \left( \sqrt{n}-\frac{1}{2}\right)&=n \ln \sqrt{n}+n \ln \left( 1-\frac{1}{2\sqrt{n}}\right) \\&=\frac{n}{2} \ln n - n \left( \frac{1}{2\sqrt{n}}+\frac{1}{8n}+O\left( \frac{1}{n^{3/2}}\right) \right) \\ {}&= \frac{n}{2} \ln n -\frac{\sqrt{n}}{2}-\frac{1}{8}+o(1)\, , \quad \text{ as } n \rightarrow \infty \,, \end{aligned}$$

and so

$$\begin{aligned} \left( \sqrt{n}-\frac{1}{2}\right) ^n \sim {n}^{n/2}\, e^{-\sqrt{n}/2} e^{-1/8}\, , \quad \text{ as } n \rightarrow \infty \,, \end{aligned}$$

we have that

$$\begin{aligned} \tau _n^n \sim n ^{n/k} e^{-\sqrt{n}/k} e^{-1/(4k)}\, , \quad \text{ as } n \rightarrow \infty \,. \end{aligned}$$

Besides,

$$\begin{aligned} \widetilde{\sigma }_k(\tau _n)=\sqrt{k} \, \tau _n^{k/2}\sim \sqrt{k} \sqrt{n}\, , \quad \text{ as } n \rightarrow \infty \,. \end{aligned}$$

Strong gaussianity and substitution give for \(k\ge 1\), even and fixed, that

$$\begin{aligned} \frac{A_k(n)}{n!} \sim \frac{1}{\sqrt{2\pi }} \,\frac{e^{1/(4k)}}{\sqrt{k}} \,e^{\sqrt{n}/k} \,\frac{e^{R_k((n-\sqrt{n}+1/4)^{1/k})}}{n^{n/k} \sqrt{n}}\, , \quad \text{ as } n \rightarrow \infty \, . \end{aligned}$$
(5.3)

Using (5.2) and (5.3), one sees that if \(q>k\), then

$$\begin{aligned} \ln \left( \frac{A_k(n)}{A_q(n)}\right) =\left( \frac{1}{q}-\frac{1}{k}\right) \, n \ln n+O(n)\, ,\quad \text{ as } n\rightarrow \infty \,, \end{aligned}$$

and, thus, irrespective of the parity of q and k,

$$\begin{aligned} (\star )\qquad \lim _{n \rightarrow \infty } \frac{A_k(n)}{A_q(n)}=0\, . \end{aligned}$$

Besides, \(\lim _{n\rightarrow \infty } A_k(n)/n!=0\) and \(\lim _{n\rightarrow \infty } A_k(n)=\infty \).

For the number \(B_k(n)\) of permutations of \((1,\ldots , n)\) whose order is exactly k, one obtains via Möbius inversion and \((\star )\), see Wilf [28], that

$$\begin{aligned} B_k(n)\sim A_k(n)\, , \quad \text{ as } n \rightarrow \infty \,. \end{aligned}$$
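The formulae (5.2) and (5.3) are easy to test against exact values: \(A_k(n)\) satisfies the recurrence \(A_k(n)=\sum_{d \mid k}(n-1)(n-2)\cdots (n-d+1)\,A_k(n-d)\), obtained by classifying permutations according to the length \(d\) of the cycle containing the element \(n\). The following Python sketch (ours, for illustration) compares logarithms for \(k=2\) and \(k=3\) at \(n=300\):

```python
import math

def exact_A(k, n_max):
    """A_k(n) = #{sigma in S_n : sigma^k = id}, classifying permutations by the
    length d | k of the cycle that contains the element n."""
    divisors = [d for d in range(1, k + 1) if k % d == 0]
    A = [1, 1]
    for n in range(2, n_max + 1):
        total = 0
        for d in divisors:
            if n >= d:
                falling = 1
                for i in range(1, d):
                    falling *= n - i          # (n-1)(n-2)...(n-d+1)
                total += falling * A[n - d]
        A.append(total)
    return A

def R(k, z):
    return sum(z ** d / d for d in range(1, k + 1) if k % d == 0)

def log_formula(k, n):
    """Logarithm of n! times the right-hand side of (5.2) (k odd) or (5.3) (k even)."""
    base = (math.lgamma(n + 1) - 0.5 * math.log(2 * math.pi) - 0.5 * math.log(k)
            - (n / k) * math.log(n) - 0.5 * math.log(n))
    if k % 2 == 1:
        return base + R(k, n ** (1 / k))                       # (5.2)
    return (base + 1 / (4 * k) + math.sqrt(n) / k
            + R(k, (n - math.sqrt(n) + 0.25) ** (1 / k)))      # (5.3)

A2, A3 = exact_A(2, 300), exact_A(3, 300)
err2 = abs(log_formula(2, 300) / math.log(A2[300]) - 1)
err3 = abs(log_formula(3, 300) / math.log(A3[300]) - 1)
```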

5.2 Entire Functions: Bell Numbers and Idempotent Functions

We apply next Theorems 4.1 and C to the egfs of the Bell numbers and of the number of idempotent functions, which are both entire functions.

5.2.1 Bell Numbers

Consider first the function \(f(z)=e^{e^z-1}\), the egf of the Bell numbers \(\mathcal {B}_n\) described above in Sect. 2.1.6. Result b) of Hayman in Sect. 3.3.1 already gives that f is in the Hayman class, but let us see now how this fact follows most easily from Theorem 4.1 on our way to obtain the Moser–Wyman asymptotic formula (5.5) below for the Bell numbers, [22].

For \(f(z)=e^{e^z-1}\), we have that \(m(t)=te^t\) and that \(\sigma ^2(t)=t(t+1)e^t\). In the notation of Theorem 4.1, the function g is \(g(z)=e^z-1\).

For a potential cut h, condition (4.3) for the major arc requires that

$$\begin{aligned} \lim _{t \uparrow \infty } t^3 e^t h(t)^3=0\, . \end{aligned}$$

For the minor arc, using that

$$\begin{aligned} e^{t \cos \theta }-e^t \le -\dfrac{1}{2\pi ^2}\, e^t\, \theta ^2\, , \quad \text{ for } \text{ any } t\ge 1 \text{ and } |\theta |<\pi \,, \end{aligned}$$
(5.4)

we obtain, for \(t \ge 1\), that

$$\begin{aligned} \mathrm{Re}g(t e^{\imath \theta })-g(t)\le |e^{t e^{\imath \theta }}|-e^t=e^{t \cos \theta }-e^t\le -\frac{1}{2\pi ^2} e^t\, \theta ^2\, . \end{aligned}$$

And so

$$\begin{aligned} \exp \left( \sup \limits _{h(t)\le |\theta |\le \pi } \mathrm{Re}g(t e^{\imath \theta })-g(t)\right) \le \exp \left( -\frac{1}{2\pi ^2} e^t\, h(t)^2\right) \,. \end{aligned}$$

With \(h(t)= e^{-\alpha t}\), for \(t>0\) and fixed \(\alpha \in (1/3,1/2)\), both conditions (4.3) and (4.4) are satisfied, and we conclude that \(e^{e^z-1}\) is in the Hayman class.

The \(t_n\) for f are such that \(m(t_n)=t_ne^{t_n}=n\). Thus \(t_n=W(n)\), where W is the Lambert W function. Since f is strongly Gaussian, formula (3.7) gives that

$$\begin{aligned} \frac{\mathcal {B}_n}{n!}\sim \frac{1}{\sqrt{2\pi }} \, \frac{e^{e^{W(n)}-1}}{\sqrt{W(n) (W(n)+1)e^{W(n)}}\,W(n)^n}\, , \quad \text{ as } n \rightarrow \infty \, , \end{aligned}$$
(5.5)

which is the asymptotic formula for the Bell numbers of Moser–Wyman, [22].
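As a numerical illustration (ours, not part of the argument), one may compare (5.5) with exact Bell numbers, computed through the Bell triangle, solving \(t_n e^{t_n}=n\) by Newton's method:

```python
import math

def bell_numbers(n_max):
    """Exact Bell numbers B_0, ..., B_{n_max} via the Bell triangle."""
    B, row = [1], [1]
    for _ in range(n_max):
        new = [row[-1]]
        for x in row:
            new.append(new[-1] + x)
        row = new
        B.append(row[0])
    return B

def lambert_w(x):
    """Solve w e^w = x, for x > 0, by Newton's method."""
    w = math.log(1 + x)
    for _ in range(50):
        ew = math.exp(w)
        w -= (w * ew - x) / (ew * (1 + w))
    return w

def log_moser_wyman(n):
    """Logarithm of n! times the right-hand side of (5.5)."""
    w = lambert_w(n)
    return (math.lgamma(n + 1) - 0.5 * math.log(2 * math.pi)
            + n / w - 1                                   # e^{W(n)} - 1 = n/W(n) - 1
            - 0.5 * (math.log(w) + math.log(w + 1) + w)   # log of sigma(t_n)
            - n * math.log(w))

B = bell_numbers(200)
err = abs(log_moser_wyman(200) / math.log(B[200]) - 1)
```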

5.2.2 Counting Idempotent Functions

Consider now the function

$$\begin{aligned} f(z)=e^{ze^z}=\sum _{n=0}^\infty \frac{U_n}{n!}\, z^n\,, \end{aligned}$$

which turns out to be the egf of idempotent functions. That is, \(U_n\) denotes the number of idempotent functions u from \(\{1,\ldots , n\}\) into itself, i.e., functions u such that \(u\circ u\equiv u\). See [9, p. 571].

We will verify shortly that f is in the Hayman class, towards proving the Harris–Schoenfeld ([14, 15]) asymptotic formula (5.6) below for the \(U_n\).

For that aim we could appeal to the combination of results b) and c) of Hayman of Sect. 3.3.1, but we favour a direct proof via Theorem 4.1.

For \(f(z)=\exp (ze^z)\), we have that \(m(t)=(t^2+t)e^t\) and \(\sigma ^2(t)=(t^3+3t^2+t)e^t\), for \(t>0\). The function g is, in this case, \(g(z)=z e^z\).

For a potential cut h towards applying Theorem 4.1, observe that condition (4.3) for the major arc requires that

$$\begin{aligned} \lim _{t \uparrow \infty } t^4 e^t h(t)^3=0\, . \end{aligned}$$

For the minor arc, using again (5.4), we have, for \(t>1\), that

$$\begin{aligned} \mathrm{Re}g(t e^{\imath \theta })-g(t)\le |t e^{\imath \theta }e^{t \cos \theta }e^{\imath t \sin \theta }|-te^t= te^{t \cos \theta }-te^t\le -\frac{1}{2\pi ^2} t e^t\, \theta ^2\, , \end{aligned}$$

so that

$$\begin{aligned} \exp \left( \sup \limits _{h(t)\le |\theta |\le \pi } \mathrm{Re}g(t e^{\imath \theta })-g(t)\right) \le \exp \left( -\frac{1}{2\pi ^2} t e^t\, h(t)^2\right) \, , \quad \text{ for } t>1\,. \end{aligned}$$

As before, with \(\alpha \in (1/3,1/2)\) and \(h(t)= e^{-\alpha t}\), for \(t>0\), both (4.3) and (4.4) are fulfilled, and we deduce from Theorem 4.1 that f is in the Hayman class and, in particular, strongly Gaussian.

We now obtain the promised asymptotic formula for the \(U_n\). Since f is strongly Gaussian, we obtain, with \(t_n\) given by \(m(t_n)=t_n (t_n+1)e^{t_n}=n\), for each \(n \ge 1\), that

$$\begin{aligned} \frac{U_n}{n!}\sim \frac{1}{\sqrt{2\pi }}\, \frac{1}{e^{t_n/2}\, t_n^{n+3/2}}\, \exp \left( \frac{n}{1+t_n}\right) \, , \quad \text{ as } n \rightarrow \infty \, , \end{aligned}$$
(5.6)

the asymptotic formula due to Harris–Schoenfeld [14, 15].
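Again, formula (5.6) can be contrasted numerically with the exact values \(U_n=\sum_{k=0}^n \binom{n}{k} k^{n-k}\) (choose the \(k\) fixed points, which form the image, and map the remaining elements into them). A Python sketch of ours:

```python
import math

def idempotent_count(n):
    """U_n: choose the k fixed points (the image), then map the rest into them."""
    return sum(math.comb(n, k) * k ** (n - k) for k in range(n + 1))

def solve_tn(n):
    """Solve t (t + 1) e^t = n by Newton's method."""
    t = math.log(1 + n)
    for _ in range(50):
        et = math.exp(t)
        t -= ((t * t + t) * et - n) / ((t * t + 3 * t + 1) * et)
    return t

def log_harris_schoenfeld(n):
    """Logarithm of n! times the right-hand side of (5.6)."""
    t = solve_tn(n)
    return (math.lgamma(n + 1) - 0.5 * math.log(2 * math.pi)
            - t / 2 - (n + 1.5) * math.log(t) + n / (1 + t))

err = abs(log_harris_schoenfeld(200) / math.log(idempotent_count(200)) - 1)
```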

5.3 Functions in \(\mathbb {D}\): Cover Map of \(\{z\in \mathbb {C}: |z|>1\}\)

Consider now

$$\begin{aligned} f(z)=\exp \left( \dfrac{1+z}{1-z}\right) =\sum _{n=0}^\infty A_n \, z^n\,, \quad \text{ for } z \in \mathbb {D}\,. \end{aligned}$$

This function f is a universal cover of \(\{z\in \mathbb {C}: |z|>1\}\).

In this case \(R=1\), and the function g is given by \(g(z)=(1+z)/(1-z)\), for \(z \in \mathbb {D}\). The mean and variance functions of f are given by

$$\begin{aligned} m(t)=\frac{2t}{(1-t)^2} \quad \text{ and } \quad \sigma ^2(t)=\frac{2 t (1+t)}{(1-t)^3}\quad \text{ for } t \in (0,1)\, . \end{aligned}$$

The variance condition (4.2) is clearly satisfied.

Now we look for a cut function h of the form \(h(t)= (1-t)^\alpha \) with some appropriate \(\alpha >0\), towards applying Theorem 4.1. The condition on the major arc (4.3) reduces to \(\lim _{t \uparrow 1} {h(t)^3}/{(1-t)^4}=0\). For the exponent \(\alpha \), this translates into requiring that \(\alpha >4/3\).

For the minor arc, we start by writing g as

$$\begin{aligned} g(z)=\frac{2}{1-z}-1\,. \end{aligned}$$

If the exponent \(\alpha \) in the definition of h satisfies \(\alpha >1\), then \(\lim _{t \uparrow 1} h(t)/(1-t)=0\), and then Lemmas 4.3 and 4.4 give that

$$\begin{aligned}&\limsup _{t \uparrow 1} \frac{(1-t)^3}{h(t)^2}\sup _{h(t)\le |\theta | \le \pi } \mathrm{Re}\left( g(te^{\imath \theta })-g(t)\right) \\&\qquad \le 2 \lim _{t \uparrow 1} \frac{(1-t)^3}{h(t)^2} \left( \left| \frac{1}{1-te^{\imath h(t)}}\right| -\frac{1}{1-t}\right) =-1\, . \end{aligned}$$

And so, for some \(t_0 \in (0,1)\) and \(\delta >0\), we have that

$$\begin{aligned} \sup _{h(t)\le |\theta | \le \pi } \mathrm{Re}(g(te^{\imath \theta })-g(t)) \le -\delta \,\frac{h(t)^2}{(1-t)^3}\,, \quad \text{ for } t\in (t_0,1)\,. \end{aligned}$$

Since \(\sigma (t)\asymp {1}/{(1-t)^{3/2}}\), as \(t \uparrow 1\), we deduce that (4.4) is satisfied as long as \(\alpha <3/2\).

Thus, with \(\alpha \in (4/3, 3/2)\) for the cut function, we conclude that f is in the Hayman class.

Next, by appealing to the substitution Theorem C, we obtain an asymptotic formula for the Taylor coefficients \(A_n\) of f(z).

Define \(\widetilde{m}(t)={2}/{(1-t)^2}\) and \(\widetilde{\sigma }(t)= {2}/{(1-t)^{3/2}}\), for \(t \in (0,1)\), and observe that

$$\begin{aligned} \sigma (t)\sim \widetilde{\sigma }(t)\, , \quad \text{ as } t \uparrow 1\,, \end{aligned}$$

and that

$$\begin{aligned} \widetilde{m}(t)-m(t)=\dfrac{2}{1-t}=o(\sigma (t))\,, \quad \text{ as } t \uparrow 1\,, \end{aligned}$$

and, thus, that condition (3.9) is satisfied.

For each \(n \ge 1\), take \(\tau _n\) given by \(1-\tau _n=\sqrt{2/n}\). Since

$$\begin{aligned} \tau _n^{-n}=\exp \left( n\ln \left( \frac{1}{1-\sqrt{2/n}}\right) \right) =\exp \left( \sqrt{2n}+1+o(1)\right) \,,\quad \text{ as } n \rightarrow \infty \,, \end{aligned}$$

by virtue of formula (3.10) we deduce that

$$\begin{aligned} A_n \sim \frac{1}{\sqrt{2\pi }} \,\frac{1}{2^{1/4}} \,\frac{e^{2\sqrt{2} \sqrt{n}}}{n^{3/4}}\, , \quad \text{ as } n \rightarrow \infty \,. \end{aligned}$$

This asymptotic formula is Theorem XIII of Hayman’s [16]. See also Wright [30] and Macintyre–Wilson [19].
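The formula can be contrasted with the actual coefficients \(A_n\), which may be generated from the relation \(f'=g'f\), with \(g'(z)=2/(1-z)^2\); the following Python sketch (ours, for illustration) does so at \(n=400\):

```python
import math

def cover_coeffs(n_max):
    """Taylor coefficients of f(z) = exp((1+z)/(1-z)) from f' = g' f,
    where g'(z) = 2/(1-z)^2 = sum_{j>=0} 2(j+1) z^j."""
    a = [math.e]                       # a_0 = f(0) = e
    for n in range(n_max):
        s = sum(2 * (j + 1) * a[n - j] for j in range(n + 1))
        a.append(s / (n + 1))
    return a

a = cover_coeffs(400)
log_exact = math.log(a[400])
# logarithm of the right-hand side of the asymptotic formula above
log_approx = (-0.5 * math.log(2 * math.pi) - 0.25 * math.log(2)
              + 2 * math.sqrt(2) * math.sqrt(400) - 0.75 * math.log(400))
rel_err = abs(log_approx / log_exact - 1)
```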

For the closely related egf

$$\begin{aligned} \exp {\left( \frac{z}{1-z}\right) }=\sum _{n=0}^\infty \frac{I_n}{n!}\, z^n \end{aligned}$$

of the number \(I_n\) of fragmented permutations of \((1, \ldots , n)\) we have, with a similar argument, that \(e^{z/(1-z)}\) is in the Hayman class and that

$$\begin{aligned} \frac{I_n}{n!} \sim \frac{1}{2\sqrt{e}\sqrt{\pi }} \,\frac{1}{n^{3/4}} \,e^{2\sqrt{n}}\, , \quad \text{ as } n \rightarrow \infty \,. \end{aligned}$$

See [9, p. 563].

Remark 5.2

For the functions

$$\begin{aligned} f(z)=\frac{1}{(1-z)^\beta } \exp \left( \gamma \,\frac{1}{1-z}\right) \,, \quad \text{ for } z \in \mathbb {D}\, , \end{aligned}$$

with integers \(\beta \ge 0\) and \(\gamma \ge 1\), considered by Wright [30] and Macintyre–Wilson in [19] (both with more general \(\beta \) and \(\gamma \)), Theorem 4.1 gives us that f is strongly Gaussian. The function g(z) is in this case

$$\begin{aligned} g(z)=\beta \ln \frac{1}{1-z}+ \gamma \,\frac{1}{1-z}\,\cdot \end{aligned}$$

For the estimate of the minor arc, just observe that

$$\begin{aligned} \mathrm{Re}g(te^{\imath \theta })-g(t)\le \gamma \left( \mathrm{Re}\frac{1}{1-te^{\imath \theta }}-\frac{1}{1-t}\right) \, . \end{aligned}$$

Appropriate approximations \(\widetilde{m}\) and \(\widetilde{\sigma }^2\) are

$$\begin{aligned} \widetilde{m}(t)=\frac{\gamma }{(1-t)^2} \quad \text{ and } \quad \widetilde{\sigma }^2(t)=\frac{2\gamma }{(1-t)^3}\, , \end{aligned}$$

which satisfy condition (3.9). For the coefficients \(a_n\) of f(z) we have

$$\begin{aligned} a_n \sim \frac{1}{\sqrt{2\pi }} \,\frac{ \gamma ^{1/4-\beta /2}\, e^{\gamma /2}}{\sqrt{2}}\, \frac{1}{n^{3/4-\beta /2}}\, e^{2\sqrt{\gamma } \sqrt{n}}\, , \quad \text{ as } n \rightarrow \infty \, . \end{aligned}$$

The functions \(f(z)=\exp (1/(1-z)^\rho )\), for integer \(\rho >0\), also considered by Wright  [32] (with more general exponents \(\rho \)), are strongly Gaussian as well. A cut function \(h(t)=(1-t)^{\alpha }\), for \(\alpha >0\), satisfies the condition on the major arc if \(3\alpha >\rho +3\). Using Lemmas 4.3 and 4.4 with \(k=\rho \), we see that h satisfies the condition on the minor arc if \(2\alpha <2 +\rho \).

With \(t_n\) given by \(\rho t_n/(1-t_n)^{\rho +1}=n\) (so that \(1-t_n\sim (\rho /n)^{1/(1+\rho )}\), as \(n\rightarrow \infty \)), we have an asymptotic formula for the coefficients of f, but unless \(\rho =1\), the substitution Theorem C is of no avail for simplifying the resulting expression.

6 Partitions

Next we discuss ogfs f of different sorts of partitions: P for usual partitions, Q for partitions with distinct parts, \(P_{a,b}\) for partitions with parts in an arithmetic sequence, \(W_1^{1}\), which encodes plane partitions, and \(W_1^b\) for colored partitions.

As in Sect. 5, we will use Theorem 4.1 to verify that all these partition functions are strongly Gaussian and then use (3.10) to provide asymptotic formulas for their coefficients.

But in sharp contrast to the power series dealt with in Sect. 5, the corresponding functions g do not have closed formulas and the verifications of the conditions of Theorem 4.1 are more involved. In addition, the mean and variance functions do not have closed formulas either and this requires obtaining suitable approximations of these functions to find an approximation \(\tau _n\) of \(t_n\). And, finally, a proper asymptotic formula for the \(a_n\) requires, in this setting, determining the asymptotic behaviour of f on (0, R) in order to be able to get an asymptotic formula for \(f(t_n)\).

The variance condition and the major arc requirement of Theorem 4.1 are quite direct; it is the minor arc condition which requires more work.

We will obtain the needed approximations of means and variances in Sect. 6.1, the asymptotic expressions on (0, 1) in Sect. 6.2 and, finally, we will discuss strong gaussianity and asymptotic formulae of their coefficients in Sect. 6.3.

These asymptotic formulae of coefficients \(a_n\) of partition functions turn out to be always of the form

$$\begin{aligned} a_n \sim \alpha \,\frac{1}{n^\beta } \,e^{\gamma n^\delta }\, , \quad \text{ as } n \rightarrow \infty \, , \end{aligned}$$

for appropriate \(\alpha , \beta , \gamma , \delta >0\).

6.1 Approximation of Means and Variances

We collect here convenient approximations of the mean and variance functions, satisfying in each case condition (3.9) of Theorem C, of the Khinchin families of a number of partition functions.

Consider first the partition function

$$\begin{aligned} P(z)=\prod _{j=1}^\infty \frac{1}{1-z^j}\, , \quad \text{ for } |z|<1\, , \end{aligned}$$

with Khinchin family \((X_t)_{t \in [0,1)}\) and mean and variance functions

$$\begin{aligned} m(t)=\sum _{j=1}^\infty \frac{j t^{j}}{1-t^j}\quad \text{ and } \quad \sigma ^2(t)=\sum _{j=1}^\infty \frac{j^2 t^{j}}{(1-t^j)^2}\,, \quad \text{ for } t \in (0,1)\, . \end{aligned}$$

The precise values of the integrals in the following lemma will be invoked frequently in the rest of the paper.

Lemma 6.1

$$\begin{aligned} \begin{array}{llll} \mathrm{a)}&{}\displaystyle \int _0^\infty s^u \ln \frac{1}{1-e^{-s}}\, ds&{}=\zeta (u+2)\Gamma (u+1)\, , &{}\text{ for } u>-1\, , \\ \mathrm{b)}&{}\displaystyle \int _0^\infty s^u \frac{e^{-s}}{1-e^{-s}}\, ds&{}=\zeta (u+1)\Gamma (u+1)\, , &{}\text{ for } u>0\, ,\\ \mathrm{c)}&{}\displaystyle \int _0^\infty s^u \frac{e^{-s}}{(1-e^{-s})^2}\, ds&{}=\zeta (u)\Gamma (u+1)\, ,&{}\text{ for } u>1\, . \end{array} \end{aligned}$$
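As a quick numerical sanity check of the lemma (an illustration of ours), one may evaluate b) with \(u=1\) and c) with \(u=2\) by quadrature; the right-hand sides are then \(\zeta(2)\Gamma(2)=\pi^2/6\) and \(\zeta(2)\Gamma(3)=\pi^2/3\):

```python
import math

def phi_b(s):                      # integrand of b) with u = 1
    return 1.0 if s == 0 else s * math.exp(-s) / (1 - math.exp(-s))

def phi_c(s):                      # integrand of c) with u = 2
    return 1.0 if s == 0 else s * s * math.exp(-s) / (1 - math.exp(-s)) ** 2

def trapezoid(f, lo, hi, n):
    """Composite trapezoid rule with n subintervals."""
    h = (hi - lo) / n
    total = 0.5 * (f(lo) + f(hi))
    for i in range(1, n):
        total += f(lo + i * h)
    return total * h

# the tails beyond s = 60 contribute less than 10^{-20}
I_b = trapezoid(phi_b, 0.0, 60.0, 120000)
I_c = trapezoid(phi_c, 0.0, 60.0, 120000)
```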

It is convenient to consider \(m(e^{-s})\), for \(s >0\), and then \(s \downarrow 0\). We have

$$\begin{aligned} m(e^{-s})=\frac{1}{s}\sum _{j=1}^\infty js \,\frac{e^{-js}}{(1-e^{-js})}\, \cdot \end{aligned}$$

Let \(\phi \) be the function

$$\begin{aligned} \phi (x)=\frac{x e^{-x}}{1-e^{-x}}\, , \quad \text{ for } x > 0\,. \end{aligned}$$

If we set \(\phi (0)=1\), then \(\phi \in C^{\infty }[0,+\infty )\) and \(\int _0^\infty \phi (s) ds=\zeta (2)\), by b) of Lemma 6.1. In addition, we have that \(\lim _{x \rightarrow \infty } \phi (x)=0\) and \(\int _0^\infty |\phi ^\prime (s)|ds<+\infty \). From the Euler summation formula (1.1) applied to \(\phi \), we get that

$$\begin{aligned} s\,m(e^{-s})- \frac{\zeta (2)}{s} =O(1)\, , \quad \text{ as } s\downarrow 0\, , \end{aligned}$$
(6.1)

and, therefore, that \(\lim _{s\downarrow 0} s^2 m(e^{-s})=\zeta (2)\), and so

$$\begin{aligned} m(e^{-s})\sim \frac{\zeta (2)}{s^2} \, , \quad \text{ as } s\downarrow 0\, . \end{aligned}$$
(6.2)

If we define \(\widetilde{m}(e^{-s})={\zeta (2)}/{s^2}\), for \(s >0\), we have that \(m(t)\sim \widetilde{m}(t)\), as \(t \uparrow 1\).

For the variance function \(\sigma ^2(t)\) we obtain analogously, using Lemma 6.1, that

$$\begin{aligned} \sigma ^2(e^{-s})\sim \frac{\Gamma (3) \zeta (2)}{s^3}\, , \qquad \text{ as } s \downarrow 0\,. \end{aligned}$$
(6.3)

Moreover, if we define \(\widetilde{\sigma }^2(e^{-s})=\pi ^2/(3s^3)\), for \(s >0\), we have that

$$\begin{aligned} \sigma (t)\sim \widetilde{\sigma }(t)\, , \quad \text{ as } t \uparrow 1\, , \end{aligned}$$

and also, using (6.1), that

$$\begin{aligned} \frac{m(e^{-s})-\widetilde{m}(e^{-s})}{\widetilde{\sigma }(e^{-s})}=O(\sqrt{s})=o(1)\, , \quad \text{ as } s \downarrow 0\, . \end{aligned}$$
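The approximations just obtained are easily confirmed numerically by summing the series for \(m(e^{-s})\) and \(\sigma^2(e^{-s})\) directly (a Python sketch of ours):

```python
import math

def partition_mean_var(s):
    """m(e^{-s}) and sigma^2(e^{-s}) for P(z), by direct summation of the series."""
    m = var = 0.0
    for j in range(1, int(60 / s) + 1):   # terms with js > 60 are negligible
        q = math.exp(-j * s)
        m += j * q / (1 - q)
        var += j * j * q / (1 - q) ** 2
    return m, var

s = 0.01
m, var = partition_mean_var(s)
zeta2 = math.pi ** 2 / 6
rel_m = abs(m * s ** 2 / zeta2 - 1)            # against (6.2)
rel_var = abs(var * s ** 3 / (2 * zeta2) - 1)  # against (6.3): Gamma(3) zeta(2) = 2 zeta(2)
```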

Next, we turn to Q, the ogf of partitions with distinct parts. For the mean and variance functions of its Khinchin family, using Lemma 6.1 again, plus the identity \(\ln (1+x)=\ln (1-x^2)-\ln (1-x)\) for \(x\in (0,1)\), and its first two derivatives, we obtain that, as \(s \downarrow 0\),

$$\begin{aligned} \begin{aligned} m(e^{-s})&= \sum _{j=1}^\infty j \,\dfrac{e^{-js}}{1+e^{-js}}\sim \dfrac{\zeta (2)}{2 s^2}\triangleq \widetilde{m}(e^{-s}), \\ \sigma ^2(e^{-s})&= \sum _{j=1}^\infty j^2 \,\dfrac{e^{-js}}{(1+e^{-js})^2}\sim \dfrac{\zeta (2)}{s^3}\triangleq \widetilde{\sigma }^2(e^{-s}); \end{aligned} \end{aligned}$$
(6.4)

and, also, that

$$\begin{aligned} \frac{m(e^{-s})-\widetilde{m}(e^{-s})}{\widetilde{\sigma }(e^{-s})}=O(\sqrt{s})=o(1)\, , \quad \text{ as } s \downarrow 0\, . \end{aligned}$$

For the ogf \(P_{a,b}\) of partitions with parts in the arithmetic progression \(\{aj+b; j \ge 0\}\), with integers \(a,b \ge 1\), we have analogously that, as \(s \downarrow 0\),

$$\begin{aligned} \begin{aligned} m(e^{-s})&=\sum _{j=0}^\infty (a j+b) \,\dfrac{e^{-(aj+b)s}}{1-e^{-(aj+b)s}}\sim \dfrac{\zeta (2)}{a s^2}\triangleq \widetilde{m}(e^{-s}), \\ \sigma ^2(e^{-s})&=\sum _{j=0}^\infty (a j+b)^2 \,\dfrac{e^{-(aj+b)s}}{(1-e^{-(aj+b)s})^2}\sim \dfrac{2\zeta (2)}{a s^3}\triangleq \widetilde{\sigma }^2(e^{-s}); \end{aligned} \end{aligned}$$
(6.5)

and, also,

$$\begin{aligned} \frac{m(e^{-s})-\widetilde{m}(e^{-s})}{\widetilde{\sigma }(e^{-s})}=O(\sqrt{s})=o(1)\, , \quad \text{ as } s \downarrow 0\, . \end{aligned}$$

For integers \(a\ge 1\) and \(b \ge 0\), consider the infinite product

$$\begin{aligned} W_a^b(z)=\prod _{j=1}^\infty \left( \frac{1}{1-z^{j^a}}\right) ^{j^b}\, , \quad \text{ for } z \in \mathbb {D}\,. \end{aligned}$$

Let m(t) and \(\sigma ^2(t)\) be the mean and variance functions of the Khinchin family of \(W_a^b\). Consider the real function

$$\begin{aligned} \phi (x)=x^{b} \frac{x^a\,e^{-x^a}}{1-e^{-x^a}}\,, \quad \text{ for } x \ge 0\,, \end{aligned}$$

whose derivatives of order less than b vanish at \(x=0\) and at \(x=+\infty \), and which satisfies

$$\begin{aligned} \int _0^\infty \phi (x) \, dx=\frac{1}{a}\,\zeta \left( 1+\frac{b+1}{a}\right) \,\Gamma \left( 1+\frac{b+1}{a}\right) \,. \end{aligned}$$

Arguing as above for the case of regular partitions, using \(\phi \) and Euler’s summation of order \(b+1\), we get that

$$\begin{aligned} m(e^{-s})=\widetilde{m}(e^{-s})+O\left( \frac{1}{s}\right) \, , \quad \text{ as } s \downarrow 0\,, \end{aligned}$$

where

$$\begin{aligned} \widetilde{m}(e^{-s})=\frac{1}{a}\,\zeta \left( 1+\frac{b+1}{a}\right) \,\Gamma \left( 1+\frac{b+1}{a}\right) \,\, \frac{1}{s^{1+(b+1)/a}}\, \cdot \end{aligned}$$
(6.6)

Analogously,

$$\begin{aligned} \sigma ^2(e^{-s})\sim \frac{1}{a}\,\zeta \left( 1+\frac{b+1}{a}\right) \,\Gamma \left( 2+\frac{b+1}{a}\right) \, \, \frac{1}{s^{2+(b+1)/a}}\, \triangleq \widetilde{\sigma }^2(e^{-s})\, , \quad \text{ as } s \downarrow 0\,. \end{aligned}$$
(6.7)

Observe that

$$\begin{aligned} \frac{m(e^{-s})-\widetilde{m}(e^{-s})}{\widetilde{\sigma }(e^{-s})}=O(s^{(b+1)/(2a)})=o(1)\, , \quad \text{ as } s \downarrow 0\, . \end{aligned}$$
(6.8)

6.2 Asymptotics of Partition Functions on (0, 1)

We start with the ogf of partitions P; an instance we will build upon.

For the partition function P we have the well-known asymptotic formula in the interval (0, 1):

$$\begin{aligned} \ln P(e^{-s})=\frac{\zeta (2)}{s}+\frac{1}{2} \ln s-\ln \sqrt{2\pi }+o(1)\, ,\quad \text{ as } s \downarrow 0\,, \end{aligned}$$

and, thus,

$$\begin{aligned} P(e^{-s})\sim \frac{1}{\sqrt{2\pi }}\, \sqrt{s} \,\, e^{\zeta (2)/s}\, , \quad \text{ as } s \downarrow 0\,. \end{aligned}$$
(6.9)

See Hardy–Ramanujan [13, Sect. 3.2] (the formula for g(x) there should have a factor \((1-x)^{3/2}\) instead of \(\sqrt{1-x}\)). See also de Bruijn [8, Ch. 3, Ex. 3] and Báez-Duarte [2, (2.21)]. We provide a proof since its ingredients are to be used below to handle other partition functions.

Fix \(s >0\) and apply Euler’s summation of order 2 to the function \(f(x)=-\ln (1-e^{-sx})\). Observe that \(\lim _{x \rightarrow \infty } f(x)=\lim _{x \rightarrow \infty } f^\prime (x)=0\). Write

$$\begin{aligned} \ln P(e^{-s})=\sum _{j=1}^\infty f(j)=\int _1^\infty f(x) dx +\frac{1}{2} f(1)-\frac{1}{12}f^{\prime }(1) -\int _1^\infty f^{\prime \prime }(x) \frac{{B}_2(\{x\})}{2!} dx\, , \end{aligned}$$

where \(B_2(y)\) is the second Bernoulli polynomial.

Now, using a) in Lemma 6.1 we have that

$$\begin{aligned} \int _1^\infty \ln \frac{1}{1-e^{-sx}} \,dx&=\frac{1}{s} \int _s^\infty \ln \frac{1}{1-e^{-x}} \,dx\\ {}&=\frac{1}{s} \int _0^\infty \ln \frac{1}{1-e^{-x}} dx-\frac{1}{s} \int _0^s \ln \frac{1}{1-e^{-x}} dx\\ {}&=\frac{\zeta (2)}{s}+\ln s-1 +O(s)\, , \quad \text{ as } s\downarrow 0\, . \end{aligned}$$

Also,

$$\begin{aligned} \frac{1}{2} f(1) -\frac{1}{12} f^{\prime }(1)=-\frac{1}{2}\ln s+\frac{1}{12} +O(s),\quad \text {as } s \downarrow 0, \end{aligned}$$

and

$$\begin{aligned} \int _1^\infty f^{\prime \prime }(x) \,\frac{{B}_2(\{x\})}{2!} \,dx=\int _1^\infty \frac{(sx)^2 e^{sx}}{(e^{sx}-1)^2} \,\frac{{B}_2(\{x\})}{2!} \,\frac{1}{x^2} \,dx\, . \end{aligned}$$

Since the function \(y\mapsto (y^2 e^y)/(e^y-1)^2\) is bounded in \([0,+\infty )\) and tends to 1 as \(y\downarrow 0\), dominated convergence gives that

$$\begin{aligned} \int _1^\infty f^{\prime \prime }(x) \,\frac{{B}_2(\{x\})}{2!} \,dx=\int _1^\infty \frac{{B}_2(\{x\})}{2!} \,\frac{1}{x^2} \,dx +o(1)\, , \quad \text{ as } s \downarrow 0\,. \end{aligned}$$

Thus,

$$\begin{aligned} \ln P(e^{-s})=\frac{\zeta (2)}{s}+\frac{1}{2}\ln s -1+\frac{1}{12}-\int _1^\infty \frac{{B}_2(\{x\})}{2!} \,\frac{1}{x^2} \,dx +o(1)\, , \quad \text{ as } s \downarrow 0\,. \end{aligned}$$

We may identify the constant term of the above expression by applying the analogous Euler summation of order 2, between 1 and N, to the function \(\ln x\), obtaining Stirling’s approximation in the following precise (and standard) form

$$\begin{aligned} \ln N!=N \ln N -N +1+\frac{1}{2} \ln N+\frac{1}{12}\,\left( \frac{1}{N}-1\right) +\int _1^N \frac{{B}_2(\{x\})}{2!} \,\frac{1}{x^2} \,dx\, , \end{aligned}$$

and thus deduce that

$$\begin{aligned} 1-\frac{1}{12}+\int _1^\infty \frac{{B}_2(\{x\})}{2!} \,\frac{1}{x^2} \,dx=\ln \sqrt{2\pi }\, . \end{aligned}$$

The same argument gives for the ogf Q of partitions into distinct parts that

$$\begin{aligned} Q(e^{-s})\sim \frac{1}{\sqrt{2}}\exp \left( \frac{\zeta (2)}{2 s}\right) \, , \quad \text{ as } s \downarrow 0\, , \end{aligned}$$
(6.10)

and for the ogf \(P_{a,b}\) of partitions with parts in the arithmetic progression \(\{aj+b; j \ge 0\}\) with integers \(a, b\ge 1\), that

$$\begin{aligned} P_{a,b}(e^{-s})\sim \frac{1}{\sqrt{2\pi }}\, \Gamma \left( \frac{b}{a}\right) (as)^{b/a-1/2} \exp \left( \frac{\zeta (2)}{as}\right) \, , \quad \text{ as } s \downarrow 0\, . \end{aligned}$$
(6.11)

Recall that \(P_{1,1}\equiv P\) and \(P_{2,1}\equiv Q\). The asymptotic formula for Q may be obtained from the identity (2.9) and the corresponding asymptotic formula (6.9) for P.
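Formula (6.10), together with Euler's identity \(Q(z)=P(z)/P(z^2)\), can be checked numerically; the following Python sketch (an illustration of ours) does both at \(s=0.01\):

```python
import math

def log_P(s):
    """ln P(e^{-s}) by direct summation; terms with js > 60 are negligible."""
    return sum(-math.log(1 - math.exp(-j * s)) for j in range(1, int(60 / s) + 1))

def log_Q(s):
    """ln Q(e^{-s}) = sum_j ln(1 + e^{-js}), by direct summation."""
    return sum(math.log(1 + math.exp(-j * s)) for j in range(1, int(60 / s) + 1))

s = 0.01
zeta2 = math.pi ** 2 / 6
err_formula = abs(log_Q(s) - (zeta2 / (2 * s) - 0.5 * math.log(2)))   # (6.10)
err_identity = abs(log_Q(s) - (log_P(s) - log_P(2 * s)))              # Q = P(z)/P(z^2)
```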

Now we turn to the ogf \(W_a^b\), with integers \(a\ge 1\) and \(b \ge 0\).

Fix \(s>0\). We will apply Euler’s summation of order \(b+2\) to the function

$$\begin{aligned} f(x)=x^b\, \ln \frac{1}{1-e^{-s x^a}}\,, \quad \text{ for } x >0\, . \end{aligned}$$

Note that

$$\begin{aligned} \ln W_a^b(e^{-s})=\sum _{j=1}^\infty f(j)\, . \end{aligned}$$

Observe that

$$\begin{aligned} \int _1^\infty f(x) dx&=\frac{1}{a} \frac{1}{s^{(b+1)/a}} \int _s^\infty y^{(b+1)/a} \ln \frac{1}{1-e^{-y}}\, \frac{dy}{y} \\&= \frac{1}{a} \frac{1}{s^{(b+1)/a}} \int _0^\infty y^{(b+1)/a} \ln \frac{1}{1-e^{-y}}\, \frac{dy}{y}\\ {}&\quad -\frac{1}{a} \frac{1}{s^{(b+1)/a}} \int _0^s y^{(b+1)/a} \ln \frac{1}{1-e^{-y}}\, \frac{dy}{y} \\&=\frac{1}{a} \zeta \left( 1+\frac{b+1}{a}\right) \, \Gamma \left( \frac{b+1}{a}\right) \frac{1}{s^{(b+1)/a}}\\ {}&\quad -\frac{1}{b+1} \ln \frac{1}{s}-\frac{a}{(b+1)^2}+O(s)\, . \end{aligned}$$

For the function f at \(\infty \), we have that \(\lim _{x \rightarrow \infty } f^{(j)}(x)=0\), for \(j \ge 0\).

We need the values of f and its derivatives up to order \(b+1\) at \(x=1\). We have

$$\begin{aligned} f(1)=\ln \dfrac{1}{1-e^{-s}}=\ln \dfrac{1}{s}+O(s)\, . \end{aligned}$$

Write

$$f(x)=\dfrac{1}{s^{b/a}}g(s^{1/a} x),$$

where

$$\begin{aligned} g(y)=y^b \ln \frac{1}{1-e^{-y^a}}=\underset{=\omega (y)}{\underbrace{y^{b+a}+y^b \ln \frac{y^a}{e^{y^a}-1}}}-a \, \, \underset{=\eta (y)}{\underbrace{(y^b\ln y)}}\, . \end{aligned}$$

Observe that

$$\begin{aligned} f^{(j)}(1)=\dfrac{1}{s^{(b-j)/a}} g^{(j)}(s^{1/a}),\quad \text {for }j \ge 1. \end{aligned}$$

The function \(\omega \) is holomorphic near 0, and its Taylor expansion starts with \((1/2) y^{b+a}\), while

$$\begin{aligned} \eta ^{(j)}(y)= y^{b-j} \frac{b!}{(b-j)!} \left( \ln y +\sum _{i=1}^j \frac{1}{b+1-i}\right) \, , \quad \text{ for } 1 \le j \le b \text{ and } y >0\, . \end{aligned}$$

Also, \(\eta ^{(b+1)}(y)={b!}/{y}\) and \(\eta ^{(b+2)}(y)=-{b!}/{y^2}\), for \(y >0\). And, also,

$$\begin{aligned} f^{(b+1)}(1)= & {} s^{1/a} g^{(b+1)}(s^{1/a})=s^{1/a} \omega ^{(b+1)}(s^{1/a})-as^{1/a} \eta ^{(b+1)}(s^{1/a})\\= & {} - b!\, a +O(s)\, . \end{aligned}$$

With all this in mind, we have that

$$\begin{aligned}&\frac{1}{2} (f(1)+f(\infty ))+\sum _{j=1}^{b+1} (-1)^{j+1} \frac{B_{j+1}}{(j+1)!} (f^{(j)}(\infty )-f^{(j)}(1)) \\&\quad =\frac{1}{2} \ln \frac{1}{s} +\left( \sum _{j=1}^b (-1)^{j+1} \frac{B_{j+1}}{(j+1)!}\frac{b!}{(b-j)!}\right) \ln s\\&\qquad +a \sum _{j=1}^b (-1)^{j+1} \frac{B_{j+1}}{(j+1)!}\frac{b!}{(b-j)!} \sum _{i=1}^j \frac{1}{b+1-i}{+} a (-1)^{b+2}\frac{B_{b+2}}{(b+2)!} b! +O(s)\, . \end{aligned}$$

Applying Euler’s summation of order \(b+1\) to \(x^b\), with \(b \ge 1\), on the interval [0, 1], and taking into account that its bth derivative is the constant b!, we deduce that


$$\begin{aligned} 1=\frac{1}{b+1}+\frac{1}{2} +\sum _{j=1}^{b-1} (-1)^{j+1} \frac{B_{j+1}}{(j+1)!}\frac{b!}{(b-j)!}\, . \end{aligned}$$

Thus,

$$\begin{aligned} \sum _{j=1}^{b} (-1)^{j+1} \frac{B_{j+1}}{(j+1)!}\frac{b!}{(b-j)!}=(-1)^{b+1} \frac{B_{b+1}}{b+1}+\frac{1}{2}-\frac{1}{b+1}\,, \end{aligned}$$

for \(b \ge 1\), and for \(b=0\) also.
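This Bernoulli identity (with the convention \(B_1=-1/2\)) can be verified exactly in rational arithmetic; a small Python sketch:

```python
from fractions import Fraction
from math import comb, factorial

def bernoulli_list(m):
    # B_0, ..., B_m with the convention B_1 = -1/2, from the
    # recurrence sum_{j=0}^{n-1} C(n+1, j) B_j = -(n+1) B_n
    B = [Fraction(1)]
    for n in range(1, m + 1):
        B.append(-sum(comb(n + 1, j) * B[j] for j in range(n)) / (n + 1))
    return B

for b in range(0, 9):
    B = bernoulli_list(b + 1)
    lhs = sum(Fraction((-1)**(j + 1) * factorial(b),
                       factorial(j + 1) * factorial(b - j)) * B[j + 1]
              for j in range(1, b + 1))
    rhs = Fraction((-1)**(b + 1)) * B[b + 1] / (b + 1) + Fraction(1, 2) - Fraction(1, b + 1)
    assert lhs == rhs  # the identity holds exactly for b = 0, 1, ..., 8
```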

So, we may simplify and write

$$\begin{aligned}&\frac{1}{2} (f(1)+f(\infty ))+\sum _{j=1}^{b+1} (-1)^{j+1} \frac{B_{j+1}}{(j+1)!} (f^{(j)}(\infty )-f^{(j)}(1)) \\&\quad =\left( -\frac{1}{b+1}+(-1)^{b+1} \frac{B_{b+1}}{b+1}\right) \ln s \\&\qquad + a \sum _{j=1}^b (-1)^{j+1} \frac{B_{j+1}}{(j+1)!}\frac{b!}{(b-j)!} \sum _{i=1}^j \frac{1}{b+1-i}+ a (-1)^{b+2}\frac{B_{b+2}}{(b+2)!} b! +O(s)\, . \end{aligned}$$

Finally,

$$\begin{aligned}&(-1)^{b{+}3} \int _1^\infty f^{(b{+}2)}(x)\frac{B_{b{+}2}(\{x\})}{(b{+}2)!} dx\\ {}&\quad =(-1)^{b{+}3} \int _1^\infty (s^{1/a} x)^2 g^{(b{+}2)}(s^{1/a} x) \frac{B_{b{+}2}(\{x\})}{(b{+}2)!} \frac{dx}{x^2}\,. \end{aligned}$$

The function \(y \mapsto y^2 g^{(b+2)}(y)\) is bounded in \((0,\infty )\) and as \(y \downarrow 0\) tends to \(b! \, a\). We deduce from dominated convergence that, as \(s\downarrow 0\),

$$\begin{aligned} (-1)^{b+3} \int _1^\infty f^{(b+2)}(x)\frac{B_{b+2}(\{x\})}{(b+2)!} dx= (-1)^{b+3} b! \, a\int _1^\infty \frac{B_{b+2}(\{x\})}{(b+2)!} \frac{dx}{x^2}+o(1)\, . \end{aligned}$$

From this, we have

$$\begin{aligned} \ln W_a^b(e^{-s})&=\frac{1}{a} \zeta \left( 1+\frac{b+1}{a}\right) \, \Gamma \left( \frac{b+1}{a}\right) \,\frac{1}{s^{(b+1)/a}}+ (-1)^{b+1} \frac{B_{b+1}}{b+1}\, \ln s \\&\quad -\frac{a}{(b+1)^2}+a \sum _{j=1}^b (-1)^{j+1} \frac{B_{j+1}}{(j+1)!}\frac{b!}{(b-j)!} \sum _{i=1}^j \frac{1}{b+1-i}\\ {}&\quad +a (-1)^{b+2}\frac{B_{b+2}}{(b+2)!} b! + (-1)^{b+3} b! \, a \int _1^\infty \frac{B_{b+2}(\{x\})}{(b+2)!} \frac{dx}{x^2}+o(1)\, . \end{aligned}$$

The constant term of the above expression may be written more compactly by appealing to the so called (generalized) Glaisher–Kinkelin constants appearing in (in fact, defined by) the asymptotic formula of (generalized) hyperfactorials.

If we apply Euler’s summation of order \(b+2\) to the function \(x^b \ln x\) between 1 and N, we obtain that

$$\begin{aligned} (\ddag ) \begin{aligned}&\sum _{n=1}^N n^b \ln n =\frac{1}{b+1} N^{b+1} \ln N - \frac{1}{(b+1)^2} N^{b+1}+\frac{1}{2} N^b \ln N \\&\quad +\sum _{j=1}^b (-1)^{j+1} \frac{B_{j+1}}{(j+1)!} \frac{b!}{(b-j)!} N^{b-j} \ln N \\&\quad +\sum _{j=1}^{b-1} (-1)^{j+1} \frac{B_{j+1}}{(j+1)!} \frac{b!}{(b-j)!} N^{b-j} \sum _{i=1}^j \frac{1}{b-i+1} \\&\quad +(-1)^{b+1} \frac{B_{b+1}}{b+1} H_b +\frac{1}{(b+1)^2}-\sum _{j=1}^{b} (-1)^{j+1} \frac{B_{j+1}}{(j+1)!} \frac{ b!}{(b-j)!} \sum _{i=1}^j \frac{1}{b-i+1}\\&\quad -(-1)^{b+2} \frac{B_{b+2}}{(b+2)!}b!-(-1)^{b+3} \int _1^\infty \frac{b!}{(b+2)!} B_{b+2}(\{x\}) \frac{dx}{x^2}+O\left( \frac{1}{N}\right) \, . \end{aligned} \end{aligned}$$

Here, \(H_b\) denotes the bth harmonic number.

This expression \((\ddag )\) is the analogue of Stirling’s formula but now for the hyperfactorials \(\prod _{j=1}^N j^{j^b}\) of order b; the factorials being the case \(b=0\). The constant term (the sum of the terms not depending on N) in the above expression is \(\ln \mathrm{GK}_b\). These \(\mathrm{GK}_{b}, b \ge 1\) are the generalized Glaisher–Kinkelin constants introduced by Bendersky, [5], and identified in the following closed form:

$$\begin{aligned} (\star ) \quad \ln \mathrm{GK}_{b} =\frac{(-1)^{b+1}B_{b+1}}{b+1}\, H_{b}-\zeta ^{\prime }(-b)\,, \quad \text{ for } b\ge 1\, , \end{aligned}$$

by Choudhury [7], and Adamchik [1]. See also Wang [27]. The proper (original) Glaisher–Kinkelin constant is \(\mathrm{GK}_1\). By taking \(H_0=0\), formula \((\star )\) is also valid for \(b=0\). The constant \(\mathrm{GK}_0\) is actually \(\sqrt{2\pi }\).

With this notation, we have that, as \(s\downarrow 0\),

$$\begin{aligned} \ln W_a^b(e^{-s})=\frac{1}{a} \,\zeta \left( 1+\frac{b+1}{a}\right) \, \Gamma \left( \frac{b+1}{a}\right) \frac{1}{s^{(b+1)/a}}-\zeta (-b) \ln s +a\, \zeta ^{\prime }(-b)+o(1) , \end{aligned}$$
(6.12)

where for compactness we have used that \(\zeta (-b)=(-1)^b {B_{b+1}}/{(b+1)}\), for \(b \ge 0\).
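Formula (6.12) admits a direct numerical check, since for \(a=b=1\) (the MacMahon function) one has \(\ln W_1^{1}(e^{-s})=\sum _{n\ge 1} n \ln \frac{1}{1-e^{-sn}}\). A Python sketch, where the value of \(\zeta ^{\prime }(-1)\) is hardcoded and the cutoffs and tolerance are ad hoc:

```python
import math

s = 0.01
# ln W_1^1(e^{-s}) = sum_{n >= 1} n * ln(1/(1 - e^{-s n})); tail beyond n = 6000 is negligible
direct = sum(n * (-math.log1p(-math.exp(-s * n))) for n in range(1, 6001))

zeta3 = sum(1.0 / k**3 for k in range(1, 200_000))
zeta_prime_minus_1 = -0.1654211437  # known numerical value of zeta'(-1)

# right-hand side of (6.12) with a = 1, b = 1; note zeta(-1) = -1/12
predicted = zeta3 / s**2 + (1.0 / 12.0) * math.log(s) + zeta_prime_minus_1

print(direct, predicted)  # the two agree to within the o(1) error
```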

6.3 Hayman Class and Coefficients of Partition Functions

Let us apply Theorem 4.1 to check that these partition functions, P, Q, \( P_{a,b}\) and \( W_1^{\!b}\), are in the Hayman class and, thus, that they are strongly Gaussian. The variance and major arc conditions of Theorem 4.1 are obtained quite directly in each case. As usual, the minor arc estimates demand more attention. Recall that in Sect. 6.1 we have obtained, for these partition functions, suitable approximations of their mean and variance functions all satisfying condition (3.9) of Theorem C, and that we have just described in Sect. 6.2 appropriate asymptotics of these partition functions in the interval (0, 1).

6.3.1 Partition Function P. Theorem of Hardy–Ramanujan

We start, as usual, with the case of the partition function P.

Theorem 6.2

The partition function P is in the Hayman class.

Proof

We check that P satisfies the conditions of Theorem 4.1.

For the variance function we have, see (6.3), that

$$\begin{aligned} (\star )\quad \sigma ^2(e^{-s})\sim \frac{\Gamma (3) \zeta (2)}{s^3}\, , \quad \text{ as } s \downarrow 0\,. \end{aligned}$$

Thus, the variance condition of Theorem 4.1 is satisfied.

We have that \(P(z)=e^{g(z)}\), for \(z \in \mathbb {D}\), where g is the function

$$\begin{aligned} g(z)=\sum _{j=1}^\infty \ln \frac{1}{1-z^j}=\sum _{j,k\ge 1} \frac{z^{jk}}{k}=\sum _{k=1}^\infty \frac{1}{k} \,\frac{z^k}{1-z^k}\, , \quad \text{ for } z \in \mathbb {D}\end{aligned}$$

(g has non-negative Taylor coefficients, as the second expression above shows).

Next we search for a cut function h so that conditions (4.3) and (4.4) of Theorem 4.1 are satisfied. We will take \(h(t)=(1-t)^\alpha \), for \(t \in (0,1)\), for an appropriate \(\alpha >0\) to be specified shortly.

Let F be the auxiliary function of P. Now,

$$\begin{aligned} t^3 g^{\prime \prime \prime }(t)\le F^{\prime \prime \prime }(-s)\, , \quad \text{ for } t=e^{-s} \text{ and } s >0\,, \end{aligned}$$

and so, because of the asymptotic formula (3.3), we have that

$$\begin{aligned} t^3 g^{\prime \prime \prime }(t) =O\left( \frac{1}{(1-t)^4}\right) \, , \quad \text{ as } t \uparrow 1\,. \end{aligned}$$

Thus, if \(\alpha >4/3\), the cut function h satisfies the major arc condition (4.3).

For the minor arc, write for \(z=te^{\imath \theta }\), with \(t \in (0,1)\) and \(\theta \in \mathbb {R}\)

$$\begin{aligned} \mathrm{Re}(g(te^{\imath \theta }))-g(t)=\sum _{k=1}^\infty \frac{1}{k} \left( \mathrm{Re}\left( \frac{z^k}{1-z^k}\right) -\frac{t^k}{1-t^k}\right) \,. \end{aligned}$$

All the summands in the series above are non-positive, so that, keeping only the term corresponding to \(k=1\), we deduce that

$$\begin{aligned} \mathrm{Re}(g(te^{\imath \theta }))-g(t)\le \mathrm{Re}\left( \frac{z}{1-z}\right) -\frac{t}{1-t}=\mathrm{Re}\left( \frac{1}{1-z}\right) -\frac{1}{1-t}\, \cdot \end{aligned}$$

Because of Lemma 4.3, we have that

$$\begin{aligned} \sup _{h(t) \le |\theta |\le \pi }\mathrm{Re}(g(te^{\imath \theta }))-g(t)\le \mathrm{Re}\left( \frac{1}{1-t e^{\imath h(t)}}\right) -\frac{1}{1-t}\le \left| \frac{1}{1-t e^{\imath h(t)}}\right| -\frac{1}{1-t}\, \cdot \end{aligned}$$

And thus, from Lemma 4.4 we deduce, for some \(\delta >0\) and some \(t_0 \in (0,1)\), that

$$\begin{aligned} \sup _{h(t) \le |\theta |\le \pi }\mathrm{Re}(g(te^{\imath \theta }))-g(t)\le -\delta \frac{h(t)^2}{(1-t)^3}\, , \quad \text{ for } t_0<t<1\, . \end{aligned}$$

Finally, from the moderate growth of \(\sigma (t)\) given by \((\star )\), we conclude that condition (4.4) on the minor arc is amply satisfied as long as \(\alpha <3/2\).

In conclusion, by taking \(\alpha \) so that \(4/3<\alpha < 3/2\), all the conditions of Theorem 4.1 are satisfied, and P is in the Hayman class. \(\square \)

We can now deduce the asymptotic formula for p(n), the coefficients of the partition function P. From (6.2) and (6.3) we know that we can take \(\widetilde{m}(e^{-s})=\zeta (2)/s^2\) and \(\widetilde{\sigma }^2(e^{-s})=\Gamma (3) \zeta (2)/s^3\) satisfying (3.9).

Thus with \(\tau _n=e^{-s_n}\) and \(s_n=\sqrt{\zeta (2)/n}\), for \(n \ge 1\), by appealing to formula (3.10) and the asymptotics (6.9) of P on (0, 1), we deduce that

$$\begin{aligned} p(n)\sim \frac{1}{4\sqrt{3}} \,\frac{1}{n} \,e^{\pi \sqrt{2/3}\sqrt{n} }\, , \quad \text{ as } n\rightarrow \infty \, ; \end{aligned}$$
(6.13)

the Hardy–Ramanujan partition theorem, [13].
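As a numerical illustration, the exact \(p(n)\), computed with the classic dynamic-programming recurrence, is already within a couple of percent of (6.13) for moderate n; a Python sketch (the choice \(n=500\) is ad hoc):

```python
import math

N = 500
# p(0..N) by the classic coin-style DP over parts 1, 2, ..., N
p = [1] + [0] * N
for part in range(1, N + 1):
    for n in range(part, N + 1):
        p[n] += p[n - part]

hardy_ramanujan = math.exp(math.pi * math.sqrt(2.0 / 3.0) * math.sqrt(N)) / (4.0 * math.sqrt(3.0) * N)
ratio = p[N] / hardy_ramanujan
print(p[N], ratio)  # the ratio tends to 1 as N grows
```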

6.3.2 Function Q

For the ogf Q of partitions with distinct parts given by

$$\begin{aligned} Q(z)=\sum _{n=0}^{\infty } q(n) z^n=\prod _{j=1}^\infty (1+z^j)=\prod _{j=0}^\infty \frac{1}{1-z^{2j+1}}\, , \quad \text{ for } z \in \mathbb {D}\, , \end{aligned}$$

we have \(Q=\exp (g)\), where

$$\begin{aligned} g(z)=\sum _{k,j\ge 1} \frac{(-1)^{k+1}}{k} \, z^{kj}=\sum _{k\ge 1;j \ge 0} \frac{1}{k} \, z^{k(2j+1)}\, , \quad \text{ for } z \in \mathbb {D}\,. \end{aligned}$$

The second expression shows that g has non-negative coefficients.

As in the case of P, we have \(g^{\prime \prime \prime }(t)=O(1/(1-t)^4)\), as \(t \uparrow 1\). Thus, for a cut \(h(t)=(1-t)^\alpha \), we just need \(\alpha >4/3\) to fulfill condition (4.3) for the major arc of Theorem 4.1.

Now, the minor arc. For \(z=te^{\imath \theta }\) with \(t \in (0,1)\) and \(\theta \in \mathbb {R}\) we have that

$$\begin{aligned} \left| \frac{1+te^{\imath \theta }}{1+t}\right| ^2=1+\frac{2t (\cos \theta -1)}{(1+t)^2}\le 1+\frac{t}{2} (\cos \theta -1)\le e^{(t/2 (\cos \theta -1)}=e^{(\mathrm{Re}z -|z|)/2}\, , \end{aligned}$$

and, so,

$$\begin{aligned} \frac{|Q(z)|}{Q(|z|)}\le \exp \left( \frac{1}{4} \left( \mathrm{Re}\frac{z}{1-z} -\frac{t}{1-t}\right) \right) = \exp \left( \frac{1}{4} \left( \mathrm{Re}\frac{1}{1-z} -\frac{1}{1-t}\right) \right) \, , \end{aligned}$$

and, consequently,

$$\begin{aligned} \mathrm{Re}g(z) -g(|z|)=\ln \frac{|Q(z)|}{Q(|z|)} \le \frac{1}{4} \left( \mathrm{Re}\frac{1}{1-z} -\frac{1}{1-t}\right) \, . \end{aligned}$$

With this and the same argument as in the case of P, we see that the cut function \(h(t)=(1-t)^\alpha \) with \(\alpha <3/2\) satisfies the condition for the minor arc of Theorem 4.1. Thus, with \(\alpha \) such that \(4/3< \alpha < 3/2\), we conclude that:

Theorem 6.3

The partition function Q (of partitions with distinct parts) is in the Hayman class.

The approximations \(\widetilde{m}\) and \(\widetilde{\sigma }^2\) for the mean and variance functions of the Khinchin family of Q given in formula (6.4), and the asymptotics (6.10) of Q in the interval (0, 1), plus strong gaussianity and Theorem C, allow us to derive the asymptotic formula for the number q(n) of partitions of n with distinct parts, see [13]:

$$\begin{aligned} q(n)\sim \frac{1}{4\cdot 3^{1/4}} \,\frac{1}{n^{3/4}} \,e^{\pi \sqrt{1/3}\sqrt{n}}\, , \quad \text{ as } n\rightarrow \infty \, . \end{aligned}$$
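This too can be illustrated numerically, computing \(q(n)\) exactly by a 0/1-knapsack recursion over the parts; a Python sketch (the choice \(n=1000\) is ad hoc):

```python
import math

N = 1000
# q(0..N): partitions into distinct parts; each part is added at most once,
# hence the descending inner loop
q = [1] + [0] * N
for part in range(1, N + 1):
    for n in range(N, part - 1, -1):
        q[n] += q[n - part]

predicted = math.exp(math.pi * math.sqrt(N / 3.0)) / (4.0 * 3.0**0.25 * N**0.75)
ratio = q[N] / predicted
print(q[N], ratio)  # the ratio tends to 1 as N grows
```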

6.3.3 Function \(P_{a,b}\): Theorem of Ingham

We consider now, for integers \(a, b \ge 1\), the ogf

$$\begin{aligned} P_{a,b}(z)=\prod _{j=0}^\infty \frac{1}{1-z^{aj+b}}\, , \quad \text{ for } z \in \mathbb {D}\,, \end{aligned}$$

of partitions with parts in the arithmetic progression \(\{aj+b: j \ge 0\}\).

If \(\mathrm{gcd}(a,b)=d>1\), then \(P_{a,b}\) is not strongly Gaussian (although it is Gaussian), since we can write \(P_{a,b}(z)=G(z^d)\), where G is a holomorphic function in \(\mathbb {D}\), and thus the coefficients of \(P_{a,b}\) satisfy no asymptotic formula; see Remark 6.6 below.

We will verify by means of Theorem 4.1 that if \(\mathrm{gcd}(a,b)=1\), then \(P_{a,b}\) is strongly Gaussian. The condition that a and b are relatively prime is used exclusively to verify the minor arc condition of Theorem 4.1.

We have already obtained, see formula (6.5), convenient approximations \(\widetilde{m}\) and \(\widetilde{\sigma }^2\) of the mean and variance functions of the Khinchin family of \(P_{a,b}\), and an appropriate asymptotic formula (6.11) of \(P_{a,b}\) on the interval (0, 1).

We can write \(P_{a,b}=\exp (g)\), where

$$\begin{aligned} g(z)=\sum _{k\ge 1; j \ge 0} \frac{1}{k} \,z^{k(aj+b)}=\sum _{k=1}^\infty \frac{1}{k}\,\frac{z^{bk}}{1-z^{ak}} =\sum _{k=1}^\infty \frac{1}{k} \,U(z^k)\, , \quad \text{ for } z \in \mathbb {D}\, , \end{aligned}$$

where U is the rational function

$$\begin{aligned} U(z)=\frac{z^b}{1-z^a}\, , \end{aligned}$$

holomorphic in \(\mathbb {D}\). Observe that the Taylor coefficients of U around \(z=0\) are non-negative.

We search now for a cut function \(h(t)=(1-t)^\alpha \) with appropriate \(\alpha >0\). Since \(g^{\prime \prime \prime }(t)=O(1/(1-t)^4)\), as \(t \uparrow 1\), the condition (4.3) of Theorem  4.1 for the major arc is satisfied if \(\alpha >4/3\).

Next, the minor arc. This is where we use the condition \(\mathrm{gcd}(a,b)=1\). We have

$$\begin{aligned} \mathrm{Re}g(z)-g(|z|)=\sum _{k=1}^\infty \frac{1}{k} \left( \mathrm{Re}U(z^k)-U(|z|^k)\right) \le \mathrm{Re}U(z)-U(|z|)\, , \quad \text{ for } z \in \mathbb {D}\,. \end{aligned}$$

The inequality holds since all the summands in the series above are non-positive.

The function U(z) has simple poles at the ath roots of unity: \(\gamma _j=e^{2\pi \imath j/a}\), with \(0\le j <a\). The residue of U at \(\gamma _j\) is \(-{\gamma _j^{b+1}}/{a}\).

Let us denote by \(\mathcal {R}_j\) the region \(\mathcal {R}_j=\{z=re^{\imath \phi }: 1/2\le r<1, |\phi -2\pi j/a|\le \pi /a\}\), and let \(\widetilde{\mathcal {R}}=\bigcup _{j=1}^{a-1} \mathcal {R}_j\); observe that we do not include \(\mathcal {R}_0\) in \(\widetilde{\mathcal {R}}\).

For a certain constant \(M>0\) we have that

$$\begin{aligned} (\flat )\quad \left| U(z)+\frac{\gamma _j^{b+1}}{a(z-\gamma _j)}\right| \le M\, , \quad \text{ for } z\in \mathcal {R}_j \text{ and } 0\le j <a\, . \end{aligned}$$

We start analyzing \(\mathrm{Re}U(z)\) for \(z\in \widetilde{\mathcal {R}}\).

Lemma 6.4

For \(0<j<a\), we have that

$$\begin{aligned} \limsup _{t \uparrow 1} \sup \limits _{\begin{array}{c} z \in \mathcal {R}_j;\\ t\le |z|\le 1 \end{array}} \mathrm{Re}\left( U(z)\frac{1-|z|^a}{|z|^b}\right) \le \frac{1}{2}\left( 1+\cos \left( 2\pi j\,\frac{b}{a}\right) \right) \, . \end{aligned}$$

Moreover, for certain \(\delta \in (0,1)\) and \(t_0\in (1/2,1)\) we have that

$$\begin{aligned} \mathrm{Re}\, U(z)\frac{1-|z|^a}{|z|^b}\le \delta \, , \quad \text{ for } \text{ every } z \in \widetilde{\mathcal {R}} \text{ with } |z|>t_0\, . \end{aligned}$$

Proof

Fix \(0<j<a\) and \(z \in \mathcal {R}_j\). Let \(w\in \mathcal {R}_0\) such that \(z=\gamma _j w\). By appealing to \((\flat )\) we have that

$$\begin{aligned} (\flat \flat )\quad \mathrm{Re}\, U(z)\frac{1-|z|^a}{|z|^b}\le \frac{1-|z|^a}{a(1-|z|)|z|^b}\, \mathrm{Re}\left( \gamma _j^b \frac{1-|w|}{1-w}\right) +M\,\frac{1-|z|^a}{|z|^b}\, . \end{aligned}$$

Now, for every \(w\in \mathbb {D}\) one has that

$$\begin{aligned} \frac{1-|w|}{1-w} \in \mathbb {D}\left( \frac{1}{2}, \frac{1}{2}\right) \, \cup \, \{1\}\,, \end{aligned}$$

and, in particular,

$$\begin{aligned} \mathrm{Re}\left( e^{\imath \eta }\frac{1-|w|}{1-w}\right) \le \frac{1}{2}\left( 1+\cos (\eta )\right) \, , \quad \text{ for } \text{ all } \eta \in \mathbb {R} \text{ and } w \in \mathbb {D}\,. \end{aligned}$$

Therefore, we obtain from \((\flat \flat )\) that

$$\begin{aligned} (\flat \flat \flat )\qquad \mathrm{Re}\left( U(z)\frac{1-|z|^a}{|z|^b}\right) \le \frac{1-|z|^a}{a(1-|z|)|z|^b}\, \delta _j+M\frac{1-|z|^a}{|z|^b}\, , \end{aligned}$$

with \( \delta _j=\frac{1}{2}\left( 1+\cos (2\pi j b/a)\right) \, .\)

This bound \((\flat \flat \flat )\) gives the \(\limsup \) of the statement. The second bound now follows since \(\mathrm{gcd}(a,b)=1\) and \(0<j<a\) imply that \(\cos (2\pi j b/a)<1\), so that \(\delta _j<1\). \(\square \)

As a consequence of Lemma 6.4, we have

$$\begin{aligned} (\natural )\quad \mathrm{Re}\, U(z)- U(t) \le (\delta -1) U(t)\le -\frac{1-\delta }{a} \frac{t_0^b}{1-t}\, , \quad \text{ for } z \in \widetilde{\mathcal {R}} \text{ with } |z|=t\in (t_0,1)\,. \end{aligned}$$

It remains to analyze \(\mathrm{Re}U(z)\) in the region \(\mathcal {R}_0\). For \(z \in \mathcal {R}_0\), we have that

$$\begin{aligned} \left| U(z)-\frac{1}{a(1-z)}\right| \le M\,, \end{aligned}$$

and, therefore, for \(z \in \mathcal {R}_0\) with \(|z|=t\), we have that

$$\begin{aligned} \mathrm{Re}\, U(z)-U(t)\le M +\frac{1}{a} \left( \mathrm{Re}\frac{1}{1-z} -\frac{1}{1-t}\right) + \frac{1}{a(1-t)}-\frac{t^b}{(1-t^a)}\, \cdot \end{aligned}$$

Now,

$$\begin{aligned} \dfrac{1}{a(1-t)}-\dfrac{t^b}{(1-t^a)}\le \dfrac{b}{a}, \quad \text {for each }t \in (0,1), \end{aligned}$$

and thus from Lemmas 4.3 and 4.4, arguing as in the case above of P and Q, and increasing the \(t_0\) above if necessary, we deduce that for certain \(\delta >0\),

$$\begin{aligned} (\natural \natural ) \quad \sup \limits _{\begin{array}{c} z=te^{\imath \theta }\in \mathcal {R}_0;\\ h(t)\le |\theta | \end{array}} (\mathrm{Re}\,U(z)-U(t))\le M+\frac{b}{a} -\delta \,\frac{h(t)^2}{(1-t)^3}\, , \quad \text{ for } t \in (t_0, 1)\,. \end{aligned}$$
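The elementary bound \(\frac{1}{a(1-t)}-\frac{t^b}{1-t^a}\le \frac{b}{a}\) used in the last step is easy to spot-check on a grid; a Python sketch (the grid and the ranges of a, b are ad hoc):

```python
# grid check of 1/(a(1-t)) - t^b/(1 - t^a) <= b/a on (0, 1)
worst = float("-inf")
for a in range(1, 7):
    for b in range(1, 7):
        for i in range(1, 2000):
            t = i / 2000.0
            value = 1.0 / (a * (1.0 - t)) - t**b / (1.0 - t**a)
            worst = max(worst, value - b / a)
# the bound holds (up to rounding: equality occurs, e.g., for a = b = 1)
assert worst <= 1e-9
```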

The minor arc for \(\{|z|=t\}\) comprises \(\widetilde{\mathcal {R}}\cap \{|z|=t\}\) (where we have the bound \((\natural )\)) and the two subarcs in \(\mathcal {R}_0 \cap \{|z|=t\}\) of those \(z=t e^{\imath \theta }\) where \(|\theta |\ge h(t)\).

From \((\natural )\) and \((\natural \natural )\) and the moderate growth of \({\sigma }\), see (6.5), we conclude that \(\alpha <3/2\) suffices to guarantee the minor arc condition of Theorem 4.1, and consequently that:

Theorem 6.5

If \(\mathrm{gcd}(a,b)=1\), \(P_{a,b}\) is in the Hayman class.

The approximations \(\widetilde{m}\) and \(\widetilde{\sigma }^2\) for the mean and variance functions of the Khinchin family of \(P_{a,b}\) given in (6.5) and the asymptotics (6.11) of \(P_{a,b}\) in the interval (0, 1), plus strong gaussianity and Theorem C, allow us to derive Ingham’s theorem, originally in [17], see also Kane [18]: for integers \(a,b \ge 1\) with \(\mathrm{gcd}(a,b)=1\), the number of partitions \(p_{a,b}(n)\) of \(n\ge 1\) into parts drawn from the arithmetic progression \(\{aj+b: j \ge 0\}\) satisfies that, as \(n \rightarrow \infty \),

$$\begin{aligned} p_{a,b}(n){\sim } \left[ \frac{1}{\sqrt{2}\,2\pi } \Gamma \left( \frac{b}{a}\right) a^{b/(2a)-1/2} \left( \frac{\pi ^2}{6}\right) ^{b/(2a)}\right] \frac{1}{n^{1/2+b/(2a)}}\, \exp \left( \pi \sqrt{\frac{ 2}{3a}}\sqrt{n} \right) \,. \end{aligned}$$
(6.14)
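Formula (6.14) can be checked numerically against exact counts obtained by dynamic programming over the parts \(aj+b\); a Python sketch for \(a=3\), \(b=2\) (the cutoff \(n=2000\) and the 20% tolerance are ad hoc):

```python
import math

a, b, N = 3, 2, 2000
# exact p_{a,b}(n): partitions into parts from the progression b, b + a, b + 2a, ...
count = [1] + [0] * N
for part in range(b, N + 1, a):
    for n in range(part, N + 1):
        count[n] += count[n - part]

# bracketed constant and asymptotic main term of (6.14)
bracket = (math.gamma(b / a) / (math.sqrt(2.0) * 2.0 * math.pi)
           * a**(b / (2.0 * a) - 0.5) * (math.pi**2 / 6.0)**(b / (2.0 * a)))
predicted = (bracket / N**(0.5 + b / (2.0 * a))
             * math.exp(math.pi * math.sqrt(2.0 / (3.0 * a)) * math.sqrt(N)))
ratio = count[N] / predicted
print(count[N], ratio)  # the ratio tends to 1 as N grows
```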

Remark 6.6

If \(\mathrm{gcd}(a,b)=d>1\), then we have

$$\begin{aligned} P_{a,b}(z)=P_{{a}', {b}'}(z^d)\, , \quad \text{ for } z \in \mathbb {D}\, , \end{aligned}$$

where \(a'=a/d\) and \(b'=b/d\). Observe that \(\mathrm{gcd}(a', b')=1\). We deduce from (6.14) that, as \(n \rightarrow \infty \),

$$\begin{aligned} \textsc {coef}_{[nd]} P_{a,b}&= \textsc {coef}_{[n]}P_{a',b'} \\&\sim d\, \left[ \frac{1}{\sqrt{2}\,2\pi } \Gamma \left( \frac{b}{a}\right) a^{b/(2a)-1/2} \left( \frac{\pi ^2}{6}\right) ^{b/(2a)}\right] \\&\times \frac{1}{(nd)^{1/2+b/(2a)}}\exp \left( \pi \sqrt{\frac{ 2 }{3a}}\sqrt{nd} \right) \,. \end{aligned}$$

All the coefficients of \(P_{a,b}(z)\) whose indices are not multiples of d are 0.

6.3.4 Plane and Colored Partitions: Wright’s Theorem

We deal here with \(W_1^b\), where \(b \ge 0\), the ogf of colored partitions with coloring sequence \((j^b)_{j \ge 1}\).

First we discuss the case \(W_1^{1}\) and then we describe the minor changes to approach the general case \(b \ge 1\).

a) Plane partitions. The MacMahon function

$$\begin{aligned} W_1^{1}(z)=\prod _{j=1}^\infty \left( \frac{1}{1-z^j}\right) ^j\triangleq \sum _{n=0}^\infty M_n z^n \end{aligned}$$

codifies plane partitions: \(M_n\) is the number of plane partitions of the integer n, for \(n\ge 1\), with \(M_0=1\).
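The coefficients \(M_n\) can be generated from the logarithmic derivative: \(z (W_1^{1})^{\prime }(z)/W_1^{1}(z)=\sum _m c_m z^m\) with \(c_m=\sum _{d\mid m} d^2\), whence \(n M_n=\sum _{m=1}^n c_m M_{n-m}\). A Python sketch:

```python
N = 10
# c[m] = sum of d^2 over the divisors d of m (a divisor sieve)
c = [0] * (N + 1)
for d in range(1, N + 1):
    for m in range(d, N + 1, d):
        c[m] += d * d

# n * M_n = sum_{m=1}^{n} c_m * M_{n-m}; the division is exact
M = [1] + [0] * N
for n in range(1, N + 1):
    M[n] = sum(c[m] * M[n - m] for m in range(1, n + 1)) // n

print(M)  # [1, 1, 3, 6, 13, 24, 48, 86, 160, 282, 500]
```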

We have already obtained, see (6.6) and (6.7), convenient approximations, \(\widetilde{m}\) and \(\widetilde{\sigma }^2\), of the mean and variance functions of the Khinchin family of \(W_1^{1}\):

$$\begin{aligned} \widetilde{m}(e^{-s})=\zeta (3)\Gamma (3) \frac{1}{s^3} \quad \text{ and } \quad \widetilde{\sigma }^2(e^{-s})=\zeta (3)\Gamma (4) \frac{1}{s^4}\,, \quad \text{ as } s\downarrow 0\,, \end{aligned}$$

which satisfy condition (3.9).

We also have at our disposal, see equation (6.12), a suitable asymptotic formula for \(W_1^{1}\) on the interval (0, 1):

$$\begin{aligned} W_1^{1}(e^{-s})\sim e^{\zeta ^{\prime }(-1)}\, s^{1/12} \, e^{\zeta (3)/s^2}=\frac{e^{1/12}}{\mathrm{GK}_1} \, s^{1/12} \, e^{\zeta (3)/s^2}\, , \quad \text{ as } s \downarrow 0\,. \end{aligned}$$
(6.15)

We apply now Theorem 4.1 to verify that \(W_1^{1}\) is strongly Gaussian. The variance condition is clearly satisfied. We look now for a cut function \(h(t)=(1-t)^\alpha \), with \(\alpha >0\) to be specified.

Let \(g(z)=\ln W_1^{1}(z)\) in \(\mathbb {D}\). We have

$$\begin{aligned} g(z)=\sum _{l,j \ge 1} \frac{j}{l} \,(z^l)^j =\sum _{l=1}^\infty \frac{1}{l} \,K(z^l)\, , \quad \text{ for } z \in \mathbb {D}\, , \end{aligned}$$

where K is the Koebe function:

$$\begin{aligned} K(z)=\frac{z}{(1-z)^2}=\frac{1}{4} \left( \frac{1+z}{1-z}\right) ^2-\frac{1}{4}\,, \quad \text{ for } z \in \mathbb {D}\,. \end{aligned}$$

Observe, in particular, that the coefficients of g are all non-negative.

As \(g^{\prime \prime \prime }(t)=O(1/(1-t)^{5})\), as \(t \uparrow 1\), see (3.4), the condition on the major arc (4.3) is readily satisfied if \(\alpha >5/3\).

Now, the minor arc. For \(z=te^{\imath \theta }\) with \(t \in (0,1)\) and \(\theta \in \mathbb {R}\), we have that

$$\begin{aligned} \mathrm{Re}g(z)-g(t)&=\sum _{l=1}^\infty \frac{1}{l}\left( \mathrm{Re}K(z^l)- K(t^l)\right) \le \mathrm{Re}K(z)-K(t)\\ {}&=\frac{1}{4}\left( \mathrm{Re}\left( \frac{1+z}{1-z}\right) ^2-\left( \frac{1+t}{1-t}\right) ^2\right) \, , \end{aligned}$$

since the summands above are all non-positive, and, thus, that

$$\begin{aligned} \begin{aligned} \mathrm{Re}g(z)-g(t)&\le \frac{1}{4} \left( \left| \frac{1+z}{1-z} \right| ^2-\left( \frac{1+t}{1-t}\right) ^2\right) \le \frac{1}{4} (1+t)^2 \left( \frac{1}{|1-z|^2}-\frac{1}{(1-t)^2}\right) \\&\le \frac{1}{4} \left( \frac{1}{|1-z|^2}-\frac{1}{(1-t)^2}\right) =\frac{1}{4}\frac{1}{(1-t)^2} \left( \left| \frac{1-t}{1-te^{\imath \theta }}\right| ^2-1\right) \, . \end{aligned} \end{aligned}$$

Appealing to Lemma 4.4 we deduce that for \(t \ge t_0\) and some \(\delta >0\) we have that

$$\begin{aligned} \sup \limits _{|\theta |\ge h(t)} \left( \mathrm{Re}g(z)-g(t)\right) \le - \delta \frac{h(t)^2}{(1-t)^4}\, , \end{aligned}$$

and conclude that condition (4.4) on the minor arc is satisfied whenever \(\alpha <2\). Thus, specifying \(\alpha \in (5/3, 2)\), we conclude that:

Theorem 6.7

The MacMahon function \(W_1^{1}\) is in the Hayman class.

For each \(n \ge 1\), take \(\tau _n=e^{-s_n}\) with \(s_n= ({\zeta (3) \,\Gamma (3)}{/n})^{1/3}\), so that \(\widetilde{m}(\tau _n)=n\). Plugging \(\tau _n\) in the asymptotic formula (3.10) and using (6.15) we obtain

$$\begin{aligned} M_n \sim \left[ \frac{e^{\zeta ^{\prime }(-1)}}{\sqrt{12 \pi }}\, \zeta (3)^{7/36} 2^{25/36}\right] \, \frac{1}{n^{25/36}}\, \, \exp \left( 3 (\zeta (3)/4)^{1/3} n^{2/3}\right) \, , \quad \text{ as } n \rightarrow \infty \,, \end{aligned}$$
(6.16)

which is Wright’s asymptotic formula for plane partitions, [29].

b) Colored partitions. Fix an integer \(b \ge 1\). The argument which follows to handle \(W_1^b\) is just an extension of the one above for plane partitions, \(W_1^{1}\); first, strong gaussianity via Theorem 4.1 and then asymptotics of coefficients via Theorem C.

Theorem 6.8

For \(b\ge 1\), the function \(W_1^{b}\) is in the Hayman class.

Proof

Introduce the power series \(Q_b\) and \(R_b\) given by

$$\begin{aligned} Q_b(w)=\displaystyle \sum _{j=1}^\infty j^b w^j \quad \text{ and } \quad R_b(w)=\displaystyle \sum _{j=b}^\infty \dfrac{j!}{(j-b)!} \,w^j=b! \,\dfrac{w^b}{(1-w)^{b+1}}\, , \end{aligned}$$

and observe that

$$\begin{aligned} g(z)=\ln W_1^b(z)=\sum _{j, k\ge 1} j^b\frac{1}{k} \, z^{kj}=\sum _{k=1}^\infty \frac{1}{k} \,Q_b(z^k)\, . \end{aligned}$$

For \(z=te^{\imath \theta } \in \mathbb {D}\), we have that

$$\begin{aligned}&\mathrm{Re}g(z)-g(t)\le \mathrm{Re}Q_b(z)-Q_b(t)\\&\quad \le b!\left( \mathrm{Re}\frac{z^b}{(1-z)^{b+1}}- \frac{t^b}{(1-t)^{b+1}}\right) \le b! \,t^b \left( \frac{1}{|1-z|^{b+1}}-\frac{1}{(1-t)^{b+1}}\right) \,. \end{aligned}$$

For a potential cut function \(h(t)=(1-t)^\alpha \), with \(\alpha >0\), we have, because of Lemmas 4.3 and 4.4, that for some \(\delta _b >0\),

$$\begin{aligned} \limsup _{t \uparrow 1} \frac{(1-t)^{(b+1)+2}}{h(t)^2} \left( \sup _{h(t)\le |\theta | \le \pi } \mathrm{Re}g(te^{\imath \theta })-g(t)\right) \le -\delta _b <0\, . \end{aligned}$$

Thus, condition (4.4) on the minor arc of Theorem 4.1 is satisfied as long as \(\alpha < 1+({b+1})/{2}\).

From (3.4), we deduce that \(g^{\prime \prime \prime }(t)=O(1/(1-t)^{(b+1)+3})\), as \(t \uparrow 1\), and, thus, that condition on the major arc is satisfied whenever \(\alpha > 1+(b+1)/3\). \(\square \)

Now from the approximation of moments of the Khinchin family of \(W_1^b\) given by (6.6), (6.7) and (6.8), and the asymptotics of \(W_1^b\) on the interval (0, 1), we deduce from Theorem C the following asymptotic formula for the coefficients \(a_n\) of \(W_1^b\):

$$\begin{aligned} {a_n}\sim \alpha _b \,\frac{1}{n^{\beta _b}}\, e^{\gamma _b \, n^{(b+1)/(b+2)}}\,, \quad \text{ as } n \rightarrow \infty \,, \end{aligned}$$
(6.17)

where

$$\begin{aligned} \alpha _b&=\frac{1}{\sqrt{2\pi }} \, e^{\zeta ^\prime (-b)} \frac{1}{\sqrt{b+2}} \,\left[ \Gamma (b+2) \,\zeta (b+2)\right] ^{(-2\zeta (-b)+1)/(2(b+2))}, \\ \beta _b&=\frac{-2\zeta (-b)+b+3}{2(b+2)} ,\quad \gamma _b =\frac{b+2}{b+1}\left( \Gamma (b+2) \,\zeta (b+2)\right) ^{1/(b+2)}\,. \end{aligned}$$

Formula (6.17) reduces to (6.13) when \(b=0\), usual partitions, and to (6.16) when \(b=1\), plane partitions.
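Both reductions can be confirmed numerically from the closed forms of \(\alpha _b\), \(\beta _b\), \(\gamma _b\); a Python sketch (the value of \(\zeta ^{\prime }(-1)\) is hardcoded):

```python
import math

def constants(b, zeta_mb, zeta_prime_mb, zeta_b2):
    # alpha_b, beta_b, gamma_b of (6.17); zeta_mb = zeta(-b), zeta_b2 = zeta(b+2)
    power = (-2.0 * zeta_mb + 1.0) / (2.0 * (b + 2))
    alpha = (math.exp(zeta_prime_mb) / math.sqrt(2.0 * math.pi * (b + 2))
             * (math.gamma(b + 2) * zeta_b2)**power)
    beta = (-2.0 * zeta_mb + b + 3.0) / (2.0 * (b + 2))
    gamma = (b + 2.0) / (b + 1.0) * (math.gamma(b + 2) * zeta_b2)**(1.0 / (b + 2))
    return alpha, beta, gamma

# b = 0 must give the Hardy-Ramanujan constants of (6.13);
# zeta(0) = -1/2, zeta'(0) = -ln(2 pi)/2, zeta(2) = pi^2/6
a0, b0, g0 = constants(0, -0.5, -0.5 * math.log(2.0 * math.pi), math.pi**2 / 6.0)
assert abs(a0 - 1.0 / (4.0 * math.sqrt(3.0))) < 1e-12
assert abs(b0 - 1.0) < 1e-12
assert abs(g0 - math.pi * math.sqrt(2.0 / 3.0)) < 1e-12

# b = 1 must give Wright's constants of (6.16); zeta(-1) = -1/12
zeta3 = sum(1.0 / k**3 for k in range(1, 200_000))
zp1 = -0.1654211437  # known numerical value of zeta'(-1)
a1, b1, g1 = constants(1, -1.0 / 12.0, zp1, zeta3)
assert abs(b1 - 25.0 / 36.0) < 1e-12
assert abs(g1 - 3.0 * (zeta3 / 4.0)**(1.0 / 3.0)) < 1e-10
assert abs(a1 - math.exp(zp1) / math.sqrt(12.0 * math.pi)
           * zeta3**(7.0 / 36.0) * 2.0**(25.0 / 36.0)) < 1e-12
```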

6.3.5 General \(W_a^b\)

For general integers \(a\ge 1\) and \(b\ge 0\), the ogfs \(W_a^b(z)\) are all Gaussian.

We do have appropriate approximations of the mean and variance functions given by (6.6), (6.7) and (6.8), satisfying condition (3.9) towards applying Theorem C, and also a convenient asymptotic formula in the interval (0, 1) given by (6.12).

Besides, aiming at applying Theorem 4.1 and verifying the strong gaussianity of \(W_a^b\), for a potential cut function \(h(t)=(1-t)^\alpha \), with \(\alpha >0\), we do have, because of (3.4), that condition (4.3) on the major arc is satisfied as long as \(\alpha > 1+(b+1)/(3a)\).

But, alas, we have not been able to stretch the simple estimates used above for the case \(a=1\) to verify the minor arc condition (4.4), which should be satisfied if \(\alpha <1+(b+1)/(2a)\).

If that were the case, all the ogfs \(W_a^b\) would be in the Hayman class, and therefore they would be strongly Gaussian. And from all the above, we would have the following asymptotic formula for the coefficients \(a_n\) of \(W_a^b\):

$$\begin{aligned} {a_n}\sim \alpha _{a,b} \,\frac{1}{n^{\beta _{a,b}}}\,e^{\gamma _{a,b} \, n^{(b+1)/(b+1+a)}}\,, \quad \text{ as } n \rightarrow \infty , \end{aligned}$$

where

$$\begin{aligned} \alpha _{a,b}&=\frac{1}{\sqrt{2\pi }} \, \sqrt{\frac{a}{a+b+1}}\, e^{a \zeta ^\prime (-b)} \, C_{a,b}^{(a-2a\zeta (-b))/(2(a+b+1))}, \\ \beta _{a,b}&= \frac{-2 a \zeta (-b)+2a+b+1}{2(a+b+1)},\quad \gamma _{a,b} =\left( \frac{a+b+1}{b+1}\right) \,C_{a,b}^{a/(a+b+1)}, \end{aligned}$$

with

$$\begin{aligned} C_{a,b} =\frac{1}{a}\,\zeta \left( 1+\frac{b+1}{a}\right) \,\Gamma \left( 1+\frac{b+1}{a}\right) \,. \end{aligned}$$

For asymptotic formulae (and further asymptotic expansions) in the cases \(W_2^0\) and \(W_a^0\), of partitions into squares and ath powers, obtained via the circle method, we refer to Vaughan [26], Gafni [10] and the primordial Wright [31].

It would be nice to know whether the ogf \(W_a^0\), with \(a\ge 2\), of partitions into \(a\)th powers is in the Hayman class.