1 Introduction and mathematical background

The negative binomial distribution is one of the most widely used discrete probability models that allow departure from the mean-equal-variance Poisson model. More specifically, the negative binomial distribution models overdispersion of data relative to the Poisson distribution. For clarity, we refer to the extended negative binomial distribution with probability mass function

$$\begin{aligned} P(X=x) = \frac{\Gamma (r + x) }{\Gamma (r)\, x!} p^r(1-p)^x, \qquad x=0, 1, 2, \ldots , \end{aligned}$$
(1)

where \(r>0\) and \(p \in (0,1)\). If \(r \in \{1,2,\dots \}\), x is the number of failures occurring in a sequence of independent Bernoulli trials before the r-th success, and p is the success probability of each trial.

One limitation of the negative binomial distribution in fitting overdispersed count data is that its skewness and kurtosis are always positive. An example is given in Sect. 2.1.1, in which we introduce two real-world data sets that do not fit a negative binomial model. The data sets record reported incidents of crime that occurred in the city of Chicago from January 1, 2001 to May 21, 2018. These data sets are overdispersed, but their skewness coefficients are estimated to be \(-0.758\) and \(-0.996\), respectively, so the negative binomial model can be expected to underperform on count populations of this kind. These data sets are just two examples from a yet-to-be-explored world of non-negative-binomial count data, demonstrating the real need for a more flexible alternative for overdispersed count data. The literature on alternative probabilistic models for overdispersed count data is vast; a history of the overdispersion problem and related literature can be found in Shmueli et al. (2005). In this paper we consider the fractional Poisson distribution (fPd) as an alternative. The fPd arises naturally from the widely studied fractional Poisson process (Saichev and Zaslavsky 1997; Repin and Saichev 2000; Jumarie 2012; Laskin 2003; Beghin and Orsingher 2009; Cahoy et al. 2010; Meerschaert et al. 2011). It has not yet been studied in depth, nor has it been applied to model real count data. We show that the fPd accommodates large-mean, overdispersed count data that may be either left- or right-skewed, making it attractive for practical settings, especially now that data are becoming bigger and more readily available. fPds usually involve one parameter; two-parameter generalizations are proposed in Beghin and Orsingher (2009) and Herrmann (2016). Here, we take a step forward and further generalize the fPd to a three-parameter model, proving that the resulting distribution is still overdispersed.

One of the most popular measures for detecting departures from the Poisson distribution is the so-called Fisher index, the ratio of the variance to the mean \((\lessgtr 1)\) of the count distribution. As shown in the crime example of Sect. 2.1.1, computing the Fisher index alone is not sufficient for a first assessment of model fit, which should take into account at least the presence of negative or positive skewness. To compute all these measures, the first three factorial moments are needed. Consider a discrete random variable X with probability generating function (pgf)

$$\begin{aligned} G_X(u) = {\varvec{E}} u^X = \sum _{k\ge 0} a_k \frac{(u-1)^k}{k!}, \qquad |u|\le 1, \end{aligned}$$
(2)

where \(\{a_k\}\) is a sequence of real numbers such that \(a_0=1\). Observe that \(Q(t)=G_X(1+t)\) is the factorial moment generating function of X, so that \(a_k\) is the k-th factorial moment of X. The k-th moment is

$$\begin{aligned} {\varvec{E}} X^k = \sum _{r=1}^k S(k,r) a_r, \end{aligned}$$
(3)

where S(k, r) are the Stirling numbers of the second kind (Di Nardo and Senato 2006). By means of the factorial moments it is straightforward to characterize overdispersion or underdispersion: \(a_2 > a_1^2\) yields overdispersion, whereas \(a_2 < a_1^2\) gives underdispersion. Let \(c_2\) and \(c_3\) be the second and third cumulants of X, respectively. Then the skewness can be expressed as

$$\begin{aligned} \gamma (X) = \frac{c_3}{c_2^{3/2}} = \frac{a_3+3a_2+a_1[1-3 a_2 + a_1(2a_1-3)]}{(a_1+a_2-a_1^2)^{3/2}}. \end{aligned}$$
(4)
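As a quick numerical sanity check of (4), a minimal Python sketch follows (the helper name is ours, not from the paper): for a Poisson(\(\lambda \)) variable the factorial moments are \(a_k=\lambda ^k\), so (4) must return the classical Poisson skewness \(1/\sqrt{\lambda }\).

```python
import math

def skewness_from_factorial_moments(a1, a2, a3):
    """Skewness gamma(X) from the first three factorial moments, formula (4)."""
    c2 = a1 + a2 - a1**2                                      # second cumulant
    c3 = a3 + 3 * a2 + a1 * (1 - 3 * a2 + a1 * (2 * a1 - 3))  # third cumulant
    return c3 / c2**1.5

lam = 4.0
a1, a2, a3 = (lam**k for k in (1, 2, 3))             # Poisson(lam) factorial moments
print(skewness_from_factorial_moments(a1, a2, a3))   # 0.5
print(1 / math.sqrt(lam))                            # 0.5, the classical Poisson value
```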

If the condition

$$\begin{aligned} \lim _{n \rightarrow \infty } \frac{a_n}{(n-k)!} = 0 \qquad \text {for every fixed } k \ge 0, \end{aligned}$$
(5)

is fulfilled, the probability mass function of X can be written in terms of its factorial moments (Daley and Narayan 1980):

$$\begin{aligned} P(X=x) = \frac{1}{x!} \sum _{k\ge 0} a_{k+x} \frac{(-1)^k}{k!}, \qquad x \ge 0. \end{aligned}$$
(6)
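The inversion formula (6) is easy to exercise numerically. The sketch below (helper name and truncation level are our choices) recovers the Poisson pmf from \(a_k = \lambda ^k\); the alternating series is truncated at 100 terms, which is ample for moderate \(\lambda \).

```python
import math

def pmf_from_factorial_moments(a, x, terms=100):
    """P(X = x) via the inversion formula (6), truncating the alternating series."""
    s = sum(a(k + x) * (-1) ** k / math.factorial(k) for k in range(terms))
    return s / math.factorial(x)

lam = 2.0
for x in range(4):
    approx = pmf_from_factorial_moments(lambda k: lam**k, x)   # a_k = lam^k (Poisson)
    exact = math.exp(-lam) * lam**x / math.factorial(x)
    print(x, approx, exact)                                    # the two columns agree
```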

As an example, the well-known generalized Poisson distribution, which accounts for both underdispersion and overdispersion (Maceda 1948; Consul and Jain 1973), has, when put in the above form, factorial moments given by \(a_0=1\) and

$$\begin{aligned} a_k = \sum _{r = 0}^{h(\lambda _2)} \frac{1}{r!} \lambda _1 (\lambda _1 +\lambda _2(r+k))^{r+k-1} e^{-(\lambda _1+\lambda _2(r+k))}, \qquad \lambda _1 > 0, \end{aligned}$$
(7)

where \(h(\lambda _2) = \infty \) and \(k =1,2,\ldots \) if \(\lambda _2>0\), while \(h(\lambda _2) = M-k\) and \(k=1, \ldots , M\) if \(\max (-1, -\lambda _1/M) \le \lambda _2 < 0\), where M is the largest positive integer for which \(\lambda _1+M\lambda _2 > 0\).

Another example is given by the Kemp family of generalized hypergeometric factorial moments distributions (GHFD) (Kemp and Kemp 1974) for which the factorial moments are given by

$$\begin{aligned} a_k = \frac{\Gamma \left[ (a+k);(b+k) \right] \lambda ^k }{ \Gamma \left[ (a); (b) \right] }, \qquad k \ge 0, \end{aligned}$$
(8)

where \( \Gamma \left[ (a); (b) \right] =\prod _{i=1}^p \Gamma (a_i)/\prod _{j=1}^q \Gamma (b_j)\), with \(a_1, \ldots , a_p, b_1, \ldots , b_q \in {{\mathbb {R}}}\) and p, q non-negative integers. The factorial moment generating function is \(Q(t) = \, _pF_q \left[ (a) ;(b); \lambda t \right] \), where

$$\begin{aligned} _pF_q \left[ (a) ;(b); z \right] = \, _pF_q(a_1,\ldots , a_p; b_1,\ldots ,b_q; z) = \sum _{m\ge 0} \frac{(a_1)_m \cdots (a_p)_m}{(b_1)_m \cdots (b_q)_m} \frac{z^m}{m!}, \nonumber \\ \end{aligned}$$
(9)

and \((a)_m=a(a+1)\cdots (a+m-1), m \ge 1.\) Both overdispersion and underdispersion are possible, depending on the values of the parameters (Tripathi and Gurland 1979). The generalized fractional Poisson distribution (gfPd), which we introduce in the next section, lies in the same class as Kemp's GHFD but with the hypergeometric function in (9) replaced by a generalized Mittag–Leffler function (also known as the three-parameter Mittag–Leffler function or Prabhakar function). In this case, as anticipated above, the model not only describes overdispersion but also offers a degree of flexibility in dealing with skewness.

It is worth noting that there exists a second family of Kemp's distributions, still based on hypergeometric functions and still allowing both underdispersion and overdispersion. This is known as Kemp's generalized hypergeometric probability distribution (GHPD) (Kemp 1968) and is actually a special case of the very general class of weighted Poisson distributions (WPDs). Taking into account the above features, we thus analyze the whole class of WPDs with respect to the possibility of obtaining underdispersion and overdispersion. In Theorem 3.2 we first give a general necessary and sufficient condition for an underdispersed or overdispersed WPD random variable in the case in which the weight function may depend on the underlying Poisson parameter \(\lambda \). Special cases of WPDs with a small number of parameters have already proven to be of practical interest, such as the well-known COM-Poisson (Conway and Maxwell 1962) or hyper-Poisson (Bardwell and Crow 1964) models. Here we present a novel WPD family related to a generalization of Mittag–Leffler functions in which the weight function is based on a ratio of gamma functions. The proposed distribution family includes the above-mentioned classical cases. We characterize conditions on the parameters allowing overdispersion and underdispersion and analyze two further special cases of interest which have not yet appeared in the literature. We derive recursions to generate probability mass functions (and thus random numbers) and show how to approximate the mean and the variance.

The paper is organized as follows: in Sect. 2, we introduce the generalized fractional Poisson distribution, discuss some properties and recover the classical fPd as a special case. These models are fit to the two real-world data sets mentioned above. Sect. 3 is devoted to weighted Poisson distributions, their characteristic factorial moments and the related conditions to obtain overdispersion and underdispersion. Furthermore, the novel WPD based on a generalization of Mittag–Leffler functions is introduced and described in Sect. 3.1: we discuss some properties and show how to get exact formulae for factorial moments by using Faà di Bruno’s formula (Stanley 2012). Two special models are then characterized depending on the values of the parameters and compared to classical models. Finally, some illustrative plots end the paper.

2 Generalized fractional Poisson distribution (gfPd)

Definition 2.1

A random variable \(X_{\alpha , \beta }^\delta \) is said to follow the generalized fractional Poisson distribution, \(X_{\alpha , \beta }^\delta \; {\mathop {=}\limits ^{d}} \; \)gfPd\((\alpha , \beta , \delta , \mu )\), if

$$\begin{aligned} P(X_{\alpha , \beta }^\delta = x) &= \frac{ \Gamma (\delta + x)}{x! \Gamma (\delta )} \mu ^x \Gamma (\beta ) E_{\alpha ,\alpha x +\beta }^{\delta +x} (-\mu ), \nonumber \\ &\qquad \mu >0;\, x \in {\mathbb {N}}; \, \alpha , \beta \in (0, \;1]; \, \delta \in (0, \; \beta /\alpha ], \end{aligned}$$
(10)

where

$$\begin{aligned} E_{\eta ,\nu }^\tau ( w) = \sum _{j=0}^\infty \frac{(\tau )_j}{j!\Gamma (\eta j+\nu )} \; w^j, \end{aligned}$$
(11)

\(w \in {\mathbb {C}}; \mathfrak {R}(\eta ),\mathfrak {R}(\nu ),\mathfrak {R}(\tau ) >0\), is the generalized Mittag–Leffler function (Prabhakar 1971) and \((\tau )_j = \Gamma ( \tau + j )/ \Gamma ( \tau ) \) denotes the Pochhammer symbol.

To show non-negativity, notice that

$$\begin{aligned} \frac{ \Gamma (\delta + x)}{ \Gamma (\delta )} E_{\alpha ,\alpha x +\beta }^{\delta + x} (-\mu ) \ge 0 \iff (-1)^x \frac{ d^x}{d \mu ^x} E_{\alpha , \beta }^{\delta } (-\mu ) \ge 0, \end{aligned}$$
(12)

that is, \( E_{\alpha , \beta }^{\delta } (-\mu )\) is completely monotone. From De Oliveira et al. (2011), it is known that \( E_{\alpha , \beta }^{\delta } (-\mu )\) is completely monotone if \(\alpha , \beta \in (0, \;1], \delta \in (0, \; \beta /\alpha ]\) and thus the pmf in (10) is non-negative.

Note that the probability mass function can be determined using the following integral representation (Polito and Tomovski 2016):

$$\begin{aligned} P(X_{\alpha , \beta }^\delta = x) = \frac{ \Gamma (\beta ) }{x! \Gamma (\delta )} \mu ^x \int _{{\mathbb {R}}^+}e^{- \mu y} y^{\delta +x -1} \phi (-{\alpha }, \beta - \alpha \delta ; -y) dy, \end{aligned}$$
(13)

where the Wright function \(\phi \) is defined as the convergent sum (Kilbas et al. 2006)

$$\begin{aligned} \phi (\xi ,\omega ; z) =\sum \limits _{r=0}^\infty \frac{ z^r}{r! \Gamma [\xi r + \omega ]}, \qquad \xi > -1, \omega ,z \in {{\mathbb {R}}}. \end{aligned}$$
(14)

Remark 2.1

The random variable \(X_{\alpha ,\beta }^\delta \) has factorial moments

$$\begin{aligned} a_{k} = \frac{\Gamma (\beta ) \Gamma (\delta + k)}{\Gamma (\alpha k + \beta ) \Gamma (\delta )} \mu ^{k}, \qquad k \ge 0. \end{aligned}$$
(15)

Hence the pgf is \(G_{X_{\alpha , \beta }^\delta }(u)= \Gamma (\beta ) E_{\alpha , \beta }^{\delta } ( \mu (u-1) )\), \(|u| \le 1\).

By expressing the moments in terms of factorial moments and after some algebra we obtain

$$\begin{aligned} {\varvec{E}} [X_{\alpha , \beta }^\delta ]&= \frac{ \Gamma ( \beta ) \delta \mu }{ \Gamma ( \beta + \alpha )}, \end{aligned}$$
(16)
$$\begin{aligned} {\varvec{V}}\mathbf{ar }[ X_{\alpha , \beta }^\delta ]&= \frac{ \Gamma ( \beta ) \delta \mu }{ \Gamma ( \beta + \alpha )} + \Gamma ( \beta ) \delta \mu ^2 \left( \frac{ (\delta +1) }{ \Gamma ( \beta + 2\alpha )} - \frac{ \Gamma ( \beta ) \delta }{ \Gamma ( \beta + \alpha )^2} \right) . \end{aligned}$$
(17)
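Formulae (10), (11) and (15)-(17) fully specify the model, so they can be cross-checked numerically. Below is a minimal Python sketch (function names, truncation levels and parameter values are our own choices, not the paper's) that evaluates the pmf by truncating the series (11), with the gamma ratios handled on the log scale, and compares the resulting numerical mean and variance with (16)-(17).

```python
import math
from scipy.special import gammaln

def ml3(eta, nu, tau, w, terms=300):
    """Three-parameter Mittag-Leffler function (11), truncated after `terms` terms."""
    s = 0.0
    for j in range(terms):
        log_c = gammaln(tau + j) - gammaln(tau) - gammaln(j + 1) - gammaln(eta * j + nu)
        s += math.exp(log_c) * w**j      # (tau)_j / (j! Gamma(eta j + nu)) * w^j
    return s

def gfpd_pmf(x, alpha, beta, delta, mu):
    """gfPd pmf (10)."""
    log_c = (gammaln(delta + x) - gammaln(x + 1) - gammaln(delta)
             + x * math.log(mu) + gammaln(beta))
    return math.exp(log_c) * ml3(alpha, alpha * x + beta, delta + x, -mu)

alpha, beta, delta, mu = 0.7, 0.9, 1.2, 1.5     # delta <= beta/alpha, as required
p = [gfpd_pmf(x, alpha, beta, delta, mu) for x in range(21)]
mean = sum(x * px for x, px in enumerate(p))
var = sum(x * x * px for x, px in enumerate(p)) - mean**2

g = math.gamma
mean_th = g(beta) * delta * mu / g(beta + alpha)                   # eq. (16)
var_th = mean_th + g(beta) * delta * mu**2 * (
    (delta + 1) / g(beta + 2 * alpha) - g(beta) * delta / g(beta + alpha)**2)  # eq. (17)
print(sum(p))          # ~ 1: the pmf is proper
print(mean, mean_th)   # agree to several decimal places
print(var, var_th)     # variance exceeds the mean (cf. Theorem 2.1 below)
```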

Theorem 2.1

\(X_{\alpha , \beta }^\delta \) exhibits overdispersion.

Proof

We have

$$\begin{aligned}&a_2> a_1^2 \Leftrightarrow \frac{\delta +1}{\Gamma (2 \alpha +\beta )} > \frac{\delta \Gamma (\beta )}{\Gamma ^2 ( \alpha +\beta )}\nonumber \\&\quad \Leftrightarrow \delta \bigg ( \frac{\Gamma (\beta )}{\Gamma ^2(\alpha +\beta )} - \frac{1}{\Gamma (2 \alpha +\beta )} \bigg ) < \frac{1}{\Gamma (2 \alpha + \beta )} \end{aligned}$$
(18)

and

$$\begin{aligned} \frac{\Gamma (\beta )}{\Gamma ^2(\alpha +\beta )} - \frac{1}{\Gamma (2 \alpha +\beta )}> 0 \quad \mathrm{as} \quad \text {Beta}( \beta , \alpha ) > \text {Beta}(\alpha +\beta , \alpha ). \end{aligned}$$
(19)

Thus, the distribution is overdispersed for

$$\begin{aligned} \delta < \frac{\text {Beta}( \alpha + \beta , \alpha )}{\text {Beta}( \beta , \alpha ) - \text {Beta}( \alpha + \beta , \alpha )}. \end{aligned}$$
(20)

Observe that the function \(\beta \text {Beta}(\beta ,\alpha )\) is increasing in \(\beta \) for \(\alpha , \beta \in (0,1)\) as

$$\begin{aligned} \frac{\partial }{\partial \beta } \beta \text {Beta}(\beta ,\alpha ) = \text {Beta}(\beta ,\alpha ) (1 + \beta (\psi (\beta )-\psi (\alpha +\beta ))>0, \end{aligned}$$
(21)

where \(\psi \) is the digamma function. Indeed, (21) is positive because, by the recurrence \(\psi (\beta +1)=\psi (\beta )+1/\beta \) (formula (1.3.3) of Lebedev (1972)), we have \(1+\beta (\psi (\beta )-\psi (\alpha +\beta )) = \beta (\psi (\beta +1)-\psi (\alpha +\beta )) > 0\), as \(\psi \) is increasing on \((0,\infty )\) and \(\alpha +\beta < \beta +1\). Thus

$$\begin{aligned} \beta \text {Beta}( \beta , \alpha )< (\alpha + \beta ) \text {Beta}( \alpha + \beta , \alpha )\Leftrightarrow \frac{\beta }{\alpha } < \frac{\text {Beta}( \alpha + \beta , \alpha )}{\text {Beta}( \beta , \alpha ) - \text {Beta}( \alpha + \beta , \alpha )} \end{aligned}$$
(22)

and so for \(\delta \in (0, \beta /\alpha ]\) the bound (20) always holds. \(\square \)

2.1 Fractional Poisson distribution

This section analyzes the classical fPd, the special case of the gfPd obtained when \(\beta =\delta =1\). The fPd can model asymmetric (both left-skewed and right-skewed) overdispersed count data for all mean count values (small and large). The fPd has probability mass function (pmf)

$$\begin{aligned} P(X_\alpha =x) = \mu ^x E_{\alpha ,\alpha x +1}^{x +1} ( -\mu ) , \qquad x=0,1,2,\ldots , \end{aligned}$$
(23)

where \(\mu >0\), \(\alpha \in [0,1]\).

Notice that if \(\alpha = 1\), the standard Poisson distribution is retrieved, while for \(\alpha =0\) we have \(X_{0} {\mathop {=}\limits ^{d}} \text {Geo} \left( 1/(1+ \mu )\right) \). Indeed,

$$\begin{aligned} P(X_0=x) = \frac{\mu ^x}{x!}\sum _{j=0}^{\infty }\frac{(j+x)!}{j!}(-\mu )^j = \frac{1}{1+\mu }\left( \frac{\mu }{1+\mu }\right) ^x, \qquad x \ge 0. \end{aligned}$$
(24)

Furthermore, the probability mass function can be determined using the following integral representation (Beghin and Orsingher 2010):

$$\begin{aligned} P(X_\alpha =x) = \frac{ \mu ^x }{x!}\int _{{\mathbb {R}}^+}e^{- \mu y} y^x M_{\alpha } (y) dy, \end{aligned}$$
(25)

where the M-Wright function (Mainardi et al. 2010)

$$\begin{aligned} M_\alpha (y) =\sum \limits _{j=0}^\infty \frac{ (-y)^j}{j! \Gamma [-\alpha j + (1-\alpha ) ]} = \frac{1}{\pi } \sum \limits _{j=1}^\infty \frac{ (-y)^{j-1}}{(j-1)!} \Gamma (\alpha j) \sin ( \pi \alpha j) \end{aligned}$$
(26)

is the probability density function of the random variable \(S^{-\alpha }\) with \(S {\mathop {=}\limits ^{d}} \alpha ^+\)-stable supported in \({\mathbb {R}}^+.\) By using (25), the cumulative distribution function turns out to be

$$\begin{aligned} F_{X_\alpha } (x)= \sum _{r=0}^\infty \left( {\begin{array}{c}x + r-1\\ x\end{array}}\right) \frac{(-1)^r \mu ^{-(r+1)}}{\Gamma (1 - \alpha (r+1))} \mathbb {1}_{(x > 0)}(x). \end{aligned}$$
(27)

Remark 2.2

From Remark 2.1, setting \(\beta =\delta =1\) in (15), the random variable \(X_\alpha \) has factorial moments

$$\begin{aligned} a_{k} = \frac{\mu ^{k} k!}{\Gamma (1 + \alpha k)}, \qquad k \ge 0. \end{aligned}$$
(28)

Hence the probability generating function is \(G_{X_\alpha }(u)= E_{\alpha ,1}^1 \left( \mu \left( u-1\right) \right) \), \(|u| \le 1\).

With respect to the symmetry structure of \(X_\alpha \), from (4) and (28), the skewness of \(X_\alpha \) reads

$$\begin{aligned} \gamma (X_{\alpha })= \frac{\frac{1}{\mu ^2 \Gamma (1 + \alpha ) } + \frac{6}{\mu \Gamma (1 + 2\alpha ) } + \frac{6}{\Gamma (1 + 3 \alpha ) } -\frac{3}{\mu \left[ \Gamma (1 + \alpha )\right] ^2 } -\frac{6}{\Gamma (1 + \alpha ) \Gamma (1 + 2 \alpha ) } + \frac{2}{[\Gamma (1 + \alpha ) ]^3}}{ \left( \frac{1}{\mu \Gamma (1 + \alpha ) } + \frac{2}{\Gamma (1 + 2\alpha ) } - \frac{1}{ [ \Gamma (1 + \alpha )]^2} \right) ^{3/2} }.\nonumber \\ \end{aligned}$$
(29)

Moreover,

$$\begin{aligned} \lim _{\mu \rightarrow \infty } \gamma (X_{\alpha }) = \frac{ \frac{6}{\Gamma (1 + 3 \alpha ) } - \frac{6}{\Gamma (1 + \alpha ) \Gamma (1 + 2 \alpha ) } + \frac{2}{\left[ \Gamma (1 + \alpha ) \right] ^3}}{ \left( \frac{2}{\Gamma (1 + 2\alpha ) } - \frac{1}{ [ \Gamma (1 + \alpha )]^2} \right) ^{3/2}} \ne 0, \end{aligned}$$
(30)

which is nonzero in general but correctly vanishes for \(\alpha =1\), in agreement with the vanishing large-\(\mu \) skewness of the ordinary Poisson distribution.
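The expressions (29)-(30) can be checked directly from the factorial moments (28). In the short sketch below (our own helper; a large \(\mu \) stands in for the \(\mu \rightarrow \infty \) limit), the limiting skewness is nonzero for \(\alpha <1\), turns negative for \(\alpha \) close to 1, and is \(\approx 0\) at \(\alpha =1\), the Poisson case.

```python
import math

def fpd_skewness(alpha, mu):
    """Skewness (29) of the fPd, assembled from the factorial moments (28) via (4)."""
    a1, a2, a3 = (mu**k * math.factorial(k) / math.gamma(1 + alpha * k)
                  for k in (1, 2, 3))
    c2 = a1 + a2 - a1**2
    c3 = a3 + 3 * a2 + a1 * (1 - 3 * a2 + a1 * (2 * a1 - 3))
    return c3 / c2**1.5

for alpha in (0.5, 0.75, 0.9, 1.0):
    # mu = 1e4 approximates the limit (30): ~1.0, ~0.1, ~-0.92, ~0 respectively
    print(alpha, fpd_skewness(alpha, mu=1e4))
```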

2.1.1 Simulation and parameter estimation

The integral representation (25) allows visualization of the probability mass function of \(X_\alpha \) (see Fig. 1), which illustrates the flexibility of the fPd: the probability distribution ranges from zero-inflated and right-skewed (\(\alpha \rightarrow 0\)) to left-skewed (\(\alpha \rightarrow 1\)) overdispersed count data, recovering the Poisson shape at \(\alpha = 1\). To compute the integral in (25) by means of Monte Carlo techniques, we use the approximation

Fig. 1: Probability mass functions of \(X_\alpha \) for \(\alpha =0.1, 0.5, 0.75, 0.9\), and \(\mu = 20\)

$$\begin{aligned} p_x^{\alpha } \approx \frac{\mu ^x}{\; x!} \left( \frac{1}{N} \sum _{j=1}^N e^{- \mu Y_j} Y_j^x \right) , \end{aligned}$$
(31)

where the \(Y_j\)'s are iid copies of \(S^{-\alpha }\). Note that the random variable S can be generated using the following formula (Kanter 1975; Chambers et al. 1976):

$$\begin{aligned} S {\mathop {=}\limits ^{d}} \frac{\sin (\alpha \pi U_1)[ \sin ((1-\alpha )\pi U_1)]^{1/\alpha -1}}{[\sin (\pi U_1)]^{1/\alpha }|\ln U_2|^{1/\alpha -1}}, \end{aligned}$$
(32)

where \(U_1\) and \(U_2\) are independent and uniformly distributed on [0, 1]. Thus, fractional Poisson random numbers can be generated using the algorithm below.

Algorithm:

Step 1. Set \(X=0\) and \(T=0\).

Step 2. While \(T\le 1\), draw fresh variates V and S and update

$$\begin{aligned} T&\leftarrow T+ V^{1/\alpha } \, S, \\ X&\leftarrow \text {ifelse}(T \le 1, X+1, X). \end{aligned}$$

Step 3. Repeat Steps 1–2 n times to obtain a sample of size n.

Here the random variable V follows the exponential distribution with density function \(\mu \exp (-\mu v)\), \(v \ge 0\), for which standard generators are well known. The resulting sample allows estimation of the k-th moment \({\varvec{E}} X_\alpha ^{k}\), as shown in the sketch below.
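The following NumPy sketch (function names, seed, truncation levels and sample sizes are our own choices, not the paper's) implements the three ingredients just described: kanter draws S via (32), mc_pmf implements the Monte Carlo approximation (31) on the log scale for numerical stability, and fpd_sample implements the renewal algorithm above.

```python
import math
import numpy as np
from scipy.special import gammaln

rng = np.random.default_rng(0)

def kanter(alpha, size=None):
    """alpha+-stable variates S via formula (32) (Kanter 1975; Chambers et al. 1976)."""
    u1 = rng.uniform(size=size)
    u2 = rng.uniform(size=size)
    num = np.sin(alpha * np.pi * u1) * np.sin((1 - alpha) * np.pi * u1) ** (1 / alpha - 1)
    den = np.sin(np.pi * u1) ** (1 / alpha) * np.abs(np.log(u2)) ** (1 / alpha - 1)
    return num / den

def mc_pmf(alpha, mu, x, n=10**4):
    """Monte Carlo approximation (31) of P(X_alpha = x), with Y_j = S_j**(-alpha)."""
    y = kanter(alpha, n) ** (-alpha)
    return float(np.mean(np.exp(x * np.log(mu * y) - gammaln(x + 1) - mu * y)))

def fpd_sample(alpha, mu, n):
    """n fPd(alpha, mu) variates by counting renewals in [0, 1] (the algorithm above)."""
    out = np.empty(n, dtype=np.int64)
    for i in range(n):
        t, x = 0.0, 0
        while True:
            t += rng.exponential(1 / mu) ** (1 / alpha) * kanter(alpha)  # V^(1/alpha) S
            if t > 1:
                break
            x += 1
        out[i] = x
    return out

alpha, mu = 0.9, 20.0
print(sum(mc_pmf(alpha, mu, x) for x in range(120)))     # ~ 1: the pmf is proper
print(fpd_sample(alpha, mu, 2000).mean(), mu / math.gamma(1 + alpha))  # ~ E X_alpha
```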

Figure 2 plots the skewness coefficient (29) as a function of \(\alpha \) for several values of \(\mu \), together with its limit (30). Unlike the negative binomial, the fPd can accommodate both left-skewed and right-skewed count data, which makes it the more flexible of the two models, especially when counts become large.

Fig. 2: Skewness coefficient for \(\mu = 0.5,1,3\) and its limit as functions of \(\alpha \in (0,1)\)

We applied the fractional Poisson model \(\text {fPd}(\alpha , \mu )\) to two data sets, named Data 1 and Data 2, which record reported incidents of crime that occurred in the city of Chicago from 2001 to present.Footnote 1 The sample distributions, together with their description, are shown in Fig. 3.

Fig. 3: (Left) The number of all incidents from 2001 to 2018 for each police district. (Right) The number of incidents described as “\(\$500\) AND UNDER” for each police district

Furthermore, we compared \(\text {fPd}(\alpha , \mu )\) with the negative binomial \(\text {NegBinom}(\textit{size}, \textit{mean})\) using the usual chi-square goodness-of-fit test statistic and the maximum likelihood estimates for both models. Note that the chi-square test statistic follows, approximately, a chi-square distribution with \(k - 1 - p\) degrees of freedom, where k is the number of cells and p is the number of estimated parameters.

For illustration purposes, we used a grid search for \(\text {fPd}(\alpha , \mu )\); it is relatively fast because \(\alpha \) is bounded in (0, 1) and \(\mu \) lies in the neighborhood of the sample mean scaled by \(\Gamma (1 + \alpha )\) (cf. (28)). Observe that \(5 \times 10^5\) random numbers are used in all the calculations. From the results below, the fractional Poisson distribution \(\text {fPd}(\alpha , \mu )\) provides better fits than the negative binomial \(\text {NegBinom}(\textit{size}, \textit{mean})\) model for both data sets at the \(5 \%\) significance level. This exercise clearly demonstrates the limitation of the negative binomial in dealing with left-skewed count data (Table 1).

Table 1 Comparison between fPd\((\alpha , \mu )\) and NegBinom(\(\textit{size}, \textit{mean}\)) fits

2.2 The case for gfPd\((\alpha , \alpha , 1, \mu )\)

When \(\beta =\alpha \) and \(\delta =1\), we have \(X_{\alpha , \alpha } \; {\mathop {=}\limits ^{d}} \; \)gfPd\((\alpha , \alpha , 1, \mu )\) with

$$\begin{aligned} P(X_{\alpha , \alpha } = x) = \Gamma (\alpha ) \mu ^x E_{\alpha ,\alpha (x +1) }^{x+1} (-\mu ), \qquad \mu >0;\, x \in {\mathbb {N}}; \, \alpha \in (0, \;1]. \end{aligned}$$
(33)

Proposition 2.1

The probability mass function can be written as

$$\begin{aligned} P(X_{\alpha , \alpha } = x) = \Gamma (\alpha +1) \frac{\mu ^x}{x!} \int _{{\mathbb {R}}^+}y^{-\alpha (x+1)} e^{-\mu y^{-\alpha }} \nu _S(dy), \end{aligned}$$
(34)

where \(\nu _S\) is the distribution of a random variable S whose density has Laplace transform \(\exp (-t^\alpha )\).

Proof

Note that

$$\begin{aligned}&\frac{1}{x!}\int _{{\mathbb {R}}^+}y^{-\alpha (x+1)} e^{-\mu y^{-\alpha }} \nu _S(dy) = \frac{1}{x!} \sum _{k \ge 0} \frac{(-\mu )^k}{k!} \int _{{\mathbb {R}}^+}y^{-\alpha k -\alpha (x+1)} \nu _S (dy) \\&\quad = \frac{1}{x!} \sum _{k \ge 0} \frac{(-\mu )^k}{k!} \frac{\Gamma ( 1 + k+x +1) }{\Gamma (1 +\alpha k + \alpha (x+1) )}\nonumber \\&\quad = \sum _{k \ge 0} \frac{(-\mu )^k}{k!} \frac{ \Gamma ( k+x +1) }{\alpha x! \Gamma (\alpha k + \alpha (x+1))} \nonumber \\&\quad = \frac{1}{\alpha } E_{\alpha ,\alpha (x +1) }^{x+1} (-\mu ). \nonumber \end{aligned}$$
(35)

\(\square \)

The above result provides an algorithm to evaluate the probability mass function as

$$\begin{aligned} P(X_{\alpha , \alpha } = x)&= \Gamma (\alpha +1 ) \frac{\mu ^x}{x!} {\varvec{E}}\left( S^{-\alpha (x+1)} e^{-\mu S^{-\alpha }} \right) \nonumber \\&\approx \Gamma (\alpha +1 ) \frac{\mu ^x}{x!} \left( \frac{1}{N} \sum _{j=1}^N S_j^{-\alpha (x+1)} e^{-\mu S_j^{-\alpha }} \right) . \end{aligned}$$
(36)

Thus, we can now estimate \(\alpha \) and \(\mu \) using maximum likelihood, just as in the fPd case. The maximum likelihood estimates for the two crime data sets above are given in Table 2 below. The chi-square goodness-of-fit test statistics are large, indicating poor fits.
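A hedged sketch of the Monte Carlo evaluation (36) follows; the stable generator (32) is reproduced so the block is self-contained, and the sample size and parameter values are arbitrary choices of ours.

```python
import math
import numpy as np
from scipy.special import gammaln

rng = np.random.default_rng(1)

def kanter(alpha, size):
    """alpha+-stable variates S via formula (32)."""
    u1, u2 = rng.uniform(size=size), rng.uniform(size=size)
    num = np.sin(alpha * np.pi * u1) * np.sin((1 - alpha) * np.pi * u1) ** (1 / alpha - 1)
    den = np.sin(np.pi * u1) ** (1 / alpha) * np.abs(np.log(u2)) ** (1 / alpha - 1)
    return num / den

def gfpd_aa_pmf(x, alpha, mu, n=10**4):
    """P(X_{alpha,alpha} = x) via the Monte Carlo average in (36)."""
    s = kanter(alpha, n)
    log_c = gammaln(alpha + 1) + x * math.log(mu) - gammaln(x + 1)
    # everything on the log scale, to avoid overflow of s**(-alpha (x+1))
    return float(np.mean(np.exp(log_c - alpha * (x + 1) * np.log(s) - mu * s**(-alpha))))

alpha, mu = 0.8, 5.0
print(sum(gfpd_aa_pmf(x, alpha, mu) for x in range(80)))   # ~ 1: the pmf is proper
```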

Table 2 Maximum likelihood estimates for \(\text {gfPd}(\alpha , \alpha , 1, \mu )\)

Remark 2.3

From Remark 2.1, setting \(\beta =\alpha \) and \(\delta =1\) in (15), the random variable \(X_{\alpha ,\alpha }\) has factorial moments

$$\begin{aligned} a_{k} = \frac{\Gamma (\alpha ) \mu ^{k} k!}{\Gamma ( \alpha + \alpha k )}, \qquad k \ge 0. \end{aligned}$$
(37)

Thus the pgf is \(G_{X_{\alpha ,\alpha }}(u)= \Gamma (\alpha ) E_{\alpha ,\alpha }^1 \left( \mu \left( u-1\right) \right) \), \(|u| \le 1\).

From (4) and (37), the symmetry structure of \(X_{\alpha , \alpha }\) can be determined as follows:

$$\begin{aligned} \gamma (X_{\alpha , \alpha })= \frac{\Gamma (\alpha ) \left( \frac{6}{ \Gamma (4 \alpha ) } + \frac{6}{\mu \Gamma (3\alpha ) } + \frac{1}{\mu ^2 \Gamma (2\alpha ) } -\frac{6}{\Gamma (2\alpha )\Gamma (3\alpha ) } +\frac{2}{\Gamma (2\alpha )^3} - \frac{3}{\mu \Gamma (2\alpha )^2} \right) }{ \left( \frac{1}{\mu \Gamma (2\alpha ) } + \frac{2}{\Gamma (3\alpha ) } - \frac{1}{ \Gamma (2\alpha )^2} \right) ^{3/2} }.\nonumber \\ \end{aligned}$$
(38)

Moreover,

$$\begin{aligned} \lim _{\mu \rightarrow \infty } \gamma (X_{\alpha ,\alpha }) = \frac{\Gamma (\alpha ) \left( \frac{6}{ \Gamma (4 \alpha ) } -\frac{6}{\Gamma (2\alpha )\Gamma (3\alpha ) } +\frac{2}{\Gamma (2\alpha )^3} \right) }{ \left( \frac{2}{\Gamma (3\alpha ) } - \frac{1}{ \Gamma (2\alpha )^2} \right) ^{3/2} } \ne 0, \end{aligned}$$
(39)

which vanishes if \(\alpha =1\) (Poisson distribution). Moreover, (39) is non-negative and decreasing in \(\alpha \), so this subfamily cannot capture negative skewness; this explains the poor fits indicated by the large chi-square values above.

3 Underdispersion and overdispersion for weighted Poisson distributions

Weighted Poisson distributions (Rao 1965) provide a unifying approach for modelling both overdispersion and underdispersion (Kokonendji et al. 2008). Let Y be a Poisson random variable of parameter \(\lambda >0\) and let \(Y^{w}\) be the corresponding WPD with weight function w, that is, \(P(Y^{w}=k) = e^{-\lambda } \lambda ^k w(k) / (k! \, {\varvec{E}}w(Y))\), \(k \in {\mathbb {N}}\).

Theorem 3.1

If \({\varvec{E}}w(Y+k)<\infty \) for all \(k \in {\mathbb {N}}\), and \(a_k=\lambda ^k h(\lambda ,k)\), where \(h(\lambda ,k)=\frac{ {\varvec{E}}w(Y+k)}{ {\varvec{E}}w(Y)}\), satisfies (5), then \(Y^{w}\) has factorial moments \(a_k\).

Proof

It is enough to observe that the pgf \(G_{Y^{w}}(u)\) can be written in form (2) as follows:

$$\begin{aligned} G_{Y^{w}}(u)&= \sum _{k \ge 0} (u + 1 -1 )^k \frac{e^{-\lambda } \lambda ^k w(k)}{k! {\varvec{E}}w(Y)} = \sum _{k \ge 0} \frac{(u - 1)^k}{k!} \sum _{j \ge 0} \frac{e^{-\lambda } \lambda ^{j+k} w(j+k)}{j! {\varvec{E}}w(Y)} \nonumber \\&= \sum _{k \ge 0} \frac{(u - 1)^k}{k!} \lambda ^k h(\lambda ,k). \end{aligned}$$
(40)

\(\square \)

Let T be the linear left-shift operator acting on number sequences, and let us denote by T also its coefficientwise extension to the ring of formal power series \({\mathbb R}_+[[\lambda ]]\) (Stanley 2012). The next theorem links overdispersion and underdispersion of \(Y^w\) to a Turán-type and a reversed Turán-type inequality involving T, respectively.

Theorem 3.2

The random variable \(Y^w\) is overdispersed (underdispersed) if and only if

$$\begin{aligned} f(\lambda ) T^2 f(\lambda ) > (<)\, [T f(\lambda )]^2, \end{aligned}$$
(41)

where \(f(\lambda ) = {\varvec{E}}w(Y)\).

Proof

The random variable \(Y^w\) is overdispersed if and only if \(a_2 > a_1^2\), that is \({\varvec{E}}w(Y){\varvec{E}}w(Y+2)>[ {\varvec{E}}w(Y+1)]^2.\) Equivalently,

$$\begin{aligned} \left( \sum _{k \ge 0} \frac{ \lambda ^k}{k!} w(k) \right) \left( \sum _{k \ge 0} \frac{ \lambda ^{k}}{k!} w(k+2) \right) > \left( \sum _{k \ge 0} \frac{ \lambda ^{k}}{k!} w(k+1) \right) ^2, \end{aligned}$$
(42)

and the result follows observing that \(T^j f(\lambda ) = \sum _{k \ge 0} \frac{\lambda ^k}{k!} T^j[ w(k)]\) for \(j=1,2\). \(\square \)

Remark 3.1

Observe that when w does not depend on \(\lambda \), then \(T^j f(\lambda ) = D_{\lambda }^j f(\lambda )\) for \(j=1,2.\) In this case, condition (41) is equivalent to \(f(\lambda ) D^2_{\lambda }f(\lambda )> (<)\,[D_{\lambda }f(\lambda )]^2\), i.e. log-convexity (log-concavity) of f. This is already known in the literature (see Theorem 3 of Kokonendji et al. (2008)).
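Theorem 3.2 and Remark 3.1 can be illustrated numerically. The small sketch below (the COM-Poisson weight and the truncation level are our choices) evaluates \(f\), \(Tf\) and \(T^2f\) as truncated series and checks the sign of \(f\,T^2 f-(Tf)^2\) in (41).

```python
import math

def shifted_f(w, lam, shift, terms=100):
    """T^shift f(lambda) = sum_{k>=0} lambda^k / k! * w(k + shift), truncated."""
    return sum(lam**k / math.factorial(k) * w(k + shift) for k in range(terms))

lam = 2.0
for nu in (0.5, 1.0, 1.5):
    w = lambda k, nu=nu: math.factorial(k) ** (1.0 - nu)   # COM-Poisson weight
    f0, f1, f2 = (shifted_f(w, lam, j) for j in range(3))
    # positive for nu < 1 (overdispersion), ~0 for nu = 1 (Poisson),
    # negative for nu > 1 (underdispersion)
    print(nu, f0 * f2 - f1**2)
```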

Remark 3.2

Note that from (42) we have

$$\begin{aligned}&\sum _{k \ge 0} \frac{\lambda ^k}{k!} \left( \sum _{j=0}^k \left( {\begin{array}{c}k\\ j\end{array}}\right) w(j) w(k-j+2) \right) \nonumber \\&\quad > \sum _{k \ge 0} \frac{\lambda ^k}{k!} \left( \sum _{j=0}^k \left( {\begin{array}{c}k\\ j\end{array}}\right) w(j+1) w(k-j+1) \right) \end{aligned}$$
(43)

and some algebra leads us to the following sufficient condition for overdispersion or underdispersion: the random variable \(Y^w\) is overdispersed (underdispersed) if

$$\begin{aligned} \sum _{j=0}^{k+1} \left[ \left( {\begin{array}{c}k\\ j\end{array}}\right) - \left( {\begin{array}{c}k\\ j-1\end{array}}\right) \right] w(j) w(k-j+2) > (<) \, 0 \qquad \text {for every } k \ge 0. \end{aligned}$$
(44)

Notice that \({\varvec{E}}w(Y)\) is a function of the Poisson parameter \(\lambda \); for the sake of clarity, from now on we denote it by \(\eta (\lambda )\). Weighted Poisson distributions with a weight function w not depending on the Poisson parameter \(\lambda \) are also known as power series distributions (PSDs) (Johnson et al. 2005), and it is easy to see that the factorial moment generating function in this case reads

$$\begin{aligned} Q(t)=\frac{\eta [\lambda (t+1)]}{\eta (\lambda )} \end{aligned}$$
(45)

with factorial moments

$$\begin{aligned} a_r=\frac{\lambda ^r}{\eta (\lambda )} \frac{d^r}{d\lambda ^r} [\eta (\lambda )], \qquad r \ge 1. \end{aligned}$$
(46)

A well-known special family of PSDs is the generalized hypergeometric probability distribution (GHPD) (Kemp 1968), for which

$$\begin{aligned} Q(t) = \frac{_pF_q \left[ (a);(b); \lambda (t+1) \right] }{_pF_q \left[ (a) ;(b); \lambda \right] } \end{aligned}$$
(47)

with \(_pF_q\) given in (9). Depending on the values of the parameters of the GHPD, both overdispersion and underdispersion are possible (Tripathi and Gurland 1979). For \(p=q=1\), the GHPD specializes to the hyper-Poisson distribution (Bardwell and Crow 1964). In the next section we analyze an alternative WPD for which the hyper-Poisson distribution remains a special case and which exhibits both underdispersion and overdispersion.

3.1 A novel flexible WPD allowing overdispersion or underdispersion

Let \(Y^w\) be a WP random variable with weight function

$$\begin{aligned} w(k) = \frac{\Gamma (k+\gamma )}{\Gamma (\alpha k +\beta )^\nu }, \end{aligned}$$
(48)

where \(\gamma > 0\), \( \min (\alpha , \beta ,\nu )\ge 0\), \(\alpha +\beta >0\). Moreover, if \(\gamma =\beta \) and \(\nu \ge 1\) then \(\beta \) is allowed to be zero. Since it is a PSD, the random variable \(Y^w\) is characterized by the normalizing function

$$\begin{aligned} \eta (\lambda ) = \eta _{\alpha ,\beta }^{\gamma ,\nu }(\lambda ) = \sum _{k=0}^\infty \frac{\lambda ^k}{k!} \frac{\Gamma (k+\gamma )}{\Gamma (\alpha k +\beta )^\nu }. \end{aligned}$$
(49)

The convergence of the above series can be ascertained as follows. Let \(\gamma \le 1\); by Gautschi’s inequality (see Qi (2010), formula (2.23)) we have the upper bound

$$\begin{aligned} \eta (\lambda ) \le \frac{\Gamma (\gamma )}{\Gamma (\beta )^\nu } + \sum _{k=1}^{\infty } \frac{\lambda ^k k^{\gamma -1}}{\Gamma (\alpha k + \beta )^\nu }, \end{aligned}$$
(50)

which converges by the ratio test, taking into account the well-known asymptotics for ratios of gamma functions (see Tricomi and Erdélyi 1951). Now, let \(\gamma > 1\). In this case an upper bound can be derived from formula (3.72) of Qi (2010):

$$\begin{aligned} \eta (\lambda ) < \frac{\Gamma (\gamma )}{\Gamma (\beta )^\nu } + \sum _{k=1}^{\infty } \frac{\lambda ^k (k+\gamma )^{\gamma -1}}{\Gamma (\alpha k + \beta )^\nu }. \end{aligned}$$
(51)

Again, this converges by the ratio test, resorting to the above-mentioned asymptotic behaviour of ratios of gamma functions.

The random variable \(Y^w\) specializes to some well-known classical random variables. Specifically, we recognize the following:

1. If \(\gamma =\beta =\alpha =\nu =1\), we recover the Poisson distribution, as the weights equal unity for each k.

2. If \(\gamma =\beta =\alpha =1\), we recover the COM-Poisson distribution (Conway and Maxwell 1962) of Poisson parameter \(\lambda \) and dispersion parameter \(\nu \).

3. If \(\gamma =\alpha =\nu =1\) we obtain the hyper-Poisson distribution (Bardwell and Crow 1964).

4. If \(\gamma =\nu =1\) we obtain the alternative Mittag–Leffler distribution considered e.g. in Bardwell and Crow (1964) and Herrmann (2016).

5. If \(\gamma =1\) we recover the fractional COM-Poisson distribution (Garra et al. 2018).

6. If \(\nu =1\) we obtain the alternative generalized Mittag–Leffler distribution (Pogány and Tomovski 2016).

Since \(Y^w\) is a PSD, it is easy to derive its factorial moments,

$$\begin{aligned} a_r = \frac{\lambda ^r}{\eta _{\alpha ,\beta }^{\gamma ,\nu }(\lambda )} \sum _{k=r}^\infty \frac{\lambda ^{k-r}}{(k-r)!} \frac{\Gamma (k+\gamma )}{\Gamma (\alpha k+\beta )^\nu } = \lambda ^r \frac{\eta _{\alpha ,\alpha r+\beta }^{\gamma +r,\nu }(\lambda )}{\eta _{\alpha ,\beta }^{\gamma ,\nu }(\lambda )}, \end{aligned}$$
(52)

from which the moments are immediately derived by recalling formula (3).

Remark 3.3

Since \(\eta _{\alpha ,\alpha r+\beta }^{\gamma +r,\nu }(\lambda ) = \sum _{j \ge 0} \frac{\lambda ^j}{j!} A_{j,r}\) with

$$\begin{aligned} A_{j,r} = \frac{\Gamma (j + r + \gamma )}{\Gamma [\alpha (j+r)+\beta ]^{\nu }}, \end{aligned}$$
(53)

by using Faà di Bruno’s formula (Stanley 2012) one has

$$\begin{aligned} a_r&= \lambda ^r \sum _{j \ge 0} \frac{\lambda ^j}{j!} \sum _{i=0}^j \left( {\begin{array}{c}j\\ i\end{array}}\right) A_{j-i,r} D_i \quad \text {with} \nonumber \\ D_i&= \sum _{k=0}^i (-1)^k A_{0,0}^{-(k+1)} {{\mathfrak {B}}}_{i,k}(A_{1,0}, \ldots , A_{i-k+1,0}), \end{aligned}$$
(54)

where \(\{A_{j,0}\}\) and \(\{{{\mathfrak {B}}}_{i,k}\}\) are the coefficients of \(\eta _{\alpha , \beta }^{\gamma ,\nu }(\lambda )\) and the partial Bell exponential polynomials (Stanley 2012), respectively.

Furthermore, the probability mass function reads

$$\begin{aligned} P(Y^w=x) = \frac{\lambda ^x}{x!}\frac{\Gamma (x+\gamma )}{\Gamma (\alpha x + \beta )^\nu }\frac{1}{\eta _{\alpha ,\beta }^{\gamma ,\nu }(\lambda )}, \qquad x \ge 0. \end{aligned}$$
(55)

Concerning the variability of \(Y^w\), we use Theorem 3 of Kokonendji et al. (2008) together with the Lemma preceding it and the Corollary following it, that is, we impose log-convexity (log-concavity) of the weight function. For \(y \in {\mathbb {R}}_+\) we write

$$\begin{aligned} \frac{d^2}{d y^2} \log \frac{\Gamma (y+\gamma )}{\Gamma (\alpha y + \beta )^\nu }&= \frac{d}{dy} \left[ \frac{1}{\Gamma (y+\gamma )} \frac{d}{dy}\Gamma (y+\gamma ) - \frac{\nu }{\Gamma (\alpha y + \beta )} \frac{d}{dy} \Gamma (\alpha y+\beta ) \right] \nonumber \\&= \frac{d}{dy} \left[ \psi (y+\gamma ) - \nu \alpha \, \psi (\alpha y+\beta ) \right] , \end{aligned}$$
(56)

where \(\psi (z)\) is the Psi function (see Lebedev (1972), Section 1.3). In addition, by considering formula (6.4.10) of Abramowitz and Stegun (1964),

$$\begin{aligned} \frac{d^2}{d y^2} \log \frac{\Gamma (y+\gamma )}{\Gamma (\alpha y + \beta )^\nu } = \sum _{r=0}^\infty (y+\gamma +r)^{-2} - \nu \alpha ^2 \sum _{r=0}^\infty (\alpha y+\beta +r)^{-2}. \end{aligned}$$
(57)

Therefore log-convexity (log-concavity) of w(y) is equivalent to the condition

$$\begin{aligned} \nu < (>) \, \frac{\sum _{r=0}^\infty (y+\gamma +r)^{-2}}{\alpha ^2\sum _{r=0}^\infty (\alpha y+\beta +r)^{-2}}, \qquad \forall \, y \in {\mathbb {R}}_+. \end{aligned}$$
(58)

Hence, if (58) holds, then \(Y^w\) is overdispersed (underdispersed).
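Since \(\sum _{r\ge 0}(z+r)^{-2}\) equals the trigamma function \(\psi _1(z)\), condition (58) can be checked numerically on a grid of y values. A minimal sketch (grid and parameter values are arbitrary choices of ours):

```python
import numpy as np
from scipy.special import polygamma

def dispersion_bound(alpha, beta, gamma_, y):
    """Right-hand side of (58), via sum_{r>=0} (z+r)^(-2) = polygamma(1, z)."""
    return polygamma(1, y + gamma_) / (alpha**2 * polygamma(1, alpha * y + beta))

y = np.linspace(0.01, 50.0, 2000)
# COM-Poisson case (alpha = beta = gamma = 1): the bound is identically 1,
# recovering overdispersion for nu < 1 and underdispersion for nu > 1.
b = dispersion_bound(1.0, 1.0, 1.0, y)
print(b.min(), b.max())                           # both ~ 1
# hyper-Poisson case (alpha = gamma = nu = 1, beta = 2): the bound exceeds 1
# everywhere, so nu = 1 satisfies (58) with "<": overdispersion (Remark 3.4).
print(dispersion_bound(1.0, 2.0, 1.0, y).min())   # > 1
```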

Remark 3.4

(Classical special cases) If \(\alpha =\beta =\gamma =1\), then \(Y^w\) is the COM-Poisson random variable and (58) correctly reduces to the ranges \(\nu >1\) giving underdispersion and \(\nu \in [0,1)\) giving overdispersion. If \(\alpha =\gamma =\nu =1\), then \(Y^w\) is the hyper-Poisson random variable and (58) correctly reduces to the ranges \(\beta >1\) (overdispersion) and \(\beta \in [0,1)\) (underdispersion). This holds as \(\beta \mapsto \sum _{r=0}^\infty (y+\beta +r)^{-2}\) is decreasing for all fixed \(y \in {\mathbb {R}}_+\).

In the next two sections we analyze two special cases of interest, the first of which, to the best of our knowledge, has not yet been considered in the literature.

3.1.1 Model I

We first introduce the special case in which \(\alpha =1\) and \(\gamma =\beta \), with \(\beta >0\) (\(\beta =0\) being allowed only if \(\nu \ge 1\)). This is a three-parameter (\(\lambda ,\nu ,\beta \)) model which retains the same simple conditions for underdispersion and overdispersion as the COM-Poisson model: formula (58) reduces to \(\nu >1\) and \(\nu \in [0,1)\), respectively. However, this model is more flexible than the COM-Poisson model because of the presence of the parameter \(\beta \). Notice that the pmf can be written as

$$\begin{aligned} P(Y^w = x) = \frac{1}{x!}\exp \left( x \log \lambda + (1-\nu ) \log \Gamma (x + \beta ) -\log \eta _{1,\beta }^{\beta ,\nu }(\lambda ) \right) , \end{aligned}$$
(59)

which suggests that Model I belongs to the exponential family of distributions with parameters \(\log \lambda \) and \(1-\nu \), where \(\beta \) is a nuisance parameter or is known. Figures 4 and 5 show sample shapes of this family of distributions.

Fig. 4: Probability mass functions (59) for \(\lambda =0.1, 0.5, 1\), \(\beta = 0.5\), and \(\nu =0.1\)

Fig. 5: Probability mass functions (59) for \(\lambda =0.5, 2, 5, 10\), \(\beta = 0.1\), and \(\nu = 1.1\)

Note that distributions in Fig. 4 (Fig. 5) are overdispersed (underdispersed). Also,

$$\begin{aligned} P(Y^w = x+1) = \frac{\lambda }{(x+1)(x+\beta )^{\nu -1}} P(Y^w = x). \end{aligned}$$
(60)

This gives a procedure to calculate the probability mass function iteratively and to generate random numbers. The only remaining task is to compute \(\eta _{1,\beta }^{\beta ,\nu }(\lambda )\) in order to obtain \(P(Y^w = 0)=1/[\Gamma (\beta )^{\nu -1} \eta _{1,\beta }^{\beta ,\nu }(\lambda )]\).

An upper bound for the normalizing function \(\eta _{1,\beta }^{\beta ,\nu }(\lambda )\) can be determined similarly to Minka et al. (2003), Section 3.2, taking into consideration that the multiplier

$$\begin{aligned} \lambda (j+\beta )^{1-\nu }/(j+1) \end{aligned}$$
(61)

is ultimately monotonically decreasing. Hence, we can approximate the normalizing constant \(\eta _{1,\beta }^{\beta ,\nu }(\lambda )\) by truncating the series and bound the truncation error \(R_{{\widetilde{k}}}\),

$$\begin{aligned} \eta _{1,\beta }^{\beta ,\nu }(\lambda )&= \sum _{j=0}^{{\widetilde{k}}} \frac{\lambda ^j}{j!} \Gamma (j+\beta )^{1-\nu } + R_{{\widetilde{k}}} \nonumber \\&< \sum _{j=0}^{{\widetilde{k}}} \frac{\lambda ^j}{j!} \Gamma (j+\beta )^{1-\nu } + \frac{\lambda ^{{\widetilde{k}}+1}\Gamma ({\widetilde{k}}+1+\beta )^{1-\nu }}{({\widetilde{k}}+1)!} \sum _{j=0}^\infty \varepsilon _{{\widetilde{k}}}^j \nonumber \\&< \sum _{j=0}^{{\widetilde{k}}} \frac{\lambda ^j}{j!} \Gamma (j+\beta )^{1-\nu } + \frac{\lambda ^{{\widetilde{k}}+1}\Gamma ({\widetilde{k}}+1+\beta )^{1-\nu }}{({\widetilde{k}}+1)!\,(1-\varepsilon _{{\widetilde{k}}})}, \end{aligned}$$
(62)

where \({\widetilde{k}}\) is such that for \(j>{\widetilde{k}}\) the multiplier (61) is already monotonically decreasing and bounded above by \(\varepsilon _{{\widetilde{k}}} \in (0,1)\). Correspondingly, denoting by \({\widetilde{\eta }}_{1,\beta }^{\beta ,\nu }(\lambda ) = \sum _{j=0}^{{\widetilde{k}}} \frac{\lambda ^j}{j!} \Gamma (j+\beta )^{1-\nu }\) the truncated series, the relative truncation error \(R_{{\widetilde{k}}}/{\widetilde{\eta }}_{1,\beta }^{\beta ,\nu }(\lambda )\) is bounded by

$$\begin{aligned} \frac{\lambda ^{{\widetilde{k}}+1}\Gamma ({\widetilde{k}}+1+\beta )^{1-\nu }}{({\widetilde{k}}+1) !\,(1-\varepsilon _{{\widetilde{k}}}){\widetilde{\eta }}_{1,\beta }^{\beta ,\nu }(\lambda )}. \end{aligned}$$
(63)
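Putting (60)-(63) together gives a complete evaluation scheme for Model I. The following minimal Python sketch (tolerances, parameter values and the support cap are our own choices) truncates \(\eta _{1,\beta }^{\beta ,\nu }(\lambda )\) using the geometric tail bound of (62) and then applies the recursion (60); the final check illustrates underdispersion for \(\nu >1\).

```python
import math

def model1_pmf(lam, beta, nu, eps=1e-12, support=200):
    """Model I pmf: eta truncated as in (62), then the recursion (60)."""
    eta, term, j = 0.0, math.gamma(beta) ** (1.0 - nu), 0
    while True:
        eta += term
        mult = lam * (j + beta) ** (1.0 - nu) / (j + 1)   # the multiplier (61)
        if mult < 1.0 and term * mult / (1.0 - mult) < eps * eta:
            break                        # geometric tail bound of (62) below eps
        term *= mult
        j += 1
    p = [1.0 / (math.gamma(beta) ** (nu - 1.0) * eta)]    # P(Y^w = 0)
    for x in range(support):
        p.append(p[-1] * lam / ((x + 1) * (x + beta) ** (nu - 1.0)))  # recursion (60)
    return p

p = model1_pmf(lam=2.0, beta=0.5, nu=1.3)   # nu > 1: the underdispersed case
mean = sum(x * px for x, px in enumerate(p))
var = sum(x * x * px for x, px in enumerate(p)) - mean**2
print(sum(p), mean, var)   # pmf sums to ~1 and var < mean, as expected
```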

As a last remark, we can further simplify the model to a two-parameter one by letting \(\nu =\beta \), with \(\beta >0\). The resulting model still allows for underdispersion (\(\beta >1\)) and overdispersion (\(\beta \in (0,1)\)), and should be directly compared with the COM-Poisson and hyper-Poisson models.

3.1.2 Model II

If we set \(\alpha =\nu =1\) we get another three-parameter (\(\lambda ,\gamma ,\beta \)) model, a special case of the alternative generalized Mittag–Leffler distribution (see point 6 above). The reparametrization \(\beta =\xi \gamma \), together with condition (58), shows that both overdispersion (\(\xi >1\)) and underdispersion (\(\xi \in (0,1)\)) are possible. This comes from the fact that \(\omega \mapsto \sum _{r=0}^\infty (y+\omega +r)^{-2}\) is decreasing for all fixed \(y \in {\mathbb {R}}_+\). As for Model I, the probability distribution belongs to the exponential family with parameter \(\log \lambda ,\) with \(\gamma \) and \(\beta \) as nuisance parameters. Explicitly, the pmf reads

$$\begin{aligned} P(Y^w=x) = \frac{\lambda ^x}{x!}\frac{\Gamma (x+\gamma )}{\Gamma (x + \beta )}\frac{1}{\eta _{1,\beta }^{\gamma ,1}(\lambda )}, \qquad x \ge 0, \end{aligned}$$
(64)

and, as in the previous Sect. 3.1.1, the iterative representation

$$\begin{aligned} P(Y^w = x+1) = \frac{\lambda (x+\gamma )}{(x+1)(x+\beta )} P(Y^w = x), \end{aligned}$$
(65)

allows an approximate evaluation of the pmf with error control, and consequently random number generation; also in this case this holds because the involved multiplier is ultimately monotonically decreasing (see the sketch below). Figures 6 and 7 show some shapes from this class of distributions. Observe that the distributions in Fig. 6 (Fig. 7) are overdispersed (underdispersed), consistently with \(\xi > 1\) (\(\xi < 1\)).
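An analogous sketch for Model II (again, tolerances and parameter values are arbitrary choices of ours) truncates (49) with \(\alpha =\nu =1\) directly and applies the recursion (65); the parameters below have \(\xi =\beta /\gamma <1\), so the variance should fall below the mean.

```python
import math

def model2_pmf(lam, gamma_, beta, support=200):
    """Model II pmf via the recursion (65); eta from direct truncation of (49)."""
    w0 = math.exp(math.lgamma(gamma_) - math.lgamma(beta))  # w(0) = Gamma(gamma)/Gamma(beta)
    eta, term = 0.0, w0
    for j in range(10**4):
        eta += term
        term *= lam * (j + gamma_) / ((j + 1) * (j + beta))   # same multiplier as in (65)
        if term < 1e-16 * eta:
            break
    p = [w0 / eta]                                            # P(Y^w = 0)
    for x in range(support):
        p.append(p[-1] * lam * (x + gamma_) / ((x + 1) * (x + beta)))  # recursion (65)
    return p

p = model2_pmf(lam=3.0, gamma_=1.1, beta=0.1)   # beta < gamma, i.e. xi < 1
mean = sum(x * px for x, px in enumerate(p))
var = sum(x * x * px for x, px in enumerate(p)) - mean**2
print(sum(p), mean, var)   # sums to ~1, with var < mean (underdispersion)
```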

Fig. 6: Probability mass functions (64) for \(\lambda =0.5, 2, 5, 10\), \(\beta = 0.5\), and \(\gamma =0.1\)

Fig. 7: Probability mass functions (64) for \(\lambda =0.5, 2, 5, 10\), \(\beta = 0.1\), and \(\gamma = 1.1\)

If we further let \(\lambda =1\), we obtain a two-parameter model, still allowing for underdispersion if \(\beta \in (0,\gamma )\) (or equivalently \(\xi \in (0,1)\)) and overdispersion if \(\beta > \gamma \) (or \(\xi > 1\)), which is also directly comparable with the two-parameter Model I above, the COM-Poisson model, and the hyper-Poisson model.

3.1.3 Comparison

We now compare Model I and Model II with known models that allow overdispersion and underdispersion, namely the COM-Poisson, generalized Poisson and hyper-Poisson models cited above. Note that the hyper-Poisson distribution satisfies

$$\begin{aligned} P(Y^w = x+1) = \frac{\lambda }{(x+\beta )} P(Y^w = x). \end{aligned}$$
(66)

For comparison purposes, we first consider the number-of-fish-caught dataFootnote 2 shown in Fig. 8 (left panel) below. The data set corresponds to 239 groups (11 potential outliers were removed) that went to a state park, where state wildlife biologists asked visitors how many fish they caught. The mean number of fish caught is around 1.48, while the variance is 8.04. Furthermore, the optimx (for hyper-Poisson, Model I and Model II), COMPoissonReg and compoisson (for COM-Poisson), and VGAM (for generalized Poisson) packages in R are used for the maximum likelihood estimation and the chi-square goodness-of-fit tests. In particular, the L-BFGS-B method from the optimx package is used, and 1000 terms were summed for the normalizing constant \(\eta _{\alpha ,\beta }^{\gamma , \nu } (\lambda )\). Just like in the comparisons above, a chi-square distribution is used as reference, where the degrees of freedom equal the number of cells minus the number of model parameters. From Table 3, Model I and Model II clearly outperform the other models, although the generalized Poisson and hyper-Poisson (a subcase of the WPD class) also provide good fits to the fish count data.

Fig. 8: (Left) The fish caught count data. (Right) The count of articles produced by graduate students in biochemistry Ph.D. programs

Table 3 Comparison results for the fish count data

We have also considered the bioChemists data from the pscl package in R, in particular the count of articles produced by 915 graduate students in biochemistry Ph.D. programs during the last 3 years in the program. The data have mean 1.69 and variance 3.71, and are shown in Fig. 8 (right panel). Table 4 suggests that Model II outperforms the rest of the models considered for the article count data. Overall, WPDs (e.g., Model I and Model II) show potential for flexibly capturing overdispersed and/or underdispersed count data distributions.

Table 4 Comparison results for the article count data