1 Introduction

Availability of count data, that is, the number of occurrences of an event within a fixed period of time, is rapidly increasing in all areas of human activity. Modeling such data has become an important issue in medicine, biology, ecology, economics, demography and other sciences.

In many real-life problems, the counts form a pair of dependent variables. For example, in medical research we count the coronavirus cases treated in a hospital and the number of deaths recorded among them. In actuarial science, we count the number of traffic accidents and the corresponding number of deaths or, in insurance claims, the two count variables often represent damage and bodily injury.

Consequently, for the statistical analysis of such data, the use of bivariate discrete models may be appropriate.

Over the years, a large number of univariate discrete distributions have been extended to the bivariate (multivariate) case; basic references are the books by Kocherlakota and Kocherlakota [1] and Johnson, Kotz and Balakrishnan [2] and the review paper by Lai [3].

The one-parameter univariate Poisson–Lindley distribution was introduced by Sankaran [4]. Since it is a Poisson mixture, constructed by letting the Poisson parameter follow a continuous Lindley model, this distribution is over-dispersed. Such a property is often useful in describing count data. In addition, since it has only one parameter, it has attracted the attention of several researchers. In the last decade, several univariate Poisson–Lindley distributions with two or more parameters were introduced by [5,6,7,8,9], among others. These distributions are Poisson mixtures derived by assuming that the mixing variable follows a continuous Lindley distribution with two or more parameters.

Bivariate (multivariate) extensions of the one-parameter Poisson–Lindley model were considered by Gómez-Déniz et al. [10], and their usefulness was demonstrated for bivariate count data sets. Their model was revisited by [11].

The main purpose of this paper is to introduce and study two families of bivariate Poisson–Lindley distributions, each with five members, by extending the univariate Poisson–Lindley models examined by [4,5,6,7,8] to the bivariate case. For this purpose, we adopt two widely used procedures, namely the mixing and the generalizing approach. The individual bivariate models examined can be useful in describing bivariate count data, since their basic characteristic is that the number of their parameters is limited to two or three. This usefulness is further enhanced because one of the parameters is readily estimated by a simple ratio of the sample means. In addition, in all marginal distributions the index of dispersion is greater than one. General properties of each class are derived mainly by employing probability-generating functions (p.g.f.’s), and these are customized for each of their members.

The rest of the paper is organized as follows. In Sect. 2, we document certain properties of various univariate Poisson–Lindley distributions that have appeared in the literature. Additional characteristics are derived, including characterizations as special cases of a result due to Cacoullos and Papageorgiou [12]. In Sect. 3, bivariate Poisson–Lindley mixtures are derived by assuming that the common parameter in a bivariate Poisson distribution follows a univariate Lindley distribution with one or two parameters. This procedure was used by [4,5,6,7,8] in the univariate case to derive the corresponding Poisson–Lindley models. Furthermore, by implementing a characterization theorem proved by Cacoullos and Papageorgiou [13], related characterizations are given for the bivariate (X, Y) distributions. In Sect. 4, we generalize a bivariate binomial distribution with respect to its exponent, when the exponent follows one of the five univariate Poisson–Lindley distributions derived by [4,5,6,7,8]. Finally, Sect. 5 concludes.

2 Univariate Poisson Mixtures and Poisson–Lindley Distributions

Let the parameter \(\lambda \) of a Poisson distribution be a continuous random variable (r.v.) with probability density function (p.d.f.) \(F'(\lambda )=f(\lambda )\) and moment-generating function (m.g.f.) \(M_\varLambda (\cdot )\). Then, if X is a nonnegative integer-valued r.v. with probability mass function (p.f.) \(p(x)=P(X=x)\), a Poisson mixture is defined by

$$\begin{aligned} p(x)=\int ^\infty _0e^{-\lambda }\frac{\lambda ^x}{x!}\mathrm{d}F(\lambda ). \end{aligned}$$
(1)

Consequently, since \(G_X(s)=E(s^X)=E[E(s^X|\varLambda )]=E[e^{\varLambda (s-1)}]\), the p.g.f. of the r.v. X is

$$\begin{aligned} G_X(s)=M_\varLambda (s-1). \end{aligned}$$
(2)

General properties of Poisson mixture models were studied by Karlis and Xekalaki [14]. In particular, they pointed out that the index of dispersion, that is, the variance-to-mean ratio, for a mixed Poisson distribution is always greater than one. Consequently, Poisson mixture models are over-dispersed. A simple characterization of Poisson mixtures was indicated by Cacoullos and Papageorgiou [12], see also [15], using the one-parameter Poisson–Lindley distribution as an illustrative example. Their result is stated here as Theorem 1, and a detailed proof is provided in the Appendix.

Theorem 1

Let X be a Poisson mixture defined by Eq. (1) and \(\varLambda >0\) a continuous r.v. with density function \(F'(\lambda )=f(\lambda )\). Then, the regression function of \(\varLambda \) on X

$$\begin{aligned} E[\varLambda |X=x]=(x+1)\frac{p(x+1)}{p(x)} \end{aligned}$$
(3)

determines uniquely both the distributions of \(\varLambda \) and X.
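
As a quick numerical illustration of Eq. (3), the following minimal Python sketch compares the regression function \(E[\varLambda |X=x]\), computed by direct integration against the one-parameter Lindley density \(f(\lambda )=\frac{\theta ^2}{\theta +1}(1+\lambda )e^{-\theta \lambda }\), with the ratio \((x+1)p(x+1)/p(x)\) built from the Poisson–Lindley p.f. of Eq. (7) below. The value of \(\theta \) and the function names are our own illustrative choices.

```python
import math
from scipy.integrate import quad

theta = 1.5  # arbitrary illustrative value

def lindley_pdf(lam):
    # One-parameter Lindley density: f(lam) = theta^2/(theta+1) * (1+lam) * exp(-theta*lam)
    return theta**2 / (theta + 1) * (1 + lam) * math.exp(-theta * lam)

def pl_pmf(x):
    # One-parameter Poisson-Lindley p.f., Eq. (7)
    return theta**2 / (theta + 1) * (theta + x + 2) / (theta + 1)**(x + 2)

for x in range(5):
    # E[Lambda | X = x] by direct integration of lam * Poisson(x; lam) * f(lam) / p(x)
    num, _ = quad(lambda lam: lam**(x + 1) / math.factorial(x)
                  * math.exp(-lam) * lindley_pdf(lam), 0, 60)
    lhs = num / pl_pmf(x)
    rhs = (x + 1) * pl_pmf(x + 1) / pl_pmf(x)  # right-hand side of Eq. (3)
    print(x, round(lhs, 8), round(rhs, 8))
```

The two columns agree to numerical precision, as the mixed Poisson identity (3) requires.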

Poisson–Lindley distributions are obtained by allowing the parameter \(\lambda \) of a Poisson distribution to follow a Lindley distribution with one or more parameters.

In this section, we document some basic properties of various univariate Poisson–Lindley distributions already introduced in the literature. Additional useful characteristics are also derived.

2.1 One-Parameter Poisson–Lindley Distribution

This distribution was introduced by Sankaran [4] by mixing (compounding) the Poisson distribution, letting its parameter follow the distribution given by Lindley [16] with m.g.f.

$$\begin{aligned} M_\varLambda (s)=\frac{\theta ^2}{\theta +1}\,\frac{\theta -s+1}{(\theta -s)^2}, \ \ \text {with} \ \ \lambda ,\theta >0. \end{aligned}$$
(4)

Then, the p.g.f. of the corresponding Poisson–Lindley r.v. X is

$$\begin{aligned} G_X(s)=\frac{\theta ^2}{\theta +1}\,\frac{\theta -s+2}{(\theta -s+1)^2}. \end{aligned}$$
(5)

Since

$$\begin{aligned} \frac{\partial ^xG(s)}{\partial s^x}=x!\,\frac{\theta ^2}{\theta +1}\,\frac{\theta -s+x+2}{(\theta -s+1)^{x+2}}, \end{aligned}$$
(6)

we immediately derive the p.f.

$$\begin{aligned} p(x;\theta )=\frac{\theta ^2}{\theta +1}\,\frac{\theta +x+2}{(\theta +1)^{x+2}}, \ \ x=0,1,2,\ldots \end{aligned}$$
(7)

and the factorial moments

$$\begin{aligned} \mu _{[\tau ]:X}=\tau !\frac{1}{\theta ^\tau }\,\frac{\theta +\tau +1}{\theta +1}, \ \ \tau =1,2,\ldots , \end{aligned}$$
(8)

where

$$\begin{aligned} \mu _{[\tau ]:X}=E(X^{(\tau )}) \ \ \text {and} \ \ X^{(\tau )}=X(X-1)\ldots (X-\tau +1). \end{aligned}$$
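
The p.f. (7) and the factorial moments (8) are easy to check numerically. A minimal sketch (with an arbitrary \(\theta \)) sums the p.f. directly and compares the resulting mean and second factorial moment with the values implied by Eq. (8), namely \(\mu _{[1]}=\frac{\theta +2}{\theta (\theta +1)}\) and \(\mu _{[2]}=\frac{2(\theta +3)}{\theta ^2(\theta +1)}\):

```python
theta = 1.5  # arbitrary illustrative value

def pl_pmf(x):
    # Eq. (7)
    return theta**2 / (theta + 1) * (theta + x + 2) / (theta + 1)**(x + 2)

xs = range(2000)  # truncation; the tail is geometrically small
total = sum(pl_pmf(x) for x in xs)
mean = sum(x * pl_pmf(x) for x in xs)
second = sum(x * (x - 1) * pl_pmf(x) for x in xs)  # second factorial moment

mu1 = (theta + 2) / (theta * (theta + 1))         # Eq. (8), tau = 1
mu2 = 2 * (theta + 3) / (theta**2 * (theta + 1))  # Eq. (8), tau = 2

var = second + mean - mean**2
print(total, mean, mu1, second, mu2)
print(var / mean)  # index of dispersion, greater than one
```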

2.2 A Two-Parameter Poisson–Lindley Distribution

A two-parameter Lindley distribution was introduced by [17] with m.g.f.

$$\begin{aligned} M_\varLambda (s)=\frac{\theta ^2}{\alpha \theta +1}\,\frac{\alpha (\theta -s)+1}{(\theta -s)^2}, \ \ \text {with} \ \ \lambda ,\theta ,\alpha >0. \end{aligned}$$
(9)

By assuming that the Poisson parameter \(\lambda \) in Eq. (1) follows a distribution with m.g.f. given by Eq. (9), Shanker and Mishra [5] obtained a two-parameter Poisson–Lindley distribution with p.g.f.

$$\begin{aligned} G_X(s)=\frac{\theta ^2}{\alpha \theta +1}\,\frac{\alpha (\theta -s+1)+1}{(\theta -s+1)^2}. \end{aligned}$$
(10)

Since

$$\begin{aligned} \frac{\partial ^xG(s)}{\partial s^x}=x!\frac{\theta ^2}{\alpha \theta +1}\,\frac{\alpha (\theta -s+1)+x+1}{(\theta -s+1)^{x+2}}, \end{aligned}$$
(11)

we derive

$$\begin{aligned} p(x;\theta ,\alpha )=\frac{\theta ^2}{\alpha \theta +1}\,\frac{\alpha (\theta +1)+x+1}{(\theta +1)^{x+2}}, \ \ x=0,1,2,\ldots \end{aligned}$$
(12)

and a simple relation for the factorial moments

$$\begin{aligned} \mu _{[\tau ]:X}=\tau !\frac{1}{\theta ^\tau }\,\frac{\alpha \theta +\tau +1}{\alpha \theta +1}, \ \ \tau =1,2,\ldots \;. \end{aligned}$$
(13)

Finally, from Theorem 1, the two-parameter Lindley and the two-parameter Poisson–Lindley distributions are determined uniquely, if

$$\begin{aligned} E[\varLambda |X=x]=\frac{x+1}{\theta +1}\,\frac{\alpha (\theta +1)+x+2}{\alpha (\theta +1)+x+1}. \end{aligned}$$

2.3 A New Generalized Poisson–Lindley Distribution

A slightly different two-parameter Lindley distribution from the one introduced by [17] was suggested by [18] with m.g.f.

$$\begin{aligned} M_\varLambda (s)=\frac{\theta ^2}{\theta +\alpha }\,\frac{\theta -s+\alpha }{(\theta -s)^2}, \ \ \text {with} \ \ \lambda ,\theta ,\alpha >0. \end{aligned}$$
(14)

Based on (14), a new generalized Poisson–Lindley distribution was proposed by Bhati et al. [6] with p.g.f.

$$\begin{aligned} G_X(s)=\frac{\theta ^2}{\theta +\alpha }\,\frac{\theta -s+\alpha +1}{(\theta -s+1)^2}. \end{aligned}$$
(15)

Since

$$\begin{aligned} \frac{\partial ^xG(s)}{\partial s^x}=x!\frac{\theta ^2}{\theta +\alpha }\,\frac{\theta -s+1+\alpha (x+1)}{(\theta -s+1)^{x+2}}, \end{aligned}$$
(16)

we have

$$\begin{aligned} p(x;\theta ,\alpha )=\frac{\theta ^2}{\theta +\alpha }\,\frac{\theta +1+\alpha (x+1)}{(\theta +1)^{x+2}}, \ \ x=0,1,2,\ldots \end{aligned}$$
(17)

and

$$\begin{aligned} \mu _{[\tau ]:X}=\tau !\frac{\theta +\alpha (\tau +1)}{\theta ^\tau (\theta +\alpha )}, \ \ \tau =1,2,\ldots \;. \end{aligned}$$
(18)

Finally, from Theorem 1

$$\begin{aligned} E[\varLambda |X=x]=\frac{x+1}{\theta +1}\,\frac{(\theta +1)+\alpha (x+2)}{(\theta +1)+\alpha (x+1)}. \end{aligned}$$

2.4 A Generalized Poisson–Lindley Distribution

A generalized Lindley distribution with m.g.f.

$$\begin{aligned} M_\varLambda (s)=\frac{\theta ^{\alpha +1}}{\theta +1}\,\frac{\theta -s+1}{(\theta -s)^{\alpha +1}}, \ \ \text {with} \ \ \lambda ,\theta ,\alpha >0 \end{aligned}$$
(19)

was obtained by [19].

A generalized Poisson–Lindley distribution utilizing Eq. (19) was introduced by Mahmoudi and Zakerzadeh [7] with p.g.f.

$$\begin{aligned} G_X(s)=\frac{\theta ^{\alpha +1}}{\theta +1}\,\frac{\theta -s+2}{(\theta -s+1)^{\alpha +1}}. \end{aligned}$$
(20)

From

$$\begin{aligned} \frac{\partial ^xG(s)}{\partial s^x}=\frac{{\varGamma }(x+\alpha )}{{\varGamma }(\alpha +1)}\,\frac{\theta ^{\alpha +1}}{\theta +1}\, \frac{\alpha (\theta -s+1)+x+\alpha }{(\theta -s+1)^{x+\alpha +1}}, \end{aligned}$$
(21)

the following properties are derived:

$$\begin{aligned} p(x;\theta ,\alpha )=\frac{{\varGamma }(x+\alpha )}{x!{\varGamma }(\alpha +1)}\,\frac{\theta ^{\alpha +1}}{\theta +1}\, \frac{\alpha (\theta +1)+x+\alpha }{(\theta +1)^{x+\alpha +1}}, \ \ x=0,1,2,\ldots \end{aligned}$$
(22)

and

$$\begin{aligned} \mu _{[\tau ]:X}=\frac{{\varGamma }(\tau +\alpha )}{{\varGamma }(\alpha +1)}\,\frac{\alpha (\theta +1)+\tau }{\theta ^\tau (\theta +1)}, \ \ \tau =1,2,\ldots \;. \end{aligned}$$
(23)

Another illustration of Theorem 1 is the following characterization. If

$$\begin{aligned} E[\varLambda |X=x]=\frac{x+\alpha }{\theta +1}\,\frac{\alpha (\theta +1)+x+\alpha +1}{\alpha (\theta +1)+x+\alpha }, \end{aligned}$$

then the distributions of the generalized Lindley and the generalized Poisson–Lindley distributions are uniquely determined.

2.5 A New Two-Parameter Poisson-Generalized Lindley Distribution

This distribution was recently introduced by Altun [8] and is a Poisson mixture when the Poisson parameter \(\lambda \) follows the new generalized Lindley distribution studied by [20] with m.g.f.

$$\begin{aligned} M_\varLambda (s)=\frac{\theta ^2}{\theta +1}\,\frac{(\theta -s)^{\alpha -1}+\theta ^{\alpha -2}}{(\theta -s)^\alpha }, \ \ \text {with} \ \ \lambda ,\theta ,\alpha >0. \end{aligned}$$
(24)

Some properties of this distribution are:

$$\begin{aligned}&G_X(s)=\frac{\theta ^2}{\theta +1}\,\frac{(\theta -s+1)^{\alpha -1}+\theta ^{\alpha -2}}{(\theta -s+1)^\alpha } \end{aligned}$$
(25)
$$\begin{aligned}&\frac{\partial ^xG(s)}{\partial s^x}=\frac{\theta ^2}{\theta +1}\,\frac{x!{\varGamma }(\alpha )(\theta -s+1)^{\alpha -1}+{\varGamma }(x+\alpha )\theta ^{\alpha -2}}{{\varGamma }(\alpha )(\theta -s+1)^{x+\alpha }} \end{aligned}$$
(26)
$$\begin{aligned}&p(x;\theta ,\alpha )=\frac{\theta ^2}{\theta +1}\,\frac{x!{\varGamma }(\alpha )(\theta +1)^{\alpha -1}+{\varGamma }(x+\alpha )\theta ^{\alpha -2}}{x!{\varGamma }(\alpha )(\theta +1)^{x+\alpha }}, \ \ x=0,1,2,\ldots , \quad \qquad \end{aligned}$$
(27)
$$\begin{aligned}&\mu _{[\tau ]:X}=\frac{\tau !{\varGamma }(\alpha )\theta +{\varGamma }(\tau +\alpha )}{\theta ^\tau (\theta +1){\varGamma }(\alpha )}, \ \ \tau =1,2,\ldots \;. \end{aligned}$$
(28)

2.6 Another Two-Parameter Poisson–Lindley Distribution

All previous models were based on the assumption that in a Poisson mixture model the Poisson parameter \(\lambda \) varies according to a Lindley distribution.

Let us now suppose that in a Poisson mixture the Poisson parameter is of the form \(\varphi \lambda \), where \(\varphi \) is a positive constant and \(\lambda \) is a continuous r.v. with m.g.f. \(M_\varLambda (\cdot )\).

Then, if Y is a nonnegative integer-valued r.v. with p.g.f. \(G_Y(t)\) the corresponding Poisson mixture model is defined as

$$\begin{aligned} G_Y(t)=M_\varLambda (\varphi (t-1)). \end{aligned}$$
(29)

A two-parameter Poisson–Lindley distribution was derived by [10], assuming that the m.g.f. of the r.v. \(\varLambda \) in Eq. (29) is given by expression (4) corresponding to the m.g.f. of the one-parameter Lindley distribution. The p.g.f. of this distribution is

$$\begin{aligned} G_Y(t)=\frac{\theta ^2}{\theta +1}\,\frac{\theta +\varphi -\varphi t+1}{(\theta +\varphi -\varphi t)^2} \end{aligned}$$
(30)

and p.f.

$$\begin{aligned} p(y;\theta ,\varphi )=\varphi ^y\frac{\theta ^2}{\theta +1}\,\frac{\theta +\varphi +y+1}{(\theta +\varphi )^{y+2}}, \ \ y=0,1,2,\ldots . \end{aligned}$$
(31)

In addition, they obtained

$$\begin{aligned} \mu _{[\tau ]:Y}=\tau !\bigg (\frac{\varphi }{\theta }\bigg )^\tau \,\frac{\theta +\tau +1}{\theta +1}, \ \ \tau =1,2,\ldots \;. \end{aligned}$$
(32)

3 Mixed (Compounded) Bivariate Poisson Distributions

A general class of compounded bivariate Poisson distributions was extensively studied by [1, Chapter 8] and Kocherlakota [21]. They considered the class of distributions (X, Y) with p.g.f.

$$\begin{aligned} G_{X,Y}(s,t|\lambda )=\exp \{\lambda [\varphi _1(s-1)+\varphi _2(t-1)+\varphi _{12}(st-1)]\} \end{aligned}$$

where \(\varphi _1,\varphi _2,\varphi _{12}\) are constants and \(\lambda \) is a r.v. with m.g.f. \(M_\varLambda (\cdot )\). They proved that

$$\begin{aligned} G_{X,Y}(s,t)=M_\varLambda [\varphi _1(s-1)+\varphi _2(t-1)+\varphi _{12}(st-1)] \end{aligned}$$
(33)

and this representation enabled them to derive various general properties.

3.1 A Bivariate Poisson–Lindley Distribution

Let us now consider a simpler form of Eq. (33), obtained by setting \(\varphi _{12}=0\), with p.g.f.

$$\begin{aligned} G_{X,Y}(s,t)=M_\varLambda [\varphi _1(s-1)+\varphi _2(t-1)]. \end{aligned}$$

Gómez-Déniz et al. [10], by assuming that \(\lambda \) follows the one-parameter Lindley distribution with m.g.f. given by Eq. (4), derived a bivariate Poisson–Lindley distribution with p.g.f.

$$\begin{aligned} G_{X,Y}(s,t)=\frac{\theta ^2}{\theta +1}\,\frac{\theta +\varphi _1+\varphi _2-\varphi _1s-\varphi _2t+1}{(\theta +\varphi _1+\varphi _2-\varphi _1s-\varphi _2t)^2}. \end{aligned}$$
(34)

The marginals are two-parameter Poisson–Lindley distributions with p.g.f.’s of the form given by expression (30).

A detailed study of this bivariate distribution was presented by [10], who also considered multivariate extensions.

Remark

Bivariate Poisson–Lindley distributions can also be derived by using an approach suggested by David and Papageorgiou [22]. They examined the general class of distributions with p.g.f.

$$\begin{aligned} G_{X,Y}(s,t|\lambda _1,\lambda _2)=\exp \{\lambda _1[\varphi _1(s-1)]+\lambda _2[\varphi _2(t-1)]\} \end{aligned}$$

where \(\varphi _1\) and \(\varphi _2\) are constants and \((\lambda _1,\lambda _2)\) are r.v.’s of the discrete or the continuous type, with m.g.f. \(M_{\varLambda _1,\varLambda _2}(\cdot ,\cdot )\). Then, since

$$\begin{aligned} G_{X,Y}(s,t)=M_{\varLambda _1,\varLambda _2}[\varphi _1(s-1),\varphi _2(t-1)], \end{aligned}$$

if \((\lambda _1,\lambda _2)\) follows a bivariate Lindley distribution, the corresponding bivariate Poisson–Lindley models can be constructed.

3.2 A Family of Mixed Bivariate Poisson–Lindley Distributions

A bivariate discrete model with the structure

$$\begin{aligned} Y=Y_1+Y_2+\cdots +Y_X \end{aligned}$$
(35)

where X follows a Poisson distribution with parameter \(\lambda \) and the \(Y_i\)’s are i.i.d. Bernoulli r.v.’s with parameter p was studied by Leiter and Hamdan [23] and Cacoullos and Papageorgiou [24] to express the joint distribution of the number of accidents and the number of fatal accidents.

Its p.g.f. is given by the relation

$$\begin{aligned} G_{X,Y}(s,t)=\exp \{\lambda [(1-p)(s-1)+p(st-1)]\}. \end{aligned}$$
(36)

For other bivariate models with the structure given by relation (35), see, among others, [25,26,27]. By assuming that \(\lambda \) follows a distribution with m.g.f. \(M_\varLambda (\cdot )\), Eq. (36) becomes

$$\begin{aligned} G_{X,Y}(s,t)=M_\varLambda [q(s-1)+p(st-1)] \end{aligned}$$
(37)

where \(q=1-p\). Then,

$$\begin{aligned} G_X(s)=M_\varLambda (s-1) \end{aligned}$$

which is Eq. (2) and

$$\begin{aligned} G_Y(t)=M_\varLambda (p(t-1)) \end{aligned}$$

which is Eq. (29) with the parameter \(\varphi \) replaced by p.

Some general properties of this class of distributions can be easily derived. In particular, since

$$\begin{aligned} \mu _{[\tau ]:Y}=p^\tau \mu _{[\tau ]:X} \end{aligned}$$
(38)

and

$$\begin{aligned} E(XY)=pE(X^2) \end{aligned}$$

we have

$$\begin{aligned} E(Y)=pE(X) \end{aligned}$$
(39)
$$\begin{aligned} Var(Y)=p[pVar(X)+qE(X)] \end{aligned}$$
$$\begin{aligned} Cov(X,Y)=pVar(X) \end{aligned}$$
(40)
$$\begin{aligned} Var(Y)\le Cov(X,Y)\le Var(X). \end{aligned}$$
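
These moment relations can be verified by simulation. The sketch below (illustrative parameter values, one-parameter Lindley mixing as in Sect. 2.1) exploits the fact that the Lindley density is a two-component gamma mixture, namely \(Exp(\theta )\) with weight \(\theta /(\theta +1)\) and \(Gamma(2,\theta )\) with weight \(1/(\theta +1)\), draws \((X,Y)\) through the structure (35), and checks Eqs. (39) and (40); the last line anticipates the moment estimator of p given below.

```python
import numpy as np

rng = np.random.default_rng(0)
theta, p, n = 1.5, 0.4, 500_000  # illustrative values
q = 1 - p

# Lambda ~ Lindley(theta): mixture of Gamma(1, 1/theta) and Gamma(2, 1/theta)
shape = np.where(rng.random(n) < theta / (theta + 1), 1.0, 2.0)
lam = rng.gamma(shape, 1 / theta)

x = rng.poisson(lam)    # X | Lambda ~ Poisson(Lambda)
y = rng.binomial(x, p)  # Y | X ~ Binomial(X, p), i.e., the structure (35)

print(y.mean(), p * x.mean())                         # Eq. (39)
print(y.var(), p * (p * x.var() + q * x.mean()))      # Var(Y) relation
print(np.cov(x, y, ddof=0)[0, 1], p * x.var())        # Eq. (40)
print(y.mean() / x.mean())                            # moment estimate of p (see below)
```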

To derive the conditional p.g.f. \(G_{Y|X=x}(z)\) of the r.v. Y given \(X=x\), we use the following result due to Subrahmaniam [28]:

For a bivariate discrete r.v. (X, Y) with p.g.f. \(G_{X,Y}(s,t)\), the conditional p.g.f. \(G_{Y|X=x}(z)\) of Y on X is

$$\begin{aligned} G_{Y|X=x}(z)=\frac{G^{(x,0)}(0,z)}{G^{(x,0)}(0,1)} \end{aligned}$$
(41)

where

$$\begin{aligned} G^{(x,y)}(u,v)=\frac{\partial ^{x+y}G(s,t)}{\partial s^x\partial t^y}\bigg |_{s=u,\,t=v}. \end{aligned}$$

Hence, from Eqs. (37) and (41), we obtain

$$\begin{aligned} G_{Y|X=x}(z)=(q+pz)^x. \end{aligned}$$
(42)

That is, the conditional distribution of Y given \(X=x\) is binomial with index x and parameter p, supported on \(y=0,1,\ldots ,x\). This result facilitates the calculation of the joint p.f. of X and Y, as

$$\begin{aligned} P(X=x,Y=y)=\left( {\begin{array}{c}x\\ y\end{array}}\right) p^yq^{x-y}P(X=x) \end{aligned}$$
(43)
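
Expressed in code, Eq. (43) turns any of the univariate p.f.’s of Sect. 2 into the joint p.f. of the corresponding mixed bivariate model; a minimal sketch (function names are our own):

```python
from math import comb

def joint_pmf(x, y, p, marginal_pmf):
    # Eq. (43): P(X=x, Y=y) = C(x, y) p^y q^(x-y) P(X=x), for 0 <= y <= x
    if not 0 <= y <= x:
        return 0.0
    return comb(x, y) * p**y * (1 - p)**(x - y) * marginal_pmf(x)
```

Here marginal_pmf is any of the univariate p.f.’s of Sect. 2, e.g., Eq. (7), (12) or (17).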

Additionally, a characterization of the joint distribution of (X, Y) can be obtained by using the following theorem derived by Cacoullos and Papageorgiou [13].

Theorem 2

For a bivariate discrete r.v. (X, Y), let

$$\begin{aligned} P(Y=y|X=x)=\left( {\begin{array}{c}x\\ y\end{array}}\right) p^yq^{x-y}, \ \ y=0,1,\ldots ,x \end{aligned}$$
(44)

and

$$\begin{aligned} E[X|Y=y]=y+\frac{q(y+1)}{p}\,\frac{P(Y=y+1)}{P(Y=y)}. \end{aligned}$$
(45)

Then, \(P(Y=y|X=x)\) and \(E[X|Y=y]\) together determine the distribution of (X, Y).

Furthermore, from Eq. (39) the parameter p can be immediately estimated by the ratio of the two marginal means, i.e.,

$$\begin{aligned} {\widehat{p}}=\frac{\bar{Y}}{\bar{X}}. \end{aligned}$$

This property facilitates the applicability of this class of distributions, since the remaining parameters can be estimated by procedures suggested for univariate Poisson–Lindley models.

3.3 Examples

3.3.1 Bivariate Poisson–Lindley Distribution Defined by Relations (37) and (4)

The p.g.f. of this distribution is

$$\begin{aligned} G_{X,Y}(s,t)=\frac{\theta ^2}{\theta +1}\,\frac{\theta -qs-pst+2}{(\theta -qs-pst+1)^2} \end{aligned}$$
(46)

with marginals

$$\begin{aligned} G_X(s)=\frac{\theta ^2}{\theta +1}\,\frac{\theta -s+2}{(\theta -s+1)^2}, \end{aligned}$$

a univariate one-parameter Poisson–Lindley distribution discussed in Sect. 2.1, and

$$\begin{aligned} G_Y(t)=\frac{\theta ^2}{\theta +1}\,\frac{\theta +p-pt+1}{(\theta +p-pt)^2}. \end{aligned}$$
(47)

This distribution has p.f.

$$\begin{aligned} p(y;\theta ,p)=p^y\frac{\theta ^2}{\theta +1}\,\frac{\theta +p+y+1}{(\theta +p)^{y+2}}, \ \ y=0,1,2,\ldots , \end{aligned}$$
(48)

which is Eq. (31) with the parameter \(\varphi \) replaced by p.

From Eqs. (43) and (7),

$$\begin{aligned} P(X=x,Y=y)=\frac{x!}{y!(x-y)!}p^yq^{x-y}\,\frac{\theta ^2}{\theta +1}\,\frac{\theta +x+2}{(\theta +1)^{x+2}},&\quad x=0,1,2,\ldots , \\&\quad y=0,1,2,\ldots ,x. \end{aligned}$$

Simple recurrences for probabilities can be obtained by using the ratios

$$\begin{aligned} \frac{P(X=x+1,Y=y)}{P(X=x,Y=y)} \ \ \text {and} \ \ \frac{P(X=x,Y=y+1)}{P(X=x,Y=y)}. \end{aligned}$$

In particular,

$$\begin{aligned} P(X=x+1,Y=y)=\frac{(x+1)}{(x+1-y)}\,\frac{q}{(\theta +1)}\,\frac{(\theta +x+3)}{(\theta +x+2)} P(X=x,Y=y) \end{aligned}$$

and

$$\begin{aligned} P(X=x,Y=y+1)=\frac{(x-y)}{(y+1)}\,\frac{p}{q}P(X=x,Y=y),&\quad x=0,1,2,\ldots , \\&\quad y=0,1,2,\ldots ,x \end{aligned}$$

with

$$\begin{aligned} P(X=0,Y=0)=\frac{\theta ^2(\theta +2)}{(\theta +1)^3} \end{aligned}$$

independent of the parameter p.
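
A minimal sketch (illustrative parameter values) builds the joint probability table from these two recurrences, starting from \(P(X=0,Y=0)\), and confirms that the probabilities sum to one up to truncation error:

```python
theta, p = 1.5, 0.4  # illustrative values
q = 1 - p
N = 80  # truncation point

P = {(0, 0): theta**2 * (theta + 2) / (theta + 1)**3}
for x in range(N):
    for y in range(x + 1):
        # recurrence in x (well defined since x + 1 - y >= 1 for y <= x)
        P[(x + 1, y)] = ((x + 1) / (x + 1 - y)) * (q / (theta + 1)) \
                        * (theta + x + 3) / (theta + x + 2) * P[(x, y)]
        if y < x:
            # recurrence in y
            P[(x, y + 1)] = ((x - y) / (y + 1)) * (p / q) * P[(x, y)]

print(sum(P.values()))  # approaches 1 as N grows
```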

From expressions (40) and (8),

$$\begin{aligned} Cov(X,Y)=p\frac{\theta ^3+4\theta ^2+6\theta +2}{\theta ^2(\theta +1)^2}. \end{aligned}$$

The conditional p.g.f. \(G_{Y|X=x}(z)\) is given by Eq. (42).

Applying to Eq. (46) the general formula of [28] for the derivation of conditional p.g.f.’s, as indicated by Eq. (41), \(G_{X|Y=y}(z)\) can be expressed as

$$\begin{aligned} G_{X|Y=y}(z)=z^y\frac{A_y(\theta ,qz)}{A_y(\theta ,q)} \end{aligned}$$
(49)

where

$$\begin{aligned} A_y(\theta ,qz)=\frac{\theta -qz+y+2}{(\theta -qz+1)^{y+2}}, \end{aligned}$$
(50)

i.e., it is a shifted Poisson–Lindley-type distribution.

From Eq. (49), we can derive the conditional expectation of X given \(Y=y\) as

$$\begin{aligned} E[X|Y=y]=y+q\frac{y+1}{\theta +p}\,\,\frac{\theta +p+y+2}{\theta +p+y+1}. \end{aligned}$$
(51)

The conditional expectation of X given \(Y=y\) can also be obtained from expression (45) given in Theorem 2 and Eq. (48).
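
A short sketch (arbitrary parameter values) confirms this agreement by evaluating both routes to \(E[X|Y=y]\):

```python
theta, p = 1.5, 0.4  # illustrative values
q = 1 - p

def pmf_Y(y):
    # Eq. (48): marginal p.f. of Y
    return p**y * theta**2 / (theta + 1) * (theta + p + y + 1) / (theta + p)**(y + 2)

def cond_mean_via_45(y):
    # Eq. (45), fed with the marginal p.f. of Y
    return y + q * (y + 1) / p * pmf_Y(y + 1) / pmf_Y(y)

def cond_mean_via_51(y):
    # Eq. (51), the closed form
    return y + q * (y + 1) / (theta + p) * (theta + p + y + 2) / (theta + p + y + 1)

for y in range(4):
    print(y, round(cond_mean_via_45(y), 10), round(cond_mean_via_51(y), 10))
```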

Consequently, from Theorem 2, relations (44) and (51) characterize the joint distribution of (X, Y) with p.g.f. given by Eq. (46).

3.3.2 Bivariate Poisson–Lindley Distribution Defined by Relations (37) and (9)

The p.g.f. of this distribution is

$$\begin{aligned} G_{X,Y}(s,t)=\frac{\theta ^2}{\alpha \theta +1}\,\frac{\alpha (\theta -qs-pst+1)+1}{(\theta -qs- pst+1)^2} \end{aligned}$$

with marginals

$$\begin{aligned} G_X(s)=\frac{\theta ^2}{\alpha \theta +1}\,\frac{\alpha (\theta -s+1)+1}{(\theta -s+1)^2}, \end{aligned}$$

which is a two-parameter Poisson–Lindley distribution discussed in Sect. 2.2, and

$$\begin{aligned} G_Y(t)=\frac{\theta ^2}{\alpha \theta +1}\,\frac{\alpha (\theta +p-pt)+1}{(\theta +p-pt)^2}. \end{aligned}$$

The p.f. of this distribution is

$$\begin{aligned} p(y;\theta ,p,\alpha )=p^y\frac{\theta ^2}{\alpha \theta +1}\,\frac{\alpha (\theta +p)+y+1}{(\theta +p)^{y+2}}, \ \ y=0,1,2,\ldots , \end{aligned}$$
(52)

and its moments can be derived from Eq. (38).

Furthermore, from expressions (43) and (12)

$$\begin{aligned} P(X=x,Y=y)=\frac{x!}{y!(x-y)!}p^yq^{x-y}\,\frac{\theta ^2}{\alpha \theta +1}\,\frac{\alpha (\theta +1)+x+1}{(\theta +1)^{x+2}}, \end{aligned}$$

\(x=0,1,2,\ldots \), \(y=0,1,2,\ldots ,x\).

Also, from Eqs. (40) and (13), an expression for Cov(X, Y) is obtained.

Finally, from Eqs. (45) and (52)

$$\begin{aligned} E[X|Y=y]=y+q\frac{y+1}{\theta +p}\,\,\frac{\alpha (\theta +p)+y+2}{\alpha (\theta +p)+y+1}. \end{aligned}$$

Consequently, from Theorem 2 a characterization of (X, Y) can be obtained.

3.3.3 Bivariate Poisson–Lindley Distributions Defined by Relations (37) and (14)

This distribution has p.g.f. given by

$$\begin{aligned} G_{X,Y}(s,t)=\frac{\theta ^2}{\theta +\alpha }\,\frac{\theta -qs-pst+\alpha +1}{(\theta -qs-pst+1)^2} \end{aligned}$$

with marginals

$$\begin{aligned} G_X(s)=\frac{\theta ^2}{\theta +\alpha }\,\frac{\theta -s+\alpha +1}{(\theta -s+1)^2}, \end{aligned}$$

which is a new generalized Poisson–Lindley distribution examined in Sect. 2.3, and

$$\begin{aligned} G_Y(t)=\frac{\theta ^2}{\theta +\alpha }\,\frac{\theta +p-pt+\alpha }{(\theta +p-pt)^2}. \end{aligned}$$

The p.f. of r.v. Y is

$$\begin{aligned} p(y;\theta ,p,\alpha )=p^y\frac{\theta ^2}{\theta +\alpha }\,\frac{\theta +p+\alpha (y+1)}{(\theta +p)^{y+2}}, \ \ y=0,1,2,\ldots , \end{aligned}$$

and its moments can be obtained from Eq. (38). An expression for Cov(X, Y) is derived from Eqs. (40) and (18).

Utilizing expressions (43) and (17), we obtain

$$\begin{aligned} P(X=x,Y=y)=\frac{x!~}{y!(x-y)!}p^yq^{x-y}\,\frac{\theta ^2}{(\theta +\alpha )}\,\frac{(\theta +1) +\alpha (x+1)}{(\theta +1)^{x+2}}, \end{aligned}$$

\(x=0,1,2,\ldots \), \(y=0,1,2,\ldots ,x\).

Finally,

$$\begin{aligned} E[X|Y=y]=y+q\frac{y+1}{\theta +p}\,\,\frac{\theta +p+\alpha (y+2)}{\theta +p+\alpha (y+1)}. \end{aligned}$$

3.3.4 Bivariate Poisson–Lindley Distributions Defined by Relations (37) and (19)

The p.g.f. of this distribution is

$$\begin{aligned} G_{X,Y}(s,t)=\frac{\theta ^{\alpha +1}}{\theta +1}\,\frac{\theta -qs-pst+2}{(\theta -qs-pst+1)^{\alpha +1}} \end{aligned}$$

with marginals

$$\begin{aligned} G_X(s)=\frac{\theta ^{\alpha +1}}{\theta +1}\,\frac{\theta -s+2}{(\theta -s+1)^{\alpha +1}}, \end{aligned}$$

given by Eq. (20), and

$$\begin{aligned} G_Y(s)=\frac{\theta ^{\alpha +1}}{\theta +1}\,\frac{\theta +p-pt+1}{(\theta +p-pt)^{\alpha +1}}. \end{aligned}$$

The p.f. of the r.v. Y is

$$\begin{aligned} p(y;\theta ,p,\alpha )=\frac{p^y}{y!}\,\frac{{\varGamma }(y+\alpha )}{{\varGamma }(\alpha +1)}\,\frac{\theta ^{\alpha +1}}{\theta +1}\, \frac{\alpha (\theta +p)+y+\alpha }{(\theta +p)^{y+\alpha +1}}, \ \ y=0,1,2,\ldots \;. \end{aligned}$$

In addition,

$$\begin{aligned} E[X|Y=y]=y+q\frac{y+\alpha }{\theta +p}\,\frac{\alpha (\theta +p)+y+\alpha +1}{\alpha (\theta +p)+y+\alpha }. \end{aligned}$$

3.3.5 Bivariate Poisson–Lindley Distributions Defined by Relations (37) and (24)

Some basic characteristics of the distribution are

$$\begin{aligned}&G_{X,Y}(s,t)=\frac{\theta ^2}{\theta +1}\,\frac{(\theta -qs-pst+1)^{\alpha -1}+\theta ^{\alpha -2}}{(\theta -qs-pst+1)^\alpha } \\&G_X(s)=\frac{\theta ^2}{\theta +1}\,\frac{(\theta -s+1)^{\alpha -1}+\theta ^{\alpha -2}}{(\theta -s+1)^\alpha } \\&G_Y(t)=\frac{\theta ^2}{\theta +1}\,\frac{(\theta +p-pt)^{\alpha -1}+\theta ^{\alpha -2}}{(\theta +p-pt)^\alpha } \\&p(y;\theta ,p,\alpha )=\frac{p^y}{y!}\frac{\theta ^2}{\theta +1}\,\frac{y!{\varGamma }(\alpha )(\theta +p)^{\alpha -1}+{\varGamma }(y+\alpha )\theta ^{\alpha -2}}{{\varGamma }(\alpha )(\theta +p)^{y+\alpha }}, \ \ y=0,1,2,\ldots \\&E[X|Y=y]=y+\frac{q}{\theta +p}\,\frac{(y+1)!{\varGamma }(\alpha )(\theta +p)^{\alpha -1}+{\varGamma }(y+\alpha +1)\theta ^{\alpha -2}}{y!{\varGamma }(\alpha )(\theta +p)^{\alpha -1}+{\varGamma }(y+\alpha )\theta ^{\alpha -2}}. \end{aligned}$$

It should be noted that, as expected, for \(\alpha =1\) all relations in Sects. 3.3.2–3.3.4 become their corresponding relations in Sect. 3.3.1. This result also holds for the relations in Sect. 3.3.5 when \(\alpha =2\).

4 Generalized Bivariate Binomial Models

Generalized (or countable mixtures of) bivariate binomial models with respect to their index parameter(s) were studied by Papageorgiou and David [29], and illustrative examples were given.

A bivariate binomial distribution with p.g.f.

$$\begin{aligned} E(s^Xt^Y|N=n)=(qs+pt)^n, \ \ 0<p<1, \ \ q=1-p \end{aligned}$$

where N is a nonnegative integer-valued r.v. with p.g.f.

$$\begin{aligned} E(z^N)=h_N(z) \end{aligned}$$

was introduced by Rao et al. [30] in their effort to study the correlation between the numbers of two types of children X and Y in a family where N is the family size (sibship size).

Consequently, the joint distribution of X and Y is given by the p.g.f.

$$\begin{aligned} G_{X,Y}(s,t)=h_N(qs+pt). \end{aligned}$$
(53)

Applications to actual sets of family size data were given by [29] and [30] when N follows a negative binomial or a Neyman type A distribution. In addition, when N follows a “Short” distribution, a corresponding bivariate model was fitted to accident data by [31].

4.1 Properties

For distributions with p.g.f. given by Eq. (53), we can obtain various properties of the marginal distributions of X and Y and of the joint distribution of (X, Y), in terms of the corresponding properties of the distribution of the r.v. N.

In particular, since

$$\begin{aligned} G_X(s)=h_N(qs+p) \end{aligned}$$

we have

$$\begin{aligned} P(X=x)=\frac{q^x}{x!}h^{(x)}_N(p) \end{aligned}$$

and

$$\begin{aligned} \mu _{[\tau ]:X}=q^\tau \mu _{[\tau ]:N}. \end{aligned}$$

The joint p.f. is

$$\begin{aligned} P(X=x,Y=y)=\left( {\begin{array}{c}x+y\\ y\end{array}}\right) q^xp^yP(N=x+y), \ \ x,y=0,1,2,\ldots \end{aligned}$$
(54)

and an expression for the factorial moments is

$$\begin{aligned} \mu _{[\tau ,k]}=q^\tau p^k\mu _{[\tau +k]:N}, \ \ \tau ,k=0,1,2,\ldots \end{aligned}$$
(55)

where

$$\begin{aligned} \mu _{[\tau ,k]}=E(X^{(\tau )}Y^{(k)}). \end{aligned}$$

Using Eq. (41), we can prove that the conditional p.g.f. of Y given \(X=x\) is

$$\begin{aligned} G_{Y|X=x}(z)=\frac{h^{(x)}_N(pz)}{h^{(x)}_N(p)}. \end{aligned}$$
(56)

Consequently, the conditional probability function of Y given \(X=x\) is

$$\begin{aligned} P(Y=y|X=x)=\frac{p^y(x+y)!}{y!}\,\frac{P(N=x+y)}{h^{(x)}_N(p)} \end{aligned}$$
(57)

and the related conditional factorial moments are

$$\begin{aligned} \mu _{[\tau |x]}=\frac{p^\tau h^{(x+\tau )}_N(p)}{h^{(x)}_N(p)}. \end{aligned}$$

Hence,

$$\begin{aligned} E[Y|X=x]=p\frac{h^{(x+1)}_N(p)}{h^{(x)}_N(p)}. \end{aligned}$$
(58)

The corresponding expressions for

$$\begin{aligned} G_Y(t), \ \ P(Y=y), \ \ \mu _{[k]:Y}, \ \ G_{X|Y=y}(z), \ \ \mu _{[k|y]} \end{aligned}$$

can be easily obtained.
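
The formulas of this subsection are straightforward to implement. A minimal sketch (illustrative parameter values, function names our own) takes N to be the one-parameter Poisson–Lindley distribution of Sect. 2.1, whose p.g.f. derivatives are available in closed form from Eq. (6), and evaluates Eqs. (54) and (58):

```python
from math import comb, factorial

theta, p = 1.5, 0.4  # illustrative values
q = 1 - p

def pmf_N(n):
    # Eq. (7): p.f. of the one-parameter Poisson-Lindley r.v. N
    return theta**2 / (theta + 1) * (theta + n + 2) / (theta + 1)**(n + 2)

def h_deriv(x, s):
    # Eq. (6): x-th derivative of the p.g.f. of N at s
    return factorial(x) * theta**2 / (theta + 1) \
           * (theta - s + x + 2) / (theta - s + 1)**(x + 2)

def joint_pmf(x, y):
    # Eq. (54)
    return comb(x + y, y) * q**x * p**y * pmf_N(x + y)

def cond_mean_Y(x):
    # Eq. (58)
    return p * h_deriv(x + 1, p) / h_deriv(x, p)

print(sum(joint_pmf(x, y) for x in range(150) for y in range(150)))  # close to 1
print([round(cond_mean_Y(x), 6) for x in range(4)])
```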

4.2 Generalized Bivariate Poisson–Lindley Distributions

From Eq. (55) (see also [30]), we have

$$\begin{aligned} E(X)=qE(N), \ \ E(Y)=pE(N), \ \ Var(X)=q[qVar(N)+pE(N)]. \end{aligned}$$

Consequently, the index of dispersion for the r.v. X denoted by \(D_X\) is

$$\begin{aligned} D_X=\frac{Var(X)}{E(X)}=\frac{qVar(N)+pE(N)}{E(N)}=qD_N+p=q(D_N-1)+1 \end{aligned}$$

which is greater than one, since in this section the r.v. N follows a Poisson–Lindley distribution and hence \(D_N>1\). A similar property also holds for \(D_Y\).

In addition, an estimator of the parameter p can be easily obtained by using a simple ratio of the marginal means. That is

$$\begin{aligned} {\widehat{p}}=\frac{\bar{Y}}{\bar{X}+\bar{Y}} \end{aligned}$$

and the remaining parameters in the bivariate Poisson–Lindley models can be estimated by using procedures already employed in their univariate versions.
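
To illustrate, the following sketch (illustrative parameter values) simulates from the model (53) with a one-parameter Poisson–Lindley N, drawn through its Lindley–Poisson mixture representation, and recovers p from the ratio of the sample means while confirming the over-dispersion of both marginals:

```python
import numpy as np

rng = np.random.default_rng(1)
theta, p, m = 1.5, 0.35, 500_000  # illustrative values

# N ~ one-parameter Poisson-Lindley: Poisson with a Lindley(theta) mean,
# the Lindley itself a mixture of Gamma(1, 1/theta) and Gamma(2, 1/theta)
shape = np.where(rng.random(m) < theta / (theta + 1), 1.0, 2.0)
n = rng.poisson(rng.gamma(shape, 1 / theta))

y = rng.binomial(n, p)  # Y | N ~ Binomial(N, p)
x = n - y               # then (X, Y) has p.g.f. h_N(qs + pt), Eq. (53)

print(y.mean() / (x.mean() + y.mean()))        # moment estimate of p
print(x.var() / x.mean(), y.var() / y.mean())  # both dispersion indices exceed 1
```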

4.3 Examples

4.3.1 Bivariate Poisson–Lindley Distributions Defined by Relations (53) and (5)

The p.g.f. of this distribution is

$$\begin{aligned} G_{X,Y}(s,t)=\frac{\theta ^2}{\theta +1}\,\frac{\theta -qs-pt+2}{(\theta -qs-pt+1)^2}. \end{aligned}$$
(59)

Notice that Eq. (59) corresponds to Eq. (34) for \(\varphi _1=q\) and \(\varphi _2=p\). The p.g.f. of the X marginal is

$$\begin{aligned} G_X(s)=\frac{\theta ^2}{\theta +1}\,\frac{\theta +q-qs+1}{(\theta +q-qs)^2} \end{aligned}$$

while the p.g.f. of the Y marginal is given by Eq. (47).

From Eqs. (55) and (8),

$$\begin{aligned} \mu _{[\tau ,k]}=q^\tau p^k(\tau +k)!\frac{1}{\theta ^{\tau +k}}\,\frac{\theta +\tau +k+1}{\theta +1}. \end{aligned}$$

Hence,

$$\begin{aligned} \begin{array}{l} E(X)=q\dfrac{\theta +2}{\theta (\theta +1)} \\ E(XY)=2pq\dfrac{(\theta +3)}{\theta ^2(\theta +1)} \\ Cov(X,Y)=pq\dfrac{\theta ^2+4\theta +2}{\theta ^2(\theta +1)^2}. \end{array} \end{aligned}$$

From Eqs. (54) and (7),

$$\begin{aligned} P(X=x,Y=y)=\frac{(x+y)!}{x!y!}q^xp^y\,\frac{\theta ^2(\theta +x+y+2)}{(\theta +1)^{x+y+3}}, \ \ x,y=0,1,2,\ldots \;. \end{aligned}$$

Simple recurrences for the probabilities are

$$\begin{aligned}&P(X=x+1,Y=y)=\frac{q}{\theta +1}\,\,\frac{x+y+1}{x+1}\,\,\frac{\theta +x+y+3}{\theta +x+y+2} P(X=x,Y=y) \\&P(X=x,Y=y+1)=\frac{p}{\theta +1}\,\,\frac{x+y+1}{y+1}\,\,\frac{\theta +x+y+3}{\theta +x+y+2} P(X=x,Y=y), \\&\quad x,y=0,1,2,\ldots , \end{aligned}$$

with

$$\begin{aligned} P(X=0,Y=0)=\dfrac{\theta ^2(\theta +2)}{(\theta +1)^3} \end{aligned}$$

independent of p. From Eqs. (56) and (6),

$$\begin{aligned} G_{Y|X=x}(z)=\frac{A_x(\theta ,pz)}{A_x(\theta ,p)} \end{aligned}$$

where \(A_x(\theta ,pz)\) can be obtained from Eq. (50).

From Eqs. (57), (6) and (7),

$$\begin{aligned} P(Y=y|X=x)=p^y\frac{(x+y)!}{x!y!}\,\,\frac{(\theta +q)^{x+2}}{(\theta +1)^{x+y+2}}\,\, \frac{\theta +x+y+2}{\theta +q+x+1}. \end{aligned}$$

Finally, from Eqs. (58) and (6)

$$\begin{aligned} E[Y|X=x]=p\frac{x+1}{\theta +q}\,\,\frac{\theta +q+x+2}{\theta +q+x+1}. \end{aligned}$$
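
As a consistency check, this closed form can be compared with the generic expression (58) evaluated through the derivatives of Eq. (6); a minimal sketch (arbitrary parameter values):

```python
from math import factorial

theta, p = 1.5, 0.4  # illustrative values
q = 1 - p

def h_deriv(x, s):
    # Eq. (6): x-th derivative of the p.g.f. of N at s
    return factorial(x) * theta**2 / (theta + 1) \
           * (theta - s + x + 2) / (theta - s + 1)**(x + 2)

for x in range(4):
    generic = p * h_deriv(x + 1, p) / h_deriv(x, p)  # Eq. (58)
    closed = p * (x + 1) / (theta + q) \
             * (theta + q + x + 2) / (theta + q + x + 1)  # formula above
    print(x, round(generic, 10), round(closed, 10))
```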

4.3.2 Bivariate Poisson–Lindley Distributions Defined by Relations (53) and (10)

The p.g.f. of this distribution is

$$\begin{aligned} G_{X,Y}(s,t)=\frac{\theta ^2}{\alpha \theta +1}\,\,\frac{\alpha (\theta -qs-pt+1)+1}{(\theta -qs-pt+1)^2}. \end{aligned}$$

Some other properties of this distribution are

$$\begin{aligned}&G_X(s)=\frac{\theta ^2}{\alpha \theta +1}\,\,\frac{\alpha (\theta +q-qs)+1}{(\theta +q-qs)^2} \\&\mu _{[\tau ,k]}=q^\tau p^k(\tau +k)!\frac{1}{\theta ^{\tau +k}}\,\,\frac{\alpha \theta +\tau +k+1}{(\alpha \theta +1)} \\&E(X)=q\dfrac{\alpha \theta +2}{\theta (\alpha \theta +1)} \end{aligned}$$
$$\begin{aligned} \begin{array}{c} Cov(X,Y)=pq\dfrac{\alpha ^2\theta ^2+4\alpha \theta +2}{\theta ^2(\alpha \theta +1)^2} \\ P(X=x,Y=y)=\dfrac{(x+y)!}{x!y!}q^xp^y\,\dfrac{\theta ^2}{(\alpha \theta +1)} \dfrac{\alpha (\theta +1)+x+y+1}{(\theta +1)^{x+y+2}}, \ \ x,y=0,1,2,\ldots \\ P(Y=y|X=x)=p^y\dfrac{(x+y)!}{x!y!}\,\dfrac{(\theta +q)^{x+2}}{(\theta +1)^{x+y+2}} \dfrac{\alpha (\theta +1)+x+y+1}{\alpha (\theta +q)+x+1} \\ E[Y|X=x]=p\dfrac{x+1}{\theta +q}\,\dfrac{\alpha (\theta +q)+x+2}{\alpha (\theta +q)+x+1}. \end{array} \end{aligned}$$

4.3.3 Bivariate Poisson–Lindley Distributions Defined by Relations (53) and (15)

This model has p.g.f. given by

$$\begin{aligned} G_{X,Y}(s,t)=\frac{\theta ^2}{\theta +\alpha }\,\frac{\theta -qs-pt+\alpha +1}{(\theta -qs-pt+1)^2}. \end{aligned}$$

Other basic properties are

$$\begin{aligned}&G_X(s)=\dfrac{\theta ^2}{\theta +\alpha }\dfrac{\theta +q-qs+\alpha }{(\theta +q-qs)^2}\\&\mu _{[\tau ,k]}=q^\tau p^k(\tau +k)!\dfrac{\theta +\alpha (\tau +k+1)}{\theta ^{\tau +k}(\theta +\alpha )} \\&E(X)=q\dfrac{\theta +2\alpha }{\theta (\theta +\alpha )}\ \\&Cov(X,Y)=pq\dfrac{\theta ^2+4\alpha \theta +2\alpha ^2}{\theta ^2(\theta +\alpha )^2} \\&P(X=x,Y=y)=\dfrac{(x+y)!}{x!y!}q^xp^y\,\dfrac{\theta ^2}{\theta +\alpha } \dfrac{\theta +1+\alpha (x+y+1)}{(\theta +1)^{x+y+2}}, \ \ x,y=0,1,2,\ldots \\&P(Y=y|X=x)=p^y\dfrac{(x+y)!}{x!y!}\dfrac{(\theta +q)^{x+2}}{(\theta +1)^{x+y+2}} \dfrac{\theta +1+\alpha (x+y+1)}{\theta +q+\alpha (x+1)} \\&E[Y|X=x]=p\dfrac{x+1}{\theta +q}\,\dfrac{\theta +q+\alpha (x+2)}{\theta +q+\alpha (x+1)}. \end{aligned}$$

4.3.4 Bivariate Poisson–Lindley Distributions Defined by Relations (53) and (20)

The p.g.f. of this distribution is

$$\begin{aligned} G_{X,Y}(s,t)=\frac{\theta ^{\alpha +1}}{\theta +1}\,\frac{\theta -qs-pt+2}{(\theta -qs-pt+1)^{\alpha +1}}. \end{aligned}$$

The p.g.f. of X is

$$\begin{aligned} G_X(s)=\dfrac{\theta ^{\alpha +1}}{\theta +1}\dfrac{\theta +q-qs+1}{(\theta +q-qs)^{\alpha +1}}. \end{aligned}$$

Some other properties are

$$\begin{aligned} \begin{array}{l} \mu _{[\tau ,k]}=\dfrac{{\varGamma }(\tau +k+\alpha )}{{\varGamma }(\alpha +1)}q^\tau p^k\dfrac{\alpha (\theta +1)+\tau +k}{\theta ^{\tau +k}(\theta +1)} \\ E(X)=q\dfrac{\alpha (\theta +1)+1}{\theta (\theta +1)} \\ Cov(X,Y)=pq\dfrac{\alpha (\theta +1)^2+2\theta +1}{\theta ^2(\theta +1)^2}. \end{array} \end{aligned}$$

Expressions for \(P(X=x,Y=y)\) can be obtained from Eqs. (54) and (22) and for \(P(Y=y|X=x)\) from Eqs. (57), (22) and (21).

Finally,

$$\begin{aligned} E[Y|X=x]=p\frac{x+\alpha }{\theta +q}\,\frac{\alpha (\theta +q)+x+\alpha +1}{\alpha (\theta +q)+x+\alpha }. \end{aligned}$$

4.3.5 Bivariate Poisson–Lindley Distributions Defined by Relations (53) and (25)

The p.g.f. of this distribution is

$$\begin{aligned} G_{X,Y}(s,t)=\frac{\theta ^2}{\theta +1}\,\,\frac{(\theta -qs-pt+1)^{\alpha -1}+\theta ^{\alpha -2}}{(\theta -qs-pt+1)^\alpha }. \end{aligned}$$

Some characteristic properties of this distribution are

$$\begin{aligned} \begin{array}{l} G_X(s)=\dfrac{\theta ^2}{\theta +1}\,\dfrac{(\theta +q-qs)^{\alpha -1}+\theta ^{\alpha -2}}{(\theta +q-qs)^\alpha } \\ \mu _{[\tau ,k]}=q^\tau p^k\dfrac{(\tau +k)!{\varGamma }(\alpha )\theta +{\varGamma }(\tau +k+\alpha )}{\theta ^{\tau +k}(\theta +1){\varGamma }(\alpha )} \\ E(X)=q\dfrac{\theta +\alpha }{\theta (\theta +1)} \\ Cov(X,Y)=pq\dfrac{\theta ^2+\theta \alpha ^2-\theta \alpha +2\theta +\alpha }{\theta ^2(\theta +1)^2}. \end{array} \end{aligned}$$

Finally,

$$\begin{aligned} E[Y|X=x]=\frac{p}{\theta +q}\,\frac{(x+1)!\,{\varGamma }(\alpha )(\theta +q)^{\alpha -1}+{\varGamma }(x+\alpha +1) \theta ^{\alpha -2}}{x!\,{\varGamma }(\alpha )(\theta +q)^{\alpha -1}+{\varGamma }(x+\alpha )\theta ^{\alpha -2}}. \end{aligned}$$

5 Conclusions

In this paper, two families of bivariate Poisson–Lindley distributions are introduced, either by mixing or by generalizing. Each family extends to the bivariate case five univariate Poisson–Lindley models that have already appeared in the literature. We examined a number of characteristics, both for the families and for their individual members. We also indicated that all bivariate models can be useful in analyzing count data, because they contain only two or three parameters. The models derived by the generalization procedure also have the attractive property that \(Z=X+Y\) follows the same distribution as the generalizing variable, a univariate Poisson–Lindley. As Kemp and Papageorgiou [32] pointed out, “this property is often required for consistency in accident models where the split into two time periods is entirely arbitrary.” Obviously, more complicated bivariate Poisson–Lindley models can be derived, but their use may be restricted because of their increased number of parameters.