1 Introduction and background

In recent years there has been a deep interest in proposing new measures of uncertainty, in order to respond to the increasingly diversified needs of researchers in the fields of reliability and risk analysis. At the same time, it is relatively easy to get lost in the vast sea of new notions. To meet these needs, in this paper we propose a new generating function which is able to recover the cumulative residual entropy and the cumulative entropy, as well as their generalized and fractional extensions.

If X is a nonnegative absolutely continuous random variable having support (0, r), with \(r\in (0,+\infty ]\), and probability density function (PDF) f, the differential entropy of X is defined as (see, for instance, Cover and Thomas (1991))

$$\begin{aligned} {\mathcal {H}}(X)= {\mathbb {E}}\left[ -\log f(X)\right] = -\int _{0}^{r} f(x)\log f(x)\,\text {d}x. \end{aligned}$$
(1)

Such a measure can be obtained from the information generating function, defined by Golomb (1966) as

$$\begin{aligned} \mathcal{I}\mathcal{G}_X(\nu ) = {\mathbb {E}}[ (f(X))^{\nu -1} ] = \int _0^r [f(x)]^{\nu } \,\text {d}x, \end{aligned}$$
(2)

where \(\nu \in {\mathbb {R}}\) is such that the right-hand-side of (2) is finite. Indeed, Eqs. (1) and (2) give:

$$\begin{aligned} {\mathcal {H}}(X)= - \frac{\text {d}}{\text {d}\nu } \mathcal{I}\mathcal{G}_X(\nu )\big \vert _{\nu =1}. \end{aligned}$$
(3)

Recent developments and examples of applications of information generating functions can be found in Clark (2020), Kharazmi et al. (2021), and Kharazmi and Balakrishnan (2021).

We remark that the differential entropy can take negative values, whereas the Shannon entropy of a discrete distribution is nonnegative. To avoid this drawback, and for other reasons mentioned in Rao et al. (2004), various alternative measures have been proposed recently. Table 1 shows some information measures for a random variable X having support (0, r), with \(r\in (0,+\infty ]\), and cumulative distribution function (CDF) and survival function (SF) given respectively by

$$\begin{aligned} F(x)={\mathbb {P}}(X\le x), \qquad {\overline{F}}(x)= 1-F(x). \end{aligned}$$

We remark that the entropies presented in Table 1 can also be expressed in terms of the cumulative hazard rate and the cumulative reversed hazard rate of X, defined respectively as

$$\begin{aligned} \Lambda (x)=-\log {\overline{F}}(x), \qquad T(x)=-\log F(x). \end{aligned}$$
(4)

These functions are involved in the cumulative residual entropy introduced in Rao et al. (2004), and in the cumulative entropy (see Di Crescenzo and Longobardi (2009)), given respectively in cases (i) and (ii) of Table 1. Such measures are obtained by replacing the PDF in (1) with the SF and the CDF, respectively. This preserves the fact that the logarithm of the probability of an event represents the information contained in the event, in accordance with the Shannon entropy in the discrete case.

Table 1 Information measures of interest, for a given random lifetime X with support (0, r) where \(r\in (0,+\infty ]\), with \(n\in {\mathbb {N}}_0\) for case (iii), \(n\in {\mathbb {N}}\) for case (iv), \(\nu \ge 0\) for case (v) and \(\nu >0\) for case (vi)

Both the cumulative residual entropy and the cumulative entropy take nonnegative values, vanishing only in the case of degenerate random variables. These measures are particularly suitable for describing information in problems related to reliability theory, where X denotes the random lifetime of an item, and x is the reference time. In particular, in Table 1, the cumulative residual entropy (i) and the generalized versions (iii) and (v) deal with events for which the uncertainty is related to the future, while the cumulative entropies (ii), (iv) and (vi) are suitable to quantify the information when the uncertainty is related to the past. In addition, Asadi and Zohrevand (2007) showed that \(\mathcal {CRE}(X)={\mathbb {E}}[\text {mrl}(X)]\), where mrl(X) is the mean residual life of X. Also, in Di Crescenzo and Longobardi (2009) it is shown that \(\mathcal{C}\mathcal{E}(X)={\mathbb {E}}[{\tilde{\mu }}(X)]\), where \({\tilde{\mu }}(X)\) is the mean inactivity time of X. Moreover, other applications of these information measures can be found in Risk Theory, since the risk is strictly related to the notion of uncertainty, see e.g. Dulac and Simon (2023).

Recently, Psarrakos and Navarro (2013) introduced the generalized cumulative residual entropy of order n of X, defined as in (iii) of Table 1, in order to extend the cumulative residual entropy. A dual information measure, known as generalized cumulative entropy of order n, was proposed by Kayal (2016) (cf. case (iv) of Table 1). Various results on these generalized measures have been studied by Toomaj and Di Crescenzo (2020). In particular, the measures given in (iii) and (iv) of Table 1 play a role in the theory of point processes. Indeed, the generalized cumulative residual entropy of order n, say \(\mathcal {CRE}_n(X)\), is equal to the mean of the \((n+1)\)-th interepoch interval of a non-homogeneous Poisson process having cumulative intensity function given by the first of (4). Similarly, the generalized cumulative entropy of order n can be viewed as an expected spacing in lower record values (see, for instance, Section 6 of Toomaj and Di Crescenzo (2020)). They are also related to the upper and lower record values densities (see, for instance, Kumar and Dangi (2023)).

Fractional versions of the above measures have been studied as well, with the aim of providing more advanced mathematical tools to handle complex systems and anomalous dynamics. Specifically, see Xiong et al. (2019) and Di Crescenzo et al. (2021), for the fractional generalized cumulative residual entropy and the fractional generalized cumulative entropy of X, given respectively in cases (v) and (vi) of Table 1. Certain features of fractional calculus allow these measures to better capture long-range phenomena and nonlocal dependence in some random systems.

The entropies considered in Table 1 deal with nonnegative random variables, since they often refer to random lifetimes of interest in reliability theory. However, they can be straightforwardly extended to the case when X has a general support contained in (l, r), with \(-\infty \le l<r\le +\infty \).

The aim of this paper is to propose and study a new information generating function that, in analogy with the functions in Eqs. (2) and (3), is able to recover the information measures presented in Table 1. It is defined as the integral of the product of suitable powers of the CDF and the SF. Throughout the paper it emerges that the main advantages related to the use of the new generating function are:

  • the convenience of gaining information from both the CDF and the SF of the random variable under investigation,

  • the existence of suitable applications to notions of interest in reliability theory such as proportional hazards, odds function, order statistics, k-out-of-n systems, and stress–strength models for multi-component systems,

  • the possibility of using it as a measure of concentration, since it is an extension of the Gini mean semi-difference.

With reference to the latter statement, we will show also that the proposed generating function can be extended (i) by replacing the powers of the CDF and the SF with suitable distortion functions, and (ii) by defining a weighted version based on a mixture distribution. Moreover, it is worth mentioning that the proposed generating function and its generalized versions can be applied in risk analysis. Indeed, we show that they are proper variability measures.

1.1 Plan of the paper

In Sect. 2 we define the new generating function, named cumulative information generating function. We show that it is useful to recover the measures given in Table 1. Moreover, we illustrate the effect of an affine transformation of the considered random variable, and provide some connections with the proportional hazard model, the proportional reversed hazard model, and the odds function.

In Sect. 3 we use various well-known inequalities in order to obtain some bounds for the cumulative information generating function. In addition, we show how the cumulative information generating function is related (i) to the Euler beta function, and (ii) to the Golomb information generating function of the equilibrium random variable.

In Sect. 4 we discuss the connections with some notions of systems reliability, such as series and parallel systems and k-out-of-n systems, with special attention to the reliability of the multi-component stress–strength system.

In Sect. 5 we introduce the above-mentioned generalized information measures, named ‘q-distorted Gini function’ and ‘weighted q-distorted Gini function’, which are also related to the Gini mean semi-difference. We also prove that they are suitable variability measures, since in particular the dispersive order between pairs of random variables implies the ordering of these functions. An application to the reliability of multi-component stress–strength systems is provided as well.

Section 6 concerns the extension of the cumulative information generating function to the case of a two-dimensional random vector, with special attention to the case of independent components.

Some final remarks are then given in Sect. 7.

Throughout the paper, the terms increasing and decreasing are used in the non-strict sense, \({\mathbb {N}}\) denotes the set of positive integers, and \({\mathbb {N}}_0={\mathbb {N}}\cup \{0\}\). Moreover, given a distribution function F(x), we denote the right-continuous version of its inverse by \(F^{-1}(u)=\sup \{x:F(x)\le u\}\), \(u\in [0,1]\), which is also named the quantile function in the statistical framework.

2 Cumulative information generating function

In the same spirit as Eq. (2), we now introduce a new generating function which allows us to measure the cumulative information coming from both the CDF and the SF.

Definition 2.1

Let X be a random variable with CDF F(x) and SF \({\overline{F}}(x)\), \(x\in {\mathbb {R}}\), and let

$$\begin{aligned} l=\inf \{x\in {\mathbb {R}}: F(x)>0\}, \qquad r=\sup \{x\in {\mathbb {R}}: {\overline{F}}(x)>0\} \end{aligned}$$
(5)

denote respectively the lower and upper limits of the support of X (which may be finite or infinite). The cumulative information generating function (CIGF) of X is defined as

$$\begin{aligned} \begin{aligned} G_X:&\,D_X\subseteq {\mathbb {R}}^{2} \longrightarrow [0,+\infty )\\&\;\;(\alpha ,\beta )\quad \;\longmapsto \;G_X(\alpha ,\beta ) =\int _{l}^{r}\left[ F(x)\right] ^{\alpha }\left[ {\overline{F}}(x)\right] ^{\beta }\,\mathrm{{d}}x \end{aligned} \end{aligned}$$
(6)

where

$$\begin{aligned} D_X=\{(\alpha ,\beta )\in {\mathbb {R}}^2: G_X(\alpha ,\beta ) <+\infty \}. \end{aligned}$$

Clearly, one has \({\mathbb {P}}(l\le X\le r)=1\). Moreover, if X is degenerate then \(G_X(\alpha ,\beta )=0\), otherwise \(G_X(\alpha ,\beta )>0\).
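
Before presenting some examples, we note that (6) is straightforward to evaluate numerically. The following Python sketch is a minimal illustration, assuming the NumPy/SciPy stack; the standard exponential law used as a test case is an arbitrary choice and is not part of the formal development.

```python
import numpy as np
from scipy import integrate, stats

def cigf(cdf, l, r, alpha, beta):
    """Approximate the CIGF (6) by numerical quadrature on (l, r)."""
    integrand = lambda x: cdf(x)**alpha * (1.0 - cdf(x))**beta
    value, _ = integrate.quad(integrand, l, r)
    return value

# Standard exponential test case: the integral of F(x)*(1 - F(x)) equals 1/2
X = stats.expon()
print(cigf(X.cdf, 0.0, np.inf, 1.0, 1.0))   # ~ 0.5
```

Indeed, for the standard exponential distribution one has \(\int _0^{+\infty }(1-e^{-x})e^{-x}\,\text {d}x=1/2\), which the sketch reproduces.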

Example 2.1

Let \(X\sim Erlang(2,\lambda )\), with \(\lambda >0\), and CDF \(F(x)=1-e^{-\lambda x}-\lambda xe^{-\lambda x}\), \(x\ge 0\). From (6), recalling that, for \(\vert x\vert <1\),

$$\begin{aligned} (1+x)^{\alpha }=\sum _{n=0}^{+\infty }\left( {\begin{array}{c}\alpha \\ n\end{array}}\right) x^n, \qquad \hbox {with }\; \left( {\begin{array}{c}\alpha \\ n\end{array}}\right) = \frac{\alpha (\alpha -1) \cdots (\alpha -n+1)}{n!}, \end{aligned}$$
(7)

and taking into account that

$$\begin{aligned} \int _{0}^{+\infty }e^{-(n+\beta )\lambda x}(1+\lambda x)^{n+\beta }\,\text {d}x =\frac{1}{\lambda }\,\Gamma (n+\beta +1,n+\beta )\,e^{(n+\beta )}(n+\beta )^{-(n+\beta +1)}, \end{aligned}$$

with \(D_X=\{(\alpha ,\beta )\in {\mathbb {R}}^2: \beta \in {\mathbb {R}}{\setminus } {\mathbb {Z}}_0^-\}\), we obtain the CIGF of X in series form:

$$\begin{aligned} G_X(\alpha ,\beta )=\frac{1}{\lambda }\sum _{n=0}^{+\infty }\left( {\begin{array}{c}\alpha \\ n\end{array}}\right) (-1)^n\, \Gamma (n+\beta +1,n+\beta )\,e^{n+\beta }\,(n+\beta )^{-(n+\beta +1)}. \end{aligned}$$
(8)
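
The series (8) can be cross-checked numerically against a direct quadrature of (6). The sketch below is illustrative and assumes the NumPy/SciPy stack; the truncation at 150 terms and the values \(\lambda =2\), \(\alpha =1.5\), \(\beta =1\) are arbitrary choices, and each term is evaluated on the log scale to avoid overflow of the Gamma factors.

```python
import numpy as np
from scipy import integrate, special

lam, alpha, beta = 2.0, 1.5, 1.0

# Direct numerical integration of (6) for the Erlang(2, lam) CDF
F = lambda x: 1.0 - np.exp(-lam * x) - lam * x * np.exp(-lam * x)
direct, _ = integrate.quad(lambda x: F(x)**alpha * (1.0 - F(x))**beta, 0, np.inf)

# Series (8), truncated at N terms; each term is computed on the log scale so
# that Gamma(n + beta + 1) does not overflow for moderately large n
def series_cigf(alpha, beta, lam, N=150):
    total = 0.0
    for n in range(N):
        s = n + beta
        coeff = special.binom(alpha, n) * (-1.0)**n          # real binomial coefficient
        log_mag = (np.log(special.gammaincc(s + 1.0, s)) + special.gammaln(s + 1.0)
                   + s - (s + 1.0) * np.log(s))              # log of Gamma(s+1, s) e^s s^(-(s+1))
        total += coeff * np.exp(log_mag)
    return total / lam

print(direct, series_cigf(alpha, beta, lam))                 # the two values should agree closely
```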

We remark that if X is a discrete random variable with finite support \(\{x_{1}\le x_{2}\le \ldots \le x_{n}\}\), then due to (6) the CIGF can be expressed as a sum, i.e.

$$\begin{aligned} G_X(\alpha ,\beta )=\sum _{k=1}^{n-1} \left( P_k\right) ^{\alpha }\left( 1-P_k\right) ^{\beta } (x_{k+1}-x_k), \qquad \hbox {where \ } P_k=\sum _{i=1}^{k}{\mathbb {P}}(X=x_i). \end{aligned}$$
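
A minimal sketch of the sum above (written for illustration only, assuming NumPy) is the following; the fair Bernoulli case, for which \(G_X(\alpha ,\beta )=(1/2)^{\alpha +\beta }\), serves as a quick check.

```python
import numpy as np

def discrete_cigf(xs, probs, alpha, beta):
    """CIGF of a finite discrete distribution via the sum above (xs sorted increasingly)."""
    xs, probs = np.asarray(xs, float), np.asarray(probs, float)
    P = np.cumsum(probs)[:-1]                 # P_k = P(X <= x_k), k = 1, ..., n-1
    return np.sum(P**alpha * (1.0 - P)**beta * np.diff(xs))

# Fair Bernoulli on {0, 1}: G_X(alpha, beta) = (1/2)^(alpha + beta)
print(discrete_cigf([0.0, 1.0], [0.5, 0.5], 2.0, 3.0), 0.5**5)
```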

Other examples will be illustrated below.

In the next theorem we show the effect of an affine transformation. The result follows from Definition 2.1, and recalling the relation between the CDFs of X and \(Y=\gamma X + \delta \).

Theorem 2.1

Let X be a random variable with finite CIGF. Consider the affine transformation \(Y=\gamma X + \delta \), with \(\gamma \in {\mathbb {R}}{\setminus }\{0\}\), \(\delta \in {\mathbb {R}}\). Then

$$\begin{aligned} G_Y(\alpha ,\beta )= {\left\{ \begin{array}{ll} \gamma \,G_X(\alpha ,\beta ), &{} 0<\gamma<\infty \\ \vert \gamma \vert \, G_X(\beta ,\alpha ), &{} -\infty<\gamma <0 \end{array}\right. } \end{aligned}$$
(9)

for \((\alpha ,\beta )\in D_X\) if \(0<\gamma <\infty \), and \((\beta ,\alpha )\in D_X\) if \(-\infty<\gamma <0\).

Remark 2.1

If X is absolutely continuous, with PDF f(x), by setting \(u=F(x)\) in the right-hand-side of Eq. (6), the CIGF of X can be expressed as

$$\begin{aligned} G_X(\alpha ,\beta )=\int _{0}^{1} u^{\alpha }(1-u)^{\beta }\frac{1}{f(F^{-1}(u))}\,\textrm{d}u, \qquad (\alpha ,\beta )\in D_X. \end{aligned}$$
(10)

Remark 2.2

The CIGF of a nonnegative random variable X can be regarded as a measure of concentration. Indeed, if \( X' \) is an independent copy of X, we can deduce that (see, for instance, the proof of Proposition 1 of Rao (2005))

$$\begin{aligned} G_X(1,1)=\int _{l}^{r}F(x){\overline{F}}(x)\,\mathrm{{d}}x= \frac{1}{2}\, {\mathbb {E}}\left[ \vert X-X'\vert \right] =Gini(X). \end{aligned}$$
(11)

This quantity is also known as the Gini mean semi-difference, which represents an example of coherent measure of variability with comonotonic additivity (see Section 2.2 of Hu and Chen (2020)).
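
Relation (11) can be checked by simulation. The sketch below is illustrative, assuming the NumPy/SciPy stack; the exponential distribution with mean 2 is an arbitrary choice. It compares the integral \(\int F\,{\overline{F}}\,\text {d}x\) with a Monte Carlo estimate of \(\frac{1}{2}\,{\mathbb {E}}\vert X-X'\vert \) based on an independent copy.

```python
import numpy as np
from scipy import integrate, stats

rng = np.random.default_rng(0)
X = stats.expon(scale=2.0)                      # illustrative choice

# Left-hand side of (11): integral of F(x) * (1 - F(x))
lhs, _ = integrate.quad(lambda x: X.cdf(x) * X.sf(x), 0, np.inf)

# Right-hand side of (11): (1/2) E|X - X'| estimated with an independent copy
x, x_prime = X.rvs(10**6, random_state=rng), X.rvs(10**6, random_state=rng)
rhs = 0.5 * np.mean(np.abs(x - x_prime))

print(lhs, rhs)                                 # both ~ 1.0 for Exp(scale = 2)
```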

In analogy with the information generating function defined in (2), the following generating measures can be introduced as marginal versions of the CIGF.

Definition 2.2

Under the same assumptions of Definition 2.1, the cumulative information generating measure and the cumulative residual information generating measure are defined respectively by

$$\begin{aligned} H_X(\alpha ) \equiv G_X(\alpha ,0)=\int _{l}^{r}\left[ F(x)\right] ^\alpha \,\mathrm{{d}}x, \qquad \forall \; (\alpha ,0)\in D_X \end{aligned}$$
(12)

and

$$\begin{aligned} K_X(\beta )\equiv G_X(0,\beta )=\int _{l}^{r}\left[ {\overline{F}}(x)\right] ^\beta \,\mathrm{{d}}x, \qquad \forall \; (0,\beta )\in D_X. \end{aligned}$$
(13)

We remark that when X is absolutely continuous with support \((0,\infty )\), the measure (13) has been introduced in Eq. (10) of Kharazmi and Balakrishnan (2023), denoted as \(\mathcal {CIG}_\beta ({\overline{F}})\), for \(\beta >0\). Under these assumptions, other properties and the non-parametric estimation of the function given in Eq. (13) have been studied in Smitha et al. (2023).

Remark 2.3

If X is a random variable with finite CIGF and symmetric CDF, in the sense that for some \(m\in {\mathbb {R}}\) one has \(F(m+x)={\overline{F}}(m-x)\) \( \forall \,x\in {\mathbb {R}}\), then

  1. (i)

    \(G_X(\alpha ,\beta )=G_X(\beta ,\alpha )\) for all \((\alpha ,\beta )\in D_X\);

  2. (ii)

    from Eqs. (12) and (13) we have \(H_X(\alpha )=K_X(\alpha )\) for all \((\alpha ,0)\in D_X\);

  3. (iii)

    under the assumptions of Theorem 2.1, Eq. (9) becomes \( G_Y(\alpha ,\beta )=\vert \gamma \vert \,G_X(\alpha ,\beta )\), \( \gamma \in {\mathbb {R}}\).

Recalling the measures (i) and (ii) of Table 1, now we can show that the cumulative residual entropy and the cumulative entropy can be obtained from the CIGF.

Proposition 2.1

Let X be a random variable having finite CIGF \(G_X(\alpha ,\beta )\), for \((\alpha ,\beta )\in D_X\). If \((0,1)\in D_X\), then

$$\begin{aligned} \mathcal {CRE}(X)= - \frac{\partial }{\partial \beta }G_X(\alpha ,\beta )\Big \vert _{\alpha =0,\beta =1}. \end{aligned}$$

If \((1,0)\in D_X\), then

$$\begin{aligned} \mathcal{C}\mathcal{E}(X)= - \frac{\partial }{\partial \alpha }G_X(\alpha ,\beta )\Big \vert _{\alpha =1,\beta =0}. \end{aligned}$$

Proof

The stated results follow from Eq. (6), by differentiation under the integral sign. \(\square \)

Let us now obtain a similar relation for the generalized cumulative residual entropy and the generalized cumulative entropy (cf. cases (iii) and (iv) of Table 1).

Proposition 2.2

Let X be a random variable having finite CIGF \(G_X(\alpha ,\beta )\), for \((\alpha ,\beta )\in D_X\). If \((0,1)\in D_X\), then

$$\begin{aligned} \mathcal {CRE}_n(X) = \frac{(-1)^n}{n!} \frac{\partial ^n}{\partial \beta ^n}G_X(\alpha ,\beta )\Big \vert _{\alpha =0,\beta =1}, \qquad n\in {\mathbb {N}}_0. \end{aligned}$$
(14)

If \((1,0)\in D_X\), then

$$\begin{aligned} \mathcal{C}\mathcal{E}_n(X) = \frac{(-1)^n}{n!} \frac{\partial ^n}{\partial \alpha ^n}G_X(\alpha ,\beta )\Big \vert _{\alpha =1,\beta =0}, \qquad n\in {\mathbb {N}}. \end{aligned}$$
(15)

Proof

The proof of (15) is analogous to that of Proposition 2.1, making use of the identity:

$$\begin{aligned} \frac{\partial ^n}{\partial \alpha ^n}G_X(\alpha ,\beta ) =\int _{l}^{r}\left( \log F(x) \right) ^n\left[ F(x)\right] ^\alpha \left[ {\overline{F}}(x)\right] ^\beta \,\text {d}x. \end{aligned}$$
(16)

In the same way we obtain Eq. (14). \(\square \)
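
The identity (14) can also be verified numerically. The sketch below is an illustration assuming the NumPy/SciPy stack; the choice \(X\sim \mathrm {Exp}(1)\), the order \(n=2\) and the finite-difference step are arbitrary. It approximates the partial derivative in (14) by central differences and compares it with \(\mathcal {CRE}_2(X)=\frac{1}{2!}\int _0^{+\infty }{\overline{F}}(x)\,[-\log {\overline{F}}(x)]^2\,\text {d}x\), consistently with (16) and case (iii) of Table 1; both quantities equal 1 for the unit exponential distribution.

```python
import numpy as np
from math import factorial
from scipy import integrate, stats

X = stats.expon()                                    # illustrative choice: Exp(1)

def G(alpha, beta):
    """Numerical CIGF (6) of X."""
    val, _ = integrate.quad(lambda x: X.cdf(x)**alpha * X.sf(x)**beta, 0, np.inf)
    return val

# Right-hand side of (14) with n = 2: second partial derivative in beta at
# (alpha, beta) = (0, 1), approximated by a central finite difference
n, h = 2, 1e-3
d2 = (G(0, 1 + h) - 2 * G(0, 1) + G(0, 1 - h)) / h**2
rhs = (-1)**n / factorial(n) * d2

# CRE_2(X) from its direct expression (1/2!) * integral of sf(x) * (-log sf(x))^2;
# the finite upper limit 50 truncates a negligible tail for Exp(1)
lhs, _ = integrate.quad(lambda x: X.sf(x) * (-np.log(X.sf(x)))**n / factorial(n), 0, 50.0)

print(lhs, rhs)                                      # both equal 1 for Exp(1)
```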

In order to extend the above relations to the case of the generalized fractional cumulative residual entropy and the generalized fractional cumulative entropy, given respectively in cases (v) and (vi) of Table 1, let us now recall briefly the expression of the Caputo fractional derivatives (see, for instance, Kilbas et al. (2006)). Specifically, given a function \(y(x_1,x_2)\), we consider

$$\begin{aligned} \left( {}^{C}{}{D_{-,x_1}^\nu y}\right) (x_1,x_2)=\frac{(-1)^n}{\Gamma (n-\nu )}\int _{x_1}^{+\infty }\frac{y^{(n)}(t,x_2)}{(t-x_1)^{\nu +1-n}}\, \text {d}t, \qquad x_1,x_2\in {\mathbb {R}}, \end{aligned}$$

that is the left-sided Caputo partial fractional derivative with respect to \(x_1\) of order \(\nu \) on the whole axis \({\mathbb {R}}\), where \(\nu \in {\mathbb {C}}\) with \(\text {Re}(\nu )>0\), \(\nu \notin {\mathbb {N}}\) and \(n=\lfloor \text {Re}(\nu )\rfloor + 1\).

Proposition 2.3

Let X be a random variable having finite CIGF \(G_X(\alpha ,\beta )\), for \((\alpha ,\beta )\in D_X\). If \((0,1)\in D_X\), then

$$\begin{aligned} \mathcal {CRE}_\nu (X)=\frac{1}{\Gamma (\nu +1)}\left( {}^{C}{}{D_{-,\beta }^\nu G_X}\right) (\alpha ,\beta )\Big \vert _{\alpha =0,\beta =1}, \qquad \nu >0. \end{aligned}$$
(17)

If \((1,0)\in D_X\), then

$$\begin{aligned} \mathcal{C}\mathcal{E}_\nu (X)=\frac{1}{\Gamma (\nu +1)}\left( {}^{C}{}{D_{-,\alpha }^\nu G_X}\right) (\alpha ,\beta )\Big \vert _{\alpha =1,\beta =0}, \qquad \nu >0. \end{aligned}$$
(18)

Proof

We show only the proof of Eq. (18) because Eq. (17) can be derived similarly. From (16) we obtain

$$\begin{aligned} \begin{aligned}&\left( {}^{C}{}{D_{-,\alpha }^\nu G_X}\right) (\alpha ,\beta )\\&\quad \quad =\frac{(-1)^n}{\Gamma (n-\nu )}\int _{\alpha }^{+\infty }\frac{1}{(t-\alpha )^{\nu +1-n}}\frac{\partial ^n}{\partial t^n}G_X(t,\beta )\,\text {d}t\\&\quad \quad =\frac{(-1)^n}{\Gamma (n-\nu )}\int _{\alpha }^{+\infty }\frac{1}{(t-\alpha )^{\nu +1-n}} \left( \int _{l}^{r}\left( \log F(x)\right) ^n\left[ F(x)\right] ^t\left[ {\overline{F}}(x)\right] ^\beta \,\text {d}x\right) \,\text {d}t\\&\quad \quad =\frac{(-1)^n}{\Gamma (n-\nu )}\int _{l}^{r} \left( \log F(x)\right) ^n\left[ {\overline{F}}(x)\right] ^\beta \left( \int _{\alpha }^{+\infty }\frac{\left[ F(x)\right] ^t}{(t-\alpha )^{\nu +1-n}}\,\text {d}t\right) \,\text {d}x, \end{aligned} \end{aligned}$$

where the last equality is obtained by use of Fubini’s theorem. By placing \(t-\alpha =z\) and \(\gamma =-z\log F(x)\), we have

$$\begin{aligned} \begin{aligned} \int _{\alpha }^{+\infty }\frac{\left[ F(x)\right] ^t}{(t-\alpha )^{\nu +1-n}}\, \text {d}t&=\left[ F(x)\right] ^\alpha \int _{0}^{+\infty }\left[ F(x)\right] ^z z^{n-\nu -1} \, \text {d}z\\&=\left[ F(x)\right] ^\alpha \int _{0}^{+\infty }e^{-\gamma }\gamma ^{n-\nu -1} \left( -\log F(x) \right) ^{-n+\nu }\, \text {d}\gamma \\&=\left[ F(x)\right] ^\alpha \Gamma (n-\nu )\left( -\log F(x)\right) ^{-n+\nu }. \end{aligned} \end{aligned}$$

Finally we deduce

$$\begin{aligned} \begin{aligned} \left( {}^{C}{}{D_{-,\alpha }^\nu G_X}\right) (\alpha ,\beta ) =\int _{l}^{r}\left( -\log F(x)\right) ^{\nu }\left[ {\overline{F}}(x)\right] ^\beta \left[ F(x)\right] ^\alpha \,\text {d}x, \end{aligned} \end{aligned}$$

so that Eq. (18) follows by taking \(\alpha =1\) and \(\beta =0\). \(\square \)

Remark 2.4

Recently, Kharazmi and Balakrishnan (2023) noted that \(\mathcal {CRE}(X)=-\frac{\text {d}}{\text {d}\beta }K_X(\beta )\big \vert _{\beta =1}\).

Similarly, we obtain \(\mathcal{C}\mathcal{E}(X)=-\frac{\textrm{d}}{\textrm{d}\alpha }H_X(\alpha )\big \vert _{\alpha =1}\).

Moreover, for the generalized versions and for the fractional versions we have respectively

$$\begin{aligned} \mathcal {CRE}_n(X)= & {} \frac{(-1)^n}{n!}\frac{\textrm{d}^n}{\textrm{d}\beta ^n}K_X(\beta )\Big \vert _{\beta =1}, \qquad \mathcal{C}\mathcal{E}_n(X)=\frac{(-1)^n}{n!}\frac{\textrm{d}^n}{\textrm{d}\alpha ^n}H_X(\alpha )\Big \vert _{\alpha =1},\\ \mathcal {CRE}_\nu (X)= & {} \frac{1}{\Gamma (\nu +1)}\left( {}^{C}{}{D_{-}^\nu K_X}\right) (\beta )\Big \vert _{\beta =1}, \qquad \mathcal{C}\mathcal{E}_\nu (X)=\frac{1}{\Gamma (\nu +1)}\left( {}^{C}{}{D_{-}^\nu H_X}\right) (\alpha )\Big \vert _{\alpha =1}. \end{aligned}$$

Now we recall two important models that are widely adopted in survival analysis and reliability theory. Let X be a random lifetime with CDF F(x) and SF \({\overline{F}}(x)\). The proportional hazard model (see, for instance, Cox 1972; Kumar and Klefsjö 1994) is expressed by a random lifetime \(X^*_\gamma \) with SF

$$\begin{aligned} {\overline{F}}_{X^*_\gamma }(x)=\left[ {\overline{F}}(x)\right] ^\gamma , \qquad \gamma \in {\mathbb {R}}^+. \end{aligned}$$
(19)

Similarly, the proportional reversed hazard model (see for instance Di Crescenzo 2000; Gupta and Gupta 2007; Gupta et al. 1998) is expressed by a random lifetime \({\hat{X}}_\theta \) with CDF

$$\begin{aligned} F_{{\hat{X}}_\theta }(x)=\left[ F(x)\right] ^\theta , \qquad \theta \in {\mathbb {R}}^+. \end{aligned}$$
(20)

Recently, modified versions of these models have been studied by Das and Kayal (2021).

Remark 2.5

The measures given in Definition 2.2 satisfy the following relations.

  1. (i)

    Under the proportional hazard model (19) we have

    $$\begin{aligned} K_{X^*_\gamma }(\beta )=K_X(\gamma \beta ) \qquad \forall \,\gamma \in {\mathbb {R}}^+ \;\; \hbox {s.t.}\;\; (0,\gamma \beta )\in D_X. \end{aligned}$$
  2. (ii)

    Under the proportional reversed hazard model (20) we have

    $$\begin{aligned} H_{{\hat{X}}_\theta }(\alpha )=H_X(\theta \alpha ) \qquad \forall \,\theta \in {\mathbb {R}}^+ \;\; \hbox {s.t.}\;\; (\theta \alpha ,0)\in D_X. \end{aligned}$$

Remark 2.6

If X is a random lifetime such that \(D_X\subseteq ({\mathbb {R}}^+)^2\), then recalling the Definition 2.1 and Eqs. (19) and (20), the CIGF of X can be expressed as

$$\begin{aligned} G_X(\alpha ,\beta )=\int _{l}^{r}F_{{\hat{X}}_\alpha }(x)\,{\overline{F}}_{X^*_\beta }(x)\,\text {d}x. \end{aligned}$$

We now recall another useful concept. Let X be a random variable with CDF and SF denoted by F(x) and \({\overline{F}}(x)\), respectively. For all \(x\in (l,r)\) the odds function of X is (cf. Kirmani and Gupta (2001))

$$\begin{aligned} \theta (x)=\frac{{\overline{F}}(x)}{F(x)}. \end{aligned}$$
(21)

This function represents the ratio of the probability of surviving beyond time x to the probability of failing by time x, and it takes positive finite values on (l, r). It is used in reliability theory because it quantifies how likely failure after time x is relative to failure before time x. Due to Eq. (21), we can express the CIGF of X in terms of the odds function in two equivalent useful ways:

$$\begin{aligned} G_X(\alpha ,\beta )=\int _{l}^{r}[F(x)]^{\alpha +\beta }[\theta (x)]^{\beta }\,\text {d}x =\int _{l}^{r}\left[ \theta (x)\right] ^{-\alpha }\left[ {\overline{F}}(x)\right] ^{\alpha +\beta }\,\text {d}x. \end{aligned}$$

Hence, when the parameters \(\alpha \) and \(\beta \) take (possibly negative) values such that \(\alpha +\beta =0\), the CIGF of X can be expressed in terms of the odds function as

$$\begin{aligned} G_X(-\beta ,\beta )=\int _{l}^{r}[{\theta (x)}]^{\beta }\,\text {d}x \qquad \forall \,\beta \in D_{X,\theta } \end{aligned}$$
(22)

where

$$\begin{aligned} D_{X,\theta }=\{ \beta \in {\mathbb {R}} :G_{X}(-\beta ,\beta ) <+\infty \}. \end{aligned}$$

Table 2 shows various examples of the CIGF expressed in terms of the Euler Beta function \(B(x,y)=\int _{0}^{1}t^{x-1}(1-t)^{y-1}\,\text {d}t\) or the incomplete Beta function \(B\left( p;x,y\right) =\int _{0}^{p}t^{x-1}(1-t)^{y-1}\,\text {d}t\), \(p\in [0,1]\).

Finally, by recalling Eq. (11), we remark that for \(\alpha =\beta =1\) Eq. (8) and the examples in Table 2 are in agreement with Giorgi and Nadarajah (2010).

Table 2 The CIGF for some notable distributions

3 Inequalities and further results

In this section, we obtain some bounds and further results regarding the CIGF. Specifically, we first refer to well-known inequalities named after Chernoff, Bernoulli, Minkowski and Hölder (see, for instance, Schilling (2005)).

Hereafter, thanks to Chernoff’s inequalities, we obtain some bounds for the CIGF in terms of the moment generating function (MGF) of X, denoted by \(M_X(s)={\mathbb {E}}(e^{sX})\), \(s\in {\mathbb {R}}\).

Proposition 3.1

Let X be a nonnegative random variable with support (0, r), where \(r\in (0,+\infty )\), and having finite CIGF \(G_X(\alpha ,\beta )\), for \((\alpha ,\beta )\in D_X\). Assume that the MGF of X satisfies \(M_X(s)<+\infty \) for all \(s\in (-s_0, s_0)\), with \(s_0>0\). Then,

  1. (i)

    for all \(s_1\in (-s_0,0)\), \(s_2\in (0,s_0)\) and \((\alpha ,\beta )\in D_X\cap ({\mathbb {R}}^+)^2\), one has

    $$\begin{aligned} G_X(\alpha ,\beta ) \le g(r; \alpha ,\beta , \textbf{s}) \left[ M_X(s_1)\right] ^\alpha \left[ M_X(s_2)\right] ^\beta , \end{aligned}$$
    (23)
  2. (ii)

    for all \(s_1\in (-s_0,0)\), \(s_2\in (0,s_0)\) and \((\alpha ,\beta )\in D_X\cap ({\mathbb {R}}^-)^2\), one has

    $$\begin{aligned} G_X(\alpha ,\beta )\ge g(r; \alpha ,\beta , \textbf{s}) \left[ M_X(s_1)\right] ^\alpha \left[ M_X(s_2)\right] ^\beta , \end{aligned}$$
    (24)

    where

    $$\begin{aligned} g(r; \alpha ,\beta , \textbf{s})=\left\{ \begin{array} {ll} \displaystyle \frac{1}{\alpha s_1+\beta s_2}\left[ 1-e^{-(\alpha s_1+\beta s_2)r}\right] , &{} \hbox { if }\alpha s_1+\beta s_2\ne 0, \\ r, &{} \hbox { if }\alpha s_1+\beta s_2= 0. \end{array} \right. \end{aligned}$$

Proof

Applying Chernoff’s inequalities \({\mathbb {P}}(X\le x)\le e^{-s_1x}M_X(s_1)\) and \( {\mathbb {P}}(X\ge x)\le e^{-s_2x}M_X(s_2)\), for \(x>0\), to Eq. (6) and integrating, it follows that, for all \(s_1\in (-s_0,0)\), \(s_2\in (0,s_0)\) and \((\alpha ,\beta )\in D_X\cap ({\mathbb {R}}^+)^2\),

$$\begin{aligned} G_X(\alpha ,\beta )\le \int _{0}^{r}\left[ e^{-s_1x}M_X(s_1)\right] ^\alpha \left[ e^{-s_2x}M_X(s_2)\right] ^\beta \,\text {d}x. \end{aligned}$$

A few calculations give Eq. (23). For \((\alpha ,\beta )\in D_X\cap ({\mathbb {R}}^-)^2\), Eq. (24) can be obtained similarly. \(\square \)

The case when X has support \((0,+\infty )\) can be easily derived as follows.

Corollary 3.1

Let X be a nonnegative random variable with support \((0,+\infty )\), and having finite CIGF \(G_X(\alpha ,\beta )\), for \((\alpha ,\beta )\in D_X\). Assume that the MGF of X satisfies \(M_X(s)<+\infty \) for all \(s\in (-s_0, s_0)\), with \(s_0>0\). Then,

  1. (i)

    for all \(s_1\in (-s_0,0)\), \(s_2\in (0,s_0)\) and \((\alpha ,\beta )\in D_X\cap ({\mathbb {R}}^+)^2\) such that \(\alpha s_1+\beta s_2>0\), one has

    $$\begin{aligned} G_X(\alpha ,\beta )\le \frac{1}{\alpha s_1+\beta s_2}\left[ M_X(s_1)\right] ^\alpha \left[ M_X(s_2)\right] ^\beta , \end{aligned}$$
    (25)
  2. (ii)

    for all \(s_1\in (-s_0,0)\), \(s_2\in (0,s_0)\) and \((\alpha ,\beta )\in D_X\cap ({\mathbb {R}}^-)^2\) such that \(\alpha s_1+\beta s_2>0\), one has

    $$\begin{aligned} G_X(\alpha ,\beta )\ge \frac{1}{\alpha s_1+\beta s_2}\left[ M_X(s_1)\right] ^\alpha \left[ M_X(s_2)\right] ^\beta . \end{aligned}$$

The following example provides an application of the previous results.

Example 3.1

Let \(X\sim Erlang(2,\lambda )\), \(\lambda >0\), as in Example 2.1, with MGF \(M_X(s)=\lambda ^2 (\lambda -s)^{-2}\), for \(s<\lambda \). Thanks to Eqs. (8) and (25), some calculations give, for \((\alpha ,\beta )\in ({\mathbb {R}}^+)^2\),

$$\begin{aligned} G_X(\alpha ,\beta ) \le \inf _{\begin{array}{c} (s_1,s_2)\in (-\lambda ,0)\times (0,\lambda )\\ \alpha s_1+\beta s_2>0 \end{array}}\, \frac{1}{\alpha s_1+\beta s_2}\cdot \frac{\lambda ^{2(\alpha +\beta )}}{(\lambda -s_1)^{2\alpha }(\lambda -s_2)^{2\beta }} = \frac{1}{\lambda }\frac{1}{2^{2\beta }}\left( \frac{1+2\beta }{\beta }\right) ^{1+2\beta }. \end{aligned}$$
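
The bound of Example 3.1 can be checked numerically; the following sketch is illustrative, assuming the NumPy/SciPy stack, with \(\lambda =2\), \(\alpha =1.5\), \(\beta =1\) chosen arbitrarily. It compares a direct quadrature of (6) with the closed-form upper bound displayed above.

```python
import numpy as np
from scipy import integrate

lam, alpha, beta = 2.0, 1.5, 1.0

# Exact CIGF of Erlang(2, lam) by numerical integration of (6)
F = lambda x: 1.0 - np.exp(-lam * x) - lam * x * np.exp(-lam * x)
exact, _ = integrate.quad(lambda x: F(x)**alpha * (1.0 - F(x))**beta, 0, np.inf)

# Closed-form Chernoff-type upper bound obtained in Example 3.1
bound = (1.0 / lam) * (1.0 / 2**(2 * beta)) * ((1 + 2 * beta) / beta)**(1 + 2 * beta)

print(exact, bound, exact <= bound)   # the bound dominates the exact value
```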

Let us now express some upper bounds for the CIGF in terms of the measures introduced in Definition 2.2.

Proposition 3.2

Under the assumptions specified in Definition 2.1, the CIGF of a random variable X satisfies the following inequalities:

$$\begin{aligned}{} & {} G_X(\alpha ,\beta )\le K_X(\beta )-\alpha K_X(\beta +1)\qquad \forall \, (\alpha ,\beta )\in D_X\cap [0,1]\times {\mathbb {R}},\\{} & {} G_X(\alpha ,\beta )\le H_X(\alpha )-\beta H_X(\alpha +1)\qquad \forall \,(\alpha ,\beta )\in D_X\cap {\mathbb {R}}\times [0,1]. \end{aligned}$$

Proof

Due to Bernoulli’s inequality with real exponents, for all \(x\in {\mathbb {R}}\) it follows that

$$\begin{aligned} \left[ 1-{\overline{F}}(x)\right] ^\alpha \le 1 -\alpha {\overline{F}}(x) \qquad \forall \, \alpha \in [0,1] \end{aligned}$$

and

$$\begin{aligned} \left[ 1-F(x)\right] ^\beta \le 1-\beta F(x) \qquad \forall \,\beta \in [0,1]. \end{aligned}$$

Hence, the thesis immediately follows from Definitions 2.1 and 2.2. \(\square \)

Hereafter we use Minkowski’s inequality to obtain suitable bounds for the measures introduced in Definition 2.2 and for \(G_X(\gamma ,\gamma )\).

Proposition 3.3

Under the assumptions specified in Definition 2.1 and Definition 2.2, if X has finite support in (l, r),

  1. (i)

    for all \(\gamma \ge 1\) such that \((\gamma ,0)\in D_X\) and \((0,\gamma )\in D_X\) we have

    $$\begin{aligned}{} & {} \left[ \left( r-l\right) ^\frac{1}{\gamma }-\left[ H_X(\gamma )\right] ^\frac{1}{\gamma }\right] ^\gamma \le K_X(\gamma ) \le \left[ \left( r-l\right) ^\frac{1}{\gamma }+\left[ H_X(\gamma )\right] ^\frac{1}{\gamma }\right] ^\gamma ,\\{} & {} \left[ \left( r-l\right) ^\frac{1}{\gamma }-\left[ K_X(\gamma )\right] ^\frac{1}{\gamma }\right] ^\gamma \le H_X(\gamma )\le \left[ \left( r-l\right) ^\frac{1}{\gamma }+\left[ K_X(\gamma )\right] ^\frac{1}{\gamma }\right] ^\gamma ; \end{aligned}$$
  2. (ii)

    for all \(\gamma \ge 1\) such that \((0,\gamma )\in D_X\) we have

    $$\begin{aligned} G_X(\gamma ,\gamma )\le \left[ \left[ K_X(\gamma )\right] ^\frac{1}{\gamma }+\left[ K_X(2\gamma )\right] ^\frac{1}{\gamma }\right] ^\gamma ; \end{aligned}$$
  3. (iii)

    for all \(\gamma \ge 1\) such that \((\gamma ,0)\in D_X\) we have

    $$\begin{aligned} G_X(\gamma ,\gamma )\le \left[ \left[ H_X(\gamma )\right] ^\frac{1}{\gamma }+\left[ H_X(2\gamma )\right] ^\frac{1}{\gamma }\right] ^\gamma . \end{aligned}$$

Proof

By applying Minkowski’s inequality, for \(\gamma \ge 1\) we have

$$\begin{aligned} \begin{aligned} \left[ K_X(\gamma )\right] ^\frac{1}{\gamma } =\left[ \int _{l}^{r}\left[ 1-F(x)\right] ^\gamma \, \text {d}x\right] ^\frac{1}{\gamma }&\le \left( r-l\right) ^\frac{1}{\gamma }+\left[ \int _{l}^{r}\left[ F(x)\right] ^\gamma \, \text {d}x\right] ^\frac{1}{\gamma }\\&=\left( r-l\right) ^\frac{1}{\gamma }+\left[ H_X(\gamma )\right] ^\frac{1}{\gamma }, \end{aligned} \end{aligned}$$

and also

$$\begin{aligned} \begin{aligned} \left( r-l\right) ^\frac{1}{\gamma } =\left[ \int _{l}^{r}\left[ F(x)+{\overline{F}}(x)\right] ^\gamma \, \text {d}x\right] ^\frac{1}{\gamma }&\le \left[ \int _{l}^{r}\left[ F(x)\right] ^\gamma \,\text {d}x\right] ^\frac{1}{\gamma } +\left[ \int _{l}^{r}\left[ {\overline{F}}(x)\right] ^\gamma \, \text {d}x\right] ^\frac{1}{\gamma }\\&=\left[ H_X(\gamma )\right] ^\frac{1}{\gamma }+\left[ K_X(\gamma )\right] ^\frac{1}{\gamma }. \end{aligned} \end{aligned}$$

Combining the two latter inequalities we obtain the bounds for \(K_X(\gamma )\). The other relations can be obtained in the same way, by taking into account that, for \(\gamma \ge 1\), if \((0,\gamma )\in D_X\) then \((0,2\gamma )\in D_X\), and if \((\gamma ,0)\in D_X\) then \((2\gamma ,0)\in D_X\). \(\square \)

We now prove an upper bound for \(G_X(\alpha ,\beta )\), for \(\alpha +\beta =1\), making use of Hölder’s inequality.

Proposition 3.4

Under the assumptions specified in Definition 2.1, let X have finite support in (l, r), with \((\theta ,1-\theta )\in D_X\) for all \(\theta \in (0,1)\). Then,

$$\begin{aligned} G_X(\theta ,1-\theta )\le \left[ r-{\mathbb {E}}(X)\right] ^\theta \left[ {\mathbb {E}}(X)-l\right] ^{1-\theta } \qquad \forall \; \theta \in (0,1). \end{aligned}$$
(26)

Proof

Due to Hölder’s inequality with conjugate exponents \(\frac{1}{\theta },\frac{1}{1-\theta }\), for all \(\theta \in (0,1)\) we have

$$\begin{aligned} G_X(\theta ,1-\theta )\le \left( \int _{l}^{r}F(x)\,\text {d}x\right) ^\theta \left( \int _{l}^{r}{\overline{F}}(x)\,\text {d}x\right) ^{1-\theta }, \end{aligned}$$

thus yielding Eq. (26). \(\square \)

We remark that Eq. (26) is satisfied as equality when \(F(x)={\overline{F}}(x)\) \(\forall \, x\in (l,r)\), i.e. when \({\mathbb {P}}(X=l)={\mathbb {P}}(X=r)=1/2\). Moreover, we note that the right-hand-side of Eq. (26) can be rewritten by taking into account that, under the given assumptions,

$$\begin{aligned} r-{\mathbb {E}}(X)=H_X(1), \qquad {\mathbb {E}}(X)-l=K_X(1). \end{aligned}$$

Hereafter we show that the CIGF of an absolutely continuous random variable can be expressed as the product of the Euler beta function and the expected value of a suitably transformed beta-distributed random variable.

Proposition 3.5

If X is an absolutely continuous random variable with PDF f, CDF F and finite CIGF, then,

$$\begin{aligned} G_X(\alpha ,\beta )=B(\alpha +1,\beta +1)\,{\mathbb {E}}\left[ r(Y)\right] , \end{aligned}$$

where \(Y\sim \textrm{Beta}(\alpha +1,\beta +1)\) is independent of X, with \(r(Y)=[f(F^{-1}(Y))]^{-1}\).

Proof

The thesis is obtained making use of Eq. (10) and recalling the PDF of \(Y\sim \textrm{Beta}(\alpha +1,\beta +1)\). \(\square \)

The above result collects various features of the CIGF, i.e. the relations (i) to the Beta distribution, which reflects the form of the right-hand-side of (6), and (ii) to the transformation \(f(F^{-1}(\cdot ))\), which plays a relevant role in the context of variability measures as developed in Sect. 5.
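
Proposition 3.5 lends itself to a simple Monte Carlo check. In the sketch below (illustrative, assuming the NumPy/SciPy stack; the unit exponential, for which \(f(F^{-1}(u))=1-u\), and the values \(\alpha =1.5\), \(\beta =2\) are arbitrary choices) the expectation \({\mathbb {E}}[r(Y)]\) is estimated by sampling from the Beta distribution.

```python
import numpy as np
from scipy import integrate, special, stats

rng = np.random.default_rng(1)
alpha, beta = 1.5, 2.0
X = stats.expon()                               # illustrative choice: Exp(1)

# Left-hand side: CIGF of X by numerical integration of (6)
lhs, _ = integrate.quad(lambda x: X.cdf(x)**alpha * X.sf(x)**beta, 0, np.inf)

# Right-hand side of Proposition 3.5: B(alpha+1, beta+1) * E[1 / f(F^{-1}(Y))],
# with Y ~ Beta(alpha+1, beta+1); for Exp(1) one has f(F^{-1}(u)) = 1 - u
Y = rng.beta(alpha + 1.0, beta + 1.0, size=10**6)
rhs = special.beta(alpha + 1.0, beta + 1.0) * np.mean(1.0 / (1.0 - Y))

print(lhs, rhs)                                 # the two values should agree closely
```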

We conclude this section by relating the CIGF to a series involving the Golomb information generating function of the equilibrium random variable. To this aim, we recall that for a nonnegative random variable X, with SF \({\overline{F}}(x)\) and expected value \({\mathbb {E}}[X]\in (0, +\infty )\), the equilibrium random variable of X is a nonnegative absolutely continuous random variable, denoted as \(X_e\), whose PDF is given by

$$\begin{aligned} f_e(x)=\frac{{\overline{F}}(x)}{{\mathbb {E}}[X]}, \qquad x>0. \end{aligned}$$
(27)

Recalling Eq. (2), hereafter we denote by \(\mathcal{I}\mathcal{G}_{X_e}\) the Golomb information generating function of the equilibrium random variable \(X_e\).

Proposition 3.6

If X is a nonnegative random variable having expected value \({\mathbb {E}}[X]\in (0, +\infty )\) and with finite CIGF, then

$$\begin{aligned} G_X(\alpha ,\beta )=\sum _{n=0}^{\infty }\left( {\begin{array}{c}\alpha \\ n\end{array}}\right) (-1)^n({\mathbb {E}}[X])^{n+\beta }\,\mathcal{I}\mathcal{G}_{X_e}(n+\beta ) \qquad \forall \; (\alpha ,\beta )\in D_X, \end{aligned}$$

where \(X_e\) is the equilibrium random variable of X.

Proof

Denoting by (0, r) the support of X, with \(r\in (0,+\infty ]\), from Eqs. (6) and (27) it follows that, for all \((\alpha ,\beta )\in D_X\)

$$\begin{aligned} G_X(\alpha , \beta ) =\int _{0}^{r}\left( 1-f_e(x)\,{\mathbb {E}}[X]\right) ^{\alpha }(f_e(x)\,{\mathbb {E}}[X])^{\beta }\;\text {d}x. \end{aligned}$$
(28)

Since \(\left| f_e(x)\,{\mathbb {E}}[X]\right| <1\), due to Eq. (7) we have

$$\begin{aligned} (1-f_e(x)\,{\mathbb {E}}[X])^{\alpha }=\sum _{n=0}^{\infty }\left( {\begin{array}{c}\alpha \\ n\end{array}}\right) (-1)^n\left( f_e(x)\,{\mathbb {E}}[X]\right) ^n. \end{aligned}$$

By replacing the latter equation in (28), the thesis immediately follows from Eq. (2). \(\square \)

4 Connections with systems reliability

In this section we relate some of the results obtained above to notions of interest in reliability theory.

Several applied problems involve complex systems consisting of many components. Here we focus on systems formed by n components, where \(X_1,X_2,\dots ,X_n\) denote the random lifetimes of the components. We assume that they are independent and identically distributed (i.i.d.), with common CDF F(x) and SF \({\overline{F}}(x)\). As is well known, a parallel system continues to work until the last component fails, and thus its lifetime is described by the sample maximum

$$\begin{aligned} X_{(n:n)}=\max \{X_1,X_2,\dots ,X_n\}, \end{aligned}$$

which has CDF

$$\begin{aligned} F_{(n:n)}(x)={\mathbb {P}}(X_{(n:n)}\le x)=\left( F(x)\right) ^n, \qquad x\in {\mathbb {R}}. \end{aligned}$$
(29)

Similarly, a series system fails as soon as the first component stops working, and thus its lifetime is described by the sample minimum

$$\begin{aligned} X_{(1:n)}=\min \{X_1,X_2,\dots ,X_n\}, \end{aligned}$$

that possesses SF

$$\begin{aligned} {\overline{F}}_{(1:n)}(x)={\mathbb {P}}(X_{(1:n)}>x)=\left( {\overline{F}}(x)\right) ^n, \qquad x\in {\mathbb {R}}. \end{aligned}$$
(30)

Remark 4.1

Let \(n\in {\mathbb {N}}\). Recalling Definition 2.2, from Eqs. (29) and (30) it immediately follows that

$$\begin{aligned} H_{X_{(n:n)}}(\alpha )= & {} H_X(n\alpha ), \qquad \forall \; (n\alpha ,0)\in D_X,\\ K_{X_{(1:n)}}(\beta )= & {} K_X(n\beta ), \qquad \forall \; (0,n\beta )\in D_X, \end{aligned}$$

where \(H_X\) and \(K_X\) denote respectively the cumulative information generating measure and the cumulative residual information generating measure of \(X_i\).

We now focus on the expression of the CIGF for order statistics \(X_{(n:n)}\) and \(X_{(1:n)}\).

Proposition 4.1

For \(n\in {\mathbb {N}}\), let \(X_1,X_2,\dots ,X_n\) be a random sample formed by i.i.d. random lifetimes having finite cumulative information generating measure \(H_X\) and cumulative residual information generating measure \(K_X\). Then, the CIGF of the order statistics \(X_{(n:n)}\) and \(X_{(1:n)}\) can be expressed respectively as

$$\begin{aligned} G_{X_{(n:n)}}(\alpha ,\beta ) =\sum _{i=0}^{\infty }(-1)^i\left( {\begin{array}{c}\beta \\ i\end{array}}\right) H_{X}\left( n(i+\alpha )\right) , \qquad \forall \; (\alpha , \beta )\in D_{X_{(n:n)}} \end{aligned}$$
(31)

and

$$\begin{aligned} G_{X_{(1:n)}}(\alpha ,\beta ) =\sum _{j=0}^{\infty }(-1)^j\left( {\begin{array}{c}\alpha \\ j\end{array}}\right) K_{X}\left( n(j+\beta )\right) , \qquad \forall \; (\alpha , \beta )\in D_{X_{(1:n)}}. \end{aligned}$$
(32)

Proof

For simplicity, assume that the support of X is (0, r). Recalling Eq. (6), from Eqs. (29) and (7) we have

$$\begin{aligned} G_{X_{(n:n)}}(\alpha ,\beta )= & {} \int _{0}^{r}\left[ F(x)\right] ^{n\alpha }\left[ 1-[ {F}(x)]^n\right] ^{\beta }\,\text {d}x\\= & {} \sum _{i=0}^{\infty }(-1)^i\left( {\begin{array}{c}\beta \\ i\end{array}}\right) \int _{0}^{r}\left[ {F}(x)\right] ^{n(i+\alpha )}\,\text {d}x. \end{aligned}$$

The right-hand-side of (31) then follows making use of (12). Equation (32) can be obtained similarly. \(\square \)
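
Series (31) can be verified numerically; the sketch below is illustrative, assuming the NumPy/SciPy stack. The standard uniform parent, for which \(H_X(\gamma )=1/(\gamma +1)\), and the truncation at 200 terms are arbitrary choices; the truncated series is compared with a direct quadrature of (6) applied to \(X_{(n:n)}\).

```python
import numpy as np
from scipy import integrate, special

n, alpha, beta = 3, 1.0, 1.5

# Standard uniform parent (illustrative choice): H_X(gamma) = 1 / (gamma + 1)
H = lambda g: 1.0 / (g + 1.0)

# Series (31) for the CIGF of the maximum X_(n:n), truncated at 200 terms
series = sum((-1.0)**i * special.binom(beta, i) * H(n * (i + alpha)) for i in range(200))

# Direct integration of (6) for X_(n:n), whose CDF is x^n on (0, 1)
direct, _ = integrate.quad(lambda x: x**(n * alpha) * (1.0 - x**n)**beta, 0.0, 1.0)

print(series, direct)    # the two values should agree closely
```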

It is well known that a system with n independent components is said to be a k-out-of-n system when it works if and only if at least k components work (see, for instance, Boland and Proschan (1983)). Clearly, if \(k=1\) we have a parallel system, while for \(k=n\) we have a series system. For any k, the lifetime of the k-out-of-n system formed by components with i.i.d. lifetimes is expressed as an order statistic, namely the \((n-k+1)\)-th order statistic \(X_{(n-k+1:n)}\) under the above convention. This allows us to express the reliability and the information content of this kind of system in a tractable way.

Remark 4.2

Consider a k-out-of-n system formed by n components with i.i.d. random lifetimes, for \(n\in {\mathbb {N}}\). Denoting by \(X_{(k:n)}\) the k-th order statistic of the component lifetimes (which, under the above convention, is the lifetime of the \((n-k+1)\)-out-of-n system), the CIGF of \(X_{(k:n)}\) can be expressed in terms of the cumulative information generating measures. Indeed, similarly to Proposition 4.1, recalling Eqs. (12) and (13) one has the following two equivalent expressions

$$\begin{aligned} G_{X_{(k:n)}}(\alpha ,\beta )= & {} \sum _{i=0}^{\infty }(-1)^i\left( {\begin{array}{c}\beta \\ i\end{array}}\right) H_{X_{(k:n)}}(i+\alpha ),\\ G_{X_{(k:n)}}(\alpha ,\beta )= & {} \sum _{j=0}^{\infty }(-1)^j\left( {\begin{array}{c}\alpha \\ j\end{array}}\right) K_{X_{(k:n)}}(j+\beta ), \end{aligned}$$

for all \((\alpha ,\beta )\in D_{X_{(k:n)}}\). Moreover, the mean of \(X_{(k:n)}\) can be expressed in terms of the CIGF of X as

$$\begin{aligned} {\mathbb {E}}\left[ X_{(k:n)}\right] = \sum _{j=0}^{k-1}\left( {\begin{array}{c}n\\ j\end{array}}\right) \int _{l}^{r}\left[ F(x)\right] ^j\left[ {\overline{F}}(x)\right] ^{n-j}\,\mathrm{{d}}x =\sum _{j=0}^{k-1}\left( {\begin{array}{c}n\\ j\end{array}}\right) \,G_X(j,n-j). \end{aligned}$$

Clearly, for \(k=1\) we have \({\mathbb {E}}\left[ X_{(1:n)}\right] =G_X(0,n)=K_X(n)\).
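
The latter representation is easy to check numerically. In the sketch below (illustrative, assuming SciPy; the standard uniform parent, for which \(G_X(j,n-j)=B(j+1,n-j+1)\) and \({\mathbb {E}}[X_{(k:n)}]=k/(n+1)\), is an arbitrary choice) the sum is compared with the known mean of the k-th order statistic.

```python
from scipy import special

n, k = 5, 3

# Standard uniform parent (illustrative choice): G_X(a, b) = B(a + 1, b + 1)
G = lambda a, b: special.beta(a + 1, b + 1)

# Mean of the k-th order statistic via the representation in Remark 4.2
mean_via_cigf = sum(special.comb(n, j) * G(j, n - j) for j in range(k))

print(mean_via_cigf, k / (n + 1))    # for a uniform parent, E[X_(k:n)] = k/(n+1)
```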

4.1 Stress–strength models for multi-component systems

A further connection of the CIGF with systems reliability arises in the analysis of stress–strength models for multi-component systems. Let us consider a system with n components having i.i.d. strengths \(X_1, X_2, \ldots , X_n\) with common CDF F(x). Assume that each component is subject to an independent random stress T having CDF \(F_T(x)\). Moreover, suppose that the system survives if and only if at least k out of the n components have strength greater than the stress (\(1\le k\le n\)). Then, the reliability of the considered multi-component stress–strength system is given by (cf. Bhattacharyya and Johnson (1974))

$$\begin{aligned} R_{k,n}= & {} {\mathbb {P}}[\hbox {at least} \,k\, \hbox {of the} \, (X_1, X_2, \ldots , X_n)\, \hbox {exceed}\, T] \nonumber \\= & {} \sum _{j=k}^n {n\atopwithdelims ()j}\int _{-\infty }^{+\infty } [1-F(t)]^j [F(t)]^{n-j}\, \textrm{d}F_T(t), \end{aligned}$$
(33)

with \(R_{0,n} =1\). See also, for instance, the recent contribution by Kohansal and Shoaee (2021) on the statistical inference of multicomponent stress–strength reliability under suitable censored samples.

For instance, if T is distributed as \(X_1\) then it is not hard to see that

$$\begin{aligned} R_{k,n} = 1-\frac{k}{n+1},\qquad 0\le k\le n. \end{aligned}$$
(34)

The following result is a straightforward consequence of Eqs. (6) and (33).

Proposition 4.2

Let \(X_1, X_2, \ldots , X_n,T\) have common finite support (lr), with \(X_1, X_2, \ldots , X_n\) i.i.d. If T is uniformly distributed over (lr), then

$$\begin{aligned} R_{k,n} =\frac{1}{r-l} \sum _{j=k}^n {n\atopwithdelims ()j}G_X(n-j,j), \qquad 0\le k\le n. \end{aligned}$$

An iterative formula allows us to evaluate the reliability of the multi-component stress–strength system as follows, under the assumptions of Proposition 4.2:

$$\begin{aligned} R_{k+1,n} =R_{k,n} -\frac{1}{r-l} {n\atopwithdelims ()k}G_X(n-k,k), \qquad 0\le k\le n-1. \end{aligned}$$

As an example, if X has the Power\((\theta )\) distribution, then from Proposition 4.2 and Table 2, after a few calculations, one has

$$\begin{aligned} R_{k,n} =\frac{ \Gamma (n + 1)\,\Gamma (n - k + 1 + \frac{1}{\theta })}{\Gamma (n + 1 + \frac{1}{\theta })\,\Gamma (n - k + 1)}, \qquad 0\le k\le n, \quad \theta >0. \end{aligned}$$
(35)

Note that the expression in (35) can be also represented as a ratio of Pochhammer symbols, or as an infinite product (cf. Eq. 8.325.1 of Gradshteyn and Ryzhik (2015)). Clearly, if \(\theta =1\) then Eq. (35) reduces to Eq. (34). Figure 1 shows some plots of \(R_{k,n}\) as given in (35).

Fig. 1

Plots of the reliability of the multi-component stress–strength system with underlying Power\((\theta )\) distribution as in Eq. (35) for \(0\le k\le n\), with \(n=50\) (left) and \(n=100\) (right), with \(\theta =0.1, 0.5, 1, 2, 10\) (from bottom to top)
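
As a numerical check of Eq. (35), the sketch below (illustrative, assuming the NumPy/SciPy stack; the values \(n=10\), \(k=4\), \(\theta =2\) are arbitrary) evaluates \(R_{k,n}\) directly from (33) with a uniform stress, as in Proposition 4.2, and compares it with the closed form.

```python
from scipy import integrate, special

n, k, theta = 10, 4, 2.0

# Direct evaluation of R_{k,n} from (33) with Power(theta) strengths, F(x) = x^theta,
# and a uniform stress on (0, 1), as in Proposition 4.2
direct = sum(
    special.comb(n, j)
    * integrate.quad(lambda x: (1.0 - x**theta)**j * x**(theta * (n - j)), 0.0, 1.0)[0]
    for j in range(k, n + 1)
)

# Closed form (35)
closed = (special.gamma(n + 1) * special.gamma(n - k + 1 + 1 / theta)
          / (special.gamma(n + 1 + 1 / theta) * special.gamma(n - k + 1)))

print(direct, closed)    # the two values should agree closely
```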

5 Generalized Gini functions

This section is devoted to the analysis of a generalized version of the CIGF. Specifically, we aim to extend Definition 2.1 to the case in which the powers appearing in the right-hand side of Eq. (6) are replaced by suitable distortion functions. In this way, recalling Remark 2.2, we obtain an extension of Eq. (11).

Let X be a random variable with CDF F and SF \({\overline{F}}\), and let \(q_i:[0,1]\rightarrow [0,1]\), \(i=1,2\), be two distortion functions, i.e. increasing functions such that \(q_i(0)=0\) and \(q_i(1)=1\) (cf. Section 2.9.2 of Belzunce et al. (2016) and Section 2.4 of Navarro (2022)). In some applications \(q_i\) is required to be continuous or left-continuous; however, these assumptions are not needed in general. The distorted distribution function and the distorted survival function of F through \(q_1\) and \(q_2\) are given respectively by

$$\begin{aligned} F_{q_1}(x)=q_1\left( F(x)\right) , \qquad {\overline{F}}_{q_2}(x)=q_2\left( {{\overline{F}}}(x)\right) , \qquad \forall x\in {\mathbb {R}}. \end{aligned}$$
(36)

From Eq. (36), in general one has \(F_{q_1}(x)+{\overline{F}}_{q_2}(x)\ne 1\), for all \(x\in (l,r)\), unless \(q_1(u)=1-q_2(1-u)\). The functions given in (36) have been introduced in the context of the theory of choice under risk (see Wang (1996)), and are widely used in various applied fields (for instance, see Sordo and Suárez-Llorens (2011) for applications to variability measures).

Let us now consider the aforementioned generalization of the CIGF based on (36).

Definition 5.1

Let X be a random variable with CDF F(x) and SF \({\overline{F}}(x)\), \(x\in {\mathbb {R}}\), and let

$$\begin{aligned} l=\inf \{x\in {\mathbb {R}}: F(x)>0\}, \qquad r=\sup \{x\in {\mathbb {R}}: {\overline{F}}(x)>0\}. \end{aligned}$$

The q-distorted Gini function (or, for short, the q-Gini function) of X is defined as

$$\begin{aligned} {\hat{G}}_X(\textbf{q})=\int _{l}^{r} F_{q_1}(x) {\overline{F}}_{q_2}(x)\,\mathrm{{d}}x, \end{aligned}$$
(37)

where \(\textbf{q}=(q_1,q_2)\), and \(q_i:[0,1]\rightarrow [0,1]\), \(i=1,2\), are distortion functions such that \({\hat{G}}_X(\textbf{q})\) is finite.

Clearly, if the distortion functions are taken as \(q_1(x)=x^\alpha \) and \(q_2(x)=x^\beta \) with \((\alpha ,\beta )\in D_X\), then Eq. (37) corresponds to the definition of the CIGF given in Eq. (6). Specifically, if \(\alpha =\beta =1\) then we recover the Gini mean semi-difference (11).

It is worth mentioning that the q-Gini function may be viewed as an extension of the distorted measures treated in Giovagnoli and Wynn (2012) and in Greselin and Zitikis (2018). In these papers, distortions of the CDF or the SF alone (which can be viewed as generalizations of Eqs. (12) and (13)) are considered for the analysis of stochastic dominance, Lorenz ordering and risk measures.

Hereafter we shall prove various results regarding the q-Gini function. They include the effect of an affine transformation of X and the pointwise ordering of the q-Gini functions. To this aim, we recall that, if X and Y are nonnegative random variables with CDFs F and G, respectively, then X is said to be smaller than Y in the dispersive order, denoted as \(X\le _d Y\), if and only if (see Section 3.B of Shaked and Shanthikumar (2007))

$$\begin{aligned} F^{-1}(v)-F^{-1}(u) \ge G^{-1}(v)- G^{-1}(u)\qquad \hbox {whenever }0<u \le v<1. \end{aligned}$$

Among the variability stochastic orders, the dispersive order is one of the most popular, since it involves quantities that are easily tractable, requiring that the difference between any two quantiles of X is smaller than the corresponding difference for Y. Moreover, if X and Y are absolutely continuous with PDFs f and g, respectively, then

$$\begin{aligned} X\le _d Y \quad \hbox {if and only if} \quad f(F^{-1}(u))\ge g(G^{-1}(u))\quad \forall \, u\in (0,1). \end{aligned}$$
(38)

We can now prove that, under suitable assumptions, the q-Gini function is a variability measure in the sense of Bickel and Lehmann (2012).

Theorem 5.1

Let X and Y be random variables having the same support, and let \(\textbf{q}=(q_1,q_2)\), where \(q_i:[0,1]\rightarrow [0,1]\), \(i=1,2\), be distortion functions such that \({\hat{G}}_X(\textbf{q})\) and \({\hat{G}}_Y(\textbf{q})\) are finite. Then, the following properties hold:

  1. 1.

    \({\hat{G}}_{X+\delta }(\textbf{q})={\hat{G}}_X(\textbf{q})\) for all \(\delta \in {\mathbb {R}}\);

  2. 2.

    \({\hat{G}}_{\gamma X}(\textbf{q})=\gamma {\hat{G}}_X(\textbf{q})\) for all \(\gamma \in {\mathbb {R}}^+\);

  3. 3.

    \({\hat{G}}_X(\textbf{q})=0\) for any degenerate random variable X;

  4. 4.

    \({\hat{G}}_X(\textbf{q})\ge 0\) for any random variable X;

  5. 5.

    \(X\le _d Y\) implies \({\hat{G}}_X(\textbf{q})\le {\hat{G}}_Y(\textbf{q})\).

Proof

Properties 1 and 2 follow by recalling Eq. (37) and the relation between the CDFs of X and \(\gamma X + \delta \). Properties 3 and 4 are guaranteed by Definition 5.1. In analogy with Eq. (10), we can write

$$\begin{aligned} {\hat{G}}_Y(\textbf{q})-{\hat{G}}_X(\textbf{q}) =\int _{0}^{1}q_1(u)q_2(1-u)\left[ \frac{1}{g(G^{-1}(u))}-\frac{1}{f(F^{-1}(u))}\right] \,\text {d}u. \end{aligned}$$

Hence, due to relation (38) the property 5 immediately follows. \(\square \)

In addition, we remark that if \({\hat{G}}_X(\textbf{q})=0\) and if \(q_i(0^+)>0\) and \(q_i(1^-)<1\) for \(i=1,2\), then X is necessarily a degenerate random variable.
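
The following sketch illustrates Definition 5.1 and properties 2 and 5 of Theorem 5.1 numerically. It is an illustration assuming the NumPy/SciPy stack; the distortion pair \(q_1(u)=u^2\), \(q_2(u)=\sqrt{u}\) and the exponential laws are arbitrary choices. Here Y is distributed as 3X, so that \(X\le _d Y\) and \({\hat{G}}_Y(\textbf{q})=3\,{\hat{G}}_X(\textbf{q})\).

```python
import numpy as np
from scipy import integrate, stats

# Illustrative distortion pair: q1(u) = u^2 and q2(u) = sqrt(u)
q1, q2 = lambda u: u**2, lambda u: np.sqrt(u)

def q_gini(cdf, l, r):
    """q-distorted Gini function (37) by numerical quadrature."""
    val, _ = integrate.quad(lambda x: q1(cdf(x)) * q2(1.0 - cdf(x)), l, r)
    return val

X = stats.expon()              # X ~ Exp(1)
Y = stats.expon(scale=3.0)     # Y distributed as 3X, so X <=_d Y

# Property 2: the q-Gini function of 3X is 3 times that of X;
# property 5: X <=_d Y raises the q-Gini function
print(q_gini(X.cdf, 0, np.inf), q_gini(Y.cdf, 0, np.inf))
```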

It is worth mentioning that the results given in this section can be further extended. Indeed, under the same conditions given in Definition 5.1, and following the approach of Giovagnoli and Wynn (2012), we introduce the weighted q-distorted Gini function (or, for short, the weighted q-Gini function) as follows:

$$\begin{aligned} {\hat{G}}_X(\textbf{q},F_T)=\int _{\Delta } F_{q_1}(x) {\overline{F}}_{q_2}(x)\,\mathrm{{d}}F_T(x), \end{aligned}$$
(39)

where \(F_T(x)\) is the CDF of a random variable T, and where the intersection of the supports of X and T is a non-empty set denoted by \(\Delta \).

It is not hard to see that the function given in (39) satisfies properties 1–4 stated in Theorem 5.1 for \({\hat{G}}_X(\textbf{q})\). Concerning property 5, hereafter we see that additional assumptions are needed. Here, \(l_X\), \(r_X\) and \(l_Y\), \(r_Y\) are defined as in (5) for X and Y, respectively.

Theorem 5.2

Let X and Y be random variables having the same support, let \(\textbf{q}=(q_1,q_2)\), where \(q_i:[0,1]\rightarrow [0,1]\), \(i=1,2\), be distortion functions such that \({\hat{G}}_X(\textbf{q},F_T)\) and \({\hat{G}}_Y(\textbf{q},F_T)\) are finite, and let T be absolutely continuous with PDF \(f_T\). If

  1. (i)

    \(f_T(t)\) is increasing in t and \(-\infty <l_X=l_Y\), or if

  2. (ii)

    \(f_T(t)\) is decreasing in t and \( r_X=r_Y<\infty \), then

    $$\begin{aligned} X\le _d Y \quad \text {implies}\quad {\hat{G}}_X(\textbf{q},F_T)\le {\hat{G}}_Y(\textbf{q},F_T). \end{aligned}$$
    (40)

Proof

Due to (39), by setting \(u = F (x)\) one has

$$\begin{aligned} {\hat{G}}_Y(\textbf{q},F_T)-{\hat{G}}_X(\textbf{q},F_T) =\int _{0}^{1}q_1(u)q_2(1-u)\left[ \frac{f_T(G^{-1}(u))}{g(G^{-1}(u))}-\frac{f_T(F^{-1}(u))}{f(F^{-1}(u))}\right] \,\text {d}u. \end{aligned}$$

Then, under assumption (i), from Theorem 3.B.13 of Shaked and Shanthikumar (2007) we have that \(X\le _d Y\) implies \(X\le _{st} Y\), i.e. \(F(x)\ge G(x)\) for all \(x\in {\mathbb {R}}\), so that \(G^{-1}(u)\ge F^{-1}(u)\) for all \(u\in (0,1)\). Relation (40) thus follows from (38), since \(f_T\) is increasing. The same result can be proved similarly under assumption (ii). \(\square \)

An immediate application of Theorem 5.2 can be given to the reliability of multi-component stress–strength systems, as seen in Sect. 4.1. Consider two n-component systems, the first having i.i.d. strengths \(X_1, X_2, \ldots , X_n\) distributed as X, and the second having i.i.d. strengths \(Y_1, Y_2, \ldots , Y_n\) distributed as Y. Assume that each component of both systems is stressed according to an independent random stress T having CDF \(F_T(x)\). We denote by \(R_{k,n}^X\) and \(R_{k,n}^Y\) the reliability of the corresponding multi-component stress–strength systems defined as in (33). We are now able to provide a comparison result based on the weighted q-Gini function.

Theorem 5.3

Let the strengths X and Y have the same support, and let the random stress T be absolutely continuous with PDF \(f_T\). If

  1. (i)

    \(f_T(t)\) is increasing in t and \(-\infty <l_X=l_Y\), or if

  2. (ii)

    \(f_T(t)\) is decreasing in t and \( r_X=r_Y<\infty \), then

    $$\begin{aligned} X\le _d Y \quad \text {implies}\quad R_{k,n}^X\le R_{k,n}^Y \quad \text {for all}\quad 0\le k\le n, \end{aligned}$$

    provided that \(R_{k,n}^X\) and \(R_{k,n}^Y\) are finite.

Proof

The thesis follows recalling Eq. (33) and making use of Theorem 5.2 when the relevant distortions are given by \(q_1(u)=u^{n-j}\) and \(q_2(u)=u^{j}\), with \(k\le j\le n\). \(\square \)

In the last result of this section, thanks to the probabilistic analogue of the mean value theorem, we provide a suitable expression of the weighted q-Gini function in the special case when the related distortion functions are equal to the identity.

Proposition 5.1

Let X be a nondegenerate random variable such that \({\mathbb {E}}(\min \{X,X'\})\) and \({\mathbb {E}}(\max \{X,X'\})\) are finite, where \(X'\) is an independent copy of X. For the weighted q-Gini function introduced in Eq. (39), if T is absolutely continuous with the same support as X, and if \(q_i(u)=u\), \(0\le u\le 1\), for \(i=1,2\), then

$$\begin{aligned} {\hat{G}}_X(\textbf{q},F_T)=\frac{1}{2}\,{\mathbb {E}}\left[ F_T(\max \{X,X'\})-F_T(\min \{X,X'\})\right] . \end{aligned}$$
(41)

Proof

The proof follows making use of Eq. (39) and Theorem 4.1 in Di Crescenzo (1999) extended to the case of a general support of X, by taking \(Z=\Psi (\min \{X,X'\},\max \{X,X'\})\) and \(g(\cdot )=F_T(\cdot )\). \(\square \)

We note that Eq. (41) extends Eq. (11) to this more general setting.
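
Equation (41) can be checked by simulation; the sketch below is illustrative, assuming the NumPy/SciPy stack, with \(X\sim \mathrm {Exp}(1)\) and T exponential with mean 2 chosen arbitrarily. It compares the integral \(\int F\,{\overline{F}}\,\text {d}F_T\) with a Monte Carlo estimate of the right-hand side of (41).

```python
import numpy as np
from scipy import integrate, stats

rng = np.random.default_rng(2)
X, T = stats.expon(), stats.expon(scale=2.0)    # illustrative choices

# Left-hand side of (41) with identity distortions: integral of F(x)*(1 - F(x)) dF_T(x)
lhs, _ = integrate.quad(lambda x: X.cdf(x) * X.sf(x) * T.pdf(x), 0, np.inf)

# Right-hand side of (41) estimated by Monte Carlo with an independent copy X'
x, x_prime = X.rvs(10**6, random_state=rng), X.rvs(10**6, random_state=rng)
rhs = 0.5 * np.mean(T.cdf(np.maximum(x, x_prime)) - T.cdf(np.minimum(x, x_prime)))

print(lhs, rhs)                                 # the two values should agree closely
```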

6 Two-dimensional cumulative information generating function

Let us now extend the analysis of the CIGF to the case of a two-dimensional random vector. In analogy with Definition 2.1, by avoiding trivial degenerate cases, we introduce the following

Definition 6.1

Let (XY) be a random vector with nondegenerate components, having joint CDF and SF given respectively by

$$\begin{aligned} F(x,y)=P(X\le x,Y\le y), \qquad {\overline{F}}(x,y)=P(X>x,Y>y), \qquad (x,y)\in {\mathbb {R}}^2. \end{aligned}$$

We consider the following domain

$$\begin{aligned} {\mathcal {S}}_{(X,Y)}=\{(x,y)\in {\mathbb {R}}^2: F(x,y){\overline{F}}(x,y)>0\}. \end{aligned}$$

The CIGF of (XY) is defined as:

$$\begin{aligned} \begin{aligned} G_{(X,Y)}:&\,D_{(X,Y)}\subseteq {\mathbb {R}}^{2} \longrightarrow (0,+\infty )\\&\qquad (\alpha ,\beta )\;\quad \;\longmapsto \;G_{(X,Y)}(\alpha ,\beta )= \iint _{{\mathcal {S}}_{(X,Y)}}[F(x,y)]^{\alpha }\,[{\overline{F}}(x,y)]^{\beta }\,\mathrm{{d}}x\,\mathrm{{d}}y \end{aligned} \end{aligned}$$

where

$$\begin{aligned} D_{(X,Y)}=\{(\alpha ,\beta )\in {\mathbb {R}}^2: G_{(X,Y)}(\alpha ,\beta )<+\infty \}. \end{aligned}$$

We first discuss a few examples. The first one is motivated by the fact that if \(X\sim \text {Bernoulli}\left( \frac{1}{2}\right) \), then \(G_X(\alpha ,\beta )=\left( \frac{1}{2}\right) ^{\alpha +\beta }\) (cf. Table 2), thus satisfying the symmetry conditions expressed in Remark 2.3.

Example 6.1

Let (X, Y) be a discrete random vector, with probability function

$$\begin{aligned} {\mathbb {P}}(X=x,Y=y)= {\left\{ \begin{array}{ll} \frac{1}{4}+\theta \qquad (x,y)\in \{(0,0),(1,1)\}\\ \frac{1}{4}-\theta \qquad (x,y)\in \{(0,1),(1,0)\}\\ 0\qquad \qquad \text {otherwise} \end{array}\right. } \end{aligned}$$

for \(\theta \in \left( -\frac{1}{4},\frac{1}{4}\right) \). Therefore the CDF and the SF are identical, given by \(F(x,y)={\overline{F}}(x,y)=\frac{1}{4}+\theta \), for \((x,y)\in {\mathcal {S}}_{(X,Y)}=[0,1)^2\). Hence, from Definition 6.1 the CIGF of (X, Y) is

$$\begin{aligned} G_{(X,Y)}(\alpha ,\beta ) =\left( \frac{1}{4}+\theta \right) ^{\alpha +\beta }, \qquad (\alpha ,\beta )\in D_{(X,Y)}={\mathbb {R}}^2. \end{aligned}$$

Example 6.2

Let (X, Y) be an absolutely continuous random vector, uniformly distributed in the triangular domain \({\mathcal {T}}=\left\{ (x,y)\in {\mathbb {R}}^2: 0\le x\le 1,\, 0\le y\le 1-x\right\} \). The PDF, CDF and SF are given respectively by

$$\begin{aligned} f(x,y)=2, \qquad F(x,y)=2xy, \qquad {\overline{F}}(x,y)=(1-x-y)^2, \qquad \hbox {for } (x,y)\in {\mathcal {T}}. \end{aligned}$$

In this case \({\mathcal {S}}_{(X,Y)}={\mathcal {T}}\), so that the CIGF of (X, Y) is

$$\begin{aligned} \begin{aligned} G_{(X,Y)}(\alpha ,\beta )&= \int _{0}^{1}\int _{0}^{1-x}\left( 2xy\right) ^{\alpha }\,\left( 1-x-y\right) ^{2\beta }\,\mathrm{{d}}y\,\mathrm{{d}}x\\&=2^\alpha B(\alpha +1,2\beta +1)\int _{0}^{1}x^\alpha (1-x)^{\alpha +2\beta +1}\,\mathrm{{d}}x\\&=2^\alpha B(\alpha +1,2\beta +1)B(\alpha +1, \alpha +2\beta +2), \end{aligned} \end{aligned}$$

with

$$\begin{aligned} D_{(X,Y)}=\left\{ (\alpha ,\beta )\in {\mathbb {R}}^2: \alpha>-1, \beta >-\frac{1}{2}\right\} . \end{aligned}$$
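The closed form above can be cross-checked against a direct numerical evaluation of the defining double integral. The following minimal sketch, assuming scipy is available, performs the comparison for a few admissible pairs \((\alpha ,\beta )\).

```python
import numpy as np
from scipy.integrate import dblquad
from scipy.special import beta

def cigf_closed(alpha, bta):
    # Closed form obtained in Example 6.2
    return 2**alpha * beta(alpha + 1, 2*bta + 1) * beta(alpha + 1, alpha + 2*bta + 2)

def cigf_numeric(alpha, bta):
    # Direct evaluation of Definition 6.1 over the triangle 0 <= x <= 1, 0 <= y <= 1 - x
    val, _ = dblquad(lambda y, x: (2*x*y)**alpha * (1 - x - y)**(2*bta),
                     0, 1, 0, lambda x: 1 - x)
    return val

for pair in [(0.5, 0.5), (1.0, 2.0), (2.0, 0.25)]:
    print(pair, cigf_closed(*pair), cigf_numeric(*pair))   # the two columns should agree
```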

Example 6.3

Let (X, Y) be an absolutely continuous random vector, distributed on the domain \({\mathcal {Q}}=[0,1]^2\), with PDF, CDF and SF given respectively by, for \((x,y)\in {\mathcal {Q}}\),

$$\begin{aligned} f(x,y)=x+y,\quad F(x,y)=\frac{1}{2} xy(x+y),\quad {\overline{F}}(x,y)=\frac{1}{2}(x-1)(y-1)(x+y+2), \end{aligned}$$

so that in this case \({\mathcal {S}}_{(X,Y)}={\mathcal {Q}}\). Since \(\left| \frac{x+y}{2}\right| <1\) for \((x,y)\in {\mathcal {Q}}\), recalling the expression of the Gauss hypergeometric function (cf. 15.3.1 of Abramowitz and Stegun (1972))

$$\begin{aligned} {}_{2}F_1(a,b,c,z)=\frac{\Gamma (c)}{\Gamma (b)\Gamma (c-b)}\int _{0}^{1}t^{b-1}(1-t)^{c-b-1}(1-tz)^{-a}\,\mathrm{{d}}t, \qquad Re(c)>Re(b)>0, \end{aligned}$$

and making use of 15.3.7 in Abramowitz and Stegun (1972) and of 2.21.1.4 in Prudnikov et al. (1986), the CIGF of (X, Y) is

$$\begin{aligned} \begin{aligned} G_{(X,Y)}(\alpha ,\beta )&=\frac{1}{2^\alpha }\sum _{k=0}^{+\infty }{\beta \atopwithdelims ()k}\frac{1}{2^k}\frac{\Gamma (\beta +1)\Gamma (2\alpha +k+1)}{\Gamma (2+2\alpha +\beta +k)}B(\alpha +1,\beta +1)\\&\quad \times {}_{3}{F_2(-\alpha -k,-1-2\alpha -\beta -k,\alpha +1,-2\alpha -k,-\alpha +\beta +2,-1)} \\&\quad +\frac{1}{2^\alpha }\sum _{k=0}^{+\infty }{\beta \atopwithdelims ()k}\frac{1}{2^k}\frac{\Gamma (\alpha +1)\Gamma (-2\alpha -k-1)}{\Gamma (-\alpha -k)}B(3\alpha +k+2,\beta +1)\\&\quad \times {}_{3}{F_2(\alpha +1,-\beta ,3\alpha +k+2,2+2\alpha +k,3\alpha +\beta +3,-1)} \end{aligned} \end{aligned}$$

with \(D_{(X,Y)}=\{(\alpha ,\beta )\in {\mathbb {R}}^2:\alpha ,\beta \in {\mathbb {R}}{\setminus } {\mathbb {Z}}_0^-\}\), where the function \({}_{3}F_2\) can be found in Prudnikov et al. (1986), for instance.

The following result is an immediate consequence of the definitions involved.

Proposition 6.1

Let (X, Y) be a random vector having finite CIGF. If X and Y are independent, then

$$\begin{aligned} G_{(X,Y)}(\alpha ,\beta )=G_{X}(\alpha ,\beta )\,G_{Y}(\alpha ,\beta ) \qquad \forall \,(\alpha ,\beta )\in D_{(X,Y)}. \end{aligned}$$
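For instance, if X and Y are independent and uniformly distributed on (0, 1), the joint CDF and SF are \(F(x,y)=xy\) and \({\overline{F}}(x,y)=(1-x)(1-y)\), and the marginal CIGF is \(G_X(\alpha ,\beta )=\int _0^1 x^{\alpha }(1-x)^{\beta }\,\mathrm{{d}}x=B(\alpha +1,\beta +1)\). The following sketch, assuming scipy is available, compares the numerically evaluated joint CIGF with \([B(\alpha +1,\beta +1)]^2\).

```python
from scipy.integrate import dblquad
from scipy.special import beta

def joint_cigf(alpha, bta):
    # Definition 6.1 for independent X, Y ~ U(0,1): F(x,y) = xy, SF(x,y) = (1-x)(1-y)
    val, _ = dblquad(lambda y, x: (x*y)**alpha * ((1 - x)*(1 - y))**bta, 0, 1, 0, 1)
    return val

for a, b in [(0.5, 1.5), (2.0, 1.0)]:
    print(joint_cigf(a, b), beta(a + 1, b + 1)**2)   # Proposition 6.1: the two values agree
```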

Consider a nonnegative random vector (X, Y) with support \((0, r_1)\times (0, r_2)\), for \(r_1,r_2\in (0,+\infty ]\). In many practical situations, it is worthwhile to adopt the following information measures for multi-device systems. The joint cumulative residual entropy of (X, Y) is defined as (cf. Rao et al. (2004))

$$\begin{aligned} \mathcal {CRE}(X,Y) =-\int _{0}^{r_1}\text {d}x\int _{0}^{r_2}{\overline{F}}(x,y)\log {\overline{F}}(x,y) \,\text {d}y. \end{aligned}$$
(42)

A dynamic version of this measure has been studied by Rajesh et al. (2014). Similarly, the joint cumulative entropy of (X, Y) is defined as (cf. Di Crescenzo and Longobardi (2009))

$$\begin{aligned} \mathcal{C}\mathcal{E}(X,Y) =-\int _{0}^{r_1}\text {d}x\int _{0}^{r_2}F(x,y)\log F(x,y) \,\text {d}y. \end{aligned}$$
(43)

Remark 6.1

We recall that, if (X, Y) has support \((0, r_1)\times (0, r_2)\), for \(r_1,r_2\in (0,+\infty )\), and if X and Y are independent, then (cf. Proposition 2.2 of Di Crescenzo and Longobardi (2009))

$$\begin{aligned} \mathcal{C}\mathcal{E}(X,Y)=\left[ r_2-{\mathbb {E}}(Y)\right] \mathcal{C}\mathcal{E}(X)+\left[ r_1-{\mathbb {E}}(X)\right] \mathcal{C}\mathcal{E}(Y). \end{aligned}$$
(44)

Similarly, under the same assumptions, from Eq. (42) we obtain

$$\begin{aligned} \mathcal {CRE}(X,Y)={\mathbb {E}}\left( Y\right) \mathcal {CRE}(X)+{\mathbb {E}}\left( X\right) \mathcal {CRE}(Y). \end{aligned}$$
(45)
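For instance, if X and Y are independent and uniform on (0, 1), then \(r_1=r_2=1\), \({\mathbb {E}}(X)={\mathbb {E}}(Y)=\frac{1}{2}\) and \(\mathcal {CRE}(X)=\mathcal{C}\mathcal{E}(X)=\frac{1}{4}\), so that both sides of Eqs. (44) and (45) equal \(\frac{1}{4}\). The sketch below, assuming scipy is available, confirms this numerically.

```python
import numpy as np
from scipy.integrate import dblquad

# Independent X, Y ~ U(0,1): F(x,y) = xy, SF(x,y) = (1-x)(1-y)
xlogx = lambda u: u * np.log(u) if u > 0 else 0.0

cre_joint, _ = dblquad(lambda y, x: -xlogx((1 - x)*(1 - y)), 0, 1, 0, 1)   # Eq. (42)
ce_joint, _  = dblquad(lambda y, x: -xlogx(x*y), 0, 1, 0, 1)               # Eq. (43)

print(cre_joint, 0.5*0.25 + 0.5*0.25)              # Eq. (45): both sides ~ 0.25
print(ce_joint, (1 - 0.5)*0.25 + (1 - 0.5)*0.25)   # Eq. (44): both sides ~ 0.25
```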

Remark 6.2

For the discrete random vector (X, Y) considered in Example 6.1, we have

$$\begin{aligned} \mathcal {CRE}(X)=\mathcal {CRE}(Y)=\mathcal{C}\mathcal{E}(X)=\mathcal{C}\mathcal{E}(Y)=-\frac{1}{2}\log \left( \frac{1}{2}\right) \end{aligned}$$

and

$$\begin{aligned} \mathcal {CRE}(X,Y)=\mathcal{C}\mathcal{E}(X,Y)=-\left( \frac{1}{4}+\theta \right) \log \left( \frac{1}{4}+\theta \right) , \qquad \theta \in \left( -\frac{1}{4},\frac{1}{4}\right) . \end{aligned}$$

Hence, in this case Eqs. (44) and (45) are satisfied if and only if X and Y are independent, i.e. \(\theta =0\).

In analogy with the one-dimensional measures considered in Table 1, we define the generalized and the fractional versions of the measures given in Eqs. (42) and (43).

Definition 6.2

Let (X, Y) be a nonnegative random vector with support \((0,r_1)\times (0,r_2)\), where \(r_1,r_2\in (0,+\infty ]\). The generalized cumulative residual entropy of order n of (X, Y) is defined as

$$\begin{aligned} \mathcal {CRE}_n(X,Y) =\frac{1}{n!}\int _{0}^{r_1}\mathrm{{d}}x\int _{0}^{r_2}{\overline{F}}(x,y)\left[ -\log {\overline{F}}(x,y)\right] ^n \mathrm{{d}}y, \qquad n\in {\mathbb {N}}_0, \end{aligned}$$

while the generalized cumulative entropy of order n of (X, Y) is defined as

$$\begin{aligned} \mathcal{C}\mathcal{E}_n(X,Y)=\frac{1}{n!}\int _{0}^{r_1}\mathrm{{d}}x\int _{0}^{r_2}F(x,y)\left[ -\log F(x,y)\right] ^n \mathrm{{d}}y, \qquad n\in {\mathbb {N}}. \end{aligned}$$

Definition 6.3

Let (X, Y) be a nonnegative random vector with support \((0,r_1)\times (0,r_2)\), with \(r_1,r_2\in (0,+\infty ]\). The fractional cumulative residual entropy of (X, Y) is defined as

$$\begin{aligned} \mathcal {CRE}_\nu (X,Y) =\frac{1}{\Gamma (\nu +1)}\int _{0}^{r_1}\mathrm{{d}}x\int _{0}^{r_2}{\overline{F}}(x,y)\left[ -\log {\overline{F}}(x,y)\right] ^\nu \mathrm{{d}}y, \qquad \nu \ge 0 \end{aligned}$$

whereas the fractional cumulative entropy of (X, Y) is defined as

$$\begin{aligned} \mathcal{C}\mathcal{E}_\nu (X,Y) =\frac{1}{\Gamma (\nu +1)}\int _{0}^{r_1}\mathrm{{d}}x\int _{0}^{r_2}F(x,y)\left[ -\log F(x,y)\right] ^\nu \mathrm{{d}}y, \qquad \nu >0. \end{aligned}$$
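As a numerical illustration, the following sketch (assuming scipy is available) evaluates the measures of Definition 6.3 for the random vector of Example 6.3 and a few values of \(\nu \); for \(\nu =1\) the printed values coincide with the joint measures in Eqs. (42) and (43).

```python
import numpy as np
from math import gamma
from scipy.integrate import dblquad

# Joint CDF and SF of the random vector of Example 6.3, with support (0,1) x (0,1)
F  = lambda x, y: 0.5 * x * y * (x + y)
SF = lambda x, y: 0.5 * (1 - x) * (1 - y) * (x + y + 2)

def fractional_measure(H, nu):
    # Definition 6.3 with H = SF (fractional CRE) or H = F (fractional CE)
    integrand = lambda y, x: H(x, y) * (-np.log(H(x, y)))**nu if H(x, y) > 0 else 0.0
    val, _ = dblquad(integrand, 0, 1, 0, 1)
    return val / gamma(nu + 1)

for nu in (0.5, 1.0, 2.5):
    print(nu, fractional_measure(SF, nu), fractional_measure(F, nu))
```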

In analogy with Propositions 2.1, 2.2 and 2.3, we now propose the following generalizations, whose proofs are analogous.

Proposition 6.2

Let (X, Y) be a random vector with finite CIGF \(G_{(X,Y)}(\alpha ,\beta )\), for \((\alpha ,\beta )\in D_{(X,Y)}\), and let \({{\mathcal {S}}_{(X,Y)}}=(0,r_1)\times (0,r_2)\), where \(r_1,r_2\in (0,+\infty ]\). If \((0,1)\in D_{(X,Y)}\), then

$$\begin{aligned} \mathcal {CRE}(X,Y)&= -\frac{\partial }{\partial \beta }G_{(X,Y)}(\alpha ,\beta )\Big \vert _{\alpha =0,\beta =1},\\ \mathcal {CRE}_n(X,Y)&= \frac{(-1)^n}{n!} \frac{\partial ^n}{\partial \beta ^n}G_{(X,Y)}(\alpha ,\beta )\Big \vert _{\alpha =0,\beta =1}, \qquad n\in {\mathbb {N}}_0,\\ \mathcal {CRE}_\nu (X,Y)&= \frac{1}{\Gamma (\nu +1)}\left( {}^{C}D_{-,\beta }^{\nu } G_{(X,Y)}\right) (\alpha ,\beta )\Big \vert _{\alpha =0,\beta =1}, \qquad \nu > 0. \end{aligned}$$

If \((1,0)\in D_{(X,Y)}\), then

$$\begin{aligned} \mathcal{C}\mathcal{E}(X,Y)&= -\frac{\partial }{\partial \alpha }G_{(X,Y)}(\alpha ,\beta )\Big \vert _{\alpha =1,\beta =0},\\ \mathcal{C}\mathcal{E}_n(X,Y)&= \frac{(-1)^n}{n!} \frac{\partial ^n}{\partial \alpha ^n}G_{(X,Y)}(\alpha ,\beta )\Big \vert _{\alpha =1,\beta =0}, \qquad n\in {\mathbb {N}},\\ \mathcal{C}\mathcal{E}_\nu (X,Y)&= \frac{1}{\Gamma (\nu +1)}\left( {}^{C}D_{-,\alpha }^{\nu } G_{(X,Y)}\right) (\alpha ,\beta )\Big \vert _{\alpha =1,\beta =0}, \qquad \nu >0. \end{aligned}$$
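As a numerical check of the first identities above, the following sketch (assuming scipy is available) evaluates the CIGF of the random vector of Example 6.3 near the points \((0,1)\) and \((1,0)\), approximates the partial derivatives by central differences, and compares them with the direct integrals in Eqs. (42) and (43).

```python
import numpy as np
from scipy.integrate import dblquad

# Joint CDF and SF of the random vector of Example 6.3, with support (0,1) x (0,1)
F  = lambda x, y: 0.5 * x * y * (x + y)
SF = lambda x, y: 0.5 * (1 - x) * (1 - y) * (x + y + 2)

def cigf(alpha, bta):
    # Definition 6.1 evaluated numerically on (0,1)^2
    val, _ = dblquad(lambda y, x: F(x, y)**alpha * SF(x, y)**bta, 0, 1, 0, 1)
    return val

h = 1e-4
cre_from_G = -(cigf(0, 1 + h) - cigf(0, 1 - h)) / (2*h)   # -dG/dbeta at (0,1)
ce_from_G  = -(cigf(1 + h, 0) - cigf(1 - h, 0)) / (2*h)   # -dG/dalpha at (1,0)

xlogx = lambda u: u * np.log(u) if u > 0 else 0.0
cre_direct, _ = dblquad(lambda y, x: -xlogx(SF(x, y)), 0, 1, 0, 1)   # Eq. (42)
ce_direct, _  = dblquad(lambda y, x: -xlogx(F(x, y)), 0, 1, 0, 1)    # Eq. (43)

print(cre_from_G, cre_direct)   # should agree up to the finite-difference error
print(ce_from_G, ce_direct)
```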

In analogy with Eq. (21), if (X, Y) is a random vector with CDF F(x, y) and SF \({\overline{F}}(x,y)\), for all \((x,y)\in {\mathcal {S}}_{(X,Y)}\) we can introduce the two-dimensional odds function as

$$\begin{aligned} \theta (x,y)=\frac{{\overline{F}}(x,y)}{F(x,y)}. \end{aligned}$$
(46)

Hence, the two-dimensional extension of Eq. (22) can be given in terms of the function in (46), so that

$$\begin{aligned} G_{(X,Y)}(-\beta ,\beta )=\iint _{{\mathcal {S}}_{(X,Y)}}[\theta (x,y)]^{\beta }\,\text {d}x\,\text {d}y \end{aligned}$$

for all \(\beta \in {\mathbb {R}}\) such that the right-hand-side is finite.

7 Concluding remarks

In this paper, we defined the cumulative information generating function and suitable distortions-based extensions of it. The CIGF is noteworthy in the context of information measures, since it makes it possible to recover the cumulative residual entropy and the cumulative entropy of a given probability distribution, including their generalized and fractional forms.

Several results, properties and bounds have been studied, also with reference to symmetry properties and relations with the equilibrium density. Moreover, we showed that the functions considered are variability measures which extend the Gini mean semi-difference.

In the realm of reliability theory, the CIGF and its extensions have proved useful for studying properties of k-out-of-n systems, with special regard to the reliability of multi-component stress–strength systems.

Future developments may concern connections with other notions, such as the information divergence (see Toulias and Kitsos (2021)) and the Rényi entropy (see, for instance, Buryak and Mishura (2021)), as well as possible applications and distortions-based extensions of the two-dimensional CIGF introduced in Sect. 6.