Abstract
We introduce and study the cumulative information generating function, which provides a unifying mathematical tool suitable to deal with classical and fractional entropies based on the cumulative distribution function and on the survival function. Specifically, after establishing its main properties and some bounds, we show that it is a variability measure itself that extends the Gini mean semi-difference. We also provide (i) an extension of such a measure, based on distortion functions, and (ii) a weighted version based on a mixture distribution. Furthermore, we explore some connections with the reliability of k-out-of-n systems and with stress–strength models for multi-component systems. Also, we address the problem of extending the cumulative information generating function to higher dimensions.
1 Introduction and background
In recent years there has been considerable interest in proposing new measures of uncertainty, in response to the increasingly diversified needs of researchers in the fields of reliability and risk analysis. At the same time, it is relatively easy to get lost in the vast sea of new notions. To address both issues, in this paper we propose a new generating function which is able to recover the cumulative residual entropy and the cumulative entropy, as well as their generalized and fractional extensions.
If X is a nonnegative absolutely continuous random variable having support (0, r), with \(r\in (0,+\infty ]\), and probability density function (PDF) f, the differential entropy of X is defined as (see, for instance, Cover and Thomas (1991))
Such a measure can be obtained from the information generating function, defined by Golomb (1966) as
where \(\nu \in {\mathbb {R}}\) is such that the right-hand-side of (2) is finite. Indeed, Eqs. (1) and (2) give:
Recent developments and examples of applications of information generating functions can be found in Clark (2020), Kharazmi et al. (2021), and Kharazmi and Balakrishnan (2021).
We remark that the differential entropy can take negative values, whereas the Shannon entropy of a discrete distribution is nonnegative. To avoid this drawback, and for other reasons as mentioned in Rao et al. (2004), various alternative measures have been proposed recently. Table 1 shows some information measures for a random variable X having support (0, r), with \(r\in (0,+\infty ]\), having respectively cumulative distribution function (CDF) and survival function (SF) given by
We remark that the entropies presented in Table 1 can also be expressed in terms of the cumulative hazard rate and the cumulative reversed hazard rate of X, defined respectively as
These functions are involved in the cumulative residual entropy introduced in Rao et al. (2004), and in the cumulative entropy (see Di Crescenzo and Longobardi (2009)), given respectively in cases (i) and (ii) of Table 1. Such measures are obtained by replacing the PDF in (1) with the SF and the CDF, respectively. This preserves the fact that the logarithm of the probability of an event represents the information contained in the event, in accordance with the Shannon entropy in the discrete case.
Both the cumulative residual entropy and the cumulative entropy take nonnegative values, vanishing only in the case of degenerate random variables. These measures are particularly suitable for describing information in problems related to reliability theory, where X denotes the random lifetime of an item, and x is the reference time. In particular, in Table 1, the cumulative residual entropy (i) and the generalized versions (iii) and (v) deal with events for which the uncertainty is related to the future, while the cumulative entropies (ii), (iv) and (vi) are suitable to quantify the information when the uncertainty is related to the past. In addition, Asadi and Zohrevand (2007) showed that \(\mathcal {CRE}(X)={\mathbb {E}}[\text {mrl}(X)]\), where mrl(X) is the mean residual life of X. Also, in Di Crescenzo and Longobardi (2009) it is shown that \(\mathcal{C}\mathcal{E}(X)={\mathbb {E}}[{\tilde{\mu }}(X)]\), where \({\tilde{\mu }}(X)\) is the mean inactivity time of X. Moreover, other applications of these information measures can be found in Risk Theory, since the risk is strictly related to the notion of uncertainty, see e.g. Dulac and Simon (2023).
Recently, Psarrakos and Navarro (2013) introduced the generalized cumulative residual entropy of order n of X, defined as in (iii) of Table 1, in order to extend the cumulative residual entropy. A dual information measure, known as generalized cumulative entropy of order n, was proposed by Kayal (2016) (cf. case (iv) of Table 1). Various results on these generalized measures have been studied by Toomaj and Di Crescenzo (2020). In particular, the measures given in (iii) and (iv) of Table 1 play a role in the theory of point processes. Indeed, the generalized cumulative residual entropy of order n, say \(\mathcal {CRE}_n(X)\), is equal to the mean of the \((n+1)\)-th interepoch interval of a non-homogeneous Poisson process having cumulative intensity function given by the first of (4). Similarly, the generalized cumulative entropy of order n can be viewed as an expected spacing in lower record values (see, for instance, Section 6 of Toomaj and Di Crescenzo (2020)). They are also related to the upper and lower record values densities (see, for instance, Kumar and Dangi (2023)).
Fractional versions of the above measures have been studied as well, with the aim of providing more advanced mathematical tools to handle complex systems and anomalous dynamics. Specifically, see Xiong et al. (2019) and Di Crescenzo et al. (2021) for the fractional generalized cumulative residual entropy and the fractional generalized cumulative entropy of X, given respectively in cases (v) and (vi) of Table 1. Certain features of fractional calculus allow these measures to better capture long-range phenomena and nonlocal dependence in some random systems.
The entropies considered in Table 1 deal with nonnegative random variables, since they often refer to random lifetimes of interest in reliability theory. However, they can be straightforwardly extended to the case when X has a general support contained in (l, r), with \(-\infty \le l<r\le +\infty \).
The aim of this paper is to propose and study a new information generating function that, in analogy with the functions in Eqs. (2) and (3), is able to recover the information measures presented in Table 1. It is defined as the integral of the product between suitable powers of the CDF and the SF. Throughout the paper it emerges that some advantages related to the use of the new generating function are:
-
the convenience of gaining information from both the CDF and the SF of the random variable under investigation,
-
the existence of suitable applications to notions of interest in reliability theory such as proportional hazards, odds function, order statistics, k-out-of-n systems, and stress–strength models for multi-component systems,
-
the possibility of using it as a measure of concentration, since it is an extension of the Gini mean semi-difference.
With reference to the latter statement, we will show also that the proposed generating function can be extended (i) by replacing the powers of the CDF and the SF with suitable distortion functions, and (ii) by defining a weighted version based on a mixture distribution. Moreover, it is worth mentioning that the proposed generating function and its generalized versions can be applied in risk analysis. Indeed, we show that they are proper variability measures.
1.1 Plan of the paper
In Sect. 2 we define the new generating function, named cumulative information generating function. We show that it is useful to recover the measures given in Table 1. Moreover, we illustrate the effect of an affine transformation of the considered random variable, and provide some connections with the proportional hazard model, the proportional reversed hazard model, and the odds function.
In Sect. 3 we use various well-known inequalities in order to obtain some bounds for the cumulative information generating function. In addition, we show how the cumulative information generating function is related (i) to the Euler beta function, and (ii) to the Golomb’s information generating function of the equilibrium random variable.
In Sect. 4 we discuss the connections with some notions of systems reliability, as series and parallel systems, and k-out-of-n systems, also with special attention to the reliability of the multi-component stress–strength system.
In Sect. 5 we introduce the above mentioned generalized information measures, named ‘q-distorted Gini function’ and ‘weighted q-distorted Gini function’, being related also to the Gini mean semi-difference. We also prove that they are suitable variability measures, since in particular the dispersive order between pairs of random variables implies the ordering between these functions. An application to the reliability of multi-component stress–strength systems is provided, too.
Section 6 concerns the extension of the cumulative information generating function to the case of a two-dimensional random vector, with special attention to the case of independent components.
Some final remarks are then given in Sect. 7.
Throughout the paper, the terms increasing and decreasing are used in non-strict sense, \({\mathbb {N}}\) denotes the set of positive integers, and \({\mathbb {N}}_0={\mathbb {N}}\cup \{0\}\). Moreover, given a distribution function F(x), we denote the right-continuous version of its inverse by \(F^{-1}(u)=\sup \{x:F(x)\le u\}\), \(u\in [0,1]\), which is also called the quantile function in the statistical framework.
2 Cumulative information generating function
In the same spirit as Eq. (2), we now introduce a new generating function which allows one to measure the cumulative information coming from both the CDF and the SF.
Definition 2.1
Let X be a random variable with CDF F(x) and SF \({\overline{F}}(x)\), \(x\in {\mathbb {R}}\), and let
denote respectively the lower and upper limits of the support of X (which may be finite or infinite). The cumulative information generating function (CIGF) of X is defined as
where
Clearly, one has \({\mathbb {P}}(l\le X\le r)=1\). Moreover, if X is degenerate then \(G_X(\alpha ,\beta )=0\), otherwise \(G_X(\alpha ,\beta )>0\).
Example 2.1
Let \(X\sim Erlang(2,\lambda )\), with \(\lambda >0\), and CDF \(F(x)=1-e^{-\lambda x}-\lambda xe^{-\lambda x}\), \(x\ge 0\). From (6), recalling the generalized binomial series (7), valid for all \(x\) with \(\vert x\vert <1\),
and taking into account that
with \(D_X=\{(\alpha ,\beta )\in {\mathbb {R}}^2: \beta \in {\mathbb {R}}{\setminus } {\mathbb {Z}}_0^-\}\), we obtain the CIGF of X in series form:
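Since the series form of the CIGF is displayed above only implicitly, it may help to check the definition (6) numerically. The sketch below is our own (the function name `cigf` and the midpoint-rule scheme are not from the paper); for \(\alpha =\beta =1\) a direct calculation for the \(Erlang(2,\lambda )\) law gives \(G_X(1,1)=3/(4\lambda )\), which the quadrature reproduces.

```python
import math

def cigf(F, alpha, beta, upper, n=200000):
    # Numerical CIGF per Eq. (6): G(alpha, beta) = ∫ F(x)^alpha (1 - F(x))^beta dx on (0, upper)
    h = upper / n
    total = 0.0
    for i in range(n):
        x = (i + 0.5) * h          # midpoint rule
        u = F(x)
        total += (u ** alpha) * ((1.0 - u) ** beta) * h
    return total

lam = 1.0
F_erlang = lambda x: 1.0 - math.exp(-lam * x) - lam * x * math.exp(-lam * x)

# For alpha = beta = 1, integrating directly gives G_X(1,1) = 3/(4*lam)
g11 = cigf(F_erlang, 1, 1, upper=60.0)
print(g11)   # ≈ 0.75 for lam = 1
```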
We remark that if X is a discrete random variable with finite support \(\{x_{1}\le x_{2}\le \ldots \le x_{n}\}\), then due to (6) the CIGF can be expressed as a sum, i.e.
Other examples will be illustrated below.
In the next theorem we show the effect of an affine transformation. The result follows from Definition 2.1, and recalling the relation between the CDFs of X and \(Y=\gamma X + \delta \).
Theorem 2.1
Let X be a random variable with finite CIGF. Consider the affine transformation \(Y=\gamma X + \delta \), with \(\gamma \in {\mathbb {R}}{\setminus }\{0\}\), \(\delta \in {\mathbb {R}}\). Then
for \((\alpha ,\beta )\in D_X\) if \(0<\gamma <\infty \), and \((\beta ,\alpha )\in D_X\) if \(-\infty<\gamma <0\).
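The displayed relation of Theorem 2.1 is omitted above; consistently with the stated domain conditions, it reads \(G_Y(\alpha ,\beta )=\vert \gamma \vert \,G_X(\alpha ,\beta )\) for \(\gamma >0\) and \(G_Y(\alpha ,\beta )=\vert \gamma \vert \,G_X(\beta ,\alpha )\) for \(\gamma <0\). A numerical sketch under this reading (names and the exponential test case are our own choices):

```python
import math

def cigf(F, alpha, beta, lo, hi, n=200000):
    # Midpoint-rule approximation of G(alpha, beta) = ∫ F^alpha (1-F)^beta dx, Eq. (6)
    h = (hi - lo) / n
    s = 0.0
    for i in range(n):
        u = F(lo + (i + 0.5) * h)
        s += (u ** alpha) * ((1.0 - u) ** beta) * h
    return s

F = lambda x: 1.0 - math.exp(-x)             # X ~ Exp(1)
F_pos = lambda y: F((y - 3.0) / 2.0)         # Y = 2X + 3   (gamma > 0)
F_neg = lambda y: 1.0 - F((y - 3.0) / -2.0)  # Y = -2X + 3  (gamma < 0): F_Y(y) = Fbar_X((3-y)/2)

a, b = 2.0, 3.0
gX_ab = cigf(F, a, b, 0.0, 50.0)
gX_ba = cigf(F, b, a, 0.0, 50.0)
gY_pos = cigf(F_pos, a, b, 3.0, 103.0)
gY_neg = cigf(F_neg, a, b, -97.0, 3.0)
print(gY_pos, 2 * gX_ab)   # agree: G_Y = |gamma| G_X(a, b) when gamma > 0
print(gY_neg, 2 * gX_ba)   # agree: G_Y = |gamma| G_X(b, a) when gamma < 0
```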
Remark 2.1
If X is absolutely continuous, with PDF f(x), by setting \(u=F(x)\) in the right-hand-side of Eq. (6), the CIGF of X can be expressed as
Remark 2.2
The CIGF of a nonnegative random variable X can be regarded as a measure of concentration. Indeed, if \( X' \) is an independent copy of X, we can deduce that (see, for instance, the proof of Proposition 1 of Rao (2005))
This quantity is also known as the Gini mean semi-difference, which represents an example of coherent measure of variability with comonotonic additivity (see Section 2.2 of Hu and Chen (2020)).
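The identity behind Remark 2.2, namely \(G_X(1,1)=\frac{1}{2}{\mathbb {E}}\vert X-X'\vert \), is easy to check by simulation. The following Monte Carlo sketch (our own, not the paper's) uses \(X\sim Exp(1)\), for which \(G_X(1,1)=\int (1-e^{-x})e^{-x}\,\textrm{d}x=1/2\).

```python
import random

random.seed(1)

# X ~ Exp(1): E|X - X'| = 1, so the Gini mean semi-difference (1/2)E|X - X'| = 1/2,
# matching G_X(1,1) = ∫ F(x) Fbar(x) dx = 1/2.
n = 200000
acc = 0.0
for _ in range(n):
    x = random.expovariate(1.0)
    xp = random.expovariate(1.0)   # independent copy of X
    acc += abs(x - xp)
gini_semi = 0.5 * acc / n
print(gini_semi)   # ≈ 0.5
```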
In analogy with the information generating function defined in (2), the following generating measures can be introduced as marginal versions of the CIGF.
Definition 2.2
Under the same assumptions of Definition 2.1, the cumulative information generating measure and the cumulative residual information generating measure are defined respectively by
and
We remark that when X is absolutely continuous with support \((0,\infty )\), the measure (13) has been introduced in Eq. (10) of Kharazmi and Balakrishnan (2023), denoted as \(\mathcal {CIG}_\beta ({\overline{F}})\), for \(\beta >0\). Under these assumptions, other properties and the non-parametric estimation of the function given in Eq. (13) have been studied in Smitha et al. (2023).
Remark 2.3
If X is a random variable with finite CIGF and symmetric CDF, in the sense that for some \(m\in {\mathbb {R}}\) one has \(F(m+x)={\overline{F}}(m-x)\) \( \forall \,x\in {\mathbb {R}}\), then
-
(i)
\(G_X(\alpha ,\beta )=G_X(\beta ,\alpha )\) for all \((\alpha ,\beta )\in D_X\);
-
(ii)
from Eqs. (12) and (13) we have \(H_X(\alpha )=K_X(\alpha )\) for all \((\alpha ,0)\in D_X\);
-
(iii)
under the assumptions of Theorem 2.1, Eq. (9) becomes \( G_Y(\alpha ,\beta )=\vert \gamma \vert \,G_X(\alpha ,\beta )\), \( \gamma \in {\mathbb {R}}\).
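Property (i) of Remark 2.3 can be illustrated numerically. The sketch below (our own check) uses \(X\sim \textrm{Uniform}(0,1)\), whose CDF is symmetric about \(m=1/2\); the exact common value is the Beta integral \(B(\alpha +1,\beta +1)\).

```python
def cigf_uniform(alpha, beta, n=100000):
    # G(alpha, beta) = ∫_0^1 x^alpha (1-x)^beta dx for X ~ Uniform(0,1),
    # whose CDF satisfies F(1/2 + x) = Fbar(1/2 - x)
    h = 1.0 / n
    return sum(((i + 0.5) * h) ** alpha * (1.0 - (i + 0.5) * h) ** beta
               for i in range(n)) * h

g_ab = cigf_uniform(2.0, 5.0)
g_ba = cigf_uniform(5.0, 2.0)
print(g_ab, g_ba)   # equal, in agreement with Remark 2.3 (i); exact value B(3,6) = 1/168
```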
Recalling the measures (i) and (ii) of Table 1, now we can show that the cumulative residual entropy and the cumulative entropy can be obtained from the CIGF.
Proposition 2.1
Let X be a random variable having finite CIGF \(G_X(\alpha ,\beta )\), for \((\alpha ,\beta )\in D_X\). If \((0,1)\in D_X\), then
If \((1,0)\in D_X\), then
Proof
The stated results follow from Eq. (6), by differentiation under the integral sign. \(\square \)
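The displayed identities of Proposition 2.1 are not shown above; consistently with Remark 2.4 and Eq. (13), the first reads \(\mathcal {CRE}(X)=-\partial G_X(\alpha ,\beta )/\partial \beta \) evaluated at \((0,1)\), i.e. \(-K_X'(1)\). A finite-difference sketch (our own) for \(X\sim Exp(1)\), whose cumulative residual entropy is known to equal 1:

```python
import math

def K(beta, hi=60.0, n=200000):
    # K_X(beta) = G_X(0, beta) = ∫ Fbar(x)^beta dx for X ~ Exp(1); exactly 1/beta
    h = hi / n
    return sum(math.exp(-beta * (i + 0.5) * h) for i in range(n)) * h

eps = 1e-4
cre_numeric = -(K(1.0 + eps) - K(1.0 - eps)) / (2 * eps)   # central difference of -dK/dbeta at beta = 1
print(cre_numeric)   # ≈ 1, the CRE of Exp(1)
```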
Let us now obtain a similar relation for the generalized cumulative residual entropy and the generalized cumulative entropy (cf. cases (iii) and (iv) of Table 1).
Proposition 2.2
Let X be a random variable having finite CIGF \(G_X(\alpha ,\beta )\), for \((\alpha ,\beta )\in D_X\). If \((0,1)\in D_X\), then
If \((1,0)\in D_X\), then
Proof
The proof of (15) is analogous to Proposition 2.1, by using this identity:
In the same way we obtain Eq. (14). \(\square \)
In order to extend the above relations to the case of the generalized fractional cumulative residual entropy and the generalized fractional cumulative entropy, given respectively in cases (v) and (vi) of Table 1, let us now recall briefly the expression of the Caputo fractional derivatives (see, for instance, Kilbas et al. (2006)). Specifically, given a function \(y(x_1,x_2)\), we consider
that is the left-sided Caputo partial fractional derivative with respect to \(x_1\) of order \(\nu \) on the whole axis \({\mathbb {R}}\), where \(\nu \in {\mathbb {C}}\) with \(\text {Re}(\nu )>0\), \(\nu \notin {\mathbb {N}}\) and \(n=\lfloor \text {Re}(\nu )\rfloor + 1\).
Proposition 2.3
Let X be a random variable having finite CIGF \(G_X(\alpha ,\beta )\), for \((\alpha ,\beta )\in D_X\). If \((0,1)\in D_X\), then
If \((1,0)\in D_X\), then
Proof
We show only the proof of Eq. (18) because Eq. (17) can be derived similarly. From (16) we obtain
where the last equality is obtained by use of Fubini's theorem. By substituting \(t-\alpha =z\) and \(\gamma =-z\log F(x)\), we have
Finally we deduce
so that Eq. (18) follows by taking \(\alpha =1\) and \(\beta =0\). \(\square \)
Remark 2.4
Recently, Kharazmi and Balakrishnan (2023) noted that \(\mathcal {CRE}(X)=-\frac{\text {d}}{\text {d}\beta }K_X(\beta )\big \vert _{\beta =1}\).
Similarly, we obtain \(\mathcal{C}\mathcal{E}(X)=-\frac{\textrm{d}}{\textrm{d}\alpha }H_X(\alpha )\big \vert _{\alpha =1}\).
Moreover, for the generalized versions and for the fractional versions we have respectively
Now we recall two important models that are largely adopted in survival analysis and reliability theory. Let X be a random lifetime with CDF F(x) and SF \({\overline{F}}(x)\). The proportional hazard model (see, for instance, Cox (1972) and Kumar and Klefsjö (1994)) is expressed by a random lifetime \(X^*_\gamma \) with SF
Similarly, the proportional reversed hazard model (see for instance Di Crescenzo 2000; Gupta and Gupta 2007; Gupta et al. 1998) is expressed by a random lifetime \({\hat{X}}_\theta \) with CDF
Recently, modified versions of these models have been studied by Das and Kayal (2021).
Remark 2.5
The measures given in Definition 2.2 satisfy the following relations.
-
(i)
Under the proportional hazard model (19) we have
$$\begin{aligned} K_{X^*_\gamma }(\beta )=K_X(\gamma \beta ) \qquad \forall \,\gamma \in {\mathbb {R}}^+ \;\; \hbox {s.t.}\;\; (0,\gamma \beta )\in D_X. \end{aligned}$$ -
(ii)
Under the proportional reversed hazard model (20) we have
$$\begin{aligned} H_{{\hat{X}}_\theta }(\alpha )=H_X(\theta \alpha ) \qquad \forall \,\theta \in {\mathbb {R}}^+ \;\; \hbox {s.t.}\;\; (\theta \alpha ,0)\in D_X. \end{aligned}$$
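Relation (i) of Remark 2.5 is simple to verify numerically: under (19) the SF is raised to the power \(\gamma \), so \(K_{X^*_\gamma }(\beta )=\int {\overline{F}}(x)^{\gamma \beta }\,\textrm{d}x=K_X(\gamma \beta )\). A sketch (our own, with \(X\sim Exp(1)\), for which \(K_X(\beta )=1/\beta \)):

```python
import math

def K(sf, beta, hi=80.0, n=200000):
    # K(beta) = ∫ sf(x)^beta dx, midpoint rule on (0, hi)
    h = hi / n
    return sum(sf((i + 0.5) * h) ** beta for i in range(n)) * h

sf = lambda x: math.exp(-x)            # X ~ Exp(1)
gamma = 2.5
sf_ph = lambda x: sf(x) ** gamma       # proportional hazards model (19)

beta = 1.4
lhs = K(sf_ph, beta)
rhs = K(sf, gamma * beta)
print(lhs, rhs)   # both ≈ 1/(gamma * beta)
```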
Remark 2.6
If X is a random lifetime such that \(D_X\subseteq ({\mathbb {R}}^+)^2\), then, recalling Definition 2.1 and Eqs. (19) and (20), the CIGF of X can be expressed as
We now recall another useful concept. Let X be a random variable with CDF and SF denoted by F(x) and \({\overline{F}}(x)\), respectively. For all \(x\in (l,r)\) the odds function of X is (cf. Kirmani and Gupta (2001))
This function represents the ratio of the probability of an event occurring to the probability of its not occurring, and always assumes nonnegative finite values. It is used in reliability theory, since it quantifies the strength of association between the failure of a system after time x and its failure before time x. Due to Eq. (21), we can express the CIGF of X in terms of the odds function in two equivalent useful ways:
Hence, when the parameters \(\alpha \) and \(\beta \) take (possibly negative) values such that \(\alpha +\beta =0\), the CIGF of X can be expressed in terms of the odds function as
where
Table 2 shows various examples of the CIGF expressed in terms of the Euler Beta function \(B(x,y)=\int _{0}^{1}t^{x-1}(1-t)^{y-1}\,\text {d}t\) or the incomplete Beta function \(B\left( p;x,y\right) =\int _{0}^{p}t^{x-1}(1-t)^{y-1}\,\text {d}t\), \(p\in [0,1]\).
Finally, by recalling Eq. (11), we remark that for \(\alpha =\beta =1\) Eq. (8) and the examples in Table 2 are in agreement with Giorgi and Nadarajah (2010).
3 Inequalities and further results
In this section, we obtain some bounds and further results regarding the CIGF. Specifically, we first refer to well-known inequalities named after Chernoff, Bernoulli, Minkowski and Hölder (see, for instance, Schilling (2005)).
Hereafter, thanks to Chernoff's inequalities, we obtain some bounds for the CIGF in terms of the moment generating function (MGF) of X, denoted by \(M_X(s)={\mathbb {E}}(e^{sX})\), \(s\in {\mathbb {R}}\).
Proposition 3.1
Let X be a nonnegative random variable with support (0, r), where \(r\in (0,+\infty )\), and having finite CIGF \(G_X(\alpha ,\beta )\), for \((\alpha ,\beta )\in D_X\). Assume that the MGF of X satisfies \(M_X(s)<+\infty \) for all \(s\in (-s_0, s_0)\), with \(s_0>0\). Then,
-
(i)
for all \(s_1\in (-s_0,0)\), \(s_2\in (0,s_0)\) and \((\alpha ,\beta )\in D_X\cap ({\mathbb {R}}^+)^2\), one has
$$\begin{aligned} G_X(\alpha ,\beta ) \le g(r; \alpha ,\beta , \textbf{s}) \left[ M_X(s_1)\right] ^\alpha \left[ M_X(s_2)\right] ^\beta , \end{aligned}$$(23) -
(ii)
for all \(s_1\in (-s_0,0)\), \(s_2\in (0,s_0)\) and \((\alpha ,\beta )\in D_X\cap ({\mathbb {R}}^-)^2\), one has
$$\begin{aligned} G_X(\alpha ,\beta )\ge g(r; \alpha ,\beta , \textbf{s}) \left[ M_X(s_1)\right] ^\alpha \left[ M_X(s_2)\right] ^\beta , \end{aligned}$$(24)where
$$\begin{aligned} g(r; \alpha ,\beta , \textbf{s})=\left\{ \begin{array} {ll} \displaystyle \frac{1}{\alpha s_1+\beta s_2}\left[ 1-e^{-(\alpha s_1+\beta s_2)r}\right] , &{} \hbox { if }\alpha s_1+\beta s_2\ne 0, \\ r, &{} \hbox { if }\alpha s_1+\beta s_2= 0. \end{array} \right. \end{aligned}$$
Proof
Applying Chernoff's inequalities \({\mathbb {P}}(X\le x)\le e^{-s_1x}M_X(s_1)\) and \({\mathbb {P}}(X\ge x)\le e^{-s_2x}M_X(s_2)\), for \(x>0\), to Eq. (6) and integrating, for all \(s_1\in (-s_0,0)\), \(s_2\in (0,s_0)\) and \((\alpha ,\beta )\in D_X\cap ({\mathbb {R}}^+)^2\), it follows that
A few calculations give Eq. (23). For \((\alpha ,\beta )\in D_X\cap ({\mathbb {R}}^-)^2\), Eq. (24) can be obtained similarly. \(\square \)
The case when X has support \((0,+\infty )\) can be easily derived as follows.
Corollary 3.1
Let X be a nonnegative random variable with support \((0,+\infty )\), and having finite CIGF \(G_X(\alpha ,\beta )\), for \((\alpha ,\beta )\in D_X\). Assume that the MGF of X satisfies \(M_X(s)<+\infty \) for all \(s\in (-s_0, s_0)\), with \(s_0>0\). Then,
-
(i)
for all \(s_1\in (-s_0,0)\), \(s_2\in (0,s_0)\) and \((\alpha ,\beta )\in D_X\cap ({\mathbb {R}}^+)^2\) such that \(\alpha s_1+\beta s_2>0\), one has
$$\begin{aligned} G_X(\alpha ,\beta )\le \frac{1}{\alpha s_1+\beta s_2}\left[ M_X(s_1)\right] ^\alpha \left[ M_X(s_2)\right] ^\beta , \end{aligned}$$(25) -
(ii)
for all \(s_1\in (-s_0,0)\), \(s_2\in (0,s_0)\) and \((\alpha ,\beta )\in D_X\cap ({\mathbb {R}}^-)^2\) such that \(\alpha s_1+\beta s_2>0\), one has
$$\begin{aligned} G_X(\alpha ,\beta )\ge \frac{1}{\alpha s_1+\beta s_2}\left[ M_X(s_1)\right] ^\alpha \left[ M_X(s_2)\right] ^\beta . \end{aligned}$$
The following example provides an application of the previous results.
Example 3.1
Let \(X\sim Erlang(2,\lambda )\), \(\lambda >0\), as in Example 2.1, with MGF \(M_X(s)=\lambda ^2 (\lambda -s)^{-2}\), for \(s<\lambda \). Thanks to Eqs. (8) and (25), some calculations give, for \((\alpha ,\beta )\in ({\mathbb {R}}^+)^2\),
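The displayed bound of Example 3.1 is omitted above, but the inequality of Corollary 3.1 (i) can be checked directly. The sketch below (our own) compares the exact CIGF of \(Erlang(2,1)\), computed by quadrature, with the Chernoff-type upper bound of Eq. (25) for one admissible choice of \(s_1, s_2\):

```python
import math

lam = 1.0
F = lambda x: 1.0 - math.exp(-lam * x) - lam * x * math.exp(-lam * x)  # Erlang(2, lam) CDF
M = lambda s: lam ** 2 / (lam - s) ** 2                                 # its MGF, valid for s < lam

def cigf(alpha, beta, hi=80.0, n=200000):
    h = hi / n
    s = 0.0
    for i in range(n):
        u = F((i + 0.5) * h)
        s += (u ** alpha) * ((1.0 - u) ** beta) * h
    return s

alpha, beta = 1.0, 3.0
s1, s2 = -0.5, 0.5                     # s1 < 0 < s2 and alpha*s1 + beta*s2 = 1 > 0
g = cigf(alpha, beta)
bound = (M(s1) ** alpha) * (M(s2) ** beta) / (alpha * s1 + beta * s2)
print(g, bound)   # the Chernoff-type bound (25) holds: g <= bound
```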
Let us now express some upper bounds for the CIGF in terms of the measures introduced in Definition 2.2.
Proposition 3.2
Under the assumptions specified in Definition 2.1, the CIGF of a random variable X satisfies the following inequalities:
Proof
Due to Bernoulli’s inequality with real exponents, for all \(x\in {\mathbb {R}}\) it follows that
and
Hence, the thesis immediately follows from Definitions 2.1 and 2.2. \(\square \)
Hereafter we use Minkowski's inequality to obtain suitable bounds for the measures introduced in Definition 2.2 and for \(G_X(\gamma ,\gamma )\).
Proposition 3.3
Under the assumptions specified in Definition 2.1 and Definition 2.2, if X has finite support in (l, r),
-
(i)
for all \(\gamma \ge 1\) such that \((\gamma ,0)\in D_X\) and \((0,\gamma )\in D_X\) we have
$$\begin{aligned}{} & {} \left[ \left( r-l\right) ^\frac{1}{\gamma }-\left[ H_X(\gamma )\right] ^\frac{1}{\gamma }\right] ^\gamma \le K_X(\gamma ) \le \left[ \left( r-l\right) ^\frac{1}{\gamma }+\left[ H_X(\gamma )\right] ^\frac{1}{\gamma }\right] ^\gamma ,\\{} & {} \left[ \left( r-l\right) ^\frac{1}{\gamma }-\left[ K_X(\gamma )\right] ^\frac{1}{\gamma }\right] ^\gamma \le H_X(\gamma )\le \left[ \left( r-l\right) ^\frac{1}{\gamma }+\left[ K_X(\gamma )\right] ^\frac{1}{\gamma }\right] ^\gamma ; \end{aligned}$$ -
(ii)
for all \(\gamma \ge 1\) such that \((0,\gamma )\in D_X\) we have
$$\begin{aligned} G_X(\gamma ,\gamma )\le \left[ \left[ K_X(\gamma )\right] ^\frac{1}{\gamma }+\left[ K_X(2\gamma )\right] ^\frac{1}{\gamma }\right] ^\gamma ; \end{aligned}$$ -
(iii)
for all \(\gamma \ge 1\) such that \((\gamma ,0)\in D_X\) we have
$$\begin{aligned} G_X(\gamma ,\gamma )\le \left[ \left[ H_X(\gamma )\right] ^\frac{1}{\gamma }+\left[ H_X(2\gamma )\right] ^\frac{1}{\gamma }\right] ^\gamma . \end{aligned}$$
Proof
By applying Minkowski’s inequality, for \(\gamma \ge 1\) we have
and also
Combining the two latter inequalities we obtain the bounds for \(K_X(\gamma )\). The other relations can be obtained in the same way, by taking into account that, for \(\gamma \ge 1\), if \((0,\gamma )\in D_X\) then \((0,2\gamma )\in D_X\), and if \((\gamma ,0)\in D_X\) then \((2\gamma ,0)\in D_X\). \(\square \)
We now prove an upper bound for \(G_X(\alpha ,\beta )\), for \(\alpha +\beta =1\), making use of Hölder's inequality.
Proposition 3.4
Under the assumptions specified in Definition 2.1, let X have finite support in (l, r), with \((\theta ,1-\theta )\in D_X\) for all \(\theta \in (0,1)\). Then,
Proof
Due to Hölder's inequality with conjugate exponents \(\frac{1}{\theta },\frac{1}{1-\theta }\), for all \(\theta \in (0,1)\) we have
thus yielding Eq. (26). \(\square \)
We remark that Eq. (26) is satisfied with equality when \(F(x)={\overline{F}}(x)\) \(\forall \, x\in (l,r)\), i.e. when \({\mathbb {P}}(X=l)={\mathbb {P}}(X=r)=1/2\). Moreover, we note that the right-hand-side of Eq. (26) can be rewritten by taking into account that, under the given assumptions,
Hereafter we show that the CIGF of an absolutely continuous random variable can be expressed as the product of the Euler beta function and the expected value of a suitably transformed beta-distributed random variable.
Proposition 3.5
If X is an absolutely continuous random variable with PDF f, CDF F and finite CIGF, then,
where \(Y\sim \textrm{Beta}(\alpha +1,\beta +1)\) is independent of X, with \(r(Y)=[f(F^{-1}(Y))]^{-1}\).
Proof
The thesis is obtained making use of Eq. (10) and recalling the PDF of \(Y\sim \textrm{Beta}(\alpha +1,\beta +1)\). \(\square \)
The above result collects various features of the CIGF, namely its relations (i) to the Beta distribution, which reflects the form of the right-hand-side of (6), and (ii) to the transformation \(f(F^{-1}(\cdot ))\), which plays a relevant role in the context of variability measures, as developed in Sect. 5.
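Proposition 3.5 can be checked by simulation: the displayed identity reads \(G_X(\alpha ,\beta )=B(\alpha +1,\beta +1)\,{\mathbb {E}}[r(Y)]\). The sketch below (our own) takes \(X\sim Exp(1)\), for which \(f(F^{-1}(u))=1-u\), so \(r(Y)=1/(1-Y)\), and compares the Monte Carlo estimate with the exact value \(G_X(2,3)=\int _0^1 u^2(1-u)^2\,\textrm{d}u=1/30\).

```python
import math, random

random.seed(7)

alpha, beta = 2.0, 3.0
# X ~ Exp(1): F^{-1}(u) = -log(1-u) and f(F^{-1}(u)) = 1 - u, hence r(Y) = 1/(1 - Y)
n = 400000
acc = 0.0
for _ in range(n):
    y = random.betavariate(alpha + 1.0, beta + 1.0)   # Y ~ Beta(alpha+1, beta+1)
    acc += 1.0 / (1.0 - y)
B = math.gamma(alpha + 1) * math.gamma(beta + 1) / math.gamma(alpha + beta + 2)
g_beta_repr = B * acc / n

print(g_beta_repr)   # ≈ 1/30, the exact CIGF G_X(2,3) of Exp(1)
```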
We conclude this section by relating the CIGF to a series involving the Golomb’s information generating function of the equilibrium random variable. To this aim, we recall that for a nonnegative random variable X, with SF \({\overline{F}}(x)\) and expected value \({\mathbb {E}}[X]\in (0, +\infty )\), the equilibrium random variable of X is a nonnegative absolutely continuous random variable, denoted as \(X_e\), whose PDF is given by
Recalling Eq. (2), hereafter we denote by \(\mathcal{I}\mathcal{G}_{X_e}\) the Golomb’s information generating function of the equilibrium random variable \(X_e\).
Proposition 3.6
If X is a nonnegative random variable having expected value \({\mathbb {E}}[X]\in (0, +\infty )\) and with finite CIGF, then
where \(X_e\) is the equilibrium random variable of X.
Proof
Denoting by (0, r) the support of X, with \(r\in (0,+\infty ]\), from Eqs. (6) and (27) it follows that, for all \((\alpha ,\beta )\in D_X\)
Since \(\left| f_e(x)\,{\mathbb {E}}[X]\right| <1\), due to Eq. (7) we have
By replacing the latter equation in (28), the thesis immediately follows from Eq. (2). \(\square \)
4 Connections with systems reliability
In this section we relate some of the results obtained above to notions of interest in reliability theory.
Several applied problems involve complex systems consisting of many components. Here we focus on systems formed by n components, where \(X_1,X_2,\dots ,X_n\) describe the random lifetimes of the components. We assume that they are independent and identically distributed (i.i.d.), with common CDF F(x) and SF \({\overline{F}}(x)\). As is well known, a parallel system continues to work until the last component fails, and thus its lifetime is described by the sample maximum
which has CDF
Similarly, a series system fails as soon as the first component stops working, and thus its lifetime is described by the sample minimum
that possesses SF
Remark 4.1
Let \(n\in {\mathbb {N}}\). Recalling Definition 2.2, from Eqs. (29) and (30) it immediately follows that
where \(H_X\) and \(K_X\) denote respectively the cumulative information generating measure and the cumulative residual information generating measure of \(X_i\).
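The displayed relations of Remark 4.1 are omitted above; from Eqs. (29) and (30) they amount to \(H_{X_{(n:n)}}(\alpha )=H_X(n\alpha )\) and \(K_{X_{(1:n)}}(\beta )=K_X(n\beta )\), since the maximum has CDF \([F]^n\) and the minimum has SF \([{\overline{F}}]^n\). A numerical sketch under this reading (our own, with uniform components):

```python
def H(cdf, alpha, n=100000):
    # H(alpha) = ∫_0^1 cdf(x)^alpha dx for a distribution supported on (0, 1), cf. Eq. (12)
    h = 1.0 / n
    return sum(cdf((i + 0.5) * h) ** alpha for i in range(n)) * h

m = 4                                # number of i.i.d. components
F = lambda x: x                      # Uniform(0,1)
F_max = lambda x: x ** m             # CDF of the sample maximum, Eq. (29)

alpha = 1.5
lhs = H(F_max, alpha)
rhs = H(F, m * alpha)
print(lhs, rhs)   # both ≈ 1/(m*alpha + 1) = 1/7
```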
We now focus on the expression of the CIGF for order statistics \(X_{(n:n)}\) and \(X_{(1:n)}\).
Proposition 4.1
For \(n\in {\mathbb {N}}\), let \(X_1,X_2,\dots ,X_n\) be a random sample formed by i.i.d. random lifetimes having finite cumulative information generating measure \(H_X\) and cumulative residual information generating measure \(K_X\). Then, the CIGF of the order statistics \(X_{(n:n)}\) and \(X_{(1:n)}\) can be expressed respectively as
and
Proof
For simplicity, assume that the support of X is (0, r). Recalling Eq. (6), from Eqs. (29) and (7) we have
The right-hand-side of (31) then follows making use of (12). Equation (32) can be obtained similarly. \(\square \)
It is well known that a system with n independent components is said to be a k-out-of-n system when it works if and only if at least k components work (see, for instance, Boland and Proschan (1983)). Clearly, if \(k=1\) we have a parallel system, while for \(k=n\) we have a series system. For any k, the lifetime of the k-out-of-n system formed by components with i.i.d. lifetimes is expressed as the corresponding k-th order statistic. This allows us to express the reliability and the information content of this kind of systems in a tractable way.
Remark 4.2
Consider a k-out-of-n system formed by n components with i.i.d. random lifetimes, for \(n\in {\mathbb {N}}\). Denoting by \(X_{(k:n)}\) the corresponding k-th order statistic, which in turn gives the system lifetime, the CIGF of \(X_{(k:n)}\) can be expressed in terms of the cumulative information generating measures. Indeed, similarly as Proposition 4.1, recalling Eqs. (12) and (13) one has the following two equivalent expressions
for all \((\alpha ,\beta )\in D_{X_{(k:n)}}\). Moreover, the mean of \(X_{(k:n)}\) can be expressed in terms of the CIGF of X as
Clearly, for \(k=1\) we have \({\mathbb {E}}\left[ X_{(1:n)}\right] =G_X(0,n)=K_X(n)\).
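The displayed formula for \({\mathbb {E}}[X_{(k:n)}]\) in Remark 4.2 is not shown above; for nonnegative i.i.d. lifetimes a standard identity gives \({\mathbb {E}}[X_{(k:n)}]=\sum _{j=0}^{k-1}\binom{n}{j}G_X(j,n-j)\), which reduces to \(G_X(0,n)=K_X(n)\) at \(k=1\), in agreement with the text. A sketch (our own) for uniform lifetimes, where the k-th order statistic from a sample of size m has mean \(k/(m+1)\):

```python
import math

def cigf_uniform(alpha, beta, n=100000):
    # CIGF of X ~ Uniform(0,1): G(alpha, beta) = ∫_0^1 x^alpha (1-x)^beta dx
    h = 1.0 / n
    return sum(((i + 0.5) * h) ** alpha * (1.0 - (i + 0.5) * h) ** beta
               for i in range(n)) * h

k, m = 3, 5   # k-th order statistic out of m i.i.d. Uniform(0,1) lifetimes
mean_os = sum(math.comb(m, j) * cigf_uniform(j, m - j) for j in range(k))
print(mean_os)   # ≈ k/(m+1) = 0.5
```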
4.1 Stress–strength models for multi-component systems
A further connection of the CIGF with systems reliability arises in the analysis of stress–strength models for multi-component systems. Let us consider a system with n components whose i.i.d. strengths \(X_1, X_2, \ldots , X_n\) have common CDF F(x). Assume that each component is subject to an independent random stress T having CDF \(F_T(x)\). Moreover, suppose that the system survives if and only if at least k of the n components' strengths (\(1\le k\le n\)) exceed the stress. Then, the reliability of the considered multi-component stress–strength system is given by (cf. Bhattacharyya and Johnson (1974))
with \(R_{0,n} =1\). See also, for instance, the recent contribution by Kohansal and Shoaee (2021) on the statistical inference of multicomponent stress–strength reliability under suitable censored samples.
For instance, if T is distributed as \(X_1\) then it is not hard to see that
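The displayed value is omitted above; by exchangeability, when T has the same continuous distribution as the strengths, the rank of T among the \(n+1\) variables is uniform, suggesting \(R_{k,n}=(n+1-k)/(n+1)\). A Monte Carlo sketch under this reading (our own, with exponential variables, although any common continuous law works):

```python
import random

random.seed(3)

def mc_reliability(k, n, trials=200000):
    # System survives if at least k of the n i.i.d. strengths exceed the stress T,
    # where T has the same distribution as the strengths (Exp(1) here)
    ok = 0
    for _ in range(trials):
        t = random.expovariate(1.0)
        survivors = sum(1 for _ in range(n) if random.expovariate(1.0) > t)
        if survivors >= k:
            ok += 1
    return ok / trials

k, n = 2, 4
r = mc_reliability(k, n)
print(r)   # ≈ (n + 1 - k)/(n + 1) = 0.6
```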
The following result is a straightforward consequence of Eqs. (6) and (33).
Proposition 4.2
Let \(X_1, X_2, \ldots , X_n,T\) have common finite support (l, r), with \(X_1, X_2, \ldots , X_n\) i.i.d. If T is uniformly distributed over (l, r), then
An iterative formula allows us to evaluate the reliability of the multi-component stress–strength system as follows, under the assumptions of Proposition 4.2:
As an example, if X has a Power\((\theta )\) distribution, then from Proposition 4.2 and Table 2, after a few calculations, one has
Note that the expression in (35) can be also represented as a ratio of Pochhammer symbols, or as an infinite product (cf. Eq. 8.325.1 of Gradshteyn and Ryzhik (2015)). Clearly, if \(\theta =1\) then Eq. (35) reduces to Eq. (34). Figure 1 shows some plots of \(R_{k,n}\) as given in (35).
5 Generalized Gini functions
This section is devoted to the analysis of a generalized version of the CIGF. Specifically, we aim to extend Definition 2.1 to the case in which the powers included in the right-hand side of Eq. (6) are replaced by suitable distortion functions. In this case, recalling Remark 2.2, we arrive at an extension of Eq. (11).
Let X be a random variable with CDF F and SF \({\overline{F}}\), and let \(q_i:[0,1]\rightarrow [0,1]\) be two distortion functions, for \(i=1,2\), i.e. increasing functions such that \(q_i(0)=0\) and \(q_i(1)=1\) (cf. Section 2.9.2 of Belzunce et al. (2016) and Section 2.4 of Navarro (2022)). In some applications it is required that \(q_i\) is continuous or left-continuous; however, these assumptions are not required in general. The distorted distribution function and the distorted survival function of F through \(q_i\) are given respectively by
From Eq. (36), in general one has \(F_{q_1}(x)+{\overline{F}}_{q_2}(x)\ne 1\) for all \(x\in (l,r)\), unless \(q_1(u)=1-q_2(1-u)\). The functions given in (36) were introduced in the context of the theory of choice under risk (see Wang (1996)), and are widely used in various applied fields (for instance, see Sordo and Suárez-Llorens (2011) for applications to variability measures).
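The duality condition \(q_1(u)=1-q_2(1-u)\) is easy to illustrate. In the following Python sketch, the form \(F_{q_1}(x)=q_1(F(x))\) and \({\overline{F}}_{q_2}(x)=q_2({\overline{F}}(x))\) is assumed for the distorted functions of Eq. (36), and the distortions chosen are merely illustrative.

```python
# Distorted distribution and survival functions, assumed from Eq. (36):
#   F_{q1}(x) = q1(F(x)),   Fbar_{q2}(x) = q2(Fbar(x))
F = lambda x: x            # CDF of a Uniform(0,1) random variable
Fbar = lambda x: 1.0 - x   # corresponding SF

q2 = lambda u: u ** 2                    # a distortion function
q1_dual = lambda u: 1.0 - q2(1.0 - u)    # its dual: q1(u) = 1 - q2(1 - u)
q1_other = lambda u: u ** 2              # a non-dual distortion

xs = [0.1 * i for i in range(1, 10)]
# With dual distortions the distorted CDF and SF still sum to one...
dual_sums = [q1_dual(F(x)) + q2(Fbar(x)) for x in xs]
# ...while for a generic pair they do not.
other_sums = [q1_other(F(x)) + q2(Fbar(x)) for x in xs]
```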
Let us now consider the generalization of the CIGF, based on (36), announced above.
Definition 5.1
Let X be a random variable with CDF F(x) and SF \({\overline{F}}(x)\), \(x\in {\mathbb {R}}\), and let
The q-distorted Gini function (or, shortly, q-Gini function) of X is defined as
where \(\textbf{q}=(q_1,q_2)\), and \(q_i:[0,1]\rightarrow [0,1]\), \(i=1,2\), are distortion functions such that \({\hat{G}}_X(\textbf{q})\) is finite.
Clearly, if the distortion functions are taken as \(q_1(x)=x^\alpha \) and \(q_2(x)=x^\beta \) with \((\alpha ,\beta )\in D_X\), then Eq. (37) corresponds to the definition of the CIGF given in Eq. (6). Specifically, if \(\alpha =\beta =1\) then we recover the Gini mean semi-difference (11).
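To make Definition 5.1 concrete, the following Python sketch evaluates the q-Gini function numerically. It assumes, by analogy with the power form of Eq. (6), that Eq. (37) reads \({\hat{G}}_X(\textbf{q})=\int q_1(F(x))\,q_2({\overline{F}}(x))\,\textrm{d}x\); since the display is not reproduced here, this integral form is an assumption.

```python
def q_gini(q1, q2, F, lo, hi, m=100000):
    # Midpoint-rule evaluation of int_lo^hi q1(F(x)) * q2(1 - F(x)) dx,
    # the assumed integral form of the q-Gini function of Eq. (37).
    h = (hi - lo) / m
    total = 0.0
    for i in range(m):
        x = lo + (i + 0.5) * h
        total += q1(F(x)) * q2(1.0 - F(x))
    return total * h

uniform_cdf = lambda x: x  # Uniform(0,1)

# Identity distortions recover the Gini mean semi-difference of Eq. (11):
# for Uniform(0,1), E|X - X'| / 2 = 1/6.
gini_semi = q_gini(lambda u: u, lambda u: u, uniform_cdf, 0.0, 1.0)

# Power distortions q1(u) = u**2, q2(u) = u**3 recover the CIGF G_X(2, 3):
# here int_0^1 x^2 (1-x)^3 dx = B(3, 4) = 1/60.
cigf_23 = q_gini(lambda u: u ** 2, lambda u: u ** 3, uniform_cdf, 0.0, 1.0)
```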
It is worth mentioning that the q-Gini function may be viewed as an extension of the distorted measures treated in Giovagnoli and Wynn (2012) and in Greselin and Zitikis (2018). In those papers, distortions of the CDF or the SF alone (which can be viewed as generalizations of Eqs. (12) and (13)) are considered for the analysis of stochastic dominance, Lorenz ordering and risk measures.
Hereafter we shall prove various results regarding the q-Gini function. They include the effect of an affine transformation of X and the pointwise ordering of the q-Gini functions. To this aim, we recall that, if X and Y are nonnegative random variables with CDFs F and G, respectively, then X is said to be smaller than Y in the dispersive order, denoted as \(X\le _d Y\), if and only if (see Section 3.B of Shaked and Shanthikumar (2007))
Among the variability stochastic orders, the dispersive order is one of the most popular, since it involves easily tractable quantities, requiring that the difference between any two quantiles of X is smaller than the corresponding difference for Y. Moreover, if X and Y are absolutely continuous with PDFs f and g, respectively, then
We can now prove that, under suitable assumptions, the q-Gini function is a variability measure in the sense of Bickel and Lehmann (2012).
Theorem 5.1
Let X and Y be random variables having the same support, and let \(\textbf{q}=(q_1,q_2)\), where \(q_i:[0,1]\rightarrow [0,1]\), \(i=1,2\), are distortion functions such that \({\hat{G}}_X(\textbf{q})\) and \({\hat{G}}_Y(\textbf{q})\) are finite. Then, the following properties hold:
1. \({\hat{G}}_{X+\delta }(\textbf{q})={\hat{G}}_X(\textbf{q})\) for all \(\delta \in {\mathbb {R}}\);
2. \({\hat{G}}_{\gamma X}(\textbf{q})=\gamma {\hat{G}}_X(\textbf{q})\) for all \(\gamma \in {\mathbb {R}}^+\);
3. \({\hat{G}}_X(\textbf{q})=0\) for any degenerate random variable X;
4. \({\hat{G}}_X(\textbf{q})\ge 0\) for any random variable X;
5. \(X\le _d Y\) implies \({\hat{G}}_X(\textbf{q})\le {\hat{G}}_Y(\textbf{q})\).
Proof
Properties 1 and 2 follow by recalling Eq. (37) and the relation between the CDFs of X and \(Y=\gamma X + \delta \). Properties 3 and 4 are guaranteed by Definition 5.1. In analogy to Eq. (10), we can write
Hence, due to relation (38), property 5 immediately follows. \(\square \)
In addition, we remark that if \({\hat{G}}_X(\textbf{q})=0\) and if \(q_i(0^+)>0\) and \(q_i(1^-)<1\) for \(i=1,2\), then X is necessarily a degenerate random variable.
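Properties 1 and 2 of Theorem 5.1 are easy to verify numerically. The Python sketch below assumes the integral reading \({\hat{G}}_X(\textbf{q})=\int q_1(F(x))\,q_2({\overline{F}}(x))\,\textrm{d}x\) of Eq. (37): shifting X by \(\delta\) leaves the value unchanged, while scaling by \(\gamma >0\) multiplies it by \(\gamma\).

```python
def q_gini(q1, q2, F, lo, hi, m=100000):
    # Midpoint-rule evaluation of the assumed integral form of Eq. (37).
    h = (hi - lo) / m
    return h * sum(q1(F(lo + (i + 0.5) * h)) * q2(1.0 - F(lo + (i + 0.5) * h))
                   for i in range(m))

q1 = lambda u: u ** 2
q2 = lambda u: u

gamma_, delta = 3.0, 2.0
base = q_gini(q1, q2, lambda x: x, 0.0, 1.0)                       # X ~ U(0,1)
shifted = q_gini(q1, q2, lambda x: x - delta, delta, 1.0 + delta)  # X + delta
scaled = q_gini(q1, q2, lambda x: x / gamma_, 0.0, gamma_)         # gamma * X
```

Here the base value is \(\int_0^1 x^2(1-x)\,\textrm{d}x = 1/12\), and indeed `shifted` agrees with `base` while `scaled` equals three times `base`.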
It is worth mentioning that the results given in this section can be further extended. Indeed, under the same conditions given in Definition 5.1, and following the approach of Giovagnoli and Wynn (2012), we introduce the weighted q-distorted Gini function (or, shortly, weighted q-Gini function) as follows:
where \(F_T(x)\) is the CDF of a random variable T, and where the intersection of the supports of X and T is a non-empty set denoted by \(\Delta \).
It is not hard to see that the function given in (39) satisfies properties 1–4 of Theorem 5.1 for \({\hat{G}}_X(\textbf{q})\). Concerning property 5, we see below that additional assumptions are needed. Here, \(l_X\), \(r_X\) and \(l_Y\), \(r_Y\) are defined as in (5) for X and Y, respectively.
Theorem 5.2
Let X and Y be random variables having the same support, let \(\textbf{q}=(q_1,q_2)\), where \(q_i:[0,1]\rightarrow [0,1]\), \(i=1,2\), be distortion functions such that \({\hat{G}}_X(\textbf{q},F_T)\) and \({\hat{G}}_Y(\textbf{q},F_T)\) are finite, and let T be absolutely continuous with PDF \(f_T\). If
(i) \(f_T(t)\) is increasing in t and \(-\infty <l_X=l_Y\), or if
(ii) \(f_T(t)\) is decreasing in t and \( r_X=r_Y<\infty \), then
$$\begin{aligned} X\le _d Y \quad \text {implies}\quad {\hat{G}}_X(\textbf{q},F_T)\le {\hat{G}}_Y(\textbf{q},F_T). \end{aligned}$$(40)
Proof
Due to (39), by setting \(u = F (x)\) one has
Then, under assumption (i), from Theorem 3.B.13 of Shaked and Shanthikumar (2007) we have that \(X\le _d Y\) implies \(X\le _{st} Y\), i.e. \(F(x)\ge G(x)\) for all \(x\in {\mathbb {R}}\), so that \(G^{-1}(u)\ge F^{-1}(u)\) for all \(u\in (0,1)\). Relation (40) thus follows from (38). The same result can be proved similarly under assumption (ii). \(\square \)
An immediate application of Theorem 5.2 can be given to the reliability of multi-component stress–strength systems, as seen in Sect. 4.1. Consider two n-component systems, the first having i.i.d. strengths \(X_1, X_2, \ldots , X_n\) distributed as X, and the second having i.i.d. strengths \(Y_1, Y_2, \ldots , Y_n\) distributed as Y. Assume that each component of both systems is stressed according to an independent random stress T having CDF \(F_T(x)\). We denote by \(R_{k,n}^X\) and \(R_{k,n}^Y\) the reliability of the corresponding multi-component stress–strength systems defined as in (33). We are now able to provide a comparison result based on the weighted q-Gini function.
Theorem 5.3
Let the strengths X and Y have the same support, and let the random stress T be absolutely continuous with PDF \(f_T\). If
(i) \(f_T(t)\) is increasing in t and \(-\infty <l_X=l_Y\), or if
(ii) \(f_T(t)\) is decreasing in t and \( r_X=r_Y<\infty \), then
$$\begin{aligned} X\le _d Y \quad \text {implies}\quad R_{k,n}^X\le R_{k,n}^Y \quad \text {for all}\quad 0\le k\le n, \end{aligned}$$
provided that \(R_{k,n}^X\) and \(R_{k,n}^Y\) are finite.
Proof
The result follows by recalling Eq. (33) and making use of Theorem 5.2, with the relevant distortions given by \(q_1(u)=u^{n-j}\) and \(q_2(u)=u^{j}\), with \(k\le j\le n\). \(\square \)
In the last result of this section, thanks to the probabilistic analogue of the mean value theorem, we provide a suitable expression of the weighted q-Gini function in the special case when the related distortion functions are equal to the identity.
Proposition 5.1
Let X be a nondegenerate random variable such that \({\mathbb {E}}(\min \{X,X'\})\) and \({\mathbb {E}}(\max \{X,X'\})\) are finite, where \(X'\) is an independent copy of X. For the weighted q-Gini function introduced in Eq. (39), if T is absolutely continuous with the same support as X, and if \(q_i(u)=u\), \(0\le u\le 1\), for \(i=1,2\), then
Proof
The proof follows by making use of Eq. (39) and Theorem 4.1 in Di Crescenzo (1999), extended to the case of a general support of X, by taking \(Z=\Psi (\min \{X,X'\},\max \{X,X'\})\) and \(g(\cdot )=F_T(\cdot )\). \(\square \)
We immediately note that Eq. (41) generalizes Eq. (11) to the present context.
6 Two-dimensional cumulative information generating function
Let us now extend the analysis of the CIGF to the case of a two-dimensional random vector. In analogy with Definition 2.1, and avoiding trivial degenerate cases, we introduce the following
Definition 6.1
Let (X, Y) be a random vector with nondegenerate components, having joint CDF and SF given respectively by
We consider the following domain
The CIGF of (X, Y) is defined as:
where
We first discuss a few examples. The first example is motivated by the fact that if \(X\sim \text {Bernoulli}\left( \frac{1}{2}\right) \), then \(G_X(\alpha ,\beta )=\left( \frac{1}{2}\right) ^{\alpha +\beta }\) (cf. Table 2), thus satisfying the symmetry conditions expressed in Remark 2.3.
Example 6.1
Let (X, Y) be a discrete random vector, with probability function
for \(\theta \in \left( -\frac{1}{4},\frac{1}{4}\right) \). Therefore the CDF and the SF are identical, given by \(F(x,y)={\overline{F}}(x,y)=\frac{1}{4}+\theta \), for \((x,y)\in {\mathcal {S}}_{(X,Y)}=[0,1]^2\). Hence, from Definition 6.1 the CIGF of (X, Y) is
Example 6.2
Let (X, Y) be an absolutely continuous random vector, uniformly distributed in the triangular domain \({\mathcal {T}}=\left\{ (x,y)\in {\mathbb {R}}^2: 0\le x\le 1,\, 0\le y\le 1-x\right\} \). The PDF, CDF and SF are given respectively by
In this case \({\mathcal {S}}_{(X,Y)}={\mathcal {T}}\), so that the CIGF of (X, Y) is
with
Example 6.3
Let (X, Y) be an absolutely continuous random vector, distributed on the domain \({\mathcal {Q}}=[0,1]^2\), with PDF, CDF and SF given respectively by, for \((x,y)\in {\mathcal {Q}}\),
so that in this case \({\mathcal {S}}_{(X,Y)}={\mathcal {Q}}\). Since \(\left| \frac{x+y}{2}\right| <1\) for \((x,y)\in {\mathcal {Q}}\), recalling the expression of the Gauss hypergeometric function (cf. 15.3.1 of Abramowitz and Stegun (1972))
and making use of 15.3.7 in Abramowitz and Stegun (1972) and of 2.21.1.4 in Prudnikov et al. (1986), the CIGF of (X, Y) is
with \(D_{(X,Y)}=\{(\alpha ,\beta )\in {\mathbb {R}}^2:\alpha ,\beta \in {\mathbb {R}}{\setminus } {\mathbb {Z}}_0^-\}\), where the function \({}_{3}F_2\) can be found in Prudnikov et al. (1986), for instance.
The following result is an immediate consequence of the definitions involved.
Proposition 6.1
Let (X, Y) be a random vector having finite CIGF. If X and Y are independent then
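Since independence factorizes the joint CDF and SF into the marginals, the two-dimensional CIGF, assumed here to be the double integral \(\iint F(x,y)^{\alpha }\,{\overline{F}}(x,y)^{\beta }\,\textrm{d}x\,\textrm{d}y\) by analogy with Eq. (6), factorizes into the product of the marginal CIGFs; this is the natural reading of Proposition 6.1, whose display is not reproduced here. A minimal Python check for independent Uniform(0,1) components:

```python
def cigf_2d(F, Fbar, a, b, m=400):
    # Midpoint double integral of F(x,y)**a * Fbar(x,y)**b over the unit
    # square, the assumed two-dimensional analogue of Eq. (6).
    h = 1.0 / m
    total = 0.0
    for i in range(m):
        x = (i + 0.5) * h
        for j in range(m):
            y = (j + 0.5) * h
            total += F(x, y) ** a * Fbar(x, y) ** b
    return total * h * h

def cigf_1d(a, b, m=400):
    # Marginal CIGF of a Uniform(0,1) variable: int_0^1 x**a * (1-x)**b dx
    h = 1.0 / m
    return h * sum(((i + 0.5) * h) ** a * (1.0 - (i + 0.5) * h) ** b
                   for i in range(m))

# Independent Uniform(0,1) components: F(x,y) = x*y, Fbar(x,y) = (1-x)*(1-y).
joint = cigf_2d(lambda x, y: x * y, lambda x, y: (1 - x) * (1 - y), 1, 1)
product = cigf_1d(1, 1) ** 2
```

For \(\alpha =\beta =1\) the exact value is \(\left(\int_0^1 x(1-x)\,\textrm{d}x\right)^2 = 1/36\).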
Consider a nonnegative random vector (X, Y) with support \((0, r_1)\times (0, r_2)\), for \(r_1,r_2\in (0,+\infty ]\). In many practical situations, it is worthwhile to adopt the following information measures for multi-device systems. The joint cumulative residual entropy of (X, Y) is defined as (cf. Rao et al. (2004))
A dynamic version of this measure has been studied by Rajesh et al. (2014). Similarly, the joint cumulative entropy of (X, Y) is defined as (cf. Di Crescenzo and Longobardi (2009))
Remark 6.1
We recall that, if (X, Y) has support \((0, r_1)\times (0, r_2)\), for \(r_1,r_2\in (0,+\infty )\), and if X and Y are independent, then (cf. Proposition 2.2 of Di Crescenzo and Longobardi (2009))
Under the same assumptions, similarly, from Eq. (42) we can observe that
Remark 6.2
For the discrete random vector (X, Y) considered in Example 6.1, we have
and
Hence, in this case Eqs. (44) and (45) are satisfied if and only if X and Y are independent, i.e. \(\theta =0\).
In analogy with the one-dimensional measures considered in Table 1, we define the generalized and the fractional versions of the measures given in Eqs. (42) and (43).
Definition 6.2
Let (X, Y) be a nonnegative random vector with support \((0,r_1)\times (0,r_2)\), where \(r_1,r_2\in (0,+\infty ]\). The generalized cumulative residual entropy of order n of (X, Y) is defined as
while the generalized cumulative entropy of order n of (X, Y) is defined as
Definition 6.3
Let (X, Y) be a nonnegative random vector with support \((0,r_1)\times (0,r_2)\), with \(r_1,r_2\in (0,+\infty ]\). The fractional cumulative residual entropy of (X, Y) is defined as
whereas the fractional cumulative entropy of (X, Y) is defined as
In analogy with Propositions 2.1, 2.2 and 2.3, we now propose the following generalizations, whose proofs are analogous.
Proposition 6.2
Let (X, Y) be a random vector with finite CIGF \(G_{(X,Y)}(\alpha ,\beta )\), for \((\alpha ,\beta )\in D_{(X,Y)}\), and let \({{\mathcal {S}}_{(X,Y)}}=(0,r_1)\times (0,r_2)\) where \(r_1,r_2\in (0,+\infty ]\). If \((0,1)\in D_{(X,Y)}\), then
If \((1,0)\in D_{(X,Y)}\), then
In analogy with Eq. (21), if (X, Y) is a random vector with CDF F(x, y) and SF \({\overline{F}}(x,y)\), for all \((x,y)\in {\mathcal {S}}_{(X,Y)}\) we can introduce the two-dimensional odds function as
Hence, the two-dimensional extension of Eq. (22) can be given in terms of the function in (46), so that
for all \(\beta \in {\mathbb {R}}\) such that the right-hand side is finite.
7 Concluding remarks
In this paper, we defined the cumulative information generating function and its suitable distortion-based extensions. The CIGF is noteworthy in the context of information measures, since it allows one to determine the cumulative residual entropy and the cumulative entropy of a given probability distribution, even in their generalized and fractional forms.
Several results, properties and bounds have been studied, also with reference to symmetry properties and relations with the equilibrium density. Moreover, we illustrated that the considered functions are variability measures which extend the Gini mean semi-difference.
In the realm of reliability theory, the CIGF and its extensions have been found useful for studying properties of k-out-of-n systems, with special regard to the reliability of multi-component stress–strength systems.
Future developments can be oriented to connections with other notions, such as information divergence (see Toulias and Kitsos (2021)) and Rényi entropy (see, for instance, Buryak and Mishura (2021)), also with reference to possible applications and distortion-based extensions of the two-dimensional CIGF introduced in Sect. 6.
References
Abramowitz M, Stegun IA (1972) Handbook of mathematical functions with formulas, graphs, and mathematical tables. National Bureau of Standards Applied Mathematics Series, No. 55
Asadi M, Zohrevand Y (2007) On the dynamic cumulative residual entropy. J Stat Plan Inference 137(6):1931–1941
Belzunce F, Martínez-Riquelme C, Mulero J (2016) An introduction to stochastic orders. Elsevier, London
Bhattacharyya GK, Johnson RA (1974) Estimation of reliability in a multicomponent stress-strength model. J Am Stat Assoc 69(348):966–970
Bickel PJ, Lehmann EL (2012) Descriptive statistics for nonparametric models IV. Spread. Selected Works of E. L. Lehmann (Rojo J., ed.). Selected Works in Probability and Statistics. Springer, Boston, pp 519–526
Boland PJ, Proschan F (1983) The reliability of \(K\) out of \(N\) systems. Ann Probab 11(3):760–764
Buryak F, Mishura Y (2021) Convexity and robustness of the Rényi entropy. Mod Stoch Theory Appl 8(3):387–412
Clark DE (2020) Local entropy statistics for point processes. IEEE Trans Inf Theory 66(2):1155–1163
Cox DR (1972) Regression models and life-tables. J Royal Stat Soc Ser B Stat Methodol 34(2):187–220
Cover TM, Thomas JA (1991) Elements of information theory. John Wiley & Sons, New York
Das S, Kayal S (2021) Some ordering results for the Marshall and Olkin’s family of distribution. Commun Math Stat 9(2):153–179
Di Crescenzo A (1999) A probabilistic analogue of the mean value theorem and its applications to reliability theory. J Appl Probab 36(3):706–719
Di Crescenzo A (2000) Some results on the proportional reversed hazards model. Stat Probab Lett 50(4):313–321
Di Crescenzo A, Kayal S, Meoli A (2021) Fractional generalized cumulative entropy and its dynamic version. Commun Nonlinear Sci Numer Simulat 102:105899
Di Crescenzo A, Longobardi M (2009) On cumulative entropies and lifetime estimations. Lecture Notes in Computer Science, LNCS (PART 1) 5601, 132–141
Di Crescenzo A, Longobardi M (2009) On cumulative entropies. J Stat Plan Inference 139(12):4072–4087
Dulac G, Simon T (2023) On cumulative Tsallis entropies. Acta Appl Math 188(1):9
Giorgi GM, Nadarajah S (2010) Bonferroni and Gini indices for various parametric families of distributions. Metron 68:23–46
Giovagnoli A, Wynn H (2012) (U, V) ordering and a duality theorem for risk aversion and Lorenz type orderings. LSE Philosophy Papers, Centre for Philosophy of Natural and Social Science, London, UK
Golomb SW (1966) The information generating function of a probability distribution. IEEE Trans Inf Theory 12(1):75–77
Gradshteyn IS, Ryzhik IM (2015) Table of integrals, series, and products, 8th edn. Elsevier/Academic Press, Amsterdam
Greselin F, Zitikis R (2018) From the classical Gini index of income inequality to a new Zenga-type relative measure of risk: a modeller’s perspective. Econometrics 6(1):4
Gupta RC, Gupta RD (2007) Proportional reversed hazard rate model and its applications. J Stat Plan Inference 137(11):3525–3536
Gupta RC, Gupta RD, Gupta PL (1998) Modeling failure time data by Lehman alternatives. Commun Stat Theory Methods 27(4):887–904
Hu T, Chen O (2020) On a family of coherent measures of variability. Insur Math Econ 95:173–182
Kayal S (2016) On generalized cumulative entropies. Probab Eng Inf Sci 30(4):640–662
Kharazmi O, Tamandi M, Balakrishnan N (2021) Information generating function of ranked set samples. Entropy 23(11):1381
Kharazmi O, Balakrishnan N (2021) Jensen-information generating function and its connections to some well-known information measures. Stat Probab Lett 170:108995
Kharazmi O, Balakrishnan N (2023) Cumulative and relative cumulative residual information generating measures and associated properties. Commun Stat Theory Methods 52(15):5260–5273
Kilbas AA, Srivastava HM, Trujillo JJ (2006) Theory and applications of fractional differential equations. N-Holl Math Stud 204
Kirmani SNUA, Gupta RC (2001) On the proportional odds model in survival analysis. Ann Inst Stat Math 53(2):203–216
Kohansal A, Shoaee S (2021) Bayesian and classical estimation of reliability in a multicomponent stress-strength model under adaptive hybrid progressive censored data. Stat Pap 62(1):309–359
Kumar V, Dangi B (2023) Quantile-based Shannon entropy for record statistics. Commun Math Stat 11(2):283–306
Kumar D, Klefsjö B (1994) Proportional hazards model: a review. Reliab Eng Syst Saf 44(2):177–188
Navarro J (2022) Introduction to system reliability theory. Springer
Prudnikov AP, Brychkov YA, Marichev OI (1986) Integrals and series, vol 3. Gordon and Breach, London
Psarrakos G, Navarro J (2013) Generalized cumulative residual entropy and record values. Metrika 76:623–640
Rajesh G, Abdul-Sathar EI, Muraleedharan Nair KR, Reshmi KV (2014) Bivariate extension of dynamic cumulative residual entropy. Stat Methodol 16:72–82
Rao M (2005) More on a new concept of entropy and information. J Theor Probab 18(4):967–981
Rao M, Chen Y, Vemuri BC (2004) Cumulative residual entropy, a new measure of information. IEEE Trans Inf Theory 50(6):1220–1228
Schilling RL (2005) Measures, integrals and martingales. Cambridge University Press, Cambridge
Shaked M, Shanthikumar JG (2007) Stochastic orders. Springer, New York
Smitha S, Sudheesh KK, Sreedevi EP (2023) Dynamic cumulative residual entropy generating function and its properties. Commun Stat Theory Methods. https://doi.org/10.1080/03610926.2023.2235448
Sordo MA, Suárez-Llorens A (2011) Stochastic comparisons of distorted variability measures. Insur Math Econ 49(1):11–17
Toomaj A, Di Crescenzo A (2020) Generalized entropies, variance and applications. Entropy 22(6):709
Toulias TL, Kitsos CP (2021) Information divergence and the generalized normal distribution: a study on symmetricity. Commun Math Stat 9(4):439–465
Wang S (1996) Premium calculation by transforming the layer premium density. Astin Bull 26(1):71–92
Xiong H, Shang P, Zhang Y (2019) Fractional cumulative residual entropy. Commun Nonlinear Sci Numer Simulat 78:104879
Acknowledgements
The authors are members of the group GNCS of INdAM (Istituto Nazionale di Alta Matematica). This work is partially supported by MIUR–PRIN 2017, Project “Stochastic Models for Complex Systems” (no. 2017JFFHSH).
Funding
Open access funding provided by Università degli Studi di Salerno within the CRUI-CARE Agreement. The Funding was provided by Gruppo Nazionale per il Calcolo Scientifico.
Contributions
All authors have contributed equally to this work.
Ethics declarations
Conflict of interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Capaldo, M., Di Crescenzo, A. & Meoli, A. Cumulative information generating function and generalized Gini functions. Metrika (2023). https://doi.org/10.1007/s00184-023-00931-3
Keywords
- Entropy
- Proportional hazard model
- Proportional reversed hazard model
- k-out-of-n system
- Stress–strength system
- Variability measure