Cumulative Information Generating Function and Generalized Gini Functions

We introduce and study the cumulative information generating function, which provides a unifying mathematical tool for dealing with classical and fractional entropies based on the cumulative distribution function and on the survival function. Specifically, after establishing its main properties and some bounds, we show that it is itself a variability measure that extends the Gini mean semi-difference. We also provide (i) an extension of this measure, based on distortion functions, and (ii) a weighted version based on a mixture distribution. Furthermore, we explore some connections with the reliability of $k$-out-of-$n$ systems and with stress-strength models for multi-component systems. Finally, we address the problem of extending the cumulative information generating function to higher dimensions.


Introduction and background
In recent years there has been deep interest in proposing new measures of uncertainty, in response to the increasingly diversified needs of researchers in the fields of reliability and risk analysis. At the same time, it is relatively easy to get lost in the vast sea of new notions. To meet these needs, in this paper we propose a new generating function which is able to recover the cumulative residual entropy and the cumulative entropy, as well as both their generalized and fractional extensions.
If X is a nonnegative absolutely continuous random variable having support (0, r), with r ∈ (0, +∞], and probability density function (PDF) f, the differential entropy of X is defined as (see, for instance, Cover and Thomas [10])

H(X) = − ∫_0^r f(x) log f(x) dx. (1)

Such a measure can be obtained from the information generating function, defined by Golomb [20] as

IG_X(ν) = ∫_0^r [f(x)]^ν dx, (2)

where ν ∈ R is such that the right-hand side of (2) is finite. Indeed, Eqs. (1) and (2) give

H(X) = − d/dν IG_X(ν) |_{ν=1}. (3)

Recent developments and examples of applications of information generating functions can be found in Clark [8], Kharazmi et al. [27], and Kharazmi and Balakrishnan [28].
We remark that the differential entropy can take negative values, whereas the Shannon entropy of a discrete distribution is nonnegative. To avoid this drawback, and for other reasons as mentioned in Rao et al. [40], various alternative measures have been proposed recently. Table 1 shows some information measures for a random variable X having support (0, r), with r ∈ (0, +∞], having respectively cumulative distribution function (CDF) and survival function (SF) given by

F(x) = P(X ≤ x), F̄(x) = P(X > x) = 1 − F(x), x ∈ R.

We remark that the entropies presented in Table 1 can also be expressed in terms of the cumulative hazard rate and the cumulative reversed hazard rate of X, defined respectively as

Λ(x) = − log F̄(x), T(x) = − log F(x). (4)
These functions are involved in the cumulative residual entropy introduced in [40], and in the cumulative entropy (see Di Crescenzo and Longobardi [15], [16]), given respectively in cases (i) and (ii) of Table 1. Such measures are obtained by replacing the PDF in (1) with the SF and the CDF, respectively. This preserves the fact that the logarithm of the probability of an event represents the information contained in the event, in accordance with the Shannon entropy in the discrete case.
Both the cumulative residual entropy and the cumulative entropy take nonnegative values, vanishing only in the case of degenerate random variables. These measures are particularly suitable for describing information in problems related to reliability theory, where X denotes the random lifetime of an item and x is the reference time. In particular, in Table 1, the cumulative residual entropy (i) and the generalized versions (iii) and (v) deal with events for which the uncertainty is related to the future, while the cumulative entropies (ii), (iv) and (vi) are suitable to quantify the information when the uncertainty is related to the past. In addition, Asadi and Zohrevand [2] showed that CRE(X) = E[mrl(X)], where mrl(X) is the mean residual life of X. Also, in [16] it is shown that CE(X) = E[μ(X)], where μ(X) is the mean inactivity time of X. Moreover, other applications of these information measures can be found in risk theory, since risk is strictly related to the notion of uncertainty; see e.g. Dulac and Simon [17].
Recently, Psarrakos and Navarro [37] introduced the generalized cumulative residual entropy of order n of X, defined as in (iii) of Table 1, in order to extend the cumulative residual entropy. A dual information measure, known as the generalized cumulative entropy of order n, was proposed by Kayal [26] (cf. case (iv) of Table 1). Various results on these generalized measures have been studied by Toomaj and Di Crescenzo in [45]. In particular, the measures given in (iii) and (iv) of Table 1 play a role in the theory of point processes. Indeed, the generalized cumulative residual entropy of order n, say CRE_n(X), is equal to the mean of the (n + 1)-th interepoch interval of a non-homogeneous Poisson process having cumulative intensity function given by the first of (4). Similarly, the generalized cumulative entropy of order n can be viewed as an expected spacing in lower record values (see, for instance, Section 6 of [45]). They are also related to the upper and lower record values densities (see, for instance, Kumar and Dangi [33]).

Table 1: Information measures of interest, for a given random lifetime X with support (0, r), where r ∈ (0, +∞], with n ∈ N_0 for case (iii), n ∈ N for case (iv), ν ≥ 0 for case (v) and ν > 0 for case (vi).

(i) cumulative residual entropy: CRE(X) = − ∫_0^r F̄(x) log F̄(x) dx
(ii) cumulative entropy: CE(X) = − ∫_0^r F(x) log F(x) dx
(iii) generalized cumulative residual entropy: CRE_n(X) = (1/n!) ∫_0^r F̄(x) [Λ(x)]^n dx
(iv) generalized cumulative entropy: CE_n(X) = (1/n!) ∫_0^r F(x) [T(x)]^n dx
(v) fractional generalized cumulative residual entropy: CRE_ν(X) = (1/Γ(ν+1)) ∫_0^r F̄(x) [Λ(x)]^ν dx
(vi) fractional generalized cumulative entropy: CE_ν(X) = (1/Γ(ν+1)) ∫_0^r F(x) [T(x)]^ν dx
Fractional versions of the above measures have been studied as well, with the aim of providing more advanced mathematical tools to handle complex systems and anomalous dynamics. Specifically, see Xiong et al. [48] and Di Crescenzo et al. [14] for the fractional generalized cumulative residual entropy and the fractional generalized cumulative entropy of X, given respectively in cases (v) and (vi) of Table 1. Certain features of fractional calculus allow these measures to better capture long-range phenomena and nonlocal dependence in some random systems.
The entropies considered in Table 1 deal with nonnegative random variables, since these often represent random lifetimes of interest in reliability theory. However, they can be straightforwardly extended to the case when X has a general support contained in (l, r), with −∞ ≤ l < r ≤ +∞.
The aim of this paper is to propose and study a new information generating function that, in analogy with the functions in Eqs. (2) and (3), is able to recover the information measures presented in Table 1. It is defined as the integral of the product of suitable powers of the CDF and the SF. Throughout the paper it emerges that the advantages related to the use of the new generating function include:
- the convenience of gaining information from both the CDF and the SF of the random variable under investigation,
- the existence of suitable applications to notions of interest in reliability theory, such as proportional hazards, the odds function, order statistics, k-out-of-n systems, and stress-strength models for multi-component systems,
- the possibility of using it as a measure of concentration, since it is an extension of the Gini mean semi-difference.
With reference to the latter statement, we will also show that the proposed generating function can be extended (i) by replacing the powers of the CDF and the SF with suitable distortion functions, and (ii) by defining a weighted version based on a mixture distribution. Moreover, it is worth mentioning that the proposed generating function and its generalized versions can be applied in risk analysis. Indeed, we show that they are proper variability measures.

Plan of the paper
In Section 2 we define the new generating function, named the cumulative information generating function. We show that it is useful to recover the measures given in Table 1. Moreover, we illustrate the effect of an affine transformation of the considered random variable, and provide some connections with the proportional hazards model, the proportional reversed hazards model, and the odds function.
In Section 3 we use various well-known inequalities in order to obtain some bounds for the cumulative information generating function. In addition, we show how the cumulative information generating function is related (i) to the Euler beta function, and (ii) to Golomb's information generating function of the equilibrium random variable.
In Section 4 we discuss connections with some notions of systems reliability, such as series and parallel systems and k-out-of-n systems, with special attention to the reliability of multi-component stress-strength systems.
In Section 5 we introduce the above-mentioned generalized information measures, named the 'q-distorted Gini function' and the 'weighted q-distorted Gini function', which are also related to the Gini mean semi-difference. We prove that they are suitable variability measures, since in particular the dispersive order between pairs of random variables implies the ordering of these functions. An application to the reliability of multi-component stress-strength systems is provided, too.
Section 6 concerns the extension of the cumulative information generating function to the case of a two-dimensional random vector, with special care given to the case of independent components. Some final remarks are then given in Section 7. Throughout the paper, the terms increasing and decreasing are used in the non-strict sense, N denotes the set of positive integers, and N_0 = N ∪ {0}. Moreover, given a distribution function F(x), we denote the right-continuous version of its inverse by F^{−1}(u) = sup{x : F(x) ≤ u}, u ∈ [0, 1], which is also named the quantile function in the statistical framework.

Cumulative information generating function
In the same spirit as Eq. (2), we now introduce a new generating function which allows one to measure the cumulative information coming from both the CDF and the SF.

Definition 2.1 Let X be a random variable with CDF F(x) and SF F̄(x), x ∈ R, and let

l = inf{x ∈ R : F(x) > 0}, r = sup{x ∈ R : F(x) < 1} (5)

denote respectively the lower and upper limits of the support of X (which may be finite or infinite). The cumulative information generating function (CIGF) of X is defined as

G_X(α, β) = ∫_l^r [F(x)]^α [F̄(x)]^β dx, (α, β) ∈ D_X, (6)

where D_X = {(α, β) ∈ R^2 : G_X(α, β) < +∞}. Clearly, one has G_X(0, 0) = r − l.

Example 2.1 Let X ∼ Erlang(2, λ), with λ > 0, and CDF F(x) = 1 − e^{−λx} − λx e^{−λx}, x ≥ 0. From (6), recalling the geometric series Σ_{k≥0} x^k = 1/(1 − x) for |x| < 1, and expanding the powers of F and F̄, we obtain the CIGF of X in series form.

We remark that if X is a discrete random variable with finite support {x_1 ≤ x_2 ≤ . . . ≤ x_n}, then due to (6) the CIGF can be expressed as a sum, i.e.

G_X(α, β) = Σ_{i=1}^{n−1} [F(x_i)]^α [F̄(x_i)]^β (x_{i+1} − x_i).
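The definition can also be checked numerically. The sketch below is our own illustration (not part of the paper): it approximates G_X(α, β) by a trapezoidal rule for the Exp(λ) law, where the closed form for α = β = 1 is ∫_0^∞ (1 − e^{−λx}) e^{−λx} dx = 1/(2λ).

```python
import math

def cigf(F, alpha, beta, lo, hi, n=20000):
    """Trapezoidal approximation of G_X(alpha, beta) = ∫ F(x)^alpha (1 - F(x))^beta dx."""
    h = (hi - lo) / n
    total = 0.0
    for i in range(n + 1):
        x = lo + i * h
        w = 0.5 if i in (0, n) else 1.0   # trapezoid endpoint weights
        total += w * (F(x) ** alpha) * ((1.0 - F(x)) ** beta)
    return total * h

# Exp(lam): F(x) = 1 - e^{-lam x}; for alpha = beta = 1 the integral equals 1/(2 lam).
lam = 2.0
g11 = cigf(lambda x: 1.0 - math.exp(-lam * x), 1.0, 1.0, 0.0, 40.0 / lam)
print(g11)  # ≈ 0.25
```

The function name `cigf` and the truncation of the integration range at 40/λ are our own choices for the sketch.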
Other examples will be illustrated below.
In the next theorem we show the effect of an affine transformation. The result follows from Definition 2.1, recalling the relation between the CDFs of X and Y = γX + δ.

Theorem 2.1 Let X be a random variable with finite CIGF, and consider the affine transformation Y = γX + δ, with γ ≠ 0 and δ ∈ R. Then

G_Y(α, β) = |γ| G_X(α, β) if γ > 0, G_Y(α, β) = |γ| G_X(β, α) if γ < 0. (9)

Remark 2.1 If X is absolutely continuous, with PDF f(x), by setting u = F(x) in the right-hand side of Eq. (6), the CIGF of X can be expressed as

G_X(α, β) = ∫_0^1 u^α (1 − u)^β / f(F^{−1}(u)) du. (10)

Remark 2.2 The CIGF of a nonnegative random variable X can be regarded as a measure of concentration. Indeed, if X′ is an independent copy of X, we can deduce that (see, for instance, the proof of Proposition 1 of Rao [39])

G_X(1, 1) = ∫_l^r F(x) F̄(x) dx = (1/2) E[|X − X′|]. (11)

This quantity is also known as the Gini mean semi-difference, which represents an example of a coherent measure of variability with comonotonic additivity (see Section 2.2 of Hu and Chen [25]).
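The identification of ∫ F F̄ dx with the Gini mean semi-difference can be verified by simulation. In the hedged sketch below (ours, not from the paper), for the Exp(λ) law the difference |X − X′| of two independent copies is again Exp(λ)-distributed, so (1/2)E|X − X′| = 1/(2λ).

```python
import random

random.seed(12345)
lam, n = 2.0, 200_000
# Monte Carlo estimate of the Gini mean semi-difference (1/2) E|X - X'|
acc = 0.0
for _ in range(n):
    acc += abs(random.expovariate(lam) - random.expovariate(lam))
semi = 0.5 * acc / n
print(semi)  # ≈ 1/(2*lam) = 0.25
```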
Similarly to the information generating function defined in (2), the following generating measures can be introduced as marginal versions of the CIGF.

Definition 2.2 Under the same assumptions of Definition 2.1, the cumulative information generating measure and the cumulative residual information generating measure are defined respectively by

H_X(α) = G_X(α, 0) = ∫_l^r [F(x)]^α dx, (α, 0) ∈ D_X, (12)

and

K_X(β) = G_X(0, β) = ∫_l^r [F̄(x)]^β dx, (0, β) ∈ D_X. (13)

We remark that when X is absolutely continuous with support (0, ∞), the measure (13) has been introduced in Eq. (10) of Kharazmi and Balakrishnan [29], denoted as CIG_β(F), for β > 0. Under these assumptions, other properties and the non-parametric estimation of the function given in Eq. (13) have been studied in Smitha et al. [43].
Remark 2.3 If X is a random variable with finite CIGF and symmetric CDF, in the sense that for some m ∈ R one has F(m − x) = F̄(m + x) for all x ∈ R, then:
(i) from Eq. (6) we have G_X(α, β) = G_X(β, α) for all (α, β) ∈ D_X;
(ii) from Eqs. (12) and (13) we have H_X(α) = K_X(α) for all (α, 0) ∈ D_X;
(iii) under the assumptions of Theorem 2.1, Eq. (9) becomes G_Y(α, β) = |γ| G_X(α, β) for any γ ≠ 0.

Recalling the measures (i) and (ii) of Table 1, we can now show that the cumulative residual entropy and the cumulative entropy can be obtained from the CIGF.

Proposition 2.1 Under the assumptions of Definition 2.1, one has

CRE(X) = − ∂/∂β G_X(α, β) |_{α=0, β=1}, CE(X) = − ∂/∂α G_X(α, β) |_{α=1, β=0}.

Proof. The stated results follow from Eq. (6), by differentiation under the integral sign. □

Let us now obtain a similar relation for the generalized cumulative residual entropy and the generalized cumulative entropy (cf. cases (iii) and (iv) of Table 1).
Proposition 2.2 Under the assumptions of Definition 2.1, for all n ∈ N one has

CRE_n(X) = ((−1)^n / n!) ∂^n/∂β^n G_X(α, β) |_{α=0, β=1}, (14)

CE_n(X) = ((−1)^n / n!) ∂^n/∂α^n G_X(α, β) |_{α=1, β=0}. (15)

Proof. The proof of (15) is analogous to that of Proposition 2.1, by using the identity

∂^n/∂ν^n x^ν = x^ν (log x)^n, x > 0.

In the same way we obtain Eq. (14). □

In order to extend the above relations to the case of the generalized fractional cumulative residual entropy and the generalized fractional cumulative entropy, given respectively in cases (v) and (vi) of Table 1, let us now briefly recall the expression of the Caputo fractional derivative (see, for instance, Kilbas et al. [30]). Specifically, given a function y(x_1, x_2), we consider

(^C D^ν_{x_1} y)(x_1, x_2) = (1 / Γ(n − ν)) ∫_{−∞}^{x_1} (x_1 − t)^{n−ν−1} (∂^n/∂t^n) y(t, x_2) dt, (16)

that is, the left-sided Caputo partial fractional derivative with respect to x_1 of order ν on the whole axis R, where ν ∈ C with Re(ν) > 0, ν ∉ N, and n = ⌊Re(ν)⌋ + 1.
Proof. We show only the proof of Eq. (18), because Eq. (17) can be derived similarly. From (16) we obtain a double-integral representation, in which the order of integration can be exchanged by Fubini's theorem. By setting t − α = z and γ = −z log F(x), and evaluating the resulting gamma-type integral, Eq. (18) follows by taking α = 1 and β = 0. □

Remark 2.4 Recently, Kharazmi and Balakrishnan [29] obtained related representations. Moreover, for the generalized versions and for the fractional versions analogous relations hold, respectively.

Now we recall two important models that are largely adopted in survival analysis and reliability theory. Let X be a random lifetime with CDF F(x) and SF F̄(x). The proportional hazards model (see, for instance, Cox [9], Kumar and Klefsjö [34]) is expressed by a random lifetime X*_γ with SF

F̄_{X*_γ}(x) = [F̄(x)]^γ, γ > 0. (19)

Similarly, the proportional reversed hazards model (see, for instance, Di Crescenzo [13], Gupta and Gupta [23], Gupta et al. [24]) is expressed by a random lifetime X̃_θ with CDF

F_{X̃_θ}(x) = [F(x)]^θ, θ > 0. (20)

Recently, modified versions of these models have been studied by Das and Kayal [11].
Remark 2.5 The measures given in Definition 2.2 satisfy the following relations.
(i) Under the proportional hazards model (19) we have K_{X*_γ}(β) = K_X(γβ).
(ii) Under the proportional reversed hazards model (20) we have H_{X̃_θ}(α) = H_X(θα).

Remark 2.6 If X is a random lifetime such that D_X ⊆ (R^+)^2, then recalling Definition 2.1 and Eqs. (19) and (20), the CIGF of X can be expressed as

G_X(α, β) = ∫_l^r F_{X̃_α}(x) F̄_{X*_β}(x) dx,

i.e. in terms of the distributions arising in the proportional (reversed) hazards models.

Table 2: The CIGF for some notable distributions.
We now recall another useful concept. Let X be a random variable with CDF and SF denoted by F(x) and F̄(x), respectively. For all x ∈ (l, r) the odds function of X is (cf. Kirmani and Gupta [31])

O_X(x) = F(x) / F̄(x). (21)

This function represents the ratio of the probability of an event occurring to the probability of its not occurring, and always takes nonnegative finite values. It is used in reliability theory, because it quantifies the strength of the association between the failure of a system after time x and before time x. Due to Eq. (21), we can express the CIGF of X in terms of the odds function in two equivalent useful ways:

G_X(α, β) = ∫_l^r [O_X(x)]^α [F̄(x)]^{α+β} dx = ∫_l^r [O_X(x)]^{−β} [F(x)]^{α+β} dx.

Hence, when the parameters α and β may take negative values such that α + β = 0, the CIGF of X can be expressed in terms of the odds function alone as

G_X(α, −α) = ∫_l^r [O_X(x)]^α dx.

Table 2 shows various examples of the CIGF expressed in terms of the Euler beta function B(x, y) = ∫_0^1 t^{x−1} (1 − t)^{y−1} dt. Finally, by recalling Eq. (11), we remark that for α = β = 1 Eq. (8) and the examples in Table 2 are in agreement with Giorgi and Nadarajah [18].
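For the uniform law the CIGF reduces exactly to a beta function, since F(x) = x on (0, 1). The following is a minimal numerical sketch of ours (the helper name `beta_fn` is an assumed name, not from the paper):

```python
import math

def beta_fn(a, b):
    # Euler beta function via gamma functions
    return math.gamma(a) * math.gamma(b) / math.gamma(a + b)

# For X ~ U(0,1): G_X(alpha, beta) = ∫_0^1 x^alpha (1-x)^beta dx = B(alpha+1, beta+1).
alpha, beta = 2.0, 3.0
n = 100_000
# midpoint rule on (0, 1)
s = sum(((i + 0.5) / n) ** alpha * (1.0 - (i + 0.5) / n) ** beta for i in range(n)) / n
print(s, beta_fn(alpha + 1, beta + 1))  # both ≈ 1/60
```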

Inequalities and further results
In this section we obtain some bounds and further results regarding the CIGF. Specifically, we first refer to the well-known inequalities named after Chernoff, Bernoulli, Minkowski and Hölder (see, for instance, Schilling [41]).
, for s < λ. Thanks to Eqs. (8) and (25), some calculations then give a Chernoff-type bound for the CIGF. Let us now express some upper bounds for the CIGF in terms of the measures introduced in Definition 2.2.

Proposition 3.2 Under the assumptions specified in Definition 2.1, the CIGF of a random variable X satisfies suitable upper bounds in terms of H_X and K_X.

Proof. Due to Bernoulli's inequality with real exponents, valid for all x ∈ R, the thesis immediately follows from Definitions 2.1 and 2.2. □

Hereafter we use Minkowski's inequality to obtain suitable bounds for the measures introduced in Definition 2.2 and for G_X(γ, γ).

Proposition 3.3 Under the assumptions specified in Definitions 2.1 and 2.2, let X have finite support in (l, r); then (i) for all γ ≥ 1 such that (γ, 0) ∈ D_X and (0, γ) ∈ D_X suitable two-sided bounds hold.

Proof. By applying Minkowski's inequality for γ ≥ 1 twice, and combining the two resulting inequalities, we obtain the bounds for K_X(γ). The other relations can be obtained in the same way, by taking into account that, for γ ≥ 1, if (0, γ) ∈ D_X then (0, 2γ) ∈ D_X, and if (γ, 0) ∈ D_X then (2γ, 0) ∈ D_X. □

We now prove an upper bound for G_X(α, β), with α + β = 1, making use of Hölder's inequality.
Proposition 3.4 Under the assumptions specified in Definition 2.1, let X have finite support in (l, r), with (θ, 1 − θ) ∈ D_X for all θ ∈ (0, 1). Then

G_X(θ, 1 − θ) ≤ [H_X(1)]^θ [K_X(1)]^{1−θ}, θ ∈ (0, 1). (26)

Proof. Due to Hölder's inequality with conjugate exponents 1/θ and 1/(1 − θ), for all θ ∈ (0, 1) we have

∫_l^r [F(x)]^θ [F̄(x)]^{1−θ} dx ≤ (∫_l^r F(x) dx)^θ (∫_l^r F̄(x) dx)^{1−θ},

this yielding Eq. (26). □

We remark that Eq. (26) is satisfied as an equality when F(x) = F̄(x) for all x ∈ (l, r), i.e. when P(X = l) = P(X = r) = 1/2. Moreover, we note that the right-hand side of Eq. (26) can be rewritten by taking into account that, under the given assumptions, H_X(1) = r − E[X] and K_X(1) = E[X] − l.

Hereafter we show that the CIGF of an absolutely continuous random variable can be expressed as the product of the Euler beta function and the expected value of a suitably transformed beta-distributed random variable.

Proposition 3.5 If X is an absolutely continuous random variable with PDF f, CDF F and finite CIGF, then

G_X(α, β) = B(α + 1, β + 1) E[g(Y)], Y ∼ Beta(α + 1, β + 1), (27)

where

g(u) = 1 / f(F^{−1}(u)), u ∈ (0, 1). (28)

Proof. The thesis is obtained by making use of Eq. (10) and recalling the PDF of Y ∼ Beta(α + 1, β + 1). □

The above result collects various features of the CIGF, i.e. its relations (i) to the beta distribution, which reflects the form of the right-hand side of (6), and (ii) to the transformation f(F^{−1}(•)), which plays a relevant role in the context of variability measures, as developed in Section 5.
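The beta-representation of the CIGF can be illustrated by Monte Carlo for the exponential law, for which f(F^{−1}(u)) = λ(1 − u); direct substitution u = F(x) also gives the closed form B(α + 1, β)/λ. The following sketch is our own illustration, not part of the paper:

```python
import math
import random

random.seed(7)
B = lambda a, b: math.gamma(a) * math.gamma(b) / math.gamma(a + b)

lam, alpha, beta = 1.5, 2.0, 3.0
# For X ~ Exp(lam): f(F^{-1}(u)) = lam (1 - u), so the representation reads
# G_X(alpha, beta) = B(alpha+1, beta+1) * E[ 1 / (lam (1 - Y)) ],  Y ~ Beta(alpha+1, beta+1).
n = 200_000
mc = sum(1.0 / (lam * (1.0 - random.betavariate(alpha + 1, beta + 1)))
         for _ in range(n)) / n
g_est = B(alpha + 1, beta + 1) * mc
g_exact = B(alpha + 1, beta) / lam   # closed form via the substitution u = F(x)
print(g_est, g_exact)
```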
We conclude this section by relating the CIGF to a series involving Golomb's information generating function of the equilibrium random variable. To this aim, we recall that for a nonnegative random variable X, with SF F̄(x) and expected value E[X] ∈ (0, +∞), the equilibrium random variable of X is a nonnegative absolutely continuous random variable, denoted by X_e, whose PDF is given by

f_e(x) = F̄(x) / E[X], x ≥ 0.

Recalling Eq. (2), hereafter we denote by IG_{X_e} the Golomb's information generating function of the equilibrium random variable X_e.
Proposition 3.6 If X is a nonnegative random variable having expected value E[X] ∈ (0, +∞) and finite CIGF, with α > 0, then

G_X(α, β) = Σ_{k=0}^{∞} (−1)^k C(α, k) [E(X)]^{β+k} IG_{X_e}(β + k),

where C(α, k) denotes the generalized binomial coefficient and X_e is the equilibrium random variable of X.
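Since the exponential law coincides with its own equilibrium distribution, Golomb's IGF of X_e then admits the closed form IG(ν) = ∫_0^∞ (λe^{−λx})^ν dx = λ^{ν−1}/ν, which the following numerical sketch (ours, for illustration only) reproduces:

```python
import math

lam, nu = 2.0, 1.7
# Exp(lam) is its own equilibrium distribution: f_e(x) = (1 - F(x))/E[X] = lam e^{-lam x}.
# Golomb's IGF: IG(nu) = ∫_0^∞ (lam e^{-lam x})^nu dx = lam^(nu-1)/nu.
n, hi = 200_000, 20.0
h = hi / n
ig = sum((lam * math.exp(-lam * (i + 0.5) * h)) ** nu for i in range(n)) * h
print(ig, lam ** (nu - 1) / nu)
```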

Connections with systems reliability
In this section we relate some results exploited above to notions of interest in reliability theory.
Several applied problems involve complex systems consisting of many components. Here we focus on systems formed by n components, where X_1, X_2, . . ., X_n describe the random lifetimes of the components. We assume that they are independent and identically distributed (i.i.d.), with common CDF F(x) and SF F̄(x). As is well known, a parallel system continues to work until the last component fails, and thus its lifetime is described by the sample maximum

X_(n:n) = max{X_1, X_2, . . ., X_n}. (29)

Similarly, a series system fails as soon as the first component stops working, and thus its lifetime is described by the sample minimum

X_(1:n) = min{X_1, X_2, . . ., X_n}. (30)

Remark 4.1 Let n ∈ N. Recalling Definition 2.2, from Eqs. (29) and (30) it immediately follows that

H_{X_(n:n)}(α) = H_X(nα), K_{X_(1:n)}(β) = K_X(nβ),

where H_X and K_X denote respectively the cumulative information generating measure and the cumulative residual information generating measure of the X_i.
We now focus on the expression of the CIGF for the order statistics X_(n:n) and X_(1:n).
Proposition 4.1 For n ∈ N, let X_1, X_2, . . ., X_n be a random sample formed by i.i.d. random lifetimes having finite cumulative information generating measure H_X and cumulative residual information generating measure K_X. Then, the CIGF of the order statistics X_(n:n) and X_(1:n) can be expressed respectively as

G_{X_(n:n)}(α, β) = Σ_{j=0}^{∞} (−1)^j C(β, j) H_X(n(α + j)), (31)

and

G_{X_(1:n)}(α, β) = Σ_{j=0}^{∞} (−1)^j C(α, j) K_X(n(β + j)), (32)

where C(•, j) denotes the generalized binomial coefficient.

Proof. For simplicity, assume that the support of X is (0, r). Recalling Eq. (6), from Eq. (29) we have

G_{X_(n:n)}(α, β) = ∫_0^r [F(x)]^{nα} (1 − [F(x)]^n)^β dx = Σ_{j=0}^{∞} (−1)^j C(β, j) ∫_0^r [F(x)]^{n(α+j)} dx.

The right-hand side of (31) then follows making use of (12). Eq. (32) can be obtained similarly. □

It is well known that a system with n independent components is said to be a k-out-of-n system when it works if and only if at least k components work (see, for instance, Boland and Proschan [6]). Clearly, if k = 1 we have a parallel system, while for k = n we have a series system. For any k, the lifetime of the k-out-of-n system formed by components with i.i.d. lifetimes is expressed through the corresponding order statistics. This allows us to express the reliability and the information content of this kind of system in a tractable way.

Remark 4.2 Consider a k-out-of-n system formed by n components with i.i.d. random lifetimes, for n ∈ N. Denoting by X_(k:n) the corresponding k-th order statistic, the CIGF of X_(k:n) can be expressed in terms of the cumulative information generating measures, similarly to Proposition 4.1, recalling Eqs. (12) and (13). Moreover, for nonnegative lifetimes, the mean of X_(k:n) can be expressed in terms of the CIGF of X as

E[X_(k:n)] = Σ_{j=0}^{k−1} C(n, j) G_X(j, n − j).

Clearly, for k = 1 we have E[X_(1:n)] = G_X(0, n) = K_X(n).
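The order-statistic mean formula can be checked in the uniform case, where G_X(j, n − j) = B(j + 1, n − j + 1) and the mean of the k-th uniform order statistic, k/(n + 1), is classical. A small sketch of ours (the helper name `G_uniform` is assumed):

```python
import math

def G_uniform(a, b):
    # CIGF of U(0,1): G(alpha, beta) = B(alpha+1, beta+1)
    return math.gamma(a + 1) * math.gamma(b + 1) / math.gamma(a + b + 2)

n, k = 7, 3
# E[X_(k:n)] = sum_{j=0}^{k-1} C(n, j) G_X(j, n-j), since
# P(X_(k:n) > x) = sum_{j<k} C(n, j) F(x)^j (1 - F(x))^{n-j}.
mean_kn = sum(math.comb(n, j) * G_uniform(j, n - j) for j in range(k))
print(mean_kn, k / (n + 1))  # both equal 3/8
```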

Stress-strength models for multi-component systems
A further connection of the CIGF with systems reliability arises in the analysis of stress-strength models for multi-component systems. Let us consider a system with n components having i.i.d. strengths X_1, X_2, . . ., X_n with common CDF F(x). Assume that each component is stressed according to an independent random stress T having CDF F_T(x). Moreover, suppose that the system survives if and only if at least k out of n (1 ≤ k ≤ n) component strengths are greater than the stress. Then, the reliability of the considered multi-component stress-strength system is given by (cf. Bhattacharyya and Johnson [4])

R_{k,n} = Σ_{j=k}^{n} C(n, j) ∫_{−∞}^{+∞} [F̄(x)]^j [F(x)]^{n−j} dF_T(x), (33)

with R_{0,n} = 1. See also, for instance, the recent contribution by Kohansal and Shoaee [32] on the statistical inference of multi-component stress-strength reliability under suitable censored samples.
For instance, if T is distributed as X_1, then it is not hard to see that

R_{k,n} = (n − k + 1) / (n + 1). (34)

The following result is a straightforward consequence of Eqs. (6) and (33).
Under the assumptions of Proposition 4.2, an iterative formula allows us to evaluate the reliability of the multi-component stress-strength system. As an example, if X has the Power(θ) distribution, then from Proposition 4.2 and Table 2, after a few calculations, one obtains the expression of R_{k,n} given in Eq. (35). Note that the expression in (35) can also be represented as a ratio of Pochhammer symbols, or as an infinite product (cf. Eq. 8.325.1 of Gradshteyn and Ryzhik [21]). Clearly, if θ = 1 then Eq. (35) reduces to Eq. (34). Figure 1 shows some plots of R_{k,n} as given in (35).
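The case T =_d X_1 can be checked by simulation: with i.i.d. uniform strengths and an independent uniform stress, the system survives when at least k of the n strengths exceed the common stress. This Monte Carlo sketch (ours, not from the paper) is consistent with the closed form (n − k + 1)/(n + 1):

```python
import random

random.seed(3)
n_comp, k, trials = 5, 2, 100_000
hits = 0
for _ in range(trials):
    t = random.random()                            # stress T, same law as the strengths
    survivors = sum(random.random() > t for _ in range(n_comp))
    hits += (survivors >= k)                       # system works with >= k surviving strengths
print(hits / trials)  # ≈ (n - k + 1)/(n + 1) = 4/6
```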

Generalized Gini functions
This section is devoted to the analysis of a generalized version of the CIGF. Specifically, we aim to extend Definition 2.1 to the case in which the powers included in the right-hand side of Eq. (6) are replaced by suitable distortion functions. In this case, recalling Remark 2.2, we arrive at an extension of Eq. (11). Let X be a random variable with CDF F and SF F̄, and let q_i : [0, 1] → [0, 1], i = 1, 2, be two distortion functions, i.e. increasing functions such that q_i(0) = 0 and q_i(1) = 1 (cf. Section 2.9.2 of Belzunce et al. [3] and Section 2.4 of Navarro [36]). In some applications it is required that q_i be continuous or left-continuous; however, these assumptions are not necessarily required in general. The distorted distribution function and the distorted survival function of F through q_i are given respectively by

F_{q_i}(x) = q_i(F(x)), F̄_{q_i}(x) = q_i(F̄(x)), x ∈ R. (36)

From Eq. (36), in general one has q_1(F(x)) ≠ 1 − q_2(F̄(x)), so that the two distortions need not correspond to the same distribution. The functions given in (36) have been introduced in the context of the theory of choice under risk (see Wang [47]), and are largely used in various applied fields (for instance, see Sordo and Suárez-Llorens [44] for applications to variability measures).
Let us now consider the preannounced generalization of the CIGF based on (36).
Definition 5.1 Let X be a random variable with CDF F(x) and SF F̄(x), x ∈ R, and let q_1, q_2 : [0, 1] → [0, 1] be distortion functions. The q-distorted Gini function (or, shortly, q-Gini function) of X is defined as

Ĝ_X(q) = ∫_{−∞}^{+∞} q_1(F(x)) q_2(F̄(x)) dx, (37)

where q = (q_1, q_2), and the integral is assumed to be finite. Clearly, if the distortion functions are taken as q_1(x) = x^α and q_2(x) = x^β with (α, β) ∈ D_X, then Eq. (37) corresponds to the definition of the CIGF given in Eq. (6). Specifically, if α = β = 1 then we recover the Gini mean semi-difference (11).
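A quick numerical sketch of ours for the q-Gini function of U(0, 1) with the non-power distortions q_1(u) = u² and q_2(u) = √u (both increasing and fixing 0 and 1), whose value reduces to the beta integral B(3, 3/2) = 2/13.125:

```python
import math

# q-Gini of U(0,1) with q1(u) = u^2, q2(u) = sqrt(u):
# Ĝ_X(q) = ∫_0^1 u^2 (1 - u)^{1/2} du = B(3, 3/2) = 2 / (3.5 * 2.5 * 1.5)
n = 200_000
ghat = sum(((i + 0.5) / n) ** 2 * math.sqrt(1.0 - (i + 0.5) / n) for i in range(n)) / n
print(ghat, 2.0 / 13.125)
```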
It is worth mentioning that the q-Gini function may be viewed as an extension of the distorted measures treated in Giovagnoli and Wynn [19] and in Greselin and Zitikis [22]. In those papers, distortions of the CDF or the SF alone (which can be viewed as generalizations of Eqs. (12) and (13)) are considered for the analysis of stochastic dominance, Lorenz ordering and risk measures.
Hereafter we shall prove various results regarding the q-Gini function. They include the effect of an affine transformation of X and the pointwise ordering of the q-Gini functions.
To this aim, we recall that, if X and Y are nonnegative random variables with CDFs F and G, respectively, then X is said to be smaller than Y in the dispersive order, denoted by X ≤_d Y, if and only if (see Section 3.B of Shaked and Shanthikumar [42])

F^{−1}(v) − F^{−1}(u) ≤ G^{−1}(v) − G^{−1}(u) for all 0 < u ≤ v < 1. (38)

Among the variability stochastic orders, the dispersive order is one of the most popular, since it involves quantities that are easily tractable, requiring that the difference between any two quantiles of X is smaller than the corresponding quantity for Y. Moreover, if X and Y are absolutely continuous with PDFs f and g, respectively, then X ≤_d Y if and only if

g(G^{−1}(u)) ≤ f(F^{−1}(u)) for all u ∈ (0, 1).

We can now prove that, under suitable assumptions, the q-Gini function is a variability measure in the sense of Bickel and Lehmann [5].
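As a quick illustration of the dispersive order (our own sketch, not from the paper): for exponential laws the quantile spreads are F^{−1}(v) − F^{−1}(u) = (1/λ) log((1 − u)/(1 − v)), so Exp(λ_1) ≤_d Exp(λ_2) whenever λ_1 ≥ λ_2, and the Gini mean semi-differences 1/(2λ) are ordered accordingly.

```python
import math

lam1, lam2 = 3.0, 1.0           # lam1 >= lam2, so Exp(lam1) is less dispersed
q = lambda lam, u: -math.log(1.0 - u) / lam   # exponential quantile function
grid = [i / 100.0 for i in range(1, 100)]
# check the quantile-spread condition on a grid
dispersive = all(q(lam1, v) - q(lam1, u) <= q(lam2, v) - q(lam2, u)
                 for u in grid for v in grid if u < v)
print(dispersive, 1 / (2 * lam1) <= 1 / (2 * lam2))
```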
Theorem 5.1 Let q = (q_1, q_2) be a pair of distortion functions such that Ĝ_X(q) is finite. Then the q-Gini function satisfies the following properties:
1. Ĝ_{X+δ}(q) = Ĝ_X(q) for all δ ∈ R;
2. Ĝ_{γX}(q) = γ Ĝ_X(q) for all γ > 0;
3. Ĝ_X(q) ≥ 0;
4. Ĝ_X(q) = 0 if X is degenerate;
5. X ≤_d Y implies Ĝ_X(q) ≤ Ĝ_Y(q).

Proof. Properties 1 and 2 follow by recalling Eq. (37) and the relation between the CDFs of X and Y = γX + δ. Properties 3 and 4 are guaranteed by Definition 5.1. In analogy to Eq. (10), for absolutely continuous X and Y with PDFs f and g we can write

Ĝ_Y(q) − Ĝ_X(q) = ∫_0^1 q_1(u) q_2(1 − u) [1/g(G^{−1}(u)) − 1/f(F^{−1}(u))] du.

Hence, due to relation (38), property 5 immediately follows. □

In addition, we remark that if Ĝ_X(q) = 0 and if q_i(0^+) > 0 and q_i(1^−) < 1 for i = 1, 2, then X is necessarily a degenerate random variable.
It is worth mentioning that the results given in this section can be further extended. Indeed, under the same conditions given in Definition 5.1, and taking as reference the approach in [19], we introduce the weighted q-distorted Gini function (or, shortly, weighted q-Gini function) as follows:

Ĝ_X(q, F_T) = ∫_Δ q_1(F(x)) q_2(F̄(x)) dF_T(x), (39)

where F_T(x) is the CDF of a random variable T, and where the intersection of the supports of X and T is a non-empty set denoted by Δ. It is not hard to see that the function given in (39) satisfies properties 1-4 given in Theorem 5.1 for Ĝ_X(q). Concerning property 5, hereafter we see that additional assumptions are needed. Here, l_X, r_X and l_Y, r_Y are defined as in (5) for X and Y, respectively.

Theorem 5.2 Let X and Y be random variables having the same support, let q = (q_1, q_2), where q_i : [0, 1] → [0, 1], i = 1, 2, are distortion functions such that Ĝ_X(q, F_T) and Ĝ_Y(q, F_T) are finite, and let T be absolutely continuous with PDF f_T. If X ≤_d Y and one of two additional assumptions, (i) or (ii), holds, then

Ĝ_X(q, F_T) ≤ Ĝ_Y(q, F_T). (40)

Proof. Due to (39), by setting u = F(x) one has

Ĝ_Y(q, F_T) − Ĝ_X(q, F_T) = ∫_0^1 q_1(u) q_2(1 − u) [f_T(G^{−1}(u))/g(G^{−1}(u)) − f_T(F^{−1}(u))/f(F^{−1}(u))] du.

Then, under assumption (i), from Theorem 3.B.13 of [42] we have that X ≤_d Y implies X ≤_st Y, i.e. F(x) ≥ G(x) for all x ∈ R, so that G^{−1}(u) ≥ F^{−1}(u) for all u ∈ (0, 1). Relation (40) thus follows from (38). The same result can be proved similarly under assumption (ii). □

An immediate application of Theorem 5.2 can be given to the reliability of multi-component stress-strength systems, as seen in Section 4.1. Consider two n-component systems, the first having i.i.d. strengths X_1, X_2, . . ., X_n distributed as X, and the second having i.i.d. strengths Y_1, Y_2, . .
., Y_n distributed as Y. Assume that each component of both systems is stressed according to an independent random stress T having CDF F_T(x). We denote by R^X_{k,n} and R^Y_{k,n} the reliabilities of the corresponding multi-component stress-strength systems, defined as in (33). We are now able to provide a comparison result based on the weighted q-Gini function.
Theorem 5.3 Let the strengths X and Y have the same support, and let the random stress T be absolutely continuous with PDF f_T. If the assumptions of Theorem 5.2 hold, then the reliabilities R^X_{k,n} and R^Y_{k,n} are ordered accordingly.

Proof. The thesis follows recalling Eq. (33) and making use of Theorem 5.2, where the relevant distortions are given by q_1(u) = u^{n−j} and q_2(u) = u^j, with k ≤ j ≤ n. □

In the last result of this section, thanks to the probabilistic analogue of the mean value theorem, we provide a suitable expression for the weighted q-Gini function in the special case when the related distortion functions are equal to the identity.

Proposition 5.1 Let X be a nondegenerate random variable such that E(min{X, X′}) and E(max{X, X′}) are finite, where X′ is an independent copy of X. For the weighted q-Gini function introduced in Eq. (39), if T is absolutely continuous with the same support as X, and if q_1 and q_2 are equal to the identity, then

Ĝ_X(q, F_T) = (1/2) E[f_T(Z)] (E[max{X, X′}] − E[min{X, X′}]), (41)

where Z = Ψ(min{X, X′}, max{X, X′}).

Proof. The proof follows making use of Eq. (39) and Theorem 4.1 in Di Crescenzo [12], extended to the case of a general support of X, by taking Z = Ψ(min{X, X′}, max{X, X′}) and g(•) = F_T(•). □

We immediately note that Eq. (41) generalizes Eq. (11) in the proposed context.

Two-dimensional cumulative information generating function
Let us now extend the analysis of the CIGF to the case of a two-dimensional random vector.
In analogy with Definition 2.1, avoiding trivial degenerate cases, we introduce the following definition.

Definition 6.1 Let (X, Y) be a random vector with nondegenerate components, having joint CDF and joint SF given respectively by

F(x, y) = P(X ≤ x, Y ≤ y), F̄(x, y) = P(X > x, Y > y), (x, y) ∈ R^2.

We consider the domain D_(X,Y) = {(α, β) ∈ R^2 : G_(X,Y)(α, β) < +∞}. The CIGF of (X, Y) is defined as

G_(X,Y)(α, β) = ∬_{S_(X,Y)} [F(x, y)]^α [F̄(x, y)]^β dx dy, (α, β) ∈ D_(X,Y),

where S_(X,Y) denotes the support of (X, Y). We first discuss a few examples. Example 6.1 deals with a discrete random vector whose CDF and SF are identical, i.e. F(x, y) = F̄(x, y).

Example 6.2 Let (X, Y) be an absolutely continuous random vector, uniformly distributed over the triangular domain T = {(x, y) ∈ R^2 : 0 ≤ x ≤ 1, 0 ≤ y ≤ 1 − x}. In this case S_(X,Y) = T, so that the CIGF of (X, Y) can be expressed in terms of the generalized hypergeometric function 3F2 (which can be found in [35], for instance), with D_(X,Y) = {(α, β) ∈ R^2 : α, β ∈ R \ Z^−_0}.
The following result is an immediate consequence of the involved notions.

Proposition 6.1 Let (X, Y) be a random vector having finite CIGF. If X and Y are independent, then

G_(X,Y)(α, β) = G_X(α, β) G_Y(α, β). (44)

Consider a nonnegative random vector (X, Y) with support (0, r_1) × (0, r_2), for r_1, r_2 ∈ (0, +∞]. In many practical situations, it is worthwhile to adopt the following information measures for multi-device systems. The joint cumulative residual entropy of (X, Y) is defined as (cf. [40])

CRE(X, Y) = − ∫_0^{r_2} ∫_0^{r_1} F̄(x, y) log F̄(x, y) dx dy. (42)

A dynamic version of this measure has been studied by Rajesh et al. [38]. Similarly, the joint cumulative entropy of (X, Y) is defined as (cf. [16])

CE(X, Y) = − ∫_0^{r_2} ∫_0^{r_1} F(x, y) log F(x, y) dx dy. (43)

Hence, in this case Eqs. (44) and (45) are satisfied if and only if X and Y are independent, i.e. θ = 0.
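The factorization under independence can be checked numerically for independent U(0, 1) components, where F(x, y) = xy and F̄(x, y) = (1 − x)(1 − y), so that the double integral reduces to the square of a one-dimensional beta integral. A sketch of ours:

```python
import math

alpha, beta = 1.0, 2.0
n = 800
h = 1.0 / n
# Independent U(0,1) components: the integrand (xy)^alpha ((1-x)(1-y))^beta
# factorizes, so the double integral equals the product of the marginal CIGFs.
g2 = 0.0
for i in range(n):
    x = (i + 0.5) * h
    gx = x ** alpha * (1.0 - x) ** beta
    for j in range(n):
        y = (j + 0.5) * h
        g2 += gx * (y ** alpha) * (1.0 - y) ** beta
g2 *= h * h
B = math.gamma(alpha + 1) * math.gamma(beta + 1) / math.gamma(alpha + beta + 2)
print(g2, B * B)  # both ≈ (1/12)^2
```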
In analogy with the one-dimensional measures considered in Table 1, we define the generalized and the fractional versions of the measures given in Eqs. (42) and (43).

Definition 6.2 Let (X, Y) be a nonnegative random vector with support (0, r_1) × (0, r_2), where r_1, r_2 ∈ (0, +∞]. The generalized cumulative residual entropy of order n of (X, Y) is defined as

CRE_n(X, Y) = (1/n!) ∫_0^{r_2} ∫_0^{r_1} F̄(x, y) [− log F̄(x, y)]^n dx dy, n ∈ N_0.
