On Cumulative Entropies in Terms of Moments of Order Statistics

In this paper, relations between some kinds of cumulative entropies and moments of order statistics are established. By using some characterizations and the symmetry of a non-negative and absolutely continuous random variable X, lower and upper bounds for entropies are obtained and illustrative examples are given. By the relations with the moments of order statistics, a method is shown to compute an estimate of cumulative entropies and an application to testing whether data are exponentially distributed is outlined.


Introduction
In reliability theory, to describe and study the information associated with a non-negative absolutely continuous random variable X, with pdf f and cdf F, we use the Shannon entropy, or differential entropy, of X, defined by Shannon (1948) as

H(X) = −∫_0^∞ f(x) log f(x) dx.

In the literature, there are several different versions of entropy, each one suitable for a specific situation. Rao et al. (2004) introduced the Cumulative Residual Entropy (CRE) of X as

CRE(X) = −∫_0^∞ (1 − F(x)) log(1 − F(x)) dx,

while Di Crescenzo and Longobardi (2009) introduced the Cumulative Entropy (CE) of X as

CE(X) = −∫_0^∞ F(x) log F(x) dx.

If the random variable X is the lifetime of a system, this information measure is suitable when uncertainty is related to the past. It is a concept dual to the cumulative residual entropy, which relates to uncertainty on the future lifetime of the system. Mirali et al. (2016) introduced the Weighted Cumulative Residual Entropy (WCRE) of X as

WCRE(X) = −∫_0^∞ x (1 − F(x)) log(1 − F(x)) dx,

and Mirali and Baratpour (2017) introduced the Weighted Cumulative Entropy (WCE) of X as

WCE(X) = −∫_0^∞ x F(x) log F(x) dx.

It should be mentioned that the above measures can also be defined for random variables X with support over the entire real line, provided the involved integrals exist (as in some examples discussed in later sections).
Recently, various authors have discussed different versions of entropy and their applications (see, for instance, Calì et al. 2017, 2019, 2020; Longobardi 2014).
The paper is organized as follows. In Section 2, we study relationships between some kinds of entropies and moments of order statistics and present various illustrative examples. In Section 3, bounds are given by using some characterizations and properties (such as symmetry) of the random variable X, and some examples and bounds for a few well-known distributions are also presented. In Section 4, we present a method based on moments of order statistics to estimate the cumulative entropies, and an application to testing exponentiality of data is also outlined.

Relationships Between Entropies and Order Statistics
We recall that, if we have n i.i.d. random variables X_1, ..., X_n, we can introduce the order statistics X_{k:n}, k = 1, ..., n: the k-th order statistic is equal to the k-th smallest value of the sample. The cdf of X_{k:n} can be given in terms of the cdf F of the parent distribution as

F_{k:n}(x) = Σ_{j=k}^n C(n, j) [F(x)]^j [1 − F(x)]^{n−j},

where C(n, j) denotes the binomial coefficient, while the pdf of X_{k:n} is

f_{k:n}(x) = n!/((k−1)!(n−k)!) [F(x)]^{k−1} [1 − F(x)]^{n−k} f(x).

Choosing k = 1 and k = n, we get the smallest and largest order statistics, respectively. Their cdfs and pdfs are given by

F_{1:n}(x) = 1 − [1 − F(x)]^n,  f_{1:n}(x) = n [1 − F(x)]^{n−1} f(x),  F_{n:n}(x) = [F(x)]^n,  f_{n:n}(x) = n [F(x)]^{n−1} f(x). (1)

In the following, we denote by μ and μ_{k:n} the mean of X and the mean of the k-th order statistic in a sample of size n with parent distribution as that of X, respectively, i.e., μ = E(X) and μ_{k:n} = E(X_{k:n}). Moreover, we denote by μ^(2) and μ^(2)_{k:n} the second moment of X and the second moment of the k-th order statistic in a sample of size n, respectively, i.e., μ^(2) = E(X²) and μ^(2)_{k:n} = E(X²_{k:n}). We recall that if X is a random variable with finite mean μ, then the first moments of all order statistics are finite, and if the second moment of X is finite, then the second moments of all order statistics are also finite; see David and Nagaraja (2003) for further details.
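As a small numerical illustration of these moments (a Python sketch of ours, not part of the paper): for a U(0,1) parent distribution, X_{k:n} follows a Beta(k, n−k+1) law, so μ_{k:n} = k/(n+1), which can be compared with a Monte Carlo estimate obtained by sorting simulated samples.

```python
import numpy as np

def uniform_order_stat_mean(k: int, n: int) -> float:
    # For a U(0,1) parent, X_{k:n} ~ Beta(k, n - k + 1), hence E(X_{k:n}) = k/(n + 1).
    return k / (n + 1)

rng = np.random.default_rng(0)
n, k = 5, 3
# Sort each row of a (200000, n) uniform sample and keep the k-th smallest entry.
samples = np.sort(rng.random((200_000, n)), axis=1)[:, k - 1]
print(round(uniform_order_stat_mean(k, n), 3))                       # 0.5
print(abs(samples.mean() - uniform_order_stat_mean(k, n)) < 0.005)   # True
```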

Cumulative Residual Entropy
Let X be a random variable with finite expectation μ. The Cumulative Residual Entropy (CRE) of X can also be written in terms of moments of order statistics as follows:

CRE(X) = Σ_{n=1}^∞ μ_{n+1:n+1}/(n(n+1)) − μ, (2)

provided that lim_{x→+∞} −x(1 − F(x)) log(1 − F(x)) exists and the CRE is finite. In this case, the previous limit is equal to 0. We note that, since Σ_{n=1}^∞ 1/(n(n+1)) = 1, Eq. 2 can be rewritten as

CRE(X) = Σ_{n=1}^∞ (μ_{n+1:n+1} − μ)/(n(n+1)). (3)

Remark 1 We want to emphasize that, under the assumptions made, the steps leading to Eq. 2 are correct. Expanding −log(1 − F(x)) = Σ_{n=1}^∞ [F(x)]^n/n, the improper integral can be written as

CRE(X) = lim_{t→+∞} ∫_0^t (1 − F(x)) Σ_{n=1}^∞ [F(x)]^n/n dx. (4)

Hence, we observe that the sequence S_N(x) = Σ_{n=1}^N [F(x)]^n/n is increasing and converges pointwise to the continuous function −log(1 − F(x)) for each x ∈ [0, t] and, by applying Dini's theorem for uniform convergence (Bartle and Sherbert 2000), the convergence is uniform on [0, t]. Then, Eq. 4 can be written as

CRE(X) = lim_{t→+∞} lim_{N→+∞} Σ_{n=1}^N (1/n) ∫_0^t [F(x)]^n (1 − F(x)) dx. (5)

In order to apply the Moore-Osgood theorem for the iterated limit (Taylor 2010), we have to show that Σ_{n=1}^N (1/n) ∫_0^t [F(x)]^n (1 − F(x)) dx converges as t → +∞ for each fixed N, and this is satisfied if X has finite mean. Hence, by applying the Moore-Osgood theorem for the iterated limit, Eq. 5 can be written as

CRE(X) = Σ_{n=1}^∞ (1/n) ∫_0^∞ [F(x)]^n (1 − F(x)) dx = Σ_{n=1}^∞ (μ_{n+1:n+1} − μ_{n:n})/n,

which, by partial summation together with the limit condition stated above, coincides with the right-hand side of Eq. 2.

In the following examples, we use Eqs. 2 and 3 to evaluate the CRE for the standard exponential and uniform distributions.
Example 1 For the standard exponential distribution, it is known that μ_{n:n} = Σ_{k=1}^n 1/k = H_n, the n-th harmonic number. So, from Eq. 2, we readily find

CRE(X) = Σ_{n=1}^∞ H_{n+1}/(n(n+1)) − 1 = 2 − 1 = 1,

since Σ_{n=1}^∞ H_{n+1}/(n(n+1)) = 2.

Example 2 For the standard uniform distribution, using the fact that μ_{n+1:n+1} = (n+1)/(n+2) and μ = 1/2, we obtain from Eq. 3 that

CRE(X) = Σ_{n=1}^∞ ((n+1)/(n+2) − 1/2)/(n(n+1)) = Σ_{n=1}^∞ 1/(2(n+1)(n+2)) = 1/4.
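The two series representations can be checked numerically. The following Python sketch (ours, not from the paper) uses the closed-form moments μ_{n:n} = Σ_{k=1}^n 1/k for the standard exponential distribution and μ_{n:n} = n/(n+1) for the standard uniform distribution.

```python
from math import fsum

def cre_exponential(m: int) -> float:
    # Truncation of Eq. 2 for Exp(1): mu_{n+1:n+1} = H_{n+1} (harmonic numbers), mu = 1.
    h = 1.0   # H_1
    total = 0.0
    for n in range(1, m + 1):
        h += 1.0 / (n + 1)            # now h = H_{n+1}
        total += h / (n * (n + 1))
    return total - 1.0

def cre_uniform(m: int) -> float:
    # Truncation of Eq. 3 for U(0,1): mu_{n+1:n+1} = (n+1)/(n+2), mu = 1/2.
    return fsum(((n + 1) / (n + 2) - 0.5) / (n * (n + 1)) for n in range(1, m + 1))

print(abs(cre_exponential(100_000) - 1.0) < 1e-3)   # True: CRE of Exp(1) is 1
print(abs(cre_uniform(100_000) - 0.25) < 1e-4)      # True: CRE of U(0,1) is 1/4
```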

Cumulative Entropy
Let X be a random variable with finite expectation μ. The Cumulative Entropy (CE) of X can also be rewritten in terms of the mean of the minimum order statistic; in fact, expanding −log F(x) = Σ_{n=1}^∞ [1 − F(x)]^n/n and using Eq. 1, we easily obtain

CE(X) = μ − Σ_{n=1}^∞ μ_{1:n+1}/(n(n+1)), (6)

provided that lim_{x→+∞} −x F(x) log F(x) exists and the CE is finite. We note that, since Σ_{n=1}^∞ 1/(n(n+1)) = 1, Eq. 6 can be rewritten as

CE(X) = Σ_{n=1}^∞ (μ − μ_{1:n+1})/(n(n+1)). (7)

In the following examples, we give an application of Eqs. 6 and 7 to the standard exponential and uniform distributions.
Example 3 For the standard exponential distribution, it is known that μ_{1:n} = 1/n, and so from Eq. 7 we readily have

CE(X) = Σ_{n=1}^∞ (1 − 1/(n+1))/(n(n+1)) = Σ_{n=1}^∞ 1/(n+1)² = π²/6 − 1,

by the use of Euler's identity Σ_{n=1}^∞ 1/n² = π²/6.
Example 4 For the standard uniform distribution, using the fact that μ_{1:n+1} = 1/(n+2) and μ = 1/2, we obtain from Eq. 6 that

CE(X) = 1/2 − Σ_{n=1}^∞ 1/(n(n+1)(n+2)) = 1/2 − 1/4 = 1/4,

since the latter series telescopes after a partial fraction decomposition.
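These two CE values can also be checked numerically; a minimal Python sketch (ours), using μ_{1:n} = 1/n for the standard exponential distribution and μ_{1:n} = 1/(n+1) for the standard uniform distribution:

```python
from math import fsum, pi

def ce_exponential(m: int) -> float:
    # Truncation of Eq. 7 for Exp(1): mu = 1, mu_{1:n+1} = 1/(n+1); terms reduce to 1/(n+1)^2.
    return fsum((1.0 - 1.0 / (n + 1)) / (n * (n + 1)) for n in range(1, m + 1))

def ce_uniform(m: int) -> float:
    # Truncation of Eq. 6 for U(0,1): mu = 1/2, mu_{1:n+1} = 1/(n+2).
    return 0.5 - fsum(1.0 / (n * (n + 1) * (n + 2)) for n in range(1, m + 1))

print(abs(ce_exponential(100_000) - (pi**2 / 6 - 1)) < 1e-4)  # True: CE of Exp(1) is pi^2/6 - 1
print(abs(ce_uniform(1_000) - 0.25) < 1e-6)                   # True: CE of U(0,1) is 1/4
```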
Remark 2 If the random variable X has finite mean μ and is symmetrically distributed about μ, then it is known that μ_{1:n} + μ_{n:n} = 2μ, that is, μ − μ_{1:n+1} = μ_{n+1:n+1} − μ, and so the equality CRE(X) = CE(X) readily follows from Eqs. 3 and 7.

Weighted Cumulative Entropies
In a similar manner, the Weighted Cumulative Residual Entropy (WCRE) of X, with finite second moment μ^(2), can be expressed as

WCRE(X) = (1/2) [ Σ_{n=1}^∞ μ^(2)_{n+1:n+1}/(n(n+1)) − μ^(2) ], (8)

provided that lim_{x→+∞} −(x²/2)(1 − F(x)) log(1 − F(x)) exists and the WCRE is finite. In the following example, we use Eq. 8 to evaluate the WCRE for the standard uniform distribution.
Example 5 For the standard uniform distribution, using the fact that μ^(2)_{n+1:n+1} = (n+1)/(n+3) and μ^(2) = 1/3, we obtain from Eq. 8 that

WCRE(X) = (1/2) [ Σ_{n=1}^∞ 1/(n(n+3)) − 1/3 ] = (1/2) (11/18 − 1/3) = 5/36.

Moreover, we can derive the Weighted Cumulative Entropy (WCE) of X in terms of the second moment of the minimum order statistic as follows:

WCE(X) = (1/2) [ μ^(2) − Σ_{n=1}^∞ μ^(2)_{1:n+1}/(n(n+1)) ], (9)

provided that lim_{x→+∞} −(x²/2) F(x) log F(x) exists and the WCE is finite.
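The value 5/36 can be cross-checked in two independent ways (a Python sketch of ours): by truncating the series in Eq. 8 with the uniform moments μ^(2)_{n+1:n+1} = (n+1)/(n+3) and μ^(2) = 1/3, and by recalling that for U(0,1) the WCRE equals the integral −∫_0^1 x(1−x) log(1−x) dx = 5/36.

```python
from math import fsum

def wcre_uniform(m: int) -> float:
    # Truncation of Eq. 8 for U(0,1): mu2_{n+1:n+1} = (n+1)/(n+3), mu2 = 1/3.
    s = fsum(((n + 1) / (n + 3)) / (n * (n + 1)) for n in range(1, m + 1))
    return 0.5 * (s - 1.0 / 3.0)

# Direct value: -int_0^1 x(1-x) log(1-x) dx = 5/36.
print(abs(wcre_uniform(100_000) - 5.0 / 36.0) < 1e-4)  # True
```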

Bounds
In the following, we use Z to denote the standardized version of the random variable X, i.e., Z = (X − μ)/σ, where σ is the standard deviation of X. By construction, the relation between a variable and its standardized version carries over to order statistics, and so we have Z_{k:n} = (X_{k:n} − μ)/σ for k = 1, ..., n. Hence, the mean of X_{k:n} and the mean of Z_{k:n} are directly related and, in particular, for the largest order statistic, we have E(Z_{n:n}) = (μ_{n:n} − μ)/σ.
We remark that this formula also holds by considering a generalization of the random variable Z with an arbitrary location parameter μ in place of the mean, and an arbitrary scale parameter σ in place of the standard deviation.
Let us consider a sample with parent distribution Z such that E(Z) = 0 and E(Z²) = 1. Hartley and David (1954) and Gumbel (1954) have then shown that

E(Z_{n:n}) ≤ (n − 1)/√(2n − 1).

Using the Hartley-David-Gumbel bound for a parent distribution with mean μ and variance σ², we get

μ_{n:n} − μ ≤ σ (n − 1)/√(2n − 1). (10)

Theorem 1 Let X be a random variable with mean μ and variance σ². Then, we obtain an upper bound for the CRE of X as

CRE(X) ≤ σ Σ_{n=1}^∞ 1/((n+1)√(2n+1)). (11)

Proof From Eqs. 2 and 10, and using the fact that Σ_{n=1}^∞ 1/(n(n+1)) = 1, we get

CRE(X) = Σ_{n=1}^∞ (μ_{n+1:n+1} − μ)/(n(n+1)) ≤ σ Σ_{n=1}^∞ n/(n(n+1)√(2n+1)) = σ Σ_{n=1}^∞ 1/((n+1)√(2n+1)),

which is the upper bound given in Eq. 11.
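As a sanity check on Theorem 1 (a Python sketch of ours): the series Σ 1/((n+1)√(2n+1)) converges, and for the standard exponential distribution (σ = 1, CRE = 1) the resulting upper bound must indeed exceed 1.

```python
from math import fsum, sqrt

def theorem1_bound(sigma: float, m: int) -> float:
    # Truncation of the series bound in Eq. 11: sigma * sum 1/((n+1) sqrt(2n+1)).
    return sigma * fsum(1.0 / ((n + 1) * sqrt(2 * n + 1)) for n in range(1, m + 1))

# Standard exponential: sigma = 1 and CRE = 1, so the bound must exceed 1.
b = theorem1_bound(1.0, 500_000)
print(b > 1.0)  # True
```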
Remark 3 If X is a non-negative random variable, we have μ_{n+1:n+1} ≥ 0 for all n ∈ N. For this reason, using finite series approximations in Eq. 2, we get lower bounds for CRE(X) as

CRE(X) ≥ Σ_{n=1}^m μ_{n+1:n+1}/(n(n+1)) − μ, for all m ∈ N. (12)
Remark 4 If X is a non-negative random variable, we have μ_{1:n+1} ≥ 0 for all n ∈ N. For this reason, using finite series approximations in Eq. 6, we get upper bounds for CE(X) as

CE(X) ≤ μ − Σ_{n=1}^m μ_{1:n+1}/(n(n+1)), for all m ∈ N. (13)
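The truncation bounds of Remarks 3 and 4 can be probed for the standard exponential distribution (a Python sketch of ours, using μ_{n+1:n+1} = H_{n+1} and μ_{1:n+1} = 1/(n+1)): the truncated CRE series can only fall below the true value 1, and the truncated CE expression can only sit above the true value π²/6 − 1.

```python
from math import fsum, pi

def cre_lower_bound(m: int) -> float:
    # Remark 3 for Exp(1): dropping the non-negative tail terms of Eq. 2
    # yields a lower bound for CRE(X) = 1.
    h, total = 1.0, 0.0   # h = H_1
    for n in range(1, m + 1):
        h += 1.0 / (n + 1)            # h = H_{n+1}
        total += h / (n * (n + 1))
    return total - 1.0

def ce_upper_bound(m: int) -> float:
    # Remark 4 for Exp(1): dropping non-negative subtracted terms of Eq. 6
    # yields an upper bound for CE(X) = pi^2/6 - 1.
    return 1.0 - fsum((1.0 / (n + 1)) / (n * (n + 1)) for n in range(1, m + 1))

print(all(cre_lower_bound(m) <= 1.0 for m in (1, 5, 50)))           # True
print(all(ce_upper_bound(m) >= pi**2 / 6 - 1 for m in (1, 5, 50)))  # True
```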
In the following theorem, we consider distributions with decreasing failure rate (DFR). We recall that the failure rate, or hazard rate, function of X is defined as

r(x) = lim_{Δx→0+} P(x < X ≤ x + Δx | X > x)/Δx = f(x)/(1 − F(x)),

where the last equality holds under the assumption of absolute continuity. Then, the random variable X is said to be DFR when the function r(x) is decreasing in x; see Barlow and Proschan (1996) for further details on hazard rate functions and DFR distributions.
Theorem 2 Let X be DFR. Then, we have the following lower bound for CE(X):

CE(X) ≥ μ (π²/6 − 1).

Proof Let X be DFR. From Theorem 12 of Rychlik (2001), it is known that, for a sample of size n, if δ_{j,n} = Σ_{i=n−j+1}^n 1/i ≤ 2, then μ_{j:n} ≤ δ_{j,n} μ. For j = 1, we have δ_{1,n} = 1/n ≤ 2 for all n ∈ N, so that μ_{1:n+1} ≤ μ/(n+1). Then, from Eq. 6, we get the following lower bound for CE(X):

CE(X) ≥ μ − μ Σ_{n=1}^∞ 1/(n(n+1)²) = μ [1 − (2 − π²/6)] = μ (π²/6 − 1),

which is attained by the exponential distribution (see Example 3).

Remark 5 We note that we cannot provide an analogous bound for CRE(X) because the condition δ_{n,n} ≤ 2 is not fulfilled for n ≥ 4.
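Since the exponential distribution is DFR (constant failure rate), the bound of Theorem 2 should be attained by it. A Python sketch (ours) checks this by evaluating CE(X) = −∫_0^∞ F(x) log F(x) dx numerically for an exponential law with mean μ and comparing it with μ(π²/6 − 1).

```python
import numpy as np
from math import pi

def ce_numeric_exponential(mu: float, upper_mult: float = 50.0, n: int = 200_000) -> float:
    # CE(X) = -int_0^inf F(x) log F(x) dx for X ~ Exp(mean mu), via the trapezoidal rule.
    x = np.linspace(1e-9, upper_mult * mu, n)
    F = 1.0 - np.exp(-x / mu)
    g = -F * np.log(F)
    return float(0.5 * np.sum((g[1:] + g[:-1]) * np.diff(x)))

mu = 2.5
bound = mu * (pi**2 / 6 - 1)   # Theorem 2 lower bound for a DFR variable with mean mu
print(abs(ce_numeric_exponential(mu) - bound) < 1e-3)  # True: the exponential attains it
```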
From Eqs. 2 and 6, we get the following expression for the sum of the cumulative residual entropy and the cumulative entropy:

CRE(X) + CE(X) = Σ_{n=1}^∞ (μ_{n+1:n+1} − μ_{1:n+1})/(n(n+1)). (14)

Calì et al. (2017) have shown a connection between Eq. 14 and the partition entropy studied by Bowden (2007).
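A quick numerical check of Eq. 14 for the standard uniform distribution (a Python sketch of ours, using μ_{n+1:n+1} = (n+1)/(n+2) and μ_{1:n+1} = 1/(n+2)): since CRE and CE both equal 1/4 for U(0,1), the series should converge to 1/2.

```python
from math import fsum

def cre_plus_ce_uniform(m: int) -> float:
    # Truncation of Eq. 14 for U(0,1): mu_{n+1:n+1} = (n+1)/(n+2), mu_{1:n+1} = 1/(n+2).
    return fsum(((n + 1) / (n + 2) - 1.0 / (n + 2)) / (n * (n + 1)) for n in range(1, m + 1))

# CRE(U(0,1)) = CE(U(0,1)) = 1/4, so the sum should be 1/2.
print(abs(cre_plus_ce_uniform(100_000) - 0.5) < 1e-4)  # True
```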

Theorem 3 We have the following bound for the sum of the CRE and the CE:
Proof From Theorem 3.24 of Arnold and Balakrishnan (1989), a bound is known for the difference μ_{n+1:n+1} − μ_{1:n+1} between the expectations of the largest and smallest order statistics from a sample of size n + 1 (Eq. 16). Using Eq. 16 in Eq. 14, we get the stated bound for the sum of the CRE and the CE, as required.
In Table 1, we present some of the bounds obtained in this section for a few well-known distributions.

Symmetric Distributions
In this subsection, we obtain bounds for a symmetric parent distribution. David and Nagaraja (2003) have stated that, if we have a sample Z_1, ..., Z_n with parent distribution Z symmetric about 0 and with variance 1, then

E(Z_{n:n}) ≤ (n/√2) √(1/(2n−1) − B(n, n)), (17)

where B(·, ·) denotes the complete beta function. Using the bound in Eq. 17 for a random variable X symmetric about its mean μ and with variance σ², we have

μ_{n:n} − μ ≤ (σ n/√2) √(1/(2n−1) − B(n, n)). (18)

Remark 6 It is clear that a non-negative random variable X that is symmetric about its mean has a bounded support.
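The symmetric-parent bound E(Z_{n:n}) ≤ (n/√2)√(1/(2n−1) − B(n,n)) can be probed numerically (a Python sketch of ours) against the standard uniform distribution, which is symmetric about 1/2 with σ = 1/√12 and E(U_{n:n}) = n/(n+1); the standardized maxima must stay below the bound (with equality, up to rounding, for small n).

```python
from math import sqrt, lgamma, exp

def beta_fn(a: float, b: float) -> float:
    # Complete beta function via log-gamma, for numerical stability.
    return exp(lgamma(a) + lgamma(b) - lgamma(a + b))

def symmetric_max_bound(n: int) -> float:
    # The symmetric-parent bound: E(Z_{n:n}) <= (n/sqrt(2)) sqrt(1/(2n-1) - B(n,n)).
    return (n / sqrt(2.0)) * sqrt(1.0 / (2 * n - 1) - beta_fn(n, n))

def uniform_standardized_max(n: int) -> float:
    # U(0,1): mean 1/2, sigma = 1/sqrt(12), E(U_{n:n}) = n/(n+1).
    return (n / (n + 1) - 0.5) * sqrt(12.0)

print(all(uniform_standardized_max(n) <= symmetric_max_bound(n) + 1e-9
          for n in range(2, 101)))  # True
```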

Theorem 4 Let X be a symmetric random variable with mean μ and variance σ². Then, an upper bound for the CRE of X is given by

CRE(X) ≤ (σ/√2) Σ_{n=1}^∞ (1/n) √(1/(2n+1) − B(n+1, n+1)). (19)
Proof From Eqs. 2 and 18, and using the fact that Σ_{n=1}^∞ 1/(n(n+1)) = 1, we get

CRE(X) = Σ_{n=1}^∞ (μ_{n+1:n+1} − μ)/(n(n+1)) ≤ (σ/√2) Σ_{n=1}^∞ ((n+1)/(n(n+1))) √(1/(2n+1) − B(n+1, n+1)),

which is the upper bound given in Eq. 19.
The bound in Eq. 17 is equivalent to (see Arnold and Balakrishnan 1989)

E(Z_{n:n}) ≤ (n/√2) (1/√(2n−1)) √(1 − 1/C(2n−2, n−1)), (20)

where B(α, β) is the complete beta function, defined as

B(α, β) = ∫_0^1 t^{α−1} (1 − t)^{β−1} dt = Γ(α)Γ(β)/Γ(α+β), α, β > 0

(see Abramowitz and Stegun (1964) for further details). In fact,

B(n, n) = ((n−1)!)²/(2n−1)! = 1/((2n−1) C(2n−2, n−1)).

Then, the bound in Eq. 18 is equivalent to

μ_{n:n} − μ ≤ (σ n/√2) (1/√(2n−1)) √(1 − 1/C(2n−2, n−1)). (21)

Hence, we have the following theorem, which is equivalent to Theorem 4.
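The identity B(n, n) = 1/((2n−1) C(2n−2, n−1)), which drives this equivalence, is easy to verify numerically (a Python sketch of ours):

```python
from math import comb, factorial

def beta_nn(n: int) -> float:
    # B(n, n) = ((n-1)!)^2 / (2n-1)!
    return factorial(n - 1) ** 2 / factorial(2 * n - 1)

def beta_nn_binomial(n: int) -> float:
    # Equivalent form: B(n, n) = 1 / ((2n - 1) * C(2n-2, n-1)).
    return 1.0 / ((2 * n - 1) * comb(2 * n - 2, n - 1))

print(all(abs(beta_nn(n) / beta_nn_binomial(n) - 1.0) < 1e-12
          for n in range(1, 60)))  # True
```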

Theorem 5 Let X be a symmetric random variable with mean μ and variance σ². Then, an upper bound for the CRE of X is

CRE(X) ≤ (σ/√2) Σ_{n=1}^∞ (1/(n√(2n+1))) √(1 − 1/C(2n, n)). (22)
Example 6 Let us consider a parent distribution X ∼ N(0, 1). From Harter (1961), we have the values of the means of the largest order statistics for samples of sizes up to 100. Hence, we can compare the finite series approximations of Eqs. 2 and 22, both truncated after 100 terms; since the truncated terms are negligible, we expect the approximation of Eq. 2 to essentially coincide with the true value, and the truncated series in Theorem 5 then provides the corresponding upper bound.
For a symmetric distribution, Arnold and Balakrishnan (1989) have stated that, if we have a sample Z_1, ..., Z_n from a parent distribution symmetric about 0 and with variance 1, then

E(Z_{n:n}) − E(Z_{1:n}) ≤ n √2 √(1/(2n−1) − B(n, n)), (23)

where B(n, n) is the complete beta function. Using the bound in Eq. 23 for a parent distribution symmetric about mean μ and with variance σ², we have

μ_{n:n} − μ_{1:n} = σ (E(Z_{n:n}) − E(Z_{1:n})) ≤ σ n √2 √(1/(2n−1) − B(n, n)). (24)
Theorem 6 Let X be a symmetric random variable with mean μ and variance σ². Then, an upper bound for the sum of the CRE and the CE of X is given by

CRE(X) + CE(X) ≤ σ √2 Σ_{n=1}^∞ (1/n) √(1/(2n+1) − B(n+1, n+1)). (25)

Proof From Eqs. 14 and 24, we have

CRE(X) + CE(X) = Σ_{n=1}^∞ (μ_{n+1:n+1} − μ_{1:n+1})/(n(n+1)) ≤ σ √2 Σ_{n=1}^∞ ((n+1)/(n(n+1))) √(1/(2n+1) − B(n+1, n+1)),

which is the upper bound given in Eq. 25.

Numerical Evaluation
In this subsection, we present a method for computing an estimate of the cumulative entropies, based on the relationships with moments of order statistics derived in Section 2. The method rests on finite series approximations. For the CRE, we use the truncation shown in Eq. 12:

CRE(X) ≈ Σ_{n=1}^m μ_{n+1:n+1}/(n(n+1)) − μ,

where the difference between the left-hand and right-hand sides is infinitesimal as m → +∞. Starting from a sample X = (X_1, ..., X_N), we estimate the CRE by

ĈRE(X) = Σ_{n=1}^m μ̂_{n+1:n+1}/(n(n+1)) − μ̂, (26)

where μ̂ is the arithmetic mean of the realization of the sample and μ̂_{n+1:n+1} is an estimate of the mean of the largest order statistic in a sample of size n + 1. In order to obtain this estimate, we generate several simple random samples of size n + 1 from X, for example by using the function randsample of MATLAB; we then take the maximum of each sample and average these maxima to obtain μ̂_{n+1:n+1}. Specifically, for a fixed n ∈ {1, ..., m}, starting from X, we generate K simple random samples of size n + 1, (X_{1,k}, ..., X_{n+1,k}), k = 1, ..., K. For each of them, we consider the maximum X_{n+1:n+1,k}, k = 1, ..., K. Then, we take as the estimate of the mean of the maximum order statistic in a sample of size n + 1 from X

μ̂_{n+1:n+1} = (1/K) Σ_{k=1}^K X_{n+1:n+1,k}. (27)

Using similar techniques, we can get estimates of the second moment of the largest order statistic and of the first and second moments of the smallest order statistic and, in analogy with Eq. 26, we obtain estimates of the other cumulative entropies, ĈE(X), ŴCRE(X) and ŴCE(X).
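The procedure above can be sketched in Python (our hypothetical analogue of the MATLAB randsample approach; the function name cre_estimate and its defaults are ours, not the paper's):

```python
import numpy as np

def cre_estimate(x, m=100, K=100, seed=None):
    # Eq. 26: sum_{n=1}^m mu_hat_{n+1:n+1} / (n(n+1)) - mu_hat, where
    # mu_hat_{n+1:n+1} (Eq. 27) averages the maxima of K simple random
    # samples of size n+1 drawn without replacement from x.
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float)
    total = 0.0
    for n in range(1, m + 1):
        maxima = [rng.choice(x, size=n + 1, replace=False).max() for _ in range(K)]
        total += np.mean(maxima) / (n * (n + 1))
    return total - x.mean()

rng = np.random.default_rng(1)
sample = rng.exponential(1.0, size=1000)
est = cre_estimate(sample, m=100, K=100, seed=2)
print(0.6 < est < 1.2)  # True (the truncation at m = 100 slightly underestimates CRE = 1)
```

Sampling each size-(n+1) subsample without replacement matches MATLAB's randsample default; since any such subsample is itself a set of n+1 i.i.d. draws from the parent, the averaged maxima estimate μ_{n+1:n+1} without systematic bias.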
Example 7 By using the function exprnd of MATLAB, we generated a sample of N = 1000 values from an exponential distribution with parameter 1. We obtain estimates of the cumulative entropies by stopping the sum at m = 100; for each sample size n from 2 to 101, we generate K = 100 random samples of size n by the function randsample. In Table 2, we compare these results with the corresponding theoretical values.

Application to Test for Exponentiality
We recall a result proved in Rao et al. (2004) about the maximum cumulative residual entropy for fixed moments of the first and second order.
Theorem 7 Let X be a non-negative random variable. Then,

CRE(X) ≤ CRE(X(μ)) = μ,

where X(μ) is an exponentially distributed random variable with mean μ = E(X²)/(2E(X)).
There are many situations in which we have a sample and we want to investigate about the underlying distribution, and in some of these situations, it may be of interest to test if the data are distributed according to an exponential distribution. In the literature, there are several papers about testing exponentiality of data; see, for example, Balakrishnan et al. (2007).
From Theorem 7, we know that the CRE of X is at most E(X²)/(2E(X)) because, for an exponential random variable Y, we have CRE(Y) = E(Y). If we have the realization of a sample X = (X_1, ..., X_N), we can consider the difference between the estimate of the CRE given in Eq. 26 and μ̂^(2)/(2μ̂), where μ̂ and μ̂^(2) are the arithmetic means of X_1, ..., X_N and X_1², ..., X_N², respectively. If this difference is small in absolute value, we may suppose that the data are exponentially distributed; as the difference increases, it becomes less plausible that the sample comes from an exponential distribution.
It will, of course, be of interest to develop a formal testing procedure along these lines.
Funding Open access funding provided by Università degli Studi di Napoli Federico II within the CRUI-CARE Agreement.

Declarations
Competing interests The authors declare that they have no conflict of interest.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.