## 1 Introduction

In reliability theory, to describe and study the information associated with a non-negative absolutely continuous random variable X, we use the Shannon entropy, or differential entropy, of X, defined by Shannon (1948) as

$$H(X)=-\mathbb E[\log f(X)]=-{\int}_{0}^{+\infty}f(x)\log f(x)\mathrm dx,$$

where $$\log$$ is the natural logarithm and f is the probability density function (pdf) of X. In the following, we use F and $$\overline F$$ to indicate the cumulative distribution function (cdf) and the survival function (sf) of X, respectively.

In the literature, there are several different versions of entropy, each one suitable for a specific situation. Rao et al. (2004) introduced the Cumulative Residual Entropy (CRE) of X as

$$\begin{array}{@{}rcl@{}} \mathcal{E}(X)&=&-{\int}_{0}^{+\infty}\overline F(x)\log \overline F(x)\mathrm dx. \end{array}$$

Di Crescenzo and Longobardi (2009) introduced the Cumulative Entropy (CE) of X as

$$\begin{array}{@{}rcl@{}} \mathcal{CE}(X)&=&-{\int}_{0}^{+\infty}F(x)\log F(x)\mathrm dx . \end{array}$$
(1)

If the random variable X is the lifetime of a system, this information measure is suitable when uncertainty is related to the past. It is dual to the cumulative residual entropy, which instead relates to uncertainty about the future lifetime of the system.

Mirali et al. (2016) introduced the Weighted Cumulative Residual Entropy (WCRE) of X as

$$\begin{array}{@{}rcl@{}} \mathcal{E}^{w}(X)&=&-{\int}_{0}^{+\infty}x(1-F(x))\log(1-F(x))\mathrm dx . \end{array}$$

Mirali and Baratpour (2017) introduced the Weighted Cumulative Entropy (WCE) of X as

$$\begin{array}{@{}rcl@{}} \mathcal{CE}^{w}(X)&=&-{\int}_{0}^{+\infty}xF(x)\log F(x)\mathrm dx. \end{array}$$

It should be mentioned that the above measures can also be defined for random variables X with support over the entire real line provided the involved integrals exist (as in some examples discussed in later sections).
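As a quick numerical illustration (not part of the original development), the four measures can be evaluated by direct integration. The sketch below uses a midpoint rule for the standard uniform distribution, for which the examples in later sections give the closed forms $$\mathcal E(X)=\mathcal{CE}(X)=1/4$$ and $$\mathcal E^{w}(X)=5/36$$; the value $$\mathcal{CE}^{w}(X)=1/9$$ follows from a direct computation of $$-{\int}_{0}^{1}x^{2}\log x\,\mathrm dx$$.

```python
import math

N = 200_000                      # midpoint-rule subintervals on (0, 1)
dx = 1.0 / N
xs = [(i + 0.5) * dx for i in range(N)]

# For the standard uniform distribution, F(x) = x on (0, 1).
cre  = -sum((1 - x) * math.log(1 - x) for x in xs) * dx        # CRE
ce   = -sum(x * math.log(x) for x in xs) * dx                  # CE
wcre = -sum(x * (1 - x) * math.log(1 - x) for x in xs) * dx    # WCRE
wce  = -sum(x * x * math.log(x) for x in xs) * dx              # WCE

print(cre, ce, wcre, wce)   # ≈ 0.25, 0.25, 5/36, 1/9
```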

Recently, various authors have discussed different versions of entropy and their applications (see, for instance, Calì et al. 2017, 2019, 2020; Longobardi 2014).

The paper is organized as follows. In Section 2, we study relationships between some kinds of entropies and moments of order statistics and present various illustrative examples. In Section 3, bounds are given by using some characterizations and properties (such as symmetry) of the random variable X, and examples and bounds for a few well-known distributions are also presented. In Section 4, we present a method based on moments of order statistics to estimate the cumulative entropies, and an application to testing exponentiality of data is also outlined.

## 2 Relationships Between Entropies and Order Statistics

We recall that, if we have n i.i.d. random variables $$X_{1},\dots ,X_{n}$$, we can introduce the order statistics Xk:n, $$k=1,\dots ,n$$. The k-th order statistic is equal to the k-th smallest value from the sample. We know that the cdf of Xk:n can be given in terms of the cdf of the parent distribution as

$$F_{k:n}(x)=\sum\limits_{j=k}^{n} \binom nj [ F(x) ]^{j} [ 1 - F(x) ]^{n-j},$$

while the pdf of Xk:n is

$$f_{k:n}(x)= \binom nk k[ F(x) ]^{k-1} [ 1 - F(x) ]^{n-k} f(x).$$

Choosing k = 1 and k = n, we get the smallest and largest order statistics, respectively. Their cdf and pdf are given by

$$\begin{array}{@{}rcl@{}} &F_{1:n}(x)=1-[1-F(x)]^{n}, \ \ \ \ \ f_{1:n}(x)=n[1-F(x)]^{n-1}f(x), \\ &F_{n:n}(x)=[F(x)]^{n}, \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ f_{n:n}(x)=n[F(x)]^{n-1}f(x). \end{array}$$

In the following, we denote by μ and μk:n the expectation or mean of X and the mean of the k-th order statistic in a sample of size n with parent distribution as that of X, respectively, i.e., $$\mu =\mathbb E(X)$$ and $$\mu _{k:n}=\mathbb E(X_{k:n})$$. Moreover, we denote by μ(2) and $$\mu ^{(2)}_{k:n}$$ the second moment of X and the second moment of k-th order statistic in a sample of size n with parent distribution as that of X, respectively, i.e., $$\mu ^{(2)}=\mathbb E(X^{2})$$ and $$\mu ^{(2)}_{k:n}=\mathbb E(X_{k:n}^{2})$$. We recall that if X is a random variable with finite expectation μ, then the first moment of all order statistics is finite, and if the second moment of X is finite, then the second moment of all order statistics is also finite; see David and Nagaraja (2003) for further details.

### 2.1 Cumulative Residual Entropy

Let X be a random variable with finite expectation μ. The Cumulative Residual Entropy (CRE) of X can also be written in terms of order statistics as follows:

$$\begin{array}{@{}rcl@{}} \mathcal{E}(X)&=&-{\int}_{0}^{+\infty}(1-F(x))\log(1-F(x))\mathrm dx \\ &=&-x(1-F(x))\log(1-F(x))\big|_{0}^{+\infty}-{\int}_{0}^{+\infty}x\log(1-F(x))f(x)\mathrm dx \\ & & -{\int}_{0}^{+\infty}xf(x)\mathrm dx \\ &=&{\int}_{0}^{+\infty}x[-\log(1-F(x))]f(x)\mathrm dx - \mu \\ &=&{\int}_{0}^{+\infty}x\left[\sum\limits_{n=1}^{+\infty}\frac{F(x)^{n}}{n}\right]f(x)\mathrm dx-\mu \\ &=&\sum\limits_{n=1}^{+\infty}\frac{1}{n(n+1)}\mu_{n+1:n+1}-\mu, \end{array}$$
(2)

provided that $$\lim _{x\to +\infty }-x(1-F(x))\log (1-F(x))$$ exists and the CRE is finite; in that case, the limit is necessarily equal to 0. We note that Eq. 2 can be rewritten as

$$\mathcal{E}(X)=\sum\limits_{n=1}^{+\infty}\left( \frac{1}{n}-\frac{1}{n+1}\right)\mu_{n+1:n+1}-\mu.$$
(3)

### Remark 1

We want to emphasize that, under the assumptions made, the steps in Eq. 2 are correct. The improper integral can be written as

$$\lim\limits_{t\to+\infty}{{\int}_{0}^{t}} x \lim\limits_{N\to+\infty} \sum\limits_{n=1}^{N} \frac{F(x)^{n}}{n} f(x) \mathrm dx.$$
(4)

Hence, we observe that the sequence $$S_{N}(x)={\sum }_{n=1}^{N} \frac {F(x)^{n}}{n}$$ is increasing and converges pointwise to the continuous function $$-\log (1-F(x))$$ for each x ∈ [0,t] and, by applying Dini’s theorem for uniform convergence (Bartle and Sherbert 2000), the convergence is uniform. Then, Eq. 4 can be written as

$$\lim\limits_{t\to+\infty}\lim\limits_{N\to+\infty} \sum\limits_{n=1}^{N}{{\int}_{0}^{t}} x \frac{F(x)^{n}}{n} f(x) \mathrm dx.$$
(5)

In order to apply the Moore–Osgood theorem for the iterated limit (Taylor 2010), we have to show that

$$\lim\limits_{t\to+\infty} \sum\limits_{n=1}^{N}{{\int}_{0}^{t}} x \frac{F(x)^{n}}{n} f(x) \mathrm dx=\sum\limits_{n=1}^{N}\frac{1}{n(n+1)}\mu_{n+1:n+1}$$

converges pointwise for each fixed N, and this is satisfied if X has finite mean. Hence, by applying the Moore–Osgood theorem for the iterated limit, Eq. 5 can be written as

$$\lim\limits_{N\to+\infty}\lim\limits_{t\to+\infty} \sum\limits_{n=1}^{N}{{\int}_{0}^{t}} x \frac{F(x)^{n}}{n} f(x) \mathrm dx=\sum\limits_{n=1}^{+\infty}\frac{1}{n(n+1)}\mu_{n+1:n+1}.$$

In the following examples, we use Eq. 3 to evaluate the CRE for the standard exponential and uniform distributions.

### Example 1

Consider the standard exponential distribution with pdf $$f(x)=e^{-x}$$, x > 0. Then, it is known that

$$\mu=1 \ \text{ and }\ \mu_{n:n}=1+\frac{1}{2}+\dots+\frac{1}{n};$$

see Arnold and Balakrishnan (1989) for further details. Then, from Eq. 3, we readily have

$$\begin{array}{@{}rcl@{}} \mathcal E(X)&=&\left( 1-\frac{1}{2}\right)\mu_{2:2}+\left( \frac{1}{2}-\frac{1}{3}\right)\mu_{3:3}+\left( \frac{1}{3}-\frac{1}{4}\right)\mu_{4:4}+{\dots} -\mu \\ &=& \left( \mu_{2:2}-\mu\right)+\frac{1}{2} \left( \mu_{3:3}-\mu_{2:2}\right)+\frac{1}{3} \left( \mu_{4:4}-\mu_{3:3}\right)+{\dots} \\ &=& \frac{1}{2}+ \frac{1}{2\cdot 3}+\frac{1}{3\cdot 4}+{\dots} \\ &=& \sum\limits_{n=1}^{+\infty}\frac{1}{n(n+1)}=1. \end{array}$$
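The telescoping computation above can be reproduced numerically; the following sketch (illustrative, with an arbitrary truncation point) accumulates the partial sums of Eq. 2 using $$\mu_{n+1:n+1}=H_{n+1}$$, the (n + 1)-th harmonic number.

```python
m = 200_000          # truncation point (illustrative choice)
H = 1.0              # harmonic number H_1 = mu_{1:1}
cre = 0.0
for n in range(1, m + 1):
    H += 1.0 / (n + 1)          # H_{n+1} = mu_{n+1:n+1}
    cre += H / (n * (n + 1))    # term of the series in Eq. 2
cre -= 1.0                      # subtract mu = 1
print(cre)                      # approaches 1, the exact CRE
```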

### Example 2

Consider the standard uniform distribution with pdf f(x) = 1, 0 < x < 1. Then, it is known that

$$\mu=\frac{1}{2} \ \text{ and }\ \mu_{n:n}=\frac{n}{n+1}.$$

So, from Eq. 2, we readily find

$$\begin{array}{@{}rcl@{}} \mathcal E(X)&=& \sum\limits_{n=1}^{+\infty} \frac{1}{n(n+1)}\ \frac{n+1}{n+2}-\frac{1}{2} \\ &=& \frac{1}{2}\sum\limits_{n=1}^{+\infty}\left( \frac{1}{n}-\frac{1}{n+2}\right)-\frac{1}{2} \\ &=& \frac{1}{2}\left( 1+\frac{1}{2}\right)-\frac{1}{2}=\frac{1}{4}. \end{array}$$

### 2.2 Cumulative Entropy

Let X be a random variable with finite expectation μ. The Cumulative Entropy (CE) of X can also be rewritten in terms of the mean of the minimum order statistic; in fact, from Eq. 1, we easily obtain

$$\begin{array}{@{}rcl@{}} \mathcal{CE}(X)&=&-xF(x)\log F(x)\big|_{0}^{+\infty}+{\int}_{0}^{+\infty}x\log F(x)f(x)\mathrm dx+{\int}_{0}^{+\infty}xf(x)\mathrm dx \\ &=&{\int}_{0}^{+\infty}x\log[1-(1-F(x))]f(x)\mathrm dx + \mu \\ &=&-{\int}_{0}^{+\infty}x\sum\limits_{n=1}^{+\infty}\frac{(1-F(x))^{n}}{n}f(x)\mathrm dx+\mu \\ &=&-\sum\limits_{n=1}^{+\infty}\frac{1}{n(n+1)}\mu_{1:n+1}+\mu, \end{array}$$
(6)

provided that $$\lim _{x\to +\infty }-xF(x)\log F(x)$$ exists and CE is finite. We note that Eq. 6 can be rewritten as

$$\mathcal{CE}(X)=-\sum\limits_{n=1}^{+\infty}\left( \frac{1}{n}-\frac{1}{n+1}\right)\mu_{1:n+1}+\mu.$$
(7)

In the following examples, we give an application of Eq. 7 to the standard exponential and uniform distributions.

### Example 3

For the standard exponential distribution, it is known that

$$\mu_{1:n}=\frac{1}{n},$$

and so from Eq. 7, we readily have

$$\begin{array}{@{}rcl@{}} \mathcal{CE}(X)&=&-\sum\limits_{n=1}^{+\infty}\left( \frac{1}{n}-\frac{1}{n+1}\right)\frac{1}{n+1}+1 \\ &=& -\sum\limits_{n=1}^{+\infty}\frac{1}{n(n+1)}+\sum\limits_{n=1}^{+\infty}\frac{1}{(n+1)^{2}}+1 \\ &=& \frac{\pi^{2}}{6}-1, \end{array}$$

by the use of Euler’s identity.
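The same check can be done numerically for Eq. 7: since $$\frac{1}{n}-\frac{1}{n+1}=\frac{1}{n(n+1)}$$, a short truncated sum (an illustrative sketch) recovers $$\pi^{2}/6-1$$.

```python
import math

m = 100_000   # truncation point (illustrative); the terms decay like n^{-3}
ce = 1.0 - sum(1.0 / (n * (n + 1) ** 2) for n in range(1, m + 1))
print(ce, math.pi ** 2 / 6 - 1)   # the two values agree closely
```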

### Example 4

For the standard uniform distribution, using the fact that

$$\mu_{1:n}=\frac{1}{n+1},$$

we obtain from Eq. 6 that

$$\begin{array}{@{}rcl@{}} \mathcal{CE}(X)&=&-\sum\limits_{n=1}^{+\infty}\frac{1}{n(n+1)(n+2)}+\frac{1}{2} \\ &=& \frac{1}{2}-\frac{1}{2}\sum\limits_{n=1}^{+\infty}\left[\frac{1}{n}-\frac{2}{n+1}+\frac{1}{n+2}\right] \\ &=& \frac{1}{2}-\frac{1}{2}\sum\limits_{n=1}^{+\infty}\left[\frac{1}{n}-\frac{1}{n+1}\right]+\frac{1}{2}\sum\limits_{n=1}^{+\infty}\left[\frac{1}{n+1}-\frac{1}{n+2}\right] \\ &=& \frac{1}{4} \end{array}$$

by telescoping the two series.

### Remark 2

If the random variable X has finite mean μ and is symmetrically distributed about μ, then it is known that

$$\mu_{n:n}-\mu=\mu-\mu_{1:n},$$

and so the equality $$\mathcal {E}(X)=\mathcal {CE}(X)$$ readily follows.

### 2.3 Weighted Cumulative Entropies

In a similar manner, the Weighted Cumulative Residual Entropy (WCRE) of X, with finite second moment, can be expressed as

$$\begin{array}{@{}rcl@{}} \mathcal{E}^{w}(X)&=&-\frac{x^{2}}{2}(1-F(x))\log(1-F(x))\big|_{0}^{+\infty}-\frac{1}{2}{\int}_{0}^{+\infty}x^{2}\log(1-F(x))f(x)\mathrm dx \\ &&-\frac{1}{2}{\int}_{0}^{+\infty}x^{2}f(x)\mathrm dx \\ &=&\frac{1}{2}{\int}_{0}^{+\infty}x^{2}[-\log(1-F(x))]f(x)\mathrm dx - \frac{\mu^{(2)}}{2} \\ &=&\frac{1}{2}{\int}_{0}^{+\infty}x^{2}\left[\sum\limits_{n=1}^{+\infty}\frac{F(x)^{n}}{n}\right]f(x)\mathrm dx - \frac{\mu^{(2)}}{2} \\ &=&\frac{1}{2}\sum\limits_{n=1}^{+\infty}\frac{1}{n(n+1)}\mu^{(2)}_{n+1:n+1} - \frac{\mu^{(2)}}{2}, \end{array}$$
(8)

provided that $$\lim _{x\to +\infty }-\frac {x^{2}}{2}(1-F(x))\log (1-F(x))$$ exists and WCRE is finite.

In the following example, we use Eq. 8 to evaluate the WCRE for the standard uniform distribution.

### Example 5

For the standard uniform distribution, using the fact that

$$\mu^{(2)}_{n+1:n+1}=\frac{n+1}{n+3}\ \text{ and } \ \mu^{(2)}=\frac{1}{3},$$

we obtain from Eq. 8 that

$$\begin{array}{@{}rcl@{}} \mathcal E^{w}(X)&=& \frac{1}{2}\sum\limits_{n=1}^{+\infty} \frac{1}{n(n+1)}\ \frac{n+1}{n+3}-\frac{1}{6} \\ &=& \frac{1}{6}\sum\limits_{n=1}^{+\infty}\left( \frac{1}{n}-\frac{1}{n+3}\right)-\frac{1}{6} \\ &=& \frac{1}{6}\left( 1+\frac{1}{2}+\frac{1}{3}\right)-\frac{1}{6}=\frac{5}{36}. \end{array}$$

Moreover, we can derive the Weighted Cumulative Entropy (WCE) of X in terms of the second moment of the minimum order statistic as follows:

$$\begin{array}{@{}rcl@{}} \mathcal{CE}^{w}(X)&=&-\frac{x^{2}}{2}F(x)\log F(x)\big|_{0}^{+\infty}+\frac{1}{2}{\int}_{0}^{+\infty}x^{2}\log F(x)f(x)\mathrm dx\\ & & +\frac{1}{2}{\int}_{0}^{+\infty}x^{2}f(x)\mathrm dx \\ &=&\frac{1}{2}{\int}_{0}^{+\infty}x^{2}\log[1-(1-F(x))]f(x)\mathrm dx + \frac{\mu^{(2)}}{2} \\ &=&-\frac{1}{2}{\int}_{0}^{+\infty}x^{2}\sum\limits_{n=1}^{+\infty}\frac{(1-F(x))^{n}}{n}f(x)\mathrm dx+\frac{\mu^{(2)}}{2} \\ &=&-\frac{1}{2}\sum\limits_{n=1}^{+\infty}\frac{1}{n(n+1)}\mu^{(2)}_{1:n+1}+\frac{\mu^{(2)}}{2}, \end{array}$$
(9)

provided that $$\lim _{x\to +\infty }-\frac {x^{2}}{2}F(x)\log F(x)$$ exists and WCE is finite.

## 3 Bounds

In the following, we use Z to denote the standard version of the random variable X, i.e.,

$$Z=\frac{X-\mu}{\sigma},$$

where σ is the standard deviation of X. By construction, the same linear relation links the order statistics of X to those of Z, so we have

$$Z_{k:n}=\frac{X_{k:n}-\mu}{\sigma},$$

for $$k=1,\dots ,n$$. Hence, the mean of Xk:n and the mean of Zk:n are directly related and, in particular, for the largest order statistic, we have

$$\mathbb E(Z_{n:n})=\frac{\mu_{n:n}-\mu}{\sigma}.$$

We remark that this formula also holds when Z is defined with an arbitrary location parameter in place of the mean μ and an arbitrary scale parameter in place of the standard deviation σ.

Let us consider a sample with parent distribution Z such that $$\mathbb E(Z)=0$$ and $$\mathbb E(Z^{2})=1$$. Hartley and David (1954) and Gumbel (1954) have then shown that

$$\mathbb E(Z_{n:n})\leq\frac{n-1}{\sqrt{2n-1}}.$$

Using the Hartley-David-Gumbel bound for a parent distribution with mean μ and variance σ2, we get

$$\mu_{n:n}=\sigma \mathbb E(Z_{n:n})+\mu \leq \sigma\frac{n-1}{\sqrt{2n-1}} +\mu.$$
(10)

### Theorem 1

Let X be a random variable with mean μ and variance σ2. Then, we obtain an upper bound for the CRE of X as

$$\mathcal E(X)\leq \sum\limits_{n=1}^{+\infty}\frac{\sigma}{(n+1)\sqrt{2n+1}}\simeq 1.21 \ \sigma.$$
(11)

### Proof

From Eqs. 2 and 10, we get

$$\begin{array}{@{}rcl@{}} \mathcal E(X)&=&\sum\limits_{n=1}^{+\infty}\frac{1}{n(n+1)}\mu_{n+1:n+1}-\mu\\ &\leq&\sum\limits_{n=1}^{+\infty}\frac{1}{n(n+1)}\left( \sigma\frac{n}{\sqrt{2n+1}} +\mu\right)-\mu \\ &=&\sum\limits_{n=1}^{+\infty}\frac{\sigma}{(n+1)\sqrt{2n+1}}\simeq 1.21 \ \sigma, \end{array}$$

which is the upper bound given in Eq. 11. □
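The numerical constant in Eq. 11 can be checked with the sketch below (illustrative; since the terms decay like $$n^{-3/2}$$, an integral estimate of the tail is added to a large partial sum).

```python
import math

m = 1_000_000   # truncation point (illustrative choice)
partial = sum(1.0 / ((n + 1) * math.sqrt(2 * n + 1)) for n in range(1, m + 1))
# integral comparison bound for the remainder: sum_{n>m} < sqrt(2/m)
tail = math.sqrt(2.0 / m)
print(partial + tail)   # ≈ 1.21
```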

### Remark 3

If X is a non-negative random variable, we have $$\mu_{n+1:n+1}\geq 0$$ for all $$n\in \mathbb N$$. For this reason, using finite series approximations in Eq. 2, we get lower bounds for $$\mathcal E(X)$$ as

$$\mathcal E(X)\ge \sum\limits_{n=1}^{m}\frac{1}{n(n+1)}\mu_{n+1:n+1}-\mu,$$
(12)

for all $$m\in \mathbb N$$.

### Remark 4

If X is a non-negative random variable, we have $$\mu_{1:n+1}\geq 0$$ for all $$n\in \mathbb N$$. For this reason, using finite series approximations in Eq. 6, we get upper bounds for $$\mathcal {CE}(X)$$ as

$$\mathcal {CE}(X)\leq -\sum\limits_{n=1}^{m}\frac{1}{n(n+1)}\mu_{1:n+1}+\mu,$$

for all $$m\in \mathbb N$$.

In the following theorem, we consider a decreasing failure rate distribution (DFR). We recall that the failure rate, or hazard rate, function of X, r(x), is defined as

$$\begin{array}{@{}rcl@{}} r(x) &=& \lim\limits_{{\varDelta} x\to 0^{+}}\frac{\mathbb P(x<X\leq x+{\varDelta} x|X>x)}{{\varDelta} x}\\ &=& \frac{1}{1-F(x)}\lim\limits_{{\varDelta} x\to 0^{+}}\frac{\mathbb P(x<X\leq x+{\varDelta} x)}{{\varDelta} x}=\frac{f(x)}{1-F(x)}, \end{array}$$

where the last equality occurs under the assumption of absolute continuity. Then, the random variable X is said to be DFR when the function r(x) is decreasing in x; see Barlow and Proschan (1996) for further details on hazard rate functions and DFR distributions.

### Theorem 2

Let X be DFR. Then, we have the following lower bound for $$\mathcal {CE}(X)$$:

$$\mathcal {CE}(X) \ge \mu-\sqrt{\frac{\mu^{(2)}}{2}}\left( 2-\frac{\pi^{2}}{6}\right).$$
(13)

### Proof

Let X be DFR. From Theorem 12 of Rychlik (2001), it is known that for a sample of size n, if

$$\delta_{j,n}= \sum\limits_{k=1}^{j} \frac{1}{n+1-k}\leq 2 \ \ \ j\in\{1,\dots,n\},$$

then

$$\mu_{j:n}\leq \frac{\delta_{j,n}}{\sqrt 2} \sqrt{\mu^{(2)}}.$$

For j = 1, we have $$\delta _{1,n}=\frac {1}{n}\leq 2$$ for all $$n\in \mathbb N$$, so that

$$\mu_{1:n}\leq \frac{\sqrt{\mu^{(2)}}}{\sqrt 2 \ n}.$$

Then, from Eq. 6, we get the following lower bound for $$\mathcal {CE}(X)$$:

$$\begin{array}{@{}rcl@{}} \mathcal {CE}(X) &\ge& -\sum\limits_{n=1}^{+\infty} \frac{1}{n(n+1)^{2}} \sqrt{\frac{\mu^{(2)}}{2}}+\mu \\ &=& \mu-\sqrt{\frac{\mu^{(2)}}{2}}\left( 2-\frac{\pi^{2}}{6}\right). \end{array}$$
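It may be worth observing (a remark not made explicitly above) that for the standard exponential distribution, whose constant failure rate is a boundary DFR case, the bound in Eq. 13 is attained: with μ = 1 and μ(2) = 2, the right-hand side reduces to $$\pi^{2}/6-1$$, the exact value found in Example 3. A one-line check:

```python
import math

mu, mu2 = 1.0, 2.0   # moments of the standard exponential distribution
bound = mu - math.sqrt(mu2 / 2) * (2 - math.pi ** 2 / 6)   # RHS of Eq. 13
ce_exact = math.pi ** 2 / 6 - 1                            # CE from Example 3
print(bound, ce_exact)   # equal: the bound is attained
```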

### Remark 5

We note that we cannot provide an analogous bound for $$\mathcal E(X)$$ because the condition $$\delta_{n,n}\leq 2$$ fails for n ≥ 4.

From Eqs. 2 and 6, we get the following expression for the sum of the cumulative residual entropy and the cumulative entropy:

$$\mathcal{E}(X)+\mathcal{CE}(X)= \sum\limits_{n=1}^{+\infty}\frac{1}{n(n+1)}(\mu_{n+1:n+1}-\mu_{1:n+1}).$$
(14)

Calì et al. (2017) have shown a connection between (14) and the partition entropy studied by Bowden (2007).

### Theorem 3

We have the following bound for the sum of the CRE and the CE:

$$\mathcal{E}(X)+\mathcal{CE}(X)\leq \sum\limits_{n=1}^{+\infty}\frac{\sqrt{2}\ \sigma }{n\sqrt{n+1}}\simeq 3.09\ \sigma.$$
(15)

### Proof

From Theorem 3.24 of Arnold and Balakrishnan (1989), it is known that the bound for the difference between the expectations of the largest and smallest order statistics from a sample of size n + 1 is

$$\mu_{n+1:n+1}-\mu_{1:n+1} \leq \sigma \sqrt{2(n+1)},$$
(16)

and so using Eq. 16 in Eq. 14, we get the following bound for the sum of the CRE and the CE:

$$\mathcal{E}(X)+\mathcal{CE}(X)\leq \sum\limits_{n=1}^{+\infty}\frac{\sigma \sqrt{2(n+1)}}{n(n+1)}=\sum\limits_{n=1}^{+\infty}\frac{\sqrt{2}\ \sigma }{n\sqrt{n+1}}\simeq 3.09\ \sigma,$$

as required.
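As with Theorem 1, the constant in Eq. 15 can be verified numerically (an illustrative sketch using an integral estimate of the tail of the series):

```python
import math

m = 1_000_000   # truncation point (illustrative choice)
partial = sum(math.sqrt(2.0) / (n * math.sqrt(n + 1)) for n in range(1, m + 1))
# integral comparison bound for the remainder: sum_{n>m} < 2*sqrt(2/m)
tail = 2 * math.sqrt(2.0 / m)
print(partial + tail)   # ≈ 3.09
```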

In Table 1, we present some of the bounds obtained in this section for a few well-known distributions.

### 3.1 Symmetric Distributions

In this subsection, we obtain bounds for a symmetric parent distribution. David and Nagaraja (2003) have stated that if we have a sample $$Z_{1},\dots ,Z_{n}$$ with parent distribution Z symmetric about 0 and with variance 1, then

$$\mathbb E(Z_{n:n})\leq \frac{1}{2} \ nc(n),$$
(17)

where

$$c(n)=\left[\frac{2\left( 1-\frac{1}{\binom{2n-2}{n-1}}\right)}{2n-1}\right]^{\frac{1}{2}}.$$

Using the bound in Eq. 17 for a random variable X symmetric about mean μ and with variance σ2, we have

$$\mu_{n:n}=\sigma \mathbb E(Z_{n:n})+\mu \leq \frac{1}{2}\sigma n c(n) +\mu.$$
(18)

### Remark 6

It is clear that a non-negative random variable X that is symmetric about the mean has a bounded support.

### Theorem 4

Let X be a symmetric random variable with mean μ and variance σ2. Then, an upper bound for the CRE of X is given by

$$\mathcal E(X)\leq \frac{\sigma}{2}\sum\limits_{n=1}^{+\infty}\frac{c(n+1)}{n}.$$
(19)

### Proof

From Eqs. 2 and 18, we get

$$\begin{array}{@{}rcl@{}} \mathcal E(X)&=&\sum\limits_{n=1}^{+\infty}\frac{1}{n(n+1)}\mu_{n+1:n+1}-\mu \\ &\leq&\sum\limits_{n=1}^{+\infty}\frac{1}{n(n+1)}\left( \frac{1}{2}\sigma \ c(n+1) (n+1) +\mu\right)-\mu \\ &=&\frac{\sigma}{2}\sum\limits_{n=1}^{+\infty}\frac{c(n+1)}{n}, \end{array}$$

which is the upper bound given in Eq. 19. □

The bound in Eq. 17 is equivalent to (see Arnold and Balakrishnan 1989)

$$\mathbb E(Z_{n:n})\leq \frac{n}{\sqrt{2}} \sqrt{\frac{1}{2n-1}-B(n,n)},$$
(20)

where B(n,n) is the complete beta function defined as

$$B(\alpha,\beta)={{\int}_{0}^{1}} x^{\alpha-1}(1-x)^{\beta-1}\mathrm dx=\frac{{\varGamma}(\alpha){\varGamma}(\beta)}{{\varGamma}(\alpha+\beta)},$$

where α,β > 0; see Abramowitz and Stegun (1964) for further details. In fact,

$$\begin{array}{@{}rcl@{}} \frac{1}{2} n c(n)&=& \frac{1}{2} n \left[\frac{2\left( 1-\frac{1}{\binom{2n-2}{n-1}}\right)}{2n-1}\right]^{\frac{1}{2}} \\ &=& \frac{1}{\sqrt 2} n \left[\frac{1-\frac{(n-1)!(n-1)!}{(2n-2)!}}{2n-1} \right]^{\frac{1}{2}} \\ &=& \frac{n}{\sqrt 2} \left[\frac{1}{2n-1}-\frac{(n-1)!(n-1)!}{(2n-1)!} \right]^{\frac{1}{2}} \\ &=& \frac{n}{\sqrt 2} \left[\frac{1}{2n-1}-B(n,n) \right]^{\frac{1}{2}}. \end{array}$$
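The equivalence just derived is easy to confirm numerically; the sketch below (illustrative) compares the two expressions term by term, evaluating B(n,n) through log-gamma values to avoid factorial overflow.

```python
import math

max_diff = 0.0
for n in range(1, 30):
    # left-hand side: (1/2) n c(n) with c(n) as defined above
    c = math.sqrt(2 * (1 - 1 / math.comb(2 * n - 2, n - 1)) / (2 * n - 1))
    lhs = 0.5 * n * c
    # right-hand side, with B(n, n) = Gamma(n)^2 / Gamma(2n) via lgamma
    B = math.exp(2 * math.lgamma(n) - math.lgamma(2 * n))
    rhs = n / math.sqrt(2) * math.sqrt(1 / (2 * n - 1) - B)
    max_diff = max(max_diff, abs(lhs - rhs))
print(max_diff)   # numerically zero
```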

Then, the bound in Eq. 18 is equivalent to

$$\mu_{n:n}=\sigma \mathbb E(Z_{n:n})+\mu \leq \sigma \frac{n}{\sqrt{2}} \sqrt{\frac{1}{2n-1}-B(n,n)} +\mu.$$
(21)

Hence, we have the following theorem which is equivalent to Theorem 4.

### Theorem 5

Let X be a symmetric random variable with mean μ and variance σ2. Then, an upper bound for the CRE of X is

$$\mathcal E(X)\leq \frac{\sigma}{\sqrt{2}}\sum\limits_{n=1}^{+\infty}\frac{1}{n} \sqrt{\frac{1}{2n+1}-B(n+1,n+1)}.$$
(22)

### Example 6

Let us consider a sample with parent distribution $$X\sim N(0,1)$$. Harter (1961) tabulated the means of largest order statistics for samples of size up to 100. Hence, we compare finite series approximations of Eqs. 2 and 22; since the truncated terms are negligible, the first approximation is essentially the true value. We thus get the following result:

$$\begin{array}{@{}rcl@{}} \mathcal E(X)&\simeq& \sum\limits_{n=1}^{99}\frac{1}{n(n+1)}\mu_{n+1:n+1}\simeq 0.87486 \\ &<& \frac{1}{\sqrt{2}}\sum\limits_{n=1}^{99}\frac{1}{n} \sqrt{\frac{1}{2n+1}-B(n+1,n+1)}\simeq 0.94050, \end{array}$$

computed by truncating the series in Theorem 5.
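The truncated bound of Theorem 5 used above can be recomputed with the following sketch (illustrative; the complete beta function is again evaluated via log-gamma values).

```python
import math

def beta(a, b):
    """Complete beta function B(a, b) computed via log-gamma values."""
    return math.exp(math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b))

# truncated series of Theorem 5, n = 1, ..., 99
bound = sum(math.sqrt(1 / (2 * n + 1) - beta(n + 1, n + 1)) / n
            for n in range(1, 100)) / math.sqrt(2)
print(bound)   # ≈ 0.9405
```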

For a symmetric distribution, Arnold and Balakrishnan (1989) have stated that if we have a sample $$Z_{1},\dots ,Z_{n}$$ from a parent distribution symmetric about its mean and with variance 1, then

$$\mathbb E(Z_{n:n})-\mathbb E(Z_{1:n})\leq n \sqrt{2} \sqrt{\frac{1}{2n-1}-B(n,n)},$$
(23)

where B(n,n) is the complete beta function.

Using the bound in Eq. 23 for a parent distribution symmetric about mean μ and with variance σ2, we have

$$\mu_{n:n}-\mu_{1:n}=\sigma \left( \mathbb E(Z_{n:n})-\mathbb E(Z_{1:n})\right) \leq \sigma n\sqrt{2} \sqrt{\frac{1}{2n-1}-B(n,n)}.$$
(24)

### Theorem 6

Let X be a symmetric random variable with mean μ and variance σ2. Then, an upper bound for the sum of the CRE and the CE of X is given by

$$\mathcal{E}(X)+\mathcal{CE}(X)\leq \sqrt{2} \ \sigma\sum\limits_{n=1}^{+\infty}\frac{1}{n} \sqrt{\frac{1}{2n+1}-B(n+1,n+1)}.$$
(25)

### Proof

From Eqs. 14 and 24, we have

$$\begin{array}{@{}rcl@{}} \mathcal{E}(X)+\mathcal{CE}(X)&=& \sum\limits_{n=1}^{+\infty}\frac{1}{n(n+1)}(\mu_{n+1:n+1}-\mu_{1:n+1}) \\ &\leq&\sum\limits_{n=1}^{+\infty}\frac{1}{n(n+1)}\left( \sigma (n+1)\sqrt{2} \sqrt{\frac{1}{2n+1}-B(n+1,n+1)}\right) \\ &=&\sqrt{2} \ \sigma \sum\limits_{n=1}^{+\infty}\frac{1}{n}\sqrt{\frac{1}{2n+1}-B(n+1,n+1)}, \end{array}$$

which is the upper bound given in Eq. 25. □

## 4 Computation and Application

### 4.1 Numerical Evaluation

In this subsection, we present a method for estimating the cumulative entropies based on the relationships with moments of order statistics derived in Section 2. The method is based on finite series approximations. For the CRE, we use the result shown in Eq. 12:

$$\mathcal E(X)\ge \sum\limits_{n=1}^{m}\frac{1}{n(n+1)}\mu_{n+1:n+1}-\mu,$$

where the difference between the two sides vanishes as $$m\to +\infty$$. Starting from a sample $$\underline X=(X_{1},\dots ,X_{N})$$, we estimate the CRE by

$$\hat{\mathcal E}(X)= \sum\limits_{n=1}^{m}\frac{1}{n(n+1)}\hat{\mu}_{n+1:n+1}-\hat{\mu},$$
(26)

where $$\hat {\mu }$$ is the arithmetic mean of the realized sample and $$\hat {\mu }_{n+1:n+1}$$ is an estimate of the mean of the largest order statistic in a sample of size n + 1. To obtain this estimate, we draw several simple random subsamples of size n + 1 from $$\underline X$$, for example by using the function randsample of MATLAB, and average their maxima. Specifically, for fixed $$n\in \{1,\dots ,m\}$$, we generate K simple random samples of size n + 1 from $$\underline X$$, $$(X_{1,k},\dots ,X_{n+1,k})$$, $$k=1,\dots ,K$$, and consider the maximum of each, i.e., Xn+ 1:n+ 1,k, $$k=1,\dots ,K$$. Then, we estimate the mean of the largest order statistic in a sample of size n + 1 by

$$\hat{\mu}_{n+1:n+1}=\frac{1}{K}\sum\limits_{k=1}^{K} X_{n+1:n+1,k}.$$

Using similar techniques, we can get estimates for the second moment of the largest order statistic and the first and second moments of the smallest order statistic and, in analogy with Eq. 26, we obtain estimates for the other cumulative entropies, $$\hat {\mathcal {CE}}(X),\hat {\mathcal E}^{w}(X), \hat {\mathcal {CE}}^{w}(X)$$.
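The procedure described above can be transcribed into Python (the paper uses MATLAB's randsample; random.sample below plays the same role of drawing a simple random subsample without replacement). All parameter choices and the seed are illustrative.

```python
import random

def estimate_cre(sample, m=100, K=100):
    """Estimate of Eq. 26: truncated series of estimated means of
    largest order statistics, minus the sample mean."""
    est = 0.0
    for n in range(1, m + 1):
        # average of K subsample maxima estimates mu_{n+1:n+1}
        mx = sum(max(random.sample(sample, n + 1)) for _ in range(K)) / K
        est += mx / (n * (n + 1))
    return est - sum(sample) / len(sample)

random.seed(1)
data = [random.expovariate(1.0) for _ in range(2000)]
est = estimate_cre(data)
print(est)   # the true CRE is 1 (Example 1); expect a value slightly below it
```

The estimate falls slightly below the true value because of the truncation at m and the sampling noise in each $$\hat {\mu }_{n+1:n+1}$$.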

### Example 7

By using the function exprnd of MATLAB, we generated a sample of size N = 1000 from an exponential distribution with parameter 1. We obtain estimates for the cumulative entropies by stopping the sum at m = 100. For each sample size n from 2 to 101, we generate K = 100 random samples of size n with the function randsample. In Table 2, we compare these results with the corresponding theoretical values.

### 4.2 Application to Test for Exponentiality

We recall a result proved in Rao et al. (2004) about the maximum cumulative residual entropy for fixed moments of the first and second order.

### Theorem 7

Let X be a non-negative random variable. Then,

$$\mathcal E(X)\leq\mathcal E(X(\mu)),$$

where X(μ) is an exponentially distributed random variable with mean $$\mu =\mathbb E(X^{2})/2\mathbb E(X)$$.

In many situations, we have a sample and wish to investigate the underlying distribution; in some of these, it is of interest to test whether the data follow an exponential distribution. In the literature, there are several papers about testing exponentiality of data; see, for example, Balakrishnan et al. (2007).

From Theorem 7, we know that the CRE of X is bounded above by $$\mathbb E(X^{2})/2\mathbb E(X)$$ because, for an exponential random variable Y, we have $$\mathcal E(Y)=\mathbb E(Y)$$. If we have the realization of a sample $$\underline X=(X_{1},\dots ,X_{N})$$, we can consider the difference between the estimate of the CRE, given in Eq. 26, and $$\hat \mu ^{(2)}/2\hat \mu$$, where $$\hat \mu$$ and $$\hat \mu ^{(2)}$$ are the arithmetic means of $$X_{1},\dots ,X_{N}$$ and $${X_{1}^{2}},\dots ,{X_{N}^{2}}$$, respectively. If this difference is small in absolute value, we may suppose that the data are exponentially distributed; as the difference increases, it becomes less plausible that the sample comes from an exponential distribution.

We fixed threshold values of 0.1 and 0.25 and tested data generated in MATLAB from different distributions. For each distribution, we generated K = 1000 samples of size N = 1000, Xk, and the obtained results are presented in Table 3, where the success with threshold α, S(α), is given by

$$S(\alpha)=\#\{k\in\{1,\dots,K\}: |\hat{\mathcal E}(X_{k})-\hat\mu_{k}^{(2)}/2\hat\mu_{k}|<\alpha \}.$$
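As an illustration of the idea (a sketch, not the paper's MATLAB code; the sample sizes, the non-exponential alternative, and the seed are all arbitrary choices), the statistic can be computed for an exponential sample and for a clearly non-exponential one:

```python
import random

def cre_estimate(sample, m=100, K=100):
    """Truncated-series CRE estimate of Eq. 26 via subsample maxima."""
    est = 0.0
    for n in range(1, m + 1):
        mx = sum(max(random.sample(sample, n + 1)) for _ in range(K)) / K
        est += mx / (n * (n + 1))
    return est - sum(sample) / len(sample)

def exp_statistic(sample):
    """|CRE estimate - mu2_hat / (2 mu_hat)|: small for exponential data."""
    mu = sum(sample) / len(sample)
    mu2 = sum(x * x for x in sample) / len(sample)
    return abs(cre_estimate(sample) - mu2 / (2 * mu))

random.seed(2)
expo = [random.expovariate(1.0) for _ in range(1000)]   # exponential data
unif = [random.uniform(0, 10) for _ in range(1000)]     # non-exponential data
stat_expo = exp_statistic(expo)
stat_unif = exp_statistic(unif)
print(stat_expo, stat_unif)   # the exponential sample gives the smaller value
```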

It will, of course, be of interest to develop a formal testing procedure along these lines.