1 Introduction

We assume that \(X_1,\ldots ,X_n\) are i.i.d. random variables, and \(X_{1:n} \le \cdots \le X_{n:n}\) stand for the respective order statistics. We denote by \(F\), \(\mu = \mathbb{E }X_1\), and \(\sigma _p^p = \mathbb{E }|X_1-\mu |^p\) for some \( p \ge 1\), the respective parent distribution function, mean, and \(p\)th absolute central moment. Writing \(\sigma _p^p\) or \(\sigma _p= (\sigma _p^p)^{1/p}\) below, we tacitly assume that they are positive and finite. Let

$$\begin{aligned} F^{-1}(x) = \sup \{ u: \; F(u) \le x \}, \qquad 0 \le x < 1, \end{aligned}$$

be the upper quantile function of \(F\). We also assume that \(a_F= F^{-1}(0) > - \infty \), i.e., \(F\) has a support bounded below, and

$$\begin{aligned} \frac{F^{-1}(x)-a_F}{x}, \qquad 0 < x < 1, \end{aligned}$$
(1)

is a non-decreasing function. An equivalent condition is that \(F(x)/(x-a_F)\), \(x > a_F\), is non-increasing. This means that \(F\) may have a jump at \(a_F\), and a density function \(f\) such that \(\frac{1}{x-a_F} [F(a_F) + \int _{a_F}^x f(t)dt]\) does not increase for \(x > a_F\). If \(F(a_F)=0\), one could say that the density function is non-increasing on the average. We extend the definition to the distributions with atoms described above in order to make the family of decreasing density on the average (DDA) distributions closed.

Moreover, we consider a narrower family of decreasing failure rate on the average distribution functions (DFRA) for which functions

$$\begin{aligned} \frac{F^{-1}(1-e^{-x}) - a_F}{x} = \frac{F^{-1}(1-e^{-x}) - a_F}{1-e^{-x}}\, \frac{1-e^{-x}}{x}, \qquad x>0, \end{aligned}$$
(2)

are non-decreasing. Note that the latter factor on the RHS is decreasing, and so (2) is indeed a stronger condition than (1). In other words, for the DFRA distributions the function

$$\begin{aligned} \frac{-\ln (1-F(x))}{x-a_F} = \frac{-\ln (1-F(a_F))+ \int _{a_F}^x \frac{f(t)}{1-F(t)}\,dt}{x-a_F}, \qquad x > a_F, \end{aligned}$$

does not increase. Usually the DFRA family is treated as a subfamily of life distributions with \(a_F=0\). Here we drop this restrictive assumption.

We aim at providing precise upper bounds on \(\mathbb{E }_{F}X_{j:n}-\mu \), expressed in various scale units \(\sigma _p\), \(p \ge 1\), when the parent distributions come from the DDA and DFRA families. It is intuitively obvious that these bounds can be positive for large \(j\) (with respect to \(n\)), and negative for small ones. Rychlik (2002) presented conditions on the pairs \((j,n)\) such that \(\mathbb{E }_{F}X_{j:n} \le \mu \) when the marginal distribution is either DDA or DFRA. We recall them below. Since the relations \(\mathbb{E }_{F}X_{1:n} \le \mu \le \mathbb{E }_{F}X_{n:n}\) are valid for arbitrary distributions of the \(X_i\)’s, we focus now on \(2 \le j \le n-1\). Formula

$$\begin{aligned} f_{j:n}(x) = n \left( \begin{array}{c}n-1 \\ j-1\end{array}\right) x^{j-1}(1-x)^{n-j}, \qquad 0 < x < 1, \end{aligned}$$
(3)

defines the density function of the \(j\)th order statistic based on \(n\) i.i.d. standard uniform random variables. If \(2 \le j \le n-1\), it continuously increases from \(f_{j:n}(0)=0\) to \(f_{j:n}\left( \frac{j-1}{n-1}\right) >1\), and then decreases to \(f_{j:n}(1)=0\). Let \(0 < a_{j:n} < \frac{j-1}{n-1} < b_{j:n} <1\) denote the smaller and greater arguments at which \(f_{j:n}\) equals 1. Let

$$\begin{aligned} F_{j:n}(x)&= \sum _{i=j}^n \left( \begin{array}{c}n \\ i\end{array}\right) x^{i}(1-x)^{n-i}, \\ \bar{F}_{j:n}(x)&= \sum _{i=0}^{j-1} \left( \begin{array}{c}n\\ i\end{array}\right) x^{i}(1-x)^{n-i}, \qquad 0 < x < 1, \end{aligned}$$

denote the distribution and survival functions of the \(j\)th order statistic from the standard uniform sample. We also define

$$\begin{aligned} K_{j:n}(x)&= \frac{j}{n+1} \bar{F}_{j+1:n+1}(x) - \frac{1-x^2}{2}, \end{aligned}$$
(4)
$$\begin{aligned} L_{j:n}(x)&= \sum _{i=1}^{j} \frac{\bar{F}_{i:n}(x)}{n+1-i} - \bar{F}_{j:n}(x) \ln (1-x) - (1-x)[1-\ln (1-x)]. \qquad \quad \end{aligned}$$
(5)
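The functions (3)–(5) and the point \(a_{j:n}\) are elementary to evaluate numerically. The following Python sketch (function names are ours) implements them; \(a_{j:n}\) is located by bisection on \((0,\frac{j-1}{n-1})\), where \(f_{j:n}\) increases.

```python
import math

def f_jn(j, n, x):
    # density (3) of the j-th order statistic from n i.i.d. U(0,1) variables
    return n * math.comb(n - 1, j - 1) * x**(j - 1) * (1 - x)**(n - j)

def Fbar_jn(j, n, x):
    # survival function of the j-th uniform order statistic
    return sum(math.comb(n, i) * x**i * (1 - x)**(n - i) for i in range(j))

def K_jn(j, n, x):
    # function (4)
    return j / (n + 1) * Fbar_jn(j + 1, n + 1, x) - (1 - x**2) / 2

def L_jn(j, n, x):
    # function (5)
    s = sum(Fbar_jn(i, n, x) / (n + 1 - i) for i in range(1, j + 1))
    return s - Fbar_jn(j, n, x) * math.log(1 - x) - (1 - x) * (1 - math.log(1 - x))

def a_jn(j, n, tol=1e-12):
    # smaller root of f_{j:n}(x) = 1; for 2 <= j <= n-1 the density increases
    # from 0 past 1 on (0, (j-1)/(n-1)), so plain bisection applies
    lo, hi = tol, (j - 1) / (n - 1)
    while hi - lo > tol:
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if f_jn(j, n, mid) < 1 else (lo, mid)
    return (lo + hi) / 2
```

For instance, \(K_{j:n}(0) = \frac{j}{n+1}-\frac{1}{2}\) and \(K_{j:n}(x)\to 0\) as \(x \to 1\), facts used repeatedly in the proofs below.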

Proposition 1

(Rychlik 2002, Propositions 1 and 2) Let random variables \(X_1,\ldots ,X_n\) be i.i.d. with a common distribution function \(F\) and mean \(\mu \). If either

  (a) \(F\) is DDA and either \(j=1\) or \(K_{j:n}(a_{j:n}) \le 0\) for \(j \ge 2\), or

  (b) \(F\) is DFRA and either \(j=1\) or \(L_{j:n}(a_{j:n}) \le 0\) for \(j \ge 2\),

then \(\mathbb{E }_{F}X_{j:n} \le \mu \).

Rychlik (2002) established positive sharp upper bounds on \(\mathbb{E }_{F}(X_{j:n} - \mu )/\sigma _2\) when either \(F\) is DDA and \(K_{j:n}(a_{j:n}) > 0\) or \(F\) is DFRA and \(L_{j:n}(a_{j:n}) > 0\). Here we focus on calculating sharp non-positive bounds on \(\mathbb{E }_{F}(X_{j:n} - \mu )/\sigma _p\) under assumptions (a) and (b) of Proposition 1. The DDA and DFRA families of distributions are treated in Sects. 2 and 3, respectively. We show that in both cases these bounds amount trivially to 0 when \(p>1\). For \(p=1\), we describe strictly negative evaluations. We also determine (sequences of) distributions which attain the bounds (possibly in the limit). We conclude Sects. 2 and 3 by presenting numerical values of negative sharp bounds on \(\mathbb{E }_{F}(X_{j:n} - \mu )/\sigma _1\) in the DDA and DFRA cases, respectively, for samples of sizes \(n= 10,20 \hbox { and } 30\).

We notice that the families of DDA and DFRA distributions can be defined in a coherent way by use of the star order of distributions, introduced by Barlow et al. (1969) (see also Shaked and Shanthikumar 2007 for a comprehensive study). We say that \(F\) succeeds \(G\) in the star order iff the composition \(F^{-1} \circ G\) is starshaped, i.e. the function \([F^{-1}(G(x)) - F^{-1}(G(a_G))]/(x-a_G)\) is non-decreasing. The DDA and DFRA distribution functions are the ones that succeed the uniform and exponential distributions, respectively, in the star ordering. Our results can be extended to the families of distributions succeeding a general fixed distribution in the star order.

The expectations of order statistics coming from general i.i.d. samples were evaluated in terms of the population mean and standard deviation by Moriguti (1953). Analytic formulae for respective bounds on the sample maxima were determined by Gumbel (1954) and Hartley and David (1954). Analogous results for more general scale units, generated by central absolute moments of various orders were determined by Arnold (1985) for the sample maxima, and Rychlik (1998) for the other order statistics. The first optimal evaluations of the expectations of order statistics from the decreasing density and failure rate populations (DD and DFR, respectively) are due to Gajek and Rychlik (1998). They were represented in the scale units connected with the second raw moments of the parent distribution. Positive sharp mean-standard deviation estimates of the expected order statistics and spacings from the DD and DFR families were determined by Danielak (2003) and Danielak and Rychlik (2004), respectively.

Barlow and Proschan (1966) and Barlow et al. (1969) derived some inequalities for \(L\)-statistics implied by the star ordering of the distributions of observations [see also Arnold and Balakrishnan (1989, Section 3.4)]. Rychlik (2002) determined positive tight upper mean-standard deviation bounds on the expectations of order statistics with sufficiently large ranks, coming from DDA and DFRA populations. Similar results for the spacings are due to Danielak and Rychlik (2003). Bieniek (2006, 2008) presented analogous evaluations for the generalized order statistics based on the DFR, DFRA, DD and DDA populations.

Tight positive lower (and negative upper) bounds on the expectations of positive (negative) \(L\)-statistics with general parent distributions, expressed in various scale units, were obtained by Goroncy (2009). Negative upper estimates for order statistics less than the sample median, based on symmetric and symmetric unimodal distributions can be found in Rychlik (2009a). Rychlik (2009b, c) provided analogous results for small order statistics coming from DD and DFR populations, respectively. Negative sharp upper evaluations for the expectations of generalized order statistics coming from arbitrary populations were studied in Goroncy (2013).

2 DDA distributions

Here we collect the assumptions valid throughout this section.

Assumptions \(DDA(p)\): Random variables \(X_1,\ldots ,X_n\) are i.i.d. with a common distribution function \(F\) which is DDA, i.e. it has a finite left end-point \(a_F\) of the support, and function \(F(x)/(x-a_F)\), \(x>a_F\), is non-increasing. \(X_1\) has a finite expectation \(\mathbb{E }_{F}X_{1}= \mu \) and positive finite \(p\)th central absolute moment \(0< \sigma _p^p = \mathbb{E }_{F}|X_{1}- \mu |^p < \infty \) for some \(1 \le p <\infty \). We also assume that either \(j=1\) or \(2 \le j \le n-1\) is such that \(K_{j:n}(a_{j:n}) \le 0\) (cf. Proposition 1(a)), where function \(K_{j:n}\) is defined in (4), and \(a_{j:n}\) is the smaller of two roots of equation \(f_{j:n}(x)=1\) in \((0,1)\) [see (3)]. We put

$$\begin{aligned} M_p(\alpha )= \left[ \frac{2(p+1)\alpha (1-\alpha ^2)^p + (1+\alpha ^2)^{p+1} - (\alpha ^2 +2 \alpha -1)\, |\alpha ^2+2\alpha -1|^{p}}{2^{p+1}(p+1)} \right] ^{1/p} \qquad \end{aligned}$$
(6)

for \(0 \le \alpha < 1\) and \(p \ge 1\). In particular, for \(p=1\) we have

$$\begin{aligned} M_1 (\alpha )&= \frac{1}{8} \left[ 4 \alpha (1-\alpha ^2) + (1+\alpha ^2)^2 - \mathrm{sgn}(\alpha ^2+2\alpha -1) (\alpha ^2+2\alpha -1)^2 \right] \nonumber \\&= \left\{ \begin{array}{l@{\quad }l} \frac{(\alpha ^2+1)^2}{4}, &{} 0 \le \alpha \le \sqrt{2} -1, \\ \alpha (1-\alpha ^2), &{} \sqrt{2}-1 \le \alpha <1. \end{array} \right. \end{aligned}$$
(7)
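As a sanity check, the closed form \(M_p\) can be compared numerically with the \(p\)th absolute central moment of the function \(x\mathbf{1}_{[\alpha,1)}(x)\) on \([0,1)\) that appears in the proof of Proposition 2 below. A small Python sketch (our own naming):

```python
import math

def M_p(p, alpha):
    # closed form (6); the first numerator term, 2(p+1)*alpha*(1-alpha^2)^p,
    # matches the integral over [0, alpha) computed in the proof of Proposition 2
    t = alpha**2 + 2 * alpha - 1
    num = (2 * (p + 1) * alpha * (1 - alpha**2)**p
           + (1 + alpha**2)**(p + 1) - t * abs(t)**p)
    return (num / (2**(p + 1) * (p + 1)))**(1 / p)

def M_p_numeric(p, alpha, m=200_000):
    # midpoint-rule value of (int_0^1 |x 1_{[alpha,1)}(x) - (1-alpha^2)/2|^p dx)^{1/p}
    c = (1 - alpha**2) / 2
    s = sum(abs((x if x >= alpha else 0.0) - c)**p
            for x in ((i + 0.5) / m for i in range(m)))
    return (s / m)**(1 / p)
```

For \(p=1\) the closed form reduces to the two branches of (7), glued at \(\alpha =\sqrt{2}-1\).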

Proposition 2

(Case \({p}>{1}\)) If the above assumptions \(DDA(p)\) hold for some \(p>1\) and \(1 \le j <n\), then the bound

$$\begin{aligned} \frac{\mathbb{E }_{F}X_{j:n} -\mu }{\sigma _p} \le 0 \end{aligned}$$
(8)

is sharp, and it is attained in the limit by the family of mixtures of atoms at \( \mu -\frac{\sigma _p}{2M_p(\alpha )}(1-\alpha ^2)\) and uniform distributions on the intervals \(\left[ \mu + \frac{\sigma _p}{2M_p(\alpha )}(2 \alpha -1+\alpha ^2), \mu + \frac{\sigma _p}{2M_p(\alpha )}(1+\alpha ^2)\right] \) with weights \(\alpha \) and \(1-\alpha \), respectively, as \(\alpha \nearrow 1\).

Proof

Inequality (8) is obvious in view of Proposition 1. It suffices to check that the distributions defined in the latter claim are DDA, satisfy the moment conditions, and attain the zero bound in the limit. Note that the respective quantile functions satisfy

$$\begin{aligned} \frac{F^{-1}_\alpha (x) - \mu }{\sigma _p} = \frac{ x\, \mathbf{1}_{[\alpha ,1)} (x) - \frac{1-\alpha ^2}{2}}{M_p(\alpha )}, \qquad 0 \le x < 1. \end{aligned}$$

Therefore

$$\begin{aligned} \frac{F^{-1}_\alpha (x) - \mu + \frac{\sigma _p}{2M_p(\alpha )}(1-\alpha ^2)}{x} = \frac{\sigma _p}{M_p(\alpha )} \mathbf{1}_{[\alpha ,1)} (x), \qquad 0 \le x < 1, \end{aligned}$$

is non-decreasing for each fixed \(\alpha \), and so \(F_\alpha \), \(0 \le \alpha <1\), are DDA. Since

$$\begin{aligned} \int \limits _0^1 [ F^{-1}_\alpha (x) - \mu ] dx = \frac{\sigma _p}{M_p(\alpha )}\left[ \int \limits _0^1 x\mathbf{1}_{[\alpha ,1)} (x)\,dx - \frac{1-\alpha ^2}{2} \right] =0, \end{aligned}$$

we have \(\mathbb{E }_{\alpha } X_{1} = \mu \). Now we calculate

$$\begin{aligned}&\int \limits _0^1 \left| x\mathbf{1}_{[\alpha ,1)} (x)- \frac{1-\alpha ^2}{2} \right| ^p dx = \int \limits _0^\alpha \left( \frac{1-\alpha ^2}{2}\right) ^p dx\\&\qquad + \left\{ \begin{array}{l@{\quad }l} \int _\alpha ^{(1-\alpha ^2)/2} \left( \frac{1-\alpha ^2}{2} \!-\!x \right) ^p dx \!+\! \int _{(1-\alpha ^2)/2}^1 \left( x \!-\! \frac{1-\alpha ^2}{2} \right) ^p dx, &{} 0 \le \alpha \le \sqrt{2}-1, \\ \int _\alpha ^1 \left( x - \frac{1-\alpha ^2}{2} \right) ^p dx, &{} \sqrt{2}-1 \le \alpha < 1, \end{array}\right. \\&\quad =\frac{(1-\alpha ^2)^p\alpha }{2^p} + \left\{ \begin{array}{l@{\quad }l} \frac{(1-\alpha ^2-2\alpha )^{p+1} + (1+\alpha ^2)^{p+1}}{2^{p+1}(p+1)}, &{} 0 \le \alpha \le \sqrt{2}-1,\\ \frac{ (1+\alpha ^2)^{p+1}-(1-\alpha ^2-2\alpha )^{p+1} }{2^{p+1}(p+1)}, &{} \sqrt{2}-1 \le \alpha < 1, \end{array} \right. = M_p^p(\alpha ). \end{aligned}$$

Therefore

$$\begin{aligned} \mathbb{E }_{\alpha }|X_{1} - \mu |^p \!=\! \int \limits _0^1 | F^{-1}_\alpha (x) \!-\! \mu |^p dx= \frac{\sigma _p^p}{M_p^p(\alpha )} \int \limits _0^1 \left| x\mathbf{1}_{[\alpha ,1)} (x)\!-\! \frac{1-\alpha ^2}{2} \right| ^p dx = \sigma _p^p, \end{aligned}$$

as claimed. Furthermore

$$\begin{aligned} \frac{\mathbb{E }_{\alpha } X_{j:n}-\mu }{\sigma _p}&= \int \limits _0^1 \frac{F^{-1}_\alpha (x) - \mu }{\sigma _p} \,f_{j:n}(x)dx \\&= \int \limits _0^1 \frac{x\mathbf{1}_{[\alpha ,1)} (x)- \frac{1-\alpha ^2}{2}}{M_p(\alpha )} \,f_{j:n}(x)dx \\&= \frac{1}{M_p(\alpha )} \left[ \int \limits _\alpha ^1 x f_{j:n}(x) dx - \frac{1-\alpha ^2}{2} \right] \\&= \frac{1}{M_p(\alpha )} \left[ \frac{j}{n+1} \int \limits _\alpha ^1 f_{j+1:n+1}(x) dx - \frac{1-\alpha ^2}{2} \right] = \frac{K_{j:n}(\alpha )}{M_p(\alpha )}. \end{aligned}$$

[cf (4) and (6)]. Note that

$$\begin{aligned} K_{j:n}(\alpha )&= \frac{j}{n+1} \sum _{i=0}^j \left( \begin{array}{c}n+1 \\ i\end{array}\right) \alpha ^i (1-\alpha )^{n+1-i} - \frac{1-\alpha ^2}{2} \\&= (1-\alpha ) \left[ \frac{j}{n+1} \sum _{i=0}^j \left( \begin{array}{c}n+1 \\ i\end{array}\right) \alpha ^i (1-\alpha )^{n-i} - \frac{1+\alpha }{2} \right] , \end{aligned}$$

and the expression in the square brackets tends to \(-1\) as \( \alpha \nearrow 1\). Indeed, the sum in the brackets tends to 0 for every \(j \le n-1\), and fails to do so only for \(j=n\). However, this case is excluded from our study because \(\mathbb{E }_{F}X_{n:n} \ge \mu \). So we have \(K_{j:n}(\alpha ) \sim -(1-\alpha ) \) as \( \alpha \nearrow 1\).

For \( \alpha \ge \sqrt{2}-1\), we have

$$\begin{aligned} M_p^p(\alpha )= \frac{2(p+1)\alpha (1-\alpha ^2)^p + (1+\alpha ^2)^{p+1} - (\alpha ^2+2\alpha -1)^{p+1}}{2^{p+1}(p+1)}. \end{aligned}$$

Applying the Taylor expansions, we obtain

$$\begin{aligned} (1+\alpha ^2)^{p+1}&= 2^{p+1} - (p+1) 2^{p+1}(1-\alpha ) + \mathcal O ((1-\alpha )^2), \\ (\alpha ^2+2\alpha -1)^{p+1}&= 2^{p+1} - (p+1) 2^{p+2}(1-\alpha ) + \mathcal O ((1-\alpha )^2). \end{aligned}$$

Hence \(M_p^p(\alpha ) = (1- \alpha ) + \mathcal O ((1-\alpha )^{\min \{2,p\}})\), and so

$$\begin{aligned} \frac{K_{j:n}(\alpha )}{M_p(\alpha )} \sim - (1-\alpha )^{1-1/p} \rightarrow 0 \quad \mathrm{as}\; \alpha \nearrow 1, \end{aligned}$$

which is the desired statement. \(\square \)
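The closing limit argument is easy to confirm numerically: \(K_{j:n}(\alpha)\sim -(1-\alpha)\) while \(M_p^p(\alpha)\sim 1-\alpha\), so for \(p>1\) the ratio behaves like \(-(1-\alpha)^{1-1/p}\). A sketch with our own helper names:

```python
import math

def Fbar_jn(j, n, x):
    # survival function of the j-th uniform order statistic
    return sum(math.comb(n, i) * x**i * (1 - x)**(n - i) for i in range(j))

def K_jn(j, n, x):
    # function (4)
    return j / (n + 1) * Fbar_jn(j + 1, n + 1, x) - (1 - x**2) / 2

def M_p(p, alpha):
    # closed form (6)
    t = alpha**2 + 2 * alpha - 1
    num = (2 * (p + 1) * alpha * (1 - alpha**2)**p
           + (1 + alpha**2)**(p + 1) - t * abs(t)**p)
    return (num / (2**(p + 1) * (p + 1)))**(1 / p)

j, n, p = 3, 10, 2.0
# ratios K/M_p along alpha -> 1; they should stay negative and vanish
ratios = [K_jn(j, n, 1 - eps) / M_p(p, 1 - eps) for eps in (1e-2, 1e-4, 1e-6)]
```

For \(p=2\) the ratios shrink roughly like \(-\sqrt{1-\alpha}\), in agreement with the proof.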

Proposition 3

(Case \({p}={1}\)) If assumptions \(DDA(p)\) are satisfied for \(p=1\) and \( 1 \le j < n\), then

$$\begin{aligned} \frac{\mathbb{E }_{F}X_{j:n} -\mu }{\sigma _1} \le \sup _{0 \le \alpha <1} \frac{K_{j:n}(\alpha )}{M_1(\alpha )}, \end{aligned}$$
(9)

[see (4) and (7)]. If the supremum in (9) is attained at some \(0 \le \alpha = \alpha _{j:n} <1\), then the bound is attained by the mixture of the atom at \( \mu -\frac{\sigma _1}{2M_1(\alpha )}(1-\alpha ^2)\) and the uniform distribution on \(\left[ \mu + \frac{\sigma _1}{2M_1(\alpha )}(2 \alpha -1+\alpha ^2), \mu + \frac{\sigma _1}{2M_1(\alpha )}(1+\alpha ^2)\right] \) with weights \(\alpha \) and \(1-\alpha \), respectively. If

$$\begin{aligned} \sup _{0 \le \alpha <1} \frac{K_{j:n}(\alpha )}{M_1(\alpha )} = \lim _{ \alpha \nearrow 1} \frac{K_{j:n}(\alpha )}{M_1(\alpha )} = - \frac{1}{2}, \end{aligned}$$

then the bound is attained in the limit by the sequences of mixtures described above as \(\alpha \nearrow 1\).

Proof

By Proposition 1, we have

$$\begin{aligned} \sup _{F \in DDA(1)} \frac{\mathbb{E }_{F}X_{j:n} -\mu }{\sigma _1} \le 0, \end{aligned}$$

where \(DDA(1)\) denotes the family of non-degenerate \(DDA\) distributions with a finite first moment. Below we prove the implications

$$\begin{aligned} K_{j:n}(a_{j:n}) = 0&\Rightarrow \sup _{F \in DDA(1)} \frac{\mathbb{E }_{F}X_{j:n} -\mu }{\sigma _1} = 0, \end{aligned}$$
(10)
$$\begin{aligned} \sup _{F \in DDA(1)} \frac{\mathbb{E }_{F}X_{j:n} -\mu }{\sigma _1} < 0&\Rightarrow \mathrm{either}\; j=1 \;\mathrm{or}\; K_{j:n}(a_{j:n}) < 0, \end{aligned}$$
(11)

which together imply that both (10) and (11) are equivalence relations.

We first observe that the mixtures defined in the statements of the proposition are DDA, and satisfy both the moment conditions. It suffices to recall respective arguments of the proof of Proposition 2 for specific \(p=1\). We also get

$$\begin{aligned} \frac{\mathbb{E }_{\alpha } X_{j:n} -\mu }{\sigma _1} = \frac{K_{j:n}(\alpha )}{M_1(\alpha )}, \qquad 0 \le \alpha < 1. \end{aligned}$$

Moreover,

$$\begin{aligned} K'_{j:n}(\alpha ) = \alpha [1-f_{j:n}(\alpha )], \qquad 0 \le \alpha < 1. \end{aligned}$$
(12)

This means that for \(2 \le j \le n-1\) function \(K_{j:n}(\alpha )\) increases from \(\frac{j}{n+1} - \frac{1}{2}\) at 0 to \(K_{j:n}(a_{j:n})\), then decreases to \(K_{j:n}(b_{j:n})\), and ultimately increases to 0 at 1. Function (7) is strictly positive for \(0 \le \alpha <1\), and tends to 0 as \(\alpha \nearrow 1\). By the de l’Hospital rule, we also have

$$\begin{aligned} \lim _{ \alpha \nearrow 1} \frac{K_{j:n}(\alpha )}{M_1(\alpha )} = \lim _{ \alpha \nearrow 1} \frac{\alpha [1-f_{j:n}(\alpha )]}{1-3 \alpha ^2} = - \frac{1}{2}. \end{aligned}$$

Therefore \(K_{j:n}(a_{j:n}) =0\) implies

$$\begin{aligned} \sup _{0 \le \alpha <1} \frac{K_{j:n}(\alpha )}{M_1(\alpha )} = \frac{K_{j:n}(a_{j:n})}{M_1(a_{j:n})}= 0 \ge \sup _{F \in DDA(1)} \frac{\mathbb{E }_{F}X_{j:n} -\mu }{\sigma _1}, \end{aligned}$$

which verifies (10). One should note, though, that equality \(K_{j:n}(a_{j:n})=0\) for some \((j,n)\) is very unlikely, because \(a_{j:n}\) is usually an irrational number, and \(K_{j:n}\) is a polynomial with rational coefficients.

Now we prove that

$$\begin{aligned} \frac{\mathbb{E }_{F} X_{j:n} -\mu }{\sigma _1} < 0, \qquad F \in DDA(1), \end{aligned}$$
(13)

implies

$$\begin{aligned} \sup _{F \in DDA(1)} \frac{\mathbb{E }_{F} X_{j:n} -\mu }{\sigma _1} = \sup _{0 \le \alpha <1} \frac{\mathbb{E }_{\alpha } X_{j:n} -\mu }{\sigma _1} = \sup _{0 \le \alpha <1} \frac{K_{j:n}(\alpha )}{M_1(\alpha )} <0. \end{aligned}$$
(14)

We can write

$$\begin{aligned} \frac{\mathbb{E }_{F} X_{j:n} -\mu }{\sigma _1} = \int \limits _0^1 \frac{F^{-1} (x) - \mu }{\sigma _1} [f_{j:n}(x)-1]dx = \int \limits _0^1 g(x)f(x)dx = T_f(g). \end{aligned}$$

Function \(f= f_{j:n}-1\) is fixed and integrates to \(0\). Functions \(g =\frac{F^{-1} - \mu }{\sigma _1} \), \(F \in DDA(1)\), belong to the subset \(\mathcal G \) of elements of \(L^1([0,1), dx)\) which are (right-continuous versions of) non-decreasing, starshaped functions with unit norms satisfying

$$\begin{aligned} T_1(g) = \int \limits _0^1g(x)dx = \int \limits _0^1 \frac{F^{-1} (x) - \mu }{\sigma _1} dx =0. \end{aligned}$$

In particular,

$$\begin{aligned} g_\alpha (x) = \frac{1}{M_1(\alpha )} \left[ x\mathbf{1}_{[\alpha ,1)} (x)- \frac{1-\alpha ^2}{2} \right] \in \mathcal G , \qquad 0 \le \alpha <1. \end{aligned}$$

Since \(T_f(g) <0\) for all \(F \in DDA(1)\), the normalizing transformation \( g \mapsto \tilde{g} = \frac{g}{-T_f(g)}\) maps \(\mathcal G \) onto the convex subset \(\tilde{\mathcal{G }}\) of \( L^1([0,1), dx)\) whose elements are right-continuous, non-decreasing, starshaped, and satisfy

$$\begin{aligned} T_1(\tilde{g})&= \int \limits _0^1 \tilde{g}(x)dx =0, \\ T_f(\tilde{g})&= \int \limits _0^1 \tilde{g}(x)f(x)dx =-1. \end{aligned}$$

In particular,

$$\begin{aligned} \tilde{g}_\alpha (x) = \frac{-1}{K_{j:n}(\alpha )} \left[ x\mathbf{1}_{[\alpha ,1)} (x)- \frac{1-\alpha ^2}{2} \right] \in \tilde{\mathcal{G }}, \qquad 0 \le \alpha <1. \end{aligned}$$

Note that \(||\tilde{g}|| = \frac{1}{-T_f(g)}\), \( \tilde{g}\in \tilde{\mathcal{G }}\), and the problem of maximizing \(T_f(g)\), \( g \in \mathcal G \), is equivalent to that of maximizing \(|| \tilde{g}||\), \( \tilde{g}\in \tilde{\mathcal{G }}\). The norm functional is convex, and so

$$\begin{aligned} || \alpha \tilde{g}_1 + (1-\alpha ) \tilde{g}_2|| \le \alpha ||\tilde{g}_1|| +(1- \alpha ) ||\tilde{g}_2|| \le \max \{ ||\tilde{g}_1||, ||\tilde{g}_2|| \} \end{aligned}$$

for arbitrary \( \tilde{g}_1, \tilde{g}_2 \in \tilde{\mathcal{G }}\) and \(0 \le \alpha \le 1\).

Suppose that for some \(0 < \alpha _1 < \cdots < \alpha _k < \alpha _{k+1} =1\), and \(\beta _{-1} <\beta _0=0 < \beta _1 < \cdots < \beta _{k}\),

$$\begin{aligned} h_{\alpha , \beta } (x) = \beta _{-1} + \sum _{i=1}^k \beta _i \, x \, \mathbf{1}_{[\alpha _{i},\alpha _{i+1})}(x) \in \tilde{\mathcal{G }}. \end{aligned}$$

We show that \(h_{\alpha , \beta }\) is a convex combination of functions \( \tilde{g}_{\alpha _i}\), \(i=1,\ldots ,k\). Firstly, \( \tilde{g}_{\alpha _i}\), \(i=1,\ldots ,k\), are linearly independent. Their linear combinations \( \tilde{h}_{\alpha , \gamma } (x) = \sum _{i=1}^k \gamma _i \tilde{g}_{\alpha _i}(x)\) satisfy

$$\begin{aligned} T_1( \tilde{h}_{\alpha , \gamma }) = \int \limits _0^1 \tilde{h}_{\alpha , \gamma }(x) \,dx = 0, \end{aligned}$$

and

$$\begin{aligned} \frac{ \tilde{h}_{\alpha , \gamma }(x) - \tilde{h}_{\alpha , \gamma }(0)}{x} = -\sum _{i=1}^k \frac{\gamma _i}{K_{j:n}(\alpha _i)} \mathbf{1}_{[\alpha _{i},1)}(x)= \sum _{i=1}^k \delta _i \mathbf{1}_{[\alpha _{i},\alpha _{i+1})}(x) \end{aligned}$$

for \(\delta _m = -\sum _{i=1}^m \frac{\gamma _i}{K_{j:n}(\alpha _i)}\), \(m=1, \ldots ,k\). In particular,

$$\begin{aligned} h_{\alpha , \beta } (x) = -\sum _{i=1}^k K_{j:n}(\alpha _i)(\beta _i-\beta _{i-1}) \tilde{g}_{\alpha _i}(x) = \sum _{i=1}^k \gamma _i \tilde{g}_{\alpha _i}(x). \end{aligned}$$

Starshapedness of \(h_{\alpha , \beta }\) and condition

$$\begin{aligned} -1 = T_f( h_{\alpha , \beta }) = \sum _{i=1}^k \gamma _i T_f(\tilde{g}_{\alpha _i}) = - \sum _{i=1}^k \gamma _i \end{aligned}$$

imply that \( \gamma _i\), \(i=1,\ldots ,k\), are non-negative, and sum up to \(1\). In consequence,

$$\begin{aligned} || h_{\alpha , \beta } || \le \max _{1 \le i \le k} || \tilde{g}_{\alpha _i}||. \end{aligned}$$

Consider now arbitrary \(\tilde{g} \in \tilde{\mathcal{G }}\). For \(k=1,2,\ldots \), define

$$\begin{aligned} g_k(x) = \tilde{g}(0) + \sum _{i=1}^{2^k-1} \frac{2^k}{i} [ \tilde{g}(i/ 2^{k}) - \tilde{g}(0)]\, x\, \mathbf{1}_{[i /2^{k}, (i+1)/2^{k})}(x). \end{aligned}$$

The functions are non-decreasing, starshaped, and converge monotonically to \(\tilde{g}\) on \([0,1)\). By the Lebesgue monotone convergence theorem,

$$\begin{aligned} ||\tilde{g}-g_k|| = \int \limits _0^1 [\tilde{g}(x) - g_k(x)]dx \searrow 0, \end{aligned}$$

and in consequence \(T_1(g_k) \rightarrow 0\), and \(T_f(g_k) \rightarrow -1\) as \( k \rightarrow \infty \). Furthermore, each

$$\begin{aligned} \tilde{g}_k = \frac{g_k - T_1(g_k)}{-T_f(g_k)} \in \tilde{\mathcal{G }}, \end{aligned}$$

and so is a convex combination of \(\tilde{g}_{i/2^{k}}\), \(i=1, \ldots , 2^k-1\), and

$$\begin{aligned} || \tilde{g}_k - \tilde{g} ||&= \left| \left| \frac{g_k }{-T_f(g_k)} - g_k + g_k - \tilde{g} - \frac{T_1(g_k)}{-T_f(g_k)} \right| \right| \\&\le \left| \frac{1}{-T_f(g_k)}-1 \right| ||g_k|| + || g_k - \tilde{g}|| + \frac{|T_1(g_k)|}{-T_f(g_k)} \rightarrow 0. \end{aligned}$$

By continuity of the norm functional,

$$\begin{aligned} \max _{i=1, \ldots , 2^k-1} || \tilde{g}_{i/2^{k}}||&\ge || \tilde{g}_k|| \rightarrow || \tilde{g}||, \\ \sup _{0 \le \alpha <1} || \tilde{g}_\alpha ||&\ge || \tilde{g}||, \qquad \qquad \tilde{g} \in \tilde{\mathcal{G }}, \end{aligned}$$

and in consequence

$$\begin{aligned} \sup _{0 \le \alpha <1}\frac{K_{j:n}(\alpha )}{M_1(\alpha )} = \sup _{0 \le \alpha <1} T_f(g_\alpha ) \ge \sup _{g \in \mathcal G } T_f(g) = \sup _{F \in DDA(1)} \frac{\mathbb{E }_{F}X_{j:n} -\mu }{\sigma _1}. \end{aligned}$$

So we proved that (13) implies (14).

If \(j=1\) and \(\sigma _1 >0\), we have (13). By (12), \(K_{1:n}(\alpha ) <0\), \(0 \le \alpha <1\), and so \(\sup _{0 \le \alpha <1}\frac{K_{1:n}(\alpha )}{M_1(\alpha )}<0\). If \(j \ge 2\) and \(K_{j:n}(a_{j:n})=0\), then condition (13) does not hold, and

$$\begin{aligned} \sup _{F \in DDA(1)} \frac{\mathbb{E }_{F}X_{j:n} -\mu }{\sigma _1} = \frac{\mathbb{E }_{a_{j:n}}X_{j:n} -\mu }{\sigma _1} =0. \end{aligned}$$

Summing up, under condition \(K_{j:n}(a_{j:n}) \le 0\), we have

$$\begin{aligned} \sup _{F \in DDA(1)} \frac{\mathbb{E }_{F}X_{j:n} -\mu }{\sigma _1} = \sup _{0 \le \alpha <1}\frac{K_{j:n}(\alpha )}{M_1(\alpha )} \le 0, \end{aligned}$$

and the attainability conditions coincide with the ones presented in the statement of Proposition 3. \(\square \)

By Proposition 3,

$$\begin{aligned} \sup _{F \in DDA(1)} \frac{\mathbb{E }_{F}X_{j:n} -\mu }{\sigma _1} \ge \lim _{\alpha \nearrow 1}\frac{K_{j:n}(\alpha )}{M_1(\alpha )} = - \frac{1}{2}. \end{aligned}$$

It turns out that the supremum is equal to \(- \frac{1}{2}\) for small ranks \(j\). These bounds are attained in the limit. For larger \(j\), the negative bounds fall between \( - \frac{1}{2}\) and 0, and they are attained by particular combinations of the degenerate and uniform distributions. These combinations always contain atoms, because for the uniform distribution we have

$$\begin{aligned} \frac{\mathbb{E }_{0}X_{j:n} -\mu }{\sigma _1} = \frac{K_{j:n}(0)}{M_1(0)} = 4 \left( \frac{j}{n+1} - \frac{1}{2} \right) \ge 4 \left( \frac{1}{n+1} - \frac{1}{2} \right) > - \frac{1}{2} \end{aligned}$$

when \(n \ge 2\). Observe that the condition \(\frac{K_{j:n}(0)}{M_1(0)} = 4 \left( \frac{j}{n+1} - \frac{1}{2} \right) \le 0\) is necessary for non-positivity of the bounds in the DDA case. It is necessary and sufficient for non-positivity of \(\mathbb{E }_{F}X_{j:n} -\mu \) for the parent distributions with decreasing density functions (cf Rychlik 2009b). Numerical examples show that conditions \(K_{j:n}(a_{j:n}) \le 0\) and \( j \le \frac{n+1}{2}\) do not differ much for small \(n\).
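The closeness of the two conditions can be inspected directly. The sketch below (helper names ours) finds, for a given \(n\), the largest \(j\) with \(K_{j:n}(a_{j:n}) \le 0\) and compares it with \(\lfloor (n+1)/2\rfloor\); since \(K_{j:n}\) increases on \([0,a_{j:n}]\), the condition \(K_{j:n}(a_{j:n})\le 0\) forces \(K_{j:n}(0)\le 0\), i.e. \(j \le \frac{n+1}{2}\).

```python
import math

def f_jn(j, n, x):
    return n * math.comb(n - 1, j - 1) * x**(j - 1) * (1 - x)**(n - j)

def Fbar_jn(j, n, x):
    return sum(math.comb(n, i) * x**i * (1 - x)**(n - i) for i in range(j))

def K_jn(j, n, x):
    return j / (n + 1) * Fbar_jn(j + 1, n + 1, x) - (1 - x**2) / 2

def a_jn(j, n, tol=1e-12):
    # smaller root of f_{j:n} = 1, by bisection on the increasing part of the density
    lo, hi = tol, (j - 1) / (n - 1)
    while hi - lo > tol:
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if f_jn(j, n, mid) < 1 else (lo, mid)
    return (lo + hi) / 2

def max_rank(n):
    # largest consecutive rank 2 <= j <= n-1 with K_{j:n}(a_{j:n}) <= 0
    jmax = 1
    for j in range(2, n):
        if K_jn(j, n, a_jn(j, n)) > 0:
            break
        jmax = j
    return jmax
```

For \(n = 10, 20, 30\) the two thresholds differ by at most one rank, illustrating the remark above.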

Table 1 contains exemplary values of negative tight upper bounds, greater than \(- \frac{1}{2}\), on the expectations of standardized order statistics \( \mathbb{E }_{F}(X_{j:n} -\mu )/\sigma _1\) for \(n =10,20, 30\). If for a given \(n\) the numerical bounds are presented for \(j= j_1(n), \ldots ,j_2(n)\), one should conclude that the respective bounds are equal to \(- \frac{1}{2}\) for \(j < j_1(n)\), and are positive for \(j > j_2(n)\). Each bound is accompanied by the value of the argument \(\alpha \) which maximizes the RHS of (9) for the given pair \((j,n)\). It fully determines the mixture distribution for which the bound is attained. The simplest interpretation of this parameter is that it is equal to the probability of the atom in the optimal mixture. In the cases of our numerical analysis the atom probabilities range between 0.1817 and 0.2988.
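Bounds of this kind can be reproduced by a crude grid search over \(\alpha \) in (9); the sketch below (our code, not part of the original computations) does this for a single pair \((j,n)\).

```python
import math

def Fbar_jn(j, n, x):
    return sum(math.comb(n, i) * x**i * (1 - x)**(n - i) for i in range(j))

def K_jn(j, n, x):
    return j / (n + 1) * Fbar_jn(j + 1, n + 1, x) - (1 - x**2) / 2

def M_1(alpha):
    # piecewise closed form (7)
    if alpha <= math.sqrt(2) - 1:
        return (alpha**2 + 1)**2 / 4
    return alpha * (1 - alpha**2)

def bound(j, n, m=20_000):
    # approximate sup_{0 <= alpha < 1} K_{j:n}(alpha)/M_1(alpha) on a grid,
    # together with the maximizing alpha (= atom probability of the mixture)
    best, best_a = -math.inf, 0.0
    for i in range(m):
        a = i / m
        r = K_jn(j, n, a) / M_1(a)
        if r > best:
            best, best_a = r, a
    return best, best_a

b, a_opt = bound(5, 10)
```

The returned \(\alpha \) equals the probability of the atom in the optimal mixture, as described above.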

Table 1 Negative upper bounds on the expectations of standardized order statistics \((\mathbb{E }X_{j:n}-\mu )/\sigma _1\), \(n =10,20, 30\), coming from DDA populations

3 DFRA distributions

The assumptions of this section are the following.

Assumptions \(DFRA(p)\): Let \(X_1,\ldots ,X_n\) be i.i.d. random variables with a parent distribution function \(F\) which has the DFRA property, i.e. its support starts at a finite point \(a_F\), and the ratio \(-\ln (1-F(x))/(x-a_F)\) is non-increasing for \(x >a_F\). Each \(X_i\) has a mean \(\mu \in \mathbb R \), and \(p\)th central absolute moment \(0< \sigma _p^p < \infty \) for some \(1 \le p < \infty \). Further we assume that either \(j=1\) or \(2 \le j \le n-1\) satisfies \(L_{j:n}(a_{j:n})\le 0\), where \(L_{j:n}\) is defined in (5), and \(a_{j:n}\) is the smaller of two roots of \(f_{j:n}(x)=1\) belonging to \((0,1)\). We define \(N_p(\alpha ) = (N_p^p(\alpha ))^{1/p}\), \( p \ge 1\), \( \alpha \ge 0\), where

$$\begin{aligned} N_p^p(\alpha )&= (\alpha +1)^p e^{-p\alpha } (1-e^{-\alpha }) + \exp (-(\alpha +1)e^{-\alpha }) \\&\quad \times \left\{ \begin{array}{l@{\quad }l} \left[ \int _0^{(\alpha +1)e^{-\alpha }-\alpha } x^p e^{x} dx + \Gamma (p+1)\right] , &{} \alpha \le \alpha _*, \\ \int _{\alpha -(\alpha +1)e^{-\alpha }}^\infty x^p e^{-x} dx, &{} \alpha \ge \alpha _*, \end{array}\right. \end{aligned}$$

with \(\alpha _{*} \approx 0.80647\) uniquely determined by the equation \( \alpha e^\alpha = \alpha +1\). We also use here

$$\begin{aligned} Q_1(\beta ) = N_1(-\ln (1-\beta )) = \left\{ \begin{array}{l@{\quad }l} 2 \left( \frac{1-\beta }{e} \right) ^{1-\beta }, &{} 0 \le \beta \le \beta _*, \\ 2 \beta (1-\beta ) [1-\ln (1-\beta )], &{} \beta _* \le \beta <1, \end{array} \right. \qquad \quad \end{aligned}$$
(15)

with \(\beta _* = 1-e^{-\alpha _*} \approx 0.5536\) being the solution to the equation \(- \ln (1- \beta ) = \frac{1}{\beta }- 1\).
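The constants \(\alpha _*\) and \(\beta _*\), and the continuity of \(Q_1\) at \(\beta _*\), are easy to check numerically (a sketch, with our own naming):

```python
import math

def bisect(f, lo, hi, tol=1e-13):
    # plain bisection; assumes f(lo) and f(hi) have opposite signs
    while hi - lo > tol:
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if f(lo) * f(mid) > 0 else (lo, mid)
    return (lo + hi) / 2

# alpha_* is the unique positive root of alpha * e^alpha = alpha + 1
alpha_star = bisect(lambda a: a * math.exp(a) - (a + 1), 0.5, 1.0)
beta_star = 1 - math.exp(-alpha_star)

def Q1(b):
    # formula (15)
    if b <= beta_star:
        return 2 * ((1 - b) / math.e)**(1 - b)
    return 2 * b * (1 - b) * (1 - math.log(1 - b))
```

The two branches of (15) agree at \(\beta _*\), so \(Q_1\) is continuous there.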

Proposition 4

(Case \({p}>{1}\)) If assumptions \(DFRA(p)\) are satisfied for some \(p>1\) and \(1 \le j <n\), then the evaluation

$$\begin{aligned} \frac{\mathbb{E }_{F} X_{j:n} - \mu }{\sigma _p} \le 0 \end{aligned}$$

is sharp, and the equality is attained in the limit by the family of mixtures of the atom at \(\mu - \frac{ \sigma _p (1-\beta )[1-\ln (1-\beta )]}{N_p(-\ln (1-\beta ))}\) with probability \(\beta \), and the exponential distribution with location \(\mu + \frac{ \sigma _p [-(1-\beta )-\beta \ln (1-\beta )]}{N_p(-\ln (1-\beta ))}\) and scale \( \frac{ \sigma _p }{N_p(-\ln (1-\beta ))}\) with probability \(1-\beta \), as \(\beta \nearrow 1\).

Proof

It suffices to check the attainability conditions. For the family of mixture distributions, the quantile functions satisfy

$$\begin{aligned} \frac{F_\beta ^{-1}(y)-\mu }{\sigma _p} = \frac{-\ln (1-y)\mathbf{1}_{[\beta ,1)}(y) - (1-\beta )[1-\ln (1-\beta )]}{N_p(-\ln (1-\beta ))}. \end{aligned}$$

Substituting \(x = -\ln (1-y)\) and setting \(\alpha = -\ln (1-\beta )\), we obtain

$$\begin{aligned} \frac{F_{\beta }^{-1}\left( 1-e^{-x}\right) -\mu }{\sigma _p} = \frac{x \mathbf{1}_{[\alpha ,\infty )}(x) - (\alpha +1)e^{-\alpha }}{N_p(\alpha )}. \end{aligned}$$

This shows that \(F_\beta \), \( 0 \le \beta <1\), are DFRA. Since

$$\begin{aligned} \mathbb{E }_{\beta } X_{1} - \mu&= \int \limits _0^1 [F^{-1}_\beta (y)-\mu ]dy = \int \limits _0^\infty [F^{-1}_\beta (1-e^{-x})-\mu ]e^{-x} dx\\&= \frac{\sigma _p}{N_p(\alpha )}\int \limits _0^\infty [ x \mathbf{1}_{[\alpha ,\infty )}(x) - (\alpha +1)e^{-\alpha }]e^{-x}dx =0, \end{aligned}$$

and

$$\begin{aligned}&\frac{N_p^p(\alpha )}{\sigma _p^p} \mathbb{E }_{\beta } | X_{1} - \mu |^p = \frac{N_p^p(\alpha )}{\sigma _p^p}\int \limits _0^1 |F^{-1}_\beta (y)-\mu |^pdy \\&\quad = \int \limits _0^\infty | x \mathbf{1}_{[\alpha ,\infty )}(x) - (\alpha +1)e^{-\alpha }|^pe^{-x}dx \\&\quad = \int \limits _0^\alpha (\alpha +1)^p e^{-p\alpha } e^{-x} dx + \exp (-(\alpha +1)e^{-\alpha }) \int \limits _{\alpha -(\alpha +1)e^{-\alpha }}^\infty |x|^p e^{-x} dx \\&\quad = (\alpha +1)^p e^{-p\alpha } (1-e^{-\alpha }) + \exp (-(\alpha +1)e^{-\alpha })\\&\qquad \times \left\{ \begin{array}{l@{\quad }l} \int _{\alpha -(\alpha +1)e^{-\alpha }}^0 (-x)^p e^{- x} dx + \int _0^\infty x^p e^{-x} dx, &{} \alpha \le \alpha _*, \\ \int _{\alpha -(\alpha +1)e^{-\alpha }}^\infty x^p e^{-x} dx, &{} \alpha \ge \alpha _*, \end{array} \right. = N_p^p(\alpha ), \end{aligned}$$

distribution functions \(F_\beta \), \( 0 \le \beta <1\), satisfy both the moment conditions. Furthermore, we calculate

$$\begin{aligned}&\frac{N_p(\alpha )}{\sigma _p} (\mathbb{E }_{\beta } X_{j:n} - \mu ) = \frac{N_p(\alpha )}{\sigma _p} \int \limits _0^1 [F^{-1}_\beta (y)-\mu ]f_{j:n}(y)dy \\&\quad = \int \limits _0^\infty [x \mathbf{1}_{[\alpha ,\infty )}(x) - (\alpha +1)e^{-\alpha }] f_{j:n}(1-e^{-x}) e^{-x}dx \\&\quad = \int \limits _\alpha ^\infty x f_{j:n}(1-e^{-x}) e^{-x}dx - (\alpha +1)e^{-\alpha } \\&\quad = \alpha \bar{F}_{j:n}(1-e^{-\alpha }) + \int \limits _\alpha ^\infty \bar{F}_{j:n}(1-e^{-x}) dx - (\alpha +1)e^{-\alpha } \\&\quad = \sum _{i=0}^{j-1} \left( \begin{array}{c}n \\ i\end{array}\right) \int \limits _\alpha ^\infty (1-e^{-x})^i e^{-(n-1-i)x} e^{-x} dx + \alpha \bar{F}_{j:n}(1-e^{-\alpha }) - (\alpha +1)e^{-\alpha } \\&\quad = \sum _{i=0}^{j-1} \frac{1}{n-i} \int \limits _{1-e^{-\alpha }}^1 f_{i+1:n}(y)dy + \alpha \bar{F}_{j:n}(1-e^{-\alpha }) - (\alpha +1)e^{-\alpha } \\&\quad = \sum _{i=1}^{j} \frac{\bar{F}_{i:n}(1-e^{-\alpha })}{n+1-i} + \alpha \bar{F}_{j:n}(1-e^{-\alpha }) - (\alpha +1)e^{-\alpha } \\&\quad = L_{j:n}(1-e^{-\alpha }) \le 0. \end{aligned}$$

Accordingly, it remains to prove that for \(p>1\)

$$\begin{aligned} \lim _{\beta \nearrow 1} \frac{\mathbb{E }_{\beta } X_{j:n}-\mu }{\sigma _p} = \lim _{\alpha \nearrow \infty } \frac{L_{j:n}(1-e^{-\alpha })}{N_p(\alpha )} = 0. \end{aligned}$$

We observe that

$$\begin{aligned} \lim _{\alpha \nearrow \infty } \frac{L_{j:n}(1-e^{-\alpha })}{\alpha e^{-\alpha }} = -1, \end{aligned}$$

because

$$\begin{aligned} \lim _{\alpha \nearrow \infty } \sum _{i=1}^{j} \frac{\bar{F}_{i:n}(1-e^{-\alpha })}{(n+1-i)e^{-(n+1-j)\alpha }}&= \left( \begin{array}{c}n\\ j-1\end{array}\right) \frac{1}{n+1-j}, \\ \lim _{\alpha \nearrow \infty } \frac{ \alpha \bar{F}_{j:n}(1-e^{-\alpha })}{\alpha e^{-(n+1-j)\alpha }}&= \left( \begin{array}{c}n\\ j-1\end{array}\right) , \\ \lim _{\alpha \nearrow \infty } \frac{(\alpha +1)e^{-\alpha }}{\alpha e^{-\alpha }}&= 1. \end{aligned}$$
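For \(j<n\), the first two limits show that the sum term and the \(\alpha \bar{F}_{j:n}\) term are of order \(\alpha e^{-(n+1-j)\alpha }\) at most, hence negligible relative to \(\alpha e^{-\alpha }\), so \(L_{j:n}(1-e^{-\alpha })\approx -(\alpha +1)e^{-\alpha }\) for large \(\alpha \). A quick numerical illustration (the helper names are ours):

```python
import math

def sf_jn(y, j, n):
    """Survival function of the j-th order statistic of n i.i.d. uniforms."""
    return sum(math.comb(n, i) * y**i * (1 - y)**(n - i) for i in range(j))

def L(alpha, j, n):
    """Closed form of L_{j:n}(1 - e^{-alpha}) derived in the proof."""
    b = 1 - math.exp(-alpha)
    return (sum(sf_jn(b, i, n) / (n + 1 - i) for i in range(1, j + 1))
            + alpha * sf_jn(b, j, n) - (alpha + 1) * math.exp(-alpha))

# the ratio L_{j:n}(1-e^{-alpha}) / (alpha e^{-alpha}) tends to -1;
# for j < n it behaves like -(alpha + 1)/alpha up to exponentially small terms
for a in (10.0, 20.0, 40.0):
    print(a, L(a, 2, 5) / (a * math.exp(-a)))
```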

Also,

$$\begin{aligned} \lim _{\alpha \nearrow \infty } \frac{N_p^p(\alpha )}{\alpha ^p e^{-\alpha }} = 1, \end{aligned}$$

because

$$\begin{aligned} \lim _{\alpha \nearrow \infty } \frac{(\alpha +1)^p e^{-p\alpha }(1-e^{-\alpha })}{\alpha ^p e^{-p\alpha }}&= 1, \\ \lim _{a \nearrow \infty } \frac{\int _a^\infty x^pe^{-x}dx}{a^p e^{-a}}&= \lim _{a \nearrow \infty } \frac{-a^p e^{-a}}{pa^{p-1}e^{-a} - a^p e^{-a}}\\&= \lim _{a \nearrow \infty } \frac{1}{1-p/a} = 1, \\ \lim _{\alpha \nearrow \infty } \frac{\exp (-(\alpha +1)e^{-\alpha }) \int _{\alpha -(\alpha +1)e^{-\alpha }}^\infty x^p e^{-x} dx }{\alpha ^p e^{-\alpha }}&= \lim _{\alpha \nearrow \infty } \frac{(\alpha -(\alpha \!+\!1)e^{-\alpha })^p e^{-\alpha }}{\alpha ^p e^{-\alpha }} =1. \end{aligned}$$
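For integer \(p\) the incomplete gamma integral is elementary, e.g. \(\int _m^\infty x^2e^{-x}dx=(m^2+2m+2)e^{-m}\), so the convergence \(N_p^p(\alpha )/(\alpha ^pe^{-\alpha })\rightarrow 1\) can be observed exactly for \(p=2\). A sketch with our own helper names; it assumes \(\alpha \ge \alpha _*\), i.e. \(\alpha -(\alpha +1)e^{-\alpha }\ge 0\):

```python
import math

def N2_sq(alpha):
    """N_2^2(alpha) for alpha >= alpha_*, using the closed form
    int_m^inf x^2 e^{-x} dx = (m^2 + 2m + 2) e^{-m}."""
    c = (alpha + 1) * math.exp(-alpha)   # the centering constant (alpha+1)e^{-alpha}
    m = alpha - c                        # lower integration limit
    assert m >= 0, "requires alpha >= alpha_*"
    return ((alpha + 1)**2 * math.exp(-2 * alpha) * (1 - math.exp(-alpha))
            + math.exp(-c) * (m * m + 2 * m + 2) * math.exp(-m))

# ratios N_2^2(alpha) / (alpha^2 e^{-alpha}), decreasing towards 1
ratios = [N2_sq(a) / (a**2 * math.exp(-a)) for a in (5.0, 10.0, 50.0, 200.0)]
```

The ratios decrease monotonically towards 1, roughly at the rate \(1+2/\alpha \), in agreement with the three limits above.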

Therefore

$$\begin{aligned} \lim _{\alpha \nearrow \infty } \frac{L_{j:n}(1-e^{-\alpha })}{N_p(\alpha )} = \lim _{\alpha \nearrow \infty } \frac{-\alpha e^{-\alpha }}{\alpha e^{-\alpha /p}} = 0, \end{aligned}$$

which ends the proof. \(\square \)

Proposition 5

(Case \({p}={1}\)) Under the assumptions \(DFRA(p)\) with \(p=1\), we have

$$\begin{aligned} \frac{\mathbb{E }_{F} X_{j:n} - \mu }{\sigma _p} \le \sup _{0 \le \beta < 1} \frac{L_{j:n}(\beta )}{Q_1(\beta )}, \end{aligned}$$
(16)

where the numerator and denominator of the RHS are defined in (5) and (15), respectively. If the supremum of the RHS of (16) is attained at some \( 0 \le \beta <1\), then the bound is attained by the mixture of the atom at \(\mu - \frac{ \sigma _1 (1-\beta )[1-\ln (1-\beta )]}{Q_1(\beta )}\) with probability \(\beta \), and the exponential distribution with location \(\mu + \frac{ \sigma _1 [-(1-\beta )-\beta \ln (1-\beta )]}{Q_1(\beta )}\) and scale \( \frac{ \sigma _1 }{Q_1(\beta )}\) with probability \(1-\beta \). Otherwise, i.e. when

$$\begin{aligned} \sup _{0 \le \beta < 1} \frac{L_{j:n}(\beta )}{Q_1(\beta )} = \lim _{\beta \nearrow 1} \frac{L_{j:n}(\beta )}{Q_1(\beta )} = - \frac{1}{2}, \end{aligned}$$

then the bound is attained in the limit by the sequences of the mixture distributions as \( \beta \nearrow 1\).

Proof

It is similar to that of Proposition 3, and we describe only the main ideas. Recalling the arguments of the proof of Proposition 4, we observe that the mixture distributions described in Proposition 5 are DFRA, and have first raw and central absolute moments \(\mu \) and \(\sigma _1\), respectively, for all \( 0 \le \beta <1\). Moreover,

$$\begin{aligned} \frac{\mathbb{E }_{\beta } X_{j:n}-\mu }{\sigma _1} = \frac{L_{j:n}(\beta )}{Q_1(\beta )}, \qquad 0 \le \beta <1, \end{aligned}$$

holds. Since

$$\begin{aligned} L'_{j:n}(\beta ) = -\ln (1-\beta )[1-f_{j:n}(\beta )], \end{aligned}$$

the function \(L_{j:n}\) itself is first decreasing and then increasing for \(j=1\), while for \(j \ge 2\) it is increasing on \([0,a_{j:n}]\), decreasing on \([a_{j:n},b_{j:n}]\), and ultimately increasing. Function (15) is strictly positive. Since, by l'Hôpital's rule,

$$\begin{aligned} \lim _{\beta \nearrow 1} \frac{L_{j:n}(\beta )}{Q_1(\beta )} = \lim _{\beta \nearrow 1} \frac{ -\ln (1-\beta )[1-f_{j:n}(\beta )]}{2(1-2\beta ) [ -\ln (1-\beta )]+2\beta } = - \frac{1}{2}, \end{aligned}$$

the supremum of the RHS in (16) is non-negative iff \(L_{j:n}(a_{j:n}) \ge 0\), and this certainly implies that

$$\begin{aligned} \sup _{F \in DFRA(1)} \frac{\mathbb{E }_{F} X_{j:n}-\mu }{\sigma _1} \ge 0. \end{aligned}$$
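The formula for \(L'_{j:n}\) rests on the identity \(\sum _{i=1}^j f_{i:n}(\beta )/(n+1-i)=\bar{F}_{j:n}(\beta )/(1-\beta )\), and can be confirmed by finite differences. A sketch (helper names ours), with \(L_{j:n}(\beta )\) written via the substitution \(\alpha =-\ln (1-\beta )\):

```python
import math

def f_jn(y, j, n):
    """Density of the j-th order statistic of n i.i.d. uniforms."""
    return n * math.comb(n - 1, j - 1) * y**(j - 1) * (1 - y)**(n - j)

def sf_jn(y, j, n):
    """Survival function of the j-th order statistic."""
    return sum(math.comb(n, i) * y**i * (1 - y)**(n - i) for i in range(j))

def L(beta, j, n):
    """L_{j:n}(beta), obtained by substituting alpha = -ln(1-beta)."""
    a = -math.log(1 - beta)
    return (sum(sf_jn(beta, i, n) / (n + 1 - i) for i in range(1, j + 1))
            + a * sf_jn(beta, j, n) - (a + 1) * (1 - beta))

def L_prime(beta, j, n):
    """The claimed derivative: -ln(1-beta) [1 - f_{j:n}(beta)]."""
    return -math.log(1 - beta) * (1 - f_jn(beta, j, n))

# central-difference check of the derivative formula
h = 1e-6
for j, n in ((1, 10), (3, 10)):
    for b in (0.2, 0.5, 0.8):
        num = (L(b + h, j, n) - L(b - h, j, n)) / (2 * h)
        assert abs(num - L_prime(b, j, n)) < 1e-5
```

The sign of \(L'_{j:n}\) is that of \(1-f_{j:n}(\beta )\), which yields the monotonicity pattern of \(L_{j:n}\) described above.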

Now we show that under the condition

$$\begin{aligned} \frac{\mathbb{E }_{F} X_{j:n}-\mu }{\sigma _1} < 0, \qquad F \in DFRA(1), \end{aligned}$$

(which, in particular, implies \(L_{j:n}(a_{j:n}) < 0\)), we have

$$\begin{aligned} \sup _{F \in DFRA(1)} \frac{\mathbb{E }_{F} X_{j:n}-\mu }{\sigma _1} = \sup _{0 \le \beta <1} \frac{\mathbb{E }_{\beta } X_{j:n}-\mu }{\sigma _1} = \sup _{0 \le \beta <1}\frac{L_{j:n}(\beta )}{Q_1(\beta )} <0. \nonumber \\ \end{aligned}$$
(17)

First we note that

$$\begin{aligned} \frac{\mathbb{E }_{F} X_{j:n}-\mu }{\sigma _1}&= \int \limits _0^\infty \frac{F^{-1}(1-e^{-x})-\mu }{\sigma _1} [f_{j:n}(1-e^{-x})-1] e^{-x}dx \\&= \int \limits _{0}^\infty h(x) f(x) e^{-x} dx, \end{aligned}$$

where \(f(x) = f_{j:n}(1-e^{-x})-1\), and \(h(x) = \frac{F^{-1}(1-e^{-x})-\mu }{\sigma _1}\) is an arbitrary element of the set \(\mathcal H \subset L^1(\mathbb R _+, e^{-x}dx)\), which consists of the (right-continuous versions of) non-decreasing, starshaped functions such that

$$\begin{aligned} S_1(h)&= \int \limits _0^\infty h(x) e^{-x} dx = 0, \\ ||h||&= \int \limits _0^\infty |h(x)| e^{-x} dx = 1. \end{aligned}$$

We also have

$$\begin{aligned} S_f(h) = \int \limits _0^\infty h(x)f(x) e^{-x} dx <0, \qquad h \in \mathcal H . \end{aligned}$$

The family \( \tilde{\mathcal{H }}\) of functions \(\tilde{h}(x) = \frac{h(x) }{- S_f(h)}\) is a convex set in \( L^1(\mathbb R _+, e^{-x}dx)\) which consists of non-decreasing starshaped functions with \(S_1(\tilde{h})=0\) and \(S_f(\tilde{h})=-1\). Since \(||\tilde{h}|| = - \frac{1}{S_f(h)}\), the problem of maximizing \(S_f\) over \(\mathcal H \) and that of maximizing the norm over \(\tilde{\mathcal{H }}\) are equivalent.

Using the convexity property of the norm functional, we prove that the latter problem is solved by the elements

$$\begin{aligned} \tilde{h}_\alpha (x) = \frac{x \mathbf{1}_{[\alpha , \infty )} (x) - (\alpha +1)e^{-\alpha }}{-L_{j:n}(1-e^{-\alpha })}, \qquad 0 \le \alpha < \infty , \end{aligned}$$

of a parametric class contained in \( \tilde{\mathcal{H }}\). Every \(\tilde{h} \in \tilde{\mathcal{H }}\) can be approximated by piecewise linear discontinuous functions

$$\begin{aligned} h_k(x)&= \tilde{h}(0) + \sum _{i=1}^{4^k-1} \frac{2^k}{i} \left[ \tilde{h} \left( \frac{i}{2^k}\right) - \tilde{h}(0) \right] \, x \,\mathbf{1}_{\left[ \frac{i}{2^k}, \frac{i+1}{2^k} \right) }(x)\\&+ \frac{ \tilde{h}(2^k)- \tilde{h}(0)}{2^k}\, x \,\mathbf{1}_{[ 2^k, \infty )}(x), \end{aligned}$$

\(k=1,2,\ldots ,\) so that \(h_k \nearrow \tilde{h}\) and \(||\tilde{h}-h_k|| \searrow 0\) as \( k \rightarrow \infty \). Functions

$$\begin{aligned} \tilde{h}_k= \frac{h_k - S_1(h_k)}{-S_f(h_k)} \in \tilde{\mathcal{H }}, \end{aligned}$$

and are convex combinations of \( \tilde{h}_{\frac{i}{2^k}}\), \(i=1, \ldots , 4^k\). We have

$$\begin{aligned} ||\tilde{h}|| \leftarrow ||\tilde{h}_k || \le \sup _{1 \le i \le 4^k} || \tilde{h}_{\frac{i}{2^k}}|| \le \sup _{0 \le \alpha <\infty } || \tilde{h}_\alpha ||. \end{aligned}$$

This allows us to deduce (17) which ends the proof. \(\square \)

As in the DDA case, we conclude from (5) that

$$\begin{aligned} - \frac{1}{2} \le \sup _{F \in DFRA(1)} \frac{\mathbb{E }_{F} X_{j:n}-\mu }{\sigma _1} \le 0. \end{aligned}$$
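This sandwich can be illustrated numerically if one takes \(Q_1(\beta )\) to be the \(p=1\) analogue of \(N_p\), namely \(Q_1(\beta )=\int _0^\infty |x\mathbf{1}_{[\alpha ,\infty )}(x)-(\alpha +1)e^{-\alpha }|e^{-x}dx\) with \(\alpha =-\ln (1-\beta )\); this is an assumption on the form of (15), which is not restated in this section. Under this reading (helper names ours):

```python
import math

def sf_jn(y, j, n):
    """Survival function of the j-th order statistic of n i.i.d. uniforms."""
    return sum(math.comb(n, i) * y**i * (1 - y)**(n - i) for i in range(j))

def L(beta, j, n):
    """L_{j:n}(beta), via the substitution alpha = -ln(1-beta)."""
    a = -math.log(1 - beta)
    return (sum(sf_jn(beta, i, n) / (n + 1 - i) for i in range(1, j + 1))
            + a * sf_jn(beta, j, n) - (a + 1) * (1 - beta))

def Q1(beta, upper=60.0, steps=200_000):
    """Assumed form of (15): int_0^inf |x 1_{[a,inf)}(x) - (a+1)e^{-a}| e^{-x} dx
    with a = -ln(1-beta), evaluated by the midpoint rule."""
    a = -math.log(1 - beta)
    c = (a + 1) * math.exp(-a)
    h = upper / steps
    return sum(abs(((k + 0.5) * h if (k + 0.5) * h >= a else 0.0) - c)
               * math.exp(-(k + 0.5) * h) * h for k in range(steps))

# Q1(0) = E|X - 1| = 2/e for the standard exponential, and L/Q1 -> -1/2
vals = [L(b, 2, 10) / Q1(b) for b in (0.9, 0.99, 0.9999)]
```

The ratios approach \(-\frac{1}{2}\) from below as \(\beta \nearrow 1\), in agreement with the limit in Proposition 5.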

The supremum is not attained by the exponential distribution (without any contribution of the atom) because

$$\begin{aligned} \frac{L_{j:n}(0)}{Q_1(0)} = \frac{e}{2} \left( \sum _{i=1}^j \frac{1}{n+1-i} - 1 \right) > \frac{e}{2} \left( \frac{1}{n} - 1 \right) > -\frac{1}{2} \end{aligned}$$

for all \(n \ge 2\). The sign of \(\frac{L_{j:n}(0)}{Q_1(0)}\) coincides with the sign of the upper bound on \( \frac{\mathbb{E }_{F} X_{j:n}-\mu }{\sigma _1}\) for the DFR populations (cf. Rychlik 2002, 2009c).
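The value \(\frac{e}{2}\bigl (\sum _{i=1}^j \frac{1}{n+1-i}-1\bigr )\) is the standardized expectation for the standard exponential parent, since \(\mathbb{E }X_{j:n}=\sum _{i=1}^j\frac{1}{n+1-i}\) (the Rényi representation) and \(\sigma _1=\mathbb{E }|X-1|=2/e\). A quick numerical cross-check (helper names ours):

```python
import math

def f_jn(y, j, n):
    """Density of the j-th order statistic of n i.i.d. uniforms."""
    return n * math.comb(n - 1, j - 1) * y**(j - 1) * (1 - y)**(n - j)

def mean_exp_os(j, n, upper=60.0, steps=200_000):
    """E X_{j:n} for a standard exponential parent, via the midpoint rule for
    int_0^inf x f_{j:n}(1 - e^{-x}) e^{-x} dx."""
    h = upper / steps
    return sum((k + 0.5) * h * f_jn(1 - math.exp(-(k + 0.5) * h), j, n)
               * math.exp(-(k + 0.5) * h) * h for k in range(steps))

def standardized(j, n):
    """(E X_{j:n} - mu)/sigma_1 for the exponential: (e/2)(sum - 1)."""
    return (math.e / 2) * (sum(1 / (n + 1 - i) for i in range(1, j + 1)) - 1)
```

For instance, \(\mathbb{E }X_{2:5}=\frac{1}{5}+\frac{1}{4}=0.45\) is recovered by the numerical integration.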

Table 2 provides the tight upper bounds, negative and different from \(-\frac{1}{2}\), on the standardized expectations of order statistics from DFRA populations for the sample sizes \(n=10, 20\), and \(30\). If, for a given \(n\), the bounds are presented for \(j=j_1(n), \ldots , j_2(n)\), then the respective evaluations amount to \(-\frac{1}{2}\) for \(j=1, \ldots , j_1(n)-1\), and are positive for \(j=j_2(n)+1, \ldots , n\). Together with the bound values, we list the probabilities of the atoms of the DFRA distributions attaining the bounds. They are substantially greater than the respective values in Table 1 for the DDA distributions, and also greater than the contributions of the uniform distributions to the mixtures.

Table 2 Negative upper bounds on the expectations of standardized order statistics \((\mathbb{E }X_{j:n}-\mu )/\sigma _1\), \(n =10,20, 30\), coming from DFRA populations

We finally note that we have presented simple examples of families of distributions which attain, in the limit, the zero bounds of Propositions 2 and 4, and the bound \(-\frac{1}{2}\) of Propositions 3 and 5. Slight modifications of these distributions would not violate the asymptotic properties. The other bounds determined in Propositions 3 and 5 are attained by distributions determined uniquely up to location and scale transformations.