1 Introduction

Consider an i.i.d. sample \(X_1, \ldots ,X_n\) drawn from a common cumulative distribution function F with finite expectation

$$\begin{aligned} \mu = {\mathbb {E}}X_1 = \int _0^1 F^{-1}(x) dx, \end{aligned}$$

and positive and finite variance

$$\begin{aligned} \sigma ^2 = {\mathbb {E}}(X_1-\mu )^2 = \int _0^1 [F^{-1}(x)-\mu ]^2 dx. \end{aligned}$$

Let \(X_{1:n}\le \cdots \le X_{n:n}\) denote the order statistics based on \(X_1,\ldots ,X_n\). The density function of the jth order statistic from the standard uniform i.i.d. sample of size n is given by

$$\begin{aligned} f_{j:n}(x)=nB_{j-1,n-1}(x),\quad 0<x<1, \quad j=1,\ldots ,n, \end{aligned}$$

where

$$\begin{aligned} B_{k,n}(x)= \left( \begin{array}{l}{n}\\ {k}\end{array}\right) x^k (1-x)^{n-k}, \quad 0<x<1, \quad k=0,\ldots , n, \end{aligned}$$

denote the Bernstein polynomials of order n. Then

$$\begin{aligned} F_{j:n}(x)= \sum _{k=j}^{n} B_{k,n}(x), \end{aligned}$$

is the respective distribution function.
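These densities and distribution functions are straightforward to evaluate numerically; the following minimal Python sketch (function names are ours, not from the paper) implements them directly from the Bernstein-polynomial identities above.

```python
from math import comb

def bernstein(k, n, x):
    # Bernstein polynomial B_{k,n}(x) = C(n,k) x^k (1-x)^(n-k)
    return comb(n, k) * x ** k * (1 - x) ** (n - k)

def f_os(j, n, x):
    # density f_{j:n}(x) = n * B_{j-1,n-1}(x) of the j-th order statistic
    # from a standard uniform i.i.d. sample of size n
    return n * bernstein(j - 1, n - 1, x)

def F_os(j, n, x):
    # distribution function F_{j:n}(x) = sum_{k=j}^{n} B_{k,n}(x)
    return sum(bernstein(k, n, x) for k in range(j, n + 1))
```

For instance, `F_os(3, 5, 0.5)` returns 0.5, reflecting the symmetry of the sample median of five uniform observations.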

Order statistics and their linear combinations (called L-statistics for short), especially spacings, play an important role in mathematical statistics and other fields of applied probability. In the present paper we consider the problem of establishing sharp bounds on the expectations of order statistics and spacings coming from a restricted class of distributions defined by means of the convex transform order of van Zwet (1964). We say that F precedes W in the convex transform order (\(F\,{\preceq _C}\,W\)) if the composition \(F^{-1}W\) is concave on the support of W (equivalently, \(WF^{-1}\) is convex on the support of F). We assume here that W is a fixed absolutely continuous distribution function with a positive density w on an interval [0, d) for some \(0<d \le + \infty \). If W is either the uniform or the exponential distribution function, then every \(F\,{\preceq _C}\,W\) has an increasing density (\(F\sim \) ID) or an increasing failure rate (\(F\sim \) IFR), respectively.

Many papers devoted to bounds on the expectations of order statistics and L-statistics in various nonparametric models have been published so far. The most classical results, presenting bounds on the sample range, the maximum, other order statistics and their differences, based on sequences of independent observations with arbitrary identical distributions and expressed in standard deviation units, were determined by Plackett (1947), Gumbel (1954), Hartley and David (1954) and Moriguti (1953), respectively. Rychlik (1998) and Goroncy (2009) established the optimal positive and nonpositive upper bounds, respectively, on the expectations of L-statistics coming from arbitrary distributions.

Sharper evaluations of order statistics and spacings from the classes of distributions with decreasing density and failure rate functions were presented by Gajek and Rychlik (1998), Danielak (2003) and Danielak and Rychlik (2004). They were extended by Rychlik (2002), and Danielak and Rychlik (2003) to more general families of distributions with monotone density and failure rate functions on the average, which are generated by the star ordering. Recently, Rychlik (2014) described the precise upper bounds on the expectations of extreme order statistics based on the ID and IFR distributions. Goroncy and Rychlik (2015) provided general tools for obtaining sharp upper bounds on the expectations of single order statistics and spacings expressed in terms of the population mean and standard deviation, for the families of all parent distributions preceding various W in the convex transform order. They also characterized the distributions which attain the bounds, and specified the general results for the distributions with increasing density functions.

We aim to complete these results with the analogous upper bounds for the distributions with increasing failure rates, i.e. for \({\mathbb {E}}\frac{X_{j:n}-\mu }{\sigma }, 1\,{\le }\,j\,{\le }\,n\), and \({\mathbb {E}}\frac{X_{j+1:n}-X_{j:n}}{\sigma }, 1\,{\le }\,j\,{\le }\,n-1\), with parent distribution functions F from the IFR family. A general method of establishing positive sharp upper bounds on the expectations of properly normalized linear combinations of order statistics \({\mathbb {E}} \sum _{i=1}^n c_i \frac{X_{i:n}-\mu }{\sigma }\), for arbitrarily fixed \(\mathbf {c}=(c_1,\ldots ,c_n)\in {\mathbb {R}}^n\), and many other statistical functionals based on restricted nonparametric families of distributions was presented in Gajek and Rychlik (1996). In our setup, it is based on the following sequence of relations

$$\begin{aligned} {\mathbb {E}} \sum _{i=1}^n c_i \frac{X_{i:n}-\mu }{\sigma }= & {} \int _0^1 \frac{F^{-1}(x) - \mu }{\sigma } \sum _{i=1}^n c_i [f_{i:n}(x)-1]dx \nonumber \\= & {} \int _0^d \frac{F^{-1}W(x) - \mu }{\sigma } \sum _{i=1}^n c_i [f_{i:n}W(x)-1]w(x)dx \nonumber \\\le & {} \int _0^d \frac{F^{-1}W(x) - \mu }{\sigma } P_{\preceq _cW}\left( \sum _{i=1}^n c_i [f_{i:n}W-1]\right) (x) w(x)dx \nonumber \\\le & {} \left( \int _0^d \left[ \frac{F^{-1}W(x) - \mu }{\sigma }\right] ^2 w(x)dx\right. \nonumber \\&\times \,\left. \int _0^d \left[ P_{\preceq _cW}\left( \sum _{i=1}^n c_i [f_{i:n}W-1]\right) (x)\right] ^2 w(x)dx \right) ^{\frac{1}{2}} \nonumber \\= & {} \left| \left| P_{\preceq _cW}\left( \sum _{i=1}^n c_i [f_{i:n}W-1]\right) \right| \right| \left( \int _0^1 \left[ \frac{F^{-1}(x) - \mu }{\sigma }\right] ^2 dx \right) ^{\frac{1}{2}} \nonumber \\= & {} \left| \left| P_{\preceq _cW}\left( \sum _{i=1}^n c_i [f_{i:n}W-1]\right) \right| \right| , \end{aligned}$$
(1.1)

where \(P_{\preceq _cW}h\) denotes the projection of a function \(h \in L^2([0,d),w(x)dx)\) onto the convex cone

$$\begin{aligned} {\mathcal {C}}_{\preceq _cW}=\{g\in L^2([0,d),w(x)dx)\,{:}\,g(x)\text { is nondecreasing and concave}\}, \end{aligned}$$
(1.2)

and \(|| \cdot ||\) is the norm of \(L^2([0,d),w(x)dx)\). If the norm in (1.1) is positive, then the bound is attained by the distribution function satisfying

$$\begin{aligned} \frac{F^{-1}W(x) - \mu }{\sigma }= \frac{P_{\preceq _cW}\left( \sum _{i=1}^n c_i [f_{i:n}W-1]\right) (x)}{||P_{\preceq _cW}\left( \sum _{i=1}^n c_i [f_{i:n}W-1]\right) ||}, \quad 0<x <d. \end{aligned}$$
(1.3)

Goroncy and Rychlik (2015) described the projections on (1.2) of some particular functions h satisfying the conditions which we list below.

(A) Assume that h is a bounded, twice differentiable function on [0, d) such that

$$\begin{aligned} \int \limits _{0}^dh(x)w(x)dx = 0. \end{aligned}$$

Moreover, h is strictly decreasing on (0, a), strictly convex increasing on (a, b), strictly concave increasing on (b, c) with \(h(c)> 0 \ge h(0)\), and strictly decreasing on (c, d) with \(h(d) =h(0)\) for some \(0\le a<b<c< d\).

The assumptions are similar to those presented in Danielak and Rychlik (2004). Let us recall some auxiliary functions, which are necessary for determining the projection of h satisfying (A) onto (1.2). We begin with

$$\begin{aligned} T(\beta ) = h(\beta ) [1-W(\beta )]- \int _\beta ^d h(x)w(x)dx, \quad 0 \le \beta \le d. \end{aligned}$$
(1.4)

It is easy to check that if T vanishes at some \(\beta \), then \(g(x)=h(\beta )\) is the optimal constant approximation of h restricted to the interval \((\beta ,d)\) in the norm of \(L^2((\beta ,d), w(x)dx)\). Moreover, there exists a unique \(a < \beta _* < c\) such that \(T(\beta _*)=0\). Further we take

$$\begin{aligned} \lambda _*(\alpha )= & {} \frac{\int \nolimits _0^\alpha (x-\alpha )[h(x)-h(\alpha )] w(x)dx}{\int \nolimits _0^\alpha (x-\alpha )^2w(x)dx}, \end{aligned}$$
(1.5)

which is the slope of the optimal \(L^2\)-approximation of h restricted to \((0,\alpha )\) by linear functions \(\lambda (x-\alpha ) +h(\alpha )\) with the fixed right-end value \(h(\alpha )\). We also put

$$\begin{aligned} Y(\alpha )= & {} \lambda _*(\alpha )-h'(\alpha ),\end{aligned}$$
(1.6)
$$\begin{aligned} Z(\alpha )= & {} \int \limits _0^\alpha [h(x)-h(\alpha ) -\lambda _*(\alpha )(x-\alpha )]w(x)dx, \quad 0\le \alpha < d. \end{aligned}$$
(1.7)

If \(Y(\alpha )\ge 0\) for some \(b< \alpha < c\), then the function arising by gluing \(\lambda _* (\alpha ) (x-\alpha ) +h(\alpha )\) to the left of \(\alpha \) with h(x) to the right is concave in a neighborhood of \(\alpha \). If \(Z(\alpha )=0\), then \(\lambda _* (\alpha ) (x-\alpha ) +h(\alpha )\) is the projection of \(h_{(0,\alpha )}\) onto the subspace of all linear functions in \(L^2((0,\alpha ), w(x)dx)\).
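For concreteness, the auxiliary quantities (1.4)–(1.7) can be evaluated by straightforward quadrature. The sketch below (helper names are ours; the weight is specialized to \(w(x)=e^{-x}\) and the infinite endpoint d is truncated at 40) illustrates their defining properties on a toy function h satisfying \(\int h\,w\,dx=0\).

```python
import math

D = 40.0  # numerical stand-in for d = +infinity under the weight e^{-x}

def w(x):
    return math.exp(-x)

def quad(f, a, b, m=4000):
    # midpoint rule on [a, b]
    step = (b - a) / m
    return sum(f(a + (i + 0.5) * step) for i in range(m)) * step

def T(h, beta):
    # (1.4): T(beta) = h(beta)[1 - W(beta)] - int_beta^d h(x) w(x) dx
    return h(beta) * math.exp(-beta) - quad(lambda x: h(x) * w(x), beta, D)

def lam_star(h, alpha):
    # (1.5): slope of the optimal linear fit of h on (0, alpha) pinned at h(alpha)
    num = quad(lambda x: (x - alpha) * (h(x) - h(alpha)) * w(x), 0.0, alpha)
    den = quad(lambda x: (x - alpha) ** 2 * w(x), 0.0, alpha)
    return num / den

def Z(h, alpha):
    # (1.7): integrated residual of the pinned linear fit
    l = lam_star(h, alpha)
    return quad(lambda x: (h(x) - h(alpha) - l * (x - alpha)) * w(x), 0.0, alpha)
```

The defining orthogonality of \(\lambda _*\) — the residual \(h-h(\alpha )-\lambda _*(x-\alpha )\) is orthogonal to \(x-\alpha \) on \((0,\alpha )\) — then holds up to quadrature error.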

It turns out that the projection of h onto the convex cone (1.2) is either linear increasing, then equal to h, and finally constant (written l-h-c for brevity), or first linear and ultimately constant (l-c, respectively). This is precisely described in the following proposition (cf. Goroncy and Rychlik 2015, Proposition 1).

Proposition 1

Assume that the zero \(a < \beta _* <c\) of (1.4) belongs to (bc), the set \({\mathcal {Y}}=\{\alpha \in (b,\beta _*):\; Y(\alpha )\ge 0,\; Z(\alpha )=0\}\) is nonempty, and \(\alpha _*=\inf \{\alpha \in {\mathcal {Y}}\}\). Then

$$\begin{aligned} P_{\preceq _cW}h(x)=\left\{ \begin{array}{ll} h(\alpha _*)+\lambda _*(\alpha _*)(x-\alpha _*),&{}\quad 0\le x<\alpha _*,\\ h(x),&{}\quad \alpha _*\le x<\beta _*,\\ h(\beta _*),&{}\quad \beta _*\le x<d, \end{array}\right. \end{aligned}$$

is the projection of h on (1.2). Otherwise we define

$$\begin{aligned} P_{\alpha }h(x)= \frac{\int _\alpha ^d h(y)w(y)dy}{1-W(\alpha )} \left[ \frac{ (x-\alpha ) \mathbf {1}_{(0,\alpha )}(x)}{-\int _0^\alpha (y-\alpha )w(y)dy} +1 \right] , \quad \beta _* \le \alpha < d, \end{aligned}$$
(1.8)

with

$$\begin{aligned} ||P_{\alpha }h||^2 = \frac{\int _\alpha ^d h(x)w(x)dx \left[ \int \nolimits _0^\alpha (x-\alpha )^2w(x)dx-\left( \int \nolimits _0^\alpha (x-\alpha )w(x)dx\right) ^2 \right] }{-[1-W(\alpha )]\int _0^\alpha (x-\alpha )w(x)dx}. \end{aligned}$$

Let \({\mathcal {Z}}\) denote the set of arguments \(\alpha \ge \beta _*\) satisfying

$$\begin{aligned} \frac{\int \nolimits _\alpha ^d h(x)w(x)dx}{1-W(\alpha )} = - \frac{\int \nolimits _0^\alpha (x-\alpha )h(x)w(x)dx \int \nolimits _0^\alpha (x-\alpha )w(x)dx}{\int \nolimits _0^\alpha (x-\alpha )^2w(x)dx-\left( \int \nolimits _0^\alpha (x-\alpha )w(x)dx\right) ^2} >0. \end{aligned}$$
(1.9)

Then \({\mathcal {Z}} \) is nonempty, and \(P_{\preceq _cW}h(x)=P_{\alpha _*}h(x)\) for the unique \(\alpha _* = \arg \max _{\alpha \in {\mathcal {Z}}} ||P_{\alpha }h||^2\).

The original version of the proposition contained only the necessary condition (1.9) on the parameter \(\alpha \) determining the projection of the l-c shape. Here we complete the statement by indicating the parameter \(\alpha _*\) of the l-c projection precisely. The set \({\mathcal {Z}}\) is nonempty by assumption, because the l-c projection is the only option once l-h-c is excluded. Then the breaking point \(\alpha _* > \beta _*\) of the broken-line projection has to satisfy (1.9). If \(\alpha \in {\mathcal {Z}}\), then the linear increasing part

$$\begin{aligned} l_\alpha (x) = \frac{\int _\alpha ^d h(y)w(y)dy}{1-W(\alpha )} \left[ \frac{ (x-\alpha ) }{-\int _0^\alpha (y-\alpha )w(y)dy} +1 \right] , \quad 0 < x < \alpha , \end{aligned}$$

of \(P_\alpha h\) is the orthogonal projection of \(h_{(0,\alpha )}\) onto the linear subspace of linear functions in \(L^2((0,\alpha ), w(x)dx)\), i.e.

$$\begin{aligned} \int _0^\alpha l_\alpha (x) [h(x)-l_\alpha (x)]w(x)dx =0. \end{aligned}$$

Similarly, the constant part

$$\begin{aligned} c_\alpha (x) = \frac{\int _\alpha ^d h(y)w(y)dy}{-[1-W(\alpha )]\int _0^\alpha (y-\alpha )w(y)dy} , \quad \alpha < x < d, \end{aligned}$$

is the orthogonal projection of \(h_{(\alpha ,d)}\) onto the subspace of constant functions in \(L^2((\alpha ,d), w(x)dx)\), i.e.

$$\begin{aligned} \int _\alpha ^d c_\alpha (x) [h(x)-c_\alpha (x)]w(x)dx =0. \end{aligned}$$

This implies that for every \(\alpha \in {\mathcal {Z}}\), we obtain

$$\begin{aligned} \int _0^d P_\alpha h(x) [h(x)-P_\alpha h(x)]w(x)dx =0, \end{aligned}$$

and in consequence

$$\begin{aligned} ||h||^2 = ||P_\alpha h ||^2 + || h- P_\alpha h ||^2. \end{aligned}$$

Since the norm of h is fixed, the function \(P_{\alpha _*} h\) minimizing the distance to h is just the one with maximal norm. The projection is unique, and so is \(\alpha _*\). It turns out that in the particular problems we consider below, there is only one \(\alpha \) satisfying (1.9), so there is no need to compare the norms of different \(P_\alpha h\).

2 Increasing failure rate distributions

In this paper we consider distribution functions F which precede the standard exponential distribution function \(V(x)=1-e^{-x}, 0\,{<}\,x\,{<}\,d=\infty \), in the convex transform order. Note that \(F\preceq _C V\) means that the hazard function \(\Lambda _F(x)=V^{-1}F(x)=-\ln [1-F(x)] \) is convex. In consequence, its derivative \(\lambda _F(x)=\Lambda _F'(x)=\frac{f(x)}{1-F(x)}\), called the failure rate function (it exists almost everywhere, together with the respective density function \(f=F'\), and \(\Lambda _F\) has one-sided derivatives at each point), is nondecreasing. Therefore every distribution function \(F\preceq _C V\) is said to have increasing failure rate (\(F \sim \) IFR, for short). Distribution functions with monotone failure rates are of vital interest in various branches of lifetime analysis. In order to calculate sharp mean–variance upper bounds on the expectations of centered L-statistics \({\mathbb {E}}\sum _{i=1}^n c_i (X_{i:n}-\mu )\) based on IFR samples, we determine the projections of functions \(\sum \nolimits _{i=1}^n c_i[f_{i:n}V -1]\) onto the convex cone

$$\begin{aligned} {\mathcal {C}}_{\preceq _cV}=\{g\in L^2([0,\infty ),e^{-x}dx)\,{:}\, g(x)\text { is nondecreasing and concave}\}. \end{aligned}$$
(2.1)

For arbitrarily fixed \(c_1, \ldots , c_n\), these functions are possibly multimodal polynomials of degree n in the argument \(e^{-x}\). No universal methods of projecting such functions onto (2.1) are known. We focus here on single order statistics and spacings, for which the respective projected functions satisfy Assumptions (A). They are also the most popular L-statistics in lifetime analysis, because they represent the consecutive failure times of items examined in lifetime experiments, and the time distances between them.
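To illustrate, the composite functions \(\sum _{i=1}^n c_i[f_{i:n}V-1]\) are easy to tabulate; a small Python sketch (names ours), together with a check that they integrate to zero against the weight \(e^{-x}\):

```python
import math
from math import comb

def f_os(i, n, x):
    # density of the i-th order statistic from a standard uniform sample of size n
    return n * comb(n - 1, i - 1) * x ** (i - 1) * (1 - x) ** (n - i)

def h_comb(c, x):
    # sum_{i=1}^{n} c_i [f_{i:n}(V(x)) - 1] with V(x) = 1 - e^{-x}
    n = len(c)
    u = 1.0 - math.exp(-x)
    return sum(ci * (f_os(i, n, u) - 1.0) for i, ci in enumerate(c, start=1))
```

Since each \(f_{i:n}V\) integrates to one against \(e^{-x}dx\), the combination integrates to zero whatever the coefficients, which is the centering used in (1.1).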

2.1 Single order statistics

For an i.i.d. sample \(X_1,\ldots , X_n\) with common marginal \(F\sim \) IFR, we aim at establishing accurate upper bounds for \({\mathbb {E}}\frac{X_{j:n}-\mu }{\sigma }, 3\le j\le n-1, n\ge 4\). The extreme order statistics with \(j=1,2,\) and n were already treated by Rychlik (2014). Our auxiliary problem is to project the following function

$$\begin{aligned} h(x)=h_{j:n}(x)=f_{j:n}V(x)-1=f_{j:n}(1-e^{-x})-1, \end{aligned}$$
(2.2)

onto (2.1). Note that (2.2) satisfies Assumptions (A) with \(a=0\),

$$\begin{aligned}&\displaystyle b=b_{j:n}= -\ln \left( 1-\frac{(j-1)(2n-3)-\sqrt{(j-1)(4n^2-4n-1+5j-4jn)}}{2(n-1)^2}\right) ,\\&\displaystyle c=c_{j:n}= -\ln \left( \frac{n-j}{n-1}\right) . \end{aligned}$$

and \(d= +\infty \). Here the first interval of decrease of (2.2) is empty, which is acceptable. By Proposition 1, the projection of (2.2) onto (2.1) is either of the l-h-c or the l-c type. In Proposition 2 below, we present the bounds corresponding to the first case. To this end, we specify the general functions (1.4)–(1.7) for the particular \(h=h_{j:n}\) defined in (2.2). Using the auxiliary formulas

$$\begin{aligned}&\int \limits _0^{\alpha }(x-\alpha )e^{-x}dx=1-e^{-\alpha }-\alpha , \end{aligned}$$
(2.3)
$$\begin{aligned}&\int \limits _0^{\alpha }\left( x-\alpha \right) ^2e^{-x}dx=\alpha ^2-2\alpha +2-2e^{-\alpha },\end{aligned}$$
(2.4)
$$\begin{aligned}&\int \limits _0^{\alpha }f_{j:n}\left( 1-e^{-x}\right) e^{-x}dx=F_{j:n}\left( 1-e^{-\alpha }\right) ,\end{aligned}$$
(2.5)
$$\begin{aligned}&\int \limits _0^{\alpha }(x-\alpha )f_{j:n}\left( 1-e^{-x}\right) e^{-x}dx=\sum \limits _{k=1}^j\frac{F_{k:n}\left( 1-e^{-\alpha }\right) }{n-k+1}-\alpha , \end{aligned}$$
(2.6)

we determine functions

$$\begin{aligned} T_{j:n}(\beta )= & {} f_{j:n}\left( 1-e^{-\beta }\right) e^{-\beta } - 1 + F_{j:n}\left( 1-e^{-\beta }\right) , \end{aligned}$$
(2.7)
$$\begin{aligned} \lambda _{j:n} (\alpha )= & {} \frac{\sum \nolimits _{k=1}^j\dfrac{F_{k:n}\left( 1-e^{-\alpha }\right) }{n-k+1} -f_{j:n}\left( 1-e^{-\alpha }\right) \left( 1-e^{-\alpha }-\alpha \right) -\alpha }{\alpha ^2-2\alpha +2-2e^{-\alpha }}, \end{aligned}$$
(2.8)
$$\begin{aligned} Y_{j:n}(\alpha )= & {} \lambda _{j:n}(\alpha )-n\left[ (n-j+1)B_{j-2,n-1} \left( 1-e^{-\alpha }\right) \right. \nonumber \\&\left. -(n-j)B_{j-1,n-1}\left( 1-e^{-\alpha }\right) \right] , \end{aligned}$$
(2.9)
$$\begin{aligned} Z_{j:n}(\alpha )= & {} F_{j:n}\left( 1-e^{-\alpha }\right) -f_{j:n}\left( 1-e^{-\alpha }\right) \left( 1-e^{-\alpha }\right) \nonumber \\&- \lambda _{j:n}(\alpha )\left( 1-e^{-\alpha }-\alpha \right) , \end{aligned}$$
(2.10)

on the positive half-axis.
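The closed forms (2.7), (2.8) and (2.10) can be cross-checked against the integral definitions (1.4), (1.5) and (1.7); a Python sketch (helper names ours):

```python
import math
from math import comb

def B(k, n, x):
    # Bernstein polynomial; zero outside 0 <= k <= n
    if k < 0 or k > n:
        return 0.0
    return comb(n, k) * x ** k * (1 - x) ** (n - k)

def f_os(j, n, x):
    return n * B(j - 1, n - 1, x)

def F_os(j, n, x):
    return sum(B(k, n, x) for k in range(j, n + 1))

def T_jn(j, n, beta):
    # (2.7)
    u = 1 - math.exp(-beta)
    return f_os(j, n, u) * math.exp(-beta) - 1 + F_os(j, n, u)

def lam_jn(j, n, alpha):
    # (2.8)
    u = 1 - math.exp(-alpha)
    num = (sum(F_os(k, n, u) / (n - k + 1) for k in range(1, j + 1))
           - f_os(j, n, u) * (1 - math.exp(-alpha) - alpha) - alpha)
    return num / (alpha ** 2 - 2 * alpha + 2 - 2 * math.exp(-alpha))

def Z_jn(j, n, alpha):
    # (2.10)
    u = 1 - math.exp(-alpha)
    return (F_os(j, n, u) - f_os(j, n, u) * u
            - lam_jn(j, n, alpha) * (1 - math.exp(-alpha) - alpha))
```

Numerical quadrature of (1.4), (1.5) and (1.7) with \(h=h_{j:n}\) and \(w(x)=e^{-x}\) (tail truncated at 40) reproduces these closed forms up to discretization error.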

Proposition 2

Suppose that \(T_{j:n}(b_{j:n})<0\), so that the unique zero \(0< \beta _{j:n} < c_{j:n}\) of (2.7) belongs to \((b_{j:n},c_{j:n})\). Also, suppose that \({\mathcal {Y}}_{j:n}= \{\alpha \in (b_{j:n},\beta _{j:n})\,{:}\,Y_{j:n}(\alpha ) \ge 0,\; Z_{j:n}(\alpha )=0\}\) is nonempty. Let \(\alpha _{j:n}\) denote the smallest (possibly unique) element of \({\mathcal {Y}}_{j:n}\), and \(\lambda _{j:n}= \lambda _{j:n}(\alpha _{j:n})\). Then

$$\begin{aligned} \frac{{\mathbb {E}}X_{j:n}-\mu }{\sigma } \le B_{j:n}, \end{aligned}$$
(2.11)

where

$$\begin{aligned} B^2_{j:n}= & {} f_{j:n}^2\left( 1-e^{-\alpha _{j:n}}\right) \left( 1-e^{-\alpha _{j:n}}\right) +2\lambda _{j:n}f_{j:n}\left( 1-e^{-\alpha _{j:n}}\right) \left( 1-e^{-\alpha _{j:n}}-\alpha _{j:n}\right) \\&+\,\lambda _{j:n}^2\left( \alpha _{j:n}^2-2\alpha _{j:n}+2-2e^{-\alpha _{j:n}}\right) +f_{j:n}^2\left( 1-e^{-\beta _{j:n}}\right) e^{-\beta _{j:n}} \\&+\,\frac{(n!)^2\left( \begin{array}{l}{2j-2}\\ {j-1}\end{array}\right) \left( \begin{array}{l}{2n-2j}\\ {n-j}\end{array}\right) }{(2n-1)!} \left[ F_{2j-1:2n-1}\left( 1-e^{-\beta _{j:n}}\right) \right. \\&\left. -\,F_{2j-1:2n-1}\left( 1-e^{-\alpha _{j:n}}\right) \right] -1. \end{aligned}$$

The equality in (2.11) holds for the distribution function represented by the formula

$$\begin{aligned} F(y)=\left\{ \begin{array}{ll} 0,&{}\quad y<f_{j:n}\left( 1-e^{-\alpha _{j:n}}\right) - \lambda _{j:n}\alpha _{j:n},\\ 1-\exp \left( -\alpha _{j:n} - \frac{y-f_{j:n}\left( 1-e^{-\alpha _{j:n}}\right) }{\lambda _{j:n}} \right) , &{}\quad f_{j:n}\left( 1-e^{-\alpha _{j:n}}\right) - \lambda _{j:n}\alpha _{j:n} \\ &{}\quad \le y < f_{j:n}\left( 1-e^{-\alpha _{j:n}}\right) ,\\ f_{j:n}^{-1}(y), &{} \quad f_{j:n}\left( 1-e^{-\alpha _{j:n}}\right) \le y < f_{j:n}\left( 1-e^{-\beta _{j:n}}\right) , \\ 1,&{} \quad y \ge f_{j:n}\left( 1-e^{-\beta _{j:n}}\right) , \end{array} \right. \end{aligned}$$
(2.12)

uniquely determined up to the location and scale parameters \(\mu \) and \(\sigma \), respectively, with modified argument \(x \mapsto y= \frac{x-\mu }{\sigma }B_{j:n} +1\), introduced for brevity.

Proof

The crucial step of our reasoning consists in showing that the assumptions guarantee that the projection of \(h_{j:n}(x)=f_{j:n}(1-e^{-x})-1\) onto \({\mathcal {C}}_{\preceq _c V}\) has the form

$$\begin{aligned} P_{\preceq _cV}h_{j:n}(x) =\left\{ \begin{array}{ll} \lambda _{j:n}(x-\alpha _{j:n})+f_{j:n}\left( 1-e^{-\alpha _{j:n}}\right) -1, &{}\quad 0\le x<\alpha _{j:n},\\ f_{j:n}(1-e^{-x})-1,&{}\quad \alpha _{j:n}\le x<\beta _{j:n},\\ f_{j:n}\left( 1-e^{-\beta _{j:n}}\right) -1,&{}\quad x\ge \beta _{j:n}. \end{array} \right. \end{aligned}$$
(2.13)

The tools are collected in Proposition 1.

Function (2.7) satisfies \(T_{j:n}(0)<0, T_{j:n}(c_{j:n})>0\), and increases in between (see Goroncy and Rychlik 2015, p. 180). The first necessary condition for (2.13) is that the unique zero \(\beta _{j:n}\) of (2.7) belongs to the interval \((b_{j:n}, c_{j:n})\) of concave increase of \(h_{j:n}\). This obviously holds iff \(T_{j:n}(b_{j:n}) <0\). Point \(\beta _{j:n}\) is the only admissible candidate for the change of the l-h-c type projection from \(h_{j:n}\) itself to the constant.

The other condition is that \({\mathcal {Y}}_{j:n}\) is nonempty, i.e. the interval \([b_{j:n}, \beta _{j:n})\) contains points \(\alpha \) satisfying \(Y_{j:n}(\alpha ) \ge 0\) and \(Z_{j:n}(\alpha )=0\). The latter relation together with \(T_{j:n}(\beta _{j:n}) =0\) implies that the weighted integral of the l-h-c function glued at \(\alpha \) and \(\beta _{j:n}\) coincides with that of the original function \(h_{j:n}\), which is a necessary condition for the projection (see Rychlik 2001a, Lemma 1). Condition \(Y_{j:n}(\alpha ) \ge 0\) implies that gluing \(\lambda _{j:n}(\alpha ) (x-\alpha ) + h_{j:n}(\alpha )\) with \(h_{j:n}(x)\) at \(\alpha \in (b_{j:n}, \beta _{j:n})\) produces a function concave in a vicinity of \(\alpha \). If there are more points in \({\mathcal {Y}}_{j:n}\), the projection is constructed using the smallest one. Since all the necessary conditions are deduced from the assumptions of Proposition 2, the projection is actually equal to (2.13).

Due to (1.1), the upper bound on the expectation of the standardized jth order statistic coincides with the norm of the projection \(P_{\preceq _cV}h_{j:n}\). Since \(P_{\preceq _cV}h_{j:n} \not \equiv 0\), the bound is sharp. We present here a slightly simpler analytic form of the norm based on the identity \(B_{j:n}^2= ||P_{\preceq _cV}h_{j:n}||^2 = ||P_{\preceq _cV}f_{j:n}V||^2-1\). It follows from the obvious relations \(P_{\preceq _cV}h_{j:n} = P_{\preceq _cV}(f_{j:n}V-1) = P_{\preceq _cV}f_{j:n}V -1\), and

$$\begin{aligned} ||P_{\preceq _cV}f_{j:n}V-1||^2= & {} ||P_{\preceq _cV}f_{j:n}V||^2 - 2 \int _0^\infty P_{\preceq _cV}f_{j:n}V(x) e^{-x}dx + \int _0^\infty e^{-x}dx \\= & {} ||P_{\preceq _cV}f_{j:n}V||^2 -1, \end{aligned}$$

valid due to

$$\begin{aligned} \int _0^\infty P_{\preceq _cV}f_{j:n}V(x) e^{-x}dx = \int _0^\infty f_{j:n}V(x) e^{-x}dx = \int _0^1 f_{j:n}(x) dx=1 \end{aligned}$$

(cf. Rychlik 2001a, Lemma 1). Using

$$\begin{aligned} \frac{F^{-1}V(x) - \mu }{\sigma }= \frac{P_{\preceq _cV}\left( f_{j:n}V\right) (x)-1}{B_{j:n}}, \qquad 0<x <\infty , \end{aligned}$$

[cf. (1.3)], and performing some tedious calculations, we arrive at the explicit form (2.12) of the parent IFR distribution function attaining the bound. \(\square \)

Up to linear transformations of the argument, the extreme distribution (2.12) is composed of three parts: an exponential one on the left, the inverse of some increasing part of the density function \(f_{j:n}\), and a jump of height \(e^{-\beta _{j:n}}\) at the right-end point. We performed a number of numerical verifications of the assumptions of Proposition 2 for moderate n and all \(3 \le j \le n-1\). It turns out that Proposition 2 provides the bound for the case \(j=3, n=4\) only. The precise value of the bound and a description of the IFR distribution attaining it are presented in Example 1.

Example 1

The sharp bound

$$\begin{aligned} \frac{{\mathbb {E}}X_{3:4}-\mu }{\sigma } \le 0.50522 \end{aligned}$$

is attained by the distribution function

$$\begin{aligned} F\left( x \right) =\left\{ \begin{array}{ll} 0,&{}\quad \frac{x-\mu }{\sigma }< -1.73065,\\ 1-\exp \left( -0.20544 \frac{x-\mu }{\sigma }-0.45773\right) , &{}\quad -1.73065 \le \frac{x-\mu }{\sigma }< 0.60205,\\ f_{3:4}^{-1}\left( 0.50522\frac{x-\mu }{\sigma }+1 \right) , &{} \quad 0.60205 \le \frac{x-\mu }{\sigma }< 0.75138, \\ 1,&{} \quad \frac{x-\mu }{\sigma }\ge 0.75138. \end{array} \right. \end{aligned}$$

The exponential part on the left has probability 0.44089. The jump on the right has height 0.53753. The contribution of the inverse cubic function between them amounts to 0.02160 only.
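The constants of Example 1 can be reproduced by bisecting the specialized functions (2.7) and (2.10) for \(j=3, n=4\); a Python sketch (names ours, root brackets found by inspecting the sign changes):

```python
import math
from math import comb

def f34(x):
    # f_{3:4}(x) = 4 * B_{2,3}(x) = 12 x^2 (1 - x)
    return 12 * x ** 2 * (1 - x)

def F34(x):
    # F_{3:4}(x) = B_{3,4}(x) + B_{4,4}(x)
    return 4 * x ** 3 * (1 - x) + x ** 4

def Fk4(k, x):
    # F_{k:4}(x) = sum_{m=k}^{4} B_{m,4}(x)
    return sum(comb(4, m) * x ** m * (1 - x) ** (4 - m) for m in range(k, 5))

def T(beta):
    # (2.7) with j = 3, n = 4
    u = 1 - math.exp(-beta)
    return f34(u) * math.exp(-beta) - 1 + F34(u)

def lam(alpha):
    # (2.8) with j = 3, n = 4
    u = 1 - math.exp(-alpha)
    num = (sum(Fk4(k, u) / (5 - k) for k in range(1, 4))
           - f34(u) * (1 - math.exp(-alpha) - alpha) - alpha)
    return num / (alpha ** 2 - 2 * alpha + 2 - 2 * math.exp(-alpha))

def Z(alpha):
    # (2.10) with j = 3, n = 4
    u = 1 - math.exp(-alpha)
    return F34(u) - f34(u) * u - lam(alpha) * (1 - math.exp(-alpha) - alpha)

def bisect(g, lo, hi, it=80):
    # simple bisection; assumes a sign change on (lo, hi)
    for _ in range(it):
        mid = 0.5 * (lo + hi)
        if (g(lo) > 0) == (g(mid) > 0):
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

beta_star = bisect(T, 0.3, 1.0)   # zero of (2.7)
alpha_star = bisect(Z, 0.4, 0.8)  # zero of (2.10)
```

Here \(e^{-\beta }\approx 0.5375\) is the jump height, \(1-e^{-\alpha }\approx 0.4409\) the mass of the exponential piece, and \(\lambda \approx 2.459 = 0.50522/0.20544\), the ratio appearing in the exponent of the extreme distribution of Example 1.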

Our conjecture is that Example 1 is the only application of Proposition 2. For large n, the inflection point \(b_{j:n}\) lies close to the maximal argument \(c_{j:n}\), and in consequence \(h_{j:n}(b_{j:n})\) is only slightly less than the maximum \(h_{j:n}(c_{j:n})\). However, by definition

$$\begin{aligned} h_{j:n}(\beta _{j:n}) = \frac{ \int _{\beta _{j:n}}^{\infty } h_{j:n}(x) e^{-x} dx }{\int _{\beta _{j:n}}^{\infty } e^{-x} dx} \end{aligned}$$

cannot be too large, because \(h_{j:n}(x)\) for large arguments x is substantially smaller than \(h_{j:n}(c_{j:n})\). This implies that \(\beta _{j:n} < b_{j:n}\), which violates the condition \(T_{j:n}(b_{j:n}) <0\). Even for small n, when the relation holds, there is not enough space in the interval \( (b_{j:n}, \beta _{j:n})\) for any points \(\alpha \) satisfying \(Z_{j:n}(\alpha )=0\) together with \(Y_{j:n}(\alpha )\ge 0\).

It seems that, with the exception of the case presented in Example 1, the bounds on the expectations of order statistics from IFR populations are determined by means of the l-c-type projection. These are presented in Proposition 3 below. However, one should be aware that for given j and n, the assumptions of Proposition 2 should first be verified and the l-h-c projection excluded before the formulas of Proposition 3 are used. If the assumptions for the l-h-c shape of the projection do not hold, we define

$$\begin{aligned} A_{j:n}(\alpha )= & {} (1-2\alpha e^{-\alpha }-e^{-2\alpha })[1-e^{-\alpha }-F_{j:n}(1-e^{-\alpha })]\nonumber \\&+\,e^{-\alpha }(1-e^{-\alpha }-\alpha )\left[ \sum \limits _{k=1}^j\frac{F_{k:n}(1-e^{-\alpha })}{n-k+1}+e^{-\alpha }-1\right] , \end{aligned}$$
(2.14)
$$\begin{aligned} \gamma _{j:n}(\alpha )= & {} [1-F_{j:n}(1-e^{-\alpha })]e^{\alpha } -1, \end{aligned}$$
(2.15)
$$\begin{aligned} \lambda _{j:n}(\alpha )= & {} \frac{\gamma _{j:n}(\alpha )}{e^{-\alpha }-1+\alpha }, \end{aligned}$$
(2.16)
$$\begin{aligned} B^2_{j:n}(\alpha )= & {} (\alpha ^2+1) \lambda _{j:n}^2(\alpha )-[\lambda _{j:n}(\alpha )+\gamma _{j:n}(\alpha )]^2. \end{aligned}$$
(2.17)
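Functions (2.15)–(2.17) are cheap to implement, and the closed form of the squared norm can be validated against direct numerical integration of \(P_\alpha h_{j:n}(x) = \lambda _{j:n}(\alpha )(x-\alpha )\mathbf {1}_{(0,\alpha )}(x) + \gamma _{j:n}(\alpha )\) [cf. (1.8)]; a Python sketch (names ours):

```python
import math
from math import comb

def F_os(j, n, x):
    # F_{j:n}(x) = sum_{k=j}^{n} C(n,k) x^k (1-x)^{n-k}
    return sum(comb(n, k) * x ** k * (1 - x) ** (n - k) for k in range(j, n + 1))

def gamma_jn(j, n, a):
    # (2.15)
    return (1 - F_os(j, n, 1 - math.exp(-a))) * math.exp(a) - 1

def lam_jn(j, n, a):
    # (2.16)
    return gamma_jn(j, n, a) / (math.exp(-a) - 1 + a)

def B2_jn(j, n, a):
    # (2.17): squared norm of the l-c candidate P_alpha h
    lam, gam = lam_jn(j, n, a), gamma_jn(j, n, a)
    return (a ** 2 + 1) * lam ** 2 - (lam + gam) ** 2
```

The identity behind (2.17) holds for every \(\alpha > 0\), not only at the zeros of (2.14), since its derivation uses only the definitions (2.15) and (2.16).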

Proposition 3

Suppose that either \(T_{j:n}(b_{j:n}) \ge 0\) or \({\mathcal {Y}}_{j:n} = \emptyset \) for some fixed \(3 \le j \le n-1\), \(n \ge 4\). Then the set \({\mathcal {Z}}_{j:n}= \{ \alpha \ge \beta _{j:n}:\; A_{j:n}(\alpha )=0,\;\gamma _{j:n}(\alpha )>0 \}\) is nonempty, and

$$\begin{aligned} \frac{{\mathbb {E}}X_{j:n}-\mu }{\sigma } \le B_{j:n} = B_{j:n}(\alpha _{j:n}), \end{aligned}$$
(2.18)

where \(\alpha _{j:n}= \arg \max _{\alpha \in {\mathcal {Z}}_{j:n}} B_{j:n}^2(\alpha )\). The equality in (2.18) holds for the distribution function

$$\begin{aligned} F(y)=\left\{ \begin{array}{ll} 0,&{}\quad y<0,\\ 1-e^{-y}, &{}\quad 0 \le y < \alpha _{j:n},\\ 1,&{}\quad y\ge \alpha _{j:n} \end{array} \right. \end{aligned}$$
(2.19)

for \(y=y(x) = \frac{x-\mu }{\sigma \lambda _{j:n}} B_{j:n} -\frac{\gamma _{j:n}}{\lambda _{j:n}} + \alpha _{j:n}\) with \(\gamma _{j:n}=\gamma _{j:n}(\alpha _{j:n})\) and \(\lambda _{j:n}=\lambda _{j:n}(\alpha _{j:n})\).

Proof

Owing to the assumptions, the projection of (2.2) has an l-c form. Precisely, by (1.8), (2.3), and (2.5), we have

$$\begin{aligned} P_\alpha h_{j:n}(x)= & {} \left[ 1-e^{-\alpha }-F_{j:n}(1-e^{-\alpha })\right] e^\alpha \left[ \frac{(x-\alpha )\mathbf {1}_{(0,\alpha )}(x)}{e^{-\alpha }-1+\alpha } +1 \right] \\= & {} \lambda _{j:n}(\alpha ) (x-\alpha )\mathbf {1}_{(0,\alpha )}(x)+ \gamma _{j:n}(\alpha ) \end{aligned}$$

[cf. (2.15) and (2.16)] for some \(\alpha \ge \beta _{j:n}\) satisfying

$$\begin{aligned} \frac{1-e^{-\alpha }-F_{j:n}(1-e^{-\alpha })}{e^{-\alpha }}= \frac{-\left[ \sum \nolimits _{k=1}^j\dfrac{F_{k:n}(1-e^{-\alpha })}{n-k+1}+e^{-\alpha }-1\right] (1-e^{-\alpha }-\alpha )}{\alpha ^2-2\alpha +2-2e^{-\alpha }-(1-\alpha -e^{-\alpha })^2} >0 \end{aligned}$$

[cf. (1.9), (2.3), (2.4), (2.6)]. Observe that the last formula can be rewritten as \(A_{j:n}(\alpha )=0\) with \(\gamma _{j:n}(\alpha )>0\). This means that the parameter \(\alpha \) determining the projection should be chosen from the set \({\mathcal {Z}}_{j:n}\). Its emptiness would contradict the existence of the projection. Its cardinality is also finite, because function (2.14) cannot have infinitely many zeros.

For selecting the proper element of the set, we compare respective squared norms:

$$\begin{aligned} ||P_\alpha h_{j:n}||^2= & {} \lambda _{j:n}^2(\alpha ) \int _0^\alpha (x-\alpha )^2 e^{-x}dx + 2 \lambda _{j:n}(\alpha )\gamma _{j:n}(\alpha )\\&\times \,\int _0^\alpha (x-\alpha ) e^{-x}dx + \gamma _{j:n}^2(\alpha ) \\= & {} \lambda _{j:n}^2(\alpha )\left[ \alpha ^2 - 2(e^{-\alpha }-1+\alpha )\right] \\&+\,2 \frac{\gamma _{j:n}(\alpha )}{e^{-\alpha }-1+\alpha }\gamma _{j:n}(\alpha ) (1-e^{-\alpha }-\alpha )+ \gamma _{j:n}^2(\alpha ) \\= & {} \lambda _{j:n}^2(\alpha ) \alpha ^2 - 2 \lambda _{j:n}(\alpha )\gamma _{j:n}(\alpha ) - \gamma _{j:n}^2(\alpha ) \\= & {} (\alpha ^2+1) \lambda _{j:n}^2(\alpha )- [ \lambda _{j:n}(\alpha )+\gamma _{j:n}(\alpha )]^2. \end{aligned}$$

There is a unique \(\alpha _{j:n} \in {\mathcal {Z}}_{j:n}\) maximizing \(||P_\alpha h_{j:n}||^2 \), and this defines the unique non-zero projection \(P_{\preceq _cV} h_{j:n} \). By (1.1), the sharp upper mean–variance bound for the expectation of the jth order statistic is

$$\begin{aligned} ||P_{\alpha _{j:n}} h_{j:n}|| = B_{j:n} =\left[ \left( \alpha _{j:n}^2+1\right) \lambda _{j:n}^2-\left( \lambda _{j:n}+\gamma _{j:n}\right) ^2\right] ^{1/2}. \end{aligned}$$

The distribution function attaining the bound is characterized by the relation

$$\begin{aligned} \frac{F^{-1}(1-e^{-x}) - \mu }{\sigma }= \frac{\lambda _{j:n}(x-\alpha _{j:n}) \mathbf {1}_{(0,\alpha )}(x)+ \gamma _{j:n}}{B_{j:n}}, \quad 0<x <\infty , \end{aligned}$$

which determines (2.19). \(\square \)

The bounds in (2.18) are attained by right-truncated and linearly transformed exponential random variables [cf. (2.19)], with jumps of size \(e^{-\alpha _{j:n}}\) on the right. Precisely, \(X_i \mathop {=}\limits ^{d} \frac{\lambda _{j:n} \sigma }{B_{j:n}} \left[ \min \{ Y_i- \alpha _{j:n},0 \} + \frac{\gamma _{j:n}}{\lambda _{j:n}} \right] +\mu \) for \(Y_i, i=1,\ldots , n\), being standard exponential. The transformation is defined in such a complicated way just to fulfil the first two moment conditions. The convex transform order is invariant under location and scale modifications. This means that every exponential distribution truncated on the right at the level \(1-e^{-\alpha _{j:n}}\) attains the bound for the expected jth order statistic standardized with the respective mean and standard deviation.

Numerical studies show that each set \({\mathcal {Z}}_{j:n}\) contains only one element, and this is used in the construction of the projection and the calculation of the bound. We cannot prove this formally, though. We were able to do so for the analogous hypothesis in the analysis of increasing density distributions. In that case, the counterparts of (2.7)–(2.10) and (2.14)–(2.16) were expressed by linear combinations of Bernstein polynomials. Then we could apply the respective variation diminishing property of Schoenberg (1959) for evaluating the numbers of their zeros and extremes. The property was also useful in the analysis of the DFR case, when the Bernstein polynomials of the transformed argument \(\alpha \mapsto 1-e^{-\alpha }\) were studied. The method is not applicable here, because we consider functions composed of polynomials of the argument \(1-e^{-\alpha }\) combined with small powers of \(\alpha \) itself. Accordingly, specifying the particular functions \(h(x)= f_{j:n}(1-e^{-x})-1\) does not allow us to obtain results stronger than those concluded directly from Proposition 1, which is stated for a general h satisfying (A).

In Table 1 we present numerical values of the bounds \(B_{j:n}\) on the standardized expectations of order statistics \({\mathbb {E}}(X_{j:n}-\mu )/\sigma , 3 \le j \le n-1, 4 \le n \le 10\), for increasing failure rate populations. Each bound is accompanied by the value \(1-\exp (-\alpha _{j:n})\), which represents the probability mass of the exponential part of the distribution attaining the bound. Parameter \(\alpha _{j:n}\) uniquely determines the distribution. It does not appear for \(j=3, n=4\), because the extreme distribution has a more complicated form, precisely described in Example 1. Comparing the obtained bounds with the respective ones for the ID distributions (see Goroncy and Rychlik 2015, Table 1), we note that the bounds in the IFR case are slightly greater. The relations are not surprising, since the ID family of distributions is contained in the IFR family. We can also observe that the bounds of Table 1 are close to the respective general bounds for arbitrary parent distributions.

Table 1 Bounds on standardized expectations of order statistics \({\mathbb {E}}(X_{j:n}-\mu )/\sigma , 3 \le j \le n-1, 4 \le n \le 10\), for the increasing failure rate distributions

2.2 Spacings

We now establish sharp upper bounds on the standardized expectations of spacings \(\frac{{\mathbb {E}}(X_{j+1:n}- X_{j:n})}{\sigma }, 1 \le j \le n-1\), coming from i.i.d. samples with IFR distributions. To this end, we project functions

$$\begin{aligned} \tilde{h}_{j:n}(x)=(f_{j+1:n}-f_{j:n})V(x) =f_{j+1:n}(1-e^{-x})-f_{j:n}(1-e^{-x}) \end{aligned}$$
(2.20)

onto the convex cone (2.1). We first take into account the cases \(2 \le j \le n-2\), for which functions (2.20) satisfy Assumptions (A) of Sect. 1. Indeed, they decrease from 0 at 0 to their negative global minima at

$$\begin{aligned} \tilde{a}_{j:n}=\ln \frac{n}{n-j+\sqrt{\frac{j(n-j)}{n-1}}}, \end{aligned}$$

then increase to the positive global maxima at

$$\begin{aligned} \tilde{c}_{j:n}=\ln \frac{n}{n-j-\sqrt{\frac{j(n-j)}{n-1}}}, \end{aligned}$$

and finally decrease to zero at \(d= +\infty \). In the intervals of increase \((\tilde{a}_{j:n},\tilde{c}_{j:n})\), they are first convex and then concave. The tangency points cannot be written down explicitly; they are the unique \(\tilde{b}_{j:n}\in (\tilde{a}_{j:n},\tilde{c}_{j:n})\) solving the equations

$$\begin{aligned}&\frac{n(n-1)^2}{(n-j)(n-j-1)}\ln ^3 x+\frac{(n-1)(3n^2-3n-3jn+4j)}{(n-j-1)(n-j)}\ln ^2 x \\&\quad -\frac{-3n^2+3jn-5j+6n-3}{n-j-1}\ln x +n-j-1=0. \end{aligned}$$
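As an illustrative numerical check (not part of the derivation), the closed-form locations of the global minimum and maximum of \(\tilde{h}_{j:n}\) can be confirmed by a grid search; the Python sketch below uses the arbitrary choice \(j=3\), \(n=6\).

```python
import math

def bernstein(k, n, x):
    """Bernstein polynomial B_{k,n}(x)."""
    return math.comb(n, k) * x**k * (1 - x)**(n - k)

def f(j, n, u):
    """Density f_{j:n}(u) = n B_{j-1,n-1}(u) of the j-th uniform order statistic."""
    return n * bernstein(j - 1, n - 1, u)

def h_tilde(j, n, x):
    """h~_{j:n}(x) = f_{j+1:n}(1-e^{-x}) - f_{j:n}(1-e^{-x})."""
    u = 1 - math.exp(-x)
    return f(j + 1, n, u) - f(j, n, u)

j, n = 3, 6  # an arbitrary example with 2 <= j <= n-2
r = math.sqrt(j * (n - j) / (n - 1))
a = math.log(n / (n - j + r))  # claimed location of the negative global minimum
c = math.log(n / (n - j - r))  # claimed location of the positive global maximum

grid = [i * 1e-4 for i in range(1, 100001)]  # fine grid on (0, 10]
a_num = min(grid, key=lambda x: h_tilde(j, n, x))
c_num = max(grid, key=lambda x: h_tilde(j, n, x))
print(a, a_num, c, c_num)
```

The grid minimizer and maximizer agree with \(\tilde{a}_{j:n}\) and \(\tilde{c}_{j:n}\) to the grid resolution.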

The bounds and justifications of their sharpness are presented in Propositions 4 and 5. Their statements are similar to those of Propositions 2 and 3, respectively. Their proofs are almost identical to those of their counterparts, and therefore we omit them. The only differences consist in using modifications of functions (2.7)–(2.10) and (2.14)–(2.17). Noting the identity \(\tilde{h}_{j:n}=h_{j+1:n}-h_{j:n}\) and the linearity of operators (1.4)–(1.7) acting on the functions h, we define

$$\begin{aligned} \tilde{T}_{j:n}(\beta )= & {} T_{j+1:n}(\beta )-T_{j:n}(\beta ) \nonumber \\= & {} -(n-j+1)B_{j-1,n}\left( 1-e^{-\beta }\right) +(n-j-1)B_{j,n}(1-e^{-\beta }), \nonumber \\ \tilde{\lambda }_{j:n}(\alpha )= & {} \lambda _{j+1:n}(\alpha )-\lambda _{j:n}(\alpha )\nonumber \\= & {} \frac{1}{\alpha ^2-2\alpha +2-2e^{-\alpha }} \left\{ \frac{1}{n-j}\sum \limits _{k=j+1}^nB_{k,n}\left( 1-e^{-\alpha }\right) \right. \nonumber \\&-\,\left. n(1-e^{-\alpha }-\alpha )\left[ B_{j,n-1}\left( 1-e^{-\alpha }\right) -B_{j-1,n-1}\left( 1-e^{-\alpha }\right) \right] \right\} ,\nonumber \\ \tilde{Y}_{j:n}(\alpha )= & {} Y_{j+1:n}(\alpha )-Y_{j:n}(\alpha )= \tilde{\lambda }_{j:n}(\alpha )+n\left[ (n-j+1)B_{j-2,n-1} \left( 1-e^{-\alpha }\right) \right. \nonumber \\&+\,\left. (n-j-1)B_{j,n-1}\left( 1-e^{-\alpha }\right) -2(n-j)B_{j-1,n-1} \left( 1-e^{-\alpha }\right) \right] ,\nonumber \\ \tilde{Z}_{j:n}(\alpha )= & {} Z_{j+1:n}(\alpha )-Z_{j:n}(\alpha )\nonumber \\= & {} (j-1)B_{j,n}(1-e^{-\alpha }) -(j+1)B_{j+1,n}\left( 1-e^{-\alpha }\right) \nonumber \\&-\,\tilde{\lambda }_{j:n}(\alpha )\left( 1-e^{-\alpha }-\alpha \right) . \end{aligned}$$
(2.21)

Proposition 4

Suppose that \(\tilde{T}_{j:n}(\tilde{b}_{j:n})<0\), so that the unique zero \(\tilde{\beta }_{j:n} \in (\tilde{a}_{j:n}, \tilde{c}_{j:n})\) of function \(\tilde{T}_{j:n}\) defined in (2.21) belongs to \((\tilde{b}_{j:n},\tilde{c}_{j:n})\). Also, suppose that \(\tilde{{\mathcal {Y}}}_{j:n}= \{ \tilde{b}_{j:n} < \alpha < \tilde{\beta }_{j:n}:\, \tilde{Y}_{j:n} \ge 0,\; \tilde{Z}_{j:n}=0\}\) is nonempty. Let \(\tilde{\alpha }_{j:n}\) denote the smallest (possibly unique) element of \(\tilde{{\mathcal {Y}}}_{j:n}, \tilde{f}_{j:n}=f_{j+1:n} - f_{j:n}\), and \(\tilde{\lambda }_{j:n}= \tilde{\lambda }_{j:n}(\tilde{\alpha }_{j:n})\). Then

$$\begin{aligned} {\mathbb {E}}\frac{X_{j+1:n}-X_{j:n}}{\sigma } \le \tilde{B}_{j:n}, \end{aligned}$$

where

$$\begin{aligned} \tilde{B}^2_{j:n}= & {} \tilde{f}_{j:n}^2\left( 1-e^{-\tilde{\alpha }_{j:n}}\right) \left( 1-e^{-\tilde{\alpha }_{j:n}}\right) + 2\tilde{\lambda }_{j:n}\tilde{f}_{j:n}\left( 1-e^{-\tilde{\alpha }_{j:n}}\right) \left( 1-e^{-\tilde{\alpha }_{j:n}}-\tilde{\alpha }_{j:n}\right) \\&+\,\tilde{\lambda }_{j:n}^2\left( \tilde{\alpha }_{j:n}^2-2\tilde{\alpha }_{j:n} +2-2e^{-\tilde{\alpha }_{j:n}}\right) +\tilde{f}_{j:n}^2\left( 1-e^{-\tilde{\beta }_{j:n}}\right) e^{-\tilde{\beta }_{j:n}} \\&+\,\frac{(n!)^2\left( \begin{array}{l}{2j}\\ {j}\end{array}\right) \left( \begin{array}{l}{2n-2j-2}\\ {n-j-1}\end{array}\right) }{(2n-1)!} \left[ F_{2j+1:2n-1}\left( 1-e^{-\tilde{\beta }_{j:n}}\right) - F_{2j+1:2n-1}\left( 1-e^{-\tilde{\alpha }_{j:n}}\right) \right] \\&-\,\frac{2(n!)^2\left( \begin{array}{l}{2j-1}\\ {j-1}\end{array}\right) \left( \begin{array}{l}{2n-2j-1}\\ {n-j}\end{array}\right) }{(2n-1)!} \left[ F_{2j:2n-1}\left( 1-e^{-\tilde{\beta }_{j:n}}\right) - F_{2j:2n-1}\left( 1-e^{-\tilde{\alpha }_{j:n}}\right) \right] \\&+\,\frac{(n!)^2\left( \begin{array}{l}{2j-2}\\ {j-1}\end{array}\right) \left( \begin{array}{l}{2n-2j}\\ {n-j}\end{array}\right) }{(2n-1)!} \left[ F_{2j-1:2n-1}\left( 1-e^{-\tilde{\beta }_{j:n}}\right) - F_{2j-1:2n-1}(1-e^{-\tilde{\alpha }_{j:n}})\right] . \end{aligned}$$

The bound is attained by

$$\begin{aligned} F(y)=\left\{ \begin{array}{ll} 0,&{} \quad y< \tilde{f}_{j:n}\left( 1-e^{-\tilde{\alpha }_{j:n}}\right) - \tilde{\lambda }_{j:n}\tilde{\alpha }_{j:n},\\ 1-\exp \left( -\tilde{\alpha }_{j:n} - \frac{y-\tilde{f}_{j:n}\left( 1-e^{-\tilde{\alpha }_{j:n}}\right) }{\tilde{\lambda }_{j:n}} \right) , &{}\quad \tilde{f}_{j:n}\left( 1-e^{-\tilde{\alpha }_{j:n}}\right) - \tilde{\lambda }_{j:n}\tilde{\alpha }_{j:n} \\ &{}\quad \le y < \tilde{f}_{j:n}\left( 1-e^{-\tilde{\alpha }_{j:n}}\right) ,\\ \tilde{f}_{j:n}^{-1}(y), &{}\quad \tilde{f}_{j:n}\left( 1-e^{-\tilde{\alpha }_{j:n}}\right) \le y < \tilde{f}_{j:n}\left( 1-e^{-\tilde{\beta }_{j:n}}\right) , \\ 1,&{} \quad y \ge \tilde{f}_{j:n}\left( 1-e^{-\tilde{\beta }_{j:n}}\right) , \end{array} \right. \end{aligned}$$

uniquely determined up to the location and scale parameters \(\mu \) and \(\sigma \), respectively, with modified argument \(x \mapsto y= \frac{x-\mu }{\sigma }\tilde{B}_{j:n} \).

Define now

$$\begin{aligned} \tilde{A}_{j:n}(\alpha )= & {} A_{j+1:n}(\alpha )- A_{j:n}(\alpha ) = nB_{j,n-1}\left( 1-e^{-\alpha }\right) \left( 1-e^{-2\alpha }-2\alpha e^{-\alpha }\right) \\&+\,(1-e^{-\alpha }-\alpha )F_{j+1:n}(1-e^{-\alpha }),\\ \tilde{\gamma }_{j:n}(\alpha )= & {} \gamma _{j+1:n}(\alpha )- \gamma _{j:n}(\alpha ) = \frac{n}{n-j}B_{j,n-1}(1-e^{-\alpha }),\\ \tilde{\lambda }_{j:n}(\alpha )= & {} \lambda _{j+1:n}(\alpha ) - \lambda _{j:n}(\alpha ) = \frac{\tilde{\gamma }_{j:n}(\alpha )}{e^{-\alpha }-1+\alpha },\\ \tilde{B}^2_{j:n}(\alpha )= & {} (\alpha +1)^2 \tilde{\lambda }_{j:n}^2(\alpha )-\left[ \tilde{\lambda }_{j:n}(\alpha )+\tilde{\gamma }_{j:n}(\alpha )\right] ^2. \end{aligned}$$

Proposition 5

Suppose that either \(\tilde{T}_{j:n}(\tilde{\beta }_{j:n}) \ge 0\) or \(\tilde{{\mathcal {Y}}}_{j:n} = \emptyset \) for some fixed \(2 \le j \le n-2\) (which requires \(n \ge 4\)). Then the set \(\tilde{{\mathcal {Z}}}_{j:n}= \{ \alpha \ge \tilde{\beta }_{j:n}:\; \tilde{A}_{j:n}(\alpha )=0,\;\tilde{\gamma }_{j:n}(\alpha )>0 \}\) is nonempty, and

$$\begin{aligned} \frac{{\mathbb {E}}(X_{j+1:n}-X_{j:n})}{\sigma } \le \tilde{B}_{j:n} = \tilde{B}_{j:n}(\tilde{\alpha }_{j:n}), \end{aligned}$$

for \(\tilde{\alpha }_{j:n}= \arg \max _{\alpha \in \tilde{{\mathcal {Z}}}_{j:n}} \tilde{B}_{j:n}^2(\alpha )\). The equality holds for the distribution function

$$\begin{aligned} F(y)=\left\{ \begin{array}{ll} 0,&{}\quad y<0,\\ 1-e^{-y}, &{}\quad 0 \le y < \tilde{\alpha }_{j:n},\\ 1,&{}\quad y\ge \tilde{\alpha }_{j:n}. \end{array} \right. \end{aligned}$$

for \(y=y(x) = \frac{x-\mu }{\sigma \tilde{\lambda }_{j:n}} \tilde{B}_{j:n} -\frac{\tilde{\gamma }_{j:n}}{\tilde{\lambda }_{j:n}} + \tilde{\alpha }_{j:n}\) with \(\tilde{\gamma }_{j:n}=\tilde{\gamma }_{j:n}(\tilde{\alpha }_{j:n})\) and \(\tilde{\lambda }_{j:n}=\tilde{\lambda }_{j:n}(\tilde{\alpha }_{j:n})\).

We suspect that the assumptions of Proposition 4 never hold. We verified this claim for small n. For large n, the increasing parts of \(\tilde{h}_{j:n}\) are very steep, and only small fragments located high up are concave. It is very unlikely that pieces of them become parts of the projections.

Now we focus on the extreme spacings with \(j=1\) and \(n-1\). In the first case, we recall the results of Goroncy and Rychlik (2015, Proposition 6). The bounds

$$\begin{aligned} {\mathbb {E}}\frac{X_{2:n}-X_{1:n}}{\sigma }\le \tilde{B}_{1:n}=n\sqrt{\frac{n-1}{(2n-1)(2n-3)}\left( 1-\left( \frac{n-2}{n-1}\right) ^{2n-1}\right) } \end{aligned}$$
(2.22)

are valid for arbitrary parent distributions. They are attained by

$$\begin{aligned} F(x)=\left\{ \begin{array}{ll} 0, &{}\quad \frac{x-\mu }{\sigma }\tilde{B}_{1:n} < -n, \\ \tilde{f}^{-1}_{1:n}\left( \frac{x-\mu }{\sigma }\tilde{B}_{1:n} \right) , &{}\quad -n \le \frac{x-\mu }{\sigma }\tilde{B}_{1:n} < \frac{n(n-2)^{n-2}}{(n-1)^{n-1}}, \\ 1, &{}\quad \frac{x-\mu }{\sigma }\tilde{B}_{1:n} \ge \frac{n(n-2)^{n-2}}{(n-1)^{n-1}}. \end{array} \right. \end{aligned}$$
(2.23)

They have increasing density functions on their interval supports, and atoms with masses \(\frac{n-2}{n-1}\) at the right-end points. This implies that they are IFR as well. Accordingly, the general upper bounds (2.22) are sharp for the IFR distributions. For \(n=2\), the bound in (2.22) reduces to \(2 \frac{\sqrt{3}}{3}\), and (2.23) has a uniform density. This is a special case of the range evaluations due to Plackett (1947).
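Since (2.22) is fully explicit, it is easy to evaluate; the short sketch below tabulates it for small samples and confirms the reduction to Plackett's bound \(2\sqrt{3}/3\) for \(n=2\).

```python
import math

def first_spacing_bound(n):
    """Sharp upper bound (2.22) on E(X_{2:n} - X_{1:n}) / sigma."""
    ratio = ((n - 2) / (n - 1)) ** (2 * n - 1)
    return n * math.sqrt((n - 1) / ((2 * n - 1) * (2 * n - 3)) * (1 - ratio))

for n in range(2, 11):
    print(n, round(first_spacing_bound(n), 5))
```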

We finally proceed to the last spacings with \(j=n-1\). At the first step, we project the functions

$$\begin{aligned} \tilde{h}_{n-1:n}(x)=f_{n:n}(1-e^{-x})-f_{n-1:n}( 1-e^{-x})=n(1-e^{-x})^{n-2}(1-ne^{-x}), \, x>0, \end{aligned}$$

onto (2.1). Starting from the origin, they decrease to the global minimum at \(\tilde{a}_{n-1:n}= \ln \frac{n}{2}\), then increase convexly to the tangency point at \(\tilde{b}_{n-1:n}= \ln \frac{n}{2-\sqrt{2 \frac{n-2}{n-1}}}\), and eventually increase concavely to n at \(+\infty \). The functions do not fulfil Assumptions (A) of Sect. 1. Below we modify these assumptions slightly and present a respective modification of Proposition 1. We say that Assumptions \((\tilde{\mathbf{A }})\) hold if (A) are modified so that \(c=d =+\infty \) and \(\sup _{x>0} h(x) = \lim _{x \rightarrow \infty } h(x)>0\).
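Both the closed form of \(\tilde{h}_{n-1:n}\) and the location \(\ln (n/2)\) of its global minimum are easy to verify numerically; the sketch below (with the arbitrary choice \(n=7\)) checks the displayed identity against \(f_{n:n}-f_{n-1:n}\) and locates the minimum by a grid search.

```python
import math

def f(j, n, u):
    """Density f_{j:n}(u) = n B_{j-1,n-1}(u) of the j-th uniform order statistic."""
    return n * math.comb(n - 1, j - 1) * u ** (j - 1) * (1 - u) ** (n - j)

def h_last(n, x):
    """Closed form h~_{n-1:n}(x) = n (1-e^{-x})^{n-2} (1 - n e^{-x})."""
    return n * (1 - math.exp(-x)) ** (n - 2) * (1 - n * math.exp(-x))

n = 7  # arbitrary sample size
# the closed form agrees with f_{n:n}(1-e^{-x}) - f_{n-1:n}(1-e^{-x})
for x in (0.1, 0.5, 1.0, 2.0, 5.0):
    u = 1 - math.exp(-x)
    assert abs(h_last(n, x) - (f(n, n, u) - f(n - 1, n, u))) < 1e-12

# the global minimum sits at ln(n/2)
grid = [i * 1e-4 for i in range(1, 100001)]
x_min = min(grid, key=lambda x: h_last(n, x))
print(x_min, math.log(n / 2))
```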

Proposition 6

Under Assumptions (\(\tilde{\mathbf{A}}\)), with notation (1.5)–(1.7), the set \(\tilde{{\mathcal {Y}}}=\{\alpha >b:\; Y(\alpha )\ge 0,\; Z(\alpha )=0\}\) is nonempty, and for \(\alpha _*=\inf \tilde{{\mathcal {Y}}}\) we have

$$\begin{aligned} P_{\preceq _cW}h(x)=\left\{ \begin{array}{ll} h(\alpha _*)+\lambda _*(\alpha _*)(x-\alpha _*),&{}\quad 0\le x<\alpha _*,\\ h(x),&{}\quad x \ge \alpha _*. \end{array} \right. \end{aligned}$$

Outline of proof

Following Rychlik (2014, Proposition 3.2), we can show that the projection belongs to the family

$$\begin{aligned} Ph_{\alpha ,\lambda }(x)=\left\{ \begin{array}{ll} \lambda (x-\alpha )+ h(\alpha ),&{}\quad 0\le x<\alpha ,\\ h(x),&{}\quad x \ge \alpha , \end{array} \right. \end{aligned}$$

with \(\alpha \ge b, \lambda \ge h'(\alpha )\). The only difference between our assumptions and those of Rychlik (2014) is that in the latter case the function h does not have a decreasing part. The arguments in both cases are identical, though. In particular, they rely on the fact that every nondecreasing concave function crosses the strictly convex increasing part of h at most twice, and the same holds if the convex increasing part is preceded by a strictly decreasing one.

We further note that for fixed \(\alpha \ge b\)

$$\begin{aligned} ||h-Ph_{\alpha ,\lambda }||^2 = \int _0^\alpha [\lambda (x-\alpha ) + h(\alpha ) -h(x)]^2w(x)dx \end{aligned}$$

is a convex quadratic function of the argument \(\lambda \). It is globally minimized at \(\lambda _*(\alpha )\) defined in (1.5), and the constrained minimal slope is \(\max \{ \lambda _*(\alpha ), h'(\alpha ) \}\).

Now we claim that \(Y(\alpha )>0\) for all sufficiently large \(\alpha \). Indeed, the linear functions \(h'(\alpha )(x-\alpha )+h(\alpha )\) tend to the constant \(\lim _{x\rightarrow \infty }h(x)\) as \(\alpha \rightarrow \infty \). Hence \(h'(\alpha )(x-\alpha )+h(\alpha )\ge h(x)\) for all \(x >0 \) if \(\alpha \) is sufficiently large. However, this cannot hold for \(\lambda _*(\alpha )(x-\alpha )+h(\alpha )\), because the latter is the best approximation of \(h_{(0,\alpha )}\) among the functions \(\lambda (x-\alpha )+h(\alpha ), \lambda \in {\mathbb {R}}\). This implies that \(\lambda _*(\alpha )(x-\alpha )+h(\alpha )\) and h(x) cross each other in \((0,\alpha )\), and so \(\lambda _*(\alpha ) > h'(\alpha )\), as required.

Due to continuity of functions \(\lambda _*\) and \(h'\), the set \(\{ \alpha >b\,{:}\,Y(\alpha )<0 \}\) is a possibly empty sum of open intervals. Repeating arguments of the proof of Proposition 1 in Goroncy and Rychlik (2015), we conclude that

$$\begin{aligned} Y(\alpha )<0 \quad \Rightarrow \quad \frac{d}{d \alpha } ||Ph_{\alpha , h'(\alpha )} - h||^2 <0. \end{aligned}$$

This means that for every \(\alpha \) such that \(\lambda _*(\alpha ) < h'(\alpha )\) we can decrease the distance \(||Ph_{\alpha , h'(\alpha )} - h||\) by moving \(\alpha \) to the right as long as the inequality holds. In consequence, we can restrict ourselves to the family \(\{ Ph_{\alpha , \lambda _*(\alpha )}\,{:}\,\alpha \ge b, \; Y(\alpha )\ge 0 \}\).

Condition \(Z(\alpha )=0\), being equivalent to

$$\begin{aligned} \int _0^d Ph_{\alpha , \lambda _*(\alpha )}(x) w(x)dx = \int _0^d h(x) w(x) dx, \end{aligned}$$

is a necessary condition for \(Ph_{\alpha , \lambda _*(\alpha )}= P_{\preceq _cW}h\). One can easily check that if \(Z(\alpha )=c \ne 0\), then the function \(Ph_{\alpha , \lambda _*(\alpha )}-c\) approximates h better than \(Ph_{\alpha , \lambda _*(\alpha )}\) itself.

Accordingly, we have reduced the set of candidates for the projection to \(\{ Ph_{\alpha , \lambda _*(\alpha )}\,{:}\, \alpha \in \tilde{{\mathcal {Y}}}\}\). The set is nonempty, because the projection exists and is of the form \(Ph_{\alpha , \lambda _*(\alpha )}\). If \(\alpha _1 < \alpha _2\) for some \(\alpha _1,\alpha _2 \in \tilde{{\mathcal {Y}}}\), the former provides a better approximation of h. This follows from the fact that \(Ph_{\alpha , \lambda _*(\alpha )}\) with \(Z(\alpha )=0\) is the projection of \(h_{(0,\alpha )}\) onto the subspace of linear functions. Hence \(\lambda _*(\alpha _1) (x-\alpha _1) + h(\alpha _1)\) lies closer to h on \((0,\alpha _1)\) than \(\lambda _*(\alpha _2) (x-\alpha _2) + h(\alpha _2)\) does. The same clearly holds if we compare h(x) itself with \(\lambda _*(\alpha _2) (x-\alpha _2) + h(\alpha _2)\) on \((\alpha _1,\alpha _2)\). \(\square \)

Remark 1

If we assumed that \(c=d<+\infty \) and \(h(c)=\max _{0 \le x \le c} h(x)\), we could get \(\tilde{{\mathcal {Y}}}= \emptyset \) and \(P_{\preceq _cW}h(x)=\lambda _* x + \gamma _*\), being the orthogonal projection of h onto the family of linear functions with

$$\begin{aligned} \lambda _*= & {} \frac{ \int _0^d xh(x)w(x)dx - \int _0^d h(x)w(x)dx \int _0^d xw(x)dx}{\int _0^d x^2w(x)dx - \left( \int _0^d xw(x)dx\right) ^2}, \\ \gamma _*= & {} \frac{ \int _0^d h(x)w(x)dx \int _0^d x^2w(x)dx - \int _0^d xh(x)w(x)dx \int _0^d xw(x)dx}{\int _0^d x^2w(x)dx - \left( \int _0^d xw(x)dx\right) ^2}. \end{aligned}$$
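The coefficients \(\lambda _*\) and \(\gamma _*\) are just the weighted least-squares solution; a minimal numerical illustration (with the arbitrary choices of w uniform on [0, 1], i.e. \(d=1\), and \(h(x)=x^2\), whose best linear \(L^2\) fit on [0, 1] is known to be \(x-1/6\)) is sketched below.

```python
def integrate(g, d=1.0, m=100000):
    """Midpoint-rule approximation of the integral of g over (0, d)."""
    step = d / m
    return step * sum(g((i + 0.5) * step) for i in range(m))

w = lambda x: 1.0       # weight: uniform density on [0, 1]
h = lambda x: x * x     # function to be projected

m1 = integrate(lambda x: x * w(x))          # first moment of w
m2 = integrate(lambda x: x * x * w(x))      # second moment of w
hw = integrate(lambda x: h(x) * w(x))
xhw = integrate(lambda x: x * h(x) * w(x))
den = m2 - m1 ** 2

lam = (xhw - hw * m1) / den
gam = (hw * m2 - xhw * m1) / den
print(lam, gam)  # expected: close to 1 and -1/6
```

The residual \(h-\lambda _* x-\gamma _*\) is orthogonal to constants and to x in \(L^2(w)\), which can also be checked numerically.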

Fixing \(h= \tilde{h}_{n-1:n}\), we obtain

$$\begin{aligned} \tilde{\lambda }_{n-1:n}(\alpha )= & {} \frac{e^{-2\alpha }\left( 1-n^2\right) +e^{-\alpha }\left[ n-2+n^2(1-\alpha )\right] +1-n(1-\alpha )}{\alpha ^2-2\alpha +2-2e^{-\alpha }} \left( 1-e^{-\alpha }\right) ^{n-2} ,\\ \tilde{Y}_{n-1:n}(\alpha )= & {} \tilde{\lambda }_{n-1:n}(\alpha ) -n(n-1)\left( 1-e^{-\alpha }\right) ^{n-3} e^{-\alpha } \left( 2-ne^{-\alpha }\right) ,\\ \tilde{Z}_{n-1:n}(\alpha )= & {} \frac{e^{-2\alpha }(1-n^2)+e^{-\alpha }\left[ n-2+n^2(1-\alpha )\right] +1-n(1-\alpha )}{\alpha ^2+2-2\alpha -2e^{-\alpha }}\left( 1-e^{-\alpha }-\alpha \right) \nonumber \\&+ n\left( 1-e^{-\alpha }\right) \left[ 1-e^{-\alpha }(n-1)\right] ,\qquad \qquad \qquad \alpha >0. \end{aligned}$$

Proposition 7

The set \(\tilde{{\mathcal {Y}}}_{n-1:n}= \{ \alpha \ge \tilde{b}_{n-1:n}\,{:}\, \tilde{Y}_{n-1:n}(\alpha )\ge 0,\;\tilde{Z}_{n-1:n}(\alpha )=0 \}\) is nonempty and

$$\begin{aligned} {\mathbb {E}}\frac{X_{n:n}-X_{n-1:n}}{\sigma }\le \tilde{B}_{n-1:n}, \end{aligned}$$

where

$$\begin{aligned} \tilde{B}_{n-1:n}^2= & {} \tilde{\lambda }_{n-1:n}^2 \left( \tilde{\alpha }_{n-1:n}^2-2\tilde{\alpha }_{n-1:n} +2-2e^{-\tilde{\alpha }_{n-1:n}}\right) \\&+\,2\tilde{\lambda }_{n-1:n}\tilde{h}_{n-1:n}(\tilde{\alpha }_{n-1:n}) (1-e^{-\tilde{\alpha }_{n-1:n}}-\tilde{\alpha }_{n-1:n})\\&+\,\tilde{h}^2_{n-1:n}(\tilde{\alpha }_{n-1:n})(1-e^{-\tilde{\alpha }_{n-1:n}})+ \frac{n^2(n-1)}{2n-1} \left[ 1-\left( 1-e^{-\tilde{\alpha }_{n-1:n}}\right) ^{2n-1}\right] \\&-\,n^3 \left( 1-e^{-\tilde{\alpha }_{n-1:n}}\right) ^{2n-2} e^{-\tilde{\alpha }_{n-1:n}} - n^2 (n-1)^2 \left( 1-e^{-\tilde{\alpha }_{n-1:n}}\right) ^{2n-3} e^{-2\tilde{\alpha }_{n-1:n}}, \end{aligned}$$

\(\tilde{\alpha }_{n-1:n} = \inf \tilde{{\mathcal {Y}}}_{n-1:n}\), and \(\tilde{\lambda }_{n-1:n}= \tilde{\lambda }_{n-1:n}(\tilde{\alpha }_{n-1:n})\). With the notation \(y = \frac{x-\mu }{\sigma } \tilde{B}_{n-1:n}\) and \(\tilde{f}_{n-1:n}= f_{n:n}-f_{n-1:n}\), the inequality becomes equality for

$$\begin{aligned} F(y)=\left\{ \begin{array}{ll} 0, &{} \quad y <\tilde{f}_{n-1:n}\left( 1-e^{-\tilde{\alpha }_{n-1:n}}\right) \\ &{} \quad -\tilde{\lambda }_{n-1:n} \tilde{\alpha }_{n-1:n},\\ 1-\exp \left( -\tilde{\alpha }_{n-1:n}- \frac{y-\tilde{f}_{n-1:n}\left( 1-e^{-\tilde{\alpha }_{n-1:n}}\right) }{\tilde{\lambda }_{n-1:n}} \right) , &{} \quad \tilde{f}_{n-1:n}\left( 1-e^{-\tilde{\alpha }_{n-1:n}}\right) \\ &{}\quad -\tilde{\lambda }_{n-1:n} \tilde{\alpha }_{n-1:n} \le y \\ &{}\quad < \tilde{f}_{n-1:n}\left( 1-e^{-\tilde{\alpha }_{n-1:n}}\right) \\ \tilde{f}^{-1}_{n-1:n}(y), &{}\quad \tilde{f}_{n-1:n}\left( 1-e^{-\tilde{\alpha }_{n-1:n}}\right) \le y < n, \\ 1, &{}\quad y \ge n. \end{array} \right. \end{aligned}$$

We conclude the section with numerical evaluations of the bounds \(\tilde{B}_{j:n}\) for the spacings in small samples, presented in Table 2 below. They are accompanied by the probabilities \(1-\exp (-\tilde{\alpha }_{j:n})\) of the exponential parts of the distributions attaining the bounds. The rest of the probability mass, \(\exp (-\tilde{\alpha }_{j:n})\), is concentrated at the atom located at the right end of the support. For fixed n, smaller bounds are observed for the central spacings, and greater ones for the extreme ones. The contributions of the exponential density functions in the extreme distributions increase with the spacing rank.

Table 2 Bounds on expected spacings \({\mathbb {E}}(X_{j+1:n}-X_{j:n})/\sigma , 1 \le j \le n-1, 3 \le n \le 10\), for the increasing failure rate distributions

3 Possible further developments

Propositions 1 and 6 provide directly useful tools for evaluations of the standardized expectations of L-statistics \(\sum _{i=1}^n c_i (X_{i:n}-\mu )/\sigma \) from IFR populations under the condition that the respective functions \(h_{\mathbf {c}}= \sum _{i=1}^n c_i (f_{i:n}V-1)= n\sum _{i=0}^{n-1} c_{i+1} B_{i,n-1}V - \sum _{i=1}^n c_i\) satisfy either of Assumptions (A) and \((\tilde{\mathbf{A }})\). We have

$$\begin{aligned} h'_{\mathbf {c}}= & {} n \sum _{i=0}^{n-2}(n-i-1)( c_{i+2}-c_{i+1} )B_{i,n-1}V, \nonumber \\ h''_{\mathbf {c}}= & {} n \sum _{i=0}^{n-2}(n-i-1)[(n-i-2) c_{i+3}\nonumber \\&-\,(2n-2i-3)c_{i+2}+(n-i-1)c_{i+1}] B_{i,n-1}V \end{aligned}$$
(3.1)

(the coefficient at \(c_{n+1}\) vanishes in the latter formula). The variation diminishing property (VDP, for short) of Bernstein polynomials (see, e.g., Rychlik 2001b, Lemma 14) can be used to establish the numbers of increase/decrease and convexity/concavity intervals and to verify the assumptions. It asserts that the number of sign changes of any linear combination of Bernstein polynomials in (0, 1) does not exceed the number of sign changes in the vector of combination coefficients. Moreover, the initial and ultimate signs of the combination coincide with the signs of the first and last non-zero coefficients, respectively.
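The VDP is easy to observe numerically; the sketch below counts sign changes of a combination of Bernstein polynomials on a fine grid of (0, 1), using an arbitrary coefficient vector with exactly one sign change.

```python
import math

def bernstein(k, n, x):
    """Bernstein polynomial B_{k,n}(x)."""
    return math.comb(n, k) * x**k * (1 - x)**(n - k)

def sign_changes(values):
    """Number of sign changes in a sequence, ignoring zeros."""
    signs = [(-1 if v < 0 else 1) for v in values if v != 0]
    return sum(1 for s, t in zip(signs, signs[1:]) if s != t)

coeffs = [4, 6, -4, -1, 0]  # one sign change; first nonzero sign +, last -
n = len(coeffs) - 1
xs = [i / 2000 for i in range(1, 2000)]
combo = [sum(c * bernstein(k, n, x) for k, c in enumerate(coeffs)) for x in xs]

print(sign_changes(coeffs), sign_changes(combo))
```

As the VDP asserts, the combination changes sign at most once on (0, 1), starting positive and ending negative.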

Natural applications are provided by reliability theory. If a system is composed of n elements with i.i.d. lifetimes \(X_1, \ldots , X_n\) (exchangeability is sufficient here), then the distribution function of the system lifetime T is a convex combination of order statistics distribution functions

$$\begin{aligned} {\mathbb {P}}(T \le t) = \sum _{i=1}^n s_i {\mathbb {P}}(X_{i:n} \le t), \end{aligned}$$

where the combination coefficient vector \(\mathbf {s}=(s_1,\ldots , s_n)\), called the Samaniego signature, depends merely on the system structure. Therefore \({\mathbb {E}}T= {\mathbb {E}}\sum \nolimits _{i=1}^n s_i X_{i:n}\), and our methods can be applied for precise evaluations of \(\frac{{\mathbb {E}}T-{\mathbb {E}}X_1}{\sqrt{{\mathbb {V}}ar \,X_1}}\), when \(X_i\) have an IFR distribution.
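The signature representation of \({\mathbb {E}}T\) can be checked by simulation; the sketch below does so for the classic bridge system considered in Example 2 below, with standard exponential (hence IFR) component lifetimes, using the familiar formula \({\mathbb {E}}X_{i:n}=\sum _{k=n-i+1}^{n}1/k\) for exponential order statistics and the usual minimal path sets of the bridge.

```python
import random

# minimal path sets of the classic bridge system (component 3 is the bridge)
PATHS = [(1, 4), (2, 5), (1, 3, 5), (2, 3, 4)]

def lifetime(x):
    """System lifetime: the best path works as long as its weakest component."""
    return max(min(x[i - 1] for i in p) for p in PATHS)

n = 5
signature = (0, 1 / 5, 3 / 5, 1 / 5, 0)
# E X_{i:n} for i.i.d. standard exponential lifetimes
e_os = [sum(1 / k for k in range(n - i + 1, n + 1)) for i in range(1, n + 1)]
et_signature = sum(s * e for s, e in zip(signature, e_os))

random.seed(1)
m = 200000
et_mc = sum(lifetime([random.expovariate(1.0) for _ in range(n)])
            for _ in range(m)) / m
print(et_signature, et_mc)
```

The Monte Carlo estimate of \({\mathbb {E}}T\) agrees with the signature formula within simulation error.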

For the overwhelming majority of coherent systems, the signature vector is either monotone or unimodal, i.e. first nondecreasing and then nonincreasing. For example, Navarro and Rubio (2010) showed that there is only one system with a bimodal signature among the 180 systems of size 5. Due to (3.1) and the VDP, these monotonicity properties are inherited by the respective functions \(h_{\mathbf {s}}\). The conclusion is not immediately apparent in the analysis of the second derivatives: there, the modified differences \(\left[ 1- \frac{1}{2n-2i-3} \right] c_{i+3} - 2 c_{i+2} + \left[ 1+ \frac{1}{2n-2i-3} \right] c_{i+1}\), rather than the standard second differences \( c_{i+3} - 2 c_{i+2} + c_{i+1}, i=0,\ldots , n-2\), have to be studied.
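The formulas (3.1) are straightforward to implement; the sketch below computes the coefficient vectors of \(h'_{\mathbf {c}}\) and \(h''_{\mathbf {c}}\) in the basis \(B_{i,n-1}V\) for an arbitrary signature (here the vector \(\mathbf {s}_2\) of Example 3 below), and cross-checks the first derivative against a central finite difference.

```python
import math

def bernstein(k, n, x):
    return math.comb(n, k) * x**k * (1 - x)**(n - k)

def h(c, x):
    """h_c(x) = sum_i c_i [f_{i:n}(V(x)) - 1], with V(x) = 1 - e^{-x}."""
    n = len(c)
    u = 1 - math.exp(-x)
    return sum(ci * (n * bernstein(i, n - 1, u) - 1) for i, ci in enumerate(c))

def d1_coeffs(c):
    """Coefficients of h'_c in the basis B_{i,n-1}(V), from (3.1)."""
    n = len(c)
    return [n * (n - i - 1) * (c[i + 1] - c[i]) for i in range(n - 1)]

def d2_coeffs(c):
    """Coefficients of h''_c in the basis B_{i,n-1}(V), from (3.1)."""
    n = len(c)
    cc = list(c) + [0.0]  # the coefficient at c_{n+1} vanishes anyway
    return [n * (n - i - 1) * ((n - i - 2) * cc[i + 2]
                               - (2 * n - 2 * i - 3) * cc[i + 1]
                               + (n - i - 1) * cc[i])
            for i in range(n - 1)]

s2 = [0, 0, 0, 2 / 5, 3 / 5]  # signature of the system in Example 3
print(d1_coeffs(s2), d2_coeffs(s2))

# cross-check h' at x = 1 against a central finite difference
n, x, eps = len(s2), 1.0, 1e-5
u = 1 - math.exp(-x)
h1 = sum(d * bernstein(i, n - 1, u) for i, d in enumerate(d1_coeffs(s2)))
fd = (h(s2, x + eps) - h(s2, x - eps)) / (2 * eps)
print(h1, fd)
```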

Example 2

The classic bridge system (see Fig. 1) has signature \(\mathbf {s}_1= \left( 0, \frac{1}{5}, \frac{3}{5} , \frac{1}{5} , 0\right) \).

Fig. 1 Bridge system

In consequence,

$$\begin{aligned} h'_{\mathbf {s}_1}= & {} 4 B_{0,4}V + 6 B_{1,4}V - 4 B_{2,4}V - B_{3,4}V, \\ h''_{\mathbf {s}_1}= & {} 8 B_{0,4}V - 30 B_{1,4}V + 6 B_{2,4}V + B_{3,4}V. \end{aligned}$$

By VDP, \(h_{\mathbf {s}_1} \) is first convex increasing, then concave increasing, concave decreasing, and finally convex decreasing. It satisfies the other requirements of (A) with the exponential weight as well. Using the projection of \(h_{\mathbf {s}_1}\) onto (2.1), we determine the bound

$$\begin{aligned} \frac{{\mathbb {E}}T_1-{\mathbb {E}}X_1}{\sqrt{{\mathbb {V}}ar \,X_1}} \le 0.304099, \end{aligned}$$

for the i.i.d. IFR component lifetimes and the attainability condition

$$\begin{aligned} F(x)=\left\{ \begin{array}{ll} 0, &{} \quad \frac{x-\mu }{\sigma } <-3.29566,\\ 1-\exp \left( -0.07239 \frac{x-\mu }{\sigma } - 0.23855 \right) , &{} \quad -3.29566 \le \frac{x-\mu }{\sigma } < -1.66202, \\ \tilde{f}_{\mathbf {s}_1}^{-1}(0.30412\frac{x-\mu }{\sigma }+1), &{} \quad -1.66202 \le \frac{x-\mu }{\sigma } < 0.48936, \\ 1, &{}\quad \frac{x-\mu }{\sigma } \ge 0.48936. \end{array} \right. \end{aligned}$$

The difference from the respective sharp bound without the IFR restriction, \(\frac{{\mathbb {E}}T_1-{\mathbb {E}}X_1}{\sqrt{{\mathbb {V}}ar \,X_1}} \le 0.304111\), is almost unnoticeable. We see that, except for Example 1, the l-h-c type projections may be useful in the description of bounds for various systems. Finally, we note that \(\frac{{\mathbb {E}}X_{3:5}-{\mathbb {E}}X_1}{\sqrt{{\mathbb {V}}ar \,X_1}} \le 0.37576\) in the IFR case.

Example 3

Consider the parallel connection of three single components and a series system of two items, whose lifetime is given by

$$\begin{aligned} T_2=\max (X_1,X_2,X_3,\min (X_4,X_5)). \end{aligned}$$

This system has nondecreasing signature \(\mathbf {s}_2= \left( 0, 0, 0, \frac{2}{5}, \frac{3}{5} \right) \). Since

$$\begin{aligned} h'_{\mathbf {s}_2}= & {} 4 B_{2,4}V + B_{3,4}V, \\ h''_{\mathbf {s}_2}= & {} 12 B_{1,4}V - 6 B_{2,4}V - B_{3,4}V, \end{aligned}$$

assumptions \((\tilde{\mathbf{A }})\) are satisfied. Using Proposition 6, we obtain

$$\begin{aligned} \frac{{\mathbb {E}}T_2-{\mathbb {E}}X_1}{\sqrt{{\mathbb {V}}ar \,X_1}} \le 0.95632, \end{aligned}$$

with the equality for

$$\begin{aligned} F(x)=\left\{ \begin{array}{ll} 0, &{} \quad \frac{x-\mu }{\sigma } <-1.28639,\\ 1-\exp \left( -0.67669 \frac{x-\mu }{\sigma } - 0.87049 \right) , &{} \quad -1.28639 \le \frac{x-\mu }{\sigma } < 1.10586, \\ \tilde{f}_{\mathbf {s}_2}^{-1}(0.95632\frac{x-\mu }{\sigma }+1), &{} \quad 1.10586 \le \frac{x-\mu }{\sigma } < 2.09135, \\ 1, &{} \quad \frac{x-\mu }{\sigma } \ge 2.09135. \end{array} \right. \end{aligned}$$

The bound for the five-component parallel system amounts to 1.15470. For general i.i.d. distributions of the component lifetimes we have \(\frac{{\mathbb {E}}T_2-{\mathbb {E}}X_1}{\sqrt{{\mathbb {V}}ar \,X_1}} \le \sqrt{\frac{58}{63}} \approx 0.95950\), which is slightly more than in the IFR case. Observe that for the dual system, i.e. the series connection of three items and a parallel pair, with nonincreasing signature \( \mathbf {s}_3= \left( \frac{3}{5}, \frac{2}{5}, 0,0,0 \right) \), we get the trivial bound \(\frac{{\mathbb {E}}T_3-{\mathbb {E}}X_1}{\sqrt{{\mathbb {V}}ar \,X_1}} \le 0\), valid in both the general and IFR cases.