1 Introduction

A vector (or a sequence) of generalized order statistics (GOSs) \(X_{*,1}, \ldots , X_{*,n}\) (or \(X_{*,1},X_{*,2}, \ldots \), respectively) is defined by means of a vector \((\gamma _1,\ldots , \gamma _n)\) (or a sequence \((\gamma _1,\gamma _{2},\ldots )\)) of positive parameters, and a one-dimensional distribution function F. The joint distribution of the first r GOSs is given by

$$\begin{aligned} \left( X_{*,j} \right) _{j=1}^r {\mathop {=}\limits ^{d}} \left( F^{-1} \left( 1- \prod _{i=1}^{j} U_i^{\frac{1}{\gamma _i}} \right) \right) _{j=1}^r, \end{aligned}$$
(1)

where

$$\begin{aligned} F^{-1}(x) = \inf \{ t: F(t) \ge x \}, \quad 0< x <1, \end{aligned}$$

is the left-continuous version of the quantile function of F, and \(U_1,\ldots , U_r\) are iid standard uniform random variables. Formula (1) shows that the generalized order statistics are arranged in ascending order \(X_{*,1}\le \cdots \le X_{*,r} \le \cdots \). We also notice that the joint distribution of \(X_{*,1}, \ldots , X_{*,r}\) depends only on the first r parameters \(\gamma _1,\ldots , \gamma _r\), and preserving their order is essential. The marginal distribution of the rth generalized order statistic (GOS) also depends on \(\gamma _1,\ldots , \gamma _r\), but it is unaffected by arbitrary reorderings of these parameters. For multidimensional marginals, reorderings are also possible, but some restrictions should be imposed. E.g., defining the joint distribution of \((X_{*,r},X_{*,s})\), \(1 \le r < s\), we can admit arbitrary reorderings of the first r and the last \(s-r\) parameters.
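Representation (1) is directly implementable. The following sketch (ours, not part of the original model description) simulates the first r GOSs by applying the quantile function to one minus the cumulative products of independent powers of standard uniform variables; the exponential quantile at the end is merely an illustrative choice of F.

```python
import numpy as np

def simulate_gos(gammas, quantile, size, seed=None):
    """Samples of (X_{*,1}, ..., X_{*,r}) via formula (1); `gammas` must be
    supplied in the model order, since the joint distribution depends on it."""
    rng = np.random.default_rng(seed)
    gammas = np.asarray(gammas, dtype=float)
    u = rng.uniform(size=(size, gammas.size))
    # cumulative products of U_i^{1/gamma_i} are nonincreasing in j,
    # so the generated GOSs are automatically ordered
    w = np.cumprod(u ** (1.0 / gammas), axis=1)
    return quantile(1.0 - w)

# illustration with the standard exponential quantile F^{-1}(x) = -ln(1 - x)
print(simulate_gos([3.0, 2.0, 1.0], lambda x: -np.log1p(-x), size=5))
```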

In the paper, we consider moments of a single GOS \(X_{*,r}\) with arbitrarily fixed \(r \in \mathbb {N}\). Therefore, to simplify calculations, throughout the whole paper and without loss of generality we assume that the vector of parameters \(\gamma _1, \ldots , \gamma _r\), denoted briefly by \({\varvec{\gamma }}\in \mathbb {R}^r_+\), is nonincreasing, i.e., \(\gamma _1\ge \ldots \ge \gamma _r\). We assume that the rth generalized order statistic \(X_{*,r}\) with parameters \({\varvec{\gamma }}\) is based on an arbitrary life distribution function F with a finite and positive mean. This means that F corresponds to a random variable with non-negative values, so that the left limit of F at 0 is \(F(0^-)=0\). Moreover, the mean of F may be expressed in any of the forms

$$\begin{aligned} \mu =\int _0^\infty x\,dF(x)=\int _0^1 F^{-1}(x)\,dx=\int _0^\infty [1-F(x)]\,dx. \end{aligned}$$
(2)

Let \(\ell \in \{1,\dotsc ,r\}\) denote the number of distinct elements of \({\varvec{\gamma }}\). More precisely, suppose that \({\varvec{\gamma }}\) contains \(\ell \) different values \(\delta _1> \ldots > \delta _\ell \) with respective multiplicities \(d_1,\ldots ,d_\ell \in \{ 1, \ldots , r\}\) so that \(\sum _{k=1}^{\ell }d_k =r\). Under this notation the distribution function of \(X_{*,r}\) is the composition \(F_{*,{\varvec{\gamma }}}= G_{*,{\varvec{\gamma }}}\circ F\), where

$$\begin{aligned} G_{*,{\varvec{\gamma }}}(x) = 1- \sum _{\nu =1}^{\ell } \sum _{j=0}^{d_\nu -1} a_{{\nu , j}} (1-x)^{\delta _\nu } \sum _{m=0}^{d_\nu -j-1} \frac{[-\delta _\nu \ln (1-x)]^m}{m!}, \quad 0<x<1, \end{aligned}$$
(3)

and the density function corresponding to (3) has the form

$$\begin{aligned} g_{*,{\varvec{\gamma }}}(x)= \sum _{\nu =1}^{\ell } \sum _{j=0}^{d_\nu -1} b_{{\nu , j}} (1-x)^{\delta _\nu -1} [- \ln (1-x)]^{d_\nu -j-1}, \quad 0<x<1, \end{aligned}$$
(4)

where the coefficients \(a_{{\nu , j}}\) and \(b_{{\nu , j}}\) depend only on \({\varvec{\gamma }}\) (see Cramer and Kamps 2003). Their explicit forms are irrelevant for our considerations, except for \(a_{\ell ,0}\), which is given by

$$\begin{aligned} a_{\ell ,0}= \prod _{k=1}^{\ell -1} \left( \frac{\delta _k}{\delta _k-\delta _\ell } \right) ^{d_k} >0. \end{aligned}$$
(5)

In consequence \(b_{\ell , 0}\) is also positive.

Our main achievement presented here is the establishment of necessary and sufficient conditions for finiteness of the moments \(\mathbb {E}X_{*,r}^p\) of positive order p for fixed parameters \({\varvec{\gamma }}\) of GOSs and all baseline life distribution functions with finite non-zero expectations. It turns out that the conditions depend merely on the value of the smallest parameter \(\delta _\ell \) and its multiplicity. The number r of the generalized order statistic does not matter here except for the fact that for multiple \(\delta _\ell \) we need \(r \ge 2\). Another essential result is the determination of the sharp upper bounds on the ratios \(\frac{\mathbb {E}X_{*,r}^p}{\mu ^p}\) valid for all baseline life distributions with finite positive means under the necessary and sufficient conditions on the moment order p and model parameters \({\varvec{\gamma }}\). In Sect. 2 we present analytic tools used in proving our results. In Sect. 3 we determine finite sharp bounds on \(\frac{\mathbb {E}X_{*,r}^p}{\mu ^p}\) under various assumptions on \(p>0\) and \({\varvec{\gamma }}\in \mathbb {R}^r_+\). We prove in Sect. 4 that the assumptions of Sect. 3 are also necessary for finiteness of the moments of GOSs. In Sect. 5 we apply our general findings to the most popular special cases of the GOSs model.

The model of generalized order statistics was introduced by Kamps (1995). It contains many known models of ordered statistical data as special cases, e.g. order statistics and record values. Some of them are discussed in more detail in the last section of the paper. The model enables a unified approach to analogous problems concerning seemingly different situations, and so it quickly attracted the attention of many authors. Exemplary papers adopting such an approach are Belzunce and Martínez-Riquelme (2015) (stochastic ordering), Mahmoud and Al-Nagar (2009) (recurrence relations on moments and related characterizations of distributions) and Barakat et al. (2018) (prediction of future values of GOSs).

The fundamental problem concerning generalized order statistics was to find the explicit forms of the distribution function \(G_{*,{\varvec{\gamma }}}\) and the density function \(g_{*,{\varvec{\gamma }}}\). The first results in this direction were obtained by Kamps (1995) in the special case when the successive parameters \(\gamma _i\) constitute an arithmetic sequence. A further generalization was due to Kamps and Cramer (2001) who treated the case when all the elements of the parameter vector \({\varvec{\gamma }}\) are pairwise different. The general case was solved by Cramer and Kamps (2003) with an application of Meijer’s G-functions.

These results enabled extensive studies on bounds on moments of generalized order statistics. The first results were obtained by Cramer et al. (2002b), who considered bounds on the expectations \(\mathbb {E}X_{*,r}\) expressed in scale units based on the pth absolute central moments of the baseline distribution. Next, Cramer et al. (2004) obtained bounds on \(\mathbb {E}X_{*,r}\) expressed in mean units. These results were later refined for restricted families of underlying distributions with a finite variance. Bieniek (2008a) derived upper mean-variance bounds for distributions with decreasing generalized failure rate function, and Bieniek, Burkschat and Rychlik (2020) determined the corresponding lower bounds. Bounds valid for bounded populations were derived by Rychlik (2010).

To the best of our knowledge, the weakest sufficient conditions for the existence of moments of GOSs were presented in Cramer et al. (2002a). They were expressed by three conditions: \(p \le r\), \(p \le \gamma _r\) and \(p < \gamma _{r-1}\). In particular, they depend on the index r of the GOS. Necessary and sufficient conditions were formulated only for specific submodels of the GOSs model. Sen (1959) determined such conditions for the standard order statistics. Analogous results for the classic record values were presented in Nagaraja (1978). Recently, Rychlik and Szymkowiak (2023) generalized the Nagaraja conditions to the case of kth records, \(k \ge 1\). The problem of finding necessary and sufficient conditions for finiteness of moments of GOSs with arbitrary model parameters was still an open question till now.

2 Auxiliary results

In this section we present a number of auxiliary lemmas whose conclusions will be used in the proofs of the main results. First we recall the result proven in Cramer et al. (2004) as Theorem 2.1, which describes the monotonicity properties of \(g_{*,{\varvec{\gamma }}}\).

Lemma 1

The density function \(g_{*,{\varvec{\gamma }}}\) of each uniform generalized order statistic is unimodal.

  1. (a)

    For \(r=1\), the function \(g_{*,\gamma }\) is strictly increasing from \(\gamma _1\) to \(\infty \), constant equal to 1, and strictly decreasing from \(\gamma _1\) to 0 for \(\gamma _1<1\), \(=1\) and \(>1\), respectively.

  2. (b)

    For \(r\ge 2\), we have the following. If \(\delta _\ell \le 1\), then the density function is strictly increasing from 0 to \(\infty \) except for the case \(\delta _\ell =d_\ell =1\), when the maximum is finite. Otherwise it is strictly unimodal with a mode in (0, 1) and equal to 0 at the ends of the interval.

The next one is a simplified version of Theorem 1 of Moriguti (1953).

Lemma 2

Suppose that a real function g defined on \((a,b)\) has a finite integral. Let \(\bar{g}\) denote the right-continuous version of the derivative of the greatest convex minorant \(\bar{G}\) of the antiderivative \(G(x) = \int _{a}^{x} g(t)dt\), \(a< x < b\), of g. Then for every nondecreasing function \(h: (a,b)\rightarrow \mathbb {R}\) we have

$$\begin{aligned} \int _{a}^{b} g(x)h(x)\,dx \le \int _{a}^{b} \bar{g}(x)h(x) \,dx \end{aligned}$$
(6)

under the assumption that both the integrals exist. The equality in (6) is attained if h is constant on every interval contained in the set \(\{ x \in (a,b): \bar{G}(x) < G(x)\}\).

Corollary 1

The derivatives of the greatest convex minorants of the antiderivatives of the densities \(g_{*,{\varvec{\gamma }}}\) given by (4), i.e., of the distribution functions (3), have the following forms.

  1. (a)

    If \(r=1\) and \(\gamma _1 \ge 1\), then

    $$\begin{aligned} \bar{g}_{*,\gamma }(x) = 1, \quad 0<x<1. \end{aligned}$$
  2. (b)

    If \(r \ge 1\) and \(\delta _\ell \le 1\), then

    $$\begin{aligned} \bar{g}_{*,{\varvec{\gamma }}}(x) = g_{*,{\varvec{\gamma }}}(x), \quad 0<x<1. \end{aligned}$$
  3. (c)

    If \(r \ge 2\) and \(\delta _\ell > 1\), then

    $$\begin{aligned} \bar{g}_{*,{\varvec{\gamma }}}(x) = g_{*,{\varvec{\gamma }}}(\min \{x, \alpha _*\}), \quad 0<x<1, \end{aligned}$$
    (7)

    where \(\alpha _*= \alpha _*({\varvec{\gamma }},1) \) (cf. Eq. (13) below) is the unique solution to the equation

    $$\begin{aligned} 1-G_{*,{\varvec{\gamma }}}(\alpha )= (1-\alpha )g_{*,{\varvec{\gamma }}}(\alpha ). \end{aligned}$$

The case \(r=\gamma _1=1\) falls under both cases (a) and (b). Corollary 1 is a rephrased version of Proposition 3.1 of Cramer et al. (2002b). The following lemma presents the famous Hölder inequality (see, e.g., Mitrinovic 1970).
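For readers who wish to experiment with the Moriguti construction, the following numerical sketch (ours, not a tool used in the paper) computes the right derivative of the greatest convex minorant of the antiderivative G on a grid by means of a lower convex hull. Applied to the density \(g_{*,\gamma _1}(x)=\gamma _1(1-x)^{\gamma _1-1}\) of the first uniform GOS with \(\gamma _1=2\), it reproduces the conclusion \(\bar{g}_{*,\gamma _1}\equiv 1\) of Corollary 1(a).

```python
import numpy as np

def gcm_derivative(x, g):
    """Right derivative of the greatest convex minorant of G(t) = int g,
    evaluated on the increasing grid x (a numerical version of Lemma 2)."""
    # antiderivative G by the trapezoidal rule
    G = np.concatenate(([0.0], np.cumsum(0.5 * (g[1:] + g[:-1]) * np.diff(x))))
    # lower convex hull of the points (x_i, G_i): drop a middle point whenever
    # it lies on or above the chord joining its neighbours
    hull = []
    for xi, Gi in zip(x, G):
        while len(hull) >= 2:
            (x1, G1), (x2, G2) = hull[-2], hull[-1]
            if (G2 - G1) * (xi - x1) >= (Gi - G1) * (x2 - x1):
                hull.pop()
            else:
                break
        hull.append((xi, Gi))
    hx, hG = np.array(hull).T
    slopes = np.diff(hG) / np.diff(hx)
    # each grid point receives the slope of the hull segment containing it
    idx = np.clip(np.searchsorted(hx, x, side="right") - 1, 0, len(slopes) - 1)
    return slopes[idx]

x = np.linspace(0.0, 1.0, 2001)
d = gcm_derivative(x, 2.0 * (1.0 - x))
print(d.min(), d.max())  # both close to 1, as Corollary 1(a) predicts
```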

Lemma 3

Let g and h be non-negative non-zero elements of the Banach spaces \(L^p((a,b),dx)\) and \(L^q((a,b),dx)\), respectively, for some \(1<p <\infty \) and \(1< q = \frac{p}{p-1} < \infty \). Then

$$\begin{aligned} \int _{a}^{b} g(x)h(x)\,dx \le \left[ \int _{a}^{b} g^p(x) \,dx \right] ^{1/p} \left[ \int _{a}^{b} h^q(x) \,dx \right] ^{1/q}, \end{aligned}$$
(8)

and the equality in (8) holds if for some constant \(c>0\) the equality

$$\begin{aligned} g(x)= c \, h^{q/p}(x) \end{aligned}$$

holds almost everywhere on \((a,b)\).

Lemma 4

If either \(0<p<\delta _\ell <1\) or \(0< p< \delta _\ell =1 < d_\ell \), then

$$\begin{aligned} \int _{0}^{1} g_{*,{\varvec{\gamma }}}^{\frac{1}{1-p}}(x) dx < \infty . \end{aligned}$$
(9)

Proof

Under the assumptions of the lemma, \(g_{*,{\varvec{\gamma }}}\) is increasing on (0, 1) and tends to \(\infty \) at 1 (cf. Lemma 1). Therefore it actually suffices to show that

$$\begin{aligned} \int _{1-\eta }^{1} g_{*,{\varvec{\gamma }}}^{\frac{1}{1-p}}(x) dx < \infty \end{aligned}$$

for some \(0< \eta <1\). For \(\nu =1,\ldots ,\ell \) and \(j=0,\ldots , d_\nu -1\) we have

$$\begin{aligned} \lim _{x \rightarrow 1}\frac{(1-x)^{\delta _\nu -1}[-\ln (1-x)]^{d_\nu -j-1}}{(1-x)^{\delta _\ell -1}[-\ln (1-x)]^{d_\ell -1}} =0 \end{aligned}$$
(10)

except for \(\nu = \ell \), \(j=0\), when the quotient is just equal to 1. Moreover, for every \(0<\zeta < 1\)

$$\begin{aligned} \lim _{x \rightarrow 1 } \frac{[-\ln (1-x)]^\frac{d_\ell -1}{1-p}}{(1-x)^{-\zeta }} =0. \end{aligned}$$
(11)

First suppose that \(0<p<\delta _\ell < 1\). By (4), (10), and (11) for any \(0< \varepsilon <1\) there exists \( 0< \eta <1\) such that for all \(1- \eta< x < 1\) we have

$$\begin{aligned} g_{*,{\varvec{\gamma }}}(x) \le b_{\ell ,0} (1+\varepsilon ) (1-x)^{\delta _\ell -1}[-\ln (1-x)]^{d_\ell -1} \end{aligned}$$

and

$$\begin{aligned} g_{*,{\varvec{\gamma }}}^{\frac{1}{1-p}}(x) \le [b_{\ell ,0} (1+\varepsilon )]^{\frac{1}{1-p}} (1-x)^{\frac{\delta _\ell -1}{1-p}-\zeta }. \end{aligned}$$

Accordingly, we can write

$$\begin{aligned} \int _{1-\eta }^{1} g_{*,{\varvec{\gamma }}}^{\frac{1}{1-p}}(x) dx \le [b_{\ell ,0} (1+\varepsilon )]^{\frac{1}{1-p}} \int _{1-\eta }^{1} (1-x)^{\frac{\delta _\ell -1}{1-p}-\zeta } dx. \end{aligned}$$

If we take \(0< \zeta < \frac{\delta _\ell - p}{1-p} \), then \(\frac{\delta _\ell -1}{1-p}-\zeta > -1\), and the last integral is finite.

Now suppose that \(0< p< \delta _\ell =1 < d_\ell \). It follows that given \(0< \varepsilon <1\) there exists \( 0< \eta <1\) such that for all \(1- \eta< x < 1\) we have

$$\begin{aligned} g_{*,{\varvec{\gamma }}}(x)\le & {} b_{\ell ,0} (1+\varepsilon ) [-\ln (1-x)]^{d_\ell -1}, \\ g_{*,{\varvec{\gamma }}}^{\frac{1}{1-p}}(x)\le & {} [b_{\ell ,0} (1+\varepsilon )]^{\frac{1}{1-p}} (1-x)^{-\zeta }. \end{aligned}$$

In consequence,

$$\begin{aligned} \int _{1-\eta }^{1} g_{*,{\varvec{\gamma }}}^{\frac{1}{1-p}}(x) dx \le [b_{\ell ,0} (1+\varepsilon )]^{\frac{1}{1-p}} \int _{1-\eta }^{1} (1-x)^{-\zeta } dx, \end{aligned}$$

which is finite for every \(0< \zeta <1\). \(\square \)

For all the other parameters of GOSs the density functions \(g_{*,{\varvec{\gamma }}}(x)\), \(0<x<1\), are bounded, and the integrals (9) are finite for every \(0<p<1\). The statement of Lemma 4 in the case \(0<p<\delta _\ell <1\) is equivalent to that of Lemma 2.1 in Cramer et al. (2002a).

Next we present Corollary 4 of Papadatos (2021).

Lemma 5

Let F be the distribution function of a non-negative random variable with a positive and finite mean. Then for all \(p >1\) we have

$$\begin{aligned} \int _{0}^{\infty } p x^{p-1} [1-F(x)]^p dx \le \left[ \int _0^\infty [1-F(x)]\,dx \right] ^p. \end{aligned}$$
(12)

The equality is attained if F is a two-point distribution function with one of the atoms located at 0.

By (2), the RHS of (12) represents the pth power of the mean of F. Observe also that for \(p=1\) relation (12) becomes a trivial equality with no assumptions on F.
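As a quick numerical illustration of Lemma 5 (with arbitrarily chosen baselines), the left-hand side of (12) stays strictly below \(\mu ^p=1\) for the standard exponential distribution, while a two-point distribution with an atom at 0 yields exact equality:

```python
import numpy as np
from scipy.integrate import quad

p = 2.5
# standard exponential baseline with mu = 1: strict inequality in (12)
lhs = quad(lambda x: p * x ** (p - 1) * np.exp(-p * x), 0, np.inf)[0]
print(lhs)  # Gamma(p+1)/p^p, approximately 0.336 < 1

# two-point baseline with an atom at 0: equality in (12)
alpha, c = 0.4, 2.0  # P(X=0) = alpha, P(X=c) = 1 - alpha, mean c(1-alpha)
lhs2 = quad(lambda x: p * x ** (p - 1) * (1 - alpha) ** p, 0, c)[0]
print(lhs2, (c * (1 - alpha)) ** p)  # both sides equal c^p (1-alpha)^p
```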

For deriving the bounds on \(\frac{\mathbb EX_{*,r}^p}{\mu ^p}\) for \(p\ge 1\) we use an auxiliary function defined as

$$\begin{aligned} H_{{\varvec{\gamma }},p}(x) = \frac{1-G_{*,{\varvec{\gamma }}}(x)}{(1-x)^p}, \quad 0\le x<1, \end{aligned}$$

for fixed \(p>0\) and \({\varvec{\gamma }}\in \mathbb {R}^r_+\) such that \(\gamma _{1}\ge \cdots \ge \gamma _r>0\). Note that the function \(H_{{\varvec{\gamma }},p}\) with \(p=1\) was considered by Cramer et al. (2004). In order to establish the desired properties of \(H_{{\varvec{\gamma }},p}\) we consider the density functions \(g_{*,{\varvec{\gamma }}(j)}\), \(j=1,\ldots ,r\), with the parameter vectors \({\varvec{\gamma }}(j) = (\gamma _1,\ldots ,\gamma _j)\). Obviously, \(g_{*,{\varvec{\gamma }}(r)}=g_{*,{\varvec{\gamma }}}\) given by (4). The basic theoretical tool in the analysis of the functions (3) and (4) is the so-called variation diminishing property (VDP) of densities of uniform generalized order statistics. The following lemma is a compilation of Theorems 1 and 2 and Corollary 1 of Bieniek (2007).

Lemma 6

Fix \(r\ge 1\) and suppose that \(\gamma _1\ge \dotsc \ge \gamma _r>0\).

  1. (a)

    The number of zeros in (0, 1) of any linear combination \(\sum _{j=1}^r a_j g_{*,{\varvec{\gamma }}(j)}\) of density functions \(g_{*,{\varvec{\gamma }}(1)},\dotsc ,g_{*,{\varvec{\gamma }}(r)}\) does not exceed the number of the sign changes in the sequence of the coefficients \(a_1,\dotsc ,a_r\) after deletion of zeros.

  2. (b)

    The first and the last sign of the combination are the same as the signs of the first and the last nonzero coefficient, respectively.

Now we study the shapes of functions \(H_{{\varvec{\gamma }},p}\). Obviously \(H_{{\varvec{\gamma }},p}(0)=1\). In the next lemma we determine the limit of \(H_{{\varvec{\gamma }},p}\) at \(x=1\), denoted as usual by \(H_{{\varvec{\gamma }},p}(1^-)\), as well as monotonicity properties of \(H_{{\varvec{\gamma }},p}\).

Lemma 7

Assume that \(p>0\) and \(\gamma _{1}\ge \cdots \ge \gamma _r>0\).

  1. (a)

    If \(\gamma _r\le p\), then the function \(H_{{\varvec{\gamma }},p}\) is strictly increasing on (0, 1). Moreover, if \(\gamma _{r-1}>\gamma _r=p\), then the limit \(H_{{\varvec{\gamma }},p}(1^-)\) is finite and equal to

    $$\begin{aligned} H_{{\varvec{\gamma }},p}(1^-)= a_{\ell ,0}= \prod _{k=1}^{\ell -1} \left( \frac{\delta _k}{\delta _k-\delta _\ell } \right) ^{d_k} \end{aligned}$$

    (cf. (5)), otherwise the limit is infinite.

  2. (b)

    If \(\gamma _r>p\), then the equation

    $$\begin{aligned} p[1-G_{*,{\varvec{\gamma }}}(\alpha )]= (1-\alpha )g_{*,{\varvec{\gamma }}}(\alpha ) \end{aligned}$$
    (13)

    has the unique solution \(\alpha _*=\alpha _*({\varvec{\gamma }},p)\in (0,1)\) and \(H_{{\varvec{\gamma }},p}\) is increasing on \((0,\alpha _*)\) and decreasing on \((\alpha _*,1)\) with \(H_{{\varvec{\gamma }},p}(1^-)=0\).

Proof

First we determine the limit \(H_{{\varvec{\gamma }},p}(1^-)=\lim _{x\rightarrow 1^-}H_{{\varvec{\gamma }},p}(x)\). By (3) we easily obtain

$$\begin{aligned} H_{{\varvec{\gamma }},p}(x)= \sum _{\nu =1}^{\ell } \sum _{j=0}^{d_\nu -1} a_{\nu , j} (1-x)^{\delta _\nu -p} \sum _{m=0}^{d_\nu -j-1} \frac{[-\delta _\nu \ln (1-x)]^m}{m!}. \end{aligned}$$

Hence \(H_{{\varvec{\gamma }},p}\) is a linear combination of functions of the form \((1-x)^a(-\ln (1-x))^b\) with \(a\in \mathbb R\) and \(b\in \mathbb {N}\cup \{0\}\). Since

$$\begin{aligned} \lim _{x\rightarrow 0^+}x^a(-\ln x)^b= {\left\{ \begin{array}{ll} 1, &{} \text {if}\;a=b=0,\\ 0, &{} \text {if}\; a>0 \;\text {and} \;b\ge 0,\\ +\infty , &{} \text {if either}\; a<0\; \text {or}\; a=0, b>0, \end{array}\right. } \end{aligned}$$

then the limit \(H_{{\varvec{\gamma }},p}(1^-)\) can be positive and finite only if one of the components has a finite non-zero limit and the remaining ones tend to 0 as \(x\rightarrow 1^-\). Since \(\delta _1>\dotsc >\delta _\ell \), this happens iff \(\nu =\ell \) and \(d_\ell =1\). Summing up, if \(\gamma _r>p\), then the exponents \(a=\delta _\nu -p\) in all of the components are positive, and the limit \(H_{{\varvec{\gamma }},p}(1^-)\) is equal to 0. If \(\gamma _r=p\) and \(\gamma _{r-1}>\gamma _r\), then \(d_\ell =1\), so that the component with \(\nu =\ell \) has the finite limit \(a_{\ell ,0}\) and the remaining ones tend to 0. Then \(H_{{\varvec{\gamma }},p}(1^-)=a_{\ell ,0}\). If \(\gamma _r=\gamma _{r-1}=p\), then \(d_\ell \ge 2\), so that the component with \(\nu =\ell \) has an infinite limit as \(x\rightarrow 1^-\). Thus \(H_{{\varvec{\gamma }},p}(1^-)\) is infinite as well. The same holds if \(\gamma _r<p\), regardless of the value of \(d_\ell \).

Now we determine monotonicity properties of \(H_{{\varvec{\gamma }},p}\). Elementary computations show that

$$\begin{aligned} H_{{\varvec{\gamma }},p}'(x)=\frac{1}{(1-x)^p} \left[ p\,\frac{1-G_{*,{\varvec{\gamma }}}(x)}{1-x}-g_{*,{\varvec{\gamma }}}(x) \right] . \end{aligned}$$
(14)

Since \(G_{*,{\varvec{\gamma }}(1)}(x)=1-(1-x)^{\gamma _1}\) and \(g_{*,{\varvec{\gamma }}(1)}(x)=\gamma _1(1-x)^{\gamma _1-1}\), the lemma also holds trivially for \(r=1\). By Lemma 5 of Bieniek (2008b) for \(r\ge 2\) and \(0\le x\le 1\), we have

$$\begin{aligned} 1-G_{*,{\varvec{\gamma }}}(x)=(1-x)\sum _{j=1}^{r}\frac{1}{\gamma _j}g_{*,{\varvec{\gamma }}(j)}(x). \end{aligned}$$
(15)

In fact, the formula holds true regardless of the ordering of \(\gamma _1,\dotsc ,\gamma _r\). By substitution of (15) into (14) we obtain

$$\begin{aligned} H_{{\varvec{\gamma }},p}'(x)=\frac{1}{(1-x)^p} \left[ \sum _{j=1}^{r-1}\frac{1}{\gamma _j}\,g_{*,{\varvec{\gamma }}(j)}(x) -\left( 1-\frac{p}{\gamma _r} \right) g_{*,{\varvec{\gamma }}(r)}(x)\right] . \end{aligned}$$

To determine the sign changes of the derivative \(H_{{\varvec{\gamma }},p}'\) we apply the VDP formulated in Lemma 6. If \(\gamma _r\le p\), then \(1-\frac{p}{\gamma _r}\le 0\), so \(H_{{\varvec{\gamma }},p}'\) is positive on (0, 1). Therefore \(H_{{\varvec{\gamma }},p}\) itself is strictly increasing. If \(\gamma _r>p\), then by the VDP the derivative \(H_{{\varvec{\gamma }},p}'\) is first positive, then negative in (0, 1), so it has a unique zero \(\alpha _*\), which is the solution to (13). Therefore, \(H_{{\varvec{\gamma }},p}\) is increasing on \((0,\alpha _*)\) and decreasing on \((\alpha _*,1)\). \(\square \)
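Equation (13) is also easy to solve numerically. As an illustration (our own, based on the explicit two-parameter distribution function (38) derived in Sect. 4), the sketch below finds \(\alpha _*((\gamma ,\gamma ),p)\) by bisection on the sign of the bracket in (14), and cross-checks it against the closed form \(\alpha _*=1-\exp \left( -\frac{p}{\gamma (\gamma -p)}\right) \), which follows from (13) by an elementary computation.

```python
import math

def alpha_star(g, p, tol=1e-12):
    """Bisection for Eq. (13) with parameters (g, g), assuming g > p."""
    G = lambda a: 1.0 - (1.0 - a) ** g * (1.0 - g * math.log(1.0 - a))  # Eq. (38)
    dens = lambda a: g * g * (1.0 - a) ** (g - 1.0) * (-math.log(1.0 - a))
    phi = lambda a: p * (1.0 - G(a)) - (1.0 - a) * dens(a)  # sign of H' by (14)
    lo, hi = 1e-12, 1.0 - 1e-12
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if phi(mid) > 0 else (lo, mid)
    return 0.5 * (lo + hi)

g, p = 3.0, 1.5
print(alpha_star(g, p))                    # bisection
print(1.0 - math.exp(-p / (g * (g - p))))  # closed form, same value
```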

The last lemma can be formulated more generally. Namely, we can drop the assumption \(\gamma _1 \ge \cdots \ge \gamma _r\), replacing \(\gamma _r\) and \(\gamma _{r-1}\) by the minimal and second minimal element (possibly equal to the minimal one), respectively, of the vector \((\gamma _1,\ldots , \gamma _r)\).

3 Bounds on moments of GOS

In this section, for fixed \(r\ge 1\), \({\varvec{\gamma }}\in \mathbb {R}_+^r\), and \(p>0\), we determine sharp upper bounds

$$\begin{aligned} B({\varvec{\gamma }},p)=\sup _{F\in {\mathcal {F}}}\frac{\mathbb E X_{*,r}^p}{\mu ^p} \end{aligned}$$

on the pth moments \(\mathbb EX_{*,r}^p\) of rth generalized order statistics with parameters \({\varvec{\gamma }}\) over the family \(\mathcal {F}\) of all baseline life distribution functions (i.e., ones satisfying \(F(0^-)=0\)) with finite and positive means \(\mu \), expressed in units of the pth power \(\mu ^p\) of the baseline mean. We show that \(B({\varvec{\gamma }},p)\) is finite for \(p< \delta _\ell \) if either \(\delta _\ell <1\) or \(\delta _\ell \ge 1\) and \(d_\ell \ge 2\), and for \(p \le \delta _\ell \) if \(\delta _\ell \ge 1\) and \(d_\ell = 1\). We recall the assumptions \(\gamma _1\ge \cdots \ge \gamma _r>0\) with \(\delta _\ell =\gamma _r\), made for simplicity of reasoning. For clarity of presentation we describe the bounds for the first and other generalized order statistics separately.

Theorem 1

Fix \(r=1\).

  1. (a)

    If \(0< p< \gamma _1 <1\), then

    $$\begin{aligned} \frac{\mathbb {E}X_{*,1}^p}{\mu ^p} \le \gamma _1 \left( \frac{1-p}{\gamma _1-p} \right) ^{1-p}, \end{aligned}$$

    and the equality is attained for the Pareto baseline distribution function

    $$\begin{aligned} F(x) = 1- \left( \frac{\gamma _1-p}{1-p} \, \frac{\mu }{x} \right) ^{\frac{1-p}{1-\gamma _1}}, \quad x \ge \mu \, \frac{\gamma _1-p}{1-p}. \end{aligned}$$
    (16)
  2. (b)

    In the case \(0< p \le \gamma _1\) with \(\gamma _1 \ge 1\) we have

    $$\begin{aligned} \frac{\mathbb {E}X_{*,1}^p}{\mu ^p} \le 1, \end{aligned}$$
    (17)

    with the equality attained by the degenerate distribution concentrated at \(\mu \). If \(0<p< \gamma _1\), then this is the only possibility of getting the equality in (17). If \(p=\gamma _1= 1\), the equality holds for arbitrary baseline distribution. If \(1 < p=\gamma _1\), then the equality is also attained by the two-point distributions

    $$\begin{aligned} \mathbb {P}(X=0) = \alpha = 1- \mathbb {P} \left( X= \frac{\mu }{1-\alpha } \right) \end{aligned}$$
    (18)

    for arbitrary \(0< \alpha <1\).

Proof

We start with noting that the distribution and density functions of the first uniform GOS have simple forms

$$\begin{aligned} G_{*,\gamma _1}(x)= & {} 1- (1-x)^{\gamma _1}, \nonumber \\ g_{*,\gamma _1}(x)= & {} \gamma _1(1-x)^{\gamma _1-1}, \end{aligned}$$
(19)

respectively. We first analyze the case \(0< p< \gamma _1 <1\), for which the density in (19) is increasing. We use the Hölder inequality of Lemma 3 with the exponent \(\frac{1}{p} >1\) and its conjugate \(\frac{1}{1-p} >1\), and get

$$\begin{aligned} \frac{\mathbb {E}X_{*,1}^p}{\mu ^p}= & {} \int _0^1 \frac{[F^{-1}(x)]^p}{\mu ^p} \gamma _1(1-x)^{\gamma _1-1}dx \nonumber \\\le & {} \left( \int _0^1 \left[ \frac{[F^{-1}(x)]^p}{\mu ^p} \right] ^{\frac{1}{p}} dx \right) ^p \gamma _1\left( \int _0^1 (1-x)^{\frac{\gamma _1-1}{1-p}} dx \right) ^{1-p} \nonumber \\= & {} \gamma _1 \left( \frac{1-p}{\gamma _1-p} \right) ^{1-p}. \end{aligned}$$

The equality in the inequality as well as the expectation condition are satisfied if

$$\begin{aligned} F^{-1}(x) = \mu \left[ \frac{(1-x)^{\gamma _1-1}}{\left( \frac{1-p}{\gamma _1-p}\right) ^{1-p}} \right] ^{\frac{1}{1-p}} = \mu \, \frac{\gamma _1-p}{1-p} (1-x)^{\frac{\gamma _1-1}{1-p}}. \end{aligned}$$
(20)

The explicit formula for the distribution function described by (20) is given in (16).

Now we proceed to the case \(0<p< 1 < \gamma _1\). Since the density in (19) is decreasing, we combine the Moriguti inequality of Lemma 2 with the Hölder inequality. Then \(\bar{g}_{*,\gamma _1}(x) = 1\), which coincides with (19) for \(\gamma _1=1\) (see Corollary 1(a)). Therefore this case can also be included in our analysis. We have

$$\begin{aligned} \frac{\mathbb {E}X_{*,1}^p}{\mu ^p}= & {} \int _0^1 \frac{[F^{-1}(x)]^p}{\mu ^p} \gamma _1(1-x)^{\gamma _1-1} dx \le \int _0^1 \frac{[F^{-1}(x)]^p}{\mu ^p} dx \nonumber \\\le & {} \left( \int _0^1 \left[ \frac{[F^{-1}(x)]^p}{\mu ^p} \right] ^{\frac{1}{p}} dx \right) ^p \left( \int _0^1 1^{\frac{1}{1-p}} dx \right) ^{1-p} = 1. \end{aligned}$$
(21)

The common condition for attaining the equalities in both the inequalities is that the function \(\frac{[F^{-1}(x)]^p}{\mu ^p}\) is constant. The moment condition is satisfied if the constant amounts to 1, which defines the distribution concentrated at \(\mu \). Notice that if \(\mu \ge 0\), then any generalized order statistic with arbitrary parameters and degenerate baseline distribution satisfies \(\mathbb {E}X_{*,r}^p = \mu ^p\) for arbitrary real p.

Now we assume that \(1 \le p \le \gamma _1\). If \(p=\gamma _1=1\), then \(g_{*,\gamma _1}\) is constant equal to 1 and therefore \(\mathbb {E}X_{*,1}=\int _0^1 F^{-1}(x)dx=\mu \) for each parent distribution F. In the remaining cases, integrating by parts and using Lemma 5 we conclude

$$\begin{aligned} \mathbb {E}X_{*,1}^p= & {} \int _0^\infty x^p dF_{*,\gamma _1}(x) = \int _0^\infty p x^{p-1}[1-F_{*,\gamma _1}(x)] dx \nonumber \\= & {} \int _0^\infty p x^{p-1}[1- F(x)]^{\gamma _1} dx \nonumber \\\le & {} \sup _{0<x<\infty }[1- F(x)]^{\gamma _1-p} \int _0^\infty p x^{p-1}[1- F(x)]^{p} dx \nonumber \\\le & {} \sup _{0<\alpha <1}(1- \alpha )^{\gamma _1-p}\left( \int _0^\infty [1-F(x)]dx \right) ^p = \mu ^p. \end{aligned}$$
(22)

Application of the integration by parts formula is justified by the fact that the integrals are bounded by \(\mu ^p\), and consequently finite. By Lemma 5 the equality in the latter inequality holds if F has only one value different from 1 on \(\mathbb {R}_+\). For the equality in the first inequality we need this value of \(1-F\) to maximize \((1-\alpha )^{\gamma _1-p}\). If \(p=\gamma _1\), one can take arbitrary \(0< \alpha <1\); this \(\alpha \) is the probability of the atom at 0. The point where F jumps from the level \(\alpha \) to 1 has to be chosen so that the expectation is equal to \(\mu \). This is necessarily \(\frac{\mu }{1-\alpha }\).

If \(p < \gamma _1\), then \((1- \alpha )^{\gamma _1-p}\) is maximized at \(\alpha =0\). This implies that the degenerate distribution supported at \(\mu \) provides the equality in (17).

However, for \(0< p < \gamma _1\) with \(\gamma _1 \ge 1\), it is possible to attain the bound (17) in the limit by nontrivial parent distributions. Note that under the baseline distribution (18) the first GOS has the distribution

$$\begin{aligned} \mathbb {P}\left( X_{*,1}= \frac{\mu }{1-\alpha } \right) = (1-\alpha )^{\gamma _1} = 1-\mathbb {P}( X_{*,1}=0), \end{aligned}$$

and the expectation \(\mathbb {E}X_{*,1}^p= \mu ^p (1-\alpha )^{\gamma _1-p}\). Letting \(\alpha \rightarrow 0\), we attain the bound. \(\square \)
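A Monte Carlo cross-check of Theorem 1(a) is immediate: simulating \(X_{*,1}\) via (1) with the extremal Pareto baseline (16), i.e., with the quantile function (20), the empirical ratio \(\frac{\mathbb {E}X_{*,1}^p}{\mu ^p}\) should reproduce the stated bound. The parameter values below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
gamma1, p, mu = 0.7, 0.3, 1.0  # requires 0 < p < gamma1 < 1
u = rng.uniform(size=10**7)
# X_{*,1} = F^{-1}(1 - U^{1/gamma1}) with the quantile function (20)
x = mu * (gamma1 - p) / (1 - p) * u ** ((gamma1 - 1) / (gamma1 * (1 - p)))
print(np.mean(x ** p) / mu ** p)                     # simulated ratio
print(gamma1 * ((1 - p) / (gamma1 - p)) ** (1 - p))  # bound of Theorem 1(a)
```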

For the remaining generalized order statistics we have the following bounds.

Theorem 2

Suppose that \(r \ge 2\).

  1. (a)

    If \(0< p < \delta _\ell \le 1\), then

    $$\begin{aligned} \frac{\mathbb {E}X_{*,r}^p }{\mu ^p} \le B({\varvec{\gamma }},p) = \left[ \int _0^1 g_{*,{\varvec{\gamma }}}^{\frac{1}{1-p}}(x) \,dx \right] ^{1-p}. \end{aligned}$$
    (23)

    The equality is attained for

    $$\begin{aligned} F(x) = g_{*,{\varvec{\gamma }}}^{-1} \left( B({\varvec{\gamma }},p) \left( \frac{x}{\mu } \right) ^{1-p} \right) , \quad x > 0. \end{aligned}$$
    (24)
  2. (b)

    If \(0< p<1 < \delta _\ell \), we have

    $$\begin{aligned} \frac{\mathbb {E}X_{*,r}^p }{\mu ^p} \le B({\varvec{\gamma }},p) = \left[ \int _0^{\alpha _*} g_{*,{\varvec{\gamma }}}^{\frac{1}{1-p}}(x) \,dx + (1-\alpha _*)g_{*,{\varvec{\gamma }}}^{\frac{1}{1-p}}(\alpha _*) \right] ^{1-p}, \end{aligned}$$
    (25)

    with \(\alpha _*= \alpha _*({\varvec{\gamma }}, 1)\) defined in (7) of Corollary 1(c). The equality is attained for the parent distribution function

    $$\begin{aligned} F(x) = g_{*,{\varvec{\gamma }}}^{-1} \left( B({\varvec{\gamma }},p) \left( \frac{x}{\mu } \right) ^{1-p} \right) , \quad 0< x < \mu \left[ \frac{g_{*,{\varvec{\gamma }}}(\alpha _*)}{B({\varvec{\gamma }},p)}\right] ^{\frac{1}{1-p}}, \end{aligned}$$
    (26)

    with \(g_{*,{\varvec{\gamma }}}^{-1}\) denoting the inverse of the increasing part of \(g_{*,{\varvec{\gamma }}}\).

  3. (c)

    For \(1 \le p < \delta _\ell \) we have

    $$\begin{aligned} \frac{\mathbb {E}X_{*,r}^p }{\mu ^p} \le \frac{1-G_{*,{\varvec{\gamma }}}(\alpha _*)}{(1-\alpha _*)^p} = \frac{g_{*,{\varvec{\gamma }}}(\alpha _*)}{p(1-\alpha _*)^{p-1}}, \end{aligned}$$

    where \(\alpha _*= \alpha _*({\varvec{\gamma }}, p)\) is defined in (13) of Lemma 7(b). The equality is attained for the two-point baseline distribution

    $$\begin{aligned} \mathbb {P}(X=0) = \alpha _* = 1- \mathbb {P}\left( X = \frac{\mu }{1-\alpha _*}\right) . \end{aligned}$$
    (27)
  4. (d)

    In the case \(d_\ell = 1 \le p = \delta _\ell \) we obtain

    $$\begin{aligned} \frac{\mathbb {E}X_{*,r}^p }{\mu ^p} \le \prod _{k=1}^{\ell -1} \left( \frac{\delta _k}{\delta _k-\delta _\ell } \right) ^{d_k}, \end{aligned}$$

    with the equality attained in the limit by

    $$\begin{aligned} \mathbb {P}(X=0) = \alpha = 1- \mathbb {P}\left( X = \frac{\mu }{1-\alpha }\right) . \end{aligned}$$
    (28)

    as \(\alpha \) tends to 1.

Observe that under the assumptions of (a), the bound in (23) is finite due to Lemma 4. So is the bound in (25) under the assumptions of (b), because \(g_{*,{\varvec{\gamma }}}\) is then bounded.

Proof

(a) In the case \(0<p < \delta _\ell \le 1\), the density function \(g_{*,{\varvec{\gamma }}}\) is nondecreasing. We apply the Hölder inequality

$$\begin{aligned} \frac{\mathbb {E}X_{*,r}^p}{\mu ^p}= & {} \int _0^1 \frac{[F^{-1}(x)]^p}{\mu ^p} g_{*,{\varvec{\gamma }}}(x)\,dx \nonumber \\\le & {} \left( \int _0^1 \left[ \frac{[F^{-1}(x)]^p}{\mu ^p} \right] ^{\frac{1}{p}} dx \right) ^p \left( \int _0^1 g_{*,{\varvec{\gamma }}}^{\frac{1}{1-p}}(x)\, dx \right) ^{1-p} \nonumber \\= & {} \left( \int _0^1 g_{*,{\varvec{\gamma }}}^{\frac{1}{1-p}}(x)\, dx \right) ^{1-p} = B({\varvec{\gamma }},p). \end{aligned}$$

Both the equality and expectation conditions hold if

$$\begin{aligned} \frac{F^{-1}(x)}{\mu } = \left[ \frac{g_{*,{\varvec{\gamma }}}(x)}{B({\varvec{\gamma }},p)} \right] ^{\frac{1}{1-p}}, \end{aligned}$$

which represents (24).

(b) For \(\delta _\ell >1\), the function \(g_{*,{\varvec{\gamma }}}\) is first increasing, and then decreasing. Before we apply the Hölder inequality, we make the Moriguti transformation (7) of Corollary 1(c). Then by Lemma 3 we obtain

$$\begin{aligned} \frac{\mathbb {E}X_{*,r}^p}{\mu ^p}= & {} \int _0^1 \frac{[F^{-1}(x)]^p}{\mu ^p} g_{*,{\varvec{\gamma }}}(x)\,dx \le \int _0^1 \frac{[F^{-1}(x)]^p}{\mu ^p} g_{*,{\varvec{\gamma }}}(\min \{ x, \alpha _*\})\,dx \nonumber \\\le & {} \left( \int _0^1 \left[ \frac{[F^{-1}(x)]^p}{\mu ^p} \right] ^{\frac{1}{p}} dx \right) ^p \left( \int _0^1 g_{*,{\varvec{\gamma }}}^{\frac{1}{1-p}}(\min \{ x, \alpha _*\})\, dx \right) ^{1-p} \nonumber \\= & {} \left( \int _0^{\alpha _*} g_{*,{\varvec{\gamma }}}^{\frac{1}{1-p}}(x)\, dx + (1-\alpha _*) g_{*,{\varvec{\gamma }}}^{\frac{1}{1-p}}(\alpha _*) \right) ^{1-p} = B({\varvec{\gamma }},p). \end{aligned}$$
(29)

The condition

$$\begin{aligned} \frac{F^{-1}(x)}{\mu } = \left[ \frac{g_{*,{\varvec{\gamma }}}(\min \{ x, \alpha _*\})}{B({\varvec{\gamma }},p)} \right] ^{\frac{1}{1-p}}, \end{aligned}$$

describes (26), and ensures that both inequalities in (29) become equalities and that the first moment restriction is satisfied.

(c) Under the assumption \(1 \le p < \delta _\ell \) we have

$$\begin{aligned} \mathbb {E}X_{*,r}^p= & {} \int _0^\infty p x^{p-1}[1-F_{*,{\varvec{\gamma }}}(x)] dx \nonumber \\= & {} \int _{\{x>0: F(x)<1\}} p x^{p-1}\frac{1-G_{*,{\varvec{\gamma }}}(F(x))}{[1-F(x)]^p}[1-F(x)]^p\, dx \nonumber \\\le & {} \sup _{\{x>0: F(x)<1\}}H_{{\varvec{\gamma }},p}(F(x)) \int _0^\infty p x^{p-1}[1- F(x)]^{p} dx \nonumber \\\le & {} \sup _{0\le \alpha <1}H_{{\varvec{\gamma }},p}(\alpha )\left( \int _0^\infty [1-F(x)]dx \right) ^p = H_{{\varvec{\gamma }},p}(\alpha _*)\mu ^p \end{aligned}$$
(30)

(cf. (22)). The former and the latter inequalities follow from Lemmas 7(b) and 5, respectively. The conditions guaranteeing the equalities in both the inequalities are that F has an atom at 0 of height \(\alpha _*\), and another positive atom with weight \(1-\alpha _*\). The unique distribution function with the fixed mean \(\mu \) that satisfies these conditions is that of (27).

(d) For \(1 \le p = \delta _\ell \) with a single minimal parameter (\(d_\ell =1\)), we use arguments analogous to (30). The only difference is that here

$$\begin{aligned} \sup _{0\le \alpha <1}H_{{\varvec{\gamma }},p}(\alpha ) = \lim _{\alpha \rightarrow 1} H_{{\varvec{\gamma }},p}(\alpha )= \prod _{k=1}^{\ell -1} \left( \frac{\delta _k}{\delta _k-\delta _\ell } \right) ^{d_k} \end{aligned}$$

(see Lemma 7(a)). This suggests that we obtain the equality in the limit if we use (28) with \(\alpha \rightarrow 1\). Indeed, (28) implies that

$$\begin{aligned} \mathbb {P}(X_{*,r}=0) = G_{*,{\varvec{\gamma }}}(\alpha ) = 1- \mathbb {P}\left( X_{*,r}= \frac{\mu }{1-\alpha } \right) , \end{aligned}$$

and

$$\begin{aligned} \mathbb {E}X_{*,r}^p = [1- G_{*,{\varvec{\gamma }}}(\alpha )] \frac{\mu ^p}{(1-\alpha )^p} = H_{{\varvec{\gamma }},p}(\alpha ) \mu ^p. \end{aligned}$$

Taking \(\alpha \rightarrow 1\), we obtain

$$\begin{aligned} \lim _{\alpha \rightarrow 1} \frac{\mathbb {E}X_{*,r}^p}{\mu ^p} = \lim _{\alpha \rightarrow 1} H_{{\varvec{\gamma }},p}(\alpha ) = \prod _{k=1}^{\ell -1} \left( \frac{\delta _k}{\delta _k-\delta _\ell } \right) ^{d_k}. \end{aligned}$$

This ends the proof. \(\square \)
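The bounds of Theorem 2 are easy to evaluate and verify numerically. The sketch below (our illustration) treats the two-parameter case \({\varvec{\gamma }}=(\gamma ,\gamma )\) with \(0<p<\gamma \le 1\), for which differentiation of the distribution function (38) of Sect. 4 yields the explicit density \(g_{*,(\gamma ,\gamma )}(x)=\gamma ^2(1-x)^{\gamma -1}[-\ln (1-x)]\). It computes \(B({\varvec{\gamma }},p)\) of (23) by quadrature and compares it with a Monte Carlo estimate of \(\frac{\mathbb {E}X_{*,2}^p}{\mu ^p}\) under the extremal distribution (24).

```python
import numpy as np
from scipy.integrate import quad

g, p, mu = 0.8, 0.4, 1.0  # case (a) of Theorem 2: 0 < p < g <= 1
dens = lambda x: g * g * (1.0 - x) ** (g - 1.0) * (-np.log(1.0 - x))

# B(gamma, p) of (23) by quadrature; the singularity at 1 is integrable
# because p < g (Lemma 4)
B = quad(lambda x: dens(x) ** (1.0 / (1.0 - p)), 0.0, 1.0, limit=200)[0] ** (1.0 - p)

# Monte Carlo estimate under the extremal quantile function of (24),
# F^{-1}(x) = mu (dens(x)/B)^{1/(1-p)}; v is the uniform X_{*,2} from (1)
rng = np.random.default_rng(1)
u = rng.uniform(size=(10**6, 2))
v = 1.0 - (u[:, 0] * u[:, 1]) ** (1.0 / g)
x = mu * (dens(v) / B) ** (1.0 / (1.0 - p))
print(B, np.mean(x ** p) / mu ** p)  # the two values should nearly coincide
```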

4 Conditions for finiteness of moments of GOS

Under the assumptions of Theorems 1 and 2 on \({\varvec{\gamma }}\) and p we have

$$\begin{aligned} \mathbb {E}X_{*,r}^p \le B({\varvec{\gamma }},p) \mu ^p < \infty \end{aligned}$$

for all baseline life distribution functions F with finite mean \(\mu \). These conditions are therefore sufficient for finiteness of moments of generalized order statistics. We summarize these results in the following corollary.

Corollary 2

The sufficient conditions for finiteness of the pth moment of the rth generalized order statistic, \(r \ge 1\), with a given parameter vector \({\varvec{\gamma }}= (\gamma _1,\ldots ,\gamma _r) \in \mathbb {R}^r_+\) and an arbitrary life baseline distribution function with a finite positive mean are

  1. (a)

    p is less than the smallest element \(\delta _\ell \) of the parameter vector \({\varvec{\gamma }}\) if either \(\delta _\ell <1\) or \(\delta _\ell \ge 1\) and \(d_\ell \ge 2\), i.e., the minimal parameter is multiple,

  2. (b)

    p is not greater than the single smallest parameter \(\delta _\ell \ge 1\).

The main result of this section is contained in Theorem 3.

Theorem 3

The conditions (a) and (b) of Corollary 2 are also necessary for assuring finiteness of the pth moment of GOS with a fixed parameter vector \({\varvec{\gamma }}\) and arbitrary life baseline distribution with a finite expectation.

Proof

In order to prove the above statements, it suffices to construct appropriately chosen baseline distribution functions F such that

  1. (a)

    if \(\gamma <1\), then

    $$\begin{aligned} \mathbb {E}X_{*,1}^{\gamma } = +\infty , \end{aligned}$$
    (31)
  2. (b)

    if \(\gamma \ge 1\), then

    $$\begin{aligned} \mathbb {E}X_{*,1}^{p} = +\infty \quad \text {for any } p > \gamma , \end{aligned}$$
    (32)
  3. (c)

    if \(\gamma \ge 1\), then

    $$\begin{aligned} \mathbb {E}X_{*,2}^{\gamma } = +\infty , \end{aligned}$$
    (33)

where \(X_{*,1}\) denotes the first GOS with the parameter \(\gamma _1=\gamma \) and \(X_{*,2}\) stands for the second GOS with two identical parameters \((\gamma _1,\gamma _2) =(\gamma ,\gamma )\). Obviously, the baseline distributions F assuring (31), (32), and (33) are possibly different in particular cases.

Indeed, we simply have \(X_{*,1}\le X_{*,r}\), and the relation holds in particular if \(X_{*,r}\) has parameters \((\gamma _1, \ldots , \gamma _r)\) such that \(\delta _\ell = \gamma _1= \gamma \). Moreover, for any random variable X, if the pth moment \(\mathbb E |X|^p\) of X is infinite, then \(\mathbb E |X|^q=+\infty \) for all \(q>p>0\). Therefore, if we prove (31) for some distribution F, then \(\mathbb {E}X_{*,1}^{p} =+\infty \) for \(p \ge \delta _\ell =\gamma \), and in consequence

$$\begin{aligned} \mathbb {E}X_{*,r}^{p} \ge \mathbb {E}X_{*,1}^{p}=+\infty ,\quad \text {for all } p \ge \gamma = \delta _\ell . \end{aligned}$$

The analogous conclusion with \(p > \gamma = \delta _\ell \) is obtained when (32) holds for some F. Similarly, if (33) holds for some F, then \(\mathbb {E}X_{*,2}^p=+\infty \) for any \(p\ge \gamma =\delta _\ell \). Therefore for \(r\ge 2\) we have \(\mathbb {E}X_{*,r}^p \ge \mathbb {E}X_{*,2}^{p}= \infty \) for all \(p \ge \gamma = \delta _\ell \).

We prove that all (31)–(33) are satisfied by some elements of a parametric family \(\mathbb {P}_\theta \), \(\theta >1\), of baseline distributions. All the members of the family have the same discrete support \(\{ e_j, \; j=0,1,\ldots \}\), where the elements of the sequence are defined recursively

$$\begin{aligned} e_0 = 1, \qquad e_j = e^{e_{j-1}}, \quad j=1,2,\ldots . \end{aligned}$$

For every \(\theta > 1\) and \(j=1,2, \ldots \) we set

$$\begin{aligned} \mathbb {P}_\theta (X=e_0) = \alpha _\theta (0)= & {} 1- \frac{1}{e-1}, \nonumber \\ \mathbb {P}_\theta (X=e_j)= \alpha _\theta (j)= & {} \frac{1}{j^\theta (e_j-e_{j-1})} - \frac{1}{(j+1)^\theta (e_{j+1}-e_{j})}. \end{aligned}$$
(34)

The formulae define proper probability measures, because the RHS of (34) is positive and

$$\begin{aligned} \sum _{j=0}^{\infty } \alpha _\theta (j)= & {} 1- \frac{1}{e-1} + \sum _{j=1}^{\infty } \left[ \frac{1}{j^\theta (e_j-e_{j-1})} - \frac{1}{(j+1)^\theta (e_{j+1}-e_{j})} \right] \nonumber \\= & {} \lim _{j \rightarrow \infty }\left[ 1- \frac{1}{(j+1)^\theta (e_{j+1}-e_{j})}\right] = 1. \end{aligned}$$

The distributions have finite expectations

$$\begin{aligned} \mathbb {E}_\theta X= & {} \sum _{j=0}^{\infty } \alpha _\theta (j) e_j = 1- \frac{1}{e-1} + \sum _{j=1}^{\infty } \left[ \frac{e_j}{j^\theta (e_j-e_{j-1})} - \frac{e_j}{(j+1)^\theta (e_{j+1}-e_{j})}\right] \\= & {} 1 + \sum _{j=1}^{\infty } \frac{1}{j^\theta } = 1 + \zeta (\theta ) < \infty , \end{aligned}$$

where \(\zeta \) denotes the Riemann zeta function. We also observe that

$$\begin{aligned} F_\theta (e_j) = \sum _{i=0}^{j} \alpha _\theta (i) = 1- \frac{1}{(j+1)^\theta (e_{j+1}-e_j)}, \quad j=0,1,\ldots . \end{aligned}$$
(35)

We now assume that \(\gamma < 1\) and prove that (31) holds for the baseline distribution (34) with \(1< \theta < \frac{1}{\gamma }\). Since \(G_{*,\gamma }(x)= 1-(1-x)^{\gamma }\), the distribution function of the first GOS with parameter \(\gamma \) based on \(F_\theta \) takes the values

$$\begin{aligned} G_{*,\gamma } \circ F_\theta (e_j) = 1- \frac{1}{(j+1)^{\theta \gamma } (e_{j+1}-e_j)^{\gamma }}, \quad j=0,1,\ldots . \end{aligned}$$

Accordingly,

$$\begin{aligned} \mathbb {P}_\theta ( X_{*,1} = e_0)= & {} G_{*,\gamma } \circ F_\theta (e_0) = 1- \frac{1}{(e-1)^{\gamma }}, \nonumber \\ \mathbb {P}_\theta ( X_{*,1} = e_j)= & {} G_{*,\gamma } \circ F_\theta (e_j) - G_{*,\gamma } \circ F_\theta (e_{j-1}) \nonumber \\= & {} \frac{1}{j^{\theta \gamma } (e_{j}-e_{j-1})^{\gamma }} - \frac{1}{(j+1)^{\theta \gamma } (e_{j+1}-e_j)^{\gamma }}, \end{aligned}$$
(36)

for \(j=1,2,\ldots \), and

$$\begin{aligned} \mathbb {E}_\theta X_{*,1}^{\gamma }= & {} 1- \frac{1}{(e-1)^{\gamma }} + \sum _{j=1}^{\infty } \left[ \frac{e_j^{\gamma }}{j^{\theta \gamma } (e_{j}-e_{j-1})^{\gamma }} - \frac{e_j^{\gamma }}{(j+1)^{\theta \gamma } (e_{j+1}-e_j)^{\gamma }}\right] \nonumber \\= & {} 1- \frac{1}{(e-1)^{\gamma }} + \sum _{j=1}^{\infty } [A_1(j)-B_1(j)] \\= & {} 1- \frac{1}{(e-1)^{\gamma }} + \sum _{j=1}^{\infty } C_1(j), \end{aligned}$$

say. Note that for every real q

$$\begin{aligned} \lim _{j \rightarrow +\infty } \frac{e_{j+1}}{e_j^q} = \lim _{x \rightarrow +\infty } \frac{e^x}{x^q} = +\infty , \end{aligned}$$
(37)

and so

$$\begin{aligned} \frac{B_1(j)}{A_1(j)} = \left( \frac{j}{j+1}\right) ^{\theta \gamma } \left( \frac{1- \frac{e_{j-1}}{e_j}}{\frac{e_{j+1}}{e_j}-1} \right) ^{\gamma } \longrightarrow 0, \end{aligned}$$

as \(j \rightarrow \infty \), because the first factor tends to 1 while, by (37), the fraction in the second factor tends to 0 and its exponent \(\gamma \) is positive. Therefore we can write

$$\begin{aligned} C_1(j) \ge (1-\varepsilon ) A_1(j) \end{aligned}$$

for some small positive \(\varepsilon \) and sufficiently large \(j \ge J\), say. Moreover, we have

$$\begin{aligned} A_1(j) = \frac{1}{j^{\theta \gamma }} \, \frac{1}{\left( 1- \frac{e_{j-1}}{e_j}\right) ^{\gamma }} \ge \frac{1}{j^{\theta \gamma }}. \end{aligned}$$

This means that \(C_1(j) \ge (1-\varepsilon ) \frac{1}{j^{\theta \gamma }}\), \(j \ge J\), and, in consequence, taking into account that \(0< \theta \gamma <1\) we obtain

$$\begin{aligned} \mathbb {E}_\theta X_{*,1}^{\gamma } \ge \sum _{j=J}^{\infty } C_1(j)\ge (1-\varepsilon ) \sum _{j=J}^{\infty } \frac{1}{j^{\theta \gamma }} = \infty , \end{aligned}$$

which proves the claim (a).

To prove (b) we check that (32) holds for (34) with arbitrary \(\theta >1\). Applying (36) we obtain

$$\begin{aligned} \mathbb {E}_\theta X_{*,1}^{p}= & {} 1- \frac{1}{(e-1)^{\gamma }} + \sum _{j=1}^{\infty } \left[ \frac{e_j^{p}}{j^{\theta \gamma } (e_{j}-e_{j-1})^{\gamma }} - \frac{e_j^{p}}{(j+1)^{\theta \gamma } (e_{j+1}-e_j)^{\gamma }}\right] \nonumber \\= & {} 1- \frac{1}{(e-1)^{\gamma }} + \sum _{j=1}^{\infty } [A_2(j)-B_2(j)]. \end{aligned}$$

It suffices to check that the summands do not tend to 0. In fact, we show that \(A_2(j) \rightarrow + \infty \) and \(B_2(j) \rightarrow 0\) as \(j \rightarrow \infty \). Indeed, since \(p > \gamma \ge 1\), by (37) we have

$$\begin{aligned} \lim _{j \rightarrow \infty } A_2(j)= & {} \lim _{j \rightarrow \infty } \frac{e_j^{p-\gamma }}{j^{\theta \gamma }\left( 1- \frac{e_{j-1}}{e_j}\right) ^{\gamma } } = \lim _{j \rightarrow \infty } \frac{e_j^{p-\gamma }}{j^{\theta \gamma } } = \infty , \\ \lim _{j \rightarrow \infty } B_2 (j)= & {} \lim _{j \rightarrow \infty } (j+1)^{-\theta \gamma } \left( \frac{e_{j+1}}{e_j^{\frac{p}{\gamma }}} -\frac{1}{e_j^{\frac{p}{\gamma }-1}} \right) ^{-\gamma } = 0. \end{aligned}$$

It remains to prove the claim (c), namely that (33) holds for (34) with any \(\theta >1\). Due to (3), the second generalized order statistic \(X_{*,2}\) with parameters \({\varvec{\gamma }}= (\gamma ,\gamma )\) and baseline distribution (34) has the distribution function \( G_{*,(\gamma ,\gamma )}\circ F_\theta \), where

$$\begin{aligned} G_{*,(\gamma ,\gamma )}(x) = 1 - (1-x)^{\gamma } [1- \gamma \ln (1-x)], \quad 0<x <1. \end{aligned}$$
(38)

Combining (35) and (38) we obtain

$$\begin{aligned} G_{*,(\gamma ,\gamma )}\circ F_\theta \left( e_j \right) = 1- \frac{1+\gamma \theta \ln (j+1) +\gamma \ln (e_{j+1}-e_{j})}{(j+1)^{\gamma \theta }(e_{j+1}-e_{j})^{\gamma }}, \end{aligned}$$

and, in consequence, for \(j=1,2,\ldots \) we have

$$\begin{aligned} \begin{aligned} \mathbb {P}(X_{*,2} =e_j)&= F_{*,(\gamma ,\gamma )} \left( e_{j} \right) - F_{*,(\gamma ,\gamma )} \left( e_{j-1} \right) \\&= \frac{1+\gamma \theta \ln j +\gamma \ln (e_j-e_{j-1})}{j^{\gamma \theta }(e_j-e_{j-1})^{\gamma }}\\ {}&\quad - \frac{1+\gamma \theta \ln (j+1) +\gamma \ln (e_{j+1}-e_{j})}{(j+1)^{\gamma \theta }(e_{j+1}-e_{j})^{\gamma }}. \end{aligned} \end{aligned}$$

We consider

$$\begin{aligned} \mathbb {E}X_{*,2}^{\gamma } = \sum _{j=0}^{\infty } e_j^\gamma \mathbb {P}(X_{*,2} =e_j) = 1- \frac{1}{(e-1)^{\gamma }} + \sum _{j=1}^{\infty } [A_3(j)-B_3(j)], \end{aligned}$$
(39)

where

$$\begin{aligned} A_3(j)= & {} \frac{ e_j^{\gamma } \,[1+\gamma \theta \ln j +\gamma \ln (e_j-e_{j-1})]}{j^{\gamma \theta }(e_j-e_{j-1})^{\gamma }}, \\ B_3(j)= & {} \frac{ e_j^{\gamma }\, [1+\gamma \theta \ln (j+1) +\gamma \ln (e_{j+1}-e_{j})]}{(j+1)^{\gamma \theta }(e_{j+1}-e_{j})^{\gamma }}, \quad j=1,2,\ldots . \nonumber \end{aligned}$$
(40)

We prove that \(A_3(j)\) and \(B_3(j)\) tend to \(\infty \) and 0, respectively, as j increases to \(\infty \), which verifies that (39) amounts to \(+\infty \). Note that

$$\begin{aligned} \ln ( e_j- e_{j-1})= & {} \ln \left( e^{e_{j-1}} - e^{e_{j-2}} \right) = \ln \left( e^{e_{j-1}-e_{j-2}}-1\right) + \ln e^{e_{j-2}} \nonumber \\\ge & {} e_{j-1}(1 - \varepsilon ) \end{aligned}$$

for some \(0< \varepsilon <1\) and sufficiently large j. Therefore

$$\begin{aligned} 1+ \gamma \theta \ln j + \gamma \ln (e_j-e_{j-1}) \ge \gamma e_{j-1} (1- \varepsilon ) \end{aligned}$$
(41)

for small positive \(\varepsilon \) and large j. Combining (40) with (37) and (41) we obtain

$$\begin{aligned} \lim _{j \rightarrow \infty } A_3(j) = \lim _{j \rightarrow \infty } \frac{ 1+\gamma \theta \ln j +\gamma \ln (e_j-e_{j-1})}{j^{\gamma \theta } \left( 1- \frac{e_{j-1}}{e_j} \right) ^{\gamma }} = \lim _{j \rightarrow \infty } \frac{\gamma e_{j-1}}{j^{\gamma \theta }} = \infty . \end{aligned}$$

In a similar way we prove that

$$\begin{aligned} \lim _{j \rightarrow \infty } B_3(j) = \lim _{j \rightarrow \infty } \frac{e_j^{\gamma +1}}{(e_{j+1}-e_j)^{\gamma }} \; \frac{\gamma }{(j+1)^{\gamma \theta }}=0, \end{aligned}$$

because both the factors of the second limit tend to 0. This ends the proof of Theorem 3. \(\square \)

The distribution functions (34) are members of the family \(\mathcal {F}\) with relatively heavy tails. One can check that (32) can be satisfied by much simpler baseline distribution functions, e.g.,

$$\begin{aligned} F(x) = 1 - \frac{e}{x (\ln x)^2}, \quad x > e. \end{aligned}$$
(42)

However, neither (42) nor its more subtle modifications can guarantee (31) and (33).
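The divergence caused by (42) is also easy to observe numerically. By the representation used in (22), \(\mathbb {E}X_{*,1}^p=\int _0^\infty p x^{p-1}[1-F(x)]^{\gamma }dx\), and for \(p>\gamma \) the partial integrals grow without bound; the parameter values in the sketch below are arbitrary illustrations.

```python
import numpy as np
from scipy.integrate import quad

gamma, p = 1.0, 2.0  # p > gamma >= 1, cf. (32)
tail = lambda x: np.e / (x * np.log(x) ** 2)  # 1 - F(x) for x > e, Eq. (42)
integrand = lambda x: p * x ** (p - 1.0) * tail(x) ** gamma

for upper in [1e2, 1e4, 1e6, 1e8]:
    # integrate in t = ln x for numerical stability on long ranges
    val = quad(lambda t: integrand(np.exp(t)) * np.exp(t),
               1.0, np.log(upper), limit=200)[0]
    print(f"partial integral up to {upper:.0e}: {val:.4g}")
# the printed values grow roughly like upper/(ln upper)^2, i.e., without bound
```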

A simple generalization of Theorem 3 is presented in the following corollary.

Corollary 3

The necessary and sufficient conditions for finiteness of the pth moment of the rth generalized order statistic, \(r \ge 1\), with a given parameter vector \({\varvec{\gamma }}= (\gamma _1,\ldots ,\gamma _r) \in \mathbb {R}^r_+\) and an arbitrary life baseline distribution function with a finite qth moment, \(q>0\), are

  1. (a)

    p is less than \(q \delta _\ell \) if either \(\delta _\ell <1\) or \(\delta _\ell \ge 1\) and \(d_\ell \ge 2\), i.e., the minimal parameter is multiple,

  2. (b)

    p is not greater than \(q \delta _\ell \), provided the single smallest parameter satisfies \(\delta _\ell \ge 1\).

5 Particular cases

We conclude the paper with examples, considering several special cases of the model of generalized order statistics.

Example 1

Order statistics and L-statistics. Suppose that \(X_1,\ldots ,X_n\) are iid random variables with a common distribution function F, and let \(X_{1:n} \le \cdots \le X_{n:n}\) stand for the respective order statistics. David and Nagaraja (2003) is the most comprehensive monograph devoted to the analysis and applications of order statistics. It is well known that they constitute a finite sequence of GOSs of length n with parameters \(\gamma _r= n-r+1\), \(r=1,\ldots , n\), and baseline distribution function F. The sequence of parameters is clearly strictly decreasing. It follows from Corollary 2 and Theorem 3 that \(\mathbb {E}X_{r:n}^p < \infty \) for arbitrary baseline F supported on a subset of the non-negative half-axis and possessing a finite mean iff \(p \le n+1-r\). Also, if \(\sum _{i=1}^{n}c_i X_{i:n}\) is a linear combination of order statistics in the above model, then

$$\begin{aligned} \mathbb {E}\left| \sum _{i=1}^{n} c_i X_{i:n}\right| ^p < \infty \end{aligned}$$
(43)

if \(p \le n+1-r\), where r is the largest index \(i \in \{1, \ldots , n\}\) such that \(c_i \ne 0\). Similarly, we can notice that (43) holds for an arbitrary parent with finite expectation when \(p \le \min \{ q, n-r+1 \}\), where q and r are the minimal and maximal indices of non-zero elements of \((c_1,\ldots ,c_n)\), respectively. All the above claims can easily be deduced from the paper by Sen (1959) (see also Papadatos 2021).

Example 2

Upper record values and kth record values. Suppose that \(X_1,X_2, \ldots \) is an infinite sequence of iid nonnegative continuous random variables with a finite mean. For a positive integer k we define the sequences of record times \(T_n^{(k)}\) and record values \(R_n^{(k)}\), \(n=1,2, \ldots \), as follows

$$\begin{aligned} \begin{array}{ll} T_1^{(k)} = k, &{} R_1^{(k)} = X_{1:k}, \\ T_n^{(k)} = \min \{ j> T_{n-1}^{(k)}: X_{j+1-k:j} > X_{j-k:j-1} \}, &{} R_n^{(k)} = X_{T_{n}^{(k)}+1-k: T_{n}^{(k)}}. \end{array} \end{aligned}$$

For \(k=1\) we simply obtain the classic upper records, which occur at time j and amount to \(X_j\) when a new observation \(X_j\) is greater than all the preceding ones. Records were first defined by Chandler (1952), and the generalizations to arbitrary \(k \ge 1\) are due to Dziubdziela and Kopociński (1976). The most popular book on record statistics is due to Arnold et al. (1998). The kth upper record values constitute a particular case of an infinite sequence of GOSs with the constant sequence of parameters \(\gamma _r= k\), \(r=1,2,\ldots \). Corollary 2 and Theorem 3 imply that

$$\begin{aligned} \mathbb {E}\left( R_n^{(k)}\right) ^p < \infty \end{aligned}$$

for arbitrary parent distribution with a nonnegative support and finite mean if \(p \le k\) when \(n=1\), and \(p < k\) for other n. Since \(R_1^{(k)}=X_{1:k}\), the condition in the case \(n=1\) can be concluded from the previous example. The condition for \(k=1\) with \(n \ge 2\) was established by Nagaraja (1978). The solutions in the other cases \(k,n \ge 2\) were presented recently in Rychlik and Szymkowiak (2023). Tables of exemplary bounds on moments of record values were presented there as well.

Example 3

Progressively censored type II order statistics. The model is described by means of integer parameters \(2 \le n < N\), and \(0 \le R_i \le N-n\), \(i=1,\ldots ,n\), such that \(\sum _{i=1}^{n} R_i= N-n\), and a one-dimensional distribution function F. We also use the notation \(\textbf{R}= (R_1,\ldots , R_n)\), and call it the censoring scheme of the experiment. It is assumed that nonnegative iid random variables \(X_1,\ldots ,X_N\) with the common distribution function F undergo a life test. After the first failure at a random time denoted by \(X_{1:n:N}^\textbf{R}\), \(R_1\) randomly selected items are removed from the experiment, and the remaining \(N-1-R_1\) items are further observed. The procedure is repeated n times. After the rth failure at a random time \(X_{r:n:N}^\textbf{R}\), a number \(R_r\) of objects are removed, and the experiment terminates after the nth failure \(X_{n:n:N}^\textbf{R}\), when the observation of the last \(R_n= N-n- \sum _{i=1}^{n-1} R_i\) items is given up. In particular, the censoring plan \(\textbf{R}= (0,\ldots , 0, N-n)\) represents the standard type II censoring model. A comprehensive study of the model was presented in Balakrishnan and Cramer (2014). The progressively censored type II order statistics model can be embedded in the GOSs model if we put \(\gamma _r=N-r+1 - \sum _{i=1}^{r-1}R_i= n-r+1+ \sum _{i=r}^{n}R_i\), \(r=1,\ldots ,n\). Since all the parameters \(\gamma _r\) are distinct and decrease as r increases, Corollary 2 and Theorem 3 imply that

$$\begin{aligned} \mathbb {E}\left( X_{r:n:N}^\textbf{R}\right) ^p < \infty \end{aligned}$$

for all baseline life distribution functions with finite mean if \(p \le N-r+1 - \sum _{i=1}^{r-1}R_i\). In particular, the whole experiment time has a finite pth moment if \(p \le R_n+1\).
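The parameter identification of this example is easily automated; the censoring scheme in the sketch below is a hypothetical illustration.

```python
def gos_parameters(N, R):
    """gamma_r = N - r + 1 - sum_{i<r} R_i for a censoring scheme R."""
    n = len(R)
    assert sum(R) == N - n, "a valid scheme satisfies sum(R) = N - n"
    return [N - r + 1 - sum(R[: r - 1]) for r in range(1, n + 1)]

R = (2, 0, 1, 0, 3)           # hypothetical scheme with N = 11, n = 5
print(gos_parameters(11, R))  # [11, 8, 7, 5, 4]
# All parameters are distinct and at least 1, so E (X_{r:n:N}^R)^p is finite
# for every life baseline with finite mean iff p <= gamma_r; for the total
# experiment time (r = n) the threshold is R[-1] + 1 = 4.
```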

Example 4

Failure dependent proportional hazard reliability model. We consider here a coherent system with \(n \ge 2\) identical components. We assume that the components start working independently, and their random working times \(T_1,\ldots ,T_n\) are identically distributed with some life distribution function F. For simplicity of interpretation, we tentatively suppose that F has a density function. After the first component failure \(T_{1:n}\), the \(n-1\) still living components keep working independently, but they undergo an increased burden. This burden is expressed by a stepwise multiplicative increase of the component failure rate by a factor \(\alpha _2 \ge 1\), without any change of shape of the original failure rate function. Similar phenomena occur after each consecutive component failure: the failure rates of the \(n-r\) components still operating between \(T_{r:n}\) and \(T_{r+1:n}\) are \(\alpha _{r+1}\) times greater than the original one. It is natural to assume that \(\alpha _1= 1 \le \alpha _2 \le \cdots \le \alpha _n\). In other words, under the convention \(T_{0:n}=0\), the \(n-r\) components operating between the rth and \((r+1)\)st failure under the condition that \(T_{r:n}= t_{r:n}\) are iid with the common survival function \(\left[ \frac{1-F(t)}{1-F(t_{r:n})} \right] ^{\alpha _{r+1}}\). We can use this interpretation even if F is discontinuous and the failure rate is not necessarily well-defined. The model was treated in a number of papers, e.g., Hollander and Peña (1995), Aki and Hirano (1997), Burkschat (2009), Navarro and Burkschat (2011), and Bieniek, Burkschat and Rychlik (2020). The assumptions imply that the component lifetimes are exchangeable, and in consequence the system lifetime distribution function has the representation

$$\begin{aligned} \mathbb {P}(T \le t) = \sum _{i=1}^{n} s_i \mathbb {P}(T_{i:n} \le t), \end{aligned}$$

(see Navarro et al. 2008), where the probability vector \((s_1,\ldots ,s_n)\), called the Samaniego signature (see Samaniego 1985), depends merely on the system structure and does not depend on the particular joint distribution of the component lifetimes. The Samaniego signature of a coherent system does not have zeros between nonzero elements, and satisfies \(s_1s_n =0\) (see, e.g., D’Andrea and De Sanctis 2015). Moreover, the failure dependent proportional hazard reliability model is a particular case of the GOSs model with the parameters \(\gamma _r= (n+1-r)\alpha _r\), \(r=1,\ldots ,n\), and baseline distribution function F. The parameter sequence consists of products of the elements of decreasing and non-decreasing sequences, and so we cannot establish its monotonicity properties in general. Suppose that \(s_u,\ldots , s_v\) are the only non-zero coordinates of the Samaniego signature of the system. The conclusion of our paper for the system lifetime satisfying the model is the following:

$$\begin{aligned} \mathbb {E}T^p < \infty \end{aligned}$$

for arbitrary (possibly discontinuous) original component lifetime distribution function F with a finite mean iff \(p \le \min \{ (n+1-r) \alpha _r: r=1,\ldots ,v \}\) in the case when the minimum is attained at a unique index, and \(p < \min \{ (n+1-r) \alpha _r: r=1,\ldots ,v \}\) otherwise; note that all these parameters are at least 1, since \(\alpha _r \ge 1\). Under the simple special assumption that a constant overall load \(L>0\) is imposed on the system and uniformly distributed over all operating components, we get \(\alpha _r = \frac{L}{n+1-r}\), \(r=1,\ldots ,n\). In this case the distribution of the consecutive component failure times coincides with the distribution of the first n values of the 1st records based on an iid sequence with the parent distribution function \(1-(1-F)^{L}\) (or of the kth records with baseline \(1-(1-F)^{\frac{L}{k}}\)).
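A small sketch of this criterion (with hypothetical system size, load factors, and signature support) may be helpful:

```python
n = 4
alpha = [1.0, 1.2, 1.5, 2.0]  # hypothetical nondecreasing load factors, alpha_1 = 1
v = 3                         # hypothetical largest index with s_v > 0
gammas = [(n + 1 - r) * alpha[r - 1] for r in range(1, v + 1)]
m = min(gammas)               # every gamma_r >= 1 because alpha_r >= 1
rel = "p < " if gammas.count(m) > 1 else "p <= "
print(gammas, "->", rel + str(m))  # [4.0, 3.6, 3.0] -> p <= 3.0
```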