1 Introduction

Kamps (1995) introduced generalized order statistics (GOSs) as a unified model of random variables (RVs) arranged in ascending order. The GOSs have received considerable attention in recent years, because the concept describes RVs in ascending order of magnitude, has important applications, and subsumes well-known models that had been treated separately in the statistical literature. Ordinary order statistics (OOSs), sequential order statistics (SOSs), progressive type-II censored order statistics (POSs), record values, kth record values, and Pfeifer's records are all special cases of the GOSs model.

Clearly, descending ordered RVs, such as lower record values, are not included in the GOSs model. Dual generalized order statistics (DGOSs) were first introduced by Burkschat et al. (2003) as a unified model of descending ordered RVs, such as reversed OOSs, lower \(k\)-records, and lower Pfeifer's records, through a combined approach. By analogy with Kamps (1995), the DGOSs, \(X_{r,n,\underline{\gamma }}^{(D)}, ~r=1,2,\ldots ,n\), based on a continuous cumulative distribution function (CDF) F, were defined in Burkschat et al. (2003) as

$$\begin{aligned} X_{r,n,\underline{\gamma }}^{(D)} {\mathop {=}\limits ^{d}} F^{-1}\left( \prod _{j=1}^{r}\,B_j\right) {\mathop {=}\limits ^{d}} F^{-1}\left( \prod _{j=1}^{r}\, {U^{*}_j}^{1/\gamma _j}\right) , \quad r=1,2,\ldots ,n, \end{aligned}$$
(1.1)

where \(\underline{\gamma }= (\gamma _1,\ldots ,\gamma _n) \in \Re _{+}^{n}\) is the vector of model parameters with \(\gamma _j=k+n-j+ \sum _{i=j}^{n-1} m_i > 0,\) \(m_1,\ldots , m_{n-1} \in \Re , ~\gamma _n=k>0,\) \(U^{*}_j,~j=1,2,\ldots ,n,\) are independent standard uniform RVs (so that \(B_j={U^{*}_j}^{1/\gamma _j}\) has CDF \(t^{\gamma _j}\) on [0, 1]), and \(X{\mathop {=}\limits ^{d}}Y\) means that X and Y have the same CDF. Hence, the strict inequalities \(~X_{1,n,\underline{\gamma }}^{(D)}> X_{2,n,\underline{\gamma }}^{(D)}> \cdots > X_{n,n,\underline{\gamma }}^{(D)}~\) hold almost surely. Interested readers may refer to Ahsanullah (2004), Barakat and El-Adll (2009), Burkschat et al. (2003), and Shah Imtiyaz et al. (2020) for more details on DGOSs.

Predicting future events based on past or current events is an important problem in statistics. In life-testing problems, some failure times cannot be observed for various reasons, and it is necessary to predict or reconstruct such failure times by a point or an interval. Clearly, OOSs play a significant role in predicting future observations and reconstructing previously unseen ones. Many authors have studied point and interval prediction in the statistical literature, under both frequentist and Bayesian approaches. Among them are Ahsanullah (1980), Al-Hussaini (1999), Al-Hussaini and Ahmad (2003), Al-Mutairi and Raqab (2020), David and Nagaraja (2003), Geisser (1993), Kaminsky and Rhodin (1985), Kotb and Raqab (2021), Lawless (1977), Nagaraja (1986), and Raqab (2001).

The first prediction result based on a pivotal quantity is due to Lawless (1971), who applied the results of Sukhatme (1937) to construct confidence intervals for future OOSs from the exponential distribution. Lingappaiah (1973) defined a different pivotal quantity for the same purpose. Recent works on prediction and reconstruction based on pivotal quantities have been published by Aly (2015, 2016, 2022), Barakat et al. (2011, 2018, 2021), El-Adll (2011, 2021), and El-Adll and Aly (2016a, 2016b), among others. A general finite-sample method for predicting future observations from an arbitrary continuous distribution was proposed by Barakat et al. (2014). Later, Aly et al. (2019) extended this result to fractional record values.

The Weibull distribution is one of the most widely used distributions in engineering, hydrology, ecology, medicine, the environment, and energy research. The inverse Weibull distribution, like the Weibull distribution, enables us to model long-tailed right-skewed data. The inverse Weibull distribution, a special case of the generalized extreme value distribution, is considered an alternative to the Weibull distribution for modeling wind speed data; for some wind speed data measured in various locations and seasons, it outperforms the Weibull distribution. Since the Weibull distribution does not perform well in modeling wind speed data from various geographical regions around the world (e.g., Akgül et al. (2016); Wang et al. (2015)), the heavier right tail of the inverse Weibull distribution provides an advantage for modeling extreme or outlying observations in the right tail.

The probability density function (PDF) and CDF of the inverse Weibull distribution are given, respectively, by

$$\begin{aligned} f(x)=\frac{\beta }{\sigma } \left( \frac{x}{\sigma }\right) ^{-(\beta +1)} \exp \Big [- \left( \frac{x}{\sigma }\right) ^{-\beta }\Big ], \quad x>0, \quad \beta ,~\sigma >0, \end{aligned}$$
(1.2)

and

$$\begin{aligned} F(x)= \exp \Big [- \left( \frac{x}{\sigma }\right) ^{-\beta }\Big ], \quad x>0, \quad \beta ,~\sigma >0. \end{aligned}$$
(1.3)
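To fix ideas, the following minimal sketch (ours, not the authors' code; the values \(\sigma =10\) and \(\beta =2\) are illustrative) evaluates (1.2)-(1.3) and samples from the distribution via the inverse CDF \(F^{-1}(u)=\sigma (-\log u)^{-1/\beta }\):

```python
# A minimal sketch of the inverse Weibull PDF/CDF of (1.2)-(1.3)
# and inverse-CDF sampling; sigma = 10, beta = 2 are illustrative.
import numpy as np

def iw_pdf(x, sigma, beta):
    """PDF (1.2) of the inverse Weibull distribution."""
    return (beta / sigma) * (x / sigma) ** (-(beta + 1)) * np.exp(-(x / sigma) ** (-beta))

def iw_cdf(x, sigma, beta):
    """CDF (1.3) of the inverse Weibull distribution."""
    return np.exp(-(x / sigma) ** (-beta))

def iw_quantile(u, sigma, beta):
    """Inverse CDF: F^{-1}(u) = sigma * (-log u)^(-1/beta)."""
    return sigma * (-np.log(u)) ** (-1.0 / beta)

rng = np.random.default_rng(1)
x = iw_quantile(rng.uniform(size=100_000), sigma=10.0, beta=2.0)
# The empirical CDF at x = 12 should be close to the analytic value.
print(np.mean(x <= 12.0), iw_cdf(12.0, 10.0, 2.0))
```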

It can be noted that the transformations \(Z_{r,n,\underline{\gamma }}^{(D)}=\log F \left( X_{r,n,\underline{\gamma }}^{(D)} \right) ,~ r=1,2,\ldots ,n\), represent DGOSs based on the negative exponential distribution (NEXP(1)) with PDF and CDF \(g(x)=G(x)=e^{x},\,x \leqslant 0.\) In what follows, \(U_1,~ U_2,\) and \(U_3\) denote predictive pivotal quantities, while \(V_1,~ V_2,\) and \(V_3\) denote reconstructive pivotal quantities. Moreover, in the DGOSs model considered here, it is assumed that \(\gamma _i \ne \gamma _j \) for \(i\ne j\), \(1 \leqslant i, j \leqslant n\), which covers most of the important descending ordered RVs except the lower record values. Furthermore, \(W_{r,l}=Z_{r,n,\underline{\gamma }}^{(D)} - Z_{l,n,\underline{\gamma }}^{(D)}<0,\) for \(1 \leqslant l<r < n,\) and \(T_{l,r}= \sum _{i=l+1}^{r} \gamma _i ( Z_{i,n,\underline{\gamma }}^{(D)} - Z_{i-1,n,\underline{\gamma }}^{(D)}) \) follows the negative gamma distribution with parameters \(r-l\) and 1, i.e., \(T_{l,r} \thicksim N \Gamma (r-l,1),\) with PDF

\(f_{T_{l,r}}(t)= \dfrac{1}{\Gamma (r-l)} (-t)^{r-l-1} e^t,~t<0,~l<r. \)
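These distributional facts are easy to check by simulation. The sketch below (ours; the reversed-OOSs choice \(\gamma _i=n-i+1\) used later in Sect. 5, and the indices \(l=1\), \(r=4\), are illustrative) generates the \(Z\)-values via (1.1), forms the normalized spacings, and verifies by Monte Carlo that \(-T_{l,r}\) behaves as a standard gamma RV with shape \(r-l\):

```python
# A sketch checking that T_{l,r} ~ NGamma(r - l, 1), i.e. -T_{l,r} ~ Gamma(r - l, 1).
import numpy as np

rng = np.random.default_rng(2)
n, l, r = 8, 1, 4
gamma = np.array([float(n - i) for i in range(n)])  # reversed OOSs: gamma_i = n - i + 1

M = 100_000
U = rng.uniform(size=(M, n))
Z = np.cumsum(np.log(U) / gamma, axis=1)            # Z_r = sum_j (log U_j)/gamma_j, per (1.1)
Zfull = np.concatenate([np.zeros((M, 1)), Z], axis=1)   # prepend Z_0 = 0
Y = gamma * (Zfull[:, 1:] - Zfull[:, :-1])          # normalized spacings Y_i
T = Y[:, l:r].sum(axis=1)                           # T_{l,r} = sum_{i=l+1}^{r} Y_i
print(np.mean(-T), np.var(-T))                      # both close to r - l = 3
```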

The rest of this paper is organized as follows. In Sect. 2, three predictive pivotal quantities are suggested and their distributions are established. Section 3 is devoted to the reconstruction problem. In Sect. 4, the maximum likelihood predictor (MLP) as well as the predictive maximum likelihood estimators (PMLEs) are discussed. Simulation studies are carried out in Sect. 5.

2 Prediction intervals of DGOSs

In this section, based on the knowledge of \(X_{l,n,\underline{\gamma }}^{(D)},\ldots ,X_{r,n,\underline{\gamma }}^{(D)} \), three predictive pivotal quantities for the unobserved sth DGOS \(X_{s,n,\underline{\gamma }}^{(D)},\) for \(1 \leqslant l< r<s \leqslant n \), are proposed and their distributions are derived. Consequently, three predictive intervals for \(X_{s,n,\underline{\gamma }}^{(D)}\) are constructed. The predictive pivotal quantities are

$$\begin{aligned} U_1= 1- \left( \frac{X_{s,n,\underline{\gamma }}^{(D)}}{X_{r,n,\underline{\gamma }}^{(D)}}\right) ^{\beta }, \end{aligned}$$
(2.1)
$$\begin{aligned} U_2= \frac{ \Big (X_{s,n,\underline{\gamma }}^{(D)}\Big )^{-\beta } - \Big (X_{r,n,\underline{\gamma }}^{(D)}\Big )^{-\beta }}{ \sum _{i=l+1}^{r} \gamma _i \left( \Big (X_{i,n,\underline{\gamma }}^{(D)}\Big )^{-\beta } -\Big (X_{i-1,n,\underline{\gamma }}^{(D)}\Big )^{-\beta } \right) }, \end{aligned}$$
(2.2)
$$\begin{aligned} U_3= \frac{\Big (X_{s,n,\underline{\gamma }}^{(D)}\Big )^{-\beta } - \Big (X_{r,n,\underline{\gamma }}^{(D)}\Big )^{-\beta }}{ \Big (X_{r,n,\underline{\gamma }}^{(D)}\Big )^{-\beta } - \Big (X_{l,n,\underline{\gamma }}^{(D)}\Big )^{-\beta }}. \end{aligned}$$
(2.3)

The following lemma, which gives the marginal PDF of a single DGOS and the joint PDF of two DGOSs, will be needed in the sequel.

Lemma 2.1

Under the condition \(\gamma _i \ne \gamma _j \) for \(i\ne j\), \(1 \leqslant i, j \leqslant n\), the marginal PDF of the rth DGOS and the joint PDF of the rth and sth DGOSs are given, respectively, by

$$\begin{aligned} f_{_{X_{r,n,\underline{\gamma }}^{(D)}}} (x_r)=C_r \sum _{i=1}^{r} a_i(r) ~ F^{\gamma _i-1}(x_r) f(x_r), \quad x_r \in \mathbb {R}, \end{aligned}$$
(2.4)

and

$$\begin{aligned} f_{_{X_{r,n,\underline{\gamma }}^{(D)}, X_{s,n,\underline{\gamma }}^{(D)}}} (x_r,x_s)&= C_s \left[ \sum _{i=r+1}^{s} a_i^{(r)} (s) \left( \frac{F(x_s)}{F(x_r)} \right) ^{\gamma _i} \right] \nonumber \\&\times \left[ \sum _{i=1}^{r} a_i(r) F^{\gamma _i}(x_r) \right] \frac{f(x_r)}{F(x_r)} \frac{f(x_s)}{F(x_s)}, \quad x_r>x_s, \quad r<s, \end{aligned}$$
(2.5)

where

$$\begin{aligned} C_{r}= \prod \limits _{j=1}^{r} \gamma _j,\qquad a_i (r)= \prod \limits _{j=1, ~j\ne i}^{r} \frac{1}{\gamma _j-\gamma _i},~ 1 \leqslant i \leqslant r, \qquad a_i^{(r)}(s)= \prod \limits _{j=r+1, ~j\ne i}^{s} \frac{1}{\gamma _j-\gamma _i},~ r+1 \leqslant i \leqslant s. \end{aligned}$$

The proof of Lemma 2.1 is similar to the proof of Lemma 2.1 of Kamps and Cramer (2001) with appropriate adjustments.
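For numerical work, the constants of Lemma 2.1 are straightforward to compute. The following sketch (our hypothetical helpers, not the authors' code; it assumes pairwise distinct \(\gamma _j\)) also checks the partial-fraction identity \(\sum _{i=1}^{r} a_i(r)/\gamma _i = 1/C_r\), which is used implicitly in the derivations below:

```python
# A sketch of the constants C_r, a_i(r), and a_i^{(r)}(s) of Lemma 2.1,
# assuming pairwise distinct gamma_1, ..., gamma_n (arrays are 0-indexed).
import numpy as np

def C(gamma, r):
    """C_r = prod_{j=1}^{r} gamma_j."""
    return np.prod(gamma[:r])

def a(gamma, r):
    """a_i(r) = prod_{j<=r, j != i} 1/(gamma_j - gamma_i), i = 1..r."""
    g = gamma[:r]
    return np.array([1.0 / np.prod(np.delete(g, i) - g[i]) for i in range(r)])

def a_upper(gamma, r, s):
    """a_i^{(r)}(s) = prod_{j=r+1..s, j != i} 1/(gamma_j - gamma_i), i = r+1..s."""
    g = gamma[r:s]
    return np.array([1.0 / np.prod(np.delete(g, i) - g[i]) for i in range(s - r)])

# Sanity check of the identity sum_i a_i(r)/gamma_i = 1/C_r:
gamma = np.array([8., 7., 6., 5., 4., 3., 2., 1.])
r = 4
print(np.sum(a(gamma, r) / gamma[:r]), 1.0 / C(gamma, r))
```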

Theorem 2.1

The CDF of \(U_1\) is given by

$$\begin{aligned} F_{U_1}(u_1)= C_s \sum _{j=1}^{r} \sum _{i=r+1}^{s} \frac{ a_{i}^{(r)}(s) a_{j}(r) u_1}{\gamma _j (\gamma _j - ( \gamma _j - \gamma _i ) u_1)},\quad 0 \leqslant u_1 \leqslant 1. \end{aligned}$$
(2.6)

A \(100(1- \tau )\%\) predictive interval for \(X_{ s,n,\underline{\gamma } }^{(D)}\) based on \(U_1\) is

$$\begin{aligned} \left( (1-u_1)^{1 / \beta } X_{ r,n,\underline{\gamma } }^{(D)} , X_{ r,n,\underline{\gamma } }^{(D)} \right) , \end{aligned}$$

where \(u_1=u_1(\tau ) \) is such that \(F_{U_1}(u_1)=1-\tau .\)

Proof

First, note that the pivotal quantity \(U_1\) can be expressed as

$$\begin{aligned} U_1&=1- \left( \frac{X_{s,n,\underline{\gamma }}^{(D)}}{X_{r,n,\underline{\gamma }}^{(D)}}\right) ^{\beta } = \frac{\left( X_{s,n,\underline{\gamma }}^{(D)}/\sigma \right) ^{-\beta }-\left( X_{r,n,\underline{\gamma }}^{(D)}/\sigma \right) ^{-\beta }}{\left( X_{s,n,\underline{\gamma }}^{(D)}/\sigma \right) ^{-\beta }}= \frac{Z_{s,n,\underline{\gamma }}^{(D)}-Z_{r,n,\underline{\gamma }}^{(D)}}{Z_{s,n,\underline{\gamma }}^{(D)}}. \end{aligned}$$

Clearly, \(0<U_1<1\). Therefore, for \(0<u_1<1\), we have

$$\begin{aligned} F_{U_1}(u_1)&=P( U_1 \leqslant u_1) = P\left( 0< 1 - \frac{Z_{r,n,\underline{\gamma }}^{(D)}}{Z_{s,n,\underline{\gamma }}^{(D)}} \leqslant u_1\right) \nonumber \\&= P\left( Z_{s,n,\underline{\gamma }}^{(D)} < Z_{r,n,\underline{\gamma }}^{(D)} \leqslant (1-u_1) Z_{s,n,\underline{\gamma }}^{(D)} \right) \nonumber \\&= \int _{-\infty }^{0} \int _{z_s}^{(1-u_1)z_s} f_{_{Z_{r,n,\underline{\gamma }}^{(D)}, Z_{s,n,\underline{\gamma }}^{(D)}}} (z_r,z_s)\,dz_{r}~dz_{s}. \end{aligned}$$
(2.7)

By the relation (2.5), the joint PDF of the rth and sth DGOSs based on the NEXP(1) can be simplified and written as

$$\begin{aligned} f_{Z_{r,n,\underline{\gamma }}^{(D)}, Z_{s,n,\underline{\gamma }}^{(D)}}(z_r,z_s) = C_s \sum _{j=1}^{r} \sum _{i=r+1}^{s} a_j(r) a_i^{(r)}(s) e^{\gamma _i z_s } e^{( \gamma _j - \gamma _i ) z_r}, ~ -\infty< z_s< z_r < 0. \end{aligned}$$
(2.8)

By (2.7) and (2.8) we obtain

$$\begin{aligned} F_{U_1}(u_1)&= C_s \sum _{j=1}^{r} \sum _{i=r+1}^{s} a_{i}^{(r)}(s) a_{j}(r) \int _{-\infty }^{0} \int _{z_s}^{(1-u_1) z_s} e^{\gamma _i z_{s}} e^{(\gamma _j - \gamma _i ) z_{r}} ~dz_{r}~ dz_{s}\\&=C_s \sum _{j=1}^{r} \sum _{i=r+1}^{s} a_{i}^{(r)}(s) a_{j}(r) \left( \frac{1}{\gamma _j - \gamma _i} \right) \left( \frac{1}{\gamma _j - ( \gamma _j -\gamma _i ) u_1} -\frac{1}{\gamma _j} \right) . \end{aligned}$$

After some algebraic calculations, we get relation (2.6). The predictive interval follows directly from the definition of the pivotal quantity \(U_1\). Hence, the theorem is proved. \(\square \)
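As an illustration (ours, not the authors' code; the observed value of \(X_{r,n,\underline{\gamma }}^{(D)}\) and all parameter choices are hypothetical), the next sketch evaluates (2.6), solves \(F_{U_1}(u_1)=1-\tau \) with a root finder, and forms the interval of Theorem 2.1:

```python
# A sketch of the U_1-based predictive interval of Theorem 2.1
# (helpers repeated from the Lemma 2.1 snippet; reversed OOSs, n = 8).
import numpy as np
from scipy.optimize import brentq

def C(gamma, r): return np.prod(gamma[:r])
def a(gamma, r):
    g = gamma[:r]
    return np.array([1.0 / np.prod(np.delete(g, i) - g[i]) for i in range(r)])
def a_upper(gamma, r, s):
    g = gamma[r:s]
    return np.array([1.0 / np.prod(np.delete(g, i) - g[i]) for i in range(s - r)])

def F_U1(u1, gamma, r, s):
    """CDF (2.6) of U_1."""
    aj, ai = a(gamma, r), a_upper(gamma, r, s)
    gj, gi = gamma[:r], gamma[r:s]
    tot = sum(aj[j] * ai[i] * u1 / (gj[j] * (gj[j] - (gj[j] - gi[i]) * u1))
              for j in range(r) for i in range(s - r))
    return C(gamma, s) * tot

gamma = np.array([8., 7., 6., 5., 4., 3., 2., 1.])
r, s, tau, beta = 4, 6, 0.05, 2.0
u1 = brentq(lambda u: F_U1(u, gamma, r, s) - (1 - tau), 1e-12, 1 - 1e-12)
x_r = 7.3   # illustrative observed value of X_{r,n,gamma}
print(((1 - u1) ** (1 / beta) * x_r, x_r))   # 95% predictive interval for X_{s,n,gamma}
```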

Lemma 2.2

The normalized spacings, \(Y_i=\gamma _i \left( Z_{i,n,\underline{\gamma }}^{(D)} - Z_{ i-1,n,\underline{\gamma } }^{(D)} \right) ,~i=1,2,\ldots ,n\), are independent and identically distributed (iid) RVs each of which has the NEXP(1) with \(Z_{ 0,n,\underline{\gamma } }^{(D)}\equiv 0.\) Moreover,

$$\begin{aligned} Z_{r,n,\underline{\gamma }}^{(D)} {\mathop {=}\limits ^{d}} \sum _{i=1}^{r} \frac{Y_i}{\gamma _i}, \quad r=1,2,\ldots ,n. \end{aligned}$$

Lemma 2.2, which is due to Burkschat et al. (2003), represents a fundamental tool for proving the next theorems.

Theorem 2.2

The CDF of the predictive pivotal quantity \(U_2\) is

$$\begin{aligned} F_{U_2}(u_2)= 1- \frac{C_s}{C_r} \sum _{i=r+1}^{s} \frac{ a_{i}^{(r)}(s)}{\gamma _i} (1+ \gamma _i u_2)^{-(r-l)},\quad u_2\geqslant 0, \quad r>l\geqslant 0. \end{aligned}$$
(2.9)

A \(100(1- \tau )\%\) predictive interval for \(X_{ s,n,\underline{\gamma } }^{(D)}\) is

$$\begin{aligned} \left( \left( \left( X_{ r,n,\underline{\gamma } }^{(D)}\right) ^{- \beta } + u_2 \sum _{i=l+1}^{r} \gamma _i \left( \left( X_{ i,n,\underline{\gamma } }^{(D)}\right) ^{- \beta } - \left( X_{ i-1,n,\underline{\gamma } }^{(D)}\right) ^{- \beta } \right) \right) ^{-1/\beta } , X_{ r,n,\underline{\gamma } }^{(D)} \right) , \end{aligned}$$

where \(u_2=u_2(\tau ) \) satisfies the nonlinear equation, \(F_{U_2}(u_2)=1-\tau .\)

Proof

The pivotal quantity \(U_2\) can be written as

$$\begin{aligned} U_2&= \frac{ \Big (X_{s,n,\underline{\gamma }}^{(D)}\Big )^{-\beta } - \Big (X_{r,n,\underline{\gamma }}^{(D)}\Big )^{-\beta }}{ \sum _{i=l+1}^{r} \gamma _i \left( \Big (X_{i,n,\underline{\gamma }}^{(D)}\Big )^{-\beta } -\Big (X_{i-1,n,\underline{\gamma }}^{(D)}\Big )^{-\beta } \right) }\\&= \frac{Z_{s,n,\underline{\gamma }}^{(D)} - Z_{ r,n,\underline{\gamma } }^{(D)}}{ T_{l,r} }=\frac{W_{r,s}}{T_{l,r}}, \quad 0\leqslant l<r<s. \end{aligned}$$

By Lemma 2.2, it can be noted that \(W_{r,s}=\sum _{i=r+1}^{s} Y_i/\gamma _i\) and \(T_{l,r}=\sum _{i=l+1}^{r} Y_i\). Since \(Y_1,\ldots ,Y_n\) are independent, \(W_{r,s}\) and \(T_{l,r}\) are independent. The CDF of \(W_{r,s}\) can be obtained as follows

$$\begin{aligned} F_{W_{r,s}}(w)&= P\left( W_{r,s} \leqslant w \right) =P\left( Z_{s,n,\underline{\gamma }}^{(D)} \leqslant Z_{ r,n,\underline{\gamma } }^{(D)} +w\right) \nonumber \\&= \int _{-\infty }^{0} \int _{-\infty }^{z_r+w } f_{Z_{r,n,\underline{\gamma }}^{(D)}, Z_{s,n,\underline{\gamma }}^{(D)}}(z_r,z_s)\, dz_s\, dz_r \nonumber \\&= C_s \sum _{j=1}^{r} \sum _{i=r+1}^{s} \frac{a_{j}(r)\, a_{i}^{(r)}(s)}{\gamma _i} \frac{e^{\gamma _i w }}{\gamma _j}, \quad w<0. \end{aligned}$$
(2.10)

Consequently, the PDF of \(W_{r,s}\) is given by

$$\begin{aligned} f_{W_{r,s}}(w) = C_s \left( \sum _{j=1}^{r} \frac{a_{j}(r)}{\gamma _j} \right) \left( \sum _{i=r+1}^{s} a_{i}^{(r)}(s) e^{\gamma _i w} \right) =\frac{C_s}{C_r} \sum _{i=r+1}^{s} a_{i}^{(r)}(s) e^{\gamma _i w}. \end{aligned}$$
(2.11)

Therefore, by the independence of \(W_{r,s}\) and \(T_{l,r}\), coupled with the continuous version of the total law of probability, we get

$$\begin{aligned} F_{U_2}(u_2)&= P\left( 0< U_2 \leqslant u_2 \right) =P\left( u_2 T_{l,r}\leqslant W_{r,s} < 0 \right) \\&= \int _{-\infty }^{0} \left( 1-F_{W_{r,s}} (u_2 t) \right) f_{_{T_{l,r}}}(t)\, dt\\&= 1 - \frac{C_s}{C_r } \sum _{i=r+1}^{s} \frac{a_{i}^{(r)}(s)}{\gamma _i} ( 1 + \gamma _i u_2)^{-(r-l)}, \quad u_2 \geqslant 0, \end{aligned}$$

which is (2.9). The predictive interval is a direct consequence of the form of the pivotal quantity. This completes the proof of the theorem. \(\square \)
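A companion sketch for Theorem 2.2 (ours; the observed values \(X_1>\cdots >X_r\) are illustrative, and \(l\geqslant 1\) is assumed so that all terms of \(T_{l,r}\) are observable):

```python
# A sketch of the U_2-based predictive interval of Theorem 2.2
# (helper a_upper as in the earlier snippets; reversed OOSs, n = 8).
import numpy as np
from scipy.optimize import brentq

def a_upper(gamma, r, s):
    g = gamma[r:s]
    return np.array([1.0 / np.prod(np.delete(g, i) - g[i]) for i in range(s - r)])

def F_U2(u2, gamma, l, r, s):
    """CDF (2.9) of U_2; note C_s/C_r = prod_{j=r+1}^{s} gamma_j."""
    ratio = np.prod(gamma[r:s])
    ai, gi = a_upper(gamma, r, s), gamma[r:s]
    return 1.0 - ratio * np.sum(ai / gi * (1 + gi * u2) ** (-(r - l)))

gamma = np.array([8., 7., 6., 5., 4., 3., 2., 1.])
l, r, s, tau, beta = 1, 4, 6, 0.05, 2.0
u2 = brentq(lambda u: F_U2(u, gamma, l, r, s) - (1 - tau), 0.0, 1e6)
x = np.array([9.8, 8.9, 8.1, 7.3])        # illustrative observed X_1 > ... > X_r
t = np.sum(gamma[l:r] * (x[l:r] ** (-beta) - x[l-1:r-1] ** (-beta)))
lower = (x[r-1] ** (-beta) + u2 * t) ** (-1 / beta)
print((lower, x[r-1]))                     # 95% predictive interval for X_{s,n,gamma}
```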

Theorem 2.3

The CDF of the predictive pivotal quantity \(U_3\) is given by

$$\begin{aligned} F_{U_3}(u_3)= \frac{C_s}{C_l} \sum _{i=r+1}^{s} \sum _{j=l+1}^{r} \frac{ a_{i}^{(r)}(s) a_{j}^{(l)}(r) u_3}{\gamma _j (\gamma _j+ \gamma _i u_3)} ,\quad u_3 \geqslant 0. \end{aligned}$$

A \(100(1- \tau )\%\) predictive interval for \(X_{s,n,\underline{\gamma } }^{(D)}\) is

$$\begin{aligned} \left( X_{ r,n,\underline{\gamma } }^{(D)} \left( 1 + u_3 \left( 1- \left( \frac{X_{l,n,\underline{\gamma } }^{(D)}}{X_{r,n,\underline{\gamma } }^{(D)}}\right) ^{ - \beta } \right) \right) ^{-1/\beta } \, , \, X_{ r,n,\underline{\gamma } }^{(D)} \right) , \end{aligned}$$

where \(u_3=u_3(\tau ) \) is obtained by solving the nonlinear equation, \(F_{U_3}(u_3)=1-\tau .\)

Proof

Proceeding as in the previous theorems, the pivotal quantity \(U_3\) can be written as

$$\begin{aligned} U_3 =\frac{Z_{s,n,\underline{\gamma }}^{(D)} - Z_{ r,n,\underline{\gamma } }^{(D)}}{ Z_{r,n,\underline{\gamma }}^{(D)} - Z_{l,n,\underline{\gamma } }^{(D)}}=\frac{ W_{r,s}}{W_{l,r}}. \end{aligned}$$

By Lemma 2.2, the RVs \(W_{l,r}=\sum _{i=l+1}^{r} Y_i/\gamma _i\) and \(W_{r,s}=\sum _{i=r+1}^{s} Y_i/\gamma _i\) are independent. Accordingly, relation (2.11) yields

$$\begin{aligned} f_{W_{l,r}, W_{r,s}} (w_{l,r},w_{r,s})&= f_{ W_{l,r}} ( w_{l,r}) f_{W_{r,s}} (w_{r,s}) \nonumber \\&= \frac{C_s}{C_l} \sum _{i=r+1}^{s} \sum _{j=l+1}^{r} a_{i}^{(r)}(s) a_{j}^{(l)}(r) e^{\gamma _j w_{l,r}} e^{\gamma _i w_{r,s}},\quad ~ w_{l,r}, ~w_{r,s}<0. \end{aligned}$$
(2.12)

Hence,

$$\begin{aligned} F_{U_3}(u_3)&= P(0< U_3 \leqslant u_3) = P(u_3 W_{l,r} \leqslant W_{r,s} < 0) \\&= \int _{-\infty }^{0} \int _{u_3 w_{l,r}}^{0} f_{W_{l,r}, W_{r,s}} (w_{l,r},w_{r,s}) \,dw_{r,s}\,dw_{l,r}\\&= \frac{C_s}{C_l} \sum _{i=r+1}^{s} \sum _{j=l+1}^{r} a_{i}^{(r)}(s) a_{j}^{(l)}(r) \int _{-\infty }^{0} \int _{u_3 w_{l,r}}^{0} e^{\gamma _j w_{l,r}} e^{\gamma _i w_{r,s}} \,dw_{r,s}\, dw_{l,r}\\&= \frac{C_s}{C_l} \sum _{i=r+1}^{s} \sum _{j=l+1}^{r} \frac{a_{i}^{(r)}(s) a_{j}^{(l)}(r) \, u_3}{\gamma _j ( \gamma _j + \gamma _i u_3)}, \end{aligned}$$

which is the required result. The rest of the theorem follows directly. \(\square \)
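Theorem 2.3 can also be verified empirically. A Monte Carlo sketch (ours; reversed OOSs and the indices below are illustrative) compares the stated CDF with simulated values of the pivotal quantity:

```python
# A Monte Carlo check of the CDF of U_3 in Theorem 2.3.
import numpy as np

rng = np.random.default_rng(3)
n, l, r, s = 8, 1, 4, 6
gamma = np.array([float(n - i) for i in range(n)])   # gamma_i = n - i + 1

def a_upper(gamma, r, s):
    g = gamma[r:s]
    return np.array([1.0 / np.prod(np.delete(g, i) - g[i]) for i in range(s - r)])

def F_U3(u3):
    ai, aj = a_upper(gamma, r, s), a_upper(gamma, l, r)
    gi, gj = gamma[r:s], gamma[l:r]
    ratio = np.prod(gamma[l:s])                      # C_s / C_l
    tot = sum(ai[i] * aj[j] * u3 / (gj[j] * (gj[j] + gi[i] * u3))
              for i in range(s - r) for j in range(r - l))
    return ratio * tot

M = 200_000
Z = np.cumsum(np.log(rng.uniform(size=(M, n))) / gamma, axis=1)
U3 = (Z[:, s-1] - Z[:, r-1]) / (Z[:, r-1] - Z[:, l-1])
u = 1.5
print(np.mean(U3 <= u), F_U3(u))                     # should agree closely
```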

3 Reconstructive intervals of DGOSs

This section is devoted to the reconstruction problem of DGOSs, relying on the pivotal quantity approach. It is assumed that \(X_{s,n,\underline{\gamma }}^{(D)},\ldots ,X_{n,n,\underline{\gamma }}^{(D)} \) are observed and \(X_{r,n,\underline{\gamma }}^{(D)}, ~r=s-1,s-2,\ldots ,1, \) are to be reconstructed. For this goal, four reconstructive pivotal quantities are proposed and their distributions are established. In what follows, a corollary to Theorem 2.1 and three theorems are presented without proof; their proofs can be accomplished in the same manner as in Sect. 2.

Corollary 3.1

A \(100(1- \tau )\%\) reconstructive interval of \(X_{ r,n,\underline{\gamma } }^{(D)}\) based on \(U_1\) is

$$\begin{aligned} \left( X_{ s,n,\underline{\gamma } }^{(D)} , (1-u_1)^{-1 / \beta } X_{ s,n,\underline{\gamma } }^{(D)} \right) , \end{aligned}$$

where \(u_1=u_1(\tau ) \) satisfies the nonlinear equation \(F_{U_1}(u_1)=1-\tau ,~0<u_1<1.\)

Theorem 3.1

The CDF of the pivotal quantity \(V_1=\frac{Z_{s,n,\underline{\gamma }}^{(D)} - Z_{ r,n,\underline{\gamma } }^{(D)}}{Z_{ r,n,\underline{\gamma } }^{(D)}}\) takes the form

$$\begin{aligned} F_{V_1}(v_1)= 1- C_s \sum _{j=1}^{r} \sum _{i=r+1}^{s} \frac{a_{j}(r) a_{i}^{(r)}(s)}{\gamma _i(\gamma _j+\gamma _i v_1)},\quad v_1\geqslant 0. \end{aligned}$$

Moreover, a \(100(1- \tau )\%\) reconstructive interval for \(X_{ r,n,\underline{\gamma } }^{(D)}\) is

$$\begin{aligned} \left( X_{ s,n,\underline{\gamma } }^{(D)} ,(1+v_1)^{1/\beta } X_{ s,n,\underline{\gamma } }^{(D)}\right) , \end{aligned}$$

where \(v_1=v_1(\tau ) \) is the solution to the nonlinear equation, \(F_{V_1}(v_1)=1-\tau .\)
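A sketch of the \(V_1\)-based reconstruction (ours; the observed value of \(X_{s,n,\underline{\gamma }}^{(D)}\) and all settings are illustrative, with helpers as in the prediction snippets):

```python
# A sketch of the V_1-based reconstructive interval of Theorem 3.1.
import numpy as np
from scipy.optimize import brentq

def a(gamma, r):
    g = gamma[:r]
    return np.array([1.0 / np.prod(np.delete(g, i) - g[i]) for i in range(r)])
def a_upper(gamma, r, s):
    g = gamma[r:s]
    return np.array([1.0 / np.prod(np.delete(g, i) - g[i]) for i in range(s - r)])

def F_V1(v1, gamma, r, s):
    """CDF of V_1 as in Theorem 3.1."""
    aj, ai = a(gamma, r), a_upper(gamma, r, s)
    gj, gi = gamma[:r], gamma[r:s]
    Cs = np.prod(gamma[:s])
    tot = sum(aj[j] * ai[i] / (gi[i] * (gj[j] + gi[i] * v1))
              for j in range(r) for i in range(s - r))
    return 1.0 - Cs * tot

gamma = np.array([8., 7., 6., 5., 4., 3., 2., 1.])
r, s, tau, beta = 2, 4, 0.05, 2.0
v1 = brentq(lambda v: F_V1(v, gamma, r, s) - (1 - tau), 0.0, 1e6)
x_s = 8.1   # illustrative observed X_{s,n,gamma}
print((x_s, (1 + v1) ** (1 / beta) * x_s))   # 95% reconstructive interval for X_{r,n,gamma}
```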

Theorem 3.2

The CDF of the reconstructive pivotal quantity \(V_2=\frac{Z_{s,n,\underline{\gamma }}^{(D)} - Z_{ r,n,\underline{\gamma } }^{(D)}}{ T_{s,n} }\) is given by

$$\begin{aligned} F_{V_2}(v_2)= 1- \frac{C_s}{C_r} \sum _{i=r+1}^{s} \frac{ a_{i}^{(r)}(s)}{\gamma _i} (1+ \gamma _i v_2)^{-(n-s)},\quad v_2\geqslant 0, \end{aligned}$$

where \(T_{s,n}= \sum _{i=s+1}^{n} \gamma _i ( Z_{i,n,\underline{\gamma }}^{(D)} - Z_{i-1,n,\underline{\gamma }}^{(D)}). \) Furthermore, a \(100(1- \tau )\%\) reconstructive interval of \(X_{ r,n,\underline{\gamma } }^{(D)}\) is

$$\begin{aligned} \left( X_{ s,n,\underline{\gamma } }^{(D)} ,\left( \left( X_{ s,n,\underline{\gamma } }^{(D)}\right) ^{-\beta } - v_2 \sum _{i=s+1}^{n} \gamma _i \left( \left( X_{ i,n,\underline{\gamma } }^{(D)}\right) ^{-\beta } - \left( X_{i-1,n,\underline{\gamma } }^{(D)}\right) ^{-\beta }\right) \right) ^{-1/\beta } \right) , \end{aligned}$$

where \(v_2=v_2(\tau ) \) can be obtained by solving the nonlinear equation, \(F_{V_2}(v_2)=1-\tau .\)

Theorem 3.3

The CDF of the reconstructive pivotal quantity, \(V_3=\frac{Z_{s,n,\underline{\gamma }}^{(D)} - Z_{ r,n,\underline{\gamma } }^{(D)}}{ Z_{n,n,\underline{\gamma }}^{(D)} - Z_{ s,n,\underline{\gamma } }^{(D)} },\) is

$$\begin{aligned} F_{V_3}(v_3)= \frac{C_n}{C_r} \sum _{i=r+1}^{s} \sum _{j=s+1}^{n} \frac{ a_{i}^{(r)}(s) a_{j}^{(s)}(n) v_3}{\gamma _j (\gamma _j+ \gamma _i v_3)} ,\quad v_3 \geqslant 0. \end{aligned}$$

A \(100(1- \tau )\%\) reconstructive interval for \(X_{ r,n,\underline{\gamma } }^{(D)}\) is

$$\begin{aligned} \left( X_{ s,n,\underline{\gamma } }^{(D)} , X_{ s,n,\underline{\gamma } }^{(D)} \left( 1- v_3 \left( \left( \frac{ X_{ s,n,\underline{\gamma } }^{(D)}}{ X_{ n,n,\underline{\gamma } }^{(D)}}\right) ^{\beta } -1 \right) \right) ^{-1/\beta } \right) , \end{aligned}$$

where \(v_3=v_3(\tau ) \) is computed by solving, \(F_{V_3}(v_3)=1-\tau .\)

Remark 3.1

  1. Clearly, all the predictive and reconstructive results for the inverse exponential distribution are obtained as special cases of the results in Sects. 2 and 3 by setting \(\beta =1\).

  2. The predictive and reconstructive intervals are free of the scale parameter \(\sigma ,\) while this is not the case for the shape parameter \(\beta \).

  3. If the shape parameter \(\beta \) is known, the transformation \(Y^\star =\left( \dfrac{X}{\sigma }\right) ^\beta \) reduces the problem to the inverse exponential distribution.

The next section addresses the issue of the unknown parameters.

4 The MLP based on DGOSs

In this section, the maximum likelihood estimators (MLEs), the MLP, and the PMLEs based on the first r DGOSs are studied. The following proposition is formulated in a general framework.

Proposition 4.1

The likelihood function based on the DGOSs, \(X_{1,n,\underline{\gamma }}^{(D)},\ldots ,X_{r,n,\underline{\gamma }}^{(D)}, \) from any continuous CDF F is

$$\begin{aligned} L^{\star }(\underline{\Theta }|\underline{ \mathbf {x}}_r)&= C_r \left( \prod _{i=1}^{r-1} F^{m_i}(x_i;\underline{\Theta }) f(x_i;\underline{\Theta })\right) F^{\gamma _r-1}(x_r;\underline{\Theta }) f(x_r;\underline{\Theta }),\nonumber \\&\qquad -\infty<x_r<\cdots<x_1 < \infty , \end{aligned}$$
(4.1)

where \(\underline{\Theta }=(\theta _1,\theta _2,\ldots ,\theta _d)\) is the vector of unknown parameters and \(\underline{ \mathbf {x}}_r=(x_1,x_2,\ldots ,x_r)\) denotes the first r observed DGOSs. Moreover, the predictive likelihood \((PL^{\star })\) function of \(X_{s,n,\underline{\gamma }}^{(D)}\) relying on \(X_{1,n,\underline{\gamma }}^{(D)},\ldots ,X_{r,n,\underline{\gamma }}^{(D)}\) is given by

$$\begin{aligned} PL^{\star }(\underline{\Theta },x_s|\underline{ \mathbf {x}}_r)&=C_s \left( \prod _{i=1}^{r-1} F^{m_i}(x_i;\underline{\Theta }) f(x_i;\underline{\Theta })\right) F^{\gamma _r}(x_r;\underline{\Theta }) \left( \frac{f(x_r;\underline{\Theta }) f(x_s;\underline{\Theta })}{F(x_r;\underline{\Theta }) F(x_s;\underline{\Theta })}\right) \nonumber \\&\times \sum _{j=r+1}^{s} a_j^{(r)}(s) \left( \frac{F(x_s;\underline{\Theta })}{F(x_r;\underline{\Theta })} \right) ^{\gamma _j}, \quad -\infty<x_s<x_r<\cdots<x_1 < \infty . \end{aligned}$$
(4.2)

Proof

According to Burkschat et al. (2003), after integrating the remaining variables, \(x_{r+1},\ldots ,x_n\), on the region \(x_r>x_{r+1}>\cdots> x_n>- \infty , \) the joint PDF of the first r DGOSs can be expressed as (4.1).

In view of Theorem 2.1 in Burkschat et al. (2003), the DGOSs form a Markov chain. Consequently, the conditional PDF of \( X_{s,n,\underline{\gamma }}^{(D)} \) given that \(X_{1,n,\underline{\gamma }}^{(D)}=x_1,\ldots ,X_{r,n,\underline{\gamma }}^{(D)}=x_r \) is equal to the conditional PDF of \( X_{s,n,\underline{\gamma }}^{(D)} \) given that \(X_{r,n,\underline{\gamma }}^{(D)}=x_r. \) Following Kaminsky and Rhodin (1985) and Barakat et al. (2018), Lemma 2.1 implies

$$\begin{aligned} f^{(D)}_{1,2,\ldots ,r,s}(x_1,\ldots ,x_r,x_s)&= f^{(D)}_{1,2,\ldots ,r}(x_1,\ldots ,x_r) f_{_{X_{s,n,\underline{\gamma }}^{(D)}| X_{r,n,\underline{\gamma }}^{(D)}}} (x_s|x_r) \nonumber \\&= f^{(D)}_{1,2,\ldots ,r}(x_1,\ldots ,x_r) \frac{f_{_{X_{r,n,\underline{\gamma }}^{(D)}, X_{s,n,\underline{\gamma }}^{(D)}}} (x_r,x_s)}{f_{_{X_{r,n,\underline{\gamma }}^{(D)}}} (x_r)} \nonumber \\&= \frac{C_s}{C_r} f^{(D)}_{1,2,\ldots ,r}(x_1,\ldots ,x_r) \sum _{j=r+1}^{s} a_j^{(r)}(s) \left( \frac{F(x_s)}{F(x_r)} \right) ^{\gamma _j} \frac{f(x_s)}{F(x_s)}. \end{aligned}$$
(4.3)

Hence, (4.2) follows directly from (4.1). This completes the proof. \(\square \)

For the inverse Weibull distribution, the log-likelihood function based on (4.1) can be simplified as

$$\begin{aligned} L(\sigma ,\beta )\varpropto r \log \beta + r \beta \, \log \sigma -\beta \sum _{j=1}^r \log x_j-\sum _{j=1}^{r-1} (\gamma _j-\gamma _{j+1}) \left( \frac{x_j}{\sigma }\right) ^{-\beta }-\gamma _r \left( \frac{x_r}{\sigma }\right) ^{-\beta }. \end{aligned}$$
(4.4)

The MLEs of \(\sigma \) and \(\beta \) can be obtained numerically, using an iterative method such as the Newton-Raphson method, by solving the nonlinear equations

$$\begin{aligned} \frac{\partial L(\sigma ,\beta )}{\partial \sigma }=0 \quad \text{ and } \quad \frac{\partial L(\sigma ,\beta )}{\partial \beta }=0. \end{aligned}$$
(4.5)
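As an alternative to Newton-Raphson iteration on (4.5), the log-likelihood (4.4) can be maximized directly with a derivative-free optimizer. A sketch (ours; the data vector and starting point are illustrative, reversed OOSs assumed):

```python
# A sketch that maximizes the log-likelihood (4.4), up to an additive
# constant, instead of solving the score equations (4.5).
import numpy as np
from scipy.optimize import minimize

def loglik(params, x, gamma):
    """Log-likelihood (4.4); x holds the observed X_1 > ... > X_r."""
    sigma, beta = params
    if sigma <= 0 or beta <= 0:
        return -np.inf
    r = x.size
    xs = (x / sigma) ** (-beta)
    # coefficients gamma_j - gamma_{j+1} for j < r, then gamma_r:
    coef = np.append(gamma[:r-1] - gamma[1:r], gamma[r-1])
    return (r * np.log(beta) + r * beta * np.log(sigma)
            - beta * np.sum(np.log(x)) - np.sum(coef * xs))

gamma = np.array([8., 7., 6., 5., 4., 3., 2., 1.])
x = np.array([14.2, 11.5, 10.4, 9.1])    # illustrative first r = 4 DGOSs
res = minimize(lambda p: -loglik(p, x, gamma), x0=[10.0, 2.0],
               method="Nelder-Mead")
print(res.x)                              # (sigma_hat, beta_hat)
```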

If the scale parameter \(\sigma \) is known we have

$$\begin{aligned} \frac{\partial ^2 L(\sigma ,\beta )}{\partial \beta ^2}=- \left[ \frac{r}{\beta ^2}+\sum _{j=1}^{r-1} (m_j+1) x_j^{\star } \left( \log \left( \frac{\sigma }{x_j}\right) \right) ^2 +\gamma _r x_r^{\star } \left( \log \left( \frac{\sigma }{x_r}\right) \right) ^2 \right] <0, \end{aligned}$$

where \(x_i^\star =\left( \dfrac{x_i }{\sigma }\right) ^{-\beta }.\) This ensures that there exists a unique MLE of \(\beta \) (e.g. Mäkeläinen et al. (1981)). Similarly, the logarithm of the \(PL^{\star }\) function can be written as

$$\begin{aligned} PL(x_s,\sigma ,\beta )&\varpropto \log \left( \sum _{t=r+1}^s a_t^{(r)}(s) \exp \left( -\gamma _t \left( x_s^\star -x_r^\star \right) \right) \right) -\sum _{t=1}^{r-1} (\gamma _t-\gamma _{t+1}) x_t^\star -\gamma _r x_r^\star \nonumber \\&-(\beta +1) \left( \sum _{t=1}^r \log x_t+\log x_s \right) +\beta (r+1) \log \sigma +(r+1) \log \beta , \end{aligned}$$
(4.6)

consequently, the MLP of \(x_s\), as well as the PMLEs of \(\sigma \) and \(\beta \), can be obtained numerically by solving the simultaneous equations

$$\begin{aligned} \frac{\partial PL(x_s,\sigma ,\beta )}{\partial x_s}=0, \qquad \frac{\partial PL(x_s,\sigma ,\beta )}{\partial \sigma }=0, \quad \text{ and } \quad \frac{\partial PL(x_s,\sigma ,\beta )}{\partial \beta }=0. \end{aligned}$$
(4.7)
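Similarly, a sketch (ours; data, indices, and starting point are illustrative) that obtains the MLP and PMLEs by maximizing the predictive log-likelihood (4.6) jointly in \((x_s,\sigma ,\beta )\) rather than solving the system (4.7):

```python
# A sketch maximizing the predictive log-likelihood (4.6), up to an
# additive constant; reversed OOSs with n = 8, s = 6 assumed.
import numpy as np
from scipy.optimize import minimize

def a_upper(gamma, r, s):
    g = gamma[r:s]
    return np.array([1.0 / np.prod(np.delete(g, i) - g[i]) for i in range(s - r)])

def pred_loglik(params, x, gamma, s):
    xs, sigma, beta = params
    r = x.size
    if sigma <= 0 or beta <= 0 or not 0 < xs < x[-1]:
        return -np.inf                  # x_s must lie below the last observed DGOS
    star = lambda t: (t / sigma) ** (-beta)
    mix = np.sum(a_upper(gamma, r, s) * np.exp(-gamma[r:s] * (star(xs) - star(x[-1]))))
    if mix <= 0:                        # numerical guard; mix is a positive mixture
        return -np.inf
    coef = np.append(gamma[:r-1] - gamma[1:r], gamma[r-1])
    return (np.log(mix) - np.sum(coef * star(x))
            - (beta + 1) * (np.sum(np.log(x)) + np.log(xs))
            + beta * (r + 1) * np.log(sigma) + (r + 1) * np.log(beta))

gamma = np.array([8., 7., 6., 5., 4., 3., 2., 1.])
x = np.array([14.2, 11.5, 10.4, 9.1])   # illustrative observed X_1 > ... > X_4
res = minimize(lambda p: -pred_loglik(p, x, gamma, 6), x0=[8.0, 10.0, 2.0],
               method="Nelder-Mead")
print(res.x)                             # (MLP of x_s, PMLE of sigma, PMLE of beta)
```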

4.1 On the existence and uniqueness of the MLEs, MLP, and PMLEs

The main aim of this subsection is to discuss the existence and uniqueness of the MLEs, MLP, and PMLEs. Except in very limited circumstances, an analytical demonstration is a difficult problem, and simulation provides an alternative route. Clearly, the support of the inverse Weibull distribution does not depend on the distribution parameters, and the PDF is absolutely continuous in \(\sigma \) and \(\beta \). Consequently, the function \(L(\sigma ,\beta )\) is the logarithm of a twice differentiable likelihood function with respect to \(\sigma \) and \(\beta \), with \((\sigma ,\beta )\) varying in a connected open subset \( \Theta \subset \mathbb {R}^2_{+}\). According to Mäkeläinen et al. (1981), the MLEs exist and are unique if the Hessian matrix \(\mathbf {H}_L(\widehat{\sigma },\widehat{\beta })\) of \(L(\widehat{\sigma },\widehat{\beta })\) is negative definite, where \(\widehat{\sigma }\) and \( \widehat{\beta }\) are the solutions of (4.5). Establishing the negative definiteness of the Hessian matrix analytically is a difficult problem in most cases. Alternatively, in this work, a comprehensive simulation study based on 100,000 replicates is carried out to check the negative definiteness of the Hessian matrix for different values of the parameters of the selected models. Similar conclusions concerning the MLP and the PMLEs can be reached via simulation. The numerical solutions of (4.5) and (4.7) are obtained for each sample, after which the corresponding Hessian matrices are computed and checked for negative definiteness. The percentages of the samples for which the Hessian matrices, \(\mathbf {H}_L(\widehat{\sigma },\widehat{\beta })\) and \(\mathbf {H}_{PL}(\widehat{x}_s,\widehat{\sigma },\widehat{\beta })\), are negative definite are shown in Tables 1 and 2 for OOSs and SOSs, respectively.

Table 1 The percentages of samples for which the Hessian matrices, \(\mathbf {H}_L(\widehat{\sigma },\widehat{\beta })\) and \(\mathbf {H}_{PL}(\widehat{x}_s,\widehat{\sigma },\widehat{\beta })\), are negative definite for OOSs with selected values of \(\sigma \) and \(\beta \)
Table 2 The percentages of samples for which the Hessian matrices, \(\mathbf {H}_L(\widehat{\sigma },\widehat{\beta })\) and \(\mathbf {H}_{PL}(\widehat{x}_s,\widehat{\sigma },\widehat{\beta })\), are negative definite for SOSs with selected values of \(\sigma \) and \(\beta \)

Remark 4.1

The simulation study, which is carried out for various values of r, s, and n (for brevity, selected values are reported in Tables 1 and 2), reveals that:

  1. In about 99% of the cases, the matrix \(\mathbf{H} _L\) is negative definite, which supports the existence of unique MLEs of \(\sigma \) and \(\beta \).

  2. In at least 95% of the cases, the matrix \(\mathbf{H} _{PL}\) is negative definite provided that \(s>r+1\), which backs up the existence and uniqueness of the MLP of \(X_{s,n,\underline{\gamma }}^{(D)}\) and the PMLEs of \(\sigma \) and \(\beta \).

  3. There are no discernible differences between the OOSs and SOSs.

4.2 The maximum likelihood reconstructor for the reversed OOSs

The maximum likelihood reconstructor (MLR) as well as the reconstructive maximum likelihood estimates (RMLEs) for the OOSs are discussed in Asgharzadeh et al. (2012). After routine calculations, it can be shown that the reconstructive likelihood (\(RL^{\star })\) function of \(X_{r:n}, ~r<s,\) based on the reversed OOSs, \(x_{s:n},\ldots ,x_{n:n}\), takes the form

$$\begin{aligned} RL^{\star }(x_r,\sigma ,\beta |x_s,\ldots ,x_n)&\propto \left( \prod _{j=s}^{n} f(x_j;\sigma ,\beta ) \right) \left( F(x_r;\sigma ,\beta )\right. \\&\quad \left. -F(x_s;\sigma ,\beta ) \right) ^{s-r-1} \left( 1-F(x_r;\sigma ,\beta )\right) ^{r-1} f(x_r;\sigma ,\beta ), \end{aligned}$$

\(x_r>x_s>\cdots >x_n.\) The log-likelihood function based on the inverse Weibull distribution can be written as

$$\begin{aligned} RL(x_r,\sigma ,\beta )&\propto (n-s+2) (\beta \log \sigma +\log \beta ) -\sum _{j=s}^n x^{*}_j\\&\quad -(\beta +1) \sum _{j=s}^n \log x_j-x^{*}_r-(\beta +1) \log x_r\\&\quad +(s-r-1) \log \left( e^{-x_r^{\star }}-e^{-x_s^{\star }} \right) +(r-1) \log \left( 1-e^{-x_r^{\star }}\right) . \end{aligned}$$

The MLR of \(X_{r:n}\) and the RMLEs of \(\sigma \) and \( \beta \) can be obtained numerically by solving the nonlinear system

$$\begin{aligned} \frac{\partial RL(x_r,\sigma ,\beta )}{\partial x_r}=0, \qquad \frac{\partial RL(x_r,\sigma ,\beta )}{\partial \sigma }=0, \quad \text{ and } \quad \frac{\partial RL(x_r,\sigma ,\beta )}{\partial \beta }=0. \end{aligned}$$
(4.8)

Remark 4.2

In many practical situations, the parameters are unknown, and we have to replace them with their estimates. Consequently, some of the accuracy will be lost. In the next section, it is shown that when the unknown parameters are replaced with their estimates, the accuracy of the results is satisfactory compared with the ideal case of known parameters, provided that \(s-r\) is not large. The comparison is based on the interval width and the coverage probability.

Table 3 Three \(95\%\) predictive intervals and their corresponding coverage probability of the reversed OOSs with parameters \(\sigma =10\) and \(\beta =2\)

5 Numerical results

5.1 Simulation studies

In this section, simulation experiments are conducted to assess the efficiency of the results obtained in the preceding sections. For this aim, two special models from the DGOSs model are considered. The first one is the reversed OOSs with model parameters \(\gamma _i=n-i+1\), while the second one corresponds to the choice \(\gamma _i=2(n-i)+1\), which may be interpreted as reversed SOSs. Here, two different cases are considered. In the first case, it is assumed that the inverse Weibull distribution parameters are known, with \(\sigma =10.0\) and \(\beta =2.0\) (Tables 3, 4, and 5). In the second case, the MLP is obtained and the parameters \(\sigma \) and \(\beta \) are replaced with their PMLEs (Tables 6, 7). In Table 8, the parameters \(\sigma \) and \(\beta \) are replaced with their RMLEs, which are obtained from (4.8). For comparison purposes, in the second case, we generate DGOSs from the inverse Weibull distribution with \(\sigma = 10.0\) and \(\beta = 2.0\), as in the first case.

Table 4 Three \(95\%\) predictive intervals and their corresponding coverage probability of \(X_{ s,n,\underline{\gamma } }^{(D)}\), the reversed SOSs \((\gamma _i=2(n-i)+1)\), with parameters \(\sigma =10,\) and \(\beta =2\)
Table 5 Two \(95\%\) reconstructive intervals and their corresponding coverage probability of the reversed OOSs based on the reconstructive pivotal quantities \(U_1\) and \(V_1\) with parameters \(\sigma =10,\) and \(\beta =2\)
Table 6 The MLP, PMLEs, and three \(95\%\) predictive intervals with their corresponding coverage probability of the reversed OOSs, \(X_{s:n}\), when the unknown parameters are replaced with their PMLEs
Table 7 The MLP, PMLEs, and three \(95\%\) predictive intervals with their corresponding coverage probability of the reversed SOSs \((\gamma _i=2(n-i)+1)\), \(X_{ s,n,\underline{\gamma } }^{(D)}\), when the unknown parameters are replaced with their PMLEs
Table 8 Two 95 % reconstructive intervals with their coverage probability, the MLR, and RMLEs of the reversed OOSs, \(X_{r:n}\), based on the reconstructive pivotal quantities \(U_1\) and \(V_1\) when the parameters are unknown

5.2 Algorithms

In view of the results of Burkschat et al. (2003), the rth DGOS can be generated by the following algorithm:

Algorithm 1 (Generating dual generalized order statistics)

Step 1.:

Choose the values of n, k, and the DGOSs model parameters, \(\gamma _i, ~i=1,2,\ldots ,n,\)

Step 2.:

generate n independent RVs, say \( B_1,B_2,\ldots ,B_n,\) where \(B_j\) follows the beta (power-function) distribution with CDF \(G_j(t)=t^{\gamma _j}, ~0\leqslant t\leqslant 1,\)

Step 3.:

compute the rth DGOS from any continuous distribution by the relation

$$\begin{aligned} X_{r,n,\underline{\gamma }}^{(D)}=F^{-1}\left( \prod _{j=1}^{r}\,B_j\right) , \quad r=1,2,\ldots ,n, \end{aligned}$$
Step 4.:

for the inverse Weibull distribution, compute the rth DGOS from the formula

$$\begin{aligned} X_{r,n,\underline{\gamma }}^{(D)}=\sigma \left( - \sum _{j=1}^{r} \log B_j \right) ^{-\frac{1}{\beta }}, \quad r=1,2,\ldots ,n. \end{aligned}$$
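A minimal implementation of Algorithm 1 (ours, not the authors' Mathematica code; the reversed-OOSs parameters and \(\sigma =10\), \(\beta =2\) are illustrative choices):

```python
# A sketch implementation of Algorithm 1 for the inverse Weibull case.
import numpy as np

def dgos_inverse_weibull(rng, gamma, sigma, beta):
    """Steps 2-4: B_j = U_j^(1/gamma_j) has CDF t^gamma_j, and
    X_r = sigma * (-sum_{j<=r} log B_j)^(-1/beta)."""
    b = rng.uniform(size=gamma.size) ** (1.0 / gamma)          # Step 2
    return sigma * (-np.cumsum(np.log(b))) ** (-1.0 / beta)    # Step 4, all r at once

rng = np.random.default_rng(5)
n = 8
gamma = np.array([float(n - i) for i in range(n)])   # reversed OOSs: gamma_i = n - i + 1
x = dgos_inverse_weibull(rng, gamma, sigma=10.0, beta=2.0)
print(x)   # strictly decreasing: X_1 > X_2 > ... > X_n
```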

Algorithm 2 (Constructing predictive (reconstructive) intervals and computing their coverage probability)

Step 1.:

Determine the distribution parameters, \(\sigma \) and \(\beta \),

Step 2.:

determine k, \(\gamma _i,\) and n, the number of DGOSs to be generated,

Step 3.:

use Algorithm 1 to generate and store \(M\times n\) arrays, each of which contains n DGOSs based on the inverse Weibull distribution, where M is the number of repetitions,

Step 4.:

specify the number of observed DGOSs and the number of unknown DGOSs that are to be predicted or reconstructed,

Step 5.:

apply Theorems 2.1, 2.2, and 2.3 to find the required quantiles \(q_i\) by solving the nonlinear equations \(F_{_{U_i}}(q_i)=1-\tau , ~i=1,2,3, \) for the prediction problem,

Step 6.:

for the reconstruction problem, apply Theorems 2.1 and 3.1 (for small values of n) or Theorems 3.2 and 3.3 (for large values of n) to compute the required quantiles,

Step 7.:

find the MLP and the PMLEs of the parameters based on the first r DGOSs, from (4.7),

Step 8.:

from the obtained results of Sects. 2 and 3, compute the upper and lower limits of the predictive (reconstructive) intervals, when:

(i):

the true values of parameters are known and

(ii):

the parameters are replaced with their PMLEs or RMLEs,

Step 9.:

check whether the true value of \(X_{s,n,\underline{\gamma }}^{(D)} ~\left( X_{r,n,\underline{\gamma }}^{(D)}\right) \) belongs to the predictive (reconstructive) interval,

Step 10.:

repeat Steps 7, 8, and 9 M times,

Step 11.:

finally, compute the coverage probability, that is, the percentage of repetitions in which the true value of the unobserved DGOS lies inside the predictive (reconstructive) interval, together with the averages of the lower and upper limits.
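The following compact sketch (ours; the reduced number of repetitions, the reversed OOSs, and the \(U_1\)-based interval are illustrative choices) traces Algorithm 2 for the known-parameter case:

```python
# A sketch of Algorithm 2: coverage of the U_1-based 95% predictive interval.
import numpy as np
from scipy.optimize import brentq

rng = np.random.default_rng(7)
n, r, s, tau, sigma, beta = 8, 4, 6, 0.05, 10.0, 2.0
gamma = np.array([float(n - i) for i in range(n)])

def a(g):
    return np.array([1.0 / np.prod(np.delete(g, i) - g[i]) for i in range(g.size)])

def F_U1(u1):
    """CDF (2.6) of U_1; a(gamma[:r]) gives a_j(r), a(gamma[r:s]) gives a_i^{(r)}(s)."""
    aj, ai = a(gamma[:r]), a(gamma[r:s])
    gj, gi = gamma[:r], gamma[r:s]
    Cs = np.prod(gamma[:s])
    return Cs * sum(aj[j] * ai[i] * u1 / (gj[j] * (gj[j] - (gj[j] - gi[i]) * u1))
                    for j in range(r) for i in range(s - r))

u1 = brentq(lambda u: F_U1(u) - (1 - tau), 1e-12, 1 - 1e-12)   # Step 5

M, hits = 20_000, 0
for _ in range(M):
    b = rng.uniform(size=n) ** (1.0 / gamma)                   # Algorithm 1
    x = sigma * (-np.cumsum(np.log(b))) ** (-1.0 / beta)
    lo, hi = (1 - u1) ** (1 / beta) * x[r-1], x[r-1]           # Step 8
    hits += (lo < x[s-1] < hi)                                 # Step 9
print(hits / M)                                                # close to 0.95
```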

Remark 5.1

  1. The simulation studies are based on \(M=100,000\) replicates.

  2. All the computations in this paper are performed with Mathematica 12.3.

6 Conclusion

In this paper, some predictive results concerning DGOSs based on the inverse Weibull distribution were considered. More specifically, different predictive and reconstructive pivotal quantities were proposed and their exact distributions were derived. Accordingly, some predictive and reconstructive intervals were constructed. Moreover, the MLP and the PMLEs of DGOSs based on the inverse Weibull distribution were discussed. A comprehensive simulation study backs up the existence and uniqueness of the MLP and PMLEs (Tables 1 and 2). The simulation studies revealed that:

  1. The coverage probability is closer to the theoretical level (95%) whenever the distribution parameters are known.

  2. If the distribution parameters are unknown, the coverage probability is lower than the theoretical level; however, the predictive intervals become shorter.

  3. In most cases, the lower limits of the three predictive intervals agree to at least one decimal place.

  4. As \(s-r\) increases, the interval width increases for all the predictive and reconstructive intervals.

  5. The MLP, MLR, PMLEs, and RMLEs perform well when compared with the true values, whenever \(s-r\) is small.

  6. For small samples, the reconstructive pivotal quantities \(U_1\) and \(V_1\) are recommended (see Tables 5 and 8).

  7. If the sample size is greater than 30, the predictive pivotal quantity \(U_2\) and the reconstructive pivotal quantities \(V_2\) and \(V_3\) are recommended.