Multivariate stochastic comparisons of sequential order statistics with non-identical components

Regular Article · Statistical Papers

Abstract

Sequential order statistics (SOS) are useful tools for modeling the lifetimes of systems wherein the failure of a component has a significant impact on the lifetimes of the remaining surviving components. The SOS model is a general model that contains most of the existing models for ordered random variables. In this paper, we consider the SOS model with non-identical components and then discuss various univariate and multivariate stochastic comparison results in both one- and two-sample scenarios.


References

  • Ahmad I, Kayid M (2007) Reversed preservation of stochastic orders for random minima and maxima with applications. Stat Pap 48:283–293

  • Balakrishnan N, Zhao P (2013) Ordering properties of order statistics from heterogeneous populations: a review with an emphasis on some recent developments. Probab Eng Inf Sci 27:403–443

  • Baratnia M, Doostparast M (2017) Modeling lifetime of sequential \(r\)-out-of-\(n\) systems with independent and heterogeneous components. Commun Stat 46:7365–7375

  • Barlow RE, Proschan F (1975) Statistical theory of reliability and life testing: probability models. Holt, Rinehart and Winston, New York

  • Belzunce F, Lillo RE, Ruiz JM, Shaked M (2001) Stochastic comparisons of nonhomogeneous processes. Probab Eng Inf Sci 15:199–224

  • Belzunce F, Ruiz JM, Ruiz MDC (2003) Multivariate properties of random vectors of order statistics. J Stat Plan Inference 115:413–424

  • Belzunce F, Mercader JA, Ruiz JM (2005) Stochastic comparisons of generalized order statistics. Probab Eng Inf Sci 19:99–120

  • Belzunce F, Gurler S, Ruiz JM (2011) Revisiting multivariate likelihood ratio ordering results for order statistics. Probab Eng Inf Sci 25:355–368

  • Belzunce F, Martínez-Riquelme C (2015) Some results for the comparison of generalized order statistics in the total time on test and excess wealth orders. Stat Pap 56:1175–1190

  • Burkschat M, Navarro J (2018) Stochastic comparisons of systems based on sequential order statistics via properties of distorted distributions. Probab Eng Inf Sci 32:246–274

  • Burkschat M, Torrado N (2014) On the reversed hazard rate of sequential order statistics. Stat Probab Lett 85:106–113

  • Chen J, Hu T (2007) Multivariate dispersive ordering of generalized order statistics. Appl Math Lett 22:968–974

  • Cramer E, Kamps U (1996) Sequential order statistics and \(k\)-out-of-\(n\) systems with sequentially adjusted failure rates. Ann Inst Stat Math 48:535–549

  • Cramer E, Kamps U (2001) Sequential \(k\)-out-of-\(n\) systems. In: Balakrishnan N, Rao CR (eds) Handbook of statistics (Chap. 12), vol 20. North-Holland, Amsterdam, pp 301–372

  • Cramer E, Kamps U (2003) Marginal distributions of sequential and generalized order statistics. Metrika 58:293–310

  • Gurler S (2012) On residual lifetimes in sequential \((n-k+1)\)-out-of-\(n\) systems. Stat Pap 53:23–31

  • Hazra NK, Kuiti MR, Finkelstein M, Nanda AK (2017) On stochastic comparisons of maximum order statistics from the location-scale family of distributions. J Multivar Anal 160:31–41

  • Hu T, Zhuang W (2005) A note on stochastic comparisons of generalized order statistics. Stat Probab Lett 72:163–170

  • Kamps U (1995) A concept of generalized order statistics. J Stat Plan Inference 48:1–23

  • Kelkinnama M, Asadi M (2019) Stochastic and ageing properties of coherent systems with dependent identically distributed components. Stat Pap 60:805–821

  • Khaledi BE, Kochar S (2000) On dispersive ordering between order statistics in one-sample and two-sample problems. Stat Probab Lett 46:257–261

  • Kvam PH, Pena EA (2005) Estimating load-sharing properties in a dynamic reliability system. J Am Stat Assoc 100:262–272

  • Li C, Li X (2015) Likelihood ratio order of sample minimum from heterogeneous Weibull random variables. Stat Probab Lett 97:46–53

  • Marshall AW, Olkin I, Arnold BC (2011) Inequalities: theory of majorization and its applications. Springer, New York

  • Navarro J, Burkschat M (2011) Coherent systems based on sequential order statistics. Nav Res Logist 58:123–135

  • Proschan F (1963) Theoretical explanation of observed decreasing failure rate. Technometrics 5:375–383

  • Sahoo T, Hazra NK (2023) Ordering and aging properties of systems with dependent components governed by the Archimedean copula. Probab Eng Inf Sci 37:1–28

  • Shaked M, Shanthikumar JG (2007) Stochastic orders. Springer, New York

  • Tavangar M, Asadi M (2011) On stochastic and aging properties of generalized order statistics. Probab Eng Inf Sci 25:187–204

  • Torrado N, Lillo RE, Wiper MP (2012) Sequential order statistics: ageing and stochastic orderings. Methodol Comput Appl Probab 14:579–596

  • Xie H, Hu T (2010) Some new results on multivariate dispersive ordering of generalized order statistics. J Multivar Anal 101:964–970

  • Zhuang W, Hu T (2007) Multivariate stochastic comparisons of sequential order statistics. Probab Eng Inf Sci 21:47–66


Acknowledgements

The first author sincerely acknowledges the financial support received from UGC, Govt. of India. The work of the second author was supported by IIT Jodhpur, India, while the work of the last author was supported by the Natural Sciences and Engineering Research Council of Canada through an individual discovery grant.

Author information

Correspondence to Nil Kamal Hazra.


Appendix

Before presenting the proofs, we first introduce some acronyms used exclusively in this appendix: CIS - conditionally increasing in sequence; ILR - increasing likelihood ratio; IFR - increasing failure rate; DFR - decreasing failure rate; DRFR - decreasing reverse failure rate.

Proof of Part (a) of Theorem 4.1

We have

$$\begin{aligned} \bar{F}_{X_{1:n+1}^{\star }}\left( t\right) =\prod _{j=1}^{n+1} \bar{F}_j^{(1)}\left( t\right) \text { and } \bar{F}_{X_{1:n}^{\star }}\left( t\right) = \prod _{j=1}^{n} \bar{F}_j^{(1)}\left( t\right) , \quad t>0. \end{aligned}$$

Consequently, since \(0 \le \bar{F}_{n+1}^{(1)}\left( t\right) \le 1\), we have \(X^{\star }_{1:n+1} \le _{st} X^{\star }_{1:n}\). Again, from Remark 3.1, we have

$$\begin{aligned} P\left( X^{\star }_{k:n} > x_k |X^{\star }_{1:n}= x_1, \ldots , X^{\star }_{k-1:n}= x_{k-1}\right) = {\left\{ \begin{array}{ll} \prod \limits _{j=1}^{n-k+1} \frac{ \bar{F}_j^{(k)} \left( x_k\right) }{ \bar{F}_j^{(k)}\left( x_{k-1}\right) } &{} \text {if }x_k \ge x_{k-1},\\ 1 &{} \text {if }x_k < x_{k-1}, \end{array}\right. } \end{aligned}$$
(A.1)

for \(k = 2, \ldots , n\). As each \(\bar{F}_j^{(k)} \left( \cdot \right) \) is a decreasing function, \(P(X^{\star }_{k:n} > x_k |X^{\star }_{1:n}= x_1, \) \(\ldots , X^{\star }_{k-1:n}= x_{k-1})\) is continuous and increasing in \(x_{k-1} >0\). Then, \(\left( X^{\star }_{1:n}, \ldots ,X^{\star }_{n:n} \right) \) is CIS. Thus, in view of Theorem 6.B.4 of Shaked and Shanthikumar (2007), we only need to show that

$$\begin{aligned} P\left( X^{\star }_{i:n+1}> t | X^{\star }_{i-1:n+1}= s\right)\le & {} P\left( X^{\star }_{i:n} > t | X^{\star }_{i-1:n}= s\right) \end{aligned}$$
(A.2)

for all \(t>0\), and \(i=2, \ldots , n\). Now, from (A.1), we get

$$\begin{aligned}{} & {} P\left( X^{\star }_{i:n+1}> t | X^{\star }_{i-1:n+1}= s\right) = \prod \limits _{j=1}^{n-i+2} \frac{ \bar{F}_j^{(i)}\left( t\right) }{ \bar{F}_j^{(i)}\left( s\right) } \le \prod \limits _{j=1}^{n-i+1} \frac{ \bar{F}_j^{(i)}\left( t\right) }{ \bar{F}_j^{(i)}\left( s\right) }\\{} & {} \quad = P\left( X^{\star }_{i:n} > t | X^{\star }_{i-1:n}= s\right) \end{aligned}$$

for all \(t \ge s>0,\) where the inequality follows from the fact that each \(\bar{F}_j^{(i)}\left( \cdot \right) \) is decreasing, so that the extra factor \(\bar{F}_{n-i+2}^{(i)}\left( t\right) /\bar{F}_{n-i+2}^{(i)}\left( s\right) \le 1\) whenever \(t \ge s\). Further, for \(t < s\), (A.2) is trivially true. Hence, the required result. \(\square \)
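As a supplementary numerical illustration (ours, not part of the original argument), the following Python sketch checks the product inequality behind \(X^{\star }_{1:n+1} \le _{st} X^{\star }_{1:n}\) for hypothetical exponential first-stage distributions; the rates are assumptions chosen purely for illustration.

```python
import numpy as np

# Hedged sanity check (ours, not from the paper): take hypothetical
# exponential first-stage distributions, bar F_j^{(1)}(t) = exp(-lam[j] * t).
# The survival function of the first SOS (the minimum) is the product of the
# component survival functions, so the extra factor for the (n+1)-th
# component can only decrease it, giving X*_{1:n+1} <=_st X*_{1:n}.
lam = np.array([0.5, 1.0, 1.5, 2.0, 2.5])  # illustrative rates, n + 1 = 5
t = np.linspace(0.01, 5.0, 200)

surv_n_plus_1 = np.exp(-t * lam.sum())     # product over all n + 1 components
surv_n = np.exp(-t * lam[:-1].sum())       # product over the first n only

assert np.all(surv_n_plus_1 <= surv_n)     # the claimed <=_st ordering
```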

Proof of Part (b) of Theorem 4.1

Let \(r_{j}^{(i)}(\cdot )\) be the hazard rate function of \(F_{j}^{(i)}\), \(j=1,\ldots , n-i+2\), \(i=1,\ldots , n+1\). Further, let \(\eta _{\cdot | \cdot } \left( \cdot \right) \) and \(\lambda _{\cdot | \cdot } \left( \cdot \right) \) be the multivariate conditional hazard rate functions of \(\left( X^{\star }_{1:n+1}, \ldots ,X^{\star }_{n:n+1} \right) \) and \(\left( X^{\star }_{1:n}, \ldots ,X^{\star }_{n:n} \right) \), respectively. Note that \(X^{\star }_{1:n} \le \ldots \le X^{\star }_{n:n} \). Then, from Remark 3.1 and (6.C.2) of Shaked and Shanthikumar (2007), we have

$$\begin{aligned} \lambda _{k | I}\left( u | \varvec{t}_I\right)= & {} {\left\{ \begin{array}{ll} \sum \limits _{j=1}^{n-i} r_{j}^{(i+1)} \left( u\right) &{} \text { for } k=i+1,\\ 0 &{} \text { for } k >i+1, \end{array}\right. } \end{aligned}$$
(A.3)

where \(I = \left\{ 1, \ldots , i \right\} \) for some i. Similarly, we have

$$\begin{aligned} \eta _{k | I \cup J}\left( u | \varvec{s}_{I \cup J}\right)= & {} {\left\{ \begin{array}{ll} \sum \limits _{j=1}^{n-m+1} r_{j}^{(m+1)} \left( u\right) &{} \text { for } k=m+1, \\ 0 &{} \text { for } k >m+1, \end{array}\right. } \end{aligned}$$
(A.4)

where \(I = \left\{ 1, \ldots , i \right\} \) and \(J = \left\{ i+1, \ldots , m \right\} \), for some \(m\ge i\) and \(i=0, 1, \ldots , n-1\). In view of (6.D.13) of Shaked and Shanthikumar (2007), to prove the required result, it suffices to show that

$$\begin{aligned} \eta _{k | I \cup J}\left( u | \varvec{s}_{I \cup J}\right)\ge & {} \lambda _{k | I}\left( u | \varvec{t}_I\right) \text { for all } k \in \overline{I \cup J}, \end{aligned}$$
(A.5)

where \(I = \left\{ 1, \ldots , i \right\} \) and \(J = \left\{ i+1, \ldots , m \right\} \), for \(0 \le i \le n-1\), \(\varvec{s}_I \le \varvec{t}_I \le u\varvec{e}\) and \(\varvec{s}_J \le u\varvec{e}\). Let us consider the following two cases.

Case I: Let \(m>i\). As \(\lambda _{k | I}\left( u | \varvec{t}_I\right) = 0, \text { for } k \ge m+1\), (A.5) holds readily.

Case II: Let \(m=i\). Then,

$$\begin{aligned} \eta _{k | I \cup J}\left( u | \varvec{s}_{I \cup J}\right) = \sum \limits _{j=1}^{n-m+1} r_{j}^{(m+1)} \left( u\right) \ge \sum \limits _{j=1}^{n-m} r_{j}^{(m+1)} \left( u\right) = \lambda _{k | I}\left( u | \varvec{t}_I\right) , \text { for } k=m+1, \end{aligned}$$

where the inequality follows from the fact that \(r_{n-m+1}^{(m+1)}\left( u\right) \) is non-negative for \(u>0\). Further,

\(\eta _{k | I \cup J}\left( u | \varvec{s}_{I \cup J}\right) = 0= \lambda _{k | I}\left( u | \varvec{t}_I\right) , \text { for } k>m+1\).

Thus, (A.5) holds, and hence the required result. \(\square \)
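To make Case II concrete, here is a hedged sketch (our illustration, with hypothetical linear hazard rates \(r_j^{(m+1)}(u) = a_j u\)) of the extra-summand argument behind (A.3) and (A.4).

```python
import numpy as np

# Hedged sketch of Case II (ours): suppose the hazard rates are
# r_j^{(m+1)}(u) = a[j] * u (hypothetical, non-negative). The sum in (A.4)
# has one extra non-negative summand compared with the sum in (A.3), so
# eta >= lambda pointwise.
a = np.array([0.3, 0.7, 1.1, 1.9])    # illustrative coefficients, n - m + 1 = 4
u = np.linspace(0.0, 4.0, 100)

eta = sum(a_j * u for a_j in a)       # sum_{j=1}^{n-m+1} r_j^{(m+1)}(u)
lam = sum(a_j * u for a_j in a[:-1])  # sum_{j=1}^{n-m}   r_j^{(m+1)}(u)

assert np.all(eta >= lam)
```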

Proof of Part (c) of Theorem 4.1

In view of Lemma 3.1, the result holds if

$$\begin{aligned}{} & {} \prod _{i=1}^{n-1} \Bigg \{ \left( \frac{\bar{F}_{i} \left( x_i\right) }{\bar{F}_{i+1} \left( x_i\right) }\right) ^{ \sum \limits _{j=1}^{n-i+2} \alpha _{j}^{(i)} -1 } \left( {\bar{F}_{i+1} \left( x_i\right) } \right) ^{ \sum \limits _{j=1}^{n-i+2} \alpha _{j}^{(i)} - \sum \limits _{j=1}^{n-i+1} \alpha _{j}^{(i+1)} -1 } f_{i}\left( x_{i} \right) \Bigg \}\nonumber \\{} & {} \quad \left( {\bar{F}_{n} \left( x_n\right) } \right) ^{ \sum \limits _{j=1}^{2} \alpha _{j}^{(n)} -1 } f_{n}\left( x_{n} \right) \nonumber \\{} & {} \quad \times \prod _{i=1}^{n-1} \Bigg \{ \left( \frac{\bar{F}_{i} \left( y_i\right) }{\bar{F}_{i+1} \left( y_i\right) }\right) ^{ \sum \limits _{j=1}^{n-i+1} \alpha _{j}^{(i)} -1 } \left( {\bar{F}_{i+1} \left( y_i\right) } \right) ^{ \sum \limits _{j=1}^{n-i+1} \alpha _{j}^{(i)} - \sum \limits _{j=1}^{n-i} \alpha _{j}^{(i+1)} -1 } f_{i}\left( y_{i} \right) \Bigg \}\nonumber \\{} & {} \quad \left( {\bar{F}_{n} \left( y_n\right) } \right) ^{ \alpha _{1}^{(n)} -1 } f_{n}\left( y_{n} \right) \nonumber \\ {}{} & {} \le \prod _{i=1}^{n-1} \Bigg \{ \left( \frac{\bar{F}_{i} \left( x_i \wedge y_i \right) }{\bar{F}_{i+1} \left( x_i \wedge y_i\right) }\right) ^{ \sum \limits _{j=1}^{n-i+2} \alpha _{j}^{(i)} -1 } \left( {\bar{F}_{i+1} \left( x_i \wedge y_i \right) } \right) ^{ \sum \limits _{j=1}^{n-i+2} \alpha _{j}^{(i)} - \sum \limits _{j=1}^{n-i+1} \alpha _{j}^{(i+1)} -1 } f_{i}\left( x_i \wedge y_i \right) \Bigg \} \nonumber \\{} & {} \quad \times \left( {\bar{F}_{n} \left( x_n \wedge y_n \right) } \right) ^{ \sum \limits _{j=1}^{2} \alpha _{j}^{(n)} -1 } f_{n}\left( x_n \wedge y_n \right) \nonumber \\ {}{} & {} \quad \times \prod _{i=1}^{n-1} \Bigg \{ \left( \frac{\bar{F}_{i} \left( x_i \vee y_i \right) }{\bar{F}_{i+1} \left( x_i \vee y_i \right) }\right) ^{ \sum \limits _{j=1}^{n-i+1} \alpha _{j}^{(i)} -1 } \left( {\bar{F}_{i+1} \left( x_i \vee y_i \right) } \right) ^{ \sum \limits _{j=1}^{n-i+1} \alpha _{j}^{(i)} - \sum \limits _{j=1}^{n-i} \alpha _{j}^{(i+1)} -1 } f_{i}\left( x_i \vee y_i \right) \Bigg \} \nonumber \\{} & {} \quad \times \left( {\bar{F}_{n} \left( x_n \vee y_n\right) } \right) ^{ \alpha _{1}^{(n)} -1 } f_{n}\left( x_n \vee y_{n} \right) \end{aligned}$$
(A.6)

for all \(x_1 \le \ldots \le x_n\) and \(y_1 \le \ldots \le y_n\). So, to prove the required result, it suffices to show that the above inequality holds on the set \(E_1 = \{ 1 \le i \le n-1:x_i \ge y_i \}\). On \(E_1\), (A.6) reduces to

$$\begin{aligned}{} & {} \prod _{i \in E_1} \Bigg \{ \left( \frac{\bar{F}_{i} \left( x_i\right) }{\bar{F}_{i+1} \left( x_i\right) }\right) ^{ \alpha _{n-i+2}^{(i)} } \left( {\bar{F}_{i+1} \left( x_i\right) } \right) ^{ \alpha _{n-i+2}^{(i)} - \alpha _{n-i+1}^{(i+1)} } \Bigg \} \nonumber \\{} & {} \quad \times \left( {\bar{F}_{n} \left( x_n\right) } \right) ^{ \sum \limits _{j=1}^{2} \alpha _{j}^{(n)} -1 } \left( {\bar{F}_{n} \left( y_n\right) } \right) ^{ \alpha _{1}^{(n)} -1 } f_{n}\left( x_{n} \right) f_{n}\left( y_{n} \right) \nonumber \\ {}{} & {} \quad \le \prod _{i \in E_1} \Bigg \{ \left( \frac{\bar{F}_{i} \left( y_i\right) }{\bar{F}_{i+1} \left( y_i\right) }\right) ^{ \alpha _{n-i+2}^{(i)} }\left( {\bar{F}_{i+1} \left( y_i\right) } \right) ^{ \alpha _{n-i+2}^{(i)} - \alpha _{n-i+1}^{(i+1)} } \Bigg \} \nonumber \\ {}{} & {} \quad \times \left( {\bar{F}_{n} \left( x_n \wedge y_n \right) } \right) ^{ \sum \limits _{j=1}^{2} \alpha _{j}^{(n)} -1 } \left( {\bar{F}_{n} \left( x_n \vee y_n\right) } \right) ^{ \alpha _{1}^{(n)} -1 } f_{n}\left( x_n \wedge y_n \right) f_{n}\left( x_n \vee y_{n} \right) .\nonumber \\ \end{aligned}$$
(A.7)

Now, from the condition that “\(F_i \le _{hr} F_{i+1}\), \(i=1,\ldots , n-1\)”, we get

$$\begin{aligned} \frac{\bar{F}_{i} \left( u\right) }{\bar{F}_{i+1} \left( u\right) } \le \frac{\bar{F}_{i} \left( v\right) }{\bar{F}_{i+1} \left( v\right) }, \text { for all } u \ge v > 0. \end{aligned}$$
(A.8)

Again, from the condition that “\( \alpha _{n-i+2}^{(i)} \ge \alpha _{n-i+1}^{(i+1)} \), for \( i=1, \ldots , n-1\)”, we get

$$\begin{aligned} \left( {\bar{F}_{i+1} \left( u\right) } \right) ^{ \alpha _{n-i+2}^{(i)} - \alpha _{n-i+1}^{(i+1)} } \le \left( {\bar{F}_{i+1} \left( v\right) } \right) ^{ \alpha _{n-i+2}^{(i)} - \alpha _{n-i+1}^{(i+1)} }, \text { for all } u \ge v > 0. \end{aligned}$$
(A.9)

Upon using (A.8) and (A.9), we obtain (A.7). Hence, the required result. \(\square \)
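The monotonicity in (A.8) can be sanity-checked numerically; the sketch below (ours) uses hypothetical exponential \(F_i\), \(F_{i+1}\) with rates \(\mu _i \ge \mu _{i+1}\), so that \(F_i \le _{hr} F_{i+1}\) holds by construction.

```python
import numpy as np

# Hedged check of (A.8) (ours): for hypothetical exponentials with rates
# mu_i >= mu_ip1 we have F_i <=_hr F_{i+1}, and the ratio
# bar F_i(u) / bar F_{i+1}(u) = exp(-(mu_i - mu_ip1) * u) is decreasing in u,
# so u >= v > 0 implies ratio(u) <= ratio(v), exactly as used in (A.8).
mu_i, mu_ip1 = 2.0, 1.0                 # illustrative rates, mu_i >= mu_ip1
u = np.linspace(0.01, 5.0, 300)

ratio = np.exp(-(mu_i - mu_ip1) * u)    # bar F_i(u) / bar F_{i+1}(u)
assert np.all(np.diff(ratio) <= 0)      # monotonically decreasing
```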

Proof of Part (a) of Theorem 4.2

From the definition of SOS, we have \(X^{\star }_{1:n} \le _{st} X^{\star }_{2:n}\). Again, from Remark 3.1, we have

$$\begin{aligned} P\left( X^{\star }_{k:n} > x_k |X^{\star }_{1:n}= x_1, \ldots , X^{\star }_{k-1:n}= x_{k-1}\right) = {\left\{ \begin{array}{ll} \prod \limits _{j=1}^{n-k+1} \frac{ \bar{F}_j^{(k)}\left( x_k\right) }{ \bar{F}_j^{(k)}\left( x_{k-1}\right) } &{} \text {for }x_k \ge x_{k-1},\\ 1 &{} \text {for }x_k < x_{k-1}, \end{array}\right. }\nonumber \\ \end{aligned}$$
(A.10)

for \(k = 2, \ldots , n\). As each \(\bar{F}_j^{(k)} \left( \cdot \right) \) is a decreasing function, \(P(X^{\star }_{k:n} > x_k |X^{\star }_{1:n}= x_1, \) \(\ldots , X^{\star }_{k-1:n}= x_{k-1})\) is continuous and increasing in \(x_{k-1} >0\). Then, \(\left( X^{\star }_{1:n}, \ldots ,X^{\star }_{n:n} \right) \) is CIS. Thus, in view of Theorem 6.B.4 of Shaked and Shanthikumar (2007), we only need to show that

$$\begin{aligned} P\left( X^{\star }_{i:n}> t | X^{\star }_{i-1:n}= s\right)\le & {} P\left( X^{\star }_{i+1:n} > t | X^{\star }_{i:n}= s\right) \end{aligned}$$
(A.11)

for all \(t>0\), and \(i=2, 3, \ldots , n-1\). Now, from (A.10), we get

$$\begin{aligned}{} & {} P\left( X^{\star }_{i:n}> t | X^{\star }_{i-1:n}= s\right) = \prod \limits _{j=1}^{n-i+1} \frac{ \bar{F}_j^{(i)}\left( t\right) }{ \bar{F}_j^{(i)}\left( s\right) } \le \prod \limits _{j=1}^{n-i} \frac{ \bar{F}_j^{(i+1)}\left( t\right) }{ \bar{F}_j^{(i+1)}\left( s\right) }\\{} & {} \quad = P\left( X^{\star }_{i+1:n} > t | X^{\star }_{i:n}= s\right) \end{aligned}$$

for all \( t \ge s>0,\) where the inequality follows from the facts that each \(\bar{F}_j^{(i)} \left( \cdot \right) \) is decreasing and \(F_j^{(i)} \le _{hr} F_{j}^{(i+1)}\), for \(j=1, \ldots , n-i\), \(i=2, \ldots , n-1\). Further, for \(t < s\), (A.11) holds readily, and hence the required result. \(\square \)
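As an illustration of the displayed inequality (ours, not part of the original argument), the following sketch takes hypothetical exponential stage distributions satisfying the assumed hazard rate ordering and compares the two conditional survival probabilities directly.

```python
import numpy as np

# Hedged sketch (ours) of the displayed inequality, with hypothetical
# exponential stage distributions: the conditional survival ratio is
# bar F_j^{(i)}(t) / bar F_j^{(i)}(s) = exp(-lam_i[j] * (t - s)). Taking
# lam_i[j] >= lam_ip1[j] (so F_j^{(i)} <=_hr F_j^{(i+1)}) and one factor
# fewer on the right makes the right-hand product larger.
lam_i = np.array([2.0, 1.8, 1.5, 1.2])   # stage-i rates, n - i + 1 = 4 factors
lam_ip1 = np.array([1.5, 1.2, 1.0])      # stage-(i+1) rates, n - i = 3 factors
gap = np.linspace(0.0, 3.0, 50)          # values of t - s >= 0

lhs = np.exp(-lam_i.sum() * gap)         # product over n - i + 1 factors
rhs = np.exp(-lam_ip1.sum() * gap)       # product over n - i factors
assert np.all(lhs <= rhs)
```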

Proof of Part (b) of Theorem 4.2

Let \(r_{j}^{(i)}(\cdot )\) be the hazard rate function of \(F_{j}^{(i)}\), for \(j=1,\ldots , n-i+1\), \(i=1,\ldots , n\). Further, let \(\eta _{\cdot | \cdot } \left( \cdot \right) \) and \(\lambda _{\cdot | \cdot } \left( \cdot \right) \) be the multivariate conditional hazard rate functions of \(\left( X^{\star }_{1:n}, \ldots ,X^{\star }_{n-1:n} \right) \) and \(\left( X^{\star }_{2:n}, \ldots ,X^{\star }_{n:n} \right) \), respectively. Note that \(X^{\star }_{1:n} \le \ldots \le X^{\star }_{n:n}\). Then, from Remark 3.1 and (6.C.2) of Shaked and Shanthikumar (2007), we have

$$\begin{aligned} \lambda _{k | I}\left( u | \varvec{t}_I\right)= & {} {\left\{ \begin{array}{ll} \sum \limits _{j=1}^{n-i-1} r_{j}^{(i+2)} \left( u\right) &{} \text { for } k=i+1,\\ 0 &{} \text { for } k >i+1, \end{array}\right. } \end{aligned}$$
(A.12)

where \(I = \left\{ 1, \ldots , i \right\} \) for some i. Similarly, we have

$$\begin{aligned} \eta _{k | I \cup J}\left( u | \varvec{s}_{I \cup J}\right)= & {} {\left\{ \begin{array}{ll} \sum \limits _{j=1}^{n-m} r_{j}^{(m+1)} \left( u\right) &{} \text { for } k=m+1,\\ 0 &{} \text { for } k >m+1, \end{array}\right. } \end{aligned}$$
(A.13)

where \(I = \left\{ 1, \ldots , i \right\} \) and \(J = \left\{ i+1, \ldots , m \right\} \), for some \(m\ge i\) and \(i=0, 1, \ldots , n-2\). In view of (6.D.13) of Shaked and Shanthikumar (2007), to prove the required result, it suffices to show that

$$\begin{aligned} \eta _{k | I \cup J}\left( u | \varvec{s}_{I \cup J}\right)\ge & {} \lambda _{k | I}\left( u | \varvec{t}_I\right) \text { for all } k \in \overline{I \cup J}, \end{aligned}$$
(A.14)

where \(I = \left\{ 1, \ldots , i \right\} \) and \(J = \left\{ i+1, \ldots , m \right\} \), for \(0 \le i \le n-2\), \(\varvec{s}_I \le \varvec{t}_I \le u\varvec{e}\) and \(\varvec{s}_J \le u\varvec{e}\). Let us consider the following three cases.

Case I: Let \(i\ge 1\) and \(m>i\). As \(\lambda _{k | I}\left( u | \varvec{t}_I\right) = 0, \text { for } k \ge m+1\), (A.14) holds readily.

Case II: Let \(i\ge 1\) and \(m=i\). Then,

$$\begin{aligned}{} & {} \eta _{k | I \cup J}\left( u | \varvec{s}_{I \cup J}\right) = \sum \limits _{j=1}^{n-m} r_{j}^{(m+1)} \left( u\right) \ge \sum \limits _{j=1}^{n-m-1} r_{j}^{(m+2)} \left( u\right) \nonumber \\{} & {} \quad = \lambda _{k | I}\left( u | \varvec{t}_I\right) , \text { for } k=m+1, \end{aligned}$$

where the inequality follows from the assumption that “\(F_{j}^{(m+1)} \le _{hr} F_{j}^{(m+2)}\), \(j=1, \ldots , n-m-1\), \(m=1, \ldots , n-2\)”. Further,

\( \eta _{k | I \cup J}\left( u | \varvec{s}_{I \cup J}\right) = 0= \lambda _{k | I}\left( u | \varvec{t}_I\right) , \text { for } k>m+1. \) Thus, (A.14) holds.

Case III: Let \(i = 0\). Then, (A.14) can be written as \( \eta _{k | J}\left( u | \varvec{s}_{ J}\right) \ge \lambda _{k | \emptyset }\left( u | \varvec{t}_{ \emptyset }\right) .\) Note that \( \lambda _{k | \emptyset }\left( u | \varvec{t}_{ \emptyset }\right) = 0\), for \(k\ge 2\). Thus, we only need to show that \( \eta _{1 | \emptyset }\left( u | \varvec{s}_{ \emptyset }\right) \ge \lambda _{1 | \emptyset }\left( u | \varvec{t}_{ \emptyset }\right) ,\) or equivalently, \(X^{\star }_{1:n} \le _{hr} X^{\star }_{2:n}\). From Remark 3.1, we have

$$\begin{aligned} \frac{{r}_{X^{\star }_{1:n}}\left( t\right) }{{r}_{X^{\star }_{2:n}}\left( t\right) }= & {} \frac{{f}_{X^{\star }_{1:n}}\left( t\right) }{ \sum _{j=1}^{n-1} r_{j}^{(2)} \left( t\right) \int _{0}^{t} \prod _{j=1}^{n-1} \frac{\bar{F}_j^{(2)} \left( t\right) }{\bar{F}_j^{(2)} \left( z\right) } {f}_{X^{\star }_{1:n}}\left( z\right) dz } + \frac{{r}_{X^{\star }_{1:n}}\left( t\right) }{ \sum _{j=1}^{n-1} r_{j}^{(2)} \left( t\right) }\nonumber \\ \end{aligned}$$
(A.15)

for all \(t>0\). Now, from the given condition that “\(F_{j}^{(1)} \;\le _{hr } \; F_{j}^{(2)}\), \(j=1,\ldots , n-1\)”, we get \( {{r}_{X^{\star }_{1:n}}\left( t\right) }/{ \sum _{j=1}^{n-1} r_{j}^{(2)} \left( t\right) } \ge 1 \text { for all } t>0.\) Furthermore, we have

$$\begin{aligned} \frac{{f}_{X^{\star }_{1:n}}\left( t\right) }{ \sum _{j=1}^{n-1} r_{j}^{(2)} \left( t\right) \int _{0}^{t} \prod _{j=1}^{n-1} \frac{\bar{F}_j^{(2)} \left( t\right) }{\bar{F}_j^{(2)} \left( z\right) } {f}_{X^{\star }_{1:n}}\left( z\right) dz } \ge 0 \text { for all }t>0. \end{aligned}$$

Consequently, from (A.15), we get \( {r}_{X^{\star }_{1:n}}\left( t\right) \; \ge \; {r}_{X^{\star }_{2:n}}\left( t\right) \) for all \(t>0\), and thus \(X^{\star }_{1:n} \le _{hr} X^{\star }_{2:n}\), so that (A.14) follows. Hence, the required result. \(\square \)
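A hedged numerical reading of the bound used above (ours): for hypothetical exponential stage distributions, the hazard rate of a minimum of independent lifetimes is the sum of the component hazard rates, so the second summand in (A.15) is at least 1 whenever the assumed hazard rate ordering holds. The rates below are illustrative assumptions.

```python
import numpy as np

# Hedged check (ours) of r_{X*_{1:n}}(t) / sum_{j=1}^{n-1} r_j^{(2)}(t) >= 1:
# with hypothetical exponential stage distributions, the hazard rate of the
# minimum X*_{1:n} is sum_j lam1[j], and F_j^{(1)} <=_hr F_j^{(2)} means
# lam1[j] >= lam2[j]. Dropping one term and weakening the rest gives the bound.
lam1 = np.array([2.0, 1.7, 1.4, 1.1])   # rates of F_j^{(1)}, j = 1..n (n = 4)
lam2 = np.array([1.5, 1.2, 0.9])        # rates of F_j^{(2)}, j = 1..n-1

assert lam1.sum() / lam2.sum() >= 1.0   # the summand bounded below by 1
```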

Proof of Part (c) of Theorem 4.2

Note that the multivariate likelihood ratio order is closed under marginalization. So, to prove the required result, it suffices to show that \(\left( 0, X^{\star }_{1:n}, \ldots ,X^{\star }_{n-1:n} \right) \le _{lr} \left( X^{\star }_{1:n}, \ldots ,X^{\star }_{n:n} \right) \), which holds, in view of Lemma 3.1, if

$$\begin{aligned}{} & {} \prod _{i=1}^{n-2} \Bigg \{ \left( \frac{\bar{F}_{i} \left( x_{i+1}\right) }{\bar{F}_{i+1} \left( x_{i+1}\right) }\right) ^{ \sum \limits _{j=1}^{n-i+1} \alpha _{j}^{(i)} -1 } \left( {\bar{F}_{i+1} \left( x_{i+1}\right) } \right) ^{ \sum \limits _{j=1}^{n-i+1} \alpha _{j}^{(i)} - \sum \limits _{j=1}^{n-i} \alpha _{j}^{(i+1)} -1 } f_{i}\left( x_{i+1} \right) \Bigg \} \nonumber \\{} & {} \times \left( {\bar{F}_{n-1} \left( x_n\right) } \right) ^{ \sum \limits _{j=1}^{2} \alpha _{j}^{(n-1)} -1 } f_{n-1}\left( x_{n} \right) \times \prod _{i=2}^{n-1} \Bigg \{ \left( \frac{\bar{F}_{i} \left( y_i\right) }{\bar{F}_{i+1} \left( y_i\right) }\right) ^{ \sum \limits _{j=1}^{n-i+1} \alpha _{j}^{(i)} -1 } \nonumber \\{} & {} \left( {\bar{F}_{i+1} \left( y_i\right) } \right) ^{ \sum \limits _{j=1}^{n-i+1} \alpha _{j}^{(i)} - \sum \limits _{j=1}^{n-i} \alpha _{j}^{(i+1)} -1 } f_{i}\left( y_{i} \right) \Bigg \} \left( {\bar{F}_{n} \left( y_n\right) } \right) ^{ \alpha _{1}^{(n)} -1 } f_{n}\left( y_{n} \right) \nonumber \\{} & {} \le \prod _{i=1}^{n-2} \Bigg \{ \left( \frac{\bar{F}_{i} \left( x_{i+1} \wedge y_{i+1} \right) }{\bar{F}_{i+1} \left( x_{i+1} \wedge y_{i+1}\right) }\right) ^{ \sum \limits _{j=1}^{n-i+1} \alpha _{j}^{(i)} -1 } \nonumber \\{} & {} \left( {\bar{F}_{i+1} \left( x_{i+1} \wedge y_{i+1} \right) } \right) ^{ \sum \limits _{j=1}^{n-i+1} \alpha _{j}^{(i)} - \sum \limits _{j=1}^{n-i} \alpha _{j}^{(i+1)} -1 } f_{i}\left( x_{i+1} \wedge y_{i+1} \right) \Bigg \} \nonumber \\{} & {} \times \left( {\bar{F}_{n-1} \left( x_n \wedge y_n \right) } \right) ^{ \sum \limits _{j=1}^{2} \alpha _{j}^{(n-1)} -1 } f_{n-1}\left( x_n \wedge y_n \right) \nonumber \\{} & {} \times \prod _{i=2}^{n-1} \Bigg \{ \left( \frac{\bar{F}_{i} \left( x_i \vee y_i \right) }{\bar{F}_{i+1} \left( x_i \vee y_i \right) }\right) ^{ \sum \limits _{j=1}^{n-i+1} \alpha _{j}^{(i)} -1 } \nonumber \\{} & {} \left( {\bar{F}_{i+1} \left( x_i \vee y_i \right) } \right) ^{ \sum \limits _{j=1}^{n-i+1} \alpha _{j}^{(i)} - \sum \limits _{j=1}^{n-i} \alpha _{j}^{(i+1)} -1 } f_{i}\left( x_i \vee y_i \right) \Bigg \} \nonumber \\{} & {} \times \left( {\bar{F}_{n} \left( x_n \vee y_n\right) } \right) ^{ \alpha _{1}^{(n)} -1 } f_{n}\left( x_n \vee y_{n} \right) \end{aligned}$$
(A.16)

for all \(x_2 \le \ldots \le x_n\) and \(y_1 \le \ldots \le y_n\). Let \(E_2 = \{ 1 \le i \le n-2:x_{i+1} \ge y_{i+1} \}\). Note that the above inequality follows if it holds on \(E_2\). On \(E_2\), (A.16) reduces to

$$\begin{aligned}{} & {} \prod _{i \in E_2} \Bigg \{ \left( \frac{\bar{F}_{i} \left( x_{i+1}\right) }{\bar{F}_{i+1} \left( x_{i+1}\right) }\right) ^{ \sum \limits _{j=1}^{n-i+1} \alpha _{j}^{(i)} -1 } \left( {\bar{F}_{i+1} \left( x_{i+1}\right) } \right) ^{ \sum \limits _{j=1}^{n-i+1} \alpha _{j}^{(i)} - \sum \limits _{j=1}^{n-i} \alpha _{j}^{(i+1)} -1 } f_{i}\left( x_{i+1} \right) \Bigg \} \nonumber \\{} & {} \times \left( {\bar{F}_{n-1} \left( x_n\right) } \right) ^{ \sum \limits _{j=1}^{2} \alpha _{j}^{(n-1)} -1 } f_{n-1}\left( x_{n} \right) \nonumber \\ {}{} & {} \times \prod _{i \in E_2} \Bigg \{ \left( \frac{\bar{F}_{i+1} \left( y_{i+1}\right) }{\bar{F}_{i+2} \left( y_{i+1}\right) }\right) ^{ \sum \limits _{j=1}^{n-i} \alpha _{j}^{(i+1)} -1 } \left( {\bar{F}_{i+2} \left( y_{i+1}\right) } \right) ^{ \sum \limits _{j=1}^{n-i} \alpha _{j}^{(i+1)} - \sum \limits _{j=1}^{n-i-1} \alpha _{j}^{(i+2)} -1 } f_{i+1}\left( y_{i+1} \right) \Bigg \} \nonumber \\ {}{} & {} \times \left( {\bar{F}_{n} \left( y_n\right) } \right) ^{ \alpha _{1}^{(n)} -1 } f_{n}\left( y_{n} \right) \nonumber \\ {}{} & {} \le \prod _{i \in E_2} \Bigg \{ \left( \frac{\bar{F}_{i} \left( y_{i+1} \right) }{\bar{F}_{i+1} \left( y_{i+1}\right) }\right) ^{ \sum \limits _{j=1}^{n-i+1} \alpha _{j}^{(i)} -1 } \left( {\bar{F}_{i+1} \left( y_{i+1} \right) } \right) ^{ \sum \limits _{j=1}^{n-i+1} \alpha _{j}^{(i)} - \sum \limits _{j=1}^{n-i} \alpha _{j}^{(i+1)} -1 } f_{i}\left( y_{i+1} \right) \Bigg \} \nonumber \\ {}{} & {} \times \left( {\bar{F}_{n-1} \left( x_n \wedge y_n \right) } \right) ^{ \sum \limits _{j=1}^{2} \alpha _{j}^{(n-1)} -1 } f_{n-1}\left( x_n \wedge y_n \right) \nonumber \\ {}{} & {} \times \prod _{i \in E_2} \Bigg \{ \left( \frac{\bar{F}_{i+1} \left( x_{i+1} \right) }{\bar{F}_{i+2} \left( x_{i+1}\right) }\right) ^{ \sum \limits _{j=1}^{n-i} \alpha _{j}^{(i+1)} -1 } \left( {\bar{F}_{i+2} \left( x_{i+1} \right) } \right) ^{ \sum \limits _{j=1}^{n-i} \alpha _{j}^{(i+1)} - \sum \limits _{j=1}^{n-i-1} \alpha _{j}^{(i+2)} -1 } f_{i+1}\left( x_{i+1} \right) \Bigg \} \nonumber \\ {}{} & {} \times \left( {\bar{F}_{n} \left( x_n \vee y_n\right) } \right) ^{ \alpha _{1}^{(n)} -1 } f_{n}\left( x_n \vee y_{n} \right) . \end{aligned}$$
(A.17)

Now, from the condition that “\(\left( { \sum \limits _{j=1}^{n-i+1} \alpha _{j}^{(i)} - \sum \limits _{j=1}^{n-i} \alpha _{j}^{(i+1)} }-1 \right) \) is positive and decreasing in \(i \in \{1,\ldots , n-1\}\)”, we get \( { \sum \limits _{j=1}^{n-i+1} \alpha _{j}^{(i)} - \sum \limits _{j=1}^{n-i} \alpha _{j}^{(i+1)} } \ge \sum \limits _{j=1}^{n-i} \alpha _{j}^{(i+1)} - \sum \limits _{j=1}^{n-i-1} \alpha _{j}^{(i+2)} \ge 1\) and \(\sum \limits _{j=1}^{n-i} \alpha _{j}^{(i+1)} \ge 1\), for \( i=1, \ldots , n-2\). Further, by using these along with the condition that “\( F_{i} \le _{hr} F_{i+1} \)”, we get

$$\begin{aligned}{} & {} \left( \frac{\bar{F}_{i} \left( x_{i+1}\right) }{\bar{F}_{i+1} \left( x_{i+1}\right) }\right) ^{ \sum \limits _{j=1}^{n-i+1} \alpha _{j}^{(i)} - \sum \limits _{j=1}^{n-i} \alpha _{j}^{(i+1)} } \le \; \left( \frac{\bar{F}_{i} \left( y_{i+1}\right) }{\bar{F}_{i+1} \left( y_{i+1}\right) }\right) ^{ \sum \limits _{j=1}^{n-i+1} \alpha _{j}^{(i)} - \sum \limits _{j=1}^{n-i} \alpha _{j}^{(i+1)} }, \end{aligned}$$
(A.18)
$$\begin{aligned}{} & {} \bar{F}^{ \sum \limits _{j=1}^{n-i+1} \alpha _{j}^{(i)} - \sum \limits _{j=1}^{n-i} \alpha _{j}^{(i+1)} -1}_{i+1} \le _{hr} \bar{F}^{\sum \limits _{j=1}^{n-i} \alpha _{j}^{(i+1)} - \sum \limits _{j=1}^{n-i-1} \alpha _{j}^{(i+2)} -1}_{i+2} \nonumber \\{} & {} \quad \text { and } \bar{F}^{ \sum \limits _{j=1}^{2} \alpha _{j}^{(n-1)} -1}_{n-1} \le _{hr} \bar{F}^{ \alpha _{1}^{(n)} -1}_{n}, \end{aligned}$$
(A.19)

for all \(x_{i+1} \ge y_{i+1} > 0\), for \( i=1, \ldots , n-2\). Again, from the condition that “\(\left( \bar{F}_{i+1}(u)\right) ^2/\bar{F}_{i}(u) \bar{F}_{i+2}(u)\) is increasing in \(u>0\)”, we get

$$\begin{aligned} \left( \frac{\left( \bar{F}_{i+1}(y_{i+ 1})\right) ^2}{\bar{F}_{i}(y_{i+1}) \bar{F}_{i+2}(y_{i+1})} \right) ^{ \sum \limits _{j=1}^{n-i} \alpha _{j}^{(i+1)} -1 } \le \; \left( \frac{\left( \bar{F}_{i+1}(x_{i+1})\right) ^2}{\bar{F}_{i}(x_{i+1}) \bar{F}_{i+2}(x_{i+1})} \right) ^{ \sum \limits _{j=1}^{n-i} \alpha _{j}^{(i+1)} -1 } \end{aligned}$$
(A.20)

for all \(x_{i+1} \ge y_{i+1} > 0\). Further, we have the assumption that “\( F_{i} \le _{lr} F_{i+1} \), for \( i=1, \ldots , n-1\)”. Finally, upon combining this with (A.18), (A.19) and (A.20), we get (A.17), and hence the required result. \(\square \)

Proof of Part (a) of Theorem 4.3

We have

$$\begin{aligned} \bar{F}_{X^{\star }_{2:n+1} }\left( t\right)= & {} P\left( X^{\star }_{1:n+1}> t\right) + \int _{0}^{t} P\left( X^{\star }_{2:n+1}> t | X^{\star }_{1:n+1} =z\right) {f}_{X^{\star }_{1:n+1}}\left( z\right) \, dz \nonumber \\= & {} \int _{0}^{\infty } \bar{G}_2\left( z|t\right) \; dF_{X_{1:n+1}^{\star }}(z), \quad t>0, \end{aligned}$$
(A.21)

where

$$\begin{aligned} \bar{G}_2\left( z|t\right) = {\left\{ \begin{array}{ll} \prod \limits _{j=1}^{n} \frac{ \bar{F}_j^{(2)}\left( t\right) }{ \bar{F}_j^{(2)}\left( z\right) } &{} \text {if }z \le t,\\ 1 &{} \text {if }z > t. \end{array}\right. } \end{aligned}$$

Note that \(\bar{F}_{j}^{(2)}(t) \le {\bar{F}_{j}^{(2)}(t)}/{\bar{F}_{j}^{(2)}(z)}\), for all \( 0 <z \le t\). This, together with the assumption that “\(F_j^{(1)} \le _{st} F_{j}^{(2)}\), \(j=1, \ldots , n\)”, implies that

$$\begin{aligned} \bar{G}_2\left( z|t\right) \ge \prod \limits _{j=1}^{n} { \bar{F}_j^{(2)}\left( t\right) } \nonumber \ge \prod \limits _{j=1}^{n} { \bar{F}_j^{(1)}\left( t\right) } \text { for all } 0 < z \le t, \end{aligned}$$

which further implies that \(X^{\star }_{1:n} \le _{st} X^{\star }_{2:n+1}\). Again, from Remark 3.1, we have

$$\begin{aligned} P\left( X^{\star }_{k:n} > x_k |X^{\star }_{1:n}= x_1, \ldots , X^{\star }_{k-1:n}= x_{k-1}\right) = {\left\{ \begin{array}{ll} \prod \limits _{j=1}^{n-k+1} \frac{ \bar{F}_j^{(k)}\left( x_k\right) }{ \bar{F}_j^{(k)}\left( x_{k-1}\right) } &{} \text { for }x_k \ge x_{k-1},\\ 1 &{} \text { for }x_k < x_{k-1}, \end{array}\right. } \nonumber \\ \end{aligned}$$
(A.22)

for \(k = 2, \ldots , n\). As each \(\bar{F}_j^{(k)} \left( \cdot \right) \) is a decreasing function, \(P(X^{\star }_{k:n} > x_k |X^{\star }_{1:n}= x_1, \) \(\ldots , X^{\star }_{k-1:n}= x_{k-1})\) is continuous and increasing in \(x_{k-1} >0\). Then, \(\left( X^{\star }_{1:n}, \ldots ,X^{\star }_{n:n} \right) \) is CIS. Thus, in view of Theorem 6.B.4 of Shaked and Shanthikumar (2007), we only need to show that

$$\begin{aligned} P\left( X^{\star }_{i:n}> t | X^{\star }_{i-1:n}= s\right)\le & {} P\left( X^{\star }_{i+1:n+1} > t | X^{\star }_{i:n+1}= s\right) \end{aligned}$$
(A.23)

for all \(t>0\), and \(i=2, \ldots , n\). Now, from (A.22), we get

$$\begin{aligned} P\left( X^{\star }_{i:n}> t | X^{\star }_{i-1:n}= s\right)= & {} \prod \limits _{j=1}^{n-i+1} \frac{ \bar{F}_j^{(i)}\left( t\right) }{ \bar{F}_j^{(i)}\left( s\right) } \le \prod \limits _{j=1}^{n-i+1} \frac{ \bar{F}_j^{(i+1)}\left( t\right) }{ \bar{F}_j^{(i+1)}\left( s\right) } \nonumber \\ {}= & {} P\left( X^{\star }_{i+1:n+1} > t | X^{\star }_{i:n+1}= s\right) \end{aligned}$$

\( \text {for all } t \ge s>0, \) where the inequality follows from the facts that each \(\bar{F}_j^{(i)} \left( \cdot \right) \) is a decreasing function and \(F_j^{(i)} \le _{hr} F_{j}^{(i+1)}\), for \(j=1, \ldots , n-i+1\), \(i=2, \ldots , n\). Hence, (A.23) holds for \(t \ge s>0\). Further, for \(t < s\), (A.23) holds readily, and hence, the required result. \(\square \)
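Since the conditional law in (A.21) is memoryless for exponential stage distributions, the comparison \(X^{\star }_{1:n} \le _{st} X^{\star }_{2:n+1}\) is easy to probe by simulation. The following Monte Carlo sketch (ours; all rates are illustrative assumptions) does so, with a small tolerance for sampling noise.

```python
import numpy as np

rng = np.random.default_rng(11)

# Hedged Monte Carlo sketch (ours) of X*_{1:n} <=_st X*_{2:n+1} with
# hypothetical exponential stage distributions. In the (n+1)-component model,
# the first failure occurs at rate lam1.sum(); by the memoryless form of
# (A.21), the second failure then follows after an independent
# Exp(lam2.sum()) delay.
lam1 = np.array([1.2, 1.0, 0.8, 0.6, 0.5])  # F_j^{(1)}, j = 1..n+1 (n = 4)
lam2 = np.array([1.0, 0.8, 0.6, 0.5])       # F_j^{(2)}, j = 1..n, lam1[j] >= lam2[j]

m = 200_000
x1 = rng.exponential(1.0 / lam1.sum(), m)        # X*_{1:n+1}
x2 = x1 + rng.exponential(1.0 / lam2.sum(), m)   # X*_{2:n+1}
x1n = rng.exponential(1.0 / lam1[:4].sum(), m)   # X*_{1:n}, n components

for t in (0.2, 0.5, 1.0, 2.0):
    # empirical survival comparison, with slack for sampling noise
    assert (x2 > t).mean() >= (x1n > t).mean() - 0.01
```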

Proof of Part (b) of Theorem 4.3

Let \(r_{j}^{(i)}(\cdot )\) be the hazard rate function of \(F_{j}^{(i)}\), for \(j=1,\ldots , n-i+2\), \(i=1,\ldots , n+1\). Further, let \(\eta _{\cdot | \cdot } \left( \cdot \right) \) and \(\lambda _{\cdot | \cdot } \left( \cdot \right) \) be the multivariate conditional hazard rate functions of \(\left( X^{\star }_{1:n}, \ldots ,X^{\star }_{n-1:n} \right) \) and \(\left( X^{\star }_{2:n+1}, \ldots ,X^{\star }_{n+1:n+1} \right) \), respectively. Note that \(X^{\star }_{1:n} \le \ldots \le X^{\star }_{n:n}\). Then, from Remark 3.1 and (6.C.2) of Shaked and Shanthikumar (2007), we have

$$\begin{aligned} \lambda _{k | I}\left( u | \varvec{t}_I\right)= & {} {\left\{ \begin{array}{ll} \sum \limits _{j=1}^{n-i} r_{j}^{(i+2)} \left( u\right) &{} \text { for } k=i+1,\\ 0 &{} \text { for } k >i+1, \end{array}\right. } \end{aligned}$$
(A.24)

where \(I = \left\{ 1, \ldots , i \right\} \), for some i. Similarly, we have

$$\begin{aligned} \eta _{k | I \cup J}\left( u | \varvec{s}_{I \cup J}\right)= & {} {\left\{ \begin{array}{ll} \sum \limits _{j=1}^{n-m} r_{j}^{(m+1)} \left( u\right) &{} \text { for } k=m+1,\\ 0 &{} \text { for } k >m+1, \end{array}\right. } \end{aligned}$$
(A.25)

where \(I = \left\{ 1, \ldots , i \right\} \) and \(J = \left\{ i+1, \ldots , m \right\} \), for some \(m\ge i\) and \(i=0, 1, \ldots , n-1\). In view of (6.D.13) of Shaked and Shanthikumar (2007), to prove the required result, it suffices to show that

$$\begin{aligned} \eta _{k | I \cup J}\left( u | \varvec{s}_{I \cup J}\right)\ge & {} \lambda _{k | I}\left( u | \varvec{t}_I\right) \text { for all } k \in \overline{I \cup J}, \end{aligned}$$
(A.26)

where \(I = \left\{ 1, \ldots , i \right\} \) and \(J = \left\{ i+1, \ldots , m \right\} \), for \(0 \le i \le n-1\), \(\varvec{s}_I \le \varvec{t}_I \le u\varvec{e}\) and \(\varvec{s}_J \le u\varvec{e}\). Now, consider the following three cases.

Case I: Let \(i\ge 1\) and \(m>i\). As \(\lambda _{k | I}\left( u | \varvec{t}_I\right) = 0, \text { for } k \ge m+1\), (A.26) holds readily.

Case II: Let \(i\ge 1\) and \(m=i\). Then,

$$\begin{aligned} \eta _{k | I \cup J}\left( u | \varvec{s}_{I \cup J}\right) = \sum \limits _{j=1}^{n-m} r_{j}^{(m+1)} \left( u\right) \nonumber \ge \sum \limits _{j=1}^{n-m} r_{j}^{(m+2)} \left( u\right) \nonumber = \lambda _{k | I}\left( u | \varvec{t}_I\right) , \text { for } k=m+1, \end{aligned}$$

where the inequality follows from the assumption that “\(F_{j}^{(m+1)} \le _{hr} F_{j}^{(m+2)}\), for \(m=1, \ldots , n-1\)”. Further,

\(\eta _{k | I \cup J}\left( u | \varvec{s}_{I \cup J}\right) = 0= \lambda _{k | I}\left( u | \varvec{t}_I\right) , \text { for } k>m+1\). Thus, (A.26) holds.

Case III: Let \(i = 0\). Then, (A.26) can be equivalently written as

\(\eta _{k | J}\left( u | \varvec{s}_{ J}\right) \ge \lambda _{k | \emptyset }\left( u | \varvec{t}_{ \emptyset }\right) \). Note that \( \lambda _{k | \emptyset }\left( u | \varvec{t}_{ \emptyset }\right) = 0\), for \(k\ge 2\). Thus, we only need to show that \( \eta _{1 | \emptyset }\left( u | \varvec{s}_{ \emptyset }\right) \ge \lambda _{1 | \emptyset }\left( u | \varvec{t}_{ \emptyset }\right) ,\) or equivalently, \(X^{\star }_{1:n} \le _{hr} X^{\star }_{2:n+1}\). From Remark 3.1, we have

$$\begin{aligned} \frac{{r}_{X^{\star }_{1:n}}\left( t\right) }{{r}_{X^{\star }_{2:n+1}}\left( t\right) }= & {} \frac{{f}_{X^{\star }_{1:n}}\left( t\right) }{ \sum \limits _{j=1}^{n} r_{j}^{(2)} \left( t\right) \int _{0}^{t} \prod \limits _{j=1}^{n} \frac{\bar{F}_j^{(2)} \left( t\right) }{\bar{F}_j^{(2)} \left( z\right) } {f}_{X^{\star }_{1:n+1}}\left( z\right) dz } + \frac{{r}_{X^{\star }_{1:n}}\left( t\right) }{ \sum \limits _{j=1}^{n} r_{j}^{(2)} \left( t\right) } \end{aligned}$$
(A.27)

for all \(t>0\). Now, from the given condition that “\(F_{j}^{(1)} \;\le _{hr } \; F_{j}^{(2)}\), \(j=1,2,\ldots , n\)”, we get \({{r}_{X^{\star }_{1:n}}\left( t\right) }\Big /{ \sum \limits _{j=1}^{n} r_{j}^{(2)} \left( t\right) } \ge 1 \text { for all } t>0\). Furthermore, we have

$$\begin{aligned} \frac{{f}_{X^{\star }_{1:n}}\left( t\right) }{ \sum \limits _{j=1}^{n} r_{j}^{(2)} \left( t\right) \int _{0}^{t} \prod \limits _{j=1}^{n} \frac{\bar{F}_j^{(2)} \left( t\right) }{\bar{F}_j^{(2)} \left( z\right) } {f}_{X^{\star }_{1:n+1}}\left( z\right) dz } \ge 0 \text { for all }t>0. \end{aligned}$$

Consequently, from (A.27), we get \( {r}_{X^{\star }_{1:n}}\left( t\right) \; \ge \; {r}_{X^{\star }_{2:n+1}}\left( t\right) \), for all \(t>0\), and so \(X^{\star }_{1:n} \le _{hr} X^{\star }_{2:n+1}\). Thus, (A.26) follows, and hence, the required result. \(\square \)

Proof of Part (c) of Theorem 4.3

Note that the multivariate likelihood ratio order is closed under marginalization. So, to prove the required result, it suffices to show that \(\left( 0, X^{\star }_{1:n}, \ldots ,X^{\star }_{n:n} \right) \le _{lr} \left( X^{\star }_{1:n+1}, \ldots ,X^{\star }_{n+1:n+1} \right) \), which holds, in view of Lemma 3.1, if

$$\begin{aligned}{} & {} \prod _{i=1}^{n-1} \Bigg \{ \left( \frac{\bar{F}_{i} \left( x_{i+1}\right) }{\bar{F}_{i+1} \left( x_{i+1}\right) }\right) ^{ \sum \limits _{j=1}^{n-i+1} \alpha _{j}^{(i)} -1 } \left( {\bar{F}_{i+1} \left( x_{i+1}\right) } \right) ^{ \sum \limits _{j=1}^{n-i+1} \alpha _{j}^{(i)} - \sum \limits _{j=1}^{n-i} \alpha _{j}^{(i+1)} -1 } f_{i}\left( x_{i+1} \right) \Bigg \} \nonumber \\ {}{} & {} \times \left( {\bar{F}_{n} \left( x_{n+1}\right) } \right) ^{ \alpha _{1}^{(n)} -1 } f_{n}\left( x_{n+1} \right) \nonumber \\{} & {} \times \prod _{i=2}^{n} \Bigg \{ \left( \frac{\bar{F}_{i} \left( y_i\right) }{\bar{F}_{i+1} \left( y_i\right) }\right) ^{ \sum \limits _{j=1}^{n-i+2} \alpha _{j}^{(i)} -1 } \left( {\bar{F}_{i+1} \left( y_i\right) } \right) ^{ \sum \limits _{j=1}^{n-i+2} \alpha _{j}^{(i)} - \sum \limits _{j=1}^{n-i+1} \alpha _{j}^{(i+1)} -1 } f_{i}\left( y_{i} \right) \Bigg \} \nonumber \\ {}{} & {} \times \left( {\bar{F}_{n+1} \left( y_{n+1}\right) } \right) ^{ \alpha _{1}^{(n+1)} -1 } f_{n+1}\left( y_{n+1} \right) \nonumber \\ {}{} & {} \le \prod _{i=1}^{n-1} \Bigg \{ \left( \frac{\bar{F}_{i} \left( x_{i+1} \wedge y_{i+1} \right) }{\bar{F}_{i+1} \left( x_{i+1} \wedge y_{i+1}\right) }\right) ^{ \sum \limits _{j=1}^{n-i+1} \alpha _{j}^{(i)} -1 } \nonumber \\{} & {} \left( {\bar{F}_{i+1} \left( x_{i+1} \wedge y_{i+1} \right) } \right) ^{ \sum \limits _{j=1}^{n-i+1} \alpha _{j}^{(i)} - \sum \limits _{j=1}^{n-i} \alpha _{j}^{(i+1)} -1 } f_{i}\left( x_{i+1} \wedge y_{i+1} \right) \Bigg \} \nonumber \\ {}{} & {} \times \left( {\bar{F}_{n} \left( x_{n+1} \wedge y_{n+1} \right) } \right) ^{ \alpha _{1}^{(n)} -1 } f_{n}\left( x_{n+1} \wedge y_{n+1} \right) \nonumber \\ {}{} & {} \times \prod _{i=2}^{n} \Bigg \{ \left( \frac{\bar{F}_{i} \left( x_i \vee y_i \right) }{\bar{F}_{i+1} \left( x_i \vee y_i \right) }\right) ^{ \sum \limits _{j=1}^{n-i+2} \alpha _{j}^{(i)} -1 } \left( {\bar{F}_{i+1} \left( x_i \vee y_i \right) } \right) ^{ \sum \limits _{j=1}^{n-i+2} \alpha _{j}^{(i)} - \sum \limits _{j=1}^{n-i+1} \alpha _{j}^{(i+1)} -1 } f_{i}\left( x_i \vee y_i \right) \Bigg \} \nonumber \\ {}{} & {} \times \left( {\bar{F}_{n+1} \left( x_{n+1} \vee y_{n+1}\right) } \right) ^{ \alpha _{1}^{(n+1)} -1 } f_{n+1}\left( x_{n+1} \vee y_{n+1} \right) \end{aligned}$$
(A.28)

for all \(x_2 \le \ldots \le x_{n+1}\) and \(y_1 \le \ldots \le y_{n+1}\). Let \(E_3 = \{ 1 \le i \le n-1:x_{i+1} \ge y_{i+1} \}\). Then, the above inequality follows if it holds on \(E_3\). Given \(E_3\), (A.28) reduces to

$$\begin{aligned}{} & {} \prod _{i \in E_3} \Bigg \{ \left( \frac{\bar{F}_{i} \left( x_{i+1}\right) }{\bar{F}_{i+1} \left( x_{i+1}\right) }\right) ^{ \sum \limits _{j=1}^{n-i+1} \alpha _{j}^{(i)} -1 } \left( {\bar{F}_{i+1} \left( x_{i+1}\right) } \right) ^{ \sum \limits _{j=1}^{n-i+1} \alpha _{j}^{(i)} - \sum \limits _{j=1}^{n-i} \alpha _{j}^{(i+1)} -1 } f_{i}\left( x_{i+1} \right) \Bigg \} \nonumber \\ {}{} & {} \times \left( {\bar{F}_{n} \left( x_{n+1}\right) } \right) ^{ \alpha _{1}^{(n)} -1 } f_{n}\left( x_{n+1} \right) \nonumber \\ {}{} & {} \times \prod _{i \in E_3} \Bigg \{ \left( \frac{\bar{F}_{i+1} \left( y_{i+1}\right) }{\bar{F}_{i+2} \left( y_{i+1}\right) }\right) ^{ \sum \limits _{j=1}^{n-i+1} \alpha _{j}^{(i+1)} -1 } \nonumber \\{} & {} \left( {\bar{F}_{i+2} \left( y_{i+1}\right) } \right) ^{ \sum \limits _{j=1}^{n-i+1} \alpha _{j}^{(i+1)} - \sum \limits _{j=1}^{n-i} \alpha _{j}^{(i+2)} -1 } f_{i+1}\left( y_{i+1} \right) \Bigg \} \nonumber \\ {}{} & {} \times \left( {\bar{F}_{n+1} \left( y_{n+1}\right) } \right) ^{ \alpha _{1}^{(n+1)} -1 } f_{n+1}\left( y_{n+1} \right) \nonumber \\ {}{} & {} \le \prod _{i \in E_3} \Bigg \{ \left( \frac{\bar{F}_{i} \left( y_{i+1} \right) }{\bar{F}_{i+1} \left( y_{i+1}\right) }\right) ^{ \sum \limits _{j=1}^{n-i+1} \alpha _{j}^{(i)} -1 } \nonumber \\{} & {} \left( {\bar{F}_{i+1} \left( y_{i+1} \right) } \right) ^{ \sum \limits _{j=1}^{n-i+1} \alpha _{j}^{(i)} - \sum \limits _{j=1}^{n-i} \alpha _{j}^{(i+1)} -1 } f_{i}\left( y_{i+1} \right) \Bigg \} \nonumber \\ {}{} & {} \times \left( {\bar{F}_{n} \left( x_{n+1} \wedge y_{n+1} \right) } \right) ^{ \alpha _{1}^{(n)} -1 } f_{n}\left( x_{n+1} \wedge y_{n+1} \right) \nonumber \\ {}{} & {} \times \prod _{i \in E_3} \Bigg \{ \left( \frac{\bar{F}_{i+1} \left( x_{i+1} \right) }{\bar{F}_{i+2} \left( x_{i+1}\right) }\right) ^{ \sum \limits _{j=1}^{n-i+1} \alpha _{j}^{(i+1)} -1 } \nonumber \\{} & {} \left( {\bar{F}_{i+2} \left( x_{i+1} \right) } \right) ^{ \sum \limits _{j=1}^{n-i+1} \alpha _{j}^{(i+1)} - \sum \limits _{j=1}^{n-i} \alpha _{j}^{(i+2)} -1 } f_{i+1}\left( x_{i+1} \right) \Bigg \} \nonumber \\ {}{} & {} \times \left( {\bar{F}_{n+1} \left( x_{n+1} \vee y_{n+1}\right) } \right) ^{ \alpha _{1}^{(n+1)} -1 } f_{n+1}\left( x_{n+1} \vee y_{n+1} \right) . \end{aligned}$$
(A.29)

Now, from the condition that “\(\big (\sum \limits _{j=1}^{n-i+1} \left( \alpha _{j}^{(i)} - \alpha _{j}^{(i+1)} \right) -1 \big )\) is positive and decreasing in \(i \in \{1,\ldots , n\}\)”, we get \({ \sum \limits _{j=1}^{n-i+1} \alpha _{j}^{(i)} - \sum \limits _{j=1}^{n-i} \alpha _{j}^{(i+1)} } \ge {\sum \limits _{j=1}^{n-i+1} \alpha _{j}^{(i+1)} - \sum \limits _{j=1}^{n-i} \alpha _{j}^{(i+2)} } \ge 1\), \(\sum \limits _{j=1}^{n-i+1} \alpha _{j}^{(i+1)} \ge 1\), for \( i=1, \ldots , n-1\), and \({ \alpha _{1}^{(n)} } \ge { \alpha _{1}^{(n+1)} } \ge 1\). These, together with the condition that “\( F_{i} \le _{hr} F_{i+1} \)”, imply that

$$\begin{aligned} \left( \frac{\bar{F}_{i} \left( x_{i+1}\right) }{\bar{F}_{i+1} \left( x_{i+1}\right) }\right) ^{ \sum \limits _{j=1}^{n-i+1} \alpha _{j}^{(i)} - \sum \limits _{j=1}^{n-i+1} \alpha _{j}^{(i+1)} } \le \; \left( \frac{\bar{F}_{i} \left( y_{i+1}\right) }{\bar{F}_{i+1} \left( y_{i+1}\right) }\right) ^{ \sum \limits _{j=1}^{n-i+1} \alpha _{j}^{(i)} - \sum \limits _{j=1}^{n-i+1} \alpha _{j}^{(i+1)} },\nonumber \\ \end{aligned}$$
(A.30)
$$\begin{aligned} \bar{F}^{ \sum \limits _{j=1}^{n-i+1} \alpha _{j}^{(i)} - \sum \limits _{j=1}^{n-i} \alpha _{j}^{(i+1)} -1}_{i+1} \le _{hr} \bar{F}^{\sum \limits _{j=1}^{n-i+1} \alpha _{j}^{(i+1)} - \sum \limits _{j=1}^{n-i} \alpha _{j}^{(i+2)} -1}_{i+2} \text { and } \bar{F}^{ \alpha _{1}^{(n)} -1}_{n} \le _{hr} \bar{F}^{ \alpha _{1}^{(n+1)} -1}_{n+1},\nonumber \\ \end{aligned}$$
(A.31)

for all \(x_{i+1} \ge y_{i+1} > 0\), for \( i=1, \ldots , n-1\). Moreover, from the condition that “\(\left( \bar{F}_{i+1}(u)\right) ^2/\bar{F}_{i}(u) \bar{F}_{i+2}(u)\) is increasing in \(u>0\)”, we get

$$\begin{aligned} \left( \frac{\left( \bar{F}_{i+1}(y_{i+ 1})\right) ^2}{\bar{F}_{i}(y_{i+1}) \bar{F}_{i+2}(y_{i+1})} \right) ^{ \sum \limits _{j=1}^{n-i+1} \alpha _{j}^{(i+1)} -1 } \le \; \left( \frac{\left( \bar{F}_{i+1}(x_{i+1})\right) ^2}{\bar{F}_{i}(x_{i+1}) \bar{F}_{i+2}(x_{i+1})} \right) ^{ \sum \limits _{j=1}^{n-i+1} \alpha _{j}^{(i+1)} -1 } \end{aligned}$$
(A.32)

for all \(x_{i+1} \ge y_{i+1} > 0\). Additionally, we have the assumption that “\( F_{i} \le _{lr} F_{i+1} \), for \( i=1, \ldots , n\)”. Finally, upon combining this with (A.30), (A.31) and (A.32), we get (A.29). Hence, the required result. \(\square \)

Proof of Part (a) of Theorem 4.4

From Lemma 3.4, we have

$$\begin{aligned} X^{\star }_{i:n} {\mathop {=}\limits ^{d}} D^{-1} \left( \sum _{j=1}^{i}W_{j,n} \right) \text { and } X^{\star }_{i+1:n} {\mathop {=}\limits ^{d}} D^{-1} \left( \sum _{j=1}^{i+1}W_{j,n} \right) . \end{aligned}$$

Note that the likelihood ratio order is closed under increasing transformations. As F is continuous, \(D^{-1} \left( \cdot \right) \) is increasing, and so the required result holds if and only if \(\sum _{j=1}^{i}W_{j,n} \le _{lr} \sum _{j=1}^{i+1}W_{j,n}\), for all \(i=1,\ldots , n-1\). Now, \(W_{j, n}\), \(j=1,\ldots ,i+1\), are independent and non-negative random variables. Thus, in view of Theorem 1.C.9 of Shaked and Shanthikumar (2007), the result follows (i.e., the above inequality holds) provided \(W_{j,n}\) is ILR, for all \(j=1,\ldots , i+1\). From (3.9), we have

$$\begin{aligned} \frac{f'_{W_{j,n}} \left( t\right) }{f_{W_{j,n}} \left( t\right) }= & {} -\sum _{l=1}^{n-j+1} \alpha _{l}^{(j)}, \quad t>0, \end{aligned}$$

which implies that \({f'_{W_{j,n}} \left( t\right) } / {f_{W_{j,n}} \left( t\right) }\) is decreasing in \(t>0\) and so \(W_{j,n}\) is ILR, for all \(j=1, \ldots , i+1\). Hence, the required result. \(\square \)
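The exponential form of \(W_{j,n}\) implied by (3.9) makes the ILR property transparent: the log-density is linear, so its derivative is constant. A hedged sketch (ours), with hypothetical \(\alpha _l^{(j)}\) values:

```python
import numpy as np

# Hedged sketch (ours): by (3.9), W_{j,n} is exponential with rate
# sum_{l=1}^{n-j+1} alpha_l^{(j)}, so log f_{W_{j,n}}(t) is linear in t and
# f'/f = -rate is constant (hence non-increasing), i.e. W_{j,n} is ILR.
alpha = [1.4, 1.1, 0.9, 0.6]           # hypothetical alpha_l^{(j)}, l = 1..4
rate = sum(alpha)

t = np.linspace(0.01, 5.0, 400)
log_f = np.log(rate) - rate * t        # log of the exponential density
slope = np.diff(log_f) / np.diff(t)    # numerical derivative of log f

assert np.allclose(slope, -rate)       # constant log-density slope: ILR
```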

Proof of Part (b) of Theorem 4.4

From Lemma 3.4, we have

$$\begin{aligned} X^{\star }_{i:n+1}{\mathop {=}\limits ^{d}} D^{-1} \left( \sum _{j=1}^{i}W_{j,n+1} \right) \text { and } X^{\star }_{i:n} {\mathop {=}\limits ^{d}} D^{-1} \left( \sum _{j=1}^{i}W_{j,n} \right) . \end{aligned}$$

Note that the likelihood ratio order is closed under increasing transformations. As F is continuous, \(D^{-1} \left( \cdot \right) \) is increasing, and so the required result holds if and only if \(\sum _{j=1}^{i}W_{j, n+1} \le _{lr} \sum _{j=1}^{i}W_{j, n} \), for all \(i=1,\ldots ,n\). Now, \(W_{j, n}\) and \(W_{j, n+1}\), \(j=1,\ldots ,i\), are independent and non-negative random variables. Thus, in view of Theorem 1.C.9 of Shaked and Shanthikumar (2007), the result follows (i.e., the above inequality holds) provided \(W_{j,n}\) and \(W_{j,n+1}\) are ILR, and \(W_{j, n+1} \le _{lr} W_{j, n} \), for all \(j=1,\ldots , i\). From (3.9), we have

$$\begin{aligned} \frac{f'_{W_{j,n}} \left( t\right) }{f_{W_{j,n}} \left( t\right) }= & {} -\sum _{l=1}^{n-j+1} \alpha _{l}^{(j)}, \quad t>0, \end{aligned}$$

which implies that \({f'_{W_{j,n}} \left( t\right) } / {f_{W_{j,n}} \left( t\right) }\) is decreasing in \(t>0\) and so \(W_{j,n}\) is ILR, for all \(j=1, \ldots , i\). Similarly, we get \(W_{j,n+1}\) to be ILR, for all \(j=1, \ldots , i\). Again, the condition that “\( \alpha _{n-j+2}^{(j)}>0 \)” implies \(W_{j, n+1} \le _{lr} W_{j, n} \), for all \(j=1,\ldots , i\). Hence, the required result. \(\square \)

Proof of Part (c) of Theorem 4.4

From Lemma 3.4, we have

$$\begin{aligned} X^{\star }_{i:n} {\mathop {=}\limits ^{d}} D^{-1} \left( \sum _{j=1}^{i}W_{j,n} \right) \text { and } X^{\star }_{i+1:n+1} {\mathop {=}\limits ^{d}} D^{-1} \left( \sum _{j=1}^{i+1}W_{j,n+1} \right) . \end{aligned}$$

Note that the likelihood ratio order is closed under increasing transformations. As F is continuous, \(D^{-1} \left( \cdot \right) \) is increasing, and so the required result holds if and only if \(\sum _{j=1}^{i}W_{j, n} \le _{lr} \sum _{j=1}^{i+1}W_{j, n+1} \), \(i=1,\ldots , n\). Now, \(W_{j, n}\) and \(W_{j, n+1}\), \(j=1,\ldots ,i+1\), are independent and non-negative random variables. Thus, in view of Theorem 1.C.9 of Shaked and Shanthikumar (2007), the result follows (i.e., the above inequality holds) provided \(W_{j,n}\), \(W_{1,n+1}\) and \(W_{j+1,n+1}\) are ILR, and \(W_{j, n} \le _{lr} W_{j+1, n+1} \), for all \(j=1,\ldots , i\). From (3.9), we have

$$\begin{aligned} \frac{f'_{W_{j,n}} \left( t\right) }{f_{W_{j,n}} \left( t\right) }= & {} -\sum _{l=1}^{n-j+1} \alpha _{l}^{(j)}, \quad t>0, \end{aligned}$$

which implies that \({f'_{W_{j,n}} \left( t\right) } / {f_{W_{j,n}} \left( t\right) }\) is decreasing in \(t>0\) and so \(W_{j,n}\) is ILR, for all \(j=1, \ldots , i\). Similarly, we get \(W_{j,n+1}\) to be ILR, for all \(j=1, \ldots , i+1\). Again, from the condition that “\(\alpha _{l}^{(j)} \ge \alpha _{l}^{(j+1)}\), for \(l=1, \ldots , n-j+1\)”, we get \(W_{j, n} \le _{lr} W_{j+1, n+1} \), for all \(j=1,\ldots , i\). Hence, the required result. \(\square \)
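For exponential random variables, the likelihood ratio order reduces to a comparison of rates, which is how the last step works. A hedged numerical sketch (ours) with illustrative, termwise-ordered \(\alpha \) values:

```python
import numpy as np

# Hedged sketch (ours): for exponentials, X <=_lr Y holds iff
# rate(X) >= rate(Y), since the density ratio f_Y(t) / f_X(t) is then
# increasing in t. Termwise alpha_l^{(j)} >= alpha_l^{(j+1)} (illustrative
# values below) gives rate(W_{j,n}) >= rate(W_{j+1,n+1}).
alpha_j = np.array([1.5, 1.2, 1.0])    # alpha_l^{(j)},   l = 1..n-j+1
alpha_j1 = np.array([1.1, 0.9, 0.7])   # alpha_l^{(j+1)}, termwise smaller

rate_a = alpha_j.sum()                 # rate of W_{j,n}
rate_b = alpha_j1.sum()                # rate of W_{j+1,n+1}

t = np.linspace(0.01, 5.0, 200)
ratio = (rate_b * np.exp(-rate_b * t)) / (rate_a * np.exp(-rate_a * t))
assert rate_a >= rate_b and np.all(np.diff(ratio) > 0)  # W_{j,n} <=_lr W_{j+1,n+1}
```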

Proof of Part (a) of Theorem 4.5

From Lemma 3.4, we have

$$\begin{aligned} X^{\star }_{i:n} {\mathop {=}\limits ^{d}} D^{-1} \left( \sum _{j=1}^{i}W_{j,n} \right) \text { and } X^{\star }_{i+1:n} {\mathop {=}\limits ^{d}} D^{-1} \left( \sum _{j=1}^{i+1}W_{j,n} \right) . \end{aligned}$$

As F is DFR, \(D^{-1} \left( \cdot \right) \) is increasing and convex, and so, in view of Theorem 3.B.10(a) of Shaked and Shanthikumar (2007), the result holds if \(\sum _{j=1}^{i}W_{j,n} \le _{disp} \sum _{j=1}^{i+1}W_{j,n}\) and \(\sum _{j=1}^{i}W_{j,n} \le _{st} \sum _{j=1}^{i+1}W_{j,n}\), for all \(i=1, \ldots , n-1\). Now, \(W_{j, n}\), \(j=1,\ldots ,i+1\), are independent and non-negative random variables. Thus, in view of Theorem 1.1 of Khaledi and Kochar (2000), the result follows (i.e., the above inequalities hold) provided \(W_{j,n}\) is ILR, for all \(j=1,\ldots ,i+1\). From (3.9), we have

$$\begin{aligned} \frac{f'_{W_{j,n}} \left( t\right) }{f_{W_{j,n}} \left( t\right) }= & {} -\sum _{l=1}^{n-j+1} \alpha _{l}^{(j)}, \quad t>0, \end{aligned}$$

which implies that \({f'_{W_{j,n}} \left( t\right) } / {f_{W_{j,n}} \left( t\right) }\) is decreasing in \(t>0\) and so \(W_{j,n}\) is ILR, for all \(j=1, \ldots , i+1\). Hence, the required result. \(\square \)
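The dispersive and usual stochastic comparisons of the partial sums can be probed empirically. The Monte Carlo sketch below (ours; the rates of \(W_{j,n}\) are illustrative assumptions) compares empirical quantiles of successive partial sums; the quantile difference should be non-negative (\(\le _{st}\)) and increasing in the probability level (\(\le _{disp}\)).

```python
import numpy as np

rng = np.random.default_rng(7)

# Hedged Monte Carlo sketch (ours): with W_{j,n} ~ Exp(rates[j]) independent
# (illustrative rates), compare empirical quantiles of the partial sums
# S_i and S_{i+1} = S_i + W_{i+1,n}. For <=_st the quantile difference should
# be non-negative; for <=_disp it should be increasing in u.
rates = [4.0, 3.0, 2.0]                # rates of W_{1,n}, W_{2,n}, W_{3,n}
w = [rng.exponential(1.0 / r, 200_000) for r in rates]

s_i = w[0] + w[1]                      # partial sum with i = 2 terms
s_ip1 = s_i + w[2]                     # partial sum with i + 1 = 3 terms

u = np.linspace(0.05, 0.95, 19)
diff = np.quantile(s_ip1, u) - np.quantile(s_i, u)
print(np.all(diff >= 0), np.all(np.diff(diff) >= -1e-3))  # expect: True True
```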

Proof of Part (b) of Theorem 4.5

From Lemma 3.4, we have

$$\begin{aligned} X^{\star }_{i:n+1} {\mathop {=}\limits ^{d}} D^{-1} \left( \sum _{j=1}^{i}W_{j,n+1} \right) \text { and } X^{\star }_{i:n} {\mathop {=}\limits ^{d}} D^{-1} \left( \sum _{j=1}^{i}W_{j,n} \right) . \end{aligned}$$

As F is DFR, \(D^{-1} \left( \cdot \right) \) is increasing and convex, and so, in view of Theorem 3.B.10(a) of Shaked and Shanthikumar (2007), the result holds if \(\sum _{j=1}^{i}W_{j, n+1} \le _{disp} \sum _{j=1}^{i}W_{j, n} \) and \(\sum _{j=1}^{i}W_{j, n+1} \le _{st} \sum _{j=1}^{i}W_{j, n} \), for all \(i=1,\ldots ,n\). Now, \(W_{j, n}\) and \(W_{j, n+1}\), \(j=1,\ldots ,i\), are independent and non-negative random variables. Thus, in view of Theorem 1.1 of Khaledi and Kochar (2000), the result follows (i.e., the above inequalities hold) provided \(W_{j,n}\) and \(W_{j,n+1}\) are ILR, \(W_{j, n+1} \le _{disp} W_{j, n} \) and \(W_{j, n+1} \le _{st} W_{j, n} \), for all \(j=1,\ldots , i\). From (3.9), we have

$$\begin{aligned} \frac{f'_{W_{j,n}} \left( t\right) }{f_{W_{j,n}} \left( t\right) }= & {} -\sum _{l=1}^{n-j+1} \alpha _{l}^{(j)}, \quad t>0, \end{aligned}$$

which implies that \({f'_{W_{j,n}} \left( t\right) } / {f_{W_{j,n}} \left( t\right) }\) is decreasing in \(t>0\) and so \(W_{j,n}\) is ILR, for all \(j=1, \ldots , i\). Similarly, we get \(W_{j,n+1}\) to be ILR, for all \(j=1, \ldots , i\). Again, from the condition that “\( \alpha _{n-j+2}^{(j)} >0\)”, we get

$$\begin{aligned} -\left\{ \frac{1}{ \sum _{l=1}^{n-j+1} \alpha _{l}^{(j)} } - \frac{1}{ \sum _{l=1}^{n-j+2} \alpha _{l}^{(j)} } \right\} \ln \left( 1-u \right) \text { to be increasing in } u \in \left( 0,1 \right) , \end{aligned}$$

which further implies that \({F}^{-1}_{ W_{j,n} } \left( u\right) - {F}^{-1}_{ W_{j,n+1} } \left( u\right) \) is increasing in \(u \in (0,1)\). Thus, we get \(W_{j, n+1} \le _{disp} W_{j, n} \), for all \(j=1,\ldots , i\). Further, it can easily be verified that \(W_{j, n+1} \le _{st} W_{j, n} \), for all \(j=1,\ldots , i\). Hence, the required result. \(\square \)
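The quantile-difference step above has a closed form for exponentials, so it can be checked directly; the rates below are illustrative stand-ins for the two sums of \(\alpha \)-parameters.

```python
import numpy as np

# Hedged closed-form check (ours): with exponential quantiles
# F^{-1}(u) = -log(1 - u) / rate and rate_n1 > rate_n (which holds because
# alpha_{n-j+2}^{(j)} > 0 adds a positive term), the quantile difference
# F^{-1}_{W_{j,n}}(u) - F^{-1}_{W_{j,n+1}}(u) is increasing in u, i.e.
# W_{j,n+1} <=_disp W_{j,n}.
rate_n = 2.5        # illustrative sum_{l=1}^{n-j+1} alpha_l^{(j)}
rate_n1 = 3.1       # illustrative sum_{l=1}^{n-j+2} alpha_l^{(j)} > rate_n

u = np.linspace(0.01, 0.99, 99)
diff = -np.log(1 - u) / rate_n + np.log(1 - u) / rate_n1

assert np.all(np.diff(diff) > 0)   # increasing quantile difference
```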

Proof of Part (c) of Theorem 4.5

From Lemma 3.4, we have

$$\begin{aligned} X^{\star }_{i:n} {\mathop {=}\limits ^{d}} D^{-1} \left( \sum _{j=1}^{i}W_{j,n} \right) \text { and } X^{\star }_{i+1:n+1} {\mathop {=}\limits ^{d}} D^{-1} \left( \sum _{j=1}^{i+1}W_{j,n+1} \right) . \end{aligned}$$

As F is DFR, \(D^{-1} \left( \cdot \right) \) is increasing and convex, and so, in view of Theorem 3.B.10(a) of Shaked and Shanthikumar (2007), the result holds if \(\sum _{j=1}^{i}W_{j, n} \le _{disp} \sum _{j=1}^{i+1}W_{j, n+1} \) and \(\sum _{j=1}^{i}W_{j, n} \le _{st} \sum _{j=1}^{i+1}W_{j, n+1} \), for all \(i=1,\ldots , n\). Now, \(W_{j, n}\), \(j=1,\ldots ,i\), and \(W_{j, n+1}\), \(j=1,\ldots ,i+1\), are independent and non-negative random variables. Thus, in view of Theorem 1.1 of Khaledi and Kochar (2000), the result follows (i.e., the above inequalities hold) provided \(W_{j,n}\), \(W_{1,n+1}\) and \(W_{j,n+1}\) are ILR, \(W_{j, n} \le _{disp} W_{j+1, n+1} \) and \(W_{j, n} \le _{st} W_{j+1, n+1} \), for all \(j=1,\ldots , i\). From (3.9), we have

$$\begin{aligned} \frac{f'_{W_{j,n}} \left( t\right) }{f_{W_{j,n}} \left( t\right) }= & {} -\sum _{l=1}^{n-j+1} \alpha _{l}^{(j)}, \quad t>0, \end{aligned}$$

which implies that \({f'_{W_{j,n}} \left( t\right) } / {f_{W_{j,n}} \left( t\right) }\) is decreasing in \(t>0\) and so \(W_{j,n}\) is ILR, for all \(j=1, \ldots , i\). Similarly, it can be shown that \(W_{j,n+1}\) is ILR, for all \(j=1, \ldots , i+1\). Again, from the condition that “\(\alpha _{l}^{(j)} \ge \alpha _{l}^{(j+1)}\), for \(l=1, \ldots , n-j+1\)”, we get

$$\begin{aligned} -\left\{ \frac{1}{ \sum _{l=1}^{n-j+1} \alpha _{l}^{(j+1)} } - \frac{1}{ \sum _{l=1}^{n-j+1} \alpha _{l}^{(j)} } \right\} \ln \left( 1-u \right) \text { to be increasing in } u \in \left( 0,1 \right) , \end{aligned}$$

which further implies that \({F}^{-1}_{ W_{j+1,n+1} } \left( u\right) - {F}^{-1}_{ W_{j,n} } \left( u\right) \) is increasing in \(u \in (0,1)\). Again, this implies that \(W_{j, n} \le _{disp} W_{j+1, n+1} \), for all \(j=1,\ldots , i\). Further, it can easily be verified that \(W_{j, n} \le _{st} W_{j+1, n+1} \), for all \(j=1,\ldots , i\). Hence, the required result. \(\square \)

Proof of Part (a) of Theorem 4.6

Note that the hazard rate order is closed under increasing transformations, and so \(X^{\star }_{i:n} \le _{hr} X^{\star }_{i+1:n}\) holds if and only if \(D_{i+1}\left( X^{\star }_{i:n}\right) \le _{hr} D_{i+1}\left( X^{\star }_{i+1:n}\right) \); here, \(D_{i+1}\) is the cumulative hazard rate function of \(F_{i+1}\). Further, from Lemma 3.3, we have \(X^{\star }_{i+1:n} = D_{i+1}^{-1}\left( W_{ i+1,n } + D_{i+1}\left( X_{i:n}^{\star }\right) \right) \) and so the above inequality holds if and only if \(D_{i+1}\left( X^{\star }_{i:n}\right) \le _{hr} W_{i+1, n } + D_{i+1}\left( X_{i:n}^{\star }\right) \). Now, \(Y_l^{ (i+1) }\), \(l=1, \ldots , n-i\), and \(X^{\star }_{i:n}\) are independent, which implies that \(W_{i+1,n}\) and \(D_{i+1}\left( X^{\star }_{i:n}\right) \) are independent. Moreover, \(W_{ i+1,n } \) is a non-negative random variable.

Thus, in view of Lemma 1.B.3 of Shaked and Shanthikumar (2007), the result follows (i.e., the above inequality holds) provided \(D_{i+1}\left( X_{i:n}^{\star }\right) \) is IFR. Now, we prove the statement that “\(D_{i+1}\left( X_{i:n}^{\star }\right) \) is IFR” through induction. First, we show that this statement is true for \(i=1\), i.e., \(D_{2}\left( X_{1:n}^{\star }\right) \) is IFR. From Definition 3.2, the cumulative hazard rate function of \(D_{2}\left( X^{\star }_{1:n}\right) \) is given by

$$\begin{aligned} \Delta _{D_{2}\left( X^{\star }_{1:n}\right) } \left( t\right) = \sum _{j=1}^{n} \alpha _{j}^{(1)} \left( D_1 \circ D_2^{-1}\right) \left( t \right) , \quad t>0. \end{aligned}$$

Now, from the condition that “\(F_1 \le _c F_2\)” and the increasing property of \(D_2^{-1}\left( \cdot \right) \), we get

$$\begin{aligned} \frac{\partial ^2 u}{\partial t^2} = \frac{\partial }{\partial t} \left( \frac{r_1\left( D_{2}^{-1}\left( t\right) \right) }{r_{2}\left( D_{2}^{-1}\left( t\right) \right) } \right) \ge 0 \text { for all } t>0, \end{aligned}$$

where \(u = \left( D_1 \circ D_2^{-1}\right) \left( t\right) \) and \(r_1\) and \(r_2\) are the hazard rate functions of \(F_1\) and \(F_2\), respectively. From this, we get \(\frac{\partial ^2}{\partial t^2}\left( \Delta _{D_{2}(X^{\star }_{1:n})} \left( t\right) \right) \ge 0\), for all \(t>0\), and so \(D_{2}\left( X^{\star }_{1:n} \right) \) is IFR. Thus, the statement is true for \(i=1\). Next, we assume the statement to be true for \(i=j-1\), i.e., \(D_{j}\left( X^{\star }_{j-1:n}\right) \) is IFR, and use it to show that \(D_{j+1}\left( X^{\star }_{j:n}\right) \) is IFR. From Lemma 3.3, we get

\(D_{j+1}\left( X^{\star }_{j:n}\right) = \left( D_{j+1} \circ D_j^{-1}\right) \left( Q_{j,n}\right) \),

where \(Q_{j,n} = W_{j,n} + D_{j}\left( X^{\star }_{j-1:n}\right) \). Then, the cumulative hazard rate function of \(D_{j+1}\left( X^{\star }_{j:n}\right) \) is given by

$$\begin{aligned} \Delta _{D_{j+1}\left( X^{\star }_{j:n}\right) } \left( t\right) =\Delta _{ Q_{j,n} }\left( \left( D_{j} \circ D_{j+1}^{-1}\right) \left( t\right) \right) . \end{aligned}$$
(A.33)

Further, we have \(Y_l^{ (j) }\), \(l=1,\ldots , n-j+1\), and \(X^{\star }_{j-1:n}\) to be independent. This implies that \(W_{j,n}\) and \(D_{j}\left( X^{\star }_{j-1:n}\right) \) are independent. Again, by using (3.7), we get

$$\begin{aligned} \frac{\partial ^2}{ \partial t^2} \Delta _{W_{j,n}} \left( t\right) = \frac{\partial }{ \partial t} \left( \sum _{l=1}^{n-j+1} \alpha _{l}^{(j)} \right) = 0 \text { for all } t>0, \end{aligned}$$

so that \(\Delta _{W_{j,n}}\) is (trivially) convex and hence \(W_{j,n}\) is IFR. Further, from the induction hypothesis, we have \(D_{j}\left( X^{\star }_{j-1:n}\right) \) to be IFR. Upon combining these two facts, we get \(Q_{j,n}\) to be IFR, which in turn implies that

\(\Delta _{Q_{j,n}} \left( t\right) \text { is increasing and convex in } t>0\).

Again, by proceeding in a manner similar to the case when \(i=1\), we can easily obtain

\(D_j \circ D_{j+1}^{-1} \left( t\right) \text { to be convex in }t>0\).

Finally, upon using these two facts in (A.33), we get \(\Delta _{D_{j+1}\left( X^{\star }_{j:n}\right) }\) to be convex in \(t>0\). Consequently, \(D_{j+1}\left( X^{\star }_{j:n}\right) \) is IFR and hence the statement gets proved for \(i=j\). Thus, by induction, we get \(D_{i+1}\left( X^{\star }_{i:n}\right) \) to be IFR, for all i. Hence, the required result. \(\square \)
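As a concrete instance of the induction base (a hypothetical special case, not the paper's general setting), take Weibull baselines with cumulative hazard rate functions \(D_i(t) = t^{k_i}\). Then \(F_1 \le _c F_2\) amounts to \(k_1 \ge k_2\), and \(\Delta _{D_{2}(X^{\star }_{1:n})}(t) = \big( \sum _{j=1}^{n} \alpha _{j}^{(1)} \big) t^{k_1/k_2}\), which should be convex. The short sketch below checks this via second differences.

```python
# Numeric sanity check of the base step for hypothetical Weibull baselines:
# D_i(t) = t**k_i, so F_1 <=_c F_2 reduces to k1 >= k2, and the cumulative
# hazard of D_2(X*_{1:n}) is S * t**(k1/k2), which must be convex.
import numpy as np

k1, k2 = 2.5, 1.4                       # shapes with k1 >= k2 (F1 <=_c F2)
alpha1 = np.array([1.2, 0.9, 0.7])      # hypothetical alpha_j^{(1)}, n = 3

t = np.linspace(0.05, 5.0, 500)
Lam = alpha1.sum() * t ** (k1 / k2)     # Delta_{D_2(X*_{1:n})}(t)
print("convex on the grid:", bool(np.all(np.diff(Lam, 2) >= 0)))
```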

Proof of Part(b) of Theorem 4.6

Note that the reverse hazard rate order is closed under increasing transformations and so \(X^{\star }_{i:n} \le _{rh} X^{\star }_{i+1:n}\) holds if and only if \(D_{i+1}\left( X^{\star }_{i:n}\right) \le _{rh} D_{i+1}\left( X^{\star }_{i+1:n}\right) \); here, \(D_{i+1}\) is the cumulative hazard rate function of \(F_{i+1}\). Further, from Lemma 3.3, we have \(X^{\star }_{i+1:n} = D_{i+1}^{-1}\left( W_{ i+1,n } + D_{i+1}\left( X_{i:n}^{\star }\right) \right) \). Thus, the above inequality holds if and only if \(D_{i+1}\left( X^{\star }_{i:n}\right) \le _{rh} W_{i+1, n } + D_{i+1}\left( X_{i:n}^{\star }\right) \). Now, we have \(Y_l^{ (i+1) }\), \(l=1, \ldots , n-i\), and \(X^{\star }_{i:n}\) to be independent, which implies that \(W_{i+1,n}\) and \(D_{i+1}\left( X^{\star }_{i:n}\right) \) are independent. Moreover, \(W_{ i+1,n } \) is a non-negative random variable.

Thus, in view of Lemma 1.B.44 of Shaked and Shanthikumar (2007), the result follows (i.e., the above inequality holds) provided \(D_{i+1}\left( X_{i:n}^{\star }\right) \) is DRFR. Now, we prove the statement that “\(D_{i+1}\left( X_{i:n}^{\star }\right) \) is DRFR” through induction. First, we show that this statement is true for \(i=1\), i.e., \(D_{2}\left( X_{1:n}^{\star }\right) \) is DRFR. From Definition 3.2, the cumulative reverse hazard rate function of \(D_{2}\left( X^{\star }_{1:n}\right) \) is given by

$$\begin{aligned} \tilde{\Delta }_{D_{2}\left( X^{\star }_{1:n}\right) } \left( t\right) = -\ln \left( 1- \prod _{j=1}^{n}e^{-\alpha _{j}^{(1)} \left( D_1 \circ D_2^{-1}\right) \left( t \right) } \right) , \quad t>0, \end{aligned}$$

which gives

$$\begin{aligned} \frac{\partial ^2}{\partial t^2}\left( \tilde{ \Delta }_{D_{2}\left( X^{\star }_{1:n}\right) } \left( t \right) \right) =&-\left( \frac{\partial u}{\partial t} \right) ^2 \frac{\partial }{\partial u} \left( \frac{ \sum \limits _{j=1}^{n}\alpha _{j}^{(1)} e^{-\sum \limits _{j=1}^{n}\alpha _{j}^{(1)} u}}{1- e^{- \sum \limits _{j=1}^{n}\alpha _{j}^{(1)} u}}\right) \\&- \left( \frac{\sum \limits _{j=1}^{n}\alpha _{j}^{(1)} e^{- \sum \limits _{j=1}^{n}\alpha _{j}^{(1)} u}}{1- e^{- \sum \limits _{j=1}^{n}\alpha _{j}^{(1)} u}}\right) \frac{\partial ^2 u}{\partial t^2} \end{aligned}$$
(A.34)

for \(t>0\), where \(u= \left( D_1 \circ D_2^{-1}\right) \left( t\right) \). Now, from the fact that “\({e^{-a u}}/ {(1- e^{-a u})}\) is positive and decreasing in \(u>0\), for \(a>0\)”, we get

\( {\sum \limits _{j=1}^{n}\alpha _{j}^{(1)} e^{- \sum \limits _{j=1}^{n}\alpha _{j}^{(1)} u}} \Big /{\Big (1- e^{- \sum \limits _{j=1}^{n}\alpha _{j}^{(1)} u}\Big )}\) to be positive and decreasing in \(u>0.\)

Again, from the condition that “\(F_1 \ge _c F_2\)” and the increasing property of \(D_2^{-1}\left( \cdot \right) \), we get

$$\begin{aligned} \frac{\partial ^2 u}{\partial t^2} = \frac{\partial }{\partial t} \left( \frac{r_1\left( D_{2}^{-1}\left( t\right) \right) }{r_{2}\left( D_{2}^{-1}\left( t\right) \right) } \right) \le 0 \text { for all } t>0. \end{aligned}$$

Upon using these two facts in (A.34), we get \(\frac{\partial ^2}{\partial t^2}\left( \tilde{\Delta }_{D_{2}(X^{\star }_{1:n})} \left( t\right) \right) \ge 0\) for all \(t>0\), and so \(D_{2}\left( X^{\star }_{1:n} \right) \) is DRFR. Thus, the statement is true for \(i=1\). Next, we assume the statement to be true for \(i=j-1\), i.e., \(D_{j}\left( X^{\star }_{j-1:n}\right) \) is DRFR, and use it to show that \(D_{j+1}\left( X^{\star }_{j:n}\right) \) is DRFR. From Lemma 3.3, we get

\( D_{j+1}\left( X^{\star }_{j:n}\right) = \left( D_{j+1} \circ D_j^{-1}\right) \left( Q_{j,n}\right) ,\)

where \(Q_{j,n} = W_{j,n} + D_{j}\left( X^{\star }_{j-1:n}\right) \). Then, the cumulative reverse hazard rate function of \(D_{j+1}\left( X^{\star }_{j:n}\right) \) is given by

$$\begin{aligned} \tilde{\Delta }_{D_{j+1}\left( X^{\star }_{j:n}\right) } \left( t\right) =\tilde{\Delta }_{ Q_{j,n} }\left( \left( D_{j} \circ D_{j+1}^{-1}\right) \left( t\right) \right) . \end{aligned}$$
(A.35)

Further, we have \(Y_l^{ (j) }\), \(l=1,\ldots , n-j+1\), and \(X^{\star }_{j-1:n}\) to be independent. This implies that \(W_{j,n}\) and \(D_{j}\left( X^{\star }_{j-1:n}\right) \) are independent. Again, by using (3.7), we get

$$\begin{aligned} \frac{\partial ^2}{ \partial t^2} \tilde{\Delta }_{W_{j,n}} \left( t\right) = -\frac{\partial }{ \partial t} \left( \frac{\sum \limits _{l=1}^{n-j+1}\alpha _{l}^{(j)} e^{- \sum \limits _{l=1}^{n-j+1}\alpha _{l}^{(j)} t}}{1- e^{- \sum \limits _{l=1}^{n-j+1}\alpha _{l}^{(j)} t}} \right) \ge 0 \text { for all } t>0, \end{aligned}$$

which implies that \(W_{j,n}\) is DRFR. Further, from the induction hypothesis, we have \(D_{j}\left( X^{\star }_{j-1:n}\right) \) to be DRFR. Upon combining these two facts, we get \(Q_{j,n}\) to be DRFR. Further, this implies that

\( \tilde{\Delta }_{Q_{j,n}} \left( t\right) \text { is decreasing and convex in } t>0. \)

Again, by proceeding in a manner similar to the case when \(i=1\), we can easily obtain

\( D_j \circ D_{j+1}^{-1} \left( t\right) \text { to be concave in }t>0. \)

Finally, upon using these two facts in (A.35), we get \(\tilde{\Delta }_{D_{j+1}\left( X^{\star }_{j:n}\right) }\) to be convex in \(t>0\). Consequently, \(D_{j+1}\left( X^{\star }_{j:n}\right) \) is DRFR and hence the statement gets proved for \(i=j\). Thus, by induction, we get \(D_{i+1}\left( X^{\star }_{i:n}\right) \) to be DRFR, for all i. Hence, the required result. \(\square \)
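The DRFR property of the exponential building block used above is also easy to confirm directly: \(W_{j,n}\) is exponential with rate \(S = \sum _{l=1}^{n-j+1} \alpha _{l}^{(j)}\), so its cumulative reverse hazard rate function is \(-\ln \left( 1-e^{-St}\right) \), which should be decreasing and convex. A minimal sketch, with a hypothetical value of \(S\):

```python
# Check that the cumulative reverse hazard -ln(1 - exp(-S t)) of an
# exponential with rate S is decreasing and convex (i.e., DRFR).
import numpy as np

S = 3.7                                        # hypothetical rate sum
t = np.linspace(0.01, 4.0, 600)
tilde = -np.log1p(-np.exp(-S * t))             # -ln(1 - e^{-S t}), computed stably
print("decreasing:", bool(np.all(np.diff(tilde) < 0)))
print("convex:", bool(np.all(np.diff(tilde, 2) >= 0)))
```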

Proof of Part(a) of Theorem 4.7

In view of Theorem 3.11 of Belzunce et al. (2001), it suffices to prove that \(X^{\star }_{1:n} \le _{st} Z^{\star }_{1:n}\) and \(\left( X^{\star }_{i:n} | X^{\star }_{i-1:n} = x \right) \le _{hr} \left( Z^{\star }_{i:n} | Z^{\star }_{i-1:n} = x \right) \), for all \(x>0\), for \(i=2, \ldots , n\). Now, from the definition of SOS, the reliability functions of \(X_{1:n}^{\star }\) and \(Z_{1:n}^{\star }\) are given by

$$\begin{aligned} \bar{F}_{X_{1:n}^{\star }}\left( t\right) = \prod _{j=1}^{n} \bar{F}_{j}^{(1)}\left( t\right) \text { and } \bar{F}_{Z_{1:n}^{\star }}\left( t\right) =\prod _{j=1}^{n} \bar{G}_{j}^{(1)}\left( t\right) , \quad t>0, \end{aligned}$$

which, in view of the condition that “\({F}_{j}^{(1)} \le _{st} {G}_{j}^{(1)}\), for \(j=1,\ldots , n\)”, implies that \(\bar{F}_{X_{1:n}^{\star }}\left( t\right) \le \bar{F}_{Z_{1:n}^{\star }}\left( t\right) \) for \(t>0\). Again, from Remark 3.1, we get, for \(i=2,\ldots ,n\),

$$\begin{aligned} \bar{F}_{X^{\star }_{i:n} | X^{\star }_{i-1:n} =x } (t) = \prod _{j=1}^{n-i+1} \frac{ \bar{F}_{j}^{(i)}\left( t\right) }{ \bar{F}_{j}^{(i)}\left( x\right) } \text { and } \bar{F}_{Z^{\star }_{i:n} | Z^{\star }_{i-1:n} =x }(t) = \prod _{j=1}^{n-i+1} \frac{ \bar{G}_{j}^{(i)}\left( t\right) }{ \bar{G}_{j}^{(i)}\left( x\right) }, \quad t \ge x. \end{aligned}$$

These imply that

$$\begin{aligned} {r}_{X^{\star }_{i:n} | X^{\star }_{i-1:n} =x } (t) = \sum _{j=1}^{n-i+1} {r}_j^{(i)} \left( t \right) \text { and } {r}_{Z^{\star }_{i:n} | Z^{\star }_{i-1:n} =x } (t) = \sum _{j=1}^{n-i+1} {h}_j^{(i)} \left( t \right) , \quad t>0, \end{aligned}$$

where \({r}_j^{(i)}\) and \(h_j^{(i)}\) are the hazard rate functions of \(F_j^{(i)}\) and \(G_j^{(i)}\), respectively. From the condition that “\(F_j^{(i)} \le _{hr} G_j^{(i)}\), for \(j=1,\ldots ,n-i+1\) and \(i=2,\ldots ,n\)”, we have \({r}_j^{(i)} \left( t \right) \ge {h}_j^{(i)} \left( t \right) \) for all \(t>0\), and so \(\left( X^{\star }_{i:n} | X^{\star }_{i-1:n} = x \right) \le _{hr} \left( Z^{\star }_{i:n} | Z^{\star }_{i-1:n} = x \right) \) for all \(x>0\), for \(i=2, \ldots , n\). Hence, the required result. \(\square \)
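The pointwise inequality \(\sum _{j} r_j^{(i)}(t) \ge \sum _{j} h_j^{(i)}(t)\) driving this proof can be illustrated with any componentwise \(\le _{hr}\) pair. The sketch below uses hypothetical Weibull components in which each \(G_j^{(i)}\) is a rescaled copy of the corresponding \(F_j^{(i)}\), a proportional-hazards pair for which \(F_j^{(i)} \le _{hr} G_j^{(i)}\) holds by construction.

```python
# Illustrative check that componentwise hr-ordering gives summed hazards
# r(t) >= h(t).  Components are hypothetical Weibulls; G is F rescaled.
import numpy as np

def weibull_hazard(t, shape, scale):
    # hazard rate of Weibull(shape, scale): (shape/scale) * (t/scale)^(shape-1)
    return (shape / scale) * (t / scale) ** (shape - 1)

t = np.linspace(0.01, 5.0, 400)
shapes = [1.2, 1.8, 2.5]                               # hypothetical components
r = sum(weibull_hazard(t, k, 1.0) for k in shapes)     # sum of r_j^{(i)}(t)
h = sum(weibull_hazard(t, k, 1.5) for k in shapes)     # sum of h_j^{(i)}(t)
print("r(t) >= h(t) on the grid:", bool(np.all(r >= h)))
```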

Proof of Part(c) of Theorem 4.7

In view of Theorem 3.13 of Belzunce et al. (2001), it is enough to prove that \(\left( X^{\star }_{i:n} | X^{\star }_{i-1:n} = x \right) \le _{hr} \left( Z^{\star }_{i:n} | Z^{\star }_{i-1:n} = x \right) \) and \(\left( X^{\star }_{i:n} | X^{\star }_{i-1:n} = x \right) \ge _{c} \left( Z^{\star }_{i:n} | Z^{\star }_{i-1:n} = x \right) \) for all \(x>0\), and \({r}_{Z^{\star }_{i+1:n} | Z^{\star }_{i:n} =x } (t) - {r}_{Z^{\star }_{i:n} | Z^{\star }_{i-1:n} =x } (t) \ge {r}_{X^{\star }_{i+1:n} | X^{\star }_{i:n} =x } (t) - {r}_{X^{\star }_{i:n} | X^{\star }_{i-1:n} =x } (t)\) for \(i=1,2, \ldots , n-1\). Now, from the definition of SOS, the reliability functions of \(X_{1:n}^{\star }\) and \(Z_{1:n}^{\star }\) are given by

$$\begin{aligned} \bar{F}_{X_{1:n}^{\star }}\left( t\right) = \prod _{j=1}^{n} \bar{F}^{\alpha _{j}^{(1)}}\left( t\right) \text { and } \bar{F}_{Z_{1:n}^{\star }}\left( t\right) = \prod _{j=1}^{n} \bar{G}^{\beta _{j}^{(1)}}\left( t\right) , \quad t>0. \end{aligned}$$

These imply that

$$\begin{aligned} {r}_{X_{1:n}^{\star }}\left( t\right) =r(t) \sum _{j=1}^{n} {\alpha }_j^{(1)} \text { and } {r}_{Z_{1:n}^{\star }}\left( t\right) = h(t)\sum _{j=1}^{n} {\beta }_j^{(1)}, \quad t>0, \end{aligned}$$

where r and h are the hazard rate functions of F and G, respectively. Now, from the conditions that “\({F} \le _{hr} {G}\)”, “\({F} \ge _{c} {G}\)” and “\( \sum _{j=1}^{n} {\alpha }_j^{(1)} \ge \sum _{j=1}^{n} {\beta }_j^{(1)}\)”, we get \({X_{1:n}^{\star }} \le _{hr} {Z_{1:n}^{\star }} \) and \({X_{1:n}^{\star }} \ge _{c} {Z_{1:n}^{\star }} \). Again, from Remark 3.1, we get, for \(i=2,\ldots ,n\),

$$\begin{aligned} \bar{F}_{X^{\star }_{i:n} | X^{\star }_{i-1:n} =x } (t) = \prod _{j=1}^{n-i+1} \left( \frac{ \bar{F}\left( t\right) }{ \bar{F}\left( x\right) } \right) ^{ {\alpha }_{j}^{(i)} } \text { and } \bar{F}_{Z^{\star }_{i:n} | Z^{\star }_{i-1:n} =x }(t) = \prod _{j=1}^{n-i+1} \left( \frac{ \bar{G}\left( t\right) }{ \bar{G}\left( x\right) } \right) ^{ {\beta }_{j}^{(i)} }, \quad t \ge x, \end{aligned}$$

which imply that

$$\begin{aligned} {r}_{X^{\star }_{i:n} | X^{\star }_{i-1:n} =x } (t) = r(t) \sum _{j=1}^{n-i+1} {\alpha }_j^{(i)} \text { and } {r}_{Z^{\star }_{i:n} | Z^{\star }_{i-1:n} =x } (t) = h(t) \sum _{j=1}^{n-i+1} {\beta }_j^{(i)}, \quad t>0. \end{aligned}$$

From the conditions that “\({F} \le _{hr} {G}\)”, “\({F} \ge _{c} {G}\)” and “\( \sum _{j=1}^{n-i+1} {\alpha }_j^{(i)} \ge \sum _{j=1}^{n-i+1} {\beta }_j^{(i)}\), for \(i=2,\ldots ,n\)”, we get \(\left( X^{\star }_{i:n} | X^{\star }_{i-1:n} = x \right) \le _{hr} \left( Z^{\star }_{i:n} | Z^{\star }_{i-1:n} = x \right) \) and \(\left( X^{\star }_{i:n} | X^{\star }_{i-1:n} = x \right) \ge _{c} \left( Z^{\star }_{i:n} | Z^{\star }_{i-1:n} = x \right) \), for all \(x>0\), for \(i=2, \ldots , n\). Further, from the conditions that “\({F} \le _{hr} {G}\)” and “\( \sum _{j=1}^{n-i+1} {\alpha }_j^{(i)} - \sum _{j=1}^{n-i} {\alpha }_j^{(i+1)} \ge \sum _{j=1}^{n-i+1} {\beta }_j^{(i)} - \sum _{j=1}^{n-i} {\beta }_j^{(i+1)} \ge 0\), for \(i=1,\ldots ,n-1\)”, we get \({r}_{Z^{\star }_{i+1:n} | Z^{\star }_{i:n} =x } (t) - {r}_{Z^{\star }_{i:n} | Z^{\star }_{i-1:n} =x } (t) \ge {r}_{X^{\star }_{i+1:n} | X^{\star }_{i:n} =x } (t) - {r}_{X^{\star }_{i:n} | X^{\star }_{i-1:n} =x } (t)\) for \(i=1, \ldots , n-1\). Hence, the required result. \(\square \)
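The last inequality rearranges to \(r(t)\big( \sum _{j=1}^{n-i+1} {\alpha }_j^{(i)} - \sum _{j=1}^{n-i} {\alpha }_j^{(i+1)} \big) \ge h(t)\big( \sum _{j=1}^{n-i+1} {\beta }_j^{(i)} - \sum _{j=1}^{n-i} {\beta }_j^{(i+1)} \big)\), which holds term by term under the stated conditions. A quick numeric sketch, with hypothetical hazard rate functions and rate sums:

```python
# Numeric illustration of the hazard-increment inequality in the last step.
# All hazard rates and rate sums below are hypothetical.
import numpy as np

t = np.linspace(0.01, 5.0, 400)
r = 1.5 + 0.3 * t              # hazard rate of F
h = 1.0 + 0.2 * t              # hazard rate of G, with r >= h (F <=_hr G)

A_i, A_ip1 = 4.0, 2.5          # sums of alpha_j^{(i)} and alpha_j^{(i+1)}
B_i, B_ip1 = 3.0, 2.0          # sums of beta_j^{(i)}  and beta_j^{(i+1)}
assert A_i - A_ip1 >= B_i - B_ip1 >= 0         # the stated condition

z_increment = h * (B_ip1 - B_i)    # r_{Z,i+1|i}(t) - r_{Z,i|i-1}(t)
x_increment = r * (A_ip1 - A_i)    # r_{X,i+1|i}(t) - r_{X,i|i-1}(t)
print("Z-increment >= X-increment:", bool(np.all(z_increment >= x_increment)))
```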

Proof of Theorem 4.8

Let

$$\begin{aligned} A_{l,n} = \min \left\{ - \frac{1}{\beta _{1}^{(l)}} \ln \left( 1-U^{(l)}_1\right) , \ldots , -\frac{1}{\beta _{n-l+1}^{(l)}} \ln \left( 1-U^{(l)}_{n-l+1}\right) \right\} , \quad l=1,2,\ldots , n, \end{aligned}$$

where \(U_j^{(l)}\), \(j=1,\ldots ,n-l+1\), \(l=1,\ldots ,n\), are the same as in Lemma 3.2. Now, from Lemma 3.4, we can write

$$\begin{aligned} X^{\star }_{i:n}{\mathop {=}\limits ^{d}} D^{-1} \left( \sum _{j=1}^{i}W_{j,n} \right) \text { and } Z^{\star }_{i:n} {\mathop {=}\limits ^{d}} D^{-1} \left( \sum _{j=1}^{i}A_{j,n} \right) . \end{aligned}$$

Note that the likelihood ratio order is closed under increasing transformations. As F is continuous, \(D^{-1} \left( \cdot \right) \) is increasing and so the required result holds if and only if \(\sum _{j=1}^{i}W_{j, n} \le _{lr} \sum _{j=1}^{i}A_{j, n} \). Now, we have \(W_{j, n}\) and \(A_{j, n}\), \(j=1,\ldots ,i\), to be independent and non-negative random variables. Thus, in view of Theorem 1.C.9 of Shaked and Shanthikumar (2007), the result follows (i.e., the above inequality holds) provided \(W_{j,n}\) and \(A_{j,n}\) are ILR, and \(W_{j, n} \le _{lr} A_{j, n} \), for all \(j=1,\ldots , i\). From (3.9), we have

\( {f'_{W_{j,n}} \left( t\right) } /{f_{W_{j,n}} \left( t\right) } = -\sum _{l=1}^{n-j+1} \alpha _{l}^{(j)}, \;t>0,\)

which implies that \({f'_{W_{j,n}} \left( t\right) } / {f_{W_{j,n}} \left( t\right) }\) is decreasing in \(t>0\) and so \(W_{j,n}\) is ILR. Similarly, it can be shown that \(A_{j,n}\) is also ILR, for all \(j=1, \ldots , i\). Again, we have the condition that “\(\varvec{\alpha }^{(j)} \overset{w}{\preceq } \varvec{\beta }^{(j)}\)”. Then, from Theorem 3.1 of Li and Li (2015), we obtain \(W_{j, n} \le _{lr} A_{j, n} \), for all \(j=1,\ldots , i\). Hence, the required result. \(\square \)
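Both \(W_{j,n}\) and \(A_{j,n}\) are minima of independent exponentials and hence are themselves exponential, with rates \(\sum _{l=1}^{n-j+1} \alpha _{l}^{(j)}\) and \(\sum _{l=1}^{n-j+1} \beta _{l}^{(j)}\), respectively; under the weak-supermajorization reading of \(\overset{w}{\preceq }\), the former rate dominates the latter. For two such exponentials the likelihood ratio is monotone, as the following sketch (with hypothetical rate sums) illustrates.

```python
# For W ~ Exp(a) and A ~ Exp(b) with a >= b, the likelihood ratio
# f_A(t)/f_W(t) = (b/a) * exp((a - b) * t) is increasing, i.e. W <=_lr A.
import numpy as np

a, b = 4.2, 3.1                               # hypothetical rate sums, a >= b
t = np.linspace(0.0, 5.0, 300)
ratio = (b / a) * np.exp((a - b) * t)         # f_A(t) / f_W(t)
print("likelihood ratio increasing:", bool(np.all(np.diff(ratio) > 0)))
```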
