Abstract
Sequential order statistics (SOS) are useful tools for modeling the lifetimes of systems wherein the failure of a component has a significant impact on the lifetimes of the remaining surviving components. The SOS model is a general model that contains most of the existing models for ordered random variables. In this paper, we consider the SOS model with non-identical components and then discuss various univariate and multivariate stochastic comparison results in both one- and two-sample scenarios.
References
Ahmad I, Kayid M (2007) Reversed preservation of stochastic orders for random minima and maxima with applications. Stat Pap 48:283–293
Balakrishnan N, Zhao P (2013) Ordering properties of order statistics from heterogeneous populations: a review with an emphasis on some recent developments. Probab Eng Inf Sci 27:403–443
Baratnia M, Doostparast M (2017) Modeling lifetime of sequential \(r\)-out-of-\(n\) systems with independent and heterogeneous components. Commun Stat 46:7365–7375
Barlow RE, Proschan F (1975) Statistical theory of reliability and life testing: probability models. Holt, Rinehart and Winston, New York
Belzunce F, Lillo RE, Ruiz JM, Shaked M (2001) Stochastic comparisons of nonhomogeneous processes. Probab Eng Inf Sci 15:199–224
Belzunce F, Ruiz JM, Ruiz MDC (2003) Multivariate properties of random vectors of order statistics. J Stat Plan Inference 115:413–424
Belzunce F, Mercader JA, Ruiz JM (2005) Stochastic comparisons of generalized order statistics. Probab Eng Inf Sci 19:99–120
Belzunce F, Gurler S, Ruiz JM (2011) Revisiting multivariate likelihood ratio ordering results for order statistics. Probab Eng Inf Sci 25:355–368
Belzunce F, Martínez-Riquelme C (2015) Some results for the comparison of generalized order statistics in the total time on test and excess wealth orders. Stat Pap 56:1175–1190
Burkschat M, Navarro J (2018) Stochastic comparisons of systems based on sequential order statistics via properties of distorted distributions. Probab Eng Inf Sci 32:246–274
Burkschat M, Torrado N (2014) On the reversed hazard rate of sequential order statistics. Stat Probab Lett 85:106–113
Chen J, Hu T (2007) Multivariate dispersive ordering of generalized order statistics. Appl Math Lett 22:968–974
Cramer E, Kamps U (1996) Sequential order statistics and \(k\)-out-of-\(n\) systems with sequentially adjusted failure rates. Ann Inst Stat Math 48:535–549
Cramer E, Kamps U (2001) Sequential \(k\)-out-of-\(n\) systems. In: Balakrishnan N, Rao CR (eds) Handbook of statistics (Chap. 12), vol 20. North-Holland, Amsterdam, pp 301–372
Cramer E, Kamps U (2003) Marginal distributions of sequential and generalized order statistics. Metrika 58:293–310
Gurler S (2012) On residual lifetimes in sequential \((n-k+1)\)-out-of-\(n\) systems. Stat Pap 53:23–31
Hazra NK, Kuiti MR, Finkelstein M, Nanda AK (2017) On stochastic comparisons of maximum order statistics from the location-scale family of distributions. J Multivar Anal 160:31–41
Hu T, Zhuang W (2005) A note on stochastic comparisons of generalized order statistics. Stat Probab Lett 72:163–170
Kamps U (1995) A concept of generalized order statistics. J Stat Plan Inference 48:1–23
Kelkinnama M, Asadi M (2019) Stochastic and ageing properties of coherent systems with dependent identically distributed components. Stat Pap 60:805–821
Khaledi BE, Kochar S (2000) On dispersive ordering between order statistics in one-sample and two-sample problems. Stat Probab Lett 46:257–261
Kvam PH, Pena EA (2005) Estimating load-sharing properties in a dynamic reliability system. J Am Stat Assoc 100:262–272
Li C, Li X (2015) Likelihood ratio order of sample minimum from heterogeneous Weibull random variables. Stat Probab Lett 97:46–53
Marshall AW, Olkin I, Arnold BC (2011) Inequalities: theory of majorization and its applications. Springer, New York
Navarro J, Burkschat M (2011) Coherent systems based on sequential order statistics. Nav Res Logist 58:123–135
Proschan F (1963) Theoretical explanation of observed decreasing failure rate. Technometrics 5:375–383
Sahoo T, Hazra NK (2023) Ordering and aging properties of systems with dependent components governed by the Archimedean copula. Probab Eng Inf Sci 37:1–28
Shaked M, Shanthikumar JG (2007) Stochastic orders. Springer, New York
Tavangar M, Asadi M (2011) On stochastic and aging properties of generalized order statistics. Probab Eng Inf Sci 25:187–204
Torrado N, Lillo RE, Wiper MP (2012) Sequential order statistics: ageing and stochastic orderings. Methodol Comput Appl Probab 14:579–596
Xie H, Hu T (2010) Some new results on multivariate dispersive ordering of generalized order statistics. J Multivar Anal 101:964–970
Zhuang W, Hu T (2007) Multivariate stochastic comparisons of sequential order statistics. Probab Eng Inf Sci 21:47–66
Acknowledgements
The first author sincerely acknowledges the financial support received from UGC, Govt. of India. The work of the second author was supported by IIT Jodhpur, India, while the work of the last author was supported by the Natural Sciences and Engineering Research Council of Canada through an individual discovery grant.
Appendix
Before presenting the proofs, we first introduce some acronyms used exclusively here: CIS, conditionally increasing in sequence; ILR, increasing likelihood ratio; IFR, increasing failure rate; DFR, decreasing failure rate; DRFR, decreasing reverse failure rate.
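For ease of reference, we also recall the univariate stochastic orders appearing in the proofs (see Shaked and Shanthikumar 2007); here \(X\) and \(Y\) are non-negative random variables with distribution functions \(F\) and \(G\), survival functions \(\bar{F}\) and \(\bar{G}\), and density functions \(f\) and \(g\):

```latex
\begin{align*}
X \le_{st} Y &\iff \bar{F}(t) \le \bar{G}(t) \ \text{for all } t;\\
X \le_{hr} Y &\iff \bar{G}(t)/\bar{F}(t) \ \text{is increasing in } t;\\
X \le_{rh} Y &\iff G(t)/F(t) \ \text{is increasing in } t;\\
X \le_{lr} Y &\iff g(t)/f(t) \ \text{is increasing in } t;\\
X \le_{disp} Y &\iff F^{-1}(\beta) - F^{-1}(\alpha) \le G^{-1}(\beta) - G^{-1}(\alpha)
\ \text{for all } 0 < \alpha \le \beta < 1.
\end{align*}
```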
Proof of Part(a) of Theorem 4.1
We have
Consequently, we have \(X^{\star }_{1:n+1} \le _{st} X^{\star }_{1:n}\). Again, from Remark 3.1, we have
for \(k = 2, \ldots , n\). As \(\bar{F}\left( \cdot \right) \) is a decreasing function, we get \(P(X^{\star }_{k:n} > x_k |X^{\star }_{1:n}= x_1, \) \(\ldots , X^{\star }_{k-1:n}= x_{k-1})\) to be continuous and increasing in \(x_{k-1} >0\). Then, \(\left( X^{\star }_{1:n}, \ldots ,X^{\star }_{n:n} \right) \) is CIS. Thus, in view of Theorem 6.B.4 of Shaked and Shanthikumar (2007), we only need to show that
for all \(t>0\), and \(i=2, \ldots , n\). Now, from (A.1), we get
for all \(t \ge s>0,\) where the inequality follows from the fact that \(\bar{F}\left( \cdot \right) \) is a decreasing function. Further, for \(t < s\), (A.2) is trivially true. Hence, the required result. \(\square \)
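To illustrate part (a) in the simplest special case, suppose all component lifetimes are exponential, so that \(X^{\star}_{1:n}\) is the minimum of \(n\) independent exponentials. The following sketch (with hypothetical rates, not taken from the paper) verifies numerically the survival-function inequality behind \(X^{\star}_{1:n+1} \le_{st} X^{\star}_{1:n}\):

```python
import math

# Hedged illustration in the simplest special case: exponential component
# lifetimes.  The rates below are hypothetical and not taken from the paper.
# With F_j^{(1)} exponential with rate lam_j, the first sequential failure
# X*_{1:n} is the minimum of n independent exponentials, hence exponential
# with rate equal to the sum of the lam_j's.

def survival_first_failure(rates, t):
    """P(X*_{1:n} > t) when the n components are Exp(rates[0]), ..., Exp(rates[n-1])."""
    return math.exp(-sum(rates) * t)

rates = [0.5, 1.0, 1.5, 2.0]   # (n+1)-component system; the n-system uses rates[:3]
n = 3

# Usual stochastic order X*_{1:n+1} <=_st X*_{1:n}: the survival function
# of the larger system never exceeds that of the smaller one.
grid = [0.1 * k for k in range(1, 51)]
assert all(
    survival_first_failure(rates, t) <= survival_first_failure(rates[:n], t)
    for t in grid
)
```

Adding a component can only decrease the survival function of the first failure time, which is exactly the usual stochastic order conclusion of part (a) in this special case.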
Proof of Part(b) of Theorem 4.1
Let \(r_{j}^{(i)}(\cdot )\) be the hazard rate function of \(F_{j}^{(i)}\), \(j=1,\ldots , n-i+2\), \(i=1,\ldots , n+1\). Further, let \(\eta _{\cdot | \cdot } \left( \cdot \right) \) and \(\lambda _{\cdot | \cdot } \left( \cdot \right) \) be the multivariate conditional hazard rate functions of \(\left( X^{\star }_{1:n+1}, \ldots ,X^{\star }_{n:n+1} \right) \) and \(\left( X^{\star }_{1:n}, \ldots ,X^{\star }_{n:n} \right) \), respectively. Note that \(X^{\star }_{1:n} \le \ldots \le X^{\star }_{n:n} \). Then, from Remark 3.1 and (6.C.2) of Shaked and Shanthikumar (2007), we have
where \(I = \left\{ 1, \ldots , i \right\} \) for some i. Similarly, we have
where \(I = \left\{ 1, \ldots , i \right\} \) and \(J = \left\{ i+1, \ldots , m \right\} \), for some \(m\ge i\) and \(i=0, 1, \ldots , n-1\). In view of (6.D.13) of Shaked and Shanthikumar (2007), to prove the required result, it suffices to show that
where \(I = \left\{ 1, \ldots , i \right\} \) and \(J = \left\{ i+1, \ldots , m \right\} \), for \(0 \le i \le n-1\), \(\varvec{s}_I \le \varvec{t}_I \le u\varvec{e}\) and \(\varvec{s}_J \le u\varvec{e}\). Let us consider the following two cases.
Case I: Let \(m>i\). As \(\lambda _{k | I}\left( u | \varvec{t}_I\right) = 0, \text { for } k \ge m+1\), (A.5) holds readily.
Case II: Let \(m=i\). Then,
where the inequality follows from the fact that \(r_{n-m+1}^{(m+1)}\left( u\right) \) is positive in \(u>0\). Further,
\(\eta _{k | I \cup J}\left( u | \varvec{s}_{I \cup J}\right) = 0= \lambda _{k | I}\left( u | \varvec{t}_I\right) , \text { for } k>m+1\).
Thus, (A.5) holds, and hence the required result. \(\square \)
Proof of Part(c) of Theorem 4.1
In view of Lemma 3.1, the result holds if
for all \(x_1 \le \ldots \le x_n\) and \(y_1 \le \ldots \le y_n\). So, to prove the required result, it suffices to show that the above inequality holds on the set \(E_1 = \{ 1 \le i \le n-1:x_i \ge y_i \}\). Now, consider the set \(E_1\). Then, (A.6) reduces to
Now, from the condition that “\(F_i \le _{hr} F_{i+1}\), \(i=1,\ldots , n-1\)”, we get
Again, from the condition that “\( \alpha _{n-i+2}^{(i)} \ge \alpha _{n-i+1}^{(i+1)} \), for \( i=1, \ldots , n-1\)”, we get
Upon using (A.8) and (A.9), we obtain (A.7). Hence, the required result. \(\square \)
Proof of Part(a) of Theorem 4.2
From the definition of SOS, we have \(X^{\star }_{1:n} \le _{st} X^{\star }_{2:n}\). Again, from Remark 3.1, we have
for \(k = 2, \ldots , n\). As \(\bar{F} \left( \cdot \right) \) is a decreasing function, we have \(P(X^{\star }_{k:n} > x_k |X^{\star }_{1:n}= x_1, \) \(\ldots , X^{\star }_{k-1:n}= x_{k-1})\) to be continuous and increasing in \(x_{k-1} >0\). Then, \(\left( X^{\star }_{1:n}, \ldots ,X^{\star }_{n:n} \right) \) is CIS. Thus, in view of Theorem 6.B.4 of Shaked and Shanthikumar (2007), we only need to show that
for all \(t>0\), and \(i=2, 3, \ldots , n-1\). Now, from (A.10), we get
for all \( t \ge s>0,\) where the inequality follows from the facts that \(\bar{F} \left( \cdot \right) \) is a decreasing function and \(F_j^{(i)} \le _{hr} F_{j}^{(i+1)}\), for \(j=1, \ldots , n-i\), \(i=2, \ldots , n-1\). Further, for \(t < s\), (A.11) holds readily, and hence the required result. \(\square \)
Proof of Part(b) of Theorem 4.2
Let \(r_{j}^{(i)}(\cdot )\) be the hazard rate function of \(F_{j}^{(i)}\), for \(j=1,\ldots , n-i+1\), \(i=1,\ldots , n\). Further, let \(\eta _{\cdot | \cdot } \left( \cdot \right) \) and \(\lambda _{\cdot | \cdot } \left( \cdot \right) \) be the multivariate conditional hazard rate functions of \(\left( X^{\star }_{1:n}, \ldots ,X^{\star }_{n-1:n} \right) \) and \(\left( X^{\star }_{2:n}, \ldots ,X^{\star }_{n:n} \right) \), respectively. Note that \(X^{\star }_{1:n} \le \ldots \le X^{\star }_{n:n}\). Then, from Remark 3.1 and (6.C.2) of Shaked and Shanthikumar (2007), we have
where \(I = \left\{ 1, \ldots , i \right\} \) for some i. Similarly, we have
where \(I = \left\{ 1, \ldots , i \right\} \) and \(J = \left\{ i+1, \ldots , m \right\} \), for some \(m\ge i\) and \(i=0, 1, \ldots , n-2\). In view of (6.D.13) of Shaked and Shanthikumar (2007), to prove the required result, it suffices to show that
where \(I = \left\{ 1, \ldots , i \right\} \) and \(J = \left\{ i+1, \ldots , m \right\} \), for \(0 \le i \le n-2\), \(\varvec{s}_I \le \varvec{t}_I \le u\varvec{e}\) and \(\varvec{s}_J \le u\varvec{e}\). Let us consider the following three cases.
Case I: Let \(i\ge 1\) and \(m>i\). As \(\lambda _{k | I}\left( u | \varvec{t}_I\right) = 0, \text { for } k \ge m+1\), (A.14) holds readily.
Case II: Let \(i\ge 1\) and \(m=i\). Then,
where the first inequality follows from the assumption that “\(F_{j}^{(m+1)} \le _{hr} F_{j}^{(m+2)}\), \(j=1, \ldots , n-m-1\), \(m=1, \ldots , n-2\)”. Further,
\( \eta _{k | I \cup J}\left( u | \varvec{s}_{I \cup J}\right) = 0= \lambda _{k | I}\left( u | \varvec{t}_I\right) , \text { for } k>m+1. \) Thus, (A.14) holds.
Case III: Let \(i = 0\). Then, (A.14) can be written as \( \eta _{k | J}\left( u | \varvec{s}_{ J}\right) \ge \lambda _{k | \emptyset }\left( u | \varvec{t}_{ \emptyset }\right) .\) Note that \( \lambda _{k | \emptyset }\left( u | \varvec{t}_{ \emptyset }\right) = 0\), for \(k\ge 2\). Thus, we only need to show that \( \eta _{1 | \emptyset }\left( u | \varvec{s}_{ \emptyset }\right) \ge \lambda _{1 | \emptyset }\left( u | \varvec{t}_{ \emptyset }\right) ,\) or equivalently, \(X^{\star }_{1:n} \le _{hr} X^{\star }_{2:n}\). From Remark 3.1, we have
for all \(t>0\). Now, from the given condition that “\(F_{j}^{(1)} \;\le _{hr } \; F_{j}^{(2)}\), \(j=1,\ldots , n-1\)”, we get \( {{r}_{X^{\star }_{1:n}}\left( t\right) }/{ \sum _{j=1}^{n-1} r_{j}^{(2)} \left( t\right) } \ge 1 \text { for all } t>0.\) Furthermore, we have
Consequently, from (A.15), we get \( {r}_{X^{\star }_{1:n}}\left( t\right) \; \ge \; {r}_{X^{\star }_{2:n}}\left( t\right) \) for all \(t>0\), and thus \(X^{\star }_{1:n} \le _{hr} X^{\star }_{2:n}\), so that (A.14) follows. Hence, the required result. \(\square \)
Proof of Part(c) of Theorem 4.2
Note that the multivariate likelihood ratio order is closed under marginalization. So, to prove the required result, it suffices to show that \(\left( 0, X^{\star }_{1:n}, \ldots ,X^{\star }_{n-1:n} \right) \le _{lr} \left( X^{\star }_{1:n}, \ldots ,X^{\star }_{n:n} \right) \), which holds, in view of Lemma 3.1, if
for all \(x_2 \le \ldots \le x_n\) and \(y_1 \le \ldots \le y_n\). Let \(E_2 = \{ 1 \le i \le n-2:x_{i+1} \ge y_{i+1} \}\). Note that the above inequality follows if it holds on \(E_2\). Given \(E_2\), (A.16) reduces to
Now, from the condition that “\(\left( { \sum \limits _{j=1}^{n-i+1} \alpha _{j}^{(i)} - \sum \limits _{j=1}^{n-i} \alpha _{j}^{(i+1)} }-1 \right) \) is positive and decreasing in \(i \in \{1,\ldots , n-1\}\)”, we get \( { \sum \limits _{j=1}^{n-i+1} \alpha _{j}^{(i)} - \sum \limits _{j=1}^{n-i} \alpha _{j}^{(i+1)} } \ge \sum \limits _{j=1}^{n-i} \alpha _{j}^{(i+1)} - \sum \limits _{j=1}^{n-i-1} \alpha _{j}^{(i+2)} \ge 1\) and \(\sum \limits _{j=1}^{n-i} \alpha _{j}^{(i+1)} \ge 1\), for \( i=1, \ldots , n-2\). Further, by using these along with the condition that “\( F_{i} \le _{hr} F_{i+1} \)”, we get
for all \(x_{i+1} \ge y_{i+1} > 0\), for \( i=1, \ldots , n-2\). Again, from the condition that “\(\left( \bar{F}_{i+1}(u)\right) ^2/\bar{F}_{i}(u) \bar{F}_{i+2}(u)\) is increasing in \(u>0\)”, we get
for all \(x_{i+1} \ge y_{i+1} > 0\). Further, we have the assumption that “\( F_{i} \le _{lr} F_{i+1} \), for \( i=1, \ldots , n-1\)”. Finally, upon combining this with (A.18), (A.19) and (A.20), we get (A.17), and hence the required result. \(\square \)
Proof of Part(a) of Theorem 4.3
We have
where
Note that \(\bar{F}_{j}^{(2)}(t) \le {\bar{F}_{j}^{(2)}(t)}/{\bar{F}_{j}^{(2)}(z)}\), for all \( 0 <z \le t\). This, together with the assumption that “\(F_j^{(1)} \le _{st} F_{j}^{(2)}\), \(j=1, \ldots , n\)”, implies that
which further implies that \(X^{\star }_{1:n} \le _{st} X^{\star }_{2:n+1}\). Again, from Remark 3.1, we have
for \(k = 2, \ldots , n\). As \(\bar{F}\left( \cdot \right) \) is a decreasing function, we get \(P(X^{\star }_{k:n} > x_k |X^{\star }_{1:n}= x_1, \) \(\ldots , X^{\star }_{k-1:n}= x_{k-1})\) to be continuous and increasing in \(x_{k-1} >0\). Then, \(\left( X^{\star }_{1:n}, \ldots ,X^{\star }_{n:n} \right) \) is CIS. Thus, in view of Theorem 6.B.4 of Shaked and Shanthikumar (2007), we only need to show that
for all \(t>0\), and \(i=2, \ldots , n\). Now, from (A.22), we get
\( \text {for all } t \ge s>0, \) where the inequality follows from the facts that \(\phi \left( \cdot \right) \) is a decreasing function and \(F_j^{(i)} \le _{hr} F_{j}^{(i+1)}\), for \(j=1, \ldots , n-i+1\), \(i=2, \ldots , n\). Hence, (A.23) holds for \(t \ge s>0\). Further, for \(t < s\), (A.23) holds readily, and hence, the required result. \(\square \)
Proof of Part(b) of Theorem 4.3
Let \(r_{j}^{(i)}(\cdot )\) be the hazard rate function of \(F_{j}^{(i)}\), for \(j=1,\ldots , n-i+2\), \(i=1,\ldots , n+1\). Further, let \(\eta _{\cdot | \cdot } \left( \cdot \right) \) and \(\lambda _{\cdot | \cdot } \left( \cdot \right) \) be the multivariate conditional hazard rate functions of \(\left( X^{\star }_{1:n}, \ldots ,X^{\star }_{n-1:n} \right) \) and \(\left( X^{\star }_{2:n+1}, \ldots ,X^{\star }_{n+1:n+1} \right) \), respectively. Note that \(X^{\star }_{1:n} \le \ldots \le X^{\star }_{n:n}\). Then, from Remark 3.1 and (6.C.2) of Shaked and Shanthikumar (2007), we have
where \(I = \left\{ 1, \ldots , i \right\} \), for some i. Similarly, we have
where \(I = \left\{ 1, \ldots , i \right\} \) and \(J = \left\{ i+1, \ldots , m \right\} \), for some \(m\ge i\) and \(i=0, 1, \ldots , n-1\). In view of (6.D.13) of Shaked and Shanthikumar (2007), to prove the required result, it suffices to show that
where \(I = \left\{ 1, \ldots , i \right\} \) and \(J = \left\{ i+1, \ldots , m \right\} \), for \(0 \le i \le n-1\), \(\varvec{s}_I \le \varvec{t}_I \le u\varvec{e}\) and \(\varvec{s}_J \le u\varvec{e}\). Now, consider the following three cases.
Case I: Let \(i\ge 1\) and \(m>i\). As \(\lambda _{k | I}\left( u | \varvec{t}_I\right) = 0, \text { for } k \ge m+1\), (A.26) holds readily.
Case II: Let \(i\ge 1\) and \(m=i\). Then,
where the inequality follows from the assumption that “\(F_{j}^{(m+1)} \le _{hr} F_{j}^{(m+2)}\), for \(m=1, \ldots , n-1\)”. Further,
\(\eta _{k | I \cup J}\left( u | \varvec{s}_{I \cup J}\right) = 0= \lambda _{k | I}\left( u | \varvec{t}_I\right) , \text { for } k>m+1\). Thus, (A.26) holds.
Case III: Let \(i = 0\). Then, (A.26) can be equivalently written as
\(\eta _{k | J}\left( u | \varvec{s}_{ J}\right) \ge \lambda _{k | \emptyset }\left( u | \varvec{t}_{ \emptyset }\right) \). Note that \( \lambda _{k | \emptyset }\left( u | \varvec{t}_{ \emptyset }\right) = 0\), for \(k\ge 2\). Thus, we only need to show that \( \eta _{1 | \emptyset }\left( u | \varvec{s}_{ \emptyset }\right) \ge \lambda _{1 | \emptyset }\left( u | \varvec{t}_{ \emptyset }\right) ,\) or equivalently, \(X^{\star }_{1:n} \le _{hr} X^{\star }_{2:n+1}\). From Remark 3.1, we have
for all \(t>0\). Now, from the given condition that “\(F_{j}^{(1)} \;\le _{hr } \; F_{j}^{(2)}\), \(j=1,2,\ldots , n\)”, we get \({{r}_{X^{\star }_{1:n}}\left( t\right) }\Big /{ \sum \limits _{j=1}^{n} r_{j}^{(2)} \left( t\right) } \ge 1 \text { for all } t>0\). Furthermore, we have
Consequently, from (A.27), we get \( {r}_{X^{\star }_{1:n}}\left( t\right) \; \ge \; {r}_{X^{\star }_{2:n+1}}\left( t\right) \), for all \(t>0\), and so \(X^{\star }_{1:n} \le _{hr} X^{\star }_{2:n+1}\). Thus, (A.26) follows, and hence, the required result. \(\square \)
Proof of Part(c) of Theorem 4.3
Note that the multivariate likelihood ratio order is closed under marginalization. So, to prove the required result, it suffices to show that \(\left( 0, X^{\star }_{1:n}, \ldots ,X^{\star }_{n:n} \right) \le _{lr} \left( X^{\star }_{1:n+1}, \ldots ,X^{\star }_{n+1:n+1} \right) \), which holds, in view of Lemma 3.1, if
for all \(x_2 \le \ldots \le x_{n+1}\) and \(y_1 \le \ldots \le y_{n+1}\). Let \(E_3 = \{ 1 \le i \le n-1:x_{i+1} \ge y_{i+1} \}\). Then, the above inequality follows if it holds on \(E_3\). Given \(E_3\), (A.28) reduces to
Now, from the condition that “\(\big (\sum \limits _{j=1}^{n-i+1} \left( \alpha _{j}^{(i)} - \alpha _{j}^{(i+1)} \right) -1 \big )\) is positive and decreasing in \(i \in \{1,\ldots , n\}\)”, we get \({ \sum \limits _{j=1}^{n-i+1} \alpha _{j}^{(i)} - \sum \limits _{j=1}^{n-i} \alpha _{j}^{(i+1)} } \ge {\sum \limits _{j=1}^{n-i+1} \alpha _{j}^{(i+1)} - \sum \limits _{j=1}^{n-i} \alpha _{j}^{(i+2)} } \ge 1\), \(\sum \limits _{j=1}^{n-i+1} \alpha _{j}^{(i+1)} \ge 1\), for \( i=1, \ldots , n-1\), and \({ \alpha _{1}^{(n)} } \ge { \alpha _{1}^{(n+1)} } \ge 1\). These, together with the condition that “\( F_{i} \le _{hr} F_{i+1} \)”,
imply that
for all \(x_{i+1} \ge y_{i+1} > 0\), for \( i=1, \ldots , n-1\). Moreover, from the condition that “\(\left( \bar{F}_{i+1}(u)\right) ^2/\bar{F}_{i}(u) \bar{F}_{i+2}(u)\) is increasing in \(u>0\)”, we get
for all \(x_{i+1} \ge y_{i+1} > 0\). Additionally, we have the assumption that “\( F_{i} \le _{lr} F_{i+1} \), for \( i=1, \ldots , n\)”. Finally, upon combining this with (A.30), (A.31) and (A.32), we get (A.29). Hence, the required result. \(\square \)
Proof of Part(a) of Theorem 4.4
From Lemma 3.4, we have
Note that the likelihood ratio order is closed under increasing transformations. As F is continuous, \(D^{-1} \left( \cdot \right) \) is increasing and so the required result holds if and only if \(\sum _{j=1}^{i}W_{j,n} \le _{lr} \sum _{j=1}^{i+1}W_{j,n}\), for all \(i=1,\ldots , n-1\). Now, we have \(W_{j, n}\), \(j=1,\ldots ,i+1\), to be independent and non-negative random variables. Thus, in view of Theorem 1.C.9 of Shaked and Shanthikumar (2007), the result follows (i.e., the above inequality holds) provided \(W_{j,n}\) is ILR, for all \(j=1,\ldots , i+1\). From (3.9), we have
which implies that \({f'_{W_{j,n}} \left( t\right) } / {f_{W_{j,n}} \left( t\right) }\) is decreasing in \(t>0\) and so \(W_{j,n}\) is ILR, for all \(j=1, \ldots , i+1\). Hence, the required result. \(\square \)
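As a concrete instance of this step, if (3.9) yields an exponential density for \(W_{j,n}\), say with a hypothetical rate \(\theta_{j,n} > 0\), then the ILR property is immediate:

```latex
f_{W_{j,n}}(t) = \theta_{j,n}\, e^{-\theta_{j,n} t}, \quad t > 0,
\qquad\Longrightarrow\qquad
\frac{f'_{W_{j,n}}(t)}{f_{W_{j,n}}(t)} = -\theta_{j,n},
```

which is constant, hence (weakly) decreasing in \(t>0\); equivalently, \(\log f_{W_{j,n}}\) is concave, and so \(W_{j,n}\) is ILR.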
Proof of Part(b) of Theorem 4.4
From Lemma 3.4, we have
Note that the likelihood ratio order is closed under increasing transformations. As F is continuous, \(D^{-1} \left( \cdot \right) \) is increasing and so the required result holds if and only if \(\sum _{j=1}^{i}W_{j, n+1} \le _{lr} \sum _{j=1}^{i}W_{j, n} \), for all \(i=1,\ldots ,n\). Now, we have \(W_{j, n}\) and \(W_{j, n+1}\), \(j=1,\ldots ,i\), to be independent and non-negative random variables. Thus, in view of Theorem 1.C.9 of Shaked and Shanthikumar (2007), the result follows (i.e., the above inequality holds) provided \(W_{j,n}\) and \(W_{j,n+1}\) are ILR, and \(W_{j, n+1} \le _{lr} W_{j, n} \), for all \(j=1,\ldots , i\). From (3.9), we have
which implies that \({f'_{W_{j,n}} \left( t\right) } / {f_{W_{j,n}} \left( t\right) }\) is decreasing in \(t>0\) and so \(W_{j,n}\) is ILR, for all \(j=1, \ldots , i\). Similarly, we get \(W_{j,n+1}\) to be ILR, for all \(j=1, \ldots , i\). Again, the condition that “\( \alpha _{n-j+2}^{(j)}>0 \)” implies \(W_{j, n+1} \le _{lr} W_{j, n} \), for all \(j=1,\ldots , i\). Hence, the required result. \(\square \)
Proof of Part(c) of Theorem 4.4
From Lemma 3.4, we have
Note that the likelihood ratio order is closed under increasing transformations. As F is continuous, \(D^{-1} \left( \cdot \right) \) is increasing and so the required result holds if and only if \(\sum _{j=1}^{i}W_{j, n} \le _{lr} \sum _{j=1}^{i+1}W_{j, n+1} \), \(i=1,\ldots , n\). Now, we have \(W_{j, n}\) and \(W_{j, n+1}\), \(j=1,\ldots ,i+1\), to be independent and non-negative random variables. Thus, in view of Theorem 1.C.9 of Shaked and Shanthikumar (2007), the result follows (i.e., the above inequality holds) provided \(W_{j,n}\), \(W_{1,n+1}\) and \(W_{j+1,n+1}\) are ILR, and \(W_{j, n} \le _{lr} W_{j+1, n+1} \), for all \(j=1,\ldots , i\). From (3.9), we have
which implies that \({f'_{W_{j,n}} \left( t\right) } / {f_{W_{j,n}} \left( t\right) }\) is decreasing in \(t>0\) and so \(W_{j,n}\) is ILR, for all \(j=1, \ldots , i\). Similarly, we get \(W_{j,n+1}\) to be ILR, for all \(j=1, \ldots , i+1\). Again, from the condition that “\(\alpha _{l}^{(j)} \ge \alpha _{l}^{(j+1)}\), for \(l=1, \ldots , n-j+1\)”, we get \(W_{j, n} \le _{lr} W_{j+1, n+1} \), for all \(j=1,\ldots , i\). Hence, the required result. \(\square \)
Proof of Part(a) of Theorem 4.5
From Lemma 3.4, we have
As F is DFR, \(D^{-1} \left( \cdot \right) \) is increasing and convex and so in view of Theorem 3.B.10(a) of Shaked and Shanthikumar (2007), the result holds if \(\sum _{j=1}^{i}W_{j,n} \le _{disp} \sum _{j=1}^{i+1}W_{j,n}\) and \(\sum _{j=1}^{i}W_{j,n} \le _{st} \sum _{j=1}^{i+1}W_{j,n}\), for all \(i=1, \ldots , n-1\). Now, we have \(W_{j, n}\), \(j=1,\ldots ,i+1\), to be independent and non-negative random variables. Thus, in view of Theorem 1.1 of Khaledi and Kochar (2000), the result follows (i.e., the above inequality holds) provided \(W_{j,n}\) is ILR, for all \(j=1,\ldots ,i+1\). From (3.9), we have
which implies that \({f'_{W_{j,n}} \left( t\right) } / {f_{W_{j,n}} \left( t\right) }\) is decreasing in \(t>0\) and so \(W_{j,n}\) is ILR, for all \(j=1, \ldots , i+1\). Hence, the required result. \(\square \)
Proof of Part(b) of Theorem 4.5
From Lemma 3.4, we have
As F is DFR, \(D^{-1} \left( \cdot \right) \) is increasing and convex and so in view of Theorem 3.B.10(a) of Shaked and Shanthikumar (2007), the result holds if \(\sum _{j=1}^{i}W_{j, n+1} \le _{disp} \sum _{j=1}^{i}W_{j, n} \) and \(\sum _{j=1}^{i}W_{j, n+1} \le _{st} \sum _{j=1}^{i}W_{j, n} \), for all \(i=1,\ldots ,n\). Now, we have \(W_{j, n}\) and \(W_{j, n+1}\), \(j=1,\ldots ,i\), to be independent and non-negative random variables. Thus, in view of Theorem 1.1 of Khaledi and Kochar (2000), the result follows (i.e., the above inequality holds) provided \(W_{j,n}\) and \(W_{j,n+1}\) are ILR, \(W_{j, n+1} \le _{disp} W_{j, n} \) and \(W_{j, n+1} \le _{st} W_{j, n} \), for all \(j=1,\ldots , i\). From (3.9), we have
which implies that \({f'_{W_{j,n}} \left( t\right) } / {f_{W_{j,n}} \left( t\right) }\) is decreasing in \(t>0\) and so \(W_{j,n}\) is ILR, for all \(j=1, \ldots , i\). Similarly, we get \(W_{j,n+1}\) to be ILR, for all \(j=1, \ldots , i\). Again, from the condition that “\( \alpha _{n-j+2}^{(j)} >0\)”, we get
which further implies that \({F}^{-1}_{ W_{j,n} } \left( u\right) - {F}^{-1}_{ W_{j,n+1} } \left( u\right) \) is increasing in \(u \in (0,1)\). Thus, we get \(W_{j, n+1} \le _{disp} W_{j, n} \), for all \(j=1,\ldots , i\). Further, it can easily be verified that \(W_{j, n+1} \le _{st} W_{j, n} \), for all \(j=1,\ldots , i\). Hence, the required result. \(\square \)
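The quantile-difference criterion used here can be checked explicitly when, for instance, \(W_{j,n+1}\) and \(W_{j,n}\) are exponential with hypothetical rates \(\theta_{n+1} \ge \theta_{n} > 0\):

```latex
F^{-1}_{W_{j,n}}(u) - F^{-1}_{W_{j,n+1}}(u)
= \left( \frac{1}{\theta_{n}} - \frac{1}{\theta_{n+1}} \right)
\left( -\ln(1-u) \right),
```

which is increasing in \(u \in (0,1)\), since \(1/\theta_{n} \ge 1/\theta_{n+1}\) and \(-\ln(1-u)\) is increasing; hence \(W_{j,n+1} \le_{disp} W_{j,n}\) in this case.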
Proof of Part(c) of Theorem 4.5
From Lemma 3.4, we have
As F is DFR, \(D^{-1} \left( \cdot \right) \) is increasing and convex and so in view of Theorem 3.B.10(a) of Shaked and Shanthikumar (2007), the result holds if \(\sum _{j=1}^{i}W_{j, n} \le _{disp} \sum _{j=1}^{i+1}W_{j, n+1} \) and \(\sum _{j=1}^{i}W_{j, n} \le _{st} \sum _{j=1}^{i+1}W_{j, n+1} \), for all \(i=1,\ldots , n\). Now, we have \(W_{j, n}\), \(j=1,\ldots ,i\), and \(W_{j, n+1}\), \(j=1,\ldots ,i+1\), to be independent and non-negative random variables. Thus, in view of Theorem 1.1 of Khaledi and Kochar (2000), the result follows (i.e., the above inequality holds) provided \(W_{j,n}\), \(W_{1,n+1}\) and \(W_{j,n+1}\) are ILR, \(W_{j, n} \le _{disp} W_{j+1, n+1} \) and \(W_{j, n} \le _{st} W_{j+1, n+1} \), for all \(j=1,\ldots , i\). From (3.9), we have
which implies that \({f'_{W_{j,n}} \left( t\right) } / {f_{W_{j,n}} \left( t\right) }\) is decreasing in \(t>0\) and so \(W_{j,n}\) is ILR, for all \(j=1, \ldots , i\). Similarly, it can be shown that \(W_{j,n+1}\) is ILR, for all \(j=1, \ldots , i+1\). Again, from the condition that “\(\alpha _{l}^{(j)} \ge \alpha _{l}^{(j+1)}\), for \(l=1, \ldots , n-j+1\)”, we get
which further implies that \({F}^{-1}_{ W_{j+1,n+1} } \left( u\right) - {F}^{-1}_{ W_{j,n} } \left( u\right) \) is increasing in \(u \in (0,1)\). Again, this implies that \(W_{j, n} \le _{disp} W_{j+1, n+1} \), for all \(j=1,\ldots , i\). Further, it can easily be verified that \(W_{j, n} \le _{st} W_{j+1, n+1} \), for all \(j=1,\ldots , i\). Hence, the required result. \(\square \)
Proof of Part(a) of Theorem 4.6
Note that the hazard rate order is closed under increasing transformations, and so \(X^{\star }_{i:n} \le _{hr} X^{\star }_{i+1:n}\) holds if and only if \(D_{i+1}\left( X^{\star }_{i:n}\right) \le _{hr} D_{i+1}\left( X^{\star }_{i+1:n}\right) \); here, \(D_{i+1}\) is the cumulative hazard rate function of \(F_{i+1}\). Further, from Lemma 3.3, we have \(X^{\star }_{i+1:n} = D_{i+1}^{-1}\left( W_{ i+1,n } + D_{i+1}\left( X_{i:n}^{\star }\right) \right) \) and so the above inequality holds if and only if \(D_{i+1}\left( X^{\star }_{i:n}\right) \le _{hr} W_{i+1, n } + D_{i+1}\left( X_{i:n}^{\star }\right) \). Now, we have \(Y_l^{ (i+1) }\), \(l=1, \ldots , n-i\), and \(X^{\star }_{i:n}\) to be independent, which implies that \(W_{i+1,n}\) and \(D_{i+1}\left( X^{\star }_{i:n}\right) \) are independent. Moreover, \(W_{ i+1,n } \) is a non-negative random variable.
Thus, in view of Lemma 1.B.3 of Shaked and Shanthikumar (2007), the result follows (i.e., the above inequality holds) provided \(D_{i+1}\left( X_{i:n}^{\star }\right) \) is IFR. Now, we prove the statement that “\(D_{i+1}\left( X_{i:n}^{\star }\right) \) is IFR” through induction. First, we show that this statement is true for \(i=1\), i.e., \(D_{2}\left( X_{1:n}^{\star }\right) \) is IFR. From Definition 3.2, the cumulative hazard rate function of \(D_{2}\left( X^{\star }_{1:n}\right) \) is given by
Now, from the condition that “\(F_1 \le _c F_2\)” and the increasing property of \(D_2^{-1}\left( \cdot \right) \), we get
From this, we get \(\frac{\partial ^2}{\partial t^2}\left( \Delta _{D_{2}(X^{\star }_{1:n})} \left( t\right) \right) \ge 0\), for all \(t>0\), and so \(D_{2}\left( X^{\star }_{1:n} \right) \) is IFR. Thus, the statement is true for \(i=1\). Now, we assume the statement to be true for \(i=j-1\), i.e., \(D_{j}\left( X^{\star }_{j-1:n}\right) \) is IFR. Now, upon using this, we show that \(D_{j+1}\left( X^{\star }_{j:n}\right) \) is IFR. From Lemma 3.3, we get
\(D_{j+1}\left( X^{\star }_{j:n}\right) = \left( D_{j+1} \circ D_j^{-1}\right) \left( Q_{j,n}\right) \),
where \(Q_{j,n} = W_{j,n} + D_{j}\left( X^{\star }_{j-1:n}\right) \). Then, the cumulative hazard rate function of \(D_{j+1}\left( X^{\star }_{j:n}\right) \) is given by
Further, we have \(Y_l^{ (j) }\), \(l=1,\ldots , n-j+1\), and \(X^{\star }_{j-1:n}\) to be independent. This implies that \(W_{j,n}\) and \(D_{j}\left( X^{\star }_{j-1:n}\right) \) are independent. Again, by using (3.7), we get
which implies that \(W_{j,n}\) is IFR. Further, from the induction hypothesis, we have \(D_{j}\left( X^{\star }_{j-1:n}\right) \) to be IFR. Upon combining these two facts, we get \(Q_{j,n}\) to be IFR, which in turn implies that
\(\Delta _{Q_{j,n}} \left( t\right) \text { is increasing and convex in } t>0\).
Again, by proceeding in a manner similar to the case when \(i=1\), we can easily obtain
\(D_j \circ D_{j+1}^{-1} \left( t\right) \text { to be convex in }t>0\).
Finally, upon using these two facts in (A.33), we get \(\Delta _{D_{j+1}\left( X^{\star }_{j:n}\right) }\) to be convex in \(t>0\). Consequently, \(D_{j+1}\left( X^{\star }_{j:n}\right) \) is IFR and hence the statement gets proved for \(i=j\). Thus, by induction, we get \(D_{i+1}\left( X^{\star }_{i:n}\right) \) to be IFR, for all i. Hence, the required result. \(\square \)
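The convexity of \(D_j \circ D_{j+1}^{-1}\) required above can be verified directly for, say, Weibull baselines with hypothetical shape parameters \(a_j \ge a_{j+1} > 0\), for which \(D_i(t) = t^{a_i}\):

```latex
\left( D_j \circ D_{j+1}^{-1} \right)(t) = t^{\,a_j / a_{j+1}}, \quad t > 0,
```

which is convex since \(a_j / a_{j+1} \ge 1\); note that, for Weibull distributions, \(F_j \le_c F_{j+1}\) holds precisely when \(a_j \ge a_{j+1}\).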
Proof of Part(b) of Theorem 4.6
Note that the reverse hazard rate order is closed under increasing transformations, and so \(X^{\star }_{i:n} \le _{rh} X^{\star }_{i+1:n}\) holds if and only if \(D_{i+1}\left( X^{\star }_{i:n}\right) \le _{rh} D_{i+1}\left( X^{\star }_{i+1:n}\right) \); here, \(D_{i+1}\) is the cumulative hazard rate function of \(F_{i+1}\). Further, from Lemma 3.3, we have \(X^{\star }_{i+1:n} = D_{i+1}^{-1}\left( W_{ i+1,n } + D_{i+1}\left( X_{i:n}^{\star }\right) \right) \). Thus, the above inequality holds if and only if \(D_{i+1}\left( X^{\star }_{i:n}\right) \le _{rh} W_{i+1, n } + D_{i+1}\left( X_{i:n}^{\star }\right) \). Now, we have \(Y_l^{ (i+1) }\), \(l=1, \ldots , n-i\), and \(X^{\star }_{i:n}\) to be independent, which implies that \(W_{i+1,n}\) and \(D_{i+1}\left( X^{\star }_{i:n}\right) \) are independent. Moreover, \(W_{ i+1,n } \) is a non-negative random variable.
Thus, in view of Lemma 1.B.44 of Shaked and Shanthikumar (2007), the result follows (i.e., the above inequality holds) provided \(D_{i+1}\left( X_{i:n}^{\star }\right) \) is DRFR. Now, we prove the statement that “\(D_{i+1}\left( X_{i:n}^{\star }\right) \) is DRFR” through induction. First, we show that this statement is true for \(i=1\), i.e., \(D_{2}\left( X_{1:n}^{\star }\right) \) is DRFR. From Definition 3.2, the cumulative reverse hazard rate function of \(D_{2}\left( X^{\star }_{1:n}\right) \) is given by
which gives
for \(t>0\), where \(u= \left( D_1 \circ D_2^{-1}\right) \left( t\right) \). Now, from the fact that “\({e^{-a u}}/ {(1- e^{-a u})}\) is positive and decreasing in \(u>0\), for \(a>0\)”, we get
\( {\sum \limits _{j=1}^{n}\alpha _{j}^{(1)} e^{- \sum \limits _{j=1}^{n}\alpha _{j}^{(1)} u}} \Big /{\Big (1- e^{- \sum \limits _{j=1}^{n}\alpha _{j}^{(1)} u}\Big )}\) to be positive and decreasing in \(u>0.\)
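For completeness, the decreasing property invoked here can be checked by direct differentiation; for \(a>0\),

```latex
\frac{d}{du}\left(\frac{e^{-au}}{1-e^{-au}}\right)
  = \frac{-a\,e^{-au}\bigl(1-e^{-au}\bigr)-e^{-au}\cdot a\,e^{-au}}{\bigl(1-e^{-au}\bigr)^{2}}
  = \frac{-a\,e^{-au}}{\bigl(1-e^{-au}\bigr)^{2}} < 0, \qquad u>0,
```

so that \({e^{-au}}/{(1-e^{-au})}\) is positive and strictly decreasing in \(u>0\); taking \(a=\sum _{j=1}^{n}\alpha _{j}^{(1)}\) gives the display above.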
Again, from the condition that “\(F_1 \ge _c F_2\)” and the increasing property of \(D_2^{-1}\left( \cdot \right) \), we get
Upon using these two facts in (A.34), we get \(\frac{\partial ^2}{\partial t^2}\left( \tilde{\Delta }_{D_{2}(X^{\star }_{1:n})} \left( t\right) \right) \ge 0\) for all \(t>0\), and so \(D_{2}\left( X^{\star }_{1:n} \right) \) is DRFR. Thus, the statement is true for \(i=1\). Now, we assume the statement to be true for \(i=j-1\), i.e., \(D_{j}\left( X^{\star }_{j-1:n}\right) \) is DRFR. Now, upon using this, we show that \(D_{j+1}\left( X^{\star }_{j:n}\right) \) is DRFR. From Lemma 3.3, we get
\( D_{j+1}\left( X^{\star }_{j:n}\right) = \left( D_{j+1} \circ D_j^{-1}\right) \left( Q_{j,n}\right) ,\)
where \(Q_{j,n} = W_{j,n} + D_{j}\left( X^{\star }_{j-1:n}\right) \). Then, the cumulative reverse hazard rate function of \(D_{j+1}\left( X^{\star }_{j:n}\right) \) is given by
Further, we have \(Y_l^{ (j) }\), \(l=1,\ldots , n-j+1\), and \(X^{\star }_{j-1:n}\) to be independent. This implies that \(W_{j,n}\) and \(D_{j}\left( X^{\star }_{j-1:n}\right) \) are independent. Again, by using (3.7), we get
which implies that \(W_{j,n}\) is DRFR. Further, from the induction hypothesis, we have \(D_{j}\left( X^{\star }_{j-1:n}\right) \) to be DRFR. Upon combining these two facts, we get \(Q_{j,n}\) to be DRFR. Further, this implies that
\( \tilde{\Delta }_{Q_{j,n}} \left( t\right) \text { is decreasing and convex in } t>0. \)
Again, by proceeding in a manner similar to the case when \(i=1\), we can easily obtain
\( D_j \circ D_{j+1}^{-1} \left( t\right) \text { to be concave in }t>0. \)
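These two facts combine through a standard composition rule, sketched here under the assumption (implicit in the displays above) that \(\tilde{\Delta }_{D_{j+1}\left( X^{\star }_{j:n}\right) } = \tilde{\Delta }_{Q_{j,n}} \circ \left( D_j \circ D_{j+1}^{-1}\right) \): if g is decreasing and convex, and \(\varphi \) is increasing and concave (both twice differentiable), then

```latex
\left(g\circ \varphi \right)''(t)
  = g''\bigl(\varphi (t)\bigr)\,\bigl(\varphi '(t)\bigr)^{2}
    + g'\bigl(\varphi (t)\bigr)\,\varphi ''(t) \;\ge\; 0,
```

since \(g''\ge 0\), \(g'\le 0\) and \(\varphi ''\le 0\); hence \(g\circ \varphi \) is convex.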
Finally, upon using these two facts in (A.35), we get \(\tilde{\Delta }_{D_{j+1}\left( X^{\star }_{j:n}\right) }\) to be convex in \(t>0\). Consequently, \(D_{j+1}\left( X^{\star }_{j:n}\right) \) is DRFR, and so the statement is proved for \(i=j\). Thus, by induction, \(D_{i+1}\left( X^{\star }_{i:n}\right) \) is DRFR for all i. Hence, the required result. \(\square \)
Proof of Part(a) of Theorem 4.7
In view of Theorem 3.11 of Belzunce et al. (2001), it suffices to prove that \(X^{\star }_{1:n} \le _{st} Z^{\star }_{1:n}\) and \(\left( X^{\star }_{i:n} | X^{\star }_{i-1:n} = x \right) \le _{hr} \left( Z^{\star }_{i:n} | Z^{\star }_{i-1:n} = x \right) \), for all \(x>0\), for \(i=2, \ldots , n\). Now, from the definition of SOS, the reliability functions of \(X_{1:n}^{\star }\) and \(Z_{1:n}^{\star }\) are given by
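At this first stage, \(X_{1:n}^{\star }\) and \(Z_{1:n}^{\star }\) are series-system (minimum) lifetimes, so the reliability functions presumably take the product form (a sketch under the standard SOS first-stage representation, not the paper's exact display):

```latex
\bar{F}_{X_{1:n}^{\star }}\left( t\right)
  = \prod _{j=1}^{n}\bar{F}_{j}^{(1)}\left( t\right),
\qquad
\bar{F}_{Z_{1:n}^{\star }}\left( t\right)
  = \prod _{j=1}^{n}\bar{G}_{j}^{(1)}\left( t\right),
\qquad t>0,
```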
which, in view of the condition that “\({F}_{j}^{(1)} \le _{st} {G}_{j}^{(1)}\), for \(j=1,\ldots , n\)”, implies that \(\bar{F}_{X_{1:n}^{\star }}\left( t\right) \le \bar{F}_{Z_{1:n}^{\star }}\left( t\right) \) for \(t>0\). Again, from Remark 3.1, we get, for \(i=2,\ldots ,n\),
These imply that
where \({r}_j^{(i)}\) and \(h_j^{(i)}\) are the hazard rate functions of \(F_j^{(i)}\) and \(G_j^{(i)}\), respectively. From the condition that “\(F_j^{(i)} \le _{hr} G_j^{(i)}\), for all j and \(i=2,\ldots ,n\)”, we have \({r}_j^{(i)} \left( t \right) \ge {h}_j^{(i)} \left( t \right) \) for all \(t>0\), and so \(\left( X^{\star }_{i:n} | X^{\star }_{i-1:n} = x \right) \le _{hr} \left( Z^{\star }_{i:n} | Z^{\star }_{i-1:n} = x \right) \) for all \(x>0\), for \(i=2, \ldots , n\). Hence, the required result. \(\square \)
Proof of Part(c) of Theorem 4.7
In view of Theorem 3.13 of Belzunce et al. (2001), it is enough to prove that \(\left( X^{\star }_{i:n} | X^{\star }_{i-1:n} = x \right) \le _{hr} \left( Z^{\star }_{i:n} | Z^{\star }_{i-1:n} = x \right) \) and \(\left( X^{\star }_{i:n} | X^{\star }_{i-1:n} = x \right) \ge _{c} \left( Z^{\star }_{i:n} | Z^{\star }_{i-1:n} = x \right) \) for all \(x>0\), and \({r}_{Z^{\star }_{i+1:n} | Z^{\star }_{i:n} =x } (t) - {r}_{Z^{\star }_{i:n} | Z^{\star }_{i-1:n} =x } (t) \ge {r}_{X^{\star }_{i+1:n} | X^{\star }_{i:n} =x } (t) - {r}_{X^{\star }_{i:n} | X^{\star }_{i-1:n} =x } (t)\) for \(i=1,2, \ldots , n-1\). Now, from the definition of SOS, the reliability functions of \(X_{1:n}^{\star }\) and \(Z_{1:n}^{\star }\) are given by
These imply that
where r and h are the hazard rate functions of F and G, respectively. Now, from the conditions that “\({F} \le _{hr} {G}\)”, “\({F} \ge _{c} {G}\)” and “\( \sum _{j=1}^{n} {\alpha }_j^{(1)} \ge \sum _{j=1}^{n} {\beta }_j^{(1)}\)”, we get \({X_{1:n}^{\star }} \le _{hr} {Z_{1:n}^{\star }} \) and \({X_{1:n}^{\star }} \ge _{c} {Z_{1:n}^{\star }} \). Again, from Remark 3.1, we get, for \(i=2,\ldots ,n\),
which imply that
From the conditions that “\({F} \le _{hr} {G}\)”, “\({F} \ge _{c} {G}\)” and “\( \sum _{j=1}^{n-i+1} {\alpha }_j^{(i)} \ge \sum _{j=1}^{n-i+1} {\beta }_j^{(i)}\), for \(i=2,\ldots ,n\)”, we get \(\left( X^{\star }_{i:n} | X^{\star }_{i-1:n} = x \right) \le _{hr} \left( Z^{\star }_{i:n} | Z^{\star }_{i-1:n} = x \right) \) and \(\left( X^{\star }_{i:n} | X^{\star }_{i-1:n} = x \right) \ge _{c} \left( Z^{\star }_{i:n} | Z^{\star }_{i-1:n} = x \right) \), for all \(x>0\), for \(i=2, \ldots , n\). Further, from the conditions that “\({F} \le _{hr} {G}\)” and “\( \sum _{j=1}^{n-i+1} {\alpha }_j^{(i)} - \sum _{j=1}^{n-i} {\alpha }_j^{(i+1)} \ge \sum _{j=1}^{n-i+1} {\beta }_j^{(i)} - \sum _{j=1}^{n-i} {\beta }_j^{(i+1)} \ge 0\), for \(i=1,\ldots ,n-1\)”, we get \({r}_{Z^{\star }_{i+1:n} | Z^{\star }_{i:n} =x } (t) - {r}_{Z^{\star }_{i:n} | Z^{\star }_{i-1:n} =x } (t) \ge {r}_{X^{\star }_{i+1:n} | X^{\star }_{i:n} =x } (t) - {r}_{X^{\star }_{i:n} | X^{\star }_{i-1:n} =x } (t)\) for \(i=1, \ldots , n-1\). Hence, the required result. \(\square \)
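The final implication can be checked termwise; as a sketch, assume (consistently with the displays above) that \(r_{X^{\star }_{i:n} | X^{\star }_{i-1:n} =x }\left( t\right) = \bigl(\sum _{j=1}^{n-i+1} {\alpha }_j^{(i)}\bigr) r\left( t\right) \), and analogously for Z with \(\beta \) and h. Then

```latex
r_{X^{\star }_{i:n}\mid X^{\star }_{i-1:n}=x}(t)
 - r_{X^{\star }_{i+1:n}\mid X^{\star }_{i:n}=x}(t)
 = \Bigl(\sum _{j=1}^{n-i+1}\alpha _{j}^{(i)}
        - \sum _{j=1}^{n-i}\alpha _{j}^{(i+1)}\Bigr) r(t)
 \ge \Bigl(\sum _{j=1}^{n-i+1}\beta _{j}^{(i)}
        - \sum _{j=1}^{n-i}\beta _{j}^{(i+1)}\Bigr) h(t)
 = r_{Z^{\star }_{i:n}\mid Z^{\star }_{i-1:n}=x}(t)
 - r_{Z^{\star }_{i+1:n}\mid Z^{\star }_{i:n}=x}(t),
```

since \(r\left( t\right) \ge h\left( t\right) \ge 0\) (from \(F \le _{hr} G\)) and the bracketed coefficients are ordered and non-negative by assumption.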
Proof of Theorem 4.8
Let
where \(U_j^l,\; j=1,\ldots ,n-l+1,\; l=1,\ldots ,n\), are the same as in Lemma 3.2. Now, from Lemma 3.4, we can write
Note that the likelihood ratio order is closed under increasing transformations. As F is continuous, \(D^{-1} \left( \cdot \right) \) is increasing and so the required result holds if and only if \(\sum _{j=1}^{i}W_{j, n} \le _{lr} \sum _{j=1}^{i}A_{j, n} \). Now, we have \(W_{j, n}\) and \(A_{j, n}\), \(j=1,\ldots ,i\), to be independent and non-negative random variables. Thus, in view of Theorem 1.C.9 of Shaked and Shanthikumar (2007), the result follows (i.e., the above inequality holds) provided \(W_{j,n}\) and \(A_{j,n}\) are ILR, and \(W_{j, n} \le _{lr} A_{j, n} \), for all \(j=1,\ldots , i\). From (3.9), we have
\( {f'_{W_{j,n}} \left( t\right) } /{f_{W_{j,n}} \left( t\right) } = -\sum _{l=1}^{n-j+1} \alpha _{l}^{(j)}, \;t>0,\)
which implies that \({f'_{W_{j,n}} \left( t\right) } / {f_{W_{j,n}} \left( t\right) }\) is decreasing in \(t>0\) and so \(W_{j,n}\) is ILR. Similarly, it can be shown that \(A_{j,n}\) is also ILR, for all \(j=1, \ldots , i\). Again, we have the condition that “\(\varvec{\alpha }^{(j)} \overset{w}{\preceq } \varvec{\beta }^{(j)}\)”. Then, from Theorem 3.1 of Li and Li (2015), we obtain \(W_{j, n} \le _{lr} A_{j, n} \), for all \(j=1,\ldots , i\). Hence, the required result. \(\square \)
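The exponential structure in the last display can be illustrated numerically: since \(f'_{W_{j,n}}/f_{W_{j,n}}\) is constant, \(W_{j,n}\) is exponential with rate \(\sum _{l} \alpha _{l}^{(j)}\), and similarly \(A_{j,n}\) has rate \(\sum _{l} \beta _{l}^{(j)}\). A minimal sketch (with purely illustrative rates, not values from the paper) checking that when the rates are ordered, the likelihood ratio is increasing, which is the defining property of \(W_{j,n} \le _{lr} A_{j,n}\):

```python
import math

# Illustrative rates: lam_w stands for sum(alpha^{(j)}) and lam_a for
# sum(beta^{(j)}); we take lam_w >= lam_a, the ordering the comparison needs.
lam_w, lam_a = 3.0, 2.0

def lr(t):
    """Likelihood ratio f_A(t) / f_W(t) for the two exponential densities."""
    f_w = lam_w * math.exp(-lam_w * t)   # density of W at t
    f_a = lam_a * math.exp(-lam_a * t)   # density of A at t
    return f_a / f_w                     # = (lam_a/lam_w) * exp((lam_w - lam_a) * t)

grid = [0.01 * k for k in range(1, 500)]
ratios = [lr(t) for t in grid]

# Strictly increasing likelihood ratio on the grid, i.e. W <=_lr A.
assert all(a < b for a, b in zip(ratios, ratios[1:]))
print("f_A / f_W is increasing: W <=_lr A holds for these rates")
```

The closed form \((\lambda _A/\lambda _W)\,e^{(\lambda _W-\lambda _A)t}\) makes the monotonicity obvious whenever \(\lambda _W \ge \lambda _A\); the numerical check is just a sanity pass over a grid.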
Cite this article
Sahoo, T., Hazra, N.K. & Balakrishnan, N. Multivariate stochastic comparisons of sequential order statistics with non-identical components. Stat Papers (2024). https://doi.org/10.1007/s00362-024-01558-w